Trust-maximizing AGI

TL;DR: During the AI Safety Camp, Karl and I developed the idea of the "trust-maximizer". This write-up makes the case for "trust" as a potentially desirable goal for advanced AI.