TL;DR: I fine-tuned a large language model on my personal notes and embedded the resulting model in my everyday workflow. Personal experience, Roam Research, AI Safety.
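As a rough illustration of the TL;DR, here is a minimal sketch of fine-tuning a causal language model on a plain-text export of personal notes. The base model (`gpt2`), the Hugging Face `transformers`/`datasets` stack, the `notes.txt` export file, and the hyperparameters are all assumptions for the sake of a runnable example; the post does not specify the actual setup.

```python
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_name = "gpt2"  # stand-in base model; the post does not name the one actually used
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

# "notes.txt" stands in for a plain-text export of the personal notes (e.g. from Roam)
dataset = load_dataset("text", data_files={"train": "notes.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="notes-model",
        num_train_epochs=3,
        per_device_train_batch_size=2,
    ),
    # mlm=False gives plain causal-LM (next-token prediction) loss
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    train_dataset=tokenized,
)
trainer.train()
trainer.save_model("notes-model")
```

Once saved, the resulting model can be embedded in a workflow with something like `pipeline("text-generation", model="notes-model")`.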
> Every 18 months, the minimum IQ necessary to destroy the world drops by one point (Eliezer Yudkowsky)
The idea that a superintelligent AI could pretend to be an idiot is an interesting one (the recommended IQ gap between bosses and employees is ~18 points). Yarvin argues the scenario is impossible: any IQ above 118, i.e. more than that gap over the general public's average of 100, is already terrifying to them (and the same can be said of 145+ for the upper class).
Approaching from the other direction instead: for IAN to screw over humanity as a digital servant, all it needs is a comparable IQ of around 82, the same ~18-point gap downward (roughly the 10th percentile, or an average elementary-school dropout), to start messing with people's heads.
https://graymirror.substack.com/p/there-is-no-ai-risk?s=r
https://www.ribbonfarm.com/2010/04/14/the-gervais-principle-iii-the-curse-of-development/