"Brain enthusiasts" in AI Safety

TL;DR: If you're a student of cognitive science or neuroscience and are wondering whether it can make sense to work in AI Safety, this guide is for you! (Spoiler alert: the answer is "maybe yes").
Thanks for such a nice intro to AI Safety research. I'm sure you've come across the recent news of a Google engineer claiming their language model (LaMDA) was sentient. There's been a lot written about it, but I was wondering: are there attempts at devising a new Turing test to address this? Is this part of AI Safety research? Cheers,
I think it's worth linking in this post a good resource for people in neuroscience to get started with ML and deep learning: Neuromatch Academy. Their materials are all open source, freely available, and of good quality.
Also, why do you say "grudgingly" when mentioning Redwood Research? Is there some drama I'm unaware of?
"Brain enthusiasts" in AI Safety
Thanks for such a nice intro to AI Safety research. I'm sure you've come across the recent news of a Google engineer claiming their language model (LaMDA) was sentient. There's been a lot written about it but I was wondering if there are attempts at devising a new Turing test to address this? Is this a part of AI Safety research? Cheers,
I think it's worth linking in this post a good ressource for people in neuroscience to get started into ML and deep learning, it's the neuromatch academy. They are all open source and freely available and of good quality.
Also, why do you say "grudgingly" when mentionning Redwood Research? Is there a drama I'm unaware of?