Here’s Sam Harris (my favezies, obvi) talking about AI safety and specifically how to protect ourselves from self-destruction, something he considers both super-terrifying and inevitable, even though most of us can’t quite bring ourselves to feel that way about it. Oh, and a quick swipe at Justin Bieber is included – what’s not to love?

So, the “intelligence explosion” is not just possible, it’s inevitable. If you don’t believe that, you must disagree with one of the following three assumptions:

  1. Intelligence is the product of information processing in physical systems. We’ve already built narrow intelligence that performs better than humans (hey, Siri!). Mere matter can give rise to general intelligence, because that’s what our brains do. “There’s just atoms in here”, Harris says, pointing to his head.

  2. We will continue to improve our intelligent machines. The rate of progress doesn’t matter; what matters is that we won’t stop.

  3. We are not near the summit of possible intelligence.

Speed alone is an advantage machines already have over us: they process information about a million times faster than we do. You can think of it as this equivalency: machines could be to us what we are to ants. So even if AI isn’t intentionally murderous toward humans, it might kill us accidentally. Or worse, it could cause us to destroy ourselves.

And what would we do while machines do all the work? What would happen to society and the economy as we know them?

Sam Harris sees this as the new arms race:

“This is a winner-take-all scenario. To be six months ahead of the competition here is to be 500,000 years ahead, at a minimum.”

But don’t worry, folks, because it’s at least 50 years away. Right? WRONG! If we know AI is 50 years away but have no clue how long it will take to figure out how to make sure it arrives safely, that’s a bit of a gamble.

“Another reason we’re told not to worry is that these machines can’t help but share our values because they will be literally extensions of ourselves. They’ll be grafted onto our brains, and we’ll essentially become their limbic systems. Now take a moment to consider that the safest and only prudent path forward, recommended, is to implant this technology directly into our brains. Now, this may in fact be the safest and only prudent path forward, but usually one’s safety concerns about a technology have to be pretty much worked out before you stick it inside your head.

“The deeper problem is that building superintelligent AI on its own seems likely to be easier than building superintelligent AI and having the completed neuroscience that allows us to seamlessly integrate our minds with it. And given that the companies and governments doing this work are likely to perceive themselves as being in a race against all others, given that to win this race is to win the world, provided you don’t destroy it in the next moment, then it seems likely that whatever is easier to do will get done first.”

Some smart people (including Elon Musk, Sam Altman, and folks from Google DeepMind, Microsoft, Facebook, and Amazon) have already started working on what they believe to be the antidote to our own suicidal potential: groups like OpenAI and the Partnership on AI hope to encourage AI safety by sharing progress openly, making research open source, and establishing best practices for the field. This, they hope, will steer us away from the path of self-destruction. Only time will tell!

What do you think? Are you worried, neutral, excited, or not thinking about AI safety at all? Let me know by dropping me a line at sabbatical@mashakrol.com, or leave a comment below. Curious about your perspective! 🙂

Join Me!

Get weekly updates on brain tech, fitness, work, and other fun! All summaries of learnings, zero spam.