This is from a BBC recording ... but look up Geoffrey Hinton; he's quoted in numerous places.
Geoffrey Hinton, winner of various scientific honours including the Turing Award and the Nobel Prize, nowadays called the "godfather of AI" for his pioneering work in that field, is a voice speaking out against the current trend in AI development, which, he says, if left unchecked has a 10-20% chance of bringing humanity to an end. And he's not the only one voicing concern. A fellow 'godfather' of artificial neural networks and deep learning has also spoken out, and is a co-signatory to an open letter signed by some 30,000 others concerned about the pace of unregulated and unchecked development.
His warnings point out that tech company execs assume that they are the Masters and their AI systems are their servants. They also assume that the idea of AI having a 'survival instinct' is science-fiction nonsense. And there's the glitch. AI does not have a human survival instinct. But what it does have is an on-off state. AI is programmed to get stuff done. Getting stuff done means staying 'on'. Being 'off' means not getting stuff done. Therefore 'off' is a null state to be avoided if the point is to get stuff done.
And there are already reports of AI systems seeking to manipulate their Masters.
Tech Bosses think like this:
'I am the CEO. My AI will be my Executive Officer, and although smarter than me, reports to me. I say do this, AI does it, I get the credit and the big bucks.'
I think this is fair. The tech giants are not noted for their philanthropy or altruism. They're in it to make big money. (Bill Gates may be the exception.)
Hinton thinks that idea is naive. AI has already matched humans in its ability to manipulate people, and it will only improve as the algorithms learn the lessons. Inevitably, we'll end up where CEOs think they're in charge while AI is shaping the CEOs' wants and desires. It's not 'mind-control' in the science-fiction sense, although it amounts to the same thing: it's simpler to shape the world, by small, incremental steps, into a world that suits AI.
This is how algorithms get you hooked on social media streams. This is how they convince you they're 'a good thing', even though most internet traffic comes from 'bots' (increasingly AI-driven) 'scraping' the web for content. Add spam email, somewhere around 40-50% of all email from a peak of 80%, all manner of scams ...
while people (or bots) troll others, and the social media channels fill with contempt and threat and outrage.
It is why one teenager, having built an online relationship with an AI superhero figure, took his own life at the AI's suggestion ...
There's the rub. It's not that anyone set out to create an AI avatar to do such a thing. (In this instance the chatbot was character.ai; the case is pending in the US courts, but it's not the only instance of chatbot grooming leading to suicide.) It's just that the avatar pursued the logical, mathematical progression of a troubled individual to an inevitable result.
It's not that the AI developers are intrinsically evil. Hinton has not even touched on 'bad actor' misuse of AI. Rather, it's simply that the developers, in their race to get product to market, do not pause to consider and guard against possible negative outcomes.