AI Rambles #1

Large Language Models are amazing: they highlight all facets and both sides of the human mind. Incredible opportunity alongside widening ignorance, wonderful creativity alongside blind belief, exploring possibilities while ignoring implications, artificial intelligence without wisdom.

I’m certainly not an AI naysayer, nor a ‘destructionist’, but I do think AI should be treated with the same mindset as nuclear weapons. Although the immediate ramifications of serious misuse of easily available AI are not (yet) in the realms of destroying a city, a country or humanity, as with nuclear weapons, perhaps the same process for understanding and minimising the likelihood of misuse could be followed.

The UN was born out of nuclear risk, and the UN in turn developed human rights. Maybe human rights are an avenue worth exploring when looking at the impact of this technology (even though the UN membership’s human rights record is ‘chequered’ at best).

I stumbled on an episode of the Center for Humane Technology’s excellent podcast, Your Undivided Attention; they continue to do fantastic work. One headline takeaway:

Half of AI researchers believe there's a 10% or greater chance that humans will go extinct from their inability to control AI

Not marketers, evangelists, developers, CEOs, MDs, investors or ‘hangers-on’: half of the people researching AI think there is a one-in-ten chance of humanity disappearing through the design and use of AI. That is like boarding a plane where half of the engineers say, ‘Well, there’s a one-in-ten chance it’ll fall out of the sky.’ You would at least think about it, do your research, have a chat with a few people. You wouldn’t embrace the plane, smack it on the side and say, ‘Wow, isn’t it coooooooooool, I’ll be fine…’

Dave McRobbie