I saw this clip of Elon Musk talking about AI and wondered what others think - are you looking forward to AI? Or do you find it concerning?
I tend to side with people like Elon Musk, Isaac Asimov and many others who genuinely fear AI. IMO, if we invent a hyper-optimizing, self-improving machine, we're most likely toast and will be used for bio-fuel without a second thought by the AI. It will likely view us as inferior, previous-generation bio-machines that need to be removed so the ecosystem can thrive (sadly, they might even be correct, given that we keep causing climate change) and to make way for the emergence of next-gen machines.
That being said, what we should actually fear much more, and fear today, is the murder of all creativity by mercilessly shovelling potentially creative minds into menial and bullsh*t jobs.
We techies can still make a huge difference and improve the world vastly, but (a) there is a lot of politics, inertia and vested interest preventing us from doing so, and (b) we actually have to have the free time, energy and motivation to work on these things. And the sad reality is that we want to care for our own health and well-being (and that of our families). Between that, work and some leisure time, separate free time for creative and world-changing endeavours seems to be a pipe dream…
So I'd say fear of AI is hugely overrated. Even with the relentless personal data harvesting, there is emerging evidence that the efficiency of manipulation by well-targeted ads is slowly diminishing.
IMO the current state of affairs is this: we are speeding head-first into a wall named "we will be completely stuck technologically for a long time unless the financial incentive systems change radically".
So far all we have is SAI, Specialized AI: basically AI trained on a specific task to answer a specific problem in a specific way. This kind of AI is not something to fear, and it is the AI that is everywhere nowadays.
SAI can of course be more generic within its domain, able to do more, but still only to the extent of what it was trained for; no more, no less. It does no thinking, nothing of the sort; it is just a pure floating-point matrix cruncher.
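To make the "floating point matrix cruncher" point concrete, here's a minimal sketch of what inference in a typical trained model boils down to. The shapes and weights are made up for illustration, not taken from any real system: once training is done, answering a query is just matrix multiplies and elementwise nonlinearities, with no reasoning anywhere.

```python
import numpy as np

# Toy "SAI": a tiny fixed-weight two-layer network (hypothetical shapes).
rng = np.random.default_rng(0)
W1 = rng.standard_normal((4, 3))   # layer-1 weights, fixed after training
W2 = rng.standard_normal((1, 4))   # layer-2 weights

def forward(x):
    h = np.maximum(0, W1 @ x)      # ReLU(W1 @ x): pure number crunching
    return W2 @ h                  # linear readout: the "answer" for this task

y = forward(np.array([1.0, 2.0, 3.0]))  # one number out, nothing more
```

The point of the sketch is that the same weights always produce the same output for the same input; the model can only do what its training baked into those matrices.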
SAI will absolutely, and I do mean absolutely, remove a lot of jobs, and will continue to do so at a growing pace until it plateaus for a long time. These are not jobs that are being 'replaced' with a different kind of job (like how car manufacturing created jobs that replaced horse upkeep); there is no replacement. We as humans need to adjust the economy to account for that, which is why I am for a Basic Income, and also why I am for automation of this style. Automation will happen sooner or later, and I'd prefer sooner rather than later.
SAI is not to be feared; it is a tool, an automation of 'thought' in the way that robots provided an automation of muscle. Sure, the automation of muscle reduced jobs, but those transitioned to more thought jobs (though mostly menial thought). However, there is no other place for us to go now that those are being automated. 'This' is the only part to fear, unless our economy transitions to a different model like Basic Income, and as long as capitalism reigns there will be little incentive to do that.
Now, the AI that most people actually 'fear' is GAI, General AI. This is an AI that is large enough, and has been fed enough data, that it is able to make even the most abstract relational connections between data (because it makes all connections and sees how strong the links are). We are not at this point, and I don't think we are anywhere near this point 'yet'. This is the AI that will be the General Solver, i.e. ask it a question and, if it's answerable within a human lifespan, it will be able to answer it (this is like the AI in "The Last Question" by Isaac Asimov, the short story, available as a legal free read at the supplied link). Even this I still don't think is something to fear; how we as humans use it is what could be feared. It could be used to create prosperity for all humanity (I'm personally for digitizing us and running us on a matrioshka brain processing unit; optimally, a human brain only needs about 12 W of power to simulate accurately in real time, and more than that could do it faster, though accounting for physical inefficiencies and power loss a higher figure is more likely, generally 20-48 W is expected to be achievable), or it could be used to put us all into a near-literal Class Based Hell. Regardless, such an AI is still not alive or conscious (though it could potentially present interfaces that seem to be).
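For a sense of scale, the wattage figures above can be turned into rough arithmetic. The Sun's total output is standard physics (~3.8 × 10^26 W); the per-brain wattages are this post's own speculative numbers, so treat the result as a back-of-the-envelope illustration only:

```python
# Back-of-the-envelope: how many simulated minds could a matrioshka brain
# host? Per-brain wattages are the speculative figures from the post above.
SOLAR_OUTPUT_W = 3.8e26        # approximate total power output of the Sun
WATTS_PER_BRAIN_LOW = 20       # optimistic per-brain estimate (speculative)
WATTS_PER_BRAIN_HIGH = 48      # pessimistic per-brain estimate (speculative)

minds_high = SOLAR_OUTPUT_W / WATTS_PER_BRAIN_LOW   # best case
minds_low = SOLAR_OUTPUT_W / WATTS_PER_BRAIN_HIGH   # worst case
print(f"roughly {minds_low:.1e} to {minds_high:.1e} simulated minds")
```

Either way the answer dwarfs the current human population by many orders of magnitude, which is the whole appeal of the idea.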
For now, we can work toward GAI by making SAIs that tackle specific problems, from designing the best computational processors for it to creating better training and processing models, and more. Regardless, it is still a while away, though whether "a while" means 10 years or 10,000 I don't know.
In short: no, AI is not to be feared; it is just another tool. How humanity adapts to that tool is what might be feared.
EDIT: As an aside, my favorite GAI-based story, other than the above "The Last Question" short story, is Ra (readable free online at the author's website at that link, or purchasable in various formats).
Agreed. There don't seem to be any of those magical "they'll just appear because the world will change" jobs coming up. As you said, previously we just shifted menial physical labour to machines, but now I really don't see how most displaced office workers can even make ends meet after their jobs get automated away…