The AI craze is proceeding apace. Alphabet (basically Google under another name) is replacing their search engine with the latest version of their AI, Google Gemini1, Microsoft is going all-in on AI in their latest hardware and software, while Apple is expected to follow suit at their upcoming WWDC. Welcome to a brave new world where our main advisors are basically black boxes whose inner workings are opaque even to their own creators. In the Dark Ages, sibyls, soothsayers and prophets at least had to act as if they really knew what they were predicting. Current AI doesn’t even have to do that, and we call it progress.
Big tech companies like OpenAI and Google are developing AI not with a specific end goal or use case in mind, but because it is supposed to be the next generation of computation, the upcoming paradigm shift in intelligence. They don’t know where this is going, they don’t know how to get there, let alone what to do once they arrive, but they know one thing for sure—they have to get there first. Whenever, wherever or whatever it is.
It’s the new meme, the magic incantation that drives Big Tech towards the next trillion-dollar valuation, the boat that nobody wants to miss: AI.
We’re seeing something similar with the previously hyped VR2 (Virtual Reality), where companies like Meta (previously Facebook), Microsoft (HoloLens), Oculus and Apple developed VR headsets with no killer application in mind. Oh, sorry, Meta thinks the ‘metaverse’—a kind of virtual world where people will supposedly live in the future—is the killer app. Maybe. But the demise of Second Life3 (remember them?) doesn’t bode well for that4.
So what is AI actually doing now? Mainly the following:
1. Replacing search engines;
2. Language translation;
3. Producing text via text prompts;
4. Producing art via text prompts (including deepfakes);
5. Producing basic code5;
6. Machine learning, for example for self-driving cars and folding proteins.
Basically, items 2 through 5 are tasks that humans already perform.
As to point 1: true, humans cannot search the internet without a search engine. But keep in mind that Google’s search engine has become utterly enshittified, while those of Bing, DuckDuckGo, Yahoo and others aren’t much better. I strongly recommend using Kagi6, which has no ads and thus no underlying incentive to become enshittified. Yes, you must pay if you perform more than 100 searches, but either you pay (I do), or you become the product, complete with the ensuing enshittification.
So in the near future, instead of typing your query into a search engine, you speak your questions out loud to a digital assistant that has now evolved into a unit we call AI (I have my doubts about the “intelligence” part, more on that later). But that doesn’t free you from enshittification: “Google AI search results are already getting ads”. Meet the new boss, same as the old boss7.
I am a writer who produces essays, reviews, travel blogs and fiction (without needing text prompts). There are many more like me. Similarly, there are many artists in the world, who can create fantastic, beautiful art (also mainly without text prompts).
Make no mistake: the likes of Google Bard (soon to become Google Gemini), ChatGPT-4 (higher versions sure to follow) and others so far cannot produce fiction, art or essays that come close to—let alone compete with—the best humanity has produced and is still producing. Yes, the AIs plagiarise human art that they’ve ruthlessly scraped from the internet without considering intellectual property8 (for the latest example, see the abuse of Scarlett Johansson’s voice by OpenAI). Yet writing something truly compelling, especially at lengths greater than a few paragraphs, is still way out of their reach.
Even the code they produce is basically gruntwork. Therefore, I still don’t believe the current versions of ‘Artificial Intelligence’ are truly intelligent9. They scrape the internet and their training data for examples that seem to fit the queries they get, but without any consideration of those examples’ veracity, applicability or meaning, let alone with anything approaching common sense.
Furthermore, nobody—and I do mean absolutely nobody—knows how they work. “Even the scientists who build AI can’t tell you how it works.” While there are a few scientists trying to peek into the black box of the neural networks powering the Large Language Models (LLMs) of our AIs, they are a tiny minority. The rest seem happy to proceed with a scattershot approach; that is, try a wild variety of techniques, borrow methods from human learning, or accidentally find a way forward, for example by forgetting to halt a training run that had already started overfitting. Such modifications are seen as progress, yet what is sorely missing is an overarching procedure, a methodical approach that envisions a pathway for leading an AI towards true intelligence.
Of course, this is impossible if our AIs are basically black boxes, meaning the current ‘throw-anything-at-the-wall-to-see-what-sticks’ approach is unlikely to lead to intelligence, or at least not to intelligence as we know it10.
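As an aside, the ‘forgetting to halt the training’ episode has a concrete counterpart in everyday practice: early stopping, where a run is halted once performance on held-out validation data stops improving, and where letting the run grind on past that point is precisely the kind of happy accident (reportedly behind the ‘grokking’ results) alluded to above. Below is a minimal, hypothetical sketch of that safeguard in plain numpy, fitting a deliberately over-parameterised polynomial to noisy data; every name and parameter here is illustrative, not taken from any real training pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: a noisy cubic, fitted with a degree-12 polynomial so the
# model has more than enough capacity to overfit the training noise.
x_train = rng.uniform(-1, 1, 30)
y_train = x_train**3 - x_train + rng.normal(0, 0.05, 30)
x_val = rng.uniform(-1, 1, 30)
y_val = x_val**3 - x_val + rng.normal(0, 0.05, 30)

def features(x, degree=12):
    # Polynomial feature matrix: columns are x^0, x^1, ..., x^degree.
    return np.vander(x, degree + 1, increasing=True)

X_train, X_val = features(x_train), features(x_val)
w = np.zeros(X_train.shape[1])

best_val, best_w, stale = np.inf, w.copy(), 0
for step in range(50000):
    # One step of plain gradient descent on the mean squared training error.
    grad = 2.0 * X_train.T @ (X_train @ w - y_train) / len(y_train)
    w -= 0.01 * grad

    val_loss = np.mean((X_val @ w - y_val) ** 2)
    if val_loss < best_val:
        best_val, best_w, stale = val_loss, w.copy(), 0
    else:
        stale += 1
    # Early stopping: halt once the validation loss has stopped improving.
    # Remove this guard and the run keeps grinding on -- the 'forgot to
    # halt' scenario described in the text.
    if stale > 2000:
        print(f"early stop at step {step}; best val loss {best_val:.4f}")
        break
```

The guard itself is trivial; the point is that halting (or not) is a deliberate methodological choice, which makes any discovery that depends on omitting it an accident rather than a method.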
Problem is, they’re trying to create a human type of understanding—because that is easier to market and delivers instant applications—in a unit that has not followed the same evolutionary path as we humans have. Very roughly speaking11, evolution delivered intelligence (and consciousness, which some consider a by-product) via the chain agency—sentience—(self)awareness—(self)consciousness. On top of that, this happened in an ecology where a huge variety of species interact with their highly complex environment. Our current AIs do not interact with each other, only with the wildly incongruous training sets provided by their programmers and with the wild west of wanton data we call the internet.
And that is part of the problem: in a balanced ecology, the most stupid species are weeded out. On the current internet, filled to bursting point with misinformation, idiotic conspiracy theories, wilful ignorance and other bullshit, stupidity seems to be a feature instead of a bug. The result: AIs behaving in racist ways when used for recruiting personnel, behaving like bigots and otherwise taking on the bad behavioural traits found en masse on the world wide web. You are what you eat, or: garbage in = garbage out. And what is the current solution? Feeding the AI even more garbage. On top of that, AIs make things up, which their programmers call ‘hallucinating’.
And still, many AI researchers expect that a sane intelligence can arise through interaction with an absolutely insane environment. Hint: that’s not going to happen unless we carefully curate the data we feed into our AIs. Right now, to paraphrase Erik Hoel, we are most likely creating ethically challenged entities at best and monsters at worst. If the current practices go on—and we have no reason to believe they won’t—the best we can hope for is that this horrible AI creature they’re developing remains non-sentient. If—dog forbid—one of these AIs becomes self-aware (or even self-conscious)12, then its creators are not the new Prometheus, but rather the new Frankenstein.
What can we poor onlookers do in the meantime? I’d recommend a few things:
Dump Google and switch to Kagi;
Support organisations like the AI Ethics Initiative and the Global Alliance for Ethical AI Innovation;
Be wary of any AI system13;
Maintain a healthy dose of skepticism and criticism while following current developments.
As it is, I suspect that research into consciousness will find a way to create a conscious entity well before the current AI research hype does. I’ll go even further: I expect that nuclear fusion will be developed before actual AI14—let alone the holy grail of AGI (Artificial General Intelligence)—if the disparate, incongruous and scattershot approach to AI development continues. Keeping in mind that actual nuclear fusion is always a decade away…
Author’s note: a new essay, hallelujah. Hoping to post more in the coming fortnight. Many thanks for reading, and stay tuned!
1. The follow-up to Google Bard.
2. Or AR (Augmented Reality).
3. Even if supposedly 200,000 people still use Second Life daily. In this modern world, that is considered a failure.
4. Even if I speculate about a killer app in AR in my novel “The Replicant, the Mole & the Impostor”. Well, an SF writer’s gotta speculate… ;-)
5. “The hard truth about AI? It might produce some better software”, via The Guardian.
6. Yes, I know that Kagi is developing its own AI version called “FastGPT”. I’ll need to check that one out when I have time.
7. As The Who used to have it.
8. It’s why I put a ‘cease-and-desist’ note on my Substack introduction page. It won’t stop them, but it gives me coverage in case of litigation.
9. Even if some researchers beg to differ in the Quanta Magazine article “New Theory Suggests Chatbots Can Understand Text”.
10. Star Trek homage: “It’s life, Jim, but not as we know it.”
11. I’m trying to keep this piece under 1,500 words… ;-)
12. Which, in the extremely unlikely case that it happens, will be by accident and not by design.
13. This includes the autopilots in so-called ‘self-driving’ cars.
14. Where the “I” indeed represents actual intelligence, not the extreme scheming it now performs.