Debates about artificial intelligence are often semantic debates. Artificial intelligence is not "far from existing"; what is still far off is artificial general intelligence (AGI): an artificial intelligence that functions like human intelligence instead of merely imitating it. Large language models still do not imitate human reasoning well enough and lack generalization.
However, several forms of artificial intelligence already exist. For example, large language models (LLMs) can now perform tasks that, until recently, required human intelligence (such as writing an essay to defend a thesis, summarizing content, critiquing, advising…).
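To make this concrete, here is a minimal sketch of asking an LLM to summarize a text through an API. It assumes the `openai` Python package and an API key set in the environment; the model name is only an example.

```python
# Minimal sketch: asking an LLM to summarize a piece of text via an API.
# Assumes the `openai` Python package is installed and OPENAI_API_KEY is set
# in the environment; the model name below is only an example.
from openai import OpenAI

client = OpenAI()

article = "..."  # any text you want summarized

response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model name
    messages=[
        {"role": "system", "content": "Summarize the user's text in three sentences."},
        {"role": "user", "content": article},
    ],
)

print(response.choices[0].message.content)
```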
The "AI effect" is behind some of the current confusion. As John McCarthy, AI pioneer who coined the term "artificial intelligence," once said: "As soon as it works, no one calls it AI anymore." This is why we often hear that AI is "far from existing." This led to the formulation of the Tesler's Theorem: "AI is whatever hasn't been done yet."
In reality, the debate centers on the definition of intelligence itself. What allows us to consider an artificial intelligence truly intelligent? The question goes back to Turing's time: should the goal be to create an intelligence that functions like a human's, or is it sufficient for AI to produce results that are indistinguishable from those of a human? For Turing, the second option was what mattered. It doesn't matter whether the machine "thinks" like a human, as long as it can simulate intelligent behavior. This is the essence of his 1950 paper "Computing Machinery and Intelligence," where he introduces the famous "imitation game."
The Imitation Game, Alan Turing
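To make the protocol concrete, here is a minimal sketch of the imitation game in Python. The two respondent functions are hypothetical stand-ins for a real human and a real AI system: an interrogator exchanges written questions with two hidden respondents and must guess which one is the machine.

```python
import random

def human_respondent(question: str) -> str:
    # Stand-in for a real human typing answers at a terminal.
    return input(f"{question}\n> ")

def machine_respondent(question: str) -> str:
    # Stand-in for a real AI system; a genuine test would call a model here.
    return "That is an interesting question."

def imitation_game(questions, interrogator_guess) -> bool:
    # Hide the identities behind the anonymous labels "X" and "Y".
    assignment = {"X": human_respondent, "Y": machine_respondent}
    if random.random() < 0.5:
        assignment = {"X": machine_respondent, "Y": human_respondent}

    # The interrogator only ever sees the written transcript.
    transcript = {
        label: [(q, respond(q)) for q in questions]
        for label, respond in assignment.items()
    }

    guess = interrogator_guess(transcript)  # the label the interrogator believes is the machine
    machine_label = next(l for l, f in assignment.items() if f is machine_respondent)
    return guess == machine_label  # True means the machine was spotted

# The machine "passes" when, over many rounds, the interrogator does no
# better than chance at telling it apart from the human.
```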
A small digression: it is truly tragic that we lost such a brilliant thinker as Turing at just 41, due to the absurd moral policing of his time. I often wonder what more he could have accomplished. For the record, Turing led the work at Bletchley Park that broke the Enigma code, which is credited with shortening World War II and saving millions of lives; he gave us the Turing machine, the theoretical model that became the basis of modern computing; and, of course, the Turing Test, which has long guided AI research.
Personally, I follow Turing's school of thought. To me, it's pointless to try to apply human cognitive criteria to evaluate artificial intelligence; researchers who insist on this approach often hit a dead end. AI is a different form of intelligence, with its own characteristics, and will never be exactly human. What matters is the result: if a machine can imitate human intelligence to the point of being indistinguishable from it, then it possesses a form of artificial intelligence.
We must stop anthropomorphizing machines. Deep learning neural networks, for instance, do not function like human neurons, and forcing that analogy is futile.
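To see how loose the analogy is, here is a minimal sketch of what a single "neuron" in a deep learning network actually computes: a weighted sum of its inputs plus a bias, passed through a fixed nonlinearity, with none of the spiking, chemistry, or temporal dynamics of a biological neuron.

```python
import numpy as np

def artificial_neuron(inputs: np.ndarray, weights: np.ndarray, bias: float) -> float:
    # The whole computation: a weighted sum of the inputs plus a bias...
    pre_activation = float(np.dot(weights, inputs) + bias)
    # ...passed through a fixed nonlinearity (here ReLU).
    return max(0.0, pre_activation)

# Example with three arbitrary inputs and weights.
x = np.array([0.5, -1.2, 3.0])
w = np.array([0.8, 0.1, -0.4])
print(artificial_neuron(x, w, bias=0.2))  # a single number; nothing biological about it
```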
My point is that artificial intelligences already exist. They are still primitive in many respects: they struggle with reasoning, generalization, and planning, and they have no model of the world. But they are real, and it's up to us to use them wisely.