
General artificial intelligence: real science or fiction?

By Sandra Garrido, coordinator of the technology area at UDIT, University of Design, Innovation and Technology

General Artificial Intelligence (AGI) has gone from being a topic of academic and philosophical debate to appearing in mainstream media. But why is there so much talk about AGI now? Are we close to reaching it? Or are the great advances in AI being confused with a superintelligence that is still very far away?

The concept of AGI refers to a type of artificial intelligence capable of learning and performing any kind of cognitive task regardless of context and training, just as a human being does. However, the AI being developed right now is known as "narrow AI": to a large extent, it can only perform the very specific tasks for which it has been explicitly trained. For example, if a current AI is trained to complete the first level of the Super Mario video game in the shortest possible time, it would not be able to transfer that knowledge to a Donkey Kong video game, despite both belonging to the platform genre.

Recent media attention is due in large part to the impact of generative models such as ChatGPT, Gemini or Claude on linguistic tasks, logical reasoning and content generation. The fluency with which these models converse, write texts or solve problems has led some to proclaim them precursors, or even equivalents, of AGI.

It is easy to understand this enthusiasm if we consider how popular culture has prepared us for this moment. From HAL 9000 in 2001: A Space Odyssey to GLaDOS in Portal or the replicants of Blade Runner, fiction has played with the idea of artificial intelligences capable of reasoning, feeling or even rebelling. Every time a chatbot responds with striking coherence or an AI generates a hyperrealistic image, it seems that we are getting a little closer to those imagined futures.

Exceptional imitators

However, current models, no matter how sophisticated, are based on statistical learning techniques that recognize patterns and relationships in vast quantities of data. These models are not capable of understanding, much less of possessing consciousness of their own.

The appearance of intelligence they show us is an illusion produced by large-scale correlations, not the result of a deep understanding of the world. We see a system respond with coherence and fluency and we assume that it "understands." But current models lack intentionality, causal reasoning and semantic representation of knowledge. They are exceptional imitators, not autonomous thinkers.
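The point about fluency emerging from pure statistics can be made with a toy example. The following sketch (illustrative only; real language models are vastly larger and use neural networks rather than raw counts) builds a bigram table from a tiny hypothetical corpus and generates a grammatical-looking phrase with no notion of meaning whatsoever:

```python
from collections import Counter, defaultdict

# Hypothetical toy corpus: the "model" only ever sees word co-occurrences.
corpus = "the cat sat on the mat the dog sat on the rug".split()

# Count which word follows which (a bigram model: pure statistics, no meaning).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(word):
    """Pick the most frequent continuation; the model 'knows' nothing else."""
    options = follows[word]
    return options.most_common(1)[0][0] if options else None

# Generate a fluent-looking phrase purely from correlations.
word, out = "the", ["the"]
for _ in range(4):
    word = next_word(word)
    out.append(word)
print(" ".join(out))  # a plausible-sounding phrase, produced without understanding
```

The output reads like English, yet the program has no concept of cats, mats or sitting: it only reproduces the statistics of its training data, which is, in miniature, the gap the article describes.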

A clear example of this lack of intelligence is hallucinations: false statements delivered with total conviction that are easily contradicted. A truly intelligent system would know how to recognize an inconsistency, or at least express uncertainty. AGI, in theory, would not only have access to a knowledge base but also the ability to reason about it, question it and adapt it to new situations. It is not enough to seem intelligent; a system must be functionally intelligent and adaptive.

In Ex Machina, the android Ava not only holds complex conversations but also emotionally manipulates the humans around her, showing contextual understanding and the capacity to anticipate. It is precisely that kind of situational awareness that remains completely out of reach of current systems. In any case, although these are fictional representations, such narratives shape the general public's perception of AGI.

Limits on the road to AGI

What limits separate us from AGI? At present we do not even understand how the brain works, which makes replicating its operation in a machine extremely complex.

Current artificial intelligence models have significant limitations compared with human learning: they cannot transfer knowledge between contexts the way a child does when learning a new concept, they lack a persistent memory, and they cannot learn from just a few examples.

In addition to all these factors, there are philosophical limits: what exactly is meant by general artificial intelligence? Is AGI based on replicating the human brain, or on creating a functionally equivalent alternative? Should AGI be self-aware, or have emotions or values?

The race towards AGI is not only scientific and technical, but also commercial and geopolitical. Companies such as OpenAI, DeepMind or Anthropic have openly declared their mission to reach AGI, which has generated debates about existential risks, governance and the monopolization of knowledge. Some experts, such as Yoshua Bengio or Max Tegmark, have expressed concern about a possible lack of control and regulation in this field.

In parallel, the scientific community has been more skeptical, noting that current models lack the fundamental principles of a mind. Some advocate hybrid approaches that integrate neural networks with symbolic structures, evolutionary learning or artificial theory of mind.

The responsibility of the scientific and technological ecosystem is twofold: on the one hand, to advance knowledge in a rigorous, ethical and open way; on the other, to avoid feeding sensationalist narratives that promise imminent superhuman intelligences when reality is far more complex.

It is urgent to establish interdisciplinary frameworks in which philosophers, psychologists, neuroscientists, engineers and sociologists collaborate to define what AGI really means, how to recognize it and how to design it safely. It is also crucial to involve civil society in this debate and to educate the public so that it is clear that current models are far from superintelligence. AGI is not only a technical challenge but an issue that will affect labour, educational, political and social models.

In conclusion, we are left with more questions than answers. Are we close to AGI? The honest answer is: we don't know. We have built powerful tools that impress us, but they are still far from capturing the adaptability and depth of human intelligence. Confusing what seems intelligent with what is intelligent can lead us to wrong decisions.

It is legitimate to dream of an AGI that helps us solve the great challenges of the world, but it is also necessary to maintain a critical and realistic view of the current state of the technology, its limits and the interests that promote it.

