How did artificial intelligence come about? Without getting lost in science fiction, learn how artificial intelligence came about and what its goals were.
More than 60 years ago, humanity realized that machine logic could aid its own development. See below how artificial intelligence came about, what its first applications were, and how much it has developed over the decades. It originated from the need to speed up known processes, freeing humans to focus on more abstract thinking.
How is the area defined?
Artificial intelligence (AI) is a discipline roughly sixty years old: a set of sciences, theories and techniques (including mathematical logic, statistics, probability, computational neurobiology and computer science) that aims to mimic the cognitive abilities of a human being.
Born at the height of the Second World War, its development is closely linked to that of computing, and it has led computers to perform increasingly complex tasks that previously only a human being could do.
The beginnings (1940 – 1960)
It is impossible to separate the origin of artificial intelligence from the evolution of computing. On that basis, we cannot fail to mention Alan Turing, a founding father of computer science, whose code-breaking machine helped the Allies win the war sooner.
Speaking of this computing genius: many experts today believe the Turing test is not a good measure of artificial intelligence so much as a benchmark of conversational imitation, yet it strongly inspired the concepts from which artificial intelligence would grow.
The period between 1940 and 1960 was marked by technological development — with the Second World War being an accelerator — and the desire to understand how to approximate the functioning of machines and organic beings.
The term “AI” can be attributed to John McCarthy of MIT (Massachusetts Institute of Technology), who defined it as the construction of computer programs that engage in tasks performed more satisfactorily by human beings because they require high-level mental processes, such as perceptual learning, memory organization and critical reasoning.
Around the year 1960, artificial intelligence cooled down due to technical limitations of the time, such as the scarcity of computer memory.
The second era (1972 – 1997)
During this period, the technical limitations of computers were partially solved with the expansion of memory. What truly revived the technology was art and cinema: individuals who had encountered these concepts in their youth unleashed their creativity.
On the technical side, it was in fact microprocessors that made the idea viable again. Even so, the truth is that little evolved in a tangible, widely shared way; advances remained restricted to researchers.
A first big step was taken at Stanford University in 1972 with MYCIN, a system specializing in diagnosing blood infections and recommending medication. It was built on an “inference engine” programmed to be a logical mirror of human reasoning: given the input data, the engine produced answers at a high level of expertise.
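The core mechanism of such expert systems can be sketched in a few lines. Below is a minimal forward-chaining inference engine in the spirit of MYCIN; the facts and rules are invented for illustration and are not MYCIN's actual knowledge base.

```python
# A toy rule-based inference engine. Each rule maps a set of required
# facts to one conclusion; the engine applies rules repeatedly until
# no new conclusions can be drawn (forward chaining).
# NOTE: these rules are hypothetical examples, not real medical logic.
RULES = [
    ({"fever", "elevated_white_cells"}, "possible_infection"),
    ({"possible_infection", "positive_blood_culture"}, "bacterial_infection"),
    ({"bacterial_infection"}, "recommend_antibiotics"),
]

def infer(facts):
    """Derive every conclusion reachable from the initial facts."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

result = infer({"fever", "elevated_white_cells", "positive_blood_culture"})
print(sorted(result))
```

The key design point is that the knowledge lives in hand-written rules, separate from the engine that applies them; this is exactly what the inductive approach of the 2010s would later replace.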
In 1997, IBM's Deep Blue computer beat chess world champion Garry Kasparov. Despite this, the machine was an expert in a limited universe, incapable of modeling and reasoning about the world at large.
The current artificial intelligence (2010 – present)
Two main factors triggered this new era. The first was access to large volumes of data. The second was the discovery that graphics processing units (GPUs) are extremely efficient at accelerating the computation of learning algorithms.
- In 2012, Google X (Google's research lab) managed to get an AI to recognize cats in videos: a machine learned to distinguish something;
- In 2016, AlphaGo (Google's AI specialized in the game of Go) defeated the European champion (Fan Hui) and the world champion (Lee Sedol). Go has vastly more possible variations than the chess played by Deep Blue.
How was this possible? Through a complete paradigm shift away from expert systems. The approach became inductive: instead of hand-coding rules, computers are allowed to discover them on their own, through correlation and classification, based on large amounts of data.
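The contrast with hand-coded rules can be shown with a toy learner. The sketch below, with made-up data points, derives its decision boundary from labeled examples using a simple nearest-centroid rule; it stands in for the far more complex models used in practice.

```python
# A toy illustration of the inductive approach: the program is never
# told the rule separating "cat" from "dog" points; it derives one
# from labeled examples. The data here is invented for demonstration.

def train(examples):
    """Compute the average position (centroid) of each class."""
    sums, counts = {}, {}
    for (x, y), label in examples:
        sx, sy = sums.get(label, (0.0, 0.0))
        sums[label] = (sx + x, sy + y)
        counts[label] = counts.get(label, 0) + 1
    return {label: (sx / counts[label], sy / counts[label])
            for label, (sx, sy) in sums.items()}

def classify(centroids, point):
    """Assign a point to the class with the nearest centroid."""
    def dist2(c):
        return (c[0] - point[0]) ** 2 + (c[1] - point[1]) ** 2
    return min(centroids, key=lambda label: dist2(centroids[label]))

examples = [((1.0, 1.0), "cat"), ((1.2, 0.8), "cat"),
            ((5.0, 5.0), "dog"), ((4.8, 5.3), "dog")]
centroids = train(examples)
print(classify(centroids, (1.1, 0.9)))  # prints "cat"
```

Adding more labeled examples changes the learned boundary without touching the code, which is the essence of the inductive paradigm described above.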
Almost overnight, the vast majority of research teams turned to this technology, with immense benefits.
This type of learning has also enabled considerable advances in text recognition, but there is still some way to go before we have true text comprehension systems: AI still cannot fully grasp context or analyze the intentions behind certain ways of writing.