Let's Be Realistic about Artificial Intelligence | El Economista

While artificial intelligence systems outperform humans in tasks often associated with a "high level of intelligence" (playing Go, chess, or Jeopardy), they are nowhere close to excelling at tasks that humans can master with little or no training.

Halifax, Nova Scotia - In recent years, artificial intelligence (AI) has been attracting more attention, money, and talent than at any other time in its brief history. But much of the sudden excitement stems from myths and misconceptions spread by people outside the field.

For many years, the field grew incrementally. Existing techniques improved performance on standard benchmarks by 1-2% per year. But in 2012 came a genuine breakthrough, when the computer scientist Geoffrey Hinton and his colleagues at the University of Toronto showed that their "deep learning" algorithms could beat state-of-the-art computer-vision algorithms by a margin of 10.8 percentage points on the ImageNet Challenge (a benchmark data set).

At the same time, AI researchers have benefited from continuing advances in high-quality open-source software. With it, machine learning, and deep learning in particular, have come to dominate AI and generated a wave of enthusiasm. Investors are lining up to fund promising AI companies, and governments have been pouring hundreds of millions of dollars into AI research institutes.

While further progress in artificial intelligence is inevitable, it will not necessarily be linear. Still, those generating hype around these technologies have traded on a number of appealing myths, starting with the notion that AI can solve any problem.

Hardly a week goes by without a sensational story about AI outperforming human beings: "Smart machines are teaching themselves quantum physics"; "Artificial intelligence is better than humans at detecting lung cancer." Headlines like these are usually true only in a narrow sense. For a general problem such as lung-cancer detection, AI offers a solution only for a particular, simplified version of the problem, reducing the task to a matter of image recognition or document classification.


What these stories fail to mention is that AI does not actually understand images or language the way humans do. Rather, the algorithm finds hidden, complex combinations of features whose presence in a given set of images or documents is characteristic of a specific class (for example, cancer or violent threats). And these classifications cannot necessarily be trusted for decisions about people, whether they concern a patient's diagnosis or how long someone should spend in jail.
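To make that mechanism concrete, here is a deliberately toy sketch (entirely invented for illustration; it resembles no deployed system and uses only a handful of made-up sentences): a bag-of-words "threat detector" that labels a sentence purely by counting how often its words appeared in each class of training examples. It tracks feature statistics, never meaning.

```python
from collections import Counter

# Toy training data, invented for this illustration.
TRAIN = [
    ("I will hurt you if you come here", "threat"),
    ("you will regret this I promise", "threat"),
    ("see you at lunch tomorrow", "benign"),
    ("thanks for the lovely evening", "benign"),
]

def word_counts(label):
    """Count how often each word appears in the examples of one class."""
    counts = Counter()
    for text, y in TRAIN:
        if y == label:
            counts.update(text.lower().split())
    return counts

COUNTS = {y: word_counts(y) for y in ("threat", "benign")}

def classify(text):
    # Score each class by how strongly the sentence's words are
    # associated with that class's training examples; the algorithm
    # has no notion of what any word means.
    scores = {y: sum(c[w] for w in text.lower().split())
              for y, c in COUNTS.items()}
    return max(scores, key=scores.get)
```

A harmless sentence such as "you will love this" gets flagged as a threat simply because its words overlap with the threat examples, which is exactly the point: the classifier detects characteristic feature combinations, not intent.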

It is not hard to see why. While AI systems outperform humans in tasks often associated with a "high level of intelligence" (playing chess, Go, or Jeopardy), they are nowhere close to excelling at tasks that humans can master with little or no training (such as understanding jokes).

What we call "common sense" is in fact a gigantic base of tacit knowledge: the cumulative effect of experiencing the world and learning about it from childhood onward. Codifying common-sense knowledge and feeding it into computer systems remains an unsolved challenge. While AI will continue to crack some hard problems, it is a long way from performing many tasks that children do naturally.

This points to a second, related myth: that AI will soon surpass human intelligence. In 2005, the best-selling futurist Ray Kurzweil predicted that by 2045 machine intelligence will be infinitely more powerful than all human intelligence combined. But while Kurzweil assumed that the exponential growth of AI would continue more or less unabated, it is more likely that barriers will arise.

One such barrier is the sheer complexity of AI systems, which rely on billions of parameters to train machine-learning algorithms on gigantic data sets. Because we no longer understand the interactions among all these parts of the system, it is hard to see how the various components can be assembled and connected to perform a given task.

Another barrier is the scarcity of the annotated ("labeled") data on which machine-learning algorithms depend. Large technology companies such as Google, Amazon, Facebook, and Apple own much of the most promising data, and they have little incentive to make these valuable assets available to the public.

A third myth is that AI will soon render human beings superfluous. In his 2015 bestseller Homo Deus: A Brief History of Tomorrow, the Israeli historian Yuval Noah Harari argues that most humans could become second-class citizens of societies in which all higher-level intellectual decision-making is reserved for AI systems. True, some common jobs, such as truck driving, are most likely to be eliminated by AI in the next ten years, as are many clerical jobs involving routine, repetitive tasks.

But these trends do not imply mass unemployment, with millions of households surviving on a guaranteed basic income. Old jobs will be replaced by new jobs that we cannot yet even imagine. In 1980, no one could have known that millions of people would soon earn a living by adding value to the Internet.

Undoubtedly, future jobs will demand much higher levels of training in mathematics and science. But AI itself can offer a partial solution, by enabling new and more engaging ways to train future generations in the necessary skills. The jobs that AI eliminates will be replaced by new jobs that AI helps prepare people to do. There is no law of technology or history that consigns humanity to a future of intellectual servitude.

There are other myths besides: that AI will outsmart and harm human beings; that it will never be capable of human-like creativity; that it can never construct the causal, logical chains that connect effects to the patterns that produce them. In short, I believe that time and research will dispel these myths.

This is a crucial moment for AI, which is all the more reason to remain realistic about the field's future.

The author

Stan Matwin, Professor of Computer Science, Canada Research Chair, and Director of the Institute for Big Data Analytics at Dalhousie University in Halifax, Nova Scotia, is also a professor at the Institute of Computer Science of the Polish Academy of Sciences.

Copyright: Project Syndicate, 2020

www.project-syndicate.org
