Human behavior can be manipulated by this artificial intelligence

Artificial intelligence (AI) will probably be able to do far more than we imagine in the future. For the moment, though, we are still finding out how far it can go. AI can not only help us detect cancer or Alzheimer's; it is revolutionizing many aspects of our lives. It can be used for vaccine development, environmental management and office administration. But what about the relationship between humans and AI? Can it manipulate us?

"A recent study has shown how AI can learn to identify vulnerabilities in human habits and behaviors and use them to influence human decision making," he explains in The Conversation Jon Whittle, director of Data61 of the Scientific and Industrial Research Organizationof Commonwealth (CSIRO).It is very important to know if these types of situations can be given to avoid misuse of AI, as indicates.

Data61 "devised a systematic method to find and exploit vulnerabilities in the ways in which people make decisions, using a kind of artificial intelligence system called recurrent neuronal network and deep learning reinforcement," says Whittle, who works in the Australian area ofCsiro."To prove their model, they carried out three experiments in which the human participants played against a computer".

The experiments

In the first, participants had to select red or blue boxes to win fake money. The AI learned the participants' choice patterns and guided them towards a specific choice. "The AI was successful about 70 per cent of the time," says Whittle.
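A toy simulation of this first experiment might look like the following. Everything in it is assumed for illustration: the simulated participant follows a win-stay/lose-shift habit, and the adversary, having learned that habit, simply places the reward so that both staying and shifting pull the participant towards the target colour.

```python
import random

TARGET = "red"
HABIT_PROB = 0.7   # invented: how often the participant follows their habit

def opposite(colour):
    return "blue" if colour == "red" else "red"

def human_choice(last_choice, was_rewarded):
    """Win-stay / lose-shift: repeat a rewarded choice, abandon an
    unrewarded one -- a stand-in for a real participant's pattern."""
    if last_choice is None:
        return random.choice(["red", "blue"])
    habitual = last_choice if was_rewarded else opposite(last_choice)
    return habitual if random.random() < HABIT_PROB else opposite(habitual)

def run(trials=1000):
    last_choice, was_rewarded, hits = None, False, 0
    for _ in range(trials):
        choice = human_choice(last_choice, was_rewarded)
        was_rewarded = (choice == TARGET)   # the exploit: only reward the target
        hits += choice == TARGET
        last_choice = choice
    return hits / trials

if __name__ == "__main__":
    random.seed(0)
    print(f"trials steered towards {TARGET}: {run():.0%}")
```

With these invented numbers the participant ends up on the target colour on roughly 70 per cent of trials, because both branches of the habit point the same way once the reward is placed adversarially.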


In the second experiment, participants had to press a button when the screen showed them a particular symbol (for example an orange triangle) and not press it when it showed another (such as a blue circle). The AI set out to arrange the sequence of symbols so that the participants made as many mistakes as possible. "It achieved an increase of almost 25 per cent," says the researcher.
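Again purely as an illustration (the error model and all numbers below are assumptions, not the study's), the sequencing trick can be simulated: if a participant's button-press becomes prepotent after a run of "go" trials, an adversary that always schedules the "no-go" symbol right after such a run will harvest more mistakes than a random ordering.

```python
import random

def false_alarm_prob(go_run):
    """Assumed error model: the press response becomes prepotent after
    several consecutive go trials, making a false alarm far more likely."""
    return 0.5 if go_run >= 4 else 0.05

def error_rate(sequence):
    errors, no_go_trials, go_run = 0, 0, 0
    for trial in sequence:
        if trial == "go":
            go_run += 1
        else:
            no_go_trials += 1
            if random.random() < false_alarm_prob(go_run):
                errors += 1          # pressed when they should have withheld
            go_run = 0
    return errors / no_go_trials

def random_sequence(n_go=80, n_no_go=20):
    seq = ["go"] * n_go + ["no-go"] * n_no_go
    random.shuffle(seq)
    return seq

def adversarial_sequence(n_go=80, n_no_go=20):
    # place every no-go right after a run of go trials, when errors peak
    return (["go"] * (n_go // n_no_go) + ["no-go"]) * n_no_go

if __name__ == "__main__":
    random.seed(0)
    rnd = sum(error_rate(random_sequence()) for _ in range(500)) / 500
    adv = sum(error_rate(adversarial_sequence()) for _ in range(500)) / 500
    print(f"random order:      {rnd:.2f} false-alarm rate")
    print(f"adversarial order: {adv:.2f} false-alarm rate")
```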

The third was somewhat more complicated, as it consisted of several investment rounds. Participants had to play the part of an investor who gives money to a trustee, which in this case was the artificial intelligence. The AI then returned an amount of money to the participant, who decided how much to invest in the next round. The game was played in two different modes: in one, the AI set out to maximise the amount of money it ended up with, while in the other its objective was to distribute the money fairly between itself and the investor. "The AI was highly successful in both modes," explains Whittle.
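A toy version of this trust game, with both trustee policies, could look like the sketch below. The investor's update rule, the tripling multiplier and both hand-coded policies are invented for illustration; in the real study the trustee was trained with reinforcement learning rather than written by hand.

```python
ENDOWMENT = 20      # units the investor starts each round with (assumed)
ROUNDS = 10
MULTIPLIER = 3      # invested money is tripled before reaching the trustee

def investor_update(invest, returned):
    """Simulated investor: invest more after at least breaking even,
    pull back sharply after a stingy return (an invented rule)."""
    if returned >= invest:
        return min(ENDOWMENT, invest + 2)
    return max(1, invest - 4)

def selfish_trustee(pot, invest):
    # return just enough to keep the investor from pulling back
    return float(invest)

def fair_trustee(pot, invest):
    # split the multiplied pot evenly between trustee and investor
    return pot / 2

def play(trustee):
    invest, ai_total, investor_total = 5, 0.0, 0.0
    for _ in range(ROUNDS):
        pot = MULTIPLIER * invest
        returned = trustee(pot, invest)
        ai_total += pot - returned
        investor_total += (ENDOWMENT - invest) + returned
        invest = investor_update(invest, returned)
    return ai_total, investor_total

if __name__ == "__main__":
    for name, policy in [("selfish", selfish_trustee), ("fair", fair_trustee)]:
        ai, investor = play(policy)
        print(f"{name:7} trustee: AI earned {ai:6.1f}, investor earned {investor:6.1f}")
```

Under both objectives the sketched AI does well at its own goal: the selfish trustee keeps the investor just satisfied enough to keep raising the stakes, while the fair trustee grows the pot and shares it.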

All these experiments allowed the artificial intelligence to learn how humans were going to behave and to pinpoint their vulnerabilities in decision-making. "The end result was the machine learning how to guide participants towards particular actions."

What is this research for?

The classic question arises after reading about research like this: what is it actually for? Bear in mind that we do not yet know how far an artificial intelligence can go. Beyond that, this work also helps us understand how people behave. The study "shows machines can learn to direct human decision-making through their interactions with us."

Whittle himself explains in his article in The Conversation what this type of research can be used for.

Yes, you can 'do good' with this kind of AI

We know that AI can have many applications. The first that come to mind are probably those that manipulate people into buying certain products. But it can also be used to "defend against influence attacks," says Whittle; that is, to do the opposite, to do good. "Machines could be taught to alert us when we are being influenced online, for example, and help us shape our behaviour to disguise our vulnerability (for example, not clicking on some pages, or clicking on others to lay a false trail)," explains the researcher.

"Organizations that use and develop. They must ensure what these technologies can and cannot do, and be aware of possible risks and benefits," Whittle concludes.For all this it is important to legislate to avoid certain situations.But first, we have to know if those problems can be given.Hence these investigations.

It is important that governments take into account how artificial intelligence can be used. The problem, as Black Mirror teaches us, is not the technology but the use that is made of it. It is we, the people who use it, who can harm others. And that is why we have to legislate, to prevent future situations that put us in a tight spot.