A new danger lurks online: voice cloning could be the latest cyberthreat for users

Sometimes we like to think that the Internet is a safe place to store our passwords, keep photos in the cloud or shop online, right? Well, think again: cyberattacks have soared in recent months, and among the most common are phishing scams and the introduction of malware into devices. But could cybercriminals be creating a new kind of attack by cloning a person's voice?

It may sound shocking, but it is nothing new to them: if the 'bad guys' want to cheat us, they will do it by any means. About a week and a half ago, the BBC published a news story reporting on the growing concern about this kind of cybercrime, which could deceive the people you talk to on the phone.

At 20Bits we have interviewed the technology company Aflorithmic (dedicated to the production of scalable, automated audio) to find out whether these attacks are feasible in the world of cybersecurity. But first of all, it is worth knowing how a person's voice can be cloned.

How is a voice cloned?

Matt Lehmann, COO (Chief Operating Officer) of Aflorithmic, explains that the process begins by recording the person whose voice is to be cloned, with the aim of creating a model. The audio has to come out clean, since any background sound such as music or noise will distort the model; for this reason, YouTube videos or radio interviews are not usually used to clone voices.

Once the recordings are obtained, they are aligned with the written text of the script the person has read, and the data is prepared to model the voice. Lehmann states that the process relies on artificial intelligence (AI), which recognizes the characteristics of a person's voice and recreates them in a model. This 'machine learning' process usually takes a few days, and once it is finished, any written text can become the voice of the person behind the recordings.
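
For readers who like to see the moving parts, here is a minimal Python sketch of the pipeline Lehmann describes: clean recordings, aligned with their script, feeding a model that can then speak any text. Every name in it (AlignedSample, VoiceModel, align_with_script) is a hypothetical placeholder for illustration, not Aflorithmic's actual tooling.

```python
# A minimal sketch of the cloning pipeline described above. Every name here
# (AlignedSample, VoiceModel, align_with_script) is a hypothetical
# placeholder for illustration -- not Aflorithmic's actual tooling.
from dataclasses import dataclass

@dataclass
class AlignedSample:
    audio_path: str   # a clean recording: background music or noise would distort the model
    transcript: str   # the exact script line the speaker read

def align_with_script(audio_paths: list[str], script_lines: list[str]) -> list[AlignedSample]:
    """Step 2: pair each recording with the written text the person read."""
    return [AlignedSample(path, line) for path, line in zip(audio_paths, script_lines)]

class VoiceModel:
    """Step 3: machine learning recreates the speaker's voice characteristics."""
    def train(self, samples: list[AlignedSample]) -> None:
        # In reality this stage runs on a neural network for a few days.
        self.samples = samples

    def speak(self, text: str) -> bytes:
        """Once trained, any written text can be rendered in the cloned voice."""
        return b""  # placeholder for the synthesized audio

# Usage
samples = align_with_script(["take_01.wav", "take_02.wav"],
                            ["First script line.", "Second script line."])
model = VoiceModel()
model.train(samples)
audio = model.speak("Any written text can now become this person's voice.")
```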

What technologies are used to carry out the process?

A range of technologies is required, but the most essential is the machine learning, which is carried out with a neural network. Basically, it is an artificial brain that receives the audio information from the person whose voice is to be cloned.
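
To make that 'artificial brain' a little more concrete, this illustrative PyTorch sketch shows the kind of network that could take a speaker's audio (as mel-spectrogram frames) and compress it into a reusable voice embedding; the architecture and sizes are our assumptions, not Aflorithmic's model.

```python
# Illustrative only: a tiny speaker encoder in PyTorch. The architecture and
# dimensions are assumptions for the sake of example, not Aflorithmic's model.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpeakerEncoder(nn.Module):
    """Maps mel-spectrogram frames of a voice to a fixed-size embedding."""
    def __init__(self, n_mels: int = 80, hidden: int = 256, emb_dim: int = 128):
        super().__init__()
        self.lstm = nn.LSTM(n_mels, hidden, num_layers=2, batch_first=True)
        self.proj = nn.Linear(hidden, emb_dim)

    def forward(self, mels: torch.Tensor) -> torch.Tensor:
        # mels: (batch, time, n_mels) -- audio features of the target speaker
        _, (h, _) = self.lstm(mels)
        emb = self.proj(h[-1])            # last hidden state -> voice embedding
        return F.normalize(emb, dim=-1)   # unit-length "voice print"

# Usage: ~3 seconds of audio at 100 frames/second -> one 128-d embedding
frames = torch.randn(1, 300, 80)
print(SpeakerEncoder()(frames).shape)  # torch.Size([1, 128])
```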

What can a cloned voice be used for?

At the moment, voice cloning is used in cases where the content is brief: reading a web page aloud for people with visual impairments, listening to the instructions for a program or a machine, or even serving as the voice of a GPS. This limitation is due to the fact that artificial voices lack the ability to express emotions; however, with the latest advances in technology, that limitation is disappearing.

On the other hand, there are use cases where artificial voices already shine. The key lies in voice applications that require personalization and/or very fast voice generation, such as a chatbot; that is where artificial voices have their strong point.

To give an example, Matt Lehmann states that "a conversation with a bot of Lionel Messi talking about football feels almost like talking with him; in addition, at Aflorithmic we have created Albert Einstein's voice for a digital human who speaks English with a German accent". If curiosity gets the better of you and you want to talk with the physicist, you can do so at the following link to live a unique experience.
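
As a hedged illustration of why personalization and fast generation matter in this kind of bot, the sketch below wires a text reply straight into a voice synthesis call; both chatbot_reply() and synthesize() are hypothetical placeholders, not the API behind the Messi or Einstein demos.

```python
# Hypothetical sketch: turning a chatbot's text reply into speech in a cloned
# voice. chatbot_reply() and synthesize() are placeholders, not a real API.
import time

def chatbot_reply(user_message: str) -> str:
    """Placeholder dialogue logic; a real bot would generate this dynamically."""
    return "It is a pleasure to talk about football with you."

def synthesize(text: str, voice_id: str) -> bytes:
    """Placeholder cloned-voice TTS call; it must be fast enough for dialogue."""
    return b""

start = time.perf_counter()
reply = chatbot_reply("Tell me about your favourite match.")
audio = synthesize(reply, voice_id="demo-footballer")  # hypothetical voice id
print(f"Voice reply ready in {(time.perf_counter() - start) * 1000:.1f} ms")
```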

Moving into the field of cybersecurity: could there be telephone scams based on voice cloning?

Lehmann states that "it has already happened, although very sporadically. However, the technology is very young and it is not possible to 'steal' a person's voice with just a few seconds of recording; something that sounds like the person can be produced, but it is not possible to create a message that sounds realistic, much less to maintain a conversation".

The COO of Aflorithmic has also explained that both artificial intelligence companies and governments are keeping a close eye on this, since there are already several test models to check whether a voice or a video is real. Reassuringly, the company says that this is not a new threat to users.
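
The article does not detail these test models, but one common shape for such a check, assumed here purely for illustration, is a binary classifier over audio features:

```python
# Illustrative sketch of a synthetic-voice detector: a binary classifier over
# audio features. Feature choice, layer sizes and the 0.5 threshold are all
# assumptions; real detection models are far more elaborate.
import torch
import torch.nn as nn

class FakeVoiceDetector(nn.Module):
    def __init__(self, n_features: int = 80):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 64),
            nn.ReLU(),
            nn.Linear(64, 1),
            nn.Sigmoid(),  # probability that the clip is synthetic
        )

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        return self.net(features)

detector = FakeVoiceDetector()
clip = torch.randn(1, 80)               # stand-in for averaged spectral features
p_fake = detector(clip).item()
print("synthetic" if p_fake > 0.5 else "real", f"(p={p_fake:.2f})")
```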

These would be the possible solutions to prevent such threats in the future

"The algorithms can recognize these videos and those with Upload filters could also eliminate content platforms and media would have to play a more important role for the prevention of these scams in the future," explains Lehmann.

Unfortunately, the first cases of celebrities' voices being used without their consent have already occurred; the best-known deepfake examples are those of Trump and Obama, created three years ago.

Lehmann acknowledges that "as voice actors, we should never accept jobs without signing a contract that guarantees a right of veto to stop the use of our voice, or that defines its use very precisely". Hopefully these threats will never materialize; at 20Bits we would not like to have to report on 'voice theft'.
