Meta builds the world's most powerful supercomputer dedicated to artificial intelligence

Meta, the company formerly known as Facebook, will this year have the most powerful supercomputer in the world dedicated to artificial intelligence tasks. The machine, known as the AI Research SuperCluster, or RSC, is already in operation, although not yet at its final computing capacity, and will be used to build machine learning models capable of operating in all kinds of scenarios, from comment moderation to the design of virtual environments.

"We hope that RSC will help us build completely new AI systems that can, for example, boost real -time voice translations to large groups of people, each speaking a different language, so that they can collaborate without problems in a research projector play an augmented reality game together, "explain Kevin Lee and Shubho Sengarta, engineers in charge of the project.

Once completed, RSC will be able to perform trillions of operations per second thanks to more than 16,000 graphics processing units (GPUs), which are used in artificial intelligence workloads because of their high parallel computing capacity. Today the computer has almost a third of those units.
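As a rough illustration of how GPU count translates into aggregate throughput, here is a back-of-the-envelope sketch; the per-GPU figure is an assumed placeholder for illustration, not a published RSC specification.

```python
# Back-of-the-envelope estimate of aggregate throughput from GPU count.
# The per-GPU figure is an assumption for illustration, not an RSC spec.
GPU_COUNT = 16_000            # planned number of GPUs in the finished cluster
PER_GPU_TFLOPS = 300          # assumed per-GPU throughput (teraflops), illustrative only

total_tflops = GPU_COUNT * PER_GPU_TFLOPS
print(f"Aggregate throughput: {total_tflops:,} TFLOPS "
      f"(~{total_tflops / 1_000_000:.1f} exaflops)")
```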

This computing capacity, according to the company, is necessary to keep advancing in domains such as computer vision and language recognition, in which machine learning techniques have produced very useful tools in recent years but where important obstacles remain to be overcome.


"We are looking for an infrastructure that can train models with more than 1.000 million parameters in data sets as large as an exabyte, which, to give a little context, is the equivalent of 36.000 years of high quality video, "says Lee and Sengarta.

The previous supercomputer that Facebook used to train its artificial intelligence models is made up of 22,000 NVIDIA V100 processing units. The new machine, which will be 20 times faster on computer vision workloads, will use the NVIDIA A100 architecture instead. In practical terms, this extra power will allow learning models to be trained much more quickly. A model with tens of billions of parameters, for example, could finish training in three weeks, compared with the nine weeks the previous machine would take.
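To make the two figures in that paragraph easier to compare, here is a minimal sketch of the end-to-end speedup implied by the article's own training-time example; it uses only numbers quoted above.

```python
# Training-time comparison implied by the article's example (illustrative only).
weeks_previous_cluster = 9    # tens-of-billions-parameter model on the old V100 cluster
weeks_rsc = 3                 # the same model on RSC, per the article's example

speedup = weeks_previous_cluster / weeks_rsc
print(f"End-to-end training speedup in this example: {speedup:.0f}x")
# Note: the 20x figure quoted in the article refers to computer vision workloads,
# a different benchmark from this end-to-end training example.
```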

Once created, these models can be run very quickly, and this is the technology behind many of the tools we now take for granted, such as automatic photo tagging or the transcription of a video.

To train these machine learning algorithms, Meta uses the data of Facebook, Instagram and WhatsApp users, but the company says it has built several mechanisms into RSC to protect user privacy. The data, for example, remains encrypted until the moment it has to be processed, and RSC has no direct connection to the wider internet: information can only be sent to or received from the company's own data centers.
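The article does not describe Meta's actual mechanism, but a minimal sketch of the general "keep data encrypted until the moment of processing" pattern, written here with the third-party cryptography library's Fernet API, might look like this:

```python
# Minimal sketch of keeping data encrypted until it is processed.
# Illustrative only; this is not Meta's actual privacy mechanism.
from cryptography.fernet import Fernet

key = Fernet.generate_key()    # in practice the key would live in a managed key store
cipher = Fernet(key)

# Data arrives and is stored in encrypted form.
encrypted_record = cipher.encrypt(b"example training record")

def process(record_ciphertext: bytes) -> int:
    """Decrypt only at the moment of processing, then work on the plaintext."""
    plaintext = cipher.decrypt(record_ciphertext)
    return len(plaintext)      # stand-in for real feature extraction / training work

print(process(encrypted_record))
```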

