This is a game-changing first: intelligent robots are beginning to come into contact with the real world, especially through touch. (Photo: ThisisEngineering for Unsplash)
DAMN JOB! is a column in which Olivier Schmouker answers your most exciting (and most pertinent) questions about the modern business world… and, of course, its shortcomings. To be read on Tuesdays and Thursdays. Would you like to take part? Send your question to [email protected]
Q. – “My job is manual labor; it requires great skill and years of practice to master. So it’s not tomorrow morning that an AI will replace me. Stop scaring us with this!” – Hassan
A. – Dear Hassan, I have bad news for you. Very bad news. Sit down before reading on; I think it’s better that way.
Last week, Toyota revealed what its robotics and artificial intelligence (AI) research unit, the Toyota Research Institute (TRI), has been working on in secret for several months. It is nothing less than a new learning method for intelligent robots (robots coupled with AI), a method that involves… manual learning.
To understand the significance of what is being portrayed as a “revolution” by Toyota researchers themselves, a little context is necessary.
Current AIs are somewhat disconnected from reality: they have neither eyes nor fingers with which to make contact with the real world. They are completely immersed in their virtual universe. What they learn about the real world comes through data, tons of data drawn from reality; their computing power is so great that the conclusions they draw from analyzing that data are truly astounding.
ChatGPT, for example, can write impressive texts on almost any topic because it brilliantly compiles everything it finds on the Internet about that topic. But ChatGPT actually understands nothing of what it writes. It simply calculates the probability that a given word should follow the words before it in the sentence it is producing, without grasping the meaning of the sentence itself. The end result is often striking, but it is just that: a bluff.
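To make that idea concrete, here is a deliberately tiny sketch of "predicting the next word by probability". This is not ChatGPT's actual model, just a toy that only knows how often one word has followed another, and picks the next word accordingly, with no notion of meaning:

```python
import random

# Toy word-frequency table: how often each word followed another
# in some imagined training text. A real model learns billions of
# such statistics from the Internet; the principle is the same.
bigram_counts = {
    "the": {"cat": 3, "dog": 1},
    "cat": {"sat": 2, "ran": 2},
    "sat": {"down": 4},
}

def next_word(word, rng=random.Random(0)):
    """Sample the next word in proportion to how often it followed `word`."""
    candidates = bigram_counts[word]
    words = list(candidates)
    weights = list(candidates.values())
    return rng.choices(words, weights=weights, k=1)[0]

# Generate a "sentence" word by word, purely by probability.
sentence = ["the"]
while sentence[-1] in bigram_counts:
    sentence.append(next_word(sentence[-1]))
print(" ".join(sentence))
```

The toy never asks what a cat is or what sitting means; it only follows the statistics, which is exactly the "bluff" described above.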
Today, robotics and AI researchers are stuck when it comes to dealing with reality, because intellectual bluffing isn’t enough there. The slightest mistake could lead to an accident involving humans, and that is out of the question: a robot worker that accidentally cut off the hand of the human colleague it is supposed to assist would trigger massive and lasting social rejection.
Of course, we see initiatives here and there. Robots can lift loads no human could. They can load a truck better than a human. They race through Amazon’s huge warehouses to pick up customer orders. But that’s about it: really not much, nothing revolutionary. At least until Toyota’s announcement…
Because TRI has just equipped its intelligent robots with eyes and fingers. And their AI really does come into contact with the real world: it learns each of the specific gestures of manual trades at an incredible speed, one that exceeds the researchers’ own understanding: “We show it a gesture in the afternoon, we let it practice all night, and when we arrive at the lab the next morning, we find it has mastered it perfectly,” says Ben Burchfiel, one of the researchers.
How do you show a gesture to an intelligent robot? It’s very simple. The operator, a human, holds two joysticks, one in each hand, and takes control of the arms and hands of the intelligent robot, which can be fully guided. After a few demonstrations, some of which involve recovering from a mistake such as dropping a tool, the intelligent robot has enough data to practice on its own. Which it does all night long, through countless trials and errors. By the next day, the gesture in question is acquired.
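The demonstrate-then-practice loop just described can be sketched in a few lines. Everything here (the `Policy` class, the `simulate` function, the state and action names) is a hypothetical stand-in, not TRI's actual system, which learns from camera and touch data:

```python
import random

class Policy:
    """A trivially simple 'policy': remembers which action worked per state."""
    def __init__(self):
        self.best = {}

    def act(self, state):
        # Explore randomly until practice (or a demo) has found a good action.
        return self.best.get(state, random.choice(["grip", "lift", "release"]))

    def update(self, state, action, success):
        if success:
            self.best[state] = action

def simulate(state, action):
    """Toy stand-in for the real world: which actions actually succeed."""
    return (state == "free" and action == "grip") or \
           (state == "holding" and action == "lift")

policy = Policy()

# 1. A few human teleoperation demos seed the policy...
for state, action in [("free", "grip"), ("holding", "lift")]:
    policy.update(state, action, success=True)

# 2. ...then the robot practices on its own, trial after trial, all night.
for _ in range(1000):
    state = random.choice(["free", "holding"])
    action = policy.act(state)
    policy.update(state, action, simulate(state, action))
```

The key design point mirrors the article: the human only provides a handful of demonstrations; the bulk of the learning happens in unsupervised trial and error.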
So far, TRI has taught robots more than 60 difficult and delicate operations, mostly gestures we perform in the kitchen. They are now able to pour a liquid, grasp and manipulate soft and malleable objects, or even use tools such as a vegetable peeler to peel any vegetable. The goal is to teach them around 100 new skills by the end of the year, and 1,000 by the end of 2024.
“If someone had told me last year that our robots today would have such dexterity, I would never have believed it,” says Russ Tedrake, vice-president of robotics research at TRI. “What I see is nothing short of incredible. And it’s certainly nothing compared to what’s to come.”
Toyota partnered with MIT and Columbia University to achieve this feat. Together, they developed an entirely new learning approach called diffusion policy. And that is exactly what allowed them to succeed where others had failed.
Diffusion policy amounts to equipping an intelligent robot with eyes and fingers and having it learn through them. Without writing a line of code. Without a database. Only with the help of a human model, a kind of teacher if you will. Just as we ourselves learn a new gesture: as children, for example, we watched our father saw a board and nail it down, then performed the same gestures on our own until we had mastered them.
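The core idea behind diffusion policy (described in Chi et al.'s "Diffusion Policy" paper from the same Columbia/MIT/TRI collaboration) can be gestured at in miniature: instead of predicting an action directly, the robot starts from pure noise and refines it step by step toward the kinds of actions it saw demonstrated. The toy "denoiser" below just nudges toward the average demonstrated action; the real method learns the denoising step from visual data:

```python
import random

# Toy demonstrated actions (say, a joint position shown three times
# by the human teacher). These numbers are illustrative, not real data.
demo_actions = [0.9, 1.0, 1.1]
target = sum(demo_actions) / len(demo_actions)

def denoise_step(action, step, n_steps):
    """One refinement step: move part-way from noise toward the data."""
    return action + (target - action) / (n_steps - step)

action = random.gauss(0.0, 1.0)   # start from pure noise
n_steps = 10
for step in range(n_steps):
    action = denoise_step(action, step, n_steps)
# After all steps, `action` has been refined to match the demonstrations.
```

The attraction of this iterative-refinement formulation is that it copes gracefully with ambiguity: when several demonstrated gestures are valid, the denoising process can settle on any one of them instead of averaging them into a meaningless blur.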
In other words, the idea couldn’t be simpler: teach smart robots dexterity, just like we teach human babies. Except that intelligent robots have a capacity for learning that is as breathtaking as it is infinite.
In his excellent book AI, the Biggest Change in History, Kai-Fu Lee, one of the world’s leading AI experts, who has worked for Apple, Microsoft and Google, explains that artificial intelligence will arrive in four waves. The first two are online AI and professional AI. The first emerged in the early 2010s and consisted essentially of algorithms able to make recommendations to anyone based on their tastes and preferences. The second corresponds to the ability of algorithms to perform certain tasks better than humans thanks to their phenomenal computing power: for example, by analyzing millions upon millions of X-rays, AI now spots the early signs of skin cancer better than the best doctors. It is this wave that leads some to say that AI is here to help people do their jobs better, without any risk of them losing those jobs.
Today we are living through these first two waves, though the second’s impact on the general public is still in its infancy. But two more waves are coming: perceptual AI, then autonomous AI. The latter, as its name suggests, will no longer need humans to exist and grow; we could even say the risk is that it will come to view us as insignificant and useless, much as we view ants. What really interests us here is the third wave, because perceptual AI means intelligent robots endowed with senses such as sight and touch.
In his 2018 book, Kai-Fu Lee could speak of it only as science fiction: he illustrates what it might look like with the image of an intelligent, perceptive supermarket cart that connects to the customer’s smart refrigerator to know what to buy (milk, jam, etc.) and makes personalized suggestions based on the household’s eating habits, saying out loud, in a friendly voice: “Here, I’m sure your teenager would love to try these new cereals. They’re 10% off this week.” According to the author, perceptual AI would not arrive tomorrow morning. He was wrong: Toyota is in the process of bringing it to life, and it is evolving much faster than its own researchers expected.
What does Toyota’s breakthrough imply? Essentially, that tomorrow’s intelligent robots will be humanoid: they will have our external appearance (a head, a torso, two arms, two legs). The difference is that they will master the gestures of manual trades better than we do and execute them much faster. They will also be unstoppable, able to work day and night without ever taking sick leave. And they will be free, in a sense: they will never ask for a salary, let alone a raise to keep up with inflation.
In short, Hassan, I don’t want to depress you, but to give you the straight facts, brutal as they may be: yes, tomorrow humanoid robots will be able to do your job. Perfectly, or at least better and faster than you. Without ever asking to be paid.
Of course that won’t happen tomorrow morning. But tomorrow is certain.
Moreover, the “revolution” appears to be already underway. The Texas company Apptronik plans to bring its humanoid robot to market in 2024. And Agility Robotics has just opened a factory in Oregon that will soon produce around 10,000 humanoid robots a year.
The question is obvious and needs to be answered quickly by our vulnerable societies: Are we likely to witness a major replacement soon?