Your AI Questions Answered by the Head of MIT’s AI Lab


What impact will AI have on the economy?  How can AI help in the medical field? Will self-driving cars soon be a reality? As investors, we’re aware that AI is creating opportunity everywhere – but what does that truly mean?

Daniela Rus, Director of CSAIL at MIT and ROBO Global strategic advisor, answers your questions about AI and highlights some of the most outstanding use cases of the technology both today and tomorrow. Investors should walk away from this fireside chat with a better understanding of the rapidly evolving world of artificial intelligence and how to best invest in the companies disrupting our world.

Webinar Transcript

Jeremie Capron:

Hello, everybody. Thank you for joining us today on this ROBO Global webinar. We’re going to be talking about investing in AI and robotics, and how investors can capitalize on these trends. My name is Jeremie Capron. I’m the director of research here at ROBO Global, a research and investment company that’s focused on robotics, AI, and healthcare technologies. As a lot of you know, we’re the creators of research-driven portfolios that are designed to help investors capture the growth and the returns presented by this technology revolution.

I’m talking to you from New York City and I’m thrilled to be joined today by a very special guest, Professor Daniela Rus, who is the director of MIT’s Computer Science and AI Lab and a member of the National Academy of Engineering and the American Academy of Arts and Sciences. Daniela has been an advisor to ROBO Global since 2018, and for that we’re very grateful. Daniela, thank you. Welcome.

 

Daniela Rus, PhD:

Thank you so much, Jeremie. It’s such a pleasure to be here with you, even if it’s virtual. I hope next time it will be in person.

 

Jeremie Capron:

Yes, certainly. We all hope that too. Now, you’ve been in robotics and AI for quite some time, and I’d love to get things started in this conversation by asking you: how did you get started, and what led you to your current role as director of CSAIL?

 

Daniela Rus:

Well, Jeremie, thank you for this question. I think that, as with most things we cherish in life, there is always a thread that goes back to your childhood dreams. As a child, I loved thinking about superpowers and superheroes, but ultimately I went into robotics because I was good at math and I wanted to work on something that brought together the world of mathematics with the world of physical things. So ultimately I became very interested in developing the science and engineering of autonomy and how to make things move.

So by this, I really mean understanding the mathematical and biological foundations of autonomy. I was also interested in how you take that and turn it into engineering: how you build machines that embody these foundations. So I wanted to develop machines that give people superpowers and help people with physical and cognitive work, because I like to imagine a future with AI and robots supporting people with cognitive and physical work, with the same pervasiveness with which smartphones support us with computing work.

Of course, I’m not alone in this quest. Now, I can pinpoint the day when I decided to go into robotics. It was a day when I was an undergraduate student and I attended a talk given by John Hopcroft, who at the time had recently won the Turing Award. In that talk, John said that classical computer science was solved, that it was time for the grand applications of computing that interacts with the physical world, and that robotics was the next big thing in computing.

Now, when John said that classical computer science was solved, what he meant was that many of the graph-theoretical algorithms that were posed in the 1970s had solutions. But this idea that you can take computing and extend it for interaction with the physical world, for creating machines that bring a physicality to the benefits of computing, was what fascinated me and what convinced me to go study with him. That was really an extraordinary journey.

Eventually I ended up at CSAIL, and I was really honored and excited to be part of this community, which has always been about moonshots and big dreams: about how you go from science fiction to science and then to reality, how you pick up questions that are never too crazy, and how you think about a future that’s never too far away. Our researchers at CSAIL take pride in imagining the impossible and then making that impossible possible.

I personally feel so proud of the tradition at CSAIL, which goes back to 1963 and 1956, when the world looked very, very different than it does today. But as for how I ended up being the head of CSAIL, I have to tell you that I have so much admiration and respect for my organization. In 2012, I was getting ready for a fantastic sabbatical. I was going to do a lot of things on my sabbatical. Then the CSAIL director role opened up, and that gave me pause. I interviewed for the role, and I was offered it.

Because I have so much admiration for my colleagues and for our mission to invent the future of computing and make the world better through computing, I decided to trade my great sabbatical for the opportunity to work even more closely with my brilliant colleagues at MIT, who are advancing computing and inspiring so many applications and businesses. The mission is really to be the prophet for the future of computing, to educate the best students in the world, and to make the world better through computing.

Just imagine: if Tony Stark were a student today, he would be our student. Now, how could I not seize the moment and jump on that opportunity? That’s how I ended up in my current role, and every day is very inspiring and mind-bending because of all the activities around me.

 

Jeremie Capron:

I think what you said about computing extending into the physical world really resonates with what we are trying to do at ROBO Global. The premise behind ROBO, the robotics index, and THNQ, the artificial intelligence index, is really this vision that robotics and machine intelligence are the next technology platform. It’s a technology platform in the sense that it can be applied to every industry and every market, and it’s happening now. So in a way, would you think it’s fair to compare that to what happened with the internet revolution?

When we started connecting computers together and sharing information on a very wide scale, the impact on our daily lives and on economic life, of course, and on all aspects of business was tremendous. The value creation has been enormous. If you go back to 1997, internet companies represented 0% of the S&P 500. Today, they are more than 10%, so we’re talking about trillions of dollars of value creation there. Do you think it’s fair to compare robotics and AI to the internet in that way?

 

Daniela Rus:

Absolutely, and computing, absolutely. I mean, just think about the fact that 25 years ago, computers were reserved for experts. Computers, networking, the internet: all of that was something that computing geeks did. Computers were so large and expensive, and you really needed expertise in order to know what to do with them. Now computing and the internet and the sharing of information are things that everyone does. We take it for granted, and all of this happened in a short 20 years.

So to me, this is an inflection point, because we live in a world that has been so changed by computation. This raises a very interesting question. What can we do beyond computation in this world so changed by computation? What would it look like with robots and AI and machine learning helping people with cognitive and physical tasks? We have made so many advances on the hardware side of things. We have made so many advances on understanding data. We have made so many advances on algorithms, and you need all three of them.

Because you need the body of the machine, and then you need the brain of the machine, and the brain of the machine needs data. Today, we are really at an inflection point in bringing autonomy, automation, AI, machine learning, and intelligence everywhere there is a need for support with physical tasks and with reasoning tasks.

 

Jeremie Capron:

Now, I want to remind everybody: you can type your questions into the Q&A at the bottom. We’d really be happy to hear what you want to learn from Daniela today. Feel free to go ahead and type them in, and I’ll keep a close eye on that. But Daniela, CSAIL and MIT are based in the Boston area. I think for a lot of people, when it comes to AI, you tend to think Silicon Valley is the place. But in the last decade or so, we’ve seen some new clusters of robotics and AI innovation emerge in the US, I think, around Boston, but also around Pittsburgh. Tell us a little bit about what’s happening in Boston and why those clusters have emerged.

 

Daniela Rus:

Well, Boston is a hotbed for robotics and AI startups, and the activity is extraordinary. I will tell you that a decade ago, most of our students would graduate and want to go either into academia to become professors or to work for big tech. I would say that today, the majority of our students are interested in the entrepreneurial path, because they can see that they are able to make a difference in the world now with what they know, and what they know is so valuable. We have a lot of universities in the Boston area. I mean, there is MIT, there’s Harvard, there’s BU, there’s Northeastern, and lots of other universities.

So the source of talent is extraordinary. Now, on top of that, Massachusetts has purposefully decided to put in place programs that support, in particular, the robotics and AI entrepreneurial ecosystem. We have created MassRobotics; I’m on the board of directors of this organization. The purpose of MassRobotics is really to facilitate the starting of robotics companies. MassRobotics offers a variety of practical services to startups: equipment, laboratory space, company space, plus the ecosystem that connects these companies with the VC world, with the academic world, with the entrepreneurial world, and with the big tech world.

So through MassRobotics, we have created an extraordinary community, and the activities have truly mushroomed. There are also many activities that are centered on AI. Well, I’m not telling you any news when I say that with data and machine learning, so many capabilities that were not possible before are now possible: capabilities related to predicting from what has happened in the past, understanding what is going on now, and deciding what to do next. These general applications are impacting all industry sectors quite broadly. So I just feel so fortunate to be alive and to be part of the development of this field at this exciting point in time.

 

Jeremie Capron:

Now, what are some of the most interesting projects that you’re working on at MIT today? We’ve talked over the years, and I think the breadth of the research that you do at CSAIL is quite impressive. But if you were to select maybe a handful of examples, the most exciting projects that you are working on today, we’d love to hear about them, Daniela.

 

Daniela Rus:

Yes, of course. I have pulled together a few videos to show you what I’m talking about, but before I show you the latest results from our lab, I want to say something more philosophical. I want to observe that the first industrial robot, called the Unimate, was introduced in 1961. That robot was introduced for pick-and-place operations. Today, 60 years later, the number of industrial robots has reached tens of millions. These robots are masterpieces of engineering that can do so much more than humans can. Yet they remain isolated from people on the factory floors, because they’re large and heavy and dangerous to be around.

So what I would like is to bring robots into human-centered worlds, to build robots that are safer to be around. This is where the field of soft robotics comes in. Now, if you think about industrial robots and organisms in nature, there’s a very stark difference, right? By comparison, organisms in nature are soft and safe and compliant and dexterous and intelligent. I mean, just think about what an octopus can do with its body, or what an elephant can do with its body. I’d like soft robots that can do the same. I would like to rethink our notion of a robot.

Because I believe the past 60 years have defined the field of industrial robots and empowered hard-bodied robots to execute complex tasks in constrained industrial settings. These robots have been primarily inspired by the human form: they’re humanoids, or robot arms, or boxes on wheels. So what I would like for the next 60 years is to see an era that ushers in robots in human-centric environments, and a time with robots helping people with physical tasks.

I would like to observe that if we look at the natural world and the animal kingdom, and even at the built environment, with all their diversity of form, I think we can broaden what we think a robot is: take inspiration to allow robots to come in any form, shape, or size, and to allow robots to be made out of a wide range of materials: wood, plastics, paper, ice, food, metals. All of these materials are available to us to make machines.

In my lab, we are developing computational approaches and ideas for designing robots that are made out of such a variety of materials. I’m trying to share my screen to show you some pictures, but the host has disabled sharing. I can continue to talk, but if I’m able to share, then I can show you some concrete things that we are working on. When it comes to robots for human-centered environments, you will see that the vertical application potential is also so much broader. It’s so much bigger.

The market size will go into the trillions. In fact, this is what many of the projections are. Ah, I can share now. So let me share. Let me show you a few images, and I want to go back to the childhood dream. Remember when Mickey summons the broomstick in The Sorcerer’s Apprentice? Well, today you don’t need magic to make this happen. You can turn the broom into a robot, and you can make any object in our physical world into a robot. Here’s an example where we have developed an automatic way of taking a picture and turning it into an actuatable machine.

So this is a simple example that started with a picture of a bunny. Through this automatic design process, we have made this robot. Now think about this robot as the broom. The broom has the ability to move itself, and the human can then control the broom through a new class of intuitive human-machine interfaces, like you see here, where the human is able to control the robot with the arms, just like Mickey controls the broomstick with his body.

Then, if you have that capability, you can get to a place where robots can become teammates, very naturally adapting to what people need. Here you can see a robot that has not learned this particular task, but has learned how to generally follow the lead of a human in installing cable. Cable installation is a really challenging activity. You can take these intuitive interfaces, connect them to gestures, and create all kinds of things. You can now imagine a world where clothing becomes robotic.

In this case, we have a sensorized glove that is able to understand sign language and really go from gesture to words through sign language. So you see, machines are getting closer and closer to people in terms of the development of these intuitive interfaces. Now we can have soft machines where we can do extraordinary things with their soft endpoints. Here you can see a soft robotic gripper. The robot itself is not soft, but the gripper is soft. Just look at how adept this robot is at handling objects that are really difficult to model, in fact impossible to model, like grapes and broccoli and lettuce.

You can connect this idea to a new wave of applications: you can imagine warehousing robots, grocery store robots, and packing with robots at a level of automation that has not been practical before with hard-bodied robots. The same idea can be used to get robots to interact more closely with flexible objects in the physical world. So here is a robot that uses foundational knowledge about the modeling of fibers and has an adaptive controller that can do operations requiring a great deal of adaptation and customization, like brushing hair.

This solution is able to handle any type of hair. So you can see, some examples are beginning to lead to a world where robots are coming into our physical world to do more physical tasks. In the process of developing these robots, we observed that hard-bodied robots are very strong, while most soft-bodied robots don’t have a high payload because of the nature of their actuation. But if we can somehow create soft-bodied robots that have inner skeletons, then we can have the best of both worlds.

We can have this very compliant interaction with the world that allows us to pick up grapes without knowing a model of what the grape looks like, but we can also get strength. So in my group we are developing a new class of materials we call rigid-and-soft materials. These rigid-and-soft materials have skeletons inside, and we can control them very accurately. Then we can build applications that enable these robots to do delicate tasks. I mean, here is a very quickly created robot hand that can do operations that are so difficult for traditional hard-bodied actuators.

We will see so much more in the space of manipulation. Actually, manipulation is an area of robotics that has not been as developed as the mobility part, because we don’t have the same advancements on the hardware side.

The other thing I wanted to say is that robotic solutions require two parts. They require the body, and we’ve seen some examples of what you can do with the body, but they also require the brain. We also need AI to control the robots to do what they’re meant to do. I note that today’s AI solutions have a huge carbon footprint. For instance, training a small transformer with only 213 million parameters releases 626,000 pounds of carbon dioxide into the atmosphere. That is just the training part. Look at how this compares with the carbon footprint of a human life, an American life, a round trip from New York to San Francisco, and a US car: the training of this transformer is equivalent to the lifetime emissions of five cars. So is that really needed?
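The arithmetic behind that five-car comparison can be sanity-checked in a couple of lines. The training figure is the one quoted in the talk; the per-car lifetime figure below is an assumption (the commonly cited estimate of roughly 126,000 pounds including fuel), not a number from the talk:

```python
# Rough sanity check of the training-emissions comparison above.
# training_emissions_lbs comes from the talk; the per-car lifetime
# estimate (including fuel) is an assumed, commonly cited value.
training_emissions_lbs = 626_000
car_lifetime_emissions_lbs = 126_000

cars_equivalent = training_emissions_lbs / car_lifetime_emissions_lbs
print(f"{cars_equivalent:.1f} car lifetimes")  # roughly 5
```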

Well, we have developed deep neural network solutions for complex tasks. Here you can see a robot car that was built at MIT, and it does pretty well. This car was trained in the city, and the robot car does pretty well driving on a completely new type of country road using a deep neural network solution. This is exciting. It’s an end-to-end learning solution. It’s extraordinary. But now, if we look inside the decision engine of this vehicle, this is what happens. Let me orient you.

The bottom left is the attention map. This is where the decision-making engine is looking in the environment to make a decision. Above it is the live camera input stream. The bottom right shows the map that the vehicle traverses. Then you have small boxes that are convolutional layers that process the input stream. The decision-making engine is the big rectangular box in the middle. You see these blinking yellow, green, and blue lights that show how the neurons fire.

It’s almost impossible to see patterns, because there are over 100,000 neurons and half a million parameters involved in these decisions. Also, take a look at the attention map and just see how noisy it is. The system is looking all over the place to make decisions. So the question we’re asking is: can we do better? Can we create more compact solutions? Can we imagine machine learning that is much more causal and interpretable?

So, using some biological inspiration, we have developed a new model for machine learning where we essentially changed what the neuron looks like. Instead of computing a step function, which is what happens in deep neural networks, we compute a differential equation with liquid time constants. With this model we can now solve the same problem, learning end to end how to drive by watching humans, using only 19 nodes. The 19 nodes are so much more understandable: we can actually see the firing pattern and extract the decision-making of the solution.
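For readers curious what “computing a differential equation” instead of a step function might look like, here is a toy sketch in the spirit of a liquid time-constant neuron. This is an illustrative simplification, not CSAIL’s actual model or code; the equation form, constants, and names are assumptions for exposition:

```python
import math

def ltc_neuron_step(state, inputs, weights, tau=1.0, dt=0.01, steps=100):
    """Toy liquid time-constant (LTC) style neuron (illustrative only).

    Instead of applying a static activation function once, the neuron's
    state evolves under an ODE whose effective decay rate changes with
    the input drive (hence "liquid"). Simple Euler integration.
    """
    for _ in range(steps):
        # Synaptic drive: weighted inputs through a saturating nonlinearity.
        drive = math.tanh(sum(w * x for w, x in zip(weights, inputs)))
        # Illustrative LTC-style dynamics:
        #   dx/dt = -(1/tau + |drive|) * x + drive
        # The |drive| term makes the time constant input-dependent.
        dxdt = -(1.0 / tau + abs(drive)) * state + drive
        state += dt * dxdt
    return state

# The state settles toward an input-dependent equilibrium rather than
# jumping to a fixed activation value.
print(ltc_neuron_step(0.0, inputs=[1.0, -0.5], weights=[0.8, 0.3]))
```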

Also note how clean the attention map of this solution is. The attention map is on the horizon and on the sides of the road, which is what people do when they make decisions about how to steer the car. The point is that there are so many opportunities to also improve the AI side, and this improvement can enable cognitive applications but also makes a big difference in physical applications. In the interest of time, I’m going to skip ahead and show that the same solution that can be applied to cars can also be applied to anything that moves.

So here is a robotic boat that we have recently deployed in Amsterdam. We call it Roboat. The system has exactly the same autonomy package as the autonomous car. The low-level aspect of control has to be different, because this vehicle does not move on a solid road and essentially has to adapt to its weight and to the waves. But ultimately the high-level piece, the autonomy, is the same as the car’s. With our current understanding of autonomy, we can make anything that moves into a robot.

That is really, really exciting. We can also expand our capabilities, even for road robots. Here, we’re showing that robotic driving solutions, which traditionally only work in dry weather (which is why everyone deploys in Texas and Arizona), can be expanded to work in snow and in rain by more or less thinking of a different way of making the map of the environment. Most traditional solutions use maps that are built by laser scanners and cameras that look above the road. This solution uses a ground-penetrating radar that looks below the surface of the road, at the ground and the textures of the ground. So with these kinds of ideas, we are really trying to push the envelope on what is achievable with the state of the art.

 

Jeremie Capron:

That is fascinating. Thank you for that, Daniela. Again, I think the breadth of the projects you’re working on is just stunning. I was very impressed with the progress in soft robotics manipulation, because I recall that about two years ago you were showing some other demonstrations, and the progress with the finger-type manipulator is quite impressive.

 

Daniela Rus:

Well, I just want to say that it is breadth, but the projects are all important and they’re all related. In robotics you need the body of the robot, because the robot will only be able to do what its body can do. So that body has to be capable. We have to think about that, and about what we want of it. But we also need the brain, because without the brain, the body would be just a mechanism.

Then, in order to use the machines, we also have to think about how people interact with them. We’re dreaming about a world where anybody can use a robot without being an expert. That means we really need the same kind of intuitive interactions that allow people to surf the web, but now we need those interactions to allow people to use robots.

 

Jeremie Capron:

Well, look, Daniela, I see a lot of really good questions coming through, and I want to start addressing some of those. You talked about the inflection earlier, and I see a few questions around what the trajectory is and what has been the impetus for this inflection, technology-wise. I’m going to share my screen for a couple of minutes here and go to the ROBO Global website, because I think it’s important to understand that the stock market is also telling us this inflection is here. What I’m showing here, let me scroll down a little bit, is the ROBO index, which is an index made of the best-in-class robotics and automation companies from all around the world, and which we started in 2013.

You can see the inflection just around 2016 and 2017, when a lot of those companies started benefiting from very strong tailwinds in terms of adoption of their technology, and really the scaling up. What we’ve noticed is that it’s happened across the board: the enabling technologies, of course, from compute to integration to actuation, and some of the componentry and hardware around autonomous systems and robots, but also certain vertical applications that have really taken off in the last few years.

So I want to go back to that, Daniela, and ask you, because the audience wants to hear from you. Of course, everybody wants to know: what’s the next big thing? But before I let you answer that, I want to show the way we approach it as investors. We think it’s really important to cover the entire value chain in order to capture the growth and returns. It’s a much more reasonable approach than trying to put on a handful of concentrated bets on specific applications or specific technologies. The way we do that is that we’ve mapped the industry across 11 different sectors, which you can see here on your screen, hopefully.

It goes from the enabling technologies to specific vertical applications: from logistics and warehouse automation to manufacturing, where it all started some 50 or 60 years ago now, but also autonomous systems, food and agriculture, the energy sector, consumer products, etc. With that, I want to pass it on to you, Daniela, and have you answer that big question. What’s the next big thing? Where do you see AI and robotics really gaining traction over the next five to 10 years? Are there specific examples of industries or applications that you’re quite positive on?

 

Daniela Rus:

Yeah, absolutely. Thank you for that. I’m very bullish about a number of sectors. First of all, I think that there is a lot to be done with respect to mobility. I don’t think we will have robotaxis anytime soon, but autonomy for mobility can be deployed in so many important applications. I’m a really big proponent of what I call safe-speed mobility: autonomous vehicles that move more slowly in more structured or less complex environments. And we have a really big supply chain problem right now.

Well, autonomous vehicles can completely solve this problem, and we will see a lot more activity in this whole area of logistics, whether it’s automating port operations, certain parts of trucking, factory yards, or operations inside the factory. There are already exciting opportunities and startups actively working in the space.

I’m also very bullish about beginning to use soft robot hands to enable more automation in manufacturing, and to enable people and machines to be part of the same factory process. So I’m really a huge believer in packing with soft hands, sorting with soft hands, essentially doing manufacturing automation in less structured settings than the industrial robotics setting. Then there is a lot of work on the AI side. We see a lot of work around data companies: companies that prepare data, massage data, and train models for a whole variety of applications.

We see data aggregators. We are beginning to see companies that are looking at ensuring that the data used to train products is the right data. Because, as you may know, the performance of a machine-learned model is only as good as the data used to train it. If the data is bad, the performance will be bad. If the data is biased, the performance will be biased. That means we really need solutions that analyze the correlation between the data used to train the model and the uncertainty of the model.

When the uncertainty is too high, these new companies are able to identify where the model needs new data and actually synthesize that data in order to make the model better. I also think that in the near future we’re going to see a lot of activity on the assurance of AI systems, because at the moment the activity in that space is ad hoc. So, in summary: lots of applications where we have safe-speed mobility for logistics in ports, factory yards, factory floors, shopping carts, hospital delivery systems; all of these are being enabled.

Support of logistics with a division of labor between people and robots, where maybe the robots do the movement part, which is easy: they fetch things to people, who do the manipulation part. Broader adoption of the Amazon model, which currently does that. But I also see a lot of opportunity in AI, especially on the side of preparing data so that more people can use data effectively. That is applicable across the board to all industries.

 

Jeremie Capron:

I think if you look at how the industry, or the technologies, are represented today in terms of public companies, so the more mature ones, not the startups, but the ones that have already scaled to, say, at least $50 million in annual revenue and that have gone public, the structure of that market today really reflects what you just described, to some extent. You have about half in enabling technologies and half in providers of turnkey solutions to automate specific industries.

So logistics and warehouse automation, we find, is around 10 to 12% of the pie. Then healthcare automation, including surgical robotics and things like that, is another 10%. Then manufacturing is still the biggest: factory robotics, automotive manufacturing, electronic device assembly, and things like that. Now, I see quite a few other interesting questions around the technology bottlenecks: which bottlenecks have been overcome and really became a catalyst for this inflection? And, taking a forward-looking approach, what are some of the technologies or hurdles that you’d love to see magically solved today?

 

Daniela Rus:

Well, what I would really like to see magically solved is the manipulation problem. I would really love to see robot hands that have the same sensory capabilities that the human hand has. We just don’t have that. I mean, with soft robot hands we’re trying to go along that path, but we are not really there. So I would say that in every aspect of the technology we have seen advances that have enabled progress, but challenges remain.

We have seen tremendous advances in fast and reliable hardware, but we still have a long way to go from the point of view of manipulation in particular. We have also seen fast progress on sensors, but the sensors have to be miniaturized and the cost has to come down. Here I’m primarily referring to LIDAR sensors, or to the ground-penetrating radar sensor we used to demonstrate that mobility will be possible when it snows and rains.

Because with a sensor that does not depend on visibility, we can still get a good sense of localization by looking down instead of looking up. Data has enabled a lot of capabilities, and it’s really extraordinary. I mean, it’s extraordinary to think about how this data-centric computation has been adopted in the recent past and what the options are. But there are still a lot of challenges around data. For robotics in particular, it is still hard to get the right kind of data.

So how do we solve that problem? In a sense, text data is readily available on the internet, and there is a lot of imaging data available, but applications that need different kinds of data need seamless infrastructure to collect that data. Then, I will tell you, I believe we use too much data for AI engines. We really need to rethink the data side, because right now it just costs too much in data, in human labeling, and in electricity to train with the methods that we have.

So finding solutions that reduce the amount of data and computation required for the learning process remains a bottleneck. I mean, how many pictures of dogs or cats do you need to look at in order to recognize those objects? Well, the research community is advancing in multiple directions. It is developing few-shot learning, where the objective is to synthesize the right features so that training is fast. It is looking at techniques based on coresets, where the idea is to select the data items that are really most informative for the learning.
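The coreset idea mentioned here can be illustrated with a minimal sketch: greedy k-center selection, which keeps only the points that best cover the dataset so a model can train on far fewer examples. This is a generic illustration, not code from CSAIL; the toy 2D data, the function name, and the choice of k-center as the selection criterion are all assumptions for the example.

```python
import numpy as np

def kcenter_coreset(X, k, seed=0):
    """Greedy k-center selection: repeatedly pick the point farthest
    from the points chosen so far, so the subset covers the dataset."""
    rng = np.random.default_rng(seed)
    n = len(X)
    chosen = [int(rng.integers(n))]               # start from a random point
    # distance of every point to its nearest chosen point
    dists = np.linalg.norm(X - X[chosen[0]], axis=1)
    while len(chosen) < k:
        nxt = int(np.argmax(dists))               # farthest uncovered point
        chosen.append(nxt)
        dists = np.minimum(dists, np.linalg.norm(X - X[nxt], axis=1))
    return chosen

# toy data: 1,000 points, keep only 10 representatives for training
X = np.random.default_rng(1).normal(size=(1000, 2))
subset = kcenter_coreset(X, k=10)
print(len(subset))  # 10 indices instead of 1,000 points
```

In a real pipeline the distance would be computed in a learned feature space rather than on raw inputs, but the selection logic is the same.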

It’s looking at various other types of active learning that are able to learn online. Then the other big issue with AI that remains a bottleneck is the interpretation and the explanation of the decisions that come out, because deep neural network engines are rooted in decades-old technologies that are enhanced by data and computation, and they need to be really large. For that reason, it’s difficult to understand their inner workings. But with new efforts on the development of models and algorithms, we are beginning to see the possibility of more compact models.

We’re beginning to see the possibility of human-level explanations and interpretations that can be extracted from these engines. We’ve come a long way on hardware and on data and everything that is enabled by them, but we still have a long way to go, primarily in the cost of sensors, in the space of manipulators, and in the space of data, computation, and machine learning.

 

Jeremie Capron:

There’s a number of questions around superheroes and Tony Stark, who you referred to earlier, and Elon Musk, maybe the modern-day Tony Stark, and Tesla in particular, which has been a very controversial company and stock in recent years. Certainly from our perspective, a phenomenal stock, up more than 10 times in the last few years alone. Tesla is a company we’ve included in our artificial intelligence portfolio, based on the view that there is some degree of technology leadership and market leadership around advanced driver assistance, some form of autonomous driving capabilities, and the data collection network around the fleet of vehicles.

But the questions I see are more about the Tesla Bot that was recently announced. People want to hear your thoughts on Tesla and the Tesla Bot.

 

Daniela Rus:

Well, Tesla is a very visionary company, and it’s really creating a tectonic shift in the industry, for sure. I will tell you, if you have a Tesla with the Autopilot, please don’t go to sleep, despite what you may read in the press, because the Autopilot does not deliver safe mobility. I will tell you that there are simple aspects of driving, like staying in lane or following the car in front of you, but not all driving is like that. You just never know when some new condition arises. For your safety, please stay alert, even though the car may offload some parts of driving and may lower the cognitive load required to drive.

We are very far from level five autonomy. We do not have robotaxis. We do not have full autonomy. The Tesla Autopilot will give you a little bit of support, but with no guarantees. You really have to be mentally present, which is not to say that the capability is not extraordinary. It’s just to be aware of what it can and cannot do. The Tesla Bot project is, again, an audacious project with a lot of great opportunities. I have no doubt that it will fuel a lot of activity in the space, at Tesla and elsewhere. It’s a project that attracts attention. It captures people’s imagination. We all want more capable machines around us.

 

Jeremie Capron:

Got it. I want to shift gears a little bit and ask you about the talent aspect of robotics and AI. We all know there’s a very tight labor market today in the United States and other places in the world. That’s true for service jobs and manufacturing jobs. But what we hear from the leaders of the companies we invest in is that there’s also a shortage of data scientists and AI and robotics talent. What do you see from your perspective at MIT? Can you comment on the trends, and what should we and those companies expect in terms of the availability of such talent going forward?

 

Daniela Rus:

Absolutely. You might want to know what it would take for you to hire your Tony Stark, right? Or maybe you want to know if it’s important to get the highest performers. What I can tell you is that as AI becomes more mainstream, winners and losers will be determined by the level of access they have to AI and data technologies and by their knowledge of how to leverage them. I like to quote a study that was conducted among several hundred thousand researchers, entertainers, and athletes. The study found that high performers are 400% more productive compared to their average counterparts.

So it does make a difference if you get the highest performers. What’s even more remarkable is that the seismic shift in performance occurs in highly complex occupations, such as AI. So in the AI occupations, the highest performers are 800% more productive. You probably really want to get your Tony Stark.

Now, what are the practices? Well, I told you that some time ago, our students wanted to go to work for big tech or for universities. But these days, our students think differently about the world. They really want to have an impact. So positioning the purpose of your product and your company in the right kind of light for your AI candidates will make a difference. AI candidates don’t want to merely spend their time crunching data or working on lackluster projects. They want to feel invigorated. They want to feel like they’re making an impact and changing the world for the better.

So it’s important to highlight what is exciting about your work, what unique opportunities your data and your solutions present, and how prospective employees can take advantage of what is unique in your company to change the world by joining it. These angles should be part of the hiring process for any company that wants to attract top talent. I will tell you that many companies today aim to hire students on the last day of their internship. That’s a practice that works really well with foreign students, because that’s a ticket to stay in the United States.

But when you get a PhD student who comes in before the completion of the degree, and they get an offer, this offer usually matches what the student did during the internship, not what the student is capable of. So often these people go to the company, but then they get bored, because the job they get on the basis of their internship contract is not really what they’re dreaming about. So if your company has this practice, it’s really important to consider the person’s progress and potential. And once you have this critical talent, consider how you can make the most of it. Don’t have these people just sit around tuning model parameters, because they will find that boring.

I would say that partnerships with universities really accelerate the ability of companies to recruit top AI talent, because through these partnerships companies get to know what is happening at the university and what is coming around the corner. Students get to know the company. That’s another opportunity. In AI there is great need for applying core technologies to specific domains. So thinking about how to put AI experts and domain experts together as part of the development process will accelerate getting the solution to the market.

The other thing I would say is to prioritize diversity. Right now, women comprise only, let’s say, 15% of the workforce in the AI sphere. In particular, at big tech companies like Facebook and Google, there are between 10 and 15% women. I will tell you that the MIT EECS degree has more than 50% women among undergraduate majors. So think about how to attract those extraordinary researchers and scientists into your companies. Online training is a way to bring AI knowledge and technology to your company.

But once you bring the skill into your company, a big issue is retention. So you should not only try to attract people, but keep them. Make sure that they’re happy and they have rewarding jobs that give them a sense of accomplishment, a sense of doing good for the world.

 

Jeremie Capron:

Thank you, Daniela. If you have a couple more minutes, I’d love to finish off with one question around farming, agriculture, and the food industry. Last month at CES, John Deere announced the launch of a fully autonomous tractor that’s able to do all sorts of things in a fully automated way. I think that’s a very important advancement for the agriculture industry, even though they’ve been experimenting with those technologies for more than a decade now, assembling the various bits and pieces together in a very acquisitive way over time. But more broadly, where do you see robotics and AI supporting and helping us grow food sustainably for the world?

 

Daniela Rus:

I am very bullish about this particular angle for robotics and AI. I think that there are so many opportunities. Autonomous agricultural vehicles are, in some sense, in the right niche. Agriculture is a place where our current state of the art in autonomy is applicable. Things are not moving very fast. There is enough time to process perception. The environment is quite structured. So making agricultural vehicles autonomous is a great domain. It’s challenging because those vehicles have to move on soil that is sometimes wet and not even. So it’s not quite the autonomous car solution.

But we have understanding, and we have solutions, like you have seen with the John Deere product. In addition to that, bringing more intelligence into harvesting and into sorting and packaging the harvested products is also extraordinary, especially in the case of delicate produce: lettuce, grapes, strawberries. Picking these products really requires delicate interaction. That’s why humans have been so good at it. But with the most recent advances in soft robotic hands, that is a fantastic target for progress.

We are going to see vertical farming. I’m really excited about growing things indoors. In fact, in my house, I have actually experimented with how to bring the right lighting and the right spectrum to encourage growing produce inside the house. I’m very excited about that. I’m also very excited about livestock management with the use of AI and robotic technologies. I don’t know if you’re aware, but about 20 years ago, I had a project to herd cattle with virtual fences. So we developed a little hat that the cattle would wear, and this hat gave us knowledge of the position of the animal in the field.

We had music and sound as a stimulus to steer the animal so that it would go where we needed it to go. With these kinds of technologies you can keep track of the animals’ whereabouts. You can ensure that products are organic. Indeed, you can ensure that if an animal is ill, the animals that had been in contact with it can be traced quite accurately. So we would have less waste if action and intervention have to be taken.
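The virtual-fence idea she describes can be sketched as a simple geofence check that maps an animal’s tracked position to an audio stimulus. This is a toy illustration under assumed details: a circular fence, made-up coordinates in meters, and hypothetical cue names; the actual project’s hardware and control logic were of course more involved.

```python
import math

# Hypothetical circular virtual fence: center and radius in meters.
FENCE_CENTER = (0.0, 0.0)
FENCE_RADIUS = 100.0

def distance(p, q):
    """Euclidean distance between two 2D points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def fence_stimulus(position):
    """Return an audio cue when the animal nears or crosses the
    virtual fence boundary; return None when it is well inside."""
    d = distance(position, FENCE_CENTER)
    if d > FENCE_RADIUS:
        return "loud_cue"        # crossed the boundary: strong stimulus
    if d > 0.9 * FENCE_RADIUS:
        return "soft_cue"        # approaching: gentle warning sound
    return None                  # well inside: leave the animal alone

print(fence_stimulus((50.0, 20.0)))   # None: inside the fence
print(fence_stimulus((95.0, 0.0)))    # soft_cue: near the boundary
print(fence_stimulus((120.0, 0.0)))   # loud_cue: outside
```

The same position stream that drives the stimulus also gives the traceability she mentions: logging each animal’s location over time is what lets you reconstruct which animals were in contact.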

I think that there are so many opportunities, and it’s a very important area. It’s an area where labor is lacking. People don’t want to do those jobs. So figuring out how to bring machines to replace workers or to address the shortage of workforce is super important. I get a lot of my produce and my food from a local farm in Massachusetts. I go there once a month and I get my box with all the packaged products. Every time I go, the owner asks me, “Daniela, when are you going to make me a robot that can help automate the slaughterhouse, the smokehouse, the tending of the animals?” There is a lot of need there.

 

Jeremie Capron:

All right. Well, I think we’re well past the hour, so we’re going to have to wrap it up. I want to thank you, Daniela, so much for sharing your time with us today. I know you’re on the road and it’s a busy day for you, so I greatly appreciate it. I want to thank our audience and ROBO Global clients for being on this webcast. Feel free to reach out to us via email or through our website at roboglobal.com. We look forward to speaking to you again soon. Thank you all and have a great day.

 

Daniela Rus:

Thank you, Jeremie. Thank you all for joining us and have a nice day.


