A Conversation with Illah Nourbakhsh on The Promise of AI


Prof. Illah Nourbakhsh: I guess I operate on both the innovation side of what we do with robotics and artificial intelligence and on the funding side. I’m a trustee of the Benedum Foundation here, and we do a great deal of funding in West Virginia and Pennsylvania. I sit on a couple of school boards, and I also chair the board of the Environmental Health Project, which deals with how we use technology to really understand air pollution and its toxicological impacts on marginalized communities in rural and urban areas. How I got started is complicated and messy. I actually was a comp lit major in college.

Then I got interested in organic chemistry and the Genome Project, and I actually worked on protein structure prediction, using AI to try to predict protein structure back when X-ray crystallography was really expensive. Following that work, I got excited about the AI tools I was using for protein structure prediction in the Genome Project, and that’s when I started using them on robots that traveled around campus and delivered things. Once I got going with that, I came to a professorship at Carnegie Mellon, where the challenge became, how do we invent new robotic and AI technologies that change the world in positive ways, that have prosocial consequences for humanity? That’s what I’ve devoted the last 25 years of my life to, and that takes us to today.


Prof. Illah Nourbakhsh: One of the biggest challenges with AI, and I’ve talked to the heads of banking institutions about this for years at the World Economic Forum, is that there’s often a mismatch in people’s understanding of where AI might make mistakes, where it might not perform up to snuff, or rather, how the way in which it’s not ideal is just different from the way humans aren’t ideal. Banking’s a great example because banks are constantly worrying about the question, how do I make my credit decisions? When do I allow AI systems to make credit decisions for me, for instance, to approve or deny a mortgage application? When do I have humans do it, and which is better? How good does the AI have to be for me to switch over from humans to AI systems? I constantly see this question, especially from the captains of the banking industry.

The trick here is, first of all, we need to understand something about AI. AI can be really good at numbers games. But when it makes mistakes, the mistakes are nothing like the mistakes humans make. A simple example that’s not about banking, but about driving. Some viewers will have read about this, and those of you who haven’t will be a little bit gobsmacked by it. There’s a set of researchers in France who try to show how machine vision is really awesome but also makes weird mistakes that we could never have possibly predicted. They looked at the machine vision systems that autonomous car companies use to recognize stop signs. The system sees a stop sign and says, “That’s a stop sign.” Stop signs are obvious, right? They’re big red octagonal things that say stop on them. It doesn’t get any better than that.

If you look at how well autonomous car systems can detect stop signs, it’s about 98%. It’s really good. Now, is 98% good enough? Would you actually trust your child to cross the road in front of a stop sign if a robot has a 98% chance of seeing it? Now you’re thinking, “Well, I don’t know. How good are humans? Right? How often do humans run stop signs because they’re looking down at their phone?” Because that’s the comparison point, in a way. But what they showed in France is that by taking four tiny little squares of electrical tape and putting them in four places on a single stop sign, to you and me, it still looks exactly like a stop sign. We just think, “Well, some dummy put four pieces of graffiti on the stop sign.”

To all the algorithms they tested, with 98% confidence, it looks like a 45-mile-per-hour speed limit sign. Now that’s crazy, because we humans don’t intuitively understand how a machine that smart could take this red octagonal stop sign that still says stop on it and decide it looks like a 45-mile-per-hour speed limit sign. The difference there is that it’s an alien technology. It’s not us. It doesn’t work the way we work. The kinds of mistakes it makes are different. We make a mistake and run a stop sign because we’re looking at our phones. The AI is never looking at its phone. But if there’s something wrong with the stop sign, it might see something that we could never have imagined it would see. That is a metaphor for what happens in any field with AI.
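The stop-sign story describes what researchers call an adversarial example: many tiny changes that each look like noise to us but line up with the model's sensitivities. As a hedged sketch of the idea, with an invented toy linear classifier rather than the deep networks used in the actual research, a small aligned perturbation can flip a confident decision:

```python
import numpy as np

# Toy stand-in for the stop-sign result: a tiny linear classifier over
# pixel-like features. All weights and inputs are made up for illustration;
# the real attacks target deep networks, not linear models.
rng = np.random.default_rng(0)
w = rng.normal(size=100)      # classifier weights: sign(w @ x) is the predicted class
x = np.sign(w) * 0.1          # an input the model confidently labels +1 ("stop sign")

score = w @ x
print(f"original score: {score:.2f} -> class {int(np.sign(score))}")

# Adversarial step: nudge every feature slightly *against* the weight sign
# (the fast-gradient-sign idea). Each individual change is small, like one
# tape patch, but together they all push in the model's weakest direction.
eps = 0.2
x_adv = x - eps * np.sign(w)

score_adv = w @ x_adv
print(f"perturbed score: {score_adv:.2f} -> class {int(np.sign(score_adv))}")
```

The point of the sketch is the asymmetry the professor describes: to a human, `x` and `x_adv` differ by a uniform smudge, yet the classifier's decision flips completely.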

For instance, in the banking industry, you can rip out racial information and assume that your system is no longer showing racial bias, and yet the system can use crazy-cakes demographic information you weren’t even aware was in the data, such as names, to figure out, “Oh, this person is African-American. We’re going to deny their mortgage.” Then you look at the data afterward and go, “Oh my God, it’s denying all the African-Americans their mortgages, and I never told it to do that.” It’s like, “Well, yeah, you didn’t tell it to do that, but you never told it not to do that, and it doesn’t even know what race is.” It’s a computer. It’s an alien technology. For it, confusing the stop sign with the speed limit sign, confusing a racist decision with a non-racist decision, it doesn’t know any better.
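The mechanism here is proxy bias: removing the protected column does nothing if another feature is correlated with it. A minimal sketch with entirely synthetic data (the group labels, the name-derived proxy, and the incomes are all invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000
group = rng.integers(0, 2, n)                   # protected attribute (removed from training data)
name_feature = group ^ (rng.random(n) < 0.05)   # proxy: tracks the group 95% of the time
income = rng.normal(50, 10, n)

# Historical (biased) labels: past approvals secretly depended on the group.
approved = (income + 20 * (group == 0) * 1.0) > 55

# "Fair" model: the protected column is gone, but the proxy stays in.
# Fit a least-squares score on [income, name_feature, bias] only.
X = np.column_stack([income, name_feature, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, approved.astype(float), rcond=None)
pred = X @ coef > 0.5

rate0 = pred[group == 0].mean()
rate1 = pred[group == 1].mean()
print(f"approval rate, group 0: {rate0:.2f}")
print(f"approval rate, group 1: {rate1:.2f}")   # large gap, despite "removing race"
```

The model was never told about the group, yet it reconstructs the biased historical pattern through the proxy, which is exactly the audit surprise described above.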

That’s something we can often lose track of: where the boundaries are and where the errors are. The trick becomes how you put in the right ethics, checks, and balances to make sure the system, in retrospect and under audits, is actually fair, balanced, and equitable, and to make sure the mistakes it makes aren’t going to take us to a really dangerous place, where we let these systems drive all our school buses and then, God forbid, they do something terrible to our children because they don’t know any better. The trick is in understanding and reminding ourselves constantly that the systems aren’t humans. They’re not going to make human mistakes. They’re going to make robot mistakes, and robot mistakes are not the same as human mistakes.


Prof. Illah Nourbakhsh: Every time companies think they’re going to use AI and robotics to do something that has a social engagement to it, a social interaction component, they totally underestimate how hard it is. What they underestimate is all the variety of ways in which humans behave and how hard it is to accommodate all of that. Social doesn’t just mean robots that take care of people in nursing homes, say. It means self-driving cars, because driving is social. You’re not just on a highway. If you were just on a highway, Mercedes has been doing that for 10 years, as has Tesla. But if you’re in an urban area, now you’re dealing with strollers and kids and beach balls and dogs, and it’s social.

That tail of the distribution where you have weirdo social interactions that are complicated and hard to predict, where somebody’s trying to get into their car, their walker is in the way, but somebody on the other side is trying to cross the street and waving you on, and then they get annoyed when the car does not go when it’s being waved on, that’s completely social, and that’s very, very hard to solve. I see that being the place where people end up failing. Another example of that is actually Watson. In healthcare, data analytics, and research, yeah, that works great. But as soon as you say, “We’re going to take actual patient medical records written by doctors, often handwritten, and stick them all into a system that’s supposed to do uniform AI on them,” the problem is doctors aren’t uniform; they’re human beings.

Even the way in which they do diagnostics and express their diagnostic judgments isn’t the same. The Watson team at IBM was blown away to realize how much variability there was. They made it too hard for their AI system to operate correctly. Often, that’s where you see people fall short of expectations: at that boundary between automation that you’re hoping just works automatically and the social messiness of human reality and human behavior. The places where systems have in fact done better than I ever imagined have to do with narrow examples of gameplay, where systems with deep learning have just managed to figure out techniques that blow away the people. Examples are Jeopardy, examples are Go. A really great example is poker. Nobody in my field could have imagined, 10 years ago, that the world champion at poker would be a robot, because we thought poker was hypersocial. Right?

But the thing that’s special about it is, in the case of online poker, it’s not about facial expression, it’s not about gestures anymore. The essence of poker that ends up being social is around bluffing and mental models of others. In that part, the robot can do better than the human, and we never imagined that. The part where I thought we’d have more leverage by now, and it’s going and it’s very exciting but we haven’t quite gotten there yet, is in the area of exoskeletons. Electromechanical systems are interesting because when we talk about computation, just pure thinking, computers really do get better every few years. It’s like we’ve created our own deadline for that. We’ve forced Intel into a corner where they have to make chips faster and faster, and so they do, and AMD does too, and everybody else does.

Computers get faster, which means our weather models get better, our air quality models get better, chess playing gets better, Go gets better. But when you get to electromechanical systems and battery chemistry, those aren’t driven by Moore’s Law. They don’t double in speed every 18 months. Batteries get better every once in a while. Every three years, somebody has a discovery and goes, “Oh my God, lithium iron phosphate is amazing,” and suddenly you have a step-function improvement. But you don’t know when those discoveries are going to happen. Same thing with mechanical systems. We introduce a new kind of harmonic drive and the exoskeletons get better, but we didn’t know when we were going to get that new harmonic drive. It just happens when the mechanical engineers have a really big aha moment.

I’ve seen those systems improve and they continue to improve, but we can never quite predict when. The dream I have, and the thing that we have to get to, is that those exoskeletal systems become game-changing for the elderly and for people with paraplegia, so they can walk with us and hike with us and enjoy the world with us. That applies to millions of people. But we haven’t quite gotten there yet because they’re not affordable yet. Even though the DOD has them, they cost millions of dollars each right now, and no insurance company can supply and afford that. That’s been slower than I expected, but it’s going. Anytime it’s mechanical and electromechanical and battery-based, it’s just going to be a slower game that we have to play. It’s a longer game.


Prof. Illah Nourbakhsh: One major area of advancement that we’re already seeing in the architecture department right here at Carnegie Mellon has to do with smart buildings and building-envelope management. There are incredible technologies being born in robotics now that do things like make the porosity of a building’s envelope change over time, so the building can breathe or not breathe depending on the humidity, on mold, on the wind outside, and on temperature, of course. We know we can control the infrared reflectance of windows, we can control the porosity of the walls, and we can control HVAC point by point instead of with one big on-off switch for the whole building. These systems have many, many more knobs and dials that you can twist and turn.

You take all that and combine it, and what you can do is create an efficiency level that would have been impossible to predict just a few years ago, just three or four years ago. Right now it’s on the research table, but I can see that it’s going to become prime time. That’s really interesting because it means the overall energy consumption of a city is going to go down. We know that vertical urbanization is the path of the future. That’s how we’re going to live as humanity. I’ve forgotten the exact number, but something like 55% of people within 10 years will live in urban areas across the world. There’s a mass migration from rural to urban, but that mass migration actually helps us do things like manage wetlands, manage farms, and manage land in ways that reduce carbon, and pack people in ways that are more efficient.

But we need a healthy-buildings vision to do that, and this new direction, I think, gives us the opportunity. That’s all robotics and AI, because it’s all about predictive management. It’s about learning the behaviors of the people in the building and then accommodating them, so that the system knows I come to my office at 8:30 after I drop my kids off, and it’s going to have the right temperature in my office at 8:30 when I come in. That’s all coming, and it’s going to change the way we live. I think that’s one really neat way to think about AI and robotics changing our day-to-day activities.
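The "right temperature at 8:30" idea can be sketched very simply: learn a typical arrival time from past observations, then back off the zone's warm-up time. All times and the warm-up constant below are invented for illustration; a real building-management system would do far richer occupancy modeling.

```python
from statistics import median

# Past observed arrivals, in minutes after midnight (hypothetical data):
# 8:28, 8:35, 8:30, 8:32, 8:29.
arrival_minutes = [508, 515, 510, 512, 509]
warmup_minutes = 20   # assumed time for the zone to reach its setpoint

# A robust "prediction": the median of past arrivals.
predicted_arrival = median(arrival_minutes)
start_heating_at = predicted_arrival - warmup_minutes

h, m = divmod(int(start_heating_at), 60)
ph, pm = divmod(int(predicted_arrival), 60)
print(f"predicted arrival ~{ph}:{pm:02d}")
print(f"start conditioning the zone at {h}:{m:02d}")
```

The median is used instead of the mean so one unusually late day doesn't shift the schedule; the same shape scales up to per-zone, per-occupant models.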


Prof. Illah Nourbakhsh: Situations where we can fence-line the automated operations, that’s where it’s going to go all-automation early. For example, remediation. Environmental remediation of coal reclamation lands. I think you’re going to see, in 10 years, a lot of the remediation effort done by machines, because you can simply fence-line the area in which the machines are operating and off they go. You already see that in automated harvesting, where the farmer sits at home and, at the right time of year, the harvesting equipment runs itself, because there’s a very clear spatiotemporal boundary. It’s bounded in time, it’s bounded in space, you can define it all, and the machines can operate in that zone without having incidental interactions with people they weren’t supposed to interact with.
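The spatial half of that fence-lining is, at its core, a geofence check: before the machine acts on a waypoint, verify the waypoint is inside the permitted polygon. A minimal sketch with made-up coordinates (real systems layer GPS/RTK positioning and certified safety logic on top of this):

```python
def inside_fence(x, y, fence):
    """Ray-casting point-in-polygon test; fence is a list of (x, y) vertices."""
    inside = False
    n = len(fence)
    for i in range(n):
        x1, y1 = fence[i]
        x2, y2 = fence[(i + 1) % n]
        # Does this fence edge cross the horizontal ray cast to the right of (x, y)?
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# Hypothetical 100 m x 60 m work area.
field = [(0, 0), (100, 0), (100, 60), (0, 60)]
print(inside_fence(50, 30, field))    # waypoint inside the fence -> proceed
print(inside_fence(120, 30, field))   # waypoint outside the fence -> halt the machine
```

The point is the one made above: when the operating zone is defined this crisply in space (and equally crisply in time), pure automation becomes tractable.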

There are no deer hunters on the farmland, hopefully. You’re going to see the same thing in construction, basically commercial construction, where more and more of the construction processes can be done in a firmly automated way if you can fence-line the automation away from the people. I think that’s somewhere you’re going to see pure automation really run. Everywhere else, it’s a boundary condition, and instead you’re going to see automation and people interacting carefully together. You mentioned drones. They’re incredibly useful for things like bridge inspection. We have to figure out ways to do that in populated areas like New York City and Washington, D.C., where right now they can’t. We’re going to do that not by just saying, “Oh, forget about safety. Let’s just let the drones fly around the US Capitol.” That’s not how we’re going to do it.

We’re going to do it by having certified operators, the drone wranglers, who work with the drones and ensure that the drones are being safe, and they’re going to need really interesting heads-up displays that let them see exactly what the drones are doing in real time. That’s where it’s going to get really cool. That’s the innovation area. But in a way, asking when we’re going to have pure automation is the wrong question. The question is, when is automation going to be so undeniably the right direction in terms of productivity, reliability, and profitability that we will see a full-throttle embodiment of automation inside that space, whether it’s fence-lined by humans or by spatial limits that make sure it doesn’t interact with people in the wrong ways?

That’s going to happen a lot. I’ll give you another example that’s heavily automated but not purely automated: sewer inspection. A massive infrastructure problem we have across the whole United States is the condition of the water and sewer pipes under our cities. It’s an unbelievably big problem. Nobody can even estimate the total cost right now. There are cities where more than a third of all the water is lost to leakage underground. Can you imagine? Of all the water the city’s using, a third of it is just wasted. It goes right back into the aquifer. If you’re unlucky and you’re in a place like Florida, it actually becomes saltwater. You’re losing it forever. In those situations right now, you have companies with incredible semi-autonomous sewage and water inspection robots that have become the principal way inspectors deal with these pipes.

But they’re not fully automated, right? They’re working hand in hand with inspectors and with repair crews. You go in and understand exactly where the problem is with GPS-like accuracy, and then go in and fix it. That’s what you’re going to see. It’s fence-lined, right? It’s a water pipe. There’s no danger of hitting a tricycle or a child or a basketball player. But even in that environment, what’s critical is: is it improving safety? Is it improving reliability? Is it making the job of the human operators more effective, so we can solve the infrastructure problem we have in this land of crumbling infrastructure?


Prof. Illah Nourbakhsh: I think what’s groundbreaking right now are situations in which we take human know-how, human content expertise, and AI- and robotics-based analytics and comprehensive sensor acquisition, and we marry the two. Cases where humans couldn’t possibly figure out how something works on their own, and robots don’t have the context, they don’t have the wisdom to know how something works, but they can collect immense amounts of data and present it to humans, and that partnership together figures it out. I’ll give you an example that we’re doing deeply right now. One of the interesting things about air quality is that we don’t really understand exactly how local industry impacts local health in communities across the US.

It’s complicated because prevailing winds don’t tell the whole story. It has to do with where the inversion layers are and when they occur. Which neighborhoods are affected most by, let’s say, a coke plant that’s converting coal into coke for steelmaking, or by a paint plant? Then, what are the ways in which, neighborhood by neighborhood, we can understand air quality, the actual building envelopes people live in, whether they have well-sealed windows or not, and then how people are impacted by that and how we change it? To do all that requires this crazy concatenation of data. We put up literally hundreds of sensors that we invent to measure volatile organic compounds, what we call VOCs, in the air, like benzene and toluene, as well as particulate matter.

Those are really interesting robotic devices that do things like bounce very, very special laser-diode light off the particles and then measure the scattered light to figure out how big the particles are. At the same time as we’re measuring all that, we work with NOAA, the National Oceanic and Atmospheric Administration, to measure and model exactly how turbulent air flows around buildings and hills and hilltops, because the wind doesn’t just move like one homogeneous mass. We measure that stuff. Then we do machine learning by taking the measurements we’re making of particulate matter and VOCs and the models of vortices and turbulent wind direction and putting them all together. The result of all that is you start to build a model that lets you predict, day by day, where the pollution is going to be.

When do we tell people not to run in the park? When do we tell people to keep their children in the house, and when can they go outside, and where should they go? That’s the kind of project where you take natural information and human behavioral information, combine it all, create predictive models, and then use those models to estimate what’s going to happen and give people some insight that helps them live better. That’s the name of the game.
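The pipeline described here, sensors to features to fit to forecast to advisory, can be sketched with a toy linear model. Every number below is synthetic, and a real system like the one described would use NOAA wind fields and far richer models; only the shape of the workflow is the point.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500
wind_toward_us = rng.random(n)       # 0..1: how directly wind blows plant -> neighborhood
wind_speed = rng.random(n) * 10      # m/s
inversion = rng.integers(0, 2, n)    # 1 = inversion layer traps pollution near the ground

# Synthetic "measured" PM2.5: worse when the wind blows toward us, when the
# air is still, and when an inversion is present (plus sensor noise).
pm25 = (10 + 30 * wind_toward_us + 15 * inversion
        - 1.5 * wind_speed + rng.normal(0, 3, n))

# Fit a linear model to the historical sensor data.
X = np.column_stack([wind_toward_us, wind_speed, inversion, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, pm25, rcond=None)

# Tomorrow's forecast conditions: wind at the neighborhood, nearly calm, inversion.
tomorrow = np.array([0.9, 1.0, 1.0, 1.0])
forecast = tomorrow @ coef
print(f"forecast PM2.5: {forecast:.1f} ug/m3")
print("advisory: keep children indoors" if forecast > 35
      else "advisory: air quality is OK")
```

The advisory threshold of 35 µg/m³ echoes the EPA's 24-hour PM2.5 standard; the day-by-day decision ("run in the park or not") falls out of comparing the forecast against it.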


Prof. Illah Nourbakhsh: Agriculture is huge. End-to-end agricultural operations of all kinds are going to benefit tremendously from this, including animal husbandry, by the way. Literally everything from the way we deal with the filtration and handling of chicken, all the way up through large-scale agriculture. Another one is agile manufacturing, in general. Another one that’s huge is warehousing and logistics, in general.

Logistics, not just around warehousing and warehouse management, but around warehousing and then supply-chain resolution. Of course, we know that’s a big deal now, but it’s the AI-based techniques we’re reformulating now that are going to solve this problem today and in the future. Then another huge vertical is medical. It’s a very, very big deal. It’s a vertical, in fact, because it has to do with big-data analytics, with all kinds of really interesting robotics technologies, and with the ways those two things come together to create a better outcome for people.



Illah Nourbakhsh is Professor of Robotics, Director of the Community Robotics, Education and Technology Empowerment (CREATE) lab and Associate Director for robotics faculty at Carnegie Mellon University. He has served as Robotics Group lead at NASA/Ames Research Center, and he was a founder and chief scientist of Blue Pumpkin Software, Inc. His current research projects explore community-based robotics, including educational and social robotics and ways to use robotic technology to empower individuals and communities.

The CEO and Chairman of Airviz, Inc., Illah is a World Economic Forum Global Steward, a member of the Global Future Council on the Future of AI and Robotics, and a member of the IEEE Global Initiative for the Ethical Considerations in the Design of Autonomous Systems. He also serves on the Global Innovation Council of the Varkey Foundation and is a Senior Advisor to The Future Society, Harvard Kennedy School. Illah earned his BS, MA, and PhD degrees in computer science from Stanford University and is a Kavli Fellow of the National Academy of Sciences. He is an active member of the ROBO Global Strategic Advisory Board.


