Thursday, May 25, 2017

Artificial Intelligence has a long way to go but it's already creating much value: Neil Jacobstein


Neil Jacobstein chairs the artificial intelligence (AI) and robotics track at Singularity University on the National Aeronautics and Space Administration (Nasa) Research Park campus in Mountain View, California. A former CEO of Teknowledge Corp., an early AI company, Jacobstein was in India recently to speak at the two-day SingularityU India Summit, held in association with INK, which hosts events such as INKtalks for the exchange of cutting-edge ideas. In an interview, Jacobstein talks about the confusion around AI, how job losses from AI should be tackled and the possibilities of a brighter future for humanity. Edited excerpts:
There are several definitions of AI. Which one is your favourite?
Artificial intelligence allows us to create pattern-recognition and problem-solving capability in a computer, using software algorithms. AI allows us to tackle practical business and technical problems, and it presents an opportunity for us to allow computers to do things that previously only humans did.
There seems to be a lot of confusion about what AI can or cannot do. What is your reading of the prevailing situation?
I think part of the confusion in the market might be that science-fiction movies have given people a very vivid and sometimes incorrect view of what AI is capable of doing. Today, we have AI that is already at human levels of problem-solving in very narrow domains such as chess or Go (an ancient board game that originated in China) or certain kinds of medical diagnostics. But we don’t have human-level AI that is general across the board. So we don’t have AI with natural language understanding at a human level, and we don’t have AI that has humour or empathy at human levels. So it’s a kind of mixed landscape.
When do you think we will achieve “true AI”, so to speak? What are the challenges to be overcome?
We have already achieved true AI in the sense of creating problem-solvers that add billions of dollars of value every year to various industries. That’s happening now. But if you are referring to artificial general intelligence at human levels, I think that probably won’t happen for several years: it could be as early as the mid-2020s or as late as the 2030s. The critical thing is not the time frame but the consequences of having AI at a human level and what that means for jobs, for global security, and for the opportunity to solve problems.
While there are those who believe in the potential of AI and its applications, a sizeable number, including Stephen Hawking, Bill Gates and Elon Musk, have expressed fears that AI-powered machines could rule over humans. What’s your take on this?
To his credit, Elon has changed his views on this over time. He has invested over $1 billion in an entity called OpenAI to democratize access to AI and to create new AI test beds and capabilities that will allow us to build layers of control into AI software. He has also participated in creating conferences on the future of AI and sponsored the Future of Life Institute’s conferences on developing new principles of AI safety, the so-called 23 Asilomar AI principles (futureoflife.org/ai-principles). So he’s interested in capturing the benefits of AI and wants to help us work systematically to reduce the downside risk.
There may be an alarmist element to job losses resulting from AI, but robots are indeed replacing humans. How do you think the situation should be handled?
I think there is a need to anticipate things and to have some empathy and foresight for people who will be affected by job losses. For one, the quality of life for rich people goes down when there are a lot of angry, alienated and armed people around. So it makes sense to think ahead about how we can educate people doing routine jobs now and, in anticipation of problems downstream, provide access to free, high-quality education. Not everyone will take advantage of that, and not everyone will achieve high levels of skill in some new job. So it makes sense to have some kind of basic minimum income. There are different potential schemes for doing that, but nobody knows the exact answer to this.
While Peter Diamandis talks optimistically about the future in his book Abundance, there’s a widening gap between the rich and the poor. Do you think a technology like AI can bridge this gap?
I think rather than focus on the gap, it would be better to focus on quality-of-life metrics: do people have access to high-quality, nutritious food? Do they have access to first-rate education or clean water? If you look at the evidence for abundance on Peter’s website or read Steven Pinker’s book, The Better Angels of Our Nature, what’s clear is that in some respects, we are living in the best times for humanity. The challenge is to create a world where, instead of haves and have-nots, we have haves and super-haves. Now, the gap between the haves and the super-haves might still be very big, but the haves will at least have things they never had before.
You have spoken about the huge impact of atomically precise manufacturing in nanotechnology. When will it be achieved?
The kind of nanotech we have today is mostly materials science; it’s not molecular machines or atomically precise manufacturing. But I do think we will eventually have atomically precise manufacturing: we know it’s possible, and researchers have demonstrated in the lab the ability to manipulate atoms and molecules with precision. What’s missing is the ability to do it at industrial scale; that may take years.
(Note: This interview first appeared on www.livemint.com.)