100 DAYS, MEGABRAIN
Masha Krol

46/100: A Panel on Artificial Intelligence

I got a chance to attend a panel focused on Artificial Intelligence today. The panelists were:

What follows is as much of the discussion as I was able to transcribe, while also attempting to listen. I really enjoyed the panel, as the topic of AI is near and dear to my Megabrain.

Enjoy.

What do you know about AI that the average person does not realize?

JF: What most people don’t realize is that AI is already as good as we are at complex things like hearing and vision.

Jon: AI is in everything, but it’s invisible.

Jeremy: Thus far, AI has kind of meant surpassing human intelligence at tedious tasks like sifting through large data sets. Now there’s reinforcement learning, which is allowing intelligence to move from simpler perceptive tasks to planning, games, and long-running reward tasks. Innately human characteristics are on the cusp of being done better by machines.

Robin: Special-purpose AI has been around for a very long time. Small, closed domain problems have been solved with AI for maybe a decade. AGI – Artificial General Intelligence – is the thing that Elon Musk writes about as the thing that’s gonna destroy humanity. Go and look at DeepMind; their purpose was to make a general intelligence. Elon Musk invested in the company to keep an eye on them. Then, they were bought by Google.

They use a training method – a neural net approach – that made an AI that taught itself to play video games. The inputs are the screen pixels; the controls are left, right, shoot; it played Space Invaders. All it knew was the score at the end of the game. In just 8 hours, the AI trained itself to be better than a human.

There is this misconception, though, that deep learning can solve everything – well, not so much. But AIs are training themselves to do certain things, like play Atari games.
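
[A quick aside from me, not the panelists: the “screen pixels in, final score out” loop Robin describes can be sketched very roughly in code. The toy game and the hill-climbing policy search below are invented for illustration – they are nothing like DeepMind’s actual deep Q-network – but they show the shape of learning from nothing except the end-of-game score. –Ed.]

```python
# Toy illustration only: NOT DeepMind's DQN. A made-up "shooter" game where
# the learner sees nothing but the final score, and a stochastic hill climber
# that nudges its action preferences toward whatever scores better.
import random

ACTIONS = ["left", "right", "shoot"]

class ToyShooter:
    """Stand-in for an Atari game: only 'shoot' ever earns points."""
    def play(self, weights, steps=200):
        score = 0
        for _ in range(steps):
            action = random.choices(ACTIONS, weights=weights)[0]
            if action == "shoot" and random.random() < 0.3:
                score += 10          # occasionally hit an invader
        return score                 # the only feedback the learner ever gets

def hill_climb(generations=300):
    env = ToyShooter()
    weights = [1.0, 1.0, 1.0]        # start with a uniform random policy
    for _ in range(generations):
        # Randomly perturb the policy; keep it only if a fresh rollout scores
        # at least as well as a fresh rollout of the current policy.
        candidate = [max(0.01, w + random.gauss(0, 0.2)) for w in weights]
        if env.play(candidate) >= env.play(weights):
            weights = candidate
    return weights

if __name__ == "__main__":
    learned = hill_climb()
    print(dict(zip(ACTIONS, learned)))   # 'shoot' should end up weighted highest
```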

Gabe: It’s not new. In the late 90s we were already working on autonomous vehicles in San Francisco, but these were very specific, not general problems.

Are decreasing costs impacting the advances of AI?

Robin: In terms of cost, here’s an example. Google [X] wired together 16,000 computers to create a neural network. Then they pointed it at YouTube and let it watch videos. And it very quickly learned to distinguish two things in the videos: people, and cats. So, these guys spent $120,000 to build a cat detector. But this is a special-purpose AI, whereas AGIs are just really hard to get right.

Gabe: Cost and capability are interesting. We’re working on blue-collar AI – that is, using AI to augment a person, so that the average Joe is getting the output of AI at the point where warm hands touch cold steel. That’s where we’re focused; we started the company a year ago.

If we compute at the edge [pushing data processing out to the edges of the network instead of sending everything back to the core –Ed.], we can do it faster. The cost of mobile devices and using AI at the edge is at a point where companies can afford it.
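
[Another aside from me: here is a rough, hypothetical sketch of the edge-versus-core trade-off Gabe is describing – run inference next to the sensor and ship only a tiny summary upstream instead of the raw data. The “model”, payload sizes, and function names are all invented for illustration. –Ed.]

```python
# Hypothetical sketch of computing at the edge: classify frames locally and
# send only a small summary upstream, instead of shipping every raw frame.
import json

def classify_locally(frame: bytes) -> str:
    """Pretend on-device model; in reality this would be a small or quantized
    neural net running on the edge hardware."""
    return "defect" if sum(frame) % 7 == 0 else "ok"

def edge_summary(frames) -> str:
    results = [classify_locally(f) for f in frames]
    # The uplink payload is a few dozen bytes, regardless of how much raw
    # sensor data was processed on the device.
    return json.dumps({"total": len(results), "defects": results.count("defect")})

if __name__ == "__main__":
    raw_frames = [bytes([i % 256]) * 1024 for i in range(100)]   # ~100 KB of raw data
    print("uplink payload:", edge_summary(raw_frames))
```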

If you wanted to get started with AI, where would you look?

Jeremy: In any work that’s published in any academic field, or in the conference proceedings of any industrial area, you’ll find AI everywhere. However, it’s figuring out how we can actually apply it that’s difficult. Even the glue that needs to be written to wire things up isn’t that hard – it’s really the framework that you need to have around it that’s more important to understand. Otherwise – garbage in, garbage out. A lot of people I talk to don’t even know what they want to do with AI.

Martin: Some people see this as a competition between human capability and AI capability – that’s not how I see it. It’s about the combination of AI and humans, especially on the plant floor. IoT has opened up the opportunity for application – but before doing analysis, you have to get clean data. There’s so much data out there, it’s a mess. Before we talk about AI, we need to talk about clean data. Quality of data is so key to unlock the potential of AI. Then we can use the right tool for the right jobs. In manufacturing, people are operating with the lights out, but once we turn them on [that is, start collecting data –Ed.], there is huge potential for both simple approaches and AI.
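
[Editorial aside: a minimal, hypothetical example of the “clean data first” step Martin describes, using pandas; the column names and valid ranges are invented. –Ed.]

```python
# Hypothetical example of cleaning plant-floor sensor data before any AI or
# analytics is applied. Column names and thresholds are made up.
import pandas as pd

def clean_sensor_data(df: pd.DataFrame) -> pd.DataFrame:
    df = df.drop_duplicates()                        # repeated readings
    df = df.dropna(subset=["temperature", "rpm"])    # incomplete rows
    # Discard physically impossible readings before any modelling.
    df = df[df["temperature"].between(-40, 400) & (df["rpm"] >= 0)]
    df = df.assign(timestamp=pd.to_datetime(df["timestamp"], errors="coerce"))
    return df.dropna(subset=["timestamp"]).sort_values("timestamp")

if __name__ == "__main__":
    raw = pd.DataFrame({
        "timestamp": ["2017-03-01 08:00", "2017-03-01 08:00", "not a time", "2017-03-01 08:02"],
        "temperature": [72.5, 72.5, 9999.0, None],
        "rpm": [1200, 1200, 1150, 1180],
    })
    print(clean_sensor_data(raw))    # only the valid, de-duplicated rows remain
```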

Robin: A lot of people see machine learning and AI as synonymous, and deep learning and AI as synonymous. But again, machine learning has been around forever. You don’t have to call it all AI.

Just to digress: one thing I’d say is that you do need data, but having perfectly clean data is not necessary – there are techniques to fix that.

When will the next AI Winter hit?

Robin: We are currently in an AI spring. The first AI spring was during the Cold War: people got excited about AI and decided that what they needed help with was English-to-Russian translation. They had some spectacularly bad results – the translation failed completely on nuance. The DoD pulled the plug, and everyone went into a depression. This happened three times – first the translations; second was LISP and expert systems; lots of ups and downs.

The problem is, people get too optimistic. “Ooh it’s an AI spring, what’s gonna happen?” They slap the AI badge on, and then everyone gets disappointed because we hyped it too much.

Gabe: To go back to the question… In order to start, find a real business problem. In our case, we know aerospace and defence, and oil and gas; then we provide solutions that employ AI.

Jeremy: Exactly. Some people take the approach of applying a solution before knowing the problem. You really need to know what problem you’re trying to solve. You need to know, “What’s the measure?” Figure out how good it is now; then, if you do something simple, does it make it better? That way you can begin to understand the problem.

Unless your core business is to create the tech, which only DeepMind has been recently successful with – it’s hard; believe me, I’ve tried.

JF: There have been advances in computing power, sensors, memory capacity, and machine learning algorithms – and they kind of go together. We passed a threshold in 2012 – that’s the moment when we applied deep learning algorithms to GPUs. These thresholds that exist – we don’t always know where they are – but we need the confluence of sufficient computing power, sufficient information, and sufficient algorithms to surpass them.

Deep learning has been around for 60 years. Pursuing this track, there were significant advances using GPUs to solve image recognition; on benchmarks like ImageNet, accuracy went from something like 72% to 86%. And now it’s performing better than humans. We’re at a point where instead of a system that is command and control, we are evolving to a system where we provide feedback, and the system is capable of improving itself.

There are GPU-intensive processes other than self-driving vehicles. Self-driving cars are a contained problem – you need tagging info, feedback loops, and quality data for problems to be eligible to be worked on. Things like manufacturing offer lots of new sources of input. There’s a company in Montreal called Imagia that uses deep learning to detect cancer. Looking at MRI images and the shapes of the tumours, they can figure out what type of tumour it is. The next step for them is integrating genetic information from the patient in addition to the images. Combining those two models will provide even better data.

How soon do the robots take over?

Jon: Currently, many of the ways ML is being used are: you have this problem, and then you have a specific solution. We’re solving something in robotics that’s a little bit more generic, touching on many different types of areas. But even then, people think, “OK, ML is doing all these cool things” – well, forget it; for general intelligence, it’s gonna be a while. Analytical neuro labs have modelled this, and we laughed at just how long it’s gonna be.

It’s just really hard to predict. Whatever we imagine is usually very off. I have no clue. I know it’s not going to be in the next decade. This is what we can do now: we can use Moore’s Law and predict computation capacity. Computation is still a huge problem; we’ve got a long way to go in terms of performance and power. And the sophistication of the algorithms is really not there yet. We don’t even have a clue how the brain is doing this.

Long story short, I think it’ll be a while.

Robin: The reason it’s difficult to predict the future is that we’re still building on the von Neumann architecture. One reason the DeepMind guy made a really big leap was that he had a background in game design, programming, artificial intelligence and neuroscience.

Also the power envelope problem: there’s only something like 20 watts of power in the human brain. And yet I can tell you all about cats in videos, versus 10,000 machines.

By the way, Moore’s Law is happening in ML as well – if compute capacity is hockey-sticking, at what point do we cross over? I think it’s far away. But it’ll creep up on us really fast.

Gabe: I think what’s next is going to be more Iron Man than Skynet – augmenting the human more than anything else. But we have to really think, “What challenge will be the barrier?” What if it’s not technical, but instead ethical or societal – what if you have all these people displaced out of jobs, or your self-driving car runs someone over?

JF: The connectedness of the memory is another difference between human brains and machines.

I have to agree, the barriers are not going to be around technology. There are going to be large tech companies that will set up huge tech centres and try to take all that talent, all that power, and apply it to everything. There might be no more oxygen left in the ecosystem.

In terms of AI talent and work being done, how do we in Canada stack up?

JF: We’re starting to get recognized. We have commitment from the federal government, they’ve decided to invest $93M in the universities to expand neuro-research labs, and to reach out and feed it into the corporations. And then they’ll inject more than $100M. We’ll need that funding to train and support the core infrastructure.

At Element.AI, our project is one that tries to launch more AI companies from Canada – we’re based out of Montreal, and we intend to spin off a few companies every year, and facilitate access to world-class talent.

In AI approaches, there’s no silver bullet, no one right answer – the talent makes a huge difference. Human judgement is necessary: how you define your product and problem, how you support the system, the experience you deliver, the algorithms you use, and how you will evolve. People and talent make a big difference.

Currently, top AI talent is hired at over a million dollars a year. We’re seeing PhD students getting hired for 300,000 British pounds. These are data scientists that may not even know how to code! They are fresh out of school, have never done a commercial implementation.

In Montreal, we’ve got incredible reach from the Valley, they’re very interested in what we’re doing. There will be a gap if these big companies begin to open labs. We need to be attractive as an ecosystem so that more people come from other countries. There’s risk, but there’s also hope.

Jonathan: The acquihire average is USD$2.4M per head. I imagine that’s in the Valley; it’s really high.

Robin: The point about talent is really important. Look at that professor of ML at Stanford – he just got hired by Baidu. We need to both attract talent, and build talent. We need to attract people to the discipline of data science. There’s responsibility for innovators in the space to build and grow that talent. We don’t have to import it all.

What will be the first billion-dollar AI business?

JF: Ten years from now, most big companies will have AI. Some of these big companies already exist. There are a few big players with access to data and distribution networks; they’ll make the transition. On the B2C side – Google, Facebook, etc. I think that on the B2C front, it’ll be less likely that we’ll see spectacular outcomes from newcomers – we might see some, but there’s less probability.

Martin: I think the most successful ones will be companies that are solving really big problems in a new way using AI.

Jeremy: I think it won’t necessarily be a company, but applications of AI – it might be a billion people that will be better off. If we’re only looking through the lens of large companies and revenue, I don’t think that’s doing justice to what AI can do and what people working on it can achieve. Maybe it’s a free mobile application that allows farmers to make better decisions on what crops to grow. My sincere hope is that all the value created in AI will not accrue to the large companies, but will be shared.

Jonathan: There are organizations dedicated to that, like OpenAI.

Robin: There won’t be technology companies going forward; every company will be a technology company. Membership of the Fortune 500 will change. When we think of the top 10 – Apple, Amazon, etc. – we think of them as tech companies, but really, it’s “book seller becomes web service provider”. The other 490 companies – can they adapt to the disruptive business problems that are coming?

Since the Singularity is not quite near, what human augmentation applications of AI are most exciting to you?

Gabe: It’s exciting to me to think about augmenting humans to be able to do higher-level, more skilled jobs with data and AI. If we think about the historic election the US just had, folks are clearly worried about losing their jobs. If we can use AI to help level up the average blue collar worker, then everyone can benefit economically.

JF: There are really interesting applications in assisted design – creating spaces, and objects. Of course, healthcare is a huge area for augmentation.

Jonathan: I may be biased, but I think virtual reality – doing things like assisted design, creating and shaping objects with your hands in this virtual space.

So there you have it. It was a really cool discussion, and I’m bound to have missed some gems – so if you ever get a chance to see any of these guys speak, I encourage it.

As a completely random aside, our hosts were Assent Compliance, who, inexplicably, have installed a giant boxing ring in the middle of their open plan office.

When I asked, Martin Sendyk, Assent Compliance’s CPO, said:

The ring is kind of like Fight Club, in that we don’t talk about it.

Intriguing! Leaves me entirely free to assume that that’s how business decisions get made at AC :)