Conversation with Nigel Toon, CEO of Graphcore about his book 'How AI Thinks'
Navigating the AI Landscape: Human-Centric Development, GenAI Disruption, Ethical Challenges, and Geopolitical Influence
In this conversation, I speak with Nigel Toon, CEO, Chairman, and Co-founder of Graphcore, a leading semiconductor company that develops accelerators for AI and machine learning.
Our discussion covers a wide range of topics, beginning with Nigel's extensive experience in the tech industry and his journey towards founding Graphcore. We explore the central themes of his recently published book, "How AI Thinks: How We Built It, How It Can Help Us, and How We Can Control It," which offers a balanced perspective on the AI revolution. Nigel shares his insights on the importance of a human-centric approach to AI development and the key lessons we can draw from great scientists like Claude Shannon. We delve into the seismic shift towards GenAI and LLMs (Large Language Models), discussing how businesses can navigate this rapidly evolving landscape. Nigel provides valuable advice on balancing job displacement and re-skilling in the age of AI, emphasising the crucial role of humanities in AI education. We also touch upon the challenges of striking the right balance between AI innovation and regulation, and the practicality of implementing a Hippocratic Oath for AI developers. Additionally, we explore the geopolitical influence of China and the Middle East on the global AI landscape. Lastly, Nigel offers his wisdom to aspiring AI entrepreneurs, stressing the importance of focusing on solving difficult, important, and valuable problems.
Here is a transcript of our conversation:
Intro to Nigel
Aditya: Hello, welcome to the Uncharted Algorithm, where we shatter the status quo at the crossroads of AI, enterprise, culture, and the future of work. Today, I'm speaking with Nigel Toon. Nigel is CEO, chairman, and co-founder of Graphcore, one of the leading semiconductor companies that develops accelerators for AI and machine learning. Nigel is a technology business leader, entrepreneur and engineer. Prior to Graphcore, Nigel was CEO at two successful venture-backed processor companies and co-founder and board director at Icera, a 3G cellular modem chip company, which sold to Nvidia in 2011 for $435 million. He was previously a senior executive at a major Silicon Valley-based, publicly listed semiconductor company. He has served as board member and chairman for several technology businesses and currently sits as a senior non-executive director on the board of UK Research and Innovation. He's also been a member of the UK Prime Minister's Business Council. He's the author of three granted patents and has been awarded a Doctor of Science degree from the University of Bristol. And Nigel is also a published author now; his first book, which recently came out, is called "How AI Thinks: How We Built It, How It Can Help Us, and How We Can Control It." Welcome, Nigel.
Nigel: Hi. It's good to be here.
The Human-Centric Approach
Aditya: One thing that really stood out for me was that the narrative places humans at the heart, or the core, of this technology revolution, which is great. So given your deep-seated expertise across semiconductors, technology and notably Graphcore, was this human-centric approach one of your primary motivations for writing the book?
Nigel: Yeah, I guess over the time of working in AI - I've been interested in looking at AI even back in the 1980s when AI was impossible. We started thinking about Graphcore back in 2012 when the deep learning phase of AI was really just kicking off. Over that time, I've talked to lots of people and the common reaction is "I don't understand it. I'm scared of it. I wonder what it's going to do. Will it take my job?" - all of these kind of very natural reactions to it.
From my experience though, AI is a tool. I sort of think of it like a piece of paper and a pencil. You don't try and solve a complex math problem all just in your head. You sit down with a piece of paper and you write things out - it's that interaction that helps you solve the problem. AI is similar in some ways, it will let us solve problems outside of ourselves, using a tool that is going to help augment our intelligence.
I very much see a cycle here where humans will always be in the loop with AI. It's here to help us and so we need to understand it, we need to know where it's come from. We need to know what it can do for us, but also how we can control it. This is a very, very powerful tool so we need to think about what it can do, how people might misuse it. It's not the AI that's going to do it - don't fear the machine, it's the people who use the machine and what they might do with it, which I think we need to be aware of.
Key Insights from Shannon, Schrödinger, etc.
Aditya: In the book you say that intelligence helps us build structure and organise in order to push back against the chaos that would engulf us. And you also go on to define intelligence as the ability to gather and use information in order to adapt and survive. There are lots of these ideas that you've gathered, and you lay them out very nicely, from, say, Shannon, Schrödinger and lots of other great thinkers. So what key insights might we draw from viewing intelligence through that lens, especially regarding AI's development and societal integration?
Nigel: Yeah, so I guess it's Claude Shannon's views of information theory - this is a guy who's incredibly important, probably the father of our information age, but very few people know about him. He came up with this concept of bits, but more importantly, how you reliably communicate. One of the things he talks about is this idea that we encode information so we can send it to someone else or somewhere else.
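As a rough, purely illustrative aside (not something discussed in the conversation), Shannon's idea of measuring information in bits can be made concrete with a small calculation: the entropy of a message measures the minimum average number of bits per symbol needed to encode it. The short Python sketch below shows that a perfectly predictable message carries zero bits of new information per symbol, while one whose symbols are all equally likely carries the maximum.

```python
# Illustrative only: Shannon entropy of a message, in bits per symbol.
from collections import Counter
from math import log2

def entropy_bits(message: str) -> float:
    """Minimum average number of bits per symbol needed to encode this message's symbol distribution."""
    counts = Counter(message)
    total = len(message)
    return -sum((n / total) * log2(n / total) for n in counts.values())

print(entropy_bits("aaaaaaaa"))   # 0.0 -> completely predictable, no new information per symbol
print(entropy_bits("abcdabcd"))   # 2.0 -> four equally likely symbols need 2 bits each
print(round(entropy_bits("how ai thinks"), 2))  # somewhere in between
```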
Language is a form of encoding, so it's an encoding we use to transfer information between humans. Certain words are trying to encode a concept, but they don't necessarily do a perfect job of encoding that concept. I think information, knowledge, thinking - these are concepts which are very complex and really require quite a bit of thought to understand what they are and to define accurately, so that we can communicate what we're actually meaning.
When I say intelligence, you might have a very different view of what intelligence is. We tend to think of it as being the very conscious stuff we do like solving a cryptic crossword or playing chess - our brain seems to be active and is thinking and we think of that somehow as intelligence. Whereas in the book I try to show that if you play tennis for example, something which is almost subconscious if you're doing it well, there's huge amounts of intelligence required to understand the flight of the ball, the spin, where it's going to go, where you need to be, the shot you need to apply to get the ball back and win the point.
We tend to just throw that away as though somehow that's not intelligence, but actually a huge amount of intelligence is built up in that. So I think spending time trying to understand some of those concepts is very important. And with the intelligence one, this idea that intelligence is about surviving as a species - we do it, animals do it, flowers do it, all biological objects do this. But it's interesting, AI doesn't need to survive as a species, it's a tool for us, it will help us build our intelligence. So although we use intelligence in the word, you could think of it as an intelligence tool, rather than some separate thing.
Anticipating the Shift Towards Gen AI and LLMs
Aditya: You've been in the tech industry for a long time. Was there something during the last five or ten years, or maybe more recently in the last three to five years, that you saw early which helped you anticipate the seismic shift we've seen towards GenAI and LLMs? And how has this anticipation influenced the insights that you share in this book? Also, do you think these insights will stand the test of time amidst this pace of evolution?
Nigel: I'd probably go back a little bit further to around about 2012, when deep learning really started to work. A lot of the early work in AI had been more about trying to understand this concept of thinking and semantic AI, expert systems and things. That is fraught with challenge - it's very difficult to deterministically describe intelligence. And then this other approach, the connectionist approach where you build these deep neural networks and somehow they will capture from information enough knowledge that you can then come up with intelligent outcomes.
That really didn't work until about 2012, when we had enough compute, we had enough information, and those systems suddenly started to work. Since then, obviously the models have got bigger. There's a couple of recent defining things. I think the breakthroughs with AlphaGo using reinforcement learning around 2016 were very important. We ignore the fact that probably 240 million people watched that competition in which Lee Sedol lost to DeepMind's AlphaGo.
240 million people in China watched that live, and it's interesting, if you look at the progress of people's ability at Go since that point, humans have got much better because the AI has taught people techniques that were sort of out of reach for humans, and now humans are again challenging some of the AI Go machines because they're using some of these new techniques.
So that's a really interesting breakthrough point. And then more recently, around 2017, there was the Google "Attention Is All You Need" breakthrough. The reason that was so important was not so much the transformer approach, which is very important and probably quite fundamental, like convolutions are elsewhere in deep learning.
It's really the fact that it opened up this idea that you could use any information. You didn't have to curate the information. You didn't have to do the flashcard thing of showing the machine "Here's a picture of a cat. Here's another cat. Here's another," until it could work out what a cat looked like. You just use language and from language, it's able to build a semantic understanding so that it can start to predict in a story what the next word should be.
And then the same with pixels, we can start to predict what the next pixel should be so that we could create a photorealistic picture, generate that. But it's almost like GenAI is misnamed, because it's not the fact it generates something. It's the fact that we're able to tell it what to do very simply using some very natural prompts.
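To make the "predict the next word" idea concrete, here is a minimal sketch (my illustration; the conversation doesn't reference any particular library or model) using the open-source Hugging Face transformers library with GPT-2. Given a prompt, the model assigns a probability to every possible next token, and text generation is just this prediction step repeated.

```python
# A minimal sketch of next-token prediction with an off-the-shelf language model.
# Assumes the Hugging Face `transformers` and `torch` packages are installed;
# GPT-2 is used purely as a small, freely available example model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The cat sat on the"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, sequence_length, vocab_size)

# Probability distribution over the token that would come next after the prompt.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(token_id.item())!r:>10}  p={prob.item():.3f}")
```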
It's amazing that some of the best prompt engineers are not the geeks. They're actually the arts and humanities people, who understand language and can give it prompts that will make it write better, or designers who know something about art and can generate images which are really interesting and exciting.
Navigating the Gen AI Landscape as a Business Leader
Aditya: Yes, I think you're right. I also have a big issue with this term "generative AI." In fact, A16Z talked about this in one of the blog posts they did a few years ago; they said we should call it "synthesis AI" - it is basically synthesising information. There are many other terms we could use, but that's the term that has stuck and is used.
Coming to the enterprise side of things, which I write a lot about in my newsletter - and you talk a lot to Fortune 500 companies and governments as part of your role at Graphcore - what is your advice to business leaders on how they should navigate the GenAI landscape? You talk about this in the book as well, that we're still at the very early stages. How should they think about this coming revolution?
Nigel: Yeah, as I've continued to think about this, I think one of the best historical analogies is probably electricity. When electricity started to appear, the first power generation stations - the first Edison power stations - were actually built in London in the 1880s. It took another 40 to 60 years before it really became ubiquitous, but electricity started off just being used for light bulbs. It was just used for light, that's all people did with electricity - very much a point solution - and that's kind of where we are, I think, with AI today.
We're building out some of the infrastructure. We're building out the power stations, if you like, so that we could do AI. We're building some of the fundamental models that will allow us to build applications more easily and more quickly. But we haven't yet started to see the wave of those.
What happened with electricity was that, among other things, factory layouts got redesigned. Rather than having one steam engine with these complex sets of pulleys and belts and things to drive all the different machines, which all had to be very close together, you could have separate motors for each of the machines and they could each run at their own speed. Then they could be laid out in the factory to match the manufacturing process, and we got interchangeable parts and mass production. It completely transformed the way that businesses operated.
The productivity benefits were just enormous from this change in the basic underlying infrastructure. So it feels like that is an interesting analogy for where AI is going to go, the way that AI will transform the way businesses operate. Our competitor, NVIDIA, is a good example. Jensen Huang at NVIDIA has over 50 direct reports. The span that they have inside that company going down in the organisation is incredibly wide. Because AI actually helps you sort through that information that is coming from all these different places to really understand what's happening across your company.
Sam Altman has talked about the idea of a single person, billion dollar company. Because AI will help them to do something on their own that would create a billion dollar organisation. Maybe that's overstated. But maybe it's a handful of people that could become an incredibly successful business built using AI.
The way that things will transform, and certain industries will transform, is going to be quite profound. I think businesses need to look at that and understand where they have risks that AI is going to substantially change their business, and whether they need to be doing work on that.
And also, even if AI isn't affecting their products or their service directly in some way, how is AI going to change their internal processes and their internal ways of working? Are they fully understanding how their company is going to look in five or ten years' time as a result of AI, getting ahead of that, and thinking about how to retrain their people so that they can do this work? Because you won't just be able to go out and hire a whole bunch of new people. You've probably got to retrain some of your existing people to take on this work. And that's an opportunity for them, and it's an opportunity for you and your company as well.
Balancing Job Displacement and Reskilling in the Age of AI
Aditya: Talking about the retraining aspect, there's been a lot of talk and fear amongst people, companies, employees about job displacement and new skills being needed. How do you think AI is going to impact some of these issues?
In your conversations with companies, how are they going to balance these two forces of job displacement and retraining and augmenting humans, which you also talk about in your book?
Nigel: It's interesting, I think there's two aspects to it. One is how do companies evolve? The other side is just more fundamental around how does it change education. So maybe I start with the education piece first, because that's kind of the fundamental piece.
I remember when I was at school, calculators were starting to appear, starting to become common. But I wasn't allowed to use a calculator in my school exams - obviously commonplace today. Because people were scared. They thought "Oh, you won't be able to do the calculation yourself. You're cheating." I think we're at the same stage now with GenAI in education, where people are worried that GenAI will allow kids to cheat. The reality is GenAI is what they're going to live with, what they'll grow up with and what they'll use in their life. So they'd better get used to how to use it.
We need to change the way education works. Research has proven that if you give people individual tutorial help, they will move forward in their grades - they'll go two grades higher than if they're not getting that individual tuition - but we've always thought that's impossible; we'd never have enough teachers to be able to achieve that objective. Whereas AI potentially can allow that. Look at some of the things Khan Academy is doing, or some of the education companies in China, which really seem to be running well ahead on this, in terms of providing tuition at the student's own pace in different subjects, particularly things like maths, where it's really easy to just fall behind.
Then you can never keep up - you never get back on the wagon, basically. So you can keep people understanding. And then the other side of it is, rather than this rote learning of the three R's - which I never really understood - it should be the three C's: curiosity, creativity, critical thinking. These become the key things: the idea is that AI is going to produce something for you that's kind of eight out of ten, you're going to add your creativity to it, you're going to critically analyse it, and you'll get it from eight out of ten to ten out of ten, with your individual spark added into it. I think that's a really important piece.
So that's how we need to think about education. Going back to try and help people who are currently in the workplace to re-skill, can we use the same techniques to re-skill people? Again, history's a great teacher of these things. In the first phase of the industrial revolution, the skilled weavers were all put aside. The machines that came in were used by children and other non-skilled people, and there was a huge Luddite revolution that happened as a result of that.
In the second phase of the Industrial Revolution, really driven around electricity, factory automation and things, particularly through the 50s and 60s, there was a much more collaborative approach where it was recognised that we need to re-skill the people that we have, we need to up-skill the people we have to use this new automation. Actually as a result of that, people got different jobs, they got better jobs, they were paid more because they were skilled. It was an interesting period in history where blue collar workers had strong union representation, so they were able to negotiate for better salaries.
The challenge has been that since the 1980s, most of the IT automation that we've seen has been in white collar industries, service industries, where unions aren't really a factor. People haven't been paid extra for taking on and using these new skills. It's almost been hidden, it's just kind of accepted - "You'll use a computer now. And that will make your job better." But they're not going to pay you any more for doing that.
So I think with AI, we're going to see that same kind of up-skilling that we saw in the fifties and sixties in manufacturing. We're going to see that in the service industries. We're going to see that in white collar work. We need to think about how we are going to upskill those people, and how we are going to pay them more for this new work - because we're going to ask them to add the human element to what's being produced. So we're going to need to make sure that we really value that and make sure that people are earning from that.
It's going to be a really interesting period. We could get it wrong. We could definitely get it wrong, but if we plan for it, maybe we can do a good job.
Aditya: This is where the humanities aspect comes in. Don't just teach kids about STEM, but also have humanities, which I think is missing today. So there has to be a drastic shift in trying to rebalance education within schools and universities as well. In fact, Singapore just announced that they're going to push everybody above the age of 40 into re-skilling, just to better transition into this new world of GenAI.
So, spending a little bit more time on some of the fears - I really like that your book offers a very balanced perspective. While there's a sort of growing concern about doomerism, you don't take sides; you offer a very balanced view.
Now, I personally have become more of a techno-optimist and effective-accelerationist supporter. I feel that the dial has turned too far towards doomerism. So what I wanted to ask you was what your thoughts are on how you propose society balances this integration of AI without succumbing to excessive caution or fear. From your perspective, how do you strike the right balance between fear and innovation?
Striking the Right Balance Between AI Innovation and Regulation
Nigel: Yeah, regulation versus innovation - that trade-off is quite interesting. It also feels like people, governments have woken up to the power of the Internet at last. 40 years too late, probably 30 years too late. And realise that there's a lot of power in the hands of a small number of companies here. These platforms are very influential. You look what happens in China around internet platforms in terms of the level of government control over the content. Part of that is to do with trying to ensure that the government message is getting out. But part of it, I think, is quite genuine, where they're concerned about the risk and the damage that this content could do for people. Getting that balance right is really difficult.
It almost feels like with AI, we're throwing the lever almost the other direction. Whereas with the internet, it was like "Hey, it's the wild west, run fast, break things." With AI, it feels like the governments are trying to throw a block on this and engage themselves in it without really fully understanding what AI is and how it works. Getting that balance right between the governance, regulation and the innovation is really difficult.
We've also got to worry about this idea of regulatory capture, where some people throw their hands up and say "Regulate me, regulate me," and then "Oh, good idea to regulate me, but let me help you to define what those regulations should be." So they capture for themselves their own current leading market position and maintain that leading market position.
I think you see some of that going on. There's a big debate at the moment about should large language models be closed and only available to a few, or should they be open source and available to everybody? I'm very much more in the camp of "Look, we've got to make this stuff open and make sure that lots of people are looking at it, and lots of eyes are on it." Because if we put it in the hands of a few, that's going to be quite dangerous, and will put a lot of power in the hands of a few people. The real balancing act here is, and somebody else came up with this great phrase, "How do we ensure 8 billion people on our planet benefit from AI, rather than 8 billionaires?"
That is somehow what regulation needs to achieve. The problem is, only some of this is clear-cut. If you take my image and you take my voice and you create a deepfake of me for your podcast - this is not a deepfake, by the way - then that should be illegal without my permission. That seems clear to me. That's the sort of black and white case that you can clearly define.
But when does replication become inspiration? Where is the line on that transition? If I'm a musician and I'm inspired by the Beatles to come up with some new music, where's the line between copying and creating? Those are really gray, difficult issues that are very hard to capture in regulation. The European Union, for example, with the AI Act, is actually setting some pretty hard statements around things like the idea that we shouldn't have any kind of social control. But it's undefined what that is. Is a social media site social control?
And there's this idea that you shouldn't do harm, but it doesn't define what harm is. So we're potentially going to end up with legislation which almost goes too far, and then we're going to have to backtrack from it through precedent. The danger of that is that big companies will be in a much better position to argue the case on that. The people who are going to suffer are the small innovative companies who find themselves on the wrong side of the line and get shut down over it. Because most of these regulations are targeted at the end application.
So you can build a whole bunch of enabling technology and you're kind of outside of the regulation. It's when somebody actually uses that to build a product that that small innovative company potentially ends up in trouble. It's really hard. Again, this is part of the reason I wrote the book: hopefully the more people who understand AI and start to become aware of some of these issues, the more eyes and the more voices we can have trying to come up with sensible ideas around it.
We all need to be involved in this because it's going to affect us all. We all need to get engaged in this debate and not say "Oh, somebody else will decide, somebody else will understand how it works. We'll leave it to them."
Hippocratic Oath for AI Developers
Aditya: Yes. And one really good idea that you have in the book, and you've talked about this, I think, in public as well, is this idea of a Hippocratic oath that's similar to doctors. This is kind of putting the onus on the developers to avoid harm. So how practical do you think this is, or have you thought a little bit more about the Hippocratic oath?
Nigel: Yeah, obviously with doctors, when we get sick, we rely on them to help us. The idea of the Hippocratic Oath is: first, do no harm. You might see somebody who might be sick, but actually by intervening and doing something, you might make them worse.
As a doctor, you have to think quite carefully about that. Hospitals typically have ethics committees: if you're unsure, you go and see three other doctors who will opine on what the right course is, and from that you can perhaps decide what to do. In the UK, we have the British Medical Council, which equally has an ethics board. Some of these decisions in medicine are made at the hospital level - during coronavirus, you've got to make trade-offs: how much of my resources do I put into saving people's lives from coronavirus, and how does that detract from the longer-term care required for people with cancer and other things?
How do I get that balance right? These are difficult, life and death decisions being made. I guess the good news in AI is, let's say a cat is identified incorrectly on the internet - it's unlikely somebody's going to die. The jeopardy is much less in many cases.
But it can still do things which are upsetting, bad in terms of the outcome. It will take very clever people to put these AI systems together. And so they have, or should have, a sense of responsibility about what they're creating and how their technology is being used. And what we also need is people with those skills to be willing to step forward and say "No, no, I want to work in a field which is trying to make AI safer," or "I want to work in a field, maybe an independent institute, where I'm going to help to ensure that people are kept safe from this technology."
"I'll use my skills to help ensure that bad things aren't happening." So this idea of people who learn these technical skills, having some understanding of ethics and the implications of their work, maybe unintended consequences, becomes quite important.
Can we build some of those things in? For example, when you become a chartered engineer, do you need to take some kind of ethics course or sign up to some ethical standard? Already, I think a lot of these engineering fellowships and things do require that, but maybe that's a source for doing this. Maybe in AI, we need some of these independent institutes that are helping people to think through these ethical issues, because we can't just rely on the corporates and we can't just rely on the governments. That's typically the choice you're faced with today.
Do you trust the government to keep you safe and control the corporates, or do you just say the corporates are going to look after you - "I'm happy to share all my information with them and they'll look after me and it'll be fine"?
The Geopolitical Influence of AI
Aditya: Right. So, talking about governments a little bit - you've had a lot of experience in China through Graphcore, and I know you've travelled to China quite a bit. So from your perspective, how do you see their influence shaping the global AI landscape?
There's also the Middle East now that is seeing a lot of activity, especially Saudi Arabia. Interestingly, Sam Altman is trying to raise $7 trillion or something, with the Saudi government being part of it. So from a geopolitical perspective, what are your thoughts on how this techno-political landscape is going to evolve?
Nigel: Yeah, I don't find myself agreeing with Putin very often, but he did say that whoever controls AI will control the world.
I'm not sure it's quite that far, but it's sort of an interesting perspective to bring here. That this is actually a geopolitical capability that is going to be very important for major countries. If you fall behind in AI, you're going to become dependent upon this technology from some nation state, and that is going to force you to have certain types of strategic partnerships, which may or may not align with your values.
So China's interesting. They are trying to move from being the factory of the world to being a technology nation. That's going to be a difficult transition for them. But they are 100% focused on achieving that. They have a particular way of going about it. As a growing country and a growing economy, they probably have some flexibility to do certain things in a different way. My impression is they are certainly much more focused, and probably further ahead, on the applications of AI - making sure that AI is being applied in their industries, their businesses and their cities - than we are perhaps in the West, where we've got some of that existing infrastructure already, which would need to be replaced. And they're using AI a lot in education already. So they're probably much further along than we are.
I think they are very focused on how AI will help them become an important country in the world. The US, meanwhile, is much more dependent on the large corporates to drive that innovation. Europe is sort of, unfortunately, probably in the middle here. I've made this statement before - imagine a country that doesn't have its own large internet players, doesn't have its own cloud, whose government is dependent on some third-party cloud even for its military, and whose stock market has no trillion-dollar companies. Imagine what that country looks like - it missed out on the internet revolution. Well, that's the UK, and it's many of the European countries who are in that boat. And now imagine what happens if they miss the AI step. They're going to be even further behind.
The idea of sovereignty, digital sovereignty - you won't have digital sovereignty; you'll be dependent upon one of these other large countries for a lot of your fundamental technology. I'm not sure how people are thinking that through - what steps they're trying to take to ensure that, even if they can't be at the same level, they at least have a seat at the table and some level of influence in that conversation. We need to have skills. We need to have important companies. We need to have our seat at the table so that we can have some power in those discussions.
Advice for AI Entrepreneurs
Aditya: Great. And just one last very quick question. You've been an entrepreneur, a very successful entrepreneur. What would be your advice to budding entrepreneurs in this space who are specifically focused on AI and trying to do something new and innovative?
Nigel: I think it's the same advice for any entrepreneur. It's not about the underlying technology or what great thing that you think you have. It's really about what problem does it solve? Is that a difficult problem? Is that an important problem? Is it a valuable problem?
Can you put together a team that will allow you to actually be the best at delivering the solution to that problem? Can you raise the money that will allow you to deliver that solution, hire the people and build the world class company that solves that problem? That's the fundamentals of it.
AI just opens up a whole bunch of new opportunities to solve problems or to solve problems that maybe are partially solved, but you can do it in a much, much better way using AI. But you've got to focus in on what is the problem that we're trying to solve here.
Aditya: Wonderful advice. Thank you so much, Nigel. Very much appreciate you taking the time today.
Nigel: Pleasure. Yeah, I really enjoyed it. Thanks so much.
Aditya: Thank you.