Rule Breaker Investing: When Robots Rule

By Fool.com

The idea that robots may someday overtake humanity has been the stuff of science fiction for decades.

From Terminator to Battlestar Galactica, we have imagined a world where robots become our overlords and the human race struggles for survival. In a sense, that's what Motley Fool co-founder David Gardner and his guest discuss on this episode of Rule Breaker Investing. They may not be picturing a world where robots try to destroy human society entirely, but they very much examine a time when humanity no longer serves as the dominant intellect on the planet.

It's a discussion that goes in directions RBI rarely does, but it's a lot of fun, and Gardner does bring it all back to an investor's perspective. It may not be Cylons and T-1000s, but the future is coming, and this episode helps humanity know what to expect. So listen along -- if you want to live.

A transcript follows the video.

This podcast was recorded on Sept. 14, 2016.

David Gardner: And welcome back to Rule Breaker Investing. I am joined by a special guest today. Really, any guest on this podcast is special by default, since I've only had a guest two or three times in our 60 or so podcasts.

But Robin Hanson, our new friend here, is an associate professor of economics at George Mason University, which is not too far from Fool HQ. He's a research associate at the Future of Humanity Institute of Oxford University. Professor Hanson has master's degrees in physics and philosophy from the University of Chicago -- what a great school. Nine years of experience in artificial intelligence research at Lockheed and NASA. This is a very smart guest that we have today, by the way. Every single one of these lines blows away the host and what the host has achieved in his life.

A doctorate in social science from California Institute of Technology. Three thousand fifty citations, which I'm gathering, Robin, have probably been amped up since this book was published on June 1. It's probably gone higher ... and 60 academic publications. This is his first book. The book is The Age of Em. We're going to talk some about the book, but this is not a book discussion, per se. This is a discussion -- what we love to do here at Rule Breaker Investing -- talking about the future.

Now, before I ask you my first question, I want to mention that Andy McAfee, whom I'm sure you know because he's blurbed your book, is a really good guy at MIT. He came here to Fool HQ a few years ago, and his watchword at the time was "things are about to get weird." Based on the pace of growth of technology, and the assumptions that we've all had about linear versus exponential growth, and I know you know a lot of this, things are about to get weird. We're at the middle place in the chessboard, a metaphor you may be familiar with.

But Robin, as I read and looked through your book -- I've not finished it, but I've read some of it, which I hope is better than average; having written a few books myself, I know you're often talking to people who have no idea what your book is. But having read some, I know that things are about to get weird. You're talking about something that's very weird. We're going to go to the future in a sec, but let me start with the past. You do a really nice job telling the story of humanity. Can you do it in two minutes or less?

Robin Hanson: Absolutely. So the story of humanity is three great ages. During each age, growth was steady, not accelerating, and between the ages there were sudden transitions: within less than a previous doubling time, the economy started doubling 50 or more times faster.

So first, there was the age of foragers, where we slowly hunted and gathered and accumulated, via culture, more kinds of ways and environments we could survive in. Then farming, about 10,000 years ago, suddenly started doubling every thousand years, vastly faster, and farmers could grow via accumulating more plants and animals, and ways to survive in many different environments that way.

And then there was the great Industrial Revolution a few hundred years ago. And since the Industrial Revolution, we've been doubling roughly every 15 years, and we've been doubling steadily.
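[Editor's aside: The transitions Hanson describes can be checked against the doubling times he cites. A quick sketch -- farming's roughly 1,000-year and industry's roughly 15-year doublings come from the conversation, while the ~250,000-year forager figure is an assumed round number drawn from Hanson's other writing, not from this episode:]

```python
# Rough economic doubling times, in years, for each era.
# Farming and industry figures are from the conversation; the forager
# figure is an assumption for illustration.
doubling_years = {"foragers": 250_000, "farmers": 1_000, "industry": 15}

eras = list(doubling_years)
for prev, curr in zip(eras, eras[1:]):
    speedup = doubling_years[prev] / doubling_years[curr]
    print(f"{prev} -> {curr}: economy began doubling ~{speedup:.0f}x faster")
```

[Both transitions clear the "50 or more times faster" bar Hanson mentions.]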

So when people say things are going to get weird, I say, "Yes. When they do, it will get really weird, really fast," but I don't actually see much happening in the next 20 years or so. I disagree with Andrew McAfee, for example, who thinks that we're on the verge of something right about now. But when it does get weird, it will get weird really fast, and you probably shouldn't try to time it. You should probably just be ready for it.

Gardner: All right. We might even get into investment implications before we're done, but before we move forward, one of the things you point out in The Age of Em is that as each new age dawned, the dominators, the leaders of the previous age, are still around, but they're kind of marginalized. They're sort of off to the side. So foragers don't count for as much when farming shows up, and farmers don't count as much when the Industrial Revolution hits. And to foreshadow your thinking, about a hundred years hence, which is the time frame that you're addressing in your book, you see humans being that outmoded group, that fringe, rural ...

Hanson: Marginalized.

Gardner: Marginalized. And what's taking over?

Hanson: Robots of a certain sort. Now, there's a lot we don't know about the future, and so my book is premised on assuming that a particular kind of robot is the first kind of robot to be as smart as humans broadly, all across the board. I don't know if that's true, but it seems worth considering a whole bunch of different scenarios and working them out. The future is important enough to have a hundred different books working out a hundred different scenarios that have only a 1% chance. I'm happy to think my book at least meets that standard.

So a particular kind of robot, and it's called a "brain emulation." So that's why it's called The Age of Em. "Em" short for emulation.

Gardner: E-M.

Hanson: Yep. And the idea of a brain emulation is you take a particular human brain, you scan it in fine spatial and chemical detail, you have models of how each kind of cell works, and you make a big computer model that works like the original brain in terms of input/output signals. And that means if you hooked it up with hands, eyes, ears, and a mouth, you could talk to it. It would talk back. You might ask it to do a job and it might do it. And if it were cheaper than humans, everything would change.

Now, this route to artificial intelligence plausibly might happen roughly sometime in the next century, which is why I think it's an interesting scenario. Actually, at the rate we've been going with the ordinary kind of artificial intelligence, it would take two to four centuries before we reach human-level abilities there, all across the board.

Gardner: But what has happened, typically, is -- and certainly Ray Kurzweil and other futurists, but particularly Ray comes to mind -- just running Moore's law, and just making assumptions that double, double, double. Maybe it takes a year or two to do those doubles, and it doesn't look like much when you go from one to eight.

Hanson: Yeah, that's a standard story, but honestly my guess is that there's a roughly lognormal distribution of the computing-power cutoff for automating jobs. So in the last 50 years, we've seen exponential growth continuing, where we double capacities every two years, but we have not seen exponential displacement of human jobs. We've seen relatively steady displacement of human jobs. That suggests there's this wide distribution of how much computing power is required to displace any particular job, and there's a long way ahead. We need a thousand, a million times more computing power to displace more jobs farther up the ladder.
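[Editor's aside: Hanson's steady-displacement argument can be illustrated with a toy model. A sketch, assuming compute doubles every two years and the log of the compute required per job is normally distributed -- all parameters here are illustrative, not from the episode:]

```python
import math

def normal_cdf(x: float, mu: float, sigma: float) -> float:
    """CDF of a normal distribution, via the error function."""
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

DOUBLING_YEARS = 2.0      # compute capacity doubles every two years
MU, SIGMA = 40.0, 15.0    # assumed mean/spread of required log2-compute per job

for year in range(0, 101, 20):
    doublings = year / DOUBLING_YEARS          # log2 of available compute
    displaced = normal_cdf(doublings, MU, SIGMA)
    print(f"year {year:3d}: {displaced:5.1%} of jobs automatable")
```

[Even though compute grows exponentially, the automatable share climbs only gradually through the wide middle of the distribution -- exponential capability, roughly steady displacement.]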

Gardner: So let's go with a little bit of storytelling. And any time you're writing a book about a hundred years from now, obviously there's going to be a lot of storytelling. It's a story. It's in your head. What's remarkable about your book is the depth to which you've thought about the implications of brain emulation and how it affects all sectors of society -- everything from how the economy works to who would be president, if there even is democracy. The degree of admittedly speculative thinking that you've indulged in goes as deep as anything I've ever read about something this distant.

In fact, I'm wondering, Robin, if anybody in the world has thought more about the world a hundred years from now than you have -- literally and specifically a hundred years from now. Do a little bit of storytelling. What do you, from the book -- I'm still alive, let's say. Let's say you're still alive. Where are we? What does the world look like?

Hanson: So it's like a science fiction novel, except there's no plot and there are no characters, and the story environment makes sense. So I've been a science fiction reader for a long time -- I've enjoyed it -- but the more I've learned over the years, the more frustrated I get with story environments that don't actually make sense if you think them through. And that's a standard thing in an action story.

In an action story, you're carried along, and you do this and you do that, and if you ever go back and think about what your other options were, you'll find there were lots of other things people could have done that would have made a lot more sense, but in the moment you didn't think of them, and so that's how a lot of stories are. So I say this makes sense. I've worked all through it.

So I'm claiming I'm not being very speculative in the sense of making things up. I am just applying standard theories, but to an unusual scenario. So the speculative part, I would say, is that I assume that this kind of robot shows up sometime roughly in the next century, and it's cheaper than humans. And everything after that, I claim, is applying standard theories. Not speculation. Not making stuff up. Not guesswork -- applying standard theories.

Now, that's how we work in the world today. That is, we have data from the past, and we abstract that data into our best theories, and then we apply those theories in finance or everything else. And that's what I'm doing here. But you would like an image, so let's start with that image.

You are in a vast city. Almost all emulations are crammed into a small number of very big, dense cities. If you look at it objectively, which sometimes you could, it's just racks and racks of computing hardware. Boiling hot. Huge cooling pipes crammed through it. It's ugly. It's functional. It's stuck.

But you don't look at it that way. You're an em. You mostly live in virtual reality. In virtual reality, it's a gorgeous city. It's got gleaming spires; broad, green boulevards; etc. And at any time you can meet with anybody in the city by just thinking of it, and you are instantaneously moved anywhere in the city, anytime you want, in order to meet with anybody.

So you can go for a stroll, if you want, just to relax, but you don't really have to do things to travel. You can just instantly be somewhere else. Most of the ems are working most of the time, but they like work, and they're workaholics like many people you might know, so our image should start with them at work.

Gardner: OK, and so let me paint a picture and you tell me if I have this right. So let's pretend, for example, that we have an Einstein-like figure, a human being at the time, sometime in the next century. Naturally, we would want to emulate that brain, so we take the most genius human that we can find. And we, with technology that does not exist today ...

Hanson: Right.

Gardner: ...we map everything about it so that we're able, essentially, to replicate what's happening in that brain down to the cell level. And then, because stuff gets cheaper over time -- the first human genome cost a billion dollars or more to sequence, and now, 15 years later, you can have yours for 250 bucks -- we're able to take Einstein, and we're able to copy him into a robot, and then we're able to make a hundred or a thousand of those robots ...

Hanson: Or a billion.

Gardner: Or a billion of those robots. Of those Einsteins. And then, Einstein, they start at the same starting line -- all billion of them -- but they then, from that point forward, autonomously have different experiences. Naturally, uniquely one has this or that experience versus another, and so you end up with all of these different versions of Einstein. Is that a fair recounting of some of the picture that you have in your head when you talk about ems?

Hanson: Yes. So picture a billion Einsteins, but they aren't all born at the same moment. They're spread out over time. They take on different jobs and live in different cities. Some of them are plumbers. Some of them do physics. Some of them do music. They just do a wide range of different things, and each of them has thousands of other versions around them -- trained slightly earlier or later, slightly older or younger. So ems wear out, in the sense that they get fragile with time and need to retire. Even though they're electronic, this is how software works today. Software rots, and so the em minds also rot.

So to be ready, they have versions of the ems who were started a year later, trained in slightly newer ways, and will retire a year later, all the way down the line. So they see older and younger versions of themselves around, so they know where their life is going. They have a pretty good idea where it's going, and where they'll live, and who they'll marry, and what jobs they'll have.
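[Editor's aside: The genome-cost comparison a few exchanges back implies a remarkable halving rate. A back-of-the-envelope sketch using the round numbers from the conversation:]

```python
import math

# Round numbers from the conversation: ~$1 billion for the first genome,
# ~$250 some 15 years later.
start_cost, end_cost, years = 1e9, 250.0, 15.0

halvings = math.log2(start_cost / end_cost)   # number of cost halvings
months_per_halving = years * 12 / halvings

print(f"~{halvings:.0f} halvings in {years:.0f} years, "
      f"one every ~{months_per_halving:.1f} months")
```

[Roughly 22 halvings, one every eight months or so -- several times faster than a Moore's-law pace, which is the kind of cost collapse Gardner is gesturing at for scanning and copying brains.]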

Gardner: And do they have consciousness?

Hanson: They have all of the human psychological features that you and I do. That's the whole idea: we don't have to understand how the brain works. All we have to understand is how each cell works, and we model the cells, and we don't care how the larger structure above them works. We can just make a copy and turn it on. So yes, they're conscious. They fall in love. They get mad. They lie to themselves.

Gardner: And in the meantime, human beings are still around. It's just that our brains can only move at about a thousandth of their speed, and it's hard to compete against a billion Einsteins if you want to be a plumber, and so we're still on this Earth, but we've been marginalized as the farmers and foragers were before us.

Hanson: That's right. Now, this whole Age of Em I'm talking about might see as much growth as there had been during the entire industrial era, or during the entire farming era before it, but it would all happen within a year or two. The economy might double every month. To typical emulations, who are running at a thousand times human speed, that's actually a relatively slow change. They see the economy doubling every subjective century. But to humans sitting on the outside, it's blistering fast.

So the humans can't really change that much in a year or two. They're off to the side, retired, and they only experience a year or two. But the ems can experience thousands of years of cultural change, and so they do.
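[Editor's aside: The subjective-time arithmetic here checks out. A sketch using the figures from the conversation -- ems at roughly 1,000 times human speed, the economy doubling every month, and an em era lasting a year or two:]

```python
SPEEDUP = 1_000        # typical em speed relative to a human
DOUBLING_MONTHS = 1    # objective time for the em economy to double

# How long one objective doubling feels to an em:
subjective_years = DOUBLING_MONTHS * SPEEDUP / 12
print(f"one doubling feels like ~{subjective_years:.0f} subjective years")

# How long a two-year em era feels to an em:
era_years = 2
print(f"a {era_years}-year era feels like {era_years * SPEEDUP:,} subjective years")
```

[About 83 subjective years per doubling -- Hanson's "subjective century" -- and thousands of subjective years of cultural change packed into a couple of objective years.]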

Gardner: How do you react to people today -- Elon Musk and a few others -- who say things like we need to make sure that we don't program them to destroy us all? And all of the Skynet worries and questions? What is your reaction as the author of The Age of Em?

Hanson: Well, ems can't be programmed. If you really hate the idea that your descendants would be free and able to choose values and attitudes that are different from yours, you won't like this world, because these descendants have that choice. Of course, that's the freedom you've had relative to your ancestors.

But some people say, "Yeah, but we don't want to tolerate that for our descendants." There are many people who say, "We must figure out a way to make sure our descendants can't have values and attitudes that differ from ours, because otherwise it could randomly drift away. And who knows how far away it could go, and that would be terrible."

The Age of Em may only last a year or two and then something else may happen, and a plausible thing that might happen next is that we achieve artificial intelligence through other means. That is the way we've been doing it for the last half-century -- slowly writing software. It's possible that we will continue in that vein, and eventually ordinary software will replace ems and be better than ems. I don't know. At that point, you may worry about that other kind of software and whether it can drift away.

But honestly, I think people are so quick to have policy recommendations and evaluations, and they first need to have some idea of what might happen if you do nothing. And therefore, I've written this book mainly in the mode of telling you what might happen if you do little to stop it. It's a positive analysis of the most likely outcomes. It's not my job to make you love it or hate it. You may want to change it, but first, know what's likely to happen.

Gardner: So one thing that I've said before on this podcast -- and I'm going to re-channel Ray Kurzweil, with whom you may or may not agree on this point -- just the concept that the predictability of the future keeps narrowing. That in ancient Egypt, you could predict pretty well what your grandchildren would be doing. By the time I came of age, when I was 13 years old in 1979, you could look forward and say things like, "Well, satellites, cable TV. It looks like that's a good growth industry."

Fast-forward to today, and it feels like I can predict meaningful technologies only a few years ahead.

And we see things ahead of time. We see 3-D printing before it really hits. Those kinds of things.

Hanson: I think your impression is a little misleading. I think what we can see through is a certain number of doublings. Today the economy doubles every 15 years. That's the time scale over which a lot of fundamental change is happening. It's hard to see through many doublings, but in the past, when the economy only doubled every thousand years, being able to see through a few doublings meant seeing through a few thousand years. So fundamentally the problem is seeing through doublings, and not so much seeing through time. The more change happens in any given time, the harder it is to see through time.
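[Editor's aside: Hanson's doublings-not-years point can be made concrete. A sketch assuming one can see ahead roughly three doublings -- the exact number is an assumption, and the one-month em-era doubling is the hypothetical figure from later in the conversation:]

```python
VISIBLE_DOUBLINGS = 3   # assumed foresight horizon, in doublings

# Economic doubling times, in years.
eras = {"farming": 1_000, "industry": 15, "em era (hypothetical)": 1 / 12}

for era, dt in eras.items():
    print(f"{era}: foresight horizon of ~{VISIBLE_DOUBLINGS * dt:g} years")
```

[The same foresight that once covered millennia covers decades today -- and would cover only about three months in an em economy, which is why Hanson later warns against trying to time the transition.]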

But we actually can see quite a lot. There were dozens of people, many decades ago, who foresaw roughly when digital cameras and all these things would show up. They just used straightforward projections. I was actually part of a group of people, before the World Wide Web showed up, who foresaw the World Wide Web and were trying to predict it and then influence what would happen. There are many things you can see through a few doublings, and there are also things you can just see that are more fundamental. So The Age of Em is a scenario for what happens if a certain technology shows up and gets cheap.

Now, it's hard to guess which technologies will show up when and be cheap, but conditional on certain technologies showing up, I think we can actually say a lot about how the world might play out.

Gardner: I want to ask you more about that, Robin. I think somewhere in the book you say the chances of what you're describing in the book actually happening are something like one in a thousand. Is that right?

Hanson: Well, I was trying to say if you take all the things I say and make a conjunction out of it, it's a ridiculously low probability.

Gardner: And it's remarkable, because you have an incredible "if this, then that" chain going on in your head that you put into the book -- well, if that's true, then this will be true -- and it's very, very deep. But let's get back to looking at the nearer term, which is what I wanted to talk about.

So one thing we do here at The Motley Fool when we pick a stock is imagine some things that we hope will happen -- indicators, green flags for the future -- such that if those start lighting up green, we were right and the stock's going to do well. And we put some red ones out there, too. That's what I call our 5-and-3.

I was wondering if you could do that a little bit, thinking backwards from a hundred years from now. Could you provide us a few flags such that if they flip up green, you're looking more and more right, and if they don't, you're looking more and more wrong?

Hanson: Well, sure. There are three technologies required to make emulations feasible, and they all have to achieve much better levels than they have now. One is we need lots of cheap, fast, parallel computers, so if computers just hit a block and they just don't get better, that's a big red flag, but if they continue to exponentially improve as they have ...

Gardner: Quantum?

Hanson: ... that's a green flag.

Gardner: Yep.

Hanson: Quantum may not actually make that much difference. It's just ordinary computers I would focus on.

Gardner: OK.

Hanson: Second, brain scans. You need to scan brains in fine spatial and chemical detail. Now we can actually do scanning at fine enough spatial and chemical resolution already. You just need to scale it up. And people have been doing that, but you'd want to see that that actually continues, as we hope.

And third, you need good models of cells. There is a whole academic literature where people work out computational models of particular kinds of brain cells, but we just need them for all the different brain cells. We don't even really know how many kinds there are. So it would be nice to see that people start to count how many there are and start to work down the list of having models for all the different cells. If they somehow reach a roadblock there or something, that would be a red flag, but if they keep going, that's more of a green flag.

Hanson: Now, another red flag is, of course, if other kinds of artificial intelligence -- the kind that's more in the news these days -- accelerate as fast as many people seem to hope or expect, relative to my expectations. That might mean that that kind of AI reaches human-level abilities before this kind, in which case the scenario is less relevant.

Gardner: So Robin, some of the reflection on your work and your thinking comes from people who say, "It sounds a little bit sad, or even grim. Dystopian." Do you think of it that way? Are you an optimist in life -- forget about The Age of Em -- are you a realist, or a pessimist?

Hanson: I think I'm somebody who's happy with the world we live in, which many people aren't. The world, even, of a thousand years ago was, I think, still a pretty good world. I like humans. I like people. Even when they're poor. Even when they're struggling. Even when they lie to themselves, I just like people. So a world with a lot more people-like things enjoying themselves, I think, is an OK world.

Now, could I think of better things? Great. But I'm also an economist. We economists are somewhat cynical. We don't think everything is possible that other people think. We think there are big constraints in organizing things and figuring things out, which mean that it's just hard to have the perfect world many people imagine would be possible.

Gardner: Do you think -- I mean, by so many measures, we humans have made tremendous recent progress in things like longevity, lower violent crime rates, lower infant mortality rates, and higher empathy. I enjoy following Max Roser -- I don't know if you're familiar with his work, Our World in Data; I follow him on Twitter. To what do you attribute this, what I would think of as unprecedented progress for humanity over the past 50 years?

Hanson: Well, humanity has been making unprecedented progress for several million years.

Gardner: Maybe unprecedentedly accelerated, like we were not even ...

Hanson: It's been faster ...

Gardner: Right ...

Hanson: ... but honestly, it's been faster for the last few hundred years. We've been in the industrial era ...

Gardner: Yes ...

Hanson: ... and I think progress has been relatively steady in the last few hundred years. So I don't see any acceleration recently, but I do think there will come a time, in the next century or so, when suddenly things will speed up a lot.

But the growth we've had recently is because we've reached the industrial era, and the industrial era seems to be faster growing than the farming era because we found new ways to share insights and innovations. It seems like most growth, for the last few million years, has been about generating and mostly sharing innovations.

Humans could share innovations faster than genes via culture. Farmers could share innovations faster than could foragers, because they had trading networks that were long distance and they could embody innovations in plants and animals. Industry has had even faster innovations because we have networks of expert specialists. We can talk -- the plumbers in one place can talk to the plumbers in another place and pass innovations through specialized networks and that's allowed, apparently, much faster innovation.

So the very fact that humans can accumulate innovations and insight outside of genes is the key thing that allowed humans to break away from the rest of the biosphere, and we've been learning faster and faster ways to do that ever since.

Gardner: Have you read the book Ready Player One?

Hanson: No, I haven't.

Gardner: Are you familiar with it?

Hanson: I am.

Gardner: Yes. I think it's going to be a movie next year.

Hanson: I'll watch it.

Gardner: I know. It sounds like you're like me, although you're much, much better at this. Kind of a sci-fi geek. Is that fair?

Hanson: Yeah, yeah, although I like science fiction that makes more sense. I've got to say The Martian made a lot of sense, so hats off to a movie like that where all the way through, it mostly makes sense. That's really rare, honestly.

Gardner: Wow, we're running out of time too fast. Robin, I guess I have to ask you the Rule Breaker Investing question. Given what you're saying on this podcast, and how you think in your book and your everyday work as an economist, are there implications for investors that you're here to help let us know about?

Hanson: There are. They're not new or radical, but I should repeat them, because we economists and finance experts often just repeat the same standard financial advice, which is, look, if you don't know better than other people, stop making bets against them. Try to just make investments that aren't making bets, but that are covering the risks that you need to cover.

So the Age of Em isn't coming soon, but when it does, it will happen really fast, and if you wait to see the signs of it, it might be too late then. What you'll need, then, is some assets other than your ability to earn income. So for God's sake, try to set up some sort of insurance, sharing arrangements, or other sorts of assets so that when you suddenly lose your ability to earn wages, you've got something else to rely on.

And make sure it's global; that is, the Age of Em may not be distributed at all equally across the globe. If you set up some sort of asset or insurance agreement just in your area, it could be that the em economy doesn't show up much in your area, and then you don't have a stake in the big thing that's growing. If the em economy doubles every month, your investments there would double every month, so you could get very rich very fast if you start out with a nest egg. If you start out with nothing, then doubling every month doesn't help much.
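[Editor's aside: Hanson's closing advice about holding a nest egg is plain compounding, but the numbers are dramatic. A sketch assuming the month-long doubling time from the conversation:]

```python
MONTHS = 12
stake = 1.0             # any nonzero starting stake, in arbitrary units

for _ in range(MONTHS):
    stake *= 2          # a stake in the em economy doubles each month

print(f"after one year: {stake:,.0f}x the starting stake")
```

[A 4,096-fold gain in a single year for anyone who starts with something; a starting stake of zero compounds to zero.]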

Gardner: There are so many more questions I'd like to ask, but our time is limited. In fact, Robin is graciously speaking to our employees shortly after we tape this podcast. So thank you very much, Professor Hanson, for your insights.

Hanson: Thanks so much for having me.

Gardner: Yeah. It was really a fun conversation. Fool on!

As always, people on this program may have interest in the stocks they talk about, and The Motley Fool may have formal recommendations for or against, so don't buy or sell stocks based solely on what you hear. Learn more about Rule Breaker Investing at RBI.Fool.com.

David Gardner has no position in any stocks mentioned. The Motley Fool owns shares of and recommends Twitter. Try any of our Foolish newsletter services free for 30 days. We Fools may not all hold the same opinions, but we all believe that considering a diverse range of insights makes us better investors. The Motley Fool has a disclosure policy.