Google I/O 2014 – Biologically inspired models of intelligence

By admin,

  Filed under: AI, Artificial Intelligence, Big Data, Biology, Biomimicry, Cognitive, Computing, Google, Intelligence, Knowledge, Machine Learning, Neuroscience, Ray Kurzweil, Reasoning, Singularity, Software, Speech Analysis, Speech Recognition, Watson

For decades Ray Kurzweil has explored how artificial intelligence can enrich and expand human capabilities. In his latest book, How To Create A Mind, he takes this exploration to the next step: reverse-engineering the brain to understand precisely how it works, then applying that knowledge to create intelligent machines. In the near term, Ray’s project at Google is developing artificial intelligence based on biologically inspired models of the neocortex to enhance functions such as search, answering questions, interacting with the user, and language translation. The goal is to understand natural language to communicate with the user as well as to understand the meaning of web documents and books. In the long term, Ray believes it is only by extending our minds with our intelligent technology that we can overcome humanity’s grand challenges.

Watch all Google I/O 2014 videos at: g.co/io14videos

0:00
0:06 RJ MICHAEL: Hello everyone I’m RJ Michael.
0:09 I’m Director of Games for Google.
0:14 And it is my great pleasure to introduce to you today Ray
0:17 Kurzweil.
0:19 Ray Kurzweil is one of the world’s leading inventors,
0:23 thinkers, and futurists, with a 30-year track
0:28 record of accurate predictions.
0:31 He’s called many things by a number of sources.
0:34 He’s called “the restless genius” by the “Wall Street
0:37 Journal,” “the ultimate thinking machine” by “Forbes Magazine.”
0:42 PBS selected him as one of their 16 revolutionaries
0:45 who made America.
0:48 Kurzweil was selected as one of the top entrepreneurs
0:50 by “Inc. Magazine,” who described him
0:53 as “the rightful heir to Thomas Edison.”
0:59 Kurzweil was a principal inventor
1:01 of the first CCD flatbed scanner, the first omnifont
1:06 optical character recognition system.
1:09 He did the first omnifont OCR.
1:13 He did the first print-to-speech reading machine
1:15 for the blind, the first text-to-speech synthesizer,
1:18 the first music synthesizer capable of recreating
1:21 the grand piano and other orchestral instruments,
1:25 and the first commercially marketed large vocabulary
1:29 speech recognition system.
1:31 This is the stuff that the guy’s done with his life,
1:33 with his career so far.
1:34 Among his many honors, he’s the recipient
1:36 of the National Medal of Technology.
1:39 He was inducted into the National Inventors
1:41 Hall of Fame.
1:42 He holds 20 honorary doctorates, and he has received honors
1:46 from three US Presidents.
1:50 Ray has written five national bestselling books, including
1:53 the New York Times best seller, “The Singularity is Near,”
1:58 in 2005, and more recently “How to Create a Mind” in 2012.
2:04 He is Director of Engineering at Google.
2:06 He is now heading up a team developing machine intelligence
2:11 and natural language understanding.
2:14 He has also been a personal hero of mine since I was young.
2:19 And he has this drive to make AI happen at Google.
2:25 And I couldn’t be more delighted to announce to you guys
2:28 that, as of today, I am now going
2:30 to start exploring the entertainment and education
2:34 space directly with Ray Kurzweil in his new organization.
2:38 And who knows where this is going to go,
2:40 but it’s going to be awesome.
2:42 Ladies and gentlemen, may I introduce Ray Kurzweil.
2:45 [APPLAUSE]
2:49
2:59 RAY KURZWEIL: Thank you, and thanks, RJ.
3:01 And now that you’re going to be working with me,
3:04 we have to work on your enthusiasm a little bit.
3:08 And it’s great to be at Google.
3:10 It’s actually my first job.
3:11 I’ll talk a little bit more about what I’m doing,
3:14 but I’d like to share with you some ideas about thinking.
3:19 I’ve been thinking about thinking for 50 years.
3:21 First, I would like to say a few words
3:23 about the law of accelerating returns.
3:25 I won’t say too much about it, because many of you
3:28 have heard me talk about it before,
3:30 but the law of accelerating returns is alive and well.
3:34 It’s not just Moore’s Law.
3:36 I keep hearing people say, well, Moore’s Law
3:38 is coming to an end, as if that were
3:40 synonymous with the exponential growth of information
3:43 technology.
3:44 Moore’s law was just one paradigm among many.
3:47 When I first studied this in 1981,
3:49 Moore’s Law had only been underway for a few years, and it
3:52 is about shrinking components on an integrated circuit.
3:55 In the 1950s, they were shrinking vacuum tubes
3:57 to keep this exponential growth going.
3:59
4:02 Gordon Moore originally said Moore’s Law
4:04 would come to an end in 2002.
4:06 Justin Rattner, the CTO of Intel, now says 2022.
4:09 But he’ll also show you, in their labs,
4:12 the sixth paradigm– three-dimensional,
4:14 self-organizing molecular circuits
4:16 working, which will keep the law of accelerating returns going
4:20 well into the latter part of this century.
4:23 So what is the law of accelerating returns?
4:25 It’s not that everything grows exponentially, or even
4:27 every aspect of information technology.
4:30 We’re speeding up transistors, making
4:32 them smaller so the electrons have less distance to travel,
4:35 so transistors got faster.
4:37 That was a sub-paradigm.
4:39 I always felt we were going too fast.
4:41 The human brain is actually very slow,
4:43 computes about 100 calculations per second
4:47 in the interneuronal connections.
4:49 But it’s massively parallel.
4:51 So there’s actually a 100 trillion-fold parallelism.
4:54 There’s 100 billion neurons with about 10,000 connections each,
4:59 and that’s where the computations take place.
5:01 So it’s very massively parallel, but slow.
5:03 So it uses very little energy, and we
5:06 needed to move more in that direction,
5:08 towards more parallelism and less speed.
5:10 So speeding up transistors was just one sub-paradigm.
5:15 And we already have taken steps in the third dimension.
5:19 Many memory circuits are already three-dimensional.
5:22 The law of accelerating returns is not
5:25 limited at all to Moore’s Law.
5:27 Moore’s Law is one paradigm among many in computation.
5:30 Computation is one example of many
5:33 of the law of accelerating returns.
5:35 So what exactly does it pertain to?
5:37 It pertains to the price performance and capacity
5:41 of information technology, and every form
5:45 of information technology.
5:46 So in computation, calculations per second per constant dollar
5:50 has been speeding up at an exponential pace since the 1890
5:53 census.
5:54 But it’s also true of communications,
5:56 biological technologies, brain scanning, modeling the brain,
6:02 being able to reprogram biological data,
6:05 printing– the spatial resolution
6:08 of three-dimensional printing, which is turning information
6:11 into physical products– these are
6:12 all examples of either price performance or capacity
6:16 of information technology.
6:18 And I’ll show you this first example.
6:21
6:25 I mean, this is the graph I had in 1981 of computation.
6:28 It’s a logarithmic scale.
6:29 Every level on this graph is 100,000 times greater
6:31 than the level below it.
6:33 And so this represents a trillions-fold increase
6:36 in calculations per second per constant dollar.
6:39 Moore’s Law is the fifth paradigm.
6:42 But notice how smooth a trajectory that is.
6:46 It really has a mind of its own.
6:47 It goes through thick and thin, through war and peace.
6:49 And exponential growth– the second point I want to make–
6:52 is not intuitive.
6:54 If you ever wonder, gee, why do I have a brain?
6:56 It’s really to make predictions about the future
6:58 so we can anticipate the consequences of our action,
7:01 and the consequences of inaction.
7:03 But those built-in predictors are linear.
7:06 When we were walking through the fields 1,000 years ago,
7:08 we would make a prediction– OK, that animal is going that way.
7:11 I don’t want to meet him.
7:12 I’m going to go a different route.
7:13 That was good for survival.
7:14 That became hardwired in our brains,
7:16 but those predictors are linear.
7:18 And people still use their linear intuition
7:22 about the future.
7:23 That’s the principal difference between myself and my critics.
7:26 We’re both looking at the same reality.
7:28 We have similar judgments about it.
7:30 And if I thought progress was linear,
7:32 I’d be pessimistic also.
7:35 And many things are linear.
7:37 Biology– health and medicine was linear.
7:39 That was still useful.
7:41 The life expectancy was 19 one thousand years ago.
7:43 We’ve quadrupled it.
7:45 I talked recently to some junior high school students
7:47 and pointed out they would all be senior citizens
7:49 if it hadn’t been for that progress.
7:51 Life expectancy was 37 two hundred years ago.
7:55 Schubert and Mozart died in their 30s
7:56 of bacterial infections, and that was typical.
7:59 That’s all been from linear progress.
8:01 Exponential progress is quite different.
8:04 A linear progression– that’s our intuition about the future–
8:07 goes one, two, three.
8:09 An exponential progression– that’s
8:10 the reality, not of everything, but of the price performance
8:14 and capacity of information technology–
8:16 goes one, two, four.
8:18 It doesn’t sound that different, except by the time
8:20 you get to 30, the linear progression– our intuition’s
8:23 at 30.
8:24 The exponential progression is at a billion.
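A quick worked check of that comparison, as a short Python sketch (illustrative only; the 30 steps are the example used above):

    # Linear vs. exponential progression after 30 steps.
    steps = 30
    linear = steps            # 1, 2, 3, ... reaches 30
    exponential = 2 ** steps  # 1, 2, 4, ... reaches 2**30
    print(linear)             # 30
    print(exponential)        # 1073741824, on the order of a billion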
8:29 And that’s not an idle speculation.
8:31 I mean, this is several billion times
8:32 more powerful per dollar than the computer
8:35 I used when I was an undergraduate at MIT.
8:37 It’s several trillion-fold since we
8:40 started with the 1890 American census.
8:43 But again, look at how smooth a progression that is.
8:45 Nothing has any impact on it.
8:48 The third point I want to make is
8:49 that it’s not just computation.
8:51 It really affects every form of information technology.
8:54 And information technology is ultimately
8:56 going to transform everything we care about,
8:59 as a result of application developers like yourself.
9:03 I mean that’s what drives it forward,
9:04 and I’ll say more about that.
9:07 We could buy one transistor for $1 in 1968.
9:10 I was pretty excited about that because I
9:12 was used to spending $50 for a telephone relay
9:14 that could do that.
9:14 We can now buy 10 billion.
9:17 Cost of a transistor cycle has come down by half every year.
9:20 That’s a 50% deflation rate, so I
9:24 can get the same computation, or communication,
9:27 or biological technologies as I could
9:29 a year ago for half the price.
9:31 Economists actually worry about that because we
9:33 had massive deflation during the Great
9:35 Depression– for a different reason.
9:37 There was a collapse of consumer confidence.
9:39 But the concern is, if I can get the same stuff–
9:41 and I’ll talk about three-dimensional printing
9:43 in a moment to include physical stuff– for half the price,
9:48 I’ll buy more.
9:49 I mean, that’s economics 101, but I’m actually
9:51 going to double my consumption.
9:53 And if I don’t, let’s say I increase my consumption
9:56 in terms of bits and bytes 50%, which
9:58 is a lot– the size of the economy, not as measured
10:02 in bits, bytes, and base pairs, but as measured in currency–
10:05 will shrink for a variety of good reasons.
10:08 That would be a bad thing.
10:10 But that’s not what happens.
10:12 I’ve got 50 different consumption charts like this.
10:14 This is bits of memory consumed.
10:17 We actually more than double it every year.
10:19 There’s been 18% growth in constant currency each year
10:23 for the last 50 years in every form of information technology,
10:26 despite the fact that you can get twice as much of it
10:29 each year for the same price.
10:31 And the reason for that is application developers
10:35 like all of you.
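The arithmetic behind those two figures, as a small Python sketch (the 50% yearly price drop and the 18% constant-currency growth are the numbers quoted above; the implied consumption factor is just their ratio):

    # If the price per bit halves each year while spending in constant
    # currency still grows 18%, the quantity consumed must grow by
    # about 2.36x per year.
    price_factor = 0.5       # same capability costs half as much a year later
    spending_growth = 1.18   # 18% growth in constant currency
    quantity_growth = spending_growth / price_factor
    print(quantity_growth)   # 2.36 -- consumption more than doubles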
10:37 I mean, why weren’t there social networks eight years ago?
10:40 Was it because Mark Zuckerberg was still
10:42 in junior high school?
10:43 No, there were attempts to do it,
10:45 and there are arguments– can we afford
10:47 to allow users to download a picture?
10:49 The price performance wasn’t there.
10:51 Why weren’t there search engines more than 15 years ago?
10:55 I wrote in the early ’80s, when the ARPANET connected
10:58 a few thousand scientists, that this was growing exponentially,
11:01 and would necessarily continue to do so.
11:02 This would therefore be a World Wide Web–
11:04 I didn’t use that term– connecting hundreds of millions
11:07 of people to each other, and to vast knowledge resources
11:10 by the late ’90s.
11:11 People thought that was nuts when
11:13 the entire American defense budget could tie together
11:15 a few thousand scientists, but that’s
11:18 the power of exponential growth.
11:19 That’s what happened.
11:21 And I wrote that there would be so much information
11:23 you couldn’t find anything without search engines,
11:25 and the computational communication resources
11:28 needed for a search engine would come into place.
11:31 What you could not predict is that it
11:33 would be these couple of kids in a Stanford dorm near here,
11:36 who would take over the world of search among the 50 projects
11:39 that were seeking to do that.
11:40 I’m not saying everything is predictable,
11:42 but you could predict that search engines would
11:45 be needed and feasible in the late ’90s.
11:47 And Google was founded in ’98.
11:50 So “Time Magazine” wanted a particular computer
11:54 they’d covered as the last point on this cover story
11:58 about the law of accelerating returns.
11:59 It’s right there.
12:00 And this was a curve I’d laid out 30 years earlier.
12:03 So this is, in fact, very predictable.
12:06 And in terms of all my predictions,
12:08 when they’re like this, in terms of actual numbers– price
12:12 performance or capacity of different information
12:14 technologies– they’re really right on the money.
12:16 If you Google how my predictions are faring,
12:19 you’ll get a 150-page essay looking at all the predictions
12:25 that I’ve made as of that time, which was a couple of years
12:27 ago, including 147 that I made in “The Age
12:30 of Spiritual Machines,” which I wrote in the late ’90s,
12:33 came out in ’99, about 2009– 78% were exactly
12:38 correct to the year.
12:40 And these were predictions that were by decade.
12:45 Another 8% were off one year, so I
12:47 called them essentially correct.
12:48 So 86% were correct or essentially correct.
12:52 The ones that were wrong, included things
12:56 like regulatory and cultural issues.
13:00 Like, for example, one that was wrong
13:01 is that we’d have self-driving cars, which, in fact, showed
13:06 a glimmer of working in 2009.
13:08 If I had said 2014, it would have been correct,
13:12 because Google self-driving cars have already gone 8,000 miles.
13:16 And Google’s going to launch a fleet of experimental vehicles
13:20 in Mountain View this year.
13:21 But it wasn’t correct for 2009.
13:26 The number of bits we move around wirelessly
13:28 in the world over the last century–
13:31 there was Morse code, over AM radio a century ago,
13:35 today’s 4G networks– trillions-fold increase.
13:38 But again, look at how smooth a trajectory it is.
13:42 Internet data traffic doubling every year.
13:44 Here’s that graph I had of the ARPANET in the early ’80s,
13:48 and projected that out.
13:50 The graph on the right is the same data,
13:52 but on a linear scale.
13:54 And that’s how we experience it.
13:56 We don’t experience it in the logarithmic domain.
13:58 So to the casual observer, it looked
14:00 like, whoa, World Wide Web, new thing, came out of nowhere.
14:03 But you could see it coming.
14:05 And biology, this has been a perfect exponential.
14:10 The Genome Project was announced in 1990,
14:12 was not a mainstream project.
14:14 Halfway through the project, critics
14:16 blasted it, saying, look, here it’s halfway
14:19 through this 15-year project.
14:20 You’ve only collected 1% of the genome.
14:22 So seven years, 1%, it’s going to take 700 years,
14:25 just like we said.
14:27 My reaction is, we’re at 1%.
14:29 We’re almost done, because it’s an exponential progression.
14:34 [LAUGHTER]
14:35 And indeed, it was done seven years later,
14:37 because 1% is seven doublings from 100%.
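That “seven doublings” remark checks out with a couple of lines of arithmetic (an illustration, not part of the talk):

    # Doubling 1% of the genome seven times overshoots 100%.
    fraction = 0.01
    for _ in range(7):
        fraction *= 2
    print(fraction)   # 1.28, i.e. past 100% after seven doublings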
14:40 That’s continued since the end of the Genome Project.
14:42 That first genome cost a billion dollars.
14:43 We’re now down to a few thousand dollars.
14:45 But it’s not just sequencing.
14:47 We can now reprogram this outdated data.
14:50 And there are many examples of that.
14:53 The Joslin Diabetes Center, they’ve
14:54 turned off the fat insulin receptor gene.
14:56 We have technologies like RNA interference
14:58 that can turn genes off.
14:59 These animals ate ravenously and remained slim,
15:02 and lived 20% longer.
15:04 I’ve worked with a company that adds a gene to patients
15:07 with a terminal disease called pulmonary hypertension, caused
15:10 by one missing gene.
15:11 They scrape out these cells non-invasively from the throat,
15:14 add the gene in vitro, inspect that it got done correctly,
15:17 replicate it several million fold– another new technology.
15:21 So they now have millions of cells with that patient’s DNA,
15:23 but with the gene they’re missing.
15:25 Inject it back in the body.
15:27 The body recognizes them as lung cells,
15:29 and this has cured this terminal disease in human patients.
15:34 There’s an interesting case combining
15:36 a number of different exponentially growing
15:38 information technologies, with this young girl who
15:41 had a damaged windpipe.
15:42 She was not going to be able to survive with it.
15:45 So they scanned her throat with noninvasive imaging– spatial
15:48 resolution of noninvasive imaging
15:50 is doubling every year– that’s an important technology
15:53 for reverse engineering the brain– designed
15:56 her new windpipe in the computer using computer design,
16:00 printed it out with a 3D printer using biodegradable materials,
16:04 then populated this scaffolding with her stem cells,
16:08 using the same 3D printer, grew out a new windpipe for her
16:12 and installed it surgically.
16:13 And it worked fine.
16:16 You can now fix a broken heart– not yet from romance.
16:20 That’ll take us a few more decades.
16:22 [LAUGHTER]
16:23 But half of all heart attack survivors have a damaged heart.
16:26 It’s called low ejection fraction.
16:28 My father had that in the ’60s, and could hardly walk.
16:31 Now you can fix that by reprogramming adult stem cells,
16:34 and rejuvenating the heart.
16:36 You have to be a medical tourist, because it’s not
16:38 yet approved here, although it will be soon.
16:41 There are many other examples.
16:42 We could talk all day about this,
16:45 but our ability to reprogram this outdated software
16:48 is growing exponentially.
16:50 These technologies are now 1,000 times more powerful
16:52 than they were a decade ago, when
16:54 the Genome Project was completed.
16:56 They’ll be another 1,000 times more powerful in a decade,
16:58 a million times more powerful in 20 years.
17:01 That’s the implication of a doubling in power every year.
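The 1,000-fold and million-fold figures follow directly from yearly doubling; a quick check:

    # Doubling every year compounds to roughly 1,000x per decade
    # and roughly 1,000,000x over two decades.
    print(2 ** 10)   # 1024
    print(2 ** 20)   # 1048576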
17:04 Somewhere between that 10 and 20 year mark,
17:06 we’ll see significant differences in life
17:09 expectancy– not just infant life expectancy,
17:11 but your remaining life expectancy.
17:14 The models that are used by life insurance companies
17:17 sort of continue the linear progress
17:19 we’ve made before health and medicine
17:22 was an information technology, which
17:23 is based on, basically, accidental findings.
17:27 This is going to go into high gear.
17:31 Life expectancy is a statistical phenomenon.
17:34 You could still be hit by the proverbial bus tomorrow.
17:37 Of course, we’re working on that here at Google,
17:39 also, with self-driving cars.
17:41
17:43 And three-dimensional printing– I
17:45 think we’re in the hype phase of this.
17:49 I’ve written about the life cycle of technologies.
17:52 Usually there are early enthusiasts who see the vision,
17:55 but haven’t really calculated the timing correctly–
17:59 and exponential growth ultimately
18:01 becomes transformative, but it actually
18:02 starts out very slowly.
18:04 You’re doubling tiny, little numbers,
18:05 like 1/10,000 of the genome in 1990, 2/10,000 in 1991.
18:11 So the progress doesn’t come on schedule.
18:13 Disillusionment sets in, and then you have basically a bust.
18:19 And then it comes back, as we really,
18:21 truly understand the true markers
18:23 of what it will take to be successful.
18:27 This looks like a young audience,
18:28 but you may remember the internet boom in the 1990s.
18:32 If you had the URL dog.com, you were a billionaire.
18:36 Then around the year 2000, people
18:38 realized you can’t really make money
18:40 with these internet companies.
18:41 And there was the internet bust, which
18:43 almost took down the economy.
18:45 But then it came back, and today you
18:46 have internet companies like Google, and Apple, and Facebook
18:51 that are worth hundreds of billions of dollars.
18:55 And we’re kind of in that hype phase now.
18:58 I think we’re still five or six years away from the parameters
19:03 we need to make this successful.
19:04 We need sub-micron resolutions.
19:07 The resolution accuracy is improving
19:09 by a factor of about 100 in 3D volume per decade,
19:12 so it’s exponential progress.
19:14 But right now, it’s multi-micron.
19:16 We can do interesting things.
19:18 We had an opening a year ago at Singularity University
19:22 where the band– all the instruments that the band was
19:25 playing were printed out on three-dimensional printers.
19:27 So there are interesting niche applications.
19:30 But by 2020, we’ll be able to print out clothing,
19:32 for example.
19:33 So people will go– great, there goes the fashion industry.
19:36 But not so fast.
19:38 I mean, look at other industries that have already
19:42 undergone the transformation from physical products
19:44 to digital products.
19:46 A few years ago, if I wanted to send you a book, or a movie,
19:48 or a music album, I’d send you a FedEx package.
19:51 Today, I can send you an email attachment.
19:54 And there is indeed an open source market
19:58 with millions of free songs, videos, movies, books,
20:02 documents that you can download legally for free.
20:05 And you can have a very good time with these free media
20:08 products.
20:09 But people still spend money to read Harry Potter,
20:11 or go to the latest blockbuster, or get music
20:14 from their favorite artists.
20:15 And you have the coexistence of the open source market, which
20:18 is a great leveler, bringing high-quality products
20:21 to everyone at almost no cost, or no cost,
20:24 and a proprietary market.
20:26 That’ll be the nature of the economy going forward.
20:28 So in the 2020s, you’ll be able to download cool fashion
20:32 designs, print them out at pennies per pound,
20:34 which is what it costs for three-dimensional printing.
20:37 But people will still spend money for the latest
20:41 hot designs from their favorite designer.
20:42
20:45 And I mentioned I’ve been thinking
20:47 about thinking for a long time.
20:48 50 years ago, I wrote a paper when
20:51 I was 14, on how I thought the brain worked.
20:54 There was actually very little to go on.
20:56 But I described it as a series of modules,
20:58 and each module could recognize a pattern.
21:00 And the essence of the human brain was pattern recognition.
21:03 We actually weren’t very good at logical thinking.
21:06 Even then, I could see that chess computers were based
21:09 on logic, and being able to look ahead, and calculate
21:12 all the kind of move sequences.
21:15 The human brain did it by deep forms of pattern recognition.
21:19 In ’97, Kasparov was asked Deep Blue analyzes 200,000 board
21:24 positions a second.
21:25 How many do you analyze?
21:27 And he said, maybe less than one.
21:29 So how is it that he was able to hold up at all?
21:31 It’s his deep powers of pattern recognition.
21:34 And I described it as a series of modules, each of which
21:37 could learn a pattern, remember a pattern, implement a pattern,
21:40 discover a pattern.
21:42 And that these were organized in hierarchies,
21:44 and we created that hierarchy with our own thinking.
21:47 And that the whole neocortex worked the same way.
21:50 And that was actually contrary to the common wisdom
21:53 of the time, because it was noticed
21:55 that there are these different regions, and they
21:57 do very different things.
21:58 So it was thought they must be organized very differently.
22:01 V1 in the back of the head recognizes visual images,
22:04 because that’s where the optic nerve spills into,
22:07 and it can recognize the fact that the edge of this table
22:09 is flat, that there’s a horizontal crossbar
22:13 in that capital A, and so on.
22:16 The fusiform gyrus up here recognizes faces.
22:19 We know that because if you conk somebody
22:21 over the head in that region, and knock it out,
22:23 people can’t recognize faces.
22:25 The frontal cortex is famous for language, and art, and science.
22:29 They do very different things.
22:31 They must be using different algorithms.
22:34 There was one neuroscientist who actually did
22:36 autopsies of the neocortex in all of these different regions,
22:40 and found they looked exactly the same–
22:42 the same neurons, the same interconnection patterns.
22:44 He said neocortex is neocortex– Vernon Mountcastle.
22:49 And so I use that.
22:51 And I also use observations of human brains
22:55 in action, which is still our best laboratory,
22:57 and described this basic method.
23:00 This actually describes the same algorithm,
23:05 and actually describes how each of these modules,
23:08 which I count now at 300 million,
23:10 can recognize a pattern, learn a pattern.
23:12 It’s basically functionally similar
23:16 to a hierarchical hidden Markov model, which
23:19 is a technology I worked on in the 1990s in speech recognition,
23:23 and early forms of natural language understanding.
23:26 And it’s a little bit different than neural nets,
23:29 because neural nets– the basic element is a neuron, which
23:32 can kind of learn one state, not really a whole pattern.
23:37 And in my view, the basic module is a pattern-recognition module
23:42 that can learn a more complex pattern.
23:44 And a hierarchical hidden Markov model
23:46 is a hierarchy of Markov models, each of which
23:49 can learn a fairly complicated pattern.
23:52 I believe that is how the human brain works.
23:55 And that’s what I’m doing here at Google.
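As a very rough picture of what “a hierarchy of modules, each of which learns and recognizes a pattern over the outputs of the level below” can look like in code — a toy Python sketch with invented names, not the hierarchical hidden Markov models or any actual system described here:

    class PatternRecognizer:
        """One module: fires when the lower-level labels match its pattern."""
        def __init__(self, name, expected):
            self.name = name          # label this module emits upward
            self.expected = expected  # sequence of lower-level labels

        def recognize(self, inputs):
            return list(inputs) == list(self.expected)

    # Lower level: letter sequences recognized as words.
    word_modules = [PatternRecognizer("cat", ["c", "a", "t"]),
                    PatternRecognizer("sat", ["s", "a", "t"])]

    # Higher level: a word sequence recognized as a phrase.
    phrase_module = PatternRecognizer("cat sat", ["cat", "sat"])

    letter_chunks = [["c", "a", "t"], ["s", "a", "t"]]
    words = [m.name for chunk in letter_chunks
             for m in word_modules if m.recognize(chunk)]
    print(words)                           # ['cat', 'sat']
    print(phrase_module.recognize(words))  # True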
23:58 I gave an early version of this book
24:00:00 to Larry Page a couple of years ago.
24:03:00 He liked it.
24:04:00 I met with him to ask him for an investment in the company I was
24:07:00 going to start to develop these ideas.
24:10:00 And he said, have you thought of doing this here at Google?
24:14:00 We have these terrific resources in terms
24:18:00 of data, and computation, and talent.
24:20:00 He actually said it in a very low key way,
24:22:00 but that was his message.
24:25:00 So I met with him and Alan Eustace,
24:30:00 who’s here in the audience.
24:32:00 They said I’d have the kind of independence
24:34:00 I’d have with my own company, but these Google resources–
24:37:00 and so I’ve been doing that now for a year and a half.
24:42:00 And it’s been terrific.
24:43:00 It’s really the only place I could do this project, which
24:46:00 has been a 50-year endeavor.
24:49:00 And the spatial resolution of brain scanning, as I mentioned,
24:52:00 is doubling every year.
24:53:00 We can now see inside a living brain,
24:54:00 see it create your thoughts, see your thoughts
24:56:00 create your brain.
24:58:00 And there’s a few significant milestones
25:01:00 in the history of the biological version of this thinking.
25:04:00 You see it up there– the neocortex.
25:06:00 The neocortex emerged 200 million years ago with mammals.
25:10:00 It was capable of a different type of thinking,
25:13:00 basically hierarchical thinking.
25:15:00 Other animals could learn things,
25:17:00 but not with elaborate hierarchies.
25:19:00 They could have a behavior that might
25:21:00 have a hierarchical aspect to it, but it was fixed.
25:24:00 They couldn’t learn a new hierarchy–
25:26:00 at least not in one lifetime.
25:27:00 Maybe over 10,000 lifetimes, they
25:29:00 could evolve a new behavior.
25:32:00 Next significant thing that happened– 65 million years
25:34:00 ago, there was a violent change in the environment.
25:37:00 We call it the Cretaceous Extinction Event.
25:39:00 That’s when the dinosaurs went extinct.
25:40:00 That’s when 75% of all the animal and plant species
25:44:00 went extinct.
25:46:00 And that’s when mammals overtook their ecological niche,
25:49:00 because they could adapt their behavior quickly enough
25:51:00 to cope with it.
25:53:00 Next significant milestone was 2 million years ago.
25:56:00 We developed these large foreheads,
25:58:00 so we now had more neocortex.
26:00:00 That neocortex has all these folds and convolutions
26:03:00 basically to increase its surface area.
26:05:00 It’s a flat structure.
26:06:00 It’s about the size of a table napkin and just as thin.
26:09:00 It’s one module thick, but it has so many convolutions
26:12:00 and ridges, it’s 80% of your brain.
26:15:00 And the frontal cortex– it’s often been thought
26:18:00 it must be qualitatively different,
26:20:00 because it does these amazing things, like art and poetry.
26:25:00 But recent research projects examined
26:31:00 the issue of what happens to V1, which I mentioned
26:34:00 is in the back of the head, and does these very simple things,
26:37:00 like the fact that that’s a straight line– what
26:40:00 happens to it in a congenitally blind person?
26:43:00 They’re not getting any visual images.
26:46:00 Well, the frontal cortex notices hey, V1 isn’t doing anything,
26:49:00 and it actually harnesses it to help it with language,
26:51:00 and art, and science– at the opposite extreme end
26:55:00 of the continuum of complexity of features,
26:58:00 showing that they’re both basically using
27:00:00 the same algorithm.
27:03:00 And so we were already doing a very good job as primates
27:07:00 without the frontal cortex.
27:08:00 Now we had this additional quantity,
27:11:00 and so we could think higher thoughts.
27:13:00 Because the neocortex is built on conceptual levels.
27:16:00 Each level is more abstract than the one below it.
27:19:00 So the first thing we invented was
27:21:00 language, and that was about a couple hundred thousand years
27:24:00 ago.
27:25:00 So I have an idea in my head.
27:27:00 It’s a hierarchy of other ideas, and symbols,
27:31:00 and other structures.
27:34:00 And I want to actually communicate that and basically
27:37:00 transmit that hierarchical structure to your neocortex.
27:41:00 So we invented language in order to do that.
27:44:00 And it’s a communication medium that is hierarchical,
27:48:00 so it could reflect the hierarchical structures
27:50:00 in the neocortex.
27:51:00 The neocortex was successful because the world
27:54:00 is hierarchical.
27:55:00 That’s the best way to understand it.
27:56:00 Trees have limbs.
27:58:00 Limbs have branches, branches have other branches.
28:00:00 Some branches have leaves.
28:01:00 The world is organized in a hierarchical fashion.
28:04:00 And we could now represent this in language.
28:07:00 And if you ever want to see some entertaining examples
28:11:00 of the hierarchy in language, read a Gabriel Garcia Marquez
28:19:00 novel.
28:20:00 He has one sentence that’s six pages long,
28:22:00 and it’s grammatically correct.
28:24:00 And it has a fantastic array of hierarchical structures showing
28:28:00 the indefinite hierarchy we can create with language,
28:31:00 reflecting the indefinite hierarchy we
28:33:00 can have in our ideas.
28:36:00 And there’s then been a continual acceleration
28:40:00 of technology.
28:40:00 Written language only took a few thousand years.
28:42:00 The first examples were thousands of years ago.
28:45:00 The printing press took 400 years to reach a mass audience.
28:48:00 The telephone reached a quarter of the American and European
28:51:00 population in 50 years.
28:52:00 The cellphone took seven years.
28:54:00 Social networks, wikis, and blogs took three years.
28:57:00 We continually are accelerating, basically,
29:00:00 these information technologies because
29:02:00 of the law of accelerating returns.
29:05:00 And we can now simulate aspects of the neocortex.
29:09:00 And fundamentally what my team– and we’re not
29:12:00 the only team doing this– is trying
29:14:00 to create a functional simulation of the neocortex.
29:17:00 And I’ll tell you the key problem
29:19:00 is we can create hierarchies.
29:21:00 So even in the 1990s, we had a hierarchy of acoustic states,
29:25:00 and then phonemes, and then word models
29:28:00 for speech recognition, and then simple grammatical models,
29:31:00 so that we could have a sentence like, “Move this paragraph to
29:35:00 after the third paragraph in the next page.”
29:36:00 And it would carry out that simple command.
29:40:00 But we couldn’t actually add a new layer ourselves.
29:43:00 That’s actually, if you want to speak technically,
29:46:00 the key technical challenge in trying to create more and more
29:50:00 flexible AI, is how do we create the next conceptual level
29:55:00 that’s more abstract than the ones we have, automatically
29:58:00 from the data, rather than reprogramming it ourselves
30:02:00 using our human intelligence?
30:03:00
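The “key technical challenge” just described — adding a more abstract level automatically from the data — can be pictured with a toy sketch like the following (an invented illustration of the general idea, not the approach his team uses, which the talk does not spell out). It promotes the most frequent pair of adjacent symbols to a single new higher-level symbol:

    # Grow one new level of abstraction from a symbol sequence by
    # merging the most frequent adjacent pair into a composite symbol.
    from collections import Counter

    def add_level(sequence):
        pairs = Counter(zip(sequence, sequence[1:]))
        best, _ = pairs.most_common(1)[0]
        merged, i = [], 0
        while i < len(sequence):
            if i + 1 < len(sequence) and (sequence[i], sequence[i + 1]) == best:
                merged.append(best)   # new, more abstract symbol
                i += 2
            else:
                merged.append(sequence[i])
                i += 1
        return merged

    seq = ["the", "cat", "sat", "on", "the", "cat"]
    print(add_level(seq))  # [('the', 'cat'), 'sat', 'on', ('the', 'cat')]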
30:11:00 I’ll skip some of this to– at the very high level,
30:16:00 you have very abstract ideas, like that’s funny,
30:20:00 that’s ironic, she’s pretty.
30:22:00 This 16-year-old girl is having brain surgery.
30:25:00 And whenever they stimulated these points showed in red,
30:27:00 she would laugh.
30:29:00 They wanted to be able to talk to her.
30:30:00 There’s no pain receptors in the brain,
30:32:00 so you can do that during brain surgery.
30:34:00 And they thought they were stimulating some kind of laugh
30:36:00 reflex, but they quickly realized that no, they
30:38:00 were triggering the genuine perception of humor.
30:42:00 She just found everything hilarious
30:44:00 whenever they stimulated these points.
30:45:00 You guys are so funny just standing around,
30:47:00 was a typical comment.
30:50:00 And they weren’t funny, so–
30:52:00 [LAUGHTER]
30:54:00 So an example from another company
30:56:00 that shows our beginning ability to actually understand
31:00:00 human language is WATSON.
31:02:00 As you can see, WATSON got a higher score
31:04:00 than the best two human players in Jeopardy combined.
31:07:00 It got this query correct in the rhyme category,
31:09:00 “A long, tiresome speech delivered
31:12:00 by a frothy pie topping,” and WATSON quickly
31:14:00 said, “What is a meringue harangue?”
31:17:00 And WATSON got its knowledge by reading 200 million documents
31:20:00 of natural language, including all
31:23:00 of Wikipedia and other encyclopedias.
31:25:00 It doesn’t understand each page as well as you or I,
31:27:00 but it makes up for that by reading a lot of pages.
31:31:00 And that’s the kind of thing we’re trying to do here.
31:34:00 We have a model that I believe actually
31:37:00 will solve this key problem of being
31:41:00 able to add to the hierarchy automatically,
31:44:00 so that we can handle, ultimately, complex documents.
31:47:00 So one application, for example, would
31:49:00 be in language translation.
31:50:00 Right now, it does a very good job through the power of data,
31:52:00 and these Rosetta Stone databases of translated text.
31:56:00 By matching word sequences, we’re
31:58:00 improving the way that we match them.
32:00:00 But we really would like to do it
32:01:00 the way a human does it, which is to understand it.
32:04:00 What does it mean to understand?
32:05:00 It means to take the language, and actually
32:06:00 create this hierarchical structure
32:08:00 of the ideas in my head, and then resynthesize it,
32:11:00 re-articulate it in the new language.
32:13:00 That’s the kind of thing we hope to do.
32:14:00 We’d like the search engine to read for meaning.
32:18:00 So if you put out a tweet, “Everything Ray Kurzweil
32:21:00 is saying at I/O is nonsense,” there’s
32:25:00 actually, in that simple text, a hierarchy,
32:28:00 which you need to understand to really understand
32:30:00 what that’s trying to say.
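One way to picture the hierarchy hiding in that one-line tweet is as a nested structure (hand-built for illustration, not the output of any real parser):

    # The tweet "Everything Ray Kurzweil is saying at I/O is nonsense"
    # sketched as a nested structure of ideas.
    tweet = ("is",                          # top level: X is Y
             ("everything",                 # subject: everything ...
              ("saying",                    # ... Ray Kurzweil is saying ...
               "Ray Kurzweil",
               ("at", "I/O"))),             # ... at I/O
             "nonsense")                    # predicate

    def depth(node):
        # How many levels of structure even a one-line tweet carries.
        if not isinstance(node, tuple):
            return 0
        return 1 + max(depth(child) for child in node)

    print(depth(tweet))   # 4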
32:32:00 If you write a blog post, you have something to say,
32:34:00 and the search already goes substantially
32:36:00 beyond the base forms.
32:39:00 It will understand the syntactic structure.
32:42:00 If you see the word “he,” it’ll do
32:43:00 that co-reference resolution.
32:45:00 It’ll understand synonyms.
32:47:00 But it’s not fully modeling the ideas
32:49:00 that you have to say when you write an article or a blog
32:53:00 post.
32:54:00 And that’s what we would like to actually understand.
32:57:00 And then you would be able to dialogue
32:59:00 with your search engine to give it complex tasks,
33:02:00 and interact with it the way you would with a human assistant.
33:06:00 And it would then go out.
33:07:00 And maybe it wouldn’t even find the information that day,
33:10:00 but a week later, would pop up and say,
33:12:00 you asked me this question a week ago,
33:14:00 and new research just came out that answers that question,
33:18:00 and so on.
33:19:00
33:24:00 So let’s stop here, and I notice that time clock isn’t working,
33:29:00 so I have no idea where we are.
33:30:00 But RJ, maybe you’ve got some questions.
33:36:00 RJ MICHAEL: Yes, indeed.
33:38:00 So I’ve gathered up some questions
33:40:00 from people, fellow Googlers, from some of our friends
33:44:00 throughout the community.
33:46:00 And we’re going to take this opportunity
33:49:00 to ask some of these questions, starting
33:52:00 with a quote from William Gibson,
33:55:00 the famed author who said that, the future is here.
33:58:00 It’s just unevenly distributed.
34:01:00 Where do you think things are running fast,
34:03:00 and where they lagging further behind
34:05:00 than what you would have expected?
34:08:00 RAY KURZWEIL: Well, I think it’s actually
34:10:00 very widely distributed.
34:12:00 Companies like Google– and not just
34:13:00 Google– Apple, Microsoft, Facebook–
34:16:00 are not just using these technologies
34:19:00 with a few corporations and government agencies.
34:21:00 It’s in billions of hands.
34:24:00 Google search is used by between 1 and 2 billion people,
34:28:00 and we hope to expand that to the next couple billion users
34:32:00 and a couple billion after that.
34:34:00 That’s the business model, and I believe
34:38:00 that is actually how we will use these technologies.
34:40:00 They’ll be very widely distributed.
34:42:00 And they’re very democratizing.
34:45:00 The technologies that move very smoothly
34:47:00 are the sort of pure application of the law of accelerating
34:49:00 returns.
34:50:00 When you get into regulatory issues,
34:52:00 like we have with the self-driving cars, maybe
34:56:00 a little less predictable.
34:57:00 There’s a lot of regulation in medicine.
35:00:00 But I believe these technologies ultimately
35:03:00 will be so profoundly superior, that they will actually
35:07:00 accelerate these regulatory processes as well.
35:10:00
35:12:00 RJ MICHAEL: So you outlined in your book, “How
35:15:00 to Create a Mind,” the idea of what it’s
35:19:00 going to take to actually create a mind.
35:20:00 And you’ve chosen to pursue these ideas here at Google.
35:25:00 Why Google?
35:28:00 RAY KURZWEIL: Well, it’s actually
35:29:00 the first time I’ve done that.
35:32:00 But I’ve realized that you need unique resources to do this.
35:37:00 It’s a very difficult problem.
35:39:00 So for one thing you need a tremendous amount of talent.
35:42:00 That’s, I think, the primary resource that’s unique–
35:48:00 maybe not completely unique, but it’s certainly
35:50:00 evident at Google.
35:52:00 And then you want to run something
35:54:00 on a million computers, and you want tremendous data
35:57:00 that reflects language.
35:59:00 And we have tens of billions of pages–
36:01:00 virtually all books and web pages.
36:05:00 And so this is not a project I could do with my own company,
36:08:00 even if I raised all the money that I could hope for.
36:13:00 And it’s a bold company that takes on major challenges,
36:18:00 and tries to improve the world with these applications,
36:21:00 and make them widely available.
36:24:00 So I like the philosophy of the leadership.
36:28:00 RJ MICHAEL: Me too.
36:30:00 It’s true.
36:33:00 But I’m an engineer at heart, and the engineer in me
36:36:00 wants to know how you intend to build this thing.
36:39:00 Could you describe to us what the engineered mind is like?
36:44:00 What tech must we implement versus
36:46:00 what behavior do we expect to emerge?
36:49:00 What must be done in software?
36:52:00 What must be done in hardware?
36:54:00 RAY KURZWEIL: Well, on the hardware requirements,
36:56:00 I mean, to functionally emulate the human brain–
36:58:00 I’ve analyzed that.
37:00:00 I estimated it at about 10 to the 14th calculations
37:03:00 per second in “The Singularity Is Near.”
37:06:00 So I hedged that a bit and said 10 to the 14th to 10
37:08:00 to the 16th.
37:10:00 I’ve reanalyzed it using different methods in “How
37:12:00 to Create a Mind” that come up again with 10 to the 14th.
37:15:00 There’ve been a number of independent analyses of that.
37:18:00 They come up with the same figure.
37:20:00 We’ve already surpassed that by three orders of magnitude
37:24:00 in supercomputers.
37:27:00 It’d be hard to provide 10 to the 14th calculations
37:30:00 per second to all of a billion users
37:33:00 kind of using it more or less at the same time.
37:36:00 I’ve discussed this with Larry Page,
37:38:00 and he thinks no, actually that could be possible.
37:40:00 But the law of accelerating returns
37:43:00 will make that easy by the early 2020s.
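For a feel of the scale being discussed, here is the arithmetic as a short sketch (the 10 to the 14th figure and the billion simultaneous users are from the answer above; everything else is just multiplication):

    # Rough scale of serving brain-level computation to a billion users.
    per_user = 10 ** 14          # calculations per second, his estimate
    users = 10 ** 9              # "a billion users ... at the same time"
    print(f"{per_user * users:.0e}")  # 1e+23 calculations per second
    # Price performance doubling yearly grows about 2**8 = 256-fold
    # between 2014 and the early 2020s.
    print(2 ** 8)                # 256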
37:46:00 So it really comes down to a software problem.
37:49:00 I described, I think, the key software problem.
37:52:00 We can already create these hierarchies.
37:54:00 In Google, and in other companies,
37:56:00 there’s debate between several different learning methods,
37:59:00 and they have pros and cons.
38:02:00 We need one that can actually represent hierarchies
38:05:00 of complicated patterns, where each pattern has
38:08:00 its position in a complicated hierarchy of patterns.
38:12:00 And the key unsolved problem is, how do we then
38:15:00 add a conceptually more abstract level to that?
38:19:00 And I think we can use machine learning
38:22:00 to find the patterns at that high level,
38:25:00 but you need to be able to model them correctly.
38:27:00 And so that’s what we’re exploring.
38:29:00 And then applying it to language.
38:31:00 And ultimately, Google will apply it
38:33:00 to other types of input, like videos and pictures.
38:35:00 And we’re already making a lot of progress.
38:38:00 Machine learning at Google is already very powerful.
38:41:00 Once we have a system that’s working,
38:43:00 there will be little loops that are very tight that
38:46:00 are taking up the bulk of the computation
38:48:00 that we could put in an ASIC, in a dedicated
38:51:00 circuit, because you can get a 1,000-fold increase in price
38:55:00 performance by hard-wiring repetitive algorithms
38:59:00 in hardware.
39:00:00 And there are attempts, of course, to do that.
39:03:00 I think it’s premature now, because we haven’t really
39:05:00 settled on the right type of machine learning.
39:08:00 So we don’t really know what algorithm to speed up.
39:10:00 But that’ll be a straightforward engineering trade-off,
39:13:00 once we can actually mess with the software.
39:16:00 So it’s fundamentally a software problem.
39:19:00 RJ MICHAEL: So most of the people in this room
39:21:00 are engineers, or are heavily involved in app development
39:24:00 as well.
39:25:00 And everything that you’re talking about here, these
39:28:00 are exciting visions to a lot of people here,
39:30:00 and myself as well.
39:32:00 But what can the developers in this room
39:35:00 do to turn these ideas of yours into actual working
39:38:00 products and systems?
39:40:00 What role do these developers play in pushing all of this
39:43:00 forward for us?
39:45:00 RAY KURZWEIL: Well it’s application developers
39:46:00 that drive it forward.
39:48:00 And I mean, there’s a debate in the AI
39:51:00 field between traditional artificial intelligence,
39:54:00 and something called AGI, Artificial General
39:56:00 Intelligence, which is implicitly a criticism that AI
39:59:00 has not pursued general intelligence,
40:02:00 and it’s gone often to narrow things
40:04:00 like OCRs, speech recognition, or robotics.
40:08:00 But I actually think we get from here to there– there
40:11:00 being future AI, strong AI– one step at a time.
40:16:00 And the steps are applications, and we
40:18:00 need to actually optimize the technology
40:20:00 for the applications.
40:21:00 It’s very hard to develop a technology
40:23:00 if you don’t have something to optimize it for.
40:26:00 So I like the idea of crossing the river
40:29:00 kind of from one stone to the next.
40:31:00 People say, what about that part of the river
40:33:00 where it’s too deep, and there are no stones?
40:35:00 So I’m not sure of the answer to that.
40:38:00 But we do get from here to there through one step at a time.
40:42:00 And each step is sort of benign.
40:44:00 It’s exciting in the application world,
40:47:00 but it’s not the grand step to AI.
40:53:00 But that’s how we’re going to get there,
40:55:00 and the application developers push it forward,
40:58:00 and make it practical, and provide an economic business
41:01:00 model for it.
41:04:00 RJ MICHAEL: So this law of accelerating returns
41:06:00 that you talk about– you’ve made
41:09:00 it clear that in the natural space it exists like this,
41:14:00 and in the technology space it’s definitely true.
41:17:00 It all starts to feel just like natural law,
41:20:00 like natural progression.
41:21:00 Why do we have to work toward it?
41:23:00 Why don’t we just sit back and let it happen for us?
41:26:00 RAY KURZWEIL: Yeah, that question comes up a lot–
41:28:00 why don’t we just sit back and let it happen?
41:31:00 Why are we working so hard?
41:33:00 And if we did that, it wouldn’t happen.
41:35:00 So what is actually predictable is the human passion
41:39:00 to make improvement, to use the computers of 2014
41:42:00 to create the computers of 2015.
41:44:00 We couldn’t do that a decade ago.
41:47:00 And we’re able to improve things in an exponential manner.
41:52:00 Things are 1x.
41:53:00 We try to make it 2x.
41:54:00 If they are 1,000x, we don’t seek to make it 1,001x.
41:58:00 We try to make it 2,000.
42:00:00 And we have the tools to do that.
42:03:00 I have a mathematical treatment in “The Singularity Is Near.”
42:05:00 The empirical data is the strongest evidence
42:09:00 for the law of accelerating returns.
42:10:00 But it is driven by application developers and technology
42:15:00 developers taking each step with the current state of the art.
42:18:00
42:23:00 RJ MICHAEL: And speaking about the curves,
42:25:00 and the rising curves– sorry, I lost my place on my page here.
42:34:00 Well, so let’s talk about specialized smarts, then.
42:39:00 There’s a role for specialized smarts,
42:41:00 and the neocortex seems to have a very general architecture–
42:45:00 repeated architecture, as you were mentioning–
42:47:00 but there also seems to be specific modules that
42:50:00 have evolved.
42:51:00 Do you think that there are specific functions
42:53:00 that we’re going to need to build for our learning
42:55:00 machines?
42:56:00 And how do you know when to go really specific,
43:01:00 and focus on one particular thing, versus just allowing
43:04:00 it to be handled by the general architecture?
43:07:00 RAY KURZWEIL: Well, we still have the old brain,
43:09:00 like the amygdala puts out an ancient cascade of hormones,
43:13:00 to prepare us for a fight or flight.
43:15:00 It’s no longer able to decide what
43:17:00 to be afraid of, so your boss walks in the room,
43:19:00 and whether that causes laughter or fear is really
43:22:00 up to the neocortex.
43:23:00 And the neocortex is a general architecture.
43:26:00 There are no specialized regions.
43:28:00 There’s no music region.
43:31:00 But the patterns in music, or even
43:33:00 particular types of music– whether it’s
43:36:00 Chopin waltzes, or hip-hop– are specialized types of knowledge.
43:41:00 And we have a limited capacity in neocortex,
43:45:00 so you can really be a world-class master of one
43:48:00 thing.
43:49:00 Einstein played the violin, but he was no Jascha Heifetz.
43:53:00 Heifetz was interested in physics,
43:55:00 but he was no Einstein.
43:58:00 We really need to devote the bulk of our neocortex
44:01:00 that’s not devoted to everyday concerns
44:04:00 to one type of knowledge, which has its own type of patterns,
44:09:00 and learn the patterns that others have created,
44:11:00 and then push it forward.
44:13:00 But really, the architecture is pretty much the same.
44:19:00 RJ MICHAEL: I find that fascinating.
44:20:00 It’s the same algorithm repeated.
44:23:00 So while it’s great for all of us
44:26:00 that personal technology has been
44:28:00 freeing us up, and empowering us all,
44:31:00 giving us a lot more free time, and giving us
44:33:00 more capabilities under our fingertips,
44:35:00 do you think that we are actually
44:37:00 going to make good use of it?
44:39:00 The thing that I keep wondering about
44:41:00 is are we essentially going to use this free time, and all
44:43:00 these extra capabilities that computers give to us, to allow
44:48:00 us to watch more television, and consume more sugary snacks?
44:52:00 Which is what I’m afraid of.
44:54:00 So how are we going to use this technology that you’re
44:57:00 developing, to ensure that we actually will live better?
45:01:00 RAY KURZWEIL: Well, there’s always pros and cons.
45:03:00 We just happen to be on the right slide here.
45:07:00 And this is just one perspective.
45:09:00 People quickly lose perspective.
45:11:00 We forget what things were like eight years
45:14:00 ago, before there were social networks, 15 years ago
45:16:00 before there were search engines.
45:18:00 Once these things happen, we assume
45:20:00 they have always been around.
45:22:00 People certainly forget what things were like 200 years ago,
45:24:00 when Thomas Hobbes described life
45:26:00 as short, brutish, disaster-prone, poverty-filled,
45:29:00 disease-filled.
45:30:00 Let’s take a quick, one-minute trip
45:32:00 through the last two centuries.
45:34:00 These are countries.
45:34:00 The big, red circle is China.
45:36:00 It does some interesting things.
45:37:00 Keep an eye on that.
45:38:00 The x-axis is income per person in today’s dollars,
45:42:00 so you can understand it.
45:43:00 So there were wealthy countries and poor countries,
45:45:00 but nobody was very wealthy.
45:46:00 Income per person was hundreds of dollars in today’s dollars.
45:51:00 On the y-axis is life expectancy.
45:53:00 It was in the 20s and 30s– worldwide average, 37.
45:57:00 So this was the beginning of the Industrial Revolution, started
46:00:00 in the textile industry in England in 1800.
46:03:00 A few countries are making progress.
46:07:00 But as you get to the 20th century, the 1900s,
46:10:00 you’ll see a wind that carries all
46:12:00 of these countries towards the upper right-hand corner
46:14:00 of the graph.
46:15:00 The have, have-not divide does not go away.
46:19:00 There’s still rich countries and poor countries.
46:21:00 But the countries that are worst off at the end of the process
46:24:00 are far better off than the countries that
46:26:00 were best off at the beginning of the process.
46:28:00 And I shouldn’t say end of the process,
46:30:00 because the process isn’t ending.
46:32:00 It’s going to go into high gear as we
46:33:00 get to the more mature phases of the biotechnology
46:36:00 and three-dimensional printing revolutions, AI, and so on.
46:41:00 RJ MICHAEL: That’s just awesome.
46:43:00
46:45:00 RAY KURZWEIL: So to be continued.
46:47:00
46:52:00 RJ MICHAEL: So, and I think we might have a few minutes left
46:56:00 to take some questions from the audience at this point.
47:00:00 If–
47:01:00 RAY KURZWEIL: Well–
47:02:00 RJ MICHAEL: There’s two more left to ask,
47:03:00 but I’m saving those babies for the end.
47:05:00 Unless you have something you wanted to address.
47:08:00 RAY KURZWEIL: I don’t know how much time we have,
47:10:00 because the countdown clock isn’t working.
47:11:00 RJ MICHAEL: Yeah, the countdown clock stopped.
47:13:00 We have two minutes left?
47:14:00 Oh, two minutes.
47:15:00 RAY KURZWEIL: OK.
47:15:00 RJ MICHAEL: All right, well, then, I’m
47:16:00 going to stick to my questions, then.
47:18:00 Sorry, you guys.
47:20:00 Because I’ve got one that is just
47:22:00 one of my favorite interview questions,
47:23:00 that I have been dying my whole life
47:25:00 to ask Ray.
47:26:00 So if you would tell us please, what
47:29:00 are some of the more humbling experiences
47:33:00 you’ve had researching and developing
47:35:00 your concepts over the years?
47:39:00 RAY KURZWEIL: Well, I think it’s this one unsolved research
47:42:00 question– something the human brain is able to do,
47:46:00 and I think it’s the key to making further advances
47:48:00 in artificial intelligence.
47:51:00 The neocortex is organized in these layers,
47:54:00 and each layer is more abstract than the one below it.
47:58:00 And we’re able to actually– if we understand something,
48:00:00 like we understood speech recognition,
48:03:00 we can actually identify that phonemes should be here,
48:05:00 words should be here, and we can create that hierarchy,
48:08:00 and then use machine learning to learn each level.
48:12:00 But how do we then create a more abstract level on its own?
48:16:00 Because the neocortex does that.
48:19:00 I’ve been watching my grandson go through level after level.
48:22:00 Now he’s almost three, and he’s got quite a few levels
48:25:00 under his belt.
48:27:00 And that’s done with the neocortex,
48:31:00 without really any external input, other than his parents
48:37:00 saying, that was good, Leo.
48:41:00 So how do we do that?
48:44:00 That’s what we hope to solve.
48:46:00 I think we can have a stable set of hierarchies,
48:52:00 and then find patterns at that level using machine learning,
48:55:00 and then automatically add a new level.
48:58:00 But that has never been demonstrated,
49:00:00 and if we could do that, I think we’ll
49:03:00 make great strides in artificial intelligence.
49:05:00 But so far, that has eluded the AI field.
49:09:00 RJ MICHAEL: So then I’ll end with this–
49:11:00 you mentioned your grandchild, three-year-old.
49:13:00 There’s certain definitions of the word consciousness
49:17:00 that would suggest that a three-year-old has not yet
49:20:00 achieved consciousness– awareness of self,
49:22:00 awareness of what’s in a mirror, and things like that.
49:25:00 RAY KURZWEIL: Well, he would disagree with that.
49:26:00 But–
49:26:00
49:28:00 RJ MICHAEL: So I would like to end, then,
49:30:00 with three simple questions, that I would
49:34:00 ask you to take all together for us.
49:36:00 What is consciousness?
49:38:00 What is free will?
49:41:00 And what is soul?
49:42:00
49:46:00 RAY KURZWEIL: Well, I always thought
49:47:00 you had a good sense of humor, so one minute
49:52:00 should be plenty for that.
49:54:00 [LAUGHTER]
49:56:00 We’ve debated that for thousands of years,
49:58:00 going back to the Platonic dialogues.
50:00:00 But to summarize, consciousness–
50:03:00 whether or not an entity–
50:06:00 [LAUGHTER]
50:12:00 Whether or not an entity has consciousness
50:14:00 is not a scientific question, because there’s
50:16:00 no experiment you could run that would really definitively–
50:23:00 no falsifiable experiment that you could run–
50:25:00 determine whether or not an entity is conscious.
50:27:00 We assume that each other is conscious,
50:30:00 but that human agreement falls apart when
50:32:00 you go outside of human experience.
50:34:00 People disagree about animals.
50:36:00 They will disagree about future AIs.
50:40:00 An AI could claim it’s conscious.
50:42:00 Eugene Goostman claimed that he was conscious,
50:45:00 but it wasn’t very convincing.
50:46:00
50:49:00 And so some scientists say,
50:52:00 well, it’s not a scientific question.
50:54:00 It’s not important.
50:55:00 We should dismiss it.
50:56:00 It’s just an illusion.
50:57:00 I think that’s a mistake, because our whole moral system
50:59:00 is based on consciousness.
51:02:00 So you need a leap of faith.
51:04:00 My leap of faith is that if an entity seems conscious,
51:07:00 and seems to be having the subjective experiences
51:11:00 it claims to be having, I’ll believe it’s conscious.
51:14:00 I will also make an objective prediction
51:15:00 that most people will accept the consciousness
51:18:00 of these entities.
51:22:00 And so a valid Turing Test– I mean, I have a Long Now
51:27:00 Turing Test bet with Mitch Kapor that by 2029, a computer
51:32:00 will pass the Turing Test.
51:33:00 And we actually set a very difficult set of rules.
51:36:00 I think if an AI passes that, people will really be convinced
51:40:00 that it’s really conscious, and we will accept it
51:44:00 as having those subjective experiences.
51:46:00 Identity is really a continuation of a pattern.
51:49:00 People say, what are you talking about?
51:51:00 Your identity is this, you’re physical stuff.
51:53:00 It’s this flesh and blood, but actually this is very different
51:56:00 than it was six months ago.
51:58:00 All of our cells turn over, some in hours,
52:01:00 some in days, some in weeks.
52:02:00 The different components of the neurons,
52:04:00 the tubules, the ion channels, the filaments,
52:07:00 change over in either hours, or days, or weeks.
52:10:00 I’m completely different stuff than I was six months ago.
52:13:00 So I make a comparison to water in the stream.
52:17:00 It may make that certain pattern as it goes around a rock.
52:22:00 That pattern can stay the same for days, weeks, years.
52:27:00 But the water actually changes in milliseconds.
52:29:00 So is that the same river?
52:32:00 There’s a Chinese proverb, you can’t walk in the same river
52:34:00 twice.
52:35:00 But it’s actually a continuation of a pattern.
52:38:00 And that’s what we are.
52:39:00 The pattern changes slowly, but continuation of pattern,
52:44:00 even if we introduce non-biological elements to it,
52:46:00 we would be continuing that pattern.
52:50:00 And free will, that’s impossible to define.
52:55:00 I’m not convinced I have free will.
52:56:00 Major decisions like starting a project,
52:59:00 coming to work at Google, speaking at I/O–
53:03:00 did I really make that decision?
53:05:00 What was I thinking?
53:06:00
53:08:00 They just seem to happen on their own.
53:11:00 I do actually think a lot, and I think
53:13:00 I do have free will deciding what to eat for lunch,
53:16:00 so I think making eating choices is maybe
53:18:00 the heart of free will.
53:21:00 So I’ll leave it at that.
53:23:00 RJ MICHAEL: OK, that’s very good.
53:24:00 Thank you so much, Ray.
53:26:00 You did just awesome.
53:28:00 Thank you so much.
53:30:00 Excellent.
