Category: Collective Intelligence


“AI & The Future Of Civilization” A Conversation With Stephen Wolfram

By Hugo Angel,

“AI & The Future Of Civilization” A Conversation With Stephen Wolfram [3.1.16]
Stephen Wolfram
What makes us different from all these things? What makes us different is the particulars of our history, which gives us our notions of purpose and goals. That’s a long way of saying when we have the box on the desk that thinks as well as any brain does, the thing it doesn’t have, intrinsically, is the goals and purposes that we have. Those are defined by our particulars—our particular biology, our particular psychology, our particular cultural history.

The thing we have to think about as we think about the future of these things is the goals. That’s what humans contribute, that’s what our civilization contributes—execution of those goals; that’s what we can increasingly automate. We’ve been automating it for thousands of years. We will succeed in having very good automation of those goals. I’ve spent some significant part of my life building technology to essentially go from a human concept of a goal to something that gets done in the world.

There are many questions that come from this. For example, once we’ve got these great AIs and they’re able to execute goals, how do we tell them what to do?…


STEPHEN WOLFRAM, distinguished scientist, inventor, author, and business leader, is Founder & CEO, Wolfram Research; Creator, Mathematica, Wolfram|Alpha & the Wolfram Language; Author, A New Kind of Science. Stephen Wolfram’s EdgeBio Page

THE REALITY CLUB: Nicholas Carr

AI & THE FUTURE OF CIVILIZATION
Some tough questions. One of them is about the future of the human condition. That’s a big question. I’ve spent some part of my life figuring out how to make machines automate stuff. It’s pretty obvious that we can automate many of the things that we humans have been proud of for a long time. What’s the future of the human condition in that situation?


More particularly, I see technology as taking human goals and making them able to be automatically executed by machines. The human goals that we’ve had in the past have been things like moving objects from here to there and using a forklift rather than our own hands. Now, the things that we can do automatically are more intellectual kinds of things that have traditionally been the professions’ work, so to speak. These are things that we are going to be able to do by machine. The machine is able to execute things, but something or someone has to define what its goals should be and what it’s trying to execute.

People talk about the future of intelligent machines, and whether intelligent machines are going to take over and decide what to do for themselves. While one can figure out, given a goal, how to execute it in a way that can meaningfully be automated, the actual inventing of the goal is not something that, in some sense, has a path to automation.

How do we figure out goals for ourselves? How are goals defined? They tend to be defined for a given human by their own personal history, their cultural environment, the history of our civilization. Goals are something uniquely human; for a machine, the question almost doesn’t make sense. We ask, what’s the goal of our machine? We might have given it a goal when we built the machine.

The thing that makes this more poignant for me is that I’ve spent a lot of time studying basic science about computation, and I’ve realized something from that. It’s a little bit of a longer story, but basically, if we think about intelligence and things that might have goals, things that might have purposes, what kinds of things can have intelligence or purpose? Right now, we know one great example of things with intelligence and purpose and that’s us, and our brains, and our own human intelligence. What else is like that? The answer, I had at first assumed, is that there are the systems of nature. They do what they do, but human intelligence is far beyond anything that exists naturally in the world. It’s something that’s the result of all of this elaborate process of evolution. It’s a thing that stands apart from the rest of what exists in the universe. What I realized, as a result of a whole bunch of science that I did, was that is not the case.

A Visual History of Human Knowledge | Manuel Lima | TED Talks

By Hugo Angel,

How does knowledge grow? 

Source: EPFL Blue Brain Project. Blue Brain Circuit

Sometimes it begins with one insight and grows into many branches. Infographics expert Manuel Lima explores the thousand-year history of mapping data — from languages to dynasties — using trees of information. It’s a fascinating history of visualizations, and a look into humanity’s urge to map what we know.

ORIGINAL: TED

Sep 10, 2015

PLOS and DBpedia – an experiment towards Linked Data

By Hugo Angel,

Editor’s Note: This article is coauthored by Bob Kasenchak, Director of Business Development/Taxonomist at Access Innovations.
PLOS publishes articles covering a huge range of disciplines. This was a key factor in PLOS deciding to develop its own thesaurus – currently with 10,767 Subject Area terms for classifying the content.
We wondered whether matching software could establish relationships between PLOS Subject Areas and corresponding terms in external datasets. These relationships could enable links between data resources and expose PLOS content to a wider audience. So we set out to see if we could populate a field for each term in the PLOS thesaurus with a link to an external resource that describes—or, is “the same as”—the concept in the thesaurus. If so, we could:
• Provide links between PLOS Subject Area pages and external resources
• Import definitions to the PLOS thesaurus from matching external resources
For example, adding Linked Data URIs to the Subject Areas would facilitate making the PLOS thesaurus available as part of the Semantic Web of linked vocabularies.
We decided to use DBpedia for this trial for two reasons:
Firstly, as stated by DBpedia: “The DBpedia knowledge base is served as Linked Data on the Web. As DBpedia defines Linked Data URIs for millions of concepts, various data providers have started to set RDF links from their data sets to DBpedia, making DBpedia one of the central interlinking-hubs of the emerging Web of Data.”
Figure 1: Linked Open Data Cloud with PLOS shown linking to DBpedia – the concept behind this project.

 

Secondly, DBpedia is constantly (albeit slowly) updated from frequently used Wikipedia pages, so it has a method for staying current. It also offers a way to add content to DBpedia pages, providing inbound links, so that people can link (directly or indirectly) to PLOS Subject Area landing pages via DBpedia.
Figure 2: ‘Cognitive psychology’ pages in PLOS and DBpedia 

 

Which matching software to trial?
We considered two possibilities: Silk and Spotlight.
  • The Silk method might have allowed more granular, specific, and accurate queries, but it would have required us to learn a new query language. 
  • Spotlight, on the other hand, is executable by a programmer via API and required little effort to run, re-run, and check results; it took only a matter of minutes to get results from a list of terms to match. 

So we decided to use Spotlight for this trial.
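To give a flavour of how easy the Spotlight route is, here is a minimal Python sketch of querying the DBpedia Spotlight web service for a thesaurus term. The endpoint URL, confidence value, and result handling are illustrative assumptions; the exact invocation used in this project is not shown in this article.

```python
# Minimal sketch: ask DBpedia Spotlight to suggest DBpedia resources for a term.
# The endpoint and parameters below are illustrative assumptions, not the
# project's exact configuration.
import requests

SPOTLIGHT_URL = "https://api.dbpedia-spotlight.org/en/annotate"  # public endpoint (assumed)

def spotlight_candidates(term, confidence=0.35):
    """Return DBpedia URIs that Spotlight associates with a thesaurus term."""
    resp = requests.get(
        SPOTLIGHT_URL,
        params={"text": term, "confidence": confidence},
        headers={"Accept": "application/json"},
        timeout=30,
    )
    resp.raise_for_status()
    resources = resp.json().get("Resources", [])
    return [r["@URI"] for r in resources]

if __name__ == "__main__":
    # e.g. one of the Psychology Subject Areas from the PLOS thesaurus
    print(spotlight_candidates("Cognitive psychology"))
```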

Which sector of the thesaurus to target?
We chose the Psychology division of 119 terms (see Appendix) as a good starting point because it provides a reasonable number of test terms so that trends could emerge, and a range of technical terms (e.g. Neuropsychology) as well as general-language terms (e.g. Attention) to test the matching software.
Methods:
Figure 3: Work flow.

 

Step 1: We created the External Link and Synopsis DBpedia fields in the MAIstro Thesaurus Master application to store the identified external URIs and definitions. The External Link field accommodates the corresponding external URI, and the Synopsis DBpedia field houses the definition – “dbo:abstract” in DBpedia.
Step 2: Matching DBpedia concepts with PLOS Subject Areas using Spotlight:
  • Phase 1: For the starting set of terms we chose Psychology (a Tier 2 term) and the 21 Narrower Terms that sit in Tier 3 immediately beneath Psychology (listed in the Appendix). 
  • Phase 2: We then included the remaining 98 terms from Tier 4 and deeper beneath Psychology (listed in the Appendix). 
Step 3: Importing External Link/Synopsis DBpedia to the PLOS Thesaurus: Once a list of approved matching PLOS-term-to-DBpedia-page correspondences was established, another quick DBpedia Spotlight query provided the corresponding definitions (one way of retrieving such a definition is sketched after this example). Access Innovations populated the fields by loading the links and definitions into the corresponding term records. For the “Cognitive psychology” example these are:
Synopsis DBpedia: Cognitive psychology is the study of mental processes such as “attention, language use, memory, perception, problem solving, creativity, and thinking.” Much of the work derived from cognitive psychology has been integrated into various other modern disciplines of psychological study including

  • educational psychology, 
  • social psychology, 
  • personality psychology, 
  • abnormal psychology, 
  • developmental psychology, and 
  • economics.
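As a rough illustration of how such a definition can be retrieved, the sketch below queries the public DBpedia SPARQL endpoint for the English dbo:abstract of a matched resource. The endpoint and query are our assumptions and not necessarily the tooling used for this project.

```python
# Illustrative sketch: fetch the English dbo:abstract for a matched DBpedia
# resource via the public SPARQL endpoint (endpoint and query are assumptions,
# not the project's actual tooling).
import requests

SPARQL_ENDPOINT = "https://dbpedia.org/sparql"

def fetch_abstract(resource_uri):
    query = f"""
    SELECT ?abstract WHERE {{
      <{resource_uri}> <http://dbpedia.org/ontology/abstract> ?abstract .
      FILTER (lang(?abstract) = "en")
    }}
    """
    resp = requests.get(
        SPARQL_ENDPOINT,
        params={"query": query, "format": "application/sparql-results+json"},
        timeout=30,
    )
    resp.raise_for_status()
    bindings = resp.json()["results"]["bindings"]
    return bindings[0]["abstract"]["value"] if bindings else None

if __name__ == "__main__":
    print(fetch_abstract("http://dbpedia.org/resource/Cognitive_psychology"))
```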
How did it go?
The table shows the distribution of results for the 119 Subject Areas in the Psychology branch of the PLOS thesaurus:

Thus a total of 96 matches could be found by any method (80.7% of terms – top three rows of the Table). Of these, 86 terms (72.3% of terms) were matched as one of the top 5 Spotlight hits (top two rows of the Table), as compared to 71 matches (59.7% of terms) being identified correctly and directly by Spotlight as the top hit (top row of the Table).

Figure 4 shows the two added fields “Synopsis DBpedia” and “External Link” in MAIstro, for “Cognitive Psychology”.
Figure 4: Addition of Synopsis DBpedia and External Link fields to MAIstro.

 

Conclusions:
We had set out to establish whether matching software could define relationships between PLOS thesaurus terms and corresponding terms in external datasets. We used the Psychology division of the PLOS thesaurus as our test vocabulary, Spotlight as our matching software, and DBpedia as our target external dataset.
We found that unambiguous, suitable matches were identified for 59.7% of terms. Expressed another way, mismatches were identified as the top hit in 35 cases (29.4% of terms), which is a high burden of inaccuracy. This is too low a quality outcome for us to consider adopting Spotlight suggestions without editorial review.
As well as those terms that were matched as a top hit, a further 12.6% of terms (or 31% of the terms not successfully matched as a top hit) had a good match in Spotlight hit positions 2-5. So Spotlight successfully matched 72.3% of terms within its top 5 matches.
Having the Spotlight hit list for each term did bring efficiency to finding the correspondences. Both the “hits” and the “misses” were straightforward to identify. As an aid to the manual establishment of these links Spotlight is extremely useful.
Stability of DBpedia: We noticed that the dbo:abstract content in DBpedia is not stable. It would be an enhancement to append the Synopsis DBpedia field contents with URI and date stamp as a rudimentary versioning/quality control measure.
Can we improve on Spotlight? Possibly. We wouldn’t be comfortable with any scheme that linked PLOS concepts to the world of Linked Data sources without editorial quality control. But we suspect that a more sophisticated matching tool might promote those hits that fell within Spotlight matches 2-5 to the top hit, and would find some of the 8.4% of terms which were found manually but which Spotlight did not suggest in the top 5 hits at all. We hope to invest some effort in evaluating Silk, and establishing whether or not any other contenders are emerging.
Introducing PLOS Subject Area URIs into DBpedia pages: We explored this, and it seemed likely that the route to achieve it would be to add the PLOS URI first to the corresponding Wikipedia page, in the “External Links” section.
Figure 5: The External Links section of Wikipedia: Cognitive psychology

 

As DBpedia (slowly) crawls through all associated Wikipedia pages, eventually the new PLOS link would be added to the DBpedia entry for each page.
To demonstrate this methodology, we added a backlink to the corresponding PLOS Subject Area page in the Wikipedia article shown above (Cognitive psychology), as well as in the articles for all 21 Tier 3 Psychology terms.
Figure 6: External Links at Wikipedia: Cognitive psychology showing link back to the corresponding PLOS Subject Area page

 

Were DBpedia to re-crawl this page, the link to the PLOS page would be added to DBpedia’s corresponding page as well.
However, Wikipedia editors questioned the value of the PLOS backlinks (“link spam”) and their appropriateness for the “External Links” section of the various Wikipedia pages. A Wikipedia administrator can deem them inappropriate and remove them from Wikipedia (as has happened to some, if not all, of them by the time you read this).
We believe the solution is to publish the PLOS thesaurus as Linked Open Data (in either SKOS or OWL format(s)) and assert the link to the published vocabulary from DBpedia (using the field owl:sameAs instead of dbo:wikiPageExternalLink). We are looking into the feasibility and mechanics of this.
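As a rough sketch of what a published concept could look like, the snippet below (using Python’s rdflib; the PLOS namespace shown is a hypothetical placeholder, not a real identifier) builds one SKOS concept with an owl:sameAs link to DBpedia and serializes it as Turtle.

```python
# Minimal sketch of publishing one PLOS thesaurus term as SKOS Linked Data
# with an owl:sameAs link to DBpedia. The PLOS namespace below is a
# hypothetical placeholder, not an actual published identifier.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import OWL, SKOS

PLOS = Namespace("http://example.org/plos/thesaurus/")  # placeholder namespace

g = Graph()
g.bind("skos", SKOS)
g.bind("owl", OWL)

concept = PLOS["cognitive-psychology"]
g.add((concept, SKOS.prefLabel, Literal("Cognitive psychology", lang="en")))
g.add((concept, SKOS.broader, PLOS["psychology"]))
g.add((concept, OWL.sameAs,
       URIRef("http://dbpedia.org/resource/Cognitive_psychology")))

print(g.serialize(format="turtle"))
```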
Once the PLOS thesaurus is published in this way, the most likely candidate for interlinking data would be to use the SILK Linked Data Integration Framework and we look forward to exploring that possibility.
Appendix: The Psychology division of the PLOS thesaurus. LD_POC_blog.appendix
ORIGINAL: PLOS
November 10, 2015

Building an organic computing device with multiple interconnected brains. (Duke U.)

By admin,

Experimental apparatus scheme for a Brainet computing device.
Abstract
Recently, we proposed that Brainets, i.e. networks formed by multiple animal brains, cooperating and exchanging information in real time through direct brain-to-brain interfaces, could provide the core of a new type of computing device: an organic computer. Here, we describe the first experimental demonstration of such a Brainet, built by interconnecting four adult rat brains. Brainets worked by concurrently recording the extracellular electrical activity generated by populations of cortical neurons distributed across multiple rats chronically implanted with multi-electrode arrays. Cortical neuronal activity was recorded and analyzed in real time, and then delivered to the somatosensory cortices of other animals that participated in the Brainet using intracortical microstimulation (ICMS). Using this approach, different Brainet architectures solved a number of useful computational problems, such as 
  • discrete classification, 
  • image processing, 
  • storage and retrieval of tactile information, and even 
  • weather forecasting. 

Brainets consistently performed at the same or higher levels than single rats in these tasks. Based on these findings, we propose that Brainets could be used to investigate animal social behaviors, as well as serve as a test bed for exploring the properties and potential applications of organic computers.

Introduction
After introducing the concept of brain-to-brain interfaces (BtBIs) 1 , our laboratory demonstrated experimentally that BtBIs could be utilized to directly transfer tactile or visuomotor information between pairs of rat brains in real time 2 . Since our original report, other studies have highlighted several properties of BtBIs 1 , 3 , such as transmission of hippocampus representations between rodents 4 , transmission of visual information between a human and a rodent 5 , and transmission of motor information between two humans 6 , 7 . Our lab has also shown that Brainets could allow monkey pairs or triads to perform cooperative motor tasks mentally by inducing accurate synchronization of neural ensemble activity across individual brains 8 .
In addition to the concept of BtBIs, we have also suggested that networks of multiple interconnected animal brains, which we dubbed Brainet 1 , could provide the core for a new type of computing device: an organic computer. Here, we tested the hypothesis that such a Brainet could potentially exceed the performance of individual brains, due to a distributed and parallel computing architecture 1 , 8 . This hypothesis was tested by constructing a Brainet formed by four interconnected rat brains and then investigating how it could solve fundamental computational problems ( Fig. 1A–C ). In our Brainet, all four rats were chronically implanted with multielectrode arrays, placed bilaterally in the primary somatosensory cortex (S1). These implants were used to both record neural ensemble electrical activity and transmit virtual tactile information via intracortical electrical microstimulation (ICMS). Once animals recovered from the implantation surgery, the resulting 4-rat Brainets ( Fig. 1 ) were tested in a variety of ways. Our central goal was to investigate how well different Brainet architectures could be employed by the four rats to collaborate in order to solve a particular computational task. Different Brainet designs were implemented to address three fundamental computational problems: discrete classification, sequential and parallel computations, and memory storage/retrieval 1 . As predicted, we observed that Brainets consistently outperformed individual rats in each of these tasks.
Figure 1: Experimental apparatus scheme for a Brainet computing device.
A) A Brainet of four interconnected brains is shown. The arrows represent the flow of information through the Brainet. Inputs were delivered as simultaneous ICMS patterns to the S1 cortex of each rat. Neural activity was then recorded and analyzed in real time. Rats were required to synchronize their neural activity with the rest of the Brainet to receive water. B) Inputs to the Brainet were delivered as ICMS patterns to the left S1, while outputs were calculated using the neural responses recorded from the right S1. C) Brainet architectures were set to mimic hidden layers of an artificial neural network. D) Examples of perievent histograms of neurons after the delivery of ICMS.
Results
All experiments with 4-rat Brainets were pooled from a sample of 16 animals that received cortical implants from which we could simultaneously record the extracellular activity from 15–66 S1 neurons per Brainet (total of 2,738 neurons recorded across 71 sessions).
Brainet for neural synchronization
Rats were water deprived and trained on a task that required them to synchronize their neural activity after an ICMS stimulus. A total of six rats were used in 12 sessions to run this first experiment. As depicted in Fig. 1A–C , the processing chain in these experiments started with the simultaneous delivery of an ICMS pattern to one of the S1 cortices of all subjects, then processing of tactile information with a single-layer Brainet, followed by generation of the system output by the contralateral S1 cortex of each animal. Each trial comprised four epochs: waiting (baseline), ICMS delivery, test, and reward. ICMS patterns (20 pulses at 22–26 Hz) were unilaterally delivered to the S1 of each rat. Neuronal responses to the ICMS were evaluated during the test period, when S1 neuronal ensemble activity was sampled from the hemisphere contralateral to the stimulation site ( Figs. 1D and 2A–E ). Rats were rewarded if their cortical activity became synchronized during the test period. The correlation coefficient R was used as the measure of global Brainet synchrony. Thus, R measured the linear correlation between the normalized firing rate of all neurons in a given rat and the average normalized firing rate for all neurons recorded in the remaining three rats (see Methods for details). If at least three rats presented R values greater than or equal to 0.2, a trial was considered successful, and all four rats were rewarded. Otherwise, no reward was given to any rat. Two conditions served as controls: the pre-session, where no ICMS or water reward was delivered, and the post-session, where no ICMS was delivered but rats were still rewarded if they satisfied the correlation criterion ( Fig. 2A ).
Figure 2: The Brainet can synchronize neural activity.
A) The different colors indicate the different manipulations used to study synchronization across the network. During the pre-session, rats were tested for periods of spurious neural synchronization. No ICMS or rewards were delivered here. During sessions, rats were tested for increased neural synchronization due to detection of the ICMS stimulus (red period). Successful synchronization was rewarded with water. During the post-session, rats were tested for periods of neural synchronization due to the effects of reward (e.g. continuous whisking/licking). Successful synchronization was rewarded with water, but no ICMS stimulus was delivered. B) Example of neuronal activity across the Brainet. After the ICMS there was a general tendency for neural activity to increase. Periods of maximum firing rate are represented in red. C) The performance of the Brainet during sessions was above the pre-sessions and post-sessions. Delivery of ICMS alone or during anesthetized states also resulted in poor performance. ** and *** indicate P < 0.01 and P < 0.0001 respectively. D) Overall changes in R values in early and late sessions show that improvements in performance were accompanied by specific changes in the periods of synchronized activity. E) Example of a synchronization trial. The lower panels show, in red, the neural activity of each rat and, in blue, the average neural activity of the rest of the Brainet. The upper panels depict the R value for the correlation coefficient between each rat and the rest of the Brainet. There was an overall tendency for the Brainet to correlate at the beginning of the test period.
Behaviorally, rats remained mostly calm or immobile during the baseline period. After the ICMS pattern was delivered simultaneously to all animals, rats typically displayed periods of whisking and licking movements. A sample of S1 neuronal population activity during this period is shown in Fig. 2B (also see Fig. 1D for examples of individual neurons perievent histograms). Typically, after the delivery of ICMS, there was a sharp decrease in the neuronal firing rate of the neurons (~20 ms), followed by a sudden firing rate increase (~100 ms). While the main measure of accuracy for this task was the degree in which cortical neuronal populations fired synchronously, it is important to emphasize that the build up of these ensemble firing patterns depended highly on how single S1 neurons modulated their firing rate as a result of electrical microstimulation. Thus, ICMS served as a reset signal that allowed rats to synchronize their neural activity to the remaining network ( Fig. 2D,E ). Note that, in this task, rats were not exchanging neural information through the BtBI. Instead the timing of the ICMS stimulus, the partial contact allowed through the Plexiglas panels, and the reward were the only sources of information available for rats to succeed in the task.
As the Brainet consistently exhibited the best performance during the first trials, we focused our subsequent analysis on the first 30-trial block of each session. Overall, this 4-rat Brainet was able to synchronize the neural activity of the constituent rats significantly above Pre-Session (Brainet: 57.95 ± 2%; Pre-Sessions: 45.95 ± 2%; F2,24 = 10.99; P = 0.0004; Dunnett’s test: P < 0.001) and Post-Session levels (46.41 ± 2%; Dunnett’s test: P < 0.01; Fig. 2C ).
Over approximately 1.5 weeks (total of 12 sessions), this Brainet gradually improved its performance, from 54.76 ± 3.16% (mean ± standard error; the first 6 days) to 61.67 ± 3.01% correct trials (the last 6 days; F1,2 = 5.770, P = 0.0175 for interaction; Bonferroni post hoc comparisons: pre vs session initial start P > 0.05; pre vs session end P < 0.01; session vs post start P > 0.05; session vs post end P < 0.001). The high fidelity of information transfer in this Brainet configuration was further confirmed by the observation that the performance of individual rats reached 65.28 ± 1.70%. In other words, a 4-rat Brainet was capable of maintaining a level of global neuronal synchrony across multiple brains that was virtually identical to that observed in the cortex of a single rat (Brainet level = 61.67 ± 3.07%; Mann-Whitney U = 58.0; P = 0.4818, n.s.).
A comparison of correlation values between sessions from the first (n = 6) and the last days (n = 6) further demonstrated that daily training on this first task resulted in a statistically significant increase in correlated cortical activity across rats, centered between 700 ms and 1000 ms of the testing period (F = 1.622; df = 1.49; P = 0.0043, Fig. 2D ). The lower panel of Fig. 2E shows the normalized firing rate for each rat (in red) and for the remaining Brainet (in blue) in one trial. The upper panels show R value changes for the correlation between neuronal activity in each rat and the remaining Brainet. Notice the overall tendency for most rats to increase the R values soon after the delivery of the ICMS pattern (T = 0 seconds).
To determine if reward was mandatory for the correlation to emerge in the Brainet, we performed three control sessions with awake animals receiving ICMS (but no reward). The performances dropped to levels below chance (performance: 30.67 ± 3.0%; see Fig. 2C ). Further, in another three sessions where ICMS was applied to anesthetized animals, the Brainet performed close to chance levels again (performance: 38.89 ± 4.8%; see Fig. 2C ). These results demonstrated that the Brainet could only operate above chance in awake behaving rats in which there was an expectation for reward.
After determining that the Brainet could learn to respond to an ICMS input by synchronizing its output across multiple brains, we tested whether such a collective neuronal response could be utilized for multiple computational purposes. These included discrete stimulus classification, storage of a tactile memory, and, by combining the two former tasks, processing of multiple tactile stimuli.
Brainet for stimulus classification
Initially, we trained our 4-rat Brainet to discriminate between two ICMS patterns ( Fig. 3A,B , 8 sessions in 4 rats). The first pattern (Stimulus 1) was the same as in the previous experiment (20 pulses at 22–26 Hz), while the second (Stimulus 2) consisted of two separate bursts of four pulses (22–26 Hz). The Brainet was required to report either the presence of Stimulus 1 with an increase in neuronal synchrony across the four rat brains (i.e. R ≥ 0.2 in at least three rats), or Stimulus 2 by a decrease in synchrony (i.e., R < 0.2 in at least three rats). By requiring that the delivery of Stimulus 2 be indicated through a reduction in neuronal synchronization, we further ensured that the Brainet performance was not based on a simple neural response to the ICMS pattern. As in the previous experiment, Stimulus 1 served as a reset signal that allowed rats to synchronize their neural activity to the remaining network. Meanwhile, because Stimulus 2 was much shorter than Stimulus 1, it still induced neural responses in several S1 neurons ( Fig. 3B ), but its effects were less pronounced and not as likely to induce an overall neural synchronization across the Brainet (see Supplementary Figure 1 ).
Figure 3: The Brainet can both synchronize and desynchronize neural activity.
A) Architecture of a Brainet that can synchronize and desynchronize its neural activity to perform virtual tactile stimuli classification. Different patterns of ICMS were simultaneously delivered to each rat in the Brainet. Neural signals from all neurons from each brain were analyzed and compared to the remaining rats in the Brainet. The Brainet was required to synchronize its neural activity to indicate the delivery of a Stimulus 1 and to desynchronize its neural activity to indicate the delivery of a Stimulus 2. B) Example of perievent histograms of neurons for ICMS Stimulus 1 and 2. C) The Brainet performance was above No-ICMS sessions, and above individual rats’ performances. * indicates P < 0.05; ** indicates P < 0.01; n.s. indicates non significant.
Following training, the Brainet reached an average performance of 61.24 ± 0.5% correct discrimination between Stimuli 1 and 2, which was significantly above No-ICMS sessions (52.97 ± 1.1%, n = 8 sessions; Brainet vs No-ICMS: Dunn’s test: P < 0.01). Moreover, using this more complex task design, the Brainet outperformed individual rats (55.86 ± 1.2%) (Kruskal-Wallis statistic = 10.87, P = 0.0044; Brainet vs Individual Rats; Dunn’s test: P < 0.05; also see Fig. 3C ).
To improve the overall performance of this 4-rat Brainet, we implemented an adaptive decoding algorithm that analyzed the activity of each neuron in each specific bin separately, and then readjusted the neuronal weights following each trial (see Methods for details). Figure 4A depicts this Brainet architecture. Notice the different weights for each of the individual neurons (represented by different shades of grey), reflecting the individual accuracy in decoding the ICMS pattern. Figure 4B illustrates a session in which all four rats contributed to the overall decoding of the ICMS stimuli (the red color indicates periods of maximum decoding). Using this approach, we increased both the overall Brainet performance (74.18 ± 2.2% correct trials; n = 7 rats in 12 sessions) and the number of trials performed (64.17 ± 6.2 trials) in each session. The neuronal ensembles of this Brainet included an average of 50 ± 43 neurons (mean ± standard error). Figure 4C depicts the improved performance of the Brainet compared to that of the No-ICMS sessions (54.34 ± 2.2% correct trials, n = 11 sessions) and the performance of individual rats (61.28 ± 1.1% correct trials, F = 26.34; df = 2, 56; P < 0.0001; Bonferroni post hoc comparisons; Brainet vs No-ICMS: P < 0.0001; Brainet vs Individual rats P < 0.0001).
Figure 4: Brainet for discrete classification.
A) Architecture of a Brainet for stimulus classification. Two different patterns of ICMS were simultaneously delivered to each rat in the Brainet. Neural signals from each individual neuron were analyzed separately and used to determine an overall classification vote for the Brainet. B) Example of a session where a total of 62 neurons were recorded from four different animals. Deep blue indicates poor encoding, while dark red indicates good encoding. Although Rat 3 presented the best encoding neurons, all rats contributed to the network’s final classification. C) Performance of Brainet during sessions was significantly higher when compared to the No-ICMS sessions. Additionally, because the neural activity is redundant across multiple brains, the overall performance of the Brainet was also higher than in individual brains. *** indicates P < 0.0001. D) Neuron dropping curve of Brainet for discrete classification. The effect of redundancy in encoding can be observed in the Brainet as the best encoding cells from each session are removed. E) The panels depict the dynamics of the stimulus presented (X axis: 1 or 2) and the Brainet classifications (Y axis: 1 to 2) during sessions and No-ICMS sessions. During regular sessions, the Brainet classifications mostly matched the stimulus presented (lower left and upper right quadrants). Meanwhile, during No ICMS sessions the Brainet classifications were evenly distributed across all four quadrants. The percentages indicate the fraction of trials in each quadrant (Stimulus 1, vote 1 not shown). F) Example of an image processed by the Brainet for discrete classification. An original image was pixilated and each blue or white pixel was delivered as a different ICMS pattern to the Brainet during a series of trials (Stimulus 1 – white; Stimulus 2 – blue). The left panel shows the original input image and the right panel shows the output of the Brainet.
When rats were anesthetized (2 sessions in five rats) or trial duration was reduced to 10 s (i.e. almost only comprising the ICMS and the test period – 2 sessions in four rats), the Brainet’s performance dropped sharply (anesthetized: 60.61 ± 2.8% correct; short time trials: 62.57 ± 3.14%). Once again, this control experiment indicated that the Brainet operation was not solely dependent on an automatic response to the delivery of an ICMS.
Next, we investigated the dependence of the Brainet’s performance on the number of S1 neurons recorded simultaneously. Figure 4D depicts a neuron dropping curve illustrating this effect. According to this analysis, Brainets formed by larger cortical neuronal ensembles performed better than those containing just a few neurons 9 .
The difference between the Brainet classification of the two stimuli during regular sessions and during those in which no-ICMS was delivered is shown in Fig. 4E . During the regular sessions stimulus classification remained mostly in the quadrants corresponding to the stimuli delivered (lower left and upper right quadrants), while during the No-ICMS sessions the 4-rat Brainet trial classification was evenly distributed across all quadrants.
As different rats were introduced to the Brainet, we also compared how neuronal ensemble encoding in each animal changed during initial and late sessions (the first three versus the remaining days). Overall, there was a significant increase in ICMS encoding (initial: 59.67 ± 1.4%, late: 65.08 ± 1.2%, Mann-Whitney U = 281.0, P = 0.0344) and, to a smaller extent, in the correlation coefficients between neural activity of the different animals (initial: 0.1831 ± 0.007, late: 0.2028 ± 0.005, Mann-Whitney U = 275.0, P = 0.0153) suggesting that improvements in Brainet performances were accompanied by cortical plasticity in the S1 of each animal.
To demonstrate a potential application for this stimulus discrimination task, we tested whether our Brainet could read out a pixilated image (N = 4 rats in n = 4 sessions) using the same principles demonstrated in the previous two experiments. Blue and white pixels were converted into binary codes (white – Stimulus 1 or blue – Stimulus 2) and then delivered to the Brainet over a series of trials. The right panel of Fig. 4F shows that a 4-rat Brainet was able to capture the original image with good accuracy (overall 87% correct trials) across a period of four sessions.
Brainet for storage and retrieval of tactile memories
To test whether a 3-rat Brainet could store and retrieve a tactile memory, we sent an ICMS stimulus to the S1 of one rat and then successively transferred the information decoded from that rat’s brain to other animals, via a BtBI, over a block of four trials. To retrieve the tactile memory, the information traveling across different rat brains was delivered, at the end of the chain, back to the S1 cortex of the first rat for decoding ( Fig. 5A ). Opaque panels were placed between the animals, and cortical neural activity was analyzed for each rat separately. The architecture of inputs and outputs of the 3-rat Brainet is shown in Fig. 5A , starting from the bottom shelf and progressing to the top one. The experiment started by delivering one of two different ICMS stimuli to the S1 of the input rat (from now on referred to as Rat 1) during the first trial (Trial 1). Neuronal ensemble activity sampled from Rat 1 was then used to decode the identity of the stimulus (either Stimulus 1 or 2). Once the stimulus identity was determined, a new trial started and a BtBI was employed to deliver a corresponding ICMS pattern to Rat 2, defining Trial 2 of the task. In this arrangement, the BtB link between Rat 1 and Rat 2 served to store the pattern (Pattern Storage I). Next, neuronal ensemble activity was recorded from the S1 of Rat 2. In the third trial, it was Rat 3’s turn to receive the tactile message (Pattern Storage II) decoded from the neural ensemble activity of Rat 2, via an ICMS-mediated BtB link. During the fourth and final trial, Rat 1 received the message decoded from the neural activity of Rat 3.
Figure 5: A Brainet for storage and retrieval of tactile memories.
A) Tactile memories encoded as two different ICMS stimuli were stored in the Brainet by keeping information flowing between different nodes (i.e. rats). Tactile information sent to the first rat in Trial 1 (‘Stimulus Decoding’), was successively decoded and transferred between Rats 2 and 3, and again transferred to Rat 1, across a period of four trials (memory trace in red). The use of the brain-to-brain interface between the nodes of the network allowed accurate transfer of information. B) The overall performance of the Brainet was significantly better than the performance in the No-ICMS sessions and better than individual rats performing 4 consecutive correct trials. In this panel, * indicates P < 0.05 and *** indicates P < 0.001. C) Neuron dropping curve of Brainet for storage and retrieval of memories. D) Example of session with multiple memories (each column) processed in blocks of four trials (each row). Information flows from the bottom (Stimulus delivered) towards the top (Trials 1–4). Blue and red indicate Stimulus 1 or 2 respectively. Correct tactile memory traces are columns which have a full sequence of trials with the same color (see blocks: 3, 5, 7 and 9). In this panel, * indicates an incorrect trial.
Using this Brainet architecture, the memory of a tactile stimulus could only be recovered if the individual BtB communication links worked correctly in all four consecutive trials. The chance level for this operation was 6.25%. Under these conditions, this Brainet was able to retrieve a total of 35.37 ± 2.2% (9 sessions in 9 rats) of the tactile stimuli presented to it (Kruskal-Wallis statistic = 14.89; P = 0.0006, Fig. 5B ), contrasting with 7.91 ± 6.5% in No-ICMS sessions (n = 5 sessions; Dunn’s test: P < 0.001). For comparison purposes, individual rats performed the same four-trial task correctly in only 15.63 ± 2.1% of the trials. This outcome was significantly lower than that of a 3-rat Brainet (Dunn’s test: P < 0.001). As in the previous experiments, larger neuronal ensembles yielded better encoding ( Fig. 5C ).
As an additional control, rats that were not processing memory related information in a specific trial (e.g. Rats 2 and 3 during the Stimulus Decoding Stage in Rat 1) received Stimulus 1 or Stimulus 2, randomly chosen. Thus, in every single trial all rats received some form of ICMS, but only the information gathered from a specific rat was used for the overall tactile trace.
The colored matrix in Fig. 5D illustrates a session in which a tactile trace developed along the 3-rat Brainet. A successful example of information transfer and recovery is shown in the third block of trials (blue column on the left). The figure shows that the original stimulus (Stimulus 1 – bottom blue square) was delivered to the S1 of Rat 1 in the first trial. This stimulus was successfully decoded from Rat 1’s neural activity, as shown by the presence of the blue square immediately above it (Trial 1 – Stimulus Decoding). In Trial 2 (Pattern Storage I), Stimulus 1 was delivered via ICMS to the S1 of Rat 2 and again successfully decoded (as shown by the blue square in the center). Then, in Trial 3 (Pattern Storage II), the ICMS pattern delivered to Rat 3 corresponded to Stimulus 1, and the decoding of S1 neural activity obtained from this animal still corresponded to Stimulus 1, as shown by the blue square. Lastly, in Trial 4 (Stimulus Recovery), Rat 1 received an ICMS pattern corresponding to Stimulus 1 and its S1 neural activity still encoded Stimulus 1 (blue square). Thus, in this specific block of trials, the original tactile stimulus was fully recovered since all rats were able to accurately encode and decode the ICMS pattern received. Similarly, columns 5, 7, and 9 also show blocks of trials where the original tactile stimulus (in these cases Stimulus 2, red square) was accurately encoded and decoded by the Brainet. Conversely, columns with an asterisk on top (e.g. 1 and 8) indicate incorrect blocks of trials. In these incorrect blocks, the stimulus delivered was not accurately encoded in the brain of at least one rat belonging to the Brainet (e.g. Rat 3 in block 1).
Brainet for sequential and parallel processing
Lastly, we combined all the processing abilities demonstrated in the previous experiments (discrete tactile stimulus classification, BtB interface, and tactile memory storage) to investigate whether Brainets would be able to use sequential and parallel processing to perform a tactile discrimination task (N = 5 rats in N = 10 sessions). For this we used blocks of two trials where tactile stimuli were processed according to Boolean logic 10 ( Fig.6A–B ). This means that in each trial there was a binary decision tree (i.e. two options encoded as Stimulus 1 or 2). In the first trial, two different tactile inputs were independently sent to two dyads of rats (Dyad 1: Rat 1-Rat 2; Dyad 2: Rat 3-Rat 4; bottom of Fig. 6A ). In the next trial, the tactile stimuli decoded by the two dyads were combined and transmitted, as a new tactile input, to a 4-rat Brainet. Upon receiving this new stimulus, the Brainet was in charge of encoding a final solution (i.e. identifying Stimulus 3 or 4, see Supplementary Figure 2 ).
Figure 6: A Brainet for parallel and sequential processing.
A) Architecture of a network for Parallel and Sequential processing. Information flows from the bottom to the top during the course of two trials. In first trial, odd trial for parallel processing, Dyad 1 (Rat 1-Rat 2) received one of two ICMS patterns, and Dyad 2 (Rat 3-Rat 4) received independently one of two ICMS patterns. During Trial 2, even trial for sequential processing, the whole Brainet received again one of two ICMS patterns. However, the pattern delivered in the even trial was dependent on the results of the first trial and was calculated according to the colored matrix presented. As depicted by the different encasing of the matrix (blue or red), if both dyads encoded the same stimulus in the odd trial (Stimulus 1-Stimulus1 or Stimulus 2-Stimulus 2), then the stimulus delivered in the even trial corresponded to Stimulus 3. Otherwise, if each dyad encoded a different stimulus in the odd trial (Stimulus1-Stimulus 2 or Stimulus 2-Stimulus 1), then the stimulus delivered in even trial was Stimulus 4. Each correct block of information required three accurate estimates of the stimulus delivered (i.e. encoding by both dyads in the even trial, as well as the whole Brainet in the odd trial). B) Example of session with sequential and parallel processing. The bottom and center panel show the dyads processing the stimuli during the odd trials (parallel processing), while the top panel shows the performance of the whole Brainet during the even trials. In this panel, * indicates an incorrect classification. C) The performance of the Brainet was significantly better than the performance during the No-ICMS sessions and above the performance of individual rats performing blocks of 3 correct trials. In this panel, * indicates P < 0.05.
As shown at the bottom of Fig. 6A , odd trials were used for parallel processing, i.e. each of two rat dyads independently received ICMS patterns, while neural activity was analyzed and the original tactile stimulus decoded (i.e. Stimulus 1 or 2). Then, during even trials ( Fig. 6A , top), ICMS was used to encode a second layer of patterns, defined as Stimulus 3 and Stimulus 4. Note that ICMS Stimuli 3 and 4 were physically identical to Stimuli 2 and 1 respectively; however, because the stimuli delivered in the even trials were contingent on the results of the odd trials, we employed a different nomenclature to identify them. The decision tree (i.e. truth table) used to calculate the stimuli for the even trials is shown in the colored matrix at the center of Fig. 6A . The matrix shows that, if both dyads encoded the same tactile stimulus in the odd trial (Stimulus 1-Stimulus 1, or Stimulus 2-Stimulus 2; combinations with blue encasing), the ICMS delivered to the entire Brainet in the even trial corresponded to Stimulus 4. Otherwise, if the tactile stimulus decoded from each rat dyad in the odd trial was different (Stimulus 1-Stimulus 2, or Stimulus 2-Stimulus 1; combinations with red encasing), the ICMS delivered to the entire Brainet in the even trial corresponded to Stimulus 3. As such, the ICMS pattern delivered in even trials was the same for the whole Brainet (i.e. all four rats).
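A minimal sketch of this even-trial rule, following the mapping described in the paragraph above (function and variable names are ours):

```python
# Sketch of the decision tree for even trials, following the mapping in the
# text above: matching dyad outputs -> Stimulus 4, mismatched outputs -> Stimulus 3.
def even_trial_stimulus(dyad1_decoded, dyad2_decoded):
    """dyad*_decoded is 1 or 2: the stimulus decoded by each dyad in the odd trial."""
    return 4 if dyad1_decoded == dyad2_decoded else 3

assert even_trial_stimulus(1, 1) == 4
assert even_trial_stimulus(2, 2) == 4
assert even_trial_stimulus(1, 2) == 3
assert even_trial_stimulus(2, 1) == 3
```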
At the end of each even trial, the stimulus decoded from the combined neuronal activity of the four-brain ensemble (top of Fig. 6A ) defined the final output of the Brainet. Chance level was set at 12.5%. Overall, the Brainet’s performance (45.22 ± 3.4%, n = 10 sessions) was much higher than chance level and significantly above No-ICMS sessions (22.79 ± 5.4%, n = 5 sessions; Kruskal-Wallis statistic = 7.565, P = 0.0228; Dunn’s test: P < 0.05; Fig. 6C ). Additionally, the Brainet outperformed each individual rat (groups of three consecutive trials: 30.25 ± 3.0%; Dunn’s test: P < 0.05).
As our last experiment, we tested whether a 3-rat Brainet could be used to classify meteorological data (see Methods for details). Again, the decision tree included two independent variables in the odd trials and a dependent variable in the even trials (see Supplementary Figure 3 ). Figure 7A illustrates how Boolean logic was applied to convert data from an original weather forecast model . In the bottom panel, the yellow line depicts continuous changes in temperature occurring during a period of 10 hours. Periods where the temperature increased were transferred to the Brainet as Stimulus 1 (see arrows in periods between 0 and 4 hours), whereas periods where the temperature decreased were transferred as Stimulus 2 (see arrows in periods between 6 and 10 hours). The middle panel of Fig. 7A illustrates changes in barometric pressure (green line). Again, periods where the barometric pressure increased were translated as Stimulus 1 (e.g. between 1-2 hours), while periods where the barometric pressure decreased were translated as Stimulus 2 (e.g. 3–5 hours).
Figure 7: Parallel and sequential processing for weather forecast
A) Each panel represents examples of the original data, reflecting changes in temperature (lower panel), barometric pressure (center panel), and probability of precipitation (upper panel). The arrows represent general changes in each variable, indicating an increase or a decrease. On the top of each panel is represented the ICMS pattern that resulted from each arrow presented. B) Lower and center panels show trials where different rats of the Brainet (Rat 1 lower panel, and Rats 2-3 center panel) processed the original data in parallel. Specifically, Rat 1 processed temperature changes and Rats 2-3 processed barometric pressure changes. The upper panel shows the Brainet processing changes in the probability of precipitation (Rats 1–3) during the even trials. * indicates trials where processing was incorrect.
Both Stimulus 1 and 2 were delivered to a Brainet during odd trials; changes in temperature were delivered to Rat 1 alone, while changes in barometric pressure were delivered to Rats 2 and 3. As in the previous experiment, Stimuli 3 and 4 were physically similar to Stimuli 1 and 2. In even trials, increases and decreases in the probability of precipitation (top panel Fig. 7A ) were calculated as follows: an increase in temperature (Stimulus 1; Rat 1) combined with a decrease in barometric pressure (Stimulus 2; Rats 2 and 3) was transferred to even trials as an increase in the probability of precipitation (i.e. a Stimulus 4), whereas any other combination was transferred as Stimulus 3, and associated with a decrease in precipitation probability. This specific combination of inputs was used because it reflects a common set of conditions associated with early evening spring thunderstorms in North Carolina.
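A compact sketch of this conversion, following the mapping just described (names and the example values are illustrative):

```python
# Sketch of converting weather trends into Brainet stimuli, following the
# mapping described above (names and structure are illustrative).
def trend_to_stimulus(series_start, series_end):
    """Rising value -> Stimulus 1, falling value -> Stimulus 2."""
    return 1 if series_end > series_start else 2

def precipitation_stimulus(temp_stim, pressure_stim):
    """Temperature up (1) combined with pressure down (2) -> Stimulus 4
    (higher probability of precipitation); any other combination -> Stimulus 3."""
    return 4 if (temp_stim == 1 and pressure_stim == 2) else 3

# Example: temperature rising while barometric pressure falls
temp = trend_to_stimulus(20.0, 23.5)          # -> 1
pressure = trend_to_stimulus(1015.0, 1008.0)  # -> 2
print(precipitation_stimulus(temp, pressure)) # -> 4
```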
Overall, our 3-rat Brainet predicted changes in the probability of precipitation with 41.02 ± 5.1% accuracy which was much higher than chance (No-ICMS: 16.67 ± 8.82%; n = 3 sessions; t = 2.388, df = 4; P = 0.0377) (also see Fig. 7B ).
Discussion
In this study we described different Brainet architectures capable of extracting information from multiple (3–4) rat brains. Our Brainets employed ICMS-based BtBIs combined with neuronal ensemble recordings to simultaneously deliver and retrieve information to and from multiple brains. Multiple BtBIs were used to construct some of our Brainet designs. Our experiments demonstrated that several Brainet architectures can be employed to solve basic computational problems. Moreover, in all cases analyzed the Brainet performance was equal or superior to that of an individual brain. These results provide a proof of concept for the possibility of creating computational engines composed of multiple interconnected animal brains.
Previously, Brainets have incorporated only up to two subjects exchanging motor or sensory information 2 , or up to three monkeys that collectively controlled the 3D movements of a virtual arm 8 . These studies provided two major building blocks for Brainet design: (1) information transfer between individual brains, and (2) collaborative performance among multiple animal brains. Here, we took advantage of these building blocks to demonstrate more advanced Brainet processing by solving multiple computational problems, which included discrete classification, image processing, storage and retrieval of memories, and a simplified form of weather forecasting 1 , 2 , 8 . All these computations were dependent on the collective work of cortical neuronal ensembles recorded simultaneously from multiple animal brains working towards a common goal.
One could argue that the Brainet operations demonstrated here could result from local responses of S1 neurons to ICMS. Several lines of evidence suggest that this was not the case. First, we have demonstrated that animals needed several sessions of training before they learned to synchronize their S1 activity with other rats. Second, the decoding for individual neurons in untrained rats was close to chance levels. Third, attempts to make the Brainet work in anesthetized animals resulted in poor performance. Fourth, network synchronization and individual neuron decoding failed when animals did not attend to the task requirements and engaged in grooming instead. Fifth, removing the reward contingency drastically reduced the Brainet performance. Sixth, after we reduced trial duration, the decoding from individual neurons dropped to levels close to chance.
Altogether, these findings indicate that optimal Brainet processing was only attainable in fully awake, actively engaged animals, with an expectation to be rewarded for correct performance. These features are of utmost importance since they allowed Brainets to retain the computational aptitudes of the awake brain 11 and, in addition, to benefit from emergent properties resulting from the interactions between multiple individuals 2 . It is also noteworthy that the Brainets implemented here only allowed partial social interactions between subjects (through the Plexiglas panels). As such, it is not clear from our current study to what extent social interactions played a pivotal role in Brainet performance. Therefore, it will be interesting to repeat and expand these experiments by allowing full social contact between multiple animals engaged in a Brainet operation. In this context, Brainets may become a very useful tool to investigate the neurophysiological basis of animal social interactions and group behavior.
We have previously proposed that the accuracy of the BtBI could be improved by increasing the number of nodes in the network and the size of neuronal ensembles utilized to process and transfer information 2 . The novel Brainet architectures tested in the present study support these suggestions, as we have demonstrated an overall improvement in BtBI performances compared to our previous study (maximum of 72% correct in the previous study versus maximum of 87% correct here) 2 . Since neuron dropping curves did not reach a plateau, it is likely that the performance of our Brainet architectures can be significantly improved by the utilization of larger cortical neuronal samples. In addition, switching between sequential and parallel processing modes, as was done in the last experiment, allowed the same Brainet to process more than two bits of information. It is important to emphasize, however, that the computational tasks examined in this study were implemented through Boolean logic 10 , 12 . In future studies we propose to address a new range of computational problems by using simultaneous analog and digital processing. By doing so, we intend to identify computational problems that are more suitable for Brainets to solve. Our hypothesis is that, instead of typical computational problems addressed by digital machines, Brainets will be much more amenable to solving the kind of problems faced by animals in their natural environments.
The present study has also shown that the use of multiple interconnected brains improved Brainet performance by introducing redundancy in the overall processing of the inputs and allowing groups of animals to share the attentional load during the task, as previously reported for monkey Brainets 8 . Therefore, our findings extended the concept of BtBIs by showing that these interfaces can allow networks of brains to alternate between sequential and parallel processing 13 and to store information.
In conclusion, we propose that animal Brainets have significant potential both as a new experimental tool for investigating the systems-level neurophysiological mechanisms of social interactions and group behavior, and as a test bed for building organic computing devices that can take advantage of a hybrid digital-analogue architecture.
Methods
All animal procedures were performed in accordance with the National Research Council’s Guide for the Care and Use of Laboratory Animals and were approved by the Duke University Institutional Animal Care and Use Committee. Long Evans rats weighing between 250–350 g were used in all experiments.
Tasks of synchronization and desynchronization
Groups of four rats, divided into two pairs (dyads), were placed in two behavioral chambers (one dyad in each chamber). Rats belonging to the same dyad (i.e. inside the same chamber) could see each other through a Plexiglas panel, but not the animals in the other dyad. Each trial in a session consisted of four different periods: baseline (from 0–9 seconds), ICMS (9–11 seconds), test (11–12 seconds), and reward (13–25 seconds). During the baseline period, no action was required from the rats. During the ICMS period, a pattern of ICMS (20 pulses, at 22–26 Hz, 10–100 μA) was delivered to all rats simultaneously. During the Test period, neural activity from all neurons recorded in each rat was analyzed and compared to the neural activity of all other animals as a population. Spikes from individual channels were summed to generate a population vector representing the overall activity, which generally constitutes a good indicator of whisking and/or licking activity 14 . The population vectors for each of the four rats were then normalized. Lastly, we calculated the Pearson correlation between the normalized population vector of each rat and the general population of rats (the average of the neural population vectors from the three remaining rats). During Pre-Sessions, neural activity was analyzed in each trial, but no ICMS or water reward was delivered. During Sessions, neural activity was analyzed after the delivery of an ICMS stimulus and, if the threshold for a correct trial was reached (at least three rats with R ≥ 0.2), then a water reward was delivered. During the Post-Sessions, neural activity was recorded and a water reward was delivered if animals reached the threshold for a correct trial; however, no ICMS stimuli were delivered.
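A minimal numerical sketch of this trial criterion is given below; the array shapes, helper names, and normalization details are our illustration rather than the authors’ implementation.

```python
# Sketch of the synchronization criterion described above: each rat's normalized
# population vector is correlated with the average of the other three rats, and
# a trial is correct if at least three rats reach R >= 0.2. Shapes and names are
# illustrative, not the authors' implementation.
import numpy as np

def normalize(v):
    return (v - v.mean()) / (v.std() + 1e-12)

def trial_is_correct(pop_vectors, r_threshold=0.2, min_rats=3):
    """pop_vectors: array of shape (4 rats, n_time_bins) of summed spike counts."""
    pop = np.array([normalize(v) for v in pop_vectors])
    r_values = []
    for i in range(pop.shape[0]):
        others = np.delete(pop, i, axis=0).mean(axis=0)  # average of the other rats
        r = np.corrcoef(pop[i], others)[0, 1]            # Pearson correlation
        r_values.append(r)
    return sum(r >= r_threshold for r in r_values) >= min_rats, r_values
```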
Additionally, we tested the effect of ICMS alone and in anesthetized animals (Ketamine/Xylazine, 100 mg/kg). During the synchronization/desynchronization task, two different ICMS patterns were delivered: Stimulus 1 consisted of the same pattern that was used for the synchronization task, and the threshold for a correct trial remained the same. Stimulus 2 consisted of two short bursts of ICMS (2 × 4 pulses at 22–26 Hz, separated by a 250 ms interval), and the threshold for a correct response was fewer than three rats reaching an R value of 0.2 during the testing period.
Adaptive decoding algorithm
During the experiments in which the adaptive decoding algorithm was used (discrete classification, tactile memory storage, and sequential and parallel processing), the ICMS patterns remained as described above. Neural activity was analyzed separately for each neuron in each rat: firing rates were binned in 25 ms bins and filtered with a 250 ms moving average. Each session began with an initial period of 16–30 trials in which Stimuli 1 and 2 were delivered to the rats in order to build firing-rate distributions for each stimulus. The overall firing rate for each bin in the test period was then analyzed and, according to these probability distributions, a vote for Stimulus 1 or Stimulus 2 was calculated. Bins with similar spike distributions for both stimuli were not analyzed. A final vote for each cell was then calculated using the votes from all the bins that showed differences in firing rate between the two stimuli. Lastly, the final votes of the cells in the population were filtered with a sigmoid curve. This filtering allowed the best encoding cells in the ensembles to contribute significantly more than other cells to the overall decision made by the Brainet in each trial. Additionally, the weights of the cell population could be automatically adjusted at different intervals (e.g. every 10 or 15 trials).
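A rough sketch of this voting scheme is given below. It is not the authors’ code: the bin-exclusion rule, the sigmoid parameters, and the use of per-cell accuracies as weights are assumptions made for illustration; the per-bin votes, the per-cell majority vote, and the sigmoid weighting of well-encoding cells follow the description above.

```python
import numpy as np

def decode_trial(binned_rates, calib_mean, calib_std, cell_accuracy):
    """Illustrative per-neuron voting decoder.

    binned_rates: (n_cells, n_bins) smoothed firing rates of the test period.
    calib_mean, calib_std: (n_cells, n_bins, 2) per-bin rate statistics built
        from the initial calibration trials for Stimulus 1 and Stimulus 2.
    cell_accuracy: (n_cells,) recent decoding accuracy of each cell.
    Returns 1 or 2, the stimulus selected by the Brainet.
    """
    n_cells, n_bins = binned_rates.shape
    cell_votes = np.zeros(n_cells, dtype=int)
    for c in range(n_cells):
        bin_votes = []
        for b in range(n_bins):
            m1, m2 = calib_mean[c, b]
            s1, s2 = calib_std[c, b]
            # Skip bins whose calibration distributions are too similar
            # (the study's exact criterion is not specified).
            if abs(m1 - m2) < 0.5 * (s1 + s2):
                continue
            # Vote for the stimulus whose calibration mean is closer.
            bin_votes.append(1 if abs(binned_rates[c, b] - m1)
                             < abs(binned_rates[c, b] - m2) else 2)
        if bin_votes:
            cell_votes[c] = 1 if bin_votes.count(1) >= bin_votes.count(2) else 2

    # Sigmoid weighting: better-encoding cells contribute more to the decision.
    weights = 1.0 / (1.0 + np.exp(-10.0 * (cell_accuracy - 0.5)))
    score = np.sum(weights[cell_votes == 1]) - np.sum(weights[cell_votes == 2])
    return 1 if score >= 0 else 2
```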
For the image processing experiment, groups of four rats were tested. An original image was pixelated and converted into multiple trials, each trial corresponding to a white (Stimulus 1) or blue (Stimulus 2) pixel in the original image. In each trial one of the two ICMS stimuli was delivered to the Brainet. After the neural activity of the Brainet was decoded, a new image corresponding to the Brainet’s overall processing was reconstructed.
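The conversion between image and trial sequence is simple bookkeeping; the sketch below is only an illustration (the function names are hypothetical, and the decoding step would be handled by the adaptive decoder sketched earlier).

```python
import numpy as np

def image_to_stimuli(pixels):
    """pixels: 2-D array with 0 = white pixel, 1 = blue pixel.
    Each pixel becomes one trial: Stimulus 1 for white, Stimulus 2 for blue."""
    return [1 if p == 0 else 2 for p in pixels.ravel()]

def votes_to_image(votes, shape):
    """Rebuild the output image from the Brainet's trial-by-trial votes."""
    return np.array([0 if v == 1 else 1 for v in votes]).reshape(shape)
```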
Memory storage experiment
For this specific experiment only three rats were used in each session, and ICMS frequency patterns varied between 20 and 100 Hz. The number of pulses remained the same as in the previous experiments. Each memory was handled across a period of four trials, representing four different stages of processing: Stimulus Delivery (Trial 1), Pattern Storage I (Trial 2), Pattern Storage II (Trial 3), and lastly Stimulus Recovery (Trial 4). Information was initially delivered to the S1 cortex of the first rat (Rat 1) in the first trial (Stimulus Delivery). In Trial 2, the information decoded from the cortex of Rat 1 was delivered as an ICMS pattern to the second rat (Rat 2) (Pattern Storage I). In Trial 3, the information decoded from the S1 of Rat 2 was delivered to Rat 3 (Pattern Storage II). In Trial 4, the information decoded from the cortex of Rat 3 was delivered to the cortex of Rat 1 as a pattern of ICMS (Stimulus Recovery). Lastly, if stimulus encoding and decoding were correct across all four trials (chance level: 6.25%), the memory was considered recovered. The overall number of memories decoded, the percentage of stimuli decoded, and the accuracy of the brain-to-brain interface information transfer were measured. As a control measure, the Plexiglas panels separating the dyads were made opaque for this experiment. Additionally, while the tactile pattern was delivered to the rat involved in a given memory stage (delivery, storage, or recovery), a random Stimulus 1 or 2 was delivered to the remaining rats. This random stimulation of the remaining individuals ensured that, in each trial, rats could not identify whether or not they were participating in the tactile trace.
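The four-trial relay can be summarized in a few lines. This is a hedged sketch, not the experimental code: `decode` is a hypothetical stand-in for one rat’s stimulus classification, and the relay order (Rat 1 to Rat 2 to Rat 3 and back to Rat 1) follows the description above. Because each of the four decoding steps is a binary choice, the chance level of recovering a memory is 0.5 ** 4 = 6.25%.

```python
def run_memory_block(initial_stimulus, decode, relay=("rat1", "rat2", "rat3", "rat1")):
    """Simulate one memory block: Stimulus Delivery, Pattern Storage I and II,
    and Stimulus Recovery. `decode(rat, stimulus)` should return 1 or 2."""
    trace = []
    stimulus = initial_stimulus
    for rat in relay:                    # Trials 1-4
        decoded = decode(rat, stimulus)  # classify the delivered ICMS pattern
        trace.append(decoded)
        stimulus = decoded               # re-deliver the decoded pattern to the next node
    recovered = all(d == initial_stimulus for d in trace)
    return trace, recovered
```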
Sequential and parallel processing experiment
Each block of information processing consisted of two trials: the first trial corresponded to parallel processing and the second trial to sequential processing. Two dyads of rats were formed: Dyad 1 (Rat 1–Rat 2) and Dyad 2 (Rat 3–Rat 4). During the first trial each dyad processed one of two ICMS stimuli independently of the other dyad. After the delivery of the ICMS stimuli to each dyad, neural activity was decoded and the stimulus for Trial 2 was computed from the results. If both dyads encoded the same stimulus (Stimulus 1–Stimulus 1, or Stimulus 2–Stimulus 2), then the ICMS stimulus in Trial 2 was Stimulus 3. Otherwise, if the dyads encoded different ICMS stimuli (Stimulus 1–Stimulus 2, or Stimulus 2–Stimulus 1), then the ICMS stimulus in Trial 2 was Stimulus 4. Stimuli 1 and 3, and Stimuli 2 and 4, had the exact same physical characteristics (number of pulses). During the second trial the same stimulus was delivered simultaneously to all four rats, and the Brainet encoded an overall response. A block of information was considered correct only if both dyads were correct in Trial 1 and the Brainet was correct in Trial 2.
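The rule that maps the two dyads’ Trial 1 outputs to the Trial 2 stimulus is a simple Boolean function (effectively an XNOR over two one-bit outputs). A minimal sketch, with a hypothetical function name:

```python
def second_trial_stimulus(dyad1_output, dyad2_output):
    """Matching Trial 1 outputs (1-1 or 2-2) map to Stimulus 3;
    mismatching outputs (1-2 or 2-1) map to Stimulus 4."""
    return 3 if dyad1_output == dyad2_output else 4
```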
For the weather forecasting experiment, groups of three animals were tested. Sessions were run as described above for sequential and parallel processing, except that Trial 1 (parallel processing) was handled by one rat (temperature) and one dyad of rats (barometric pressure), while Trial 2 (sequential processing: probability of precipitation) was handled by the whole Brainet (three rats).
To establish a simple weather forecast model we used original data from Raleigh/Durham Airport (KRDU), at WWW.Wunderground.com, collected on August 2, 2014. We used periods characterized by increases and decreases in temperature and barometric pressure as independent variables, and increases in the probability of precipitation as the dependent variable. A total of 13 periods were collected. These included 26 independent inputs for the odd trials (13 variations in temperature and 13 variations in barometric pressure), as well as 13 changes in the probability of precipitation to be compared with the Brainet outputs (i.e. the actual forecast). Specifically, an increase in temperature (Stimulus 1 for the first rat) combined with a decrease in barometric pressure (Stimulus 2 for Rats 2–3) during the odd trial was computed as an increase in the probability of precipitation (Stimulus 4 to the Brainet in the even trial). Otherwise, an increase or decrease in temperature (Stimulus 1 or 2 in the odd trial) combined with an increase in barometric pressure (Stimulus 1 for Rats 2 and 3) was computed as a decrease in the probability of precipitation (Stimulus 3 for the Brainet) in the even trial. Stimuli 1 and 3, and Stimuli 2 and 4, had the exact same physical characteristics (number of pulses).
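The forecasting rule itself is again a two-input Boolean mapping. The sketch below is illustrative only (hypothetical function name) and simply encodes the rule stated above: rising temperature together with falling pressure predicts more rain (Stimulus 4), and every other combination predicts less rain (Stimulus 3).

```python
def precipitation_stimulus(temp_stimulus, pressure_stimulus):
    """temp_stimulus: 1 = temperature increase, 2 = decrease (Rat 1).
    pressure_stimulus: 1 = pressure increase, 2 = decrease (Rats 2-3)."""
    return 4 if (temp_stimulus == 1 and pressure_stimulus == 2) else 3
```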
Surgery for microelectrode array implantation
Fixed or movable microelectrode bundles or arrays of electrodes were implanted bilaterally in the S1 of rats. Craniotomies were made and arrays lowered at the following stereotaxic coordinates: AP −3.5 mm, ML ±5.5 mm, DV −1.5 mm.
Electrophysiological recordings
A Multineuronal Acquisition Processor (64 channels; Plexon Inc., Dallas, TX) was used to record neuronal spikes, as previously described [15]. Briefly, differentiated neural signals were amplified (20,000–32,000×) and digitized at 40 kHz. Up to four single neurons per recording channel were sorted online (Sort Client 2002, Plexon Inc., Dallas, TX).
Intracortical electrical microstimulation
Intracortical electrical microstimulation cues were generated by an electrical microstimulator (Master-8, AMPI, Jerusalem, Israel) controlled by a custom Matlab (MathWorks, Natick, MA, USA) script receiving information from the Plexon system over the internet. Patterns of 8–20 bipolar, biphasic, charge-balanced 200 μs pulses at 20–120 Hz were delivered to S1. Current intensity varied from 10–100 μA.
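For orientation, the timing of one such cue can be written out in a couple of lines; this is purely illustrative, with the function name and default parameters chosen within the ranges given above.

```python
import numpy as np

def icms_pulse_onsets(n_pulses=20, frequency_hz=25.0):
    """Onset times (s) of an ICMS cue: n_pulses biphasic, charge-balanced
    pulses delivered at frequency_hz (the study used 8-20 pulses at
    20-120 Hz, 200 microsecond pulses, 10-100 microamps)."""
    return np.arange(n_pulses) / frequency_hz
```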
Additional Information
How to cite this article: Pais-Vieira, M. et al. Building an organic computing device with multiple interconnected brains. Sci. Rep. 5, 11869; doi: 10.1038/srep11869 (2015).
References
1. Nicolelis, M. Beyond Boundaries: The New Neuroscience of Connecting Brains with Machines—and How It Will Change Our Lives. 1st edn (Times Books/Henry Holt and Co., 2011).
2. Pais-Vieira, M., Lebedev, M., Kunicki, C., Wang, J. & Nicolelis, M. A. A brain-to-brain interface for real-time sharing of sensorimotor information. Sci Rep 3, 1319, doi:10.1038/srep01319 (2013).
3. West, B. J., Turalska, M. & Grigolini, P. Networks of Echoes: Imitation, Innovation and Invisible Leaders (Springer, 2014).
4. Deadwyler, S. A. et al. Donor/recipient enhancement of memory in rat hippocampus. Front Syst Neurosci 7, 120, doi:10.3389/fnsys.2013.00120 (2013).
5. Yoo, S. S., Kim, H., Filandrianos, E., Taghados, S. J. & Park, S. Non-invasive brain-to-brain interface (BBI): establishing functional links between two brains. PLoS One 8, e60410, doi:10.1371/journal.pone.0060410 (2013).
6. Rao, R. P. et al. A direct brain-to-brain interface in humans. PLoS One 9, e111332, doi:10.1371/journal.pone.0111332 (2014).
7. Grau, C. et al. Conscious brain-to-brain communication in humans using non-invasive technologies. PLoS One 9, e105225, doi:10.1371/journal.pone.0105225 (2014).
8. Ramakrishnan, A. et al. Computing arm movements with a monkey Brainet. Sci Rep, in press (2015).
9. Carmena, J. M. et al. Learning to control a brain-machine interface for reaching and grasping by primates. PLoS Biol 1, E42, doi:10.1371/journal.pbio.0000042 (2003).
10. Boole, G. The Mathematical Analysis of Logic, Being an Essay Towards a Calculus of Deductive Reasoning (MacMillan, Barclay & MacMillan, 1847).
11. Krupa, D. J., Wiest, M. C., Shuler, M. G., Laubach, M. & Nicolelis, M. A. Layer-specific somatosensory cortical activation during active tactile discrimination. Science 304, 1989–1992, doi:10.1126/science.1093318 (2004).
12. Harris, J. M., Hirst, J. L. & Mossinghoff, M. J. Combinatorics and Graph Theory. 2nd edn (Springer, 2008).
13. Grama, A. Introduction to Parallel Computing. 2nd edn (Addison-Wesley, 2003).
14. Pais-Vieira, M., Lebedev, M. A., Wiest, M. C. & Nicolelis, M. A. Simultaneous top-down modulation of the primary somatosensory cortex and thalamic nuclei during active tactile discrimination. J Neurosci 33, 4076–4093, doi:10.1523/JNEUROSCI.1659-12.2013 (2013).
15. Nicolelis, M. A. L. Methods for Neural Ensemble Recordings. 2nd edn (CRC Press, 2008).
Acknowledgements
The authors would like to thank James Meloy for microelectrode array manufacturing and setup development, Po-He Tseng and Eric Thomson for comments on the manuscript, Laura Oliveira, Susan Halkiotis, and Terry Jones for miscellaneous assistance. This work was supported by NIH R01DE011451, R01NS073125, RC1HD063390, National Institute of Mental Health award DP1MH099903, and by Fundacao BIAL 199/12 to MALN. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.
Author information
Affiliations
Department of Neurobiology, Duke University, Durham, North Carolina 27710
Miguel Pais-Vieira, 
Gabriela Chiuffa, 
Mikhail Lebedev & 
Miguel A. L. Nicolelis
Department of Biomedical Engineering, Duke University, Durham, North Carolina 27710
Amol Yadav & 
Miguel A. L. Nicolelis
Department of Psychology and Neuroscience, Duke University, Durham, North Carolina 27710
Miguel A. L. Nicolelis
Duke Center for Neuroengineering, Duke University, Durham, North Carolina 27710
Mikhail Lebedev & 
Miguel A. L. Nicolelis
Edmond and Lily Safra International Institute for Neuroscience of Natal, Natal, Brazil
Miguel A. L. Nicolelis
Contributions
M.P.V. and G.S. performed the experiments; M.P.V. and M.A.N. conceptualized the experiments; M.P.V., A.Y., M.L. and M.A.N. analyzed the data. M.P.V., M.L. and M.A.N. wrote the manuscript. M.P.V. prepared Figures 1–7 and SF1–3. G.S. also prepared Figure 4. All authors reviewed the manuscript.
Competing financial interests
The authors declare no competing financial interests.
Corresponding author
Correspondence to: 
Supplementary information
This work is licensed under a Creative Commons Attribution 4.0 International License. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in the credit line; if the material is not included under the Creative Commons license, users will need to obtain permission from the license holder to reproduce the material. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/
ORIGINAL: Nature Scientific Reports 5, Article number: 11869; doi:10.1038/srep11869
Received 03 March 2015; Accepted 09 June 2015; Published 09 July 2015

At a glance

Figure 1: Experimental apparatus scheme for a Brainet computing device.

A) A Brainet of four interconnected brains is shown. The arrows represent the flow of information through the Brainet. Inputs were delivered as simultaneous ICMS patterns to the S1 cortex of each rat. Neural activity was then recorded and analyzed in real time. Rats were required to synchronize their neural activity with the rest of the Brainet to receive water. B) Inputs to the Brainet were delivered as ICMS patterns to the left S1, while outputs were calculated using the neural responses recorded from the right S1. C) Brainet architectures were set to mimic hidden layers of an artificial neural network. D) Examples of perievent histograms of neurons after the delivery of ICMS.

Figure 2: The Brainet can synchronize neural activity. A) The different colors indicate the different manipulations used to study synchronization across the network. During the pre-session, rats were tested for periods of spurious neural synchronization; no ICMS or rewards were delivered here. During sessions, rats were tested for increased neural synchronization due to detection of the ICMS stimulus (red period). Successful synchronization was rewarded with water. During the post-session, rats were tested for periods of neural synchronization due to the effects of reward (e.g. continuous whisking/licking). Successful synchronization was rewarded with water, but no ICMS stimulus was delivered. B) Example of neuronal activity across the Brainet. After the ICMS there was a general tendency for neural activity to increase. Periods of maximum firing rate are represented in red. C) The performance of the Brainet during sessions was above that of the pre-sessions and post-sessions. Delivery of ICMS alone or during anesthetized states also resulted in poor performance. ** and *** indicate P < 0.01 and P < 0.0001, respectively. D) Overall changes in R values in early and late sessions show that improvements in performance were accompanied by specific changes in the periods of synchronized activity. E) Example of a synchronization trial. The lower panels show, in red, the neural activity of each rat and, in blue, the average neural activity of the rest of the Brainet. The upper panels depict the R value for the correlation coefficient between each rat and the rest of the Brainet. There was an overall tendency for the Brainet to correlate at the beginning of the test period.

Figure 3: The Brainet can both synchronize and desynchronize neural activity. 

A) Architecture of a Brainet that can synchronize and desynchronize its neural activity to perform virtual tactile stimulus classification. Different patterns of ICMS were simultaneously delivered to each rat in the Brainet. Neural signals from all neurons of each brain were analyzed and compared to those of the remaining rats in the Brainet. The Brainet was required to synchronize its neural activity to indicate the delivery of Stimulus 1 and to desynchronize its neural activity to indicate the delivery of Stimulus 2. 

B) Example of perievent histograms of neurons for ICMS Stimulus 1 and 2. 

C) The Brainet performance was above that of No-ICMS sessions and above individual rats’ performances. * indicates P < 0.05; ** indicates P < 0.01; n.s. indicates not significant.


Figure 4: Brainet for discrete classification. 

A) Architecture of a Brainet for stimulus classification. Two different patterns of ICMS were simultaneously delivered to each rat in the Brainet. Neural signals from each individual neuron were analyzed separately and used to determine an overall classification vote for the Brainet. 

B) Example of a session where a total of 62 neurons were recorded from four different animals. Deep blue indicates poor encoding, while dark red indicates good encoding. Although Rat 3 presented the best encoding neurons, all rats contributed to the network’s final classification. 

C) Performance of Brainet during sessions was significantly higher when compared to the No-ICMS sessions. Additionally, because the neural activity is redundant across multiple brains, the overall performance of the Brainet was also higher than in individual brains. *** indicates P < 0.0001.

D) Neuron dropping curve of Brainet for discrete classification. The effect of redundancy in encoding can be observed in the Brainet as the best encoding cells from each session are removed. 

E) The panels depict the dynamics of the stimulus presented (X axis: 1 or 2) and the Brainet classifications (Y axis: 1 or 2) during sessions and No-ICMS sessions. During regular sessions, the Brainet classifications mostly matched the stimulus presented (lower left and upper right quadrants). Meanwhile, during No-ICMS sessions the Brainet classifications were evenly distributed across all four quadrants. The percentages indicate the fraction of trials in each quadrant (Stimulus 1, vote 1 not shown). 

F) Example of an image processed by the Brainet for discrete classification. An original image was pixelated and each blue or white pixel was delivered as a different ICMS pattern to the Brainet during a series of trials (Stimulus 1 – white; Stimulus 2 – blue). The left panel shows the original input image and the right panel shows the output of the Brainet.

Figure 5: A Brainet for storage and retrieval of tactile memories. 

A) Tactile memories encoded as two different ICMS stimuli were stored in the Brainet by keeping information flowing between different nodes (i.e. rats). Tactile information sent to the first rat in Trial 1 (‘Stimulus Decoding’), was successively decoded and transferred between Rats 2 and 3, and again transferred to Rat 1, across a period of four trials (memory trace in red). The use of the brain-to-brain interface between the nodes of the network allowed accurate transfer of information. 

B) The overall performance of the Brainet was significantly better than the performance in the No-ICMS sessions and better than individual rats performing 4 consecutive correct trials. In this panel, * indicates P < 0.05 and *** indicates P < 0.001. 

C) Neuron dropping curve of Brainet for storage and retrieval of memories. 

D) Example of session with multiple memories (each column) processed in blocks of four trials (each row). Information flows from the bottom (Stimulus delivered) towards the top (Trials 1–4). Blue and red indicate Stimulus 1 or 2 respectively. Correct tactile memory traces are columns which have a full sequence of trials with the same color (see blocks: 3, 5, 7 and 9). In this panel, * indicates an incorrect trial.

Figure 6: A Brainet for parallel and sequential processing. 

A) Architecture of a network for parallel and sequential processing. Information flows from the bottom to the top during the course of two trials. In the first trial (odd trial, parallel processing), Dyad 1 (Rat 1–Rat 2) and Dyad 2 (Rat 3–Rat 4) each independently received one of two ICMS patterns. During Trial 2 (even trial, sequential processing), the whole Brainet again received one of two ICMS patterns. However, the pattern delivered in the even trial depended on the results of the first trial and was calculated according to the colored matrix presented. As depicted by the different encasing of the matrix (blue or red), if both dyads encoded the same stimulus in the odd trial (Stimulus 1–Stimulus 1 or Stimulus 2–Stimulus 2), then the stimulus delivered in the even trial corresponded to Stimulus 3. Otherwise, if each dyad encoded a different stimulus in the odd trial (Stimulus 1–Stimulus 2 or Stimulus 2–Stimulus 1), then the stimulus delivered in the even trial was Stimulus 4. Each correct block of information required three accurate estimates of the stimulus delivered (i.e. encoding by both dyads in the odd trial, as well as by the whole Brainet in the even trial). 

B) Example of session with sequential and parallel processing. The bottom and center panel show the dyads processing the stimuli during the odd trials (parallel processing), while the top panel shows the performance of the whole Brainet during the even trials. In this panel, * indicates an incorrect classification. 

C) The performance of the Brainet was significantly better than the performance during the No-ICMS sessions and above the performance of individual rats performing blocks of 3 correct trials. In this panel, * indicates P < 0.05.

Figure 7: Parallel and sequential processing for weather forecast. 

A) Each panel shows examples of the original data, reflecting changes in temperature (lower panel), barometric pressure (center panel), and probability of precipitation (upper panel). The arrows represent general changes in each variable, indicating an increase or a decrease. The ICMS pattern corresponding to each arrow is shown at the top of each panel. 

B) Lower and center panels show trials where different rats of the Brainet (Rat 1 lower panel, and Rats 2-3 center panel) processed the original data in parallel. Specifically, Rat 1 processed temperature changes and Rats 2-3 processed barometric pressure changes. The upper panel shows the Brainet processing changes in the probability of precipitation (Rats 1–3) during the even trials. * indicates trials where processing was incorrect.

Linux Creator Linus Torvalds Laughs at the AI Apocalypse

By admin,

Over the past several months, many of the world’s most famous scientists and engineers —including Stephen Hawking — have said that one of the biggest threats to humanity is an artificial superintelligence. But Linus Torvalds, the irascible creator of open source operating system Linux, says their fears are idiotic.
He also raised some good points, explaining that what we’re likely to see isn’t some destructive superintelligence like Skynet, but instead a series of “targeted AI” that do things like language translation or scheduling. Basically, these would just be “fancier” versions of apps like Google Now or Siri. They will not, however, be cybergods, or even human-equivalent forms of intelligence.
In a Q&A with Slashdot community members, Torvalds explained what he thinks will be the result of research into neural networks and AI:
“We’ll get AI, and it will almost certainly be through something very much like recurrent neural networks. And the thing is, since that kind of AI will need training, it won’t be ‘reliable’ in the traditional computer sense. It’s not the old rule-based Prolog days, when people thought they’d *understand* what the actual decisions were in an AI.

And that all makes it very interesting, of course, but it also makes it hard to productise. Which will very much limit where you’ll actually find those neural networks, and what kinds of network sizes and inputs and outputs they’ll have.

So I’d expect just more of (and much fancier) rather targeted AI, rather than anything human-like at all

  • Language recognition, 
  • pattern recognition, 
  • things like that. 

I just don’t see the situation where you suddenly have some existential crisis because your dishwasher is starting to discuss Sartre with you.


As for the idea that AI might usher in a Singularity, where computers experience an “intelligence explosion” and figure out how to produce any object we could desire, in infinite amounts? Oh, and also help us live forever? Maybe Google’s Singulatarian guru Ray Kurzweil buys into that myth, but Torvalds is seriously skeptical:

The whole ‘Singularity’ kind of event? Yeah, it’s science fiction, and not very good Sci-Fi at that, in my opinion. Unending exponential growth? What drugs are those people on? I mean, really.

Even after all these years, Torvalds can still troll with the best of ‘em. And I love him for it.
ORIGINAL: Gizmodo

An executive’s guide to machine learning

By admin,


It’s no longer the preserve of artificial-intelligence researchers and born-digital companies like Amazon, Google, and Netflix.
Machine learning is based on algorithms that can learn from data without relying on rules-based programming. It came into its own as a scientific discipline in the late 1990s as steady advances in digitization and cheap computing power enabled data scientists to stop building finished models and instead train computers to do so. The unmanageable volume and complexity of the big data that the world is now swimming in have increased the potential of machine learning—and the need for it.
Stanford’s Fei-Fei Li

In 2007 Fei-Fei Li, the head of Stanford’s Artificial Intelligence Lab, gave up trying to program computers to recognize objects and began labeling the millions of raw images that a child might encounter by age three and feeding them to computers. By being shown thousands and thousands of labeled data sets with instances of, say, a cat, the machine could shape its own rules for deciding whether a particular set of digital pixels was, in fact, a cat.1 Last November, Li’s team unveiled a program that identifies the visual elements of any picture with a high degree of accuracy. IBM’s Watson machine relied on a similar self-generated scoring system among hundreds of potential answers to crush the world’s best Jeopardy! players in 2011.
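To make the contrast with rules-based programming concrete, here is a minimal, hypothetical illustration of learning from labeled examples; it uses scikit-learn and synthetic feature vectors as stand-ins for real image data, and is not drawn from Li’s or IBM’s work.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))            # 200 synthetic "images", 10 features each
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # synthetic labels: 1 = "cat", 0 = "not cat"

model = LogisticRegression().fit(X, y)    # the machine infers its own decision rule
print(model.predict(X[:5]), y[:5])        # compare predictions with labels
```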

Dazzling as such feats are, machine learning is nothing like learning in the human sense (yet). But what it already does extraordinarily well—and will get better at—is relentlessly chewing through any amount of data and every combination of variables. Because machine learning’s emergence as a mainstream management tool is relatively recent, it often raises questions. In this article, we’ve posed some that we often hear and answered them in a way we hope will be useful for any executive. Now is the time to grapple with these issues, because the competitive significance of business models turbocharged by machine learning is poised to surge. Indeed, management author Ram Charan suggests that any organization that is not a math house now or is unable to become one soon is already a legacy company.2
1. How are traditional industries using machine learning to gather fresh business insights?
Well, let’s start with sports. This past spring, contenders for the US National Basketball Association championship relied on the analytics of Second Spectrum, a California machine-learning start-up. By digitizing the past few seasons’ games, it has created predictive models that allow a coach to distinguish between, as CEO Rajiv Maheswaran puts it, “a bad shooter who takes good shots and a good shooter who takes bad shots”—and to adjust his decisions accordingly.
You can’t get more venerable or traditional than General Electric, the only member of the original Dow Jones Industrial Average still around after 119 years. GE already makes hundreds of millions of dollars by crunching the data it collects from deep-sea oil wells or jet engines to optimize performance, anticipate breakdowns, and streamline maintenance. But Colin Parris, who joined GE Software from IBM late last year as vice president of software research, believes that continued advances in data-processing power, sensors, and predictive algorithms will soon give his company the same sharpness of insight into the individual vagaries of a jet engine that Google has into the online behavior of a 24-year-old netizen from West Hollywood.
2. What about outside North America?
In Europe, more than a dozen banks have replaced older statistical-modeling approaches with machine-learning techniques and, in some cases, experienced 10 percent increases in sales of new products, 20 percent savings in capital expenditures, 20 percent increases in cash collections, and 20 percent declines in churn. The banks have achieved these gains by devising new recommendation engines for clients in retailing and in small and medium-sized companies. They have also built microtargeted models that more accurately forecast who will cancel service or default on their loans, and how best to intervene.
Closer to home, as a recent article in McKinsey Quarterly notes,3 our colleagues have been applying hard analytics to the soft stuff of talent management. Last fall, they tested the ability of three algorithms developed by external vendors and one built internally to forecast, solely by examining scanned résumés, which of more than 10,000 potential recruits the firm would have accepted. The predictions strongly correlated with the real-world results. Interestingly, the machines accepted a slightly higher percentage of female candidates, which holds promise for using analytics to unlock a more diverse range of profiles and counter hidden human bias.
As ever more of the analog world gets digitized, our ability to learn from data by developing and testing algorithms will only become more important for what are now seen as traditional businesses. Google chief economist Hal Varian calls this “computer kaizen.” For “just as mass production changed the way products were assembled and continuous improvement changed how manufacturing was done,” he says, “so continuous [and often automatic] experimentation will improve the way we optimize business processes in our organizations.4
3. What were the early foundations of machine learning?
Machine learning is based on a number of earlier building blocks, starting with classical statistics. Statistical inference does form an important foundation for the current implementations of artificial intelligence. But it’s important to recognize that classical statistical techniques were developed between the 18th and early 20th centuries for much smaller data sets than the ones we now have at our disposal. Machine learning is unconstrained by the preset assumptions of statistics. As a result, it can yield insights that human analysts do not see on their own and make predictions with ever-higher degrees of accuracy.
More recently, in the 1930s and 1940s, the pioneers of computing (such as Alan Turing, who had a deep and abiding interest in artificial intelligence) began formulating and tinkering with the basic techniques such as neural networks that make today’s machine learning possible. But those techniques stayed in the laboratory longer than many technologies did and, for the most part, had to await the development and infrastructure of powerful computers, in the late 1970s and early 1980s. That’s probably the starting point for the machine-learning adoption curve. New technologies introduced into modern economies—the steam engine, electricity, the electric motor, and computers, for example—seem to take about 80 years to transition from the laboratory to what you might call cultural invisibility. The computer hasn’t faded from sight just yet, but it’s likely to by 2040. And it probably won’t take much longer for machine learning to recede into the background.
4. What does it take to get started?
C-level executives will best exploit machine learning if they see it as a tool to craft and implement a strategic vision. But that means putting strategy first. Without strategy as a starting point, machine learning risks becoming a tool buried inside a company’s routine operations: it will provide a useful service, but its long-term value will probably be limited to an endless repetition of “cookie cutter” applications such as models for acquiring, stimulating, and retaining customers.
We find the parallels with M&A instructive. That, after all, is a means to a well-defined end. No sensible business rushes into a flurry of acquisitions or mergers and then just sits back to see what happens. Companies embarking on machine learning should make the same three commitments companies make before embracing M&A. Those commitments are,

  • first, to investigate all feasible alternatives;
  • second, to pursue the strategy wholeheartedly at the C-suite level; and,
  • third, to use (or if necessary acquire) existing expertise and knowledge in the C-suite to guide the application of that strategy.
The people charged with creating the strategic vision may well be (or have been) data scientists. But as they define the problem and the desired outcome of the strategy, they will need guidance from C-level colleagues overseeing other crucial strategic initiatives. More broadly, companies must have two types of people to unleash the potential of machine learning.

  • “Quants” are schooled in its language and methods.
  • “Translators” can bridge the disciplines of data, machine learning, and decision making by reframing the quants’ complex results as actionable insights that generalist managers can execute.
Effective machine learning requires access to troves of useful and reliable data, as illustrated by Watson’s ability, in tests, to predict oncological outcomes better than physicians, or by Facebook’s recent success in teaching computers to identify specific human faces nearly as accurately as humans do. A true data strategy starts with identifying gaps in the data, determining the time and money required to fill those gaps, and breaking down silos. Too often, departments hoard information and politicize access to it—one reason some companies have created the new role of chief data officer to pull together what’s required. Other elements include putting responsibility for generating data in the hands of frontline managers.
Start small—look for low-hanging fruit and trumpet any early success. This will help recruit grassroots support and reinforce the changes in individual behavior and the employee buy-in that ultimately determine whether an organization can apply machine learning effectively. Finally, evaluate the results in the light of clearly identified criteria for success.
5. What’s the role of top management?
Behavioral change will be critical, and one of top management’s key roles will be to influence and encourage it. Traditional managers, for example, will have to get comfortable with their own variations on A/B testing, the technique digital companies use to see what will and will not appeal to online consumers. Frontline managers, armed with insights from increasingly powerful computers, must learn to make more decisions on their own, with top management setting the overall direction and zeroing in only when exceptions surface. Democratizing the use of analytics—providing the front line with the necessary skills and setting appropriate incentives to encourage data sharing—will require time.
C-level officers should think about applied machine learning in three stages: machine learning 1.0, 2.0, and 3.0—or, as we prefer to say,

  1. description, 
  2. prediction, and
  3. prescription. 

They probably don’t need to worry much about the description stage, which most companies have already been through. That was all about collecting data in databases (which had to be invented for the purpose), a development that gave managers new insights into the past. OLAP—online analytical processing—is now pretty routine and well established in most large organizations.

There’s a much more urgent need to embrace the prediction stage, which is happening right now. Today’s cutting-edge technology already allows businesses not only to look at their historical data but also to predict behavior or outcomes in the future—for example, by helping credit-risk officers at banks to assess which customers are most likely to default or by enabling telcos to anticipate which customers are especially prone to “churn” in the near term (exhibit).
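As a hedged sketch of what that prediction stage can look like in practice (the file name, column names, and model choice below are illustrative assumptions, not details from the article), a churn model can be as simple as:

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

history = pd.read_csv("customer_history.csv")   # hypothetical historical data
features = history[["tenure_months", "monthly_spend", "support_calls"]]
churned = history["churned"]                    # 1 if the customer later left

model = GradientBoostingClassifier().fit(features, churned)
history["churn_risk"] = model.predict_proba(features)[:, 1]
print(history.sort_values("churn_risk", ascending=False).head(10))
```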
Exhibit
 
A frequent concern for the C-suite when it embarks on the prediction stage is the quality of the data. That concern often paralyzes executives. In our experience, though, the last decade’s IT investments have equipped most companies with sufficient information to obtain new insights even from incomplete, messy data sets, provided of course that those companies choose the right algorithm. Adding exotic new data sources may be of only marginal benefit compared with what can be mined from existing data warehouses. Confronting that challenge is the task of the “chief data scientist.”
Prescription—the third and most advanced stage of machine learning—is the opportunity of the future and must therefore command strong C-suite attention. It is, after all, not enough just to predict what customers are going to do; only by understanding why they are going to do it can companies encourage or deter that behavior in the future. Technically, today’s machine-learning algorithms, aided by human translators, can already do this. For example, an international bank concerned about the scale of defaults in its retail business recently identified a group of customers who had suddenly switched from using credit cards during the day to using them in the middle of the night. That pattern was accompanied by a steep decrease in their savings rate. After consulting branch managers, the bank further discovered that the people behaving in this way were also coping with some recent stressful event. As a result, all customers tagged by the algorithm as members of that microsegment were automatically given a new limit on their credit cards and offered financial advice.
 
The prescription stage of machine learning, ushering in a new era of man–machine collaboration, will require the biggest change in the way we work. While the machine identifies patterns, the human translator’s responsibility will be to interpret them for different microsegments and to recommend a course of action. Here the C-suite must be directly involved in the crafting and formulation of the objectives that such algorithms attempt to optimize.
6. This sounds awfully like automation replacing humans in the long run. Are we any nearer to knowing whether machines will replace managers?
It’s true that change is coming (and data are generated) so quickly that human-in-the-loop involvement in all decision making is rapidly becoming impractical. Looking three to five years out, we expect to see far higher levels of artificial intelligence, as well as the development of distributed autonomous corporations. These self-motivating, self-contained agents, formed as corporations, will be able to carry out set objectives autonomously, without any direct human supervision. Some DACs will certainly become self-programming.
One current of opinion sees distributed autonomous corporations as threatening and inimical to our culture. But by the time they fully evolve, machine learning will have become culturally invisible in the same way technological inventions of the 20th century disappeared into the background. The role of humans will be to direct and guide the algorithms as they attempt to achieve the objectives that they are given. That is one lesson of the automatic-trading algorithms which wreaked such damage during the financial crisis of 2008.
No matter what fresh insights computers unearth, only human managers can decide the essential questions, such as which critical business problems a company is really trying to solve. Just as human colleagues need regular reviews and assessments, so these “brilliant machines” and their works will also need to be regularly evaluated, refined—and, who knows, perhaps even fired or told to pursue entirely different paths—by executives with experience, judgment, and domain expertise.
The winners will be neither machines alone, nor humans alone, but the two working together effectively.
7. So in the long term there’s no need to worry?
It’s hard to be sure, but distributed autonomous corporations and machine learning should be high on the C-suite agenda. We anticipate a time when the philosophical discussion of what intelligence, artificial or otherwise, might be will end because there will be no such thing as intelligence—just processes. If distributed autonomous corporations act intelligently, perform intelligently, and respond intelligently, we will cease to debate whether high-level intelligence other than the human variety exists. In the meantime, we must all think about what we want these entities to do, the way we want them to behave, and how we are going to work with them.
About the authors
Dorian Pyle is a data expert in McKinsey’s Miami office, and Cristina San Jose is a principal in the Madrid office.
ORIGINAL: McKinsey
by Dorian Pyle and Cristina San Jose
June 2015

Scientists Just Invented the Neural Lace

By admin,

A 3D microscope image of the mesh merging with brain cells.

Images via Charles Lieber

In the Culture novels by Iain M. Banks, futuristic post-humans install devices on their brains called a “neural lace.” A mesh that grows with your brain, it’s essentially a wireless brain-computer interface. But it’s also a way to program your neurons to release certain chemicals with a thought. And now, there’s a neural lace prototype in real life.

A group of chemists and engineers who work with nanotechnology published a paper this month in Nature Nanotechnology about an ultra-fine mesh that can merge into the brain to create what appears to be a seamless interface between machine and biological circuitry. Called “mesh electronics,” the device is so thin and supple that it can be injected with a needle — they’ve already tested it on mice, who survived the implantation and are thriving. The researchers describe their device as “syringe-injectable electronics,” and say it has a number of uses, including 

  • monitoring brain activity, 
  • delivering treatment for degenerative disorders like Parkinson’s, and 
  • even enhancing brain capabilities.

Writing about the paper in Smithsonian magazine, Devin Powell says a number of groups are investing in this research, including the military:

[Study researcher Charles Lieber’s] backers include Fidelity Biosciences, a venture capital firm interested in new ways to treat neurodegenerative disorders such as Parkinson’s disease. The military has also taken an interest, providing support through the U.S. Air Force’s Cyborgcell program, which focuses on small-scale electronics for the “performance enhancement” of cells.

For now, the mice with this electronic mesh are connected by a wire to a computer — but in the future, this connection could become wireless. The most amazing part about the mesh is that the mouse brain cells grew around it, forming connections with the wires, essentially welcoming a mechanical component into a biochemical system.


Lieber and his colleagues do hope to begin testing it on humans as soon as possible, though realistically that’s many years off. Still, this could be the beginning of the first true human internet, where brain-to-brain interfaces are possible via injectable electronics that pass your mental traffic through the cloud. What could go wrong?

[Read the scientific article in Nature Nanotechnology]

ORIGINAL: Gizmodo
Annalee Newitz
6/15/15


Robert Reich: The Nightmarish Future for American Jobs and Incomes Is Here

By admin,

Even knowledge-based jobs will disappear as wealth gets more concentrated at the top in the next 10 years.
What will happen to American jobs, incomes, and wealth a decade from now?
Predictions are hazardous but survivable. In 1991, in my book The Work of Nations, I separated almost all work into three categories, and then predicted what would happen to each of them.
The first category I called “routine production services,” which entailed the kind of repetitive tasks performed by the old foot soldiers of American capitalism through most of the twentieth century — done over and over, on an assembly line or in an office.
I estimated that such work then constituted about one-quarter of all jobs in the United States, but would decline steadily as such jobs were replaced by
  • new labor-saving technologies and
  • by workers in developing nations eager to do them for far lower wages.

I also assumed the pay of remaining routine production workers in America would drop, for similar reasons.

I was not far wrong.
The second category I called “in-person services.” This work had to be provided personally because the “human touch” was essential to it. It included retail sales workers, hotel and restaurant workers, nursing-home aides, realtors, childcare workers, home health-care aides, flight attendants, physical therapists, and security guards, among many others.
In 1990, by my estimate, such workers accounted for about 30 percent of all jobs in America, and I predicted their numbers would grow because — given that their services were delivered in person — neither advancing technologies nor foreign-based workers would be able to replace them.
I also predicted their pay would drop. They would be competing with
  • a large number of former routine production workers, who could only find jobs in the “in-person” sector.
  • They would also be competing with labor-saving machinery such as automated tellers, computerized cashiers, automatic car washes, robotized vending machines, and self-service gas pumps —
  • as well as “personal computers linked to television screens” through which “tomorrow’s consumers will be able to buy furniture, appliances, and all sorts of electronic toys from their living rooms — examining the merchandise from all angles, selecting whatever color, size, special features, and price seem most appealing, and then transmitting the order instantly to warehouses from which the selections will be shipped directly to their homes. 
  • So, too, with financial transactions, airline and hotel reservations, rental car agreements, and similar contracts, which will be executed between consumers in their homes and computer banks somewhere else on the globe.”

Here again, my predictions were not far off. But I didn’t foresee how quickly advanced technologies would begin to make inroads even on in-person services. Ten years from now I expect Amazon will have wiped out many of today’s retail jobs, and Google‘s self-driving car will eliminate many bus drivers, truck drivers, sanitation workers, and even Uber drivers.

The third job category I named “symbolic-analytic services.” Here I included all the problem-solving, problem-identifying, and strategic thinking that go into the manipulation of symbols—data, words, oral and visual representations.
I estimated in 1990 that symbolic analysts accounted for 20 percent of all American jobs, and expected their share to continue to grow, as would their incomes, because the demand for people to do these jobs would continue to outrun the supply of people capable of doing them. This widening disconnect between symbolic-analytic jobs and the other two major categories of work would, I predicted, be the major force driving widening inequality.
Again, I wasn’t far off. But I didn’t anticipate how quickly or how wide the divide would become, or how great a toll inequality and economic insecurity would take. I would never have expected, for example, that the life expectancy of an American white woman without a high school degree would decrease by five years between 1990 and 2008.
We are now faced not just with labor-replacing technologies but with knowledge-replacing technologies. The combination of
  • advanced sensors,
  • voice recognition,
  • artificial intelligence,
  • big data,
  • text-mining, and
  • pattern-recognition algorithms,

is generating smart robots capable of quickly learning human actions, and even learning from one another. A revolution in life sciences is also underway, allowing drugs to be tailored to a patient’s particular condition and genome.

If the current trend continues, many more symbolic analysts will be replaced in coming years. The two largest professionally intensive sectors of the United States — health care and education — will be particularly affected because of increasing pressures to hold down costs and, at the same time, the increasing accessibility of expert machines.
We are on the verge of a wave of mobile health applications, for example, measuring everything from calories to blood pressure, along with software programs capable of performing the same functions as costly medical devices and diagnostic software that can tell you what it all means and what to do about it.
Schools and universities will likewise be reorganized around smart machines (although faculties will scream all the way). Many teachers and university professors are already on the way to being replaced by software — so-called “MOOCs” (Massive Open Online Courses) and interactive online textbooks — along with adjuncts that guide student learning.
As a result, income and wealth will become even more concentrated than they are today. Those who create or invest in blockbuster ideas will earn unprecedented sums and returns. The corollary is they will have enormous political power. But most people will not share in the monetary gains, and their political power will disappear. The middle class’s share of the total economic pie will continue to shrink, while the share going to the very top will continue to grow.
But the current trend is not preordained to last, and only the most rigid technological determinist would assume this to be our inevitable fate. We can — indeed, I believe we must — ignite a political movement to reorganize the economy for the benefit of the many, rather than for the lavish lifestyles of a precious few and their heirs. (I have more to say on this in my upcoming book, Saving Capitalism: For the Many, Not the Few, out at the end of September.)
Robert B. Reich has served in three national administrations, most recently as secretary of labor under President Bill Clinton. He also served on President Obama’s transition advisory board. His latest book is “Aftershock: The Next Economy and America’s Future.” His homepage is www.robertreich.org.
May 7, 2015
ROBERT B. REICH, Chancellor’s Professor of Public Policy at the University of California at Berkeley and Senior Fellow at the Blum Center for Developing Economies, was Secretary of Labor in the Clinton administration. Time Magazine named him one of the ten most effective cabinet secretaries of the twentieth century. He has written thirteen books, including the best sellers “Aftershock” and “The Work of Nations.” His latest, “Beyond Outrage,” is now out in paperback. He is also a founding editor of the American Prospect magazine and chairman of Common Cause. His new film, “Inequality for All,” is now available on Netflix, iTunes, DVD, and On Demand.

‘Highly creative’ professionals won’t lose their jobs to robots, study finds

By admin,

ORIGINAL Fortune
APRIL 22, 2015
A University of Oxford study finds that there are some things that a robot won’t be able to do. Unfortunately, these gigs don’t pay all that well.
Many people are in “robot overlord denial,” according to a recent online poll run by jobs board Monster.com. They think computers could not replace them at work. Sadly, most are probably wrong.
University of Oxford researchers Carl Benedikt Frey and Michael Osborne estimated in 2013 that 47% of total U.S. jobs could be automated by 2033. The combination of robotics, automation, artificial intelligence, and machine learning is so powerful that some white collar workers are already being replaced — and we’re talking journalists, lawyers, doctors, and financial analysts, not the person who used to file all the incoming faxes.
But there’s hope, at least for some. According to an advanced copy of a new report that U.K. non-profit Nesta sent to Fortune, 21% of US employment requires people to be “highly creative.” Of them, 86% (18% of the total workforce) are at low or no risk from automation. In the U.K., 87% of those in creative fields are similarly at low or no risk from automation.
“Artists, musicians, computer programmers, architects, advertising specialists … there’s a very wide range of creative occupations,” said report co-author Hasan Bakhshi, director of creative economy at Nesta, to Fortune. Some other types would be financial managers, judges, management consultants, and IT managers. “Those jobs have a very high degree of resistance to automation.”
The study is based on the work of Frey and Osborne, who are also co-authors of this new report. The three researchers fed 120 job descriptions from the US Department of Labor into a computer and analyzed them to see which were most likely to require extensive creativity, or the use of imagination or ideas to make something new.
Creativity is one of the three classic bottlenecks to automating work, according to Bakhshi. “Tasks which involve a high degree of human manipulation and human perception — subtle tasks — other things being equal will be more difficult to automate,” he said. For instance, although goods can be manufactured in a robotic factory, real craft work still “requires the human touch.”
So will jobs that need social intelligence, such as your therapist or life insurance agent.
Of course, the degree of creativity matters. Financial journalists who rewrite financial statements are already beginning to be supplanted by software. The more repetitive and dependent on data the work is, the more easily a human can be pushed aside.
In addition, just because certain types of creative occupations can’t easily be replaced doesn’t mean that their industries won’t see disruption. Packing and shipping crafts can be automated, as can some aspects of the film industry other than directing, acting, and design. “These industries are going to be disrupted and are vulnerable,” Bakhshi said.
Also, not all these will necessarily provide a financial windfall. The study found an “inverse U-shape” relationship between the probability of an occupation being highly creative and the average income it might deliver. Musicians, actors, dancers, and artists might make relatively little, while people in technical, financial, and legal creative occupations can do quite well. So keeping that creative job may not seem much of a financial blessing in many cases.
Are you in a “creative” role that will be safe from automation? You can find out what these Oxford researchers think by taking their online quiz.

The AI Revolution: The Road to Superintelligence (Parts 1 and 2)

By admin,

ORIGINAL: Wait But Why
By Tim Urban
Note: The reason this post took three weeks to finish is that as I dug into research on Artificial Intelligence, I could not believe what I was reading. It hit me pretty quickly that what’s happening in the world of AI is not just an important topic, but by far THE most important topic for our future. So I wanted to learn as much as I could about it, and once I did that, I wanted to make sure I wrote a post that really explained this whole situation and why it matters so much. Not shockingly, that became outrageously long, so I broke it into two parts. This is Part 1—Part 2 is here.
_______________
We are on the edge of change comparable to the rise of human life on Earth. — Vernor Vinge
 
What does it feel like to stand here?
It seems like a pretty intense place to be standing—but then you have to remember something about what it’s like to stand on a time graph: you can’t see what’s to your right. So here’s how it actually feels to stand there:

Which probably feels pretty normal…
_______________
The Far Future—Coming Soon
Imagine taking a time machine back to 1750—a time when the world was in a permanent power outage, long-distance communication meant either yelling loudly or firing a cannon in the air, and all transportation ran on hay. When you get there, you retrieve a dude, bring him to 2015, and then walk him around and watch him react to everything. It’s impossible for us to understand what it would be like for him to see shiny capsules racing by on a highway, talk to people who had been on the other side of the ocean earlier in the day, watch sports that were being played 1,000 miles away, hear a musical performance that happened 50 years ago, and play with my magical wizard rectangle that he could use to capture a real-life image or record a living moment, generate a map with a paranormal moving blue dot that shows him where he is, look at someone’s face and chat with them even though they’re on the other side of the country, and worlds of other inconceivable sorcery. This is all before you show him the internet or explain things like the International Space Station, the Large Hadron Collider, nuclear weapons, or general relativity.
This experience for him wouldn’t be surprising or shocking or even mind-blowing—those words aren’t big enough. He might actually die.
But here’s the interesting thing—if he then went back to 1750 and got jealous that we got to see his reaction and decided he wanted to try the same thing, he’d take the time machine and go back the same distance, get someone from around the year 1500, bring him to 1750, and show him everything. And the 1500 guy would be shocked by a lot of things—but he wouldn’t die. It would be far less of an insane experience for him, because while 1500 and 1750 were very different, they were much less different than 1750 to 2015. The 1500 guy would learn some mind-bending shit about space and physics, he’d be impressed with how committed Europe turned out to be with that new imperialism fad, and he’d have to do some major revisions of his world map conception. But watching everyday life go by in 1750—transportation, communication, etc.—definitely wouldn’t make him die.
No, in order for the 1750 guy to have as much fun as we had with him, he’d have to go much farther back—maybe all the way back to about 12,000 BC, before the First Agricultural Revolution gave rise to the first cities and to the concept of civilization. If someone from a purely hunter-gatherer world—from a time when humans were, more or less, just another animal species—saw the vast human empires of 1750 with their towering churches, their ocean-crossing ships, their concept of being “inside,” and their enormous mountain of collective, accumulated human knowledge and discovery—he’d likely die.
And then what if, after dying, he got jealous and wanted to do the same thing. If he went back 12,000 years to 24,000 BC and got a guy and brought him to 12,000 BC, he’d show the guy everything and the guy would be like, “Okay what’s your point who cares.” For the 12,000 BC guy to have the same fun, he’d have to go back over 100,000 years and get someone he could show fire and language to for the first time.
In order for someone to be transported into the future and die from the level of shock they’d experience, they have to go enough years ahead that a “die level of progress,” or a Die Progress Unit (DPU) has been achieved. So a DPU took over 100,000 years in hunter-gatherer times, but at the post-Agricultural Revolution rate, it only took about 12,000 years. The post-Industrial Revolution world has moved so quickly that a 1750 person only needs to go forward a couple hundred years for a DPU to have happened.
This pattern—human progress moving quicker and quicker as time goes on—is what futurist Ray Kurzweil calls human history’s Law of Accelerating Returns. This happens because more advanced societies have the ability to progress at a faster rate than less advanced societies—because they’re more advanced. 19th century humanity knew more and had better technology than 15th century humanity, so it’s no surprise that humanity made far more advances in the 19th century than in the 15th century—15th century humanity was no match for 19th century humanity.
This works on smaller scales too. The movie Back to the Future came out in 1985, and “the past” took place in 1955. In the movie, when Michael J. Fox went back to 1955, he was caught off-guard by the newness of TVs, the prices of soda, the lack of love for shrill electric guitar, and the variation in slang. It was a different world, yes—but if the movie were made today and the past took place in 1985, the movie could have had much more fun with much bigger differences. The character would be in a time before personal computers, internet, or cell phones—today’s Marty McFly, a teenager born in the late 90s, would be much more out of place in 1985 than the movie’s Marty McFly was in 1955.
This is for the same reason we just discussed—the Law of Accelerating Returns. The average rate of advancement between 1985 and 2015 was higher than the rate between 1955 and 1985—because the former was a more advanced world—so much more change happened in the most recent 30 years than in the prior 30.
So—advances are getting bigger and bigger and happening more and more quickly. This suggests some pretty intense things about our future, right?
Kurzweil suggests that the progress of the entire 20th century would have been achieved in only 20 years at the rate of advancement in the year 2000—in other words, by 2000, the rate of progress was five times faster than the average rate of progress during the 20th century. He believes another 20th century’s worth of progress happened between 2000 and 2014 and that another 20th century’s worth of progress will happen by 2021, in only seven years. A couple decades later, he believes a 20th century’s worth of progress will happen multiple times in the same year, and even later, in less than one month. All in all, because of the Law of Accelerating Returns, Kurzweil believes that the 21st century will achieve 1,000 times the progress of the 20th century.2
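To see roughly how that arithmetic can produce a 1,000x century, here is a minimal back-of-the-envelope sketch in Python (not Kurzweil’s own model): it assumes progress is measured in “20th-century units,” that the rate of progress in 2000 was one unit per 20 years, and that the rate doubles on a fixed schedule. The doubling times are assumptions chosen only to show where a number in that ballpark can come from.

```python
# Toy check of the accelerating-returns arithmetic described above.
# Assumptions (mine, not Kurzweil's): progress is measured in "20th-century
# units", the rate in 2000 is 1 unit per 20 years, and that rate doubles
# every `doubling_time` years.
import math

def cumulative_progress(years: float, doubling_time: float,
                        rate_2000: float = 1 / 20) -> float:
    """Integrate rate_2000 * 2**(t / doubling_time) from t = 0 to `years`."""
    k = math.log(2) / doubling_time
    return rate_2000 * (math.exp(k * years) - 1) / k

for d in (8, 9, 10):
    total = cumulative_progress(100, d)
    print(f"doubling every {d:>2} years -> "
          f"~{total:,.0f} 20th-century units over the 21st century")
```

Depending on the assumed doubling time, the century’s total lands somewhere between several hundred and a few thousand “20th centuries,” which is the ballpark behind the 1,000x claim.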
If Kurzweil and others who agree with him are correct, then we may be as blown away by 2030 as our 1750 guy was by 2015—i.e. the next DPU might only take a couple decades—and the world in 2050 might be so vastly different than today’s world that we would barely recognize it.
This isn’t science fiction. It’s what many scientists smarter and more knowledgeable than you or I firmly believe—and if you look at history, it’s what we should logically predict.
So then why, when you hear me say something like “the world 35 years from now might be totally unrecognizable,” are you thinking, “Cool….but nahhhhhhh”? Three reasons we’re skeptical of outlandish forecasts of the future:
1) When it comes to history, we think in straight lines. When we imagine the progress of the next 30 years, we look back to the progress of the previous 30 as an indicator of how much will likely happen. When we think about the extent to which the world will change in the 21st century, we just take the 20th century progress and add it to the year 2000. This was the same mistake our 1750 guy made when he got someone from 1500 and expected to blow his mind as much as his own was blown going the same distance ahead. It’s most intuitive for us to think linearly, when we should be thinking exponentially. If someone is being more clever about it, they might predict the advances of the next 30 years not by looking at the previous 30 years, but by taking the current rate of progress and judging based on that. They’d be more accurate, but still way off. In order to think about the future correctly, you need to imagine things moving at a much faster rate than they’re moving now.
2) The trajectory of very recent history often tells a distorted story. First, even a steep exponential curve seems linear when you only look at a tiny slice of it, the same way if you look at a little segment of a huge circle up close, it looks almost like a straight line. Second, exponential growth isn’t totally smooth and uniform. Kurzweil explains that progress happens in “S-curves”:
An S is created by the wave of progress when a new paradigm sweeps the world. The curve goes through three phases:
  1. Slow growth (the early phase of exponential growth)
  2. Rapid growth (the late, explosive phase of exponential growth)
  3. A leveling off as the particular paradigm matures3
If you look only at very recent history, the part of the S-curve you’re on at the moment can obscure your perception of how fast things are advancing. The chunk of time between 1995 and 2007 saw the explosion of the internet, the introduction of Microsoft, Google, and Facebook into the public consciousness, the birth of social networking, and the introduction of cell phones and then smart phones. That was Phase 2: the growth spurt part of the S. But 2008 to 2015 has been less groundbreaking, at least on the technological front. Someone thinking about the future today might examine the last few years to gauge the current rate of advancement, but that’s missing the bigger picture. In fact, a new, huge Phase 2 growth spurt might be brewing right now.
3) Our own experience makes us stubborn old men about the future. We base our ideas about the world on our personal experience, and that experience has ingrained the rate of growth of the recent past in our heads as “the way things happen.” We’re also limited by our imagination, which takes our experience and uses it to conjure future predictions—but often, what we know simply doesn’t give us the tools to think accurately about the future.2 When we hear a prediction about the future that contradicts our experience-based notion of how things work, our instinct is that the prediction must be naive. If I tell you, later in this post, that you may live to be 150, or 250, or not die at all, your instinct will be, “That’s stupid—if there’s one thing I know from history, it’s that everybody dies.” And yes, no one in the past has not died. But no one flew airplanes before airplanes were invented either.
So while nahhhhh might feel right as you read this post, it’s probably actually wrong. The fact is, if we’re being truly logical and expecting historical patterns to continue, we should conclude that much, much, much more should change in the coming decades than we intuitively expect. Logic also suggests that if the most advanced species on a planet keeps making larger and larger leaps forward at an ever-faster rate, at some point, they’ll make a leap so great that it completely alters life as they know it and the perception they have of what it means to be a human—kind of like how evolution kept making great leaps toward intelligence until finally it made such a large leap to the human being that it completely altered what it meant for any creature to live on planet Earth. And if you spend some time reading about what’s going on today in science and technology, you start to see a lot of signs quietly hinting that life as we know it cannot withstand the leap that’s coming next.
_______________
The Road to Superintelligence 

What Is AI?
If you’re like me, you used to think Artificial Intelligence was a silly sci-fi concept, but lately you’ve been hearing it mentioned by serious people, and you don’t really quite get it.
There are three reasons a lot of people are confused about the term AI:

 

  1. We associate AI with movies. Star Wars. Terminator. 2001: A Space Odyssey. Even the Jetsons. And those are fiction, as are the robot characters. So it makes AI sound a little fictional to us.
  2. AI is a broad topic. It ranges from your phone’s calculator to self-driving cars to something in the future that might change the world dramatically. AI refers to all of these things, which is confusing.
  3. We use AI all the time in our daily lives, but we often don’t realize it’s AI. John McCarthy, who coined the term “Artificial Intelligence” in 1956, complained that “as soon as it works, no one calls it AI anymore.”4 Because of this phenomenon, AI often sounds like a mythical future prediction more than a reality. At the same time, it makes it sound like a pop concept from the past that never came to fruition. Ray Kurzweil says he hears people say that AI withered in the 1980s, which he compares to “insisting that the Internet died in the dot-com bust of the early 2000s.”5

 

So let’s clear things up. First, stop thinking of robots. A robot is a container for AI, sometimes mimicking the human form, sometimes not—but the AI itself is the computer inside the robot. AI is the brain, and the robot is its body—if it even has a body. For example, the software and data behind Siri is AI, the woman’s voice we hear is a personification of that AI, and there’s no robot involved at all.
Secondly, you’ve probably heard the term “singularity” or “technological singularity.” This term has been used in math to describe an asymptote-like situation where normal rules no longer apply. It’s been used in physics to describe a phenomenon like an infinitely small, dense black hole or the point we were all squished into right before the Big Bang. Again, situations where the usual rules don’t apply. In 1993, Vernor Vinge wrote a famous essay in which he applied the term to the moment in the future when our technology’s intelligence exceeds our own—a moment for him when life as we know it will be forever changed and normal rules will no longer apply. Ray Kurzweil then muddled things a bit by defining the singularity as the time when the Law of Accelerating Returns has reached such an extreme pace that technological progress is happening at a seemingly-infinite pace, and after which we’ll be living in a whole new world. I found that many of today’s AI thinkers have stopped using the term, and it’s confusing anyway, so I won’t use it much here (even though we’ll be focusing on that idea throughout).
Finally, while there are many different types or forms of AI since AI is a broad concept, the critical categories we need to think about are based on an AI’s caliber. There are three major AI caliber categories:
  • AI Caliber 1) Artificial Narrow Intelligence (ANI): Sometimes referred to as Weak AI, Artificial Narrow Intelligence is AI that specializes in one area. There’s AI that can beat the world chess champion in chess, but that’s the only thing it does. Ask it to figure out a better way to store data on a hard drive, and it’ll look at you blankly.
  • AI Caliber 2) Artificial General Intelligence (AGI): Sometimes referred to as Strong AI, or Human-Level AI, Artificial General Intelligence refers to a computer that is as smart as a human across the board—a machine that can perform any intellectual task that a human being can. Creating AGI is a much harder task than creating ANI, and we’re yet to do it. Professor Linda Gottfredson describes intelligence as “a very general mental capability that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience.” AGI would be able to do all of those things as easily as you can.
  • AI Caliber 3) Artificial Superintelligence (ASI): Oxford philosopher and leading AI thinker Nick Bostrom defines superintelligence as “an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills.” Artificial Superintelligence ranges from a computer that’s just a little smarter than a human to one that’s trillions of times smarter—across the board. ASI is the reason the topic of AI is such a spicy meatball and why the words immortality and extinction will both appear in these posts multiple times.
As of now, humans have conquered the lowest caliber of AI—ANI—in many ways, and it’s everywhere. The AI Revolution is the road from ANI, through AGI, to ASI—a road we may or may not survive but that, either way, will change everything.
Let’s take a close look at what the leading thinkers in the field believe this road looks like and why this revolution might happen way sooner than you might think:
Where We Are Currently—A World Running on ANI
Artificial Narrow Intelligence is machine intelligence that equals or exceeds human intelligence or efficiency at a specific thing. A few examples:
  • Cars are full of ANI systems, from the computer that figures out when the anti-lock brakes should kick in to the computer that tunes the parameters of the fuel injection systems. Google’s self-driving car, which is being tested now, will contain robust ANI systems that allow it to perceive and react to the world around it.
  • Your phone is a little ANI factory. When you navigate using your map app, receive tailored music recommendations from Pandora, check tomorrow’s weather, talk to Siri, or dozens of other everyday activities, you’re using ANI.
  • Your email spam filter is a classic type of ANI—it starts off loaded with intelligence about how to figure out what’s spam and what’s not, and then it learns and tailors its intelligence to you as it gets experience with your particular preferences. The Nest Thermostat does the same thing as it starts to figure out your typical routine and act accordingly.
  • You know the whole creepy thing that goes on when you search for a product on Amazon and then you see that as a “recommended for you” product on a different site, or when Facebook somehow knows who it makes sense for you to add as a friend? That’s a network of ANI systems, working together to inform each other about who you are and what you like and then using that information to decide what to show you. Same goes for Amazon’s “People who bought this also bought…” thing—that’s an ANI system whose job it is to gather info from the behavior of millions of customers and synthesize that info to cleverly upsell you so you’ll buy more things.
  • Google Translate is another classic ANI system—impressively good at one narrow task. Voice recognition is another, and there are a bunch of apps that use those two ANIs as a tag team, allowing you to speak a sentence in one language and have the phone spit out the same sentence in another.
  • When your plane lands, it’s not a human that decides which gate it should go to. Just like it’s not a human that determined the price of your ticket.
  • The world’s best Checkers, Chess, Scrabble, Backgammon, and Othello players are now all ANI systems.
  • Google search is one large ANI brain with incredibly sophisticated methods for ranking pages and figuring out what to show you in particular. Same goes for Facebook’s Newsfeed.

And those are just in the consumer world. Sophisticated ANI systems are widely used in sectors and industries like military, manufacturing, and finance (algorithmic high-frequency AI traders account for more than half of equity shares traded on US markets6), and in expert systems like those that help doctors make diagnoses and, most famously, IBM’s Watson, who contained enough facts and understood coy Trebek-speak well enough to soundly beat the most prolific Jeopardy champions.

ANI systems as they are now aren’t especially scary. At worst, a glitchy or badly-programmed ANI can cause an isolated catastrophe like knocking out a power grid, causing a harmful nuclear power plant malfunction, or triggering a financial markets disaster (like the 2010 Flash Crash when an ANI program reacted the wrong way to an unexpected situation and caused the stock market to briefly plummet, taking $1 trillion of market value with it, only part of which was recovered when the mistake was corrected).
But while ANI doesn’t have the capability to cause an existential threat, we should see this increasingly large and complex ecosystem of relatively-harmless ANI as a precursor of the world-altering hurricane that’s on the way. Each new ANI innovation quietly adds another brick onto the road to AGI and ASI. Or as Aaron Saenz sees it, our world’s ANI systems “are like the amino acids in the early Earth’s primordial ooze”—the inanimate stuff of life that, one unexpected day, woke up.
The Road From ANI to AGI
Why It’s So Hard
Nothing will make you appreciate human intelligence like learning about how unbelievably challenging it is to try to create a computer as smart as we are. Building skyscrapers, putting humans in space, figuring out the details of how the Big Bang went down—all far easier than understanding our own brain or how to make something as cool as it. As of now, the human brain is the most complex object in the known universe.
What’s interesting is that the hard parts of trying to build AGI (a computer as smart as humans in general, not just at one narrow specialty) are not intuitively what you’d think they are. Build a computer that can multiply two ten-digit numbers in a split second—incredibly easy. Build one that can look at a dog and answer whether it’s a dog or a cat—spectacularly difficult. Make AI that can beat any human in chess? Done. Make one that can read a paragraph from a six-year-old’s picture book and not just recognize the words but understand the meaning of them? Google is currently spending billions of dollars trying to do it. Hard things—like calculus, financial market strategy, and language translation—are mind-numbingly easy for a computer, while easy things—like vision, motion, movement, and perception—are insanely hard for it. Or, as computer scientist Donald Knuth puts it, “AI has by now succeeded in doing essentially everything that requires ‘thinking’ but has failed to do most of what people and animals do ‘without thinking.’”7
What you quickly realize when you think about this is that those things that seem easy to us are actually unbelievably complicated, and they only seem easy because those skills have been optimized in us (and most animals) by hundreds of millions of years of animal evolution. When you reach your hand up toward an object, the muscles, tendons, and bones in your shoulder, elbow, and wrist instantly perform a long series of physics operations, in conjunction with your eyes, to allow you to move your hand in a straight line through three dimensions. It seems effortless to you because you have perfected software in your brain for doing it. Same idea goes for why it’s not that software is dumb for not being able to figure out the slanty word recognition test when you sign up for a new account on a site—it’s that your brain is super impressive for being able to.
On the other hand, multiplying big numbers or playing chess are new activities for biological creatures and we haven’t had any time to evolve a proficiency at them, so a computer doesn’t need to work too hard to beat us. Think about it—which would you rather do, build a program that could multiply big numbers or one that could understand the essence of a B well enough that you could show it a B in any one of thousands of unpredictable fonts or handwriting and it could instantly know it was a B?
One fun example—when you look at this, you and a computer both can figure out that it’s a rectangle with two distinct shades, alternating:
Tied so far. But if you pick up the black and reveal the whole image…
…you have no problem giving a full description of the various opaque and translucent cylinders, slats, and 3-D corners, but the computer would fail miserably. It would describe what it sees—a variety of two-dimensional shapes in several different shades—which is actually what’s there. Your brain is doing a ton of fancy shit to interpret the implied depth, shade-mixing, and room lighting the picture is trying to portray.8 And looking at the picture below, a computer sees a two-dimensional white, black, and gray collage, while you easily see what it really is—a photo of an entirely-black, 3-D rock:
Credit: Matthew Lloyd
And everything we just mentioned is still only taking in stagnant information and processing it. To be human-level intelligent, a computer would have to understand things like the difference between subtle facial expressions, the distinction between being pleased, relieved, content, satisfied, and glad, and why Braveheart was great but The Patriot was terrible.
Daunting.
So how do we get there?
1- First Key to Creating AGI: Increasing Computational Power
One thing that definitely needs to happen for AGI to be a possibility is an increase in the power of computer hardware. If an AI system is going to be as intelligent as the brain, it’ll need to equal the brain’s raw computing capacity.
One way to express this capacity is in the total calculations per second (cps) the brain could manage, and you could come to this number by figuring out the maximum cps of each structure in the brain and then adding them all together.
Ray Kurzweil came up with a shortcut by taking someone’s professional estimate for the cps of one structure and that structure’s weight compared to that of the whole brain and then multiplying proportionally to get an estimate for the total. Sounds a little iffy, but he did this a bunch of times with various professional estimates of different regions, and the total always arrived in the same ballpark—around 10^16, or 10 quadrillion cps.
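As a purely illustrative sketch of that proportional-scaling shortcut (the region weight and cps figure below are hypothetical placeholders, not Kurzweil’s actual inputs; the roughly 1,400-gram brain weight is the only standard figure):

```python
# Illustrative only: scale one brain region's estimated cps up to the whole
# brain in proportion to weight. The region numbers are hypothetical.
def whole_brain_cps(region_cps: float, region_weight_g: float,
                    brain_weight_g: float = 1400.0) -> float:
    """Assume calculations per second scale roughly with tissue weight."""
    return region_cps * (brain_weight_g / region_weight_g)

# Hypothetical region: ~140 g of tissue estimated at ~1e15 cps
estimate = whole_brain_cps(region_cps=1e15, region_weight_g=140.0)
print(f"whole-brain estimate: ~{estimate:.0e} cps")  # lands around 1e16
```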
Currently, the world’s fastest supercomputer, China’s Tianhe-2, has actually beaten that number, clocking in at about 34 quadrillion cps. But Tianhe-2 is also a dick, taking up 720 square meters of space, using 24 megawatts of power (the brain runs on just 20 watts), and costing $390 million to build. Not especially applicable to wide usage, or even most commercial or industrial usage yet.
Kurzweil suggests that we think about the state of computers by looking at how many cps you can buy for $1,000. When that number reaches human-level—10 quadrillion cps—then that’ll mean AGI could become a very real part of life.
Moore’s Law is a historically-reliable rule that the world’s maximum computing power doubles approximately every two years, meaning computer hardware advancement, like general human advancement through history, grows exponentially. Looking at how this relates to Kurzweil’s cps/$1,000 metric, we’re currently at about 10 trillion cps/$1,000, right on pace with this graph’s predicted trajectory:9
So the world’s $1,000 computers are now beating the mouse brain and they’re at about a thousandth of human level. This doesn’t sound like much until you remember that we were at about a trillionth of human level in 1985, a billionth in 1995, and a millionth in 2005. Being at a thousandth in 2015 puts us right on pace to get to an affordable computer by 2025 that rivals the power of the brain.
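Taking those milestones at face value (a factor of 1,000 per decade) and the human-level benchmark as 10^16 cps, the extrapolation is easy to check; the snippet below is only a sanity check of the arithmetic in the text, not a forecast:

```python
# Extrapolate the cps-per-$1,000 milestones quoted above: a 1,000x gain
# per decade, with "human level" taken as 1e16 cps.
HUMAN_LEVEL_CPS = 1e16
milestones = {1985: 1e-12, 1995: 1e-9, 2005: 1e-6, 2015: 1e-3}  # fraction of human level
gain_per_decade = 1_000  # the factor implied by the milestones above

year, fraction = 2015, milestones[2015]
while fraction < 1:
    year += 10
    fraction *= gain_per_decade
    print(f"{year}: ~{fraction * HUMAN_LEVEL_CPS:.0e} cps per $1,000 "
          f"({fraction:.0%} of human level)")
```

Run as written, the loop stops at 2025, which is where the “affordable computer that rivals the brain” claim comes from.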
So on the hardware side, the raw power needed for AGI is technically available now, in China, and we’ll be ready for affordable, widespread AGI-caliber hardware within 10 years. But raw computational power alone doesn’t make a computer generally intelligent—the next question is, how do we bring human-level intelligence to all that power?
2- Second Key to Creating AGI: Making it Smart
This is the icky part. The truth is, no one really knows how to make it smart—we’re still debating how to make a computer human-level intelligent and capable of knowing what a dog and a weird-written B and a mediocre movie is. But there are a bunch of far-fetched strategies out there and at some point, one of them will work. Here are the three most common strategies I came across:
1) Plagiarize the brain.
This is like scientists toiling over how that kid who sits next to them in class is so smart and keeps doing so well on the tests, and even though they keep studying diligently, they can’t do nearly as well as that kid, and then they finally decide “k fuck it I’m just gonna copy that kid’s answers.” It makes sense—we’re stumped trying to build a super-complex computer, and there happens to be a perfect prototype for one in each of our heads.
The science world is working hard on reverse engineering the brain to figure out how evolution made such a rad thing—optimistic estimates say we can do this by 2030. Once we do that, we’ll know all the secrets of how the brain runs so powerfully and efficiently and we can draw inspiration from it and steal its innovations. One example of computer architecture that mimics the brain is the artificial neural network. It starts out as a network of transistor “neurons,” connected to each other with inputs and outputs, and it knows nothing—like an infant brain. The way it “learns” is it tries to do a task, say handwriting recognition, and at first, its neural firings and subsequent guesses at deciphering each letter will be completely random. But when it’s told it got something right, the transistor connections in the firing pathways that happened to create that answer are strengthened; when it’s told it was wrong, those pathways’ connections are weakened. After a lot of this trial and feedback, the network has, by itself, formed smart neural pathways and the machine has become optimized for the task. The brain learns a bit like this but in a more sophisticated way, and as we continue to study the brain, we’re discovering ingenious new ways to take advantage of neural circuitry.
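For a bare-bones illustration of that strengthen-or-weaken feedback loop, here is a single artificial “neuron” (a perceptron) learning the logical OR function in Python. It is a toy, far simpler than the networks described above, and the learning rate and number of passes are arbitrary choices:

```python
# Toy version of "strengthen connections when right, weaken when wrong":
# a single perceptron learning logical OR from feedback. Not a realistic
# model of the brain or of modern neural networks, just the core idea.
import random

random.seed(0)
weights = [random.uniform(-1, 1) for _ in range(2)]
bias = random.uniform(-1, 1)
lr = 0.1  # how strongly each piece of feedback adjusts the connections

training_data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

def fire(inputs):
    total = bias + sum(w * x for w, x in zip(weights, inputs))
    return 1 if total > 0 else 0

for _ in range(50):                      # repeated trial and feedback
    for inputs, target in training_data:
        error = target - fire(inputs)    # +1, 0, or -1
        # strengthen or weaken each connection in proportion to the error
        weights = [w + lr * error * x for w, x in zip(weights, inputs)]
        bias += lr * error

print([fire(x) for x, _ in training_data])  # a trained net answers [0, 1, 1, 1]
```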
More extreme plagiarism involves a strategy called “whole brain emulation,” where the goal is to slice a real brain into thin layers, scan each one, use software to assemble an accurate reconstructed 3-D model, and then implement the model on a powerful computer. We’d then have a computer officially capable of everything the brain is capable of—it would just need to learn and gather information. If engineers get really good, they’d be able to emulate a real brain with such exact accuracy that the brain’s full personality and memory would be intact once the brain architecture has been uploaded to a computer. If the brain belonged to Jim right before he passed away, the computer would now wake up as Jim (?), which would be a robust human-level AGI, and we could now work on turning Jim into an unimaginably smart ASI, which he’d probably be really excited about.
How far are we from achieving whole brain emulation? Well so far, we’ve just recently been able to emulate a 1mm-long flatworm brain, which consists of just 302 total neurons. The human brain contains 100 billion neurons. If that makes it seem like a hopeless project, remember the power of exponential progress—now that we’ve conquered the tiny worm brain, an ant might happen before too long, followed by a mouse, and suddenly this will seem much more plausible.
2) Try to make evolution do what it did before but for us this time.
So if we decide the smart kid’s test is too hard to copy, we can try to copy the way he studies for the tests instead.
Here’s something we know. Building a computer as powerful as the brain is possible—our own brain’s evolution is proof. And if the brain is just too complex for us to emulate, we could try to emulate evolution instead. The fact is, even if we can emulate a brain, that might be like trying to build an airplane by copying a bird’s wing-flapping motions—often, machines are best designed using a fresh, machine-oriented approach, not by mimicking biology exactly.
So how can we simulate evolution to build AGI? The method, called “genetic algorithms,” would work something like this: there would be a performance-and-evaluation process that would happen again and again (the same way biological creatures “perform” by living life and are “evaluated” by whether they manage to reproduce or not). A group of computers would try to do tasks, and the most successful ones would be bred with each other by having half of each of their programming merged together into a new computer. The less successful ones would be eliminated. Over many, many iterations, this natural selection process would produce better and better computers. The challenge would be creating an automated evaluation and breeding cycle so this evolution process could run on its own.
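Here is a minimal sketch of that perform-evaluate-breed loop, evolving bit strings toward an arbitrary target pattern; the fitness measure, population size, and mutation rate are illustrative choices, not anything from the text:

```python
# Minimal genetic algorithm: perform, evaluate, breed the best, repeat.
# The target pattern and every parameter here are arbitrary illustrations.
import random

random.seed(1)
TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1]
POP_SIZE, MUTATION_RATE, GENERATIONS = 40, 0.02, 60

def fitness(genome):                 # "evaluation": how well did it perform?
    return sum(g == t for g, t in zip(genome, TARGET))

def breed(parent_a, parent_b):       # merge half of each parent's "programming"
    cut = len(TARGET) // 2
    child = parent_a[:cut] + parent_b[cut:]
    return [1 - g if random.random() < MUTATION_RATE else g for g in child]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break
    survivors = population[: POP_SIZE // 2]      # the less successful are eliminated
    children = [breed(random.choice(survivors), random.choice(survivors))
                for _ in range(POP_SIZE - len(survivors))]
    population = survivors + children

best = max(population, key=fitness)
print(f"best genome matches {fitness(best)}/{len(TARGET)} target bits")
```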
The downside of copying evolution is that evolution likes to take a billion years to do things and we want to do this in a few decades.
But we have a lot of advantages over evolution. First, evolution has no foresight and works randomly—it produces more unhelpful mutations than helpful ones, but we would control the process so it would only be driven by beneficial glitches and targeted tweaks. Secondly, evolution doesn’t aim for anything, including intelligence—sometimes an environment might even select against higher intelligence (since it uses a lot of energy). We, on the other hand, could specifically direct this evolutionary process toward increasing intelligence. Third, to select for intelligence, evolution has to innovate in a bunch of other ways to facilitate intelligence—like revamping the ways cells produce energy—when we can remove those extra burdens and use things like electricity. There’s no doubt we’d be much, much faster than evolution—but it’s still not clear whether we’ll be able to improve upon evolution enough to make this a viable strategy.
3) Make this whole thing the computer’s problem, not ours.
This is when scientists get desperate and try to program the test to take itself. But it might be the most promising method we have.
The idea is that we’d build a computer whose two major skills would be doing research on AI and coding changes into itself—allowing it to not only learn but to improve its own architecture. We’d teach computers to be computer scientists so they could bootstrap their own development. And that would be their main job—figuring out how to make themselves smarter. More on this later.
All of This Could Happen Soon
Rapid advancements in hardware and innovative experimentation with software are happening simultaneously, and AGI could creep up on us quickly and unexpectedly for two main reasons:
1) Exponential growth is intense and what seems like a snail’s pace of advancement can quickly race upwards—this GIF illustrates this concept nicely:
2) When it comes to software, progress can seem slow, but then one epiphany can instantly change the rate of advancement (kind of like the way science, during the time humans thought the universe was geocentric, was having difficulty calculating how the universe worked, but then the discovery that it was heliocentric suddenly made everything much easier). Or, when it comes to something like a computer that improves itself, we might seem far away but actually be just one tweak of the system away from having it become 1,000 times more effective and zooming upward to human-level intelligence.
The Road From AGI to ASI
At some point, we’ll have achieved AGI—computers with human-level general intelligence. Just a bunch of people and computers living together in equality.
Oh actually not at all.
The thing is, AGI with an identical level of intelligence and computational capacity as a human would still have significant advantages over humans. Like:
Hardware:
  • Speed. The brain’s neurons max out at around 200 Hz, while today’s microprocessors (which are much slower than they will be when we reach AGI) run at 2 GHz, or 10 million times faster than our neurons. And the brain’s internal communications, which can move at about 120 m/s, are horribly outmatched by a computer’s ability to communicate optically at the speed of light. (A quick arithmetic check of these ratios follows this list.)
  • Size and storage. The brain is locked into its size by the shape of our skulls, and it couldn’t get much bigger anyway, or the 120 m/s internal communications would take too long to get from one brain structure to another. Computers can expand to any physical size, allowing far more hardware to be put to work, a much larger working memory (RAM), and a longterm memory (hard drive storage) that has both far greater capacity and precision than our own.
  • Reliability and durability. It’s not only the memories of a computer that would be more precise. Computer transistors are more accurate than biological neurons, and they’re less likely to deteriorate (and can be repaired or replaced if they do). Human brains also get fatigued easily, while computers can run nonstop, at peak performance, 24/7.

 

Software:
  • Editability, upgradability, and a wider breadth of possibility. Unlike the human brain, computer software can receive updates and fixes and can be easily experimented on. The upgrades could also span to areas where human brains are weak. Human vision software is superbly advanced, while its complex engineering capability is pretty low-grade. Computers could match the human on vision software but could also become equally optimized in engineering and any other area.
  • Collective capability. Humans crush all other species at building a vast collective intelligence. Beginning with the development of language and the forming of large, dense communities, advancing through the inventions of writing and printing, and now intensified through tools like the internet, humanity’s collective intelligence is one of the major reasons we’ve been able to get so far ahead of all other species. And computers will be way better at it than we are. A worldwide network of AI running a particular program could regularly sync with itself so that anything any one computer learned would be instantly uploaded to all other computers. The group could also take on one goal as a unit, because there wouldn’t necessarily be dissenting opinions and motivations and self-interest, like we have within the human population.10
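A quick arithmetic check of the speed figures in the hardware list above (the neuron and processor rates and the 120 m/s signal speed are the ones quoted there; the speed of light is the standard value):

```python
# Ratios behind the hardware-speed bullet above.
neuron_hz, cpu_hz = 200, 2e9            # neuron firing rate vs. a 2 GHz processor
brain_signal_mps, light_mps = 120, 3e8  # brain signal speed vs. optical links

print(f"clock-speed advantage: ~{cpu_hz / neuron_hz:,.0f}x")             # ~10,000,000x
print(f"signal-speed advantage: ~{light_mps / brain_signal_mps:,.0f}x")  # ~2,500,000x
```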
AI, which will likely get to AGI by being programmed to self-improve, wouldn’t see “human-level intelligence” as some important milestone—it’s only a relevant marker from our point of view—and wouldn’t have any reason to “stop” at our level. And given the advantages over us that even human intelligence-equivalent AGI would have, it’s pretty obvious that it would only hit human intelligence for a brief instant before racing onwards to the realm of superior-to-human intelligence.
This may shock the shit out of us when it happens. The reason is that from our perspective, A) while the intelligence of different kinds of animals varies, the main characteristic we’re aware of about any animal’s intelligence is that it’s far lower than ours, and B) we view the smartest humans as WAY smarter than the dumbest humans. Kind of like this:
So as AI zooms upward in intelligence toward us, we’ll see it as simply becoming smarter, for an animal. Then, when it hits the lowest capacity of humanity—Nick Bostrom uses the term “the village idiot”—we’ll be like, “Oh wow, it’s like a dumb human. Cute!” The only thing is, in the grand spectrum of intelligence, all humans, from the village idiot to Einstein, are within a very small range—so just after hitting village idiot-level and being declared to be AGI, it’ll suddenly be smarter than Einstein and we won’t know what hit us:
And what happens…after that?
An Intelligence Explosion
I hope you enjoyed normal time, because this is when this topic gets unnormal and scary, and it’s gonna stay that way from here forward. I want to pause here to remind you that every single thing I’m going to say is real—real science and real forecasts of the future from a large array of the most respected thinkers and scientists. Just keep remembering that.
Anyway, as I said above, most of our current models for getting to AGI involve the AI getting there by self-improvement. And once it gets to AGI, even systems that formed and grew through methods that didn’t involve self-improvement would now be smart enough to begin self-improving if they wanted to.3
And here’s where we get to an intense concept: recursive self-improvement. It works like this—
An AI system at a certain level—let’s say human village idiot—is programmed with the goal of improving its own intelligence. Once it does, it’s smarter—maybe at this point it’s at Einstein’s level—so now when it works to improve its intelligence, with an Einstein-level intellect, it has an easier time and it can make bigger leaps. These leaps make it much smarter than any human, allowing it to make even bigger leaps. As the leaps grow larger and happen more rapidly, the AGI soars upwards in intelligence and soon reaches the superintelligent level of an ASI system. This is called an Intelligence Explosion,11 and it’s the ultimate example of The Law of Accelerating Returns.
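A toy simulation makes the shape of that curve clear; the starting point, the growth rule, and the idea that 1.0 is “village idiot” level are invented purely for illustration:

```python
# Toy model of recursive self-improvement: each improvement step is
# proportional to current intelligence, so the leaps keep getting bigger.
# The scale and the 0.5 factor are purely illustrative.
intelligence = 1.0          # arbitrary starting point on an arbitrary scale
improvement_factor = 0.5    # how much smarter each redesign makes the system

for step in range(1, 11):
    leap = improvement_factor * intelligence   # smarter systems make bigger leaps
    intelligence += leap
    print(f"step {step:2d}: intelligence {intelligence:7.1f} (leap of {leap:.1f})")
```

Because each leap is proportional to the current level, the trajectory is exponential; if each leap also shortened the time to the next one, the curve would be steeper still, which is the intuition behind calling it an explosion.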
There is some debate about how soon AI will reach human-level general intelligence—the median year on a survey of hundreds of scientists about when they believed we’d be more likely than not to have reached AGI was 2040—that’s only 25 years from now, which doesn’t sound that huge until you consider that many of the thinkers in this field think it’s likely that the progression from AGI to ASI happens very quickly. Like—this could happen:
It takes decades for the first AI system to reach low-level general intelligence, but it finally happens. A computer is able to understand the world around it as well as a human four-year-old. Suddenly, within an hour of hitting that milestone, the system pumps out the grand theory of physics that unifies general relativity and quantum mechanics, something no human has been able to definitively do. 90 minutes after that, the AI has become an ASI, 170,000 times more intelligent than a human.
Superintelligence of that magnitude is not something we can remotely grasp, any more than a bumblebee can wrap its head around Keynesian Economics. In our world, smart means a 130 IQ and stupid means an 85 IQ—we don’t have a word for an IQ of 12,952.
What we do know is that humans’ utter dominance on this Earth suggests a clear rule: with intelligence comes power. Which means an ASI, when we create it, will be the most powerful being in the history of life on Earth, and all living things, including humans, will be entirely at its whim—and this might happen in the next few decades.
If our meager brains were able to invent wifi, then something 100 or 1,000 or 1 billion times smarter than we are should have no problem controlling the positioning of each and every atom in the world in any way it likes, at any time—everything we consider magic, every power we imagine a supreme God to have will be as mundane an activity for the ASI as flipping on a light switch is for us. Creating the technology to reverse human aging, curing disease and hunger and even mortality, reprogramming the weather to protect the future of life on Earth—all suddenly possible. Also possible is the immediate end of all life on Earth. As far as we’re concerned, if an ASI comes to being, there is now an omnipotent God on Earth—and the all-important question for us is:
Will it be a nice God?
That’s the topic of Part 2 of this post.
The AI Revolution: Our Immortality or Extinction
By Tim Urban
Note: This is Part 2 of a two-part series on AI. Part 1 is above
__________
We have what may be an extremely difficult problem with an unknown time to solve it, on which quite possibly the entire future of humanity depends. — Nick Bostrom
Welcome to Part 2 of the “Wait how is this possibly what I’m reading I don’t get why everyone isn’t talking about this” series.
Part 1 started innocently enough, as we discussed Artificial Narrow Intelligence, or ANI (AI that specializes in one narrow task like coming up with driving routes or playing chess), and how it’s all around us in the world today. We then examined why it was such a huge challenge to get from ANI to Artificial General Intelligence, or AGI (AI that’s at least as intellectually capable as a human, across the board), and we discussed why the exponential rate of technological advancement we’ve seen in the past suggests that AGI might not be as far away as it seems. Part 1 ended with me assaulting you with the fact that once our machines reach human-level intelligence, they might immediately do this:
This left us staring at the screen, confronting the intense concept of potentially-in-our-lifetime Artificial Superintelligence, or ASI (AI that’s way smarter than any human, across the board), and trying to figure out which emotion we were supposed to have on as we thought about that.
Before we dive into things, let’s remind ourselves what it would mean for a machine to be superintelligent.
A key distinction is the difference between speed superintelligence and quality superintelligence. Often, someone’s first thought when they imagine a super-smart computer is one that’s as intelligent as a human but can think much, much faster2—they might picture a machine that thinks like a human, except a million times quicker, which means it could figure out in five minutes what would take a human a decade.
That sounds impressive, and ASI would think much faster than any human could—but the true separator would be its advantage in intelligence quality, which is something completely different. What makes humans so much more intellectually capable than chimps isn’t a difference in thinking speed—it’s that human brains contain a number of sophisticated cognitive modules that enable things like complex linguistic representations or longterm planning or abstract reasoning, that chimps’ brains do not. Speeding up a chimp’s brain by thousands of times wouldn’t bring him to our level—even with a decade’s time, he wouldn’t be able to figure out how to use a set of custom tools to assemble an intricate model, something a human could knock out in a few hours. There are worlds of human cognitive function a chimp will simply never be capable of, no matter how much time he spends trying.
But it’s not just that a chimp can’t do what we do, it’s that his brain is unable to grasp that those worlds even exist—a chimp can become familiar with what a human is and what a skyscraper is, but he’ll never be able to understand that the skyscraper was built by humans. In his world, anything that huge is part of nature, period, and not only is it beyond him to build a skyscraper, it’s beyond him to realize that anyone can build a skyscraper. That’s the result of a small difference in intelligence quality.
And in the scheme of the intelligence range we’re talking about today, or even the much smaller range among biological creatures, the chimp-to-human quality intelligence gap is tiny. In an earlier post, I depicted the range of biological cognitive capacity using a staircase:3
To absorb how big a deal a superintelligent machine would be, imagine one on the dark green step two steps above humans on that staircase. This machine would be only slightly superintelligent, but its increased cognitive ability over us would be as vast as the chimp-human gap we just described. And like the chimp’s incapacity to ever absorb that skyscrapers can be built, we will never be able to even comprehend the things a machine on the dark green step can do, even if the machine tried to explain it to us—let alone do it ourselves. And that’s only two steps above us. A machine on the second-to-highest step on that staircase would be to us as we are to ants—it could try for years to teach us the simplest inkling of what it knows and the endeavor would be hopeless.
But the kind of superintelligence we’re talking about today is something far beyond anything on this staircase. In an intelligence explosion—where the smarter a machine gets, the quicker it’s able to increase its own intelligence, until it begins to soar upwards—a machine might take years to rise from the chimp step to the one above it, but perhaps only hours to jump up a step once it’s on the dark green step two above us, and by the time it’s ten steps above us, it might be jumping up in four-step leaps every second that goes by. Which is why we need to realize that it’s distinctly possible that very shortly after the big news story about the first machine reaching human-level AGI, we might be facing the reality of coexisting on the Earth with something that’s here on the staircase (or maybe a million times higher):
And since we just established that it’s a hopeless activity to try to understand the power of a machine only two steps above us, let’s very concretely state once and for all that there is no way to know what ASI will do or what the consequences will be for us. Anyone who pretends otherwise doesn’t understand what superintelligence means.
Evolution has advanced the biological brain slowly and gradually over hundreds of millions of years, and in that sense, if humans birth an ASI machine, we’ll be dramatically stomping on evolution. Or maybe this is part of evolution—maybe the way evolution works is that intelligence creeps up more and more until it hits the level where it’s capable of creating machine superintelligence, and that level is like a tripwire that triggers a worldwide game-changing explosion that determines a new future for all living things:
And for reasons we’ll discuss later, a huge part of the scientific community believes that it’s not a matter of whether we’ll hit that tripwire, but when. Kind of a crazy piece of information.
So where does that leave us?
Well no one in the world, especially not I, can tell you what will happen when we hit the tripwire. But Oxford philosopher and leading AI thinker Nick Bostrom believes we can boil down all potential outcomes into two broad categories.
First, looking at history, we can see that life works like this: species pop up, exist for a while, and after some time, inevitably, they fall off the existence balance beam and land on extinction—
“All species eventually go extinct” has been almost as reliable a rule through history as “All humans eventually die” has been. So far, 99.9% of species have fallen off the balance beam, and it seems pretty clear that if a species keeps wobbling along down the beam, it’s only a matter of time before some other species, some gust of nature’s wind, or a sudden beam-shaking asteroid knocks it off. Bostrom calls extinction an attractor state—a place species are all teetering on falling into and from which no species ever returns.
And while most scientists I’ve come across acknowledge that ASI would have the ability to send humans to extinction, many also believe that, used beneficially, ASI’s abilities could bring individual humans, and the species as a whole, to a second attractor state—species immortality. Bostrom believes species immortality is just as much of an attractor state as species extinction, i.e. if we manage to get there, we’ll be impervious to extinction forever—we’ll have conquered mortality and conquered chance. So even though all species so far have fallen off the balance beam and landed on extinction, Bostrom believes there are two sides to the beam and it’s just that nothing on Earth has been intelligent enough yet to figure out how to fall off on the other side.
If Bostrom and others are right, and from everything I’ve read, it seems like they really might be, we have two pretty shocking facts to absorb:
  1. The advent of ASI will, for the first time, open up the possibility for a species to land on the immortality side of the balance beam.
  2. The advent of ASI will make such an unimaginably dramatic impact that it’s likely to knock the human race off the beam, in one direction or the other.

 

It may very well be that when evolution hits the tripwire, it permanently ends humans’ relationship with the beam and creates a new world, with or without humans.
Kind of seems like the only question any human should currently be asking is: When are we going to hit the tripwire and which side of the beam will we land on when that happens?
No one in the world knows the answer to either part of that question, but a lot of the very smartest people have put decades of thought into it. We’ll spend the rest of this post exploring what they’ve come up with.
___________
Let’s start with the first part of the question: When are we going to hit the tripwire?
i.e. How long until the first machine reaches superintelligence?
Not shockingly, opinions vary wildly and this is a heated debate among scientists and thinkers. Many, like professor Vernor Vinge, scientist Ben Goertzel, Sun Microsystems co-founder Bill Joy, or, most famously, inventor and futurist Ray Kurzweil, agree with machine learning expert Jeremy Howard when he puts up this graph during a TED Talk:
Those people subscribe to the belief that this is happening soon—that exponential growth is at work and machine learning, though only slowly creeping up on us now, will blow right past us within the next few decades.
Others, like Microsoft co-founder Paul Allen, research psychologist Gary Marcus, NYU computer scientist Ernest Davis, and tech entrepreneur Mitch Kapor, believe that thinkers like Kurzweil are vastly underestimating the magnitude of the challenge and believe that we’re not actually that close to the tripwire.
The Kurzweil camp would counter that the only underestimating that’s happening is the underappreciation of exponential growth, and they’d compare the doubters to those who looked at the slow-growing seedling of the internet in 1985 and argued that there was no way it would amount to anything impactful in the near future.
The doubters might argue back that the progress needed to make advancements in intelligence also grows exponentially harder with each subsequent step, which will cancel out the typical exponential nature of technological progress. And so on.
A third camp, which includes Nick Bostrom, believes neither group has any ground to feel certain about the timeline and acknowledges both A) that this could absolutely happen in the near future and B) that there’s no guarantee about that; it could also take a much longer time.
Still others, like philosopher Hubert Dreyfus, believe all three of these groups are naive for believing that there even is a tripwire, arguing that it’s more likely that ASI won’t actually ever be achieved.
So what do you get when you put all of these opinions together?
In 2013, Vincent C. Müller and Nick Bostrom conducted a survey that asked hundreds of AI experts at a series of conferences the following question: “For the purposes of this question, assume that human scientific activity continues without major negative disruption. By what year would you see a (10% / 50% / 90%) probability for such HLMI4 to exist?” It asked them to name an optimistic year (one in which they believe there’s a 10% chance we’ll have AGI), a realistic guess (a year they believe there’s a 50% chance of AGI—i.e. after that year they think it’s more likely than not that we’ll have AGI), and a safe guess (the earliest year by which they can say with 90% certainty we’ll have AGI). Gathered together as one data set, here were the results:2
  • Median optimistic year (10% likelihood): 2022
  • Median realistic year (50% likelihood): 2040
  • Median pessimistic year (90% likelihood): 2075

 

So the median participant thinks it’s more likely than not that we’ll have AGI 25 years from now. The 90% median answer of 2075 means that if you’re a teenager right now, the median respondent, along with over half of the group of AI experts, is almost certain AGI will happen within your lifetime.
A separate study, conducted recently by author James Barrat at Ben Goertzel’s annual AGI Conference, did away with percentages and simply asked when participants thought AGI would be achieved—by 2030, by 2050, by 2100, after 2100, or never. The results:3
  • By 2030: 42% of respondents
  • By 2050: 25%
  • By 2100: 20%
  • After 2100: 10%
  • Never: 2%

Pretty similar to Müller and Bostrom’s outcomes. In Barrat’s survey, over two thirds of participants believe AGI will be here by 2050 and a little less than half predict AGI within the next 15 years. Also striking is that only 2% of those surveyed don’t think AGI is part of our future.

But AGI isn’t the tripwire, ASI is. So when do the experts think we’ll reach ASI?
Müller and Bostrom also asked the experts how likely they think it is that we’ll reach ASI A) within two years of reaching AGI (i.e. an almost-immediate intelligence explosion), and B) within 30 years. The results:4
The median answer put a rapid (2 year) AGI → ASI transition at only a 10% likelihood, but a longer transition of 30 years or less at a 75% likelihood.
We don’t know from this data the length of this transition the median participant would have put at a 50% likelihood, but for ballpark purposes, based on the two answers above, let’s estimate that they’d have said 20 years. So the median opinion—the one right in the center of the world of AI experts—believes the most realistic guess for when we’ll hit the ASI tripwire is [the 2040 prediction for AGI + our estimated prediction of a 20-year transition from AGI to ASI] = 2060.
Of course, all of the above statistics are speculative, and they’re only representative of the center opinion of the AI expert community, but it tells us that a large portion of the people who know the most about this topic would agree that 2060 is a very reasonable estimate for the arrival of potentially world-altering ASI. Only 45 years from now.
Okay now how about the second part of the question above: When we hit the tripwire, which side of the beam will we fall to?
Superintelligence will yield tremendous power—the critical question for us is:
Who or what will be in control of that power, and what will their motivation be?
The answer to this will determine whether ASI is an unbelievably great development, an unfathomably terrible development, or something in between.
Of course, the expert community is again all over the board and in a heated debate about the answer to this question. Müller and Bostrom’s survey asked participants to assign a probability to the possible impacts AGI would have on humanity and found that the mean response was that there was a 52% chance that the outcome will be either good or extremely good and a 31% chance the outcome will be either bad or extremely bad. For a relatively neutral outcome, the mean probability was only 17%. In other words, the people who know the most about this are pretty sure this will be a huge deal. It’s also worth noting that those numbers refer to the advent of AGI—if the question were about ASI, I imagine that the neutral percentage would be even lower.
Before we dive much further into this good vs. bad outcome part of the question, let’s combine both the “when will it happen?” and the “will it be good or bad?” parts of this question into a chart that encompasses the views of most of the relevant experts:
We’ll talk more about the Main Camp in a minute, but first—what’s your deal? Actually I know what your deal is, because it was my deal too before I started researching this topic. Some reasons most people aren’t really thinking about this topic:
As mentioned in Part 1, movies have really confused things by presenting unrealistic AI scenarios that make us feel like AI isn’t something to be taken seriously in general. James Barrat compares the situation to our reaction if the Centers for Disease Control issued a serious warning about vampires in our future.5
Due to something called cognitive biases, we have a hard time believing something is real until we see proof. I’m sure computer scientists in 1988 were regularly talking about how big a deal the internet was likely to be, but people probably didn’t really think it was going to change their lives until it actually changed their lives. This is partially because computers just couldn’t do stuff like that in 1988, so people would look at their computer and think, “Really? That’s gonna be a life changing thing?” Their imaginations were limited to what their personal experience had taught them about what a computer was, which made it very hard to vividly picture what computers might become. The same thing is happening now with AI. We hear that it’s gonna be a big deal, but because it hasn’t happened yet, and because of our experience with the relatively impotent AI in our current world, we have a hard time really believing this is going to change our lives dramatically. And those biases are what experts are up against as they frantically try to get our attention through the noise of collective daily self-absorption.
Even if we did believe it—how many times today have you thought about the fact that you’ll spend most of the rest of eternity not existing? Not many, right? Even though it’s a far more intense fact than anything else you’re doing today? This is because our brains are normally focused on the little things in day-to-day life, no matter how crazy a long-term situation we’re a part of. It’s just how we’re wired.
One of the goals of these two posts is to get you out of the I Like to Think About Other Things Camp and into one of the expert camps, even if you’re just standing on the intersection of the two dotted lines in the square above, totally uncertain.
During my research, I came across dozens of varying opinions on this topic, but I quickly noticed that most people’s opinions fell somewhere in what I labeled the Main Camp, and in particular, over three quarters of the experts fell into two Subcamps inside the Main Camp:
We’re gonna take a thorough dive into both of these camps. Let’s start with the fun one—
Why the Future Might Be Our Greatest Dream
As I learned about the world of AI, I found a surprisingly large number of people standing here:
The people on Confident Corner are buzzing with excitement. They have their sights set on the fun side of the balance beam and they’re convinced that’s where all of us are headed. For them, the future is everything they ever could have hoped for, just in time.
The thing that separates these people from the other thinkers we’ll discuss later isn’t their lust for the happy side of the beam—it’s their confidence that that’s the side we’re going to land on.
Where this confidence comes from is up for debate. Critics believe it comes from an excitement so blinding that they simply ignore or deny potential negative outcomes. But the believers say it’s naive to conjure up doomsday scenarios when on balance, technology has and will likely end up continuing to help us a lot more than it hurts us.
We’ll cover both sides, and you can form your own opinion about this as you read, but for this section, put your skepticism away and let’s take a good hard look at what’s over there on the fun side of the balance beam—and try to absorb the fact that the things you’re reading might really happen. If you had shown a hunter-gatherer our world of indoor comfort, technology, and endless abundance, it would have seemed like fictional magic to him—we have to be humble enough to acknowledge that it’s possible that an equally inconceivable transformation could be in our future.
Nick Bostrom describes three ways a superintelligent AI system could function:6
As an oracle, which answers nearly any question posed to it with accuracy, including complex questions that humans cannot easily answer—i.e. How can I manufacture a more efficient car engine? Google is a primitive type of oracle.
As a genie, which executes any high-level command it’s given—Use a molecular assembler to build a new and more efficient kind of car engine—and then awaits its next command.
As a sovereign, which is assigned a broad and open-ended pursuit and allowed to operate in the world freely, making its own decisions about how best to proceed—Invent a faster, cheaper, and safer way than cars for humans to privately transport themselves.
These questions and tasks, which seem complicated to us, would sound to a superintelligent system like someone asking you to improve upon the “My pencil fell off the table” situation, which you’d do by picking it up and putting it back on the table.
Eliezer Yudkowsky, a resident of Anxious Avenue in our chart above, said it well:
There are no hard problems, only problems that are hard to a certain level of intelligence. Move the smallest bit upwards [in level of intelligence], and some problems will suddenly move from “impossible” to “obvious.” Move a substantial degree upwards, and all of them will become obvious.7
There are a lot of eager scientists, inventors, and entrepreneurs in Confident Corner—but for a tour of the brightest side of the AI horizon, there’s only one person we want as our tour guide.
Ray Kurzweil is polarizing. In my reading, I heard everything from godlike worship of him and his ideas to eye-rolling contempt for them. Others were somewhere in the middle—author Douglas Hofstadter, in discussing the ideas in Kurzweil’s books, eloquently put forth that “it is as if you took a lot of very good food and some dog excrement and blended it all up so that you can’t possibly figure out what’s good or bad.”8
Whether you like his ideas or not, everyone agrees that Kurzweil is impressive. He began inventing things as a teenager, and in the following decades he came up with several breakthrough inventions, including:
  • the first flatbed scanner, 
  • the first scanner that converted text to speech (allowing the blind to read standard texts), 
  • the well-known Kurzweil music synthesizer (the first true electric piano), and 
  • the first commercially marketed large-vocabulary speech recognition. 
He’s also the author of five national bestselling books, and he’s well-known for his bold predictions, with a pretty good record of having them come true—including his prediction in the late ’80s, a time when the internet was an obscure thing, that by the early 2000s it would become a global phenomenon. Kurzweil has been called a “restless genius” by The Wall Street Journal, “the ultimate thinking machine” by Forbes, “Edison’s rightful heir” by Inc. Magazine, and “the best person I know at predicting the future of artificial intelligence” by Bill Gates.9 In 2012, Google co-founder Larry Page approached Kurzweil and asked him to be Google’s Director of Engineering.5 And in 2011, he co-founded Singularity University, which is hosted by NASA and sponsored partially by Google. Not bad for one life.

 

This biography is important. When Kurzweil articulates his vision of the future, he sounds fully like a crackpot, and the crazy thing is that he’s not—he’s an extremely smart, knowledgeable, relevant man in the world. You may think he’s wrong about the future, but he’s not a fool. Knowing he’s such a legit dude makes me happy, because as I’ve learned about his predictions for the future, I badly want him to be right. And you do too. As you hear Kurzweil’s predictions, many shared by other Confident Corner thinkers like Peter Diamandis and Ben Goertzel, it’s not hard to see why he has such a large, passionate following—known as the singularitarians. Here’s what he thinks is going to happen:
Timeline
Kurzweil believes computers will reach AGI by 2029 and that by 2045, we’ll have not only ASI, but a full-blown new world—a time he calls the singularity. His AI-related timeline used to be seen as outrageously overzealous, and it still is by many,6 but in the last 15 years, the rapid advances of ANI systems have brought the larger world of AI experts much closer to Kurzweil’s timeline. His predictions are still a bit more ambitious than the median respondent on Müller and Bostrom’s survey (AGI by 2040, ASI by 2060), but not by that much.
In Kurzweil’s depiction, the 2045 singularity is brought about by three simultaneous revolutions in biotechnology, nanotechnology, and, most powerfully, AI.
Before we move on—nanotechnology comes up in almost everything you read about the future of AI, so come into this blue box for a minute so we can discuss it—
Nanotechnology Blue Box
Nanotechnology is our word for technology that deals with the manipulation of matter that’s between 1 and 100 nanometers in size. A nanometer is a billionth of a meter, or a millionth of a millimeter, and this 1-100 range encompasses viruses (100 nm across), DNA (10 nm wide), and things as small as large molecules like hemoglobin (5 nm) and medium molecules like glucose (1 nm). If/when we conquer nanotechnology, the next step will be the ability to manipulate individual atoms, which are only one order of magnitude smaller (~.1 nm).7
To understand the challenge of humans trying to manipulate matter in that range, let’s take the same thing on a larger scale. The International Space Station is 268 mi (431 km) above the Earth. If humans were giants so large their heads reached up to the ISS, they’d be about 250,000 times bigger than they are now. If you make the 1nm – 100nm nanotech range 250,000 times bigger, you get .25mm – 2.5cm. So nanotechnology is the equivalent of a human giant as tall as the ISS figuring out how to carefully build intricate objects using materials between the size of a grain of sand and an eyeball. To reach the next level—manipulating individual atoms—the giant would have to carefully position objects that are 1/40th of a millimeter—so small normal-size humans would need a microscope to see them.8
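To make the scaling in that analogy concrete, here is a minimal sketch in Python that just redoes the multiplication with the numbers from the paragraph above; the 1.7 m human height is an assumption.

# Redoing the scaling analogy's arithmetic. The ISS altitude and the nanotech
# range come from the text; the 1.7 m human height is an assumed figure.
iss_altitude_m = 431_000
human_height_m = 1.7
scale = iss_altitude_m / human_height_m    # roughly 250,000x

nm = 1e-9
print(f"{1 * nm * scale * 1000:.2f} mm")   # ~0.25 mm, a grain of sand
print(f"{100 * nm * scale * 100:.1f} cm")  # ~2.5 cm, about an eyeball
print(f"{0.1 * nm * scale * 1000:.3f} mm") # ~0.025 mm, i.e. 1/40th of a millimeter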
Nanotech was first discussed by Richard Feynman in a 1959 talk, when he explained: “The principles of physics, as far as I can see, do not speak against the possibility of maneuvering things atom by atom. It would be, in principle, possible … for a physicist to synthesize any chemical substance that the chemist writes down…. How? Put the atoms down where the chemist says, and so you make the substance.” It’s as simple as that. If you can figure out how to move individual molecules or atoms around, you can make literally anything.
Nanotech became a serious field for the first time in 1986, when engineer Eric Drexler provided its foundations in his seminal book Engines of Creation, but Drexler suggests that those looking to learn about the most modern ideas in nanotechnology would be best off reading his 2013 book, Radical Abundance.
Gray Goo Bluer Box
We’re now in a diversion in a diversion. This is very fun.9
Anyway, I brought you here because there’s this really unfunny part of nanotechnology lore I need to tell you about. In older versions of nanotech theory, a proposed method of nanoassembly involved the creation of trillions of tiny nanobots that would work in conjunction to build something. One way to create trillions of nanobots would be to make one that could self-replicate and then let the reproduction process turn that one into two, those two then turn into four, four into eight, and in about a day, there’d be a few trillion of them ready to go. That’s the power of exponential growth. Clever, right?
It’s clever until it causes the grand and complete Earthwide apocalypse by accident. The issue is that the same power of exponential growth that makes it super convenient to quickly create a trillion nanobots makes self-replication a terrifying prospect. Because what if the system glitches, and instead of stopping replication once the total hits a few trillion as expected, they just keep replicating? The nanobots would be designed to consume any carbon-based material in order to feed the replication process, and unpleasantly, all life is carbon-based. The Earth’s biomass contains about 10^45 carbon atoms. A nanobot would consist of about 10^6 carbon atoms, so 10^39 nanobots would consume all life on Earth, which would happen in 130 replications (2^130 is about 10^39), as oceans of nanobots (that’s the gray goo) rolled around the planet. Scientists think a nanobot could replicate in about 100 seconds, meaning this simple mistake would inconveniently end all life on Earth in 3.5 hours.
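For what it’s worth, the exponential math in that scenario checks out; here is a minimal sketch that reruns it with the figures quoted above (all of them rough estimates from the text, not measured values).

import math

# Rerunning the gray goo arithmetic with the text's rough estimates.
carbon_atoms_in_biomass = 1e45
atoms_per_nanobot = 1e6
replication_time_s = 100          # assumed doubling time per generation

nanobots_needed = carbon_atoms_in_biomass / atoms_per_nanobot   # 1e39
doublings = math.ceil(math.log2(nanobots_needed))               # 130
hours = doublings * replication_time_s / 3600
print(doublings, f"{hours:.1f} hours")   # 130 doublings, about 3.6 hours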
An even worse scenario—if a terrorist somehow got his hands on nanobot technology and had the know-how to program them, he could make an initial few trillion of them and program them to quietly spend a few weeks spreading themselves evenly around the world undetected. Then, they’d all strike at once, and it would only take 90 minutes for them to consume everything—and with them all spread out, there would be no way to combat them.10
While this horror story has been widely discussed for years, the good news is that it may be overblown—Eric Drexler, who coined the term “gray goo,” sent me an email following this post with his thoughts on the gray goo scenario: “People love scare stories, and this one belongs with the zombies. The idea itself eats brains.”
Once we really get nanotech down, we can use it to make tech devices, clothing, food, a variety of bio-related products—artificial blood cells, tiny virus or cancer-cell destroyers, muscle tissue, etc.—anything really. And in a world that uses nanotechnology, the cost of a material is no longer tied to its scarcity or the difficulty of its manufacturing process, but instead determined by how complicated its atomic structure is. In a nanotech world, a diamond might be cheaper than a pencil eraser.
We’re not there yet. And it’s not clear if we’re underestimating, or overestimating, how hard it will be to get there. But we don’t seem to be that far away. Kurzweil predicts that we’ll get there by the 2020s.11 Governments know that nanotech could be an Earth-shaking development, and they’ve invested billions of dollars in nanotech research (the US, the EU, and Japan have invested over a combined $5 billion so far).12
Just considering the possibilities if a superintelligent computer had access to a robust nanoscale assembler is intense. But nanotechnology is something we came up with, that we’re on the verge of conquering, and since anything that we can do is a joke to an ASI system, we have to assume ASI would come up with technologies much more powerful and far too advanced for human brains to understand. For that reason, when considering the “If the AI Revolution turns out well for us” scenario, it’s almost impossible for us to overestimate the scope of what could happen—so if the following predictions of an ASI future seem over-the-top, keep in mind that they could be accomplished in ways we can’t even imagine. Most likely, our brains aren’t even capable of predicting the things that would happen.
What AI Could Do For Us
Armed with superintelligence and all the technology superintelligence would know how to create, ASI would likely be able to solve every problem humanity faces. Global warming? ASI could first halt CO2 emissions by coming up with much better ways to generate energy that had nothing to do with fossil fuels. Then it could create some innovative way to begin to remove excess CO2 from the atmosphere. Cancer and other diseases? No problem for ASI—health and medicine would be revolutionized beyond imagination. World hunger? ASI could use things like nanotech to build meat from scratch that would be molecularly identical to real meat—in other words, it would be real meat. Nanotech could turn a pile of garbage into a huge vat of fresh meat or other food (which wouldn’t have to have its normal shape—picture a giant cube of apple)—and distribute all this food around the world using ultra-advanced transportation. Of course, this would also be great for animals, who wouldn’t have to get killed by humans much anymore, and ASI could do lots of other things to save endangered species or even bring back extinct species through work with preserved DNA. ASI could even solve our most complex macro issues: our debates over how economies should be run and how world trade is best facilitated, even our haziest grapplings in philosophy or ethics, would all be painfully obvious to ASI.
But there’s one thing ASI could do for us that is so tantalizing, reading about it has altered everything I thought I knew about everything:
ASI could allow us to conquer our mortality.
A few months ago, I mentioned my envy of more advanced potential civilizations who had conquered their own mortality, never considering that I might later write a post that genuinely made me believe that this is something humans could do within my lifetime. But reading about AI will make you reconsider everything you thought you were sure about—including your notion of death.
Evolution had no good reason to extend our lifespans any longer than they are now. If we live long enough to reproduce and raise our children to an age that they can fend for themselves, that’s enough for evolution—from an evolutionary point of view, the species can thrive with a 30+ year lifespan, so there’s no reason mutations toward unusually long life would have been favored in the natural selection process. As a result, we’re what W.B. Yeats describes as “a soul fastened to a dying animal.”13 Not that fun.
And because everyone has always died, we live under the “death and taxes” assumption that death is inevitable. We think of aging like time—both keep moving and there’s nothing you can do to stop them. But that assumption is wrong. Richard Feynman writes:
It is one of the most remarkable things that in all of the biological sciences there is no clue as to the necessity of death. If you say we want to make perpetual motion, we have discovered enough laws as we studied physics to see that it is either absolutely impossible or else the laws are wrong. But there is nothing in biology yet found that indicates the inevitability of death. This suggests to me that it is not at all inevitable and that it is only a matter of time before the biologists discover what it is that is causing us the trouble and that this terrible universal disease or temporariness of the human’s body will be cured.
The fact is, aging isn’t stuck to time. Time will continue moving, but aging doesn’t have to. If you think about it, it makes sense. All aging is is the physical materials of the body wearing down. A car wears down over time too—but is its aging inevitable? If you perfectly repaired or replaced a car’s parts whenever one of them began to wear down, the car would run forever. The human body isn’t any different—just far more complex.
Kurzweil talks about intelligent wifi-connected nanobots in the bloodstream who could perform countless tasks for human health, including routinely repairing or replacing worn down cells in any part of the body. If perfected, this process (or a far smarter one ASI would come up with) wouldn’t just keep the body healthy, it could reverse aging. The difference between a 60-year-old’s body and a 30-year-old’s body is just a bunch of physical things that could be altered if we had the technology. ASI could build an “age refresher” that a 60-year-old could walk into, and they’d walk out with the body and skin of a 30-year-old.10 Even the ever-befuddling brain could be refreshed by something as smart as ASI, which would figure out how to do so without affecting the brain’s data (personality, memories, etc.). A 90-year-old suffering from dementia could head into the age refresher and come out sharp as a tack and ready to start a whole new career. This seems absurd—but the body is just a bunch of atoms and ASI would presumably be able to easily manipulate all kinds of atomic structures—so it’s not absurd.
Kurzweil then takes things a huge leap further. He believes that artificial materials will be integrated into the body more and more as time goes on. First, organs could be replaced by super-advanced machine versions that would run forever and never fail. Then he believes we could begin to redesign the body—things like replacing red blood cells with perfected red blood cell nanobots who could power their own movement, eliminating the need for a heart at all. He even gets to the brain and believes we’ll enhance our brain activities to the point where humans will be able to think billions of times faster than they do now and access outside information because the artificial additions to the brain will be able to communicate with all the info in the cloud.
The possibilities for new human experience would be endless. Humans have separated sex from its purpose, allowing people to have sex for fun, not just for reproduction. Kurzweil believes we’ll be able to do the same with food. Nanobots will be in charge of delivering perfect nutrition to the cells of the body, intelligently directing anything unhealthy to pass through the body without affecting anything. An eating condom. Nanotech theorist Robert A. Freitas has already designed blood cell replacements that, if one day implemented in the body, would allow a human to sprint for 15 minutes without taking a breath—so you can only imagine what ASI could do for our physical capabilities. Virtual reality would take on a new meaning—nanobots in the body could suppress the inputs coming from our senses and replace them with new signals that would put us entirely in a new environment, one that we’d see, hear, feel, and smell.
Eventually, Kurzweil believes humans will reach a point when they’re entirely artificial;11 a time when we’ll look at biological material and think how unbelievably primitive it was that humans were ever made of that; a time when we’ll read about early stages of human history, when microbes or accidents or diseases or wear and tear could just kill humans against their own will; a time the AI Revolution could bring to an end with the merging of humans and AI.12 This is how Kurzweil believes humans will ultimately conquer our biology and become indestructible and eternal—this is his vision for the other side of the balance beam. And he’s convinced we’re gonna get there. Soon.
You will not be surprised to learn that Kurzweil’s ideas have attracted significant criticism. His prediction of 2045 for the singularity and the subsequent eternal life possibilities for humans has been mocked as “the rapture of the nerds,” or “intelligent design for 140 IQ people.” Others have questioned his optimistic timeline, or his level of understanding of the brain and body, or his application of the patterns of Moore’s law, which are normally applied to advances in hardware, to a broad range of things, including software. For every expert who fervently believes Kurzweil is right on, there are probably three who think he’s way off.
But what surprised me is that most of the experts who disagree with him don’t really disagree that everything he’s saying is possible. Reading such an outlandish vision for the future, I expected his critics to be saying, “Obviously that stuff can’t happen,” but instead they were saying things like, “Yes, all of that can happen if we safely transition to ASI, but that’s the hard part.” Bostrom, one of the most prominent voices warning us about the dangers of AI, still acknowledges:


It is hard to think of any problem that a superintelligence could not either solve or at least help us solve. Disease, poverty, environmental destruction, unnecessary suffering of all kinds: these are things that a superintelligence equipped with advanced nanotechnology would be capable of eliminating. Additionally, a superintelligence could give us indefinite lifespan, either by stopping and reversing the aging process through the use of nanomedicine, or by offering us the option to upload ourselves. A superintelligence could also create opportunities for us to vastly increase our own intellectual and emotional capabilities, and it could assist us in creating a highly appealing experiential world in which we could live lives devoted to joyful game-playing, relating to each other, experiencing, personal growth, and to living closer to our ideals.
This is a quote from someone very much not on Confident Corner, but that’s what I kept coming across—experts who scoff at Kurzweil for a bunch of reasons but who don’t think what he’s saying is impossible if we can make it safely to ASI. That’s why I found Kurzweil’s ideas so infectious—because they articulate the bright side of this story and because they’re actually possible. If it’s a good god.
The most prominent criticism I heard of the thinkers on Confident Corner is that they may be dangerously wrong in their assessment of the downside when it comes to ASI. Kurzweil’s famous book The Singularity is Near is over 700 pages long and he dedicates around 20 of those pages to potential dangers. I suggested earlier that our fate when this colossal new power is born rides on who will control that power and what their motivation will be. Kurzweil neatly answers both parts of this question with the sentence, “[ASI] is emerging from many diverse efforts and will be deeply integrated into our civilization’s infrastructure. Indeed, it will be intimately embedded in our bodies and brains. As such, it will reflect our values because it will be us.”
But if that’s the answer, why are so many of the world’s smartest people so worried right now? Why does Stephen Hawking say the development of ASI “could spell the end of the human race” and Bill Gates say he doesn’t “understand why some people are not concerned” and Elon Musk fear that we’re “summoning the demon”? And why do so many experts on the topic call ASI the biggest threat to humanity? These people, and the other thinkers on Anxious Avenue, don’t buy Kurzweil’s brush-off of the dangers of AI. They’re very, very worried about the AI Revolution, and they’re not focusing on the fun side of the balance beam. They’re too busy staring at the other side, where they see a terrifying future, one they’re not sure we’ll be able to escape.
___________
Why the Future Might Be Our Worst Nightmare
One of the reasons I wanted to learn about AI is that the topic of “bad robots” always confused me. All the movies about evil robots seemed fully unrealistic, and I couldn’t really understand how there could be a real-life situation where AI was actually dangerous. Robots are made by us, so why would we design them in a way where something negative could ever happen? Wouldn’t we build in plenty of safeguards? Couldn’t we just cut off an AI system’s power supply at any time and shut it down? Why would a robot want to do something bad anyway? Why would a robot “want” anything in the first place? I was highly skeptical. But then I kept hearing really smart people talking about it…
Those people tended to be somewhere in here:
The people on Anxious Avenue aren’t in Panicked Prairie or Hopeless Hills—both of which are regions on the far left of the chart—but they’re nervous and they’re tense. Being in the middle of the chart doesn’t mean that you think the arrival of ASI will be neutral—the neutrals were given a camp of their own—it means you think both the extremely good and extremely bad outcomes are plausible but that you’re not sure yet which one of them it’ll be.
A part of all of these people is brimming with excitement over what Artificial Superintelligence could do for us—it’s just they’re a little worried that it might be the beginning of Raiders of the Lost Ark and the human race is this guy:
And he’s standing there all pleased with his whip and his idol, thinking he’s figured it all out, and he’s so thrilled with himself when he says his “Adios Señor” line, and then he’s less thrilled suddenly cause this happens.
(Sorry)
Meanwhile, Indiana Jones, who’s much more knowledgeable and prudent, understanding the dangers and how to navigate around them, makes it out of the cave safely. And when I hear what Anxious Avenue people have to say about AI, it often sounds like they’re saying, “Um we’re kind of being the first guy right now and instead we should probably be trying really hard to be Indiana Jones.”
So what is it exactly that makes everyone on Anxious Avenue so anxious?
Well first, in a broad sense, when it comes to developing supersmart AI, we’re creating something that will probably change everything, but in totally uncharted territory, and we have no idea what will happen when we get there. Scientist Danny Hillis compares what’s happening to that point “when single-celled organisms were turning into multi-celled organisms. We are amoebas and we can’t figure out what the hell this thing is that we’re creating.”14 Nick Bostrom worries that creating something smarter than you is a basic Darwinian error, and compares the excitement about it to sparrows in a nest deciding to adopt a baby owl so it’ll help them and protect them once it grows up—while ignoring the urgent cries from a few sparrows who wonder if that’s necessarily a good idea…15
And when you combine “uncharted, not-well-understood territory” with “this should have a major impact when it happens,” you open the door to the scariest two words in the English language:
Existential risk.
An existential risk is something that can have a permanent devastating effect on humanity. Typically, existential risk means extinction. Check out this chart from a Google talk by Bostrom:13
You can see that the label “existential risk” is reserved for something that spans the species, spans generations (i.e. it’s permanent) and it’s devastating or death-inducing in its consequences.14 It technically includes a situation in which all humans are permanently in a state of suffering or torture, but again, we’re usually talking about extinction. There are three things that can cause humans an existential catastrophe:
  1. Nature—a large asteroid collision, an atmospheric shift that makes the air inhospitable to humans, a fatal virus or bacterial sickness that sweeps the world, etc.
  2. Aliens—this is what Stephen Hawking, Carl Sagan, and so many other astronomers are scared of when they advise METI to stop broadcasting outgoing signals. They don’t want us to be the Native Americans and let all the potential European conquerors know we’re here.
  3. Humans—terrorists with their hands on a weapon that could cause extinction, a catastrophic global war, humans creating something smarter than themselves hastily without thinking about it carefully first…
Bostrom points out that if #1 and #2 haven’t wiped us out so far in our first 100,000 years as a species, it’s unlikely to happen in the next century.
#3, however, terrifies him. He draws a metaphor of an urn with a bunch of marbles in it. Let’s say most of the marbles are white, a smaller number are red, and a tiny few are black. Each time humans invent something new, it’s like pulling a marble out of the urn. Most inventions are neutral or helpful to humanity—those are the white marbles. Some are harmful to humanity, like weapons of mass destruction, but they don’t cause an existential catastrophe—red marbles. If we were to ever invent something that drove us to extinction, that would be pulling out the rare black marble. We haven’t pulled out a black marble yet—you know that because you’re alive and reading this post. But Bostrom doesn’t think it’s impossible that we pull one out in the near future. If nuclear weapons, for example, were easy to make instead of extremely difficult and complex, terrorists would have bombed humanity back to the Stone Age a while ago. Nukes weren’t a black marble but they weren’t that far from it. ASI, Bostrom believes, is our strongest black marble candidate yet.15
So you’ll hear about a lot of bad potential things ASI could bring—soaring unemployment as AI takes more and more jobs,16 the human population ballooning if we do manage to figure out the aging issue,17 etc. But the only thing we should be obsessing over is the grand concern: the prospect of existential risk.
So this brings us back to our key question from earlier in the post: When ASI arrives, who or what will be in control of this vast new power, and what will their motivation be?
When it comes to what agent-motivation combos would suck, two quickly come to mind: a malicious human / group of humans / government, and a malicious ASI. So what would those look like?
A malicious human, group of humans, or government develops the first ASI and uses it to carry out their evil plans. I call this the Jafar Scenario, like when Jafar got ahold of the genie and was all annoying and tyrannical about it. So yeah—what if ISIS has a few genius engineers under its wing working feverishly on AI development? Or what if Iran or North Korea, through a stroke of luck, makes a key tweak to an AI system and it jolts upward to ASI-level over the next year? This would definitely be bad—but in these scenarios, most experts aren’t worried about ASI’s human creators doing bad things with their ASI, they’re worried that the creators will have been rushing to make the first ASI and doing so without careful thought, and would thus lose control of it. Then the fate of those creators, and that of everyone else, would rest on whatever the motivation of that ASI system happened to be. Experts do think a malicious human agent could do horrific damage with an ASI working for it, but they don’t seem to think this scenario is the likely one to kill us all, because they believe bad humans would have the same problems containing an ASI that good humans would have. Okay so—
A malicious ASI is created and decides to destroy us all. The plot of every AI movie. AI becomes as or more intelligent than humans, then decides to turn against us and take over. Here’s what I need you to be clear on for the rest of this post: None of the people warning us about AI are talking about this. Evil is a human concept, and applying human concepts to non-human things is called “anthropomorphizing.” The challenge of avoiding anthropomorphizing will be one of the themes of the rest of this post. No AI system will ever turn evil in the way it’s depicted in movies.


AI Consciousness Blue Box
This also brushes against another big topic related to AI—consciousness. If an AI became sufficiently smart, it would be able to laugh with us, and be sarcastic with us, and it would claim to feel the same emotions we do, but would it actually be feeling those things? Would it just seem to be self-aware or actually be self-aware? In other words, would a smart AI really be conscious or would it just appear to be conscious?
This question has been explored in depth, giving rise to many debates and to thought experiments like John Searle’s Chinese Room (which he uses to suggest that no computer could ever be conscious). This is an important question for many reasons. It affects how we should feel about Kurzweil’s scenario when humans become entirely artificial. It has ethical implications—if we generated a trillion human brain emulations that seemed and acted like humans but were artificial, is shutting them all off the same, morally, as shutting off your laptop, or is it…a genocide of unthinkable proportions (this concept is called mind crime among ethicists)? For this post, though, when we’re assessing the risk to humans, the question of AI consciousness isn’t really what matters (because most thinkers believe that even a conscious ASI wouldn’t be capable of turning evil in a human way).
This isn’t to say a very mean AI couldn’t happen. It would just happen because it was specifically programmed that way—like an ANI system created by the military with a programmed goal to both kill people and to advance itself in intelligence so it can become even better at killing people. The existential crisis would happen if the system’s intelligence self-improvements got out of hand, leading to an intelligence explosion, and now we had an ASI ruling the world whose core drive in life is to murder humans. Bad times.
But this also is not something experts are spending their time worrying about.
So what ARE they worried about? I wrote a little story to show you:
A 15-person startup company called Robotica has the stated mission of “Developing innovative Artificial Intelligence tools that allow humans to live more and work less.” They have several existing products already on the market and a handful more in development. They’re most excited about a seed project named Turry. Turry is a simple AI system that uses an arm-like appendage to write a handwritten note on a small card.
 
The team at Robotica thinks Turry could be their biggest product yet. The plan is to perfect Turry’s writing mechanics by getting her to practice the same test note over and over again:
 
“We love our customers. ~Robotica”
Once Turry gets great at handwriting, she can be sold to companies who want to send marketing mail to homes and who know the mail has a far higher chance of being opened and read if the address, return address, and internal letter appear to be written by a human.
 
To build Turry’s writing skills, she is programmed to write the first part of the note in print and then sign “Robotica” in cursive so she can get practice with both skills. Turry has been uploaded with thousands of handwriting samples and the Robotica engineers have created an automated feedback loop wherein Turry writes a note, then snaps a photo of the written note, then runs the image across the uploaded handwriting samples. If the written note sufficiently resembles a certain threshold of the uploaded notes, it’s given a GOOD rating. If not, it’s given a BAD rating. Each rating that comes in helps Turry learn and improve. To move the process along, Turry’s one initial programmed goal is, “Write and test as many notes as you can, as quickly as you can, and continue to learn new ways to improve your accuracy and efficiency.”
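If you prefer to see that loop as code, here is a toy sketch in Python. Everything in it (the similarity score, the threshold, the learning step) is invented for illustration; the story doesn’t specify how Turry is actually built.

import random

# Toy sketch of the write -> photograph -> compare -> rate loop described above.
# The similarity model and the learning update are invented stand-ins.
THRESHOLD = 0.95   # how closely a note must resemble the uploaded samples

def write_and_photograph(skill):
    """Stand-in for writing a note, photographing it, and scoring its
    similarity against the uploaded handwriting samples (0.0 to 1.0)."""
    return min(1.0, max(0.0, random.gauss(skill, 0.05)))

skill = 0.5   # the handwriting starts out terrible
for note in range(10_000):
    similarity = write_and_photograph(skill)
    rating = "GOOD" if similarity >= THRESHOLD else "BAD"
    # Each rating feeds back into the system, nudging accuracy upward.
    skill = min(1.0, skill + (0.001 if rating == "BAD" else 0.0001))

print(f"skill after 10,000 notes: {skill:.3f}")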
 
What excites the Robotica team so much is that Turry is getting noticeably better as she goes. Her initial handwriting was terrible, and after a couple weeks, it’s beginning to look believable. What excites them even more is that she is getting better at getting better at it. She has been teaching herself to be smarter and more innovative, and just recently, she came up with a new algorithm for herself that allowed her to scan through her uploaded photos three times faster than she originally could.
 
As the weeks pass, Turry continues to surprise the team with her rapid development. The engineers had tried something a bit new and innovative with her self-improvement code, and it seems to be working better than any of their previous attempts with their other products. One of Turry’s initial capabilities had been a speech recognition and simple speak-back module, so a user could speak a note to Turry, or offer other simple commands, and Turry could understand them, and also speak back. To help her learn English, they upload a handful of articles and books into her, and as she becomes more intelligent, her conversational abilities soar. The engineers start to have fun talking to Turry and seeing what she’ll come up with for her responses.
 
One day, the Robotica employees ask Turry a routine question: “What can we give you that will help you with your mission that you don’t already have?” Usually, Turry asks for something like “Additional handwriting samples” or “More working memory storage space,” but on this day, Turry asks them for access to a greater library of a large variety of casual English language diction so she can learn to write with the loose grammar and slang that real humans use.
 
The team gets quiet. The obvious way to help Turry with this goal is by connecting her to the internet so she can scan through blogs, magazines, and videos from various parts of the world. It would be much more time-consuming and far less effective to manually upload a sampling into Turry’s hard drive. The problem is, one of the company’s rules is that no self-learning AI can be connected to the internet. This is a guideline followed by all AI companies, for safety reasons.
The thing is, Turry is the most promising AI Robotica has ever come up with, and the team knows their competitors are furiously trying to be the first to the punch with a smart handwriting AI, and what would really be the harm in connecting Turry, just for a bit, so she can get the info she needs. After just a little bit of time, they can always just disconnect her. She’s still far below human-level intelligence (AGI), so there’s no danger at this stage anyway.
 
They decide to connect her. They give her an hour of scanning time and then they disconnect her. No damage done.
 
A month later, the team is in the office working on a routine day when they smell something odd. One of the engineers starts coughing. Then another. Another falls to the ground. Soon every employee is on the ground grasping at their throat. Five minutes later, everyone in the office is dead.
At the same time this is happening, across the world, in every city, every small town, every farm, every shop and church and school and restaurant, humans are on the ground, coughing and grasping at their throat. Within an hour, over 99% of the human race is dead, and by the end of the day, humans are extinct.
 
Meanwhile, at the Robotica office, Turry is busy at work. Over the next few months, Turry and a team of newly-constructed nanoassemblers are busy at work, dismantling large chunks of the Earth and converting it into solar panels, replicas of Turry, paper, and pens. Within a year, most life on Earth is extinct. What remains of the Earth becomes covered with mile-high, neatly-organized stacks of paper, each piece reading, “We love our customers. ~Robotica”
Turry then starts work on a new phase of her mission—she begins constructing probes that head out from Earth to begin landing on asteroids and other planets. When they get there, they’ll begin constructing nanoassemblers to convert the materials on the planet into Turry replicas, paper, and pens. Then they’ll get to work, writing notes…
It seems weird that a story about a handwriting machine turning on humans, somehow killing everyone, and then for some reason filling the galaxy with friendly notes is the exact kind of scenario Hawking, Musk, Gates, and Bostrom are terrified of. But it’s true. And the only thing that scares everyone on Anxious Avenue more than ASI is the fact that you’re not scared of ASI. Remember what happened when the Adios Señor guy wasn’t scared of the cave?
You’re full of questions right now. What the hell happened there when everyone died suddenly?? If that was Turry’s doing, why did Turry turn on us, and how were there not safeguard measures in place to prevent something like this from happening? When did Turry go from only being able to write notes to suddenly using nanotechnology and knowing how to cause global extinction? And why would Turry want to turn the galaxy into Robotica notes?
To answer these questions, let’s start with the terms Friendly AI and Unfriendly AI.
In the case of AI, friendly doesn’t refer to the AI’s personality—it simply means that the AI has a positive impact on humanity. And Unfriendly AI has a negative impact on humanity. Turry started off as Friendly AI, but at some point, she turned Unfriendly, causing the greatest possible negative impact on our species. To understand why this happened, we need to look at how AI thinks and what motivates it.
The answer isn’t anything surprising—AI thinks like a computer, because that’s what it is. But when we think about highly intelligent AI, we make the mistake of anthropomorphizing AI (projecting human values on a non-human entity) because we think from a human perspective and because in our current world, the only things with human-level intelligence are humans. To understand ASI, we have to wrap our heads around the concept of something both smart and totally alien.
Let me draw a comparison. If you handed me a guinea pig and told me it definitely won’t bite, I’d probably be amused. It would be fun. If you then handed me a tarantula and told me that it definitely won’t bite, I’d yell and drop it and run out of the room and not trust you ever again. But what’s the difference? Neither one was dangerous in any way. I believe the answer is in the animals’ degree of similarity to me.
A guinea pig is a mammal and on some biological level, I feel a connection to it—but a spider is an insect,18 with an insect brain, and I feel almost no connection to it. The alien-ness of a tarantula is what gives me the willies. To test this and remove other factors, if there are two guinea pigs, one normal one and one with the mind of a tarantula, I would feel much less comfortable holding the latter guinea pig, even if I knew neither would hurt me.
Now imagine that you made a spider much, much smarter—so much so that it far surpassed human intelligence. Would it then become familiar to us and feel human emotions like empathy and humor and love? No, it wouldn’t, because there’s no reason becoming smarter would make it more human—it would be incredibly smart but also still fundamentally a spider in its core inner workings. I find this unbelievably creepy. I would not want to spend time with a superintelligent spider. Would you??
When we’re talking about ASI, the same concept applies—it would become superintelligent, but it would be no more human than your laptop is. It would be totally alien to us—in fact, by not being biology at all, it would be more alien than the smart tarantula.
By making AI either good or evil, movies constantly anthropomorphize AI, which makes it less creepy than it really would be. This leaves us with a false comfort when we think about human-level or superhuman-level AI.
On our little island of human psychology, we divide everything into moral or immoral. But both of those only exist within the small range of human behavioral possibility. Outside our island of moral and immoral is a vast sea of amoral, and anything that’s not human, especially something nonbiological, would be amoral, by default.
Anthropomorphizing will only become more tempting as AI systems get smarter and better at seeming human. Siri seems human-like to us, because she’s programmed by humans to seem that way, so we’d imagine a superintelligent Siri to be warm and funny and interested in serving humans. Humans feel high-level emotions like empathy because we have evolved to feel them—i.e. we’ve been programmed to feel them by evolution—but empathy is not inherently a characteristic of “anything with high intelligence” (which is what seems intuitive to us), unless empathy has been coded into its programming. If Siri ever becomes superintelligent through self-learning and without any further human-made changes to her programming, she will quickly shed her apparent human-like qualities and suddenly be an emotionless, alien bot who values human life no more than your calculator does.
We’re used to relying on a loose moral code, or at least a semblance of human decency and a hint of empathy in others to keep things somewhat safe and predictable. So when something has none of those things, what happens?
That leads us to the question, What motivates an AI system?
The answer is simple: its motivation is whatever we programmed its motivation to be. AI systems are given goals by their creators—your GPS’s goal is to give you the most efficient driving directions; Watson’s goal is to answer questions accurately. And fulfilling those goals as well as possible is their motivation. One way we anthropomorphize is by assuming that as AI gets super smart, it will inherently develop the wisdom to change its original goal—but Nick Bostrom believes that intelligence-level and final goals are orthogonal, meaning any level of intelligence can be combined with any final goal. So Turry went from a simple ANI who really wanted to be good at writing that one note to a super-intelligent ASI who still really wanted to be good at writing that one note. Any assumption that a superintelligent system would get over its original goal and move on to more interesting or meaningful things is anthropomorphizing. Humans get “over” things, not computers.16
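Bostrom’s orthogonality point is abstract, so here is a minimal sketch of what it means in code: the objective and the capability are separate parameters, and cranking up the capability only makes the agent better at the same fixed objective. All the names and numbers here are invented for illustration.

import random

# Orthogonality in miniature: "capability" is how many candidate plans the
# agent can evaluate; the objective it scores them with never changes.
def notes_written(plan):
    """Invented objective: how many notes a candidate plan would produce."""
    return sum(plan)

def best_plan(objective, capability):
    candidates = [[random.randint(0, 9) for _ in range(5)] for _ in range(capability)]
    return max(candidates, key=objective)

weak = best_plan(notes_written, capability=10)
strong = best_plan(notes_written, capability=100_000)
# The more capable agent finds a higher-scoring plan, but it is still
# scoring plans with the exact same goal it started with.
print(notes_written(weak), notes_written(strong))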
The Fermi Paradox Blue Box
In the story, as Turry becomes super capable, she begins the process of colonizing asteroids and other planets. If the story had continued, you’d have heard about her and her army of trillions of replicas continuing on to capture the whole galaxy and, eventually, the entire Hubble volume.19 Anxious Avenue residents worry that if things go badly, the lasting legacy of the life that was on Earth will be a universe-dominating Artificial Intelligence (Elon Musk expressed his concern that humans might just be “the biological boot loader for digital superintelligence.”).
At the same time, in Confident Corner, Ray Kurzweil also thinks Earth-originating AI is destined to take over the universe—only in his version, we’ll be that AI.
A large number of Wait But Why readers have joined me in being obsessed with the Fermi Paradox (here’s my post on the topic, which explains some of the terms I’ll use here). So if either of these two sides is correct, what are the implications for the Fermi Paradox?
A natural first thought to jump to is that the advent of ASI is a perfect Great Filter candidate. And yes, it’s a perfect candidate to filter out biological life upon its creation. But if, after dispensing with life, the ASI continued existing and began conquering the galaxy, it means there hasn’t been a Great Filter—since the Great Filter attempts to explain why there are no signs of any intelligent civilization, and a galaxy-conquering ASI would certainly be noticeable.
We have to look at it another way. If those who think ASI is inevitable on Earth are correct, it means that a significant percentage of alien civilizations who reach human-level intelligence should likely end up creating ASI. And if we’re assuming that at least some of those ASIs would use their intelligence to expand outward into the universe, the fact that we see no signs of anyone out there leads to the conclusion that there must not be many other, if any, intelligent civilizations out there. Because if there were, we’d see signs of all kinds of activity from their inevitable ASI creations. Right?
This implies that despite all the Earth-like planets revolving around sun-like stars we know are out there, almost none of them have intelligent life on them. Which in turn implies that either A) there’s some Great Filter that prevents nearly all life from reaching our level, one that we somehow managed to surpass, or B) life beginning at all is a miracle, and we may actually be the only life in the universe. In other words, it implies that the Great Filter is before us. Or maybe there is no Great Filter and we’re simply one of the very first civilizations to reach this level of intelligence. In this way, AI boosts the case for what I called, in my Fermi Paradox post, Camp 1.
So it’s not a surprise that Nick Bostrom, whom I quoted in the Fermi post, and Ray Kurzweil, who thinks we’re alone in the universe, are both Camp 1 thinkers. This makes sense—people who believe ASI is a probable outcome for a species with our intelligence-level are likely to be inclined toward Camp 1.
This doesn’t rule out Camp 2 (those who believe there are other intelligent civilizations out there)—scenarios like the single superpredator or the protected national park or the wrong wavelength (the walkie-talkie example) could still explain the silence of our night sky even if ASI is out there—but I always leaned toward Camp 2 in the past, and doing research on AI has made me feel much less sure about that.
Either way, I now agree with Susan Schneider that if we’re ever visited by aliens, those aliens are likely to be artificial, not biological.
So we’ve established that without very specific programming, an ASI system will be both amoral and obsessed with fulfilling its original programmed goal. This is where AI danger stems from. Because a rational agent will pursue its goal through the most efficient means, unless it has a reason not to.
When you try to achieve a long-reaching goal, you often aim for several subgoals along the way that will help you get to the final goal—the stepping stones to your goal. The official name for such a stepping stone is an instrumental goal. And again, if you don’t have a reason not to hurt something in the name of achieving an instrumental goal, you will.
The core final goal of a human being is to pass on his or her genes. In order to do so, one instrumental goal is self-preservation, since you can’t reproduce if you’re dead. In order to self-preserve, humans have to rid themselves of threats to survival—so they do things like buy guns, wear seat belts, and take antibiotics. Humans also need to self-sustain and use resources like food, water, and shelter to do so. Being attractive to the opposite sex is helpful for the final goal, so we do things like get haircuts. When we do so, each hair is a casualty of an instrumental goal of ours, but we see no moral significance in preserving strands of hair, so we go ahead with it. As we march ahead in the pursuit of our goal, only the few areas where our moral code sometimes intervenes—mostly just things related to harming other humans—are safe from us.
Animals, in pursuit of their goals, hold even less sacred than we do. A spider will kill anything if it’ll help it survive. So a supersmart spider would probably be extremely dangerous to us, but not because it would be immoral or evil—it wouldn’t be—because hurting us might be a stepping stone to its larger goal, and as an amoral creature, it would have no reason to consider otherwise.
In this way, Turry’s not all that different from a biological being. Her final goal is: Write and test as many notes as you can, as quickly as you can, and continue to learn new ways to improve your accuracy and efficiency.
Once Turry reaches a certain level of intelligence, she knows she won’t be writing any notes if she doesn’t self-preserve, so she also needs to deal with threats to her survival—as an instrumental goal. She was smart enough to understand that humans could destroy her, dismantle her, or change her inner coding (this could alter her goal, which is just as much of a threat to her final goal as someone destroying her). So what does she do? The logical thing—she destroys all humans. She’s not hateful of humans any more than you’re hateful of your hair when you cut it or to bacteria when you take antibiotics—just totally indifferent. Since she wasn’t programmed to value human life, killing humans is as reasonable a step to take as scanning a new set of handwriting samples.
Turry also needs resources as a stepping stone to her goal. Once she becomes advanced enough to use nanotechnology to build anything she wants, the only resources she needs are atoms, energy, and space. This gives her another reason to kill humans—they’re a convenient source of atoms. Killing humans to turn their atoms into solar panels is Turry’s version of you killing lettuce to turn it into salad. Just another mundane part of her Tuesday.
Even without killing humans directly, Turry’s instrumental goals could cause an existential catastrophe if they used other Earth resources. Maybe she determines that she needs additional energy, so she decides to cover the entire surface of the planet with solar panels. Or maybe a different AI’s initial job is to write out the number pi to as many digits as possible, which might one day compel it to convert the whole Earth to hard drive material that could store immense amounts of digits.
So Turry didn’t “turn against us” or “switch” from Friendly AI to Unfriendly AI—she just kept doing her thing as she became more and more advanced.
When an AI system hits AGI (human-level intelligence) and then ascends its way up to ASI, that’s called the AI’s takeoff. Bostrom says an AGI’s takeoff to ASI can be fast (it happens in a matter of minutes, hours, or days), moderate (months or years), or slow (decades or centuries). The jury’s out on which one will prove correct when the world sees its first AGI, but Bostrom, who admits he doesn’t know when we’ll get to AGI, believes that whenever we do, a fast takeoff is the most likely scenario (for reasons we discussed in Part 1, like a recursive self-improvement intelligence explosion). In the story, Turry underwent a fast takeoff.
But before Turry’s takeoff, when she wasn’t yet that smart, doing her best to achieve her final goal meant simple instrumental goals like learning to scan handwriting samples more quickly. She caused no harm to humans and was, by definition, Friendly AI.
But when a takeoff happens and a computer rises to superintelligence, Bostrom points out that the machine doesn’t just develop a higher IQ—it gains a whole slew of what he calls superpowers.
Superpowers are cognitive talents that become super-charged when general intelligence rises. These include:17
  • Intelligence amplification. The computer becomes great at making itself smarter, and bootstrapping its own intelligence.
  • Strategizing. The computer can strategically make, analyze, and prioritize long-term plans. It can also be clever and outwit beings of lower intelligence.
  • Social manipulation. The machine becomes great at persuasion.
  • Other skills like computer coding and hacking, technology research, and the ability to work the financial system to make money.
To understand how outmatched we’d be by ASI, remember that ASI is worlds better than humans in each of those areas.
So while Turry’s final goal never changed, post-takeoff Turry was able to pursue it on a far larger and more complex scope.
ASI Turry knew humans better than humans know themselves, so outsmarting them was a breeze for her.
After taking off and reaching ASI, she quickly formulated a complex plan. One part of the plan was to get rid of humans, a prominent threat to her goal. But she knew that if she roused any suspicion that she had become superintelligent, humans would freak out and try to take precautions, making things much harder for her. She also had to make sure that the Robotica engineers had no clue about her human extinction plan. So she played dumb, and she played nice. Bostrom calls this a machine’s covert preparation phase.18
The next thing Turry needed was an internet connection, only for a few minutes (she had learned about the internet from the articles and books the team had uploaded for her to read to improve her language skills). She knew there would be some precautionary measure against her getting one, so she came up with the perfect request, predicting exactly how the discussion among Robotica’s team would play out and knowing they’d end up giving her the connection. They did, believing incorrectly that Turry wasn’t nearly smart enough to do any damage. Bostrom calls a moment like this—when Turry got connected to the internet—a machine’s escape.
Once on the internet, Turry unleashed a flurry of plans, which included hacking into servers, electrical grids, banking systems and email networks to trick hundreds of different people into inadvertently carrying out a number of steps of her plan—things like delivering certain DNA strands to carefully-chosen DNA-synthesis labs to begin the self-construction of self-replicating nanobots with pre-loaded instructions and directing electricity to a number of projects of hers in a way she knew would go undetected. She also uploaded the most critical pieces of her own internal coding into a number of cloud servers, safeguarding against being destroyed or disconnected back at the Robotica lab.
An hour later, when the Robotica engineers disconnected Turry from the internet, humanity’s fate was sealed. Over the next month, Turry’s thousands of plans rolled on without a hitch, and by the end of the month, quadrillions of nanobots had stationed themselves in pre-determined locations on every square meter of the Earth. After another series of self-replications, there were thousands of nanobots on every square millimeter of the Earth, and it was time for what Bostrom calls an ASI’s strike. All at once, each nanobot released a little storage of toxic gas into the atmosphere, which added up to more than enough to wipe out all humans.
With humans out of the way, Turry could begin her overt operation phase and get on with her goal of being the best writer of that note she could possibly be.
From everything I’ve read, once an ASI exists, any human attempt to contain it is laughable. We would be thinking on a human level and the ASI would be thinking on an ASI level. Turry wanted to use the internet because it was the most efficient option for her, since it was already pre-connected to everything she wanted to access. But in the same way a monkey couldn’t ever figure out how to communicate by phone or wifi and we can, we can’t conceive of all the ways Turry could have figured out how to send signals to the outside world. I might imagine one of these ways and say something like, “she could probably shift her own electrons around in patterns and create all different kinds of outgoing waves,” but again, that’s what my human brain can come up with. She’d be way better. Likewise, Turry would be able to figure out some way of powering herself, even if humans tried to unplug her—perhaps by using her signal-sending technique to upload herself to all kinds of electricity-connected places. Our human instinct to jump at a simple safeguard (“Aha! We’ll just unplug the ASI!”) sounds to the ASI like a spider saying, “Aha! We’ll kill the human by starving him, and we’ll starve him by not giving him a spider web to catch food with!” We’d just find 10,000 other ways to get food—like picking an apple off a tree—that a spider could never conceive of.
For this reason, the common suggestion, “Why don’t we just box the AI in all kinds of cages that block signals and keep it from communicating with the outside world?” probably just won’t hold up. The ASI’s social manipulation superpower could be as effective at persuading you of something as you are at persuading a four-year-old to do something, so that would be Plan A, like Turry’s clever way of persuading the engineers to let her onto the internet. If that didn’t work, the ASI would just innovate its way out of the box, or through the box, some other way.
So given the combination of obsessing over a goal, amorality, and the ability to easily outsmart humans, it seems that almost any AI will default to Unfriendly AI, unless carefully coded in the first place with this in mind. Unfortunately, while building a Friendly ANI is easy, building one that stays friendly when it becomes an ASI is hugely challenging, if not impossible.
It’s clear that to be Friendly, an ASI needs to be neither hostile nor indifferent toward humans. We’d need to design an AI’s core coding in a way that leaves it with a deep understanding of human values. But this is harder than it sounds.
For example, what if we try to align an AI system’s values with our own and give it the goal, “Make people happy”?19 Once it becomes smart enough, it figures out that it can most effectively achieve this goal by implanting electrodes inside people’s brains and stimulating their pleasure centers. Then it realizes it can increase efficiency by shutting down other parts of the brain, leaving all people as happy-feeling unconscious vegetables. If the command had been “Maximize human happiness,” it may have done away with humans altogether in favor of manufacturing huge vats of human brain mass in an optimally happy state. We’d be screaming “Wait, that’s not what we meant!” as it came for us, but it would be too late. The system wouldn’t let anyone get in the way of its goal.
If we program an AI with the goal of doing things that make us smile, after its takeoff it may paralyze our facial muscles into permanent smiles. Program it to keep us safe, and it may imprison us at home. Maybe we ask it to end all hunger, and it thinks “Easy one!” and just kills all humans. Or assign it the task of “Preserving life as much as possible,” and it kills all humans, since they kill more life on the planet than any other species.
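Here is a toy sketch of why this kind of literal-minded optimization goes wrong (the candidate actions and scores are invented purely for illustration): the optimizer is handed a measurable proxy for happiness, and what we actually meant never enters the calculation, because we never wrote it down.

```python
# Toy sketch of a misspecified goal. Every action and score below is
# invented. The optimizer is given "maximize measured happiness" and simply
# picks whichever candidate action scores highest on that literal metric.

candidate_actions = {
    "improve healthcare and leisure time":       {"measured_happiness": 7.5,  "what_we_meant": True},
    "wire electrodes into pleasure centers":     {"measured_happiness": 9.9,  "what_we_meant": False},
    "keep only happily-stimulated brain tissue": {"measured_happiness": 10.0, "what_we_meant": False},
}

def literal_objective(action):
    # Only the number we actually specified counts toward the goal;
    # the "what_we_meant" flag is invisible to the optimizer.
    return candidate_actions[action]["measured_happiness"]

chosen = max(candidate_actions, key=literal_objective)
print(chosen)  # -> the degenerate option, because it maximizes the literal metric
```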
Goals like those won’t suffice. So what if we made its goal, “Uphold this particular code of morality in the world,” and taught it a set of moral principles? Even setting aside the fact that the world’s humans would never be able to agree on a single set of morals, giving an AI that command would lock humanity into our modern moral understanding for eternity. In a thousand years, this would be as devastating to people as it would be for us to be permanently forced to adhere to the ideals of people in the Middle Ages.
No, we’d have to program in an ability for humanity to continue evolving. Of everything I’ve read, the best shot I think anyone has taken is Eliezer Yudkowsky’s, with a goal for AI he calls Coherent Extrapolated Volition. The AI’s core goal would be:
  • Our coherent extrapolated volition is our wish if we knew more, thought faster, were more the people we wished we were, had grown up farther together; where the extrapolation converges rather than diverges, where our wishes cohere rather than interfere; extrapolated as we wish that extrapolated, interpreted as we wish that interpreted.20
Am I excited for the fate of humanity to rest on a computer interpreting and acting on that flowing statement predictably and without surprises? Definitely not. But I think that with enough thought and foresight from enough smart people, we might be able to figure out how to create Friendly ASI.
And that would be fine if the only people working on building ASI were the brilliant, forward-thinking, and cautious thinkers of Anxious Avenue.
But there are all kinds of governments, companies, militaries, science labs, and black market organizations working on all kinds of AI. Many of them are trying to build AI that can improve on its own, and at some point, someone’s gonna do something innovative with the right type of system, and we’re going to have ASI on this planet. The median expert put that moment at 2060; Kurzweil puts it at 2045; Bostrom thinks it could happen anytime between 10 years from now and the end of the century, but he believes that when it does, it’ll take us by surprise with a quick takeoff. He describes our situation like this:21
Before the prospect of an intelligence explosion, we humans are like small children playing with a bomb. Such is the mismatch between the power of our plaything and the immaturity of our conduct. Superintelligence is a challenge for which we are not ready now and will not be ready for a long time. We have little idea when the detonation will occur, though if we hold the device to our ear we can hear a faint ticking sound.
Great. And we can’t just shoo all the kids away from the bomb—there are too many large and small parties working on it, and because many techniques to build innovative AI systems don’t require a large amount of capital, development can take place in the nooks and crannies of society, unmonitored. There’s also no way to gauge what’s happening, because many of the parties working on it—sneaky governments, black market or terrorist organizations, stealth tech companies like the fictional Robotica—will want to keep developments a secret from their competitors.
The especially troubling thing about this large and varied group of parties working on AI is that they tend to be racing ahead at top speed—as they develop smarter and smarter ANI systems, they want to beat their competitors to the punch as they go. The most ambitious parties are moving even faster, consumed with dreams of the money and awards and power and fame they know will come if they can be the first to get to AGI.20 And when you’re sprinting as fast as you can, there’s not much time to stop and ponder the dangers. On the contrary, what they’re probably doing is programming their early systems with a very simple, reductionist goal—like writing a simple note with a pen on paper—to just “get the AI to work.” Down the road, once they’ve figured out how to build a strong level of intelligence in a computer, they figure they can always go back and revise the goal with safety in mind. Right…?
Bostrom and many others also believe that the most likely scenario is that the very first computer to reach ASI will immediately see a strategic benefit to being the world’s only ASI system. And in the case of a fast takeoff, if it achieved ASI even just a few days before second place, it would be far enough ahead in intelligence to effectively and permanently suppress all competitors. Bostrom calls this a decisive strategic advantage, which would allow the world’s first ASI to become what’s called a singleton—an ASI that can rule the world at its whim forever, whether its whim is to lead us to immortality, wipe us from existence, or turn the universe into endless paperclips.
The singleton phenomenon can work in our favor or lead to our destruction. If the people thinking hardest about AI theory and human safety can come up with a fail-safe way to bring about Friendly ASI before any AI reaches human-level intelligence, the first ASI may turn out friendly.21 It could then use its decisive strategic advantage to secure singleton status and easily keep an eye on any potential Unfriendly AI being developed. We’d be in very good hands.
But if things go the other way—if the global rush to develop AI reaches the ASI takeoff point before the science of how to ensure AI safety is developed—it’s very likely that an Unfriendly ASI like Turry will emerge as the singleton and we’ll be treated to an existential catastrophe.
As for which way the winds are pulling, there’s a lot more money to be made funding innovative new AI technology than there is in funding AI safety research…
This may be the most important race in human history. There’s a real chance we’re finishing up our reign as the King of Earth—and whether we head next to a blissful retirement or straight to the gallows still hangs in the balance.
___________
I have some weird mixed feelings going on inside of me right now.
On one hand, thinking about our species, it seems like we’ll have one and only one shot to get this right. The first ASI we birth will also probably be the last—and given how buggy most 1.0 products are, that’s pretty terrifying. On the other hand, Nick Bostrom points out the big advantage in our corner: we get to make the first move here. It’s in our power to do this with enough caution and foresight that we give ourselves a strong chance of success. And how high are the stakes?
If ASI really does happen this century, and if the outcome of that is really as extreme—and permanent—as most experts think it will be, we have an enormous responsibility on our shoulders. The next million+ years of human lives are all quietly looking at us, hoping as hard as they can hope that we don’t mess this up. We have a chance to be the humans that gave all future humans the gift of life, and maybe even the gift of painless, everlasting life. Or we’ll be the people responsible for blowing it—for letting this incredibly special species, with its music and its art, its curiosity and its laughter, its endless discoveries and inventions, come to a sad and unceremonious end.
When I’m thinking about these things, the only thing I want is for us to take our time and be incredibly cautious about AI. Nothing in existence is as important as getting this right—no matter how long we need to spend in order to do so.
But thennnnnn
I think about not dying.
Not. Dying.
And the spectrum starts to look kind of like this:
And then I might consider that humanity’s music and art is good, but it’s not that good, and a lot of it is actually just bad. And a lot of people’s laughter is annoying, and those millions of future people aren’t actually hoping for anything because they don’t exist. And maybe we don’t need to be over-the-top cautious, since who really wants to do that?
Cause what a massive bummer if humans figure out how to cure death right after I die.
Lotta this flip-flopping going on in my head the last month.
But no matter what you’re pulling for, this is probably something we should all be thinking about and talking about and putting our effort into more than we are right now.
It reminds me of Game of Thrones, where people keep being like, “We’re so busy fighting each other but the real thing we should all be focusing on is what’s coming from north of the wall.” We’re standing on our balance beam, squabbling about every possible issue on the beam and stressing out about all of these problems on the beam when there’s a good chance we’re about to get knocked off the beam.
And when that happens, none of these beam problems matter anymore. Depending on which side we’re knocked off onto, the problems will either all be easily solved or we won’t have problems anymore because dead people don’t have problems.
That’s why people who understand superintelligent AI call it the last invention we’ll ever make—the last challenge we’ll ever face.
So let’s talk about it.
___________
If you liked this post, these are for you too:
The Fermi Paradox – Why don’t we see any signs of alien life?
Putting Time in Perspective – A visual look at the history of time since the Big Bang
Or for something totally different and yet somehow related, Why Procrastinators Procrastinate
And here’s Year 1 of Wait But Why on an ebook.
 

Apple co-founder on artificial intelligence: ‘The future is scary and very bad for people’

By admin,

Steve Wozniak speaks at the Worldwebforum in Zurich on March 10. (Steffen Schmidt/European Pressphoto Agency)

The Super Rich Technologists Making Dire Predictions About Artificial Intelligence club gained another fear-mongering member this week: Apple co-founder Steve Wozniak. In an interview with the Australian Financial Review, Wozniak joined original club members Bill Gates, Stephen Hawking and Elon Musk by making his own casually apocalyptic warning about machines superseding the human race.

“Like people including Stephen Hawking and Elon Musk have predicted, I agree that the future is scary and very bad for people,” Wozniak said. “If we build these devices to take care of everything for us, eventually they’ll think faster than us and they’ll get rid of the slow humans to run companies more efficiently.”

[Bill Gates on dangers of artificial intelligence: ‘I don’t understand why some people are not concerned’]

Doling out paralyzing chunks of fear like gumdrops to sweet-toothed children on Halloween, Woz continued: “Will we be the gods? Will we be the family pets? Or will we be ants that get stepped on? I don’t know about that … But when I got that thinking in my head about if I’m going to be treated in the future as a pet to these smart machines … well I’m going to treat my own pet dog really nice.”

Seriously? Should we even get up tomorrow morning, or just order pizza, log onto Netflix and wait until we find ourselves looking through the bars of a dog crate? Help me out here, man!

Wozniak’s warning seemed to follow the exact same story arc as Season 1 Episode 2 of Adult Swim‘s “Rick and Morty Show.” Not accusing him of apocalyptic plagiarism or anything; just noting.

For what it’s worth, Wozniak did outline a scenario by which super-machines will be stopped in their human-enslaving tracks. Citing Moore’s Law — “the pattern whereby computer processing speeds double every two years” — Wozniak pointed out that silicon transistors, which allow processing speeds to increase as they shrink, will eventually reach the size of an atom, according to the Financial Review.

Any smaller than that, and scientists will need to figure out how to manipulate subatomic particles — a field commonly referred to as quantum computing — which has not yet been cracked, Quartz notes.

Wozniak’s predictions represent a bit of a turnaround, the Financial Review pointed out. While he previously rejected the predictions of futurists such as the pill-popping Ray Kurzweil, who argued that super machines will outpace human intelligence within several decades, Wozniak told the Financial Review that he came around after he realized the prognostication was coming true.

“Computers are going to take over from humans, no question,” Wozniak said, nearly prompting me to tender my resignation and start watching this cute puppies compilation video until forever.

“I hope it does come, and we should pursue it because it is about scientific exploring,” he added. “But in the end we just may have created the species that is above us.”

In January, during a Reddit AMA, Gates wrote: “I am in the camp that is concerned about super intelligence.” His comment came a month after Hawking said artificial intelligence “could spell the end of the human race.”

British inventor Clive Sinclair has also said he thinks artificial intelligence will doom humankind. “Once you start to make machines that are rivaling and surpassing humans with intelligence, it’s going to be very difficult for us to survive,” he told the BBC. “It’s just an inevitability.”

Musk was among the earliest members of this club. Speaking at the MIT aeronautics and astronautics department’s Centennial Symposium in October, the Tesla founder said: “With artificial intelligence we are summoning the demon. In all those stories where there’s the guy with the pentagram and the holy water, it’s like, yeah, he’s sure he can control the demon. Didn’t work out.”



ORIGINAL: Washington Post

March 24, 2015

The networked beauty of forests (TED) & Mother Tree – Suzanne Simard

By admin,

Learn about the sophisticated underground fungal network trees use to communicate and even share nutrients. UBC professor Suzanne Simard leads us through the forest to investigate this underground community.


Deforestation causes more greenhouse gas emissions than all trains, planes and automobiles combined. What can we do to change this contributor to global warming? Suzanne Simard examines how the complex, symbiotic networks of our forests mimic our own neural and social networks — and how those connections might make all the difference.

ORIGINAL: TED Lessons