The Brain’s Inner Language

Video (2:25): Probing the Parliament of Neurons.

SEATTLE — When Clay Reid decided to leave his job as a professor at Harvard Medical School to become a senior investigator at the Allen Institute for Brain Science in Seattle in 2012, some of his colleagues congratulated him warmly and understood right away why he was making the move.
Others shook their heads. He was, after all, leaving one of the world’s great universities to go to the academic equivalent of an Internet start-up, albeit an extremely well-financed, very ambitious one, created in 2003 by Paul Allen, a founder of Microsoft.


Still, “it wasn’t a remotely hard decision,” Dr. Reid said. He wanted to mount an all-out investigation of a part of the mouse brain. And although he was happy at Harvard, the Allen Institute offered not only great colleagues and deep pockets, but also an approach to science different from the classic university environment. The institute was already mapping the mouse brain in fantastic detail, and specialized in the large-scale accumulation of information in atlases and databases available to all of science.


When neurons in the brain of a live mouse, top, are active, they flash brightly. Dr. Clay Reid, above left, and colleagues at the Allen Institute for Brain Science are working with mice to better understand the human mind. Above center, areas of the mouse cortex related to vision, and connected to other parts involving visual perception. Credit: Zach Wise for The New York Times

Now it was expanding, trying to merge its semi-industrial approach to data gathering with more traditional, investigator-driven science by hiring scientists like Christof Koch, who became chief scientific officer in 2011 after leaving the California Institute of Technology, and Dr. Reid. As a senior investigator, he would lead a group of about 100, and work with scientists, engineers and technicians in other groups.
Without the need to apply regularly for federal grants, Dr. Reid could concentrate on one piece of the puzzle of how the brain works. He would try to decode the workings of one part of the mouse brain, the million neurons in the visual cortex, from, as he puts it, “molecules to behavior.”
There are many ways to map the brain and many kinds of brains to map. Although the ultimate goal of most neuroscience is understanding how human brains work, many kinds of research can’t be done on human beings, and the brains of mice and even flies share common processes with human brains.
The work of Dr. Reid, and scientists at Allen and elsewhere who share his approach, is part of a surge of activity in brain research as scientists try to build the tools and knowledge to explain — as well as can ever be explained — how brains and minds work. Besides the Obama administration’s $100 million Brain Initiative and the European Union’s $1 billion, decade-long Human Brain Project, there are numerous private and public research efforts in the United States and abroad, some focusing on the human brain, others like Dr. Reid’s focusing on nonhumans.

While the Human Connectome Project, which is spread among several institutions, aims for an overall picture of the associations among parts of the human brain, other scientific teams have set their sights on drilling to deeper levels. For instance, the Connectome Project at Harvard is pursuing a structural map of the mouse brain at a level of magnification that shows packets of neurochemicals at the tips of brain cells.

At Janelia Farm, the Virginia research campus of the Howard Hughes Medical Institute, researchers are aiming for an understanding of the complete fly brain — a map of sorts, if a map can be taken to its imaginable limits, including structure, chemistry, genetics and activity.
“I personally am inspired by what they’re doing at Janelia,” Dr. Reid said.
All these efforts start with maps and enrich them. If Dr. Reid is successful, he and his colleagues will add what you might call the code of a brain process, the language the neurons use to store, transmit and process information for this function.
Not that this would be any kind of final answer. In neuroscience, perhaps more than in most other disciplines, every discovery leads to new questions.
“With the brain,” Dr. Reid said, “you can always go deeper.”
‘Psychoanalyst’s Kid Probes Brain!’
A diamond-tipped slicer is used to prepare a piece of a mouse’s brain for examination with a modified electron microscope at the Allen Institute. Credit: Zach Wise for The New York Times
Dr. Reid, 53, grew up in Boston, in a family with deep roots in medicine. His grandfather taught physiology at Harvard Medical School. “My parents were both psychoanalysts,” he said during an interview last fall, smiling as he imagined a headline for this article, “Psychoanalyst’s Kid Probes Brain!”
“I pretty much always knew that I wanted to be a scientist,” he said.
As an undergraduate at Yale, he majored in physics and philosophy and in mathematics, but in the end decided he didn’t want to be a physicist. Biology was attractive, but he was worried enough about his mathematical bent to talk to one of his philosophy professors about concerns that biology would be too fuzzy for him.
The professor had some advice. “You really should read Hubel and Wiesel,” he said, referring to David Hubel and Torsten Wiesel, who had just won the Nobel Prize in 1981 for their work showing how binocular vision develops in the brain.
He read their work, and when he graduated in 1982, he was convinced that the study of the brain was both hard science and a wide-open field. He went on to an M.D.-Ph.D. program at Cornell Medical College and Rockefeller University, where Dr. Wiesel had his lab (he would go on to be president of Rockefeller).
As his studies progressed, Dr. Reid began to have second thoughts about pursuing medicine rather than research. Just a week before he was to commit to a neurology residency, he said, “I ran into a friend from the Wiesel lab and said, ‘Save me.’
That plea led to postdoctoral research in the Rockefeller lab. He stayed as a faculty member until moving to Harvard in 1996.
Mathematics and physics were becoming increasingly important in neurobiology, a trend that has continued, but there was still a certain tension between different mind-sets, he recalled. He found that there were intangible skills involved in biological research. “Good biological intuition was equally important to chops in math and physics,” he said.
“Torsten once said to me, ‘You know, Clay, science is not an intelligence test.’”
Though he didn’t recall that specific comment, Dr. Wiesel said recently that it sounded like something he would have said. “I think there are a lot of smart people who never make it in science. Why is it? What is it that is required in addition?”
“Intuition is important,” he said, “knowing what kind of questions to ask.” And, he said, “the other thing is a passion for getting to the core of the problem.”

Dr. Reid, he said, was not only smart and full of energy, but also “interested in asking questions that I think can get to the core of a problem.”
At Harvard, Dr. Reid worked on the Connectome Project to map the connections between neurons in the mouse brain. The project aims at a wiring diagram fantastically more detailed than the maps of the human brain being made with M.R.I. machines. But electron microscopes produce a static picture from tiny slices of preserved brain.
Dr. Reid began working on tying function to mapping. He and one of his graduate students, Davi Bock, now at Janelia Farm, linked studies of active mouse brains to the detailed structural images produced by electron microscopes.
Dr. Bock said he recalled Dr. Reid as having developed exactly the kind of intuition and “good lab hands” that Dr. Wiesel seemed to be encouraging. He and another graduate student were stumped by a technical problem involving a new technique for studying living brains, and Dr. Reid came by.
“Clay got on this bench piled up with components,” Dr. Bock said. “He started plugging and unplugging different power cables. We just stood there watching him, and I was sure he was going to scramble everything.” But he didn’t. Whatever he did worked.
That was part of the fun of working in the lab, Dr. Bock said, “not that he got it right every time.” But his appreciation for Dr. Reid as a leader and mentor went beyond admiration for his “mad scientist lab hands.”
“He has a deep gut level enthusiasm for what’s beautiful and what’s profound in neuroscience, and he’s kind of relentless,” Dr. Bock said.
Showing a Mouse a Picture
That instinct, enthusiasm and relentlessness will be necessary for his current pursuit. To crack the code of the brain, Dr. Reid said, two fundamental problems must be solved.
The first is: “How does the machine work, starting with its building blocks, cell types, going through their physiology and anatomy,” he said. That means knowing all the different types of neurons in the mouse visual cortex and their function — information that science doesn’t have yet.
It also means knowing what code is used to pass on information. When a mouse sees a picture, how is that picture encoded and passed from neuron to neuron? That is called neural computation.

Nuno da Costa of the Allen Institute prepared a slice of mouse brain for the modified electron microscope at Dr. Reid’s lab in Seattle. “With the brain, you can always go deeper,” Dr. Reid said. Credit: Zach Wise for The New York Times
“The other highly related problem is: How does that neural computation create behavior?” he said. How does the mouse brain decide on action based on that input?
He imagined the kind of experiment that would get at these deep questions. A mouse might be trained to participate in an experiment now done with primates in which an animal looks at an image. Later, seeing several different images in sequence, the animal presses a lever when the original one appears. Seeing the image, remembering it, recognizing it and pressing the lever might take as long as two seconds and involve activity in several parts of the brain.
Understanding those two seconds, Dr. Reid said, would mean knowing “literally what photons hit the retina, what information does the retina send to the thalamus and the cortex, what computations do the neurons in the cortex do and how do they do it, how does that level of processing get sent up to a memory center and hold the trace of that picture over one or two seconds.”
Then, when the same picture is seen a second time, “the hard part happens,” he said. “How does the decision get made to say, ‘That’s the one’?”
In pursuit of this level of understanding, Dr. Reid and others are gathering chemical, electrical, genetic and other information about what the structure of that part of the mouse brain is and what activity is going on.
They will develop electron micrographs that show every neuron and every connection in that part of a mouse brain. That is done on dead tissue. Then they will use several techniques to see what goes on in that part of the brain when a living animal reacts to different situations. “We can record the activity of every single cell in a volume of cortex, and capture the connections,” he said.
With chemicals added to the brain, the most advanced light microscopes can capture movies of neurons firing. Electrodes can record the electrical impulses. And mathematical analysis of all that may decipher the code in which information is moved around that part of the brain.
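The “mathematical analysis” Dr. Reid describes often takes the form of a decoder: if the recorded population activity really carries a code for the stimulus, a simple classifier should be able to read the stimulus back out. Below is a minimal, hypothetical sketch of that idea in Python on synthetic data; the array shapes, the logistic-regression decoder and every number in it are illustrative assumptions, not the Allen Institute’s actual analysis pipeline.

```python
# Illustrative sketch: can we decode which of two images was shown from population activity?
# The "recording" here is synthetic; real data would come from calcium imaging or electrodes.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_neurons = 200, 100             # hypothetical session: 100 cells, 200 trials
stimulus = rng.integers(0, 2, n_trials)    # which of two images was shown on each trial

# Each simulated neuron responds a bit more strongly to one image than the other, plus noise.
preference = rng.normal(0.0, 1.0, n_neurons)
activity = rng.normal(0.0, 1.0, (n_trials, n_neurons)) + np.outer(stimulus, preference)

# A linear decoder: if the population carries a stimulus code, accuracy beats 50% chance.
decoder = LogisticRegression(max_iter=1000)
accuracy = cross_val_score(decoder, activity, stimulus, cv=5).mean()
print(f"cross-validated decoding accuracy: {accuracy:.2f}")
```

If the accuracy stays near chance, the recorded signal (in this toy setup) carries no information about the image; the closer it climbs toward 1.0, the more of the “code” the decoder has captured.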
Dr. Reid says solving the first part of the problem — receiving and analyzing sensory information — might be done in 10 years. An engineer’s precise understanding of everything from photons to action could be more on the order of 20 to 30 years away, and not reachable through the work of the Allen Institute alone. But, he wrote in an email, “the large-scale, coordinated efforts at the institute will get us there faster.”
He is studying only one part of one animal’s brain, but, he said, the cortex — the part of the mammalian brain where all this calculation goes on — is something of a general purpose computer. So the rules for one process could explain other processes, like hearing. And the rules for decision-making could apply to many more complicated situations in more complicated brains. Perhaps the mouse visual cortex can be a kind of Rosetta stone for the brain’s code.
All research is a gamble, of course, and the Allen Institute’s collaborative approach, while gaining popularity in neuroscience, is not universally popular. Dr. Wiesel said it was “an important approach” that would “provide a lot of useful information.” But, he added, “it won’t necessarily create breakthroughs in our understanding of how the brain works.”
“I think the main advances are going to be made by individual scientists working in small groups,” he said.
Of course, in courting and absorbing researchers like Dr. Reid, the Allen Institute has been moving away from its broad data-gathering approach toward more focused work by individual investigators.
Dr. Bock, his former student, said his experience suggested that Dr. Reid had not only a passion and intensity for research, but a good eye for where science is headed as well.
“That’s what Clay does,” he said. “He is really good in that Wayne Gretzky way of skating to where the puck will be.”
A version of this article appears in print on February 25, 2014, on page D1 of the New York edition with the headline: The Brain’s Inner Language.



ORIGINAL: NYTimes

Leonardo’s Brain: What a Posthumous Brain Scan Six Centuries Later Reveals about the Source of Da Vinci’s Creativity

ORIGINAL: Dangerous Minds
How the most creative human who ever lived was able to access a different state of consciousness.
One September day in 2008, Leonard Shlain found himself having trouble buttoning his shirt with his right hand. He was admitted into the emergency room, diagnosed with Stage 4 brain cancer, and given nine months to live. Shlain — a surgeon by training and a self-described “synthesizer by nature” with an intense interest in the ennobling intersection of art and science, author of the now-legendary Art & Physics — had spent the previous seven years working on what he considered his magnum opus: a sort of postmortem brain scan of Leonardo da Vinci, performed six centuries after his death and fused with a detective story about his life, exploring what the unique neuroanatomy of the man commonly considered humanity’s greatest creative genius might reveal about the essence of creativity itself.
Shlain finished the book on May 3, 2009. He died a week later. His three children — Kimberly, Jordan, and filmmaker Tiffany Shlain — spent the next five years bringing their father’s final legacy to life. The result is Leonardo’s Brain: Understanding Da Vinci’s Creative Genius (public library | IndieBound) — an astonishing intellectual, and at times spiritual, journey into the center of human creativity via the particular brain of one undereducated, left-handed, nearly ambidextrous, vegetarian, pacifist, gay, singularly creative Renaissance male, who Shlain proposes was able to attain a different state of consciousness than “practically all other humans.”
Illustration by Ralph Steadman from ‘I, Leonardo.’
Noting that “a writer is always refining his ideas,” Shlain points out that the book is a synthesis of his three previous books, and an effort to live up to Kafka’s famous proclamation that “a book must be the axe for the frozen sea inside us.” It is also a beautiful celebration of the idea that art and science belong together and enrich one another whenever they converge. To understand Leonardo’s brain, Shlain points out as he proves himself once again the great poet of the scientific spirit, we must first understand our own:
The human brain remains among the last few stubborn redoubts to yield its secrets to the experimental method. During the period that scientists expanded the horizons of astronomy, balanced the valences of chemistry, and determined the forces of physics, the crowning glory of Homo sapiens and its most enigmatic emanation, human consciousness, resisted the scientific model’s persistent searching. The brain accounts for only 2 percent of the body’s volume, yet consumes 20 percent of the body’s energy. A pearly gray, gelatinous, three-pound universe, this exceptional organ can map parsecs and plot the whereabouts of distant galaxies measured in quintillions of light-years. The brain accomplishes this magic trick without ever having to leave its ensorcelled ovoid cranial shell. From minuscule-wattage electrical currents crisscrossing and ricocheting within its walls, the brain can reconstruct a detailed diorama of how it imagines the Earth appeared four billion years ago. It can generate poetry so achingly beautiful that readers weep, hatred so intense that otherwise rational people revel in the torture of others, and love so oceanic that entwined lovers lose the boundaries of their physical beings.

Shlain argues that Leonardo —

  • who painted the eternally mysterious Mona Lisa, 
  • created visionary anatomical drawings long before medical anatomy existed, 
  • made observations of bird flight in greater detail than any previous scientist, 
  • mastered engineering, architecture, mathematics, botany, and cartography, 
  • might be considered history’s first true scientist, long before the word was coined for Mary Somerville, 
  • presaged Newton’s Third Law, Bernoulli’s law, and elements of chaos theory, 
  • and was a deft composer who sang “divinely,” among countless other domains of mastery —

is the individual most worthy of the title “genius” in both science and art.

The divergent flow of art and science in the historical record provides evidence of a distinct compartmentalization of genius. The river of art rarely intersected with the meander of science.
[…]
Although both art and science require a high degree of creativity, the difference between them is stark. For visionaries to change the domain of art, they must make a breakthrough that can only be judged through the lens of posterity. Great science, on the other hand, must be able to predict the future. If a scientist’s hypotheses cannot be turned into a law that can be verified by future investigators, it is not scientifically sound. Another contrast: Art and science represent the difference between “being” and “doing.” Art’s raison d’être is to evoke an emotion. Science seeks to solve problems by advancing knowledge.
[…]
Leonardo’s story continues to compel because he represents the highest excellence all of us lesser mortals strive to achieve — to be intellectually, creatively, and emotionally well-rounded. No other individual in the known history of the human species attained such distinction both in science and art as the hyper-curious, undereducated, illegitimate country boy from Vinci.
Artwork from Alice and Martin Provensen’s vintage pop-up book about the life of Leonardo.

Using a wealth of available information from Leonardo’s notebooks, various biographical resources, and some well-reasoned speculation, Shlain sets out to perform a “posthumous brain scan” seeking to illuminate the unique wiring of Da Vinci’s brain and how it explains his unparalleled creativity. Leonardo was an outlier in a number of ways — socially, culturally, biologically, and in some seemingly unimportant yet, as Shlain explains, notable ways bridging these various aspects of life. For instance:

Leonardo was a vegetarian in a culture that thought nothing of killing animals for food. His explanation for his unwillingness to participate in carnivory was that he did not want to contribute to any animal’s discomfort or death. He extended the courtesy of staying alive to all living creatures, and demonstrated a feeling of connectedness to all life, which was in short supply during a time that glorified hunting.
He was also the only individual in recorded history known to write comfortably backwards, performing what is known as “mirror writing,” which gives an important clue about the wiring of his brain:
Someone wishing to read Leonardo’s manuscripts must first hold the pages before a mirror. Instead of writing from left to right, which is the standard among all European languages, he chose to write from right to left — what the rest of us would consider backward writing. And he used his left hand to write.
Thoroughly confusing the issue was the fact that sometimes he would switch in mid-sentence, writing some words in one direction followed by other words heading in the opposite direction. Another intriguing neurological datum: Careful examination of two samples of his handwriting show the one written backward moving from right to left across the page is indistinguishable from the handwriting that is not reversed.
Leonardo’s quirks of penmanship strongly suggest that his two hemispheres were intimately connected in an extraordinary way. The traditional dominance pattern of one hemisphere lording it over the other does not seem to have been operational in Leonardo’s brain. Based on what we can extrapolate from the brains of people who share Leonardo’s ability to mirror-write, the evidence points to the presence of a large corpus callosum that kept each hemisphere well informed as to what the other was doing.

Further evidence that his corpus callosum — that thick bundle of fibers connecting the left and right hemispheres, consisting of more than 200 million neurons — was “fairly bursting with an overabundance of connecting neurons” comes from his unusually deft fusion of art and science. For instance, Shlain points out, no other artist in history labored so obsessively over perfecting the geometrical details of the science of perspective.
Before delving into Leonardo’s specific neuroanatomy, Shlain points out that because our brains have the maximum number of neurons at the age of eight months and because a dramatic pruning of our neurocircuitry unfolds over the next decade, those early years are crucially formative in our cognitive development and warrant special attention. (Tolstoy captured this beautifully when he wrote, “From a five-year-old child to my present self there is only one step. From a new-born infant to a five-year-old child there is an awesome distance.”)
Leonardo’s own childhood was so unusual and tumultuous that it calls for consideration in examining his brain development. The illicit child of a rich playboy from the city and a poor peasant girl from the picturesque Tuscan town of Vinci, he grew up without a real father — an ambitious notary, his father refused to marry Leonardo’s mother in order to avoid compromising his social status. The little boy was raised by a single mother in the countryside. Eventually, his father arranged for his mother to marry another man, and he himself married a sixteen-year-old girl. Leonardo was taken from his mother and awkwardly included in his father’s household as a not-quite-son. But the father-figure in his life ended up being his kindly uncle Francesco, whom the boy grew to love dearly. He remained in contact with his mother throughout his life, and evidence from his notebooks suggests that, like Andy Warhol, he invited her to live with him as she became elderly. Shlain points to two perplexities that stand out in Leonardo’s upbringing:

  • First, contemporary psychologists agree that removing young children from their mothers makes for substantial attachment and anxiety issues throughout life, producing emotionally distant adults. 
  • Secondly, Leonardo’s illegitimacy greatly limited his education options, as the Church, in one of its many strokes of gobsmacking lack of the very compassion it preaches, decreed that children born to unwed parents were not eligible for enrollment in its cathedral schools.

Shlain writes: 

Outside of the prohibitively expensive alternative of private tutors, admission to one of these schools was the only means to learning the secret code that opened the doors of opportunity.
That secret code was knowledge of Latin and Greek, without which it was practically impossible to participate in the making of the Renaissance. And yet Leonardo had an especially blistering response to those who dismissed his work due to his lack of education:
They will say that because of my lack of book learning, I cannot properly express what I desire to treat of. Do they not know that my subjects require for their exposition experience rather than the words of others? And since experience has been the mistress, and to her in all points make my appeal.
(More than half a millennium later, Werner Herzog would go on to offer aspiring filmmakers similarly spirited advice.) Shlain writes:
Creativity is a combination of courage and inventiveness. One without the other would be useless.
So how did Leonardo muster the courage and inventiveness to turn the dismal cards he was dealt into the supreme winning hand of being history’s greatest genius? Shlain argues that while we can speculate about how much more remarkable work Leonardo may have done had he been able to command the respect, resources, and recognition “of one who claims noble blood, a university position, and powerful friends in high places,” there is an even more powerful counterargument to be made — one that resonates with Nietzsche’s ideas about the value of difficulty and bespeaks the immeasurable benefits of what Orson Welles called “the gift of ignorance,” or what is commonly known as “beginner’s mind”:
A strong counterargument can also be put forth that it was precisely his lack of indoctrination into the reigning dogma taught in these institutions that liberated him from mental restraints. Unimpeded by the accretion of misconceptions that had fogged the lens of the educated, Leonardo was able to ask key questions and seek fresh answers. Although he could not quote learned books, he promised, “I will quote something far greater and more worthy: experience, the mistress of their masters.” He disdained “trumpets and reciters of the works of others,” and tried to live by his own dictum: “Better a small certainty, than a big lie.” He referred to himself as omo sanza lettere — an “unlettered man” — because he had not received the kind of liberal arts schooling that led to the university. Somewhere in his late thirties and early forties, Leonardo made a concerted effort to teach himself Latin. Long lists of vocabulary words appear in his notebooks. Anyone who has tried to learn a foreign language in adulthood knows how difficult the task can be.
One silver lining to his lack of formal education and attentive parenting is that he was never trained out of his left-handedness as was the practice during the Middle Ages and the Renaissance — something that turned out to be crucial in the anatomy of his genius.
Illustration by Ralph Steadman from ‘I, Leonardo.’
But Leonardo’s social disadvantages didn’t end with education. Based on evidence from his notebooks and biographical accounts from a handful of contemporaries, he was most likely homosexual — at a time when it was not only a crime but a “sin” punishable by death. Even in his fashion and demeanor, Leonardo appeared to be the Walt Whitman of his day — in other words, a proto-dandy who “fell into the flamboyant set.” Shlain quotes Anonimo Gaddiano, a contemporary of Leonardo’s: 
He wore a rose colored tunic, short to the knee, although long garments were then in fashion. He had, reaching down to the middle of his breasts, a fine beard, curled and well kept.
Leonardo was also unorthodox in his universal empathy for animals and philosophical stance against eating them — a complete anomaly in a carnivorous era when the poor longed for meat and the rich threw elaborate feasts around it, showcasing it as a status symbol of their wealth and power. Instead, Leonardo was known to buy caged birds whenever he saw them in the town’s shops and set them free. But Leonardo’s most significant source of exceptionalism goes back to his handedness. Left-handedness might still be an evolutionary mystery, but it is also an enduring metaphor for the powers of intuition. For Leonardo, the physical and the intuitive were inextricably linked:
Leonardo intuited that a person’s face, despite appearing symmetrical, is actually divided into two slightly different halves. Because of the crossover in sensory and motor nerves from each side of the face within the brain, the left hemisphere controls the muscles of the right side of the face and the right hemisphere controls the muscles of the left side. The majority of people are left-brained/right-handed, which means that the right half of their face is under better conscious control than their left. In contrast, the left half of the face connects to the emotional right brain, and is more revealing of a person’s feelings. Right-handers have more difficulty trying to suppress emotional responses on the left side of their face.
In a recent psychology experiment, a group of unsuspecting college students were ushered into a photographer’s studio one at a time and informed that they were to pose for a picture to be given to members of their family. The majority of these right-handed students positioned themselves unaware that they were turning the left side of their face toward the camera’s lens. All of them smiled.
Brought back a second time, the researchers informed them that, now, they were to pose for a job application photo. In this case, they adopted a more professional demeanor, and the majority of right-handers emphasized the right side of their face. The results of this experiment, along with several others of similar design, strongly suggest that unconsciously, most people know that the right side of their face is best to present to the outside world. They are also subliminally aware that their left side is a more natural reflection of who they really are.
Leonardo understood these subtleties of expression. Mona Lisa is best appreciated by observing the left side of her face.
One of Leonardo’s great artistic innovations was his inclusion of the subject’s hands in a portrait. Up to that point, portraiture included only the upper chest and head, but Leonardo saw in the expressiveness of hands a gateway to the subject’s state of mind, his psychological portraiture implicitly invalidating the mind-body split and painting consciousness itself.
This brings us back to Leonardo’s own brain. Shlain’s most salient point has to do with the splitting of the brain into two functionally different hemispheres, an adaptation that catapulted us ahead of all other creatures in intellectual capacity and also accounted for Leonardo’s singular genius. Reflecting on findings from studies of split-brain patients, Shlain explains:
The most sublime function of the left hemisphere — critical thinking — has at its core a set of syllogistic formulations that undergird logic. In order to reach the correct answer, the rules must be followed without deviation. So dependent is the left brain on rules that Joseph Bogen, the neurosurgeon who operated on many of the first split-brain patients, called it the propositional brain: It processes information according to an underlying set of propositions. In contrast, he called the right hemisphere the appositional brain, because it does just the opposite: It processes information through nonlinear, non-rule-based means, incorporating differing converging determinants into a coherent thought. Bogen’s classification of the brain into two different types, proposition versus apposition, has been generally accepted by neuroscientists, and it appears often in neurocognitive literature.
The right brain’s contribution to creativity, however, is not absolute, because the left brain is constantly seeking explanations for inexplicable events. Unfortunately, although many are extremely creative, without the input of the right hemisphere, they are almost universally wrong. It seems that there is no phenomenon for which the left brain has not confabulated an explanation. This attribute seems specific for the left language lobe.
Artwork from Alice and Martin Provensen’s vintage pop-up book about the life of Leonardo.
Echoing Hannah Arendt’s assertion that the ability to ask “unanswerable questions” is the hallmark of the human mind and F. Scott Fitzgerald’s famous aphorism that “the test of a first-rate intelligence is the ability to hold two opposed ideas in the mind at the same time, and still retain the ability to function,” Shlain describes how this interplay illuminates the creative process:
The first step in the creative process is for an event, an unidentified object, an unusual pattern, or a strange juxtaposition to alert the right brain. In a mysterious process not well understood, it prods the left brain to pose a question. Asking the right question goes to the heart of creativity. Questions are a Homo sapiens forte. Despite the amazing variation in animal communication, there is only one species that can ask a question and — most impressively — dispute the answer. But Mother Nature would not have provided us with language simply to ask a question. She had to equip us with a critical appendage that could investigate those questions. That appendage was the opposable thumb. Thumbs have a lot to do with curiosity, which in turn leads to creativity.
Building on previous research on the four stages of the creative process, Shlain outlines the role of the two hemispheres which, despite working in concert most of the time, are subject to the dominance of the left hemisphere:
Natural Selection gave the left hemisphere hegemony over the right. Under certain circumstances, however, the minor hemisphere must escape the control of the major one to produce its most outstanding contribution — creativity. For creativity to manifest itself, the right brain must free itself from the deadening hand of the inhibitory left brain and do its work, unimpeded and in private. Like radicals plotting a revolution, they must work in secret out of the range of the left hemisphere’s conservatives. After working out many of the kinks in the darkness of the right hemisphere’s subterranean processes, the idea, play, painting, theory, formula, or poetic metaphor surfaces exuberantly, as if from beneath a manhole cover that was overlaying the unconscious, and demands the attention of the left brain. Startled, the other side responds in wonderment.
When a creative impulse arises in the right hemisphere, Shlain writes, it is ferried over to the left side of the brain via the mighty corpus callosum — the largest and most poorly understood structure in the human brain, and a significant key to the mystery of Leonardo’s extraordinary creativity in attaining the two grand goals of his life: to study and discern the truth behind natural phenomena, and to communicate that truth with astounding artistry. 
Illustration by Ralph Steadman from ‘I, Leonardo.’
But Shlain’s most intriguing point about Leonardo’s brain has to do with the corpus callosum and its relation to the gendered brain. We already know that “psychological androgyny” is key to creativity, and it turns out that the corpus callosum has a major role in that. For one thing, Shlain points out, there are differences in the size of that essential bundle of fibers between right-handed heterosexual males, or RHHM, and all other variants of handedness, gender, and orientation — left-handed heterosexual males, heterosexual women of both hand dominances, and homosexual men and women. The notion of the gendered brain is, of course, problematic and all sweeping statistical generalizations tend to exist on bell-shaped curves, with outliers on either side. Still, Shlain relays some fascinating findings:
The most dichotomous brain — that is, where the two hemispheres are the most specialized — belongs to a right-handed heterosexual male. Approximately 97 percent of key language modules reside in his left hemisphere, making it unequivocally his dominant lobe. This extreme skewing is not present to the same degree in women, both right- and left-handed; gays and lesbians; and left-handers of both sexes.
[…]
Females, right- or left-handed, have a more even distribution between the lobes regarding language and brain dominance. Right-handed women still have the large majority of their language modules in their left brains, but whereas an RHHM would most likely have 97 percent of his wordsmithing skills concentrated in the left lobe, a woman would be more likely to have a lesser percentage (about 80 percent) in the left brain, and the remaining 20 percent in the right brain.
Shlain cites MRI research by Sandra Witelson, who found that the anterior commissure, the largest of the corpus callosum’s anatomically distinct “component cables,” can be up to 30% larger in women than in men, and other studies have found that it is 15% larger in gay men than in straight men. Taken together, these two findings about the corpus callosum — that RHHMs have more specialized brains and slimmer connecting conduits between the two hemispheres — reveal important deductive insight about Leonardo’s multi-talented brain, which fused so elegantly the prototypical critical thinking of the left hemisphere with the wildly creative and imaginative faculties of the right.
Evidence from his notebooks and life strongly suggests that Leonardo was what scientists call an ESSP — an individual with exclusive same-sex preference. He never married or had children; he rarely referenced women in his writings, and whenever he did, it was only in the context of deciphering beauty; he was once jailed for homosexual conduct and spent some time in prison while awaiting a verdict; and his anatomical drawings of the female reproductive system and genitalia are a stark outlier of inaccuracy amid his otherwise remarkably medically accurate illustrations. All of this is significant because ESSPs don’t conform to the standard brain model of RHHM. They are also more likely to be left-handed, as Leonardo was. In fact, Shlain points out, left-handers tend to have a larger corpus callosum than right-handers, and artists in general are more likely to be left-handed than the average person: around 9% of the general population are estimated to be left-handed, and 30-40% of the student body in art schools are lefties.
A left-handed ESSP, Leonardo was already likely to have a larger corpus callosum, but Shlain turns to the power of metaphor in illuminating the imagination for further evidence suggesting heightened communication between his two hemispheres:
The form of language that Leonardo used was highly metaphorical. He posed riddles and buried metaphors in his paintings. For this to occur, he had to have had a large connection of corpus callosum fibers between his right hemisphere and his left. The form of language based on metaphor — poetry, for instance — exists in the right hemisphere, even though language is primarily a left hemispheric function. To accomplish the task of the poet, a significant connection must exist between the parts of the right hemisphere, and, furthermore, there must be many interconnections between the two hemispheres. These fibers must be solidly welded to the language centers in the left hemisphere so that poetic metaphors can be expressed in language. Leonardo used the metaphor in his writings extensively — another example of connected hemispheres.
And therein lies Shlain’s point: The source of Leonardo’s extraordinary creativity was his ability to access different ways of thinking, to see more clearly the interconnectedness of everything, and in doing so, to reach a different state of consciousness than the rest of us:
His ESSP-ness put him somewhere between the masculine and the feminine. His left-handedness, ambidexterity, and mirror writing were indications of a nondominant brain. His adherence to vegetarianism at a time when most everyone was eating meat suggests a holistic view of the world. The equality between his right and left hemispheres contributed to his achievements in art and science, unparalleled by any other individual in history. His unique brain wiring also allowed him the opportunity to experience the world from the vantage point of a higher dimension. The inexplicable wizardry present in both his art and his science can be pondered only by stepping back and asking: Did he have mental faculties that differed merely in degree, or did he experience a form of cognition qualitatively different from the rest of us? I propose that many of Leonardo’s successes (and failures) were the result of his gaining access to a higher consciousness.
Significantly, Leonardo was able to envision time and space differently from the rest of us, something evidenced in both his art and his scientific studies, from revolutionizing the art of perspective to predating Newton’s famous action-reaction law by two centuries when he wrote, “See how the wings, striking the air, sustain the heavy eagle in the thin air on high. As much force is exerted by the object against the air as by the air against the object.” Shlain poses the ultimate question:
When pondering Leonardo’s brain we must ask the question: Did his brain perhaps represent a jump toward the future of man? Are we as a species moving toward an appreciation of space-time and nonlocality?
Illustration by Ralph Steadman from ‘I, Leonardo.’
With an eye to Leonardo’s unflinching nonconformity — his pacifism in an era that glorified war, his resolute left-handedness despite concentrated efforts at the time to train children out of that devilish trait, his vegetarianism and holistic faith in nature amid a carnivorous culture — Shlain turns an optimistic gaze to the evolution of our species:
The appearance of Leonardo in the gene pool gives us hope. He lived in an age when war was accepted. Yet, later in life, he rejected war and concentrated on the search for truth and beauty. He believed he was part of nature and wanted to understand and paint it, not control it. […] We humans are undergoing a profound metamorphosis as we transition into an entirely novel species. For those who doubt it is happening, remember: For millions of years dogs traveled in packs as harsh predators, their killer instinct close to the surface. Then humans artificially interfered with the canine genome beginning a mere six thousand years ago. No dog could have predicted in prehistoric times that the huge, snarling member, faithful to a pack, would evolve into individual Chihuahuas and lap-sitting poodles.
Leonardo’s Brain is a mind-bending, consciousness-stretching read in its totality. Complement it with Shlain on integrating wonder and wisdom and how the alphabet sparked the rise of patriarchy.

Are Telepathy Experiments Stunts, or Science?

November 21, 2014
Scientists have established direct communication between two human brains, but is it more than a stunt?
WHY IT MATTERS
Communicating directly with the brain could help scientists better understand how it encodes information.
Two scientific teams this year patched together some well-known technologies to directly exchange information between human brains.
The projects, in the U.S. and Europe, appear to represent the first occasions in history that any two people have transmitted information without either of them speaking or moving any muscle. For now, however, the “telepathy” technology remains so crude that it’s unlikely to have any practical impact.
In a paper published last week in the journal PLOS One, neuroscientists and computer engineers at the University of Washington in Seattle described a brain-to-brain interface they built that lets two people coöperatively play a simple video game. Earlier this year, a company in Barcelona called Starlab described transmitting short words like “ciao,” encoded as binary digits, between the brains of individuals on different continents.
Both studies used a similar setup: the sender of the message wore an EEG (electroencephalography) cap that captured electrical signals generated by his cortex while he thought about moving his hands or feet. These signals were then sent over the Internet to a computer that translated them into jolts delivered to a recipient’s brain using a magnetic coil. In Starlab’s case, the recipient perceived a flash of light. In the University of Washington’s case, the magnetic pulse caused an involuntary twitch of the wrist over a touchpad, to shoot a rocket in a computer game.
Neither EEG recording nor this kind of brain stimulation (called transcranial magnetic stimulation, or TMS) is a new technology. What is novel is bringing the two together for the purposes of simple communication. The Starlab researchers suggested that such “hyperinteraction technologies” could “eventually have a profound impact on the social structure of our civilization.”
For now, however, the technology remains extremely limited. Neither experiment transmitted emotions, thoughts, or ideas. Instead they used human brains essentially as relays to convey a simple signal between two computers. The rate at which information was transmitted was also glacial.
Safety guidelines limit the use of TMS devices to a single pulse every 20 seconds. But even without that restriction, a person can only transmit a few bits of information per minute wearing an EEG cap, because willfully changing the shape of their brain wave takes deliberate concentration.
By comparison, human speech conveys information at roughly 3,000 bits per minute, according to one estimate. That means the information content of a 90-second conversation would take a day or more to transmit mentally.
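A quick back-of-the-envelope check shows where the “day or more” figure comes from. The sketch below assumes roughly one bit per TMS pulse at the one-pulse-per-20-seconds safety limit mentioned above; the exact rates are illustrative approximations, not measured values from either study.

```python
# Rough arithmetic behind the transmission-time claim (all figures are approximations).
speech_bits_per_min = 3000        # estimated information rate of ordinary speech
conversation_min = 1.5            # a 90-second conversation
link_bits_per_min = 3             # ~1 bit per TMS pulse, one pulse every 20 seconds

total_bits = speech_bits_per_min * conversation_min       # 4,500 bits of content
hours_needed = total_bits / link_bits_per_min / 60        # roughly 25 hours
print(f"{total_bits:.0f} bits would take about {hours_needed:.0f} hours over the brain link")
```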
Researchers intend to explore more precise, and faster, ways of conveying information. Andreas Stocco, one of the University of Washington researchers, says his team has a $1 million grant from the WM Keck Foundation to upgrade its equipment and to carry out experiments with different ways of exchanging information between minds, including with focused ultrasound waves that can stimulate nerves through the skull.
Stocco says an important use of the technology would be to help scientists test their ideas about how neurons in the brain represent information, especially about abstract concepts. For instance, if a researcher believed she could identify the neuronal pattern reflecting, say, the idea of a yellow airplane, one way to prove it would be to transmit that pattern to another person and ask what she was thinking.
“You can see this interface as two different things,” says Stocco. “One is a super-cool toy that we have developed because it’s futuristic and an engineering feat but that doesn’t produce science. The other is, in the future, the ultimate way to test hypotheses about how the brain encodes information.”

Pathway Genomics: Bringing Watson’s Smarts to Personal Health and Fitness

ORIGINAL: A Smarter Planet
November 12, 2014
By Michael Nova M.D.
Michael Nova, Chief Medical Officer, Pathway Genomics
To describe me as a health nut would be a gross understatement. I run five days a week, bench press 275 pounds, do 120 pushups at a time, and surf the really big waves in Indonesia. I don’t eat red meat, I typically have berries for breakfast and salad for dinner, and I consume an immense amount of kale—even though I don’t like the way it tastes. My daily vitamin/supplement regimen includes Alpha-lipoic acid, Coenzyme Q and Resveratrol. And, yes, I wear one of those fitness gizmos around my neck to count how many steps I take in a day.
I have been following this regimen for years, and it’s an essential part of my life.
For anybody concerned about health, diet and fitness, these are truly amazing times. There’s a superabundance of health and fitness information published online. We’re able to tap into our electronic health records, we can measure just about everything we do physically, and, thanks to the plummeting price of gene sequencing, we can map our complete genomes for as little as $3000 and get readings on smaller chunks of genomic data for less than $100.
Think of it as your own personal health big-data tsunami.
The problem is we’re confronted with way too much of a good thing. There’s no way an individual like me or you can process all of the raw information that’s available to us—much less make sense out of it. That’s why I’m looking forward to being one of the first customers for a new mobile app that my company, Pathway Genomics, is developing with help from IBM Watson Group.
Surfing in Indonesia
Called Pathway Panorama, the smartphone app will make it possible for individuals to ask questions in everyday language and get answers in less than three seconds that take into consideration their personal health, diet and fitness scenarios combined with more general information. The result is recommendations that fit each of us like a surfer’s wet suit. Say you’ve just flown from your house on the coast to a city that’s 10,000 feet above sea level. You might want to ask how far you could safely run on your first day after getting off the plane—and at what pulse rate should you slow your jogging pace.
Or say you’re diabetic and you’re in a city you have never visited before. You had a pastry for breakfast and you want to know when you should take your next shot of insulin. In an emergency, you’ll be able to find specialized healthcare providers near where you are who can take care of you.
Whether you’re totally healthy and want to maximize your physical performance or you have health issues and want to reduce risks, this service will give you the advice you need. It’s like a guardian angel sitting on your shoulder who will also pre-emptively offer you help even if you don’t ask for it.
We use Watson’s language processing and cognitive abilities and combine them with information from a host of sources. The critical data comes from individual DNA and biomarker analysis that Pathway Genomics performs using a variety of devices and software tools.
Pathway Genomics, which launched 6 years ago in San Diego, already has a growing business of providing individual health reports delivered primarily through individuals’ personal physicians. With our Pathway Panorama app, we’ll reach out directly to consumers in a big way.
We’re in the middle of raising a new round of venture financing to pay for the expansion of our business. This brings to $80 million the amount of venture capital we have raised in the past six years—which makes us one of the best capitalized healthcare startups.
IBM is investing in Pathway Genomics as part of its commitment of $100 million to companies that are bringing to market a new generation of apps and services infused with Watson’s cognitive computing intelligence. This is the third such investment IBM has made this year.
We expect the app to be available in mid-2015. We have not yet set pricing, but we expect to charge a small monthly fee. We also are creating a version for physicians.
To me, the real beauty of the Panorama app is that it will make it possible for us to safeguard our health and improve our fitness without obsessing all the time. We’ll just live our lives, and, when we need help, we’ll get it.
——-
To learn more about the new era of computing, read Smart Machines: IBM’s Watson and the Era of Cognitive Computing.

A Worm’s Mind In A Lego Body

ORIGINAL: i-Programmer
Written by Lucy Black
16 November 2014
Take the connectome of a worm and transplant it as software in a Lego Mindstorms EV3 robot – what happens next?
It is a deep and long-standing philosophical question: are we just the sum of our neural networks? Of course, if you work in AI you take the answer mostly for granted, but until someone builds a human brain and switches it on we really don’t have a concrete example of the principle in action.
Image credit: KDS444, modified by Nnemo
The nematode worm Caenorhabditis elegans (C. elegans) is tiny and only has 302 neurons. These have been completely mapped and the OpenWorm project is working to build a complete simulation of the worm in software. One of the founders of the OpenWorm project, Timothy Busbice, has taken the connectome and implemented an object oriented neuron program.
The model is accurate in its connections and uses UDP packets to fire neurons. If two neurons have three synaptic connections, then when the first neuron fires, a UDP packet is sent to the second neuron with the payload “3”. The neurons are addressed by IP and port number. The system uses an integrate-and-fire algorithm: each neuron sums the incoming weights and fires if the sum exceeds a threshold. The accumulator is zeroed if no message arrives in a 200ms window or if the neuron fires. This is similar to what happens in the real neural network, but not exact.
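A minimal sketch of that scheme in Python, for illustration only: each neuron owns a UDP port, adds up incoming payloads, fires when the sum crosses a threshold, and resets when nothing arrives inside the 200 ms window. The class name, ports, weights and single-threaded demo loop are assumptions for the sketch, not the OpenWorm or Busbice code.

```python
# Illustrative integrate-and-fire neuron communicating over UDP (not the actual OpenWorm code).
import socket

class Neuron:
    def __init__(self, port, threshold=5, targets=()):
        self.sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        self.sock.bind(("127.0.0.1", port))
        self.sock.settimeout(0.2)               # the 200 ms integration window
        self.threshold = threshold
        self.targets = targets                  # list of (ip, port, synaptic_weight)
        self.accumulator = 0

    def fire(self):
        for ip, port, weight in self.targets:   # payload = synaptic weight / connection count
            self.sock.sendto(str(weight).encode(), (ip, port))
        self.accumulator = 0                    # firing zeroes the accumulator

    def step(self):
        try:
            payload, _ = self.sock.recvfrom(64)
            self.accumulator += int(payload)
            if self.accumulator >= self.threshold:
                self.fire()
        except socket.timeout:
            self.accumulator = 0                # no input inside the window: reset

# Tiny demo: a sensory neuron drives a downstream neuron through a weight-3 synapse.
downstream = Neuron(port=9002, threshold=5)
sensory = Neuron(port=9001, targets=[("127.0.0.1", 9002, 3)])
sensory.fire(); sensory.fire()                  # two spikes: 3 + 3 crosses the threshold
downstream.step(); downstream.step()
print("downstream accumulator:", downstream.accumulator)  # back to 0 after it fired
```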
The software works with sensors and effectors provided by a simple LEGO robot. The sensors are sampled every 100ms. For example, the sonar sensor on the robot is wired as the worm’s nose. If anything comes within 20cm of the “nose” then UDP packets are sent to the sensory neurons in the network.
The same idea is applied to the 95 motor neurons but these are mapped from the two rows of muscles on the left and right to the left and right motors on the robot. The motor signals are accumulated and applied to control the speed of each motor. The motor neurons can be excitatory or inhibitory and positive and negative weights are used.
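The robot-side wiring can be sketched in the same spirit, as shown below; the 20 cm nose threshold and the 100 ms polling interval come from the description above, while the port numbers, the sonar stub and the motor bookkeeping are placeholders rather than the actual EV3 code.

```python
# Placeholder sketch of the sensor and motor mapping (not the actual Lego EV3 code).
import random
import socket
import time

NOSE_NEURON_PORTS = [9001, 9002]            # hypothetical ports of the "nose" sensory neurons
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def read_sonar_cm():
    return random.uniform(5, 100)           # stand-in for the robot's ultrasonic sensor

motor_speed = {"left": 0.0, "right": 0.0}

def on_motor_neuron_spike(side, weight):
    motor_speed[side] += weight             # excitatory weights are positive, inhibitory negative

for _ in range(20):                         # about two seconds of the 100 ms sensor loop
    if read_sonar_cm() < 20:                # something within 20 cm of the worm's "nose"
        for port in NOSE_NEURON_PORTS:
            sock.sendto(b"1", ("127.0.0.1", port))
    time.sleep(0.1)

# Pretend a few motor neurons fired; in the real system these arrive as UDP packets.
on_motor_neuron_spike("left", 2)
on_motor_neuron_spike("right", -1)
print("motor speeds:", motor_speed)
```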
And the result?
It is claimed that the robot behaved in ways that are similar to observed C. elegans. Stimulation of the nose stopped forward motion. Touching the anterior and posterior touch sensors made the robot move forward and back accordingly. Stimulating the food sensor made the robot move forward.
Watch the video to see it in action. 
The key point is that there was no programming or learning involved to create the behaviors. The connectome of the worm was mapped and implemented as a software system and the behaviors emerge.
The connectome may only consist of 302 neurons, but it is self-stimulating and it is difficult to understand how it works – but it does.
Currently the connectome model is being transferred to a Raspberry Pi and a self-contained Pi robot is being constructed. It is suggested that it might have practical application as some sort of mobile sensor – exploring its environment and reporting back results. Given its limited range of behaviors, it seems unlikely to be of practical value, but given more neurons this might change.
  • Is the robot a C. elegans in a different body or is it something quite new? 
  • Is it alive?
These are questions for philosophers, but it does suggest that the ghost in the machine is just the machine.
As AI researchers, we still need to know whether the principle of implementing a connectome scales.

IBM’s new email app learns your habits to help get things done

Email can be overwhelming, especially at work; it can take a while to get back to an important conversation or project. IBM clearly knows how bad that deluge can be, though, since its new Verse email client is built to eliminate as much clutter as possible. The app learns your habits and puts the highest-priority people and tasks at the top level. You’ll know if a key team member emailed you during lunch, or that you have a meeting in 10 minutes. Verse also puts a much heavier emphasis on collaboration and search. It’s easier to find a particular file, message or topic, and there will even be a future option to get answers from a Watson thinking supercomputer — you may get insights without having to speak to a colleague across the hall.
It’s quite clever at first glance, although you may have to wait a while to give it a spin; a Verse beta on the desktop will be available this month, but only to a handful of IBM’s customers and partners. You’ll have to wait until the first quarter of 2015 to get a version built for individual use. It’ll be “freemium” (free with paid add-ons) when it does reach the public, however, and there are promises of apps for Android and iOS to make sure you’re productive while on the road.
SOURCE: IBM (1), (2)
ORIGINAL: Engadget
November 18th 2014

Machine Learning Algorithm Ranks the World’s Most Notable Authors

ORIGINAL: Tech Review
November 17, 2014
Deciding which books to digitise when they enter the public domain is tricky, unless you have an independent ranking of the most notable authors.
Public Domain Day, 1 January, is the day on which previously copyrighted works become freely available to print, digitise, modify or re-use in more or less any way. In most countries, this happens 50 or 70 years after the death of the author.
There is even a website that celebrates this event, announcing all the most notable authors whose works become freely available on that day. This allows organisations such as Project Gutenberg to prepare digital editions, LibriVox to create audio versions, and so on.
But here’s an interesting question. While the works of thousands of authors enter the public domain each year, only a small percentage of these end up being widely available. So how to choose the ones to focus on?
Today, Allen Riddell at Dartmouth College in New Hampshire says he has the answer. Riddell has developed an algorithm that automatically generates an independent ranking of notable authors for a given year. It is then a simple task to pick the works to focus on or to spot notable omissions from the past.
Riddell’s approach is to look at what kind of public domain content the world has focused on in the past and then use this as a guide to find content that people are likely to focus on in the future. For this he uses a machine learning algorithm to mine two databases. The first is a list of over a million online books in the public domain maintained by the University of Pennsylvania. The second is Wikipedia.
Riddell begins with the Wikipedia entries of all authors in the English language edition—more than a million of them. His algorithm extracts information such as the article length, article age, estimated views per day, time elapsed since last revision and so on.
The algorithm then takes the list of all authors on the online book database and looks for a correlation between the biographical details on Wikipedia and the existence of a digital edition in the public domain.
That produces a “public domain ranking” of all the authors that appear on Wikipedia. For example, the author Virginia Woolf has a ranking of 1,081 out of 1,011,304 while the Italian painter Giuseppe Amisani, who died in the same year as Woolf, has a ranking of 580,363. So Riddell’s new ranking clearly suggests that organisations like Project Gutenberg should focus more on digitising Woolf’s work than Amisani’s.
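To make the idea concrete, here is a hedged sketch of the general approach: fit a classifier that predicts whether an author already has a digital edition from Wikipedia-derived features, then rank every author by the predicted probability. The synthetic data, feature choices and logistic regression model below are illustrative assumptions, not Riddell’s actual pipeline.

```python
# Illustrative sketch of a "public domain rank": correlate Wikipedia-style
# features with the existence of a digital edition, then rank by score.
# Synthetic data; the features follow the article, the model is an assumption.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_authors = 1000

# Columns: article length, article age (days), estimated views/day,
# days since last revision
X = rng.lognormal(mean=[8.0, 7.0, 2.0, 4.0], sigma=1.0, size=(n_authors, 4))

# 1 = a digital edition already exists in the online-books database
p_exists = 1 / (1 + np.exp(-(np.log(X[:, 2]) - 2.0)))   # fake link to popularity
y = (rng.random(n_authors) < p_exists).astype(int)

model = LogisticRegression(max_iter=1000).fit(np.log(X), y)
scores = model.predict_proba(np.log(X))[:, 1]

# Rank 1 = the author most likely to merit digitisation effort
rank = scores.argsort()[::-1].argsort() + 1
print("author 0 has rank", rank[0], "of", n_authors)
```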
The beauty of this approach is that it is entirely independent. That’s in stark contrast to the committees that are often set up to rank works subjectively.
Of the individuals who died in 1965 and whose work will enter the public domain next January in many parts of the world, the new algorithm picks out T S Eliot as the most highly ranked individual. Others highly ranked include Somerset Maugham, Winston Churchill and Malcolm X.
As well as by year of death, it’s possible to rank authors according to categories of interest. For example, the top-ranked Mexican poet is Homero Aridjis, the top-ranked French philosopher, Jean-Paul Sartre and the top-ranked female American writer, Terri Windling.
Riddell says his ranking system compares well with existing rankings compiled by human experts, such as one compiled by the editorial board of the Modern Library. “The Public Domain Rank of the authors selected by the Modern Library editorial board are consistently high,” he says.
It is not perfect, however. Riddell acknowledges that his new Public Domain Ranking is likely to reflect the biases inherent in Wikipedia, which is well known for having few female editors, for example.
But with that in mind, the ranking is still likely to be useful. It should be handy for finding notable authors in the public domain whose works are not yet available electronically because they have somehow been overlooked. “Flannery O’Connor and Sylvia Plath stand out as significant examples of authors whose works might be made available today on Project Gutenberg Canada,” says Riddell. (Canada follows the 50-year rule rather than 70.)
It may even change the nature of Public Domain Day. “Public Domain Rank promises to facilitate—and even automate—Public Domain Day,” says Riddell.
Handy!
Ref: arxiv.org/abs/1411.2180 Public Domain Rank: Identifying Notable Individuals with the Wisdom of the Crowd

Robot Brains Catch Humans in 25 Years, Then Speed Right On By

ORIGINAL: Bloomberg
By Tom Randall
Nov 10, 2014

 

An android Repliee S1, produced by Japan’s Osaka University professor Hiroshi Ishiguro, performing during a dress rehearsal of Franz Kafka’s “The Metamorphosis.” Photographer: Yoshikazu Tsuno/AFP via Getty Images
We’ve been wrong about these robots before.
Soon after modern computers evolved in the 1940s, futurists started predicting that in just a few decades machines would be as smart as humans. Every year, the prediction seems to get pushed back another year. The consensus now is that it’s going to happen in … you guessed it, just a few more decades.
There’s more reason to believe the predictions today. After research that’s produced everything from self-driving cars to Jeopardy!-winning supercomputers, scientists have a much better understanding of what they’re up against. And, perhaps, what we’re up against.
Nick Bostrom, director of the Future of Humanity Institute at Oxford University, lays out the best predictions of the artificial intelligence (AI) research community in his new book, “Superintelligence: Paths, Dangers, Strategies.” Here are the combined results of four surveys of AI researchers, including a poll of the most-cited scientists in the field, totalling 170 respondents.
Human-level machine intelligence is defined here as “one that can carry out most human professions at least as well as a typical human.”
By that definition, maybe we shouldn’t be so surprised about these predictions. Robots and algorithms are already squeezing the edges of our global workforce. Jobs with routine tasks are getting digitized: farmers, telemarketers, stock traders, loan officers, lawyers, journalists — all of these professions have already felt the cold steel nudge of our new automated colleagues.
Replication of routine isn’t the kind of intelligence Bostrom is interested in. He’s talking about an intelligence with intuition and logic, one that can learn, deal with uncertainty and sense the world around it. The most interesting thing about reaching human-level intelligence isn’t the achievement itself, says Bostrom; it’s what comes next. Once machines can reason and improve themselves, the skynet is the limit.
Computers are improving at an exponential rate. In many areas — chess, for example — machine skill is already superhuman. In others — reason, emotional intelligence — there’s still a long way to go. Whether human-level general intelligence is reached in 15 years or 150, it’s likely to be a little-observed mile marker on the road toward superintelligence.
Superintelligence: one that “greatly exceeds the cognitive performance of humans in virtually all domains of interest.”
Inventor and Tesla CEO Elon Musk warns that superintelligent machines are possibly the greatest existential threat to humanity. He says the investments he’s made in artificial-intelligence companies are primarily to keep an eye on where the field is headed.
“Hope we’re not just the biological boot loader for digital superintelligence,” Musk tweeted in August. “Unfortunately, that is increasingly probable.”
There are lots of caveats before we prepare to hand the keys to our earthly kingdom over to robot offspring.

  • First, humans have a terrible track record of predicting the future. 
  • Second, people are notoriously optimistic when forecasting the future of their own industries. 
  • Third, it’s not a given that technology will continue to advance along its current trajectory, or even with its current aims.
Still, the brightest minds devoted to this evolving technology are predicting the end of human intellectual supremacy by midcentury. That should be enough to give everyone pause. The direction of technology may be inevitable, but the care with which we approach it is not.
“Success in creating AI would be the biggest event in human history,” wrote theoretical physicist Stephen Hawking, in an Independent column in May. “It might also be the last.”

The Myth Of AI. A Conversation with Jaron Lanier

ORIGINAL: EDGE

11.14.14

The idea that computers are people has a long and storied history. It goes back to the very origins of computers, and even from before. There’s always been a question about whether a program is something alive or not since it intrinsically has some kind of autonomy at the very least, or it wouldn’t be a program. There has been a domineering subculture—that’s been the most wealthy, prolific, and influential subculture in the technical world—that for a long time has not only promoted the idea that there’s an equivalence between algorithms and life, and certain algorithms and people, but a historical determinism that we’re inevitably making computers that will be smarter and better than us and will take over from us. 



That mythology, in turn, has spurred a reactionary, perpetual spasm from people who are horrified by what they hear. You’ll have a figure say, “The computers will take over the Earth, but that’s a good thing, because people had their chance and now we should give it to the machines.” Then you’ll have other people say, “Oh, that’s horrible, we must stop these computers.” Most recently, some of the most beloved and respected figures in the tech and science world, including Stephen Hawking and Elon Musk, have taken that position of: “Oh my God, these things are an existential threat. They must be stopped.”

In the history of organized religion, it’s often been the case that people have been disempowered precisely to serve what was perceived to be the needs of some deity or another, where in fact what they were doing was supporting an elite class that was the priesthood for that deity. … That looks an awful lot like the new digital economy to me, where you have (natural language) translators and everybody else who contributes to the corpora that allows the data schemes to operate, contributing to the fortunes of whoever runs the computers. You’re saying, “Well, but they’re helping the AI, it’s not us, they’re helping the AI.” It reminds me of somebody saying, “Oh, build these pyramids, it’s in the service of this deity,” and, on the ground, it’s in the service of an elite. It’s an economic effect of the new idea. The new religious idea of AI is a lot like the economic effect of the old idea, religion.

 

[39:47]
JARON LANIER is a Computer Scientist; Musician; Author of Who Owns the Future?
INTRODUCTION
This past weekend, during a trip to San Francisco, Jaron Lanier stopped by to talk to me for an Edge feature. He had something on his mind: news reports about comments by Elon Musk and Stephen Hawking, two of the most highly respected and distinguished members of the science and technology community, on the dangers of AI. (“Elon Musk, Stephen Hawking and fearing the machine” by Alan Wastler, CNBC 6.21.14). He then talked, uninterrupted, for an hour.
As Lanier was about to depart, John Markoff, the Pulitzer Prize-winning technology correspondent for THE NEW YORK TIMES, arrived. Informed of the topic of the previous hour’s conversation, he said, “I have a piece in the paper next week. Read it.” A few days later, his article, “Fearing Bombs That Can Pick Whom to Kill” (11.12.14), appeared on the front page. It’s one of a continuing series of articles by Markoff pointing to the darker side of the digital revolution.
This is hardly new territory. Cambridge cosmologist Martin Rees, the former Astronomer Royal and President of the Royal Society, addressed similar topics in his 2004 book, Our Final Hour: A Scientist’s Warning, as did computer scientist Bill Joy, co-founder of Sun Microsystems, in his highly influential 2000 article in Wired, “Why The Future Doesn’t Need Us: Our most powerful 21st-century technologies — robotics, genetic engineering, and nanotech — are threatening to make humans an endangered species”.
But these topics are back on the table again, and informing the conversation in part is Superintelligence: Paths, Dangers, Strategies, the recently published book by Nick Bostrom, founding director of Oxford University’s Future of Humanity Institute. In his book, Bostrom asks questions such as “what happens when machines surpass humans in general intelligence? Will artificial agents save or destroy us?”
I am encouraging, and hope to publish, a Reality Club conversation, with comments (up to 500 words) on, but not limited to, Lanier’s piece. This is a very broad topic that involves many different scientific fields and I am sure the Edgies will have lots of interesting things to say.
—JB
THE MYTH OF AI (Transcript)
A lot of us were appalled a few years ago when the American Supreme Court decided, out of the blue, to decide a question it hadn’t been asked to decide, and declare that corporations are people. That’s a cover for making it easier for big money to have an influence in politics. But there’s another angle to it, which I don’t think has been considered as much: the tech companies, which are becoming the most profitable, the fastest rising, the richest companies, with the most cash on hand, are essentially people for a different reason than that. They might be people because the Supreme Court said so, but they’re essentially algorithms.



10 IBM Watson-Powered Apps That Are Changing Our World

ORIGINAL: CIO
Nov 6, 2014
By IBM 

 

IBM is investing $1 billion in its IBM Watson Group with the aim of creating an ecosystem of startups and businesses building cognitive computing applications with Watson. Here are 10 examples that are making an impact.
IBM considers Watson to represent a new era of computing — a step forward to cognitive computing, where apps and systems interact with humans via natural language and help us augment our own understanding of the world with big data insights.
Big Blue isn’t playing small ball with that claim. It has opened a new IBM Watson Global Headquarters in the heart of New York City’s Silicon Alley and is investing $1 billion into the Watson Group, focusing on development and research as well as bringing cloud-delivered cognitive applications and services to the market. That includes $100 million available for venture investments to support IBM’s ecosystem of start-ups and businesses building cognitive apps with Watson.
Here are 10 examples of Watson-powered cognitive apps that are already starting to shake things up.
USAA and Watson Help Military Members Transition to Civilian Life
USAA, a financial services firm dedicated to those who serve or have served in the military, has turned to IBM’s Watson Engagement Advisor in a pilot program to help military men and women transition to civilian life.
According to the U.S. Bureau of Labor Statistics, about 155,000 active military members transition to civilian life each year. This process can raise many questions, like “Can I be in the reserve and collect veteran’s compensation benefits?” or “How do I make the most of the Post-9/11 GI Bill?” Watson has analyzed and understands more than 3,000 documents on topics exclusive to military transitions, allowing members to ask it questions and receive answers specific to their needs.

LifeLearn Sofie is an intelligent treatment support tool for veterinarians of all backgrounds and levels of experience. Sofie is powered by IBM Watson™, the world’s leading cognitive computing system. She can understand and process natural language, enabling interactions that are more aligned with how humans think and interact.

Implement Watson
Dive deeper into subjects. Find insights where no one ever thought to look before. From Healthcare to Retail, there’s an IBM Watson Solution that’s right for your enterprise.

Healthcare

Helping doctors identify treatment options
The challenge
According to one expert, only 20 percent of the knowledge physicians use to diagnose and treat patients today is evidence-based, which means that one in five diagnoses is incorrect or incomplete.



fMRI Data Reveals the Number of Parallel Processes Running in the Brain

ORIGINAL: Tech Review
November 5, 2014
The human brain carries out many tasks at the same time, but how many? Now fMRI data has revealed just how parallel gray matter is.
The human brain is often described as a massively parallel computing machine. That raises an interesting question: just how parallel is it?
Today, we get an answer thanks to the work of Harris Georgiou at the National and Kapodistrian University of Athens in Greece, who has counted the number of “CPU cores” at work in the brain as it performs simple tasks in a functional magnetic resonance imaging (fMRI) machine. The answer could help lead to computers that better match the performance of the human brain.
The brain itself consists of around 100 billion neurons that each make up to 10,000 connections with their neighbors. All of this is packed into a structure the size of a party cake and operates at a peak power of only 20 watts, a level of performance that computer scientists observe with unconcealed envy.
fMRI machines reveal this activity by measuring changes in the levels of oxygen in the blood passing through the brain. The thinking is that more active areas use more oxygen so oxygen depletion is a sign of brain activity.
Typically, fMRI machines divide the brain into three-dimensional pixels called voxels, each about five cubic millimeters in size. The complete activity of the brain at any instant can be recorded using a three-dimensional grid of 60 x 60 x 30 voxels. These measurements are repeated every second or so, usually for tasks lasting two or three minutes. The result is a dataset of around 30 million data points.
Georgiou’s work is in determining the number of independent processes at work within this vast data set. “This is not much different than trying to recover the (minimum) number of actual ‘cpu cores’ required to ‘run’ all the active cognitive tasks that are registered in the entire 3-D brain volume,” he says.
This is a difficult task given the size of the dataset. To test his signal processing technique, Georgiou began by creating a synthetic fMRI dataset made up of eight different signals with statistical characteristics similar to those at work in the brain. He then used a standard signal processing technique, called independent component analysis, to work out how many different signals were present, finding that there are indeed eight, as expected.
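That synthetic check is easy to reproduce in spirit. The sketch below, which is an illustration rather than Georgiou’s pipeline, mixes eight independent sources into a couple of hundred simulated voxel time series, counts the components that carry essentially all of the variance, and then recovers the source time courses with ICA.

```python
# Toy version of the synthetic test: hide 8 independent sources inside
# 200 noisy "voxel" time series and see how many components come back.
import numpy as np
from sklearn.decomposition import PCA, FastICA

rng = np.random.default_rng(1)
n_sources, n_voxels, n_timepoints = 8, 200, 180      # ~3 minutes at 1 volume/second

S = rng.laplace(size=(n_timepoints, n_sources))      # independent, non-Gaussian sources
A = rng.normal(size=(n_sources, n_voxels))           # random mixing into voxels
X = S @ A + 0.05 * rng.normal(size=(n_timepoints, n_voxels))

# Count the components needed to explain (almost) all the variance
explained = PCA().fit(X).explained_variance_ratio_
n_est = int((np.cumsum(explained) < 0.99).sum()) + 1
print("estimated number of independent processes:", n_est)   # expect ~8

# Recover the source time courses themselves
sources = FastICA(n_components=n_est, random_state=0).fit_transform(X)
print("recovered sources shape:", sources.shape)              # (180, 8)
```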
Next, he applied the same independent component analysis technique to real fMRI data gathered from human subjects performing two simple tasks. The first was a simple visuo-motor task in which a subject watches a screen and then has to perform a simple task depending on what appears.
In this case, the screen displays either a red or green box on the left or right side. If the box is red, the subject must indicate this with their right index finger, and if the box is green, the subject indicates this with their left index finger. This is easier when the red box appears on the right and the green box appears on the left but is more difficult when the positions are swapped. The data consisted of almost 100 trials carried out on nine healthy adults.
The second task was easier. Subjects were shown a series of images that fall into categories such as faces, houses, chairs, and so on. The task was to spot when the same object appears twice, albeit from a different angle or under different lighting conditions. This is a classic visual recognition task.
The results make for interesting reading. Although the analysis is complex, the outcome is simple to state. Georgiou says that independent component analysis reveals that about 50 independent processes are at work in human brains performing the complex visuo-motor tasks of indicating the presence of green and red boxes. However, the brain uses fewer processes when carrying out simple tasks, like visual recognition.
That’s a fascinating result that has important implications for the way computer scientists should design chips intended to mimic human performance. It implies that parallelism in the brain does not occur on the level of individual neurons but on a much higher structural and functional level, and that there are about 50 of these.
Georgiou points out that a typical voxel corresponds to roughly three million neurons, each with several thousand connections with its neighbors. However, the current state-of-the-art neuromorphic chips contain a million artificial neurons each with only 256 connections. What is clear from this work is that the parallelism that Georgiou has measured occurs on a much larger scale than this.
“This means that, in theory, an artificial equivalent of a brain-like cognitive structure may not require a massively parallel architecture at the level of single neurons, but rather a properly designed set of limited processes that run in parallel on a much lower scale,” he concludes.
Anybody thinking of designing brain-like chips might find this a useful tip.
Ref: arxiv.org/abs/1410.7100 Estimating The Intrinsic Dimension In fMRI Space Via Dataset Fractal Analysis

The Next Big Programming Language You’ve Never Heard Of

ORIGINAL: Wired
07.07.14
 Getty
Andrei Alexandrescu didn’t stand much of a chance. And neither did Walter Bright.
When the two men met for beers at a Seattle bar in 2005, each was in the midst of building a new programming language, trying to remake the way the world creates and runs its computer software. That’s something pretty close to a hopeless task, as Bright knew all too well. “Most languages never go anywhere,” he told Alexandrescu that night. “Your language may have interesting ideas. But it’s never going to succeed.”
Alexandrescu, a graduate student at the time, could’ve said the same thing to Bright, an engineer who had left the venerable software maker Symantec a few years earlier. People are constantly creating new programming languages, but because the software world is already saturated with so many of them, the new ones rarely get used by more than a handful of coders—especially if they’re built by an ex-Symantec engineer without the backing of a big-name outfit. But Bright’s new language, known as D, was much further along than the one Alexandrescu was working on, dubbed Enki, and Bright said they’d both be better off if Alexandrescu dumped Enki and rolled his ideas into D. Alexandrescu didn’t much like D, but he agreed. “I think it was the beer,” he now says.
Andrei Alexandrescu.
Photo: Ariel Zambelich/WIRED
The result is a programming language that just might defy the odds. Nine years after that night in Seattle, a $200-million startup has used D to build its entire online operation, and thanks to Alexandrescu, one of the biggest names on the internet is now exploring the new language as well. Today, Alexandrescu is a research scientist at Facebook, where he and a team of coders are using D to refashion small parts of the company’s massive operation. Bright, too, has collaborated with Facebook on this experimental software, as an outside contractor. The tech giant isn’t an official sponsor of the language—something Alexandrescu is quick to tell you—but Facebook believes in D enough to keep him working on it full-time, and the company is at least considering the possibility of using D in lieu of C++, the venerable language that drives the systems at the heart of so many leading web services.
C++ is an extremely fast language—meaning software built with it runs at high speed—and it provides great control over your code. But it’s not as easy to use as languages like Python, Ruby, and PHP. In other words, it doesn’t let coders build software as quickly. D seeks to bridge that gap, offering the performance of C++ while making things more convenient for programmers.

Among the giants of tech, this is an increasingly common goal. Google’s Go programming language aims for a similar balance of power and simplicity, as does the Swift language that Apple recently unveiled. In the past, the programming world was split in two:

  • the fast languages and 
  • the simpler modern languages

But now, these two worlds are coming together. “D is similar to C++, but better,” says Brad Anderson, a longtime C++ programmer from Utah who has been using D as well. “It’s high performance, but it’s expressive. You can get a lot done without very much code.”

In fact, Facebook is working to bridge this gap with not one but two languages. As it tinkers with D, the company has already revamped much of its online empire with a new language called Hack, which, in its own way, combines speed with simplicity. While using Hack to build the front-end of its service—the webpages you see when you open the service in your web browser—Facebook is experimenting with D on the back-end, the systems that serve as the engine of its social network.
But Alexandrescu will also tell you that programmers can use D to build anything, including the front-end of a web service. The language is so simple, he says, you can even use it for quick-and-dirty programming scripts. “You want to write a 50-line script? Sure, go for it.” This is what Bright strove for—a language suitable for all situations. Today, he says, people so often build their online services with multiple languages—a simpler language for the front and a more powerful language for the back. The goal should be a single language that does it all. “Having a single language suitable for both the front and the back would be a lot more productive for programmers,” Bright says. “D aims to be that language.”
The Cape of a Superhero
When Alexandrescu discusses his years of work on D, he talks about wearing the “cape of a superhero”—being part of a swashbuckling effort to make the software world better. That’s not said with arrogance. Alexandrescu, whose conversations reveal a wonderfully self-deprecating sense of humor, will also tell you he “wasn’t a very good” programming language researcher at the University of Washington—so bad he switched his graduate studies to machine learning. The superhero bit is just a product of his rather contagious enthusiasm for the D project.
For years, he worked on the language only on the side. “It was sort of a free-time activity, in however much free-time a person in grad school can have, which is like negative,” says Alexandrescu, a Romanian who immigrated to the States in the late ’90s. Bright says the two of them would meet in coffee shops across Seattle to argue the ins and outs of the language. The collaboration was fruitful, he explains, because they were so different. Alexandrescu was an academic, and Bright was an engineer. “We came at the same problems from opposite directions. That’s what made the language great–the yin and the yang of these two different viewpoints of how the language should be put together.”
For Alexandrescu, D is unique. It’s not just that it combines speed and simplicity. It also has what he calls “modeling power.” It lets coders more easily create models of stuff we deal with in the real world, including everything from bank accounts and stock exchanges to automotive sensors and spark plugs. D, he says, doesn’t espouse a particular approach to modeling. It allows the programmer “to mix and match a variety of techniques to best fit the problem.”
He ended up writing the book on D. But when he joined Facebook in 2009, it remained a side project. His primary research involved machine learning. Then, somewhere along the way, the company agreed to put him on the language full-time. “It was better,” he says, “to do the caped-superhero-at-night thing during the daytime.”
 
For Facebook, this is still a research project. But the company has hosted the past two D conferences—most recently in May—and together with various Facebook colleagues, Alexandrescu has used D to rebuild two select pieces of Facebook software. They rebuilt the Facebook “linter,” known as Flint, a means of identifying errors in other Facebook software, and they fashioned a new Facebook “preprocessor,” dubbed Warp, which helps generate the company’s core code.
In both cases, D replaced C++. That, at least for the moment, is where the language shines the most. When Bright first started the language, he called it Mars, but the community that sprung up around the language called it D, because they saw it as the successor to C++. “D became the nickname,” Bright says. “And the nickname stuck.”

The Interpreted Language That Isn’t

Facebook is the most high-profile D user. But it’s not alone. Sociomantic—a German online advertising outfit recently purchased by British grocery giant Tesco for a reported $200 million—has built its operation in D. About 10,000 people download the D platform each month. “I’m assuming it’s not the same 10,000 every month,” Alexandrescu quips. And judging from D activity on various online developer services—from GitHub to Stack Overflow—the language is now among the 20 to 30 most popular in the world.
For coder Brad Anderson, the main appeal is that D feels like interpreted languages such as Ruby and PHP. “It results in code that’s more compact,” he says. “You’re not writing boilerplate as much. You’re not writing as much stuff you’re obligated to write in other languages.” It’s less “verbose” than C++ and Java.
Yes, like C++ and Java, D is a compiled language, meaning that you must take time to transform it into executable software before running it. Unlike with interpreted languages, you can’t run your code as soon as you write it. But it compiles unusually quickly. Bright—who worked on C++, Java, and JavaScript compilers at Symantec and Sun Microsystems—says this was a primary goal. “When your compiler runs fast,” he says, “it transforms the way you write code. It lets you see the results much faster.” For Anderson, this is another reason that D feels more like an interpreted language. “It’s usually very, very fast to compile–fast enough that the edit [and] run cycle usually feels just like an interpreted language.” He adds, however, that this begins to change if your program gets very large.
What’s more, Anderson explains, a D program has this unusual ability to generate additional D code and weave this into itself at compile time. That may sound odd, but the end result is a program more finely tuned to the task at hand. Essentially, a program can optimize itself as it compiles. “It makes for some amazing code generation capabilities,” Anderson says.
The trouble with the language, according to Alexandrescu, is that it still needs a big-name backer. “Corporate support would be vital right now,” he says. This shows you that Facebook’s involvement only goes so far, and it provides some insight into why new languages have such trouble succeeding. In addition to backing Hack, Facebook employs some of the world’s leading experts in Haskell, another powerful but relatively underused language. What D needs, Alexandrescu says, is someone willing to pump big money into promoting it. The Java programming language succeeded, he says, because Sun Microsystems put so much money behind it back in the ’90s.
Certainly, D still faces a long road to success. But this new language has already come further than most.

DARPA funds $11 million tool that will make coding a lot easier

ORIGINAL: Engadget
November 9, 2014
DARPA is funding a new project by Rice University called PLINY, and it’s neither a killer robot nor a high-tech weapon. PLINY, named after Pliny the Elder, who wrote one of the earliest encyclopedias ever, will actually be a tool that can automatically complete a programmer’s draft — and yes, it will work somewhat like the autocomplete on your smartphones. Its developers describe it as a repository of terabytes upon terabytes of all the open-source code they’ll find, which people will be able to query in order to easily create complex software or quickly finish a simple one. Rice University assistant professor Swarat Chaudhuri says he and his co-developers “envision a system where the programmer writes a few lines of code, hits a button and the rest of the code appears.” Also, the parts PLINY conjures up “should work seamlessly with the code that’s already been written.”
In the video below, Chaudhuri used a sheet of paper with a hole in the middle to represent a programmer’s incomplete work. If he uses PLINY to fill that hole, the tool will look through the billions of lines of code in its collection to find possible solutions (represented by different shapes in the video). Once it finds the nearest fit, the tool will clip any unnecessary parts, polish the code further to come up with the best solution it can, and make sure the final product has no security flaws. More than a dozen Rice University researchers will be working on PLINY for the next four years, fueled by the $11 million funding from the Pentagon’s mad science division.
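PLINY’s internals haven’t been published, but the “nearest fit” step can be illustrated with a toy retrieval pass: index a corpus of snippets and pull back the one closest to the programmer’s partial draft. The corpus, query and TF-IDF similarity measure below are invented for illustration and are far simpler than anything a real code-synthesis engine would use.

```python
# Toy "find the nearest fit" retrieval, not PLINY itself: everything here,
# including the snippet corpus and the draft, is made up for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpus = [
    "def read_file(path):\n    with open(path) as f:\n        return f.read()",
    "def write_file(path, data):\n    with open(path, 'w') as f:\n        f.write(data)",
    "def fetch_url(url):\n    import urllib.request\n    return urllib.request.urlopen(url).read()",
]

draft = "def read_file(path):\n    # TODO: open the file and return its contents"

vec = TfidfVectorizer(token_pattern=r"\w+")
matrix = vec.fit_transform(corpus + [draft])          # last row is the draft
scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()

best = corpus[scores.argmax()]                        # nearest-fit snippet for the "hole"
print(f"best match (similarity {scores.max():.2f}):\n{best}")
```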
[Image credit: Shutterstock / Yellowj]

Amazon Echo: An Intelligent Speaker That Listens to Your Commands

ORIGINAL: Gizmodo

By Mario Aguilar


Amazon Echo is a speaker that has a voice assistant built in. If you ask it a question, it’s got an answer. If you tell it to do stuff, it complies. Well, this is different.

Echo is an always-on speaker that you plop into a corner of your house, turning it into the futuristic home we’ve been dreaming about. It’s like Jarvis, or the assistant computer from Her.

When you say the wake word “Alexa,” it starts listening and you can ask it for information or to perform any of a number of tasks. For example, you can ask it for the weather, to play a particular style of music, or to add something to your calendar.

Of course, voice assistants aren’t an entirely new concept, but building the technology into a home appliance rather than into a smartphone makes a lot of sense and gives the technology a more conversational and natural feel. To that end, it’s got what Amazon calls “far-field recognition” that allows you to talk to it from across the room. It eliminates the clumsiness of assistants like Siri and Google Now that you have to be right on top of.

Besides being an assistant, Echo is also a little Bluetooth speaker with 360-degree sound. It stands 9 inches tall and has a 2-inch tweeter and a 2.5-inch woofer.

If you’re not near the speaker, you can also access it using an app for Android and Fire OS as well as through web browsers on iOS.

Right now, Echo is available by invitation only. It costs $200 for regular people and $100 for people who have an Amazon Prime account. [Amazon]


Robotic Micro-Scallops Can Swim Through Your Eyeballs

ORIGINAL: IEEE Spectrum
By Evan Ackerman
Posted 4 Nov 2014

Image: Alejandro Posada/MPI-IS
An engineered scallop that is only a fraction of a millimeter in size and that is capable of swimming in biomedically relevant fluids has been developed by researchers at the Max Planck Institute for Intelligent Systems in Stuttgart.
Designing robots on the micro or nano scale (like, small enough to fit inside your body) is all about simplicity. There just isn’t room for complex motors or actuation systems. There’s barely room for any electronics whatsoever, not to mention batteries, which is why robots that can swim inside your bloodstream or zip around your eyeballs are often driven by magnetic fields. However, magnetic fields drag around anything and everything that happens to be magnetic, so in general, they’re best for controlling just a single microrobot at a time. Ideally, you’d want robots that can swim all by themselves, and a robotic micro-scallop, announced today in Nature Communications, could be the answer.
When we’re thinking about how robotic microswimmers move, the place to start is with understanding how fluids (specifically, biological fluids) work at very small scales. Blood doesn’t behave like water does, in that blood is what’s called a non-Newtonian fluid. All that this means is that blood behaves differently (it changes viscosity, becoming thicker or thinner) depending on how much force you’re exerting on it. The classic example of a non-Newtonian fluid is oobleck, which you can make yourself by mixing one part water with two parts corn starch. Oobleck acts like a liquid until you exert a bunch of force on it (say, by rapidly trying to push your hand into it), at which point its viscosity increases to the point where it’s nearly solid.
These non-Newtonian fluids represent most of the liquid stuff that you have going on in your body (blood, joint fluid, eyeball goo, etc), which, while it sounds like it would be more complicated to swim through, is actually an opportunity for robots. Here’s why:
At very small scales, robotic actuators tend to be simplistic and reciprocal. That is, they move back and forth, as opposed to around and around, like you’d see with a traditional motor. In water (or another Newtonian fluid), it’s hard to make a simple swimming robot out of reciprocal motions, because the back and forth motion exerts the same amount of force in both directions, and the robot just moves forward a little, and backward a little, over and over. Biological microorganisms generally do not use reciprocal motions to get around in fluids for this exact reason, instead relying on nonreciprocal motions of flagella and cilia.
However, if we’re dealing with a non-Newtonian fluid, this rule (it’s actually a theorem called the Scallop theorem) doesn’t apply anymore, meaning that it should be possible to use reciprocal movements to get around. A team of researchers led by Prof. Peer Fischer at the Max Planck Institute for Intelligent Systems, in Germany, have figured out how, and appropriately enough, it’s a microscopic robot that’s based on the scallop:
As we discussed above, these robots are true swimmers. This particular version is powered by an external magnetic field, but it’s just providing energy input, not dragging the robot around directly as other microbots do. And there are plenty of kinds of micro-scale reciprocal actuators that could be used, like piezoelectrics, bimetal strips, shape memory alloys, or heat or light-actuated polymers. There’s lots of design optimizations that can be made as well, like making the micro-scallop more streamlined or “optimizing its surface morphology,” whatever that means.
The researchers say that the micro-scallop is more of a “general scheme” for micro-robots rather than a specific micro-robot that’s intended to do anything in particular. It’ll be interesting to see how this design evolves, hopefully to something that you can inject into yourself to fix everything that could ever be wrong with you. Ever.

Google CEO: Computers Are Going To Take Our Jobs, And ‘There’s No Way Around That’

ORIGINAL: BusinessInsider
OCT. 31, 2014
Google CEO Larry Page. Photo: Google+/Larry Page
When Google co-founders Larry Page and Sergey Brin formed the company in 1998, they sought to package all the information on the internet into an index that’s simple to use.
Today, Google is much more than a search engine. The company appears to be involved in every type of new technology ranging from self-driving cars to contact lenses that can test for disease.
In a recent interview with the Financial Times, CEO Larry Page provided some insight as to why the company has decided to take on so many different tasks.
Part of the reason is because Page believes there’s this inevitable shift coming in which computers will be much better-suited to take on most jobs.
“You can’t wish away these things from happening, they are going to happen,” he told the Financial Times on the subject of artificial intelligence infringing on the job market. “You’re going to have some very amazing capabilities in the economy. When we have computers that can do more and more jobs, it’s going to change how we think about work. There’s no way around that. You can’t wish it away.”
But people shouldn’t fear computers taking over their occupations, according to Page, who says it “doesn’t make sense” for people to work so much.
“The idea that everyone should slavishly work so they do something inefficiently so they keep their job — that just doesn’t make any sense to me,” he told the Financial Times. “That can’t be the right answer.”
Based on Page’s quotes in the Financial Times, it sounds as if he feels like Google has an obligation to invest in forward-thinking technologies.
“…We have all these billions we should be investing to make people’s lives better,” Page said to the Financial Times. “If we just do the same thing we did before and don’t do something new, it seems like a crime to me.”

Almost human: Xerox brings higher level of AI to its virtual customer support agents

 Above: A brain depicting Xerox WDS Virtual Agent
Image Credit: Xerox

The WDS Virtual Agent taps into intelligence gleaned from terabytes of data that the company keeps about real customer interactions. Armed with this info, the virtual agent can more reliably solve problems itself, as it learns through experience. The more customer care data it is exposed to, the more effective it becomes in delivering relevant responses to real customer questions.

Of course, AI proponents have been saying this for decades, so the proof will be in how well it works.

It may be a long time before we get virtual AI companions like in the movie Her, where actor Joaquin Phoenix’s character falls in love with a Siri-like AI. But virtual assistants are becoming popular because, Xerox says, they cost about a fiftieth of what a human being costs.

Xerox has applied its research from its PARC (formerly Palo Alto Research Center) and Xerox Research Centre Europe in AI, machine learning, and natural language processing. The AI can understand, diagnose, and solve customer problems — without being specifically programmed to give rote responses. It analyzes and learns from human agents.

“Because many first-generation virtual agents rely on basic keyword searches, they aren’t able to understand the context of a customer’s question like a human agent can,” said WDS’ Nick Gyles, chief technology officer, in a statement. “The WDS Virtual Agent has the confidence to solve problems itself because it learns just like we do, through experience. The more care data it’s exposed to, the more effective it becomes in delivering relevant and proven responses.”

Xerox captures data like customer sentiment, described symptoms, problem types, root causes and the techniques agents use to resolve customer problems. The data has been there for a while; it just needs AI that is smart enough to absorb it all.

“We’ve found a way for organizations to unlock that data potential to deliver benefit across their wider care channels,” Gyles said. “No other virtual agent technology is able to deliver this consistency and connect intelligence from multiple sources to ensure that the digital experience is as reliable and authentic as a human one.”

Xerox is delivering the WDS Virtual Agent as a cloud-based solution. It will be available in the fourth quarter.

“Our technology helps overcome one of the key barriers brands face in trying to deliver a truly omni-channel care experience: the ability to be consistent. Digital care tools often lag behind the intelligence that resides in the contact center, with outdated content or no awareness of new problems. Our research in artificial intelligence is changing this,” said Jean-Michel Renders, senior scientist at XRCE, in a statement. “With our machine learning technology, the WDS Virtual Agent has the ability to learn how to solve new problems as they arise across a company’s wider care channels.”


Machine-Learning Maestro Michael Jordan on the Delusions of Big Data and Other Huge Engineering Efforts

ORIGINAL: IEEE Spectrum
By Lee Gomes
20 Oct 2014
Big-data boondoggles and brain-inspired chips are just two of the things we’re really getting wrong
Photo-Illustration: Randi Klett

The overeager adoption of big data is likely to result in catastrophes of analysis comparable to a national epidemic of collapsing bridges. Hardware designers creating chips based on the human brain are engaged in a faith-based undertaking likely to prove a fool’s errand. 

Despite recent claims to the contrary, we are no further along with computer vision than we were with physics when Isaac Newton sat under his apple tree.

Those may sound like the Luddite ravings of a crackpot who breached security at an IEEE conference. In fact, the opinions belong to IEEE Fellow Michael I. Jordan, Pehong Chen Distinguished Professor at the University of California, Berkeley. Jordan is one of the world’s most respected authorities on machine learning and an astute observer of the field. His CV would require its own massive database, and his standing in the field is such that he was chosen to write the introduction to the 2013 National Research Council report “Frontiers in Massive Data Analysis.” San Francisco writer Lee Gomes interviewed him for IEEE Spectrum on 3 October 2014.
Michael Jordan on…

 

1- Why We Should Stop Using Brain Metaphors When We Talk About Computing



Artificial Intelligence Planning Course at Coursera by the University of Edinburgh

ORIGINAL: Coursera



About the Course

The course aims to provide a foundation in artificial intelligence techniques for planning, with an overview of the wide spectrum of different problems and approaches, including their underlying theory and their applications. It will allow you to:

  • Understand different planning problems
  • Have the basic know how to design and implement AI planning systems
  • Know how to use AI planning technology for projects in different application domains
  • Have the ability to make use of AI planning literature

Planning is a fundamental part of intelligent systems. In this course, for example, you will learn the basic algorithms that are used in robots to deliberate over a course of actions to take. Simpler, reactive robots don’t need this, but if a robot is to act intelligently, this type of reasoning about actions is vital.
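To give a flavour of the kind of algorithm the course covers, here is a minimal sketch of forward state-space search over STRIPS-style actions in Python. The two-location robot domain is invented for illustration and is far simpler than the formalisms taught in the course.

```python
# Minimal STRIPS-style forward search (breadth-first). States are sets of
# facts; actions have preconditions, add lists and delete lists. The domain
# below is a made-up example, not course material.
from collections import deque

ACTIONS = [
    {"name": "move(A,B)", "pre": {"at(A)"}, "add": {"at(B)"}, "del": {"at(A)"}},
    {"name": "move(B,A)", "pre": {"at(B)"}, "add": {"at(A)"}, "del": {"at(B)"}},
    {"name": "pick(box,B)", "pre": {"at(B)", "box(B)"}, "add": {"holding(box)"}, "del": {"box(B)"}},
]

def plan(init, goal, actions):
    """Return a list of action names reaching `goal` from `init`, or None."""
    frontier = deque([(frozenset(init), [])])
    seen = {frozenset(init)}
    while frontier:
        state, path = frontier.popleft()
        if goal <= state:                                   # goal facts all hold
            return path
        for a in actions:
            if a["pre"] <= state:                           # action is applicable
                nxt = frozenset((state - a["del"]) | a["add"])
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, path + [a["name"]]))
    return None

print(plan({"at(A)", "box(B)"}, {"holding(box)"}, ACTIONS))
# -> ['move(A,B)', 'pick(box,B)']
```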

Course Syllabus

Week 1: Introduction and Planning in Context
Week 2: State-Space Search: Heuristic Search and STRIPS
Week 3: Plan-Space Search and HTN Planning
One week catch up break
Week 4: Graphplan and Advanced Heuristics

Week 5: Plan Execution and Applications

Exam week

Recommended Background

The MOOC is based on a Masters level course at the University of Edinburgh but is designed to be accessible at several levels of engagement from an “Awareness Level”, through the core “Foundation Level” requiring a basic knowledge of logic and mathematical reasoning, to a more involved “Performance Level” requiring programming and other assignments.

Suggested Readings

The course follows a text book, but this is not required for the course:
Automated Planning: Theory & Practice (The Morgan Kaufmann Series in Artificial Intelligence) by M. Ghallab, D. Nau, and P. Traverso (Elsevier, ISBN 1-55860-856-7) 2004.

Course Format

Five weeks of study comprising 10 hours of video lecture material and special features videos. Quizzes and assessments throughout the course will assist in learning. Some weeks will involve recommended readings. Discussion on the course forum and via other social media will be encouraged. A mid-course catch up break week and a final week for exams and completion of assignments allows for flexibility in study.

You can engage with the course at a number of levels to suit your interests and the time you have available:

  • Awareness Level – gives an overview of the topic, along with introductory videos and application related features. This level is likely to require 2-3 hours of study per week.
  • Foundation Level – is the core taught material on the course and gives a grounding in AI planning technology and algorithms. This level is likely to require 5-6 hours of study per week.
  • Performance Level – is for those interested in carrying out additional programming assignments and engaging in creative challenges to understand the subject more deeply. This level is likely to require 8 hours or more of study per week.

FAQ

  • Will I get a certificate after completing this class? Students who complete the class will be offered a Statement of Accomplishment signed by the instructors.
  • Do I earn University of Edinburgh credits upon completion of this class? The Statement of Accomplishment is not part of a formal qualification from the University. However, it may be useful to demonstrate prior learning and interest in your subject to a higher education institution or potential employer.
  • What resources will I need for this class? Nothing is required, but if you want to try out implementing some of the algorithms described in the lectures you’ll need access to a programming environment. No specific programming language is required. Also, you may want to download existing planners and try those out. This may require you to compile them first.
  • Can I contact the course lecturers directly? You will appreciate that such direct contact would be difficult to manage. You are encouraged to use the course social network and discussion forum to raise questions and seek inputs. The tutors will participate in the forums, and will seek to answer frequently asked questions, in some cases by adding to the course FAQ area.
  • What Twitter hash tag should I use? Use the hash tag #aiplan for tweets about the course.
  • How come this is free? We are passionate about open on-line collaboration and education. Our taught AI planning course at Edinburgh has always published its course materials, readings and resources on-line for anyone to view. Our own on-campus students can access these materials at times when the course is not available if it is relevant to their interests and projects. We want to make the materials available in a more accessible form that can reach a broader audience who might be interested in AI planning technology. This achieves our primary objective of getting such technology into productive use. Another benefit for us is that more people get to know about courses in AI in the School of Informatics at the University of Edinburgh, or get interested in studying or collaborating with us.
  • When will the course run again? It is likely that the 2015 session will be the final time this course runs as a Coursera MOOC, but we intend to leave the course wiki open for further study and use across course instances.

How IBM Got Brainlike Efficiency From the TrueNorth Chip

ORIGINAL: IEEE Spectrum
By Jeremy Hsu
Posted 29 Sep 2014 | 19:01 GMT


TrueNorth takes a big step toward using the brain’s architecture to reduce computing’s power consumption

Photo: IBM

Neuromorphic computer chips meant to mimic the neural network architecture of biological brains have generally fallen short of their wetware counterparts in efficiency—a crucial factor that has limited practical applications for such chips. That could be changing. At a power density of just 20 milliwatts per square centimeter, IBM’s new brain-inspired chip comes tantalizingly close to such wetware efficiency. The hope is that it could bring brainlike intelligence to the sensors of smartphones, smart cars, and—if IBM has its way—everything else.

The latest IBM neurosynaptic computer chip, called TrueNorth, consists of 1 million programmable neurons and 256 million programmable synapses conveying signals between the digital neurons. Each of the chip’s 4,096 neurosynaptic cores includes the entire computing package:

  • memory, 
  • computation, and 
  • communication. 

Such architecture helps to bypass the bottleneck in traditional von Neumann computing, where program instructions and operation data cannot pass through the same route simultaneously.
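As a very loose illustration of that idea, the toy class below keeps a “core’s” synapse matrix and neuron state together and updates them only when input spikes arrive, which is the co-location of memory and computation the architecture is built around. The sizes, thresholds and dynamics are invented; TrueNorth’s cores are 256-by-256 with far richer, event-driven hardware behaviour.

```python
# Toy neurosynaptic "core", not IBM's design: synapse memory and neuron state
# live side by side, and each tick integrates incoming spikes, leaks, fires
# and resets. All parameters here are invented for illustration.
import numpy as np

class ToyCore:
    def __init__(self, n_axons=8, n_neurons=8, threshold=2.0, leak=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.weights = rng.choice([0.0, 1.0], size=(n_axons, n_neurons))  # local synapse memory
        self.potential = np.zeros(n_neurons)                              # local neuron state
        self.threshold, self.leak = threshold, leak

    def tick(self, spikes_in):
        """One time step: integrate input spikes, apply leak, fire, reset."""
        self.potential += spikes_in @ self.weights        # compute right next to the memory
        self.potential = np.maximum(self.potential - self.leak, 0.0)
        fired = self.potential >= self.threshold
        self.potential[fired] = 0.0                       # reset the neurons that spiked
        return fired.astype(int)                          # spikes to route to other cores

core = ToyCore()
spikes = np.array([1, 0, 1, 0, 1, 0, 0, 1], dtype=float)
for step in range(3):
    print("step", step, "output spikes:", core.tick(spikes))
```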
“This is literally a supercomputer the size of a postage stamp, light like a feather, and low power like a hearing aid,” says Dharmendra Modha, IBM fellow and chief scientist for brain-inspired computing at IBM Research-Almaden, in San Jose, Calif.

Such chips can emulate the human brain’s ability to recognize different objects in real time; TrueNorth showed it could distinguish among pedestrians, bicyclists, cars, and trucks. IBM envisions its new chips working together with traditional computing devices as hybrid machines, providing a dose of brainlike intelligence. The chip’s architecture, developed together by IBM and Cornell University, was first detailed in August in the journal Science.


Meet Amelia: the computer that’s after your job

29 Sep 2014
A new artificially intelligent computer system called ‘Amelia’ – one that can read and understand text, follow processes, solve problems and learn from experience – could replace humans in a wide range of low-level jobs.

Amelia aims to answer the question, can machines think? Photo: IPsoft

In February 2011 an artificially intelligent computer system called IBM Watson astonished audiences worldwide by beating the two all-time greatest Jeopardy champions at their own game.

Thanks to its ability to apply

  • advanced natural language processing,
  • information retrieval,
  • knowledge representation,
  • automated reasoning, and
  • machine learning technologies,

Watson consistently outperformed its human opponents on the American quiz show Jeopardy.

Watson represented an important milestone in the development of artificial intelligence, but the field has been progressing rapidly – particularly with regard to natural language processing and machine learning.

In 2012, Google used 16,000 computer processors to build a simulated brain that could correctly identify cats in YouTube videos; the Kinect, which provides a 3D body-motion interface for Microsoft’s Xbox, uses algorithms that emerged from artificial intelligence research, as does the iPhone’s Siri virtual personal assistant.

Today a new artificial intelligence computing system has been unveiled, which promises to transform the global workforce. Named ‘Amelia‘ after American aviator and pioneer Amelia Earhart, the system is able to shoulder the burden of often tedious and laborious tasks, allowing human co-workers to take on more creative roles.

“Watson is perhaps the best data analytics engine that exists on the planet; it is the best search engine that exists on the planet; but IBM did not set out to create a cognitive agent. It wanted to build a program that would win Jeopardy, and it did that,” said Chetan Dube, chief executive officer of IPsoft, the company behind Amelia.

“Amelia, on the other hand, started out not with the intention of winning Jeopardy, but with the pure intention of answering the question posed by Alan Turing in 1950 – can machines think?”


Amelia learns by following the same written instructions as her human colleagues, but is able to absorb information in a matter of seconds.
She understands the full meaning of what she reads rather than simply recognising individual words. This involves

  • understanding context,
  • applying logic and
  • inferring implications.

When exposed to the same information as any new employee in a company, Amelia can quickly apply her knowledge to solve queries in a wide range of business processes. Just like any smart worker she learns from her colleagues and, by observing their work, she continually builds her knowledge.

While most ‘smart machines’ require humans to adapt their behaviour in order to interact with them, Amelia is intelligent enough to interact like a human herself. She speaks more than 20 languages, and her core knowledge of a process needs only to be learned once for her to be able to communicate with customers in their language.

Independently, rather than through time-intensive programming, Amelia creates her own ‘process map’ of the information she is given so that she can work out for herself what actions to take depending on the problem she is solving.
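
IPsoft has not said publicly how such a ‘process map’ is represented, but the idea can be pictured as a mapping from observed situations to learned sequences of actions. The short Python sketch below is purely illustrative, with invented trigger phrases and steps; it is not Amelia’s implementation.

# Hypothetical "process map": a mapping from observed conditions to learned action sequences.
# Illustrative only; IPsoft has not published Amelia's internal representation.
process_map = {
    "password reset request": ["verify identity", "send reset link", "confirm resolution"],
    "vpn not connecting": ["check credentials", "restart vpn client", "escalate to network team"],
}

def handle(query, process_map):
    """Pick the learned procedure whose trigger phrase appears in the query."""
    for trigger, steps in process_map.items():
        if trigger in query.lower():
            return steps
    return ["ask a clarifying question"]  # fall back when no known process matches

def learn(trigger, observed_steps, process_map):
    """Add a new trigger -> steps entry after watching a human colleague resolve a query."""
    process_map[trigger.lower()] = observed_steps

learn("printer offline", ["check power cable", "reinstall driver"], process_map)
print(handle("I need help with a password reset request", process_map))
print(handle("The printer offline error is back", process_map))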

“Intelligence is the ability to acquire and apply knowledge. If a system claims to be intelligent, it must be able to read and understand documents, and answer questions on the basis of that. It must be able to understand processes that it observes. It must be able to solve problems based on the knowledge it has acquired. And when it cannot solve a problem, it must be capable of learning the solution through noticing how a human did it,” said Dube.

IPsoft has been working on this technology for 15 years with the aim of developing a platform that does not simply mimic human thought processes but can comprehend the underlying meaning of what is communicated – just like a human.

Just as machines transformed agriculture and manufacturing, IPsoft believes that cognitive technologies will drive the next evolution of the global workforce, so that in the future companies will have digital workforces that comprise a mixture of human and virtual employees.

Amelia has already been trialled within a number of Fortune 1000 companies, in areas such as manning technology help desks, procurement processing, financial trading operations support and providing expert advice for field engineers.

In each of these environments, she has learnt not only from reading existing manuals and situational context but also by observing and working with her human colleagues and discerning for herself a map of the business processes being followed.

In a help desk situation, for example, Amelia can understand what a caller is looking for, ask questions to clarify the issue, find and access the required information and determine which steps to follow in order to solve the problem.

As a knowledge management advisor, she can help engineers working in remote locations who are unable to carry detailed manuals, by diagnosing the cause of failed machinery and guiding them towards the best steps to rectifying the problem.

During these trials, Amelia was able to go from solving very few queries independently to 42 per cent of the most common queries within one month. By the second month she could answer 64 per cent of those queries independently.

“That’s a true learning cognitive agent. Learning is the key to the kingdom, because humans learn from experience. A child may need to be told five times before they learn something, but Amelia needs to be told only once,” said Dube.

“Amelia is that Mensa kid, who personifies a major breakthrough in cognitive technologies.”

Analysts at Gartner predict that, by 2017, managed services offerings that make use of autonomics and cognitive platforms like Amelia will drive a 60 per cent reduction in the cost of services, enabling organisations to apply human talent to higher level tasks requiring creativity, curiosity and innovation.

IPsoft even has plans to start embedding Amelia into humanoid robots such as Softbank’s Pepper, Honda’s Asimo or Rethink Robotics’ Baxter, allowing her to take advantage of their mechanical functions.

“The robots have got a fair degree of sophistication in all the mechanical functions – the ability to climb up stairs, the ability to run, the ability to play ping pong. What they don’t have is the brain, and we’ll be supplementing that brain part with Amelia,” said Dube.

“I am convinced that in the next decade you’ll pass someone in the corridor and not be able to discern if it’s a human or an android.”

Given the premise of IPsoft’s artificial intelligence system, it seems logical that the ultimate measure of Amelia’s success would be passing the Turing Test – which sets out to see whether humans can discern whether they are interacting with a human or a machine.

Earlier this year, a chatbot named Eugene Goostman was reported to have become the first machine to pass the Turing Test by convincingly imitating a 13-year-old boy. In a five-minute keyboard conversation with a panel of human judges, Eugene managed to convince 33 per cent that it was human.

Interestingly, however, IPsoft believes that the Turing Test needs reframing, to redefine what it means to ‘think’. While Eugene was able to imitate natural language, he was only mimicking understanding. He did not learn from the interaction, nor did he demonstrate problem solving skills.

“Natural language understanding is a big step up from parsing. Parsing is syntactic, understanding is semantic, and there’s a big cavern between the two,” said Dube.
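
A minimal way to see the gap Dube is pointing at: two requests can parse into the same verb-and-object structure, yet only a separate, knowledge-backed step turns that structure into something actionable. The toy Python below illustrates the distinction with made-up intents and utterances; it says nothing about how Amelia itself works.

# Toy contrast between parsing (syntax) and understanding (semantics).
# All names, intents and utterances are invented; this is not how Amelia works.
def parse(utterance):
    """Syntactic step: split into tokens and read off a naive verb/object structure."""
    tokens = utterance.lower().strip("?.!").split()
    return {"verb": tokens[0], "object": " ".join(tokens[1:])}

# Semantic step: map the parsed structure onto an actionable intent, which needs
# domain knowledge that the parse alone does not contain.
INTENTS = {
    "reset": "trigger_password_reset",
    "book": "open_travel_booking",
}

def understand(parsed):
    return INTENTS.get(parsed["verb"], "ask_for_clarification")

for utterance in ["Reset my password", "Book a flight to Boston", "Fix my payslip"]:
    parsed = parse(utterance)
    print(parsed, "->", understand(parsed))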

“The aim of Amelia is not just to get an accolade for managing to fool one in three people on a panel. The assertion is to create something that can answer to the fundamental need of human beings – particularly after a certain age – of companionship. That is our intent.”


AI For Everyone: Startups Democratize Deep Learning So Google And Facebook Don’t Own It All

ORIGINAL: Forbes
9/17/2014
When I arrived at a Stanford University auditorium Tuesday night for what I thought would be a pretty nerdy panel on deep learning, a fast-growing branch of artificial intelligence, I figured I must be in the wrong place–maybe a different event for all the new Stanford students and their parents visiting the campus. Nope. Despite the highly technical nature of deep learning, some 600 people had shown up for the sold-out AI event, presented by VLAB, a Stanford-based chapter of the MIT Enterprise Forum.
The turnout was a stark sign of the rising popularity of deep learning, an approach to AI that tries to mimic the activity of the brain in so-called neural networks. In just the last couple of years, deep learning software from giants like Google, Facebook, and China’s Baidu, as well as a raft of startups, has led to big advances in image and speech recognition, medical diagnostics, stock trading, and more. “There’s quite a bit of excitement in this area,” panel moderator Steve Jurvetson, a partner with the venture firm DFJ, said with uncustomary understatement.

In the past year or two, big companies have been locked in a land grab for talent, paying big bucks for startups and even hiring away deep learning experts from each other. But this event, focused mostly on startups, including several that demonstrated their products before the panel, also revealed there’s still a lot of entrepreneurial activity. In particular, several companies aim to democratize deep learning by offering it as a service or coming up with cheaper hardware to make it more accessible to businesses.

Jurvetson explained why deep learning has pushed the boundaries of AI so much further recently.

  • For one, there’s a lot more data around because of the Internet, there’s metadata such as tags and translations, and there’s even services such as Amazon’s Mechanical Turk, which allows for cheap labeling or tagging.
  • There are also algorithmic advances, especially for using unlabeled data.
  • And computing has advanced enough to allow much larger neural networks with more synapses–in the case of Google Brain, for instance, 1 billion synapses (though that’s still a very long way from the 100 trillion synapses in the adult human brain).

Adam Berenzweig, cofounder and CTO of image recognition firm Clarifai and former engineer at Google for 10 years, made the case that deep learning is “adding a new primary sense to computing” in the form of useful computer vision. “Deep learning is forming that bridge between the physical world and the world of computing,” he said.

And it’s allowing that to happen in real time. “Now we’re getting into a world where we can take measurements of the physical world, like pixels in a picture, and turn them into symbols that we can sort,” he said. Clarifai has been working on taking an image, producing a meaningful description very quickly (in about 80 milliseconds), and showing very similar images.

One interesting application relevant to advertising and marketing, he noted: Once you can recognize key objects in images, you can target ads not just on keywords but on objects in an image.
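
A rough sketch of that idea, assuming a generic image classifier that returns label and confidence pairs; the classify_image function below is a hard-coded stand-in rather than Clarifai’s actual API, and the ad categories are invented.

# Hypothetical object-to-ad mapping. classify_image below is a hard-coded stand-in for
# any image-recognition service returning (label, confidence) pairs; it is not Clarifai's API.
AD_CATEGORIES = {
    "bicycle": "cycling gear",
    "dog": "pet supplies",
    "espresso": "coffee subscriptions",
}

def classify_image(image_path):
    """Placeholder for a real classifier call; returns fixed labels for illustration."""
    return [("bicycle", 0.93), ("helmet", 0.71)]

def ads_for_image(image_path, min_confidence=0.8):
    """Target ads on the objects recognised in the image rather than on nearby keywords."""
    labels = classify_image(image_path)
    return [AD_CATEGORIES[label] for label, confidence in labels
            if confidence >= min_confidence and label in AD_CATEGORIES]

print(ads_for_image("holiday_photo.jpg"))  # -> ['cycling gear']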

DFJ’s Steve Jurvetson led a panel of AI experts at a Stanford event Sept. 16.

Even more sweeping, said Naveen Rao, cofounder and CEO of deep-learning hardware and software startup Nervana Systems and former researcher in neuromorphic computing at Qualcomm, deep learning is “that missing link between computing and what the brain does.” Instead of doing specific computations very fast, as conventional computers do, “we can start building new hardware to take computer processing in a whole new direction,” assessing probabilities, like the brain does. “Now there’s actually a business case for this kind of computing,” he said.

And not just for big businesses. Elliot Turner, founder and CEO of AlchemyAPI, a deep-learning platform in the cloud, said his company’s mission is to “democratize deep learning.” The company is working in 10 industries from advertising to business intelligence, helping companies apply it to their businesses. “I look forward to the day that people actually stop talking about deep learning, because that will be when it has really succeeded,” he added.

Despite the obvious advantages of large companies such as Google, which have untold amounts of both data and computer power that deep learning requires to be useful, startups can still have a big impact, a couple of the panelists said. “There’s data in a lot of places. There’s a lot of nooks and crannies that Google doesn’t have access to,” Berenzweig said hopefully. “Also, you can trade expertise for data. There’s also a question of how much data is enough.”

Turner agreed. “It’s not just a matter of stockpiling data,” he said. “Better algorithms can help an application perform better.” He noted that even Facebook, despite its wealth of personal data, found this in its work on image recognition.

Those algorithms may have broad applicability, too. Even if they’re initially developed for specific applications such as speech recognition, it looks like they can be used on a wide variety of applications. “These algorithms are extremely fungible,” said Rao. And he said companies such as Google aren’t keeping them as secret as expected, often publishing them in academic journals and at conferences–though Berenzweig noted that “it takes more than what they publish to do what they do well.”

For all that, it’s not yet clear how closely deep-learning systems will actually emulate the brain, even if they prove intelligent. But Ilya Sutskever, research scientist at Google Brain and a protege of Geoffrey Hinton, the University of Toronto deep learning guru since the 1980s who’s now working part-time at Google, said it almost doesn’t matter. “You can still do useful predictions” using them. And while the learning principles for dealing with all the unlabeled data out there remain primitive, he said he and many others are working on this and likely will make even more progress.

Rao said he’s unworried that we’ll end up creating some kind of alien intelligence that could run amok if only because advances will be driven by market needs. Besides, he said, “I think a lot of the similarities we’re seeing in computation and brain functions is coincidental. It’s driven that way because we constrain it that way.”

OK, so how are these companies planning to make money on this stuff? Jurvetson wondered. Of course, we’ve already seen improvements in speech and image recognition that make smartphones and apps more useful, leading more people to buy them. “Speech recognition is useful enough that I use it,” said Sutskever. “I’d be happy if I didn’t press a button ever again. And language translation could have a very large impact.”

Beyond that, Berenzweig said, “we’re looking for the low-hanging fruit,” common use cases such as visual search for shopping, organizing your personal photos, and various business niches such as security.



Danko Nikolic on Singularity 1 on 1: Practopoiesis Tells Us Machine Learning Is Not Enough!

If there’s ever been a case when I just wanted to jump on a plane and go interview someone in person, not because they are famous but because they have created a totally unique and arguably seminal theory, it has to be Danko Nikolic. I believe Danko’s theory of practopoiesis is that good, and that he should, and probably eventually will, become known around the world for it. Unfortunately, however, I don’t have a budget of thousands of dollars per interview that would allow me to fly my audio and video team to Germany and produce the quality that Nikolic deserves. So I’ve had to settle for Skype. And Skype refused to cooperate on that day, even though both Danko and I have pretty much the fastest internet connections money can buy. Luckily, despite the poor video quality, our audio was very good, so if there’s ever been an interview where you ought to disregard the video quality and focus on the content, it is this one.
During our 67-minute conversation with Danko we cover a variety of interesting topics.

As always you can listen to or download the audio file above or scroll down and watch the video interview in full.
To show your support you can write a review on iTunes or make a donation.
Who is Danko Nikolic?
The main motive for my studies is the explanatory gap between the brain and the mind. My interest is in how the physical world of neuronal activity produces the mental world of perception and cognition. I am associated with

  • the Max-Planck Institute for Brain Research,
  • Ernst Strüngmann Institute,
  • Frankfurt Institute for Advanced Studies, and
  • the University of Zagreb.
I approach the problem of the explanatory gap from both sides, bottom-up and top-down. The bottom-up approach investigates brain physiology. The top-down approach investigates behavior and experience. Each of the two approaches has led me to develop a theory: the work on physiology resulted in the theory of practopoiesis, and the work on behavior and experience led to the phenomenon of ideasthesia.
The empirical work in the background of those theories involved

  • simultaneous recordings of activity of 100+ neurons in the visual cortex (extracellular recordings),
  • behavioral and imaging studies in visual cognition (attention, working memory, long-term memory), and
  • empirical investigations of phenomenal experiences (synesthesia).
The ultimate goal of my studies is twofold.

  • First, I would like to achieve conceptual understanding of how the dynamics of physical processes creates the mental ones. I believe that the work on practopoiesis presents an important step in this direction and that it will help us eventually address the hard problem of consciousness and the mind-body problem in general.
  • Second, I would like to use this theoretical knowledge to create artificial systems that are intelligent and adaptive in the way biological systems are. This would have implications for our technology.
A reason why one would be interested in studying the brain in the first place is described here: Why brain?

Neurons in human skin perform advanced calculations

[2014-09-01] Neurons in human skin perform advanced calculations that it was previously believed only the brain could perform. This is according to a study from Umeå University in Sweden published in the journal Nature Neuroscience.

A fundamental characteristic of neurons that extend into the skin and record touch, so-called first-order neurons in the tactile system, is that they branch in the skin so that each neuron reports touch from many highly-sensitive zones on the skin.
According to researchers at the Department of Integrative Medical Biology, IMB, Umeå University, this branching allows first-order tactile neurons not only to send signals to the brain that something has touched the skin, but also process geometric data about the object touching the skin.
“Our work has shown that two types of first-order tactile neurons that supply the sensitive skin at our fingertips not only signal information about when and how intensely an object is touched, but also information about the touched object’s shape,” says Andrew Pruszynski, who is one of the researchers behind the study.
The study also shows that the sensitivity of individual neurons to the shape of an object depends on the layout of the neuron’s highly-sensitive zones in the skin.
“Perhaps the most surprising result of our study is that these peripheral neurons, which are engaged when a fingertip examines an object, perform the same type of calculations done by neurons in the cerebral cortex. Somewhat simplified, it means that our touch experiences are already processed by neurons in the skin before they reach the brain for further processing,” says Andrew Pruszynski.
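
A toy model makes the point concrete: treat a first-order neuron as summing the activation of its sensitive zones, with zones lying close to the line of contact contributing most. The Python below only illustrates how the layout of zones could make a single neuron’s response depend on the orientation of an edge pressed against the skin; the coordinates and activation function are invented and are not taken from the study.

# Toy model of a first-order tactile neuron with several highly sensitive zones in the skin.
# The coordinates and activation function are invented; this is not the study's analysis.
import numpy as np

def neuron_response(zone_positions, edge_angle_deg):
    """Sum the activation of each zone for a straight edge pressed across the fingertip.
    Zones lying close to the line of contact (through the origin) contribute most."""
    angle = np.deg2rad(edge_angle_deg)
    normal = np.array([-np.sin(angle), np.cos(angle)])   # unit vector perpendicular to the edge
    distances = np.abs(zone_positions @ normal)          # distance of each zone from the edge
    return np.sum(np.exp(-distances ** 2))

# Two neurons whose zones are laid out differently (made-up millimetre coordinates).
horizontal_neuron = np.array([[-1.0, 0.0], [0.0, 0.0], [1.0, 0.0]])
diagonal_neuron = np.array([[-1.0, -1.0], [0.0, 0.0], [1.0, 1.0]])

# The two neurons fire most strongly for edges of different orientations,
# so their combined responses already carry shape information.
for angle in (0, 45, 90):
    print(angle, "deg:",
          round(neuron_response(horizontal_neuron, angle), 2),
          round(neuron_response(diagonal_neuron, angle), 2))
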
For more information about the study, please contact Andrew Pruszynski, post doc at the Department of Integrative Medical Biology, IMB, Umeå University. He is English-speaking and can be reached at:
Phone: +46 90 786 51 09; Mobile: +46 70 610 80 96

This AI-Powered Calendar Is Designed to Give You Me-Time

ORIGINAL: Wired
09.02.14
Timeful intelligently schedules to-dos and habits on your calendar. Timeful
No one on their death bed wishes they’d taken a few more meetings. Instead, studies find people consistently say things like: 
  • I wish I’d spent more time with my friends and family;
  • I wish I’d focused more on my health; 
  • I wish I’d picked up more hobbies.
That’s what life’s all about, after all. So, question: Why don’t we ever put any of that stuff on our calendar?
That’s precisely what the folks behind Timeful want you to do. Their app (iPhone, free) is a calendar designed to handle it all. You don’t just put in the things you need to do—meeting on Thursday; submit expenses; take out the trash—but also the things you want to do, like going running more often or brushing up on your Spanish. Then, the app algorithmically generates a schedule to help you find time for it all. The more you use it, the smarter that schedule gets.
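
Timeful has not published its scheduling algorithm, but the basic move described here, slotting flexible items into the free hours left around fixed events, can be sketched with a simple greedy pass. Everything in the Python below, from the event names to the hours, is made up for illustration.

# Minimal greedy scheduler: place flexible to-dos and habits into the free hours left
# around fixed events. A sketch only; Timeful's real algorithm has not been published.
def build_schedule(fixed_events, flexible_items, day_start=9, day_end=18):
    """fixed_events: {hour: name}; flexible_items: list of (name, duration_in_hours)."""
    schedule = dict(fixed_events)
    hour = day_start
    for name, duration in flexible_items:
        # Scan forward for the first run of free hours long enough for this item.
        while hour + duration <= day_end:
            block = range(hour, hour + duration)
            if all(h not in schedule for h in block):
                for h in block:
                    schedule[h] = name
                break
            hour += 1
    return dict(sorted(schedule.items()))

fixed = {10: "team meeting", 14: "dentist"}
flexible = [("submit expenses", 1), ("run", 1), ("practice Spanish", 1)]
print(build_schedule(fixed, flexible))
# -> {9: 'submit expenses', 10: 'team meeting', 11: 'run', 12: 'practice Spanish', 14: 'dentist'}
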
Even in the crowded categories of calendars and to-do lists, Timeful stands out. Not many iPhone calendar apps were built by renowned behavioral psychologists and machine learning experts, nor have many attracted investor attention to the tune of $7 million.
It was born as a research project at Stanford, where Jacob Bank, a computer science PhD candidate, and his advisor, AI expert Yoav Shoham, started exploring how machine learning could be applied to time management. To help with their research, they brought on Dan Ariely, the influential behavior psychologist and author of the book Predictably Irrational. It didn’t take long for the group to realize that there was an opportunity to bring time management more in step with the times. “It suddenly occurred to me that my calendar and my grandfather’s calendar are essentially the same,” Shoham recalls.
A Tough Problem and an Artificially Intelligent Solution
Like all of Timeful’s founders, Shoham sees time as our most valuable resource–far more valuable, even, than money. And yet he says the tools we have for managing money are far more sophisticated than the ones we have for managing time. In part, that’s because time poses a tricky problem. Simply put, it’s tough to figure out the best way to plan your day. On top of that, people are lazy, and prone to distraction. “We have a hard computational problem compounded by human mistakes,” Shoham says.
To address that lazy human bit, Timeful is designed around a simple fact: When you schedule something, you’re far more likely to get it done. Things you put in the app don’t just live in some list. Everything shows up on the calendar. Meetings and appointments get slotted at the times they take place, as you’d expect. But at the start of the day, the app also blocks off time for your to-dos and habits, rendering them as diagonally-slatted rectangles on your calendar which you can accept, dismiss, or move around as you desire.
Suggestions have diagonal slats. Timeful
In each case, Timeful takes note of how you respond and adjusts its “intention rank,” as the company calls its scheduling algorithm. This is the special sauce that elevates Timeful from dumb calendar to something like an assistant. As Bank sees it, the more nebulous lifestyle events we’d never think to put on our calendar are a perfect subject for some machine learning smarts. “Habits have the really nice property that they repeat over time with very natural patterns,” he says. “So if you put in, ‘run three times a week,’ we can quickly learn what times you like to run and when you’re most likely to do it.”
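
The company has not disclosed how ‘intention rank’ works, so the sketch below is only a guess at the general mechanism it describes: per-habit, per-hour preference scores that drift up when a suggestion is accepted and down when it is dismissed. None of the names or numbers come from Timeful.

# Sketch of learning when to suggest a habit: per-(habit, hour) preference scores nudged up
# when a suggestion is accepted and down when it is dismissed. Timeful's actual
# "intention rank" algorithm is not public; everything here is invented.
from collections import defaultdict

class HabitScheduler:
    def __init__(self, learning_rate=0.2):
        self.scores = defaultdict(float)   # (habit, hour) -> learned preference score
        self.lr = learning_rate

    def suggest_hour(self, habit, candidate_hours):
        """Suggest the candidate hour with the highest learned score."""
        return max(candidate_hours, key=lambda hour: self.scores[(habit, hour)])

    def feedback(self, habit, hour, accepted):
        """Move the score toward 1 on acceptance and toward 0 on dismissal."""
        target = 1.0 if accepted else 0.0
        key = (habit, hour)
        self.scores[key] += self.lr * (target - self.scores[key])

scheduler = HabitScheduler()
for _ in range(5):                                  # the user keeps accepting evening runs
    scheduler.feedback("run", 18, accepted=True)
scheduler.feedback("run", 7, accepted=False)        # and dismisses an early-morning one
print(scheduler.suggest_hour("run", [7, 12, 18]))   # -> 18
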
The other machine learning challenge involved with Timeful is the problem of input. Where many other to-do apps try to make the input process as frictionless as possible, Timeful often needs to ask a few follow-up questions to schedule tasks properly, like how long you expect them to take, and if there’s a deadline for completion. As with all calendars and to-do apps, Timeful’s only as useful as the stuff you put on it, and here that interaction’s a fairly heavy one. For many, it could simply be too much work for the reward. Plus, isn’t it a little weird to block off sixty minutes to play with your kid three times a week?
Bank admits that it takes longer to put things into Timeful than some other apps, and the company’s computer scientists are actively trying to come up with new ways to offload the burden algorithmically. In future versions, Bank hopes to be able to automatically pull in data from other apps and services. A forthcoming web version could also make input easier (an Android version is on the way too). But as Bank sees it, there may be an upside to having a bit of friction here. By going through the trouble of putting something in the app, you’re showing that you truly want to get it done, and that could help keep Timeful from becoming a “list of shame” like other to-do apps. (And as far as the kid thing goes, it might feel weird, but if scheduling family time on your calendar results in more family time, then it’s kinda hard to knock, no?)
How Much Scheduling Is Too Much?
Perhaps the bigger question is how much day-to-day optimization people can really swallow. Having been conditioned to see the calendar as a source of responsibilities and obligations, opening up one’s preferred scheduling application and seeing a long white column stretching down for the day can be the source of an almost embarrassing degree of relief. Thank God, now I can finally get something done! With Timeful, that feeling becomes extinct. Every new dawn brings a whole bunch of new stuff to do.
Two of Timeful’s co-founders, Jacob Bank (top) and Yoav Shoham Timeful
Bank and Shoham are acutely aware of this thorny problem. “Sometimes there’s a tension between what’s best for a user and what the user wants to accept, and we need to be really delicate about that,” Bank says. In the app, you can fine-tune just how aggressive you want it to be in its planning, and a significant part of the design process was making sure the app’s suggestions felt like suggestions, not demands. Still, we might crave that structure more than we think. After some early user tests, the company actually cranked up the pushiness of Timeful’s default setting; the overwhelming response from beta testers was “give me more!”
The vision is for Timeful to become something akin to a polite assistant. Shoham likens it to Google Now for your schedule–a source of informed suggestions about what to do next. Whether you take those suggestions or leave them is entirely up to you. “This is not your paternalistic dad telling you, ‘thou shall do this!’” he says. “It’s not your guilt-abusing mom. Well, maybe there’s a little bit of that.”