AI Software Juggles Probabilities to Learn from Less Data

By Hugo Angel,

Gamalon has developed a technique that lets machines learn to recognize concepts in images or text much more efficiently.
An app developed by Gamalon recognizes objects after seeing just a few examples; its learning program builds them up from simpler concepts such as lines and rectangles.
 
Machine learning is becoming extremely powerful, but it requires extreme amounts of data.
You can, for instance, train a deep-learning algorithm to recognize a cat with a cat-fancier’s level of expertise, but you’ll need to feed it tens or even hundreds of thousands of images of felines, capturing a huge amount of variation in size, shape, texture, lighting, and orientation. It would be a lot more efficient if, a bit like a person, an algorithm could develop an idea about what makes a cat a cat from fewer examples.
A Boston-based startup called Gamalon has developed technology that lets computers do this in some situations, and it is releasing two products Tuesday based on the approach.
If the underlying technique can be applied to many other tasks, then it could have a big impact. The ability to learn from less data could let robots explore and understand new environments very quickly, or allow computers to learn about your preferences without sharing your data.
Gamalon uses a technique that it calls Bayesian program synthesis to build algorithms capable of learning from fewer examples. Bayesian probability, named after the 18th century mathematician Thomas Bayes, provides a mathematical framework for refining predictions about the world based on experience. Gamalon’s system uses probabilistic programming—or code that deals in probabilities rather than specific variables—to build a predictive model that explains a particular data set. From just a few examples, a probabilistic program can determine, for instance, that it’s highly probable that cats have ears, whiskers, and tails. As further examples are provided, the code behind the model is rewritten, and the probabilities tweaked. This provides an efficient way to learn the salient knowledge from the data.
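To make the idea concrete, here is a minimal sketch in Python of the kind of Bayesian updating described above: each feature of a concept starts at an uninformative prior probability and is refined as new examples arrive. It is only an illustration of the principle, not Gamalon’s actual system, and the feature names and smoothing scheme are assumptions made for the toy.

    # Minimal sketch of Bayesian feature updating (illustrative only).
    from collections import defaultdict

    class FeatureModel:
        def __init__(self, prior=0.5, prior_weight=2.0):
            self.prior = prior                 # uninformative starting belief
            self.prior_weight = prior_weight   # how strongly the prior resists data
            self.counts = defaultdict(float)   # times each feature was observed
            self.total = 0.0                   # number of examples seen

        def update(self, features):
            """Observe one labeled example, given as a set of feature names."""
            self.total += 1
            for f in features:
                self.counts[f] += 1

        def probability(self, feature):
            """Posterior estimate that the concept has this feature."""
            # Beta-Bernoulli style smoothing: the prior acts as pseudo-counts.
            return (self.counts[feature] + self.prior * self.prior_weight) / (
                self.total + self.prior_weight)

    cat = FeatureModel()
    for example in [{"ears", "whiskers", "tail"},
                    {"ears", "whiskers", "tail", "stripes"},
                    {"ears", "tail"}]:
        cat.update(example)

    for f in ["ears", "whiskers", "tail", "stripes"]:
        print(f, round(cat.probability(f), 2))

After three examples the toy model is already fairly confident about ears and tails but hedges on stripes, which is the sense in which a handful of examples can yield usable probabilities.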
Probabilistic programming techniques have been around for a while. In 2015, for example, a team from MIT and NYU used probabilistic methods to have computers learn to recognize written characters and objects after seeing just one example (see “This AI Algorithm Learns Simple Tasks as Fast as We Do”). But the approach has mostly been an academic curiosity.
There are difficult computational challenges to overcome, because the program has to consider many different possible explanations, says Brenden Lake, a research fellow at NYU who led the 2015 work.
Still, in theory, Lake says, the approach has significant potential because it can automate aspects of developing a machine-learning model. “Probabilistic programming will make machine learning much easier for researchers and practitioners,” Lake says. “It has the potential to take care of the difficult [programming] parts automatically.”
There are certainly significant incentives to develop easier-to-use and less data-hungry machine-learning approaches. Machine learning currently involves acquiring a large raw data set, and often then labeling it manually. The learning is then done inside large data centers, using many computer processors churning away in parallel for hours or days. “There are only a few really large companies that can really afford to do this,” says Ben Vigoda, cofounder and CEO of Gamalon.
When Machines Have Ideas | Ben Vigoda | TEDxBoston
Gamalon CEO Ben Vigoda gave a talk at TEDxBoston 2016 called “When Machines Have Ideas” that describes why building “stories” (i.e., Bayesian generative models) into machine intelligence systems can be very powerful.
In theory, Gamalon’s approach could make it a lot easier for someone to build and refine a machine-learning model, too. Perfecting a deep-learning algorithm requires a great deal of mathematical and machine-learning expertise. “There’s a black art to setting these systems up,” Vigoda says. With Gamalon’s approach, a programmer could train a model by feeding in significant examples.
Vigoda showed MIT Technology Review a demo with a drawing app that uses the technique. It is similar to the one released last year by Google, which uses deep learning to recognize the object a person is trying to sketch (see “Want to Understand AI? Try Sketching a Duck for a Neural Network”). But whereas Google’s app needs to see a sketch that matches the ones it has seen previously, Gamalon’s version uses a probabilistic program to recognize the key features of an object. For instance, one program understands that a triangle sitting atop a square is most likely a house. This means even if your sketch is very different from what it has seen before, provided it has those features, it will guess correctly.
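As a toy illustration of that idea (our own sketch, not Gamalon’s program), the snippet below scores candidate concepts by how many of their key structural features a drawing exhibits, rather than by pixel-level similarity, so a wobbly sketch still reads as a house if a triangle sits atop a square. The concept and feature names are invented for the example.

    # Illustrative feature-based sketch recognition (not Gamalon's code).
    CONCEPTS = {
        "house": {"triangle_above_square", "square"},
        "tree":  {"circle_above_line", "line"},
    }

    def recognize(detected_features):
        """Return the concept whose key features best match the sketch."""
        scores = {}
        for name, required in CONCEPTS.items():
            scores[name] = len(required & detected_features) / len(required)
        best = max(scores, key=scores.get)
        return best, scores

    # A rough drawing still counts as a house if the key relation is present.
    print(recognize({"triangle_above_square", "square", "wobbly_lines"}))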
The technique could have significant near-term commercial applications, too. The company’s first products use Bayesian program synthesis to recognize concepts in text.
  • One product, called Gamalon Structure, can extract concepts from raw text more efficiently than is normally possible. For example, it can take a manufacturer’s description of a television and determine what product is being described, the brand, the product name, the resolution, the size, and other features.
  • Another product, Gamalon Match, is used to categorize the products and prices in a store’s inventory. In each case, even when different acronyms or abbreviations are used for a product or feature, the system can quickly be trained to recognize them.
Vigoda believes the ability to learn will have other practical benefits.

  • A computer could learn about a user’s interests without requiring an impractical amount of data or hours of training.
  • Personal data might not need to be shared with large companies, either, if machine learning can be done efficiently on a user’s smartphone or laptop.
  • And a robot or a self-driving car could learn about a new obstacle without needing to see hundreds of thousands of examples.
February 14, 2017

Top 10 Hot Artificial Intelligence (AI) Technologies

By Hugo Angel,

The market for artificial intelligence (AI) technologies is flourishing. Beyond the hype and the heightened media attention, the numerous startups and the internet giants racing to acquire them, there is a significant increase in investment and adoption by enterprises. A Narrative Science survey found last year that 38% of enterprises are already using AI, growing to 62% by 2018. Forrester Research predicted a greater than 300% increase in investment in artificial intelligence in 2017 compared with 2016. IDC estimated that the AI market will grow from $8 billion in 2016 to more than $47 billion in 2020.

Coined in 1955 to describe a new computer science sub-discipline, “Artificial Intelligence” today includes a variety of technologies and tools, some time-tested, others relatively new. To help make sense of what’s hot and what’s not, Forrester just published a TechRadar report on Artificial Intelligence (for application development professionals), a detailed analysis of 13 technologies enterprises should consider adopting to support human decision-making.

Based on Forrester’s analysis, here’s my list of the 10 hottest AI technologies:

  1. Natural Language Generation: Producing text from computer data. Currently used in customer service, report generation, and summarizing business intelligence insights. Sample vendors:
    • Attivio,
    • Automated Insights,
    • Cambridge Semantics,
    • Digital Reasoning,
    • Lucidworks,
    • Narrative Science,
    • SAS,
    • Yseop.
  2. Speech Recognition: Transcribe and transform human speech into a format useful for computer applications. Currently used in interactive voice response systems and mobile applications. Sample vendors:
    • NICE,
    • Nuance Communications,
    • OpenText,
    • Verint Systems.
  3. Virtual Agents: “The current darling of the media,” says Forrester (I believe they refer to my evolving relationship with Alexa), from simple chatbots to advanced systems that can network with humans. Currently used in customer service and support and as a smart home manager. Sample vendors:
    • Amazon,
    • Apple,
    • Artificial Solutions,
    • Assist AI,
    • Creative Virtual,
    • Google,
    • IBM,
    • IPsoft,
    • Microsoft,
    • Satisfi.
  4. Machine Learning Platforms: Providing algorithms, APIs, development and training toolkits, data, as well as computing power to design, train, and deploy models into applications, processes, and other machines. Currently used in a wide range of enterprise applications, mostly involving prediction or classification. Sample vendors:
    • Amazon,
    • Fractal Analytics,
    • Google,
    • H2O.ai,
    • Microsoft,
    • SAS,
    • Skytree.
  5. AI-optimized Hardware: Graphics processing units (GPU) and appliances specifically designed and architected to efficiently run AI-oriented computational jobs. Currently primarily making a difference in deep learning applications. Sample vendors:
    • Alluviate,
    • Cray,
    • Google,
    • IBM,
    • Intel,
    • Nvidia.
  6. Decision Management: Engines that insert rules and logic into AI systems, used for initial setup/training and for ongoing maintenance and tuning. A mature technology, it is used in a wide variety of enterprise applications, assisting in or performing automated decision-making. Sample vendors:
    • Advanced Systems Concepts,
    • Informatica,
    • Maana,
    • Pegasystems,
    • UiPath.
  7. Deep Learning Platforms: A special type of machine learning consisting of artificial neural networks with multiple abstraction layers. Currently primarily used in pattern recognition and classification applications supported by very large data sets. Sample vendors:
    • Deep Instinct,
    • Ersatz Labs,
    • Fluid AI,
    • MathWorks,
    • Peltarion,
    • Saffron Technology,
    • Sentient Technologies.
  8. Biometrics: Enable more natural interactions between humans and machines, including but not limited to image and touch recognition, speech, and body language. Currently used primarily in market research. Sample vendors:
    • 3VR,
    • Affectiva,
    • Agnitio,
    • FaceFirst,
    • Sensory,
    • Synqera,
    • Tahzoo.
  9. Robotic Process Automation: Using scripts and other methods to automate human action to support efficient business processes. Currently used where it’s too expensive or inefficient for humans to execute a task or a process. Sample vendors:
    • Advanced Systems Concepts,
    • Automation Anywhere,
    • Blue Prism,
    • UiPath,
    • WorkFusion.
  10. Text Analytics and NLP: Natural language processing (NLP) uses and supports text analytics by facilitating the understanding of sentence structure and meaning, sentiment, and intent through statistical and machine learning methods. Currently used in fraud detection and security, a wide range of automated assistants, and applications for mining unstructured data. Sample vendors:
    • Basis Technology,
    • Coveo,
    • Expert System,
    • Indico,
    • Knime,
    • Lexalytics,
    • Linguamatics,
    • Mindbreeze,
    • Sinequa,
    • Stratifyd,
    • Synapsify.

There are certainly many business benefits gained from AI technologies today, but according to a survey Forrester conducted last year, there are also obstacles to AI adoption as expressed by companies with no plans of investing in AI:

  • There is no defined business case: 42%
  • Not clear what AI can be used for: 39%
  • Don’t have the required skills: 33%
  • Need first to invest in modernizing the data management platform: 29%
  • Don’t have the budget: 23%
  • Not certain what is needed for implementing an AI system: 19%
  • AI systems are not proven: 14%
  • Do not have the right processes or governance: 13%
  • AI is a lot of hype with little substance: 11%
  • Don’t own or have access to the required data: 8%
  • Not sure what AI means: 3%

Once enterprises overcome these obstacles, Forrester concludes, they stand to gain from AI driving accelerated transformation in customer-facing applications and developing an interconnected web of enterprise intelligence.


Scientists Find Single Molecule that Controls Fate of Mature Sensory Neurons

By admin,

In the neocortex, neighboring cells are shown making connections to the visual cortex (red) and the somatosensory cortex (green). Image: Salk Institute for Biological Studies
La Jolla, CA (Scicasts) — Scientists at the Salk Institute have discovered that the role of neurons—which are responsible for specific tasks in the brain—is much more flexible than previously believed.
By studying sensory neurons in mice, the Salk team found that the malfunction of a single molecule can prompt the neuron to make an “early-career” switch, changing a neuron originally destined to process sound or touch, for example, to instead process vision.
The finding, reported May 11, 2015 in PNAS, will help neuroscientists better understand how brain architecture is molecularly encoded and how it can become miswired. It may also point to ways to prevent or treat human disorders (such as autism) that feature substantial brain structure abnormalities.
“We found an unexpected mechanism that provides surprising brain plasticity in maturing sensory neurons,” says the study’s first author, Andreas Zembrzycki, a senior research associate at the Salk Institute.
The mechanism centers on a transcription factor called Lhx2, which the team inactivated in neurons; switching it on or off can change the function of a sensory neuron in mice. It has been known that Lhx2 is present in many cell types beyond the brain and is needed by a developing foetus to build body parts. Without Lhx2, animals typically die in utero. However, it was not well known that Lhx2 also affects cells after birth.
“This process happens while the neuron matures and no longer divides. We did not understand before this study that relatively mature neurons could be reprogrammed in this way,” says senior author Dennis O’Leary, Salk professor and holder of the Vincent J. Coates Chair in Molecular Neurobiology. “This finding opens up a new understanding about how brain architecture is established and a potential therapeutic approach to altering that blueprint.”
Scientists had believed that programming neurons was a one-step process. They thought that the stem cells that generate the neurons also programmed their functions once they matured. While this is true, the Salk team found that another step is needed: the Lhx2 transcription factor in mature neurons then ultimately controls the fate of the neuron.
In the mouse study, the scientists manipulated Lhx2 to make the switch in neuronal fate shortly after birth (when the mouse neurons are fully formed and considered mature). The team observed that controlling Lhx2 let them instruct neurons situated in one sensory area to process a different sense, thus enlarging one region at the expense of the other. The scientists don’t know yet if targeting Lhx2 would allow neurons to change their function throughout an organism’s life.
“This study provides proof that the brain is very plastic and that it responds to both genetic and epigenetic influences well after birth,” says O’Leary. “Clinical applications for brain disorders are a long way away, but we now have a new way to think about them.”
“Since this study was conducted in mice, we don’t know the time frame in which Lhx2 would be operating in humans, but we know that post-birth, neurons in a baby’s brain still have not settled into their final position—they are still being wired up. That could take years,” Zembrzycki says.
However, the findings may be an ingredient that contributes to the success of early intervention in some very young children diagnosed with autism, adds Zembrzycki. “The brain’s wiring is determined genetically as well as influenced epigenetically by environmental influences, and early intervention preventing brain miswiring may be an example of converging genetic and epigenetic mechanisms that are controlled by Lhx2.”
Article adapted from a Salk Institute for Biological Studies news release.
Publication: Postmitotic regulation of sensory area patterning in the mammalian neocortex by Lhx2. Andreas Zembrzycki, Carlos G. Perez-Garcia, Chia-Fang Wang, Shen-Ju Chou, Dennis D.M. O’Leary. PNAS (2015):
http://www.pnas.org/content/early/2015/05/12/1424440112
ORIGINAL: SciCasts


The Algorithm That Unscrambles Fractured Images

By admin,

The ongoing revolution in image processing has produced yet another way to extract images from a complex environment.

Take a hammer to a mirror and you will fracture the image it produces as well as the glass. Keep smashing and the image becomes more broken. When the pieces of glass are the size of glitter, the reflections will be random and the image unrecognisable.

It’s easy to imagine that reconstructing this image would be close to impossible. Not so, say Zhengdong Zhang and pals at the Massachusetts Institute of Technology in Cambridge. Today, these guys unveil SparkleVision, an image processing algorithm that reassembles the smashed image.

The problem that Zhang and co attack is to work out the contents of a picture reflected off a screen covered in glitter. The approach is to photograph the glitter and then process the resulting image in a way that unscrambles the picture.

It turns out that there is a straightforward way to approach this. Zhang and co consider each piece of glitter to be a randomly oriented micromirror. So light from the picture hits a micromirror and is reflected to a sensor inside the camera.

That means there is a simple mapping from each pixel in the original picture to a sensor in the camera. The task is to determine that mapping for every pixel. “There exists a forward scrambling matrix, and in principle we can find its inverse and unscramble the image,” they say.

To find this unscrambling matrix, Zhang and co shine a set of test images at the glitter screen and record where the pixels in the original image end up in the camera.

From this, they can create an algorithm that unscrambles any other image placed in exactly the same spot as the test images. They call this algorithm SparkleVision.
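The core of the method is a small piece of linear algebra. The toy below (our own illustration, not the authors’ code) treats the glitter as a fixed but unknown linear scrambling matrix, estimates that matrix by projecting known calibration images, and then inverts it to recover a new image placed at the same spot. The image size and the particular scrambling are assumptions of the sketch.

    # Toy version of calibrate-then-invert unscrambling (illustrative only).
    import numpy as np

    rng = np.random.default_rng(0)
    n = 16 * 16                      # a 16x16 image, flattened to 256 pixels

    # The unknown forward scrambling: a permutation plus mild mixing.
    A_true = np.eye(n)[rng.permutation(n)] + 0.05 * rng.random((n, n))

    def observe(image_vec):
        """What the camera records when this image reflects off the glitter."""
        return A_true @ image_vec

    # Calibration: project known test images (here the canonical basis images)
    # and record the responses; their columns give an estimate of A.
    tests = np.eye(n)
    A_est = np.column_stack([observe(tests[:, i]) for i in range(n)])

    # Unscramble a new, unseen image placed in exactly the same spot.
    secret = rng.random(n)
    recovered = np.linalg.solve(A_est, observe(secret))
    print("max reconstruction error:", np.abs(recovered - secret).max())

In practice the measurements are noisy, so the inverse is estimated from many test images rather than recovered exactly, but the calibrate-then-invert structure is the same.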

That’s a handy piece of software that could have interesting applications in retrieving images reflected off glitter-like surfaces such as certain types of foliage, wet surfaces, metals and so on.

And Zhang and co hope to make the software more useful. In its current incarnation, the software can only unscramble images placed in the exact location of the test images. But in theory, the test images should provide enough data to unscramble images from any part of the light field. “Thus, our system could be naturally extended to work as a lightfield camera,” they say.

The work is part of a growing body of research that is currently revolutionising photography and image processing. Other researchers have worked out how to unscramble images from all kinds of distorted reflections and surfaces, sometimes even without using lenses.

These so-called “random cameras” are dramatically widening the capability of optics specialists. And SparkleVision looks set to take its place among them.

Ref: http://arxiv.org/abs/1412.7884 : SparkleVision: Seeing the World through Random Specular Microfacets


Leonardo’s Brain: What a Posthumous Brain Scan Six Centuries Later Reveals about the Source of Da Vinci’s Creativity

By admin,

ORIGINAL: Dangerous Minds
How the most creative human who ever lived was able to access a different state of consciousness.
One September day in 2008, Leonard Shlain found himself having trouble buttoning his shirt with his right hand. He was admitted into the emergency room, diagnosed with Stage 4 brain cancer, and given nine months to live. Shlain — a surgeon by training and a self-described “synthesizer by nature” with an intense interest in the ennobling intersection of art and science, author of the now-legendary Art & Physics — had spent the previous seven years working on what he considered his magnum opus: a sort of postmortem brain scan of Leonardo da Vinci, performed six centuries after his death and fused with a detective story about his life, exploring what the unique neuroanatomy of the man commonly considered humanity’s greatest creative genius might reveal about the essence of creativity itself. Shlain finished the book on May 3, 2009. He died a week later. His three children — Kimberly, Jordan, and filmmaker Tiffany Shlain — spent the next five years bringing their father’s final legacy to life. The result is Leonardo’s Brain: Understanding Da Vinci’s Creative Genius (public library | IndieBound) — an astonishing intellectual, and at times spiritual, journey into the center of human creativity via the particular brain of one undereducated, left-handed, nearly ambidextrous, vegetarian, pacifist, gay, singularly creative Renaissance male, who Shlain proposes was able to attain a different state of consciousness than “practically all other humans.”
Illustration by Ralph Steadman from ‘I, Leonardo.’
Noting that “a writer is always refining his ideas,” Shlain points out that the book is a synthesis of his three previous books, and an effort to live up to Kafka’s famous proclamation that “a book must be the axe for the frozen sea inside us.” It is also a beautiful celebration of the idea that art and science belong together and enrich one another whenever they converge. To understand Leonardo’s brain, Shlain points out as he proves himself once again the great poet of the scientific spirit, we must first understand our own:
The human brain remains among the last few stubborn redoubts to yield its secrets to the experimental method. During the period that scientists expanded the horizons of astronomy, balanced the valences of chemistry, and determined the forces of physics, the crowning glory of Homo sapiens and its most enigmatic emanation, human consciousness, resisted the scientific model’s persistent searching. The brain accounts for only 2 percent of the body’s volume, yet consumes 20 percent of the body’s energy. A pearly gray, gelatinous, three-pound universe, this exceptional organ can map parsecs and plot the whereabouts of distant galaxies measured in quintillions of light-years. The brain accomplishes this magic trick without ever having to leave its ensorcelled ovoid cranial shell. From minuscule-wattage electrical currents crisscrossing and ricocheting within its walls, the brain can reconstruct a detailed diorama of how it imagines the Earth appeared four billion years ago. It can generate poetry so achingly beautiful that readers weep, hatred so intense that otherwise rational people revel in the torture of others, and love so oceanic that entwined lovers lose the boundaries of their physical beings.

Shlain argues that Leonardo —

  • who painted the eternally mysterious Mona Lisa, 
  • created visionary anatomical drawings long before medical anatomy existed, 
  • made observations of bird flight in greater detail than any previous scientist, 
  • mastered engineering, architecture, mathematics, botany, and cartography, 

might be considered history’s first true scientist long before Mary Somerville coined the word, presaged Newton’s Third Law, Bernoulli’s law, and elements of chaos theory, and was a deft composer who sang “divinely,” among countless other domains of mastery — is the individual most worthy of the title “genius” in both science and art.

The divergent flow of art and science in the historical record provides evidence of a distinct compartmentalization of genius. The river of art rarely intersected with the meander of science.
[…]
Although both art and science require a high degree of creativity, the difference between them is stark. For visionaries to change the domain of art, they must make a breakthrough that can only be judged through the lens of posterity. Great science, on the other hand, must be able to predict the future. If a scientist’s hypotheses cannot be turned into a law that can be verified by future investigators, it is not scientifically sound. Another contrast: Art and science represent the difference between “being” and “doing.” Art’s raison d’être is to evoke an emotion. Science seeks to solve problems by advancing knowledge.
[…]
Leonardo’s story continues to compel because he represents the highest excellence all of us lesser mortals strive to achieve — to be intellectually, creatively, and emotionally well-rounded. No other individual in the known history of the human species attained such distinction both in science and art as the hyper-curious, undereducated, illegitimate country boy from Vinci.
Artwork from Alice and Martin Provensen’s vintage pop-up book about the life of Leonardo.

Using a wealth of available information from Leonardo’s notebooks, various biographical resources, and some well-reasoned speculation, Shlain sets out to perform a “posthumous brain scan” seeking to illuminate the unique wiring of Da Vinci’s brain and how it explains his unparalleled creativity. Leonardo was an outlier in a number of ways — socially, culturally, biologically, and in some seemingly unimportant yet, as Shlain explains, notable ways bridging these various aspects of life. For instance:

Leonardo was a vegetarian in a culture that thought nothing of killing animals for food. His explanation for his unwillingness to participate in carnivory was that he did not want to contribute to any animal’s discomfort or death. He extended the courtesy of staying alive to all living creatures, and demonstrated a feeling of connectedness to all life, which was in short supply during a time that glorified hunting.
He was also the only individual in recorded history known to write comfortably backwards, performing what is known as “mirror writing,” which gives an important clue about the wiring of his brain:
Someone wishing to read Leonardo’s manuscripts must first hold the pages before a mirror. Instead of writing from left to right, which is the standard among all European languages, he chose to write from right to left — what the rest of us would consider backward writing. And he used his left hand to write.
Thoroughly confusing the issue was the fact that sometimes he would switch in mid-sentence, writing some words in one direction followed by other words heading in the opposite direction. Another intriguing neurological datum: Careful examination of two samples of his handwriting shows that the one written backward, moving from right to left across the page, is indistinguishable from the handwriting that is not reversed.
Leonardo’s quirks of penmanship strongly suggest that his two hemispheres were intimately connected in an extraordinary way. The traditional dominance pattern of one hemisphere lording it over the other does not seem to have been operational in Leonardo’s brain. Based on what we can extrapolate from the brains of people who share Leonardo’s ability to mirror-write, the evidence points to the presence of a large corpus callosum that kept each hemisphere well informed as to what the other was doing.

Further evidence that his corpus callosum — that thick bundle of fibers connecting the left and right hemispheres, consisting of more than 200 million neurons — was “fairly bursting with an overabundance of connecting neurons” comes from his unusually deft fusion of art and science. For instance, Shlain points out, no other artist in history labored so obsessively over perfecting the geometrical details of the science of perspective. Before delving into Leonardo’s specific neuroanatomy, Shlain points out that because our brains have the maximum number of neurons at the age of eight months and because a dramatic pruning of our neurocircuitry unfolds over the next decade, those early years are crucially formative in our cognitive development and warrant special attention. (Tolstoy captured this beautifully when he wrote, “From a five-year-old child to my present self there is only one step. From a new-born infant to a five-year-old child there is an awesome distance.”)

Leonardo’s own childhood was so unusual and tumultuous that it calls for consideration in examining his brain development. The illegitimate child of a rich playboy from the city and a poor peasant girl from the picturesque Tuscan town of Vinci, he grew up without a real father — an ambitious notary, his father refused to marry Leonardo’s mother in order to avoid compromising his social status. The little boy was raised by a single mother in the countryside. Eventually, his father arranged for his mother to marry another man, and he himself married a sixteen-year-old girl. Leonardo was taken from his mother and awkwardly included in his father’s household as a not-quite-son. But the father-figure in his life ended up being his kindly uncle Francesco, whom the boy grew to love dearly. He remained in contact with his mother throughout his life and evidence from his notebooks suggests that, like Andy Warhol, he invited her to live with him as she became elderly. Shlain points to two perplexities that stand out in Leonardo’s upbringing:

  • First, contemporary psychologists agree that removing young children from their mothers makes for substantial attachment and anxiety issues throughout life, producing emotionally distant adults. 
  • Secondly, Leonardo’s illegitimacy greatly limited his education options, as the Church, in one of its many strokes of gobsmacking lack of the very compassion it preaches, decreed that children born to unwed parents were not eligible for enrollment in its cathedral schools.

Shlain writes: 

Outside of the prohibitively expensive alternative of private tutors, admission to one of these schools was the only means to learning the secret code that opened the doors of opportunity.
That secret code was knowledge of Latin and Greek, without which it was practically impossible to participate in the making of the Renaissance. And yet Leonardo had an especially blistering response to those who dismissed his work due to his lack of education:
They will say that because of my lack of book learning, I cannot properly express what I desire to treat of. Do they not know that my subjects require for their exposition experience rather than the words of others? And since experience has been the mistress, and to her in all points make my appeal.
(More than half a millennium later, Werner Herzog would go on to offer aspiring filmmakers similarly spirited advice.) Shlain writes:
Creativity is a combination of courage and inventiveness. One without the other would be useless.
So how did Leonardo muster the courage and inventiveness to turn the dismal cards he was dealt into the supreme winning hand of being history’s greatest genius? Shlain argues that while we can speculate about how much more remarkable work Leonardo may have done had he been able to command the respect, resources, and recognition “of one who claims noble blood, a university position, and powerful friends in high places,” there is an even more powerful counterargument to be made — one that resonates with Nietzsche’s ideas about the value of difficulty and bespeaks the immeasurable benefits of what Orson Welles called “the gift of ignorance,” or what is commonly known as “beginner’s mind”:
A strong counterargument can also be put forth that it was precisely his lack of indoctrination into the reigning dogma taught in these institutions that liberated him from mental restraints. Unimpeded by the accretion of misconceptions that had fogged the lens of the educated, Leonardo was able to ask key questions and seek fresh answers. Although he could not quote learned books, he promised, “I will quote something far greater and more worthy: experience, the mistress of their masters.” He disdained “trumpets and reciters of the works of others,” and tried to live by his own dictum: “Better a small certainty, than a big lie.” He referred to himself as omo sanza lettere — an “unlettered man” — because he had not received the kind of liberal arts schooling that led to the university. Somewhere in his late thirties and early forties, Leonardo made a concerted effort to teach himself Latin. Long lists of vocabulary words appear in his notebooks. Anyone who has tried to learn a foreign language in adulthood knows how difficult the task can be.
One silver lining to his lack of formal education and attentive parenting is that he was never trained out of his left-handedness as was the practice during the Middle Ages and the Renaissance — something that turned out to be crucial in the anatomy of his genius.
Illustration by Ralph Steadman from ‘I, Leonardo.’
But Leonardo’s social disadvantages didn’t end with education. Based on evidence from his notebooks and biographical accounts from a handful of contemporaries, he was most likely homosexual — at a time when it was not only a crime but a “sin” punishable by death. Even in his fashion and demeanor, Leonardo appeared to be the Walt Whitman of his day — in other words, a proto-dandy who “fell into the flamboyant set.” Shlain quotes Anonimo Gaddiano, a contemporary of Leonardo’s: 
He wore a rose colored tunic, short to the knee, although long garments were then in fashion. He had, reaching down to the middle of his breasts, a fine beard, curled and well kept.
Leonardo was also unorthodox in his universal empathy for animals and philosophical stance against eating them — a complete anomaly in a carnivorous era when the poor longed for meat and the rich threw elaborate feasts around it, showcasing it as a status symbol of their wealth and power. Instead, Leonardo was known to buy caged birds whenever he saw them in the town’s shops and set them free. But Leonardo’s most significant source of exceptionalism goes back to his handedness. Left-handedness might still be an evolutionary mystery, but it is also an enduring metaphor for the powers of intuition. For Leonardo, the physical and the intuitive were inextricably linked:
Leonardo intuited that a person’s face, despite appearing symmetrical, is actually divided into two slightly different halves. Because of the crossover in sensory and motor nerves from each side of the face within the brain, the left hemisphere controls the muscles of the right side of the face and the right hemisphere controls the muscles of the left side. The majority of people are left-brained/right-handed, which means that the right half of their face is under better conscious control than their left. In contrast, the left half of the face connects to the emotional right brain, and is more revealing of a person’s feelings. Right-handers have more difficulty trying to suppress emotional responses on the left side of their face.
In a recent psychology experiment, a group of unsuspecting college students were ushered into a photographer’s studio one at a time and informed that they were to pose for a picture to be given to members of their family. The majority of these right-handed students positioned themselves unaware that they were turning the left side of their face toward the camera’s lens. All of them smiled.
Brought back a second time, the researchers informed them that, now, they were to pose for a job application photo. In this case, they adopted a more professional demeanor, and the majority of right-handers emphasized the right side of their face. The results of this experiment, along with several others of similar design, strongly suggest that unconsciously, most people know that the right side of their face is best to present to the outside world. They are also subliminally aware that their left side is a more natural reflection of who they really are.
Leonardo understood these subtleties of expression. Mona Lisa is best appreciated by observing the left side of her face.
One of Leonardo’s great artistic innovations was his inclusion of the subject’s hands in a portrait. Up to that point, portraiture included only the upper chest and head, but Leonardo saw in the expressiveness of hands a gateway to the subject’s state of mind, his psychological portraiture implicitly invalidating the mind-body split and painting consciousness itself. This brings us back to Leonardo’s own brain. Shlain’s most salient point has to do with the splitting of the brain into two functionally different hemispheres, an adaptation that catapulted us ahead of all other creatures in intellectual capacity and also accounted for Leonardo’s singular genius. Reflecting on findings from studies of split-brain patients, Shlain explains:
The most sublime function of the left hemisphere — critical thinking — has at its core a set of syllogistic formulations that undergird logic. In order to reach the correct answer, the rules must be followed without deviation. So dependent is the left brain on rules that Joseph Bogen, the neurosurgeon who operated on many of the first split-brain patients, called it the propositional brain: It processes information according to an underlying set of propositions. In contrast, he called the right hemisphere the appositional brain, because it does just the opposite: It processes information through nonlinear, non-rule-based means, incorporating differing converging determinants into a coherent thought. Bogen’s classification of the brain into two different types, proposition versus apposition, has been generally accepted by neuroscientists, and it appears often in neurocognitive literature.
The right brain’s contribution to creativity, however, is not absolute, because the left brain is constantly seeking explanations for inexplicable events. Unfortunately, although many of these explanations are extremely creative, without the input of the right hemisphere they are almost universally wrong. It seems that there is no phenomenon for which the left brain has not confabulated an explanation. This attribute seems specific to the left language lobe.
Artwork from Alice and Martin Provensen’s vintage pop-up book about the life of Leonardo.
Echoing Hannah Arendt’s assertion that the ability to ask “unanswerable questions” is the hallmark of the human mind and F. Scott Fitzgerald’s famous aphorism that “the test of a first-rate intelligence is the ability to hold two opposed ideas in the mind at the same time, and still retain the ability to function,” Shlain describes how this interplay illuminates the creative process:
The first step in the creative process is for an event, an unidentified object, an unusual pattern, or a strange juxtaposition to alert the right brain. In a mysterious process not well understood, it prods the left brain to pose a question. Asking the right question goes to the heart of creativity. Questions are a Homo sapiens forte. Despite the amazing variation in animal communication, there is only one species that can ask a question and — most impressively — dispute the answer. But Mother Nature would not have provided us with language simply to ask a question. She had to equip us with a critical appendage that could investigate those questions. That appendage was the opposable thumb. Thumbs have a lot to do with curiosity, which in turn leads to creativity.
Building on previous research on the four stages of the creative process, Shlain outlines the role of the two hemispheres which, despite working in concert most of the time, are subject to the dominance of the left hemisphere:
Natural Selection gave the left hemisphere hegemony over the right. Under certain circumstances, however, the minor hemisphere must escape the control of the major one to produce its most outstanding contribution — creativity. For creativity to manifest itself, the right brain must free itself from the deadening hand of the inhibitory left brain and do its work, unimpeded and in private. Like radicals plotting a revolution, they must work in secret out of the range of the left hemisphere’s conservatives. After working out many of the kinks in the darkness of the right hemisphere’s subterranean processes, the idea, play, painting, theory, formula, or poetic metaphor surfaces exuberantly, as if from beneath a manhole cover that was overlaying the unconscious, and demands the attention of the left brain. Startled, the other side responds in wonderment.
When a creative impulse arises in the right hemisphere, Shlain writes, it is ferried over to the left side of the brain via the mighty corpus callosum — the largest and most poorly understood structure in the human brain, and a significant key to the mystery of Leonardo’s extraordinary creativity in attaining the two grand goals of his life: to study and discern the truth behind natural phenomena, and to communicate that truth with astounding artistry. 
Illustration by Ralph Steadman from ‘I, Leonardo.’
But Shlain’s most intriguing point about Leonardo’s brain has to do with the corpus callosum and its relation to the gendered brain. We already know that “psychological androgyny” is key to creativity, and it turns out that the corpus callosum has a major role in that. For one thing, Shlain points out, there are differences in the size of that essential bundle of fibers between right-handed heterosexual males, or RHHM, and all other variants of handedness, gender, and orientation — left-handed heterosexual males, heterosexual women of both hand dominances, and homosexual men and women. The notion of the gendered brain is, of course, problematic and all sweeping statistical generalizations tend to exist on bell-shaped curves, with outliers on either side. Still, Shlain relays some fascinating findings:
The most dichotomous brain — that is, where the two hemispheres are the most specialized — belongs to a right-handed heterosexual male. Approximately 97 percent of key language modules reside in his left hemisphere, making it unequivocally his dominant lobe. This extreme skewing is not present to the same degree in women, both right- and left-handed; gays and lesbians; and left-handers of both sexes.
[…]
Females, right- or left-handed, have a more even distribution between the lobes regarding language and brain dominance. Right-handed women still have the large majority of their language modules in their left brains, but whereas an RHHM would most likely have 97 percent of his wordsmithing skills concentrated in the left lobe, a woman would be more likely to have a lesser percentage (about 80 percent) in the left brain, and the remaining 20 percent in the right brain.
Shlain cites MRI research by Sandra Witelson, who found that the anterior commissure, the largest of the corpus callosum’s anatomically distinct “component cables,” can be up to 30% larger in women than in men, and other studies have found that it is 15% larger in gay men than in straight men. Taken together, these two findings about the corpus callosum — that RHHMs have more specialized brains and slimmer connecting conduits between the two hemispheres — reveal important deductive insight about Leonardo’s multi-talented brain, which fused so elegantly the prototypical critical thinking of the left hemisphere with the wildly creative and imaginative faculties of the right. Evidence from his notebooks and life strongly suggests that Leonardo was what scientists call an ESSP — an individual with exclusive same-sex preference. He never married or had children, rarely referenced women in his writings, and whenever he did, it was only in the context of deciphering beauty; he was once jailed for homosexual conduct and spent some time in prison while awaiting a verdict; his anatomical drawings of the female reproductive system and genitalia are a stark outlier of inaccuracy amid his otherwise remarkably medically accurate illustrations. All of this is significant because ESSPs don’t conform to the standard brain model of RHHMs. They are also more likely to be left-handed, as Leonardo was. In fact, Shlain points out, left-handers tend to have a larger corpus callosum than right-handers, and artists in general are more likely to be left-handed than the average person: around 9% of the general population are estimated to be left-handed, and 30-40% of the student body in art schools are lefties. A left-handed ESSP, Leonardo was already likely to have a larger corpus callosum, but Shlain turns to the power of metaphor in illuminating the imagination for further evidence suggesting heightened communication between his two hemispheres:
The form of language that Leonardo used was highly metaphorical. He posed riddles and buried metaphors in his paintings. For this to occur, he had to have had a large connection of corpus callosum fibers between his right hemisphere and his left. The form of language based on metaphor — poetry, for instance — exists in the right hemisphere, even though language is primarily a left hemispheric function. To accomplish the task of the poet, a significant connection must exist between the parts of the right hemisphere, and, furthermore, there must be many interconnections between the two hemispheres. These fibers must be solidly welded to the language centers in the left hemisphere so that poetic metaphors can be expressed in language. Leonardo used the metaphor in his writings extensively — another example of connected hemispheres.
And therein lies Shlain’s point: The source of Leonardo’s extraordinary creativity was his ability to access different ways of thinking, to see more clearly the interconnectedness of everything, and in doing so, to reach a different state of consciousness than the rest of us:
His ESSP-ness put him somewhere between the masculine and the feminine. His left-handedness, ambidexterity, and mirror writing were indications of a nondominant brain. His adherence to vegetarianism at a time when most everyone was eating meat suggests a holistic view of the world. The equality between his right and left hemispheres contributed to his achievements in art and science, unparalleled by any other individual in history. His unique brain wiring also allowed him the opportunity to experience the world from the vantage point of a higher dimension. The inexplicable wizardry present in both his art and his science can be pondered only by stepping back and asking: Did he have mental faculties that differed merely in degree, or did he experience a form of cognition qualitatively different from the rest of us? I propose that many of Leonardo’s successes (and failures) were the result of his gaining access to a higher consciousness.
Significantly, Leonardo was able to envision time and space differently from the rest of us, something evidenced in both his art and his scientific studies, from revolutionizing the art perspective to predating Newton’s famous action-reaction law by two centuries when he wrote, “See how the wings, striking the air, sustain the heavy eagle in the thin air on high. As much force is exerted by the object against the air as by the air against the object.” Shlain poses the ultimate question:
When pondering Leonardo’s brain we must ask the question: Did his brain perhaps represent a jump toward the future of man? Are we as a species moving toward an appreciation of space-time and nonlocality?
Illustration by Ralph Steadman from ‘I, Leonardo.’
With an eye to Leonardo’s unflinching nonconformity — his pacifism in an era that glorified war, his resolute left-handedness despite concentrated efforts at the time to train children out of that devilish trait, his vegetarianism and holistic faith in nature amid a carnivorous culture — Shlain turns an optimistic gaze to the evolution of our species:
The appearance of Leonardo in the gene pool gives us hope. He lived in an age when war was accepted. Yet, later in life, he rejected war and concentrated on the search for truth and beauty. He believed he was part of nature and wanted to understand and paint it, not control it. […] We humans are undergoing a profound metamorphosis as we transition into an entirely novel species. For those who doubt it is happening, remember: For millions of years dogs traveled in packs as harsh predators, their killer instinct close to the surface. Then humans artificially interfered with the canine genome beginning a mere six thousand years ago. No dog could have predicted in prehistoric times that the huge, snarling member, faithful to a pack, would evolve into individual Chihuahuas and lap-sitting poodles.
Leonardo’s Brain is a mind-bending, consciousness-stretching read in its totality. Complement it with Shlain on integrating wonder and wisdom and how the alphabet sparked the rise of patriarchy.

How Watson Changed IBM

By admin,

ORIGINAL: HBR
by Brad Power
August 22, 2014

Remember when IBM’s “Watson” computer competed on the TV game show “Jeopardy” and won? Most people probably thought “Wow, that’s cool,” or perhaps were briefly reminded of the legend of John Henry and the ongoing contest between man and machine. Beyond the media splash it caused, though, the event was viewed as a breakthrough on many fronts. Watson demonstrated that machines could understand and interact in a natural language, question-and-answer format and learn from their mistakes. This meant that machines could deal with the exploding growth of non-numeric information that is getting hard for humans to keep track of: to name two prominent and crucially important examples,

  • keeping up with all of the knowledge coming out of human genome research, or 
  • keeping track of all the medical information in patient records.
So IBM asked the question: How could the fullest potential of this breakthrough be realized, and how could IBM create and capture a significant portion of that value? They knew the answer was not by relying on traditional internal processes and practices for R&D and innovation. Advances in technology — especially digital technology and the increasing role of software in products and services — are demanding that large, successful organizations increase their pace of innovation and make greater use of resources outside their boundaries. This means internal R&D activities must increasingly shift towards becoming crowdsourced, taking advantage of the wider ecosystem of customers, suppliers, and entrepreneurs.
IBM, a company with a long and successful tradition of internally-focused R&D activities, is adapting to this new world of creating platforms and enabling open innovation. Case in point, rather than keep Watson locked up in their research labs, they decided to release it to the world as a platform, to run experiments with a variety of organizations to accelerate development of natural language applications and services. In January 2014 IBM announced they were spending $1 billion to launch the Watson Group, including a $100 million venture fund to support start-ups and businesses that are building Watson-powered apps using the “Watson Developers Cloud.” More than 2,500 developers and start-ups have reached out to the IBM Watson Group since the Watson Developers Cloud was launched in November 2013.

So how does it work? First, with multiple business models. Mike Rhodin, IBM’s senior vice president responsible for Watson, told me, “There are three core business models that we will run in parallel. 

  • The first is around industries that we think will go through a big change in “cognitive” [natural language] computing, such as financial services and healthcare. For example, in healthcare we’re working with The Cleveland Clinic on how medical knowledge is taught. 
  • The second is where we see similar patterns across industries, such as how people discover and engage with organizations and how organizations make different kinds of decisions. 
  • The third business model is creating an ecosystem of entrepreneurs. We’re always looking for companies with brilliant ideas that we can partner with or acquire. With the entrepreneur ecosystem, we are behaving more like a Silicon Valley startup. We can provide the entrepreneurs with access to early adopter customers in the 170 countries in which we operate. If entrepreneurs are successful, we keep a piece of the action.”
IBM also had to make some bold structural moves in order to create an organization that could both function as a platform and collaborate with outsiders for open innovation. They carved out The Watson Group as a new, semi-autonomous, vertically integrated unit, reporting to the CEO. They brought in 2000 people, a dozen projects, a couple of Big Data and content analytics tools, and a consulting unit (outside of IBM Global Services). IBM’s traditional annual budget cycle and business unit financial measures weren’t right for Watson’s fast pace, so, as Mike Rhodin told me, “I threw out the annual planning cycle and replaced it with a looser, more agile management system. In monthly meetings with CEO Ginni Rometty, we’ll talk one time about technology, and another time about customer innovations. I have to balance between strategic intent and tactical, short-term decision-making. Even though we’re able to take the long view, we still have to make tactical decisions.”

More and more, organizations will need to make choices in their R&D activities to either create platforms or take advantage of them. 

Those with deep technical and infrastructure skills, like IBM, can shift the focus of their internal R&D activities toward building platforms that can connect with ecosystems of outsiders to collaborate on innovation.

The second and more likely option for most companies is to use platforms like IBM’s or Amazon’s to create their own apps and offerings for customers and partners. In either case, new, semi-autonomous agile units, like IBM’s Watson Group, can help to create and capture huge value from these new customer and entrepreneur ecosystems.



IBM Chip Processes Data Similar to the Way Your Brain Does

By admin,

A chip that uses a million digital neurons and 256 million synapses may signal the beginning of a new era of more intelligent computers.
WHY IT MATTERS

Computers that can comprehend messy data such as images could revolutionize what technology can do for us.

New thinking: IBM has built a processor designed using principles at work in your brain.
A new kind of computer chip, unveiled by IBM today, takes design cues from the wrinkled outer layer of the human brain. Though it is no match for a conventional microprocessor at crunching numbers, the chip consumes significantly less power, and is vastly better suited to processing images, sound, and other sensory data.
IBM’s SyNapse chip, as it is called, processes information using a network of just over one million “neurons,” which communicate with one another using electrical spikes—as actual neurons do. The chip uses the same basic components as today’s commercial chips—silicon transistors. But its transistors are configured to mimic the behavior of both neurons and the connections—synapses—between them.
The SyNapse chip breaks with a design known as the von Neumann architecture that has underpinned computer chips for decades. Although researchers have been experimenting with chips modeled on brains—known as neuromorphic chips—since the late 1980s, until now all have been many times less complex, and not powerful enough to be practical (see “Thinking in Silicon”). Details of the chip were published today in the journal Science.
The new chip is not yet a product, but it is powerful enough to work on real-world problems. In a demonstration at IBM’s Almaden research center, MIT Technology Review saw one recognize cars, people, and bicycles in video of a road intersection. A nearby laptop that had been programmed to do the same task processed the footage 100 times slower than real time, and it consumed 100,000 times as much power as the IBM chip. IBM researchers are now experimenting with connecting multiple SyNapse chips together, and they hope to build a supercomputer using thousands.
When data is fed into a SyNapse chip it causes a stream of spikes, and its neurons react with a storm of further spikes. The just over one million neurons on the chip are organized into 4,096 identical blocks of 250, an arrangement inspired by the structure of mammalian brains, which appear to be built out of repeating circuits of 100 to 250 neurons, says Dharmendra Modha, chief scientist for brain-inspired computing at IBM. Programming the chip involves choosing which neurons are connected, and how strongly they influence one another. To recognize cars in video, for example, a programmer would work out the necessary settings on a simulated version of the chip, which would then be transferred over to the real thing.
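A rough sense of what such programming means can be conveyed with a toy spiking-network simulation (our own sketch, not IBM’s tools): the “program” is the choice of which neurons connect and with what strength, and a neuron fires only when incoming spikes push its potential past a threshold, rather than executing instructions in sequence. The block size, sparsity, threshold, and leak factor below are illustrative assumptions.

    # Toy spiking-network simulation (illustrative only, not IBM's SyNapse tools).
    import numpy as np

    rng = np.random.default_rng(1)
    n_neurons = 250                            # loosely echoing one block of neurons
    weights = rng.normal(0.0, 0.4, (n_neurons, n_neurons))   # chosen connectivity
    weights[rng.random((n_neurons, n_neurons)) > 0.1] = 0.0  # keep the wiring sparse
    threshold, leak = 1.0, 0.9

    potential = np.zeros(n_neurons)
    spikes = rng.random(n_neurons) < 0.05      # input data arrives as a burst of spikes

    for step in range(20):
        # Each spike delivers weighted charge to the neurons it connects to.
        potential = leak * potential + weights @ spikes
        spikes = potential > threshold         # fire only when the threshold is crossed
        potential[spikes] = 0.0                # neurons that fired reset
        print(f"step {step:2d}: {int(spikes.sum())} neurons fired")

There is no central program counter in this loop’s logic: activity either propagates or dies out depending entirely on the wiring, which is why programming such a chip amounts to choosing connections and weights, typically worked out on a simulator first, as the article describes.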
In recent years, major breakthroughs in image analysis and speech recognition have come from using large, simulated neural networks to work on data (see “Deep Learning”). But those networks require giant clusters of conventional computers. As an example, Google’s famous neural network capable of recognizing cat and human faces required 1,000 computers with 16 processors apiece (see “Self-Taught Software”).
Although the new SyNapse chip has more transistors than most desktop processors, or any chip IBM has ever made, with over five billion, it consumes strikingly little power. When running the traffic video recognition demo, it consumed just 63 milliwatts of power. Server chips with similar numbers of transistors consume tens of watts of power—around 10,000 times more.
The efficiency of conventional computers is limited because they store data and program instructions in a block of memory that’s separate from the processor that carries out instructions. As the processor works through its instructions in a linear sequence, it has to constantly shuttle information back and forth from the memory store—a bottleneck that slows things down and wastes energy.
IBM’s new chip doesn’t have separate memory and processing blocks, because its neurons and synapses intertwine the two functions. And it doesn’t work on data in a linear sequence of operations; individual neurons simply fire when the spikes they receive from other neurons cause them to.
Horst Simon, the deputy director of Lawrence Berkeley National Lab and an expert in supercomputing, says that until now the industry has focused on tinkering with the von Neumann approach rather than replacing it, for example by using multiple processors in parallel, or using graphics processors to speed up certain types of calculations. The new chip “may be a historic development,” he says. “The very low power consumption and scalability of this architecture are really unique.”
One downside is that IBM’s chip requires an entirely new approach to programming. Although the company announced a suite of tools geared toward writing code for its forthcoming chip last year (see “IBM Scientists Show Blueprints for Brainlike Computing”), even the best programmers find learning to work with the chip bruising, says Modha: “It’s almost always a frustrating experience.” His team is working to create a library of ready-made blocks of code to make the process easier.
Asking the industry to adopt an entirely new kind of chip and way of coding may seem audacious. But IBM may find a receptive audience because it is becoming clear that current computers won’t be able to deliver much more in the way of performance gains. “This chip is coming at the right time,” says Simon.
ORIGINAL: Tech Review
August 7, 2014


How to Have Great Ideas More Often, According to Science

By admin,

Ah, ideas. Who doesn’t want more great ideas? I know I do. I usually think about ideas as being magical and hard to produce. I expect them to just show up without me cultivating them, and I often get frustrated when they don’t show up when I need them. The good news is that it turns out cultivating ideas is a process, and one that we can practice to produce more (and hopefully better) ideas.
On the other hand, great ideas often just come to us in the shower or some other relaxing environment. Let’s take a look at the science of the creative process.
How Our Brains Work Creatively
So far, science hasn’t really determined exactly what happens in our brains during the creative process, since it really combines a whole bunch of different brain processes. And, contrary to popular belief, it includes both sides of our brains working together, rather than just one or the other.

 

The truth is, our brain hemispheres are inextricably connected. The two sides of our brains are simply distinguished by their different processing styles. The idea that people can be “right brain thinkers” or “left brain thinkers” is actually a myth that I’ve debunked before.
The origins of this common myth came from some 1960s research on patients whose corpus callosum (the band of neural fibers that connect the hemispheres) had been cut as a last-resort treatment for epilepsy. This removed the natural process of cross-hemisphere communication, and allowed scientists to conduct experiments on how each hemisphere worked in isolation.
Unless you’ve had this procedure yourself, or had half of your brain removed, you’re not right or left brained.
The Three Areas of the Brain Used for Creative Thinking
Among all the networks and specific centers in our brains, there are three that are known for being used in creative thinking.
  • The Attentional Control Network helps us with laser focus on a particular task. It’s the one that we activate when we need to concentrate on complicated problems or pay attention to a task like reading or listening to a talk.
  • The Imagination Network as you might have guessed, is used for things like imagining future scenarios and remembering things that happened in the past. This network helps us to construct mental images when we’re engaged in these activities.
  • The Attentional Flexibility Network has the important role of monitoring what’s going on around us, as well as inside our brains, and switching between the Imagination Network and Attentional Control for us.
You can see the Attentional Control Network (in green) and the Imagination Network (in red) in the image below.
A recent review by Rex Jung and colleagues explained what they think might be happening in our brains when we get creative. It generally involves two parts: partially reducing activation of the Attentional Control Network, which helps let inspiration in and new ideas form, and increasing activation of the Imagination and Attentional Flexibility Networks.
Research on jazz musicians and rappers improvising creative work on the spot showed that when they enter that coveted flow state of creativity, their brains exhibit these signs.
Producing New Ideas Is a Process
In his book A Technique for Producing Ideas, James Webb Young explains that while the process for producing new ideas is simple enough to explain, “it actually requires the hardest kind of intellectual work to follow, so that not all who accept it use it.”
He also explains that working out where to find ideas is not the solution to finding more of them, but rather we need to train our minds in the process of producing new ideas naturally.
The Two General Principles of Ideas
James describes two principles of the production of ideas, which I really like:
  1. An idea is nothing more or less than a new combination of old elements.
  2. The capacity to bring old elements into new combinations depends largely on the ability to see relationships.
This second one is really important in producing new ideas, but it’s something our minds need to be trained in. To help our brains get better at delivering good ideas to us, we need to do some preparation first. Let’s take a look at what it takes to prime our brains for idea-generation.
Preparing to Get New Ideas
Since ideas are made from finding relationships between existing elements, we need to collect a mental inventory of these elements before we can start connecting them. James also notes in his book how we often approach this process incorrectly:
Instead of working systematically at the job of gathering raw material we sit around hoping for inspiration to strike us.

Preparing your brain for the process of making new connections takes time and effort. We need to get into the habit of collecting information that’s all around us so our brains have something to work with. James offers a couple of ideas in his book:
  • using index cards to organize and distill information into bite-sized pieces
  • using a scrapbook or file, cross-indexed so you can find what you need, when you need it
Bringing it All Together
The hard work is mostly in gathering the materials your brain needs to form new connections, but you can do a lot to help your brain process all of this information as well. In a paper, neuroscientist Dr. Mark Beeman explains how we come to our final “aha” moment of producing an idea by way of other activities:
A series of studies have used electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) to study the neural correlates of the “Aha! moment” and its antecedents. Although the experience of insight is sudden and can seem disconnected from the immediately preceding thought, these studies show that insight is the culmination of a series of brain states and processes operating at different time scales.
I love the way that John Cleese talks about these aspects of creativity and how our minds work. He gave an excellent talk years ago about how our brains develop ideas and solve creative problems, wherein he discussed the idea of our brains being like tortoises. Here’s how I explained his theory when I wrote about it earlier this year:

The idea is that your creativity acts like a tortoise—poking its head out nervously to see if the environment is safe before it fully emerges. Thus, you need to create a tortoise enclosure—an oasis amongst the craziness of modern life—to be a safe haven where your creativity can emerge.

He offers a couple of useful ideas to help you achieve this, as well:
  • Set Aside Time
    John says your thoughts need time to settle down before your creativity will feel safe enough to emerge and get to work. Setting aside time to think regularly can be a good way to train your mind to relax, eventually making this set time a safe haven for your tortoise mind to start putting together connections that could turn into ideas.
  • Find a Creative Space
    Setting aside time regularly sends a signal to your brain that it’s safe to work on creative ideas. Finding a particular space to be creative in can help, too. This is similar to the research on how the temperature and noise around us affects our creativity.
  • Let Your Brain Do the Work
    This may be one of the hardest, yet most important parts of the process of producing ideas. I think James Webb Young says it best: “Drop the whole subject and put it out of your mind and let your subconscious do its thing.”
Something else John Cleese talks about is how beneficial it can be to “sleep on a problem.” He recalls observing a dramatic change in his approach to a creative problem after having left it alone. He not only awoke with a perfectly clear idea on how to continue his work, but the problem itself was no longer apparent.
The trick here is to trust enough to let go. As we engage our conscious minds in other tasks, like sleeping or taking a shower, our subconscious can go to work on finding relationships in all the data we’ve collected so far.
The A-Ha Moment
James Webb Young explains the process of producing ideas in stages. Once we’ve completed the first three, which include gathering material and letting our subconscious process the data and find connections, he says we’ll come to an “Aha!” moment, when a great idea hits us:
It will come to you when you are least expecting it—while shaving, or bathing, or most often when you are half awake in the morning. It may waken you in the middle of the night.
How to Have More Great Ideas
Understanding the process our brains go through to produce ideas can help us to replicate this, but there are a few things we can do to nudge ourselves towards having better ideas, too.
Don’t Accept Your Ideas Immediately
The final stage of James’s explanation of idea production is to criticize your ideas: “Do not make the mistake of holding your idea close to your chest at this stage. Submit it to the criticism of the judicious.”
James says this will help you to expand on the idea and uncover possibilities you might have otherwise overlooked. Here it’s especially important to know whether you’re introverted or extroverted to criticize your ideas from the right perspective.
Overwhelm Your Brain
Surprisingly, you can actually hit your brain with more than it can handle and it will step up to the task. Robert Epstein explained in a Psychology Today article how challenging situations can bring out our creativity. Even if you don’t succeed at whatever you’re doing, you’ll wake up the creative areas of your brain and they’ll perform better after the failed task, to compensate.
Have More Bad Ideas to Have More Good Ones
It turns out that having a lot of bad ideas also means you’ll have a lot of good ideas. Studies at both MIT and the University of California, Davis have shown this. The sheer volume of ideas produced by some people means that they can’t help having lots of bad ones, but they’re likely to have more good ones as well.
Seth Godin wrote about how important it is to be willing to produce a lot of bad ideas, saying that people who have lots of ideas, like entrepreneurs, writers, and musicians, fail far more often than they succeed, but they fail less than those who have no ideas at all. He summed this up with an example that I love: “Someone asked me where I get all my good ideas, explaining that it takes him a month or two to come up with one and I seem to have more than that. I asked him how many bad ideas he has every month. He paused and said, ‘none.’”
Belle Beth Cooper is a content crafter at Buffer and co-founder of Hello Code. She writes about social media, startups, lifehacking, and science.
ORIGINAL: LifeHacker
BELLE BETH COOPER

Introducing Qualcomm Zeroth Processors: Brain-Inspired Computing

By admin,

ORIGINAL: Qualcomm
By Samir Kumar
October 10, 2013


Qualcomm’s technologies are designed from the ground up with speed and power efficiency in mind, so that devices using our products can run smoothly and maximize battery life. As mobile computing becomes increasingly pervasive, so do our expectations of the devices we use and interact with in our everyday lives. We want these devices to be smarter, to anticipate our needs, and to share our perception of the world so we can interact with them more naturally. The computational complexity of achieving these goals with traditional computing architectures is quite challenging, particularly in a power- and size-constrained environment, as opposed to in the cloud on supercomputers.

For the past few years our Research and Development teams have been working on a new computer architecture that breaks the traditional mold. We wanted to create a new computer processor that mimics the human brain and nervous system so devices can have embedded cognition driven by brain-inspired computing—this is Qualcomm Zeroth processing.

We have three main goals for Qualcomm Zeroth processors:

1. Biologically Inspired Learning

We want Qualcomm Zeroth products not only to mimic human-like perception but also to learn the way biological brains do. Instead of preprogramming behaviors and outcomes with a lot of code, we’ve developed a suite of software tools that enable devices to learn as they go and get feedback from their environment.

In the video below, we outfitted a robot with a Qualcomm Zeroth processor and placed it in an environment with colored boxes. We were then able to teach it to visit white boxes only. We did this through dopaminergic-based learning, a.k.a. positive reinforcement—not by programming lines of code.
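The video itself is not reproduced here, but the kind of reward-driven learning it demonstrates can be sketched very simply. The snippet below is not Qualcomm’s software; it is a plain bandit-style value update in Python, in which a reward signal, standing in for the dopaminergic feedback described above, gradually biases the learner toward white boxes.

    import random

    colors = ["white", "red", "blue", "green"]
    value = {c: 0.0 for c in colors}     # learned preference for each box color
    alpha, epsilon = 0.1, 0.2            # learning rate and exploration rate

    def reward(color):
        # Positive reinforcement: only visiting a white box is rewarded.
        return 1.0 if color == "white" else 0.0

    for trial in range(500):
        if random.random() < epsilon:    # occasionally try a random box
            choice = random.choice(colors)
        else:                            # otherwise pick the currently preferred box
            choice = max(colors, key=lambda c: (value[c], random.random()))
        # The reward prediction error drives the update; no behavior is hard-coded.
        value[choice] += alpha * (reward(choice) - value[choice])

    print(value)   # after training, "white" carries by far the highest value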

2. Enable Devices To See and Perceive the World as Humans Do

Another major pillar of Zeroth processor function is striving to replicate the efficiency with which our senses and our brain communicate information. Neuroscientists have created mathematical models that accurately characterize how biological neurons behave when sending, receiving, or processing information. Neurons send precisely timed electrical pulses, referred to as “spikes,” only when a certain voltage threshold in the cell’s membrane is reached. These spiking neural networks (SNNs) encode and transmit data very efficiently, both in how our senses gather information from the environment and in how our brain processes and fuses it all together.
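One standard mathematical idealization of this threshold-and-spike behavior is the leaky integrate-and-fire neuron. The sketch below uses typical textbook constants, not Qualcomm’s models: the membrane voltage integrates an input current, leaks back toward its resting value, and emits a spike each time it crosses the threshold.

    # Leaky integrate-and-fire neuron (illustrative constants only).
    tau = 20.0          # membrane time constant, ms
    v_rest = -65.0      # resting potential, mV
    v_thresh = -50.0    # spike threshold, mV
    v_reset = -65.0     # potential right after a spike, mV
    dt = 1.0            # simulation step, ms
    current = 1.8       # constant input current, arbitrary units

    v = v_rest
    spike_times = []
    for t in range(200):                      # simulate 200 ms
        dv = (-(v - v_rest) + current * 10.0) / tau
        v += dv * dt
        if v >= v_thresh:                     # threshold reached: emit a spike
            spike_times.append(t)
            v = v_reset                       # reset and start integrating again
    print("spike times (ms):", spike_times)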

3. Creation and definition of a Neural Processing Unit—NPU

The final goal of Qualcomm Zeroth is to create, define, and standardize this new processing architecture—we call it a Neural Processing Unit (NPU). We envision NPUs in a variety of different devices, but also able to live side by side in future systems-on-chip. This way you can develop programs using traditional programming languages, or tap into the NPU to train the device for human-like interaction and behavior.

We’re looking forward to sharing more information; check back here for more developments on Qualcomm Zeroth processors.

Topics: Qualcomm Zeroth, Qualcomm Neo, Neural Processing Unit


Samir Kumar
Director, Business Development

 

Smart Machines: IBM’s Watson and the era of Cognitive Computing

By admin,

ORIGINAL: U Columbia
Now available from Columbia Business School Publishing: Smart Machines: IBM’s Watson and the Era of Cognitive Computing—by John E. Kelly III, Director of IBM Research, and Steve Hamm, writer at IBM and former business and technology journalist—introduces readers to the fascinating world of “cognitive systems,” allowing a glimpse into the possible future of computing.
   
Today, the world is on the cusp of a new phase in the evolution of computing–the era of cognitive systems. The victory of IBM’s Watson on the TV game show Jeopardy! signaled the dawn of this new era. Now, scientists and engineers at IBM and elsewhere are pushing the boundaries of science and technology with the goal of creating machines that sense, learn, reason and interact with people in new ways. Cognitive systems will help people and organizations penetrate complexity and make better decisions—potentially transforming business and society. This is a comprehensive perspective on the future of technology and a call for government, academia and the global tech industry to help power this wave of innovation.
About the authors: 
John E. Kelly III is senior vice president and director of IBM Research, one of the world’s largest commercial research organizations with over 3,000 scientists and technical employees at 12 laboratories in 10 countries. He also helps guide IBM’s overall technical strategy. Kelly’s top priorities as head of IBM Research are to stimulate innovation in key areas of information technology and quickly bring those innovations into the marketplace. Kelly received a bachelor of science degree in physics from Union College in 1976. He received a master of science degree in physics from the Rensselaer Polytechnic Institute in 1978 and his doctorate in materials engineering from RPI in 1980.

Steve Hamm is a writer at IBM. Previously, he was a business and technology journalist, most recently a senior writer at BusinessWeek magazine. He’s the author of two earlier books, Bangalore Tiger, about the rise of the Indian tech services industry, and The Race for Perfect, about innovation in mobile computing. He has a bachelor of arts degree in English and creative writing from Carnegie Mellon University.

How to crawl a quarter billion webpages in 40 hours

By admin,

ORIGINAL: Michael Nielsen
by Michael Nielsen
August 10, 2012

More precisely, I crawled 250,113,669 pages for just under 580 dollars in 39 hours and 25 minutes, using 20 Amazon EC2 machine instances.

I carried out this project because (among several other reasons) I wanted to understand what resources are required to crawl a small but non-trivial fraction of the web. In this post I describe some details of what I did. Of course, there’s nothing especially new: I wrote a vanilla (distributed) crawler, mostly to teach myself something about crawling and distributed computing. Still, I learned some lessons that may be of interest to a few others, and so in this post I describe what I did. The post also mixes in some personal working notes, for my own future reference.

What does it mean to crawl a non-trivial fraction of the web?
In fact, the notion of a “non-trivial fraction of the web” isn’t well defined. Many websites generate pages dynamically, in response to user input – for example, Google’s search results pages are dynamically generated in response to the user’s search query. Because of this it doesn’t make much sense to say there are so-and-so many billion or trillion pages on the web. This, in turn, makes it difficult to say precisely what is meant by “a non-trivial fraction of the web”. However, as a reasonable proxy for the size of the web we can use the number of webpages indexed by large search engines. According to this presentation by Googler Jeff Dean, as of November 2010 Google was indexing “tens of billions of pages”. (Note that the number of urls is in the trillions, apparently because of duplicated page content, and multiple urls pointing to the same content.) The now-defunct search engine Cuil claimed to index 120 billion pages. By comparison, a quarter billion is, obviously, very small. Still, it seemed to me like an encouraging start.

Code:
Originally I intended to make the crawler code available under an open source license at GitHub. However, as I better understood the cost that crawlers impose on websites, I began to have reservations. My crawler is designed to be polite and impose relatively little burden on any single website, but could (like many crawlers) easily be modified by thoughtless or malicious people to impose a heavy burden on sites. Because of this I’ve decided to postpone (possibly indefinitely) releasing the code.

There’s a more general issue here, which is this: who gets to crawl the web? Relatively few sites exclude crawlers from companies such as Google and Microsoft. But there are a lot of crawlers out there, many of them without much respect for the needs of individual site owners. Quite reasonably, many site owners take an aggressive approach to shutting down activity from less well-known crawlers. A possible side effect is that if this becomes too common at some point in the future, then it may impede the development of useful new services, which need to crawl the web. A possible long-term solution may be services like Common Crawl, which provide access to a common corpus of crawl data.

I’d be interested to hear other people’s thoughts on this issue.


Architecture: Here’s the basic architecture:

The master machine (my laptop) begins by downloading Alexa’s list of the top million domains. These were used both as a domain whitelist for the crawler, and to generate a starting list of seed urls.

The domain whitelist was partitioned across the 20 EC2 machine instances in the crawler. This was done by numbering the instances and then allocating each domain to instance number hash(domain) % 20, where hash is the standard Python hash function. … Continue reading
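Nielsen did not release his crawler code, so the snippet below is only a hypothetical reconstruction of the partitioning step he describes: each whitelisted domain is assigned to one of the 20 instances by hashing its name. One deliberate deviation: Python 3 randomizes the built-in hash() from process to process, so a stable digest (MD5 here) is used instead to keep assignments reproducible across machines.

    import hashlib

    NUM_INSTANCES = 20

    def instance_for(domain):
        # Stable stand-in for hash(domain) % 20: hash the domain name and take
        # the result modulo the number of crawler instances.
        digest = hashlib.md5(domain.encode("utf-8")).hexdigest()
        return int(digest, 16) % NUM_INSTANCES

    # Stand-in for Alexa's top-million domain whitelist.
    whitelist = ["example.com", "wikipedia.org", "github.com", "python.org"]

    partitions = {i: [] for i in range(NUM_INSTANCES)}
    for domain in whitelist:
        partitions[instance_for(domain)].append(domain)

    print({i: doms for i, doms in partitions.items() if doms})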

The Impact of Brain and Mind Research

By admin,

ORIGINAL: CMU
Cèilidh Weekend, McConomy Auditorium, University Center, Carnegie Mellon University
September 28, 2013

 

Understanding the brain is a grand challenge of science, and in April 2013, President Obama announced the federal BRAIN Initiative, whose goal is to create dramatic improvements in our understanding of brain function and dysfunction. Modeled loosely on the Human Genome Project, this initiative will require the development of new technologies, models, and computational approaches.

With so much at stake, what role can CMU and Pittsburgh play in this initiative? 

This panel of experts will discuss

  • the opportunities and challenges posed by the BRAIN Initiative, including
  • the potential of this work to bring about revolutionary changes in our understanding of the brain;
  • in our ability to understand, diagnose and treat brain disorders; and
  • in the development of models that mimic brain functions.
Opening Remarks: Subra Suresh, Carnegie Mellon University
Moderator: Michael Tarr, Carnegie Mellon University
Panelists:
Nathan Urban, Carnegie Mellon University
Marlene Behrmann, Carnegie Mellon University
Tom Mitchell, Carnegie Mellon University
Emery Brown, Massachusetts Institute of Technology, Harvard Medical School
Philip Rubin, Executive Office of the President of the United States