Interview: How Ray Kurzweil Plans To Revolutionize Search At Google

By admin,

ORIGINAL: Forbes

Raymond Kurzweil (Photo: Wikipedia)

When Google announced in January that Ray Kurzweil would be joining the company, a lot of people wondered why the phenomenally accomplished entrepreneur and futurist would want to work for a large company he didn’t start.

Kurzweil’s answer: No one but Google could provide the kind of computing and engineering resources he needed to fulfill his life’s work. Ever since age 14, the 65-year-old inventor of everything from music synthesizers to speech recognition systems has aimed to create a true artificial intelligence, even going so far as to predict that machines would match human intelligence by 2029.

Now, as a director of engineering at Google, he’s focusing specifically on enabling computers to truly understand and even speak in natural language. As I outlined in a recent story on deep learning–a fast-rising branch of AI that attempts to mimic the human neocortex to recognize patterns in speech, images, and other data–Kurzweil eventually wants to help create a “cybernetic friend” that knows what you want before you do.

Kurzweil’s focus is timely from a competitive standpoint as well. Google upped the ante on Apr. 29 by bringing its Google Now voice search app to the iPhone and iPad, in direct competition with Apple’s Siri. And Facebook just revealed that it built a natural-language interface for its Graph Search service announced earlier this year. It’s becoming clear that search is already starting to move beyond the “caveman queries” that characterized effective search techniques until recently.

In a recent interview I conducted for the story, Kurzweil revealed a surprising amount of detail about his planned work at Google. No doubt the nature of that work will evolve as he settles in at the company, but this interview provides possibly the deepest look so far at his plans.

At least initially, that work won’t relate directly to advertising, the main subject of this blog. But marketers will need to understand how profoundly Kurzweil’s and others’ work at Google could change not only what search will become in the age of more and more intelligent machines, but also the way we interact with information and even each other. All that is sure to mean big changes in the nature of advertising and marketing–well before 2029.

Q: In your book, How to Create a Mind, you lay out a theory of how the brain works. Can you explain it briefly?

A: The world is hierarchical. Only mammals have a neocortex, and the neocortex evolved to provide a better understanding of the structure of the world so you can do a better job of modifying it to your needs and solving problems within a hierarchical world. We think in a hierarchical manner. Our first invention was language, and language is hierarchical.


Talking about the Computational Future at SXSW 2013

By admin,

March 19, 2013
Last week I gave a talk at SXSW 2013 in Austin about some of the things I’m thinking about these days—including quite a few that I’ve never talked publicly about before. Here’s a video, and a slightly edited transcript:
Well, this is a pretty exciting time for me. Because it turns out that a whole bunch of things that I’ve been working on for more than 30 years are all finally converging, in a very nice way. And what I’d like to do here today is tell you a bit about that, and about some things I’ve figured out recently—and about what it all means for our future.
 
This is going to be a bit of a wild talk in some ways. It’s going to go from pretty intellectual stuff about basic science and so on, to some really practical technology developments, with a few sneak peeks at things I’ve never shown before.
 
Let’s start from some science. And you know, a lot of what I’ll say today connects back to what I thought at first was a small discovery that I made about 30 years ago. Let me tell you the story.
 
I started out at a pretty young age as a physicist. Diligently doing physics pretty much the way it had been done for 300 years. Starting from this-or-that equation, and then doing the math to figure out predictions from it. That worked pretty well in some cases. But there were too many cases where it just didn’t work. So I got to wondering whether there might be some alternative; a different approach.
 
At the time I’d been using computers as practical tools for quite a while—and I’d even created a big software system that was a forerunner of Mathematica. And what I gradually began to think was that actually computers—and computation—weren’t just useful tools; they were actually the main event. And that one could use them to generalize how one does science: to think not just in terms of math and equations, but in terms of arbitrary computations and programs.
 
So, OK, what kind of programs might nature use? Given how complicated the things we see in nature are, we might think the programs it’s running must be really complicated. Maybe thousands or millions of lines of code. Like programs we write to do things.
 
But I thought: let’s start simple. Let’s find out what happens with tiny programs—maybe a line or two of code long. And let’s find out what those do. So I decided to do an experiment. Just set up programs like that, and run them. Here’s one of the ones I started with. It’s called a cellular automaton. It consists of a line of cells, each one either black or white. And it runs down the page computing the new color of each cell using the little rule at the bottom there.
 
 
OK, so there’s a simple program, and it does something simple. But let’s point our computational telescope out into the computational universe and just look at all simple programs that work like the one here.
 
 
Well, we see a bunch of things going on. Often pretty simple. A repeating pattern. Sometimes a fractal. But you don’t have to go far before you see much stranger stuff.
 
This is a program I call “rule 30”. What’s it doing? Let’s run it a little longer.
 
 
That’s pretty complicated. And if we just saw this somewhere out there, we’d probably figure it was pretty hard to make. But actually, it all comes just from that tiny program at the bottom. That’s it. And when I first saw this, it was my sort of little modern “Galileo moment”. I’d seen something through my computational telescope that eventually made me change my whole world view. And made me realize that computation—even as done by a tiny program like the one here—is vastly more powerful and important than I’d ever imagined.
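To make this concrete, here is a minimal Python sketch of an elementary cellular automaton like rule 30 (an illustrative toy, not code from the talk; it assumes wraparound boundaries, and it uses the eight binary digits of the rule number as the lookup table):

```python
# Minimal elementary cellular automaton, e.g. rule 30.
# Each cell is 0 or 1; a cell's new value depends only on itself and its
# two neighbors, looked up among the 8 binary digits of the rule number.

def step(cells, rule):
    """Apply one step of an elementary CA rule (wraparound boundaries)."""
    n = len(cells)
    return [
        (rule >> (4 * cells[(i - 1) % n] + 2 * cells[i] + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

def run(rule=30, width=63, steps=30):
    cells = [0] * width
    cells[width // 2] = 1  # start from a single black cell, as in the slides
    for _ in range(steps):
        print("".join("#" if c else "." for c in cells))
        cells = step(cells, rule)

run()
```

Each printed row is one step; with rule 30 the rows quickly form the kind of irregular pattern described above.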
 
 
Well, I’ve spent the past few decades working through the consequences of this. And it’s led me to build a new kind of science, to create all sorts of practical technology, and to make me think about almost everything in a different way. I published a big book about the science about ten years ago. And when the book came out, there was quite a bit of “paradigm shift turbulence”. But looking back it’s really nice to see how well the science has taken root.
 
 
 
And for example there are models based on my kinds of simple programs showing up everywhere. After 300 years of being dominated by Newton-style equations and math, the frontiers are definitely now going to simple programs and the new kind of science.
 
But there’s still one ultimate app out there to be done: to figure out the fundamental theory of physics—to figure out how our whole universe works. It’s kind of tantalizing. We see these very simple programs, with very complex behavior.
 
 
It makes one think that maybe there’s a simple program for our whole universe. And that even though physics seems to involve more and more complicated equations, that somewhere underneath it all there might just be a tiny little program. We don’t know if things work that way. But if out there in the computational universe of possible programs, the program for our universe is just sitting there waiting to be found, it seems embarrassing not to be looking for it.
 
Now if there is indeed a simple program for our universe, it’s sort of inevitable that it has to operate kind of underneath our standard notions like space and time and so on. Maybe it’s a little like this.
 
 
A giant network of nodes that make up space, a bit like molecules make up the air in this room. Well, you can start just trying possible programs that create such things. Each one is in a sense a candidate universe.
 
 
And when you do this, you can pretty quickly say most of them can’t be our universe. Time stops after an instant. There are an infinite number of dimensions. There can’t be particles or matter. Or other pathologies.

What is a Smarter Planet? Instrumented. Intelligent. Interconnected.

By admin,

ORIGINAL: IBM

On a smarter planet, we want to change the paradigm from react to anticipate

For five years, IBMers have been working with companies, cities and communities around the world to build a Smarter Planet.

We’ve seen enormous advances, as leaders are using an explosion of data to transform their enterprises and institutions through analytics, mobile technology, social business and the cloud.

We’ve also seen how this new era is starting to create winners. They’re changing how their decisions are made. They’re redesigning how their teams work, reassessing how to serve their customers, and changing the very nature of business.

It’s the ability to harness data that gives these leaders their competitive advantage in the era of “smart.”

Today, conventions once universally held are giving way to new perspectives, new ways of working, and new solutions across industries. Roles are changing. And more than ever, leaders need a partner to help them adapt.

 

What can you do on a smarter planet?

To outperform on a smarter planet, enterprises face some fundamental needs:

Turn information into insights

Organizations are overwhelmed with data. On a smarter planet, the most successful organizations can turn this data into valuable insights about customers, operations, even pricing. With advanced analytics, you can open new opportunities for business optimization by enabling rapid, informed and confident decisions and actions.

Read more about Smarter Analytics.

Connect and empower people

Innovation comes from collaboration. And collaboration comes from everywhere. Firms that embrace the power of social technologies will unleash productivity and innovation throughout the entire value chain—from employees to partners to suppliers to customers.

Read more about Social Business.

 

The cloud removes restraints

Smarter comes at a cost: hardware, programs, people to run them. Cloud computing offers multiple ways to reduce that cost through efficient use of resources. Utilizing the cloud means not having to power idle equipment and being able to rethink and redistribute software quickly and easily. It also means a nimbler, more efficient organization.

Read more about Cloud Computing.

 

Customers come of age

There’s a new breed of customers today. Empowered by technology, transparency and abundant information, they want to engage with companies on their own terms―when they want and how they want. To engage and keep these customers, organizations need a whole new integrated approach. There’s no room for business as usual.

Read more about Smarter Commerce.

 

Business moves to mobility

Even as storefronts had to adapt to the Internet, commerce is adapting to mobility. Armed with smartphones and tablets, consumers want to use those devices to browse, shop and pay. Today’s leaders recognize that desire and are building mobile enterprises in response.

Read more about Mobile Enterprise.

Manage risk, security and compliance

Even on a smarter planet there are risks: security, credit, market, operational, environmental and compliance risks…to name a few. With the right process and system improvements, leaders can identify, assess and monitor these risks to mitigate and prevent them.

Read more about Smarter Security.

 

Integrated solutions pave the way

While many enterprises share similarities, those are mostly superficial. To achieve the most from an information technology system today, your organization needs a solution that is tailored to your objectives and needs. Integrating the hardware and software into a single system provides the most power, the least pain and the best outcomes.

Read more about PureSystems and PureData.

Drive enterprises' effectiveness and efficiency

In a slow growth environment, organizations must do more with less. To succeed, your organization must drive continuous and sustainable operational improvements to lower costs and reduce complexity.

Read more about Smarter Computing.

 

 


MIT’s 2013 Top 10 Breakthrough Technologies – 1: Deep Learning

By admin,

Deep Learning
With massive amounts of computational power, machines can now recognize objects and translate speech in real time. Artificial intelligence is finally getting smart.
Image by Jimmy Turrell

When Ray Kurzweil met with Google CEO Larry Page last July, he wasn’t looking for a job. A respected inventor who’s become a machine-intelligence futurist, Kurzweil wanted to discuss his upcoming book How to Create a Mind. He told Page, who had read an early draft, that he wanted to start a company to develop his ideas about how to build a truly intelligent computer: one that could understand language and then make inferences and decisions on its own.




It quickly became obvious that such an effort would require nothing less than Google-scale data and computing power. “I could try to give you some access to it,” Page told Kurzweil. “But it’s going to be very difficult to do that for an independent company.” So Page suggested that Kurzweil, who had never held a job anywhere but his own companies, join Google instead. It didn’t take Kurzweil long to make up his mind: in January he started working for Google as a director of engineering. “This is the culmination of literally 50 years of my focus on artificial intelligence,” he says.
Kurzweil was attracted not just by Google’s computing resources but also by the startling progress the company has made in a branch of AI called deep learning. Deep-learning software attempts to mimic the activity in layers of neurons in the neocortex, the wrinkly 80 percent of the brain where thinking occurs. The software learns, in a very real sense, to recognize patterns in digital representations of sounds, images, and other data.
The basic idea—that software can simulate the neocortex’s large array of neurons in an artificial “neural network”—is decades old, and it has led to as many disappointments as breakthroughs. But because of improvements in mathematical formulas and increasingly powerful computers, computer scientists can now model many more layers of virtual neurons than ever before.
With this greater depth, they are producing remarkable advances in speech and image recognition. Last June, a Google deep-learning system that had been shown 10 million images from YouTube videos proved almost twice as good as any previous image recognition effort at identifying objects such as cats. Google also used the technology to cut the error rate on speech recognition in its latest Android mobile software. In October, Microsoft chief research officer Rick Rashid wowed attendees at a lecture in China with a demonstration of speech software that transcribed his spoken words into English text with an error rate of 7 percent, translated them into Chinese-language text, and then simulated his own voice uttering them in Mandarin. That same month, a team of three graduate students and two professors won a contest held by Merck to identify molecules that could lead to new drugs. The group used deep learning to zero in on the molecules most likely to bind to their targets.
Google in particular has become a magnet for deep learning and related AI talent. In March the company bought a startup cofounded by Geoffrey Hinton, a University of Toronto computer science professor who was part of the team that won the Merck contest. Hinton, who will split his time between the university and Google, says he plans to “take ideas out of this field and apply them to real problems” such as image recognition, search, and natural-language understanding.
All this has normally cautious AI researchers hopeful that intelligent machines may finally escape the pages of science fiction. Indeed, machine intelligence is starting to transform everything from communications and computing to medicine, manufacturing, and transportation. The possibilities are apparent in IBM’s Jeopardy!-winning Watson computer, which uses some deep-learning techniques and is now being trained to help doctors make better decisions. Microsoft has deployed deep learning in its Windows Phone and Bing voice search.
Extending deep learning into applications beyond speech and image recognition will require more conceptual and software breakthroughs, not to mention many more advances in processing power. And we probably won’t see machines we all agree can think for themselves for years, perhaps decades—if ever. But for now, says Peter Lee, head of Microsoft Research USA, “deep learning has reignited some of the grand challenges in artificial intelligence.”
Building a Brain
There have been many competing approaches to those challenges. One has been to feed computers with information and rules about the world, which required programmers to laboriously write software that is familiar with the attributes of, say, an edge or a sound. That took lots of time and still left the systems unable to deal with ambiguous data; they were limited to narrow, controlled applications such as phone menu systems that ask you to make queries by saying specific words.
Neural networks, developed in the 1950s not long after the dawn of AI research, looked promising because they attempted to simulate the way the brain worked, though in greatly simplified form. A program maps out a set of virtual neurons and then assigns random numerical values, or “weights,” to connections between them. These weights determine how each simulated neuron responds—with a mathematical output between 0 and 1—to a digitized feature such as an edge or a shade of blue in an image, or a particular energy level at one frequency in a phoneme, the individual unit of sound in spoken syllables.
Some of today’s artificial neural networks can train themselves to recognize complex patterns.
Programmers would train a neural network to detect an object or phoneme by blitzing the network with digitized versions of images containing those objects or sound waves containing those phonemes. If the network didn’t accurately recognize a particular pattern, an algorithm would adjust the weights. The eventual goal of this training was to get the network to consistently recognize the patterns in speech or sets of images that we humans know as, say, the phoneme “d” or the image of a dog. This is much the same way a child learns what a dog is by noticing the details of head shape, behavior, and the like in furry, barking animals that other people call dogs.
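As a toy illustration of that scheme, here is a minimal Python sketch of a single virtual neuron whose random weights get nudged whenever its output misses the target (a simplified delta-rule example with invented data, not the networks described in the article):

```python
import math
import random

def neuron(weights, bias, features):
    """One virtual neuron: weighted sum squashed into an output in (0, 1)."""
    z = sum(w * x for w, x in zip(weights, features)) + bias
    return 1.0 / (1.0 + math.exp(-z))

def train(examples, n_features, lr=0.5, epochs=1000):
    """Adjust the random weights whenever the output misses the target."""
    weights = [random.uniform(-1, 1) for _ in range(n_features)]
    bias = 0.0
    for _ in range(epochs):
        for features, target in examples:
            out = neuron(weights, bias, features)
            # Delta rule: move each weight in proportion to the error and
            # to the input carried by that connection.
            grad = (target - out) * out * (1 - out)
            weights = [w + lr * grad * x for w, x in zip(weights, features)]
            bias += lr * grad
    return weights, bias

# Invented data: learn to "recognize" inputs whose first feature is present.
data = [([1, 0], 1), ([1, 1], 1), ([0, 1], 0), ([0, 0], 0)]
w, b = train(data, n_features=2)
print(round(neuron(w, b, [1, 0]), 2), round(neuron(w, b, [0, 1]), 2))
```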
But early neural networks could simulate only a very limited number of neurons at once, so they could not recognize patterns of great complexity. They languished through the 1970s.


In the mid-1980s, Hinton and others helped spark a revival of interest in neural networks with so-called “deep” models that made better use of many layers of software neurons. But the technique still required heavy human involvement: programmers had to label data before feeding it to the network. And complex speech or image recognition required more computer power than was then available.

Finally, however, in the last decade ­Hinton and other researchers made some fundamental conceptual breakthroughs. In 2006, Hinton developed a more efficient way to teach individual layers of neurons.

  • The first layer learns primitive features, like an edge in an image or the tiniest unit of speech sound. It does this by finding combinations of digitized pixels or sound waves that occur more often than they should by chance.
  • Once that layer accurately recognizes those features, they’re fed to the next layer, which trains itself to recognize more complex features, like a corner or a combination of speech sounds.
  • The process is repeated in successive layers until the system can reliably recognize phonemes or objects (a toy sketch of this layer-by-layer scheme appears after the next paragraph).

Like cats. Last June, Google demonstrated one of the largest neural networks yet, with more than a billion connections. A team led by Stanford computer science professor Andrew Ng and Google Fellow Jeff Dean showed the system images from 10 million randomly selected YouTube videos. One simulated neuron in the software model fixated on images of cats. Others focused on human faces, yellow flowers, and other objects. And thanks to the power of deep learning, the system identified these discrete objects even though no humans had ever defined or labeled them.
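Here is a toy Python sketch of that layer-by-layer idea (purely illustrative: Hinton’s 2006 approach used restricted Boltzmann machines, while this sketch substitutes a tiny autoencoder for each layer, with invented sizes and random stand-in data):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_layer(data, n_hidden, lr=0.01, epochs=300):
    """Train one layer to reconstruct its own input; keep its encoder."""
    n_in = data.shape[1]
    w_enc = rng.normal(0, 0.1, (n_in, n_hidden))
    w_dec = rng.normal(0, 0.1, (n_hidden, n_in))
    for _ in range(epochs):
        hidden = sigmoid(data @ w_enc)    # features this layer extracts
        recon = sigmoid(hidden @ w_dec)   # its attempt to rebuild the input
        err = recon - data
        delta_out = err * recon * (1 - recon)
        delta_hid = (delta_out @ w_dec.T) * hidden * (1 - hidden)
        w_dec -= lr * (hidden.T @ delta_out)
        w_enc -= lr * (data.T @ delta_hid)
    return w_enc

# Stack layers greedily: each one trains on the features of the last.
features = rng.random((100, 16))  # stand-in for digitized pixels or sounds
for n_hidden in (8, 4):           # successively more abstract feature layers
    features = sigmoid(features @ train_layer(features, n_hidden))
print(features.shape)             # (100, 4): the top layer's representation
```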

What stunned some AI experts, though, was the magnitude of improvement in image recognition. The system correctly categorized objects and themes in the YouTube images 16 percent of the time. That might not sound impressive, but it was 70 percent better than previous methods. And, Dean notes, there were 22,000 categories to choose from; correctly slotting objects into some of them required, for example, distinguishing between two similar varieties of skate fish. That would have been challenging even for most humans. When the system was asked to sort the images into 1,000 more general categories, the accuracy rate jumped above 50 percent.
Big Data
Training the many layers of virtual neurons in the experiment took 16,000 computer processors—the kind of computing infrastructure that Google has developed for its search engine and other services. At least 80 percent of the recent advances in AI can be attributed to the availability of more computer power, reckons Dileep George, cofounder of the machine-learning startup Vicarious.
There’s more to it than the sheer size of Google’s data centers, though. Deep learning has also benefited from the company’s method of splitting computing tasks among many machines so they can be done much more quickly. That’s a technology Dean helped develop earlier in his 14-year career at Google. It vastly speeds up the training of deep-learning neural networks as well, enabling Google to run larger networks and feed a lot more data to them.
Already, deep learning has improved voice search on smartphones. Until last year, Google’s Android software used a method that misunderstood many words. But in preparation for a new release of Android last July, Dean and his team helped replace part of the speech system with one based on deep learning. Because the multiple layers of neurons allow for more precise training on the many variants of a sound, the system can recognize scraps of sound more reliably, especially in noisy environments such as subway platforms. Since it’s likelier to understand what was actually uttered, the result it returns is likelier to be accurate as well. Almost overnight, the number of errors fell by up to 25 percent—results so good that many reviewers now deem Android’s voice search smarter than Apple’s more famous Siri voice assistant.
For all the advances, not everyone thinks deep learning can move artificial intelligence toward something rivaling human intelligence. Some critics say deep learning and AI in general ignore too much of the brain’s biology in favor of brute-force computing.
One such critic is Jeff Hawkins, founder of Palm Computing, whose latest venture, Numenta, is developing a machine-learning system that is biologically inspired but does not use deep learning. Numenta’s system can help predict energy consumption patterns and the likelihood that a machine such as a windmill is about to fail. Hawkins, author of On Intelligence, a 2004 book on how the brain works and how it might provide a guide to building intelligent machines, says deep learning fails to account for the concept of time. Brains process streams of sensory data, he says, and human learning depends on our ability to recall sequences of patterns: when you watch a video of a cat doing something funny, it’s the motion that matters, not a series of still images like those Google used in its experiment. “Google’s attitude is: lots of data makes up for everything,” Hawkins says.

But if it doesn’t make up for everything, the computing, data, and human resources a company like Google throws at these problems can’t be dismissed. They’re crucial, say deep-learning advocates, because the brain itself is still so much more complex than any of today’s neural networks. “You need lots of computational resources to make the ideas work at all,” says Hinton.

What’s Next
Although Google is less than forthcoming about future applications, the prospects are intriguing. Clearly, better image search would help YouTube, for instance. And Dean says deep-learning models can use phoneme data from English to more quickly train systems to recognize the spoken sounds in other languages. It’s also likely that more sophisticated image recognition could make Google’s self-driving cars much better. Then there’s search and the ads that underwrite it. Both could see vast improvements from any technology that’s better and faster at recognizing what people are really looking for—maybe even before they realize it.

This is what intrigues Kurzweil, 65, who has long had a vision of intelligent machines. In high school, he wrote software that enabled a computer to create original music in various classical styles, which he demonstrated in a 1965 appearance on the TV show I’ve Got a Secret. Since then, his inventions have included several firsts—

  • a print-to-speech reading machine,
  • software that could scan and digitize printed text in any font,
  • music synthesizers that could re-create the sound of orchestral instruments, and
  • a speech recognition system with a large vocabulary.
Today, he envisions a “cybernetic friend” that listens in on your phone conversations, reads your e-mail, and tracks your every move—if you let it, of course—so it can tell you things you want to know even before you ask. This isn’t his immediate goal at Google, but it matches that of Google cofounder Sergey Brin, who said in the company’s early days that he wanted to build the equivalent of the sentient computer HAL in 2001: A Space Odyssey—except one that wouldn’t kill people.
For now, Kurzweil aims to help computers understand and even speak in natural language. “My mandate is to give computers enough understanding of natural language to do useful things—do a better job of search, do a better job of answering questions,” he says. Essentially, he hopes to create a more flexible version of IBM’s Watson, which he admires for its ability to understand Jeopardy! queries as quirky as “a long, tiresome speech delivered by a frothy pie topping.” (Watson’s correct answer: “What is a meringue harangue?”)
Kurzweil isn’t focused solely on deep learning, though he says his approach to speech recognition is based on similar theories about how the brain works. He wants to model the actual meaning of words, phrases, and sentences, including ambiguities that usually trip up computers. “I have an idea in mind of a graphical way to represent the semantic meaning of language,” he says.
That in turn will require a more comprehensive way to graph the syntax of sentences. Google is already using this kind of analysis to improve grammar in translations. Natural-language understanding will also require computers to grasp what we humans think of as common-sense meaning. For that, Kurzweil will tap into the Knowledge Graph, Google’s catalogue of some 700 million topics, locations, people, and more, plus billions of relationships among them. It was introduced last year as a way to provide searchers with answers to their queries, not just links.
Finally, Kurzweil plans to apply deep-learning algorithms to help computers deal with the “soft boundaries and ambiguities in language.” If all that sounds daunting, it is. “Natural-language understanding is not a goal that is finished at some point, any more than search,” he says. “That’s not a project I think I’ll ever finish.”
Though Kurzweil’s vision is still years from reality, deep learning is likely to spur other applications beyond speech and image recognition in the nearer term. For one, there’s drug discovery. The surprise victory by Hinton’s group in the Merck contest clearly showed the utility of deep learning in a field where few had expected it to make an impact.
That’s not all. Microsoft’s Peter Lee says there’s promising early research on potential uses of deep learning in machine vision—technologies that use imaging for applications such as industrial inspection and robot guidance. He also envisions personal sensors that deep neural networks could use to predict medical problems. And sensors throughout a city might feed deep-learning systems that could, for instance, predict where traffic jams might occur.
In a field that attempts something as profound as modeling the human brain, it’s inevitable that one technique won’t solve all the challenges. But for now, this one is leading the way in artificial intelligence. “Deep learning,” says Dean, “is a really powerful metaphor for learning about the world.”

bootstrapping biotechnology: engineers cooperate to realize precision grammar for programming cells

By admin,

ORIGINAL: BIOFAB
by vivek_mutalik
16 mar 2013

Palo Alto and Berkeley, Calif. – March 10, 2013 – An unprecedented collaboration among academia, industry, government and civil society has resulted in the launch of a professional-grade collection of public domain DNA parts that greatly increases the reliability and precision by which biology can be engineered.

Researchers at the International Open Facility Advancing Biotechnology (aka, BIOFAB) have just announced that they have, in effect, established rules for the first language for engineering gene expression, the layer between the genome and all the dynamic processes of life. The feat is all the more remarkable considering that just a few years ago several prominent scientists claimed that it would be impossible to develop frameworks enabling reliably reusable standard biological parts.

Collectively, the BIOFAB team has produced thousands of high quality standard biological parts. The DNA sequences that encode all parts and the data about them are free and available online. The project is detailed in three research papers, “Precise and Reliable Gene Expression via Standard Transcription and Translation Initiation Elements,” and “Quantitative Estimation of Activity and Quality for Collections of Functional Genetic Elements,” published simultaneously in Nature Methods, and “Measurement and Modeling of Intrinsic Transcription Terminators,” forthcoming in Nucleic Acids Research (see full citations below).

The BIOFAB’s rules for engineering expression come in the form of mathematical models that can be used to predict and characterize the individual parts used in synthetic biology. The work establishes a much-needed technological foundation for the field, allowing researchers to engineer the function of DNA more precisely, and to better predict the resultant behavior.

Dr. Vivek Mutalik, a BIOFAB team leader, says that synthetic biology has been plagued by a lack of reliability and predictability. “Until now, virtually every project has been a one-off – we haven’t figured out how to standardize the genetic parts that are the building blocks of this new field. Researchers produce amazing new parts all the time, but much like trying to use someone else’s house key in your own door, it’s been difficult to directly reuse parts across projects.” Without the ability to characterize parts – that is, to understand how they will behave in multiple contexts – biotech researchers are doomed to a lengthy process of trial-and-error. Fortunately, notes Mutalik, “Our work in the BIOFAB changes all that.”

The plan for establishing the rules for how genetic parts fit together was ambitious and complex. First, researchers needed to figure out the functional patterns of genetic parts. They had to ask, “To what extent do the basic genetic parts that control gene expression ‘misbehave’ when reused over and over again in novel combinations?” said Mutalik. BIOFAB researchers had to make and test hundreds of combinations of frequently used parts, then take the resulting data and build mathematical models that demonstrated part quality.

Joao Guimaraes, a member of the BIOFAB team and graduate student in computational biology, explains that difficult-to-predict parts are deemed to be low quality, while “high quality parts behave the same when reused.” Once they found a way to determine part quality, the BIOFAB team set to work on establishing rules for precision control of gene expression, a process that underlies all of biotechnology. They learned by observing natural examples of genetic junctions, and built reliable transcription and translation initiation elements. “We also created standard junctions for transcription terminators, a molecular ‘stop sign’ for gene expression,” said Dr. Guillaume Cambray, a BIOFAB team leader.

While the initial BIOFAB project was able to tame three types of core genetic parts, much more work remains. “We ask that others expand upon the genetic grammar initiated here, to incorporate additional genetic functions and to translate the common rule set beyond E. coli,” says Stanford professor and BIOFAB co-director Drew Endy. (Endy also serves as president of the BioBricks Foundation.)

The BIOFAB’s seed money came from the National Science Foundation, but this funding came only after 10 years of knocking on doors. Part of the difficulty was that the BIOFAB represented a fundamental engineering research project. It’s not the kind of work that is suitable for a single graduate student thesis, and it wasn’t economically practical for a biotechnology company to take it on. UC Berkeley professor and BIOFAB co-director Adam Arkin noted, “We knew that we would only be successful if we could bring together the skills represented by both academia and industry to establish a professional team that could specify and solve the fundamental engineering puzzles that slow the development of effective biotechnologies.”

The BIOFAB’s collaboration with not only the NSF, but also with industry, has been one of the keys to its success. “Pre-competitive and unrestricted partnerships with industry were essential to guide the work and help secure and extend public funding,” said UC Berkeley professor and BIOFAB advisor Jay Keasling. (Both Arkin and Keasling are also affiliated with Lawrence Berkeley Lab; Arkin is Director of the Physical Biosciences Division, and Keasling is an Associate Lab Director for Biosciences.) Other partners came from civil society, including the BioBricks Foundation, a public-benefit organization that helps to advance best practices in the emerging field of synthetic biology. “We were thrilled to help make all BIOFAB engineered parts free-to-use via the BioBrick Public Agreement and the public domain,” said Holly Million, the foundation’s executive director.

The BIOFAB’s standardized parts are specific to E. coli, but the “grammar” – the way in which the rules are constructed for how the parts fit together – should apply to nearly any organism; many of the BIOFAB’s rules for E. coli are expected to apply to other prokaryotes. The initial parts have already begun to have an impact in academic research. Caroline Ajo-Franklin, staff scientist at the Lawrence Berkeley Lab’s Biological Nanostructures Facility, noted that her work was able to progress much faster because of the availability of the source code. “We knew we needed a quantitatively characterized library of reliable promoters to move our research efforts forward. Teaming up with BIOFAB changed what would have been at least six months of work into a few weeks.”

The BIOFAB’s work was supported by the National Science Foundation, the Synthetic Biology Engineering Research Center, Lawrence Berkeley National Laboratory, the BioBricks Foundation, Agilent, Genencor, DSM, and Autodesk.

###

“Quantitative estimation of activity and quality for collections of functional genetic elements.” Vivek K Mutalik, Joao C Guimaraes, Guillaume Cambray, Quynh-Anh Mai, Marc Juul Christoffersen, Lance Martin, Ayumi Yu, Colin Lam, Cesar Rodriguez, Gaymon Bennett, Jay D Keasling, Drew Endy & Adam P Arkin, 10 March 2013, Nature Methods. doi:10.1038/nmeth.2403

“Precise and reliable gene expression via standard transcription and translation initiation elements.” Vivek K Mutalik, Joao C Guimaraes, Guillaume Cambray, Colin Lam, Marc Juul Christoffersen, Quynh-Anh Mai, Andrew B Tran, Morgan Paull, Jay D Keasling, Adam P Arkin & Drew Endy, 10 March 2013, Nature Methods. doi:10.1038/nmeth.2404

“Measurement and modeling of intrinsic transcription terminators.” Guillaume Cambray, Joao C. Guimaraes, Vivek K. Mutalik, Colin Lam, Quynh-Anh Mai, Tim Thimmaiah, James M. Carothers, Adam P. Arkin and Drew Endy, March 2013, Nucleic Acids Research.


Press Release

Additional Coverage

DNA tool kit goes live online

Predictability: The brass ring for synthetic biology

Graphene foams: Cozy and conductive scaffolds for neural stem cells

By admin,

ORIGINAL: Physorg
by John Hewitt
Apr 04, 2013 
Graphene foam. Credit: Google, via Ars Technica
(Phys.org) —Graphene foams have been around now for a couple of years. Their widespread application in everything from electronics and energy storage to substitutes for helium in balloons is still greatly anticipated. Researchers from the Chinese Academy of Sciences in Suzhou and Beijing have now shown that graphene foams can also be used to craft conductive scaffolds for neural stem cells. Their open paper, published yesterday in Nature’s Scientific Reports, suggests new approaches for neural tissue engineering, and possibly for interfacing with neural prosthetics.
It has been previously reported that graphene sheets support growth and differentiation of human neural stem cells (NSCs) in a similar fashion to other common substrates like glass or the polymer PDMS. Chinese researchers have done pioneering work in synthesizing graphene foams to exacting standards of purity and uniformity. When coated with laminin or other matrix proteins, these foams could potentially serve not only as compatible neural housing but also as a means to control the tenants electrically.
To probe the electrical characteristics of the foam, the researchers used cyclic voltammetry, a common technique in basic electrochemistry. Their results indicated that the cells could be safely stimulated via capacitive charge injection in the potential window from -0.2 to +0.8 V, similar again to results from 2D graphene film studies. They further noted that the 3D foam architecture provides more efficient charge injection and potentially more specific stimulation capability.
It is instructive here to note that our painful collective history with asbestos fiber has shown us that geometry can make the poison as much as any chemical effect. It is not just the aspect of the asbestos fiber, but its inconvenient scale that makes its presence so insidious within the lung. Similarly, the researchers could not just toss a few neurons onto a random trusswork and expect the ladder’s rungs to be ideally spaced. Indeed the images provided by the authors show the seeded NSCs clinging to the graphene structure like spacewalking astronauts trawling along a space station—but somehow they not only survived, but seemed to thrive.
The graphene foams were synthesized by chemical vapor deposition using a Ni foam template. Scanning electron microscope observation showed a porous structure, with pores determined to average 100-300 µm, while the graphene skeleton width was around 100-200 µm. The surface chemistry of the graphene foams was characterized by X-ray photoelectron spectroscopy (XPS). The criterion used to gauge inertness of the surface was the presence of a large peak corresponding to non-oxygenated rings and small peaks for the C-O bonds.
NSC adhesion and proliferation on 3D-GF scaffold. Credit: Scientific Reports, doi:10.1038/srep01604 

Cytotoxicity, evaluated using Calcein-AM and EthD-I staining, showed that 90% of the cells were viable at 5 days out. Proliferation of NSCs was measured from the expression of Ki-67 protein, a marker for cell proliferation that is absent during interphase, and was initially expressed in 80% of the cells. After 5 days, the cells exhibited elongated cell shape with neurite outgrowth, and covered the entire foam surface to confluence. Tuj-1-positive neurons, O4-positive oligodendrocytes, and GFAP-positive astrocytes were all observed in healthy abundance.

The longer term clinical scenario for these kinds of studies is still unfolding. In the absence of vascularization, neurons can only bear to have so many neighbors nearby, and still receive adequate nourishment through diffusion. The traditional concept of using degradable matrices that would be later implanted into the cortex has yet to be realized. Chemical enticements to integrate with the local neuritic field and vasculature are just beginning to be explored for these kinds of explant studies. Permanent matrices with functionalized surfaces that would also be electrically addressable would be a welcome addition to this toolkit. 
Real cortical gray matter is a jungle where ceaseless competition for every cubic nanometer of space is not just a game of survival, it is the lifeblood of every thought and memory. If you were to imagine wrestlers in a steel cage match, packed to the hilt, you would not be too far off. Every electrochemical spike, every mini-potential produced in a dendrite, is a breath. A little extra bit of power upon inhalation to exert upon competitors held in a mutual death-grip, only to have it ever so tightened again after each exhalation. Successful introduction of novice and metabolically disadvantaged tissue into this strategic landscape would require certain considerations on its behalf. Extra stimulation, growth factor, or oxygenation might be just what it takes to ensure productive evolution of new structure.
A final take-home message of the paper is that some level of editorial patience must have been afforded for the many obvious grammatical missteps and outright phraseological foibles understandably introduced by the Chinese authorship. It is a small price to pay perhaps for our mutual collaboration. It is encouraging that Western journals welcome the continued publication of Chinese advances in fields like graphene processing, along with the efforts the authors have taken to make their work understandable to us.
More information: Three-dimensional graphene foam as a biocompatible and conductive scaffold for neural stem cells, Scientific Reports 3, Article number: 1604. doi:10.1038/srep01604
Abstract 
Neural stem cell (NSC) based therapy provides a promising approach for neural regeneration. For the success of NSC clinical application, a scaffold is required to provide three-dimensional (3D) cell growth microenvironments and appropriate synergistic cell guidance cues. Here, we report the first utilization of graphene foam, a 3D porous structure, as a novel scaffold for NSCs in vitro. It was found that three-dimensional graphene foams (3D-GFs) can not only support NSC growth, but also keep cell at an active proliferation state with upregulation of Ki67 expression than that of two-dimensional graphene films. Meanwhile, phenotypic analysis indicated that 3D-GFs can enhance the NSC differentiation towards astrocytes and especially neurons. Furthermore, a good electrical coupling of 3D-GFs with differentiated NSCs for efficient electrical stimulation was observed. Our findings implicate 3D-GFs could offer a powerful platform for NSC research, neural tissue engineering and neural prostheses.
Journal reference: Scientific Reports

See-through brains clarify connections

By admin,

ORIGINAL: Nature
Helen Shen
10 April 2013
Technique to make tissue transparent offers three-dimensional view of neural networks. 
Mind readers 
Nature Video reveals how Karl Deisseroth and his team created 3D visualizations of mouse brains.
A chemical treatment that turns whole organs transparent offers a big boost to the field of ‘connectomics’ — the push to map the brain’s fiendishly complicated wiring. Scientists could use the technique to view large networks of neurons with unprecedented ease and accuracy. The technology also opens up new research avenues for old brains that were saved from patients and healthy donors.
“This is probably one of the most important advances for doing neuroanatomy in decades,” says Thomas Insel, director of the US National Institute of Mental Health in Bethesda, Maryland, which funded part of the work. Existing technology allows scientists to see neurons and their connections in microscopic detail — but only across tiny slivers of tissue. Researchers must reconstruct three-dimensional data from images of these thin slices. Aligning hundreds or even thousands of these snapshots to map long-range projections of nerve cells is laborious and error-prone, rendering fine-grain analysis of whole brains practically impossible.
The new method instead allows researchers to see directly into optically transparent whole brains or thick blocks of brain tissue. Called CLARITY, it was devised by Karl Deisseroth and his team at Stanford University in California. “You can get right down to the fine structure of the system while not losing the big picture,” says Deisseroth, who adds that his group is in the process of rendering an entire human brain transparent.
The technique, published online in Nature on 10 April, turns the brain transparent using the detergent SDS, which strips away lipids that normally block the passage of light (K. Chung et al. Nature http://dx.doi.org/10.1038/nature12107; 2013). Other groups have tried to clarify brains in the past, but many lipid-extraction techniques dissolve proteins and thus make it harder to identify different types of neurons. Deisseroth’s group solved this problem by first infusing the brain with acrylamide, which binds proteins, nucleic acids and other biomolecules. When the acrylamide is heated, it polymerizes and forms a tissue-wide mesh that secures the molecules. The resulting brain–hydrogel hybrid showed only 8% protein loss after lipid extraction, compared to 41% with existing methods.
Applying CLARITY to whole mouse brains, the researchers viewed fluorescently labelled neurons in areas ranging from outer layers of the cortex to deep structures such as the thalamus. They also traced individual nerve fibres through 0.5-millimetre-thick slabs of formalin-preserved autopsied human brain — orders of magnitude thicker than slices currently imaged.
Neurons in an intact mouse hippocampus visualized using CLARITY and fluorescent labelling. Kwanghun Chung & Karl Deisseroth, HHMI/Stanford Univ. 

“The work is spectacular. The results are unlike anything else in the field,” says Van Wedeen, a neuroscientist at the Massachusetts General Hospital in Boston and a lead investigator on the US National Institutes of Health’s Human Connectome Project (HCP), which aims to chart the brain’s neuronal communication networks. The new technique, he says, could reveal important cellular details that would complement data on large-scale neuronal pathways that he and his colleagues are mapping in the HCP’s 1,200 healthy participants using magnetic resonance imaging.

Francine Benes, director of the Harvard Brain Tissue Resource Center at McLean Hospital in Belmont, Massachusetts, says that more tests are needed to assess whether the lipid-clearing treatment alters or damages the fundamental structure of brain tissue. But she and others predict that CLARITY will pave the way for studies on healthy brain wiring, and on brain disorders and ageing.
Researchers could, for example, compare circuitry in banked tissue from people with neurological diseases and from controls whose brains were healthy. Such studies in living people are impossible, because most neuron-tracing methods require genetic engineering or injection of dye in living animals. Scientists might also revisit the many specimens in repositories that have been difficult to analyse because human brains are so large.
The hydrogel–tissue hybrid formed by CLARITY — stiffer and more chemically stable than untreated tissue — might also turn delicate and rare disease specimens into reusable resources, Deisseroth says. One could, in effect, create a library of brains that different researchers check out, study and then return.

Nature 496, 151 (11 April 2013) doi:10.1038/496151a


Combining Nanowires and Memristors Could Lead to Brain-like Computing

By admin,

ORIGINAL: IEEE Spectrum

BY: DEXTER JOHNSON
APRIL 04, 2013
For decades now, researchers have been trying to get computers to behave like artificial brains instead of merely binary data crunchers. One of the obstacles in creating this capability has been that computers are based on silicon CMOS chips rather than the dendrites and synapses found in the human brain. One of the drawbacks with silicon chips is that they lack what is known as “plasticity” in which the brain’s neurons adapt in order to learn and remember.
To overcome such limitations, nanotechnology has been offering alternatives to silicon chip architecture that will more closely resemble the human brain. DARPA’s SyNAPSE project is one example.
Now researchers at the Centre for Research on Adaptive Nanostructures and Nanodevices (CRANN) at Trinity College in Dublin are pursuing a new nanomaterial-based approach to neural networks that combines work in nanowires and memristors. The aim of the project, for which the researchers have just received a €2.5 million research grant from the European Research Council (ERC), is to develop a new computing paradigm that mimics the neural networks of the human brain. A video describing the CRANN research can be seen below.
Both nanowires and memristors are part of the history of research into neural networks and artificial intelligence (AI). Researchers have been investigating the use of nanowires in building electronic meshes on which nerve tissues can be grown; the mesh, they hope, could link nerve cells with electronics. And almost from the time memristors were first isolated and characterized, researchers have been looking at using them in chips that would lead to artificial intelligence.
Professor John Boland, director of CRANN, and his colleagues will be using the research grant to build on their previous work. They already discovered that when electricity—or other stimuli such as chemicals or light—is applied to a random network of nanowires, it generates chemical reactions at the junctions where the nanowires cross over each other.
This phenomenon is similar to the way the brain works, in that there are bundles of nerves that cross over one another, forming junctions. Over time, the human brain begins to learn which of these junctions is important and discards the rest.
This is where the memristor aspect of the research becomes critical. As Allen Bellew, one of the CRANN researchers, describes in the video, the nanowires that the CRANN team are working with also display some of the characteristics of memristors, such as their ability to “remember” the charge that has passed through them.
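As a toy illustration of that memory effect, here is a minimal Python sketch of a charge-controlled resistor (a generic memristor caricature with invented component values, not the CRANN team’s nanowire model):

```python
class ToyMemristor:
    """A resistor whose value depends on the charge that has flowed through it."""

    def __init__(self, r_off=16000.0, r_on=100.0, q_sat=1e-7):
        self.r_off = r_off   # resistance before any charge has passed (ohms)
        self.r_on = r_on     # resistance once fully switched (ohms)
        self.q_sat = q_sat   # charge needed to switch fully (coulombs, invented)
        self.q = 0.0         # accumulated charge: the element's "memory"

    def resistance(self):
        # Interpolate between OFF and ON as charge accumulates.
        frac = min(self.q / self.q_sat, 1.0)
        return self.r_off + (self.r_on - self.r_off) * frac

    def apply_voltage(self, volts, seconds):
        """Pass current briefly; the charge moved updates the stored state."""
        current = volts / self.resistance()
        self.q = max(self.q + current * seconds, 0.0)  # reverse bias can erase
        return current

m = ToyMemristor()
for _ in range(5):
    m.apply_voltage(1.0, 1e-4)    # identical pulses...
    print(round(m.resistance()))  # ...yield steadily lower resistance
```

Repeated identical pulses lower the resistance a little more each time, because the element’s state depends on all the charge that has already flowed through it.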
One of the application areas that the CRANN team thinks could benefit from nanowire-based neural networks is facial recognition. At present, digital computing is still pretty ineffective at it, but the human brain performs this task well.
“This funding from the European Research Council allows me to continue my work to deliver the next generation of computing, which differs from the traditional digital approach,” says Boland in a press release. “The human brain is neurologically advanced and exploits connectivity that is controlled by electrical and chemical signals. My research will create nanowire networks that have the potential to mimic aspects of the neurological functions of the human brain, which may revolutionize the performance of current day computers. It could be truly ground-breaking.”


Techniques Series: Creating a Molecular Brain Map

By admin,

ORIGINAL: Science Exchange
April 8, 2013 | Posted by Ana in Research

  Image: Cajal drawing

This is the first in a series of posts on scientific techniques, and how to use them in your research.

The brain is composed of billions of individual neurons. Cells in the brain are densely packed, with intermixed, often overlapping types. An excitatory neuron, for instance, may be surrounded by dozens of inhibitory interneurons and glia. So how can you tell which cell is which?

The classic approach has been to classify cells based on their shape, chemistry, or connectivity. However, this old tradition ignores the enormous diversity within any broad class of cells, a question scientists are just now starting to explore with new tools. This post explores some of these newer techniques, including immunohistochemistry and RT-PCR.

The Need for More Accurate Techniques

Past scientific techniques focused on describing “principal neurons” and “secondary neurons” of a certain brain region, with descriptions based on physiology or anatomy alone. These data are now insufficient given our modern molecular tools, and can even be misleading. Moreover, there is considerable heterogeneity within cell populations like “dopamine neurons,” and new molecular techniques allow a far more accurate description of neurons based on their molecular properties [Ungless and Grace, 2012].

Questions regarding the identity of a recorded cell, the kind of neurotransmitter or peptide it releases, and the enzymes that synthesize that chemical require even more precise techniques. Further queries, such as the types of receptors a cell expresses and how it differs from surrounding cells, are noteworthy as well.

A modern approach would take into account the molecular profile of the neuron, and requires measurement of mRNA and protein expression.

Identifying Neurons with RT-PCR and Immunohistochemistry

Identifying the specific neurons recorded from brain slice preparations can be difficult. Their electrophysiological properties alone are insufficient to correctly identify a cell type. And unless you have a transgenic animal with expression of GFP or other fluorophore in a specific cell type, you have no basis for verifying what type of neuron you recorded.

A better experiment would begin with an acute slice preparation of brain tissue followed by whole-cell patch-clamp recording of individual neurons [Davie et al 2006]. After this characterization, the brain slices can be further processed through two popular methods for molecular characterization of neurons: single-cell reverse transcriptase polymerase chain reaction (RT-PCR) or Immunohistochemistry.

RT-PCR

Start with mRNA from single cells, obtained either via aspiration through the recording pipette of a living cell or via laser capture microdissection from sectioned tissue, as described in [Lin et al 2007]. A typical study employing these methods in the amygdala, a brain region responsible for fear learning, was done by [Sosulina et al 2010].

If you want to order this analysis for your own study, check out the 58 facilities who offer RT-PCR on Science Exchange: https://www.scienceexchange.com/services/real-time-qpcr

Immunohistochemistry

Immunohistochemistry (IHC) can also be used for this procedure, labeling tissue with antibodies for a molecule of interest, and visualizing with a fluorescent secondary antibody or a reactive dark precipitate stain.

At the beginning of this protocol, the electrophysiologist must record with a pipette filled with a dye (e.g. 0.1% biocytin, or a fluorescent dye such as rhodamine dextran) for a length of time sufficient to fill the cell (at least 30 minutes).

After recording, brain slices are fixed overnight in paraformaldehyde (4%) and cryoprotected in sucrose (30%) before sectioning (typically 30-40 microns) and staining.

If you want to order this analysis for your own study, check out the 27 facilities that offer IHC on Science Exchange: https://www.scienceexchange.com/services/immunohistochemistry

Troubleshooting Techniques

PCR: Contamination with mRNA from nearby cells will prevent accurate identification of mRNA from the neuron of interest. To prevent this problem, all buffers, solutions, and glass pipettes used for mRNA extraction must be kept sterile throughout the procedure. Dissociated cells (obtained via mechanical trituration or enzymatic digestion) may be superior for isolating individual neurons compared with acute brain slices, in which cells are much more densely packed [Hodne et al 2010] [Kay and Krupa 2001].

Immunohistochemistry: The challenge is finding a selective antibody at the appropriate concentration to get the best signal with low background staining. This is a matter of trial and error: vendors manufacture antibodies of varying specificity (a monoclonal primary antibody, for example, is more selective than a polyclonal one), and several dilutions must be tested, as in the sketch below. The labeled protein of interest can be visualized with a fluorescent secondary antibody or a dark precipitate stain such as DAB.
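
The titration arithmetic itself is easy to script. A small Python sketch for planning the dilution series (the starting dilution, step factor, and volumes are illustrative defaults, not vendor recommendations):

    # Plan an antibody titration series for the trial-and-error step
    # described above. The starting dilution, step factor, and volumes
    # are illustrative defaults, not vendor recommendations.

    def dilution_series(start=100, factor=2, steps=5, final_volume_ul=500.0):
        """Yield (dilution label, antibody uL, buffer uL) for each test."""
        for i in range(steps):
            dilution = start * factor ** i      # 1:100, 1:200, 1:400, ...
            ab = final_volume_ul / dilution     # stock antibody to add
            yield f"1:{dilution}", round(ab, 2), round(final_volume_ul - ab, 2)

    for label, ab_ul, buffer_ul in dilution_series():
        print(f"{label}: {ab_ul} uL antibody + {buffer_ul} uL buffer")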

A second problem is recovering the dye-filled cell of interest after the staining procedure. There are many cutting and washing steps along the way, and the single section containing your dye-filled neuron can easily be lost. It is critical to keep track of every section (each only 30 microns thick) during these steps.

 

ABOUT THE AUTHOR

Ana Mrejeru (Twitter: Miss_Anamaria) is a postdoctoral scientist at Columbia University Medical Center. Her research focuses on healthcare technologies for brain disorders and on building neuroscience apps for improved learning. She is also a member of the Science Exchange Advocate program.

  Category: Uncategorized
  Comments: Comments Off on Techniques Series: Creating a Molecular Brain Map

Connected The Film, Synopsis

By admin,

ORIGINAL: Connected The Film


“Examining everything from the Big Bang to Twitter… a cinematic clickstream… incredibly engaging!” –The New York Times

Have you ever faked a restroom trip to check your email? Slept with your laptop? Or become so overwhelmed that you just unplugged from it all? In this funny, eye-opening, and inspiring film, director Tiffany Shlain takes audiences on an exhilarating rollercoaster ride to discover what it means to be connected in the 21st century. Her love/hate relationship with technology, from founding The Webby Awards to being a passionate advocate for The National Day of Unplugging, serves as the springboard for a thrilling exploration of modern life…and our interconnected future. Equal parts documentary and memoir, the film unfolds during a year in which technology and science literally become a matter of life and death for the director. As Shlain’s father battles brain cancer and she confronts a high-risk pregnancy, her very understanding of connection is challenged. Using a brilliant mix of animation, archival footage, and home movies, Shlain reveals the surprising ties that link us not only to the people we love but also to the world at large. A personal film with universal relevance, Connected explores how, after centuries of declaring our independence, it may be time for us to declare our interdependence instead.

Connected Distribution Highlights: 

  • World Premiere at the 2011 Sundance Film Festival
  • Selected by the U.S. State Department to be part of the 2012 American Film Showcase and as the first film to launch the Showcase (screened in Cape Town, South Africa & Moscow, Russia)
  • 11 city theatrical tour
  • Over 1,000 screenings since launch, spanning 75 film festivals including the Rio de Janeiro Film Festival, the Jerusalem Int’l Film Festival, a special screening at the Cannes Int’l Film Market, the Cleveland Int’l Film Festival, and many more (complete list here)
  • Used in over 200 educational institutions around the world, on 6 continents.
  • Connected script included in official Motion Picture Academy Library
  • On Paste Magazine’s 2011 list of best documentaries
  • Winner of 15 awards & distinctions including:
    • Selected by the U.S. State Department to play at embassies around the world as part of the US Film Showcase
    • Selected for the Disruptive Innovation Award from The 2012 Tribeca Film Festival
    • The Interdependence Film Prize from The Berlin International Film Festival and the Interdependence Movement
    • Best Documentary Feature from the Atlanta Int’l Film Festival
    • Best of Festival Documentary from the Portland, ME Film Festival
    • Women in Film Award from the National Geographic All Roads Grant at Sundance

  Category: Uncategorized
  Comments: Comments Off on Connected The Film, Synopsis

Leonardo’s Notebook Digitized in All Its Befuddling Glory

By admin,

ORIGINAL: The Atlantic

The British Library has been digitizing some of its prize pieces, and it announced that a new round of six artifacts has been completed, including Beowulf, a Gospel penned in gold ink, and one of Leonardo da Vinci’s notebooks.

“Each of these six manuscripts is a true splendour, and has immense significance in its respective field, whether that be Anglo-Saxon literature, Carolingian or Flemish art, or Renaissance science and learning,” Julian Harrison, the library’s curator of medieval artifacts, blogged. “On Digitised Manuscripts you’ll be able to view every page in full and in colour, and to see the finer details using the deep zoom facility.”

All of these texts can be appreciated on a visual level, particularly because the scans are so good. Even the grain of the paper is fascinating.


Or here are a few da Vinci drawings, including what appears to be a doodle of a man’s head.

But there is a fundamental inscrutability to these texts for the untrained eye. Not only is the language unfamiliar, and Leonardo’s script written in a simple code, but without the context of the times it is hard to make heads or tails of them beyond aesthetic appreciation.

Of course, I’m happy such objects exist in more accessible, digital formats, but the primary documents remind me how important the interpreters of these works are. The raw documents do not make sense without the added layer of analysis that comes from the scholars who study them.

Perhaps we can read this as a kind of parable for opening up data and archives. The digitization of key historical artifacts does not replace historians so much as make their work more visible to different audiences. The necessity of what they do is made plain.

An earlier version of this article incorrectly stated that the digitizing institution was the British Museum, not the British Library. We regret the error.

  Category: All, Articles
  Comments: Comments Off on Leonardo’s Notebook Digitized in All Its Befuddling Glory

Flashing fish brains filmed in action

By admin,

ORIGINAL: Nature

18 March 2013

 Fast imaging in larval zebrafish produces first neuron-level vertebrate brain-activity map.

At first glance, it looks like an oddly shaped campfire: smoky grey shapes light up with red sparks and flashes. But the video actually represents a different sort of crackle — the activity of individual neurons across a larval fish brain. It is the first time that researchers have been able to image an entire vertebrate brain at the level of single cells.

“We see the big picture without losing resolution,” says Philipp Keller, a microscopist at the Howard Hughes Medical Institute’s Janelia Farm Research Campus in Ashburn, Virginia, who developed the system with Janelia neurobiologist Misha Ahrens. The researchers are able to record activity across the whole fish brain almost every second, detecting 80% of its 100,000 neurons. (The rest lie in hard-to-access areas, such as between the eyes; their activity is visible but cannot be pinned down to single cells.) The work is published today in Nature Methods [1].

“It’s phenomenal,” says Rafael Yuste, a neuroscientist at Columbia University in New York. “It is a bright star now in the literature, suggesting that it is not crazy to map every neuron in the brain of an animal.” Yuste has been leading the call for a big biology project [2] that would do just that in the human brain, which contains roughly 850,000 times more neurons than the zebrafish brain.


The resolution offered by the zebrafish study will enable researchers to understand how different regions of the brain work together, says Ahrens. With conventional techniques, imaging even 2,000 neurons at once is difficult, so researchers must pick and choose which to look at, and extrapolate. Now, he says, “you don’t need to guess what is happening — you can see it”.

The increased imaging power could, for example, help to explain how the brain coordinates movement, consolidates learning or processes sights and smells. “It allows a much better view of the dynamics throughout the brain during different behaviours and during learning paradigms,” says Joseph Fetcho, a neurobiologist at Cornell University in Ithaca, New York.

Light, camera, activity

The imaging system relies on a genetically engineered zebrafish (Danio rerio). The fish’s neurons make a protein that fluoresces in response to fluctuations in the concentration of calcium ions, which occur when nerve cells fire. A microscope sends sheets of light rather than a conventional beam through the fish’s brain, and a detector captures the signals like a viewer watching a cinema screen. The system records activity from the full brain every 1.3 seconds.
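
The paper’s own analysis pipeline isn’t described here, but the conventional first step with any calcium indicator is converting each neuron’s raw fluorescence trace to dF/F, the fractional change over a slowly varying baseline. A minimal Python sketch of that generic step (the rolling window and percentile are assumed values, not parameters from the study):

    # Convert a raw fluorescence trace to dF/F, the conventional readout
    # in calcium imaging. Baseline F0 is a rolling low percentile, a
    # common way to track slow drift; the window and percentile values
    # here are assumptions, not parameters from the study.
    import numpy as np

    def delta_f_over_f(trace, window=100, percentile=8):
        trace = np.asarray(trace, dtype=float)
        f0 = np.array([np.percentile(trace[max(0, i - window):i + 1], percentile)
                       for i in range(len(trace))])
        return (trace - f0) / f0

    # A synthetic example: a drifting baseline with one transient on top.
    t = np.linspace(0, 60, 600)
    raw = 100 + 0.2 * t + 30 * np.exp(-((t - 30) ** 2) / 2)
    print(delta_f_over_f(raw).max())  # the transient stands out clearly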

Ahrens, Keller and others have previously used light-sheet microscopy to image developing embryos over days [3]; for the latest study, they modified the light detectors and other aspects of the system to increase the imaging rate tenfold. In a series of hour-long experiments, each of which generated 1 terabyte (1 million megabytes) of data, the researchers were able to watch populations of neurons in distinct regions whose activity was correlated.
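
Those figures invite a quick back-of-envelope check. The sketch below uses only the numbers quoted in this article (a whole-brain volume every 1.3 seconds, about 1 terabyte per hour-long experiment); the per-volume size is inferred from them rather than stated in the paper:

    # Back-of-envelope data rates from the figures quoted above: one
    # whole-brain volume every 1.3 s, ~1 TB per hour-long experiment.
    volume_interval_s = 1.3
    experiment_s = 3600.0
    total_bytes = 1e12                              # 1 terabyte, as stated

    volumes = experiment_s / volume_interval_s      # ~2769 volumes per hour
    per_volume_mb = total_bytes / volumes / 1e6     # ~361 MB per volume
    print(f"{volumes:.0f} volumes/hour, ~{per_volume_mb:.0f} MB per volume")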

The technique does have its limitations. For one thing, it works best in zebrafish embryos, which are transparent. Ahrens and Keller think that it could work in intact mammal brains, but it would require surgery and would cover only a small fraction of the brain.

Another limitation is that neither the protein sensor nor the imaging system yet works fast enough to distinguish whether a neuron has fired once or several times in quick succession. But Fetcho says that it is fast enough to start to understand how activity flows through the brain. “No one is anywhere in the ballpark of this for any other animal model.”

Nature doi:10.1038/nature.2013.12621

1. Ahrens, M. B. & Keller, P. J. Nature Methods http://dx.doi.org/10.1038/NMETH.2434 (2013).
2. Alivisatos, A. P. et al. Science 339, 1284–1285 (2013).
3. Tomer, R., Khairy, K., Amat, F. & Keller, P. J. Nature Methods 9, 755–763 (2012).

  Category: All, Articles
  Comments: Comments Off on Flashing fish brains filmed in action
