On a smarter planet, we want to change the paradigm from react to anticipate
For five years, IBMers have been working with companies, cities and communities around the world to build a Smarter Planet.
We’ve seen enormous advances, as leaders are using an explosion of data to transform their enterprises and institutions through analytics, mobile technology, social business and the cloud.
We’ve also seen how this new era is starting to create winners. They’re changing how their decisions are made. They’re redesigning how their teams work, reassessing how to serve their customers, and changing the very nature of business.
It’s the ability to harness data that gives these leaders their competitive advantage in the era of “smart.”
Today, conventions once universally held are giving way to new perspectives, new ways of working, and new solutions across industries. Roles are changing. And more than ever, leaders need a partner to help them adapt.
What can you do on a smarter planet?
To outperform on a smarter planet, enterprises face some fundamental needs:
Organizations are overwhelmed with data. On a smarter planet, the most successful organizations can turn this data into valuable insights about customers, operations, even pricing. With advanced analytics, you can open new opportunities for business optimization by enabling rapid, informed and confident decisions and actions.
Read more about Smarter Analytics.
Innovation comes from collaboration. And collaboration comes from everywhere. Firms that embrace the power of social technologies will unleash productivity and innovation throughout the entire value chain—from employees to partners to suppliers to customers.
Read more about Social Business.
Smarter comes at a cost: hardware, programs, people to run them. Cloud computing offers multiple ways to reduce that cost through efficient use of resources. Utilizing the cloud means not having to power idle equipment and being able to rethink and redistribute software quickly and easily. It also means a nimbler, more efficient organization.
Read more about Cloud Computing.
There’s a new breed of customers today. Empowered by technology, transparency and abundant information, they want to engage with companies on their own terms―when they want and how they want. To engage and keep these customers, organizations need a whole new integrated approach. There’s no room for business as usual.
Read more about Smarter Commerce.
Even as storefronts had to adapt to the Internet, commerce is adapting to mobility. Armed with smartphones and tablets, consumers want to use those devices to browse, shop and pay. Today’s leaders recognize that desire and are building mobile enterprises in response.
Read more about Mobile Enterprise.
Even on a smarter planet there are risks: security, credit, market, operational, environmental and compliance risks, to name a few. With the right process and system improvements, leaders can identify, assess and monitor these risks to mitigate and prevent them.
Read more about Smarter Security.
Many enterprises share similarities, but those similarities are mostly superficial. To get the most from an information technology system today, your organization needs a solution tailored to its own objectives and needs. Integrating the hardware and software into a single system provides the most power, the least pain and the best outcomes.
Read more about PureSystems and PureData.
In a slow growth environment, organizations must do more with less. To succeed, your organization must drive continuous and sustainable operational improvements to lower costs and reduce complexity.
Read more about Smarter Computing.
With massive amounts of computational power, machines can now recognize objects and translate speech in real time. Artificial intelligence is finally getting smart.
Image by Jimmy Turrell
When Ray Kurzweil met with Google CEO Larry Page last July, he wasn’t looking for a job. A respected inventor who’s become a machine-intelligence futurist, Kurzweil wanted to discuss his upcoming book How to Create a Mind. He told Page, who had read an early draft, that he wanted to start a company to develop his ideas about how to build a truly intelligent computer: one that could understand language and then make inferences and decisions on its own.
In the mid-1980s, Hinton and others helped spark a revival of interest in neural networks with so-called “deep” models that made better use of many layers of software neurons. But the technique still required heavy human involvement: programmers had to label data before feeding it to the network. And complex speech or image recognition required more computer power than was then available.
Finally, however, in the last decade Hinton and other researchers made some fundamental conceptual breakthroughs. In 2006, Hinton developed a more efficient way to teach individual layers of neurons.
- The first layer learns primitive features, like an edge in an image or the tiniest unit of speech sound. It does this by finding combinations of digitized pixels or sound waves that occur more often than they should by chance.
- Once that layer accurately recognizes those features, they’re fed to the next layer, which trains itself to recognize more complex features, like a corner or a combination of speech sounds.
- The process is repeated in successive layers until the system can reliably recognize phonemes or objects.
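The layer-by-layer pipeline in these steps can be sketched in a few lines of Python. This is only an illustrative toy, not the networks Hinton trained: each "layer" here is a random projection followed by a rectifying nonlinearity, standing in for feature detectors that a real deep network would learn greedily from data, so only the shape of the data flow is meant to be realistic.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_layer(n_in, n_out):
    """One 'layer' of feature detectors. In a real deep network these
    weights would be learned one layer at a time; here they are random
    projections so the sketch stays self-contained."""
    w = rng.normal(size=(n_in, n_out)) / np.sqrt(n_in)
    def layer(x):
        return np.maximum(0.0, x @ w)  # ReLU: keep only 'detected' features
    return layer

# 64-pixel image patches -> edges -> corners -> object-level features
layer1 = make_layer(64, 32)   # primitive features (edges, basic sounds)
layer2 = make_layer(32, 16)   # combinations (corners, phoneme pairs)
layer3 = make_layer(16, 8)    # object/phoneme-level features

patches = rng.normal(size=(100, 64))      # 100 fake image patches
features = layer3(layer2(layer1(patches)))
print(features.shape)                      # (100, 8)
```

The point of the sketch is the hierarchy: raw inputs pass through successive layers, each consuming the features produced by the one before it, until a compact object-level representation remains.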
Like cats. Last June, Google demonstrated one of the largest neural networks yet, with more than a billion connections. A team led by Stanford computer science professor Andrew Ng and Google Fellow Jeff Dean showed the system images from 10 million randomly selected YouTube videos. One simulated neuron in the software model fixated on images of cats. Others focused on human faces, yellow flowers, and other objects. And thanks to the power of deep learning, the system identified these discrete objects even though no humans had ever defined or labeled them.
Even if it doesn’t make up for everything, the computing, data and human resources a company like Google throws at these problems can’t be dismissed. They’re crucial, say deep-learning advocates, because the brain itself is still so much more complex than any of today’s neural networks. “You need lots of computational resources to make the ideas work at all,” says Hinton.
This is what intrigues Kurzweil, 65, who has long had a vision of intelligent machines. In high school, he wrote software that enabled a computer to create original music in various classical styles, which he demonstrated in a 1965 appearance on the TV show I’ve Got a Secret. Since then, his inventions have included several firsts—
- a print-to-speech reading machine,
- software that could scan and digitize printed text in any font,
- music synthesizers that could re-create the sound of orchestral instruments, and
- a speech recognition system with a large vocabulary.
Palo Alto and Berkeley, Calif. – March 10, 2013 – An unprecedented collaboration among academia, industry, government and civil society has resulted in the launch of a professional-grade collection of public domain DNA parts that greatly increases the reliability and precision by which biology can be engineered.
Researchers at the International Open Facility Advancing Biotechnology (known as the BIOFAB) have just announced that they have, in effect, established rules for the first language for engineering gene expression, the layer between the genome and all the dynamic processes of life. The feat is all the more remarkable considering that just a few years ago several prominent scientists claimed that it would be impossible to develop frameworks enabling reliably reusable standard biological parts.
Collectively, the BIOFAB team has produced thousands of high quality standard biological parts. The DNA sequences that encode all parts and the data about them are free and available online. The project is detailed in three research papers, “Precise and Reliable Gene Expression via Standard Transcription and Translation Initiation Elements,” and “Quantitative Estimation of Activity and Quality for Collections of Functional Genetic Elements,” published simultaneously in Nature Methods, and “Measurement and Modeling of Intrinsic Transcription Terminators,” forthcoming in Nucleic Acids Research (see full citations below).
The BIOFAB’s rules for engineering expression come in the form of mathematical models that can be used to predict and characterize the individual parts used in synthetic biology. The work establishes a much-needed technological foundation for the field, allowing researchers to engineer the function of DNA more precisely, and to better predict the resultant behavior.
Dr. Vivek Mutalik, a BIOFAB team leader, says that synthetic biology has been plagued by a lack of reliability and predictability. “Until now, virtually every project has been a one-off – we haven’t figured out how to standardize the genetic parts that are the building blocks of this new field. Researchers produce amazing new parts all the time, but much like trying to use someone else’s house key in your own door, it’s been difficult to directly reuse parts across projects.” Without the ability to characterize parts – that is, to understand how they will behave in multiple contexts – biotech researchers are doomed to a lengthy process of trial-and-error. Fortunately, notes Mutalik, “Our work in the BIOFAB changes all that.”
The plan for establishing the rules for how genetic parts fit together was ambitious and complex. First, researchers needed to figure out the functional patterns of genetic parts. They had to ask, “To what extent do the basic genetic parts that control gene expression ‘misbehave’ when reused over and again in novel combinations,” said Mutalik. BIOFAB researchers had to make and test hundreds of combinations of frequently used parts, then take the resulting data and build mathematical models that demonstrated part quality.
Joao Guimaraes, a member of the BIOFAB team and graduate student in computational biology, explains that difficult-to-predict parts are deemed to be low quality, while “high quality parts behave the same when reused.” Once they found a way to determine part quality, the BIOFAB team set to work on establishing rules for precision control of gene expression, a process that underlies all of biotechnology. They learned by observing natural examples of genetic junctions, and built reliable transcription and translation initiation elements. “We also created standard junctions for transcription terminators, a molecular ‘stop sign’ for gene expression,” said Dr. Guillaume Cambray, a BIOFAB team leader.
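The BIOFAB's actual quality metrics are statistical models published in the papers cited below; as a rough illustration of the idea that "high quality parts behave the same when reused," one could score a part by the relative spread of its measured expression across genetic contexts. The `part_quality` helper and the numbers here are hypothetical, not BIOFAB data.

```python
import statistics

def part_quality(expression_levels):
    """Score a genetic part by the consistency of its measured expression
    across different contexts: lower relative spread -> higher quality.
    (An illustrative stand-in for the BIOFAB's statistical models.)"""
    mean = statistics.mean(expression_levels)
    cv = statistics.stdev(expression_levels) / mean  # coefficient of variation
    return 1.0 / (1.0 + cv)

# Hypothetical expression measurements of two promoters in five contexts
reliable_part = [98, 102, 100, 101, 99]     # behaves the same when reused
erratic_part  = [20, 180, 60, 140, 100]     # difficult to predict

print(part_quality(reliable_part) > part_quality(erratic_part))  # True
```

Both parts have the same mean expression; only the reliable one would be worth reusing across projects, which is exactly the distinction a quality score has to capture.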
While the initial BIOFAB project was able to tame three types of core genetic parts, much more work remains. “We ask that others expand upon the genetic grammar initiated here, to incorporate additional genetic functions and to translate the common rule set beyond E. coli,” says Stanford professor and BIOFAB co-director Drew Endy. (Endy also serves as president of the BioBricks Foundation.)
The BIOFAB’s seed money came from the National Science Foundation, but this funding came only after 10 years of knocking on doors. Part of the difficulty was that the BIOFAB represented a fundamental engineering research project: not the kind of work suited to a single graduate student’s thesis, and not economically practical for a biotechnology company to take on. UC Berkeley professor and BIOFAB co-director Adam Arkin noted, “We knew that we would only be successful if we could bring together the skills represented by both academia and industry to establish a professional team that could specify and solve the fundamental engineering puzzles that slow the development of effective biotechnologies.”
The BIOFAB’s collaboration with not only the NSF, but also with industry, has been one of the keys to its success. “Pre-competitive and unrestricted partnerships with industry were essential to guide the work and help secure and extend public funding,” said UC Berkeley professor and BIOFAB advisor Jay Keasling. (Both Arkin and Keasling are also affiliated with Lawrence Berkeley Lab; Arkin is Director of the Physical Biosciences Division, and Keasling is an Associate Lab Director for Biosciences.) Other partners came from civil society, including the BioBricks Foundation, a public-benefit organization that helps to advance best practices in the emerging field of synthetic biology. “We were thrilled to help make all BIOFAB engineered parts free-to-use via the BioBrick Public Agreement and the public domain,” said Holly Million, the foundation’s executive director.
The BIOFAB’s standardized parts are specific for E. coli but the “grammar” – the way in which the rules are constructed for how the parts fit together – should apply to nearly any organism; many of the BIOFAB’s rules for E. coli are expected to apply to other prokaryotes. The initial parts have already begun to have an impact in academic research. Caroline Ajo-Franklin, staff scientist at the Lawrence Berkeley Lab’s Biological Nanostructures Facility, noted that her work was able to progress much faster because of the availability of the source code. “We knew we needed a quantitatively characterized library of reliable promoters to move our research efforts forward. Teaming up with BIOFAB changed what would have been at least six months of work into a few weeks.”
The BIOFAB’s work was supported by the National Science Foundation, the Synthetic Biology Engineering Research Center, Lawrence Berkeley National Laboratory, the BioBricks Foundation, Agilent, Genencor, DSM, and Autodesk.
“Quantitative estimation of activity and quality for collections of functional genetic elements.” Vivek K Mutalik, Joao C Guimaraes, Guillaume Cambray, Quynh-Anh Mai, Marc Juul Christoffersen, Lance Martin, Ayumi Yu, Colin Lam, Cesar Rodriguez, Gaymon Bennett, Jay D Keasling, Drew Endy & Adam P Arkin, 10 March 2013, Nature Methods. doi:10.1038/nmeth.2403
“Precise and reliable gene expression via standard transcription and translation initiation elements.” Vivek K Mutalik, Joao C Guimaraes, Guillaume Cambray, Colin Lam, Marc Juul Christoffersen, Quynh-Anh Mai, Andrew B Tran, Morgan Paull, Jay D Keasling, Adam P Arkin & Drew Endy, 10 March 2013, Nature Methods. doi:10.1038/nmeth.2404
“Measurement and modeling of intrinsic transcription terminators.” Guillaume Cambray, Joao C Guimaraes, Vivek K Mutalik, Colin Lam, Quynh-Anh Mai, Tim Thimmaiah, James M Carothers, Adam P Arkin and Drew Endy, March 2013, Nucleic Acids Research.
Neurons in an intact mouse hippocampus visualized using CLARITY and fluorescent labelling. Kwanghun Chung & Karl Deisseroth, HHMI/Stanford Univ.
“The work is spectacular. The results are unlike anything else in the field,” says Van Wedeen, a neuroscientist at the Massachusetts General Hospital in Boston and a lead investigator on the US National Institutes of Health’s Human Connectome Project (HCP), which aims to chart the brain’s neuronal communication networks. The new technique, he says, could reveal important cellular details that would complement data on large-scale neuronal pathways that he and his colleagues are mapping in the HCP’s 1,200 healthy participants using magnetic resonance imaging.
ORIGINAL: IEEE Spectrum
This is the first in a series of posts on scientific techniques, and how to use them in your research.
The brain is composed of billions of individual neurons. Cells in the brain are densely packed, with intermixed, often overlapping types. An excitatory neuron, for instance, may be surrounded by dozens of inhibitory interneurons and glia. So how can you tell which cell is which?
The classic approach has been to classify cells based on their shape, chemistry, or connectivity. However, this tradition ignores the enormous diversity within any broad class of cells, a problem scientists are only now starting to address with new tools. This post explores some of these newer techniques, including immunohistochemistry and RT-PCR.
The Need for More Accurate Techniques
Past scientific techniques focused on describing the “principal neurons” and “secondary neurons” of a given brain region, with descriptions based on physiology or anatomy alone. These data are now insufficient, and can even be misleading, given the heterogeneity within cell populations such as “dopamine neurons.” New molecular techniques allow a far more accurate description of neurons based on their molecular properties [Ungless and Grace, 2012].
Questions about the identity of a recorded cell, such as which neurotransmitter or peptide it releases and which enzymes synthesize that chemical, require even more precise techniques. So do further questions about which receptors a cell expresses and how it differs from its neighbors.
A modern approach takes into account the molecular profile of the neuron, and requires measurement of mRNA and protein expression.
Identifying Neurons with RT-PCR and Immunohistochemistry
Identifying the specific neurons recorded from brain slice preparations can be difficult. Their electrophysiological properties alone are insufficient to correctly identify a cell type. And unless you have a transgenic animal with expression of GFP or other fluorophore in a specific cell type, you have no basis for verifying what type of neuron you recorded.
A better experiment would begin with an acute slice preparation of brain tissue followed by whole-cell patch-clamp recording of individual neurons [Davie et al 2006]. After this characterization, the brain slices can be further processed through two popular methods for molecular characterization of neurons: single-cell reverse transcriptase polymerase chain reaction (RT-PCR) or immunohistochemistry.
RT-PCR starts with mRNA from single cells, obtained either by aspiration through the recording pipette of a living cell or by laser capture microdissection from sectioned tissue, as described in [Lin et al 2007]. A typical study employing these methods in the amygdala, a brain region responsible for fear learning, was done by [Sosulina et al 2010].
If you want to order this analysis for your own study, check out the 58 facilities who offer RT-PCR on Science Exchange: https://www.scienceexchange.com/services/real-time-qpcr
Immunohistochemistry (IHC) can also be used for this procedure, labeling tissue with antibodies for a molecule of interest, and visualizing with a fluorescent secondary antibody or a reactive dark precipitate stain.
At the beginning of this protocol, the electrophysiologist must perform recordings with a pipette filled with dye (e.g. 0.1% biocytin, or a fluorescent dye such as rhodamine dextran) for a sufficient length of time to fill the cell (at least 30 minutes).
After recording, brain slices are fixed overnight in paraformaldehyde (4%) and cryoprotected in sucrose (30%) for further sectioning (typically 30–40 microns) and staining.
If you want to order this analysis for your own study, check out the 27 facilities who offer IHC on Science Exchange: https://www.scienceexchange.com/services/immunohistochemistry
PCR: Contamination with mRNA from nearby cells will prevent accurate identification of mRNA from the neuron of interest. To prevent this problem, all the buffers, solutions, and glass pipettes used for mRNA extraction must be kept sterile during the procedure. Dissociated cells (obtained via mechanical trituration or enzymatic digestion) may be superior for isolating individual neurons compared with acute brain slices, in which cells are much more densely packed [Hodne et al 2010] [Kay and Krupa 2001].
Immunohistochemistry: The challenge is finding a selective antibody at the appropriate concentration to get the best signal with low background staining. This is a matter of trial and error because multiple vendors manufacture antibodies of varying specificity (e.g. a monoclonal primary antibody is more selective than a polyclonal antibody) and several dilutions must be tested. The labeled protein of interest can be visualized with a fluorescent secondary antibody or dark precipitate staining such as DAB.
A second problem may be in recovering the dye-filled cell of interest after the staining procedure. There are many cutting and washing steps along the way, and a single section containing your dye-filled neuron can easily be lost. It is critical to recover all sections (only 30 microns thick) during these steps.
ABOUT THE AUTHOR
Ana Mrejeru (Twitter: Miss_Anamaria) is a postdoctoral scientist at the Columbia University Medical Center. Her research focuses on healthcare technologies for brain disorders and on building neuroscience apps for improved learning; she is also a member of the Science Exchange Advocate program.
ORIGINAL: Connected The Film
“Examining everything from the Big Bang to Twitter… a cinematic clickstream…incredibly engaging!” –The New York Times
Connected Distribution Highlights:
- World Premiere at the 2011 Sundance Film Festival
- Selected by the U.S. State Department to be part of the 2012 American Film Showcase and as the first film to launch the Showcase (screened in Cape Town, South Africa & Moscow, Russia)
- 11 city theatrical tour
- Over 1,000 screenings since launch, spanning 75 film festivals including the Rio de Janeiro Film Festival, the Jerusalem Int’l Film Festival, a special screening at the Cannes Int’l Film Market, Cleveland Int’l, and many more (complete list here)
- Used in over 200 educational institutions around the world, on 6 continents.
- Connected script included in official Motion Picture Academy Library
- On Paste Magazine’s 2011 list of best documentaries
- Winner of 15 awards & distinctions including:
- Selected by the U.S. State Department to play at embassies around the world as part of the US Film Showcase
- Selected for the Disruptive Innovation Award from The 2012 Tribeca Film Festival
- The Interdependence Film Prize from The Berlin International Film Festival and the Interdependence Movement
- Best Documentary Feature from the Atlanta Int’l Film Festival
- Best of Festival Documentary from the Portland, ME Film Festival
- Women in Film Award from the National Geographic All Roads Grant at Sundance
The British Library has been digitizing some of its prize pieces, and it has announced that a new round of six artifacts has been completed, including Beowulf, a Gospel penned in gold ink, and one of Leonardo da Vinci’s notebooks.
“Each of these six manuscripts is a true splendour, and has immense significance in its respective field, whether that be Anglo-Saxon literature, Carolingian or Flemish art, or Renaissance science and learning,” Julian Harrison, the library’s curator of medieval artifacts, blogged. “On Digitised Manuscripts you’ll be able to view every page in full and in colour, and to see the finer details using the deep zoom facility.”
All of these texts can be appreciated on a visual level, particularly because the scans are so good. Even the grain of the paper is fascinating.
18 March 2013
Fast imaging in larval zebrafish produces first neuron-level vertebrate brain-activity map.
At first glance, it looks like an oddly shaped campfire: smoky grey shapes light up with red sparks and flashes. But the video actually represents a different sort of crackle — the activity of individual neurons across a larval fish brain. It is the first time that researchers have been able to image an entire vertebrate brain at the level of single cells.
“We see the big picture without losing resolution,” says Philipp Keller, a microscopist at the Howard Hughes Medical Institute’s Janelia Farm Research Campus in Ashburn, Virginia, who developed the system with Janelia neurobiologist Misha Ahrens. The researchers are able to record activity across the whole fish brain almost every second, detecting 80% of its 100,000 neurons. (The rest lie in hard-to-access areas, such as between the eyes; their activity is visible but cannot be pinned down to single cells.) The work is published today in Nature Methods [1].
“It’s phenomenal,” says Rafael Yuste, a neuroscientist at Columbia University in New York. “It is a bright star now in the literature, suggesting that it is not crazy to map every neuron in the brain of an animal.” Yuste has been leading the call for a big biology project [2] that would do just that in the human brain, which contains vastly more neurons than the zebrafish brain.
The resolution offered by the zebrafish study will enable researchers to understand how different regions of the brain work together, says Ahrens. With conventional techniques, imaging even 2,000 neurons at once is difficult, so researchers must pick and choose which to look at, and extrapolate. Now, he says, “you don’t need to guess what is happening — you can see it”.
The increased imaging power could, for example, help to explain how the brain coordinates movement, consolidates learning or processes sights and smells. “It allows a much better view of the dynamics throughout the brain during different behaviours and during learning paradigms,” says Joseph Fetcho, a neurobiologist at Cornell University in Ithaca, New York.
Light, camera, activity
The imaging system relies on a genetically engineered zebrafish (Danio rerio). The fish’s neurons make a protein that fluoresces in response to fluctuations in the concentration of calcium ions, which occur when nerve cells fire. A microscope sends sheets of light rather than a conventional beam through the fish’s brain, and a detector captures the signals like a viewer watching a cinema screen. The system records activity from the full brain every 1.3 seconds.
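Downstream of the optics, raw calcium-indicator fluorescence is typically converted to ΔF/F, the relative change from a baseline, before a neuron is called active. A minimal sketch of that step follows; the percentile baseline and the activity threshold are common conventions in calcium imaging, not details taken from this paper.

```python
import numpy as np

def delta_f_over_f(trace, baseline_percentile=20):
    """Convert a raw fluorescence trace to dF/F = (F - F0) / F0, the
    standard readout of calcium-indicator activity, with the baseline
    F0 estimated as a low percentile of the trace."""
    f0 = np.percentile(trace, baseline_percentile)
    return (trace - f0) / f0

# Fake trace sampled every 1.3 s (the article's whole-brain frame interval):
# a flat baseline near 100 with one calcium transient rising to 160.
trace = np.array([100, 101, 99, 160, 140, 120, 100, 100], dtype=float)
dff = delta_f_over_f(trace)
active = dff > 0.2   # simple threshold on the relative change
print(active)
```

Applied to every voxel of each 1.3-second brain volume, this kind of normalization is what turns the raw light-sheet images into the "campfire" of firing neurons described above.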
Ahrens, Keller and others have previously used light-sheet microscopy to image developing embryos over days [3]; for the latest study, they modified the light detectors and other aspects of the system to increase the imaging rate tenfold. In a series of hour-long experiments, each of which generated 1 terabyte (1 million megabytes) of data, the researchers could watch correlated activity emerge in populations of neurons across distinct brain regions.
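Those figures imply a punishing data rate. A quick back-of-the-envelope check, using only the numbers stated in the article (one whole-brain volume every 1.3 seconds, 1 terabyte per hour-long experiment):

```python
# Back-of-the-envelope data rate for the whole-brain imaging runs.
seconds_per_volume = 1.3          # one whole-brain image every 1.3 s
volumes_per_hour = 3600 / seconds_per_volume
terabyte = 1e12                   # bytes in the article's "1 terabyte"
bytes_per_volume = terabyte / volumes_per_hour

print(round(volumes_per_hour))        # ~2769 brain volumes per hour
print(round(bytes_per_volume / 1e6))  # ~361 MB per volume
```

So each hour-long session yields nearly 2,800 brain volumes at several hundred megabytes apiece, which is why storage and analysis, not just optics, are part of the engineering challenge.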
The technique does have its limitations. For one thing, it works best in zebrafish embryos, which are transparent. Ahrens and Keller think that it could work in intact mammal brains, but it would require surgery and would cover only a small fraction of the brain.
Another limitation is that neither the protein sensor nor the imaging system yet works fast enough to distinguish whether a neuron has fired once or several times in quick succession. But Fetcho says that it is fast enough to start to understand how activity flows through the brain. “No one is anywhere in the ball park of this for any other animal model.”
[2] Alivisatos, A. P. et al. Science 339, 1284–1285 (2013).
[3] Tomer, R., Khairy, K., Amat, F. & Keller, P. J. Nature Methods 9, 755–763 (2012).
ORIGINAL: Allen Institute
A visually compelling tour of the human brain, from anatomy to cells to genes and back.