Category: IoT


The Chinese Artificial Intelligence Revolution

By Hugo Angel,

The Bund, by Shizhao. This file is licensed under the Creative Commons Attribution 1.0 Generic license.
The artificial intelligence (AI) world summit took place in Singapore on 3 and 4 October 2017 (The AI Summit Singapore). Following this global trend of heavy emphasis on AI, we can note the convergence between artificial intelligence and the emergence of “smart cities” in Asia, especially in China (Imran Khan, “Asia is leading the ‘smart city’ charge, but we’re not there yet”, TechInAsia, January 19, 2016). The development of artificial intelligence indeed combines with the ongoing urbanization of the Chinese population.

This “intelligentization” of smart cities in China is induced by the necessity to master urban growth, while adapting urban areas to the emerging energy, water, food, health challenges, through the treatment of big data by artificial intelligence (Jean-Michel Valantin, “China: Towards the digital ecological revolution?”, The Red (Team) Analysis Society, October 22, 2017). Reciprocally, the smart urban development is a powerful driver, among others, of the development of artificial intelligence (Linda Poon, “What artificial intelligence reveals about urban change?” City Lab, July 13, 2017).

In this article, we shall thus focus upon the combination of artificial intelligence and cities that creates the so-called “smart cities” in China. After presenting what this combination looks like through Chinese examples, we shall explain how this trend is implemented. Finally, we shall see how the development of artificial intelligence within the latest generations of smart cities is disrupting geopolitics through the combination of industry and intelligentization.

Artificial intelligence and smart cities
In China, the urban revolution induced by the acceleration of the rural exodus is entwined with the digital and artificial intelligence revolution. This can be seen through the national program of urban development that is transforming “small” (3 million people) and middle-size (5 million people) cities into smart cities. The 95 new Chinese smart cities are meant to shelter the 250 million people expected to relocate to towns between the end of 2017 and 2026 (Chris Weller, “Here’s China’s genius plan to move 250 million people from farms to cities”, Business Insider, 5 August 2015). These 95 cities are themselves part of the 500 smart cities expected to be developed before the end of 2017 (“Chinese ‘smart cities’ to number 500 before end of 2017”, China Daily, 21 April 2017).

In order to manage the mammoth challenges of these huge cities, artificial intelligence is on the rise. Deep learning is notably the type of AI used to make these cities smart. Deep learning is both able to treat the massive flow of data generated by cities and made possible by the exponentially growing flows of these big data, as these very data allow the AI to learn by itself, through the creation, among other things, of the code needed to apprehend new kinds of data and issues (Michael Copeland, “What’s the difference between AI, machine learning and deep learning?”, NVIDIA Blog, July 29, 2016).
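As a minimal illustration of the learning principle at work, and assuming nothing about any specific city system, the toy Python sketch below trains a single linear unit on synthetic traffic data. Deep learning stacks many such units into layers, but the core idea of adjusting weights from data by gradient descent is the same.

```python
# Toy illustration (not any city's actual system): one linear unit
# learning to map a normalized traffic volume to a congestion score.

def train(samples, epochs=2000, lr=0.05):
    """samples: list of (volume, congestion) pairs."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in samples:
            err = (w * x + b) - y   # prediction error of the unit
            w -= lr * err * x       # gradient step on the weight
            b -= lr * err           # gradient step on the bias
    return w, b

# Synthetic data in which congestion roughly doubles with volume.
data = [(0.1, 0.2), (0.4, 0.8), (0.5, 1.0), (0.9, 1.8)]
w, b = train(data)
print(round(w * 0.7 + b, 2))  # prediction for an unseen volume
```

After training, the unit has recovered the underlying doubling relationship well enough to predict congestion for a traffic volume it never saw.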

For example, since 2016, the Hangzhou municipal government has integrated artificial intelligence, notably with “city brain”, which helps improve traffic efficiency through the use of the big data streams generated by a myriad of sensors and cameras. The “city brain” project is led by the giant technology company Alibaba. This “intelligentization” of traffic management helps reduce traffic jams and improves street surveillance, as well as air quality, for the 9 million residents of Hangzhou. However, it is only the first step towards turning the city into an intelligent and sustainable smart city (Du Yifei, “Hangzhou growing ‘smarter’ thanks to AI technology”, People’s Daily, October 20, 2017).
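The traffic-management idea can be sketched in a few lines. The function below is a deliberately simplified illustration, not Alibaba’s actual “city brain”: it splits a junction’s green time in proportion to the vehicle counts reported by its cameras.

```python
# Simplified sketch, NOT Alibaba's "city brain": allocate a junction's
# green time in proportion to camera-reported vehicle counts, with a
# guaranteed minimum green phase per approach.
def split_green_time(counts, cycle_s=120, min_green_s=10):
    """counts: vehicles seen per approach; returns green seconds each."""
    if sum(counts) == 0:
        return [cycle_s // len(counts)] * len(counts)
    spare = cycle_s - min_green_s * len(counts)
    return [min_green_s + round(spare * c / sum(counts)) for c in counts]

# The busiest approach gets the longest green phase.
print(split_green_time([120, 40, 30, 10]))
```

A real system would also learn from historical flows and coordinate neighbouring junctions; this sketch only shows the proportional-allocation step.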

“Intelligentizing cities”

Through the developing internet of things (IoT), the convergence of “intelligent” infrastructures, big data management, and urban artificial intelligence is going to be increasingly important for improving traffic, and thus energy efficiency, air quality and economic development (Sarah Hsu, “China is investing heavily into Artificial intelligence, and could soon catch up with US”, Forbes, July 3, 2017). The Hangzhou experiment is being duplicated in Suzhou, Quzhou and Macao.

Meanwhile, Baidu Inc, China’s largest search engine company, is developing a partnership with Shanxi province in order to implement “city brain” there, dedicated to creating smart cities in the northern province while improving coal mining management and chemical treatment (“Baidu partners with Shanxi province to integrate AI with city management”, China Money Network, July 13). As a result, AI is going to be used to alleviate the use of this energy source, which is also responsible for the Chinese “airpocalypse” (Jean-Michel Valantin, “The Arctic, Russia and China’s energy transition”, The Red (Team) Analysis Society, February 2, 2017).

In the meantime, Tencent, another mammoth Chinese technology company, is multiplying partnerships with 14 Chinese provinces and 50 cities to develop and integrate urban artificial intelligence. At the same time, the Hong Kong government is getting ready to implement an artificial intelligence program to tackle 21st-century urban challenges, chief among them urban development management and climate change impacts.

When looking closely at this development of artificial intelligence in order to support the management of Chinese cities, and at the multiplication of smart cities, we notice that both also coincide with the political will to reduce the growth of the already clogged Chinese megacities of more than ten million people – such as

  • Beijing (21.5 million people), 
  • Shanghai (25 million), and 
  • the urban areas around them – and of the network of very great cities where 5 to 10 million people live. 

Indeed, the problem is that these very large cities and megalopolises have reached highly dangerous levels of water and air pollution, hence the “airpocalypse” created by the noxious mix of car fumes and coal plant exhaust.

From the intelligentization of Chinese cities to the “smart cars revolution”
This Chinese AI-centred urban development strategy also drives a gigantic urban, technological and industrial revolution that turns China into a possible world leader

  • in clean energy, 
  • in electric and smart cars and 
  • in urban development. 

The development of the new generations of smart cars is thus going to be coupled with the latest advances in artificial intelligence. As a result, China can position itself in the “middle” of the major trends of globalization. Indeed, smart electric cars are the “new frontier” of the car industry that supports the economies of great economic powers such as the U.S., Japan, and Germany (Michael Klare, Blood and Oil, 2005), while artificial intelligence is the new frontier of industry and of the building of the future. The emergence of China as an “electric and smart cars” provider could have massive implications for the industrial and economic development of these countries.


In 2015, in the case of Shanghai, the number of cars grew by more than 13%, reaching the staggering total of 2.5 million cars in a megacity of 25 million people. In order to mitigate the impact of the car flow on the atmosphere, the municipal authorities use new “smart street” technologies. For example, the Ningbo-Hangzhou-Shanghai highway, used daily by more than 40,000 cars, is being equipped with a cyber network allowing drivers to pay tolls in advance with their smartphones. This application allows a significant decrease in pollution, because the lines of thousands of cars stopping in front of toll booths are reduced (“Chinese ‘smart cities’ to number 500 before end of 2017”, China Daily, 21 April 2017).
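The prepaid-toll mechanism described above can be sketched as follows; the plate numbers, toll price and ledger here are invented for illustration, not taken from the actual system.

```python
# Hypothetical sketch of the prepaid-toll idea (plates and prices
# invented). A car whose plate is found with sufficient balance is
# waved through, so no queue of stopped cars forms at the booth.
prepaid = {"ZH-A12345": 50.0, "ZH-B67890": 3.0}  # plate -> balance (yuan)
TOLL = 15.0

def pass_toll(plate):
    balance = prepaid.get(plate, 0.0)
    if balance >= TOLL:
        prepaid[plate] = balance - TOLL  # deduct and open the barrier
        return "pass"
    return "stop"                        # fall back to manual payment

print(pass_toll("ZH-A12345"))  # enough balance: passes without stopping
print(pass_toll("ZH-B67890"))  # insufficient balance: must stop
```

The pollution gain comes from the "pass" branch: cars billed in advance never idle in a payment queue.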

In the meantime, the tech giant Tencent, creator of WeChat, the enormous Chinese social network with more than 889 million monthly users (“2017 WeChat Users Behavior Report”, China Channel, April 25, 2017), is developing a partnership with the Guangzhou Automobile Group to develop smart cars. Baidu is doing the same with the Chinese BYD, Chery and BAIC, while launching Apollo, its open-source platform for AI-powered smart cars. Alibaba, the giant of e-commerce, with more than 454 million users during the first quarter of 2017 (“Number of active buyers across Alibaba’s online shopping properties from 2nd quarter 2012 to 1st quarter 2017 (in millions)”, Statista, 2017), is developing a partnership with the Chinese brand SAIC Motors and has already launched the YunOS system, which connects cars to the cloud and internet services (Charles Clover and Sherry Fei Ju, “Tencent and Guangzhou team up to produce smart cars”, Financial Times, 19 September 2017).

It must be kept in mind that these three Chinese tech giants are thus connecting the development of their own services with artificial intelligence development, notably smart car development, in the context of the urban, digital and ecological transformation of China. In other words, “city brains” and “smart cars” are going to become an immense “digital ecosystem” that artificial intelligences are going to manage, thus giving China an imposing technological edge.

This means that artificial intelligence is becoming the common support of the social and urban transformation of China, as well as the ways and means of the transformation of the Chinese urban network into smart cities. It is also a scientific, technological and industrial revolution.

This revolution is going to be based on the new international distribution of power between artificial intelligence-centred countries, and the others.

Indeed, in China, artificial intelligence is creating new social, economic and political conditions. This means that China is using artificial intelligence in order to manage its own social evolution, while becoming a mammoth artificial intelligence great power.

It now remains to be seen how the latest generations of smart cities, powered by developing artificial intelligence, accompany the way some countries are getting ready for the economic, industrial and ecological, as well as security and military, challenges of the 21st century, and how this urban and artificial intelligence revolution is preparing an immense geopolitical revolution.

About the author: Jean-Michel Valantin (PhD Paris) leads the Environment and Geopolitics Department of The Red (Team) Analysis Society. He is specialised in strategic studies and defence sociology with a focus on environmental geostrategy.

ORIGINAL: RedAnalysis

IBM, Local Motors debut Olli, the first Watson-powered self-driving vehicle

By Hugo Angel,

Olli hits the road in the Washington, D.C. area and later this year in Miami-Dade County and Las Vegas.
Local Motors CEO and co-founder John B. Rogers, Jr. with “Olli” & IBM, June 15, 2016.Rich Riggins/Feature Photo Service for IBM

IBM, along with the Arizona-based manufacturer Local Motors, debuted the first-ever driverless vehicle to use the Watson cognitive computing platform. Dubbed “Olli,” the electric vehicle was unveiled at Local Motors’ new facility in National Harbor, Maryland, just outside of Washington, D.C.

Olli, which can carry up to 12 passengers, taps into four Watson APIs (

  • Speech to Text, 
  • Natural Language Classifier, 
  • Entity Extraction and 
  • Text to Speech

) to interact with its riders. It can answer questions like “Can I bring my children on board?” and respond to basic operational commands like “Take me to the closest Mexican restaurant.” Olli can also give vehicle diagnostics, answering questions like “Why are you stopping?”
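The four-stage pipeline listed above can be sketched as a chain of function calls. The stubs below are placeholders, not the real IBM Watson SDK: in Olli, each stage is a call to the corresponding Watson API.

```python
# Stand-in sketch of the four-stage rider-interaction pipeline.
# Each stub below represents one of the Watson stages named above.

def speech_to_text(audio):        # stage 1: Speech to Text
    return audio["transcript"]    # stub: pretend the audio is transcribed

def classify_intent(text):        # stage 2: Natural Language Classifier
    return "navigation" if "take me" in text.lower() else "question"

def extract_entities(text):       # stage 3: Entity Extraction
    return [w for w in ("restaurant", "children") if w in text.lower()]

def text_to_speech(reply):        # stage 4: Text to Speech
    return f"<spoken>{reply}</spoken>"  # stub: pretend reply is synthesized

def olli_respond(audio):
    text = speech_to_text(audio)
    reply = f"intent={classify_intent(text)}, entities={extract_entities(text)}"
    return text_to_speech(reply)

print(olli_respond({"transcript": "Take me to the closest Mexican restaurant"}))
```

The point of the chain is that each stage's output is the next stage's input, so the vehicle can go from raw speech to a spoken answer in one pass.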

Olli learns from data produced by more than 30 sensors embedded throughout the vehicle, which will be added and adjusted to meet passenger needs and local preferences.
While Olli is the first self-driving vehicle to use IBM Watson Internet of Things (IoT), this isn’t Watson’s first foray into the automotive industry. IBM launched its IoT for Automotive unit in September of last year, and in March, IBM and Honda announced a deal for Watson technology and analytics to be used in the automaker’s Formula One (F1) cars and pits.
IBM demonstrated its commitment to IoT in March of last year, when it announced it was spending $3B over four years to establish a separate IoT business unit, which later became the Watson IoT business unit.
IBM says that starting Thursday, Olli will be used on public roads locally in Washington, D.C. and will be used in Miami-Dade County and Las Vegas later this year. Miami-Dade County is exploring a pilot program that would deploy several autonomous vehicles to shuttle people around Miami.
ORIGINAL: ZDnet
By Stephanie Condon for Between the Lines
June 16, 2016

Former NASA chief unveils $100 million neural chip maker KnuEdge

By Hugo Angel,

Daniel Goldin
It’s not all that easy to call KnuEdge a startup. Created a decade ago by Daniel Goldin, the former head of the National Aeronautics and Space Administration, KnuEdge is only now coming out of stealth mode. It has already raised $100 million in funding to build a “neural chip” that Goldin says will make data centers more efficient in a hyperscale age.
Goldin, who founded the San Diego, California-based company with the former chief technology officer of NASA, said he believes the company’s brain-like chip will be far more cost and power efficient than current chips based on the computer design popularized by computer architect John von Neumann. In von Neumann machines, memory and processor are separated and linked via a data pathway known as a bus. Over the years, von Neumann machines have gotten faster by sending more and more data at higher speeds across the bus as processor and memory interact. But the speed of a computer is often limited by the capacity of that bus, leading to what some computer scientists call the “von Neumann bottleneck.” IBM has seen the same problem, and it has a research team working on brain-like data center chips. Both efforts are part of an attempt to deal with the explosion of data driven by artificial intelligence and machine learning.
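A back-of-the-envelope calculation shows why the bus becomes the limit; the bandwidth and peak figures below are illustrative assumptions, not measurements of any particular machine.

```python
# Sketch of the von Neumann bottleneck: however fast the cores are,
# they cannot consume operands faster than the bus delivers them.
# All figures are illustrative assumptions.
bus_bandwidth_gbs = 25.6   # assumed memory-bus bandwidth, GB/s
bytes_per_operand = 8      # one double-precision value
cpu_peak_gflops = 200.0    # assumed peak compute if the cores were fed

# Each add pulls two fresh operands across the bus:
fed_gflops = bus_bandwidth_gbs / (2 * bytes_per_operand)
print(f"bus-fed rate: {fed_gflops} of {cpu_peak_gflops} GFLOP/s peak")
```

Under these assumptions the cores sit mostly idle, which is exactly the gap that brain-like designs with co-located memory and compute try to close.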
Goldin’s company is doing something similar to IBM, but only on the surface. Its approach is much different, and it has been secretly funded by unknown angel investors. And Goldin said in an interview with VentureBeat that the company has already generated $20 million in revenue and is actively engaged with hyperscale computing companies and Fortune 500 companies in the aerospace, banking, health care, hospitality, and insurance industries. The mission is a fundamental transformation of the computing world, Goldin said.
“It all started over a mission to Mars,” Goldin said.

Above: KnuEdge’s first chip has 256 cores. Image Credit: KnuEdge
Back in the year 2000, Goldin saw that the time delay for controlling a space vehicle would be too long, so the vehicle would have to operate itself. He calculated that a mission to Mars would take software that would push technology to the limit, with more than tens of millions of lines of code.
Above: Daniel Goldin, CEO of KnuEdge.
Image Credit: KnuEdge
“I thought, holy smokes,” he said. “It’s going to be too expensive. It’s not propulsion. It’s not environmental control. It’s not power. This software business is a very big problem, and that nation couldn’t afford it.”

So Goldin looked further into the brains of the robots, and that’s when he started thinking about the computing it would take.
Asked if it was easier to run NASA or a startup, Goldin let out a guffaw.
“I love them both, but they’re both very different,” Goldin said. “At NASA, I spent a lot of time on non-technical issues. I had a project every quarter, and I didn’t want to become dull technically. I tried to always take on a technical job doing architecture, working with a design team, and always doing something leading edge. I grew up at a time when you graduated from a university and went to work for someone else. If I ever come back to this earth, I would graduate and become an entrepreneur. This is so wonderful.”
Back in 1992, Goldin was planning on starting a wireless company as an entrepreneur. But then he got the call to “go serve the country,” and he did that work for a decade. He started KnuEdge (previously called Intellisis) in 2005, and he got very patient capital.
“When I went out to find investors, I knew I couldn’t use the conventional Silicon Valley approach (impatient capital),” he said. “It is a fabulous approach that has generated incredible wealth. But I wanted to undertake revolutionary technology development. To build the future tools for next-generation machine learning, improving the natural interface between humans and machines. So I got patient capital that wanted to see lightning strike. Between all of us, we have a board of directors that can contact almost anyone in the world. They’re fabulous business people and technologists. We knew we had a ten-year run-up.”
But he’s not saying who those people are yet.
KnuEdge’s chips are part of a larger platform. KnuEdge is also unveiling KnuVerse, a military-grade voice recognition and authentication technology that unlocks the potential of voice interfaces to power next-generation computing, Goldin said.
While the voice technology market has exploded over the past five years due to the introductions of Siri, Cortana, Google Home, Echo, and ViV, the aspirations of most commercial voice technology teams are still on hold because of security and noise issues. KnuVerse solutions are based on patented authentication techniques using the human voice — even in extremely noisy environments — as one of the most secure forms of biometrics. Secure voice recognition has applications in industries such as banking, entertainment, and hospitality.
KnuEdge says it is now possible to authenticate to computers, web and mobile apps, and Internet of Things devices (or everyday objects that are smart and connected) with only a few words spoken into a microphone — in any language, no matter how loud the background environment or how many other people are talking nearby. In addition to KnuVerse, KnuEdge offers Knurld.io for application developers, a software development kit, and a cloud-based voice recognition and authentication service that can be integrated into an app typically within two hours.
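The enroll-then-verify flow behind any such voice-authentication service can be sketched generically. The function names and the hash stand-in below are invented for illustration and are not the actual Knurld.io API; real systems compare acoustic features with a tolerance, not exact hashes.

```python
# Generic voice-authentication flow (illustrative only, not Knurld.io).
# A hash stands in for a voiceprint; real voiceprints are fuzzy
# acoustic-feature vectors compared with a similarity threshold.
import hashlib

enrolled = {}  # user -> stored voiceprint

def voiceprint(audio_bytes):
    return hashlib.sha256(audio_bytes).hexdigest()

def enroll(user, audio_bytes):
    enrolled[user] = voiceprint(audio_bytes)

def verify(user, audio_bytes):
    return enrolled.get(user) == voiceprint(audio_bytes)

enroll("alice", b"my voice is my passport")
print(verify("alice", b"my voice is my passport"))  # matching voice
print(verify("alice", b"someone else speaking"))    # impostor rejected
```

The two-hour integration claim rests on the flow being exactly this small from the developer's side: one enrollment call and one verification call.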
And KnuEdge is announcing KnuPath with LambdaFabric computing. KnuEdge’s first chip, built with an older manufacturing technology, has 256 cores, or neuron-like brain cells, on a single chip. Each core is a tiny digital signal processor. The LambdaFabric makes it possible to instantly connect those cores to each other — a trick that helps overcome one of the major problems of multicore chips, Goldin said. The LambdaFabric is designed to connect up to 512,000 devices, enabling the system to be used in the most demanding computing environments. From rack to rack, the fabric has a latency (or interaction delay) of only 400 nanoseconds. And the whole system is designed to use a low amount of power.
All of the company’s designs are built on biological principles about how the brain gets a lot of computing work done with a small amount of power. The chip is based on what Goldin calls “sparse matrix heterogeneous machine learning algorithms.” And it will run C++ software, something that is already very popular. Programmers can program each one of the cores with a different algorithm to run simultaneously, for the “ultimate in heterogeneity.” It’s multiple input, multiple data, and “that gives us some of our power,” Goldin said.
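The “multiple input, multiple data” idea can be illustrated with ordinary Python workers standing in for cores, each running a different algorithm on its own data at the same time; this is a sketch of the MIMD principle, not of KnuPath itself.

```python
# MIMD sketch (illustrative, not KnuPath code): each worker stands in
# for a core and runs a DIFFERENT algorithm on its own data stream.
from concurrent.futures import ThreadPoolExecutor

def smooth(xs):       # "core 0": average adjacent samples
    return [(a + b) / 2 for a, b in zip(xs, xs[1:])]

def peaks(xs):        # "core 1": indices of local maxima
    return [i for i in range(1, len(xs) - 1) if xs[i - 1] < xs[i] > xs[i + 1]]

def energy(xs):       # "core 2": total signal energy
    return sum(x * x for x in xs)

tasks = [(smooth, [1, 3, 2, 5]), (peaks, [1, 3, 2, 5, 4]), (energy, [1, 2, 2])]
with ThreadPoolExecutor(max_workers=3) as pool:
    futures = [pool.submit(fn, data) for fn, data in tasks]
    results = [f.result() for f in futures]
print(results)
```

Contrast this with SIMD, where every lane runs the same instruction; the heterogeneity Goldin describes is precisely that each core can carry its own program.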

Above: KnuEdge’s KnuPath chip.
Image Credit: KnuEdge
“KnuEdge is emerging out of stealth mode to aim its new Voice and Machine Learning technologies at key challenges in IoT, cloud based machine learning and pattern recognition,” said Paul Teich, principal analyst at Tirias Research, in a statement. “Dan Goldin used his experience in transforming technology to charter KnuEdge with a bold idea, with the patience of longer development timelines and away from typical startup hype and practices. The result is a new and cutting-edge path for neural computing acceleration. There is also a refreshing surprise element to KnuEdge announcing a relevant new architecture that is ready to ship… not just a concept or early prototype.”
Today, Goldin said the company is ready to show off its designs. The first chip was ready last December, and KnuEdge is sharing it with potential customers. That chip was built with a 32-nanometer manufacturing process, and even though that’s an older technology, it is a powerful chip, Goldin said. Even at 32 nanometers, the chip has something like a two-times to six-times performance advantage over similar chips, KnuEdge said.
“The human brain has a couple of hundred billion neurons, and each neuron is connected to at least 10,000 to 100,000 neurons,” Goldin said. “And the brain is the most energy efficient and powerful computer in the world. That is the metaphor we are using.”
KnuEdge has a new version of its chip under design. And the company has already generated revenue from sales of the prototype systems. Each board has about four chips.
As for the competition from IBM, Goldin said, “I believe we made the right decision and are going in the right direction. IBM’s approach is very different from what we have. We are not aiming at anyone. We are aiming at the future.”
In his NASA days, Goldin had a lot of successes. There, he redesigned and delivered the International Space Station, tripled the number of space flights, and put a record number of people into space, all while reducing the agency’s planned budget by 25 percent. He also spent 25 years at TRW, where he led the development of satellite television services.
KnuEdge has 100 employees, but Goldin said the company outsources almost everything. Goldin said he is planning to raise a round of funding late this year or early next year. The company collaborated with the University of California at San Diego and UCSD’s California Institute for Telecommunications and Information Technology.
With computers that can handle natural language systems, many people in the world who can’t read or write will be able to fend for themselves more easily, Goldin said.
“I want to be able to take machine learning and help people communicate and make a living,” he said. “This is just the beginning. This is the Wild West. We are talking to very large companies about this, and they are getting very excited.”
A sample application is a home that has much greater self-awareness. If there’s something wrong in the house, the KnuEdge system could analyze it and figure out if it needs to alert the homeowner.
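A minimal sketch of that sort of self-aware home (the sensor, readings, and threshold below are hypothetical assumptions for illustration, not any published KnuEdge capability): flag a reading for the homeowner only when it deviates sharply from the sensor's recent history.

```python
import statistics

def needs_alert(history, reading, z_threshold=3.0):
    """Alert when a new sensor reading deviates sharply from recent history."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return reading != mean
    # Alert if the reading is more than z_threshold standard deviations away.
    return abs(reading - mean) / stdev > z_threshold

# Hypothetical water-pressure readings (psi): steady, then a sudden drop
# that might indicate a burst pipe.
history = [50.1, 49.8, 50.3, 50.0, 49.9, 50.2]
print(needs_alert(history, 50.1))  # ordinary fluctuation
print(needs_alert(history, 12.0))  # sharp drop worth flagging
```

A real system would learn what "normal" looks like per sensor and per household rather than relying on a fixed threshold, but the decision it makes is the same shape: analyze, then decide whether the anomaly merits an alert.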
Goldin said it was hard to keep the company secret.
“I’ve been biting my lip for ten years,” he said.
As for whether KnuEdge’s technology could be used to send people to Mars, Goldin said, “This is available to whoever is going to Mars. I tried twice. I would love it if they use it to get there.”
ORIGINAL: Venture Beat

Seven Emerging Technologies That Will Change the World Forever

By admin,

By Gray Scott
Sep 29, 2015

When someone asks me what I do, and I tell them that I’m a futurist, the first thing they ask is “what is a futurist?” The short answer that I give is “I use current scientific research in emerging technologies to imagine how we will live in the future.”
However, as you can imagine, the art of futurology and foresight is much more complex. I spend my days thinking, speaking and writing about the future and emerging technologies. On any given day I might be in Warsaw speaking at an Innovation Conference, in London speaking at a Global Leadership Summit, or being interviewed by the Discovery Channel. Whatever the situation, I have one singular mission. I want you to think about the future.


How will we live in the future? How will emerging technologies change our lives, our economy and our businesses? We should begin to think about the future now. It will be here faster than you think.


Let’s explore seven current emerging technologies that I am thinking about that are set to change the world forever.

1. Age Reversal
We will see the emergence of true biological age reversal by 2025.


It may be extraordinarily expensive, complex and risky, but for people who want to turn back the clock, it may be worth it. It may sound like science fiction but the science is real, and it has already begun. In fact, according to new research published in Nature’s Scientific Reports, Professor Jun-Ichi Hayashi from the University of Tsukuba in Japan has already reversed ageing in human cell lines by “turning on or off” mitochondrial function.


Another study published in Cell reports that Australian and US researchers have successfully reversed the aging process in the muscles of mice. They found that raising nuclear NAD+ in old mice reverses pseudohypoxia and metabolic dysfunction. Researchers gave the mice a compound called nicotinamide adenine dinucleotide, or NAD, for a week and found that the age indicators in two-year-old mice were restored to that of six-month-old mice. That would be like turning a 60-year-old human into a 20-year-old!


How will our culture deal with age reversal? Will we set limits on who can age-reverse? Do we ban criminals from this technology? These are the questions we will face in a very complex future. One thing is certain, age reversal will happen and when it does it will change our species and our world forever.


2. Artificial General Intelligence
The robots are coming and they are going to eat your job for lunch. Worldwide shipments of multipurpose industrial robots are forecast to exceed 207,000 units in 2015, and this is just the beginning. Robots like Care-o-bot 4 and Softbank’s Pepper may be in homes, offices and hotels within the next year. These robots will be our personal servants, assistants and caretakers.


Amazon has introduced a new AI assistant called Echo that could replace the need for a human assistant altogether. We already have robots and automation that can make pizza, serve beer, write news articles, scan our faces for diseases, and drive cars. We will see AI in our factories, hospitals, restaurants and hotels around the world by 2020.

This “pinkhouse” at Caliber Biotherapeutics in Bryan, Texas, grows 2.2 million plants under the glow of blue and red LEDs.
Courtesy of Caliber Therapeutics


3. Vertical PinkFarms
We are entering the techno-agricultural era. Agricultural science is changing the way we harvest our food. Robots and automation are going to play a decisive role in the way we hunt and gather. The most important and disruptive idea is what I call “Vertical PinkFarms” and it is set to decentralise the food industry forever.


The United Nations (UN) predicts that by 2050, 80% of the Earth’s population will live in cities. Climate change will also make traditional food production more difficult and less productive in the future. We will need more efficient systems to feed these hungry urban areas. Thankfully, several companies around the world are already producing food grown in these Vertical PinkFarms and the results are remarkable.

Vertical PinkFarms will use blue and red LED lighting to grow organic, pesticide free, climate controlled food inside indoor environments. Vertical PinkFarms use less water, less energy and enable people to grow food underground or indoors year round in any climate.


Traditional food grown on outdoor farms is exposed to the full visible light spectrum. This range includes Red, Orange, Yellow, Green, Blue and Violet. However, agricultural science is now showing us that O, Y, G and V are not necessary for plant growth. You only need R and B. LED lights are much more efficient and cooler than the indoor fluorescent grow lights used in most indoor greenhouses. LED lights are also becoming less expensive as more companies begin to invest in this technology. Just like the solar and electric car revolution, the change will be exponential. By 2025, we may see massive Vertical PinkFarms in most major cities around the world. We may even see small Vertical PinkFarm units in our homes in the future.


4. Transhumanism
By 2035, even if a majority of humans do not self-identify as Transhuman, technically they will be. If we define any bio-upgrade or human enhancement as Transhumanism, then the numbers are already quite high and growing exponentially. According to a UN Telecom Agency report, around 6 billion people have cell phones. This demonstrates the ubiquitous nature of technology that we keep on or around our body.


As human bio-enhancements become more affordable, billions of humans will become Transhuman. Digital implants, mind-controlled exoskeletal upgrades, age reversal pills, hyper-intelligence brain implants and bionic muscle upgrades. All of these technologies will continue our evolution as humans.


Reconstructive joint replacements, spinal implants, cardiovascular implants, dental implants, intraocular lens and breast implants are all part of our human techno-evolution into this new Transhuman species.


5. Wearables and Implantables  
Smartphones will fade into digital history as the high-resolution smart contact lens and corresponding in-ear audio plugs communicate with our wearable computers or “smart suits.” The digital world will be displayed directly on our eyes in stunning interactive augmented beauty. Ghent University’s Centre for Microsystems Technology in Belgium has recently developed a spherical curved LCD display that can be embedded in contact lenses. This enables the entire lens to display information.


The bridge to the smart contact starts with smart glasses, VR headsets and yes, the Apple watch. Wearable technologies are growing exponentially. New smart augmented glasses like 
  • Google Glass, 
  • RECON JET, 
  • METAPro, and 
  • Vuzix M100 Smart Glasses 
are just the beginning. In fact, CastAR augmented 3D glasses recently received over a million dollars in funding on Kickstarter. Their goal was only four hundred thousand. The market is ready for smart vision, and tech companies should move away from handheld devices if they want to compete.

The question of what is real and what is augmented will be irrelevant in the future. We will be able to create our own realities within clusters of information groups, where certain augmented information layers are visible only to members of those groups. All information will be instantaneously available in the augmented visual future.

Gray Scott, an IEET Advisory Board member, is a futurist, techno-philosopher, speaker, writer and artist. He is the founder and CEO of SeriousWonder.com and a professional member of The World Future Society.


6. Atmospheric Water Harvesting
California and parts of the south-west in the US are currently experiencing an unprecedented drought. If this drought continues, the global agricultural system could become unstable.


Consider this: California and Arizona account for about 98% of commercial lettuce production in the United States. Thankfully we live in a world filled with exponential innovation right now.


An emerging technology called Atmospheric Water Harvesting could save California and other arid parts of the world from severe drought and possibly change the techno-agricultural landscape forever.


Traditional agricultural farming methods consume 80% of the water in California. According to the California Agricultural Resource Directory of 2009, California grows 
  • 99% of the U.S. almonds, artichokes, and walnuts; 
  • 97% of the kiwis, apricots and plums; 
  • 96% of the figs, olives and nectarines; 
  • 95% of celery and garlic; 
  • 88% of strawberries and lemons; 
  • 74% of peaches; 
  • 69% of carrots; 
  • 62% of tangerines and 
  • the list goes on.
Several companies around the world are already using atmospheric water harvesting technologies to solve this problem. Each company has a different technological approach but all of them combined could help alleviate areas suffering from water shortages.


The most basic, and possibly the most accessible, form of atmospheric water harvesting technology works by collecting water and moisture from the atmosphere using micro netting. These micro nets collect water that drains down into a collection chamber. This fresh water can then be stored or channelled into homes and farms as needed.


A company called FogQuest is already successfully using micro netting or “fog collectors” to harvest atmospheric water in places like Ethiopia, Guatemala, Nepal, Chile and Morocco.
Will people use this technology or will we continue to drill for water that may not be there?


7. 3D Printing
Today we already have 3D printers that can print clothing, circuit boards, furniture, homes and chocolate. A company called BigRep has created a 3D printer called the BigRep ONE.2 that enables designers to create entire tables, chairs or coffee tables in one print. Did you get that?


You can now buy a 3D printer and print furniture!
Fashion designers like 
  • Iris van Herpen, 
  • Bryan Oknyansky, 
  • Francis Bitonti, 
  • Madeline Gannon, and 
  • Daniel Widrig 
have all broken serious ground in the 3D printed fashion movement. These avant-garde designs may not be functional for the average consumer so what is one to do for a regular tee shirt? Thankfully a new Field Guided Fabrication 3D printer called ELECTROLOOM has arrived that can print fabric, and it may put a few major retail chains out of business. The ELECTROLOOM enables anyone to create seamless fabric items on demand.

So what is next? 3D printed cars. Yes, cars. Divergent Microfactories (DM) has recently created the first 3D-printed high-performance car, called the Blade. This car is no joke. The Blade has a chassis weight of just 61 pounds, goes 0-60 MPH in 2.2 seconds and is powered by a 4-cylinder 700-horsepower bi-fuel internal combustion engine.


These are just seven emerging technologies on my radar. I have a list of hundreds of innovations that will change the world forever. Some sound like pure sci-fi but I assure you they are real. Are we ready for a world filled with abundance, age reversal and self-replicating AI robots? I hope so.



Artificial Intelligence Is Almost Ready for Business

By admin,

Artificial Intelligence (AI) is an idea that has oscillated through many hype cycles over many years, as scientists and sci-fi visionaries have declared the imminent arrival of thinking machines. But it seems we’re now at an actual tipping point. AI, expert systems, and business intelligence have been with us for decades, but this time the reality almost matches the rhetoric, driven by

  • the exponential growth in technology capabilities (e.g., Moore’s Law),
  • smarter analytics engines, and
  • the surge in data.

Most people know the Big Data story by now: the proliferation of sensors (the “Internet of Things”) is accelerating exponential growth in “structured” data. And now on top of that explosion, we can also analyze “unstructured” data, such as text and video, to pick up information on customer sentiment. Companies have been using analytics to mine insights within this newly available data to drive efficiency and effectiveness. For example, companies can now use analytics to decide

  • which sales representatives should get which leads,
  • what time of day to contact a customer, and
  • whether they should e-mail them, text them, or call them.

Such mining of digitized information has become more effective and powerful as more info is “tagged” and as analytics engines have gotten smarter. As Dario Gil, Director of Symbiotic Cognitive Systems at IBM Research, told me:

“Data is increasingly tagged and categorized on the Web – as people upload and use data they are also contributing to annotation through their comments and digital footprints. This annotated data is greatly facilitating the training of machine learning algorithms without demanding that the machine-learning experts manually catalogue and index the world. Thanks to computers with massive parallelism, we can use the equivalent of crowdsourcing to learn which algorithms create better answers. For example, when IBM’s Watson computer played ‘Jeopardy!,’ the system used hundreds of scoring engines, and all the hypotheses were fed through the different engines and scored in parallel. It then weighted the algorithms that did a better job to provide a final answer with precision and confidence.”
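The scheme Gil describes — many scoring engines each scoring every hypothesis, then a weighted combination producing an answer with a confidence — can be sketched in a few lines. The engines, weights, and candidate answers below are invented for illustration; Watson's real pipeline used hundreds of engines with learned weights.

```python
# Each "scoring engine" rates every candidate answer (hypothesis) on [0, 1].
# Engines that historically did a better job carry a larger learned weight.
engines = {
    "keyword_match":  {"weight": 0.2, "scores": {"Toronto": 0.7, "Chicago": 0.4}},
    "date_evidence":  {"weight": 0.5, "scores": {"Toronto": 0.2, "Chicago": 0.9}},
    "source_quality": {"weight": 0.3, "scores": {"Toronto": 0.3, "Chicago": 0.8}},
}

def final_answer(engines):
    # Accumulate each hypothesis's weighted score across all engines.
    hypotheses = {}
    total_weight = sum(e["weight"] for e in engines.values())
    for engine in engines.values():
        for hypothesis, score in engine["scores"].items():
            hypotheses[hypothesis] = hypotheses.get(hypothesis, 0.0) + engine["weight"] * score
    best = max(hypotheses, key=hypotheses.get)
    confidence = hypotheses[best] / total_weight  # weighted average, in [0, 1]
    return best, confidence

answer, confidence = final_answer(engines)
print(answer, round(confidence, 2))
```

The confidence figure is what lets a system like Watson decline to answer when no hypothesis scores well enough.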

Beyond the Quants

Interestingly, for a long time, doing detailed analytics has been quite labor- and people-intensive. You need “quants,” the statistically savvy mathematicians and engineers who build models that make sense of the data. As Babson professor and analytics expert Tom Davenport explained to me, humans are traditionally necessary to

  • create a hypothesis,
  • identify relevant variables,
  • build and run a model, and
  • then iterate it.

Quants can typically create one or two good models per week.

However, machine learning tools for quantitative data – perhaps the first line of AI – can create thousands of models a week. For example, in programmatic ad buying on the Web, computers decide which ads should run in which publishers’ locations. Massive volumes of digital ads and a never-ending flow of clickstream data depend on machine learning, not people, to decide which Web ads to place where. Firms like DataXu use machine learning to generate up to 5,000 different models a week, making decisions in under 15 milliseconds, so that they can more accurately place ads that you are likely to click on.
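The pattern behind those thousands of machine-generated models can be sketched simply: generate many candidate models automatically and keep whichever predicts held-out data best, with no human in the loop. The toy data and candidate-generation scheme below are assumptions for illustration, not DataXu's actual method.

```python
import random

random.seed(0)

# Toy clickstream-style data: feature x, outcome y = 2x plus noise.
data = [(x, 2.0 * x + random.uniform(-1, 1)) for x in range(30)]
holdout = data[20:]  # held-out slice used only for model selection

def mse(slope, points):
    """Mean squared error of the one-parameter model y = slope * x."""
    return sum((y - slope * x) ** 2 for x, y in points) / len(points)

# Machine-generate thousands of candidate models (here: random slopes) and
# keep whichever predicts the held-out data best.
candidates = [random.uniform(0.0, 4.0) for _ in range(5000)]
best = min(candidates, key=lambda s: mse(s, holdout))
print(round(best, 1))  # should land near the true slope of 2.0
```

Real systems search far richer model families, but the economics are the same: once generation and selection are automated, thousands of models a week cost little more than one.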

Tom Davenport:

“I initially thought that AI and machine learning would be great for augmenting the productivity of human quants. One of the things human quants do, that machine learning doesn’t do, is to understand what goes into a model and to make sense of it. That’s important for convincing managers to act on analytical insights. For example, an early analytics insight at Osco Pharmacy uncovered that people who bought beer also bought diapers. But because this insight was counter-intuitive and discovered by a machine, they didn’t do anything with it. But now companies have needs for greater productivity than human quants can address or fathom. They have models with 50,000 variables. These systems are moving from augmenting humans to automating decisions.”

In business, the explosive growth of complex and time-sensitive data enables decisions that can give you a competitive advantage, but these decisions depend on analyzing at a speed, volume, and complexity that is too great for humans. AI is filling this gap as it becomes ingrained in the analytics technology infrastructure in industries like health care, financial services, and travel.

The Growing Use of AI

IBM is leading the integration of AI in industry. It has made a $1 billion investment in AI through the launch of its IBM Watson Group and has made many advancements and published research touting the rise of “cognitive computing” – the ability of computers like Watson to understand words (“natural language”), not just numbers. Rather than take the cutting edge capabilities developed in its research labs to market as a series of products, IBM has chosen to offer a platform of services under the Watson brand. It is working with an ecosystem of partners who are developing applications leveraging the dynamic learning and cloud computing capabilities of Watson.

The biggest application of Watson has been in health care. Watson excels in situations where you need to bridge between massive amounts of dynamic and complex text information (such as the constantly changing body of medical literature) and another mass of dynamic and complex text information (such as patient records or genomic data), to generate and evaluate hypotheses. With training, Watson can provide recommendations for treatments for specific patients. Many prestigious academic medical centers, such as The Cleveland Clinic, The Mayo Clinic, MD Anderson, and Memorial Sloan-Kettering are working with IBM to develop systems that will help healthcare providers better understand patients’ diseases and recommend personalized courses of treatment. This has proven to be a challenging domain to automate, and most of the projects are behind schedule.

Another large application area for AI is in financial services. Mike Adler, Global Financial Services Leader at The Watson Group, told me they have 45 clients working mostly on three applications:

  • (1) a “digital virtual agent” that enables banks and insurance companies to engage their customers in a new, personalized way,
  • (2) a “wealth advisor” that enables financial planning and wealth management, either for self-service or in combination with a financial advisor, and
  • (3) risk and compliance management.

For example, USAA, the $20 billion provider of financial services to people who serve, or have served, in the United States military, is using Watson to help their members transition from the military to civilian life. Neff Hudson, vice president of emerging channels at USAA, told me, “We’re always looking to help our members, and there’s nothing more critical than helping the 150,000+ people leaving the military every year. Their financial security goes down when they leave the military. We’re trying to use a virtual agent to intervene to be more productive for them.” USAA also uses AI to enhance navigation on their popular mobile app. The Enhanced Virtual Assistant, or Eva, enables members to do 200 transactions by just talking, including transferring money and paying bills. “It makes search better and answers in a Siri-like voice. But this is a 1.0 version. Our next step is to create a virtual agent that is capable of learning. Most of our value is in moving money day-to-day for our members, but there are a lot of unique things we can do that happen less frequently with our 140 products. Our goal is to be our members’ personal financial agent for our full range of services.”

In addition to working with large, established companies, IBM is also providing Watson’s capabilities to startups. IBM has set aside $100 million for investments in startups. One of the startups that is leveraging Watson is WayBlazer, a new venture in travel planning that is led by Terry Jones, a founder of Travelocity and Kayak. He told me:

“I’ve spent my whole career in travel and IT.

  • I started as a travel agent, and people would come in, and I’d send them a letter in a couple weeks with a plan for their trip. 
  • The Sabre reservation system made the process better by automating the channel between travel agents and travel providers. 
  • Then with Travelocity we connected travelers directly with travel providers through the Internet. 
  • Then with Kayak we moved up the chain again, providing offers across travel systems. 
  • Now with WayBlazer we have a system that deals with words. Nobody has helped people with a tool for dreaming and planning their travel. 

Our mission is to make it easy and give people several personalized answers to a complicated trip, rather than the millions of clues that search provides today. This new technology can take data out of all the silos and dark wells that companies don’t even know they have and use it to provide personalized service.”
What’s Next

As Moore’s Law marches on, we have more power in our smartphones than the most powerful supercomputers did 30 or 40 years ago. Ray Kurzweil has predicted that the computing power of a $4,000 computer will surpass that of a human brain in 2019 (20 quadrillion calculations per second).

What does it all mean for the future of AI?

To get a sense, I talked to some venture capitalists, whose profession it is to keep their eyes and minds trained on the future. Mark Gorenberg, Managing Director at Zetta Venture Partners, which is focused on investing in analytics and data startups, told me, “AI historically was not ingrained in the technology structure. Now we’re able to build on top of ideas and infrastructure that didn’t exist before. We’ve gone through the change of Big Data. Now we’re adding machine learning. AI is not the be-all and end-all; it’s an embedded technology. It’s like taking an application and putting a brain into it, using machine learning. It’s the use of cognitive computing as part of an application.” Another veteran venture capitalist, Promod Haque, senior managing partner at Norwest Venture Partners, explained to me, “If you can have machines automate the correlations and build the models, you save labor and increase speed. With tools like Watson, lots of companies can do different kinds of analytics automatically.”

Manoj Saxena, former head of IBM’s Watson efforts and now a venture capitalist, believes that analytics is moving to the “cognitive cloud” where massive amounts of first- and third-party data will be fused to deliver real-time analysis and learning. Companies often find AI and analytics technology difficult to integrate, especially with the technology moving so fast; thus, he sees collaborations forming where companies will bring their people with domain knowledge, and emerging service providers will bring system and analytics people and technology. Cognitive Scale (a startup that Saxena has invested in) is one of the new service providers adding more intelligence into business processes and applications through a model they are calling “Cognitive Garages.” Using their “10-10-10 method,” they

  • deploy a cognitive cloud in 10 seconds,
  • build a live app in 10 hours, and
  • customize it using their client’s data in 10 days.

Saxena told me that the company is growing extremely rapidly.

I’ve been tracking AI and expert systems for years. What is most striking now is its genuine integration as an important strategic accelerator of Big Data and analytics. Applications such as USAA’s Eva, healthcare systems using IBM’s Watson, and WayBlazer, among others, are having a huge impact and are showing the way to the next generation of AI.
Brad Power has consulted and conducted research on process innovation and business transformation for the last 30 years. His latest research focuses on how top management creates breakthrough business models enabling today’s performance and tomorrow’s innovation, building on work with the Lean Enterprise Institute, Hammer and Company, and FCB Partners.


ORIGINAL: HBR

Brad Power, March 19, 2015

What will happen when the internet of things becomes artificially intelligent?

By admin,

ORIGINAL: The Guardian
Stephen Balkam
Friday 20 February 2015
From Stephen Hawking to Spike Jonze, the existential threat posed by the onset of the ‘conscious web’ is fuelling much debate – but should we be afraid?

Who’s afraid of artificial intelligence? Quite a few notable figures, it turns out. Photograph: Alamy

When Stephen Hawking, Bill Gates and Elon Musk all agree on something, it’s worth paying attention.

All three have warned of the potential dangers that artificial intelligence or AI can bring. The world’s foremost physicist, Hawking, said that the full development of AI could spell the end of the human race. Musk, the tech entrepreneur who brought us PayPal, Tesla and SpaceX, described artificial intelligence as our biggest existential threat and said that playing around with AI was like “summoning the demon”. Gates, who knows a thing or two about tech, puts himself in the concerned camp when it comes to machines becoming too intelligent for us humans to control.

What are these wise souls afraid of? AI is broadly described as the ability of computer systems to ape or mimic human intelligent behavior. This could be anything from recognizing speech, to visual perception, making decisions and translating languages. Examples run from Deep Blue, which beat chess champion Garry Kasparov, to supercomputer Watson, which outguessed the world’s best Jeopardy player. Fictionally, we have Her, Spike Jonze’s movie that depicts the protagonist, played by Joaquin Phoenix, falling in love with his operating system, seductively voiced by Scarlett Johansson. And coming soon, Chappie stars a stolen police robot who is reprogrammed to make conscious choices and to feel emotions.

An important component of AI, and a key element in the fears it engenders, is the ability of machines to take action on their own without human intervention. This could take the form of a computer reprogramming itself in the face of an obstacle or restriction. In other words, to think for itself and to take action accordingly.

Needless to say, there are those in the tech world who have a more sanguine view of AI and what it could bring. Kevin Kelly, the founding editor of Wired magazine, does not see a future inhabited by HALs – the homicidal computer on board the spaceship in 2001: A Space Odyssey. Kelly sees a more prosaic world that looks more like Amazon Web Services: a cheap, smart, utility which is also exceedingly boring simply because it will run in the background of our lives. He says AI will enliven inert objects in the way that electricity did over 100 years ago. “Everything that we formerly electrified, we will now cognitize.” And he sees the business plans of the next 10,000 startups as easy to predict: “Take X and add AI.”

While he acknowledges the concerns about artificial intelligence, Kelly writes: “As AI develops, we might have to engineer ways to prevent consciousness in them – our most premium AI services will be advertised as consciousness-free.” (my emphasis).

Running parallel to the extraordinary advances in the field of AI is the even bigger development of what is loosely called, the internet of things (IoT). This can be broadly described as the emergence of countless objects, animals and even people with uniquely identifiable, embedded devices that are wirelessly connected to the internet. These ‘nodes’ can send or receive information without the need for human intervention. There are estimates that there will be 50 billion connected devices by 2020. Current examples of these smart devices include Nest thermostats, wifi-enabled washing machines and the increasingly connected cars with their built-in sensors that can avoid accidents and even park for you.

The US Federal Trade Commission is sufficiently concerned about the security and privacy implications of the internet of things that it has conducted a public workshop and released a report urging companies to adopt best practices and “bake in” procedures to minimise data collection and to ensure consumer trust in the new networked environment.


Tim O’Reilly, coiner of the phrase “Web 2.0”, sees the internet of things as the most important online development yet. He thinks the name is misleading – that IoT is “really about human augmentation”. O’Reilly believes that we should “expect our devices to anticipate us in all sorts of ways”. He uses the “intelligent personal assistant”, Google Now, to make his point.

So what happens when these millions of embedded devices connect to artificially intelligent machines? What does AI + IoT = ? Will it mean the end of civilisation as we know it? Will our self-programming computers send out hostile orders to the chips we’ve added to our everyday objects? Or is this just another disruptive moment, similar to the harnessing of steam or the splitting of the atom? An important step in our own evolution as a species, but nothing to be too concerned about?

The answer may lie in some new thinking about consciousness. As a concept, as well as an experience, consciousness has proved remarkably hard to pin down. We all know that we have it (or at least we think we do), but scientists are unable to prove that we have it or, indeed, exactly what it is and how it arises.

Dictionaries describe consciousness as the state of being awake and aware of our own existence. It is an “internal knowledge” characterized by sensation, emotions and thought.

Just over 20 years ago, an obscure Australian philosopher named David Chalmers created controversy in philosophical circles by raising what became known as the Hard Problem of Consciousness. He asked how the grey matter inside our heads gave rise to the mysterious experience of being. What makes us different to, say, a very efficient robot, one with, perhaps, artificial intelligence? And are we humans the only ones with consciousness?

  • Some scientists propose that consciousness is an illusion, a trick of the brain.
  • Still others believe we will never solve the consciousness riddle.
  • But a few neuroscientists think we may finally figure it out, provided we accept the remarkable idea that soon computers or the internet might one day become conscious.

In an extensive Guardian article, the author Oliver Burkeman wrote how Chalmers and others put forth a notion that all things in the universe might be (or potentially be) conscious, “providing the information it contains is sufficiently interconnected and organized.” So could an iPhone or a thermostat be conscious? And, if so, could we be in the midst of a ‘Conscious Web’?

Back in the mid-1990s, the author Jennifer Cobb Kreisberg wrote an influential piece for Wired, A Globe, Clothing Itself with a Brain. In it she described the work of a little-known Jesuit priest and paleontologist, Teilhard de Chardin, who 50 years earlier described a global sphere of thought, the “living unity of a single tissue” containing our collective thoughts, experiences and consciousness.

Teilhard called it the “noosphere” (noos is Greek for mind). He saw it as the evolutionary step beyond our geosphere (physical world) and biosphere (biological world). The informational wiring of a being, whether it is made up of neurons or electronics, gives birth to consciousness. As the diversification of nervous connections increases, de Chardin argued, evolution is led towards greater consciousness. Or as John Perry Barlow, Grateful Dead lyricist, cyber advocate and Teilhard de Chardin fan, said: “With cyberspace, we are, in effect, hard-wiring the collective consciousness.”

So, perhaps we shouldn’t be so alarmed. Maybe we are on the cusp of a breakthrough not just in the field of artificial intelligence and the emerging internet of things, but also in our understanding of consciousness itself. If we can resolve the privacy, security and trust issues that both AI and the IoT present, we might make an evolutionary leap of historic proportions. And it’s just possible Teilhard’s remarkable vision of an interconnected “thinking layer” is what the web has been all along.

• Stephen Balkam is CEO of the Family Online Safety Institute in the US

10 IBM Watson-Powered Apps That Are Changing Our World

By admin,

ORIGINAL: CIO
Nov 6, 2014
By IBM 

 

IBM is investing $1 billion in its IBM Watson Group with the aim of creating an ecosystem of startups and businesses building cognitive computing applications with Watson. Here are 10 examples that are making an impact.
IBM considers Watson to represent a new era of computing — a step forward to cognitive computing, where apps and systems interact with humans via natural language and help us augment our own understanding of the world with big data insights.
Big Blue isn’t playing small ball with that claim. It has opened a new IBM Watson Global Headquarters in the heart of New York City’s Silicon Alley and is investing $1 billion into the Watson Group, focusing on development and research as well as bringing cloud-delivered cognitive applications and services to the market. That includes $100 million available for venture investments to support IBM’s ecosystem of start-ups and businesses building cognitive apps with Watson.
Here are 10 examples of Watson-powered cognitive apps that are already starting to shake things up.
USAA and Watson Help Military Members Transition to Civilian Life
USAA, a financial services firm dedicated to those who serve or have served in the military, has turned to IBM’s Watson Engagement Advisor in a pilot program to help military men and women transition to civilian life.
According to the U.S. Bureau of Labor Statistics, about 155,000 active military members transition to civilian life each year. This process can raise many questions, like “Can I be in the reserve and collect veteran’s compensation benefits?” or “How do I make the most of the Post-9/11 GI Bill?” Watson has analyzed and understands more than 3,000 documents on topics exclusive to military transitions, allowing members to ask it questions and receive answers specific to their needs.
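The article describes Watson answering members’ questions from a corpus of transition documents. As a purely illustrative toy (my sketch, not IBM’s method — Watson’s actual pipeline involves natural-language parsing, evidence scoring and answer ranking), the retrieval idea at its simplest can be shown as scoring documents against a question by term overlap:

```python
# Toy sketch of retrieval-style document QA (assumptions mine, not
# IBM Watson's actual pipeline): pick the document whose words
# overlap most with the question.

def best_answer(question, documents):
    """Return the document sharing the most words with the question."""
    q_terms = set(question.lower().split())

    def overlap(doc):
        return len(q_terms & set(doc.lower().split()))

    return max(documents, key=overlap)

docs = [
    "Reserve members may collect certain veterans compensation benefits.",
    "The Post-9/11 GI Bill covers tuition and housing for eligible members.",
]
print(best_answer("How do I make the most of the Post-9/11 GI Bill", docs))
```

A real system would add linguistic analysis, synonym handling and confidence estimation; this only conveys why a large, well-curated document set (the 3,000 documents mentioned above) is the foundation of such a service.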

LifeLearn Sofie is an intelligent treatment support tool for veterinarians of all backgrounds and levels of experience. Sofie is powered by IBM Watson™, the world’s leading cognitive computing system. She can understand and process natural language, enabling interactions that are more aligned with how humans think and interact.

Implement Watson
Dive deeper into subjects. Find insights where no one ever thought to look before. From Healthcare to Retail, there’s an IBM Watson Solution that’s right for your enterprise.

Healthcare

Helping doctors identify treatment options
The challenge
According to one expert, only 20 percent of the knowledge physicians use to diagnose and treat patients today is evidence-based, which means that one in five diagnoses is incorrect or incomplete.

… Continue reading

A Thousand Kilobots Self-Assemble Into Complex Shapes

By admin,

ORIGINAL: IEEE Spectrum
By Evan Ackerman
14 Aug 2014
 Photo: Michael Rubenstein/Harvard University

When Harvard roboticists first introduced their Kilobots in 2011, they’d only made 25 of them. When we next saw the robots in 2013, they’d made 100. Now the researchers have built one thousand of them. That’s a whole kilo of Kilobots, and probably the most robots that have ever been in the same place at the same time.

The researchers—Michael Rubenstein, Alejandro Cornejo, and Professor Radhika Nagpal of Harvard’s Self-Organizing Systems Research Group—describe their thousand-robot swarm in a paper published today in Science (they actually built 1024 robots, apparently following the computer science definition of “kilo”).

Despite their menacing name (KILL-O-BOTS!) and the robot swarm nightmares they may induce in some people, these little guys are harmless. Each Kilobot is a small, cheap-ish ($14) device that can move around by vibrating its legs and communicate with other robots via infrared transmitters and receivers.
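One building block of this kind of swarm self-assembly is the “gradient”: a hop count that spreads outward from a seed robot through purely local neighbor-to-neighbor messages, giving each robot a rough sense of how far it is from the seed. The sketch below is a conceptual illustration under my own assumptions (a centralized breadth-first search standing in for what the physical robots compute in a distributed way), not the authors’ code:

```python
# Conceptual sketch of a swarm hop-count gradient: each robot
# (represented as an (x, y) point) learns its minimum hop distance
# from a seed, using only links to neighbors within comm range.
# This is a centralized BFS stand-in for the distributed computation.

from collections import deque

def gradient_values(robots, seed, comm_range=1.5):
    """Map each reachable robot to its minimum hop count from the seed."""
    grad = {seed: 0}
    queue = deque([seed])
    while queue:
        r = queue.popleft()
        for other in robots:
            if other in grad:
                continue  # already assigned a (minimal) hop count
            dx, dy = r[0] - other[0], r[1] - other[1]
            if (dx * dx + dy * dy) ** 0.5 <= comm_range:
                grad[other] = grad[r] + 1
                queue.append(other)
    return grad

if __name__ == "__main__":
    swarm = [(x, 0) for x in range(5)]  # five robots in a line
    print(gradient_values(swarm, seed=(0, 0)))
```

In the actual Kilobot system, each robot repeatedly broadcasts its current gradient value over infrared and adopts one more than the smallest value it hears, which converges to the same hop counts without any central coordinator.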

… Continue reading