The establishment of shared theoretical frameworks, combined with the availability of data and processing power, has yielded remarkable successes in various component tasks such as speech recognition, image classification, autonomous vehicles, machine translation, legged locomotion, and question-answering systems.
As capabilities in these areas and others cross the threshold from laboratory research to economically valuable technologies, a virtuous cycle takes hold whereby even small improvements in performance are worth large sums of money, prompting greater investments in research. There is now a broad consensus that AI research is progressing steadily, and that its impact on society is likely to increase. The potential benefits are huge, since everything that civilization has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools AI may provide, but the eradication of disease and poverty are not unfathomable. Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls.
The progress in AI research makes it timely to focus research not only on making AI more capable, but also on maximizing the societal benefit of AI. Such considerations motivated the AAAI 2008-09 Presidential Panel on Long-Term AI Futures and other projects on AI impacts, and constitute a significant expansion of the field of AI itself, which up to now has focused largely on techniques that are neutral with respect to purpose.
|Attendees at Asilomar, Pacific Grove, February 21–22, 2009 (left to right): Michael Wellman, Eric Horvitz, David Parkes, Milind Tambe, David Waltz, Thomas Dietterich, Edwina Rissland (front), Sebastian Thrun, David McAllester, Margaret Boden, Sheila McIlraith, Tom Dean, Greg Cooper, Bart Selman, Manuela Veloso, Craig Boutilier, Diana Spears (front), Tom Mitchell, Andrew Ng.|
We recommend expanded research aimed at ensuring that increasingly capable AI systems are robust and beneficial: our AI systems must do what we want them to do. The attached research priorities document gives many examples of such research directions that can help maximize the societal benefit of AI. This research is by necessity interdisciplinary, because it involves both society and AI. It ranges from economics, law, and philosophy to computer security, formal methods, and, of course, various branches of AI itself.
In summary, we believe that research on how to make AI systems robust and beneficial is both important and timely, and that there are concrete research directions that can be pursued today.
Fortunately scientists have built a DNA time capsule that’s capable of safely preserving all of our data for more than a million years. And we’re kind of freaking out over how huge the implications are.
Researchers already knew that DNA was ideal for data storage. In theory, just 1 gram of DNA is capable of holding 455 exabytes (one exabyte is the equivalent of one billion gigabytes), which is more than enough space to store all of Google, Facebook and pretty much everyone else’s data.
Storing information on DNA is also surprisingly simple – researchers just need to program the A and C base pairs of DNA as a binary ‘0’, and the T and G as a ‘1’. But the researchers, led by Robert Grass from ETH Zürich in Switzerland, wanted to find out just how long this data would last.
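The base-to-bit mapping described above can be sketched in a few lines of Python. The alternation between the two bases available for each bit is purely illustrative here (real encoding schemes are considerably more involved, handling things like long base repeats), but the round trip shows the principle:

```python
# Minimal sketch of the mapping described above: A and C stand for '0',
# T and G for '1'. Alternating between the two candidate bases for each
# bit is an arbitrary choice made here for illustration only.

def bits_to_dna(bits: str) -> str:
    zero, one = "AC", "TG"
    return "".join(
        (zero if b == "0" else one)[i % 2]  # alternate within each pair
        for i, b in enumerate(bits)
    )

def dna_to_bits(strand: str) -> str:
    return "".join("0" if base in "AC" else "1" for base in strand)

message = format(ord("!"), "08b")   # "00100001" -- one byte of data
strand = bits_to_dna(message)
assert dna_to_bits(strand) == message  # lossless round trip
```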
DNA can definitely be durable – in 2013 scientists managed to sequence genetic code from 700,000-year-old horse bones – but it has to be preserved in pretty specific conditions, otherwise it can change and break down as it’s exposed to the environment. So Grass’s team decided to try to replicate a fossil, to see if it would help them create a long-lasting DNA hard drive.
“Similar to these bones, we wanted to protect the information-bearing DNA with a synthetic ‘fossil’ shell,” explained Grass in a press release.
In order to do that, the team encoded Switzerland’s Federal Charter of 1291 and The Methods of Mechanical Theorems by Archimedes onto a DNA strand – a total of 83 kilobytes of data. They then encapsulated the DNA into tiny glass spheres, which were around 150 nanometres in diameter.
The researchers compared these glass spheres against other packaging methods by exposing them to temperatures of between 60 and 70 degrees Celsius - conditions that replicated the chemical degradation that would usually occur over hundreds of years, all crammed into a few destructive weeks.
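The logic behind this kind of accelerated ageing can be sketched with the Arrhenius relation, under which chemical degradation speeds up exponentially with temperature. The activation energy below is an assumed, illustrative value, not the figure from the paper:

```python
import math

# Arrhenius extrapolation: degradation rate is proportional to
# exp(-Ea / (R * T)), so one unit of time at a hot test temperature
# corresponds to many units at a cooler storage temperature.
R = 8.314        # gas constant, J/(mol*K)
Ea = 150_000.0   # activation energy in J/mol -- an assumed value

def acceleration_factor(t_test_c: float, t_store_c: float) -> float:
    """How many times slower degradation runs at t_store_c vs t_test_c."""
    t_test = t_test_c + 273.15    # convert Celsius to kelvin
    t_store = t_store_c + 273.15
    return math.exp(Ea / R * (1 / t_store - 1 / t_test))

# With these assumptions, one week at 70 C stands in for a great many
# weeks at a central-European 10 C:
weeks_equivalent = 1 * acceleration_factor(70.0, 10.0)
```

The exact factor depends entirely on the assumed activation energy; the point is that a few hot weeks can stand in for centuries of cool storage.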
They found that even after this sped-up degradation process, the DNA inside the glass spheres could easily be extracted using a fluoride solution, and the data on it could still be read. In fact, these glass casings seem to work much like fossilised bones.
Based on their results, which have been published in Angewandte Chemie, the team predicts that data stored on DNA could survive over a million years if it was stored in temperatures below -18 degrees Celsius, for example, in a facility like the Svalbard Global Seed Vault, which is also known as the ‘Doomsday Vault’. They say it could last 2,000 years if stored somewhere less secure at 10 degrees Celsius – a similar average temperature to central Europe.
The tricky part of this whole process is that the data stored in DNA needs to be read properly in order for future civilisations to be able to access it. And despite advances in sequencing technology, errors still arise from DNA sequencing.
The team overcame this by embedding a method for correcting any errors within the glass spheres, based on the Reed-Solomon Codes, which help researchers transmit data over long distances. Basically, additional information is attached to the actual data, to help people read it on the other end.
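Actual Reed-Solomon coding is beyond the scope of a short sketch, but the underlying idea — attaching redundant information so the original survives corruption — can be shown with a far simpler scheme: triple repetition with majority voting. This is not what the team used; it only illustrates the principle:

```python
# Not Reed-Solomon -- just the simplest illustration of redundancy:
# store every bit three times, and recover each bit by majority vote,
# so any single corrupted copy of a bit is tolerated.

def encode(bits: str) -> str:
    return "".join(b * 3 for b in bits)

def decode(coded: str) -> str:
    out = []
    for i in range(0, len(coded), 3):
        triple = coded[i:i + 3]
        out.append("1" if triple.count("1") >= 2 else "0")
    return "".join(out)

coded = encode("1011")        # "111000111111"
corrupted = "101000111111"    # one copy of the first bit flipped
assert decode(corrupted) == "1011"  # majority vote recovers the data
```

Reed-Solomon achieves the same goal far more efficiently, correcting many errors with only modest extra data.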
This worked so well that even after the test DNA had been kept in scorching and degrading conditions for a month, the team could still read Switzerland’s Federal Charter and Archimedes’ wise words at the end of the study.
The other major problem, which is not so easy to overcome, is the fact that storing information on DNA is still extremely expensive – it cost around US$1,500 just to encode the 83 kilobytes of data used in this study. Hopefully this cost will go down as we get better at writing information onto DNA. Researchers out there are already storing books onto DNA, and the band OK Go are also writing their new album into genetic information.
“Many entries are described in detail, others less so. This probably provides a good overview of what our society knows, what occupies it and to what extent,” said Grass in the release.
It’s ridiculously cool to think that even if we do wipe ourselves off the face of the Earth, our civilisation might still live on for millennia to come in the form of Wikipedia pages and Facebook updates.
We really are (almost) infinite.
Source: New Scientist
Humans should be worried about the threat posed by artificial intelligence, Bill Gates has said.
The Microsoft founder said he didn’t understand people who were not troubled by the possibility that AI could grow too strong for people to control.
Mr Gates contradicted one of Microsoft Research’s chiefs, Eric Horvitz, who has said he “fundamentally” did not see AI as a threat.
Mr Horvitz has said about a quarter of his team’s resources are focused on AI.
During an “ask me anything” question and answer session on Reddit, Mr Gates wrote: “I am in the camp that is concerned about super intelligence. First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well.
“A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.”
His view was backed up by the likes of Mr Musk and Professor Stephen Hawking, who have both warned about the possibility that AI could evolve to the point that it was beyond human control. Prof Hawking said he felt that machines with AI could “spell the end of the human race”.
Mr Horvitz has said: “There have been concerns about the long-term prospect that we lose control of certain kinds of intelligences. I fundamentally don’t think that’s going to happen.”
He was giving an interview marking his acceptance of the AAAI Feigenbaum Prize for “outstanding advances” in AI research.
“I think that we will be very proactive in terms of how we field AI systems, and that in the end we’ll be able to get incredible benefits from machine intelligence in all realms of life, from science to education to economics to daily life.”
Mr Horvitz runs Microsoft Research‘s lab at the parent company’s Redmond headquarters. His division’s work has already helped introduce Cortana, Microsoft’s virtual assistant.
Despite his own reservations, Mr Gates wrote on Reddit that, had Microsoft not worked out, he would probably be a researcher on AI.
“When I started Microsoft I was worried I would miss the chance to do basic work in that field,” he said.
He added that he believed the firm he founded would see “more progress… than ever” over the next three decades.
“Even in the next 10 [years,] problems like vision and speech understanding and translation will be very good.”
He predicted that, in that time, robots would perform tasks such as picking fruit or moving hospital patients. “Once computers/robots get to a level of capability where seeing and moving is easy for them then they will be used very extensively.”
He said he was working on a project with Microsoft called “Personal Agent”, which he said would “remember everything and help you go back and find things and help you pick what things to pay attention to”.
He wrote: “The idea that you have to find applications and pick them and they each are trying to tell you what is new is just not the efficient model – the agent will help solve this. It will work across all your devices.”
But he admitted that he felt “pretty stupid” because he cannot speak any language other than English.
“I took Latin and Greek in High School and got As and I guess it helps my vocabulary but I wish I knew French or Arabic or Chinese.
“I keep hoping to get time to study one of these – probably French because it is the easiest… Mark Zuckerberg amazingly learned Mandarin and did a Q&A with Chinese students – incredible,” he wrote.
Pattern & image recognition module with neuromorphic learning for all your maker projects.
Robotics fans, drone pilots, hackers & data-miners – rejoice!
The BrainCard is an open source hardware platform featuring the world’s only fully functional and field-tested Neuromorphic Chip containing 1024 silicon neurons. It is able to learn and recognize patterns within any dataset generated by any source, from the physical (sensors) to the virtual (data). Offered here, for the first time, to makers in a format compatible with nearly all other popular electronics platforms — from Raspberry Pi to Arduino and Intel Edison — we aim to help you add cognitive perception to any electronics project.
Add a brain to robots, toys or an old GoPro. Give them the ability to recognize and recall almost anything… You can also add a brain to any digital camera, including dash cams. Vision not your thing? The same technology can recognize patterns in data: that packet of code you’re looking for in a sea of C++, a phrase in an eBook (regardless of the book’s length), even real-time data. Build your own biosensors! Make any appliance you like “smart”, like a coffee pot that recognizes you and starts making your coffee the way you like best.
Simply put: make it think.
Cannot wait for technical details?
Before we carry on, for those of you who are quick studies and/or already know everything, we thought you might like to skip straight to the specs, so here you go:
BrainCard Specifications (Hardware and API)
For everyone else – please read on…
Now back to your project…
The BrainCard™ is a small electronics board with a NeuroMem® CM1K device plus an FPGA (Field Programmable Gate Array) chip to connect to platform buses and sensor inputs. There is even an optional image sensor featured on the BrainCard 1KIS (Image Sensor) version. It can be connected to almost any popular electronics platform, including Arduino, Raspberry Pi and Intel Edison, and enables users to massively boost any device’s capability by creating a brain-like system architecture – hence the name.
The CM1K chip(s) on the BrainCard essentially acts as a right-brain hemisphere ready to learn, recognize and recall patterns/images/sounds/inputs from any incoming data stream. This allows the accompanying MPU device to concentrate on what it’s good at — left-brain functions such as logic, procedural computing and as a communications and I/O interface.
The key to success is teaching the BrainCard as you would a child: teach it too conservatively and it will not generalize enough; teach it too liberally and it could get confused. It is not like traditional programming, and we have found that part of the fun in building projects with the BrainCard lies in this new learning parameter.
It’s really quite simple: Show the BrainCard what it must recognize and assign the example a category. So: This face is John, that voice is Emma, this vibration is made by your cat purring and so on.
The BrainCard is delivered with a default configuration that can communicate with any one of the proposed controllers (Arduino, Raspberry Pi or Edison) through the same communication protocol over their SPI lines. Access to generic pattern learning and recognition functions using the CM1K chip is provided through a simple API delivered for the different IDEs (Arduino and Eclipse). More specific function libraries will be released shortly after, and we hope to start a repository of your libraries too!
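The API itself isn’t reproduced here, so the sketch below is purely hypothetical: every class and method name is invented, and a toy nearest-neighbor classifier stands in for the CM1K’s silicon neurons (each neuron stores one taught pattern plus a category, and recognition returns the category of the closest stored pattern):

```python
# Hypothetical sketch of the teach-then-recognize workflow. All names
# here are invented for illustration; the toy classifier only mimics
# the chip's behavior in software.

class ToyBrainCard:
    def __init__(self, capacity: int = 1024):
        self.capacity = capacity
        self.neurons = []  # list of (pattern, category) pairs

    def learn(self, pattern: list, category: str) -> None:
        """Commit one neuron: store an example with its category."""
        if len(self.neurons) >= self.capacity:
            raise RuntimeError("all neurons committed")
        self.neurons.append((pattern, category))

    def classify(self, pattern: list) -> str:
        """Return the category of the closest stored pattern (L1 distance)."""
        def dist(stored: list) -> int:
            return sum(abs(a - b) for a, b in zip(stored, pattern))
        _, category = min(self.neurons, key=lambda n: dist(n[0]))
        return category

# Teach it as you would a child: show an example, assign a category.
card = ToyBrainCard()
card.learn([10, 10, 10], "cat_purr")   # "this vibration is the cat"
card.learn([90, 90, 90], "doorbell")
card.classify([12, 9, 11])             # -> "cat_purr" (closest match)
```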
So what can it do?
BrainCard can recognize data from any Bio-signal source – such as:
You can run your data through the BrainCard in any form — from text to binary to DNA sequences — and teach it to recognize patterns, which will allow it to detect anomalies, identify clusters and make predictions. There are MANY MORE applications we just haven’t tried yet…
If you go crazy while teaching and fill all 1024 neurons on a chip, don’t panic. BrainCard provides an expansion bus to stack more CM1K chips on boards of two, increasing the number of neurons you can teach in increments of 2,048 (each CM1K holds 1,024 neurons). This expansion can be done at any time, up to a maximum of 8,192 additional neurons (plus the original 1,024 on the BrainCard), and will not affect what you have already taught, allowing you to experiment to your heart’s content.
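The capacity arithmetic above is easy to check: each expansion board carries two CM1K chips at 1,024 neurons apiece, on top of the single chip on the BrainCard itself:

```python
# Neuron capacity: the BrainCard's own CM1K plus expansion boards
# that each carry two CM1K chips (1,024 neurons per chip).
NEURONS_PER_CHIP = 1024

def total_neurons(expansion_boards: int) -> int:
    return NEURONS_PER_CHIP + expansion_boards * 2 * NEURONS_PER_CHIP

total_neurons(0)  # 1024 -- the bare BrainCard
total_neurons(4)  # 9216 -- the stated maximum of 8,192 extra neurons
```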
The NeuroMem CM1K technology has already found many applications in industry and has been working in the real world since 2007 – so we know everything we’re claiming above is 100% true, because most of these applications have been built somewhere.
What we need, and what you get
This Indiegogo campaign has been launched with one aim: to generate the volume and revenue we need to manufacture the maker version of the CM1K technology — the BrainCard.
By supporting this Indiegogo project you will be a part of the first chapter of a much bigger story: We aim to change the way the world computes with neural network technology. We’re looking to raise at least $200k to start manufacturing in volume, which will make the BrainCard as cheap as possible.
We’re beginning with 1000 chips that we already have in inventory which were originally ordered by an industrial client. After that, we will aim to start manufacturing on a mass production line, and this will take approximately six months. So, those first 1000 purchasers will be the only ones able to experience the unique capabilities of the BrainCard until mid-2015.
The first 1000 BrainCards, which we call IWIN (I Want It Now) units, will cost $199, or $219 for the version including an image sensor (the IS version), so 500 of each version.
If we don’t reach the goal, all the money raised will go toward manufacturing as many BrainCards as we can, so that they can be more affordable for the masses.
This is why we’re turning to the maker community — we’d like to crowdsource our research and development through YOU!
Risks and challenges
The core of the NeuroMem/NeuromorThings team has been in place for 16 years and has plenty of research and industrial customers already using the CM1K chip, so this is not a typical “prototype” project.
We have a full supply chain already in place for both the board and for mounting the chips. We also have a wealth of knowledge in developing board-level and semiconductor technologies — all of which makes the risks to you a bare minimum.
We just need your support to complete prototyping/testing and to begin volume manufacturing. The first 1000 IWIN BrainCards will have exclusive access to the technology for the three months it takes us to make the new batch of chips.
Once we begin mass manufacturing the BrainCard, we will begin our long development roadmap on its successors and other neuromorthings.
After the first run of IWIN devices, the rest of the time will be dedicated to mounting the chips to the boards and testing them. With enough support we can get production runs up to very large numbers per month very quickly.
Shipping a technology product is fraught with issues like export restrictions. We’ve tried to make it as simple as possible and built shipping in as a perk.
In the US, Mexico or Canada? Shipping included.
Rest of World? $30 Shipping & Packing
Due to the technical nature of the BrainCard, it can be subject to export restrictions in certain countries under United States law. If you are unsure whether you are affected, please contact us at firstname.lastname@example.org with “Export” in the subject line and we’ll do everything we can to help.
Other Ways You Can Help
Can’t buy a BrainCard? How about giving us a High $5? High 5’ers will all feature on the website and be written into NeuromorThings lore… It’s a program for those interested in the technology who want to help but can’t spring for their own BrainCard.
Got no cash at all? No problem – simply SPREAD THE WORD! Tell everyone you know about us and help us that way instead, on Facebook, on Twitter – wherever.
Every little bit helps!
There’s been a lot of fear about the future of artificial intelligence.
Stephen Hawking and Elon Musk worry that AI-powered computers might one day become uncontrollable super-intelligent demons. So does Bill Gates.
But Baidu chief scientist Andrew Ng—one of the world’s best-known AI researchers and a guy who’s building out what is likely one of the world’s largest applied AI projects—says we really ought to worry more about robot truck drivers than the Terminator.
In fact, he’s irritated by the discussion about scientists somehow building an apocalyptic super-intelligence. “I think it’s a distraction from the conversation about…serious issues,” Ng said at an AI conference in San Francisco last week.
Ng isn’t alone in thinking this way. A select group of AI luminaries met recently at a closed-door retreat in Puerto Rico to discuss ethics and AI. WIRED interviewed some of them, and the consensus was that there are both short-term and long-term AI issues to worry about. But it’s the long-term questions getting all the press.
Artificial intelligence is likely to start having an important effect on society over the next five to 10 years, according to Murray Shanahan, a professor of cognitive robotics at Imperial College London. “It’s hard to predict exactly what’s going on,” he told WIRED a few weeks ago, “but we can be pretty sure that these technologies are going to impact society quite a bit.”
The way Ng sees it, it took the US about 200 years to switch from an agricultural economy where 90 percent of the country worked on farms, to our current economy, where the number is closer to 2 percent. The AI switchover promises to come much faster, and that could make it a bigger problem.
That’s an idea echoed by two MIT academics, Erik Brynjolfsson and Andrew McAfee, who argue that we’re entering a “second machine age”, where the accelerating rate of change brought on by digital technologies could leave millions of medium- and low-skilled workers behind.
Some AI technologies, such as the self-driving car, could be extremely disruptive, but over a much shorter period of time than the industrial revolution. There are three million truck drivers in the US, according to the American Trucking Association. What happens if self-driving vehicles put them all out of a job in a matter of years?
With recent advances in perception, the range of things that machines can do is getting a boost. Computers are better at understanding what we say and analyzing data in a way that used to be the exclusive domain of humans.
Last month, Audi’s self-driving car took WIRED’s Alex Davies for a 500-mile ride. At the Aloft Hotel in Cupertino, California, a robot butler can deliver you a toothbrush. Paralegals are now finding their work performed by data-sifting computers. And just last year, Google told us about a group of workers who were doing mundane image-recognition work for the search giant: jobs like figuring out the difference between telephone numbers and street addresses on building walls. Google figured out how to do this by machine, so those workers have now moved on to other things.
Ng, who also co-founded the online learning company Coursera, says that if AI really starts taking jobs, retraining all of those workers could present a major challenge. When it comes to retraining workers, he said, “our education system has historically found it very difficult.”