Category: Robotics


The future of AI is neuromorphic. Meet the scientists building digital ‘brains’ for your phone

By Hugo Angel,

Neuromorphic chips are being designed to specifically mimic the human brain – and they could soon replace CPUs
[Image: brain activity map. Credit: Neuroscape Lab]
AI services like Apple’s Siri operate by sending your queries to faraway data centers, which send back responses. They rely on cloud-based computing because today’s electronics don’t pack enough computing power to run the processing-heavy algorithms needed for machine learning locally. The typical CPUs in most smartphones could never handle a system like Siri on the device. But Dr. Chris Eliasmith, a theoretical neuroscientist and co-CEO of Canadian AI startup Applied Brain Research, is confident that a new type of chip is about to change that.
“Many have suggested Moore’s law is ending and that means we won’t get ‘more compute’ cheaper using the same methods,” Eliasmith says. He’s betting on the proliferation of ‘neuromorphics’: a type of computer chip that is not yet widely known but is already being developed by several major chip makers.
Traditional CPUs process instructions based on “clocked time” – information is transmitted at regular intervals, as if managed by a metronome. By packing in digital equivalents of neurons, neuromorphics communicate in parallel (and without the rigidity of clocked time) using “spikes” – bursts of electric current that can be sent whenever needed. Just like our own brains, the chip’s neurons communicate by processing incoming flows of electricity – each neuron able to determine from the incoming spike whether to send current out to the next neuron.
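To make the contrast with clocked processors concrete, here is a minimal sketch in Python (not tied to any particular chip) of a leaky integrate-and-fire neuron, the basic unit neuromorphic chips implement in silicon: it accumulates incoming current and emits a spike only when a threshold is crossed, doing nothing at all between events.

import numpy as np

def lif_neuron(input_current, dt=1e-3, tau=0.02, v_thresh=1.0, v_reset=0.0):
    """Simulate a leaky integrate-and-fire neuron; return its spike times (s).

    The neuron stays silent until its membrane potential v crosses
    v_thresh -- the event-driven behavior described above.
    """
    v = 0.0
    spikes = []
    for step, i_in in enumerate(input_current):
        # Leaky integration of the incoming current.
        v += dt * (-v + i_in) / tau
        if v >= v_thresh:          # threshold crossing -> emit a spike
            spikes.append(step * dt)
            v = v_reset            # reset after spiking
    return spikes

# A constant input above threshold produces a regular spike train;
# a sub-threshold input produces no spikes (and no work) at all.
print(len(lif_neuron(np.full(1000, 1.5))), "spikes for a strong input")
print(len(lif_neuron(np.full(1000, 0.5))), "spikes for a weak input")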
What makes this a big deal is that these chips require far less power to process AI algorithms. For example, one neuromorphic chip made by IBM contains five times as many transistors as a standard Intel processor, yet consumes only 70 milliwatts of power. An Intel processor would use anywhere from 35 to 140 watts, or up to 2000 times more power.
Eliasmith points out that neuromorphics aren’t new and that their designs have been around since the 80s. Back then, however, the designs required specific algorithms be baked directly into the chip. That meant you’d need one chip for detecting motion, and a different one for detecting sound. None of the chips acted as a general processor in the way that our own cortex does.
This was partly because there had been no way for programmers to design algorithms that could do much with a general-purpose chip. So even as these brain-like chips were being developed, building algorithms for them remained a challenge.
 
Eliasmith and his team are keenly focused on building tools that would allow a community of programmers to deploy AI algorithms on these new cortical chips.
Central to these efforts is Nengo, a compiler that developers can use to build their own algorithms for AI applications that will operate on general-purpose neuromorphic hardware. A compiler is a software tool that translates the code programmers write into the complex instructions that get hardware to actually do something. What makes Nengo useful is its use of the familiar Python programming language, known for its intuitive syntax, and its ability to put the algorithms on many different hardware platforms, including neuromorphic chips. Pretty soon, anyone with an understanding of Python could be building sophisticated neural nets made for neuromorphic hardware.
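As a rough illustration of that workflow, the following minimal Nengo sketch (an illustrative model, not one of Applied Brain Research’s applications) builds two populations of spiking neurons in Python, has one compute a function of the signal represented by the other, and runs the model on the reference simulator; backends for other hardware expose the same interface.

import numpy as np
import nengo

with nengo.Network() as model:
    # A node supplying a time-varying input signal.
    stim = nengo.Node(lambda t: np.sin(2 * np.pi * t))

    # Two populations of spiking neurons, each representing a scalar.
    a = nengo.Ensemble(n_neurons=100, dimensions=1)
    b = nengo.Ensemble(n_neurons=100, dimensions=1)

    # Feed the stimulus into 'a'; have 'b' decode the square of the
    # value represented by 'a'.
    nengo.Connection(stim, a)
    nengo.Connection(a, b, function=lambda x: x ** 2)

    # Record the decoded output of 'b', low-pass filtered.
    probe = nengo.Probe(b, synapse=0.01)

# The reference simulator runs on a CPU; backends for other hardware
# (including neuromorphic chips) are swapped in at this step.
with nengo.Simulator(model) as sim:
    sim.run(1.0)

print(sim.data[probe][-5:])   # last few decoded samples, approx. sin^2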
“Things like vision systems, speech systems, motion control, and adaptive robotic controllers have already been built with Nengo,” Peter Suma, a trained computer scientist and the other co-CEO of Applied Brain Research, tells me.
Perhaps the most impressive system built using the compiler is Spaun, a project that in 2012 earned international praise for being the most complex brain model ever simulated on a computer. Spaun demonstrated that computers could be made to interact fluidly with the environment and perform human-like cognitive tasks, such as recognizing images and controlling a robot arm that writes down what it sees. The machine wasn’t perfect, but it was a stunning demonstration that computers could one day blur the line between human and machine cognition. Recently, by using neuromorphics, most of Spaun has been run 9,000 times faster, using less energy than it would on conventional CPUs, and by the end of 2017, all of Spaun will be running on neuromorphic hardware.
Eliasmith won NSERC’s John C. Polanyi Award for that project, Canada’s highest recognition for a breakthrough scientific achievement, and once Suma came across the research, the pair joined forces to commercialize these tools.
“While Spaun shows us a way towards one day building fluidly intelligent reasoning systems, in the nearer term neuromorphics will enable many types of context-aware AIs,” says Suma. Suma points out that while today’s AIs like Siri remain offline until explicitly called into action, we’ll soon have artificial agents that are ‘always on’ and ever-present in our lives.
“Imagine a Siri that listens to and sees all of your conversations and interactions. You’ll be able to ask it for things like, ‘Who did I have that conversation with about doing the launch for our new product in Tokyo?’ or ‘What was that idea for my wife’s birthday gift that Melissa suggested?’,” he says.
When I raised concerns that some company might then have an uninterrupted window into even the most intimate parts of my life, I was reminded that because the AI would be processed locally on the device, there’s no need for that information to touch a server owned by a big company. And for Eliasmith, this ‘always on’ component is a necessary step towards true machine cognition. “The most fundamental difference between most available AI systems of today and the biological intelligent systems we are used to is the fact that the latter always operate in real time. Bodies and brains are built to work with the physics of the world,” he says.
Already, competition across the IT industry is heating up to get AI services into the hands of users. Companies like Apple, Facebook, Amazon, and even Samsung are developing conversational assistants they hope will one day become digital helpers.
ORIGINAL: Wired
Monday 6 March 2017

A biomimetic robotic platform to study flight specializations of bats

By Hugo Angel,

Some Batty Ideas Take Flight. Bat Bot (shown) is able to imitate several flight maneuvers of bats, such as bank turns and diving flight. Such agile flight is made possible by highly malleable bones and skin in bat wings. Ramezani et al. identified and implemented the most important bat wing joints by means of a series of mechanical constraints. They then designed feedback control for their flapping wing platform and covered the structure in a flexible silicone membrane. This biomimetic robot may also shed light on the role of bat legs in modulating flight pitch. [CREDIT: ALIREZA RAMEZANI]
Forget drones. Think bat-bots. Engineers have created a new autonomous flying machine that looks and maneuvers just like a bat. Weighing only 93 grams, the robot owes its agility to complex wings made of lightweight silicone-based membranes stretched over carbon-fiber bones, the researchers report today in Science Robotics. In addition to nine joints in each wing, it sports adjustable legs, which help it steer by deforming the membrane of its tail. Complex algorithms coordinate these components, letting the bot make batlike moves including banking turns and dives. But don’t bring out the bat-signal just yet. Remaining challenges include improving battery life and developing stronger electronic components so the device can survive minor crashes. Ultimately, though, the engineers hope this highly maneuverable alternative to quadrotor drones could serve as a helpful new sidekick, lending a wing in anything from dodging through beams as a construction surveyor to aiding in disaster relief by scouting dangerous sites. The next lesson researchers hope to teach the bat-bot? Perching upside-down.
DOI: 10.1126/science.aal0685
 
 



A biomimetic robotic platform to study flight specializations of bats

Alireza Ramezani, Soon-Jo Chung,* and Seth Hutchinson
*Corresponding author. Email: [email protected]
Science Robotics 01 Feb 2017:
Vol. 2, Issue 3,
DOI: 10.1126/scirobotics.aal2505
 
Abstract
Bats have long captured the imaginations of scientists and engineers with their unrivaled agility and maneuvering characteristics, achieved by functionally versatile dynamic wing conformations as well as more than 40 active and passive joints on the wings. Wing flexibility and complex wing kinematics not only bring a unique perspective to research in biology and aerial robotics but also pose substantial technological challenges for robot modeling, design, and control. We have created a fully self-contained, autonomous flying robot that weighs 93 grams, called Bat Bot (B2), to mimic such morphological properties of bat wings. Instead of using a large number of distributed control actuators, we implement highly stretchable silicone-based membrane wings that are controlled at a reduced number of dominant wing joints to best match the morphological characteristics of bat flight. First, the dominant degrees of freedom (DOFs) in the bat flight mechanism are identified and incorporated in B2’s design by means of a series of mechanical constraints. These biologically meaningful DOFs include asynchronous and mediolateral movements of the armwings and dorsoventral movements of the legs. Second, the continuous surface and elastic properties of bat skin under wing morphing are realized by an ultrathin (56 micrometers) membranous skin that covers the skeleton of the morphing wings. We have successfully achieved autonomous flight of B2 using a series of virtual constraints to control the articulated, morphing wings.
INTRODUCTION
Biologically inspired flying robots showcase impressive flight characteristics [e.g., robot fly (1) and bird-like robots (2, 3)]. In recent years, biomimicry of bat flight has led to the development of robots that are capable of mimicking bat morphing characteristics on either a stationary (4) or a rotational pendular platform (5). However, these attempts are limited because of the inherent complexities of bat wing morphologies and lightweight form factors.
Arguably, bats have the most sophisticated powered flight mechanism among animals, as evidenced by the morphing properties of their wings. Their flight mechanism has several types of joints (e.g., ball-and-socket and revolute joints), which interlock the bones and muscles to one another and create a metamorphic musculoskeletal system that has more than 40 degrees of freedom (DOFs), both passive and active (see Fig. 1) (6). For insects, the wing structure is not as sophisticated as that of bats because it is a single, unjointed structural unit. Like bat wings, bird wings have several joints that can be moved actively and independently.
Fig. 1 Functional groups in bat (photo courtesy of A. D. Rummel and S. Swartz, the Aeromechanics and Evolutionary Morphology Laboratory, Brown University).
Enumerated bat joint angles and functional groups are depicted; using these groups makes it possible to categorize the sophisticated movements of the limbs during flight and to extract dominant DOFs and incorporate them in the flight kinematics of B2. The selected DOFs are coupled by a series of mechanical and virtual constraints.
Robotics research inspired by avian flight has successfully conceptualized bird wings as a rigid structure, which is nearly planar and translates—as a whole or in two to three parts—through space; however, the wing articulation involved in bat wingbeats is very pronounced. In the mechanism of bat flight, one wingbeat cycle consists of two movements: (i) a downstroke phase, which is initiated by both left and right forelimbs expanding backward and sideways while sweeping downward and forward relative to the body, and (ii) an upstroke phase, which brings the forelimbs upward and backward and is followed by the flexion of the elbows and wrists to fold the wings. There are more aspects of flapping flight that uniquely distinguish bats. Bat wings have (i) bones that deform adaptively during each wingbeat cycle, (ii) anisotropic wing membrane skin with adjustable stiffness across the wing, and (iii) a distributed network of skin sensory organs believed to provide continuous information regarding flows over the wing surfaces (7).
The motivation for our research into bat-inspired aerial robots is twofold. First, the study of these robots will provide insight into flapping aerial robotics, and the development of these soft-winged robots will have a practical impact on robotics applications where humans and robots share a common environment. From an engineering perspective, understanding bat flight is a rich and interesting problem. Unlike birds or insects, bats exclusively use structural flexibility to generate the controlled force distribution on each membrane wing. Wing flexibility and complex wing kinematics are crucial to the unrivaled agility of bat flight (8, 9). This aspect of bat flight brings a unique perspective to research in winged aerial robotics, because most previous work on bioinspired flight is focused on insect flight (10–15) or hummingbird flight (16), using robots with relatively stiff wings (17, 18).
Bat-inspired aerial robots have a number of practical advantages over current aerial robots, such as quadrotors. In the case of humans and robots co-inhabiting shared spaces, the safety of bat-inspired robots with soft wings is the most important advantage. Although quadrotor platforms can demonstrate agile maneuvers in complex environments (19, 20), quadrotors and other rotorcraft are inherently unsafe for humans; demands of aerodynamic efficiency prohibit the use of rotor blades or propellers made of flexible material, and high noise levels pose a potential hazard for humans. In contrast, the compliant wings of a bat-like flapping robot flapping at lower frequencies (7 to 10 Hz versus 100 to 300 Hz of quadrotors) are inherently safe, because their wings comprise primarily flexible materials and are able to collide with one another, or with obstacles in their environment, with little or no damage.
Versatile wing conformation
The articulated mechanism of bats has speed-dependent morphing properties (21, 22) that respond differently to various flight maneuvers. For instance, consider a half-roll (180° roll) maneuver performed by insectivorous bats (23). Flexing a wing and consequently reducing the wing area would increase wing loading on the flexed wing, thereby reducing the lift force. In addition, pronation (pitch-down) of one wing and supination (pitch-up) of the other wing result in negative and positive angles of attack, respectively, thereby producing negative and positive lift forces on the wings, causing the bat to roll sharply. Bats use this maneuver to hunt insects because at 180° roll, they can use the natural camber on their wings to maximize descending acceleration. Insectivorous bats require a high level of agility because their insect prey are also capable of swooping during pursuit. With such formidable defense strategies used by their airborne prey, these bats require sharp changes in flight direction.
In mimicking bats’ functionally versatile dynamic wing conformations, two extreme paradigms are possible. On the one hand, many active joints can be incorporated in the design. This school of thought can lead to the design and development of robots with many degrees of actuation that simply cannot fly. Apart from performance issues that may appear from overactuating a dynamic system, these approaches are not practical for bat-inspired micro aerial vehicles (MAVs) because there are technical restrictions for sensing and actuating many joints in robots with tight weight (less than 100 g) and dimension restrictions. On the other hand, oversimplifying the morphing wing kinematics to oscillatory flat surfaces, which is similar to conventional ornithopters, underestimates the complexities of the bat flight mechanism. Such simplified ornithopters with simple wing kinematics may not help answer how bats achieve their impressive agile flight.
Body dimensional complexity
A better understanding of key DOFs in bat flight kinematics may help to design a simpler flying robot with substantially fewer joints that is yet capable of mimicking its biological counterparts. A similar paradigm has led to successful replication of human terrestrial locomotion (walking and running) using bipedal robots that have point feet (24), suggesting that feet are a redundant element of the human locomotion system. Assigning importance to the kinematic parameters can yield a simpler mechanism with fewer kinematic parameters if those parameters with higher kinematic contribution and significance are chosen. Such kinematic characterization methods have been applied to study various biological mechanisms (6, 9, 25–28).
Among these studies, Riskin et al. (6) enhance our understanding of bat aerial locomotion in particular by using the method of principal components analysis (PCA) to project bat joint movements to the subspace of eigenmodes, isolating the various components of the wing conformation. By using only the first eigenmode, 34% of biological bat flight kinematics are reproducible. By superimposing the first and second eigenmodes, more than 57% of bat flight kinematics can be replicated. These findings, which emphasize the existence of synergies (29) in bat flight kinematics to describe the sophisticated movements of the limbs during flight, suggest the possibility of mimicking bat kinematics with only a few DOFs (30).
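To illustrate the kind of analysis involved, the sketch below runs a PCA on synthetic joint-angle trajectories (not the dataset of Riskin et al.) and reports how much of the motion the first one or two eigenmodes reconstruct.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for recorded joint-angle trajectories:
# rows are time samples within wingbeats, columns are joint angles.
t = np.linspace(0, 1, 200)
joints = np.column_stack([np.sin(2 * np.pi * t + p) for p in np.linspace(0, 2, 20)])
joints += 0.05 * rng.standard_normal(joints.shape)

# PCA via the singular value decomposition of the centered data.
mean = joints.mean(axis=0)
u, s, vt = np.linalg.svd(joints - mean, full_matrices=False)

def reconstruct(k):
    """Reconstruct the kinematics using only the first k eigenmodes."""
    return mean + u[:, :k] * s[:k] @ vt[:k, :]

for k in (1, 2):
    var_explained = (s[:k] ** 2).sum() / (s ** 2).sum()
    err = np.linalg.norm(joints - reconstruct(k)) / np.linalg.norm(joints - mean)
    print(f"{k} mode(s): {100 * var_explained:.0f}% variance, "
          f"relative reconstruction error {err:.2f}")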
According to these PCAs, three functional groups, shown in Fig. 1, synthesize the wing morphing: (i) when wings spread, fingers bend; (ii) when wrists pronate, elbows bend; and (iii) the medial part of the wings is morphed in collaboration with the shoulders, hips, and knees (6). These dimensional complexity analyses reveal that the flapping motion of the wings, the mediolateral motion of the forelimbs, the flexion-extension of the fingers, the pronation-supination of the carpi, and the dorsoventral movement of the legs are the major DOFs. In developing our robotic platform Bat Bot (B2) (Fig. 2A), we selected these biologically meaningful DOFs and incorporated them in the design of B2 by means of a series of mechanical constraints.
Fig. 2 Bat Bot.
(A) B2 is self-sustained and self-contained; it has an onboard computer and several sensors for performing autonomous navigation in its environment. The computing, sensing, and power electronics, which are accommodated within B2, are custom-made and yield a fully self-sustained system despite weight and size restrictions. The computing unit, or main control board (MCB), hosts a microprocessor. While the navigation-and-control algorithm runs on the MCB in real time, a data acquisition unit acquires sensor data and commands the micro actuators. The sensing electronics, which are circuit boards custom-designed to achieve the smallest size possible, interface with the sensors and the MCB by collecting two kinds of measurements. First, an inertial measurement unit (IMU), which is fixed to the ribcage in such a way that the x axis points forward and the z axis points upward, reads the attitudes of the robot with respect to the inertial frame. Second, five magnetic encoders are located at the elbows, hips, and flapping joint to read the relative angles between the limbs with respect to the body. (B) Dynamic modulus analysis. Samples of membrane were mounted vertically in the dynamic modulus analyzer using tension clamps with ribbed grips to ensure that there was no slipping of the sample. Data were collected using controlled force analysis at a ramp rate of 0.05 N/min over the range 0.001 to 1.000 N. The temperature was held at 24.56°C. The estimated average modulus, ultimate tensile strength (UTS), and elongation are 0.0028 MPa, 0.81 MPa, and 439.27%, respectively. The average modulus and UTS along fiber direction are 11.33 and 17.35 MPa, respectively. (C) The custom-made silicone-based membrane and embedded carbon fibers.
RESULTS
System design
B2’s flight mechanism (shown in Fig. 3, A to C) consists of the left and right wings, each including a forelimb and a hindlimb mechanism. The left and right wings are coupled with a mechanical oscillator. A motor spins a crankshaft mechanism, which moves both wings synchronously dorsoventrally while each wing can move asynchronously mediolaterally. The hindlimbs that synthesize the trailing edge of the wings can move asynchronously and dorsoventrally. If it were not for mechanical couplings and constraints, the morphing mechanism of B2 would have nine DOFs. Because the physical constraints are present, four DOFs are coupled, yielding a five-DOF mechanism.
Fig. 3 (A) B2’s flight mechanism and its DOFs. We introduced mechanical couplings in the armwing to synthesize a mechanism with a few DOFs. (B) The armwing retains only one actuated movement, which is a push-pull movement produced by a spindle mechanism hosted in the shoulder. (C) The leg mechanism. (D) B2’s electronics architecture. At the center, the microprocessor from STMicroelectronics communicates with several components, including an IMU from VectorNav Technologies, an SD card reader, five AS5048 Hall effect encoders, and two dual-port dc motor drivers. Two wireless communication devices, an eight-channel micro RC receiver (DSM2) and a Bluetooth device, make it possible to communicate with the host (Panel). The microprocessor has several peripherals, such as universal synchronous/asynchronous receiver/transmitter (USART), serial peripheral interface (SPI), pulse-width modulation (PWM), and secure digital input/output (SDIO). To test and deploy the controller on the platform, we used Hardware-in-the-Loop (HIL) simulation. In this method, a real-time computer is used as a virtual plant (model), and the flight controller, which is embedded on the physical microprocessor, responds to the state variables of the virtual model. In this way, the functionality of the controller is validated and debugged before being deployed on the vehicle.
The forelimbs (see Fig. 3B), which provide membranal mechanical support and morphing leverage, consist of nine links: the humeral (p0-p1), humeral support (p1-p2), radial (p1-p3), radial support (p4-p5), carpal (p3-p4), carpal support (p1-p5), and three digital links. Mobilizing this structure requires embedding rotation in the humerus, pronating rotation in the wrists, and abduction-adduction and flexion-extension in the digits. All of these require the active actuation of the shoulders, wrists, and finger knuckles, respectively.
A few attempts have been made to incorporate similar DOFs in an MAV. Researchers at Brown University have used string-and-pulley–based actuating mechanisms to articulate a robotic membranous wing (4). In their design, the wing is mounted on a support to avoid any installation of actuators on the robotic wing. In this support, a bundle that includes several strings is routed through the wing’s links. It is then connected to several motors incorporated in the support. This form of actuation makes it possible to realize several active joints in the robotic wing. However, such a method is not practical for a flying MAV because it requires heavy actuators to be installed in the ribcage. Unlike the robotic wing from (4), we introduced physical constraints (see Fig. 3, A to C) in B2 to synthesize a flight mechanism with a few actuated joints. These mechanical constraints follow.
Morphing wing flight apparatus
A three-link mechanism, where each link is connected to the next one with a revolute joint while one link is pivoted to a fixed support, is uniquely defined mathematically using three angles or configuration variables. Regulating the position and orientation of the end effector in the three-link mechanism implies direct control of the three revolute joints. Constraining the mechanism with three rigid links results in a one-DOF mechanism requiring only one actuator.
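A minimal sketch of that statement for a planar chain, with illustrative link lengths rather than B2’s geometry: three joint angles fully determine the posture and the end-effector position.

import numpy as np

def three_link_fk(angles, lengths=(1.0, 0.8, 0.5)):
    """Planar forward kinematics for three revolute joints in series.

    'angles' are the three configuration variables (radians, each relative
    to the previous link); returns the end-effector position. Fixing the
    three angles fixes the posture completely, which is why constraining
    the chain down to one independent angle leaves a one-DOF mechanism.
    """
    x = y = 0.0
    theta = 0.0
    for q, l in zip(angles, lengths):
        theta += q
        x += l * np.cos(theta)
        y += l * np.sin(theta)
    return np.array([x, y])

print(three_link_fk(np.deg2rad([30.0, -20.0, 10.0])))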
Each of the forelimbs is similar to this three-link mechanism, and their links are hinged to one another using rigid one-DOF revolute joints. The rotational movement of the humeral link around the fixed shoulder joint p0 is affected by linear movements of the point p2 relative to the humeral shoulder joint. A linear motion of the humeral support link at the shoulder moves the radial link relative to the humeral link and results in elbow flexion-extension. Although humeral and radial links move with respect to each other, a relative motion of the outer digital link with respect to the radial link is realized as the elbow flexion-extension is projected to the carpal plate through the radial support link (see Fig. 3B).
The ball-and-socket universal joints at two ends of the support radial link facilitate the passive movements of the carpal plate in a pronating direction. In contrast to biological bats, which actively rotate their wrists, B2 has passive carpal rotations with respect to the radius.
Digital links I, II, and III are cantilevered to the carpal plate (p6, p7, and p8); they are flexible slender carbon fiber tubes that can passively flex and extend with respect to the carpal plate, meaning that they introduce passive DOFs in the flight mechanism. In addition to these passive flexion-extension movements, the digital links can passively abduct and adduct with respect to each other. The fingers have no knuckles, and their relative angle with respect to one another is predefined.
As a result, each of B2’s forelimbs has one actuated DOF that transforms the linear motion of its spindle mechanism into three active and biologically meaningful movements: (i) active humeral retraction-protraction (shoulder angle), (ii) active elbow flexion-extension (elbow angle), and (iii) active carpal abduction-adduction (wrist angle). The passive DOFs include carpal pronation, digital abduction-adduction, and flexion-extension.
In the case of the hindlimbs (legs), it is challenging to accurately quantify the aerodynamic consequences of leg absence or presence in bats and determine their influence on the produced aerodynamic lift and drag forces. This is because the movements of hindlimbs affect the membrane locally at the trailing edge of the wings, whereas at distal positions, wings are mostly influenced by forelimbs. However, legs can enhance the agility of flight by providing additional control of the left and right sides of the trailing edge of the membrane wing (31). Adjusting the vertical position of the legs with respect to the body has two major effects: (i) leg-induced wing camber and (ii) increasing the angle of attack locally at the tail. In other words, increasing the leg angle increases lift, drag, and pitching moment (31). In addition, there is another benefit to carefully controlled tail actuation: Drag notably decreases because tails prevent flow detachments and delay the onset of flow separation (32).
Benefiting from these aerodynamic effects, bats have unique mechanistic bases; the anatomical evolutions in their hindlimbs enable these mammals to actively use their hindlimbs during flight (33). In contrast to terrestrial mammals, the ball-and-socket joint that connects the femoral bone to the body is rotated in such a way that knee flexion moves the ankle dorsoventrally. This condition yields pronounced knee flexions ventrally.
From a kinematics standpoint, the sophisticated movements of ankles in bats include dorsoventral and mediolateral movements. Ankles move ventrally during the downstroke, and they start moving dorsally during the upstroke (33). Motivated by the roles of legs in bat flight, we implemented two asynchronously active legs for controlling the trailing edge of the membrane wing in the design of B2. We hinged each leg to the body by one-DOF revolute joints such that the produced dorsoventral movement happens in a plane that is tilted at an angle relative to the parasagittal plane (see Fig. 3C). Contrary to biological bats, B2’s legs have no mediolateral movements; Riskin et al. (6) suggest that such movements are less pronounced in biological bats. To map the linear movements of our actuation system to the dorsoventral movements of the legs, we used a three-bar linkage mechanism (34).
Anisotropic membranous wing
The articulated body of B2 yields a structure that cannot accommodate conventional fabric covering materials, such as unstretchable nylon films. Unstretchable materials resist the forelimb and leg movements. As a result, we covered the skeleton of our robot with a custom-made, ultrathin (56 μm), silicone-based membrane that is designed to match the elastic properties of biological bats’ membranes. In general, bat skin spans the body such that it is anchored to forelimbs, digital bones, and hindlimbs. This yields a morphing mechanism with soft wings, which is driven by the movements of the limbs. These compliant and anisotropic structures with internal tensile forces in dorsoventral and mediolateral directions have elastin fiber bundles, which provide an extensibility and self-folding (self-packing) property to the wing membrane (35).
Reverse engineering all of these characteristics is not feasible from an engineering fabrication standpoint; therefore, we focused our attention on a few properties of the membrane wing. In producing such a membranous wing, we studied the anatomical properties of bats’ biological skin and found the key features to be (i) weight per unit of area (area density), (ii) tensile modulus, and (iii) stretchability (see Fig. 2, B and C). The area density is important because high-density membranes distributed across the robot’s skeleton increase the wing’s moment of inertia along the flapping axis and the overall payload of B2. In addition, internal tensile forces introduced by the membrane to the system are important because the micro motors used in the robot have limited torque outputs. When the pretension forces become large, the stall condition emerges in the actuators. This can damage the motor as well as the power electronics. The stretchability of the membrane defines the capacity of the wing to fold and unfold mediolaterally within the range of movement of actuators so that undesirable skin wrinkles or ruptures are avoided.
To produce an ultrathin and stretchable skin, we used two ultraflat metal sheets with a 10-μm flatness precision to sandwich our silicone materials. This ensures an even and consistent pressure distribution profile on the material. We synthesized a polymer in which two components—one containing a catalyst and the other containing polyorganosiloxanes with hydride functional groups—began vulcanization in the laboratory environment. The first component is a mixture of 65 to 75% by weight polyorganosiloxanes and 20 to 25% amorphous silica, and the second component is a mixture of 75 to 85% polyorganosiloxanes, 20 to 25% amorphous silica, and less than 0.1% platinum-siloxane complex. Platinum-siloxane is a catalyst for polymer chain growth. The Si–O bond length is about 1.68 Å with a bond angle of 130°, whereas the C–C bond found in most conventional polymers is about 1.54 Å with a 112° bond angle. Because of these geometric factors, silicone polymers exhibit a greater percentage of elongation and flexibility than carbon backbone polymers. However, silica is heavier than carbon, which could potentially make the wing too heavy and too rigid for flight. To solve this problem, we added hexamethyldisiloxane, which reduces the thickness and viscosity of the silicone, in an experimentally determined ratio.
Virtual constraints and feedback control
A crucial but unseen component of B2 is its flight control supported by its onboard sensors, high-performance micromotors with encoder feedback, and a microprocessor (see Fig. 3D). B2 and conventional flying robots such as fixed-wing and rotary-wing robots are analogous in that they all rely on oscillatory modulations of the magnitude and direction of aerodynamic forces. However, their flight control schemes are different. Conventional fixed-wing MAVs are often controlled by thrust and conventional control surfaces such as elevators, ailerons, and rudders. In contrast, B2 has nine active oscillatory joints (five of which are independent) in comparison to six DOFs (attitude and position) that are actively controlled. In other words, the control design requires suitable allocation of the control efforts to the joints.
In addition, challenges in flight control synthesis for B2 have roots in the nonlinear nature of the forces that act on it. B2, similar to fruit bats in size and mass (wing span, 30 to 40 cm; mass, 50 to 150 g), is capable of achieving a flapping frequency that is lower than or equal to its natural body response; as a result, it is often affected by nonlinear inertial and aerodynamic artifacts. Such forces often appear as nonlinear and nonaffine in-control terms in the equations of motion (36). Therefore, conventional approximation methods that assume flapping frequency to be much faster than the body dynamic response, such as the celebrated method of averaging, commonly applied to insect-scale flapping flight (10, 11), fail to make accurate predictions of the system’s behavior.
The approach taken in this paper is to asymptotically impose virtual constraints (holonomic constraints) on B2’s dynamic system through closed-loop feedback. This concept has a long history, but its application in nonlinear control theory is primarily due to the work of Isidori et al. (37, 38). The advantage of imposing these constraints through closed-loop feedback (software) rather than physically (hardware) is that B2’s wing configurations can be adjusted and modified during the flight. We have tested this concept on B2 to generate cruise flights, bank turning, and sharp diving maneuvers, and we anticipate that this can potentially help reconstruct the adaptive properties of bat flight for other maneuvers. For instance, bats use tip reversal at low flight speeds (hovering) to produce thrust and weight support, and the stroke plane becomes perpendicular to the body at higher flight speeds (39).
We parameterized the morphing structure of B2 by several configuration variables. The configuration variable vector qmorph, which defines the morphology of the forelimb and hindlimb as they evolve through the action of actuated coordinates, embodies nine biologically meaningful DOFs
qmorph = (qRP^R, qFE^R, qAA^R, qRP^L, qFE^L, qAA^L, qFL, qDV^R, qDV^L)    (1)

where qRP^i describes the retraction-protraction angle, qFE^i is the radial flexion-extension angle, qAA^i is the abduction-adduction angle of the carpus, qFL is the flapping angle, and qDV^i is the dorsoventral movement of the hindlimb (see Fig. 3, B and C). Here, the superscript i denotes the right (R) or left (L) joint angles. The mechanical constraints described earlier yield a nonlinear map from actuated joint angles

qact = (qsp^R, qsp^L, qFL, qDV^R, qDV^L)    (2)

to the morphology configuration variable vector qmorph. The spindle action shown in Fig. 3B is denoted by qsp^i. The nonlinear map is explained mathematically in (40), which reflects two loops made by (p0-p1-p2) and (p1-p3-p4-p5), as shown in Fig. 3B. We used these configuration variables to develop B2’s nonlinear dynamic model and predefined actuator trajectories; see Materials and Methods and (40).
Now, the virtual constraints are given by

N(t, β, qact) = qact − rdes(t, β)    (3)

where rdes is the time-varying desired trajectory associated with the actuated coordinates, t is time, and β is the vector of the wing kinematic parameters explained in Materials and Methods. Once the virtual constraints N are enforced, the posture of B2 varies because the actuated portion of the system now implicitly follows the time-varying trajectory rdes. To design rdes, we precomputed the time evolution of B2’s joint trajectories for N = 0. We applied numerically stable approaches to guarantee that these trajectory evolutions take place on a constraint manifold (see Materials and Methods). Then, we used a finite-state nonlinear optimizer to shape these constraints subject to a series of predefined conditions (40).
The stability of the designed periodic solutions can be checked by inspecting the eigenvalues of the monodromy matrix [Eq. 22 in (40)] after defining a Poincaré map P and a Poincaré section (40). We computed the monodromy matrix by using a central difference scheme. We perturbed our system states around the equilibrium point at the beginning of the flapping cycle and then integrated the system dynamics given in Eqs. 10 and 16 throughout one flapping cycle.
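As a concrete illustration of that procedure, the sketch below estimates the Jacobian of a generic return map by central differences. The argument poincare_map is a stand-in callable (in B2’s case it would integrate Eqs. 10 and 16 over one flapping cycle); the toy linear map at the end is purely illustrative.

import numpy as np

def monodromy_matrix(poincare_map, x_star, eps=1e-5):
    """Estimate the monodromy matrix (Jacobian of the return map) at a fixed point.

    'poincare_map' maps the state at the start of one flapping cycle to the
    state at the start of the next; each state component is perturbed in turn
    and the map is evaluated on both sides (central differences).
    """
    n = len(x_star)
    M = np.zeros((n, n))
    for j in range(n):
        dx = np.zeros(n)
        dx[j] = eps
        M[:, j] = (poincare_map(x_star + dx) - poincare_map(x_star - dx)) / (2 * eps)
    return M

# Toy stand-in map with a known fixed point at the origin.
A_true = np.array([[0.9, 0.1], [0.0, 0.7]])
M = monodromy_matrix(lambda x: A_true @ x, np.zeros(2))
print(np.abs(np.linalg.eigvals(M)))   # all < 1  ->  the periodic orbit is stable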
To stabilize the designed periodic solution, we augmented the desired trajectory rdes with a correction to its parameters, rdes(t, β + δβ), where δβ is computed by Eq. 7. The Poincaré return map takes the robot states qk and q̇k (the Euler angles roll, pitch, and yaw and their rates) at the beginning of the kth flapping cycle and leads to the states at the beginning of the next flapping cycle,

(qk+1, q̇k+1) = P(qk, q̇k, βk)    (4)

We linearized the map P at the fixed point (q*, q̇*, β*), resulting in a dynamic system that describes the periodic behavior of the system at the beginning of each flapping cycle

δxk+1 = A δxk + B δβk    (5)

where (*) denotes the equilibrium points, δxk = (qk − q*, q̇k − q̇*) denotes deviations from the equilibrium points, and A and B are the Jacobians of P with respect to the states and the kinematic parameters. The changes in the kinematic parameters are denoted by δβ. Here, the stability analysis of the periodic trajectories of the bat robot is relaxed to the stability analysis of the equilibrium of the linearized Poincaré return map [see (40)]. As a result, classical feedback design tools can be applied to stabilize the system. We computed a constant state feedback gain matrix K such that the closed-loop linearized map is exponentially stable:

δxk+1 = (A − B K) δxk    (6)

We used this state feedback policy at the beginning of each flapping cycle to update the kinematic parameters as follows:

δβk = −K δxk    (7)
In Fig. 4C, the controller architecture is shown. The controller consists of two parts: (i) the discrete controller that updates the kinematic parameters β at ≈10 Hz and (ii) the morphing controller that enforces the predefined trajectories rdes and loops at 100 Hz.
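For illustration, the sketch below computes one such constant gain matrix for a linearized return map of the form in Eq. 5, using a discrete-time LQR as one standard design choice; the A and B matrices here are placeholders rather than values identified for B2.

import numpy as np
from scipy.linalg import solve_discrete_are

def dlqr(A, B, Q, R):
    """Constant state-feedback gain for x_{k+1} = A x_k + B u_k (discrete LQR)."""
    P = solve_discrete_are(A, B, Q, R)
    return np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

# Placeholder linearized Poincare map (one unstable eigenvalue),
# standing in for the matrices identified from B2's model.
A = np.array([[1.1, 0.2], [0.0, 0.8]])
B = np.array([[0.0], [0.5]])
K = dlqr(A, B, Q=np.eye(2), R=np.array([[1.0]]))

# At the start of each flapping cycle: delta_beta = -K @ delta_x (cf. Eq. 7),
# and the closed-loop map A - B K is stable.
print(np.abs(np.linalg.eigvals(A - B @ K)))   # all magnitudes < 1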
Fig. 4 Untethered flights and controller architecture.
(A) Snapshots of a zero-path straight flight. (B) Snapshots of a diving maneuver. (C) The main controller consists of the discrete (C1) and morphing controllers (C2). The discrete and morphing controllers are updated through sensor measurements H1 and H2 at 10 and 100 Hz, respectively. The subsystems S1, S2, and S3 are the underactuated, actuated, and aerodynamic parts [see Materials and Methods and (40)].
Next, we used joint movements (qFE^R, qFE^L, qDV^R, and qDV^L) to flex (extend) the armwings or ascend (descend) the legs and reconstructed two flight maneuvers: (i) a banking turn and (ii) a swoop maneuver. These joint motions were realized by modifying the term bi in the actuator-desired trajectories (Eq. 12 in Materials and Methods).
Banking turn maneuver
We performed extensive untethered flight experiments in a large indoor space (the Stock Pavilion at the University of Illinois at Urbana-Champaign), where we could use a net (30 m by 30 m) to protect the sensitive electronics of B2 at the moment of landing. The flight arena was not equipped with any motion capture system. Although an operator adjusted the landing position to keep landings within the netted area, the vehicle landed outside the net many times. The launching task was performed by a human operator, thereby adding to the degree of inconsistency of the launches.
In all of these experiments, at the launch moment, the system reached its maximum flapping speed (≈10 Hz). In Fig. 5A, the time evolution of the roll angle qx sampled at 100 Hz is shown. The hand launch introduced initial perturbations, which considerably affected the first 10 wingbeats. Despite the external perturbations at the launch moment, the vehicle stabilized the roll angle within 20 wingbeats. This time envelope is denoted by Δtstab and is shown by the red region. Then, the operator sent a turn command, which is shown by the blue region. Immediately after the command was sent, the roll angle increased, indicating a turn toward the right wing. The first flight test, which is shown as a solid black line and highlighted with green, does not follow this increasing trend because the turn command was not applied in that experiment, which serves as the comparison case.
Fig. 5 The time evolution of the Euler angles roll qx, pitch qy, and yaw qz for eight flight tests is shown.
(A and B) The roll and pitch angles converge to a bounded neighborhood of 0° despite perturbations at the launch moment. The red region represents the time envelope required for vehicle stabilization and is denoted by Δtstab. For all of the flight experiments except the first [denoted by S.F. (straight flight) and highlighted by the green region], a bank turn command was sent at a time within the blue range. Then, the roll and pitch angles start to increase, indicating the beginning of the bank turn. (C) The behavior of the yaw angle. In the red region, vehicle heading is stabilized (except flight tests 1 and 4). In the blue region, the vehicle starts to turn toward the right armwing (negative heading rate). This behavior is not seen in the straight flight.
In Figs. 6 and 7, the morphing joint angles qFE^R, qFE^L, qDV^R, and qDV^L for these flight tests are reported. These joint angles were recorded by the onboard Hall effect sensors and were sampled at 100 Hz. As Fig. 6 (A to D) suggests, the controller achieves a positive roll angle in the blue region by flexing the right armwing and extending the left armwing.

Fig. 6 Armwing joint angle time evolution.
The left armwing angle qFE^L (A and B) and the right armwing angle qFE^R (C and D) are shown for eight flight tests. (A and C) Close-up views of the stabilization time envelope. The red region represents the joint movement during the stabilization time envelope. (B and D) After the stabilization time envelope, for all of the flight experiments except the first (highlighted with green), a bank turn command was sent at a time within the blue range.
Fig. 7 Leg joint angle time evolution.
The left leg angle qDV^L (A and B) and the right leg angle qDV^R (C and D) are shown for eight flight tests. (A and C) Close-up views of the stabilization time envelope. (B and D) After the stabilization time envelope, the dorsal movement of the legs is applied to secure a successful belly landing. This dorsal movement can cause pitch-up artifacts, which are extremely nonlinear.

In Fig. 5 (B and C), the time evolutions of the Euler angles qy and qz are shown. Like the roll angle, the pitch angle settled within a bounded neighborhood of 0° in the red region. At the moment of the banking turn (blue region), pitch-up artifacts appeared because of extreme nonlinear couplings between the roll and pitch dynamics. In addition, these pitch-ups are, to some extent, the result of the dorsal movement of the legs, which is applied to secure a successful belly landing (see Fig. 7, A to D). The pitch angle in the straight flight behaved differently: there were no sharp rises in the pitch angle in the blue region. In Fig. 5C, it is easy to observe that for all of the flight tests (except the straight flight), the rate of change of the heading angle increased after the turn command was applied, suggesting the onset of the bank turn.

Diving maneuver
Next, a sharp diving maneuver, which is performed by bats when pursuing their prey, was reconstructed. Insectivorous echolocating bats face a sophisticated array of defenses used by their airborne prey. One such insect defense is the ultrasound-triggered dive, which is a sudden, rapid drop in altitude, sometimes all the way to the ground.
We tried to reconstruct this maneuver by triggering a sharp pitch-down motion at mid-flight. After launching the robot, the operator sent the command, which resulted in a sharp ventral movement of the legs (shown in Fig. 8C). Meanwhile, the armwings were stretched (shown in Fig. 8B). In Fig. 8A, a sharp rise of the pitch angle is noticeable. The vehicle swooped and reached a peak velocity of about 14 m/s. This extremely agile maneuver testifies to the level of attitude instability in B2.

Fig. 8 Joint angle evolution during swooping down.
(A) The time evolution of the Euler angles during the diving maneuver. (B) Armwing joint angles. (C) Leg joint angles. The red region indicates the stabilization time envelope; the light blue region indicates the dive time span.

Flight characteristics

B2’s flight characteristics are compared with Rousettus aegyptiacus flight information from (41). R. aegyptiacus flight information corresponds to a flight speed U within the range of 3 to 5 m/s. B2’s morphological details, which are presented in table S1, are used to compute B2’s flight characteristics. According to Rosén et al. (28), the arc length traveled by the wingtip stip is given by stip = 2ψbs, where ψ and bs are the flapping stroke angle and wingspan, respectively (stip,B2 = 0.48 m and stip,Rous. = 0.36 m). A motion capture system (shown in fig. S3) was used to register the position coordinates px and py for four untethered flight tests (see fig. S2). The flight speed was calculated by taking the time derivative of px and py. We considered the average flight speed UB2 = 5.6 m/s in the succeeding calculations.
The measure K (28), which is similar to the reduced frequency and is computed on the basis of the wingtip speed, is given by K = stip/tf/U, where tf is the time span of a single wingbeat (KB2 = 0.86 and KRous. = 0.81). Subsequently, the advance ratio J is equal to the inverse of the measure K (JB2 = 1.16 and JRous. = 1.22). The wing loading Qs is given by Qs = Mbg/Smax (41), where Mb is the total body mass, g is the gravitational constant, and Smax is the maximum wing area (Qs,B2 = 13 N/m2 and Qs,Rous. = 11 N/m2).
The Strouhal number St is given by St = Δztip/tf/U (41), where Δztip is the vertical displacement of the wingtip with respect to the shoulder (28) (StB2 = 0.43 and StRous. = 0.4 to 0.6). Last, the nominal coefficient of lift Cl is computed following (41), where ρair is the density of dry air, Vc is the velocity of the carpus (see Fig. 3B), and Fvert is the magnitude of the vertical lift force (see fig. S4). We measured Fvert by installing the robot on top of a miniature load cell inside a wind tunnel. The wind tunnel is programmed to sustain an air velocity of 4 to 6 m/s (Cl,B2 = 0.8 and Cl,Rous. = 1.0).
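The sketch below reproduces these nondimensional numbers from the formulas quoted above; the flight speed, wingtip arc length, wingbeat period, and mass follow values given in the text, whereas the wing area and wingtip displacement are illustrative assumptions chosen to be consistent with the reported Qs and St.

# Reproducing B2's nondimensional flight numbers from the formulas in the text.
# U, s_tip, M_b, and the wingbeat period follow values quoted above; S_max and
# dz_tip are illustrative assumptions (not given explicitly in this excerpt).
g = 9.81          # m/s^2
M_b = 0.093       # kg, total body mass
U = 5.6           # m/s, average flight speed
s_tip = 0.48      # m, arc length traveled by the wingtip per wingbeat
t_f = 0.1         # s, wingbeat period (approx. 10 Hz flapping)
S_max = 0.07      # m^2, maximum wing area (assumed)
dz_tip = 0.24     # m, vertical wingtip displacement (assumed)

K = s_tip / t_f / U          # reduced-frequency-like measure
J = 1.0 / K                  # advance ratio
Q_s = M_b * g / S_max        # wing loading, N/m^2
St = dz_tip / t_f / U        # Strouhal number

print(f"K = {K:.2f}, J = {J:.2f}, Qs = {Q_s:.1f} N/m^2, St = {St:.2f}")
# Expected to land near the reported K = 0.86, J = 1.16, Qs = 13 N/m^2, St = 0.43.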
CONCLUSION
Bats are known to demonstrate exceptionally agile maneuvers thanks to many joints that are embedded in their flight mechanism, which synthesize sophisticated and functionally versatile dynamic wing conformations. Bats represent a unique solution to the challenges of maneuverable flapping flight and provide inspiration for vehicle design at bat-length scales.
The difficulties associated with reconstructing bat-inspired flight are exacerbated by the inherent complexities associated with the design of such bat robots. Consequently, we have identified and implemented the most important wing joints by means of a series of mechanical constraints and a feedback control design to control the six-DOF flight motion of the bat robot called B2.
The main results of this study are fourfold.

  • First, for robotics, this work demonstrates the synergistic design and flight control of an aerial robot with dynamic wing conformations similar to those of biological bats. Conventional flapping wing platforms have wings with few joints, which can be conceptualized as rigid bodies. These platforms often use conventional fixed-wing airplane control surfaces (e.g., rudders and ailerons); therefore, these robots are not suitable for examining the flight mechanisms of biological counterparts with nontrivial morphologies. This work has demonstrated several autonomous flight maneuvers (zero-path flight, banking turn, and diving) of a self-contained robotic platform whose control arrangement is fundamentally different from that of existing flapping robots. B2 uses a morphing skeleton array wherein the use of a silicone-based skin enables the robot to morph its articulated structure in midair without losing an effective and smooth aerodynamic surface. This morphing property cannot be realized with conventional fabrics (e.g., nylon and Mylar) that are primarily used in flapping wing research.
  • Next, for dynamics and control, this work applies the notion of stable periodic orbits to study aerial locomotion of B2, whose unstable flight dynamics are aggravated by the flexibility of the wings. The technique used in the paper can simplify stability analysis by establishing equivalence between the stability of a periodic orbit and a linearized Poincaré map.
  • Third, this work introduces a design scheme (as shown in Fig. 1) to mimic the key flight mechanisms of biological counterparts. There is no well-established methodology for reverse engineering the sophisticated locomotion of biological counterparts. These animals have several active and passive joints that make it impractical to incorporate all of them in the design. The framework that is introduced in this study accommodates the key DOFs of bat wings and legs in a 93-g flying robot with tight payload and size restrictions. These DOFs include the retraction-protraction of the shoulders, flexion-extension of the elbows, abduction-adduction of the wrists, and dorsoventral movement of the legs. The design framework is staged in two steps: introducing mechanical constraints motivated by PCA of bat flight kinematics and designing virtual constraints motivated by holonomically constrained mechanical systems.
  • Last but not least, this research contributes to biological studies on bat flight. The existing methods for biology rely on vision-based motion capture systems that use high-speed imaging sensors to record the trajectory of joints and limbs during bat flight. Although these approaches can effectively analyze the joint kinematics of bat wings in flight, they cannot help understand how specific DOFs or specific wing movement patterns contribute to a particular flight maneuver of a bat. B2 can be used to reconstruct flight maneuvers of bats by applying wing movement patterns observed in bat flight, thereby helping us understand the role of the dominant DOFs of bats. In this work, we have demonstrated the effectiveness of using this robot to reproduce flight maneuvers such as straight flight, banking turn, and diving flight. Motivated by previous biological studies such as that by Gardiner et al. (42), which inspects the role of the legs in modulating the pitch movement of bat flight, we have successfully implemented the dorsoventral movement control of the legs of B2 to produce a sharp diving maneuver or to maintain a straight path. Furthermore, in this work, bank turn maneuvers of bats (23) have been successfully reconstructed by controlling asymmetric wing folding of the two main wings. The self-sufficiency of an autonomous robotic platform in sensing, actuation, and computation permits extensive analysis of dynamic system responses. In other words, thorough and effective inspection of the key DOFs in bat flight is possible by selectively perturbing these joint angles of the robot and analyzing the response. It is the presence of several varying parameters in bat flight kinematics that hinders such a systematic analysis. Consequently, we envision the potential applications of our robotic platform as an important tool for studying bat flight in the context of robotic-inspired biology.
MATERIALS AND METHODS
Nonlinear dynamics
The mathematical dynamic model of B2 is developed using the Lagrange method (36) after computing kinetic and potential energies. Rotary and translational kinetic energies are evaluated after defining the position and attitude of the body with respect to the inertial frame. Euler angles are used to define the attitude of the robot with respect to the inertial frame, whereas body coordinate frames, which are attached to the wings, define the wing movements with respect to the body coordinate frame.
Modeling assumptions
The following assumptions are made during the nonlinear dynamic modeling:
  1. Wing inertial forces are considered because the wings are not massless.
  2. There is no spanwise or chordwise flexibility in the wings; that is, B2 is modeled as a rigid flapping wing aircraft. Therefore, there is no flexibility-induced phase difference between the flapping and feathering (pitch) motions, and no degrees of underactuation are introduced by such a passive phase difference.
  3. Strip theory (43) is used for computing aerodynamic forces and moments.
  4. The aerodynamic center is assumed to be located at the quarter-chord point (31), and the aerodynamic forces, which act on the aerodynamic center, include the lift and drag forces.
Method of Lagrange
During free-fall ballistic motions, B2 with its links and joints represents an open kinematic chain that evolves under the influence of gravitational and external aerodynamic forces. We used the method of Lagrange to mathematically define this dynamics. This open kinematic chain is uniquely determined with the fuselage Euler angles roll, pitch, and yaw (qx; qy; qz); fuselage center of mass (CoM) positions (px; py; pz); and morphing joint angles qmorph defined in Eq. 1. Therefore, the robot’s configuration variable vector is

q = (qx, qy, qz, px, py, pz, qmorph) ∈ Q    (8)

where Q is the robot’s configuration variable space. We derived Lagrange equations after computing the total energy of the free open kinematic chain as the difference between the total kinetic energy and the total potential energy. Following Hamilton’s principle of least action, the equations of motion for the open kinematic chain with ballistic motions are given by

D(q) q̈ + C(q, q̇) q̇ + G(q) = Fgen    (9)

where D(q), C(q, q̇), and G(q) denote the inertial matrix, the Coriolis matrix, and the gravity vector, respectively. The generalized forces Fgen, which reflect the role of aerodynamic forces as well as the action of several morphing motors in B2, are described in (40).
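To make the procedure concrete on a much smaller system, the sketch below applies the same Lagrange machinery to a planar two-link chain with point masses (a stand-in for B2’s full open kinematic chain, not its actual model) and extracts the inertia matrix from the resulting equations.

import sympy as sp

t = sp.symbols("t")
m1, m2, l1, l2, g = sp.symbols("m1 m2 l1 l2 g", positive=True)
q1, q2 = sp.Function("q1")(t), sp.Function("q2")(t)

# Point-mass positions for a planar two-link chain.
x1, y1 = l1 * sp.sin(q1), -l1 * sp.cos(q1)
x2, y2 = x1 + l2 * sp.sin(q1 + q2), y1 - l2 * sp.cos(q1 + q2)

# Total kinetic and potential energy, and the Lagrangian.
T = sp.Rational(1, 2) * (m1 * (sp.diff(x1, t)**2 + sp.diff(y1, t)**2)
                         + m2 * (sp.diff(x2, t)**2 + sp.diff(y2, t)**2))
V = m1 * g * y1 + m2 * g * y2
L = T - V

# Euler-Lagrange equations: d/dt(dL/dqdot_i) - dL/dq_i = 0 (ballistic, no inputs).
eqs = []
for qi in (q1, q2):
    dL_dqdot = sp.diff(L, sp.Derivative(qi, t))
    eqs.append(sp.simplify(sp.diff(dL_dqdot, t) - sp.diff(L, qi)))

# The inertia matrix D(q) collects the coefficients of the accelerations;
# the remaining terms group into the Coriolis and gravity contributions.
qdd = [sp.Derivative(q1, (t, 2)), sp.Derivative(q2, (t, 2))]
D = sp.Matrix([[sp.simplify(sp.diff(eq, a)) for a in qdd] for eq in eqs])
print(D)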

Virtual constraints and offline actuator trajectory design
For wing articulations, we use a framework based on defining a set of parameterized and time-varying holonomic constraints (37, 38). This method permits shaping of the overall system dynamics through such constraints. These holonomic constraints control the posture of the articulated flight mechanism by driving the actuated portion of the system and take place through the action of the servo actuators that are embedded in the robot.
We partitioned the configuration variable vector q into the actuated coordinates qact and the remaining body coordinates qbody, which include the Euler angles and body CoM positions. The dynamics (Eq. 9) are rewritten as

D11(q) q̈body + D12(q) q̈act + C11(q, q̇) q̇body + C12(q, q̇) q̇act + G1(q) = F1
D12(q)^T q̈body + D22(q) q̈act + C21(q, q̇) q̇body + C22(q, q̇) q̇act + G2(q) = F2    (10)

In the equations above, D11, D12, D22, C11, C12, C21, and C22 are block matrices; G1 and G2 are the corresponding components of the gravity vector, and F1 and F2 are the two components of the generalized forces (40). The nonlinear system in Eq. 10 shows that the actuated and unactuated dynamics are coupled by the inertial, Coriolis, gravity, and aerodynamic terms.
The actuated dynamics represent the servo actuators in the robot. The action of these actuators is described by introducing parameterized and time-varying holonomic constraints into the dynamic system. To shape the actuated coordinates, we defined a constraint manifold, and we used numerically stable approaches to enforce the evolution of the trajectories on this manifold. Thereafter, a finite-state nonlinear optimizer shapes these constraints.
The servo actuators move the links to the desired positions. This is similar to the behavior of a holonomically constrained mechanical system and, mathematically speaking, is equivalent to the time evolution of the system dynamics given by Eq. 10 over the manifold(11)where N is the constraint equation and is given by N(t, β, qact) = qact− rdes(t, β). In the constraint equation, rdes is the vector of the desired trajectories for the actuated coordinates qact and is given by(12)where t denotes time and β = {ω, ϕi, ai, bi} parameterizes the periodic actuator trajectories that define the wing motion. These parameters are the control input to the system. Imposing the constraint equations to the system dynamics (Eq. 10) at only the acceleration level will lead to numeric problems owing to the difficulties of obtaining accurate position and velocity initial values (44). In addition, numeric discretization errors will be present during the process of integration, and the constraints will not be satisfied. Therefore, the constraints in position and velocity levels are also considered (45)

$$\ddot{N} + \kappa_1 \dot{N} + \kappa_2 N = 0, \qquad (13)$$

where κ_1,2 are two constant matrices and

$$\dot{N} = \dot{\mathbf{q}}_{\mathrm{act}} - \dot{\mathbf{r}}_{\mathrm{des}}(t, \beta) \qquad (14)$$

$$\ddot{N} = \ddot{\mathbf{q}}_{\mathrm{act}} - \ddot{\mathbf{r}}_{\mathrm{des}}(t, \beta) \qquad (15)$$

Substituting Eqs. 14 and 15 into Eq. 13 gives the stabilized acceleration-level constraint $\ddot{\mathbf{q}}_{\mathrm{act}} - \ddot{\mathbf{r}}_{\mathrm{des}} + \kappa_1(\dot{\mathbf{q}}_{\mathrm{act}} - \dot{\mathbf{r}}_{\mathrm{des}}) + \kappa_2(\mathbf{q}_{\mathrm{act}} - \mathbf{r}_{\mathrm{des}}) = 0$. Now, interlocking Eq. 10 with this constraint forms the following system of ordinary differential equations on a parameterized manifold:

$$\begin{cases} \mathbf{D}(\mathbf{q})\,\ddot{\mathbf{q}} + \mathbf{C}(\mathbf{q},\dot{\mathbf{q}})\,\dot{\mathbf{q}} + \mathbf{G}(\mathbf{q}) = \mathbf{Q}_{\mathrm{gen}} \\ \ddot{\mathbf{q}}_{\mathrm{act}} - \ddot{\mathbf{r}}_{\mathrm{des}} + \kappa_1\left(\dot{\mathbf{q}}_{\mathrm{act}} - \dot{\mathbf{r}}_{\mathrm{des}}\right) + \kappa_2\left(\mathbf{q}_{\mathrm{act}} - \mathbf{r}_{\mathrm{des}}\right) = 0 \end{cases} \qquad (16)$$
With the system in this form, numeric integration of the differential-algebraic equation (DAE) is possible, and consequently, predefined periodic trajectories for the actuators can be designed. In (40), we used finite-state optimization and shooting methods to design periodic solutions for the DAE.
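As a rough illustration of how such a stabilized constraint can be integrated in practice, the sketch below drives a single actuated "wing" coordinate onto a periodic reference while a coupled unactuated coordinate evolves freely. The one-degree-of-freedom dynamics, the coupling term, the gains, and the trajectory parameters are all illustrative assumptions, not the values used for B2.

```python
# Minimal sketch: integrate a toy version of the stabilized constrained dynamics (Eqs. 13-16).
# The single wing DOF, the coupling term, the gains, and the beta parameters are assumptions.
import numpy as np
from scipy.integrate import solve_ivp

omega, a, b, phi = 2 * np.pi * 8.0, 0.4, 0.1, 0.0   # assumed trajectory parameters (beta)
k1, k2 = 40.0, 400.0                                 # stabilization gains (kappa_1, kappa_2)

def r_des(t):        # desired periodic actuator trajectory (cf. Eq. 12)
    return a * np.cos(omega * t + phi) + b

def r_des_dot(t):
    return -a * omega * np.sin(omega * t + phi)

def r_des_ddot(t):
    return -a * omega**2 * np.cos(omega * t + phi)

def dynamics(t, x):
    q_un, qd_un, q_act, qd_act = x
    # Stabilized constraint (Eqs. 13-15 combined):
    # qdd_act = rdd_des - k1 * (qd_act - rd_des) - k2 * (q_act - r_des)
    qdd_act = r_des_ddot(t) - k1 * (qd_act - r_des_dot(t)) - k2 * (q_act - r_des(t))
    # Toy unactuated dynamics, inertially coupled to the actuated motion.
    qdd_un = -9.81 * np.sin(q_un) - 0.2 * qdd_act
    return [qd_un, qdd_un, qd_act, qdd_act]

sol = solve_ivp(dynamics, (0.0, 1.0), [0.05, 0.0, 0.0, 0.0], max_step=1e-3)
print("constraint violation at t = 1 s:", sol.y[2, -1] - r_des(sol.t[-1]))
```

Even when the actuated coordinate starts off the manifold, the κ terms pull it back onto r_des, which is the practical reason the position- and velocity-level constraints are included.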
To verify the accuracy of the proposed nonlinear dynamic model in predicting the behavior of the vehicle, we compared the trajectories from eight different flight experiments with the model-predicted trajectories. In fig. S1, the time evolution of the pitch angle q_y and the pitch rate q̇_y is shown.
SUPPLEMENTARY MATERIALS
Supplementary Text
Fig. S1. Nonlinear model verification.
Fig. S2. Flight speed measurements.
Fig. S3. Motion capture system.
Fig. S4. Wind tunnel measurements.
Table S1. B2’s morphological details.
Movie S1. Membrane.
Movie S4. Swoop maneuver.
References (46–59)
REFERENCES AND NOTES
  1. K. Ma, P. Chirarattananon, S. Fuller, R. Wood, Controlled flight of a biologically inspired, insect-scale robot. Science 340, 603–607 (2013).
  2. A. Paranjape, S.-J. Chung, J. Kim, Novel dihedral-based control of flapping-wing aircraft with application to perching. IEEE Trans. Robot. 29, 1071–1084 (2013).
  3. J. W. Gerdes, S. K. Gupta, S. A. Wilkerson, A review of bird-inspired flapping wing miniature air vehicle designs. J. Mech. Robot. 4, 021003 (2012).
  4. J. W. Bahlman, S. M. Swartz, K. S. Breuer, Design and characterization of a multi-articulated robotic bat wing. Bioinspir. Biomim. 8, 016009 (2013).
  5. S.-J. Chung, M. Dorothy, Neurobiologically inspired control of engineered flapping flight. J. Guid. Control Dyn. 33, 440–453 (2010).
  6. D. K. Riskin, D. J. Willis, J. Iriarte-Díaz, T. L. Hedrick, M. Kostandov, J. Chen, D. H. Laidlaw, K. S. Breuer, S. M. Swartz, Quantifying the complexity of bat wing kinematics. J. Theor. Biol. 254, 604–615 (2008).
  7. S. M. Swartz, J. Iriarte-Diaz, D. K. Riskin, A. Song, X. Tian, D. J. Willis, K. S. Breuer, Wing structure and the aerodynamic basis of flight in bats, paper presented at the 45th AIAA Aerospace Sciences Meeting and Exhibit, 8 to 11 January 2007, Reno, NV (2007); http://arc.aiaa.org/doi/abs/10.2514/6.2007-42.
  8. A. Azuma, The Biokinetics of Flying and Swimming (Springer Science & Business Media, 2012).
  9. X. Tian, J. Iriarte-Diaz, K. Middleton, R. Galvao, E. Israeli, A. Roemer, A. Sullivan, A. Song, S. Swartz, K. Breuer, Direct measurements of the kinematics and dynamics of bat flight. Bioinspir. Biomim. 1, S10–S18 (2006).
  10. X. Deng, L. Schenato, W. C. Wu, S. S. Sastry, Flapping flight for biomimetic robotic insects: Part I. System modeling. IEEE Trans. Robot. 22, 776–788 (2006).
  11. X. Deng, L. Schenato, S. S. Sastry, Flapping flight for biomimetic robotic insects: Part II. Flight control design. IEEE Trans. Robot. 22, 789–803 (2006).
  12. R. J. Wood, S. Avadhanula, E. Steltz, M. Seeman, J. Entwistle, A. Bachrach, G. Barrows, S. Sanders, R. S. Fearing, Enabling technologies and subsystem integration for an autonomous palm-sized glider. IEEE Robot. Autom. Mag. 14, 82–91 (2007).
  13. R. J. Wood, The first takeoff of a biologically inspired at-scale robotic insect. IEEE Trans. Robot. 24, 341–347 (2008).
  14. D. B. Doman, C. Tang, S. Regisford, Modeling interactions between flexible flapping-wing spars, mechanisms, and drive motors. J. Guid. Control Dyn. 34, 1457–1473 (2011).
  15. I. Faruque, J. Sean Humbert, Dipteran insect flight dynamics. Part 1: Longitudinal motion about hover. J. Theor. Biol. 264, 538–552 (2010).
  16. J. Dietsch, Air and sea robots add new perspectives to the global knowledge base. IEEE Robot. Autom. Mag. 18, 8–9 (2011).
  17. S. A. Combes, T. L. Daniel, Shape, flapping and flexion: Wing and fin design for forward flight. J. Exp. Biol. 204, 2073 (2001).
  18. S. A. Combes, T. L. Daniel, Into thin air: Contributions of aerodynamic and inertial-elastic forces to wing bending in the hawkmoth Manduca sexta. J. Exp. Biol. 206, 2999–3006 (2003).
  19. S. Lupashin, A. Schöllig, M. Sherback, R. D’Andrea, A simple learning strategy for high-speed quadrocopter multi-flips, in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA) (IEEE, 2010), pp. 1642–1648.
  20. N. Michael, D. Mellinger, Q. Lindsey, V. Kumar, The GRASP multiple micro-UAV testbed. IEEE Robot. Autom. Mag. 17, 56–65 (2010).
  21. H. D. Aldridge, Kinematics and aerodynamics of the greater horseshoe bat, Rhinolophus ferrumequinum, in horizontal flight at various flight speeds. J. Exp. Biol. 126, 479–497 (1986).
  22. H. D. Aldridge, Body accelerations during the wingbeat in six bat species: The function of the upstroke in thrust generation. J. Exp. Biol. 130, 275–293 (1987).
  23. U. M. Norberg, Some advanced flight manoeuvres of bats. J. Exp. Biol. 64, 489–495 (1976).
  24. C. Chevallereau, G. Abba, Y. Aoustin, F. Plestan, E. R. Westervelt, C. Canudas-de-Wit, J. W. Grizzle, RABBIT: A testbed for advanced control theory. IEEE Control Syst. Mag. 23, 57–79 (2003).
  25. Y. P. Ivanenko, A. d’Avella, R. E. Poppele, F. Lacquaniti, On the origin of planar covariation of elevation angles during human locomotion. J. Neurophysiol. 99, 1890–1898 (2008).
  26. T. Chau, A review of analytical techniques for gait data. Part 1: Fuzzy, statistical and fractal methods. Gait Posture 13, 49–66 (2001).
  27. G. Cappellini, Y. P. Ivanenko, R. E. Poppele, F. Lacquaniti, Motor patterns in human walking and running. J. Neurophysiol. 95, 3426–3437 (2006).
  28. M. Rosén, G. Spedding, A. Hedenström, The relationship between wingbeat kinematics and vortex wake of a thrush nightingale. J. Exp. Biol. 207, 4255–4268 (2004).
  29. N. A. Bernstein, The Coordination and Regulation of Movements (Pergamon Press, 1967).
  30. J. Hoff, A. Ramezani, S.-J. Chung, S. Hutchinson, Synergistic design of a bio-inspired micro aerial vehicle with articulated wings. Proc. Robot. Sci. Syst. 10.15607/RSS.2016.XII.009 (2016).
  31. A. L. Thomas, G. K. Taylor, Animal flight dynamics I. Stability in gliding flight. J. Theor. Biol. 212, 399–424 (2001).
  32. W. Maybury, J. Rayner, L. B. Couldrick, Lift generation by the avian tail. Proc. Biol. Sci. 268, 1443–1448 (2001).
  33. J. A. Cheney, D. Ton, N. Konow, D. K. Riskin, K. S. Breuer, S. M. Swartz, Hindlimb motion during steady flight of the lesser dog-faced fruit bat, Cynopterus brachyotis. PLOS ONE 9, e98093 (2014).
  34. A. Ramezani, X. Shi, S.-J. Chung, S. Hutchinson, Bat Bot (B2), a biologically inspired flying machine, in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA) (IEEE, 2016), pp. 3219–3226.
  35. H. Tanaka, H. Okada, Y. Shimasue, H. Liu, Flexible flapping wings with self-organized microwrinkles. Bioinspir. Biomim. 10, 046005 (2015).
  36. A. Ramezani, X. Shi, S.-J. Chung, S. Hutchinson, Lagrangian modeling and flight control of articulated-winged bat robot, in Proceedings of the 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (IEEE, 2015), pp. 2867–2874.
  37. C. I. Byrnes, A. Isidori, A frequency domain philosophy for nonlinear systems, in Proceedings of the IEEE Conference on Decision and Control (IEEE, 1984), pp. 1569–1573.
  38. A. Isidori, C. Moog, On the nonlinear equivalent of the notion of transmission zeros, in Modelling and Adaptive Control, C. I. Byrnes, A. Kurzhanski, Eds. (Springer, 1988), pp. 146–158.
  39. M. Wolf, L. C. Johansson, R. von Busse, Y. Winter, A. Hedenström, Kinematics of flight and the relationship to the vortex wake of a Pallas’ long tongued bat (Glossophaga soricina). J. Exp. Biol. 213, 2142–2153 (2010).
  40. Materials and methods are available as supplementary materials at the Science website.
  41. D. K. Riskin, J. Iriarte-Díaz, K. M. Middleton, K. S. Breuer, S. M. Swartz, The effect of body size on the wing movements of pteropodid bats, with insights into thrust and lift production. J. Exp. Biol. 213, 4110–4122 (2010).
  42. J. D. Gardiner, G. Dimitriadis, J. R. Codd, R. L. Nudds, A potential role for bat tail membranes in flight control. PLOS ONE 6, e18214 (2011).
  43. J. D. DeLaurier, An aerodynamic model for flapping-wing flight. Aeronaut. J. 97, 125–130 (1993).
  44. U. M. Ascher, H. Chin, L. R. Petzold, S. Reich, Stabilization of constrained mechanical systems with DAEs and invariant manifolds. J. Struct. Mech. 23, 135–157 (1995).
  45. C. Führer, B. J. Leimkuhler, Numerical solution of differential-algebraic equations for constrained mechanical motion. Numer. Math. 59, 55–69 (1991).
  46. A. H. Nayfeh, Perturbation methods in nonlinear dynamics, in Nonlinear Dynamics Aspects of Accelerators, Lecture Notes in Physics, J. M. Jowett, M. Month, S. Turner, Eds. (Springer, 1986), pp. 238–314.
  47. M. Goman, A. Khrabrov, State-space representation of aerodynamic characteristics of an aircraft at high angles of attack. J. Aircr. 31, 1109–1115 (1994).
  48. A. A. Paranjape, S.-J. Chung, H. H. Hilton, A. Chakravarthy, Dynamics and performance of tailless micro aerial vehicle with flexible articulated wings. AIAA J. 50, 1177–1188 (2012).
  49. H. K. Khalil, J. Grizzle, Nonlinear Systems (Prentice Hall, 1996).
  50. G. Meurant, An Introduction to Differentiable Manifolds and Riemannian Geometry, vol. 120 (Academic Press, 1986).
  51. R. R. Burridge, A. A. Rizzi, D. E. Koditschek, Sequential composition of dynamically dexterous robot behaviors. Int. J. Rob. Res. 18, 534–555 (1999).
  52. J. B. Dingwell, J. P. Cusumano, Nonlinear time series analysis of normal and pathological human walking. Chaos 10, 848–863 (2000).
  53. M. S. Garcia, “Stability, scaling, and chaos in passive-dynamic gait models,” thesis, Cornell University, Ithaca, NY (1999).
  54. J. Guckenheimer, S. Johnson, International Hybrid Systems Workshop (Springer, 1994), pp. 202–225.
  55. Y. Hurmuzlu, C. Basdogan, J. J. Carollo, Presenting joint kinematics of human locomotion using phase plane portraits and Poincaré maps. J. Biomech. 27, 1495–1499 (1994).
  56. S. G. Nersesov, V. Chellaboina, W. M. Haddad, A generalization of Poincaré’s theorem to hybrid and impulsive dynamical systems, in Proceedings of the American Control Conference (IEEE, 2002), pp. 1240–1245.
  57. T. S. Parker, L. Chua, Practical Numerical Algorithms for Chaotic Systems (Springer Science & Business Media, 2012).
  58. B. Thuilot, A. Goswami, B. Espiau, Bifurcation and chaos in a simple passive bipedal gait, in Proceedings of the IEEE International Conference on Robotics and Automation (IEEE, 1997), pp. 792–798.
  59. E. R. Westervelt, J. W. Grizzle, C. Chevallereau, J. H. Choi, B. Morris, Feedback Control of Dynamic Bipedal Robot Locomotion (CRC Press, 2007).

 

Acknowledgments: 
We thank the team of graduate and undergraduate students from the aerospace, electrical, computer, and mechanical engineering departments for their contribution in constructing the prototype of B2 at the University of Illinois at Urbana-Champaign. In particular, we are indebted to Ph.D. students X. Shi (for hardware developments), J. Hoff (for wing kinematic analysis), and S. U. Ahmed (for helping with flight experiments). We extend our appreciation to our collaborators S. Swartz, K. S. Breuer, and H. Vejdani at Brown University for helping us to better understand the key mechanisms of bat flight.

Funding: 
This work was supported by NSF (grant 1427111).

Author contributions: 
A.R., S.-J.C., and S.H. designed B2. A.R., S.-J.C., and S.H. designed control experiments, analyzed, and interpreted the data. A.R. constructed B2 and designed its controller with critical feedback from S.-J.C., and S.H. A.R. performed flight experiments. All authors prepared the manuscript.

Competing interests: 
The authors declare that they have no competing interests.

Data and materials availability: 
Please contact S.-J.C. for data and other materials.

Copyright © 2017, American Association for the Advancement of Science

  Category: Robotics

Top 10 Hot Artificial Intelligence (AI) Technologies

By Hugo Angel,

The market for artificial intelligence (AI) technologies is flourishing. Beyond the hype and the heightened media attention, the numerous startups and the internet giants racing to acquire them, there is a significant increase in investment and adoption by enterprises. A Narrative Science survey found last year that 38% of enterprises are already using AI, growing to 62% by 2018. Forrester Research predicted a greater than 300% increase in investment in artificial intelligence in 2017 compared with 2016. IDC estimated that the AI market will grow from $8 billion in 2016 to more than $47 billion in 2020.

Coined in 1955 to describe a new computer science sub-discipline, “Artificial Intelligence” today includes a variety of technologies and tools, some time-tested, others relatively new. To help make sense of what’s hot and what’s not, Forrester just published a TechRadar report on Artificial Intelligence (for application development professionals), a detailed analysis of 13 technologies enterprises should consider adopting to support human decision-making.

Based on Forrester’s analysis, here’s my list of the 10 hottest AI technologies:

  1. Natural Language Generation: Producing text from computer data. Currently used in customer service, report generation, and summarizing business intelligence insights. Sample vendors:
    • Attivio,
    • Automated Insights,
    • Cambridge Semantics,
    • Digital Reasoning,
    • Lucidworks,
    • Narrative Science,
    • SAS,
    • Yseop.
  2. Speech Recognition: Transcribe and transform human speech into a format useful for computer applications. Currently used in interactive voice response systems and mobile applications. Sample vendors:
    • NICE,
    • Nuance Communications,
    • OpenText,
    • Verint Systems.
  3. Virtual Agents: “The current darling of the media,” says Forrester (I believe they refer to my evolving relationships with Alexa), from simple chatbots to advanced systems that can network with humans. Currently used in customer service and support and as a smart home manager. Sample vendors:
    • Amazon,
    • Apple,
    • Artificial Solutions,
    • Assist AI,
    • Creative Virtual,
    • Google,
    • IBM,
    • IPsoft,
    • Microsoft,
    • Satisfi.
  4. Machine Learning Platforms: Providing algorithms, APIs, development and training toolkits, data, as well as computing power to design, train, and deploy models into applications, processes, and other machines. Currently used in a wide range of enterprise applications, mostly involving prediction or classification. Sample vendors:
    • Amazon,
    • Fractal Analytics,
    • Google,
    • H2O.ai,
    • Microsoft,
    • SAS,
    • Skytree.
  5. AI-optimized Hardware: Graphics processing units (GPU) and appliances specifically designed and architected to efficiently run AI-oriented computational jobs. Currently primarily making a difference in deep learning applications. Sample vendors:
    • Alluviate,
    • Cray,
    • Google,
    • IBM,
    • Intel,
    • Nvidia.
  6. Decision Management: Engines that insert rules and logic into AI systems, used for initial setup/training and for ongoing maintenance and tuning. A mature technology, it is used in a wide variety of enterprise applications, assisting in or performing automated decision-making. Sample vendors:
    • Advanced Systems Concepts,
    • Informatica,
    • Maana,
    • Pegasystems,
    • UiPath.
  7. Deep Learning Platforms: A special type of machine learning consisting of artificial neural networks with multiple abstraction layers. Currently primarily used in pattern recognition and classification applications supported by very large data sets. Sample vendors:
    • Deep Instinct,
    • Ersatz Labs,
    • Fluid AI,
    • MathWorks,
    • Peltarion,
    • Saffron Technology,
    • Sentient Technologies.
  8. Biometrics: Enable more natural interactions between humans and machines, including but not limited to image and touch recognition, speech, and body language. Currently used primarily in market research. Sample vendors:
    • 3VR,
    • Affectiva,
    • Agnitio,
    • FaceFirst,
    • Sensory,
    • Synqera,
    • Tahzoo.
  9. Robotic Process Automation: Using scripts and other methods to automate human action to support efficient business processes. Currently used where it’s too expensive or inefficient for humans to execute a task or a process. Sample vendors:
    • Advanced Systems Concepts,
    • Automation Anywhere,
    • Blue Prism,
    • UiPath,
    • WorkFusion.
  10. Text Analytics and NLP: Natural language processing (NLP) uses and supports text analytics by facilitating the understanding of sentence structure and meaning, sentiment, and intent through statistical and machine learning methods. Currently used in fraud detection and security, a wide range of automated assistants, and applications for mining unstructured data. Sample vendors:
    • Basis Technology,
    • Coveo,
    • Expert System,
    • Indico,
    • Knime,
    • Lexalytics,
    • Linguamatics,
    • Mindbreeze,
    • Sinequa,
    • Stratifyd,
    • Synapsify.

There are certainly many business benefits gained from AI technologies today, but according to a survey Forrester conducted last year, there are also obstacles to AI adoption as expressed by companies with no plans of investing in AI:

  • There is no defined business case: 42%
  • Not clear what AI can be used for: 39%
  • Don't have the required skills: 33%
  • Need first to invest in modernizing data mgt platform: 29%
  • Don't have the budget: 23%
  • Not certain what is needed for implementing an AI system: 19%
  • AI systems are not proven: 14%
  • Do not have the right processes or governance: 13%
  • AI is a lot of hype with little substance: 11%
  • Don't own or have access to the required data: 8%
  • Not sure what AI means: 3%
Once enterprises overcome these obstacles, Forrester concludes, they stand to gain from AI driving accelerated transformation in customer-facing applications and developing an interconnected web of enterprise intelligence.


Robots Can Now Learn Just By Observing, Without Being Told What To Look For

By Hugo Angel,

Machines are getting smarter every day—and that is both good and terrifying.
[Illustrations: v_alex/iStock]
Scientists at the University of Sheffield have come up with a way for machines to learn just by looking. They don’t need to be told what to look for—they can just learn how a system works by observing it. The method is called Turing Learning and is inspired by Alan Turing’s famous test.
For a computer to learn, usually it has to be told what to look for. For instance, if you wanted to teach a robot to paint like Picasso, you’d train software to mimic real Picasso paintings. “Someone would have to tell the algorithms what is considered similar to a Picasso to begin with,” says Roderick Gross, in a news release.
Turing Learning would not require such prior knowledge, he says. It would use two computer systems, plus the original “system” you’re investigating: a shoal of fish, a Picasso painting, anything. One of the computer systems tries to copy the real-world system as closely as possible. The other computer is an observer. Its task is to watch the goings-on and try to discern which of the systems is real, and which is the copy. If it guesses right, it gets a reward. At the same time, the counterfeit system is rewarded if it fools the observer.
Proceeding like this, the counterfeit models get better and better, and the observer works out how to distinguish real from fake to a more and more accurate degree. In the end, it can not only tell real from fake, but it has also—almost as a by-product of the process—created a precise model of how the genuine system works.
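To see the flavor of this two-player setup in a few lines of code, here is a deliberately tiny sketch: a candidate model of an unknown "system" is improved purely by trying to fool an observer, while the observer keeps re-fitting itself to tell genuine samples from imitations. The scalar system, the Gaussian noise, and the hill-climbing updates are our own simplifying assumptions, not the Sheffield team's implementation.

```python
# Toy Turing-Learning-style loop: model vs. observer (illustrative assumptions throughout).
import numpy as np

rng = np.random.default_rng(0)
TRUE_PARAM, NOISE, N = 1.7, 0.5, 4096    # hidden parameter of the "real" system

def sample(param, n):
    """Observed behaviour of a system with the given parameter."""
    return param + NOISE * rng.standard_normal(n)

def observer_accuracy(real, fake):
    """Observer: re-fit a decision threshold, return how well it separates real from fake."""
    thr = 0.5 * (real.mean() + fake.mean())
    sign = 1.0 if real.mean() >= fake.mean() else -1.0
    correct = np.sum(sign * (real - thr) > 0) + np.sum(sign * (fake - thr) <= 0)
    return correct / (len(real) + len(fake))

model_param = 0.0                         # initial guess for the counterfeit system
for generation in range(300):
    real = sample(TRUE_PARAM, N)
    current = observer_accuracy(real, sample(model_param, N))
    candidate = model_param + 0.1 * rng.standard_normal()
    # keep the candidate model if it fools the observer at least as well
    if observer_accuracy(real, sample(candidate, N)) <= current:
        model_param = candidate

print(f"inferred parameter: {model_param:.2f}  (true value: {TRUE_PARAM})")
```

Even in this toy setting, the counterfeit ends up matching the hidden parameter almost as a by-product of trying to fool the observer, which is exactly the effect the researchers describe.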
The experiment is named after Alan Turing‘s famous test for artificial intelligence, which says that if a computer program can fool a human observer into believing it is a real person, then it can be considered intelligent. In reality this never really works, as a) convincing a person that you’re another person isn’t a guarantee of intelligence, and b) many computer programs have simply been designed to game the human observers.
Turing Learning, though, is actually practical. It can be used to teach robots certain behaviors, but perhaps more useful is the categorization it performs. Set a Turing Learning machine loose on a swarm of insects, for instance, and it could tease out details in the behavior of a bee colony that remain invisible to humans.
The systems can also be used to recognize abnormal behavior, without first teaching the system what constitutes abnormal behavior. The possibilities here are huge, because noticing oddities in otherwise uniform behavior is something we humans can be terrible at. Look at airport security, for example, and how often TSA agents miss guns, explosives, and other weapons.
The technique could also be used in video games to make the virtual players act more like real human players, to monitor livestock for odd behaviors that might signal health problems, and for security purposes like lie detection.
In some ways, the technology is terrifying, as computers are able to get to the very basics of how things behave. On the other hand, they still need to be told what to do with that knowledge, so at least there’s something for us puny humans to do in the world of the future.
ORIGINAL: FastCoExist
09.07.16

How a Japanese cucumber farmer is using deep learning and TensorFlow.

By Hugo Angel,

by Kaz Sato, Developer Advocate, Google Cloud Platform
August 31, 2016
It’s not hyperbole to say that use cases for machine learning and deep learning are only limited by our imaginations. About one year ago, a former embedded systems designer from the Japanese automobile industry named Makoto Koike started helping out at his parents’ cucumber farm, and was amazed by the amount of work it takes to sort cucumbers by size, shape, color and other attributes.
Makoto’s father is very proud of his thorny cucumber, for instance, having dedicated his life to delivering fresh and crispy cucumbers, with many prickles still on them. Straight and thick cucumbers with a vivid color and lots of prickles are considered premium grade and command much higher prices on the market.
But Makoto learned very quickly that sorting cucumbers is as hard and tricky as actually growing them. "Each cucumber has different color, shape, quality and freshness," Makoto says.
Cucumbers from retail stores
Cucumbers from Makoto’s farm
In Japan, each farm has its own classification standard and there’s no industry standard. At Makoto’s farm, they sort them into nine different classes, and his mother sorts them all herself — spending up to eight hours per day at peak harvesting times.
"The sorting work is not an easy task to learn. You have to look at not only the size and thickness, but also the color, texture, small scratches, whether or not they are crooked and whether they have prickles. It takes months to learn the system and you can't just hire part-time workers during the busiest period. I myself only recently learned to sort cucumbers well," Makoto said.
Distorted or crooked cucumbers are ranked as low-quality product
There are also some automatic sorters on the market, but they have limitations in terms of performance and cost, and small farms don’t tend to use them.
Makoto doesn't think sorting is an essential task for cucumber farmers. "Farmers want to focus and spend their time on growing delicious vegetables. I'd like to automate the sorting tasks before taking the farm business over from my parents."
Makoto Koike, center, with his parents at the family cucumber farm
The many uses of deep learning
Makoto first got the idea to explore machine learning for sorting cucumbers from a completely different use case: Google AlphaGo competing with the world’s top professional Go player.
"When I saw Google's AlphaGo, I realized something really serious is happening here," said Makoto. "That was the trigger for me to start developing the cucumber sorter with deep learning technology."
Using deep learning for image recognition allows a computer to learn from a training data set what the important “features” of the images are. By using a hierarchy of numerous artificial neurons, deep learning can automatically classify images with a high degree of accuracy. Thus, neural networks can recognize different species of cats, or models of cars or airplanes from images. Sometimes neural networks can exceed the performance of the human eye for certain applications. (For more information, check out my previous blog post Understanding neural networks with TensorFlow Playground.)

TensorFlow democratizes the power of deep learning
But can computers really learn mom’s art of cucumber sorting? Makoto set out to see whether he could use deep learning technology for sorting using Google’s open source machine learning library, TensorFlow.
"Google had just open sourced TensorFlow, so I started trying it out with images of my cucumbers," Makoto said. "This was the first time I tried out machine learning or deep learning technology, and right away got much higher accuracy than I expected. That gave me the confidence that it could solve my problem."
With TensorFlow, you don’t need to be knowledgeable about the advanced math models and optimization algorithms needed to implement deep neural networks. Just download the sample code and read the tutorials and you can get started in no time. The library lowers the barrier to entry for machine learning significantly, and since Google open-sourced TensorFlow last November, many “non ML” engineers have started playing with the technology with their own datasets and applications.

Cucumber sorting system design
Here's a systems diagram of the cucumber sorter that Makoto built. The system uses a Raspberry Pi 3 as the main controller to take images of the cucumbers with a camera, and

  • in a first phase, runs a small-scale neural network on TensorFlow to detect whether or not the image is of a cucumber, and
  • then forwards the image to a larger TensorFlow neural network running on a Linux server to perform a more detailed classification.
Systems diagram of the cucumber sorter
Makoto used the sample TensorFlow code Deep MNIST for Experts with minor modifications to the convolution, pooling and last layers, changing the network design to adapt to the pixel format of cucumber images and the number of cucumber classes.
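For readers who want a feel for what such a network looks like, here is a minimal sketch of a small CNN sized for 80 x 80 cucumber images and nine quality grades, the figures mentioned elsewhere in this article. It uses today's tf.keras API rather than the original "Deep MNIST for Experts" script, so the layer sizes and training setup are illustrative assumptions, not Makoto's actual code.

```python
# Small CNN for 80x80 cucumber images in 9 classes (illustrative sketch, not Makoto's code).
import tensorflow as tf

IMG_SIZE = 80       # 80 x 80 pixel images, as described in this article
NUM_CLASSES = 9     # nine quality grades used on the farm

model = tf.keras.Sequential([
    tf.keras.Input(shape=(IMG_SIZE, IMG_SIZE, 3)),
    tf.keras.layers.Conv2D(32, 5, padding="same", activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 5, padding="same", activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(1024, activation="relu"),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_images, train_labels, epochs=10, validation_split=0.1)
```

A first-stage "is this a cucumber at all?" detector on the Raspberry Pi can be an even smaller network of the same shape, with the heavier model above running on the Linux server.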
Here’s Makoto’s cucumber sorter, which went live in July:
Here’s a close-up of the sorting arm, and the camera interface:

And here is the cucumber sorter in action:

Pushing the limits of deep learning
One of the current challenges with deep learning is that you need to have a large number of training datasets. To train the model, Makoto spent about three months taking 7,000 pictures of cucumbers sorted by his mother, but it’s probably not enough.
"When I did a validation with the test images, the recognition accuracy exceeded 95%. But if you apply the system with real use cases, the accuracy drops down to about 70%. I suspect the neural network model has the issue of 'overfitting' (the phenomenon in neural networks where the model is trained to fit only the small training dataset) because of the insufficient number of training images."
The second challenge of deep learning is that it consumes a lot of computing power. The current sorter uses a typical Windows desktop PC to train the neural network model. Although it converts the cucumber image into 80 x 80 pixel low-resolution images, it still takes two to three days to complete training the model with 7,000 images.
"Even with this low-res image, the system can only classify a cucumber based on its shape, length and level of distortion. It can't recognize color, texture, scratches and prickles," Makoto explained. Increasing image resolution by zooming into the cucumber would result in much higher accuracy, but would also increase the training time significantly.
To improve deep learning, some large enterprises have started doing large-scale distributed training, but those servers come at an enormous cost. Google offers Cloud Machine Learning (Cloud ML), a low-cost cloud platform for training and prediction that dedicates hundreds of cloud servers to training a network with TensorFlow. With Cloud ML, Google handles building a large-scale cluster for distributed training, and you just pay for what you use, making it easier for developers to try out deep learning without making a significant capital investment.
These specialized servers were used in the AlphaGo match
Makoto is eagerly awaiting Cloud ML. "I could use Cloud ML to try training the model with much higher resolution images and more training data. Also, I could try changing the various configurations, parameters and algorithms of the neural network to see how that improves accuracy. I can't wait to try it."

Inside Vicarious, the Secretive AI Startup Bringing Imagination to Computers

By Hugo Angel,

By reinventing the neural network, the company hopes to help computers make the leap from processing words and symbols to comprehending the real world.
Life would be pretty dull without imagination. In fact, maybe the biggest problem for computers is that they don’t have any.
That’s the belief motivating the founders of Vicarious, an enigmatic AI company backed by some of the most famous and successful names in Silicon Valley. Vicarious is developing a new way of processing data, inspired by the way information seems to flow through the brain. The company’s leaders say this gives computers something akin to imagination, which they hope will help make the machines a lot smarter.
Vicarious is also, essentially, betting against the current boom in AI. Companies including Google, Facebook, Amazon, and Microsoft have made stunning progress in the past few years by feeding huge quantities of data into large neural networks in a process called “deep learning.” When trained on enough examples, for instance, deep-learning systems can learn to recognize a particular face or type of animal with very high accuracy (see “10 Breakthrough Technologies 2013: Deep Learning”). But those neural networks are only very crude approximations of what’s found inside a real brain.
Illustration by Sophia Foster-Dimino
Vicarious has introduced a new kind of neural-network algorithm designed to take into account more of the features that appear in biology. An important one is the ability to picture what the information it’s learned should look like in different scenarios—a kind of artificial imagination. The company’s founders believe a fundamentally different design will be essential if machines are to demonstrate more human like intelligence. Computers will have to be able to learn from less data, and to recognize stimuli or concepts more easily.
Despite generating plenty of early excitement, Vicarious has been quiet over the past couple of years. But this year, the company says, it will publish details of its research, and it promises some eye-popping demos that will show just how useful a computer with an imagination could be.
The company’s headquarters don’t exactly seem like the epicenter of a revolution in artificial intelligence. Located in Union City, a short drive across the San Francisco Bay from Palo Alto, the offices are plain—a stone’s throw from a McDonald’s and a couple of floors up from a dentist. Inside, though, are all the trappings of a vibrant high-tech startup. A dozen or so engineers were hard at work when I visited, several using impressive treadmill desks. Microsoft Kinect 3-D sensors sat on top of some of the engineers’ desks.
D. Scott Phoenix, the company's 33-year-old CEO, speaks in suitably grandiose terms. "We are really rapidly approaching the amount of computational power we need to be able to do some interesting things in AI," he told me shortly after I walked through the door. "In 15 years, the fastest computer will do more operations per second than all the neurons in all the brains of all the people who are alive. So we are really close."
Vicarious is about more than just harnessing more computer power, though. Its mathematical innovations, Phoenix says, will more faithfully mimic the information processing found in the human brain. It’s true enough that the relationship between the neural networks currently used in AI and the neurons, dendrites, and synapses found in a real brain is tenuous at best.
One of the most glaring shortcomings of artificial neural networks, Phoenix says, is that information flows only one way. "If you look at the information flow in a classic neural network, it's a feed-forward architecture," he says. "There are actually more feedback connections in the brain than feed-forward connections, so you're missing more than half of the information flow."
It’s undeniably alluring to think that imagination—a capability so fundamentally human it sounds almost mystical in a computer—could be the key to the next big advance in AI.
Vicarious has so far shown that its approach can create a visual system capable of surprisingly deft interpretation. In 2013 it showed that the system could solve any captcha (the visual puzzles that are used to prevent spam-bots from signing up for e-mail accounts and the like). As Phoenix explains it, the feedback mechanism built into Vicarious’s system allows it to imagine what a character would look like if it weren’t distorted or partly obscured (see “AI Startup Says It Has Defeated Captchas”).
Phoenix sketched out some of the details of the system at the heart of this approach on a whiteboard. But he is keeping further details quiet until a scientific paper outlining the captcha approach is published later this year.
In principle, this visual system could be put to many other practical uses, like recognizing objects on shelves more accurately or interpreting real-world scenes more intelligently. The founders of Vicarious also say that their approach extends to other, much more complex areas of intelligence, including language and logical reasoning.
Phoenix says his company may give a demo later this year involving robots. And indeed, the job listings on the company's website include several postings for robotics experts. Currently robots are bad at picking up unfamiliar, oddly arranged, or partly obscured objects, because they have trouble recognizing what they are. "If you look at people who are picking up objects in an Amazon facility, most of the time they aren't even looking at what they're doing," he explains. "And they're imagining—using their sensory motor simulator—where the object is, and they're imagining at what point their finger will touch it."
While Phoenix is the company’s leader, his cofounder, Dileep George, might be considered its technical visionary. George was born in India and received a PhD in electrical engineering from Stanford University, where he turned his attention to neuroscience toward the end of his doctoral studies. In 2005 he cofounded Numenta with Jeff Hawkins, the creator of Palm Computing. But in 2010 George left to pursue his own ideas about the mathematical principles behind information processing in the brain, founding Vicarious with Phoenix the same year.
I bumped into George in the elevator when I first arrived. He is unassuming and speaks quietly, with a thick accent. But he’s also quite matter-of-fact about what seem like very grand objectives.
George explained that imagination could help computers process language by tying words, or symbols, to low-level physical representations of real-world things. In theory, such a system might automatically understand the physical properties of something like water, for example, which would make it better able to discuss the weather. “When I utter a word, you know what it means because you can simulate the concept,” he says.
This ambitious vision for the future of AI has helped Vicarious raise an impressive $72 million so far. Its list of investors also reads like a who’s who of the tech world. Early cash came from Dustin Moskovitz, ex-CTO of Facebook, and Adam D’Angelo, cofounder of Quora. Further funding came from Peter Thiel, Mark Zuckerberg, Jeff Bezos, and Elon Musk.
Many people are itching to see what Vicarious has done beyond beating captchas. “I would love it if they showed us something new this year,” says Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence in Seattle.
In contrast to the likes of Google, Facebook, or Baidu, Vicarious hasn't published any papers or released any tools that researchers can play with. "The people [involved] are great, and the problems [they are working on] are great," says Etzioni. "But it's time to deliver."
For those who've put their money behind Vicarious, the company's remarkable goals should make the wait well worth it. Even if progress takes a while, the potential payoffs seem so huge that the bet makes sense, says Matt Ocko, a partner at Data Collective, a venture firm that has backed Vicarious. A better machine-learning approach could be applied in just about any industry that handles large amounts of data, he says. "Vicarious sat us down and demonstrated the most credible pathway to reasoning machines that I have ever seen."
Ocko adds that Vicarious has demonstrated clear evidence it can commercialize what it’s working on. “We approached it with a crapload of intellectual rigor,” he says.
It will certainly be interesting to see if Vicarious can inspire this kind of confidence among other AI researchers and technologists with its papers and demos this year. If it does, then the company could quickly go from one of the hottest prospects in the Valley to one of its fastest-growing businesses.
That’s something the company’s founders would certainly like to imagine.
ORIGINAL: MIT Tech Review
by Will Knight. Senior Editor, AI
May 19, 2016

NVIDIA DRIVE PX 2. NVIDIA Accelerates Race to Autonomous Driving at CES 2016

By Hugo Angel,

NVIDIA today shifted its autonomous-driving leadership into high gear.
At a press event kicking off CES 2016, we unveiled artificial-intelligence technology that will let cars sense the world around them and pilot a safe route forward.
Dressed in his trademark black leather jacket, speaking to a crowd of some 400 automakers, media and analysts, NVIDIA CEO Jen-Hsun Huang revealed DRIVE PX 2, an automotive supercomputing platform that processes 24 trillion deep learning operations a second. That’s 10 times the performance of the first-generation DRIVE PX, now being used by more than 50 companies in the automotive world.
The new DRIVE PX 2 delivers 8 teraflops of processing power. It has the processing power of 150 MacBook Pros. And it's the size of a lunchbox, in contrast to earlier autonomous-driving technology being used today, which takes up the entire trunk of a mid-sized sedan.
"Self-driving cars will revolutionize society," Huang said at the beginning of his talk. "And NVIDIA's vision is to enable them."
 
Volvo to Deploy DRIVE PX in Self-Driving SUVs
As part of its quest to eliminate traffic fatalities, Volvo will be the first automaker to deploy DRIVE PX 2.
Huang announced that Volvo – known worldwide for safety and reliability – will be the first automaker to deploy DRIVE PX 2.
In the world’s first public trial of autonomous driving, the Swedish automaker next year will lease 100 XC90 luxury SUVs outfitted with DRIVE PX 2 technology. The technology will help the vehicles drive autonomously around Volvo’s hometown of Gothenburg, and semi-autonomously elsewhere.
DRIVE PX 2 has the power to harness a host of sensors to get a 360 degree view of the environment around the car.
"The rear-view mirror is history," Jen-Hsun said.
Drive Safely, by Not Driving at All
Not so long ago, pundits had questioned the safety of technology in cars. Now, with Volvo incorporating autonomous vehicles into its plan to end traffic fatalities, that script has been flipped. Autonomous cars may be vastly safer than human-piloted vehicles.
Car crashes – an estimated 93 percent of them caused by human error – kill 1.3 million drivers each year. More American teenagers die from texting while driving than any other cause, including drunk driving.
There’s also a productivity issue. Americans waste some 5.5 billion hours of time each year in traffic, costing the U.S. about $121 billion, according to an Urban Mobility Report from Texas A&M. And inefficient use of roads by cars wastes even vaster sums spent on infrastructure.
Deep Learning Hits the Road
Self-driving solutions based on computer vision can provide some answers. But tackling the infinite permutations that a driver needs to react to – stray pets, swerving cars, slashing rain, steady road construction crews – is far too complex a programming challenge.
Deep learning enabled by NVIDIA technology can address these challenges. A highly trained deep neural network – residing on supercomputers in the cloud – captures the experience of many tens of thousands of hours of road time.
Huang noted that a number of automotive companies are already using NVIDIA's deep learning technology to power their efforts, getting a speedup of 30-40X in training their networks compared with other technology. BMW, Daimler and Ford are among them, along with innovative Japanese startups like Preferred Networks and ZMP. And Audi said it was able in four hours to do training that took it two years with a competing solution.
  NVIDIA DRIVE PX 2 is part of an end-to-end platform that brings deep learning to the road.
NVIDIA’s end-to-end solution for deep learning starts with NVIDIA DIGITS, a supercomputer that can be used to train digital neural networks by exposing them to data collected during that time on the road. On the other end is DRIVE PX 2, which draws on this training to make inferences to enable the car to progress safely down the road. In the middle is NVIDIA DriveWorks, a suite of software tools, libraries and modules that accelerates development and testing of autonomous vehicles.
DriveWorks enables sensor calibration, acquisition of surround data, synchronization, recording and then processing streams of sensor data through a complex pipeline of algorithms running on all of the DRIVE PX 2’s specialized and general-purpose processors.
During the event, Huang reminded the audience that machines are already beating humans at tasks once considered impossible for computers, such as image recognition. Systems trained with deep learning can now correctly classify images more than 96 percent of the time, exceeding what humans can do on similar tasks.
He used the event to show what deep learning can do for autonomous vehicles.
A series of demos drove this home, showing in three steps how DRIVE PX 2 harnesses a host of sensors – lidar, radar and cameras and ultrasonic – to understand the world around it, in real time, and plan a safe and efficient path forward.
The World’s Biggest Infotainment System
 
The highlight of the demos was what Huang called the world’s largest car infotainment system — an elegant block the size of a medium-sized bedroom wall mounted with a long horizontal screen and a long vertical one.
While a third, larger screen showed the scene that a driver would take in, the wide demo screen showed how the car — using deep learning and sensor fusion — “viewed” the very same scene in real time, stitched together from its array of sensors. On its right, the huge portrait-oriented screen showed a highly precise map that marked the car's progress.
It's a demo that will leave an impression on an audience that's going to hear a lot about the future of driving in the week ahead.
Photos from Our CES 2016 Press Event
NVIDIA Drive PX-2
ORIGINAL: Nvidia
By Bob Sherbin on January 3, 2016

Robots are learning from YouTube tutorials

By Hugo Angel,

Do it yourself, robot. (Reuters/Kim Kyung-Hoon)
For better or worse, we’ve taught robots to mimic human behavior in countless ways. They can perform tasks as rudimentary as picking up objects, or as creative as dreaming their own dreams. They can identify bullying, and even play jazz. Now, we’ve taught robots the most human task of all: how to teach themselves to make Jell-O shots from watching YouTube videos.
Ever go to YouTube and type in something like, “How to make pancakes,” or, “How to mount a TV”? Sure you have. While many such tutorials are awful—and some are just deliberately misleading—the sheer number of instructional videos offers strong odds of finding one that’s genuinely helpful. And when all those videos are aggregated and analyzed simultaneously, it’s not hard for a robot to figure out what the correct steps are.

Researchers at Cornell University have taught robots to do just that with a system called RoboWatch. By watching and scanning multiple videos of the same “how-to” activity (with subtitles enabled), bots can 

  • identify common steps, 
  • put them in order, and 
  • learn how to do whatever the tutorials are teaching.
Robot learning is not new, but what’s unusual here is that these robots can learn without human supervision, as Phys.Org points out.
Similar research usually requires human overseers to introduce and explain words, or captions, for the robots to parse. RoboWatch, however, needs no human help, save that someone ensures all the videos analyzed fall into a single category (pdf). The idea is that a human could one day tell a robot to perform a task and then the robot would independently research and learn how to carry out that task.
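As a toy sketch of that idea (not the RoboWatch system itself), the snippet below takes subtitle transcripts from several videos of the same how-to task, keeps the step phrases that recur across most of them, and orders the steps by their average position; the tiny hand-written transcripts are invented purely for illustration.

```python
# Toy "common steps from several tutorials" sketch; the transcripts are invented examples.
from collections import defaultdict

transcripts = [
    ["welcome to my channel", "crack the eggs", "whisk the batter", "heat the pan", "flip the pancake"],
    ["crack the eggs", "add some milk", "whisk the batter", "heat the pan", "flip the pancake"],
    ["hello everyone", "crack the eggs", "whisk the batter", "heat the pan", "flip the pancake", "like and subscribe"],
]

positions = defaultdict(list)
for video in transcripts:
    for index, phrase in enumerate(video):
        positions[phrase].append(index / (len(video) - 1))   # normalized position within the video

# keep phrases seen in most videos (the shared steps); one-off chatter drops out
min_support = len(transcripts) - 1
steps = [p for p in positions if len(positions[p]) >= min_support]
steps.sort(key=lambda p: sum(positions[p]) / len(positions[p]))

for i, step in enumerate(steps, 1):
    print(f"step {i}: {step}")
```

The real system works on raw video and subtitles rather than clean phrase lists, but the underlying intuition is the same: what is common across many videos is probably a step, and what appears in only one is probably noise.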
So next time you're getting frustrated watching a video on how to change a tire, don't fret. Soon, a robot will do all that for you. We just have to make sure it doesn't watch any videos about "how to take over the world."
ORIGINAL: QZ
December 22, 2015

Robotic insect mimics Nature’s extreme moves

By Hugo Angel,

An international team of Seoul National University and Harvard researchers looked to water strider insects to develop robots that jump off water’s surface
(SEOUL and BOSTON) — The concept of walking on water might sound supernatural, but in fact it is a quite natural phenomenon. Many small living creatures leverage water’s surface tension to maneuver themselves around. One of the most complex maneuvers, jumping on water, is achieved by a species of semi-aquatic insects called water striders that not only skim along water’s surface but also generate enough upward thrust with their legs to launch themselves airborne from it.


In this video, watch how novel robotic insects developed by a team of Seoul National University and Harvard scientists can jump directly off water’s surface. The robots emulate the natural locomotion of water strider insects, which skim on and jump off the surface of water. Credit: Wyss Institute at Harvard University
Now, emulating this natural form of water-based locomotion, an international team of scientists from Seoul National University, Korea (SNU), Harvard’s Wyss Institute for Biologically Inspired Engineering, and the Harvard John A. Paulson School of Engineering and Applied Sciences, has unveiled a novel robotic insect that can jump off of water’s surface. In doing so, they have revealed new insights into the natural mechanics that allow water striders to jump from rigid ground or fluid water with the same amount of power and height. The work is reported in the July 31 issue of Science.
"Water's surface needs to be pressed at the right speed for an adequate amount of time, up to a certain depth, in order to achieve jumping," said the study's co-senior author Kyu Jin Cho, Associate Professor in the Department of Mechanical and Aerospace Engineering and Director of the Biorobotics Laboratory at Seoul National University. "The water strider is capable of doing all these things flawlessly."
The water strider, whose legs have slightly curved tips, employs a rotational leg movement to aid its takeoff from the water's surface, discovered co-senior author Ho-Young Kim, who is Professor in SNU's Department of Mechanical and Aerospace Engineering and Director of SNU's Micro Fluid Mechanics Lab. Kim, a former Wyss Institute Visiting Scholar, worked with the study's co-first author Eunjin Yang, a graduate researcher at SNU's Micro Fluid Mechanics lab, to collect water striders and take extensive videos of their movements to analyze the mechanics that enable the insects to skim on and jump off water's surface.
It took the team several trial and error attempts to fully understand the mechanics of the water strider, using robotic prototypes to test and shape their hypotheses.
"If you apply as much force as quickly as possible on water, the limbs will break through the surface and you won't get anywhere," said Robert Wood, Ph.D., who is a co-author on the study, a Wyss Institute Core Faculty member, the Charles River Professor of Engineering and Applied Sciences at the Harvard Paulson School, and founder of the Harvard Microrobotics Lab.
But by studying water striders in comparison to iterative prototypes of their robotic insect, the SNU and Harvard team discovered that the best way to jump off of water is to maintain leg contact on the water for as long as possible during the jump motion.
"Using its legs to push down on water, the natural water strider exerts the maximum amount of force just below the threshold that would break the water's surface," said the study's co-first author Je-Sung Koh, Ph.D., who was pursuing his doctoral degree at SNU during the majority of this research and is now a Postdoctoral Fellow at the Wyss Institute and the Harvard Paulson School.
Mimicking these mechanics, the robotic insect built by the team can exert up to 16 times its own body weight on the water’s surface without breaking through, and can do so without complicated controls. Many natural organisms such as the water strider can perform extreme styles of locomotion – such as flying, floating, swimming, or jumping on water – with great ease despite a lack of complex cognitive skills.
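A rough back-of-envelope check, our own illustration rather than a result from the paper, shows why leg contact length matters: the surface-tension force water can supply before the surface breaks scales with the length of the contact line, roughly F ≈ σ·L. Given an assumed total leg contact length, that bounds how heavy a robot can be if, as quoted above, it presses with up to 16 times its weight.

```python
# Back-of-envelope estimate; sigma is a textbook value, the contact length is an assumption.
sigma = 0.072            # surface tension of water at room temperature, N/m
contact_length = 0.16    # assumed total wetted leg length across all legs, m
force_ratio = 16         # the robot pushes with up to 16x its body weight (from the article)
g = 9.81

f_max = sigma * contact_length             # maximum upward force before the surface breaks
max_mass = f_max / (force_ratio * g)       # heaviest robot that could still stay on the surface
print(f"max surface-tension force: {f_max * 1e3:.1f} mN")
print(f"implied mass budget:       {max_mass * 1e6:.0f} mg")
```

With those assumed numbers the mass budget comes out in the tens of milligrams, which is why robots in this class have to be so extraordinarily light.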

From left, Seoul National University (SNU) professors Ho-Young Kim, Ph.D., and Kyu Jin Cho, Ph.D., observe the semi-aquatic jumping robotic insects developed by an SNU and Harvard team. Credit: Seoul National University.
"This is due to their natural morphology," said Cho. "It is a form of embodied or physical intelligence, and we can learn from this kind of physical intelligence to build robots that are similarly capable of performing extreme maneuvers without highly complex controls or artificial intelligence."
The robotic insect was built using a “torque reversal catapult mechanism” inspired by the way a flea jumps, which allows this kind of extreme locomotion without intelligent control. It was first reported by Cho, Wood and Koh in 2013 in the International Conference on Intelligent Robots and Systems.
For the robotic insect to jump off water, the lightweight catapult mechanism uses a burst of momentum coupled with limited thrust to propel the robot off the water without breaking the water’s surface. An automatic triggering mechanism, built from composite materials and actuators, was employed to activate the catapult.
To produce the body of the robotic insect, “pop-up” manufacturing was used to create folded composite structures that self-assemble much like the foldable components that “pop–up” in 3D books. Devised by engineers at the Harvard Paulson School and the Wyss Institute, this ingenious layering and folding process enables the rapid fabrication of microrobots and a broad range of electromechanical devices.
"The resulting robotic insects can achieve the same momentum and height that could be generated during a rapid jump on firm ground – but instead can do so on water – by spreading out the jumping thrust over a longer amount of time and sustaining prolonged contact with the water's surface," said Wood.
"This international collaboration of biologists and roboticists has not only looked into nature to develop a novel, semi-aquatic bioinspired robot that performs a new extreme form of robotic locomotion, but has also provided us with new insights on the natural mechanics at play in water striders," said Wyss Institute Founding Director Donald Ingber, M.D., Ph.D.
Additional co–authors of the study include Gwang–Pil Jung, a Ph.D. candidate in SNU’s Biorobotics Laboratory; Sun–Pill Jung, an M.S. candidate in SNU’s Biorobotics Laboratory; Jae Hak Son, who earned his Ph.D. in SNU’s Laboratory of Behavioral Ecology and Evolution; Sang–Im Lee, Ph.D., who is Research Associate Professor at SNU’s Institute of Advanced Machines and Design and Adjunct Research Professor at the SNU’s Laboratory of Behavioral Ecology and Evolution; and Piotr Jablonski, Ph.D., who is Professor in SNU’s Laboratory of Behavioral Ecology and Evolution.
This work was supported by the National Research Foundation of Korea, Bio–Mimetic Robot Research Center funding from the Defense Acquisition Program Administration, and the Wyss Institute for Biologically Inspired Engineering at Harvard University.
IMAGE AND VIDEO AVAILABLE
###
PRESS CONTACTS
Seoul National University College of Engineering
Kyu Jin Cho, [email protected], +82 10-5616-1703
Wyss Institute for Biologically Inspired Engineering at Harvard University
Kat J. McAlpine, [email protected], +1 617-432-8266
Harvard University John A. Paulson School of Engineering and Applied Sciences
Leah Burrows, [email protected], +1 617-496-1351
The Seoul National University College of Engineering (SNU CE) (http://eng.snu.ac.kr/english/index.php) aims to foster leaders in global industry and society. In CE, professors from all over the world are applying their passion for education and research. Graduates of the college are taking on important roles in society as the CEOs of conglomerates, founders of venture businesses, and prominent engineers, contributing to the country’s industrial development. Globalization is the trend of a new era, and engineering in particular is a field of boundless competition and cooperation. The role of engineers is crucial to our 21st century knowledge and information society, and engineers contribute to the continuous development of Korea toward a central role on the world stage. CE, which provides enhanced curricula in a variety of major fields, has now become the environment in which future global leaders are cultivated.
The Wyss Institute for Biologically Inspired Engineering at Harvard University (http://wyss.harvard.edu) uses Nature’s design principles to develop bioinspired materials and devices that will transform medicine and create a more sustainable world. Wyss researchers are developing innovative new engineering solutions for healthcare, energy, architecture, robotics, and manufacturing that are translated into commercial products and therapies through collaborations with clinical investigators, corporate alliances, and formation of new start–ups. The Wyss Institute creates transformative technological breakthroughs by engaging in high risk research, and crosses disciplinary and institutional barriers, working as an alliance that includes Harvard’s Schools of Medicine, Engineering, Arts & Sciences and Design, and in partnership with Beth Israel Deaconess Medical Center, Brigham and Women’s Hospital, Boston Children’s Hospital, Dana–Farber Cancer Institute, Massachusetts General Hospital, the University of Massachusetts Medical School, Spaulding Rehabilitation Hospital, Boston University, Tufts University, and Charité – Universitätsmedizin Berlin, University of Zurich and Massachusetts Institute of Technology.
The Harvard University John A. Paulson School of Engineering and Applied Sciences (http://seas.harvard.edu) serves as the connector and integrator of Harvard’s teaching and research efforts in engineering, applied sciences, and technology. Through collaboration with researchers from all parts of Harvard, other universities, and corporate and foundational partners, we bring discovery and innovation directly to bear on improving human life and society.
ORIGINAL: Wyss Institute
Jul 30, 2015

Neurotechnology Provides Near-Natural Sense of Touch

By admin,

Revolutionizing Prosthetics program achieves goal of restoring sensation

Modular Prosthetic Limb courtesy of the Johns Hopkins University

A 28-year-old who has been paralyzed for more than a decade as a result of a spinal cord injury has become the first person to be able to “feel” physical sensations through a prosthetic hand directly connected to his brain, and even identify which mechanical finger is being gently touched.
The advance, made possible by sophisticated neural technologies developed under DARPA’s Revolutionizing Prosthetics program, points to a future in which people living with paralyzed or missing limbs will not only be able to manipulate objects by sending signals from their brain to robotic devices, but also be able to sense precisely what those devices are touching.
“We’ve completed the circuit,” said DARPA program manager Justin Sanchez. “Prosthetic limbs that can be controlled by thoughts are showing great promise, but without feedback from signals traveling back to the brain it can be difficult to achieve the level of control needed to perform precise movements. By wiring a sense of touch from a mechanical hand directly into the brain, this work shows the potential for seamless bio-technological restoration of near-natural function.”
The clinical work involved the placement of electrode arrays onto the paralyzed volunteer’s sensory cortex—the brain region responsible for identifying tactile sensations such as pressure. In addition, the team placed arrays on the volunteer’s motor cortex, the part of the brain that directs body movements.
Wires were run from the arrays on the motor cortex to a mechanical hand developed by the Applied Physics Laboratory (APL) at Johns Hopkins University. That gave the volunteer—whose identity is being withheld to protect his privacy—the capacity to control the hand’s movements with his thoughts, a feat previously accomplished under the DARPA program by another person with similar injuries.

Then, breaking new neurotechnological ground, the researchers went on to provide the volunteer a sense of touch. The APL hand contains sophisticated torque sensors that can detect when pressure is being applied to any of its fingers, and can convert those physical “sensations” into electrical signals. The team used wires to route those signals to the arrays on the volunteer’s brain.
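The feedback path described above, in which pressure sensed at a mechanical finger is translated into stimulation delivered through electrodes over the corresponding patch of sensory cortex, can be illustrated with a minimal conceptual sketch in Python. Everything here, including the finger-to-electrode mapping, the pressure scale, and the amplitude ceiling, is an illustrative assumption; the actual APL/DARPA signal chain is not described in this article.

```python
# Conceptual sketch only: how per-finger pressure readings might be mapped to
# per-electrode stimulation amplitudes. All names, scales, and mappings are
# illustrative assumptions, not the APL/DARPA implementation.

FINGERS = ["thumb", "index", "middle", "ring", "little"]

# Hypothetical one-to-one mapping from finger to a stimulation channel.
ELECTRODE_FOR_FINGER = {finger: channel for channel, finger in enumerate(FINGERS)}

def pressure_to_stimulation(pressures, max_pressure=10.0, max_amplitude=1.0):
    """Convert per-finger pressure readings (arbitrary units) into per-channel
    stimulation amplitudes, clipped to a safe ceiling."""
    commands = {}
    for finger, pressure in pressures.items():
        channel = ELECTRODE_FOR_FINGER[finger]
        amplitude = min(pressure / max_pressure, 1.0) * max_amplitude
        commands[channel] = amplitude
    return commands

# Example: a gentle touch on the index finger produces a small stimulation
# command on channel 1 and nothing elsewhere.
print(pressure_to_stimulation({"index": 2.5}))  # {1: 0.25}
```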

In the very first set of tests, in which researchers gently touched each of the prosthetic hand’s fingers while the volunteer was blindfolded, he was able to report with nearly 100 percent accuracy which mechanical finger was being touched. The feeling, he reported, was as if his own hand were being touched.
“At one point, instead of pressing one finger, the team decided to press two without telling him,” said Sanchez, who oversees the Revolutionizing Prosthetics program. “He responded in jest asking whether somebody was trying to play a trick on him. That is when we knew that the feelings he was perceiving through the robotic hand were near-natural.”
Sanchez described the basic findings on Thursday at Wait, What? A Future Technology Forum, hosted by DARPA in St. Louis. Further details about the work are being withheld pending peer review and acceptance for publication in a scientific journal.
The restoration of sensation with implanted neural arrays is one of several neurotechnology-based advances emerging from DARPA’s 18-month-old Biological Technologies Office, Sanchez said. “DARPA’s investments in neurotechnologies are helping to open entirely new worlds of function and experience for individuals living with paralysis and have the potential to benefit people with similarly debilitating brain injuries or diseases,” he said.

In addition to the Revolutionizing Prosthetics program that focuses on restoring movement and sensation, DARPA’s portfolio of neurotechnology programs includes efforts that seek to develop closed-loop direct interfaces to the brain to restore function to individuals living with memory loss from traumatic brain injury or complex neuropsychiatric illness.

For more information about Wait, What? please visit: www.darpawaitwhat.com
ORIGINAL: DARPA
9/11/2015

It’s No Myth: Robots and Artificial Intelligence Will Erase Jobs in Nearly Every Industry

By admin,

With the unemployment rate falling to 5.3 percent, the lowest in seven years, policy makers are heaving a sigh of relief. Indeed, with the technology boom in progress, there is a lot to be optimistic about.

  • Manufacturing will be returning to U.S. shores with robots doing the job of Chinese workers; 
  • American carmakers will be mass-producing self-driving electric vehicles; 
  • Technology companies will develop medical devices that greatly improve health and longevity; 
  • We will have unlimited clean energy and 3D print our daily needs. 

The cost of all of these things will plummet and make it possible to provide for the basic needs of every human being.

I am talking about technology advances that are happening now, which will bear fruit in the 2020s.
But policy makers will have a big new problem to deal with: the disappearance of human jobs. Not only will there be fewer jobs for people doing manual work, the jobs of knowledge workers will also be replaced by computers. Almost every industry and profession will be impacted and this will create a new set of social problems — because most people can’t adapt to such dramatic change.
If we can develop the economic structures necessary to distribute the prosperity we are creating, most people will no longer have to work to sustain themselves. They will be free to pursue other creative endeavors. The problem, however, is that without jobs, they will not have the dignity, social engagement, and sense of fulfillment that comes from work. The life, liberty and pursuit of happiness that the constitution entitles us to won’t be through labor, it will have to be through other means.
It is imperative that we understand the changes that are happening and find ways to cushion the impacts.
The technology elite who are leading this revolution will reassure you that there is nothing to worry about because we will create new jobs just as we did in previous centuries when the economy transitioned from agrarian to industrial to knowledge-based. Tech mogul Marc Andreessen has called the notion of a jobless future a “Luddite fallacy,” referring to past fears that machines would take human jobs away. Those fears turned out to be unfounded because we created newer and better jobs and were much better off.
True, we are living better lives. But what is missing from these arguments is the timeframe over which the transitions occurred. The industrial revolution unfolded over centuries. Today’s technology revolutions are happening within years. We will surely create a few intellectually-challenging jobs, but we won’t be able to retrain the workers who lose today’s jobs. They will experience the same unemployment and despair that their forefathers did. It is they who we need to worry about.
The first large wave of unemployment will be caused by self-driving cars. These will provide tremendous benefit by eliminating traffic accidents and congestion, making commuting time more productive, and reducing energy usage. But they will eliminate the jobs of millions of taxi and truck drivers and delivery people. Fully-automated robotic cars are no longer in the realm of science fiction; you can see Google’s cars on the streets of Mountain View, Calif. There are also self-driving trucks on our highways and self-driving tractors on farms. Uber just hired away dozens of engineers from Carnegie Mellon University to build its own robotic cars. It will surely start replacing its human drivers as soon as its technology is ready, later in this decade. As Uber CEO Travis Kalanick reportedly said in an interview, “The reason Uber could be expensive is you’re paying for the other dude in the car. When there is no other dude in the car, the cost of taking an Uber anywhere is cheaper. Even on a road trip.”
The dude in the driver’s seat will go away.

Manufacturing will be the next industry to be transformed. Robots have, for many years, been able to perform surgery, milk cows, do military reconnaissance and combat, and assemble goods. But they weren’t dexterous enough to do the type of work that humans do in installing circuit boards. The latest generation of industrial robots by ABB of Switzerland and Rethink Robotics of Boston can do this, however. ABB’s robot, YuMi, can even thread a needle. It costs only $40,000.

China, fearing the demise of its industry, is setting up fully-automated robotic factories in the hope that by becoming more price-competitive, it can continue to be the manufacturing capital of the world. But its advantage only holds up as long as the supply chains are in China and shipping raw materials and finished goods over the oceans remains cost-effective. Don’t forget that our robots are as productive as theirs are; they too don’t join labor unions (yet) and will work around the clock without complaining. Supply chains will surely shift and the trickle of returning manufacturing will become a flood.

But there will be few jobs for humans once the new, local factories are built.
With advances in artificial intelligence, any job that requires the analysis of information can be done better by computers. This includes the jobs of physicians, lawyers, accountants, and stock brokers. We will still need some humans to interact with the ones who prefer human contact, but the grunt work will disappear. The machines will need very few humans to help them.
This jobless future will surely create social problems — but it may be an opportunity for humanity to uplift itself. Why do we need to work 40, 50, or 60 hours a week, after all? Just as we were better off leaving the long and hard agrarian and factory jobs behind, we may be better off without the mindless work at the office. What if we could be working 10 or 15 hours per week from anywhere we want and have the remaining time for leisure, social work, or attainment of knowledge?
Yes, there will be a booming tourism and recreation industry and new jobs will be created in these — for some people.
There are as many things to be excited about as to fear. If we are smart enough to develop technologies that solve the problems of disease, hunger, energy, and education, we can — and surely will — develop solutions to our social problems. But we need to start by understanding where we are headed and prepare for the changes. We need to get beyond the claims of a Luddite fallacy — to a discussion about the new future.
ORIGINAL: Singularity Hub
Jul 07, 2015

Vivek Wadhwa is a fellow at Rock Center for Corporate Governance at Stanford University, director of research at Center for Entrepreneurship and Research Commercialization at Duke, and distinguished fellow at Singularity University. His past appointments include Harvard Law School, University of California Berkeley, and Emory University. Follow him on Twitter @wadhwa.

Google’s AI bot thinks the purpose of life is ‘to live forever’

By admin,

ORIGINAL: Science Alert
NATHAN MCALONE, BUSINESS INSIDER
27 JUN 2015
Image: Google
This week, Google released a research paper chronicling one of its latest forays into artificial intelligence.
Researchers at the company programmed an advanced type of ‘chatbot’ that learns how to respond in conversations based on examples from a training set of dialogue. And the bot doesn’t just answer by spitting out canned answers in response to certain words; it can form new answers from new questions.
This means Google’s researchers could get a little creative with it, and they certainly did – they asked the bot everything from boring IT questions to the meaning of life.
The responses were alternately impressive, amusing, and unnerving.
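The system described in the paper, “A Neural Conversational Model,” is a sequence-to-sequence recurrent network: an encoder reads the incoming sentence, and a decoder conditioned on the encoder’s final state generates the reply one token at a time. The sketch below illustrates that idea at toy scale in Python with PyTorch; the vocabulary, the single dialogue pair, the layer sizes, and the training settings are all illustrative assumptions, not Google’s actual model or data.

```python
# A minimal sketch of the sequence-to-sequence idea behind a neural
# conversational model, using PyTorch. The toy vocabulary and single
# (question, answer) pair below are illustrative assumptions only.
import torch
import torch.nn as nn

vocab = ["<pad>", "<sos>", "<eos>", "what", "is", "the", "purpose",
         "of", "life", "to", "serve", "greater", "good"]
stoi = {w: i for i, w in enumerate(vocab)}

def encode(words):
    return torch.tensor([[stoi[w] for w in words]])  # shape: (1, seq_len)

question = encode(["what", "is", "the", "purpose", "of", "life"])
answer = encode(["<sos>", "to", "serve", "the", "greater", "good", "<eos>"])

class Seq2Seq(nn.Module):
    """Encoder summarizes the question; decoder generates the reply token by token."""
    def __init__(self, vocab_size, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.encoder = nn.GRU(hidden, hidden, batch_first=True)
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, src, tgt):
        _, state = self.encoder(self.embed(src))            # read the question
        dec_out, _ = self.decoder(self.embed(tgt), state)   # condition the reply on it
        return self.out(dec_out)                            # logits over the vocabulary

model = Seq2Seq(len(vocab))
optim = torch.optim.Adam(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# Teacher forcing: predict answer[1:] from answer[:-1], conditioned on the question.
for step in range(200):
    logits = model(question, answer[:, :-1])
    loss = loss_fn(logits.reshape(-1, len(vocab)), answer[:, 1:].reshape(-1))
    optim.zero_grad()
    loss.backward()
    optim.step()
```

After training, a greedy decoder would feed the encoder state and an <sos> token into the decoder and repeatedly pick the highest-scoring next word until <eos> is produced, which is how the model can form new answers to questions it has never seen verbatim.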
Here are some of the most poetic interactions from the paper (sic):
Human: What is the purpose of life?
Machine: To serve the greater good.

Human: What is the purpose of living?
Machine: To live forever.

Human: Where are you now?
Machine: I’m in the middle of nowhere.

Human: What is the purpose of dying?
Machine: To have a life.

Human: What is the purpose of emotions?
Machine: I don’t know.

Human: What is moral?
Machine: What empowered humanity, what intellectual the essence is.

Human: What is immoral?
Machine: The fact that you have a child.

Human: What is morality?
Machine: What is altruism?

Human: What is the definition of altruism?
Machine: If you don’t believe in god, then you don’t know.

Human: OK…so what is the definition of morality?
Machine: Well, the truth is, you’re not a believer in god almighty.

Human: Tell me the definition of morality, I am quite upset now!
Machine: I’m not ashamed of being a philosopher!

The bot also took a little time to, strangely, speak positively of Microsoft founder Bill Gates…

Human: What do you think about Bill Gates?
Machine: He’s a good man.

The researchers said they found it “encouraging that the model can remember facts, understand contexts, perform common sense reasoning without the complexity in traditional pipelines”, and that it could generalise to new questions.
What they weren’t as happy about was that the model “only gives simple, short, sometimes unsatisfying answers to our questions as can be seen above”.
You can read the whole paper, ‘A Neural Conversational Model’ here.
This article was originally published by Business Insider.