ARM to RAM: Swarm Manipulation

While researching memory recall, something about the following articles made me think I should read them at some point. They concern mapping and manipulating neural activation patterns in primates, and building systems to cooperatively control robotic arms, swarms, and memory recall. Thinking backward, we may not be far from a point at which the performance of human tasks will need to be outsourced to an AI interface in order to remain competitive. If the technology to treat pathology became ubiquitous, given how it was trained to heal humans, then harnessing the interface to perform operations in the opposite direction seems ominously trivial.

The Autonomous Robotic Manipulation (ARM) program is creating manipulators with a high degree of autonomy capable of serving multiple military purposes across a wide variety of application domains. Current robotic manipulation systems save lives and reduce casualties, but they adapt poorly to multiple mission environments and require burdensome human interaction and long task-completion times.

ARM seeks to enable autonomous manipulation systems to surpass the performance level of remote manipulation systems that are controlled directly by a human operator. The program will attempt to reach this goal by developing software and hardware that enables robots to autonomously grasp and manipulate objects in unstructured environments, with humans providing only high-level direction.

The ARM program consists of three tracks: software, hardware, and outreach. The hardware track focuses on the design and development of low-cost dexterous multi-fingered hands, taking advantage of recent manufacturing advancements. The software track focuses on developing new algorithms and approaches for grasping and manipulation using local sensors for perception. The outreach track engages a larger community by placing robotic systems in public museums (presently the National Air and Space Museum) and by encouraging unfunded participants to develop algorithms for robot autonomy through web access to a real system.

Animal brains connected up to make mind-melded computer

By Jessica Hamzelou

Two heads are better than one, and three monkey brains can control an avatar better than any single monkey. For the first time, a team has networked the brains of multiple animals to form a living computer that can perform tasks and solve problems.

If human brains could be similarly connected, it might give us superhuman problem-solving abilities, and allow us to communicate abstract thoughts and experiences. “It is really exciting,” says Iyad Rahwan at the Masdar Institute in Abu Dhabi, UAE, who was not involved in the work. “It will change the way humans cooperate.”

The work, published today, is an advance on standard brain-machine interfaces – devices that have enabled people and animals to control machines and prosthetic limbs by thought alone. These tend to work by converting the brain’s electrical activity into signals that a computer can interpret.

Miguel Nicolelis at Duke University Medical Center in Durham, North Carolina, and his colleagues wanted to extend the idea by incorporating multiple brains at once. The team connected the brains of three monkeys to a computer that controlled an animated screen image representing a robotic arm, placing electrodes into brain areas involved in movement.

By synchronising their thoughts, the monkeys were able to move the arm to reach a target – at which point the team rewarded them with juice.

Brainet

Then the team made things trickier: for example, each monkey could control the arm in only one dimension. But the monkeys still managed to make the arm reach the target by working together. “They synchronise their brains and they achieve the task by creating a superbrain – a structure that is the combination of three brains,” says Nicolelis. He calls the structure a “brainet”.
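The division of labour described above can be sketched as a toy simulation (my own illustration, not the study's actual control scheme): three independent controllers each command a single axis of a shared effector, so the target is reachable only through their combined output.

```python
# Toy illustration of shared control: three independent "agents" each
# drive one axis of a common effector, so the target can be reached
# only through their combined actions. Purely hypothetical code.

def step_toward(position, target):
    """Move one unit along a single axis toward the target coordinate."""
    if position < target:
        return position + 1
    if position > target:
        return position - 1
    return position

def run_trial(start, target, max_steps=20):
    """Each of the three agents updates only its own axis on each step."""
    arm = list(start)
    for _ in range(max_steps):
        for axis in range(3):          # agent i controls axis i only
            arm[axis] = step_toward(arm[axis], target[axis])
        if arm == list(target):
            return True                # "reward": target reached
    return False

print(run_trial((0, 0, 0), (3, -2, 5)))  # True: cooperation succeeds
```

No single agent can reach the target alone; remove any one axis update and the trial fails, which is the point of the experiment's design.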

These monkeys were connected only to a computer, not one another, but in a second set of experiments, the team connected the brains of four rats to a computer and to each other. Each rat had two sets of electrodes implanted in regions of the brain involved in movement control – one to stimulate the brain and another to record its activity.

The team sent electrical pulses to all four rats and rewarded them when they synchronised their brain activity. After 10 training sessions, the rats were able to do this 61 per cent of the time. This synchronous brain activity can be put to work as a computer to perform tasks like information storage and pattern recognition, says Nicolelis. “We send a message to the brains, the brains incorporate that message, and we can retrieve the message later,” he says.

This is the way parallel processing works in computing, says Rahwan. “In order to synchronise, the brains are responding to each other,” he says. “So you end up with an input, some kind of computation, and an output – what a computer does.” Dividing the computing of a task between multiple brains is similar to sharing computations between multiple processors in modern computers, he says.
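Rahwan's analogy can be made concrete with a minimal sketch (my own illustration, assuming nothing about the brainet itself): one task is divided into chunks, each worker computes its share, and the partial outputs are combined into the final answer.

```python
# Minimal sketch of dividing one computation among several workers and
# combining their outputs. Threads are used for portability; true CPU
# parallelism in Python would use processes, but the division of work
# is the point of the illustration.
from concurrent.futures import ThreadPoolExecutor

def partial_sum_of_squares(chunk):
    """One worker's share of the overall computation."""
    return sum(x * x for x in chunk)

def shared_sum_of_squares(data, workers=3):
    """Split the input, hand each slice to a worker, merge the results."""
    chunks = [data[i::workers] for i in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(partial_sum_of_squares, chunks))

print(shared_sum_of_squares(list(range(10))))  # 285
```

Each worker sees only its slice of the input, yet the merged result equals what a single processor would compute; this input-compute-output structure is what Rahwan compares to the synchronised brains.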

Bypassing language

“This is incredible,” says Andrea Stocco at the University of Washington in Seattle, who was not involved in the project. “We are sampling different neurons from different animals and putting them together to create a superorganism.”

Things could get even more interesting once we are able to connect human brains. This will probably only be possible when better non-invasive methods for monitoring and stimulating the brain have been developed.

“Once brains are connected, applications become just a matter of what different animals can do,” says Stocco. All anyone can probably ask of a monkey is to control movement, but we can expect much more from human minds, he says.

A device that allows information transfer between brains could, in theory, allow us to do away with language – which plays the role of a “cumbersome and difficult-to-manage symbolic code”, Stocco says.

“I could send thoughts from my brain to your brain in a way not represented by sounds or words,” says Andrew Jackson at Newcastle University, UK. “You could envisage a world where if I wanted to say ‘let’s go to the pub’, I could send that thought to your brain,” he says. “Although I don’t know if anyone would want that. I would rather link my brain to Wikipedia.”

The ability to share abstract thoughts could enable us to solve more complex problems. “Sometimes it’s really hard to collaborate if you are a mathematician and you’re thinking about very complex and abstract objects,” says Stocco. “If you could collaboratively solve common problems [using a brainet], it would be a way to leverage the skills of different individuals for a common goal.”

Collective surgery

This might be a way to perform future surgery, says Stocco. At present, when a team of surgeons is at work, only one will tend to have control of the scalpel at any moment. Imagine if each member of the team could focus on a particular aspect of the operation and coordinate their brain power to collectively control the procedure. “We are really far away from that scenario, but Nicolelis’s work opens up all those possibilities for the first time, which is exciting,” he says.

But there is a chance that such scenarios won’t improve on current performance, Stocco says. Jason Ritt of Boston University agrees. “In principle we could communicate information much faster [with a brainet] than with vision and language, but there’s a really high bar,” he says. “Our ability to communicate with technology is still nowhere near our ability to communicate with speech.”

The ability to share our thoughts and brain power could also leave us vulnerable to new invasions of privacy, warns Rahwan. “Once you create a complex entity [like a brainet], you have to ensure that individual autonomy is protected,” he says. It might be possible, for example, for one brain to manipulate others in a network.

There’s also a chance that private thoughts might slip through along with ones to be shared, such as your intentions after drinking with someone you invited to the pub, says Nicholas Hatsopoulos at the University of Chicago in Illinois. “It might be a little scary,” he says. “There are lots of thoughts that we have that we wouldn’t want to share with others.”

In the meantime, Nicolelis, who also develops exoskeletons that help people with spinal cord injuries regain movement, hopes to develop the technology trialled in monkeys for paraplegic people. He hopes that a more experienced user of a prosthetic limb or wheelchair, for example, might be able to collaborate with a less experienced user to directly train them to control it for themselves.

Journal reference: Scientific Reports, DOI: 10.1038/srep11869 and 10.1038/srep10767

Article amended on 14 July 2015

When this article was first published it incorrectly located the Masdar Institute. This has now been corrected.

Defense Advanced Research Projects Agency
Program Information

Restoring Active Memory (RAM)

Dr. Justin Sanchez

Traumatic brain injury (TBI) is a serious cause of disability in the United States. Diagnosed in more than 270,000 military servicemembers since 2000 and affecting an estimated 1.7 million U.S. civilians each year [1], TBI frequently results in an impaired ability to retrieve memories formed prior to injury and a reduced capacity to form or retain new memories following injury. Despite the scale of the problem, few effective therapies currently exist to mitigate the long-term consequences of TBI on memory. Through the Restoring Active Memory (RAM) program, DARPA seeks to accelerate the development of technology able to address this public health challenge and help servicemembers and others overcome memory deficits by developing new neuroprosthetics to bridge gaps in the injured brain.

The end goal of RAM is to develop and test a wireless, fully implantable neural-interface medical device for human clinical use, but a number of significant advances will be targeted on the way to achieving that goal. To start, DARPA will support the development of multi-scale computational models with high spatial and temporal resolution that describe how neurons code declarative memories—those well-defined parcels of knowledge that can be consciously recalled and described in words, such as events, times, and places. Researchers will also explore new methods for analysis and decoding of neural signals to understand how targeted stimulation might be applied to help the brain reestablish an ability to encode new memories following brain injury. “Encoding” refers to the process by which newly learned information is attended to and processed by the brain when first encountered.

Building on this foundational work, researchers will attempt to integrate the computational models developed under RAM into new, implantable, closed-loop systems able to deliver targeted neural stimulation that may ultimately help restore memory function. These studies will involve volunteers living with deficits in the encoding and/or retrieval of declarative memories and/or volunteers undergoing neurosurgery for other neurological conditions.

In addition to human clinical efforts, RAM will support animal studies to advance the state-of-the-art of quantitative models that account for the encoding and retrieval of complex memories and memory attributes, including their hierarchical associations with one another. This work will also seek to identify any characteristic neural and behavioral correlates of memories facilitated by therapeutic devices.

RAM and related DARPA neuroscience efforts are informed by members of an independent Ethical, Legal, and Social Implications (ELSI) panel. Communications with ELSI panelists supplement the oversight provided by institutional review boards that govern human clinical studies and animal use.

RAM is part of a broader portfolio of programs within DARPA that support President Obama's BRAIN Initiative.

RAM Replay

The goal of the RAM Replay program is to develop new closed-loop, non-invasive systems that leverage the role of neural “replay” in the formation and recall of memory to help individuals better remember specific episodic events and learned skills. The RAM Replay program aims to non-invasively detect, model, and facilitate real-time correlates of replay in humans, leveraging not only neurophysiology, but also other factors including physiological state and external elements in the surrounding environment. This 24-month fundamental research program is designed to develop novel intervention strategies to help investigators determine not only which neural, physiological, and environmental components matter for memory formation and recall, but also how much they matter. To ensure real-world relevance, RAM Replay researchers will validate their assessments and intervention strategies through performance on DoD-relevant tasks, rather than relying on conventional behavioral paradigms commonly used to assess memory in laboratory settings. Using the new knowledge and paradigms for assessing memory formation and recall, the program seeks to improve performance of complex skills by healthy humans.

[1] Source: Faul M, Xu L, Wald MM, Coronado VG. Traumatic Brain Injury in the United States: Emergency Department Visits, Hospitalizations and Deaths 2002–2006. Atlanta (GA): Centers for Disease Control and Prevention, National Center for Injury Prevention and Control; 2010.