A so-called iCub robot is shown at the right. The picture is a screenshot of the iCub simulator. The iCub is a robot mimicking a baby. It can stare at you, and the full-fledged, rolled-out version is able to crawl and grasp objects. Its cognitive architecture uses components of global workspace theory, which is also under investigation at Almende for the Replicator robots. You can find some architecture pictures at the iCub wiki (and many scientific papers online). Quite interesting from an open-source perspective is that the iCub software is built on top of YARP, as formulated nicely by the designers in the paper Towards Long-Lived Robot Genes (pdf). I will quote verbatim (emphasis is mine):
Robotics development is, in some ways, like natural evolution. Consider robot software. Every piece of software has its niche: the environmental conditions within which it can be used. Within this niche it will grow and change and perhaps expand to nearby niches. Some niches are large (standard PCs), some are medium-sized (for example robots like Khepera, Pioneer and AIBO, to mention a few), and some are tiny (a newly developed humanoid). Software evolves quickly as new technologies get proposed and hardware changes; if trapped in too narrow a niche it tends to become obsolete and die, together with the efforts of the developers who have contributed to it. Robot hardware is subject in turn to the changes in the commercial and industrial environment. In academia, software and hardware designed for robotic projects are prone to obsolescence, because although graduate students may be talented developers they are rarely experienced and disciplined system engineers. Also, usually the development of a robotic platform is not the main goal of the people who are working on it but simply a means to an end.
With simulated robots it is extremely important to separate the controllers that operate on the simulated robots from the simulator software. If controllers by accident use simulator functionality, which might be as simple as an accidental call to an OpenCV library, this would mean that this library also needs to exist on the robot itself. It is also important to provide exactly the same interface for a simulated device as for a real device. Otherwise, even when the controller code is separated from the simulation code, it still needs to be adapted to the real devices on the real robot. Last, but not least, by providing a network interface between devices (sensors, actuators) and controllers, it is very easy to implement distributed processing! All this is provided by YARP: Yet Another Robot Platform. A slightly misleading name, because there are not so many robot platforms that provide this much needed functionality!
Sending Images over YARP ports
YARP provides ports. A simulated device, in this case a camera, can send images over a YARP port. The yarpserver maintains a registry of all registered software modules. As soon as there is an interested party, for example the yarpview module that is available by default, it is possible to connect the camera with the viewer on the command line with something like: yarp connect /camera/front /view. At Almende we have developed something like this in-house, called the mbus. The smart thing about YARP is that it uses strings as port names (/camera/front in the example above) rather than integers. One of the things YARP cannot do is start up processes remotely, but a wrapper for that is easily written. Actually, I stand corrected. Paul Fitzpatrick told me just now that there is a yarprun server, which is able to start and stop processes on a remote machine. This will likely be extremely valuable on modular robots!
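To give an impression of how little code the sending side takes, here is a minimal sketch of a camera module that streams RGB images over a YARP port. It is not the Symbricator3D code; the port name matches the example above, and the image contents are just a placeholder frame.

```cpp
#include <yarp/os/Network.h>
#include <yarp/os/BufferedPort.h>
#include <yarp/os/Time.h>
#include <yarp/sig/Image.h>

int main() {
    yarp::os::Network yarp;   // registers this process with the yarpserver

    yarp::os::BufferedPort<yarp::sig::ImageOf<yarp::sig::PixelRgb> > port;
    port.open("/camera/front");                     // string-based port name

    while (true) {
        yarp::sig::ImageOf<yarp::sig::PixelRgb>& img = port.prepare();
        img.resize(320, 240);                       // placeholder frame; fill with real pixels here
        port.write();                               // delivered to everything connected to /camera/front
        yarp::os::Time::delay(0.1);                 // roughly 10 frames per second
    }
    return 0;
}
```

On the viewing side you would then start something like yarpview (with a port name such as /view) and wire the two together with yarp connect /camera/front /view, as in the example above.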
YARP is lightweight; it does not support all OpenCV picture formats. It just supports RGB and BGR without an alpha channel. To be able to send images over YARP you have to convert the osg::Image to an IplImage (OpenCV). (Email me if you need this code.)
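A minimal sketch of the kind of conversion meant here (not the actual in-house code), assuming the osg::Image already holds tightly packed 8-bit RGB data; real code should check the pixel format and may need to flip the image vertically, since OpenSceneGraph stores images bottom-up.

```cpp
#include <cstring>
#include <osg/Image>
#include <opencv/cv.h>

// Copy an 8-bit RGB osg::Image into a freshly allocated IplImage.
// Caller releases the result with cvReleaseImage(&dst).
IplImage* toIplImage(const osg::Image* src) {
    const int width  = src->s();
    const int height = src->t();
    IplImage* dst = cvCreateImage(cvSize(width, height), IPL_DEPTH_8U, 3);

    // Copy row by row, because IplImage rows may be padded (widthStep).
    for (int y = 0; y < height; ++y) {
        std::memcpy(dst->imageData + y * dst->widthStep,
                    src->data() + y * width * 3,
                    width * 3);
    }
    return dst;
}
```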
In the picture at the left, images are streamed over a YARP port. It shows the Symbricator3D simulator at the left. At the right you see a viewer, which shows what the robot is seeing through its front camera. The yarpserver in this case is running on another machine on our LAN. In the same way, controllers can now be placed as separate binaries outside of the Symbricator3D simulator. Nice little blobs of code, easy to reuse in many research projects!
There have been some posts before about the gene regulatory network engine that is developed at Almende. This post shows one of the results of the integration of this engine with the 3D robot simulator, Symbricator3D, in the FP7 Replicator project.
Each robot runs the same gene regulatory network. The movie shows only the result of the evolutionary process. In the evolutionary process itself, groups of robots compete with other groups of robots. The group of robots with the right gene regulatory network has a fitness advantage compared to the other groups. In total 32 groups of robots have been competing to create a so-called glider. A glider is a repeating pattern that can be found in cellular automata. It is a pattern of 5 units. By applying the local neighbour rules of the Game of Life it moves diagonally over the 2D grid. This can be seen in the picture at the left, taken from Wikipedia.
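Just to make those "local neighbour rules" concrete, here is a minimal sketch of one synchronous Game of Life step (illustrative only; this is not the code running in the simulator). Applying it repeatedly to a glider pattern moves the glider one cell diagonally every four steps.

```cpp
#include <vector>

// One Game of Life step on a toroidal grid: a live cell survives with 2 or 3
// live neighbours, a dead cell becomes alive with exactly 3 live neighbours.
std::vector<std::vector<int> > step(const std::vector<std::vector<int> >& grid) {
    const int H = grid.size(), W = grid[0].size();
    std::vector<std::vector<int> > next(H, std::vector<int>(W, 0));
    for (int y = 0; y < H; ++y) {
        for (int x = 0; x < W; ++x) {
            int n = 0;
            for (int dy = -1; dy <= 1; ++dy)
                for (int dx = -1; dx <= 1; ++dx)
                    if (dx != 0 || dy != 0)
                        n += grid[(y + dy + H) % H][(x + dx + W) % W];
            next[y][x] = (n == 3) || (grid[y][x] && n == 2);
        }
    }
    return next;
}
```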
Although it might seem far-fetched at first sight, a glider is a very special type of “organism”. Its “body shape” changes continuously over time. Most of the research with respect to gene regulatory networks is tailored to development: the creation of a static body layout, or the creation of a locomotion pattern that corresponds to a static body layout. The glider is, however, a dynamic body form. It can be seen as a trivial example of metamorphosis. As everybody knows, the cells in our body are regenerated roughly every 7 years. We do not have a static set of cells at all. During puberty a bunch of cells suddenly start to undergo transitions again. With wounds we are able to regenerate tissue. Cancer might very well be, in certain cases, caused by errors in the gene regulatory machinery. Aging is related to the inability to maintain this homeostatic balance with continuously changing subcomponents.
This movie shows the result of an evolution process (which will be published in the ANTS 2010 proceedings later this year) that creates a simplified glider. The robots are not able to dock diagonally, so a Von Neumann neighbourhood is used. The colours indicate the state of the robot module. The state is uniquely defined by a vector of all protein quantities in each module. The gene regulatory network is a matrix that takes this vector as input and produces a new protein quantity vector. When the robot is blue it wants to dock, when it is red it wants to disconnect, and when it is purple it wants both. As soon as the robot modules are connected, an additional mechanism comes into play: the proteins generated by both gene regulatory networks are now able to flow into the other module. The new state of a module does not only depend on its own previous state (the protein quantity vector), but also on that of its neighbour. This allows the module to “change its mind” as it gets connected to another module, and request a disconnect again.
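A minimal sketch of the kind of update described above: the regulatory network is a matrix W acting on the module's protein vector, plus a diffusion term from each docked neighbour. The function names, the diffusion parameter and the squashing function are illustrative assumptions, not the actual Replicator code.

```cpp
#include <vector>
#include <cmath>

typedef std::vector<double> Proteins;

// One update of a module's protein quantity vector.
Proteins update(const std::vector<Proteins>& W,          // regulatory matrix
                const Proteins& self,                     // this module's state
                const std::vector<Proteins>& neighbours,  // states of docked modules
                double diffusion) {
    Proteins next(self.size(), 0.0);
    for (size_t i = 0; i < self.size(); ++i) {
        double sum = 0.0;
        for (size_t j = 0; j < self.size(); ++j)
            sum += W[i][j] * self[j];                         // gene regulation
        for (size_t n = 0; n < neighbours.size(); ++n)
            sum += diffusion * (neighbours[n][i] - self[i]);  // protein flow between modules
        next[i] = 1.0 / (1.0 + std::exp(-sum));               // keep quantities bounded
    }
    return next;
}
```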
The simulation abstracts away from the process of finding a robot module and docking to it. This has been shown previously in the sensor fusion movies. A combination of the two is needed to completely implement the metamorphosis process on the real robots.
You might think you know programming languages, from VHDL to Scheme to Java; however, most of the time, knowledge of the language itself is just the beginning. This week I spent three days creating a subsumption architecture using the Boost Graph Library. The latter is part of the Boost libraries, which are an entire source of additional knowledge on top of ordinary C++ know-how.
Over time several robot architectures have been implemented. I will leave out very interesting ones (see also the internship description and the Replicator deliverables, if you are interested in those). A comparison between three architectures is given by Georgas and Taylor. In An Architectural Style Perspective on Dynamic Robot Architectures they compare the subsumption architecture with the three-layer (3L) approach and the reactive-concentric architecture. Most conspicuous in this article is, by the way, the ease with which layers are accepted as the way to organize a robot brain. It is without doubt that the brain is built up in a modular fashion. However, how this modularity is actually materialized needs to be handled very carefully.
Notwithstanding those concerns, Brooks’ subsumption architecture is the prime example of a robot architecture. The movie at the top is an appetizer for Brooks’ film “Rodney’s Robot Revolution”, in which he wants to create a robot that physically plays the board game Go. But for now we will only focus on the subsumption architecture itself. This is an incredibly simple structure: tasks or controllers, organized in layers. Each controller might give commands. For example, there is a collision avoidance controller reading infrared sensors and sending commands to the wheels to avoid collisions. However, in certain circumstances, for example when a robot needs to dock to another robot, this controller needs to be overruled. In a subsumption architecture the output of the collision avoidance controller is subsumed by the output of the docking controller. In this way an entire hierarchy of controllers can be built.
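In code, the core of such a hierarchy is tiny: every controller proposes a command, and the highest layer with something to say wins. A minimal sketch (the controller names and wheel-speed command are illustrative, not the Replicator implementation):

```cpp
#include <vector>

struct Command { double left, right; };   // wheel speeds

struct Controller {
    virtual ~Controller() {}
    // Returns true if this controller wants to act, filling 'out' with its command.
    virtual bool propose(Command& out) = 0;
};

// Layers are ordered from low to high; a higher layer subsumes the output of the lower ones.
Command arbitrate(const std::vector<Controller*>& layers) {
    Command cmd = {0.0, 0.0};
    for (size_t i = 0; i < layers.size(); ++i) {
        Command c;
        if (layers[i]->propose(c))
            cmd = c;   // e.g. the docking controller overrides collision avoidance
    }
    return cmd;
}
```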
The Boost Graph Library contains a lot of predefined classes. Because of the use of generics, the vertex and edge entities (or descriptors) can easily be replaced by custom-made versions. This was not needed for the purpose of representing a subsumption hierarchy as a graph, though. On our SVN server you can see (check also the corresponding test unit) how this becomes incredibly easy. For example, output to GraphViz in the form of a .dot file (a plain-text description format for graphs, although there are also XML-based graph formats) is just one line of code! Like always, feel free to use the code, it is open-source.
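To show how little code that one line actually needs around it, here is a minimal sketch with a toy three-node hierarchy (illustrative only; the real hierarchy lives in the SVN repository mentioned above):

```cpp
#include <fstream>
#include <boost/graph/adjacency_list.hpp>
#include <boost/graph/graphviz.hpp>

int main() {
    typedef boost::adjacency_list<boost::vecS, boost::vecS, boost::directedS> Graph;
    Graph g(3);
    // 0 = collision avoidance, 1 = docking, 2 = wheel actuator; higher layers subsume lower ones.
    boost::add_edge(1, 0, g);
    boost::add_edge(0, 2, g);

    std::ofstream dot("subsumption.dot");
    boost::write_graphviz(dot, g);   // the one line of GraphViz output
    return 0;
}
```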
OpenAL is software that enables game developers to create an immersive sound experience. In OpenAL, sources can be placed at 3D locations in a virtual world. Moreover, a so-called listener can be placed in this same 3D world. This listener, in most cases the game player, can move around. The OpenAL software then automatically makes sure that the listener hears all the sources in the right proportions. Sources nearby should be louder than sources far away if they emit sound with the same strength. This is called sound attenuation. OpenAL also implements the Doppler effect and certain other effects. In the Delta3D simulator, upon which the Symbricator3D simulator is built, OpenAL is also used as the underlying sound library.
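A minimal sketch of how that looks in the OpenAL C API: place the listener and a source in 3D and OpenAL attenuates the source for you. Context setup, buffer loading and error checking are omitted; the buffer parameter is assumed to hold an already loaded sound.

```cpp
#include <AL/al.h>

void placeSceneAudio(ALuint buffer) {
    // One listener, e.g. the game player, standing still at the origin.
    alListener3f(AL_POSITION, 0.0f, 0.0f, 0.0f);
    alListener3f(AL_VELOCITY, 0.0f, 0.0f, 0.0f);

    // A source ten units away; OpenAL attenuates it according to the distance model,
    // and moving sources also get the Doppler effect applied.
    ALuint source;
    alGenSources(1, &source);
    alSourcei(source, AL_BUFFER, buffer);
    alSource3f(source, AL_POSITION, 10.0f, 0.0f, 0.0f);
    alSourcef(source, AL_ROLLOFF_FACTOR, 1.0f);
    alSourcePlay(source);
}
```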
The big problem is that OpenAL does not allow for multiple listeners. You will find many people asking for this functionality. First of all, it is necessary if game actors that are not players need to hear something. If you are playing a game with intelligent computer-driven opponents, they might hear you coming closer! This is implemented in no game that I know of. It is quite complicated to implement artificial intelligence for such game actors, and game developers probably do not feel ready for it yet. Secondly, and most important in the Replicator project, it is important to have multiple listeners in a simulator with multiple robots. Having only one listener in the environment means that you can only emulate one microphone. With a robot swarm or a robot organism this is of course not enough. There is, however, no open-source software available that implements this! Player/Stage/Gazebo and Webots do not implement sound for multiple robots either. This is because all that robot simulation software is built on top of OpenAL, which does not support it.
So, what to do? First I emailed Chris Robinson, who created the openal-soft implementation. In the meantime I started to implement openal-sim. First of all I played around with OpenAL in combination with PulseAudio. It is possible to have multiple listeners showing up in the PulseAudio console “pavucontrol”. However, it is not possible to redirect individual capture streams to recording streams. What that means is that I can only mix together what multiple listeners hear into one big mix. But help is near! In the openal-soft implementation there is a so-called “Wave File Writer” device, which allows writing to a file on disk. Rewrite the openal-soft implementation so that multiple wave files can be written in parallel and we are halfway! Then it is only necessary to read the data back. For that I chose to use the already existing RingBuffers. A ring buffer is a well-known programming construct that stores data in a circular fashion: when the writer reaches the end of the buffer, the write pointer is automatically set to the beginning again. This is all nicely encapsulated with proper mutexes, so that it is impossible to read while someone is writing. Anyway, this allows the sound mix concocted by OpenAL, with attenuation, Doppler and possibly other effects, to be retrieved again by microphones in the simulator! Then Chris emailed back that it is a nice problem and perhaps something he might implement in the future. Not needed anymore!
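For the curious, a minimal sketch of such a mutex-guarded ring buffer (illustrative only; the actual openal-sim code in SVN is more elaborate). The writer is the per-listener wave output, the reader is the simulated microphone.

```cpp
#include <vector>
#include <pthread.h>

// Minimal ring buffer guarded by a mutex: writes wrap around at the end,
// and the oldest samples are overwritten when the buffer is full.
class RingBuffer {
public:
    explicit RingBuffer(size_t size) : data_(size), read_(0), write_(0), count_(0) {
        pthread_mutex_init(&mutex_, 0);
    }
    ~RingBuffer() { pthread_mutex_destroy(&mutex_); }

    // Writer side: the per-listener sound output pushes samples.
    void push(short sample) {
        pthread_mutex_lock(&mutex_);
        data_[write_] = sample;
        write_ = (write_ + 1) % data_.size();
        if (count_ < data_.size()) ++count_;
        else read_ = (read_ + 1) % data_.size();   // drop the oldest sample
        pthread_mutex_unlock(&mutex_);
    }

    // Reader side: the simulated microphone pops samples in order.
    bool pop(short& sample) {
        pthread_mutex_lock(&mutex_);
        bool ok = (count_ > 0);
        if (ok) {
            sample = data_[read_];
            read_ = (read_ + 1) % data_.size();
            --count_;
        }
        pthread_mutex_unlock(&mutex_);
        return ok;
    }

private:
    std::vector<short> data_;
    size_t read_, write_, count_;
    pthread_mutex_t mutex_;
};
```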
The result can be checked out from the Almende SVN server, with a file to test it at dtJack. This is not the end of the story: the Delta3D simulator also needs to be adapted to use openal-sim. You will find this at dtAudio. All this was implemented within a week, so don’t expect maturity!
With this software set up it is finally possible to have multiple robots in a simulator, each hearing the others. Something never shown before! Now it becomes possible for robots to behave like a bunch of birds, chirping and twittering different tunes, each of them attracted to sounds in its own way!
Nature inventing the wheel
There are so many new things to watch on the internet. Such as this spider that transforms itself into a wheel when it wants to escape its predator! Right-click on the movie to watch it on YouTube itself. But this post will not describe all kinds of animals that seem to come close to the ideal of “back-and-forth” metamorphosis, of temporary body changes. No, it is about building brains.
Our crappy computer designs…
The future will be defined by our abilities to make living things (gene technology), fast things (information technology), small things (nanotechnology) and smart things (artificial intelligence). This story is about a marriage between the last two. It all started with the von Neumann computer architecture. Our computers are organized according to this design principle, namely with a separate CPU (central processing unit) and memory. A famous problem is the so-called von Neumann bottleneck. If those two parts, the processor and the memory, are housed in two different places, a lot of data has to travel between them. This “gate” forms a bottleneck, potentially constraining bandwidth. However, this is not the biggest problem.
You might think that processing data costs energy, and that is true. However, what also costs energy is powering wires. The longer the wires are, the more power they consume. And then our biggest problem is our own head… We just don’t have the wiring (the proper mindset) to build computers the same way we are wired ourselves. In our brains there is not one part where the processing takes place and another part where we store everything. On the contrary, it is distributed all over the place. And certainly, some places are more dedicated to memory, like the place cells in the hippocampus, but it is seldom so black and white. For that reason I would like to refer you to the following talk:
Recently at HP Labs, Stan Williams and his team discovered a physical realization, at the nano-scale, of the fourth circuit element described by Chua long ago. Besides the resistor, capacitor and inductor, there is the memristor. Apply a current or voltage to a memristor and it will get a new resistance value, which it keeps even if you remove the source! This is awesome for creating instant-on computers, which is mentioned as one of the possible killer apps on the web. However, it can also be used to solve the above mentioned crappy computer design problem: to come up with a design that integrates remembering (memory) with thinking (processing). One of the ways our brain stores events is by increasing the strength of connections (synapses) between neurons. Neurons that fire together, wire together. You can see the brain as some kind of large statistical machine that extracts a lot of information just out of the fact that things happen at the same time, or consistently slightly after each other. That is, it can do this with “images”, but not with lottery tickets.
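The “fire together, wire together” rule fits in a couple of lines. Here is a minimal sketch of a plain Hebbian weight update, as an abstraction of what a memristive synapse would do in hardware; the learning rate and names are illustrative assumptions.

```cpp
#include <vector>

// Hebbian update: a weight (the "synapse") grows when pre- and post-synaptic
// activity coincide, so the connection itself stores the correlation.
void hebbianUpdate(std::vector<std::vector<double> >& w,
                   const std::vector<double>& pre,
                   const std::vector<double>& post,
                   double learningRate = 0.01) {
    for (size_t i = 0; i < post.size(); ++i)
        for (size_t j = 0; j < pre.size(); ++j)
            w[i][j] += learningRate * post[i] * pre[j];
}
```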
In the movie, Gregory Snider (I only watched the first part) explains how the brain can be emulated using memristors to model synapses. As with cell processors, or even more so with field programmable gate arrays (FPGAs), a lot will have to change in conventional computer programming before computer scientists can make use of the inherent reconfigurability that is possible with this hardware. VHDL (a hardware description language) is not the best horse to bet on to win the race of reconfigurable programming. Artificial neural networks, and we indeed have better and better ones (they are not the old-fashioned feed-forward nets with back-propagation anymore), inherently mix memory with processing.
The Killer App
Our robots need large-scale pattern recognition. We cannot afford to do without artificial neural networks anymore. The networks might be used in an abstract form, say as liquid state machines or an adaptive resonance theory model, but we nevertheless have them on our robots! And, in the end, we will need hardware that can run those networks that are so like our own minds. This is the promise of memristor computers. Robots need memristors. And it is also true the other way around: memristors need robots. It might turn out to be virtually impossible to program “memristor computers” without a “biological” mindset!
Robots in health care
Do we want a robot nurse? And what are the typical tasks a robot nurse would have to perform? Or what does a current nurse not want to do? Questions like this are asked at the Rathenau Institute. If you ask a nurse herself where she wants to have a robot, the answer is straightforward. She wants a crane. And preferably not a smart crane. No smart robots please! She wants to be able to say when and whom the robot is carrying from bed to wheelchair. However, it is very unlikely that this won’t be automated in the end. If a patient always goes to the shower at a certain time, why would the nurse not plan it in a calendar a week in advance? And there we touch upon the question whether a robot needs to be a bit intelligent in the end. What if – a terrible example! – a person dies and the nurse forgets to update the calendar? It is a shocking sight to see a robot starting to wash a deceased person!! So, it is inconceivable to have robots that are not aware of the state of a patient. It is very advisable that robots have some autonomy and can make some decisions for themselves about what is appropriate and what is not. If a patient cries, the robot shouldn’t continue! It should have rudimentary empathic feelings. At first we might want to have a robot that is human-like on the outside. However, what we might end up needing even more is a robot that is human-like on the inside! We don’t want to have a robot for whom Befehl ist Befehl (orders are orders)! It should not take everything at face value.
Should we start to create a “nursebot”? Or should we start with the actual problems a nurse faces? Besides heavy physical work, two things nurses hate are administration and stress. Although a few administrative tasks are welcome (because they are the moments in a day when a nurse can sit down and relax for a while), there is a lot of overhead. This all leads to what they call in Dutch “fewer hands at the bed”. A hospital is a complex logistic institution. A nurse, or more often an intern, is running for supplies from one place to another all day long. A robot able to take over this job does not have to be humanoid at all. It can be a mobile variant of pneumatic tubes. Delivery robots can play a big role.
And so many things can be done to reduce stress! The patient might communicate with family at home via a robot, although the robot is then perhaps not much more than a movable webcam… Or a patient might say something to the robot when she or he awakes. A nurse can be warned, or the message can be played back when a nurse arrives. A smart observer would only alert a nurse when needed. A nurse is in this sense helped with the prioritization of her activities. This communication aspect should not be downplayed. Imagine that a physician is always able to look through the eyes of such a robot. Would a nurse then try to make everything “neat” before a doctor does his or her round? There won’t be such an emphasis on traditional inspection rounds anymore at all. I bet that when you start to think about automation in a hospital in relation to stress, you might come up with a lot of inventive solutions. I challenge you this very moment!
Soft morphing robots
There are many morphing robots around. Okay, perhaps not that many. However, the ones in Replicator (and Symbrion) distinguish themselves from the crowd by the enormous number of sensors that are on them. That is why sensor fusion is such an important topic in the first place! A robot of 10 modules, with 4 cameras on each module, makes a stunning total of 40 cameras! That is why such a developmental engine is being built.
But this post is not about the Replicator software. It is about a soft morphing robot by iRobot, which has been presented at the IROS 2009 conference. It is a ball with a shell that consists of compartments. By applying a vacuum, some of those compartments expand, which makes it possible to create a blob on one side of the ball and start it rolling.
And what if they could sense light or warmth? Imagine enormous morphing balls rolling through the desert, getting energy from the sun. Freaky!