Living with Robots

Written by Thrishantha Nanayakkara, Director, Laboratory for Morphological Computation and Learning, Centre for Robotics Research, King’s College London

This is a follow-up note on the very stimulating discussion we had at the Digital Leaders Research Salon titled ‘Practical Use of Robotics for Public Services and Enterprise’, held at the University of Liverpool in London on 13th April 2016. This note supplements the very important remarks made by my colleagues, Dr. Giuseppe Carbone and Dr. Alessandro Di Nuovo from Sheffield Hallam University.

When does a machine become a robot?

It is in the eyes of the observer. If the observer feels that the machine is able to meaningfully respond to environmental conditions like some living being, the machine moves a step towards becoming a robot. This means that a simple rubber toy without a single sensor or microprocessor can be viewed as a robot if it moves around in a cluttered environment like an insect. Please check whether you think these HexBugs are robots. You will soon realize that whatever classification you give these rubber bugs cannot be made independently of their environment. Therefore, a “robot” is a very contextual notion that heavily depends on how the observer perceives the context. For instance, a child playing with these HexBugs may even think that the rubber bugs are alive. NASA explains, “robots are machines that can be used to do jobs. Some robots can do work by themselves. Other robots must always have a person telling them what to do”.

Where does intelligence come from?

The term “intelligence” is also a mental construct of the observer. It often refers to the ability to know the state of a being in the environment and to know what actions should be taken to either cooperate or compete in a given situation. However, there is no scientific confirmation that the act of knowing is a purely neural process. There is evidence to show that purely mental activities like meditation lead to changes in neural circuits. Therefore, there is a debate as to whether machines based on numerical algorithms will ever be able to “know” their state of being in the environment, no matter how complex they become. For instance, the Internet, which runs very sophisticated algorithms that can piece together information from more than 15 billion computing nodes within milliseconds, shows no trace of self-awareness, whereas insects with far less computational power in their brains show very complex defensive behaviors that clearly come from self-awareness. However, pieces of numerical algorithms running in a robot can measure its state based on sensors without being aware of itself as an entity.
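
To make that last point concrete, here is a minimal sketch, in Python, of what such numerical “knowing” amounts to: a one-dimensional toy robot blends a motion model with noisy range-sensor readings to estimate its position. The numbers and the simple blending rule are illustrative choices of mine, not any particular robot’s algorithm.

import random

# Minimal 1-D state estimator: a toy robot moves along a line at a commanded
# speed and blends its motion model with noisy range-sensor readings.
# All numbers are illustrative assumptions; this is not any specific robot.

def estimate_positions(true_start=0.0, speed=0.5, steps=20,
                       sensor_noise=0.3, blend=0.3, seed=1):
    random.seed(seed)
    true_pos = true_start
    estimate = true_start          # the robot's numerical "belief" about where it is
    history = []
    for _ in range(steps):
        true_pos += speed                                     # the world moves on
        estimate += speed                                     # predict from the motion model
        reading = true_pos + random.gauss(0.0, sensor_noise)  # noisy sensor reading
        estimate += blend * (reading - estimate)              # correct towards the reading
        history.append((true_pos, reading, estimate))
    return history

for true_pos, reading, estimate in estimate_positions():
    print(f"true={true_pos:5.2f}  sensed={reading:5.2f}  estimated={estimate:5.2f}")

The estimate tracks the true position well enough to act on, yet every step is plain arithmetic on numbers; any sense of “knowing” is entirely in the eye of the observer reading the printout.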

A far more simplified notion of “machine intelligence” is the ability to respond meaningfully to situations and to learn in changing conditions, even if the machine is not aware of its existence as separate from the environment. In this sense, a robot doesn’t need to have sensors, microprocessors, or numerical algorithms to exhibit some level of intelligent behavior. For instance, this passive dynamic walker developed at Andy Ruina’s lab at Cornell University can walk down a ramp without a single sensor, motor, or microprocessor. Another example is the ability of a dead trout’s body to start swimming against a stream of water (first demonstrated in Lauder’s lab at Harvard). These examples show that computation is not limited to numerical algorithms. Purely mechanical, aerodynamic, or hydrodynamic circuits can perform computations. We call this “morphological computation”. Some recent work done in my laboratory at KCL explains how various structures in the body perform local computations that relieve the brain from complex feedback control with long delays: for example, how the variable damping (friction) profile of the knee joint optimizes bipedal walking, how the hoof of a goat can self-stabilize by dynamically increasing friction when pushed against a surface, and how we control the stiffness of our joints in order to maximize the quality of haptic perception. Therefore, we believe that the cost of future robots can go down drastically if we pay more attention to understanding these bodily computations (morphological computations), which can be done without costly sensors, motors, or microprocessor-based software algorithms, and then add numerical intelligence on top of them.
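
As a rough illustration of how a body can “compute” a stable gait without any controller, here is a minimal sketch, in Python, of the textbook rimless-wheel simplification of a passive dynamic walker (a simplification for illustration only, not the Cornell machine itself; the leg length, spoke spacing, and slope below are assumed values). The step-to-step map simply balances the energy gained by rolling downhill against the energy lost at each foot impact, with no sensing or feedback at all.

import math

# Rimless wheel rolling down a slope: the textbook simplification of a
# passive dynamic walker. The parameter values below are illustrative.
g = 9.81                       # gravity (m/s^2)
leg = 1.0                      # leg (spoke) length (m)
alpha = math.radians(15.0)     # half the angle between neighbouring spokes
gamma = math.radians(4.0)      # slope of the ramp

def step(omega):
    """One application of the step-to-step map on the pre-impact angular
    velocity omega (rad/s). Returns None if the wheel stalls."""
    omega_plus = omega * math.cos(2 * alpha)        # energy lost at foot impact
    # The stance leg must carry enough energy to vault over the vertical.
    if omega_plus ** 2 < 2 * g / leg * (1 - math.cos(alpha - gamma)):
        return None
    # Energy gained as the centre of mass drops down the slope during the step.
    return math.sqrt(omega_plus ** 2 + 4 * g / leg * math.sin(alpha) * math.sin(gamma))

omega = 1.5   # initial push (rad/s)
for n in range(15):
    omega = step(omega)
    if omega is None:
        print("The wheel stalls: not enough energy to vault over the stance leg.")
        break
    print(f"step {n + 1:2d}: pre-impact angular velocity = {omega:.3f} rad/s")

Starting from a modest initial push, the pre-impact speed converges to the same value step after step; the geometry of the legs and the slope are doing the “control”. The variable damping of a knee or the friction at a goat’s hoof can be read in the same spirit, as computations carried out by the body itself.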

Will robots make human skills redundant?

Yes, they will, just as the invention of the wheel, and the art of agriculture that followed it, made the hunting skills of humans redundant. But the hunters who went out of a job quickly learnt to be farmers around 10,000 years ago. Then, when the art of writing was developed around 5,000 years ago, the expanding human population saw choices other than hunting and farming. By the time the industrial revolution happened, hunting had been reduced to a hobby, and many professions purely based on writing had emerged. Today, due to the expanded global population, farming is a subsidized industry, and we talk about Genetically Modified (GM) products and machine-made items like clothes. The job market is so diverse that we cannot count the number of choices a high school student will have to think about. Not a single such job can be found that does not use some sort of man-made technology.

However, with every technological intervention in products and services, the value we assign to the human touch has increased. For example, with GM products, the value of organic food increased. With machine-made items, the value of handmade items increased. Even car washing centers advertise “hand washing” as a premium service. When AI customer service agents were introduced to cope with the increasing volume of calls to service providers, the value of human customer service agents increased. In my laboratory, the more we investigate natural phenomena to do with perception and action through robotics, the more respect we grow for nature. For instance, some recent work to develop a fingertip that can classify textures made us wonder how precious and complex our fingertips are. Therefore, the more we begin to interact with robots, the more we will value humanity, human love, affection, emotions, empathy, and relationships. However, due to growing economic pressures, changing demographics, and complex lifestyles, humans will be pressured to obtain services from robots in the future. In my view, it will help humans to realize that precious things to do with humanity cannot be programmed or numerically solved. Then, just as GM products highlighted the value of organic food, robots will help humans to care more about their relationships with other humans.
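
To give a feel for what texture classification with an artificial fingertip involves, here is a deliberately crude sketch, in Python, of the general idea only (it is not our sensor or our method): synthetic vibration signals for two imaginary “textures” are summarized by the energy in a few frequency bands and labelled by the nearest class average. Even a toy pipeline like this hints at how much richer the real fingertip is.

import numpy as np

# Toy texture classifier: purely illustrative, not a model of a real tactile sensor.
rng = np.random.default_rng(0)
fs, duration = 1000.0, 0.5                  # sample rate (Hz) and signal length (s)
t = np.arange(0.0, duration, 1.0 / fs)

def rub(texture):
    """Synthetic fingertip vibration while rubbing one of two imaginary textures."""
    freq = 60.0 if texture == "coarse" else 180.0   # assumed dominant frequencies
    return np.sin(2 * np.pi * freq * t) + 0.5 * rng.standard_normal(t.size)

def band_energies(signal, n_bands=8):
    """Summarize a signal by the normalized energy in a few frequency bands."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    bands = np.array_split(spectrum, n_bands)
    feats = np.array([band.sum() for band in bands])
    return feats / feats.sum()

# "Train": average the features of a handful of labelled rubs per texture.
centroids = {label: np.mean([band_energies(rub(label)) for _ in range(20)], axis=0)
             for label in ("coarse", "fine")}

# "Test": label new rubs by the nearest class average.
correct = 0
for true_label in ("coarse", "fine") * 10:
    feats = band_energies(rub(true_label))
    guess = min(centroids, key=lambda k: np.linalg.norm(feats - centroids[k]))
    correct += guess == true_label

print(f"correctly labelled {correct} of 20 test rubs")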
