“The art of progress is to preserve order amid change and to preserve change amid order.”
— Alfred North Whitehead
Our world-famous Senior Robotics Scientist, Dr. James Albus, has always sought to understand the most complex system in the known Universe: the human mind. Contemplating the ancient philosophical mind/brain conundrum now seems to lead to the conclusion that the mind is what the brain does.
A Computational Theory of Mind
By
Dr. James Albus
There are three great questions in science:
What is matter and energy?
What is life?
What is mind?
Over the past two centuries, the fields of physics and chemistry have developed a fundamental understanding of matter and energy. Within the last hundred years, physiology and biology have discovered the basic mechanisms of life. In just the last few decades, the fields of psychology and computer science have begun to understand the nature of intelligence. The outlines of a scientific theory of mind are beginning to emerge.
The study of intelligence in brains and machines is the field of science that has occupied me for the past 40 years. Now, in the twilight of my career, I believe we have arrived at the threshold of understanding how to build intelligent factories, construction machines, mining and drilling equipment, transportation facilities, and service industries that could enable the elimination of poverty and pollution, and the creation of a golden age of prosperity for all humankind. To be sure, achieving this level of prosperity will require more than just technological advances. It will require new political and economic policies and institutions. But these are not beyond the realm of physical possibility. And if they were widely understood, they would not be beyond the realm of political possibility either.
The Early Years
It was Michael Arbib's book The Machinery of the Brain that first attracted my attention to the issue of machine intelligence. The 1950s were an era of great expectations among the founders of the discipline of Artificial Intelligence. It seemed reasonable to me that within my lifetime it might be possible to solve the puzzle of how the machinery of the brain gives rise to the phenomena of intelligence. I felt that it would at least be possible to advance the field beyond philosophical debate to experimental science -- and along the way to make some important contributions to the science and engineering of intelligent systems.
So I read the literature in artificial intelligence and neurophysiology, and took graduate courses in control theory, signal processing, biophysics, and neuropsychology. It was 1965 when Sir John Eccles and his co-workers M. Ito and J. Szentagothai published their results from a remarkable series of studies on the cerebellum in a book entitled The Cerebellum as a Neuronal Machine. It turned out that the neural architecture in the cerebellum bore a remarkable resemblance to the Perceptron, an early neural net developed by an Air Force research scientist named Frank Rosenblatt.
A Model of the Cerebellum
It was 1969 when David Marr burst onto the scientific stage with the publication of his theory of cerebellar cortex. In 1971, I published a theory of cerebellar function that modified and extended the Marr model. In 1972, I received my Ph.D. degree in Electrical Engineering from the University of Maryland with a thesis demonstrating how my cerebellar model could learn to control a robotic arm in coordinated movements. Today, the Marr-Albus theory is one of the most widely accepted theoretical models of the cerebellum. It is regularly cited in the literature, and is still the basis of experimental activity by leading cerebellar neurophysiologists throughout the world.
[Please click here for a 1971 paper on A Theory of Cerebellar Function]
[Please click here for an early (1980) paper on Hierarchical Control]
[Please click here for a 1989 paper on Marr and Albus Theories of the Cerebellum]
A Neural Net
In 1975, building on my theory of cerebellar function, I developed a new type of neural net computer -- the CMAC (Cerebellar Model Arithmetic Computer). CMAC has several significant advantages over other neural nets. One is that CMAC learns non-linear functions orders of magnitude faster than other neural net algorithms such as back propagation. CMAC is also much more efficient in execution -- so much so that even on 1975-era microcomputers it was able to compute in milliseconds. This enabled engineers to build closed-loop controllers that could learn complex task skills, such as the ability of an automated fork lift to find and pick up palletized loads using infrared proximity sensors. In 1976, CMAC was awarded an IR-100 award from Industrial Research Magazine as one of the 100 most important industrial innovations of the year. Today, CMAC continues to inspire research in neural networks and learning by intelligent controllers. It is particularly useful for applications requiring real-time adaptive control. CMAC is the subject of many recent publications in the robotics and neural net literature.
For me, however, the most important feature of CMAC was that it provided a way of thinking about computation in the brain that has led to a number of important insights about the building blocks of intelligent systems. The first and most obvious insight was that a neural net could compute any reasonably smooth mathematical function. For example, it could do simple addition, subtraction, multiplication, and division. It could also compute smooth non-linear functions such as trigonometric functions, and perform matrix manipulations such as those required for coordinate transformations. A CMAC could, for example, transform a robot manipulator command from end-point coordinates to joint coordinates, or from head coordinates to end-point coordinates. This suggested how neurons in the brain could control muscles and joints in tasks of manipulation and locomotion. It showed how neural modules in the brain could compute errors between desired states and observed states, and compute control actions that compensate for those errors. Other researchers in the field of neural net controllers showed how neural nets can learn inverse models that generate feed forward signals for anticipatory control. The main result of this line of research was to demonstrate that neural nets can (and by extension neurons in the brain may) compute all the functions required to implement sophisticated feed forward and feedback control for position, velocity, and force in nonlinear actuators such as muscles.
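The function-approximation idea is easy to sketch. Below is a minimal, hypothetical CMAC-style approximator in Python: overlapping offset tilings map an input to a few active cells, the output is the sum of the active cells' weights, and an LMS rule spreads the error over those cells. The parameters and the dictionary-based weight table are illustrative only; a full CMAC also hashes cells into a fixed-size memory, which is omitted here for clarity.

```python
import math
import random
from collections import defaultdict

N_TILINGS = 8      # number of overlapping tilings (CMAC's receptive-field layers)
TILE_WIDTH = 0.5   # width of one tile; effective resolution is TILE_WIDTH / N_TILINGS

def active_cells(x):
    """Map a scalar input to one active cell per tiling. Each tiling is
    offset slightly, so nearby inputs share most of their active cells --
    this overlap is what gives CMAC its local generalization."""
    return [(t, int((x + t * TILE_WIDTH / N_TILINGS) // TILE_WIDTH))
            for t in range(N_TILINGS)]

def predict(weights, x):
    """Output = sum of the weights of the active cells."""
    return sum(weights[c] for c in active_cells(x))

def train(weights, x, target, lr=0.2):
    """LMS update: spread the prediction error evenly over the active cells."""
    error = target - predict(weights, x)
    for c in active_cells(x):
        weights[c] += lr * error / N_TILINGS

# Learn the smooth nonlinear function y = sin(x) on [0, 2*pi]
random.seed(0)
weights = defaultdict(float)
for _ in range(5000):
    x = random.uniform(0, 2 * math.pi)
    train(weights, x, math.sin(x))

for x in (0.5, 1.5, 3.0):
    print(f"x={x}: cmac={predict(weights, x):+.3f}  sin={math.sin(x):+.3f}")
```

Because only a handful of cells are active per input, each training step touches a few weights rather than the whole network, which is the source of the speed advantage over back propagation mentioned above.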
CMAC demonstrated how neural networks can encode knowledge in the form of look-up tables, by treating the input vector as a memory address and the output as the contents of the address. CMAC demonstrated how neural nets can implement IF-THEN rules, by treating the input vector as an IF predicate, and the output vector as a THEN consequent. CMAC showed how to perform indirect addressing by treating the input vector as an address, and the output as another address. This demonstrated how neural nets could be organized to form abstract data structures (such as frames, objects, and classes). It suggested how neurons could establish pointers that represent relationships within and between symbolic data structures. And it illustrated how neural nets can perform list processing operations such as are required for searching graphs and building semantic nets. Thus, CMAC suggests how neurons in the brain could be organized to perform all of the types of computations that are standard tools of researchers in signal processing, image understanding, cognitive reasoning, world model building, planning, and control -- the functions required to build intelligent systems.
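The table-lookup, IF-THEN, and indirect-addressing ideas can be illustrated with ordinary dictionaries standing in for the weight table. The rule set and frame names below are hypothetical, chosen only to show the mechanism, not drawn from any actual CMAC application.

```python
# IF-THEN rule as table lookup: the input (IF) vector is the address,
# the stored output is the THEN consequent.
rules = {
    ("obstacle", "near"): "stop",
    ("obstacle", "far"):  "proceed",
}

# Indirect addressing: the looked-up value is itself another address
# (a pointer), so a flat table can encode linked structures such as
# frames, objects, and lists.
frames = {
    "robot": {"isa": "machine",     "part": "arm"},
    "arm":   {"isa": "manipulator", "part": "gripper"},
}

def follow(table, start, slot, steps):
    """Chase 'slot' pointers through the table -- a list-processing
    operation of the kind used to search graphs and semantic nets."""
    chain = [start]
    for _ in range(steps):
        chain.append(table[chain[-1]][slot])
    return chain

print(rules[("obstacle", "near")])         # -> stop
print(follow(frames, "robot", "part", 2))  # -> ['robot', 'arm', 'gripper']
```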
Features of the CMAC mechanism for recoding continuous input variables into a higher dimensional address space also enable it to interface symbolic commands with continuous behavioral functions. CMAC is able to implement what later became known as "fuzzy control." CMAC suggests a neurophysiological basis for fuzzy control.
Experiments with CMAC and other types of neural nets have demonstrated how neurons might perform cross-correlation between vector manifolds, and compute correlation and variance between any two spatial or temporal patterns. CMAC shows how neural nets can be cascaded to build phase-lock loops and to perform recursive estimation functions such as Kalman filtering and signal decoding. This suggests how neurons in the brain might be organized to classify images, and recognize temporal strings such as phonemes, words, and sentences.
Another important insight suggested by CMAC was that if some of its outputs are looped back to its input, a CMAC can act as a finite state automaton. This means that neural nets can generate strings of actions such as are observed in biological behavior, or strings of symbols such as are observed in language and music. It also suggests how neurons can compute spatial and temporal derivatives and integrals, and solve differential equations.
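Feeding outputs back to inputs turns a lookup net into a finite state automaton: each (state, input) address stores the next state and an output action. The two-state gait controller below is a hypothetical toy example of such an action-string generator, not a model of any particular biological circuit.

```python
# (state, input) -> (next_state, action): a finite state automaton
# realized as a lookup table, with the state output fed back to the input.
transitions = {
    ("stance", "foot_up"):   ("swing",  "lift"),
    ("swing",  "foot_down"): ("stance", "plant"),
}

def run(events, state="stance"):
    """Loop the state back into the input, emitting a string of actions."""
    actions = []
    for event in events:
        state, action = transitions[(state, event)]
        actions.append(action)
    return actions

print(run(["foot_up", "foot_down", "foot_up", "foot_down"]))
# -> ['lift', 'plant', 'lift', 'plant']
```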
A conclusion that can be drawn from this line of research is that neural nets can generate any of the computational processes that can be achieved with either analog or digital computers. Of course, the converse is not necessarily true. There are, in fact, many today (such as Hubert Dreyfus, John Searle, Jerry Fodor, Paul Churchland, and Roger Penrose) who argue that the human brain possesses capabilities (such as the ability to perform abductive inference and symbol grounding, and to solve the frame problem) that at best have not been demonstrated yet in computer systems, and at worst are theoretically impossible to achieve through computational mechanisms.
I respectfully disagree. All perception involves abductive inference. Perception requires making an educated guess (or hypothesis) about what is in the external world based on sensory input. The perception process then tests that hypothesis by using it to predict future sensory input. When observations match predictions, the probability that the hypothesis is correct increases. When observations fail to match predictions, the probability that the hypothesis is correct decreases. When the probability of a hypothesis rises above some belief threshold, the hypothesis is accepted as true. This is a process known to engineers for years as Kalman filtering, or recursive estimation.
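The predict-and-test loop just described is exactly the structure of a scalar recursive estimator. The sketch below (hypothetical numbers, a stationary true value, Gaussian observation noise) corrects the estimate toward each observation in proportion to a gain, while the uncertainty shrinks with every confirming observation:

```python
import random

def recursive_estimate(observations, estimate=0.0, variance=1e6, noise_var=1.0):
    """Scalar recursive estimation: each observation is compared with the
    current prediction, and the estimate is corrected in proportion to
    the gain (the relative confidence in the new data)."""
    for z in observations:
        gain = variance / (variance + noise_var)   # how much to trust the data
        estimate += gain * (z - estimate)          # correct toward the observation
        variance *= (1 - gain)                     # uncertainty shrinks
    return estimate, variance

random.seed(1)
true_value = 5.0
obs = [true_value + random.gauss(0, 1.0) for _ in range(200)]
est, var = recursive_estimate(obs)
print(est, var)  # estimate converges near the true value;
                 # variance shrinks toward noise_var / len(obs)
```

With a very large initial variance the first observation is accepted almost at face value; as evidence accumulates, the gain falls and the estimate becomes increasingly resistant to noise, which is the "rising belief" behavior described above.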
Similarly, symbol grounding is considered to be a problem only by linguists, semioticians, and cognitive philosophers. It is not a problem for control system engineers. In a control system, every symbol is grounded in the world. Every symbol represents an entity, event, or state of the real world. The same is true in manufacturing science. Every symbolic entity, surface, feature, task, tool, and procedure refers to some physical reality that exists or occurs in the manufacturing process.
A similar argument can be applied to the frame problem. Control engineers do not build systems that have a frame problem. In a good control system design, sensors monitor the relevant state variables at a frequency, resolution, and signal-to-noise ratio adequate to control the process within the specified tolerance. All the relevant variables in the internal model are updated fast enough to generate stable control, and irrelevant variables are ignored. The secret to building an intelligent control system is to focus attention on what is important, and ignore what is irrelevant.
The bottom line of this research is that it will soon be possible to build machines that exhibit high levels of intelligence. And for many applications, machine intelligence will equal or exceed that of humans.
Intelligent machines will undoubtedly transform society in many important ways. One of these is by the creation of economic wealth. The wealth producing capabilities of intelligent machines could increase the world per capita wealth to the point where poverty could be eliminated. Eventually all humans could experience a lifestyle that today is reserved only for the rich. A more detailed discussion of how truly intelligent machines could be engineered can be found in my book, Engineering of Mind: An Introduction to the Science of Intelligent Systems (available from Amazon.com).
A PowerPoint presentation of the potential impact of intelligent machines on science, economics, military strength, and human well-being can be found by clicking below.
[Please click here for a Presentation on the Engineering of Mind]
RCS Reference Model Architecture
During the early 1970s, I began working with John Evans and Anthony Barbera and others at what was then the National Bureau of Standards (since renamed the National Institute of Standards and Technology) on the development of a control system called RCS (Real-time Control System). RCS was based on using CMAC as a finite state machine. Over the years, RCS has evolved into a reference model architecture and a design methodology for intelligent systems. The RCS reference model architecture integrates concepts from artificial intelligence, machine vision, expert systems, modern control theory, operations research, differential games, and utility theory. It bridges the gap between symbolic and numeric computation, and integrates deliberative and reactive controllers. It facilitates software and hardware system design, development, and test. And it enables the integration of software from a wide variety of sources. It provides a solid theoretical basis for open architecture standards and performance measures for intelligent systems software.
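The hierarchical decomposition at the heart of RCS can be sketched in a few lines. The command vocabulary below is hypothetical (RCS defines its own task vocabularies for each application); the point is only that each node expands a command from above into simpler commands for the level below, down to primitives:

```python
# Hypothetical three-level task vocabulary: each entry decomposes a
# command into the commands of the level below; commands with no entry
# are primitives handled directly by the lowest-level controllers.
DECOMPOSE = {
    "deliver_part": ["goto_station", "place_part"],   # task level
    "goto_station": ["plan_path", "follow_path"],     # elemental-move level
    "place_part":   ["approach", "release"],
}

def execute(command, depth=0, trace=None):
    """Recursively decompose a command down the control hierarchy,
    recording (level, command) pairs in order of execution."""
    if trace is None:
        trace = []
    trace.append((depth, command))
    for sub in DECOMPOSE.get(command, []):
        execute(sub, depth + 1, trace)
    return trace

for depth, cmd in execute("deliver_part"):
    print("  " * depth + cmd)
```

In a real RCS system each level also runs at its own cycle time, with faster servo loops at the bottom and slower planning loops at the top; this sketch shows only the command-decomposition skeleton.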
The AMRF
During the 1980s, the RCS reference model architecture provided the fundamental integrating principle of the National Bureau of Standards (NBS) Automated Manufacturing Research Facility (AMRF). The AMRF was an $80 million experimental automated factory-of-the-future funded by the U.S. Navy Manufacturing Technology Program and NBS. The success of the AMRF was partially responsible for Congressional legislation that transformed NBS into the National Institute of Standards and Technology (NIST).
One of the spin-offs from the AMRF was the Initial Graphics Exchange Specification (IGES), which evolved into the Product Description Exchange Standard (PDES), and finally into the International Organization for Standardization (ISO) series of standards called STEP. The STEP standards are used extensively around the world in the manufacturing and construction industries for the representation of product data using standardized information models and reference dictionaries. STEP is widely used in the aerospace industries.
RCS Applications
Over the past two decades, scientists and engineers at NIST, in other government agencies, in industry, and academia have applied the RCS reference model architecture to a wide variety of practical problems. It has been used by engineers in a number of companies such as General Electric, Martin Marietta, Boeing, Hughes, General Motors, Lockheed, General Dynamics Robotic Systems, and many others for implementation of advanced control systems for robots, machine tools, factory automation, and autonomous vehicles.
During the 1980s, an RCS controller for an automated deburring and chamfering robotic system was developed in coordination with United Technologies and Pratt & Whitney for jet engine turbine blade manufacturing. The RCS reference model architecture was adopted by NASA as the NBS/NASA Standard Reference Model for Telerobot Control System Architecture (NASREM). NASREM was used by NASA and its prime contractor Martin Marietta for the space station flight telerobotic servicer. It was adopted by the European Space Agency for telerobotic control. A submarine operational automation system based on the RCS reference architecture was developed by General Dynamics Electric Boat Division for the Navy's next generation of nuclear submarines. The NIST-RCS architecture was adopted by the U.S. Bureau of Mines for its coal mine automation program. Advanced Technology Research Corporation (ATR) developed an RCS control system for the U.S. Postal Service for automating a next generation general mail facility and a new stamp distribution facility.
The Enhanced Machine Controller
During the 1990s, Tony Barbera, Fred Proctor, and I led a team of researchers in developing an open architecture Enhanced Machine Controller (EMC) for intelligent machine tools and next generation inspection systems based on RCS principles. The EMC was successfully tested and evaluated by shop personnel in a production prototype shop at the General Motors Powertrain plant in Flint, Michigan. Commercial controllers based on this version of the EMC have been developed by Tony Barbera at Advanced Technology Research Corporation (ATR) for use by industry for water-jet cutters and machining cell controllers. Application Program Interface (API) specifications for open architecture controller standards have been developed in collaboration with an industry consortium consisting of developers, vendors, and users including General Motors, Ford, Chrysler, Boeing, DOE, NIST, and a number of small companies. EMC software and documentation have also been made available to universities and are being incorporated into the curricula of university courses in advanced control systems. More information about RCS is available at http://www.isd.mel.nist.gov/projects/rcs/
Autonomous Vehicle Systems
The RCS has recently been used for controlling intelligent unmanned vehicles. An Army HMMWV using the RCS control architecture has demonstrated the ability to drive on a highway at 100 km per hour under computer control, using only machine vision for steering. In 1998, the most recent version of the RCS reference model architecture (4D/RCS) was selected by the Army Research Laboratory for its highly successful Demo III Experimental Unmanned Vehicle (XUV) program. In a recent series of tests designed to determine the technology readiness of autonomous driving, the Demo III XUVs were driven more than 550 km over rough terrain: 1) in the desert; 2) in the woods, through rolling fields of weeds and tall grass, and on dirt roads and trails; and 3) through an urban environment with narrow streets cluttered with parked cars, dumpsters, culverts, telephone poles, and manikins. Tests were conducted under various conditions including night, day, clear weather, rain, and falling snow. During these tests, the XUVs operated over 90% of both time and distance without any operator assistance. As a result of these tests, the Army decided to move unmanned ground vehicle technology into the System Development and Demonstration phase of the procurement cycle.
The 4D/RCS is currently being used for unmanned vehicle research under the Army Research Laboratory Collaborative Technology Alliance and the DARPA Autonomous Mobile Robot Software program. 4D/RCS has been adopted by the U.S. Army for four major programs: 1) The Army Research Lab follow-on to the Demo III XUV program, 2) the Army Tank and Automotive Research and Development Center Vetronics Technology Integration program, 3) the Army Future Combat System Autonomous Navigation System (ANS), and 4) the Army Future Force Warrior program. The ANS is being manufactured by General Dynamics Robotic Systems and will be installed on all combat vehicles in the future Army.
The 4D/RCS is also serving as the autonomous intelligent control system for our Energetically Autonomous Tactical Robot (EATR) Project, a Small Business Innovation Research (SBIR) Phase II Project sponsored by the Defense Advanced Research Projects Agency (DARPA). As part of the commercialization of technology from the EATR Project, the 4D/RCS may become the autonomous intelligent control architecture for driverless (i.e., driver-optional) cars and trucks.
[Please click here for a Presentation on Intelligent Control and Tactical Behavior]
[Please click here for a Paper on Intelligent Control and Tactical Behavior]
[Please click here for a Report on Achieving Intelligent Performance in Autonomous Driving]
A Computational Model of Mind
In the past three years, the 4D/RCS reference model has been extended to a cognitive architecture for intelligent multi-agent systems. This cognitive architecture is designed to enable any level of intelligent behavior, up to and including human levels of performance in driving vehicles and coordinating tactical behaviors between autonomous air, ground, and amphibious vehicle systems.
The theory underlying the 4D/RCS cognitive architecture addresses long-standing fundamental theoretical questions regarding whether computational processes are capable of emulating the functional processes of perception, knowledge representation, symbol grounding, cognition, abductive inference, decision making, and planning in the brain. It provides a theoretical basis for understanding how the machinery of the brain generates the processes of the mind. It also addresses practical experimental issues of computation, signal processing, image understanding, situation assessment, planning, control, and human-machine interfaces for intelligent machines, and collaborating systems of intelligent machines. This theory has been published in a number of journal articles and in a book entitled Engineering of Mind: An Introduction to the Science of Intelligent Systems (Wiley Science Series). Engineering of Mind is available from John Wiley & Sons, New York, or Amazon.com.
I believe that the capabilities deriving from this work will transform society in many important ways.
My Vision
A world without poverty
Imagine a world without poverty, or hunger, or homelessness. Imagine a world where all live in decent homes, with plenty to eat. Imagine a world where everyone has access to a good education, and everyone lives in a safe clean community without fear of crime or violence. Imagine a world where every person has good medical care, every senior citizen can look forward to a secure retirement, and all disabled and elderly can afford assisted living and elder care in their declining years.
A world of prosperity
Imagine a world where economic wealth is efficiently created and equitably distributed. Imagine a world where jobs are plentiful, unemployment is low, and workers are well paid. Imagine a world where every adult receives an income from dividends on capital assets that is large enough to support a decent standard of living. Imagine a world where average income consistently increases more than 5% per year.
A world of opportunity
Imagine a world where opportunities abound to become rich, but no one is poor. Imagine a world, where there is no limit at the top for how wealthy anyone can become, but there is a limit at the bottom that prevents anyone from sinking into poverty. Imagine a world where everyone has an opportunity to succeed in any field of endeavor, including professional, business, and finance – and no one is homeless, or hungry, or without financial means. Imagine a world where many are wealthy, and all are financially secure.
A world without pollution
Imagine a world where growing economic prosperity and high living standards are environmentally friendly. Imagine a world where society is both able and willing to afford the cost of a clean environment. Imagine a world where the air is pure, the rivers are clean, streams and lakes are clear, the environment is protected, and wild life and wilderness are preserved for future generations. Imagine a world where clean renewable energy is inexpensive and easily available. Imagine a world where the power to light, heat, and air condition our homes, to power our cars, to run our transportation systems, to brighten our cities, and to power our industrial facilities is derived entirely from renewable sources that generate no air or water pollution and create no risk of radioactive contamination.
A world without war
Imagine a world where nations live in prosperity and peace without terrorism or war. Imagine a world without dictators and tyrants. Imagine a world where poor and ignorant masses no longer exist to be exploited by political and religious zealots. Imagine a world where all are too well educated and financially secure to be seduced by bigots or tyrants. Imagine a world where historical grievances have been consigned to the past and where political passions are inspired by realistic hopes and dreams of opportunity, prosperity, and longevity.
My plan to achieve my vision
My plan is simple in concept. Increase the rate of wealth production by raising the rate of investment through public funding and personal savings. Distribute the returns on public investments directly to the people through dividends. I call this Peoples' Capitalism: public money is raised and invested in profit-making enterprises that pay dividends to the public on an equal per capita basis. The result would be rapid and stable economic growth, from which everyone would benefit. Everyone would own a share in the means of production. Everyone would be a capitalist. Eventually, public dividends would rise to the point where poverty would cease to exist. A clean environment would be affordable, and the primary source of tribal, national, and religious conflict would be eliminated. Humankind would enter a golden age of peace, prosperity, and opportunity. Please click below for the book Peoples Capitalism: The Economics of the Robot Revolution, which explains my vision in detail.
[Please click here for Peoples Capitalism: The Economics of the Robot Revolution]