Anticipation (artificial intelligence)
In artificial intelligence (AI), anticipation is the concept of an agent making decisions based on predictions, expectations, or beliefs about the future. Anticipation is widely considered a vital component of complex natural cognitive systems. As a branch of AI, anticipatory systems is a specialization that still echoes the debates of the 1980s over whether AI requires an internal model.
Reaction, proaction and anticipation
Elementary forms of artificial intelligence can be constructed using a policy based on simple if-then rules. An example of such a system would be an agent following the rule
If it rains outside, take the umbrella. Otherwise, leave the umbrella at home.
A system such as the one defined above might be viewed as inherently reactive, because its decisions are based on the current state of the environment with no explicit regard to the future. An agent employing anticipation would instead try to predict the future state of the environment (the weather, in this case) and use those predictions in its decision making. For example,
If the sky is cloudy and the air pressure is low, it will probably rain soon, so take the umbrella with you. Otherwise, leave the umbrella at home.
These rules appear more proactive, because they explicitly take possible future events into account. Notice, though, that in terms of representation and reasoning the two rule sets are identical: both behave in response to existing conditions (the code sketch after the list below makes this concrete). Note too that both systems assume the agent is proactively
- leaving the house, and
- trying to stay dry.
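To make the comparison concrete, here is a minimal sketch of the two policies as condition-action rules. The Weather type and sensor names are illustrative assumptions, not from the literature; the point is that both policies have the same form, mapping current sensor readings to an action.

```python
from dataclasses import dataclass

@dataclass
class Weather:
    raining: bool
    cloudy: bool
    low_pressure: bool

def reactive_policy(w: Weather) -> str:
    # "If it rains outside, take the umbrella."
    return "take umbrella" if w.raining else "leave umbrella"

def proactive_looking_policy(w: Weather) -> str:
    # "If the sky is cloudy and the air pressure is low, it will
    # probably rain soon, so take the umbrella."
    return "take umbrella" if (w.cloudy and w.low_pressure) else "leave umbrella"

print(reactive_policy(Weather(raining=True, cloudy=True, low_pressure=False)))
print(proactive_looking_policy(Weather(raining=False, cloudy=True, low_pressure=True)))
```

Despite the second policy's forward-sounding wording, both are conditions on the current state; neither computes a prediction at run time.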
In practice, systems incorporating reactive planning tend to be autonomous systems proactively pursuing at least one, and often many, goals. What defines anticipation in an AI model is the explicit existence of an inner model of the environment within the anticipatory system (sometimes a model that includes the system itself). For example, if the phrase "it will probably rain" were computed online, in real time, the system would be seen as anticipatory.
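By contrast, the following sketch shows an anticipatory variant: an explicit (toy) model of the environment computes "it will probably rain" online, at decision time, and the decision is made from that prediction. The probabilities and function names are illustrative assumptions.

```python
def rain_probability(cloudy: bool, low_pressure: bool) -> float:
    """Toy predictive model of the environment's next state."""
    p = 0.05                     # assumed base rate of rain
    if cloudy:
        p += 0.35
    if low_pressure:
        p += 0.40
    return min(p, 1.0)

def anticipatory_policy(cloudy: bool, low_pressure: bool) -> str:
    # The prediction is produced by the inner model in real time,
    # then used for the decision.
    if rain_probability(cloudy, low_pressure) > 0.5:
        return "take umbrella"
    return "leave umbrella"

print(anticipatory_policy(cloudy=True, low_pressure=True))    # take umbrella
print(anticipatory_policy(cloudy=False, low_pressure=False))  # leave umbrella
```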
In 1985, Robert Rosen defined an anticipatory system as follows:[1]
A system containing a predictive model of itself and/or its environment, which allows it to change state at an instant in accord with the model's predictions pertaining to a later instant.
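One possible reading of this definition as a code sketch, with the caveat that the class and method names below are assumptions made for illustration, not Rosen's formalism:

```python
class AnticipatorySystem:
    """Holds a predictive model and changes state now, in accord with
    a prediction about a later instant (t + horizon)."""

    def __init__(self, model, state):
        self.model = model      # predictive model of self and/or environment
        self.state = state

    def step(self, horizon):
        # Predict the state at a later instant...
        predicted = self.model(self.state, horizon)
        # ...and change state at the current instant accordingly.
        self.state = self.react_to(predicted)

    def react_to(self, predicted):
        return predicted        # trivial policy: move toward the prediction

# e.g. a toy model that expects linear drift over the horizon
system = AnticipatorySystem(model=lambda s, h: s + 0.1 * h, state=0.0)
system.step(horizon=5)
print(system.state)  # 0.5: state changed now, based on a prediction about later
```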
In Rosen's analysis, even the example "It's raining outside, therefore take the umbrella" involves a prediction: namely, that "if it is raining, I will get wet out there unless I have my umbrella". In that sense, even though it is already raining outside, the decision to take an umbrella is not purely reactive; it relies on a predictive model of what will happen if the umbrella is left behind.
To some extent, Rosen's definition of anticipation applies to any system incorporating machine learning. At issue is how much of a system's behaviour should, or indeed can, be determined by reasoning over dedicated representations, how much by online planning, and how much must be provided by the system's designers.
Anticipation in evolution and cognition
The anticipation of future states is also a major evolutionary and cognitive advance (Sjolander 1995). Anticipatory agents in Rosen's sense are easy to recognize in the human mental capability of making a decision at a time T while taking into account the effects of one's own actions at future times T+k. However, Rosen (a theoretical biologist) describes all living organisms as examples of naturally occurring anticipatory systems, which implies that somatic predictive models (somatic meaning "of the body", i.e. physical) are components within the organization of all living organisms; no mental process is required for anticipation. In his book Anticipatory Systems, Rosen describes how even single-celled organisms manifest this behavior pattern. It is therefore logical to hypothesize that, if life is anticipatory in this sense, the evolution of the conscious mind (as human beings experience it) may be a natural concentration and amplification of the anticipatory nature of life itself.
Machine learning methods began to integrate anticipatory capabilities in an implicit form, as in reinforcement learning systems (Sutton & Barto, 1998; Balkenius, 1995[2]), which learn to anticipate the future rewards and punishments caused by current actions. Moreover, anticipation has enhanced the performance of machine learning techniques in complex environments where agents must guide their attention to collect the information needed to act (Balkenius & Hulth, 1999).
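As an illustration of this implicit anticipation, here is a minimal tabular TD(0) sketch in the spirit of Sutton and Barto (1998); the three-state chain, reward values, and learning parameters are assumptions chosen for brevity. The learned value V(s) comes to anticipate the discounted reward that follows state s.

```python
states = [0, 1, 2]            # a chain 0 -> 1 -> 2; state 2 is terminal
reward = {0: 0.0, 1: 0.0, 2: 1.0}
V = {s: 0.0 for s in states}  # value estimates
alpha, gamma = 0.1, 0.9       # learning rate, discount factor

for _ in range(500):
    s = 0
    while s != 2:
        s_next = s + 1                              # deterministic transition
        r = reward[s_next]
        # TD(0) update: move V(s) toward r + gamma * V(s')
        V[s] += alpha * (r + gamma * V[s_next] - V[s])
        s = s_next

print(V)  # V(1) ~ 1.0 and V(0) ~ 0.9: values anticipate the future reward
```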
From anticipation to curiosity
Jürgen Schmidhuber modified the error backpropagation algorithm to change neural network weights so as to decrease the mismatch between anticipated states and the states actually experienced later (Schmidhuber, "Adaptive curiosity and adaptive confidence", 1991). He introduced the concept of curiosity for agents as a measure of the mismatch between expectations and subsequently experienced reality. Agents able to monitor and control their own curiosity seek out situations where they expect novel experiences, and are generally better able to deal with complex environments than agents that cannot.
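The following sketch illustrates the general idea rather than Schmidhuber's original 1991 architecture: a simple forward model is trained by gradient descent to predict the next observation, and its remaining prediction error is read off as an intrinsic curiosity signal. All names and constants are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=2) * 0.1      # weights of a linear forward model
lr = 0.05                         # learning rate

def predict(obs_action: np.ndarray) -> float:
    return float(w @ obs_action)  # model's predicted next observation

def world(obs: float, action: float) -> float:
    return 0.5 * obs + 0.3 * action  # dynamics unknown to the agent

for step in range(1000):
    x = rng.uniform(-1, 1, size=2)   # (observation, action) pair
    target = world(x[0], x[1])       # what actually happens next
    error = predict(x) - target      # anticipated vs. experienced state
    curiosity = error ** 2           # intrinsic signal: expected surprise
    w -= lr * 2 * error * x          # gradient step shrinking the mismatch

print(round(curiosity, 6))  # near zero: the model now predicts well here
```

In a full agent, the curiosity signal would drive exploration toward situations where the model still predicts poorly, i.e. where novel experience is expected.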
See also
- Action selection
- Cognition
- Dynamic planning
- History of artificial intelligence
- MindRACES
- Nature and nurture
- Physical symbol system hypothesis
- Strong AI
- Robert Rosen
- Teleonomy
References
- [1] Rosen, Robert (1985). Anticipatory Systems: Philosophical, Mathematical, and Methodological Foundations. Pergamon Press.
- [2] Balkenius, C. (1995). Natural Intelligence in Artificial Creatures. Lund University Cognitive Studies, 37. ISBN 91-628-1599-7.
External links
- MindRACES: From Reactive to Anticipatory Cognitive Embodied Systems, http://www.mindraces.org, 2004