Types of Agents (Goal-Based and Utility-Based Agents)



Before reading this article, you should know the answer to "what are the types of agents?" and be familiar with the simple reflex agent and the model-based reflex agent. Click on a topic to read it.


Goal-Based Agents:


Knowing something about the current state of the environment is not always enough to decide what to do. For example, at a road junction, the taxi can turn left, turn right, or go straight on. The correct decision depends on where the taxi is trying to get to. In other words, as well as a current state description, the agent needs some sort of goal information that describes situations that are desirable - for example, being at a particular destination. The agent program can combine this with the model (the same information as was used in the model-based reflex agent) to choose actions that achieve the goal. The figure below shows the goal-based agent's structure.




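To make the structure concrete, here is a minimal, runnable sketch of a goal-based agent in Python. The one-dimensional world, the ACTIONS table, and the predict transition model are toy assumptions invented for illustration; they are not the book's code.

```python
# A minimal sketch of a goal-based agent, assuming a toy one-dimensional
# world where the state is the taxi's position and each action shifts it.
# The transition model and goal test are invented for illustration.

ACTIONS = {"left": -1, "right": +1, "stay": 0}

def predict(position, action):
    """Transition model: the position that results from taking the action."""
    return position + ACTIONS[action]

def goal_based_agent(position, destination):
    """Pick an action whose predicted outcome satisfies the goal."""
    for action in ACTIONS:
        # "What will happen if I do this, and will it achieve my goal?"
        if predict(position, action) == destination:
            return action
    # No single action reaches the goal; a real agent would hand this off
    # to search or planning over action sequences.
    return "right" if destination > position else "left"

print(goal_based_agent(position=3, destination=4))  # -> right
```

The key point is the predict step: unlike a reflex agent, this agent asks what each action would lead to before committing to it.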
Sometimes goal-based action selection is straightforward - for example, when goal satisfaction results immediately from a single action. Sometimes it will be trickier - for example, when the agent has to consider long sequences of twists and turns in order to find a way to achieve the goal. Search and planning are the subfields of AI devoted to finding action sequences that achieve the agent's goals.


Notice that decision making of this kind is fundamentally different from the condition-action rules described earlier, in that it involves consideration of the future - both "What will happen if I do such-and-such?" and "Will that make me happy?" In the reflex agent designs, this information is not explicitly represented, because the built-in rules map directly from percepts to actions. The reflex agent brakes when it sees brake lights, period. It has no idea why. A goal-based agent brakes when it sees brake lights because that's the only action that it predicts will achieve its goal of not hitting other cars.
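A hedged side-by-side sketch may help here. Both agents see the same percept, but only the goal-based one reasons about what happens next; the percept fields, speeds, and the 5-meter safety threshold are made-up toy details, not the authors' code.

```python
# A toy contrast between the two designs.

def reflex_agent(percept):
    # Condition-action rule: the percept maps directly to an action.
    if percept["brake_lights_ahead"]:
        return "brake"
    return "drive"

def goal_based_agent(percept, speed_mps, gap_m):
    # Considers the future: brake only if continuing would violate the
    # goal of keeping a safe following distance (toy one-second lookahead).
    predicted_gap = gap_m - speed_mps * 1.0
    if percept["brake_lights_ahead"] and predicted_gap < 5.0:
        return "brake"
    return "drive"

percept = {"brake_lights_ahead": True}
print(reflex_agent(percept))                               # always "brake"
print(goal_based_agent(percept, speed_mps=2.0, gap_m=50))  # "drive": gap still safe
```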


Although the goal-based agent appears less efficient, it is more flexible because the knowledge that supports its decisions is represented explicitly and can be modified. For example, a goal-based agent’s behavior can easily be changed to go to a different destination, simply by specifying that destination as the goal. The reflex agent’s rules for when to turn and when to go straight will work only for a single destination; they must all be replaced to go somewhere new.


Utility-Based Agents:


Goals alone are not enough to generate high-quality behavior in most environments. For example, many action sequences will get the taxi to its destination (thereby achieving the goal), but some are quicker, safer, more reliable, or cheaper than others. Goals just provide a crude binary distinction between "happy" and "unhappy" states. A more general performance measure should allow a comparison of different world states according to exactly how happy they would make the agent. Because "happy" does not sound very scientific, economists and computer scientists use the term utility instead.


We have already seen that a performance measure assigns a score to any given sequence of environment states, so it can easily distinguish between more and less desirable ways of getting to the taxi’s destination. An agent’s utility function is essentially an internalization of the performance measure. Provided that the internal utility function and the external performance measure are in agreement, an agent that chooses actions to maximize its utility will be rational according to the external performance measure.


Let us emphasize again that this is not the only way to be rational - we have already seen a rational agent program for the vacuum world that has no idea what its utility function is - but, like goal-based agents, a utility-based agent has many advantages in terms of flexibility and learning. Furthermore, in two kinds of cases, goals are inadequate but a utility-based agent can still make rational decisions. First, when there are conflicting goals, only some of which can be achieved (for example, speed and safety), the utility function specifies the appropriate tradeoff. Second, when there are several goals that the agent can aim for, none of which can be achieved with certainty, utility provides a way in which the likelihood of success can be weighed against the importance of the goals.
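As a toy illustration of the first case, a utility function can encode the speed-versus-safety tradeoff as a weighted sum. The weights and per-route scores below are arbitrary assumptions, not values from the book.

```python
# A toy utility function encoding a tradeoff between conflicting goals.

def utility(route, w_speed=0.4, w_safety=0.6):
    # Weighted combination: neither goal simply wins or loses outright.
    return w_speed * route["speed"] + w_safety * route["safety"]

routes = [
    {"name": "freeway",      "speed": 0.9, "safety": 0.5},  # utility 0.66
    {"name": "side streets", "speed": 0.4, "safety": 0.9},  # utility 0.70
]

best = max(routes, key=utility)
print(best["name"])  # -> side streets
```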


Partial observability and nondeterminism are ubiquitous in the real world, and so, therefore, is decision making under uncertainty. Technically speaking, a rational utility-based agent chooses the action that maximizes the expected utility of the action outcomes - that is, the utility the agent expects to derive, on average, given the probabilities and utilities of each outcome. In further lessons we'll see that any rational agent must behave as if it possesses a utility function whose expected value it tries to maximize. An agent that possesses an explicit utility function can make rational decisions with a general-purpose algorithm that does not depend on the specific utility function being maximized. In this way, the "global" definition of rationality - designating as rational those agent functions that have the highest performance - is turned into a "local" constraint on rational-agent designs that can be expressed in a simple program.
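A short sketch of "choose the action that maximizes expected utility": for each action, weight the utility of each possible outcome by its probability and sum. The outcome probabilities and utilities below are invented numbers purely for illustration.

```python
# A sketch of expected-utility maximization. Each action maps to
# (probability, utility) pairs over its possible outcomes.

def expected_utility(outcomes):
    # EU(action) = sum of P(outcome) * U(outcome) over all outcomes.
    return sum(p * u for p, u in outcomes)

actions = {
    "highway":  [(0.9, 10), (0.1, -50)],  # usually fast, small crash risk: EU = 4.0
    "backroad": [(1.0, 6)],               # slower but certain:             EU = 6.0
}

best = max(actions, key=lambda a: expected_utility(actions[a]))
print(best)  # -> backroad: the highway's crash risk outweighs its speed
```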





The utility-based agent's structure appears in the accompanying figure. Utility-based agent programs appear in further lessons, where we design decision-making agents that must handle the uncertainty inherent in nondeterministic or partially observable environments. Decision making in multiagent environments is also studied in the framework of utility theory, as explained in later topics.


Reference: Russell, Stuart J., and Peter Norvig. Artificial Intelligence: A Modern Approach. Fourth Edition.





