PEAS
P.E.A.S. is an acronym in artificial intelligence that stands for Performance, Environment, Actuators, Sensors.
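
As an illustration, a PEAS description can be written down as a simple record. The sketch below is a minimal, hypothetical Python rendering of the well-known automated-taxi example from Russell and Norvig; the field contents are illustrative, not exhaustive.

from dataclasses import dataclass

@dataclass
class PEAS:
    performance: list
    environment: list
    actuators: list
    sensors: list

taxi = PEAS(
    performance=["safe", "fast", "legal", "comfortable trip", "maximize profits"],
    environment=["roads", "other traffic", "pedestrians", "customers"],
    actuators=["steering", "accelerator", "brake", "signal", "horn"],
    sensors=["cameras", "sonar", "speedometer", "GPS", "odometer"],
)
print(taxi.actuators)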

Performance

The performance measure defines the criterion of success: it evaluates the quality of the actions the agent has taken. Examples include a safe, fast, legal, and comfortable trip, or maximized profits.
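
In code, a performance measure can be sketched as a function that scores the history of world states. The following minimal example uses hypothetical names from a vacuum-cleaner world and awards one point per clean square per time step:

def performance(history):
    # history: one world state per time step; each state maps
    # a square name to "clean" or "dirty".
    return sum(
        1
        for state in history
        for status in state.values()
        if status == "clean"
    )

history = [
    {"A": "dirty", "B": "dirty"},
    {"A": "clean", "B": "dirty"},
    {"A": "clean", "B": "clean"},
]
print(performance(history))  # 3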

Environment

The environment is the part of the world in which the agent operates. Environments are described by the following main properties:

Fully observable vs. partially observable (Accessible vs. inaccessible)

If an agent's sensors give it access to the complete state of the environment at each
point in time, then we say that the task environment is fully observable. A task
environment is effectively fully observable if the sensors detect all aspects that are relevant
to the choice of action; relevance, in turn, depends on the performance measure. Fully
observable environments are convenient because the agent need not maintain any
internal state to keep track of the world. An environment might be partially observable
because of noisy and inaccurate sensors or because parts of the state are simply missing
from the sensor data—for example, a vacuum agent with only a local dirt sensor cannot
tell whether there is dirt in other squares, and an automated taxi cannot see what other
drivers are thinking.
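
A common way to cope with partial observability is to maintain internal state. The sketch below, with illustrative names only, shows a vacuum agent whose dirt sensor covers just its current square remembering which squares it has already seen clean:

class LocalSensorVacuum:
    """Two-square vacuum world; the sensor only covers the current square."""

    def __init__(self):
        self.known_clean = set()  # internal state standing in for missing percepts

    def act(self, location, dirt_here):
        if dirt_here:
            return "Suck"
        self.known_clean.add(location)
        # Move toward the square we have not yet confirmed clean.
        return "Right" if location == "A" else "Left"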

Deterministic vs. stochastic (non-deterministic)

If the next state of the environment is completely determined by the current state and
the action executed by the agent, then we say the environment is deterministic;
otherwise, it is stochastic. In principle, an agent need not worry about uncertainty in a fully
observable, deterministic environment. If the environment is partially observable,
however, then it could appear to be stochastic. This is particularly true if the environment
is complex, making it hard to keep track of all the unobserved aspects. Thus, it is often
better to think of an environment as deterministic or stochastic from the point of view of
the agent. Taxi driving is clearly stochastic in this sense, because one can never predict
the behavior of traffic exactly; moreover, one's tires blow out and one's engine seizes
up without warning. The vacuum world as we described it is deterministic, but
variations can include stochastic elements such as randomly appearing dirt and an unreliable
suction mechanism. If the environment is deterministic except for the
strategic actions of other agents, we say that the environment is strategic.
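
These stochastic variations are straightforward to model. Below is a minimal sketch, with hypothetical probabilities, of a vacuum-world transition in which suction sometimes fails and dirt reappears at random:

import random

def step(state, location, action, dirt_prob=0.1, suck_success=0.8):
    # state: dict mapping each square to "clean" or "dirty"
    next_state = dict(state)
    if action == "Suck" and random.random() < suck_success:
        next_state[location] = "clean"      # unreliable suction mechanism
    for square in next_state:
        if random.random() < dirt_prob:
            next_state[square] = "dirty"    # randomly appearing dirt
    return next_state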

Episodic vs. sequential (non-episodic)

In an episodic task environment, the agent's experience is divided into atomic episodes.
Each episode consists of the agent perceiving and then performing a single action.
Crucially, the next episode does not depend on the actions taken in previous episodes. In
episodic environments, the choice of action in each episode depends only on the episode
itself. Many classification tasks are episodic. For example, an agent that has to spot
defective parts on an assembly line bases each decision on the current part, regardless
of previous decisions; moreover, the current decision doesn't affect whether the next part is defective. In sequential environments, on the other hand, the current decision
could affect all future decisions. Chess and taxi driving are sequential: in both cases,
short-term actions can have long-term consequences. Episodic environments are much
simpler than sequential environments because the agent does not need to think ahead.
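
The distinction is visible in the shape of the agent program: an episodic agent can be a stateless function of the current percept, while a sequential agent generally needs memory. A hypothetical sketch:

def episodic_inspector(part):
    # Each decision depends only on the current part on the line.
    return "reject" if part == "defective" else "accept"

class SequentialDriver:
    def __init__(self):
        self.history = []   # earlier percepts and actions matter later

    def act(self, percept):
        self.history.append(percept)
        # A real policy would consult the whole history here.
        return "continue"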

Static vs dynamic

If the environment can change while an agent is deliberating, then we say the environment is dynamic for that agent; otherwise, it is static. Static environments are easy to
deal with because the agent need not keep looking at the world while it is deciding on an action, nor need it worry about the passage of time. Dynamic environments, on the
other hand, are continuously asking the agent what it wants to do; if it hasn't decided yet, that counts as deciding to do nothing. If the environment itself does not change with the passage of time but the agent's performance score does, then we say the environment is semidynamic. Taxi driving is clearly dynamic: the other cars and the taxi itself keep moving while the driving algorithm dithers about what to do next. Chess, when played with a clock, is semidynamic. Crossword puzzles are static.
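
The semidynamic case can be sketched as a state that is frozen while the agent deliberates but a score that is not, as in chess with a clock. The names below are purely illustrative:

import time

def timed_move(agent, board, clock_remaining):
    start = time.monotonic()
    move = agent.choose_move(board)             # the board waits for us...
    elapsed = time.monotonic() - start
    return move, clock_remaining - elapsed      # ...but the clock does not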

Discrete vs continuous

The discrete/continuous distinction can be applied to the state of the environment, to
the way time is handled, and to the percepts and actions of the agent. For example, a
discrete-state environment such as a chess game has a finite number of distinct states.
Chess also has a discrete set of percepts and actions. Taxi driving is a continuous-
state and continuous-time problem: the speed and location of the taxi and of the other
vehicles sweep through a range of continuous values and do so smoothly over time.
Taxi-driving actions are also continuous (steering angles, etc.). Input from digital
cameras is discrete, strictly speaking, but is typically treated as representing continuously
varying intensities and locations.

Single-Agent vs Multi-Agent

As the names suggest, a single-agent environment is one that contains only one agent. That agent does not have to account for other agents in the environment and can be concerned solely with how its own actions affect the world.

In a multi-agent environment, on the other hand, agents need to account for the actions of other agents. In particular, if the agents are in direct competition with one another, the environment is said to be competitive, whereas if the agents work toward shared goals, it is said to be cooperative. Note that the two qualities are not mutually exclusive: an environment can be both competitive and cooperative to different degrees.

An example of a cooperative environment would be a G-rated driving game: none of the agents in the world (usually) want to crash into each other. In a game such as a destruction derby, by contrast, agents are interested in crashing into one another, and the environment leans heavily toward the competitive side.

Actuators

Actuators are the set of devices that the agent can use to perform actions. For a computer, it can be a printer or a screen. For a mechanical robot, it can be an engine.

Sensors

Sensors allow the agent to collect the percept sequence that will be used for deliberating on the next action.
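
Sensors and actuators meet in the standard agent loop: read a percept, append it to the percept sequence, choose an action, act. A minimal generic sketch with hypothetical method names:

def run(agent, environment, steps=100):
    percepts = []
    for _ in range(steps):
        percepts.append(environment.percept())   # sensors extend the percept sequence
        action = agent.program(percepts)          # deliberate on the sequence so far
        environment.execute(action)               # actuators change the world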

Wumpus World Example

This example is based on the Wumpus World game, in which a cave of rooms arranged in a grid provides a controlled space for an agent to explore.

Performance
The performance of the agent in the Wumpus World is measured in points. The agent gains points for killing the Wumpus or for exiting the cave with the gold, and it loses points if it falls into a pit, takes a turn, or fires an arrow.


Environment
The environment is a grid of a given size; we'll say 4x4. The agent always starts in the bottom-left square, labeled (1,1). The Wumpus and the gold are each placed on a square other than the starting square. There may also be pits scattered throughout the grid.
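
A layout like this is simple to generate. The sketch below is hypothetical, and the 0.2 pit probability is a common textbook choice rather than something mandated by the description above:

import random

def make_world(size=4, pit_prob=0.2):
    squares = [(x, y) for x in range(1, size + 1) for y in range(1, size + 1)]
    free = [s for s in squares if s != (1, 1)]     # (1,1) is the start square
    wumpus = random.choice(free)
    gold = random.choice(free)
    pits = {s for s in free if random.random() < pit_prob}
    return {"wumpus": wumpus, "gold": gold, "pits": pits}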


Actuators
The agent has several actions it may take, such as MoveForward, TurnLeft, TurnRight, PickUpGold, FireArrow, and ExitCave. These actions are used to traverse the grid, pick up the gold when it is detected, kill the Wumpus, and leave the cave.
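
As an illustration of how an actuator changes the world, here is a hypothetical sketch of MoveForward on the grid; note that walking into a wall leaves the agent in place and triggers the Bump sensor described below:

def move_forward(position, facing, size=4):
    # facing is one of "N", "E", "S", "W"
    dx, dy = {"N": (0, 1), "E": (1, 0), "S": (0, -1), "W": (-1, 0)}[facing]
    x, y = position[0] + dx, position[1] + dy
    if 1 <= x <= size and 1 <= y <= size:
        return (x, y), False    # moved; no bump
    return position, True       # hit a wall; the Bump sensor fires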


Sensors
The agent can have five basic sensors to detect the state of its current square and the adjacent ones.

Stench

There is a Wumpus in an adjacent square.

Breeze

There is a pit in an adjacent square.

Glitter

The gold is located in the current square.

Bump

The agent has hit a wall.

Scream

The Wumpus has been killed.

The state of each sensor, either true or false, is sent to the agent program, which then decides which actuators to fire.
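
A percept can thus be represented as a five-tuple of booleans, and a simple reflex layer can map it to an action. The rules below are a hypothetical sketch, not a competent Wumpus agent:

from collections import namedtuple

Percept = namedtuple("Percept", ["stench", "breeze", "glitter", "bump", "scream"])

def reflex(percept):
    if percept.glitter:
        return "PickUpGold"
    if percept.bump:
        return "TurnLeft"
    if percept.stench:
        return "FireArrow"
    return "MoveForward"

print(reflex(Percept(False, False, True, False, False)))  # PickUpGold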


Example based on the text Artificial Intelligence: A Modern Approach, 3rd Edition, by Russell and Norvig.
