Properties of Environment

1] Fully Observable vs. Partially Observable.

If an agent's sensors give it access to the complete state of the environment at each point in time, the task environment is fully observable. A task environment is effectively fully observable if the sensors detect all aspects that are relevant to the choice of action; relevance, in turn, depends on the performance measure. Fully observable environments are convenient because the agent need not maintain any internal state to keep track of the world. An environment might be only partially observable because of noisy and inaccurate sensors, or because parts of the state are simply missing from the sensor data.

For example, a vacuum agent with only a local dirt sensor cannot tell whether there is dirt in other squares.
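
To make the partial-observability idea concrete, here is a minimal Python sketch of a two-square vacuum world; all names (VacuumWorld, percept, full_state) are hypothetical, chosen only for illustration. The percept reveals dirt in the agent's own square only, so the agent cannot observe the rest of the state.

    import random

    # Two-square vacuum world (hypothetical illustration). The true state
    # covers both squares, but the percept exposes only the square the
    # agent is standing in: the environment is partially observable.
    class VacuumWorld:
        def __init__(self):
            self.dirt = {"A": random.choice([True, False]),
                         "B": random.choice([True, False])}
            self.location = "A"

        def percept(self):
            # The agent sees its location and the dirt in THAT square only.
            return (self.location, self.dirt[self.location])

        def full_state(self):
            # A fully observable variant would hand the agent this instead.
            return (self.location, dict(self.dirt))

    env = VacuumWorld()
    print("agent percept:", env.percept())   # says nothing about square B
    print("true state:  ", env.full_state())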

2] Deterministic vs. Stochastic.

If the next state of the environment is completely determined by the current state and the action executed by the agent, we say the environment is deterministic; otherwise, it is stochastic.
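
As a small sketch of the difference (the function names and the 0.2 slip probability are made up for illustration), a deterministic transition function maps a state and an action to exactly one next state, while a stochastic one yields only a distribution over next states:

    import random

    # Hypothetical one-dimensional walk over integer positions.
    def deterministic_step(state, action):
        # The next state is fully determined by (state, action).
        return state + (1 if action == "right" else -1)

    def stochastic_step(state, action):
        # With probability 0.2 the move "slips" and the agent stays put,
        # so (state, action) fixes only a distribution over next states.
        if random.random() < 0.2:
            return state
        return deterministic_step(state, action)

    print(deterministic_step(5, "right"))  # always 6
    print(stochastic_step(5, "right"))     # usually 6, sometimes 5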

3] Episodic vs. Sequential.

In an episodic task environment, the agent's experience is divided into atomic episodes, each consisting of the agent perceiving and then performing a single action. Crucially, the actions taken in earlier episodes have no bearing on later ones: in an episodic environment, the choice of action in each episode depends only on the episode itself. Many classification tasks are episodic.

For example, an agent on an assembly line that has to spot defective parts bases each decision on the current part alone, regardless of previous decisions; moreover, the current decision does not affect whether the next part is defective. In sequential environments, by contrast, the current decision can influence all future decisions, as the sketch below illustrates.
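
The following sketch contrasts the two cases; the inspection rule and the 10.5 mm threshold are invented for illustration. In the episodic inspector, each call depends only on its own input; in the sequential walk, each action changes the state that later decisions see.

    # Episodic: each decision depends only on the current percept.
    def inspect(part_width_mm):
        # Hypothetical spec: parts wider than 10.5 mm are defective.
        return "reject" if part_width_mm > 10.5 else "accept"

    # Each call is an independent episode; earlier calls change nothing.
    for width in [10.2, 11.0, 10.4]:
        print(inspect(width))

    # Sequential: each action changes the state that future decisions see.
    position = 0
    for action in ["right", "right", "left"]:
        position += 1 if action == "right" else -1
    print("final position:", position)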

4] Static vs. Dynamic.

If the environment can change while the agent is deliberating, we say it is dynamic; otherwise, it is static. Static environments are easy to deal with because the agent need not keep watching the world while it decides on an action, nor need it worry about the passage of time. A dynamic environment, in effect, is continuously asking the agent what it wants to do; if the agent has not decided yet, that counts as deciding to do nothing. If the environment itself does not change with the passage of time but the agent's performance score does, the environment is semi-dynamic.

For example, taxi driving is clearly dynamic: the other cars and the taxi itself keep moving while the driving algorithm mulls over its next move. Chess, when played with a clock, is semi-dynamic, whereas crossword puzzles are static.
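
A semi-dynamic setting can be sketched in a few lines, in the spirit of chess with a clock; the base score of 100 and the penalty rate are arbitrary illustrative values. The state the agent reasons about never changes, but its score decays while it deliberates:

    import time

    # Hypothetical semi-dynamic scoring: the environment's state is fixed,
    # but the performance score drops with elapsed deliberation time.
    def score_with_clock(base_score, started_at, penalty_per_second=1.0):
        elapsed = time.monotonic() - started_at
        return base_score - penalty_per_second * elapsed

    start = time.monotonic()
    time.sleep(0.1)                        # the agent "deliberates"
    print(score_with_clock(100.0, start))  # slightly below 100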

5] Discrete vs. Continuous.

The discrete/continuous distinction applies to the state of the environment, to the way time is handled, and to the agent's percepts and actions.

A chess game, for example, has a discrete-state environment with a finite number of distinct states, and chess also has a discrete set of percepts and actions.

The speed and location of the taxi and of the other vehicles sweep through a range of continuous values, and do so smoothly over time, making taxi driving a continuous-state and continuous-time problem. Taxi-driving actions are also continuous (steering angles and so on). Input from digital cameras is discrete, strictly speaking, but is typically treated as representing continuously varying intensities and locations.
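
A rough sketch of the two kinds of state (the coordinate names and numbers are invented for illustration): a chess position ranges over a finite set of values, while a taxi's position and speed range over real numbers and change smoothly as time advances.

    # Discrete state: a chess square is one of finitely many values.
    chess_square = ("e", 4)  # file and rank drawn from finite sets

    # Continuous state: a taxi's position and speed are real-valued.
    taxi_state = {"x_m": 132.7, "y_m": 48.05, "speed_mps": 13.4}

    def advance(state, dt_s):
        # Position sweeps smoothly through continuous values over time.
        state = dict(state)
        state["x_m"] += state["speed_mps"] * dt_s
        return state

    print(advance(taxi_state, 0.1))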

6] Single-Agent vs. Multi-Agent.

The distinction between single-agent and multi-agent environments may seem straightforward.

For example, an agent solving a crossword puzzle by itself is in a single-agent environment, whereas an agent playing chess is in a two-agent environment; chess is a competitive multi-agent game. In the taxi-driving environment, avoiding collisions improves the performance measure of all agents, so it is a partially cooperative multi-agent environment.
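
To illustrate why taxi driving is partially cooperative, here is a toy payoff function for two taxis meeting at an intersection; the actions and payoff numbers are invented for illustration. Each agent's payoff depends on the other's action as well as its own, which is what makes the setting multi-agent, and both agents lose heavily in a collision, so avoiding one serves them both.

    # Hypothetical two-taxi interaction at an intersection.
    def payoffs(action_a, action_b):
        if action_a == action_b == "enter":
            return (-100, -100)  # a collision hurts both agents
        if action_a == "enter":
            return (1, 0)        # taxi A passes first, B waits
        if action_b == "enter":
            return (0, 1)
        return (0, 0)            # both wait

    print(payoffs("enter", "wait"))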