AI-Assignment 2
SUBMITTED TO: SIR AWAIS IRFAN BASHIR
12347

Question # 1: Describe a rational agent function for the case in which each movement costs one point. Does the corresponding agent program require internal state?

For the case in which each movement costs one point, the agent should stop after checking both squares A and B and removing any dirt it finds, in order to avoid unnecessary movements and loss of points. The corresponding agent program does require internal state in this case, because it is essential for the agent to remember that it has checked both squares before it stops executing its task. If it finds dirt at square A, cleans it, moves to square B, finds no dirt, and does not remember whether it has already checked square A, it will check A again, then B again, and keep making unnecessary movements, thus losing points. If the squares did not permanently remain clean, however, the agent could pause for some fixed amount of time after checking and cleaning both squares and then repeat its task. It could work at some time interval: checking both squares, cleaning any dirt found, stopping for an hour or so, and then repeating the process. This would allow the agent to minimize movements and point loss while keeping the squares clean if they were to become dirty again. A minimal sketch of such a stateful agent program is given after Question 2.

Question # 2: For each of the following activities, give a PEAS description of the task environment and characterize it in terms of the properties of the task environment.

Playing soccer.
P - Win/Lose. E - Soccer field. A - Legs, head, upper body. S - Eyes, ears.
Partially observable, multiagent, stochastic, sequential, dynamic, continuous, unknown.

Exploring the subsurface oceans of Titan.
P - Surface area mapped, extraterrestrial life found. E - Subsurface oceans of Titan. A - Steering, accelerator, brake, probe arm. S - Camera, sonar, probe sensors.
Partially observable, single agent, stochastic, sequential, dynamic, continuous, unknown.

Shopping for used AI books on the Internet.
P - Cost of book; quality/relevance/correct edition. E - The Internet's used book shops. A - Key entry, cursor. S - Website interfaces, browser.
Partially observable, multiagent, stochastic, sequential, dynamic, continuous, unknown.

Playing a tennis match.
P - Win/Lose. E - Tennis court. A - Tennis racquet, legs. S - Eyes, ears.
Partially observable, multiagent, stochastic, sequential, dynamic, continuous, unknown.

Practicing tennis against a wall.
P - Improved performance in future tennis matches. E - Near a wall. A - Tennis racquet, legs. S - Eyes, ears.
Fully observable, single agent, stochastic, sequential, dynamic, continuous, unknown.

Performing a high jump.
P - Clearing the jump or not. E - Track. A - Legs, body. S - Eyes.
Fully observable, single agent, stochastic, sequential, dynamic, continuous, unknown.

Knitting a sweater.
P - Quality of resulting sweater. E - Rocking chair. A - Hands, needles. S - Eyes.
Fully observable, single agent, stochastic, sequential, dynamic, continuous, unknown.

Bidding on an item at an auction.
P - Item acquired, final price paid for the item. E - Auction house (or online). A - Bidding. S - Eyes, ears.
Partially observable, multiagent, stochastic (tie-breaking for two simultaneous bids), episodic, dynamic, continuous, known.
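The following is a minimal sketch (not part of the original assignment) of the agent program described in Question 1, written in Python. The two-square world, the percept format (location, status), and the action names Suck, Left, Right, and NoOp are assumptions made for illustration; the point is only that the program keeps internal state recording which squares it has already found clean.

def make_stateful_vacuum_agent():
    # Internal state: squares the agent has already observed to be clean.
    visited_clean = set()

    def agent_program(percept):
        location, status = percept            # e.g. ("A", "Dirty")
        if status == "Dirty":
            return "Suck"                     # cleaning costs no movement point
        visited_clean.add(location)
        if visited_clean >= {"A", "B"}:
            return "NoOp"                     # both squares checked: stop moving
        # Otherwise pay one movement point to inspect the other square.
        return "Right" if location == "A" else "Left"

    return agent_program

# Example run: the agent cleans A, inspects B, then stops.
agent = make_stateful_vacuum_agent()
print(agent(("A", "Dirty")))   # Suck
print(agent(("A", "Clean")))   # Right
print(agent(("B", "Clean")))   # NoOp

The state lives in the closure variable visited_clean; a purely reflex program with no such state would keep shuttling between the two squares and losing points, as argued above.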
Question # 3: Provide two situations / scenarios each that fulfil the requirements of the nature of the following environments. Also provide arguments that categorize the given scenarios / situations into the corresponding environment. The example scenarios and situations should not come from the book examples.

1. Fully Observable environment
An environment is fully observable when the information received by the agent at any point in time is sufficient to make the optimal decision.
Scenario 1: In a Tic-Tac-Toe game, seeing the position of the elements on the board is enough to make an optimal decision on the next move.
Scenario 2: In a chess game, the state of the system, that is, the position of all the pieces on the chess board, is available the whole time, so the player can make an optimal decision.

2. Partially Observable environment
An environment is partially observable when the agent's percepts do not reveal the complete state of the environment, so the agent needs memory in order to make the best possible decision.
Scenario 1: In a Poker game, the agent cannot see the other players' cards and needs to remember the previous moves in order to make the best possible decision, which is why it needs memory.
Scenario 2: Driving: the environment is partially observable because what is around the corner is not known.

3. Deterministic Environment
An environment is deterministic when the agent's actions uniquely determine the outcome.
Scenario 1: In chess, there is no randomness when you move a piece.
Scenario 2: If we have a pawn while playing chess and we move that piece from A2 to A3, that move always works; there is no uncertainty in the outcome of that move.

4. Stochastic Environment
An environment is stochastic when the agent's actions do not uniquely determine the outcome.
Scenario 1: In games with dice, you can determine your dice-throwing action but not the outcome of the dice.
Scenario 2: Self-driving cars: the outcome of an action such as braking or steering is not fully determined by the action itself, because road conditions, other drivers, and pedestrians introduce uncertainty.

5. Episodic Environment
In an episodic environment, there is a series of one-shot actions, and only the current percept is required for the action.
Scenario 1: An AI that looks at radiology images to determine if there is a sickness is an example of an episodic environment; each image is classified on its own.
Scenario 2: A spam filter that classifies each incoming e-mail independently; the decision made for one e-mail does not affect the next.

6. Sequential Environment
In a sequential environment, the current action can affect all future actions, so the agent has to think ahead.
Scenario 1: With a chess agent, each new action depends upon what happened previously; in other words, different actions can have different consequences. Using your queen to take your opponent's knight may bring short-term utility, but it may also put your queen at risk on the next move. This is a sequential environment.
Scenario 2: Brushing your teeth: each stroke changes the state of the teeth and what remains to be done, so later actions depend on the actions already taken.

7. Dynamic Environment
A dynamic environment changes on its own while the agent is deliberating, so the agent must react quickly and stay flexible to respond.
Scenario 1: Today's business environment is generally very dynamic. Technology, consumer tastes, laws and regulations, political leaders, and international conditions are all changing rapidly and dramatically.
Scenario 2: A roller coaster ride is dynamic, as it is set in motion and the environment keeps changing every instant.

8. Static Environment
An environment is static if only the actions of the agent modify it; it does not change on its own while the agent is deliberating.
Scenario 1: An empty house is static, as there is no change in the surroundings when an agent enters.
Scenario 2: An empty office with no moving objects.
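As a purely illustrative sketch (not required by the assignment), the property characterizations used in Questions 2 and 3 can be written down as data, which makes it easy to compare environments side by side. The field names below follow the property vocabulary used above; the values for chess and poker restate standard classifications consistent with the scenarios given (chess fully observable and deterministic, poker partially observable and stochastic; both sequential, static, and discrete).

from dataclasses import dataclass

@dataclass(frozen=True)
class TaskEnvironment:
    name: str
    observable: str      # "fully" or "partially"
    agents: str          # "single" or "multi"
    determinism: str     # "deterministic" or "stochastic"
    episodic: bool       # True = episodic, False = sequential
    dynamic: bool        # True = dynamic, False = static
    discrete: bool       # True = discrete, False = continuous

chess = TaskEnvironment("chess", "fully", "multi", "deterministic",
                        episodic=False, dynamic=False, discrete=True)
poker = TaskEnvironment("poker", "partially", "multi", "stochastic",
                        episodic=False, dynamic=False, discrete=True)

for env in (chess, poker):
    print(env)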