Wednesday, January 6, 2010
ch 2 - ai
agent - anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators
percepts - the agent's perceptual inputs, eg. the vacuum world's (location, contents) pair: (room A, dirty)
rational agent - does the right thing given what it knows; not omniscient (doesn't have everything in its knowledge base, eg. can't predict the weather) and not perfect (can be wrong given limited information: "i think the left turn is the right one ...")
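a minimal sketch of the percept -> action idea in python - a table-driven agent, with class and method names that are mine, not the book's:

# table-driven agent: looks the action up in a table keyed by the percept.
# names here are illustrative, not from the textbook.
class TableDrivenAgent:
    def __init__(self, table):
        self.table = table

    def program(self, percept):
        # percept in, action out; default to doing nothing
        return self.table.get(percept, "NoOp")

agent = TableDrivenAgent({("room A", "dirty"): "suck",
                          ("room A", "clean"): "move right"})
print(agent.program(("room A", "dirty")))  # -> suck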
peas - performance measure (quantifiable: accuracy, scores), environment, actuators, sensors. eg. taxi driver (spelled out as a sketch below)
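the taxi-driver peas breakdown written out as a python dict - entries paraphrased from memory of the AIMA example, so treat the exact lists as approximate:

# PEAS description for the automated-taxi example (entries paraphrased
# from the AIMA chapter; exact lists vary by edition).
peas_taxi = {
    "performance measure": ["safe", "fast", "legal", "comfortable trip",
                            "maximize profits"],
    "environment":         ["roads", "other traffic", "pedestrians",
                            "customers"],
    "actuators":           ["steering wheel", "accelerator", "brake",
                            "signal", "horn"],
    "sensors":             ["cameras", "sonar", "speedometer", "GPS",
                            "odometer", "engine sensors"],
}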
***environment types -
* fully observable (chess, see everything) vs partially observable (taxi driving, can't see past buildings)
* deterministic (next state fully determined by the current state and action, eg. chess: a move either happens or it doesn't) vs stochastic (uncertainty, eg. driving: a person might suddenly jump out of nowhere, traffic lights suddenly change)
* episodic (independent episodes, the current decision doesn't depend on earlier ones, eg. part-picking robot, image analysis; only a few tasks are like this) vs sequential (eg. crossword: once you fill a horizontal word, the intersecting entries are restricted by its letters)
* static (you can take as much time as you like, no concept of time passing, eg. crossword, chess with no clock) vs dynamic (things keep happening in parallel, like separate threads, eg. taxi driving - a thread for the traffic lights, a thread for the other car agents, a thread for people crossing, etc.) vs semidynamic (the environment itself doesn't change with time but your performance score does, eg. chess with a clock)
* discrete (countable percepts and actions, eg. chess) vs continuous (eg. taxi driving: speed and position take continuous values)
* single agent (eg. crossword) vs multi-agent (eg. chess, taxi driving)
- see slide 13 for an example table (rough reconstruction below)
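i don't have slide 13 here, so this is a reconstruction of that kind of table from the bullets above (values follow the usual AIMA classification - double-check against the slide):

# task: (observable, deterministic, episodic/sequential, static, discrete, agents)
environments = {
    "crossword":      ("fully",     "deterministic", "sequential", "static",      "discrete",   "single"),
    "chess w/ clock": ("fully",     "deterministic", "sequential", "semidynamic", "discrete",   "multi"),
    "taxi driving":   ("partially", "stochastic",    "sequential", "dynamic",     "continuous", "multi"),
    "image analysis": ("fully",     "deterministic", "episodic",   "semidynamic", "continuous", "single"),
}
for task, props in environments.items():
    print(task, props)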
agent types (increasing generality)
1. simple reflex agents - condition-action rules (if statements), very simple, stateless, eg. vacuum with no memory (sketched in code after this list); maybe also the part-picking robot? image analysis?
2. reflex agents with state / model-based agents - keep an internal model of the world and track state, eg. chess; crossword?
3. goal-based agents - can reason (eg. red light, the car in front stops, so i should stop, so i should slam on the brakes) and have an explicit goal (which replaces the simple condition-action rules), eg. taxi driving
4. utility-based agents - map each state to a degree of happiness (utility) to evaluate it: how fast, how efficient, etc., eg. taxi driving: drive to the destination the safest way
all of these can be turned into learning agents by adding a learning element / critic that gives feedback on performance
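a sketch of type 1 for the two-room vacuum world - pure condition-action rules, no memory (the function and action names are mine):

# simple reflex vacuum agent: condition-action rules only, no state.
def reflex_vacuum_agent(percept):
    location, status = percept          # eg. ("A", "dirty")
    if status == "dirty":
        return "suck"                   # rule: dirty -> suck
    elif location == "A":
        return "right"                  # rule: clean in A -> go to B
    else:
        return "left"                   # rule: clean in B -> go to A

print(reflex_vacuum_agent(("A", "dirty")))   # suck
print(reflex_vacuum_agent(("B", "clean")))   # left

a type 2 (model-based) version would additionally keep a record of which rooms it has already seen clean, instead of re-deciding from the current percept alone.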
Classifying the environment
• Static / Dynamic - the previous problem was static: no attention to changes in the environment
• Deterministic / Stochastic - the previous problem was deterministic: no new percepts were necessary, we could predict the future perfectly
• Observable / Partially observable / Unobservable - the previous problem was observable: the agent knew the initial state, etc.
• Discrete / Continuous - the previous problem was discrete: we can enumerate all possibilities