Three prisoners, A, B, and C, are locked in their cells. Everyone knows that one of them will be executed the next day and that the other two will be pardoned. Only the governor knows which one will be executed, and that is how he likes it.
Prisoner A asks the prison guard a favor: "Please ask the governor who will be executed, and then take a message to one of my friends, either B or C, to let him know that he will be pardoned in the morning." The guard agrees to do just that.
Later, the guard stops by A's cell. He tells A that he gave the pardon message to prisoner B.
What are A's chances of being executed, given this information?
Try to answer this logically or mathematically, not by energetic waving of hands.
The answer: still 1/3! The guard was always going to name one of B or C, so his message tells A nothing new about A's own fate. It is C whose chance of execution has risen to 2/3. Boy, I would love to be C just now...
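If the argument feels slippery, a Monte Carlo simulation settles it. The sketch below (a hypothetical setup, with the guard choosing at random between B and C when both are pardoned) estimates A's chance of execution given that the guard reported B:

```python
import random

def simulate(trials=100_000, seed=1):
    """Estimate P(A executed | guard says B is pardoned) by simulation."""
    rng = random.Random(seed)
    says_b = 0
    a_doomed_and_says_b = 0
    for _ in range(trials):
        doomed = rng.choice("ABC")          # the governor's secret choice
        if doomed == "A":
            message = rng.choice("BC")      # both friends pardoned; guard picks one
        elif doomed == "B":
            message = "C"                   # guard cannot name the doomed B
        else:
            message = "B"                   # guard cannot name the doomed C
        if message == "B":
            says_b += 1
            if doomed == "A":
                a_doomed_and_says_b += 1
    return a_doomed_and_says_b / says_b

print(simulate())   # hovers around 1/3
```

Counting only the trials where the guard says "B" makes the conditioning explicit: A is doomed in about a third of those trials, and C in the other two thirds.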
How are we forced to act from a position of uncertainty?
Some consequences and side effects of these practical realities:
How can we augment a representation based on logic to reflect this broader view? Assertions become contingent: each statement carries an associated probability that denotes the agent's degree of belief in the assertion.
So, instead of saying, "The patient has a cavity," we might say, "There is an 80% chance that the patient has a cavity."
An agent's degree of belief in an assertion can change over time:
Imagine the case of the dental patient:
before seeing any evidence:                 P(x)           = 0.2
after evidence (phone call on toothache):   P(x|e1)        = 0.7
after evidence (visual exam):               P(x|e1^e2)     = 0.95
after evidence (X-ray):                     P(x|e1^e2^e3)  = 0.999
I have to fly to Phoenix on February 25. My internal planner has created a plan that is 99% certain to get me to the airport on time for my flight.
Should I use this plan or try to find another one? If yes, why? If no, why not?
Probabilities are meaningless out of context.
This is an economic decision.
Probabilities are part of a bigger picture. In the real world, agents have preferences for some states of the world over other states of the world.
Consider some examples:
Given that all reasoning is done in the context of uncertainty, risk aversion becomes an important preference.
Economists and AI folks use utility theory to measure the expected outcomes of actions and to choose among them. "Acting rationally" is equivalent to choosing the action with the highest expected utility. Probability is merely a way to compute expectations.
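A minimal sketch of this idea, using the airport story from above with made-up utilities (the probabilities and payoffs here are illustrative assumptions, not data):

```python
def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one action."""
    return sum(p * u for p, u in outcomes)

# Hypothetical choice: drive yourself (99% on time, cheap)
# vs. take a taxi (99.9% on time, but it costs more).
# Missing the flight carries a large negative utility.
drive = [(0.99, 100), (0.01, -1000)]
taxi  = [(0.999, 90), (0.001, -1010)]

best = max([("drive", drive), ("taxi", taxi)],
           key=lambda action: expected_utility(action[1]))
print(best[0], expected_utility(best[0] == "drive" and drive or taxi))
```

Here the drive's expected utility (89.0) edges out the taxi's (88.9), so the "rational" act is to drive, even though the taxi's plan is more likely to succeed. Change the cost of missing the flight and the decision flips: probabilities only matter in the context of preferences.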
Some simple definitions of terms we've seen above:
x          an assertion
P(x)       degree of belief in x
P(x | E)   degree of belief in x, given that we know E
Some of the relevant laws of probability:
We know that:
P(x | E) == P(x and E) / P(E)
P(x and E) == P(x | E) * P(E)
But evidence is relative, so:
P(E and x) == P(E | x) * P(x)
P(E | x) * P(x) == P(x | E) * P(E)
               P(E | x) * P(x)
    P(x | E) == ---------------
                    P(E)
This is Bayes' Law, the workhorse of most AI systems that use probabilistic reasoning.
You recently read that doctors have reported several cases of meningitis in your town. You wake up one morning with a sore throat, so you decide to call your doctor. He encourages you to come in for an appointment but calms your fears of having meningitis. It turns out that meningitis strikes only 1 in 50,000 people, and the probability that a person will have a sore throat if they do have meningitis is only 50%. And, besides, the chance that you will wake up with a sore throat on any given day is 5%.
What are the chances that you have the disease?
P( HaveSoreThroat | HaveDisease ) = 0.50
P( HaveDisease )                  = 0.00002
P( HaveSoreThroat )               = 0.05

                                    P( HaveSoreThroat | HaveDisease ) * P( HaveDisease )
P( HaveDisease | HaveSoreThroat ) = ----------------------------------------------------
                                                  P( HaveSoreThroat )

                                  = ( 0.50 * 0.00002 ) / 0.05

                                  = 0.0002
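The arithmetic above can be checked with a one-line implementation of Bayes' Law, plugged with the probabilities stated in the problem:

```python
def bayes(likelihood, prior, evidence):
    # P(x | E) = P(E | x) * P(x) / P(E)
    return likelihood * prior / evidence

# Meningitis example: P(HaveDisease | HaveSoreThroat)
p = bayes(likelihood=0.50, prior=0.00002, evidence=0.05)
print(p)   # ≈ 0.0002
```

A sore throat raises the probability of meningitis tenfold, from 0.00002 to 0.0002, yet it is still vanishingly small. That is why the doctor calms your fears.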
After your yearly check-up, the doctor has good news and bad news. The bad news is that you have tested positive for a serious disease, and that the test is 99% accurate. The good news is that this is a rare disease, striking only 1 in 10,000 people.
Why is the good news good news?
What are the chances you have the disease?
P( PositiveTest | HaveDisease )         = 0.99
P( not PositiveTest | not HaveDisease ) = 0.99
P( HaveDisease )                        = 0.0001

P( PositiveTest )               = ??????
P( HaveDisease | PositiveTest ) = ??????

By the law of total probability:

P( PositiveTest ) = P( PositiveTest | HaveDisease ) * P( HaveDisease )
                  + P( PositiveTest | not HaveDisease ) * P( not HaveDisease )

P( PositiveTest | HaveDisease ) * P( HaveDisease )         = 0.99 * 0.0001 = 0.000099
P( PositiveTest | not HaveDisease ) * P( not HaveDisease ) = 0.01 * 0.9999 = 0.009999

P( PositiveTest ) = 0.000099 + 0.009999 = 0.010098

P( HaveDisease | PositiveTest ) = 0.000099 / 0.010098 = 0.009804
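The same calculation as a short sketch in Python, using the law of total probability to obtain P(PositiveTest) and then Bayes' Law for the posterior:

```python
p_pos_given_d     = 0.99     # test sensitivity: P(PositiveTest | HaveDisease)
p_neg_given_not_d = 0.99     # test specificity: P(not PositiveTest | not HaveDisease)
p_d               = 0.0001   # prior: disease strikes 1 in 10,000

# Law of total probability:
# P(pos) = P(pos | D) P(D) + P(pos | not D) P(not D)
p_pos = p_pos_given_d * p_d + (1 - p_neg_given_not_d) * (1 - p_d)

# Bayes' Law: P(D | pos) = P(pos | D) P(D) / P(pos)
p_d_given_pos = p_pos_given_d * p_d / p_pos

print(p_pos)           # ≈ 0.010098
print(p_d_given_pos)   # ≈ 0.0098
```

Even a 99% accurate test leaves you with under a 1% chance of having the disease, because the false positives among the 9,999 healthy people swamp the true positives from the 1 sick person.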
What have we learned from our exploration?