1.Classification on high octane (1): Naïve Bayes (hopefully, with Hadoop) COSC 526 Class 3 Arvind Ramanathan Computational Science & Engineering Division Oak Ridge National Laboratory, Oak Ridge Ph: 865-576-7266 E-mail: ramanathana@ornl.gov
2.Hadoop Installation Issues
3.Different operating systems have different requirements My experience is purely based on Linux: I don't know anything about Mac/Windows installation! The Windows install is not stable: hacky install tips abound on the web! You will have a small Linux-based Hadoop installation available to develop and test your code A much bigger virtual environment is underway!
4.What to do if you are stuck? Read over the internet! Many suggestions are specific to a particular version Hadoop install becomes an "art" rather than a typical program "install" If you are still stuck: let's learn together I will point you to a few people that have had experience with Hadoop
5.Basic Probability Theory
6.Overview Review of Probability Theory Naïve Bayes (NB) The basic learning algorithm How to implement NB on Hadoop Logistic Regression Basic algorithm How to implement LR on Hadoop
7.What you need to know Probabilities are cool Random variables and events The axioms of probability Independence, Binomials and Multinomials Conditional Probabilities Bayes Rule Maximum Likelihood Estimation (MLE), Smoothing, and Maximum A Posteriori (MAP) Joint Distributions
8.Independent Events Definition: two events A and B are independent if Pr(A and B) = Pr(A)*Pr(B). Intuition: the outcome of A has no effect on the outcome of B (and vice versa). E.g., different rolls of a die are independent. You frequently need to assume the independence of something to solve any learning problem.
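A quick sketch (not from the slides) that checks the definition by brute-force enumeration: for two fair six-sided dice, pick one event per roll and verify Pr(A and B) = Pr(A)*Pr(B). The events chosen here are illustrative, not from the lecture.

```python
from itertools import product

# Enumerate all 36 equally likely outcomes of two fair d6 rolls.
outcomes = list(product(range(1, 7), repeat=2))

# Event A: first roll is even.  Event B: second roll is greater than 4.
p_a = sum(1 for r1, r2 in outcomes if r1 % 2 == 0) / 36
p_b = sum(1 for r1, r2 in outcomes if r2 > 4) / 36
p_ab = sum(1 for r1, r2 in outcomes if r1 % 2 == 0 and r2 > 4) / 36

# Independence: Pr(A and B) = Pr(A) * Pr(B)
print(p_a, p_b, p_ab)  # 0.5, 1/3, 1/6
```

Because the two rolls share no outcome structure, any event on the first roll is independent of any event on the second.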
9.Multivalued Discrete Random Variables Suppose A can take on more than 2 values A is a random variable with arity k if it can take on exactly one value out of {v1, v2, …, vk} Example: V = {aaliyah, aardvark, …, zymurge, zynga} Thus: Pr(A=vi and A=vj) = 0 if i ≠ j, and Pr(A=v1 or A=v2 or … or A=vk) = 1
10.Terms: Binomials and Multinomials Suppose A can take on more than 2 values A is a random variable with arity k if it can take on exactly one value out of {v1, v2, …, vk} Example: V = {aaliyah, aardvark, …, zymurge, zynga} The distribution Pr(A) is a multinomial For k=2 the distribution is a binomial
11.More about Multivalued Random Variables Using the axioms of probability and assuming that A obeys them: Pr(A=v1 or A=v2 or … or A=vi) = Σ_{j=1..i} Pr(A=vj), and thus Σ_{j=1..k} Pr(A=vj) = 1
12.A practical problem I have lots of standard d20 dice and lots of loaded dice, all identical in appearance. A loaded die gives a 19 or 20 ("critical hit") half the time; a fair die gives one a tenth of the time. In the game, someone hands me a random die, which is fair (A) or loaded (~A), with P(A) depending on how I mix the dice. Then I roll, and either get a critical hit (B) or not (~B). Can I mix the dice together so that P(B) is anything I want, say P(B) = 0.137? P(B) = P(B and A) + P(B and ~A) = 0.1*λ + 0.5*(1-λ) = 0.137, so λ = (0.5 - 0.137)/0.4 = 0.9075. This is a "mixture model".
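The algebra above can be sketched in a few lines of Python: solve the mixture equation for the fraction λ of fair dice, then verify that the mixture really hits the target rate.

```python
# Mixture of fair dice (P(crit) = 0.1) and loaded dice (P(crit) = 0.5).
p_fair, p_loaded = 0.1, 0.5
target = 0.137  # desired overall critical-hit rate P(B)

# P(B) = p_fair*lam + p_loaded*(1 - lam)  =>  solve for lam
lam = (p_loaded - target) / (p_loaded - p_fair)
print(lam)  # 0.9075

# Check: this mix gives exactly the target hit rate.
p_b = p_fair * lam + p_loaded * (1 - lam)
print(p_b)  # 0.137
```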
13.Another picture for this problem [Venn diagram: regions A (fair die), ~A (loaded), A and B, ~A and B] It's more convenient to say "if you've picked a fair die then …", i.e. Pr(critical hit | fair die) = 0.1, and "if you've picked the loaded die then …", Pr(critical hit | loaded die) = 0.5 Conditional probability: Pr(B|A) = P(B ∧ A)/P(A)
14.Definition of Conditional Probability P(A|B) = P(A ∧ B) / P(B) Corollary, the Chain Rule: P(A ∧ B) = P(A|B) P(B)
15.Some practical problems I have 3 standard d20 dice and 1 loaded die. Experiment: (1) pick a d20 uniformly at random, then (2) roll it. Let A = the d20 picked is fair and B = roll 19 or 20 with that die. What is P(B)? P(B) = P(B|A) P(A) + P(B|~A) P(~A) = 0.1 * 0.75 + 0.5 * 0.25 = 0.2 This is "marginalizing out" A
16.Bayes' rule P(A|B) = P(B|A) * P(A) / P(B) and, symmetrically, P(B|A) = P(A|B) * P(B) / P(A) — turning the prior P(A) into the posterior P(A|B). Bayes, Thomas (1763). An essay towards solving a problem in the doctrine of chances. Philosophical Transactions of the Royal Society of London, 53:370-418. "…by no means merely a curious speculation in the doctrine of chances, but necessary to be solved in order to a sure foundation for all our reasonings concerning past facts, and what is likely to be hereafter…. necessary to be considered by any that would give a clear account of the strength of analogical or inductive reasoning…"
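Applying Bayes' rule to the 3-fair-dice, 1-loaded-die setup from the previous slide: first marginalize out A to get P(B), then invert to get the posterior probability that the die was fair given a critical hit.

```python
# Setup from the previous slide: 3 fair d20s, 1 loaded d20.
p_a = 0.75             # prior P(fair)
p_b_given_a = 0.1      # P(crit | fair)
p_b_given_not_a = 0.5  # P(crit | loaded)

# Marginalize out A:  P(B) = P(B|A)P(A) + P(B|~A)P(~A)
p_b = p_b_given_a * p_a + p_b_given_not_a * (1 - p_a)
print(p_b)  # 0.2

# Bayes' rule:  P(A|B) = P(B|A) P(A) / P(B)
p_a_given_b = p_b_given_a * p_a / p_b
print(p_a_given_b)  # 0.375
```

So even though 3 of the 4 dice are fair, seeing a critical hit drops the probability of "fair" from 0.75 to 0.375.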
17.Some practical problems I bought a loaded d20 on eBay… but it didn't come with any specs. How can I find out how it behaves? 1. Collect some data (20 rolls) 2. Estimate Pr(i) = C(rolls of i)/C(any roll)
18.One solution I bought a loaded d20 on eBay… but it didn't come with any specs. How can I find out how it behaves? P(1)=0 P(2)=0 P(3)=0 P(4)=0.1 … P(19)=0.25 P(20)=0.2 MLE = maximum likelihood estimate But: do you really think it's impossible to roll a 1, 2 or 3? Would you bet your life on it?
19.A better solution I bought a loaded d20 on eBay… but it didn't come with any specs. How can I find out how it behaves? 0. Imagine some data (20 rolls, each i shows up once) 1. Collect some data (20 rolls) 2. Estimate Pr(i) = C(rolls of i)/C(any roll)
20.A better solution P(1)=1/40 P(2)=1/40 P(3)=1/40 P(4)=(2+1)/40 … P(19)=(5+1)/40 P(20)=(4+1)/40=1/8 0.25 vs. 0.125 – really different! Maybe I should "imagine" less data?
22.A better solution? Q: What if I used m imaginary rolls, with a probability q = 1/20 of rolling any i? Pr(i) = (C(i) + m*q) / (C(ANY) + m) I can use this formula with m>20, or even with m<20… say, with m=1
23.A better solution Q: What if I used m imaginary rolls, with a probability q = 1/20 of rolling any i? Pr(i) = (C(i) + m*q) / (C(ANY) + m) If m >> C(ANY) then your imagination q rules If m << C(ANY) then your data rules BUT you never ever end up with Pr(i) = 0
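A minimal sketch of this smoothed estimator. The sample of 20 rolls below is made up to match the counts on the earlier slides (face 19 five times, face 20 four times, faces 1–3 never); with m = 20 and q = 1/20 it reproduces the (C(i)+1)/40 estimates.

```python
from collections import Counter

def smoothed_estimate(rolls, k=20, m=20, q=None):
    """MAP estimate under a uniform Dirichlet prior:
    Pr(i) = (C(i) + m*q) / (C(ANY) + m), default q = 1/k."""
    if q is None:
        q = 1.0 / k
    counts = Counter(rolls)
    total = len(rolls)
    return {i: (counts[i] + m * q) / (total + m) for i in range(1, k + 1)}

# 20 hypothetical rolls of the suspect d20 (matching the slide's counts).
rolls = [19]*5 + [20]*4 + [4]*2 + [5, 6, 7, 8, 9, 10, 11, 12, 13]
est = smoothed_estimate(rolls)  # m = 20 imagined rolls
print(est[19])  # (5+1)/40 = 0.15
print(est[1])   # (0+1)/40 = 0.025 -- never zero, unlike the MLE
```

Setting m = 0 recovers the MLE; larger m pulls every face toward the uniform q.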
24.Terminology – more later This is called a uniform Dirichlet prior C(i), C(ANY) are sufficient statistics MLE = maximum likelihood estimate MAP = maximum a posteriori estimate
25.The Joint Distribution Recipe for making a joint distribution of M variables: 1. Make a truth table listing all combinations of values of your variables (if there are M Boolean variables then the table will have 2^M rows). 2. For each combination of values, say how probable it is. 3. If you subscribe to the axioms of probability, those numbers must sum to 1. Example: Boolean variables A, B, C

A B C Prob
0 0 0 0.30
0 0 1 0.05
0 1 0 0.10
0 1 1 0.05
1 0 0 0.05
1 0 1 0.10
1 1 0 0.25
1 1 1 0.10
29.Using the Joint Once you have the joint distribution you can ask for the probability of any logical expression involving your attributes Example dataset: predict whether income exceeds $50K/yr based on census data; also known as the "Census Income" dataset [Kohavi, 1996]. Number of instances: 48,842 Number of attributes: 14 (in UCI's copy of the dataset); 3 (here)