
1.Classification on high octane (1): Naïve Bayes (hopefully, with Hadoop). COSC 526 Class 3. Arvind Ramanathan, Computational Science & Engineering Division, Oak Ridge National Laboratory, Oak Ridge. Ph: 865-576-7266. E-mail: ramanathana@ornl.gov

2.Hadoop Installation Issues

3.Different operating systems have different requirements. My experience is purely based on Linux: I don't know anything about Mac/Windows installation! The Windows install is not stable: hacky install tips abound on the web. You will have a small Linux-based Hadoop installation available to develop and test your code. A much bigger virtual environment is underway!

4.What to do if you are stuck? Read over the internet! Many suggestions are specific to a particular version, so a Hadoop install becomes an "art" rather than a typical program "install". If you are still stuck: let's learn. I will point you to a few people that have had experience with Hadoop.

5.Basic Probability Theory

6.Overview: Review of probability theory. Naïve Bayes (NB): the basic learning algorithm; how to implement NB on Hadoop. Logistic Regression: the basic algorithm; how to implement LR on Hadoop.

7.What you need to know: Probabilities are cool. Random variables and events; the axioms of probability; independence, binomials, and multinomials; conditional probabilities; Bayes' rule; maximum likelihood estimation (MLE), smoothing, and maximum a posteriori (MAP) estimation; joint distributions.

8.Independent Events. Definition: two events A and B are independent if Pr(A and B) = Pr(A)*Pr(B). Intuition: the outcome of A has no effect on the outcome of B (and vice versa). E.g., different rolls of a die are independent. You frequently need to assume the independence of something to solve any learning problem.

9.Multivalued Discrete Random Variables. Suppose A can take on more than 2 values. A is a random variable with arity k if it can take on exactly one value out of {v_1, v_2, ..., v_k}. Example: V = {aaliyah, aardvark, …, zymurge, zynga}. Thus P(A = v_i and A = v_j) = 0 if i ≠ j, and P(A = v_1 or A = v_2 or … or A = v_k) = 1.

10.Terms: Binomials and Multinomials. Suppose A can take on more than 2 values. A is a random variable with arity k if it can take on exactly one value out of {v_1, v_2, ..., v_k}. Example: V = {aaliyah, aardvark, …, zymurge, zynga}. The distribution Pr(A) is a multinomial; for k = 2 the distribution is a binomial.

11.More about Multivalued Random Variables. Using the axioms of probability and assuming that A obeys them: P(A = v_1 or A = v_2 or … or A = v_i) = Σ_{j=1..i} P(A = v_j), and in particular Σ_{j=1..k} P(A = v_j) = 1.

12.A practical problem. I have lots of standard d20 dice and lots of loaded dice, all identical-looking. A loaded die will give a 19 or 20 ("critical hit") half the time. In the game, someone hands me a random die, which is fair (A) or loaded (~A), with P(A) depending on how I mix the dice. Then I roll, and either get a critical hit (B) or not (~B). Can I mix the dice together so that P(B) is anything I want, say P(B) = 0.137? P(B) = P(B and A) + P(B and ~A) = 0.1*λ + 0.5*(1-λ) = 0.137, so λ = (0.5 - 0.137)/0.4 = 0.9075. This is a "mixture model".
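
As a quick check of the arithmetic, here is a minimal Python sketch (the variable names are mine) that solves for the mixing weight λ and verifies the resulting P(B):

    # Mixture of a fair d20 (P(crit) = 0.1) and a loaded d20 (P(crit) = 0.5).
    p_crit_fair, p_crit_loaded = 0.1, 0.5
    target = 0.137   # desired overall P(B)

    # P(B) = lam * 0.1 + (1 - lam) * 0.5, so solve for lam (the fraction of fair dice).
    lam = (p_crit_loaded - target) / (p_crit_loaded - p_crit_fair)
    print(lam)                                             # ≈ 0.9075
    print(lam * p_crit_fair + (1 - lam) * p_crit_loaded)   # ≈ 0.137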

13.Another picture for this problem. (Venn diagram: A = fair die, ~A = loaded die, each overlapping B = critical hit; regions "A and B" and "~A and B".) It's more convenient to say "if you've picked a fair die then …", i.e. Pr(critical hit | fair die) = 0.1, and "if you've picked the loaded die then …", Pr(critical hit | loaded die) = 0.5. Conditional probability: Pr(B|A) = P(B ^ A)/P(A).

14.Definition of Conditional Probability. P(A|B) = P(A ^ B) / P(B). Corollary (the Chain Rule): P(A ^ B) = P(A|B) P(B).

15.Some practical problems. I have 3 standard d20 dice and 1 loaded die. Experiment: (1) pick a d20 uniformly at random, then (2) roll it. Let A = "the d20 picked is fair" and B = "roll 19 or 20 with that die". What is P(B)? P(B) = P(B|A) P(A) + P(B|~A) P(~A) = 0.1 * 0.75 + 0.5 * 0.25 = 0.2. This is "marginalizing out" A.

16.Bayes' rule: P(A|B) = P(B|A) * P(A) / P(B), or equivalently P(B|A) = P(A|B) * P(B) / P(A). It turns the prior P(A) into the posterior P(A|B). Bayes, Thomas (1763) An essay towards solving a problem in the doctrine of chances. Philosophical Transactions of the Royal Society of London, 53:370-418. "…by no means merely a curious speculation in the doctrine of chances, but necessary to be solved in order to a sure foundation for all our reasonings concerning past facts, and what is likely to be hereafter…. necessary to be considered by any that would give a clear account of the strength of analogical or inductive reasoning…"
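
To make the rule concrete with the dice numbers from slide 15 (3 fair d20s, 1 loaded one), here is a minimal sketch that inverts P(B|A) into P(A|B); the variable names are mine:

    # Prior: 3 of the 4 dice are fair.
    p_fair = 0.75
    p_crit_given_fair, p_crit_given_loaded = 0.1, 0.5

    # Marginal probability of a critical hit (slide 15): 0.2.
    p_crit = p_crit_given_fair * p_fair + p_crit_given_loaded * (1 - p_fair)

    # Bayes' rule: posterior probability that the die is fair, given a critical hit.
    p_fair_given_crit = p_crit_given_fair * p_fair / p_crit
    print(p_fair_given_crit)   # 0.375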

17.Some practical problems. I bought a loaded d20 on eBay… but it didn't come with any specs. How can I find out how it behaves? 1. Collect some data (20 rolls). 2. Estimate Pr(i) = C(rolls of i) / C(any roll).

18.One solution. I bought a loaded d20 on eBay… but it didn't come with any specs. How can I find out how it behaves? P(1)=0, P(2)=0, P(3)=0, P(4)=0.1, …, P(19)=0.25, P(20)=0.2. MLE = maximum likelihood estimate. But: do you really think it's impossible to roll a 1, 2, or 3? Would you bet your life on it?

19.A better solution. I bought a loaded d20 on eBay… but it didn't come with any specs. How can I find out how it behaves? 0. Imagine some data (20 rolls, each i shows up once). 1. Collect some data (20 rolls). 2. Estimate Pr(i) = C(rolls of i) / C(any roll), counting the imagined rolls too.

20.A better solution. I bought a loaded d20 on eBay… but it didn't come with any specs. How can I find out how it behaves? P(1)=1/40, P(2)=1/40, P(3)=1/40, P(4)=(2+1)/40, …, P(19)=(5+1)/40, P(20)=(4+1)/40=1/8. 0.25 vs. 0.125 – really different! Maybe I should "imagine" less data?

21.A better solution? P(1)=1/40, P(2)=1/40, P(3)=1/40, P(4)=(2+1)/40, …, P(19)=(5+1)/40, P(20)=(4+1)/40=1/8. 0.25 vs. 0.125 – really different! Maybe I should "imagine" less data?

22.A better solution? Q: What if I imagined m rolls, with probability q = 1/20 of rolling any particular i? Then estimate Pr(i) = (C(i) + m*q) / (C(ANY) + m). I can use this formula with m > 20, or even with m < 20 … say, with m = 1.

23.A better solution. Q: What if I imagined m rolls, with probability q = 1/20 of rolling any particular i? If m >> C(ANY) then your imagination q rules. If m << C(ANY) then your data rules. BUT you never ever end up with Pr(i) = 0.
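
A minimal sketch of that formula, using the counts that can be read off slide 18 (C(19) = 5, C(20) = 4, C(1) = 0, out of 20 real rolls); the function name is mine:

    def smoothed_estimate(c_i, c_any, m=20, q=1.0/20):
        """Pr(i) = (C(i) + m*q) / (C(ANY) + m): m imagined rolls, each face with probability q."""
        return (c_i + m * q) / (c_any + m)

    print(smoothed_estimate(5, 20))        # 0.15  (vs. MLE 5/20 = 0.25)
    print(smoothed_estimate(0, 20))        # 0.025: never exactly zero
    print(smoothed_estimate(4, 20, m=1))   # m = 1: the data dominates, ≈ 0.193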

24.Terminology – more later. This is called a uniform Dirichlet prior. C(i) and C(ANY) are sufficient statistics. MLE = maximum likelihood estimate. MAP = maximum a posteriori estimate.

25.The Joint Distribution Recipe for making a joint distribution of M variables: Example: Boolean variables A, B, C

26.The Joint Distribution. Recipe for making a joint distribution of M variables: Make a truth table listing all combinations of values of your variables (if there are M Boolean variables then the table will have 2^M rows). Example: Boolean variables A, B, C:
A B C
0 0 0
0 0 1
0 1 0
0 1 1
1 0 0
1 0 1
1 1 0
1 1 1

27.The Joint Distribution. Recipe for making a joint distribution of M variables: Make a truth table listing all combinations of values of your variables (if there are M Boolean variables then the table will have 2^M rows). For each combination of values, say how probable it is. Example: Boolean variables A, B, C:
A B C Prob
0 0 0 0.30
0 0 1 0.05
0 1 0 0.10
0 1 1 0.05
1 0 0 0.05
1 0 1 0.10
1 1 0 0.25
1 1 1 0.10

28.The Joint Distribution. Recipe for making a joint distribution of M variables: Make a truth table listing all combinations of values of your variables (if there are M Boolean variables then the table will have 2^M rows). For each combination of values, say how probable it is. If you subscribe to the axioms of probability, those numbers must sum to 1. Example: Boolean variables A, B, C:
A B C Prob
0 0 0 0.30
0 0 1 0.05
0 1 0 0.10
0 1 1 0.05
1 0 0 0.05
1 0 1 0.10
1 1 0 0.25
1 1 1 0.10
(The slide also shows the same distribution as a Venn diagram over A, B, and C.)
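
A minimal Python sketch of this recipe for the A, B, C example (the helper names are mine): store the table as a dictionary and answer any logical query by summing the matching rows.

    # Joint distribution over Boolean A, B, C from the table above.
    joint = {
        (0, 0, 0): 0.30, (0, 0, 1): 0.05, (0, 1, 0): 0.10, (0, 1, 1): 0.05,
        (1, 0, 0): 0.05, (1, 0, 1): 0.10, (1, 1, 0): 0.25, (1, 1, 1): 0.10,
    }
    assert abs(sum(joint.values()) - 1.0) < 1e-9   # the rows must sum to 1

    def prob(event):
        """P(E) = sum of P(row) over the rows where the logical expression E holds."""
        return sum(p for row, p in joint.items() if event(*row))

    def cond_prob(event, given):
        """P(E1 | E2) = P(E1 and E2) / P(E2)."""
        return prob(lambda a, b, c: event(a, b, c) and given(a, b, c)) / prob(given)

    print(prob(lambda a, b, c: a or b))          # P(A or B) = 0.65
    print(cond_prob(lambda a, b, c: c == 1,      # P(C | A and B) = 0.10 / 0.35
                    lambda a, b, c: a and b))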

29.Using the Joint. Once you have the JD you can ask for the probability of any logical expression E involving your attributes: P(E) = sum of P(row) over the rows matching E. Example dataset – Abstract: Predict whether income exceeds $50K/yr based on census data. Also known as the "Census Income" dataset. [Kohavi, 1996] Number of instances: 48,842. Number of attributes: 14 (in UCI's copy of the dataset); 3 (here).

30.Using the Joint. P(Poor and Male) = 0.4654

31.Using the Joint P(Poor) = 0.7604

32.Inference with the Joint. P(E1 | E2) = P(E1 ^ E2) / P(E2) = (sum of P(row) over rows matching E1 and E2) / (sum of P(row) over rows matching E2).

33.Inference with the Joint. P(Male | Poor) = P(Poor and Male) / P(Poor) = 0.4654 / 0.7604 = 0.612

34.Estimating the joint distribution. Collect some data points. Estimate the probability P(E1=e1 ^ … ^ En=en) as #(that row appears) / #(any row appears). Data:
Gender Hours Wealth
g1 h1 w1
g2 h2 w2
… … …
gN hN wN

35.Estimating the joint distribution. For each combination of values r: Total = C[r] = 0. For each data row r_i: C[r_i]++; Total++. Then estimate P(r_i) = C[r_i] / Total, where r_i is a row such as "female, 40.5+, poor". Complexity: O(n) time, where n = total size of the input data, plus O(2^d) for the table of all combinations, where d = #attributes (all binary).
Gender Hours Wealth
g1 h1 w1
g2 h2 w2
… … …
gN hN wN

36.Estimating the joint distribution. For each combination of values r: Total = C[r] = 0. For each data row r_i: C[r_i]++; Total++. Complexity: O(n) time, where n = total size of the input data, plus a table with Π_i k_i entries, where k_i = arity of attribute i.
Gender Hours Wealth
g1 h1 w1
g2 h2 w2
… … …
gN hN wN

37.Estimating the joint distribution. For each combination of values r: Total = C[r] = 0. For each data row r_i: C[r_i]++; Total++. Complexity: O(n) time, where n = total size of the input data, and a table with Π_i k_i entries, where k_i = arity of attribute i.
Gender Hours Wealth
g1 h1 w1
g2 h2 w2
… … …
gN hN wN

38.Estimating the joint distribution. For each data row r_i: if r_i is not in the hash table C, insert C[r_i] = 0; then C[r_i]++ and Total++. Complexity: O(n) time, where n = total size of the input data, and O(m) space, where m = size of the model (the number of distinct rows actually seen).
Gender Hours Wealth
g1 h1 w1
g2 h2 w2
… … …
gN hN wN
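
A minimal Python sketch of this streaming counter (a dictionary plays the role of the hash table; the tiny data list is made up for illustration, with each row a tuple like (gender, hours, wealth)):

    from collections import defaultdict

    def estimate_joint(rows):
        """One pass over the data: O(n) time, O(m) space, m = number of distinct rows seen."""
        counts, total = defaultdict(int), 0
        for r in rows:
            counts[r] += 1   # C[r]++
            total += 1       # Total++
        return {r: c / total for r, c in counts.items()}   # Phat(row) = C[r] / Total

    data = [("female", "40.5+", "poor"), ("male", "<40.5", "rich"),
            ("female", "40.5+", "poor")]
    print(estimate_joint(data)[("female", "40.5+", "poor")])   # 2/3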

39.Naïve Bayes (NB)

40.Bayes Rule. P(h|D) = P(D|h) P(h) / P(D), where P(h) is the prior probability of hypothesis h, P(D) is the prior probability of the training data D, P(h|D) is the probability of h given D, and P(D|h) is the probability of D given h.

41.A simple shopping cart example.
Customer  Zipcode  Bought organic  Bought green tea
1         37922    Yes             Yes
2         37923    No              No
3         37923    Yes             Yes
4         37916    No              No
5         37993    Yes             No
6         37922    No              Yes
7         37922    No              No
8         37923    No              No
9         37916    Yes             Yes
10        37993    Yes             Yes
What is the probability that a person is in zipcode 37923? 3/10. What is the probability that the person is from 37923, knowing that he bought green tea? 1/5. Now, suppose we want to display an ad only if the person is likely to buy green tea, and we know that the person lives in 37922. Two competing hypotheses exist: the person will buy green tea, P(buyGreenTea | 37922) = 2/3; the person will not buy green tea, P(~buyGreenTea | 37922) = 1/3. We will show the ad!
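
These conditional probabilities can be read straight off the table with a few lines of Python; a minimal sketch (the list below just transcribes the zipcode and green-tea columns above):

    # (zipcode, bought_green_tea) for the 10 customers above.
    customers = [(37922, True), (37923, False), (37923, True), (37916, False),
                 (37993, False), (37922, True), (37922, False), (37923, False),
                 (37916, True), (37993, True)]

    p_37923 = sum(z == 37923 for z, _ in customers) / len(customers)           # 3/10
    tea_buyers = [z for z, tea in customers if tea]
    p_37923_given_tea = sum(z == 37923 for z in tea_buyers) / len(tea_buyers)  # 1/5
    in_37922 = [tea for z, tea in customers if z == 37922]
    p_tea_given_37922 = sum(in_37922) / len(in_37922)                          # 2/3
    print(p_37923, p_37923_given_tea, p_tea_given_37922)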

42.Maximum a Posteriori (MAP) hypothesis. Let D represent the data I know about a particular customer: e.g., lives in zipcode 37922, has a college-age daughter, goes to college. Suppose I want to send a flyer (from three possible ones: laptop, desktop, tablet); what should I do? Bayes rule to the rescue: choose h_MAP = argmax_h P(h|D) = argmax_h P(D|h) P(h).

43.MAP hypothesis: (2) Formal Definition. Given a large number of hypotheses h_1, h_2, …, h_n, and data D, we can evaluate: h_MAP = argmax_{h_i} P(h_i | D) = argmax_{h_i} P(D | h_i) P(h_i) / P(D) = argmax_{h_i} P(D | h_i) P(h_i).

44.MAP: Example (1). A patient takes a cancer lab test and it comes back positive. The test returns a correct positive result in 98% of the cases in which the disease is actually present, and a correct negative result in 97% of the cases in which the disease is not present. Further, 0.008 of the entire population actually has the cancer. Example source: Dr. Tom Mitchell, Carnegie Mellon

45.MAP: Example (2). Suppose Alice comes in for a test and her result is positive. Does she have to worry about having cancer? P(+|cancer) P(cancer) = 0.98 × 0.008 = 0.0078, while P(+|~cancer) P(~cancer) = 0.03 × 0.992 = 0.0298, so h_MAP = ~cancer: Alice may not have cancer!! Making our answers pretty (normalizing): 0.0078 / (0.0078 + 0.0298) ≈ 0.21. Alice may have a chance of about 21% of actually having cancer!!
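
A minimal sketch of this MAP computation with the numbers from slide 44 (the variable names are mine):

    p_cancer = 0.008
    p_pos_given_cancer = 0.98
    p_pos_given_no_cancer = 1 - 0.97   # the test is correctly negative 97% of the time

    # Unnormalized posteriors P(+|h) * P(h) for the two hypotheses.
    score_cancer = p_pos_given_cancer * p_cancer               # 0.98 * 0.008 ≈ 0.0078
    score_no_cancer = p_pos_given_no_cancer * (1 - p_cancer)   # 0.03 * 0.992 ≈ 0.0298

    h_map = "cancer" if score_cancer > score_no_cancer else "no cancer"
    print(h_map)                                               # no cancer
    print(score_cancer / (score_cancer + score_no_cancer))     # ≈ 0.21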

46.Basic Formulas of Probability. Product rule: probability of the conjunction of two events, P(A ∧ B) = P(A|B) P(B) = P(B|A) P(A). Sum rule: disjunction of two events, P(A ∨ B) = P(A) + P(B) - P(A ∧ B). Theorem of Total Probability: if events A1, A2, …, An are mutually exclusive with Σ_{i=1..n} P(Ai) = 1, then P(B) = Σ_{i=1..n} P(B|Ai) P(Ai).

47.A Brute-force MAP Hypothesis Learner. For each hypothesis h in H, calculate the posterior probability P(h|D) = P(D|h) P(h) / P(D). Output the hypothesis h_MAP with the highest posterior probability.
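
As a sketch, the brute-force learner is just an argmax over hypotheses; the function and argument names here are hypothetical:

    def brute_force_map(hypotheses, prior, likelihood, data):
        """Return the h maximizing P(D|h) * P(h); P(D) is the same for every h, so it can be dropped."""
        return max(hypotheses, key=lambda h: likelihood(data, h) * prior(h))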

48.Naïve Bayes Classifier. One of the most practical learning algorithms. Used when: a moderate to large training set is available, and the attributes that describe instances are conditionally independent given the classification. Surprisingly, it gives rise to good performance: accuracy can be high (sometimes suspiciously so!!). Applications include clinical decision making.

49.Naïve Bayes Classifier. Assume a target function f: X → V, where each instance x is described by <x_1, x_2, …, x_n>. The most probable value of f(x) is: v_MAP = argmax_{v_j ∈ V} P(v_j | x_1, …, x_n) = argmax_{v_j ∈ V} P(x_1, …, x_n | v_j) P(v_j). Using the Naïve Bayes assumption that P(x_1, …, x_n | v_j) = Π_i P(x_i | v_j), this becomes v_NB = argmax_{v_j ∈ V} P(v_j) Π_i P(x_i | v_j).

50.Naïve Bayes Algorithm.
NaiveBayesLearn(examples):
  for each target value v_j:
    Phat(v_j) ← estimate P(v_j)
    for each attribute value x_i:
      Phat(x_i | v_j) ← estimate P(x_i | v_j)
NaiveBayesClassifyInstance(x):
  v_NB = argmax_{v_j} Phat(v_j) Π_i Phat(x_i | v_j)
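
A minimal runnable version of this pseudocode in Python, assuming discrete attribute values (the function and variable names are mine, and the estimates here are plain MLE counts; slide 52 covers the smoothed version):

    from collections import defaultdict

    def naive_bayes_learn(examples):
        """examples: list of (attributes_tuple, target_value) pairs."""
        class_counts = defaultdict(int)
        attr_counts = defaultdict(int)           # keyed by (position, value, target)
        for x, v in examples:
            class_counts[v] += 1
            for i, xi in enumerate(x):
                attr_counts[(i, xi, v)] += 1
        n = len(examples)
        p_v = {v: c / n for v, c in class_counts.items()}                    # Phat(v_j)
        p_x_v = {k: c / class_counts[k[2]] for k, c in attr_counts.items()}  # Phat(x_i|v_j)
        return p_v, p_x_v

    def naive_bayes_classify(model, x):
        p_v, p_x_v = model
        def score(v):
            s = p_v[v]
            for i, xi in enumerate(x):
                s *= p_x_v.get((i, xi, v), 0.0)   # zero if never seen; slide 52 fixes this
            return s
        return max(p_v, key=score)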

51.Notes of caution! (1) Conditional independence is often violated. We don't need the estimated posteriors to be correct; we only need: argmax_{v_j} Phat(v_j) Π_i Phat(x_i | v_j) = argmax_{v_j} P(v_j) P(x_1, …, x_n | v_j). Usually, the posteriors are close to 0 or 1.

52.Notes of caution! (2) We may not observe any training data with the target value v_j that also has the attribute value x_i. Then Phat(x_i | v_j) = 0, and so Phat(v_j) Π_i Phat(x_i | v_j) = 0. To overcome this, use the m-estimate: Phat(x_i | v_j) = (n_c + m*p) / (n + m), where n_c is the number of examples where v = v_j and x = x_i, m is the weight given to the prior (e.g., the number of virtual examples), p is the prior estimate, and n is the total number of training examples where v = v_j.
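
A one-function sketch of this m-estimate (the names and the example numbers are mine):

    def m_estimate(n_c, n, p, m):
        """Phat(x_i | v_j) = (n_c + m*p) / (n + m); with m = 0 this is just the MLE n_c / n."""
        return (n_c + m * p) / (n + m)

    print(m_estimate(0, 50, p=1/20, m=20))   # a zero count gets pulled toward the prior: 1/70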

53.Learning the Naïve Density Estimator. MLE: Phat(X_i = x | Y = y) = C(X_i = x and Y = y) / C(Y = y). MAP (uniform Dirichlet prior, as on slide 24): Phat(X_i = x | Y = y) = (C(X_i = x and Y = y) + m*q) / (C(Y = y) + m).

54.Putting it all together.
Training: for each example [id, y, x_1, …, x_d]:
  C(Y=ANY)++; C(Y=y)++
  for j in 1…d: C(Y=y and X_j=x_j)++
Testing: for each example [id, y, x_1, …, x_d]:
  for each y' in dom(Y):
    compute Pr(y', x_1, …, x_d) = [C(Y=y') / C(Y=ANY)] × Π_j [C(Y=y' and X_j=x_j) / C(Y=y')] (optionally smoothed as on slide 53)
  return the y' with the best Pr
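
A minimal sketch of these counters in Python (the dictionary-key convention is mine); this is the same bookkeeping that the Hadoop version below distributes across mappers and reducers:

    from collections import defaultdict

    C = defaultdict(int)   # event counters, e.g. C[("Y", y)] and C[("XY", j, xj, y)]

    def train(examples):                     # each example is (id, y, [x1, ..., xd])
        for _id, y, xs in examples:
            C[("Y", "ANY")] += 1
            C[("Y", y)] += 1
            for j, xj in enumerate(xs):
                C[("XY", j, xj, y)] += 1

    def pr(y, xs):                           # unsmoothed Pr(y, x1, ..., xd)
        p = C[("Y", y)] / C[("Y", "ANY")]
        for j, xj in enumerate(xs):
            p *= C[("XY", j, xj, y)] / C[("Y", y)]
        return p

    def classify(xs, labels):
        return max(labels, key=lambda y: pr(y, xs))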

55.So, now how do we implement NB on Hadoop? Remember, NB has two phases: training and testing. For training we need the following counts: #(Y=*): total number of documents; #(Y=y): number of documents that have the label y; #(Y=y, X=*): total number of words in documents with label y; #(Y=y, X=x): number of times word x occurs in documents with label y; dom(X): number of unique words across all documents; dom(Y): number of unique labels across all documents.

56.MapReduce process (diagram): mappers → reducer.

57.Code Snippets: Training.
Training_map(key, value):
  parse the label (category) and document text from the value
  for each word in the document:
    count ← frequency of the word in this document
    key', value' ← (label, word), count
    emit <key', value'>
Training_reduce(key', values'):
  sum ← 0
  for each value in values':
    sum += value
  emit <key', sum>
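
A minimal Hadoop Streaming-style sketch of this training phase in Python, given as two separate scripts (the "label<TAB>document text" record format and the file names are assumptions, not the course's actual setup). The mapper emits one counter event per key, and the reducer sums the counts for each key, producing exactly the statistics listed on slide 55:

    # mapper.py -- reads "label<TAB>document text" records from stdin (Hadoop Streaming style)
    import sys
    from collections import Counter

    for line in sys.stdin:
        label, _, text = line.rstrip("\n").partition("\t")
        words = Counter(text.split())
        print("Y=%s\t1" % label)                                 # one document with this label
        print("Y=ANY\t1")                                        # total number of documents
        for word, count in words.items():
            print("Y=%s,X=%s\t%d" % (label, word, count))        # count of word under this label
        print("Y=%s,X=ANY\t%d" % (label, sum(words.values())))   # total words under this label

    # reducer.py -- Hadoop sorts by key, so all counts for one key arrive together
    import sys

    current_key, total = None, 0
    for line in sys.stdin:
        key, _, value = line.rstrip("\n").partition("\t")
        if key != current_key:
            if current_key is not None:
                print("%s\t%d" % (current_key, total))
            current_key, total = key, 0
        total += int(value)
    if current_key is not None:
        print("%s\t%d" % (current_key, total))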
