UC Berkeley Naive Bayes Lecture Slides


1. Announcements
  • Homework 9: released soon, due Monday 4/14 at 11:59pm
  • Final Contest (optional): opportunities for extra credit every Sunday
  • Cal Day: Robot Learning Lab Open House, Saturday 10am-1pm, 3rd floor Sutardja Dai Hall; robot demos of towel folding, knot tying, high-fives, fist-pumps, hugs

2. CS 188: Artificial Intelligence - Naïve Bayes
  Instructors: Dan Klein and Pieter Abbeel, University of California, Berkeley
  [These slides were created by Dan Klein and Pieter Abbeel for CS188 Intro to AI at UC Berkeley. All CS188 materials are available at http://ai.berkeley.edu.]

3. Machine Learning
  • Up until now: how to use a model to make optimal decisions
  • Machine learning: how to acquire a model from data / experience
    • Learning parameters (e.g. probabilities)
    • Learning structure (e.g. BN graphs)
    • Learning hidden concepts (e.g. clustering)
  • Today: model-based classification with Naïve Bayes

4. Classification

5. Classification

6. Example: Digit Recognition
  • Input: images / pixel grids
  • Output: a digit 0-9
  • Setup:
    • Get a large collection of example images, each labeled with a digit
    • Note: someone has to hand label all this data!
    • Want to learn to predict labels of new, future digit images
  • Features: the attributes used to make the digit decision
    • Pixels: (6,8)=ON
    • Shape patterns: NumComponents, AspectRatio, NumLoops, ...
  [Example images labeled 0, 1, 2, 1, and a new unlabeled image (??)]

7. Other Classification Tasks
  • Classification: given inputs x, predict labels (classes) y
  • Examples:
    • Spam detection (input: document, classes: spam / ham)
    • OCR (input: images, classes: characters)
    • Medical diagnosis (input: symptoms, classes: diseases)
    • Automatic essay grading (input: document, classes: grades)
    • Fraud detection (input: account activity, classes: fraud / no fraud)
    • Customer service email routing
    • ... many more
  • Classification is an important commercial technology!

8. Model-Based Classification

9. Model-Based Classification
  • Model-based approach:
    • Build a model (e.g. a Bayes' net) where both the label and the features are random variables
    • Instantiate any observed features
    • Query for the distribution of the label conditioned on the features
  • Challenges:
    • What structure should the BN have?
    • How should we learn its parameters?

10. Naïve Bayes for Digits
  • Naïve Bayes: assume all features are independent effects of the label
  • Simple digit recognition version:
    • One feature (variable) F_ij for each grid position <i,j>
    • Feature values are on / off, based on whether intensity is more or less than 0.5 in the underlying image
    • Each input maps to a feature vector; here there are lots of features, each binary valued
  • Naïve Bayes model: what do we need to learn?
  [Bayes' net diagram: label Y as parent of features F_1, F_2, ..., F_n]
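
As a rough sketch of the feature extraction described above: the slide only says that each grid position becomes an on/off feature based on a 0.5 intensity threshold, so the NumPy representation, function name, and toy 4x4 image below are illustrative assumptions, not part of the slides.

```python
import numpy as np

def extract_features(image: np.ndarray) -> np.ndarray:
    """Map a grayscale image (values in [0, 1]) to binary on/off features.

    Each grid position (i, j) becomes one feature F_ij that is 1 ("on")
    when the pixel intensity exceeds 0.5 and 0 ("off") otherwise.
    """
    return (image > 0.5).astype(int).ravel()

# Hypothetical 4x4 "image" with a bright vertical stroke.
image = np.array([
    [0.0, 0.9, 0.1, 0.0],
    [0.0, 0.8, 0.2, 0.0],
    [0.0, 0.9, 0.0, 0.0],
    [0.0, 0.7, 0.1, 0.0],
])
print(extract_features(image))  # 16 binary features, one per grid position
```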

11. General Naïve Bayes
  • A general Naïve Bayes model (the factorization is written out below):
    • The full joint distribution over Y, F_1, ..., F_n has |Y| × |F|^n values
    • The Naïve Bayes model needs only |Y| parameters for P(Y) plus n × |F| × |Y| parameters for the P(F_i | Y) tables
  • We only have to specify how each feature depends on the class
  • Total number of parameters is linear in n
  • Model is very simplistic, but often works anyway
  [Bayes' net diagram: Y as parent of F_1, F_2, ..., F_n]
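
Written out, the model the slide sketches is the standard Naïve Bayes factorization; the equation below is the standard form rather than text recovered from the slide image:

```latex
P(Y, F_1, \dots, F_n) \;=\; P(Y)\,\prod_{i=1}^{n} P(F_i \mid Y)
% Full joint table:   |Y| \cdot |F|^n values
% Naive Bayes model:  |Y| parameters for P(Y), plus
%                     n \cdot |F| \cdot |Y| parameters for the P(F_i | Y) tables
```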

12. Inference for Naïve Bayes
  • Goal: compute the posterior distribution over the label variable Y
  • Step 1: get the joint probability of label and evidence for each label
  • Step 2: sum to get the probability of the evidence
  • Step 3: normalize by dividing Step 1 by Step 2
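
A minimal sketch of those three steps, assuming the prior and per-feature conditional tables are given as plain dictionaries; the variable names and toy numbers are illustrative, not from the slides.

```python
def naive_bayes_posterior(prior, cond, evidence):
    """Compute P(Y | F_1 ... F_n) for a Naive Bayes model.

    prior:    dict label -> P(Y=label)
    cond:     list of dicts, one per feature; cond[i][label][value] = P(F_i=value | Y=label)
    evidence: list of observed feature values, one per feature
    """
    # Step 1: joint probability of label and evidence, for each label
    joint = {}
    for y, p_y in prior.items():
        p = p_y
        for table, f in zip(cond, evidence):
            p *= table[y][f]
        joint[y] = p

    # Step 2: probability of the evidence (sum over labels)
    z = sum(joint.values())

    # Step 3: normalize
    return {y: p / z for y, p in joint.items()}

# Toy example with two labels and two binary features (numbers are made up).
prior = {"spam": 0.33, "ham": 0.67}
cond = [
    {"spam": {0: 0.2, 1: 0.8}, "ham": {0: 0.9, 1: 0.1}},
    {"spam": {0: 0.5, 1: 0.5}, "ham": {0: 0.7, 1: 0.3}},
]
print(naive_bayes_posterior(prior, cond, evidence=[1, 0]))
```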

13. General Naïve Bayes
  • What do we need in order to use Naïve Bayes?
  • Inference method (we just saw this part)
    • Start with a bunch of probabilities: P(Y) and the P(F_i | Y) tables
    • Use standard inference to compute P(Y | F_1 ... F_n)
    • Nothing new here
  • Estimates of local conditional probability tables
    • P(Y), the prior over labels
    • P(F_i | Y) for each feature (evidence variable)
    • These probabilities are collectively called the parameters of the model and denoted by θ
    • Up until now, we assumed these appeared by magic, but they typically come from training data counts: we'll look at this soon

14. Example: Conditional Probabilities
  Learned tables (prior and two pixel features):
    P(Y): 0.1 for every digit class 1-9 and 0
    P(on | Y), one pixel feature:      1: 0.01  2: 0.05  3: 0.05  4: 0.30  5: 0.80  6: 0.90  7: 0.05  8: 0.60  9: 0.50  0: 0.80
    P(on | Y), a second pixel feature: 1: 0.05  2: 0.01  3: 0.90  4: 0.80  5: 0.90  6: 0.90  7: 0.25  8: 0.85  9: 0.60  0: 0.80

16. Naïve Bayes for Text
  • Bag-of-words Naïve Bayes:
    • Features: W_i is the word at position i
    • As before: predict label conditioned on feature variables (spam vs. ham)
    • As before: assume features are conditionally independent given label
    • New: each W_i is identically distributed
  • Generative model: P(Y, W_1 ... W_n) = P(Y) ∏_i P(W_i | Y)
  • "Tied" distributions and bag-of-words:
    • Usually, each variable gets its own conditional probability distribution P(F | Y)
    • In a bag-of-words model, each position is identically distributed: all positions share the same conditional probs P(W | Y)
    • Why make this assumption?
  • Called "bag-of-words" because the model is insensitive to word order or reordering
  • Note: W_i is the word at position i, not the i-th word in the dictionary!

17. Example: Spam Filtering
  • Model: P(Y, W_1 ... W_n) = P(Y) ∏_i P(W_i | Y)
  • What are the parameters? Where do these tables come from?
    P(Y):      ham: 0.66   spam: 0.33
    P(W | Y), one word table per class:
      the: 0.0156   to: 0.0153   and: 0.0115   of: 0.0095    you: 0.0093   a: 0.0086     with: 0.0080   from: 0.0075   ...
      the: 0.0210   to: 0.0133   of: 0.0119    2002: 0.0110  with: 0.0108  from: 0.0107  and: 0.0105    a: 0.0100      ...

18. Spam Example
  Word      P(w|spam)  P(w|ham)   Tot Spam  Tot Ham
  (prior)   0.33333    0.66666    -1.1      -0.4
  Gary      0.00002    0.00021    -11.8     -8.9
  would     0.00069    0.00084    -19.1     -16.0
  you       0.00881    0.00304    -23.8     -21.8
  like      0.00086    0.00083    -30.9     -28.9
  to        0.01517    0.01339    -35.1     -33.2
  lose      0.00008    0.00002    -44.5     -44.0
  weight    0.00016    0.00002    -53.3     -55.0
  while     0.00027    0.00027    -61.5     -63.2
  you       0.00881    0.00304    -66.2     -69.0
  sleep     0.00006    0.00001    -76.0     -80.5
  ("Tot Spam" and "Tot Ham" are running sums of the natural-log probabilities.)
  P(spam | w) = 98.9%
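
A small sketch of the same calculation in log space, using the per-word numbers copied from the table above; normalizing the two log scores gives a spam posterior of roughly 0.99, in line with the slide's 98.9%.

```python
import math

# Per-word probabilities from the table above (prior first).
p_spam = [0.33333, 0.00002, 0.00069, 0.00881, 0.00086, 0.01517,
          0.00008, 0.00016, 0.00027, 0.00881, 0.00006]
p_ham  = [0.66666, 0.00021, 0.00084, 0.00304, 0.00083, 0.01339,
          0.00002, 0.00002, 0.00027, 0.00304, 0.00001]

# Sums of natural-log probabilities (the "Tot Spam" / "Tot Ham" columns).
log_spam = sum(math.log(p) for p in p_spam)
log_ham  = sum(math.log(p) for p in p_ham)

# Normalize in log space to get the posterior.
posterior_spam = 1.0 / (1.0 + math.exp(log_ham - log_spam))
print(f"log P(spam, w) = {log_spam:.1f}, log P(ham, w) = {log_ham:.1f}")
print(f"P(spam | w) = {posterior_spam:.3f}")  # roughly 0.99
```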

19. Training and Testing

20. Important Concepts
  • Data: labeled instances, e.g. emails marked spam/ham
    • Training set
    • Held-out set
    • Test set
  • Features: attribute-value pairs which characterize each x
  • Experimentation cycle
    • Learn parameters (e.g. model probabilities) on the training set
    • (Tune hyperparameters on the held-out set)
    • Compute accuracy on the test set
    • Very important: never "peek" at the test set!
  • Evaluation
    • Accuracy: fraction of instances predicted correctly
  • Overfitting and generalization
    • Want a classifier which does well on test data
    • Overfitting: fitting the training data very closely, but not generalizing well
    • We'll investigate overfitting and generalization formally in a few lectures
  [Diagram: data split into Training Data, Held-Out Data, and Test Data]

21. Generalization and Overfitting

22. Overfitting
  [Plot: a degree-15 polynomial fit to training points, illustrating overfitting]

23. Example: Overfitting
  2 wins!!

24. Example: Overfitting
  • Posteriors determined by relative probabilities (odds ratios):
    south-west : inf      screens    : inf
    nation     : inf      minute     : inf
    morally    : inf      guaranteed : inf
    nicely     : inf      $205.00    : inf
    extent     : inf      delivery   : inf
    seriously  : inf      signature  : inf
    ...                   ...
  • What went wrong here?

25. Generalization and Overfitting
  • Relative-frequency parameters will overfit the training data!
    • Just because we never saw a 3 with pixel (15,15) on during training doesn't mean we won't see it at test time
    • Unlikely that every occurrence of "minute" is 100% spam
    • Unlikely that every occurrence of "seriously" is 100% ham
    • What about all the words that don't occur in the training set at all?
    • In general, we can't go around giving unseen events zero probability
  • As an extreme case, imagine using the entire email as the only feature
    • Would get the training data perfect (if deterministic labeling)
    • Wouldn't generalize at all
  • Just making the bag-of-words assumption gives us some generalization, but it isn't enough
  • To generalize better: we need to smooth or regularize the estimates

26. Parameter Estimation

27. Parameter Estimation
  • Estimating the distribution of a random variable
  • Elicitation: ask a human (why is this hard?)
  • Empirically: use training data (learning!)
    • E.g.: for each outcome x, look at the empirical rate of that value: P_ML(x) = count(x) / total samples
    • Example: from the sample r, r, b, we get P_ML(r) = 2/3
    • This is the estimate that maximizes the likelihood of the data
  [Figure: repeated samples of red (r) and blue (b) outcomes]

28. Smoothing

29. Maximum Likelihood?
  • Relative frequencies are the maximum likelihood estimates: θ_ML = argmax_θ P(X | θ)
  • Another option is to consider the most likely parameter value given the data: θ_MAP = argmax_θ P(θ | X)

30. Unseen Events

31. Laplace Smoothing
  • Laplace's estimate: pretend you saw every outcome once more than you actually did
  • Can derive this estimate with Dirichlet priors (see cs281a)
  [Figure: sample r, r, b]
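
The estimate the slide refers to, written out; this is the standard Laplace (add-one) formula, and the worked numbers use the r, r, b sample shown in the figure.

```latex
P_{\mathrm{LAP}}(x) \;=\; \frac{c(x) + 1}{N + |X|}
% Example with the sample r, r, b  (N = 3 observations, |X| = 2 outcomes):
%   P_LAP(r) = (2 + 1) / (3 + 2) = 3/5
%   P_LAP(b) = (1 + 1) / (3 + 2) = 2/5
% Compare the unsmoothed estimates P_ML(r) = 2/3, P_ML(b) = 1/3.
```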

32. Laplace Smoothing
  • Laplace's estimate (extended): pretend you saw every outcome k extra times
    • What's Laplace with k = 0?
    • k is the strength of the prior
  • Laplace for conditionals: smooth each condition independently
  [Figure: sample r, r, b]
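
A minimal sketch of the extended estimate and its conditional version; the formulas are the standard ones the slide describes, while the function names and toy counts are illustrative assumptions.

```python
from collections import Counter

def laplace(counts: Counter, domain, k: float = 1.0):
    """P_LAP,k(x) = (count(x) + k) / (N + k * |X|)."""
    n = sum(counts.values())
    return {x: (counts[x] + k) / (n + k * len(domain)) for x in domain}

def laplace_conditional(pair_counts, x_domain, y_domain, k: float = 1.0):
    """P_LAP,k(x | y) = (count(x, y) + k) / (count(y) + k * |X|), smoothed separately for each condition y."""
    table = {}
    for y in y_domain:
        y_total = sum(pair_counts[(x, y)] for x in x_domain)
        table[y] = {x: (pair_counts[(x, y)] + k) / (y_total + k * len(x_domain))
                    for x in x_domain}
    return table

# Unconditional example matching the r, r, b figure:
print(laplace(Counter({"r": 2, "b": 1}), ["r", "b"], k=1.0))  # {'r': 0.6, 'b': 0.4}

# Conditional toy example: "minute" seen only in spam (made-up counts),
# yet its smoothed probability given ham is no longer zero.
pair_counts = Counter({("minute", "spam"): 3, ("minute", "ham"): 0,
                       ("other", "spam"): 97, ("other", "ham"): 200})
print(laplace_conditional(pair_counts, ["minute", "other"], ["spam", "ham"], k=1.0))
```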

33. Estimation: Linear Interpolation*
  • In practice, Laplace often performs poorly for P(X|Y):
    • When |X| is very large
    • When |Y| is very large
  • Another option: linear interpolation
    • Also get the empirical P(X) from the data
    • Make sure the estimate of P(X|Y) isn't too different from the empirical P(X)
    • What if α is 0? 1?
  • For even better ways to estimate parameters, as well as details of the math, see cs281a, cs288
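
The interpolated estimate described above, written out in its standard form; α is the interpolation weight the slide's question refers to.

```latex
P_{\mathrm{LIN}}(x \mid y) \;=\; \alpha\,\hat{P}_{\mathrm{ML}}(x \mid y) \;+\; (1 - \alpha)\,\hat{P}_{\mathrm{ML}}(x)
% alpha = 1 recovers the unsmoothed conditional estimate;
% alpha = 0 ignores the class entirely and backs off to the empirical P(x).
```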

34. Real NB: Smoothing
  • For real classification problems, smoothing is critical
  • New odds ratios:
    helvetica : 11.4      verdana : 28.8
    seems     : 10.8      Credit  : 28.4
    group     : 10.2      ORDER   : 27.2
    ago       :  8.4      <FONT>  : 26.9
    areas     :  8.3      money   : 26.5
    ...                   ...
  • Do these make more sense?

35. Tuning

36. Tuning on Held-Out Data
  • Now we've got two kinds of unknowns
    • Parameters: the probabilities P(X|Y), P(Y)
    • Hyperparameters: e.g. the amount / type of smoothing to do, k, α
  • What should we learn where?
    • Learn parameters from training data
    • Tune hyperparameters on different data; why?
    • For each value of the hyperparameters, train and test on the held-out data
    • Choose the best value and do a final test on the test data (a sketch of this loop follows below)
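
A minimal sketch of that tuning loop. The callables train_fn and eval_fn, the candidate k values, and the dataset arguments are hypothetical placeholders, not anything defined in the slides.

```python
def tune_smoothing(train_fn, eval_fn, train_data, heldout_data, test_data,
                   candidate_ks=(0.001, 0.01, 0.1, 1.0, 10.0)):
    """Pick the Laplace strength k on held-out data, then test once.

    train_fn(data, k) -> model and eval_fn(model, data) -> accuracy are
    hypothetical callables supplied by the caller.
    """
    best_k, best_acc = None, -1.0
    for k in candidate_ks:
        model = train_fn(train_data, k)     # learn parameters on training data only
        acc = eval_fn(model, heldout_data)  # tune the hyperparameter on held-out data
        if acc > best_acc:
            best_k, best_acc = k, acc
    final_model = train_fn(train_data, best_k)
    return best_k, eval_fn(final_model, test_data)  # touch the test set exactly once, at the end
```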

37. Features

38. Errors, and What to Do
  • Examples of errors:
    "Dear GlobalSCAPE Customer, GlobalSCAPE has partnered with ScanSoft to offer you the latest version of OmniPage Pro, for just $99.99* - the regular list price is $499! The most common question we've received about this offer is - Is this genuine? We would like to assure you that this offer is authorized by ScanSoft, is genuine and valid. You can get the . . ."
    ". . . To receive your Amazon.com promotional certificate, click through to http://www.amazon.com/apparel and see the prominent link for the offer. All details are there. We hope you enjoyed receiving this message. However, if you'd rather not receive future e-mails announcing new store launches, please click . . ."

39. What to Do About Errors?
  • Need more features: words aren't enough!
    • Have you emailed the sender before?
    • Have 1K other people just gotten the same email?
    • Is the sending information consistent?
    • Is the email in ALL CAPS?
    • Do inline URLs point where they say they point?
    • Does the email address you by (your) name?
  • Can add these information sources as new variables in the NB model
  • Next class we'll talk about classifiers which let you add arbitrary features more easily

40. Baselines
  • First step: get a baseline
    • Baselines are very simple "straw man" procedures
    • Help determine how hard the task is
    • Help know what a "good" accuracy is
  • Weak baseline: most frequent label classifier
    • Gives all test instances whatever label was most common in the training set
    • E.g. for spam filtering, might label everything as ham
    • Accuracy might be very high if the problem is skewed
    • E.g. calling everything "ham" gets 66%, so a classifier that gets 70% isn't very good...
  • For real research, usually use previous work as a (strong) baseline

41. Confidences from a Classifier
  • The confidence of a probabilistic classifier:
    • Posterior over the top label
    • Represents how sure the classifier is of the classification
    • Any probabilistic model will have confidences
    • No guarantee confidence is correct
  • Calibration
    • Weak calibration: higher confidences mean higher accuracy
    • Strong calibration: confidence predicts accuracy rate
    • What's the value of calibration?

42. Summary
  • Bayes' rule lets us do diagnostic queries with causal probabilities
  • The naïve Bayes assumption takes all features to be independent given the class label
  • We can build classifiers out of a naïve Bayes model using training data
  • Smoothing estimates is important in real systems
  • Classifier confidences are useful, when you can get them

43. Next Time: Perceptron!

44. Precision vs. Recall
  • Let's say we want to classify web pages as homepages or not
    • In a test set of 1K pages, there are 3 homepages
    • Our classifier says they are all non-homepages: 99.7% accuracy!
    • Need new measures for rare positive events
  • Precision: fraction of guessed positives which were actually positive
  • Recall: fraction of actual positives which were guessed as positive
  • Say we guess 5 homepages, of which 2 were actually homepages (see the short check below)
    • Precision: 2 correct / 5 guessed = 0.4
    • Recall: 2 correct / 3 true = 0.67
  • Which is more important in customer support email automation?
  • Which is more important in airport face recognition?
  [Diagram: guessed positives vs. actual positives]
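
A short check of those numbers as a sketch; the page identifiers are made up and only the counts (5 guessed, 3 actual, 2 correct) come from the slide.

```python
def precision_recall(guessed_positive, actual_positive):
    """Precision = |guessed ∩ actual| / |guessed|; Recall = |guessed ∩ actual| / |actual|."""
    true_positives = len(guessed_positive & actual_positive)
    precision = true_positives / len(guessed_positive)
    recall = true_positives / len(actual_positive)
    return precision, recall

# 5 pages guessed to be homepages, 3 real homepages, 2 guesses correct.
guessed = {"p1", "p2", "p3", "p4", "p5"}
actual = {"p2", "p4", "p9"}
print(precision_recall(guessed, actual))  # (0.4, 0.666...)
```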

45. Precision vs. Recall
  • Precision/recall tradeoff
    • Often, you can trade off precision and recall
    • Only works well with weakly calibrated classifiers
  • To summarize the tradeoff:
    • Break-even point: precision value when p = r
    • F-measure: harmonic mean of p and r (written out below)
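
The harmonic mean the slide refers to, written out; this is the standard F-measure (F1) formula.

```latex
F \;=\; \frac{1}{\tfrac{1}{2}\left(\tfrac{1}{p} + \tfrac{1}{r}\right)} \;=\; \frac{2\,p\,r}{p + r}
```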
