Introduction to Information Retrieval
Lecture 15: Text Classification & Naive Bayes


1. Lecture 15: Text Classification & Naive Bayes

2. A text classification task: Email spam filtering
From: "" <takworlld@hotmail.com>
Subject: real estate is the only way... gem oalvgkay
Anyone can buy real estate with no money down. Stop paying rent TODAY! There is no need to spend hundreds or even thousands for similar courses. I am 22 years old and I have already purchased 6 properties using the methods outlined in this truly INCREDIBLE ebook. Change your life NOW!
Click Below to order: http://www.wholesaledaily.com/sales/nmd.htm
How would you write a program that would automatically detect and delete this type of message?

3. Formal definition of TC: Training
Given: a document set X (documents are typically represented in some type of high-dimensional space); a fixed set of classes C = {c_1, c_2, ..., c_J} (the classes are human-defined for the needs of an application, e.g., relevant vs. nonrelevant); and a training set D of labeled documents, where each labeled document ⟨d, c⟩ ∈ X × C.
Using a learning method or learning algorithm, we then wish to learn a classifier γ that maps documents to classes: γ: X → C

4. Formal definition of TC: Application/Testing
Given: a description d ∈ X of a document. Determine: γ(d) ∈ C, that is, the class that is most appropriate for d.

5. Topic classification

6. Examples of how search engines use classification
Language identification (classes: English vs. French, etc.). The automatic detection of spam pages (spam vs. nonspam). Topic-specific or vertical search: restrict search to a "vertical" like "related to health" (relevant to vertical vs. not).

7. Classification methods: Statistical/Probabilistic
This was our definition of the classification problem: text classification as a learning problem, i.e. (i) supervised learning of the classification function γ and (ii) its application to classifying new documents. We will look at doing this using Naive Bayes. This requires hand-classified training data, but the manual classification can be done by non-experts.

8. Derivation of Naive Bayes rule
We want to find the class that is most likely given the document. Apply Bayes' rule, then drop the denominator, since P(d) is the same for all classes:
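Writing c_map for the class we choose:
c_{map} = \arg\max_{c \in C} P(c \mid d) = \arg\max_{c \in C} \frac{P(d \mid c)\,P(c)}{P(d)} = \arg\max_{c \in C} P(d \mid c)\,P(c)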

9. Too many parameters / sparseness
There are too many parameters, one for each unique combination of a class and a sequence of words. We would need a very, very large number of training examples to estimate that many parameters. This is the problem of data sparseness.

10. Naive Bayes conditional independence assumption
To reduce the number of parameters to a manageable size, we make the Naive Bayes conditional independence assumption: we assume that the probability of observing the conjunction of attributes is equal to the product of the individual probabilities P(X_k = t_k | c).
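In symbols, where ⟨t_1, ..., t_{n_d}⟩ is the sequence of tokens in d:
P(d \mid c) = P(\langle t_1, \ldots, t_{n_d} \rangle \mid c) = \prod_{1 \le k \le n_d} P(X_k = t_k \mid c)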

11. The Naive Bayes classifier
The Naive Bayes classifier is a probabilistic classifier. We compute the probability of a document d being in a class c as follows, where n_d is the length of the document (number of tokens), P(t_k | c) is the conditional probability of term t_k occurring in a document of class c (a measure of how much evidence t_k contributes that c is the correct class), and P(c) is the prior probability of c. If a document's terms do not provide clear evidence for one class vs. another, we choose the c with the highest P(c).
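The scoring formula this describes:
P(c \mid d) \propto P(c) \prod_{1 \le k \le n_d} P(t_k \mid c)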

12. Maximum a posteriori class
Our goal in Naive Bayes classification is to find the "best" class. The best class is the most likely or maximum a posteriori (MAP) class c_map:
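Writing \hat{P} for probabilities estimated from the training set:
c_{map} = \arg\max_{c \in C} \hat{P}(c \mid d) = \arg\max_{c \in C} \hat{P}(c) \prod_{1 \le k \le n_d} \hat{P}(t_k \mid c)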

13. Taking the log
Multiplying lots of small probabilities can result in floating point underflow. Since log(xy) = log(x) + log(y), we can sum log probabilities instead of multiplying probabilities. Since log is a monotonic function, the class with the highest score does not change. So what we usually compute in practice is:
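That is:
c_{map} = \arg\max_{c \in C} \Big[ \log \hat{P}(c) + \sum_{1 \le k \le n_d} \log \hat{P}(t_k \mid c) \Big]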

14. Naive Bayes classifier
Classification rule: assign the class c_map defined above. Simple interpretation: each conditional parameter log P̂(t_k | c) is a weight that indicates how good an indicator t_k is for c. The prior log P̂(c) is a weight that indicates the relative frequency of c. The sum of the log prior and the term weights is then a measure of how much evidence there is for the document being in the class. We select the class with the most evidence.

15. Parameter estimation take 1: Maximum likelihood
Estimate the parameters P̂(c) and P̂(t | c) from the training data. How? Prior: N_c is the number of docs in class c, N the total number of docs. Conditional probabilities: T_ct is the number of tokens of t in the training documents from class c (multiple occurrences included). We have also made a Naive Bayes independence assumption here:
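The maximum likelihood estimates:
\hat{P}(c) = \frac{N_c}{N}
\hat{P}(t \mid c) = \frac{T_{ct}}{\sum_{t' \in V} T_{ct'}}
The independence assumption referred to here is positional independence: \hat{P}(X_{k_1} = t \mid c) = \hat{P}(X_{k_2} = t \mid c) for any two positions k_1, k_2.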

16. The problem with maximum likelihood estimates: Zeros
P(China | d) ∝ P(China) · P(BEIJING | China) · P(AND | China) · P(TAIPEI | China) · P(JOIN | China) · P(WTO | China). If WTO never occurs in class China in the training set:
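then the maximum likelihood estimate is zero:
\hat{P}(\mathrm{WTO} \mid \mathrm{China}) = \frac{T_{\mathrm{China},\mathrm{WTO}}}{\sum_{t' \in V} T_{\mathrm{China},t'}} = 0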

17. The problem with maximum likelihood estimates: Zeros (cont.)
If there were no occurrences of WTO in documents in class China, we would get a zero estimate for P̂(WTO | China), and then P(China | d) = 0 for any document that contains WTO. Zero probabilities cannot be conditioned away.

18. To avoid zeros: Add-one smoothing
Add one to each count to avoid zeros. B is the number of different words (in this case the size of the vocabulary: |V| = M). Before and after:
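Before:
\hat{P}(t \mid c) = \frac{T_{ct}}{\sum_{t' \in V} T_{ct'}}
With add-one smoothing:
\hat{P}(t \mid c) = \frac{T_{ct} + 1}{\sum_{t' \in V} (T_{ct'} + 1)} = \frac{T_{ct} + 1}{\big(\sum_{t' \in V} T_{ct'}\big) + B}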

19. To avoid zeros: Add-one smoothing
Estimate the parameters from the training corpus using add-one smoothing. For a new document, for each class, compute the sum of (i) the log of the prior and (ii) the logs of the conditional probabilities of the terms. Assign the document to the class with the largest score.

20. Naive Bayes: Training

21. Naive Bayes: Testing
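A minimal Python sketch of the training and testing procedures these two slides refer to: multinomial Naive Bayes with add-one smoothing and log-space scoring, following the formulas above. The function names and the toy data are illustrative, not the lecture's own pseudocode.

from collections import Counter
from math import log

def train_nb(docs):
    """Train a multinomial Naive Bayes model with add-one smoothing.
    docs: list of (list_of_tokens, class_label) pairs."""
    vocab = {t for tokens, _ in docs for t in tokens}
    classes = {c for _, c in docs}
    n_docs = len(docs)
    prior, cond_prob = {}, {}
    for c in classes:
        class_docs = [tokens for tokens, label in docs if label == c]
        prior[c] = len(class_docs) / n_docs                 # P(c) = N_c / N
        counts = Counter(t for tokens in class_docs for t in tokens)
        total = sum(counts.values())                        # sum over t' of T_ct'
        # P(t|c) = (T_ct + 1) / (sum_t' T_ct' + B), with B = |V|
        cond_prob[c] = {t: (counts[t] + 1) / (total + len(vocab)) for t in vocab}
    return vocab, prior, cond_prob

def apply_nb(vocab, prior, cond_prob, tokens):
    """Return the class with the highest log score for the token list."""
    scores = {}
    for c in prior:
        score = log(prior[c])                               # log P(c)
        for t in tokens:
            if t in vocab:                                  # ignore terms unseen in training
                score += log(cond_prob[c][t])               # + log P(t|c)
        scores[c] = score
    return max(scores, key=scores.get)

# Toy usage (data is illustrative):
training = [
    (["Chinese", "Beijing", "Chinese"], "China"),
    (["Chinese", "Chinese", "Shanghai"], "China"),
    (["Chinese", "Macao"], "China"),
    (["Tokyo", "Japan", "Chinese"], "not-China"),
]
model = train_nb(training)
print(apply_nb(*model, ["Chinese", "Chinese", "Chinese", "Tokyo", "Japan"]))  # -> "China"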

22. Exercise
Estimate the parameters of the Naive Bayes classifier. Classify the test document.
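Assuming the standard example data, which matches the numbers quoted on the next two slides (class text lengths 8 and 3, a six-term vocabulary, and a test document containing CHINESE three times plus TOKYO and JAPAN):
Training set: docID 1: Chinese Beijing Chinese (class c = China); docID 2: Chinese Chinese Shanghai (c); docID 3: Chinese Macao (c); docID 4: Tokyo Japan Chinese (not c).
Test set: docID 5: Chinese Chinese Chinese Tokyo Japan (class ?).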

23. Example: Parameter estimates
The denominators are (8 + 6) and (3 + 6) because the lengths of the class texts (all of class c's documents concatenated, and likewise for the complement class) are 8 and 3, respectively, and because the constant B is 6, as the vocabulary consists of six terms.
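Under the example data above, the estimates work out to:
\hat{P}(c) = 3/4, \quad \hat{P}(\bar{c}) = 1/4
\hat{P}(\mathrm{Chinese} \mid c) = (5+1)/(8+6) = 6/14 = 3/7
\hat{P}(\mathrm{Tokyo} \mid c) = \hat{P}(\mathrm{Japan} \mid c) = (0+1)/(8+6) = 1/14
\hat{P}(\mathrm{Chinese} \mid \bar{c}) = (1+1)/(3+6) = 2/9
\hat{P}(\mathrm{Tokyo} \mid \bar{c}) = \hat{P}(\mathrm{Japan} \mid \bar{c}) = (1+1)/(3+6) = 2/9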

24. Example: Classification
Thus, the classifier assigns the test document to c = China. The reason for this classification decision is that the three occurrences of the positive indicator CHINESE in d_5 outweigh the occurrences of the two negative indicators JAPAN and TOKYO.
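Numerically, with the estimates above:
\hat{P}(c \mid d_5) \propto 3/4 \cdot (3/7)^3 \cdot 1/14 \cdot 1/14 \approx 0.0003
\hat{P}(\bar{c} \mid d_5) \propto 1/4 \cdot (2/9)^3 \cdot 2/9 \cdot 2/9 \approx 0.0001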

25. Generative model
Generate a class with probability P(c). Generate each of the words (in their respective positions), conditional on the class but independent of each other, with probability P(t_k | c). To classify docs, we "reengineer" this process and find the class that is most likely to have generated the doc.

26. Evaluating classification
Evaluation must be done on test data that are independent of the training data (usually a disjoint set of instances). It is easy to get good performance on a test set that was available to the learner during training (e.g., just memorize the test set). Measures: precision, recall, F_1, classification accuracy.

27. Constructing the confusion matrix for a class c
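The per-class contingency table, assuming the standard layout used with the precision/recall definitions on the next slide:
                              in class c             not in class c
predicted to be in c          true positives (TP)    false positives (FP)
predicted not to be in c      false negatives (FN)   true negatives (TN)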

28. Precision P and recall R
P = TP / (TP + FP)
R = TP / (TP + FN)

29. A combined measure: F
F_1 allows us to trade off precision against recall. It is the harmonic mean of P and R:
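That is:
F_1 = \frac{1}{\tfrac{1}{2}\tfrac{1}{P} + \tfrac{1}{2}\tfrac{1}{R}} = \frac{2PR}{P + R}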

30. Averaging: Micro vs. Macro
We now have an evaluation measure (F_1) for one class, but we also want a single number that measures the aggregate performance over all classes in the collection.
Macroaveraging: compute F_1 for each of the C classes, then average these C numbers.
Microaveraging: compute TP, FP, FN for each of the C classes; sum these C numbers (e.g., all the TP counts to get the aggregate TP); then compute F_1 for the aggregate TP, FP, FN.
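A minimal Python sketch of the two averaging schemes given per-class TP/FP/FN counts (the class names and counts below are illustrative):

def f1(tp, fp, fn):
    """Harmonic mean of precision and recall; 0 if undefined."""
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    return 2 * p * r / (p + r) if p + r else 0.0

# Per-class counts: {class: (TP, FP, FN)} -- illustrative numbers
counts = {"class1": (10, 10, 10), "class2": (90, 10, 10)}

# Macroaveraging: F1 per class, then average the per-class values
macro_f1 = sum(f1(*c) for c in counts.values()) / len(counts)

# Microaveraging: pool TP, FP, FN over classes, then compute one F1
tp = sum(c[0] for c in counts.values())
fp = sum(c[1] for c in counts.values())
fn = sum(c[2] for c in counts.values())
micro_f1 = f1(tp, fp, fn)

print(macro_f1, micro_f1)  # the microaverage is dominated by the larger class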

31. Micro- vs. Macro-average: Example
