Naive Bayes for Text Classification - CSE IIT Kgp
1. Lecture 15: Text Classification & Naive Bayes
2. A text classification task: Email spam filtering

From: ‘‘’’ <takworlld@hotmail.com>
Subject: real estate is the only way... gem oalvgkay
Anyone can buy real estate with no money down
Stop paying rent TODAY!
There is no need to spend hundreds or even thousands for similar courses
I am 22 years old and I have already purchased 6 properties using the methods outlined in this truly INCREDIBLE ebook
Change your life NOW!
=================================================
Click Below to order: http://www.wholesaledaily.com/sales/nmd.htm
=================================================

How would you write a program that would automatically detect and delete this type of message?
3. Formal definition of TC: Training

Given:
- A document set X. Documents are typically represented in some type of high-dimensional space.
- A fixed set of classes C = {c_1, c_2, ..., c_J}. The classes are human-defined for the needs of an application (e.g., relevant vs. nonrelevant).
- A training set D of labeled documents, each labeled document ⟨d, c⟩ ∈ X × C.

Using a learning method or learning algorithm, we then wish to learn a classifier ϒ that maps documents to classes: ϒ : X → C
4. Formal definition of TC: Application/Testing

Given: a description d ∈ X of a document
Determine: ϒ(d) ∈ C, that is, the class that is most appropriate for d
5. Topic classification
6. Examples of how search engines use classification

- Language identification (classes: English vs. French, etc.)
- The automatic detection of spam pages (spam vs. nonspam)
- Topic-specific or vertical search: restrict search to a "vertical" like "related to health" (relevant to vertical vs. not)
7. Classification methods: Statistical/Probabilistic

This was our definition of the classification problem: text classification as a learning problem, i.e., (i) supervised learning of the classification function ϒ and (ii) its application to classifying new documents. We will look at doing this using Naive Bayes. This approach requires hand-classified training data, but the manual classification can be done by non-experts.
8. Derivation of Naive Bayes rule

We want to find the class that is most likely given the document. We apply Bayes' rule and then drop the denominator, since P(d) is the same for all classes; the derivation is spelled out below.
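A sketch of the derivation in LaTeX, using the notation above (c ranges over the classes C, d is the document):

\[
c_{map} = \arg\max_{c \in C} P(c \mid d) = \arg\max_{c \in C} \frac{P(d \mid c)\,P(c)}{P(d)} = \arg\max_{c \in C} P(d \mid c)\,P(c)
\]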
9. Too many parameters / sparseness

There are too many parameters, one for each unique combination of a class and a sequence of words. We would need a very, very large number of training examples to estimate that many parameters. This is the problem of data sparseness.
10. Naive Bayes conditional independence assumption

To reduce the number of parameters to a manageable size, we make the Naive Bayes conditional independence assumption: we assume that the probability of observing the conjunction of attributes is equal to the product of the individual probabilities P(X_k = t_k | c).
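Written out (a standard formulation, with X_k the random variable for the term in position k and n_d the document length):

\[
P(d \mid c) = P(\langle t_1, \ldots, t_{n_d} \rangle \mid c) = \prod_{1 \le k \le n_d} P(X_k = t_k \mid c)
\]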
11. The Naive Bayes classifier

The Naive Bayes classifier is a probabilistic classifier. We compute the probability of a document d being in a class c as shown below.
- n_d is the length of the document (number of tokens).
- P(t_k | c) is the conditional probability of term t_k occurring in a document of class c. We interpret P(t_k | c) as a measure of how much evidence t_k contributes that c is the correct class.
- P(c) is the prior probability of c. If a document's terms do not provide clear evidence for one class vs. another, we choose the c with the highest P(c).
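The scoring formula this slide describes, in LaTeX:

\[
P(c \mid d) \propto P(c) \prod_{1 \le k \le n_d} P(t_k \mid c)
\]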
12. Maximum a posteriori class

Our goal in Naive Bayes classification is to find the "best" class. The best class is the most likely or maximum a posteriori (MAP) class c_map:
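Written out (hats denote estimates of the probabilities from the training data, since the true values are unknown):

\[
c_{map} = \arg\max_{c \in C} \hat{P}(c \mid d) = \arg\max_{c \in C} \hat{P}(c) \prod_{1 \le k \le n_d} \hat{P}(t_k \mid c)
\]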
13. Taking the log

Multiplying lots of small probabilities can result in floating point underflow. Since log(xy) = log(x) + log(y), we can sum log probabilities instead of multiplying probabilities. Since log is a monotonic function, the class with the highest score does not change. So what we usually compute in practice is:
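The log-space rule, written out:

\[
c_{map} = \arg\max_{c \in C} \Big[ \log \hat{P}(c) + \sum_{1 \le k \le n_d} \log \hat{P}(t_k \mid c) \Big]
\]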
14. Naive Bayes classifier

Classification rule: the log-space MAP rule from the previous slide.
Simple interpretation:
- Each conditional parameter log P̂(t_k | c) is a weight that indicates how good an indicator t_k is for c.
- The prior log P̂(c) is a weight that indicates the relative frequency of c.
- The sum of the log prior and the term weights is then a measure of how much evidence there is for the document being in the class.
- We select the class with the most evidence.
15. Parameter estimation take 1: Maximum likelihood

Estimate the parameters P̂(c) and P̂(t | c) from the training data. How?
- Prior: N_c is the number of docs in class c; N is the total number of docs.
- Conditional probabilities: T_ct is the number of tokens of t in training documents from class c (including multiple occurrences).
We have made a Naive Bayes independence assumption here; the estimates are written out below.
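The maximum likelihood estimates described above:

\[
\hat{P}(c) = \frac{N_c}{N}, \qquad \hat{P}(t \mid c) = \frac{T_{ct}}{\sum_{t' \in V} T_{ct'}}
\]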
16. The problem with maximum likelihood estimates: Zeros

P(China | d) ∝ P(China) · P(BEIJING | China) · P(AND | China) · P(TAIPEI | China) · P(JOIN | China) · P(WTO | China)

If WTO never occurs in class China in the training set, its maximum likelihood estimate P̂(WTO | China) is zero.
17. The problem with maximum likelihood estimates: Zeros (cont.)

If there were no occurrences of WTO in documents in class China, we'd get a zero estimate: P̂(WTO | China) = 0.
→ We will get P(China | d) = 0 for any document that contains WTO!
Zero probabilities cannot be conditioned away.
18. To avoid zeros: Add-one smoothing

Before: the maximum likelihood estimate from slide 15. Now: add one to each count to avoid zeros, as shown below. B is the number of different words (in this case the size of the vocabulary: |V| = M).
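The smoothed estimate, written out:

\[
\hat{P}(t \mid c) = \frac{T_{ct} + 1}{\sum_{t' \in V} (T_{ct'} + 1)} = \frac{T_{ct} + 1}{\left(\sum_{t' \in V} T_{ct'}\right) + B}
\]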
19. To avoid zeros: Add-one smoothing

- Estimate parameters from the training corpus using add-one smoothing.
- For a new document, for each class, compute the sum of (i) the log of the prior and (ii) the logs of the conditional probabilities of the terms.
- Assign the document to the class with the largest score.
20. Naive Bayes: Training
21. Naive Bayes: Testing
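A minimal Python sketch of the training and testing procedures these two slides refer to, using the add-one-smoothed estimates above. The function names and the small example corpus at the end are illustrative assumptions, not the slides' own pseudocode.

```python
import math
from collections import Counter, defaultdict

def train_multinomial_nb(docs):
    """docs: list of (tokens, class_label) pairs. Returns priors, conditional probs, vocabulary."""
    n_docs = len(docs)
    class_doc_counts = Counter(c for _, c in docs)   # N_c
    token_counts = defaultdict(Counter)              # T_ct
    vocab = set()
    for tokens, c in docs:
        token_counts[c].update(tokens)
        vocab.update(tokens)
    B = len(vocab)
    prior = {c: n / n_docs for c, n in class_doc_counts.items()}        # P̂(c) = N_c / N
    cond_prob = {}
    for c in class_doc_counts:
        total = sum(token_counts[c].values())                           # sum_t' T_ct'
        cond_prob[c] = {t: (token_counts[c][t] + 1) / (total + B)       # add-one smoothing
                        for t in vocab}
    return prior, cond_prob, vocab

def apply_multinomial_nb(prior, cond_prob, vocab, tokens):
    """Return the class with the highest log score for the given token list."""
    scores = {}
    for c in prior:
        score = math.log(prior[c])
        for t in tokens:
            if t in vocab:                   # terms unseen in training are ignored
                score += math.log(cond_prob[c][t])
        scores[c] = score
    return max(scores, key=scores.get)

# Example usage with an assumed China / not-China toy corpus
# (the slides' own exercise table was not preserved):
train = [
    ("Chinese Beijing Chinese".split(), "China"),
    ("Chinese Chinese Shanghai".split(), "China"),
    ("Chinese Macao".split(), "China"),
    ("Tokyo Japan Chinese".split(), "not-China"),
]
prior, cond_prob, vocab = train_multinomial_nb(train)
print(apply_multinomial_nb(prior, cond_prob, vocab,
                           "Chinese Chinese Chinese Tokyo Japan".split()))  # -> China
```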
22. Exercise

Estimate the parameters of the Naive Bayes classifier and classify the test document.
23. Example: Parameter estimates

The denominators are (8 + 6) and (3 + 6) because the lengths of text_c and text_c̄ are 8 and 3, respectively, and because the constant B is 6, as the vocabulary consists of six terms.
24. Example: Classification

Thus, the classifier assigns the test document to c = China. The reason for this classification decision is that the three occurrences of the positive indicator CHINESE in d_5 outweigh the occurrences of the two negative indicators JAPAN and TOKYO.
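Spelling out the computation behind this decision, assuming the training data implied by the previous slide: a class c = China whose concatenated text has 8 tokens, a complement class c̄ with 3 tokens, a six-term vocabulary, priors 3/4 and 1/4, and test document d_5 containing CHINESE three times plus TOKYO and JAPAN (these counts are an assumption consistent with the denominators on slide 23):

\[
\hat{P}(\text{Chinese} \mid c) = \tfrac{5+1}{8+6} = \tfrac{3}{7}, \quad
\hat{P}(\text{Tokyo} \mid c) = \hat{P}(\text{Japan} \mid c) = \tfrac{0+1}{8+6} = \tfrac{1}{14}
\]
\[
\hat{P}(\text{Chinese} \mid \bar{c}) = \tfrac{1+1}{3+6} = \tfrac{2}{9}, \quad
\hat{P}(\text{Tokyo} \mid \bar{c}) = \hat{P}(\text{Japan} \mid \bar{c}) = \tfrac{1+1}{3+6} = \tfrac{2}{9}
\]
\[
\hat{P}(c \mid d_5) \propto \tfrac{3}{4} \cdot \left(\tfrac{3}{7}\right)^3 \cdot \tfrac{1}{14} \cdot \tfrac{1}{14} \approx 0.0003,
\qquad
\hat{P}(\bar{c} \mid d_5) \propto \tfrac{1}{4} \cdot \left(\tfrac{2}{9}\right)^3 \cdot \tfrac{2}{9} \cdot \tfrac{2}{9} \approx 0.0001
\]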
25. Generative model

- Generate a class with probability P(c).
- Generate each of the words (in their respective positions), conditional on the class but independent of each other, with probability P(t_k | c).
To classify docs, we "reengineer" this process and find the class that is most likely to have generated the doc.
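A short Python sketch of this generative story (the function name and the fixed-length argument are illustrative; the model as stated does not specify how document length is chosen):

```python
import random

def generate_document(prior, cond_prob, length):
    """Sample a class from P(c), then sample `length` terms i.i.d. from P(t | c)."""
    classes = list(prior)
    c = random.choices(classes, weights=[prior[k] for k in classes], k=1)[0]
    terms = list(cond_prob[c])
    tokens = random.choices(terms, weights=[cond_prob[c][t] for t in terms], k=length)
    return c, tokens
```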
26. Evaluating classification

Evaluation must be done on test data that are independent of the training data (usually a disjoint set of instances). It is easy to get good performance on a test set that was available to the learner during training (e.g., just memorize the test set).
Measures: precision, recall, F1, classification accuracy.
27. Constructing the confusion matrix for a class c
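The per-class contingency table this slide refers to has the standard layout (shown here as a plain-text table):

                              in class c              not in class c
    predicted to be in c      true positives (TP)     false positives (FP)
    predicted not to be in c  false negatives (FN)    true negatives (TN)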
28. Precision P and recall R

P = TP / (TP + FP)
R = TP / (TP + FN)
29. A combined measure: F

F1 allows us to trade off precision against recall. It is the harmonic mean of P and R:
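Written out:

\[
F_1 = \frac{2PR}{P + R} = \frac{1}{\tfrac{1}{2}\cdot\tfrac{1}{P} + \tfrac{1}{2}\cdot\tfrac{1}{R}}
\]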