
Published by Horin on 2018/06/15 00:00


1. Evaluating Classification Performance (Data Mining)

2. Why Evaluate? Multiple methods are available for classification. For each method, multiple choices are available for its settings (e.g., the value of K for KNN, or the tree size in decision tree learning). To choose the best model, we need to assess each model's performance.

3. Basic Performance Measure: Misclassification Error. Error = classifying an example as belonging to one class when it belongs to another class. Error rate = percentage of misclassified examples out of the total examples in the validation data (or test data).

4. Naïve Classification Rule. Naïve rule: classify all examples as belonging to the most prevalent class. Often used as a benchmark: we hope to do better than that. Exception: when the goal is to identify high-value but rare outcomes, we may do well by doing worse than the naïve rule (see "lift," later).

5. Confusion Matrix. 201 1's correctly classified as "1" (True Positives); 25 0's incorrectly classified as "1" (False Positives); 85 1's incorrectly classified as "0" (False Negatives); 2689 0's correctly classified as "0" (True Negatives). We use TP to denote True Positives; similarly FP, FN, and TN. In matrix form:

                     Predicted class 1   Predicted class 0
    Actual class 1      201 (TP)             85 (FN)
    Actual class 0       25 (FP)           2689 (TN)

6. Error Rate and Accuracy. Error rate = (FP + FN)/(TP + FP + FN + TN). Overall error rate = (25 + 85)/3000 = 3.67%. Accuracy = 1 - error rate = (201 + 2689)/3000 = 96.33%. If there are multiple classes, the error rate is (sum of misclassified records)/(total records).
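A minimal Python sketch of these computations, reusing the counts from the confusion matrix on slide 5 (the variable names are illustrative):

```python
# Confusion-matrix counts from slide 5.
TP, FP, FN, TN = 201, 25, 85, 2689

total = TP + FP + FN + TN            # 3000 examples
error_rate = (FP + FN) / total       # (25 + 85) / 3000
accuracy = (TP + TN) / total         # (201 + 2689) / 3000

print(f"Error rate: {error_rate:.2%}")   # 3.67%
print(f"Accuracy:   {accuracy:.2%}")     # 96.33%
```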

7. Cutoff for Classification. Many classification algorithms classify via a two-step process. For each record: (1) compute a score or a probability of belonging to class "1"; (2) compare it to a cutoff value and classify accordingly. For example, with Naïve Bayes the default cutoff value is 0.5: if p(y=1|x) >= 0.5, classify as "1"; if p(y=1|x) < 0.5, classify as "0". Different cutoff values can be used. Typically, the error rate is lowest for a cutoff of 0.5.
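As a sketch, the cutoff step can be written in a few lines of Python (the scores below are made up for illustration; they are not from the slides):

```python
def classify(scores, cutoff=0.5):
    """Turn class-1 probabilities into 0/1 labels using a cutoff."""
    return [1 if p >= cutoff else 0 for p in scores]

scores = [0.91, 0.73, 0.52, 0.44, 0.18]   # hypothetical model scores
print(classify(scores, cutoff=0.5))        # [1, 1, 1, 0, 0]
print(classify(scores, cutoff=0.75))       # [1, 0, 0, 0, 0]
```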

8. Cutoff Table (Example). If the cutoff is 0.50, 13 examples are classified as "1". If the cutoff is 0.75, 8 examples are classified as "1". If the cutoff is 0.25, 15 examples are classified as "1".

9. Confusion Matrices for Different Cutoffs.

    Cutoff probability = 0.25 (accuracy = 19/24):

                     Predicted class 0   Predicted class 1
    Actual class 1          1                  11
    Actual class 0          8                   4

    Cutoff probability = 0.5 (not shown): accuracy = 21/24.

    Cutoff probability = 0.75 (accuracy = 18/24):

                     Predicted class 0   Predicted class 1
    Actual class 1          5                   7
    Actual class 0         11                   1

10. Lift

11. When One Class Is More Important. In many cases it is more important to identify members of one class: tax fraud, response to a promotional offer, detecting malignant tumors. In such cases, we are willing to tolerate greater overall error in return for better identifying the important class for further attention.

12. Alternate Accuracy Measures. We assume that the important class is 1. Sensitivity = % of class 1 examples correctly classified = TP / (TP + FN). Specificity = % of class 0 examples correctly classified = TN / (TN + FP). True positive rate = % of class 1 examples correctly classified as class 1 = sensitivity = TP / (TP + FN). False positive rate = % of class 0 examples classified as class 1 = 1 - specificity = FP / (TN + FP). In terms of the confusion matrix:

                     Predicted class 0   Predicted class 1
    Actual class 1         FN                  TP
    Actual class 0         TN                  FP
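A short Python sketch of these measures, again using the counts from slide 5 (TP = 201, FP = 25, FN = 85, TN = 2689):

```python
TP, FP, FN, TN = 201, 25, 85, 2689

sensitivity = TP / (TP + FN)   # true positive rate
specificity = TN / (TN + FP)
fpr = FP / (TN + FP)           # false positive rate = 1 - specificity

print(f"Sensitivity (TPR): {sensitivity:.3f}")  # 201/286  ~ 0.703
print(f"Specificity:       {specificity:.3f}")  # 2689/2714 ~ 0.991
print(f"FPR:               {fpr:.3f}")          # 25/2714   ~ 0.009
```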

13. ROC Curve. Plot the true positive rate versus the false positive rate for various values of the threshold. The diagonal is the baseline: a random classifier. Researchers sometimes use the area under the ROC curve (AUC) as a performance measure; by definition, AUC is between 0 and 1.
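For illustration, an ROC curve and AUC can be computed with scikit-learn's roc_curve and roc_auc_score; the labels and scores below are made up and not part of the original slides:

```python
from sklearn.metrics import roc_curve, roc_auc_score
import matplotlib.pyplot as plt

y_true  = [1, 1, 0, 1, 0, 0, 1, 0]                   # actual classes
y_score = [0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.2, 0.1]   # model scores

fpr, tpr, thresholds = roc_curve(y_true, y_score)
print("AUC:", roc_auc_score(y_true, y_score))

# Plot TPR vs. FPR; the dashed diagonal is the random-classifier baseline.
plt.plot(fpr, tpr, marker="o")
plt.plot([0, 1], [0, 1], linestyle="--")
plt.xlabel("False positive rate")
plt.ylabel("True positive rate")
plt.show()
```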

14. Lift Charts: Goal. Useful for assessing performance in terms of identifying the most important class. Helps evaluate, e.g., how many tax records to examine, how many loans to grant, or how many customers to mail an offer to.

15. Lift Charts (cont.). Compare the performance of the DM model to "no model, pick randomly." Measures the ability of the DM model to identify the important class, relative to its average prevalence. The charts give an explicit assessment of results over a large number of cutoffs.

16. Lift Chart: Cumulative Performance. For example, after examining 10 cases (x-axis), 9 positive cases (y-axis) have been correctly identified. [Figure: cumulative lift chart; x-axis = number of cases examined, y-axis = cumulative number of positives.]

17. Lift Charts: How to Compute. Using the model's classification scores, sort the examples from most likely to least likely to belong to the important class. Then compute lift: accumulate the correctly classified "important class" records (y-axis) and compare against the total number of records (x-axis), as in the sketch below.
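A minimal sketch of this procedure in Python (the labels and scores are illustrative, not from the slides):

```python
def cumulative_lift(y_true, y_score):
    """Sort cases from most to least likely "1", then accumulate positives."""
    ranked = [y for _, y in sorted(zip(y_score, y_true), reverse=True)]
    cum_positives, running = [], 0
    for y in ranked:
        running += y
        cum_positives.append(running)
    return cum_positives  # y-axis values; the x-axis is 1..n

y_true  = [1, 1, 0, 1, 0, 1, 0, 0]
y_score = [0.95, 0.9, 0.8, 0.7, 0.6, 0.5, 0.3, 0.1]
print(cumulative_lift(y_true, y_score))  # [1, 2, 2, 3, 3, 4, 4, 4]
```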

18. Asymmetric Costs

19. Misclassification Costs May Differ. The cost of making a misclassification error may be higher for one class than for the other(s). Looked at another way, the benefit of making a correct classification may be higher for one class than for the other(s).

20. Example: Response to a Promotional Offer. Suppose we send an offer to 1000 people, with a 1% average response rate ("1" = response, "0" = nonresponse). The "naïve rule" (classify everyone as "0") has an error rate of 1% and an accuracy of 99%, which seems good. Using DM we can correctly classify eight 1's as 1's, at the cost of misclassifying twenty 0's as 1's and two 1's as 0's.

21. The Confusion Matrix. From the numbers above, TP = 8, FN = 2, FP = 20, TN = 970:

                     Predicted class 0   Predicted class 1
    Actual class 1          2                   8
    Actual class 0        970                  20

Error rate = (2 + 20)/1000 = 2.2% (higher than the naïve rate).

22. Introducing Costs & Benefits. Suppose the profit from a "1" is $10 and the cost of sending an offer is $1. Then: under the naïve rule, all are classified as "0", so no offers are sent: no cost, no profit. Under the DM predictions, 28 offers are sent. The 8 responders bring a profit of $10 each ($80); the 20 non-responders cost $1 each ($20); the other 972 people receive nothing (no cost, no profit). Net profit = $80 - $20 = $60.
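The slide's arithmetic as a tiny Python sketch:

```python
# Net-profit arithmetic from slide 22: $10 profit per responder,
# $1 mailing cost per non-responder mailed.
responders, nonresponders_mailed = 8, 20

profit = responders * 10             # $80
cost = nonresponders_mailed * 1      # $20
print("Net profit: $", profit - cost)  # $60, vs. $0 under the naive rule
```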

23. Profit Matrix. [Figure: profit matrix for the promotional-offer example.]

24. Lift (Again). Adding costs to the mix, as above, does not change the actual classifications, but it allows us to make a better decision about the threshold: use the lift curve and change the cutoff value for "1" to maximize profit.

25. Adding Cost/Benefit to the Lift Curve. Sort the test examples in descending probability of success. For each case, record the cost/benefit of the actual outcome, and also the cumulative cost/benefit. Plot all records: the x-axis is the index number (1 for the 1st case, n for the nth case), and the y-axis is the cumulative cost/benefit. Draw a reference line from the origin to y_n (y_n = the total net benefit), as in the sketch below.
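A sketch of this construction in Python, with made-up probabilities and a $10 / -$1 benefit structure echoing the promotional-offer example:

```python
cases = [  # (predicted probability of success, net benefit of actual outcome)
    (0.95, 10), (0.90, 10), (0.80, -1), (0.70, 10),
    (0.60, -1), (0.50, -1), (0.30, -1), (0.20, -1),
]
cases.sort(key=lambda c: c[0], reverse=True)  # descending probability

# Accumulate cost/benefit case by case (the y-axis of the chart).
cumulative, running = [], 0
for _, benefit in cases:
    running += benefit
    cumulative.append(running)

# Reference line from the origin to y_n (total net benefit).
n, total = len(cases), cumulative[-1]
baseline = [total * (i + 1) / n for i in range(n)]

print(cumulative)  # [10, 20, 19, 29, 28, 27, 26, 25]; maximum at case 4
print(baseline)
```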

26. The Lift Curve May Go Negative. If the total net benefit from all cases is negative, the reference line will have a negative slope. Nonetheless, the goal is still to use the cutoff to select the point where the net benefit is at its maximum.

27. Negative Slope of the Reference Curve. [Figure: cost/benefit lift curve with a negatively sloped reference line; the zoomed-in view shows a maximum profit of $60.]

28. Multiple Classes. Theoretically, there are m(m-1) misclassification costs, since any case could be misclassified in m-1 different ways. In practice that is too many to work with; in a decision-making context, though, such complexity rarely arises, since one class is usually of primary interest. For m classes, the confusion matrix has m rows and m columns, as in the sketch below.
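For illustration, an m-by-m confusion matrix can be tallied in a few lines of Python (the labels below are made up, with m = 3):

```python
from collections import Counter

y_true = [0, 1, 2, 2, 1, 0, 2, 1]
y_pred = [0, 1, 2, 1, 1, 2, 2, 0]
m = 3

# matrix[i][j] = number of class-i cases predicted as class j.
counts = Counter(zip(y_true, y_pred))
matrix = [[counts[(i, j)] for j in range(m)] for i in range(m)]
for row in matrix:
    print(row)
# The off-diagonal cells are the m*(m-1) = 6 possible kinds of error.
```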

29. Classification Using Triage. Instead of classifying as C1 or C0, we classify as C1, C0, or "can't say." The third category might receive special human review. This takes into account a gray area in making classification decisions.
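A minimal triage sketch in Python; the two cutoff values (0.25 and 0.75) are illustrative, not from the slides:

```python
def triage(p, low=0.25, high=0.75):
    """Classify as C1, C0, or "can't say" using two cutoffs."""
    if p >= high:
        return "C1"
    if p <= low:
        return "C0"
    return "can't say"  # gray area: refer for human review

for p in [0.9, 0.5, 0.1]:
    print(p, "->", triage(p))   # 0.9 -> C1, 0.5 -> can't say, 0.1 -> C0
```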

