ML / NB: Using the Naive Bayes (NB) algorithm (CountVectorizer, without removing stop words) to classify and evaluate the fetch_20newsgroups dataset (20 classes of news text)



Contents

Output Results

Design Approach

Core Code


Output Results

Design Approach
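
In outline, the pipeline named in the title works as follows: load the 20-class fetch_20newsgroups corpus, split it into training and test sets, convert the raw text into bag-of-words count features with CountVectorizer while leaving stop words in place, fit a MultinomialNB classifier on the training counts, and evaluate its predictions on the test set with accuracy and a per-class classification report. A runnable sketch follows under Core Code.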

Core Code

https://www.cnblogs.com/yunyaniu/articles/10465701.html
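
A minimal end-to-end sketch of this pipeline, using scikit-learn's standard APIs; the split ratio (test_size=0.25) and random_state=33 below are illustrative assumptions, not values taken from the original post:

# Minimal sketch: CountVectorizer (stop words kept) + MultinomialNB on 20 Newsgroups.
from sklearn.datasets import fetch_20newsgroups
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import classification_report

# 1. Download/load all 20 categories of news text.
news = fetch_20newsgroups(subset='all')

# 2. Hold out a test set (test_size and random_state are illustrative).
X_train, X_test, y_train, y_test = train_test_split(
    news.data, news.target, test_size=0.25, random_state=33)

# 3. Bag-of-words counts; stop_words stays at its default (None),
#    so stop words are NOT removed.
vec = CountVectorizer()
X_train_counts = vec.fit_transform(X_train)
X_test_counts = vec.transform(X_test)

# 4. Fit multinomial Naive Bayes and predict on the test set.
mnb = MultinomialNB()
mnb.fit(X_train_counts, y_train)
y_pred = mnb.predict(X_test_counts)

# 5. Evaluate: overall accuracy plus per-class precision/recall/F1.
print('Accuracy of Naive Bayes Classifier:', mnb.score(X_test_counts, y_test))
print(classification_report(y_test, y_pred, target_names=news.target_names))

The exact numbers in the resulting report depend on the split; the per-class precision, recall and F1 values are what the Output Results section refers to.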

The MultinomialNB implementation, found at sklearn.naive_bayes:

# Excerpted from sklearn/naive_bayes.py. At module level this code relies on:
#   import numpy as np
#   from scipy.sparse import issparse
#   from sklearn.utils.extmath import safe_sparse_dot
#   from sklearn.utils.validation import check_array, check_is_fitted
# and on the BaseDiscreteNB base class defined in the same module.
class MultinomialNB(BaseDiscreteNB):
    """
    Naive Bayes classifier for multinomial models

    The multinomial Naive Bayes classifier is suitable for classification with
    discrete features (e.g., word counts for text classification). The
    multinomial distribution normally requires integer feature counts. However,
    in practice, fractional counts such as tf-idf may also work.

    Read more in the :ref:`User Guide <multinomial_naive_bayes>`.

    Parameters
    ----------
    alpha : float, optional (default=1.0)
        Additive (Laplace/Lidstone) smoothing parameter
        (0 for no smoothing).
    fit_prior : boolean, optional (default=True)
        Whether to learn class prior probabilities or not.
        If false, a uniform prior will be used.
    class_prior : array-like, size (n_classes,), optional (default=None)
        Prior probabilities of the classes. If specified the priors are not
        adjusted according to the data.

    Attributes
    ----------
    class_log_prior_ : array, shape (n_classes, )
        Smoothed empirical log probability for each class.
    intercept_ : property
        Mirrors ``class_log_prior_`` for interpreting MultinomialNB
        as a linear model.
    feature_log_prob_ : array, shape (n_classes, n_features)
        Empirical log probability of features
        given a class, ``P(x_i|y)``.
    coef_ : property
        Mirrors ``feature_log_prob_`` for interpreting MultinomialNB
        as a linear model.
    class_count_ : array, shape (n_classes,)
        Number of samples encountered for each class during fitting. This
        value is weighted by the sample weight when provided.
    feature_count_ : array, shape (n_classes, n_features)
        Number of samples encountered for each (class, feature)
        during fitting. This value is weighted by the sample weight when
        provided.

    Examples
    --------
    >>> import numpy as np
    >>> X = np.random.randint(5, size=(6, 100))
    >>> y = np.array([1, 2, 3, 4, 5, 6])
    >>> from sklearn.naive_bayes import MultinomialNB
    >>> clf = MultinomialNB()
    >>> clf.fit(X, y)
    MultinomialNB(alpha=1.0, class_prior=None, fit_prior=True)
    >>> print(clf.predict(X[2:3]))
    [3]

    Notes
    -----
    For the rationale behind the names `coef_` and `intercept_`, i.e.
    naive Bayes as a linear classifier, see J. Rennie et al. (2003),
    Tackling the poor assumptions of naive Bayes text classifiers, ICML.

    References
    ----------
    C.D. Manning, P. Raghavan and H. Schuetze (2008). Introduction to
    Information Retrieval. Cambridge University Press, pp. 234-265.
    http://nlp.stanford.edu/IR-book/html/htmledition/naive-bayes-text-classification-1.html
    """

    def __init__(self, alpha=1.0, fit_prior=True, class_prior=None):
        self.alpha = alpha
        self.fit_prior = fit_prior
        self.class_prior = class_prior

    def _count(self, X, Y):
        """Count and smooth feature occurrences."""
        if np.any((X.data if issparse(X) else X) < 0):
            raise ValueError("Input X must be non-negative")
        self.feature_count_ += safe_sparse_dot(Y.T, X)
        self.class_count_ += Y.sum(axis=0)

    def _update_feature_log_prob(self, alpha):
        """Apply smoothing to raw counts and recompute log probabilities"""
        smoothed_fc = self.feature_count_ + alpha
        smoothed_cc = smoothed_fc.sum(axis=1)
        self.feature_log_prob_ = (np.log(smoothed_fc) -
                                  np.log(smoothed_cc.reshape(-1, 1)))

    def _joint_log_likelihood(self, X):
        """Calculate the posterior log probability of the samples X"""
        check_is_fitted(self, "classes_")
        X = check_array(X, accept_sparse='csr')
        return (safe_sparse_dot(X, self.feature_log_prob_.T) +
                self.class_log_prior_)
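
For reference, the smoothing applied in _update_feature_log_prob above is the standard Lidstone/Laplace estimate. Writing N_yi for the (sample-weight-adjusted) count of feature i in class y, N_y = sum_i N_yi, n for the number of features, and alpha for the smoothing parameter, the method computes

    \log \hat{\theta}_{yi} = \log(N_{yi} + \alpha) - \log(N_y + \alpha n)

which matches the `np.log(smoothed_fc) - np.log(smoothed_cc.reshape(-1, 1))` line, since summing the smoothed counts over all n features gives N_y + alpha * n.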
