ML with RF & XGBoost: Binary classification on the Titanic dataset (predicting passenger survival) using RF and XGBoost, each with 5-fold cross-validation


橙皮 · 2022-09-19 15:22:40

Contents

Output Results

Competition Results

Design Approach

Core Code


Output Results





Competition Results

Design Approach
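The original post illustrates the design with screenshots that are not reproduced here. What the core code below does assume is that X_train, y_train, X_test and the raw test DataFrame already exist. The following is a minimal sketch of one way to build them; the feature list, fill values and file paths under data_input/Titanic Data/ are assumptions for illustration, not taken from the post.

import pandas as pd
from sklearn.feature_extraction import DictVectorizer

# Hypothetical paths: the Kaggle Titanic train/test CSVs
train = pd.read_csv('data_input/Titanic Data/train.csv')
test = pd.read_csv('data_input/Titanic Data/test.csv')

# A commonly used feature subset for this dataset (an assumption, not from the post)
features = ['Pclass', 'Sex', 'Age', 'Embarked', 'SibSp', 'Parch', 'Fare']
X_train = train[features].copy()
X_test = test[features].copy()
y_train = train['Survived']

# Fill missing values: mean age/fare, most frequent port of embarkation
X_train['Age'] = X_train['Age'].fillna(X_train['Age'].mean())
X_test['Age'] = X_test['Age'].fillna(X_test['Age'].mean())
X_test['Fare'] = X_test['Fare'].fillna(X_test['Fare'].mean())
X_train['Embarked'] = X_train['Embarked'].fillna('S')
X_test['Embarked'] = X_test['Embarked'].fillna('S')

# One-hot encode categorical columns by vectorizing each row dict
dict_vec = DictVectorizer(sparse=False)
X_train = dict_vec.fit_transform(X_train.to_dict(orient='records'))
X_test = dict_vec.transform(X_test.to_dict(orient='records'))

DictVectorizer turns each row into a numeric, one-hot encoded vector, which is all the tree-based models in the core code need.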

Core Code

import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from xgboost import XGBClassifier

# X_train, y_train, X_test and the raw test DataFrame are assumed to have been
# prepared beforehand (see the sketch under Design Approach).

# Random forest: mean 5-fold cross-validation score, then fit and predict
rfc = RandomForestClassifier()
rfc_cross_val_score = cross_val_score(rfc, X_train, y_train, cv=5).mean()
print('RF:', rfc_cross_val_score)
rfc.fit(X_train, y_train)
rfc_y_predict = rfc.predict(X_test)
rfc_submission = pd.DataFrame({'PassengerId': test['PassengerId'], 'Survived': rfc_y_predict})
rfc_submission.to_csv('data_input/Titanic Data/Titanic_rfc_submission.csv', index=False)

# XGBoost: same procedure with the default XGBClassifier
xgbc = XGBClassifier()
xgbc_cross_val_score = cross_val_score(xgbc, X_train, y_train, cv=5).mean()
print('XGBoost:', xgbc_cross_val_score)
xgbc.fit(X_train, y_train)
xgbc_y_predict = xgbc.predict(X_test)
xgbc_submission = pd.DataFrame({'PassengerId': test['PassengerId'], 'Survived': xgbc_y_predict})
xgbc_submission.to_csv('data_input/Titanic Data/Titanic_xgbc_submission.csv', index=False)
class RandomForestClassifier(ForestClassifier):
    """A random forest classifier.

    A random forest is a meta estimator that fits a number of decision tree
    classifiers on various sub-samples of the dataset and use averaging to
    improve the predictive accuracy and control over-fitting.
    The sub-sample size is always the same as the original
    input sample size but the samples are drawn with replacement if
    `bootstrap=True` (default).

    Read more in the :ref:`User Guide <forest>`.

    Parameters
    ----------
    n_estimators : integer, optional (default=10)
        The number of trees in the forest.
    criterion : string, optional (default="gini")
        The function to measure the quality of a split. Supported criteria are
        "gini" for the Gini impurity and "entropy" for the information gain.
        Note: this parameter is tree-specific.
    max_features : int, float, string or None, optional (default="auto")
        The number of features to consider when looking for the best split:
        - If int, then consider `max_features` features at each split.
        - If float, then `max_features` is a percentage and
          `int(max_features * n_features)` features are considered at each
          split.
        - If "auto", then `max_features=sqrt(n_features)`.
        - If "sqrt", then `max_features=sqrt(n_features)` (same as "auto").
        - If "log2", then `max_features=log2(n_features)`.
        - If None, then `max_features=n_features`.
        Note: the search for a split does not stop until at least one
        valid partition of the node samples is found, even if it requires to
        effectively inspect more than ``max_features`` features.
    max_depth : integer or None, optional (default=None)
        The maximum depth of the tree. If None, then nodes are expanded until
        all leaves are pure or until all leaves contain less than
        min_samples_split samples.
    min_samples_split : int, float, optional (default=2)
        The minimum number of samples required to split an internal node:
        - If int, then consider `min_samples_split` as the minimum number.
        - If float, then `min_samples_split` is a percentage and
          `ceil(min_samples_split * n_samples)` are the minimum
          number of samples for each split.
        .. versionchanged:: 0.18
           Added float values for percentages.
    min_samples_leaf : int, float, optional (default=1)
        The minimum number of samples required to be at a leaf node:
        - If int, then consider `min_samples_leaf` as the minimum number.
        - If float, then `min_samples_leaf` is a percentage and
          `ceil(min_samples_leaf * n_samples)` are the minimum
          number of samples for each node.
        .. versionchanged:: 0.18
           Added float values for percentages.
    min_weight_fraction_leaf : float, optional (default=0.)
        The minimum weighted fraction of the sum total of weights (of all
        the input samples) required to be at a leaf node. Samples have
        equal weight when sample_weight is not provided.
    max_leaf_nodes : int or None, optional (default=None)
        Grow trees with ``max_leaf_nodes`` in best-first fashion.
        Best nodes are defined as relative reduction in impurity.
        If None then unlimited number of leaf nodes.
    min_impurity_split : float,
        Threshold for early stopping in tree growth. A node will split
        if its impurity is above the threshold, otherwise it is a leaf.
        .. deprecated:: 0.19
           ``min_impurity_split`` has been deprecated in favor of
           ``min_impurity_decrease`` in 0.19 and will be removed in 0.21.
           Use ``min_impurity_decrease`` instead.
    min_impurity_decrease : float, optional (default=0.)
        A node will be split if this split induces a decrease of the impurity
        greater than or equal to this value.
        The weighted impurity decrease equation is the following::

            N_t / N * (impurity - N_t_R / N_t * right_impurity
                                - N_t_L / N_t * left_impurity)

        where ``N`` is the total number of samples, ``N_t`` is the number of
        samples at the current node, ``N_t_L`` is the number of samples in the
        left child, and ``N_t_R`` is the number of samples in the right child.
        ``N``, ``N_t``, ``N_t_R`` and ``N_t_L`` all refer to the weighted sum,
        if ``sample_weight`` is passed.
        .. versionadded:: 0.19
    bootstrap : boolean, optional (default=True)
        Whether bootstrap samples are used when building trees.
    oob_score : bool (default=False)
        Whether to use out-of-bag samples to estimate
        the generalization accuracy.
    n_jobs : integer, optional (default=1)
        The number of jobs to run in parallel for both `fit` and `predict`.
        If -1, then the number of jobs is set to the number of cores.
    random_state : int, RandomState instance or None, optional (default=None)
        If int, random_state is the seed used by the random number generator;
        If RandomState instance, random_state is the random number generator;
        If None, the random number generator is the RandomState instance used
        by `np.random`.
    verbose : int, optional (default=0)
        Controls the verbosity of the tree building process.
    warm_start : bool, optional (default=False)
        When set to ``True``, reuse the solution of the previous call to fit
        and add more estimators to the ensemble, otherwise, just fit a whole
        new forest.
    class_weight : dict, list of dicts, "balanced",
        "balanced_subsample" or None, optional (default=None)
        Weights associated with classes in the form ``{class_label: weight}``.
        If not given, all classes are supposed to have weight one. For
        multi-output problems, a list of dicts can be provided in the same
        order as the columns of y.
        Note that for multioutput (including multilabel) weights should be
        defined for each class of every column in its own dict. For example,
        for four-class multilabel classification weights should be
        [{0: 1, 1: 1}, {0: 1, 1: 5}, {0: 1, 1: 1}, {0: 1, 1: 1}] instead of
        [{1:1}, {2:5}, {3:1}, {4:1}].
        The "balanced" mode uses the values of y to automatically adjust
        weights inversely proportional to class frequencies in the input data
        as ``n_samples / (n_classes * np.bincount(y))``
        The "balanced_subsample" mode is the same as "balanced" except that
        weights are computed based on the bootstrap sample for every tree
        grown.
        For multi-output, the weights of each column of y will be multiplied.
        Note that these weights will be multiplied with sample_weight (passed
        through the fit method) if sample_weight is specified.

    Attributes
    ----------
    estimators_ : list of DecisionTreeClassifier
        The collection of fitted sub-estimators.
    classes_ : array of shape = [n_classes] or a list of such arrays
        The classes labels (single output problem), or a list of arrays of
        class labels (multi-output problem).
    n_classes_ : int or list
        The number of classes (single output problem), or a list containing the
        number of classes for each output (multi-output problem).
    n_features_ : int
        The number of features when ``fit`` is performed.
    n_outputs_ : int
        The number of outputs when ``fit`` is performed.
    feature_importances_ : array of shape = [n_features]
        The feature importances (the higher, the more important the feature).
    oob_score_ : float
        Score of the training dataset obtained using an out-of-bag estimate.
    oob_decision_function_ : array of shape = [n_samples, n_classes]
        Decision function computed with out-of-bag estimate on the training
        set. If n_estimators is small it might be possible that a data point
        was never left out during the bootstrap. In this case,
        `oob_decision_function_` might contain NaN.

    Examples
    --------
    >>> from sklearn.ensemble import RandomForestClassifier
    >>> from sklearn.datasets import make_classification
    >>>
    >>> X, y = make_classification(n_samples=1000, n_features=4,
    ...                            n_informative=2, n_redundant=0,
    ...                            random_state=0, shuffle=False)
    >>> clf = RandomForestClassifier(max_depth=2, random_state=0)
    >>> clf.fit(X, y)
    RandomForestClassifier(bootstrap=True, class_weight=None, criterion='gini',
                max_depth=2, max_features='auto', max_leaf_nodes=None,
                min_impurity_decrease=0.0, min_impurity_split=None,
                min_samples_leaf=1, min_samples_split=2,
                min_weight_fraction_leaf=0.0, n_estimators=10, n_jobs=1,
                oob_score=False, random_state=0, verbose=0, warm_start=False)
    >>> print(clf.feature_importances_)
    [ 0.17287856  0.80608704  0.01884792  0.00218648]
    >>> print(clf.predict([[0, 0, 0, 0]]))
    [1]

    Notes
    -----
    The default values for the parameters controlling the size of the trees
    (e.g. ``max_depth``, ``min_samples_leaf``, etc.) lead to fully grown and
    unpruned trees which can potentially be very large on some data sets. To
    reduce memory consumption, the complexity and size of the trees should be
    controlled by setting those parameter values.
    """
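The docstring above describes the hyperparameters that most affect ensemble and tree size (n_estimators, max_depth, max_features, and so on). The post trains both models with their defaults; purely as an illustration of those parameters, and not part of the original code, one could search over a few of them with the same 5-fold cross-validation. The grid values below are placeholder assumptions.

from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

# Hypothetical parameter grid; values are illustrative, not from the original post
param_grid = {
    'n_estimators': [10, 50, 100],
    'max_depth': [None, 4, 8],
    'max_features': ['sqrt', 'log2'],
}
grid = GridSearchCV(RandomForestClassifier(random_state=0), param_grid, cv=5)
grid.fit(X_train, y_train)
print('best params:', grid.best_params_)
print('best 5-fold CV score:', grid.best_score_)

Whether such tuning improves on the default RandomForestClassifier used in the core code depends on the features actually fed to the model.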
