sklearn for ML: Introduction and Detailed Usage Guide to the LogisticRegression Class in sklearn.linear_model
Table of Contents
Introduction and usage of the LogisticRegression class in sklearn.linear_model
class LogisticRegression (found at: sklearn.linear_model._logistic)

class LogisticRegression(BaseEstimator, LinearClassifierMixin, SparseCoefMixin):
    """Logistic Regression (aka logit, MaxEnt) classifier.

In the multiclass case, the training algorithm uses the one-vs-rest (OvR) scheme if the 'multi_class' option is set to 'ovr', and uses the cross-entropy loss if the 'multi_class' option is set to 'multinomial'. (Currently the 'multinomial' option is supported only by the 'lbfgs', 'sag', 'saga' and 'newton-cg' solvers.)

This class implements regularized logistic regression using the 'liblinear' library and the 'newton-cg', 'sag', 'saga' and 'lbfgs' solvers. Note that regularization is applied by default. It can handle both dense and sparse input. Use C-ordered arrays or CSR matrices containing 64-bit floats for optimal performance; any other input format will be converted (and copied).

The 'newton-cg', 'sag', and 'lbfgs' solvers support only L2 regularization with primal formulation, or no regularization. The 'liblinear' solver supports both L1 and L2 regularization, with a dual formulation only for the L2 penalty. The Elastic-Net regularization is only supported by the 'saga' solver.

Read more in the :ref:`User Guide <logistic_regression>`.
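Because regularization is applied by default (unlike in some statistics packages), the parameter C, the inverse of regularization strength, is usually the first thing to tune. Below is a minimal sketch of my own (not part of the scikit-learn docstring; the synthetic dataset and the norm comparison are illustrative assumptions) showing that a smaller C shrinks the coefficients harder:

>>> from sklearn.datasets import make_classification
>>> from sklearn.linear_model import LogisticRegression
>>> X, y = make_classification(n_samples=200, n_features=20, random_state=0)
>>> strong = LogisticRegression(C=0.01).fit(X, y)    # strong L2 penalty
>>> weak = LogisticRegression(C=100.0).fit(X, y)     # weak L2 penalty
>>> abs(strong.coef_).sum() < abs(weak.coef_).sum()  # stronger penalty, smaller weights
True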
Parameters
----------
penalty : {'l1', 'l2', 'elasticnet', 'none'}, default='l2'
    Used to specify the norm used in the penalization. The 'newton-cg', 'sag' and 'lbfgs' solvers support only l2 penalties. 'elasticnet' is only supported by the 'saga' solver. If 'none' (not supported by the liblinear solver), no regularization is applied.
    .. versionadded:: 0.19
       l1 penalty with SAGA solver (allowing 'multinomial' + L1)
dual : bool, default=False
    Dual or primal formulation. Dual formulation is only implemented for l2 penalty with liblinear solver. Prefer dual=False when n_samples > n_features.
tol : float, default=1e-4
    Tolerance for stopping criteria.
C : float, default=1.0
    Inverse of regularization strength; must be a positive float. Like in support vector machines, smaller values specify stronger regularization.
fit_intercept : bool, default=True
    Specifies if a constant (a.k.a. bias or intercept) should be added to the decision function.
intercept_scaling : float, default=1
    Useful only when the solver 'liblinear' is used and self.fit_intercept is set to True. In this case, x becomes [x, self.intercept_scaling], i.e. a "synthetic" feature with constant value equal to intercept_scaling is appended to the instance vector. The intercept becomes ``intercept_scaling * synthetic_feature_weight``.
    Note! The synthetic feature weight is subject to l1/l2 regularization as all other features. To lessen the effect of regularization on the synthetic feature weight (and therefore on the intercept), intercept_scaling has to be increased.
class_weight : dict or 'balanced', default=None
    Weights associated with classes in the form ``{class_label: weight}``. If not given, all classes are supposed to have weight one. The "balanced" mode uses the values of y to automatically adjust weights inversely proportional to class frequencies in the input data as ``n_samples / (n_classes * np.bincount(y))``. Note that these weights will be multiplied with sample_weight (passed through the fit method) if sample_weight is specified.
    .. versionadded:: 0.17
       *class_weight='balanced'*
random_state : int, RandomState instance, default=None
    Used when ``solver`` == 'sag', 'saga' or 'liblinear' to shuffle the data. See :term:`Glossary <random_state>` for details.
solver : {'newton-cg', 'lbfgs', 'liblinear', 'sag', 'saga'}, default='lbfgs'
    Algorithm to use in the optimization problem.
    - For small datasets, 'liblinear' is a good choice, whereas 'sag' and 'saga' are faster for large ones.
    - For multiclass problems, only 'newton-cg', 'sag', 'saga' and 'lbfgs' handle multinomial loss; 'liblinear' is limited to one-versus-rest schemes.
    - 'newton-cg', 'lbfgs', 'sag' and 'saga' handle L2 or no penalty
    - 'liblinear' and 'saga' also handle L1 penalty
    - 'saga' also supports 'elasticnet' penalty
    - 'liblinear' does not support setting ``penalty='none'``
    Note that 'sag' and 'saga' fast convergence is only guaranteed on features with approximately the same scale. You can preprocess the data with a scaler from sklearn.preprocessing. A few valid penalty/solver combinations are sketched right after this parameter list.
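The penalty/solver pairings above are a frequent source of errors, so here is a hedged sketch of my own (not from the docstring; the variable names are illustrative) with a few valid combinations:

>>> from sklearn.linear_model import LogisticRegression
>>> from sklearn.pipeline import make_pipeline
>>> from sklearn.preprocessing import StandardScaler
>>> clf_l1 = LogisticRegression(penalty='l1', solver='liblinear')  # L1: liblinear or saga
>>> clf_en = LogisticRegression(penalty='elasticnet', solver='saga', l1_ratio=0.5)  # saga only
>>> clf_bal = LogisticRegression(class_weight='balanced', random_state=0)  # reweight rare classes
>>> # 'sag'/'saga' converge fast only on similarly scaled features, hence the scaler:
>>> pipe = make_pipeline(StandardScaler(), LogisticRegression(solver='saga'))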
    .. versionadded:: 0.17
       Stochastic Average Gradient descent solver.
    .. versionadded:: 0.19
       SAGA solver.
    .. versionchanged:: 0.22
       The default solver changed from 'liblinear' to 'lbfgs' in 0.22.
max_iter : int, default=100
    Maximum number of iterations taken for the solvers to converge.
multi_class : {'auto', 'ovr', 'multinomial'}, default='auto'
    If the option chosen is 'ovr', then a binary problem is fit for each label. For 'multinomial' the loss minimised is the multinomial loss fit across the entire probability distribution, even when the data is binary. 'multinomial' is unavailable when solver='liblinear'. 'auto' selects 'ovr' if the data is binary, or if solver='liblinear', and otherwise selects 'multinomial'.
    .. versionadded:: 0.18
       Stochastic Average Gradient descent solver for the 'multinomial' case.
    .. versionchanged:: 0.22
       Default changed from 'ovr' to 'auto' in 0.22.
verbose : int, default=0
    For the liblinear and lbfgs solvers set verbose to any positive number for verbosity.
warm_start : bool, default=False
    When set to True, reuse the solution of the previous call to fit as initialization, otherwise just erase the previous solution. Useless for the liblinear solver. See :term:`the Glossary <warm_start>`.
    .. versionadded:: 0.17
       *warm_start* to support *lbfgs*, *newton-cg*, *sag*, *saga* solvers.
n_jobs : int, default=None
    Number of CPU cores used when parallelizing over classes if multi_class='ovr'. This parameter is ignored when the ``solver`` is set to 'liblinear' regardless of whether 'multi_class' is specified or not. ``None`` means 1 unless in a :obj:`joblib.parallel_backend` context. ``-1`` means using all processors. See :term:`Glossary <n_jobs>` for more details.
l1_ratio : float, default=None
    The Elastic-Net mixing parameter, with ``0 <= l1_ratio <= 1``. Only used if ``penalty='elasticnet'``. Setting ``l1_ratio=0`` is equivalent to using ``penalty='l2'``, while setting ``l1_ratio=1`` is equivalent to using ``penalty='l1'``. For ``0 < l1_ratio < 1``, the penalty is a combination of L1 and L2.
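To make warm_start concrete, the sketch below (my own illustration, assuming the default lbfgs solver) fits once with a small iteration budget and then calls fit again, which continues from the previously learned coefficients instead of starting from scratch:

>>> from sklearn.datasets import make_classification
>>> from sklearn.linear_model import LogisticRegression
>>> X, y = make_classification(n_samples=500, n_features=30, random_state=0)
>>> clf = LogisticRegression(warm_start=True, max_iter=20)
>>> clf = clf.fit(X, y)  # may stop at max_iter with a ConvergenceWarning
>>> clf = clf.fit(X, y)  # second call resumes from the stored coef_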
Attributes
----------
classes_ : ndarray of shape (n_classes,)
    A list of class labels known to the classifier.
coef_ : ndarray of shape (1, n_features) or (n_classes, n_features)
    Coefficient of the features in the decision function. `coef_` is of shape (1, n_features) when the given problem is binary. In particular, when `multi_class='multinomial'`, `coef_` corresponds to outcome 1 (True) and `-coef_` corresponds to outcome 0 (False).
intercept_ : ndarray of shape (1,) or (n_classes,)
    Intercept (a.k.a. bias) added to the decision function. If `fit_intercept` is set to False, the intercept is set to zero. `intercept_` is of shape (1,) when the given problem is binary. In particular, when `multi_class='multinomial'`, `intercept_` corresponds to outcome 1 (True) and `-intercept_` corresponds to outcome 0 (False).
n_iter_ : ndarray of shape (n_classes,) or (1,)
    Actual number of iterations for all classes. If binary or multinomial, it returns only 1 element. For the liblinear solver, only the maximum number of iterations across all classes is given.
    .. versionchanged:: 0.20
       In SciPy <= 1.0.0 the number of lbfgs iterations may exceed ``max_iter``. ``n_iter_`` will now report at most ``max_iter``.

See Also
--------
SGDClassifier : Incrementally trained logistic regression (when given the parameter ``loss="log"``).
LogisticRegressionCV : Logistic regression with built-in cross validation.

Notes
-----
The underlying C implementation uses a random number generator to select features when fitting the model. It is thus not uncommon to have slightly different results for the same input data. If that happens, try with a smaller tol parameter. Predict output may not match that of standalone liblinear in certain cases. See :ref:`differences from liblinear <liblinear_differences>` in the narrative documentation.
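As a quick illustration of these attribute shapes (my own snippet, assuming a 3-class fit on iris, which has 4 features):

>>> from sklearn.datasets import load_iris
>>> from sklearn.linear_model import LogisticRegression
>>> X, y = load_iris(return_X_y=True)
>>> clf = LogisticRegression(max_iter=1000, random_state=0).fit(X, y)
>>> clf.classes_
array([0, 1, 2])
>>> clf.coef_.shape       # (n_classes, n_features)
(3, 4)
>>> clf.intercept_.shape  # one intercept per class
(3,)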
References
----------
L-BFGS-B -- Software for Large-scale Bound-constrained Optimization.
    Ciyou Zhu, Richard Byrd, Jorge Nocedal and Jose Luis Morales.
    http://users.iems.northwestern.edu/~nocedal/lbfgsb.html
LIBLINEAR -- A Library for Large Linear Classification.
    https://www.csie.ntu.edu.tw/~cjlin/liblinear/
SAG -- Mark Schmidt, Nicolas Le Roux, and Francis Bach. Minimizing Finite Sums with the Stochastic Average Gradient.
    https://hal.inria.fr/hal-00860051/document
SAGA -- Defazio, A., Bach, F. & Lacoste-Julien, S. (2014). SAGA: A Fast Incremental Gradient Method With Support for Non-Strongly Convex Composite Objectives.
    https://arxiv.org/abs/1407.0202
Hsiang-Fu Yu, Fang-Lan Huang, Chih-Jen Lin (2011). Dual coordinate descent methods for logistic regression and maximum entropy models. Machine Learning 85(1-2):41-75.
    https://www.csie.ntu.edu.tw/~cjlin/papers/maxent_dual.pdf
Examples
--------
>>> from sklearn.datasets import load_iris
>>> from sklearn.linear_model import LogisticRegression
>>> X, y = load_iris(return_X_y=True)
>>> clf = LogisticRegression(random_state=0).fit(X, y)
>>> clf.predict(X[:2, :])
array([0, 0])
>>> clf.predict_proba(X[:2, :])
array([[9.8...e-01, 1.8...e-02, 1.4...e-08],
       [9.7...e-01, 2.8...e-02, ...e-08]])
>>> clf.score(X, y)
0.97...
"""
@_deprecate_positional_args
def __init__(self, penalty='l2', *, dual=False, tol=1e-4, C=1.0,
             fit_intercept=True, intercept_scaling=1, class_weight=None,
             random_state=None, solver='lbfgs', max_iter=100,
             multi_class='auto', verbose=0, warm_start=False,
             n_jobs=None, l1_ratio=None):
    ...

fit(X, y, sample_weight=None)
    y : array-like of shape (n_samples,)
    sample_weight : array-like of shape (n_samples,), default=None
        .. versionadded:: 0.17

predict_proba(X)
    Probability estimates. The returned estimates for all classes are ordered by the label of classes. For a multi_class problem, if multi_class is set to be "multinomial" the softmax function is used to find the predicted probability of each class.
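Under a multinomial fit, each row of predict_proba is a softmax over the per-class scores, so it sums to 1 and its argmax matches predict. A hedged check of my own (fitting on iris is an assumption, not the docstring's example):

>>> import numpy as np
>>> from sklearn.datasets import load_iris
>>> from sklearn.linear_model import LogisticRegression
>>> X, y = load_iris(return_X_y=True)
>>> clf = LogisticRegression(multi_class='multinomial', max_iter=1000).fit(X, y)
>>> proba = clf.predict_proba(X[:3])
>>> np.allclose(proba.sum(axis=1), 1.0)  # rows are probability distributions
True
>>> (clf.classes_[proba.argmax(axis=1)] == clf.predict(X[:3])).all()
True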