I'm trying to figure out how to produce a confusion matrix with cross_validate. I'm able to print out the scores with the code I have so far.
# Imports
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_validate, cross_val_predict
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, make_scorer, confusion_matrix)

# Instantiating model
model = DecisionTreeClassifier()
# Scores
scoring = {'accuracy': make_scorer(accuracy_score),
           'precision': make_scorer(precision_score),
           'recall': make_scorer(recall_score),
           'f1_score': make_scorer(f1_score)}
# 10-fold cross validation
scores = cross_validate(model, X, y, cv=10, scoring=scoring)
print("Accuracy (Testing): %0.2f (+/- %0.2f)" % (scores['test_accuracy'].mean(), scores['test_accuracy'].std() * 2))
print("Precision (Testing): %0.2f (+/- %0.2f)" % (scores['test_precision'].mean(), scores['test_precision'].std() * 2))
print("Recall (Testing): %0.2f (+/- %0.2f)" % (scores['test_recall'].mean(), scores['test_recall'].std() * 2))
print("F1-Score (Testing): %0.2f (+/- %0.2f)" % (scores['test_f1_score'].mean(), scores['test_f1_score'].std() * 2))
But I'm trying to get that data into a confusion matrix. I'm able to make a confusion matrix by using cross_val_predict:
y_train_pred = cross_val_predict(model, X, y, cv=10)
confusion_matrix(y, y_train_pred)
Which is great, but since it's doing its own cross-validation, the results won't match the scores above. I'm just looking for a way to produce both with matching results.
Any help or pointers would be great. Thanks!
question from:
https://stackoverflow.com/questions/65645125/producing-a-confusion-matrix-with-cross-validate