scikit-learn GridSearchCV with multiple repetitions



Answers

How to access Scikit Learn nested cross-validation scores

You cannot access the individual params and best params from cross_val_score. Internally, cross_val_score clones the supplied estimator and then calls fit and score on each clone with the given X, y, one clone per split; only the scores are returned, so the fitted estimators (and the parameters they selected) are discarded.
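(Aside: newer scikit-learn releases offer cross_validate, which can return the fitted estimators, so the per-split grid-search results stay accessible; a minimal sketch, assuming the clf, X_iris, y_iris, and outer_cv names used below:)

from sklearn.model_selection import cross_validate

# return_estimator=True (scikit-learn >= 0.20) keeps each fitted clone,
# so the best_params_ chosen on every outer split can be inspected.
results = cross_validate(clf, X_iris, y_iris, cv=outer_cv,
                         return_estimator=True)
for fitted_clf in results["estimator"]:
    print(fitted_clf.best_params_)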

If you want to access the params at each split you can use:

# Put the code below inside your NUM_TRIALS for loop.
cv_iter = 0
temp_nested_scores_train = np.zeros(outer_cv.get_n_splits())
temp_nested_scores_test = np.zeros(outer_cv.get_n_splits())
for train, test in outer_cv.split(X_iris):
    clf.fit(X_iris[train], y_iris[train])
    temp_nested_scores_train[cv_iter] = clf.best_score_
    temp_nested_scores_test[cv_iter] = clf.score(X_iris[test], y_iris[test])
    # You can access the grid search's params here, e.g. clf.best_params_
    cv_iter += 1  # advance to the next outer split
nested_scores_train[i] = temp_nested_scores_train.mean()
nested_scores_test[i] = temp_nested_scores_test.mean()
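For context, the snippet assumes the surrounding nested-CV setup from the scikit-learn example it is based on; a minimal sketch of those names, where the SVC estimator and the p_grid values are placeholder assumptions:

import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV, KFold
from sklearn.svm import SVC

X_iris, y_iris = load_iris(return_X_y=True)

NUM_TRIALS = 10
p_grid = {"C": [1, 10, 100], "gamma": [0.01, 0.1]}  # hypothetical grid
nested_scores_train = np.zeros(NUM_TRIALS)
nested_scores_test = np.zeros(NUM_TRIALS)

for i in range(NUM_TRIALS):
    inner_cv = KFold(n_splits=4, shuffle=True, random_state=i)
    outer_cv = KFold(n_splits=4, shuffle=True, random_state=i)
    clf = GridSearchCV(estimator=SVC(), param_grid=p_grid, cv=inner_cv)
    # ... the snippet above goes here ...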
Question

I'm trying to get the best set of parameters for an SVR model. I'd like to use GridSearchCV over different values of C. However, from previous tests I noticed that the Training/Test split highly influences the overall performance (r2 in this instance). To address this problem, I'd like to implement a repeated 5-fold cross-validation (10 x 5CV). Is there a built-in way of performing it using GridSearchCV?

QUICK SOLUTION:

Following the idea presented in the scikit-learn official documentation, a quick solution is:

import numpy as np
from sklearn.model_selection import GridSearchCV, KFold

NUM_TRIALS = 10
scores = []
for i in range(NUM_TRIALS):
    cv = KFold(n_splits=5, shuffle=True, random_state=i)
    clf = GridSearchCV(estimator=svr, param_grid=p_grid, cv=cv)
    clf.fit(X, y)  # fit is required before best_score_ is available
    scores.append(clf.best_score_)
print("Average Score: {0} STD: {1}".format(np.mean(scores), np.std(scores)))
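Scikit-learn also ships RepeatedKFold, which bakes the repetition into the splitter itself, so a single fit of GridSearchCV scores every candidate over all 10 x 5 splits; a minimal sketch under the same svr, p_grid, X, y assumptions:

from sklearn.model_selection import GridSearchCV, RepeatedKFold

# 10 repetitions of 5-fold CV: each candidate C is evaluated on 50 splits.
rkf = RepeatedKFold(n_splits=5, n_repeats=10, random_state=0)
clf = GridSearchCV(estimator=svr, param_grid=p_grid, cv=rkf)
clf.fit(X, y)
print(clf.best_params_, clf.best_score_)

Note the difference: the loop above reruns the whole grid search per repetition and averages the winning scores, while RepeatedKFold averages each candidate's score across all repetitions before picking a single winner.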


