Print Parameters Used in Grid Search During GridSearchCV
I am trying to see the parameters currently in use inside a custom score function while GridSearchCV is executing. Ideally, it would look like the code below.



Edit: To clarify, I am looking to use the parameters from the grid search, so I need to be able to access them inside the function.



from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import make_scorer
from sklearn.model_selection import GridSearchCV

def fit(X, y):
    grid = {'max_features': [0.8, 'sqrt'],
            'subsample': [1, 0.7],
            'min_samples_split': [2, 3],
            'min_samples_leaf': [1, 3],
            'learning_rate': [0.01, 0.1],
            'max_depth': [3, 8, 15],
            'n_estimators': [10, 20, 50]}
    clf = GradientBoostingClassifier()
    score_func = make_scorer(make_custom_score, needs_proba=True)

    model = GridSearchCV(estimator=clf,
                         param_grid=grid,
                         scoring=score_func,
                         cv=5)


def make_custom_score(y_true, y_score):
    '''
    y_true : array-like, shape = [n_samples] Ground truth (true relevance labels)
    y_score : array-like, shape = [n_samples] Predicted scores
    '''
    print(parameters_used_in_current_gridsearch)

    # ... compute the score ...

    return score


I know I can get the parameters after the execution is complete, but I was trying to get them while the search is still running.
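For reference, this is the after-the-fact access the question is contrasting with; a minimal sketch with a placeholder estimator and dataset (the grid and names here are illustrative assumptions, not the question's actual model):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=100, random_state=0)

search = GridSearchCV(LogisticRegression(max_iter=500),
                      {'C': [0.1, 1.0]}, cv=3)
search.fit(X, y)

# Only available once the search has finished:
print(search.cv_results_['params'])  # every combination that was tried
print(search.best_params_)           # the winning combination
```

The goal of the question is to see this information per-candidate while `fit` is still running, which `cv_results_` cannot do.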
      python scikit-learn grid-search

edited Nov 14 '18 at 23:10
asked Nov 14 '18 at 17:29 by RyanL

          3 Answers
If you need to actually do something between grid-search steps, you will have to write your own loop using some of scikit-learn's lower-level building blocks.



          GridSearchCV internally uses the ParameterGrid class, which you can iterate over to obtain combinations of parameter values.



The basic loop looks something like this:

          from sklearn.base import clone
          from sklearn.ensemble import GradientBoostingClassifier
          from sklearn.metrics import make_scorer
          from sklearn.model_selection import KFold, ParameterGrid

          clf = GradientBoostingClassifier()

          grid = {
              'max_features': [0.8, 'sqrt'],
              'subsample': [1, 0.7],
              'min_samples_split': [2, 3],
              'min_samples_leaf': [1, 3],
              'learning_rate': [0.01, 0.1],
              'max_depth': [3, 8, 15],
              'n_estimators': [10, 20, 50]
          }

          scorer = make_scorer(make_custom_score, needs_proba=True)
          sampler = ParameterGrid(grid)
          cv = KFold(5)

          for params in sampler:
              for ix_train, ix_test in cv.split(X, y):
                  # apply the current parameter combination to a fresh clone
                  clf_fitted = clone(clf).set_params(**params).fit(X[ix_train], y[ix_train])
                  score = scorer(clf_fitted, X[ix_test], y[ix_test])
                  # do something with params and the resulting score
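To make the loop above concrete, here is a runnable sketch that tracks the best parameter combination by mean CV score. The dataset, the small grid, and the use of plain accuracy as the metric are placeholder assumptions for illustration:

```python
import numpy as np
from sklearn.base import clone
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import KFold, ParameterGrid

X, y = make_classification(n_samples=200, random_state=0)

clf = GradientBoostingClassifier(n_estimators=10, random_state=0)
grid = {'max_depth': [2, 3], 'learning_rate': [0.1, 0.5]}

best_params, best_score = None, -np.inf
for params in ParameterGrid(grid):
    fold_scores = []
    for ix_train, ix_test in KFold(3).split(X):
        # fit a fresh clone with the current combination
        model = clone(clf).set_params(**params).fit(X[ix_train], y[ix_train])
        fold_scores.append(accuracy_score(y[ix_test], model.predict(X[ix_test])))
    mean_score = np.mean(fold_scores)
    print(params, mean_score)  # the parameters are visible on every iteration
    if mean_score > best_score:
        best_params, best_score = params, mean_score

print('best:', best_params, best_score)
```

Because you drive the loop yourself, the current `params` dict is in scope wherever you need it, which is exactly what `GridSearchCV` hides.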

answered Nov 14 '18 at 23:46, edited Nov 28 '18 at 22:24 by shadowtalker

          • I ended up using this approach to do what I specifically needed, so I accepted this answer. The other responses weren't necessarily wrong; they just didn't give me what I needed. – RyanL, Nov 28 '18 at 21:50
          • @RyanL glad it helped. Note that I made a mistake in my code, using ParameterSampler (randomly sampled) instead of ParameterGrid (deterministic grid). – shadowtalker, Nov 28 '18 at 22:24
Not sure if this satisfies your use case, but there is a verbose parameter available for exactly this kind of thing:



          from sklearn.model_selection import GridSearchCV
          from sklearn.linear_model import SGDRegressor

          estimator = SGDRegressor()
          gscv = GridSearchCV(estimator, {
              'alpha': [0.001, 0.0001], 'average': [True, False],
              'shuffle': [True, False], 'max_iter': [5], 'tol': [None]
          }, cv=3, verbose=2)

          gscv.fit([[1, 1, 1], [2, 2, 2], [3, 3, 3]], [1, 2, 3])


This prints the following to stdout:



          Fitting 3 folds for each of 8 candidates, totalling 24 fits
          [Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers.
          [CV] alpha=0.001, average=True, max_iter=5, shuffle=True, tol=None ...
          [CV] alpha=0.001, average=True, max_iter=5, shuffle=True, tol=None, total= 0.0s
          [Parallel(n_jobs=1)]: Done 1 out of 1 | elapsed: 0.0s remaining: 0.0s
          [CV] alpha=0.001, average=True, max_iter=5, shuffle=True, tol=None ...
          [CV] alpha=0.001, average=True, max_iter=5, shuffle=True, tol=None, total= 0.0s
          [CV] alpha=0.001, average=True, max_iter=5, shuffle=True, tol=None ...
          [CV] alpha=0.001, average=True, max_iter=5, shuffle=True, tol=None, total= 0.0s
          [CV] alpha=0.001, average=True, max_iter=5, shuffle=False, tol=None ..
          [CV] alpha=0.001, average=True, max_iter=5, shuffle=False, tol=None, total= 0.0s
          [CV] alpha=0.001, average=True, max_iter=5, shuffle=False, tol=None ..
          [CV] alpha=0.001, average=True, max_iter=5, shuffle=False, tol=None, total= 0.0s
          [CV] alpha=0.001, average=True, max_iter=5, shuffle=False, tol=None ..
          [CV] alpha=0.001, average=True, max_iter=5, shuffle=False, tol=None, total= 0.0s
          [CV] alpha=0.001, average=False, max_iter=5, shuffle=True, tol=None ..
          [CV] alpha=0.001, average=False, max_iter=5, shuffle=True, tol=None, total= 0.0s
          [CV] alpha=0.001, average=False, max_iter=5, shuffle=True, tol=None ..
          [CV] alpha=0.001, average=False, max_iter=5, shuffle=True, tol=None, total= 0.0s
          [CV] alpha=0.001, average=False, max_iter=5, shuffle=True, tol=None ..
          [CV] alpha=0.001, average=False, max_iter=5, shuffle=True, tol=None, total= 0.0s
          [CV] alpha=0.001, average=False, max_iter=5, shuffle=False, tol=None .
          [CV] alpha=0.001, average=False, max_iter=5, shuffle=False, tol=None, total= 0.0s
          [CV] alpha=0.001, average=False, max_iter=5, shuffle=False, tol=None .
          [CV] alpha=0.001, average=False, max_iter=5, shuffle=False, tol=None, total= 0.0s
          [CV] alpha=0.001, average=False, max_iter=5, shuffle=False, tol=None .
          [CV] alpha=0.001, average=False, max_iter=5, shuffle=False, tol=None, total= 0.0s
          [CV] alpha=0.0001, average=True, max_iter=5, shuffle=True, tol=None ..
          [CV] alpha=0.0001, average=True, max_iter=5, shuffle=True, tol=None, total= 0.0s
          [CV] alpha=0.0001, average=True, max_iter=5, shuffle=True, tol=None ..
          [CV] alpha=0.0001, average=True, max_iter=5, shuffle=True, tol=None, total= 0.0s
          [CV] alpha=0.0001, average=True, max_iter=5, shuffle=True, tol=None ..
          [CV] alpha=0.0001, average=True, max_iter=5, shuffle=True, tol=None, total= 0.0s
          [CV] alpha=0.0001, average=True, max_iter=5, shuffle=False, tol=None .
          [CV] alpha=0.0001, average=True, max_iter=5, shuffle=False, tol=None, total= 0.0s
          [CV] alpha=0.0001, average=True, max_iter=5, shuffle=False, tol=None .
          [CV] alpha=0.0001, average=True, max_iter=5, shuffle=False, tol=None, total= 0.0s
          [CV] alpha=0.0001, average=True, max_iter=5, shuffle=False, tol=None .
          [CV] alpha=0.0001, average=True, max_iter=5, shuffle=False, tol=None, total= 0.0s
          [CV] alpha=0.0001, average=False, max_iter=5, shuffle=True, tol=None .
          [CV] alpha=0.0001, average=False, max_iter=5, shuffle=True, tol=None, total= 0.0s
          [CV] alpha=0.0001, average=False, max_iter=5, shuffle=True, tol=None .
          [CV] alpha=0.0001, average=False, max_iter=5, shuffle=True, tol=None, total= 0.0s
          [CV] alpha=0.0001, average=False, max_iter=5, shuffle=True, tol=None .
          [CV] alpha=0.0001, average=False, max_iter=5, shuffle=True, tol=None, total= 0.0s
          [CV] alpha=0.0001, average=False, max_iter=5, shuffle=False, tol=None
          [CV] alpha=0.0001, average=False, max_iter=5, shuffle=False, tol=None, total= 0.0s
          [CV] alpha=0.0001, average=False, max_iter=5, shuffle=False, tol=None
          [CV] alpha=0.0001, average=False, max_iter=5, shuffle=False, tol=None, total= 0.0s
          [CV] alpha=0.0001, average=False, max_iter=5, shuffle=False, tol=None
          [CV] alpha=0.0001, average=False, max_iter=5, shuffle=False, tol=None, total= 0.0s
          [Parallel(n_jobs=1)]: Done 24 out of 24 | elapsed: 0.0s finished


Refer to the docs for details; you can also specify higher values for even more verbosity.
          • I had seen the verbose argument, but I was looking to specifically use the parameters from the grid. Because of that, I need to be able to access the actual parameters in the function. – RyanL, Nov 14 '18 at 18:45
Instead of using make_scorer() on your custom score, you can write your own scorer (note the difference between a score and a scorer!) that accepts three arguments with the signature (estimator, X_test, y_test). See the documentation for more details.



In this function, you can access the estimator object, which has already been trained on the training data inside the grid search. You can then easily access all the parameters of that estimator. Just make sure to return a float value as the score.



Something like:

          def make_custom_scorer(estimator, X_test, y_test):
              '''
              estimator : scikit-learn estimator, fitted on the train data
              X_test : array-like, shape = [n_samples, n_features] Data for prediction
              y_test : array-like, shape = [n_samples] Ground truth (true relevance labels)
              '''
              # Here all_params is a dict of all the parameters in use
              all_params = estimator.get_params()

              # You need to do some filtering to get the parameters you want,
              # but that should be easy (just specify the keys you want)
              parameters_used_in_current_gridsearch = {
                  k: v for k, v in all_params.items()
                  if k in ['max_features', 'subsample', ..., 'n_estimators']}
              print(parameters_used_in_current_gridsearch)

              y_score = estimator.predict(X_test)

              # Use whichever metric you want here
              score = scoring_function(y_test, y_score)
              return score
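A scorer with the (estimator, X_test, y_test) signature can be passed directly to GridSearchCV via scoring=, with no make_scorer wrapper. A minimal runnable sketch of this answer's idea; the dataset, the printed parameter keys, and the use of accuracy as the metric are illustrative assumptions:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import GridSearchCV

def my_scorer(estimator, X_test, y_test):
    # estimator is already fitted on the current train split, so the
    # grid parameters for this candidate are visible via get_params()
    current = {k: v for k, v in estimator.get_params().items()
               if k in ('max_depth', 'n_estimators')}
    print(current)
    return accuracy_score(y_test, estimator.predict(X_test))

X, y = make_classification(n_samples=120, random_state=0)
search = GridSearchCV(GradientBoostingClassifier(random_state=0),
                      {'max_depth': [2, 3], 'n_estimators': [10, 20]},
                      scoring=my_scorer, cv=3)
search.fit(X, y)
print(search.best_params_)
```

Each candidate's parameters are printed on every fold while the search runs, which is the behavior the question asked for.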





          share|improve this answer























            Your Answer






            StackExchange.ifUsing("editor", function () {
            StackExchange.using("externalEditor", function () {
            StackExchange.using("snippets", function () {
            StackExchange.snippets.init();
            });
            });
            }, "code-snippets");

            StackExchange.ready(function() {
            var channelOptions = {
            tags: "".split(" "),
            id: "1"
            };
            initTagRenderer("".split(" "), "".split(" "), channelOptions);

            StackExchange.using("externalEditor", function() {
            // Have to fire editor after snippets, if snippets enabled
            if (StackExchange.settings.snippets.snippetsEnabled) {
            StackExchange.using("snippets", function() {
            createEditor();
            });
            }
            else {
            createEditor();
            }
            });

            function createEditor() {
            StackExchange.prepareEditor({
            heartbeatType: 'answer',
            autoActivateHeartbeat: false,
            convertImagesToLinks: true,
            noModals: true,
            showLowRepImageUploadWarning: true,
            reputationToPostImages: 10,
            bindNavPrevention: true,
            postfix: "",
            imageUploader: {
            brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
            contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
            allowUrls: true
            },
            onDemand: true,
            discardSelector: ".discard-answer"
            ,immediatelyShowMarkdownHelp:true
            });


            }
            });














            draft saved

            draft discarded


















            StackExchange.ready(
            function () {
            StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fstackoverflow.com%2fquestions%2f53305768%2fprint-parameters-used-in-grid-search-during-gridsearchcv%23new-answer', 'question_page');
            }
            );

            Post as a guest















            Required, but never shown

























            3 Answers
            3






            active

            oldest

            votes








            3 Answers
            3






            active

            oldest

            votes









            active

            oldest

            votes






            active

            oldest

            votes









            0














            If you need to actually do something in between grid search steps, you will need to write your own routine using some lower-level Scikit-learn functionality.



            GridSearchCV internally uses the ParameterGrid class, which you can iterate over to obtain combinations of parameter values.



            The basic loop looks something like this



            import sklearn
            from sklearn.model_selection import ParameterGrid, KFold

            clf = GradientBoostingClassifier()

            grid = {
            'max_features': [0.8,'sqrt'],
            'subsample': [1, 0.7],
            'min_samples_split': [2, 3],
            'min_samples_leaf': [1, 3],
            'learning_rate': [0.01, 0.1],
            'max_depth': [3, 8, 15],
            'n_estimators': [10, 20, 50]
            }

            scorer = make_scorer(make_custom_score, needs_proba=True)
            sampler = ParameterGrid(grid)
            cv = KFold(5)

            for params in sampler:
            for ix_train, ix_test in cv.split(X, y):
            clf_fitted = clone(clf).fit(X[ix_train], y[ix_train])
            score = scorer(clf_fitted, X[ix_test], y[ix_test])
            # do something with the results





            share|improve this answer


























            • I ended up using this approach to do what I specifically needed to do so I accepted this answer. The other responses weren't necessarily wrong they just didn't give me what I needed.

              – RyanL
              Nov 28 '18 at 21:50











            • @RyanL glad it helped. Note that I made a mistake in my code, using ParameterSampler (randomly sampled) instead of ParameterGrid (deterministic grid)

              – shadowtalker
              Nov 28 '18 at 22:24
















            0














            If you need to actually do something in between grid search steps, you will need to write your own routine using some lower-level Scikit-learn functionality.



            GridSearchCV internally uses the ParameterGrid class, which you can iterate over to obtain combinations of parameter values.



            The basic loop looks something like this



            import sklearn
            from sklearn.model_selection import ParameterGrid, KFold

            clf = GradientBoostingClassifier()

            grid = {
            'max_features': [0.8,'sqrt'],
            'subsample': [1, 0.7],
            'min_samples_split': [2, 3],
            'min_samples_leaf': [1, 3],
            'learning_rate': [0.01, 0.1],
            'max_depth': [3, 8, 15],
            'n_estimators': [10, 20, 50]
            }

            scorer = make_scorer(make_custom_score, needs_proba=True)
            sampler = ParameterGrid(grid)
            cv = KFold(5)

            for params in sampler:
            for ix_train, ix_test in cv.split(X, y):
            clf_fitted = clone(clf).fit(X[ix_train], y[ix_train])
            score = scorer(clf_fitted, X[ix_test], y[ix_test])
            # do something with the results





            share|improve this answer


























            • I ended up using this approach to do what I specifically needed to do so I accepted this answer. The other responses weren't necessarily wrong they just didn't give me what I needed.

              – RyanL
              Nov 28 '18 at 21:50











            • @RyanL glad it helped. Note that I made a mistake in my code, using ParameterSampler (randomly sampled) instead of ParameterGrid (deterministic grid)

              – shadowtalker
              Nov 28 '18 at 22:24














            0












            0








            0







            If you need to actually do something in between grid search steps, you will need to write your own routine using some lower-level Scikit-learn functionality.



            GridSearchCV internally uses the ParameterGrid class, which you can iterate over to obtain combinations of parameter values.



            The basic loop looks something like this



            import sklearn
            from sklearn.model_selection import ParameterGrid, KFold

            clf = GradientBoostingClassifier()

            grid = {
            'max_features': [0.8,'sqrt'],
            'subsample': [1, 0.7],
            'min_samples_split': [2, 3],
            'min_samples_leaf': [1, 3],
            'learning_rate': [0.01, 0.1],
            'max_depth': [3, 8, 15],
            'n_estimators': [10, 20, 50]
            }

            scorer = make_scorer(make_custom_score, needs_proba=True)
            sampler = ParameterGrid(grid)
            cv = KFold(5)

            for params in sampler:
            for ix_train, ix_test in cv.split(X, y):
            clf_fitted = clone(clf).fit(X[ix_train], y[ix_train])
            score = scorer(clf_fitted, X[ix_test], y[ix_test])
            # do something with the results





            share|improve this answer















            If you need to actually do something in between grid search steps, you will need to write your own routine using some lower-level Scikit-learn functionality.



            GridSearchCV internally uses the ParameterGrid class, which you can iterate over to obtain combinations of parameter values.



            The basic loop looks something like this



            import sklearn
            from sklearn.model_selection import ParameterGrid, KFold

            clf = GradientBoostingClassifier()

            grid = {
            'max_features': [0.8,'sqrt'],
            'subsample': [1, 0.7],
            'min_samples_split': [2, 3],
            'min_samples_leaf': [1, 3],
            'learning_rate': [0.01, 0.1],
            'max_depth': [3, 8, 15],
            'n_estimators': [10, 20, 50]
            }

            scorer = make_scorer(make_custom_score, needs_proba=True)
            sampler = ParameterGrid(grid)
            cv = KFold(5)

            for params in sampler:
            for ix_train, ix_test in cv.split(X, y):
            clf_fitted = clone(clf).fit(X[ix_train], y[ix_train])
            score = scorer(clf_fitted, X[ix_test], y[ix_test])
            # do something with the results






            share|improve this answer














            share|improve this answer



            share|improve this answer








            edited Nov 28 '18 at 22:24

























            answered Nov 14 '18 at 23:46









            shadowtalkershadowtalker

            4,45012148




            4,45012148













            • I ended up using this approach to do what I specifically needed to do so I accepted this answer. The other responses weren't necessarily wrong they just didn't give me what I needed.

              – RyanL
              Nov 28 '18 at 21:50











            • @RyanL glad it helped. Note that I made a mistake in my code, using ParameterSampler (randomly sampled) instead of ParameterGrid (deterministic grid)

              – shadowtalker
              Nov 28 '18 at 22:24



















            • I ended up using this approach to do what I specifically needed to do so I accepted this answer. The other responses weren't necessarily wrong they just didn't give me what I needed.

              – RyanL
              Nov 28 '18 at 21:50











            • @RyanL glad it helped. Note that I made a mistake in my code, using ParameterSampler (randomly sampled) instead of ParameterGrid (deterministic grid)

              – shadowtalker
              Nov 28 '18 at 22:24

















            I ended up using this approach to do what I specifically needed to do so I accepted this answer. The other responses weren't necessarily wrong they just didn't give me what I needed.

            – RyanL
            Nov 28 '18 at 21:50





            I ended up using this approach to do what I specifically needed to do so I accepted this answer. The other responses weren't necessarily wrong they just didn't give me what I needed.

            – RyanL
            Nov 28 '18 at 21:50













            @RyanL glad it helped. Note that I made a mistake in my code, using ParameterSampler (randomly sampled) instead of ParameterGrid (deterministic grid)

            – shadowtalker
            Nov 28 '18 at 22:24





            @RyanL glad it helped. Note that I made a mistake in my code, using ParameterSampler (randomly sampled) instead of ParameterGrid (deterministic grid)

            – shadowtalker
            Nov 28 '18 at 22:24













            0














            Not sure if this satisfies your use case, but there's a verbose parameter available just for this kind of stuff:



            from sklearn.model_selection import GridSearchCV
            from sklearn.linear_model import SGDRegressor

            estimator = SGDRegressor()
            gscv = GridSearchCV(estimator, {
            'alpha': [0.001, 0.0001], 'average': [True, False],
            'shuffle': [True, False], 'max_iter': [5], 'tol': [None]
            }, cv=3, verbose=2)

            gscv.fit([[1,1,1],[2,2,2],[3,3,3]], [1, 2, 3])


            This prints to the following to the stdout:



            Fitting 3 folds for each of 8 candidates, totalling 24 fits
            [Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers.
            [CV] alpha=0.001, average=True, max_iter=5, shuffle=True, tol=None ...
            [CV] alpha=0.001, average=True, max_iter=5, shuffle=True, tol=None, total= 0.0s
            [Parallel(n_jobs=1)]: Done 1 out of 1 | elapsed: 0.0s remaining: 0.0s
            [CV] alpha=0.001, average=True, max_iter=5, shuffle=True, tol=None ...
            [CV] alpha=0.001, average=True, max_iter=5, shuffle=True, tol=None, total= 0.0s
            [CV] alpha=0.001, average=True, max_iter=5, shuffle=True, tol=None ...
            [CV] alpha=0.001, average=True, max_iter=5, shuffle=True, tol=None, total= 0.0s
            [CV] alpha=0.001, average=True, max_iter=5, shuffle=False, tol=None ..
            [CV] alpha=0.001, average=True, max_iter=5, shuffle=False, tol=None, total= 0.0s
            [CV] alpha=0.001, average=True, max_iter=5, shuffle=False, tol=None ..
            [CV] alpha=0.001, average=True, max_iter=5, shuffle=False, tol=None, total= 0.0s
            [CV] alpha=0.001, average=True, max_iter=5, shuffle=False, tol=None ..
            [CV] alpha=0.001, average=True, max_iter=5, shuffle=False, tol=None, total= 0.0s
            [CV] alpha=0.001, average=False, max_iter=5, shuffle=True, tol=None ..
            [CV] alpha=0.001, average=False, max_iter=5, shuffle=True, tol=None, total= 0.0s
            [CV] alpha=0.001, average=False, max_iter=5, shuffle=True, tol=None ..
            [CV] alpha=0.001, average=False, max_iter=5, shuffle=True, tol=None, total= 0.0s
            [CV] alpha=0.001, average=False, max_iter=5, shuffle=True, tol=None ..
            [CV] alpha=0.001, average=False, max_iter=5, shuffle=True, tol=None, total= 0.0s
            [CV] alpha=0.001, average=False, max_iter=5, shuffle=False, tol=None .
            [CV] alpha=0.001, average=False, max_iter=5, shuffle=False, tol=None, total= 0.0s
            [CV] alpha=0.001, average=False, max_iter=5, shuffle=False, tol=None .
            [CV] alpha=0.001, average=False, max_iter=5, shuffle=False, tol=None, total= 0.0s
            [CV] alpha=0.001, average=False, max_iter=5, shuffle=False, tol=None .
            [CV] alpha=0.001, average=False, max_iter=5, shuffle=False, tol=None, total= 0.0s
            [CV] alpha=0.0001, average=True, max_iter=5, shuffle=True, tol=None ..
            [CV] alpha=0.0001, average=True, max_iter=5, shuffle=True, tol=None, total= 0.0s
            [CV] alpha=0.0001, average=True, max_iter=5, shuffle=True, tol=None ..
            [CV] alpha=0.0001, average=True, max_iter=5, shuffle=True, tol=None, total= 0.0s
            [CV] alpha=0.0001, average=True, max_iter=5, shuffle=True, tol=None ..
            [CV] alpha=0.0001, average=True, max_iter=5, shuffle=True, tol=None, total= 0.0s
            [CV] alpha=0.0001, average=True, max_iter=5, shuffle=False, tol=None .
            [CV] alpha=0.0001, average=True, max_iter=5, shuffle=False, tol=None, total= 0.0s
            [CV] alpha=0.0001, average=True, max_iter=5, shuffle=False, tol=None .
            [CV] alpha=0.0001, average=True, max_iter=5, shuffle=False, tol=None, total= 0.0s
            [CV] alpha=0.0001, average=True, max_iter=5, shuffle=False, tol=None .
            [CV] alpha=0.0001, average=True, max_iter=5, shuffle=False, tol=None, total= 0.0s
            [CV] alpha=0.0001, average=False, max_iter=5, shuffle=True, tol=None .
            [CV] alpha=0.0001, average=False, max_iter=5, shuffle=True, tol=None, total= 0.0s
            [CV] alpha=0.0001, average=False, max_iter=5, shuffle=True, tol=None .
            [CV] alpha=0.0001, average=False, max_iter=5, shuffle=True, tol=None, total= 0.0s
            [CV] alpha=0.0001, average=False, max_iter=5, shuffle=True, tol=None .
            [CV] alpha=0.0001, average=False, max_iter=5, shuffle=True, tol=None, total= 0.0s
            [CV] alpha=0.0001, average=False, max_iter=5, shuffle=False, tol=None
            [CV] alpha=0.0001, average=False, max_iter=5, shuffle=False, tol=None, total= 0.0s
            [CV] alpha=0.0001, average=False, max_iter=5, shuffle=False, tol=None
            [CV] alpha=0.0001, average=False, max_iter=5, shuffle=False, tol=None, total= 0.0s
            [CV] alpha=0.0001, average=False, max_iter=5, shuffle=False, tol=None
            [CV] alpha=0.0001, average=False, max_iter=5, shuffle=False, tol=None, total= 0.0s
            [Parallel(n_jobs=1)]: Done 24 out of 24 | elapsed: 0.0s finished


            You can refer to the docs, but it's also possible to specify higher values for higher verbosity.






            share|improve this answer
























            • I had seen the verbose argument, but I was looking to be able to specifically use the parameters from the grid. Because of that I need to be able to access the actual parameters in the function.

              – RyanL
              Nov 14 '18 at 18:45
















            answered Nov 14 '18 at 18:07 – Matias Cicero













            Instead of using make_scorer() on your custom score function, you can write your own scorer (note the difference between a score and a scorer!), which accepts three arguments with the signature (estimator, X_test, y_test). See the documentation for more details.



            Inside this function you can access the estimator object, which has already been fitted on the training data for the current grid-search candidate, so you can easily read all of its parameters. Just make sure to return a float as the score.



            Something like:



            def make_custom_scorer(estimator, X_test, y_test):
                '''
                estimator: scikit-learn estimator, fitted on the training data
                X_test: array-like, shape = [n_samples, n_features] Data for prediction
                y_test: array-like, shape = [n_samples] Ground truth (true relevance labels)
                '''

                # all_params is a dict of all the parameters in use
                all_params = estimator.get_params()

                # Filter down to the parameters you actually searched over
                # (just specify the keys you want)
                parameters_used_in_current_gridsearch = {k: v for k, v in all_params.items()
                                                         if k in ['max_features', 'subsample', ..., 'n_estimators']}
                print(parameters_used_in_current_gridsearch)

                y_score = estimator.predict(X_test)

                # Use whichever metric you want here
                score = scoring_function(y_test, y_score)
                return score





                answered Nov 15 '18 at 8:28 – Vivek Kumar





























