k-fold cross validation for training models with CLI #462

@danielduckworth

Description

Describe the use case
Training multiple models on shuffled training data (k-folds) can reveal information about our data sets. For example, one fold may be much less accurate than the others, telling us that we might need to increase the total number of samples.

Describe the solution you'd like
For the CLI, it would be useful if we could have a parameter to train with k-fold cross validation.

Describe alternatives you've considered
k-fold cross validation can be achieved with the Python API, but it is not as convenient as the CLI.
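For reference, the Python API workflow looks roughly like the following sketch. It uses scikit-learn's `KFold` to generate the shuffled splits; the `LogisticRegression` model is a stand-in for whatever model the CLI would train, and the data set is synthetic, so the exact training call would differ in practice.

```python
# Sketch of k-fold cross validation via a Python API.
# Assumptions: scikit-learn is available, and the model here
# (LogisticRegression on synthetic data) stands in for the
# project's actual training routine.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold

X, y = make_classification(n_samples=200, n_features=10, random_state=0)

kf = KFold(n_splits=5, shuffle=True, random_state=0)
fold_scores = []
for fold, (train_idx, val_idx) in enumerate(kf.split(X)):
    model = LogisticRegression(max_iter=1000)
    model.fit(X[train_idx], y[train_idx])
    score = model.score(X[val_idx], y[val_idx])
    fold_scores.append(score)
    print(f"fold {fold}: accuracy = {score:.3f}")

# A fold scoring much lower than the others suggests the data set
# may be too small or unevenly sampled -- the signal described above.
print(f"mean = {np.mean(fold_scores):.3f}, std = {np.std(fold_scores):.3f}")
```

A CLI flag (e.g. something like `--k-folds 5`) could wrap exactly this loop, reporting per-fold metrics plus the mean and spread.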

Additional context
I can provide the sample code I've been using for the API.

Labels

feature (New feature or request)