Hyperparameter autotuning is a new feature for our fastText library. It automatically determines the best hyperparameters for your dataset in order to build an efficient text classifier. To use autotuning, a researcher provides the training data, a validation set, and a time constraint. fastText then uses the allotted time to search for the hyperparameters that give the best performance on the validation set. Optionally, the researcher can also constrain the size of the final model; in that case, fastText uses compression techniques to reduce the model's size.
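As a sketch of this workflow, autotuning can be invoked from the command line by pointing fastText at a training file and a validation file (the file names below are placeholders):

```shell
# Train a classifier, letting fastText search for the hyperparameters
# that give the best performance on the validation set.
# cooking.train / cooking.valid are placeholder file names.
./fasttext supervised -input cooking.train \
    -autotune-validation cooking.valid \
    -autotune-duration 600  # search budget, in seconds
```

When the time budget expires, fastText retrains with the best combination found and writes the final model as usual.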
Our strategy for exploring hyperparameters is inspired by existing tools such as Nevergrad, but tailored to fastText by leveraging the specific structure of its models. The autotuner explores hyperparameter combinations by sampling, initially over a large domain that progressively shrinks around the best combinations found over time.
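The shrinking-domain idea can be illustrated with a toy, self-contained sketch. This is not fastText's actual implementation; the objective function, parameter names, and shrink schedule below are all made up for illustration:

```python
import random

random.seed(0)  # seeded so the sketch is reproducible

def autotune_sketch(objective, low, high, trials=30, rounds=4, shrink=0.5):
    """Toy search strategy: sample uniformly, then repeatedly shrink
    the sampling domain around the best value found so far."""
    best_x, best_score = None, float("-inf")
    for _ in range(rounds):
        for _ in range(trials):
            x = random.uniform(low, high)
            score = objective(x)
            if score > best_score:
                best_x, best_score = x, score
        # Shrink the sampling domain around the current best combination.
        width = (high - low) * shrink
        low = max(low, best_x - width / 2)
        high = min(high, best_x + width / 2)
    return best_x, best_score

# Example: search for a "learning rate" maximizing a mock validation
# score that peaks at lr = 0.3 (a stand-in for a real training run).
best_lr, _ = autotune_sketch(lambda lr: -(lr - 0.3) ** 2, 0.0, 1.0)
```

In the real autotuner, each sample is a full training run scored on the validation set, which is why a time budget rather than a trial count bounds the search.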
Like most machine learning models, fastText has many hyperparameters, including the learning rate, the model dimension, and the number of epochs. Each of these factors has a strong effect on the performance of the resulting model, and the optimal values tend to vary depending on the dataset or task. Searching for the best hyperparameters manually can be daunting and time-consuming, even for expert users. Our new feature allows this task to be automated.
In many situations, such as when deploying models on devices or in the cloud, it is also important to maintain a small memory footprint. fastText therefore also allows researchers to easily build a size-constrained text classifier for their data.
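Constraining the model size takes a single extra flag (again with placeholder file names); fastText then includes compression parameters, such as quantization settings, in the search:

```shell
# Keep the final model under 2 MB; fastText applies compression
# techniques to fit the size budget while optimizing accuracy.
./fasttext supervised -input cooking.train \
    -autotune-validation cooking.valid \
    -autotune-modelsize 2M
```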
Building an efficient text classifier in one command line. Researchers can now build a memory-efficient classifier for various tasks, including sentiment analysis, language identification, spam detection, tag prediction, and topic classification.