LightFM v1.9 Release Notes
Release Date: 2016-05-25
### Fixed
- Fixed gradient accumulation in Adagrad: the feature value is now correctly used when accumulating gradients. Thanks to Benjamin Wilson for the bug report.
- All interaction values greater than 0.0 are now treated as positives for ranking losses.

### Added
- Added a max_sampled hyperparameter for WARP losses. This allows trading off accuracy for WARP training time: a smaller value means less negative sampling and faster training when the model is near the optimum.
- Added a sample_weight argument to the fit and fit_partial methods. A higher weight increases the size of the SGD step taken for that interaction.
- Added an evaluation module for more efficient evaluation of learning-to-rank models.
- Added a random_state keyword argument to LightFM to allow repeatable model runs.

### Changed
- By default, an OpenMP-less version will be built on OSX. This allows much easier installation at the expense of performance.
- The default value of the max_sampled argument is now 10. This is a sensible default that allows fast training.
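
The effect of max_sampled on WARP training can be sketched in plain numpy. This is an illustrative sketch only, not LightFM's actual Cython internals: the helper name warp_sample and the score array are hypothetical, but the idea matches WARP's sampling scheme, where negatives are drawn until one violates the ranking margin, and max_sampled caps the number of draws.

```python
import numpy as np

def warp_sample(scores, positive_idx, max_sampled=10, rng=None):
    """Illustrative WARP negative sampling (hypothetical helper, not
    LightFM's implementation). Draw random items until one scores within
    a unit margin of the positive item, giving up after max_sampled
    draws. Near the optimum, violating negatives are rare, so a smaller
    max_sampled means fewer wasted draws and faster training at some
    cost in accuracy."""
    rng = rng or np.random.default_rng(0)
    pos_score = scores[positive_idx]
    for n_drawn in range(1, max_sampled + 1):
        neg_idx = int(rng.integers(len(scores)))
        if neg_idx != positive_idx and scores[neg_idx] > pos_score - 1.0:
            # Found a rank-violating negative. WARP weights the update by
            # roughly log(num_items / n_drawn): violations that took many
            # draws to find imply a good ranking and get smaller updates.
            loss_weight = np.log(max(1, (len(scores) - 1) // n_drawn))
            return neg_idx, loss_weight
    # No violation found within max_sampled draws: skip the update.
    return None, 0.0

scores = np.array([0.9, 0.1, 0.4, 0.8, 0.2])
neg, weight = warp_sample(scores, positive_idx=0, max_sampled=10)
```

With max_sampled=0 no negatives are drawn at all and the update is always skipped, which is why the new default of 10 balances update quality against training time.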