Ray v1.0.0 Release Notes
Release Date: 2020-09-30
Ray 1.0
We're happy to announce the release of Ray 1.0, an important step towards the goal of providing a universal API for distributed computing.
To learn more about Ray 1.0, check out our blog post and whitepaper.
Ray Core
- The `ray.init()` and `ray start` commands have been cleaned up to remove deprecated arguments
- The Ray Java API is now stable
- Improved detection of Docker CPU limits
- Added support and documentation for Dask-on-Ray and MARS-on-Ray: https://docs.ray.io/en/master/ray-libraries.html
- Placement groups for fine-grained control over scheduling decisions: https://docs.ray.io/en/latest/placement-group.html
- New architecture whitepaper: https://docs.ray.io/en/master/whitepaper.html
Autoscaler
- Support for multiple instance types in the same cluster: https://docs.ray.io/en/master/cluster/autoscaling.html
- Support for specifying GPU/accelerator type in `@ray.remote`
Dashboard & Metrics
- Improvements to the memory usage tab and machine view
- The dashboard now supports visualization of actor states
- Support for Prometheus metrics reporting: https://docs.ray.io/en/latest/ray-metrics.html
RLlib
- Two model-based RL algorithms were added: MB-MPO ("Model-based meta-policy optimization") and "Dreamer". Both algorithms were benchmarked and perform comparably to the results reported in their respective papers.
- A "Curiosity" (intrinsic motivation) module was added via RLlib's Exploration API and benchmarked on a sparse-reward Unity3D environment (Pyramids).
- Added documentation for the Distributed Execution API.
- Removed (already soft-deprecated) APIs: the Model(V1) class, Trainer config keys, and some methods/functions. Where these previously emitted a warning, they now raise an error.
- Added DeepMind Control Suite examples.
Tune
Breaking changes:
- Multiple tune.run parameters have been deprecated: `ray_auto_init`, `run_errored_only`, `global_checkpoint_period`, `with_server` (#10518)
- The `tune.run` parameters `upload_dir`, `sync_to_cloud`, `sync_to_driver`, and `sync_on_checkpoint` have been moved to `tune.SyncConfig` [docs] (#10518)
New APIs:
- `mode`, `metric`, and `time_budget` parameters for `tune.run` (#10627, #10642)
- Search algorithms now share a uniform API (#10621, #10444). You can also use the new `create_scheduler`/`create_searcher` shim layer to create search algorithms/schedulers via string, reducing boilerplate code (#10456).
- Native callbacks for MXNet, Horovod, Keras, XGBoost, and PyTorch Lightning (#10533, #10304, #10509, #10502, #10220)
- PBT runs can be replayed with the PopulationBasedTrainingReplay scheduler (#9953)
- Search algorithms are saved/resumed automatically (#9972)
- New Optuna search algorithm docs (#10044)
- Tune can now sync checkpoints across Kubernetes pods (#10097)
- Failed trials can be rerun with `tune.run(resume="run_errored_only")` (#10060)
Other Changes:
- Trial outputs can be saved to file via `tune.run(log_to_file=...)` (#9817)
- Trial directories can be customized, and the default trial directory now includes the trial name (#10608, #10214)
- Improved Experiment Analysis API (#10645)
- Support for multi-objective search via the SigOpt wrapper (#10457, #10446)
- BOHB fixes (#10531, #10320)
- Wandb improvements and RLlib compatibility (#10950, #10799, #10680, #10654, #10614, #10441, #10252, #8521)
- Updated documentation for the FAQ, Tune+Serve, the search space API, and the lifecycle (#10813, #10925, #10662, #10576, #9713, #10222, #10126, #9908)
RaySGD:
- Creator functions are subsumed by the TrainingOperator API (#10321)
- Training happens on actors by default (#10539)
Serve
- The `serve.client` API makes it easy to appropriately manage the lifetime of multiple Serve clusters. (#10460)
- Serve APIs are fully typed. (#10205, #10288)
- Backend configs are now typed and validated via Pydantic. (#10559, #10389)
- Progress towards an application-level backend autoscaler. (#9955, #9845, #9828)
- New architecture page in the documentation. (#10204)
Thanks
We thank all the contributors for their contributions to this release!
@MissiontoMars, @ijrsvt, @desktable, @kfstorm, @lixin-wei, @Yard1, @chaokunyang, @justinkterry, @pxc, @ericl, @WangTaoTheTonic, @carlos-aguayo, @sven1977, @gabrieleoliaro, @alanwguo, @aryairani, @kishansagathiya, @barakmich, @rkube, @SongGuyang, @qicosmos, @ffbin, @PidgeyBE, @sumanthratna, @yushan111, @juliusfrost, @edoakes, @mehrdadn, @Basasuya, @icaropires, @michaelzhiluo, @fyrestone, @robertnishihara, @yncxcw, @oliverhu, @yiranwang52, @ChuaCheowHuan, @raphaelavalos, @suquark, @krfricke, @pcmoritz, @stephanie-wang, @hekaisheng, @zhijunfu, @Vysybyl, @wuisawesome, @sanderland, @richardliaw, @simon-mo, @janblumenkamp, @zhuohan123, @AmeerHajAli, @iamhatesz, @mfitton, @noahshpak, @maximsmol, @weepingwillowben, @raulchen, @09wakharet, @ashione, @henktillman, @architkulkarni, @rkooo567, @zhe-thoughts, @amogkam, @kisuke95, @clarkzinzow, @holli, @raoul-khour-ts