Ray v0.7.4 Release Notes

Release Date: 2019-09-05

    Highlights

    📚 There were many documentation improvements (#5391, #5389, #5175). As we continue to improve the documentation, we value your feedback through the "Doc suggestion?" link at the top of the documentation. Notable improvements:

    • We've added guides for best practices using TensorFlow and PyTorch.
    • We've revamped the Walkthrough page for Ray users, providing a better experience for beginners.
    • We've revamped the guides for using Actors and inspecting internal state.

    Ray now supports memory limits, so memory-intensive applications run predictably and reliably. You
    can set them through the ray.remote decorator:

    @ray.remote(
        memory=2000 * 1024 * 1024,
        object_store_memory=200 * 1024 * 1024)
    class SomeActor(object):
        def __init__(self, a, b):
            pass

    📚 You can set limits for both the heap and the object store; see the documentation.

    📦 There is now preliminary support for projects; see the project documentation. Projects allow you to
    package your code and easily share it with others, ensuring a reproducible cluster setup. To get started, you
    can run:

    # Create a new project.
    ray project create <project-name>
    # Launch a session for the project in the current directory.
    ray session start
    # Open a console for the given session.
    ray session attach
    # Stop the given session and all of its worker nodes.
    ray session stop

    Check out the examples. This is an actively developed new feature, so we appreciate your feedback!

    💥 Breaking change: The redis_address parameter was renamed to address (#5412, #5602), and the former will be removed in the future.
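    During the deprecation window, calling code can bridge the rename with a small shim. The wrapper below is a hypothetical sketch, not part of Ray's API; it simply forwards the old keyword to the new one with a warning.

```python
import warnings

def init_compat(init_fn, *args, redis_address=None, **kwargs):
    """Hypothetical shim: forward the deprecated redis_address
    keyword to the new address parameter, warning the caller."""
    if redis_address is not None:
        warnings.warn(
            "redis_address is deprecated; use address instead",
            DeprecationWarning)
        kwargs.setdefault("address", redis_address)
    return init_fn(*args, **kwargs)

# Demonstrate with a stand-in for ray.init that just records its kwargs:
captured = {}
def fake_init(**kwargs):
    captured.update(kwargs)

init_compat(fake_init, redis_address="127.0.0.1:6379")
# captured now holds the call under the new parameter name: address
```

    In real code, `init_fn` would be `ray.init`, and callers can drop the shim once they have migrated to `address`.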

    Core

    • 🚚 Move Java bindings on top of the core worker #5370
    • 👌 Improve log file discoverability #5580
    • Clean up and improve error messages #5368, #5351

    RLlib

    • 👌 Support custom action space distributions #5164
    • ➕ Add TensorFlow eager support #5436
    • ➕ Add autoregressive KL #5469
    • Autoregressive Action Distributions #5304
    • Implement MADDPG agent #5348
    • Port Soft Actor-Critic on Model v2 API #5328
    • More examples: Add CARLA community example #5333 and rock paper scissors multi-agent example #5336
    • 🚚 Moved RLlib to a top-level directory #5324

    Tune

    • Experimental implementation of the BOHB algorithm #5382
    • 💥 Breaking change: Nested dictionary results are now flattened for CSV writing: {"a": {"b": 1}} => {"a/b": 1} #5346
    • ➕ Add Logger for MLFlow #5438
    • 👍 TensorBoard support for TensorFlow 2.0 #5547
    • ➕ Added examples for XGBoost and LightGBM #5500
    • HyperOptSearch now has warmstarting #5372
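    The CSV flattening change above can be sketched in plain Python. The `flatten` helper below is illustrative only, not Tune's actual implementation; it shows the intended behavior of joining nested keys with `/`:

```python
def flatten(d, parent_key="", sep="/"):
    """Illustrative sketch: collapse nested result dicts into a
    single level, joining nested keys with `sep`."""
    out = {}
    for k, v in d.items():
        key = parent_key + sep + k if parent_key else k
        if isinstance(v, dict):
            out.update(flatten(v, key, sep))
        else:
            out[key] = v
    return out

print(flatten({"a": {"b": 1}}))  # {'a/b': 1}
```

    Flat keys like `a/b` map cleanly onto CSV column headers, which is why nested results are now flattened before writing.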

    Other Libraries

    🛠 Various fixes: fixed log monitor issues #4382 #5221 #5569; the top-level ray directory was cleaned up #5404

    Thanks

    We thank the following contributors for their amazing contributions:

    @jon-chuang, @lufol, @adamochayon, @idthanm, @RehanSD, @ericl, @michaelzhiluo, @nflu, @pengzhenghao, @hartikainen, @wsjeon, @raulchen, @TomVeniat, @layssi, @jovany-wang, @llan-ml, @ConeyLiu, @mitchellstern, @gregSchwartz18, @jiangzihao2009, @jichan3751, @mhgump, @zhijunfu, @micafan, @simon-mo, @richardliaw, @stephanie-wang, @edoakes, @akharitonov, @mawright, @robertnishihara, @lisadunlap, @flying-mojo, @pcmoritz, @jredondopizarro, @gehring, @holli, @kfstorm