Ray v0.7.4 Release Notes

Release Date: 2019-09-05

    Highlights

    📚 There were many documentation improvements (#5391, #5389, #5175). As we continue to improve the documentation, we value your feedback through the “Doc suggestion?” link at the top of the documentation. Notable improvements:

    • We’ve added guides for best practices using TensorFlow and PyTorch.
    • We’ve revamped the Walkthrough page for Ray users, providing a better experience for beginners.
    • We’ve revamped guides for using Actors and inspecting internal state.

    Ray now supports memory limits to ensure that memory-intensive applications run predictably and reliably. You
    can set them through the ray.remote decorator:

    @ray.remote(
        memory=2000 * 1024 * 1024,
        object_store_memory=200 * 1024 * 1024)
    class SomeActor(object):
        def __init__(self, a, b):
            pass
    

    📚 You can set limits for both the heap and the object store; see the documentation.
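
    The same idea applies to remote functions. The snippet below is a minimal sketch, assuming the memory option is
    accepted by the ray.remote decorator for tasks as well as actors; the function name and values are illustrative
    rather than taken from the release notes, and the memory documentation is the authoritative reference:

    import ray

    ray.init()

    # Reserve roughly 500 MiB of heap memory for each running copy of this task
    # (assumes the memory option is supported for remote functions).
    @ray.remote(memory=500 * 1024 * 1024)
    def process_chunk(chunk):
        return sum(chunk)

    result = ray.get(process_chunk.remote(list(range(1000))))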

    📦 There is now preliminary support for projects; see the project documentation. Projects allow you to
    package your code and easily share it with others, ensuring a reproducible cluster setup. To get started, you
    can run:

    # Create a new project.
    ray project create <project-name>
    # Launch a session for the project in the current directory.
    ray session start
    # Open a console for the given session.
    ray session attach
    # Stop the given session and all of its worker nodes.
    ray session stop
    

    Check out the examples. This is an actively developed new feature, so we appreciate your feedback!

    💥 Breaking change: The redis_address parameter was renamed to address (#5412, #5602) and the former will be removed in the future.
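
    As a minimal migration sketch (the address value shown is illustrative), call sites that connect to an existing
    cluster move from the old keyword argument to the new one:

    import ray

    # Before (deprecated keyword):
    # ray.init(redis_address="10.0.0.1:6379")

    # After (keyword introduced in this release):
    ray.init(address="10.0.0.1:6379")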

    Core

    • 🚚 Move Java bindings on top of the core worker #5370
    • 👌 Improve log file discoverability #5580
    • Clean up and improve error messages #5368, #5351

    RLlib

    • 👌 Support custom action space distributions #5164
    • ➕ Add TensorFlow eager support #5436
    • ➕ Add autoregressive KL #5469
    • Autoregressive Action Distributions #5304
    • Implement MADDPG agent #5348
    • Port Soft Actor-Critic to the Model v2 API #5328
    • More examples: Add CARLA community example #5333 and rock paper scissors multi-agent example #5336
    • 🚚 Moved RLlib to top level directory #5324

    Tune

    • Experimental Implementation of the BOHB algorithm #5382
    • 💥 Breaking change: Nested dictionary results are now flattened for CSV writing: {"a": {"b": 1}} => {"a/b": 1} #5346 (see the sketch after this list)
    • ➕ Add Logger for MLFlow #5438
    • 👍 TensorBoard support for TensorFlow 2.0 #5547
    • ➕ Added examples for XGBoost and LightGBM #5500
    • HyperOptSearch now has warmstarting #5372
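
    To illustrate the result flattening mentioned above, the snippet below is a small, hypothetical helper that
    reproduces the documented key format; it is not Tune's internal implementation:

    def flatten_result(result, prefix=""):
        # Recursively flatten nested dicts into "parent/child" keys (sketch).
        flat = {}
        for key, value in result.items():
            full_key = prefix + key
            if isinstance(value, dict):
                flat.update(flatten_result(value, full_key + "/"))
            else:
                flat[full_key] = value
        return flat

    assert flatten_result({"a": {"b": 1}}) == {"a/b": 1}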

    Other Libraries

    🛠 Various fixes: fixed log monitor issues (#4382, #5221, #5569) and cleaned up the top-level ray directory (#5404).

    Thanks

    We thank the following contributors for their amazing contributions:

    @jon-chuang, @lufol, @adamochayon, @idthanm, @RehanSD, @ericl, @michaelzhiluo, @nflu, @pengzhenghao, @hartikainen, @wsjeon, @raulchen, @TomVeniat, @layssi, @jovany-wang, @llan-ml, @ConeyLiu, @mitchellstern, @gregSchwartz18, @jiangzihao2009, @jichan3751, @mhgump, @zhijunfu, @micafan, @simon-mo, @richardliaw, @stephanie-wang, @edoakes, @akharitonov, @mawright, @robertnishihara, @lisadunlap, @flying-mojo, @pcmoritz, @jredondopizarro, @gehring, @holli, @kfstorm


Previous changes from v0.7.2