All Versions
16
Latest Version
v1.6.0
Avg Release Cycle
57 days
Latest Release
1519 days ago

Changelog History

  • v1.6.0 Changes

    February 20, 2020

    πŸ—„ Deprecation of Python 2

    πŸš€ MXNet community voted to no longer support Python 2 in future releases of MXNet. Therefore, MXNet 1.6 release is going to be the last MXNet release to support Python 2.

    πŸ†• New features

    NumPy compatible interface and using TVM to generate operators

    NumPy has long been established as the standard math library in Python, the most prevalent language in the deep learning community. With this library as its cornerstone, one of the largest ecosystems and communities for scientific computing has grown up around it. NumPy's popularity comes from its flexibility and generality.

    In #14253, the MXNet community reached consensus on moving towards a NumPy-compatible programming experience and committed to a major endeavor to provide NumPy-compatible operators.

    The primary goal of the projects below is to provide the equivalent usability and expressiveness of NumPy in MXNet to facilitate deep learning model development. This not only helps existing deep learning practitioners but also gives people in the NumPy community a shortcut for getting started with deep learning. These efforts also serve a secondary goal: enabling the existing NumPy ecosystem to utilize GPUs and accelerators to speed up large-scale computation.
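
    As a quick illustration, here is a minimal sketch of the interface, assuming an MXNet 1.6 install with the `mxnet.numpy` module enabled via `npx.set_np()`:

    ```python
    # Minimal sketch of the NumPy-compatible interface (assumes MXNet >= 1.6).
    from mxnet import np, npx

    npx.set_np()  # enable NumPy-compatible array and shape semantics globally

    a = np.arange(6).reshape(2, 3)        # mxnet.numpy mirrors the NumPy API
    b = np.ones((3, 2))
    print(np.dot(a, b))                   # NumPy-style operators (see #15820)
    print(a.mean(axis=0), a.std(axis=0))  # fluent methods mean/std (see #16077)
    ```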

    • Infra to use tvm write op kernels (#15550)
    • πŸ›  fix boolean_mask for 0-size output (#15731)
    • πŸ›  fix tvm cmake (#15781)
    • Numpy-compatible Infra (#15581)
    • πŸ‘ [MXNET-1206] Support NDArray indexing with None and Ellipsis (#13143)
    • numpy-compatible sum (#15810)
    • [Numpy] Numpy compatible slicing (#15798)
    • Numpy Tensordot and Dot Operator (#15820)
    • numpy linspace (#15852)
    • tvm infra for op attrs (#15854)
    • Port several np ops to master (#15867)
    • numpy-compatible split upstream (#15841)
    • Numpy-compatible concatenate upstream (#15894)
    • Numpy-compatible stack upstream (#15842)
    • [Numpy] Numpy behavior random.uniform() (#15858)
    • Tvm broadcast backward (#15938)
    • np elemwise unary ops upstream (#15831)
    • [Numpy] random.randint() implemented (#15956)
    • Refines NDArray indexing and adds numpy ndarray indexing [READY FOR REVIEW] (#15942)
    • Port ops from np branch (#16018)
    • numpy-compatible cumsum upstream (#15924)
    • NumPy-compatible infrastructure on Gluon (#16024)
    • πŸ‘ [OP] Support range as advanced index for ndarrays (#16047)
    • Numpy compatible max min (#16046)
    • NumPy-compatible Mean, Std and Var (#16014)
    • βž• Add fluent methods mean, std, var for ndarray (#16077)
    • numpy multinomial op (#15878)
    • βž• add numpy operator remainder (#16080)
    • [Numpy] Random.choice implemented (#16089)
    • πŸ›  Fix sample.normal shape inference
    • Numpy add numpy op indices (#15837)
    • [Numpy] Numpy copysign (#15851)
    • numpy operator ravel, derive from reshape (#16016)
    • Add array_function
    • πŸ‘Œ Improved error messages
    • πŸ›  Fix np.choice
    • βž• add exception check for numpy reshape (#16180)
    • [Numpy] Numpy behavior normal distribution (#16109)
    • πŸ›  fix multinomial bug on gpu (#16204)
    • [Numpy] Differentiable svd (#15795)
    • βž• add epsilon to sum(pvalue) upperbound (#16211)
    • np compatible vstack (#15850)
    • Numpy add numpy op roll (#15902)
    • βž• add numpy compatible trace (#16008)
    • βž• add numpy op hanning, hamming, blackman (#15815)
    • [Numpy]flip (#15819)
    • numpy operator around (#16126)
    • numpy operator arctan2 (#15890)
    • numpy operator nonzero (#15838)
    • numpy operator hypot (#15901)
    • tvm numpy operator deg2rad && rad2deg (#16015)
    • numpy op unique
    • try to fix bug
    • πŸ›  fix memory bug and disable some test
    • πŸ›  fix according to review
    • Numpy operators: lcm, tril, identity and take (#16264)
    • πŸ“š [numpy] Cosmetic improvement on mxnet.numpy builtin op signature in documentation (#16305)
    • Disable Pylint false error in numpy_op_signature (#16370)
    • boolean_mask_assign operator for future boolean indexing (#16361)
    • Implements ldexp. (#15845)
    • Numpy Operators: Inner, Outer, vdot (#15846)
    • Numpy det and slogdet operators (#15861)
    • πŸ›  Fix random op signature
    • πŸ›  fix choice signature
    • βž• add raise test for shape
    • βž• Add boolean ndarray (#15940)
    • global numpy shape flag (#16335)
    • numpy-compatible histogram (#16266)
    • [Numpy] Numpy compatible dstack (#15871)
    • numpy eye op (#16132)
    • Numpy compatible vsplit; minor changes to split (#15983)
    • βž• add numpy op logspace (#15825)
    • βž• add numpy op bitwise_xor, hsplit, moveaxis, rot90 (#16257)
    • πŸ›  Fix optimizer bug for np attribute (#16494)
    • βœ… Tests of NumPy interoperability (#16469)
    • πŸ‘Œ improve unary and binary operator handling and refactor tests (#16423)
    • [DOC] Fix numpy op doc (#16504)
    • βœ… [Numpy] More numpy dispatch tests (#16426)
    • [Numpy] einsum (#15911)
    • Add test pipeline for USE_TVM_OP=OFF on Unix (#16450)
    • βœ… Numpy dispatch test of ...... (#16422)
    • setup and concatenate, copy, expand_dims, expm1 (#16493)
    • βž• add sum for boolean type in mainline (#16436)
    • [Numpy] SVD outputs tuple (#16530)
    • numpy op doc: max, min, prod (#16506)
    • βž• add interface for rand
    • πŸ›  Fix numpy bugs (#16537)
    • pickler override for np ndarrays (#16561)
    • βœ… [numpy]op test in new pattern (#16556)
    • πŸ“š Enforce adding documentation for builtin numpy operators (#16575)
    • [Numpy] Support N_D(N>=3) batch_dot (#16586)
    • [Numpy] Loading numpy-incompatible NDArray in numpy-compatible mode (#16597)
    • πŸ›  Fix index overflow bug in einsum (#16589)
    • βž• add npx reshape (#16640)
    • βž• add type switch to weight tensor (#16543)
    • numpy doc enhancement (#16637)
    • Infra for tvm op runtime dispatch (#16100)
    • [NumPy][Operator] NumPy operator may_share_memory and shares_memory (#16533)
    • [Numpy] Numpy operator diff (#15906)
    • Miscellaneous fix for several numpy issues (#16664)
    • [Numpy] implement np.column_stack (#16594)
    • [numpy] add numpy operator : append (#16564)
    • Backport of #16711, #16737, #16408 to 1.6 branch (#16763)
    • Backport to 1.6 (#16773, #16781, #16783, #16716, #16699, #16728, #16769, #16792) (#16832)
    • [Backport][v1.6.x] Fix the wrong result of sum, mean, argmin, argmax when inputs contain inf or nan (#16884)
    • Backport of #16827, #16791 and #16888 to 1.6 branch (#16901)
    • port shape op to 1.6.x (#16912)
    • [Numpy] Fix imperative basic indexing in numpy (#16902) (#16919)
    • Backport #16895, #16922, #16878, #16979 and #16900 to 1.6 (#17029)

    Graph optimizations

    Pointwise fusion for GPU

    🐎 DL models, besides compute-intensive operations like convolutions and fully connected layers, feature a lot of simple pointwise (aka elementwise) operations, such as elementwise addition. The performance of these operations is fully memory-bandwidth bound, which limits the speedup obtainable from newer GPU hardware with its typically high compute-to-memory-bandwidth ratio. When multiple such operations are chained one after another, the result is a series of unnecessary stores and loads as well as potentially increased memory usage to hold the intermediate results. Pointwise fusion alleviates these problems by just-in-time generation of fused operators, which do not store intermediate results in memory, yielding both performance and memory usage improvements.
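
    As a hedged sketch of how this surfaces to users: fusion is gated by the MXNET_USE_FUSION environment variable (on by default in GPU builds) and applies to hybridized Gluon blocks executed on a GPU context. The tiny network below is illustrative only.

    ```python
    # Hedged sketch: pointwise fusion for hybridized Gluon blocks on GPU.
    import os
    os.environ["MXNET_USE_FUSION"] = "1"  # set before importing mxnet

    import mxnet as mx
    from mxnet.gluon import nn

    net = nn.HybridSequential()
    net.add(nn.Dense(16, activation="relu"), nn.Dense(8))
    net.initialize(ctx=mx.gpu(0))
    net.hybridize()  # fused kernels are generated just-in-time on first run

    out = net(mx.nd.ones((4, 32), ctx=mx.gpu(0)))
    ```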

    Eliminate common subexpressions

    • Eliminate common expressions (#15657)

    0️⃣ Default MKLDNN Subgraph fusion

    • 0️⃣ [MKLDNN] Enable subgraph backend mkldnn by default. (#15518)

    πŸ†• New operators

    • [OP] Add a new arange_like operator to contrib (#15400)
    • 🌲 PDF operators for each distribution for which we have a random sampler (plus also the PDF of the Dirichlet). Supports probabilities and log-probabilities, as well as gradients. (#14617)
    • Group Normalization (#14959)
    • βž• Add RROIAlign (#16017)
    • βž• Add fast implementation of LARS (#16122)
    • Round and sign straight-through-estimators C operators. (#16373)
    • πŸ†• New ops for RCNN + old ops improvements for RCNN (#16215)
    • Comparison ops implemented using mshadow (#16414)
    • βž• Add mask target generator operator for Mask-RCNN (#16268)
    • 🚚 Move MRCNNMaskTarget op to contrib (#16486)
    • Mxnet allclose (#14443)
    • ⚑️ Aggregated adamw update (#16398)
    • Make mrcnn_mask_target arg mask_size a 2d tuple (#16567)
    • Dgl ops 2 (#16416)
    • ⚑️ Lamb optimizer update (#16715)
    • ⚑️ [OP] changing data type of 't' to int in lamb_update_phase1 (#16903)
    • ⚑️ Multi Precision Lamb Update operator (#16885)
    • Interleaved MHA for CPU path (#17138) (#17211)

    πŸ”‹ Feature improvements

    Automatic Mixed Precision

    • [AMP] Move topk from FP16_FP32_FUNCS to FP32_FUNCS (#15342)
    • Conversion from FP32 model to Mixed Precision model (#15118)
    • ⚑️ Update fp16 docs: Block.cast is inplace (#15458)
    • πŸ‘ FP16 Support for C Predict API (#15245)
    • βž• Add AMP Conversion support for BucketingModule (#15528)

    Gluon Fit API

    • πŸ›  Fixing build for gluon estimator test, including libtvm in pack libs (#16148)
    • [Estimator] handle composite metrics in estimator (#16676)
    • πŸ”¨ [Estimator] refactor estimator to allow overriding evaluate/fit of a batch (#16678)
    • πŸ”¨ [Estimator] refactor estimator and clarify docs (#16694)
    • 🌲 [Gluon] Improve estimator usability and fix logging logic (#16810) (#16846)
    • Backport Gluon estimator changes to 1.6 (#17048)
    • πŸ›  fix parameter names in the estimator api (#17051) (#17162)

    MKLDNN

    • πŸš€ Upgrade MKL-DNN submodule to v0.20 release (#15422)
    • πŸ›  Fix quantized concat when inputs are mixed int8 and uint8 (#15693)
    • ✨ [MKLDNN]Enhance Quantization APIs and Tutorial (#15448)
    • βž• Add quantization support for GluonCV (#15754)
    • βž• add int8 bn mkldnn implementation and test (#15664)
    • πŸ‘ [Quantization]support exclude operators while quantization (#15910)
    • πŸ‘ [MKLDNN]Support fullyconnected and element-wise ops fusion (#15950)
    • βœ… Disable test coverage for Clang MKLDNN (#15977)
    • ⚑️ update support MKLDNN BN conditions (#15870)
    • [MKLDNN] Fix out of bound access of req vector (#16000)
    • βž• add uint8 bn mkldnn implementation (#16003)
    • πŸ‘Œ Improve quantization flow (#15961)
    • [MKLDNN] fix uint8 batch norm memory misuse (#16034)
    • MKL-DNN RNN checks NDArray version (#16071)
    • Float64 fallback for mkldnn subgraph and rnn op (#15853)
    • ⚑️ Update MKL-DNN dependency (#16073)
    • ↔ Integrate MKL-DNN leakyrelu (#16075)
    • [MKLDNN] NDArray reorder in C API and deconv (#16265)
    • πŸ›  Fix mkldnn reshape (#16455)
    • [MKLDNN] Fix uint quantized fc when not fusing with requantize (#16523)
    • 0️⃣ [MKLDNN]Fix reorder2default (#16602)
    • ⬆️ Upgrade MKL-DNN dependency to v1.0 (#16555)
    • βͺ Revert "[MKLDNN]Fix reorder2default (#16602)" (#16697)
    • [v1.6.x] Backport #16837 into v1.6.x (#16847)
    • πŸŽ‰ Initial checkin (#16856) (#16872)

    πŸ‘ Large tensor support

    • πŸ‘ [MXNET-1413] Adding Large Tensor support for sort operators (#15170)
    • πŸ‘ Large Index Support for Slice (#15593)
    • βž• Add large tensor support binary arithmetic (#15785)
    • πŸ‘ Large tensor support for random ops (#15783)
    • βž• Add Large Tensor Support for Sequence, NN Ops (#15807)
    • βž• Add power, exponent, log ops large tensor support (#15794)
    • πŸ‘ removing unnecessary int64 C apis that were added to support Large Tensors and Vectors (#15944)
    • βœ… creating ndarray directly using mxnet ndarray primitives to reduce memory footprint of tests for topk, sort and argsort (#15900)
    • βž• Adding tests to verify support for Large Tensors in additional Ops along with new C_Apis supporting 64bit indexing (#15895)
    • βž• Added tests to verify Large Vector Support for initial set of ops (#15943)
    • βž• Added more tests for Large Indices (#15960)
    • βž• Add Large tensor vector test cases (#15941)
    • βœ… Test large vector mean operator and fix a few bugs (#16079)
    • βœ… Reducing memory footprint of one_hot for Large Array Testing (#16136)
    • removing MXNDArrayLoadFromBuffer64 and MXNDArrayLoad64 (#16203)
    • πŸ›  Fix large array tests (#16328)
    • βž• added more tests to verify support for large vector (#16477)
    • βž• added support for large tensors for Dropout operator and tests to verify support for more operators (#16409)
    • βž• adding large tensor support for add_n and tests for more ops (#16476)
    • βž• adding large tensor support for pad operator (#15126)
    • βž• Added large tensor support and test for gather_nd (#16371)
    • βœ… Large Vector tests for DGL Ops Part 2 (#16497)
    • πŸ‘‰ Showing proper error message when an attempt is made to create large tensor but MXNet is not built with it (#16570)

    TensorRT integration

    • enable TensorRT integration with cpp api (#15335)
    • βž• Add unit tests for TensorRT integration and fix some bugs (#15399)

    πŸ‘ Higher order gradient support

    • [MXNET-978] Higher order gradient for sigmoid (#15288)
    • πŸ‘ [MXNET-978] Higher Order Gradient Support reciprocal, abs. (#15413)
    • πŸ‘ [MXNET-978] Add higher order gradient support tan, tanh (#15253)
    • πŸ‘ [MXNET-978] Higher Order Gradient Support arctan, arctanh, radians. (#15531)
    • πŸ‘ [MXNET-978] Higher Order Gradient Support sqrt, cbrt. (#15474)
    • πŸ‘ [MXNET-978] Higher Order Gradient Support clip, dropout. (#15746)
    • πŸ‘ [MXNET-978] Higher Order Gradient Support sinh, cosh. (#15412)
    • βœ… [MXNET-978] n-th order gradient test support. (#15611)
    • [MXNET-978] Fully connected, higher order grad (#14779)
    • πŸ‘ [MXNET-978] Higher Order Gradient Support arcsinh, arccosh. (#15530)

    Operator improvements

    • broadcast axis is alias to broadcast axes; doc fix (#15546)
    • Utility to help developers debug operators: Tensor Inspector (#15490)
    • Softmax with length (#15169)
    • in-place reshape ops (#14053)
    • βž• Add missing default axis value to symbol.squeeze op (#15707)
    • βž• Add matrix determinant operator in linalg (#15007)
    • βž• Add fp16 support for topk (#15560)
    • [MXNET-1399] multiclass-mcc metric enhancements (#14874)
    • πŸ†• new raise mode for nd.take and fix backward for wrap mode (#15887)

    Profiler

    • πŸ›  Fixing duplication in operator profiling (#15240)
    • Custom Operator Profiling Enhancement (#15210)
    • [Opperf] Make module/namespace of the operator parameterized (#15226)
    • πŸ‘ Opperf: Support Python<3.6 (#15487)
    • βž• Add transpose_conv, sorting and searching operator benchmarks to Opperf (#15475)
    • πŸ—„ Deprecate USE_PROFILER flag (#15595)
    • ⚑️ Update profiler.md (#15477)
    • [Opperf] Add array rearrange operators to opperf (#15606)
    • [OpPerf] PDF Random ops fix (#15661)
    • ⚑️ [Opperf] Add optimizer update operator benchmarks to opperf (#15522)
    • πŸ›  fix broadcast op param (#15714)
    • [OpPerf] Profiler flag for Python, Cpp (#15881)
    • πŸ—„ [Opperf] Filter out deprecated ops (#15541)
    • [OpPerf] Handle positional arguments (#15761)
    • [OpPerf] Take care of 4d param (#15736)
    • βž• Add Median,p50,p99 to python profiler (#15953)
    • βž• adding "total" (total time) to profiler aggregate stats sorting criteria (#16055)

    ONNX import/export

    • πŸ“š Correct ONNX documentation (#15914)
    • [MXNET-895] ONNX import/export: TopK (#13627)

    βš™ Runtime discovery of features

    • Making Features as a singleton for improved caching (#15835)
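
    The feature-discovery API referenced above can be queried from Python; a minimal sketch:

    ```python
    # Minimal sketch: inspect the features compiled into the current
    # libmxnet binary (Features() is cached as a singleton per #15835).
    import mxnet.runtime

    features = mxnet.runtime.Features()
    print(features)                       # each feature with its on/off state
    print(features.is_enabled("MKLDNN"))  # query a single feature
    ```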

    πŸ› Bug fixes

    • 🌲 [bug] fix higher grad log (#15120)
    • πŸ‘‰ Showing proper error when csr array is not 2D in shape. (#15242)
    • add 'asnumpy' dtype option to check_symbolic_backward (#15186)
    • point fix the vector declaration in MultiBoxDetection (#15300)
    • βœ… Temporarily Commenting out Flaky Test (#15436)
    • πŸ›  Fix memory leak in NaiveEngine (#15405)
    • πŸ›  fix nightly CI failure (#15452)
    • πŸ›  Small typo fixes in batch_norm-inl.h (#15527)
    • Bypass cuda/cudnn checks if no driver. (#15551)
    • Julia path patch (#15561)
    • πŸ›  Fix AMP Tutorial failures (#15526)
    • πŸ›  Fix warnings in CLang: (#15270)
    • πŸ›  Fix dumps for Constant initializer (#15150)
    • πŸ›  fix normalize mean error bug (#15539)
    • ⚠ [fix] print self in warning. (#15614)
    • πŸ‘• [MXNET-1411] solve pylint error issue#14851 (#15113)
    • [Flaky test] Skip test_operator_gpu.test_convolution_independent_gradients (#15631)
    • πŸ›  Fix subgraph with custom_op (#15671)
    • πŸ›  Fix USE_BLAS == openblas check (#15691)
    • ⚑️ update previous flaky naive engine test (#15651)
    • πŸ‘‰ make TransposeShape infer shape form both sides (#15713)
    • βœ… Skip Flaky Test (#15722)
    • βͺ Revert "Dynamic Library Loading Support" (#15755)
    • Fix flaky test test_global_metric (#15756)
    • πŸ›  Fix PR #15489 (Dynamic Library Loading Support) (#15760)
    • πŸ”¨ Refactor LibraryInitializer so it's thread safe. Fixes random sporadical concurrency crashes. (#15762)
    • πŸ›  Fix backward_clip num inputs and type of clip params (#15688)
    • πŸ›  fixing problem with existing Singleton Caching (#15868)
    • Allow operators with multiple outputs in get_atomic_symbol (#15740)
    • πŸ›  Fix ConcatType backward type inference (#15829)
    • βž• Add disable attr to subgraph property (#15926)
    • βœ… Re-enable flaky test_prelu (#15777)
    • 0️⃣ declare explicitly the tblob default assign operator and copy constructor (#15937)
    • Discard needless test cases in test_convolution_independent_gradients (#15939)
    • πŸ›  fix naive engine for multi-threaded inference (#15574)
    • Fix get_rows_per_block (#15979)
    • πŸ›  Fix a memory misalignment in topk operator (#15948)
    • Decouple dtype from shape for Random multinomial (#15980)
    • πŸ›  Fix dtype inference in arange_like operator (#15930)
    • Disable laop_6 (#15976)
    • πŸ›  Fix flaky clojure profile test (#16058)
    • πŸ›  fix test_pick test time is too long (#16066)
    • πŸ‘ [fix] Support nullop in transpose (#15865)
    • πŸ›  fix flaky test (#16074)
    • πŸ›  fix some test files test time is too long (#16067)
    • πŸ›  Fix gradient tensor mutate in {adam/ftrl/rmprop/rmspropalex}_update. (#15768)
    • πŸ›  Fix unary operator ceil/floor/trunc when data type is integer (#14251)
    • πŸ›  Fix failing tests (#16117)
    • πŸ›  Fixes NAG optimizer #15543 (#16053)
    • βœ… avoid test relu at the origin due to discontinuous gradient (#16133)
    • πŸ›  Fix remaining errors reported by D2L (#16157)
    • βœ… use 1E-4 in groupnorm test(#16169)
    • Sequence last fix (#16156)
    • πŸ›  fixing test for model compatibility checker (#16159)
    • assert_allclose -> rtol=1e-10 (#16198)
    • [MEMORY] retry GPU memory allocation if fragmented (#16194)
    • πŸ‘Œ improve dataloader signals and messages (#16114)
    • ⚑️ Update ndarray.py (#16205)
    • πŸ›  fix flaky test (#16191)
    • Solve #14116, #15143 (#15144)
    • [MXNET-1422] Fix wrong results of min([inf, inf]) and max([-inf,-inf]) (#16226)
    • πŸ›  Fix inconsistent interpolation method values (#16212)
    • πŸ‘€ set fixed seed for profiler (#16155)
    • πŸ›  Fix MXNDArrayGetData (#16289)
    • fix atol for test_preloaded_multi_sgd (#16356)
    • πŸ›  Fix windows flakiness (#16415)
    • πŸ”€ cuDNN non-persistant bidirectional RNN dgrad sync fix (#16391)
    • πŸ›  [BUGFIX] Minor type issues in Squeeze (#16448)
    • πŸ›  Fix Nightly Tests for Binaries (#16451)
    • πŸ›  Fix dtype bug (#16467)
    • πŸ›  Fix flakey pylint CI failures (#16462)
    • Load NDArray only to GPU if GPU is present (#16432)
    • πŸ› Bug fix for the input of same axes of the swapaxes operator (#16513)
    • πŸ›  Fix learning rate scheduler being unexpectedly overwritten by optimizer's default value (#16487)
    • βœ… disable tests (#16536)
    • πŸ›  fix pylint in CI (#16540)
    • image crop gpu (#16464)
    • πŸ— Build dmlc-core with old thread_local implementation (#16526)
    • πŸ›  fix doc for topk (#16571)
    • RNNOp to call cudaEventCreate lazily (#16584)
    • βž• add encoding to the stub files for potential utf8 char in doc strings (#16580)
    • πŸ‘· Surpress subgraph log in CI (#16607)
    • πŸ›  Fix dequantize memory corruption (#16606)
    • πŸ›  Fix for wrong reqs set after switching from training to inference (#16553)
    • Disables test_bulking_operator_gpu due to flakiness (#16611)
    • Imagenet inference to nightly fix (#16599)
    • Move some subgraph verbose to MXNET_SUBGRAPH_VERBOSE=2 (#16622)
    • RNNOp only call cuda/cudnn if GPU ctx is requested (#16632)
    • πŸ›  fix bad encode (#16641)
    • βœ… Disable float16 test (#16643)
    • πŸ›  Fix GetMKLDNNData for delay alloc (#16618)
    • 🚚 Move ops which don't support FP16 dtype to FP32 list (#16668)
    • no such method => modified function args (#16610)
    • fix cuDNN RNN dtype_with_fallback_ bug (#16671)
    • βž• Add check if scipy is imported in sparse.py (#16574)
    • βž• Added launch bounds to the reduce kernels (#16397)
    • πŸ›  fix install dir (#16690)
    • πŸ›  fix binary dependencies in CD and nightly (#16693)
    • πŸ›  Fix SliceChannel Type inference (#16748) (#16797)
    • fix flakiness of test_np_mixed_precision_binary_funcs (#16873)
    • βœ… Fix test_gluon.py:test_sync_batchnorm when number of GPUS > 4 (#16835)
    • Omp fork numthreads fix 1.6 (#17000)
    • πŸ›  [BUGFIX] Fix race condition in kvstore.pushpull (#17007) (#17052)
    • Backport #17002, #17068 and #17114 to 1.6 branch (#17137)
    • πŸ›  Backport 3rdparty/openmp fixes (#17193)
    • πŸ›  fix norm sparse fallback (#17149)

    Front end API

    • Expose get_all_registered_operators and get_operator_arguments in the… (#15364)
    • βž• Add magic method abs to NDArray and Symbol. (#15680)
    • πŸ‘ Dynamic Library Loading Support (#15489)
    • [MXNET-1294] Add KVSTORE PushPull API (#15559)

    Gluon

    • [Dataset] Add take, filter, sample API to dataset (#16078)
    • Add register_op_hook for gluon (#15839)
    • [Dataset] add shard API (#16175)
    • βž• Add list_ctx to ParameterDict (#16185)
    • πŸ‘ [Gluon] Support None argument in HybridBlock (#16280)
    • Aggregated zero grad (#16446)
    • try to fix block (#16465)
    • [Gluon] Don't serialize shared parameters twice (#16582)
    • Initializer. eq (#16680)

    Symbol

    • βž• Add symbol api for randn and fix shape issue for randn ndarray and symbol api (#15772)
    • Graph Partition API (#15886)

    Language Bindings

    Python

    πŸš€ MXNet community voted to no longer support Python 2 in future releases of MXNet. Therefore, MXNet 1.6 release is going to be the last MXNet release to support Python 2.

    C/C++

    • πŸ‘ [C++] Improve inference script to support benchmark on Imagenet (#15164)
    • C Api for simplebind, fix comment for trigoops, add atol to assert (#16585)

    Clojure

    • Extend Clojure BERT example (#15023)
    • [Clojure] Add fastText example (#15340)
    • βœ… make clojure api generator tests less brittle (#15579)

    Julia

    • βž• add julia env settings (#15523)
    • julia: bump window prebult binary version to v1.5.0 (#15608)
    • πŸ‘· julia: remove Travis CI related files (#15616)
    • julia: bump binding version to v1.6.0 (#15607)
    • julia: rename build env var MXNET_HOME to MXNET_ROOT (#15568)
    • Revert "julia: rename build env var MXNET_HOME to MXNET_ROOT (#15568)" (#16147)
    • julia: fix mx.forward kwargs checking (#16138)
    • julia: implement context.num_gpus (#16236)
    • julia: add AbstractMXError as parent type (#16235)
    • [MXNET-1430] julia: implement context.gpu_memory_info (#16324)
    • πŸ“„ julia/docs: more DRY on page rendering (#16396)

    Perl

    • [Perl] - simplify aliasing strategy (#15395)
    • [Perl] - ndarray to native array conversion fix (#16635)

    Scala

    • βž• Add Sparse NDArray support for Scala (#15378)
    • πŸ›  fix the bug on Scala Sparse (#15500)
    • πŸ›  fix heap-use-after-free in scala (#15503)
    • ⬆️ Bump Scala version to 1.6 (#15660)
    • πŸ›  Fix Scala Symbolic API some/Some typo (#15687)
    • Faster Scala NDArray to BufferedImage function (#16219)

    🐎 Performance improvements

    • Proper bulking of ops not using FCompute (#15272)
    • πŸ‘Œ improve layernorm CPU performance (#15313)
    • Efficient MXNet sampling in the multinomial distribution (#15311)
    • βͺ Revert default return type for indices in argsort() and topk() back to float32 (#15360)
    • πŸ‘‰ Use omp threads for cpu data loader (#15379)
    • Accelerate ROIPooling layer (#14894)
    • Avoid memory copy for dropout inference (#15521)
    • Add omp parallel optimization for _contrib_BilinearReisze2D (#15584)
    • Softmax optimization for GPU (#15545)
    • Speed up group executor (#16069)
    • 🐎 FullyConnected Bias performance improvement on GPU (#16039)
    • 🐎 Embedding gradient performance optimization on GPU (#16355)
    • Faster Transpose 2D (#16104)
    • Pseudo 2D transpose kernel (#16229)
    • Faster general take (#16615)

    Example and tutorials

    • 🐎 [TUTORIAL] Gluon performance tips and tricks (#15427)
    • ⚑️ Updating profiler tutorial to include new custom operator profiling (#15403)
    • πŸ“œ [TUTORIAL] Gluon and Sparse NDArray (#15396)
    • [TUTORIAL] Revise Naming tutorial (#15365)
    • Revise Symbol tutorial (#15343)
    • πŸ›  Two fixes for info_gan.md example Code (#15323)
    • Rebase #13757 to master (#15189)
    • Tensor Inspector Tutorial (#15517)
    • 🌲 logging (#15106)
    • ⚑️ update profiler tutorial (#15580)
    • [MXNET-1358] Fit api tutorial (#15353)
    • Tutorials nighly fix (#16179)
    • Update add_op_in_backend.md (#16403)
    • typo fix in r doc lstm tutorial (#16546)
    • [MKL-DNN] Add mxnet mkldnn cmake tutorial (#16688)

    πŸ“š Website and documentation

    • [DOC] Clarify that global pooling is going to reset padding (#15269)
    • πŸ“š Update sparse_retain Documentation (#15394)
    • nano instructions (#15117)
    • βœ‚ remove comments from nano instructions (#15433)
    • REAME MTCNN Link URL Error in original website (#15020)
    • ⚑️ Update Horovod docs links in README (#15366)
    • πŸ›  fix doc for sort and argsort (#15317)
    • πŸ›  fix comment (#15481)
    • πŸ‘Œ Improve docs for AMP (#15455)
    • [Doc] Add MKL install method apt/yum into tutorial (#15491)
    • πŸ“„ Julia docs (#15454)
    • πŸ“„ Docs: Fix misprints (#15505)
    • πŸ— website build for julia: fix path to be static (#15554)
    • ✏️ some minor typos/clarifications (#15538)
    • refine Nano setup directions (#15524)
    • [Doc] add squeeze to Array change shape (#15549)
    • πŸ›  fix typo (#15648)
    • πŸ›  Fix url (404 error) (#15683)
    • ⚑️ update julia install doc (#15609)
    • πŸ“„ [DOC] refine autograd docs (#15109)
    • [DOC] Fix many arguments in the doc: reshape_like, arange_like, shape_array (#15752)
    • Add Gather_nd Scatter_nd to NDArray API category doc (#15689)
    • ⚑️ [Dependency Update] [Doc] move the general prerequisite software to the top (#15896)
    • πŸ“„ typo in docs (#16094)
    • 🚧 [WIP] New Website: New Docs [1/3] (#15884)
    • [DOC] Fix doc for nn.Embedding, nn.Dense and nd.Embedding (#15869)
    • [DOC] Consistent capitalization: mxnet -> MXNet, scala -> Scala (#16041)
    • πŸ†• New Website: Remove Old Content [2/3] (#15885)
    • πŸ†• New Website: New Pipeline [3/3] (#15883)
    • ⚑️ Update KL Divergence formula (#16170)
    • πŸ›  fix broken links (#16255)
    • redirect to the 404 page (#16287)
    • βž• add google-analytics config (#16271)
    • πŸ›  Fixing links for website + Fixing search (#16284)
    • πŸ“š Minor fix in ToTensor documentation. (#16299)
    • βž• adding redirects so that old website API links surfaced from searches (#16342)
    • πŸ›  Fix code block formatting in Why MXNet doc page (#16334)
    • πŸ“„ Julia: add API docs back (#16363)
    • πŸ”„ Change mailing list url in footer to point to instructions about how to subscribe instead (#16384)
    • βž• Add instructions to report a security vulnerability (#16383)
    • [DOC] fix installation selector wrong history (#16381)
    • πŸ— Beta build (#16411)
    • 🚧 [WIP] Improving Python Docs API (#16392)
    • πŸ›  fix autodoc for spurrious toggles (#16452)
    • πŸš€ [Doc] Update the download page with 1.5.1 release (#16442)
    • πŸ›  Fixing broken links (#16500)
    • βž• add binary and docs build command options (#16514)
    • βž• add option to remove indexes (#16525)
    • πŸ“ˆ Correct Google Analytics Tracker (#16490)
    • [Doc] Use mirror link in the download page (#16501)
    • πŸ›  checking broken link fixes work (#16538)
    • πŸ— detect number of procs during sphinx build (#16512)
    • πŸ›  fixed broken links across multiple files (#16581)
    • πŸ›  fix missing docs due to git add issues (#16496)
    • second round of fixing broken links in multiple files (#16598)
    • πŸ“„ Python Docstring Convetion (#16550)
    • [MXNET-1434] Fix a broken link for basic C++ tutorial (#16461)
    • πŸ›  Fix python doc build issue (#16630)
    • πŸ›  fixing broken links in multiple files - round 3 (#16634)

    CI/CD

    • Fix build_ccache_wrappers: (#14631)
    • βœ‚ Remove mhard-float option. This is already deprecated by Google. (#15435)
    • ⬆️ CI: upgrade Julia version from 1.0.3 to 1.0.4 (#15502)
    • βž• Add -R option to ci/build.py to avoid rebuilding containers (#15426)
    • ⚑️ [Dependency Update] Bump up the CI Nvidia docker to CUDA 10.1 (#14986)
    • πŸ›  fixed config.mk and Makefile bugs for installing mkl (#15424)
    • πŸ‘‰ Add -DMXNET_USE_OPENMP to Makefiles so libinfo gets updated accordingly (#15498)
    • ⚑️ [Dependency Update] Dependency update doc (#15045)
    • βœ‚ Remove Scala package test on build (#15915)
    • πŸ”¨ Refactor for windows CI 'out of heap space' errors (#15922)
    • πŸ›  Fix Nightly Maven GPU (#15989)
    • 🏁 Windows cmake flags cleanup (#16013)
    • Disable flaky test in test_amp_conversion (#16031)
    • ⚑️ Updates git_init Jenkins utility function to support checking out a particular commit id
    • βž• Adds artifact repository scripts
    • βž• Adds CD pipeline framework
    • βž• Adds static libmxnet release pipeline
    • ⚑️ Updates CD pipeline
    • βž• Adds documentation
    • ⚑️ Updates kvstore functions to use pushd and popd
    • Throws exceptions instead of magic numbers
    • ⚑️ Updates artifact repository cli to use --libtype instead of --static or --dynamic
    • Clarifies ci_utils and cd_utils origin remark
    • βž• Adds clarifying note on why ubuntu 14.04 is being used for compilation
    • βœ‚ Removes MXNET_SHA
    • πŸš€ Removes set_release_job_name
    • βž• Adds license headers
    • ⚑️ Updates artifact repository to expect licenses
    • 🚚 Moves ci/cd to cd directory
    • πŸ‘· Takes downstream job name from environment
    • ⚑️ Updates order of parameters
    • ⚑️ Updates job type parameter to dropdown
    • βž• Adds libmxnet feature extraction code comments
    • βœ‚ Removes ccache setup from static build
    • πŸ‘· Disable test coverage of C++ codebase on CI (#15981)
    • ⚑️ Update readme and project.clj comment (#16084)
    • Enable tvm_op for ci (#15889)
    • Not to search for coverage files when none exist (#16107)
    • πŸ›  Fixes openblas installation for static build
    • ⚑️ Update python dependencies (#16105)
    • πŸ›  CD Fixes (#16127)
    • βž• Adds dynamic libmxnet to CD pipeline (#16163)
    • πŸ›  Fix README Build Status (#16183)
    • πŸ— subscribe to build and CD changes (#16192)
    • πŸš€ [CD] Add COMMIT_ID param to release job (#16202)
    • πŸ›  Fix lack of dylib support in Makefile when use lapack (#15813)
    • βœ‚ Removes git status update stop gap solution (#16285)
    • βž• add mkl installation temp fix (#16304)
    • βž• add 'Release' cmake flag (#16294)
    • S3 upload artifacts (#16336)
    • πŸ›  Fix nightly scala pipeline (#16362)
    • βœ‚ remove redundant branch name (#16372)
    • βœ… Skipping installing nightly test (#16418)
    • βž• Adds PyPI CD Pipeline (#16190)
    • ⬆️ upgrade the pytest version (#16429)
    • βͺ Revert "add mkl installation temp fix (#16304)" (#16369)
    • 🐳 increase docker cache timeout (#16430)
    • βž• Adds pip requirements file to nightly gpu ci image (#16472)
    • 🐳 [CD] Adds python docker pipeline (#16547)
    • 🚚 Move imagenet inference to nightly (#16577)
    • Backport #16980 #17031 #17018 #17019 to 1.6 branch (#17213)

    Misc

    • ⚑️ update committer info (#15289)
    • πŸš€ Typo fix in plan_memory relase -> release. (#15299)
    • indent changes (#15321)
    • πŸ”€ Had a few PRs merged. Hope to become an official contributor and potentially a commiter. (#15451)
    • cuda/cuDNN lib version checking. Force cuDNN v7 usage. (#15449)
    • πŸ‘Œ Improve diagnose.py, adding build features info and binary library path. (#15499)
    • πŸš€ update ratcheck for apache-rat 0.13 release (#15417)
    • βž• add myself to interested modules (#15590)
    • 1.5.0 news (#15137)
    • ⬆️ bump up version from 1.5.0 to 1.6.0 on master (#15072)
    • βœ‚ Remove myself from CODEOWNERS (#15617)
    • βœ‚ remove mshadow submodule
    • import mshadow source tree
    • πŸ‘ cuDNN support cleanup (#15812)
    • Remove requests_failed_to_import handling
    • ⚑️ Update CODEOWNERS. (#15972)
    • πŸ‘Œ Improve diagnose.py to display environment variables (#15715)
    • ⚑️ Update README.md (#16035)
    • ⚑️ [Dev] update ps-lite dependency (#15936)
    • Typedef cleanup (#15899)
    • βž• add KEY for Tao Lv (#16081)
    • βœ‚ remove 'foo' and other print msg from test (#16088)
    • βͺ Revert accidental change to CMakelists (#16040)
    • ⚑️ Update env_var.md (#16145)
    • ⚑️ Update dmlc-core (#16149)
    • βž• adding codeowners (#16165)
    • Factorize CUDA_KERNEL_LOOP used in CUDA kernels (#16197)
    • βž• add code of conduct and conflict resolution (#16343)
    • simple typo error in NEWS.md (#16344)
    • ⚑️ update NEWS.md and README.md (#16385)
    • split issue templates (#16558)
    • πŸ”’ Create SECURITY.md (#16573)

    πŸ— How to build MXNet

    Please follow the instructions at https://mxnet.incubator.apache.org/get_started

    πŸ— Users that build MXNet from source are recommended to build release 1.6.0 without jemalloc to avoid incompatibilities with llvm's openmp library (details in issue #17043 and PR #17324). This is done for cmake builds by setting USE_JEMALLOC "OFF" in ./CMakeLists.txt, or for make builds with "USE_JEMALLOC = 0" in make/config.mk.

  • v1.6.0.rc2

    January 29, 2020
  • v1.6.0.rc1

    January 07, 2020
  • v1.6.0.rc0

    December 12, 2019
  • v1.5.1 Changes

    September 05, 2019

    πŸš€ Apache MXNet (incubating) 1.5.1 is a maintenance release incorporating important bug fixes and performance improvements. All users of Apache MXNet (incubating) 1.5.0 are advised to upgrade. You can install Apache MXNet (incubating) 1.5.1 at the usual place. Please review these Release Notes to learn about the bug fixes.

    πŸ› Bug-fixes

    • βž• add deconv in TRT subgraph (#15666) (#16043)
    • ⚑️ Update TRT tutorial with new APIs (#16044)
    • Fix _copy_to on MKLDNN backend (#15637) (#15803)
    • Benchmark doc fix (#15769) (#16029)
    • βœ‚ remove Julia cat image for license issue (#15964) (#16026)
    • βž• added check for empty params file and unknown param (not arg/aux) (#15917)
    • πŸ›  fix license issues (#15806) (#15860)
    • prevent TRT_Logger to be destroyed before TRT engine (#14898) (#15877)
    • [MXNET-1086] added sub and mul to ONNX->TensorRT conversion (#15344) (#15875)
    • πŸ– handle fix_gamma in tensorrt subgraph conversion correctly (#15645) (#15874)
    • πŸ›  fix LinearRegressionOutput with empty label (#15620) (#15873)
    • [v1.5.x] [MKLDNN] Independent gradients requests check with respect to weights… (#15805)
    • πŸ›  fix dropout mask output (#15697) (#15804)
    • πŸ›  fix fp32 flatten issue (#15351) (#15802)
    • πŸ“¦ Clojure package remove source images (#15828)
    • πŸ”„ changed constructor args (#15601) (#15827)
    • Add MKLDNN 4c layout to fix gluoncv se_resnext101_64x4d (#15692) (#15801)
    • πŸ›  Fix the bug of MXEnginePushAsyncND and MXEnginePushSyncND (#15751) (#15792)

    πŸ— How to build MXNet

    Please follow the instructions at https://mxnet.incubator.apache.org/get_started

    ⚑️ List of submodules used by Apache MXNet (Incubating) and when they were updated last

    | Name | Commit-id | Last update in MXNet | Last update in module |
    | --- | --- | --- | --- |
    | dlpack | 10892ac | Oct 30, 2017 | Aug 12, 2019 |
    | dmlc-core | 3943914 | May 14, 2019 | Sep 2, 2019 |
    | googletest | eb9225c | Jan 14, 2019 | Aug 29, 2019 |
    | mkldnn | 41bee20 | May 14, 2019 | Aug 27, 2019 |
    | mshadow | 1d79ecf | May 13, 2019 | Aug 4, 2019 |
    | nvidia_cub | c3cceac | Feb 16, 2018 | Jul 17, 2019 |
    | onnx-tensorrt | 1e209e5 | Jan 3, 2019 | Aug 22, 2019 |
    | openmp | 37c7212 | Nov 14, 2017 | Aug 28, 2019 |
    | ps-lite | 8a76389 | Apr 25, 2018 | Sep 2, 2019 |
    | tvm | 21935dc | May 21, 2019 | Sep 2, 2019 |

  • v1.5.1.rc0 Changes

    September 05, 2019

    πŸš€ Apache MXNet (incubating) 1.5.1 is a maintenance release incorporating important bug fixes and performance improvements. All users of Apache MXNet (incubating) 1.5.0 are advised to upgrade. You can install Apache MXNet (incubating) 1.5.1 at the usual place. Please review these Release Notes to learn about the bug fixes.

    πŸ› Bug-fixes

    • βž• add deconv in TRT subgraph (#15666) (#16043)
    • ⚑️ Update TRT tutorial with new APIs (#16044)
    • Fix _copy_to on MKLDNN backend (#15637) (#15803)
    • Benchmark doc fix (#15769) (#16029)
    • βœ‚ remove Julia cat image for license issue (#15964) (#16026)
    • βž• added check for empty params file and unknown param (not arg/aux) (#15917)
    • πŸ›  fix license issues (#15806) (#15860)
    • prevent TRT_Logger to be destroyed before TRT engine (#14898) (#15877)
    • [MXNET-1086] added sub and mul to ONNX->TensorRT conversion (#15344) (#15875)
    • πŸ– handle fix_gamma in tensorrt subgraph conversion correctly (#15645) (#15874)
    • πŸ›  fix LinearRegressionOutput with empty label (#15620) (#15873)
    • [v1.5.x] [MKLDNN] Independent gradients requests check with respect to weights… (#15805)
    • πŸ›  fix dropout mask output (#15697) (#15804)
    • πŸ›  fix fp32 flatten issue (#15351) (#15802)
    • πŸ“¦ Clojure package remove source images (#15828)
    • πŸ”„ changed constructor args (#15601) (#15827)
    • Add MKLDNN 4c layout to fix gluoncv se_resnext101_64x4d (#15692) (#15801)
    • πŸ›  Fix the bug of MXEnginePushAsyncND and MXEnginePushSyncND (#15751) (#15792)

    πŸ— How to build MXNet

    Please follow the instructions at https://mxnet.incubator.apache.org/install/index.html

    ⚑️ List of submodules used by Apache MXNet (Incubating) and when they were updated last

    | Name | Commit-id | Last update in MXNet | Last update in module |
    | --- | --- | --- | --- |
    | dlpack | 10892ac | Oct 30, 2017 | Aug 12, 2019 |
    | dmlc-core | 3943914 | May 14, 2019 | Sep 2, 2019 |
    | googletest | eb9225c | Jan 14, 2019 | Aug 29, 2019 |
    | mkldnn | 41bee20 | May 14, 2019 | Aug 27, 2019 |
    | mshadow | 1d79ecf | May 13, 2019 | Aug 4, 2019 |
    | nvidia_cub | c3cceac | Feb 16, 2018 | Jul 17, 2019 |
    | onnx-tensorrt | 1e209e5 | Jan 3, 2019 | Aug 22, 2019 |
    | openmp | 37c7212 | Nov 14, 2017 | Aug 28, 2019 |
    | ps-lite | 8a76389 | Apr 25, 2018 | Sep 2, 2019 |
    | tvm | 21935dc | May 21, 2019 | Sep 2, 2019 |

  • v1.5.0 Changes

    June 08, 2019

    πŸ†• New Features

    Automatic Mixed Precision (experimental)

    Training Deep Learning networks is a very computationally intensive task. Novel model architectures tend to have increasing numbers of layers and parameters, which slow down training. Fortunately, software optimizations and new generations of training hardware make it a feasible task.
    However, many of the hardware and software optimization opportunities lie in exploiting lower precision (e.g. FP16), for example to utilize the Tensor Cores available on new Volta and Turing GPUs. While FP16 training has shown great success in image classification tasks, other, more complicated neural networks typically stayed in FP32 due to the difficulty of applying the FP16 training guidelines.
    πŸ“„ That is where AMP (Automatic Mixed Precision) comes into play. It automatically applies the guidelines of FP16 training, using FP16 precision where it provides the most benefit while conservatively keeping operations that are unsafe in FP16 in full FP32 precision. To learn more about AMP, check out this tutorial.
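
    A minimal sketch of what enabling AMP looks like in user code, assuming the contrib amp module from this release and a GPU context (the tiny network and inputs are illustrative only):

    ```python
    # Hedged sketch of the experimental AMP workflow (mxnet.contrib.amp).
    import mxnet as mx
    from mxnet import autograd, gluon
    from mxnet.contrib import amp

    amp.init()  # patch operators: FP16 where beneficial, FP32 where unsafe

    net = gluon.nn.Dense(10)
    net.initialize(ctx=mx.gpu(0))
    trainer = gluon.Trainer(net.collect_params(), "sgd", {"learning_rate": 0.1})
    amp.init_trainer(trainer)  # enable dynamic loss scaling

    with autograd.record():
        out = net(mx.nd.ones((4, 8), ctx=mx.gpu(0)))
        loss = out.sum()
        with amp.scale_loss(loss, trainer) as scaled_loss:
            autograd.backward(scaled_loss)
    trainer.step(4)
    ```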

    πŸ‘ MKL-DNN Reduced precision inference and RNN API support

    πŸ“š Two advanced features, fused computation and reduced-precision kernels, were introduced by MKL-DNN in its recent version. These features can significantly speed up inference performance on CPU for a broad range of deep learning topologies. The MXNet MKL-DNN backend provides optimized implementations for various operators covering a broad range of applications including image classification, object detection, and natural language processing. Refer to the MKL-DNN operator documentation for more information.

    Dynamic Shape (experimental)

    πŸ‘ MXNet now supports Dynamic Shape in both imperative and symbolic mode. MXNet used to require that operators statically infer the output shapes from the input shapes. However, there exist some operators that don't meet this requirement. Examples are:

    • while_loop: its output size depends on the number of iterations in the loop.
    • boolean indexing: its output size depends on the value of the input data.
    • πŸ‘ many operators can be extended to take a shape symbol as input and the shape symbol can determine the output shape of these operators (with this extension, the symbol interface of MXNet can fully support shape).
      To support dynamic shape and such operators, we have modified MXNet backend. Now MXNet supports operators with dynamic shape such as contrib.while_loop, contrib.cond, and mxnet.ndarray.contrib.boolean_mask
      Note: Currently dynamic shape does not work with Gluon deferred initialization.
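
    A minimal sketch of a dynamically shaped operator, using the contrib boolean_mask mentioned above (the output shape depends on the mask values, so it is only known at run time):

    ```python
    # Minimal sketch: boolean_mask's output shape depends on input *values*.
    import mxnet as mx

    data = mx.nd.array([[1, 2], [3, 4], [5, 6]])
    mask = mx.nd.array([1, 0, 1])
    out = mx.nd.contrib.boolean_mask(data, mask)
    print(out.shape)  # (2, 2) here, not inferable from input shapes alone
    ```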

    πŸ‘ Large Tensor Support

    0️⃣ Currently, MXNet supports a maximal tensor size of around 4 billion (2^32) elements. This is due to uint32_t being used as the default data type for tensor size as well as variable indexing.
    This limitation has created many problems when larger tensors are used in the model.
    A naive solution to this problem is to replace all uint32_t in the MXNet backend source code with int64_t.
    This solution is not viable, however, because many data structures use uint32_t as the data type for their members.
    Unnecessarily replacing these variables with int64_t will increase the memory consumption, causing another limitation. Second, MXNet has many submodule dependencies.
    Updating the variable types in the MXNet repository is not enough. We also need to make sure different libraries, such as MKLDNN, MShadow, etc., support the int64_t integer data type.
    Third, many front-end APIs assume an unsigned 32-bit integer interface. Only updating the interface in C/C++ would cause all the language bindings to fail.
    Therefore, we need a systematic approach to enhance MXNet to support large tensors.
    Now you can enable large tensor support by setting the build flag USE_INT64_TENSOR_SIZE = 1. Note this is set to 0 by default.
    πŸ‘ For more details please refer to the design document.

    ⚑️ Dependency Update

    πŸ‘ MXNet has added support for CUDA 10, CUDA 10.1, cudnn7.5, NCCL 2.4.2, and numpy 1.16.0.
    ⚑️ These updates are available through PyPI packages and build from source, refer to installation guid for more details.

    Gluon Fit API (experimental)

    Training a model in Gluon requires users to write the training loop. This is useful because of its imperative nature; however, repeating the same boilerplate code across multiple models can become tedious.
    The training loop can also be overwhelming to some users new to deep learning. We have introduced an Estimator and Fit API to help simplify the training loop.
    Note: this feature is still experimental; for more details, refer to the design document.
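
    A minimal sketch of the Fit API under the contrib layout of this release (the toy data and hyperparameters are illustrative only):

    ```python
    # Hedged sketch of the experimental Estimator/Fit API (gluon.contrib).
    import mxnet as mx
    from mxnet import gluon
    from mxnet.gluon.contrib.estimator import Estimator

    # toy dataset so the sketch is self-contained
    X = mx.nd.random.uniform(shape=(32, 8))
    y = mx.nd.random.randint(0, 10, shape=(32,)).astype("float32")
    train_data = gluon.data.DataLoader(gluon.data.ArrayDataset(X, y), batch_size=8)

    net = gluon.nn.Dense(10)
    net.initialize()
    loss = gluon.loss.SoftmaxCrossEntropyLoss()
    trainer = gluon.Trainer(net.collect_params(), "sgd", {"learning_rate": 0.1})

    est = Estimator(net=net, loss=loss, trainer=trainer)
    est.fit(train_data=train_data, epochs=1)  # replaces the hand-written loop
    ```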

    πŸ†• New Operators

    • split_v2 (#13687)
    • Gradient multiplier (contrib) operator (#13632)
    • πŸ‘ Image normalize operator - GPU support, 3D/4D inputs (#13802)
    • πŸ‘ Image ToTensor operator - GPU support, 3D/4D inputs (#13837)
    • βž• Add Gluon Transformer Crop (#14259)
    • GELU (#14449)
    • AdamW operator (Fixing Weight Decay Regularization in Adam) (#13728)
    • [MXNET-1382] Add the index_array operator (#14638)
    • βž• add an operator for computing the likelihood of a Hawkes self-exciting process (#14683)
    • βž• Add numpy linspace (#14927)

    πŸ”‹ Feature Improvements

    Operators

    • πŸ‘‰ make ROIAlign support position-sensitive pooling (#13088)
    • βž• Add erfinv operator for calculating inverse error function (#13811)
    • βž• Added optional parameters to BilinearResize2D to do relative scaling (#13985)
    • πŸ‘ MXNET-1295 Adding integer index support to Sequence* family of operators. (#13880)
    • πŸ‘ Export resize and support batch size (#14014)
    • CUDNN dropout (#13896)
    • 😌 Relaxing type requirements for slice_like op (#14097)
    • 😌 Relaxing type requirements for reshape_like op (#14325)
    • Parallelize CPU version and add GPU version of boolean_mask op (#14090)
    • βž• Add NHWC layout support to Pooling (cpu, gpu cuda, gpu cuDNN) (#13749)
    • ⚑️ Multi-precision AdamW update op (#14171)
    • [op] add back support for scalar type rescale_grad argument for adamw_update/mp_adamw_update (#14221)
    • move choose_element_0index to operator (#14273)
    • ⚑️ Optimize NMS (#14290)
    • ⚑️ Optimize NMS part 2 (#14352)
    • βž• add background class in box_nms (#14058)
    • 0️⃣ Use cudnn for dropout by default (#14278)
    • ⚑️ In-place updates for Nadam, Adadelta, Adamax and SGLD (#13960)
    • Aggregate SGD (#13346)
    • βž• Add proper exception message for negative shape in array creation routines (#14362)
    • πŸ‘Œ Support multi-threading for Custom Operator (#14363)
    • 🚚 moveaxis operator now accepts negative indices and sequence of ints as well. (#14321)
    • πŸ‘Œ Support SyncBatchNorm5D (#14542)
    • βž• Add nd.power and sym.pow (#14606)
    • πŸ”„ Change RNN OP to stateful (#14476)
    • βž• Add imresize and copyMakeBorder to mx.image (#13357)
    • add ctx for rand_ndarray and rand_sparse_ndarray (#14966)
    • βž• Add cpu implementation for Deformable PSROIPooling (#14886)
    • Add warning for fp16 inputs with MXNET_SAFE_ACCUMULATION=0 (#15046)
    • Safe LayerNorm (#15002)
    • use MXNET_SAFE_ACCUMULATION for softmax accumulator (#15037)
    • LayerNorm acceleration on GPU (#14935)
    • βž• Add matrix inversion operator in linalg (#14963)
    • implementation for equivalence of tf.moments (#14842)
    • πŸ‘‰ Use env var to enforce safe accumulation in ReduceAxesCompute (#14830)
    • [MXNet-1211] Factor and "Like" modes in BilinearResize2D operator (#13226)
    • βž• added extraction/generation of diagonal and triangonal matrices to linalg (#14501)
    • πŸ‘ [Mxnet-1397] Support symbolic api for requantize and dequantize (#14749)
    • 🌲 [MXNET-978] Support higher order gradient for log. (#14992)
    • βž• Add cpu implementation for Deformable Convolution (#14879)

    MKLDNN

    • πŸ”‹ Feature/mkldnn static (#13628)
    • πŸ”‹ Feature/mkldnn static 2 (#13503)
    • πŸ‘Œ support mkl log when dtype is fp32 or fp64 (#13150)
    • βž• Add reshape op supported by MKL-DNN (#12980)
    • Move the debug output message into MXNET_MKLDNN_DEBUG (#13662)
    • ↔ Integrate MKLDNN Conv1d and support 3d layout (#13530)
    • 0️⃣ Making MKL-DNN default on MXNet master (#13681)
    • βž• Add mkldnn OP for slice (#13730)
    • mkldnn s8 conv API change for master (#13903)
    • πŸ‘ [MKLDNN] Enable signed int8 support for convolution. (#13697)
    • βž• add mkldnn softmax_output (#13699)
    • MKLDNN based Quantized FullyConnected Operator and its fusion (#14128)
    • πŸ›  Fix entropy for uint8 (#14150)
    • πŸš€ Update MKL-DNN to v0.18 release (was: fix the Dense layer issue) (#13668)
    • πŸ‘ [MKL-DNN] Enable s8 support for inner product and 3d input with flatten=false (#14466)
    • ⚑️ Optimize transpose operator with MKL-DNN (#14545)
    • 🚚 [MKLDNN] Remove repeat parts in MKLDNN.md (#14995)
    • [MKLDNN] Enable more convolution + activation fusion (#14819)
    • ⚑️ Update MKL-DNN submodule to v0.19 (#14783)
    • βž• Add mkldnn_version.h to pip package (#14899)
    • [MKLDNN] add quantized sum (#14614)
    • πŸ”¨ [MKLDNN]Refactor requantize to speed up execution (#14608)
    • [MKLDNN]Add quantized relu (#14604)
    • βž• Add MKLDNN headers to pip package (#14339)
    • βž• add symbolic link to mkldnn header files in include (#14300)
    • 0️⃣ disable default MKLDNN for cross compilation (#13893)
    • ⚑️ Update MKLDNN_README.md (#13653)
    • πŸ‘ [Quantization] Support zero-size tensor input for quantization flow (#15031)
    • πŸ‘Œ Support 3D input for MKL-DNN softmax operator (#14818)
    • βž• Add primitive cache for MKL-DNN sum(elemwise_add operator (#14914)
    • πŸ›  Fix reshape to add in-place back (#14903)
    • [int8] Add MobileNetV2_1.0 & ResNet18 Quantization (#14823)
    • 🚀 [MKLDNN]Improve quantizeV2 and dequantize latency (#14641)
    • βž• added mkldnn dependency for plugin compile target (#14274)
    • πŸ‘Œ Support Quantized Fully Connected by INT8 GEMM (#12922)

    ONNX

    • ONNX export: Instance normalization, Shape (#12920)
    • ONNX export: Logical operators (#12852)
    • ONNX import/export: Size (#13112)
    • ONNX export: Add Flatten before Gemm (#13356)
    • βœ… ONNX import/export: Add missing tests, ONNX export: LogSoftMax (#13654)
    • ONNX import: Hardmax (#13717)
    • [MXNET-898] ONNX import/export: Sample_multinomial, ONNX export: GlobalLpPool, LpPool (#13500)
    • ONNX ops: norm exported and lpnormalization imported (#13806)
    • [MXNET-880] ONNX export: Random uniform, Random normal, MaxRoiPool (#13676)
    • 0️⃣ ONNX export: Add Crop, Deconvolution and fix the default stride of Pooling to 1 (#12399)
    • onnx export ops (#13821)
    • ONNX export: broadcast_to, tile ops (#13981)
    • πŸ‘ ONNX export: Support equal length splits (#14121)

    TensorRT

    • [MXNET-1252][1 of 2] Decouple NNVM to ONNX from NNVM to TenosrRT conversion (#13659)
    • ⚑️ [MXNET-703] Update to TensorRT 5, ONNX IR 3. Fix inference bugs. (#13310)
    • πŸ”¨ [MXNET-703] Minor refactor of TensorRT code (#13311)
    • πŸ‘ reformat trt to use subgraph API, add fp16 support (#14040)

    πŸ‘ FP16 Support

    • ⚑️ Update mshadow to support batch_dot with fp16. (#13716)
    • float32 β†’ float16 cast consistency across implementations (#13857)
    • πŸ”€ modifying SyncBN doc for FP16 use case (#14041)
    • πŸ‘Œ support dot(vector, vector) for fp16 inputs on GPU (#14102)
    • softmax for fp16 with fp32 accumulator (#14098)
    • [MXNET-1327] Allow RNN Layers to be initialized to fp16 (#14219)
    • fp16 safe norm operator (#14616)
    • ⚑️ NAG Optimizer with multi-precision support (#14568)

    πŸ‘ Deep Graph Library(DGL) support

    • βž• Add graph_compact operator. (#13436)
    • Accelerate DGL csr neighbor sampling (#13588)

    Horovod Integration

    • βž• Add extra header file to export for error checking (#13795)
    • whitelist symbols for using MXNet error handling externally (#13812)
    • πŸ“Œ Use CPUPinned context in ImageRecordIOParser2 (#13980)
    • Add pin_device_id option to Gluon DataLoader (#14136)

    Dynamic Shape

    • [MXNET-1315] Add checks for dynamic-shaped operators in CachedOp (#14018)
    • [MXNET-1325] Make InferShapeAttr a standalone pass (#14193)
    • [MXNET-1324] Add NaiveRunGraph to imperative utils (#14192)
    • [MXNET-1352] Allow dynamic shape in while_loop and if conditionals (#14393)

    Backend Engine

    • Add infer_type_partial (#14214)
    • Tidy up storage allocation and deallocation (#14480)
    • βž• Add MXEnginePushAsync and MXEnginePushSync C APIs (#14615)
    • ✨ Enhance subgraph API (#14113)
    • ✨ Enhance PartitionGraph (#14277)
    • πŸ‘ Allow clearing gpu cache (#14252)
    • πŸ›  Fix warning / static function in header. (#14900)
    • Simplify creation of NodeEntry instances and use emplace_back (#14095)
    • βž• Add unpooled gpu memory type (#14716)
    • [MXNET-1398] Enable zero-copy from numpy to MXNet NDArray (#14733)
    • 0️⃣ Use DEFAULT macro in C APIs (#14767)
    • Avoid unnecessary vector copies in imperative_utils.cc (#14665)
    • πŸ‘Œ Support populating errors back to MXNet engine in callback (#13922)
    • βͺ Restore save/load ndarray to 1.4.1 (#15073)
    • Enable serializing/deserializing ndarrays in np_shape semantics (#15090)
    • πŸ‘ [numpy] Support zero-dim and zero-size tensors in MXNet (#14661)
    • Rename np_compat to np_shape (#15063)
    • [MXNET-1330] Bring nnvm::Tuple to mxnet::Tuple (#14270)

    πŸ‘ Large Tensor Support

    • πŸ‘ Large array support for randint (#14242)
    • πŸ‘ [MXNET-1185] Support large array in several operators (part 1) (#13418)
    • βœ… [MXNET-1401] adding more operators to test support for Large Tensor (#14944)
    • πŸ‘ [MXNET-1410]Adding Large Tensor Support for tensor transpose (#15059)

    Quantization

    • Exclude concat layer for gpu quantization (#14060)
    • ✨ Enhance gpu quantization (#14094)
    • Register fake grad to subgraph and quantized operators (#14275)
    • βž• Add int8 data loader (#14123)

    Profiler

    • [MXNET-857] Add initial NVTX profiler implementation (#12328)

    CoreML

    • Add more support for mxnet_to_coreml (#14222)

    Front End API

    Gluon

    • βž• Add pixelshuffle layers (#13571)
    • [MXNET-766] add dynamic_unroll RNN for HybridBlock (#11948)
    • βž• add pos_weight for SigmoidBinaryCrossEntropyLoss (#13612)
    • πŸ“± Rewrite dataloader with process pool, improves responsiveness and reliability (#13447)
    • Complimentary gluon DataLoader improvements (#13606)
    • [Fit-API] Adress PR comments (#14885)
    • ⚑️ [Fit API] update estimator (#14849)
    • ⚑️ [MXNET-1396][Fit-API] Update default handler logic (#14765)
    • [Fit API] improve event handlers (#14685)
    • 🚚 move to gluon contrib (#14635)
    • 🚚 move estimator to contrib (#14633)
    • ⚑️ [MXNet-1340][Fit API]Update train stats (#14494)
    • [MXNet-1334][Fit API]base class for estimator and eventhandler (#14346)
    • [MXNET-1333] Estimator and Fit API (#14629)
    • βž• Add support for fast variable-length LSTM (#14208)
    • βž• Add the Gluon Implementation of Deformable Convolution (#14810)
    • hybridize rnn and add model graph (#13244)

    Python

    • Python BucketingModule bind() with grad_req = 'add' (#13984)
    • πŸ“š Refine runtime feature discovery python API and add documentation to … (#14130)
    • βš™ Runtime feature detection (#13549)
    • βž• Add dtype visualization to plot_network (#14066)
    • [MXNET-1359] Adds a multiclass-MCC metric derived from Pearson (#14461)
    • πŸ‘Œ support long for mx.random.seed (#14314)
    • Optimization of metric evaluation (#13471)
    • [MXNET-1403] Disable numpy's writability of NDArray once it is zero-copied to MXNet (#14948)
    • πŸ”¨ Refactor ImageRecordIter (#14824)

    Language Bindings

    Scala

    • πŸ‘ [MXNET-1260] Float64 DType computation support in Scala/Java (#13678)
    • [MXNET-1000] get Ndarray real value and form it from a NDArray (#12690)
    • Now passing DType of Label downstream to Label's DataDesc object (#14038)
    • Scala interpreter instructions (#14169)
    • βž• Add default parameters for Scala NDArray.arange (#13816)
    • [MXNET-1287] Up scala comp (#14667)
    • ⚠ [MXNET-1385] Improved Scala Init and Macros warning messages (#14656)
    • βœ‚ Remove all usages of makefile for scala (#14013)
    • ⚑️ Update scala-package gitignore configuration. (#13962)
    • πŸ‘· [MXNET-1177]Adding Scala Demo to be run as a part of Nightly CI (#13823)
    • ⚠ [MXNET-1287] Miscellaneous Scala warning fixes (#14658)
    • πŸ›  Fix jar path and add missing ones for spark jobs (#14020)
    • πŸ“¦ [MXNET-1155] Add scala packageTest utility (#13046)
    • [MXNET-1195] Cleanup Scala README file (#13582)
    • βž• Add scalaclean to make clean (#14322)
    • βž• Add maven wraper to scala project. (#13702)
    • βž• Add new Maven build for Scala package (#13819)
    • [MXNET-1287] Feat dep (#14668)
    • βž• add Apache header on all XML (#14138)
    • ⚑️ update the version name (#14076)
    • πŸ”„ change to compile time (#13835)
    • [MXNET-918] Random module (#13039)
    • πŸš€ Avoid secondary deployment of package to local (#14647)

    Java

    • [MXNET-1180] Java Image API (#13807)
    • [MXNET-1285] Draw bounding box with Scala/Java Image API (#14474)
    • βž• Add BERT QA Scala/Java example (#14592)
    • πŸ‘ [MXNET-1232] fix demo and add Eclipse support (#13979)
    • [MXNET-1331] Removal of non-MXNET classes from JAR (#14303)
    • ⚑️ Java install info update (#13912)
    • ⚑️ [MXNET-1226] add Docs update for MXNet Java (#14395)
    • [MXNET-1383] Java new use of ParamObject (#14645)
    • MXNET-1302 Exclude commons-codec and commons-io from assembled JAR (#14000)

    C++

    • πŸ–¨ print error message for mxnet::cpp::Operator::Invoke when failed (#14318)
    • πŸ— build docs with CPP package (#13983)
    • ⚑️ Update inception_inference.cpp (#14674)
    • ⚑️ Optimize C++ API (#13496)

    Clojure

    • ⚑️ [Clojure] - Add Spec Validations to the Optimizer namespace (#13499)
    • [Clojure] Add Spec Validations for the Random namespace (#13523)
    • πŸš€ [Clojure] Correct the versions in the README so they correspond to the latest maven.org release ([#13507)
    • πŸ“¦ Port of scala infer package to clojure (#13595)
    • πŸ›  Clojure example for fixed label-width captcha recognition (#13769)
    • ⚑️ Update project.clj file to use the snapshots repo to be able to pull (#13935)
    • πŸ“¦ [Clojure] Add resource scope to clojure package (#13993)
    • πŸ“¦ [clojure-package] improve docstrings in image.clj (#14307)
    • [Clojure] Helper function for n-dim vector to ndarray (#14305)
    • [clojure]: add comp-metric based on CompositeEvalMetric (#14553)
    • ✨ [Clojure] enhance draw bounding box (#14567)
    • [Clojure] Add methods based on NDArrayAPI/SymbolAPI (#14195)
    • [Clojure] Clojure BERT QA example (#14691)
    • πŸ“¦ [clojure-package][wip] add ->nd-vec function in ndarray.clj (#14308)
    • πŸš€ [Clojure] Correct the versions in the README so they correspond to the latest maven.org release (#13507)
    • ⚑️ Update version to v1.5.0 including clojure package (#13566)
    • πŸ”€ [clojure][generator] ndarray/symbol api random merged (#14800)
    • ⬆️ upgrade codox to work with lein 2.9.0 (#14133)
    • βœ… [clojure] fix: image test does not rely on s3 to run (#15122)

    Julia

    • πŸ‘ Julia v0.7/1.0 support and drop v0.6 support (#12845)
    • Julia: split ndarray.jl into several snippets (#14001)
    • Julia: split symbolic-node.jl into several snippets (#14024)
    • Julia: rename mx.clip to clamp for NDArray (#14027)
    • Julia: add binding for runtime feature detection (#13992)

    Perl

    • Two more gluon loss classes. (#14194)

    R

    • βž• add NAG optimizer to r api (#14023)
    • πŸ“¦ R-Package Makefile (#14068)

    🐎 Performance Improvements

    • Less cudaGet/SetDevice calls in Gluon execution (#13764)
    • πŸ‘Œ Improve bulking in Gluon (#13890)
    • Increase perfomance of BulkAppend and BulkFlush (#14067)
    • 🐎 Performance improvement in ToTensor GPU Kernel (#14099)
    • 🐎 Performance improvement in Normalize GPU Kernel (#14139)
    • Bulked op segments to allow Variable nodes (#14200)
    • 🐎 Performance improving for MKL-DNN Quantized FullyConnected (#14528)
    • speedup SequenceMask on GPU (#14445)
    • Dual stream cudnn Convolution backward() with MXNET_GPU_WORKER_NSTREAMS=2. (#14006)
    • Speedup _contrib_index_copy (#14359)
    • 🐎 use mkl sparse matrix to improve performance (#14492)
    • Re-enable static cached_op optimization (#14931)
    • Speed up SequenceReverse (#14627)
    • πŸ‘Œ Improve FC perf when no_bias=False (#15033)
    • πŸ‘Œ Improve cached_op performance for static mode (#14785)

    Example and Tutorials

    • [MXNET-949] Module API to Gluon API tutorial (#12542)
    • πŸ‘Œ Support SSD f32/int8 evaluation on COCO dataset (#14646)
    • [MXNET-1209] Tutorial transpose reshape (#13208)
    • [Clojure] Add Fine Tuning Sentence Pair Classification BERT Example (#14769)
    • example/ssd/evaluate/eval_metric.py (#14561)
    • βž• Add examples of running MXNet with Horovod (#14286)
    • βž• Added link to landing page for Java examples (#14481)
    • ⚑️ Update lip reading example (#13647)
    • [MXNET-1121] Example to demonstrate the inference workflow using RNN (#13680)
    • 🚚 [MXNET-1301] Remove the unnecessary WaitAll statements from inception_inference example (#13972)
    • Modifying clojure CNN text classification example (#13865)
    • [MXNET-1210] Gluon Audio - Example (#13325)
    • βž• add examples and fix the dependency problem (#13620)
    • βž• add quantization example to readme (#14186)
    • Add an inference script providing both accuracy and benchmark result for original wide_n_deep example (#13895)
    • ⚑️ Update autoencoder example (#12933)
    • #13813 examples with opencv4/origami (#13813)
    • [MXNET-1083] Add the example to demonstrate the inference workflow using C++ API (#13294)
    • βž• Add tutorial on how to use build from source jar (#14197)
    • Gluon end to end tutorial (#13411)
    • ⚑️ Update MXNetTutorialTemplate.ipynb (#13568)
    • Simplifications and some fun stuff for the MNIST Gluon tutorial (#13094)
    • Clarify dependency on OpenCV in CNN Visualization tutorial. (#13495)
    • ⚑️ Update row_sparse tutorial (#13414)
    • βž• add clojure tutorials to index (#14814)
    • ⚑️ Update lstm_crf.py (#14865)

    Website

    πŸ“š Documentation

    • πŸš€ [MXNET-1402] MXNet docs change for 1.4.1 release (#14949)
    • βž• Add API documentation for upsampling operator with examples (#14919)
    • πŸ”€ Make docblocks for Gluon BatchNorm and SyncBatchNorm consistent with the code (#14840)
    • ⚑️ [DOC] Update ubuntu install instructions from source (#14534)
    • πŸ“„ [Clojure] Better api docstrings by replacing newlines (#14752)
    • πŸ›  Fix documentation for bilinear upsampling and add unit test (#14035)
    • ⚑️ Updated docs for R-package installation (#14269)
    • πŸ“„ [docstring] improve docstring and indentation in module.clj (#14705)
    • 🚚 The folder python-howto was removed in an earlier commit. The reference to that folder was not removed. Making a PR to remove the reference to this folder to keep documents consistent (#14573)
    • πŸ“š Updated documentation about nightly tests (#14493)
    • [Doc] Start the tutorials for MKL-DNN backend (#14202)
    • [DOC] fix sym.arange doc (#14237)
    • πŸ›  fix render issue in NDArray linalg docs (#14258)
    • πŸ“¦ [clojure-package] fix docstrings in normal.clj (#14295)
    • πŸ“š [DOC] Refine documentation of runtime feature detection (#14238)
    • ⚑️ [MXNET-1178] updating scala docs (#14070)
    • πŸ›  Fix website scala doc (#14065)
    • Return value docs for nd.random.* and sym.random.* (#13994)
    • πŸ›  Fixing the doc for symbolic version of rand_zipfian (#13978)
    • πŸ›  fix doc of take operator (#13947)
    • πŸ›  beta doc fixes (#13860)
    • πŸ“š [MXNET-1255] update hybridize documentation (#13597)
    • πŸ“š Update Adam optimizer documentation (#13754)
    • πŸ— local docs build feature (#13682)
    • gluon docfix (#13631)
    • βž• Added javadocs and improved example instructions (#13711)
    • πŸ“¦ [MXNET-1164] Generate the document for cpp-package using Doxygen (#12977)
    • πŸ›  Fix warning in waitall doc (#13618)
    • ⚑️ Updated docs for randint operator (#13541)
    • ⚑️ Update java setup docs for 1.4.0 (#13536)
    • πŸ“„ clarify ops faq regarding docs strings (#13492)
    • πŸ“š [MXNET-1158] JVM Memory Management Documentation (#13105)
    • πŸ›  Fixing a 404 in the ubuntu setup doc (#13542)
    • πŸ›  Fix READMEs for examples (#14179)
    • [Doc] Add MKL-DNN operator list (#14891)
    • πŸ›  Fixed some typos in AvgPooling Docs (#14324)
    • doc fix (#13465)
    • πŸ”„ Change Straight Dope to Dive into Deep Learning (#14465)
    • ⚑️ [DEV] update code owner (#14862)
    • βž• Add notes about debug with libstdc++ symbols (#13533)
    • Mention additional language bindings and add links (#14798)
    • βž• add contributors from intel (#14455)
    • πŸš€ what's new - add 1.4.0 release (#14435)
    • βž• added note about cuda9.2 requirement (#14140)
    • βœ‚ Remove unnecessary "also" in README.md (#14543)
    • ⚑️ Updated news.md with the latest mkldnn submodule version (#14298)
    • βž• add new cloud providers to install page (#14039)
    • ⚑️ Update NOTICE (#14043)
    • ⚑️ Update README.md (#13973)
    • ⚑️ Update profiler doc (#13901)
    • βž• Add CODEOWNERS for Julia package (#13872)
    • ⚑️ update code owner (#13737)
    • ⚑️ Update git clone location to apache github (#13706)
    • πŸ†• NEWS.md backport from v1.4.x to master (#13693)
    • ⚑️ Update CODEOWNERS, add Pedro Larroy. (#13579)
    • [MXNET-1225] Always use config.mk in make install instructions (#13364)
    • Docs & website sphinx errors squished 🌦 (#13488)
    • βž• add Qing's Key to master (#14180)
    • βž• add KEY for zachgk (#14965)
    • corrected a spellign (#14247)
    • πŸš€ 1.4 release (#14297)

    πŸ— Build and Test

    • πŸ›  Fix scala doc build break for v1.3.1 (#13820)
    • βž• Adds additional CUDA build environments (#14909)
    • πŸ‘ Pins version of scikit-learn for python2 due to drop in support (#14928)
    • ⬆️ upgrade the libpng to 1.6.35 (#14620)
    • ⚑️ Updates to cudnn package installation (#14923)
    • πŸ‘Œ Improve order of execution of install scripts. (#14867)
    • Installs qemu pip requirements from qemu requirements file (#14355)
    • ⚑️ update raspberry pi install instructions (#14172)
    • ⚑️ update the scala installation tutorial on intellij (#14033)
    • βœ‚ Removes unneeded nvidia driver ppa installation (#13814)
    • πŸ— script for installing gpu libraries and build tools (#13646)
    • Set install path for libmxnet.so dynamic lib on Mac OS (#13629)
    • compatibility with opencv4 (#14313)
    • βœ… Flaky test #14189 (#14190)
    • Enforce determinism for backwards compatibility checker (#14463)
    • πŸ”„ Change CUB submodule to track Nvidia CUB project. (#13322)
    • ⚑️ Updates gpu tests to use CUDNN_VERSION supplied by the environment but default to 7.0.3 if not set (#14595)
    • ⬆️ upgrade the version to 2.0.2 (#14621)
    • ⚑️ [Dependency Update] Upgrade the libtiff to 4.0.10 (#14623)
    • ⚑️ [Dependency Update] Upgrade cuDNN & NCCL (#14884)
    • ⚑️ [Dependency Update] Upgrade openssl to 1.1.1b (#14837)
    • ⚑️ [Dependency Update] Upgrade CI to use latest cuDNN (#14950)
    • GPU RNN to use TempSpace resource for workspace. (#15056)
    • βž• Add vim-nox to ci/docker/install/ubuntu_core.sh (#14632)
    • πŸ›  Fix dockerized GPU builds in dev_menu (#14603)
    • πŸš€ [MXNET-1093] Add python3 Docker images for each MXNet release (#12791)
    • 🐳 increased docker shared memory (#14119)
    • πŸ›  Fix permissions of ci/docker/install/ubuntu_publish.sh (#13840)
    • 🐳 Dockerfiles for Publish Testing (#13707)
    • πŸ›  Fix test randint (#14990)
    • βœ… Silence excessive mkldnn logging output on tests. (#14947)
    • πŸ›  Fix test memory with ResourceScope (#14666)
    • πŸ”€ Sync Horovod distributed training examples with latest changes (#14748)
    • βœ… use mx.context.num_gpus instead of mx.test_utils.list_gpus in MF recommender example (#14926)
    • [MXNET-1400] adding tests cases to verify large tensor support for depth_to_space and space_to_depth (#14797)
    • rewrite test_custom_op_exc (#14878)
    • 🚚 [Clojure] Remove unneeded test files (#14813)
    • βœ… Use correct stash name when running nightly tests (#14809)
    • βœ… julia/ndarray: fix flaky test cases for clamp (#14776)
    • Updates tolerances for test_layer_bidirectional (#14682)
    • Adds context parameter to check_rnn_layer_forward calls in test_lstmp (#14529)
    • βœ… reenable the test (#14483)
    • βœ… temporarily disable integ tests with a dependency on origami repo (#14448)
    • Bypass ThreadedEngine in test_operator_gpu.py:test_convolution_multiple_streams. (#14338)
    • ⚑️ Updated the MLP test to accept the number of epochs. Reduced the epochs in ci_test.sh to shorten the CI build time (#14149)
    • βœ… follow up on fix nightly test (#14134)
    • βœ… Julia: enable integration test (#14025)
    • fix test_depthwise_convoltuion for occasional CI failures (#14016)
    • πŸ›  fix test_stn (#14063)
    • βž• Add a test for SGLD optimizer with comparisons for set noise seeds. (#13762)
    • βœ… Code modification for testcases of various network models in directory example (#12498)
    • Remove MXNET_STORAGE_FALLBACK_LOG_VERBOSE from test_autograd.py (#13830)
    • βœ… [MXNET-1263] Unit Tests for Java Predictor and Object Detector APIs (#13794)
    • βœ… ONNX test code cleanup (#13553)
    • βœ… #13385 [Clojure] - Turn examples into integration tests (#13554)
    • βž• add cpp example inception to nightly test (#13534)
    • βœ… Fix flaky test test_random:test_randint_generator (#13498)
    • βž• Adding test for softmaxoutput (#13116)
    • ⚑️ [MXNET-1235] Add a test for AdaMax optimizer (#13467)
    • πŸ— [MXNET-545] Fix broken cython build (#10951)
    • ⚑️ Update mkldnn window build instructions in MKLDNN_README.md (#14952)
    • 🚦 Added USE_SIGNAL_HANDLER to other Linux builds which didn't had it (#14122)
    • πŸ— Static build for Python (#13916)
    • 🏁 Julia: add windows-cpu build (#13937)
    • πŸ— Static build instruction for MXNet in general (#13914)
    • πŸ— Jenkins nightly maven with static build script and gpu (#13767)
    • πŸ— Re-organize Scala maven build (#13626)
    • πŸ— disable error checking when building old versions (#13725)
    • πŸ— scripts for building libmxnet binary and wheel (#13648)
    • πŸ‘Œ Improve dev_menu usability, local build and virtualenv (#13529)
    • πŸ— Scripts for building dependency libraries of MXNet (#13282)
    • πŸ— [MXNET-1224]: improve scala maven jni build and packing. (#13493)
    • πŸ›  fix compile error in debug mode (#13873)
    • βž• add ccache to docs build (#13832)
    • ⬇ Decreases test sensitivity (#15014)
    • bump up atol for test_bilinear_resize_op (#15011)
    • Add STL checks via -D_GLIBCXX_ASSERTIONS in debug mode (#14896)
    • clean up duplicate cudnn installation (#14996)
    • πŸ›  fix custom op fork test (#14753)
    • πŸ›  fix pi instructions (#14746)
    • Reenable TensorRT step (#14654)
    • πŸ›  Fixes for CI downloads (#14504)
    • πŸ›  Fixed tutorial warnings (#14472)
    • πŸ›  Fixes static build script for cub directory rename (#14578)
    • βž• add a compiler flag to use int64 as tensor size (#14570)
    • ⬆️ Upgrade Pylint version to 2.3.1 (#14807)
    • πŸ›  Fixes installation nightly test by filtering out the git commands (#14144)
    • πŸ›  fix nightly test on tutorials (#14036)
    • πŸ›  Fix MXNet R package build (#13952)
    • βœ… re-enable test after issue fixed #10973 (#14032)
    • βž• Add back R tests and fix typo around R and perl tests (#13940)
    • πŸ›  Fix document build (#13927)
    • 🏁 Temporarily disables windows pipeline to unblock PRs (#14261)
    • πŸ›  Fix USE_MKLDNN check in Makefile (#13775)
    • Fix spelling in threaded_engine_test (#14709)
    • πŸ›  Fix cmake options parsing in dev_menu (#13458)
    • βž• Add Local test stage and option to jump directly to menu item from commandline (#13809)
    • βž• Add CPU test coverage and refine cmake builds (#13338)
    • βœ… ONNX test code cleanup - part 2 (#13738)
    • Rearrange tests written only for update_on_kvstore = True (#13514)
    • βž• add batch norm test (#13625)
    • ⚑️ Adadelta optimizer test (#13443)
    • βœ… Skip flaky test #13446 (#13480)
    • Comment out test_unix_python3_tensorrt_gpu step (#14642)
    • 🏁 Enable bulking test on windows (#14392)
    • βœ… rewrote the concat test to avoid flaky failures (#14049)
    • βœ… #13624 clojure nightly tests (#13624)
    • βœ… Temporarily disable website testing (#13887)
    • βž• adding tolerance to flaky test (#13850)
    • βž• Add publish test of PyPi cu100mkl (#14637)
    • πŸ“¦ CMake: Enable installation of cpp-package headers (#13339)
    • 🚦 Use USE_SIGNAL_HANDLER by default set to ON in CMakeLists.txt (#14599)
    • πŸ‘Œ Improve CMake handling of sse2 and sse3 (#14757)
    • ⚑️ Update base CUDA image for CI to v10.0 cuDNN 7.3.1 (#14513)
    • ⚑️ Updates build_lib.sh to copy the cub library license (#14347)
    • βž• Add license check to dev_menu, docs build with docker (#14166)
    • πŸ‘· Print reproduction command on CI failure (#14815)
    • πŸ”„ change mxnet_option behavior (#14743)
    • ⬆️ [DEP] upgrade dmlc-core (#14510)
    • πŸ‘‰ Use ubuntu_rat container for rat check (#14678)
    • βž• Added repeats for github status updates (#14530)
    • βž• add filter to warnings (#14532)
    • 🏁 CI Changes for Codified Windows AMIs (#14336)
    • Refactors USE_NVRTC setting to ENABLE_CUDA_RTC in pip make config files (#14250)
    • ⚑️ pypi package description. manifest/setup.py update (#14255)
    • πŸš€ make rat-excludes compliant with apache release policy (#14142)
    • βž• Add libhdf5-dev to ubuntu_core.sh (#14079)
    • βž• Added logging to GitHub commit status publishing (#13615)
    • 🐳 [CI] Prevent timeouts when rebuilding containers with docker. (#13818)
    • [MXNET-862] Basic maven jenkins pipeline (#13450)
    • Scope requests so it's not needed for dev_menu (#13771)
    • βž• Add timeout/retry logic to docker cache download (#13573)
    • ⚠ turn on Sphinx warnings as errors (#13544)
    • πŸ”§ [MXNET-1251] Basic configuration to do static-linking (#13621)
    • πŸ‘Œ Improve CCache handling (#13456)
    • πŸ— build config for maven and pip (#13556)
    • βž• Add Intel MKL blas to Jenkins (#13607)
    • βž• Add workspace cleaning after job finished (#13490)
    • βž• Add a retry to qemu_provision (#13551)
    • πŸ—„ Deprecate Jenkinsfile (#13474)
    • βœ… [MXNET-1408] Adding test to verify Large Tensor Support for ravel and unravel (#15048)
    • 🚚 move amp test and change op support to warning (#15085)
    • πŸ›  Fixes call to build ubuntu gpu in nightly tests (#14964)
    • rat check make target (#15127)
    • βž• add epsilon for tolerance level (#15098)
    • Change mx.test_utils.list_gpus to mx.context.num_gpus where possible (#14946)
    • ⬆️ bump up cudnn to 7.5.1 & nccl 2.4.2 (#14988)
    • πŸ— Disables TensorRT build step (#14958)
    • βœ… disable flaky integration test (#14151)
    • βœ… Disables large tensor size cpu test step (#14982)
    • Disable Flaky Test test_poisson_generator (#14540)
    • Disabled flaky test test_negative_binomial_generator (#13784)
    • Disabled flaky test test_gluon_data.test_recordimage_dataset_with_data_loader_multiworker (#13527)

    πŸ› Bug-fixes

    • πŸ‘Œ Improve dev_menu virtualenv handling (#14788)
    • Fallback to dense version for grad(reshape), grad(expand_dims) (#13599)
    • πŸ›  Fix the bug of BidirectionalCell (#13575)
    • ⚑️ set _scale in Trainer using optimizer rescale_grad (#14593)
    • ⚑️ [MXNET-1379] update reshape operator (#14600)
    • βž• Add repr for SymbolBlock (#14423)
    • Cudnn conv dgrad algo filtering (#14310)
    • πŸ›  Fix memory leak for size-zero ndarray (#14365)
    • πŸ›  Fixes the test_sgld (#14473)
    • βͺ Revert "Fix memory leak for size-zero ndarray (#14365)" (#14477)
    • πŸ›  fix custom operation in fork (#14451)
    • Fixes test_operator_gpu.test_multinomial_generator (#14475)
    • πŸ‘Œ support leading dimension of -1 in ravel/unravel (#14356)
    • begin=end not a valid input (#14403)
    • πŸ›  Fix NaN value comparisons in relu, max and min ops (#14262)
    • πŸ›  fix engine crash in shutdown phase (#14382)
    • πŸ›  fix OOM error during resource allocation (#14444)
    • πŸ›  Fix relative difference scala (#14417)
    • Correct update count with Gluon trainer and update_on_kvstore=False (#14377)
    • πŸ›  Fix crashes on visualization (#14425)
    • Reorder module import orders for dist-kvstore (#13742)
    • Fixes for trainer with update_on_kvstore=False (#13721)
    • πŸ›  Fix errors in docstrings for subgraph op; use code directive (#13463)
    • βž• Add resiliency to onnx export code (#13426)
    • ⚑️ update github location for sampled_block.py (#13508)
    • Revert "Manually track num_max_thread (#12380)" (#13501)
    • βͺ Revert "Feature/mkldnn static 2 (#13503)" (#13540)
    • [MXNET-1110] Add header files required by horovod (#13062)
    • ⚠ [MXAPPS-1020] Clean up some Sphinx warnings. (#13539)
    • 🐎 [MXNET-1249] Fix Object Detector Performance with GPU (#13522)
    • 🏁 [MXNET-769] Use MXNET_HOME in a tempdir in windows to prevent access denied due t… (#13531)
    • Chi_square_check for discrete distribution fix (#13543)
    • πŸ›  Fix use-before-assignment in convert_dot (#13511)
    • πŸ›  fix the situation where idx didn't align with rec (#13550)
    • πŸ›  fix link for gluon model zoo (#13583)
    • πŸ›  Fix exception handling api doc (#13519)
    • [MXNET-1253] fix control_flow_op (#13555)
    • πŸ›  fix the Float not showing correctly problem (#13617)
    • πŸ›  fix quantize pass error when the quantization supported Op are excluded in the model (#13596)
    • πŸ›  Fix for import mxnet taking long time if multiple process launched (#13602)
    • βͺ Revert "Feature/mkldnn static (#13628)" (#13638)
    • ⚑️ updated reference to Apache MXNet (#13645)
    • πŸ›  Fix incorrect delete in MXExecutorReshape exception handling (#13376)
    • βž• add build fix for Scala/Java build (#13655)
    • βœ‚ remove omp which can cause ssd accuracy variance (#13622)
    • πŸ›  Fix Jetson compilation (#13532)
    • βͺ Revert "Fix Jetson compilation" (#13665)
    • πŸ›  Fix Jetson compilation (#13666)
    • βͺ Revert "Revert "[MXNET-43] Fix Jetson compilation" (#13665)" (#13672)
    • πŸ›  fix unpicklable transform_first on windows (#13686)
    • πŸ›  Fix NDArray ToDLPack Bug (#13698)
    • πŸ›  Fix the quantization script to support Python2 (#13700)
    • ⚑️ Update basic_layers.py (#13732)
    • [MXNET-1231] Allow not using Some in the Scala operators (#13619)
    • β†ͺ [MXNET-244] Work around likely compiler bug on nested inlines and temporary acces… (#13535)
    • πŸ‘‰ Use curl to download sample data instead of wget. (#13761)
    • πŸ›  fix bipartite match memory corruption (#13727)
    • βœ‚ remove attributes clear on TRT nodes for GetOptimizedSymbol (#13703)
    • πŸ›  fix redirection issues; set default version to master (#13796)
    • πŸ›  fix for params with no dims in onnx (#13413)
    • βœ‚ Remove semicolon in libmxnet.sym file (#13822)
    • βœ‚ remove useless code (#13777)
    • πŸ›  Fixing a symlink issue with R install (#13708)
    • πŸ›  fix minor indentation (#13827)
    • πŸ›  Fix Tree Reduction on new instance type p3dn.24xlarge (#13852)
    • πŸ“¦ [Clojure] package infer tweaks (#13864)
    • πŸ›  Fix cpp examples build on Mac. (#13826)
    • πŸ›  Fix launch bounds in spatial transformer (#13188)
    • ⚑️ Update example scripts classpath. (#13849)
    • πŸ›  fix ssd quantization script error (#13843)
    • Avoid adding SegfaultLogger if process already has sig handler. (#13842)
    • πŸ›  fix the fetching GPU problem (#13889)
    • πŸ›  Fix SN-GAN example doc (#13877)
    • ⚑️ update Spectral Normalization Code (#13868)
    • πŸ›  Fixed java benchmark failing error by fixing the classpath (#13891)
    • πŸ›  Fix the order of error term's operands (#13745)
    • πŸ›  fix bug in nag optimizer (#13683)
    • πŸ›  Fix BatchNorm converter for CoreML when fix_gamma=True (#13557)
    • πŸ›  Fix for test always returning true (#13911)
    • βž• Add error checking for cpp examples. (#13828)
    • julia: fix argmax for NDArray (#13871)
    • test_ImageRecordIter_seed_augmentation flaky test fix (#12485)
    • πŸ“„ Julia: fix filename quoting in docstring (#13894)
    • Flaky maven binary download (#13974)
    • [MXNET-1293] Adding Iterables instead of List to method signature for infer APIs in Java (#13977)
    • Sample python bilinear initializer at integral points in y-direction (#12983)
    • πŸ›  Fix inconsistent handling for FResourceRequestEx for imperative and symbolic executor (#14007)
    • βœ… [MXNET-1258] fix unittest for ROIAlign Operator (#13609)
    • πŸ›  Fix performance regression in normalize operator (#14055)
    • βœ‚ Remove inplace support for ToTensor operator (#14083)
    • βž• Addresses comments in runtime feature discovery API (#13964)
    • βœ… The latest version of leiningen has a dependency problem with codox (#14132)
    • πŸ›  Fix quote on LBSGD docs (#13975)
    • πŸ›  Fixes spelling (#14168)
    • πŸ›  Fix broken amalgamation (#12792)
    • πŸ›  Fix nd.pick large array issue (#14082)
    • πŸ›  Fix req=null in SliceLikeBackward (#14209)
    • πŸ›  onnx broadcast ops fixes (#13604)
    • πŸ›  fix update params (#14218)
    • πŸ›  MXNet Java bug fixes and experience improvement (#14213)
    • βͺ reverting broadcasting fixes (#14299)
    • πŸ›  fix memory-related issues to enable ASAN tests (#14223)
    • πŸ›  FIX: flaky test exponential generator (#14287)
    • πŸ›  fix SoftmaxOutput resource bug (#14302)
    • πŸ›  Fix shape inference pass (#14153)
    • Limit workspace for cudnnGet results (#14326)
    • #14199: catch subprocess.CalledProcessError in get_gpus() (#14212)
    • πŸ›  Fixes #14181, validate model output shape for ObjectDetector. (#14215)
    • ⚑️ Optimizer MXKVStoreUpdater bug fix in serializeState method (#14337)
    • βž• Add proper exception message for negative shape in array creation routines (#14362)
    • πŸ›  Fix NaN value comparisons in relu, max and min ops (#14262)
    • πŸ›  fix engine crash in shutdown phase (#14382)
    • βœ… Flaky test #14189 (#14190)
    • Correct update count with Gluon trainer and update_on_kvstore=False (#14377)
    • πŸ›  Fix relative difference scala (#14417)
    • πŸ›  fix OOM error during resource allocation (#14444)
    • πŸ›  Fix crashes on visualization (#14425)
    • begin=end not a valid input (#14403)
    • πŸ›  Fix memory leak for size-zero ndarray (#14365)
    • πŸ›  Fixes the test_sgld (#14473)
    • βͺ Revert "Fix memory leak for size-zero ndarray (#14365)" (#14477)
    • πŸ›  fix custom operation in fork (#14451)
    • Fixes test_operator_gpu.test_multinomial_generator (#14475)
    • πŸ›  Fix script retrieval (#14519)
    • πŸ›  Memory fixes. Resolves #10867, and resolves #14080 (#14372)
    • βœ… Chouffe/clojure fix tests (#14531)
    • [clojure][image] add draw-bounding-box interop (#14533)
    • πŸ›  fix tests (#14565)
    • πŸš€ Do not touch GPU 0 during ReleaseAll (#14550)
    • πŸ‘» [MXNET-1357] Fix the cpp-examples to add exception handling (#14441)
    • πŸ›  fix build cpp examples option (#14562)
    • Fix flaky test poisson generator & test_negative_binomial_generator (#14571)
    • πŸ›  Fixing unintentional variable overloading (#14438)
    • πŸ›  fix quantize graph pass (#14605)
    • replace std::random_shuffle to std::shuffle (#14523)
    • βž• Add exception handling support for waitall (#14397)
    • split_and_load can now handle num_ctx > num_data. Issue #13909 (#14607)
    • πŸ›  Fix aspect ratio sampling for RandomResizedCrop (#14585)
    • πŸ“¦ [MXNET-400] support string type for kvstore key in cpp-package (#10792)
    • πŸ›  Fix warning on macro expansion using defined. (#14598)
    • πŸ›  Fix scaladoc scalastyle violations in Infer package (#14671)
    • πŸ›  Fix profiler check (#14677)
    • ⚠ Tweak the copy for the cudnn autotuning warning. (#14680)
    • πŸ‘» Properly handling custom op exception by modify engine (#14693)
    • Disable USE_GPERFTOOLS (#14711)
    • Reference engine from chunk via weak pointer (#14591)
    • [C++] fix type inconsistent issue when loading quantized parameters (#15038)
    • πŸ›  Fix crash in random.shuffle operator (#15041)
    • [MXNET-1406] [BUG] Fix DLManagedTensor deleter (#15016)
    • πŸ›  Fixes lint issue in AMP (#15015)
    • πŸ›  Fixed issue where the estimator was printing beyond the dataset size … (#14464)
    • πŸ›  Fixes cuDNN version for CUDA 9.0 build environment (#15001)
    • πŸ›  Fix the incorrect MKLDNN/MKL logic in cmake (#14877)
    • πŸ›  Fixed and re-enables TensorRT steps (#14960)
    • πŸ›  Fix the return type of sparse.clip operator (#14856)
    • πŸ›  Fix sample_multinomial number of outputs bug (#14873)
    • [MXNET-13578] Fix cmake installation failed (#14692)
    • πŸ›  Fix iterator over symbol when multiple children have the same name (#14597)
    • πŸ›  Fixes for wine detection tutorial (#13886)
    • Scala/Java Predict API fix #14756 (#14804)
    • πŸ›  Fix GELU backward possible NaN (#14782)
    • πŸ›  fix shape index bug (#14518)
    • πŸ›  [BUGFIX] fix ELU function will appear nan when calculating the gradient (#14673)
    • πŸ”„ Change size_t to int within for loop to fix windows build error (#14740)
    • [contrib][op] fix MultiBoxPrior confusing results if first ratio is not 1.0 (#13763)
    • πŸ›  Fix scalastyle (#14669)
    • πŸ›  fix Makefile (#14424)
    • ⚑️ [v1.4.x] Update MKL-DNN to fix the OSX build issue (#14141) (#14182)
    • βž• add preprocessed data and pretrained model info; minor format/spelling fixes (#14170)
    • πŸ›  Fixes libjpeg-turbo dependency under Ubuntu 16.04 (#14127)
    • πŸ›  Fix website error pages (#13963)
    • πŸ›  fix Makefile for rpkg (#13590)
    • πŸ›  fix c complier to clang (#13778)
    • πŸ›  Fix #13521 (#13537)
    • [MXNET-1234] Fix shape inference problems in Activation backward (#13409)
    • βͺ Revert the change broadcast_to param shape (#14998)
    • πŸ›  Fix infer shape partial after unknown shape changed to -1 (#14869)
    • πŸ›  fix add_n bug: when input mem overlap with output mem, results is wrong (#14889)
    • πŸ›  [Bugfix] Fix layer norm for large input shape (#14870)
    • πŸ›  Fix Clojure BERT example's context argument (#14843)
    • πŸ›  fix min max on zero-sized ndarray (#14745)
    • fix acc_type_switch macro with extra tests (#14773)
    • πŸ›  fix bug in profiler tutorial when using cpu (#13695)
    • πŸ‘• [MXNET-1291] solve pylint errors in examples with issue no.12205 (#13815)
    • 🚚 data preparation file moved in example (#14781)
    • πŸ‘• [MXNET-1291] solve pylint errors in examples with issue no.12205 (#13848)
    • πŸ‘» Prevent crashes for opencv exception and std::exception (#14433)
    • ⚑️ Set idx2name for Optimizer object (#14703)
    • ⚑️ Revert "Bumped minor version from 1.4.0 to 1.5.0 on master, updated License file" (#13558)
    • πŸ›  [BUGFIX] fix unknown parameter shapes when np_shape is turned on. (#15097)
    • βž• Add gluonCV to fix AMP Tutorial (#15039)
    • πŸ›  fix the if condition for LayerNorm (#15094)
    • [MKLDNN]Fix mkldnn deconvolution forward with bias (#15088)
    • NER example: fix divisions by zero (#15068)
    • βœ‚ remove warning in tutorial: (#15135)
    • πŸ‘• [MXNET-1291] solve pylint errors in examples with issue no.12205 (#13938)
    • 🐎 Revert "Improve cached_op performance for static mode (#14785)" (#14868)
    • πŸ›  Fix mkldnn backend when using naive engine (#15089)
    • πŸ›  fix gluon rnn cell single step unroll (#15081)
    • βͺ Revert "Improve FC perf when no_bias=False (#15033)" (#15099)

    License

    • ⚑️ Updates python setup.py for recent license changes (#14778)
    • [MXNET-1377] Add static-dependencies licenses (#14726)
    • βž• add license (#13793)
    • ⚑️ License update (#13565)
    • ⚑️ Bumped minor version from 1.4.0 to 1.5.0 on master, updated License file (#13478)
    • βœ… License Googletest and Appendix (#14687)
    • βž• Add copyrights for third party licenses to license file (#13851)
    • πŸ‘Œ Improve license_header tool by only traversing files under revision c… (#13803)
    • ⚑️ Update LICENSE File with subcomponents (#13808)

    Deprecations

    • πŸ—„ Julia: deprecate mx.empty, replace it with UndefInitializer (#13934)
    • πŸ—„ Deprecate NDArrayCollector and instead use ResourceScope (#14780)

    Known Issues

    • Amalgamation compile problems(#14808)
    • πŸ‘ Dynamic Shape does not support reverse shape inference and deferred initialization. (#14983)
    • Disables flaky test_random_size_crop (#15019)
    • Disables flaky test_l2_normalization (#15006)
    • βœ… Disables flaky TestStochasticTiming_2D test (#14412)
    • βœ… Disables flaky test_operator.test_sgld test (#14410)
    • βœ… Disables test_bulking due to flakyness (#14971)
    • βœ… Disabled flaky test (#13758)
    • βœ… Disables flaky test_droupout (#15003)
    • Disables flaky test_operator_gpu.test_activation (#14969)
  • v1.5.0.rc2

    June 27, 2019
  • v1.4.1 Changes

    April 30, 2019

    πŸš€ Apache MXNet (incubating) 1.4.1 is a maintenance release incorporating important bug fixes and performance improvements. All users of Apache MXNet (incubating) 1.4.0 are advised to upgrade. You can install Apache MXNet (incubating) 1.4.1 at the usual place. Please review these Release Notes to learn about the bug fixes.

    πŸ› Bug-fixes

    • Java bug-fix cherry pick (#14834)
    • 0️⃣ Use DEFAULT macro in C APIs (#14767) (#14789)
    • ⚑️ Set idx2name for Optimizer object (#14703) (#14772)
    • Add pin_device_id option to Gluon DataLoader (#14136) (#14771)
    • Tidy up storage allocation and deallocation (#14480) (#14768)
    • βž• Add MXEnginePushAsync and MXEnginePushSync C APIs (#14615) (#14770)
    • Less cudaGet/SetDevice calls in Gluon execution (#13764)
    • πŸ›  Fix nightly build of 1.4.x (#14556)
    • πŸ›  Memory fixes. Resolves #10867, and resolves #14080 (#14372) (#14586)
    • πŸ›  Fixes for data links (#14526)
    • 🏁 Backport of Windows CI Fixes (#14420)
  • v1.4.0 Changes

    February 16, 2019

    🌲 MXNet Change Log

    1.4.0

    • πŸ†• New Features
      • Java Inference API
      • Julia API
      • Control Flow Operators (experimental)
      • MXNet Horovod Integration
      • SVRG Optimization
      • Subgraph API (experimental)
      • JVM Memory Management
      • Topology-aware AllReduce (experimental)
      • MKLDNN backend: Graph optimization and Quantization (experimental)
      • Graph Optimization
      • Quantization
    • πŸ†• New Operators
    • πŸ”‹ Feature improvements
      • Operator
      • Optimizer
      • Sparse
      • ONNX
      • MKLDNN
      • Inference
      • Other
    • ⚑️ Frontend API updates
      • Gluon
      • Symbol
    • ⚑️ Language API updates
      • Java
      • R
      • Scala
      • Clojure
      • Perl
      • Julia
    • 🐎 Performance benchmarks and improvements
    • πŸ› Bug fixes
    • ⚑️ Licensing updates
    • πŸ‘Œ Improvements
      • Tutorial
      • Example
      • Documentation
      • Website
      • MXNet Distributions
      • Installation
      • Build and CI
      • 3rd party
      • TVM:
      • CUDNN:
      • Horovod:
    • πŸ—„ Deprecations
    • Other
    • πŸ— How to build MXNet
    • ⚑️ List of submodules used by Apache MXNet (Incubating) and when they were updated last

    πŸ†• New Features

    Java Inference API

    πŸš€ Model inference is often managed in a production ecosystem using primarily Java/Scala tools and frameworks. This release seeks to alleviate the need for software engineers to write custom MXNet wrappers to fit their production environment.
    Inference on a trained model has a couple of common use cases:

    1. Real-time or Online Inference - tasks that require immediate feedback, such as fraud detection
    2. Batch or Offline Inference - tasks that don't require immediate feedback; these are use cases where you have massive amounts of data and want to run inference or pre-compute inference results

    Real-time Inference is often performed and deployed on popular web frameworks such as Tomcat, Netty, Jetty, etc., all of which use Java. Batch Inference is often performed on big data platforms such as Spark using Scala or Java.

    With this project, we had the following goals:

    • πŸ— Build a new set of APIs that are Java friendly, compatible with Java 7+, are easy to use for inference.
    • Lower the barrier to entry of consuming MXNet for production use cases.
      More details can be found at the Java Inference API document.

    Julia API

    πŸ“¦ MXNet.jl is the Julia package of Apache MXNet. MXNet.jl brings flexible and efficient GPU computing and state-of-the-art deep learning to Julia. Some highlights of features include:

    • Efficient tensor/matrix computation across multiple devices, including multiple CPUs, GPUs and distributed server nodes.
    • Flexible manipulation of symbolic to composite for construction of state-of-the-art deep learning models.

    Control Flow Operators (experimental)

    Today we observe more and more dynamic neural network models, especially in the fields of natural language processing and graph analysis. The dynamics in these models come from multiple sources, including:

    • Models are expressed with control flow, such as conditions and loops.
    • NDArrays in a model may have dynamic shapes, meaning the NDArrays of a model or some of the NDArrays have different shapes for different batches.
    • Models may want to use more dynamic data structures, such as lists or dictionaries. It's natural to express dynamic models in frameworks with an imperative programming interface (e.g., Gluon, Pytorch, TensorFlow Eager). In this kind of interface, developers can use Python control flows, or NDArrays with any shape at any moment, or use Python lists and dictionaries to store data as they want. The problem of this approach is that it highly dependent on the originating front-end programming language (mainly Python). A model implemented in one language can only run in the same language.

    πŸš€ A common use case is that machine learning scientists want to develop their models in Python, whereas engineers who deploy the models usually have to use a different "production" language (e.g., Java or C). Gluon tries to close the gap between the model development and production deployment. Machine learning scientists design and implement their models in Python with the imperative interface, and then Gluon converts the implementations from imperative to symbolic by invoking hybridize() for model exporting.
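
    As a minimal sketch of that workflow (the layer sizes and export prefix here are illustrative, not from the release itself): design the model imperatively in Gluon, hybridize it, then export a language-neutral artifact.

    ```python
    import mxnet as mx
    from mxnet.gluon import nn

    # Design the model imperatively.
    net = nn.HybridSequential()
    net.add(nn.Dense(64, activation='relu'))
    net.add(nn.Dense(10))
    net.initialize()

    # Switch to the symbolic execution path and trace the graph once.
    net.hybridize()
    net(mx.nd.zeros((1, 100)))

    # Export the symbol file plus parameters
    # (my_model-symbol.json and my_model-0000.params).
    net.export('my_model')
    ```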

    πŸš€ The goal of this project is to enhance Gluon to turn a dynamic neural network into a static computation graph. The dynamic control flows are expressed by control flow operators with Gluon hybridization, and these are exported for deployment.

    ⚑️ More information can be found at Optimize dynamic neural network models with control flow operators
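
    For illustration, here is a hedged sketch of one of these control flow operators, contrib.foreach, inside a HybridBlock. The cumulative-sum body is an invented example, but the (step, data, init_states) calling convention follows the contrib control-flow API, so the loop is captured in the exported graph rather than unrolled in Python:

    ```python
    import mxnet as mx
    from mxnet.gluon import HybridBlock

    class CumSum(HybridBlock):
        """Cumulative sum over axis 0, expressed as a graph-level loop."""
        def hybrid_forward(self, F, data, state):
            def step(x, states):
                total = states[0] + x      # carry the running sum across steps
                return total, [total]      # (per-step output, new loop state)
            outputs, _ = F.contrib.foreach(step, data, [state])
            return outputs

    net = CumSum()
    net.hybridize()                        # the loop survives in the static graph
    x = mx.nd.arange(6).reshape((3, 2))    # 3 loop steps over a batch of 2
    print(net(x, mx.nd.zeros((2,))))       # [[0. 1.] [2. 4.] [6. 9.]]
    ```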

    MXNet Horovod Integration

    πŸ‘· Apache MXNet now supports distributed training using Horovod framework. Horovod is an open source distributed framework created at Uber. It leverages efficient inter-GPU communication to distribute and aggregate model parameters across multiple workers thus allowing efficient use of network bandwidth and scaling of training of deep learning models. To learn more about MXNet-Horovod integration, check out this blog.
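
    A hedged sketch of what the integration looks like in a training script, assuming the horovod.mxnet bindings and an MPI-style launcher such as horovodrun -np 4 python train.py (the one-layer model is a placeholder):

    ```python
    import mxnet as mx
    import horovod.mxnet as hvd
    from mxnet import gluon

    hvd.init()
    ctx = mx.gpu(hvd.local_rank())                  # pin one GPU per worker

    net = gluon.nn.Dense(10)
    net.initialize(ctx=ctx)
    params = net.collect_params()

    # Scale the learning rate with the worker count, a common Horovod convention.
    opt = mx.optimizer.create('sgd', learning_rate=0.01 * hvd.size())
    trainer = hvd.DistributedTrainer(params, opt)   # all-reduces gradients
    hvd.broadcast_parameters(params, root_rank=0)   # start from identical weights
    # ...then run the usual Gluon training loop with `trainer`.
    ```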

    SVRG Optimization

    SVRG stands for Stochastic Variance Reduced Gradient, which was first introduced in the paper Accelerating Stochastic Gradient Descent using Predictive Variance Reduction in 2013. It is an optimization technique that complements SGD.

    SGD is known for large scale optimization, but it suffers from slow convergence asymptotically due to the inherent variance. SGD approximates the full gradient using a small batch of samples which introduces variance. In order to converge faster, SGD often needs to start with a smaller learning rate.

    ⚑️ SVRG remedies the slow convergence problem by keeping a version of the estimated weights that is close to the optimal parameters and maintaining the average of the full gradient over a full pass of the data. This average of the full gradients over all data is calculated w.r.t. the parameters saved from the last m-th epoch. It has provable guarantees for strongly convex smooth functions; a detailed proof can be found in section 3 of the paper. SVRG uses a different update rule than SGD: gradients w.r.t. the current parameters, minus gradients w.r.t. the parameters from the last m-th epoch, plus the average of gradients over all data.
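
    Restating that update rule in standard SVRG notation, with w_t the current parameters, wΜƒ the weight snapshot kept from the last full pass, f_i the loss on the sampled mini-batch, and Ξ· the learning rate:

    ```latex
    w_{t+1} = w_t - \eta \left( \nabla f_i(w_t) - \nabla f_i(\tilde{w}) + \tilde{\mu} \right),
    \qquad
    \tilde{\mu} = \frac{1}{n} \sum_{j=1}^{n} \nabla f_j(\tilde{w})
    ```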

    Key Characteristics of SVRG:

    Subgraph API (experimental)

    πŸ‘ MXNet can integrate with many different kinds of backend libraries, including TVM, MKLDNN, TensorRT, Intel nGraph and more. In general, these backends support a limited number of operators, so running computation in a model usually involves an interaction between backend-supported operators and MXNet operators. These backend libraries share some common requirements:

    β€’ TVM, MKLDNN and nGraph use customized data formats, so interaction between these backends and MXNet requires data format conversion.
    β€’ TVM, MKLDNN, TensorRT and nGraph fuse operators.

    Integration with these backends should happen in the granularity of subgraphs instead of in the granularity of operators. To fuse operators, it's obvious that we need to divide a graph into subgraphs so that the operators in a subgraph can be fused into a single operator. To handle customized data formats, we should partition a computation graph into subgraphs as well. Each subgraph contains only TVM, MKLDNN or nGraph operators. In this way, MXNet converts data formats only when entering such a subgraph, and the operators inside a subgraph handle format conversion themselves if necessary. This makes the interaction of TVM and MKLDNN with MXNet much easier: neither the MXNet executor nor the MXNet operators need to deal with customized data formats.

    Even though invoking these libraries from MXNet requires similar steps, the partitioning rule and the subgraph execution of these backends can be different. As such, we define the following interface for backends to customize graph partitioning and subgraph execution inside an operator. More details can be found at PR 12157 and Subgraph API.

    JVM Memory Management

    πŸ†“ The MXNet Scala and Java APIs use native memory to manage NDArray, Symbol, Executor, and DataIterators via MXNet's internal C APIs. The C APIs provide appropriate interfaces to create, access and free these objects. MXNet Scala has corresponding wrappers and APIs that hold pointer references to the native memory. Before this project, JVM users (e.g. Scala, Clojure, or Java) of MXNet had to manage MXNet objects manually using the dispose pattern. There are a few usability problems with this approach:

    • πŸ‘‰ Users have to track the MXNet objects manually and remember to call dispose. This is not Java idiomatic and not user friendly. Quoting a user: "this feels like I am writing C++ code which I stopped ages ago".
    • Leads to memory leaks if dispose is not called.
    • Many objects in MXNet-Scala are managed in native memory, needing to use dispose on them as well.
    • Bloated code with dispose() methods.
    • Hard to debug memory-leaks.
      Goals of the project are:
    • πŸš€ Provide MXNet JVM users automated memory management that can release native memory when there are no references to JVM objects.
    • 🐎 Provide automated memory management for both GPU and CPU memory without performance degradation. More details can be found here: JVM Memory Management

    Topology-aware AllReduce (experimental)

    For distributed training, the Reduce communication patterns used by NCCL and MXNet are not optimal for small batch sizes. The Topology-aware AllReduce approach is based on the idea of using trees to perform the Reduce and Broadcast operations. We can use the idea of minimum spanning trees to do a binary tree Reduce communication pattern to improve distributed training following this paper by Wang, Li, Edo and Smola [1]. Our strategy is to use:

    • 🚀 a single tree (latency-optimal for small messages) to handle Reduce on small messages
    • multiple trees (bandwidth-optimal for large messages) to handle Reduce on large messages

    More details can be found here: Topology-aware AllReduce
    πŸ‘€ Note: This is an experimental feature and has known problems - see 13341. Please help to contribute to improve the robustness of the feature.

    MKLDNN backend: Graph optimization and Quantization (experimental)

    πŸš€ Two advanced features, graph optimization (operator fusion) and reduced-precision (INT8) computation, are introduced to MKLDNN backend in this release (#12530, #13297, #13260).
    🐎 These features significantly boost the inference performance on CPU (up to 4X) for a broad range of deep learning topologies. Currently, this feature is only available for inference on platforms with supported Intel CPUs.

    Graph Optimization

    The MKLDNN backend takes advantage of MXNet subgraphs to implement most of the possible operator fusions for inference, such as Convolution + ReLU, Batch Normalization folding, etc. When using the mxnet-mkl package, users can easily enable this feature by setting export MXNET_SUBGRAPH_BACKEND=MKLDNN.
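
    Equivalently from Python, a minimal sketch; the variable should be set before the model is bound so the fused graph is actually used:

    ```python
    import os
    os.environ['MXNET_SUBGRAPH_BACKEND'] = 'MKLDNN'   # enable the fusion pass

    import mxnet as mx
    # Load and run a model as usual; eligible chains such as Convolution + ReLU
    # are now executed as fused MKLDNN subgraph operators during inference.
    ```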

    Quantization

    Performance of reduced-precision (INT8) computation is also dramatically improved after the graph optimization feature is applied on CPU Platforms. Various models are supported and can benefit from reduced-precision computation, including symbolic models, Gluon models and even custom models. Users can run most of the pre-trained models with only a few lines of commands and a new quantization script imagenet_gen_qsym_mkldnn.py. The observed accuracy loss is less than 0.5% for popular CNN networks, like ResNet-50, Inception-BN, MobileNet, etc.

    🐎 Please find detailed information and performance/accuracy numbers here: MKLDNN README, quantization README and design proposal
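
    As a hedged sketch of the INT8 flow (the imagenet_gen_qsym_mkldnn.py script wraps a similar call; the checkpoint name and argument values here are illustrative and may differ between versions):

    ```python
    import mxnet as mx
    from mxnet.contrib.quantization import quantize_model

    # Load a pre-trained FP32 symbolic checkpoint.
    sym, arg_params, aux_params = mx.model.load_checkpoint('resnet-50', 0)

    # Convert eligible operators to INT8; calibration modes 'naive'/'entropy'
    # need a calibration dataset, while 'none' inserts runtime min/max nodes.
    qsym, qarg_params, aux_params = quantize_model(
        sym=sym, arg_params=arg_params, aux_params=aux_params,
        ctx=mx.cpu(),                  # INT8 inference targets supported Intel CPUs
        excluded_sym_names=[],         # operators to keep in FP32
        calib_mode='none')

    mx.model.save_checkpoint('resnet-50-quantized', 0, qsym, qarg_params, aux_params)
    ```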

    πŸ†• New Operators

    • βž• Add trigonometric operators (#12424)
    • πŸ‘ [MXNET-807] Support integer label type in ctc_loss operator (#12468)
    • [MXNET-876] make CachedOp a normal operator (#11641)
    • βž• Add index_copy() operator (#12810)
    • πŸ›  Fix getnnz operator for CSR matrix (#12908) - issue #12872
    • [MXNET-1173] Debug operators - isfinite, isinf and isnan (#12967)
    • βž• Add sample_like operators (#13034)
    • βž• Add gauss err function operator (#13229)
    • [MXNET -1030] Enhanced Cosine Embedding Loss (#12750)
    • βž• Add bytearray support back to imdecode (#12855, #12868) (#12912)
    • βž• Add Psroipooling CPU implementation (#12738)

    πŸ”‹ Feature improvements

    Operator

    • πŸ”¨ [MXNET-912] Refactoring ctc loss operator (#12637)
    • πŸ”¨ Refactor L2_normalization (#13059)
    • Customized and faster TakeOpForward operator on CPU (#12997)
    • πŸ‘ Allow stop of arange operator to be inferred from dims. (#12064)
    • Make check_isfinite, check_scale optional in clip_global_norm (#12042) add FListInputNames attribute to softmax_cross_entropy (#12701) [MXNET-867] Pooling1D with same padding (#12594)
    • βž• Add support for more req patterns for bilinear sampler backward (#12386) [MXNET-882] Support for N-d arrays added to diag op. (#12430)

    ⚑️ Optimizer

    • βž• Add a special version of Adagrad optimizer with row-wise learning rate (#12365)
    • βž• Add a Python SVRGModule for performing SVRG Optimization Logic (#12376)

    πŸ“œ Sparse

    • πŸ“œ Fall back when sparse arrays are passed to MKLDNN-enabled operators (#11664)
    • βž• Add Sparse support for logic operators (#12860)
    • βž• Add Sparse support for take(csr, axis=0) (#12889)

    ONNX

    • ONNX export - Clip operator (#12457)
    • ⚑️ ONNX version update from 1.2.1 to 1.3 in CI (#12633)
    • πŸ‘‰ Use modern ONNX API to load a model from file (#12777)
    • [MXNET-892] ONNX export/import: DepthToSpace, SpaceToDepth operators (#12731)
    • ONNX export: Fully connected operator w/o bias, ReduceSum, Square (#12646)
    • ONNX export/import: Selu (#12785)
    • ONNX export: Cleanup (#12878)
    • [MXNET-892] ONNX export/import: DepthToSpace, SpaceToDepth operators (#12731)
    • ONNX export: Scalar, Reshape - Set appropriate tensor type (#13067)
    • [MXNET-886] ONNX export: HardSigmoid, Less, Greater, Equal (#12812)

    MKLDNN

    • MKLDNN Forward FullyConnected op cache (#11611)
    • πŸ‘ [MXNET-753] Fallback when using non-MKLDNN supported operators (#12019)
    • MKLDNN Backward op cache (#11301)
    • Implement mkldnn convolution fusion and quantization. (#12530)
    • πŸ‘Œ Improve mkldnn fallback. (#12663)
    • ⚑️ Update MKL-DNN dependency (#12953)
    • ⚑️ Update MKLML dependency (#13181)
    • ✨ [MXNET-33] Enhance mkldnn pooling to support full convention (#11047)

    Inference

    • [MXNET-910] Multithreading inference. (#12456)
    • Tweaked the copy in c_predict_api.h (#12600)

    Other

    • πŸ‘Œ support for upper triangular matrices in linalg (#12904)
    • πŸ”¨ Introduce Random module / Refactor code generation (#13038)
    • [MXNET-779]Add DLPack Transformation API (#12047)
    • Draw label name next to corresponding bounding boxes when the mapping of id to names is specified (#9496)
    • Track epoch metric separately (#12182)
    • Set correct update on kvstore flag in dist_device_sync mode (#12786)

    ⚑️ Frontend API updates

    Gluon

    • ⚑️ Update basic_layers.py (#13299)
    • πŸ‘ Gluon LSTM Projection and Clipping Support (#13056)
    • πŸ‘‰ Make Gluon download function to be atomic (#12572)
    • [MXNET -1004] Poisson NegativeLog Likelihood loss (#12697)
    • βž• Add activation information for mxnet.gluon.nn._Conv (#12354)
    • Gluon DataLoader: avoid recursionlimit error (#12622)

    Symbol

    • βž• Addressed dumplicate object reference issues (#13214)
    • πŸ‘» Throw exception if MXSymbolInferShape fails (#12733)
    • Infer dtype in SymbolBlock import from input symbol (#12412)

    ⚑️ Language API updates

    Java

    • [MXNET-1198] MXNet Java API (#13162)

    R

    • πŸ”¨ Refactor R Optimizers to fix memory leak - 11374
    • βž• Add new Vignettes to the R package
      • Char-level Language modeling - 12670
      • Multidimensional Time series forecasting - 12664
    • πŸ›  Fix broken Examples and tutorials
      • Tutorial on neural network introduction - 12117
      • CGAN example - 12283
      • Test classification with LSTMs - 12263

    Scala

    • Explain the details for Scala Experimental (#12348)
    • [MXNET-716] Adding Scala Inference Benchmarks (#12721)
    • [MXNET-716][MIRROR #12723] Scala Benchmark Extension pack (#12758)
    • NativeResource Management in Scala (#12647)
    • Ignore generated Scala files (#12928)
    • πŸ‘‰ Use ResourceScope in Model/Trainer/FeedForward.scala (#12882)
    • [MXNET-1180] Scala Image API (#12995)
    • ⚑️ Update log4j version of Scala package (#13131)
    • Review require() usages to add meaningful messages (#12570)
    • πŸ›  Fix Scala readme (#13082)

    Clojure

    • Introduction to Clojure-MXNet video link (#12754)
    • πŸ‘Œ Improve the Clojure Package README to Make it Easier to Get Started (#12881)
    • πŸ“¦ MXNET-873 - Bring Clojure Package Inline with New DataDesc and Layout in Scala Package (#12387)
    • Port of Scala Image API to Clojure (#13107)

    Perl

    • πŸ”€ [MXNET-1026] [Perl] Sync with recent changes in Python's API (#12739)

    Julia

    🐎 Performance benchmarks and improvements

    • ⚑️ Update mshadow for omp acceleration when nvcc is not present (#12674)
    • [MXNET-860] Avoid implicit double conversions (#12361)
    • βž• Add more models to benchmark_score (#12780)
    • βž• Add resnet50-v1 to benchmark_score (#12595)

    πŸ› Bug fixes

    • πŸ›  Fix for #10920 - increase tolerance for sparse dot (#12527)
    • [MXNET-1234] Fix shape inference problems in Activation backward (#13409)
    • πŸ›  Fix a bug in where op with 1-D input (#12325)
    • [MXNET-825] Fix CGAN R Example with MNIST dataset (#12283)
    • ⏱ [MXNET-535] Fix bugs in LR Schedulers and add warmup (#11234)
    • πŸ›  Fix speech recognition example (#12291)
    • πŸ›  Fix bug in 'device' type kvstore (#12350)
    • πŸ›  fix search result 404s (#12414)
    • πŸ›  Fix help in imread (#12420)
    • πŸ›  Fix render issue on < and > (#12482)
    • 0️⃣ [MXNET-853] Fix for smooth_l1 operator scalar default value (#12284)
    • πŸ›  Fix subscribe links, remove disabled icons (#12474)
    • πŸ›  Fix broken URLs (#12508)
    • πŸ›  Fix/public internal header (#12374)
    • πŸ›  Fix lazy record io when used with dataloader and multi_worker > 0 (#12554)
    • πŸ›  Fix error in try/finally block for blc (#12561)
    • βž• Add cudnn_off parameter to SpatialTransformer Op and fix the inconsistency between CPU & GPU code (#12557)
    • [MXNET-798] Fix the dtype cast from non float32 in Gradient computation (#12290)
    • πŸ›  Fix CodeCovs proper commit detection (#12551)
    • βž• Add TensorRT tutorial to index and fix ToC (#12587)
    • Fixed typo in c_predict_api.cc (#12601)
    • πŸ›  Fix typo in profiler.h (#12599)
    • πŸ›  Fixed NoSuchMethodError for Jenkins Job for MBCC (#12618)
    • [MXNET-922] Fix memleak in profiler (#12499)
    • [MXNET-969] Fix buffer overflow in RNNOp (#12603)
    • πŸ›  Fixed param coercion of clojure executor/forward (#12627) (#12630)
    • πŸ›  Fix version dropdown behavior (#12632)
    • πŸ›  Fix reference to wrong function (#12644)
    • πŸ›  Fix the location of the tutorial of control flow operators (#12638)
    • πŸ›  Fix issue 12613 (#12614)
    • πŸ‘» [MXNET-780] Fix exception handling bug (#12051)
    • πŸ›  Fix bug in prelu, issue 12061 (#12660)
    • [MXNET-833] [R] Char-level RNN tutorial fix (#12670)
    • πŸ›  Fix static / dynamic linking of gperftools and jemalloc (#12714)
    • πŸ›  Fix #12672, importing numpy scalars (zero-dimensional arrays) (#12678)
    • [MXNET-623] Fixing an integer overflow bug in large NDArray (#11742)
    • πŸ›  Fix benchmark on control flow operators (#12693)
    • πŸ›  Fix regression in MKLDNN caused by PR 12019 (#12740)
    • πŸ›  Fixed broken link for Baidu's WARP CTC (#12774)
    • πŸ›  Fix CNN visualization tutorial (#12719)
    • πŸ‘ [MXNET-979] Add fix_beta support in BatchNorm (#12625)
    • R fix metric shape (#12776)
    • βͺ Revert [MXNET-979] Add fix_beta support in BatchNorm (#12625) (#12789)
    • πŸ›  Fix mismatch shapes (#12793)
    • πŸ›  Fixed symbols naming in RNNCell, LSTMCell, GRUCell (#12794)
    • Fixed setattr method of _MXClassPropertyMetaClass (#12811)
    • πŸ›  Fixed regex for matching platform type in Scala Benchmark scripts (#12826)
    • πŸ›  Fix broken links (#12856)
    • πŸ›  Fix Flaky Topk (#12798)
    • [MXNET-1033] Fix a bug in MultiboxTarget GPU implementation (#12840)
    • πŸ“Œ [MXNET-1107] Fix CPUPinned unexpected behaviour (#12031)
    • Fix all in optimizer/optimizer.py (#12886)
    • πŸ›  Fix Batch input issue with Scala Benchmark (#12848)
    • πŸ›  fix type inference in index_copy. (#12890)
    • πŸ›  Fix the paths issue for downloading script (#12913)
    • πŸ›  Fix indpt[0] for take(csr) (#12927)
    • πŸ›  Fix the bug of assigning large integer to NDArray (#12921)
    • πŸ›  Fix Sphinx errors for tutorials and install ToCs (#12945)
    • πŸ›  Fix variable name in tutorial code snippet (#13052)
    • πŸ›  Fix example for mxnet.nd.contrib.cond and fix typo in src/engine (#12954)
    • πŸ›  Fix a typo in operator guide (#13115)
    • πŸ›  Fix variational autoencoder example (#12880)
    • πŸ›  Fix problem with some OSX not handling the cast on imDecode (#13207)
    • [MXNET-953] Fix oob memory read (#12631)
    • πŸ›  Fix Sphinx error in ONNX file (#13251)
    • [Example] Fixing Gradcam implementation (#13196)
    • πŸ›  Fix train mnist for inception-bn and resnet (#13239)
    • πŸ›  Fix a bug in index_copy (#13218)
    • πŸ›  Fix Sphinx errors in box_nms (#13261)
    • πŸ›  Fix Sphinx errors (#13252)
    • πŸ›  Fix the cpp example compiler flag (#13293)
    • πŸ“œ Made fixes to sparse.py and sparse.md (#13305)
    • [Example] Gradcam- Fixing a link (#13307)
    • Manually track num_max_thread (#12380)
    • [Issue #11912] throw mxnet exceptions when decoding invalid images. (#12999)
    • Undefined name: load_model() --> utils.load_model() (#12867)
    • πŸ”„ Change the way NDArrayIter handle the last batch (#12545)
    • βž• Add embedding to print_summary (#12796)
    • πŸ‘ Allow foreach on input with 0 length (#12471)
    • [MXNET-360]auto convert str to bytes in img.imdecode when py3 (#10697)
    • πŸ›  Fix unpicklable transform_first on windows (#13686)

    ⚑️ Licensing updates

    • βž• Add license headers to R-package (#12559)
    • License header (#13178)
    • βž• add url and license to clojure package project (#13304)
    • V1.4.x RAT check fix (#14156)
    • βž• add license to pom files (#14155)

    πŸ‘Œ Improvements

    Tutorial

    • [MXNET-422] Distributed training tutorial (#10955)
    • βž• Add a tutorial for control flow operators. (#12340)
    • βž• Add a 'Gotchas using NumPy' tutorial (#12007)
    • ⚑️ Updated Symbol tutorial with Gluon (#12190)
    • πŸ‘Œ Improve tutorial redirection (#12607)
    • Include missing import in TensorRT tutorial (#12609)
    • ⚑️ Update Operator Implementation Tutorial (#12230)
    • βž• Add a tutorial for the subgraph API. (#12698)
    • πŸ‘Œ Improve clojure tutorial (#12974)
    • ⚑️ Update scala intellij tutorial (#12827)
    • [Example] Gradcam consolidation in tutorial (#13255)
    • [MXNET-1203] Tutorial infogan (#13144)
    • [MXNET-703] Add a TensorRT walkthrough (#12548)

    Example

    • ⚑️ Update C++ example so it is easier to run (#12397)
    • [MXNET-580] Add SN-GAN example (#12419)
    • [MXNET-637] Multidimensional LSTM example for MXNetR (#12664)
    • [MXNET-982] Provide example to illustrate usage of CSVIter in C++ API (#12636)
    • [MXNET-947] Expand scala imclassification example with resnet (#12639)
    • MKL-DNN Quantization Examples and README (#12808)
    • Extend the DCGAN example implemented with the Gluon API to provide a more straightforward evaluation of the generated images (#12790)
    • ⚑️ [MXNET-1017] Updating the readme file for cpp-package and adding readme file for example directory. (#12773)
    • ⚑️ Update tree lstm example (#12960)
    • ⚑️ Update bilstm integer array sorting example (#12929)
    • ⚑️ Updated / Deleted some examples (#12968)
    • ⚑️ Update module example (#12961)
    • ⚑️ Update adversary attack generation example (#12918)
    • ⚑️ Update Gluon example folder (#12951)
    • ⚑️ Update dec example (#12950)
    • ⚑️ Updated capsnet example (#12934)
    • ⚑️ Updates to several examples (#13068)
    • ⚑️ Update multi-task learning example (#12964)
    • βœ‚ Remove obsolete memory cost example (#13235)
    • ⚑️ [Example] Update cpp example README (#13280)
    • ⚑️ [Example] Update NER example README on module prediction (#13184)
    • ⚑️ Update proposal_target.py (#12709)
    • Remove the resize of validation data, which was breaking the validation accuracy of CIFAR training (#12362)
    • ⚑️ Update the README with instruction to redirect the user to gluon-cv (#13186)

    πŸ“š Documentation

    • ⚑️ Update ONNX API docs references (#12317)
    • πŸ“š Documentation update related to sparse support (#12367)
    • πŸ’… Edit shape.array doc and some style improvements (#12162)
    • πŸ›  Fixed docs/website build checkout bug (#12413)
    • βž• Add Python API docs for test_utils and visualization (#12455)
    • πŸ›  Fix the installation doc for MKL-DNN backend (#12534)
    • βž• Added comment to docs regarding ToTensor transform (#12186)
    • πŸ“Œ Pinned dockcross to a tag with fixed ABI for RPi (#12588)
    • πŸ“š Refine the documentation of im2rec (#12606)
    • ⚑️ Update and modify Windows docs (#12620)
    • ⚑️ Update the build-from-source docs to list CMake as a requirement (#12592)
    • ⚑️ update the distributed_training document (#12626)
    • βž• Add docstring in im2rec.py (#12621)
    • πŸ“¦ [Doc] Change the description for pip packages (#12584)
    • πŸ“š Change dependencies documentation opencv2-->opencv (#12654)
    • βž• Add documentation for two new environment variables for the memory pool (#12668); see the sketch after this list
    • πŸ“„ Scala Docs - Replace old Symbol api usages (#12759)
    • βž• add/update infer_range docs (#12879)
    • πŸ›  Fix typo in formula in docstring for GRU cell and layer and add clarification to description (gluon.rnn) (#12896)
    • πŸ›  Fix the operator API documentation (#12942)
    • πŸ›  fix broken docs (#12871)
    • πŸ›  Fix macOS R install and Windows Python build-from-source docs (#12919)
    • Document the newly added env variable (#13049)
    • βž• Add documentation on GPU performance on Quantization example (#13145)
    • πŸ›  Fix Sphinx python docstring formatting error. (#13177)
    • πŸ— [Doc] Fix repo paths in Ubuntu build doc (#13101)
    • πŸ›  Fix Sphinx document parsing error. (#13195)
    • πŸ›  Fix #13090, Add image.imread to python API doc. (#13176)
    • πŸ›  Fix Sphinx docstring formatting error. (#13004, #13005, #13006) (#13175)
    • πŸ›  Fix #12944, Fix Sphinx python docstring formatting error. (#13174)
    • πŸ›  Fix #13013, Fix Sphinx python docstring error. (#13173)
    • πŸ›  Fixed Sparse astype doc string formatting error (#13171)
    • πŸ›  Fixed Documentation issues (#13215)
    • ⚑️ update the doc (#13205)
    • πŸ›  Fix Sphinx doc errors (#13170)
    • πŸ›  Fix Sphinx python docstring error: initializer.InitDesc (#12939) (#13148)
    • πŸ›  Fix Sphinx python docstring error: text contrib module (#12949) (#13149)
    • πŸ›  Fix Sphinx python docstrings (#13160)
    • βž• Add Java API docs generation (#13071)
    • πŸ›  Fix scaladoc build errors (#13189)
    • βž• Add missing documentation for getnnz (#13128)
    • βž• Addressed ONNX module documentation warnings and added notes for short-form representation (#13259)
    • πŸ›  Doc fixes (#13256)
    • βž• Addressed doc issues (#13165)
    • πŸ— stop gap fix to let website builds through; scaladoc fix pending (#13298)
    • πŸ›  Fix Sphinx python docstring formatting error. (#13194)
    • Visualization doc fix. Added notes for shortform (#13291)
    • ⚑️ [Example] Add docstring for test optimizer and test score (#13286)
    • πŸ›  Fix descriptions in scaladocs for macro ndarray/symbol APIs (#13210)
    • Sphinx error reduction (#12323)
    • Sphinx errors in Gluon (#13275)
    • ⚑️ Update env_var.md (#12702)
    • ⚑️ Updated the Instructions for use of the label bot (#13192)
    • βž• Added/changed file_name, brief description comments in some files (#13033)
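
    As a pointer for the memory-pool entry above (#12668), a minimal sketch of configuring the pool via environment variables; the names MXNET_GPU_MEM_POOL_TYPE and MXNET_GPU_MEM_POOL_ROUND_LINEAR_CUTOFF are taken from that PR, so treat them as illustrative rather than authoritative:

        import os

        # Assumed from #12668: choose the rounded GPU memory pool and the
        # power-of-two cutoff below which allocation sizes are rounded
        # linearly. Set these before MXNet allocates any GPU memory.
        os.environ['MXNET_GPU_MEM_POOL_TYPE'] = 'Round'
        os.environ['MXNET_GPU_MEM_POOL_ROUND_LINEAR_CUTOFF'] = '24'

        import mxnet as mx  # the backend reads the settings when it initializes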

    Website

    • βž• adding apache conf promo to home page (#12347)
    • Consistent website theme and custom 404 (#12426)
    • ⚑️ update apachecon links to https (#12521)
    • πŸš€ [HOLD] 1.3.0 release website updates (#12509)
    • βž• add mentions of the gluon toolkits and links to resources (#12667)
    • βœ‚ remove apachecon promo (#12695)
    • [MXNET-1002] Add GluonCV and NLP toolkits, Keras, and developer wiki to navigation (#12704)

    MXNet Distributions

    • 🐳 Make the output of ci/docker/install/ubuntu_mklml.sh less verbose (#12422)
    • πŸ›  Fix tvm dependency for docker (#12479)
    • 🐳 [MXNET-703] Add TensorRT runtime Dockerfile (#12549)
    • πŸš€ [MXNET-951] Python dockerfiles built on pip binaries and build/release script (#12556)
    • πŸ”„ Change numpy version to 1.15.2 in python and docker install requirements (#12711)
    • βž• Add mkl-dnn to docker install method (#12643)
    • πŸ›  Fix docker cleanup race condition (#13092)
    • πŸ›  Bugfix in ci/docker_cache.py (#13249)
    • ⚑️ Update PyPI version number (#11773)
    • ⚑️ update download links to apache distros (#12617)

    Installation

    • Installation instructions consolidation (#12388)
    • Refine mxnet python installation (#12696)
    • ⚑️ R install instructions update for macOS (#12832)
    • βœ‚ remove legacy installation of Roxygen2 5.0 and add R-specific clean target (#12993) (#12998)
    • ⚑️ Force APT cache update before executing install (#13285)
    • πŸ‘‰ Make the Ubuntu scripts executable after download. (#12180)
    • 🏁 replacing windows setup with newer instructions (#12504)
    • ⚑️ Updated download links and verification instructions (#12651)
    • βœ‚ Remove pip overwrites (#12604)

    πŸ— Build and CI

    • πŸ— [MXNET-908] Enable minimal OSX Travis build (#12462)
    • 🏁 Use jom for parallel Windows builds (#12533)
    • πŸ— [MXNET-950] Enable parallel R dep builds in CI (#12552)
    • 🏁 Speed up CI Windows builds (#12563)
    • πŸ— [MXNET-908] Speed up travis builds to avoid timeouts (#12706)
    • πŸ— Simplify mac MKLDNN build (#12724)
    • πŸ— [MXNET-674] Speed up GPU builds in CI (#12782)
    • πŸ‘Œ Improved git reset for CI builds (#12784)
    • πŸ‘Œ Improve cpp-package example project build files. (#13093)
    • βž• Add --no-cache option to build.py when building containers (#13182)
    • βž• Addressed sphinx build issue (#13246)
    • πŸ‘• Tighten up PyLint directives again (#12322)
    • πŸ‘· [MXNET-859] Add a clang-tidy stage to CI (#12282)
    • πŸ‘· A solution to prevent zombie containers locally and in CI (#12381)
    • 🌲 [MXNET-696][PYTHON][UNDEFINED NAME] import logging in ci/util.py (#12488)
    • [MXNET-703] Static linking for libprotobuf with TensorRT (#12475)
    • βœ‚ Remove regression checks for website links (#12507)
    • πŸ‘· [MXNET-953] - Add ASAN sanitizer, Enable in CI (#12370)
    • πŸ‘ Allow custom path and static linking for custom mallocs in make (#12645)
    • Correct PR branch detection in code coverage (#12615)
    • ⚑️ Update osx.mk - Added apple to USE_BLAS comment (#12819)
    • [MXNET-953] Correct ASAN cflags flag (#12659)
    • πŸ‘ [MXNET-1025] Add Jetpack 3.3 support to Jetson (#12735)
    • πŸ‘· Fail the broken link job when broken links are found (#12905)
    • βœ‚ Removed unused header (#13066)
    • β†ͺ Maven Surefire bug workaround (#13081)
    • βž• Add Turing and Volta support to arch_name (#13168)
    • 🚚 Moves f16c autodetection to its own cmake module (#12331)
    • Rename la_op_inline.h to la_op-inl.h for consistency (#13045)
    • πŸ‘· [MXNET-793] Virtualized ARMv7 with Qemu CI integration (#13203)
    • βœ‚ Remove unused variable rotateM_ (#10803)
    • πŸ”¨ Separate refactoring from #12276 in a prior PR (#12296)
    • 🚚 [MXNET-860] Remove std::moves that have no effect (#12730)
    • [MXNET-860] Use emplace where helpful (#12694)
    • Enable C++ coverage (#12642)
    • ⚑️ [MXNET-860] Update to modern nullptr usage (#12352)
    • [MXNET-860] Reduce redundant copies, check for regressions with clang-tidy (#12355)

    3rd party

    TVM:

    • ⚑️ Updated tvm submodule head (#12764)
    • ⚑️ Updated tvm submodule head (#12448)

    CUDNN:

    • [MXNET-1179] Enforce deterministic algorithms in convolution layers (#12992); see the sketch after this list
    • CudnnFind() usage improvements (#12804)
    • βž• Add option for automatic downcasting dtype for cudnn to allow using Tensorcore for fp32 (#12722); see the sketch after this list

    Horovod:

    • 🚚 [MXNET-1111] Remove CPUPinned in ImageRecordIter (#12666)
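
    A short sketch of the two cuDNN switches flagged above; the environment variable names are taken from #12992 and #12722 respectively, so treat them as illustrative:

        import os

        # Assumed from #12992: restrict cuDNN to deterministic convolution
        # algorithms so repeated runs produce bit-identical results.
        os.environ['MXNET_ENFORCE_DETERMINISM'] = '1'

        # Assumed from #12722: let fp32 convolutions run on Tensor Cores by
        # downcasting the math internally to fp16.
        os.environ['MXNET_CUDA_TENSOR_OP_MATH_ALLOW_CONVERSION'] = '1'

        import mxnet as mx  # the backend reads these when it initializes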

    πŸ—„ Deprecations

    • βž• Add a deprecation message for contrib_CTCLoss (#13042): the operator is deprecated and now emits a warning when called.
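
    In practice this means migrating from the contrib namespace to the promoted operator; a minimal sketch, assuming the replacement is exposed as mx.nd.ctc_loss after the move:

        import mxnet as mx

        data = mx.nd.random.uniform(shape=(20, 2, 11))     # (seq_len, batch, alphabet)
        label = mx.nd.array([[1, 2, 3, 4], [5, 6, 7, 8]])  # (batch, label_len)

        # Deprecated spelling; now emits a deprecation warning:
        # loss = mx.nd.contrib.ctc_loss(data, label)

        # Promoted replacement:
        loss = mx.nd.ctc_loss(data, label)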

    Other

    • ⚑️ Updating news, readme files and bumping master version to 1.3.1 (#12525)
    • βž• Add new name to CONTRIBUTORS.md (#12763)
    • ⚑️ Update contribute.md (#12685)
    • ⚑️ Updated CONTRIBUTORS.md to include lebeg and gigasquid, moved mabreu to committers section (#12766)
    • ⚑️ Update CONTRIBUTORS.md (#12996)
    • ⚑️ Updated CONTRIBUTORS.md to include mxnet-label-bot (#13048)

    πŸ— How to build MXNet

    Please follow the instructions at https://mxnet.incubator.apache.org/install/index.html

    ⚑️ List of submodules used by Apache MXNet (Incubating) and when they were updated last

    ⚑️ Submodule@commit ID :: Last updated by MXNet :: Last update in submodule

    • cub@05eb57f :: Jul 31, 2017 :: Jul 31, 2017
    • dlpack@10892ac :: Oct 30, 2017 :: Aug 23, 2018
    • dmlc-core@0a0e8ad :: Aug 15, 2018 :: Nov 15, 2018
    • βœ… googletest@ec44c6c :: Jul 14, 2016 :: Jul 14, 2016
    • mkldnn@722901c :: Feb 13, 2019 :: Feb 12, 2019
    • mshadow@696803b :: Sep 28, 2018 :: Nov 7, 2018
    • onnx-tensorrt@3d8ee04 :: Aug 22, 2018 :: Nov 10, 2018
    • openmp@37c7212 :: Nov 22, 2017 :: Nov 13, 2018
    • ps-lite@8a76389 :: Apr 25, 2018 :: Oct 9, 2018
    • tvm@0f053c8 :: Oct 10, 2018 :: Oct 8, 2018