Changelog History
-
v1.3.1 Changes
November 12, 2018
MXNet Change Log
1.3.1
Bug fixes
[MXNET-953] Fix oob memory read (v1.3.x) / #13118
Simple bugfix addressing an out-of-bounds memory read.
[MXNET-969] Fix buffer overflow in RNNOp (v1.3.x) / #13119
This fixes a buffer overflow detected by ASAN.
CudnnFind() usage improvements (v1.3.x) / #13123
This PR improves MXNet's use of cudnnFind() to address a few issues:
- With the gluon imperative style, cudnnFind() is called during forward(), and so might have its timings perturbed by other GPU activity (including potentially other cudnnFind() calls).
- With some CUDA driver versions, care is needed to ensure that the large I/O and workspace cudaMallocs() performed by cudnnFind() are immediately released and available to MXNet.
- cudnnFind() makes both conv I/O and workspace allocations that must be covered by the GPU global memory headroom defined by MXNET_GPU_MEM_POOL_RESERVE. Per issue #12662, large convolutions can result in out-of-memory errors, even when MXNet's storage allocator has free memory in its pool.
This PR addresses these issues, providing the following benefits:
- Consistent algo choice for a given convolution type in a model, both for instances in the same GPU and in other GPUs in a multi-GPU training setting.
- Consistent algo choice from run to run, based on eliminating sources of interference of the cudnnFind() timing process.
- Consistent model global memory footprint, both because of the consistent algo choice (algos can have markedly different workspace requirements) and changes to MXNet's use of cudaMalloc.
- Increased training performance based on being able to consistently run with models that approach the GPU's full global memory footprint.
- Adds a unit test for and solves issue #12662.
[MXNET-922] Fix memleak in profiler (v1.3.x) / #13120
Fix a memory leak reported locally by ASAN during a normal inference test.
Fix lazy record io when used with dataloader and multi_worker > 0 (v1.3.x) / #13124
Fixes the multi_worker data loader when a record file is used. The MXRecordIO instance needs to acquire a new file handle after fork to be safely manipulated simultaneously. This fix also safely voids the previous temporary fixes #12093 #11370.
Fixed symbol naming in RNNCell, LSTMCell, GRUCell (v1.3.x) / #13158
This fixes #12783 by assigning all nodes in hybrid_forward a unique name. Some operations were in fact performed without attaching the appropriate (time) prefix to the name, which makes serialized graphs non-deserializable.
Fixed __setattr__ method of _MXClassPropertyMetaClass (v1.3.x) / #13157
Fixed the __setattr__ method.
Allow foreach on input with 0 length (v1.3.x) / #13151
Fix #12470. With this change, outs shape can be inferred correctly.
Infer dtype in SymbolBlock import from input symbol (v1.3.x) / #13117
Fix for issue #11849.
Currently, Gluon SymbolBlock cannot import any symbol with a type other than fp32. All the parameters are created as fp32, leading to failure in importing the params when they are of type fp16, fp64, etc.
In this PR, we infer the type of the symbol being imported and create the SymbolBlock parameters with that inferred type.
Added tests.
Documentation fixes
Document the newly added env variable (v1.3.x) / #13156
Document the env variable MXNET_ENFORCE_DETERMINISM added in PR #12992.
Fix broken links (v1.3.x) / #13155
This PR fixes broken links on the website.
Fix broken Python IO API docs (v1.3.x) / #13154
Fixes #12854: Data Iterators documentation is broken.
This PR manually specifies members of the IO module so that the docs will render as expected. This is a workaround in the docs to deal with a bug introduced in the Python code/structure since v1.3.0. See the comments for more info.
This PR also fixes another issue that may or may not be related. Cross references to same-named entities like name, shape, or type confuse Sphinx, and it seems to just link to whatever it last dealt with that has the same name, not the current module. To fix this you have to be very specific: don't use type, use np.type if that's what you want. Otherwise you might end up with mxnet.kvstore.KVStore.type. This is a known Sphinx issue, so it might be something we have to deal with for the time being.
Any future modules should recognize this issue and make efforts to map the params and other elements.
Add/update infer_range docs (v1.3.x) / #13153
This PR adds or updates the docs for the infer_range feature:
- Clarifies the param in the C op docs
- Clarifies the param in the Scala symbol docs
- Adds the param for the Scala ndarray docs
- Adds the param for the Python symbol docs
- Adds the param for the Python ndarray docs
Other Improvements
- [MXNET-1179] Enforce deterministic algorithms in convolution layers (v1.3.x) / #13152
Some of the cuDNN convolution algorithms are non-deterministic (see issue #11341). This PR adds an env variable to enforce determinism in the convolution operators. If set to true, only deterministic cuDNN algorithms will be used. If no deterministic algorithm is available, MXNet will error out.
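Assuming the variable follows the 0/1 convention used by other MXNet flags, enabling it from the shell before launching training would look like:

```shell
# Enforce deterministic cuDNN convolution algorithms
# (MXNet errors out if no deterministic algorithm is available).
export MXNET_ENFORCE_DETERMINISM=1
```
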
Submodule updates
- Update mshadow (v1.3.x) / #13122
Update mshadow for OpenMP acceleration when nvcc is not present.
Known issues
The test test_operator.test_dropout has issues and has been disabled on the branch:
- Disable flaky test test_operator.test_dropout (v1.3.x) / #13200
For more information and examples, see the full release notes.
-
v1.3.0 Changes
September 11, 2018
1.3.0
New Features - Gluon RNN layers are now HybridBlocks
- In this release, Gluon RNN layers such as gluon.rnn.RNN, gluon.rnn.LSTM, and gluon.rnn.GRU become HybridBlocks as part of the gluon.rnn improvements project (#11482).
- This is the result of newly available fused RNN operators added for CPU: LSTM (#10104), vanilla RNN (#11399), GRU (#10311).
- Many dynamic networks that are based on Gluon RNN layers can now be completely hybridized, exported, and used in the inference APIs in other language bindings such as R, Scala, etc.
MKL-DNN improvements
- Introducing more functionality support for MKL-DNN.
New Features - Gluon Model Zoo Pre-trained Models
- Gluon Vision Model Zoo now provides MobileNetV2 pre-trained models (#10879) in addition to AlexNet, DenseNet, Inception V3, MobileNetV1, ResNet V1 and V2, SqueezeNet 1.0 and 1.1, and VGG pre-trained models.
- Updated pre-trained models provide state-of-the-art performance on all resnetv1, resnetv2, vgg16, vgg19, vgg16_bn, and vgg19_bn models (#11327, #11860, #11830).
New Features - Clojure package (experimental)
- MXNet now supports the Clojure programming language. The MXNet Clojure package brings flexible and efficient GPU computing and state-of-the-art deep learning to Clojure. It enables you to write seamless tensor/matrix computation with multiple GPUs in Clojure. It also lets you construct and customize state-of-the-art deep learning models in Clojure, and apply them to tasks such as image classification and data science challenges (#11205).
- Check out examples and API documentation here.
New Features - Synchronized Cross-GPU Batch Norm (experimental)
- Gluon now supports Synchronized Batch Normalization (#11502).
- This enables stable training on large-scale networks with high memory consumption, such as FCN for image segmentation.
New Features - Sparse Tensor Support for Gluon (experimental)
- Sparse gradient support is added to gluon.nn.Embedding. Set sparse_grad=True to enable it when constructing the Embedding block (#10924).
- Gluon Parameter now supports the "row_sparse" storage type, which reduces communication cost and memory consumption for multi-GPU training of large models. gluon.contrib.nn.SparseEmbedding is an example empowered by this (#11001, #11429).
- Gluon HybridBlock now supports hybridization with sparse operators (#11306).
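To illustrate why row-sparse gradients help large embeddings, here is a minimal pure-Python sketch (not MXNet code): an embedding lookup only touches a few rows of the table, so its gradient only needs entries for those rows rather than a full dense table.

```python
# Illustrative sketch: an embedding gradient is non-zero only for the
# looked-up row indices, so storing it row-sparsely (row index -> vector)
# avoids materializing a gradient the size of the whole embedding table.

def row_sparse_embedding_grad(indices, upstream_grads):
    """Accumulate gradients only for the embedding rows actually used."""
    grad = {}  # row index -> gradient vector
    for idx, g in zip(indices, upstream_grads):
        acc = grad.setdefault(idx, [0.0] * len(g))
        for j, v in enumerate(g):
            acc[j] += v
    return grad

# A batch that touches only rows 3 and 7 of a (conceptually huge) table:
grad = row_sparse_embedding_grad([3, 7, 3],
                                 [[1.0, 0.0], [0.5, 0.5], [1.0, 1.0]])
```
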
New Features - Control flow operators (experimental)
- This is the first step towards optimizing dynamic neural networks with variable computation graphs, by adding symbolic and imperative control flow operators. Proposal.
- New operators introduced: foreach (#11531), while_loop (#11566), cond (#11760).
New Features - Scala API Improvements (experimental)
- Improvements to MXNet Scala API usability (#10660, #10787, #10991)
- Symbol.api and NDArray.api bring a new set of functions that have complete definitions for all arguments.
- Please see this Type safe API design document for more details.
New Features - Rounding GPU Memory Pool for dynamic networks with variable-length inputs and outputs (experimental)
- MXNet now supports a new memory pool type for GPU memory (#11041).
- Unlike the default memory pool, which requires an exact size match to reuse released memory chunks, this new memory pool uses exponential-linear rounding so that similarly sized memory chunks can all be reused, which is more suitable for workloads with dynamic-shape inputs and outputs. Set environment variable MXNET_GPU_MEM_POOL_TYPE=Round to enable.
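The idea behind exponential-linear rounding can be shown with a short pure-Python sketch. The cutoff and granularity below are hypothetical (the values MXNet actually uses are internal details); what matters is that nearby request sizes collapse into a small set of reusable bucket sizes.

```python
# Illustrative sketch of exponential-linear rounding (hypothetical cutoffs;
# not the MXNet implementation). Small requests round up linearly to a page
# multiple; large requests round up to the next power of two, so differently
# sized dynamic-shape allocations land in shared, reusable buckets.

LINEAR_CUTOFF = 1 << 20   # 1 MiB (hypothetical)
LINEAR_STEP = 1 << 12     # 4 KiB pages (hypothetical)

def round_alloc_size(nbytes: int) -> int:
    if nbytes <= LINEAR_CUTOFF:
        # Linear region: round up to the next 4 KiB multiple.
        return -(-nbytes // LINEAR_STEP) * LINEAR_STEP
    # Exponential region: round up to the next power of two.
    size = 1
    while size < nbytes:
        size <<= 1
    return size

# Two nearby dynamic-shape requests land in the same bucket,
# so a released chunk from one can service the other:
assert round_alloc_size(3_000_000) == round_alloc_size(4_000_000)
```

An exact-match pool would treat 3,000,000- and 4,000,000-byte chunks as incompatible; the rounding pool reuses one chunk for both, at the cost of some padding.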
New Features - Topology-aware AllReduce (experimental)
- This feature uses trees to perform the Reduce and Broadcast. It uses the idea of minimum spanning trees to build a binary-tree Reduce communication pattern. This topology-aware approach reduces the existing limitations of single-machine communication shown by methods like parameter server and NCCL ring reduction. It is an experimental feature (#11591).
- Paper followed for implementation: Optimal message scheduling for aggregation.
- Set environment variable MXNET_KVSTORE_USETREE=1 to enable.
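The binary-tree Reduce pattern mentioned above can be sketched in a few lines of plain Python (this is a conceptual illustration, not the MXNet kvstore code): at each round, half of the remaining ranks send their partial sum to a neighbor, so rank 0 holds the total after about log2(n) rounds instead of n-1 sequential steps.

```python
# Conceptual sketch of a binary-tree Reduce over per-GPU "rank" values
# (plain Python, not the MXNet implementation). Each round, the rank at
# position i absorbs the value from position i + stride, halving the
# number of active ranks until rank 0 holds the full sum.

def tree_reduce(values):
    """Reduce a list of per-rank values to a single sum held at rank 0."""
    vals = list(values)
    n = len(vals)
    stride = 1
    while stride < n:
        for i in range(0, n - stride, 2 * stride):
            vals[i] += vals[i + stride]   # neighbor sends to its parent
        stride *= 2
    return vals[0]

# 8 ranks finish in 3 rounds instead of 7 sequential additions:
total = tree_reduce([1, 2, 3, 4, 5, 6, 7, 8])
```
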
New Features - Export MXNet models to ONNX format (experimental)
- MXNet models can now be exported to ONNX format (#11213). Currently, MXNet supports ONNX v1.2.1. API documentation.
- Check out this tutorial, which shows how to use the MXNet-to-ONNX exporter APIs to export models to ONNX protobuf so that those models can be imported in other frameworks for inference.
New Features - TensorRT Runtime Integration (experimental)
- TensorRT provides significant acceleration of model inference on NVIDIA GPUs compared to running the full graph in MXNet using unfused GPU operators. In addition to faster fp32 inference, TensorRT optimizes fp16 inference and is capable of int8 inference (provided the quantization steps are performed). Besides increasing throughput, TensorRT significantly reduces inference latency, especially for small batches.
- This feature introduces runtime integration of TensorRT into MXNet, in order to accelerate inference (#11325).
- Currently, it is in the contrib package.
New Examples - Scala
- Refurbished Scala examples with improved API, documentation and CI test coverage (#11753, #11621).
- Now all Scala examples have:
- No blocking bugs in the middle
- A good README to start with
- Type-safe API usage inside
- CI monitoring on each PR run
Maintenance - Flaky Tests improvement effort
- Fixed 130 flaky tests on CI. Tracked progress of the project here.
- Added flakiness checker (#11572).
Maintenance - MXNet Model Backwards Compatibility Checker
- This tool (#11626) helps in ensuring consistency and sanity while performing inference on the latest version of MXNet using models trained on older versions of MXNet.
- This tool will help in detecting issues earlier in the development cycle which break backwards compatibility on MXNet, and would contribute towards ensuring a healthy and stable release of MXNet.
Maintenance - Integrated testing for "the Straight Dope"
- "Deep Learning - The Straight Dope" is a deep learning book based on Apache MXNet Gluon, contributed to by many Gluon users.
- The testing of this book is now integrated into the nightly tests.
Bug-fixes
- Fix gperftools/jemalloc and lapack warning bug (#11110)
- Fix mkldnn performance regression + improve test logging (#11262)
- Fix row_sparse_param.save() (#11266)
- Fix trainer init_kvstore (#11266)
- Fix axis Bug in MKLDNN Softmax (#11335)
- Fix 'AttributeError: '_thread._local' object has no attribute 'value'' on distributed processing applications (#11332)
- Fix recordfile dataset with multi worker (#11370)
- Manually check node existence in CachedOp (#11545)
- Javadoc fix (#11239)
- Fix bugs in MKLDNN operators to handle the kAddTo request (#11129)
- Fix InferStorage for sparse fallback in FullyConnected (#11498)
- Fix batchnorm problem with sparse matrices when fix_gamma=True (#11656)
- Fix rnn layer save (#11776)
- Fix BucketSentenceIter bug related to #11430 (#11580)
- Fix for _backward_softsign activation (#11827)
- Fix a bug in CachedOp (#11675)
- Fix quantization divide by zero errors (#11833)
- Refactor R optimizers to fix memory leak (#11374)
- Avoid use of troublesome cudnnFind() results when grad_req='add' (#11338)
- Fix shared memory with gluon dataloader, add option pin_memory (#11908)
- Fix quantized graph pass bug (#11937)
- Fix MXPredReshape in the c_predict_api (#11493)
- Fix the topk regression issue (#12197)
- Fix image-classification example and add missing optimizers w/ momentum support (#11826)
Performance Improvements
- Added static allocation and static shape for HybridBlock gluon (#11320)
- Fix RecordIO augmentation speed (#11474)
- Improve sparse pull performance for gluon trainer (#11429)
- CTC operator performance improvement from HawkAaron/MXNet-CTC (#11834)
- Improve performance of broadcast ops backward pass (#11252)
- Improved numerical stability as a result of using stable L2 norm (#11573)
- Accelerate the performance of topk for GPU and CPU side (#12085, #10997; this changes the behavior of topk when nan values occur in the input)
- Support for dot(dns, csr) = dns and dot(dns, csr.T) = dns on CPU (#11113)
- Performance improvement for Batch Dot on CPU from mshadow (mshadow PR#342)
API Changes
- Allow Scala users to specify data/label names for NDArrayIter (#11256)
- Allow user to define unknown token symbol to rnn encode_sentences() (#10461)
- Added count_include_pad argument for Avg Pooling (#11021)
- Add standard ResNet data augmentation for ImageRecordIter (#11027)
- Add seed_aug parameter for ImageRecordIter to fix random seed for default augmentation (#11247)
- Add support for accepting MXNet NDArrays in ColorNormalizeAug (#11606)
- Enhancement of take operator (#11326)
- Add temperature parameter in Softmax operator (#11466)
- Add support for 1D inputs in leaky relu (#11850)
- Add verify_ssl option to gluon.utils.download (#11546)
Other features
- Added ccache reporting to CI (#11322)
- Restructure dockcross dockerfiles to fix caching (#11302)
- Added tests for MKLDNN backward operators (#11232)
- Add elemwise_add/sub between rsp and rsp on GPU (#11179)
- Add clip_global_norm(row_sparse_grad) (#11266)
- Add subgraph storage type inference to CachedOp (#11306)
- Enable support for dense weight and sparse grad Adagrad updates (#11355)
- Added Histogram Operator (#10931)
- Added Matthew's Correlation Coefficient to metrics (#10524)
- Added support for add_n(dense, csr, dense) = dense on CPU & GPU (#11330)
- Added support for add_n(any combination longer than 4 with at least one dense storage) = dense on CPU & GPU (#11330)
- L1 Normalization (#11229)
- Add support for int64 data type in CSVIter (#11446)
- Add test for new int64 type in CSVIter (#11499)
- Add sample ratio for ROI Align (#11145)
- Shape and Size Operator (#10889)
- Add HybridSequentialRNNCell, which can be nested in HybridBlock (#11003)
- Support for a bunch of unary functions for csr matrices (#11559)
- Added NDArrayCollector to dispose intermediate allocated NDArrays automatically (#11751)
- Added the diag() operator (#11643)
- Added broadcast_like operator (#11820)
- Allow Partial shape infer for Slice (#11406)
- Added support to profile kvstore server during distributed training (#11215)
- Add function for GPU Memory Query to C API (#12083)
- Generalized reshape_like operator to be more flexible (#11928)
- Add support for selu activation function (#12059)
- Add support for accepting NDArray as input to Module predict API (#12166)
- Add DataDesc type for the Scala Package (#11844)
Usability Improvements
- Added NDArray auto-collector for Scala (#11751, #12232)
- Added docs for mx.initializer.Constant (#10637)
- Added build-from-source instructions on Windows (#11276)
- Added a tutorial explaining how to use the profiler (#11274)
- Added two tutorials on Learning Rate Schedules (#11296)
- Added a tutorial for mixed precision training with float16 (#10391)
- Create CPP test for concat MKLDNN operator (#11371)
- Update large word language model example (#11405)
- MNIST Examples for Scala new API (#11250)
- Updated installation info to have latest packages and more clarity (#11503)
- GAN MNIST Examples for Scala new API (#11547)
- Added Learning Rate Finder tutorial (#11304)
- Fix Installation instructions for R bindings on Linux systems (#11590)
- Integration Test for Scala (#11596)
- Documentation enhancement for optimizers (#11657)
- Update rcnn example (#11373)
- Gluon ModelZoo, Gluon examples for Perl APIs (#11642)
- Fix R installation in CI (#11761, #11755, #11768, #11805, #11954, #11976)
- CNN Examples for Scala new API (#11292)
- Custom Operator Example for Scala (#11401)
- Added detailed doc about global pool layers in Gluon (#11832)
- Updated MultiTask example to use new infer api (#11605)
- Added logistic regression tutorial (#11651)
- Added Support for integer type in ImageIter (#11864)
- Added depth_to_space and space_to_depth operators (#11587)
- Increased operator support for ONNX to MXNet importer (#11856)
- Add linux and macos MKLDNN Building Instruction (#11049)
- Add download utility for Scala APIs (#11866)
- Improving documentation and error messages for Async distributed training with Gluon (#11910)
- Added NeuralStyle Example for Scala (#11621)
Known Issues
- Armv7 docker builds are broken due to a problem with dockcross
-
v1.2.1 Changes
July 17, 2018
1.2.1
Deprecations
The usage of save_params described in the gluon book did not reflect the intended usage of the API and led MXNet users to depend on the unintended usage of save_params and load_params. In the 1.2.0 release an internal bug fix was made which broke the unintended use case and users' scripts.
To correct the API change, the behavior of the save_params API has been reverted to the behavior of MXNet v1.1.0 in v1.2.1. The intended and correct use are now supported with the new APIs save_parameters and load_parameters.
With v1.2.1, usage of the save_params and load_params APIs will resume their former functionality, and a deprecation warning will appear.
All scripts to save and load parameters for a Gluon model should use the new APIs: save_parameters and load_parameters. If your model is hybridizable and you want to export a serialized structure of the model as well as parameters, you should migrate your code to use the export API and the newly added imports API instead of the save_params and load_params APIs. Please refer to the Saving and Loading Gluon Models Tutorial for more information.
User Code Changes
- If you have been using the save_params and load_params APIs, below are the recommendations on how to update your code:
- If you save parameters to load them back into a SymbolBlock, it is strongly recommended to use the export and imports APIs instead. For more information, please see the Saving and Loading Gluon Models Tutorial.
- If you created gluon layers without a name_scope using MXNet 1.2.0, you must replace save_params with save_parameters. Otherwise, your models saved in 1.2.1 will fail to load back, although this worked in 1.2.0.
- For the other use cases, such as models created within a name_scope (inside a with name_scope() block) or models being loaded back into gluon and not SymbolBlock, we strongly recommend replacing save_params and load_params with save_parameters and load_parameters. Having said that, your code won't break in 1.2.1 but will give you a deprecation warning message for save_params and load_params.
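The "old name keeps working but warns" behavior described here follows a standard Python pattern. A minimal sketch of that pattern in plain Python (hypothetical class; not the MXNet source):

```python
import warnings

# Minimal sketch of the deprecation pattern described above: the old
# method remains functional but emits a DeprecationWarning pointing
# callers at the new API name.

class Block:
    def save_parameters(self, filename):
        """New, intended API."""
        return f"saved to {filename}"

    def save_params(self, filename):
        """Deprecated alias kept for backward compatibility."""
        warnings.warn(
            "save_params is deprecated, use save_parameters instead",
            DeprecationWarning, stacklevel=2)
        return self.save_parameters(filename)

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    result = Block().save_params("net.params")  # still works, but warns
```
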
Incompatible API Changes
- We broke the promise of semantic versioning by making a backwards incompatible change between the 1.2.0 release and the 1.2.1 patch release. The breaking use case is documented in point 2 above. The reason for doing this is that the 1.2.0 release broke a documented use case from the gluon book, and this release reverts the breakage. As a community, we apologize for the inconvenience caused and will continue to strive to uphold semantic versioning.
Bug Fixes
- Fixed MKLDNN bugs (#10613, #10021, #10616, #10764, #10591, #10731, #10918, #10706, #10651, #10979).
- Fixed Scala Inference memory leak (#11216).
- Fixed cross-compilation for armv7 (#11054).
Performance Improvements
- Reduced memory consumption from inplace operation for ReLU activation (#10847).
- Improved slice operator performance by 20x (#11124).
- Improved performance of depthwise convolution by using cudnnv7 if available (#11076).
- Improved performance and memory usage of Conv1D by adding back cuDNN support for Conv1D (#11270). This adds a known issue: the cuDNN convolution operator may throw CUDNN_STATUS_EXECUTION_FAILED when req == "add" and cudnn_tune != off with large inputs (e.g. 64k channels). If you encounter this issue, please consider setting MXNET_CUDNN_AUTOTUNE_DEFAULT to 0.
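The workaround noted above as a shell fragment (variable name from the release note; the value 0 disables cuDNN autotuning):

```shell
# Workaround for the CUDNN_STATUS_EXECUTION_FAILED issue noted above:
# disable cuDNN autotuning before launching training.
export MXNET_CUDNN_AUTOTUNE_DEFAULT=0
```
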
-
v1.2.0 Changes
May 21, 2018
1.2.0
New Features - Added Scala Inference APIs
- Implemented new Scala Inference APIs which offer easy-to-use, Scala-idiomatic, and thread-safe high-level APIs for performing predictions with deep learning models trained with MXNet (#9678). Implemented a new ImageClassifier class which provides APIs for classification tasks on a Java BufferedImage using a pre-trained model you provide (#10054). Implemented a new ObjectDetector class which provides APIs for object and boundary detections on a Java BufferedImage using a pre-trained model you provide (#10229).
New Features - Added a Module to Import ONNX models into MXNet
- Implemented a new ONNX module in MXNet which offers an easy-to-use API to import ONNX models into MXNet's symbolic interface (#9963). Check out the example on how you could use this API to import ONNX models and perform inference on MXNet. Currently, the ONNX-MXNet Import module is still experimental. Please use it with caution.
New Features - Added Support for Model Quantization with Calibration
- Implemented model quantization by adopting the TensorFlow approach with calibration, borrowing the idea from Nvidia's TensorRT. The focus of this work is on keeping the inference accuracy loss of quantized models (ConvNets for now) under control when compared to their corresponding FP32 models. Please see the example on how to quantize a FP32 model with or without calibration (#9552). Currently, the quantization support is still experimental. Please use it with caution.
New Features - MKL-DNN Integration
- MXNet now integrates with Intel MKL-DNN to accelerate neural network operators: Convolution, Deconvolution, FullyConnected, Pooling, Batch Normalization, Activation, LRN, Softmax, as well as some common operators: sum and concat (#9677). This integration allows NDArray to contain data with MKL-DNN layouts and reduces data layout conversion to get the maximal performance from MKL-DNN. Currently, the MKL-DNN integration is still experimental. Please use it with caution.
New Features - Added Exception Handling Support for Operators
- Implemented exception handling support for operators in MXNet. MXNet now transports backend C++ exceptions to the different language front-ends and prevents crashes when exceptions are thrown during operator execution (#9681).
New Features - Enhanced FP16 support
- Added support for distributed mixed precision training with FP16. It supports storing a master copy of weights in float32 with the multi_precision mode of optimizers (#10183). Improved speed of float16 operations on x86 CPU by 8 times through the F16C instruction set. Added support for more operators to work with FP16 inputs (#10125, #10078, #10169). Added a tutorial on using mixed precision with FP16 (#10391).
New Features - Added Profiling Enhancements
- Enhanced the built-in profiler to support native Intel® VTune™ Amplifier objects such as Task, Frame, Event, Counter and Marker from both C++ and Python -- which is also visible in the Chrome tracing view (#8972). Added runtime tracking of symbolic and imperative operators as well as memory and API calls. Added tracking and dumping of aggregate profiling data. The profiler also no longer affects runtime performance when not in use.
Breaking Changes
- Changed namespace for MXNet Scala from ml.dmlc.mxnet to org.apache.mxnet (#10284).
- Changed API for the Pooling operator from mxnet.symbol.Pooling(data=None, global_pool=_Null, cudnn_off=_Null, kernel=_Null, pool_type=_Null, pooling_convention=_Null, stride=_Null, pad=_Null, name=None, attr=None, out=None, **kwargs) to mxnet.symbol.Pooling(data=None, kernel=_Null, pool_type=_Null, global_pool=_Null, cudnn_off=_Null, pooling_convention=_Null, stride=_Null, pad=_Null, name=None, attr=None, out=None, **kwargs). This is a breaking change when kwargs are not provided, since the new API expects the arguments starting from global_pool at the fourth position instead of the second position (#10000).
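Why reordering keyword parameters breaks positional callers can be shown with a tiny sketch (hypothetical stand-in functions, not the MXNet source): the same positional call binds its arguments to different parameters under the old and new signatures.

```python
# Illustrative sketch (hypothetical signatures, not the MXNet source):
# a positional caller written against the old parameter order silently
# gets a different binding under the reordered signature.

def pooling_old(data=None, global_pool=None, cudnn_off=None, kernel=None):
    return {"global_pool": global_pool, "kernel": kernel}

def pooling_new(data=None, kernel=None, pool_type=None, global_pool=None):
    return {"global_pool": global_pool, "kernel": kernel}

# A caller that passed global_pool positionally against the old signature...
old = pooling_old("x", True)   # binds True to global_pool
new = pooling_new("x", True)   # ...now silently binds True to kernel!

# Keyword arguments are immune to the reordering:
safe = pooling_new("x", global_pool=True)
```

This is why the note above flags the change as breaking only "when kwargs are not provided": keyword callers are unaffected.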
Bug Fixes
- Fixed tests - flakiness/bugs (#9598, #9951, #10259, #10197, #10136, #10422). Please see: Tests Improvement Project
- Fixed cudnn_conv and cudnn_deconv deadlock (#10392).
- Fixed a race condition in io.LibSVMIter when batch size is large (#10124).
- Fixed a race condition in converting data layouts in MKL-DNN (#9862).
- Fixed MKL-DNN sigmoid/softrelu issue (#10336).
- Fixed incorrect indices generated by device row sparse pull (#9887).
- Fixed cast storage support for same stypes (#10400).
- Fixed uncaught exception for bucketing module when symbol name not specified (#10094).
- Fixed regression output layers (#9848).
- Fixed crash with mx.nd.ones (#10014).
- Fixed sample_multinomial crash when get_prob=True (#10413).
- Fixed buggy type inference in correlation (#10135).
- Fixed race condition for CPUSharedStorageManager->Free and launched workers at iter init stage to avoid frequent relaunch (#10096).
- Fixed DLTensor conversion for int64 (#10083).
- Fixed issues where hex symbols of the profiler were not being recognized by the chrome tracing tool (#9932).
- Fixed crash when profiler was not enabled (#10306).
- Fixed ndarray assignment issues (#10022, #9981, #10468).
- Fixed print_summary bug in visualization module (#9492).
- Fixed shape mismatch in accuracy metrics (#10446).
- Fixed random samplers from uniform and random distributions in R bindings (#10450).
- Fixed a bug that was causing training metrics to be printed as NaN sometimes (#10437).
- Fixed a crash with non-positive reps for tile ops (#10417).
Performance Improvements
- On average, after the MKL-DNN change, the inference speed of MXNet + MKLDNN outperforms MXNet + OpenBLAS by a factor of 32, outperforms MXNet + MKLML by 82%, and outperforms MXNet + MKLML with the experimental flag by 8%. The experiments were run on the image classification example, for different networks and different batch sizes.
- Improved sparse SGD, sparse AdaGrad and sparse Adam optimizer speed on GPU by 30x (#9561, #10312, #10293, #10062).
- Improved sparse.retain performance on CPU by 2.5x (#9722).
- Replaced std::swap_ranges with memcpy (#10351).
- Implemented DepthwiseConv2dBackwardFilterKernel, which is over 5x faster (#10098).
- Implemented CPU LSTM inference (#9977).
- Added Layer Normalization in C++ (#10029).
- Optimized performance for RTC (#10018).
- Improved CPU performance of the ROIPooling operator by using OpenMP (#9958).
- Accelerated the calculation of F1 (#9833).
API Changes
- Block.save_params now matches parameters according to model structure instead of names, to avoid prefix mismatching problems during saving and loading (#10511).
- Added an optional argument ctx to mx.random.seed. Seeding with the ctx option produces a random number sequence independent of device id (#10367).
- Added copy flag for astype (#10347).
- Added context parameter to Scala Infer API - ImageClassifier and ObjectDetector (#10252).
- Added axes support for dropout in gluon (#10032).
- Added default ctx of cpu for gluon.Block.load_params (#10160).
- Added support for variable sequence length in gluon.RecurrentCell (#9934).
- Added convenience fluent method for squeeze op (#9734).
- Made array.reshape compatible with numpy (#9790).
- Added axis support and gradient for L2norm (#9740).
Sparse Support
- Added support for multi-GPU training with row_sparse weights using device KVStore (#9987).
- Added Module.prepare API for multi-GPU and multi-machine training with row_sparse weight (#10285).
- Added deterministic option for contrib.SparseEmbedding operator (#9846).
- Added sparse.broadcast_mul and sparse.broadcast_div with CSRNDArray and 1-D dense NDArray on CPU (#10208).
- Added sparse support for Custom Operator (#10374).
- Added sparse feature for Perl (#9988).
- Added force_deterministic option for sparse embedding (#9882).
- Added sparse.where with condition being csr ndarray (#9481).
Deprecations
- Deprecated profiler_set_state (#10156).
Other Features
- Added constant parameter for gluon (#9893).
- Added contrib.rand.zipfian (#9747).
- Added PreLU, ELU, SELU, Swish activation layers for Gluon (#9662).
- Added Squeeze op (#9700).
- Added multi-proposal operator (CPU version) and fixed bug in multi-proposal operator (GPU version) (#9939).
- Added Large-Batch SGD with a warmup and a LARS strategy (#8918).
- Added Language Modelling datasets and Sampler (#9514).
- Added instance norm and reflection padding to Gluon (#7938).
- Added micro-averaging strategy for F1 metric (#9777).
- Added Softsign activation function (#9851).
- Added eye operator, for default storage type (#9770).
- Added TVM bridge support to JIT NDArray Function by TVM (#9880).
- Added float16 support for correlation operator and L2Normalization operator (#10125, #10078).
- Added random shuffle implementation for NDArray (#10048).
- Added load-from-buffer functions for CPP package (#10261).
Usability Improvements
- Added embedding learning example for Gluon (#9165).
- Added tutorial on how to use data augmenters (#10055).
- Added tutorial for Data Augmentation with Masks (#10178).
- Added LSTNet example (#9512).
- Added MobileNetV2 example (#9614).
- Added tutorial for Gluon Datasets and DataLoaders (#10251).
- Added Language model with Google's billion words dataset (#10025).
- Added example for custom operator using RTC (#9870).
- Improved image classification examples (#9799, #9633).
- Added reshape predictor function to c_predict_api (#9984).
- Added guide for implementing sparse ops (#10081).
- Added naming tutorial for gluon blocks and parameters (#10511).
Known Issues
- MXNet crashes when built with USE_GPERFTOOLS = 1 (#8968).
- DevGuide.md in the 3rdparty submodule googletest is licensed under CC-BY-2.5.
- Incompatibility in the behavior of the MXNet Convolution operator for certain unsupported use cases: raises an exception when MKLDNN is enabled, fails silently when it is not.
- MXNet convolution generates wrong results for 1-element strides (#10689).
- Tutorial on fine-tuning an ONNX model fails when using cpu context.
- CMake build ignores the USE_MKLDNN flag and doesn't build with MKLDNN support even with -DUSE_MKLDNN=1. To work around the issue please see: #10801.
- Linking the dmlc-core library fails with CMake build when building with USE_OPENMP=OFF. To work around the issue, please use the updated CMakeLists in the dmlc-core unit tests directory: dmlc/dmlc-core#396. You can also work around the issue by using make instead of cmake when building with USE_OPENMP=OFF.
For more information and examples, see the full release notes.
-
v1.1.0 Changes
February 19, 2018
MXNet Change Log
1.1.0
Usability Improvements
- Improved the usability of examples and tutorials
Bug-fixes
- Fixed I/O multiprocessing for too many open file handles (#8904), race condition (#8995), deadlock (#9126).
- Fixed image IO integration with OpenCV 3.3 (#8757).
- Fixed Gluon block printing (#8956).
- Fixed float16 argmax when there is negative input (#9149).
- Fixed random number generator to ensure sufficient randomness (#9119, #9256, #9300).
- Fixed custom op multi-GPU scaling (#9283).
- Fixed gradient of gather_nd when duplicate entries exist in index (#9200).
- Fixed overridden contexts in the Module `group2ctx` option when using multiple contexts (#8867).
- Fixed `swap_axes` operator with "add_to" gradient req (#9541).
New Features
- Added experimental API in `contrib.text` for building vocabulary and loading pre-trained word embeddings, with built-in support for 307 GloVe and FastText pre-trained embeddings (#8763).
- Added experimental structural blocks in `gluon.contrib`: `Concurrent`, `HybridConcurrent`, `Identity` (#9427).
- Added `sparse.dot(dense, csr)` operator (#8938).
- Added `Khatri-Rao` operator (#7781).
- Added `FTML` and `Signum` optimizers (#9220, #9262).
- Added `ENABLE_CUDA_RTC` build option (#9428).
API Changes
- Added zero gradients to rounding operators, including `rint`, `ceil`, `floor`, `trunc`, and `fix` (#9040).
- Added `use_global_stats` in `nn.BatchNorm` (#9420).
- Added `axis` argument to the `SequenceLast`, `SequenceMask`, and `SequenceReverse` operators (#9306).
- Added `lazy_update` option for the standard `SGD` and `Adam` optimizers with `row_sparse` gradients (#9468, #9189).
- Added `select` option in `Block.collect_params` to support regex (#9348).
- Added support for (one-to-one and sequence-to-one) inference on explicit unrolled RNN models in R (#9022).
Deprecations
- The Scala API namespace is still called `ml.dmlc`. The namespace is likely to be changed in a future release to `org.apache`, which might break existing applications and scripts (#9579, #9324).
Performance Improvements
- Improved GPU inference speed by 20% when batch size is 1 (#9055).
- Improved `SequenceLast` operator speed (#9306).
- Added multithreading for the class of broadcast_reduce operators on CPU (#9444).
- Improved batching for GEMM/TRSM operators with large matrices on GPU (#8846).
Known Issues
- "Predict with pre-trained models" tutorial is broken
- "example/numpy-ops/ndarray_softmax.py" is broken
For more information and examples, see the full release notes.
-
v1.0.0 Changes
December 04, 2017
MXNet Change Log
1.0.0
Performance
- Enhanced the performance of the `sparse.dot` operator.
- MXNet now automatically sets OpenMP to use all available CPU cores to maximize CPU utilization when `NUM_OMP_THREADS` is not set.
- Unary and binary operators now avoid using OpenMP on small arrays if using OpenMP actually hurts performance due to multithreading overhead.
- Significantly improved performance of `broadcast_add`, `broadcast_mul`, etc. on CPU.
- Added bulk execution to imperative mode. You can control segment size with `mxnet.engine.bulk`. As a result, the speed of Gluon in hybrid mode is improved, especially on small networks and multiple GPUs.
- Improved speed for `ctypes` invocation from the Python frontend.
New Features - Gradient Compression [Experimental]
- Speed up multi-GPU and distributed training by compressing communication of gradients. This is especially effective when training networks with large fully-connected layers. In Gluon this can be activated with `compression_params` in `Trainer`.
New Features - Support of NVIDIA Collective Communication Library (NCCL) [Experimental]
- Use `kvstore='nccl'` for (in some cases) faster training on multiple GPUs.
- Significantly faster than `kvstore='device'` when batch size is small.
- It is recommended to set the environment variable `NCCL_LAUNCH_MODE` to `PARALLEL` when using NCCL version 2.1 or newer.
New Features - Advanced Indexing [General Availability]
- NDArray now supports advanced indexing (both slice and assign) as specified by the numpy standard (https://docs.scipy.org/doc/numpy-1.13.0/reference/arrays.indexing.html#combining-advanced-and-basic-indexing), with the following restrictions:
  - If key is a list type, only a list of integers is supported, e.g. `key=[1, 2]` is supported, while `key=[[1, 2]]` is not.
  - Ellipsis (...) and np.newaxis are not supported.
  - Boolean array indexing is not supported.
New Features - Gluon [General Availability]
- Performance optimizations discussed above.
- Added support for loading data in parallel with multiple processes to `gluon.data.DataLoader`. The number of workers can be set with `num_workers`. Does not support Windows yet.
- Added `Block.cast` to support networks with different data types, e.g. `float16`.
- Added `Lambda` block for wrapping a user-defined function as a block.
- Generalized `gluon.data.ArrayDataset` to support an arbitrary number of arrays.
New Features - ARM / Raspberry Pi support [Experimental]
- MXNet now compiles and runs on ARMv6, ARMv7, ARMv64, including Raspberry Pi devices. See https://github.com/apache/incubator-mxnet/tree/master/docker_multiarch for more information.
New Features - NVIDIA Jetson support [Experimental]
- MXNet now compiles and runs on NVIDIA Jetson TX2 boards with GPU acceleration.
- You can install the Python MXNet package on a Jetson board by running `$ pip install mxnet-jetson-tx2`.
New Features - Sparse Tensor Support [General Availability]
- Added more sparse operators: `contrib.SparseEmbedding`, `sparse.sum` and `sparse.mean`.
- Added `asscipy()` for easier conversion to scipy.
- Added `check_format()` for sparse ndarrays to check if the array format is valid.
Bug-fixes
- Fixed `a[-1]` indexing not working on `NDArray`.
- Fixed `expand_dims` for axis < 0.
- Fixed a bug that causes topk to produce an incorrect result on large arrays.
- Improved numerical precision of unary and binary operators for `float64` data.
- Fixed derivatives of log2 and log10; they used to be the same as log.
- Fixed a bug that causes MXNet to hang after fork. Note that you still cannot use the GPU in child processes after fork due to limitations of CUDA.
- Fixed a bug that causes `CustomOp` to fail when using auxiliary states.
- Fixed a security bug that was causing MXNet to listen on all available interfaces when running training in distributed mode.
Doc Updates
- Added a security best practices document under the FAQ section.
- Fixed license headers, including restoring copyright attributions.
- Documentation updates.
- Links for viewing source.
For more information and examples, see the full release notes.