Description
This repository holds the code for a new kind of RNN model for processing sequential data. The model computes a recurrent weighted average (RWA) over every previous processing step, which lets it form direct connections anywhere along a sequence. This stands in contrast to traditional RNN architectures, which use only the previous processing step. A detailed description of the RWA model is given in a manuscript at https://arxiv.org/pdf/1703.01253.pdf.
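In equation form, the hidden state at step t is an exponentially weighted average over every step seen so far. The sketch below follows the manuscript's notation, where z_i is a feature term and a_i an attention score, both computed from the input and the previous hidden state (see the paper for their exact definitions), and f is the activation function:

```latex
h_t = f\left( \frac{\sum_{i=1}^{t} z_i \, e^{a_i}}{\sum_{i=1}^{t} e^{a_i}} \right)
```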
![alt text](artwork/figure.png "Comparison of RNN architectures")
Because the RWA can be computed as a running average, it does not need to be recomputed from scratch at each processing step: the numerator and denominator are carried over from the previous step. Consequently, the model scales like other RNN models such as the LSTM.
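As a concrete illustration of the running form, here is a minimal NumPy sketch of one recurrence step. It is an illustrative toy, not the repository's TensorFlow code: the feature term z and attention score a are taken as given (in the full model they are computed from the current input and previous hidden state by learned weights), and tanh stands in for the activation f. The max-shift rescaling mirrors the numerical-stability correction noted in the changelog below.

```python
import numpy as np

def rwa_step(n, d, a_max, z, a):
    """Fold one step's feature term z and attention score a into the running average."""
    a_new = np.maximum(a_max, a)            # updated running maximum of the scores
    scale = np.exp(a_max - a_new)           # rescales the terms saved so far
    n = n * scale + z * np.exp(a - a_new)   # running numerator
    d = d * scale + np.exp(a - a_new)       # running denominator
    h = np.tanh(n / d)                      # hidden state from the average
    return h, n, d, a_new

# Toy usage: fold 100 random steps into an 8-unit hidden state.
n = np.zeros(8)
d = np.zeros(8)
a_max = np.full(8, -np.inf)                 # exp(-inf) = 0, so step 1 starts cleanly
for _ in range(100):
    z, a = np.random.randn(8), np.random.randn(8)
    h, n, d, a_max = rwa_step(n, d, a_max, z, a)
```

Because only n, d, and a_max are carried between steps, the cost per step is constant in the sequence length, which is what lets the RWA scale like an LSTM.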
In each folder, the RWA model is evaluated on a different task, and its performance is compared against an LSTM model. On most tasks, the RWA model is found to train considerably faster, by at least a factor of five, and it scales even better as the sequences become longer. See the manuscript listed above for details about each result.
Note: The RWA model has failed to yield competitive results on natural language problems.
Download
- Download: zip
- Git: `git clone https://github.com/jostmey/rwa`
Requirements
The code is written in Python 3. The scripts have been upgraded to run on version 1.0 of TensorFlow.
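A quick sanity check for the environment (using only standard version attributes):

```python
# Verify the requirements above: Python 3 and TensorFlow 1.x.
import sys
import tensorflow as tf

assert sys.version_info[0] == 3, "the scripts require Python 3"
print(tf.__version__)  # the scripts target the 1.0 release series
```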
Alternative Implementations
- RWA model as TensorFlow RNNCell (My implementation)
- RWA model as TensorFlow RNNCell (Not tested)
- RWA model in Keras (Reproduced results in paper)
- RWA model in Keras (Not tested)
- RWA model in PyTorch (Unstable branch - work in progress)
- RWA model in PyTorch (Numerically unstable implementation)
- RWA model in Go
Acknowledgements
Thanks to Alex Nichol for correcting the equations for numerical stability.
Corrections (Changelog)
- March 17th, 2017: Corrected the equations used to rescale the numerator and denominator terms; the rescaling avoids overflow and underflow conditions. Results for the RWA model were recomputed.
- March 26th, 2017: Corrected a bug specific to the code for loading the permuted MNIST task. Results for permuted MNIST task were recomputed.
- April 3rd, 2017: Corrected a bug in the LSTM model. This bug affected all results except those for the copy problem. Results for the LSTM model were recomputed; no significant changes in performance were observed.