MLP Classifier
A Handwritten Multilayer Perceptron Classifier
This Python implementation extends the artificial neural network discussed in *Python Machine Learning* and *Neural Networks and Deep Learning* into a deep neural network, adding softmax layers, a log-likelihood loss function, and L1 and L2 regularization techniques.
Some Basics
An artificial neuron is a mathematical function conceived as a model of biological neurons. Each of the nodes in the diagram is a neuron, which transfers its information to the next layer through a transfer function.
The transfer function is a linear combination of the input neurons plus a fixed value, the bias (threshold in the figure). The coefficients of the input neurons are the weights.
In the code, bias is a NumPy array of size (layers − 1), since the input layer does not have a bias. The weights, also NumPy arrays, form one matrix for every pair of adjacent layers in the network.
The activation function produces the output of the given neuron.
X = vectorize((j-1)th layer)
w = weights[j-1]
bias = threshold[j-1]
transfer_function = dot_product(w, X)
o = activation(transfer_function + bias)
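The steps above can be sketched concretely with NumPy. This is a minimal illustration, not the repository's actual code; the layer sizes, variable names, and the choice of sigmoid activation are assumptions for the example:

```python
import numpy as np

def sigmoid(z):
    """Logistic activation, applied elementwise."""
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical network with layer sizes [3, 4, ...]:
# the weight matrix between layer 0 and layer 1 has shape (4, 3),
# and layer 1 has one bias per neuron.
rng = np.random.default_rng(0)
w = rng.standard_normal((4, 3))      # weights between layer j-1 and layer j
bias = rng.standard_normal((4, 1))   # biases of layer j (input layer has none)
X = rng.standard_normal((3, 1))      # vectorized activations of layer j-1

transfer_function = np.dot(w, X)     # linear combination of inputs
o = sigmoid(transfer_function + bias)  # activation of layer j
print(o.shape)                       # (4, 1)
```

Each neuron's output lands strictly between 0 and 1, which is what makes the sigmoid a convenient transfer function here.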
Details
The implementation includes two types of artificial neurons:
 Sigmoid Neurons
 Softmax Neurons
The loss function associated with the softmax function is the log-likelihood function, while the loss function for the sigmoid function is the cross-entropy function. The calculus for both loss functions is discussed within the code.
Further, the two most common regularization techniques, L1 and L2, have been used to prevent overfitting of the training data.
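As a rough sketch of how a regularization penalty attaches to the data loss, the snippet below adds an L1 or L2 term to a cross-entropy loss. The function names, the `lmbda` hyperparameter, and `n` (training-set size) are illustrative assumptions, not the repository's API:

```python
import numpy as np

def cross_entropy(a, y):
    """Cross-entropy loss for sigmoid outputs a against targets y."""
    return -np.sum(y * np.log(a) + (1 - y) * np.log(1 - a))

def regularized_loss(a, y, weights, lmbda, n, kind="L2"):
    """Data loss plus an L1 or L2 penalty on the weight matrices.

    lmbda and n are hypothetical hyperparameters: the regularization
    strength and the number of training examples.
    """
    loss = cross_entropy(a, y)
    if kind == "L2":
        # L2 penalizes the squared magnitude of every weight.
        loss += (lmbda / (2 * n)) * sum(np.sum(w ** 2) for w in weights)
    else:
        # L1 penalizes the absolute magnitude, pushing weights toward zero.
        loss += (lmbda / n) * sum(np.sum(np.abs(w)) for w in weights)
    return loss
```

With `lmbda = 0` the penalty vanishes and the loss reduces to plain cross-entropy; larger `lmbda` trades training-set fit for smaller weights.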
Why Softmax?
For z^L_j in some vector z^L, softmax(z^L_j) is defined as

    softmax(z^L_j) = exp(z^L_j) / Σ_k exp(z^L_k)
The output from the softmax layer can be thought of as a probability distribution.
In many problems it is convenient to be able to interpret the output activation O(j) as the network's estimate of the probability that the correct output is j.
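A small sketch makes this interpretation concrete: the softmax outputs are positive and sum to 1, so the largest one can be read as the network's most probable class. This is an illustrative example, not code from the repository (the max-subtraction is a standard numerical-stability trick):

```python
import numpy as np

def softmax(z):
    """exp(z_j) / sum_k exp(z_k), computed in a numerically stable way."""
    e = np.exp(z - np.max(z))  # shifting by max(z) avoids overflow
    return e / np.sum(e)

z = np.array([1.0, 2.0, 3.0])   # raw outputs z^L of the final layer
p = softmax(z)

# p is a probability distribution over the classes: every entry is
# positive and the entries sum to 1 (up to floating point).
print(p.argmax())  # 2 — the class the network deems most probable
```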
Refer to these notes for the calculus of the softmax function.
Source of MNIST training dataset.