Description
adaptive is an open-source Python library designed to make adaptive parallel function evaluation simple. With adaptive you just supply a function with its bounds, and it will be evaluated at the “best” points in parameter space. With just a few lines of code you can evaluate functions on a computing cluster, live-plot the data as it returns, and fine-tune the adaptive sampling algorithm.
adaptive alternatives and similar packages
Based on the "Machine Learning" category.
- xgboost: Scalable, Portable and Distributed Gradient Boosting (GBDT, GBRT or GBM) Library, for Python, R, Java, Scala, C++ and more. Runs on single machine, Hadoop, Spark, Dask, Flink and DataFlow.
- MindsDB: AI's query engine - Platform for building AI that can learn and answer questions over large scale federated data.
- PaddlePaddle: PArallel Distributed Deep LEarning: Machine Learning Framework from Industrial Practice (the PaddlePaddle core framework: high-performance single-machine and distributed training and cross-platform deployment for deep learning & machine learning).
- Prophet: Tool for producing high quality forecasts for time series data that has multiple seasonality with linear or non-linear growth.
- NuPIC: DISCONTINUED. Numenta Platform for Intelligent Computing is an implementation of Hierarchical Temporal Memory (HTM), a theory of intelligence based strictly on the neuroscience of the neocortex.
- H2O: An Open Source, Distributed, Fast & Scalable Machine Learning Platform: Deep Learning, Gradient Boosting (GBM) & XGBoost, Random Forest, Generalized Linear Modeling (GLM with Elastic Net), K-Means, PCA, Generalized Additive Models (GAM), RuleFit, Support Vector Machine (SVM), Stacked Ensembles, Automatic Machine Learning (AutoML), etc.
- Sacred: A tool to help you configure, organize, log and reproduce experiments, developed at IDSIA.
- Clairvoyant: Software designed to identify and monitor social/historical cues for short term stock movement.
- garak, LLM vulnerability scanner: DISCONTINUED. The LLM vulnerability scanner [Moved to: https://github.com/NVIDIA/garak].
- karateclub: An API Oriented Open-source Python Framework for Unsupervised Learning on Graphs (CIKM 2020).
- awesome-embedding-models: A curated list of awesome embedding models, tutorials, projects and communities.
- Crab: A flexible, fast recommender engine for Python that integrates classic information filtering recommendation algorithms in the world of scientific Python packages (numpy, scipy, matplotlib).
- seqeval: A Python framework for sequence labeling evaluation (named-entity recognition, POS tagging, etc.).
- SciKit-Learn Laboratory: SciKit-Learn Laboratory (SKLL) makes it easy to run machine learning experiments.
- Robocorp Action Server: Create 🐍 Python AI Actions and 🤖 Automations, and deploy & operate them anywhere.
- Feature Forge: A set of tools for creating and testing machine learning features, with a scikit-learn compatible API.
- Data Flow Facilitator for Machine Learning (dffml): DISCONTINUED. The easiest way to use Machine Learning. Mix and match underlying ML libraries and data set sources. Generate new datasets or modify existing ones with ease.
README
<!-- badges-start -->
adaptive
Adaptive: parallel active learning of mathematical functions.
<!-- badges-end -->
<!-- summary-start -->
adaptive is an open-source Python library designed to make adaptive parallel function evaluation simple. With adaptive you just supply a function with its bounds, and it will be evaluated at the “best” points in parameter space, rather than unnecessarily computing all points on a dense grid. With just a few lines of code you can evaluate functions on a computing cluster, live-plot the data as it returns, and fine-tune the adaptive sampling algorithm.

adaptive excels on computations where each function evaluation takes at least ≈50 ms, due to the overhead of picking potentially interesting points.

Run the adaptive example notebook live on Binder to see examples of how to use adaptive, or visit the tutorial on Read the Docs.
<!-- summary-end -->
Implemented algorithms
The core concept in adaptive is that of a learner. A learner samples a function at the best places in its parameter space to get maximum “information” about the function. As it evaluates the function at more and more points in the parameter space, it gets a better idea of where the best places are to sample next. Of course, what qualifies as the “best places” will depend on your application domain! adaptive makes some reasonable default choices, but the details of the adaptive sampling are completely customizable.

The following learners are implemented:
<!-- not-in-documentation-start -->
- Learner1D, for 1D functions f: ℝ → ℝ^N,
- Learner2D, for 2D functions f: ℝ^2 → ℝ^N,
- LearnerND, for ND functions f: ℝ^N → ℝ^M,
- AverageLearner, for random variables where you want to average the result over many evaluations,
- AverageLearner1D, for stochastic 1D functions where you want to estimate the mean value of the function at each point,
- IntegratorLearner, for when you want to integrate a 1D function f: ℝ → ℝ,
- BalancingLearner, for when you want to run several learners at once, selecting the “best” one each time you get more points.

Meta-learners (to be used with other learners):

- BalancingLearner, for when you want to run several learners at once, selecting the “best” one each time you get more points,
- DataSaver, for when your function doesn't just return a scalar or a vector.
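To make the learner concept concrete, here is a minimal sketch of the ask/tell cycle that every learner exposes and that the runners (introduced next) automate for you. The cubic test function and the fixed 50 iterations are arbitrary choices for illustration only:

```python
import adaptive

def f(x):
    return x**3  # arbitrary test function

learner = adaptive.Learner1D(f, bounds=(-1, 1))

# A runner normally drives this loop: repeatedly ask the learner where to
# sample next, evaluate the function there, and tell it the result.
for _ in range(50):
    points, _ = learner.ask(1)   # suggested x-values and their expected loss improvements
    for x in points:
        learner.tell(x, f(x))

print(learner.npoints, learner.loss())
```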
In addition to the learners, adaptive also provides primitives for running the sampling across several cores and even several machines, with built-in support for concurrent.futures, mpi4py, loky, ipyparallel, and distributed.
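As a sketch of how these backends plug in, the snippet below hands a learner to a standard concurrent.futures process pool; the toy function, the 0.1 s sleep, the four workers, and the loss goal are all arbitrary placeholders, not recommendations:

```python
from concurrent.futures import ProcessPoolExecutor
import time

import adaptive

def slow_f(x):
    time.sleep(0.1)  # stand-in for an evaluation that takes >~50 ms
    return x**3

if __name__ == "__main__":  # required for process pools on Windows/macOS
    learner = adaptive.Learner1D(slow_f, bounds=(-1, 1))

    # BlockingRunner drives the learner until the goal is met; the other
    # backends listed above are passed through the same executor argument.
    adaptive.BlockingRunner(
        learner,
        goal=lambda l: l.loss() < 0.01,
        executor=ProcessPoolExecutor(max_workers=4),
    )
    print(f"sampled {learner.npoints} points")
```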
Examples
Adaptively learning a 1D function and live-plotting the process in a Jupyter notebook is as easy as:

```python
from adaptive import notebook_extension, Runner, Learner1D

notebook_extension()

def peak(x, a=0.01):
    return x + a**2 / (a**2 + x**2)

learner = Learner1D(peak, bounds=(-1, 1))
runner = Runner(learner, goal=lambda l: l.loss() < 0.01)
runner.live_info()
runner.live_plot()
```
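Outside a notebook the live widgets are not available. A rough script-only variant of the same example is sketched below, using the sequential simple runner, which samples one point at a time in the current process (handy for small runs and debugging); the goal is the same arbitrary threshold as above:

```python
from adaptive import Learner1D
from adaptive.runner import simple

def peak(x, a=0.01):
    return x + a**2 / (a**2 + x**2)

learner = Learner1D(peak, bounds=(-1, 1))

# Run the sampling loop sequentially until the loss goal is reached.
simple(learner, goal=lambda l: l.loss() < 0.01)

print(f"sampled {learner.npoints} points, final loss {learner.loss():.3g}")
```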
<!-- not-in-documentation-end -->
Installation
adaptive works with Python 3.7 and higher on Linux, Windows, or Mac, and provides optional extensions for working with the Jupyter/IPython Notebook.

The recommended way to install adaptive is using conda:

```bash
conda install -c conda-forge adaptive
```

adaptive is also available on PyPI:

```bash
pip install "adaptive[notebook]"
```

The [notebook] extra above will also install the optional dependencies for running adaptive inside a Jupyter notebook.

To use Adaptive in JupyterLab, you need to install the following labextensions:

```bash
jupyter labextension install @jupyter-widgets/jupyterlab-manager
jupyter labextension install @pyviz/jupyterlab_pyviz
```
Development
Clone the repository and run pip install -e ".[notebook,testing,other]" to add a link to the cloned repo into your Python path:

```bash
git clone git@github.com:python-adaptive/adaptive.git
cd adaptive
pip install -e ".[notebook,testing,other]"
```

We highly recommend using a Conda environment or a virtualenv to manage the versions of your installed packages while working on adaptive.

In order to not pollute the history with the output of the notebooks, please set up the git filter by executing

```bash
python ipynb_filter.py
```

in the repository.

We implement several other checks in order to maintain a consistent code style. We do this using pre-commit; execute

```bash
pre-commit install
```

in the repository.
Citing
If you used Adaptive in a scientific work, please cite it as follows.
```bibtex
@misc{Nijholt2019,
  doi = {10.5281/zenodo.1182437},
  author = {Bas Nijholt and Joseph Weston and Jorn Hoofwijk and Anton Akhmerov},
  title = {\textit{Adaptive}: parallel active learning of mathematical functions},
  publisher = {Zenodo},
  year = {2019}
}
```
Credits
We would like to give credit to the following people:
- Pedro Gonnet for his implementation of CQUAD, “Algorithm 4” as described in “Increasing the Reliability of Adaptive Quadrature Using Explicit Interpolants”, P. Gonnet, ACM Transactions on Mathematical Software, 37 (3), art. no. 26, 2010.
- Pauli Virtanen for his AdaptiveTriSampling script (no longer available online since SciPy Central went down), which served as inspiration for the adaptive.Learner2D.
<!-- credits-end -->
For general discussion, we have a Gitter chat channel. If you find any bugs or have any feature suggestions, please file a GitHub issue or submit a pull request.
<!-- references-start -->
<!-- references-end -->