Description
Neptune brings organization and collaboration to data science projects. All experiment-related objects are backed up and organized, ready to be analyzed, reproduced, and shared with others. It works with any language, framework, and infrastructure, and integrates with other tools.
neptune-client alternatives and similar packages
Based on the "Deep Learning" category.
- PyTorch - Tensors and dynamic neural networks in Python with strong GPU acceleration
- MXNet - Lightweight, portable, flexible distributed/mobile deep learning with a dynamic, mutation-aware dataflow dependency scheduler; for Python, R, Julia, Scala, Go, JavaScript, and more
- Caffe2 - A lightweight, modular, and scalable deep learning framework
- Theano - A Python library that allowed you to define, optimize, and evaluate mathematical expressions involving multi-dimensional arrays efficiently; continued as aesara: www.github.com/pymc-devs/aesara
- Serpent.AI - Game agent framework, helping you create AIs / bots that learn to play any game you own
- Silero Models - Pre-trained speech-to-text, text-to-speech, and text-enhancement models made embarrassingly simple
- Porcupine - On-device wake word detection powered by deep learning
- Spokestack - A library that lets you easily incorporate a voice interface into any Python application, with a focus on embedded systems
README
Flexible metadata store for MLOps, built for research and production teams that run a lot of experiments.
Neptune is a lightweight solution designed for:
- Experiment tracking: log, display, organize, and compare ML experiments in a single place.
- Model registry: version, store, manage, and query trained models and model-building metadata (see the sketch after this list).
- Monitoring ML runs live: record and monitor model training, evaluation, or production runs live.
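As a taste of the model registry, models are handled through the same client library as runs. Below is a minimal sketch using the `neptune.new` API shown throughout this README, assuming a neptune-client version that ships `init_model`; the workspace/project name, model key, ID, and file path are placeholders:

```python
import neptune.new as neptune

# Register a model in the project's model registry
# ("my-workspace/my-project" and the key "MOD" are placeholders).
model = neptune.init_model(
    project="my-workspace/my-project",
    key="MOD",
)
model["signature"] = "images -> class probabilities"

# Create a version of the registered model and attach the trained weights.
model_version = neptune.init_model_version(
    project="my-workspace/my-project",
    model="MYPROJ-MOD",  # ID of the model registered above (placeholder)
)
model_version["model/binary"].upload("model.pt")  # placeholder file path
model_version.change_stage("staging")  # manage the version's lifecycle

model_version.stop()
model.stop()
```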
Getting started
Step 1: Sign up for a free account
Step 2: Install the Neptune client library
```
pip install neptune-client
```
Step 3: Connect Neptune to your code
```python
import neptune.new as neptune

run = neptune.init_run(
    project="common/quickstarts",
    api_token=neptune.ANONYMOUS_API_TOKEN,
)

run["parameters"] = {
    "batch_size": 64,
    "dropout": 0.2,
    "optim": {"learning_rate": 0.001, "optimizer": "Adam"},
}

for epoch in range(100):
    run["train/accuracy"].log(epoch * 0.6)
    run["train/loss"].log(epoch * 0.4)

run["f1_score"] = 0.66
```
Learn more in the documentation or check our video tutorials to find your specific use case.
Features
Log and display
Neptune lets you log and display many different types of metadata generated during the ML model lifecycle (a minimal logging sketch follows the list below):
- metrics and learning curves
- parameters, tags, and properties
- code, Git info, files, and Jupyter notebooks
- hardware consumption (CPU, GPU, memory)
- images, interactive charts, and HTML objects
- audio and video files
- tables and CSV files
- and more
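For illustration, here is a minimal sketch of logging a few of these metadata types with the same `neptune.new` API as the quickstart above; the namespaces and file names are made-up placeholders:

```python
import matplotlib.pyplot as plt

import neptune.new as neptune
from neptune.new.types import File

run = neptune.init_run(
    project="common/quickstarts",
    api_token=neptune.ANONYMOUS_API_TOKEN,
)

# Parameters and single values
run["parameters/batch_size"] = 64

# Metrics: each .log() call appends one point to the series
for acc in [0.45, 0.61, 0.72]:
    run["train/accuracy"].log(acc)

# Tags
run["sys/tags"].add(["quickstart", "metadata-demo"])

# Files (the path is a placeholder; the file must exist locally)
run["data/sample"].upload("sample.csv")

# Images and charts: matplotlib figures can be uploaded as images
fig = plt.figure()
plt.plot([0.45, 0.61, 0.72])
run["charts/accuracy"].upload(File.as_image(fig))

run.stop()
```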
Compare
You can compare model-building runs you log to Neptune using various comparison views:
- Charts: where you can compare learning curves for metrics or losses
- Images: where you can compare images across runs
- Parallel coordinates: where you can see parameters and metrics displayed on a parallel coordinates plot
- Side-by-side: which shows you the difference between runs in a table format
- Artifacts: where you can compare datasets, models, and other artifacts that you version in Neptune
- Notebooks: which shows you the difference between notebooks (or checkpoints of the same notebook) logged to the project
Filter and organize
Filter, sort, and group model training runs using highly configurable dashboards.
Collaborate
Improve team management and collaboration by grouping all experiments into projects and workspaces and quickly sharing any result or visualization within the team.
Integrate with your favourite ML libraries
Neptune comes with 25+ integrations with Python libraries popular in machine learning, deep learning, and reinforcement learning. Available integrations include:
- PyTorch and PyTorch Lightning
- TensorFlow / Keras and TensorBoard
- Scikit-learn, LightGBM, and XGBoost
- Optuna, Scikit-Optimize, and Keras Tuner
- Bokeh, Altair, Plotly, and Matplotlib
- and more
PyTorch Lightning
Example:
```python
from pytorch_lightning import Trainer
from pytorch_lightning.loggers import NeptuneLogger

# Create the Neptune logger
neptune_logger = NeptuneLogger(
    api_key="ANONYMOUS",  # replace with your own
    project="common/pytorch-lightning-integration",  # "WORKSPACE_NAME/PROJECT_NAME"
    tags=["training", "resnet"],  # optional
)

# Pass it to the Trainer
trainer = Trainer(max_epochs=10, logger=neptune_logger)

# Run training (my_model and my_dataloader are your own
# LightningModule and DataLoader)
trainer.fit(my_model, my_dataloader)
```
TensorFlow / Keras
Example:
```python
import tensorflow as tf

import neptune.new as neptune
from neptune.new.integrations.tensorflow_keras import NeptuneCallback

run = neptune.init_run(
    project="common/tf-keras-integration",
    api_token=neptune.ANONYMOUS_API_TOKEN,
)

mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

model = tf.keras.models.Sequential(
    [
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(256, activation=tf.keras.activations.relu),
        tf.keras.layers.Dropout(0.5),
        tf.keras.layers.Dense(10, activation=tf.keras.activations.softmax),
    ]
)

optimizer = tf.keras.optimizers.SGD(
    learning_rate=0.005,
    momentum=0.4,
)

model.compile(
    optimizer=optimizer, loss="sparse_categorical_crossentropy", metrics=["accuracy"]
)

neptune_cbk = NeptuneCallback(run=run, base_namespace="metrics")
model.fit(x_train, y_train, epochs=5, batch_size=64, callbacks=[neptune_cbk])
```
Scikit-learn
Example:
```python
from sklearn.datasets import load_digits
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

import neptune.new as neptune
import neptune.new.integrations.sklearn as npt_utils

run = neptune.init_run(
    project="common/sklearn-integration",
    api_token=neptune.ANONYMOUS_API_TOKEN,
    name="classification-example",
    tags=["GradientBoostingClassifier", "classification"],
)

parameters = {
    "n_estimators": 120,
    "learning_rate": 0.12,
    "min_samples_split": 3,
    "min_samples_leaf": 2,
}

gbc = GradientBoostingClassifier(**parameters)

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.20, random_state=28743
)

gbc.fit(X_train, y_train)

run["cls_summary"] = npt_utils.create_classifier_summary(
    gbc, X_train, X_test, y_train, y_test
)
```
fastai
Example:
```python
import fastai
from fastai.vision.all import *

import neptune.new as neptune
from neptune.new.integrations.fastai import NeptuneCallback

run = neptune.init_run(
    project="common/fastai-integration",
    api_token=neptune.ANONYMOUS_API_TOKEN,
    tags="basic",
)

path = untar_data(URLs.MNIST_TINY)
dls = ImageDataLoaders.from_csv(path)

# Log all training phases of the learner
learn = cnn_learner(
    dls, resnet18, cbs=[NeptuneCallback(run=run, base_namespace="experiment")]
)
learn.fit_one_cycle(2)
learn.fit_one_cycle(1)

run.stop()
```
Optuna
Example:
```python
import lightgbm as lgb
import optuna
from sklearn.datasets import load_breast_cancer
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

import neptune.new as neptune
import neptune.new.integrations.optuna as optuna_utils


def objective(trial):
    data, target = load_breast_cancer(return_X_y=True)
    train_x, test_x, train_y, test_y = train_test_split(data, target, test_size=0.25)
    dtrain = lgb.Dataset(train_x, label=train_y)

    param = {
        "verbose": -1,
        "objective": "binary",
        "metric": "binary_logloss",
        "num_leaves": trial.suggest_int("num_leaves", 2, 256),
        "feature_fraction": trial.suggest_uniform("feature_fraction", 0.2, 1.0),
        "bagging_fraction": trial.suggest_uniform("bagging_fraction", 0.2, 1.0),
        "min_child_samples": trial.suggest_int("min_child_samples", 3, 100),
    }

    gbm = lgb.train(param, dtrain)
    preds = gbm.predict(test_x)
    return roc_auc_score(test_y, preds)  # maximize ROC AUC


# Create a Neptune run
run = neptune.init_run(
    api_token=neptune.ANONYMOUS_API_TOKEN,
    project="common/optuna-integration",
)

# Create a NeptuneCallback for Optuna
neptune_callback = optuna_utils.NeptuneCallback(run)

# Pass the callback to study.optimize()
study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=20, callbacks=[neptune_callback])

# Stop logging to the run
run.stop()
```
Neptune.ai is trusted by great companies
Read how various customers use Neptune to improve their workflow.
Support
If you get stuck or simply want to talk to us about something, here are your options:
- Check our FAQ page.
- Chat! In the app, click the blue message icon in the bottom-right corner and send a message. A real person will talk to you ASAP (typically very ASAP).
- You can just shoot us an email at [email protected].
People behind Neptune
Created with :heart: by the Neptune.ai team:
Piotr, Jakub, Paulina, Kamil, Magdalena, Małgorzata, Piotr, Aleksandra, Marcin, Hubert, Adam, Jakub, Paweł, Patrycja, Marcin, Jakub, Prince, Rafał, Dominika, Karolina, Parth, Rafał, Stephen, Sabine, Martyna, Artur, Franciszek, Aleksiej, Kshiteej, Tomasz, Tymoteusz, Piotr, Chaz, Michał, Siddhant, Karolina, Valentina, Bartosz, Alexandra, Patryk, Aleksander, and you?