WandB Generic Logger 🚀
A truly generic and professional Python package that brings Weights & Biases experiment tracking to any ML/DL library or long-running function. No more manual logging: just specify which variables you want to track in a config file and let the magic happen!
✨ Key Features
- 🔧 Framework Agnostic: Works with PyTorch, TensorFlow, scikit-learn, or any Python function
- 📝 Flexible Variable Logging: Log ANY variables from your function by name - no hardcoded metrics
- 🎯 Multiple Logging Patterns: Automatic capture, generator functions, context managers
- 🧪 Beyond ML: Use for finance, physics, optimization, data processing - any domain
- 🔄 Hyperparameter Sweeps: Built-in WandB sweep integration
- 💾 Model Checkpointing: Automatic artifact logging
- 🚦 Professional: Error handling, type hints, comprehensive validation
🚀 Quick Start
Installation
```bash
pip install wandb-generic
```
Basic Usage
- Create a config file (`config.yaml`):
```yaml
wandb:
  project: "my-awesome-project"

hyperparameters:
  learning_rate: 0.01
  epochs: 10

logger:
  metrics:
    - loss      # ANY variable name from your function
    - epoch     # Traditional names work
    - accuracy  # Descriptive names work
    - x         # Short names work (single letters)
    - y         # Any variables you create
```
- Add the decorator to your function:
```python
import torch

from wandb_generic import WandbGenericLogger

@WandbGenericLogger(config_path="config.yaml")
def train_model(wandb_run=None):
    model = create_model()
    optimizer = torch.optim.Adam(model.parameters(), lr=wandb_run.config.learning_rate)

    for epoch in range(wandb_run.config.epochs):
        # Use ANY variable names you want!
        loss = train_one_epoch(model, optimizer)
        accuracy = validate_model(model)
        x = loss   # Custom names work too!
        y = epoch  # Any variable names

        # These variables are automatically logged! ✨
        # No manual wandb.log() calls needed

    return model

# Run your training
trained_model = train_model()
```
That's it! The decorator automatically captures and logs the variables specified in your config.
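How does that work? The package's internals aren't reproduced here, but the general mechanism is well known: a decorator can install a trace hook while your function runs and read the function's local variables by name. The sketch below is a hypothetical illustration of that idea, not wandb-generic's actual code; `capture_locals` and `metrics_to_track` are names invented for this example, and it only captures values once, at function return, whereas a real per-iteration logger would also have to watch intermediate events.

```python
import sys

def capture_locals(metrics_to_track):
    """Hypothetical sketch: capture chosen local variables via a trace hook."""
    def decorator(func):
        def wrapper(*args, **kwargs):
            captured = {}

            def tracer(frame, event, arg):
                # Only inspect the decorated function's own frame, at return time.
                if frame.f_code is func.__code__ and event == "return":
                    for name in metrics_to_track:
                        if name in frame.f_locals:
                            captured[name] = frame.f_locals[name]
                return tracer

            sys.settrace(tracer)
            try:
                result = func(*args, **kwargs)
            finally:
                sys.settrace(None)

            # A real logger would forward these to wandb_run.log(...) instead.
            print("captured:", captured)
            return result
        return wrapper
    return decorator

@capture_locals(["loss", "epoch"])
def demo():
    for epoch in range(3):
        loss = 1.0 / (epoch + 1)

demo()  # prints the final values of loss and epoch
```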
🎯 Framework Examples
PyTorch
```python
import torch

@WandbGenericLogger(config_path="config.yaml")
def train_pytorch_model(wandb_run=None):
    model = torch.nn.Sequential(...)
    optimizer = torch.optim.Adam(model.parameters())

    for epoch in range(wandb_run.config.epochs):
        train_loss = train_one_epoch(model, optimizer)
        val_accuracy = validate_model(model)
        learning_rate = optimizer.param_groups[0]['lr']
        # Auto-logged based on config
```
TensorFlow/Keras
```python
import tensorflow as tf

@WandbGenericLogger(config_path="config.yaml")
def train_tf_model(wandb_run=None):
    model = tf.keras.Sequential([...])

    for epoch in range(wandb_run.config.epochs):
        history = model.fit(x_train, y_train, validation_data=(x_val, y_val))
        train_loss = history.history['loss'][0]
        val_loss = history.history['val_loss'][0]
        val_accuracy = history.history['val_accuracy'][0]
        # Auto-logged based on config
```
Scikit-learn
```python
from sklearn.ensemble import RandomForestClassifier

@WandbGenericLogger(config_path="config.yaml")
def train_sklearn_model(wandb_run=None):
    model = RandomForestClassifier()

    for n_estimators in [10, 50, 100, 200]:
        model.set_params(n_estimators=n_estimators)
        model.fit(X_train, y_train)

        train_score = model.score(X_train, y_train)
        val_score = model.score(X_val, y_val)
        feature_importance = model.feature_importances_.mean()
        # Auto-logged based on config
```
🧪 Beyond Machine Learning
This package works for ANY domain:
Financial Analysis
```python
@WandbGenericLogger(config_path="config.yaml")
def analyze_trading_strategy(wandb_run=None):
    for trading_day in range(wandb_run.config.epochs):
        portfolio_return = execute_trading_strategy()
        sharpe_ratio = calculate_sharpe_ratio()
        max_drawdown = calculate_drawdown()
        volatility = calculate_volatility()
        # All metrics logged automatically
```
Scientific Computing
```python
@WandbGenericLogger(config_path="config.yaml")
def simulate_physics(wandb_run=None):
    for time_step in range(wandb_run.config.epochs):
        kinetic_energy = calculate_kinetic_energy()
        potential_energy = calculate_potential_energy()
        total_energy = kinetic_energy + potential_energy
        system_temperature = calculate_temperature()
        # Physics metrics logged automatically
```
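Because the logged names come straight from the config, adapting the logger to a new domain is just a matter of listing the variables that function creates. For the physics example above, a plausible `logger` block (metric names taken from `simulate_physics`) would be:

```yaml
logger:
  metrics:
    - time_step
    - kinetic_energy
    - potential_energy
    - total_energy
    - system_temperature
```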
📁 Configuration Reference
Complete YAML Configuration
```yaml
wandb:
  project: "project-name"          # Required
  entity: "your-entity"            # Optional
  tags: ["tag1", "tag2"]           # Optional
  notes: "Experiment description"  # Optional

hyperparameters:
  learning_rate: 0.01              # Any hyperparameters you want
  batch_size: 32
  epochs: 100

sweep:
  method: "bayes"                  # random, grid, bayes
  metric:
    name: "loss"                   # Any metric name from your function
    goal: "minimize"               # minimize or maximize
  parameters:
    learning_rate:
      values: [0.1, 0.01, 0.001]
    batch_size:
      values: [16, 32, 64]

logger:
  metrics:                         # List ANY variable names to log
    - loss
    - accuracy
    - epoch
    - custom_metric
    - processing_time
  log_frequency: 1                 # Log every N iterations

checkpoint:
  name: "my-model"
  type: "model"
  save_frequency: 5                # Save every N epochs
```
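The `sweep` block follows the standard W&B sweep configuration format (method, metric, parameters). This page doesn't show the package's own sweep entry point, so as a hypothetical illustration only, here is how that block could be fed to the plain `wandb` sweep API and pointed at a decorated function; the `load_sweep_config` helper and the `count` value are assumptions made for this sketch:

```python
import wandb
import yaml

def load_sweep_config(path="config.yaml"):
    """Pull the sweep block and project name out of the YAML file shown above."""
    with open(path) as f:
        config = yaml.safe_load(f)
    return config["sweep"], config["wandb"]["project"]

sweep_block, project = load_sweep_config()

# Standard W&B workflow: register the sweep, then run an agent against it.
sweep_id = wandb.sweep(sweep=sweep_block, project=project)
wandb.agent(sweep_id, function=train_model, count=10)
```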
🔄 Supported Logging Patterns
1. Automatic Variable Capture (Recommended)
```python
@WandbGenericLogger(config_path="config.yaml")
def my_function(wandb_run=None):
    for iteration in range(10):
        metric_value = compute_metric()
        loss_score = compute_loss()
        # Variables automatically logged if in config
```
2. Generator Functions
```python
@WandbGenericLogger(config_path="config.yaml")
def training_generator(wandb_run=None):
    for epoch in range(10):
        loss = train_epoch()
        yield {"loss": loss, "epoch": epoch}
```
3. Context Manager
```python
from wandb_generic import WandbGenericLogger, WandbMetricLogger

@WandbGenericLogger(config_path="config.yaml")
def explicit_logging(wandb_run=None):
    with WandbMetricLogger(wandb_run) as logger:
        for i in range(10):
            metric = compute_metric()
            logger.log({"iteration": i, "metric": metric})
```
🔧 Advanced Features
Custom Logging Function
```python
def my_custom_logger(metrics, step):
    print(f"Step {step}: {metrics}")

@WandbGenericLogger(
    config_path="config.yaml",
    log_frequency=5,              # Log every 5 iterations
    custom_logger=my_custom_logger,
)
def my_function(wandb_run=None):
    # Your code here
    pass
```
Error Handling
The package includes comprehensive error handling:
- Validates config file structure
- Handles missing metrics gracefully
- Converts tensor types automatically (PyTorch, NumPy; see the sketch below)
- Provides helpful error messages
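The tensor conversion point deserves a word: W&B expects plain Python numbers, so framework-specific scalars have to be unwrapped before logging. A hypothetical helper in that spirit (the name `to_loggable` is invented for this sketch and is not part of the package's API) could look like:

```python
import numbers

def to_loggable(value):
    """Coerce framework-specific scalar types into plain Python numbers."""
    # PyTorch tensors and NumPy scalars/0-d arrays both expose .item()
    if hasattr(value, "item"):
        try:
            return value.item()
        except (ValueError, RuntimeError):
            pass  # multi-element tensor/array: fall through unchanged
    if isinstance(value, numbers.Number):
        return value
    return value  # non-numeric values (strings, images, ...) are left as-is
```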
Type Safety
Full type hints for better IDE support:
```python
from typing import Dict, Any, List
from wandb_generic import WandbGenericLogger, WandbMetricLogger
```
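For example, the `custom_logger` callable from the Advanced Features section can be written with explicit annotations; the `Dict[str, Any]`/`int` types below are an assumption based on how that example uses its arguments:

```python
from typing import Any, Dict

def my_custom_logger(metrics: Dict[str, Any], step: int) -> None:
    """Same custom logger as above, now with type hints for IDE support."""
    print(f"Step {step}: {metrics}")
```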
🚀 Migration from Manual Logging
Before (Manual logging):
```python
import wandb

def train_model():
    wandb.init(project="my-project")
    for epoch in range(10):
        loss = train_epoch()
        acc = validate()
        wandb.log({
            "loss": loss,
            "accuracy": acc,
            "epoch": epoch,
        })
```
After (Generic logging):
```python
@WandbGenericLogger(config_path="config.yaml")
def train_model(wandb_run=None):
    for epoch in range(wandb_run.config.epochs):
        loss = train_epoch()
        accuracy = validate()
        # That's it! No manual logging needed
```
🤝 Contributing
We welcome contributions! Please see our contributing guidelines and feel free to submit issues or pull requests.
📄 License
This project is licensed under the MIT License - see the LICENSE file for details.
🙋‍♂️ Support
- 📖 Documentation: Complete examples in the `examples/` directory
- 🐛 Issues: Report bugs on our GitHub issues page
- 💬 Discussions: Join our community discussions
Ready to make your experiment tracking effortless and truly generic? Install wandb-generic today! 🚀