ScopeRX

Neural Network Explainability and Interpretability Library

Python 3.9+ | License: MIT | CI | Codecov

ScopeRX is a comprehensive, production-grade Python library for explaining and interpreting neural network predictions. It provides state-of-the-art attribution methods, evaluation metrics, and visualization tools, all unified under a simple, intuitive API.

Version 2.0.0 Updates

  • Type Safety: Fully typed codebase with mypy compliance.
  • Production Grade: Enhanced stability, error handling, and performance optimizations.
  • CI/CD: Automated testing and linting pipelines.
  • Improved Methods: Refactored KernelSHAP, RISE, and Attention methods for better accuracy and speed.

Features

  • 15+ Explanation Methods: From classic GradCAM to cutting-edge RISE and attention methods
  • Unified API: One interface to rule them all; switch between methods with a single parameter
  • Evaluation Metrics: Faithfulness, sensitivity, and stability metrics to quantify explanation quality
  • Beautiful Visualizations: Publication-ready plots with minimal code
  • Model Agnostic: Works with any PyTorch model architecture
  • Transformer Support: Dedicated attention visualization for Vision Transformers
  • CLI Tool: Generate explanations from the command line

Installation

pip install scope-rx

With optional dependencies:

# For interactive Plotly visualizations
pip install scope-rx[interactive]

# For development
pip install scope-rx[dev]

# Full installation with all extras
pip install scope-rx[full]

Quick Start

from scope_rx import ScopeRX
import torch
import torchvision.models as models

# Load a pretrained model
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.eval()

# Prepare a preprocessed input batch (a random tensor stands in here)
input_tensor = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    predicted_class = model(input_tensor).argmax(dim=1).item()

# Create explainer
explainer = ScopeRX(model)

# Generate explanation
result = explainer.explain(
    input_tensor,
    method='gradcam',
    target_class=predicted_class
)

# Visualize
result.visualize()

# Or save to file
result.save("explanation.png")

Available Methods

Gradient-Based Methods

Method                 Description                                  Use Case
gradcam                Gradient-weighted Class Activation Mapping   General CNN visualization
gradcam++              Improved GradCAM with better localization    Multiple object instances
scorecam               Score-based CAM (gradient-free)              When gradients are unstable
layercam               Layer-wise CAM                               Fine-grained attribution
smoothgrad             Noise-smoothed gradients                     Reducing gradient noise
integrated_gradients   Axiomatic attribution method                 Theoretically grounded explanations
vanilla                Simple input gradients                       Quick baseline
guided_backprop        Guided backpropagation                       High-resolution visualization
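The CAM-family methods above all reduce to weighting a convolutional layer's feature maps by some importance score. As a rough illustration of plain GradCAM on toy arrays (a sketch of the technique, not ScopeRX's implementation, which hooks the model for you):

```python
import numpy as np

def gradcam_heatmap(activations: np.ndarray, gradients: np.ndarray) -> np.ndarray:
    """Toy GradCAM: weight each feature map by its average gradient,
    sum the weighted maps, and clip negatives (ReLU).

    activations, gradients: shape (channels, H, W), taken from the
    target convolutional layer for the class of interest.
    """
    # Global-average-pool the gradients -> one weight per channel
    weights = gradients.mean(axis=(1, 2))               # (C,)
    # Weighted sum of the feature maps
    cam = np.tensordot(weights, activations, axes=1)    # (H, W)
    # ReLU: keep only features that push the class score up
    cam = np.maximum(cam, 0)
    # Normalize to [0, 1] for display
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam

# Toy example: 2 channels of 4x4 maps, one activation spike at (1, 1)
acts = np.zeros((2, 4, 4))
acts[0, 1, 1] = 1.0
grads = np.ones((2, 4, 4))
heatmap = gradcam_heatmap(acts, grads)
print(heatmap[1, 1])  # the spike becomes the hottest location
```

GradCAM++ and LayerCAM differ only in how those per-channel (or per-pixel) weights are computed; ScoreCAM replaces the gradient weights with forward-pass scores.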

Perturbation-Based Methods

Method                   Description                      Use Case
occlusion                Sliding window occlusion         Understanding spatial importance
rise                     Randomized Input Sampling        Black-box models
meaningful_perturbation  Optimized minimal perturbation   Finding minimal explanations
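RISE needs only forward passes: score the model on many randomly masked copies of the input and average the masks weighted by the resulting scores. A minimal single-channel sketch (ScopeRX's rise method additionally upsamples smooth low-resolution masks; coarse binary masks stand in here):

```python
import numpy as np

def rise_saliency(score_fn, image, n_masks=2000, p=0.5, seed=0):
    """Toy RISE: saliency[i, j] = average model score over the masks
    that keep pixel (i, j)."""
    rng = np.random.default_rng(seed)
    h, w = image.shape
    saliency = np.zeros((h, w))
    counts = np.zeros((h, w))
    for _ in range(n_masks):
        mask = (rng.random((h, w)) < p).astype(float)  # keep with prob. p
        score = score_fn(image * mask)                 # forward pass only
        saliency += score * mask
        counts += mask
    return saliency / np.maximum(counts, 1)

# Black-box "model": responds only to the pixel at (2, 2)
score_fn = lambda x: float(x[2, 2])
image = np.ones((5, 5))
sal = rise_saliency(score_fn, image)
# The pixel the model actually uses gets the highest saliency
print(np.unravel_index(sal.argmax(), sal.shape))
```

Because no gradients are involved, the same loop works for any callable that returns a score, which is why rise is the go-to choice for black-box models.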

Model-Agnostic Methods

Method       Description                       Use Case
kernel_shap  Kernel SHAP approximation         Shapley value estimation
lime         Local Interpretable Explanations  Interpretable local surrogates
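Both methods fit an interpretable surrogate around one prediction: perturb the input's features, query the model, and fit a linear model to the responses. A stripped-down LIME-style sketch for tabular features (real LIME also weights samples by proximity and works on image superpixels; this is an illustration, not ScopeRX's code):

```python
import numpy as np

def lime_weights(predict_fn, x, n_samples=500, seed=0):
    """Toy LIME: randomly switch features off (replace with 0), then
    fit an ordinary least-squares surrogate to the model's responses.
    Returns one importance weight per feature."""
    rng = np.random.default_rng(seed)
    d = x.shape[0]
    masks = rng.integers(0, 2, size=(n_samples, d))   # 1 = keep feature
    ys = np.array([predict_fn(x * m) for m in masks])
    # Linear surrogate: y ~ masks @ w + b
    design = np.column_stack([masks, np.ones(n_samples)])
    coef, *_ = np.linalg.lstsq(design, ys, rcond=None)
    return coef[:-1]   # drop the intercept

# "Model": a known linear function, so the surrogate should recover it
predict_fn = lambda v: 3.0 * v[0] - 2.0 * v[1] + 0.5 * v[2]
x = np.ones(3)
w = lime_weights(predict_fn, x)
print(np.round(w, 2))   # approximately 3, -2, 0.5
```

Kernel SHAP uses the same perturb-and-fit loop but chooses the sample weighting so the surrogate's coefficients estimate Shapley values.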

Attention-Based Methods (for Transformers)

Method             Description                   Use Case
attention_rollout  Attention weight aggregation  Vision Transformers
attention_flow     Attention flow propagation    Understanding attention paths
raw_attention      Raw attention weights         Quick attention inspection
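Attention rollout (Abnar & Zuidema) aggregates attention across layers: add the residual connection to each layer's attention matrix, renormalize the rows, and multiply the matrices through the stack. A minimal sketch over per-layer (tokens x tokens) maps, assumed already averaged over heads (again a sketch of the technique, not ScopeRX's internals):

```python
import numpy as np

def attention_rollout(layer_attns):
    """layer_attns: list of (tokens, tokens) row-stochastic attention
    matrices, one per layer. Returns the rolled-out attention."""
    n = layer_attns[0].shape[0]
    rollout = np.eye(n)
    for attn in layer_attns:
        # Account for the residual connection, then renormalize rows
        attn = 0.5 * attn + 0.5 * np.eye(n)
        attn = attn / attn.sum(axis=-1, keepdims=True)
        # Compose attention across layers
        rollout = attn @ rollout
    return rollout

# Two layers of uniform attention over 4 tokens
uniform = np.full((4, 4), 0.25)
rolled = attention_rollout([uniform, uniform])
print(rolled.sum(axis=-1))   # each row is still a distribution
```

For a Vision Transformer, the CLS-token row of the result, reshaped to the patch grid, gives the familiar attention heatmap.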

Compare Methods

from scope_rx import ScopeRX

explainer = ScopeRX(model)

# Compare multiple methods at once
results = explainer.compare_methods(
    input_tensor,
    methods=['gradcam', 'smoothgrad', 'integrated_gradients', 'rise'],
    target_class=predicted_class
)

# Visualize comparison
from scope_rx.visualization import plot_comparison
plot_comparison({name: r.attribution for name, r in results.items()})

Evaluate Explanations

from scope_rx.metrics import (
    faithfulness_score,
    insertion_deletion_auc,
    sensitivity_score,
    stability_score
)

# Faithfulness: Does the explanation reflect model behavior?
faith = faithfulness_score(model, input_tensor, attribution, target_class=0)

# Insertion/Deletion: How does model output change as we add/remove important features?
scores = insertion_deletion_auc(model, input_tensor, attribution, target_class=0)
print(f"Insertion AUC: {scores['insertion_auc']:.3f}")
print(f"Deletion AUC: {scores['deletion_auc']:.3f}")

# Sensitivity: Are explanations sensitive to meaningful changes?
sens = sensitivity_score(explainer, input_tensor, target_class=0)

# Stability: Are explanations stable across similar inputs?
stab = stability_score(explainer, input_tensor, target_class=0)
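The deletion half of the insertion/deletion metric, for example, removes pixels in order of attributed importance and averages the model's score as it falls; a faithful attribution makes the score collapse quickly, giving a low deletion AUC. A sketch of that loop (illustrative only, not ScopeRX's exact implementation):

```python
import numpy as np

def deletion_auc(score_fn, image, attribution, n_steps=20):
    """Zero out pixels from most- to least-important and average the
    model score along the way (lower = more faithful attribution)."""
    flat = image.flatten()
    order = np.argsort(attribution.flatten())[::-1]   # most important first
    step = max(1, len(order) // n_steps)
    scores = [score_fn(flat.reshape(image.shape))]
    for start in range(0, len(order), step):
        flat[order[start:start + step]] = 0.0
        scores.append(score_fn(flat.reshape(image.shape)))
    return float(np.mean(scores))

# Linear "model": score is a weighted sum of pixels, so the weights
# themselves are the ground-truth attribution
weights = np.arange(16.0).reshape(4, 4)
score_fn = lambda x: float((weights * x).sum())
image = np.ones((4, 4))

good = deletion_auc(score_fn, image, attribution=weights)   # true importances
bad = deletion_auc(score_fn, image, attribution=-weights)   # reversed order
print(good < bad)   # faithful ordering deletes the score faster
```

The insertion AUC runs the same loop in reverse, starting from a blank canvas, where higher is better.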

Visualization

from scope_rx.visualization import (
    plot_attribution,
    plot_comparison,
    overlay_attribution,
    create_interactive_plot,
    export_visualization
)

# Simple plot
plot_attribution(attribution, image=original_image)

# Interactive Plotly plot
fig = create_interactive_plot(attribution, image=original_image)
fig.show()

# Export to various formats
export_visualization(attribution, "output.png", colormap="jet")
export_visualization(attribution, "output.npy")  # Raw numpy array

Command Line Interface

# Generate explanation
scope-rx explain image.jpg --model resnet50 --method gradcam --output heatmap.png

# Compare methods
scope-rx compare image.jpg --model resnet50 --methods gradcam,smoothgrad,rise

# List available methods
scope-rx list-methods

# Show model layers (for layer selection)
scope-rx show-layers --model resnet50

Advanced Usage

Custom Target Layers

from scope_rx import GradCAM

# Specify exact layer
explainer = GradCAM(model, target_layer="layer4.1.conv2")

Custom Baselines for Integrated Gradients

from scope_rx import IntegratedGradients

# Use different baselines
explainer = IntegratedGradients(
    model,
    n_steps=50,
    baseline="blur"  # Options: "zero", "random", "blur"
)
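The baseline matters because Integrated Gradients accumulates the gradient along a straight path from the baseline x' to the input x: IG_i = (x_i - x'_i) * integral from 0 to 1 of dF/dx_i at x' + a(x - x'). A Riemann-sum sketch for a function with a known gradient, checking the completeness axiom (attributions sum to F(x) - F(x')); this stands in for the autograd loop the library runs internally:

```python
import numpy as np

def integrated_gradients(grad_fn, x, baseline, n_steps=50):
    """Approximate the path integral with a midpoint Riemann sum of
    gradients taken at n_steps points between baseline and x."""
    alphas = (np.arange(n_steps) + 0.5) / n_steps
    total = np.zeros_like(x)
    for a in alphas:
        total += grad_fn(baseline + a * (x - baseline))
    return (x - baseline) * total / n_steps

# F(x) = sum(x**2), so grad F = 2x and F has a closed form to check
grad_fn = lambda v: 2.0 * v
x = np.array([1.0, 2.0, 3.0])
baseline = np.zeros(3)

attr = integrated_gradients(grad_fn, x, baseline)
# Completeness: attributions should sum to F(x) - F(baseline) = 14
print(attr.sum())
```

A "blur" baseline keeps low-frequency context in the path, which often yields less noisy attributions for natural images than an all-zero baseline.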

Batch Processing

from scope_rx import ScopeRX
from scope_rx.utils import preprocess_image
from pathlib import Path

explainer = ScopeRX(model)

# Process multiple images
for image_path in image_paths:
    input_tensor = preprocess_image(image_path)
    result = explainer.explain(input_tensor, method='gradcam')
    result.save(f"explanations/{Path(image_path).stem}.png")

Using Individual Explainers

from scope_rx import GradCAM, SmoothGrad, RISE

# Use specific explainer directly
gradcam = GradCAM(model, target_layer="layer4")
result = gradcam.explain(input_tensor, target_class=0)

# SmoothGrad with custom parameters
smoothgrad = SmoothGrad(model, n_samples=50, noise_level=0.2)
result = smoothgrad.explain(input_tensor, target_class=0)
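The n_samples and noise_level parameters map directly onto SmoothGrad's definition: average the gradient over n copies of the input perturbed with Gaussian noise whose scale is a fraction of the input's value range. A sketch with an analytic gradient standing in for autograd (again an illustration of the method, not ScopeRX's code):

```python
import numpy as np

def smoothgrad(grad_fn, x, n_samples=50, noise_level=0.2, seed=0):
    """Average grad_fn over n_samples noisy copies of x. The noise
    standard deviation is noise_level times the input's value range."""
    rng = np.random.default_rng(seed)
    sigma = noise_level * (x.max() - x.min())
    grads = [grad_fn(x + rng.normal(0.0, sigma, x.shape))
             for _ in range(n_samples)]
    return np.mean(grads, axis=0)

# grad of F(x) = sum(x**2) is 2x; the noise averages out, so the
# smoothed gradient should stay close to 2x
grad_fn = lambda v: 2.0 * v
x = np.array([0.0, 1.0, 2.0])
sg = smoothgrad(grad_fn, x, n_samples=500)
print(sg)
```

Raising n_samples reduces the variance of the estimate at the cost of proportionally more forward/backward passes.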

Testing

# Run all tests
pytest tests/

# Run with coverage
pytest tests/ --cov=scope_rx --cov-report=html

# Run specific test module
pytest tests/test_gradient_methods.py -v

Documentation

For full documentation, visit our documentation site.

Contributing

We welcome contributions! Please see our Contributing Guide for details.

License

This project is licensed under the MIT License - see the LICENSE file for details.

Citation

If you use ScopeRX in your research, please cite:

@software{scoperx2024,
  title = {ScopeRX: Neural Network Explainability Library},
  author = {XCALEN},
  year = {2024},
  url = {https://github.com/xcalen/scope-rx}
}

Acknowledgments

ScopeRX builds upon the excellent work of the interpretability research community. Special thanks to the authors of:

  • GradCAM, GradCAM++, ScoreCAM, LayerCAM
  • SHAP and LIME
  • Integrated Gradients
  • RISE
  • And many others who have contributed to the field of explainable AI

Made with love by Desenyon

