xplainable

Real-time explainable machine learning for business optimisation


Xplainable makes tabular machine learning transparent, fair, and actionable.

Why Xplainable?

In machine learning, there has long been a trade-off between accuracy and explainability. Libraries like SHAP and LIME approximate a model's decisions after the fact, but they're slow and add complexity.

xplainable takes a different approach: models that are explainable by design. Our algorithms match the performance of black-box models like XGBoost and LightGBM while providing complete transparency in real time: no surrogate models, no approximations.

Every prediction comes with per-feature contribution scores that explain why the model made that decision. These contributions are exact (not estimates) and can be used to drive business actions like retention campaigns, risk routing, and cost optimisation.
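As a toy illustration of the additive idea (the numbers and feature names below are made up, not output from xplainable), per-feature contributions combine with a base value to give the model's score, and because they are exact they can rank drivers directly:

```python
# Toy additive explanation: base value plus exact per-feature contributions.
# All numbers here are illustrative; real contributions come from a fitted model.
base_value = 0.38  # e.g. the average churn rate in the training data

contributions = {
    "tenure_months": -0.12,    # long tenure pushes churn risk down
    "support_tickets": +0.21,  # many tickets push churn risk up
    "plan_type": +0.04,
}

score = base_value + sum(contributions.values())
print(f"churn score: {score:.2f}")  # 0.51

# Exact contributions can rank business levers directly:
top_driver = max(contributions, key=lambda f: contributions[f])
print(f"largest positive driver: {top_driver}")  # support_tickets
```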

Installation

pip install xplainable

For preprocessing pipelines (spec-driven, JSON-serializable):

pip install xplainable-preprocessing

For cloud model management, deployment, and collaboration:

pip install xplainable-client

Quick Start

import xplainable as xp
from xplainable.core.models import XClassifier
from xplainable.core.optimisation.bayesian import XParamOptimiser
from sklearn.model_selection import train_test_split

# Load and split data
data = xp.load_dataset('titanic')
X, y = data.drop(columns=['Survived']), data['Survived']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)

# Optimise hyperparameters
opt = XParamOptimiser()
params = opt.optimise(X_train, y_train)

# Train
model = XClassifier(**params)
model.fit(X_train, y_train)

# Predict
y_pred = model.predict(X_test)

# Explain — interactive feature importances and contribution plots
model.explain()

Models

| Model | Class |
| --- | --- |
| Binary Classification | XClassifier |
| Regression | XRegressor |
| Partitioned Classification | PartitionedClassifier |
| Partitioned Regression | PartitionedRegressor |

Key Features

Explainability — Built In, Not Bolted On

Every xplainable model provides:

  • Feature importances — which features matter most
  • Partition contributions — how each feature value range shifts the prediction
  • Per-instance explanations — why this specific prediction was made
# Global explanation
model.explain()

# Per-instance contributions
contributions = model._transform(X_test)

# Model profile (all partition details)
profile = model.profile

Preprocessing with xplainable-preprocessing

Spec-driven, JSON-serializable pipelines that can be versioned, previewed, and persisted to Xplainable Cloud.

from xplainable_preprocessing import PipelineSpec, StepSpec, compile_spec

spec = PipelineSpec(steps=[
    StepSpec(
        id="lowercase",
        type="TextCleanTransformer",
        columns=["country", "category"],
        params={"operations": ["lowercase"]},
    ),
    StepSpec(
        id="fill_missing",
        type="FillMissingTransformer",
        params={"strategy": "median"},
    ),
    StepSpec(
        id="drop_ids",
        type="DropColumnsTransformer",
        params={"columns": ["customer_id", "order_id"]},
    ),
])

pipeline = compile_spec(spec)
df_transformed = pipeline.fit_transform(df)

Available transformers: TextCleanTransformer, DropColumnsTransformer, FillMissingTransformer, TypeCastTransformer, CategoryCondenseTransformer, ExpressionTransformer, DateTimeExtractTransformer, RenameColumnsTransformer, GroupByAggTransformer, GroupedLagTransformer, RollingAggTransformer, plus all standard sklearn transformers (StandardScaler, OneHotEncoder, etc.)
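Because the spec is plain JSON underneath, it can be diffed, reviewed, and versioned like any other artifact. A minimal sketch of the round trip using only the standard library (the dict below mirrors the StepSpec fields shown above; `PipelineSpec.model_dump()` would produce a similar structure, though the exact field layout may differ):

```python
import json

# The spec modelled as a plain dict so the round trip runs standalone.
spec_dict = {
    "steps": [
        {
            "id": "fill_missing",
            "type": "FillMissingTransformer",
            "params": {"strategy": "median"},
        },
        {
            "id": "drop_ids",
            "type": "DropColumnsTransformer",
            "params": {"columns": ["customer_id", "order_id"]},
        },
    ]
}

# Persist to disk (or a registry), then reload and verify nothing changed.
serialized = json.dumps(spec_dict, indent=2)
restored = json.loads(serialized)
assert restored == spec_dict

print(restored["steps"][0]["type"])  # FillMissingTransformer
```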

Hyperparameter Optimisation

Bayesian optimisation searches the hyperparameter space and returns the best-scoring configuration.

from xplainable.core.optimisation.bayesian import XParamOptimiser

opt = XParamOptimiser(metric='roc-auc')
params = opt.optimise(X_train, y_train)

model = XClassifier(**params)
model.fit(X_train, y_train)

Rapid Refitting

Fine-tune model parameters on individual features without retraining from scratch.

model.update_feature_params(
    features=['Age'],
    max_depth=6,
    min_info_gain=0.01,
    min_leaf_size=0.03,
    weight=0.05,
    power_degree=1,
    sigmoid_exponent=1,
    x=X_train,
    y=y_train
)

Contribution-Driven Optimisation

Use the model's per-feature contributions to identify controllable business levers and calculate the expected value of interventions — derived from the data, not assumed.

# Get per-feature contributions
contributions = model._transform(X_test)

# Model profile shows partition boundaries and scores
profile = model.profile

# For controllable features, compute counterfactual lever effects:
# "How much would churn drop if we moved this customer to the best partition?"
best_score = min(p['score'] for p in profile['numeric']['orders_count'])

# current_contribution: this customer's contribution for 'orders_count',
# taken from the `contributions` output above
lever_effect = current_contribution - best_score
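With toy numbers (illustrative, not real model output), the lever calculation above works out as follows; the partition structure here simply mirrors the shape used in the snippet:

```python
# Illustrative profile fragment for one numeric feature: each partition is a
# value range with the exact score it contributes. Real partitions come from
# the fitted model's profile.
orders_count_partitions = [
    {"range": "0-2", "score": 0.18},    # few orders: pushes churn up
    {"range": "3-9", "score": 0.02},
    {"range": "10+", "score": -0.11},   # many orders: pushes churn down
]

best_score = min(p["score"] for p in orders_count_partitions)

# This customer currently sits in the "0-2" partition.
current_contribution = 0.18
lever_effect = current_contribution - best_score
print(f"expected churn reduction if moved to best partition: {lever_effect:.2f}")  # 0.29
```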

See the Shopify Customer Churn notebook for a complete example.

Xplainable Cloud

Deploy models, persist preprocessing pipelines, and collaborate with your team through the Xplainable Cloud platform.

from xplainable_client.client.client import XplainableClient

client = XplainableClient(
    api_key="your-api-key",
    hostname="https://platform.xplainable.io"
)

# Persist preprocessing
client.preprocessing.create_preprocessor(
    name="My Preprocessor",
    description="Feature transforms for churn model",
    spec=preprocessing_spec.model_dump(),
    sample_df=df,
)

# Persist model
client.models.create_model(
    model=model,
    model_name="Churn Prediction",
    model_description="Customer churn classifier",
    x=X_train, y=y_train
)

# Deploy
deployment = client.deployments.deploy(model_version_id=version_id)

Examples

| Notebook | Type | Description |
| --- | --- | --- |
| Shopify Customer Churn | Classification | Churn prediction with contribution-driven retention optimisation |
| Shopify Order Returns | Classification | Return prediction with intervention routing |
| Telco Churn | Classification | IBM Telco customer churn |
| HELOC Credit Risk | Classification | Credit risk assessment |
| Lead Scoring | Classification | Lead conversion prediction |
| House Prices | Regression | Property price prediction |
| Concrete Strength | Regression | Material strength prediction |
| Power Plant Output | Regression | Energy output prediction |

Documentation

Contributing

We welcome contributions. If you're interested, reach out at contact@xplainable.io.


Made with care in Australia

© xplainable pty ltd
