
Run and summarize PHOTONAI analyses


Getting started

This page shows how to set up a simple PHOTONAI project using PhotonaiProject, run analyses, perform permutation tests, and statistically compare different analyses.

You can find the in-depth documentation here: https://wwu-mmll.github.io/photonai_projects/

Installation

Install the package (and PHOTONAI) into your environment:

pip install photonai photonai-projects 

Basic concepts

A PhotonaiProject manages multiple PHOTONAI analyses in a single project folder. Each analysis has its own subfolder containing:

  • a hyperpipe constructor script (hyperpipe_constructor.py; see the sketch after this list)

  • a metadata file (hyperpipe_meta.json)

  • a data/ folder with X.npy and y.npy

  • (optionally) a permutations/ folder for permutation tests
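
A hyperpipe constructor is simply a Python function that builds and returns a PHOTONAI Hyperpipe. As a minimal sketch of what hyperpipe_constructor.py might contain (the pipeline elements, metrics, and cross-validation settings below are illustrative assumptions, not requirements of photonai-projects):

from sklearn.model_selection import KFold
from photonai import Hyperpipe, PipelineElement

def create_hyperpipe():
    # Illustrative setup: grid search over a standard-scaled SVC
    pipe = Hyperpipe(
        "example_pipe",
        outer_cv=KFold(n_splits=5, shuffle=True, random_state=42),
        inner_cv=KFold(n_splits=3, shuffle=True, random_state=42),
        optimizer="grid_search",
        metrics=["accuracy", "balanced_accuracy"],
        best_config_metric="balanced_accuracy",
    )
    pipe += PipelineElement("StandardScaler")
    pipe += PipelineElement("SVC", hyperparameters={"C": [0.1, 1, 10]})
    return pipe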

The typical workflow is:

  1. Create a project with PhotonaiProject.
  2. Add analyses (data + hyperpipe constructor).
  3. Run analyses to train and evaluate the models.
  4. Run permutation tests to obtain null distributions.
  5. Compare analyses statistically.

Minimal example

Below is an end-to-end example using the breast cancer dataset from scikit-learn. We create three analyses from different feature sets, run them, run permutation tests on each, and then compare them statistically.

from photonai_projects.project import PhotonaiProject
from sklearn.datasets import load_breast_cancer

# Load example data
X, y = load_breast_cancer(return_X_y=True)

# Split features into different sets
X_1 = X[:, :3]
X_2 = X[:, 3:6]

# Create a project
project = PhotonaiProject(project_folder="example_project")

# ---------------------------------------------------------------------
# 1) Register analyses
# ---------------------------------------------------------------------
for name, current_X in [
    ("all_features", X),
    ("first_feature_set", X_1),
    ("second_feature_set", X_2),
]:
    project.add(
        name=name,
        X=current_X,
        y=y,
        hyperpipe_script="path/to/hyperpipe_constructor.py",  # placeholder: point to your constructor script
        name_hyperpipe_constructor="create_hyperpipe",
    )

project.list_analyses()

# ---------------------------------------------------------------------
# 2) Run analyses
# ---------------------------------------------------------------------
for name in ["all_features", "first_feature_set", "second_feature_set"]:
    project.run(name=name)

# ---------------------------------------------------------------------
# 3) Run permutation tests (local example)
# ---------------------------------------------------------------------
# Use a small number of permutations for testing; increase for real studies.
for name in ["all_features", "first_feature_set", "second_feature_set"]:
    project.run_permutation_test(name=name, n_perms=10, overwrite=True)

# ---------------------------------------------------------------------
# 4) Statistical comparison of analyses
# ---------------------------------------------------------------------
# For the Nadeau–Bengio corrected t-test you must provide the n_train and
# n_test sizes used during cross-validation. Here we assume an 80/20 split,
# matching e.g. a 5-fold outer CV.
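# Background: the Nadeau–Bengio correction inflates the variance of the
# resampled t-test to account for overlapping training sets,
#     t = mean(d) / sqrt((1/K + n_test/n_train) * var(d)),
# where d holds the K per-fold differences in the chosen metric.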
n_samples = X.shape[0]
n_train = int(0.8 * n_samples)
n_test = n_samples - n_train

# Compare two analyses (Nadeau–Bengio corrected t-test)
project.compare_analyses(
    first_analysis="first_feature_set",
    second_analysis="second_feature_set",
    method="nadeau-bengio",
    n_train=n_train,
    n_test=n_test,
)

# Compare two analyses (permutation-based)
project.compare_analyses(
    first_analysis="all_features",
    second_analysis="second_feature_set",
    method="permutation",
    n_perms=10,
)

# Compare all pairs at once (optional)
multi_results = project.compare_multiple_analyses(
    analyses=["all_features", "first_feature_set", "second_feature_set"],
    method="permutation",
    n_perms=10,
)
print(multi_results.head())

Running permutation tests on a SLURM cluster

For large numbers of permutations, you can distribute them across a SLURM array:

project.prepare_slurm_permutation_test(
    name="second_feature_set",
    n_perms=1000,
    conda_env="my_photonai_env",
    memory_per_cpu=2,
    n_jobs=20,
    run_time="0-02:00:00",
    random_state=1,
)

This creates a slurm_job.cmd script in the analysis folder, which you can submit with:

cd example_project/second_feature_set
sbatch slurm_job.cmd

Each array job will call the Typer CLI entry point run_perm_job and execute a subset of permutation runs.
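
For orientation, the generated script is a regular SLURM batch file. The sketch below is hypothetical; the actual #SBATCH directives and CLI arguments are written by prepare_slurm_permutation_test and will differ:

#!/bin/bash
#SBATCH --array=0-19            # hypothetical: one task per chunk (n_jobs=20)
#SBATCH --mem-per-cpu=2G
#SBATCH --time=0-02:00:00

# Hypothetical call; the real script activates the conda environment and
# passes the arguments generated by prepare_slurm_permutation_test.
run_perm_job "$SLURM_ARRAY_TASK_ID" ...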

Next steps

See the Usage page for more details on:

  • how to design your hyperpipe constructor,
  • how metrics and scorers are handled,
  • how to interpret the comparison reports.

See the API Reference for the full documentation of PhotonaiProject.
