Deep Learning GUI for Filament Network Segmentation & Tracking in Microscopy Images

Project description

FILAI

Developed at the Miranda Laboratory at NCSU with the support of NIH-NIGMS R00GM135487 and the National Institute for Theory and Mathematics in Biology (NITMB). This research was supported in part by grants from the NSF (DMS-2235451) and Simons Foundation (MPS-NITMB-00005320) to the NSF-Simons National Institute for Theory and Mathematics in Biology (NITMB).

Deep Learning Platform for microscopy analysis of filamentous fungal life cycles.

Python package with a PySide6 GUI for segmentation and tracking of fungal filaments in microscopy time-lapse images via deep-learning image segmentation and generative frame interpolation. Built by combining Cellpose, Omnipose, and Real-Time Intermediate Flow Estimation (RIFE).

Python 3.10 | License: BSD 3-Clause


Features

  • 7 Pre-trained Models — Instance segmentation of different fungal morphological landmarks: fungal filaments (hyphae), conidia, sporangiophores, filament tips, branching points, septa, and crossing points.
  • Ensemble Segmentation Mode — Pretrained models can be combined to produce multidimensional masks depicting different detected structures on fungal filament images, available for single images or batch processing of image time series.
  • RIFE Interpolation — Generative temporal upsampling of image time series (2×/4×/8×/16×) to facilitate the tracking of filaments and fungal features.
  • Tracking — Based solely on mask overlap, with gap-filling and ID consistency thanks to RIFE interpolation.
  • Model Retraining — Fine-tune models to your custom datasets by providing or labeling new masks to retrain the Cellpose or Omnipose models.
  • Interactive Visualization — Real-time color-coded mask overlays in the viewer.

Installation

Prerequisites: Install Anaconda first.

Anaconda Setup (Required)

  1. Download the Anaconda installer for your OS from https://www.anaconda.com/download
  2. Install Anaconda:
    • Windows: Run the installer (.exe). You can leave "Add Anaconda to my PATH environment variable" unchecked, then use Anaconda Prompt.
    • macOS/Linux: Run the installer and allow shell initialization when prompted.
  3. Open a new terminal (or Anaconda Prompt on Windows) and verify:
conda --version
python --version
  4. If conda is not found on macOS/Linux, initialize your shell and restart the terminal:
conda init zsh
# or:
conda init bash
  5. Create and activate the FILAI environment:
conda create -n filai python=3.10 -y
conda activate filai
  6. Optional but recommended once on a fresh install:
conda update -n base -c defaults conda -y
  7. Daily usage:
    • Start work: conda activate filai
    • Exit environment: conda deactivate

Windows (Sometimes Required): Microsoft Visual C++ Build Tools

If you see an error like Microsoft Visual C++ 14.x is required during pip install, install Microsoft's C++ build tools:

  1. Download Build Tools for Visual Studio: https://visualstudio.microsoft.com/visual-cpp-build-tools/
  2. Run the installer and select Desktop development with C++
  3. In installation details, ensure these are selected:
    • MSVC v14.x C++ build tools (latest available)
    • Windows 10/11 SDK (latest available)
  4. Complete installation, then restart your terminal
  5. Re-activate your environment and retry:
conda activate filai
pip install --no-cache-dir filai

Quick Install (CPU - Recommended for First Install)

If you already created filai in the Anaconda setup steps above, skip step 1.

# 1. Create environment with Python 3.10 (important!)
conda create -n filai python=3.10 -y
conda activate filai

# 2. Install FILAI (includes CPU versions of PyTorch)
pip install filai

# 3. Download models (segmentation + interpolation)
filai-download-models

# 4. Launch
filai

Upgrade an Existing Installation

If FILAI is already installed in your existing Conda environment, run:

# 1. Activate your existing FILAI environment
conda activate filai

# 2. Upgrade FILAI from PyPI
python -m pip install --upgrade filai

# 3. Verify installed version
python -c "import importlib.metadata as im; print(im.version('filai'))"

# 4. Optional: refresh/download model files
filai-download-models

If you hit stale wheel/cache issues during upgrade, retry with:

python -m pip install --upgrade --no-cache-dir filai

GPU Support (NVIDIA Only - Optional but Recommended for Speed)

FILAI installs CPU versions of PyTorch by default. For significantly faster segmentation and interpolation, upgrade to GPU support:

Prerequisites:

  1. Install NVIDIA drivers for your GPU
  2. Verify GPU detection: Run nvidia-smi in terminal and note the CUDA version displayed

Upgrade to GPU:

# Activate your filai environment
conda activate filai

# Remove CPU versions
pip uninstall torch torchvision -y

pip install torch==2.5.1 torchvision==0.20.1 --index-url https://download.pytorch.org/whl/cu118

Verify GPU installation: After installing, check that PyTorch sees your GPU:

python -c "import torch; print(f'CUDA available: {torch.cuda.is_available()}')"

You should see CUDA available: True.

Note: For Mac (M1/M2/M3), use:

conda install pytorch torchvision -c pytorch -y

Note: The model downloader will download all required models including:

  • Segmentation models (8 models, ~200MB)
  • RIFE interpolation model (~55MB)

To download only specific models, run filai-download-models --list to see options.

Daily Usage: After installation, just run:

conda activate filai
filai

Try built-in sample data (5 TIFF frames):

conda activate filai
filai --test

This copies packaged sample frames to ~/.filai/toy_dataset (writable) and opens them automatically in Processing view. Packaged sample TIFFs in filai/toy_dataset are source assets and should remain unmodified. In --test, single-model mode auto-defaults Resize to 0.5 for Coni_7 and 1.0 for other models (you can still edit Resize manually). Use --test (double dash); -test is intentionally not supported.


Usage Guide

Input Naming

No strict naming format is required for the GUI to load original input images.

For reliable behavior, use this naming convention:

  • Time series: zero-padded sequential names (example: frame_0001.tif, frame_0002.tif, ...) so frame order stays correct.
  • If pairing with masks: keep the same base name (example: frame_0001.tif pairs with frame_0001_cp_masks.tif) for best auto-matching.
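A zero-padded naming scheme is easy to generate programmatically; a minimal sketch, where the frame_ prefix and 4-digit width follow the example above:

```python
# Build zero-padded sequential frame names so that lexicographic order
# matches temporal order. Prefix, width, and extension follow the
# convention above; adjust to your data.
def padded_name(index, prefix="frame_", width=4, ext=".tif"):
    return f"{prefix}{index:0{width}d}{ext}"

names = [padded_name(i) for i in range(1, 4)]
# ['frame_0001.tif', 'frame_0002.tif', 'frame_0003.tif']
```

Because the names sort correctly as plain strings, any folder listing keeps the frames in acquisition order.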

Where Images and Masks Should Be

  • Segmentation: Put source images in the folder you load in the Processing view (Load image or Load image folder). Generated masks are saved by FILAI in output subfolders for the selected model.
  • Interpolation: Put source time-series images in one input folder, and choose a separate output folder for interpolated frames.
  • Tracking: Provide a folder containing labeled mask files (for example *_cp_masks.tif) and choose a separate output folder for tracking exports.
  • Model Retraining: Use two folders: one Images Folder (training images) and one Masks Folder (matching labeled masks with corresponding base names).

Segmentation

Single Image:

  1. Load image
  2. Select model in Models tab
  3. Run segmentation
  4. View color-coded overlay

Batch Processing:

  1. Load image folder
  2. Choose model
  3. Click "Run Time Series"
  4. Results saved to {folder}/{model_name}/

Ensemble Mode:

  1. Select multiple models in Models tab
  2. Run segmentation
  3. Each structure type in unique color

Frame Interpolation

  1. Go to Interpolation tab
  2. Select input/output folders
  3. Choose factor (2×, 4×, 8×, 16×)
  4. Click "Start Interpolation"
  5. Preview results in viewer

Tracking

  1. Go to Tracking tab
  2. Load masks folder
  3. Set output folder
  4. Click "Start Tracking"
  5. Export: .npy, .mat, .pklt formats
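Tracking propagates IDs by mask overlap between consecutive frames. A minimal sketch of that idea (an illustration only, not FILAI's actual implementation):

```python
import numpy as np

def propagate_ids(prev_mask, curr_mask):
    """Assign each labeled region in curr_mask the ID of the prev_mask
    label it overlaps most; regions with no overlap start new tracks."""
    out = np.zeros_like(curr_mask)
    next_id = int(prev_mask.max()) + 1
    for lbl in np.unique(curr_mask):
        if lbl == 0:                     # 0 is background
            continue
        region = curr_mask == lbl
        overlap = prev_mask[region]
        overlap = overlap[overlap > 0]   # ignore background pixels
        if overlap.size:
            # inherit the most-overlapped previous ID
            out[region] = np.bincount(overlap).argmax()
        else:
            out[region] = next_id        # no overlap: new track
            next_id += 1
    return out
```

RIFE interpolation helps here because densely sampled frames overlap more between time points, which keeps IDs consistent and fills gaps.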

Model Retraining

  1. Open the Retrain view from the left sidebar (ADVANCED -> Retrain).
  2. Select Images Folder (training images) and Masks Folder (labeled masks).
  3. Click Base Model (.pth) and choose the pretrained model file you want to fine-tune.
  4. Set the mask pattern if needed (for example _cp_masks), then click Validate Data to confirm image-mask pairs.
  5. Set training hyperparameters: Epochs, Learning Rate, and Weight Decay (defaults work for many use cases).
  6. Click Start Training and confirm the training dialog.
  7. Monitor progress in the training progress window. (Detailed console logs are shown when verbose logging is enabled.)
  8. After completion, FILAI saves the retrained model in the generated preprocessed_<timestamp>/models/ folder, typically named <base_model>_retrain.pth (or <base_model>_retrain_<timestamp>.pth if needed).
  9. To use the retrained model in segmentation, open ADVANCED -> Add Model, click Browse and Add Model, and select the saved .pth file.
  10. The imported model appears in the models list and can be used in single-model or ensemble workflows. If performance is not satisfactory, refine labels/data and retrain.

Pre-trained Models

Model Purpose
FilaTip_6.pth Filament tips (recommended)
Coni_7.pth Conidia/spores
Phore_2.pth Sporangiophore
FilaBranch_2.pth Branch points
FilaCross_2.pth Crossings
FilaSeptum_4.pth Septa
Retrain_omni_5.pth General/custom

Troubleshooting

"conda: command not found"

  • Initialize your shell (conda init zsh or conda init bash), then restart the terminal (see Anaconda Setup above)

Installation takes long

  • Normal - downloads ~2.5 GB (PyTorch + dependencies)
  • First install: 10-20 minutes

Windows install error: "Microsoft Visual C++ 14.x is required"

  • Install Build Tools for Visual Studio with the "Desktop development with C++" workload (see the Windows section above)

Models don't download

  • Check internet connection
  • Run manually: filai-download-models
  • Check FiloGUI_models/ folder

GPU not detected

  • Verify: python -c "import torch; print(torch.cuda.is_available())"
  • Install correct CUDA version for your GPU
  • Application will auto-fallback to CPU

Out of memory

  • Reduce batch size
  • Process smaller image regions
  • Use CPU mode: conda install pytorch torchvision cpuonly -c pytorch -y

File Formats

Supported Inputs:

  • Images: .tif, .tiff, .png, .jpg
  • Masks: .tif, .tiff, .png, .jpg (labeled)

Outputs:

  • Masks: _cp_masks.tif (labeled)
  • Tracks: .npy (NumPy), .mat (MATLAB), .pklt (metadata)
  • Interpolated: Same format as input

Directory Arrangement and Naming

Drag and drop a directory of 2D image files into the GUI (or load it from the file menu) to start segmentation, labeling, interpolation, or processing. Each file should contain a single 2D image and use a standard image extension.

Important behavior:

  • FILAI writes generated masks/labels and processed images into the loaded directory.
  • Removing those generated files will remove the corresponding saved results from prior sessions.
  • Keep image dimensions consistent within the same channel.
  • For best loading reliability, keep only valid image files and FILAI-generated files in the working directory.

Multi-channel and pre-generated mask naming:

  • Use a clear data-type identifier immediately before the extension (for example: _phase.tif, _channel2.png, _mask1.tif).
  • Any image that is a label/mask should include _mask in its identifier (for example: _mask_nucleus.tif, _mask_cytoplasm.jpg).
  • Each identifier/group should have the same number of time points; mismatched counts across channels or masks will raise an error.

Example directory (2 time points, 2 channels, 2 masks):

  • im001_channel1.tif, im001_channel2.tif, im001_mask1.tif, im001_mask2.tif
  • im002_channel1.tif, im002_channel2.tif, im002_mask1.tif, im002_mask2.tif
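The equal-counts rule can be checked before loading; a small sketch, where the identifier parsing (everything after the first underscore) is an assumption for illustration:

```python
from collections import Counter
from pathlib import Path

def group_counts(filenames):
    """Count files per data-type identifier (text after the first '_')."""
    counts = Counter()
    for name in filenames:
        stem = Path(name).stem            # e.g. 'im001_channel1'
        counts[stem.split("_", 1)[1]] += 1
    return counts

def validate_groups(filenames):
    """Raise if channels/masks have mismatched numbers of time points."""
    counts = group_counts(filenames)
    if len(set(counts.values())) > 1:
        raise ValueError(f"Mismatched time points per group: {dict(counts)}")
    return counts
```

Run on the example directory above, every group (channel1, channel2, mask1, mask2) counts 2 time points and validation passes.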

License

BSD 3-Clause License - see LICENSE file


That's it! The GUI will open and you're ready to analyze microscopy images.

Package Structure

FILAI/
├── filai/                   # Main package
│   ├── main.py                  # GUI application entry point
│   ├── tracking_functions.py   # Overlap-based tracking algorithm
│   ├── model_downloader.py     # Automated model fetching
│   ├── interpolate.py           # RIFE interpolation wrapper
│   ├── gui_styles.py            # UI theming
│   ├── assets/                  # Icons and resources
│   └── rife_model/              # RIFE neural network implementation
├── FiloGUI_models/              # Downloaded models directory
│   ├── Coni_7.pth              # Conidia segmentation
│   ├── FilaTip_6.pth           # Filament tips
│   └── ...                      # 5 additional models
├── pyproject.toml               # Package configuration
├── LICENSE                      # BSD 3-Clause License
└── README.md                    # This file

Core Workflow

1. Segmentation

FILAI provides three segmentation modes powered by Cellpose and Omnipose:

Single Image Mode

Load Image → Select Model → Run Segmentation → View Overlay
  • Instantly segment individual microscopy images
  • Real-time color-coded mask overlay
  • Export labeled masks as .tif files

Batch Time-Series Mode

Select Folder → Choose Model → Run Time Series → Auto-save Results
  • Process entire directories of sequential images
  • Results saved to {folder}/{model_name}/ with _cp_masks.tif suffix
  • Progress tracking with frame counter

Ensemble Mode

Models Tab → Check Multiple Models → Run Ensemble → Multi-color Overlay
  • Combine predictions from multiple specialized models
  • Each structure type rendered in unique color
  • Ideal for complex samples with tips, branches, and septa

2. Frame Interpolation (RIFE)

Increase temporal resolution using AI-powered interpolation for tracking fast-moving structures:

Usage

Input Folder → Output Folder → Interpolation Factor → Start

Interpolation Options:

  • 2× — Double frame count (1 intermediate frame)
  • 4× — Quadruple frames (3 intermediate frames)
  • 8× — 8× frames (7 intermediate frames)
  • 16× — 16× frames (15 intermediate frames)

Example: 20 frames → 305 frames at 16× (perfect for tip velocity analysis)
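The frame counts follow from inserting (factor − 1) intermediate frames into each gap between consecutive frames; a quick check of the arithmetic:

```python
# n_in frames have (n_in - 1) gaps; each gap gains (factor - 1) new frames,
# so the output count is (n_in - 1) * factor + 1.
def interpolated_count(n_in, factor):
    return (n_in - 1) * factor + 1

interpolated_count(20, 16)  # 305, matching the example above
```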

Technical Details:

  • Supports .tif, .tiff, .png, .jpg
  • Auto-converts uint32 → uint16 for microscopy compatibility
  • GPU-accelerated (CUDA) with CPU fallback
  • Preserves bit depth and dynamic range
  • Integrated preview browser with frame navigation

3. Tracking

Track individual cells/tips across time with gap-filling and ID consistency:

Usage

Mask Folder → Output Folder → Start Tracking → Export Results

Output Formats:

  • {pos}_Tracks.npy — 3D NumPy array (H × W × T)
  • {pos}_ART_Tracks_MATLAB.mat — MATLAB-compatible format
  • {pos}_Tracks_vars_file.pklt — Metadata (sizes, lifetimes, statistics)
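The .npy export can be consumed directly in Python. A sketch that summarizes track lifetimes from the H × W × T array described above (treating pixel value as track ID and 0 as background, an assumption consistent with labeled masks):

```python
import numpy as np

def track_lifetimes(tracks):
    """Map each track ID to the number of frames in which it appears."""
    lifetimes = {}
    for t in range(tracks.shape[2]):           # iterate over the time axis
        for tid in np.unique(tracks[:, :, t]):
            if tid:                            # skip background (0)
                tid = int(tid)
                lifetimes[tid] = lifetimes.get(tid, 0) + 1
    return lifetimes

# Usage (path is a placeholder for your own export):
# tracks = np.load("pos1_Tracks.npy")
# print(track_lifetimes(tracks))
```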

Pre-trained Models

FILAI includes 7 specialized models (~25MB each, ~175MB total):

Model Purpose Best For
Coni_7.pth Conidia Fungal spores, reproductive structures
Phore_2.pth Sporangiophore Spore-bearing aerial hyphae
FilaBranch_2.pth Branch Points Hyphal branching junctions
FilaCross_2.pth Crossings Overlapping filament networks detection
FilaSeptum_4.pth Septa Cell wall divisions, compartments
FilaTip_6.pth Tips Filament tip tracking (recommended)
Retrain_omni_5.pth General Multi-purpose, custom-trained

Model Management

# List available models
filai-download-models --list

# Download all models
filai-download-models --all

# Download specific models
filai-download-models FilaTip_6 Coni_7

# Interactive selection
filai-download-models

Models are downloaded to FiloGUI_models/ in the current directory or to ~/.filai/models/.


Recommended Workflow: Complete Analysis Pipeline

For high-quality filament network tip tracking:

1. Acquire Microscopy Data
   └─ Time-lapse .tif series (e.g., 20-50 frames, 2-5 min intervals)

2. Frame Interpolation (Optional but Recommended)
   └─ RIFE 8× or 16× for capturing fast growth dynamics
   └─ 20 frames → 305 frames at 16×

3. Segmentation
   └─ Batch process with FilaTip_6 model
   └─ Outputs: {folder}/FilaTip_6/*_cp_masks.tif

4. Tracking
   └─ Run tracking on segmented masks
   └─ Automatically handles gaps and ID consistency
   └─ Outputs: Tracked masks + metadata

5. Analysis
   └─ Load .npy or .mat files in Python/MATLAB
   └─ Extract growth rates, velocities, morphology

Typical Performance:

  • Segmentation: ~2-5 seconds/frame (GPU)
  • Interpolation 16×: ~1-3 seconds/frame (GPU)
  • Tracking: ~0.5 seconds/frame (CPU)
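For step 5, growth velocities can be estimated from the tracked label stack. A minimal sketch using centroid displacement per frame in pixels (multiply by pixel size and divide by frame interval for physical units; this is an illustrative analysis, not FILAI's built-in output):

```python
import numpy as np

def centroid(mask2d, tid):
    """Centroid (row, col) of track tid in one frame, or None if absent."""
    ys, xs = np.nonzero(mask2d == tid)
    return (ys.mean(), xs.mean()) if ys.size else None

def mean_speed(tracks, tid):
    """Mean centroid displacement per frame (pixels) for one track ID."""
    pts = [centroid(tracks[:, :, t], tid) for t in range(tracks.shape[2])]
    pts = [p for p in pts if p is not None]     # drop frames where absent
    steps = [np.hypot(b[0] - a[0], b[1] - a[1]) for a, b in zip(pts, pts[1:])]
    return float(np.mean(steps)) if steps else 0.0
```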

Development

Install from Source

git clone https://github.com/vatsal-dp/filogui3.git
cd filogui3

conda create -n filai-dev python=3.10
conda activate filai-dev

conda install pytorch torchvision pytorch-cuda=11.8 -c pytorch -c nvidia
pip install -e .  # Editable install

filai-download-models --all
filai

Hot-Reload Development Mode

python dev_runner.py  # Auto-reloads on code changes

Project Structure

filai/
├── main.py                      # Main GUI (5000+ lines, modular views)
├── tracking_functions.py        # Core tracking algorithm
├── model_downloader.py          # Automated model fetching
├── interpolate.py               # RIFE wrapper
├── gui_styles.py                # Centralized theming
├── interpolation_view.py        # Interpolation tab UI
├── interpolation_dialog.py      # Interpolation dialogs
├── debug_viewer.py              # Debug visualization
├── matlab_equivalent_functions.py  # MATLAB compatibility layer
└── rife_model/                  # RIFE neural network
    ├── RIFE.py                  # Main RIFE model
    ├── IFNet.py                 # Feature extraction
    └── pytorch_msssim/          # Perceptual loss

Technical Specifications

Supported Formats

Category Formats Notes
Input Images .tif, .tiff, .png, .jpg 8/16/32-bit
Segmentation Output .tif (16-bit labeled) Instance masks
Tracking Output .npy, .mat, .pklt NumPy, MATLAB, Pickle

Troubleshooting

GPU Not Detected

import torch
print(torch.cuda.is_available())  # Should be True

Fix:

conda install pytorch torchvision pytorch-cuda=11.8 -c pytorch -c nvidia

Models Not Found

# Download to current directory
filai-download-models --all

# Or specify custom path
filai-download-models --all --output-dir /path/to/models

Import Errors (Cellpose/Omnipose)

pip install --upgrade cellpose==2.1.0 omnipose==0.4.4

RIFE Not Available Warning

pip install opencv-python-headless

Acknowledgements

Sandhya Neupane, Susmita Gaire, Kevin Garcia, Tika B. Adhikari, Orlando Arguello-Miranda

1 Plant and Microbial Biology, North Carolina State University.
2 Entomology and Plant Pathology, North Carolina State University.
3 Crop and Soil Sciences, North Carolina State University.
Current address: Department of Natural Sciences, Tennessee Wesleyan University.

Download files

Download the file for your platform.

Source Distribution

filai-0.1.54.tar.gz (17.8 MB)

Uploaded Source

Built Distribution

filai-0.1.54-py3-none-any.whl (17.9 MB)

Uploaded Python 3

File details

Details for the file filai-0.1.54.tar.gz.

File metadata

  • Download URL: filai-0.1.54.tar.gz
  • Size: 17.8 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.14.2

File hashes

Hashes for filai-0.1.54.tar.gz
Algorithm Hash digest
SHA256 52b2114fd48ede059a290e3633bb301ab590e5858ca531cb5ecdecbfe33ded75
MD5 7ae7cd395ad6c715997ea36d66b98887
BLAKE2b-256 5a25cfa923d03d9352aeef5fd83fa6f193e7bbe82fa9ab40115740b118da6fe4

File details

Details for the file filai-0.1.54-py3-none-any.whl.

File metadata

  • Download URL: filai-0.1.54-py3-none-any.whl
  • Size: 17.9 MB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.14.2

File hashes

Hashes for filai-0.1.54-py3-none-any.whl
Algorithm Hash digest
SHA256 d9116543a876eb250c0b003048414ac1b5ef8da3beeaf7f3a0516e3575633b8d
MD5 eb2a97cf0db9974231a2c1f2469a5843
BLAKE2b-256 7c44734da2d031b500ddb6531f01c4a7c836ed0044c9a0cc430de989f2879e34
