Deep Learning GUI for Filament Network Segmentation & Tracking in Microscopy Images
FILAI
Developed at the Miranda Laboratory at NCSU with the support of NIH-NIGMS R00GM135487 and the National Institute for Theory and Mathematics in Biology (NITMB). This research was supported in part by grants from the NSF (DMS-2235451) and Simons Foundation (MPS-NITMB-00005320) to the NSF-Simons National Institute for Theory and Mathematics in Biology (NITMB).
A deep learning platform for microscopy analysis of filamentous fungal life cycles.
A Python package with a PySide6 GUI for segmentation and tracking of fungal filaments via deep learning image segmentation and generative frame interpolation in microscopy time-lapse images. Built by combining Cellpose, Omnipose, and real-time frame interpolation (RIFE).
Features
- 7 Pre-trained Models — Detection (instance segmentation) of different fungal morphological landmarks: fungal filaments (hyphae), conidia, sporangiophores, filament tips, branching points, septa, and crossing points.
- Ensemble Segmentation Mode — Pretrained models can be combined to produce multidimensional masks depicting different detected structures on fungal filament images, available for single images or batch processing of image time series.
- RIFE Interpolation — Generative temporal upsampling of image time series (2×/4×/8×/16×) to facilitate the tracking of filaments and fungal features.
- Tracking — Based solely on mask overlap, with gap-filling and ID consistency thanks to RIFE interpolation.
- Model Retraining — Fine-tune models to your custom datasets by providing or labeling new masks to retrain the Cellpose or Omnipose models.
- Interactive Visualization — Real-time color-coded mask overlays.
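The overlap-based tracking mentioned above can be sketched in a few lines of NumPy. This is a simplified illustration only, not FILAI's actual implementation, which additionally handles gaps and interpolated frames:

```python
import numpy as np

def relabel_by_overlap(prev_mask, curr_mask):
    """Each object in curr_mask inherits the ID of the prev_mask object it
    overlaps most; objects with no overlap start a new track (sketch only)."""
    out = np.zeros_like(curr_mask)
    next_id = prev_mask.max() + 1
    for cid in np.unique(curr_mask):
        if cid == 0:                       # 0 is background
            continue
        region = curr_mask == cid
        overlapping = prev_mask[region]
        overlapping = overlapping[overlapping > 0]
        if overlapping.size:
            out[region] = np.bincount(overlapping).argmax()  # most-overlapped previous ID
        else:
            out[region] = next_id          # no overlap: new track ID
            next_id += 1
    return out

prev = np.array([[1, 1, 0], [0, 0, 2]])
curr = np.array([[5, 5, 0], [0, 0, 7]])
print(relabel_by_overlap(prev, curr))  # objects keep IDs 1 and 2 from the previous frame
```

Because consecutive interpolated frames differ less, objects overlap more between frames, which is why RIFE upsampling improves ID consistency.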
Installation
Prerequisites: Install Anaconda first.
Anaconda Setup (Required)
- Download the installer for your OS:
- Anaconda Distribution: https://www.anaconda.com/products/distribution
- Install Anaconda:
- Windows: Run the installer (`.exe`). You can leave "Add Anaconda to my PATH environment variable" unchecked, then use Anaconda Prompt.
- macOS/Linux: Run the installer and allow shell initialization when prompted.
- Open a new terminal (or Anaconda Prompt on Windows) and verify:
conda --version
python --version
- If `conda` is not found on macOS/Linux, initialize your shell and restart the terminal:
conda init zsh
# or:
conda init bash
- Create and activate the FILAI environment:
conda create -n filai python=3.10 -y
conda activate filai
- Optional but recommended once on a fresh install:
conda update -n base -c defaults conda -y
- Daily usage:
  - Start work: `conda activate filai`
  - Exit environment: `conda deactivate`
Windows (Sometimes Required): Microsoft Visual C++ Build Tools
If you see an error like Microsoft Visual C++ 14.x is required during pip install, install Microsoft's C++ build tools:
- Download Build Tools for Visual Studio: https://visualstudio.microsoft.com/visual-cpp-build-tools/
- Run the installer and select Desktop development with C++
- In installation details, ensure these are selected:
- MSVC v14x C++ build tools (latest available)
- Windows 10/11 SDK (latest available)
- Complete installation, then restart your terminal
- Re-activate your environment and retry:
conda activate filai
pip install --no-cache-dir filai
Quick Install (CPU - Recommended for First Install)
If you already created the `filai` environment in the Anaconda setup steps above, skip step 1.
# 1. Create environment with Python 3.10 (important!)
conda create -n filai python=3.10 -y
conda activate filai
# 2. Install FILAI (includes CPU versions of PyTorch)
pip install filai
# 3. Download models (segmentation + interpolation)
filai-download-models
# 4. Launch
filai
Upgrade an Existing Installation
If FILAI is already installed in your existing Conda environment, run:
# 1. Activate your existing FILAI environment
conda activate filai
# 2. Upgrade FILAI from PyPI
python -m pip install --upgrade filai
# 3. Verify installed version
python -c "import importlib.metadata as im; print(im.version('filai'))"
# 4. Optional: refresh/download model files
filai-download-models
If you hit stale wheel/cache issues during upgrade, retry with:
python -m pip install --upgrade --no-cache-dir filai
GPU Support (NVIDIA Only - Optional but Recommended for Speed)
FILAI installs CPU versions of PyTorch by default. For significantly faster segmentation and interpolation, upgrade to GPU support:
Prerequisites:
- Install NVIDIA drivers for your GPU
- Verify GPU detection: run `nvidia-smi` in a terminal and note the CUDA version displayed
Upgrade to GPU:
# Activate your filai environment
conda activate filai
# Remove CPU versions
pip uninstall torch torchvision -y
pip install torch==2.5.1 torchvision==0.20.1 --index-url https://download.pytorch.org/whl/cu118
Verify GPU installation: After installing, check that PyTorch sees your GPU:
python -c "import torch; print(f'CUDA available: {torch.cuda.is_available()}')"
You should see CUDA available: True.
Note: For Mac (M1/M2/M3), use:
conda install pytorch torchvision -c pytorch -y
Note: The model downloader fetches all required models, including:
- Segmentation models (7 models, ~175 MB total)
- RIFE interpolation model (~55 MB)
To download only specific models, run `filai-download-models --list` to see the options.
Daily Usage: After installation, just run:
conda activate filai
filai
Try built-in sample data (5 TIFF frames):
conda activate filai
filai --test
This copies packaged sample frames to ~/.filai/toy_dataset (writable) and opens them automatically in Processing view.
Packaged sample TIFFs in filai/toy_dataset are source assets and should remain unmodified.
In --test, single-model mode auto-defaults Resize to 0.5 for Coni_7 and 1.0 for other models (you can still edit Resize manually).
Use --test (double dash); -test is intentionally not supported.
Usage Guide
Input Naming
No strict naming format is required for the GUI to load original input images.
For reliable behavior, use this naming convention:
- Time series: zero-padded sequential names (example: `frame_0001.tif`, `frame_0002.tif`, ...) so frame order stays correct.
- If pairing with masks: keep the same base name (example: `frame_0001.tif` ↔ `frame_0001_cp_masks.tif`) for best auto-matching.
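A quick way to see why zero-padding matters: plain string sorting, which many file loaders rely on, misorders unpadded frame numbers.

```python
# Zero-padded frame names sort correctly as plain strings; unpadded names do not.
padded = [f"frame_{i:04d}.tif" for i in (1, 2, 10)]
unpadded = [f"frame_{i}.tif" for i in (1, 2, 10)]

print(sorted(padded))    # stays in temporal order
print(sorted(unpadded))  # "frame_10.tif" jumps ahead of "frame_2.tif"
```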
Where Images and Masks Should Be
- Segmentation: Put source images in the folder you load in the Processing view (`Load image` or `Load image folder`). Generated masks are saved by FILAI in output subfolders for the selected model.
- Interpolation: Put source time-series images in one input folder, and choose a separate output folder for interpolated frames.
- Tracking: Provide a folder containing labeled mask files (for example `*_cp_masks.tif`) and choose a separate output folder for tracking exports.
- Model Retraining: Use two folders: one `Images Folder` (training images) and one `Masks Folder` (matching labeled masks with corresponding base names).
Segmentation
Single Image:
- Load image
- Select model in Models tab
- Run segmentation
- View color-coded overlay
Batch Processing:
- Load image folder
- Choose model
- Click "Run Time Series"
- Results saved to `{folder}/{model_name}/`
Ensemble Mode:
- Select multiple models in Models tab
- Run segmentation
- Each structure type in unique color
Frame Interpolation
- Go to Interpolation tab
- Select input/output folders
- Choose factor (2×, 4×, 8×, 16×)
- Click "Start Interpolation"
- Preview results in viewer
Tracking
- Go to Tracking tab
- Load masks folder
- Set output folder
- Click "Start Tracking"
- Export: `.npy`, `.mat`, and `.pklt` formats
Model Retraining
- Open the `Retrain` view from the left sidebar (ADVANCED -> Retrain).
- Select `Images Folder` (training images) and `Masks Folder` (labeled masks).
- Click `Base Model (.pth)` and choose the pretrained model file you want to fine-tune.
- Set the mask pattern if needed (for example `_cp_masks`), then click `Validate Data` to confirm image-mask pairs.
- Set training hyperparameters: `Epochs`, `Learning Rate`, and `Weight Decay` (defaults work for many use cases).
- Click `Start Training` and confirm the training dialog.
- Monitor progress in the training progress window. (Detailed console logs are shown when verbose logging is enabled.)
- After completion, FILAI saves the retrained model in the generated `preprocessed_<timestamp>/models/` folder, typically named `<base_model>_retrain.pth` (or `<base_model>_retrain_<timestamp>.pth` if needed).
- To use the retrained model in segmentation, open ADVANCED -> Add Model, click `Browse and Add Model`, and select the saved `.pth` file.
- The imported model appears in the models list and can be used in single-model or ensemble workflows. If performance is not satisfactory, refine the labels/data and retrain.
Pre-trained Models
| Model | Purpose |
|---|---|
| `FilaTip_6.pth` | Filament tips (recommended) |
| `Coni_7.pth` | Conidia/spores |
| `Phore_2.pth` | Sporangiophore |
| `FilaBranch_2.pth` | Branch points |
| `FilaCross_2.pth` | Crossings |
| `FilaSeptum_4.pth` | Septa |
| `Retrain_omni_5.pth` | General/custom |
Troubleshooting
"conda: command not found"
- Install Miniconda: https://docs.conda.io/en/latest/miniconda.html
- Restart terminal after installation
Installation takes long
- Normal - downloads ~2.5 GB (PyTorch + dependencies)
- First install: 10-20 minutes
Windows install error: "Microsoft Visual C++ 14.x is required"
- Install Build Tools for Visual Studio: https://visualstudio.microsoft.com/visual-cpp-build-tools/
- Select workload: Desktop development with C++
- Then retry: `pip install --no-cache-dir filai`
Models don't download
- Check internet connection
- Run manually: `filai-download-models`
- Check the `FiloGUI_models/` folder
GPU not detected
- Verify: `python -c "import torch; print(torch.cuda.is_available())"`
- Install the CUDA build of PyTorch that matches your GPU
- The application automatically falls back to CPU
Out of memory
- Reduce batch size
- Process smaller image regions
- Use CPU mode: `conda install pytorch torchvision cpuonly -c pytorch -y`
File Formats
Supported Inputs:
- Images: `.tif`, `.tiff`, `.png`, `.jpg`
- Masks: `.tif`, `.tiff`, `.png`, `.jpg` (labeled)
Outputs:
- Masks: `_cp_masks.tif` (labeled)
- Tracks: `.npy` (NumPy), `.mat` (MATLAB), `.pklt` (metadata)
- Interpolated: Same format as input
Directory Arrangement and Naming
Drag and drop a directory of 2D image files into the GUI (or load it from the file menu) to start segmentation, labeling, interpolation, or processing. Each file should contain a single 2D image and use a standard image extension.
Important behavior:
- FILAI writes generated masks/labels and processed images into the loaded directory.
- Removing those generated files will remove the corresponding saved results from prior sessions.
- Keep image dimensions consistent within the same channel.
- For best loading reliability, keep only valid image files and FILAI-generated files in the working directory.
Multi-channel and pre-generated mask naming:
- Use a clear data-type identifier immediately before the extension (for example: `_phase.tif`, `_channel2.png`, `_mask1.tif`).
- Any image that is a label/mask should include `_mask` in its identifier (for example: `_mask_nucleus.tif`, `_mask_cytoplasm.jpg`).
- Each identifier/group should have the same number of time points; mismatched counts across channels or masks will raise an error.
Example directory (2 time points, 2 channels, 2 masks):
- `im001_channel1.tif`, `im001_channel2.tif`, `im001_mask1.tif`, `im001_mask2.tif`
- `im002_channel1.tif`, `im002_channel2.tif`, `im002_mask1.tif`, `im002_mask2.tif`
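The equal-counts rule above can be checked before loading. The helper below is hypothetical (FILAI performs an equivalent validation internally); it groups files by the identifier after the first underscore and raises on mismatched time-point counts:

```python
from collections import defaultdict
from pathlib import Path

def validate_groups(filenames):
    """Group files by the identifier before the extension and check that every
    group has the same number of time points (illustrative helper only)."""
    groups = defaultdict(list)
    for name in filenames:
        ident = Path(name).stem.split("_", 1)[1]  # "im001_channel1" -> "channel1"
        groups[ident].append(name)
    counts = {k: len(v) for k, v in groups.items()}
    if len(set(counts.values())) > 1:
        raise ValueError(f"mismatched time points per group: {counts}")
    return counts

files = ["im001_channel1.tif", "im001_channel2.tif", "im001_mask1.tif", "im001_mask2.tif",
         "im002_channel1.tif", "im002_channel2.tif", "im002_mask1.tif", "im002_mask2.tif"]
print(validate_groups(files))  # every group has 2 time points
```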
License
BSD 3-Clause License - see LICENSE file
That's it! The GUI will open and you're ready to analyze microscopy images.
Package Structure
FILAI/
├── filai/ # Main package
│ ├── main.py # GUI application entry point
│ ├── tracking_functions.py # Overlap-based tracking algorithm
│ ├── model_downloader.py # Automated model fetching
│ ├── interpolate.py # RIFE interpolation wrapper
│ ├── gui_styles.py # UI theming
│ ├── assets/ # Icons and resources
│ └── rife_model/ # RIFE neural network implementation
├── FiloGUI_models/ # Downloaded models directory
│ ├── Coni_7.pth # Conidia segmentation
│ ├── FilaTip_6.pth # Filament tips
│ └── ... # 5 additional models
├── pyproject.toml # Package configuration
├── LICENSE # BSD 3-Clause License
└── README.md # This file
Core Workflow
1. Segmentation
FILAI provides three segmentation modes powered by Cellpose and Omnipose:
Single Image Mode
Load Image → Select Model → Run Segmentation → View Overlay
- Instantly segment individual microscopy images
- Real-time color-coded mask overlay
- Export labeled masks as `.tif` files
Batch Time-Series Mode
Select Folder → Choose Model → Run Time Series → Auto-save Results
- Process entire directories of sequential images
- Results saved to `{folder}/{model_name}/` with the `_cp_masks.tif` suffix
- Progress tracking with frame counter
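The output path convention above can be expressed as a small helper. The function name is illustrative (FILAI builds these paths internally):

```python
from pathlib import Path

def mask_output_path(image_path: str, model_name: str) -> str:
    # Batch results land in {folder}/{model_name}/ with a _cp_masks.tif suffix.
    p = Path(image_path)
    return (p.parent / model_name / f"{p.stem}_cp_masks.tif").as_posix()

print(mask_output_path("data/frame_0001.tif", "FilaTip_6"))
# data/FilaTip_6/frame_0001_cp_masks.tif
```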
Ensemble Mode
Models Tab → Check Multiple Models → Run Ensemble → Multi-color Overlay
- Combine predictions from multiple specialized models
- Each structure type rendered in unique color
- Ideal for complex samples with tips, branches, and septa
2. Frame Interpolation (RIFE)
Increase temporal resolution using AI-powered interpolation for tracking fast-moving structures:
Usage
Input Folder → Output Folder → Interpolation Factor → Start
Interpolation Options:
- 2× — Double frame count (1 intermediate frame)
- 4× — Quadruple frames (3 intermediate frames)
- 8× — 8× frames (7 intermediate frames)
- 16× — 16× frames (15 intermediate frames)
Example: 20 frames → 305 frames at 16× (perfect for tip velocity analysis)
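The frame counts follow from simple arithmetic (a sketch of the formula, not FILAI code): each of the N−1 gaps between frames gains factor−1 intermediate frames.

```python
def interpolated_frame_count(n_frames: int, factor: int) -> int:
    # Each of the (n_frames - 1) gaps gains (factor - 1) intermediate frames.
    return n_frames + (n_frames - 1) * (factor - 1)

print(interpolated_frame_count(20, 16))  # 305, matching the example above
print(interpolated_frame_count(20, 2))   # 39
```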
Technical Details:
- Supports `.tif`, `.tiff`, `.png`, `.jpg`
- Auto-converts `uint32` → `uint16` for microscopy compatibility
- GPU-accelerated (CUDA) with CPU fallback
- Preserves bit depth and dynamic range
- Integrated preview browser with frame navigation
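One way to do a range-preserving `uint32` → `uint16` conversion is to rescale to the `uint16` maximum. This is an illustrative sketch; FILAI's exact conversion may differ:

```python
import numpy as np

def to_uint16(frame: np.ndarray) -> np.ndarray:
    # Rescale so the brightest pixel maps to 65535, preserving relative
    # dynamic range (illustrative; not necessarily FILAI's method).
    as_float = frame.astype(np.float64)
    peak = as_float.max()
    if peak == 0:
        return frame.astype(np.uint16)
    return (as_float / peak * np.iinfo(np.uint16).max).astype(np.uint16)

img = np.array([[0, 2_000_000], [4_000_000, 1_000_000]], dtype=np.uint32)
out = to_uint16(img)
print(out.dtype, out.min(), out.max())  # uint16 0 65535
```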
3. Tracking
Track individual cells/tips across time with gap-filling and ID consistency:
Usage
Mask Folder → Output Folder → Start Tracking → Export Results
Output Formats:
- `{pos}_Tracks.npy` — 3D NumPy array (H × W × T)
- `{pos}_ART_Tracks_MATLAB.mat` — MATLAB-compatible format
- `{pos}_Tracks_vars_file.pklt` — Metadata (sizes, lifetimes, statistics)
Pre-trained Models
FILAI includes 7 specialized models (~25MB each, ~175MB total):
| Model | Purpose | Best For |
|---|---|---|
| `Coni_7.pth` | Conidia | Fungal spores, reproductive structures |
| `Phore_2.pth` | Sporangiophore | Spore-bearing aerial hyphae |
| `FilaBranch_2.pth` | Branch Points | Hyphal branching junctions |
| `FilaCross_2.pth` | Crossings | Overlapping filament networks |
| `FilaSeptum_4.pth` | Septa | Cell wall divisions, compartments |
| `FilaTip_6.pth` | Tips | Filament tip tracking (recommended) |
| `Retrain_omni_5.pth` | General | Multi-purpose, custom-trained |
Model Management
# List available models
filai-download-models --list
# Download all models
filai-download-models --all
# Download specific models
filai-download-models FilaTip_6 Coni_7
# Interactive selection
filai-download-models
Models are downloaded to `FiloGUI_models/` in the current directory or to `~/.filai/models/`.
Recommended Workflow: Complete Analysis Pipeline
For high-quality filament network tip tracking:
1. Acquire Microscopy Data
└─ Time-lapse .tif series (e.g., 20-50 frames, 2-5 min intervals)
2. Frame Interpolation (Optional but Recommended)
└─ RIFE 8× or 16× for capturing fast growth dynamics
└─ 20 frames → 305 frames at 16×
3. Segmentation
└─ Batch process with FilaTip_6 model
└─ Outputs: {folder}/FilaTip_6/*_cp_masks.tif
4. Tracking
└─ Run tracking on segmented masks
└─ Automatically handles gaps and ID consistency
└─ Outputs: Tracked masks + metadata
5. Analysis
└─ Load .npy or .mat files in Python/MATLAB
└─ Extract growth rates, velocities, morphology
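For step 5, the exported `.npy` tracks can be explored directly in Python. A minimal sketch using the (H × W × T) labeled layout described above, with a tiny synthetic array standing in for a real file (in practice you would call `np.load` on a `{pos}_Tracks.npy` file; the filename here is illustrative):

```python
import numpy as np

# Synthetic (H, W, T) labeled track array; real data: np.load("pos1_Tracks.npy")
tracks = np.zeros((4, 4, 3), dtype=np.uint16)
tracks[1, 1, 0] = 1   # track 1 present in frame 0
tracks[1, 2, 1] = 1   # ... and frame 1 (tip moved one pixel)
tracks[2, 2, 2] = 2   # track 2 appears only in frame 2

# Lifetime = number of frames in which each track ID appears (0 is background)
lifetimes = {
    int(tid): sum((tracks[:, :, t] == tid).any() for t in range(tracks.shape[2]))
    for tid in np.unique(tracks) if tid != 0
}
print(lifetimes)  # {1: 2, 2: 1}
```

From the same array you can extract per-frame centroids to estimate tip velocities.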
Typical Performance:
- Segmentation: ~2-5 seconds/frame (GPU)
- Interpolation 16×: ~1-3 seconds/frame (GPU)
- Tracking: ~0.5 seconds/frame (CPU)
Development
Install from Source
git clone https://github.com/vatsal-dp/filogui3.git
cd filogui3
conda create -n filai-dev python=3.10
conda activate filai-dev
conda install pytorch torchvision pytorch-cuda=11.8 -c pytorch -c nvidia
pip install -e . # Editable install
filai-download-models --all
filai
Hot-Reload Development Mode
python dev_runner.py # Auto-reloads on code changes
Project Structure
filai/
├── main.py # Main GUI (5000+ lines, modular views)
├── tracking_functions.py # Core tracking algorithm
├── model_downloader.py # Automated model fetching
├── interpolate.py # RIFE wrapper
├── gui_styles.py # Centralized theming
├── interpolation_view.py # Interpolation tab UI
├── interpolation_dialog.py # Interpolation dialogs
├── debug_viewer.py # Debug visualization
├── matlab_equivalent_functions.py # MATLAB compatibility layer
└── rife_model/ # RIFE neural network
├── RIFE.py # Main RIFE model
├── IFNet.py # Feature extraction
└── pytorch_msssim/ # Perceptual loss
Technical Specifications
Supported Formats
| Category | Formats | Notes |
|---|---|---|
| Input Images | `.tif`, `.tiff`, `.png`, `.jpg` | 8/16/32-bit |
| Segmentation Output | `.tif` (16-bit labeled) | Instance masks |
| Tracking Output | `.npy`, `.mat`, `.pklt` | NumPy, MATLAB, Pickle |
Troubleshooting
GPU Not Detected
import torch
print(torch.cuda.is_available()) # Should be True
Fix:
conda install pytorch torchvision pytorch-cuda=11.8 -c pytorch -c nvidia
Models Not Found
# Download to current directory
filai-download-models --all
# Or specify custom path
filai-download-models --all --output-dir /path/to/models
Import Errors (Cellpose/Omnipose)
pip install --upgrade cellpose==2.1.0 omnipose==0.4.4
RIFE Not Available Warning
pip install opencv-python-headless
Acknowledgements
Sandhya Neupane, Susmita Gaire, Kevin Garcia, Tika B. Adhikari, Orlando Arguello-Miranda
1 Plant and Microbial Biology, North Carolina State University.
2 Entomology and Plant Pathology, North Carolina State University.
3 Crop and Soil Sciences, North Carolina State University.
Current address: Department of Natural Sciences, Tennessee Wesleyan University.