Frequency and quality factor estimation of exponentially decaying sinusoids
This repository contains theoretical analysis, numerical simulations, and experimental data analysis for frequency estimation of ring-down signals. Ring-down signals are exponentially decaying sinusoids that arise from measurements of harmonic oscillators with quality factor Q, where the amplitude decays exponentially due to energy dissipation.
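Concretely, the model is x(t) = A0 exp(-t/tau) sin(2 pi f0 t + phi0), with the decay time constant tied to the quality factor by tau = Q / (pi f0) (the convention assumed here; it matches the tau of about 636.6 s used with Q = 10^4 and f0 = 5 Hz elsewhere in this README). A minimal NumPy sketch, independent of the package API:

```python
import numpy as np

def ringdown(t, f0, Q, A0=1.0, phi0=0.0):
    """Exponentially decaying sinusoid with tau = Q / (pi * f0)."""
    tau = Q / (np.pi * f0)
    return A0 * np.exp(-t / tau) * np.sin(2 * np.pi * f0 * t + phi0)

fs, N = 100.0, 10_000            # sampling rate (Hz) and record length
t = np.arange(N) / fs
x = ringdown(t, f0=5.0, Q=10_000.0)

tau = 10_000.0 / (np.pi * 5.0)   # decay time constant, about 636.6 s
```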
Quickstart
Installation
From PyPI:
pip install ringdownanalysis
For development (testing, linting) or examples and notebooks, install from source:
git clone https://github.com/mdovale/RingDownAnalysis.git
cd RingDownAnalysis
pip install -e ".[dev]" # testing, ruff, mypy, sphinx
pip install -e ".[examples]" # jupyter, pandas for notebooks
pip install -e ".[all]" # everything
Basic Usage
Generate and Analyze a Ring-Down Signal
from ringdownanalysis import RingDownSignal, NLSFrequencyEstimator, DFTFrequencyEstimator
import numpy as np
# Generate a ring-down signal
signal = RingDownSignal(
    f0=5.0,        # Frequency (Hz)
    fs=100.0,      # Sampling frequency (Hz)
    N=10000,       # Number of samples
    A0=1.0,        # Initial amplitude
    snr_db=60.0,   # Initial SNR (dB)
    Q=10000.0,     # Quality factor
)
rng = np.random.default_rng(42)
t, x, phi0 = signal.generate(rng=rng)
# Estimate frequency using NLS method
nls_estimator = NLSFrequencyEstimator(tau_known=None)
f_nls = nls_estimator.estimate(x, signal.fs)
# Estimate frequency using DFT method
dft_estimator = DFTFrequencyEstimator(window="rect")
f_dft = dft_estimator.estimate(x, signal.fs)
print(f"True frequency: {signal.f0:.6f} Hz")
print(f"NLS estimate: {f_nls:.6f} Hz")
print(f"DFT estimate: {f_dft:.6f} Hz")
# Or estimate frequency, tau, and Q together
result_nls = nls_estimator.estimate_full(x, signal.fs)
result_dft = dft_estimator.estimate_full(x, signal.fs)
print(f"\nNLS full result: f={result_nls.f:.6f} Hz, tau={result_nls.tau:.2f} s, Q={result_nls.Q:.2e}")
print(f"DFT full result: f={result_dft.f:.6f} Hz, tau={result_dft.tau:.2f} s, Q={result_dft.Q:.2e}")
Analyze Experimental Data
from ringdownanalysis import BatchRingDownAnalyzer
import pandas as pd
# Initialize batch analyzer
batch_analyzer = BatchRingDownAnalyzer()
# Process all files in data directory
results = batch_analyzer.process_directory("data", verbose=True)
# Get summary table
summary = batch_analyzer.get_summary_table()
df_summary = pd.DataFrame(summary['data'])
print(df_summary)
# For notebook/console display strings:
df_display = pd.DataFrame(batch_analyzer.get_formatted_summary_table()['data'])
See examples/usage_example.py and examples/batch_analysis_example.py for more complete examples.
Profile Q, Limits, and Raw Diagnostics
Real ring-down records may not identify the decay time constant well enough for
a trustworthy Q estimate. RingDownAnalyzer therefore separates the preferred
profile-likelihood Q result from raw optimizer diagnostics:
- Q_profile is the recommended finite-Q estimate. It is only populated when a log-tau profile closes on both sides of the optimum.
- Q_profile_valid, Q_profile_status, Q_profile_reasons, Q_profile_ci95, Q_profile_lower_limit_95, and Q_profile_upper_limit_95 explain whether the data support a finite Q, a one-sided limit, or no useful bound.
- Q_nls_raw and Q_dft_raw preserve the direct fitted products from the legacy optimizers.
- Q_nls and Q_dft are retained as compatibility diagnostics and are only populated when the corresponding estimate is cleanly valid.
- Q_nls_valid, Q_dft_valid, Q_nls_status, Q_dft_status, and the Q_*_reasons lists explain invalid or warning-status estimates.
Common Q_profile_status values are:
- valid: Q_profile and Q_profile_ci95 are finite and may be quoted.
- lower_limit: the record supports a lower bound but not a finite upper interval endpoint. Quote Q_profile_lower_limit_95, not Q_profile.
- upper_limit: the record supports an upper bound only.
- unbounded, invalid, or failed: the record does not support a finite Q.
Always check status before quoting Q:
result = analyzer.analyze_array(t=t, data=x)
if result["Q_profile_valid"]:
    print(f"Q = {result['Q_profile']:.3e} with 95% interval {result['Q_profile_ci95']}")
elif result["Q_profile_status"] == "lower_limit":
    print(f"Q > {result['Q_profile_lower_limit_95']:.3e} at approximately 95% confidence")
else:
    print(f"Profile Q unavailable: {result['Q_profile_status']} {result['Q_profile_reasons']}")
Batch Q statistics prefer valid Q_profile values. If profile-Q metadata is
present but the profile is invalid or limit-only, the record is skipped by
default instead of falling back to NLS. Pass include_invalid=True to
calculate_q_factors() or get_q_factor_statistics() only for diagnostic work
where raw fitted values are explicitly desired.
For array workflows, pass detrend="constant" to analyze_array() when you
want the same constant-offset removal used by file loading. To inspect
window/crop sensitivity, use RingDownAnalyzer.q_sensitivity(...); it returns
DataFrame-ready records with valid Q, raw Q, status, reasons, tau bound flags,
and crop metadata for each requested start/duration/multiplier combination.
Configure Logging
The package uses a NullHandler by default (no log output). For easier debugging, enable console logging:
import logging
from ringdownanalysis import configure_logging, BatchRingDownAnalyzer
# Quick setup: INFO-level console output
configure_logging(level=logging.INFO)
analyzer = BatchRingDownAnalyzer()
results = analyzer.process_directory("data")
For production (file + console, rotation, structured format):
from examples.logging_config_example import setup_production_logging
import logging
setup_production_logging(log_dir='logs', log_level=logging.INFO)
See examples/logging_config_example.py for more logging configuration options.
Overview
The project compares two complementary approaches for frequency estimation:
- Nonlinear Least Squares (NLS) with explicit ring-down model
- Frequency-Domain Methods (DFT) with Lorentzian peak fitting
Both methods are evaluated against Cramér-Rao-style bounds derived from the explicit Fisher information matrix for ring-down signals. Real-data analyzer outputs such as plugin_crlb_std_f, uncertainty_std_f, and the backward-compatible crlb_std_f alias are plug-in diagnostics computed from the fitted model, residual noise, and selected crop. Treat batch ratios involving those values as heuristic consistency diagnostics rather than formal hypothesis tests.
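To make the NLS approach concrete, here is an illustrative fit of the explicit ring-down model using scipy.optimize.curve_fit. This is a sketch of the idea only, not the package's estimator; the model function and starting values are assumptions:

```python
import numpy as np
from scipy.optimize import curve_fit

def model(t, A0, f, tau, phi0):
    # Explicit ring-down model: exponentially decaying sinusoid
    return A0 * np.exp(-t / tau) * np.sin(2 * np.pi * f * t + phi0)

rng = np.random.default_rng(0)
fs, N = 100.0, 4_000
t = np.arange(N) / fs
x = model(t, 1.0, 5.0, 50.0, 0.3) + 1e-3 * rng.standard_normal(N)

# Start near the truth; in practice the frequency guess comes from a DFT peak.
p0 = [0.9, 5.002, 40.0, 0.0]
popt, _ = curve_fit(model, t, x, p0=p0)
A0_hat, f_hat, tau_hat, phi_hat = popt
```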
Features
Object-Oriented API
The package provides a modern object-oriented API:
- RingDownSignal: Generate synthetic ring-down signals with specified parameters
- FrequencyEstimator: Base class for frequency estimation methods
- NLSFrequencyEstimator: Nonlinear least squares estimation
  - estimate(): Returns frequency only
  - estimate_full(): Returns EstimationResult with frequency, tau, and Q
- DFTFrequencyEstimator: DFT-based estimation with Lorentzian fitting
  - estimate(): Returns frequency only
  - estimate_full(): Returns EstimationResult with frequency, tau (via NLS with fixed frequency), and Q
- EstimationResult: Named tuple containing (f, tau, Q) estimates
- CRLBCalculator: Calculate the Cramér-Rao lower bound for frequency estimation
- RingDownAnalyzer: Analyze individual ring-down data files
- BatchRingDownAnalyzer: Batch process multiple data files
- MonteCarloAnalyzer: Run Monte Carlo simulations to compare methods
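For intuition, the coarse stage of a DFT-based estimator can be sketched as locating the FFT magnitude peak and refining it by quadratic interpolation of the log magnitude; the package's Lorentzian peak fit is a more refined version of the same idea. The helper below is illustrative, not the package API:

```python
import numpy as np

def dft_peak_freq(x, fs):
    """Coarse frequency estimate: FFT peak bin plus quadratic interpolation."""
    X = np.abs(np.fft.rfft(x))
    k = int(np.argmax(X[1:-1])) + 1          # skip the DC and Nyquist bins
    a, b, c = np.log(X[k - 1]), np.log(X[k]), np.log(X[k + 1])
    delta = 0.5 * (a - c) / (a - 2 * b + c)  # sub-bin offset in [-0.5, 0.5]
    return (k + delta) * fs / len(x)

fs, N = 100.0, 10_000
t = np.arange(N) / fs
x = np.exp(-t / 200.0) * np.sin(2 * np.pi * 5.0 * t)
f_hat = dft_peak_freq(x, fs)                 # close to 5.0 Hz
```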
Compatibility Layer
A function-based layer is also provided for backward compatibility:
from ringdownanalysis import (
generate_ringdown,
estimate_freq_nls_ringdown,
estimate_freq_dft,
crlb_var_f_ringdown_explicit,
monte_carlo_analysis,
)
Usage Examples
Monte Carlo Analysis
Compare estimation methods using Monte Carlo simulations:
from ringdownanalysis import MonteCarloAnalyzer
analyzer = MonteCarloAnalyzer()
results = analyzer.run(
    f0=5.0,
    fs=100.0,
    N=1_000_000,
    A0=1.0,
    snr_db=60.0,
    Q=10000.0,
    n_mc=100,
    seed=42,
)
print(f"NLS std: {results['stats']['nls']['std']:.6e} Hz")
print(f"DFT std: {results['stats']['dft']['std']:.6e} Hz")
print(f"CRLB std: {results['crlb_std']:.6e} Hz")
Calculate CRLB
from ringdownanalysis import CRLBCalculator
crlb_calc = CRLBCalculator()
crlb_std = crlb_calc.standard_deviation(
    A0=1.0,
    sigma=0.001,
    fs=100.0,
    N=10000,
    tau=636.6,
)
print(f"CRLB standard deviation: {crlb_std:.6e} Hz")
Batch Analysis
Process multiple experimental data files:
from ringdownanalysis import BatchRingDownAnalyzer
import pandas as pd
batch_analyzer = BatchRingDownAnalyzer()
# Process all files in data directory
results = batch_analyzer.process_directory("data", verbose=True, n_jobs=-1)
# Valid Q factors are available from results or calculate_q_factors().
# Invalid/warning raw values stay in Q_nls_raw and Q_dft_raw for diagnostics.
batch_analyzer.calculate_q_factors() # Ensures valid Q is in results dict
q_stats = batch_analyzer.get_q_factor_statistics()
# Get summary table
summary = batch_analyzer.get_summary_table()
df_summary = pd.DataFrame(summary['data'])
# Use get_formatted_summary_table() or get_formatted_consistency_table()
# when you want preformatted display strings instead of numeric columns.
# Consistency analysis
consistency = batch_analyzer.consistency_analysis()
# CRLB comparison
crlb_analysis = batch_analyzer.crlb_comparison_analysis()
See examples/batch_analysis_example.py for a complete batch analysis example.
Project Structure
Core Package (ringdownanalysis/)
- signal.py: RingDownSignal class for signal generation
- estimators.py: Frequency estimation classes (NLS, DFT)
- crlb.py: CRLB calculation
- data_loader.py: Data loading utilities for CSV and MAT files
- analyzer.py: Single-file analysis
- batch_analyzer.py: Batch processing and analysis
- monte_carlo.py: Monte Carlo simulation framework
- compat.py: Compatibility layer (function-based API)
Documentation (docs/)
- api/: Sphinx API documentation (build with make -C docs/api html)
- data_format.md: Data format specification for CSV and MAT files
- tn/main.tex: Comprehensive LaTeX document with theoretical foundation
- tn/main.pdf: Compiled technical note
Examples (examples/)
- usage_example.py: Comprehensive usage examples for all features
- batch_analysis_example.py: Batch analysis workflow example
- benchmark.py: Simple performance benchmark comparing NLS and DFT methods
- logging_config_example.py: Examples for configuring logging in production and debugging
Benchmarks (benchmarks/)
- benchmark_suite.py: Comprehensive pytest-benchmark test suite
- run_benchmarks.py: Script to run benchmarks and generate reports
- run_profiling.py: Script to profile workloads and identify bottlenecks
- profile_utils.py: cProfile utilities for profiling workloads
- README.md: Detailed guide for benchmarking and profiling
See benchmarks/README.md for detailed information on performance benchmarking and profiling.
Notebooks (notebooks/)
- analysis_example.ipynb: Interactive analysis examples
- batch_analysis_example.ipynb: Batch analysis in notebook format
Key Results
- NLS Method: Achieves statistical efficiency, approaching the CRLB for ring-down signals when using the explicit ring-down model
- DFT Method: Provides computationally efficient estimation with Lorentzian peak fitting, but suffers from statistical inefficiency due to discrete frequency sampling
- Exponential Decay Impact: The exponential amplitude decay reduces effective observation time and SNR, degrading estimation performance compared to constant-amplitude signals. The degradation depends on the ratio T/τ (observation time to decay time constant)
- Scaling Relationships: For slow decay (T ≪ τ), accuracy scales as T⁻³/². For rapid decay (T ≫ τ), accuracy is limited by τ and scales as τ⁻³/²
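The slow-decay limit agrees with the classic constant-amplitude tone bound (Rife and Boorstyn, 1974), var(f_hat) >= 12 sigma^2 / ((2 pi)^2 A^2 Ts^2 N (N^2 - 1)), whose standard deviation falls as N^(-3/2) at fixed fs. A quick numeric check of that scaling, using the constant-amplitude formula as the T << tau reference:

```python
import numpy as np

def crlb_std_f_tone(A, sigma, fs, N):
    """Rife-Boorstyn CRLB (std, Hz) for a constant-amplitude tone's frequency."""
    Ts = 1.0 / fs
    var = 12 * sigma**2 / ((2 * np.pi)**2 * A**2 * Ts**2 * N * (N**2 - 1))
    return np.sqrt(var)

s1 = crlb_std_f_tone(A=1.0, sigma=1e-3, fs=100.0, N=10_000)
s2 = crlb_std_f_tone(A=1.0, sigma=1e-3, fs=100.0, N=20_000)
ratio = s2 / s1   # doubling T shrinks sigma_f by about 2**(-3/2) ~ 0.354
```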
Security
File input: Load only CSV and MAT files from trusted sources. MAT files use struct_as_record=False to reduce deserialization risks; for untrusted input, consider sandboxing or alternative loaders. CSV files via Pandas are generally safe for typical scientific data.
Path handling: process_directory() validates that the directory exists and rejects path traversal in the glob pattern (e.g., ../). For production use with user-supplied paths (e.g., from a web form), validate and resolve paths to a trusted base directory before passing them to the API.
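A minimal sketch of that validation pattern, with a hypothetical helper name (not part of the package):

```python
from pathlib import Path

def resolve_under_base(base: str, user_path: str) -> Path:
    """Resolve user_path against base and reject anything that escapes it."""
    base_dir = Path(base).resolve()
    candidate = (base_dir / user_path).resolve()
    if not candidate.is_relative_to(base_dir):
        raise ValueError(f"path escapes trusted base: {user_path!r}")
    return candidate

ok = resolve_under_base("data", "run1.csv")        # stays inside data/
# resolve_under_base("data", "../secrets.txt")     # would raise ValueError
```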
Data Format
Experimental data files should be placed in the data/ directory:
- CSV files: Moku:Lab Phasemeter format with time in column 1 and phase (cycles) in column 4. Comment lines starting with % and leading plain-text headers are skipped.
- MAT files: MATLAB format with moku.data as a non-empty 2D array containing time in column 1 and phase in column 4.
See docs/data_format.md for the full specification (column indices, units, validation rules, edge cases).
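For a quick look at such a file outside the package loader, a stdlib-only sketch (the sample rows are made up) that skips %-comment lines and extracts time (column 1) and phase in cycles (column 4):

```python
import csv
import io

# Hypothetical Moku-style snippet: % comment headers, then numeric rows.
raw = """% Moku:Lab Phasemeter
% Acquisition settings ...
0.000,1.0,0.0,0.000123
0.001,1.0,0.0,0.000456
"""

times, phases = [], []
for row in csv.reader(io.StringIO(raw)):
    if not row or row[0].lstrip().startswith("%"):
        continue                   # skip comment lines and blanks
    times.append(float(row[0]))    # column 1: time (s)
    phases.append(float(row[3]))   # column 4: phase (cycles)
```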
Dependencies
Core dependencies (automatically installed via pip install ringdownanalysis):
- NumPy >= 1.20.0
- SciPy >= 1.7.0
- Matplotlib >= 3.5.0
- tqdm >= 4.60.0
- joblib >= 1.0.0
- pandas >= 1.3.0
For reproducible environments (examples, notebooks), install from pinned versions:
pip install -r requirements.txt
pip install -e .
Optional dependencies:
- Jupyter >= 1.0.0 (for notebooks)
- pytest >= 7.0.0 (for testing)
- pytest-cov >= 4.0.0 (for coverage)
- pytest-benchmark >= 4.0.0 (for benchmarking)
- ruff >= 0.1.0 (for linting)
API Documentation
Build the Sphinx API docs:
make -C docs/api html
Then open docs/api/_build/html/index.html in a browser. Alternatively, install the dev dependencies and run:
pip install -e ".[dev]"
cd docs/api && make html
Testing
Run the test suite:
pytest
With coverage:
pytest --cov=ringdownanalysis --cov-report=html
Benchmarking
The package includes a comprehensive benchmarking and profiling suite to measure performance and identify bottlenecks:
# Run benchmarks with medium workload
python benchmarks/run_benchmarks.py --size medium
# Profile critical workloads
python benchmarks/run_profiling.py all --size medium
Or run benchmarks directly with pytest:
pytest benchmarks/benchmark_suite.py --benchmark-only
See benchmarks/README.md for detailed information on benchmarking and profiling workflows.
Development
CI/CD
GitHub Actions run on every push and pull request:
- Lint: Ruff check and format
- Test: pytest with coverage on Python 3.10, 3.11, 3.12
- Typecheck: mypy
Coverage is uploaded to Codecov (optional; add CODECOV_TOKEN secret for private repos).
Releasing
To publish a new version to PyPI:
1. Update version in pyproject.toml.
2. Create and push a tag:
git tag v0.1.0
git push origin v0.1.0
3. The release workflow builds and publishes to PyPI automatically.
Setup: Add PYPI_API_TOKEN as a repository secret (Settings → Secrets → Actions). Create a token at pypi.org/manage/account/token/.
References
- S. M. Kay, Fundamentals of Statistical Signal Processing: Estimation Theory. Prentice Hall, 1993.
- D. C. Rife and R. R. Boorstyn, "Single tone parameter estimation from discrete-time observations," IEEE Trans. Information Theory, 1974.