Skip to main content

Comprehensive AI/ML framework for Python

Project description

AI/ML Framework

A comprehensive, modular, and intelligent AI/ML framework in Python that covers the entire machine learning lifecycle from data preprocessing to model deployment.

๐Ÿš€ Features

๐Ÿ“Š Data Processing & Analysis

  • Automated Data Analysis: Comprehensive dataset analysis with quality assessment
  • Smart Preprocessing: AI-powered preprocessing recommendations and automation
  • Feature Engineering: Automatic feature creation and selection
  • Data Validation: Built-in data quality checks and validation

๐Ÿค– AutoML & Model Selection

  • Intelligent Model Selection: Automatic model selection based on data characteristics
  • Hyperparameter Optimization: Advanced optimization using Optuna, Ray Tune
  • Ensemble Methods: Automatic ensemble creation and optimization
  • Model Evaluation: Comprehensive evaluation metrics and comparison

๐Ÿง  Deep Learning

  • Neural Network Designer: AI-powered architecture design
  • Multi-framework Support: TensorFlow, PyTorch, Keras integration
  • Training Visualization: Real-time training monitoring and visualization
  • Transfer Learning: Pre-trained model integration

๐Ÿ”ง Pipeline Management

  • Automated Pipelines: Scikit-learn pipeline creation and management
  • Version Control: Model and pipeline versioning with semantic versioning
  • Experiment Tracking: Comprehensive experiment management
  • Pipeline Deployment: Easy deployment of trained pipelines

๐ŸŒ API Generation & Deployment

  • Automatic API Generation: FastAPI-based REST API generation
  • Multi-platform Deployment: Docker, Kubernetes, cloud deployment
  • API Documentation: Auto-generated OpenAPI/Swagger documentation
  • Monitoring: Built-in API monitoring and logging

๐Ÿ“ˆ Visualization & Dashboards

  • Interactive Visualizations: Plotly, Matplotlib, Seaborn integration
  • Real-time Dashboards: Streamlit-based monitoring dashboards
  • Model Interpretability: SHAP, LIME integration
  • Performance Tracking: Real-time performance visualization

๐ŸŽฏ AI Recommendations

  • Intelligent Recommendations: AI-powered suggestions for next steps
  • Workflow Optimization: Automated workflow improvement suggestions
  • Best Practices: ML best practices integration
  • Performance Optimization: Automatic performance tuning recommendations

๐Ÿ“ฆ Installation

Prerequisites

  • Python 3.8 or higher
  • Git

Quick Install

# Clone the repository
git clone https://github.com/your-username/ai-ml-framework.git
cd ai-ml-framework

# Create virtual environment
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

# Install dependencies
pip install -r requirements.txt

# Install the framework
pip install -e .

Development Install

# Install with development dependencies
pip install -e ".[dev]"

# Install pre-commit hooks
pre-commit install

Optional Dependencies

For specific functionality, install optional dependencies:

# Deep learning (TensorFlow/PyTorch)
pip install tensorflow torch torchvision

# Experiment tracking (MLflow, Weights & Biases)
pip install mlflow wandb

# Dashboard (Streamlit)
pip install streamlit

# GPU support
pip install cupy-cuda11x

๐Ÿš€ Quick Start

Basic Usage

from ai_ml_framework.preprocessing import AutoPreprocessor
from ai_ml_framework.auto_ml import AutoMLSelector
from ai_ml_framework.pipeline import PipelineCreator

# Load your data
import pandas as pd
df = pd.read_csv('your_data.csv')

# Preprocess data
preprocessor = AutoPreprocessor(target_column='target')
X_processed, y_processed = preprocessor.fit_transform(df)

# Auto-select and train models
automl = AutoMLSelector(problem_type='classification')
best_model = automl.auto_select_and_train(X_processed, y_processed)

# Create pipeline
pipeline_creator = PipelineCreator()
pipeline = pipeline_creator.create_auto_pipeline(df, target_column='target')

Complete Workflow

from ai_ml_framework.utils import AIRecommendationsEngine
from ai_ml_framework.api import APIGenerator

# Get AI recommendations
recommender = AIRecommendationsEngine()
report = recommender.generate_comprehensive_report(df, target_column='target')

# Generate API from trained model
api_generator = APIGenerator('trained_model.pkl')
app = api_generator.generate_api()

๐Ÿ“š Examples

The examples/ directory contains comprehensive examples:

  • preprocessing_example.py - Data preprocessing and analysis
  • automl_example.py - AutoML model selection and optimization
  • deep_learning_example.py - Neural network design and training
  • pipeline_example.py - Pipeline creation and management
  • api_example.py - REST API generation and deployment
  • complete_workflow_example.py - End-to-end ML workflow

Run examples:

cd examples
python complete_workflow_example.py

๐Ÿ—๏ธ Architecture

ai_ml_framework/
โ”œโ”€โ”€ preprocessing/          # Data preprocessing and analysis
โ”œโ”€โ”€ auto_ml/                # AutoML and model selection
โ”œโ”€โ”€ deep_learning/          # Deep learning tools
โ”œโ”€โ”€ pipeline/               # Pipeline management
โ”œโ”€โ”€ api/                    # API generation and deployment
โ”œโ”€โ”€ visualization/          # Visualization and dashboards
โ”œโ”€โ”€ utils/                  # Utilities and recommendations
โ”œโ”€โ”€ experiments/            # Experiment tracking
โ””โ”€โ”€ examples/               # Example scripts

๐Ÿ”ง Configuration

Environment Variables

# MLflow tracking
MLFLOW_TRACKING_URI=http://localhost:5000

# Weights & Biases
WANDB_API_KEY=your_wandb_key

# GPU support
CUDA_VISIBLE_DEVICES=0,1

Configuration Files

Create .env file:

FRAMEWORK_LOG_LEVEL=INFO
DEFAULT_EXPERIMENT_TRACKER=mlflow
API_HOST=localhost
API_PORT=8000

๐Ÿ“Š Supported Algorithms

Classification

  • Random Forest
  • Gradient Boosting (XGBoost, LightGBM, CatBoost)
  • Support Vector Machines
  • Neural Networks
  • Logistic Regression
  • k-Nearest Neighbors

Regression

  • Linear Regression
  • Random Forest Regressor
  • Gradient Boosting Regressor
  • SVR
  • Neural Networks
  • Ridge/Lasso Regression

Clustering

  • K-Means
  • DBSCAN
  • Hierarchical Clustering
  • Gaussian Mixture Models

Deep Learning

  • CNN (Convolutional Neural Networks)
  • RNN/LSTM
  • Transformers
  • Autoencoders
  • GANs

๐Ÿš€ Deployment

Docker Deployment

# Build Docker image
docker build -t ml-api .

# Run container
docker run -p 8000:8000 ml-api

Kubernetes Deployment

# Apply Kubernetes manifests
kubectl apply -f kubernetes/

# Check deployment
kubectl get pods

Cloud Deployment

# AWS (ECS)
python -m ai_ml_framework.api.deployment --platform aws

# Google Cloud (Cloud Run)
python -m ai_ml_framework.api.deployment --platform gcp

# Azure (Container Instances)
python -m ai_ml_framework.api.deployment --platform azure

๐Ÿ“ˆ Monitoring & Logging

Experiment Tracking

from ai_ml_framework.experiments import ExperimentTracker

# Initialize tracker
tracker = ExperimentTracker(backend="mlflow")

# Start experiment
run_id = tracker.start_run("my_experiment")

# Log metrics
tracker.log_metrics({"accuracy": 0.95, "loss": 0.1})

# Log model
tracker.log_model(model, "my_model")

API Monitoring

# Add monitoring to API
api_generator.add_monitoring_middleware()
api_generator.enable_rate_limiting(requests_per_minute=100)

๐Ÿงช Testing

Run Tests

# Run all tests
pytest

# Run with coverage
pytest --cov=ai_ml_framework

# Run specific test
pytest tests/test_preprocessing.py

Test Coverage

# Generate coverage report
pytest --cov=ai_ml_framework --cov-report=html

๐Ÿ“– Documentation

Build Documentation

# Install docs dependencies
pip install -e ".[docs]"

# Build documentation
cd docs
make html

# View documentation
open _build/html/index.html

API Documentation

After starting an API, visit:

  • http://localhost:8000/docs - Swagger UI
  • http://localhost:8000/redoc - ReDoc

๐Ÿค Contributing

We welcome contributions! Please see our Contributing Guide for details.

Development Workflow

  1. Fork the repository
  2. Create feature branch (git checkout -b feature/amazing-feature)
  3. Make changes
  4. Run tests (pytest)
  5. Commit changes (git commit -m 'Add amazing feature')
  6. Push to branch (git push origin feature/amazing-feature)
  7. Open Pull Request

Code Style

  • Use Black for formatting (black .)
  • Use isort for imports (isort .)
  • Follow PEP 8
  • Add type hints
  • Write docstrings

๐Ÿ“„ License

This project is licensed under the MIT License - see the LICENSE file for details.

๐Ÿ™ Acknowledgments

  • Scikit-learn for machine learning algorithms
  • TensorFlow and PyTorch for deep learning
  • FastAPI for API generation
  • MLflow for experiment tracking
  • Optuna for hyperparameter optimization
  • Plotly for visualizations

๐Ÿ“ž Support

๐Ÿ—บ๏ธ Roadmap

Version 2.0

  • Enhanced AutoML capabilities
  • More deep learning architectures
  • Advanced ensemble methods
  • Improved visualization tools

Version 2.1

  • Distributed training support
  • Advanced feature store
  • Model monitoring and alerting
  • Automated MLOps pipelines

Version 3.0

  • Graph neural networks
  • Reinforcement learning tools
  • Advanced NLP capabilities
  • Edge deployment support

๐Ÿ“Š Performance Benchmarks

Task Framework Accuracy Training Time Inference Time
Classification AI/ML Framework 94.5% 2.3s 0.001s
Regression AI/ML Framework Rยฒ=0.89 1.8s 0.001s
Clustering AI/ML Framework Silhouette=0.72 3.1s 0.002s

๐ŸŒŸ Star History

Star History Chart


Built with โค๏ธ by the AI/ML Framework Team

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

bouhlal_ml_framework-0.1.0.tar.gz (93.6 kB view details)

Uploaded Source

Built Distribution

If you're not sure about the file name format, learn more about wheel file names.

bouhlal_ml_framework-0.1.0-py3-none-any.whl (102.5 kB view details)

Uploaded Python 3

File details

Details for the file bouhlal_ml_framework-0.1.0.tar.gz.

File metadata

  • Download URL: bouhlal_ml_framework-0.1.0.tar.gz
  • Upload date:
  • Size: 93.6 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.10.13

File hashes

Hashes for bouhlal_ml_framework-0.1.0.tar.gz
Algorithm Hash digest
SHA256 2c76e8dc69f4993a5f4fa7ff7093837b45e328033db0c5d7cd236d51ffbbbcbb
MD5 4da1e488c6624c33441c99e49ac327d4
BLAKE2b-256 c3e5df43cdc462ba2fbb27d6bc414681fb92328c24d63d31c70db0f201fc4991

See more details on using hashes here.

File details

Details for the file bouhlal_ml_framework-0.1.0-py3-none-any.whl.

File metadata

File hashes

Hashes for bouhlal_ml_framework-0.1.0-py3-none-any.whl
Algorithm Hash digest
SHA256 1180e31dd4380d9321a607e348f3888ed46b47a88aef0feb534967ad8be5628e
MD5 3db2778480d8a8b425be7ed250bfe677
BLAKE2b-256 29d5e0c13f125451eedc438c5716261c15226f8daa8792368d0642b8c36bdea0

See more details on using hashes here.

Supported by

AWS Cloud computing and Security Sponsor Datadog Monitoring Depot Continuous Integration Fastly CDN Google Download Analytics Pingdom Monitoring Sentry Error logging StatusPage Status page