llm-fal

LLM CLI plugin for accessing fal.ai's generative AI models and services, including image generation, text processing, audio, and video models.

Installation

Install this plugin in the same environment as LLM:

llm install llm-fal

Configuration

First, set an API key for fal.ai:

llm keys set fal
# Paste key here

You can also set the key via the FAL_API_KEY environment variable.
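
The two configuration paths can be sketched as a simple fallback. This is an illustration of the lookup, not the plugin's actual code, and the plugin's real resolution order may differ:

```python
import os


def resolve_fal_key(stored_key=None):
    """Illustrative lookup: prefer a key stored via `llm keys set fal`,
    otherwise fall back to the FAL_API_KEY environment variable.
    (Hypothetical helper; the plugin's actual resolution may differ.)"""
    return stored_key or os.environ.get("FAL_API_KEY")
```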

Usage

Run llm fal models to list the available models, categorized by type (image, video, audio, text).

Run prompts with various models like this:

# Generate an image with standard parameters
llm -m fal-ai/fast-sdxl "A futuristic cityscape at sunset"

# Generate an image with custom parameters
llm -m fal-ai/lightning-sd "Astronaut on Mars" -o width 512 -o height 512 -o prompt_strength 7.5 -o steps 30

# Save generated image to file (using shell redirection)
llm -m fal-ai/fast-sdxl "Mountain landscape" > landscape.png

# Use text models if available
llm -m fal-ai/text-model "Write a short story about space travel"

Image Attachments

For models that support image-to-video or other transformations, you can attach source images:

# Convert an image to video with WAN-Pro
llm -m fal-ai/wan-pro/image-to-video "Add smooth movement" -a source_image.jpg

# Convert an image to video with Kling
llm -m fal-ai/kling-video/v2/master/image-to-video "Add smooth camera movement" -a source_image.jpg -o duration 10 -o aspect_ratio "16:9"

# Process an image with a specific model
llm -m fal-ai/image-processor "Enhance quality" -a input.png

Model Categories

The fal.ai plugin supports the following categories of models:

  1. Image Generation Models: Stable Diffusion variants, FLUX.1, Lightning models, and other text-to-image models

    llm -m fal-ai/fast-sdxl "A cat wearing a space helmet"
    llm -m fal-ai/lightning-sd "Sunset over mountains" -o prompt_strength 8.0
    llm -m fal-ai/flux/dev "Portrait of a robot artist" -o width 1024 -o height 1024
    
  2. Video Generation Models: Models for creating or manipulating video content

    # WAN Pro model
    llm -m fal-ai/wan-pro/image-to-video "Gentle camera movement" -a static_image.jpg
    
    # Kling Video model with additional options
    llm -m fal-ai/kling-video/v2/master/image-to-video "Smooth camera zoom" -a static_image.jpg -o duration 10 -o aspect_ratio "16:9" -o negative_prompt "blur, distortion" -o cfg_scale 0.8
    
  3. Audio Processing Models: Audio generation or processing models

    # Text-to-speech conversion
    llm -m fal-ai/playai/tts/dialog "This is a test of the text to speech capabilities"
    
    # Text-to-speech with voice and format options
    llm -m fal-ai/playai/tts/dialog "Convert this text to speech" -o voice default -o output_format mp3
    
  4. Text-Based Models: Any language models available on fal.ai's platform

    llm -m fal-ai/text-generation "Write a poem about technology"
    
  5. Custom Model Endpoints: Support for user-deployed custom model endpoints

    llm -m fal-ai/custom-endpoint-id "Your custom model prompt"
    

Model Options

The following options can be passed using -o name value on the CLI or as keyword=value arguments to the Python model.prompt() method:

  • max_tokens: int

    The maximum number of tokens to generate (for text models)

  • temperature: float

    Controls randomness in the output (0-1)

  • prompt_strength: float

    Controls how much the output adheres to the prompt (for image models)

  • width: int

    Width of generated image (for image models)

  • height: int

    Height of generated image (for image models)

  • steps: int

    Number of diffusion steps (for image models)

  • seed: int

    Seed for reproducible generation

  • voice: string

    Voice to use for text-to-speech models

  • output_format: string

    Output format for audio (mp3, wav, etc.)

  • duration: int

    Duration of the generated video in seconds (5 or 10, for Kling video model)

  • aspect_ratio: string

    Aspect ratio of the generated video ("16:9", "9:16", or "1:1", for Kling video model)

  • negative_prompt: string

    Negative prompt to specify unwanted elements in generation (for Kling video model)

  • cfg_scale: float

    Controls adherence to prompt (0.0-1.0) for Kling video model
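
As an illustration of the Kling video constraints listed above, a hypothetical client-side check (not part of the plugin; the endpoint performs its own validation) might look like:

```python
# Illustrative validation mirroring the documented Kling video options.
VALID_DURATIONS = {5, 10}
VALID_ASPECT_RATIOS = {"16:9", "9:16", "1:1"}


def validate_kling_options(duration, aspect_ratio, cfg_scale):
    """Check Kling video options against the documented ranges."""
    if duration not in VALID_DURATIONS:
        raise ValueError(f"duration must be 5 or 10, got {duration}")
    if aspect_ratio not in VALID_ASPECT_RATIOS:
        raise ValueError(f"aspect_ratio must be one of {sorted(VALID_ASPECT_RATIOS)}")
    if not 0.0 <= cfg_scale <= 1.0:
        raise ValueError(f"cfg_scale must be within 0.0-1.0, got {cfg_scale}")
    return {"duration": duration, "aspect_ratio": aspect_ratio, "cfg_scale": cfg_scale}
```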

Commands

The plugin provides the following CLI commands:

# List all available models categorized by type
llm fal models

# Check your API key
llm fal auth

# Set your API key
llm fal auth YOUR_API_KEY

Python API Usage

You can also use this plugin programmatically:

import llm

# Generate an image
model = llm.get_model("fal-ai/fast-sdxl")
response = model.prompt(
    "A beautiful landscape with mountains and a lake",
    width=1024,
    height=768,
    prompt_strength=7.5,
)

# Get the URL of the generated image
image_url = response.text()
print(f"Generated image: {image_url}")

# Generate a video from an image
from pathlib import Path

# Path to your source image
image_path = Path("source_image.jpg")

# Ensure the path exists
if image_path.exists():
    video_model = llm.get_model("fal-ai/kling-video/v2/master/image-to-video")
    video_response = video_model.prompt(
        "Add smooth camera movement",
        duration=10,
        aspect_ratio="16:9",
        negative_prompt="blur, distortion",
        cfg_scale=0.7,
        attachments=[llm.Attachment(path=str(image_path))],
    )

    # Get the URL of the generated video
    video_url = video_response.text()
    print(f"Generated video: {video_url}")

# Generate speech from text
tts_model = llm.get_model("fal-ai/playai/tts/dialog")
tts_response = tts_model.prompt(
    "This is a text to speech test",
    voice="default",
    output_format="mp3",
)

# Get the URL of the audio file
audio_url = tts_response.text()
print(f"Generated audio: {audio_url}")

# Responses contain URLs rather than raw bytes; to save an asset
# locally, download it from the URL (for example with urllib.request
# or requests)
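
Since generation responses return hosted URLs rather than raw bytes, saving an asset means downloading it. A minimal sketch using only the standard library (the URL and filename in the usage comment are placeholders):

```python
import urllib.request
from pathlib import Path


def download_asset(url, dest):
    """Download a generated asset (image, video, or audio) from its URL."""
    dest = Path(dest)
    with urllib.request.urlopen(url) as resp:
        dest.write_bytes(resp.read())
    return dest


# Hypothetical usage with a URL returned by a fal.ai model:
# download_asset(image_url, "landscape.png")
```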

Development

To set up this plugin locally, first check out the code. Then create a new virtual environment:

cd llm-fal
python3 -m venv venv
source venv/bin/activate

Now install the plugin in development mode:

pip install -e '.[test,dev]'

Testing

This project uses pytest for testing with pytest-vcr for recording and replaying API calls.

Run the tests:

pytest tests/

To run tests with coverage reporting:

pytest --cov=llm_fal tests/

When adding new tests, you can record new API interactions by setting your fal.ai API key and running:

PYTEST_FAL_API_KEY=your_api_key pytest tests/test_fal.py::test_name

The interactions will be recorded in the tests/cassettes/ directory.

Continuous Integration

This project uses GitHub Actions for continuous integration and deployment:

  • Testing: All commits and pull requests are automatically tested against multiple Python versions
  • Publishing: New releases are automatically published to PyPI when a new GitHub release is created

Architecture

This plugin follows a simple, single-file architecture similar to other LLM CLI plugins. The core functionality is contained in:

  • llm_fal.py: Main plugin file containing API client, model handling, and commands
  • pyproject.toml: Project configuration
  • README.md: Documentation
  • LICENSE: MIT License information

The implementation is intentionally minimal and streamlined, following the pattern of other successful LLM CLI plugins; this keeps the code maintainable and easy to understand while still providing all the core functionality.

Credits

Developed for use with the LLM command line interface.
