Transform any LLM into a methodical thinker that excels at systematic reasoning, like OpenAI o1 and DeepSeek R1.

🤔 LLM-Reasoner: Make any LLM think deeper, like OpenAI o1 and DeepSeek R1!
✨ What's Cool About It?
- 🧠 Step-by-Step Reasoning: No more black-box answers! See exactly how your LLM thinks, similar to o1's methodical approach
- 🔄 Real-time Progress: Watch the reasoning unfold with smooth animations
- 🎯 Multi-Provider Support: Works with all providers supported by LiteLLM
- 🎮 Sweet UI: A slick Streamlit interface to play with
- 🛠️ Power-User CLI: For when you want to get nerdy with it
- 📊 Confidence Tracking: Know how sure your LLM is about each step
🚀 Quick Start
Pop this in your terminal:
```shell
pip install llm-reasoner
```
Got API keys? Drop 'em in:
```shell
# Pick your flavor:
export OPENAI_API_KEY="sk-your-key"    # OpenAI fan?
export ANTHROPIC_API_KEY="your-key"    # Team Claude?
export VERTEX_PROJECT="your-project"   # Google enthusiast?
```
🎮 Jump Right In!
```shell
# List your available models
llm-reasoner models

# Generate a reasoning chain
llm-reasoner reason "How do planes fly?" --min-steps 5

# Launch the UI
llm-reasoner ui
```
Let's see it in action as an SDK:
```python
from llm_reasoner import ReasonChain
import asyncio

async def main():
    # Create a chain with your preferred settings
    chain = ReasonChain(
        model="gpt-4",     # Choose your model
        min_steps=3,       # Minimum reasoning steps
        temperature=0.2,   # Control creativity
        timeout=30.0       # Set your timeout
    )

    # Watch it think step by step!
    async for step in chain.generate_with_metadata("Why is the sky blue?"):
        print(f"\nStep {step.number}: {step.title}")
        print(f"Thinking Time: {step.thinking_time:.2f}s")
        print(f"Confidence: {step.confidence:.2f}")
        print(step.content)

asyncio.run(main())
```
🌟 Cool Features You'll Love
Rich Metadata for Each Step
```python
async for step in chain.generate_with_metadata(query):
    print(f"Title: {step.title}")            # What's this step about?
    print(f"Content: {step.content}")        # The actual thinking
    print(f"Confidence: {step.confidence}")  # How sure is it?
    print(f"Time: {step.thinking_time}s")    # How long did it take?
```
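Once you have that metadata, it's easy to post-process the steps — for example, flagging the ones where confidence dipped. The sketch below uses a plain dataclass as a stand-in for the SDK's step objects (the `Step` class and `flag_weak_steps` helper are illustrative, not part of llm-reasoner; only the `title`, `content`, `confidence`, and `thinking_time` fields mirror the attributes shown above):

```python
from dataclasses import dataclass

# Hypothetical stand-in for the step objects yielded by the SDK,
# carrying the same metadata fields shown above.
@dataclass
class Step:
    title: str
    content: str
    confidence: float
    thinking_time: float

def flag_weak_steps(steps, threshold=0.6):
    """Return titles of steps whose confidence falls below the threshold."""
    return [s.title for s in steps if s.confidence < threshold]

steps = [
    Step("Identify scattering", "Rayleigh scattering of sunlight", 0.92, 1.4),
    Step("Wavelength dependence", "Shorter wavelengths scatter more", 0.55, 2.1),
]
print(flag_weak_steps(steps))  # ['Wavelength dependence']
```

A low-confidence step is often a good candidate for a follow-up query or a rerun with a different model.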
Custom Model Registration
Want to use your own models? We've got you covered! You can register custom models through both Python and CLI:
```python
from llm_reasoner import model_registry

# Add your own models
model_registry.register_model(
    name="my-cool-model",
    provider="custom-provider",
    context_window=8192
)
```
Using the CLI:
```shell
# Register a new model
llm-reasoner register-model my-custom-model azure --context-window 16384

# List all available models (including your custom ones)
llm-reasoner models

# Set your custom model as default
llm-reasoner set-model my-custom-model

# Use your custom model
llm-reasoner reason "What is quantum computing?" --model my-custom-model
```
Interactive UI Features
- Drag-n-drop model selection
- Real-time confidence visualization
- Step-by-step animations
- Custom parameter tuning
- Query history tracking
🎛️ Power User Settings
Fine-tune your chains:
```python
chain = ReasonChain(
    model="claude-2",   # Pick your model
    max_tokens=750,     # Control response length
    temperature=0.2,    # Adjust randomness
    timeout=30.0,       # Set API timeout
    min_steps=5         # Minimum reasoning steps
)

# Clear history if needed
chain.clear_history()
```
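After a chain finishes, the per-step confidence scores can be rolled up into a single summary. The helper below is a sketch, not an SDK function — it just takes a list of floats like the `step.confidence` values the SDK yields, reporting the mean and the weakest link:

```python
# Illustrative helper (not part of llm-reasoner): summarize a finished
# chain from its per-step confidence scores.
def chain_confidence(confidences):
    if not confidences:
        raise ValueError("no steps recorded")
    return {
        "mean": sum(confidences) / len(confidences),     # average certainty
        "weakest": min(confidences),                     # weakest-link view
    }

summary = chain_confidence([0.9, 0.7, 0.8])
print(summary["weakest"])  # 0.7
```

The "weakest link" view is often the more honest summary: a chain is only as trustworthy as its least confident step.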
🔧 Model Support
Out of the box, we support:
- OpenAI: GPT-4, GPT-3.5-Turbo
- Anthropic: Claude 2
- Google: Gemini Pro
- Azure OpenAI models
- Custom models through our flexible provider layer, LiteLLM
Need to use a different model? Just register it with our CLI or Python API!
🎨 UI Walkthrough
- Launch with `llm-reasoner ui`
- Pick your model from the dropdown
- Adjust settings with sliders
- Type your question
- Watch the magic happen with real-time updates!
🤓 Pro Tips
- Use `min_steps` to ensure thorough reasoning
- Lower `temperature` for more consistent results
- Watch the confidence scores to gauge reliability
- Try different models for different types of questions
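On the consistency tip: at higher temperatures, repeated runs can disagree, and a simple majority vote over several answers picks the most stable one. The snippet below illustrates that idea on hypothetical answer strings (not real model output), independent of the SDK:

```python
from collections import Counter

# Illustrative self-consistency check: run the same question several
# times and keep the most common final answer. The strings here are
# hypothetical, standing in for model outputs.
def majority_answer(answers):
    return Counter(answers).most_common(1)[0][0]

print(majority_answer(["blue", "blue", "violet"]))  # blue
```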
🔮 What's Next?
We're constantly adding cool new features! Got ideas? We'd love to hear them!
📜 License
MIT - Go wild! Just give us a shoutout if you make something awesome with it.
Made with 🧠 and ❤️ by Harish Santhanalakshmi Ganesan. Happy reasoning! 🚀