Endee is the Next-Generation Vector Database for Scalable, High-Performance AI

Endee - High-Performance Vector Database

Endee is a high-performance vector database designed for speed and efficiency. It enables rapid Approximate Nearest Neighbor (ANN) searches for applications requiring robust vector search capabilities with advanced filtering, metadata support, and hybrid search combining dense and sparse vectors.

Key Features

  • Fast ANN Searches: Efficient similarity searches on vector data using HNSW algorithm
  • Hybrid Search: Combine dense and sparse vectors for powerful semantic + keyword search
  • Multiple Distance Metrics: Support for cosine, L2, and inner product distance metrics
  • Metadata Support: Attach and search with metadata and filters
  • Advanced Filtering: Powerful query filtering with operators like $eq, $in, and $range
  • High Performance: Optimized for speed and efficiency
  • Scalable: Handle millions of vectors with ease
  • Configurable Precision: Multiple precision levels for memory/accuracy tradeoffs

Installation

pip install endee

Quick Start

from endee import Endee
from endee import Precision

# Initialize client with your API token
client = Endee(token="your-token-here")
# For development without authentication, initialize with no token:
# client = Endee()

# List existing indexes
indexes = client.list_indexes()

# Create a new index
client.create_index(
    name="my_vectors",
    dimension=1536,              # Your vector dimension
    space_type="cosine",         # Distance metric (cosine, l2, ip)
    precision=Precision.INT8    # Use precision enum for type safety
)

# Get index reference
index = client.get_index(name="my_vectors")

# Insert vectors
index.upsert([
    {
        "id": "doc1",
        "vector": [0.1, 0.2, 0.3, ...],  # Your vector data
        "meta": {"text": "Example document", "category": "reference"},
        "filter": {"category": "reference", "tags": "important"}
    }
])

# Query similar vectors with filtering
results = index.query(
    vector=[0.2, 0.3, 0.4, ...],  # Query vector
    top_k=10,
    filter=[{"category": {"$eq": "reference"}}]  # Structured filter
)

# Process results
for item in results:
    print(f"ID: {item['id']}, Similarity: {item['similarity']}")
    print(f"Metadata: {item['meta']}")

Basic Usage

To interact with the Endee platform, you'll need to authenticate using an API token. This token is used to securely identify your workspace and authorize all actions — including index creation, vector upserts, and queries.

Note: running without a token leaves your APIs and vectors openly accessible; do this only during local development.

🔐 Generate Your API Token

  • Each token is tied to your workspace and should be kept private
  • Once you have your token, you're ready to initialize the client and begin using the SDK

Initializing the Client

The Endee client acts as the main interface for all vector operations — such as creating indexes, upserting vectors, and running similarity queries. You can initialize the client in just a few lines:

from endee import Endee

# Initialize with your API token
client = Endee(token="your-token-here")

Setting Up Your Domain

The Endee client lets you point at a custom domain URL and port (default port: 8080).

from endee import Endee

# Initialize with your API token
client = Endee(token="your-token-here")

client.set_base_url('http://0.0.0.0:8081/api/v1')

Listing All Indexes

The client.list_indexes() method returns a list of all the indexes currently available in your environment or workspace. This is useful for managing, debugging, or programmatically selecting indexes for vector operations like upsert or search.

from endee import Endee

client = Endee(token="your-token-here")

# List all indexes in your workspace
indexes = client.list_indexes()

Create an Index

The client.create_index() method initializes a new vector index with customizable parameters such as dimensionality, distance metric, graph construction settings, and precision level. These configurations determine how the index stores and retrieves high-dimensional vector data.

from endee import Endee, Precision

client = Endee(token="your-token-here")

# Create an index with custom parameters
client.create_index(
    name="my_custom_index",
    dimension=768,
    space_type="cosine",
    M=16,                        # Graph connectivity parameter (default = 16)
    ef_con=128,                  # Construction-time parameter (default = 128)
    precision=Precision.INT8,   # Use Precision enum (recommended)
)

Parameters:

  • name: Unique name for your index (alphanumeric + underscores, max 48 chars)
  • dimension: Vector dimensionality (must match your embedding model's output, min 2, max 8000)
  • space_type: Distance metric - "cosine", "l2", or "ip" (inner product)
  • M: HNSW graph connectivity parameter - higher values increase recall but use more memory (default: 16)
  • ef_con: HNSW construction parameter - higher values improve index quality but slow down indexing (default: 128)
  • precision: Vector precision level using the Precision enum - Precision.FLOAT32, Precision.FLOAT16, Precision.INT16, Precision.INT8 (default), or Precision.BINARY2
  • version: Optional version parameter for index versioning
  • sparse_model: Optional parameter enabling sparse vectors for hybrid search (disabled by default)

Precision Levels:

The precision parameter controls how vectors are stored internally, affecting memory usage and search accuracy. Use the Precision enum for type safety and IDE autocomplete:

from endee import Precision

# Available precision levels
Precision.FLOAT32  # 32-bit floating point
Precision.FLOAT16  # 16-bit floating point
Precision.INT16    # 16-bit integer quantization
Precision.INT8     # 8-bit integer quantization (default)
Precision.BINARY2  # 1-bit binary quantization

Precision          Quantization  Memory Usage  Accuracy   Use Case
Precision.FLOAT32  32-bit FP32   Highest       Maximum    When accuracy is absolutely critical
Precision.FLOAT16  16-bit FP16   ~50% less     Very good  Good accuracy at half precision
Precision.INT16    16-bit INT16  ~50% less     Very good  Integer quantization with good accuracy
Precision.INT8     8-bit INT8    ~75% less     Good       Default - great for most use cases
Precision.BINARY2  1-bit binary  ~96.9% less   Lower      Extreme compression for large-scale similarity search

Choosing the Right Precision:

  • Precision.INT8 (default): Good accuracy with significant memory savings using 8-bit integer quantization
  • Precision.INT16 / Precision.FLOAT16: Better accuracy with moderate memory savings (16-bit precision)
  • Precision.FLOAT32: Maximum accuracy using full 32-bit floating point (highest memory usage)
  • Precision.BINARY2: Extreme compression for very large-scale deployments where memory is critical and lower accuracy is tolerable
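
The memory column above can be sanity-checked with a quick back-of-the-envelope calculation. The bytes-per-dimension figures below are assumptions derived from the quantization widths in the table, not SDK constants:

```python
# Back-of-the-envelope raw vector storage per precision level.
# Real indexes add overhead for the HNSW graph, metadata, and filters.
BYTES_PER_DIM = {
    "FLOAT32": 4.0,
    "FLOAT16": 2.0,
    "INT16": 2.0,
    "INT8": 1.0,
    "BINARY2": 0.125,  # 1 bit per dimension
}

def estimate_mb(num_vectors: int, dimension: int, precision: str) -> float:
    """Approximate raw vector storage in megabytes."""
    return num_vectors * dimension * BYTES_PER_DIM[precision] / (1024 ** 2)

# One million 768-dimensional vectors:
for name in BYTES_PER_DIM:
    print(f"{name}: {estimate_mb(1_000_000, 768, name):,.1f} MB")
```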

Example with different precision levels:

from endee import Endee, Precision

client = Endee(token="your-token-here")

# High accuracy index
client.create_index(
    name="high_accuracy_index",
    dimension=768,
    space_type="cosine",
    precision=Precision.FLOAT32
)

# Balanced index
client.create_index(
    name="balanced_index",
    dimension=768,
    space_type="cosine",
    precision=Precision.INT8
)

Get an Index

The client.get_index() method retrieves a reference to an existing index. This is required before performing vector operations like upsert, query, or delete.

from endee import Endee

client = Endee(token="your-token-here")

# Get reference to an existing index
index = client.get_index(name="my_custom_index")

# Now you can perform operations on the index
print(index.describe())

Parameters:

  • name: Name of the index to retrieve

Returns: An Index instance configured with server parameters

Ingestion of Data

The index.upsert() method is used to add or update vectors (embeddings) in an existing index. Each vector is represented as an object containing a unique identifier, the vector data itself, optional metadata, and optional filter fields for future querying.

from endee import Endee

client = Endee(token="your-token-here")

# Accessing the index
index = client.get_index(name="your-index-name")

# Insert multiple vectors in a batch
index.upsert([
    {
        "id": "vec1",
        "vector": [...],  # Your vector
        "meta": {"title": "First document"},
        "filter": {"tags": "important"}  # Optional filter values
    },
    {
        "id": "vec2",
        "vector": [...],  # Another vector
        "meta": {"title": "Second document"},
        "filter": {"visibility": "public", "tags": "important"}
    }
])

Vector Object Fields:

  • id: Unique identifier for the vector (required)
  • vector: Array of floats representing the embedding (required)
  • meta: Arbitrary metadata object for storing additional information (optional)
  • filter: Key-value pairs for structured filtering during queries (optional)

Note: Maximum batch size is 1000 vectors per upsert call.
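
Given the 1000-vector limit, a small batching helper (a sketch, not part of the SDK) keeps large ingests within bounds:

```python
def upsert_in_batches(index, vectors, batch_size=1000):
    """Upsert `vectors` in chunks no larger than `batch_size`.

    `index` is any object with an upsert(list) method, e.g. the
    result of client.get_index().
    """
    for start in range(0, len(vectors), batch_size):
        index.upsert(vectors[start:start + batch_size])
```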

Querying the Index

The index.query() method performs a similarity search in the index using a given query vector. It returns the closest vectors (based on the index's distance metric) along with optional metadata and vector data.

from endee import Endee

client = Endee(token="your-token-here")

# Accessing the index
index = client.get_index(name="your-index-name")

# Query with custom parameters
results = index.query(
    vector=[...],         # Query vector
    top_k=5,              # Number of results to return (max 512)
    ef=128,               # Runtime parameter for search quality (max 1024)
    include_vectors=True  # Include vector data in results
)

Query Parameters:

  • vector: Query vector (must match index dimension)
  • top_k: Number of nearest neighbors to return
  • ef: Runtime search parameter - higher values improve recall but increase latency
  • include_vectors: Whether to return the actual vector data in results (default: False)
  • filter: Optional filter criteria (array of filter objects)
  • log: Optional logging parameter for debugging (default: False)
  • sparse_indices: Sparse vector indices for hybrid search (default: None)
  • sparse_values: Sparse vector values for hybrid search (default: None)
  • prefilter_cardinality_threshold: Controls when the search strategy switches from HNSW filtered search to brute-force prefiltering on the matched subset. See Filter Tuning for details.
  • filter_boost_percentage: Expands the internal HNSW candidate pool by this percentage when a filter is active, compensating for filtered-out results. See Filter Tuning for details.

Result Fields:

  • id: Vector identifier
  • similarity: Similarity score
  • distance: Distance score (1.0 - similarity)
  • meta: Metadata dictionary
  • norm: Vector norm
  • filter: Filter dictionary (present if filter fields were provided at upsert time)
  • vector: Vector data (if include_vectors=True)
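
As an illustration of working with these fields, a simple client-side post-filter on similarity (the sample results are made up, not actual server output):

```python
# Drop hits below a similarity threshold after querying.
def filter_by_similarity(results, min_similarity=0.8):
    return [r for r in results if r["similarity"] >= min_similarity]

# Illustrative results, not actual server output:
sample = [
    {"id": "a", "similarity": 0.93},
    {"id": "b", "similarity": 0.71},
]
print(filter_by_similarity(sample))  # keeps only "a"
```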

Hybrid Search

Hybrid search combines dense vector embeddings (semantic similarity) with sparse vectors (keyword/term matching) to provide more powerful and flexible search capabilities. This is particularly useful for applications that need both semantic understanding and exact term matching, such as:

  • RAG (Retrieval-Augmented Generation) systems
  • Document search with keyword boosting
  • Multi-modal search combining different ranking signals
  • BM25 + neural embedding fusion

Creating a Hybrid Index

To enable hybrid search, specify the sparse_model parameter when creating an index. This defines the model for the sparse vector space.

from endee import Endee, Precision

client = Endee(token="your-token-here")

client.create_index(
    name="hybridtest1",
    dimension=384,              # dense vector dimension
    sparse_model="default",           # sparse vector model (default)
    space_type="cosine",
    precision=Precision.INT8   # Use Precision enum
)

# Get reference to the hybrid index
index = client.get_index(name="hybridtest1")

Upserting Hybrid Vectors

When upserting vectors to a hybrid index, you must provide both dense vectors and sparse vector representations. Sparse vectors are defined using two parallel arrays: sparse_indices (positions) and sparse_values (weights).

import numpy as np
import random

np.random.seed(42)
random.seed(42)

TOTAL_VECTORS = 2000
BATCH_SIZE = 1000
DIM = 384
SPARSE_DIM = 30000

batch = []

for i in range(TOTAL_VECTORS):
    # Dense vector (semantic embedding)
    dense_vec = np.random.rand(DIM).astype(float).tolist()
    
    # Sparse vector (e.g., BM25 term weights)
    # Example: 20 non-zero terms
    nnz = 20
    sparse_indices = random.sample(range(SPARSE_DIM), nnz)
    sparse_values = np.random.rand(nnz).astype(float).tolist()
    
    item = {
        "id": f"hybrid_vec_{i+1}",
        "vector": dense_vec,
        
        # Required for hybrid search
        "sparse_indices": sparse_indices,
        "sparse_values": sparse_values,
        
        "meta": {
            "title": f"Hybrid Document {i+1}",
            "index": i,
        },
        "filter": {
            "visibility": "public" if i % 2 == 0 else "private"
        }
    }
    
    batch.append(item)
    
    if len(batch) == BATCH_SIZE or i + 1 == TOTAL_VECTORS:
        index.upsert(batch)
        print(f"Upserted {len(batch)} hybrid vectors")
        batch = []

Hybrid Vector Fields:

  • id: Unique identifier (required)
  • vector: Dense embedding vector (required)
  • sparse_indices: List of non-zero term positions in sparse vector (required for hybrid)
  • sparse_values: List of weights corresponding to sparse_indices (required for hybrid)
  • meta: Metadata dictionary (optional)
  • filter: Filter fields for structured filtering (optional)

Note: The lengths of sparse_indices and sparse_values must match.
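
As a minimal sketch of producing these parallel arrays, assuming a toy hash-based term-to-index mapping (real pipelines would use a BM25 or SPLADE vocabulary):

```python
def to_sparse(term_weights, sparse_dim=30000):
    """Turn {term: weight} into parallel (indices, values) arrays.

    The hash-based term-id scheme is a toy stand-in for a real
    vocabulary; hash collisions can map two terms to one index.
    """
    indices, values = [], []
    for term, weight in term_weights.items():
        indices.append(hash(term) % sparse_dim)
        values.append(float(weight))
    return indices, values

sparse_indices, sparse_values = to_sparse({"vector": 2.1, "database": 1.4})
assert len(sparse_indices) == len(sparse_values)  # lengths must match
```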

Querying with Hybrid Search

Hybrid queries combine dense and sparse vector similarity to rank results. Provide both a dense query vector and sparse query representation.

import numpy as np
import random

np.random.seed(123)
random.seed(123)

DIM = 384
SPARSE_DIM = 30000

# Dense query vector (semantic)
dense_query = np.random.rand(DIM).astype(float).tolist()

# Sparse query
nnz = 15
sparse_indices = random.sample(range(SPARSE_DIM), nnz)
sparse_values = np.random.rand(nnz).astype(float).tolist()

results = index.query(
    vector=dense_query,              # dense part
    sparse_indices=sparse_indices,   # sparse part
    sparse_values=sparse_values,
    top_k=5,
    ef=128,
    include_vectors=True
)

# Process results
for result in results:
    print(f"ID: {result['id']}")
    print(f"Similarity: {result['similarity']}")
    print(f"Metadata: {result['meta']}")
    print("---")

Hybrid Query Parameters:

  • vector: Dense query vector (required)
  • sparse_indices: Non-zero term positions in sparse query (required for hybrid)
  • sparse_values: Weights for sparse query terms (required for hybrid)
  • top_k: Number of results to return
  • ef: Search quality parameter
  • include_vectors: Include vector data in results (default: False)
  • filter: Optional filter criteria
  • dense_rrf_weight: Weight for dense ranking in RRF fusion (default: 0.5). See How Hybrid Ranking Works.
  • rrf_rank_constant: RRF smoothing constant (default: 60). See How Hybrid Ranking Works.
  • prefilter_cardinality_threshold: Controls when search switches from HNSW filtered search to brute-force prefiltering on the matched subset. See Filter Tuning for details.
  • filter_boost_percentage: Expands the internal HNSW candidate pool by this percentage when a filter is active, compensating for filtered-out results. See Filter Tuning for details.

Hybrid Result Fields:

  • id: Vector identifier
  • similarity: Similarity score
  • distance: Distance score (1.0 - similarity)
  • meta: Metadata dictionary
  • norm: Vector norm
  • filter: Filter dictionary (present if filter fields were provided at upsert time)
  • vector: Vector data (dense only) (if include_vectors=True)

How Hybrid Ranking Works: Reciprocal Rank Fusion (RRF)

When you run a hybrid query, the server independently ranks documents by dense similarity and sparse (BM25) score, then merges both lists using Reciprocal Rank Fusion (RRF):

rrf_score = dense_rrf_weight / (rrf_rank_constant + dense_rank)
          + (1 - dense_rrf_weight) / (rrf_rank_constant + sparse_rank)

Parameter          What it does                                                    Default
dense_rrf_weight   Weight for dense ranking (0.0 = full sparse, 1.0 = full dense)  0.5
rrf_rank_constant  Smoothing constant - higher values flatten the score curve      60

results = index.query(
    vector=dense_query,
    sparse_indices=sparse_indices,
    sparse_values=sparse_values,
    top_k=10,
    dense_rrf_weight=0.7,   # favor semantic results
    rrf_rank_constant=60,
)
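
The formula above can be reproduced in plain Python to build intuition for how the two rank lists fuse (rrf_score here is an illustrative helper, not an SDK function):

```python
def rrf_score(dense_rank, sparse_rank, dense_rrf_weight=0.5, rrf_rank_constant=60):
    """RRF fusion of 1-based ranks (lower rank = better position)."""
    return (dense_rrf_weight / (rrf_rank_constant + dense_rank)
            + (1 - dense_rrf_weight) / (rrf_rank_constant + sparse_rank))

# A document ranked 1st by dense search and 10th by sparse search:
print(round(rrf_score(1, 10), 5))
```

Because ranks rather than raw scores are fused, documents that appear near the top of either list float upward without either score scale dominating.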

Hybrid Search Use Cases

1. BM25 + Neural Embeddings

# Combine traditional keyword search (BM25) with semantic embeddings
# sparse_indices: term IDs from BM25
# sparse_values: BM25 scores
# vector: neural embedding from model like BERT

2. SPLADE + Dense Retrieval

# Use learned sparse representations (SPLADE) with dense embeddings
# sparse_indices/values: SPLADE model output
# vector: dense embedding from same or different model

3. Multi-Signal Ranking

# Combine multiple ranking signals
# sparse: user behavior signals, click-through rates
# dense: content similarity embedding

Filtered Querying

The index.query() method supports structured filtering using the filter parameter. This allows you to restrict search results based on metadata conditions, in addition to vector similarity.

To apply multiple filter conditions, pass an array of filter objects, where each object defines a separate condition. All filters are combined with logical AND — meaning a vector must match all specified conditions to be included in the results.

index = client.get_index(name="your-index-name")

# Query with multiple filter conditions (AND logic)
filtered_results = index.query(
    vector=[...],
    top_k=5,
    ef=128,
    include_vectors=True,
    filter=[
        {"tags": {"$eq": "important"}},
        {"visibility": {"$eq": "public"}}
    ]
)

Filtering Operators

The filter parameter supports a range of comparison operators to build structured queries.

Operator  Description                                          Supported Types  Example Usage
$eq       Matches values that are equal                        String, Number   {"status": {"$eq": "published"}}
$in       Matches any value in the provided list               String           {"tags": {"$in": ["ai", "ml"]}}
$range    Matches values between start and end, inclusive      Number           {"score": {"$range": [70, 95]}}

Important Notes:

  • Operators are case-sensitive and must be prefixed with a $
  • Filters operate on fields provided under the filter key during vector upsert
  • The $range operator supports values only within the range [0 – 999]. If your data exceeds this range (e.g., timestamps, large scores), you should normalize or scale your values to fit within [0, 999] prior to upserting or querying
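
A minimal normalization sketch for the $range constraint, assuming fixed known bounds for your data (scale_to_range and the timestamp bounds are illustrative, not part of the SDK):

```python
def scale_to_range(value, lo, hi, buckets=1000):
    """Map a value in [lo, hi] to an integer in [0, buckets - 1]."""
    clamped = max(lo, min(hi, value))
    return int((clamped - lo) / (hi - lo) * (buckets - 1))

# Example: Unix timestamps between 2020-01-01 and 2030-01-01 (assumed bounds)
LO, HI = 1_577_836_800, 1_893_456_000
bucket = scale_to_range(1_700_000_000, LO, HI)
# Store {"created": bucket} in the filter at upsert time, then query with
# the same scaling: {"created": {"$range": [scale_to_range(t0, LO, HI),
#                                           scale_to_range(t1, LO, HI)]}}
```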

Filter Examples

# Equal operator - exact match
filter=[{"status": {"$eq": "published"}}]

# In operator - match any value in list
filter=[{"tags": {"$in": ["ai", "ml", "data-science"]}}]

# Range operator - numeric range (inclusive)
filter=[{"score": {"$range": [70, 95]}}]

# Combined filters (AND logic)
filter=[
    {"status": {"$eq": "published"}},
    {"tags": {"$in": ["ai", "ml"]}},
    {"score": {"$range": [80, 100]}}
]

Filter Tuning

When using filtered queries, two optional parameters let you tune the trade-off between search speed and recall:

prefilter_cardinality_threshold

Controls when the search strategy switches from HNSW filtered search (fast, graph-based) to brute-force prefiltering (exhaustive scan on the matched subset).

Value      Behavior
1_000      Prefilter only for very selective filters (minimum value)
10_000     Prefilter only when the filter matches ≤10,000 vectors (default)
1_000_000  Prefilter for almost all filtered searches (maximum value)

The intuition: when very few vectors match your filter, HNSW may struggle to find enough valid candidates through graph traversal. In that case, scanning the filtered subset directly (prefiltering) is faster and more accurate. Raising the threshold means prefiltering kicks in more often; lowering it favors HNSW graph search.

# Only prefilter when filter matches ≤5,000 vectors
results = index.query(
    vector=[...],
    top_k=10,
    filter=[{"category": {"$eq": "rare"}}],
    prefilter_cardinality_threshold=5_000,
)

filter_boost_percentage

When using HNSW filtered search, some candidates explored during graph traversal are discarded by the filter, which can leave you with fewer results than top_k. filter_boost_percentage compensates by expanding the internal candidate pool before filtering is applied.

  • 0 → no boost, standard candidate pool size (default)
  • 20 → fetch 20% more candidates internally before applying the filter
  • Maximum: 400 (up to 5× the standard candidate pool)

# Fetch 30% more candidates to compensate for aggressive filtering
results = index.query(
    vector=[...],
    top_k=10,
    filter=[{"visibility": {"$eq": "public"}}],
    filter_boost_percentage=30,
)

Using Both Together

results = index.query(
    vector=[...],
    top_k=10,
    filter=[{"category": {"$eq": "rare"}}],
    prefilter_cardinality_threshold=5_000,  # switch to brute-force for small match sets
    filter_boost_percentage=25,             # boost candidates for HNSW filtered search
)

Tip: Start with the defaults (prefilter_cardinality_threshold=10_000, filter_boost_percentage=0). If filtered queries return fewer results than expected, try increasing filter_boost_percentage. If filtered queries are slow on selective filters, try lowering prefilter_cardinality_threshold. Valid range for the threshold is 1,000–1,000,000.

Deletion Methods

The system supports two types of deletion operations — vector deletion and index deletion. These allow you to remove specific vectors or entire indexes from your workspace, giving you full control over lifecycle and storage.

Vector Deletion

Vector deletion is used to remove specific vectors from an index using their unique id. This is useful when:

  • A document is outdated or revoked
  • You want to update a vector by first deleting its old version
  • You're cleaning up test data or low-quality entries

from endee import Endee

client = Endee(token="your-token-here")
index = client.get_index(name="your-index-name")

# Delete a single vector by ID
index.delete_vector("vec1")
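
The delete-then-reinsert pattern mentioned above can be wrapped in a small helper. This sketch uses only the documented delete_vector and upsert calls:

```python
def replace_vector(index, item):
    """Remove the old entry for item["id"], then upsert the new one."""
    index.delete_vector(item["id"])
    index.upsert([item])
```

Since upsert already adds or updates by id, the explicit delete step is optional; this simply makes the replacement intent obvious in pipeline code.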

Filtered Deletion

In cases where you don't know the exact vector id, but want to delete vectors based on filter fields, you can use filtered deletion. This is especially useful for:

  • Bulk deleting vectors by tag, type, or timestamp
  • Enforcing access control or data expiration policies

from endee import Endee

client = Endee(token="your-token-here")
index = client.get_index(name="your-index-name")

# Delete all vectors matching filter conditions
index.delete_with_filter([{"tags": {"$eq": "important"}}])

Index Deletion

Index deletion permanently removes the entire index and all vectors associated with it. This should be used when:

  • The index is no longer needed
  • You want to re-create the index with a different configuration
  • You're managing index rotation in batch pipelines

from endee import Endee

client = Endee(token="your-token-here")

# Delete an entire index
client.delete_index("your-index-name")

⚠️ Caution: Deletion operations are irreversible. Ensure you have the correct id or index name before performing deletion, especially at the index level.

Additional Operations

Get Vector by ID

The index.get_vector() method retrieves a specific vector from the index by its unique identifier.

# Retrieve a specific vector by its ID
vector = index.get_vector("vec1")

# The returned object contains:
# - id: Vector identifier
# - meta: Metadata dictionary
# - filter: Filter fields dictionary
# - norm: Vector norm value
# - vector: Vector data array
# - sparse_indices: Sparse vector indices (hybrid indexes only)
# - sparse_values: Sparse vector values (hybrid indexes only)

Update Filters

The index.update_filters() method allows you to update the filters for specific vectors without modifying the vector data or other metadata fields. This is useful when you need to change filter attributes like categories, tags, or visibility settings.

from endee import Endee

client = Endee(token="your-token-here")
index = client.get_index(name="your-index-name")

# Update filters for multiple vectors
index.update_filters([
    {"id": "vec1", "filter": {"category": "B", "tags": "updated"}},
    {"id": "vec2", "filter": {"category": "C", "priority": 1}},
    {"id": "vec3", "filter": {"visibility": "private"}}
])

Parameters:

  • updates: List of dictionaries, each containing:
    • id (str): Unique vector identifier (required)
    • filter (dict): New filters to set (required)

Returns: Success message with the number of filters updated

Note: This operation only updates the filters. The vector data, metadata (meta), and other fields remain unchanged.

Describe Index

# Get index statistics and configuration info
info = index.describe()

API Reference

Endee Class

Method Description
__init__(token=None) Initialize client with optional API token
set_token(token) Set or update API token
set_base_url(base_url) Set custom API endpoint
create_index(name, dimension, space_type, M, ef_con, precision, sparse_model) Create a new vector index (precision as Precision enum, sparse_model optional for hybrid)
list_indexes() List all indexes in workspace
delete_index(name) Delete a vector index
get_index(name) Get reference to a vector index

Index Class

Method Description
upsert(input_array) Insert or update vectors (max 1000 per batch)
query(vector, top_k, filter, ef, include_vectors, sparse_indices, sparse_values, dense_rrf_weight, rrf_rank_constant, prefilter_cardinality_threshold, filter_boost_percentage) Search for similar vectors
delete_vector(id) Delete a vector by ID
delete_with_filter(filter) Delete vectors matching a filter
get_vector(id) Get a specific vector by ID
update_filters(updates) Update filters for multiple vectors by ID
describe() Get index statistics and configuration

Precision Enum

The Precision enum provides type-safe precision levels for vector quantization:

from endee import Precision

# Available values
Precision.BINARY2  # 1-bit binary quantization
Precision.INT8     # 8-bit integer quantization (default)
Precision.INT16    # 16-bit integer quantization
Precision.FLOAT16  # 16-bit floating point
Precision.FLOAT32  # 32-bit floating point

License

MIT License
