A Polars plugin for JSON schema inference using genson-rs.
Polars Genson
A Polars plugin for working with JSON schemas. Infer schemas from JSON data and convert between JSON Schema and Polars schema formats.
Installation
pip install polars-genson[polars]
On older CPUs, run:
pip install polars-genson[polars-lts-cpu]
Features
Schema Inference
- JSON Schema Inference: Generate JSON schemas from JSON strings in Polars columns
- Polars Schema Inference: Directly infer Polars data types and schemas from JSON data
- Multiple JSON Objects: Handle columns with varying JSON schemas across rows
- Complex Types: Support for nested objects, arrays, and mixed types
- Flexible Input: Support for both single JSON objects and arrays of objects
Schema Conversion
- Polars → JSON Schema: Convert existing DataFrame schemas to JSON Schema format
- JSON Schema → Polars: Convert JSON schemas to equivalent Polars schemas
- Round-trip Support: Full bidirectional conversion with validation
- Schema Manipulation: Validate, transform, and standardize schemas
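The JSON Schema → Polars direction can be pictured with a hand-rolled sketch that maps simplified JSON Schema nodes to Polars-style type names. This is a plain-dict illustration of the idea only, not the library's actual converter:

```python
def json_schema_to_polars(schema: dict) -> str:
    """Map a simplified JSON Schema node to a Polars-style type name (toy sketch)."""
    scalars = {"string": "String", "integer": "Int64",
               "number": "Float64", "boolean": "Boolean"}
    t = schema.get("type")
    if t in scalars:
        return scalars[t]
    if t == "array":
        # Recurse into the item type, e.g. {"type": "integer"} -> Int64
        return f"List({json_schema_to_polars(schema['items'])})"
    if t == "object":
        fields = ", ".join(
            f"{name}: {json_schema_to_polars(sub)}"
            for name, sub in schema.get("properties", {}).items()
        )
        return f"Struct({{{fields}}})"
    raise ValueError(f"unsupported type: {t!r}")

print(json_schema_to_polars(
    {"type": "object",
     "properties": {"scores": {"type": "array", "items": {"type": "integer"}}}}
))
# Struct({scores: List(Int64)})
```

The real conversion handles far more (unions, nullability, maps), but the recursive structure is the same.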
Usage
The plugin adds a genson namespace to Polars DataFrames for schema inference and conversion.
import polars as pl
import polars_genson
import json
# Create a DataFrame with JSON strings
df = pl.DataFrame({
"json_data": [
'{"name": "Alice", "age": 30, "scores": [95, 87]}',
'{"name": "Bob", "age": 25, "city": "NYC", "active": true}',
'{"name": "Charlie", "age": 35, "metadata": {"role": "admin"}}'
]
})
print("Input DataFrame:")
print(df)
shape: (3, 1)
┌─────────────────────────────────┐
│ json_data │
│ --- │
│ str │
╞═════════════════════════════════╡
│ {"name": "Alice", "age": 30, "… │
│ {"name": "Bob", "age": 25, "ci… │
│ {"name": "Charlie", "age": 35,… │
└─────────────────────────────────┘
JSON Schema Inference
# Infer JSON schema from the JSON column
schema = df.genson.infer_json_schema("json_data")
print("Inferred JSON schema:")
print(json.dumps(schema, indent=2))
{
"$schema": "http://json-schema.org/schema#",
"properties": {
"name": {
"type": "string"
},
"age": {
"type": "integer"
},
"scores": {
"items": {
"type": "integer"
},
"type": "array"
},
"city": {
"type": "string"
},
"active": {
"type": "boolean"
},
"metadata": {
"properties": {
"role": {
"type": "string"
}
},
"required": [
"role"
],
"type": "object"
}
},
"required": [
"age",
"name"
],
"type": "object"
}
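Merged inference boils down to unioning properties across rows while intersecting required keys. A stdlib-only approximation of that behaviour (a toy sketch, not genson-rs itself):

```python
import json

def infer_type(v):
    """Infer a JSON Schema fragment for one value (bool before int: True is an int in Python)."""
    if isinstance(v, bool):
        return {"type": "boolean"}
    if isinstance(v, int):
        return {"type": "integer"}
    if isinstance(v, float):
        return {"type": "number"}
    if isinstance(v, str):
        return {"type": "string"}
    if isinstance(v, list):
        return {"type": "array", "items": infer_type(v[0])} if v else {"type": "array"}
    if isinstance(v, dict):
        return infer_object([v])
    return {"type": "null"}

def infer_object(objs):
    """Union properties over all objects; keys present in every object are required."""
    props, required = {}, set(objs[0])
    for o in objs:
        required &= set(o)
        for k, v in o.items():
            props[k] = infer_type(v)
    return {"type": "object", "properties": props, "required": sorted(required)}

rows = [json.loads(s) for s in (
    '{"name": "Alice", "age": 30, "scores": [95, 87]}',
    '{"name": "Bob", "age": 25, "city": "NYC", "active": true}',
)]
merged = infer_object(rows)
print(merged["required"])  # ['age', 'name']
```

Only `name` and `age` appear in every row, which is why the schema above lists just those two under `required`.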
Polars Schema Inference
Directly infer Polars data types and schemas:
# Infer Polars schema from the JSON column
polars_schema = df.genson.infer_polars_schema("json_data")
print("Inferred Polars schema:")
print(polars_schema)
Schema({
'name': String,
'age': Int64,
'scores': List(Int64),
'city': String,
'active': Boolean,
'metadata': Struct({'role': String}),
})
The Polars schema inference automatically handles:
- ✅ Complex nested structures with proper Struct types
- ✅ Typed arrays like List(Int64), List(String)
- ✅ Mixed data types (integers, floats, booleans, strings)
- ✅ Optional fields present in some but not all objects
- ✅ Deep nesting with multiple levels of structure
Map vs Record Inference Control
For objects with varying keys, you can control whether they're inferred as Maps (dynamic key-value pairs) or Records (fixed fields) using the map_threshold and map_max_required_keys parameters:
# Data with different key patterns
df = pl.DataFrame({
"json_data": [
'{"user": {"id": 1, "name": "Alice"}, "attributes": {"source": "web", "campaign": "summer"}}',
'{"user": {"id": 2, "name": "Bob"}, "attributes": {"source": "mobile"}}'
]
})
# Default: both user and attributes become Records
schema_default = df.genson.infer_json_schema("json_data")
# Lower thresholds: distinguish structured Records from dynamic Maps
schema_controlled = df.genson.infer_json_schema("json_data",
map_threshold=2, # Objects with ≥2 keys can be Maps
map_max_required_keys=1 # Maps can have ≤1 required key
)
In the controlled example:
- user has 2 required keys (id, name) > 1 → Record (structured)
- attributes has 1 required key (source) ≤ 1 → Map (dynamic)
This gives you fine-grained control over how objects with different key stability patterns are classified.
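The decision rule can be sketched in plain Python. This is a simplified illustration of the two thresholds, not the library's implementation (here a key is "required" when it appears in every object):

```python
def classify(objects, map_threshold=20, map_max_required_keys=None):
    """Toy Map-vs-Record decision for one JSON field across several rows."""
    all_keys = set().union(*(o.keys() for o in objects))
    required = set(objects[0].keys())
    for o in objects[1:]:
        required &= o.keys()
    if len(all_keys) < map_threshold:
        return "record"  # too few distinct keys to look like a dynamic map
    if map_max_required_keys is not None and len(required) > map_max_required_keys:
        return "record"  # too many stable keys: treat as structured
    return "map"

users = [{"id": 1, "name": "Alice"}, {"id": 2, "name": "Bob"}]
attrs = [{"source": "web", "campaign": "summer"}, {"source": "mobile"}]

print(classify(users, map_threshold=2, map_max_required_keys=1))  # record
print(classify(attrs, map_threshold=2, map_max_required_keys=1))  # map
```

`user` has two always-present keys, exceeding the required-key cap, so it stays a Record; `attributes` has only one stable key, so it qualifies as a Map.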
Schema Unification
For objects with heterogeneous but compatible record structures, polars-genson can unify them into a single map schema instead of creating separate fixed fields. This is useful for dynamic data where keys represent similar entities with slightly different structures.
Unifying Compatible Record Types
import polars as pl
# Example: Letter frequency data with vowel/consonant variants
df = pl.DataFrame({
"json_data": [
'{"letter": {"a": {"alphabet": 0, "vowel": 0, "frequency": 0.0817}, "b": {"alphabet": 1, "consonant": 0, "frequency": 0.0150}, "c": {"alphabet": 2, "consonant": 1, "frequency": 0.0278}}}'
]
})
# Without unification: creates fixed record with separate a, b, c fields
schema_default = df.genson.infer_json_schema("json_data", avro=True, map_threshold=3)
# With unification: creates map with unified record values
schema_unified = df.genson.infer_json_schema("json_data", avro=True, map_threshold=3, unify_maps=True)
Without unification, you get separate fields:
{
"letter": {
"type": "record",
"fields": [
{"name": "a", "type": {...}},
{"name": "b", "type": {...}},
{"name": "c", "type": {...}}
]
}
}
With unification (unify_maps=True), compatible records are merged:
{
"letter": {
"type": "map",
"values": {
"type": "record",
"fields": [
{"name": "alphabet", "type": "int"}, // shared field (always present)
{"name": "frequency", "type": "float"}, // shared field (always present)
{"name": "vowel", "type": ["null", "int"]}, // optional (vowels only)
{"name": "consonant", "type": ["null", "int"]} // optional (consonants only)
]
}
}
}
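The merge logic amounts to taking the union of the records' fields and marking any field missing from some records as nullable. A stdlib sketch of that idea (a simplified model of unify_maps=True, not the library's code):

```python
def unify_records(records):
    """Union the key sets of compatible records; keys absent from some
    records become nullable, keys present in all stay required."""
    all_keys = []
    for r in records:
        for k in r:
            if k not in all_keys:
                all_keys.append(k)  # preserve first-seen order
    shared = set(records[0])
    for r in records[1:]:
        shared &= set(r)
    return {k: ("required" if k in shared else "nullable") for k in all_keys}

letters = [
    {"alphabet": 0, "vowel": 0, "frequency": 0.0817},
    {"alphabet": 1, "consonant": 0, "frequency": 0.0150},
    {"alphabet": 2, "consonant": 1, "frequency": 0.0278},
]
print(unify_records(letters))
# {'alphabet': 'required', 'vowel': 'nullable', 'frequency': 'required', 'consonant': 'nullable'}
```

This mirrors the unified schema above: `alphabet` and `frequency` are shared, while `vowel` and `consonant` become optional union types.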
Normalization with Unified Schema
# Normalise with unified schema - each key gets the same record structure
normalized = df.genson.normalise_json("json_data", map_threshold=3, unify_maps=True).to_dicts()
print(normalized[0])
Output:
{
'letter': [
{'key': 'a', 'value': {'alphabet': 0, 'frequency': 0.0817, 'vowel': 0, 'consonant': None}},
{'key': 'b', 'value': {'alphabet': 1, 'frequency': 0.0150, 'vowel': None, 'consonant': 0}},
{'key': 'c', 'value': {'alphabet': 2, 'frequency': 0.0278, 'vowel': None, 'consonant': 1}}
]
}
Parquet I/O
For working with JSON data stored in Parquet files, polars-genson provides direct I/O functions that handle reading from and writing to Parquet columns without needing to load data into DataFrames first.
Schema Inference from Parquet
from polars_genson import infer_from_parquet
# Infer schema from a Parquet column
schema = infer_from_parquet(
"data.parquet",
column="claims",
map_threshold=0,
unify_maps=True,
)
# Or write schema to a file
infer_from_parquet(
"data.parquet",
column="claims",
output_path="schema.json",
avro=True
)
Normalization with Parquet
from polars_genson import normalise_from_parquet
# Normalize JSON in a Parquet column and write back to Parquet
normalise_from_parquet(
input_path="input.parquet",
column="claims",
output_path="normalized.parquet",
map_threshold=0,
unify_maps=True
)
# In-place normalization (overwrites source file)
normalise_from_parquet(
input_path="data.parquet",
column="claims",
output_path="data.parquet"
)
Both functions accept the same schema inference and normalization options as the DataFrame methods, making it easy to work with Parquet files directly.
Root Wrapping (wrap_root)
By default, inferred schemas treat each JSON object as the root.
Sometimes you may want to wrap the schema in an extra record layer — for example, to make Avro schemas compatible with systems that require a named top-level record.
You can control this behavior with the wrap_root option:
- wrap_root=True → wraps using the column name as the record name
- wrap_root="<string>" → wraps using the given string as the record name
- wrap_root=None (default) → no wrapping (the Avro root record is simply named "document")
Example: Avro schema with wrap_root
df = pl.DataFrame({
"json_data": [
'{"value": "A"}',
'{"value": "B"}'
]
})
schema = df.genson.infer_json_schema("json_data", avro=True, wrap_root="payload")
print(json.dumps(schema, indent=2))
{
"type": "record",
"name": "document",
"namespace": "genson",
"fields": [
{
"name": "payload",
"type": {
"type": "record",
"name": "payload",
"namespace": "genson.document_types",
"fields": [
{
"name": "value",
"type": "string"
}
]
}
}
]
}
This is especially useful when:
- Exporting Avro to systems that require a named top-level record
- Keeping schema names consistent with your column names or domain models
Normalisation
In addition to schema inference, polars-genson can normalise JSON columns so that every row conforms to a single, consistent Avro schema.
This is especially useful for semi-structured data where fields may be missing, empty arrays/maps may need to collapse to null, or numeric/boolean values may sometimes be encoded as strings.
Features
- Converts empty arrays/maps to null (default)
- Preserves empties with empty_as_null=False
- Ensures missing fields are inserted with null
- Supports per-field coercion of numeric/boolean strings via coerce_strings=True
- Supports top-level schema evolution with wrap_root
Example: Map Encoding in Polars
By default, Polars cannot store a dynamic JSON object ({"en":"Hello","fr":"Bonjour"})
without exploding it into a struct with fixed fields padded with nulls.
polars-genson solves this by normalising maps to a list of key/value structs.
This representation is schema-stable and preserves all map keys without null-padding,
and it matches how Arrow/Parquet model Avro map types internally.
import polars as pl
import polars_genson
df = pl.DataFrame({
"json_data": [
'{"id": 123, "tags": [], "labels": {}, "active": true}',
'{"id": 456, "tags": ["x","y"], "labels": {"fr":"Bonjour"}, "active": false}',
'{"id": 789, "labels": {"en": "Hi", "es": "Hola"}}'
]
})
print(df.genson.normalise_json("json_data", map_threshold=0))
Output:
shape: (3, 4)
┌─────┬────────────┬──────────────────────────────┬────────┐
│ id ┆ tags ┆ labels ┆ active │
│ --- ┆ --- ┆ --- ┆ --- │
│ i64 ┆ list[str] ┆ list[struct[2]] ┆ bool │
╞═════╪════════════╪══════════════════════════════╪════════╡
│ 123 ┆ null ┆ null ┆ true │
│ 456 ┆ ["x", "y"] ┆ [{"fr","Bonjour"}] ┆ false │
│ 789 ┆ null ┆ [{"en","Hi"}, {"es","Hola"}] ┆ null │
└─────┴────────────┴──────────────────────────────┴────────┘
In the example above, normalise_json reshaped jagged JSON into a consistent, schema-aligned form:
- Row 1: tags was present but empty ([]) → normalised to null (this prevents row elimination when exploding the column); labels was present but empty ({}) → normalised to null; active stayed true
- Row 2: tags had two values (["x", "y"]) → preserved as a list of strings; labels had one entry ({"fr": "Bonjour"}) → normalised to a list of one key/value struct; active stayed false
- Row 3: tags was missing entirely → injected as null; labels had two entries ({"en": "Hi", "es": "Hola"}) → normalised to a list of two key/value structs; active was missing → injected as null
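The kv re-encoding itself is a small transformation. A stdlib-only sketch of what normalisation does to one map field (a toy model, mirroring the default empty_as_null=True):

```python
import json

def map_to_kv(obj):
    """Re-encode a dynamic JSON object as a list of {"key", "value"} entries.

    Empty maps collapse to None, matching empty_as_null=True.
    """
    if not obj:
        return None
    return [{"key": k, "value": v} for k, v in obj.items()]

row = json.loads('{"labels": {"en": "Hi", "es": "Hola"}}')
print(map_to_kv(row["labels"]))
# [{'key': 'en', 'value': 'Hi'}, {'key': 'es', 'value': 'Hola'}]
print(map_to_kv({}))
# None
```

Because every row produces the same list-of-structs shape regardless of which keys appear, the column's dtype is stable across rows.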
Example: Empty Arrays
df = pl.DataFrame({"json_data": ['{"labels": []}', '{"labels": {"en": "Hello"}}']})
out = df.genson.normalise_json("json_data")
print(out)
Output:
shape: (2, 1)
┌─────────────────────────────┐
│ normalised │
│ --- │
│ str │
╞═════════════════════════════╡
│ {"labels": null} │
│ {"labels": {"en": "Hello"}} │
└─────────────────────────────┘
Example: Preserving Empty Arrays
out = df.genson.normalise_json("json_data", empty_as_null=False)
print(out)
Output:
┌─────────────────────────────┐
│ normalised │
╞═════════════════════════════╡
│ {"labels": []} │
│ {"labels": {"en": "Hello"}} │
└─────────────────────────────┘
Example: String Coercion
df = pl.DataFrame({
"json_data": [
'{"id": "42", "active": "true"}',
'{"id": 7, "active": false}'
]
})
# Default: no coercion
print(df.genson.normalise_json("json_data").to_list())
# ['{"id": null, "active": null}', '{"id": 7, "active": false}']
# With coercion
print(df.genson.normalise_json("json_data", coerce_strings=True).to_list())
# ['{"id": 42, "active": true}', '{"id": 7, "active": false}']
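The coercion rule for a single scalar can be pictured with a toy helper. This is an illustration of the behaviour shown above, not the library's implementation: strings that parse as the expected type are converted, and values that still fail to match the schema fall back to None:

```python
def coerce(value, expected):
    """Toy per-value sketch of coerce_strings=True for int/bool fields."""
    if isinstance(value, expected):
        return value  # already the right type
    if isinstance(value, str):
        if expected is bool and value in ("true", "false"):
            return value == "true"
        if expected is int:
            try:
                return int(value)
            except ValueError:
                return None
    return None  # schema mismatch -> null

print(coerce("42", int), coerce("true", bool), coerce(7, int))
# 42 True 7
```

Without coercion the string values simply fail the schema check, which is why the default output above shows them as null.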
Schema-Aware Decoding
The decode parameter can be either a boolean or a schema.
- decode=True → infer a schema automatically, then decode JSON into native Polars types
- decode=False → leave values as normalised JSON strings
- decode=pl.Schema | pl.Struct → use your own schema for decoding (skips re-inference)
import polars as pl
import polars_genson
df = pl.DataFrame({
"json_data": [
'{"id": 1, "active": true}',
'{"id": 2, "active": false}'
]
})
# Explicit schema
schema = pl.Struct({
"id": pl.Int64,
"active": pl.Boolean,
})
# Use schema directly for decoding
decoded = df.genson.normalise_json("json_data", decode=schema)
print(decoded)
Output:
shape: (2, 2)
┌─────┬────────┐
│ id ┆ active │
│ --- ┆ --- │
│ i64 ┆ bool │
╞═════╪════════╡
│ 1 ┆ true │
│ 2 ┆ false │
└─────┴────────┘
Note: Normalisation always aligns rows to a consistent schema internally.
Passing your own schema skips the extra inference step, which can improve performance,
but if your schema doesn’t match what’s in the data, you'll hit a decoding error
(polars.exceptions.ComputeError from .str.json_decode). In some pipelines, failing fast on a mismatch is exactly what you want.
For the best of both worlds, you can run with decode=True once, capture the resulting .schema,
and then reuse it in future calls.
Advanced Usage
Per-Row Schema Processing
- Only available with JSON schema currently (per-row/unmerged Polars schemas TODO)
# Get individual schemas and process them
df = pl.DataFrame({
"ABCs": [
'{"a": 1, "b": 2}',
'{"a": 1, "c": true}',
]
})
# Analyze schema variations
individual_schemas = df.genson.infer_json_schema("ABCs", merge_schemas=False)
The result is a list of one schema per row. With merge_schemas=True you would
get all 3 keys (a, b, c) in a single schema.
[{'$schema': 'http://json-schema.org/schema#',
'properties': {'a': {'type': 'integer'}, 'b': {'type': 'integer'}},
'required': ['a', 'b'],
'type': 'object'},
{'$schema': 'http://json-schema.org/schema#',
'properties': {'a': {'type': 'integer'}, 'c': {'type': 'boolean'}},
'required': ['a', 'c'],
'type': 'object'}]
JSON Schema Options
# Use the expression directly for more control
result = df.select(
polars_genson.infer_json_schema(
pl.col("json_data"),
merge_schemas=False, # Get individual schemas instead of merged
).alias("individual_schemas")
)
# Or use with different options
schema = df.genson.infer_json_schema(
"json_data",
ignore_outer_array=False, # Treat top-level arrays as arrays
ndjson=True, # Handle newline-delimited JSON
schema_uri="https://json-schema.org/draft/2020-12/schema", # Specify a schema URI
merge_schemas=True # Merge all schemas (default)
)
Polars Schema Options
# Infer Polars schema with options
polars_schema = df.genson.infer_polars_schema(
"json_data",
ignore_outer_array=True, # Treat top-level arrays as streams of objects
ndjson=False, # Not newline-delimited JSON
debug=False # Disable debug output
)
# Note: merge_schemas=False not yet supported for Polars schemas
Method Reference
The genson namespace provides three main methods:
infer_json_schema(column, **kwargs) -> dict | list[dict]
Infers a JSON Schema (or Avro, if requested) from a string column.
Parameters:
- column: Name of the column containing JSON strings
- ignore_outer_array: Treat top-level arrays as streams of objects (default: True)
- ndjson: Treat input as newline-delimited JSON (default: False)
- schema_uri: Schema URI to embed in the output (default: "http://json-schema.org/schema#"). Ignored by some consumers when avro=True.
- merge_schemas: Merge schemas from all rows (default: True). If False, returns one schema per row as a list.
- debug: Print debug information (default: False)
- profile: Print profiling information on the duration of each step (default: False)
- map_threshold: Detect maps when an object has more than N keys (default: 20)
- map_max_required_keys: Maximum required keys for Map inference (default: None). Objects with more required keys are forced to Record type. If None, no gating on required key count.
- force_field_types: Dict of per-field overrides; values must be "map" or "record". Example: {"labels": "map", "claims": "record"}
- avro: Output an Avro schema instead of JSON Schema (default: False)
- wrap_root: Control root wrapping. True → wrap using the column name; str → wrap using the given name; None → no wrapping (default)
Returns:
- dict when merge_schemas=True
- list[dict] when merge_schemas=False
infer_polars_schema(column, **kwargs) -> pl.Schema
Infers a native Polars schema from a string column.
Parameters:
- column: Name of the column containing JSON strings
- ignore_outer_array: Treat top-level arrays as streams of objects (default: True)
- ndjson: Treat input as newline-delimited JSON (default: False)
- merge_schemas: Merge schemas from all rows (default: True; currently the only supported mode)
- debug: Print debug information (default: False)
- profile: Print profiling information on the duration of each step (default: False)
- map_threshold: Detect maps when an object has more than N keys (default: 20)
- map_max_required_keys: Maximum required keys for Map inference (default: None). Objects with more required keys are forced to Record type. If None, no gating on required key count.
- force_field_types: Dict of per-field overrides; values must be "map" or "record"
- avro: Infer using Avro semantics (unions, maps, nullability) instead of pure JSON Schema semantics (default: False)
- wrap_root: Control root wrapping. True → wrap using the column name; str → wrap using the given name; None → no wrapping (default)
Returns:
pl.Schema
Note: merge_schemas=False is not supported for Polars schema inference.
normalise_json(column, **kwargs) -> pl.DataFrame | pl.Series
Normalises each JSON string in the column against a single, inferred Avro schema. Ensures every row matches the same structure and datatypes.
Parameters:
- column: Name of the column containing JSON strings
- decode: If True, decode to native Polars types (default: True)
- unnest: If decode=True, expand the decoded struct into separate columns (default: True)
- ignore_outer_array: Treat top-level arrays as streams of objects (default: True)
- ndjson: Treat input as newline-delimited JSON (default: False)
- empty_as_null: Convert empty arrays/maps to null (default: True)
- coerce_strings: Coerce numeric/boolean strings (e.g. "42", "true") into numbers/booleans where the schema expects them (default: False)
- map_encoding: Encoding for Avro maps: "kv" (default), "mapping", or "entries"
- map_threshold: Detect maps when an object has more than N keys (default: 20)
- map_max_required_keys: Maximum required keys for Map inference (default: None). Objects with more required keys are forced to Record type. If None, no gating on required key count.
- force_field_types: Dict of per-field overrides ("map" / "record")
- wrap_root: Control root wrapping. True → wrap using the column name; str → wrap using the given name; None → no wrapping (default)
Returns:
- If decode=True:
  - unnest=True → pl.DataFrame with one column per schema field
  - unnest=False → pl.DataFrame with a single struct column
- If decode=False → pl.Series of normalised JSON strings
Example:
df = pl.DataFrame({"json_data": ['{"labels": []}', '{"labels": {"en": "Hello"}}']})
out = df.genson.normalise_json("json_data")
print(out.to_list())
# ['{"labels": null}', '{"labels": {"en": "Hello"}}']
Schema Comparison Helper: schema_to_dict
When you need to compare Polars schemas structurally (for example, to verify that a round-tripped or inferred schema is equivalent to another),
polars-genson provides a small utility function, schema_to_dict, to make life easier.
from polars_genson import schema_to_dict
import polars as pl
schema1 = pl.Schema({"id": pl.Int64, "data": pl.Struct({"x": pl.Int32, "y": pl.Utf8})})
schema2 = pl.Schema({"data": pl.Struct({"y": pl.Utf8, "x": pl.Int32}), "id": pl.Int64})
assert schema_to_dict(schema1) == schema_to_dict(schema2)
Unlike direct schema equality (schema1 == schema2), this approach:
- Recursively normalises nested Struct, List, and Array types
- Ignores field order when comparing
- Produces a pure-Python nested dict, suitable for JSON serialization or snapshot tests
This helper is used internally in polars-genson’s test suite (see tests/schema_roundtrip_test.py)
to verify equivalence of inferred, converted, and round-tripped schemas.
Examples
Working with Complex JSON
# Complex nested JSON with arrays of objects
df = pl.DataFrame({
"complex_json": [
'{"user": {"profile": {"name": "Alice", "preferences": {"theme": "dark"}}}, "posts": [{"title": "Hello", "likes": 5}]}',
'{"user": {"profile": {"name": "Bob", "preferences": {"theme": "light"}}}, "posts": [{"title": "World", "likes": 3}, {"title": "Test", "likes": 1}]}'
]
})
schema = df.genson.infer_polars_schema("complex_json")
print(schema)
Schema({
'user': Struct({
'profile': Struct({
'name': String,
'preferences': Struct({'theme': String})
})
}),
'posts': List(Struct({'likes': Int64, 'title': String})),
})
Using Inferred Schema
# You can use the inferred schema for validation or DataFrame operations
inferred_schema = df.genson.infer_polars_schema("json_data")
# Use with other Polars operations
print(f"Schema has {len(inferred_schema)} fields:")
for name, dtype in inferred_schema.items():
print(f" {name}: {dtype}")
Contributing
This crate is part of the polars-genson project. See the main repository for the contribution and development docs.
License
MIT License
- Contains a vendored and slightly adapted copy of the Apache 2.0 licensed fork of the genson-rs crate