Official Python SDK for the DC1 GPU compute marketplace

DC1 Python SDK

Official Python client for DC1 — Saudi Arabia's GPU compute marketplace.

Installation

pip install dc1

No third-party dependencies. Requires Python 3.9+.

Quickstart

import dc1

client = dc1.DC1Client(api_key='dc1-renter-your-key-here')

# 1. Browse available GPUs
providers = client.providers.list()
for p in providers:
    print(f"{p.name}: {p.gpu_model} ({p.vram_gb} GB VRAM) — reliability {p.reliability_score}%")

# 2. Submit an LLM inference job
job = client.jobs.submit(
    'llm_inference',
    {'prompt': 'Explain quantum computing in one paragraph.', 'model': 'llama3'},
    provider_id=providers[0].id,
    duration_minutes=2,
)

# 3. Wait for the result (polls every 5s, times out after 300s)
result = client.jobs.wait(job.id)
print(result.result['output'])

# 4. Check your wallet
wallet = client.wallet.balance()
print(f'Balance: {wallet.balance_sar:.2f} SAR')

Authentication

Get your API key from the DC1 renter dashboard. Keys look like dc1-renter-abc123....

client = dc1.DC1Client(api_key='dc1-renter-abc123')

Reference

DC1Client(api_key, base_url=None, timeout=30)

Parameter Type Default Description
api_key str required Your renter API key
base_url str https://dcp.sa/api/dc1 Override to hit staging/local
timeout int 30 HTTP timeout in seconds

client.jobs

jobs.submit(job_type, params, *, provider_id, duration_minutes, priority=2) → Job

Submit a compute job.

Parameter Type Description
job_type str llm_inference, image_generation, vllm_serve, rendering, training, benchmark, custom_container
params dict Job-type-specific parameters (see below)
provider_id int Provider to run on — get from client.providers.list()
duration_minutes float Max runtime in minutes (billing capped at actual usage)
priority int 1=high, 2=normal (default), 3=low

params by job type:

job_type Required params Optional params
llm_inference prompt (str) model (str, default llama3)
image_generation prompt (str) width, height, steps
vllm_serve model (str, e.g. mistralai/Mistral-7B-v0.1) tensor_parallel_size
rendering scene_url (str) frames, resolution
benchmark (none required)

Billing rates:

  • llm_inference: 15 halala/minute
  • image_generation: 20 halala/minute
  • vllm_serve: 20 halala/minute (~12 SAR/hr)
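With the rates above, a pre-flight cost estimate is simple arithmetic. A minimal sketch (the rate table is copied from this section; the helper name is hypothetical, not part of the SDK):

```python
# Billing rates from the table above, in halala per minute (100 halala = 1 SAR).
RATES_HALALA_PER_MIN = {
    "llm_inference": 15,
    "image_generation": 20,
    "vllm_serve": 20,
}

def estimate_max_cost_sar(job_type: str, duration_minutes: float) -> float:
    """Upper bound on cost in SAR; actual billing is capped at real usage."""
    rate = RATES_HALALA_PER_MIN[job_type]
    return rate * duration_minutes / 100

# A 2-minute llm_inference job costs at most 0.30 SAR.
print(estimate_max_cost_sar("llm_inference", 2))  # → 0.3
```

Note that `estimate_max_cost_sar("vllm_serve", 60)` gives 12.0, matching the ~12 SAR/hr figure quoted above.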

jobs.get(job_id) → Job

Fetch the current status and result of a job by ID.

jobs.wait(job_id, *, timeout=300, poll_interval=5) → Job

Block until the job reaches a terminal state (completed, failed, cancelled).

Raises JobTimeoutError if the job doesn't finish within timeout seconds.
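The loop `jobs.wait()` runs can be sketched in plain Python. This is an illustrative stand-in, not the SDK's source: `fetch_status` is a hypothetical callable in place of a real `jobs.get(job_id).status` call, and `TimeoutError` stands in for `JobTimeoutError`.

```python
import time

TERMINAL = {"completed", "failed", "cancelled"}

def wait_for_job(fetch_status, *, timeout=300, poll_interval=5, sleep=time.sleep):
    """Poll fetch_status() until a terminal state or the deadline passes."""
    deadline = time.monotonic() + timeout
    while True:
        status = fetch_status()
        if status in TERMINAL:
            return status
        if time.monotonic() >= deadline:
            raise TimeoutError("job did not reach a terminal state in time")
        sleep(poll_interval)

# Stubbed demo: the job completes on the third poll.
statuses = iter(["queued", "running", "completed"])
print(wait_for_job(lambda: next(statuses), poll_interval=0))  # → completed
```

Injecting `sleep` keeps the sketch testable; the real method simply blocks between polls.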

jobs.list(limit=20) → list[Job]

List recent jobs for the authenticated renter.


client.providers

providers.list() → list[Provider]

List all online GPU providers. No authentication required at the API level, but the SDK always sends your key.

providers.get(provider_id) → Provider

Fetch a single provider by ID.
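A common pattern is to filter the provider list before submitting a job. A hedged sketch using plain dicts shaped like the Provider model below; with the real SDK you would iterate `client.providers.list()` and read attributes instead:

```python
def pick_provider(providers, min_vram_gb=16):
    """Most reliable online provider with at least min_vram_gb of VRAM."""
    candidates = [
        p for p in providers
        if p["status"] == "online" and p["vram_mib"] / 1024 >= min_vram_gb
    ]
    if not candidates:
        raise ValueError("no suitable provider online")
    return max(candidates, key=lambda p: p["reliability_score"])

fleet = [
    {"id": 1, "status": "online",  "vram_mib": 24576, "reliability_score": 97},
    {"id": 2, "status": "offline", "vram_mib": 49152, "reliability_score": 99},
    {"id": 3, "status": "online",  "vram_mib": 8192,  "reliability_score": 95},
]
print(pick_provider(fleet)["id"])  # → 1
```

Provider 2 is skipped despite the higher score because it is offline; provider 3 is skipped for insufficient VRAM.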


client.wallet

wallet.balance() → Wallet

Fetch the current wallet balance and account info.


Models

Job

Attribute Type Description
id str Unique job ID
status str queued, running, completed, failed, cancelled
job_type str Job type string
provider_id int Provider that ran the job
duration_minutes float Requested max duration
cost_halala int Billed amount in halala
cost_sar float cost_halala / 100
result dict | None Parsed output ({'output': ...} for text, {'image_url': ...} for images, {'endpoint_url': ...} for vllm_serve)
result_type str | None text, image, or endpoint
error str | None Error message if status == 'failed'
execution_time_sec float | None Actual wall-clock time
is_done bool True when status is terminal
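The derived fields in the table (`cost_sar`, `is_done`) follow directly from `cost_halala` and `status`. An illustrative local dataclass, not the SDK's `Job` class, showing how they are computed:

```python
from dataclasses import dataclass

TERMINAL = {"completed", "failed", "cancelled"}

@dataclass
class JobSketch:
    status: str
    cost_halala: int

    @property
    def cost_sar(self) -> float:
        # 100 halala = 1 SAR
        return self.cost_halala / 100

    @property
    def is_done(self) -> bool:
        return self.status in TERMINAL

j = JobSketch(status="completed", cost_halala=45)
print(j.cost_sar, j.is_done)  # → 0.45 True
```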

Provider

Attribute Type Description
id int Numeric provider ID
name str Provider display name
gpu_model str GPU model string (e.g. RTX 4090)
vram_mib int VRAM in MiB
vram_gb float vram_mib / 1024
status str online or offline
reliability_score int 0–100 reliability percentage

Wallet

Attribute Type Description
balance_halala int Balance in halala
balance_sar float balance_halala / 100
name str Account name
email str Account email

Exceptions

Exception When raised
DC1Error Base class for all SDK errors
AuthError API key missing or invalid (HTTP 401)
APIError API returned an error. Has .status_code and .response attributes
JobTimeoutError jobs.wait() exceeded the timeout. Has .job_id and .timeout

from dc1 import DC1Client, JobTimeoutError, APIError

try:
    result = client.jobs.wait(job.id, timeout=60)
except JobTimeoutError as e:
    print(f'Job {e.job_id} is still running after {e.timeout}s')
except APIError as e:
    print(f'API error {e.status_code}: {e}')

Examples

See examples/ for runnable scripts.


Provider SDK (dc1_provider)

GPU providers use the DC1ProviderClient class from the dc1_provider package, included in this repo alongside the renter SDK.

Quickstart

from dc1_provider import DC1ProviderClient

# --- Step 1: Register once ---
client = DC1ProviderClient()          # no key needed for registration
spec = client.build_resource_spec()   # auto-detects GPU + RAM + OS

result = client.register(
    name="My GPU Farm",
    email="provider@example.com",
    gpu_model=spec.get("gpu_model", "RTX 4090"),
    resource_spec=spec,
)
API_KEY = result["api_key"]
print("API key:", API_KEY)

# --- Step 2: Ongoing operation ---
client = DC1ProviderClient(api_key=API_KEY)

me = client.me()
print(f"{me.name} | {me.gpu_model} | {me.status}")
print(f"Reputation: {me.reputation_score:.1f}/100")

# Send a heartbeat (daemon does this automatically every 30s)
client.announce(client.build_resource_spec())

# Poll for assigned jobs
jobs = client.get_jobs(status="queued")
for job in jobs:
    print(f"Job {job.id}: {job.job_type} — earns {job.earnings_sar:.2f} SAR")

# Earnings
e = client.get_earnings()
print(f"Available: {e.available_sar:.2f} SAR / Total: {e.total_earned_sar:.2f} SAR")

Authentication

Provider keys look like dc1-provider-<32-hex-chars>. You receive one when you call client.register(). Store it securely — it authenticates all provider API calls.
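One common way to keep the key out of source code is an environment variable. A minimal sketch; the variable name `DC1_PROVIDER_KEY` is a hypothetical choice, not something the SDK reads automatically:

```python
import os

def load_provider_key(env_var="DC1_PROVIDER_KEY"):
    """Read the provider API key from the environment."""
    key = os.environ.get(env_var)
    if key is None:
        raise RuntimeError(f"set {env_var} before starting the daemon")
    return key
```

You would then pass the result to `DC1ProviderClient(api_key=load_provider_key())`.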

DC1ProviderClient reference

Constructor

DC1ProviderClient(api_key=None, base_url="https://api.dcp.sa", timeout=30)

Parameter Type Default Description
api_key str | None None Provider API key (omit for register())
base_url str https://api.dcp.sa Override for local testing
timeout int 30 HTTP timeout in seconds

Methods

Method Returns Description
me() ProviderProfile Your account profile, earnings totals, reputation
register(name, email, gpu_model, os=None, phone=None, resource_spec=None) dict Create a new provider account
heartbeat(gpu_spec=None) dict Send a lightweight heartbeat
announce(resource_spec) dict Heartbeat with full resource spec to advertise capacity
get_jobs(status=None) list[ProviderJob] List jobs assigned to you (queued, running, completed, failed)
get_earnings() Earnings Available balance and lifetime earnings
build_resource_spec() dict Auto-detect GPU + RAM + OS via nvidia-smi
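The 30-second heartbeat the daemon sends via `announce()` can be sketched as a simple loop. This is an illustrative outline, not the daemon's code: `announce` and `sleep` are injected so the loop terminates and is testable, whereas the real daemon would pass `client.announce` and run indefinitely.

```python
import time

def heartbeat_loop(announce, spec, *, interval=30, iterations=None, sleep=time.sleep):
    """Call announce(spec) every `interval` seconds, optionally a fixed number of times."""
    sent = 0
    while iterations is None or sent < iterations:
        announce(spec)
        sent += 1
        sleep(interval)
    return sent

# Stubbed demo: record three heartbeats without sleeping.
calls = []
heartbeat_loop(calls.append, {"gpu_model": "RTX 4090"}, iterations=3, sleep=lambda _: None)
print(len(calls))  # → 3
```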

Provider models

ProviderProfile

Attribute Type Description
id int Numeric provider ID
name str Display name
gpu_model str GPU string, e.g. RTX 4090
status str registered | online | offline | suspended
total_earnings_halala int Lifetime earnings in halala
total_earnings_sar float total_earnings_halala / 100
today_earnings_sar float Today's earnings in SAR
reputation_score float 0–100 composite score
uptime_pct float 7-day uptime percentage
last_heartbeat str | None ISO-8601 timestamp
is_online bool True when status == "online"

ProviderJob

Attribute Type Description
id str Job ID
job_type str e.g. llm_inference, image_gen
status str queued | running | completed | failed
cost_halala int Total job cost
provider_earnings_halala int Your 75% share
earnings_sar float provider_earnings_halala / 100
payload dict Workload parameters
hmac_signature str | None Task signature for daemon validation

Earnings

Attribute Type Description
available_halala int Balance ready to withdraw
available_sar float available_halala / 100
total_earned_sar float Lifetime total
total_jobs int Count of completed jobs

Provider exceptions

Exception When
AuthError API key invalid or provider suspended
DC1APIError Any other API error. Has .status_code and .response

Provider examples

See examples/ for runnable scripts.


License

MIT © DC1 / dhnpmp-tech

Download files

Download the file for your platform.

Source Distribution

dc1_provider-0.1.0.tar.gz (19.2 kB)

Built Distribution

dc1_provider-0.1.0-py3-none-any.whl (20.2 kB)

File details

Details for the file dc1_provider-0.1.0.tar.gz.

File metadata

  • Download URL: dc1_provider-0.1.0.tar.gz
  • Upload date:
  • Size: 19.2 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.12.3

File hashes

Hashes for dc1_provider-0.1.0.tar.gz
Algorithm Hash digest
SHA256 eacacfd3f714cc4c5969d2be15fb66dc18f9422606932166a145f22186a4f6d2
MD5 8a2d8f913158715865142aed104da7ee
BLAKE2b-256 0a471364cd3ce34793ece646ce4c0945590f8a9a35576dde5ef0877f904ad9ea

File details

Details for the file dc1_provider-0.1.0-py3-none-any.whl.

File metadata

  • Download URL: dc1_provider-0.1.0-py3-none-any.whl
  • Upload date:
  • Size: 20.2 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.12.3

File hashes

Hashes for dc1_provider-0.1.0-py3-none-any.whl
Algorithm Hash digest
SHA256 f4763bc7ad3b08abb91dc03f1ed928053643fe3d80284f05cde119c73ea3cc03
MD5 ffdf4a309c3eacef1c027c505acaf770
BLAKE2b-256 12e0d51bffdca5964a7f2e5eeb85459a655687b4a4dfb2e481b714d6ee2c1dba
