
Python SDK

v0.1.2

The official Python SDK for CronBeats ping telemetry: a Pythonic API with type hints, zero dependencies, and support for Python 3.7+.

Installation

Install via pip:

pip install cronbeats-python

Requirements: Python 3.7 or higher

Quick Start

Send a simple ping to mark your cron job as successful:

from cronbeats_python import PingClient

client = PingClient('YCrXzYbV')
client.ping()

# Done! CronBeats knows your job ran successfully.

Sending Pings

Simple Ping

Send a single heartbeat when your job completes:

client = PingClient('YCrXzYbV')
client.ping()

Start & End Signals

Track execution time by signaling when your job starts and ends:

client = PingClient('YCrXzYbV')

client.start()
# ... do your work ...
client.success()  # or client.fail() on error

Real-World Cron Job Example

Complete example with error handling:

import sys

from cronbeats_python import PingClient

client = PingClient('YCrXzYbV')
client.start()

try:
    # Your actual cron job work
    process_emails()
    generate_reports()
    cleanup_temp_files()

    client.success()
except Exception as e:
    client.fail()
    print(f"Error: {e}", file=sys.stderr)
    sys.exit(1)

Progress Tracking

Send real-time progress updates from long-running jobs. CronBeats displays a live progress bar and status message on your dashboard.

📊 Two Progress Modes

Mode 1: With Percentage
client.progress(50, "message")

Dashboard shows progress bar (0-100%) + your message. Use when you can calculate meaningful progress.

Mode 2: Message Only
client.progress(message="...")

Dashboard shows only your status message (no percentage bar). Use for status updates without measurable progress.
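The two modes differ only in whether a percentage is supplied. As an illustrative sketch (the progress_update helper below is not part of the SDK), you can switch between them depending on whether the total amount of work is known:

```python
def progress_update(client, message, done=None, total=None):
    """Send a Mode 1 (percentage) update when totals are known,
    otherwise fall back to a Mode 2 (message-only) update.

    `client` is a cronbeats_python PingClient; this helper is
    illustrative, not part of the SDK.
    """
    if done is not None and total:
        percent = min(100, int(done * 100 / total))
        client.progress(percent, message)   # Mode 1: bar + message
        return percent
    client.progress(message=message)        # Mode 2: message only
    return None
```

For example, call progress_update(client, 'scanning mailbox') while the total is still unknown, then progress_update(client, 'processing', done=i, total=n) once it is.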

Basic Progress Update

# Send progress percentage (0-100) with status message
client.progress(50, 'Processing batch 50/100')

# Using dict syntax
client.progress({
    'seq': 75,
    'message': 'Almost done'
})

Processing Records Example

from cronbeats_python import PingClient

client = PingClient('YCrXzYbV')
client.start()

total = 10000
for i in range(1, total + 1):
    process_record(i)
    
    # Update progress every 500 records
    if i % 500 == 0:
        percent = int(i * 100 / total)
        client.progress(percent, f'Processed {i:,} / {total:,} records')

client.progress(100, 'All records processed')
client.success()

ETL Pipeline Example

import pandas as pd
from cronbeats_python import PingClient

client = PingClient('YCrXzYbV')
client.start()

# Extract
client.progress(0, 'Step 1/4: Extracting data from API')
df = pd.read_csv('https://api.example.com/data.csv')

# Transform
client.progress(25, 'Step 2/4: Cleaning and transforming')
df = df.dropna().reset_index(drop=True)

# Load
client.progress(50, 'Step 3/4: Loading to database')
df.to_sql('analytics', engine, if_exists='replace')  # engine: your pre-configured SQLAlchemy engine

# Verify
client.progress(75, 'Step 4/4: Validating results')
verify_data_quality()

client.progress(100, 'ETL complete')
client.success()

Error Handling

The SDK raises typed exceptions for validation and API errors:

from cronbeats_python import PingClient, ValidationError, ApiError

client = PingClient('YCrXzYbV')

try:
    client.ping()
    
except ValidationError as e:
    # Invalid job key format or parameters (client-side)
    print(f'Validation error: {e}')
    
except ApiError as e:
    # API/network issue with normalized metadata
    print(f'API error: {e.code}')        # e.g. 'RATE_LIMITED'
    print(f'HTTP status: {e.http_status}') # e.g. 429
    print(f'Retryable: {e.retryable}')     # bool

API Reference

Method Description
ping() Send a simple heartbeat ping
start() Signal job start and begin execution timer
end(status) Signal job end with status ("success" or "fail")
success() Alias for end("success")
fail() Alias for end("fail")
progress(seq=None, message=None) Send progress update (0-100) with optional status message

For complete API documentation, see the GitHub repository.
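Because success() and fail() are thin aliases for end(status), calling end() directly can be convenient when the outcome is computed rather than branched. A minimal sketch (the finish helper is illustrative, not part of the SDK):

```python
def finish(client, all_ok):
    """Translate a boolean outcome into the status string that
    end() expects ("success" or "fail") and send it."""
    status = "success" if all_ok else "fail"
    client.end(status)
    return status
```

For example, after a batch run you might call finish(client, error_count == 0) instead of branching between success() and fail().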

Configuration Options

Customize the client behavior:

client = PingClient(
    'YCrXzYbV',
    base_url='https://cronbeats.io',  # Default
    timeout_ms=5000,                   # Default: 5 seconds
    max_retries=2                      # Default: 2 retries
)

Note: The default 5-second timeout ensures the SDK never blocks your cron job if CronBeats is unreachable. Adjust only if you need longer waits.
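Even with the short default timeout, a ping that raises should not take your job down with it. One defensive pattern (a sketch, not something the SDK provides) is to wrap monitoring calls so any error is logged and swallowed:

```python
def safe_ping(action):
    """Invoke a ping callable (e.g. client.ping or client.success),
    returning False instead of raising if monitoring fails, so a
    CronBeats outage can never break the cron job itself."""
    try:
        action()
        return True
    except Exception as e:
        print(f"CronBeats ping skipped: {e}")
        return False
```

Usage: safe_ping(client.ping), or safe_ping(lambda: client.progress(50, 'halfway')) for calls that take arguments.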

Complete Examples

Data Processing with Progress

import pandas as pd
from cronbeats_python import PingClient

client = PingClient('YCrXzYbV')
client.start()

try:
    client.progress(10, 'Loading data...')
    df = pd.read_csv('large_dataset.csv')
    
    client.progress(30, 'Processing data...')
    df = df.dropna()
    df['processed'] = df['value'].apply(transform_function)
    
    client.progress(70, 'Saving results...')
    df.to_csv('output.csv', index=False)
    
    client.progress(100, 'Processing complete')
    client.success()
    
except Exception as e:
    client.fail()
    raise

Django Management Command

# management/commands/backup_database.py

from django.core.management.base import BaseCommand
from cronbeats_python import PingClient
import os

class Command(BaseCommand):
    help = 'Backup database with CronBeats monitoring'
    
    def handle(self, *args, **options):
        client = PingClient(os.environ['CRONBEATS_JOB_KEY'])
        client.start()
        
        try:
            self.stdout.write('Starting backup...')
            # Your backup logic
            client.success()
            self.stdout.write(self.style.SUCCESS('Backup complete'))
        except Exception as e:
            client.fail()
            self.stdout.write(self.style.ERROR(f'Backup failed: {e}'))
            raise

Batch Processing Loop

from cronbeats_python import PingClient

client = PingClient('YCrXzYbV')
client.start()

batches = ['batch1.csv', 'batch2.csv', 'batch3.csv', 'batch4.csv']
total = len(batches)

for idx, batch_file in enumerate(batches, start=1):
    percent = int((idx - 1) * 100 / total)
    client.progress(percent, f'Processing {batch_file}')
    
    # Process batch
    process_batch(batch_file)

client.progress(100, 'All batches processed')
client.success()

Ready to Get Started?

Install the SDK and start monitoring your Python cron jobs in minutes