Delta Feed - Through Kafka

Describes the specifics of integrating with our Delta feed through Kafka.


This document assumes the client has already reviewed:

  • Message types documentation

  • Message sequencing rules

  • API authentication documentation

  • Snapshot endpoint documentation

This page only covers the Kafka protocol layer as an additional transport mechanism over the existing system.

Overview

The Odds88 Kafka Gateway provides an alternative method for consuming the Odds88 odds stream, in addition to the existing socket connection.

The Kafka connection is available in:

  • Stage Environment – Intended for integration and testing purposes. The stage environment serves data similar in structure and behavior to production.

  • Production Environment – Live production data.


Access & Network Requirements

Before connecting to Kafka, clients must:

  1. Provide their server public IP address(es) to Odds88.

  2. Wait for confirmation that IP whitelisting has been completed.

Only whitelisted IP addresses are allowed to connect.

Kafka Connection Details

Bootstrap Server (Stage)


Security Configuration

  • Security Protocol: SASL_SSL

  • SASL Mechanism: SCRAM-SHA-512

  • Authentication: Username and Password (provided by Odds88)

  • Encryption: TLS enabled

Each client will receive:

  • Username

  • Password

  • Dedicated topic

  • Dedicated consumer group


Topic & Partition Model

  • Each client is provided with a dedicated odds feed topic.

  • The topic contains a single partition by default.

Important Implications

Because the topic has only one partition:

  • Only one active consumer instance can consume messages at a time.

  • Message order is strictly preserved.

  • ⚠️ Parallel consumption is not supported.

High Availability Option

Clients may implement:

  • One primary consumer (active)

  • One secondary consumer (standby / backoff mode)

If the primary consumer disconnects, the secondary consumer may take over.
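One way to implement this, sketched below with the Confluent.Kafka .NET client (all "<...>" values are placeholders for the credentials and names provided by Odds88), is to run the same consumer code on both instances under the single dedicated consumer group: Kafka assigns the lone partition to exactly one group member, and a rebalance moves it to the standby if the primary disconnects.

```csharp
using Confluent.Kafka;

// Sketch: identical code runs on BOTH the primary and the standby instance.
var config = new ConsumerConfig
{
    BootstrapServers = "<bootstrap-server>",
    GroupId          = "<dedicated-consumer-group>",
    // SASL/TLS settings as provided by Odds88 omitted for brevity.
};

using var consumer = new ConsumerBuilder<Ignore, byte[]>(config).Build();
consumer.Subscribe("<dedicated-topic>");

// With a single partition, only one member of the group is assigned the
// partition and receives messages; the other instance polls but stays idle.
// If the active instance disconnects, a group rebalance automatically
// reassigns the partition to the standby, which then takes over.
```

No extra coordination logic is needed on the client side for this pattern; the failover is driven entirely by Kafka's consumer-group rebalancing.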

Offset Management (Optional)

Manual Offset Commit

We recommend using manual offset committing for advanced control.

This ensures that:

  • Offsets are committed only after the message has been fully processed by the client system.

  • If processing fails, the message is not marked as consumed.

  • Upon restart, Kafka will redeliver uncommitted messages.
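A sketch of this commit-after-processing pattern with the Confluent.Kafka .NET client (connection values are placeholders, and ProcessMessage stands in for the client's own handling logic):

```csharp
using System;
using System.Threading;
using Confluent.Kafka;

var config = new ConsumerConfig
{
    BootstrapServers = "<bootstrap-server>",
    GroupId          = "<dedicated-consumer-group>",
    // SASL/TLS settings as provided by Odds88 omitted for brevity.
    EnableAutoCommit = false,   // offsets are committed manually below
};

using var consumer = new ConsumerBuilder<Ignore, byte[]>(config).Build();
consumer.Subscribe("<dedicated-topic>");

while (true)
{
    var result = consumer.Consume(CancellationToken.None);
    try
    {
        ProcessMessage(result.Message.Value); // decompress, parse, apply

        // Commit only after successful processing; a crash before this
        // line leaves the offset uncommitted, so Kafka redelivers the
        // message after restart.
        consumer.Commit(result);
    }
    catch (Exception ex)
    {
        // Not committed: the message will be redelivered on reconnect.
        Console.Error.WriteLine($"processing failed: {ex.Message}");
    }
}

static void ProcessMessage(byte[] payload) { /* client-specific handling */ }
```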

Each message includes a per-event sequence ID. If the client detects a sequence gap, it may call the dedicated API endpoint to fetch a fresh snapshot of the event; this allows the client to resynchronize event state.
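A minimal per-event gap check might look like the following sketch (the snapshot call itself is not shown here; see the snapshot endpoint documentation):

```csharp
using System.Collections.Generic;

// Sketch: detect per-event sequence gaps. On a gap, the client should fetch
// a fresh snapshot from the dedicated endpoint and reset the stored
// sequence ID from that snapshot.
var lastSeqByEvent = new Dictionary<string, long>();

bool HasSequenceGap(string eventId, long seqId)
{
    if (lastSeqByEvent.TryGetValue(eventId, out var last) && seqId != last + 1)
        return true;              // gap detected; resynchronize via snapshot

    lastSeqByEvent[eventId] = seqId;
    return false;
}
```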

Automatic Offset Commit

Kafka supports automatic offset committing, where the consumer periodically commits the latest processed offset at a defined interval.

When enable.auto.commit = true, Kafka will automatically mark messages as consumed after they are polled by the consumer, based on the configured commit interval.

While this simplifies implementation, it introduces certain risks:

  • A message may be committed before it has been fully processed by the client system.

  • If the client application crashes after the offset is committed but before processing completes, the message will not be redelivered.

  • This may lead to data inconsistencies or missed updates.

For this reason, automatic offset management is suitable only when:

  • The client system can tolerate occasional message loss, or

  • The processing logic is idempotent and capable of handling recovery independently.

For mission-critical odds processing, manual offset committing is strongly recommended, as it provides full control over when a message is marked as successfully processed.

Consumer Configuration

Sample Configuration (.NET Example)
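The sample itself is not reproduced in this export. A minimal sketch using the Confluent.Kafka .NET client follows; all "<...>" values are placeholders for the credentials and names Odds88 provides during onboarding.

```csharp
using Confluent.Kafka;

// Minimal consumer configuration sketch (Confluent.Kafka NuGet package).
var config = new ConsumerConfig
{
    BootstrapServers = "<bootstrap-server>",
    SecurityProtocol = SecurityProtocol.SaslSsl,
    SaslMechanism    = SaslMechanism.ScramSha512,
    SaslUsername     = "<username>",
    SaslPassword     = "<password>",
    GroupId          = "<dedicated-consumer-group>",

    // Start from the latest offset when no committed offset exists;
    // later reconnections resume from the last committed offset.
    AutoOffsetReset  = AutoOffsetReset.Latest,

    // Recommended for mission-critical processing: commit manually
    // after each message is fully processed.
    EnableAutoCommit = false,
};
```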

Notes:

  • AutoOffsetReset = Latest is recommended.

  • Offset reset is applied only when the consumer group connects for the first time or its stored offset is no longer available.

  • After offsets are committed (manually or automatically), reconnections will resume from the last committed offset.

Kafka will continue delivering messages from the point of disconnection.


Data Flow Control API

After successful integration, two API endpoints are provided:

Start Endpoint

The client must call:

Calling this endpoint initiates data flow to the client's Kafka topic.

This should be called once before starting consumption.

Stop Endpoint

If the client intends to stop consuming Kafka messages for an extended period, they are advised to call:

This prevents unnecessary data accumulation in the topic.


Calling stop is not mandatory but considered good practice. It should NOT be used for short disconnections or temporary reconnects.

Message Handling

Compression

All Kafka messages are compressed using GZIP.

Upon receiving a message, the client must:

  1. Decompress the message using GZIP.

  2. Parse the decompressed payload as JSON.

  3. Process the message according to documented message types.
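The three steps above can be sketched with standard .NET types only (System.IO.Compression and System.Text.Json). The compressed demo payload stands in for a raw Kafka message value, and its JSON shape is illustrative rather than the actual message schema:

```csharp
using System;
using System.IO;
using System.IO.Compression;
using System.Text;
using System.Text.Json;

// Demo input: in production this would be result.Message.Value from the
// Kafka consumer. Here we GZIP-compress a sample payload ourselves.
byte[] messageValue = Compress("{\"type\":\"delta\"}");

// Step 1: decompress with GZIP.
string json = DecompressGzip(messageValue);

// Step 2: parse the decompressed payload as JSON.
using JsonDocument doc = JsonDocument.Parse(json);

// Step 3: dispatch on the documented message type (field name illustrative).
string type = doc.RootElement.GetProperty("type").GetString();
Console.WriteLine(type);

static string DecompressGzip(byte[] compressed)
{
    using var input = new MemoryStream(compressed);
    using var gzip = new GZipStream(input, CompressionMode.Decompress);
    using var reader = new StreamReader(gzip, Encoding.UTF8);
    return reader.ReadToEnd();
}

static byte[] Compress(string s)
{
    var bytes = Encoding.UTF8.GetBytes(s);
    using var output = new MemoryStream();
    using (var gzip = new GZipStream(output, CompressionMode.Compress))
        gzip.Write(bytes, 0, bytes.Length);
    return output.ToArray();
}
```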

Message Format

After decompression, the message format matches that of the delta WebSocket stream.
