Raw order book snapshot

What is this dataset?

The raw data on which our Level 2 aggregations, such as market depth, bid/ask spread, and price slippage, are built. It details a point-in-time view of the bids and asks on an exchange's order book to 10% depth, which you can use to build your own custom Level 2 aggregations. A snapshot is produced every 30 seconds.
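For example, one simple custom aggregation is the bid/ask spread of the most recent snapshot in a daily file. The sketch below is illustrative only: it assumes pandas is available and uses the example file name and the column names (date, type, price) documented further down this page.

```python
import pandas as pd

# Example daily file; see the file-naming convention below.
df = pd.read_csv("oe_btcusdt_2023-05-30.csv")

# Keep only the rows belonging to the most recent 30-second snapshot.
latest = df[df["date"] == df["date"].max()]

# Best bid is the highest bid price, best ask is the lowest ask price.
best_bid = latest.loc[latest["type"] == "b", "price"].max()
best_ask = latest.loc[latest["type"] == "a", "price"].min()
print("bid/ask spread:", best_ask - best_bid)
```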

Update frequencies

  • AWS: New CSV once a day

  • Azure: New CSV once a day

  • Google Cloud Platform: New CSV once a day

  • Snowflake: Data refreshes once a day

  • BigQuery: Data refreshes once a day

File structure details

  • File Name: [kaiko_legacy_slug]_[instrument_symbol]_[date].csv

    • Files are created on a daily basis.

    • Example: oe_btcusdt_2023-05-30.csv (used in the loading sketch after this list)

  • Cut-off time: 00:00:00 UTC

    • Data points are assigned to a day based on the timestamp at which we send the order book snapshot REST API request to the exchange.

  • Column Delimiter: , (comma)

  • Decimal Mark (in numbers): . (dot)
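A minimal loading sketch under the conventions above, assuming pandas; the slug, symbol, and date values are taken from the example file name and are placeholders for your own instruments.

```python
import pandas as pd

legacy_slug = "oe"              # kaiko_legacy_slug from the example file name above
instrument_symbol = "btcusdt"   # instrument_symbol from the example file name above
day = "2023-05-30"              # files are produced daily, cut off at 00:00:00 UTC

file_name = f"{legacy_slug}_{instrument_symbol}_{day}.csv"

# Comma column delimiter and dot decimal mark, per the conventions above.
snapshot = pd.read_csv(file_name, sep=",", decimal=".")
```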

Each file contains the following columns:

  • date: The timestamp at which we send the REST API request to the exchange, as a Unix timestamp in milliseconds. Example: 1676246400344

  • type: Side (type) of the limit orders in the order book snapshot. a: ask, b: bid. Example: a

  • price: Quoted price. Example: 21779.5

  • amount: Amount of limit orders at the specific price (can be in base_asset, quote_asset, or the number of contracts).

  • exchange (BigQuery and Snowflake only): Exchange code. See Exchange codes. Example: cbse

  • instrument (BigQuery and Snowflake only): Instrument code. Example: btc-usd
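As a rough illustration of how these columns combine (assuming pandas; this is not an official Kaiko aggregation), the sketch below converts the millisecond date timestamps and sums the quoted amount on each side of the book for every 30-second snapshot.

```python
import pandas as pd

# Example daily file; see the file-naming convention above.
df = pd.read_csv("oe_btcusdt_2023-05-30.csv")

# Convert the millisecond Unix timestamp to a timezone-aware datetime.
df["ts"] = pd.to_datetime(df["date"], unit="ms", utc=True)

# Total quantity quoted on the bid (b) and ask (a) sides of each snapshot.
# Note: `amount` may be in base asset, quote asset, or contracts depending on the instrument.
depth_per_side = df.groupby(["ts", "type"])["amount"].sum().unstack("type")
print(depth_per_side.head())
```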
