Introduction

What is LYS Labs?

LYS Labs is a blockchain data infrastructure company focused on making real-time, structured, and AI-optimized data available to traders, researchers, agents, and institutional systems. Our mission is to turn raw blockchain activity into intelligent, usable information - delivered faster than anyone else.

We specialize in ultra-low-latency pipelines for Solana and EVM chains, decoding and enriching raw blockchain transactions in under 20ms. This allows clients - from high-frequency traders to AI model trainers - to react before the rest of the market even knows what happened.

Why It Matters

In Web3, data is power. But raw data from the blockchain is fragmented, noisy, and hard to use in real-time. LYS solves this by providing:

  • Real-time structured insights: Every decoded trade, token launch, or wallet movement is available instantly.

  • Custom ontologies & schemas: Users can define the data models they need for their strategies.

  • AI-ready pipelines: Ontology-grounded graphs power retrieval-augmented generation (RAG) for intelligent agents and trading models.

Your journey through these docs

This documentation is structured to mirror your learning curve:

1. Start here - Understand the platform and use cases

2. Dive into the architecture - Learn how the stack is built

3. Explore the data - From raw Solana logs to decoded events

4. Use the APIs - Live data, streaming endpoints, and GraphQL

5. Build - Train AI models or build apps on top of LYS

Who We Serve

LYS isn’t just another analytics dashboard - it’s an intelligence platform. Here's how different power users benefit:

For High Frequency Traders:

  • Sub-5ms latency on Solana token swaps

  • Early detection of whale buys, bundle activity, and liquidity shifts

  • Custom signals for arbitrage and spread adjustment

Key Capabilities

1. Real-time pipelines

  • Solana decoded in 4ms

  • Live trade and liquidity streaming

  • Support for EVM chains (block-by-block)

2. Custom ontologies

  • Define your own schema over blockchain activity

  • Baseline templates provided for fast start
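
To make "define your own schema" concrete, here is a minimal sketch of what a user-defined ontology over swap activity could look like. The entity names, attributes, and relation types are illustrative assumptions, not the platform's actual baseline templates.

```python
from dataclasses import dataclass, field

# Illustrative only: a user-defined ontology describing which on-chain
# entities and relationships a strategy cares about. All names below are
# hypothetical, not LYS's baseline templates.

@dataclass
class EntityType:
    name: str
    attributes: list[str]

@dataclass
class RelationType:
    name: str
    source: str   # entity the edge starts from
    target: str   # entity the edge points to

@dataclass
class Ontology:
    entities: list[EntityType] = field(default_factory=list)
    relations: list[RelationType] = field(default_factory=list)

swap_ontology = Ontology(
    entities=[
        EntityType("Wallet", ["address", "first_seen"]),
        EntityType("Token",  ["mint", "symbol", "decimals"]),
        EntityType("Swap",   ["signature", "amount_in", "amount_out", "slot"]),
    ],
    relations=[
        RelationType("EXECUTED", source="Wallet", target="Swap"),
        RelationType("SWAPPED_INTO", source="Swap", target="Token"),
    ],
)
```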

3. Knowledge graphs

  • Multi-hop wallet, token, protocol relationships

  • Graph-native queries for advanced insights
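
As an illustration of a multi-hop, graph-native query: assuming the graph is reachable over a Bolt/Cypher endpoint (an assumption for this sketch; the URI, credentials, labels, and relationship types below are placeholders), finding funders behind clusters of buying wallets might look like this:

```python
from neo4j import GraphDatabase

# Placeholder endpoint and graph model - not LYS's published schema.
driver = GraphDatabase.driver("bolt://localhost:7687", auth=("user", "password"))

MULTI_HOP_QUERY = """
MATCH (w:Wallet)-[:EXECUTED]->(:Swap)-[:SWAPPED_INTO]->(t:Token {symbol: $symbol})
MATCH (w)<-[:FUNDED]-(funder:Wallet)
RETURN funder.address AS funder, count(DISTINCT w) AS wallets_funded
ORDER BY wallets_funded DESC
LIMIT 10
"""

with driver.session() as session:
    for record in session.run(MULTI_HOP_QUERY, symbol="BONK"):
        print(record["funder"], record["wallets_funded"])

driver.close()
```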

4. AI-Optimized retrieval

  • Ontology-Grounded RAG (OG-RAG) integration

  • Built for LLMs and autonomous agents
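
As a rough illustration of the OG-RAG idea (not LYS's actual pipeline): retrieve ontology-grounded facts relevant to a question, then hand them to an LLM as structured context. The helper functions below are stand-ins for a graph query layer and an LLM client.

```python
# Minimal sketch of the OG-RAG pattern. retrieve_facts() returns canned
# data here; in a real system it would run graph queries scoped by the
# ontology. Nothing below is an LYS API.

def retrieve_facts(question: str) -> list[str]:
    return [
        "Wallet w1 executed 14 swaps into TOKEN_X within 3 minutes of launch",
        "TOKEN_X's liquidity pool was funded by the token creator's wallet",
    ]

def build_prompt(question: str, facts: list[str]) -> str:
    context = "\n".join(f"- {fact}" for fact in facts)
    return (
        "Answer using only the facts below.\n\n"
        f"Facts:\n{context}\n\n"
        f"Question: {question}\n"
    )

question = "Does TOKEN_X's launch look bundled?"
print(build_prompt(question, retrieve_facts(question)))
```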

How it works: The LYS data journey

1. Raw data ingestion

  • Direct from full nodes, mempool, or Geyser (for Solana)

  • No third-party APIs in the path, which means lower latency and more control

2. Parsing & decoding

  • Protocol-specific decoders identify real events (e.g. swaps, votes, liquidity adds)

  • Outputs are normalized to shared schemas
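
For example, a decoded swap might normalize into a flat record like the one below. The field names are illustrative, not the published schema.

```python
from dataclasses import dataclass

# Illustrative shape of a normalized swap event; field names are
# hypothetical, not LYS's published schema.
@dataclass
class SwapEvent:
    signature: str      # transaction signature
    slot: int           # Solana slot the swap landed in
    program: str        # DEX program that executed it
    wallet: str         # signer / trader address
    token_in: str       # mint of the token sold
    token_out: str      # mint of the token bought
    amount_in: float
    amount_out: float
    timestamp_ms: int
```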

3. Indexing & aggregation

  • Events are indexed in real time and written to memory + database

  • Aggregators summarize trends (OHLCV, buy/sell counts, rug flags)
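
As a toy illustration of the aggregation step (not the production aggregator), rolling decoded trades up into one-minute OHLCV bars can be as simple as:

```python
# Toy OHLCV aggregator: buckets (price, size, timestamp_ms) trades
# into one-minute candles. Illustrative only.
def ohlcv_1m(trades):
    bars = {}
    for price, size, ts_ms in sorted(trades, key=lambda t: t[2]):
        minute = ts_ms // 60_000
        if minute not in bars:
            bars[minute] = {"open": price, "high": price, "low": price,
                            "close": price, "volume": 0.0}
        bar = bars[minute]
        bar["high"] = max(bar["high"], price)
        bar["low"] = min(bar["low"], price)
        bar["close"] = price
        bar["volume"] += size
    return bars

print(ohlcv_1m([(1.00, 50, 0), (1.05, 20, 30_000), (0.98, 10, 65_000)]))
```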

4. Contextualization

  • Link events across wallets and time

  • Discover hidden correlations (e.g., airdrop farming, vote manipulation)
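
One way to picture this step: treat funding transfers as edges in a graph and look for many fresh wallets seeded by the same source shortly before they all buy the same token, a common airdrop-farming and bundling fingerprint. A minimal sketch with networkx (illustrative data and threshold, not the production graph engine):

```python
import networkx as nx

# Each tuple: (funder, funded_wallet). Illustrative data.
transfers = [
    ("FUNDER_A", "w1"), ("FUNDER_A", "w2"), ("FUNDER_A", "w3"),
    ("FUNDER_B", "w4"),
]

g = nx.DiGraph()
g.add_edges_from(transfers)

# Flag funders that seeded many wallets - a crude bundling heuristic.
suspicious = [n for n in g.nodes if g.out_degree(n) >= 3]
print(suspicious)  # ['FUNDER_A']
```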

5. Delivery

  • APIs and WSS streams deliver data to bots, dashboards, or AI agents

  • Sandbox enables querying in natural language, Cypher, or via API
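
In practice, consuming a decoded-event stream looks like any other WebSocket subscription. The URL and message format below are placeholders, not the actual LYS stream contract.

```python
import asyncio
import json
import websockets  # pip install websockets

# Placeholder endpoint and payload - not the real LYS stream contract.
WSS_URL = "wss://example.lys.invalid/stream/solana-swaps"

async def consume():
    async with websockets.connect(WSS_URL) as ws:
        await ws.send(json.dumps({"subscribe": "swaps", "token": "<mint>"}))
        async for raw in ws:
            event = json.loads(raw)
            print(event.get("wallet"), event.get("amount_out"))

asyncio.run(consume())
```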

Example: Token launch detection

Here’s what happens when a token launches on Pump.fun:

  • Decoder detects token creation, wallet funding, and initial liquidity

  • Aggregator tracks early buys/sells, bundle flags, price rise

  • Sandbox shows key metrics like bonding % change or suspicious wallets

  • AI agents receive this context and decide whether to trade, alert, or ignore
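
An agent consuming that context might apply simple rules before any model gets involved. The metric names and thresholds below are arbitrary, for illustration only; they are not LYS's rug or bundle detection logic.

```python
# Toy decision rule over launch metrics. Thresholds are arbitrary and
# illustrative; this is not LYS's detection logic.
def launch_decision(metrics: dict) -> str:
    if metrics.get("bundle_flag") or metrics.get("creator_holds_pct", 0) > 50:
        return "ignore"   # likely bundled or creator-dominated launch
    if metrics.get("unique_buyers_5m", 0) > 100 and metrics.get("bonding_pct", 0) > 30:
        return "alert"    # strong early traction worth a closer look
    return "trade"        # passes the basic checks; hand off to the strategy

print(launch_decision({"unique_buyers_5m": 140, "bonding_pct": 42, "bundle_flag": False}))
```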

Getting Started with LYS

You don’t need to be an engineer to use LYS. But if you are, we’ll give you every hook you need.

You can:

  • Ask natural language questions like:

    "Show me the wallets that bought a top 10 trending token and exited before ATH."

  • Use prebuilt aggregations (volume spikes, rug detection, bundle counts)

  • Subscribe to live streams of decoded Solana events

  • Train models on historical, structured token activity

Up next

In the next section, we’ll explore the LYS Platform in detail - how it all connects, what we’ve built, and how every layer serves data.

Ready? Let’s go.
