Use Windsurf’s Cascade AI assistant to build trading dashboards and analysis tools powered by The Brain. This guide shows you how to integrate the Gigabrain API into your Windsurf workflow.

Prerequisites

  • Windsurf editor installed
  • Gigabrain API key from your Profile
  • Basic knowledge of Python or JavaScript

Setup

Step 1: Get your API key

  1. Sign in to Gigabrain
  2. Navigate to Profile → API Keys
  3. Generate a new API key (starts with gb_sk_)
Step 2: Create your project

Set up a new project directory:

```bash
mkdir gigabrain-dashboard
cd gigabrain-dashboard
```

Create a `.env` file for your API key:

```bash
GIGABRAIN_API_KEY=gb_sk_your_key_here
```

Never commit `.env` to version control. Add it to `.gitignore`.
Step 3: Configure Windsurf workspace rules

Create `.windsurf/rules.md` to teach Cascade the Gigabrain API conventions:

````markdown
# Gigabrain API Development Rules

## API Configuration
- **Base URL**: `https://api.gigabrain.gg`
- **Endpoint**: `/v1/chat` (POST)
- **Auth**: `Authorization: Bearer gb_sk_...`
- **Response field**: `content` (not `message`)
- **Timeout**: Minimum 600 seconds

## Rate Limits
- 60 requests per minute
- Check `X-RateLimit-Remaining-Minute` header
- Implement exponential backoff on 429 errors

## The Brain - Specialists
The API routes queries to 7 specialists:

1. **Macro**: DXY, VIX, yields, Fed Funds, S&P 500, risk regime
2. **Microstructure**: Funding rates, OI, liquidations, long/short ratios, CVD
3. **Fundamentals**: TVL, protocol revenue, fees, active users, token metrics
4. **Market State**: Fear & Greed, narratives, sentiment, regime classification
5. **Price Movement**: EMAs, RSI, MACD, support/resistance, trade setups
6. **Trenches**: Micro-cap tokens, social momentum, KOL mentions
7. **Polymarket**: Prediction markets, odds, volume, resolution dates

## Query Patterns
For structured data, always specify exact JSON fields:

```
Get funding rates for top 10 perps. Respond as JSON array with:
symbol, funding_rate, open_interest, long_short_ratio
```

## Error Handling
- **401**: Invalid API key → Check key in profile settings
- **429**: Rate limit → Retry after `Retry-After` seconds
- **500/503/504**: Server error → Retry with exponential backoff
- Always log `session_id` for debugging

## Best Practices
- Use environment variables for API keys
- Cache responses to reduce API calls
- Implement retry logic for transient errors
- Break complex queries into smaller requests
- Monitor rate limit headers
- Parse `content` field to access JSON data
````
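The response-parsing and rate-limit rules above boil down to two small helpers. A sketch (field and header names come from the rules file; the function names are mine):

```python
import json

def parse_reply(payload):
    """Decode the JSON answer from a /v1/chat response body.
    Per the rules, the answer lives in `content` (not `message`)."""
    return json.loads(payload["content"])

def remaining_this_minute(headers):
    """Read the per-minute rate-limit header; default to the 60/min cap."""
    return int(headers.get("X-RateLimit-Remaining-Minute", 60))
```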
Step 4: Start building with Cascade

Open Windsurf and start Cascade. It will automatically read your `.windsurf/rules.md` and apply the Gigabrain API patterns.

Example: Multi-Agent Analysis Dashboard

Build a real-time dashboard that displays insights from multiple Gigabrain agents.
dashboard.py

```python
import os
import streamlit as st
import requests
import json
from datetime import datetime
from dotenv import load_dotenv

load_dotenv()

API_KEY = os.getenv("GIGABRAIN_API_KEY")
BASE_URL = "https://api.gigabrain.gg"

st.set_page_config(
    page_title="Gigabrain Dashboard",
    page_icon="🧠",
    layout="wide"
)

@st.cache_data(ttl=300)  # Cache for 5 minutes
def query_gigabrain(message):
    """Query Gigabrain API with caching"""
    try:
        response = requests.post(
            f"{BASE_URL}/v1/chat",
            headers={
                "Authorization": f"Bearer {API_KEY}",
                "Content-Type": "application/json"
            },
            json={"message": message},
            timeout=600
        )

        if response.status_code == 200:
            data = response.json()
            return json.loads(data["content"])
        else:
            st.error(f"API Error: {response.status_code}")
            return None
    except Exception as e:
        st.error(f"Error: {e}")
        return None

def main():
    st.title("🧠 Gigabrain Intelligence Dashboard")
    st.caption(f"Last updated: {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}")

    # Refresh button
    if st.button("🔄 Refresh Data"):
        st.cache_data.clear()
        st.rerun()

    # Create columns for different agents
    col1, col2 = st.columns(2)

    with col1:
        st.header("📈 Market Sentiment")
        with st.spinner("Fetching Fear & Greed..."):
            sentiment = query_gigabrain("""
                Get BTC fear and greed index. Respond as JSON with:
                fear_greed_index, fear_greed_label, btc_dominance,
                altcoin_season_index, market_cap_total
            """)

        if sentiment:
            # Fear & Greed gauge
            fg_index = sentiment["fear_greed_index"]
            fg_label = sentiment["fear_greed_label"]

            st.metric("Fear & Greed Index", f"{fg_index}/100", fg_label)
            st.progress(fg_index / 100)

            # Other metrics
            st.metric("BTC Dominance", f"{sentiment['btc_dominance']}%")
            st.metric("Altcoin Season Index", sentiment['altcoin_season_index'])

    with col2:
        st.header("💰 Funding Rates")
        with st.spinner("Fetching funding rates..."):
            funding = query_gigabrain("""
                Get funding rates for BTC, ETH, SOL. Respond as JSON array with:
                symbol, funding_rate, open_interest, long_short_ratio
            """)

        if funding:
            for token in funding:
                rate_pct = token["funding_rate"] * 100
                direction = "🟢 LONG" if rate_pct > 0 else "🔴 SHORT"

                st.metric(
                    f"{token['symbol']} Funding",
                    f"{rate_pct:.4f}%",
                    f"{direction} paying"
                )
                st.caption(f"OI: ${token['open_interest']:,.0f}")

    # Full width section for narratives
    st.header("🔥 Trending Narratives")
    with st.spinner("Fetching narratives..."):
        narratives = query_gigabrain("""
            Get current crypto narratives ranked by momentum. Respond as JSON array with:
            narrative, momentum_score, top_tokens, sentiment
        """)

    if narratives:
        for i, narrative in enumerate(narratives[:5], 1):
            with st.expander(f"{i}. {narrative['narrative']} (Momentum: {narrative['momentum_score']}/100)"):
                st.write(f"**Sentiment**: {narrative['sentiment']}")
                st.write(f"**Top Tokens**: {', '.join(narrative['top_tokens'])}")
                st.progress(narrative['momentum_score'] / 100)

    # Liquidations section
    st.header("⚡ Liquidations (24h)")
    with st.spinner("Fetching liquidation data..."):
        liquidations = query_gigabrain("""
            Get liquidation data for past 24 hours. Respond as JSON with:
            total_liquidations_usd, long_liquidations, short_liquidations,
            top_liquidated_tokens
        """)

    if liquidations:
        col1, col2, col3 = st.columns(3)

        with col1:
            st.metric("Total Liquidations", f"${liquidations['total_liquidations_usd']:,.0f}")
        with col2:
            st.metric("Long Liquidations", f"${liquidations['long_liquidations']:,.0f}")
        with col3:
            st.metric("Short Liquidations", f"${liquidations['short_liquidations']:,.0f}")

        if "top_liquidated_tokens" in liquidations:
            st.subheader("Top Liquidated Tokens")
            for token in liquidations["top_liquidated_tokens"][:5]:
                st.write(f"**{token['symbol']}**: ${token['amount']:,.0f}")

if __name__ == "__main__":
    main()
```
Run the dashboard:

```bash
pip install streamlit requests python-dotenv
streamlit run dashboard.py
```

Using Windsurf Cascade

Windsurf’s Cascade feature is well suited to building complex trading applications. Here’s how to use it with Gigabrain:

Example Cascade Prompts

Build a portfolio analyzer:
```
Create a portfolio analysis tool that:
1. Takes a list of tokens and allocation percentages
2. Fetches current prices and 24h changes from Gigabrain
3. Calculates total portfolio value and performance
4. Shows correlation matrix between assets
5. Provides rebalancing suggestions

Use the Gigabrain API patterns from .windsurf/rules.md
```
Create a trade signal aggregator:
```
Build a multi-timeframe trade signal aggregator:
1. Query Gigabrain for BTC analysis on 1H, 4H, 1D timeframes
2. Extract technical indicators (RSI, MACD, EMAs) from each
3. Calculate consensus signal (bullish/bearish/neutral)
4. Display in a clean terminal UI with colors
5. Save signals to SQLite database with timestamps
```
Implement a narrative tracker:
```
Create a narrative momentum tracker that:
1. Fetches trending narratives from Gigabrain every hour
2. Tracks momentum score changes over time
3. Alerts when a narrative crosses 80+ momentum
4. Stores historical data in JSON files
5. Generates a daily summary report
```

Integration Patterns

Pattern 1: Real-time Data Fetching

Use async/await for parallel queries:
```javascript
// queryGigabrain is your wrapper around POST /v1/chat
async function fetchAllData() {
  const [macro, funding, sentiment] = await Promise.all([
    queryGigabrain("Get macro indicators..."),
    queryGigabrain("Get funding rates..."),
    queryGigabrain("Get fear and greed..."),
  ]);

  return { macro, funding, sentiment };
}
```
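In Python, the same fan-out works with a thread pool. A sketch: `fetch` stands in for your `query_gigabrain`-style function (keep the batch size well under the 60 requests/minute limit):

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_all(fetch, prompts):
    """Run several Gigabrain queries in parallel and key results by name."""
    with ThreadPoolExecutor(max_workers=len(prompts)) as pool:
        results = pool.map(fetch, prompts.values())
    return dict(zip(prompts, results))

# Example shape of a call:
# data = fetch_all(query_gigabrain, {
#     "macro": "Get macro indicators...",
#     "funding": "Get funding rates...",
#     "sentiment": "Get fear and greed...",
# })
```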

Pattern 2: Webhook Integration

Build a webhook server that triggers on market events:
```python
from flask import Flask, request

app = Flask(__name__)

# query_gigabrain and send_telegram_alert are your own helpers
# (see the dashboard example above for a query_gigabrain implementation)

@app.route('/webhook/liquidation', methods=['POST'])
def handle_liquidation():
    event = request.json

    # Query Gigabrain for context
    analysis = query_gigabrain(f"""
        Analyze the impact of a ${event['amount']} {event['token']}
        liquidation on current market conditions. Respond as JSON with:
        impact_level, affected_tokens, recommended_action
    """)

    # Send alert
    send_telegram_alert(analysis)

    return {"status": "processed"}
```

Pattern 3: Scheduled Reports

Generate daily market reports:
```python
import json
import time
from datetime import datetime

import schedule  # pip install schedule

# query_gigabrain: your API wrapper (see the dashboard example above)

def generate_daily_report():
    """Generate comprehensive market report"""
    report = {
        "sentiment": query_gigabrain("Get fear and greed..."),
        "funding": query_gigabrain("Get funding rates..."),
        "narratives": query_gigabrain("Get narratives..."),
        "liquidations": query_gigabrain("Get liquidations...")
    }

    # Save to file
    with open(f"report_{datetime.now().date()}.json", "w") as f:
        json.dump(report, f, indent=2)

    print("✅ Daily report generated")

# Run every day at 9 AM
schedule.every().day.at("09:00").do(generate_daily_report)

while True:
    schedule.run_pending()
    time.sleep(60)
```

Best Practices

1. Response Caching

Reduce API calls with intelligent caching:
```python
import time
from functools import lru_cache

@lru_cache(maxsize=128)
def cached_query(message, ttl_hash):
    # ttl_hash is unused in the body: it only varies the cache key,
    # so entries expire when the time bucket rolls over
    return query_gigabrain(message)

# Use with TTL
ttl = 300  # 5 minutes
ttl_hash = int(time.time() / ttl)
data = cached_query("Get funding rates...", ttl_hash)
```
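To see why the extra argument works: `lru_cache` keys on all arguments, so a value that changes once per TTL window forces a fresh call. A self-contained demonstration (the call counter and stubbed return value are illustrative, not part of the API):

```python
import time
from functools import lru_cache

calls = {"count": 0}

@lru_cache(maxsize=128)
def cached_query(message, ttl_hash):
    calls["count"] += 1  # stub: count how many real fetches happen
    return f"fresh result for {message!r}"

bucket = int(time.time() / 300)         # one 5-minute bucket
cached_query("Get funding rates...", bucket)
cached_query("Get funding rates...", bucket)      # same bucket: cache hit
cached_query("Get funding rates...", bucket + 1)  # next bucket: refetch
```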

2. Rate Limit Handling

Monitor and respect rate limits:
```python
import time

def check_rate_limits(response):
    remaining = int(response.headers.get("X-RateLimit-Remaining-Minute", 60))

    if remaining < 5:
        print(f"⚠️  Only {remaining} requests remaining this minute")
        time.sleep(60)  # Wait for reset
```

3. Error Recovery

Implement exponential backoff:
```python
import time
import requests

def query_with_retry(message, max_retries=3):
    for attempt in range(max_retries):
        try:
            response = requests.post(
                "https://api.gigabrain.gg/v1/chat",
                headers={"Authorization": f"Bearer {API_KEY}"},
                json={"message": message},
                timeout=600
            )
            if response.status_code == 200:
                return response.json()
            elif response.status_code == 429:
                # Honor the Retry-After header on rate limits
                time.sleep(int(response.headers.get("Retry-After", 2 ** attempt)))
            elif response.status_code in [500, 503, 504]:
                time.sleep(2 ** attempt)
            else:
                response.raise_for_status()  # non-retryable error (e.g. 401)
        except requests.RequestException:
            if attempt == max_retries - 1:
                raise
            time.sleep(2 ** attempt)
    return None
```

Next Steps

Gigabrain provides market intelligence tools, not financial advice. Always implement proper risk management in your trading applications. See the Risk Disclosure.