Docs

Everything you need to build on GOKA. From quick start guides to deep dives into the protocol architecture, consensus mechanisms, and distributed compute operations.

# Introduction

GOKA AI is a decentralized compute network built for artificial intelligence workloads. Unlike traditional cloud providers that centralize compute resources in data centers owned by single entities, GOKA distributes AI computation across a global network of independent node operators.

The network is secured by our novel Proof-of-Work 2.0 (PoW 2.0) consensus mechanism, which transforms the energy-intensive mining process into productive AI computation. Instead of solving arbitrary cryptographic puzzles, GOKA nodes perform real inference and training tasks, earning $GOKA tokens as rewards.

Key Benefits

  1. Cost Efficient — Up to 80% cheaper than AWS, GCP, or Azure for AI workloads
  2. Censorship Resistant — No single point of failure or control
  3. Privacy Preserving — With federated learning, data never leaves your control
  4. Globally Distributed — Low-latency inference from 2,400+ nodes worldwide

# Installation

The GOKA CLI is the primary interface for interacting with the network. It supports all major operating systems and can be installed via npm, yarn, or downloaded as a standalone binary.

Using npm (Recommended)

terminal
# Install globally via npm
npm install -g @goka/cli

# Verify installation
goka --version
# Output: goka-cli v2.4.1

Using yarn

terminal
yarn global add @goka/cli

Using pnpm

terminal
pnpm add -g @goka/cli

Standalone Binary (Linux/macOS)

bash
# Download and install
curl -fsSL https://get.goka.ai | bash

# Or manually download
wget https://releases.goka.ai/cli/latest/goka-linux-x64.tar.gz
tar -xzf goka-linux-x64.tar.gz
sudo mv goka /usr/local/bin/

# Quick Start

Get up and running with GOKA in under 5 minutes. This guide will walk you through initializing a project, connecting your wallet, and running your first AI inference on the decentralized network.

Step 1: Initialize Project

terminal
# Create a new GOKA project
goka init my-ai-app

# Navigate to project directory
cd my-ai-app

# Project structure:
# my-ai-app/
# ├── goka.config.json    # Network configuration
# ├── models/             # Local model cache
# ├── data/               # Training datasets
# └── scripts/            # Automation scripts

Step 2: Connect Wallet

terminal
# Authenticate with your Solana wallet
goka auth login

# This will open a browser window for wallet connection
# Supported wallets: Phantom, Solflare, Backpack, Ledger

# Or import existing keypair
goka auth import --keypair ./my-wallet.json

# Check connection status
goka auth status
# Output: Connected as 7xKX...9pQm | Balance: 142.5 $GOKA

Step 3: Run Inference

terminal
# Run a simple inference request
goka run --model gpt-4-turbo --input "Explain quantum computing in one sentence"

# Output:
# {
#   "model": "gpt-4-turbo",
#   "node": "goka-node-eu-west-1",
#   "latency": "124ms",
#   "cost": "0.00012 $GOKA",
#   "response": "Quantum computing harnesses quantum mechanical phenomena..."
# }

# Stream responses in real-time
goka run --model gpt-4-turbo --input "Write a story" --stream

# Use different models
goka run --model llama-3.1-70b --input "Your prompt here"
goka run --model claude-3-opus --input "Another prompt"
goka run --model goka-mini --input "Fast inference"

# Configuration

The goka.config.json file controls all aspects of your GOKA project. Here is a complete reference of all available options.

goka.config.json
{
  // Network configuration
  "network": "mainnet-beta",           // Options: mainnet-beta, devnet, testnet
  "rpc": "https://api.goka.ai/rpc",    // Custom RPC endpoint (optional)
  
  // Wallet configuration  
  "wallet": {
    "path": "./wallet.json",           // Path to keypair file
    "autoSign": false                  // Auto-sign transactions under threshold
  },
  
  // Compute preferences
  "compute": {
    "priority": "balanced",            // Options: low, balanced, high, urgent
    "maxCost": "1.0",                  // Max $GOKA per request
    "timeout": 30000,                  // Request timeout in ms
    "retries": 3,                      // Auto-retry failed requests
    "regions": ["us", "eu", "asia"]    // Preferred node regions
  },
  
  // Model defaults
  "models": {
    "default": "gpt-4-turbo",          // Default model for inference
    "fallback": "goka-mini",           // Fallback if primary unavailable
    "cache": true,                     // Cache model weights locally
    "cachePath": "./models"            // Local cache directory
  },
  
  // Training configuration
  "training": {
    "distributed": true,               // Enable distributed training
    "minNodes": 4,                     // Minimum nodes for training jobs
    "checkpointInterval": 1000,        // Save checkpoint every N steps
    "checkpointStorage": "ipfs"        // Options: ipfs, arweave, local
  },
  
  // Logging and telemetry
  "logging": {
    "level": "info",                   // Options: debug, info, warn, error
    "format": "json",                  // Options: json, pretty
    "output": "./logs/goka.log"        // Log file path
  }
}

Environment Variables

You can also configure GOKA using environment variables. They take precedence over config file values.

GOKA_NETWORK=mainnet-beta
GOKA_WALLET_PATH=./wallet.json
GOKA_PRIORITY=high
GOKA_MAX_COST=0.5
GOKA_DEFAULT_MODEL=gpt-4-turbo
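
The precedence rule can be illustrated with a short TypeScript sketch. The merge logic below is an assumption about how the CLI resolves settings, not the actual implementation; the variable names match the list above.

```typescript
// Hypothetical sketch: environment variables override goka.config.json values.
type ComputeConfig = { network: string; priority: string; maxCost: string };

function resolveConfig(
  fileConfig: ComputeConfig,
  env: Record<string, string | undefined>
): ComputeConfig {
  // A set environment variable wins; otherwise the file value is kept.
  return {
    network: env["GOKA_NETWORK"] ?? fileConfig.network,
    priority: env["GOKA_PRIORITY"] ?? fileConfig.priority,
    maxCost: env["GOKA_MAX_COST"] ?? fileConfig.maxCost,
  };
}

const fromFile: ComputeConfig = {
  network: "devnet",
  priority: "balanced",
  maxCost: "1.0",
};
const resolved = resolveConfig(fromFile, {
  GOKA_NETWORK: "mainnet-beta",
  GOKA_MAX_COST: "0.5",
});
console.log(resolved); // network and maxCost come from the environment
```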

# Wallet Setup

GOKA runs on the Solana blockchain. You need a Solana wallet with $GOKA tokens to pay for compute. Here is how to set up your wallet for GOKA.

Option 1: Browser Wallet (Recommended)

  1. Install Phantom, Solflare, or Backpack
  2. Create a new wallet or import existing seed phrase
  3. Run goka auth login and approve the connection
  4. Purchase $GOKA on Raydium or Jupiter

Option 2: CLI Keypair

# Generate new keypair
goka wallet generate --output ./wallet.json

# Output:
# Public Key: 7xKXmP2...9pQmN4
# Save this file securely. Never share your private key.

# Fund wallet (testnet)
goka wallet airdrop --amount 100

# Check balance
goka wallet balance
# Output: 100.0 $GOKA (testnet)

Option 3: Hardware Wallet (Ledger)

# Connect Ledger device
goka auth ledger

# Select derivation path (default: m/44'/501'/0'/0')
# Approve connection on device

# All transactions will require Ledger confirmation

# PoW 2.0 Consensus

Proof-of-Work 2.0 is GOKA's novel consensus mechanism that transforms computational energy into productive AI work. Unlike Bitcoin's PoW which solves arbitrary hash puzzles, GOKA nodes prove their work by performing verifiable AI computations.

How It Works

  1. Task Assignment

     When a user submits an inference or training request, the network selects qualified nodes based on hardware specs, reputation score, and geographic proximity.

  2. Computation

     Selected nodes perform the AI computation (inference, training step, etc.) using their GPU/TPU resources.

  3. Verification

     A subset of validator nodes re-executes the computation to verify correctness. Cryptographic commitments ensure integrity without revealing private data.

  4. Reward Distribution

     Upon successful verification, the computing node receives $GOKA tokens from the user's payment, plus block rewards from network inflation.
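
The task-assignment step can be sketched in TypeScript. The `NodeInfo` fields and the scoring formula below are illustrative assumptions, not the protocol's actual selection rule; they only show how hardware, reputation, and proximity could combine into a ranking.

```typescript
// Illustrative sketch of task assignment: filter by hardware, rank by score.
interface NodeInfo {
  id: string;
  vramGB: number;     // hardware capability
  reputation: number; // 0-100, from past verified jobs
  latencyMs: number;  // proximity to the requester
}

function score(n: NodeInfo): number {
  // Assumed weighting: higher reputation and lower latency both help.
  return n.reputation - n.latencyMs / 10;
}

function rankNodes(nodes: NodeInfo[], minVramGB: number): NodeInfo[] {
  return nodes
    .filter((n) => n.vramGB >= minVramGB) // hardware gate first
    .sort((a, b) => score(b) - score(a)); // then best score first
}

const ranked = rankNodes(
  [
    { id: "node-a", vramGB: 24, reputation: 98, latencyMs: 40 },
    { id: "node-b", vramGB: 10, reputation: 99, latencyMs: 20 },
    { id: "node-c", vramGB: 24, reputation: 90, latencyMs: 200 },
  ],
  16
);
console.log(ranked.map((n) => n.id)); // node-b fails the VRAM gate; node-a outranks node-c
```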

Traditional PoW

  • Arbitrary hash puzzles
  • Energy wasted on useless work
  • No productive output
  • Hardware becomes e-waste

GOKA PoW 2.0

  • Real AI computations
  • Energy creates value
  • Productive inference/training
  • GPUs serve real users

# Compute Nodes

GOKA's compute layer consists of distributed GPU nodes that perform AI workloads. Anyone with compatible hardware can run a node and earn $GOKA tokens.

Minimum Hardware Requirements

| Component | Minimum         | Recommended     |
| --------- | --------------- | --------------- |
| GPU       | RTX 3080 (10GB) | RTX 4090 / A100 |
| VRAM      | 10 GB           | 24+ GB          |
| RAM       | 32 GB           | 64+ GB          |
| Storage   | 500 GB SSD      | 2+ TB NVMe      |
| Network   | 100 Mbps        | 1+ Gbps         |

Running a Node

terminal
# Install GOKA node software
goka node install

# Configure your node
goka node config --gpu auto --stake 1000

# Register on-chain (requires $GOKA stake)
goka node register

# Start accepting compute jobs
goka node start

# Monitor earnings and performance
goka node status
# Output:
# Node: goka-node-7xKX
# Status: ONLINE
# Uptime: 99.7%
# Jobs Completed: 12,847
# Earnings (24h): 45.2 $GOKA
# Reputation: 98.5/100

# Model Registry

The GOKA Model Registry is a decentralized catalog of AI models available on the network. Models are stored on IPFS/Arweave and cached locally by nodes for fast inference.

terminal
# List available models
goka models list

# Output:
# NAME                  SIZE      LATENCY   COST/1K
# ─────────────────────────────────────────────────
# gpt-4-turbo          ~175B     120ms     0.015 $GOKA
# gpt-4o               ~175B     85ms      0.012 $GOKA
# claude-3-opus        ~175B     140ms     0.018 $GOKA
# llama-3.1-70b        70B       65ms      0.008 $GOKA
# llama-3.1-8b         8B        25ms      0.002 $GOKA
# goka-mini            3B        12ms      0.0005 $GOKA
# stable-diffusion-xl  ~6B       800ms     0.01 $GOKA
# whisper-large        1.5B      real-time 0.001 $GOKA

# Get model details
goka models info llama-3.1-70b

# Pre-cache model locally (for node operators)
goka models cache llama-3.1-70b

Publishing Custom Models

Anyone can publish models to the GOKA registry. Models are reviewed for safety and earn royalties when used.

# Publish your fine-tuned model
goka models publish ./my-model \
  --name "my-custom-llm" \
  --description "Fine-tuned for code generation" \
  --royalty 0.001

# Payment Channels

GOKA uses state channels for instant, low-fee micropayments. Instead of settling every inference on-chain, payments are batched and settled periodically.

Payment Flow

  1. Open Channel — Deposit $GOKA into a payment channel
  2. Off-chain Payments — Each inference deducts from the channel balance (instant)
  3. Settlement — The channel closes and the final balance settles on-chain

terminal
# Open a payment channel with 10 $GOKA
goka channel open --amount 10

# Check channel status
goka channel status
# Output:
# Channel ID: ch_7xKX9pQm
# Balance: 8.45 $GOKA
# Spent: 1.55 $GOKA
# Transactions: 847
# Status: ACTIVE

# Top up channel
goka channel deposit --amount 5

# Close channel and withdraw remaining balance
goka channel close

# Network Topology

GOKA operates as a multi-layer decentralized network. Understanding the topology helps you optimize for latency and cost.

Layer 1: Settlement (Solana)

Final payment settlement, node registration, stake management, governance

Layer 2: Coordination

Job routing, node discovery, load balancing, reputation tracking

Layer 3: Compute

GPU nodes performing inference and training, distributed across regions

Layer 4: Storage

Model weights on IPFS/Arweave, training data in encrypted shards

Layer 5: Application

Your apps, SDKs, APIs consuming compute from the network

# Training Models

GOKA supports distributed model training across the network. Your training jobs are split across multiple GPUs for faster completion.

terminal
# Start a distributed training job
goka train \
  --base-model llama-3.1-8b \
  --dataset ./training-data.jsonl \
  --epochs 3 \
  --batch-size 32 \
  --learning-rate 2e-5 \
  --distributed \
  --nodes 8

# Output:
# Training Job: train_7xKX9pQm
# Base Model: llama-3.1-8b
# Dataset: 50,000 examples
# Nodes Allocated: 8 (4x A100, 4x RTX 4090)
# Estimated Time: 2h 15m
# Estimated Cost: 45.2 $GOKA
#
# Progress: [████████░░░░░░░░░░░░] 40%
# Loss: 1.234 → 0.567
# Checkpoint: gs://goka-checkpoints/train_7xKX9pQm/step-4000

# Monitor training job
goka train status train_7xKX9pQm

# Download trained model
goka train download train_7xKX9pQm --output ./my-model

Training Modes

Full Fine-tuning

Update all model weights. Most expensive but best quality.

LoRA

Low-rank adaptation. 10x cheaper, nearly same quality.

QLoRA

Quantized LoRA. 50x cheaper, good for experimentation.
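
The rough cost multipliers above can be turned into a back-of-the-envelope estimate. The base cost figure here is a made-up example for illustration, not a network quote; only the 10x and 50x ratios come from the text.

```typescript
// Back-of-the-envelope training-cost comparison using the quoted multipliers:
// LoRA ~10x cheaper, QLoRA ~50x cheaper than full fine-tuning.
const FULL_FINETUNE_COST_GOKA = 100; // hypothetical full fine-tune cost

const costByMode: Record<string, number> = {
  full: FULL_FINETUNE_COST_GOKA,
  lora: FULL_FINETUNE_COST_GOKA / 10,
  qlora: FULL_FINETUNE_COST_GOKA / 50,
};

for (const [mode, cost] of Object.entries(costByMode)) {
  console.log(`${mode}: ~${cost} $GOKA`);
}
```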

# Running Inference

Inference is the most common operation on GOKA. Send prompts to AI models and receive responses with sub-second latency.

CLI Usage

terminal
# Simple inference
goka run --model gpt-4-turbo --input "What is GOKA?"

# Streaming output
goka run --model llama-3.1-70b --input "Write a poem" --stream

# With system prompt
goka run --model gpt-4o \
  --system "You are a helpful coding assistant" \
  --input "Write a React component for a button"

# JSON output mode
goka run --model gpt-4-turbo \
  --input "Extract entities from: John works at Google" \
  --format json

# Multi-modal (vision)
goka run --model gpt-4-vision \
  --input "Describe this image" \
  --image ./screenshot.png

SDK Usage (TypeScript)

inference.ts
import { Goka } from "@goka/sdk";

const goka = new Goka({ wallet: "./wallet.json" });

// Simple completion
const response = await goka.run({
  model: "gpt-4-turbo",
  input: "Explain quantum computing",
});
console.log(response.text);

// Streaming
const stream = await goka.stream({
  model: "llama-3.1-70b",
  input: "Write a story about AI",
});
for await (const chunk of stream) {
  process.stdout.write(chunk.text);
}

// Chat with history
const chat = await goka.chat({
  model: "gpt-4o",
  messages: [
    { role: "system", content: "You are a helpful assistant" },
    { role: "user", content: "Hello!" },
    { role: "assistant", content: "Hi! How can I help?" },
    { role: "user", content: "What's the weather like?" },
  ],
});

# Deploying Models

Deploy your custom or fine-tuned models to the GOKA network. Deployed models become available to all network users (or just your team with private deployments).

terminal
# Deploy a model from local files
goka deploy ./my-model \
  --name "my-custom-llm-v1" \
  --visibility public \
  --min-nodes 3

# Deploy from HuggingFace
goka deploy hf://username/model-name \
  --name "my-hf-model"

# Private deployment (only your wallet can access)
goka deploy ./my-model \
  --name "private-model" \
  --visibility private

# Team deployment (multiple wallets)
goka deploy ./my-model \
  --name "team-model" \
  --visibility team \
  --allowed-wallets 7xKX...,8yLY...,9zMZ...

# Check deployment status
goka deploy status my-custom-llm-v1
# Output:
# Model: my-custom-llm-v1
# Status: ACTIVE
# Nodes Serving: 12
# Total Inferences: 45,231
# Revenue (30d): 234.5 $GOKA

# Earning Rewards

There are multiple ways to earn $GOKA tokens on the network. Here's a breakdown of each earning mechanism.

Node Operation

~15-50 $GOKA/day

Run a compute node and earn tokens for every inference and training job you complete. Earnings scale with GPU power and uptime.

Model Publishing

0.1% royalty per use

Publish models to the registry and earn royalties every time someone uses your model. Popular models can generate significant passive income.

Staking

~8% APY

Stake $GOKA to secure the network and earn staking rewards. Staked tokens also boost your node's priority for job assignments.

Validation

~5 $GOKA/day

Run a validator node to verify compute results. Requires less hardware than compute nodes but needs high uptime.
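
The figures above can be combined into a rough daily estimate. All inputs are the document's approximate ranges, not guaranteed yields, and the helper function is illustrative.

```typescript
// Rough daily-earnings estimator combining the mechanisms listed above.
function estimateDailyGoka(opts: {
  nodeDaily: number;      // ~15-50 $GOKA/day from compute jobs
  staked: number;         // amount of $GOKA staked
  stakingApy: number;     // ~0.08 (8% APY), converted to a daily rate
  validatorDaily: number; // ~5 $GOKA/day if also validating
}): number {
  const stakingDaily = (opts.staked * opts.stakingApy) / 365;
  return opts.nodeDaily + stakingDaily + opts.validatorDaily;
}

const daily = estimateDailyGoka({
  nodeDaily: 30,
  staked: 1000,
  stakingApy: 0.08,
  validatorDaily: 5,
});
console.log(daily.toFixed(2)); // compute + staking + validation, per day
```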

# Node Operation

A comprehensive guide to running and maintaining a GOKA compute node for maximum earnings and network contribution.

docker-compose.yml
version: "3.8"
services:
  goka-node:
    image: goka/node:latest
    runtime: nvidia
    environment:
      - GOKA_WALLET=/config/wallet.json
      - GOKA_NETWORK=mainnet-beta
      - GOKA_STAKE=1000
    volumes:
      - ./wallet.json:/config/wallet.json:ro
      - ./models:/models
      - ./logs:/logs
    ports:
      - "8080:8080"
      - "9090:9090"
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
    restart: unless-stopped
node commands
# Start node
docker-compose up -d

# View logs
docker-compose logs -f goka-node

# Check earnings
goka node earnings --period 7d
# Output:
# Period: Last 7 days
# Jobs Completed: 89,234
# Compute Time: 156.4 hours
# Gross Earnings: 312.5 $GOKA
# Network Fees: -15.6 $GOKA
# Net Earnings: 296.9 $GOKA

# Withdraw earnings
goka node withdraw --amount 200

# Federated Learning

Train models across distributed data without ever moving the raw data. Perfect for privacy-sensitive applications in healthcare, finance, and enterprise.

How Federated Learning Works

  1. Model weights are distributed to participating nodes
  2. Each node trains on its local data (data never leaves the node)
  3. Only model gradients/updates are sent back (encrypted)
  4. A central coordinator aggregates updates using secure aggregation
  5. The updated model is redistributed for the next round

federated.ts
import { Goka, FederatedLearning } from "@goka/sdk";

const goka = new Goka({ wallet: "./wallet.json" });

// Initialize federated training
const fl = new FederatedLearning(goka, {
  baseModel: "llama-3.1-8b",
  aggregationMethod: "fedavg",
  minParticipants: 10,
  roundsPerEpoch: 5,
  privacyBudget: 1.0, // Differential privacy epsilon
});

// Start training (nodes with matching data tags join automatically)
const job = await fl.start({
  dataTags: ["medical-records", "hipaa-compliant"],
  epochs: 10,
});

// Monitor progress
job.on("round_complete", (round) => {
  console.log(`Round ${round.number}: Loss = ${round.avgLoss}`);
});

await job.waitForCompletion();
console.log("Final model:", job.modelHash);

# Custom Models

GOKA supports any PyTorch or ONNX model. Here's how to package and deploy your custom architectures.

goka.model.json
{
  "name": "my-custom-model",
  "version": "1.0.0",
  "framework": "pytorch",
  "architecture": "transformer",
  "inputSchema": {
    "type": "text",
    "maxLength": 4096
  },
  "outputSchema": {
    "type": "text",
    "streaming": true
  },
  "requirements": {
    "minVRAM": "16GB",
    "compute": "float16"
  },
  "files": {
    "weights": "./model.safetensors",
    "tokenizer": "./tokenizer.json",
    "config": "./config.json"
  }
}
terminal
# Validate model package
goka models validate ./my-model

# Test locally before deploying
goka models test ./my-model --input "Test prompt"

# Deploy to network
goka deploy ./my-model --name my-custom-model

# API Integration

GOKA provides OpenAI-compatible APIs, making it easy to switch from centralized providers. Just change the base URL.

openai-compatible.ts
import OpenAI from "openai";

// Just change the base URL - everything else works the same!
const client = new OpenAI({
  baseURL: "https://api.goka.ai/v1",
  apiKey: "your-goka-api-key", // From goka auth token
});

const response = await client.chat.completions.create({
  model: "gpt-4-turbo", // Or any GOKA model
  messages: [
    { role: "user", content: "Hello!" }
  ],
  stream: true,
});

for await (const chunk of response) {
  process.stdout.write(chunk.choices[0]?.delta?.content || "");
}

REST Endpoints

| Method | Endpoint               | Description         |
| ------ | ---------------------- | ------------------- |
| POST   | /v1/chat/completions   | Chat completion     |
| POST   | /v1/completions        | Text completion     |
| POST   | /v1/embeddings         | Generate embeddings |
| POST   | /v1/images/generations | Image generation    |
| GET    | /v1/models             | List models         |

# Security

GOKA implements multiple security layers to protect your data, models, and tokens.

Encryption

  • All data in transit encrypted with TLS 1.3
  • Model inputs/outputs encrypted end-to-end
  • Training data encrypted at rest with AES-256
  • Wallet keys never leave your device

TEE Support

  • Intel SGX and AMD SEV for confidential computing
  • Model inference in hardware-isolated enclaves
  • Attestation proofs for verified execution
  • Optional for high-security workloads

Network Security

  • Sybil resistance through staking requirements
  • Slashing for malicious node behavior
  • Reputation system to filter bad actors
  • Multi-node verification for critical operations