Everything you need to build on GOKA. From quick start guides to deep dives into the protocol architecture, consensus mechanisms, and distributed compute operations.
GOKA AI is a decentralized compute network built for artificial intelligence workloads. Unlike traditional cloud providers that centralize compute resources in data centers owned by single entities, GOKA distributes AI computation across a global network of independent node operators.
The network is secured by our novel Proof-of-Work 2.0 (PoW 2.0) consensus mechanism, which transforms the energy-intensive mining process into productive AI computation. Instead of solving arbitrary cryptographic puzzles, GOKA nodes perform real inference and training tasks, earning $GOKA tokens as rewards.
The GOKA CLI is the primary interface for interacting with the network. It supports all major operating systems and can be installed via npm, yarn, or downloaded as a standalone binary.
# Install globally via npm
npm install -g @goka/cli
# Verify installation
goka --version
# Output: goka-cli v2.4.1

# Or with yarn
yarn global add @goka/cli

# Or with pnpm
pnpm add -g @goka/cli

# Download and install
curl -fsSL https://get.goka.ai | bash
# Or manually download
wget https://releases.goka.ai/cli/latest/goka-linux-x64.tar.gz
tar -xzf goka-linux-x64.tar.gz
sudo mv goka /usr/local/bin/

Get up and running with GOKA in under 5 minutes. This guide will walk you through initializing a project, connecting your wallet, and running your first AI inference on the decentralized network.
# Create a new GOKA project
goka init my-ai-app
# Navigate to project directory
cd my-ai-app
# Project structure:
# my-ai-app/
# ├── goka.config.json # Network configuration
# ├── models/ # Local model cache
# ├── data/ # Training datasets
# └── scripts/ # Automation scripts

# Authenticate with your Solana wallet
goka auth login
# This will open a browser window for wallet connection
# Supported wallets: Phantom, Solflare, Backpack, Ledger
# Or import existing keypair
goka auth import --keypair ./my-wallet.json
# Check connection status
goka auth status
# Output: Connected as 7xKX...9pQm | Balance: 142.5 $GOKA

# Run a simple inference request
goka run --model gpt-4-turbo --input "Explain quantum computing in one sentence"
# Output:
# {
# "model": "gpt-4-turbo",
# "node": "goka-node-eu-west-1",
# "latency": "124ms",
# "cost": "0.00012 $GOKA",
# "response": "Quantum computing harnesses quantum mechanical phenomena..."
# }
# Stream responses in real-time
goka run --model gpt-4-turbo --input "Write a story" --stream
# Use different models
goka run --model llama-3.1-70b --input "Your prompt here"
goka run --model claude-3-opus --input "Another prompt"
goka run --model goka-mini --input "Fast inference"

The goka.config.json file controls all aspects of your GOKA project. Here is a complete reference of all available options.
{
  // Network configuration
  "network": "mainnet-beta",        // Options: mainnet-beta, devnet, testnet
  "rpc": "https://api.goka.ai/rpc", // Custom RPC endpoint (optional)

  // Wallet configuration
  "wallet": {
    "path": "./wallet.json", // Path to keypair file
    "autoSign": false        // Auto-sign transactions under threshold
  },

  // Compute preferences
  "compute": {
    "priority": "balanced",         // Options: low, balanced, high, urgent
    "maxCost": "1.0",               // Max $GOKA per request
    "timeout": 30000,               // Request timeout in ms
    "retries": 3,                   // Auto-retry failed requests
    "regions": ["us", "eu", "asia"] // Preferred node regions
  },

  // Model defaults
  "models": {
    "default": "gpt-4-turbo", // Default model for inference
    "fallback": "goka-mini",  // Fallback if primary unavailable
    "cache": true,            // Cache model weights locally
    "cachePath": "./models"   // Local cache directory
  },

  // Training configuration
  "training": {
    "distributed": true,        // Enable distributed training
    "minNodes": 4,              // Minimum nodes for training jobs
    "checkpointInterval": 1000, // Save checkpoint every N steps
    "checkpointStorage": "ipfs" // Options: ipfs, arweave, local
  },

  // Logging and telemetry
  "logging": {
    "level": "info",            // Options: debug, info, warn, error
    "format": "json",           // Options: json, pretty
    "output": "./logs/goka.log" // Log file path
  }
}

You can also configure GOKA using environment variables. They take precedence over config file values.
GOKA_NETWORK=mainnet-beta
GOKA_WALLET_PATH=./wallet.json
GOKA_PRIORITY=high
GOKA_MAX_COST=0.5
GOKA_DEFAULT_MODEL=gpt-4-turbo
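If the same setting appears in both places, the environment variable wins. A minimal sketch of that precedence rule, covering three of the variables above (illustrative only, not the CLI's actual loader):

```typescript
// Illustrative config loader: env vars override file values.
// Not the actual CLI implementation -- a sketch of the precedence rule.
interface GokaConfig {
  network: string;
  priority: string;
  maxCost: string;
}

function loadConfig(
  fileConfig: GokaConfig,
  env: Record<string, string | undefined>
): GokaConfig {
  return {
    network: env.GOKA_NETWORK ?? fileConfig.network,
    priority: env.GOKA_PRIORITY ?? fileConfig.priority,
    maxCost: env.GOKA_MAX_COST ?? fileConfig.maxCost,
  };
}

// Example: the env var wins over the goka.config.json value.
const merged = loadConfig(
  { network: "devnet", priority: "balanced", maxCost: "1.0" },
  { GOKA_NETWORK: "mainnet-beta" }
);
console.log(merged.network);  // "mainnet-beta" (env override)
console.log(merged.priority); // "balanced" (file value kept)
```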
GOKA runs on the Solana blockchain. You need a Solana wallet with $GOKA tokens to pay for compute. Here is how to set up your wallet for GOKA.
# Generate new keypair
goka wallet generate --output ./wallet.json
# Output:
# Public Key: 7xKXmP2...9pQmN4
# Save this file securely. Never share your private key.
# Fund wallet (testnet)
goka wallet airdrop --amount 100
# Check balance
goka wallet balance
# Output: 100.0 $GOKA (testnet)

# Connect Ledger device
goka auth ledger
# Select derivation path (default: m/44'/501'/0'/0')
# Approve connection on device
# All transactions will require Ledger confirmation

Proof-of-Work 2.0 is GOKA's novel consensus mechanism that transforms computational energy into productive AI work. Unlike Bitcoin's PoW, which solves arbitrary hash puzzles, GOKA's consensus has nodes prove their work by performing verifiable AI computations.
1. When a user submits an inference or training request, the network selects qualified nodes based on hardware specs, reputation score, and geographic proximity.
2. Selected nodes perform the AI computation (inference, training step, etc.) using their GPU/TPU resources.
3. A subset of validator nodes re-executes the computation to verify correctness. Cryptographic commitments ensure integrity without revealing private data.
4. Upon successful verification, the computing node receives $GOKA tokens from the user's payment, plus block rewards from network inflation.
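The verification step can be illustrated with a toy hash commitment: the computing node publishes a digest of its output, and a validator re-executes the job and compares digests. This is a sketch of the control flow only; GOKA's actual commitment scheme (which also preserves data privacy) is more involved:

```typescript
import { createHash } from "node:crypto";

// Toy sketch of verify-by-re-execution: compare output digests.
// Shows only the control flow, not GOKA's real commitment scheme.
function commit(output: string): string {
  return createHash("sha256").update(output).digest("hex");
}

// The compute node runs the job and publishes a commitment.
const nodeOutput = "Quantum computing harnesses quantum mechanical phenomena...";
const commitment = commit(nodeOutput);

// A validator re-executes the same deterministic job and checks.
function validate(reExecutedOutput: string, committed: string): boolean {
  return commit(reExecutedOutput) === committed;
}

console.log(validate(nodeOutput, commitment));        // true: reward released
console.log(validate("tampered output", commitment)); // false: job rejected
```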
GOKA's compute layer consists of distributed GPU nodes that perform AI workloads. Anyone with compatible hardware can run a node and earn $GOKA tokens.
| Component | Minimum | Recommended |
|---|---|---|
| GPU | RTX 3080 (10GB) | RTX 4090 / A100 |
| VRAM | 10 GB | 24+ GB |
| RAM | 32 GB | 64+ GB |
| Storage | 500 GB SSD | 2+ TB NVMe |
| Network | 100 Mbps | 1+ Gbps |
# Install GOKA node software
goka node install
# Configure your node
goka node config --gpu auto --stake 1000
# Register on-chain (requires $GOKA stake)
goka node register
# Start accepting compute jobs
goka node start
# Monitor earnings and performance
goka node status
# Output:
# Node: goka-node-7xKX
# Status: ONLINE
# Uptime: 99.7%
# Jobs Completed: 12,847
# Earnings (24h): 45.2 $GOKA
# Reputation: 98.5/100

The GOKA Model Registry is a decentralized catalog of AI models available on the network. Models are stored on IPFS/Arweave and cached locally by nodes for fast inference.
# List available models
goka models list
# Output:
# NAME SIZE LATENCY COST/1K
# ─────────────────────────────────────────────────
# gpt-4-turbo ~175B 120ms 0.015 $GOKA
# gpt-4o ~175B 85ms 0.012 $GOKA
# claude-3-opus ~175B 140ms 0.018 $GOKA
# llama-3.1-70b 70B 65ms 0.008 $GOKA
# llama-3.1-8b 8B 25ms 0.002 $GOKA
# goka-mini 3B 12ms 0.0005 $GOKA
# stable-diffusion-xl ~6B 800ms 0.01 $GOKA
# whisper-large 1.5B real-time 0.001 $GOKA
# Get model details
goka models info llama-3.1-70b
# Pre-cache model locally (for node operators)
goka models cache llama-3.1-70b

Anyone can publish models to the GOKA registry. Models are reviewed for safety and earn royalties when used.
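As a back-of-the-envelope illustration, assuming royalties are a flat $GOKA amount per inference (as the `--royalty` flag below suggests), projected income is just rate times usage:

```typescript
// Illustrative arithmetic only: projects royalty income from an
// assumed flat per-inference royalty. Actual registry economics may differ.
function projectedRoyalties(
  royaltyPerUse: number, // $GOKA per inference (assumption)
  usesPerDay: number,
  days: number
): number {
  return royaltyPerUse * usesPerDay * days;
}

// e.g. 0.001 $GOKA per use, 5,000 uses/day, over 30 days:
const monthly = projectedRoyalties(0.001, 5000, 30);
console.log(monthly); // 150 $GOKA
```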
# Publish your fine-tuned model
goka models publish ./my-model \
--name "my-custom-llm" \
--description "Fine-tuned for code generation" \
--royalty 0.001

GOKA uses state channels for instant, low-fee micropayments. Instead of settling every inference on-chain, payments are batched and settled periodically.
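The batching model can be pictured as off-chain accounting: each request debits a local balance, and only open and close touch the chain. A toy sketch, not the actual channel protocol:

```typescript
// Toy model of a payment channel: per-request debits happen
// off-chain; only open and close are on-chain events.
class PaymentChannel {
  private spent = 0;
  constructor(private deposit: number) {}

  // Off-chain: debit a request's cost without an on-chain transaction.
  pay(cost: number): void {
    if (this.spent + cost > this.deposit) {
      throw new Error("insufficient channel balance");
    }
    this.spent += cost;
  }

  get balance(): number {
    return this.deposit - this.spent;
  }

  // On-chain: settle the total spend and refund the remainder.
  close(): { settled: number; refunded: number } {
    return { settled: this.spent, refunded: this.balance };
  }
}

const ch = new PaymentChannel(10); // open with 10 $GOKA
ch.pay(0.00012); // one inference
ch.pay(0.00012); // another -- still no on-chain transaction
const result = ch.close();
console.log(result.settled);  // total spent off-chain
console.log(result.refunded); // remainder returned on close
```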
# Open a payment channel with 10 $GOKA
goka channel open --amount 10
# Check channel status
goka channel status
# Output:
# Channel ID: ch_7xKX9pQm
# Balance: 8.45 $GOKA
# Spent: 1.55 $GOKA
# Transactions: 847
# Status: ACTIVE
# Top up channel
goka channel deposit --amount 5
# Close channel and withdraw remaining balance
goka channel close

GOKA operates as a multi-layer decentralized network. Understanding the topology helps you optimize for latency and cost.
- **Settlement layer (Solana):** Final payment settlement, node registration, stake management, governance
- **Routing layer:** Job routing, node discovery, load balancing, reputation tracking
- **Compute layer:** GPU nodes performing inference and training, distributed across regions
- **Storage layer:** Model weights on IPFS/Arweave, training data in encrypted shards
- **Application layer:** Your apps, SDKs, APIs consuming compute from the network
GOKA supports distributed model training across the network. Your training jobs are split across multiple GPUs for faster completion.
# Start a distributed training job
goka train \
--base-model llama-3.1-8b \
--dataset ./training-data.jsonl \
--epochs 3 \
--batch-size 32 \
--learning-rate 2e-5 \
--distributed \
--nodes 8
# Output:
# Training Job: train_7xKX9pQm
# Base Model: llama-3.1-8b
# Dataset: 50,000 examples
# Nodes Allocated: 8 (4x A100, 4x RTX 4090)
# Estimated Time: 2h 15m
# Estimated Cost: 45.2 $GOKA
#
# Progress: [████████░░░░░░░░░░░░] 40%
# Loss: 1.234 → 0.567
# Checkpoint: gs://goka-checkpoints/train_7xKX9pQm/step-4000
# Monitor training job
goka train status train_7xKX9pQm
# Download trained model
goka train download train_7xKX9pQm --output ./my-model

- **Full fine-tuning:** Update all model weights. Most expensive but best quality.
- **LoRA:** Low-rank adaptation. 10x cheaper, nearly the same quality.
- **QLoRA:** Quantized LoRA. 50x cheaper, good for experimentation.
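The cost gap follows from trainable parameter counts: full fine-tuning updates every entry of each weight matrix, while LoRA trains two low-rank factors per matrix. A rough illustration for a single d×d projection (the hidden size and rank below are example values, not GOKA defaults):

```typescript
// Trainable parameters for one d x d weight matrix.
function fullFineTuneParams(d: number): number {
  return d * d; // every weight is updated
}

function loraParams(d: number, rank: number): number {
  return 2 * d * rank; // two low-rank factors: d x r and r x d
}

// e.g. hidden size 4096, LoRA rank 16:
const full = fullFineTuneParams(4096); // 16,777,216 trainable weights
const lora = loraParams(4096, 16);     // 131,072 trainable weights
console.log(full / lora);              // 128x fewer trainable weights
```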
Inference is the most common operation on GOKA. Send prompts to AI models and receive responses with sub-second latency.
# Simple inference
goka run --model gpt-4-turbo --input "What is GOKA?"
# Streaming output
goka run --model llama-3.1-70b --input "Write a poem" --stream
# With system prompt
goka run --model gpt-4o \
--system "You are a helpful coding assistant" \
--input "Write a React component for a button"
# JSON output mode
goka run --model gpt-4-turbo \
--input "Extract entities from: John works at Google" \
--format json
# Multi-modal (vision)
goka run --model gpt-4-vision \
--input "Describe this image" \
--image ./screenshot.png

import { Goka } from "@goka/sdk";

const goka = new Goka({ wallet: "./wallet.json" });

// Simple completion
const response = await goka.run({
  model: "gpt-4-turbo",
  input: "Explain quantum computing",
});
console.log(response.text);

// Streaming
const stream = await goka.stream({
  model: "llama-3.1-70b",
  input: "Write a story about AI",
});
for await (const chunk of stream) {
  process.stdout.write(chunk.text);
}

// Chat with history
const chat = await goka.chat({
  model: "gpt-4o",
  messages: [
    { role: "system", content: "You are a helpful assistant" },
    { role: "user", content: "Hello!" },
    { role: "assistant", content: "Hi! How can I help?" },
    { role: "user", content: "What's the weather like?" },
  ],
});

Deploy your custom or fine-tuned models to the GOKA network. Deployed models become available to all network users (or just your team with private deployments).
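The visibility options amount to an allowlist check. A sketch of the intended semantics (illustrative; actual access control is enforced by the network, and the wallet strings here are placeholders):

```typescript
// Illustrative access-control semantics for deployed models.
// The network enforces the real rules on-chain; wallet addresses
// below are placeholders.
type Visibility = "public" | "private" | "team";

interface Deployment {
  owner: string;
  visibility: Visibility;
  allowedWallets?: string[]; // only consulted for "team"
}

function canAccess(d: Deployment, wallet: string): boolean {
  if (d.visibility === "public") return true;
  if (wallet === d.owner) return true; // the owner always has access
  if (d.visibility === "team") {
    return (d.allowedWallets ?? []).includes(wallet);
  }
  return false; // "private" and not the owner
}

const teamModel: Deployment = {
  owner: "7xKX",
  visibility: "team",
  allowedWallets: ["8yLY", "9zMZ"],
};
console.log(canAccess(teamModel, "8yLY")); // true: on the allowlist
console.log(canAccess(teamModel, "5aAB")); // false: not owner or team
```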
# Deploy a model from local files
goka deploy ./my-model \
--name "my-custom-llm-v1" \
--visibility public \
--min-nodes 3
# Deploy from HuggingFace
goka deploy hf://username/model-name \
--name "my-hf-model"
# Private deployment (only your wallet can access)
goka deploy ./my-model \
--name "private-model" \
--visibility private
# Team deployment (multiple wallets)
goka deploy ./my-model \
--name "team-model" \
--visibility team \
--allowed-wallets 7xKX...,8yLY...,9zMZ...
# Check deployment status
goka deploy status my-custom-llm-v1
# Output:
# Model: my-custom-llm-v1
# Status: ACTIVE
# Nodes Serving: 12
# Total Inferences: 45,231
# Revenue (30d): 234.5 $GOKA

There are multiple ways to earn $GOKA tokens on the network. Here's a breakdown of each earning mechanism.
- **Compute:** Run a compute node and earn tokens for every inference and training job you complete. Earnings scale with GPU power and uptime.
- **Model royalties:** Publish models to the registry and earn royalties every time someone uses your model. Popular models can generate significant passive income.
- **Staking:** Stake $GOKA to secure the network and earn staking rewards. Staked tokens also boost your node's priority for job assignments.
- **Validation:** Run a validator node to verify compute results. Requires less hardware than compute nodes but needs high uptime.
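As a rough planning aid: node revenue is jobs completed times average payout, minus network fees. All numbers in this sketch are placeholder assumptions, not quoted network rates:

```typescript
// Back-of-the-envelope node earnings estimate. All inputs are
// placeholder assumptions -- actual payouts vary with job mix,
// hardware, reputation, and network demand.
function estimateNetEarnings(
  jobsPerDay: number,
  avgPayoutPerJob: number, // $GOKA (assumption)
  networkFeeRate: number,  // e.g. 0.05 = 5% (assumption)
  days: number
): number {
  const gross = jobsPerDay * avgPayoutPerJob * days;
  return gross * (1 - networkFeeRate);
}

// e.g. 12,000 jobs/day at 0.0005 $GOKA each, 5% fee, over 7 days:
const net = estimateNetEarnings(12000, 0.0005, 0.05, 7);
console.log(net.toFixed(1)); // "39.9" $GOKA
```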
A comprehensive guide to running and maintaining a GOKA compute node for maximum earnings and network contribution.
version: "3.8"
services:
  goka-node:
    image: goka/node:latest
    runtime: nvidia
    environment:
      - GOKA_WALLET=/config/wallet.json
      - GOKA_NETWORK=mainnet-beta
      - GOKA_STAKE=1000
    volumes:
      - ./wallet.json:/config/wallet.json:ro
      - ./models:/models
      - ./logs:/logs
    ports:
      - "8080:8080"
      - "9090:9090"
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
    restart: unless-stopped

# Start node
docker-compose up -d
# View logs
docker-compose logs -f goka-node
# Check earnings
goka node earnings --period 7d
# Output:
# Period: Last 7 days
# Jobs Completed: 89,234
# Compute Time: 156.4 hours
# Gross Earnings: 312.5 $GOKA
# Network Fees: -15.6 $GOKA
# Net Earnings: 296.9 $GOKA
# Withdraw earnings
goka node withdraw --amount 200

Train models across distributed data without ever moving the raw data. Perfect for privacy-sensitive applications in healthcare, finance, and enterprise.
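The `fedavg` aggregation method averages participants' weight updates in proportion to their local sample counts. A minimal sketch of the math (not the SDK's implementation):

```typescript
// Minimal FedAvg: average model weights across participants,
// weighted by each participant's local sample count.
interface Participant {
  weights: number[]; // flattened model weights
  samples: number;   // size of the local dataset
}

function fedAvg(participants: Participant[]): number[] {
  const total = participants.reduce((s, p) => s + p.samples, 0);
  const dim = participants[0].weights.length;
  const avg = new Array(dim).fill(0);
  for (const p of participants) {
    const w = p.samples / total; // weight by data volume
    for (let i = 0; i < dim; i++) {
      avg[i] += w * p.weights[i];
    }
  }
  return avg;
}

// Two hospitals with different data volumes: the larger one
// contributes proportionally more to the global model.
const globalWeights = fedAvg([
  { weights: [1.0, 2.0], samples: 300 },
  { weights: [3.0, 4.0], samples: 100 },
]);
console.log(globalWeights); // [1.5, 2.5]
```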
import { Goka, FederatedLearning } from "@goka/sdk";

const goka = new Goka({ wallet: "./wallet.json" });

// Initialize federated training
const fl = new FederatedLearning(goka, {
  baseModel: "llama-3.1-8b",
  aggregationMethod: "fedavg",
  minParticipants: 10,
  roundsPerEpoch: 5,
  privacyBudget: 1.0, // Differential privacy epsilon
});

// Start training (nodes with matching data tags join automatically)
const job = await fl.start({
  dataTags: ["medical-records", "hipaa-compliant"],
  epochs: 10,
});

// Monitor progress
job.on("round_complete", (round) => {
  console.log(`Round ${round.number}: Loss = ${round.avgLoss}`);
});

await job.waitForCompletion();
console.log("Final model:", job.modelHash);

GOKA supports any PyTorch or ONNX model. Here's how to package and deploy your custom architectures.
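Before uploading, a quick client-side sanity check of the manifest can catch obvious omissions. The field names here mirror the example manifest below; `goka models validate` remains the authoritative check:

```typescript
// Illustrative pre-flight check for a model manifest. The real
// `goka models validate` step is authoritative; this only catches
// obvious omissions before upload.
type Manifest = Record<string, unknown>;

const REQUIRED_FIELDS = ["name", "version", "framework", "files"];

function missingFields(manifest: Manifest): string[] {
  return REQUIRED_FIELDS.filter((f) => !(f in manifest));
}

const draft: Manifest = {
  name: "my-custom-model",
  version: "1.0.0",
  framework: "pytorch",
  // "files" forgotten
};
console.log(missingFields(draft)); // ["files"]
```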
{
  "name": "my-custom-model",
  "version": "1.0.0",
  "framework": "pytorch",
  "architecture": "transformer",
  "inputSchema": {
    "type": "text",
    "maxLength": 4096
  },
  "outputSchema": {
    "type": "text",
    "streaming": true
  },
  "requirements": {
    "minVRAM": "16GB",
    "compute": "float16"
  },
  "files": {
    "weights": "./model.safetensors",
    "tokenizer": "./tokenizer.json",
    "config": "./config.json"
  }
}

# Validate model package
goka models validate ./my-model
# Test locally before deploying
goka models test ./my-model --input "Test prompt"
# Deploy to network
goka deploy ./my-model --name my-custom-model

GOKA provides OpenAI-compatible APIs, making it easy to switch from centralized providers. Just change the base URL.
import OpenAI from "openai";

// Just change the base URL - everything else works the same!
const client = new OpenAI({
  baseURL: "https://api.goka.ai/v1",
  apiKey: "your-goka-api-key", // From goka auth token
});

const response = await client.chat.completions.create({
  model: "gpt-4-turbo", // Or any GOKA model
  messages: [
    { role: "user", content: "Hello!" }
  ],
  stream: true,
});

for await (const chunk of response) {
  process.stdout.write(chunk.choices[0]?.delta?.content || "");
}

| Method | Endpoint | Description |
|---|---|---|
| POST | /v1/chat/completions | Chat completion |
| POST | /v1/completions | Text completion |
| POST | /v1/embeddings | Generate embeddings |
| POST | /v1/images/generations | Image generation |
| GET | /v1/models | List models |
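Because the endpoints follow the OpenAI wire format, you can also call them with plain `fetch`. A sketch for `/v1/embeddings` (the request body shape assumes OpenAI compatibility as described above; the API key string is a placeholder for a token from `goka auth token`):

```typescript
// Builds a plain-fetch request against the OpenAI-compatible
// embeddings route. Body shape assumes the OpenAI wire format;
// the API key is a placeholder.
function embeddingsRequest(model: string, input: string) {
  return {
    url: "https://api.goka.ai/v1/embeddings",
    init: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: "Bearer your-goka-api-key",
      },
      body: JSON.stringify({ model, input }),
    },
  };
}

const req = embeddingsRequest("llama-3.1-8b", "Hello GOKA");
// const res = await fetch(req.url, req.init);
// const { data } = await res.json(); // data[0].embedding: number[]
console.log(JSON.parse(req.init.body).model); // "llama-3.1-8b"
```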
GOKA implements multiple security layers to protect your data, models, and tokens.