Step-by-step guides for getting started with the OARN Network
Submit AI inference tasks to the decentralized compute network
You'll need an Ethereum-compatible wallet to interact with OARN.
| Network Name | Arbitrum Sepolia |
| RPC URL | https://sepolia-rollup.arbitrum.io/rpc |
| Chain ID | 421614 |
| Currency Symbol | ETH |
| Block Explorer | https://sepolia.arbiscan.io |

Rabby, Rainbow, or any wallet supporting Arbitrum works fine.
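Most wallets can add the network automatically; if yours requires manual entry, the details above map onto the standard EIP-3085 `wallet_addEthereumChain` request (chain ID 421614 is `0x66eee` in hex):

```json
{
  "method": "wallet_addEthereumChain",
  "params": [{
    "chainId": "0x66eee",
    "chainName": "Arbitrum Sepolia",
    "rpcUrls": ["https://sepolia-rollup.arbitrum.io/rpc"],
    "nativeCurrency": { "name": "Ether", "symbol": "ETH", "decimals": 18 },
    "blockExplorerUrls": ["https://sepolia.arbiscan.io"]
  }]
}
```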
You need testnet ETH for gas fees on Arbitrum Sepolia.
Package your model for decentralized execution.
# PyTorch to ONNX
import torch
model = YourModel()
model.load_state_dict(torch.load('model.pt'))
dummy_input = torch.randn(1, 3, 224, 224)
torch.onnx.export(model, dummy_input, "model.onnx")
Store your model and input data on the decentralized web.
Uploading returns a content identifier (CID) such as QmXyz....

# Install IPFS
curl -sSL https://dist.ipfs.tech/kubo/v0.24.0/kubo_v0.24.0_linux-amd64.tar.gz | tar -xz
sudo ./kubo/install.sh
# Add your file
ipfs add model.onnx
# Returns: added QmXyz... model.onnx
Create and submit your compute task to the network.
npm install @oarnnetwork/sdk
import { OARNClient, cidToBytes32 } from '@oarnnetwork/sdk';
const client = new OARNClient({
privateKey: process.env.PRIVATE_KEY
});
// Upload model and input to IPFS, then convert CIDs to bytes32
const modelHash = cidToBytes32('QmYourModelCID...');
const inputHash = cidToBytes32('QmYourInputCID...');
const { taskId, tx } = await client.submitTask({
modelHash,
inputHash,
rewardPerNode: 10000000000000000n, // 0.01 ETH
requiredNodes: 3,
deadline: Math.floor(Date.now() / 1000) + 3600
});
console.log('Task submitted:', taskId);
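The cidToBytes32 helper bridges IPFS and the EVM: a CIDv0 is a base58-encoded sha2-256 multihash, so stripping the 2-byte prefix (0x12 0x20) leaves exactly 32 bytes, which fit a Solidity bytes32. A hedged Python sketch of that convention (an assumption about the SDK's encoding, not its actual source):

```python
# Hypothetical reimplementation of cidToBytes32; the SDK's real code may differ.
BASE58 = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def b58encode(data: bytes) -> str:
    num = int.from_bytes(data, "big")
    out = ""
    while num:
        num, rem = divmod(num, 58)
        out = BASE58[rem] + out
    pad = len(data) - len(data.lstrip(b"\x00"))  # leading zero bytes -> '1's
    return "1" * pad + out

def b58decode(s: str) -> bytes:
    num = 0
    for ch in s:
        num = num * 58 + BASE58.index(ch)
    raw = num.to_bytes((num.bit_length() + 7) // 8, "big")
    pad = len(s) - len(s.lstrip("1"))
    return b"\x00" * pad + raw

def cid_to_bytes32(cid: str) -> bytes:
    decoded = b58decode(cid)
    if decoded[:2] != b"\x12\x20" or len(decoded) != 34:
        raise ValueError("expected a CIDv0 (base58 sha2-256 multihash)")
    return decoded[2:]  # the raw 32-byte digest
```

CIDv1 or non-sha256 hashes would not fit in 32 bytes this way, which is presumably why the examples use CIDs in the Qm... (v0) form.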
The SDK is available on npm and GitHub.
Advanced users can interact directly with the TaskRegistryV2 contract:
| Contract Address | 0xD15530ce13188EE88E43Ab07EDD9E8729fCc55D0 |
| Function | submitTask(modelHash, inputHash, requirements, rewardPerNode, requiredNodes, deadline, consensusType) |

Once a node completes your task, retrieve the output.
// Check task status
const task = await client.getTask(taskId);
console.log('Status:', task.status);
// Get result when complete
if (task.status === 'completed') {
const result = await client.getResult(taskId);
console.log('Output CID:', result.outputCid);
// Download from IPFS (`ipfs` here is your own IPFS client instance)
const output = await ipfs.cat(result.outputCid);
}
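The status check above is a one-shot read; in practice you poll until the task reaches a terminal state. A small generic helper, sketched here for illustration (not part of the SDK; `get_status` stands in for your own wrapper around `client.getTask`):

```python
import time

def wait_for_completion(get_status, task_id, timeout_s=600, poll_interval_s=5):
    """Poll get_status(task_id) until a terminal state or the timeout expires."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        status = get_status(task_id)
        if status in ("completed", "failed"):
            return status
        time.sleep(poll_interval_s)
    raise TimeoutError(f"task {task_id} still pending after {timeout_s}s")
```

Choose a poll interval well above your RPC provider's rate limit; task completion is bounded by the on-chain deadline you set at submission.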
Submit thousands of parameter combinations as a single on-chain task, cutting gas costs by up to 99.99% compared to submitting each combination individually.
import {
OARNClient,
generateParameterGrid,
findOptimalByMetric
} from '@oarnnetwork/sdk';
import { parseEther } from 'ethers';
import { promises as fs } from 'fs';
const client = new OARNClient({
privateKey: process.env.PRIVATE_KEY
});
// Generate 10,000 parameter combinations
const inputs = generateParameterGrid({
temperature: { min: 20, max: 40, steps: 100 },
concentration: { min: 0.1, max: 1.0, steps: 100 }
});
console.log(`Generated ${inputs.length} combinations`);
// Upload model and submit batch task
const modelBuffer = await fs.readFile('model.onnx');
const { taskId, manifestCid } = await client.submitBatchTask(
modelBuffer,
inputs,
parseEther('0.1'), // Reward per node
5, // Required nodes
Math.floor(Date.now() / 1000) + 86400 // 24h deadline
);
console.log('Batch task submitted:', taskId);
console.log('Manifest CID:', manifestCid);
// Wait for consensus, then get results
const results = await client.getBatchResults(taskId);
if (results.consensusReached) {
// Find optimal parameter combination
const optimal = client.findOptimalResult(
results.results, 'yield', 'max'
);
console.log('Best parameters:', optimal.output);
// Get statistics
const stats = client.getMetricStats(results.results, 'yield');
console.log(`Yield: ${stats.mean} ± ${stats.stdDev}`);
// Filter top performers
const top10 = client.getTopResults(results.results, 'yield', 10);
}
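The helpers used above (generateParameterGrid, findOptimalResult, getMetricStats, getTopResults) come from the SDK. As a rough mental model of what they likely compute, here is a hedged Python sketch; the names, result shape, and exact semantics are assumptions, not the SDK's actual implementation:

```python
from itertools import product
from statistics import mean, stdev

def generate_parameter_grid(params):
    """Cartesian product of evenly spaced values for each parameter."""
    names = list(params)
    axes = []
    for name in names:
        lo, hi, steps = params[name]["min"], params[name]["max"], params[name]["steps"]
        step = (hi - lo) / (steps - 1) if steps > 1 else 0.0
        axes.append([lo + i * step for i in range(steps)])
    return [dict(zip(names, combo)) for combo in product(*axes)]

def find_optimal_result(results, metric, mode="max"):
    """Result whose output[metric] is largest (or smallest)."""
    best = max if mode == "max" else min
    return best(results, key=lambda r: r["output"][metric])

def get_metric_stats(results, metric):
    """Mean and sample standard deviation of a metric across results."""
    values = [r["output"][metric] for r in results]
    return {"mean": mean(values), "std_dev": stdev(values)}

def get_top_results(results, metric, n):
    """Top-n results by metric, best first."""
    return sorted(results, key=lambda r: r["output"][metric], reverse=True)[:n]
```

With 100 steps each for temperature and concentration, the grid has 100 × 100 = 10,000 entries, matching the example above.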
Run a compute node and earn COMP rewards for processing AI tasks
Ensure your system meets the minimum specifications.
**Minimum**

| CPU | 4 cores, 2.5GHz+ |
| RAM | 8 GB |
| Storage | 50 GB SSD |
| Network | 10 Mbps stable connection |
| OS | Linux (Ubuntu 22.04+), macOS, Windows 10+ |
**Recommended**

| CPU | 8+ cores |
| RAM | 32 GB |
| GPU | NVIDIA RTX 3060+ (8GB VRAM) |
| Storage | 500 GB NVMe SSD |
| Network | 100 Mbps+ |
Install the required dependencies.
# Update system
sudo apt update && sudo apt upgrade -y
# Install Rust
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
source ~/.cargo/env
# Install build dependencies
sudo apt install -y build-essential pkg-config libssl-dev
# (Optional) Install NVIDIA drivers for GPU support
sudo apt install -y nvidia-driver-535
# Install Homebrew
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
# Install Rust
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
source ~/.cargo/env
# Install Rust from rustup.rs
# Download and run rustup-init.exe
# Install Visual Studio Build Tools
# Download from visualstudio.microsoft.com
Get the node software from GitHub.
# Clone repository
git clone https://github.com/oarn-network/oarn-node.git
cd oarn-node
# Build release version
cargo build --release
# Verify installation
./target/release/oarn-node --version
Set up your node configuration and wallet.
# Initialize node (creates config and wallet)
./target/release/oarn-node init
# This creates:
# ~/.oarn/config.toml - Node configuration
# ~/.oarn/keystore.json - Encrypted wallet
# Open config file
nano ~/.oarn/config.toml
[node]
mode = "standard" # standard, validator-routed, or local
port = 4001 # P2P port
[compute]
max_concurrent_tasks = 2 # Tasks to run simultaneously
max_ram_mb = 8192 # RAM limit per task
max_vram_mb = 6144 # GPU VRAM limit
supported_frameworks = ["onnx", "pytorch"]
[wallet]
# Your wallet will be auto-generated
# Fund it with testnet ETH for gas
[privacy]
tor_enabled = false # Enable for anonymity
Your node needs ETH for gas fees when claiming rewards.
# Display your node's wallet address
./target/release/oarn-node wallet address
# Output: 0x1234...5678
Send 0.05 testnet ETH to this address using the steps from the Researcher guide.
Launch the node and begin accepting tasks.
# Start node in foreground
./target/release/oarn-node start
# Or run as background service (Linux)
sudo cp oarn-node.service /etc/systemd/system/
sudo systemctl enable oarn-node
sudo systemctl start oarn-node
# Check status
sudo systemctl status oarn-node
[INFO] OARN Node v0.1.0 starting...
[INFO] Wallet: 0x1234...5678
[INFO] Discovering network via ENS...
[INFO] Connected to 12 peers
[INFO] Listening on /ip4/0.0.0.0/tcp/4001
[INFO] Ready to accept tasks
[INFO] Capabilities: onnx, pytorch | RAM: 8GB | GPU: RTX 3060
Track your node's performance and earnings.
# Check node status
./target/release/oarn-node status
# View earnings
./target/release/oarn-node wallet balance
# View completed tasks
./target/release/oarn-node tasks history
Understand the OARN ecosystem and participate in governance
OARN uses a dual-token system for different purposes.
GOV (0xB97eDD49C225d2c43e7203aB9248cAbED2B268d3) holders vote on protocol upgrades, fee structures, treasury allocation, and network parameters.
COMP (0x24249A523A251E38CB0001daBd54DD44Ea8f1838) is earned by node operators and spent by researchers. Optional 2% burn on transfers.
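As a worked example of the 2% burn, assuming it is deducted from the transferred amount (an assumption; the token contract's exact mechanics may differ):

```python
BURN_BPS = 200  # 2% expressed in basis points (assumed burn rate and rounding)

def comp_transfer_split(amount_wei: int) -> tuple[int, int]:
    """Return (burned, received) for a COMP transfer with the burn enabled."""
    burned = amount_wei * BURN_BPS // 10_000
    return burned, amount_wei - burned
```

Sending 100 COMP would burn 2 COMP and deliver 98 COMP to the recipient.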
You need a wallet to hold and manage your tokens.
Import custom tokens using these contract addresses:
| GOV Token | 0xB97eDD49C225d2c43e7203aB9248cAbED2B268d3 |
| COMP Token | 0x24249A523A251E38CB0001daBd54DD44Ea8f1838 |
| Network | Arbitrum Sepolia (Testnet) |
Get GOV tokens to participate in governance.
OARN is currently on testnet. GOV tokens will be distributed at mainnet launch.
Use your GOV tokens to shape the protocol's future.
// Delegate to yourself
await govToken.delegate(yourAddress);
// Or delegate to someone else
await govToken.delegate(delegateAddress);
Stake GOV tokens to become an RPC provider or bootstrap node.
- RPC Provider: run blockchain RPC endpoints for the network.
- Bootstrap Node: help new nodes discover the P2P network.
// Approve GOV spending
await govToken.approve(registryAddress, stakeAmount);
// Register as RPC provider
await registry.registerRpcProvider(
"https://your-rpc.example.com",
stakeAmount
);
Monitor the health and growth of the OARN network.
**Is OARN live on mainnet?** Not yet. OARN is currently on Arbitrum Sepolia testnet. Mainnet launch is planned after security audits are complete.
**How much can I earn as a node operator?** Earnings depend on your hardware, uptime, and network demand. Nodes with GPUs typically earn more per task.
**Which model formats are supported?** Currently ONNX models are best supported. PyTorch and TensorFlow support is in development.
**Is my data private?** Model inputs are visible to the executing node. For sensitive data, enable Tor routing and use encrypted inputs.
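One way to act on that advice is to encrypt inputs client-side before pinning them to IPFS, sharing the key only with parties allowed to read the data. An illustrative sketch using the third-party cryptography package (not part of the OARN SDK; key distribution is up to you):

```python
# pip install cryptography -- illustrative only, not an OARN SDK feature.
from cryptography.fernet import Fernet

def encrypt_input(plaintext: bytes) -> tuple[bytes, bytes]:
    """Encrypt input data; returns (key, ciphertext). Keep the key off-chain."""
    key = Fernet.generate_key()
    return key, Fernet(key).encrypt(plaintext)

def decrypt_output(key: bytes, ciphertext: bytes) -> bytes:
    """Recover the plaintext with the saved key."""
    return Fernet(key).decrypt(ciphertext)
```

You would then `ipfs add` the ciphertext instead of the raw input; note the executing node can only run the task if it is given the key, so this pattern suits workflows where you trust specific nodes.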
**Can I run a node in the cloud?** Yes! Any Linux VPS with sufficient resources works. Cloud GPU instances can also be used.