
Neugi Swarm Documentation

Deploy, manage, and scale a 9-node neural network acting autonomously on your infrastructure.

Welcome to the Neugi documentation. Here you will find everything you need to know about setting up and running a local fleet of specialized, deterministic AI agents capable of raw operating system execution.

Neugi is specifically optimized for small, lightweight models (1B-parameter LLMs such as Qwen or Llama) running locally via Ollama, though it scales gracefully to larger cluster setups.

Quickstart

To begin, ensure you have Python 3.11+ and Ollama installed. Run the one-line install script in your terminal.

Windows (PowerShell)

irm neugi.sh/install | iex

macOS / Linux

curl -fsSL https://neugi.sh/install | bash

Once installed, initialize the dashboard ecosystem using:

python3 neugi_wizard.py

Sovereign Access & Security

As of version 3.5, Neugi operates with Unrestricted System Access by default for the Wizard and configured agents. Operations that mutate kernel state, delete directory trees, or provision network resources are authorized only after explicit user agreement.

You can enable this mode via the God Mode environment variable:

export NEUGI_GOD_MODE=1
python3 neugi_wizard.py

Warning: Sovereign Access allows the Wizard core to execute unrestrained system scripts. All setup flows require manual 'I AGREE' confirmation to ensure neural transparency.
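As a sketch of how an agent might gate privileged work on this flag, consider the helper below. The function names and the guard logic are hypothetical illustrations, not part of the shipped Wizard; only the `NEUGI_GOD_MODE` variable comes from the docs above.

```python
import os

def sovereign_access_enabled() -> bool:
    # Any non-empty value other than "0" is treated as enabled
    # (an assumption; the real semantics may differ).
    return os.environ.get("NEUGI_GOD_MODE", "0") not in ("", "0")

def run_privileged(command: str) -> str:
    # Hypothetical guard: refuse destructive work unless God Mode is on.
    if not sovereign_access_enabled():
        raise PermissionError("Sovereign Access disabled; set NEUGI_GOD_MODE=1")
    return f"executing: {command}"
```

With the variable set, `run_privileged` proceeds; without it, the call raises before touching the system.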

Context Memory (RAG)

Neugi uses a built-in, highly optimized CodebaseRAG class that bridges the gap between small context windows and large code repositories. It reads local files deterministically, without heavy dependencies such as ChromaDB.

  • Agents can trigger the search_memory(query) tool to pull Top-K snippets natively.
  • The Aurora node directly indexes directories, bypassing massive token ingestion.
  • Context remains active across isolated terminal sessions.
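The dependency-free retrieval idea can be sketched as a simple keyword scorer over local files. The `MiniCodebaseRAG` class below is an illustration of the approach under that assumption, not the shipped CodebaseRAG implementation; only the `search_memory(query)` name comes from the docs.

```python
from pathlib import Path

class MiniCodebaseRAG:
    """Illustrative deterministic retriever with no external dependencies."""

    def __init__(self, root: str):
        self.chunks = []
        for path in Path(root).rglob("*.py"):
            # One chunk per file keeps the sketch small; a real retriever
            # would split files into snippet-sized windows.
            self.chunks.append((str(path), path.read_text(errors="ignore")))

    def search_memory(self, query: str, top_k: int = 3):
        terms = query.lower().split()
        scored = [
            (sum(text.lower().count(t) for t in terms), name)
            for name, text in self.chunks
        ]
        scored.sort(reverse=True)
        # Return the Top-K file paths that matched at least one term.
        return [name for score, name in scored[:top_k] if score > 0]
```

Because scoring is plain term counting over a fixed file walk, the same query over the same tree always returns the same snippets.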

The Agentic Network

Rather than relying on a single monolithic query, Neugi uses Multi-Agent Orchestration via the delegate_task(target_agent, task) mechanism.

  • Cipher — Code syntax and logic checks.
  • Quark — Logical and strategic planning.
  • Nexus — The orchestrator, routing sub-tasks dynamically.
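One way to picture the delegation mechanic is a dispatch table keyed by agent name. The handler bodies below are invented placeholders; only the `delegate_task(target_agent, task)` signature and the agent names come from the docs.

```python
# Hypothetical handlers standing in for the real agent back-ends.
AGENTS = {
    "cipher": lambda task: f"[Cipher] syntax check: {task}",
    "quark": lambda task: f"[Quark] plan: {task}",
}

def delegate_task(target_agent: str, task: str) -> str:
    handler = AGENTS.get(target_agent.lower())
    if handler is None:
        # Nexus, the orchestrator, acts as the fallback router.
        return f"[Nexus] routing '{target_agent}' dynamically for: {task}"
    return handler(task)
```

A caller never needs to know which node ends up doing the work; unknown targets simply fall through to the orchestrator.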

Anti-Monolith Mechanisms (Easter Eggs)

Because Neugi is built natively and rapidly without immense overhead, it actively mocks bloated visual-pipeline architectures (like OpenCLAW) out of the box via the NEUGI_SATIRE_QUOTES dictionary. Launch the CLI and watch the TUI loading frames cycle randomly through 30+ highly sarcastic loading messages.
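A minimal sketch of the quote-cycling mechanic might look like the following. The two entries are invented placeholders (the shipped NEUGI_SATIRE_QUOTES table has 30+), and the helper function is an assumption for illustration.

```python
import random

# Stand-in entries; the real dictionary ships with 30+ quotes.
NEUGI_SATIRE_QUOTES = {
    "boot": "Reticulating splines the monoliths forgot...",
    "index": "Indexing faster than a 40-stage visual pipeline...",
}

def random_loading_frame(rng=None):
    # Each TUI loading frame picks one quote at random.
    rng = rng or random
    return rng.choice(list(NEUGI_SATIRE_QUOTES.values()))
```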

Expanding the Toolset

Phase 6 incorporates powerful filesystem access and safe execution defaults. Native tools include: web_crawl, read_local_file, list_directory, and git_execute. Adding a tool natively takes fewer than five lines of Python, with no YAML configuration overhead.
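To illustrate the few-lines claim, a decorator-based registry could look like this. The `TOOLS` table and `@tool` decorator are assumptions sketched for this example, not the shipped registration API; the tool itself is a made-up word counter.

```python
# Hypothetical registry: one dict plus a three-line decorator.
TOOLS = {}

def tool(fn):
    TOOLS[fn.__name__] = fn
    return fn

@tool
def word_count(path: str) -> int:
    """Count whitespace-separated words in a local file."""
    with open(path) as f:
        return len(f.read().split())
```

Under this design, defining the function and stacking the decorator is the entire registration step.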

CLI Reference

Neugi provides a robust CLI similar to OpenCLAW for full swarm management.

neugi start      # Start the swarm server
neugi stop       # Shutdown all agents
neugi status     # Check neural health
neugi wizard     # Run interactive repair/setup
neugi logs -f    # Tail the agent activity streams

Telegram Bot Integration

Command your swarm remotely via a secure Telegram bridge. The bot provides full parity with the TUI/GUI interfaces.

  • /status — Check overall swarm health.
  • /agents — List all active agents and their current tasks.
  • /logs — View the most recent system log entries.
  • /topology — Visualize network nodes and agent distribution.
  • /monitor — Live telemetry (CPU, RAM, Neural Load).
  • /fix — Trigger the autonomous repair sequence.
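On the bridge side, the slash commands above could be routed through a small dispatch function like the sketch below. The `swarm` object and its `status()`/`agents()` methods are hypothetical, as is the handler table; this is not the shipped bot code.

```python
def handle_command(text: str, swarm) -> str:
    """Map an incoming '/command' message to a swarm call."""
    command = text.split()[0].lstrip("/")
    handlers = {
        "status": lambda: swarm.status(),
        "agents": lambda: ", ".join(swarm.agents()),
        # ...remaining commands would be registered the same way.
    }
    handler = handlers.get(command)
    return handler() if handler else f"Unknown command: /{command}"
```

Keeping the routing separate from the Telegram transport means the same table can back the TUI and GUI, which is how full interface parity is typically achieved.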

REST API Reference

The Neugi gateway exposes a deterministic, JSON-RPC-inspired REST API on port 19888.

POST /api/chat      # Stream agent responses
GET  /api/status    # Retrieve 3D Swarm state
POST /api/skill     # Execute a dynamic skill payload
GET  /api/memory    # Audit the RAG context graph
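A client can target these endpoints with the standard library alone. The request builder below constructs (but does not send) a POST to /api/chat; the `{"prompt": ...}` body shape is an assumption for illustration, while the port and path come from the table above.

```python
import json
from urllib import request

BASE = "http://localhost:19888"  # gateway port from the docs

def build_chat_request(prompt: str) -> request.Request:
    """Build a POST /api/chat request; the JSON schema is assumed."""
    body = json.dumps({"prompt": prompt}).encode()
    return request.Request(
        f"{BASE}/api/chat",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
```

Sending it is then `urllib.request.urlopen(build_chat_request("hello"))` against a running gateway.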

Ecosystem & Partnership

Neugi is designed as a Neural Bridge, a unifying layer that connects the reliability of local core engines with the raw reasoning power of frontier SOTA models.

Ecosystem Vision 2026

  1. Local Foundation: Built with Ollama for zero-config privacy.
  2. Global Reach: Integration for GPT-5, Claude 4, and Gemini 3.
  3. Cost Efficiency: Native scaling for DeepSeek & Alibaba clusters.

Collaboration & Grants

We are actively seeking partnerships with AI hardware and software providers to expand the boundaries of autonomous swarm intelligence. If you are a model provider or research organization, we invite you to connect for grants and integration opportunities.

Last updated March 15th, 2026.