Developer Handbook · Version 1.0
Orion OS — Developer Handbook
Orion OS is the product name for this stack.
Appendix A (below) covers SAL JSON-RPC, first-bootd HTTP, the catalog schema, and default ports.
1. Introduction
Orion OS is a minimal, modular, AI-oriented Linux desktop built to be:
- Fast — small init, predictable boot, localhost-first services
- Understandable — thin layers: kernel, init, compositor, session, apps
- Extensible — catalog-driven apps, YAML profiles, JSON-RPC SAL
- Self-hosting-friendly — developer edition can carry tooling for building the OS
- AI-friendly — intent routing and a pluggable AI backend (HTTP)
The shipping stack uses Alpine Linux userland, a custom shell init (init/init.sh installed as /init), Weston (Wayland compositor), and Rust binaries for services and applications. Control and introspection go through SAL (System Abstraction Layer) over JSON-RPC on HTTP.
2. System architecture
Layers from hardware upward:
Hardware
→ Linux kernel (Alpine linux-lts in shipped images)
→ PID 1: init/init.sh (/init)
→ udev, seatd, optional Windows compat daemon, net-status, update-checker
→ Weston (DRM/fbdev) or compositor-watchdog wrapper
→ devtoolsd → SAL (sal-backend-linux) → ai-backend (+ ai-shell), background-loader
→ First-boot flow OR session-launcher → Hub / AI UI / catalog apps
Compositor path. By default init tries compositor-watchdog when USE_COMPOSITOR_WATCHDOG is non-zero and the binary exists; otherwise it starts Weston directly (MYO_COMPOSITOR_BACKEND: auto, fbdev, or drm).
Service pattern. Long-running components are started as background processes from init, with logs under /var/log/myo/services/<name>.log.
Workspace layout (this repository).
| Area | Purpose |
|---|---|
init/ | PID 1 script |
apps/ | Application crates |
services/ | Daemon crates |
sal/backends/linux | SAL JSON-RPC server (sal-backend-linux) |
ai/backend, ai/ai-ui | AI HTTP backend and UI |
crates/ | Shared libraries (sal-types, profile-core, catalog-core, …) |
build/ | Rootfs overlay, image scripts, installer |
profiles/ | Boot profiles consumed by SAL / background-loader |
catalog/ | apps.yaml, categories |
internet-hub/ | Hub layout and config |
3. Boot and init
3.1 Sequence
- Firmware loads the kernel (installed image or live media).
- Kernel runs /init (a copy of init/init.sh).
- Init mounts essential filesystems, prepares XDG_RUNTIME_DIR, starts udev and seatd.
- Optional net-start (firewall / NetworkManager baseline).
- Compositor (Weston or watchdog).
- devtoolsd (before SAL, for dev tool forwarding).
- SAL, ai-backend (loop with optional reload on exit 75), ai-shell, background-loader.
- If /etc/myo/firstboot-done is missing: first-bootd then first-boot UI; otherwise session-launcher after Wayland is ready.
- Init stays alive in a sleep loop (PID 1).
3.2 Important environment variables
Documented in the header of init/init.sh. Highlights:
| Variable | Typical meaning |
|---|---|
WESTON_BIN | Path to Weston; empty skips direct Weston (watchdog may still run) |
SAL_BIN | Default /usr/bin/sal-backend-linux |
SAL_URL | Used by loader; default http://127.0.0.1:8750/rpc |
SAL_DANGEROUS=1 | Allows privileged SAL operations (mount, packages, modprobe, power mode); unsafe by design |
FIRST_BOOTD_LISTEN | Default 127.0.0.1:8793 |
NET_STATUS_LISTEN | Default 127.0.0.1:8791 |
HUB_CONFIG | Internet Hub YAML (dev trees may use repo path) |
PROFILES_DIR, BOOT_PROFILE | Profile YAML and boot profile name |
LOADER_REAL_APPLY=1 | Loader may run real package/service apply from profile |
4. Editions
Two editions drive UX and optional tooling:
| Edition | Intent |
|---|---|
| user | Minimal surface; no developer-centric extras |
| developer | Tooling, workspace expectations, dev-oriented Hub tiles / intents |
The active edition is stored in /etc/myo/edition (values user or developer). Image build scripts set this via staging (assemble-rootfs.sh).
5. Core services
Representative services (Rust daemons under services/):
| Service | Role |
|---|---|
net-status | Local HTTP API for connectivity; SAL calls NET_STATUS_URL |
update-checker | Update metadata (not full OS updater in v1) |
compositor-watchdog | Restarts compositor / fallback UX |
devtoolsd | Forwards dev SAL methods to local tooling |
first-bootd | Loopback HTTP API for the wizard |
background-loader | Applies boot profile via SAL RPC |
session-launcher | Starts session after first-boot completes |
sal-backend-linux | SAL JSON-RPC |
ai-backend | HTTP server for AI UI (POST /prompt placeholder path) |
Most speak HTTP on localhost; SAL uses JSON-RPC POST on /rpc (and /).
6. Application model
- Apps are Rust crates under apps/<name>/ (see workspace Cargo.toml members).
- Apps ship as ELF binaries installed to /usr/bin in the rootfs (via assemble-rootfs.sh).
- UIs use WebView or native stacks as implemented per app.
- Discovery happens through catalog/apps.yaml and the Internet Hub (hub.yaml).
Adding an app means: implement the crate, list it in the workspace, ensure the binary is staged by assemble-rootfs.sh, add a catalog entry, and optionally Hub tiles and AI intents.
7. Catalog and Internet Hub
Catalog (catalog/apps.yaml)
Each app entry commonly includes:
- id, name, description
- command — binary name on $PATH (older docs said binary; this repo uses command in YAML)
- tags — categories / filters
- install, hidden, autostart
- download_behavior — e.g. mode: process vs open_webpage with url
Categories may be listed in catalog/categories.yaml.
Internet Hub
The Hub resolves tiles and sections from internet-hub/hub.yaml (and related assets). Catalog entries tie apps to user-visible launchers; the Hub organizes high-level navigation (system tiles, external links, etc.).
8. AI layer
- Intent router: ai/backend/src/intent.rs maps natural phrases to SAL actions (e.g. start process, SAL calls). Extend phrases for synonyms; keep help strings accurate.
- Backend: Axum HTTP server; default listen 127.0.0.1:8787 (AI_LISTEN). POST /prompt is the integration seam (placeholder semantics as of this handbook).
- UI: ai-ui connects users to the backend.
When adding a feature reachable by voice/text, update intents, catalog, and any Hub entry together so discovery stays coherent.
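The routing idea can be sketched as a simple phrase table. This is a hypothetical sketch only: the actual structure, names, and phrase set in ai/backend/src/intent.rs are not reproduced here.

```rust
use std::collections::HashMap;

/// Hypothetical phrase table mapping normalized user phrases to SAL method
/// names. The real mapping lives in ai/backend/src/intent.rs; the entries
/// here are illustrative, not copied from the repo.
fn phrase_table() -> HashMap<&'static str, &'static str> {
    let mut phrases = HashMap::new();
    phrases.insert("open terminal", "sal.process.start");
    phrases.insert("launch terminal", "sal.process.start"); // synonym, same intent
    phrases.insert("network status", "sal.network.status");
    phrases
}

/// Resolve a user phrase to a SAL method, if any, after trimming and lowercasing.
fn route(input: &str) -> Option<&'static str> {
    phrase_table().get(input.trim().to_lowercase().as_str()).copied()
}

fn main() {
    println!("{:?}", route("Open Terminal"));
}
```

Keeping synonyms as extra keys pointing at the same method is what "extend phrases for synonyms" amounts to in this shape.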
9. SAL overview
SAL is implemented by sal-backend-linux (crate sal-linux).
- Transport: HTTP POST JSON-RPC to http://127.0.0.1:8750/rpc by default (SAL_LISTEN).
- Semantics: methods are namespaced strings such as sal.system.ping, sal.process.start, sal.network.status.
- Safety: mutations that touch disks, packages, kernel modules, or power policy require SAL_DANGEROUS=1 in the environment; otherwise structured blocked responses are returned (see Milestone 4 rules in the repo).
- Process and profiles: sal.profiles.*, sal.process.*, sal.services.* integrate with the profile loader and init expectations.
Full method list and request shapes are in Appendix A.
10. First-boot experience
Components
- first-bootd — Axum server on FIRST_BOOTD_LISTEN (default 127.0.0.1:8793), localhost-only.
- first-boot — WebView (or embedded UI) calling that API; FIRST_BOOTD_URL tells the UI where to connect.
Completion marker: /etc/myo/firstboot-done. After it exists, init skips the wizard and starts session-launcher (when Wayland and paths are valid).
Extending the wizard: evolve the first-bootd state machine and matching UI screens; keep endpoints documented in Appendix A.
11. Developing applications
- Create apps/<name>/ with Cargo.toml and src/main.rs.
- Register the crate in the workspace root Cargo.toml.
- Ensure assemble-rootfs.sh copies the binary (see its "for b in …" list).
- Add a catalog/apps.yaml entry (command matches the installed binary name).
- Optional: internet-hub/hub.yaml tile; ai/backend/src/intent.rs phrases.
Quality bar: small binaries, explicit errors, respect edition if behavior differs.
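A minimal main.rs consistent with that quality bar might look like the following sketch. The crate name my-app, the version string, and the flag handling are illustrative assumptions, not an existing app in the repo.

```rust
use std::process::ExitCode;

// Hypothetical version string; a real crate would take this from Cargo metadata.
const VERSION: &str = "0.1.0";

/// Handle the argument list; returns the line to print or an explicit error.
fn respond(args: &[String]) -> Result<String, String> {
    match args.first().map(String::as_str) {
        Some("--version") => Ok(format!("my-app {VERSION}")),
        None => Ok("my-app: ready".to_string()),
        Some(other) => Err(format!("unknown argument: {other}")),
    }
}

fn main() -> ExitCode {
    let args: Vec<String> = std::env::args().skip(1).collect();
    match respond(&args) {
        Ok(line) => {
            println!("{line}");
            ExitCode::SUCCESS
        }
        Err(e) => {
            // Explicit error on stderr, non-zero exit: no silent failure.
            eprintln!("my-app: {e}");
            ExitCode::FAILURE
        }
    }
}
```

The point of the shape is that every failure path produces a message and a non-zero exit code, which keeps small binaries debuggable from init logs alone.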
12. Developing services
- Create services/<name>/ with Axum or a minimal event loop as needed.
- Expose a clear localhost HTTP API if other components must call it.
- Wire startup in init/init.sh via start_bg <logical-name> <binary> … (follow existing patterns; keep logs under /var/log/myo/services/).
- Document endpoints beside the crate or in this handbook's appendix if stable.
Guidelines: bounded timeouts, structured JSON responses, no unnecessary global state.
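For "structured JSON responses", even a minimal daemon benefits from one helper that emits a consistent error shape. This is a sketch: the field names ("ok", "error", "detail") are assumptions of this example, not a contract from the repo, and a real service would use serde_json rather than hand-formatting.

```rust
/// Build a structured JSON error body with a stable, predictable shape.
/// Field names here are illustrative, not a repo contract.
fn error_body(error: &str, detail: &str) -> String {
    // serde_json would normally handle escaping; this sketch escapes only
    // backslashes and quotes for brevity.
    let esc = |s: &str| s.replace('\\', "\\\\").replace('"', "\\\"");
    format!(
        "{{\"ok\":false,\"error\":\"{}\",\"detail\":\"{}\"}}",
        esc(error),
        esc(detail)
    )
}

fn main() {
    println!("{}", error_body("timeout", "upstream took too long"));
}
```

Callers then always branch on the same keys instead of parsing free-form error strings.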
13. Profiles and background loader
- Profiles: YAML under profiles/ (installed to PROFILES_DIR, default /usr/share/myo/profiles).
- Boot profile: BOOT_PROFILE (default minimal) is passed to background-loader, which calls sal.profiles.apply over SAL_URL.
- LOADER_REAL_APPLY: when set, the loader may perform real package/service operations from the profile (subject to SAL_DANGEROUS gates).
Use profiles to describe “what should be true” after boot without hard-coding every sequence in init.
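To make the idea concrete, a declarative profile might look like the following sketch. The field names and structure here are assumptions for illustration only; the real schema is defined by profile-core and the files under profiles/.

```yaml
# Hypothetical profile sketch — consult profiles/ and the profile-core crate
# for the actual schema. This describes desired end state, not a boot sequence.
name: minimal
packages:
  - weston
services:
  - net-status
```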
14. Edition-aware development
Read /etc/myo/edition at runtime:
```rust
let edition = std::fs::read_to_string("/etc/myo/edition")?;
if edition.trim() == "developer" {
    // dev-only menus, shortcuts, or SAL paths
}
```

Avoid baking edition checks into shared libraries unless necessary; prefer feature flags at the app boundary.
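A slightly more defensive variant treats a missing or unreadable marker as the user edition. The fallback policy here is an assumption of this sketch, not documented repo behavior.

```rust
use std::path::Path;

/// Read the edition marker, trimming whitespace and defaulting to "user"
/// when the file is absent or unreadable (fallback policy assumed here).
fn edition_from(path: &Path) -> String {
    std::fs::read_to_string(path)
        .map(|s| s.trim().to_string())
        .unwrap_or_else(|_| "user".to_string())
}

fn is_developer(edition: &str) -> bool {
    edition == "developer"
}

fn main() {
    let edition = edition_from(Path::new("/etc/myo/edition"));
    println!("edition: {edition} (developer: {})", is_developer(&edition));
}
```

Degrading to the minimal edition keeps apps usable on dev hosts where /etc/myo/edition does not exist.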
15. Debugging and logs
| Location | Contents |
|---|---|
/var/log/myo/init.log | PID 1 log (stdout/stderr of init) |
/var/log/myo/services/*.log | Per-service logs |
/var/log/myo/*.log | Other consolidated logs (e.g. AI backend) |
Use the log-viewer app on device. On failure, init prints fallback messages to the configured TTY (MYO_STATUS_TTY, default /dev/tty1) listing useful paths and ps/tail hints.
Host development: scripts/run-dev.sh / scripts/run-dev.ps1 coordinate SAL + loader + AI for local iteration (see script headers).
16. Build and release images
Producing bootable .raw and .iso images is Linux-centric (chroot, grub-install, losetup, …).
- Quick reference: build/SHIP.txt, build/build-prerequisites
- One-shot (Linux): ./build/scripts/build-images.sh [user|developer]
- Docker: Dockerfile.ship + docker-ship.sh (privileged) with build/ mounted in /bin
Windows checkout: use WSL2, Docker, or a Linux VM to produce images; native Windows Rust is fine for cargo check of crates but not for full image builds.
Cross-target binaries: build/scripts/build-linux-musl-release.sh (Docker-based musl builds including GUI crates by default).
17. Installer
The installer lives under build/installer/ and targets installation from live media: disk selection, edition, rootfs copy, bootloader setup, reboot. See installer scripts and build/SHIP.txt for install-to-disk.sh usage.
18. Contributing
- Rust 2021, workspace rustfmt consistency where applicable
- Prefer small focused binaries and clear HTTP/JSON contracts
- New user-visible surfaces: update catalog, Hub, and intents together
- Security: never widen SAL_DANGEROUS defaults; document risky flags in init.sh
- PRs should state what changed and how to verify (cargo check, image smoke, manual step)
19. Roadmap themes
| Phase | Focus |
|---|---|
| Foundation | Boot, SAL Milestone coverage, compositor session, catalog |
| Experience | Polish first-boot, settings, connectivity, AI UX |
| Platform | Broader hardware, renames (myo → orion paths), CI depth, optional backends |
Exact milestones live in repo plans and .cursor/rules; treat this table as directional.
Appendix A — Developer API reference
A.1 SAL JSON-RPC
Endpoint: HTTP POST to /rpc or / on the SAL listen address (default http://127.0.0.1:8750).
Envelope: standard JSON-RPC 2.0 (method, params, id). Responses use JsonRpcResponse in sal-types (success result or structured error).
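For quick experiments, the envelope for a parameter-less method can be built with plain string formatting. This is a sketch only: real clients should serialize with serde_json, and the empty params object assumes the method takes no parameters.

```rust
/// Build a JSON-RPC 2.0 request envelope for a parameter-less SAL method.
/// Quick-experiment sketch; real clients should use serde_json for this.
fn rpc_request(method: &str, id: u64) -> String {
    format!(
        "{{\"jsonrpc\":\"2.0\",\"method\":\"{method}\",\"params\":{{}},\"id\":{id}}}"
    )
}

fn main() {
    // POST this body to http://127.0.0.1:8750/rpc with
    // Content-Type: application/json to exercise SAL locally.
    println!("{}", rpc_request("sal.system.ping", 1));
}
```

Methods with parameters replace the empty params object with a method-specific payload, matching the serde_json structs in the Linux backend.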
Representative methods (non-exhaustive; see sal/backends/linux/src/dispatch.rs for the full match):
| Method | Category |
|---|---|
sal.system.ping | Version / milestone |
sal.system.fallback_status | Compositor watchdog / fallback |
sal.system.update_status | Update-checker snapshot |
sal.network.list_interfaces | Sysinfo network interfaces |
sal.network.status | Hostname, kernel, load, connectivity via net-status |
sal.network.connect_wifi | Stub (no credential handling in tree) |
sal.gpu.info | DRM sysfs on Linux |
sal.gpu.load_driver | Simulated / gated real modprobe |
sal.audio.list_devices | /proc/asound on Linux |
sal.audio.set_profile | Routing stub |
sal.power.status | Sysinfo + /sys/class/power_supply |
sal.power.set_mode | Gated cpufreq / policy |
sal.storage.list_mounts | Mounts / sysinfo disks |
sal.storage.mount / sal.storage.unmount | Gated mount/umount |
sal.packages.list_installed | dpkg/rpm style listing on Linux |
sal.packages.install / sal.packages.remove | Gated package managers |
sal.profiles.list / status / apply | Profile YAML |
sal.services.start / sal.services.stage | Service control |
sal.process.list / start / kill | Process table |
sal.dev.* | Forwarded to devtoolsd when configured |
Parameters are method-specific (serde_json structs in the Linux backend). Invalid params return INVALID_PARAMS RPC errors.
A.2 first-bootd HTTP API
Base URL: http://127.0.0.1:8793 by default (FIRST_BOOTD_URL / FIRST_BOOTD_LISTEN). Server must bind loopback only.
| Method | Path | Purpose |
|---|---|---|
GET | /state | Wizard state / progress |
POST | /set-edition | JSON body, e.g. { "edition": "developer" } |
POST | /set-username | JSON body, e.g. { "username": "alice" } |
POST | /set-network | Network step payload (see first-bootd implementation) |
POST | /finish | Complete wizard (theme and related fields per handler) |
CORS is open on localhost for browser/WebView convenience; do not expose this port beyond the device.
A.3 Catalog schema (practical)
Under the apps: key, list entries using fields such as:

```yaml
- id: my-app
  name: "My App"
  description: "Short description"
  website: ""
  package: ""
  command: my-app
  tags: [tools]
  install: true
  hidden: false
  autostart: false
  download_behavior:
    mode: process
```

For external downloads use mode: open_webpage together with url.
A.4 Default localhost ports
| Component | Default listen |
|---|---|
| SAL | 127.0.0.1:8750 |
| AI backend | 127.0.0.1:8787 |
| Internet Hub | 127.0.0.1:8790 (INTERNET_HUB_LISTEN) |
| net-status | 127.0.0.1:8791 |
| first-bootd | 127.0.0.1:8793 |
Override via environment where supported.
A.5 AI backend routes (integration)
The AI backend exposes static UI fallbacks and POST /prompt for the assistant flow (ai/backend). Treat non-stable routes as internal until documented in crate main.rs.
End of Orion OS Developer Handbook.