
AI That Sees
What You See

Contextual AI overlays for software development and industrial field work. Not another screen to stare at—intelligence projected into your line of sight, exactly when you need it.

SANDOLENS HUD — LIVE
$ git status → 3 files changed
$ build → ✓ passed (2.1s)
⚡ agent: refactoring auth module...
△ PR #47 ready for review
◉ voice: "move the tests to shared"
→ transcription piped to active agent

The Screen Is the Bottleneck

Software developers context-switch between terminals, browsers, docs, and chat hundreds of times a day. Field workers juggle paper checklists, radios, and phone cameras while their hands are full.

AI agents are getting powerful—but they still live behind a screen you have to look at. The information bottleneck isn't compute. It's the last two feet between the display and your eyes.

SandoLens closes that gap. AI overlays projected into your field of vision, driven by agents that already understand your work.

Product One

SandoLens Code

Your AI coding agent, visible at a glance. Build status, agent progress, voice commands—always in your peripheral vision.

RIGHT-EYE OVERLAY
AGENT STATUS
◉ claude-opus — refactoring auth/middleware.ts
✓ 14 tests passing · 0 failures
△ PR #47 awaiting review
BUILD PIPELINE
✓ lint · ✓ typecheck · ✓ test · ✓ deploy
staging.app → v2.4.1 (32s ago)
VOICE PIPELINE
🎙 "move the auth tests into shared utils"
→ transcribed → routed to agent → executing...

Agent Awareness

See what your AI agents are doing without switching windows. Current task, file changes, blockers — glanceable in your peripheral vision.

Voice-to-Agent Pipeline

Speak naturally. On-device Whisper transcription feeds directly into your active coding agent. No typing, no context switch.
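A minimal sketch of what such a relay could look like, assuming transcription happens upstream; the `VoiceCommand` and `AgentRouter` names are illustrative, not the shipped API:

```python
# Hypothetical voice-to-agent relay: a transcribed utterance is handed
# to whichever agent currently holds focus. Low-confidence transcripts
# are surfaced for confirmation instead of being executed blindly.
from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class VoiceCommand:
    transcript: str
    confidence: float  # transcription confidence, 0.0 - 1.0


class AgentRouter:
    """Routes transcribed commands to the active coding agent."""

    def __init__(self) -> None:
        self.active_agent: Optional[Callable[[str], str]] = None

    def set_active(self, agent: Callable[[str], str]) -> None:
        self.active_agent = agent

    def dispatch(self, cmd: VoiceCommand, min_confidence: float = 0.8) -> str:
        if cmd.confidence < min_confidence:
            return f"confirm? {cmd.transcript!r}"
        if self.active_agent is None:
            return "no active agent"
        return self.active_agent(cmd.transcript)


# Usage with a stub standing in for a real coding agent:
router = AgentRouter()
router.set_active(lambda text: f"agent executing: {text}")
result = router.dispatch(VoiceCommand("move the auth tests into shared utils", 0.93))
print(result)  # agent executing: move the auth tests into shared utils
```

The confirmation fallback matters on a HUD: a misheard command should render as a prompt in the overlay, not as an irreversible agent action.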

Build & Deploy Status

CI/CD pipeline status, test results, and deployment confirmations appear as ambient overlays. Green means go.

Git & PR Flow

Branch status, merge conflicts, review requests — the information you check 50 times a day, now always visible.

Epistemic Hole Detection

When the system detects you're stuck — circling the same files, re-reading docs — it surfaces contextual hints before you have to ask.
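One plausible heuristic for this, sketched under stated assumptions (the real detector is not described here): treat repeated visits to the same file within a short window, with no edits in between, as a stuck signal.

```python
# Illustrative "circling" detector: flag when the same file is reopened
# several times inside a sliding window without any edit breaking the loop.
# Window length and threshold are made-up tuning parameters.
from collections import deque
from typing import Deque, Tuple


class StuckDetector:
    def __init__(self, window_s: float = 300.0, revisit_threshold: int = 4):
        self.window_s = window_s
        self.revisit_threshold = revisit_threshold
        self.visits: Deque[Tuple[str, float]] = deque()

    def record_visit(self, path: str, now: float) -> bool:
        """Record a file-open event; return True if it looks like circling."""
        self.visits.append((path, now))
        # Drop events that have aged out of the sliding window.
        while self.visits and now - self.visits[0][1] > self.window_s:
            self.visits.popleft()
        revisits = sum(1 for p, _ in self.visits if p == path)
        return revisits >= self.revisit_threshold

    def record_edit(self, path: str) -> None:
        """An actual edit breaks the circling pattern for that file."""
        self.visits = deque((p, t) for p, t in self.visits if p != path)


detector = StuckDetector(window_s=300, revisit_threshold=3)
assert not detector.record_visit("auth/middleware.ts", 0.0)
assert not detector.record_visit("auth/middleware.ts", 60.0)
assert detector.record_visit("auth/middleware.ts", 120.0)  # third pass: surface a hint
```

The key design choice is that the detector only ever *suggests*; the overlay surfaces a hint, and the user decides whether to act on it.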

Product Two

SandoLens Field

Hands-free AI for construction, maintenance, and industrial field work. Because you can't automate a plumber from a data center.

📋

Smart Checklists

Safety inspections and task sequences projected into view. Check off items with voice commands. Compliance documentation auto-generated.

📷

Visual Part ID

Point your camera at a component. AI identifies the part, pulls up specs, installation guides, and compatible replacements in real-time.

🌐

Crew Translation

Real-time speech translation for multilingual job sites. Spoken instructions transcribed and translated, displayed as subtitles in your field of view.

📐

Spatial Measurement

Camera-based dimension estimation overlaid on what you see. Measure openings, verify clearances, check level — no tape measure needed for quick checks.

👁

Remote Expert

An off-site specialist sees exactly what you see through the glasses camera. They annotate your view in real-time. Like screen-sharing for the physical world.

📸

Auto-Documentation

Continuous photo capture with GPS, timestamp, and task context. Insurance compliance, progress reports, and incident documentation happen automatically.

FIELD HUD — ACTIVE JOB SITE
◉ Task: Install HVAC unit 3B · Step 4/7
✓ Mounting brackets verified — torque spec met
📷 Part detected: Carrier 50XC — specs loaded
⚠ Clearance check: 18" required, ~16" measured — flag for review
🎙 Foreman (ES→EN): "Move the ductwork two inches left"
GPS: 51.0447°N, 114.0719°W · Photos: 23 captured · Safety: ✓ current

Built on Real Infrastructure

SandoLens isn't a mockup. It's the visual layer on top of an existing AI agent orchestration platform already tracking code changes, routing tasks, and managing multi-agent workflows.

WEARABLE: AR Glasses (Brilliant Labs Frame / Android XR)
  │ Bluetooth / USB
BRIDGE DAEMON: Display compositor · Camera relay · Voice pipeline
  │ Local APIs
MESH: File watcher · CRDT versions · Activity log
ORCHESTRATOR: Agent routing · Task queue · Worktree mgmt
VOICE TOOLS: Whisper STT · Piper TTS · On-device GPU
  │ Event Stream
AGENTS: Claude · Gemini · Codex · Local LLM

Event-Sourced

Every agent action, file change, and voice command is logged as an immutable event. Full audit trail, complete reproducibility.
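A minimal sketch of how such a log could achieve both properties, assuming a hash-chained append-only structure (field names and chaining scheme are illustrative, not the platform's actual format):

```python
# Append-only, hash-chained event log sketch: each event embeds the hash
# of its predecessor, so any after-the-fact tampering breaks the chain.
import hashlib
import json
import time


class EventLog:
    def __init__(self) -> None:
        self.events: list = []

    def append(self, kind: str, payload: dict) -> dict:
        prev_hash = self.events[-1]["hash"] if self.events else "0" * 64
        event = {
            "seq": len(self.events),
            "ts": time.time(),
            "kind": kind,          # e.g. "file_change", "voice_command"
            "payload": payload,
            "prev": prev_hash,
        }
        body = json.dumps(event, sort_keys=True).encode()
        event["hash"] = hashlib.sha256(body).hexdigest()
        self.events.append(event)
        return event

    def verify(self) -> bool:
        """Recompute every link; any mutation makes this return False."""
        prev = "0" * 64
        for e in self.events:
            if e["prev"] != prev:
                return False
            body = {k: v for k, v in e.items() if k != "hash"}
            digest = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if digest != e["hash"]:
                return False
            prev = e["hash"]
        return True
```

Replaying the list from `seq` 0 reproduces the full session; `verify()` is the audit check.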

Multi-Agent

Claude, Gemini, Codex, and local models work in parallel via git worktrees. The orchestrator routes tasks to the right model for the job.
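A hedged sketch of that routing idea: each task gets a model plus a dedicated worktree path and branch, so parallel edits never collide. The routing table and path layout below are invented for illustration; only the model names come from the stack above.

```python
# Hypothetical orchestrator routing: task kind -> model, and a per-task
# git worktree so agents edit in isolation. Each Assignment corresponds
# to running: git worktree add <worktree> -b <branch>
from dataclasses import dataclass

ROUTING = {
    "refactor": "claude",
    "codegen": "codex",
    "research": "gemini",
    "summarize": "local-llm",
}


@dataclass
class Assignment:
    model: str
    worktree: str
    branch: str


def route(task_id: str, kind: str, repo_root: str = "/work/repo") -> Assignment:
    model = ROUTING.get(kind, "local-llm")  # unknown kinds fall back to the local model
    branch = f"agent/{model}/{task_id}"
    worktree = f"{repo_root}/.worktrees/{model}-{task_id}"
    return Assignment(model=model, worktree=worktree, branch=branch)


a = route("47-auth-refactor", "refactor")
print(a.model, a.worktree)
```

Isolating each agent in its own worktree means merge conflicts surface at PR time, in one place, rather than as mid-task file clobbering.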

On-Device First

Voice transcription, file watching, and the bridge daemon all run locally. Your code never leaves your machine unless you want it to.

The Thesis

As AI agents eat more software engineering work, physical-world applications become a moat. You can't automate a plumber from a data center. You can't optimize a construction site from a laptop.

The companies that win in AR won't be the ones building better displays. They'll be the ones building better intelligence behind the display—agents that understand context, track state, and surface the right information at the right moment.

We've already built the agent infrastructure: multi-model orchestration, event-sourced activity tracking, voice pipelines, CRDT-based file versioning. SandoLens is the visual projection layer on top of that foundation.

The screen was never the product. The understanding was. SandoLens just removes the screen from the equation.

Roadmap

Phase 1
In Development

Bridge Daemon + Developer HUD

  • Bluetooth bridge to Brilliant Labs Frame
  • Agent status & build pipeline overlay
  • Voice-to-agent transcription relay
  • Git/PR notification feed
Phase 2
Q3 2026

Contextual Intelligence Layer

  • Epistemic hole detection (stuck-state recognition)
  • Camera-based context awareness
  • Adaptive overlay complexity (ambient → focused)
  • Multi-agent coordination display
Phase 3
Q4 2026

Field Operations Platform

  • Construction site pilot program
  • Visual part identification + spec lookup
  • Real-time multilingual translation overlay
  • Compliance auto-documentation
Phase 4
2027

Platform & Scale

  • Android XR integration
  • Third-party agent plugin API
  • Enterprise field operations dashboard
  • Industry-specific overlay templates

Why Us

Taylor Sando brings 10+ years spanning HCI research, full-stack product engineering, and AI agent infrastructure. Former lead engineer at SkipTheDishes (acquired by Just Eat) and CTO of a regulated fintech. Currently building the agent orchestration platform that SandoLens is built on.

10+
Years in Production Engineering
4
AI Models Orchestrated in Parallel
100%
On-Device Voice & File Processing

Let's Build This

Looking for partners and investors who see the convergence of AI agents and augmented reality as the next interface paradigm.

consulting@taylorsando.com