Personal AI Infrastructure

Open-source scaffolding for building your own AI-powered operating system

The best AI in the world should be available to everyone

The Evolution: From Mirror Platform to PAI Packages

PAI 1.0.0 represents a fundamental shift: from a monolithic "mirror my exact system" approach to a modular, package-based architecture that democratizes AI infrastructure.

The Problem

The Jenga Tower Effect

Early PAI was a monolithic system where everything depended on everything else. Want to use just one skill? Too bad—you had to clone the entire infrastructure.

Updates broke workflows. Dependencies tangled. The system became fragile, like a Jenga tower where pulling one block could collapse everything.

The Solution

Modular Packages

PAI 1.0.0 introduces self-contained packages. Each package is a complete unit with its own dependencies, documentation, and installation scripts.

Install only what you need. Update packages independently. Build your own packages. Share them with the community. The infrastructure serves you—not the other way around.
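
As a rough sketch of what "self-contained" means here, a package might be laid out like this; the directory and file names below are illustrative, not PAI's actual structure:

example-package/
  README.md          # documentation for this package only
  install.sh         # installation script for this package and nothing else
  dependencies.txt   # dependencies declared locally, not globally
  src/               # the capability itself

Because everything a package needs lives in its own directory, you can remove or update it without touching the rest of the system.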

What This Means for You

Democratized Contribution

Anyone can create and share packages without understanding the entire PAI architecture.

Platform Agnostic

Packages work across any AI platform. Your skills, your agents, your infrastructure.

Reduced Maintenance

Each package is self-contained. Update one without breaking others.

Fabric Integration

248 Fabric patterns run natively in your context. No CLI spawning required.

The Four Primitives

PAI is built on four core primitives that work together to create a powerful, flexible AI infrastructure tailored to your needs.

Skills

Core Foundation

Modular capabilities you can install and extend

  • 50+ built-in skills
  • Custom skill creation
  • MCP server integration
  • Self-contained documentation

Agents

Specialized AI personas with unique expertise

  • Parallel delegation
  • Custom personalities
  • Model selection per agent
  • Context inheritance

Hooks

Event-driven automation and workflows

  • Session lifecycle hooks
  • Skill-level hooks
  • System event triggers
  • Async execution

History

Unified Observation and Context System (UOCS)

  • Automatic capture
  • Searchable archive
  • Persistent learning
  • Privacy-first storage
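
To make the Hooks and History primitives concrete, here is a minimal sketch of a lifecycle hook written as a small shell script; the path, file name, and log format are assumptions made for illustration, not PAI's actual implementation:

#!/usr/bin/env bash
# Hypothetical session-end hook: append a one-line summary to a local history log.
HISTORY_LOG="$HOME/.claude/history/sessions.log"   # illustrative location
mkdir -p "$(dirname "$HISTORY_LOG")"
echo "$(date -u +%Y-%m-%dT%H:%M:%SZ) session ended: ${1:-no summary}" >> "$HISTORY_LOG"

The shape of the pattern is what matters: an event fires, a small script runs, and the result lands somewhere you can search later with ordinary tools.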

The 13 Founding Principles

These principles define how to build reliable, scalable, and maintainable AI infrastructure. Each one is explored in more depth in its own guide.

PRINCIPLE 01: Clear Thinking + Prompting is King

The quality of outcomes depends on the quality of thinking and prompts. Clear thinking comes before code.

PRINCIPLE 02: Scaffolding > Model

System architecture matters more than the underlying AI model. Structure outperforms raw power.

PRINCIPLE 03: As Deterministic as Possible

Favor predictable, repeatable outcomes over flexibility. Same input → Same output. Always.

PRINCIPLE 04: Code Before Prompts

Solve problems with code first. Only use prompts when code cannot handle the task.

PRINCIPLE 05: Spec / Test / Evals First

Define specifications, write tests, and create evaluations before implementation.

PRINCIPLE 06: UNIX Philosophy

Do one thing well. Compose small, focused tools into powerful systems.

PRINCIPLE 07: ENG / SRE Principles

Build for reliability, observability, and operability from day one.

PRINCIPLE 08: CLI as Interface

Command-line interfaces are universal, scriptable, and composable.

PRINCIPLE 09: Goal → Code → CLI → Prompts → Agents

Build from concrete to abstract: goals become code, code exposes CLIs, CLIs enable prompts, prompts power agents.

PRINCIPLE 10: Meta / Self Update System

The system should improve itself. Capture learnings and evolve automatically.

PRINCIPLE 11: Custom Skill Management

Skills are the unit of capability. Manage them like code packages.

PRINCIPLE 12: Custom History System

Context is everything. Automatically capture, organize, and surface relevant history.

PRINCIPLE 13: Custom Agent Personalities

Different tasks need different personas. Customize agent behavior and expertise.
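
To ground a few of these principles (Code Before Prompts, CLI as Interface, and the Goal → Code → CLI → Prompts → Agents progression), here is a deliberately tiny, hypothetical example; the script and its name are invented for illustration:

#!/usr/bin/env bash
# wordcount.sh (hypothetical): the goal "count words in a file" becomes code
# that does one thing and exposes a CLI, so a prompt or agent can simply call it.
set -euo pipefail
[ $# -eq 1 ] || { echo "usage: wordcount.sh FILE" >&2; exit 1; }
wc -w < "$1"

Same input, same output, every time; the prompt or agent layer only decides when to run the tool, not how the counting works.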

How It Works

PAI is built on a layered architecture where each component works together to create a powerful, extensible AI infrastructure.

PAI Architecture Overview

1. Skill Routing

Intelligent routing system directs requests to the appropriate skill based on patterns and keywords.

2. Agent Delegation

Parallel agent execution with context inheritance and specialized model selection per agent.

3. Hook Events

Lifecycle hooks capture session start/end, skill execution, and system events for automation.

4. History Capture

UOCS automatically documents all work with structured metadata for searchable context.
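
As a conceptual illustration of the routing step only (this is not PAI's actual router, and the skill names are made up), keyword-based skill routing can be pictured as a simple dispatch:

#!/usr/bin/env bash
# Toy keyword router: map a request to a skill name. Illustrative only.
request="$*"
case "$request" in
  *summarize*) skill="summarization" ;;
  *research*)  skill="web-research" ;;
  *)           skill="general" ;;
esac
echo "routing to skill: $skill"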

Get Started

Install PAI in minutes by following the steps below. The commands assume a Unix-like shell (macOS or Linux).

1. Clone PAI

git clone https://github.com/danielmiessler/PAI.git ~/PAI

2. Create Symlink

[ -d ~/.claude ] && mv ~/.claude ~/.claude.backup   # back up any existing ~/.claude first
ln -s ~/PAI/.claude ~/.claude

3. Run Setup Wizard

~/.claude/Tools/setup/bootstrap.sh

4. Add Your API Keys

cp ~/.claude/.env.example ~/.claude/.env
nano ~/.claude/.env

5. Start Claude Code

source ~/.zshrc  # Load PAI environment
claude

Note: The setup wizard configures your name, email, AI assistant name, and environment variables so that PAI is personalized to you.
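
As an optional sanity check after the steps above (illustrative, and based only on the files the installation creates):

ls -l ~/.claude                                  # should be a symlink pointing at ~/PAI/.claude
test -f ~/.claude/.env && echo ".env present"    # confirms your API key file exists
command -v claude >/dev/null && echo "claude CLI found"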

Join the Community

PAI is built by and for the community. Connect with other builders, share your packages, and help shape the future of personal AI infrastructure.

Ready to Build Your AI Infrastructure?

The best AI in the world should be available to everyone. Join us in building the future of personal AI systems.