AI Demos
Hands-on glimpses of AI-assisted creation. These run locally in your browser against an Ollama endpoint you host (default: http://localhost:11434). No data leaves your machine.
Wish Chat Demo
What it is: A minimal orchestrator shell. Give the AI an ambitious "wish" and watch a structured response pattern emerge. Scenarios show multi-layer requests (research + verification, agent workflow design, meta app builder).
How to use: Ensure Ollama is running locally. To target a different host, append ?ollama=http://host:port to the demo URL. Edit the model name if your local setup uses a different one.
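The endpoint override described above can be sketched as a small helper. This is a minimal illustration, not the demo's actual source; the `resolveEndpoint` name and `DEFAULT_ENDPOINT` constant are hypothetical, while the default URL and the ?ollama= parameter come from the description above.

```typescript
// Default endpoint documented above; overridable via ?ollama= in the query string.
const DEFAULT_ENDPOINT = "http://localhost:11434";

// Hypothetical helper: pick the Ollama endpoint from the page's query string,
// falling back to the local default when no override is present.
function resolveEndpoint(search: string): string {
  const override = new URLSearchParams(search).get("ollama");
  return override ?? DEFAULT_ENDPOINT;
}

// In the browser you would call: resolveEndpoint(window.location.search)
```

Using the raw query string keeps the helper framework-free: `URLSearchParams` is available in every modern browser, so no parsing library is needed.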
Obscure Research Report
Citations + verification + simulation framing.
Adaptive Workflow Orchestrator
Reflection + memory + constraints.
App Builder Platform
Meta creation / scaffold engine.
Architecture & Next Steps
This simple shell posts your message sequence to an Ollama model. From here you could add streaming, structured tool invocation, parallel model A/B comparison, or automatic evaluation loops. With scaffold generation in place, agents can spin up experiment variants cheaply—making near-frictionless A/B testing a natural extension.
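The "posts your message sequence to an Ollama model" step might look like the sketch below, using Ollama's standard /api/chat endpoint. The function names (`buildChatRequest`, `sendWish`) and the "llama3" model name are assumptions for illustration; swap in whatever model you actually run.

```typescript
// Shape of one message in the sequence the shell accumulates.
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

// Hypothetical helper: assemble the URL and JSON body for Ollama's /api/chat.
// stream: false requests a single complete response instead of chunked tokens.
function buildChatRequest(endpoint: string, model: string, messages: ChatMessage[]) {
  return {
    url: `${endpoint}/api/chat`,
    body: JSON.stringify({ model, messages, stream: false }),
  };
}

// Hypothetical helper: post the sequence and return the assistant's reply text.
async function sendWish(endpoint: string, model: string, messages: ChatMessage[]): Promise<string> {
  const { url, body } = buildChatRequest(endpoint, model, messages);
  const res = await fetch(url, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body,
  });
  const data = await res.json();
  // /api/chat (non-streaming) returns { message: { role, content }, ... }
  return data.message.content;
}

// Example wish, assuming a "llama3" model is pulled locally:
// sendWish("http://localhost:11434", "llama3",
//   [{ role: "user", content: "Build me a research-verification agent." }]);
```

Keeping request construction separate from the fetch call makes the later extensions cheap: parallel A/B comparison is two `buildChatRequest` calls with different models, and streaming is the same body with `stream: true` plus a chunked reader.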