# Quickstart

How to spin up the entire AnyaSelf stack locally.
## Prerequisites
Before running the AnyaSelf stack, ensure you have the following installed on your machine:
- Docker
- Docker Compose
- A Google Cloud Project ID (for Vertex AI and Gemini Live proxying).
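Before going further, it can help to confirm the required CLIs are actually on your `PATH`. The sketch below is a hypothetical pre-flight check (not part of the AnyaSelf repo):

```shell
# Hypothetical pre-flight check: verify each prerequisite CLI is installed.
check_tools() {
  local missing=0
  for tool in "$@"; do
    if command -v "$tool" >/dev/null 2>&1; then
      echo "ok: $tool"
    else
      echo "missing: $tool"
      missing=1
    fi
  done
  return "$missing"
}

# Report but don't abort, so the script is safe to run on any machine.
check_tools docker docker-compose || echo "install the missing tools before continuing"
```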
## Running Locally
AnyaSelf uses a `docker-compose.yml` file to spin up all 8 microservices, including the API Gateway, Orchestrator, Wardrobe, and VTO.
### Configure Environment Variables
Copy the sample environment file to configure your local secrets.
```bash
cp .env.example .env
```

Ensure you inject a valid `GCP_PROJECT_ID` and GCP credentials into your `.env` file before proceeding. The Orchestrator Vertex AI Agent will crash on boot without these (if `ORCHESTRATOR_REQUIRE_VERTEX_AGENT=true`).
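For reference, here is a hypothetical sketch of the relevant `.env` entries. `GCP_PROJECT_ID` and `ORCHESTRATOR_REQUIRE_VERTEX_AGENT` come from this guide; the credential variable name and all values are illustrative assumptions, so check `.env.example` for the exact names your services read.

```shell
# Illustrative .env contents; values are placeholders, not real credentials.
GCP_PROJECT_ID=my-gcp-project
# Standard GCP variable pointing at a service-account key file (an assumption
# here; confirm which credential variable the Orchestrator actually reads).
GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account.json
# Set to false if you want the Orchestrator to boot without a Vertex AI agent.
ORCHESTRATOR_REQUIRE_VERTEX_AGENT=true
```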
### Build and Start Containers
Run the following command from the root of the `new_project` directory to build the images and start the services in detached mode:
```bash
docker-compose up --build -d
```

### Verify Services Are Running
You can check the status of the containers by running:
```bash
docker-compose ps
```

Once running, the API Gateway will be accessible at `http://localhost:8080`.
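A quick way to confirm the gateway is listening, without assuming any particular route, is an endpoint-agnostic probe. This is a hypothetical helper, not part of the repo; any HTTP response (even a 404) proves the container is up:

```shell
# Hypothetical reachability probe: returns success if the URL answers at all.
gateway_up() {
  curl -s -o /dev/null --max-time 5 "$1"
}

if gateway_up http://localhost:8080; then
  echo "gateway reachable"
else
  echo "gateway not reachable"
fi
```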
## Service Port Mapping
When running locally, the services are mapped to the following ports for easy debugging:
| Service | Port | Description |
|---|---|---|
| `api-gateway` | 8080 | Main entry point for External Auth and Gemini WebSockets. |
| `wardrobe` | 8081 | Manages Items, Outfits, and Discover Feed endpoints. |
| `commerce` | 8002 | Tracks synced Fashion Offers and semantic search schemas. |
| `orchestrator` | 8003 | Hosts the LangChain Vertex AI Agent Missions. |
| `vto` | 8004 | Virtual Try-On inference (runs in simulation mode by default). |
| `headless-cartprep` | 8005 | Queues and manages headless checkout jobs. |
| `hyperbeam-bridge` | 8006 | Handles asynchronous Chromium sessions and Agent extension events. |
| `artifacts-audit` | 8007 | Ledger for generation plans, transcripts, and Household audits. |
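To check which of these mapped ports are actually accepting connections, you could loop over them with bash's built-in `/dev/tcp` (a sketch that assumes bash; no extra tools required):

```shell
# Hypothetical helper: probe a local TCP port via bash's /dev/tcp.
port_open() {
  (exec 3<>"/dev/tcp/127.0.0.1/$1") 2>/dev/null
}

# Ports taken from the service mapping table above.
for port in 8080 8081 8002 8003 8004 8005 8006 8007; do
  if port_open "$port"; then
    echo "port $port: open"
  else
    echo "port $port: closed"
  fi
done
```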
> [!NOTE]
> By default, `vto` runs via `VTO_INFERENCE_BACKEND=simulated` and `wardrobe` runs via `WARDROBE_STORAGE_BACKEND=stub`. This allows you to run the stack without needing heavy cloud credentials for every persistence or GPU layer.