How portfoliochat.ai actually works.
A walk-through of portfoliochat.ai: the layers, the agents, and what makes portfolio AI reliable in production. For technical evaluators.
Three layers, one stack.
portfoliochat.ai sits as three layers above your portfolio management system: data ingestion, a portfolio intelligence layer that turns raw holdings into the structured context an AI can reason over, and an agent orchestration layer where specialised agents do the work. Each layer is separable, so source systems, context and agent logic can evolve independently.
A request, end to end.
Every request follows the same path. Scoping resolves tenant and portfolios, preprocessing loads the context downstream agents will need, and an orchestrator picks the route: a direct reply for simple turns, or a workflow run where a planner dispatches specialists and a synthesis agent writes the final answer. Each agent operates inside a clear boundary the runtime enforces, so the planner stays focused on planning and specialists stick to their job.
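The request path above can be sketched in a few lines. This is an illustrative outline, not the production code: every function below (`load_context`, `is_simple`, `plan_steps`, and so on) is a hypothetical stand-in for a real layer, and the routing heuristic is a toy.

```python
from dataclasses import dataclass

@dataclass
class TurnContext:
    tenant_id: str
    portfolio_ids: list[str]
    message: str

# Hypothetical stand-ins for the real scoping, preprocessing and agent layers.
def load_context(tenant_id, portfolio_ids):
    return {"tenant": tenant_id, "portfolios": portfolio_ids}

def is_simple(message):
    return "?" not in message  # toy heuristic, not the real router

def direct_reply(message, context):
    return f"direct:{message}"

def plan_steps(message, context):
    return ["portfolio_analyst", "news_research"]

def run_specialist(step, context):
    return {"agent": step, "report": "ok"}

def synthesize(message, reports):
    return f"synthesis over {len(reports)} reports"

def handle_turn(ctx: TurnContext) -> str:
    context = load_context(ctx.tenant_id, ctx.portfolio_ids)  # preprocessing
    if is_simple(ctx.message):
        return direct_reply(ctx.message, context)             # direct route
    steps = plan_steps(ctx.message, context)                  # planner picks specialists
    reports = [run_specialist(s, context) for s in steps]     # specialists do the work
    return synthesize(ctx.message, reports)                   # synthesis writes the answer
```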
The agents at a glance.
Agents inside portfoliochat.ai play one of two roles. Orchestrator agents plan which specialised agents to call and see only the process information they need to steer, not the full answers. Specialised agents pick up the actual tasks, produce the output, and hand a compact process report back to the orchestrator.
- Planner: plans which specialised agents to call next. Sees task reports, not the full answers.
- Single-Asset Analyst: profiles a single asset across fundamentals, contribution and news.
- Portfolio & Collection Analyst: positions, P&L and composition for a portfolio or an asset collection. Can render charts.
- News Research Agent: runs news queries in parallel across the assets in scope.
- Recommendation Agent: turns an investment theme into a peer list of suitable assets.
- Exposure Agent: theme-based exposure analysis for a portfolio.
- Compliance Agent: runs restriction and trace checks when assets are added to a portfolio.
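The split between what specialists produce and what the orchestrator sees can be sketched as a simple data shape. The field names here are hypothetical; the point is only that the planner steers on the compact report, never the full output.

```python
from dataclasses import dataclass

@dataclass
class SpecialistResult:
    full_answer: str      # goes to the synthesis agent only
    process_report: str   # the compact view the planner sees

def report_for_planner(result: SpecialistResult) -> str:
    # The orchestrator plans from process information, not full answers.
    return result.process_report

result = SpecialistResult(
    full_answer="Long portfolio P&L analysis with charts ...",
    process_report="portfolio_analyst: done, 3 charts rendered",
)
```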
How an agent is built.
Every agent is built from the same pieces: a base contract that defines what a turn looks like, versioned prompts pulled from an external store, a configurable model class, and a registry of tools the runtime exposes per agent. Prompts and model choices are configuration, so changes are reviewable and reversible without touching code.
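An agent definition along these lines could look like the following. The keys and values are assumptions for illustration, not the actual configuration schema; what matters is that prompt version, model class and tool list live in reviewable configuration rather than code.

```python
# Hypothetical agent definition, assuming a config-driven agent registry.
AGENT_CONFIG = {
    "agent": "single_asset_analyst",
    "prompt": {
        "store_key": "prompts/single-asset-analyst",  # pulled from an external store
        "version": 12,                                # pinned, reviewable, reversible
    },
    "model_class": "reasoning-medium",  # resolved to a concrete provider at deploy time
    "tools": [                          # the runtime exposes only these to the agent
        "fundamentals_lookup",
        "contribution_calc",
        "news_search",
    ],
}
```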
Memory works on two timescales. A long-lived conversation summary keeps continuity across turns, and a short-lived per-turn workspace lets agents share intermediate results. Prompts stay compact, and context carries forward where it should.
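The two timescales can be sketched as two tiny structures, one per lifetime. Both classes are illustrative (the real summarisation is model-driven, not a character cap):

```python
class ConversationMemory:
    """Long-lived: a rolling summary carried across turns (toy truncation)."""
    def __init__(self, max_chars: int = 500):
        self.summary = ""
        self.max_chars = max_chars

    def update(self, turn_summary: str) -> None:
        self.summary = (self.summary + " " + turn_summary).strip()[-self.max_chars:]

class TurnWorkspace(dict):
    """Short-lived: scratch space agents share within a single turn,
    discarded when the turn ends."""
```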
Audit-grade traceability, powered by Langfuse.
Every turn is logged end-to-end in Langfuse: prompts, tool calls, data sources, model outputs and human overrides. A reviewer can replay any historical run with the exact inputs and final answer the user saw. The trace is the audit record, not just a debug aid.

Langfuse trace view: full agent run with prompts, tool calls and outputs.
Continuous evals, also in Langfuse.
Each agent and end-to-end flow has measurable benchmarks in Langfuse: factuality, restriction adherence, tone, latency, citations. The pipeline replays against fixed inputs, so prompt or model changes are scored before they reach production. This is also how model upgrades stay safe.
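A replay-and-gate pipeline of this kind can be sketched as below. The fixtures, the scoring function and the threshold are all toy assumptions; real factuality and restriction-adherence scores would come from the eval suite, not a substring check.

```python
# Hypothetical eval gate: replay fixed inputs, score, block regressions.
FIXTURES = [
    {"input": "Top contributors this month?", "expected_assets": {"AAPL", "MSFT"}},
]

def score_run(answer: str, expected_assets: set) -> float:
    # Toy factuality proxy: share of expected assets the answer mentions.
    hits = sum(1 for asset in expected_assets if asset in answer)
    return hits / len(expected_assets)

def gate(candidate_answers: list[str], threshold: float = 0.8) -> bool:
    # A prompt or model change ships only if every fixture clears the bar.
    scores = [
        score_run(answer, fixture["expected_assets"])
        for answer, fixture in zip(candidate_answers, FIXTURES)
    ]
    return all(score >= threshold for score in scores)
```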

Langfuse eval dashboard: score distribution and regression detection per agent.
Model and provider orchestration.
Model choice is configuration, not a code dependency. An agent declares the class of model it needs and a resolver picks the right provider for the deployment: frontier hosted models, smaller specialised models, or local models inside a customer's own infrastructure. No vendor lock-in, no rewrite when the landscape shifts.
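A minimal sketch of such a resolver, assuming a per-deployment mapping from model class to provider (the class names and deployment keys are invented for illustration):

```python
# Hypothetical mapping: the same model class resolves differently per deployment.
DEPLOYMENT_MODELS = {
    "saas":    {"reasoning-medium": "hosted-frontier-model"},
    "on_prem": {"reasoning-medium": "local-model"},
}

def resolve_model(model_class: str, deployment: str) -> str:
    # Agents declare the class; the deployment decides the provider.
    return DEPLOYMENT_MODELS[deployment][model_class]
```

Swapping providers then means editing the mapping, not the agent code.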
Integrations, deployment and security.
portfoliochat.ai connects to portfolio management systems via API, ingests CSV and JSON, and pulls portfolio-aware market coverage from RavenPack. Internal research and CIO content can be added as documents. The integration layer normalises everything into the same data model.
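Normalising heterogeneous inputs into one data model might look like this sketch. The target schema here (`asset`, `quantity`) is a hypothetical simplification of the real position model:

```python
import csv
import io
import json

def normalise_positions(raw: str, fmt: str) -> list[dict]:
    """Parse CSV or JSON position data into one shared schema (illustrative)."""
    if fmt == "csv":
        rows = list(csv.DictReader(io.StringIO(raw)))
    elif fmt == "json":
        rows = json.loads(raw)
    else:
        raise ValueError(f"unsupported format: {fmt}")
    # Downstream layers see one shape regardless of the source.
    return [{"asset": r["asset"], "quantity": float(r["quantity"])} for r in rows]
```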
Three deployment tiers, the same product underneath. Pick what fits your data residency and risk requirements.
SaaS
Hosted by us. Fastest to start, multi-tenant, with Swiss data residency available.
Virtual Private Cloud
Deploys into your own AWS, Azure or GCP account. You own the data plane, we ship portfoliochat.ai.
On-Premises
Runs inside your data centre or sovereign cloud. Sensitive flows never leave your perimeter.
Multi-tenant scoping is enforced from the first millisecond of every request. Sensitive flows can route through local models, and outputs are never sent to clients without a human reviewer's approval.
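A tenant-scoping guard of this kind reduces to a small check at the front of the request path. This is a sketch under assumed names; the real enforcement sits in the scoping layer described earlier:

```python
def scope_portfolios(requested_ids: list[str], tenant_portfolios: set[str]) -> list[str]:
    """Reject any portfolio outside the requesting tenant's scope (illustrative)."""
    denied = set(requested_ids) - tenant_portfolios
    if denied:
        # Fail closed: a single out-of-scope id rejects the whole request.
        raise PermissionError(f"portfolios outside tenant scope: {sorted(denied)}")
    return requested_ids
```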

