
Requirements · Confidential
Following the Sovereign Agentic Layer overview
You have the general document — The Sovereign Agentic Layer — Platform Build & Architecture (PDF). This form captures the specifics of your environment, priorities, and constraints so we can align LangGraph orchestration, MCP integrations, sovereign inference, and technical disaster recovery (TDR) with what you actually need — no workshop required.
Organisation name (optional)
Part A — Aligned with the shared overview
The PDF describes the Sovereign Agentic Layer: closing the sovereignty gap versus foreign-routed AI, the Agentic Conductor (LangGraph central intelligence, MCP integration layer, sovereign infrastructure, no-code builder), the technical stack (e.g. vLLM, local GPU), and technical disaster recovery (regional failover, StateGraph rollback, model redundancy). Below is how we frame value — correct any line that does not apply to your organisation.
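For orientation, a minimal sketch of the orchestration pattern the overview attributes to the Agentic Conductor: a LangGraph StateGraph with typed state and explicit nodes. The state fields, node names, and routing are illustrative placeholders, not a proposed design.

```python
# Minimal LangGraph sketch of the Agentic Conductor pattern.
# All names here are illustrative placeholders.
from typing import TypedDict
from langgraph.graph import StateGraph, START, END

class ConductorState(TypedDict):
    request: str   # the incoming business request
    result: str    # what the agents produced

def plan(state: ConductorState) -> dict:
    # A real node would call a sovereign-hosted model to decide next steps.
    return {"result": f"planned: {state['request']}"}

def act(state: ConductorState) -> dict:
    # A real node would invoke MCP connectors against your systems of record.
    return {"result": state["result"] + " -> executed"}

builder = StateGraph(ConductorState)
builder.add_node("plan", plan)
builder.add_node("act", act)
builder.add_edge(START, "plan")
builder.add_edge("plan", "act")
builder.add_edge("act", END)

graph = builder.compile()
print(graph.invoke({"request": "summarise open tickets", "result": ""}))
```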
Three pillars from the overview
Your context — confirm or correct
PRIMARY AUDIENCE · SPONSOR + DECISION OWNER
After reviewing "The Sovereign Agentic Layer — Platform Build & Architecture", what best describes the next step you want from us?
This replaces any prior "partnership model" framing — we are scoping a sovereign platform build for your organisation.
What are the primary outcomes you need in the next 6–12 months (e.g. sovereignty / PDPA alignment, productivity, cost of inference, risk reduction)?
Aligns with the document’s mission: sovereign, simpler, metered inference you control — not foreign-routed SaaS.
What budget or investment range can you work within for discovery through first production milestone (currency and range)?
A bracket is enough. We scope LangGraph orchestration, MCP integrations, sovereign compute, and TDR to match.
Who must approve a programme of this nature (single signatory, committee, board) and are there mandatory procurement steps?
What does success look like at 12–24 months — measurable criteria, not slogans?
e.g. workloads moved off foreign Copilot-like tools, latency/cost per task, audit readiness, user adoption.
What competing or adjacent initiatives exist (e.g. Microsoft Copilot Studio, other AI vendors, in-house experiments) and should this platform replace, coexist, or integrate with them?
The PDF positions this platform against global stacks that route data offshore.
Longer term, do you aim to run operations in-house, rely on a managed service, or use a hybrid?
PRIMARY AUDIENCE · BUSINESS OWNER + OPERATIONS
Describe your organisation / division context: sector, size, geography, and who ultimately benefits from the Agentic Conductor.
List the top 3–5 workflows or problems you want autonomous agents to handle first (plain English).
Maps to the no-code builder narrative: business users describe agents without engineering tickets for every change.
Who will own the programme internally (name or role) and who signs off on priorities?
Who are the day-to-day business users vs technical reviewers (security, infrastructure, legal)?
Rough scale signals: users, documents, transactions, or inference calls per month if known?
Preferred rollout shape: single department pilot, phased business units, or enterprise-wide once proven?
Which AI or automation tools has your organisation already tried, approved, or rejected — and why?
PRIMARY AUDIENCE · TECH / ARCHITECTURE
What identity and access standards must the platform use (SAML, OIDC, Azure AD/Entra, Okta, Keycloak, in-house LDAP)?
What deployment topology fits the Agentic Conductor in your environment — consistent with the sovereign / local-first hosting described in the overview?
If different sensitivity tiers need different topologies (e.g. regulated vs internal labs), describe the split.
Where must inference and data reside — e.g. Malaysia-only GPU/colocation (as in the overview), another jurisdiction, or multi-region?
Mention existing DC providers, MSAs, or GPU rental arrangements (e.g. H100/L40S class) if any.
What are the priority systems agents must reach via MCP-aligned connectors (ERP, CRM, ticketing, document stores, email, custom APIs)?
The overview specifies an MCP-standard integration layer — list systems of record and sensitivity.
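For reference, a minimal sketch of what an MCP-aligned connector can look like, using the official Python SDK's FastMCP helper. The connector name and the CRM lookup are hypothetical; a real connector would call your system of record and enforce the sensitivity controls you describe above.

```python
# Hypothetical MCP connector exposing one tool over the MCP standard.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("crm-connector")

@mcp.tool()
def lookup_customer(customer_id: str) -> str:
    """Return a customer record summary from the (hypothetical) CRM."""
    # A real implementation would query the CRM API with proper auth
    # and redact fields according to the record's sensitivity tier.
    return f"customer {customer_id}: status=active"

if __name__ == "__main__":
    mcp.run()  # serves over stdio by default
```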
Inference preferences: self-hosted vLLM on sovereign GPUs, approved commercial APIs only, air-gapped caches, or mixed — and any banned providers?
Matches the document’s stack reference (vLLM + local GPU rental).
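For reference, a minimal sketch of offline inference with vLLM on a local GPU, consistent with that stack reference. The model ID is a placeholder; substitute whatever your policy approves.

```python
# Self-hosted inference sketch: vLLM loads the model onto local GPUs,
# so no prompt or completion leaves your infrastructure.
from vllm import LLM, SamplingParams

llm = LLM(model="Qwen/Qwen2.5-7B-Instruct")  # placeholder model ID
params = SamplingParams(temperature=0.2, max_tokens=128)

outputs = llm.generate(["Summarise PDPA cross-border transfer rules."], params)
print(outputs[0].outputs[0].text)
```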
Availability targets and recovery expectations (uptime %, RTO/RPO) for agent-driven workflows?
The PDF describes TDR: dual Malaysia DC mirror, StateGraph rollback, model redundancy — your required SLOs calibrate that design.
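For context, StateGraph rollback rests on LangGraph checkpointing: each step of a workflow is persisted, so a run can be inspected, replayed, or resumed from any prior checkpoint. A minimal sketch follows, using the in-memory checkpointer for brevity; a production build would use a durable store, and the thread ID is illustrative.

```python
# Checkpointing sketch: every invocation writes replayable state snapshots.
from typing import TypedDict
from langgraph.graph import StateGraph, START, END
from langgraph.checkpoint.memory import MemorySaver

class State(TypedDict):
    step: int

def advance(state: State) -> dict:
    return {"step": state["step"] + 1}

builder = StateGraph(State)
builder.add_node("advance", advance)
builder.add_edge(START, "advance")
builder.add_edge("advance", END)

graph = builder.compile(checkpointer=MemorySaver())
config = {"configurable": {"thread_id": "workflow-1"}}
graph.invoke({"step": 0}, config)

# Walk the saved history; any snapshot can serve as a rollback point.
for snapshot in graph.get_state_history(config):
    print(snapshot.values)
```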
Certifications you hold or require from suppliers (ISO 27001, SOC 2, sector-specific)?
Internal engineering capacity to co-build connectors, sandboxes, or templates vs relying on us for extensions?
Execution sandboxing (e.g. E2B-style isolation) may be part of scope.
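For context, "E2B-style isolation" means agent-generated code runs in a disposable, resource-limited environment rather than on a shared host. A generic standard-library sketch of the idea (this is not the E2B API and not a production design):

```python
# Conceptual sandbox: run untrusted code in a separate, isolated process
# with a hard timeout and a stripped environment.
import subprocess, sys, tempfile, os

def run_untrusted(code: str, timeout_s: int = 5) -> str:
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    try:
        result = subprocess.run(
            [sys.executable, "-I", path],  # -I: Python isolated mode
            capture_output=True, text=True, timeout=timeout_s, env={},
        )
        return result.stdout or result.stderr
    finally:
        os.unlink(path)

print(run_untrusted("print(2 + 2)"))
```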
PRIMARY AUDIENCE · LEGAL / DPO + COMPLIANCE
Your organisation’s PDPA posture: registration/certification status, DPIAs in flight, or gaps we should plan for?
Sector-specific frameworks applicable to your organisation and data (financial, healthcare, telecom, energy, listed-entity rules, etc.)?
Data residency and cross-border rules — must workloads stay in-country, and are there exceptions for aggregated or anonymised analytics?
Non-negotiable contractual terms for a build of this class (liability, indemnity, portability, audit, governing law)?
Breach and incident notification expectations you must meet internally and require from Tesse Technology?
PDPA-aligned timelines are a baseline; regulated sectors often tighten this.
AI governance today: hallucination, prompt injection, bias, logging, human oversight, retention — written policy or not?
Use cases, data classes, or jurisdictions that must be blocked on ethical or regulatory grounds?
PRIMARY AUDIENCE · OPERATIONS + SPONSOR
How should inference and usage be metered and reported so leadership can see cost, value, and risk (dashboards, exports, finance system)?
Aligned with “metered on inference we own” from the platform overview.
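For illustration, a minimal metering sketch against an OpenAI-compatible endpoint (vLLM exposes one). The endpoint URL, model name, and logging sink are placeholders; a real build would ship these records to your dashboards or finance export.

```python
# Per-request metering sketch: capture tokens and latency for every call.
import json, time
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="unused")

def metered_completion(prompt: str, user: str) -> str:
    start = time.time()
    resp = client.chat.completions.create(
        model="local-model",  # placeholder served-model name
        messages=[{"role": "user", "content": prompt}],
    )
    record = {
        "user": user,
        "latency_s": round(time.time() - start, 3),
        "prompt_tokens": resp.usage.prompt_tokens,
        "completion_tokens": resp.usage.completion_tokens,
    }
    print(json.dumps(record))  # stand-in for your dashboard / export sink
    return resp.choices[0].message.content
```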
Operational model after go-live: who runs L1/L2 support, hours, languages, ticketing tool?
Typical procurement or security review duration for a platform engagement of this size?
Steering or governance group — who sits on it and how often should we align?
Preferred delivery cadence: time-boxed phases, fixed milestones, or agile increments — and any hard launch dates?
Who will be involved from your side during discovery and build, and roughly how many hours per week can each commit?
Executive sponsor, security/architecture, business owner, ops.
Organisational sensitivities we should know (reorg, leadership change, conflicting vendors, internal politics)?
Treated as confidential context only.
What do the next 30 days look like for you — async written follow-up only, scheduled video calls, target decision date?
No workshop is assumed; tell us how you prefer to engage.
Your submission is sent by email only. We treat it as commercial-in-confidence.
Save a copy (your records)
Drafts auto-save in this browser. You can also copy to the clipboard or download JSON to your computer before or after submitting.