Service 2 · Architecture Assessment
A vendor-neutral architecture, designed against your data.
Two to four weeks of discovery and synthesis, $40K–$80K fixed. The clean-sheet design engagement when the prospect isn't Splunk-bound or wants vendor-neutral architecture from the start. Five-step audit across the existing stack. Twelve-scenario decision framework matched to regulatory profile. MOAR component selection scored against the specific workload. Three-year TCO with cost-optimization roadmap. The deliverable survives the procurement review.
When this engagement fits
Three patterns where the Architecture Assessment is the right shape.
The Migration Assessment fits prospects with a Splunk-anchored stack and a specific question about migration economics. The Architecture Assessment fits three other patterns I see regularly:
- Greenfield deployments. Programs standing up the security data platform for the first time, or programs whose existing stack is too heterogeneous to migrate as a single unit. The assessment focuses on designing the target architecture rather than on analyzing a migration path.
- Post-acquisition consolidation. Two security data platforms inherited from an M&A event, neither of which the combined organization wants to keep as-is. The assessment maps the consolidated architecture and the phased path from two stacks to one.
- Multi-region or regulated environments where Splunk isn't the binding constraint. Programs whose architecture decision is shaped more by data sovereignty (EU residency, federal enclaves, multi-cloud), or by compliance frameworks (HIPAA, SOX, PCI-DSS, DoD/IC), than by per-GB licensing pressure. The assessment leads with the regulatory profile rather than with the cost model.
The Architecture Assessment runs longer than the Migration Assessment because the input space is wider. The Migration Assessment scopes against a specific platform exit. The Architecture Assessment scopes against the full input space — sources, ingest, storage, catalog, query, governance, plus the organizational and regulatory constraints layered on top.
The five-step audit
Sources to ingest to storage to catalog to query.
The current-state audit walks the data path end-to-end. Sources. The full inventory of log sources feeding the platform, classified by volume tier, schema stability, and OCSF mapping completeness. One pattern recurs: every program has more sources than the team remembers, and the long tail is where the data-quality issues hide.
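As a concrete shape for that inventory, a minimal sketch follows; the tier labels, field names, and the 0.5 cutoff are illustrative assumptions, not the audit's actual schema.

```python
from dataclasses import dataclass

@dataclass
class LogSource:
    name: str
    volume_tier: str        # illustrative tiers: "low", "mid", "high"
    schema_stable: bool     # does the schema hold release to release?
    ocsf_mapped_pct: float  # 0.0-1.0, share of fields mapped to OCSF

def long_tail(sources: list[LogSource]) -> list[LogSource]:
    """The low-volume, unstable, under-mapped sources where
    data-quality issues tend to hide; the 0.5 cutoff is arbitrary."""
    return [s for s in sources
            if s.volume_tier == "low"
            and (not s.schema_stable or s.ocsf_mapped_pct < 0.5)]
```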
Ingest. The ETL and routing layer — Cribl, Vector, Tenzir, Logstash, native shippers, whatever combination has accumulated. Throughput at peak versus capacity. Schema normalization shape and OCSF conformance percentage. Operational complexity tier match against current team capacity. The framework's source-count tier (S3) gets resolved here.
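The two ratios this step produces reduce to simple arithmetic; the numbers below are hypothetical, included only to show the calculation.

```python
def ingest_headroom(peak_gb_day: float, capacity_gb_day: float) -> float:
    """Remaining headroom at peak, as a fraction of rated capacity."""
    return 1.0 - peak_gb_day / capacity_gb_day

def ocsf_conformance(mapped_fields: int, total_fields: int) -> float:
    """Share of fields across the routing layer that normalize to OCSF."""
    return mapped_fields / total_fields

# Hypothetical numbers for illustration only.
print(f"headroom: {ingest_headroom(1_800, 2_400):.0%}")    # 25%
print(f"conformance: {ocsf_conformance(412, 530):.0%}")    # 78%
```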
Storage. The retention shape — hot, warm, cold tiering as currently deployed. Cost per GB across tiers. Retention horizon by source. Compression ratio achieved versus theoretical. The framework's retention question (S4) gets resolved here, with the regulatory profile layered on top.
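A sketch of the tiering arithmetic, assuming a hypothetical hot/warm/cold split and per-GB rates; none of these numbers come from a real engagement.

```python
# Hypothetical $/GB-month rates and tier shares, for illustration only.
TIERS = {
    "hot":  (0.100, 0.10),
    "warm": (0.030, 0.30),
    "cold": (0.004, 0.60),
}

def monthly_storage_cost(raw_gb: float, compression_ratio: float) -> float:
    """One month of retained data after compression, spread across
    the tier shares above."""
    stored_gb = raw_gb / compression_ratio
    return sum(rate * share * stored_gb for rate, share in TIERS.values())

# 100 TB raw: achieved 6:1 compression vs a theoretical 10:1.
print(f"${monthly_storage_cost(100_000, 6):,.0f}/mo "
      f"vs ${monthly_storage_cost(100_000, 10):,.0f}/mo")
```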
Catalog. The metadata and governance layer. RLS depth, RBAC scope, multi-engine query support, license cost. The catalog choice is where the actual lock-in surface lives — covered in the Databricks/Iceberg analysis. The audit produces an explicit catalog-portability assessment.
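One way the portability assessment could be scored; the dimensions mirror the audit list above, but the weights are invented for illustration.

```python
# Invented weights; multi-engine support dominates because it is
# the lock-in surface named above.
WEIGHTS = {
    "rls_depth": 0.25,
    "rbac_scope": 0.20,
    "multi_engine_query": 0.35,
    "license_cost": 0.20,   # lower cost scores higher on the 0-1 scale
}

def portability_score(scores: dict[str, float]) -> float:
    """Weighted 0-1 portability score for a candidate catalog."""
    return sum(WEIGHTS[dim] * scores[dim] for dim in WEIGHTS)

print(portability_score({"rls_depth": 0.6, "rbac_scope": 0.8,
                         "multi_engine_query": 0.9, "license_cost": 0.7}))
```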
Query. The engine inventory and the workload running against it. Latency and concurrency profiles per use case (real-time SOC dashboards, ad-hoc threat hunting, compliance retention, streaming detection). Operational complexity match to team capacity. The framework's F4 question gets resolved here, with the multi-engine architecture pattern almost always emerging as the recommendation.
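A sketch of how the workload spread pushes toward the multi-engine recommendation; the use-case profiles and cutoffs below are illustrative, not the framework's actual F4 thresholds.

```python
# Illustrative workload profiles; latency targets and concurrency are examples.
WORKLOADS = {
    "soc_dashboards":    {"latency_s": 2,   "concurrent": 50,  "pattern": "interactive"},
    "threat_hunting":    {"latency_s": 30,  "concurrent": 5,   "pattern": "ad-hoc"},
    "compliance_search": {"latency_s": 600, "concurrent": 2,   "pattern": "batch"},
    "detections":        {"latency_s": 1,   "concurrent": 100, "pattern": "streaming"},
}

def engine_class(p: dict) -> str:
    """Crude routing by profile; cutoffs are placeholders."""
    if p["pattern"] == "streaming":
        return "stream processor"
    if p["latency_s"] <= 5 and p["concurrent"] >= 25:
        return "low-latency OLAP engine"
    return "batch/lakehouse engine"

# The spread resolves to three engine classes: the multi-engine pattern.
print({name: engine_class(p) for name, p in WORKLOADS.items()})
```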
Twelve-scenario decision framework
The regulatory profile shapes the architecture, not the other way around.
The decision framework matches the architecture against twelve regulatory and operational scenarios that recur in this practice's engagements. HIPAA-bound healthcare. SOX-bound financial services. PCI-DSS for cardholder data. Multi-region SOC operating across data-sovereignty boundaries. GDPR data residency. DoD or IC enclave deployment. NIS2 critical infrastructure. DORA financial operational resilience. The scenarios cover the most common combinations; less common ones get folded into the framework as new engagements surface them.
Each scenario maps to a specific architecture shape — which catalog the regulatory profile mandates (Unity Catalog for fine-grained governance in shared platforms; Polaris where an isolated, dedicated deployment is viable), which storage tiering shape the retention requirement enforces, which authorization model the compliance framework requires. The scenarios aren't a checkbox tour — they're the working detail that determines which architectural patterns are actually viable for your environment versus which look plausible on paper but fail the audit review.
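Three of the twelve entries, sketched as data. The catalog picks echo the examples above; the retention shapes and authorization labels are illustrative assumptions, not the framework's actual rows.

```python
# Three illustrative rows of the scenario-to-architecture mapping.
SCENARIOS = {
    "hipaa_healthcare": {
        "catalog": "Unity Catalog",   # fine-grained governance, shared platform
        "storage": "hot 90d / warm 1y / cold 6y",      # placeholder horizons
        "authz":   "attribute-based, PHI column masking",
    },
    "dod_ic_enclave": {
        "catalog": "Polaris",         # isolated, dedicated deployment viable
        "storage": "on-prem object store, air-gapped cold tier",
        "authz":   "mandatory access control, clearance-tiered",
    },
    "gdpr_residency": {
        "catalog": "Unity Catalog",
        "storage": "per-region buckets, EU-pinned replication",
        "authz":   "role-based with purpose-limitation tags",
    },
}
```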
Deliverables
The artifacts that survive procurement.
- Current-state assessment report. The five-step audit findings, with named owners and gap inventory. Lands in week two; functions as the agreed baseline for the rest of the engagement.
- Requirements mapping. Performance, security, integrity, and cost requirements documented explicitly, with source attribution (regulatory framework, internal policy, business commitment, compliance audit response).
- MOAR component selection. Catalog (Hive Metastore, Polaris, Nessie, Unity Catalog, Glue) scored against the requirements. Query engines (one to three picks) scored against the workload. Ingestion architecture (Kafka, Vector, Cribl, Tenzir, native shippers) scored against the source-count tier. Storage tier (S3 plus Wasabi or MinIO, with an on-premises option where required).
- Three-year TCO model with cost-optimization roadmap. Sensitivity bands attached for each major variable; a sketch of the band arithmetic follows this list. The roadmap identifies the operational improvements that compound — typically compression-ratio gains, hot-tier rightsizing, and detection-content consolidation — worth an additional 15–30% TCO improvement beyond the migration baseline.
- Phased migration roadmap. Typically 6–18 months, with named decision gates. The roadmap is sequenced to deliver early wins (the first phase produces measurable cost reduction) before committing to the higher-risk phases.
- Architecture Decision Record (ADR). Versioned, rationale-backed, signed by the engagement principals. Becomes the artifact the implementation team works from after the engagement closes.
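The sensitivity-band mechanics referenced in the TCO deliverable, reduced to a minimal sketch; every dollar figure and band width below is a placeholder.

```python
# Placeholder annual line items and sensitivity bands; all numbers illustrative.
BASE = {"ingest": 420_000, "storage": 180_000, "compute": 260_000, "ops": 300_000}
SENSITIVITY = {"ingest": 0.20, "storage": 0.35, "compute": 0.25, "ops": 0.10}

def three_year_tco(annual: dict[str, float]) -> tuple[float, float, float]:
    """(low, base, high) over three years, widening each line item
    by its sensitivity band."""
    base = 3 * sum(annual.values())
    low  = 3 * sum(v * (1 - SENSITIVITY[k]) for k, v in annual.items())
    high = 3 * sum(v * (1 + SENSITIVITY[k]) for k, v in annual.items())
    return low, base, high

low, base, high = three_year_tco(BASE)
print(f"3-yr TCO low/base/high: ${low:,.0f} / ${base:,.0f} / ${high:,.0f}")
```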
Engagements include the six-month matrix subscription and two quarterly tool-eval reports per the standard bundle. Pricing is $40K–$80K fixed; the high end of the range applies to multi-region, regulated, or DoD/IC-adjacent environments where the regulatory-profile work is the load-bearing complexity.
Architecture as a defensible decision artifact, not a rendered diagram.
Discovery call confirms which engagement shape fits. Migration Assessment for Splunk-anchored exits; Architecture Assessment for the wider input space.