Security Data Works

Service 1 · Migration Assessment

Splunk-to-MOAR, with the math.

The flagship engagement. In two to three weeks, you'll know what your Splunk workload would cost — and how fast it would run — on the open lakehouse stack. Anchored to the published benchmark, projected against your data, with the migration path that fits your specific constraints. The engagement either greenlights the migration with a defensible TCO, or names the no-go conditions before money is committed.

The problem this engagement solves

The "should we migrate off Splunk?" question, answered with measured numbers.

Every CISO running a Splunk-heavy SOC has spent the last three years doing the migration math. The math is hard to do internally because the inputs are scattered — license cost, ingest volume, query patterns, retention horizon, detection content portability, organizational risk, and the analyst-experience trade-offs that don't show up on a TCO spreadsheet. The decision either gets deferred (continuing pain, growing licensing exposure) or made on incomplete evidence (a migration that surfaces unanticipated costs three months in). Both failure modes are common. Both are avoidable with structured assessment.

The Migration Assessment is the work that produces a defensible answer. Not "we should migrate" or "we shouldn't" as opinion — measured TCO numbers projected against your specific workload, with the sensitivity bands attached, with the workload-classification breakdown that shows which queries port cleanly versus which need redesign, with the migration roadmap sequenced against your constraints. The deliverable survives the procurement review and the CFO conversation; the underlying analysis is reproducible by your team.

What you get

Five deliverables. Each one stands alone as defensible work product.

Quantified TCO comparison.

Current Splunk spend (license, infrastructure, ops, professional services) versus modeled MOAR stack cost (Iceberg foundation, chosen query engine, storage tier, ingestion layer, ops overhead) over a three-year horizon. Sensitivity bands attached for each major variable — ingest growth rate, retention horizon extension, additional source onboarding. The output is a spreadsheet your CFO can audit, not a headline number to defend.
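The sensitivity-band mechanics can be sketched in a few lines. Everything below is a placeholder: the dollar figures, the growth-rate range, and the flat compounding-cost model are assumptions for illustration, not benchmark outputs or real engagement numbers.

```python
# Illustrative three-year TCO comparison with sensitivity bands.
# All inputs are hypothetical; a real model carries more variables
# (retention extension, new-source onboarding, ops headcount).

def three_year_cost(annual_base: float, ingest_growth: float) -> float:
    """Total spend over three years, cost scaling with ingest growth."""
    return sum(annual_base * (1 + ingest_growth) ** year for year in range(3))

def sensitivity_band(annual_base: float, growth_low: float, growth_high: float):
    """Low/high total-cost band across the assumed growth-rate range."""
    return (three_year_cost(annual_base, growth_low),
            three_year_cost(annual_base, growth_high))

# Hypothetical inputs: $2.0M/yr Splunk spend vs $0.9M/yr modeled MOAR stack,
# with ingest growth assumed somewhere between 10% and 30% per year.
splunk_low, splunk_high = sensitivity_band(2_000_000, 0.10, 0.30)
moar_low, moar_high = sensitivity_band(900_000, 0.10, 0.30)

print(f"Splunk 3-yr band: ${splunk_low:,.0f} to ${splunk_high:,.0f}")
print(f"MOAR   3-yr band: ${moar_low:,.0f} to ${moar_high:,.0f}")
```

The real deliverable is the audited spreadsheet version of this: every variable exposed, every band traceable to a stated assumption.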

Workload classification.

Every production query in your workload sample classified by portability tier. Queries that port cleanly to standard SQL on the recommended engine. Queries that require redesign — typically the SPL-specific transactional and statistical patterns. Queries that should stay on Splunk indefinitely under federated query access. The classification is what turns the "migrate everything" assumption into a "migrate 80%, federate 15%, retain 5% on Splunk" plan that's actually deliverable.
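A minimal sketch of the triage logic, assuming a command-based heuristic: SPL commands like `transaction` and `streamstats` are real and genuinely hard to port, but the tier assignments and command lists here are illustrative, not the actual classification tooling.

```python
# Hypothetical first-pass portability triage for SPL queries.
# Command sets below are examples; the real classification also weighs
# lookups, macros, data-model dependencies, and query frequency.

REDESIGN = {"transaction", "streamstats", "eventstats"}   # stateful SPL patterns
RETAIN = {"datamodel", "tstats"}  # tied to Splunk accelerated data models

def classify_query(spl: str) -> str:
    """Assign one SPL query to a portability tier by the commands it uses."""
    commands = {seg.strip().split()[0] for seg in spl.split("|") if seg.strip()}
    if commands & RETAIN:
        return "retain on Splunk (federate)"
    if commands & REDESIGN:
        return "redesign for SQL engine"
    return "ports cleanly to SQL"

sample = [
    "index=proxy status=500 | stats count by src_ip",
    "index=auth | transaction user maxspan=5m | where duration > 60",
    "| datamodel Network_Traffic search | stats count",
]
for q in sample:
    print(f"{classify_query(q):30} <- {q}")
```

Run over the full workload sample, the tallies from this kind of triage are what turn "migrate everything" into a percentage breakdown with names attached.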

Engine recommendation matrix.

The candidate engines scored against your specific workload shape: ClickHouse for raw speed on dashboard-driving subsets, Dremio for the semantic layer plus Reflections-driven acceleration on shared datasets, StarRocks for Iceberg-native columnar analytics, Trino for federation breadth across heterogeneous sources. The recommendation is usually a multi-engine architecture — most production deployments end up with two engines split by use case. The matrix is the working detail behind that recommendation. The component criteria page walks through the per-engine scoring criteria in full.
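The matrix mechanics reduce to a weighted score per engine. The engines are the ones named above; the criteria, weights, and scores below are invented placeholders — the actual values come out of your workload shape, which is the point of the engagement.

```python
# Illustrative weighted scoring matrix. Criteria names, weights, and
# per-engine scores (1 = weak, 5 = strong) are hypothetical.

WEIGHTS = {"dashboard_latency": 0.3, "federation": 0.2,
           "iceberg_nativeness": 0.3, "ops_overhead": 0.2}

SCORES = {
    "ClickHouse": {"dashboard_latency": 5, "federation": 2,
                   "iceberg_nativeness": 3, "ops_overhead": 3},
    "Dremio":     {"dashboard_latency": 4, "federation": 4,
                   "iceberg_nativeness": 4, "ops_overhead": 3},
    "StarRocks":  {"dashboard_latency": 4, "federation": 3,
                   "iceberg_nativeness": 5, "ops_overhead": 3},
    "Trino":      {"dashboard_latency": 3, "federation": 5,
                   "iceberg_nativeness": 4, "ops_overhead": 4},
}

def weighted_score(engine: str) -> float:
    """Sum of criterion scores weighted by workload-specific importance."""
    return sum(WEIGHTS[c] * SCORES[engine][c] for c in WEIGHTS)

for engine in sorted(SCORES, key=weighted_score, reverse=True):
    print(f"{engine:12} {weighted_score(engine):.2f}")
```

Note that a near-tie at the top of a ranking like this is exactly why the recommendation is usually two engines split by use case rather than a single winner.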

Migration risk register.

The named risks the migration carries, with mitigation owners and decision gates. Detection content portability (which rules port cleanly, which require redesign, which should stay on Splunk during transition). Retention and compliance (regulatory frameworks affecting what can move when). UI workflow gaps (analyst-facing surfaces that need replacement or federated access). Team capacity (whether the organizational-constraints phase of the framework flags an engineering-capacity gap). The register lives in the engagement deliverable and gets handed off to the implementation team.

Phased migration roadmap and executive deck.

A 6–18 month phased Gantt with named decision gates — the points where the migration commits or pauses based on measured progress. Each phase carries scope, dependencies, exit criteria, and a kill-switch condition that triggers a pause-and-reassess if the phase isn't producing the expected results. The executive deck is the eight-to-twelve-slide CISO-and-CFO consumable version of the full analysis. Both artifacts are designed to survive the procurement-and-board review process intact.
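One way to picture a roadmap phase and its gate, as a data structure. The field names, the sample phase, and the 95% parity threshold are all invented for illustration; the real phases, exit criteria, and kill-switch conditions come out of the assessment.

```python
# Hypothetical shape of one roadmap phase with its decision gate.
from dataclasses import dataclass, field

@dataclass
class Phase:
    name: str
    scope: str
    dependencies: list          # phases that must exit before this one starts
    exit_criteria: list         # measured conditions that close the phase
    kill_switch: str            # condition that triggers pause-and-reassess

phase_1 = Phase(
    name="Cold-tier retention offload",
    scope="Move >90-day retention to Iceberg on object storage",
    dependencies=[],
    exit_criteria=["Historical queries return parity results",
                   "Storage cost per TB below agreed threshold"],
    kill_switch="Query parity below 95% after four weeks of dual-running",
)

def gate_decision(measured_parity_pct: float, threshold_pct: float = 95.0) -> str:
    """One decision gate: commit only if the phase hit its measured target."""
    return "commit" if measured_parity_pct >= threshold_pct else "pause and reassess"
```

The gate logic is deliberately boring: the value is that the threshold is written down before the phase starts, so the pause decision is mechanical rather than political.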

What it costs you

Three hours of executive time. Anonymized sample data. No production egress required.

The prospect-side investment is intentionally small relative to the deliverable. Roughly three hours of executive interview time spread across the engagement window. Two to three working sessions with the SOC and data-engineering leads to define workload scope, validate query coverage, and review interim findings. Sample query workload (anonymized) plus current Splunk license and storage spend documentation. Read-only access to schema documentation. No production data egress required; the benchmark and the TCO modeling run on the sample.

Pricing is $30K–$50K fixed. The high end of the range applies to environments above 2 PB ingest or 50+ detection use cases — the additional scope reflects the workload-classification work that scales with environment complexity. Engagements include the six-month matrix subscription and two quarterly tool-eval reports per the standard bundle.

What this engagement is not

Three boundaries documented up front.

The Migration Assessment is not the implementation. It produces the decision and the roadmap; the build is the next engagement (Architecture Assessment for the deeper design, Implementation Support for the embedded help during the build). The boundary is intentional — the assessment's value is in being a structured decision artifact, not in being the start of an open-ended consulting engagement.

The Migration Assessment is not vendor-neutral on principle. If the analysis says ClickHouse on the hot tier and Iceberg on the cold tier wins on your workload, the recommendation says so. If a future workload shape reverses the answer, the recommendation reverses. Vendor neutrality is a consequence of the empirical-skepticism method, not the goal — and the research page tracks where the prior recommendations have moved as evidence shifted.

The Migration Assessment ships on a defined timeline with defined deliverables; change requests run through a documented scope-change process. The fixed price reflects this discipline — predictable on both sides, scoped against the SOW rather than against billable hours.

Quantified answers. Defensible roadmap. Or a documented no-go.

Start with a 30-minute discovery call. If the smaller POV engagement is the right first step, that conversion path is built in.