Testing

Testing is runtime-safe experimentation: operators capture real stream traffic, replay it through editable PDL, and compare input against output in one validation workspace—without mutating live pipelines. The flow is Capture → persist replay dataset → configure transformation → execute on Core → inspect diff; inputs may be live captures, uploaded vendor samples, or synthetic JSON for edge cases.

This is an enterprise replay / validation console: production troubleshooting, incident reproduction, parser diagnostics, detection tuning, and regression verification all land here. It is not a standalone JSON scratchpad, a passive capture viewer, or an isolated query tester; it is the streaming event laboratory where runtime diagnostics meet transformation verification.

Live capture is typically started from Monitoring stream inspection surfaces; Testing continues the story with replay analysis and PDL iteration. The two products pair tightly, but Testing is its own operational surface—not a sub-module buried inside Monitoring.

Runtime scope

| Topic | Operational detail |
| --- | --- |
| Core-faithful execution | Run Test targets a selected Core so parsing, functions, and limits mirror production runtime behavior. |
| Captures = reality | Replay datasets preserve actual stream traffic from that engine (shape, field quirks, and volume snapshots), not guessed fixtures. |
| Per-Core variance | Connectors, schemas, and load differ by Core; rotate the selector when comparing production-faithful results. |
| Validation before ship | Outcomes inform registry edits; deployment still flows through Configurations and Management. |

Capturing live stream events

Live capture is available wherever operators already inspect streams—especially Monitoring (Monitor, stream rows, and related actions). A LIVE session surfaces flowing events, guardrails (time, rate, caps), and stop/save controls so teams freeze transient failures before they age out of buffers.

Saved slices become replay datasets that anchor incident reproduction, rollout validation, and parser debugging—the payload is runtime reality, not a scrubbed sample. That matters when chasing normalization drift, connector upgrades, or flaky vendor logs.

Figure: Live event capture (capture modal with LIVE badge, event table, and capture limits).

Name the capture, pick a format (for example, JSON), and persist it; this ties the dataset to its stream id and Core context for later replay workflows (sketched below).

Figure: Save capture for replay (captures table, save-capture dialog, and event testing sidebar).
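The persisted shape depends on the chosen format, but a downloaded JSON capture might look roughly like this (a hypothetical sketch; every name and field here is invented, not a documented schema):

```json
{
  "capture": "fw-syslog-incident-2481",
  "stream_id": "stream-firewall-01",
  "core": "core-prod-us-east",
  "format": "JSON",
  "events": [
    {
      "id": "evt-0001",
      "timestamp": "2024-05-14T09:32:11Z",
      "payload": "user=alice action=login status=failed"
    }
  ]
}
```

The point is the bound context: stream id and Core travel with the events, so a later replay targets the same runtime that produced them.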

Working with captures

The Captures inventory behaves like replay dataset management: searchable rows with stream, Core, event counts, and format, plus row actions for lifecycle control.

| Action | Operational use |
| --- | --- |
| Capture Events | Start a fresh live capture when Monitoring surfaces a hot stream. |
| Upload | Import vendor samples, ticket attachments, or regression fixtures alongside production captures. |
| View | Audit payloads before sharing or replaying. |
| Test | Load the dataset into the Input → PDL → Output workspace for transformation verification. |
| Download | Share evidence with engineering or attach to change records. |
| Delete | Retire stale datasets to reduce noise. |

Mature teams often keep golden replay datasets for parser regressions or detection tuning—check them in via Upload or refresh from Capture Events after major connector changes.

Figure: Replay dataset management (captures table with Capture Events, Upload, Test, and Download actions).

Event testing workspace

The workspace enforces Input → Transformation → Output:

  • Input events — Rows from the active replay dataset (or events pushed from manual validation) with ids, timestamps, and payload previews; filter to isolate failing lines during dropped-event tracing.
  • Test configuration — Select Core, inject + Task / + Query snippets, then refine PDL inline for runtime experimentation.
  • Output events — Post-Run Test, compare transformed JSON against input for side-by-side validation, malformed-event debugging, and failed enrichment checks.
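As a purely illustrative side-by-side (the payloads and field names are invented), a raw input row and its transformed output might diff like this after a parse-and-project snippet runs:

```json
{
  "input":  { "id": "evt-0042", "raw": "user=alice action=login status=failed" },
  "output": { "id": "evt-0042", "user": "alice", "action": "login", "status": "failed" }
}
```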

This is runtime-faithful processing in a sandbox: identical expression semantics to tasks/queries, without publishing experimental bodies to live pipeline paths.

Figure: Runtime testing workspace (input events, PDL editor, Run Test, and output panel).

PDL testing and validation

  • Transformation verification — Prove parse_*, filters, eval, projections, and output against captured traffic before deployment (Quick Reference).
  • Expected vs actual — Inspect which rows drop, which fields appear, and whether detections fire—core to detection tuning and SIEM-style rule QA (see the sketch after this list).
  • Parser iteration — Tighten regex, KV, or JSON parsing using replay-driven development loops on the same replay dataset.
  • Production-safe experimentation — Edit freely; nothing commits until you copy winning PDL back into Tasks or Queries and deploy.
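For expected-vs-actual checks, some teams track outcomes per row in a simple fixture; a hypothetical sketch (a team convention, not a product export format) might read:

```json
[
  { "id": "evt-1107", "expected": { "detection_fired": true },  "actual": { "detection_fired": true } },
  { "id": "evt-1108", "expected": { "detection_fired": false }, "actual": { "detection_fired": true } }
]
```

Here the second row flags an over-firing rule: tighten the boolean logic, rerun Run Test, and re-diff.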

Run Test shows how many records will execute—use it as a gate for pipeline validation checkpoints during release windows.

Custom event injection

The Event Testing tab supports synthetic event injection: paste JSON, hit Format, Validate, and read Validation Results before execution. That catches structural mistakes early—critical for edge-case validation and malformed-event debugging.

Send to Test Panel bridges handcrafted payloads into the shared Input / PDL / Output flow so synthetic rows interleave with live captures—ideal when reproducing a single bad line from a ticket without capturing thousands of siblings.
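For instance, to reproduce one bad line from a ticket, a handcrafted payload might look like this (hypothetical fields; the truncated status value is the deliberate defect):

```json
{
  "id": "ticket-4821-repro",
  "timestamp": "2024-05-14T09:32:11Z",
  "message": "user=alice action=login status="
}
```

Validate confirms the JSON wrapper itself is well formed; the malformed inner message then exercises the parser's failure handling once sent to the test panel.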

Figure: Manual event validation (Event Testing JSON editor with Validate, Validation Results, and Send to Test Panel).

Replay and troubleshooting workflows

| Scenario | Replay workflow |
| --- | --- |
| Malformed vendor logs | Capture or upload a sample → iterate parsers → confirm normalized fields in output. |
| Parser regressions | Keep a baseline replay dataset → rerun after PDL changes → diff outputs. |
| Detection tuning | Replay suspicious alerts → adjust boolean logic → validate fired vs suppressed rows. |
| Normalization drift | Compare a historical capture against a fresh capture through the same task snippet. |
| Connector or codec changes | Recapture post-change traffic → replay against legacy PDL to spot breakage. |
| Failed enrichment / lookups | Inject synthetic rows lacking lookup keys → verify fallback eval paths (sketched below). |
| Dropped event tracing | Filter inputs to dropped ids → step transforms until the row disappears. |
| Transient incidents | Freeze streams via Monitoring capture → replay repeatedly under revised logic. |
| Post-fix verification | Replay the saved incident bundle → ensure outputs match golden expectations before closing tickets. |
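For the failed-enrichment row above, a hedged illustration (field names invented): an injected row missing its lookup key should land on a fallback value rather than dropping:

```json
{
  "input":  { "id": "evt-7731", "src_ip": "10.0.0.9" },
  "output": { "id": "evt-7731", "src_ip": "10.0.0.9", "geo": "unknown" }
}
```

If the row disappears from output instead, the fallback eval path is the first place to look.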

Runtime validation workflows

  • Pipeline validation — Block merges until representative captures pass Run Test.
  • Incident response — Share captures + outputs as forensic artifacts; avoid replaying directly in prod streams.
  • Regression verification — Refresh captures after schema migrations or stream policy updates.
  • Operational QA — Pair with Monitoring metrics (drops, EPS) to confirm fixes under realistic load signatures.
Warning

Testing does not modify live pipelines or connector bindings: runtime experimentation stays isolated here. Registry and deployment updates belong in Configurations and Management. Treat green Run Test results as validation evidence, then promote changes through your standard release path.