# Outputs & Reporting
Each simulation run produces a suite of JSON, parquet, CSV, and PNG artefacts summarising the generated measurements, filter results, and diagnostics. The files live under the directory passed via --run-dir (e.g., outputs/lunar_static/), with one subdirectory per pipeline stage.
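As a quick orientation aid, the sketch below walks the stage subdirectories of a run directory and lists their artefacts; the run-dir path is hypothetical and the stage names are simply the ones described in this section.

```python
from pathlib import Path

# Hypothetical run directory (the value passed via --run-dir).
run_dir = Path("outputs/lunar_static")

# One subdirectory per pipeline stage, as described above.
for stage in ("ingest", "simulate", "estimate", "analysis"):
    stage_dir = run_dir / stage
    if stage_dir.is_dir():
        # Show the artefacts each stage produced.
        print(stage, sorted(p.name for p in stage_dir.iterdir()))
```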
## Simulation stage outputs (simulate/)
| File | Contents |
|---|---|
| measurements.json | Structured measurement catalogue with schema_version, metadata, and a nested measurements[] array (epoch info, satellite/receiver states, geometry, link budget, grouped observables, SISE terms). Use measurement_file_manipulation.load_measurement_catalogue to flatten it for pandas workflows (see the loading sketch after this table). |
| measurement_summary.csv | Measurement quality stats per observable and satellite (counts, value mean/std/min/max, noise std summaries, C/N₀, elevation span) computed after outlier rejection. |
| dop_timeseries.csv | GDOP/PDOP/HDOP/VDOP and satellite counts per epoch, filtered with the same outlier mask. |
| measurement_outliers.csv | Per-satellite count of samples removed by the robust MAD filter (plus an ALL total). |
| low_cn0_measurements.csv | Samples discarded because their C/N₀ fell below measurement.receiver_rf.cn0_threshold_dbhz, reported per satellite and in total. |
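A minimal loading sketch using the helper referenced in the table; the import path and file location are assumptions about the package layout, so adjust them to your checkout.

```python
# The helper name comes from the table above; the module path is an assumption.
from measurement_file_manipulation import load_measurement_catalogue

# Flatten the nested measurements[] array into a pandas DataFrame.
df = load_measurement_catalogue("outputs/lunar_static/simulate/measurements.json")

print(df.columns.tolist())  # discover the flattened column names
print(df.head())            # first few measurement rows
```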
### Figures
src/reporting.py renders several diagnostic plots (all saved under simulate/plots/) plus per-satellite CSVs tracking MAD-filtered and low-C/N₀ removals:

- dop_vs_time.png – DOP metrics vs epoch.
- signal_to_noise_vs_time.png – Received C/N₀ per satellite across the observation window.
- cn0_vs_elevation.png – Scatter plot coloured by satellite ID (outliers removed).
- satellite_visibility_cdf.png – Step chart showing how the cumulative percentage of epochs drops as the visible-satellite requirement (≥ n) increases.
- measurement_counts.png – Grouped bar chart showing measurement counts per type and satellite.
- satellite_polar.png – Sky plot of satellite azimuth/elevation coloured by satellite ID.
- two_way_link_schedule.png – Timeline showing which satellite carries the two-way link at each epoch (coloured by C/N₀ when available), with shaded windows marking periods when two-way contacts are authorised.
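If you need a customised variant of one of these figures, the CSVs above are enough to rebuild it. The sketch below redraws a DOP-vs-time plot from dop_timeseries.csv; the "epoch" and per-metric column names are assumptions about that file's layout, so inspect it first.

```python
import matplotlib.pyplot as plt
import pandas as pd

# Produced by the simulate stage; column names below are assumptions.
dop = pd.read_csv("outputs/lunar_static/simulate/dop_timeseries.csv")

fig, ax = plt.subplots()
for metric in ("gdop", "pdop", "hdop", "vdop"):
    if metric in dop.columns:
        ax.plot(dop["epoch"], dop[metric], label=metric.upper())
ax.set_xlabel("epoch")
ax.set_ylabel("DOP")
ax.legend()
fig.savefig("dop_vs_time_custom.png", dpi=150)
```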
In addition, ingest/plots/ contains 3D and XY orbit views plus an altitude/velocity timeline, and estimate/plots/ contains residual and error plots described below.
The ingest stage also writes ingest/altitude_velocity_stats.csv summarising the mean, standard deviation, minimum, and maximum altitude/velocity for every satellite across the ingest window.
Pass --disable-plots to any poetry run acons … command to skip PNG generation across all stages when you only need CSV/parquet artefacts; poetry run acons regression … applies this behaviour by default, so use --enable-plots there only when you want diagnostics for debugging a failing run.
## Estimation stage outputs (estimate/)
By default, all estimation artefacts are written to <run-dir>/estimate/. Supplying
--output-subdir <name> stores them in <run-dir>/estimate/<name>/ (or under the sibling estimate
directory next to an overridden --measurements-path), which keeps multiple filter runs organised.
| File | Contents |
|---|---|
| ekf_states.parquet | Full EKF state history (position, velocity, clock bias/drift). |
| ekf_states_downsampled.parquet / .csv | Down-sampled state history for quick inspection. |
| ekf_residuals.parquet / .csv | Full measurement residual log (range and range-rate) plus an observations_used column counting how many updates were applied at each epoch (no outlier filtering applied). |
| ekf_covariances.parquet | Full covariance matrices for each processed epoch. |
| ekf_covariances_downsampled.csv | Down-sampled covariance diagonals. |
| ekf_covariance_diag.csv | Final covariance diagonal (single-row snapshot). |
| state_errors.csv | Truth-relative position/velocity/clock errors per epoch (all samples retained). |
| state_error_stats.csv | RMS/p95/p99.7/mean/std for per-axis position errors, pos_error_radial_m (vertical component), pos_error_horizontal_m, pos_error_3d_m, velocity components/3D speed error, and clock bias/drift (computed from the full error set). |
| state_error_by_satellite_count.csv | Horizontal, vertical, and 3D error RMS/p95/p99.7 grouped by the visible-satellite count. |
| residual_rms_vs_time.csv | RMS of residuals per epoch, aligned with GDOP values. |
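For ad-hoc checks, the parquet files load directly into pandas. The sketch below recomputes a per-epoch residual RMS from ekf_residuals.parquet, analogous to residual_rms_vs_time.csv; the "epoch" and "residual" column names are assumptions, so check the parquet schema before relying on them.

```python
import numpy as np
import pandas as pd

# Residual log written by the estimate stage (column names are assumptions).
residuals = pd.read_parquet("outputs/lunar_static/estimate/ekf_residuals.parquet")

# Per-epoch residual RMS, analogous to residual_rms_vs_time.csv.
rms_per_epoch = (
    residuals.groupby("epoch")["residual"]
    .apply(lambda r: float(np.sqrt(np.mean(np.square(r)))))
)
print(rms_per_epoch.head())
```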
Key plots (found in estimate/plots/) include:

- position_error_cdf.png – three stacked CDFs for horizontal, vertical, and 3D errors with annotated 0.95/0.90/0.66/0.50/0.30 probability markers and log-scaled axes.
- The position-error timeline – 3D position error vs. hours with 1σ/3σ envelopes and the visible-satellite count.
- The vertical-plus-clock timeline – vertical error stacked with receiver clock bias.
- observations_per_epoch.png – a step plot tallying how many measurements the EKF assimilated at each epoch, overlaid with the visible-satellite count.
- measurements_per_epoch.png – counts drawn directly from measurements.json per observable type, so you can compare scheduler/availability constraints with what the filter ultimately used.
- computed_height_vs_plot.png – DEM-sampled height along the estimated trajectory, sampled even when the DEM constraint is disabled.
- position_error_cdf_sat_XX.png – a family of plots (XX = integer satellite count) repeating the CDF analysis for epochs sharing the same number of visible satellites.

Estimation plots operate on the full datasets without the MAD-based outlier filter; observations_per_epoch.png reflects the actual EKF updates, while measurements_per_epoch.png reflects the raw measurement catalogue. When the DEM constraint is enabled, estimation plot titles append “(with DEM)”.
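The same CDF statistics can be reproduced from state_errors.csv. The sketch below builds an empirical CDF of the horizontal error and reads off the probability markers annotated on position_error_cdf.png; it assumes state_errors.csv carries the same column names listed for state_error_stats.csv.

```python
import numpy as np
import pandas as pd

# Truth-relative errors per epoch (column name is an assumption).
errors = pd.read_csv("outputs/lunar_static/estimate/state_errors.csv")
horizontal = np.sort(errors["pos_error_horizontal_m"].to_numpy())

# Empirical CDF, mirroring position_error_cdf.png and its probability markers.
cdf = np.arange(1, horizontal.size + 1) / horizontal.size
for p in (0.30, 0.50, 0.66, 0.90, 0.95):
    print(f"P{int(p * 100)}: {np.interp(p, cdf, horizontal):.2f} m")
```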
## Log file
Each stage writes a structured log (<stage>.log) under its run directory. These logs capture inputs, applied outlier masks, warnings (e.g., low C/N₀), and paths to generated artefacts. Adjust verbosity with the CLI --log-level flag; selecting TRACE prints representative range and range-rate observations with every error contribution (geometry, user clock, SISE, calibration bias, and DLL/FLL noise) to aid debugging.
## Processing assumption snapshot
Every CLI invocation (re)generates <run-dir>/processing_assumptions.md (e.g., outputs/lunar_static/processing_assumptions.md) straight from the scenario YAML. The summary mirrors the config in prose/bullets so non-technical stakeholders can skim the time span, frames, SPICE kernels, user geometry, RF settings, and EKF tunings without reading raw YAML. After each stage finishes, the CLI prints the absolute path to this summary so downstream users immediately know where to look.
Legacy include_sise_placeholder switches have been removed from the reporting configuration, so scenarios no longer need to carry that flag and the summary omits the corresponding entry.
## Extending the reports
Reporting utilities live in src/reporting.py. To add custom artefacts:
- Extend generate_simulation_outputs() or generate_estimation_outputs() with new exports/plots.
- Leverage the measurement/residual/state DataFrames returned by the pipeline stages.
- Consider whether the new diagnostic should apply the existing MAD-based outlier filter before rendering.
Refer to the source code for examples of writing CSV/JSON/parquet files and creating Matplotlib figures.
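As a starting point, here is a minimal sketch of a custom artefact. The function name, the cn0_dbhz column, and the DataFrame layout are assumptions about what the pipeline passes around; adapt them to the actual signatures in src/reporting.py before wiring the helper into generate_simulation_outputs().

```python
from pathlib import Path

import matplotlib.pyplot as plt
import pandas as pd


def write_cn0_histogram(measurements: pd.DataFrame, out_dir: Path) -> Path:
    """Hypothetical custom artefact: histogram of received C/N0.

    Intended to be called from generate_simulation_outputs(); the 'cn0_dbhz'
    column name is an assumption about the measurement DataFrame.
    """
    out_dir.mkdir(parents=True, exist_ok=True)
    fig, ax = plt.subplots()
    ax.hist(measurements["cn0_dbhz"].dropna(), bins=40)
    ax.set_xlabel("C/N0 [dB-Hz]")
    ax.set_ylabel("sample count")
    out_path = out_dir / "cn0_histogram.png"
    fig.savefig(out_path, dpi=150)
    plt.close(fig)
    return out_path
```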
## Cross-run analysis (analysis/)
Use the analysis CLI stage to compare the DOP timelines and EKF error statistics produced by
multiple runs. Define the inputs in a dedicated YAML file (see
configs/analysis/sample_lunar_comparison.yaml). The CLI accepts either full/relative paths such as
configs/analysis/sample_lunar_comparison.yaml or just the bare filename; in the latter case it
searches configs/analysis/ under the repository root automatically:
```yaml
title: "Sample lunar EKF comparison"
output_directory: outputs/lunar_static/analysis
state_error_metrics:
  - pos_error_horizontal_m
  - pos_error_radial_m
  - pos_error_3d_m
  - vel_error_3d_mps
  - clock_bias_m
solutions:
  - name: baseline
    run_dir: ../outputs/lunar_static/baseline
  - name: high_gain
    run_dir: ../outputs/lunar_static/high_gain
  - name: low_power
    run_dir: ../outputs/lunar_static/low_power
```
Each solution entry points to an existing run directory (the stage-specific folders such as
simulate/ and estimate/ must already contain dop_timeseries.csv /
dop_time_series.csv and state_error_stats.csv). Paths are resolved relative to the project
root, so outputs/... maps to <repo>/outputs/.... Include as many solutions as you like—the tool
stacks all of them in the combined CSVs and bar plots. The optional state_error_metrics list
restricts the rows copied from state_error_stats.csv (preserving the order you specify) so that
plots and delta tables focus only on the metrics you care about; omit it to include every metric.
The output_directory field both dictates where artefacts are written and, when provided, doubles as
the default run directory for the CLI. If you omit it, pass --run-dir when invoking the command so
the tool knows where to write logs and reports; in that case, outputs land in <run-dir>/analysis/.
Available state_error_metrics entries map directly to the columns in state_error_stats.csv and
include pos_error_horizontal_m, pos_error_radial_m, pos_error_3d_m, pos_error_x_m,
pos_error_y_m, pos_error_z_m, vel_error_x_mps, vel_error_y_mps, vel_error_z_mps,
vel_error_3d_mps, clock_bias_m, and clock_drift_mps.
Add an optional comparisons block when you need explicit pairwise delta tables:
```yaml
comparisons:
  - name: high_gain_vs_baseline
    lhs: high_gain
    rhs: baseline
```
Run the analysis via:
```bash
poetry run acons analysis \
    --config configs/analysis/sample_lunar_comparison.yaml
```
Override the location (for example, when the YAML does not set output_directory) by appending
--run-dir <path>. Artefacts always populate <output_directory> when the field exists, otherwise
they fall back to <run-dir>/analysis/:
- The CLI clears the chosen analysis directory before each run, so copy out any plots you need prior to launching a new comparison.
- dop_time_series_comparison.csv – stacked DOP rows with a solution column.
- state_error_stats_comparison.csv – stacked EKF error statistics with a solution column.
- dop_deltas_<pair>.csv – merged timeline with left/right GDOP/PDOP/... columns plus delta_* columns (emitted only when comparisons are defined).
- state_error_stats_deltas_<pair>.csv – merged stats table with per-metric deltas (RMS, P95, etc.) for each requested comparison.
- plots/dop_mean_comparison.png – bar chart of per-solution mean GDOP/PDOP/TDOP/... values.
- plots/state_error_stats/absolute/<stat>_comparison.png – grouped bar charts for each statistic (rms, p95, …) with numeric labels on every bar.
- plots/state_error_stats/relative/<stat>_comparison.png – the same charts showing the percent increase/decrease relative to the first solution listed in the YAML (baseline delta = 0 %). Relative plots omit rows where the baseline metric is zero/undefined to avoid divide-by-zero artefacts.
These CSVs can be filtered or plotted in downstream notebooks to visualise how changes in a scenario or estimator tuning propagate to DOP availability and position accuracy, while the generated bar plots provide an at-a-glance summary when comparing more than two runs.
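A notebook-style sketch of such filtering is shown below; the path follows the sample YAML above, and the "metric", "solution", and "rms" column names are assumptions about the stacked layout, so verify them against your own comparison CSV.

```python
import pandas as pd

# Stacked comparison table written by the analysis stage (path and columns assumed).
stats = pd.read_csv("outputs/lunar_static/analysis/state_error_stats_comparison.csv")

# Example: 3D position-error RMS for every solution in the comparison.
pos_3d = stats[stats["metric"] == "pos_error_3d_m"]
print(pos_3d.set_index("solution")["rms"])
```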