Calculation Reference

This document explains the main calculations, units, assumptions, and boundaries used by the Vol Dashboard. It is intended for reviewers who need to verify whether numbers shown in the UI are mathematically and analytically defensible.

Unit Conventions

| Concept | Unit / Convention |
| --- | --- |
| Implied volatility | Percent unless explicitly converted to decimal inside formulas |
| Realized volatility | Annualized percent |
| DVol | Percent-style volatility index |
| VRP | Implied volatility minus realized volatility, in volatility points |
| Trade price | Per contract in price_currency |
| Trade premium | Total USD notional for the whole trade |
| Contract multiplier | Source-provided multiplier, e.g. 100 for listed equity options |
| Trade quantity | Filled contract count unless quantity_unit states otherwise |
| Funding / basis | APR-style or exchange-provided percent values depending on endpoint |

Alpha Trade Economics

For Alpha synced rows, the dashboard stores:

premium_usd = source premium_usd

If source premium_usd is missing and price_currency = USD, the backend calculates:

premium_usd = price * quantity * contract_multiplier

Rules:

  • price is per contract, not total dollars.
  • quantity must be positive.
  • price must be non-negative.
  • contract_multiplier must be positive.
  • Non-USD source prices require a source-provided premium_usd; the dashboard does not guess FX.
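The fallback and its validation rules can be sketched as follows; the function name and signature are illustrative, not the actual backend API:

```python
from typing import Optional

def premium_usd_fallback(price: float, quantity: float, contract_multiplier: float,
                         price_currency: str,
                         source_premium_usd: Optional[float]) -> Optional[float]:
    """Sketch of the premium fallback rule; names are illustrative."""
    if source_premium_usd is not None:
        return source_premium_usd                  # always prefer the source value
    if price_currency != "USD":
        return None                                # the dashboard does not guess FX
    if quantity <= 0 or price < 0 or contract_multiplier <= 0:
        return None                                # fails the validation rules above
    return price * quantity * contract_multiplier  # price is per contract
```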

Accuracy control:

  • Upserts are keyed by (source, source_trade_id).
  • Full-scope sync diagnostics compare source row count, cached row count, total quantity, total premium USD, and unmatched IDs.

Alpha Reconciliation Metrics

Full-scope sync diagnostics use the complete configured Alpha scope.

quantity_delta = source_total_quantity - local_total_quantity
premium_delta_usd = source_total_premium_usd - local_total_premium_usd

Interpretation:

  • A quantity delta of 0.0 and a premium delta of 0.0 mean that, in aggregate economic terms, the selected Alpha source scope is fully loaded into the dashboard cache.
  • Empty unmatched source IDs mean no source rows are missing from the dashboard cache for the selected scope.
  • Empty unmatched local IDs mean no cached synced rows are absent from the current source scope.
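A minimal sketch of the full-scope comparison, assuming both inputs are dicts keyed by (source, source_trade_id); the field names are illustrative, not the real schema:

```python
def reconcile_full_scope(source_rows, local_rows):
    """Full-scope reconciliation sketch; row shape is illustrative."""
    src_ids, loc_ids = set(source_rows), set(local_rows)
    total = lambda rows, field: sum(r[field] for r in rows.values())
    return {
        "quantity_delta": total(source_rows, "quantity") - total(local_rows, "quantity"),
        "premium_delta_usd": total(source_rows, "premium_usd")
                             - total(local_rows, "premium_usd"),
        "unmatched_source_ids": sorted(src_ids - loc_ids),  # missing from the cache
        "unmatched_local_ids": sorted(loc_ids - src_ids),   # absent from source scope
    }
```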

Incremental sync diagnostics compare only the newly fetched source delta against matching cached rows. They should not be used as the full sync-completeness diagnostic for the selected feed scope.

Trade Performance Analytics Contract

Trade analytics are governed by the Trade Performance Analytics Metric Contract. That contract defines execution-quality, opportunity-alignment, outcome-attribution, and realized-P&L source boundaries for the live Performance screen and the remaining data-sourcing work. For the current rule-based opportunity labels, point thresholds, and confidence penalties, use Opportunity Alignment Rules. For business-facing data and assumption gaps, use Trade Performance Business Discussion Guide.

Core rule:

Every displayed trade-performance metric must have a formula or deterministic definition,
a unit, required inputs, source metadata, and a data-quality state.

Do not display missing trade economics as zero. If a required input is absent, stale, proxy-mapped, estimated, or unsupported, the API and UI must say that explicitly.

Regime-Aware Copilot Calculation Boundary

The Regime-Aware Trade Copilot Strategy extends the dashboard toward pre-trade and restructuring decision support. Any new copilot metric must follow the same calculation discipline as the current Performance contract:

Every copilot output must have a deterministic definition, inputs, units,
source metadata, model assumptions, and a trust/confidence state.

Important boundary:

  • A regime warning is not P&L.
  • A conditional probability gap is not an execution-quality benchmark.
  • A proxy-mapped surface read is not a sourced traded-contract mark.
  • A model-estimated mark is not realized P&L.
  • A mixed thesis should reduce confidence rather than forcing a trade direction.

For accident-avoidance outputs, the calculation record should identify the surface location being evaluated: tenor, expiry/DTE, strike or price band, moneyness, delta bucket, smile segment, and whether the evidence comes from vanilla surface, conditional overlay, spot-vol behavior, term structure, portfolio exposure, or source-quality gates.

Preparatory calculation contracts for the next phase live in:

Alpha Opportunity Context Enrichment

Alpha trades currently use BTC dashboard context as an opportunity proxy for MSTR/COIN trades. This context describes the BTC volatility and liquidity backdrop at the trade timestamp. It is not the traded instrument's execution liquidity and must not be used as MSTR/COIN fill quality.

As-of rule:

context source timestamp <= trade_date

The dashboard does not use future rows to enrich trade context. If the latest source row is outside the configured freshness tolerance, the field remains unavailable.
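The as-of selection can be sketched as follows; the row shape and the default tolerance are illustrative, not the configured freshness values:

```python
from datetime import datetime, timedelta

def as_of_context(rows, trade_date, max_staleness=timedelta(hours=24)):
    """As-of enrichment sketch: rows is an ascending list of (timestamp, value)."""
    eligible = [(ts, v) for ts, v in rows if ts <= trade_date]  # never look forward
    if not eligible:
        return None
    ts, value = eligible[-1]
    if trade_date - ts > max_staleness:
        return None                                             # stale -> unavailable
    return value
```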

Term-structure state is sourced from persisted BTC vol_metrics_snapshots:

front_spread = iv_atm_7d - iv_atm_30d

Classification:

| State | Rule |
| --- | --- |
| front_rich | front_spread >= 2.0 vol points |
| front_cheap | front_spread <= -2.0 vol points |
| flat | Otherwise, when both tenors are present and positive |
| unavailable | Missing, non-positive, future-dated, or stale tenor inputs |
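A sketch of the classification; the future-dated and staleness checks handled by the as-of rule are omitted here for brevity:

```python
def term_structure_state(iv_atm_7d, iv_atm_30d):
    """Classify front spread in vol points; staleness checks omitted (sketch)."""
    if iv_atm_7d is None or iv_atm_30d is None or iv_atm_7d <= 0 or iv_atm_30d <= 0:
        return "unavailable", None      # missing or non-positive tenor inputs
    front_spread = iv_atm_7d - iv_atm_30d
    if front_spread >= 2.0:
        return "front_rich", front_spread
    if front_spread <= -2.0:
        return "front_cheap", front_spread
    return "flat", front_spread
```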

Open interest is sourced as BTC proxy context from persisted oi_snapshots when available, falling back to vol_metrics_snapshots.total_oi only when that source row is fresh. This is BTC opportunity/liquidity context for signal interpretation, not the MSTR/COIN option book's executable open interest.

Execution-quality liquidity is separate. It is stored on trade_execution_benchmarks and must come from the actual Alpha-traded MSTR/COIN option contract near execution time. Fresh bid/ask quotes support mid, spread, spread capture, and quote-age diagnostics. Contract 24h volume and open interest are persisted only when Amberdata returns explicit traded-contract fields; missing provider fields remain null/unavailable and are not replaced with BTC proxy OI or volume.

DVol

DVol is fetched from Amberdata's volatility analytics endpoint and displayed as the market's implied 30-day volatility index. The dashboard treats this as a source-provided volatility metric rather than recalculating it internally.

Current usage:

  • Overview metrics ribbon.
  • DVol chart.
  • VRP and alert context when available.

Implied Volatility Surface

IV surfaces are sourced from Amberdata delta/moneyness surface endpoints or persisted snapshot rows. Surface points include timestamp, DTE, delta, IV, forward, source underlying/index price, and sometimes strike.

The probability engine prefers persisted iv_surface_snapshots for historical consistency.

Important assumptions:

  • IV values are normalized to percent when needed.
  • Deltas may arrive as whole-number deltas, e.g. 25, and are converted to decimal, e.g. 0.25, for probability calculations.
  • The engine first chooses one surface timestamp, then chooses the nearest available DTE bucket inside that timestamp. It must not mix hourly surfaces from the same snapshot day.
  • When actual strikes are missing, the engine derives model-implied strikes from the delta/IV point and synchronized source spot/reference price.

Realized Volatility

For trade P&L and IV tracking, realized volatility is computed from spot history in gex_hourly.

For hourly spot observations:

log_return_t = ln(spot_t / spot_{t-1})
realized_vol = std(log_returns) * sqrt(365 * 24) * 100

Boundaries:

  • Requires at least two valid spot observations.
  • Uses available gex_hourly spot history, so quality depends on snapshot coverage.
  • For sparse histories, realized vol may be blank.
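A minimal sketch of the hourly computation; note that a sample standard deviation needs at least two returns, so this sketch requires three spot observations:

```python
import math
import statistics

def realized_vol_hourly(spots):
    """Annualized realized vol in percent from hourly spot closes (sketch)."""
    if len(spots) < 3:
        return None                      # need at least two log returns
    log_returns = [math.log(b / a) for a, b in zip(spots, spots[1:])]
    # annualize hourly returns: 365 days * 24 hours, then scale to percent
    return statistics.stdev(log_returns) * math.sqrt(365 * 24) * 100
```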

VRP

VRP is the variance/volatility risk premium shown as the gap between implied volatility and realized volatility. In the UI this is generally represented in volatility points.

Conceptually:

VRP = implied_volatility - realized_volatility

The dashboard uses Amberdata metrics where available and snapshot-backed values where relevant. Do not interpret VRP as a dollar P&L value.

GEX Imbalance

GEX hourly aggregates are computed from Amberdata dealer gamma inventory snapshots.

For each timestamp bucket:

net_gex = sum(dealerNetInventory)
abs_gex = sum(abs(dealerNetInventory))
gex_imbalance = net_gex / abs_gex

Boundary handling:

  • If abs_gex = 0, imbalance is not meaningful.
  • Spot is carried from valid source rows in the bucket.
  • Current-day cache behavior differs from completed-day cache behavior because current-day data is still changing.
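The per-bucket aggregation and its zero-denominator boundary can be sketched as:

```python
def gex_imbalance(dealer_net_inventory):
    """Aggregate dealer gamma inventory for one timestamp bucket (sketch)."""
    net_gex = sum(dealer_net_inventory)
    abs_gex = sum(abs(x) for x in dealer_net_inventory)
    if abs_gex == 0:
        return net_gex, abs_gex, None    # imbalance is not meaningful
    return net_gex, abs_gex, net_gex / abs_gex
```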

Live GEX Magnet And Repeller Levels

The Live GEX Magnet & Repeller Levels card on the Model Diagnostics page (/analytics) is intentionally live rather than backed by the historical cache. It uses the current Amberdata gamma-exposures-snapshots response and groups rows by strike.

For the latest snapshotTimestamp:

net_gamma_by_strike = sum(dealerNetInventory)
abs_gamma_by_strike = sum(abs(dealerNetInventory))
distance_from_spot = (strike - current_spot) / current_spot

Interpretation:

  • Positive net_gamma_by_strike rows are displayed as magnets.
  • Negative net_gamma_by_strike rows are displayed as repellers.
  • Rows are ranked by abs_gamma_by_strike.
  • current_spot is the median of usable indexPrice values in the latest snapshot.
  • The card displays source status, as-of time, age, live row count, and strike count.

Boundary handling:

  • If the live source is older than the freshness threshold, the card is labelled stale.
  • If no usable strike/gamma rows are returned, the card is labelled unavailable.
  • Stale or unavailable live levels must not be treated as current actionable strike levels.
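The grouping and classification above can be sketched as follows; the field names mirror the source response, while the helper name, the top_n cutoff, and the handling of exactly-zero net gamma are illustrative assumptions:

```python
import statistics

def magnet_repeller_levels(rows, top_n=5):
    """Group latest-snapshot rows by strike and rank by absolute gamma (sketch)."""
    by_strike = {}
    for r in rows:
        g = by_strike.setdefault(r["strike"], {"net": 0.0, "abs": 0.0})
        g["net"] += r["dealerNetInventory"]
        g["abs"] += abs(r["dealerNetInventory"])
    spots = [r["indexPrice"] for r in rows if r.get("indexPrice")]
    if not spots or not by_strike:
        return None                                    # unavailable
    spot = statistics.median(spots)                    # median of usable index prices
    levels = []
    for strike, g in by_strike.items():
        role = "magnet" if g["net"] > 0 else "repeller" if g["net"] < 0 else "flat"
        levels.append({
            "strike": strike,
            "role": role,
            "net_gamma": g["net"],
            "abs_gamma": g["abs"],
            "distance_from_spot": (strike - spot) / spot,
        })
    levels.sort(key=lambda lv: lv["abs_gamma"], reverse=True)
    return levels[:top_n]
```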

Gamma Regimes

Several modules classify gamma regime using historical GEX imbalance percentiles.

Important source distinction:

  • Model Diagnostics live magnet/repeller levels use the latest Amberdata strike-level GEX snapshot and describe current strike concentrations.
  • Probability current regime and conditional densities use stored gex_hourly imbalance history and describe where the latest aggregate imbalance ranks versus a historical lookback.

Common regime interpretation:

| Regime | Meaning |
| --- | --- |
| short_gamma | Low historical percentile / negative dealer gamma environment; potentially higher vol feedback |
| neutral | Middle percentile band |
| long_gamma | High historical percentile / positive dealer gamma environment; potentially damped spot moves |

For conditional returns, historical observations are grouped by GEX regime and forward returns are measured against future spot observations.

Forward Returns By Regime

Forward returns compare spot at a regime timestamp to the closest available future spot near the requested horizon.

forward_return = (future_spot - current_spot) / current_spot

Matching rules:

  • Uses a target horizon such as 24 hours.
  • Requires a future timestamp within tolerance.
  • If no valid future point exists, the observation is excluded.
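The matching rules can be sketched with hour offsets standing in for real timestamps; the horizon and tolerance defaults are illustrative:

```python
def forward_return(series, t0, horizon=24.0, tolerance=2.0):
    """series: ascending (t_hours, spot) pairs; horizon/tolerance in hours (sketch)."""
    spot0 = dict(series).get(t0)
    if spot0 is None:
        return None                      # no spot at the regime timestamp
    target = t0 + horizon
    future = [(abs(t - target), t, s) for t, s in series if t > t0]
    if not future:
        return None
    gap, _, future_spot = min(future)    # closest available future point
    if gap > tolerance:
        return None                      # outside tolerance -> observation excluded
    return (future_spot - spot0) / spot0
```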

Risk-Neutral Density

The probability engine extracts a risk-neutral density from one synchronized IV surface using a Breeden-Litzenberger-style approach.

Pipeline:

  1. Select the latest usable IV surface timestamp.
  2. Select nearest DTE surface bucket within that timestamp.
  3. Use the surface's synchronized underlyingPrice / indexPrice where available, falling back to nearest gex_hourly spot only when source price is absent.
  4. Convert delta/IV points to strike/IV points where strike is missing.
  5. Convert IVs to Black-Scholes call prices.
  6. Use the second derivative of call price with respect to strike:
f(K) = exp(rT) * d²C / dK²
  7. Run quality checks for monotonicity, convexity, excessive negative second-derivative regions, excessive zero-density gaps, and too many local peaks.
  8. Normalize the density so its area integrates to approximately 1.
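The call-price second-derivative and normalization steps above can be sketched with central finite differences on a uniform strike grid; this is a minimal illustration with illustrative defaults, not the engine's implementation, and the quality checks are omitted:

```python
import math

def bl_density(strikes, call_prices, r=0.045, t_years=30 / 365):
    """Breeden-Litzenberger sketch: f(K) = exp(rT) * d2C/dK2, then normalize."""
    n = len(strikes)
    if n < 3:
        return None                                   # not enough valid strike points
    dK = strikes[1] - strikes[0]                      # assumes a uniform grid
    grid = strikes[1:-1]
    dens = []
    for i in range(1, n - 1):
        d2 = (call_prices[i - 1] - 2 * call_prices[i] + call_prices[i + 1]) / dK ** 2
        dens.append(max(math.exp(r * t_years) * d2, 0.0))  # clip negative noise
    area = sum(0.5 * (dens[i] + dens[i + 1]) * (grid[i + 1] - grid[i])
               for i in range(len(grid) - 1))
    if area <= 0:
        return None
    return grid, [d / area for d in dens]             # area now integrates to ~1
```

As a check, call prices generated by a uniform terminal distribution on [90, 110], C(K) = (110 - K)^2 / 40, recover a flat density.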

Unit convention:

  • The x-axis is model-implied terminal price / implied strike, not necessarily a listed option strike.
  • The y-axis is probability density per dollar. Raw y-values do not sum to one; the area under the curve is the probability total.
  • API responses keep probability as a backwards-compatible alias, but density is the mathematically correct field name.

Assumptions:

  • Risk-free rate is currently approximated as 4.5% in the probability engine.
  • The method depends on surface quality and enough valid strike points.
  • Sparse, mixed-timestamp, or malformed surfaces should return an error/no-data or degraded-quality state rather than a misleading chart.

Touch Probability

Touch probability uses the risk-neutral density and a reflection-principle approximation.

For an upside target:

finish_probability = P(S_T >= target)
touch_probability ~= min(1, 2 * finish_probability)

For a downside target:

finish_probability = P(S_T <= target)
touch_probability ~= min(1, 2 * finish_probability)

Boundary:

  • This is an approximation under GBM-style assumptions.
  • It is not a full path-dependent barrier model.
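The approximation can be sketched over a normalized density grid; tail mass is integrated with trapezoids over intervals fully beyond the target:

```python
def touch_probability(grid, density, target, spot):
    """Reflection-principle approximation on a normalized density grid (sketch)."""
    pts = list(zip(grid, density))

    def tail_mass(keep):
        total = 0.0
        for (k0, d0), (k1, d1) in zip(pts, pts[1:]):
            if keep(k0) and keep(k1):     # interval fully inside the tail
                total += 0.5 * (d0 + d1) * (k1 - k0)
        return total

    if target >= spot:
        finish = tail_mass(lambda k: k >= target)   # P(S_T >= target)
    else:
        finish = tail_mass(lambda k: k <= target)   # P(S_T <= target)
    return finish, min(1.0, 2.0 * finish)
```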

Conditional Density

Conditional density adjusts the vanilla risk-neutral density using empirical returns observed in similar gamma-regime states. It inherits the vanilla density quality state; if the vanilla density is degraded or unavailable, conditional density and downstream candidate screening should also be unavailable.

On the Probability page, the default gamma-regime lookback is 730 days so the Current Regime card, conditional density overlays, and touch-probability bundle use the same historical percentile basis.

Conceptually:

posterior_density ∝ vanilla_density * empirical_likelihood(similar_regime_returns)

Then the result is normalized.

Stationarity control:

  • Matching historical returns are split into baseline and recent windows.
  • A KS test and mean-shift threshold detect instability.
  • If unstable, the conditional density is blended back toward vanilla density.

Current blend behavior:

| Stationarity status | Blend implication |
| --- | --- |
| stable | Trust the conditional empirical adjustment more strongly |
| insufficient | Partially blend toward vanilla |
| unstable | Heavily blend toward vanilla |
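The blend step can be sketched as a convex combination; assuming both densities are normalized on the same grid, the combination stays normalized, and the weight defaults here are illustrative rather than production values:

```python
def blend_toward_vanilla(vanilla, conditional, stationarity_status,
                         insufficient_blend_weight=0.5, unstable_blend_weight=0.8):
    """Blend the conditional density back toward vanilla by stationarity status."""
    w = {"stable": 0.0,
         "insufficient": insufficient_blend_weight,
         "unstable": unstable_blend_weight}[stationarity_status]
    # convex combination of two normalized densities on one grid stays normalized
    return [w * v + (1.0 - w) * c for v, c in zip(vanilla, conditional)]
```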

Implementation note:

  • Internal probability calculations must use the full normalized density grid, including zero-probability points. Filtering low-probability points before recomputing means or conditional overlays can over-integrate across gaps and produce impossible distribution moments.

Model Lab Scenarios

The Model Lab runs probability scenarios from explicit request parameters. It compares production defaults against tuned parameters without changing the production Probability page, module-level defaults, database rows, environment variables, or saved presets.

V1 executable parameters:

| Group | Parameters |
| --- | --- |
| Horizon | dte |
| Vanilla density | risk_free_rate, density_method, min_input_points, max_negative_density_fraction, max_zero_density_fraction, max_local_peaks |
| Regime conditioning | gamma_lookback_days, matching_percentile_band |
| Stationarity | stationarity_min_samples, stationarity_ks_pvalue_threshold, stationarity_mean_shift_bps_threshold, insufficient_blend_weight, unstable_blend_weight |
| Touch approximation | touch_mode = reflection_principle |
| Candidate filters | objective, target_dte, budget_usd, max_positions, min_open_interest |

The scenario response includes:

  • Baseline and tuned vanilla summaries.
  • Baseline and tuned conditional summaries.
  • Standard +/-5%, +/-10%, and +/-20% touch-probability comparisons.
  • Density-quality diagnostics and parameter provenance.
  • Candidate-filter provenance. Tuned optimizer integration is explicitly labelled not_included_in_v1.

Validation boundary:

  • Invalid parameter ranges return 422 validation errors.
  • Unsupported methods, such as SVI/SABR density extraction, KDE/bootstrap likelihood, as-of surface replay, durable preset saves, and full barrier touch models, are not executable in V1.
  • The Model Lab response is export-only provenance. Promotion to production must be a separate workflow and ticket.

Spot-Vol Correlation

Spot-vol analytics compare spot returns against implied-volatility changes.

Typical pipeline:

spot_return_t = ln(spot_t / spot_{t-1})
iv_change_t = iv_t - iv_{t-1}
rolling_corr = corr(spot_return, iv_change)

Regime interpretation commonly uses thresholds around positive, negative, or decorrelated spot-vol behavior.
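The pipeline can be sketched with a stdlib rolling Pearson correlation; the window default is illustrative:

```python
import math

def rolling_spot_vol_corr(spots, ivs, window=24):
    """Rolling Pearson correlation of log spot returns vs IV changes (sketch)."""
    rets = [math.log(b / a) for a, b in zip(spots, spots[1:])]
    divs = [b - a for a, b in zip(ivs, ivs[1:])]
    out = []
    for i in range(window, len(rets) + 1):
        x, y = rets[i - window:i], divs[i - window:i]
        mx, my = sum(x) / window, sum(y) / window
        sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sxx = sum((a - mx) ** 2 for a in x)
        syy = sum((b - my) ** 2 for b in y)
        out.append(sxy / math.sqrt(sxx * syy) if sxx > 0 and syy > 0 else None)
    return out
```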

Smile Tracking

Smile tracking compares the current IV point for a DTE/delta bucket against a trailing baseline.

Conceptually:

z_score = (current_iv - trailing_mean_iv) / trailing_std_iv

Labels:

  • rich: materially above trailing baseline.
  • cheap: materially below trailing baseline.
  • fair: within normal range.
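The z-score and labeling can be sketched as follows; the +/-1.0 z thresholds are illustrative, not the production richness bands:

```python
import statistics

def smile_label(current_iv, trailing_ivs, rich_z=1.0, cheap_z=-1.0):
    """Z-score of current IV against a trailing baseline (sketch)."""
    if len(trailing_ivs) < 2:
        return None, "unavailable"       # baseline too short for a std
    mean = statistics.fmean(trailing_ivs)
    std = statistics.stdev(trailing_ivs)
    if std == 0:
        return None, "unavailable"       # degenerate baseline
    z = (current_iv - mean) / std
    return z, "rich" if z >= rich_z else "cheap" if z <= cheap_z else "fair"
```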

Trade P&L Boundary

The trade P&L function can estimate mark-to-market using:

  • Current spot from gex_hourly.
  • Option type, strike, expiry, quantity, price, and multiplier from internal_trades.
  • Traded IV if available, otherwise realized vol where available.
  • Black-Scholes mark or intrinsic fallback.
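The mark-with-fallback logic can be sketched as below; the 4.5% rate default mirrors the probability engine's approximation, and the function name and signature are illustrative:

```python
import math

def _norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_mark(opt_type, spot, strike, t_years, traded_iv_pct, r=0.045):
    """Black-Scholes mark with intrinsic fallback when vol or time is unusable."""
    intrinsic = (max(spot - strike, 0.0) if opt_type == "call"
                 else max(strike - spot, 0.0))
    if traded_iv_pct is None or traded_iv_pct <= 0 or t_years <= 0:
        return intrinsic                 # no trustworthy vol input -> intrinsic
    sigma = traded_iv_pct / 100.0        # percent IV to decimal
    st = sigma * math.sqrt(t_years)
    d1 = (math.log(spot / strike) + (r + 0.5 * sigma ** 2) * t_years) / st
    d2 = d1 - st
    if opt_type == "call":
        return spot * _norm_cdf(d1) - strike * math.exp(-r * t_years) * _norm_cdf(d2)
    return strike * math.exp(-r * t_years) * _norm_cdf(-d2) - spot * _norm_cdf(-d1)
```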

Current boundary for Alpha rows:

  • Alpha synced rows currently lack traded_iv, execution-time bid/ask quotes, sourced post-trade marks, and authoritative realized-P&L fields.
  • If the dashboard cannot produce a trustworthy mark or authoritative realized-P&L source, P&L should remain blank.
  • Blank P&L is preferred over guessed P&L.
  • Trade-performance outcome metrics must follow the Trade Performance Analytics Metric Contract: sourced marks may be displayed, model estimates must be labelled as estimates, and expiry or realized P&L must remain unavailable unless backed by authoritative source data.