Value-at-Risk Model

The CVEX Protocol employs a Value-at-Risk (VaR) model, a widely recognised statistical risk measure, to evaluate trader portfolios. VaR estimates the maximum potential loss over a given timeframe under normal market conditions, within a specified confidence interval. By quantifying the worst expected loss over a specific period, it offers a clear view of the risk associated with a trader's positions.

The model considers factors such as asset volatility, correlations, and market dynamics to estimate potential losses. This methodology standardises margin requirements for portfolios with varied market exposures, consolidating them into a single, comprehensible metric. Essentially, CVEX determines suitable margin requirements for each portfolio to ensure it remains secure against liquidation over a specified timeframe and within a predetermined confidence level.

The VaR model offers several benefits, including efficient capital use by adjusting margins to actual risk exposure. This often results in lower margin needs for traders, particularly for diversified portfolios where overall risk is reduced through asset correlations. Additionally, VaR provides a dynamic risk assessment, adjusting to market changes and volatility, ensuring margin requirements are always in line with current market conditions. This enhances the safety and stability of the CVEX trading environment.

Calculation Methodology

The calculation of margin requirements using the VaR model depends on various parameters provided to the protocol by Risk Oracles. To evaluate parameters, Risk Oracles assess VaR for all underlying assets on a serviced platform, based on their historical returns.

While the historical method of evaluating VaR remains popular, it has notable drawbacks. Primarily, it is sensitive to tail values, which can lead to irregular VaR results. This sensitivity means that the VaR can fluctuate significantly based on rare, extreme return values, leading to inconsistent and less reliable risk assessments.

To address this issue, the CVEX Risk Oracles utilise a statistical methodology to compute VaR. In this approach, VaR is understood as a quantile in the distribution of returns. Adopting this statistical viewpoint enables a more comprehensive and precise evaluation of VaR. It involves using the entire spectrum of available data to formulate the distribution model, thus avoiding over-reliance on a limited number of extreme historical data points.

To calculate VaR, oracles first sample the historical prices of assets at the VaR timeframe, then transform these into log returns and normalise them to remove skewness. This step ensures an accurate risk assessment, unbiased by historical trends. The normalised data is then fitted to a statistical distribution model. The final step is to calculate the quantile of this distribution that corresponds to the VaR confidence level, then exponentiate it to obtain the Value-at-Risk for the initial data.
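The steps above can be sketched in Python. This is a simplified illustration, not CVEX's implementation: the function name is ours, SciPy's built-in t-distribution fit stands in for the oracle's estimator, and the normalisation step is reduced to fixing the distribution's mean at zero.

```python
import numpy as np
from scipy import stats

def var_from_prices(prices, p=0.01):
    """Sketch of the described pipeline: prices -> log returns ->
    zero-mean Student's t fit -> quantile -> exponentiation."""
    prices = np.asarray(prices, dtype=float)
    # Transform historical prices into log returns.
    log_returns = np.diff(np.log(prices))
    # Fit a t-distribution with the location (mean) fixed at 0, so the
    # estimate is unbiased w.r.t. the direction of past price movement.
    nu, _, sigma = stats.t.fit(log_returns, floc=0)
    # Quantile at the confidence level, exponentiated back to a price ratio.
    return np.exp(sigma * stats.t.ppf(p, nu))
```

For a lower-tail confidence level such as p = 0.01 the result is a price ratio below 1, and one minus that ratio is the worst expected fractional loss at that confidence level.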

Selecting the appropriate distribution is crucial for accurate VaR results. While normal distribution is often deemed suitable for market modelling in quantitative finance, its application in risk assessment can be hazardous. Real markets exhibit imperfections, and sequential price changes aren't always independent, leading to more frequent extreme events than predicted by a normal distribution. Real market returns exhibit 'fatter tails' compared to the normal distribution. Ignoring these characteristics can result in a significant underestimation of risks, as the model might not capture the actual likelihood and impact of market extremes.
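The fat-tail effect is easy to check numerically. The following small comparison (our own illustration, with arbitrary parameter choices) shows that the two-sided probability of a move beyond four standardised units is orders of magnitude larger under a low-degrees-of-freedom t-distribution than under the normal distribution:

```python
from scipy import stats

# Two-sided probability of an observation beyond 4 standardised units.
normal_tail = 2 * stats.norm.sf(4)      # small under the normal model
t_tail = 2 * stats.t.sf(4, df=3)        # far larger under a fat-tailed t
print(normal_tail, t_tail, t_tail / normal_tail)
```

A model calibrated on the normal distribution would therefore treat such a move as a near impossibility, while the t-distribution assigns it a materially non-zero probability.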

To accurately account for the 'fat tails' of return distributions, models with flexible kurtosis, such as the Student's t-distribution or the Generalised hyperbolic distribution, are essential. The Generalised hyperbolic distribution, while more comprehensive, often leads to overfitting because of its multiple parameters and the limited data usually available for fitting. Extreme Value Theory (EVT) and the Generalised Pareto distribution are not yet widely accepted and are considered excessive for routine margining, though useful in stress testing. Consequently, the t-distribution is chosen for its simplicity, with just one additional parameter, the degrees of freedom, to represent kurtosis. Numerous independent studies have shown that log returns in financial markets fit the t-distribution well, making it a more reliable tool for predicting tail risks.

To fit log returns to the t-distribution, Risk Oracles use the Maximum Likelihood Estimation method, finding the values of ν (degrees of freedom) and σ (scale parameter) that maximise the likelihood function:

$$
L(\nu, \sigma \mid P) = \prod_{i=1}^{n} \frac{\Gamma\left((\nu+1)/2\right)}{\Gamma(\nu/2)\sqrt{\nu\pi}\,\sigma}\left(1+\frac{1}{\nu}\left(\frac{\ln(P_i) - \ln(P_{i-1})}{\sigma}\right)^2\right)^{-(\nu+1)/2}
$$

Where Γ is the gamma function and P is the series of historical prices. The mean of the distribution is fixed at 0, ensuring the risk assessment remains unbiased with respect to the direction of historical price movement.
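As an illustration of the estimation step (our own sketch, not CVEX's implementation), the likelihood above can be maximised numerically by minimising its negative logarithm over (ν, σ), with the mean fixed at zero; the starting point and bounds here are illustrative choices, not protocol values:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

def neg_log_likelihood(params, log_returns):
    """Negative log of the likelihood function, with the mean fixed at 0."""
    nu, sigma = params
    z = log_returns / sigma
    return -np.sum(
        gammaln((nu + 1) / 2) - gammaln(nu / 2)
        - 0.5 * np.log(nu * np.pi) - np.log(sigma)
        - (nu + 1) / 2 * np.log1p(z**2 / nu)
    )

def fit_t(log_returns):
    # Illustrative starting point and bounds; nu is kept above 2 so the
    # distribution has finite variance.
    res = minimize(neg_log_likelihood,
                   x0=[5.0, np.std(log_returns)],
                   args=(log_returns,),
                   bounds=[(2.1, 100.0), (1e-8, None)])
    return res.x  # estimated (nu, sigma)
```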

Finally, VaR is calculated as the exponentiated quantile of the t-distribution, scaled by σ:

$$
\mathrm{VaR}(p) = \exp\left(\sigma F^{-1}(p, \nu)\right)
$$

Where F⁻¹ represents the inverse cumulative distribution function of the t-distribution with ν degrees of freedom, and p is the VaR confidence level.
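As a numeric sketch of this final formula (the parameter values below are illustrative, not protocol outputs):

```python
import numpy as np
from scipy import stats

# Illustrative fitted parameters and a lower-tail confidence level.
nu, sigma, p = 4.0, 0.02, 0.01
var_ratio = np.exp(sigma * stats.t.ppf(p, nu))
# var_ratio is the VaR as a price ratio; (1 - var_ratio) is the
# fractional loss at the chosen confidence level.
print(var_ratio)
```

Here F⁻¹ is SciPy's `stats.t.ppf`; the quantile at p = 0.01 is negative, so the exponentiated result is a ratio below 1, i.e. a loss.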
