Modelling Trading and Risk in the Market

Modelling trading and risk in the market: Research

Systemic Risk

The recent financial crisis has heightened awareness of the issues of liquidity and systemic risk. Liquidity quantifies the ability of firms to fund their operations and to adjust market positions efficiently, while systemic risk refers to the risk of failure of the financial system as a whole and the potential for shocks to the financial system to spill over into the real economy. The two are intertwined: if the institutions in an economy have difficulty funding their operations, less credit is extended to help grow the economy. Policymakers and regulators are increasingly aware of these issues, as evidenced by the recent signing of the Dodd-Frank Act in the U.S., which mandates the creation of the Office of Financial Research (OFR). A principal goal of the OFR is to create and monitor measures of systemic risk, which will help to identify systemically important "too-big-to-fail" and "too-interconnected-to-fail" institutions. Many recent works use random networks to model the financial system, with nodes representing institutions and edges their business agreements. Of interest to policymakers and regulators are the network configurations and the rules governing business agreements that lead to better system stability. These network models, along with regulatory hybrid securities (bonds which convert to equity in stressed times to recapitalize the institution), are the focus of much recent work on systemic risk.
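
To fix ideas, the following is a minimal threshold-contagion sketch in the spirit of such network models, written in Python; the parameters, the random-graph construction and the loss rule are assumptions made purely for illustration and are not the models studied in this program.

    import random

    random.seed(1)

    # Toy parameters (assumptions for illustration only)
    n_banks = 50
    capital = [1.0] * n_banks      # equity buffer of each bank
    loss_per_default = 0.6         # write-down a lender takes when a borrower fails

    # Random directed network of interbank loans: lenders[i] = banks that have lent to bank i
    lenders = {i: [j for j in range(n_banks) if j != i and random.random() < 0.08]
               for i in range(n_banks)}

    # Seed the cascade with one exogenous failure and propagate write-downs
    failed = {0}
    frontier = [0]
    while frontier:
        newly_failed = []
        for bank in frontier:
            for lender in lenders[bank]:
                if lender in failed:
                    continue
                capital[lender] -= loss_per_default   # lender writes down its claim
                if capital[lender] <= 0:              # buffer exhausted: lender fails too
                    failed.add(lender)
                    newly_failed.append(lender)
        frontier = newly_failed

    print(f"{len(failed)} of {n_banks} banks fail in the cascade")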

In the next year, Hurd and Metzler both intend to complete papers on credit-equity hybrid modelling and to develop several papers on the optimal capital structure of a bank and how it fits into a model of the financial system. These securities may be analyzed in the context of the seminal Merton model. Many elements of our program represent a significant departure from the standard valuation and hedging problems of mathematical finance, and we are excited to illustrate how techniques from quantitative finance may be applied to problems faced by regulatory bodies.

Metzler and Reesor plan to use network mathematics to develop more sophisticated approaches to capital ratios that incorporate an institution's systemic importance and reflect correlations among institutions' asset portfolios. Current requirements consider institutions in isolation and fail to account for simultaneous failures or common exposures to correlated risk factors. We will continue work on understanding the effect of network structure on overall stability, identifying the most systemically dangerous institutions, and determining the minimum amount of capital required to prevent collapse.

Grasselli has started a new line of investigation concerning asset price bubbles in financial markets. These have never been fully understood from an economic or mathematical point of view. Some researchers have characterized bubbles purely through their probabilistic nature as strict local martingales, but this does not explain why they exist in the first place. Moreover, current models in the literature cannot handle the birth of a bubble, either assuming bubbles exist from the beginning of time or ruling them out at every moment. There has also been controversy over whether transaction costs would eliminate such bubbles, which would be a serious weakness of this approach. In a different direction, it is well documented that bubbles always occur alongside a massive expansion of the monetary base. To formalize this observation, Grasselli plans to link the supply of credit in the economy with the asset pricing framework, likely using recent techniques employed in unified models for credit and equity risk. The goal is a model which not only exhibits bubbles under the usual assumptions (bubbles can arise even under no arbitrage) but also captures their birth and subsequent burst, which would shed some light on potential crisis triggers.
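
For context, the probabilistic characterization alluded to above can be summarized as follows (a standard formulation, stated in our notation). For a non-dividend-paying asset on a finite horizon [0, T] with pricing measure Q, define the fundamental value and the bubble by

\[
S^{*}_t = \mathbb{E}_{\mathbb{Q}}\!\left[ e^{-r(T-t)} S_T \,\middle|\, \mathcal{F}_t \right],
\qquad
\beta_t = S_t - S^{*}_t \;\ge\; 0 .
\]

The bubble \beta is nonzero precisely when the discounted price e^{-rt} S_t is a strict local martingale under Q, that is, a local martingale that fails to be a true martingale. This characterizes bubbles once they exist but, as noted above, says nothing about how they are born.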

Electricity Generation

The goal of this subproject is to apply mathematical and computational techniques originally developed for financial portfolio optimization to problems in renewable energy. Two main problems comprise this area: optimally controlling a dam, or a sequence of dams, and their associated reservoirs, and designing and siting wind turbines. The long-term goal of the project, integrating wind- and water-based generation, is increasingly important as clean but unpredictable wind power gains a growing share of electricity generation.

Good progress is being made on the hydroelectric facility optimization problem. Through the doctoral research of G. Zhao, an interesting discovery has emerged: where reservoir volumes are of a similar order of magnitude to the mean daily flow, interesting and nontrivial optimal control strategies emerge even when the price, though variable, is deterministic. This result is currently being prepared for publication.
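
To illustrate the kind of control problem involved, here is a minimal dynamic-programming sketch in Python, under assumed reservoir dynamics, an assumed deterministic price path and illustrative parameters; it is not the model of Zhao's thesis. A release rate is chosen each day to maximize revenue plus the continuation value obtained by backward induction.

    import numpy as np

    T = 30                                    # horizon in days
    levels = np.linspace(0.0, 10.0, 101)      # discretized reservoir volume
    releases = np.linspace(0.0, 2.0, 21)      # feasible daily release rates
    inflow = 1.0                              # assumed constant daily inflow
    price = 30.0 + 10.0 * np.sin(2 * np.pi * np.arange(T) / 7.0)  # deterministic price path

    V = np.zeros(len(levels))                 # terminal value: remaining water is worthless
    policy = np.zeros((T, len(levels)))

    for t in reversed(range(T)):
        V_new = np.empty_like(V)
        for i, x in enumerate(levels):
            best, best_u = -np.inf, 0.0
            for u in releases:
                if u > x + inflow:            # cannot release more water than is available
                    continue
                x_next = min(x + inflow - u, levels[-1])   # spill above capacity
                cont = np.interp(x_next, levels, V)        # interpolated continuation value
                val = price[t] * u + cont                  # revenue today + value tomorrow
                if val > best:
                    best, best_u = val, u
            V_new[i] = best
            policy[t, i] = best_u
        V = V_new

    # Report the optimal first-day release for a few initial levels
    for i in range(0, len(levels), 25):
        print(f"initial level {levels[i]:5.1f} -> release {policy[0, i]:.2f}")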

With respect to wind power, for the few wind farm development sites that intersect bird and bat migration corridors, it is important that turbines be sited and/or operated so as to minimize the possibility of bird and bat kills, subject to a given level of wind exposure. To study this problem we have forged new links with bat zoologist Brock Fenton and with Echo Track, a company that has developed new radar-based tools for tracking wildlife. Davison, Fenton, Echo Track and several other scientists were involved in an NSERC Strategic Project grant application to study this problem which, though unsuccessful in the 2006 round, has nonetheless nucleated an interesting collaboration.

Energy Pricing

This subproject has clear links with both the market microstructure and the electricity generation subprojects. Our focus has been on obtaining stochastic models for electricity and natural gas, two very volatile commodities. Electricity in particular is, in many markets, plagued by price spikes which are difficult to reproduce with standard financial mathematics tools. Ware has been active in modelling the Alberta power market together with Alberta's Balancing Pool. Elliott has modelled the supply-demand balance of the market with a Markov chain and continues this stream of research in joint work with H. Geman. Davison and Anderson have written two papers on a hybrid model for spot electricity prices, applicable to the PJM market, in which generator failure is non-Markovian; Davison also had the chance to work with a very bright group of graduate students at the 2006 PIMS Industrial Problem Solving Workshop on extending this model to the Alberta market.
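
A common starting point for such spike-prone prices (a generic illustration in our notation, not the specific models of Ware, Elliott, or Davison and Anderson) is a mean-reverting jump diffusion in the log price,

\[
dX_t = \alpha \left( \mu(t) - X_t \right) dt + \sigma \, dW_t + J_t \, dN_t,
\qquad S_t = e^{X_t},
\]

where N is a Poisson process triggering spikes of random size J_t and the mean-reversion rate \alpha is large enough that prices relax quickly after a spike. Hybrid models such as the one of Davison and Anderson replace the purely statistical jump component with explicit supply-demand mechanisms, including generator failure.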

Seco's group has produced a framework within which ergodic models can be estimated; one such model is an ergodic variant of models proposed by Pilipovic and by Barlow, a collaborator of the NCE research team. Ergodic models offer an alternative to the usual approach in which prices are expressed in non-inflationary currencies (i.e., "real dollars"). This result is being submitted for publication to the European Journal of Applied Math and formed the Ph.D. thesis of J. Hernandez.

Portfolio Choice Theory

Fundamental advances have been made in one of the key areas previously identified as a priority: non-normal returns of the underlying assets. Returns were postulated to degrade by switching from one set of normal market conditions to a second, distressed set of market conditions, and were modelled via a mixture of multivariate Gaussians for the underlying asset distribution. This provides a simple yet realistic picture of real markets, and gives rise to simple algorithmic implementations which, to our knowledge, at least two firms are using in a real setting. This result appears in a paper to appear in the European Journal of Optimization. At the same time, this development has opened the door to new methodologies which are being pursued in some of the papers listed as works in progress.
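
The mechanics of such a two-regime mixture are easy to illustrate; in the Python sketch below, the regime probability, means and covariances are assumptions chosen for illustration rather than the calibrated values used in the paper.

    import numpy as np

    rng = np.random.default_rng(0)

    # Assumed two-regime parameters for three assets (illustrative only)
    p_calm = 0.9                                    # probability of the calm regime
    mu_calm = np.array([0.0004, 0.0003, 0.0005])
    mu_stress = np.array([-0.002, -0.003, -0.001])
    cov_calm = 0.0001 * np.array([[1.0, 0.3, 0.2],
                                  [0.3, 1.0, 0.4],
                                  [0.2, 0.4, 1.0]])
    cov_stress = 0.0009 * np.array([[1.0, 0.8, 0.7],
                                    [0.8, 1.0, 0.8],
                                    [0.7, 0.8, 1.0]])

    w = np.array([0.5, 0.3, 0.2])                   # portfolio weights

    # Sample daily returns from the mixture: pick a regime, then draw a Gaussian
    n = 100_000
    regime = rng.random(n) < p_calm
    returns = np.where(regime[:, None],
                       rng.multivariate_normal(mu_calm, cov_calm, n),
                       rng.multivariate_normal(mu_stress, cov_stress, n))
    port = returns @ w

    print("mean daily return:", port.mean())
    print("95% VaR (loss):", -np.quantile(port, 0.05))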

Haussmann and Pirvu have investigated portfolio optimization problems under value-at-risk constraints and, motivated by insurance concerns, under the constraint of an unknown random horizon.

Pricing and Hedging in Incomplete Markets

The McMaster group has been successful in applying the utility indifference framework to provide a price and a hedge for volatility derivatives, including variance and volatility swaps. These are the first examples of explicit calculations of indifference prices for financially relevant products, allowing the framework to be readily used and compared to other approaches to volatility derivatives, such as semi-static replication using vanilla call and put options of all strikes and maturities available in the market.
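
For context, the utility indifference (buyer's) price p of a claim C is defined in the standard way (our notation) by

\[
\sup_{\pi} \mathbb{E}\!\left[ U\!\left( X_T^{\,x-p,\pi} + C \right) \right]
= \sup_{\pi} \mathbb{E}\!\left[ U\!\left( X_T^{\,x,\pi} \right) \right],
\]

where X_T^{x,\pi} is the terminal wealth generated from initial capital x by an admissible trading strategy \pi. With exponential utility U(w) = -e^{-\gamma w} the price does not depend on x but, unlike prices in complete markets, is nonlinear in the number of claims held.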

Employees with exponential utility preferences were assumed to hold stock options. Optimal exercise strategies for these options were determined, highlighting the intrinsic nonlinearity of indifference prices with respect to the number of contracts held. Next steps are to extend this work to incorporate other features of employee options, such as reload and resetting provisions. It can then be compared to valuation schemes that price the options from the point of view of the well-diversified company granting them, thereby ignoring the trading and hedging restrictions faced by the employees.

Computational Derivatives Pricing

Delay Equations for Stochastic Volatility

A stock's volatility is the annualized standard deviation of its returns during the period of interest. The easiest way to trade volatility is through variance swaps, sometimes called realized variance forward contracts (Carr & Madan (1998)). Variance swaps are forward contracts on future realized stock variance, the square of the future volatility. Demeterfi, Derman, Kamal & Zou (1999) explained the properties and the theory of both variance and volatility swaps. They derived an analytical formula for the theoretical fair value in the presence of realistic volatility skews, and pointed out that volatility swaps can be replicated by dynamically trading the more straightforward variance swap.
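
In standard notation, a variance swap struck at K_var with variance notional N_var and n daily observations pays

\[
N_{\mathrm{var}} \left( \sigma_R^2 - K_{\mathrm{var}} \right),
\qquad
\sigma_R^2 = \frac{252}{n} \sum_{i=1}^{n} \left( \ln \frac{S_i}{S_{i-1}} \right)^{2},
\]

where the S_i are the daily closing prices of the underlying; a volatility swap instead pays N_vol (\sigma_R - K_vol).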

Swishchuk (2004) determined the values of variance and volatility swaps for financial markets in which the underlying asset and its variance follow a Heston (1993) model. Swishchuk (2005) found the value of a variance swap for financial markets with stochastic volatility with delay. A working paper by Elliott & Swishchuk determines the value of a variance swap for a financial market with Markov stochastic volatility. Kazmerchuk, Swishchuk & Wu (2006) derived a Black-Scholes formula for security markets with delayed response, and Kazmerchuk, Swishchuk & Wu (2005) proposed and studied a continuous-time GARCH model for stochastic volatility with delay.
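
For example, when the instantaneous variance follows the Heston dynamics dv_t = \kappa(\theta - v_t)\,dt + \xi\sqrt{v_t}\,dW_t, the fair strike of a continuously sampled variance swap has the closed form

\[
K_{\mathrm{var}} = \mathbb{E}\!\left[ \frac{1}{T}\int_0^T v_t \, dt \right]
= \theta + \left( v_0 - \theta \right) \frac{1 - e^{-\kappa T}}{\kappa T},
\]

a standard result of the kind derived in Swishchuk (2004).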

Next steps for this project include pricing variance swaps for a) stochastic volatility with delay, including multi-factor stochastic volatility models with delay and one-factor stochastic volatilities with delay and jumps, and b) Markov-modulated stochastic volatility, as well as pricing options in Markov-modulated Brownian and fractional Brownian markets with jumps.

Mathematical Models for Non-Markovian Price Dynamics

Here the general objective is to improve the efficiency of pricing algorithms for existing derivatives, and to develop algorithms for new options (especially those used in the energy markets) and for options on underlyings with new dynamics. One approach is to seek more realistic non-Markovian models, which incorporate history into the dynamics. Recent results of Arrioja, Mohammed and Pap [A delayed Black and Scholes formula, preprint] provide a theoretical basis for pricing European options in this context, while Kazmerchuk, Swishchuk and Wu [The pricing of options for securities markets with delayed response, preprint] derive PDEs for specific choices of the volatility in this setting. We have implemented the pricing strategy of Arrioja, Mohammed and Pap, and are presently examining the applicability and practicality of this approach in scenarios with significant delays. Previous studies were limited to short delays, and we believe this new implementation does not have this restriction. A number of questions must still be addressed: the applicability of these models to real data, the development of efficient computational methods for valuation, and generalization to other types of derivatives.
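
Schematically, and in our notation (the cited papers treat more general forms of path dependence), a delayed-response model replaces the instantaneous volatility by a functional of the past of the price, for example

\[
dS_t = r S_t \, dt + \sigma\!\left( S_{t-b} \right) S_t \, dW_t
\]

for a fixed delay b > 0, so that the option value depends on an initial path segment \{ S_u : -b \le u \le 0 \} rather than a single initial price; this is precisely what makes the dynamics non-Markovian in S alone.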

Efficient Iterative Algorithms for Exotic Option Valuation

Recent results by Kuske and Keller [Optimal exercise boundary for an American put option, Applied Math. Finance 5, 1998] and Evans, Kuske and Keller [American options on assets with dividends near expiry, Math. Finance 12, 2002] provide asymptotic approximations near expiry, which are needed as end conditions for computational methods. These provide a basis for deriving explicit results for a collection of exotic options. Recent work by Florica Coman implements an efficient iterative scheme for pricing American options with barriers. As expected, the numerical approximation is less efficient near expiry and where the barrier is close to the strike price; in this regime we expect the asymptotic results to give good approximations, complementary to the numerical approach.
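
For orientation, here is a minimal binomial (CRR) sketch in Python of an American put with an up-and-out barrier, using illustrative parameters; it is not Coman's iterative scheme, and the barrier is monitored only at tree nodes.

    import numpy as np

    def american_put_up_and_out(S0=100.0, K=100.0, B=130.0, r=0.05, sigma=0.2, T=1.0, n=500):
        """Binomial (CRR) price of an American put knocked out if the asset reaches B."""
        dt = T / n
        u = np.exp(sigma * np.sqrt(dt))
        d = 1.0 / u
        p = (np.exp(r * dt) - d) / (u - d)        # risk-neutral up probability
        disc = np.exp(-r * dt)

        # Payoff at maturity, with knock-out above the barrier
        S = S0 * u ** np.arange(n, -1, -1) * d ** np.arange(0, n + 1)
        V = np.maximum(K - S, 0.0)
        V[S >= B] = 0.0

        # Backward induction with early exercise and knock-out at each node
        for step in range(n - 1, -1, -1):
            S = S0 * u ** np.arange(step, -1, -1) * d ** np.arange(0, step + 1)
            V = disc * (p * V[:-1] + (1 - p) * V[1:])
            V = np.maximum(V, K - S)              # early exercise
            V[S >= B] = 0.0                       # knock-out
        return V[0]

    print(american_put_up_and_out())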

Risk Measures and Securitization

Here a first objective is the efficient computation of risk measures related to investment strategies on bimodal stochastic processes, such as climate or weather. The goal is to approximate the tails of the distribution while avoiding the expensive Monte Carlo simulations usually necessary for capturing the correct behavior. The approach combines ideas for efficient approximations of the density of the bimodal process with computations of investment strategies. The computational difficulties arise from combining the solution of PDEs for computing the investment strategies, as in the preprint of Chaumont, Imkeller and Muller, with the update of the stochastic distribution of the weather/climate indicator, based on [R. Kuske, J. Stat. Phys. 96, 1999, 797-816]. In particular we will explore the effect of the trader updating the trading strategy periodically rather than continuously, the latter being the assumption on which present calculations of optimal strategies are based.

Another motivation for analyzing microstructure models of capital markets comes from securitization. The idea of transforming non-tradable risk arising from external risk factors, such as climate and weather phenomena, into tradable financial securities may be viewed as part of the more general convergence of financial and insurance markets. Our focus will be on pricing and hedging schemes for climate-sensitive financial products. The markets for these products are typically illiquid and incomplete, which creates a need for dynamic equilibrium pricing models in this setting. Mathematically, this will involve (multi-dimensional) backward stochastic differential equations with non-Lipschitz drivers and a theory of general equilibrium for monetary utility functions. First case studies have already been analyzed by Chaumont, Horst, Imkeller & Muller (2005) and Horst & Muller (2006), but this project is still in its early stages.

Interest Rates and Credit Risk

Much existing credit risk research is statistical in nature, seeking to determine the key drivers of credit spreads (corporate bond prices), default probabilities and credit losses given default. Many phenomenological facts about credit risk are known. The existing mathematical models, which usually follow one of two leading approaches (the reduced-form or intensity-based approach, and the structural or value-of-the-firm approach), do only a passable job of capturing these statistical effects and are problematic for important applications such as credit risk management. The pressure to understand complex portfolio credit products such as CDOs, which involve hundreds or possibly thousands of creditors, has highlighted the inadequacy of existing mathematical methods, such as the copula method, for understanding the joint probabilities of defaults, leaving an open door for innovative modelling frameworks.

Hurd and Kuznetsov have developed an innovative approach to multifirm credit risk modelling called the Affine Markov Chain (AMC) framework. While still at an early stage of development, it has the capability to go beyond the structural and intensity-based frameworks in explaining statistical observations. Moreover, its mathematical structure has been designed to facilitate the computation of the most important credit derivative products, such as corporate bonds and credit default swaps. They have studied CDOs in this framework and shown that it is very flexible and that, in many cases, accurate pricing can be even faster in these dynamic credit models than with the less sophisticated industry-standard static copula methods. Meanwhile, Yang, Hurd and Zhang have studied the structure of CDOs in copula models of credit risk and shown that the saddlepoint method can outperform less refined approximation schemes such as the Edgeworth approximation. Finally, excellent statistical work by Chuang Yi with a large bond price dataset has refined our capability to implement nonlinear filtering in intensity-based credit models.
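
For reference, the saddlepoint method referred to here is typically applied in the Lugannani-Rice form (a standard result, stated in our notation; the exact variant used by Yang, Hurd and Zhang may differ). For a portfolio loss L with cumulant generating function K(s) = \log \mathbb{E}[e^{sL}] and a threshold x, the saddlepoint \hat{s} solves K'(\hat{s}) = x, and with

\[
\hat{w} = \operatorname{sgn}(\hat{s}) \sqrt{ 2 \left( \hat{s} x - K(\hat{s}) \right) },
\qquad
\hat{u} = \hat{s} \sqrt{ K''(\hat{s}) },
\]

the tail probability is approximated by

\[
\mathbb{P}(L \ge x) \;\approx\; 1 - \Phi(\hat{w}) + \phi(\hat{w}) \left( \frac{1}{\hat{u}} - \frac{1}{\hat{w}} \right),
\]

where \Phi and \phi are the standard normal distribution and density functions.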

Hurd proposes to develop methods to test the usefulness and accuracy of realizations of the multifirm Affine Markov Chain (AMC) framework introduced in 2005, with the aim of establishing it as a leading mathematical framework for credit risk. At the same time, analogous multifirm structural and intensity-based credit models will be developed and compared. All three approaches will share the common feature of modelling dynamic credit spreads and credit dependence using stochastic time change.

Several specific tasks need to be addressed before the viability of the AMC framework can be determined. The first is to adapt intensity-based nonlinear filtering methods to the more general AMC model, and to use them to calibrate to a dataset including corporate and treasury bonds and the S&P 500 index. A second task is to develop an improved theory of stochastic time change that extends classic works of Meyer, Liptser, Shiryaev and others. A third is to investigate the viability of the saddlepoint approximation method for computing conditional value at risk in portfolio credit risk management.

Collateralized Fund Obligations

Seco's group studied defaultable forward contracts, perhaps the most traded credit instrument by volume, especially in the energy sector. We have obtained the basic term structure theories, analogues of the well-established Schwartz model for commodities and of Schonbucher's model for defaultable bonds. In this respect, we also developed the pricing theory behind Collateralized Fund Obligations (CFOs), set as one of the objectives of the project two years ago; it is presented in a paper accepted for publication in the Journal of Alternative Investments, the first such paper on CFOs.

In connection with CDOs, we have studied the partial differential equation which is to the Black-Cox credit model what the Black-Scholes equation is to the Merton model. This will provide alternative credit pricing theories that adopt more realistic assumptions than the usual ones.
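
Schematically, and in our notation, the equation in question is the Black-Scholes PDE for a claim f(V, t) on the firm value V, posed above a default barrier K(t) at which the firm is reorganized or liquidated:

\[
\frac{\partial f}{\partial t}
+ \tfrac{1}{2}\sigma^2 V^2 \frac{\partial^2 f}{\partial V^2}
+ r V \frac{\partial f}{\partial V} - r f = 0,
\qquad V > K(t),
\]

together with a prescribed recovery value on the boundary V = K(t) and a payoff at maturity; the barrier turns the Merton-style terminal-value problem into a first-passage problem.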

Debt Management

Governments and corporations issue bonds to cover their funding requirements, resulting in a portfolio of liabilities. The issuer controls the timing of issuance and the relative amounts issued across the maturity spectrum. Debt management refers both to the manner in which bonds are sold and to the structure of the resulting debt portfolio. That is, the issuer seeks an optimal issuance strategy or portfolio structure that minimizes the expected debt cost subject to a number of constraints, including a risk constraint. Analysis of this constrained stochastic optimal control problem gives rise to many interesting quantitative issues, including the design of a stochastic simulation framework, model specification and estimation, and the investigation of interest-rate model risk.
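
Schematically, and in our notation (the precise cost and risk functionals vary across studies), the debt manager's problem can be written as

\[
\min_{u \in \mathcal{U}} \; \mathbb{E}\!\left[\sum_{t=1}^{T} C_t(u)\right]
\quad \text{subject to} \quad
\rho\big(C_1(u), \dots, C_T(u)\big) \le c ,
\]

where u ranges over admissible issuance strategies, C_t(u) denotes the resulting debt charges in period t, \rho is a risk measure and c is the risk budget.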

Reesor, D.J. Bolder of the Bank of Canada and doctoral student S. Liu are completing a project developing joint models for Canadian interest rates and the macroeconomy. Typically the in-sample fit of many of these models is adequate; however, their out-of-sample forecasting ability is generally poor, and we plan a subsequent project aimed at improving the out-of-sample forecasts. To achieve this, we propose to alter the estimation scheme (e.g., maximum likelihood, the Kalman filter, or some other moment-based method) so that it takes account of out-of-sample forecast performance as well as in-sample fit. This may result in estimated models having a poorer in-sample fit but better out-of-sample forecasts. Given that one of the proposed applications of these models is to compute debt charges based on simulated future interest rates, this is a reasonable compromise.

When estimating joint models for interest rates and the macroeconomy, the frequency of observations is an issue: term structures are observed daily (or more frequently), whereas macroeconomic variables such as inflation and Gross Domestic Product (GDP) are observed monthly. Most estimation schemes require observations at a common frequency which, in this case, means that much of the interest-rate data is simply discarded. Recently, DiCesare and McLeish (submitted) have proposed a method for imputing missing observations that are assumed to arise from multivariate diffusions. The observations are missing because, at a given time, one or more of the variables is observed but not all of them. The missing observations are imputed via simulation, resulting in a full set of data. This imputation scheme can then be combined with an estimation scheme in an iterative way, leading to simulation-based estimators, as sketched below. This methodology will allow one to use daily observations on interest rates and monthly observations on macroeconomic variables when estimating joint models. With the goal of producing models with superior out-of-sample forecasting ability, this planned work on imputing missing values may also be combined with the project described above.
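
The imputation-estimation idea can be illustrated in a stripped-down form; in the Python sketch below a Gaussian AR(1) stands in for the multivariate diffusion, the 21-day observation pattern and all parameters are assumptions, and this is not the DiCesare-McLeish method itself. Missing values are repeatedly redrawn from their conditional distribution given their neighbours under the current parameter estimate, and the parameters are then re-estimated from the completed series.

    import numpy as np

    rng = np.random.default_rng(42)

    # Simulate a daily AR(1) series, then hide all but every 21st value ("monthly" observations)
    a_true, s_true, n = 0.95, 0.1, 2000
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = a_true * x[t - 1] + s_true * rng.standard_normal()
    observed = np.zeros(n, dtype=bool)
    observed[::21] = True

    # Start from a linear interpolation of the observed points and crude parameter guesses
    y = np.interp(np.arange(n), np.flatnonzero(observed), x[observed])
    a, s = 0.5, 1.0

    for sweep in range(200):
        # Imputation: draw each missing value from its conditional distribution given its
        # neighbours under the current (a, s); for an AR(1) this is
        # N( a*(y[t-1]+y[t+1])/(1+a^2), s^2/(1+a^2) ).
        for t in np.flatnonzero(~observed):
            if t == n - 1:                                  # no right neighbour at the end
                mean, var = a * y[t - 1], s ** 2
            else:
                mean = a * (y[t - 1] + y[t + 1]) / (1 + a ** 2)
                var = s ** 2 / (1 + a ** 2)
            y[t] = mean + np.sqrt(var) * rng.standard_normal()

        # Estimation: refit (a, s) by least squares on the completed series
        a = np.sum(y[1:] * y[:-1]) / np.sum(y[:-1] ** 2)
        s = np.sqrt(np.mean((y[1:] - a * y[:-1]) ** 2))

    print(f"estimated a = {a:.3f}, s = {s:.3f} (true values {a_true}, {s_true})")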

A project by Reesor and doctoral student Liu investigating the effect of interest-rate model risk on the measures of cost and risk used to evaluate debt strategies is nearing completion. This work has shown that the choice of interest-rate model significantly affects the evaluation criteria of debt strategies, so this issue cannot be ignored when recommending an optimal debt portfolio structure. In this work we have also used a recently proposed class of interest-rate models, called positive interest-rate models, in the first application of such models to debt strategy analysis. For these models, the prices of zero-coupon bonds must be computed using a one-dimensional numerical integration, which renders existing estimation techniques impractical because the computational burden is immense. To get around this problem, we plan to approximate the bond prices using basis functions and then estimate the model using the approximation. Furthermore, we plan to extend these models to joint models for interest rates and macroeconomic variables. For these extended models, we plan to use both the imputation-estimation method and the estimation scheme designed to improve out-of-sample forecasts described above.

Debt strategies or portfolio structures are analyzed on the basis of cost and risk. Traditionally the measures used are the mean and the standard deviation of the debt charges at a particular time horizon. Other risk measures, such as cost-at-risk, have been used by Bolder (2002) and Hahm and Kim (2003), similar in spirit to the work of Haussmann and Pirvu reported above. We plan to investigate the use of various cost and risk measures when analyzing debt strategies. Furthermore, it is of interest to use dynamic risk measures to evaluate debt strategies over entire paths, rather than at just one time in the future.
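
As commonly defined (our notation), the cost-at-risk at confidence level \alpha and horizon T is the \alpha-quantile of the distribution of debt charges C_T,

\[
\mathrm{CaR}_{\alpha}(T) = \inf\left\{\, c : \mathbb{P}\!\left(C_T \le c\right) \ge \alpha \,\right\},
\]

so that debt charges exceed \mathrm{CaR}_{\alpha}(T) with probability at most 1 - \alpha; a dynamic risk measure would instead assign a risk value to the whole path of debt charges, updated as information arrives.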