**Time**

Wednesdays 1:15 pm - 2:05 pm

**Location**

Hunter East Building Room 922

695 Park Avenue

Hunter College

New York, NY 10065

Main entrance (Visitors Center): Southwest corner of Lexington Ave. and 68th St.

**Student Organizer**

Warren Tai (wtai AT gradcenter.cuny.edu)

**Undergraduate Student Representative**

John Huebert (johnhuebert AT gmail.com)

**Professors**

Jiangtao Gou (jiangtao.gou AT hunter.cuny.edu)

Olympia Hadjiliadis (olympia.hadjiliadis AT gmail.com)

Let us know if you would like to be added to our weekly mailing list.

Speakers are welcome to give talks on their research interests or any topic they have studied. Topics include but are not limited to Probability, Statistics, Data Analysis, Quickest Detection, Sequential Analysis, Statistical Physics, Random Walks, Stochastic Analysis, and Mathematical Finance. Our goal is to provide friendly introductions to various research areas in applied probability and statistics.

##### October 7, 2015

**Speaker:** Kris Joanidis

*Ph.D. Candidate in Mathematics at the CUNY Graduate Center*

**Title:** Risk and Asset Allocation and the Black-Litterman Model

**Abstract:** Quite often, an investor will have a decent idea of how much a stock is under- or overvalued based on qualitative research. But how do they move from that to deciding how much capital to allocate to each stock? This talk will give a bird's-eye view of the Black-Litterman model and its role in the investment process.
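
As a rough illustration of the idea (not material from the talk itself), the core of the Black-Litterman model fits in a few lines: implied equilibrium returns are backed out from market weights, then blended with an investor view. All numbers below are hypothetical.

```python
import numpy as np

# Toy 2-asset example; every number here is made up for illustration.
Sigma = np.array([[0.04, 0.01],    # prior covariance of returns
                  [0.01, 0.09]])
w_mkt = np.array([0.6, 0.4])       # market-cap weights
delta = 2.5                        # risk-aversion coefficient
tau = 0.05                         # scales uncertainty in the prior

# Step 1: implied equilibrium returns via reverse optimization.
pi = delta * Sigma @ w_mkt

# Step 2: one investor view: "asset 1 will outperform asset 2 by 2%".
P = np.array([[1.0, -1.0]])
Q = np.array([0.02])
Omega = np.array([[0.001]])        # uncertainty attached to the view

# Step 3: Black-Litterman posterior mean, blending prior and view.
A = np.linalg.inv(tau * Sigma)
B = P.T @ np.linalg.inv(Omega) @ P
mu_bl = np.linalg.solve(A + B, A @ pi + P.T @ np.linalg.inv(Omega) @ Q)

# Step 4: unconstrained mean-variance weights under the posterior.
w_bl = np.linalg.solve(delta * Sigma, mu_bl)
```

The posterior spread between the two assets moves from the equilibrium value toward the stated view, with the size of the tilt governed by `Omega` and `tau`.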

##### October 21, 2015

**Speaker:** Chris Knaplund

*Ph.D. Candidate in Mathematics at the CUNY Graduate Center*

**Title:** Drawdown-Based Measures of Risk

**Abstract:** Common risk measures, such as value-at-risk and conditional value-at-risk, are based on the distribution of terminal returns and do not incorporate the path dependence of returns. The drawdown process can be used to describe path-wise risk: it is defined as the difference between the running maximum and the current position of a process. We define and discuss the risk measures drawdown-at-risk, conditional drawdown-at-risk, maximum drawdown-at-risk, and co-drawdown-at-risk.
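
The drawdown definition above is easy to compute on a sample path. A minimal numpy sketch, on a hypothetical simulated P&L path, of the drawdown process and quantile-based measures in the spirit of those listed (the exact definitions used in the talk may differ):

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical P&L path: a simple random walk.
x = np.cumsum(rng.normal(0.0, 1.0, size=10_000))

# Drawdown process: running maximum minus current position.
drawdown = np.maximum.accumulate(x) - x

max_drawdown = drawdown.max()

# Drawdown-at-risk at level alpha: a quantile of the drawdown process.
alpha = 0.95
dar = np.quantile(drawdown, alpha)

# Conditional drawdown-at-risk: average drawdown beyond that quantile.
cdar = drawdown[drawdown >= dar].mean()
```

By construction the drawdown is nonnegative, and the three quantities are ordered: drawdown-at-risk ≤ conditional drawdown-at-risk ≤ maximum drawdown.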

##### October 28, 2015

**Speaker:** Dror Rom

*President, Prosoft Software, Inc.*

**Title:** Multiple Testing Procedures and the ‘Closure Principle’ in Pharmaceutical Research

**Abstract:** Closed testing procedures used in medical research are designed to control the type-1 error rate while facilitating the identification of specific drug effects as compared to either other active drugs or placebo. Historically, the focus of drug development regulators, such as the FDA, has been on the control of the type-1 error, leaving the responsibility for the control of the type-2 error (1 − power) to the drug industry. In this presentation, the multiplicity problem and current practices are revisited. A new implementation of the ‘closure principle’ is presented, which is shown to control the type-1 error while providing increased power to identify specific drug effects. The method is illustrated with some examples from clinical studies.
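
For readers unfamiliar with the closure principle, a minimal sketch (not the speaker's new method): test every non-empty intersection hypothesis with a local level-α test, here Bonferroni for concreteness, and reject an elementary hypothesis only if every intersection containing it is rejected. The p-values are hypothetical.

```python
from itertools import combinations

# Hypothetical raw p-values for m = 3 elementary hypotheses.
pvals = {1: 0.011, 2: 0.020, 3: 0.140}
alpha = 0.05
m = len(pvals)

# Step 1: test each non-empty intersection with a local Bonferroni test.
rejected_intersections = set()
for k in range(1, m + 1):
    for subset in combinations(pvals, k):
        local_p = min(pvals[i] for i in subset) * len(subset)
        if local_p <= alpha:
            rejected_intersections.add(frozenset(subset))

# Step 2 (closure principle): reject H_i only if every intersection
# hypothesis containing i was rejected.
rejected = {
    i for i in pvals
    if all(frozenset(s) in rejected_intersections
           for k in range(1, m + 1)
           for s in combinations(pvals, k) if i in s)
}
```

With Bonferroni local tests this shortcut reproduces the Holm procedure; hypotheses 1 and 2 are rejected here, while hypothesis 3 survives because its singleton intersection is not rejected.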

##### November 4, 2015

**Speaker:** Peter Carr

*Global Head of Market Modeling at Morgan Stanley*

**Title:** Delta and Model Risk

**Abstract:** We focus on the problem of determining an option’s delta from the implied volatility smile in a wide class of models which includes scale-invariant models as a special case. Working within the family of independently time-changed Constant Elasticity of Variance (CEV) models, we link the delta of a call to the realized variance of a power of the underlying, given the final price. Using spectral methods, we determine both from the smile.

##### November 18, 2015

**Speaker:** Dan Pirjol

**Title:** On the Growth Rate of a Stochastic Compounding Process

**Abstract:** We consider the discrete time process of a bank account accruing interest, under the assumption that the interest rate for each period follows a geometric Brownian motion sampled at the start of the period. The expectation (and higher moments) of the bank account has a numerical explosion after a certain number of time steps, or for sufficiently large interest rate volatility. This phenomenon is related to non-analyticity of the growth rate of the expectation of the bank account (Lyapunov exponent). The Lyapunov exponent can be computed using large deviation theory methods, and a criterion for the explosion of the moments is obtained.
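
A minimal Monte Carlo sketch of the process described above, with hypothetical parameters: the rate for each period is a geometric Brownian motion sampled at the period start, and the bank account compounds multiplicatively. Note that in the explosion regime the abstract refers to, a plain Monte Carlo average is a poor estimator of the (possibly infinite) moments, since they are driven by rare paths; this sketch only illustrates the setup.

```python
import numpy as np

rng = np.random.default_rng(1)

def bank_account_paths(r0=0.05, sigma=0.3, dt=1.0,
                       n_steps=20, n_paths=100_000):
    """Simulate B_N = prod_k (1 + r_k * dt), where the rate r_k follows
    a geometric Brownian motion sampled at the start of each period."""
    z = rng.normal(size=(n_paths, n_steps))
    w = np.cumsum(np.sqrt(dt) * z, axis=1)          # Brownian path
    t = dt * np.arange(1, n_steps + 1)
    r = r0 * np.exp(sigma * w - 0.5 * sigma**2 * t)  # GBM rate, always > 0
    return np.prod(1.0 + r * dt, axis=1)

B = bank_account_paths()
# Crude Monte Carlo estimate of the growth rate of E[B_N]
# (the Lyapunov exponent discussed in the talk).
growth_rate = np.log(B.mean()) / 20.0
```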

##### November 25, 2015

**Speaker:** Shelemyahu Zacks

*Distinguished Professor Emeritus
Department of Mathematical Sciences, Binghamton University*

**Title:** Probability Law and Flow Function of Brownian Motion Driven by a Generalized Telegrapher Process

**Abstract:** We consider a Brownian motion whose drift alternates between positive and negative values according to a generalized telegrapher process. We explain the structure of the telegrapher process and develop the distribution of the fraction of time, within a finite horizon, in which the drift goes up. With this distribution we obtain the distribution of the location of the process and its flow function. Several special cases are shown.

Joint work with Antonio Di Crescenzo

Reference: Methodology and Computing in Applied Probability (2015) 17, 761–780
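
A minimal simulation sketch of the object studied above, for the simplest special case of the telegrapher process (alternating exponential sojourn times; the talk treats a generalized version): the drift flips sign at the renewal epochs, and the fraction of time spent with the drift up is recorded.

```python
import numpy as np

rng = np.random.default_rng(2)

def telegraph_bm(t_end=10.0, dt=0.001, mu=1.0, sigma=1.0,
                 rate_up=1.0, rate_down=1.0):
    """Brownian motion whose drift alternates between +mu and -mu at the
    event times of an alternating renewal (telegrapher) process with
    exponential sojourn times -- a minimal special case."""
    n = int(t_end / dt)
    drift_sign = np.empty(n)
    sign = 1.0                                 # start with drift up
    t_switch = rng.exponential(1.0 / rate_up)  # first switching time
    t = 0.0
    for i in range(n):
        if t >= t_switch:
            sign = -sign
            rate = rate_up if sign > 0 else rate_down
            t_switch += rng.exponential(1.0 / rate)
        drift_sign[i] = sign
        t += dt
    dx = mu * drift_sign * dt + sigma * np.sqrt(dt) * rng.normal(size=n)
    x = np.cumsum(dx)
    frac_up = (drift_sign > 0).mean()  # fraction of time the drift goes up
    return x, frac_up

x, frac_up = telegraph_bm()
```

The distribution of `frac_up` (over many replications) is the fraction-of-time law the abstract refers to; conditioning on it gives the location distribution of the process.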

##### December 2, 2015

**Speaker:** Jiangtao Gou

*Assistant Professor, Department of Mathematics and Statistics, CUNY Hunter College*

**Title:** A Review of *p*-value Based Multiple Testing Procedures and Their Applications

**Abstract:** Multiple testing refers to the testing of more than one hypothesis simultaneously. It is desirable to correctly and effectively adjust for multiplicity in order to ensure valid statistical inference in confirmatory studies. Multiple testing procedures based on *p*-values are widely used because of their simplicity and because they do not require strong distributional assumptions. In this presentation, we first introduce the basic principles of multiple testing and some standard *p*-value based multiple testing procedures, then introduce gatekeeping procedures and graphical approaches, which are commonly applied in modern phase III confirmatory clinical trials, where multidimensional study objectives with logical relationships among them are involved.
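
As a concrete instance of a standard *p*-value based procedure, a sketch of the Holm step-down method, which controls the familywise error rate without any distributional assumptions (the example p-values are hypothetical):

```python
import numpy as np

def holm(pvals, alpha=0.05):
    """Holm step-down procedure: compare the ordered p-values to
    alpha/m, alpha/(m-1), ..., alpha, stopping at the first failure."""
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    order = np.argsort(p)
    reject = np.zeros(m, dtype=bool)
    for step, idx in enumerate(order):
        if p[idx] <= alpha / (m - step):
            reject[idx] = True
        else:
            break  # once one test fails, all remaining hypotheses are retained
    return reject

decisions = holm([0.011, 0.020, 0.140])
```

Here the first two hypotheses are rejected and the third retained; note that plain Bonferroni (fixed threshold 0.05/3) would reject only the first, illustrating the power gain of the step-down approach.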

##### February 3, 2016

**Speaker:** Yang Feng

*Assistant Professor of Statistics at Columbia University*

**Title:** Feature Augmentation via Nonparametrics and Selection (FANS) in High Dimensional Classification

**Abstract:** We propose a high dimensional classification method that involves nonparametric feature augmentation. Knowing that marginal density ratios are the most powerful univariate classifiers, we use the ratio estimates to transform the original feature measurements. Subsequently, penalized logistic regression is invoked, taking as input the newly transformed or augmented features. This procedure trains models equipped with local complexity and global simplicity, thereby avoiding the curse of dimensionality while creating a flexible nonlinear decision boundary. The resulting method is called Feature Augmentation via Nonparametrics and Selection (FANS). We motivate FANS by generalizing the Naive Bayes model, writing the log ratio of joint densities as a linear combination of those of marginal densities. It is related to generalized additive models, but has better interpretability and computability. Risk bounds are developed for FANS. In numerical analysis, FANS is compared with competing methods, so as to provide a guideline on its best application domain. Real data analysis demonstrates that FANS performs very competitively on benchmark email spam and gene expression data sets. Moreover, FANS is implemented by an extremely fast algorithm through parallel computing.

##### February 10, 2016 (3 pm - 4 pm)

**Note Special Time. Same Room.**

**Speaker:** Dimitrios G. Konstantinides

*Professor, Department of Mathematics, University of the Aegean*

**Title:** Asymptotics for ruin probabilities in a discrete-time risk model with dependent financial and insurance risks

**Abstract:** In this paper, we consider some non-standard renewal risk models with dependent claim sizes and stochastic return, where an insurance company is allowed to invest its wealth in financial assets, and the price process of the investment portfolio is described as a geometric Lévy process. When the claim-size distribution belongs to some classes of heavy-tailed distributions and a constraint is imposed on the Lévy process in terms of its Laplace exponent, we obtain some asymptotic formulas for the tail probability of discounted aggregate claims and ruin probabilities holding uniformly for some finite or infinite time horizons.

Joint work with Yang Yang of Nanjing Audit University and Kaiyong Wang of Southeast University, China
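
A minimal Monte Carlo sketch of the kind of discrete-time risk model with both insurance and financial risk discussed above (independent components, hypothetical parameters; the talk's asymptotics concern dependent, more general versions): the insurer's per-period net loss is heavy-tailed, returns act as stochastic discount factors, and ruin occurs when discounted aggregate losses exceed the initial capital.

```python
import numpy as np

rng = np.random.default_rng(3)

def ruin_prob(x0=50.0, n_periods=10, n_paths=200_000):
    """Estimate the finite-horizon ruin probability by simulation.
    X_i: net loss in period i (heavy-tailed Pareto claim minus premium).
    Y_i: stochastic one-period discount factor from the investment return.
    Ruin: the discounted aggregate loss exceeds the initial capital x0."""
    alpha, premium = 1.5, 4.0
    claims = rng.pareto(alpha, size=(n_paths, n_periods)) + 1.0
    X = claims - premium                       # insurance risk
    Y = np.exp(rng.normal(-0.05, 0.2, size=(n_paths, n_periods)))
    D = np.cumprod(Y, axis=1)                  # financial risk: discounting
    S = np.cumsum(X * D, axis=1)               # discounted aggregate losses
    return (S.max(axis=1) > x0).mean()

psi = ruin_prob()
```

With heavy-tailed (here Pareto) claims, the ruin event is typically driven by one large claim, which is the intuition behind the asymptotic formulas in this line of work.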

##### February 17, 2016

**Speaker:** Nick Costanzino (*AIG Investments*)

**Title:** A Fast and Efficient Backtesting Method for Expected Shortfall

**Abstract:** With the move from VaR to Expected Shortfall (ES) as the risk measure of choice in the revised Market Risk framework proposed in the Fundamental Review of the Trading Book, there is a heightened urgency develop reliable backtesting methods for ES. However, for a number of reasons these have remained elusive until recently. Here I present a very efficient backtesting method for Expected Shortfall that is consistent with the standard Binomial test for VaR and requires no addition simulation or memory requirements. This is joint work with Michael Curran (BMO Capital Markets).

##### February 24, 2016

**No Meeting Scheduled.**

##### March 2, 2016

**Speaker:** Stéphane Guerrier

*Assistant Professor, Department of Statistics, University of Illinois Urbana-Champaign*

**Title:** Simulation based Bias Correction Methods for Complex Models

**Abstract:** Along with ever-increasing data size and model complexity, an important challenge frequently encountered in constructing new estimators, or in implementing a classical one such as the maximum likelihood estimator, is the numerical aspect of the estimation procedure. To carry out estimation, approximate methods such as pseudo-likelihood functions or approximated estimating equations are increasingly used in practice, as these methods are typically easier to implement numerically, although they can lead to inconsistent and/or biased estimators. In this context, we extend and provide refinements on the known bias correction properties of two simulation based methods, indirect inference and the bootstrap, each with two alternatives. These results allow one to build a framework defining simulation based estimators that can be implemented for complex models. Indeed, based on one biased or even inconsistent estimator, several simulation based methods can be used to define new estimators that are both consistent and have reduced finite sample bias. For example, this framework includes the classical method of indirect inference for bias correction without requiring specification of an auxiliary model. We demonstrate the equivalence between (one version of) indirect inference and the iterative bootstrap, which both correct sample biases up to the same order. Therefore, our results provide different tools to correct the asymptotic as well as finite sample biases of estimators and give insight as to which method should be applied according to the problem at hand. The usefulness of the proposed approach is illustrated with the estimation of robust income distributions and generalized linear latent variable models.
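
A toy sketch of the iterative bootstrap idea mentioned above, on a deliberately simple example where the answer is known: the maximum likelihood variance estimator (dividing by *n*) is biased downward, and repeatedly shifting the estimate by the gap between the observed estimator and its simulated average converges to the unbiased (divide by *n* − 1) estimator. All parameters are chosen for illustration only.

```python
import numpy as np

rng = np.random.default_rng(4)

def biased_var(x):
    # MLE of the variance: divides by n, biased downward by sigma^2/n.
    return np.mean((x - np.mean(x)) ** 2)

def iterative_bootstrap(x, n_iter=20, n_sim=2_000):
    """Iterative bootstrap bias correction: shift the current estimate by
    the gap between the observed estimator and its average over samples
    simulated from the model evaluated at the current estimate."""
    n = len(x)
    theta_hat = biased_var(x)
    theta = theta_hat
    for _ in range(n_iter):
        sims = rng.normal(0.0, np.sqrt(theta), size=(n_sim, n))
        sim_est = np.mean(
            (sims - sims.mean(axis=1, keepdims=True)) ** 2, axis=1)
        theta = theta + theta_hat - sim_est.mean()
    return theta

x = rng.normal(0.0, 2.0, size=20)   # true variance is 4
theta_ib = iterative_bootstrap(x)
```

Note that the procedure only requires the ability to simulate from the model and evaluate the (possibly biased) estimator, which is why it scales to the complex models discussed in the talk.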

##### March 9, 2016

**No Meeting Scheduled.**

##### March 16, 2016

**Speaker:** Matthew J. Baker

*Associate Professor in Economics, CUNY Hunter College and CUNY Graduate Center*

**Title:** A Theoretical Foundation for the Age-Area Hypothesis

**Abstract:** The Age-Area Hypothesis originally advanced by Sapir (1915) is an often-used tool in historical linguistics to pinpoint the area of origin of a linguistic stock. One way of stating this hypothesis is that the point of origin of a linguistic stock is at or near the place where the languages comprising the stock are maximally differentiated. While the hypothesis is compelling and its predictions are often corroborated by other evidence, a cohesive theoretical structure has to the author's knowledge never been described. In this paper a probabilistic model based on likelihood is presented, along with a computational algorithm for calculating the relative likelihood of different points of language stock origin. The paper concludes with some applications.

##### March 23, 2016

Classes follow Friday schedule

**Speaker:** Mike Ludkovski

*Associate Professor, Department of Statistics and Applied Probability, UCSB*

**Title:** Gaussian Process Regression in Finance

**Abstract:** Gaussian processes (GP) offer a flexible framework for nonparametric regression. Originating in the machine learning context, they are quickly becoming the tool of choice for a variety of response surface modeling problems. I will discuss two simulation frameworks from mathematical finance where GP regression is beneficial. The first example reimagines the Least Squares Monte Carlo method for optimal stopping/Bermudan options as contour-finding of noisily observed functions. The second example proposes efficient valuation of deferred annuities under stochastic mortality through a GP metamodel. Time permitting, I will also discuss a non-financial problem involving detection of infectious disease epidemics. Such applications of GP methodology offer new perspectives on computational finance, while raising novel statistical challenges that include low signal-to-noise ratios, a high-throughput environment, and a specialized loss function.
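
For readers new to GP regression, a self-contained numpy sketch of the basic machinery (textbook formulas with a squared-exponential kernel; the kernel, length scale, and noise level below are illustrative choices, not those from the talk):

```python
import numpy as np

def gp_regress(X_train, y_train, X_test,
               length=0.2, sigma_f=1.0, noise=0.1):
    """Gaussian process regression with a squared-exponential kernel:
    returns the posterior mean and variance at the test points."""
    def k(a, b):
        d2 = (a[:, None] - b[None, :]) ** 2
        return sigma_f**2 * np.exp(-0.5 * d2 / length**2)

    K = k(X_train, X_train) + noise**2 * np.eye(len(X_train))
    Ks = k(X_test, X_train)
    Kss = k(X_test, X_test)

    mean = Ks @ np.linalg.solve(K, y_train)           # posterior mean
    v = np.linalg.solve(K, Ks.T)
    var = np.diag(Kss - Ks @ v)                       # posterior variance
    return mean, var

# Fit noiseless samples of one period of a sine wave.
X = np.linspace(0.0, 1.0, 10)
y = np.sin(2 * np.pi * X)
mean, var = gp_regress(X, y, np.array([0.25, 0.75]))
```

The posterior variance is what makes GPs attractive for the simulation problems in the abstract: it quantifies where the response surface is still poorly resolved, e.g. near the exercise contour in the Bermudan option example.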

##### March 30, 2016

**No Meeting Scheduled.**

##### April 6, 2016, 1:15 pm - 2:05 pm

**Speaker:** Murad S. Taqqu

*Professor
Department of Mathematics and Statistics, Boston University*

**Title:** Self-similar processes and computer network traffic

**Abstract:** In this lecture we will introduce self-similarity in the context of computer network traffic and talk not only of fractional Brownian motion but also of self-similar processes with infinite variance.

Ethernet local area network traffic appears to be approximately statistically self-similar. This discovery, made a number of years ago, has had a profound impact on the field. I will try to explain what statistical self-similarity means, how it is detected and indicate how one can construct random processes with that property by aggregating a large number of "on-off" renewal processes. If the number of replications grows to infinity then, after rescaling, the limit turns out to be the Gaussian self-similar process called fractional Brownian motion. If, however, the rewards are heavy-tailed, then the limit is a stable non-Gaussian process with infinite variance and dependent increments. Since linear fractional stable motion is the stable counterpart of the Gaussian fractional Brownian motion, a natural conjecture is that the limit process is linear fractional stable motion. This conjecture, it turns out, is false. The limit is a new type of infinite variance self-similar process.

##### April 14, 2016 (Thursday)

**Note Special Date.**

**Speaker:** Frank Bretz

*Global Head of the Statistical Methodology Group at Novartis International AG*

**Title:** Efficient tests to demonstrate the similarity of dose response curves

**Abstract:** This talk investigates the problem of whether the difference between two parametric models describing the relation between a response variable and several covariates in two different groups is practically irrelevant, such that inference can be performed on the basis of the pooled sample. Statistical methodology is developed for testing the hypothesis H₀: d(m₁, m₂) ≥ ε against H₁: d(m₁, m₂) < ε to demonstrate equivalence between the two regression curves m₁ and m₂, where d denotes a metric measuring the distance between m₁ and m₂, and ε is a pre-specified relevance margin. Our approach is based on the asymptotic properties of a suitable estimate of this distance. In order to improve the approximation of the nominal level for small sample sizes, a bootstrap test is developed which addresses the specific form of the interval hypotheses. In particular, data has to be generated under the null hypothesis, which implicitly defines a manifold for the parameter vector. The results are illustrated with a simulation study. It is demonstrated that the new methods substantially improve currently available approaches with respect to power.

##### April 20, 2016

**No Meeting Scheduled.**

##### April 27, 2016

**No Meeting Scheduled.**

##### May 4, 2016

**No Meeting Scheduled.**

##### May 11, 2016

**Speaker:** Harvey Stein

*Head of Regulation and Credit Modeling at Bloomberg LP*

**Title:** Fixing Risk Neutral Risk Measures

**Abstract:** In line with regulations and common risk management practice, the credit risk of a portfolio is managed via its potential future exposures (PFEs), expected exposures (EEs), and related measures, the expected positive exposure (EPE), effective expected exposure (EEE), and the effective expected positive exposure (EEPE). Notably, firms use these exposures to set economic and regulatory capital levels. Their values have a big impact on the capital that firms need to hold to manage their risks.

Due to the growth of credit valuation adjustment (CVA) computations, and the similarity of CVA computations to exposure computations, firms find it expedient to compute these exposures under the risk neutral measure.

Here we show that exposures computed under the risk neutral measure are essentially arbitrary. They depend on the choice of numeraire, and can be manipulated by choosing a different numeraire. The numeraire can even be chosen in such a way as to pass backtests. Even when restricting attention to commonly used numeraires, exposures can vary by a factor of two or more. As such, it is critical that these calculations be carried out under the real world measure, not the risk neutral measure. To help rectify the situation, we show how to exploit measure changes to efficiently compute real world exposures in a risk neutral framework, even when there is no change of measure from the risk neutral measure to the real world measure. We also develop a canonical risk neutral measure that can be used as an alternative approach to risk calculations.
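For readers unfamiliar with the exposure measures named above, a minimal numpy sketch of how they are computed from simulated mark-to-market paths, using their standard definitions (the random-walk scenario model below is purely hypothetical; the measure under which paths are simulated is exactly the issue the talk addresses):

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical simulated mark-to-market paths of a netting set:
# n_paths scenarios x n_dates future valuation dates.
n_paths, n_dates = 50_000, 12
V = np.cumsum(rng.normal(0.0, 1.0, size=(n_paths, n_dates)), axis=1)

E = np.maximum(V, 0.0)              # exposure: only positive MtM is at risk
EE = E.mean(axis=0)                 # expected exposure profile over dates
PFE = np.quantile(E, 0.95, axis=0)  # potential future exposure (95% quantile)
EPE = EE.mean()                     # expected positive exposure
EEE = np.maximum.accumulate(EE)     # effective EE: non-decreasing in time
EEPE = EEE.mean()                   # effective expected positive exposure
```

The talk's point can be read off this sketch: every quantity above depends on the distribution of `V`, i.e. on the measure (and numeraire) under which the scenarios are generated.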


**Biography:**

Harvey Stein is Head of Regulation and Credit Modeling at Bloomberg, responsible for Basel compliant regulatory risk models, default modeling, and specific risk and incremental risk charge calculations. Dr. Stein graduated from Worcester Polytechnic Institute in 1982 with a Bachelor's degree in mathematics. After working at Bolt, Beranek and Newman for three years on developing and designing the precursor to the Internet, Dr. Stein went to graduate school at the University of California, Berkeley, where he studied arithmetical geometry while working at Wells Fargo Investment Advisors. He received his PhD in mathematics from Berkeley in 1991.

For the last twenty-one years, Dr. Stein has worked at Bloomberg LP. He built one of the top quantitative finance research and development groups in the industry. His group supplied derivative valuation models for interest rate derivatives, mortgage backed securities, foreign exchange, credit, equities, and commodities, and built Linux clusters to supply these valuations to Bloomberg's customers.

Dr. Stein is well known in the industry, having published and lectured on mortgage backed security valuation, CVA calculations, interest rate modeling, credit exposure calculations, and other subjects. Dr. Stein built Bloomberg's business in the area of counterparty credit risk modeling and is currently focusing on default modeling and Basel risk models. He is also a member of the advisory board of the IAQF, an adjunct professor at Columbia University, and a board member of the Rutgers University Mathematical Finance program and of the NYU Enterprise Learning program.
