CUNY Applied Probability and Statistics Seminar | Fall 2016 - Spring 2017

Time
Wednesdays 1:15 pm - 2:05 pm

Location
922 Hunter East
695 Park Avenue
Hunter College
New York, NY 10065

Main entrance (Visitors Center): Southwest corner of Lexington Ave. and 68th St.

Student Organizer
Warren Tai (wtai AT gradcenter.cuny.edu)

Professors
Jiangtao Gou (jiangtao.gou AT hunter.cuny.edu)
Olympia Hadjiliadis (olympia.hadjiliadis AT gmail.com)

Let us know if you would like to be added to our weekly mailing list.

 

October 19, 2016

Speaker:  Jonathan Conning
Associate Professor, Department of Economics, CUNY Hunter College and CUNY Graduate Center
Title:  The 2016 Nobel for the Economics of Contracts: A primer on contract design modeling as a problem of statistical inference with applications in financial contracting
Abstract:  The 2016 Nobel Prize in Economics has just been awarded to Oliver Hart and Bengt Holmstrom 'for their contributions to contract theory.' This talk will provide a short primer on some of the main modeling ideas in the field of contract design under asymmetric information, with an emphasis on financial contracting under moral hazard. Holmstrom's (1979) paper 'Moral Hazard and Observability' and Grossman and Hart's (1983) paper 'An Analysis of the Principal-Agent Problem' established the modern 'state space' approach to the problem, which allowed the field to flourish. In the canonical single-task moral hazard contracting problem a Principal (e.g. landowner, firm owner, investor) wishes to enter into a contract with an Agent (e.g. worker-cum-tenant, employee, entrepreneur/borrower) to carry out a task or project whose stochastic outcome can be described by a statistical distribution that can be shifted by the agent's choice of action (e.g. the agent's diligence or effort). When the project's outcomes and the agent's action choices are both observable and contractible, this is just a standard neo-classical problem (e.g. financial contracts with Arrow-Debreu state-contingent commodities and standard asset pricing formulas). When the agent's actions are not observable, the contract design problem becomes a statistical inference and constrained optimization problem: how to design a contract that ties the agent's remuneration to observable outcomes, striking a balance between providing incentives for the agent to choose the right level of unobserved diligence/effort and not imposing too much costly risk on the agent. After establishing a few key results of the canonical case, the presentation moves on to more challenging and interesting contracting situations from Holmstrom's work (and this author's own work): multi-task and multi-agent principal-agent problems.
I discuss questions such as the possible uses of relative-performance evaluation (tournaments), whether to organize contracting directly through bilateral contracts or via specialized intermediaries or joint-liability structures, and other topics, and show how the framework is helpful for analyzing key questions in modern corporate finance, such as how firms borrow (via bonds, bank debt or equity), the design of microfinance contracts for the (collateral-)poor, questions of regulation, the optimal size and ownership structure of banks, and much else.
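As a toy illustration of the canonical single-task problem described above, the following Python sketch solves a two-outcome, two-effort moral hazard problem by grid search. All payoffs, probabilities, and the square-root utility are illustrative assumptions, not parameters from the talk.

```python
import numpy as np

# Toy single-task moral hazard problem (illustrative parameters only).
# The principal picks an outcome-contingent wage schedule (w_lo, w_hi);
# the agent privately chooses effort e in {0, 1}.
y_lo, y_hi = 0.0, 20.0            # observable project outcomes
p_hi = {0: 0.4, 1: 0.8}           # P(high outcome | effort)
cost = {0: 0.0, 1: 1.0}           # agent's effort cost
u = np.sqrt                       # agent's (risk-averse) utility of wages

def agent_value(w_lo, w_hi, e):
    return (1 - p_hi[e]) * u(w_lo) + p_hi[e] * u(w_hi) - cost[e]

# Maximize profit among contracts that implement HIGH effort, subject to
# incentive compatibility (IC) and individual rationality (IR).
best = None
grid = np.linspace(0.0, 10.0, 201)
for w_lo in grid:
    for w_hi in grid:
        # IC: agent prefers high effort (small tolerance for float error).
        if agent_value(w_lo, w_hi, 1) + 1e-9 < agent_value(w_lo, w_hi, 0):
            continue
        # IR: agent accepts the contract (reservation utility 0).
        if agent_value(w_lo, w_hi, 1) < 0:
            continue
        profit = (1 - p_hi[1]) * (y_lo - w_lo) + p_hi[1] * (y_hi - w_hi)
        if best is None or profit > best[0]:
            best = (profit, w_lo, w_hi)

profit, w_lo, w_hi = best
print(f"optimal contract: w_lo={w_lo:.2f}, w_hi={w_hi:.2f}, profit={profit:.2f}")
```

The solver finds a wage schedule with w_hi > w_lo: pay must be tied to the observable outcome to induce unobserved effort, which is exactly the incentive-versus-risk trade-off the abstract describes.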

 

November 2, 2016

Speaker:  Mark Brown
Professor, Department of Statistics, Columbia University
Title: Taylor's Law via Ratios, for Some Distributions with Infinite Mean
Abstract: Taylor’s law (TL) originated as an empirical pattern in ecology. In many sets of samples of population density, the variance of each sample was approximately proportional to a power of the mean of that sample. In a family of nonnegative random variables, TL asserts that the population variance is proportional to a power of the population mean. TL, sometimes called fluctuation scaling, holds widely in physics, ecology, finance, demography, epidemiology, and other sciences, and characterizes many classical probability distributions and stochastic processes such as branching processes and birth-and-death processes. We demonstrate analytically for the first time that a version of TL holds for a class of distributions with infinite mean. These distributions and the associated TL differ qualitatively from those of light-tailed distributions. Our results employ and contribute to methodology of Albrecher and Teugels (2006) and Albrecher, Ladoucette and Teugels (2010). This work opens a new domain of investigation for generalizations of TL.
This work is joint with Professors Joel Cohen and Victor de la Peña.
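As an illustration of the variance-to-mean power relation the abstract describes (in the light-tailed setting, not the infinite-mean case the talk addresses), the following Python sketch estimates a Taylor's-law exponent for a family of negative binomial populations; all parameter choices are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate samples from negative binomial populations with varying means.
# For this family Var = mu + mu^2/r, so log(variance) vs. log(mean) is
# roughly linear with slope between 1 and 2: a Taylor's-law power relation.
r = 2.0
means, variances = [], []
for mu in np.logspace(0, 3, 20):
    p = r / (r + mu)                       # parameterized so the mean equals mu
    sample = rng.negative_binomial(r, p, size=200_000)
    means.append(sample.mean())
    variances.append(sample.var())

# Fit log V = log a + b log M by least squares to estimate the TL exponent b.
b, log_a = np.polyfit(np.log(means), np.log(variances), 1)
print(f"estimated exponent b = {b:.2f}")
```

The fitted exponent lands between 1 (Poisson-like behavior at small means) and 2 (the quadratic term dominating at large means).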

Biography:
Mark Brown is a Professor in the Department of Statistics at Columbia University. Prior to joining Columbia he was for many years a Professor of Mathematics at his undergraduate alma mater, CCNY. He has also held faculty appointments at Florida State University and Cornell University, as well as visiting positions at Stanford University, NYU, GWU, IBM Research Center, and the Memorial Sloan Kettering Cancer Center. He has been a Fellow of the ASA since 1975, and of the IMS since 1980. His research has been in applied probability models, inequalities in probability and statistics, first passage times, error bounds for exponential approximations, Markov chains, reliability theory, and quantitative sports analysis.


 

February 22, 2017

Speaker: Vinay Vaishampayan
Professor, Electrical Engineering, CUNY College of Staten Island
Title: Lattice Coding, Distributed Systems and Communication Complexity
Abstract: The communication complexity of the function computation problem is to determine the minimum amount of communication (measured in bits) required to compute a function f(x_1,x_2,...,x_n) when x_1,x_2,...,x_n are available at physically separated nodes of a network with links of finite capacity.
Communication complexity of function computation has been studied by computer scientists, mathematicians and engineers for several decades. Some of the initial motivations came from VLSI (Circuit Design and layout). Recent motivations for studying the problem range from 'cloud' wireless signal processing to distributed statistical analysis and detection in large networks.
I will briefly review some of the recent motivations for studying this problem, followed by a review of previous theoretical results. Much of this work focuses on exact function computation. I will then present some recent work on approximate function computation, where the function f(x_1,x_2,...,x_n) computes the nearest lattice vector to (x_1,x_2,...,x_n) in a given lattice \Lambda.
(This is joint work with Maiara Bollauf.)
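A minimal sketch of the nearest-lattice-vector computation that appears in the approximate-function-computation problem: for a lattice with basis matrix B, Babai's rounding heuristic gives an approximate nearest lattice vector (exact for the integer lattice Z^n). This is a generic illustration, not the coding scheme from the talk.

```python
import numpy as np

def babai_round(B, x):
    """Approximate nearest lattice vector to x in the lattice {B @ z : z integer}
    via Babai's rounding: solve B z = x, then round z coordinate-wise.
    Exact when B is the identity (the integer lattice Z^n); for general
    bases it is only an approximation."""
    z = np.rint(np.linalg.solve(B, x)).astype(int)
    return B @ z

# For Z^2 (B = I) the nearest lattice vector is plain coordinate-wise rounding.
B = np.eye(2)
print(babai_round(B, np.array([0.6, -1.2])))   # [ 1. -1.]
```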

 

March 22, 2017

Speaker: Giovanni Motta
Pontificia Universidad Catolica de Chile
Title: Semi-parametric dynamic factor models for non-stationary time series
Abstract: A novel dynamic factor model is introduced for multivariate non-stationary time series. In previous work, we developed asymptotic theory for a fully non-parametric approach based on the principal components of the estimated time-varying covariance and spectral matrices. This approach allows both common and idiosyncratic components to be non-stationary in time. However, a fully non-parametric specification of covariances and spectra requires the estimation of high-dimensional time-changing matrices. In particular, when the factors are loaded dynamically, the non-parametric approach delivers time-varying filters that are two-sided and high-dimensional. Moreover, the estimation of the time-varying spectral matrix strongly depends on the chosen bandwidths for smoothing over frequency and time. As an alternative, we propose a new approach in which the non-stationarity in the model is due to the low-dimensional latent factors. We distinguish between the (double asymptotic) framework where the dimension of the panel is large, and the case where the cross-section dimension is finite. For both scenarios we provide identification conditions, estimation theory, simulation results and applications to real data.
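The principal-components step underlying factor estimation can be sketched in a static, stationary toy case (the talk's setting has time-varying covariances and non-stationary factors; everything below is an illustrative simplification):

```python
import numpy as np

# Toy one-factor panel: X_t = lambda * f_t + noise (stationary, static loadings).
rng = np.random.default_rng(1)
T, N = 2000, 20
f = rng.standard_normal(T)                    # latent common factor
lam = rng.uniform(0.5, 1.5, size=N)           # factor loadings
X = np.outer(f, lam) + 0.3 * rng.standard_normal((T, N))

# Estimate the loading direction as the top eigenvector of the sample
# covariance matrix, then recover the factor (up to sign and scale) by
# projecting the panel onto that direction.
cov = np.cov(X, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)        # eigenvalues in ascending order
v = eigvecs[:, -1]                            # leading principal component
f_hat = X @ v                                 # estimated factor

corr = abs(np.corrcoef(f, f_hat)[0, 1])
print(f"|corr(true factor, PCA estimate)| = {corr:.3f}")
```

With a strong common component the PCA estimate tracks the latent factor almost perfectly; the talk's contribution concerns what replaces this step when covariances, spectra, and loadings change over time.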

 

March 29, 2017

Speaker: Asohan Amarasingham
Associate Professor, Department of Mathematics, CUNY City College
Title: Interpreting variation across trials in neurophysiology
Abstract: How do neurons code information, and communicate with one another via synapses? Experimental approaches to these questions are challenging because the spike outputs of a neuronal population are influenced by a vast array of factors. Such factors span all levels of description, but only a small fraction of these can be measured, or are even understood. As a consequence, it is not clear to what degree variations in unknown and uncontrolled variables alternately reveal or confound the underlying signals that observed spikes are presumed to encode. A related consequence is that these uncertainties also disturb our comfort with common models of statistical repeatability in neurophysiological signal analysis. I will describe these issues to contextualize tools developed to interpret large-scale electrophysiology recordings in behaving animals, focusing on conceptual issues. Applications will be suggested to the problems of synaptic and network identification in behavioral conditions as well as neural coding studies.

 

April 26, 2017

Speaker: Daniel Q. Naiman
Professor, Department of Applied Mathematics and Statistics, Whiting School of Engineering, Johns Hopkins University
Title: To Replace or Not to Replace in Finite Population Sampling
Abstract: A classical result in finite population sampling states that in equally-likely “simple” random sampling the sample mean is more reliable when we do not replace after each draw. This talk focuses on the case of weighted sampling, where it is natural to compare the Horvitz-Thompson inverse probability weighted estimator to the estimator based on sampling with replacement for a sampling design with the same marginals. A sufficient condition for the superiority of the sampling-without-replacement scheme is presented, based on sampling schemes in which bivariate selection probabilities take a special form. This form leads to interesting geometric considerations, and questions arise as to the existence of joint distributions and effective sampling algorithms. Instances in which it is better to sample without replacement are described. Similar results from the literature are reviewed, including results due to Bondesson & Traat (2013), and the connection to our work is explained.
This is joint work with Fred Torcaso.
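The classical result mentioned at the start of the abstract is easy to verify by exhaustive enumeration on a toy population (a generic illustration, not the weighted designs the talk studies):

```python
import numpy as np
from itertools import combinations, product

# Toy finite population; compare the exact variance of the sample mean under
# equally-likely simple random sampling with vs. without replacement (n=2, N=4).
y = np.array([1.0, 3.0, 6.0, 10.0])
N, n = len(y), 2

# Without replacement: every n-subset is equally likely.
var_wor = np.var([np.mean(s) for s in combinations(y, n)])
# With replacement: every ordered n-tuple is equally likely.
var_wr = np.var([np.mean(s) for s in product(y, repeat=n)])

print(var_wor < var_wr)   # True
```

Here the without-replacement variance equals the with-replacement variance shrunk by the finite population correction factor (N - n)/(N - 1), so not replacing always helps in the equal-probability case; the talk asks when the analogous comparison holds for weighted designs.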