Program
2025 Kansas Econometrics Workshop
Date: April 26 (Saturday), 2025
Venue: Alderson Room, The KU Memorial Union
07:30-08:15 Breakfast (Alderson Room, KU Memorial Union)
08:20-08:30 Opening Remarks: John Keating, Department of Economics, KU
Session I (Financial Econometrics): Chair, Zongwu Cai, University of Kansas
[1] 08:30-09:00 Liang Peng, Georgia State University
“Systemic Risk: CoVaR and Comovement”
[2] 09:00-09:30 Viktor Todorov, Northwestern University
“Observable versus Latent Risk Factors”
[3] 09:30-10:00 Zhipeng Liao, University of California at Los Angeles
“Testing for the Minimum Mean-Variance Spanning Set”
10:00-10:30 Coffee Break (Alderson Room, KU Memorial Union)
Session II (Applied Econometrics): Chair, Zongwu Cai, University of Kansas
[4] 10:30-11:00 Yuya Sasaki, Vanderbilt University
“Genuinely Robust Inference for Clustered Data”
[5] 11:00-11:30 Iván Fernández-Val, Boston University
“Conditional Rank-Rank Regression”
[6] 11:30-12:00 Li Gan, Texas A&M University
“Tiered Migration: Education Quality, Consumption Variety and Family Strategies in Development”
12:15-14:00 Lunch (Alderson Room, KU Memorial Union)
Session III (Time Series Econometrics): Chair, John Keating, University of Kansas
[7] 14:00-14:30 John Chao, University of Maryland
“Consistent Estimation, Variable Selection, and Forecasting in Factor-Augmented VAR Models”
[8] 14:30-15:00 Leland E. Farmer, University of Virginia
“Disagreement About the Term Structure of Inflation Expectations”
[9] 15:00-15:30 Zeqin Liu, Shanxi University of Finance and Economics
“Dual-Pillar Regulation and Growth-at-Risk: Insights from Dynamic Quantile Treatment Effects”
15:30-16:00 Coffee Break (Alderson Room, KU Memorial Union)
Session IV (Theoretical Econometrics): Chair, John Keating, University of Kansas
[10] 16:00-16:30 Peter R. Hansen, University of North Carolina at Chapel Hill
“Moments by Integrating the Moment-Generating Function”
[11] 16:30-17:00 Boris Hanin, Princeton University
“Bayesian Inference with Deep Neural Networks”
[12] 17:00-17:30 Xinwei Ma, University of California at San Diego
“Covariate-Adjusted Response Adaptive Design with Delayed Outcomes”
17:30-20:30 Reception and Dinner at the Hotel (Alderson Room, KU Memorial Union)
ABSTRACTS:
Systemic Risk: CoVaR and Comovement
Liang Peng, School of Business, Georgia State University
Systemic risk concerns the impact of an individual entity on a financial system, while (extreme) comovement measures one individual's (extreme) loss given another individual's (extreme) loss. A natural and challenging question is how to measure and forecast the collective impact of two individual losses on systemic risk, conditional on certain predictors and the comovement of these two individuals. In this paper, we introduce a novel systemic risk measure, CoVaRCM, which integrates both comovement and predictor variables to assess the joint effect of two individual losses on systemic risk. Since the comovement event in our model depends on predictors and has zero probability, we employ a three-quantile regression model to conduct efficient inference. We further propose two metrics to compare CoVaRCM with the more conventional CoVaR, which does not account for comovement. Our empirical analysis revisits the dataset from Adrian and Brunnermeier (2016) and demonstrates the significant influence of comovement on systemic risk. Hence, it is important for risk managers to vigilantly monitor not only key individual entities but also critical pairs within a financial system.
Observable versus Latent Risk Factors
Viktor Todorov, Kellogg School of Management, Northwestern University
We test for temporal stability in local linear projection coefficients of observable risk factors on latent ones embedded in the cross-section of asset prices and extracted via Principal Component Analysis (PCA). The test can be used for deciding if, and over what horizon, conventional linear asset pricing techniques can be employed for studying the pricing of observable factors. The proposed test exploits the fact that, under the null hypothesis, residuals from global linear projections of observable factors on latent ones, computed over a fixed time interval via PCA, should also be locally uncorrelated with the PCA factors. The test is fully nonparametric. Its asymptotic behavior is derived under a joint in-fill and large cross-section asymptotic setup. In an empirical application, we show that a linear relation between the market volatility factor and the latent systematic risk factors embedded in the cross-section of stock returns exists only over short periods of one trading day. Co-author: Yuan Liao.
Testing for the Minimum Mean-Variance Spanning Set
Zhipeng Liao, Department of Economics, UCLA
This paper explores the estimation and inference of the minimum spanning set (MSS), the smallest subset of risky assets that spans the mean-variance efficient frontier of the full asset set. We establish identification conditions for the MSS and develop a novel procedure for its estimation and inference. Our theoretical analysis shows that the proposed MSS estimator covers the true MSS with probability approaching 1 and converges asymptotically to the true MSS at any desired confidence level, such as 0.95 or 0.99. Monte Carlo simulations confirm the strong finite-sample performance of the MSS estimator. We apply our method to evaluate the relative importance of individual stock momentum and factor momentum strategies, along with a set of well-established stock return factors. The empirical results highlight factor momentum, along with several stock momentum and return factors, as key drivers of mean-variance efficiency. Furthermore, our analysis uncovers the sources of contribution from these factors and provides a ranking of their relative importance, offering new insights into their roles in mean-variance analysis. Co-authors: Bin Wang and Wenyu Zhou.
Genuinely Robust Inference for Clustered Data
Yuya Sasaki, Department of Economics, Vanderbilt University
Conventional methods for cluster-robust inference are inconsistent when clusters of unignorably large size are present. We formalize this issue by deriving a necessary and sufficient condition for consistency, a condition frequently violated in empirical studies. Specifically, 77% of empirical research articles published in American Economic Review and Econometrica during 2020–2021 do not satisfy this condition. To address this limitation, we propose two alternative approaches: (i) score subsampling and (ii) size-adjusted reweighting. Both methods ensure uniform size control across broad classes of data-generating processes where conventional methods fail. The first approach (i) has the advantage of ensuring robustness while retaining the original estimator. The second approach (ii) modifies the estimator but is readily implementable by practitioners using statistical software such as Stata and remains uniformly valid even when the cluster size distribution follows Zipf’s law. Extensive simulation studies support our findings, demonstrating the reliability and effectiveness of the proposed approaches.
Conditional Rank-Rank Regression
Iván Fernández-Val, Department of Economics, Boston University
Rank-rank regression is commonly employed in economic research as a way of capturing the relationship between two economic variables. It frequently features in studies of intergenerational mobility as the resulting coefficient, capturing the rank correlation between the variables, is easy to interpret and measures overall persistence. However, in many applications, it is common practice to include other covariates to account for differences in persistence levels between groups defined by the values of these covariates. In these instances, the resulting coefficients can be difficult to interpret. We propose the conditional rank-rank regression, which uses conditional ranks instead of unconditional ranks, to measure average within-group persistence. The difference between conditional and unconditional rank-rank regression coefficients can then be interpreted as a measure of between-group persistence. We develop a flexible estimation approach using distribution regression and establish a theoretical framework for large sample inference. An empirical study on intergenerational income mobility in Switzerland demonstrates the advantages of this approach. The study reveals stronger intergenerational persistence between fathers and sons compared to fathers and daughters, with the within-group persistence explaining 62% of the overall income persistence for sons and 52% for daughters. Smaller families and those with highly educated fathers exhibit greater persistence in economic status. Co-authors: Victor Chernozhukov, Jonas Meier, Aico van Vuuren and Francis Vella.
Tiered Migration: Education Quality, Consumption Variety and Family Strategies in Development
Li Gan, Department of Economics, Texas A&M University
This paper introduces a "tiered migration" framework explaining sequential mobility patterns in rural China. Using night-light data, we identify settlements of approximately 4,000 residents that experienced 25% population growth between 2010 and 2020 while smaller villages declined. Census data reveal significant within-county migration (38.8%) and prevalent family-separation arrangements. Our theoretical model introduces two innovations: education quality and consumption variety as location-specific factors driving household decisions, and explicit modeling of split-family arrangements as rational strategies. A multinomial logit estimation confirms strong state dependence in migration decisions, with income significantly influencing mobility patterns. This framework explains why small towns serve as critical stepping-stones in the development process, a pattern potentially applicable to other developing economies.
Consistent Estimation, Variable Selection, and Forecasting in Factor-Augmented VAR Models
John Chao, Department of Economics, University of Maryland
We introduce a completely consistent method for variable selection with high dimensional datasets. The method is presented in a framework where latent factors are estimated for the purpose of dimension reduction and is meant to serve as a complement to extant methods. We argue that the method is of particular interest in empirical settings where there may be many irrelevant predictor variables. The reason for this is that situations where there are “too many” irrelevant variables can lead to inconsistent factor estimates. Interestingly, our method yields a consistent estimate of the number of such irrelevant variables, which can aid the applied practitioner in assessing the strength of the underlying factor structure for a particular application. We also show that when factors constructed using our variable selection method are inputted into an estimated factor augmented vector autoregressive (FAVAR) model for the purpose of forecasting, the associated conditional mean forecasting functions can be consistently estimated. Monte Carlo results are presented indicating that the variable selection method performs well in finite samples. In an empirical illustration, we simulate a real-time FAVAR forecasting environment and show that using our selection method yields forecasts that are sometimes better than those associated with simply using all variables or using thresholding when constructing factors, even in small samples, suggesting that our method may offer a useful complement to extant variable selection methods when estimating factors. Co-authors: Yang Liu, Kaiwen Qiu, and Norman R. Swanson.
Disagreement About the Term Structure of Inflation Expectations
Leland E. Farmer, Department of Economics, University of Virginia
This paper presents a parsimonious framework for analyzing individual forecasters' inflation expectations across different horizons. We show that inflation expectations are well captured by two factors, level and slope, and we decompose expectations into contributions from long-term beliefs, public information, and private information. Our model uniquely captures heterogeneous responses to public information and how these responses amplify disagreement. Estimating the model with individual-level data from the Survey of Professional Forecasters, we find that in normal times, long-horizon disagreement is primarily driven by long-term beliefs, while short-horizon disagreement arises from private information. However, during economic downturns, variation in responses to public information becomes the primary driver of disagreement across all horizons. When forecasters interpret public information differently, monetary policy responses are delayed, and a price puzzle emerges, underscoring the importance of clear monetary policy communication in anchoring inflation expectations. Co-author: Hie Joo Ahn.
Dual-Pillar Regulation and Growth-at-Risk: Insights from Dynamic Quantile Treatment Effects
Zeqin Liu, School of Statistics, Shanxi University of Finance and Economics
Against a complex global economic and financial backdrop, the dual-pillar framework of monetary and macroprudential policies is vital for macroeconomic stability. Existing studies, however, concentrate mainly on the impact of policies on expected economic growth, ignoring the distributional features of policy effects and how those features change across business cycles. This paper brings in growth-at-risk and applies a dynamic quantile treatment effects method to analyze the impact of China's dual-pillar policies across cycle stages from 2007 to 2023 and to assess their effect on the distribution of economic growth. The study reveals asymmetric policy effects across cycle stages. In recessions, loose monetary policy combined with neutral or loose macroprudential policy eases downturn risks, while tight monetary policy combined with neutral or tight macroprudential policy worsens the recession. Notably, loose macroprudential policy, though not significantly affecting overall growth, alleviates recessionary pressures when the economy faces substantial downside risks; the combination of loose monetary and tight macroprudential policy can also ease recessionary pressure. In booms, tight monetary policy with neutral or tight macroprudential policy curbs overheating, while loose monetary policy with neutral or loose macroprudential policy may increase overheating risks; here too, the combination of loose monetary and tight macroprudential policy can restrain overheating. Our research shows that coordination of the two policies can achieve stable growth and control risks across cycle stages, enhancing economic resilience. It offers a theoretical basis for policymakers and underscores the importance of flexible use of the dual-pillar framework at different stages of the economy.
Moments by Integrating the Moment-Generating Function
Peter R. Hansen, Department of Economics, University of North Carolina at Chapel Hill
We introduce a novel method for obtaining a wide variety of moments of a random variable with a well-defined moment-generating function (MGF). We derive new expressions for fractional moments and fractional absolute moments, both central and non-central. The new moment expressions are relatively simple integrals that involve the MGF but do not require its derivatives. We label the new method CMGF because it uses a complex extension of the MGF and can be used to obtain complex moments. We illustrate the new method with three applications where the MGF is available in closed form, while the corresponding densities and the derivatives of the MGF are either unavailable or very difficult to obtain. Co-author: Chen Tong.
Bayesian Inference with Deep Neural Networks
Boris Hanin, Department of Operations Research and Financial Engineering, Princeton University
A core problem in deep learning theory is to make precise the notion of feature learning in deep neural networks. I will present joint work with Alexander Zlokapa (MIT) that addresses this problem in the setting of Bayesian inference with deep and wide fully connected neural networks. Specifically, I explain how a novel combination of data and model, given by the number of samples times the depth-to-width ratio, plays the role of an effective depth for the posterior. In particular, in the regime where the network width tends to infinity first, we recover the statement that neural networks are equivalent to linear models (kernel methods). But if the number of samples times the depth-to-width ratio is strictly positive, then we prove that Bayesian inference with neural networks is equivalent to using a data-dependent kernel method. For this data-aware kernel, we give an explicit computation of the resulting data-dependent feature map used for inference, thereby giving the first model of neural networks in which one can completely describe learning in the regime where depth, width, input dimension, and number of samples all diverge.
Covariate-Adjusted Response Adaptive Design with Delayed Outcomes
Xinwei Ma, Department of Economics, UCSD
Covariate-adjusted response adaptive (CARA) designs have gained widespread adoption for their clear benefits in enhancing experimental efficiency and participant welfare. These designs dynamically adjust treatment allocations during interim analyses based on participant responses and covariates collected during the experiment. However, delayed responses can significantly compromise the effectiveness of CARA designs, as they hinder timely adjustments to treatment assignments when certain participant outcomes are not immediately observed. In this manuscript, we propose a fully forward-looking CARA design that dynamically updates treatment assignments throughout the experiment as response delay mechanisms are progressively estimated. Our design strategy is informed by novel semiparametric efficiency calculations that explicitly account for outcome delays in a multi-stage adaptive experiment. Through both theoretical investigations and simulation studies, we demonstrate that our proposed design offers a robust solution for handling delayed outcomes in CARA designs, yielding significant improvements in both statistical power and participant welfare.