Statistics Seminar - Fall 2019
Please note the time and location for each event in the schedule below.
For questions about the seminar schedule, please contact Zuofeng Shang.
October 24
Zuofeng Shang, New Jersey Institute of Technology
Time and Location: 4pm/CKB 126
Statistical Optimality of Deep Neural Networks in Regression and Classification
In this talk, I will discuss the statistical optimality of deep neural network methods in regression and classification. In the first part, I will consider a linear regression model with instrumental variables that account for endogenous errors. Deep neural networks provide a flexible way to characterize the relationship between the design variables and the instrumental variables. The asymptotic distribution and semiparametric efficiency of the proposed estimator are established. In the second part, I will consider nonparametric classification in which the classifiers are characterized by deep neural networks. Tight upper and lower bounds for the classification risk are established. The proposed methods enjoy the so-called intrinsic dimension phenomenon: their convergence rates are governed by the intrinsic dimension of the data rather than the ambient dimension.
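As a toy illustration of the intrinsic dimension phenomenon (a minimal sketch, not the speaker's code; the data-generating process and network sizes below are hypothetical), the following Python snippet fits a feedforward network to labels driven by a 2-dimensional latent variable embedded in a 20-dimensional ambient space:

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)
    n, latent_dim, ambient_dim = 2000, 2, 20

    # Labels depend only on a 2-d latent variable; a fixed linear embedding
    # lifts the covariates into 20 dimensions, so the intrinsic dimension is 2.
    z = rng.uniform(-1.0, 1.0, size=(n, latent_dim))
    y = (np.sin(3.0 * z[:, 0]) + z[:, 1] ** 2 > 0.5).astype(int)
    embed = rng.standard_normal((latent_dim, ambient_dim))
    x = z @ embed + 0.05 * rng.standard_normal((n, ambient_dim))

    x_tr, x_te, y_tr, y_te = train_test_split(x, y, random_state=0)
    clf = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500,
                        random_state=0).fit(x_tr, y_tr)
    print("test accuracy:", clf.score(x_te, y_te))

In settings like this, rate results of the kind the talk describes predict that the achievable classification risk scales with the latent dimension (here 2) rather than the ambient dimension (here 20).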
October 31
Jiahui Yu, Boston University
Time and Location: 4pm/CKB 126
Smoothing Spline Semiparametric Density Models
Density estimation plays a fundamental role in many areas of statistics and machine learning. Semiparametric density models are flexible in incorporating domain knowledge and uncertainty regarding the shape of the density function. The existing literature on semiparametric density models is scattered and lacks a systematic framework. We consider a unified framework based on reproducing kernel Hilbert spaces for modeling, estimation, computation, and theory. We propose a general (nonlinear) semiparametric density model that includes many existing models as special cases. In this talk, I will focus on the theoretical results for the proposed models. In particular, we establish joint consistency and derive convergence rates of the proposed estimators. In addition, we obtain the convergence rate of the parametric component in the standard Euclidean norm, as well as the convergence rate of the overall density function in the symmetrized Kullback-Leibler distance.
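For orientation, one standard special case of such a model is a smoothing spline density with a partially parametric log-density; the decomposition below is an illustrative assumption on my part, not necessarily the general nonlinear model of the talk. In LaTeX notation:

\[
p_{\theta,f}(x) = \frac{\exp\{\eta_{\theta,f}(x)\}}{\int_{\mathcal{X}} \exp\{\eta_{\theta,f}(u)\}\,du},
\qquad
\eta_{\theta,f}(x) = g(x;\theta) + f(x), \quad f \in \mathcal{H},
\]

where $g(\cdot;\theta)$ is the parametric component and $\mathcal{H}$ is a reproducing kernel Hilbert space. Estimation then proceeds by penalized maximum likelihood,

\[
\min_{\theta,\, f \in \mathcal{H}} \; -\frac{1}{n}\sum_{i=1}^{n} \eta_{\theta,f}(X_i)
+ \log \int_{\mathcal{X}} \exp\{\eta_{\theta,f}(u)\}\,du
+ \frac{\lambda}{2}\,\|f\|_{\mathcal{H}}^{2},
\]

with the smoothing parameter $\lambda$ balancing fit against the roughness of $f$.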
November 11
Han Xiao, Rutgers University
Time and Location: 11am/Campus Center 225
Autoregressive Models for Matrix-Valued Time Series
In economics, finance, and many other scientific fields, observations at each time point may naturally take the form of a matrix, with potentially large dimensions. We propose an autoregressive model for matrix-valued time series. It preserves the matrix structure and admits corresponding interpretations. Compared with classical vector autoregressive models, it is also more parsimonious because the autocorrelations are introduced by separating the row-wise and column-wise dependencies. A further dimension reduction can be achieved by requiring the coefficient matrices to be of low rank. Estimation procedures and their theoretical properties are investigated. The performance of the models and the various estimators is demonstrated with simulated and real examples.
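For concreteness, a bilinear first-order matrix autoregression of the kind described in the abstract can be written as follows (the notation here is chosen to match the description, as an assumption):

\[
X_t = A\, X_{t-1}\, B^{\top} + E_t,
\qquad X_t, E_t \in \mathbb{R}^{m \times n},\ A \in \mathbb{R}^{m \times m},\ B \in \mathbb{R}^{n \times n},
\]

where $A$ captures row-wise dependence and $B$ captures column-wise dependence. The vectorization identity $\operatorname{vec}(A X B^{\top}) = (B \otimes A)\,\operatorname{vec}(X)$ shows that this is a VAR(1) with a Kronecker-structured coefficient matrix, so the $(mn)^2$ free parameters of an unrestricted VAR are replaced by at most $m^2 + n^2$.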
Updated November 15, 2019