Seminars are held on Thursdays at 4:00PM. Please note the location for each event in the schedule below, which will either be Cullimore 611 (CULM 611) or the Campus Center. For questions about the seminar schedule, please contact Antai Wang.
|Date||Location||Speaker, Affiliation, and Title||Host|
|March 1||Campus Center||Prof. Thomas Mathew, University of Maryland, Baltimore County
Statistical Methods for Cost-Effectiveness Analysis: A Selected Review
Identifying treatments or interventions that are cost-effective (more effective at lower cost) is clearly important in health policy decision making, especially in the allocation of health care resources. Various measures of cost-effectiveness that are informative, intuitive and simple to explain have been suggested in the literature, along with statistical inference concerning them. A popular and widely used measure is the incremental cost-effectiveness ratio (ICER), defined as the ratio between the difference of expected costs and the difference of expected effectiveness in two populations receiving two treatments. Although easy to interpret as the additional cost per unit of effectiveness gained, the ICER, being a ratio, is difficult to interpret in certain situations, for example when the difference in effectiveness is close to zero, and it also presents challenges for statistical inference. Another measure proposed in the literature is the incremental net benefit (INB), which is the difference between the incremental cost and the incremental effectiveness after multiplying the latter by a “willingness-to-pay” parameter. Both the ICER and the INB are functions of population means, and inference concerning them has been widely investigated under a bivariate normal distribution, or under a log-normal/normal distribution, for the cost and effectiveness measures. In the talk, we will briefly review these, focusing on recent developments. An alternative probability-based approach will also be introduced, referred to as the cost-effectiveness probability (CEP): the probability that the first treatment will be less costly and more effective compared to the second one. Inference on the CEP will also be discussed. Numerical results and illustrative examples will be given.
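The three measures described in the abstract can be illustrated with a minimal sketch. All numbers below are made-up simulated data, not from the talk, and `lam` is a hypothetical value of the willingness-to-pay parameter:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical simulated cost and effectiveness data for two treatment arms
# (illustrative values only; not from the talk).
cost1, eff1 = rng.normal(1000, 100, 500), rng.normal(2.0, 0.3, 500)
cost2, eff2 = rng.normal(800, 100, 500), rng.normal(1.5, 0.3, 500)

d_cost = cost1.mean() - cost2.mean()  # incremental cost
d_eff = eff1.mean() - eff2.mean()     # incremental effectiveness

# ICER: additional cost per unit of effectiveness gained
icer = d_cost / d_eff

# INB: incremental net benefit, with willingness-to-pay parameter lam
lam = 500.0
inb = lam * d_eff - d_cost

# CEP: estimated probability that treatment 1 is both less costly and
# more effective than treatment 2, here by pairing independent draws
cep = np.mean((cost1 < cost2) & (eff1 > eff2))
```

Note how the CEP stays well defined even when `d_eff` is near zero, which is exactly the situation where the ICER becomes hard to interpret.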
|March 22||CULM 611||Prof. Jiangtao Gou, Fox Chase Cancer Center, Temple University Health System
Multiple Endpoints in Clinical Trials: P-value Based Tests, Dependence Assumptions, and Group Sequential Procedures
The design and analysis of clinical trials often involve multiple endpoints. It is desirable to correctly and effectively adjust for multiplicity in order to ensure valid statistical inference in confirmatory studies. In this presentation, we first introduce a family of p-value based procedures which have a step-up structure similar to the Hochberg procedure and uniformly improve upon it. We further discuss the dependence assumptions required for controlling the error rate with correlated endpoints. In addition, we study the problem of testing a primary and a secondary endpoint, subject to a gatekeeping constraint, using a group sequential design.
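As background for the step-up structure mentioned above, a minimal sketch of the classical Hochberg procedure (the baseline the talk's procedures improve upon, not the improved family itself) might look like:

```python
import numpy as np

def hochberg(pvals, alpha=0.05):
    """Hochberg step-up procedure: return a boolean rejection vector.

    With ordered p-values p_(1) <= ... <= p_(m), step up from the largest:
    find the largest i with p_(i) <= alpha / (m - i + 1) and reject the
    hypotheses with the i smallest p-values.
    """
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    order = np.argsort(p)               # indices sorted by ascending p-value
    reject = np.zeros(m, dtype=bool)
    for i in range(m - 1, -1, -1):      # rank i+1, threshold alpha/(m - i)
        if p[order[i]] <= alpha / (m - i):
            reject[order[: i + 1]] = True
            break
    return reject
```

For example, with p-values (0.01, 0.04, 0.03, 0.5) at alpha = 0.05, only the hypothesis with p = 0.01 is rejected, since no larger ordered p-value clears its step-up threshold.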
|March 29||CULM LH3||Prof. Yichao Wu, University of Illinois at Chicago
Nonparametric Estimation of Multivariate Mixtures
A multivariate mixture model is determined by three elements: the number of components, the mixing proportions, and the component distributions. Assuming that the number of components is given and that each mixture component has independent marginal distributions, we propose a nonparametric method to estimate the component distributions in a multivariate mixture model. The basic idea is to convert the estimation of the density functions into a problem of estimating their coordinates under a good set of basis functions. Specifically, we construct a set of basis functions using conditional density functions and recover the coordinates of the component distributions under this basis. Furthermore, we show that our estimators of the component density functions are consistent. In a simulation study, we compare our algorithm with other existing nonparametric methods for estimating component distributions in mixture models under the assumption of conditionally independent marginals.
|April 5||Campus Center||Dr. Xiaodong Luo, Sanofi
Points of Consideration for Non-constant Hazard Ratios in Survival Analyses
In this talk, I will discuss some issues in the design of survival trials accounting for complex scenarios such as delayed treatment effect, treatment dilution, and treatment crossover. These scenarios often lead to non-proportional hazards, making study design and monitoring more difficult. I will compare two popular methods (the log-rank test vs. the restricted mean survival time) through examples in these non-proportional-hazards scenarios.
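As background for the comparison above, the restricted mean survival time (RMST) is the area under the Kaplan-Meier curve up to a truncation time tau, and unlike the hazard ratio it remains interpretable under non-proportional hazards. A minimal, illustrative implementation (assuming right-censored data and no special tie handling beyond the simple loop) could be:

```python
import numpy as np

def km_rmst(times, events, tau):
    """RMST up to tau: area under the Kaplan-Meier curve on [0, tau].

    times  : event/censoring times
    events : 1 if the time is an observed event, 0 if censored
    """
    order = np.argsort(times)
    t = np.asarray(times, dtype=float)[order]
    d = np.asarray(events, dtype=int)[order]
    at_risk = len(t)
    surv, last_t, rmst = 1.0, 0.0, 0.0
    for i in range(len(t)):
        if t[i] > tau:
            break
        if d[i] == 1:
            rmst += surv * (t[i] - last_t)       # area of the step so far
            surv *= (at_risk - 1) / at_risk      # Kaplan-Meier update
            last_t = t[i]
        at_risk -= 1                             # censored or failed: leaves risk set
    rmst += surv * (tau - last_t)                # final step out to tau
    return rmst
```

With events at times 1, 2, 3 and tau = 3, the curve steps from 1 to 2/3 to 1/3, giving RMST = 1 + 2/3 + 1/3 = 2.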
|April 19||CULM 611||Prof. Yaqun Wang, Rutgers School of Public Health
Inference of Gene Regulatory Network Through Adaptive Dynamic Bayesian Network Modeling
The reconstruction of gene regulatory networks (GRN) from gene expression data can yield new insights into the causality of the transcriptional and cellular processes that make up a complex living system. Dynamic Bayesian network (DBN) modeling has been increasingly used to reconstruct GRN for the temporal pattern of transcriptional interactions over a time course, but this approach requires expression data measured at even time intervals. In practice, the time points at which gene expression is recorded are usually unevenly spaced, determined on the basis of distinct phases of biological processes. We reformulate DBN modeling to accommodate any irregularity and sparsity of time series data. The model is implemented with functional clustering, which classifies dynamic genes into distinct clusters by adaptively fitting mean expression curves for each cluster, followed by a step of interpolating expression data at missing time points. The model is also equipped with the unique power to integrate data from multiple expression experiments and therefore provides an unprecedented tool to elucidate a comprehensive picture of GRN. Analyses of real data sets from a surgical study and extensive simulation studies demonstrate the usefulness and utility of the new model.
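The interpolation step described above can be sketched in miniature. This uses plain linear interpolation for simplicity, whereas the talk's model fits smooth mean curves per cluster; the measurement values are hypothetical:

```python
import numpy as np

# Hypothetical expression measurements for one gene at unevenly spaced
# time points (e.g., days chosen to match biological phases).
t_obs = np.array([0.0, 1.0, 3.0, 7.0, 14.0])
expr = np.array([0.2, 0.8, 1.5, 1.1, 0.4])

# Interpolate onto an evenly spaced grid, as standard DBN modeling
# requires even time intervals (linear interpolation here; the talk
# uses adaptively fitted mean expression curves instead).
t_even = np.linspace(0.0, 14.0, 15)          # days 0, 1, ..., 14
expr_even = np.interp(t_even, t_obs, expr)
```

The resampled series `expr_even` can then feed a standard DBN fit, which is the point of the preprocessing step.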
|CULM 611||Prof. Xiaoli Gao, University of North Carolina at Greensboro
(Seminar information to follow)
|May 10||Campus Center, Rm. 215||Prof. Ao Yuan, Georgetown University
Sub-Group Analysis with Nonparametric Unimodal Symmetric Error Distribution
In clinical trials, and in moving towards precision medicine, a key issue is to test whether there exists a subgroup of patients who may benefit the most from a treatment, e.g., treatment-favorable vs. non-favorable subgroups, and then to classify the patients into such subgroups. Existing parametric methods are non-robust, and the commonly used classification rules do not consider the priority of the treatment-favorable subgroup. To address these issues, we propose a semiparametric model with the sub-densities specified nonparametrically. For nonparametric mixture identifiability, the sub-density is assumed symmetric and unimodal, so that its nonparametric maximum likelihood estimate can be found. The Wald statistic is used to test the existence of subgroups, while the Neyman-Pearson rule is used to classify each subject. Asymptotic properties are derived, simulation studies are conducted to evaluate the performance of the method, and it is then used to analyze a real data set.
Updated: April 25, 2018