Prof. Judea Pearl
University of California, Los Angeles, USA

Title: To be announced
Biography: Professor Judea Pearl is a recipient of the A.M. Turing Award, often called the “Nobel Prize in Computing”. The foundations of modern artificial intelligence are built on his breakthrough work, paving the way for AI technologies such as driverless cars and voice recognition software. Judea Pearl is Chancellor Professor of Computer Science and Statistics and director of the Cognitive Systems Laboratory at UCLA, where he conducts research in artificial intelligence, human reasoning, and the philosophy of science. He is the author of Heuristics (1983), Probabilistic Reasoning (1988), and Causality (2000, 2009), and a founding editor of the Journal of Causal Inference.

Asst. Prof. Ahmed Alaa
University of California, Berkeley, USA

Title: Conformal Meta-learners for Predictive Inference of Individual Treatment Effects
Abstract: This talk investigates the problem of machine learning (ML)-based predictive inference on individual treatment effects (ITEs). Previous work has focused primarily on developing ML-based meta-learners that can provide point estimates of the conditional average treatment effect (CATE); these are model-agnostic approaches for combining intermediate nuisance estimates to produce estimates of CATE. In this talk, I discuss our recent paper in which we develop conformal meta-learners, a general framework for issuing predictive intervals for ITEs by applying the standard conformal prediction (CP) procedure on top of CATE meta-learners. We focus on a broad class of meta-learners based on two-stage pseudo-outcome regression and develop a stochastic ordering framework to study their validity. We show that inference with conformal meta-learners is marginally valid if their (pseudo-outcome) conformity scores stochastically dominate oracle conformity scores evaluated on the unobserved ITEs. Additionally, we prove that commonly used CATE meta-learners, such as the doubly robust learner, satisfy a model- and distribution-free stochastic (or convex) dominance condition, making their conformal inferences valid for practically relevant levels of target coverage. Whereas existing procedures conduct inference on nuisance parameters (i.e., potential outcomes) via weighted CP, conformal meta-learners enable direct inference on the target parameter (ITE). Numerical experiments show that conformal meta-learners provide valid intervals with competitive efficiency while retaining the favorable point estimation properties of CATE meta-learners.
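
A minimal sketch of the idea, assuming AIPW-style pseudo-outcomes and split conformal prediction; the nuisance inputs mu0, mu1, e and the random-forest second stage are illustrative choices, not the talk's reference implementation:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def dr_pseudo_outcomes(X, A, Y, mu0, mu1, e):
    # Doubly robust (AIPW) pseudo-outcomes: unbiased proxies for the ITE
    # given outcome models mu0(x), mu1(x) and propensity score e(x).
    return (mu1(X) - mu0(X)
            + A * (Y - mu1(X)) / e(X)
            - (1 - A) * (Y - mu0(X)) / (1 - e(X)))

def conformal_meta_learner(X_tr, phi_tr, X_cal, phi_cal, X_test, alpha=0.1):
    # Stage 2: regress pseudo-outcomes on covariates, then apply split
    # conformal prediction with absolute residuals on a calibration set.
    model = RandomForestRegressor().fit(X_tr, phi_tr)
    scores = np.abs(phi_cal - model.predict(X_cal))   # conformity scores
    n = len(scores)
    q = np.quantile(scores, min(1.0, np.ceil((n + 1) * (1 - alpha)) / n))
    pred = model.predict(X_test)
    return pred - q, pred + q                         # predictive interval for the ITE
```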

Asst. Prof. Mona Azadkia
ETH Zurich, Switzerland

Title: A Simple Measure of Conditional Dependence
Abstract: We propose a coefficient of conditional dependence between two random variables, Y and Z, given a random vector X, based on an i.i.d. sample. The coefficient has a long list of desirable properties, the most important of which is that under absolutely no distributional assumptions, it converges to a limit in [0,1]; the limit is 0 if and only if Y and Z are conditionally independent given X, and is 1 if and only if Y is equal to a measurable function of Z given X. Using this statistic, we devise a new variable selection algorithm, called Feature Ordering by Conditional Independence (FOCI), which is model-free, has no tuning parameters, and is provably consistent under sparsity assumptions. We also introduce some recent advances based on this measure.
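
A hedged sketch of the estimator's structure (ties are ignored for brevity, and X, Z are assumed to be 2-D numpy arrays; an illustration rather than a reference implementation):

```python
import numpy as np
from scipy.stats import rankdata
from sklearn.neighbors import NearestNeighbors

def codec(Y, Z, X):
    # Azadkia-Chatterjee-style coefficient T_n(Y, Z | X), built from ranks
    # of Y and nearest neighbors in X-space and (X, Z)-space.
    R = rankdata(Y)                             # R_i = rank of Y_i

    def nn_index(W):
        # Index of each point's nearest neighbor, excluding the point itself.
        nbrs = NearestNeighbors(n_neighbors=2).fit(W)
        return nbrs.kneighbors(W, return_distance=False)[:, 1]

    N = nn_index(X)                             # nearest neighbor in X-space
    M = nn_index(np.hstack([X, Z]))             # nearest neighbor in (X, Z)-space
    num = np.sum(np.minimum(R, R[M]) - np.minimum(R, R[N]))
    den = np.sum(R - np.minimum(R, R[N]))
    return num / den   # near 0: conditional independence; near 1: Y = f(Z) given X
```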

Asst. Prof. Matteo Bonvini
Rutgers University, USA

Title: On the possibility of doubly robust root-n inference
Abstract: We study the problem of constructing an estimator of the average treatment effect (ATE) that exhibits doubly-robust asymptotic linearity (DR-AL). This is a stronger requirement than doubly-robust consistency. In fact, a DR-AL estimator can yield asymptotically valid Wald-type confidence intervals even when the propensity score or the outcome model is inconsistently estimated. In contrast, the celebrated doubly-robust, augmented-IPW estimator requires consistent estimation of both nuisance functions for root-n inference. Previous authors have considered this problem (van der Laan, 2014; Benkeser et al., 2017; Dukes et al., 2021) and provided sufficient conditions under which the proposed estimators are DR-AL. Such conditions are typically stated in terms of "high-level nuisance error rates" needed for root-n inference. In this paper, we build upon their work and establish sufficient and more explicit smoothness conditions under which a DR-AL estimator can be constructed. We also consider the case of slower-than-root-n convergence rates and study minimax optimality within the structure-agnostic framework proposed by Balakrishnan et al. (2023). Finally, we clarify the connection between DR-AL estimators and those based on higher-order influence functions (Robins et al., 2017) and complement our theoretical findings with simulations.

Dr. Guangyi Chen
MBZUAI, UAE

Title: Causal representation learning for video understanding
Abstract: To be announced.

Asst. Prof. Carlos Cinelli
UW-Seattle, USA

Title: Long Story Short: Omitted Variable Bias in Causal Machine Learning
Abstract: We derive general, yet simple, sharp bounds on the size of the omitted variable bias for a broad class of causal parameters that can be identified as linear functionals of the conditional expectation function of the outcome. Such functionals encompass many of the traditional targets of investigation in causal inference studies, such as (weighted) averages of potential outcomes, average treatment effects (including subgroup effects, such as the effect on the treated), (weighted) average derivatives, and policy effects from shifts in covariate distribution--all for general, nonparametric causal models. Our construction relies on the Riesz-Fréchet representation of the target functional. Specifically, we show how the bound on the bias depends only on the additional variation that the latent variables create both in the outcome and in the Riesz representer for the parameter of interest. Moreover, in many important cases (e.g., average treatment effects and average derivatives) the bound is shown to depend on easily interpretable quantities that measure the explanatory power of the omitted variables. Therefore, simple plausibility judgments on the maximum explanatory power of omitted variables (in explaining treatment and outcome variation) are sufficient to place overall bounds on the size of the bias. Furthermore, we use debiased machine learning to provide flexible and efficient statistical inference on learnable components of the bounds. Finally, empirical examples demonstrate the usefulness of the approach.
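
A hedged sketch of the key step behind such bounds (notation ours): write the target as θ = E[m(W; g)] = E[α(W) g(W)] with long regression g and Riesz representer α, and let g_s, α_s be their short counterparts that omit the latent variables. Then

```latex
\[
|\theta_s - \theta|
\;=\;
\big|\mathbb{E}\big[(g - g_s)(\alpha - \alpha_s)\big]\big|
\;\le\;
\underbrace{\|g - g_s\|_{2}}_{\text{omitted variation in the outcome}}
\cdot
\underbrace{\|\alpha - \alpha_s\|_{2}}_{\text{omitted variation in the Riesz representer}}
\]
```

by Cauchy-Schwarz; the talk's sharp bounds then re-express the two norms through interpretable measures of the omitted variables' explanatory power.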


Asst. Prof. Yifan Cui
Zhejiang University, China

Title: Policy Learning with Distributional Welfare
Abstract: In this paper, we explore optimal treatment allocation policies that target distributional welfare. Most literature on treatment choice has considered utilitarian welfare based on the conditional average treatment effect (ATE). While average welfare is intuitive, it may yield undesirable allocations, especially when individuals are heterogeneous (e.g., with outliers), which is the very reason individualized treatments were introduced in the first place. This observation motivates us to propose an optimal policy that allocates the treatment based on the conditional quantile of individual treatment effects (QoTE). Depending on the choice of the quantile probability, this criterion can accommodate a policymaker who is either prudent or negligent. The challenge of identifying the QoTE lies in its requirement for knowledge of the joint distribution of the counterfactual outcomes, which is generally hard to recover even with experimental data. Therefore, we introduce minimax policies that are robust to model uncertainty. A range of identifying assumptions can be used to yield more informative policies. For both stochastic and deterministic policies, we establish the asymptotic bound on the regret of implementing the proposed policies. In simulations and two empirical applications, we compare optimal decisions based on the QoTE with decisions based on other criteria. The framework can be generalized to any setting where welfare is defined as a functional of the joint distribution of the potential outcomes.
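
A hedged sketch of the criterion (notation ours): with potential outcomes Y(1), Y(0) and covariates X, the QoTE at quantile level τ and the induced allocation rule can be written as

```latex
\[
q_\tau(x) \;=\; \inf\Big\{ \delta \in \mathbb{R} :\;
P\big(Y(1) - Y(0) \le \delta \,\big|\, X = x\big) \ge \tau \Big\},
\qquad
\pi_\tau(x) \;=\; \mathbf{1}\{\, q_\tau(x) > 0 \,\}.
\]
```

A small τ encodes a prudent policymaker and a large τ a negligent one; since the joint law of (Y(1), Y(0)) is not point-identified, the talk works with minimax versions of this rule.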


Asst. Prof. Xiaowu Dai
University of California, Los Angeles, USA

Title: Kernel ordinary differential equations
Abstract: The ordinary differential equation (ODE) is widely used in modelling biological and physical processes in science. A new reproducing kernel-based approach is proposed for the estimation and inference of ODEs given noisy observations. The functional forms in the ODE are assumed to be known, or restricted to be linear or additive, and pairwise interactions are allowed. Sparse estimation is performed to select individual functionals and construct confidence intervals for the estimated signal trajectories. The estimation optimality and selection consistency of kernel ODE are established under both the low-dimensional and high-dimensional settings, where the number of unknown functionals can be smaller or larger than the sample size. The proposal builds upon the smoothing spline analysis of variance (SS-ANOVA) framework, but tackles several important problems that are not yet fully addressed, and extends the existing methods of dynamic causal modeling.
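
A hedged sketch of a simple two-stage kernel "gradient matching" scheme for a one-dimensional ODE y' = F(y), shown only to fix ideas; the kernel ODE method in the talk is built on SS-ANOVA and is substantially more elaborate:

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

def fit_kernel_ode(t, y, bandwidth=1.0, alpha=1e-2):
    # Stage 1: smooth the noisy trajectory y(t) with kernel ridge regression
    # and numerically differentiate the smoothed curve.
    smoother = KernelRidge(kernel="rbf", gamma=1.0 / bandwidth**2, alpha=alpha)
    smoother.fit(t[:, None], y)
    y_hat = smoother.predict(t[:, None])
    dy_hat = np.gradient(y_hat, t)
    # Stage 2: regress the estimated derivative on the state to estimate the
    # right-hand-side function F in y' = F(y).
    rhs = KernelRidge(kernel="rbf", alpha=alpha)
    rhs.fit(y_hat[:, None], dy_hat)
    return smoother, rhs
```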

Dr. Yuhao Deng
University of Michigan, USA

Title: Causal Inference in Multi-state Models with Multiple Intermediate Events
Abstract: Multi-state models are widely used in the biomedical sciences to describe disease progression mechanisms. However, causal inference in multi-state models is challenging due to the complicated interaction between treatment and history in transition rates. We adopt the counterfactual cumulative incidence of an event as the estimand. Treatment effects are then defined by contrasting counterfactual cumulative incidences under different combinations of transition-specific treatment components. Under a dismissible treatment components condition, we derive the semiparametric efficient estimators for the counterfactual cumulative incidences and treatment effects. We also provide hypothesis testing methods for the treatment effects. The proposed framework has three potential uses: (1) to detect which events the treatment has an effect on, (2) to estimate path-specific treatment effects, and (3) to infer optimal dynamic treatment regimes. Through a real-world application on stem cell transplantation, we illustrate the usefulness of the proposed framework.


Assoc. Prof. Ivan Diaz
New York University, USA

Title: Recanting Twins: Addressing Intermediate Confounding in Mediation Analysis
Abstract: Online learning is a popular paradigm for decision making in dynamic, even adversarial, environments. With the advent of big data and advanced technologies, decentralized online learning has received increasing attention over the last decade; here, a group of agents make decisions via local communication in dynamic environments. It is usually classified into decentralized online optimization (DOO) and online games (OG), according to whether the agents are cooperative or noncooperative. This talk briefly introduces decentralized online learning and presents some cutting-edge developments in three directions: DOO with coupled inequality constraints, decentralized online aggregative optimization, and online games with time-varying coupled inequality constraints. For each scenario, decentralized online algorithms are proposed with guaranteed performance, i.e., sublinear static/dynamic regret.

Asst. Prof. Kara E. Rudolph
Columbia University, USA

Title: Improving Efficiency in Transporting Average Treatment Effects
Abstract: We develop flexible, semiparametric estimators of the average treatment effect (ATE) transported to a new population (the "target population") that offer potential efficiency gains. First, we propose two one-step semiparametric estimators that incorporate knowledge of which covariates are effect modifiers and which are both effect modifiers and differentially distributed between the source and target populations. These estimators can be used even when not all covariates are observed in the target population; one requires that only effect modifiers are observed, and the other requires that only those modifiers that are also differentially distributed are observed. Second, we propose a collaborative one-step estimator for when researchers do not have knowledge about which covariates are effect modifiers and which differ in distribution between the populations, although it requires all covariates to be measured in the target population. We use simulation to compare finite sample performance across our proposed estimators and existing estimators of the transported ATE, including in the presence of practical violations of the positivity assumption. Lastly, we apply our proposed estimators to a large-scale housing trial.

Dr. Konstantin Genin
University of Tübingen, Germany

Title: Feasible Success Concepts for Confounded Causal Discovery
Abstract: Existing causal discovery algorithms are often evaluated using two success criteria, one too strong to be feasible and the other too weak to be satisfactory. The unachievable criterion—uniform consistency—requires that a discovery algorithm identify the correct causal structure at a known sample size. The weak but achievable criterion—pointwise consistency—requires only that one identify the correct causal structure in the limit. We investigate two intermediate success criteria—decidability and progressive solvability—that are stricter than mere consistency but weaker than uniform consistency. To do so, we review several topological theorems characterizing the causal discovery problems that are decidable and progressively solvable. We show, under a variety of common modeling assumptions, that there is no uniformly consistent procedure for identifying the direction of a causal edge, but there are statistical decision procedures and progressive solutions. We focus on faithful linear models in which the error terms are either non-Gaussian or contain no Gaussian components, where the latter is relatively novel to the causal discovery literature (the FLAMNGCo, or “flamingo”, model). Special attention is given to which success criteria remain feasible when confounders are present.

Assoc. Prof. Zijian Guo
Rutgers University, USA

Title: Robust Causal Inference with Possibly Invalid Instruments: Post-selection Problems and a Solution Using Searching and Sampling
Abstract: Instrumental variable methods are among the most commonly used causal inference approaches for dealing with unmeasured confounders in observational studies. The presence of invalid instruments is the primary concern for practical applications, and a fast-growing area of research is inference for the causal effect with possibly invalid instruments. This paper illustrates that the existing confidence intervals may undercover when the valid and invalid instruments are hard to separate in a data-dependent way. To address this, we construct uniformly valid confidence intervals that are robust to mistakes in separating valid and invalid instruments. We propose to search for a range of treatment effect values that lead to sufficiently many valid instruments. We further devise a novel sampling method, which, together with searching, leads to a more precise confidence interval. Our proposed searching and sampling confidence intervals are uniformly valid and achieve the parametric length under the finite-sample majority and plurality rules. We apply our proposal to examine the effect of education on earnings. The proposed method is implemented in the R package RobustIV, available from CRAN.
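
A hedged sketch of the searching idea only, in summary-statistic notation; the cutoff below is a Bonferroni-type placeholder, and the talk's actual procedure adds uniform thresholds and the sampling refinement:

```python
import numpy as np

def searching_interval(Gamma, gamma, se, grid):
    # Gamma_j: instrument-outcome associations; gamma_j: instrument-exposure
    # associations. For a valid instrument j and true effect beta,
    # pi_j(beta) = Gamma_j - beta * gamma_j should be close to zero.
    J = len(Gamma)
    cutoff = np.sqrt(2.01 * np.log(J)) * se          # illustrative threshold
    # Majority rule: keep beta if more than half the instruments look valid.
    kept = [beta for beta in grid
            if np.sum(np.abs(Gamma - beta * gamma) <= cutoff) > J / 2]
    return (min(kept), max(kept)) if kept else None
```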

Ms. Yiyi Huo
UW-Seattle, USA

Title: On the adaptation of causal forests to manifold data
Abstract: Researchers often hold the belief that random forests are "the cure to the world's ills". But how exactly do they achieve this? Focusing on the recently introduced causal forests, we aim to contribute to an ongoing research trend towards answering this question, proving that causal forests can adapt to the unknown covariate manifold structure. In particular, our analysis shows that a causal forest estimator can achieve the optimal rate of convergence for estimating the conditional average treatment effect, with the covariate dimension automatically replaced by the manifold dimension.

Assoc. Prof. Lucas Janson
Harvard University, USA

Title: Conditional Independence Testing and Conformal Inference with Adaptively Collected Data
Abstract: Randomization testing is a fundamental method in statistics, enabling inferential tasks such as testing for (conditional) independence of random variables, constructing confidence intervals in semiparametric location models, and constructing (by inverting a permutation test) model-free prediction intervals via conformal inference. Randomization tests are exactly valid for any sample size, but their use is generally confined to exchangeable data. Yet in many applications, data is routinely collected adaptively via, e.g., (contextual) bandit and reinforcement learning algorithms or adaptive experimental designs. In this paper we present a general framework for randomization testing on adaptively collected data (despite its non-exchangeability) that uses a weighted randomization test, for which we also present computationally tractable resampling algorithms for various popular adaptive assignment algorithms, data-generating environments, and types of inferential tasks. Finally, we demonstrate via a range of simulations the efficacy of our framework for both testing and confidence/prediction interval construction.

Prof. Zhichao Jiang
Sun Yat-sen University, China

Title: Principal Stratification with Continuous Post-Treatment Variables
Abstract: Post-treatment variables often complicate causal inference. They appear in many scientific problems, including noncompliance, truncation by death, mediation, and surrogate endpoint evaluation. Principal stratification is a strategy that adjusts for the potential values of the post-treatment variables, defined as the principal strata. It allows for characterizing treatment effect heterogeneity across principal strata and unveiling the mechanism of the treatment on the outcome related to post-treatment variables. However, the existing literature has primarily focused on binary post-treatment variables, leaving the case with continuous post-treatment variables largely unexplored, due to the complexity of infinitely many principal strata that challenge both the identification and estimation of causal effects. We fill this gap by providing nonparametric identification and semiparametric estimation theory for principal stratification with continuous post-treatment variables. We propose to use working models to approximate the underlying causal effect surfaces and derive the efficient influence functions of the corresponding model parameters.

Prof. Theis Lange
University of Copenhagen, Denmark

Title: The potentials for Large Language Models when doing causal inference
Abstract: In this talk, I will provide an overview of how we at the University of Copenhagen work towards harnessing the power of Large Language Models (LLMs) and AI in general to do better causal inference. I will detail some of the use cases that we see as most promising. Finally, I will discuss how causal inference can better exploit the potential of LLMs and AI in general.

Prof. Mark van der Laan
University of California, Berkeley, USA

Title: Targeted Learning, HAL, and Causal Inference for Generating Real World Evidence in Drug Development
Abstract: Targeted Learning follows a general roadmap for 1) accurately translating the real world into a formal statistical estimation problem in terms of a causal estimand, a corresponding statistical estimand, and a statistical model; 2) a corresponding template for the construction of a targeted maximum likelihood estimator (TMLE) of the statistical estimand; and finally 3) a sensitivity analysis addressing the possible causal gap. The TMLE represents an optimal plug-in machine learning-based estimator of the estimand combined with formal statistical inference. The three pillars of TMLE are super-learning, the Highly Adaptive Lasso (HAL), and the TMLE-update step, where the latter has various choices such as CV-TMLE/C-TMLE and the recently developed adaptive TMLE (Lars van der Laan et al., 2023). Through super-learning it can incorporate high-dimensional and diverse data sources, such as images and NLP features, and state-of-the-art algorithms tailored for such data sources. To optimize finite sample performance, the precise specification of the TMLE can be tailored to the precise experiment and statistical estimation problem in question, while being theoretically grounded, optimal, and benchmarked. We provide a motivation, explanation, and overview of targeted learning and the key role of super-learning and HAL; discuss some of the key choices and considerations in specifying the TMLE-step; and discuss (a priori specified) statistical analysis plan (SAP) construction based on targeted learning, incorporating outcome-blind simulations to choose the best specification of the SAP. We also discuss a Sentinel and FDA real-world evidence (RWE) demonstration project of targeted learning demonstrating SAP specification on real data.

Asst. Prof. Lihua Lei
Stanford University, USA

Title: Inference for Synthetic Controls via Refined Placebo Tests
Abstract: The synthetic control method is often applied to problems with one treated unit and a small number of control units. A common inferential task in this setting is to test null hypotheses regarding the average treatment effect on the treated. Inference procedures that are justified asymptotically are often unsatisfactory due to (1) small sample sizes that render large-sample approximation fragile and (2) simplification of the estimation procedure that is implemented in practice. An alternative is permutation inference, which is related to a common diagnostic called the placebo test. It has provable Type-I error guarantees in finite samples without simplification of the method, when the treatment is uniformly assigned. Despite this robustness, the placebo test suffers from low resolution since the null distribution is constructed from only N reference estimates, where N is the sample size. This creates a barrier for statistical inference at a common level like α=0.05, especially when N is small. We propose a novel leave-two-out procedure that bypasses this issue, while still maintaining the same finite-sample Type-I error guarantee under uniform assignment for a wide range of N. Unlike the placebo test, whose Type-I error always equals the theoretical upper bound, our procedure often achieves a lower unconditional Type-I error than theory suggests; this enables useful inference in the challenging regime when α < 1/N. Empirically, our procedure achieves a higher power when the effect size is reasonably large and a comparable power otherwise. We generalize our procedure to non-uniform assignments and show how to conduct sensitivity analysis. From a methodological perspective, our procedure can be viewed as a new type of randomization inference different from permutation or rank-based inference, which is particularly effective in small samples.
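
For context, a minimal sketch of the classic placebo test's resolution barrier (illustrative background, not the proposed leave-two-out procedure):

```python
import numpy as np

def placebo_test_pvalue(effect_treated, placebo_effects):
    # Rank the treated unit's estimated effect among the placebo estimates
    # obtained by reassigning "treatment" to each control unit in turn.
    N = len(placebo_effects) + 1          # total number of units
    rank = 1 + np.sum(np.abs(placebo_effects) >= abs(effect_treated))
    return rank / N   # never below 1/N, so no rejection is possible at alpha < 1/N
```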


Dr. Jiangmeng Li
Institute of Software, Chinese Academy of Sciences, China

Title: Modeling Causal Mechanisms Underlying OOD Generalization via Interventional and Counterfactual Lens
Abstract: Existing machine learning methods, when faced with out-of-distribution (OOD) scenarios, tend to learn spurious correlations or shortcuts between data and labels rather than exploring the underlying causal mechanisms that generate the labels. Therefore, accurately modeling OOD problems using causal mechanisms has become a highly prominent research topic. Following the ladder of causation, we explore building the OOD structural causal model (SCM) from the interventional and counterfactual perspective, thereby identifying confounders and controlling confounding bias. Accordingly, we determine the correctness of collider-specific spurious correlation in interventional SCM for OOD. A general and simple graphical representation of counterfactuals is proposed to empower researchers to determine the independence between cross-world variables and identify the intrinsic confounder in counterfactual SCM for OOD.


Assoc. Prof. Shuwei Li
Guangzhou University, China

Title: Instrumental variable estimation of complier causal treatment effect with interval-censored data
Abstract: Assessing the causal effect of a treatment on a time-to-event outcome is of key interest in many scientific investigations. An instrumental variable (IV) is a useful tool to mitigate the impact of endogenous treatment selection and attain unbiased estimation of the causal treatment effect. Existing development of IV methodology, however, has not attended to outcomes subject to interval censoring, which are ubiquitous in studies with intermittent follow-up but challenging to handle in terms of both theory and computation. In this work, we fill this important gap by studying a general class of causal semiparametric transformation models with interval-censored data. We propose a nonparametric maximum likelihood estimator of the complier causal treatment effect. Moreover, we design a reliable and computationally stable expectation-maximization (EM) algorithm, which has a tractable objective function in the maximization step via the use of Poisson latent variables. The asymptotic properties of the proposed estimators, including consistency, asymptotic normality, and semiparametric efficiency, are established with empirical process techniques. We conduct extensive simulation studies and an application to a colorectal cancer screening dataset, showing satisfactory finite-sample performance of the proposed method as well as its prominent advantages over naive methods.


Assoc. Prof. Wei Li
Renmin University of China, China

Title: Inference of Possibly Bi-directional Causal Relationships with Invalid Instrumental Variables
Abstract: Learning causal relationships between pairs of complex traits from observational studies is of great interest across various scientific domains. However, most existing methods assume the absence of unmeasured confounding and restrict relationships between two traits to be uni-directional, which may be violated in real-world systems. In this paper, we address the challenge of causal discovery and effect estimation for two traits while accounting for unmeasured confounding and potential feedback loops. By leveraging possibly invalid instrumental variables, we establish sufficient identifying conditions for bi-directional and uni-directional models, respectively. Then we introduce a data-driven procedure to detect whether the causal relationship is bi-directional. When it is detected to be uni-directional, we propose another procedure to determine the causal direction. We show that our method can consistently identify the true direction between two traits. Additionally, we provide estimation and inference results about causal effects along the identified direction. The proposed estimators are asymptotically normal under certain regularity conditions. We demonstrate the proposed method via simulations and real data examples from UK Biobank.

Asst. Prof. Xinran Li
University of Chicago, USA

Title: Robust Sensitivity Analysis for Matched Observational Studies
Abstract: Observational studies provide invaluable opportunities for causal inference, but they often suffer from biases due to pretreatment differences between treated and control units. Matching has been a popular approach to reduce observed covariate imbalance. To tackle unmeasured confounding, Rosenbaum proposed a sensitivity analysis framework for matched observational studies, which adapts and extends the conventional randomization inference for randomized experiments. However, Rosenbaum's analysis may exhibit two potential limitations. First, it focuses mainly on sharp null hypotheses, say Fisher's null of no effect for any unit, which can be restrictive in practice and cannot accommodate unknown individual effect heterogeneity. Second, it considers mainly a uniform bound on the strength of hidden confounding across matched sets, under which the sensitivity analysis will lose power if extreme hidden confounding is suspected, e.g., some units may be almost certain to take the treatment or control due to unmeasured confounding. In this talk, we will extend Rosenbaum's framework to overcome both limitations. First, we propose sensitivity analysis for quantiles of individual treatment effects, without any constant-effects assumptions. Second, we propose sensitivity analysis based on quantiles of hidden biases, which can strengthen the evidence supporting a causal finding.
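
For reference, the uniform bound in question is Rosenbaum's sensitivity model (standard formulation, notation ours): for any two units i and j in the same matched set, the odds of receiving treatment may differ by at most a factor Γ ≥ 1,

```latex
\[
\frac{1}{\Gamma}
\;\le\;
\frac{\pi_i/(1-\pi_i)}{\pi_j/(1-\pi_j)}
\;\le\;
\Gamma ,
\]
```

where π_i is unit i's probability of treatment and Γ = 1 recovers a randomized experiment. The talk replaces this single uniform Γ with bounds on quantiles of the hidden biases.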

Asst. Prof. Xinyi Li
Clemson University, USA

Title: Functional Individualized Treatment Regimes with Imaging Features
Abstract: Precision medicine seeks to discover an optimal personalized treatment plan and thereby provide informed and principled decision support, based on the characteristics of individual patients. With recent advancements in medical imaging, it is crucial to incorporate patient-specific imaging features in the study of individualized treatment regimes. We propose a novel, data-driven method to construct interpretable image features that can be incorporated, along with other features, to guide optimal treatment regimes. The proposed method treats imaging information as a realization of a stochastic process, and employs smoothing techniques in estimation. We show that the proposed estimators are consistent under mild conditions. The proposed method is applied to a dataset provided by the Alzheimer's Disease Neuroimaging Initiative.

Asst. Prof. Muxuan Liang
University of Florida, USA

Title: A General Framework for Incorporating Systematic Prediction Errors in Individualized Treatment Rules
Abstract: Estimating individualized treatment rules (ITRs) from observational data or clinical trials with non-adherence is challenging due to possible unmeasured confounding bias. An instrumental variable (IV) can be used to provide partial identification of the possible values of the conditional average treatment effects (CATEs). When making treatment decisions under partially identified CATEs, the optimal treatment decisions may be uncertain, and the current literature fails to convey such uncertainty in making treatment decisions. In this work, we adopt the idea of a reject option and develop a new class of 'optimal' ITRs to guide treatment decisions and convey the identification uncertainty. The reject option, beyond the original treatment options, identifies patients who are susceptible to identification uncertainty and allows collecting additional information to determine the optimal decisions for these patients. To achieve this, we define a novel loss function, which connects the reject option with uncertain treatment decisions, and derive the associated IV-optimal ITRs. Our framework allows users to control the size of the subgroup receiving the reject option by taking into account the relative cost of collecting additional information to ascertain decisions, or the risk of taking the reject option, compared with the cost of an incorrect treatment decision. To estimate the IV-optimal ITRs with a reject option, we develop a weighted classification framework with a modified hinge loss function, where the weights are non-smooth transformations of nuisance parameters. We further propose an augmented empirical risk minimization approach to estimate the IV-optimal ITRs, which achieves a fast convergence rate even if the nuisance parameters are estimated using nonparametric or machine learning methods.


Asst. Prof. Zhaotong Lin
Florida State University, USA

Title: A Robust Cis-Mendelian Randomization Method with Application to Drug Target Discovery
Abstract: Mendelian randomization (MR) uses genetic variants as instrumental variables (IVs) to investigate the causal relationship between two traits, an exposure and an outcome. Compared to conventional MR using only independent IVs selected from the whole genome, cis-MR focuses on a single genomic region using only cis-SNPs. For example, using cis-pQTLs for each circulating protein as an exposure for a disease opens an economical path for drug target discovery. Despite the significance of such applications, only a few methods are robust to (horizontal) pleiotropy and linkage disequilibrium (LD) of cis-SNPs as IVs. In this work, we propose a cis-MR method based on constrained maximum likelihood, called cisMR-cML, which accounts for LD and (horizontal) pleiotropy in a general likelihood framework. It is robust to the violation of any of the three valid IV assumptions with strong theoretical support. We further clarify the severe but largely neglected consequence of the current practice of modeling marginal effects, instead of conditional effects, of SNPs in cis-MR analysis. Numerical studies demonstrated the advantage of our method over other existing methods. We applied our method in a drug-target analysis for coronary artery disease (CAD), including a proteome-wide application, in which three potential drug targets for CAD, PCSK9, COLEC11 and FGFR1, were identified.


Asst. Prof. Lin Liu
Shanghai Jiao Tong University, China

Title: DNA-SE: Towards Deep Neural-Nets Assisted Semiparametric Estimation
Abstract: Semiparametric statistics play a pivotal role in a wide range of domains, including but not limited to missing data, causal inference, and transfer learning. In many settings, semiparametric theory leads to (nearly) statistically optimal procedures that yet involve numerically solving Fredholm integral equations of the second kind. Traditional numerical methods, such as polynomial approximations or grid-based methods, are difficult to scale to multi-dimensional problems. Alternatively, statisticians may choose to approximate the original integral equations by ones with closed-form solutions, resulting in computationally more efficient, but statistically suboptimal or even incorrect, procedures. To bridge this gap, we propose a new framework that formulates the semiparametric estimation problem as a bi-level optimization problem, and we develop a scalable algorithm called Deep Neural-Nets Assisted Semiparametric Estimation (DNA-SE) that leverages the universal approximation property of deep neural nets (DNNs) to streamline semiparametric procedures. Through extensive numerical experiments and a real data analysis, we demonstrate the numerical and statistical advantages of DNA-SE over traditional methods.
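
For reference, a Fredholm integral equation of the second kind has the standard form (notation ours): the unknown function b solves

```latex
\[
b(x) \;=\; f(x) \;+\; \lambda \int K(x, y)\, b(y)\, \mathrm{d}y ,
\]
```

for a known kernel K and forcing function f; DNA-SE parametrizes b with a deep neural network inside a bi-level optimization, instead of discretizing the integral on a grid.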


Assoc. Prof. Ruixuan Liu
The Chinese University of Hong Kong, China

Title: Double Robust Bayesian Inference on Average Treatment Effects
Abstract: We propose a double robust Bayesian inference procedure on the average treatment effect (ATE) under unconfoundedness. Our robust Bayesian approach involves two important modifications: first, we adjust the prior distributions of the conditional mean function; second, we correct the posterior distribution of the resulting ATE. Both steps make use of a pilot estimator of the Riesz representor. We prove asymptotic equivalence of our Bayesian estimator and double robust frequentist estimators by establishing a new semiparametric Bernstein-von Mises theorem under double robustness; i.e., the lack of smoothness of conditional mean functions can be compensated by high regularity of the propensity score and vice versa. Consequently, the resulting Bayesian point estimator internalizes the bias correction and the Bayesian credible sets form confidence intervals with asymptotically exact coverage probability. In simulations, our robust Bayesian procedure leads to significant bias reduction of point estimation and accurate coverage of confidence intervals, especially when the dimensionality of covariates is large relative to the sample size and the underlying functions become complex. We illustrate our method in an application to the National Supported Work Demonstration.

Asst. Prof. Zhonghua Liu
Columbia University, USA

Title: Robust Mendelian Randomization Coupled with AlphaFold2 for Drug Target Discovery
Abstract: Mendelian randomization (MR) uses genetic variants as instrumental variables (IVs) to infer the causal effect of a modifiable exposure on the outcome of interest by removing unmeasured confounding bias. However, some genetic variants might be invalid IVs due to violations of the core IV assumptions. MR analysis with invalid IVs might lead to biased causal effect estimates and misleading scientific conclusions. To address this challenge, we propose a novel MR method that first Selects valid genetic IVs and then performs Post-selection Inference (MR-SPI) based on two-sample genome-wide summary statistics. We analyze 912 plasma proteins using the large-scale UK Biobank proteomics data in 54,306 participants and identify 7 proteins significantly associated with the risk of Alzheimer's disease. We employ AlphaFold2 to predict the 3D structural alterations of these 7 proteins due to missense genetic variations, providing new insights into their biological functions in disease etiology.

Asst. Prof. Francesco Locatello
Institute of Science and Technology Austria

Title: Causal representation learning for science
Abstract: Machine learning and AI have the potential to transform data-driven scientific discovery, enabling accurate predictions for several scientific phenomena. Much of the current progress is driven by scale and, conveniently, many scientific questions require analyzing massive amounts of data. At the same time, in scientific applications predictions are often incorporated into broader analyses to draw new insights that are causal in nature. In this talk, I will discuss the open challenges in solving real-world causal downstream tasks in the sciences. For this, I will present ISTAnt, the first real-world benchmark for estimating causal effects from high-dimensional observations in experimental ecology. Next, I will discuss contrastive and decoder-based causal representation learning methods and our efforts to scale them to real-world climate data. For this, I will connect causal representation learning with recent advances in dynamical systems discovery that, when combined, enable learning scalable and controllable models with identifiable trajectory-specific parameters.

Prof. Wenbin Lu
North Carolina State University, USA

Title: Off-Policy Evaluation with Irregularly-Spaced, Outcome-Dependent Observation Times
Abstract: While the classic off-policy evaluation (OPE) literature commonly assumes decision time points to be evenly spaced for simplicity, in many real-world scenarios, such as those involving user-initiated visits, decisions are made at irregularly-spaced and potentially outcome-dependent time points. For a more principled policy evaluation, this paper introduces a novel OPE framework which concerns not only the (state-action) decision-making process but also an observation process that dictates the time points at which decisions are made. Two distinct value functions, derived from cumulative rewards and integrated rewards respectively, are formulated within the framework. Statistical inference for each value function is developed under modified Markov and time-homogeneous assumptions. The validity of our method is further supported by theoretical results, simulation studies, and a real-world application in dental treatment.

Assoc. Prof. Huijuan Ma
East China Normal University, China

Title: Quantile Regression Models for Compliers in Randomized Experiments with Noncompliance
Abstract: Understanding the causal effect of a treatment in randomized experiments with noncompliance is of fundamental interest in many domains. Under the instrumental variable (IV) framework, compliers are the only subpopulation directly relevant to the assessment of the causal treatment effect. In this paper, we study flexible quantile regression models for compliers with and without treatment. We establish unbiased estimating equations by investigating the relationship between the observed data and latent subgroup indicators. A novel iterative algorithm is proposed to solve the discontinuous equations that involve the unknown parameters in a complicated way. The complier average treatment effect and quantile treatment effects can both be estimated. The consistency and asymptotic normality of the proposed estimators are established. Numerical results, including extensive simulation studies and a real data analysis of the Oregon health insurance experiment, are presented to show the practical utility of the proposed method.

Asst. Prof. Daniel Malinsky
Columbia University, USA

Title: Post-selection inference for causal effects after causal discovery
Abstract: Algorithms for constraint-based causal discovery select graphical causal models from among a space of possible candidates (e.g., all directed acyclic graphs) by executing a sequence of conditional independence tests. These may be used to inform the estimation of causal effects (e.g., average treatment effects) when there is uncertainty about which covariates ought to be adjusted for, or which variables act as confounders versus mediators. However, naively using the data twice, for model selection and estimation, would lead to invalid confidence intervals. Moreover, if the selected graph is incorrect, the inferential claims may apply to a chosen functional that is distinct from the actual causal effect. We propose an approach to post-selection inference that is based on a resampling procedure, which essentially performs causal discovery multiple times with randomly varying intermediate test statistics. Then, an estimate of the target causal effect and corresponding confidence sets are constructed from a union of individual graph-based estimates and intervals. We show that this construction has asymptotically correct coverage. Though most of our exposition focuses on the PC algorithm for learning directed acyclic graphs and the multivariate Gaussian case for simplicity, the approach is general and modular, so it can be used with other conditional-independence-based discovery algorithms and (semi-)parametric families. This is joint work with Ting-Hsuan Chang and Zijian Guo.

Assoc. Prof. Kosuke Morikawa
Osaka University, Japan

Title: Singular Propensity Score: Reducing Variance in Weighted Estimators
Abstract: In the fields of survey sampling, missing data analysis, and causal inference, researchers often have access to only a subset of the population of interest, which can lead to biased results. To correct this bias, weighting the estimating equations with the inverse of the propensity scores is a common method. However, this approach encounters challenges when propensity scores are extremely close to 0 or 1, as the inverse probabilities may diverge, thereby increasing the variance of the estimates. To address this issue, this talk introduces a singular propensity score that incorporates upper and lower bounds. We propose an information criterion, specifically designed for selecting these bounds based on the observed data, and a new weighted estimator that aims to minimize the mean squared error.
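
A hedged sketch of the underlying idea of bounding propensity scores before weighting; the talk's contribution, an information criterion for choosing the bounds from data, is not shown, and the fixed bounds below are placeholders:

```python
import numpy as np

def bounded_ipw_mean(Y, A, e_hat, lower=0.05, upper=0.95):
    # Truncate the estimated propensity scores to [lower, upper] so the
    # inverse weights cannot explode near 0 or 1 (at the price of some bias).
    e_b = np.clip(e_hat, lower, upper)
    w = A / e_b                          # inverse-probability weights (treated arm)
    return np.sum(w * Y) / np.sum(w)     # Hajek-type weighted mean of Y(1)
```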

Asst. Prof. Mengjiao Peng
East China Normal University, China

Title: Doubly Robust Estimation of Optimal Individual Treatment Regimes in Semi-supervised Framework
Abstract: In many health-care datasets like the electronic health record (EHR) dataset, collecting labeled data can be a laborious and expensive task, resulting in a scarcity of labeled data while unlabeled data is already available. This has sparked a growing interest in developing methods to leverage the abundant unlabeled data. We thus develop several types of semi-supervised methods for estimating optimal treatment regimes that utilize both labeled and unlabeled data in a general model-free framework, with efficiency gains compared to supervised estimation methods. Our proposed method first utilizes a flexible imputation technique through single index kernel smoothing to exploit the unlabeled data, which performs well even in cases of high-dimensional covariates, with a follow-up estimation to determine the optimal treatment regime by directly optimizing the imputed value function. Additionally, in cases where the propensity score function is unknown, as in observational studies, we also develop a doubly robust semi-supervised estimation method based on a class of monotonic index models. Our estimators are shown to be consistent with the cube root convergence rate and exhibit a nonstandard asymptotic distribution characterized as the maximizer of a centered Gaussian process with a quadratic drift. Simulations demonstrate the efficiency and robustness of the proposed method compared to existing approaches in finite samples. Additionally, a practical example from the ACTG 175 study illustrates its real-world application.

Prof. Guoyou Qin
Fudan University, China

Title: Estimating overall hazard ratio by using both global and local propensity score models with multi-site data
Abstract: Objectives: We propose a propensity score weighting-based method using both global and local propensity score models to estimate the overall hazard ratio (HR) in multi-site studies. Methods: We first fit the global propensity score model for the entire population and the local propensity score model for each site, and then generate an empirical likelihood weight using both the fitted global and local propensity score models. A weighted Cox regression model is then used to estimate the overall HR with the obtained weights. We further extend our method to allow multiple global or local propensity score models. Results: Simulation studies show that our proposed method greatly improves the performance in estimating the overall HR, with small empirical bias and the lowest root mean squared error (RMSE) over a broad spectrum of settings. Using data from the Surveillance, Epidemiology, and End Results (SEER) database, we observe that combination therapy of radiotherapy and chemotherapy improves survival in breast cancer patients compared with radiotherapy alone, and our proposed method yields a more precise estimate with a smaller standard error. Conclusion: Our proposed method improves the estimation efficiency with negligible bias in estimating the overall HR in multi-site studies. The proposed method also improves the likelihood of correctly specifying the propensity score model.

Assoc. Prof. Yumou Qiu
Peking University, China

Title: Uniform Inference for Local Conditional Quantile Treatment Effect Curve with High-Dimensional Covariates
Abstract: To be announced.

Asst. Prof. Zhimei Ren
University of Pennsylvania, USA

Title: Sensitivity Analysis of Individual Treatment Effects: A Robust Conformal Inference Approach
Abstract: In this talk, I will introduce a model-free framework for sensitivity analysis of individual treatment effects (ITEs), building upon ideas from conformal inference. For any unit, our procedure reports the Γ-value, a number which quantifies the minimum strength of confounding needed to explain away the evidence for the ITE. Our approach rests on the reliable predictive inference of counterfactuals and ITEs in situations where the training data is confounded. Under the marginal sensitivity model of Tan (2006), we characterize the shift between the distribution of the observations and that of the counterfactuals. We first develop a general method for predictive inference of test samples from a shifted distribution; we then leverage this to construct covariate-dependent prediction sets for counterfactuals. No matter the value of the shift, these prediction sets (resp. approximately) achieve marginal coverage if the propensity score is known exactly (resp. estimated). We also describe a distinct procedure that attains coverage conditional on the training data. In the latter case, we prove a sharpness result showing that for certain classes of prediction problems, the prediction intervals cannot possibly be tightened. We verify the validity and performance of the new methods via simulation studies and apply them to analyze real datasets. This is joint work with Ying Jin and Emmanuel Candès.
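
For reference, the marginal sensitivity model of Tan (2006) can be written as follows (a standard formulation, notation ours): with observed propensity e(x) and complete propensity e(x, y) that also conditions on the potential outcome y,

```latex
\[
\Gamma^{-1}
\;\le\;
\frac{e(x)\,\{1 - e(x, y)\}}{\{1 - e(x)\}\; e(x, y)}
\;\le\;
\Gamma
\qquad \text{for all } x, y ,
\]
```

so Γ = 1 means no hidden confounding; a unit's Γ-value is the smallest Γ at which the evidence for its ITE can be explained away.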

Assoc. Prof. Bruno Ribeiro
Purdue University, USA

Title: Leveraging Causal Invariances for Improved Zero-Shot Domain Generalization in Neural Networks
Abstract: In this talk, we explore how the imposition of different types of causal invariances within neural networks forces them to learn domain-transferable, invariant patterns that significantly bolster zero-shot domain and out-of-distribution (OOD) generalization. We will start by discussing how invariances improve the abstract reasoning capabilities of neural networks for zero-shot domain generalization in knowledge graphs. Then, we extend the exploration of this causal invariance-centric design principle to a diverse array of OOD generalization scenarios, ranging from causal link prediction and computer networking to emerging frontiers in Physics-Informed Machine Learning. This talk aims to discuss the transformative potential of causality and invariance in improving robustness and domain transferability in neural networks.

Prof. Donald B. Rubin
Tsinghua University, China

Title: Is there a role for counternull sets in statistical practice?
Abstract: To be announced.

Prof. Shohei Shimizu
Shiga University and RIKEN, Japan

Title: Causal Discovery Based on Non-Gaussianity and Nonlinearity
Abstract: Statistical causal inference is a methodology that combines domain knowledge and data to support decision-making based on an understanding of causal mechanisms. A central problem in science is to elucidate the causal mechanisms underlying natural phenomena and human behavior, and statistical causal inference offers various tools to study such mechanisms. However, due to a lack of background knowledge, preparing the causal graphs required for performing statistical causal inference is often difficult. To alleviate this difficulty, much work has been conducted to develop statistical methods for estimating causal relationships, i.e., the causal structure of variables, from observational data obtained from sources other than randomized experiments. Statistical causal discovery is the methodology that uses data to infer the causal structure of variables. This talk outlines the basic ideas of statistical causal discovery and introduces some recent advances in the field. In particular, I will focus on methods based on non-Gaussianity and nonlinearity that can handle unobserved variables.
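
As a small illustration of non-Gaussianity-based discovery, the following uses the open-source lingam Python package (assumed installed via pip install lingam), which implements LiNGAM-family methods; the simulated data are ours:

```python
import numpy as np
import lingam

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 1000)              # non-Gaussian cause
y = 2.0 * x + rng.uniform(-1, 1, 1000)    # linear effect with non-Gaussian noise
X = np.column_stack([x, y])

model = lingam.DirectLiNGAM()
model.fit(X)
print(model.causal_order_)       # expected causal order: [0, 1], i.e., x -> y
print(model.adjacency_matrix_)   # estimated connection strengths
```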

Dr. Xinwei Shen
ETH Zurich, Switzerland

Title: Causality-oriented robustness: exploiting general additive interventions
Abstract: Since distribution shifts are common in real-world applications, there is a pressing need for developing prediction models that are robust against such shifts. Existing frameworks, such as empirical risk minimization or distributionally robust optimization, either lack generalizability for unseen distributions or rely on postulated distance measures. Alternatively, causality offers a data-driven and structural perspective to robust predictions. However, the assumptions necessary for causal inference can be overly stringent, and the robustness offered by such causal models often lacks flexibility. In this paper, we focus on causality-oriented robustness and propose Distributional Robustness via Invariant Gradients (DRIG), a method that exploits general additive interventions in training data for robust predictions against unseen interventions, and naturally interpolates between in-distribution prediction and causality. In a linear setting, we prove that DRIG yields predictions that are robust among a data-dependent class of distribution shifts. We extend our approach to the semi-supervised domain adaptation setting to further improve prediction performance.

Prof. Dylan Small
University of Pennsylvania, USA

Title: Exploratory Data Analysis, Confirmatory Data Analysis and Replication in the Same Observational Study: A Two Team Cross-Screening Approach to Studying the Effect of Unwanted Pregnancy on Mothers' Later Life Outcomes
Abstract: Exploratory data analysis, confirmatory data analysis and replication are three important aspects of building strong evidence from observational studies.  Exploratory data analysis, confirmatory data analysis and replication are often thought of as being done on separate studies.  However, for settings where randomized experiments are impossible to conduct for ethical reasons and observational studies must be relied on, it is common that there is a data set with unique strengths.  We develop a two-team cross screening approach that allows for exploratory data analysis, confirmatory data analysis and replication to be done in the same observational study data set.  We apply the approach to study the effect of unwanted pregnancy on mothers’ later life outcomes using data from the Wisconsin Longitudinal Study.   This is joint work with Samrat Roy, Marina Bogomolov, Ruth Heller, Amy Claridge and Tishra Beeson.

Dr. Matthew Smith
London School of Hygiene and Tropical Medicine, UK

Title: A New Weighted Estimator for Causal Inference in the Relative Survival Setting
Abstract: In public health research, the causal effect of a treatment (or exposure/intervention/policy) on cause-specific death after a disease diagnosis is often of interest. Other causes of death prevent the event of interest from happening, thus defining a competing risk setting. In such settings, the total (or direct) causal effect can be estimated when the cause of death is known, because the overall hazard of death can be decomposed into the sum of cause-specific hazards (due to the disease and due to other causes) (Young et al., 2020). However, this relies on a strong assumption of knowing the exact cause of death, which, if violated, could lead to biased estimates of the causal effect: in population-based settings, records for the cause of death are often unreliable or poorly recorded. Alternatively, one could estimate the causal effect of the treatment in a relative survival setting by using external information obtained from population life tables (stratified by sociodemographic characteristics) to estimate the other-cause mortality hazard and then estimate the disease-specific mortality hazard (Pohar Perme et al., 2016). The relationship between these hazards can be arranged to give a probability that an observed death is due to the cause of interest or other causes (Maringe et al., 2018). In a population-based cohort of patients with a disease of interest, we propose to weight the overall mortality (i.e., regardless of the cause of death) by the probability of cause type. After applying the weights, the total causal effect is estimated using the g-formula in a conventional competing risk analysis, thereby providing marginal causal interpretations for the estimand of interest (Young et al., 2020). We will illustrate the performance of our proposed methodological framework using a simulation study and highlight its benefits and interpretation in a practical application using population-based cancer data.

Asst. Prof. Armeen Taeb
University of Washington, USA

Title: Convex Mixed-Integer Programming for Causal Discovery
Abstract: Causal discovery is a fundamental problem in statistical learning with broad applications. In this talk, we tackle the problem of learning Bayesian networks corresponding to linear structural equation models (SEMs) using mixed-integer programming. Although the optimal solution to this mathematical program has desirable statistical properties under certain conditions, the state-of-the-art optimization solvers are not able to obtain provably optimal solutions to the existing mathematical formulations for medium-size problems within reasonable computational times. To bridge this gap, we tackle the problem from both computational and statistical perspectives. In particular, we propose a new optimization strategy based on learning the layering of the DAG and show that this strategy offers significant computational advantages compared to existing approaches, especially when information from a super-structure (such as the Markov blanket) is utilized. We then propose a concrete early stopping criterion to terminate the branch-and-bound process in order to obtain a near-optimal solution to the mixed-integer program, and establish the consistency of this approximate solution. This is joint work with Tong Xu, Simge Küçükyavuz, and Ali Shojaie.

Dr. Bingkai Wang
University of Pennsylvania, USA

Title: Model-Robust and Efficient Covariate Adjustment for Cluster-Randomized Experiments
Abstract: Cluster-randomized experiments are increasingly used to evaluate interventions in routine practice conditions, and researchers often adopt model-based methods with covariate adjustment in the statistical analyses. However, the validity of model-based covariate adjustment remains unclear when the working models are misspecified, leading to ambiguity of estimands and risk of bias. In this article, we first adapt two model-based methods—generalized estimating equations and linear mixed models—with weighted g-computation to achieve robust inference for cluster-average and individual-average treatment effects. To further overcome the limitations of model-based covariate adjustment methods, we propose efficient estimators for each estimand that allow for flexible covariate adjustment and additionally address cluster size variation dependent on treatment assignment and other cluster characteristics. Such cluster size variations often occur post-randomization and, if ignored, can lead to bias of model-based estimators. For our proposed covariate-adjusted estimators, we prove that when the nuisance functions are consistently estimated by machine learning algorithms, the estimators are consistent, asymptotically normal, and efficient. When the nuisance functions are estimated via parametric working models, the estimators are triply-robust. Simulation studies and analyses of three real-world cluster-randomized experiments demonstrate that the proposed methods are superior to existing alternatives.
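
For concreteness, the two estimands can be written as follows (standard notation; our sketch). With clusters i of size N_i, individual potential outcomes Y_{ij}(a), and cluster means \bar{Y}_i(a),

    \Delta_C \;=\; E\big[\bar{Y}_i(1) - \bar{Y}_i(0)\big],
    \qquad
    \Delta_I \;=\; \frac{E\big[\sum_{j=1}^{N_i}\{Y_{ij}(1) - Y_{ij}(0)\}\big]}{E[N_i]}.

The cluster-average effect weights every cluster equally, while the individual-average effect weights clusters by size; the two differ exactly when cluster size is informative, which is why treatment-dependent cluster size variation cannot be ignored.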

Prof. Hongyu Wang
Peking University, China

Title: Mathematical Intelligence in Life-Cycle Vascular Health Management and Hierarchical Diagnosis and Treatment of Cardiovascular Disease: A Practice of Integrated Medical and Preventive Care
Abstract: We will announce soon.

Prof. Lan Wang
University of Miami, USA

Title: Robust High-dimensional Inference for Causal Effects Under Unmeasured Confounding and Invalid Instruments with an Application to Multivariable Mendelian Randomization Analysis
Abstract: We consider a new framework for Mendelian randomization analysis with multivariate exposures and high-dimensional confounders and genetic instruments, based on individual-level data and without specifying an exposure model. Within this framework, we propose a novel approach to constructing confidence intervals for the causal effects in the challenging setting where many instruments may have direct effects on the outcome and/or be correlated with an unmeasured confounder. The validity of the confidence intervals is established under relatively weak conditions, without requiring prior knowledge of a subset of valid instruments. Our new procedure exploits the sparsity of the outcome model and requires weaker conditions than existing methods for identifying the causal effects with potentially invalid instruments or many weak instruments. We also extend the approach to nonlinear outcome models with Poisson-type responses. Numerically, we demonstrate that the new method has satisfactory performance and is robust to invalid instruments. The performance of the proposed method is illustrated through applications to two real data examples from the UK Biobank. (Joint work with Yunan Wu, Lan Wang, Baolin Wu, Yixuan Ye and Hongyu Zhao.)
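
Schematically (our notation, not necessarily the paper's exact model), the setting corresponds to an outcome model

    Y \;=\; D^{\top}\beta \;+\; Z^{\top}\pi \;+\; \epsilon,

where D collects the multivariate exposures, Z the high-dimensional genetic instruments, and a nonzero entry of \pi marks an invalid instrument with a direct effect on the outcome; sparsity of \pi is the structure the proposed confidence intervals exploit.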

Prof. Lu Wang
University of Michigan, USA

Title: Multi-Objective Tree-based Reinforcement Learning for Estimating Tolerant Dynamic Treatment Decision Rules
Abstract: We will announce soon.

Asst. Prof. Yixin Wang
University of Michigan, USA

Title: Harnessing Geometric Signatures in Causal Representation Learning
Abstract: Causal representation learning aims to extract high-level latent causal factors from low-level sensory data. Many existing methods identify these latent factors by assuming they are statistically independent. However, correlations and causal connections between factors are prevalent across applications. In this talk, we explore how geometric signatures of latent causal factors can facilitate causal representation learning with interventional data, without any assumptions about their distributions or dependency structure. The key observation is that the absence of causal connections between latent causal factors often carries geometric signatures in the latent factors' support (i.e., the set of values each latent factor can possibly take). Leveraging this fact, we can identify latent causal factors up to permutation and scaling with data from perfect do-interventions. Moreover, we can achieve block-affine identification with data from imperfect interventions. These results highlight the unique power of geometric signatures in causal representation learning.
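
One way to state the support signature alluded to above (our paraphrase of the key observation): when two latent factors z_1 and z_2 have no causal connection, their joint support often factorizes as a Cartesian product,

    \mathrm{supp}(z_1, z_2) \;=\; \mathrm{supp}(z_1) \times \mathrm{supp}(z_2),

even if z_1 and z_2 are statistically dependent; this is a geometric, distribution-free property that interventional data can expose.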

Asst. Prof. Yuhao Wang
Tsinghua University, China

Title: Debiased regression adjustment in completely randomized experiments with moderately high-dimensional covariates
Abstract: The completely randomized experiment is the gold standard for causal inference. When covariate information is available for each experimental unit, a typical strategy is to include it via covariate adjustment for more accurate treatment effect estimation. In this paper, we investigate this problem under the randomization-based framework, in which the covariates and potential outcomes of all experimental units are treated as deterministic quantities and the randomness comes solely from the treatment assignment mechanism. Under this framework, to achieve asymptotically normal convergence, existing estimators usually require either (i) that the dimension of covariates p grows at a rate no faster than O(n^{2/3}) as the sample size n → ∞; or (ii) certain sparsity constraints on the linear representations of the potential outcomes constructed from the possibly high-dimensional covariates. We instead consider the moderately high-dimensional regime, where p is allowed to be of the same order of magnitude as n. We develop a novel debiased estimator with a corresponding inference procedure and establish its asymptotic normality under mild assumptions. Our estimator is model-free and does not require any sparsity constraint on the potential outcomes' linear representations. We also discuss its asymptotic efficiency improvements over the unadjusted treatment effect estimator under different dimensionality constraints. Numerical analysis confirms that, compared to other regression-adjustment-based treatment effect estimators, our debiased estimator performs well in moderately high dimensions.
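
Schematically (our sketch), a regression-adjusted estimator has the form

    \hat{\tau} \;=\; \big(\bar{Y}_1 - \bar{Y}_0\big) \;-\; \big(\bar{X}_1 - \bar{X}_0\big)^{\top}\hat{\beta}.

When p grows proportionally to n, the estimation error in \hat{\beta} contributes a bias of the same order as the n^{-1/2} sampling error, so a debiased estimator subtracts an explicit estimate of this bias before conducting inference, which is what restores asymptotic normality without sparsity assumptions.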

Asst. Prof. Haoran Xue
City University of Hong Kong, China

Title: Inferring causal direction between two traits using R-squared with application to transcriptome-wide association studies
Abstract: In the framework of Mendelian randomization, two single-SNP, Pearson-correlation-based methods have been developed to infer the causal direction between an exposure (e.g., a gene) and an outcome (e.g., a trait): the widely used MR Steiger method and its recent extension, Causal Direction-Ratio (CD-Ratio). Steiger's method uses a single SNP as an instrumental variable (IV) for inference, while CD-Ratio combines the results from each of multiple SNPs. Here we propose an approach based on R-squared, the coefficient of determination, to simultaneously combine information from multiple (possibly correlated) SNPs to infer the presence and direction of a causal relationship between an exposure and an outcome. Our proposed method can be regarded as a generalization of Steiger's method from a single SNP to multiple SNPs as IVs. It is especially useful in transcriptome-wide association studies (TWAS) and similar applications, where sample sizes for gene expression (or other molecular trait) data are typically small, providing a more flexible and powerful approach to inferring causal directions. It can be applied to GWAS summary data with a reference panel. We also discuss its potential robustness to invalid IVs. We compared the performance of TWAS, Steiger's method, CD-Ratio, and the new R-squared-based method in simulations to demonstrate the advantages of the proposed method. We applied the methods to identify causal genes for high/low-density lipoprotein cholesterol (HDL/LDL) using individual-level GTEx (V8) gene expression data and UK Biobank GWAS data. The proposed method confirmed some well-known causal genes and identified some novel gene-trait relationships, suggesting power gains through its use of multiple correlated SNPs as IVs.
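
The logic behind the R-squared comparison can be seen in a standardized linear model (our simplification): if the SNPs G affect Y only through X, and Y = \beta X + \varepsilon with X and Y standardized, then |\beta| \le 1 and

    R^2(Y \sim G) \;=\; \beta^2\, R^2(X \sim G) \;\le\; R^2(X \sim G),

so the trait that the instruments jointly explain better is inferred to be causally upstream; working with the joint R^2 rather than single-SNP correlations is what lets multiple correlated SNPs contribute simultaneously.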

Prof. Fan Yang
Tsinghua University, China

Title: An Integrative Multi-context Mendelian Randomization Method for Identifying Risk Genes Across Human Tissues
Abstract: Mendelian randomization (MR) provides valuable assessments of the causal effect of an exposure on an outcome, yet applying conventional MR methods to map risk genes encounters new challenges. One issue is the limited availability of expression quantitative trait loci (eQTLs) as instrumental variables (IVs), hampering the estimation of sparse causal effects. Additionally, the often context- or tissue-specific eQTL effects challenge the MR assumption of consistent IV effects across eQTL and GWAS data. To address these challenges, we propose a multi-context multivariable integrative MR framework, mintMR, for mapping expression and molecular traits as joint exposures. It models the effects of molecular exposures across multiple tissues in each gene region while simultaneously estimating across multiple gene regions. It uses eQTLs with consistent effects across more than one tissue type as IVs, improving IV consistency. A major innovation of mintMR is its use of multi-view learning to collectively model latent indicators of disease relevance across multiple tissues, molecular traits, and gene regions. The multi-view learning captures the major patterns of disease relevance and uses these patterns to update the estimated tissue relevance probabilities. mintMR iterates between performing a multi-tissue MR for each gene region and jointly learning the disease-relevant tissue probabilities across gene regions, improving the estimation of sparse effects across genes. We apply mintMR to evaluate the causal effects of gene expression and DNA methylation for 35 complex traits using multi-tissue QTLs as IVs. The proposed mintMR controls genome-wide inflation and offers new insights into disease mechanisms.

Ms. Mengyue Yang
University College London, UK

Title: Essential Causal Representation Learning via the Probability of Sufficiency and Necessity
Abstract: Causal representation learning aims to discover the implicit causal structures and feature information in observational data; the resulting representations are generally considered more robust and better at generalizing than the correlational information extracted by traditional machine learning. However, even stable/invariant causal information can sometimes mislead a model into producing incorrect results in certain scenarios, if the sufficiency and necessity of causal variables for the predictive outcome are neglected. This talk introduces a method for causal representation learning based on the probability of sufficiency and necessity, and shows how to apply it in invariant learning tasks.
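
For a binary cause X and outcome Y, Pearl's probability of necessity and sufficiency is

    \mathrm{PNS} \;=\; P\big(Y(X{=}1) = 1,\; Y(X{=}0) = 0\big),

the probability that the outcome occurs when the cause is present and would not have occurred otherwise; as we read the abstract, the method prefers representations whose features score highly on this quantity rather than features that are merely invariant.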

Assoc. Prof. Shu Yang
North Carolina State University, USA

Title: Multiply robust off-policy evaluation and learning under truncation by death
Abstract: Typical off-policy evaluation (OPE) and off-policy learning (OPL) are not well-defined problems under "truncation by death", where the outcome of interest is not defined after certain events, such as death. Standard OPE no longer yields consistent estimators, and standard OPL results in suboptimal policies. In this paper, we formulate OPE and OPL using principal stratification under "truncation by death". We propose a survivor value function for a subpopulation whose outcomes are always defined regardless of treatment conditions. We establish a novel identification strategy under principal ignorability and derive the semiparametric efficiency bound of an OPE estimator. We then propose multiply robust estimators for OPE and OPL. We show that the proposed estimators are consistent and asymptotically normal even with flexible semi/nonparametric models for nuisance function approximation. Moreover, under mild rate conditions on the nuisance function approximation, the estimators achieve the semiparametric efficiency bound. Finally, we conduct experiments to demonstrate the empirical performance of the proposed estimators.
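
In potential-outcome notation (a schematic sketch for a single binary treatment), with S(a) the survival indicator under treatment a, the survivor value function of a policy \pi restricts attention to always-survivors:

    V(\pi) \;=\; E\big[\, Y(\pi) \mid S(1) = S(0) = 1 \,\big],

which is well-defined because outcomes exist under either treatment in this principal stratum; principal ignorability is the assumption that makes this quantity identifiable.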

Asst. Prof. Ruoqi Yu
University of Illinois Urbana-Champaign, USA

Title: Balancing Weights for Causal Inference in Observational Factorial Studies
Abstract: Many scientific questions in biomedical, environmental, and psychological research involve understanding the effect of multiple factors on outcomes. While randomized factorial experiments are ideal for this purpose, randomization is often infeasible in many empirical studies. Therefore, investigators must rely on observational data, where drawing reliable causal inferences for multiple factors remains challenging. As the number of treatment combinations grows exponentially with the number of factors, some treatment combinations can be rare or missing by chance in observed data, further complicating factorial effects estimation. To address these challenges, we propose a novel weighting method tailored to observational studies with multiple factors. Our approach uses weighted observational data to emulate a randomized factorial experiment, enabling simultaneous estimation of the effects of multiple factors and their interactions. Our investigations reveal a crucial nuance: achieving balance among covariates, as in single-factor scenarios, is necessary but insufficient for unbiasedly estimating factorial effects. Our findings suggest that balancing the factors is also essential in multi-factor settings. Moreover, we extend our weighting method to handle missing treatment combinations in observed data. Finally, we study the asymptotic behavior of the new weighting estimators and propose a consistent variance estimator, providing reliable inferences on factorial effects in observational studies.
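
To make the nuance concrete (our schematic reading): in a 2x2 design with factor combinations z and weights w_i, covariate balance asks that, within every treatment cell,

    \sum_{i:\,Z_i = z} w_i\, f(X_i) \;\approx\; \frac{1}{n}\sum_{i=1}^{n} f(X_i)

for the chosen balance functions f, while "balancing the factors" additionally requires the weighted cells to reproduce the factor allocation of a randomized factorial design; the abstract's point is that the first condition alone does not guarantee unbiased factorial effect estimates.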

Asst. Prof. Bo Zhang
Fred Hutchinson Cancer Center, USA

Title: Nested Instrumental Variables Design: Switcher Average Treatment Effect, Identification, Efficient Estimation and Generalizability
Abstract: In this talk, I will introduce how to leverage a naturally strengthened, binary IV to assess the generalizability of IV-based estimates. Under a monotonicity assumption, a valid binary IV nonparametrically identifies the complier average treatment effect, whose generalizability is often under debate. In many studies, there may exist multiple versions of a binary IV, for instance, different nudges to take the treatment at different study sites in a clinical trial. I will introduce a novel nested IV assumption and study the identification of the average treatment effect among two latent subgroups, always-compliers and switchers, defined by their joint potential treatment uptake under two versions of a binary IV. We derive the efficient influence function for the SWitcher Average Treatment Effect (SWATE) and propose efficient estimators. We then propose formal statistical tests of the principal ignorability assumption based on comparing the conditional average treatment effect among the always-compliers with that among the switchers under the nested IV framework. This is joint work with Rui Wang (UW PhD student), Oliver Dukes (Ghent University) and Yingqi Zhao (Fred Hutch).
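
In our notation (an illustrative sketch), let the binary IV have an original version a and a strengthened version b, and let D^{v}(z) denote potential treatment uptake under version v when the IV is set to z. The two latent subgroups are then

    \text{always-compliers}:\;\; D^{a}(1) > D^{a}(0) \;\text{ and }\; D^{b}(1) > D^{b}(0),
    \qquad
    \text{switchers}:\;\; D^{a}(1) = D^{a}(0) \;\text{ and }\; D^{b}(1) > D^{b}(0),

so switchers comply only when the IV is strengthened; comparing conditional average treatment effects across the two groups is what powers the proposed test of principal ignorability.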

Prof. Yichuan Zhao
Georgia State University, USA

Title: Bayesian Jackknife Empirical Likelihood-based Inference for Missing Data and Causal Inference Problems
Abstract: Missing data reduce the representativeness of the sample and can lead to inference problems. This study applied the Bayesian jackknife empirical likelihood method to inference with data missing at random and to causal inference. The semiparametric fractional imputation estimator, the propensity score weighted estimator, and the doubly robust estimator were used to construct the jackknife pseudo-values needed for Bayesian jackknife empirical likelihood-based inference with missing data. Existing methods, such as normal approximation and jackknife empirical likelihood, were compared with the Bayesian jackknife empirical likelihood approach in a simulation study. The proposed approach had better performance in many scenarios in terms of the behavior of credible intervals. Furthermore, we demonstrated the application of the proposed approach to causal inference problems in a study of risk factors for impaired kidney function.
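
The jackknife pseudo-values at the core of the method are the standard construction

    \hat{p}_i \;=\; n\,\hat{\theta} \;-\; (n-1)\,\hat{\theta}_{(-i)}, \qquad i = 1, \dots, n,

where \hat{\theta} is the full-sample estimator (e.g., the doubly robust estimator) and \hat{\theta}_{(-i)} its leave-one-out version; jackknife empirical likelihood treats the \hat{p}_i as approximately independent observations with mean \theta and forms an empirical likelihood for \theta, which the Bayesian variant combines with a prior.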

Assoc. Prof. Zheng Zhang
Renmin University of China, China

Title: Causal Inference on Quantile Dose-response Functions via Local ReLU Least Squares Weighting
Abstract: This paper proposes a new local ReLU network least squares weighting method to estimate quantile dose-response functions in observational studies. Unlike the conventional inverse propensity weighting (IPW) method, we estimate the weighting function involved in the treatment effect estimator directly through local ReLU least squares optimization. The proposed method uses ReLU networks applied to the multivariate baseline covariates to alleviate the dimensionality problem while retaining flexibility, and local kernel smoothing in the continuous treatment to precisely estimate the quantile dose-response function and support statistical inference. Our method enjoys computational convenience and scalability. It also improves robustness and numerical stability compared with the conventional IPW method. For the ReLU network approximation, we introduce a mixed fractional Sobolev class and show that two-layer ReLU networks can break the 'curse of dimensionality' when the weighting function belongs to this function class. We also establish the convergence rate of the ReLU network estimator and the asymptotic normality of the proposed estimator of the quantile dose-response function. We further propose a multiplier bootstrap method to construct confidence bands for quantile dose-response functions. The finite-sample performance of our proposed method is illustrated through simulations and a real data application.
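
Schematically (our sketch), the quantile dose-response function q_\tau(d), the \tau-quantile of the potential outcome Y(d), can be estimated from a weighted local estimating equation of the form

    \sum_{i=1}^{n} \hat{\pi}(X_i, d)\, K_h(D_i - d)\,\big(\tau - \mathbf{1}\{Y_i \le q\}\big) \;=\; 0,

solved in q, where K_h is a kernel localizing the continuous treatment around d and \hat{\pi} is the weighting function; the proposal is to learn \hat{\pi} directly by ReLU-network least squares rather than by inverting an estimated generalized propensity score, which is the source of the claimed stability gains.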

Prof. Xiao-Hua Zhou
Peking University, China

Title: We will announce soon.
Abstract: We will announce soon.