Abstract
Forecasting temporal events from high-dimensional sparse observational data presents significant challenges due to noise, confounding factors, and data sparsity. Traditional sequence models often struggle to extract the underlying causal relationships, leading to biased forecasts. Causal Representation Learning (CRL) aims to uncover latent causal factors from observational data, thereby enabling more robust forecasting in complex temporal settings. This paper surveys recent advances in CRL for temporal event prediction, proposes an architecture integrating recurrent encoders with causal graph discovery, and evaluates performance on synthetic and real-world sparse datasets. Results show that CRL-enhanced models significantly outperform standard LSTM baselines in both forecasting accuracy and counterfactual reasoning tasks.
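The abstract gives no implementation details, so the following is only a minimal illustrative sketch of the two components it names: a recurrent encoder over observed sequences, and differentiable causal graph discovery. The choice of a tanh RNN and of a NOTEARS-style acyclicity penalty h(A) = tr(exp(A∘A)) − d (Zheng et al., 2018) are assumptions for illustration, not the paper's method.

```python
import numpy as np

def acyclicity_penalty(A, terms=20):
    """NOTEARS-style penalty h(A) = tr(exp(A*A)) - d.
    Zero iff the weighted adjacency A encodes an acyclic graph;
    the matrix exponential is approximated by a truncated series.
    (Assumed discovery objective, not necessarily the paper's.)"""
    d = A.shape[0]
    M = A * A                    # elementwise square keeps the penalty smooth
    E = np.eye(d)
    term = np.eye(d)
    for k in range(1, terms):
        term = term @ M / k      # accumulate M^k / k!
        E = E + term
    return np.trace(E) - d

def rnn_encode(X, Wx, Wh, b):
    """Minimal tanh RNN encoder: maps a (T, n_obs) observation
    sequence to a (T, n_latent) sequence of latent states."""
    h = np.zeros(Wh.shape[0])
    states = []
    for x in X:
        h = np.tanh(Wx @ x + Wh @ h + b)
        states.append(h.copy())
    return np.array(states)

# A training loop would forecast from the latent states while adding
# acyclicity_penalty on a learned adjacency over the latent factors.
A_dag = np.array([[0.0, 1.0], [0.0, 0.0]])   # 1 -> 2: acyclic
A_cyc = np.array([[0.0, 1.0], [1.0, 0.0]])   # 1 <-> 2: cyclic
print(acyclicity_penalty(A_dag), acyclicity_penalty(A_cyc))
```

For the acyclic adjacency the penalty is (numerically) zero, while the two-node cycle yields a strictly positive value, which is what lets gradient-based training push a learned graph toward a DAG.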