S. Das, A. Rafe
Texas State University,
United States
Keywords: Causal AI, Multimodal LLMs, Structural Causal Models, Counterfactual Reasoning, Traffic Safety Engineering
Summary:
Current approaches to traffic safety and autonomous navigation rely heavily on correlational deep learning models. While these models excel at high-dimensional pattern recognition, they cannot fundamentally distinguish causation from correlation, making them brittle in the out-of-distribution (OOD) scenarios and "edge cases" common in real-world driving environments. To address this epistemic gap, we propose the development of Causal-Safe, a novel Causal AI agent designed to revolutionize safety analytics by systematically enabling higher-order causal inference, specifically interventional modeling and counterfactual analysis, beyond the limitations of observational statistics.

The core innovation of Causal-Safe lies in its neuro-symbolic architecture, which integrates a Multimodal Large Language Model (MLLM) with a rigorous Structural Causal Model (SCM). Unlike traditional black-box predictors, this agent ingests heterogeneous data streams: high-dimensional visual data from traffic surveillance and dashcams, unstructured natural language from police accident narratives, and structured historical crash datasets.

Technically, the system operates in three phases. First, using causal discovery algorithms, the agent processes raw visual data (via computer vision encoders) and textual narratives to identify latent causal variables and construct a Directed Acyclic Graph (DAG) representing the traffic environment. This allows the system to formally disentangle confounding variables that traditional models often conflate (e.g., distinguishing whether a crash was caused by reduced surface friction or by reduced visibility, rather than merely associating it with "rain"). Second, the agent employs the MLLM as a reasoning engine to parameterize the SCM. By grounding the LLM in the logic of the causal graph, we mitigate hallucinations and ensure that predictions adhere to physical and behavioral constraints.
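To make the DAG and intervention semantics concrete, the confounding example above can be sketched as a toy SCM. All variable names, probabilities, and structural equations below are illustrative assumptions for exposition, not components of Causal-Safe:

```python
import random

class SCM:
    """Toy Structural Causal Model: rain confounds crashes via two mechanisms,
    surface friction and visibility. The DAG is encoded as a parent map."""

    def __init__(self):
        # Parent lists define the Directed Acyclic Graph (DAG).
        self.parents = {
            "rain": [],
            "friction": ["rain"],
            "visibility": ["rain"],
            "crash": ["friction", "visibility"],
        }
        # Structural equations f(parents) -> value (illustrative numbers).
        self.equations = {
            "rain": lambda: random.random() < 0.3,
            "friction": lambda rain: 0.4 if rain else 0.9,
            "visibility": lambda rain: 0.5 if rain else 1.0,
            "crash": lambda friction, visibility:
                random.random() < (1 - friction) * 0.5 + (1 - visibility) * 0.3,
        }

    def sample(self, do=None):
        """Ancestral sampling in topological order; entries in `do` override
        their structural equations (the do-operator)."""
        do = do or {}
        values = {}
        for var in ["rain", "friction", "visibility", "crash"]:
            if var in do:
                values[var] = do[var]
            else:
                args = [values[p] for p in self.parents[var]]
                values[var] = self.equations[var](*args)
        return values

def crash_rate(scm, do=None, n=10_000):
    return sum(scm.sample(do)["crash"] for _ in range(n)) / n

random.seed(0)
scm = SCM()
# Observational vs. interventional: do(friction=0.9) severs rain's influence
# through the friction path, isolating the visibility mechanism.
obs = crash_rate(scm)
intervened = crash_rate(scm, do={"friction": 0.9})
print(f"P(crash) = {obs:.3f}, P(crash | do(friction=0.9)) = {intervened:.3f}")
```

Because the intervention removes only the friction pathway, the residual crash rate reflects the visibility mechanism alone, which is exactly the disentanglement a purely correlational model of "rain" cannot provide.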
This neuro-symbolic approach enables computation of the do-operator, allowing the system to simulate interventions, such as changing traffic signal timing or autonomous braking profiles, without requiring physical testing. Third, and most critically, the agent is capable of Level 3 counterfactual reasoning. In a post-accident analysis scenario, Causal-Safe can answer questions such as: "Given that a crash occurred under specific conditions, what would have happened had the driver reacted two seconds earlier?" This capability transforms safety management from reactive statistical analysis into proactive, explanatory intelligence.

The real-world applications of Causal-Safe are vast. For transportation agencies, it provides a granular tool for infrastructure diagnosis, pinpointing specific causal mechanisms of failure rather than hot-spot correlations. For the autonomous vehicle industry, it offers a framework for safety validation that is robust to distributional shifts. This framework directly addresses critical reliability challenges in autonomous systems and infrastructure management, advancing the robustness of AI in high-stakes, safety-critical dynamic environments and bridging the gap between theoretical causal inference and actionable engineering solutions.
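The "reacted two seconds earlier" counterfactual can be sketched with the standard three-step recipe (abduction, action, prediction) on a toy deterministic stopping-distance model. The speeds, reaction times, gap, and deceleration below are illustrative assumptions, not Causal-Safe outputs:

```python
def stopping_distance(speed_mps, reaction_s, decel_mps2=6.0):
    """Structural equation: distance covered during the reaction delay
    plus the kinematic braking distance v^2 / (2a)."""
    return speed_mps * reaction_s + speed_mps**2 / (2 * decel_mps2)

# Observed (factual) world: a crash occurred with a 70 m gap to the obstacle.
observed = {"speed_mps": 25.0, "reaction_s": 2.5, "gap_m": 70.0}

# 1. Abduction: recover the exogenous context consistent with the evidence.
#    The toy model is deterministic, so speed and gap are read off directly.
context = {"speed_mps": observed["speed_mps"], "gap_m": observed["gap_m"]}

# 2. Action: intervene on reaction time ("had the driver reacted 2 s earlier").
cf_reaction = observed["reaction_s"] - 2.0

# 3. Prediction: re-run the structural equations under the intervention,
#    holding the abducted context fixed.
factual_dist = stopping_distance(context["speed_mps"], observed["reaction_s"])
cf_dist = stopping_distance(context["speed_mps"], cf_reaction)

print(f"factual stopping distance:        {factual_dist:.1f} m (gap {context['gap_m']:.0f} m)")
print(f"counterfactual stopping distance: {cf_dist:.1f} m")
print("counterfactual crash avoided:", cf_dist < context["gap_m"])
```

The key distinction from a mere "what-if" simulation is step 1: the counterfactual is evaluated in the same world (same speed, same gap) that actually produced the crash, which is what makes the answer explanatory rather than merely predictive.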