Dependence-Aware Multi-Agent Intelligence for Adaptive Fraud Detection

T. Ekin, K. Mandadapu, L. Shaw
Texas State University,
United States

Keywords: multi-agent systems, dependence modeling, explainable fraud analytics

Summary:

Current multi-agent AI frameworks, while effective at distributed reasoning and decision-making, often fail to model the statistical dependence among agents’ outputs. This implicit independence assumption limits interpretability and weakens resilience, particularly in adversarial domains such as fraud analytics, where coordinated anomalies and correlated errors are common. In FraudSphere, a multi-agent platform for fraud and anomaly detection, intelligent agents collaboratively analyze complex financial and behavioral evidence streams. Although these agents communicate through shared contexts or message passing, the dependencies among their outputs remain implicit and unquantified, restricting our ability to detect redundant reasoning, coordinated deception, and shared uncertainty across models. To overcome this limitation, we extend Adversarial Risk Analysis (ARA) to capture latent dependencies among agent outputs, enabling a structured understanding of how adversaries exploit correlated decision boundaries within multi-agent AI systems. The proposed dependence-aware ARA framework integrates information-theoretic and probabilistic techniques, such as mutual information, copula-based coupling, and dependency graphs, to quantify inter-agent relationships and propagate uncertainty. By explicitly modeling these dependencies, the framework identifies correlated vulnerabilities, refines risk estimation, and improves the robustness of collective decisions. This dependence-aware reasoning layer bridges the gap between procedural coordination and statistical collectivity, enhancing both interpretability and operational resilience. Ultimately, the framework provides the theoretical foundation for next-generation agentic fraud analytics and serves as the analytic core of simulation environments such as FraudSphere, enabling scalable, explainable, and adaptive AI assurance in adversarial settings where inter-agent dependencies shape system integrity and decision reliability.
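To make the dependence-quantification step concrete, the sketch below estimates mutual information between two agents' binary fraud alerts, flagging redundant reasoning when two agents' outputs carry largely the same information. This is a minimal illustration, not the FraudSphere implementation: the agent names, alert sequences, and plug-in estimator are all hypothetical, and the abstract's copula-based coupling and dependency graphs are not shown.

```python
# Hypothetical sketch: quantifying inter-agent dependence via mutual
# information. All agent names and data are illustrative assumptions,
# not FraudSphere internals.
from collections import Counter
from math import log2

def mutual_information(xs, ys):
    """Plug-in mutual-information estimate (in bits) between two
    equal-length sequences of discrete agent outputs."""
    n = len(xs)
    pxy = Counter(zip(xs, ys))   # joint counts
    px = Counter(xs)             # marginal counts for agent x
    py = Counter(ys)             # marginal counts for agent y
    mi = 0.0
    for (x, y), c in pxy.items():
        # p(x,y) * log2( p(x,y) / (p(x) p(y)) ), with counts/n as probabilities
        mi += (c / n) * log2((c * n) / (px[x] * py[y]))
    return mi

# Binary alert streams (1 = fraud flagged) from three illustrative agents.
agent_a = [1, 1, 0, 0, 1, 0, 1, 0]
agent_b = [1, 1, 0, 0, 1, 0, 0, 0]  # agrees with agent_a on 7 of 8 cases
agent_c = [1, 0, 1, 0, 1, 0, 1, 0]  # unrelated alert pattern

# High MI between a and b signals correlated (possibly redundant) reasoning;
# near-zero MI between a and c suggests complementary evidence.
print(mutual_information(agent_a, agent_b))
print(mutual_information(agent_a, agent_c))
```

In a dependence-aware ARA layer, such pairwise scores could populate the edge weights of a dependency graph over agents, so that downstream risk aggregation discounts correlated votes rather than treating them as independent confirmations.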