Efficient in-sensor and multimodal signal processing using hyperdimensional computing algorithms implemented on an FPGA

V. Ehsan, N. Srinivasa, R. Kim, and Y. Khurana
Arch Systems, LLC,
United States

Keywords: hyperdimensional computing (HDC), FPGA-based edge AI, in-sensor online learning, collaborative automatic target recognition (C-ATR)

Summary:

Arch Systems’ MINDS (Multi-sensor IN-pixel hyperDimensional computing for edge operations) represents a groundbreaking advance in edge artificial intelligence by embedding real-time, adaptive learning capabilities directly within sensor hardware. Unlike conventional deep neural networks that require centralized training, high power, and static deployment, MINDS employs Hyperdimensional Computing (HDC) implemented on low-SWAP (size, weight, and power) FPGA hardware to enable continuous online learning at the edge. This neuromorphic-inspired approach allows the system to autonomously adapt to changing environments without retraining or connectivity to data centers, addressing a critical gap in current AI solutions used for Department of the Air Force (DAF) missions such as Long Range Kill Web (LRKW) and Collaborative Automatic Target Recognition (C-ATR).

Over a three-year period, MINDS advances from TRL-3 to TRL-6, delivering functional AI chips capable of in-sensor signal processing and multimodal fusion. In the first nine months, the program develops methods to encode raw electro-optical (EO) and infrared (IR) data into hypervectors using Intel’s Agilex FPGA prototypes. In subsequent phases, it demonstrates multisensor fusion, compute-in-memory architectures, and flight-ready prototypes for C-ATR applications.

The HDC framework uses binding, bundling, and permutation operations to represent and manipulate high-dimensional data efficiently, enabling one-shot and few-epoch learning without catastrophic forgetting. This yields an unprecedented combination of adaptability, efficiency, and resilience, achieving 60 frames per second (fps) for 1,000+ object classes at less than 5 watts of power, a 100× improvement in energy efficiency, and 10× faster learning compared to GPU-based systems.

Operationally, MINDS empowers distributed and cooperative intelligence among multiple platforms such as small UAVs, munitions, and ground sensors through hypervector-level data fusion.
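The binding, bundling, and permutation primitives described above reduce to simple bitwise operations, which is what makes them a natural fit for FPGA fabric. The following minimal Python sketch is illustrative only: the dimensionality, the item memories, and the pixel-encoding scheme are our assumptions for exposition, not the program's actual design. It shows how the three operations combine to give one-shot prototype learning.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000  # hypervector dimensionality (an assumed, HDC-typical value)

def random_hv():
    """Random dense binary hypervector; independent pairs are near-orthogonal."""
    return rng.integers(0, 2, D, dtype=np.uint8)

def bind(a, b):
    """Binding: element-wise XOR associates two hypervectors."""
    return a ^ b

def bundle(hvs):
    """Bundling: element-wise majority vote superposes hypervectors."""
    return (np.sum(hvs, axis=0) * 2 > len(hvs)).astype(np.uint8)

def permute(a, k=1):
    """Permutation: cyclic shift, used to encode order or position."""
    return np.roll(a, k)

def similarity(a, b):
    """Normalized Hamming similarity in [0, 1]; ~0.5 for unrelated vectors."""
    return 1.0 - np.mean(a != b)

# Hypothetical item memories for a tiny image patch: one hypervector per
# pixel position and one per quantized intensity level.
n_pixels, n_levels = 64, 16
pos_hv = [random_hv() for _ in range(n_pixels)]
lvl_hv = [random_hv() for _ in range(n_levels)]

def encode(pixels):
    """Encode a patch of quantized pixels as a single hypervector."""
    return bundle([bind(pos_hv[i], lvl_hv[p]) for i, p in enumerate(pixels)])

# One-shot learning: a class prototype is simply one example's encoding.
example = rng.integers(0, n_levels, n_pixels)
prototype = encode(example)

# A partially corrupted view of the same patch stays far closer to the
# prototype than the ~0.5 similarity expected of an unrelated vector.
noisy = example.copy()
noisy[:6] = rng.integers(0, n_levels, size=6)
print(similarity(encode(noisy), prototype))
```

Because XOR binding is its own inverse (`bind(bind(a, b), b)` recovers `a`), bound records can later be unpacked, and `permute` can tag sequence position, for instance frame order in a video stream.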
This collaborative reasoning enhances accuracy, trajectory prediction, and target coordination across diverse and noisy environments, improving decision-making during high-speed engagements. The FPGA-based architecture not only supports real-time adaptive inference but also strengthens cybersecurity by performing computations in hardware, which reduces exposure to software-based attack vectors. Aligned with Air Force priorities in energy-efficient autonomy, adaptive AI/ML sensing, and resilient mission systems, MINDS delivers a scalable and secure solution for contested environments.

Beyond defense applications, the technology has strong dual-use potential for homeland security missions, including border surveillance, drone-based ISR, and smart autonomous sensing systems. By uniting efficiency, adaptability, and robustness, Arch Systems’ MINDS initiative marks a paradigm shift in AI at the tactical edge, achieving real-time, in-sensor intelligence that redefines the future of adaptive, collaborative, and energy-efficient computing for both military and civilian domains.
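To make hypervector-level fusion concrete, the sketch below bundles degraded single-platform observations into a consensus hypervector. The target names, noise level, and majority-vote fusion rule are assumptions for illustration, not the program's actual protocol; the point is only that fusion costs a single D-bit vector per platform and yields a consensus that classifies more reliably than any one noisy view.

```python
import numpy as np

rng = np.random.default_rng(1)
D = 10_000  # hypervector dimensionality (assumed)

def bundle(hvs):
    """Majority-vote superposition of binary hypervectors."""
    return (np.sum(hvs, axis=0) * 2 > len(hvs)).astype(np.uint8)

def similarity(a, b):
    """Normalized Hamming similarity; ~0.5 for unrelated vectors."""
    return 1.0 - np.mean(a != b)

# Shared class prototypes (e.g., target signatures known to all platforms).
targets = {name: rng.integers(0, 2, D, dtype=np.uint8)
           for name in ["truck", "radar", "decoy"]}

def noisy_view(hv, flip=0.25):
    """Simulate a degraded single-sensor observation by flipping bits."""
    mask = rng.random(D) < flip
    return hv ^ mask.astype(np.uint8)

# Three platforms observe the same target through independent noise and
# share only their D-bit observation hypervectors.
views = [noisy_view(targets["radar"]) for _ in range(3)]
fused = bundle(views)

def classify(hv):
    """Nearest prototype by Hamming similarity."""
    return max(targets, key=lambda n: similarity(hv, targets[n]))

print(classify(fused))                      # → "radar"
print(similarity(fused, targets["radar"]))  # higher than any single noisy view
```

With 25% bit flips, each view matches the true prototype on about 75% of dimensions, while the three-way majority vote pushes the fused vector to roughly 84% agreement, which is the sense in which hypervector fusion sharpens a consensus estimate without exchanging raw sensor data.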