Explainability-Driven Autonomous AI Agents for Multi-Domain Applications

March 7 @ 3:00 pm - 4:00 pm CST

As Artificial Intelligence systems rapidly transition from passive predictive tools to fully autonomous decision-making agents, ensuring transparency, trust, and accountability has become a critical priority. This event focuses on Explainability-Driven Autonomous AI Agents—intelligent systems capable of operating independently across multiple domains while providing interpretable and trustworthy decision insights.
Organized under the broader vision of the Institute of Electrical and Electronics Engineers (IEEE), this session brings together researchers, industry leaders, and practitioners working at the intersection of autonomous systems, explainable AI (XAI), edge intelligence, and safety-critical applications.
The event will explore:

Foundations of explainability in autonomous AI agents

Architectures for multi-domain intelligent systems

Trust, robustness, and verification in AI-driven autonomy

Human-AI collaboration and interpretable decision pipelines

Applications across healthcare, smart infrastructure, defense, consumer electronics, and distributed intelligence

Ethical, regulatory, and societal implications of autonomous AI

Special emphasis will be placed on circuit- and system-level innovations that enable deployable, real-time, and resource-efficient explainable AI frameworks. Through invited talks, panel discussions, and technical presentations, participants will gain insights into emerging methodologies that bridge algorithmic intelligence with hardware-aware design and system reliability.
This event aims to foster interdisciplinary collaboration within the IEEE community, encouraging the development of next-generation autonomous AI agents that are not only intelligent—but also transparent, accountable, and societally aligned.
Speaker(s): Reshma
Virtual: https://events.vtools.ieee.org/m/543717