Cyber Physical AI Trust and Integrity Research Cluster

As cyber-physical systems grow in complexity, so do threats to their safety, reliability and integrity.

The CAITI Cluster (Cyber Physical AI Trust and Integrity) unites expertise across control systems, secure hardware, trustworthy AI/ML, game theory, and information assurance to build intelligent systems that are safe, explainable and resilient under adversarial influence. We develop methods that ensure trustworthiness from the silicon to the system level, combining hardware roots of trust, adversarially robust learning and game-theoretic resilience.

Through interdisciplinary research, industry partnerships, and hands-on education, the CAITI Cluster trains engineers and scientists to lead at the intersection of cyber, physical and intelligent systems, engineering a future in which machines can be trusted to act safely, securely, and ethically.


Specialties

  • Cyber-physical systems security
  • Control and game theory under adversarial conditions
  • Secure and tamper-resistant hardware
  • Information assurance and digital forensics
  • Machine learning for safety-critical systems
  • Trustworthy and robust AI
  • Human–machine trust and decision integrity
  • Adversarial signal processing and anomaly detection
  • Secure embedded and real-time systems



Potential career pathways

Students from the CAITI Cluster are prepared for high-impact roles at the intersection of autonomy, security and critical infrastructure, working across the full cyber-physical stack, from silicon-level safeguards to AI-driven systems in the field.

  • Cyber-Physical Security Engineer: Architect secure, resilient and verifiable control and sensing pipelines for autonomous vehicles, industrial automation, robotics and smart infrastructure as a cyber-physical systems security engineer, secure autonomy engineer or critical infrastructure protection specialist
  • AI/ML Safety and Robustness Engineer: Develop adversarially robust models, safe-learning controllers and explainable AI systems for safety-critical domains as a trustworthy AI engineer, machine learning security specialist or robust perception and control architect
  • Hardware Security and Embedded Trust Engineer: Design tamper-resistant chips, secure boot architectures and hardware roots-of-trust for defense, medical devices and embedded platforms as a hardware security engineer, secure SoC architect or embedded integrity engineer
  • Digital Forensics, Verification, and Incident Response Expert: Support investigative workflows, integrity auditing, and compliance frameworks as a digital forensics analyst, verification engineer, or cyber-physical incident response lead
  • Resilient Control and Decision Systems Engineer: Engineer decision-making frameworks that remain stable under uncertainty, failures, or adversarial behavior as a resilient control engineer, game-theoretic systems analyst, or autonomy assurance specialist

Research accomplishments and activities

  • Distinguished Professor Jessica Fridrich was named Fellow of the National Academy of Inventors for her impactful innovations.
  • Assistant Professor Emrah Akyol won an NSF CAREER Award for his game-theory research on communication between "misaligned" senders and receivers.
  • Researchers developed a system using everyday devices to better detect when older adults fall at home.
  • Professor Yu Chen was honored as an SPIE Fellow for his deepfake video detection research.
  • The Binghamton University Rover Team competed in the University Rover Challenge in Utah in 2023, 2024, and 2026.
  • New research developed tools that use AI “fingerprints” to detect altered photos and videos by analyzing anomalies in their frequency domain signatures.

Faculty