I’m a third-year Cybersecurity student working at the intersection of AI Security, Autonomous Systems, and Distributed Safety Engineering.
My research focuses on how agentic and multi-agent AI systems behave under adversarial pressure, how secure runtime architectures can be designed for them, and how to engineer resilient, explainable, and trustworthy autonomous intelligence.
I work across adversarial ML, multi-agent systems, and runtime safety—combining theory with systems engineering using Rust, Golang, and Kubernetes.
- Agentic Swarm Intelligence — autonomous defense agents, role-switching, emergent coordination
- AI Runtime Security — kill-switches, governance modules, safety-critical constraints
- Adversarial Machine Learning — robustness, poisoning/evasion, red-team AI
- RL-LoRA Behaviour Systems — modular behavioural LoRA patches, dynamic skill composition
- Cyber-Physical Systems (CPS) Security — secure sensing, anomaly detection, physical-world adversaries
- Distributed Systems for AI — Kubernetes-orchestrated agents, fault-tolerant multi-agent infrastructure
- Trustworthy ML — explainability, uncertainty, distribution shifts
- Building AADS — Agentic AI Defense Swarms with safe governance and swarm-level autonomy
- Developing the REVA4 Runtime — RL-LoRA behavioural runtime for modular, controllable AI behaviours
- Engineering runtime safety systems in Rust — kill-switches, isolation layers, fault boundaries
- Creating distributed multi-agent runtimes in Go/K8s — CRDs, gossip systems, orchestration logic
- Designing adversarial stress-tests for MARL agents and behavioural LoRA modules
- Prototyping GNIM — cyber-geospatial intelligence mapping and RF-aware situational awareness
- Writing research in AI Security, Swarm Intelligence, Runtime Safety, and CPS
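To give a flavour of the runtime-safety work above, here is a minimal sketch of a cooperative kill-switch in Rust: a supervisor trips a shared atomic flag, and agent loops poll it each cycle and halt. The `KillSwitch` type and the polling design are illustrative assumptions for this sketch, not actual AADS/REVA4 code.

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;
use std::thread;
use std::time::Duration;

/// Minimal cooperative kill-switch: agents poll a shared flag and
/// stop as soon as a supervisor trips it. (Illustrative sketch only.)
struct KillSwitch {
    tripped: Arc<AtomicBool>,
}

impl KillSwitch {
    fn new() -> Self {
        Self { tripped: Arc::new(AtomicBool::new(false)) }
    }

    /// Hand an agent a clone of the flag to poll.
    fn handle(&self) -> Arc<AtomicBool> {
        Arc::clone(&self.tripped)
    }

    /// Supervisor side: order every agent to halt.
    fn trip(&self) {
        // SeqCst so all agents observe the halt order promptly.
        self.tripped.store(true, Ordering::SeqCst);
    }
}

fn main() {
    let ks = KillSwitch::new();
    let flag = ks.handle();

    // A worker "agent" that runs its decision loop until halted.
    let agent = thread::spawn(move || {
        let mut steps = 0u32;
        while !flag.load(Ordering::SeqCst) {
            steps += 1; // stand-in for one sense/decide/act cycle
            thread::sleep(Duration::from_millis(1));
        }
        steps
    });

    thread::sleep(Duration::from_millis(20));
    ks.trip(); // supervisor halts the agent
    let steps = agent.join().unwrap();
    println!("agent halted after {} steps", steps);
    assert!(steps > 0);
}
```

In a real runtime the flag check would sit at a fault boundary (e.g. around each tool call or actuator command) rather than in a busy loop, and tripping the switch would also tear down isolation layers.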
My goal is to contribute foundational work in:
- Safe autonomous systems
- Agentic AI runtime design
- Swarm intelligence for cyber defense
- High-assurance ML and runtime safety
- Secure cyber-physical intelligence
I am particularly interested in research pathways and careers at institutes such as CISPA, Max Planck, or Helmholtz, leading toward an MSc and PhD focused on AI Security research and CPS.
If my research tools or experiments contribute to your projects or spark ideas, you can support my work here:
"To emulate the adversary is to embody every doubt that was never tested enough."