
CALL - October 2025

Postdoctoral Research Position on Adversarial Machine Learning & Formal Verification

Deadline:

December 14, 2025, 11:59 PM CET.

The AI Security Lab is looking for a creative and highly motivated Postdoc or Research Scientist to join our founding team and help build the next generation of secure agentic AI systems through cutting-edge research in adversarial machine learning and formal verification.

The Role

As a research scientist, you will contribute to frontier AI research that addresses emerging security threats in AI products, particularly as autonomous agents become more sophisticated and widely deployed. Working alongside a dedicated team of security researchers, you will have the chance to shape the future of AI security by developing novel defense mechanisms, verification methods, and security frameworks that can withstand adversarial attacks.

Key Responsibilities

  • Conduct foundational research in adversarial machine learning, exploring novel attack vectors and defense mechanisms for AI agents and large language models.
  • Develop formal verification methods and theoretical frameworks to prove security properties and safety guarantees in agentic AI systems.
  • Publish research findings in top-tier academic conferences and journals, contributing to the broader scientific understanding of AI security.
  • Collaborate with academic partners and the research community to advance the state-of-the-art in adversarial robustness and trustworthy AI systems.

Minimum Qualifications

  • PhD in Computer Science, Engineering, Mathematics, or a related field with a focus on machine learning, security, or formal methods.
  • Strong publication record in top-tier conferences or journals in AI security, adversarial machine learning, robustness, or formal verification (e.g., NeurIPS, ICML, ICLR, IEEE S&P, USENIX Security, CCS).
  • Proficiency in Python and experience with at least one additional programming language such as C++, Julia, or Rust for implementing research prototypes.

Preferred Qualifications

  • Deep expertise with modern ML frameworks including PyTorch, JAX, Hugging Face Transformers, or TensorFlow for conducting large-scale experiments.
  • Knowledge of emerging AI security threats including prompt injection, jailbreaking, model inversion, membership inference, and poisoning attacks against foundation models.
  • Familiarity with privacy-preserving machine learning techniques such as differential privacy, federated learning, or secure multi-party computation.
  • Experience collaborating across disciplines, mentoring junior researchers, and communicating complex technical concepts to diverse audiences.

What We Offer

  • A pioneering research team: You will work alongside a highly talented and collaborative team of security researchers and engineers who share your passion for advancing AI safety and security. We foster an environment of innovation and mutual support, with clear pathways for career advancement and technical leadership.
  • Research impact and visibility: We are committed to advancing both practical security solutions and fundamental research. You will have opportunities to publish at top-tier venues, while also contributing to national and European industrial research initiatives that shape the future of secure AI.
  • Prime location at OGR Torino: Our offices are situated at OGR Torino, the city’s leading technology and innovation hub. You’ll be immersed in Italy’s vibrant tech ecosystem with access to countless events, meetups, and a dynamic community of innovators and entrepreneurs.
  • Comprehensive support and resources: We provide competitive compensation packages and full support for conference travel and professional development. You’ll have access to state-of-the-art high-performance computing infrastructure and GPU clusters essential for conducting cutting-edge AI security research.

Salary range: €30,000 – €55,000 gross per year, depending on experience.
(Researchers relocating from abroad may be eligible for tax exemptions of up to 90%).

If you’re passionate about shaping the future of AI security and want to see your research protect the next generation of AI systems, we’d love to hear from you. Let’s build secure AI together!

Start Date: Flexible, as soon as possible.

Application Requirements

  • Cover letter (max. 1 page) describing how your background aligns with this specific position and outlining your research interests and professional goals in AI security.
  • CV including your publication record and links to open-source contributions, code repositories (e.g., GitHub), or research prototypes.

ABOUT US

AI4I – THE ITALIAN RESEARCH INSTITUTE FOR ARTIFICIAL INTELLIGENCE FOR INDUSTRIAL IMPACT

AI4I was founded to perform transformative, application-oriented research in Artificial Intelligence.

AI4I aims to engage and empower gifted, entrepreneurial young researchers committed to producing impact at the intersection of science, innovation, and industrial transformation.

AI4I’s distinctive features include highly competitive pay, bonus incentives, access to dedicated high-performance computing, state-of-the-art laboratories, industrial collaborations, and an ecosystem tailored to supporting the creation and growth of startups, all within a dynamic international environment.

AI4I is an institute that aims to strengthen scientific research, technology transfer, and, more broadly, the innovation capacity of the country, promoting its positive impact on industry, services, and public administration. To this end, the Institute contributes to building a research and innovation infrastructure based on artificial intelligence methods, with particular reference to manufacturing processes, within the framework of Industry 4.0 and its entire value chain. The Institute establishes relationships with similar entities and organizations in Italy and abroad, including Competence Centers and European Digital Innovation Hubs (EDIHs), so that the center becomes an attractive place for researchers, companies, and start-ups.

Apply Here