Frontier AI
Our identity is rooted in the concept of Frontier AI, the leading edge of AI innovation. Frontier models are highly advanced, large-scale AI systems trained on vast datasets, and they are the product of major breakthroughs in AI and machine learning research. They consistently surpass previous models in accuracy, efficiency, and their ability to tackle complex tasks.
However, with these advanced capabilities come significant risks. Frontier AI models can be misused for malicious purposes, create highly convincing deceptive content, amplify social and economic inequalities, and potentially lead to systems whose decision-making processes elude human oversight, among other concerns.

What We Do
To Cultivate Local Talent
Our goal is to strengthen the local research ecosystem by nurturing local talent and providing the tools and support needed for career development in this field. We aim to facilitate knowledge acquisition, create valuable opportunities, and improve accessibility, thereby fostering growth within the community. Through these efforts, we are dedicated to establishing and advancing the field of long-term AI safety in our region.
To Advance AI Safety Research
Our aim is to conduct both theoretical and empirical research that advances Frontier AI Safety as a sociotechnical challenge. We integrate four key areas: AI Alignment, AI Catastrophic Risks, AI Systems Evaluation, and AI Governance. This includes developing rigorous techniques for creating safe and trustworthy AI systems and building confidence in their behavior and robustness to ensure their successful societal adoption. Our approach combines deep technical expertise with social science considerations.
To Foster a Global Community
Given the global impact of extreme AI risks, we are committed to collaborating with a diverse international community of AI safety experts. Our goal is to build an interdisciplinary network that includes academics, technologists, and policymakers from around the world.
Meet the team

María Victoria Carro
Director
AI Safety Researcher. Lawyer. She is dedicated to fostering a future where AI systems are not only powerful but also secure, ethical, and aligned with human values.

Mario Leiva
Lead Engineer
PhD in Computer Science, Universidad Nacional del Sur. CONICET Postdoctoral Fellow in AI.

Denise Alejandra Mester
Research Scientist
AI Safety Researcher. Lawyer (University of Buenos Aires). She is committed to advancing AI Safety, focusing on alignment, ethical frameworks, and evaluation methodologies for safer AI systems.

Francisca Gauna
Research Scientist
Engineer (University of Buenos Aires). Data and AI Analyst (Raízen). She is passionate about AI Safety, focusing on evaluations and potential misaligned behavior of AI systems.

Melania Gadea
Research Assistant

Luca Forziati
Research Assistant

Margarita González
Research Assistant & Public Relations

Dolores Val Eyras
Research Assistant

Felicitas Rodríguez
Research Assistant

Lola Ramos
Research Assistant

Agostina Jara Rey
Research Assistant

Ana López
Research Assistant

Juan Cruz Changazo
Research Assistant
Advisors

Gerardo Simari
Advisor
Gerardo I. Simari is a professor at Universidad Nacional del Sur in Bahía Blanca and a researcher at CONICET (Argentina), as well as adjunct faculty at Arizona State University (USA). His research focuses on topics within Artificial Intelligence and Databases, with applications to cybersecurity.

Maria Vanina Martinez
Advisor
Dr. Maria Vanina Martinez is a tenured scientist at the Artificial Intelligence Research Institute (IIIA-CSIC) in Barcelona. Her research is in the area of knowledge representation and reasoning, with a focus on knowledge dynamics, management of inconsistency and uncertainty, and the study of the ethical and social impact of Artificial Intelligence.

Juan Gustavo Corvalán
Advisor
PhD. Director of IALAB UBA. Master in AI. Co-creator of Prometea and PretorIA, two AI predictive systems for justice. Speaker at Google Talks, the UN, the OAS, Oxford University, the Massachusetts Institute of Technology (MIT), the French Council of State, the European Union Agency for Fundamental Rights, and various national and international universities.
