Description

I aim to operate at the forefront of AI, AI safety, and AI alignment, developing the technical strategies required to ensure advanced AI systems are robust, reliable, and beneficial.

As a student researcher at MIT completing a dual degree in AI & Decision-Making and Physics, I specialize in the complex challenges of understanding and steering ML systems and large language models.

In the MIT Algorithmic Alignment Group, I've been leading research on cross-model interpretability, uncovering the underlying mechanisms of LLM behavior across different model families.

At MIT FutureTech, I have been developing a RAG pipeline to enhance the accessibility of the MIT AI Risk Index platform (airisk.mit.edu) and the MIT AI Risk Repository, enabling users to easily browse and synthesize tens of thousands of datapoints on AI risks, AI governance frameworks, and AI experts.

Overall, my hope is to bridge the critical gap between cutting-edge technical research and real-world impact and policy. To that end, I have also presented on AI misuse threats to members of Congress and their staffers in Washington, D.C.

Whether it's mechanistic interpretability, systematic risk prioritization, or analyzing the societal impacts of Artificial General Intelligence, I am focused on using data and deep technical insight to chart a safe path forward for the development and use of artificial intelligence.

Email

david@turturean.com

Social Networks