Research

My work is currently organized around two directions that operate in parallel: AI for Science and AI for Systems. Both are grounded in earlier work on scientific computing, HPC, compilers, memory systems, and distributed computation.

AI for Science

In this direction, I develop AI methods in collaboration with researchers across the natural sciences, targeting questions that lead to measurement, explanation, or discovery.

Application areas include materials science, physics, and biology. Methodologically, the work is intentionally broad: computer vision, reinforcement learning, foundation models, and multimodal learning all appear where they are useful.

  • Scientific imaging and vision for materials and physics
  • Texture-aware foundation-model adaptation
  • Learning-based optimization in simulation-driven science

AI for Systems

In this direction, AI is aimed not at a scientific domain, but at the computational system itself. The question is how language models and agents can reason about code, compilers, parallel programming models, accelerator targets, runtime behavior, and performance constraints.

One way to phrase this is AI for computer science: models that do not merely generate text, but interact with programs, hardware-facing abstractions, and the mechanics of modern computing.

  • LLMs for OpenMP, MPI, and parallel code generation
  • Code translation across HPC programming models
  • Reasoning about performance, complexity, and hardware behavior