minyoung huh (jacob)

email  /  google scholar  /  github

Researcher at OpenAI. I tinker with models.

I received my PhD from MIT under the supervision of Phillip Isola and Pulkit Agrawal. Before that, I completed my Bachelor's degree at UC Berkeley, advised by Alexei (Alyosha) Efros at BAIR. Prior to joining BAIR, I worked at the Swarm Lab with Maysam Chamanzar and Michel Maharbiz.

I spent some time with wonderful people at Google Research, Facebook (Meta) Research, Adobe Research, and Snap Research.

Highlights

[Quanta Magazine] Distinct AI Models Seem to Converge on How They Encode Reality
OpenAI GPT-5 System Card
gpt-oss-120b & gpt-oss-20b Model Card
ChatGPT search
Scalable Optimization in the Modular Norm
The Platonic Representation Hypothesis
Training Neural Networks from Scratch with Parallel Low-Rank Adapters
Straightening Out the Straight-Through Estimator: Overcoming Optimization Challenges in Vector Quantized Networks
The Low-Rank Simplicity Bias in Deep Networks
Totems: Physical Objects for Verifying Visual Integrity
Learning to Ground Multi-Agent Communication with Autoencoders
Transforming and Projecting Images into Class-conditional Generative Networks
Feedback Adversarial Learning: Spatial Feedback for Improving Generative Adversarial Networks
[The New Yorker] In the Age of AI, Is Seeing Still Believing?
Fighting Fake News: Detecting Malicious Image Manipulations via Learned Self-Consistency
Multi-view to Novel view: Synthesizing Views via Self-Learned Confidence
What makes ImageNet good for Transfer Learning?
Ultrasonic sculpting of virtual, steerable optical waveguides in tissue
Virtual Acousto-optic Beam Paths for Steerable Deep-tissue Optical Stimulation and Imaging