I’m a Ph.D. student in the Linguistics Department at Georgetown University, advised by Ethan Wilcox. My research explores the mechanisms by which language and information are processed. I am specifically interested in:

  • how modality shapes language processing,
  • how language models learn linguistic structures either via explicit training or “emergence,”
  • and how such models align with or diverge from human-like judgment and behavior.

Before coming to Georgetown, I was an NLP engineer at NCSOFT, where I worked on NLU and information extraction.

Outside of classes and assistantships, I like to participate in intramural sports as both a player and a coach, and to attend free food events around campus.

You can reach me at jm3743@georgetown.edu.

News

  • Apr 2026: I’ll be spending the summer in New Jersey as a Machine Learning and AI Intern at Bell Labs.
  • Feb 2026: I will be attending EACL 2026; see you in Rabat!
  • Jan 2026: I will be serving as a Teaching Assistant for COSC-5402 Empirical Methods in NLP at Georgetown University during the spring semester.
  • Sep 2025: I will be attending EMNLP 2025; see you in Suzhou!

Publications

2026

Abhishek Purushothama*, Junghyun Min*, Brandon Waldon, Nathan Schneider. Not ready for the bench: LLM legal interpretation is unstable and out of step with human judgment. ACM FAccT 2026.

Hannah Liu, Junghyun Min, En-Shiun Annie Lee, Ethan Yue Heng Cheung, Shou-Yi Hung, Elsie Chan, Shiyao Qian, Runtong Liang, Kimlan Huynh, Wing Yu Yip, York Hay Ng, Tsz Fung Yau, Ka Ieng Charlotte Lo, You-Wei Wu, Richard Tzong-Han Tsai. SiniticMTError: A Machine Translation Dataset with Error Annotations for Sinitic Languages. LREC 2026.

Junghyun Min, Na-Rae Han, Jena D. Hwang, Nathan Schneider. A Curious Class of Adpositional Multiword Expressions in Korean. MWE at EACL 2026.

Minho Lee, Junghyun Min, Yerang Kim, Woochul Lee, Yeonsoo Lee. Structured Language Generation Model: Loss Calibration and Formatted Decoding for Robust Structure Prediction. FrontierIR at AAAI 2026.

2025

Junghyun Min, Xiulin Yang, Shira Wein. When does meaning backfire? Investigating the role of AMRs in NLI. *SEM at EMNLP 2025.

Lauren Levine, Junghyun Min, Amir Zeldes. Building UD Cairo for Old English in the Classroom. UDW at SyntaxFest 2025.

Junghyun Min, Minho Lee, Woochul Lee, Yeonsoo Lee. Punctuation restoration improves structure understanding without supervision. RepL4NLP at NAACL 2025.

2020

Junghyun Min, R. Thomas McCoy, Dipanjan Das, Emily Pitler, Tal Linzen. Syntactic data augmentation increases robustness to inference heuristics. ACL 2020.

R. Thomas McCoy, Junghyun Min, Tal Linzen. BERTs of a feather do not generalize together: Large variability in generalization across models with similar test set performance. BlackboxNLP at EMNLP 2020.

Education and work experience

Ph.D. student, Linguistics, Georgetown University (2024–present). Advised by Ethan Wilcox.
Visiting Researcher, Computer Science, University of Toronto (2025)
NLP Engineer, NCSOFT (2021–2024)
M.A. Cognitive Science, Johns Hopkins University (2019–2020). Advised by Tal Linzen.
Data Analyst, Harford Community College (2018–2019)
B.S. Physics, B.A. Mathematics, Johns Hopkins University (2014–2017)