Hello! I'm Will, and I'm a Sr. Data Scientist at Capital One, where I work on natural language understanding for Eno, our virtual assistant. Prior to joining Capital One, I was a Master's student at the Courant Institute of Mathematical Sciences at New York University. I was also a member of the Machine Learning for Language (ML²) group (a subset of the CILVR group), where I was advised by Prof. Samuel R. Bowman.


Please refer to my Semantic Scholar page for an up-to-date list of publications. (* indicates equal contribution.)

Publications (2021)

Types of Out-of-Distribution Texts and How to Detect Them
Udit Arora, William Huang, and He He
In Proceedings of EMNLP 2021

Does Putting a Linguist in the Loop Improve NLU Data Collection?
Alicia Parrish, William Huang, Omar Agha, Soo-Hwan Lee, Nikita Nangia, Alex Warstadt, Karmanya Aggarwal, Emily Allaway, Tal Linzen, and Samuel R. Bowman
In Findings of EMNLP 2021

Comparing Test Sets with Item Response Theory
Clara Vania*, Phu Mon Htut*, William Huang*, Dhara Mungra, Richard Yuanzhe Pang, Jason Phang, Haokun Liu, Kyunghyun Cho, and Samuel R. Bowman
In Proceedings of ACL 2021

Publications (2020)

Precise Task Formalization Matters in Winograd Schema Evaluations
Haokun Liu*, William Huang*, and Samuel R. Bowman
In Proceedings of EMNLP 2020

Counterfactually-Augmented SNLI Training Data Does Not Yield Better Generalization Than Unaugmented Data
William Huang, Haokun Liu, and Samuel R. Bowman
In Proceedings of EMNLP 2020 Workshop on Insights from Negative Results


Adversarially Constructed Evaluation Sets Are More Challenging, but May Not Be Fair
Jason Phang, Angelica Chen, William Huang, and Samuel R. Bowman
arXiv preprint

Talks

  • STAR Talk at New York Academy of Sciences' Natural Language, Dialog and Speech Symposium; 2020


  • NYU AI School; 2021

Last updated: November 18, 2021. Contact: wh322 at cornell dot edu.