Christopher Clark

I am a research scientist on the PRIOR team at the Allen Institute for AI, a non-profit AI research institute. My research interests center on unified vision-and-language systems and out-of-domain generalization. My recent projects have involved training models that can complete many multi-modal tasks with a shared architecture. Previously, I worked on training models to play Iconary, a drawing-and-guessing game, and my PhD focused on preventing models from relying on spurious correlations and non-generalizable patterns found in their training data.

I received my PhD from the University of Washington, where I was advised by Luke Zettlemoyer. Before that, I was a Predoctoral Young Investigator at AI2 and completed a Master's degree at the University of Edinburgh.


Unified-IO: A Unified Model for Vision, Language, and Multi-Modal Tasks
Jiasen Lu*, Christopher Clark*, Rowan Zellers, Roozbeh Mottaghi, Aniruddha Kembhavi
[paper] [demo]

A-OKVQA: A Benchmark for Visual Question Answering using World Knowledge
Dustin Schwenk, Apoorv Khandelwal, Christopher Clark, Kenneth Marino, Roozbeh Mottaghi
In ECCV 2022
[paper] [code] [project page]

Webly Supervised Concept Expansion for General Purpose Vision Models
Amita Kamath*, Christopher Clark*, Tanmay Gupta*, Eric Kolve, Derek Hoiem, Aniruddha Kembhavi
In Submission
[paper] [code] [demo] [project page]

Iconary: A Pictionary-Based Game for Testing Multimodal Communication with Drawings and Text
Christopher Clark, Jordi Salvador, Dustin Schwenk, Derrick Bonafilia, Mark Yatskar, Eric Kolve, Alvaro Herrasti, Jonghyun Choi, Sachin Mehta, Sam Skjonsberg, Carissa Schoenick, Aaron Sarnat, Hannaneh Hajishirzi, Aniruddha Kembhavi, Oren Etzioni, Ali Farhadi
In EMNLP 2021
[paper] [code]

Learning to Model and Ignore Dataset Bias with Mixed Capacity Ensembles
Christopher Clark, Mark Yatskar, Luke Zettlemoyer
In EMNLP Findings 2020
[paper] [code]

Don’t Take the Easy Way Out: Ensemble Based Methods for Avoiding Known Dataset Biases
Christopher Clark, Mark Yatskar, Luke Zettlemoyer
In EMNLP 2019
[paper] [code]

BoolQ: Exploring the Surprising Difficulty of Natural Yes/No Questions
Christopher Clark, Kenton Lee, Ming-Wei Chang, Tom Kwiatkowski, Michael Collins, Kristina Toutanova
In NAACL 2019
[paper] [dataset] [leaderboard]

Simple and Effective Multi-Paragraph Reading Comprehension
Christopher Clark, Matt Gardner
In ACL 2018
[paper] [code] [demo]

Deep Contextualized Word Representations
Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, Luke Zettlemoyer
In NAACL 2018
[paper] [website]

IKE - An Interactive Tool for Knowledge Extraction
Bhavana Dalvi, Sumithra Bhakthavatsalam, Chris Clark, Peter Clark, Oren Etzioni, Anthony Fader, Dirk Groeneveld
In AKBC at NAACL 2016
[paper] [website] [code]

PDFFigures 2.0: Mining Figures from Research Papers
Christopher Clark, Santosh Divvala
In JCDL 2016
[paper] [website] [code]

Looking Beyond Text: Extracting Figures, Tables, and Captions from Computer Science Papers
Christopher Clark, Santosh Divvala
In Workshop on Scholarly Big Data at AAAI 2015
[paper] [website] [code]

Training Deep Convolutional Neural Networks to Play Go
Christopher Clark, Amos Storkey
In ICML 2015
[paper] [demo]