For a more up-to-date list of my publications, please see my Google Scholar profile.


Position-aware Attention and Supervised Data Improve Slot Filling

Conference on Empirical Methods in Natural Language Processing (EMNLP 2017; Oral, Honourable Mention)

Yuhao Zhang, Victor Zhong, Danqi Chen, Gabor Angeli, and Christopher D. Manning.

We propose an end-to-end neural relation extractor with an entity-position-aware attention mechanism.
As part of this work, we also release the TAC Relation Extraction Dataset (TACRED), a hand-annotated relation extraction dataset built from past TAC KBP competitions that is orders of magnitude larger than existing relation extraction datasets.
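As a rough illustration of the mechanism (not the paper's reference implementation), the sketch below computes position-aware attention in NumPy: each token's score depends on its hidden state, a sentence summary vector, and embeddings of its distance to the subject and object entities. All names, dimensions, and weights here are illustrative stand-ins.

import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Illustrative sizes: T tokens, hidden size d, position-embedding size dp,
# attention size da. Random values stand in for learned parameters.
rng = np.random.default_rng(0)
T, d, dp, da = 20, 64, 30, 50
H  = rng.standard_normal((T, d))    # encoder hidden states, one per token
q  = rng.standard_normal(d)         # sentence summary vector
Ps = rng.standard_normal((T, dp))   # embeddings of each token's distance to the subject
Po = rng.standard_normal((T, dp))   # embeddings of each token's distance to the object

Wh = rng.standard_normal((da, d))
Wq = rng.standard_normal((da, d))
Ws = rng.standard_normal((da, dp))
Wo = rng.standard_normal((da, dp))
v  = rng.standard_normal(da)

# One score per token, conditioned on the token's state and its
# position relative to the two entities.
scores = np.array([
    v @ np.tanh(Wh @ H[i] + Wq @ q + Ws @ Ps[i] + Wo @ Po[i])
    for i in range(T)
])
a = softmax(scores)   # attention weights over tokens
z = a @ H             # weighted sentence representation for classification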

PDF

@inproceedings{zhang2017tacred,
  title={Position-aware Attention and Supervised Data Improve Slot Filling},
  author={Zhang, Yuhao and Zhong, Victor and Chen, Danqi and Angeli, Gabor and Manning, Christopher D.},
  booktitle={Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP 2017)},
  year={2017}
}

Dynamic coattention networks for question answering

International Conference on Learning Representations (ICLR 2017)

Victor Zhong^, Caiming Xiong^, and Richard Socher (^Equal contribution)

The Dynamic Coattention Network (DCN) is a neural network model with a novel coattention encoder, which builds conditional representations of both the document and the question, and a novel dynamic decoder, which makes multiple attempts at predicting the answer in order to recover from local optima.
We introduced the DCN in late 2016, achieving state-of-the-art results on the Stanford Question Answering Dataset (SQuAD). Our work was also featured on VentureBeat and Fast Company.
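For intuition, here is a minimal NumPy sketch of the coattention computation: an affinity matrix between document and question encodings is normalized in both directions to produce attention weights each way, and the two are combined into a coattention context. The encodings below are random stand-ins for the learned representations.

import numpy as np

def softmax(x, axis):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Illustrative sizes: d-dimensional encodings of m document words
# and n question words.
rng = np.random.default_rng(0)
d, m, n = 64, 40, 10
D = rng.standard_normal((d, m))   # document encoding
Q = rng.standard_normal((d, n))   # question encoding

L = D.T @ Q                       # affinity of every document/question word pair, (m, n)
A_Q = softmax(L, axis=0)          # attention over the document for each question word
A_D = softmax(L.T, axis=0)        # attention over the question for each document word
C_Q = D @ A_Q                     # document summary in question space, (d, n)
C_D = np.vstack([Q, C_Q]) @ A_D   # coattention context per document word, (2d, m)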

arXiv Blog

@inproceedings{xiong2017dynamic,
  title={Dynamic Coattention Networks For Question Answering},
  author={Xiong, Caiming and Zhong, Victor and Socher, Richard},
  booktitle={International Conference on Learning Representations (ICLR 2017)},
  year={2017}
}

Ask me anything: Dynamic memory networks for natural language processing

International Conference on Machine Learning (ICML 2016)

Ankit Kumar, Ozan Irsoy, Jonathan Su, James Bradbury, Robert English, Brian Pierce, Peter Ondruska, Ishaan Gulrajani, Victor Zhong, Romain Paulus, and Richard Socher

The Dynamic Memory Network (DMN), a general framework for building dynamic memory models, achieved state-of-the-art results on the Stanford Sentiment Treebank, the bAbI dataset, and the POS tagging task on the Penn Treebank. Our work was also featured in Wired and MIT Technology Review.

arXiv Blog

@inproceedings{kumar2016ask,
  title={Ask me anything: Dynamic memory networks for natural language processing},
  author={Kumar, Ankit and Irsoy, Ozan and Su, Jonathan and Bradbury, James and English, Robert and Pierce, Brian and Ondruska, Peter and Gulrajani, Ishaan and Zhong, Victor and Paulus, Romain and others},
  booktitle={International Conference on Machine Learning (ICML 2016)},
  year={2016}
}

Bootstrapped self training for knowledge base population

Proceedings of the Eighth Text Analysis Conference (TAC2015)

Gabor Angeli, Victor Zhong, Danqi Chen, Arun Chaganty, Jason Bolton, Melvin Johnson Premkumar, Panupong Pasupat, Sonal Gupta, and Christopher D. Manning

The Stanford entry to the 2015 Knowledge Base Population challenge, in which our system achieved first place.
This marked the first time any TAC KBP team submitted a neural-network-based model for knowledge base population.

PDF (NIST)

@inproceedings{angeli2015bootstrapped,
  title={Bootstrapped self training for knowledge base population},
  author={Angeli, Gabor and Zhong, Victor and Chen, Danqi and Chaganty, Arun and Bolton, Jason and Premkumar, Melvin Johnson and Pasupat, Panupong and Gupta, Sonal and Manning, Christopher D},
  booktitle={Proceedings of the Eighth Text Analysis Conference (TAC2015)},
  year={2015}
}