George Mason NLP

George Mason University

George Mason Natural Language Processing Group

Natural language processing (NLP) aims to enable computers to use human languages – so that people can, for example, interact with computers naturally; or communicate with people who don’t speak a common language; or manipulate speech or text data at scales not otherwise possible. The NLP group at George Mason Computer Science is interested in all aspects of NLP, with a focus on building tools for under-served languages, and constructing natural language interfaces that can reliably assist humans in knowledge acquisition and task completion.

We are currently working on multilingual models, on building Machine Translation systems robust to L2 language variation, on NLP for the documentation of endangered languages, on exploring the interplay between language and code, on constructing interactive natural language interfaces, and on improving the efficiency of NLP models.

News

  • August 2021 - The GMNLP group is growing, as Ziyu Yao will be joining CS@GMU as an assistant professor!
  • July 2021 - Antonis gave a talk to the UNESCO language working group (organized by the Translation Commons) on how "Modern NLP changes the requirements for building automatic Translation Systems". Video and slides are available!
  • July 2021 - The preprint "On the Evaluation of Machine Translation for Terminology Consistency" by Mahfuz, Antonis, and collaborators was featured in an article in Slator!
  • May 2021 - Antonis received a grant from the Virginia Research Investment Fund (with Hemant Purohit (PI), Huzefa Rangwala, and Tonya Reaves) to design tools for proactive counter-disinformation communication!
  • April 2021 - 2 papers accepted at ACL 2021! One is about fairness and equity in Question Answering systems, and the second is on Machine Translation for dialectal language variants. Preprints and code are available here!
  • March 2021 - 1 paper accepted at NAACL! Preprint and details here.
  • January 2021 - Antonis received a grant from the National Endowment for the Humanities to build Optical Character Recognition tools for under-served languages (and especially Indigenous Latin American ones)!
  • January 2021 - Antonis spoke to the Global Podcast for the TICO-19 project.
  • November 2020 - Congratulations to Mahfuz for winning one of the two best paper awards at the W-NUT workshop!
  • September 2020 - 4 papers accepted at the main EMNLP conference and 1 paper accepted at the Findings of EMNLP! Preprints are available below!

Projects

Our research is or has been supported by the following organizations and companies:

NSF · NEH · Google · Amazon · VRIF

Efficient NLP/AI

We study building NLP/AI models with limited supervision, especially for low-resource domains (e.g., healthcare).

Human-AI Interaction

We explore how machine learning systems can interact with humans effectively. This includes conversing with humans through dialogue, as well as proactively collaborating with and learning from humans during decision making.

Language and Code

We seek to build natural language interfaces that allow humans to communicate with computers/machines easily. This requires modeling natural language, programming language, and their interplay. Applications of this research include semantic parsing and general-purpose code generation.

OCR

This NEH-funded project focuses on the development of modern Optical Character Recognition (OCR) and post-correction tools tailored for Indigenous Latin American Languages.

Fairness

Advances in natural language processing (NLP) technology now make it possible to perform many tasks through natural language or over natural language data: automatic systems can answer questions, perform web search, or carry out commands on our computers. We study whether such systems serve all users and language communities equitably.

Speech

Most languages of the world are “oral”: they are not traditionally written and even if an alphabet exists, the community doesn’t usually use it. Hence, building NLP systems that can directly operate on speech input is paramount.

Morphology

Human language is marked by considerable diversity around the world, and the surface forms of languages vary substantially. Morphology describes the way different word forms arise from lexemes. Computational morphology attempts to reproduce this process across languages, or uses machine learning models to model or discover the morphophonological processes that exist in a language.
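In its simplest rule-based form, morphological generation maps a lemma plus a bundle of grammatical features to a surface word form. The sketch below covers a tiny, purely illustrative fragment of English verb morphology; the feature labels (3SG, PRES, PAST) follow common glossing conventions and are not tied to any particular system of ours.

```python
# Toy rule-based morphological generation: lemma + features -> surface form.
# Illustrative only; real systems handle far more rules and irregulars.
def inflect(lemma: str, features: frozenset) -> str:
    if "3SG" in features and "PRES" in features:
        if lemma.endswith(("s", "sh", "ch", "x", "z")):
            return lemma + "es"   # watch -> watches
        return lemma + "s"        # walk -> walks
    if "PAST" in features:
        if lemma.endswith("e"):
            return lemma + "d"    # bake -> baked
        return lemma + "ed"       # walk -> walked
    return lemma                  # default: bare form

print(inflect("watch", frozenset({"3SG", "PRES"})))  # watches
print(inflect("walk", frozenset({"PAST"})))          # walked
```

Machine learning approaches learn such mappings from (lemma, features, form) triples rather than hand-written rules, which is essential for languages with richer morphology than English.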

Robustness

NLP systems are typically trained and evaluated in “clean” settings, over data without significant noise. However, systems deployed in the real world need to deal with vast amounts of noise. At GMU NLP we work towards making NLP systems more robust to several types of noise (adversarial or naturally occurring).
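One common way to study (or train for) robustness is to inject synthetic noise into clean text and measure how much a system's performance degrades. The character-level perturbations below (swaps and deletions, mimicking typos) are a minimal illustrative sketch, not our actual evaluation pipeline.

```python
import random

# Inject character-level noise (adjacent swaps and deletions) into text,
# simulating the typos found in real-world user input. Illustrative sketch.
def add_char_noise(text: str, noise_rate: float = 0.1, seed: int = 0) -> str:
    rng = random.Random(seed)  # fixed seed for reproducible perturbations
    chars = list(text)
    out = []
    i = 0
    while i < len(chars):
        if rng.random() < noise_rate and chars[i].isalpha():
            op = rng.choice(["swap", "delete"])
            if op == "swap" and i + 1 < len(chars):
                out.extend([chars[i + 1], chars[i]])  # swap adjacent chars
                i += 2
                continue
            if op == "delete":
                i += 1  # drop this character
                continue
        out.append(chars[i])
        i += 1
    return "".join(out)

clean = "the quick brown fox"
print(add_char_noise(clean, noise_rate=0.2))
```

Comparing model accuracy on clean versus noised inputs gives a simple robustness measure; adversarial noise additionally searches for the perturbations that hurt the model most.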

Language Documentation

Language Documentation aims to produce a permanent record of a language as used by its community, typically a formal grammatical description along with a lexicon. Our group works on integrating NLP systems into the documentation workflow, aiming to speed up the process and support the work of field linguists and language communities.

Machine Translation

Machine Translation is the task of translating between human languages using computers. Starting from simple word-for-word rule-based systems in the 1950s, the field has advanced to large multilingual neural models that can learn to translate between dozens of languages.
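The earliest word-for-word systems can be sketched in a few lines: each source word is replaced by a dictionary entry, with no reordering or context. The Spanish-to-English lexicon below is a hypothetical fragment for illustration.

```python
# Toy word-for-word "translation" in the style of 1950s rule-based systems.
# Hypothetical Spanish-to-English word list, for illustration only.
LEXICON = {
    "el": "the",
    "gato": "cat",
    "negro": "black",
    "duerme": "sleeps",
}

def translate_word_for_word(sentence: str) -> str:
    """Replace each source word with its dictionary entry; keep unknown words."""
    return " ".join(LEXICON.get(word, word) for word in sentence.lower().split())

print(translate_word_for_word("el gato negro duerme"))  # the cat black sleeps
```

Note how the source word order is copied verbatim, so "el gato negro" becomes "the cat black": exactly the kind of error that motivated statistical and, later, neural approaches that model context and reordering.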

Multilingual NLP

An exciting research direction that we pursue at GMU NLP is building multilingual and polyglot systems. The languages of the world often share similar characteristics, and training systems cross-lingually allows us to leverage these similarities and overcome data scarcity issues.

Members

Antonios Anastasopoulos

Assistant Professor

Computational Linguistics, Machine Translation, Speech Recognition, NLP for Endangered Languages

Ziyu Yao

Assistant Professor

Human-AI Interaction, Language and Code, Efficient NLP/AI

Fahim Faisal

PhD Student

Computational Linguistics, Natural Language Processing, Machine Learning

Md Mahfuz Ibn Alam

PhD Student

Natural Language Processing, Machine Learning, Computer Vision, Common Sense Reasoning

Sharlina Keshava

CS Master’s Student

Natural Language Processing, Fairness in AI, Multilingual NLP, Machine Learning, Deep Learning

Huayu Zhou

PhD Student

Natural Language Processing, Machine Translation, Machine Learning, Data Mining

Ruoyu (Roy) Xie

Undergraduate Student

Natural Language Processing, Machine Learning, Computer Vision

Collaborators

Claytone Sikasote

MS, African Masters of Machine Intelligence; Lecturer, University of Zambia

Language Processing for Bemba

Recent Publications

Browse all publications.

Towards more equitable question answering systems: How much more data do you need? Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics, 2021.

PDF Code Project

Machine Translation into Low-resource Language Varieties. Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics, 2021.

PDF Code Project

Code to Comment Translation: A Comparative Study on Model Effectiveness & Errors. Proceedings of the First Workshop on Natural Language Processing for Programming, 2021.

PDF Code Dataset Project

Phoneme Recognition through Fine Tuning of Phonetic Representations: a Case Study on Luhya Language Varieties. Proceedings of Interspeech 2021, 2021.

PDF Project

Evaluating the Morphosyntactic Well-formedness of Generated Texts. arXiv, 2021.

PDF Code