Justin Zhun Liu

Applied Scientist

Microsoft, Project Turing


I am an applied scientist at Microsoft, Project Turing, working on large-scale pretraining of language representation and generation models. Before joining Microsoft, I was a research associate in the Language Technologies Institute at Carnegie Mellon University, advised by Professor Louis-Philippe Morency. My previous research revolved around deep learning, multimodal machine learning, and natural language processing. I obtained an M.Sc in Intelligent Information Systems from CMU and a B.Sc in Applied Mathematics from Wuhan University.


  • Deep Learning
  • Representation Learning
  • Natural Language Processing
  • Multimodal Machine Learning


  • M.Sc in Intelligent Information Systems, 2017 - 2018

    Carnegie Mellon University

  • B.Sc in Applied Mathematics, 2013 - 2017

    Wuhan University



Applied Scientist

Project Turing, Microsoft

Sep 2019 – Present Bellevue, WA
I work on large-scale pretraining of language representation and generation models and on applying them in application scenarios at Microsoft. An example of our work: https://www.microsoft.com/en-us/research/blog/turing-nlg-a-17-billion-parameter-language-model-by-microsoft/

Research Associate

Language Technologies Institute, Carnegie Mellon University

Jan 2019 – Jul 2019 Pittsburgh, PA
I worked on robust methods for learning multimodal representations and on machine learning applications in affective computing.

Graduate Research Assistant

MultiComp Laboratory, Carnegie Mellon University

Sep 2017 – Dec 2018 Pittsburgh, PA
I worked on compute- and data-efficient machine learning methods for learning multimodal representations.

Recent Publications

Language to Network: Conditional Parameter Adaptation with Natural Language Descriptions

Transfer learning using ImageNet pre-trained models has been the de facto approach in a wide range of computer vision tasks. However, …

Reconsidering the Duchenne Smile: Indicator of Positive Emotion or Artifact of Smile Intensity?

The Duchenne smile hypothesis is that smiles that include eye constriction (AU6) are the product of genuine positive emotion, whereas …

Learning Representations from Imperfect Time Series Data via Tensor Rank Regularization

There has been an increased interest in multimodal language processing including multimodal dialog, question answering, sentiment …

Words Can Shift: Dynamically Adjusting Word Representations using Nonverbal Behaviors

Humans convey their intentions through the usage of both verbal and nonverbal behaviors during face-to-face communication. Speaker …

Efficient Low-rank Multimodal Fusion with Modality-specific Factors

Multimodal research is an emerging field of artificial intelligence, and one of the main research problems in this field is multimodal …



Contributed to the early prototyping of the CMU Multimodal SDK, a toolkit for facilitating research in multimodal human communication.


mlboot is a toolkit for bootstrap confidence interval estimation.
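
The mlboot API itself is not shown here, but the underlying technique, a percentile bootstrap confidence interval, can be sketched as follows. This is a minimal illustration, not mlboot's actual interface; the function name `bootstrap_ci` and the sample scores are hypothetical.

```python
import random
import statistics

def bootstrap_ci(data, stat=statistics.mean, n_resamples=2000, alpha=0.05, seed=0):
    """Percentile bootstrap CI: resample `data` with replacement many times,
    compute the statistic on each resample, and take empirical quantiles.
    (Illustrative sketch only, not the mlboot API.)"""
    rng = random.Random(seed)
    n = len(data)
    estimates = sorted(
        stat([data[rng.randrange(n)] for _ in range(n)])
        for _ in range(n_resamples)
    )
    lo = estimates[int((alpha / 2) * n_resamples)]
    hi = estimates[int((1 - alpha / 2) * n_resamples) - 1]
    return lo, hi

# Hypothetical per-fold evaluation scores for some model
scores = [0.71, 0.74, 0.69, 0.80, 0.77, 0.73, 0.75, 0.72, 0.78, 0.70]
low, high = bootstrap_ci(scores)
```

With `alpha=0.05` this yields a 95% interval; widening `alpha` narrows the interval, and more resamples make the quantile estimates more stable.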