Linyuan Gong

PhD Student in Artificial Intelligence

EECS Department, UC Berkeley

I am a PhD student in Computer Science at the University of California, Berkeley, advised by Prof. Alvin Cheung; I was previously advised by Prof. Dawn Song. Before that, I received my Bachelor’s degree in Computer Science from Peking University, China, where I was advised by Prof. Liwei Wang and Prof. Di He.

I specialize in Artificial Intelligence (AI) with a focus on Large Language Models (LLMs). My research spans pretraining, prompting, and evaluation methodologies for a variety of language models, including BERT, T5, and GPT-style LLMs. My recent work focuses on leveraging LLMs for code generation, infilling, transpilation, and understanding, pushing the boundaries of how AI interacts with programming languages.

Interests
  • Artificial Intelligence
  • Natural Language Processing
  • Large Language Models
Education
  • Ph.D. in Computer Science, 2020 - Present

    University of California, Berkeley

  • B.S. in Computer Science, 2016 - 2020

    Peking University, Beijing, China

Recent Publications

(2024). ADELT: Transpilation Between Deep Learning Frameworks. In IJCAI 2024.

(2022). Joint Language Semantic and Structure Embedding for Knowledge Graph Completion. In COLING 2022.

(2021). PlotCoder: Hierarchical Decoding for Synthesizing Visualization Code in Programmatic Context. In ACL 2021.

(2021). Anytime Sampling for Autoregressive Models via Ordered Autoencoding. In ICLR 2021.

(2020). Improved Clinical Abbreviation Expansion via Non-Sense-Based Approaches. In ML4H (NeurIPS Workshop) 2020.

(2020). MC-BERT: Efficient Language Pre-Training via a Meta Controller.

(2019). Microsoft Research Asia's Systems for WMT19. In WMT19 (ACL 2019 Workshop).

(2019). Efficient training of BERT by progressively stacking. In ICML 2019.
