Papers & Works for large language models (ChatGPT, GPT-3, Codex, etc.).

KSESEU/LLMPapers

Resources on ChatGPT and Large Language Models

A collection of papers and related works on Large Language Models (ChatGPT, GPT-3, Codex, etc.).

Contributors

This repository is built and maintained by its contributors.

The automation script of this repo is powered by Auto-Bibfile. If you'd like to contribute to this repo, please modify bibtex.bib or related_works.json and re-generate README.md by running python scripts/run.py. A minimal sketch of this regeneration flow is shown below.
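
To make that flow concrete, here is a minimal, hypothetical sketch of the bibtex.bib-to-README step, assuming the third-party bibtexparser package; the actual Auto-Bibfile script behind scripts/run.py is more elaborate (badges, categories, related_works.json) and may differ.

```python
# Hypothetical sketch: parse bibtex.bib and emit one README bullet per
# entry. This only illustrates the regeneration idea; it is not the
# Auto-Bibfile implementation used by scripts/run.py.
import bibtexparser  # pip install bibtexparser

with open("bibtex.bib") as f:
    database = bibtexparser.load(f)

lines = ["## Papers", ""]
for entry in database.entries:
    title = entry.get("title", "").strip("{}")
    authors = entry.get("author", "").replace(" and ", ", ")
    lines.append(f"- **{title}**, by {authors}")

with open("README.generated.md", "w") as f:
    f.write("\n".join(lines) + "\n")
```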

Papers

Outline

Hyperlinks

Evaluation

Survey

In-Context Learning

Instruction Tuning

RLHF

Pre-Training Techniques

Mixtures of Experts

Knowledge Enhanced

Knowledge Distillation

Knowledge Generation

Knowledge Editing

Reasoning

Chain of Thought

Multi-Step Reasoning

Arithmetic Reasoning

Symbolic Reasoning

Chain of Verification

Knowledge Graph Embedding

Federated Learning

Distributed AI

Selective Annotation

  • Selective Annotation Makes Language Models Better Few-Shot Learners,
    by Hongjin Su, Jungo Kasai, Chen Henry Wu, Weijia Shi, Tianlu Wang, Jiayi Xin, Rui Zhang, Mari Ostendorf et al.
    This paper proposes a graph-based selective annotation method named vote-k to
    (1) select a pool of examples to annotate from unlabeled data, and
    (2) retrieve prompts (contexts) from the annotated data pool for in-context learning.
    Specifically, the selection method first iteratively selects a small set of unlabeled examples and labels them to serve as contexts for LLMs to predict the labels of the remaining unlabeled data. It then keeps the predictions with the highest confidence (log probability of the generated output) to fill up the selective annotation pool; a minimal sketch of this loop follows this list.

  • Selective Data Acquisition in the Wild for Model Charging,
    by Chengliang Chai, Jiabin Liu, Nan Tang, Guoliang Li and Yuyu Luo
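
As referenced above, here is a minimal, hypothetical sketch of the vote-k-style confidence-based pool filling. The annotate and lm_label_with_confidence helpers are assumptions (human labeling and an LLM call returning a label plus its log-probability), and the paper's graph-based diversity selection is simplified to a plain prefix; this illustrates the loop, not the authors' released implementation.

```python
# Hypothetical sketch of the vote-k-style selection loop described
# above: annotate a small seed set, let the LLM label the rest in
# context, and keep the most confident predictions to grow the pool.
# `annotate` and `lm_label_with_confidence` are assumed helpers, and
# the paper's graph-based diversity selection is simplified to taking
# the first `seed_size` examples.
from typing import Callable, List, Tuple

def fill_annotation_pool(
    unlabeled: List[str],
    seed_size: int,
    pool_size: int,
    annotate: Callable[[str], str],  # stands in for human annotation
    lm_label_with_confidence: Callable[
        [List[Tuple[str, str]], str], Tuple[str, float]
    ],  # (in-context examples, input) -> (label, log-probability)
) -> List[Tuple[str, str]]:
    # 1) Annotate a small seed set to serve as initial contexts.
    pool = [(x, annotate(x)) for x in unlabeled[:seed_size]]

    # 2) Prompt the LLM with the pool to label the remaining data,
    #    recording the confidence (log probability) of each prediction.
    scored = [
        (x, *lm_label_with_confidence(pool, x))
        for x in unlabeled[seed_size:]
    ]

    # 3) Keep the highest-confidence predictions until the pool is full.
    scored.sort(key=lambda item: item[2], reverse=True)
    for x, label, _log_prob in scored[: pool_size - len(pool)]:
        pool.append((x, label))
    return pool
```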

Program and Code Generation

Code Representation

Code Fixing

Code Review

Program Generation

Software Engineering

AIGC

Controllable Text Generation

Continual Learning

Prompt Engineering

Natural Language Understanding

Multimodal

Multilingual

Reliability

Robustness

Dialogue System

Recommender System

Event Extraction

Event Relation Extraction

Data Augmentation

Data Annotation

Information Extraction

Domain Adaptation

Question Answering

Application

Meta Learning

  • Meta-learning via Language Model In-context Tuning,
    by Yanda Chen, Ruiqi Zhong, Sheng Zha, George Karypis and He He

  • MetaICL: Learning to Learn In Context,
    by Sewon Min, Mike Lewis, Luke Zettlemoyer and Hannaneh Hajishirzi
    MetaICL proposes a supervised meta-training framework that enables LMs to learn a new task in context more effectively. In MetaICL, each meta-training example packs several training examples from one task into a single sequence presented to the LM, and the prediction of the final example is used to calculate the loss; a minimal sketch of this construction follows this list.
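
As referenced above, here is a minimal, hypothetical sketch of MetaICL-style sequence construction: the demonstration examples and the final query are packed into one prompt, and only the final example's output receives loss. The tokenizer and loss plumbing are assumed; this is an illustration, not MetaICL's released code.

```python
# Hypothetical sketch of MetaICL-style meta-training data construction:
# pack k training examples from one task plus a final query into a
# single sequence; cross-entropy loss is applied only to the final
# example's target tokens.
from typing import List, Tuple

def build_metaicl_sequence(
    demonstrations: List[Tuple[str, str]],  # (input, output) pairs from one task
    query: Tuple[str, str],                 # final example whose output is supervised
) -> Tuple[str, str]:
    # Concatenate the k demonstrations followed by the query input.
    context = " ".join(f"{x} {y}" for x, y in demonstrations)
    prompt = f"{context} {query[0]}"
    target = query[1]  # only these tokens receive loss
    return prompt, target

# Usage: the demonstration tokens provide context but no loss; the LM
# is trained to predict `target` conditioned on `prompt`.
prompt, target = build_metaicl_sequence(
    [("Review: great movie. Sentiment:", "positive"),
     ("Review: dull plot. Sentiment:", "negative")],
    ("Review: loved it. Sentiment:", "positive"),
)
```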

Generalizability

Language Model as Knowledge Base

Retrieval-Augmented Language Model

Quality

Interpretability/Explainability

Data Generation

Safety

Graph Learning

Knowledge Storage and Locating

Knowledge Fusion

Agent

LLM and GNN

Vision LLM

LLM and KG

Others

Related Works

Git Repos

  • Awesome-ChatGPT,
    A curated collection of ChatGPT resources for learning, continuously updated.

  • Awesome ChatGPT Prompts,
    In this repository, you will find a variety of prompts that can be used with ChatGPT.

  • ChatRWKV,
    ChatRWKV is like ChatGPT but powered by the RWKV (100% RNN) language model, which is the only RNN (as of now) that can match transformers in quality and scaling while being faster and saving VRAM. Training is sponsored by Stability and EleutherAI.

  • ChatGPT-Hub,
    A collection of ChatGPT resources.

  • PaLM-rlhf-pytorch,
    Implementation of RLHF (Reinforcement Learning with Human Feedback) on top of the PaLM architecture.

  • BAAI-WuDao/Data,
    The "WuDao" project has built high-quality datasets to support the training and evaluation of large models; this repository provides links to all of its open-source datasets.

  • Colossal-AI,
    Colossal-AI provides a collection of parallel components. It aims to let you write distributed deep learning models the same way you write models on your laptop, and offers user-friendly tools to kickstart distributed training and inference in a few lines.

Articles

Blogs

Demos

  • CPM-Bee,
    CPM-Bee is an open-source bilingual pre-trained language model with 10B parameters. It offers more than ten native capabilities and strong general language ability, and supports structured input and output.

Reports

Lectures

Researcher Recruitment

The Knowledge Science and Engineering Lab is recruiting researchers! You are welcome to apply for the following positions:

  • Research Assistant: Bachelor's degree or above, proficient in Python/Java, familiar with machine learning, especially deep learning models.
  • Postdoctoral Fellow: Doctoral research in Artificial Intelligence, with at least 3 high-quality publications.
  • Lecturer, Associate Professor, and Professor

If you are interested in our research and meet the above requirements, feel free to contact Prof. Guilin Qi.

