Commit

update readme
Signed-off-by: jihyeonRyu <[email protected]>
jihyeonRyu committed Dec 20, 2024
1 parent 126c16d commit dda634a
Showing 2 changed files with 4 additions and 1 deletion.
3 changes: 3 additions & 0 deletions tutorials/llm/llama-3/README.rst
@@ -23,3 +23,6 @@ This repository contains Jupyter Notebook tutorials using the NeMo Framework for
* - `Llama3 LoRA Fine-Tuning and Supervised Fine-Tuning using NeMo2 <./nemo2-sft-peft>`_
- `SQuAD <https://arxiv.org/abs/1606.05250>`_ for LoRA and `Databricks-dolly-15k <https://huggingface.co/datasets/databricks/databricks-dolly-15k>`_ for SFT
- Perform LoRA PEFT and SFT on Llama 3 8B using NeMo 2.0
+* - `Llama3 Domain Adaptive Pre-Training <./dapt>`_
+- `Domain-Specific Data <https://github.com/NVIDIA/NeMo-Curator/tree/main/tutorials/dapt-curation>`_
+- Perform Domain Adaptive Pre-Training on Llama 3 8B using NeMo 2.0
2 changes: 1 addition & 1 deletion tutorials/llm/llama-3/dapt/README.md
@@ -1,6 +1,6 @@
# Training Code for DAPT (Domain Adaptive Pre-Training)

-[ChipNeMo](https://arxiv.org/pdf/2311.00176) is a chip-design domain-adapted LLM. Instead of directly deploying off-the-shelf commercial or open-source LLMs, the paper adopts the following domain adaptation techniques: domain-adaptive tokenization, domain-adaptive continued pretraining, model alignment with domain-specific instructions, and domain-adapted retrieval models. Specifically, Llama 2 foundation models are continually pretrained on 20B-plus tokens of domain-specific chip design data, including code, documents, etc., and then fine-tuned with instruction datasets drawn from design data as well as external sources. Evaluations of the resulting domain-adapted ChipNeMo model demonstrate that domain-adaptive pretraining of language models can lead to superior performance on domain-related downstream tasks compared to the base Llama 2 counterparts, without degradation in generic capabilities.
+[ChipNeMo](https://arxiv.org/pdf/2311.00176) is a chip-design domain-adapted LLM. Instead of directly deploying off-the-shelf commercial or open-source LLMs, the paper adopts the following domain adaptation techniques: domain-adaptive tokenization, domain-adaptive continued pretraining, model alignment with domain-specific instructions, and domain-adapted retrieval models.

Here, we share a tutorial with best practices on training for DAPT (domain-adaptive pre-training).

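To make the continued-pretraining step described in the diff above more concrete, here is a minimal sketch of how a DAPT run might be wired up with NeMo 2.0 recipes and nemo_run. It is illustrative only and not part of the committed tutorial: the recipe factory, data-module, and resume class names (`llm.llama3_8b.pretrain_recipe`, `llm.PreTrainingDataModule`, `nl.AutoResume`, `nl.RestoreConfig`) as well as all paths are assumptions and should be checked against the tutorial notebooks.

```python
# Minimal DAPT sketch (assumed NeMo 2.0 / nemo_run API; verify names against the tutorial).
import nemo_run as run
from nemo import lightning as nl
from nemo.collections import llm

# Start from the stock Llama 3 8B pretraining recipe (assumed recipe factory).
recipe = llm.llama3_8b.pretrain_recipe(
    name="llama3_8b_dapt",
    dir="/results/dapt",          # hypothetical output directory
    num_nodes=1,
    num_gpus_per_node=8,
)

# Swap the default data module for preprocessed, domain-specific token files
# (hypothetical paths; e.g. curated with NeMo-Curator and converted to Megatron .bin/.idx).
recipe.data = run.Config(
    llm.PreTrainingDataModule,
    paths=["/data/chip_design/domain_text_document"],
    seq_length=8192,
    global_batch_size=512,
    micro_batch_size=1,
)

# Resume from the base Llama 3 8B checkpoint so this is continued pretraining
# rather than training from scratch (assumed resume/restore field names).
recipe.resume = run.Config(
    nl.AutoResume,
    restore_config=run.Config(nl.RestoreConfig, path="/checkpoints/llama3-8b-nemo2"),
)

if __name__ == "__main__":
    # Single-node local run; a Slurm executor would replace LocalExecutor for multi-node jobs.
    run.run(recipe, executor=run.LocalExecutor())
```

In practice, the domain corpus would first be curated (for example with NeMo-Curator, as linked in the README.rst table above) and tokenized into binary shards before being passed to the data module.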
