An abstractive summarizer based on T5-small (60 million parameters), finetuned on part of the training split of the cnn_dailymail dataset.
The model was finetuned twice, once on 7k examples and once on 45k. All training data came from a single shard of the cnn_dailymail training split, the file 0002.parquet, which contains 55,113 examples.
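A minimal usage sketch with the Hugging Face `transformers` library is shown below. The checkpoint name is a placeholder assumption (substitute the actual finetuned checkpoint); the `summarize: ` task prefix is the standard convention for T5-style text-to-text summarization.

```python
def make_t5_input(article: str, max_chars: int = 2000) -> str:
    """Build the model input. T5 is a text-to-text model, so
    summarization inputs are conventionally prefixed with the
    task string 'summarize: '. Long articles are truncated."""
    return "summarize: " + article[:max_chars]


def summarize(article: str, checkpoint: str = "t5-small") -> str:
    """Generate a summary for one article.

    Note: `checkpoint` defaults to the base t5-small model as a
    placeholder; point it at the finetuned checkpoint instead.
    Requires the `transformers` package."""
    from transformers import T5ForConditionalGeneration, T5Tokenizer

    tokenizer = T5Tokenizer.from_pretrained(checkpoint)
    model = T5ForConditionalGeneration.from_pretrained(checkpoint)

    inputs = tokenizer(
        make_t5_input(article),
        return_tensors="pt",
        truncation=True,
        max_length=512,  # T5-small's usual input limit
    )
    output_ids = model.generate(**inputs, max_length=64, num_beams=4)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)
```

Calling `summarize(article_text)` downloads the checkpoint on first use and returns a single abstractive summary string.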