I'm currently working on the awesome Flair library and love contributing to various open source projects.
Latest news on new language models, PRs, and more!
- 02.10.2024: Zeitungs-LM, a new language model trained on Historical German newspapers, is out now!
- 04.07.2024: Flair fine-tuned NER models on the awesome CleanCoNLL dataset are now available on the Model Hub.
- 28.03.2024: New project: NER models on the recently released CO-Fun NER dataset. The repo is here, with many fine-tuned models on the Model Hub.
- 23.12.2023: New project: NER Datasets for Historical German (HisGermaNER) is out and available on the Model Hub here.
- 11.10.2023: Launch of the hmBench project, which benchmarks Historical Multilingual Language Models such as hmBERT, hmTEAMS and hmByT5 - see here.
- 25.05.2023: New project: Historical Multilingual and Monolingual ELECTRA Models are released here.
- 25.05.2023: Several ByT5 Historical Language Models are released under hmByT5 Preliminary and hmByT5 on the Hugging Face Model Hub. More information can be found in this repository.
- 06.03.2023: Updated Ukrainian ELECTRA repository, see here.
- 05.02.2023: New repository on experiments for the XLM-V 🤗 Transformers integration, see here.
- 03.02.2023: New repository for ongoing evaluation of German T5 models on the GermEval 2014 NER task is up now! See here.
- 28.01.2023: Started training new language models on the British Library corpus (model sizes range from 110M to 1B!) - repository is here.
- 23.01.2023: New German T5 models are released (trained on the head and middle of the GC4 corpus) and are available here.
- 09.06.2022: Preprint of our upcoming HIPE-2022 Working Notes paper is now available here: hmBERT: Historical Multilingual Language Models for Named Entity Recognition.
- 20.02.2022: Check out our new GermanT5 organization - expect new T5 models for German soon!
- 14.12.2021: New badge: now a member of the Hugging Face Supporter org 🎉
- 13.12.2021: Release of a Historical Language Model for Dutch (trained on the Delpher corpus) - see repo here.
- 06.12.2021: Release of smaller multilingual Historical Language Models (ranging from 2 to 8 layers) - see repo here.
- 18.11.2021: Release of new multilingual and monolingual Historical Language Models - in preparation for the upcoming CLEF-HIPE 2022 - see repo here.
- 23.09.2021: Release of ConvBERTurk (cased and uncased) and ELECTRA (uncased) models trained on the Turkish part of the mC4 corpus - see repo here.
- 07.09.2021: Release of a new, larger German GPT-2 model - see the Model Hub card here.
- 17.08.2021: Release of a new re-trained German GPT-2 model - see repo here.
- 05.07.2021: Preprint of the ICDAR 2021 paper "Data Centric Domain Adaptation for Historical Text with OCR Errors", together with Luisa März, Nina Poerner, Benjamin Roth and Hinrich Schütze, is out now!
- 24.06.2021: The Turkish Language Model Zoo repo got a new logo from Merve Noyan, please follow her! Additionally, a new Turkish ELECTRA model, trained on the Turkish part of the multilingual C4 dataset, was released. More details here.
- 03.05.2021: GC4LM: A Colossal (Biased) Language Model for German was released. Repo with more details here.
- 27.04.2021: Our paper "Data Centric Domain Adaptation for Historical Text with OCR Errors" was accepted at ICDAR 2021. More details soon!
- 16.03.2021: The Turkish model zoo is still growing! Public release of ConvBERTurk - see repo here.
- 07.02.2021: Public release of German Europeana DistilBERT and ConvBERT models. The repo with more information is here.
- 28.01.2021: Expect a new German Europeana ELECTRA Large model, incl. a distilled German Europeana BERT model, soon 🤗
- 16.11.2020: Public release of French Europeana BERT and ELECTRA models - see repository here.
- 16.11.2020: Public release of a German GPT-2 model (incl. a model fine-tuned on Faust I and II). Repo with more information is available here.
- 11.11.2020: Public release of the Ukrainian ELECTRA model. The repo is now available here.
- 11.11.2020: The new workstation build (RTX 3090 and Ryzen 9 5900X) is complete! Expect a lot of new Flair/Transformers models in the near future!
- 02.11.2020: Public release of an Italian XXL ELECTRA model. A new repo for Italian BERT and ELECTRA models is now available here 🎉
- 22.10.2020: Preprint of "German's Next Language Model" is now available here. Models are also available on the Hugging Face Model Hub 🎉
- 22.10.2020: Our shared task paper "Triple E - Effective Ensembling of Embeddings and Language Models for NER of Historical German", together with Luisa März, is released 🎉
- 30.09.2020: "German's Next Language Model", together with Branden Chan and Timo Möller, was accepted at COLING 2020! Expect new language models for German on the Hugging Face Model Hub soon 🤗
- 23.09.2020: Flair version 0.6.1 is out now!
- 02.09.2020: Slow response time - I'm currently focusing on EACL 2021. Expect great new things 😎
- 18.08.2020: French BERT model, trained on historical newspapers from Europeana: find the model here and the corresponding repository here.
- Lukas Thoma, Ivonne Weyers, Erion Çano, Stefan Schweter, Jutta L Mueller and Benjamin Roth. CogMemLM: Human-Like Memory Mechanisms Improve Performance and Cognitive Plausibility of LLMs. In Proceedings of the BabyLM Challenge at the 27th Conference on Computational Natural Language Learning (CoNLL 2023).
- Stefan Schweter, Luisa März, Katharina Schmid and Erion Çano. hmBERT: Historical Multilingual Language Models for Named Entity Recognition. In Experimental IR Meets Multilinguality, Multimodality, and Interaction - Proceedings of the Eleventh International Conference of the CLEF Association (CLEF 2022).
- Francesco De Toni, Christopher Akiki, Javier de la Rosa, Clémentine Fourrier, Enrique Manjavacas, Stefan Schweter and Daniel Van Strien. Entities, Dates, and Languages: Zero-Shot on Historical Texts with T0. Accepted at the "Challenges & Perspectives in Creating Large Language Models" Workshop at ACL 2022.
- Luisa März, Stefan Schweter, Nina Poerner, Benjamin Roth and Hinrich Schütze. Data Centric Domain Adaptation for Historical Text with OCR Errors. In Proceedings of the International Conference on Document Analysis and Recognition (ICDAR 2021).
- Branden Chan, Stefan Schweter and Timo Möller. German's Next Language Model. In Proceedings of the 28th International Conference on Computational Linguistics (COLING 2020).
- Stefan Schweter and Luisa März. Triple E - Effective Ensembling of Embeddings and Language Models for NER of Historical German. In Experimental IR Meets Multilinguality, Multimodality, and Interaction - Proceedings of the Eleventh International Conference of the CLEF Association (CLEF 2020).
- Stefan Schweter and Sajawel Ahmed. Deep-EOS: General-Purpose Neural Networks for Sentence Boundary Detection. In Proceedings of the 15th Conference on Natural Language Processing (KONVENS 2019).
- Alan Akbik, Tanja Bergmann, Duncan Blythe, Kashif Rasul, Stefan Schweter and Roland Vollgraf. FLAIR: An Easy-to-Use Framework for State-of-the-Art NLP. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations).
- Stefan Schweter and Johannes Baiter. Towards Robust Named Entity Recognition for Historic German. In Proceedings of the 4th Workshop on Representation Learning for NLP (RepL4NLP-2019).
- Part of BLOOM: A 176B-Parameter Open-Access Multilingual Language Model.
- Stefan Schweter and Alan Akbik. FLERT: Document-Level Features for Named Entity Recognition.
Please open an issue in the corresponding repository, tag me (@stefan-it) in issues/PRs/commits on GitHub, or connect with me on LinkedIn :)