---
license: apache-2.0
language:
  - nl
library_name: transformers
---

Pieter Delobelle, François Remy, Miryam de Lhoneux, Thomas Demeester

# tweety-7b-dutch: A Dutch Large Language Model

## Model Card for tweety-7b-dutch

tweety-7b-dutch is a foundation model focused on the Dutch language. It incorporates a Dutch tokenizer for better understanding and generation of Dutch text, and is built on the Mistral architecture, using flash attention for efficient processing within a context window of 8192 tokens. It was trained on the cleaned Dutch portion of the mC4 dataset, without instruction finetuning.

## Model Details

### Model Description

Our tweety-7b-dutch model is released under the Apache 2.0 license, encouraging applications in research, content creation, and language analysis.

- **Developed by:** KU Leuven and UGent
- **Funded by:** KU Leuven BOF, VSC (Flemish Supercomputer Center), Vlaams AI-onderzoeksprogramma
- **Model type:** Foundation model
- **Language(s) (NLP):** Dutch
- **License:** Apache 2.0

## Uses

As a base model, tweety-7b-dutch can be applied directly to text generation and understanding tasks in Dutch. Note that it has not been instruction-tuned, so it is best used for plain text completion or as a starting point for further finetuning.
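The model can be loaded with the transformers library. Below is a minimal sketch: the repository identifier is an assumption based on this model card, and the flash-attention option requires a compatible GPU with the `flash-attn` package installed.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repository id for this model card; adjust to the actual
# Hugging Face Hub name if it differs.
model_id = "pdelobelle/tweety-7b-dutch"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,               # ~14 GB of weights in bf16
    attn_implementation="flash_attention_2",  # optional; needs flash-attn
    device_map="auto",
)

# Plain completion, no chat template: this is a base model.
prompt = "De hoofdstad van België is"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=30, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Since the model was trained without instruction finetuning, phrase inputs as text to be continued rather than as questions or instructions.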

## Technical Specifications

### Compute Infrastructure

#### Hardware

Training used Nvidia H100 and A100 GPUs. Inference is accessible on lower-end hardware: any GPU capable of running Mistral-based models can also run tweety-7b-dutch.