ChatLM-Chinese-0.2B: a small 0.2B-parameter Chinese dialogue model, open-sourcing the complete code for the whole pipeline: dataset sourcing, data cleaning, tokenizer training, model pretraining, SFT instruction fine-tuning, and RLHF optimization. Supports SFT fine-tuning for downstream tasks and includes a triple (information extraction) fine-tuning example.
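As a hedged sketch of the downstream triple-extraction SFT task mentioned above (the prompt/response schema here is an illustrative assumption, not this repository's exact format):

```python
# Formatting a triple (subject, relation, object) extraction example for SFT.
# The Chinese instruction template and output schema are assumptions for
# illustration; the repo's actual format may differ.
def build_sft_example(text: str, triples: list[tuple[str, str, str]]) -> dict:
    prompt = f"请抽取下文中的三元组：{text}"  # "extract the triples from the text below"
    response = "；".join(f"({s}, {r}, {o})" for s, r, o in triples)
    return {"prompt": prompt, "response": response}

example = build_sft_example(
    "鲁迅原名周树人，出生于浙江绍兴。",
    [("鲁迅", "原名", "周树人"), ("鲁迅", "出生地", "浙江绍兴")],
)
print(example["prompt"])
print(example["response"])  # (鲁迅, 原名, 周树人)；(鲁迅, 出生地, 浙江绍兴)
```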
PromptCLUE: a zero-shot learning model supporting the full range of Chinese NLP tasks.
simpleT5 is built on top of PyTorch Lightning⚡️ and Transformers🤗 and lets you quickly train your T5 models.
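A minimal sketch of the simpleT5 workflow, following the library's documented pattern (the toy dataframe and hyperparameters are assumptions):

```python
# Fine-tune T5 on a toy dataframe with simpleT5 (pip install simplet5).
# Column names "source_text"/"target_text" follow simpleT5's convention.
import pandas as pd
from simplet5 import SimpleT5

train_df = pd.DataFrame({
    "source_text": ["summarize: The quick brown fox jumps over the lazy dog."],
    "target_text": ["A fox jumps over a dog."],
})

model = SimpleT5()
model.from_pretrained(model_type="t5", model_name="t5-small")
model.train(train_df=train_df, eval_df=train_df, max_epochs=1, use_gpu=False)
print(model.predict("summarize: The quick brown fox jumps over the lazy dog."))
```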
Official implementation of the paper "CoEdIT: Text Editing by Task-Specific Instruction Tuning" (EMNLP 2023)
AraT5: Text-to-Text Transformers for Arabic Language Understanding
Implements a question generator with SOTA pre-trained language models (RoBERTa, BERT, GPT, BART, T5, etc.).
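A sketch of the common answer-aware question-generation recipe with a T5 checkpoint; the community checkpoint name and highlight-token prompt format are assumptions, not necessarily this repo's approach:

```python
# Answer-aware question generation: highlight the answer span with <hl> tokens
# and ask a QG-fine-tuned seq2seq model to produce the question.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

name = "valhalla/t5-base-qg-hl"  # assumed community QG fine-tune; any similar checkpoint works
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSeq2SeqLM.from_pretrained(name)

text = "generate question: <hl> Marie Curie <hl> discovered radium and polonium."
ids = tokenizer(text, return_tensors="pt").input_ids
out = model.generate(ids, max_new_tokens=32)
print(tokenizer.decode(out[0], skip_special_tokens=True))  # e.g. "Who discovered radium and polonium?"
```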
NLP model zoo for Russian
Abstractive text summarization by fine-tuning seq2seq models.
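To illustrate the task above, a minimal inference sketch with an off-the-shelf seq2seq checkpoint (the model name is an example, not this repo's fine-tuned weights):

```python
# Abstractive summarization with a seq2seq checkpoint via the HF pipeline.
# t5-small was multitask-trained and handles summarization out of the box.
from transformers import pipeline

summarizer = pipeline("summarization", model="t5-small")
article = ("The tower is 324 metres tall, about the same height as an "
           "81-storey building, and the tallest structure in Paris.")
print(summarizer(article, max_length=30, min_length=5)[0]["summary_text"])
```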
An extension of the Transformers library that adds a T5ForSequenceClassification class.
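For context, recent Hugging Face Transformers releases ship such a class natively; a sketch of how it is used (the classification head below is freshly initialized, so predictions are meaningless until fine-tuned):

```python
# Sequence classification on top of an encoder-decoder T5 backbone.
import torch
from transformers import AutoTokenizer, T5ForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = T5ForSequenceClassification.from_pretrained("t5-small", num_labels=2)

inputs = tokenizer("This movie was surprisingly good.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1).item())  # predicted class id (head untrained here)
```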
[Pytorch] Unofficial Implementation of "Recommender Systems with Generative Retrieval"
Materials for "IT5: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation" 🇮🇹
This repository contains the data and code for the paper "Diverse Text Generation via Variational Encoder-Decoder Models with Gaussian Process Priors" (SPNLP@ACL2022)
Text classification on the IMDB dataset with the Flan-T5-large language model, reaching 93% accuracy.
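The repo fine-tunes Flan-T5; the same task can be sketched without training via zero-shot prompting (a hedged illustration, not the repository's code):

```python
# Zero-shot sentiment classification by prompting Flan-T5.
from transformers import pipeline

classify = pipeline("text2text-generation", model="google/flan-t5-large")
review = "The plot dragged, but the performances were outstanding."
prompt = f"Is the following movie review positive or negative?\n\n{review}"
print(classify(prompt, max_new_tokens=5)[0]["generated_text"])  # e.g. "positive"
```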
End-to-end model: T5 fine-tuned for the text-to-SPARQL task.
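Text-to-SPARQL reduces to plain seq2seq generation; a sketch under the assumption of a suitably fine-tuned checkpoint (the prefix and model name below are placeholders, not this repo's setup):

```python
# Mapping a natural-language question to a SPARQL query with a seq2seq T5.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

name = "t5-small"  # stand-in: in practice, a checkpoint fine-tuned on text-SPARQL pairs
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSeq2SeqLM.from_pretrained(name)

question = "translate English to SPARQL: Who directed Inception?"
ids = tokenizer(question, return_tensors="pt").input_ids
out = model.generate(ids, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))  # junk without fine-tuning
```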
Repository about small code models
Code Implementation for "NASH: A Simple Unified Framework of Structured Pruning for Accelerating Encoder-Decoder Language Models" (EMNLP 2023)
Automated headline generation and aspect-based sentiment analysis.