Previous posts in the hands-on RAG series:

- Hands-on RAG: embedding models
- Hands-on RAG: BGE embedding model fine-tuning in practice
- Hands-on RAG: BCEmbedding model fine-tuning in practice
- BCE ranking model fine-tuning in practice
- GTE embedding and ranking model fine-tuning in practice
- Model sequence length in fine-tuning
- Similarity and the temperature coefficient
In this post we put the ColBERT model into practice, as usual using the code in open-retrievals as the blueprint. Since the rise of RAG, ColBERT has attracted renewed attention. Its overall architecture closely resembles a dual-encoder, but it is a late-interaction model: compared with a typical ranking (cross-encoder) model, the query and document interact only after being encoded separately.
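To make "late interaction" concrete: ColBERT keeps one embedding per token and scores a query-document pair with MaxSim, matching each query token to its most similar document token and summing the maxima. A minimal sketch of the scoring step (illustrative only, not the open-retrievals implementation):

```python
import torch
import torch.nn.functional as F

def maxsim_score(query_emb: torch.Tensor, doc_emb: torch.Tensor) -> torch.Tensor:
    """Late-interaction (MaxSim) score.

    query_emb: [num_query_tokens, dim], doc_emb: [num_doc_tokens, dim],
    both L2-normalized per token.
    """
    sim = query_emb @ doc_emb.T         # token-to-token similarity matrix
    return sim.max(dim=1).values.sum()  # best doc token per query token, summed

# Toy token embeddings; real ones come from the encoder plus a linear colbert head.
query_emb = F.normalize(torch.randn(6, 1024), dim=-1)
doc_emb = F.normalize(torch.randn(50, 1024), dim=-1)
print(maxsim_score(query_emb, doc_emb))
```

Because both sides are encoded independently and only interact through this cheap max/sum, document token embeddings can be precomputed, which places ColBERT between bi-encoders and cross-encoders in both cost and quality.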
Preparing the environment
```shell
pip install transformers
pip install open-retrievals
```

Preparing the data
We again use the C-MTEB/T2Reranking data. Each sample has a query, a positive, and a negative: the query and the positive form a positive pair, and the query and the negative form a negative pair.
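For illustration, a sample in this format looks roughly like the following (the values are made up; `positive` and `negative` hold lists of passages, which is consistent with the filtering code used later in the evaluation section):

```python
sample = {
    "query": "什么是迟交互模型?",
    "positive": ["ColBERT是一种迟交互式的排序模型..."],  # passages relevant to the query
    "negative": ["今天天气很好..."],                      # irrelevant passages
}
```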
Usage
As a late-interaction model, ColBERT can both produce embeddings like an embedding model and compute similarity scores directly. The colbert head in BAAI/bge-m3 was trained from XLM-RoBERTa, so the ColBERT class here can load pretrained weights straight from bge-m3.
```python
import transformers
from retrievals import ColBERT
from retrievals.losses import ColbertLoss

model_name_or_path: str = "BAAI/bge-m3"

model = ColBERT.from_pretrained(
    model_name_or_path,
    colbert_dim=1024,
    use_fp16=True,
    loss_fn=ColbertLoss(use_inbatch_negative=True),
)
```

Generating embeddings
```python
sentences_1 = [
    "In 1974, I won the championship in Southeast Asia in my first kickboxing match",
    "In 1982, I defeated the heavy hitter Ryu Long.",
]
sentences_2 = ["A dog is chasing car.", "A man is playing a guitar."]

output_1 = model.encode(sentences_1, normalize_embeddings=True)
print(output_1.shape, output_1)

output_2 = model.encode(sentences_2, normalize_embeddings=True)
print(output_2.shape, output_2)
```

Computing similarity for sentence pairs
```python
sentences = [
    [
        "In 1974, I won the championship in Southeast Asia in my first kickboxing match",
        "In 1982, I defeated the heavy hitter Ryu Long.",
    ],
    [
        "In 1974, I won the championship in Southeast Asia in my first kickboxing match",
        "A man is playing a guitar.",
    ],
]

scores_list = model.compute_score(sentences)
print(scores_list)
```

Fine-tuning
I tried two approaches: one is to import the library and write the training code yourself; the other is to call the open-retrievals pipeline code from a shell script. Here we take the first approach; the second is shown in the fine-tuning part of the extra section at the end of this post.
```python
import transformers
from transformers import AutoTokenizer, TrainingArguments, get_cosine_schedule_with_warmup, AdamW
from retrievals import (
    AutoModelForRanking,
    RerankCollator,
    RerankTrainDataset,
    RerankTrainer,
    ColBERT,
    RetrievalTrainDataset,
    ColBertCollator,
)
from retrievals.losses import ColbertLoss

transformers.logging.set_verbosity_error()

model_name_or_path: str = "BAAI/bge-m3"
learning_rate: float = 1e-5
batch_size: int = 2
epochs: int = 1
output_dir: str = "./checkpoints"

# T2Reranking only ships a dev split, so we train on it here
train_dataset = RetrievalTrainDataset(
    "C-MTEB/T2Reranking",
    positive_key="positive",
    negative_key="negative",
    dataset_split="dev",
)
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=False)
data_collator = ColBertCollator(
    tokenizer,
    query_max_length=64,
    document_max_length=128,
    positive_key="positive",
    negative_key="negative",
)
model = ColBERT.from_pretrained(
    model_name_or_path,
    colbert_dim=1024,
    loss_fn=ColbertLoss(use_inbatch_negative=False),
)

optimizer = AdamW(model.parameters(), lr=learning_rate)
num_train_steps = int(len(train_dataset) / batch_size * epochs)
scheduler = get_cosine_schedule_with_warmup(
    optimizer,
    num_warmup_steps=0.05 * num_train_steps,
    num_training_steps=num_train_steps,
)

training_args = TrainingArguments(
    learning_rate=learning_rate,
    per_device_train_batch_size=batch_size,
    num_train_epochs=epochs,
    output_dir=output_dir,
    remove_unused_columns=False,
    gradient_accumulation_steps=8,
    logging_steps=100,
)
trainer = RerankTrainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    data_collator=data_collator,
)
trainer.optimizer = optimizer
trainer.scheduler = scheduler
trainer.train()

model.save_pretrained(output_dir)
```

During training, the BAAI/bge-m3 weights are loaded and the loss decreases:
```
{'loss': 7.4858, 'grad_norm': 30.484981536865234, 'learning_rate': 4.076305220883534e-06, 'epoch': 0.6024096385542169}
{'loss': 1.18, 'grad_norm': 28.68316650390625, 'learning_rate': 3.072289156626506e-06, 'epoch': 1.2048192771084336}
{'loss': 1.1399, 'grad_norm': 14.203865051269531, 'learning_rate': 2.068273092369478e-06, 'epoch': 1.8072289156626506}
{'loss': 1.1261, 'grad_norm': 24.30337905883789, 'learning_rate': 1.0642570281124499e-06, 'epoch': 2.4096385542168672}
{'train_runtime': 471.8191, 'train_samples_per_second': 33.827, 'train_steps_per_second': 1.055, 'train_loss': 2.4146631079984, 'epoch': 3.0}
```
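Once training finishes, the checkpoint written by `model.save_pretrained(output_dir)` should be loadable the same way as the base model. A sketch that simply reuses the API from the usage section above (the path and arguments mirror this post's earlier code, not a documented recipe):

```python
from retrievals import ColBERT
from retrievals.losses import ColbertLoss

# Load the fine-tuned weights saved to output_dir ("./checkpoints" above)
ft_model = ColBERT.from_pretrained(
    "./checkpoints",
    colbert_dim=1024,
    use_fp16=True,
    loss_fn=ColbertLoss(use_inbatch_negative=True),
)
print(ft_model.compute_score([["深度学习", "深度学习是机器学习的一个分支"]]))
```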
Evaluation

We evaluate with C-MTEB. Before fine-tuning, hold out 10% of the dataset as a test set for validation:
```python
from datasets import load_dataset

dataset = load_dataset("C-MTEB/T2Reranking", split="dev")
ds = dataset.train_test_split(test_size=0.1, seed=42)
ds_train = ds["train"].filter(
    lambda x: len(x["positive"]) > 0 and len(x["negative"]) > 0
)
ds_train.to_json("t2_ranking.jsonl", force_ascii=False)
```
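The `CustomReranking` results below come from an MTEB run on the held-out split. The evaluation script itself is not shown in this post; as a rough sketch of how a custom reranking task can be wired up in mteb 1.1.x (the task fields here are assumptions, and pointing the task at the local `t2_ranking.jsonl` would require overriding `load_data`):

```python
from mteb import MTEB
from mteb.abstasks import AbsTaskReranking

class CustomReranking(AbsTaskReranking):
    @property
    def description(self):
        return {
            "name": "CustomReranking",
            "hf_hub_name": "C-MTEB/T2Reranking",  # assumption: swap in the held-out data
            "description": "Held-out 10% of T2Reranking",
            "type": "Reranking",
            "category": "s2p",
            "eval_splits": ["test"],
            "eval_langs": ["zh"],
            "main_score": "map",
        }

# `model` must expose an encode() interface compatible with MTEB's reranking evaluator
evaluation = MTEB(tasks=[CustomReranking()])
evaluation.run(model, output_folder="./results")
```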
Metrics before fine-tuning

Metrics after fine-tuning:

```
{
    "dataset_revision": null,
    "mteb_dataset_name": "CustomReranking",
    "mteb_version": "1.1.1",
    "test": {
        "evaluation_time": 221.45,
        "map": 0.6950128151840831,
        "mrr": 0.8193114944390455
    }
}
```

Extra: training ColBERT directly from a language model
The earlier example continued fine-tuning from BAAI/bge-m3; here we instead train a ColBERT model starting from hfl/chinese-roberta-wwm-ext.

Note that training from scratch like this needs a larger learning rate and more epochs.
```shell
MODEL_NAME=hfl/chinese-roberta-wwm-ext
TRAIN_DATA=/root/kaggle101/src/open-retrievals/t2/t2_ranking.jsonl
OUTPUT_DIR=/root/kaggle101/src/open-retrievals/t2/ft_out

cd /root/open-retrievals/src

torchrun --nproc_per_node 1 \
  --module retrievals.pipelines.rerank \
  --output_dir $OUTPUT_DIR \
  --overwrite_output_dir \
  --model_name_or_path $MODEL_NAME \
  --tokenizer_name $MODEL_NAME \
  --model_type colbert \
  --do_train \
  --data_name_or_path $TRAIN_DATA \
  --positive_key positive \
  --negative_key negative \
  --learning_rate 5e-5 \
  --bf16 \
  --num_train_epochs 5 \
  --per_device_train_batch_size 32 \
  --dataloader_drop_last True \
  --query_max_length 128 \
  --max_length 256 \
  --train_group_size 4 \
  --unfold_each_positive false \
  --save_total_limit 1 \
  --logging_steps 100 \
  --use_inbatch_negative False
```

Metrics after fine-tuning:
```
{
    "dataset_revision": null,
    "mteb_dataset_name": "CustomReranking",
    "mteb_version": "1.1.1",
    "test": {
        "evaluation_time": 75.38,
        "map": 0.6865308507184888,
        "mrr": 0.8039965986394558
    }
}
```