Check-in

Contents
Check-in
Environment Setup
Preparation
Data Loading and Preprocessing
BertTokenizer
Partial Output
Model Construction
GPT2 Model Structure Output
Training
Partial Output
Partial Output 2 (Reduced Training Data)
Inference

Environment Setup
pip install -i https://pypi.mirrors.ustc.edu.cn/simple mindspore==2.2.14
pip install tokenizers==0.15.0 -i https://pypi.tuna.tsinghua.edu.cn/simple

# This case was adapted for mindnlp 0.3.1. If the example fails to run,
# pin the mindnlp version: !pip install mindnlp==0.3.1
pip install mindnlp
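A quick sanity check after installation (a minimal sketch; run_check is MindSpore's built-in install verifier, and the expected version strings simply mirror the pins above):

import mindspore
from importlib.metadata import version

mindspore.run_check()         # runs a tiny compute graph to verify the install
print(mindspore.__version__)  # expected: 2.2.14
print(version('mindnlp'))     # expected: 0.3.1 if the pinned version was installed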
Preparation

The nlpcc2017 summarization dataset consists of news articles and their summaries, 50,000 samples in total.
Source: nlpcc2017 summarization dataset
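Each line of the raw file is one JSON object with article and summarization fields. A hypothetical, heavily shortened line to illustrate the format that the loading code below parses (the real texts are full-length news items):

import json

# hypothetical sample line; real articles and summaries are much longer
line = '{"article": "<news body ...>", "summarization": "<reference summary ...>"}'
data = json.loads(line)
print(data['article'])
print(data['summarization'])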
Data Loading and Preprocessing

Raw data format:

article: [CLS] article_context [SEP]
summary: [CLS] summary_context [SEP]

Format after preprocessing:

[CLS] article_context [SEP] summary_context [SEP]

BertTokenizer

Since GPT2 has no Chinese tokenizer, BertTokenizer is used instead. The code is as follows:
from mindspore.dataset import TextFileDataset
import json
import numpy as np
from mindnlp.transformers import BertTokenizer

# preprocess dataset
def process_dataset(dataset, tokenizer, batch_size=6, max_seq_len=1024, shuffle=False):
    def read_map(text):
        data = json.loads(text.tobytes())
        return np.array(data['article']), np.array(data['summarization'])

    def merge_and_pad(article, summary):
        # tokenization: pad to max_seq_len, only truncate the article
        tokenized = tokenizer(text=article, text_pair=summary,
                              padding='max_length', truncation='only_first',
                              max_length=max_seq_len)
        return tokenized['input_ids'], tokenized['input_ids']

    dataset = dataset.map(read_map, 'text', ['article', 'summary'])
    # change column names to input_ids and labels for the following training
    dataset = dataset.map(merge_and_pad, ['article', 'summary'], ['input_ids', 'labels'])
    dataset = dataset.batch(batch_size)
    if shuffle:
        dataset = dataset.shuffle(batch_size)
    return dataset

# load dataset (`path` points to the downloaded nlpcc2017 text file;
# the download step is not shown in this note)
dataset = TextFileDataset(str(path), shuffle=False)
print(dataset.get_dataset_size())  ### 50000

# split into training and testing dataset
train_dataset, test_dataset = dataset.split([0.9, 0.1], randomize=False)
print(len(train_dataset))  ### 45000

# We use BertTokenizer for tokenizing chinese context.
tokenizer = BertTokenizer.from_pretrained('bert-base-chinese')
len(tokenizer)  # vocabulary size

train_dataset = process_dataset(train_dataset, tokenizer, batch_size=4)
## next(train_dataset.create_tuple_iterator())
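To see the merged layout that merge_and_pad produces, the tokenizer can be called on a toy pair directly (a minimal sketch with made-up sentences; a short max_length is used so the whole sequence is printable):

# BertTokenizer joins text/text_pair as [CLS] text [SEP] text_pair [SEP],
# which is exactly the "[CLS] article [SEP] summary [SEP]" layout described above.
demo = tokenizer(text='今天天气很好。', text_pair='天气好。',
                 padding='max_length', truncation='only_first', max_length=16)
print(tokenizer.convert_ids_to_tokens(demo['input_ids']))
# -> ['[CLS]', '今', '天', '天', '气', '很', '好', '。', '[SEP]',
#     '天', '气', '好', '。', '[SEP]', '[PAD]', '[PAD]']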
Partial Output

Model Construction
This is implemented with two classes:

1. GPT2ForSummarization, the model itself; note the shift-right operation (illustrated in the sketch after the code).
2. LinearWithWarmUp, the dynamic (warmup-decay) learning rate schedule.
from mindspore import ops, nn
from mindspore.nn.learning_rate_schedule import LearningRateSchedule
from mindnlp.transformers import GPT2Config, GPT2LMHeadModel

class GPT2ForSummarization(GPT2LMHeadModel):
    def construct(self, input_ids=None, attention_mask=None, labels=None):
        outputs = super().construct(input_ids=input_ids, attention_mask=attention_mask)
        shift_logits = outputs.logits[..., :-1, :]
        shift_labels = labels[..., 1:]
        # Flatten the tokens; padding positions are ignored in the loss
        loss = ops.cross_entropy(shift_logits.view(-1, shift_logits.shape[-1]),
                                 shift_labels.view(-1),
                                 ignore_index=tokenizer.pad_token_id)
        return loss

class LinearWithWarmUp(LearningRateSchedule):
    """Warmup-decay learning rate."""
    def __init__(self, learning_rate, num_warmup_steps, num_training_steps):
        super().__init__()
        self.learning_rate = learning_rate
        self.num_warmup_steps = num_warmup_steps
        self.num_training_steps = num_training_steps

    def construct(self, global_step):
        if global_step < self.num_warmup_steps:
            return global_step / float(max(1, self.num_warmup_steps)) * self.learning_rate
        return ops.maximum(
            0.0,
            (self.num_training_steps - global_step)
            / (max(1, self.num_training_steps - self.num_warmup_steps))
        ) * self.learning_rate

## training hyperparameters
num_epochs = 1
warmup_steps = 2000
learning_rate = 1.5e-4

num_training_steps = num_epochs * train_dataset.get_dataset_size()

config = GPT2Config(vocab_size=len(tokenizer))
model = GPT2ForSummarization(config)

lr_scheduler = LinearWithWarmUp(learning_rate=learning_rate,
                                num_warmup_steps=warmup_steps,
                                num_training_steps=num_training_steps)
optimizer = nn.AdamWeightDecay(model.trainable_params(), learning_rate=lr_scheduler)

# print the number of model parameters
print('number of model parameters: {}'.format(model.num_parameters()))
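The shift-right operation is easiest to see on a toy sequence (a minimal sketch with made-up token ids, independent of the model): position t's logits must predict token t+1, which is why logits[..., :-1, :] is paired with labels[..., 1:].

import numpy as np

ids = np.array([101, 7, 8, 102, 20, 21, 102])  # [CLS] a b [SEP] x y [SEP]
context = ids[:-1]   # positions whose logits are kept (shift_logits)
targets = ids[1:]    # next tokens those positions must predict (shift_labels)
for t, (cur, nxt) in enumerate(zip(context, targets)):
    print(f'position {t}: after token {cur}, predict {nxt}')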
GPT2 Model Structure Output

1. Top-level class: GPT2ForSummarization.
2. Second level: GPT2Model (the transformer attribute), the core of the model.
3. Second level: lm_head, a Dense (fully connected) layer with dim [in, out] = [768, 21128].
4. Third-level components inside GPT2Model:
   - wte: embedding layer, dim [in, out] = [21128, 768], i.e. 21128 vocabulary tokens, each mapped to a 768-dimensional vector.
   - wpe: embedding layer, dim [in, out] = [1024, 768] (position embeddings).
   - drop: dropout layer.
   - h: the hidden layers, the body of the Transformer, containing 12 GPT2Blocks.
   - ln_f: the final LayerNorm.
5. Structure of a GPT2Block:
   - ln_1: LayerNorm, normalizing the input before the attention mechanism.
   - attn: GPT2Attention, the self-attention computing attention weights across positions of the input sequence; it contains Conv1D, Conv1D, CustomDropout, CustomDropout.
   - ln_2: LayerNorm, normalization after self-attention.
   - mlp: GPT2MLP, a multilayer perceptron applying a further non-linear transform to the attention output; it uses Conv1D, Conv1D, GELU, CustomDropout.
$ print(model)
GPT2ForSummarization<
  (transformer): GPT2Model<
    (wte): Embedding<vocab_size=21128, embedding_size=768, use_one_hot=False, weight=Parameter(Tensor(shape=[21128, 768], dtype=Float32), name=transformer.wte.weight, requires_grad=True), dtype=Float32, padding_idx=None>
    (wpe): Embedding<vocab_size=1024, embedding_size=768, use_one_hot=False, weight=Parameter(Tensor(shape=[1024, 768], dtype=Float32), name=transformer.wpe.weight, requires_grad=True), dtype=Float32, padding_idx=None>
    (drop): CustomDropout<>
    (h): CellList<
      (0): GPT2Block<
        (ln_1): LayerNorm<normalized_shape=[768], begin_norm_axis=-1, begin_params_axis=-1, weight=..., bias=...>
        (attn): GPT2Attention<
          (c_attn): Conv1D<(matmul): Matmul<>>
          (c_proj): Conv1D<(matmul): Matmul<>>
          (attn_dropout): CustomDropout<>
          (resid_dropout): CustomDropout<>
        >
        (ln_2): LayerNorm<normalized_shape=[768], begin_norm_axis=-1, begin_params_axis=-1, weight=..., bias=...>
        (mlp): GPT2MLP<
          (c_fc): Conv1D<(matmul): Matmul<>>
          (c_proj): Conv1D<(matmul): Matmul<>>
          (act): GELU<>
          (dropout): CustomDropout<>
        >
      >
      (1)-(11): eleven further GPT2Blocks with the identical structure (parameter names transformer.h.1.* ... transformer.h.11.*)
    >
    (ln_f): LayerNorm<normalized_shape=[768], begin_norm_axis=-1, begin_params_axis=-1, weight=..., bias=...>
  >
  (lm_head): Dense<input_channels=768, output_channels=21128>
>
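The printed dims can be cross-checked with a rough parameter count (a sketch under the assumptions visible in the printout: n_layer=12, n_embd=768, vocab=21128, n_positions=1024, biased Conv1D/Dense layers, and no weight tying between wte and lm_head; compare the total with the model.num_parameters() value printed during model construction):

n_embd, n_layer, vocab, n_pos = 768, 12, 21128, 1024

wte = vocab * n_embd                                # token embedding
wpe = n_pos * n_embd                                # position embedding
ln = 2 * n_embd                                     # LayerNorm weight + bias
attn = (n_embd * 3 * n_embd + 3 * n_embd) \
     + (n_embd * n_embd + n_embd)                   # c_attn (768->2304) + c_proj
mlp = (n_embd * 4 * n_embd + 4 * n_embd) \
    + (4 * n_embd * n_embd + n_embd)                # c_fc (768->3072) + c_proj
block = 2 * ln + attn + mlp                         # ln_1, attn, ln_2, mlp

total = wte + wpe + n_layer * block + ln \
      + (n_embd * vocab + vocab)                    # final ln_f and lm_head
print(f'{total:,}')                                 # ~118M under these assumptions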
Training

from mindnlp._legacy.engine import Trainer
from mindnlp._legacy.engine.callbacks import CheckpointCallback

ckpoint_cb = CheckpointCallback(save_path='checkpoint', ckpt_name='gpt2_summarization',
                                epochs=1, keep_checkpoint_max=2)

trainer = Trainer(network=model, train_dataset=train_dataset,
                  epochs=1, optimizer=optimizer, callbacks=ckpoint_cb)
trainer.set_amp(level='O1')  # enable mixed precision

trainer.run(tgt_columns='labels')

Partial Output
Note: higher-spec compute is recommended, as training takes a long time.

Partial Output 2 (Reduced Training Data)
The notebook for this activity can only run for 8 hours at a stretch, and the goal here is not performance tuning, so I reduced the training data to 1/10. Part of the output at that point is shown below.
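One way to produce such a subset (a sketch; the post does not show the exact code used, and take is MindSpore's dataset operation for keeping the first N samples; it would be applied to the unbatched training split before process_dataset):

# keep the first 4500 of the 45000 training samples (1/10)
train_dataset = train_dataset.take(4500)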
Inference

## convert token-id tensors back to Chinese text
def process_test_dataset(dataset, tokenizer, batch_size=1, max_seq_len=1024, max_summary_len=100):
    def read_map(text):
        data = json.loads(text.tobytes())
        return np.array(data['article']), np.array(data['summarization'])

    def pad(article):
        # leave room for the generated summary
        tokenized = tokenizer(text=article, truncation=True,
                              max_length=max_seq_len - max_summary_len)
        return tokenized['input_ids']

    dataset = dataset.map(read_map, 'text', ['article', 'summary'])
    dataset = dataset.map(pad, 'article', ['input_ids'])
    dataset = dataset.batch(batch_size)
    return dataset

test_dataset = process_test_dataset(test_dataset, tokenizer, batch_size=1)
print(next(test_dataset.create_tuple_iterator(output_numpy=True)))

model = GPT2LMHeadModel.from_pretrained('./checkpoint/gpt2_summarization_epoch_0.ckpt',
                                        config=config)
model.set_train(False)
model.config.eos_token_id = model.config.sep_token_id

i = 0
for (input_ids, raw_summary) in test_dataset.create_tuple_iterator():
    output_ids = model.generate(input_ids, max_new_tokens=50, num_beams=5,
                                no_repeat_ngram_size=2)
    output_text = tokenizer.decode(output_ids[0].tolist())
    print(output_text)
    i += 1
    if i == 1:
        break
The inference results of the model trained on the reduced data are shown above.
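Because generation continues from the full prompt, output_text contains the article followed by the generated summary. A post-processing sketch (assuming the [SEP] markers survive decoding, which they do when skip_special_tokens is left off) keeps only the span after the first [SEP]:

# decoded text looks like "[CLS] article ... [SEP] generated summary ..."
def extract_summary(decoded: str) -> str:
    parts = decoded.split('[SEP]')
    return parts[1].strip() if len(parts) > 1 else decoded

print(extract_summary(output_text))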