BERT in Practice (4): Question Answering - Extractive QA

Introduction

This post shows how to use a model from the 🤗 Transformers library to solve a machine question answering task.

Task description

Note that we focus on extractive question answering: given a question and a passage of text, find the span in the passage that answers the question. Extractive QA extracts the answer from the text rather than generating it.

For example:

Input:
Question: Where is my home?
Text: My home is in the northeast.
Output: the northeast

This was covered in detail in an earlier post 【定位词:copy from input】. As a quick refresher: the model classifies every token, deciding whether it is the start or the end of the answer span. A minimal sketch of this idea follows below.
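
As an illustration (a minimal sketch, not part of the original walkthrough): the QA head scores every token as a possible start and as a possible end, and the span between the two argmax positions is the extracted answer. The checkpoint name below is only an assumed, already fine-tuned English QA model used for demonstration.

import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

ckpt = "distilbert-base-uncased-distilled-squad"  # assumed ready-made QA checkpoint, for illustration only
tok = AutoTokenizer.from_pretrained(ckpt)
model = AutoModelForQuestionAnswering.from_pretrained(ckpt)

inputs = tok("Where is my home?", "My home is in the northeast.", return_tensors="pt")
with torch.no_grad():
    out = model(**inputs)
start = out.start_logits.argmax().item()  # most likely answer start token
end = out.end_logits.argmax().item()      # most likely answer end token
print(tok.decode(inputs["input_ids"][0][start: end + 1]))  # expected: something like "the northeast"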

The post is organized into the following parts:

  1. Loading the data
  2. Preprocessing the data
  3. Fine-tuning the pretrained model: fine-tune it with the Trainer API from transformers;
  4. Evaluating the model.

Setup

Install the following libraries:

pip install datasets transformers
#transformers==4.9.2
#datasets==1.11.0

Data loading

About the dataset

We use the Stanford Question Answering Dataset (SQuAD), a reading-comprehension dataset made up of questions posed by crowdworkers on a set of Wikipedia articles. The answer to each question is a span taken from the corresponding passage, or (in SQuAD 2.0) the question may be unanswerable.

Loading the data

Loading this dataset is already wrapped by the 🤗 Datasets library, so we can load it with the following code:

from datasets import load_dataset

# squad_v2 = True/False selects SQuAD v2 or SQuAD v1 respectively.
# If you are using another dataset: True means some questions may be unanswerable (the model may abstain), False means every question must be answered.
squad_v2 = False
# Download the data (requires network access)
datasets = load_dataset("squad_v2" if squad_v2 else "squad")

If you are using your own data, refer to the first post in this series 【定位词:加载数据】 for how to load it.

Given a split key (train, validation, or test) and an index, we can inspect a sample. Whether in the training, validation, or test split, every QA sample has three keys: "context", "question", and "answers".

  1. answers: the answer. Besides the answer text extracted from the passage, it also gives the answer's start position, counted in characters (515 in the example below).
  2. context: the passage;
  3. question: the question.

datasets["train"][0]
#{'answers': {'answer_start': [515], 'text': ['Saint Bernadette Soubirous']},
# 'context': 'Architecturally, the school has a Catholic character. Atop the Main Building\'s gold dome is a golden statue of the Virgin Mary. Immediately in front of the Main Building and facing it, is a copper statue of Christ with arms upraised with the legend "Venite Ad Me Omnes". Next to the Main Building is the Basilica of the Sacred Heart. Immediately behind the basilica is the Grotto, a Marian place of prayer and reflection. It is a replica of the grotto at Lourdes, France where the Virgin Mary reputedly appeared to Saint Bernadette Soubirous in 1858. At the end of the main drive (and in a direct line that connects through 3 statues and the Gold Dome), is a simple, modern stone statue of Mary.',
# 'id': '5733be284776f41900661182',
# 'question': 'To whom did the Virgin Mary allegedly appear in 1858 in Lourdes France?',
# 'title': 'University_of_Notre_Dame'}

The following function randomly picks a few examples from the dataset and displays them:

from datasets import ClassLabel, Sequence
import random
import pandas as pd
from IPython.display import display, HTML

def show_random_elements(dataset, num_examples=10):
    assert num_examples <= len(dataset), "Can't pick more elements than there are in the dataset."
    picks = []
    for _ in range(num_examples):
        pick = random.randint(0, len(dataset)-1)
        while pick in picks:
            pick = random.randint(0, len(dataset)-1)
        picks.append(pick)

    df = pd.DataFrame(dataset[picks])
    for column, typ in dataset.features.items():
        if isinstance(typ, ClassLabel):
            df[column] = df[column].transform(lambda i: typ.names[i])
        elif isinstance(typ, Sequence) and isinstance(typ.feature, ClassLabel):
            df[column] = df[column].transform(lambda x: [typ.feature.names[i] for i in x])
    display(HTML(df.to_html()))

show_random_elements(datasets["train"], num_examples=1)
# Example output (one random training sample, shown as field/value pairs):
# answers : {'answer_start': [185], 'text': ['diesel fuel']}
# context : In Alberta, five bitumen upgraders produce synthetic crude oil and a variety of other products: The Suncor Energy upgrader near Fort McMurray, Alberta produces synthetic crude oil plus diesel fuel; the Syncrude Canada, Canadian Natural Resources, and Nexen upgraders near Fort McMurray produce synthetic crude oil; and the Shell Scotford Upgrader near Edmonton produces synthetic crude oil plus an intermediate feedstock for the nearby Shell Oil Refinery. A sixth upgrader, under construction in 2015 near Redwater, Alberta, will upgrade half of its crude bitumen directly to diesel fuel, with the remainder of the output being sold as feedstock to nearby oil refineries and petrochemical plants.
# id : 571b074c9499d21900609be3
# question : Besides crude oil, what does the Suncor Energy plant produce?
# title : Asphalt

Data preprocessing

Before feeding the data to the model, we need to preprocess it.

As before, preprocessing consists of two basic steps:

  1. Tokenization;
  2. Converting the tokens into the input format the model expects for this task.

The tokenizer handles both steps: it first tokenizes the input, then converts the tokens into the corresponding token IDs of the pretrained model, and finally produces the input format the model expects.

Initializing the tokenizer

Earlier posts already covered the tokenizer and showed tokenization examples, so we won't repeat that here. use_fast=True selects the fast version of the tokenizer.

from transformers import AutoTokenizer
model_checkpoint = "distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint, use_fast=True)
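
To make the two preprocessing steps above concrete, here is a minimal sketch using the tokenizer we just loaded (the sentence is just an arbitrary example):

tokens = tokenizer.tokenize("What does extractive question answering do?")
print(tokens)                                   # subword tokens
print(tokenizer.convert_tokens_to_ids(tokens))  # the corresponding vocabulary IDs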

Converting to the model input format for this task

QA models usually take the question and the context concatenated as input and then look for the answer inside the context. When a context contains no answer, we simply place the labeled start and end positions at the index of the CLS token.

We pass the question as the tokenizer's first sentence and the context as the second; the tokenizer concatenates them and adds the special tokens to form the model input:

example = datasets["train"][0]
tokenized_example=tokenizer(example["question"], example["context"])
tokenized_example["input_ids"]
#[101,2000,3183,2106,1996,6261,2984,9382,3711,1999,8517,1999,10223,26371,2605,1029,102,6549,2135,1010,1996,2082,2038,1037,3234,2839,1012,10234,1996,2364,2311,1005,1055,2751,8514,2003,1037,3585,6231,1997,1996,6261,2984,1012,3202,1999,2392,1997,1996,2364,2311,1998,5307,2009,1010,2003,1037,6967,6231,1997,4828,2007,2608,2039,14995,6924,2007,1996,5722,1000,2310,3490,2618,4748,2033,18168,5267,1000,1012,2279,2000,1996,2364,2311,2003,1996,13546,1997,1996,6730,2540,1012,3202,2369,1996,13546,2003,1996,24665,23052,1010,1037,14042,2173,1997,7083,1998,9185,1012,2009,2003,1037,15059,1997,1996,24665,23052,2012,10223,26371,1010,2605,2073,1996,6261,2984,22353,2135,2596,2000,3002,16595,9648,4674,2061,12083,9711,2271,1999,8517,1012,2012,1996,2203,1997,1996,2364,3298,1006,1998,1999,1037,3622,2240,2008,8539,2083,1017,11342,1998,1996,2751,8514,1007,1010,2003,1037,3722,1010,2715,2962,6231,1997,2984,1012,102]

We use the sequence_ids method to get a mask that distinguishes question from context: None corresponds to the special tokens, while 0 and 1 mark the first and second text respectively. Since the question is passed first and the context second, 0 corresponds to the question and 1 to the context.

sequence_ids = tokenized_example.sequence_ids()
print(sequence_ids)
#[None,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,None,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,None]

Now consider a question worth thinking about: what happens when a context is longer than the maximum length the model can handle?

Pretrained models generally have a maximum input length, and over-long inputs are usually truncated. However, if we truncate an over-long context in a <question, context, answer> triple, we may cut off the answer (since the answer is a span extracted from the context).

So how do pretrained QA models deal with over-long texts?

Let's first find an example that exceeds the model's maximum length and then walk through the mechanism used to handle it.

The for loop below iterates over the dataset to find an over-long sample. We will use a maximum input length of 384 for this model (512 is another commonly used value):

for i, example in enumerate(datasets["train"]):
    if len(tokenizer(example["question"], example["context"])["input_ids"]) > 384:
        break
example = datasets["train"][i]

Without truncation, the tokenized input of this example is 396 tokens long; if we simply truncate it to the maximum length of 384, the information beyond 384 tokens is lost:

len(tokenizer(example["question"], example["context"])["input_ids"])
#396
len(tokenizer(example["question"], example["context"], max_length=384, truncation="only_second")["input_ids"]) # truncation="only_second" truncates only the second sentence (the context)
#384

The solution is to split an over-long input into several shorter inputs, each of which fits within the model's maximum length. Since the answer might sit exactly at a split boundary, adjacent slices are allowed to overlap; the overlap is controlled by the tokenizer's doc_stride parameter.

The pretrained model's tokenizer already wraps this logic for us; we only need to set a few parameters.

max_length = 384  # maximum length of an input feature (question and context concatenated)
doc_stride = 128  # number of overlapping tokens between two adjacent slices

Note: in general we only slice the context, never the question. Since the context is appended after the question (it is the second text), we pass truncation="only_second" so the tokenizer only truncates and slices the context, and stride=doc_stride to control the overlap between slices.

tokenized_example = tokenizer(
    example["question"],
    example["context"],
    max_length=max_length,
    truncation="only_second",
    return_overflowing_tokens=True,
    stride=doc_stride
)

Because the over-long input was sliced, we get several inputs; the lengths of their input_ids are:

[len(x) for x in tokenized_example["input_ids"]]
#[384, 157]

We can decode the preprocessed token IDs (input_ids) back into text to check the slicing result. Notice that the tokenizer automatically prepends the question text to the context of the second slice as well:

for i, x in enumerate(tokenized_example["input_ids"][:2]):
    print("Slice: {}".format(i))
    print(tokenizer.decode(x))
#Slice: 0
#[CLS] how many wins does the notre dame men's basketball team have? [SEP] the men's basketball team has over 1, 600 wins, one of only 12 schools who have reached that mark, and have appeared in 28 ncaa tournaments. former player austin carr holds the record for most points scored in a single game of the tournament with 61. although the team has never won the ncaa tournament, they were named by the helms athletic foundation as national champions twice. the team has orchestrated a number of upsets of number one ranked teams, the most notable of which was ending ucla's record 88 - game winning streak in 1974. the team has beaten an additional eight number - one teams, and those nine wins rank second, to ucla's 10, all - time in wins against the top team. the team plays in newly renovated purcell pavilion ( within the edmund p. joyce center ), which reopened for the beginning of the 2009 – 2010 season. the team is coached by mike brey, who, as of the 2014 – 15 season, his fifteenth at notre dame, has achieved a 332 - 165 record. in 2009 they were invited to the nit, where they advanced to the semifinals but were beaten by penn state who went on and beat baylor in the championship. the 2010 – 11 team concluded its regular season ranked number seven in the country, with a record of 25 – 5, brey's fifth straight 20 - win season, and a second - place finish in the big east. during the 2014 - 15 season, the team went 32 - 6 and won the acc conference tournament, later advancing to the elite 8, where the fighting irish lost on a missed buzzer - beater against then undefeated kentucky. led by nba draft picks jerian grant and pat connaughton, the fighting irish beat the eventual national champion duke blue devils twice during the season. the 32 wins were [SEP]
#Slice: 1
#[CLS] how many wins does the notre dame men's basketball team have? [SEP] championship. the 2010 – 11 team concluded its regular season ranked number seven in the country, with a record of 25 – 5, brey's fifth straight 20 - win season, and a second - place finish in the big east. during the 2014 - 15 season, the team went 32 - 6 and won the acc conference tournament, later advancing to the elite 8, where the fighting irish lost on a missed buzzer - beater against then undefeated kentucky. led by nba draft picks jerian grant and pat connaughton, the fighting irish beat the eventual national champion duke blue devils twice during the season. the 32 wins were the most by the fighting irish team since 1908 - 09. [SEP]

Recall that QA models use the position of the answer (its start and end) as training labels, rather than the answer's token IDs. Slicing therefore creates a new problem: the answer's position has changed, so we need to re-locate the answer within each slice (i.e., its position relative to the start of that slice's context).

So we need a mapping between each slice and the original input: for every token, the correspondence between its position in the sliced context and its position in the original over-long context.

Getting the position mapping between slices and the original text

Passing return_offsets_mapping=True to the tokenizer gives us this mapping:

tokenized_example = tokenizer(
    example["question"],
    example["context"],
    max_length=max_length,
    truncation="only_second",
    return_overflowing_tokens=True,
    return_offsets_mapping=True,
    stride=doc_stride
)

Let's print, for slice 1 (the second slice) of tokenized_example, the offsets of its first 30 tokens in the original text. Note that special tokens such as [CLS] are mapped to (0, 0), because they are not part of the question or the context. The second token corresponds to character positions 0 to 3 (of the question). Also note that, in the context part of slice 1, the offsets do not restart from 0: they record each token's position in the original over-long context (starting around character 1093 below).

# Print the mapping between position indices before and after slicing
print(tokenized_example["offset_mapping"][1][:30])
#[(0, 0), (0, 3), (4, 8), (9, 13), (14, 18), (19, 22), (23, 28), (29, 33), (34, 37), (37, 38), (38, 39), (40, 50), (51, 55), (56, 60), (60, 61), (0, 0), (1093, 1105), (1105, 1106), (1107, 1110), (1111, 1115), (1115, 1116), (1116, 1118), (1119, 1123), (1124, 1133), (1134, 1137), (1138, 1145), (1146, 1152), (1153, 1159), (1160, 1166), (1167, 1172)]

Thus we can take a token ID from a slice, convert it back to a token, and use offset_mapping to map it to its position in the original text. Since the question is concatenated before the context, for question tokens we index directly into the question string:

first_token_id = tokenized_example["input_ids"][0][1] #2129
offsets = tokenized_example["offset_mapping"][0][1] #(0, 3)
print(tokenizer.convert_ids_to_tokens([first_token_id])[0], example["question"][offsets[0]:offsets[1]])
#how How

This gives us the procedure for updating the answer's position relative to each slice's context:

  1. First find the start index token_start_index and end index token_end_index of the context within the concatenated (sentence 1 + sentence 2) input;
  2. Then check whether the answer lies inside that context span:
    1. if it does, update the answer's start and end positions;
    2. if it does not, label the answer at the CLS token position.

Finally we can compute the labeled answer's position inside the preprocessed features:

sequence_ids = tokenized_example.sequence_ids()  # recompute the question/context mask for this (long) example
answers = example["answers"]  # the answer: {'answer_start': [30], 'text': ['over 1,600']}
start_char = answers["answer_start"][0]  # character-level start position of the answer: 30
end_char = start_char + len(answers["text"][0])  # character-level end position of the answer: 40

# Find the start token index of the current context.
token_start_index = 0  # start position of the context within the concatenated (sentence 1 + sentence 2) input
while sequence_ids[token_start_index] != 1:  # sequence_ids distinguishes question from context
    token_start_index += 1

# Find the end token index of the current context.
token_end_index = len(tokenized_example["input_ids"][0]) - 1  # end position of the context; padding, if any, has to be skipped
while sequence_ids[token_end_index] != 1:
    token_end_index -= 1

# Check whether the answer lies outside the context span; if so, this sample is labeled at the CLS token position.
offsets = tokenized_example["offset_mapping"][0]
if (offsets[token_start_index][0] <= start_char and offsets[token_end_index][1] >= end_char):  # the answer is inside the context
    # Move token_start_index and token_end_index to the two ends of the answer.
    # Note the edge case where the answer is the very last word of the context.
    while token_start_index < len(offsets) and offsets[token_start_index][0] <= start_char:
        token_start_index += 1  # token_start_index started at the first token of the context
    start_position = token_start_index - 1
    while offsets[token_end_index][1] >= end_char:
        token_end_index -= 1  # token_end_index started at the last token of the context
    end_position = token_end_index + 1
    print("start_position: {}, end_position: {}".format(start_position, end_position))
else:  # the answer is outside the context
    print("The answer is not in this feature.")

start_position: 23, end_position: 26

Let's verify the answer positions: take the token IDs at the computed positions, decode them back to text, and compare with the original answer.

print(tokenizer.decode(tokenized_example["input_ids"][0][start_position: end_position+1]))
print(answers["text"][0])

over 1, 600 over 1,600

One more thing to note: some models expect question-then-context while others expect context-then-question (this is tied to which side the model pads on), so we use the tokenizer's padding_side attribute to decide the order.

pad_on_right = tokenizer.padding_side == "right"  # True if the context goes on the right (as the second sentence)

Now let's put all the steps together into a single function. Samples without an answer (or whose answer falls entirely outside a given slice) are labeled at the CLS token; one could also simply discard them (e.g., with an allow_impossible_answers flag set to False), but we keep things simple here.

def prepare_train_features(examples):
    # We need to truncate and pad the examples while keeping all the information, so over-long
    # contexts are split into several slices; adjacent slices overlap.
    tokenized_examples = tokenizer(
        examples["question" if pad_on_right else "context"],
        examples["context" if pad_on_right else "question"],
        truncation="only_second" if pad_on_right else "only_first",
        max_length=max_length,
        stride=doc_stride,
        return_overflowing_tokens=True,
        return_offsets_mapping=True,
        padding="max_length",
    )

    # overflow_to_sample_mapping maps each slice back to its original example.
    # E.g. if 2 examples produce 4 slices, the mapping is [0, 0, 1, 1]: the first two slices come from the first example.
    sample_mapping = tokenized_examples.pop("overflow_to_sample_mapping")
    # offset_mapping also has one entry per slice.
    # It maps token positions back to character positions in the original input; since the answer
    # is annotated on the original input, this lets us find the answer's start and end tokens.
    offset_mapping = tokenized_examples.pop("offset_mapping")

    # Re-label the data.
    tokenized_examples["start_positions"] = []
    tokenized_examples["end_positions"] = []

    for i, offsets in enumerate(offset_mapping):
        # Process each slice.
        # Samples without an answer are labeled at the CLS token.
        input_ids = tokenized_examples["input_ids"][i]
        cls_index = input_ids.index(tokenizer.cls_token_id)

        # Distinguish question from context.
        sequence_ids = tokenized_examples.sequence_ids(i)

        # Get the index of the original example.
        sample_index = sample_mapping[i]
        answers = examples["answers"][sample_index]
        # If there is no answer, use the CLS position as the answer.
        if len(answers["answer_start"]) == 0:
            tokenized_examples["start_positions"].append(cls_index)
            tokenized_examples["end_positions"].append(cls_index)
        else:
            # Character-level start/end positions of the answer.
            start_char = answers["answer_start"][0]
            end_char = start_char + len(answers["text"][0])

            # Token-level start index of the context.
            token_start_index = 0
            while sequence_ids[token_start_index] != (1 if pad_on_right else 0):
                token_start_index += 1

            # Token-level end index of the context.
            token_end_index = len(input_ids) - 1
            while sequence_ids[token_end_index] != (1 if pad_on_right else 0):
                token_end_index -= 1

            # If the answer is outside this slice, label it at the CLS index as well.
            if not (offsets[token_start_index][0] <= start_char and offsets[token_end_index][1] >= end_char):
                tokenized_examples["start_positions"].append(cls_index)
                tokenized_examples["end_positions"].append(cls_index)
            else:
                # Otherwise move token_start_index and token_end_index to the two ends of the answer.
                # Note: we could go after the last offset if the answer is the last word (edge case).
                while token_start_index < len(offsets) and offsets[token_start_index][0] <= start_char:
                    token_start_index += 1
                tokenized_examples["start_positions"].append(token_start_index - 1)
                while offsets[token_end_index][1] >= end_char:
                    token_end_index -= 1
                tokenized_examples["end_positions"].append(token_end_index + 1)

    return tokenized_examples

The preprocessing function above can handle either a single sample or a batch of samples (examples). When given several samples, it returns the list of preprocessed results; a small sanity check is sketched below.
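
A small sanity check (sketch): feed a handful of raw examples through the function; if any context gets sliced, the output contains more features than the input had examples.

sample_batch = datasets["train"][:5]      # a dict of lists holding 5 raw examples
sample_features = prepare_train_features(sample_batch)
print(len(sample_features["input_ids"]))  # >= 5 whenever a context was split into several slices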

Next we use the map method to apply prepare_train_features to every sample in all three splits of datasets. The argument batched=True encodes the texts in batches, which takes full advantage of the fast tokenizer loaded earlier: it processes the texts of a batch concurrently with multiple threads.

tokenized_datasets = datasets.map(prepare_train_features, batched=True, remove_columns=datasets["train"].column_names)
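
As a quick check (the exact numbers depend on the dataset version), the number of training features should be larger than the number of raw examples, because long contexts were split into overlapping slices:

print(len(datasets["train"]), len(tokenized_datasets["train"]))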

Fine-tuning the pretrained model

With the data ready, we now download and load the pretrained model and fine-tune it.

Loading the pretrained model

For an extractive QA task we need a model class built for it, so we use AutoModelForQuestionAnswering.

It is loaded the same way as in the previous posts, so we won't repeat the details.

from transformers import AutoModelForQuestionAnswering
model = AutoModelForQuestionAnswering.from_pretrained(model_checkpoint)

Setting the training arguments

To build a Trainer, we also need the training configuration TrainingArguments, which contains all the attributes that define the training process.

from transformers import TrainingArguments

batch_size = 16  # batch size; not defined earlier in this post, 16 is a common choice for this setup

args = TrainingArguments(
    "test-squad",
    evaluation_strategy = "epoch",
    learning_rate=2e-5,  # learning rate
    per_device_train_batch_size=batch_size,
    per_device_eval_batch_size=batch_size,
    num_train_epochs=3,  # number of training epochs
    weight_decay=0.01,
)

Data collator

Next we need to tell the Trainer how to build batches from the preprocessed inputs. We use a data collator, which takes the preprocessed inputs, groups them into batches, and feeds them to the model.

Here the default_data_collator is enough to feed the preprocessed data to the model.

from transformers import default_data_collator
data_collator = default_data_collator
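
A toy sketch of what this collator does (the feature dicts below are made up, mirroring our preprocessed fields): it turns a list of feature dicts into a single batch of stacked tensors.

toy_features = [
    {"input_ids": [101, 2054, 102], "start_positions": 0, "end_positions": 0},
    {"input_ids": [101, 2129, 102], "start_positions": 1, "end_positions": 2},
]
batch = default_data_collator(toy_features)
print(batch["input_ids"].shape)  # torch.Size([2, 3])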

Defining the evaluation method

Note that during this training run we will only compute the loss; we do not define an evaluation metric for now.

Starting the training

Simply pass the data, model, and arguments to the Trainer:

from transformers import Trainer
trainer = Trainer(
    model,
    args,
    train_dataset=tokenized_datasets["train"],
    eval_dataset=tokenized_datasets["validation"],
    data_collator=data_collator,
    tokenizer=tokenizer,
)

Call the train method to start training:

trainer.train()
# Save the model right away
trainer.save_model("test-squad-trained")
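
If you need the fine-tuned weights again later, they can be reloaded from the save directory (a sketch; the Trainer also saves the tokenizer there when one was passed to it):

from transformers import AutoModelForQuestionAnswering, AutoTokenizer

model = AutoModelForQuestionAnswering.from_pretrained("test-squad-trained")
tokenizer = AutoTokenizer.from_pretrained("test-squad-trained")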

Model evaluation

The model outputs logits for the start and end positions of the answer.

Let's take the first batch as an example:

import torch

for batch in trainer.get_eval_dataloader():  # iterator over evaluation batches
    break
batch = {k: v.to(trainer.args.device) for k, v in batch.items()}
with torch.no_grad():
    output = trainer.model(**batch)
output.keys()
#odict_keys(['loss', 'start_logits', 'end_logits'])

Recall from our walkthrough of the BERT-based model source code that the output of BertForQuestionAnswering contains:

return QuestionAnsweringModelOutput(
    loss=total_loss,
    start_logits=start_logits,
    end_logits=end_logits,
    hidden_states=outputs.hidden_states,
    attentions=outputs.attentions,
)

When producing predictions we don't need the loss. Every token of every feature (slice) gets two logits (one in start_logits and one in end_logits), and we locate the answer directly from these logits.

How do we find the answer position from the logits?

Method 1

The simplest way to predict the answer is to take the index with the largest value in start_logits as the answer start and the index with the largest value in end_logits as the answer end.

output.start_logits.shape, output.end_logits.shape
#(torch.Size([16, 384]), torch.Size([16, 384]))

output.start_logits.argmax(dim=-1), output.end_logits.argmax(dim=-1)
# (tensor([ 46, 57, 78, 43, 118, 15, 72, 35, 15, 34, 73, 41, 80, 91, 156, 35], device='cuda:0'),tensor([ 47, 58, 81, 55, 118, 110, 75, 37, 110, 36, 76, 53, 83, 94, 158, 35], device='cuda:0'))

This strategy works well in most cases. But what if the prediction indicates there is no valid answer, for example when the predicted start index is larger than the end index, or when start and end point into the question?

Method 2

In that case a simple fix is to fall back to the second-best prediction, then the third-best, and so on.
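
A minimal sketch of this fallback idea, reusing the output of the evaluation batch above: look at the top-2 start/end candidates for the first feature instead of only the argmax.

start_scores, start_candidates = output.start_logits[0].topk(2)
end_scores, end_candidates = output.end_logits[0].topk(2)
print(start_candidates.tolist(), end_candidates.tolist())  # fall back to the second index if the best pair is invalid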

However, Method 2 does not reliably find a valid answer. Is there a more principled approach?

Method 3

It has four steps:

  1. First select the n_best_size (a hyperparameter) best candidate start and end positions from start_logits and end_logits;
  2. Build valid candidate answers from them, discarding the invalid combinations, which include:
    1. pairs where start > end;
    2. start or end beyond the maximum length;
    3. start or end pointing at text that lies in the question rather than the context (there is a subtlety here, handled below);
  3. Score each valid candidate by adding its start logit and end logit;
  4. Finally, sort valid_answers by score and take the best one as the answer.

To detect the third kind of invalid case, we add the following two pieces of information to the validation features:

  1. The ID of the example that produced the feature (example_id, via overflow_to_sample_mapping): since one example can produce several features, each feature/slice needs to know which example it came from.
  2. The offset mapping (offset_mapping): maps each token of a slice back to character-level positions in the original text; the entries belonging to the question are masked with None, while those of the context are kept unchanged.

We now use a prepare_validation_features function (slightly different from the prepare_train_features used for training) to process the validation set and add these two pieces of information, then run evaluation on the processed validation set.

def prepare_validation_features(examples):
    # Tokenize our examples with truncation and maybe padding, but keep the overflows using a stride. This results
    # in one example possibly giving several features when a context is long, each of those features having a
    # context that overlaps a bit the context of the previous feature.
    tokenized_examples = tokenizer(
        examples["question" if pad_on_right else "context"],
        examples["context" if pad_on_right else "question"],
        truncation="only_second" if pad_on_right else "only_first",
        max_length=max_length,
        stride=doc_stride,
        return_overflowing_tokens=True,
        return_offsets_mapping=True,
        padding="max_length",
    )

    # Since one example might give us several features if it has a long context, we need a map from a feature to
    # its corresponding example. This key gives us just that.
    # E.g. if 2 examples produce 4 slices, the mapping is [0, 0, 1, 1]: the first two slices come from the first example.
    sample_mapping = tokenized_examples.pop("overflow_to_sample_mapping")

    # We keep the example_id that gave us this feature and we will store the offset mappings.
    tokenized_examples["example_id"] = []

    for i in range(len(tokenized_examples["input_ids"])):
        # Grab the sequence corresponding to that example (to know what is the context and what is the question).
        sequence_ids = tokenized_examples.sequence_ids(i)
        context_index = 1 if pad_on_right else 0

        # One example can give several spans, this is the index of the example containing this span of text.
        sample_index = sample_mapping[i]
        tokenized_examples["example_id"].append(examples["id"][sample_index])

        # Set to None the offset_mapping entries that are not part of the context so it's easy to determine if a token
        # position is part of the context or not.
        # In other words: mask the question part of offset_mapping with None and keep the context part unchanged.
        tokenized_examples["offset_mapping"][i] = [
            (o if sequence_ids[k] == context_index else None)
            for k, o in enumerate(tokenized_examples["offset_mapping"][i])
        ]

    return tokenized_examples

As before, apply prepare_validation_features to every sample in the validation set:

validation_features = datasets["validation"].map(
    prepare_validation_features,
    batched=True,
    remove_columns=datasets["validation"].column_names
)

Use Trainer.predict to get all predictions:

raw_predictions = trainer.predict(validation_features)

The Trainer hides the columns that are not used by the model (here example_id and offset_mapping, which we will need for post-processing), so we set them back:

validation_features.set_format(type=validation_features.format["type"], columns=list(validation_features.features.keys()))

Thanks to prepare_validation_features, the offset mapping of any token that belongs to the question is None, so we can use the offset mapping to tell whether a token is inside the context.

Going one step further, we use max_answer_length to discard answers that are too long.

The code is as follows:

import numpy as np

n_best_size = 20
max_answer_length = 30

start_logits = output.start_logits[0].cpu().numpy()
end_logits = output.end_logits[0].cpu().numpy()
offset_mapping = validation_features[0]["offset_mapping"]
# The first feature comes from the first example. For the more general case, we will need to match
# the example_id to an example index.
context = datasets["validation"][0]["context"]

# Gather the indices of the best start/end logits:
start_indexes = np.argsort(start_logits)[-1 : -n_best_size - 1 : -1].tolist()
end_indexes = np.argsort(end_logits)[-1 : -n_best_size - 1 : -1].tolist()
valid_answers = []
for start_index in start_indexes:
    for end_index in end_indexes:
        # Don't consider out-of-scope answers, either because the indices are out of bounds or correspond
        # to part of the input_ids that are not in the context.
        if (
            start_index >= len(offset_mapping)
            or end_index >= len(offset_mapping)
            or offset_mapping[start_index] is None
            or offset_mapping[end_index] is None
        ):
            continue
        # Don't consider answers with a length that is either < 0 or > max_answer_length.
        if end_index < start_index or end_index - start_index + 1 > max_answer_length:
            continue
        # Otherwise this (start, end) pair is a valid candidate answer.
        start_char = offset_mapping[start_index][0]
        end_char = offset_mapping[end_index][1]
        valid_answers.append(
            {
                "score": start_logits[start_index] + end_logits[end_index],
                "text": context[start_char: end_char]  # map back to the original text via character offsets
            }
        )
# Finally, sort valid_answers by score and keep the best candidates.
valid_answers = sorted(valid_answers, key=lambda x: x["score"], reverse=True)[:n_best_size]
valid_answers

Compare the predicted answer with the ground-truth answer to verify:

datasets["validation"][0]["answers"]
#{'answer_start': [177, 177, 177],
# 'text': ['Denver Broncos', 'Denver Broncos', 'Denver Broncos']}

There is one more issue to think about:

When an example is split into several slices, the model treats each slice as an independent "sample" during training. So when computing precision and recall, we must not score the slices individually; we should aggregate them and score at the level of the original example.

For the example above this was easy, because the answer happened to be in the first feature, and the first feature necessarily comes from the first example. For the other features produced by an over-long example, we need a map from features to examples. Because an example may be split into several features, we have to collect the candidate answers from all of its features.

The following code builds the map between example indices and feature indices.

import collections

examples = datasets["validation"]
features = validation_features

example_id_to_index = {k: i for i, k in enumerate(examples["id"])}
features_per_example = collections.defaultdict(list)
for i, feature in enumerate(features):
    features_per_example[example_id_to_index[feature["example_id"]]].append(i)

With that, the post-processing is essentially complete.

But there is still one remaining question: how do we handle the no-answer case (squad_v2=True)?

The code above only considers answers found inside the context. We also need to collect a score for the no-answer prediction, which corresponds to the start and end logits of the CLS token. If an example has several features, we have to check whether each of them predicts no answer, and the final no-answer score of the example is the minimum of the no-answer scores over all its features. (Why the minimum? Because if the answer appears in only one slice, the other slices certainly have no answer; to be confident the whole example is unanswerable, even the slice most likely to contain an answer must have none.) If that final no-answer score is higher than the scores of all candidate answers, the question is predicted as unanswerable.

The final post-processing function:

from tqdm.auto import tqdm

def postprocess_qa_predictions(examples, features, raw_predictions, n_best_size = 20, max_answer_length = 30):
    all_start_logits, all_end_logits = raw_predictions
    # Build a map example to its corresponding features.
    example_id_to_index = {k: i for i, k in enumerate(examples["id"])}
    features_per_example = collections.defaultdict(list)
    for i, feature in enumerate(features):
        features_per_example[example_id_to_index[feature["example_id"]]].append(i)

    # The dictionaries we have to fill.
    predictions = collections.OrderedDict()

    # Logging.
    print(f"Post-processing {len(examples)} example predictions split into {len(features)} features.")

    # Let's loop over all the examples!
    for example_index, example in enumerate(tqdm(examples)):
        # Those are the indices of the features associated to the current example.
        feature_indices = features_per_example[example_index]

        min_null_score = None # Only used if squad_v2 is True.
        valid_answers = []

        context = example["context"]
        # Looping through all the features associated to the current example.
        for feature_index in feature_indices:
            # We grab the predictions of the model for this feature.
            start_logits = all_start_logits[feature_index]
            end_logits = all_end_logits[feature_index]
            # This is what will allow us to map the positions in our logits to spans of text in the original
            # context.
            offset_mapping = features[feature_index]["offset_mapping"]

            # Update the null prediction: keep the smallest null score across all features of this example.
            cls_index = features[feature_index]["input_ids"].index(tokenizer.cls_token_id)
            feature_null_score = start_logits[cls_index] + end_logits[cls_index]
            if min_null_score is None or min_null_score > feature_null_score:
                min_null_score = feature_null_score

            # Go through all possibilities for the `n_best_size` greater start and end logits.
            start_indexes = np.argsort(start_logits)[-1 : -n_best_size - 1 : -1].tolist()
            end_indexes = np.argsort(end_logits)[-1 : -n_best_size - 1 : -1].tolist()
            for start_index in start_indexes:
                for end_index in end_indexes:
                    # Don't consider out-of-scope answers, either because the indices are out of bounds or correspond
                    # to part of the input_ids that are not in the context.
                    if (
                        start_index >= len(offset_mapping)
                        or end_index >= len(offset_mapping)
                        or offset_mapping[start_index] is None
                        or offset_mapping[end_index] is None
                    ):
                        continue
                    # Don't consider answers with a length that is either < 0 or > max_answer_length.
                    if end_index < start_index or end_index - start_index + 1 > max_answer_length:
                        continue

                    start_char = offset_mapping[start_index][0]
                    end_char = offset_mapping[end_index][1]
                    valid_answers.append(
                        {
                            "score": start_logits[start_index] + end_logits[end_index],
                            "text": context[start_char: end_char]
                        }
                    )

        if len(valid_answers) > 0:
            best_answer = sorted(valid_answers, key=lambda x: x["score"], reverse=True)[0]
        else:
            # In the very rare edge case we have not a single non-null prediction, we create a fake prediction to avoid
            # failure.
            best_answer = {"text": "", "score": 0.0}

        # Let's pick our final answer: the best one or the null answer (only for squad_v2)
        if not squad_v2:
            predictions[example["id"]] = best_answer["text"]
        else:
            answer = best_answer["text"] if best_answer["score"] > min_null_score else ""
            predictions[example["id"]] = answer

    return predictions

Apply the post-processing function to the raw prediction outputs:

final_predictions = postprocess_qa_predictions(datasets["validation"], validation_features, raw_predictions.predictions)
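
Before computing metrics, it can help to peek at one prediction; final_predictions is an ordered dict mapping example id to the predicted answer text.

print(list(final_predictions.items())[0])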

Then load the evaluation metric:

from datasets import load_metric
metric = load_metric("squad_v2" if squad_v2 else "squad")

We compute the metric from the predictions and the references. For a fair comparison, both need to be put into the format the metric expects. For SQuAD v2 the metric additionally requires a no_answer_probability field (since no-answer predictions have already been turned into empty strings, we simply set it to 0.0).

if squad_v2:
    formatted_predictions = [{"id": k, "prediction_text": v, "no_answer_probability": 0.0} for k, v in final_predictions.items()]
else:
    formatted_predictions = [{"id": k, "prediction_text": v} for k, v in final_predictions.items()]
references = [{"id": ex["id"], "answers": ex["answers"]} for ex in datasets["validation"]]
metric.compute(predictions=formatted_predictions, references=references)

And that's a wrap!

References

4.4-问答任务-多选问答.md

BERT in Practice (1): Text Classification (BERT实战——(1)文本分类)

The official transformers documentation