Deleting and updating string and entity indices in a text document for NER training data

This article describes how to delete untagged sentences from a text document used as NER training data and how to update the string and entity indices accordingly, based on a question-and-answer exchange.

Problem description

I'm trying to create a training dataset for NER recognition. For this I have a large amount of data that needs to be tagged, with unnecessary sentences removed. When an unnecessary sentence is deleted, the index values must be updated. The other day I saw an incredibly useful code snippet from another user about this, and now I can't find it. Adapting their snippet, I can briefly illustrate my problem.

Let's take some sample training data:

data = [{"content":'''Hello we are hans and john. I enjoy playing Football.
I love eating grapes. Hanaan is great.''',"annotations":[{"id":1,"start":13,"end":17,"tag":"name"},
                                {"id":2,"start":22,"end":26,"tag":"name"},
                                {"id":3,"start":68,"end":74,"tag":"fruit"},
                                {"id":4,"start":76,"end":82,"tag":"name"}]}]

This can be visualized with the following spaCy displacy code:

import json
import spacy
from spacy import displacy

data = [{"content":'''Hello we are hans and john. I enjoy playing Football.
I love eating grapes. Hanaan is great.''',"annotations":[{"id":1,"start":13,"end":17,"tag":"name"},
                                {"id":2,"start":22,"end":26,"tag":"name"},
                                {"id":3,"start":68,"end":74,"tag":"fruit"},
                                {"id":4,"start":76,"end":82,"tag":"name"}]}]

data_index = 0  # index of the record in `data` to process

annot_tags = data[data_index]["annotations"]
entities = []
for j in annot_tags:
    start = j["start"]
    end = j["end"]
    tag = j["tag"]
    entities.append((start, end, tag))

# spaCy-style training tuple: (text, {"entities": [(start, end, label), ...]})
data_gen = (data[data_index]["content"], {"entities": entities})
data_one = []
data_one.append(data_gen)

nlp = spacy.blank('en')
raw_text = data_one[0][0]
doc = nlp.make_doc(raw_text)
spans = data_one[0][1]["entities"]
ents = []
for span_start, span_end, label in spans:
    ent = doc.char_span(span_start, span_end, label=label)
    # char_span returns None when the offsets don't align with token boundaries
    if ent is None:
        continue
    ents.append(ent)

doc.ents = ents
displacy.render(doc, style="ent", jupyter=True)
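One detail worth noting: doc.char_span returns None when the character offsets don't line up with token boundaries, which is why the loop above skips such spans. A tiny illustration (a toy text, reusing the nlp object from the block above):

doc2 = nlp.make_doc("Hello world")
print(doc2.char_span(0, 4))  # None -- "Hell" is not a whole token
print(doc2.char_span(0, 5))  # Hello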

The output will be:

[Output 1: displacy rendering of the original text with hans, john, grapes and Hanaan highlighted]

Now I want to delete the untagged sentence and update the index values. So the required output is as follows:

[Required Output: displacy rendering after the untagged sentence is removed]

Also, the data must end up in the following format, with the untagged sentence removed and the index values updated, so that I get the output shown above.

Required output data:

[{"content":'''Hello we are hans and john.
I love eating grapes. Hanaan is great.''',"annotations":[{"id":1,"start":13,"end":17,"tag":"name"},
                                {"id":2,"start":22,"end":26,"tag":"name"},
                                {"id":3,"start":42,"end":48,"tag":"fruit"},
                                {"id":4,"start":50,"end":56,"tag":"name"}]}]

I followed a post the other day and got code that almost works.

Code

import re

data = [{"content":'''Hello we are hans and john. I enjoy playing Football.
I love eating grapes. Hanaan is great.''',"annotations":[{"id":1,"start":13,"end":17,"tag":"name"},
                                {"id":2,"start":22,"end":26,"tag":"name"},
                                {"id":3,"start":68,"end":74,"tag":"fruit"},
                                {"id":4,"start":76,"end":82,"tag":"name"}]}]
         
         
         
# Attach the literal word to each annotation so it can be matched later
for idx, each in enumerate(data[0]['annotations']):
    start = each['start']
    end = each['end']
    word = data[0]['content'][start:end]
    data[0]['annotations'][idx]['word'] = word

# Naive sentence split on '.' -- this is the fragile part
sentences = [{'sentence': x.strip() + '.', 'checked': False} for x in data[0]['content'].split('.')]

new_data = [{'content': '', 'annotations': []}]
for idx, each in enumerate(data[0]['annotations']):
    for idx_alpha, sentence in enumerate(sentences):
        if sentence['checked']:
            continue
        temp = each.copy()
        check_word = temp['word']
        if check_word in sentence['sentence']:
            # Offset of the word inside this sentence, shifted by the length
            # of the content rebuilt so far
            start_idx = re.search(r'({})'.format(check_word), sentence['sentence']).start()
            end_idx = start_idx + len(check_word)

            current_len = len(new_data[0]['content'])

            new_data[0]['content'] += sentence['sentence'] + ' '
            temp.update({'start':start_idx + current_len, 'end':end_idx + current_len})
            new_data[0]['annotations'].append(temp)

            # Once a sentence is marked checked it is skipped for every later
            # annotation, so a second entity in the same sentence is lost
            sentences[idx_alpha]['checked'] = True
            break
print(new_data)

Output

[{'content': 'Hello we are hans and john. I love eating grapes. Hanaan is great. ',
  'annotations': [{'id': 1,
    'start': 13,
    'end': 17,
    'tag': 'name',
    'word': 'hans'},
   {'id': 3, 'start': 42, 'end': 48, 'tag': 'fruit', 'word': 'grapes'},
   {'id': 4, 'start': 50, 'end': 56, 'tag': 'name', 'word': 'Hanaan'}]}]

The name john is lost here. I can't afford to lose it when a sentence carries more than one tag. (The snippet above breaks after the first match and marks the sentence as checked, so a second entity in the same sentence is never processed.)

Recommended answer

This is a fairly tricky task, because you need to identify the sentences themselves: a simple split on '.' may not work, since it will also split on 'Mr.' and the like.
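A quick illustration of the problem with a naive split (a toy example):

text = "Hanaan is great. Mr. Jones is nice."
print([s.strip() for s in text.split('.') if s.strip()])
# ['Hanaan is great', 'Mr', 'Jones is nice'] -- 'Mr. Jones' is torn apart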

Since you're using spaCy anyway, why not let it identify the sentences, then loop over those sentences and work out the new start and end indices, excluding any sentence that has no entities. Then rebuild the content from that.

import re

import spacy

data = [{"content":'''Hello we are hans and john. I enjoy playing Football. 
I love eating grapes. Hanaan is great. Mr. Jones is nice.''',"annotations":[{"id":1,"start":13,"end":17,"tag":"name"},
                                {"id":2,"start":22,"end":26,"tag":"name"},
                                {"id":3,"start":68,"end":74,"tag":"fruit"},
                                {"id":4,"start":76,"end":82,"tag":"name"},
                                {"id":5,"start":93,"end":102,"tag":"name"}]}]

# Attach the literal word to each annotation so it can be matched per sentence
for idx, each in enumerate(data[0]['annotations']):
    start = each['start']
    end = each['end']
    word = data[0]['content'][start:end]
    data[0]['annotations'][idx]['word'] = word
    
         
text = data[0]['content']

# Load the small English pipeline and let spaCy segment the sentences
# (this handles abbreviations such as 'Mr.')
nlp = spacy.load('en_core_web_sm')
nlp.add_pipe('sentencizer')

doc = nlp(text)
sentences = list(doc.sents)
annotations = data[0]['annotations']

new_data = [{"content":'',
            'annotations':[]}]
for sentence in sentences:
    idx_to_remove = []
    for idx, annotation in enumerate(annotations):
        if annotation['word'] in sentence.text:
            temp = annotation.copy()

            # Offset of the word inside this sentence, shifted by the length
            # of the content rebuilt so far
            start_idx = re.search(r'({})'.format(annotation['word']), sentence.text).start()
            end_idx = start_idx + len(annotation['word'])

            current_len = len(new_data[0]['content'])

            temp.update({'start':start_idx + current_len, 'end':end_idx + current_len})
            new_data[0]['annotations'].append(temp)

            idx_to_remove.append(idx)

    # Keep the sentence only if it contained at least one entity
    if len(idx_to_remove) > 0:
        new_data[0]['content'] += sentence.text + ' '
    # The matched annotations sit at the front of the (document-ordered) list,
    # so pop that many from the front
    for x in range(0, len(idx_to_remove)):
        del annotations[0]

Output:

print(new_data)
[{'content': 'Hello we are hans and john. I love eating grapes. Hanaan is great. Mr. Jones is nice. ', 
'annotations': [
{'id': 1, 'start': 13, 'end': 17, 'tag': 'name', 'word': 'hans'}, 
{'id': 2, 'start': 22, 'end': 26, 'tag': 'name', 'word': 'john'}, 
{'id': 3, 'start': 42, 'end': 48, 'tag': 'fruit', 'word': 'grapes'}, 
{'id': 4, 'start': 50, 'end': 56, 'tag': 'name', 'word': 'Hanaan'}, 
{'id': 5, 'start': 67, 'end': 76, 'tag': 'name', 'word': 'Mr. Jones'}]}]
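As a quick sanity check (a minimal sketch against the new_data produced above), every updated span should slice back to its word:

content = new_data[0]['content']
for ann in new_data[0]['annotations']:
    # Each (start, end) must still select exactly the annotated word
    assert content[ann['start']:ann['end']] == ann['word'], ann
print("all offsets verified")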

