Project author: mrm8488

Project description: Spanish RoBERTa
Project repo: git://github.com/mrm8488/RuPERTa-base.git
Created: 2020-08-06T13:54:49Z
Project community: https://github.com/mrm8488/RuPERTa-base

" class="reference-link">RuPERTa-base: the Spanish RoBERTa 🎃spain flag

RuPERTa-base (uncased) is a RoBERTa model trained on an uncased version of a big Spanish corpus.
RoBERTa iterates on BERT’s pretraining procedure, including training the model longer, with bigger batches over more data; removing the next sentence prediction objective; training on longer sequences; and dynamically changing the masking pattern applied to the training data.
The architecture is the same as roberta-base:

roberta.base: RoBERTa using the BERT-base architecture (125M params)
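The dynamic masking mentioned above can be illustrated in a few lines: unlike BERT's static masking, which is fixed once at preprocessing time, RoBERTa re-samples the masked positions each time a sentence is seen during training. A minimal sketch of the idea (the `dynamic_mask` helper is hypothetical, not part of any library):

```python
import random

def dynamic_mask(tokens, mask_token="<mask>", prob=0.15):
    # Re-sample mask positions on every call, so each epoch sees a
    # different masking of the same sentence (RoBERTa-style dynamic
    # masking, as opposed to BERT's single static masking).
    return [mask_token if random.random() < prob else t for t in tokens]

tokens = "españa es un país muy importante en la ue".split()
print(dynamic_mask(tokens))
print(dynamic_mask(tokens))  # usually masks different positions than the first call
```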

Benchmarks 🧾

WIP (I'm still working on it) 🚧

| Task/Dataset | F1 | Precision | Recall | Fine-tuned model | Reproduce it |
| --- | --- | --- | --- | --- | --- |
| POS | 97.39 | 97.47 | 97.32 | RuPERTa-base-finetuned-pos | Open In Colab |
| NER | 77.55 | 75.53 | 79.68 | RuPERTa-base-finetuned-ner | |
| SQUAD-es v1 | to-do | | | RuPERTa-base-finetuned-squadv1 | |
| SQUAD-es v2 | to-do | | | RuPERTa-base-finetuned-squadv2 | |

Model in action 🔨

Usage for POS and NER 🏷

```python
import torch
from transformers import AutoModelForTokenClassification, AutoTokenizer

id2label = {
    "0": "B-LOC",
    "1": "B-MISC",
    "2": "B-ORG",
    "3": "B-PER",
    "4": "I-LOC",
    "5": "I-MISC",
    "6": "I-ORG",
    "7": "I-PER",
    "8": "O"
}

tokenizer = AutoTokenizer.from_pretrained('mrm8488/RuPERTa-base-finetuned-ner')
model = AutoModelForTokenClassification.from_pretrained('mrm8488/RuPERTa-base-finetuned-ner')

text = "Julien, CEO de HF, nació en Francia."
input_ids = torch.tensor(tokenizer.encode(text)).unsqueeze(0)

outputs = model(input_ids)
last_hidden_states = outputs[0]  # per-token label logits

# Skip the BOS token, then map each word to its argmax label
for m in last_hidden_states:
    for index, n in enumerate(m):
        if 0 < index <= len(text.split(" ")):
            print(text.split(" ")[index - 1] + ": " + id2label[str(torch.argmax(n).item())])

# Output:
'''
Julien,: I-PER
CEO: O
de: O
HF,: B-ORG
nació: I-PER
en: I-PER
Francia.: I-LOC
'''
```
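The decoding step of the loop above, taken in isolation: pick the argmax over each token's label logits and map the id through `id2label`. A self-contained sketch with dummy logits standing in for the model output, so no model download is needed:

```python
import torch

# Same label map as in the NER example above
id2label = {"0": "B-LOC", "1": "B-MISC", "2": "B-ORG", "3": "B-PER",
            "4": "I-LOC", "5": "I-MISC", "6": "I-ORG", "7": "I-PER", "8": "O"}

# Dummy per-token logits (9 label scores per token)
logits = torch.tensor([[0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 2.0],   # highest at index 8 -> "O"
                       [3.0, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1]])  # highest at index 0 -> "B-LOC"

labels = [id2label[str(torch.argmax(row).item())] for row in logits]
print(labels)  # ['O', 'B-LOC']
```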

For POS, just change the `id2label` dictionary and the model path to `mrm8488/RuPERTa-base-finetuned-pos`.

Fast usage for LM with pipelines 🧪

```python
from transformers import AutoModelWithLMHead, AutoTokenizer, pipeline

model = AutoModelWithLMHead.from_pretrained('mrm8488/RuPERTa-base')
tokenizer = AutoTokenizer.from_pretrained("mrm8488/RuPERTa-base", do_lower_case=True)

pipeline_fill_mask = pipeline("fill-mask", model=model, tokenizer=tokenizer)
pipeline_fill_mask("España es un país muy <mask> en la UE")
```

```json
[
  {
    "score": 0.1814306527376175,
    "sequence": "<s> españa es un país muy importante en la ue</s>",
    "token": 1560
  },
  {
    "score": 0.024842597544193268,
    "sequence": "<s> españa es un país muy fuerte en la ue</s>",
    "token": 2854
  },
  {
    "score": 0.02473250962793827,
    "sequence": "<s> españa es un país muy pequeño en la ue</s>",
    "token": 2948
  },
  {
    "score": 0.023991240188479424,
    "sequence": "<s> españa es un país muy antiguo en la ue</s>",
    "token": 5240
  },
  {
    "score": 0.0215945765376091,
    "sequence": "<s> españa es un país muy popular en la ue</s>",
    "token": 5782
  }
]
```
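The fill-mask pipeline returns a list of candidate dicts like the one above, each with a `score`, a `sequence`, and a `token` id. A small helper for pulling out the highest-scoring filled sentence from such a result (the `top_prediction` function is a hypothetical convenience, not part of transformers):

```python
def top_prediction(results):
    # Candidates carry a probability-like "score"; take the best one
    best = max(results, key=lambda r: r["score"])
    # Strip the <s>...</s> wrapper to recover the plain filled sentence
    return best["sequence"].replace("<s>", "").replace("</s>", "").strip()

# Trimmed stand-in for a pipeline result
sample = [
    {"score": 0.1814, "sequence": "<s> españa es un país muy importante en la ue</s>", "token": 1560},
    {"score": 0.0248, "sequence": "<s> españa es un país muy fuerte en la ue</s>", "token": 2854},
]
print(top_prediction(sample))  # españa es un país muy importante en la ue
```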

Acknowledgments

I thank the 🤗/transformers team for answering my doubts and Google for helping me with the TensorFlow Research Cloud program.

Created by Manuel Romero/@mrm8488

Made with ♥ in Spain