BERT's main building block, the Transformer, comes from Attention Is All You Need.
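For reference, the Transformer's central operation is scaled dot-product attention, Attention(Q, K, V) = softmax(QKᵀ/√d_k)V. Below is a minimal PyTorch sketch of that formula; shapes and names are illustrative, not taken from any particular implementation:

```python
import torch
import torch.nn.functional as F

def scaled_dot_product_attention(q, k, v):
    """softmax(Q K^T / sqrt(d_k)) V, the core op of the Transformer
    from "Attention Is All You Need"."""
    d_k = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d_k ** 0.5  # [batch, seq, seq]
    return F.softmax(scores, dim=-1) @ v           # [batch, seq, d_v]

q = k = v = torch.randn(2, 5, 64)
print(scaled_dot_product_attention(q, k, v).shape)  # torch.Size([2, 5, 64])
```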
Effect of Pre-training Tasks
Effect of Model Size
Effect of Number of Training Steps
Feature-based Approach with BERT
Recent empirical improvements due to transfer learning with language models have demonstrated that rich, unsupervised pre-training is an integral part of many language understanding systems. In particular, these results enable even low-resource tasks to benefit from very deep unidirectional architectures. Our major contribution is further generalizing these findings to deep bidirectional architectures, allowing the same pre-trained model to successfully tackle a broad set of NLP tasks. While the empirical results are strong, in some cases surpassing human performance, important future work is to investigate the linguistic phenomena that may or may not be captured by BERT.
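The "deep bidirectional" conditioning comes from BERT's masked-LM pre-training task: some input tokens are hidden and must be predicted from context on both sides. A toy sketch of the example construction described in the paper (about 15% of positions become targets; of those, 80% are replaced with [MASK], 10% with a random token, and 10% are left unchanged); the vocabulary here is illustrative only:

```python
import random

MASK_TOKEN = "[MASK]"
TOY_VOCAB = ["the", "cat", "sat", "on", "mat", "dog"]  # illustrative only

def make_mlm_example(tokens, mask_prob=0.15):
    """Build one masked-LM training example: ~15% of positions become
    prediction targets; of those, 80% are replaced by [MASK], 10% by a
    random token, and 10% are left unchanged. The model must recover
    the original token using context on BOTH sides, which is what
    forces bidirectional representations."""
    inputs, labels = list(tokens), [None] * len(tokens)
    for i, token in enumerate(tokens):
        if random.random() < mask_prob:
            labels[i] = token  # loss is computed only at these positions
            r = random.random()
            if r < 0.8:
                inputs[i] = MASK_TOKEN
            elif r < 0.9:
                inputs[i] = random.choice(TOY_VOCAB)
            # else: keep the original token
    return inputs, labels

print(make_mlm_example("the cat sat on the mat".split()))
```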
| Resource | Notes |
| --- | --- |
| BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding | Original paper, 2018-10-11 |
| BERT-pytorch | Google AI 2018 BERT PyTorch implementation |
| [NLP] Google BERT explained in detail | Commentary by 李入魔 |
| How should the BERT model be evaluated? | Walkthrough of the paper's key ideas |
| A breakthrough NLP result: a detailed reading of the BERT model | Commentary by 章鱼小丸子 |
| Google's strongest NLP model BERT, explained | By AI科技评论 |
Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova (Submitted on 11 Oct 2018)
We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language representation models, BERT is designed to pre-train deep bidirectional representations by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT representations can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications. BERT is conceptually simple and empirically powerful. It obtains new state-of-the-art results on eleven natural language processing tasks, including pushing the GLUE benchmark to 80.4% (7.6% absolute improvement), MultiNLI accuracy to 86.7% (5.6% absolute improvement) and the SQuAD v1.1 question answering Test F1 to 93.2 (1.5% absolute improvement), outperforming human performance by 2.0%.
Subjects: Computation and Language (cs.CL). Cite as: arXiv:1810.04805 [cs.CL] (arXiv:1810.04805v1 for this version). Submitted by Jacob Devlin, Thu, 11 Oct 2018.
Google recently released this large-scale pre-trained language model based on the bidirectional Transformer. The pre-trained model can efficiently extract textual information and be applied to a wide variety of NLP tasks, and with it the authors set new state-of-the-art records on 11 NLP tasks. If this pre-training approach stands up in practice, then all kinds of NLP tasks will be able to achieve very good results with only a small amount of fine-tuning data, and BERT will become a backbone network in the truest sense.
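To make the "fine-tune with one additional output layer" recipe concrete, here is a minimal PyTorch sketch. `ToyEncoder` is a stand-in assumption for a pre-trained BERT encoder, not the released model's API; the point is that only the final linear layer is new:

```python
import torch
import torch.nn as nn

class ToyEncoder(nn.Module):
    """Stand-in (an assumption of this sketch) for a pre-trained BERT
    encoder: anything that maps token ids to a pooled [CLS]-style
    vector of size hidden_size."""
    def __init__(self, vocab_size=100, hidden_size=768):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_size)

    def forward(self, input_ids):
        return self.embed(input_ids).mean(dim=1)  # [batch, hidden_size]

class BertWithClassifierHead(nn.Module):
    """The fine-tuning recipe from the abstract: the pre-trained
    encoder plus a single new linear output layer, trained end to end."""
    def __init__(self, encoder, hidden_size=768, num_labels=3):
        super().__init__()
        self.encoder = encoder                                # pre-trained weights
        self.classifier = nn.Linear(hidden_size, num_labels)  # the only new layer

    def forward(self, input_ids, labels=None):
        pooled = self.encoder(input_ids)   # [batch, hidden_size]
        logits = self.classifier(pooled)   # [batch, num_labels]
        if labels is not None:
            return nn.functional.cross_entropy(logits, labels)
        return logits

model = BertWithClassifierHead(ToyEncoder())
input_ids = torch.randint(0, 100, (4, 16))
print(model(input_ids, labels=torch.tensor([0, 1, 2, 0])).item())
```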
BERT, or Bidirectional Encoder Representations from Transformers, is a new method of pre-training language representations which obtains state-of-the-art results on a wide array of Natural Language Processing (NLP) tasks.
Our academic paper, which describes BERT in detail and provides full results on a number of tasks, can be found here: arxiv.org/abs/1810.04805.
To give a few numbers, here are the results on the SQuAD v1.1 question answering task:
| SQuAD v1.1 Leaderboard (Oct 8th, 2018) | Test EM | Test F1 |
| --- | --- | --- |
| 1st Place Ensemble – BERT | 87.4 | 93.2 |
| 2nd Place Ensemble – nlnet | 86.0 | 91.7 |
| 1st Place Single Model – BERT | 85.1 | 91.8 |
| 2nd Place Single Model – nlnet | 83.5 | 90.1 |
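For readers unfamiliar with the two columns above: EM is exact string match after normalization, and F1 is token-level overlap between predicted and gold answers. A sketch following the normalization used by the official SQuAD evaluation script (this is standard SQuAD scoring, not anything BERT-specific):

```python
import re
import string
from collections import Counter

def normalize(text):
    """Lower-case, strip punctuation and articles, collapse whitespace,
    mirroring the normalization in the official SQuAD evaluation."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction, gold):
    """Test EM: 1.0 iff the normalized strings are identical."""
    return float(normalize(prediction) == normalize(gold))

def f1(prediction, gold):
    """Test F1: harmonic mean of token-level precision and recall."""
    pred_tokens = normalize(prediction).split()
    gold_tokens = normalize(gold).split()
    overlap = sum((Counter(pred_tokens) & Counter(gold_tokens)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

print(exact_match("The Transformer", "transformer"))          # 1.0
print(f1("deep bidirectional transformers", "transformers"))  # 0.5
```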
And several natural language inference tasks:
| System | MultiNLI | Question NLI | SWAG |
| --- | --- | --- | --- |
| BERT | 86.7 | 91.1 | 86.3 |
| OpenAI GPT (Prev. SOTA) | 82.2 | 88.1 | 75.0 |
Plus many other tasks.
Moreover, these results were all obtained with almost no task-specific neural network architecture design.
If you already know what BERT is and you just want to get started, you can download the pre-trained models and run state-of-the-art fine-tuning in only a few minutes.
Pre-training of Deep Bidirectional Transformers for Language Understanding
Keras implementation of BERT (Bidirectional Encoder Representations from Transformers)
PyTorch version of Google AI's BERT model with script to load Google's pre-trained models.
For natural language understanding (NLU) technology to be maximally useful, both practically and as a scientific object of study, it must be general: it must be able to process language in a way that is not exclusively tailored to any one specific task or dataset. In pursuit of this objective, we introduce the General Language Understanding Evaluation benchmark (GLUE), a tool for evaluating and analyzing the performance of models across a diverse range of existing NLU tasks. GLUE is model-agnostic, but it incentivizes sharing knowledge across tasks because certain tasks have very limited training data. We further provide a hand-crafted diagnostic test suite that enables detailed linguistic analysis of NLU models. We evaluate baselines based on current methods for multi-task and transfer learning and find that they do not immediately give substantial improvements over the aggregate performance of training a separate model per task, indicating room for improvement in developing general and robust NLU systems.
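GLUE's headline number is essentially an unweighted average over its tasks, which is why tasks with very limited training data have an outsized incentive to share knowledge with the rest. A trivial sketch of that aggregation (the task names and scores below are illustrative, not reported results):

```python
def glue_macro_average(task_scores):
    """GLUE-style headline number: an unweighted mean over per-task
    scores (tasks with multiple metrics are assumed to have been
    averaged into a single per-task score already)."""
    return sum(task_scores.values()) / len(task_scores)

# Hypothetical per-task scores, for illustration only.
scores = {"MNLI": 86.7, "QNLI": 91.1, "SST-2": 94.9, "CoLA": 60.5}
print(round(glue_macro_average(scores), 1))  # 83.3
```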