
Natural Language Processing Workload Optimization Using Container Based Deployment

BERT (Bidirectional Encoder Representations from Transformers) is a pre-training method for deep learning networks developed by Google AI researchers and first released in late 2018. It pre-trains deep bidirectional representations from unlabeled text by jointly conditioning on both the left and the right context, which allows models for a broad range of NLP tasks to be fine-tuned with just one additional output layer. BERT has quickly become one of the most popular methods for natural language processing (NLP) tasks such as text classification, translation, and sentiment analysis. The GLUE Benchmark is a collection of resources for training, evaluating, and analyzing natural language understanding (NLU) systems, with the ultimate goal of stimulating research into general and reliable NLU systems. The aim of this research is to fine-tune the BERT model so that it can perform GLUE tasks in NLP workloads (Dewangan et al. in IET Commun 15:1869–1882, 2021 [1]): CoLA (Corpus of Linguistic Acceptability), SST-2 (Stanford Sentiment Treebank), MRPC (Microsoft Research Paraphrase Corpus), QQP (Quora Question Pairs), MNLI (Multi-Genre Natural Language Inference), QNLI (Question-answering Natural Language Inference), RTE (Recognizing Textual Entailment), and WNLI (Winograd Natural Language Inference). These are all important NLP tasks that have been used to evaluate a variety of models. So far, the results have been promising: BERT has achieved state-of-the-art performance on many GLUE tasks when compared to other pre-trained models such as XLNet and GPT-2, which suggests that BERT may be a good choice for applications where natural language understanding is required.
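The fine-tuning workflow described above can be sketched as follows. This is a minimal illustration, not the paper's exact setup: the use of the Hugging Face transformers and datasets libraries, the bert-base-uncased checkpoint, the output path, and the hyperparameters shown are all assumptions, with CoLA chosen as a representative GLUE task.

```python
# Minimal sketch of fine-tuning BERT on one GLUE task (CoLA).
# Assumes the Hugging Face transformers and datasets libraries;
# checkpoint and hyperparameters are illustrative, not the paper's.
from datasets import load_dataset
from transformers import (
    AutoTokenizer,
    AutoModelForSequenceClassification,
    TrainingArguments,
    Trainer,
)

# Load the CoLA subset of GLUE; other tasks ("sst2", "mrpc", ...) load the same way.
dataset = load_dataset("glue", "cola")
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def tokenize(batch):
    # CoLA is a single-sentence task; pair tasks such as MRPC would pass two columns.
    return tokenizer(
        batch["sentence"], truncation=True, padding="max_length", max_length=128
    )

encoded = dataset.map(tokenize, batched=True)

# BERT plus a single classification head -- the "one additional output layer"
# the abstract refers to. CoLA has two labels (acceptable / unacceptable).
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)

args = TrainingArguments(
    output_dir="bert-cola",          # hypothetical output directory
    num_train_epochs=3,
    per_device_train_batch_size=32,
    learning_rate=2e-5,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=encoded["train"],
    eval_dataset=encoded["validation"],
)
trainer.train()
```

A script like this could then be packaged into a container image for the deployment side of the study, although the abstract does not detail that configuration.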

Keywords

  • BERT

  • NLP

  • Language modelling

  • Transfer learning

  • Natural language processing

  • Containerization



