Simple and Efficient Deep Learning for Natural Language Processing

Online

About this event

Large transformer-based neural networks such as BERT, GPT, and XLNet have recently achieved state-of-the-art results on many NLP tasks. The success of these models is based on transfer learning between a generic task (for example, language modeling) and a specific downstream task.
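
To make the transfer-learning recipe concrete, here is a minimal, illustrative sketch (not part of the session materials): a generic pre-trained checkpoint is fine-tuned on a downstream classification task. It assumes the Hugging Face transformers library; the bert-base-uncased checkpoint, the toy sentence, and the label are placeholder assumptions.

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Generic pre-trained checkpoint (trained via language modeling) reused for a
# specific downstream task: binary sentence classification.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)

# Toy downstream example; the sentence and label are illustrative only.
inputs = tokenizer("The battery life is excellent.", return_tensors="pt")
labels = torch.tensor([1])  # hypothetical label: 1 = positive

# One fine-tuning step: gradients flow into the pre-trained weights.
outputs = model(**inputs, labels=labels)
outputs.loss.backward()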

However, the use of these generic pre-trained models comes with costs and challenges: 1. The number of parameters and the depth of these models impose a heavy computational burden, making them energy-inefficient and challenging to deploy for inference. 2. Deploying these models in commercial environments often yields poor results, because such environments are usually dynamic and contain continuous domain shifts (e.g., new themes, new vocabulary, or new writing styles) between training and inference data.

In the first part of the session, we will present the latest methods for efficient NLP, such as compression, quantization, and distillation, with a few real-world use cases. In the second part of the session, we will focus on an Aspect-Based Sentiment Analysis application to demonstrate the challenges of low-resource environments, and propose a step toward closing the gap by embedding structural information.
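
As one hedged illustration of the efficiency methods mentioned above (a sketch, not the session's own code), post-training dynamic quantization in PyTorch converts the Linear layers of a pre-trained transformer to int8 for smaller, faster CPU inference; the bert-base-uncased checkpoint is an assumed placeholder.

import torch
from transformers import AutoModelForSequenceClassification

# Load a pre-trained transformer and switch to inference mode.
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")
model.eval()

# Post-training dynamic quantization: Linear layers are replaced with
# int8 counterparts, reducing model size and speeding up CPU inference.
quantized_model = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

# The quantized model is a drop-in replacement at inference time.
print(type(quantized_model))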
