Deep Learning Techniques for Natural Language Processing
What’s the deal with Natural Language Processing (NLP)? Language is ambiguous and relies on subtle cues for understanding, but with deep learning and clever use of linear algebra, calculus, and probability, we can do a lot. This session explores NLP: how we capture the semantics of words, model language with RNNs, classify text, and recognise speech. Learn from first principles how it all works.
Specifically, this session will explore:
- how NLP is a key component on the route to “Artificial General Intelligence”
- how old-fashioned symbolic AI is ill-suited to representing and analysing the ambiguous, noisy data that is human speech
- how and why there have been so many recent advances in NLP, particularly using statistical techniques (e.g. BERT, ELMo, and beyond)
This applied session will explore recent advances in NLP, particularly those using recurrent neural networks (RNNs), and show how they work, from the mathematical principles they’re based on through to the algorithms that result. We will also touch on the practical implications of NLP’s demands on CPUs and GPUs.
This session is for everyone who wants a foundation in NLP but hasn’t had the time to build one yet!