Question 5/5 fast.ai v4 lecture 6

Doing a matrix lookup through multiplication with one-hot encoded vectors is computationally expensive. What is the name of a computational shortcut?

Answer

Embeddings, implemented via an embedding layer! There is nothing special going on here: we preserve the ability to backpropagate error and update the embedding rows relevant to an example, but the operation is implemented as a direct array lookup, which is far more efficient than performing matrix multiplication with one-hot encoded vectors.
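A minimal NumPy sketch of the equivalence (the matrix shapes and values are made up for illustration): multiplying a one-hot vector by an embedding matrix selects exactly one row, so indexing into the matrix directly gives the same result without the wasted multiply-adds.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab_size, emb_dim = 10, 4
# Hypothetical embedding matrix: one learnable row per vocabulary item
embedding = rng.normal(size=(vocab_size, emb_dim))

idx = 3
one_hot = np.zeros(vocab_size)
one_hot[idx] = 1.0

# The expensive way: multiply the one-hot vector by the whole matrix
via_matmul = one_hot @ embedding

# The shortcut an embedding layer uses: index the row directly
via_lookup = embedding[idx]

assert np.allclose(via_matmul, via_lookup)
```

The lookup does O(emb_dim) work instead of O(vocab_size × emb_dim), which matters when the vocabulary has tens of thousands of entries.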

Relevant part of lecture

supplementary material

Learning distributed representations of concepts by Geoffrey Hinton - a terrific paper introducing the concept of latent factors / embeddings.
Peek under the hood - the embedding layer implemented from scratch by Jeremy Howard