Question 7/7 v4 lecture 8

What is weight tying?


It is a technique where we set the input embedding matrix and the hidden-to-output (softmax) projection matrix to be equal. They are the same object in memory, the same tensor, playing both roles. The hypothesis is that in a language model, converting hidden activations into output words and converting input words into embeddings are conceptually the same operation viewed in opposite directions, so the two tasks are fundamentally similar. Empirically, tying the weights does allow the model to train better, and it also reduces the parameter count, since one matrix serves both roles.
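A minimal sketch of how this looks in PyTorch (the model here is a hypothetical toy GRU language model; the essential line is the assignment that makes the output projection and the embedding share one tensor):

```python
import torch
import torch.nn as nn

class TiedLM(nn.Module):
    """Toy language model illustrating weight tying (sketch, not a full LM)."""
    def __init__(self, vocab_size=100, d_model=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)        # word -> activation
        self.rnn = nn.GRU(d_model, d_model, batch_first=True)
        self.out = nn.Linear(d_model, vocab_size, bias=False) # activation -> word logits
        # Weight tying: both matrices have shape (vocab_size, d_model),
        # so the output projection can reuse the embedding tensor directly.
        self.out.weight = self.embed.weight

    def forward(self, tokens):
        h, _ = self.rnn(self.embed(tokens))
        return self.out(h)

model = TiedLM()
# The two modules now hold the very same tensor in memory:
assert model.out.weight is model.embed.weight
```

Because the two parameters are one object, any gradient update to the output projection also updates the input embeddings, and vice versa.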

Relevant part of lecture