This kind of bias arises when a model's performance varies widely across some important feature of the data it will be run on. Often, the source of this bias is a training dataset that is not diverse enough.
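One common way to surface this kind of bias is disaggregated evaluation: computing a metric separately for each subgroup rather than a single aggregate number. A minimal sketch with hypothetical toy data (the function name and the data are illustrative, not from the lecture):

```python
# Disaggregated evaluation: compute accuracy per subgroup instead of
# one aggregate score, so large performance gaps become visible.
from collections import defaultdict

def accuracy_by_group(y_true, y_pred, groups):
    """Per-group accuracy for parallel lists of true labels,
    predictions, and subgroup labels (hypothetical data)."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for t, p, g in zip(y_true, y_pred, groups):
        total[g] += 1
        correct[g] += int(t == p)
    return {g: correct[g] / total[g] for g in total}

# Toy data: overall accuracy is 62.5%, which hides the fact that the
# model is perfect on group "A" but only 25% accurate on group "B".
y_true = [1, 0, 1, 0, 1, 0, 1, 0]
y_pred = [1, 0, 1, 0, 1, 1, 0, 1]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

print(accuracy_by_group(y_true, y_pred, groups))  # → {'A': 1.0, 'B': 0.25}
```

Reporting the per-group numbers side by side, rather than the aggregate alone, is exactly the kind of analysis the Gender Shades work performed across gender and skin tone.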
Relevant part of lecture
supplementary material
Gender Shades - a great website popularizing the work of Joy Buolamwini & Timnit Gebru analyzing the performance of facial analysis algorithms across gender and skin tone.