We're rebooting the DC-NLP meetup with a program featuring two speakers: Shabnam Tafreshi will speak on Examining Gender and Race Bias in Emotion Classification Systems, and Georgetown Prof. Nathan Schneider will be our second speaker, subject to confirmation. Doors will open at 6 pm and our program will start at 6:30 pm.
Shabnam's précis: Automatic machine learning systems can inadvertently highlight and perpetuate inappropriate human biases. Recent studies have shown that multiple NLP systems trained on human-written texts learn human-like biases based on gender, ethnicity, race, or religion. This study measures the effect of such training on emotion classification and discusses mitigation methodologies to reduce such biases in emotion detection and classification systems.