Negative transfer continues to limit the benefits of multi-task learning (MTL) in harmful language detection, where related tasks must share representations without diluting task-specific nuances. We introduce Task Awareness (TA), a methodological framework that explicitly conditions MTL models on the task they must solve. TA is instantiated through two complementary mechanisms: Task-Aware Input (TAI), which augments textual inputs with natural-language task descriptions, and Task Embedding (TE), which learns task-specific transformations guided by a task identification vector. Together they enable the encoder to disentangle shared and task-dependent signals, reducing interference during joint optimisation. We integrate TA with BETO and AraBERT encoders and evaluate on six Spanish and Arabic datasets covering sexism, toxicity, offensive language, and hate speech. Across cross-validation and official train-test splits, TA consistently mitigates negative transfer, surpasses single-task and conventional MTL baselines, and yields new state-of-the-art scores on EXIST-2021, HatEval-2019, and HSArabic-2023. The proposed methodology therefore combines a principled architectural innovation with demonstrated practical gains for multilingual harmful language detection. The resources to reproduce our experiments are publicly available at https://github.com/AngelFelipeMP/Arabic-MultiTask-Learning.
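
As a rough illustration of how the two mechanisms could be wired together, the sketch below (PyTorch / Hugging Face style) prepends a natural-language task description to the input text (TAI) and adds a learned task embedding, indexed by a task identification vector, to the pooled encoder output (TE). The description wording, helper names (`build_tai_input`, `TaskAwareModel`), layer sizes, and the additive way of combining the task embedding with the shared representation are illustrative assumptions rather than the authors' exact implementation; the linked repository contains the actual code.

```python
# Minimal, illustrative sketch of the two Task Awareness mechanisms.
# All helper names, task descriptions, and the additive combination are
# assumptions for exposition, not the authors' exact implementation.
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModel

MODEL_NAME = "dccuchile/bert-base-spanish-wwm-cased"  # BETO; AraBERT is used analogously

# --- Task-Aware Input (TAI): prepend a natural-language task description ---
TASK_DESCRIPTIONS = {            # hypothetical wording of the descriptions
    "sexism": "Detect whether the text is sexist.",
    "toxicity": "Detect whether the text is toxic.",
}

def build_tai_input(task: str, text: str) -> str:
    """Concatenate the task description with the input text."""
    return f"{TASK_DESCRIPTIONS[task]} {text}"

# --- Task Embedding (TE): task-specific transformation of the shared encoding ---
class TaskAwareModel(nn.Module):
    def __init__(self, model_name: str, num_tasks: int, num_labels: int, hidden: int = 768):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        # Task identification vector -> learned task embedding
        self.task_embedding = nn.Embedding(num_tasks, hidden)
        self.classifier = nn.Linear(hidden, num_labels)

    def forward(self, input_ids, attention_mask, task_id):
        # Shared [CLS] representation from the multilingual encoder
        pooled = self.encoder(input_ids=input_ids,
                              attention_mask=attention_mask).last_hidden_state[:, 0]
        # Condition the shared representation on the task; additive modulation
        # is one simple choice, the paper may use a different transformation
        task_vec = self.task_embedding(task_id)
        return self.classifier(pooled + task_vec)

# Usage: tokenize the task-aware input and pass the task id alongside it
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = TaskAwareModel(MODEL_NAME, num_tasks=2, num_labels=2)
batch = tokenizer([build_tai_input("sexism", "Ejemplo de texto.")],
                  return_tensors="pt", padding=True, truncation=True)
logits = model(batch["input_ids"], batch["attention_mask"], torch.tensor([0]))
```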