Despite their richness and diversity, low-resource languages have not received as much attention from NLP researchers as high-resource languages like English and Spanish. However, recent progress in transfer learning, unsupervised learning, and data augmentation shows promise for improving NLP systems for low-resource languages. Leveraging the cross-lingual alignment that multilingual language models learn through joint training, this blog post explores how cross-lingual learning can benefit the understanding of Dravidian languages, specifically Telugu, Malayalam, and Tamil. It covers tasks related to question answering, transliteration, code-switching, and hate speech detection. This non-exhaustive survey aims to facilitate further research on these important and socially beneficial tasks.