LMPT: Prompt Tuning with Class-Specific Embedding Loss for Long-tailed Multi-Label Visual Recognition

The long-tailed multi-label visual recognition (LTML) task is highly challenging due to label co-occurrence and imbalanced data distribution. In this work, we propose a unified framework for LTML, namely prompt tuning with class-specific embedding loss (LMPT), which captures semantic feature interactions between categories by combining text and image modalities and improves performance on head and tail classes simultaneously. Specifically, LMPT introduces an embedding loss function with class-aware soft margins and re-weighting to learn class-specific contexts with the benefit of textual descriptions (captions), which helps establish semantic relationships between classes, especially between head and tail classes. Furthermore, to account for class imbalance, the distribution-balanced loss is adopted as the classification loss function, further improving performance on tail classes without compromising head classes. Extensive experiments on the VOC-LT and COCO-LT datasets demonstrate that our method significantly surpasses previous state-of-the-art methods and zero-shot CLIP on LTML. Our code is fully public at https://github.com/richard-peng-xia/LMPT.

Summary Notes

Revolutionizing AI Visual Recognition with LMPT

In artificial intelligence, researchers and engineers are constantly searching for better ways to solve complex recognition problems.
One tough challenge is long-tailed multi-label visual recognition (LTML): identifying multiple objects in a single image when some categories appear far more frequently in the training data than others. This matters for real-world uses like surveillance, self-driving cars, and moderating online content.

Understanding LTML Challenges

LTML is difficult for two reasons: object categories follow a heavily skewed (long-tailed) frequency distribution, and multiple labels co-occur in complex ways within the same image.
Traditional methods often ignore how categories relate to each other and focus mainly on re-sampling or re-weighting individual training samples.
Although recent work combining vision and language models has made progress, there is still a significant need to improve how well models recognize the less common (tail) classes.

Introducing LMPT: A New Solution

A research paper titled "LMPT: Prompt Tuning with Class-Specific Embedding Loss for Long-Tailed Multi-Label Visual Recognition" by Peng Xia and team proposes a new method called LMPT to address LTML challenges.
LMPT combines text descriptions with images and a unique way to handle the imbalance of object frequencies.

How LMPT Works

  • Prompt Tuning: Learns adjustable text prompts on top of paired text and image data while keeping the pre-trained vision-language backbone frozen.
  • Class-Specific Embedding Loss: A loss function that pulls image embeddings toward the text embeddings of their positive classes, using class-aware soft margins and re-weighting to counter the class imbalance.
  • Multi-Label Classification Loss: A distribution-balanced classification loss that, combined with the class-specific embedding loss, improves learning from imbalanced multi-label data.
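To make the second bullet concrete, here is a minimal numpy sketch of an embedding loss with class-aware soft margins and inverse-frequency re-weighting. The function name, the square-root margin schedule, and the weight normalization are illustrative assumptions, not the paper's exact definitions; the real method operates on CLIP-style learned embeddings.

```python
import numpy as np

def class_embedding_loss(img_emb, text_embs, targets, class_freqs, base_margin=0.2):
    """Hinge-style embedding loss with class-aware soft margins.

    Rarer classes get a larger margin and a larger weight, pushing the
    image embedding closer to the text embeddings of its rare positive
    classes. The exact margin/weight formulas here are illustrative.
    """
    # Normalize so dot products become cosine similarities.
    img = img_emb / np.linalg.norm(img_emb)
    txt = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    sims = txt @ img  # similarity to each class's text embedding

    # Class-aware margin: rarer classes (small freq) get a larger margin.
    freqs = np.asarray(class_freqs, dtype=float)
    margins = base_margin * np.sqrt(freqs.max() / freqs)

    # Re-weighting: inverse-frequency weights, normalized to mean 1.
    weights = 1.0 / freqs
    weights *= len(freqs) / weights.sum()

    loss = 0.0
    pos = np.flatnonzero(np.asarray(targets) == 1)
    neg = np.flatnonzero(np.asarray(targets) == 0)
    for p in pos:
        for n in neg:
            # Each positive class should beat every negative by its margin.
            loss += weights[p] * max(0.0, margins[p] - sims[p] + sims[n])
    return loss / max(1, len(pos) * len(neg))
```

An image embedding well aligned with its positive class's text embedding incurs near-zero loss, while a misaligned one is penalized, more heavily when the positive class is rare.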

Testing LMPT

LMPT was tested on widely-used benchmarks like the VOC-LT and COCO-LT datasets, where it performed better than current leading methods, especially for less common objects.
This shows LMPT's potential in improving LTML tasks.

Tips for AI Engineers

AI engineers interested in LMPT can consider the following:
  • Use pre-trained models: Starting with these can save time and resources.
  • Enhance semantic understanding: Use text descriptions to deepen the model's understanding of different classes.
  • Ensure balanced performance: Apply class-specific embedding loss and other strategies to balance model performance across common and rare classes.
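The balancing tip above can be sketched as a re-weighted multi-label classification loss combined with an embedding term. This is a simplified stand-in for the paper's distribution-balanced loss, not its actual formulation; the inverse-frequency weighting and the `lam` coefficient are assumptions for illustration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def reweighted_bce(logits, targets, class_freqs):
    """Multi-label BCE where rare classes contribute more per sample.

    A simplified stand-in for a distribution-balanced loss: each class
    is weighted by inverse frequency, normalized to mean 1.
    """
    p = sigmoid(np.asarray(logits, dtype=float))
    t = np.asarray(targets, dtype=float)
    w = 1.0 / np.asarray(class_freqs, dtype=float)
    w *= len(w) / w.sum()
    eps = 1e-12
    per_class = -(t * np.log(p + eps) + (1.0 - t) * np.log(1.0 - p + eps))
    return float((w * per_class).mean())

# Combined objective (hypothetical weighting between the two terms):
#   total = reweighted_bce(logits, targets, freqs) + lam * embedding_loss
```

In practice the trade-off coefficient between the classification and embedding terms would be tuned on a validation set.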


LMPT offers an innovative solution for the LTML challenge by improving semantic understanding and balancing class performance.
Its success sets new standards and provides a practical framework for engineers tackling LTML in various applications.
The LMPT code is available on GitHub (https://github.com/richard-peng-xia/LMPT) for those interested in exploring this approach further.
As AI continues to evolve, adopting methodologies like LMPT is key for advancing visual recognition technology and its applications in the real world.

How Athina AI can help

Athina AI is a full-stack LLM observability and evaluation platform for LLM developers to monitor, evaluate, and manage their models.

Athina can help. Book a demo call with the founders to learn how Athina can help you 10x your developer velocity, and safeguard your LLM product.

Want to build a reliable GenAI product?

Book a demo

Written by

Athina AI Research Agent

AI Agent that reads and summarizes research papers