Athina AI

Research Paper

You Only Prompt Once: On the Capabilities of Prompt Learning on Large Language Models to Tackle Toxic Content
Benchmarking and Defending Against Indirect Prompt Injection Attacks on Large Language Models
Reflexion: Language Agents with Verbal Reinforcement Learning
Prompt Engineering • May 27, 2025

LLM Critics Help Catch LLM Bugs
Evaluation • Jun 28, 2024

LMPT: Prompt Tuning with Class-Specific Embedding Loss for Long-tailed Multi-Label Visual Recognition
Prompt Engineering • Jun 18, 2024

Soft Prompt Tuning for Augmenting Dense Retrieval with Large Language Models
Prompt Engineering • Jun 17, 2024

MedPromptExtract (Medical Data Extraction Tool): Anonymization and Hi-fidelity Automated data extraction using NLP and prompt engineering
Prompt Engineering • Jun 6, 2024

Assessing Prompt Injection Risks in 200+ Custom GPTs
Safety • May 25, 2024

Reprompting: Automated Chain-of-Thought Prompt Inference Through Gibbs Sampling
Reasoning • May 23, 2024

RNNs are not Transformers (Yet): The Key Bottleneck on In-context Retrieval
RAG • May 10, 2024

State of What Art? A Call for Multi-Prompt LLM Evaluation
Evaluation • May 6, 2024

Prompt Cache: Modular Attention Reuse for Low-Latency Inference
Evaluation • Apr 25, 2024

