athina.ai
Research and guides for building safe and reliable AI products. Helping thousands of AI engineers build safer products.

• From Noise to Clarity: Unraveling the Adversarial Suffix of Large Language Model Attacks via Translation of Text Embeddings

• Ever: Mitigating Hallucination in Large Language Models through Real-Time Verification and Rectification
The EVER (Real-Time Verification and Rectification) framework is designed to dynamically mitigate hallucinations during text generation by verifying the accuracy and trustworthiness of each sentence before proceeding to the next.
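To make the idea concrete, here is a minimal sketch of that sentence-level generate → verify → rectify loop. It illustrates the concept only, not the paper's implementation; `generate_next_sentence`, `verify`, and `rectify` are hypothetical stand-ins for your own LLM and retrieval calls.

```python
from dataclasses import dataclass

@dataclass
class Verdict:
    supported: bool   # did the verifier find support for every claim?
    evidence: str     # evidence (or counter-evidence) the verifier found

# Hypothetical stand-ins: wire these up to your own LLM / retriever.
def generate_next_sentence(prompt: str, so_far: str) -> str | None:
    raise NotImplementedError  # draft the next sentence, or None when done

def verify(sentence: str, prompt: str, so_far: str) -> Verdict:
    raise NotImplementedError  # fact-check the sentence's claims

def rectify(sentence: str, evidence: str) -> str:
    raise NotImplementedError  # rewrite the sentence against the evidence

def ever_style_generate(prompt: str, max_sentences: int = 20) -> str:
    """Generate one sentence at a time, keeping it only once it passes verification."""
    output = ""
    for _ in range(max_sentences):
        sentence = generate_next_sentence(prompt, output)
        if sentence is None:                      # model signalled completion
            return output.strip()
        verdict = verify(sentence, prompt, output)
        if not verdict.supported:                 # hallucination caught in real time
            sentence = rectify(sentence, verdict.evidence)
        output += sentence + " "                  # only verified text is kept
    return output.strip()
```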
• How to evaluate your Llama Index query engine using Ragas evals + Athina AI
If you're using Llama Index to work with advanced retrieval strategies, you're going to need a great evaluation setup. Here's how you can use Athina's SDK to run Ragas evals on your Llama Index RAG pipeline (see the sketch after the tags below).
Tags: Evaluation, Athina, Prompt Engineering, Hallucinations
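Below is a minimal sketch of what that setup can look like with the athina-evals Python package. The class and method names (`RagasLoader`, `load_dict`, `RagasAnswerRelevancy`, `run_batch`) follow the pattern in Athina's docs at the time of writing and may differ in your SDK version, and the sample record is made up; adapt both to your Llama Index pipeline's actual queries, retrieved contexts, and responses.

```python
import os

from athina.keys import AthinaApiKey, OpenAiApiKey
from athina.loaders import RagasLoader
from athina.evals import RagasAnswerRelevancy

# Keys are read from the environment; the Athina key is only needed
# if you want results logged to the Athina dashboard.
OpenAiApiKey.set_key(os.environ["OPENAI_API_KEY"])
AthinaApiKey.set_key(os.environ["ATHINA_API_KEY"])

# Each record pairs a query with the contexts your Llama Index query
# engine retrieved and the response it generated (sample data below).
data = [
    {
        "query": "What is Athina?",
        "contexts": ["Athina is a platform for monitoring and evaluating LLM apps."],
        "response": "Athina is an LLM evaluation and monitoring platform.",
    },
]

dataset = RagasLoader().load_dict(data)

# Run the Ragas answer-relevancy eval over the dataset and inspect results.
results = RagasAnswerRelevancy(model="gpt-3.5-turbo").run_batch(data=dataset)
print(results.to_df())
```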
