safe.llm
Research and guides for building safe and reliable AI products. Helping thousands of AI engineers build safer products.
![From Noise to Clarity: Unraveling the Adversarial Suffix of Large Language Model Attacks via Translation of Text Embeddings](https://cdn.feather.blog?src=https%3A%2F%2Fwww.notion.so%2Fimage%2Fhttps%3A%252F%252Fprod-files-secure.s3.us-west-2.amazonaws.com%252F3068bd9e-92f6-4a05-b487-82947771da91%252F0868291a-0ff9-4143-ba76-7577ff430ca0%252FScreenshot_2024-04-16_at_4.53.01_PM.png%3Ftable%3Dblock%26id%3Dc6f886de-c322-404c-a5c5-938a1842484a%26cache%3Dv2&optimizer=image&quality=80&width=280)
From Noise to Clarity: Unraveling the Adversarial Suffix of Large Language Model Attacks via Translation of Text Embeddings
![Ever: Mitigating Hallucination in Large Language Models through Real-Time Verification and Rectification](https://cdn.feather.blog?src=https%3A%2F%2Fwww.notion.so%2Fimage%2Fhttps%3A%252F%252Fprod-files-secure.s3.us-west-2.amazonaws.com%252F3068bd9e-92f6-4a05-b487-82947771da91%252Ff9ac5f95-9f2f-4407-853b-8a5a037d219d%252Fever.png%3Ftable%3Dblock%26id%3Db9142a44-bd73-4965-9be0-4670fa9409b0%26cache%3Dv2&optimizer=image&quality=80&width=280)
Ever: Mitigating Hallucination in Large Language Models through Real-Time Verification and Rectification
The EVER (Real-Time Verification and Rectification) framework mitigates hallucinations dynamically during text generation: each sentence is verified for accuracy as it is produced, and unsupported claims are rectified before generation proceeds.
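A minimal sketch of the sentence-level verify-then-rectify loop described above. Everything here is illustrative, not the paper's implementation: `FACTS`, `verify`, and `rectify` are toy stand-ins for the real retrieval-backed verification step.

```python
# Toy knowledge base standing in for a real retrieval/verification backend.
FACTS = {"paris": "Paris is the capital of France."}

def verify(sentence, facts):
    """Toy check: a sentence is 'supported' only if it matches a known fact."""
    return sentence in facts.values()

def rectify(sentence, facts):
    """Toy correction: swap in a grounded fact that plausibly covers the same
    subject, or flag the sentence as unverifiable."""
    for fact in facts.values():
        if fact.split()[0] == sentence.split()[0]:
            return fact
    return "[unverifiable] " + sentence

def generate_with_verification(draft_sentences, facts):
    """EVER-style loop: validate each sentence before appending it to the
    output, so errors are caught before generation continues."""
    output = []
    for sentence in draft_sentences:
        if not verify(sentence, facts):
            sentence = rectify(sentence, facts)
        output.append(sentence)
    return output

draft = [
    "Paris is the capital of France.",
    "The Eiffel Tower is 5 meters tall.",
]
print(generate_with_verification(draft, FACTS))
```

In the actual framework, verification and rectification are themselves LLM- and retrieval-driven; the point of the sketch is only the control flow, where each sentence is checked and repaired before the next one is generated.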
How to evaluate your Llama Index query engine using Ragas evals + Athina AI
If you're using Llama Index to work with advanced retrieval strategies, you're going to need a great evaluation setup. Here's how you can use Athina's SDK to run Ragas evals on your Llama Index RAG pipeline.
Evaluation
Athina
Prompt Engineering
Hallucinations