DeepEval
About DeepEval
DeepEval is an open-source LLM evaluation framework with 14+ evaluation metrics, including hallucination, answer relevancy, and faithfulness. Its pytest-style testing makes AI quality evaluation part of the development workflow.
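The pytest-style pattern can be sketched with a minimal stand-in. DeepEval's real API provides `LLMTestCase`, metrics such as `AnswerRelevancyMetric`, and `assert_test`, which call out to an LLM judge; the toy keyword-overlap "metric" below is purely illustrative so the example runs without any model or API key.

```python
# Minimal sketch of the pytest-style evaluation pattern (stand-in metric;
# DeepEval's real metrics score outputs with an LLM judge instead).
from dataclasses import dataclass


@dataclass
class LLMTestCase:
    input: str
    actual_output: str


class ToyRelevancyMetric:
    """Scores word overlap between question and answer (illustrative only)."""

    def __init__(self, threshold: float = 0.5):
        self.threshold = threshold
        self.score = 0.0

    def measure(self, case: LLMTestCase) -> float:
        question_words = set(case.input.lower().split())
        answer_words = set(case.actual_output.lower().split())
        self.score = len(question_words & answer_words) / max(len(question_words), 1)
        return self.score


def assert_test(case: LLMTestCase, metrics) -> None:
    # Each metric scores the test case; any score below threshold fails the test.
    for metric in metrics:
        score = metric.measure(case)
        assert score >= metric.threshold, (
            f"{type(metric).__name__} scored {score:.2f}, "
            f"below threshold {metric.threshold}"
        )


def test_answer_relevancy():
    case = LLMTestCase(
        input="what are the store hours",
        actual_output="the store hours are 9am to 5pm",
    )
    assert_test(case, [ToyRelevancyMetric(threshold=0.5)])


test_answer_relevancy()
```

Because the tests are plain assertion-based functions, they drop into an existing pytest suite and can gate CI on evaluation thresholds just like ordinary unit tests.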
Use Cases
Scoring LLM outputs for hallucination, answer relevancy, and faithfulness
Running evaluations as pytest-style tests in the development workflow
Gating CI pipelines on evaluation metric thresholds
Pros & Cons
Pros
Completely free to use
Developer API for custom integrations
Open-source and self-hostable
Cons
LLM-based metrics typically rely on an external judge model, adding API cost
Scores from LLM-based metrics can vary between runs
Metric results still benefit from human review