Monitoring LLMs in Production using LangChain and WhyLabs
WhyLabs

Streamed live on Apr 2, 2024

Join this hands-on workshop to implement ML monitoring on OpenAI GPT models with LangChain and WhyLabs LangKit.

The ability to effectively monitor and manage large language models (LLMs) like GPT from OpenAI has become essential in the rapidly advancing field of AI. WhyLabs, in response to the growing demand, has created a powerful new tool, LangKit, to ensure LLM applications are monitored continuously and operated responsibly.

Join our workshop designed to equip you with the knowledge and skills to use LangKit with LangChain and OpenAI's GPT models. Guided by our team of experienced AI practitioners, you'll learn how to evaluate, troubleshoot, and monitor large language models more effectively.

This workshop will cover how to:
- Evaluate user interactions by monitoring prompts and responses
- Configure acceptable limits to flag issues such as malicious prompts, toxic responses, hallucinations, and jailbreak attempts
- Set up monitors and alerts to help prevent undesirable behavior

What you’ll need:
- A free WhyLabs account (https://whylabs.ai/free)
- A Google account (for saving a Google Colab)
- An OpenAI account (for interacting with GPT)

