
Description

Large Language Models have taken the world by storm. But what are the real use cases? What are the challenges in productionizing them? In this event, you will hear from practitioners about how they are tackling challenges such as cost optimization, latency requirements, trust in outputs, and debugging. You will also get the opportunity to join workshops that will teach you how to set up your own use cases and skip the usual headaches.

Join the AI in Production Conference on February 15 and 22 here: https://home.mlops.community/home/events/ai-in-production-2024-02-15

_________________________________________________________


MLOps podcast #210: LLM Evaluation with Arize AI's Aparna Dhinakaran, Co-Founder and Chief Product Officer of Arize AI.

// Abstract

Dive into the complexities of large language model (LLM) evaluation, the role of the Phoenix evals library, and the importance of highly customized, task-specific evaluations in software applications. The conversation covers the nuances of fine-tuning, the trade-offs between open-source and private models, and the value of getting models into production early to surface bottlenecks. It also examines how to judge the relevance of retrieved context, the legitimacy of model outputs, and the operational advantages Phoenix offers for running LLM evaluations.
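
For readers who want to try the task-level evaluation approach discussed in the episode, below is a minimal sketch of running Phoenix's pre-tested retrieval-relevance eval as an LLM-as-a-judge classifier. This is an illustration, not code from the episode: it assumes the phoenix.evals interface described in the pre-tested evals docs linked under Related Links (llm_classify, OpenAIModel, and the RAG relevancy template); exact import paths, column names, and argument names may differ between Phoenix versions.

```python
# Minimal sketch (assumption: phoenix.evals API as documented in the
# pre-tested evals guide linked below; check your installed version).
import pandas as pd
from phoenix.evals import (
    OpenAIModel,
    RAG_RELEVANCY_PROMPT_TEMPLATE,
    RAG_RELEVANCY_PROMPT_RAILS_MAP,
    llm_classify,
)

# Each row pairs a user query with one retrieved document chunk.
# Column names must match the template's variables (see the docs).
df = pd.DataFrame(
    {
        "input": ["How do I reset my password?"],
        "reference": ["To reset your password, open Settings > Account > Reset."],
    }
)

judge = OpenAIModel(model="gpt-4", temperature=0.0)  # LLM used as the judge
rails = list(RAG_RELEVANCY_PROMPT_RAILS_MAP.values())  # categorical labels, not numeric scores

# Returns a dataframe with a categorical "label" (and optional "explanation") per row.
evals_df = llm_classify(
    dataframe=df,
    model=judge,
    template=RAG_RELEVANCY_PROMPT_TEMPLATE,
    rails=rails,
    provide_explanation=True,
)
print(evals_df[["label", "explanation"]])
```

Note the design choice echoed in the research thread linked below: the eval constrains the judge to a small set of categorical labels (rails) rather than asking for a numeric score.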

// Bio

Aparna Dhinakaran is the Co-Founder and Chief Product Officer at Arize AI, a pioneer and early leader in AI observability and LLM evaluation. A frequent speaker at top conferences and a thought leader in the space, Dhinakaran is a Forbes 30 Under 30 honoree. Before Arize, she was an ML engineer and leader at Uber, Apple, and TubeMogul (acquired by Adobe). During her time at Uber, she built several core ML infrastructure platforms, including Michelangelo. She holds a bachelor's degree in electrical engineering and computer science from UC Berkeley, where she published research with the Berkeley AI Research group. She is on a leave of absence from the Computer Vision Ph.D. program at Cornell University.

// MLOps Jobs board

jobs.mlops.community

// MLOps Swag/Merch

https://mlops-community.myshopify.com/

// Related Links

Arize-Phoenix: https://phoenix.arize.com/

Phoenix LLM task eval library: https://docs.arize.com/phoenix/llm-evals/running-pre-tested-evals

Aparna's recent piece on LLM evaluation: https://arize.com/blog-course/llm-evaluation-the-definitive-guide/

Thread on the difference between model and task LLM evals: https://twitter.com/aparnadhinak/status/1752763354320404488

Research thread on why numeric score evals are broken: https://twitter.com/aparnadhinak/status/1748368364395721128

--------------- ✌️Connect With Us ✌️ -------------

Join our Slack community: https://go.mlops.community/slack

Follow us on Twitter: @mlopscommunity

Sign up for the next meetup: https://go.mlops.community/register

Catch all episodes, blogs, newsletters, and more: https://mlops.community/

Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/

Connect with Aparna on LinkedIn: https://www.linkedin.com/in/aparnadhinakaran/

Timestamps:
[00:00] AI in Production Conference
[02:12] Aparna's preferred coffee
[02:26] Takeaways
[04:40] Shout out to the Arize team for sponsoring the MLOps Community since 2020!
[05:30] Please like, share, & subscribe to our MLOps channels!
[08:23] Evaluation space
[12:23] Chatbots prevent misinformation
[18:48] Balancing eval response and impact on speed
[26:16] GPT-4 excels; prompt iterations affect outcomes
[31:28] Multiple sub-steps and requiring visibility on application calls
[37:43] Classification for evaluation research
[41:08] Benchmarks on Hugging Face and Twitter reliability
[44:19] Power of observability and retrieval embeddings
[48:02] Tweaking data points
[50:28] Hot take
[53:35] Bottlenecks and errors from rapid production