
## Summary


Today on the show I am talking to Proofcheck's CTO, Alexandre Paris. Alex explains in great detail how they analyze digital book drafts to identify mistakes and passages in the document that don't follow the user's guidelines and preferences.




Alex explains how they fine-tune LLMs like Mistral's 7B model to achieve efficient resource usage, provide customization, and serve multiple use cases with a single base model and multiple LoRA adapters.
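To make the "single base model, multiple LoRA adapters" idea concrete, here is a minimal numpy sketch (purely illustrative, not Proofcheck's actual code; all names and dimensions are made up). One frozen weight matrix is shared, and each adapter only stores two small low-rank matrices, so serving many use cases adds little memory on top of the base model:

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, r = 64, 64, 4  # hypothetical layer sizes; LoRA rank r << d

W_base = rng.standard_normal((d_out, d_in))  # frozen base weight, shared by all adapters

def make_adapter(seed, alpha=8.0):
    """One LoRA adapter: low-rank factors A (r x d_in) and B (d_out x r)."""
    g = np.random.default_rng(seed)
    A = g.standard_normal((r, d_in)) * 0.01  # would be trained during fine-tuning
    B = np.zeros((d_out, r))                 # B starts at zero, so a fresh adapter is a no-op
    return {"A": A, "B": B, "scale": alpha / r}

# Two hypothetical use cases served from the same base model
adapters = {"grammar": make_adapter(1), "style": make_adapter(2)}

def forward(x, adapter=None):
    """y = W_base x, plus the low-rank update scale * B (A x) for the chosen adapter."""
    y = W_base @ x
    if adapter is not None:
        a = adapters[adapter]
        y += a["scale"] * (a["B"] @ (a["A"] @ x))
    return y

x = rng.standard_normal(d_in)
base_out = forward(x)
# With B initialised to zero, every adapter initially matches the base model exactly
assert np.allclose(base_out, forward(x, "grammar"))
```

In a real deployment the adapter matrices are the only parameters updated per use case, and frameworks such as Hugging Face PEFT handle the same bookkeeping for transformer layers.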




We talk about the challenges and capabilities of fine-tuning, how to do it, when to apply it, and when, for example, prompt engineering a foundation model is the better choice.




I think this episode is very interesting for listeners who are using LLMs in a specific domain. It shows how fine-tuning a base model on a selected high-quality corpus can produce solutions that outperform generic offerings from OpenAI or Google.




## AAIP Community


Join our Discord server to ask guests directly or discuss related topics with the community.


https://discord.gg/5Pj446VKNU




## TOC


00:00:00 Beginning


00:02:46 Guest Introduction


00:06:12 Proofcheck Intro


00:11:43 Document Processing Pipeline


00:26:46 Customization Options


00:29:49 LLM fine-tuning


00:42:08 Prompt-engineering vs. fine-tuning




### References


https://www.proofcheck.io/


Alexandre Paris - https://www.linkedin.com/in/alexandre-paris-92446b22/