
Description

As generative AI and large language models (LLMs) grow in power and capability, how are financial professionals dealing with hallucinations and accuracy in decision-making? That is the focus of this episode. Ryan Roser, Head of AI and Machine Learning, and Collins Dunn, Senior Director of Product Development, discuss how LLMs work, how firms can mitigate accuracy risks, and how retrieval-augmented generation (RAG) can stem hallucinations and deliver useful, accurate information. They also cover DeepSeek and open-source models, agents, emerging LLM capabilities, privacy, responsible AI, temperature parameters, and GenAI system design.
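
For listeners unfamiliar with RAG, here is a minimal, illustrative sketch of the pattern discussed in the episode: retrieve relevant source text, insert it into the prompt, and generate at a low temperature. The keyword-overlap retriever and the `call_llm` function are hypothetical placeholders for illustration only, not FactSet's implementation or any specific vendor API.

```python
# A minimal retrieval-augmented generation (RAG) sketch. The retriever is a
# toy keyword-overlap scorer, and call_llm is a hypothetical stand-in for any
# chat-completion API.

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query; return top k."""
    query_terms = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda doc: len(query_terms & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def call_llm(prompt: str, temperature: float = 0.0) -> str:
    """Hypothetical placeholder for a chat-completion call. A low temperature
    makes sampling more deterministic, which is typical for factual Q&A."""
    return f"[model response to {len(prompt)} chars of prompt at T={temperature}]"

def answer_with_rag(query: str, documents: list[str]) -> str:
    # Ground the model in retrieved text so it answers from sources rather
    # than from parametric memory alone, reducing hallucination risk.
    context = "\n".join(retrieve(query, documents))
    prompt = (
        "Answer using only the context below. "
        "If the context is insufficient, say so.\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )
    return call_llm(prompt, temperature=0.0)

if __name__ == "__main__":
    docs = [
        "Q4 revenue rose 8% year over year, driven by subscription growth.",
        "The company opened a new office in Toronto in 2024.",
    ]
    print(answer_with_rag("How did Q4 revenue change?", docs))
```

The key design choice is that the model is instructed to answer only from retrieved context and to admit when that context is insufficient, which is how RAG systems trade open-ended fluency for verifiable accuracy.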

This episode was recorded on February 20, 2025.

The FactSet Insight podcast is for informational purposes only. The information contained within is not legal, tax, or investment advice. FactSet does not endorse or recommend any investments and assumes no liability for any consequence relating directly or indirectly to any action or inaction taken based on the information contained in this podcast episode. The views shared by third-party contributors do not necessarily reflect the opinion of FactSet.

© FactSet Research Systems Inc. All rights reserved.