Description

Welcome to today's AI update from Vertica. I'm your host, bringing you the latest news in artificial intelligence. Our main story today focuses on LinkedIn and its recent decision to train AI models on user data without explicit consent, a move that has stirred significant concerns around privacy and data usage. According to reports from 404Media and TechCrunch, LinkedIn introduced a new privacy setting and opt-out form while updating its privacy policy. [Source](https://www.theverge.com/2024/9/18/24248471/linkedin-ai-training-user-accounts-data-opt-in) The change allows LinkedIn to use data from its platform to improve and train AI models. Users can opt out by navigating to the Data Privacy settings and switching off the 'Data for Generative AI Improvement' toggle, but doing so does not affect data that has already been used for training. The company says it uses 'privacy enhancing technologies' to remove personal data from training datasets, and it clarified that data from users in the EU, EEA, and Switzerland is not used for AI model training. Notably, this step by LinkedIn follows similar actions by Meta, sparking a broader debate on ethical AI training and data privacy. Users who wish to opt out of other machine learning applications can complete the LinkedIn Data Processing Objection Form. The unfolding situation at LinkedIn and Meta highlights critical questions about data-use practices in AI and the need for clear regulatory frameworks to protect user data and trust. Stay tuned as we continue to monitor this evolving story.

Moving on to our other key stories: OpenAI's Safety and Security Committee is now operating as an independent oversight entity. This change allows the committee to provide unbiased recommendations that could significantly influence future AI development policies and regulatory compliance. [Source](https://qz.com/openais-safety-security-committee-goes-independent-1851651687)

Next up, Google is enhancing its search engine's ability to detect AI-manipulated images. Collaborating with the Coalition for Content Provenance and Authenticity (C2PA), Google aims to help establish a global standard for identifying the origins of AI-edited images. By implementing content credentials based on the C2PA's new 2.1 standard in products like Google Search and, eventually, Google Ads, the initiative seeks to improve transparency and trust in online content. [Source](https://mashable.com/article/google-c2pa-ai-content-labeling)

Now, for some other noteworthy stories: HubSpot has launched Breeze, an all-in-one AI solution for marketing, sales, and service with several new features. [Source](https://martech.org/hubspot-unveils-breeze-its-complete-ai-solution/) And Edera is building a Kubernetes and AI security solution from the ground up, which is crucial given the rise of cloud-based applications. [Source](https://techcrunch.com/2024/09/18/edera-is-building-a-better-kubernetes-and-ai-security-solution-from-the-ground-up/)

That's all for today's update. Thanks for listening, and we'll be back tomorrow with more AI news.