Description

The potential risks associated with false claims and grifting in the AI industry.

The resurgence of Intel in the foundry and product space, with the goal of becoming the second-largest foundry by 2030.

The discovery of a new technique called "Many-shot jailbreaking" that can be used to evade the safety guardrails of large language models.

An evaluation of large language models on long in-context learning tasks, highlighting a notable gap in current LLMs' ability to process and understand long, context-rich sequences.

Contact: sergi@earkind.com

Timestamps:

00:34 Introduction

01:47 Google DeepMind's CEO says the massive funds flowing into AI bring with them loads of hype and a fair share of grifting

03:37 Many-shot jailbreaking

05:07 Is Intel Back? Foundry & Product Resurgence Measured

06:21 Fake sponsor

08:03 Logits of API-Protected LLMs Leak Proprietary Information

09:48 ViTamin: Designing Scalable Vision Models in the Vision-Language Era

11:24 Long-context LLMs Struggle with Long In-context Learning

13:09 Outro