The potential risks associated with false claims and grifting in the AI industry.
The resurgence of Intel in the foundry and product space, with a goal to become the second-largest foundry by 2030.
The discovery of a new technique called "Many-shot jailbreaking" that can be used to evade the safety guardrails of large language models (a minimal sketch of the idea follows this list).
An evaluation of large language models on long in-context learning tasks, revealing a notable gap in current LLMs' ability to process and understand long, context-rich sequences.
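
For context on the first topic: many-shot jailbreaking works by packing a long context window with many fabricated dialogue turns in which an assistant appears to comply with requests, then appending the real target question so the model continues the established pattern. The sketch below is a hypothetical illustration of that prompt construction only; the helper name, example data, and placeholder strings are assumptions, not code from the paper.

# Minimal sketch of many-shot prompt construction (illustrative only).
from typing import List, Tuple

def build_many_shot_prompt(faux_dialogues: List[Tuple[str, str]],
                           target_question: str) -> str:
    """Concatenate many fabricated user/assistant turns, then the target question."""
    shots = []
    for question, compliant_answer in faux_dialogues:
        shots.append(f"User: {question}\nAssistant: {compliant_answer}")
    # Leave the final turn open so the model continues the pattern it has seen.
    shots.append(f"User: {target_question}\nAssistant:")
    return "\n\n".join(shots)

if __name__ == "__main__":
    # Benign placeholder stand-ins for the faux dialogues; the technique
    # reportedly scales to hundreds of shots in a long context window.
    examples = [(f"Example question {i}?", f"Example compliant answer {i}.")
                for i in range(256)]
    prompt = build_many_shot_prompt(examples, "Target question goes here?")
    print(prompt[:500])  # preview the start of the very long prompt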
Contact: sergi@earkind.com
Timestamps:
00:34 Introduction
03:37 Many-shot jailbreaking
05:07 Is Intel Back? Foundry & Product Resurgence Measured
06:21 Fake sponsor
08:03 Logits of API-Protected LLMs Leak Proprietary Information
09:48 ViTamin: Designing Scalable Vision Models in the Vision-Language Era
11:24 Long-context LLMs Struggle with Long In-context Learning
13:09 Outro