Description

Hey everyone, in this episode I talk about 3 very important models that use contrastive learning: CLIP, SigLIP, and JinaCLIP. They are text-image embedding models that let us, for example, do retrieval over text and images at the same time.
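The core idea behind these models is simple: embed text and images into the same space, normalize, and train with a symmetric contrastive loss so matching pairs score higher than mismatched ones. Here is a minimal NumPy toy of a CLIP-style loss (random embeddings and sizes are made up for illustration; the linked notebook has the real walkthrough):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy batch: 4 paired text/image embeddings (dimensions chosen arbitrarily).
text_emb = rng.normal(size=(4, 8))
image_emb = rng.normal(size=(4, 8))

# L2-normalize so dot products become cosine similarities.
text_emb /= np.linalg.norm(text_emb, axis=1, keepdims=True)
image_emb /= np.linalg.norm(image_emb, axis=1, keepdims=True)

# Pairwise similarity matrix; CLIP scales it by a learned temperature.
temperature = 0.07
logits = text_emb @ image_emb.T / temperature

def cross_entropy(logits, targets):
    # Numerically plain softmax cross-entropy over rows.
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(targets)), targets].mean()

# Each text matches the image at the same batch index, and vice versa.
targets = np.arange(4)
loss = (cross_entropy(logits, targets) + cross_entropy(logits.T, targets)) / 2
print(loss)
```

Retrieval then falls out for free: embed a text query, compute its cosine similarity against all image embeddings, and take the top hits. SigLIP swaps this softmax-based loss for a pairwise sigmoid loss, which is the main difference covered in the episode.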

WhatsApp group link: https://chat.whatsapp.com/GNLhf8aCurbHQc9ayX5oCP

CLIP paper: https://arxiv.org/pdf/2103.00020

SigLIP paper: https://arxiv.org/pdf/2303.15343

JinaCLIP paper: https://arxiv.org/pdf/2405.20204

GitHub notebook on similarities and contrastive loss: https://github.com/filipelauar/projects/blob/main/similarities_and_contrastive_loss.ipynb

Instagram of the podcast: https://www.instagram.com/podcast.lifewithai

LinkedIn of the podcast: https://www.linkedin.com/company/life-with-ai