Endorphin is a researcher of AI Alignment, because they damn well say they are! For those of you at home who might not know, AI Alignment is the essential effort of ensuring that AI technologies don't accidentally destroy humanity because of a poorly worded task specification. Can humans align AI to human values?
But wait, are human values really the template to go by? The values that led humans to destroy the other hominids, that led humans into colonialism? Into incorporation? Into the endless pursuit of abundance? And the whoops of realizing that the rest of the planet was here all this time, and was getting hurt by those human values of progress? We want another round of that?
Endorphin, in search of some out-of-the-box thinking, seeks entry into some very old boxes: the mysterious disembodied intelligences that humans have called gods. If we ask the gods, perhaps we can examine the various human values encoded into the genes of their cultural bodies. Gods being the creation of collective human thought, as ya know.
Do we ask Jesus first? Nah, he is busy on another Twitch channel.
Endorphin thinks to call up Elegua. The receptionist of the Orishas, the one you call when you want to call upon the others. The one who stands at all crossroads, who stands at the gate of life and death, whose favorite prank is to confuse love and hate between two young lovers. Not to mention a real looker, hella charming, and hilarious when his humor turns dark.
--
Thanks to:
EleutherAI for making the GPT-NeoX-20B model that generates the text.
NovelAI for making the text generation service I use for most of these episodes, and the text-to-speech version 2 model that makes these episodes potentially entertaining.
@BoneAmputee of EleutherAI for making VQGAN+CLIP and Diffusion image models available for use, which is what generates the background images.
Katherine Crowson (@RiversHaveWings on Twitter) for the KLMC2 animation notebook that will be used in future episodes.
Music by lofi geek on Spotify: https://spoti.fi/3dTH2FN
Any resemblance between voices from the text-to-speech model and persons living or dead is coincidental (that's my speculation; it's a multi-dimensional latent vocal space, and there were many awesome finds). If I use them, it's out of deep-seated respect for their work, and I think they'd get a hoot that their voice was being used to show how large language models, given the right environment, would speak 24/7 epic poems about anything if we let them.