Description

OpenAI is urgently hiring an alignment architect at $555K+ to lead its safety initiatives. The role spans everything from empirical deception studies to deployment safeguards. The escalating compensation signals how critical superalignment work has become.

See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.