OpenAI Launches Independent Safety Board to Oversee Model Releases

Published Date: September 16, 2024

In a bid to enhance the safety and security of its AI model releases, OpenAI has announced the creation of an independent Safety Board with the authority to delay model launches if safety concerns arise. This development comes on the heels of a comprehensive 90-day review of the company's safety and security processes.

The newly formed oversight body will derive its operational framework from OpenAI's existing Safety and Security Committee, transforming it into an independent entity. Chaired by Zico Kolter, the Safety Committee includes notable figures such as Adam D'Angelo, Paul Nakasone, and Nicole Seligman. The committee holds the mandate to review safety evaluations for major model releases and to decide on the timing of those releases, with the power to delay any launch until identified safety concerns are adequately addressed.

Notably, the members of this Safety Committee also sit on OpenAI's broader board of directors, raising questions about the precise nature and scope of its independence. Unlike Meta's Oversight Board, whose members are distinct from the company's board of directors, the overlap in OpenAI's governance could imply a more intertwined decision-making process. The absence of Sam Altman, OpenAI's CEO, from this committee marks a significant change in its composition.

The committee will ensure that the full OpenAI board is periodically briefed on safety and security matters, further embedding safety oversight within the company's operational ethos. This includes the broader board receiving updates on potential collaborations within the AI industry and exploring avenues for comprehensive information sharing on safety issues.

By establishing this oversight committee, OpenAI aims to advance industry standards in AI safety and security.
The initiative highlights the increasing importance of governance structures in the development and deployment of AI technologies. Moreover, the company's commitment to finding more avenues for independent testing and to sharing insights on its safety work underlines a proactive stance toward transparency and collaboration across the AI sector.

This move by OpenAI resonates within an industry where the ethical implications and potential risks of AI continue to garner significant attention. By instituting a protocol in which safety measures can override development timelines, OpenAI sets a precedent for prioritizing security in AI innovation. This model could influence other AI firms to adopt similar governance frameworks, fostering a culture of safety and responsibility within the industry.

Other Key Stories for Today's Podcast

Hong Kong Considers Rules for AI Use in Finance

The Hong Kong government is preparing to release its first policy statement on the use of artificial intelligence in the finance sector. This move is expected to significantly influence the application of AI technologies in trading, investment banking, and cryptocurrencies. The Financial Services and Treasury Bureau is drafting guidelines that will cover the ethical use of AI and general principles for its application in finance. Feedback from industry stakeholders is currently being incorporated into the final document, which remains subject to change.

Intel to Make Custom AI Chip for Amazon, Delay German Plant

Intel has secured a major deal with Amazon Web Services (AWS) for custom AI chips, marking a pivotal step in Intel's turnaround strategy. The partnership involves co-investment in a custom semiconductor for AI computing, known as a fabric chip, under a multiyear, multibillion-dollar framework. This collabora