Former OpenAI employees lead push to protect whistleblowers flagging artificial intelligence risks
( ) -q-28- UNDATED (Correspondent Jeremy House) “without adequate caution.”
A group of current and former OpenAI workers is calling on the ChatGPT maker and other artificial intelligence companies to protect employees who flag safety risks posed by AI technology.
[CutID: OPENAI-WHISTLEBLOWERS-house-q-WEDam.mp3
Time: 28s
Title: OPENAI-WHISTLEBLOWERS-house-q-WEDam
Out-cue: without adequate caution]
TAG: Correspondent Jeremy House reporting.
————————-
VERBATIM: An open letter asks tech companies to establish stronger whistleblower protections that would let researchers raise concerns about the development of high-performing AI systems, both internally and with the public, without fear of retaliation. Former OpenAI engineer Daniel Ziegler, one of the organizers behind the open letter, says the development of more powerful AI systems is “moving fast and there are a lot of strong incentives to barrel ahead without adequate caution.”