OpenAI Establishes Specialized Task Force to Address ‘Superintelligent’ AI Systems

2023-7-6 15:21

OpenAI Prepares to Build a Specialized Team to Navigate the Risks of Advanced AI Systems

OpenAI, the company behind the popular AI chatbot ChatGPT, has announced its intention to assemble a team dedicated to mitigating the risks posed by the advent of superintelligent AI systems. In a blog post published on July 5, it outlined plans to establish the team with the goal of effectively governing AI systems that surpass human intelligence.

Recognizing superintelligence as a technology that could have profound impacts, OpenAI emphasized the need to address the dangers that accompany it, including the disempowerment or even extinction of humanity. The organization believes such superintelligent systems may arrive within the next decade.

To tackle this challenge, OpenAI pledged to dedicate 20% of its existing computing power to the effort. The organization aims to build a roughly human-level automated alignment researcher capable of helping ensure that superintelligent AI systems follow human values and intent.

OpenAI appointed its chief scientist, Ilya Sutskever, and its head of alignment, Jan Leike, to co-lead the initiative, and has invited machine learning researchers and engineers to join the team.

OpenAI’s announcement coincides with global discussions on the regulation and governance of AI systems. The European Union has moved furthest: the European Parliament recently passed its draft of the EU AI Act, which would mandate disclosure of AI-generated content. Similar deliberations are under way in the United States, where lawmakers have proposed a National AI Commission to shape the nation’s approach to AI. Concerns that regulation could constrain innovation have prompted OpenAI CEO Sam Altman to engage with EU regulators and advocate for a balanced approach.

In light of these developments, Senator Michael Bennet recently drafted a letter urging major tech companies, including OpenAI, to adopt AI-generated content labeling practices. As the landscape surrounding AI governance evolves, OpenAI’s proactive measures to address superintelligent AI risks underscore the importance of responsible and thoughtful deployment of advanced AI systems.



