OpenAI has developed CriticGPT, an AI assistant designed to help human trainers refine GPT-4’s code generation abilities. CriticGPT identifies subtle coding errors that humans might miss, improving the accuracy and quality of GPT-4’s code output. The model significantly outperforms both humans and other AI systems in bug detection, and even helps reduce hallucinations in code generation.
OpenAI has unveiled CriticGPT, an AI model specifically designed to identify and correct errors in code generated by GPT-4. This new model leverages reinforcement learning from human feedback (RLHF) to enhance code quality, achieving a 63% improvement over ChatGPT in error detection. While still under development, CriticGPT has the potential to revolutionize code review and enhance the reliability of AI-generated code.
The ChatGPT app, previously limited to Plus subscribers, is now available to everyone on macOS running Sonoma. The app offers quick access to ChatGPT through a keyboard shortcut, allowing users to interact with the AI via text and voice. While Apple has partnered with OpenAI, a Windows version is not yet available.
While the release date of GPT-5 remains unclear, Microsoft AI CEO Mustafa Suleyman suggests that the next major leap in AI capabilities will arrive with GPT-6 in approximately two years. Suleyman believes these models will be capable of following instructions and taking consistent actions, marking a significant advancement from current models.
OpenAI, the creator of ChatGPT, has acquired Multi, a screensharing and collaboration tool for software engineering teams. This acquisition has sparked speculation that ChatGPT could gain new capabilities, potentially allowing it to remotely control users’ computers. The acquisition has raised concerns about security and privacy, but also highlights the growing trend of integrating AI deeper into PCs.
OpenAI’s Chief Technology Officer Mira Murati has revealed that the next generation of ChatGPT, potentially called GPT-5, is expected to be released in a year and a half and will possess ‘Ph.D.-level’ intelligence for specific tasks. Murati compares the progression from GPT-3 to GPT-5 to a person advancing from toddler to high schooler to Ph.D. student, illustrating the rapid pace of AI development.
OpenAI has appointed former NSA director Paul Nakasone to its board and newly formed Safety and Security Committee. Nakasone’s expertise in cybersecurity and national security aims to address concerns and enhance OpenAI’s commitment to responsible AI development. Senator Mark Warner praised Nakasone’s appointment, highlighting his experience in cybersecurity, election security, and China-related tech challenges. The move responds to criticisms from former employees alleging that OpenAI prioritized speed over safety in AI development. OpenAI’s formation of the Safety and Security Committee and inclusion of Nakasone demonstrate its efforts to strengthen AI security and mitigate risks associated with powerful AI technologies.
Elon Musk has strongly criticized Apple’s plan to collaborate with OpenAI to integrate artificial intelligence features into the iPhone. Musk expressed concerns about security and privacy, accusing Apple of engaging in “creepy spyware” behavior. He threatened to ban iPhones from the premises of his companies, SpaceX, Tesla, and X. Apple’s partnership announcement with OpenAI at its Worldwide Developers Conference was met with immediate backlash from Musk, who labeled it an “unacceptable security violation.” He directly challenged Apple CEO Tim Cook, demanding that the company halt the initiative or face a ban on Apple devices in his companies’ facilities.
A group of current and former employees from leading AI companies, including OpenAI, Google DeepMind, and Anthropic, have voiced concerns about the rapid development of generative AI technology without effective oversight. They highlight potential risks such as exacerbating inequality, manipulating information, and even the possibility of AI systems escaping human control. The group urges collaboration between scientists, legislators, and the public to mitigate these risks. They also call on AI companies to establish anonymous whistleblower protections and refrain from enforcing non-disparagement agreements.
OpenAI, the creator of ChatGPT, has taken action against deceptive uses of AI in covert operations targeting the Indian elections. The company disrupted a network of accounts originating from Israel that used AI-generated content to criticize the ruling BJP party and praise the opposition Congress party. The disrupted activity had no significant impact on audience engagement or reach. OpenAI says it is committed to developing safe AI and preventing abuse of its services.