Sunday, July 20, 2025
Mira Murati, CTO, OpenAI

Insider Q&A: OpenAI CTO Mira Murati on Shepherding ChatGPT

OpenAI was building a reputation in the artificial intelligence field but wasn’t a household name when Mira Murati joined the nonprofit research lab in 2018.

Soon after, the San Francisco lab started a major transformation. It turned itself into a business that’s attracted worldwide attention as the maker of ChatGPT.

Now its chief technology officer, Murati leads OpenAI’s research, product and safety teams. She’s led the development and launch of its AI models including ChatGPT, the image-generator DALL-E and the newest, GPT-4.

She spoke with The Associated Press about AI safeguards and the company’s vision for the futuristic concept of artificial general intelligence, known as AGI. The interview has been edited for length and clarity.

Q: What does artificial general intelligence mean for OpenAI?

A: By artificial general intelligence, we usually mean highly autonomous systems that are capable of producing economic output, significant economic output. In other words, systems that can generalize across different domains. It’s human-level capability. OpenAI’s specific vision around it is to build it safely and figure out how to build it in a way that’s aligned with human intentions, so that the AI systems are doing the things that we want them to do, and that it maximally benefits as many people out there as possible, ideally everyone.

Q: Is there a path between products like GPT-4 and AGI?

A: We’re far from the point of having a safe, reliable, aligned AGI system. Our path to getting there has a couple of important vectors. From a research standpoint, we’re trying to build systems that have a robust understanding of the world similarly to how we do as humans. Systems like GPT-3 initially were trained only on text data, but our world is not only made of text, so we have images as well and then we started introducing other modalities. The other angle has been scaling these systems to increase their generality. With GPT-4, we’re dealing with a much more capable system, specifically from the angle of reasoning about things. This capability is key. If the model is smart enough to understand an ambiguous direction or a high-level direction, then you can figure out how to make it follow this direction. But if it doesn’t even understand that high-level goal or high-level direction, it’s much harder to align it. It’s not enough to build this technology in a vacuum in a lab. We really need this contact with reality, with the real world, to see where are the weaknesses, where are the breakage points, and try to do so in a way that’s controlled and low risk and get as much feedback as possible.

Q: What safety measures do you take?

A: We think about interventions at each stage. We redact certain data from the initial training on the model. With DALL-E, we wanted to reduce harmful bias issues we were seeing. We adjusted the ratio of female and male images in the training dataset. But you have to be very careful because you might create some other imbalance. You have to constantly audit. In that case, we got a different bias because a lot of these images were of a sexual nature. So then you have to adjust it again and be very careful about every time you make an intervention, seeing what else is being disrupted. In the model training, with ChatGPT in particular, we did reinforcement learning with human feedback to help the model get more aligned with human preferences. Basically what we’re trying to do is amplify what’s considered good behavior and then de-amplify what’s considered bad behavior.
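The dataset-rebalancing intervention Murati describes — adjusting the ratio of groups in training data, then auditing for new imbalances — can be illustrated with a minimal sketch. This is not OpenAI's actual pipeline; the `rebalance` helper, field names, and ratio format are all hypothetical, showing only the general idea of downsampling over-represented groups to a target ratio.

```python
import random
from collections import defaultdict

def rebalance(records, group_key, target_ratio, seed=0):
    """Downsample over-represented groups so group sizes match the
    requested ratio as closely as possible.

    records: list of dicts, each carrying a `group_key` field.
    target_ratio: dict mapping group label -> relative weight,
                  e.g. {"female": 1, "male": 1} for parity.
    """
    rng = random.Random(seed)
    groups = defaultdict(list)
    for r in records:
        groups[r[group_key]].append(r)
    # Largest feasible scale: every group must have enough samples
    # to supply its share of the target ratio.
    scale = min(len(groups[g]) / w for g, w in target_ratio.items())
    out = []
    for g, w in target_ratio.items():
        out.extend(rng.sample(groups[g], int(scale * w)))
    rng.shuffle(out)
    return out
```

As the answer notes, this kind of fix must be followed by a fresh audit: forcing one ratio can surface a different skew elsewhere in the data (in their case, sexualized imagery), so each intervention is checked for what else it disrupts.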

Q: Should these systems be regulated?

A: Yeah, absolutely. These systems should be regulated. At OpenAI, we’re constantly talking with governments and regulators and other organizations that are developing these systems to, at least at the company level, agree on some level of standards. We’ve done some work on that in the past couple of years with large language model developers in aligning on some basic safety standards for deployment of these models. But I think a lot more needs to happen. Government regulators should certainly be very involved.

Q: A letter calling for a 6-month industry pause on building AI models more powerful than GPT-4 got a lot of attention. What do you think of the petition and its assumption about AI risks?

A: Look, I think that designing safety mechanisms in complex systems is hard. There is a lot of nuance here. Some of the risks that the letter points out are completely valid. At OpenAI, we’ve been talking about them very openly for years and studying them as well. I don’t think signing a letter is an effective way to build safety mechanisms or to coordinate players in the space. Some of the statements in the letter were just plain untrue about development of GPT-4 or GPT-5. We’re not training GPT-5. We don’t have any plans to do so in the next six months. And we did not rush out GPT-4. We took six months, in fact, to just focus entirely on the safe development and deployment of GPT-4. Even then, we rolled it out with a high number of guardrails and a very coordinated and slow rollout. It’s not easily accessible to everyone, and it’s certainly not open source. This is all to say that I think the safety mechanisms and coordination mechanisms in these AI systems and any complex technological system is difficult and requires a lot of thought, exploration and coordination among players.

Q: How much has OpenAI changed since you joined?

A: When I joined OpenAI, it was a nonprofit. I thought this was the most important technology that we will ever build as humanity and I really felt like a company with OpenAI’s mission would be most likely to make sure that it goes well. Over time, we changed our structure because these systems are expensive. They require a lot of funding. We made sure to structure the incentives in such a way that we would still serve the nonprofit mission. That’s why we have a “capped profit” structure. People at OpenAI are intrinsically motivated and mission-aligned and that hasn’t changed from the beginning. But over the course of five years, our thinking has evolved a lot when it comes to what’s the best way to deploy, what’s the safest way. That’s probably the starkest difference. I think it’s a good change.

Q: Did you anticipate the response to ChatGPT before its Nov. 30 release?

A: The underlying technology had been around for months. We had high confidence in the limitations of the model from customers that had already been using it via an API. But we made a few changes on top of the base model. We adapted it to dialog. Then we made that available to researchers through a new ChatGPT interface. We had been exploring it internally with a small, trusted group, and we realized the bottleneck was getting more information and getting more data from people. We wanted to expand it to more people out there in what we call a research preview, not a product. The intention was to gather feedback on how the model is behaving and use that data to improve the model and make it more aligned. We didn’t anticipate the degree to which people would be so captivated with talking to an AI system. It was just a research preview. The number of users and such, we didn’t anticipate that level of excitement.
