Mira Murati, CTO, OpenAI

Insider Q&A: OpenAI CTO Mira Murati on Shepherding ChatGPT

OpenAI was building a reputation in the artificial intelligence field but wasn’t a household name when Mira Murati joined the nonprofit research lab in 2018.

Soon after, the San Francisco lab started a major transformation. It turned itself into a business that’s attracted worldwide attention as the maker of ChatGPT.

Now its chief technology officer, Murati leads OpenAI’s research, product and safety teams. She’s led the development and launch of its AI models including ChatGPT, the image-generator DALL-E and the newest, GPT-4.

She spoke with The Associated Press about AI safeguards and the company’s vision for the futuristic concept of artificial general intelligence, known as AGI. The interview has been edited for length and clarity.

Q: What does artificial general intelligence mean for OpenAI?

A: By artificial general intelligence, we usually mean highly autonomous systems that are capable of producing significant economic output. In other words, systems that can generalize across different domains. It’s human-level capability. OpenAI’s specific vision around it is to build it safely and figure out how to build it in a way that’s aligned with human intentions, so that the AI systems are doing the things that we want them to do, and that it maximally benefits as many people out there as possible, ideally everyone.

Q: Is there a path between products like GPT-4 and AGI?

A: We’re far from the point of having a safe, reliable, aligned AGI system. Our path to getting there has a couple of important vectors. From a research standpoint, we’re trying to build systems that have a robust understanding of the world, similar to how we do as humans. Systems like GPT-3 were initially trained only on text data, but our world is not made of text alone, so we have images as well, and then we started introducing other modalities. The other angle has been scaling these systems to increase their generality. With GPT-4, we’re dealing with a much more capable system, specifically from the angle of reasoning about things. This capability is key. If the model is smart enough to understand an ambiguous or high-level direction, then you can figure out how to make it follow that direction. But if it doesn’t even understand that high-level goal or direction, it’s much harder to align it. It’s not enough to build this technology in a vacuum in a lab. We really need contact with reality, with the real world, to see where the weaknesses and breakage points are, and to try to do so in a way that’s controlled and low risk while getting as much feedback as possible.

Q: What safety measures do you take?

A: We think about interventions at each stage. We redact certain data from the model’s initial training. With DALL-E, we wanted to reduce the harmful bias issues we were seeing. We adjusted the ratio of female and male images in the training dataset. But you have to be very careful, because you might create some other imbalance. You have to constantly audit. In that case, we got a different bias because a lot of these images were of a sexual nature. So then you have to adjust it again, and be very careful every time you make an intervention to see what else is being disrupted. In the model training, with ChatGPT in particular, we did reinforcement learning with human feedback to help the model get more aligned with human preferences. Basically, what we’re trying to do is amplify what’s considered good behavior and then de-amplify what’s considered bad behavior.
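The “amplify good, de-amplify bad” step Murati describes is commonly implemented by first training a reward model on pairs of responses ranked by human labelers, then using that reward signal to fine-tune the language model. Below is a minimal, hypothetical PyTorch sketch of the pairwise (Bradley-Terry style) preference loss at the heart of the reward-model step; the RewardModel class, its dimensions, and the random stand-in embeddings are illustrative assumptions, not OpenAI’s code.

```python
# Hypothetical sketch of RLHF reward-model training with a pairwise
# preference loss. Architecture and data are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RewardModel(nn.Module):
    """Toy scorer: maps a fixed-size response embedding to a scalar reward."""
    def __init__(self, dim: int = 16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 32), nn.Tanh(), nn.Linear(32, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x).squeeze(-1)

def preference_loss(r_preferred: torch.Tensor, r_rejected: torch.Tensor) -> torch.Tensor:
    # Push the score of the human-preferred response above the rejected one:
    # this is the "amplify good, de-amplify bad" signal.
    return -F.logsigmoid(r_preferred - r_rejected).mean()

model = RewardModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(200):
    # Random stand-in embeddings; in practice these would come from the
    # language model's representations of labeler-ranked responses.
    preferred = torch.randn(8, 16) + 0.5
    rejected = torch.randn(8, 16) - 0.5
    loss = preference_loss(model(preferred), model(rejected))
    opt.zero_grad()
    loss.backward()
    opt.step()
print(f"final preference loss: {loss.item():.3f}")
```

In a full RLHF pipeline, the trained reward model would then guide a policy-optimization step (for example, PPO) over the language model’s own outputs, which is where the behavioral amplification actually happens.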

Q: Should these systems be regulated?

A: Yeah, absolutely. These systems should be regulated. At OpenAI, we’re constantly talking with governments and regulators and other organizations that are developing these systems to, at least at the company level, agree on some level of standards. We’ve done some work on that in the past couple of years with large language model developers in aligning on some basic safety standards for deployment of these models. But I think a lot more needs to happen. Government regulators should certainly be very involved.

Q: A letter calling for a 6-month industry pause on building AI models more powerful than GPT-4 got a lot of attention. What do you think of the petition and its assumption about AI risks?

A: Look, I think that designing safety mechanisms in complex systems is hard. There is a lot of nuance here. Some of the risks that the letter points out are completely valid. At OpenAI, we’ve been talking about them very openly for years and studying them as well. I don’t think signing a letter is an effective way to build safety mechanisms or to coordinate players in the space. Some of the statements in the letter were just plain untrue about the development of GPT-4 or GPT-5. We’re not training GPT-5. We don’t have any plans to do so in the next six months. And we did not rush out GPT-4. We took six months, in fact, to focus entirely on the safe development and deployment of GPT-4. Even then, we rolled it out with a high number of guardrails and a very coordinated and slow rollout. It’s not easily accessible to everyone, and it’s certainly not open source. This is all to say that building safety and coordination mechanisms into these AI systems, or into any complex technological system, is difficult and requires a lot of thought, exploration and coordination among players.

Q: How much has OpenAI changed since you joined?

A: When I joined OpenAI, it was a nonprofit. I thought this was the most important technology that we will ever build as humanity and I really felt like a company with OpenAI’s mission would be most likely to make sure that it goes well. Over time, we changed our structure because these systems are expensive. They require a lot of funding. We made sure to structure the incentives in such a way that we would still serve the nonprofit mission. That’s why we have a “capped profit” structure. People at OpenAI are intrinsically motivated and mission-aligned and that hasn’t changed from the beginning. But over the course of five years, our thinking has evolved a lot when it comes to what’s the best way to deploy, what’s the safest way. That’s probably the starkest difference. I think it’s a good change.

Q: Did you anticipate the response to ChatGPT before its Nov. 30 release?

A: The underlying technology had been around for months. We had high confidence in the limitations of the model from customers that had already been using it via an API. But we made a few changes on top of the base model. We adapted it to dialog. Then we made it available to researchers through a new ChatGPT interface. We had been exploring it internally with a small, trusted group, and we realized the bottleneck was getting more information and more data from people. We wanted to expand it to more people out there in what we call a research preview, not a product. The intention was to gather feedback on how the model was behaving and use that data to improve the model and make it more aligned. We didn’t anticipate the degree to which people would be so captivated by talking to an AI system. It was just a research preview. We didn’t anticipate that level of excitement, or that number of users.
