A 13-Minute Recap of "OpenAI CEO Altman's Congressional Testimony"
Hello, this is Sangmin from Choimirai School.
On May 16, 2023, Sam Altman, CEO of OpenAI, the developer of the conversational AI "ChatGPT", testified for three hours at a US congressional hearing. This video condenses the key points into 13 minutes. I'm sharing the transcript together with Japanese subtitles.
Richard Blumenthal
They are no longer fantasies of science fiction. They are real and present: the promises of curing cancer, or developing new understandings of physics and biology, or modeling climate and weather. All very encouraging and hopeful. But we also know the potential harms, and we've seen them already: weaponized disinformation, housing discrimination, harassment of women, and impersonation fraud.
Voice cloning, deepfakes. These are the potential risks, despite the other rewards. And for me, perhaps the biggest nightmare is the looming new industrial revolution: the displacement of millions of workers, the loss of huge numbers of jobs, the need to prepare for this new industrial revolution in skill training and relocation that may be required. And already, industry leaders are calling attention to those challenges.
To quote ChatGPT, this is not necessarily the future that we want. We need to maximize the good over the bad. Congress has a choice now. We had the same choice when we faced social media. We failed to seize that moment. The result is predators on the internet, toxic content, exploiting children, creating dangers for them. And Senator Blackburn and I, and others like Senator Durbin on the Judiciary Committee, are trying to deal with it: the Kids Online Safety Act. But Congress failed to meet the moment on social media. Now we have the obligation to do it on AI, before the threats and the risks become real. Sensible safeguards are not in opposition to innovation. Accountability is not a burden, far from it. They are the foundation of how we can move ahead while protecting public trust.
Josh Hawley
And I think my question is: what kind of an innovation is it going to be? Is it gonna be like the printing press, that diffused knowledge and power and learning widely across the landscape, that empowered ordinary, everyday individuals, that led to greater flourishing, that led above all to greater liberty?
Or is it gonna be more like the A-bomb? A huge technological breakthrough, but the consequences, severe, terrible, continue to haunt us to this day.
Sam Altman
Before we released GPT-4, our latest model, we spent over six months conducting extensive evaluations, external red teaming, and dangerous capability testing.
We are proud of the progress that we made. GPT-4 is more likely to respond helpfully and truthfully, and refuse harmful requests, than any other widely deployed model of similar capability. However, we think that regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models.
For example, the US government might consider a combination of licensing and testing requirements for the development and release of AI models above a threshold of capabilities. There are several other areas I mentioned in my written testimony where I believe that companies like ours can partner with governments, including ensuring that the most powerful AI models adhere to a set of safety requirements, facilitating processes to develop and update safety measures, and examining opportunities for global coordination. And as you mentioned, I think it's important that companies have their own responsibility here, no matter what Congress does.
Christina Montgomery
To that end, IBM urges Congress to adopt a precision regulation approach to AI.
This means establishing rules to govern the deployment of AI in specific use cases, not regulating the technology itself. Such an approach would involve four things. First, different rules for different risks. The strongest regulation should be applied to use cases with the greatest risks to people and society.
Second, clearly defining risks. There must be clear guidance on AI uses or categories of AI supported activity that are inherently high risk. This common definition is key to enabling a clear understanding of what regulatory requirements will apply in different use cases and contexts.
Third, be transparent. AI shouldn't be hidden. Consumers should know when they're interacting with an AI system, and that they have recourse to engage with a real person should they so desire. No person anywhere should be tricked into interacting with an AI system. And finally, showing the impact. For higher-risk use cases, companies should be required to conduct impact assessments that show how their systems perform against tests for bias and other ways that they could potentially impact the public, and to attest that they've done so.
Richard Blumenthal
You may have had in mind the effect on jobs, which is really my biggest nightmare in the long term. Let me ask you what your biggest nightmare is, and whether you share that concern.
Sam Altman
Like with all technological revolutions, I expect there to be significant impact on jobs, but exactly what that impact looks like is very difficult to predict.
If we look back at the other side of a previous technological revolution, talking about the jobs that exist on the other side, you know, you can go back and read books about this, what people said at the time; it's difficult. I believe that there will be far greater jobs on the other side of this, and that the jobs of today will get better.
I think it's important, first of all, to understand and think about GPT-4 as a tool, not a creature, which is easy to get confused about. And it's a tool that people have a great deal of control over, in how they use it. And second, GPT-4 and other systems like it are good at doing tasks, not jobs.
And so you see already people that are using GPT-4 to do their job much more efficiently, by helping them with tasks.
Josh Hawley
Should we be concerned about large language models that can predict survey opinion and then help organizations and entities fine-tune strategies to elicit behaviors from voters? Should we be worried about this for our elections?
Sam Altman
Yeah, thank you, Senator Hawley, for the question. It's one of my areas of greatest concern: the more general ability of these models to manipulate, to persuade, to provide sort of one-on-one interactive disinformation. I think that's a broader version of what you're talking about, but given that we're gonna face an election next year, and these models are getting better.
I think this is a significant area of concern. There are a lot of policies that companies can voluntarily adopt, and I'm happy to talk about what we do there. I do think some regulation would be quite wise on this topic. Someone mentioned it earlier; it's something we really agree with.
People need to know if they're talking to an AI, if content that they're looking at might be generated or might not. I think making that clear is a great thing to do. I think we also will need rules and guidelines about what's expected, in terms of disclosure, from a company providing a model that could have these sorts of abilities that you talk about. So I'm nervous about it.
Josh Hawley
Should we be concerned about that, for its corporate applications, for the monetary applications, for the manipulation that could come from that, Mr. Altman?
Sam Altman
Yes, we should be concerned about that. To be clear, OpenAI does not, we would not, have an ad-based business model. So we're not trying to build up these profiles of our users. We're not trying to get them to use it more; actually, we'd love it if they used it less, because we don't have enough GPUs. But I think other companies are already, and certainly will in the future, use AI models to create, you know, very good ad predictions of what a user will like.
Gary Marcus
My view is that we probably need a cabinet-level organization within the United States in order to address this. And my reasoning for that is that the number of risks is large, and the amount of information to keep up on is so much. I think we need a lot of technical expertise; I think we need a lot of coordination of these efforts.
So there is one model here where we stick to only existing law, try to shape all of what we need to do, and each agency does their own thing. But I think that AI is gonna be such a large part of our future, and is so complicated and moving so fast. This does not fully solve your problem about a dynamic world, but it's a step in that direction: to have an agency whose full-time job is to do this. I personally have suggested, in fact, that we should want to do this in a global way.
Marsha Blackburn
We've lived through Napster.
Sam Altman
Yes, yes. But we're,
Marsha Blackburn
That was something that really cost a lot of artists, a lot of money.
Sam Altman
Oh, I understand. Yeah, for sure.
Marsha Blackburn
In the digital distribution era,
Sam Altman
I don't know the numbers on Jukebox off the top of my head. As a research release, I can follow up with your office, but Jukebox is not something that gets much attention or usage. It was put out to show that something's possible.
Marsha Blackburn
Well, Senator Durbin just said, and I think it's a fair warning to you all: if we are not involved in this from the get-go, and you all already are a long way down the path on this, but if we don't step in, then this gets away from you. So, are you working with the Copyright Office? Are you considering protections for content generators and creators in generative AI?
Sam Altman
Yes, we are absolutely engaged on that. Again, to reiterate my earlier point, we think that content creators and content owners need to benefit from this technology. Exactly what the economic model is, we're still talking to artists and content owners about what they want. I think there are a lot of ways this can happen, but very clearly, no matter what the law is, the right thing to do is to make sure people get significant upside benefit from this new technology.
Amy Klobuchar
With an election upon us, with primary elections upon us, we're gonna have all kinds of misinformation. And I just wanna know what you're planning on doing about it. I know we're gonna have to do something soon, not just for the images of the candidates, but also for misinformation about the actual polling places and election rules.
Sam Altman
Thank you, Senator. We talked about this a little bit earlier. We are quite concerned about the impact this can have on elections. I think this is an area where, hopefully, the entire industry and the government can work together quickly. There are many approaches, and I'll talk about some of the things we do. But before that, I think it's tempting to use the frame of social media.
But this is not social media. This is different, and so the response that we need is different. You know, this is a tool that a user is using to help generate content more efficiently than before. They can change it, they can test the accuracy of it, and if they don't like it, they can get another version.
But it still then spreads through social media or other ways. Like, ChatGPT is, you know, a single-player experience, where you're just using this. And so I think as we think about what to do, that's important to understand. There's a lot that we can do, and do, there: there are things that the model refuses to generate.
We have policies, and importantly, we also have monitoring. So at scale, we can detect someone generating a lot of those tweets, even if generating one tweet is okay.