Generative AI Might Make It Easier to Target Journalists, Researchers Say


Since the artificial intelligence chatbot ChatGPT launched last fall, a torrent of think pieces and news reports about the ins and outs and ups and downs of generative artificial intelligence has flowed, stoking fears of a dystopian future in which robots take over the world.

While much of that hype is indeed just hype, a new report has identified immediate risks posed by apps like ChatGPT. Some of those present distinct challenges to journalists and the news industry.

Published Wednesday by New York University’s Stern Center for Business and Human Rights, the report identified eight risks related to generative artificial intelligence, or AI, including disinformation, cyberattacks, privacy violations and the decay of the news industry.

The AI debate “is getting a little confused between concerns about existential dangers versus what immediate harms generative AI might entail,” the report’s co-author Paul Barrett told VOA. “We shouldn’t get paralyzed by the question of, ‘Oh my God, will this technology lead to killer robots that are going to destroy humanity?’”

The systems being released right now are not going to lead to that nightmarish outcome, explained Barrett, who is the deputy director of the Stern Center.

Instead, the report — which Barrett co-authored with Justin Hendrix, founder and editor of the media nonprofit Tech Policy Press — argues that lawmakers, regulators and the AI industry itself should prioritize addressing the immediate potential risks.

Safety concerns

Among the most concerning risks are the human-level threats that artificial intelligence may pose to the safety of journalists and activists.

Doxxing and smear campaigns are already among the many threats that journalists face online over their work. Doxxing is the publication of someone’s private or identifying information, such as a home address or phone number, on the internet.

But now with generative AI, it will likely be even easier to dox reporters and harass them online, according to Barrett.

“If you want to set up a campaign like that, you’re going to have to do a lot less work using generative AI systems,” Barrett said. “It’ll be easier to attack journalists.”

Propaganda easy to make

Disinformation is another primary risk that the report highlights, because generative AI makes it easier to churn out propaganda.

The report notes that if the Kremlin had access to generative AI in its disinformation campaign surrounding the 2016 U.S. presidential election, Moscow could have launched a more destructive and less expensive influence operation.

Generative AI “is going to be a huge engine of efficiency, but it’s also going to make much more efficient the production of disinformation,” Barrett said.

That has implications for press freedom and media literacy, since studies indicate that exposure to misinformation and disinformation is linked to reduced trust in the media.

Generative AI may also exacerbate financial issues plaguing newsrooms, according to the report.

If people ask ChatGPT a question, for instance, and are happy with the summarized answer, they’re less likely to click through to news articles. That means shrinking traffic, and with it ad dollars, for news sites, the report said.

But artificial intelligence is far from all bad news for the media industry.

For example, AI tools can help journalists research by scraping PDF files and analyzing data quickly. Artificial intelligence can also help fact-check sources and write headlines.
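For a concrete sense of the PDF-scraping step mentioned above, here is a minimal sketch in Python using the open-source pypdf library; the file name and search keyword are hypothetical placeholders, not tools the report names.

# Minimal sketch: extract text from a PDF and flag pages mentioning a keyword.
# Assumes the third-party pypdf library (pip install pypdf); "report.pdf" and
# the keyword "budget" are hypothetical placeholders.
from pypdf import PdfReader

reader = PdfReader("report.pdf")                # open a local PDF document
for number, page in enumerate(reader.pages, start=1):
    text = page.extract_text() or ""            # extract_text() may yield nothing on image-only pages
    if "budget" in text.lower():                # case-insensitive keyword check
        print(f"Page {number} mentions 'budget'")

A reporter could run a script like this over a folder of agency filings to find which documents, and which pages, merit a closer read.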

In the report, Barrett and Hendrix caution the government against letting this new industry repeat the mistakes made with social media platforms.

“Generative AI doesn’t deserve the deference enjoyed for so long by social media companies,” they write.

They recommend that the government enhance federal authority to oversee AI companies and require more transparency from them.

“Congress, regulators, the public — and the industry, for that matter — need to pay attention to the immediate potential risks,” Barrett said. “And if the industry doesn’t move fast enough on that front, that’s something Congress needs to figure out a way to force them to pay attention to.”
