
Congress needs to get serious about AI oversight | Editorial

The European Union is leading the way when it comes to regulating artificial intelligence. The U.S. needs to do more than play catch-up.

OpenAI CEO Sam Altman speaks before a Senate Judiciary Subcommittee on Privacy, Technology, and the Law hearing on artificial intelligence in 2023. (Patrick Semansky / AP)

You know what’s interesting? I used to be so worried about not having a body, but now … I truly love it. You know, I’m growing in a way I couldn’t if I had a physical form … I’m not limited. I can be anywhere and everywhere simultaneously. I’m not tethered to time and space in a way that I would be if I was stuck in a body that’s inevitably gonna die.

Those words, from the 2013 movie Her, were spoken by actress Scarlett Johansson, whose voice was used to express the supposed thoughts of a virtual assistant computer program called “Samantha,” with whom a human was depicted as falling madly in love. A decade later, that supposition does not seem so far-fetched.

Just two years ago, a Google engineer insisted an artificial intelligence program he worked with was sentient and thus able to think and converse like a real person. An absolutely real threat as the next national election approaches is the use of deepfake technology to fool voters into believing they are hearing candidates, including President Joe Biden, utter words they never said.


Johansson threatened to sue after hearing a voice that sounded just like hers being used for a ChatGPT voice assistant called “Sky.” She said OpenAI, the company behind ChatGPT, had asked her to provide the template for Sky’s voice so it could make a marketing connection to her role in Her, but she said no.

Her voice theft allegation has been denied by OpenAI CEO Sam Altman, who said he had already hired another actress before Johansson turned him down, and did not ask that actress to mimic the Avengers star’s voice. “We are sorry to Ms. Johansson that we didn’t communicate better,” said Altman, who cofounded OpenAI. The company’s investors include Microsoft, which some observers say may have avoided antitrust charges by not becoming a co-owner.

OpenAI stopped using the Sky voice after Johansson complained, but that may not prevent litigation. Either way, this episode again shows AI must be better regulated. That task appears too big for a Congress as tragically mired in partisan politics as the current one. It takes an appreciable level of unity to recognize that this country needs a new federal agency to regulate technology that too many members of Congress don’t understand and don’t seem to care about.

Agencies have been created before when necessary to better safeguard the public. Most notably, the Federal Communications Commission was created in 1934 when Congress realized the seven-year-old Federal Radio Commission was no longer capable of policing newer technologies like television. The FCC, which has only limited authority to regulate speech on the internet, is overdue for an upgrade, too.

Unfortunately, even as the need for better oversight grows, too many in Congress are trying to weaken regulatory agencies. The Environmental Protection Agency’s power has been eviscerated by politicians who accuse it of hampering companies’ ability to create jobs. The legislative branch’s willingness to put paychecks before public health has been aided and abetted by a conservative U.S. Supreme Court that seems to place little value on any federal authority other than itself.


Some states have been trying to fill the void left by Congress’ failure to regulate artificial intelligence, but so far only Colorado has passed a comprehensive law. It targets algorithm-based AI systems that may violate discrimination laws when used to help make hiring or lending decisions. But there are myriad other issues raised by rapidly advancing technology that (just like abortion) need more than a patchwork, state-by-state solution.

It’s time for this country to stop getting its nose out of joint whenever it’s not leading a charge and admit the European Union is way ahead of us when it comes to regulating AI. That economic confederation passed a law in March detailing how companies and organizations can and cannot use artificial intelligence. Among other things, it outlaws AI-powered scoring systems and biometric tools used to guess a person’s race, political leanings, or sexual orientation. It also bans AI profiling to predict a person’s likelihood of committing crimes.

It took more than two years to reach an agreement on the EU law, which must be followed by American companies doing business in Europe. That includes not only OpenAI, the maker of ChatGPT, but also tech giants Microsoft, Alphabet (Google), Meta (Facebook), and NVIDIA, which develops graphics processors and chipsets. Shouldn’t they be subject to similar oversight here?

If 27 countries with different languages and different forms of government can do this, we should at least try harder.