We May Need to Stymie AI
OSTP goes all-tech, and AI goes big — but there is some hope it may be a flash in the pan
In December, before he took office, Donald Trump created a role known as the “AI and crypto czar,” and named billionaire David Sacks to fill it. Sacks is considered part of the “PayPal Mafia,” a group of former PayPal employees and executives that includes Elon Musk and Peter Thiel. Like Musk, Sacks was born in South Africa.
As part of this, Trump also nominated Michael Kratsios to lead the Office of Science and Technology Policy (OSTP).
The only science in his background isn’t the kind you might expect: Kratsios holds a BA in political science.
Along with computer scientist Lynne Parker, who retired from the University of Tennessee, Knoxville, last May, Kratsios will form an OSTP team reporting up through Sacks.
The OSTP has become all about technology, and mostly about AI.
This is not the only way AI is being shoved down our throats.
- In the case of the Oscar-nominated film The Brutalist, it has been literally shoved down the actors’ throats, being used to refine their Hungarian dialogue.
- As one expert says, “AI here appears to be directly altering/improving an element of the actor’s performance, [which] could be seen as calling into question the authenticity of that performance. Would the average moviegoer really care if the lead actors were speaking perfect Hungarian? Given Hollywood’s catalogue of awards-nominated horror accents, I’d say no.”
Microsoft has pushed Copilot into the Office suite, and according to a paralegal with a solid track record investigating tech misbehavior, it won’t be possible to disable it for a month or more. For law firms, physicians, or others with client confidentiality concerns, there appears to be no way to tell Copilot which information is confidential and must not be reused across other documents, or even which documents should be off-limits.
Information leaks from other AI devices in different ways. Alexa and Siri are always listening, after all . . .
In an interesting interview last week on the Freakonomics Radio podcast, computer scientist Ben Zhao discussed how to thwart AI through both technology and economics.
Zhao is a professor of computer science at the University of Chicago. While he loves technology, AI doesn’t impress him much, and he believes creators need to be protected.
Zhao and his team have been building tools to give users more control, or to make AI so inefficient at scraping and ingesting content that licensing becomes the more financially sensible approach.
One invention Zhao’s team has created is called “The Bracelet of Invisibility,” a nod to their Dungeons & Dragons days. Currently bulky but sure to be streamlined, the bracelet works by emitting ultrasonic signals, silent to human ears, that ordinary microphones pick up and distort into interference, jamming them so they cannot record any nearby speech.
Another approach, designed to “poison” AI models that try to scrape training data without permission, is called Nightshade. It leaves images looking the same to human viewers, but the pixels carry subtle perturbations that act as poison pills when scraped, corrupting the training data and forcing the AI company to incur costs troubleshooting, correcting, or redoing the work.
The goal is to make scraping so expensive that licensing is the most efficient and reliable choice.
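To make the idea concrete, here is a minimal sketch of this style of data poisoning, assuming a PyTorch setup: an imperceptibly small pixel change that drags an image’s machine-readable features toward a decoy concept. This is not Zhao’s actual Nightshade code; the feature extractor (resnet18), the perturbation budget, and the `poison` helper are all illustrative assumptions.

```python
# Illustrative sketch of feature-space data poisoning, NOT Nightshade itself.
# Assumption: the scraper's model "sees" images through a ResNet-18 feature
# extractor; Nightshade's real optimization and threat model are more involved.
import torch
import torch.nn.functional as F
import torchvision.models as models
import torchvision.transforms.functional as TF
from PIL import Image

extractor = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
extractor.fc = torch.nn.Identity()            # keep penultimate-layer features
extractor.eval()
for p in extractor.parameters():              # only the image gets optimized
    p.requires_grad_(False)

def poison(cover: Image.Image, decoy: Image.Image,
           eps: float = 4 / 255, steps: int = 40, lr: float = 1 / 255) -> Image.Image:
    """Perturb `cover` within +/- eps per pixel so the extractor's features
    move toward those of `decoy`, while the image still looks unchanged."""
    x = TF.to_tensor(TF.resize(cover, [224, 224])).unsqueeze(0)
    y = TF.to_tensor(TF.resize(decoy, [224, 224])).unsqueeze(0)
    with torch.no_grad():
        target = extractor(y)                 # where we want the features to land
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = F.mse_loss(extractor(x + delta), target)
        loss.backward()
        with torch.no_grad():
            delta -= lr * delta.grad.sign()               # step toward the decoy
            delta.clamp_(-eps, eps)                       # imperceptibility budget
            delta.copy_((x + delta).clamp(0, 1) - x)      # keep pixels valid
        delta.grad.zero_()
    return TF.to_pil_image((x + delta).squeeze(0))
```

A human sees the original artwork; a model trained on enough such images learns the wrong associations, which is exactly where the troubleshooting and retraining costs Zhao describes come from.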
Zhao has other approaches that are fascinating to read about. He also thinks AI is overhyped and running out of what it needs most — information:
There is an exceptional level of hype, like we’ve never seen before. That bubble is in many ways in the middle of bursting right now. . . . There’s been many papers published on the fact that these generative AI models are well at their end in terms of training data. To get better, you need something like double the amount of data that has ever been created by humanity. And you’re not going to get that by buying Twitter or by licensing from Reddit or New York Times or anywhere. You see now recent reports about how Google and OpenAI are having trouble improving upon their models. It’s common sense, they’re running out of data. And no amount of scraping or licensing will fix that.
Zhao also notes the hype may lead to heartbreak:
And then, of course, just the fact that there are very few legitimate revenue-generating applications that will even come close to compensating for the amount of investment that VCs and these companies are pouring in. Obviously, I’m biased, doing what I do, but I’ve thought about this problem for quite some time. And honestly, these are great interpolation machines, these are great mimicry machines, but there’s only so many things that you can do with them. They are not going to produce entire movies, entire TV shows, entire books to anywhere near the value that humans will actually want to consume. They can disrupt, and they can bring down the value of a bunch of industries, but they are not going to actually generate much revenue in and of themselves. I see that bubble bursting, and so what I say to these students oftentimes is that things will take their course, and you don’t need to push back actively. All you need to do is to not get swept along with the hype.
Unfortunately, the hype now sits atop major agencies of the US government, abetted by a transactional President seeking to enrich himself and his cronies.
It’s not surprising that the new Administration looks like a perfect moment for Big Tech to exploit. After all, when MAGA met Big Tech, it might have been more like looking in the mirror than we’d like to imagine:
- Toxic masculinity, with misogyny implicit
- Disrespect for individual rights
- Respect for greed and raw power
- Disdain for democratically established laws and customary norms
- Willingness to dance with dictators if it means greater wealth
Now, we have the two together. It’s a convergence of similar belief systems that seeks to marginalize others. And with OSTP’s repositioning and nominated leadership, it looks like technology is the pursuit of the day, and science is being left out in the cold.
If only to reassert discovery and evidence-based work for the benefit of all, we may need to stymie AI as a government priority, or funding for the future may be redirected to pay for machines chewing over texts of the past.