Peter Menzies: Save Taylor Swift. Stop deep-fake porn
Tweak an existing law to ensure AI-generated porn that uses the images of real people is made illegal.
By: Peter Menzies
Hey there, Swifties.
Stop worrying about whether your girl can make it back from a tour performance in Tokyo in time to cheer on her boyfriend in Super Bowl LVIII.
Please shift your infatuation away from your treasured superstar’s romantic attachment to Kansas City Chiefs’ dreamy Travis Kelce and his pending battle with the San Francisco 49ers. We all know Taylor Swift’ll be in Vegas for kickoff on Feb. 11. She’ll get there. Billionaires always find a way. And, hey, what modern woman wouldn’t take a 27-hour round trip flight to hang out with a guy ranked #1 on People’s sexiest men in sports list?
But right now, Swifties, Canada needs you to concentrate on something more important than celebrity canoodling. Your attention needs to be on what the nation’s self-styled feminist government should be doing to protect Swift (and all women) from being “deep-faked” into online porn stars.
Because that’s exactly what happened to the multiple Grammy Award-winner last week when someone used artificial intelligence to post deep-fakes (manipulated images of bodies and faces) of her that spread like a coronavirus across the internet. Swift’s face was digitally grafted onto the body of someone engaged in sexual acts and poses in a way that was convincing enough to fool some into believing that it was Swift herself. Before they were contained, the deep-fakes were viewed by millions. The BBC reported that a single “photo” had accumulated 47 million views.
For context, a 2019 study by Deeptrace Labs identified almost 15,000 deep-fakes on streaming and porn sites — twice as many as the previous year — and concluded that 96 per cent were recreations of celebrity women. Fair to assume the fakes have continued to multiply like bunnies in springtime.
In response to the Swift images, the platform formerly known as Twitter — X — temporarily blocked searches for “Taylor Swift” as it battled to eliminate the offending depictions, which still found ways to show up elsewhere.
X said it was “actively removing” the deep-fakes while taking “appropriate actions” against those spreading them.
Meta said it has “strict policies that prohibit this kind of behavior” adding that it also takes “several steps to combat the spread of AI deepfakes.”
Google DeepMind launched an initiative last summer to improve detection of AI-generated images, but critics say it, too, struggles to keep up.
While the creation of images to humiliate women goes back to the puerile pre-internet writing of “for a good time call” phone numbers on the walls of men’s washrooms, the use of technology to abuse women shows how difficult it is for governments to keep pace with change. The Americans are now pondering bipartisan legislation to stop this, the Brits are boasting that such outrageousness is already covered by their Online Safety Act and Canada, so far … appears to be doing nothing.
Maybe that’s because it thinks that Section 162.1 of the Criminal Code, which bans the distribution or transmission of intimate images without permission of the person or people involved, has it covered.
To wit, “Everyone who knowingly publishes, distributes, transmits, sells, makes available or advertises an intimate image of a person knowing that the person depicted in the image did not give their consent to that conduct, or being reckless as to whether or not that person gave their consent to that conduct, is guilty of an indictable offence and liable to imprisonment for a term of not more than five years.”
Maybe Crown prosecutors are confident they can talk judges into interpreting that legislation in a fashion that brings deep-fakes into scope. It’s not like eminent justices haven’t previously pondered legislation — or the Charter, for that matter — and then “read in” words that they think should be there.
Police in Winnipeg launched an investigation in December after AI-generated fake photos were spread. And a Quebec man was recently convicted of using AI to create child pornography — a first.
But anytime technology overrides the law, there’s a risk that the former turns the latter into an ass.
Which means there’s a real easy win here for the Justin Trudeau government which, when it comes to issues involving the internet, has so far behaved like a band of bumbling hillbillies.
The Online Streaming Act, in two versions, was far more contentious than necessary because those crafting it clearly had difficulty grasping the simple fact that the internet is neither broadcasting nor a cable network. And the Online News Act, which betrayed a complete misunderstanding of how the internet, global web giants and digital advertising work, remains in the running for Worst Legislation Ever, having cost the industry it was supposed to assist at least $100 million and helped it double down on its reputation for grubbiness.
First promised in 2019 and now anticipated in the spring, the Online Harms Act has been rattling around Department of Heritage consultations ever since. Successive heritage ministers have failed to craft anything that’ll pass muster with the Charter of Rights and Freedoms, so the whole bundle is now with Justice Minister Arif Virani, who replaced David Lametti last summer.
The last thing Canada needs right now is for the PMO to jump on the rescue-Taylor-Swift bandwagon and use deep-fakes as one more excuse to create, as it originally envisioned, a Digital Safety czar with invasive ready-fire-aim powers to order takedowns of anything they find harmful or hurtful. Given its recent legal defeats linked to what appears to be a chronic inability to understand the Constitution, that could only end in yet another humiliation.
So, here’s the easy win. Amend Section 162.1 of the Criminal Code so that the use of deep-fakes to turn women into online porn stars against their will is clearly in scope. It’ll take just a few words. It’ll involve updating existing legislation that isn’t the slightest bit contentious. Every party will support it. It’ll make you look good. Swifties will love you.
And, best of all, it’ll actually be the right thing to do.
The Line is entirely reader funded — no federal subsidy for us! If you value our work, have already subscribed, and wish to offer us a tip or a top up, please consider a donation today.
The Line is Canada’s last, best hope for irreverent commentary. We reject bullshit. We love lively writing. Please consider supporting us by subscribing. Follow us on Twitter @the_lineca. Fight with us on Facebook. Pitch us something: lineeditor@protonmail.com
"Save Taylor Swift?" For heaven's sake, get a grip. The medium is the message here. Whether you're watching the Olympics, a moon landing, a building collapsing, a fistfight, a murder, or deep fake porn, in the end you're just watching TV--an arrangement of coloured dots on a screen. None of it is real, and everybody knows it. Any number of actresses who would never in a million years appear nude in public have willingly allowed simulations of themselves to appear nude on screens because they understand this difference. They aren't being "abused," no matter how many viewers regard their screen simulations; and neither is Taylor Swift in this case.
If we're going to indulge in moral panics, let's at least confine them to issues involving actual harm to people. Prohibitions against harmful behaviour one can at least understand; the impulse to censor screen images has never made any sense. All manner of alarming situations and despicable behaviours get depicted on screens that we wouldn't hope to see in real life, and that's fine: what would cinema be without this freedom? Whether or not Taylor Swift posed for the pictures in question really is irrelevant here, as is the extent to which the images correspond to anything in the real world. Either way, it isn't her on the screen.
Where Ms. Swift may have a legitimate grievance is if someone is commercially profiting from her image in some illegitimate way, which is a different matter. But there's nothing in this incident that warrants the scandalized tone of the article, or could be said to legitimize the hysterical overkill of blocking searches on Taylor Swift altogether, even "temporarily." Such overreaction sets an unfortunate precedent that would-be censors will almost certainly try to exploit.
P.S. Pretty much one hundred percent of this bizarre uproar is attributable to our hypocritical, irreformably puritanical approach to sexual matters. Photoshopping heads onto bodies has been within our capacity for years, and of course we've gotten better at it and other kinds of image-tampering, a trend that increasing competence with A.I. tools will doubtless accelerate. But nobody would right now be blocking internet searches, or calling for more internet controls, simply because some Facebook user had doctored a photo to make it appear that he/she and Taylor Swift were baking cookies together. A picture like that might generate some debate, but it wouldn't precipitate a moral panic.
As always, the porn industry seems to be leading the adoption of new technology. A broader concern is where else will AI-enabled deepfake technology make an impact on our lives? I'm concerned about the ability of political partisans or foreign actors to introduce fake and fraudulent videos into public discourse.
Mere photoshopped pictures attached to satirical articles from "The Onion" and "The Babylon Bee" are already interpreted as real news all too often and require debunking for years afterwards. What happens when somebody deepfakes video of a politician giving a speech as if they're saying something truly scandalous?
Changing the law to make AI-enabled fakes illegal is something, but I'm not sure how enforceable it is. I think the answer here may be more along the lines of setting up a verification scheme that would voluntarily be used to authenticate *real* content, a bit like Verisign authentication for websites. It'd be far from foolproof, but it would be a quick test to sort out the good actors from the rest.
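The verification idea sketched above is essentially content authentication: a publisher attaches a cryptographic tag to genuine material, and anyone can later check that the bytes haven't been altered. As a minimal illustration only — real provenance standards such as C2PA use public-key signatures so verifiers never hold a secret, and the key and content here are hypothetical placeholders — the core check looks like this:

```python
import hashlib
import hmac

# Hypothetical secret held by the publisher. A real scheme would use a
# public/private key pair so anyone could verify without the secret.
PUBLISHER_KEY = b"publisher-private-key"

def sign_content(content: bytes) -> str:
    """Produce an authentication tag the publisher attaches to genuine content."""
    digest = hashlib.sha256(content).digest()
    return hmac.new(PUBLISHER_KEY, digest, hashlib.sha256).hexdigest()

def verify_content(content: bytes, tag: str) -> bool:
    """Recompute the tag and compare in constant time; altered bytes fail."""
    return hmac.compare_digest(sign_content(content), tag)

original = b"official campaign video bytes"
tag = sign_content(original)
print(verify_content(original, tag))            # True: authentic
print(verify_content(b"deepfaked bytes", tag))  # False: altered or fabricated
```

Note that, as the commenter says, this only certifies authentic content; it can't flag a fake that never claimed a tag in the first place, which is why adoption by "good actors" is the whole game.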