Is artificial intelligence enough to thwart natural stupidity?

Posted 1/24/18

Is there a more suitable metaphor for our time than the current craze of Tide Pod Challenge videos that have prompted far too many people to report to hospitals after ingesting the tiny plastic packets of laundry detergent?

For those who haven’t heard – or who have and have wisely chosen not to believe their ears – 2018 began with an epidemic of people seeking medical treatment after swallowing plastic packets of Tide laundry detergent. So significant was the problem that Tide was convinced it needed to shoot a video ad featuring New England Patriots tight end Rob Gronkowski, who looks at the camera – without irony – to say, “Use Tide Pods for washing. Not eating.”

That’s a true story, not a work of satire.

As is the case with nearly every bad idea of the last decade, social media and online sharing services have been blamed for the public’s fixation on consuming laundry detergent. Facebook and YouTube (owned by Google) have received the most blame for providing the bandwidth to the sirens of Tide. Both have promised to take measures to guard against the videos and have begun the process of taking them down.

The Tide pod challenge is ridiculous, but it is just one more piece of evidence that our online platforms are not working. From notorious individual mistakes, like that of YouTuber Logan Paul, who released a video showing the body of a man who had died by suicide, to the ongoing crisis over whether “fake news” spread by Russian agents on social media was enough to decide our last presidential election, there’s a growing consensus that these digital platforms are problematic. And yet, there is no similar agreement on how to fix them.

Both Google and Facebook have vowed to improve their platform code to recognize and block inappropriate, dangerous and illegal content. Facebook devised a scheme to crowdsource the effort by having users identify sources as trusted or not, in effect subjecting news sources to a popularity contest. I have news for Facebook: The public is not a great judge of this.

U.S. law has long allowed these platforms to be freewheeling. In 1996, federal law – Section 230 of the Communications Decency Act – freed online providers from liability for the content their users post. Unlike traditional media, in which editors are responsible for the veracity of everything they publish, the thinking two decades ago was that holding online service providers and web portals (remember AOL and Yahoo?) responsible in the same way didn’t make sense.

Today, with major advances in technology, the reach and function of these platforms and the mountains of cash they accumulate in advertising revenue suggest the time to hold them to a higher standard may have arrived. While promises of tweaking AI and other algorithms demonstrate that Google, Facebook, et al., recognize this, it’s hard to believe such tweaks will be enough. If nine members of the Supreme Court couldn’t agree on what good standards should be, how could an algorithm?

Tech companies need to sort this out fast, or it won’t be long until something even worse than Tide pods and fake news comes along. Given how incredible these stories have been so far, who knows what will come next?

Pete Mazzaccaro
