By Paul Brislen*
Opinion - Calls for social media to be regulated have escalated following the platforms' failure to act decisively in the public interest during the terror attacks in Christchurch.
The cry has been growing ever louder over the past few years. We have seen Facebook refuse to attend UK parliamentary sessions to discuss its role in the Cambridge Analytica affair, watched its CEO testify but not exactly add any clarity to inquiries into Russian interference in the US election, and seen the company accused of failing to combat the spread of hate speech amid violence in Myanmar.
US representatives are now openly talking about how to break up the company and our own prime minister has suggested that if Facebook can't find a way to manage itself, she will.
But how do we regulate companies that don't have offices in New Zealand (aside from the odd sales department) and that base their rights and responsibilities on another country's legal system?
And if we are going to regulate them, how do we do it in a way that avoids trampling on users' civil rights while making sure we never see a repeat of the events of 15 March?
Politicians have traditionally been rubbish at regulating the internet, and not just local ones. While the EU got its laws regarding privacy absolutely right, it is also currently grappling with two new regulations that will destroy the ability to share content online, because it doesn't seem to fully appreciate how the internet actually works. And then there's Australia, which has introduced controversial new laws about encryption.
There is every danger that we will overstep the mark and regulate the social media and tech giants in such a way as to make our own lives worse than they were before, and that's something that needs to be taken into account before we start.
Let's start by making it clear that if these companies want to operate in New Zealand they must abide by New Zealand law. Shouldn't be too hard since they all say "oh yes, we always operate under local legal constraints" wherever they are in the world.
In Germany, for instance, with its harsh penalties for Holocaust denial and the display of Nazi symbols, Twitter, Facebook, Instagram and all the rest manage to stay on the right side of the law by routinely filtering out such content. If they can do it in Germany, they can do it here.
So what laws do we currently have in place that might provide a platform to work from?
In New Zealand we have the Films, Videos, and Publications Classification Act to protect us from the type of content nobody really wants to see. If the content meets the criteria, it's deemed objectionable and anyone caught with it, or caught sharing it, can expect a hefty fine and jail time.
But prosecuting individuals caught actively sharing the video of the Christchurch mosque shootings under the Act isn't likely to prompt changes in the social media platforms themselves.
We could start by making this Act apply to the content hosts as well as to the uploaders. Currently, under the Harmful Digital Communications Act there is a safe harbour arrangement: if you do the right thing by the law and act quickly to remove the content, you can go about your business. I'd like to see that beefed up.
Let's see how quickly they can respond, make it mandatory to report on a quarterly basis how many complaints they receive about content and how they acted on each complaint. Let's put a time frame in - say 24 hours to assess and remove content. Let's put in some real incentives as well - rather than a $10,000 fine let's move to a model that will really get their attention. How about $50 million or 4 percent of global revenue? Per offence.
Let's not leave the decision-making on what is and isn't objectionable to minimum-wage monitors based in the US who don't know New Zealand law. Let's require that the community standards applied to New Zealand content for New Zealand users are based on New Zealand law.
And if we're going to have live streaming video footage uploaded by anonymous individuals, let's have a look at how best we can monitor and manage that. All video to be tagged with a hash, for starters. A hash is a short digital fingerprint derived from the file's contents, so if a video needs to be pulled from public view, every copy of it can be found and removed quickly.
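To make the idea concrete, here is a minimal sketch of how such a fingerprint can be computed, using Python's standard hashlib library. This is an illustration, not how any particular platform does it; in practice platforms also use perceptual hashing, because an exact hash changes if the video is re-encoded or trimmed by even one byte.

```python
import hashlib

def video_fingerprint(path, chunk_size=8192):
    """Return a SHA-256 hex digest identifying a file's exact contents.

    Any byte-identical copy of the file produces the same digest, so a
    known objectionable video can be matched and removed on re-upload.
    The file is read in chunks so large videos don't need to fit in memory.
    """
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # iter() calls f.read(chunk_size) until it returns b"" (end of file)
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()
```

A platform could keep a blocklist of digests for banned videos and check each upload's fingerprint against it before the content goes live.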
And let's have actual moderators looking at actual live feeds with the power to hit the "dump" button and remove content if it's offensive. Social media is fantastically quick to remove copyright material (and indeed material that it thinks is covered by copyright law) but incredibly slow to act on everything else so let's change that dynamic.
Let's hold senior leaders to account for any breaches of the law - just as we've introduced personal liability for company directors.
Privacy is an area that needs strengthening as well. Our Privacy Act is currently being reviewed but in light of the events of last week it probably needs to be looked at through a new lens. The Privacy Commissioner needs to be able to act decisively and act with some force.
While we're at it, let's introduce a tougher financial reporting regime. Facebook made around $800 million from New Zealand users last year so let's see it pay tax locally. There's work underway on this - I'd like to see it accelerated and scaled up significantly.
Ideally we would work with our counterparts around the globe. We need to work together with other jurisdictions to make sure these companies are compliant and don't simply move virtually to another location.
All companies operate under a social licence. We give Facebook and Twitter, Instagram and WhatsApp a huge amount of data about us and they make a huge amount of money from us, and most of that is because we allow them to. If they're not going to play fairly then the ultimate penalty is to take our ball and go home: uninstalling the app, refusing to pay for advertising and removing ourselves from the equation may be the only options that actually make a difference.
But let's try the regulatory approach first.
* Paul Brislen is a technology commentator