Internet sites could be fined or blocked if they fail to tackle "online harms" such as terrorist propaganda and child abuse, under UK government plans.
Britain's Department for Digital, Culture, Media and Sport has proposed an independent watchdog and a code of practice that tech companies would have to follow.
Senior managers would be held liable for breaches, with a possible levy on the industry to fund the regulator.
But one think tank called the plans a "historic attack" on freedom of speech.
The Online Harms White Paper covers a range of issues, including the spread of terrorist content, child sexual abuse, so-called revenge pornography, hate crimes, harassment and "fake news".
Ministers also say social networks must tackle material that encourages self-harm and suicide, which became a prominent issue after 14-year-old Molly Russell took her own life in 2017.
After she died, her family found distressing material about depression and suicide on her Instagram account. Molly's father holds the social media giant partly responsible for her death.
Unveiling the proposals, Digital, Culture, Media and Sport Secretary Jeremy Wright said: "The era of self-regulation for online companies is over.
"Voluntary actions from industry to tackle online harms have not been applied consistently or gone far enough."
Last week Australia passed a law making it an offence for social media companies to fail to remove violent content swiftly, punishable by fines of up to 10% of annual global turnover and the imprisonment of executives. The move was a response to footage of the attacks on two mosques in Christchurch, New Zealand.
What do the proposals say?
The paper calls for an independent regulator to hold internet companies to account.
Such a regulator would be funded by the tech industry. The government has not decided whether a new body will be established, or an existing one handed new powers.
The regulator will define a "code of best practice" that social networks and internet companies must adhere to.
As well as Facebook, Twitter and Google, the rules would apply to messaging services such as Snapchat and cloud storage services.
The regulator will have the power to fine companies and publish notices naming and shaming those that break the rules.
The government says it is also considering fines for individual company executives and making search engines remove links to offending websites.
Ministers "envisage" that fines and warning notices to companies will be included in an eventual bill. They are further consulting over blocking harmful websites or stopping them from being listed by search engines.
Code of best practice
The white paper offers some suggestions that could be included in the code of best practice.
It suggests the spread of fake news could be tackled by forcing social networks to employ fact-checkers and promote legitimate news sources.
But the regulator would be free to draw up the code itself.
The white paper also says social media companies should produce annual reports revealing how much harmful content has been found on their platforms.
The children's charity NSPCC has been urging new regulation since 2017 and has repeatedly called for a legal duty of care to be placed on social networks.
A spokeswoman said: "Time's up for the social networks. They've failed to police themselves and our children have paid the price."
Censorship concern
But TechUK, an umbrella group representing the UK's technology industry, said the government must be "clear about how trade-offs are balanced between harm prevention and fundamental rights".
Matthew Lesh, head of research at free market think tank the Adam Smith Institute, said the government should be "ashamed of themselves for leading the western world in internet censorship".
"The proposals are a historic attack on freedom of speech and the free press," he said.
"At a time when Britain is criticising violations of freedom of expression in states like Iran, China and Russia, we should not be undermining our freedom at home."
The free speech campaign group Article 19 warned that the government "must not create an environment that encourages the censorship of legitimate expression".
A spokesman said: "Article 19 strongly opposes any duty of care being imposed on internet platforms.
"We believe a duty of care would inevitably require them to proactively monitor their networks and take a restrictive approach to content removal.
"Such actions could violate individuals' rights to freedom of expression and privacy."
- BBC