Most of the world's largest tech companies, including Amazon, Google and Microsoft, have agreed to tackle what they are calling deceptive artificial intelligence (AI) in elections.
The 20 firms have signed an accord committing them to fighting voter-deceiving content.
They say they will deploy technology to detect and counter the material.
But one industry expert says the voluntary pact will "do little to prevent harmful content being posted".
The Tech Accord to Combat Deceptive Use of AI in 2024 Elections was announced at the Munich Security Conference on Friday.
The issue has come into sharp focus because it is estimated that up to four billion people will be voting this year in countries including the US, UK and India.
Among the accord's pledges are commitments to develop technology to "mitigate risks" related to deceptive election content generated by AI, and to provide transparency to the public about the action firms have taken.
Other steps include sharing best practice with one another and educating the public about how to spot when they might be seeing manipulated content.
Signatories include the social media platform X (formerly Twitter), as well as Snap, Adobe and Meta, the owner of Facebook, Instagram and WhatsApp.
Companies need to be 'proactive'
However, the accord has some shortcomings, according to computer scientist Dr Deepak Padmanabhan, from Queen's University Belfast, who has co-authored a paper on elections and AI.
He told the BBC it was promising to see the companies acknowledge the wide range of challenges posed by AI.
But he said they needed to take more "proactive action" instead of waiting for content to be posted before then seeking to take it down.
That could mean that "more realistic AI content, that may be more harmful, may stay on the platform for longer" compared to obvious fakes which are easier to detect and remove, he suggested.
Dr Padmanabhan also said the accord's usefulness was undermined because it lacked nuance when it came to defining harmful content.
He gave the example of jailed Pakistani politician Imran Khan using AI to make speeches while he was in prison.
"Should this be taken down too?" he asked.
Tools should not be weaponised - Microsoft
The accord's signatories say they will target content which "deceptively fakes or alters the appearance, voice, or actions" of key figures in elections.
The accord will also seek to deal with audio, images or videos which provide false information to voters about when, where, and how they can vote.
"We have a responsibility to help ensure these tools don't become weaponised in elections," said Brad Smith, the president of Microsoft.
On Wednesday, the US deputy attorney general, Lisa Monaco, told the BBC that AI threatened to "supercharge" disinformation at elections.
Google and Meta have previously set out their policies on AI-generated images and videos in political advertising, which require advertisers to flag when they are using deepfakes or content which has been manipulated by AI.
- This story was first published by the BBC.