The European Commission is considering imposing an hour-long deadline for social networks to remove terrorist and extremist content after voluntary measures appear to have failed.
Brussels is preparing a fresh bid to get social networks to stamp out extremist content.
It is drafting new rules after concluding that a voluntary self-regulation programme wasn't working, according to a report in the Financial Times.
Facebook, Twitter and Google-owned YouTube agreed in 2016 to review and remove a majority of hate speech within 24 hours.
The category includes racist, violent or illegal posts.
The commission told social media platforms in March to take down terrorist content within an hour of it being flagged, warning that new laws could be written if the companies did not comply.
EU officials said back then they would give tech firms three months to report back.
The new draft regulation will affect Twitter, Facebook and Google among others, with fines if they fail to take down terrorist material within an hour.
While the details of the measures are still unknown, they would likely be based on the EU guidance from March.
The clampdown would lead to the EU ending its current approach – where the firms self-police – in favour of explicit rules.
The EU's commissioner for security, Julian King, told the Financial Times on Sunday that the EU would "take stronger action in order to protect our citizens". The law would apply to small social media apps as well as the bigger players.
"Platforms have differing capabilities to act against terrorist content and their policies for doing so are not always transparent," he added.
The draft regulation is due to be published next month.
It will need to be approved by the European Parliament and a majority of EU states before it can come into force.
Euronews asked Twitter, Facebook and Google to comment.
Google claims that 90 percent of terrorism-related content uploaded to YouTube is detected and removed automatically.
However, a recent study from The Counter Extremism Project suggests that ISIL content is still being uploaded and remains available for hours before it is removed.
Facebook says it has removed 1.9 million pieces of content related to ISIL, al-Qaeda, and their affiliates.
Twitter has closed 1.2 million accounts for promoting extremist content to date. In its most recent 'transparency report', Twitter revealed that between July and December last year, a total of 274,460 accounts were permanently suspended for violations related to the promotion of terrorism. The company says 74 percent of those accounts were suspended before their first tweet.
If the EU's proposed regulation is accepted, it will be the first time the European Commission has openly targeted tech firms' handling of illegal content.