The council said it was suing the French branches of the two companies for “broadcasting a message with violent content abetting terrorism, or of a nature likely to seriously violate human dignity and liable to be seen by a minor”, according to the complaint.
In France, such acts can be punished by three years’ imprisonment and a €75,000 (£64,000) fine.
Facebook said it quickly removed the live video of the attack on two mosques by a white supremacist in Christchurch on 15 March in which 50 people were killed.
But recordings of the livestream were shared extensively on YouTube and Twitter, and internet platforms had to scramble to remove reposted videos.
The CFCM, which represents millions of Muslims in France, said Facebook did not take the broadcast down until 29 minutes after it began.
Major internet platforms had pledged to crack down on the sharing of violent images and other inappropriate content through automated systems and human monitoring, but critics say those measures are not working.
Platforms have cooperated to develop technology that filters child pornography but have stopped short of joining forces on violent content.
A US congressional panel last week called on senior executives from Facebook, YouTube, Microsoft and Twitter to explain the online proliferation of the “horrific” New Zealand video. The House committee on homeland security said it was “critically important” to filter out the kind of violent images seen in the video.