Microblogging site Twitter has failed in its protection of children, the Child Exploitation and Online Protection Centre (CEOP) has said.
According to reports, CEOP believes that Twitter is lagging behind other social networking sites and has urged it to do more to protect its younger users from online predators.
CEOP added that while it does receive reports from Twitter relating to inappropriate material, these form a very small proportion of the 1,000 reports it receives each month “relating to a wide range of online environments”.
While Facebook and Bebo have worked with CEOP to include panic buttons on their sites, letting users report their concerns with a single click, Twitter users must first search the site and identify the offender’s email address before reporting measures can be taken.
Despite this, Twitter claims that safety is a high priority and that it acts immediately on complaints of inappropriate behaviour. According to a BBC report, the company plans within the next few months to have a team working 24 hours a day to investigate complaints.
Davies said: “Twitter have removed illegal images and other content on our request. We believe more can be done around the moderation of Twitter feeds and the strengthening of Twitter’s reporting mechanisms.”
However, Mark Williams-Thomas, a former police detective and child protection expert, told the BBC that some users were still active on Twitter days or weeks after their behaviour had been reported to the company.
“There is always going to be a problem with social networking sites, because where there is an opportunity offenders will seek that out,” added Williams-Thomas.
“Clearly what Twitter needs to do is to take responsibility for its users. And when they identify there is somebody promoting child abuse material, swapping it or even discussing it the site must come down straight away,” he said.
Twitter defends this approach, claiming that in certain circumstances law enforcement investigations need to be considered: while content might be disturbing, removing it immediately could be detrimental to a criminal case being built against users.