Content Filtering: Weaving A Safer Web

Are your users’ browsers safe online? You may have a port-blocking firewall, anti-virus protection on the desktop, and a policy of reprimanding employees who surf porn from office networks. Even with these measures in place, the web is a dangerous place for your users. Criminals are becoming increasingly adept at weaponising the web, in ways that companies aren’t aware of. It is now unrealistic for users to trust any website – even legitimate ones such as news sites. Why is that? And how can you protect your network?

URL filtering has historically been the single most important way to protect users against malicious content. Security vendors provide scanners that check requested URLs, or the IP addresses behind them, against a database of known malicious sites. This approach has several benefits. Customers can configure scanners to block certain sites, including legitimate ones that they consider too insecure or unproductive, such as Facebook, MySpace, or Twitter.

These databases are frequently updated with new sites as they are reported. Phishing sites that attempt to scam users into handing over personal details are a common category of URL to block. Others include inappropriate content, such as pornography. URL filtering products have become increasingly sophisticated, with many including scheduling (you might want to block Twitter access outside lunch hours, for example). Some advanced filtering systems will also enable administrators to grant access to certain sites for particular employees or groups. An organisation might want to block Facebook access to all but the marketing department, which is using it to orchestrate a social media marketing campaign.
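The blocklist, scheduling, and group-override features described above can be sketched as a simple rule check. Everything here – the site lists, the lunch-hour window, and the 'marketing' group – is an illustrative assumption, not the configuration of any particular product:

```python
from datetime import datetime, time

# Illustrative rule set: a blocklist, a lunch-hour exemption, and a
# per-group override (all names and sites are assumptions).
BLOCKED_SITES = {"facebook.com", "myspace.com", "twitter.com"}
LUNCH_EXEMPT = {"twitter.com"}                      # allowed 12:00-13:00 only
GROUP_OVERRIDES = {"marketing": {"facebook.com"}}   # marketing may use Facebook

def is_allowed(host, group, now=None):
    """Return True if the request should pass the URL filter."""
    now = now or datetime.now()
    if host not in BLOCKED_SITES:
        return True                                  # not on the blocklist
    if host in GROUP_OVERRIDES.get(group, set()):
        return True                                  # group-level exception
    if host in LUNCH_EXEMPT and time(12, 0) <= now.time() < time(13, 0):
        return True                                  # scheduled exemption
    return False
```

In a real product these rules would be driven by a vendor-maintained, frequently updated database rather than hard-coded sets, but the decision logic follows the same shape.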

Concealed threats

URL filtering is far from the only tool available, though. Malicious web content is becoming increasingly difficult to spot. There was a time when web users could be safe by simply steering away from certain web destinations, such as pornography or warez sites, because these were the shady neighbourhoods of the web where browsers were hijacked, malicious scripts automatically downloaded, and computers ‘pwned’.

These days, however, any website is fair game. BusinessWeek, CNet, and the UN have all served up malicious scripts in the last few years. Even Paul McCartney’s site was for a time recruiting visitors’ computers into a botnet. But Sir Paul has better things to do than orchestrate computer crime networks. So how does it happen?

Criminals have become increasingly adept at hacking websites, using a variety of techniques, so that they can alter their content. Some of them use FTP passwords stolen from already-compromised machines. Others use ‘SQL injection’ attacks, in which poorly written website code fails to properly validate text entered into search boxes or passed as parameters in the site’s web address. This allows the criminal to alter text inside the database that serves up the website content, causing it to put malicious scripts on the page, or to embed IFRAME tags – effectively tiny windows invisibly pointing to a malicious server. This enables the criminal to infect the browser via legitimate websites that users trust.
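The validation failure behind SQL injection can be illustrated with a small sketch using an in-memory SQLite database; the table, rows, and attack string are invented for the example:

```python
import sqlite3

# A toy content database standing in for the one that serves up a site's pages.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pages (id INTEGER, body TEXT)")
conn.execute("INSERT INTO pages VALUES (1, 'Welcome to our shop')")
conn.execute("INSERT INTO pages VALUES (2, 'Private admin notes')")

def vulnerable_lookup(page_id):
    # UNSAFE: the parameter is pasted straight into the SQL text, so an
    # attacker who controls page_id controls the query itself.
    return conn.execute(f"SELECT body FROM pages WHERE id = {page_id}").fetchall()

def safe_lookup(page_id):
    # SAFE: a bound parameter is always treated as data, never as SQL.
    return conn.execute("SELECT body FROM pages WHERE id = ?", (page_id,)).fetchall()
```

Passing the string `1 OR 1=1` to the vulnerable version rewrites the query’s logic and dumps every row, while the parameterised version treats the same string as an ordinary (non-matching) value – which is why validating or binding all user-supplied input closes this class of attack.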

If users can’t trust any website that they visit, content scanning is crucial. A URL filter might decide that a small ecommerce site selling floral arrangements is a valid destination, even though the site has been compromised by hackers and will attempt to infect any PC that visits it. A content scanner, by contrast, examines what the page actually serves up, rather than simply where it lives.

Content scanning

Good content scanners will check for key words that could indicate malicious or illegitimate content, while also watching for the invisible content, such as scripts or IFRAME tags with links to other URLs, that could be damaging to a user’s browser. They will feature dynamic link analysis, which combs through the content of websites looking for internal links that could point to malicious sites. And they will monitor and clean search engine results, helping to thwart ‘blackhat’ search engine optimisation experts who specialise in poisoning legitimate search results with links to malicious websites.
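As a rough sketch of one such check, the snippet below flags IFRAMEs sized to be invisible and scripts loaded from other hosts. Real scanners use far richer heuristics; the size thresholds and the trusted host used here are assumptions for illustration:

```python
from html.parser import HTMLParser

class PageScanner(HTMLParser):
    """Flag hidden iframes and off-site scripts in an HTML page."""

    def __init__(self, trusted_host):
        super().__init__()
        self.trusted_host = trusted_host
        self.findings = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "iframe":
            # A 0x0 or 1x1 iframe is invisible to the user - a classic
            # way to pull in content from a malicious server unnoticed.
            if a.get("width") in ("0", "1") or a.get("height") in ("0", "1"):
                self.findings.append(("hidden-iframe", a.get("src", "")))
        elif tag == "script":
            # Scripts fetched from hosts other than the site's own
            # deserve closer inspection.
            src = a.get("src", "")
            if src.startswith("http") and self.trusted_host not in src:
                self.findings.append(("external-script", src))

# Example: a compromised page containing an invisible iframe.
page = '<p>Hi</p><iframe src="http://evil.example/x" width="1" height="1"></iframe>'
scanner = PageScanner("shop.example")
scanner.feed(page)
```

After `feed()`, `scanner.findings` holds the suspicious elements; a gateway product would use such findings to block or strip the page before it reaches the browser.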

Dynamic link analysis, malware scanning, and cloud-based intelligence are all important techniques used by the best web-based security products to help weed out malicious content online. Zyxel includes web security as part of its overall unified threat management (UTM) offering to protect customers across all channels, including the web, email, and instant messaging. It works with Bluecoat to incorporate specialist web security content scanning into its systems, giving customers both broad coverage and a best-of-breed approach to protection.

Small businesses need the web to increase productivity and keep up with the competition. But they don’t need the threats to profitability that access to inappropriate or malicious sites brings. Investing in URL filtering and content scanning is therefore an important aspect of any company’s IT infrastructure.
