Your Shameless Host
Hi! Welcome to the personal blog of Jason Stirk (Griffin) - a slightly unhinged web application developer living in Lismore, NSW (yes, that's in Australia).
I run a software consulting company called Aurora Software.
As Dave Hughes might say, "I'm angry tonight…". I'm angry because Australia's Internet is on a nasty road - the road to censorship. If you look to your left, you'll see ISPs forced to implement a technology doomed to fail. To your right, you'll see the Australian public getting shafted. Just up ahead, we'll see parents taking even less responsibility for what their kids do on the Internet.
It's not getting that much airtime outside of the IT community currently, but the ALP is pushing for it to be mandatory for all ISPs to take part in a system to filter out nasty content on the Internet. Sure, this will have its perks, but at what cost?
In this post, I'm going to take you on a quick tour of the pros, the cons, and probably some knee-jerk conspiracy mongering thanks to recently reading 1984. Should be fun!
The ALP made a lot of noise during the recent election about giving Australia World Class Broadband - not before time! In Australia we've been plagued with high broadband costs, mediocre bandwidth and quotas (sorry, couldn't resist!), and even worse infrastructure.
To give you some idea, it's not uncommon for Australian ISPs to charge anywhere from $10/Gb to $90/Gb (That's extreme though) for network traffic. Our particular host charges $35 down to $15 per Gb, and that's for a co-located server, which one would expect to be cheaper thanks to the bulk of data.
Contrast this with other nations, such as the US where it's not uncommon to see $1/Gb excess charges, or even hosting providers offering 1TB of bandwidth for $20/month.
Sure, Australia isn't the worst, but for an advanced, affluent country, we're paying a huge amount for Internet access. I don't know exactly why this is the case, but I suspect a great deal of it owes to the fact that we don't have that many options when it comes to physical connections in and out of the country - undersea cable is expensive, and satellite communications only has a certain amount of bandwidth that it can handle. As such, it's going to take a heap of capital to get more bandwidth into Australia, and then far more infrastructure to get it delivered everywhere.
With regards to web filtering, it is not an uncommon thing to do. Home users have some solutions (albeit usually fairly unreliable), but it's very common for schools, Internet cafes, and other businesses to do some filtering. For example, the WA Department of Education and Training uses a very coarse-grained access filter to block common and obvious websites that are not suitable for minors. Every school I've ever worked at or been involved with has run their own filtering solution on their own network in order to keep tabs on, and filter, what students are looking at.
A few years ago, China also announced that they would start filtering web traffic, and this received a fair amount of condemnation from those who heard about it in most Western societies. That may have been mainly because they were filtering content that was anti-governmental, and news sources external to the strictly controlled Chinese media.
Some other countries, including the UK also currently filter their Internet traffic, however I am not 100% certain to what degree, or of the specifics.
First off, the concept of filtering the Internet has a few lofty goals - we can protect children from unsuitable content such as pornography, we can protect society from websites that might incite hate crimes or violent behaviour, and it gives us another way to stop and track paedophiles and those who are involved with child pornography.
These are all important things - I don't believe that anyone here would condone child pornography, or say that hatemongering, or exposing children to pornography or graphic violence are things that we should be defending. As a society we already do a fair deal to protect against these types of things - movies and TV shows have ratings (don't get me started on the lack of an R-rating for games in Australia, though!); hate and violence is condemned and we have laws and legislation to discourage it; we have task forces of police and civil servants who work locally and internationally to hunt and stop child pornography and paedophilia.
Simply put, I don't think it's going to work as well as the proponents think it will. I primarily put that down to the fact that those espousing these sorts of solutions neither fully understand the technical details on the implementation, nor do they understand what it is like to be sitting behind one of these sorts of solutions.
Anyone who has ever been at one of the aforementioned schools which uses content filtering knows that it's not an exact science - there are a few ways to manage these sorts of content filtering proxies, and none of them are perfect. As someone who has both run a system of this nature and spent time trying to get around them, I can tell you they're unreliable and require a large amount of maintenance. Furthermore, the filter needs to be updated often, and very rapidly after unsuitable content is found.
Let's look at a few of the technical options to run a filtering system such as this.
You can use keywords - for example, a program can look through every website as it is loaded, and if it contains certain words then it will be blocked. I'm sure you can guess most of the four-letter words that schools frequently block, and if you let the dirty side of your mind wander a little, you'll probably work out a few more. Keyword filtering sucks. We've seen this in other problem spaces too - just look at SPAM. As soon as solutions emerged that looked for obvious SPAM words such as "viagra" and "penis", SPAMmers quickly took to obfuscating them. As such, we now get emails offering us "v1agr4 3n1arg3 y0ur P3N!S" and crap like that. We've used these kinds of solutions at schools, and I can guarantee that they're not perfect. Is "breast" a keyword for pornography, or for Breast Cancer Awareness?
In addition to a keyword solution being unreliable, the infrastructure required to search for these keywords across all Australian HTTP traffic would be immense and expensive. Keyword filtering is not a good idea from a technical standpoint.
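To make the breast-cancer problem concrete, here's a minimal sketch of how a naive keyword filter works. The blocklist and sample pages are made up for illustration - real filters are fancier, but they fail in exactly these two ways:

```python
# A toy keyword filter: block a page if any listed word appears in its text.
BLOCKED_KEYWORDS = {"breast", "viagra"}

def is_blocked(page_text: str) -> bool:
    """Return True if any blocked keyword appears anywhere in the page."""
    text = page_text.lower()
    return any(word in text for word in BLOCKED_KEYWORDS)

# A legitimate health page gets caught - a false positive:
print(is_blocked("Donate to Breast Cancer Awareness research"))  # True

# Trivial obfuscation slips straight through - a false negative:
print(is_blocked("Buy v1agr4 now!"))  # False
```

Context-free substring matching can't tell pornography from a health charity, and the moment you publish the keyword list, obfuscation defeats it - the same arms race we lost with SPAM.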
The other typical solution is to watch what everyone is viewing and review those pages in order to find new "bad stuff" to add to your list of blocked sites. I did this for about two years while I was in high school: each day I would check the traffic logs on the proxy server and have a quick look for inappropriate content. To do this reliably, every page that is viewed must be reviewed, checked for any content deemed "inappropriate", and then blocked. Notwithstanding that the process of identifying these "bad pages" as they are navigated to will be a bottleneck, the manpower required to review these pages would need to be immense in order to be anywhere near useful. Certainly, much of the traffic would be duplicated (and so might not need to be checked again), but it would not be possible to effectively review content such as snippets of information returned via AJAX, pages requiring authentication or cookies, or the result of POSTed content.
Reviewing sites has the computing overhead of using keywords, plus is human-intensive, is prone to errors, and won't even work for a bunch of the use cases. As such, that's not a suitable option.
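The daily log-review chore I describe above boils down to something like the following sketch. The log format and the "already reviewed" set are assumptions for illustration (loosely modelled on a squid-style access log), but the shape of the problem is real: every new hostname is a manual review job for a human:

```python
# Pull hostnames out of a proxy access log and list the ones a human
# hasn't reviewed yet. Every entry here is invented for illustration.

reviewed = {"example.edu", "news.example.com"}

access_log = """\
1199145600.123 192.168.1.10 GET http://example.edu/page.html
1199145601.456 192.168.1.11 GET http://unknown-site.example/pics
1199145602.789 192.168.1.10 GET http://news.example.com/story
"""

def hosts_to_review(log: str, reviewed: set[str]) -> set[str]:
    """Return the set of hostnames in the log not yet reviewed."""
    hosts = set()
    for line in log.splitlines():
        url = line.split()[-1]          # last field is the URL
        host = url.split("/")[2]        # crude parse: scheme://host/...
        hosts.add(host)
    return hosts - reviewed

print(sorted(hosts_to_review(access_log, reviewed)))  # ['unknown-site.example']
```

Deduplication helps, but the review queue still grows with the web itself - and none of this sees AJAX responses, authenticated pages, or POSTed content, which never show up as simple reviewable URLs.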
Finally, you could just block pages that you get complaints about. I'm not sure if I need to point out the stupidity of this idea. The likelihood of someone complaining about inappropriate content is minimal, as they actually have to visit it in order to be able to complain. Consider that goatse.cx, one of the best known shock sites, was available for over 4 years before anyone actually complained about it - I consider this to be a very prominent example, as goatse was (and still is) remarkably well known for its offensive nature. Yet, it still took several years before someone found it offensive enough to report it.
So let's jump forward to an Australia where this kind of filtering has been made mandatory. Who is going to pay for the core infrastructure to maintain the lists, employ the maintainers, etcetera? It will be us, the taxpayers, who will be paying for a system that will likely never provide a suitable level of protection. Additionally, ISPs will need to expand or modify their infrastructure to handle the new technical requirements. If we assume that the Government mandates that ISPs must filter the traffic themselves (based upon filtering lists, or something similar), the cost to process that traffic will be passed on directly to customers. I can't see iiNet, Telstra, or any ISP of any size for that matter, saying "Hey, it's OK guys - we'll spend this wad of cash on new hardware, new personnel to maintain it, legal advice on whether or not we comply, etc. We won't charge you any more!".
Furthermore, this is an additional roadblock for new ISPs, who will not only have to compete with the big players, but also bear the cost of complying with the legislation and of finding out whether they comply in the first place.
However, the concept of ISPs managing the filtering themselves is a ludicrous idea - what is there to stop whoever is implementing the filters from ensuring that there is still a workaround for private or community use? Geeks are, stereotypically, against censorship and therefore I consider it inevitable that such a workaround will be made in at least one location.
Ok, so let's jump forward some more to an Australia where the filtering has been made mandatory, and it's being implemented somewhere, whether that be at a core Governmental location, or at the ISP level.
Who the hell is going to decide what is and what isn't appropriate? Certainly, some things are black and white - child pornography is well out, fuzzy bunnies are in! But what about the grey areas? What about tasteful (artistic) nudity? What about classic art or religious texts that suggest or promote violence and hate? What about graphic cruelty to animals such as is frequently featured on peta.org and other environmentalist sites? What about goatse, tubgirl, and all those shock sites?
You might think that the mention of goatse and tubgirl wasn't warranted, but I believe these are strong examples of content that is very offensive, but only in certain circles. Does that make them "nationally" inappropriate? Who decides so? I've been exposed to young Mr. Goatse so many times that he doesn't bother me any more (one of my 18th birthday cards included him - what a surprise!). However, I'd warrant that my grandparents (or most other people) would be less than pleased to find it in a Christmas card. It is how the legislation deals with these grey areas that best indicates the future direction of the legislation. Personally, I don't want to see pictures of a raccoon being beaten to death with a cudgel, but people who visit peta.org are likely very passionate about such things! Can I ask for it to be blocked as inappropriate?
Will adding something to the filter need the approval of a judge or court? What about a vote of parliament? A public vote? A council of religious leaders? Before even considering building a list, we need to decide on who should maintain such a thing, and what hard and fast rules we use to place content into that list.
Why is this important? Because it's only a small step from blocking sites that promote anti-social behaviour, to sites that promote fringe political beliefs, to sites of minor political parties or lobby groups. I sure don't support victimless crimes such as incest between adults; however, I do believe that you have a right to put forward that viewpoint. I strongly agree with Duncan when he said that "the problem with censorship is once it starts it rarely stops".
I'm not saying that we shouldn't filter anything - however I do believe very strongly that the filtering has to be entirely transparent and 100% accountable.
Furthermore, the ALP is proposing an opt-out system, and it's not a big stretch to imagine that consumers who asked to opt out could run the risk of being subject to additional monitoring. It's especially popular for proponents of these sorts of measures to say "If you don't support filtering, you must be viewing child pornography, as that's all that's going to be filtered". This kind of straw-man attack is not just a logical fallacy, it's a fundamentally flawed perspective to take. I'm certain that there's something about presumption of innocence in this country, and it's illogical arguments such as these which open the doors to emotive, knee-jerk reactions and allow people to be stripped of their rights in the name of "the greater good".
Don't get me wrong - I'm not under any illusion that our Internet traffic is not currently monitored and tracked for a range of purposes. Call me a raving conspiracy theorist nutjob, but it's happening. I know enough about the technology to know it's within the realm of possibility, and I've met enough people who have told me first hand of their involvement with it, or of their discussions with people who have been involved with it. Whether it's by ECHELON or something else, we're being monitored. I'm at ease with it - as someone who's grown up on the Internet, and has more stuff on the Internet than most of my friends actually know about me, I'm under no illusion that this plan by the ALP would be the first effort to keep track of what we're doing and reading on the Internet. I've got absolutely nothing to hide, but that doesn't mean that I'm not entitled to a modicum of privacy!
That said, I don't believe that we need any more monitoring, especially not monitoring that is being pushed using emotive knee-jerking under the misconception that it will actually make any difference in the fight against child pornography, hate crimes and the sliding of society down the proverbial toilet.
Earlier in this essay, I referred to a better solution, one that is distributed, low cost, and has beneficial side-effects.
It's called parenting.
I'm not a parent. I don't have kids. I am, however, a Scout Leader, and I know what it's like to try and manage what children are up to. They're smart, sneaky, cunning little beasts.
However, that is no excuse for you to not know what your children are up to. You're older, and you're meant to be smarter too. If your child is on the Internet and you don't know what they're up to, then quite honestly, it's your own fault when they stumble upon something you don't want them to. I'm not saying that parents need to sit next to their children at the computer (where's the fun in that!), but all too many parents buy Johnny and Suzie a computer, put it in a room out of the way (so that the incessant beeping isn't annoying) and let them go wild. Put the computer in a public place, and check what they're up to. Hell, take an interest in what they're up to!
Parents will get a bit offended at that. "But when they're at school! When they're at a friend's place! But! But! But!".
No. No buts!
Schools already have an interest in protecting your child from this kind of content. There isn't a school I know of, or an education department, that has any excuse for not filtering out the most run-of-the-mill garbage from their students. It's not easy, and it's not perfect, but with a combination of teacher supervision and these lightweight technical solutions, it does the job. If the school your children go to doesn't have something like this, then you should be asking why.
At a friend's house, you would hope that your child is being given the same level of supervision as you might provide at home. Do you trust the parents of your children's friends to stop them playing with knives, matches or guns? Of course you do. The Internet is a tool, exactly the same. You should be watching your children's friends just as you would your own.
No level of technical measures will be anywhere near as effective as having someone who knows better keeping an eye on your children. Oh, and you get the benefit of actually having a relationship with your offspring. Pfft! How would that work again!?
Ok, I've been pretty negative all this post, so let's look for another alternative.
Get the Government involved in a distributed, open-source-esque initiative for blacklists. We're talking entirely transparent, entirely open blacklists that proxies (in particular, schools) can use in order to limit children's access to this "inappropriate" content. But make it entirely optional! Give ISPs the choice of using the solution in an opt-in arrangement. Don't make it some sort of Big-Brother solution whereby the government has control over what is added, and what is removed. Allow folks to subscribe to the protection they want - pornography, hate and violence, gambling, etc. Produce a resource that can be used by multiple technologies, extended, and most importantly - verified.
As much as I hate the anti-spam blacklists, a model similar to that would perhaps work well - the Australian Government produces a collection of blacklists for "bad" content of different grades, and other groups can take these lists and build upon them, remove incorrectly marked content, and things like that. Give the power back to the consumer, not some power-hungry bureaucrat working towards their own end!
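The opt-in, category-based model I'm describing could be as simple as this sketch. The category names, list contents, and URLs here are all invented for illustration - the point is that the subscriber, not a bureaucrat, chooses which lists apply:

```python
# A hypothetical opt-in blacklist lookup: a household subscribes to the
# categories it wants filtered, and only hosts on those lists are blocked.
from urllib.parse import urlparse

# Openly published, category-graded blacklists (contents invented).
BLACKLISTS = {
    "pornography": {"example-adult-site.com"},
    "gambling": {"example-casino.com"},
}

def is_filtered(url: str, subscribed: set[str]) -> bool:
    """Return True if the URL's host is on any list the user opted into."""
    host = urlparse(url).netloc.lower()
    return any(host in BLACKLISTS[cat] for cat in subscribed & BLACKLISTS.keys())

# A household that only opted into the gambling list:
print(is_filtered("http://example-casino.com/poker", {"gambling"}))  # True
print(is_filtered("http://example-adult-site.com/", {"gambling"}))   # False
```

Because the lists are open, anyone can audit them, fork them, and strip out incorrectly-marked content - which is precisely what a centrally-controlled, secret list can never offer.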
Suit yourself. If all of this hasn't swayed you, then I doubt anything can. I'm just hoping that enough technical people point out the glaring stupidity of this kind of solution before more of our taxpayers' dollars are wasted by yet another inept government.
Hey! Welcome to 2008! Man, this is going to be fun!