For a new generation of 'digital natives', privacy is no longer a requirement. Web 2.0 has brought with it a transformation in how we view the need for privacy and engage with the public realm - but at what cost? The discussion will be prefaced by a keynote address from Daniel J. Solove, Associate Professor of Law at the George Washington University Law School, and author of The Digital Person: Technology and Privacy in the Information Age. Chaired by Irish Times writer Karlin Lillington, the panel will also feature Irish blogging guru Damien Mulley and solicitor/digital rights expert Caroline Campbell.
Issues to be considered include:
* Can bloggers say what they like?
* What's wrong with having nothing to hide?
* Who is really stalking you on Facebook? Does anyone care anymore?
* Is there a generation gap in approaches to online privacy?
Wednesday, June 25, 2008
Symposium - Privacy v. Publicity in the Virtual World
The Darklight Film Festival is hosting what should be a very interesting symposium on Privacy v. Publicity in the Virtual World this Friday, June 27th in the Film Base, Curved Street, Temple Bar at 10am:
Monday, June 23, 2008
Civil servants' illegal disclosure of personal information is "routine and very comprehensive"
The Independent has an update on the Data Protection Commissioner's investigation into the Department of Social and Family Affairs:
Fourteen employees of the Department of Social and Family Affairs are being investigated for allegedly passing comprehensive personal information to insurance companies on a regular basis.
I've blogged before about other examples of this Department's disregard for citizens' privacy.
The Irish Independent has learned that some of the alleged breaches -- which came to light in April 2007 -- involve "one of Ireland's largest insurance companies" and date back to 2006.
The allegations involve the passing of personal and sensitive information, contained on data systems within the Department of Social and Family Affairs (DSFA), to third parties for commercial benefit.
The DSFA carries all personal details on all individuals in the state including PPS numbers, dates of birth, addresses as well as earnings details.
Private investigators work for the insurance companies to compile cases against drivers. But there is concern about the level of information that the inspectors for the insurance companies are obtaining.
Data Protection Commissioner Billy Hawkes said in an email to the DSFA last June: "I inspected five investigator files yesterday during a planned call back to X (large insurance company).
"This revealed very-worrying levels of disclosure from the DSFA to private investigators. From what I could discern, such disclosures are routine and very comprehensive."
Thursday, June 19, 2008
Data protection and bulletin boards
John Breslin of (amongst other things) Boards.ie has an interesting post on a data protection complaint from a banned user. The complaint? After the banning, all the posts he had previously made appeared with the word "Banned" next to them (which is the default setting for many forum software packages). The view of the Data Protection Commissioner was that this was an unauthorised disclosure of personal information (i.e. the user's status on the site), apparently on the basis that the username was very close to his real name:
Quite apart from the narrow data protection aspect of this particular case, it raises an interesting issue about the social dynamics of social software and whether the law might hinder effective moderation.
One of the ways in which moderators on forums discourage certain behaviour is by putting users into a sin bin or banning them. Going one step further by naming and shaming - i.e. publicising the sanction by labelling posts from those users - has a social effect in two ways. At a general level, it may help to reinforce the norms of the site by publicly signalling that certain types of behaviour are unacceptable; at the individual level, it may also act as a deterrent to the user, who knows that any sanction against them will be publicised.
If this sounds familiar it's because this argument mirrors, on a much smaller scale, the role of publicity in the criminal justice system. It also mirrors the increasing tendency in other areas for public bodies to "name and shame", whether it be young offenders in England or the list of tax defaulters in Ireland who settle with the Revenue.
The broader issue this raises is whether naming and shaming is an acceptable option - and if acceptable in (e.g.) the context of tax defaulters, why not in the context of troublesome users? Should it matter whether it's a public or private body naming and shaming? Should it matter that the gravity of the "offence" is much greater in one case than the other? If bulletin boards / forums can't publicly reveal which users have been banned or sin-binned, will this make the life of moderators more difficult?
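The tension here can be made concrete. Below is a minimal, hypothetical sketch (not Boards.ie's or any real forum package's code) of how a forum might render a byline beside a post: the ban itself and its public disclosure are separate design decisions, so a site can enforce the sanction while suppressing the "Banned" label that gave rise to the data protection complaint.

```python
# Hypothetical sketch: a ban and its public labelling are independent.
# All names here are invented for illustration.

from dataclasses import dataclass

@dataclass
class User:
    username: str
    banned: bool = False

def post_byline(user: User, show_ban_label: bool = True) -> str:
    """Return the byline shown beside a post.

    With show_ban_label=True (the default in many forum packages),
    a ban is publicly disclosed; with False, the ban still applies
    but is not broadcast to other readers.
    """
    if user.banned and show_ban_label:
        return f"{user.username} [Banned]"
    return user.username

def may_post(user: User) -> bool:
    # The sanction itself does not depend on whether it is publicised.
    return not user.banned

alice = User("alice", banned=True)
print(post_byline(alice))                        # alice [Banned]
print(post_byline(alice, show_ban_label=False))  # alice
print(may_post(alice))                           # False
```

The point of the toggle is that the deterrent effect of publicity and the enforcement of the ban can be decoupled, which is precisely the choice the Commissioner's view would force on site operators.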
Tuesday, June 10, 2008
How not to protect a domain name - the D4hotels saga
Remember D4hotels.com - the low cost hotels site which completely failed to protect variants of its name against cybersquatters? Well it now transpires that ownership of D4hotels.com itself is contested:
Update (27.1.09): It now seems that this case has been settled.
A dispute over ownership of the D4hotels.com domain name and website has come before the Commercial Court.
While there's very little detail in this report, it suggests that there was no explicit agreement as to ownership of the intellectual property in the domain name and the site itself - which, if true, is one of the most fundamental mistakes one can make when establishing an online business. This, together with the failure to protect domain name variants, means that I will be using this case in class as a cautionary tale.
MJBCH Ltd, the leaseholder of the former Berkeley Court Hotel and the former Jury's hotels in Ballsbridge and The Towers, claims exclusive entitlement to the operation and management of the domain name and website.
It has alleged it had a hotel operation and management agreement with the two defendant companies -- Cloud Nine Management Services Ltd and Beechside Company Ltd, trading as The Park Hotel, Kenmare -- to manage the hotels as the Ballsbridge Inn, Ballsbridge Towers and the Ballsbridge Court hotel, but that agreement was terminated in February.
In those circumstances, it claims the defendants have no entitlement to use the d4 domain name and website.
...
The defendant companies deny the claims and say they at no time abandoned their rights to or property in the domain name, website or business name.
The companies say that, under their agreement with MJBCH of October 2007, they were authorised to act as the exclusive operator and manager of the hotels and that the domain name D4hotels.com was registered by Beechside in September 2007.
They also say the management agreement was summarily terminated by MJBCH in February and that at no stage had it been agreed the D4 domain name and website would become the property of MJBCH.
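The defensive registration point is easy to operationalise. The sketch below is an illustrative (and deliberately non-exhaustive) generator of confusable variants of a domain name - the kind of list a business would work through and register before cybersquatters do. The variant rules and TLD list are my own assumptions, not anything from the case.

```python
# Hypothetical sketch: enumerating common confusable variants of a
# domain name for defensive registration. The rules below are
# illustrative only; a real exercise would cover typos, homoglyphs
# and many more TLDs.

def domain_variants(name: str, tld: str = "com") -> list[str]:
    """Return common confusable variants of <name>.<tld>."""
    variants = set()
    # Hyphenation variants: d4hotels -> d4-hotels, etc.
    for i in range(1, len(name)):
        variants.add(f"{name[:i]}-{name[i:]}.{tld}")
    # Singular/plural variant.
    if name.endswith("s"):
        variants.add(f"{name[:-1]}.{tld}")
    else:
        variants.add(f"{name}s.{tld}")
    # The same name under other popular TLDs.
    for other in ("net", "org", "ie", "co.uk"):
        variants.add(f"{name}.{other}")
    return sorted(variants)

for v in domain_variants("d4hotels"):
    print(v)
```

Checking which of these are actually registered (via WHOIS) and securing the open ones is cheap insurance compared with litigating ownership after the fact.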
NY Attorney General forces ISPs to filter Internet
In another bad day for the end to end principle, the New York Times reports that the Attorney General of New York has succeeded in forcing ISPs to filter their users' internet connections. The expressed motivation is to prevent users from accessing child pornography, though this will be trivially easy to circumvent. There are many problems with internet filtering, and I've written a short summary of them (in a different context) for the Digital Rights Ireland blog. But the New York scenario raises one particular problem - whether this form of censorship, implemented and administered by private actors (who will face an incentive to overblock), can be reconciled with the rule of law. The issues raised are very similar to those presented by the UK Cleanfeed system, about which Colin Scott and I had this to say at the inaugural TELOS Conference last year:
Edit (13.06.08): Richard Clayton indicates that the New York Times coverage may be inaccurate. He suggests that what the ISPs have agreed to is limited to removing certain newsgroups and taking down sites which they host - but does not include filtering of sites hosted elsewhere. There's also some confusion as to just what the effect on usenet will be, with Declan McCullagh reporting that in the case of Verizon all the newsgroups in the alt.* hierarchy will no longer be offered.
This presents a number of challenges for the rule of law. Even if an individual ISP’s actions can be described as voluntary, the effect is to subject users without their consent to a state mandated regime of internet filtering of which they may be unaware. The Internet Watch Foundation (IWF), which determines which URLs should be blocked, has a curious legal status, being a charitable incorporated body, funded by the EU and the internet industry, but working closely with the Home Office, the Ministry of Justice, the Association of Chief Police Officers and the Crown Prosecution Service. There is no provision for site owners to be notified that their sites have been blocked. While there is an internal system of appeal against the designation of a URL to be blocked, that mechanism does not provide for any appeal to a court – instead, the IWF will make a final determination on the legality of material in consultation with a specialist unit of the Metropolitan Police.
Orin Kerr has more.
Consequently the effect of the UK policy is to put in place a system of censorship of internet content, without any legislative underpinning, which would appear (by virtue of the private nature of the actors) to be effectively insulated from judicial review. Though the take-up of the regime may be attributable to the steering actions of government, the way in which the regime is implemented and administered complies with neither the process nor the transparency expectations which would attach to legal instruments.
There is also cause for concern about the incentives which delegating filtering to intermediaries might create. From the point of view of the regulator, requiring intermediaries to filter may allow them to externalise the costs associated with monitoring and blocking, perhaps resulting in undesirably high levels of censorship. But perhaps more worrying are the incentives which filtering creates for intermediaries. Kreimer has argued that by targeting online intermediaries regulators can recruit “proxy censors”, whose “dominant incentive is to protect themselves from sanctions, rather than to protect the target from censorship”. As a result, there may be little incentive for intermediaries to engage in the costly tasks of distinguishing protected speech from illegal speech, or to carefully tailor their filtering to avoid collateral damage to unrelated content. Kreimer cites the US litigation in Center for Democracy & Technology v. Pappert to illustrate this point. In that case more than 1,190,000 innocent web sites were blocked by ISPs even though they had been required to block fewer than 400 child pornography web sites.
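The scale of the collateral damage in Pappert largely reflects a technical shortcut: blocking by IP address rather than by URL takes down every site sharing the same server. The toy model below illustrates the mechanism; the hostnames and addresses are invented (drawn from reserved documentation ranges), not taken from the case.

```python
# Toy model of IP-based overblocking: on shared virtual hosting,
# many unrelated sites resolve to one IP address, so blocking the
# target's IP silently blocks its innocent neighbours too.
# Hostnames and IPs are invented for illustration.

dns = {
    "illegal.example":   "203.0.113.7",
    "florist.example":   "203.0.113.7",    # same shared host
    "bookclub.example":  "203.0.113.7",    # same shared host
    "elsewhere.example": "198.51.100.42",  # different host
}

# The ISP is ordered to block one site, and does so by IP.
blocked_ips = {dns["illegal.example"]}

def reachable(host: str) -> bool:
    return dns[host] not in blocked_ips

collateral = [h for h in dns
              if h != "illegal.example" and not reachable(h)]
print(collateral)  # the innocent sites caught by the block
```

With cheap, coarse blocking available and no sanction for overblocking, an intermediary whose "dominant incentive is to protect themselves" has little reason to invest in finer-grained, URL-level filtering.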
Sunday, June 08, 2008
The Future of the Internet and How to Stop It
Jonathan Zittrain's superb new book The Future of the Internet and How to Stop It is now available for free download. His central theme is that the freedom associated with general purpose PCs and an end-to-end internet is increasingly being threatened - a variety of forces (including a push by the content industry for DRM, security fears, and state regulation) are leading towards a growth in "tethered appliances" outside the control of their users, coupled with increased internet filtering and gatekeeping. The result is to dramatically shift the balance struck by the law and possibly to threaten traditional freedoms. From the synopsis:
iPods, iPhones, Xboxes, and TiVos represent the first wave of Internet-centered products that can’t be easily modified by anyone except their vendors or selected partners. These “tethered appliances” have already been used in remarkable but little-known ways: car GPS systems have been reconfigured at the demand of law enforcement to eavesdrop on the occupants at all times, and digital video recorders have been ordered to self-destruct thanks to a lawsuit against the manufacturer thousands of miles away. New Web 2.0 platforms like Google mash-ups and Facebook are rightly touted—but their applications can be similarly monitored and eliminated from a central source. As tethered appliances and applications eclipse the PC, the very nature of the Internet—its “generativity,” or innovative character—is at risk.
A must read.