Privacy Enhancing Technologies: Their Necessity and Future
Austin Heller
In an online environment, many different communities both interact and
share ideas on a global scale. Due to the rise in the sharing of personal
information, it’s clear that privacy has taken a step back in order to make room
for innovation and prosperity. This does not need to be the cost people pay to
improve society. Too often, privacy rights are violated needlessly, and those violations are in many ways reinforced by an unknowledgeable public. In this way, violations continue
to occur unnoticed and give the impression that the public simply does not care.
Some of these invasions of privacy can be stopped and are being opposed even
today.
With the use of
privacy enhancing technologies (PETs), any user that interacts with the global
network has the ability to maintain their privacy where privacy-protecting
measures are not inherently in place. The creation and widespread use of PETs
has changed the way communication occurs online and how governments react to the
Internet. There is a balance to be had between privacy enhancing tools like Tor,
an anonymity network for browsing the Internet, or Pretty Good Privacy (PGP), a popular and powerful encryption program, and the national security concerns that
result from their use (Levy, 1995; Schultz, 2012). Two such concerns, at least
for a regime that cannot handle criticism, are whistle-blowing and anonymous
communication.
Whistle-blowers
have a place in our world, but they cannot exist without the ability to spread
information anonymously. When governments fear that the community they govern
provides too safe an environment for whistle-blowers, they undoubtedly restrict
communication until they’re satisfied. When this happens, it becomes untenable
to maintain a free marketplace of ideas online. Free expression can be quickly
stifled by a government’s regulation and intervention in ways that may
permanently taint the ability of Internet users to share ideas with other users, some of whom may reside under very different governing bodies and hold valuable points of view. There is more to be gained from anonymous communication than there is to lose in the way of reputation and public government image. PETs are
the current answer to the concerns of the knowledgeable few, but
privacy-protecting legislation that encompasses everyone fairly should be the
goal. When the government does not protect privacy, it sets a precedent in which individuals must be either paranoid and defensive, or docile and submissive, in their demeanor online.
Privacy
The concept of privacy can be expressed in many different ways, some of
which subjectively cover aspects of privacy better than others. One way that
privacy can be considered is in such a way that a person’s personal data is
something that they have a right to control (Birnhack & Elkin-Koren, 2011, p.
8). Under this principle, privacy is primarily about a person’s ability to exercise complete control over all of the information that describes them. Coupled with
the right to control their personal information is the desire to ensure that
they have the ability to maintain their own identity. Identity management, in
terms of societal impact, dictates a person’s relationship with all others and
gives them the power to withhold or make known any information about them. With
this control, a person can determine how they would present themselves and in
what way they would interact with their environment in order to shape their
social role (Phillips, 2004, p. 5). This aspect of privacy (as control) provides
one way of understanding how a specific PET impacts and contributes to solving a
particular problem.
Another aspect of
privacy is having the right to prevent access to a person’s personal
information. Privacy as access is different from privacy as control. As control,
the individual can choose how the information is presented, to whom, and for how
long. As access, the individual is defining themselves as a separate entity from
all others, only permitting access to their information willfully and
specifically (Birnhack & Elkin-Koren, 2011, p. 8). When information is
considered as something that can be accessed, it places the individual in the
position of isolating their information with the desire to be free of intrusion.
Concerns that people face today with “peeping Toms, warrantless searches and laws restricting intimate behavior” (Phillips, 2004, p. 5) are ever present and easily understood when privacy is framed in terms of access. These concerns are
just another reason why it is vital to support the development and spread of
PETs; more often than not, once a person’s privacy has been violated online, it
becomes a terrible uphill battle to control its spread.
Invasions of Privacy
Invasions of privacy as control tend to relate more to companies and
content providers with regard to patents and intellectual property rights. This is clear from how privacy as control figures into an intellectual property right holder’s desire to control their information. Although that side of the argument is interesting in its own right, with regard to PETs, invasions of privacy as access play a much stronger role in the reason for their development and use. Due to the very nature of the
Internet, creating a bubble of protection around an individual’s information is
far easier than maintaining control over that same information across the
unfathomable expanse called cyberspace. As users venture farther into this
expanse, the need for new and innovative PETs has become more and more obvious.
Google and Facebook have even begun roaming the public domain, collecting information that might not otherwise alarm the average person if the practice weren’t so pervasive (Hill, 2012). This should have knowledgeable users
wondering how they can protect their information while engaged in public
services on the Internet or even outside the home; it should have them wondering
if it’s even possible.
It is well known that Facebook has taken off as a major social networking
site. Due to this, Google finally introduced a competing social networking
website cleverly named Google+. One problem that this new site suffers from is a
result of how it facilitates communication between users. With Google’s idea of
social circles, a user types out something to the screen and then selects which
circle of pre-selected people will receive the user’s message (Morda, 2011). The
main problem with this approach is the level of demand that is placed on the
user to ensure accurate communication. Users will either maintain many special circles for specific announcements or will inevitably announce something to someone unintentionally. Some have noted, however, that because it’s possible to
create all of the unique circles for each unique circumstance, it’s actually not
too bad of a design at all. In fact, they would argue that the only real issue
with Google+ is that others can add the user in question to their circles
without their consent (Mediati, 2012). There are clearly issues with Google+,
but to say that there isn’t a problem so long as users aren’t lazy isn’t
realistic. Relying on users to be consistently motivated will only ever result
in their submission to a mistaken and unnecessary invasion of privacy.
Given the habit people have formed of freely divulging information online, they may, ironically, want to use Google’s “Me on the Web” service (Thomas, 2011). This service acts much like Google Alerts, but it is geared toward the specific goal of finding what information
can be known about the user online, if at all. It’s argued, though, that the
service was only developed as a means to get people to create Google Profiles
and that if a person’s goal is to actually find information about them online,
they should probably just stick with Google Alerts (Sullivan, 2011). If Google were actually serious about providing users with the highest level of knowledge
about what their search engine knows about them, then Google Takeout, a service
that is stated to provide just that, would actually reveal that data in full
(Null, 2012). It doesn’t, though, and therefore forces the user to take an
extraordinary amount of effort to discover such information when Google could be
providing it forthright.
It’s obvious that Google is trying to perform two contradictory
activities at once. On one hand they want all of their users to be able to feel
secure online, while on the other they want to collect the IP addresses,
queries, and search patterns of their users (Leenes & Koops, 2005, p. 5). The
feeling of security that Google provides, when both of these positions are considered together, can be compared to always having a group of people
sinisterly hovering overhead. It’s possible to ask the group questions, but they
don’t have to give real answers. In this way, Google should either be more
publicly transparent in regards to their actions or own up to their blatant
inconsistencies; their users should be able to depend on them to protect their
privacy.
Luckily, there
are alternatives to using search engines that are centralized and therefore
prone to this type of monitoring. The iTrust system is one such solution that
has been noted to be “particularly valuable for individuals who fear that the
conventional centralized Internet search mechanisms might be subverted or
censored” (Chuang, Lombera, Moser, & Melliar-Smith, 2011, p. 7). This particular
distributed search and retrieval system is not currently being utilized
publicly, but other solutions exist in the meantime such as Faroo (a
peer-to-peer web search engine), ixquick.com (a self-proclaimed private search
engine), Majestic-12 (a distributed web search based on community support for
their results), and YaCy (a distributed web search that is a peer-to-peer web
crawler) (Shark, 2007). Each of these services can be utilized by researchers, journalists, and all other curious parties that want to ask questions but don’t want those of ill intent to know that the questions were asked.
Being able to
search through and browse the Internet anonymously only goes so far as a means
of protecting a person’s privacy online. With the use of embedded widgets from
sites like Facebook, Twitter, and Google+, the usefulness of an anonymous
browsing tool becomes nonexistent. Using social networking widgets defeats the
purpose of browsing anonymously in the most foolish way imaginable; logging into
Facebook in order to leave a comment on a site that the user navigated to
anonymously will obviously violate their anonymity. That aside, if the user
isn’t browsing anonymously, simply loading the page can send Facebook and others
the user’s browsing habits (Eckersley, 2011). In this way, Facebook can know everything about its users regardless of whether its services are even being used.
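To make the passivity of this leakage concrete, the following Python sketch scans a page’s HTML for script and iframe sources pointing at well-known widget hosts; the domain list and target page are illustrative assumptions, not an exhaustive audit. Every match is a request the browser would make to a third party simply by rendering the page.

```python
import re
import urllib.request

# Illustrative widget hosts; a real audit would use a much fuller list.
TRACKER_DOMAINS = ("facebook.com", "twitter.com", "plus.google.com")

def third_party_widgets(url):
    """Return script/iframe sources on a page that point at known widget hosts."""
    with urllib.request.urlopen(url) as response:
        html = response.read().decode("utf-8", errors="replace")
    sources = re.findall(r'<(?:script|iframe)[^>]*\bsrc="([^"]+)"', html)
    return [src for src in sources if any(host in src for host in TRACKER_DOMAINS)]

# Hypothetical page; each hit below is a third-party request made on page
# load, before the user clicks anything at all.
print(third_party_widgets("http://example.com/"))
```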
Compared to the
world that existed prior to Facebook, it’s a bit unusual that today there’s a
social information-gathering giant just a few clicks away. The world has definitely changed, as can be seen in the fact that Facebook has gathered over one billion members. With a user base that massive, they would obviously need to be extra cautious about their users’ privacy settings and any changes they make to them programmatically (Fowler, 2012). Regrettably, they aren’t as cautious as
their users would like. In fact, on more than one occasion Facebook has changed
the privacy preferences of its users from whatever they had personally selected
back to the default and more public settings (Doctorow, 2012, p. 1; Mediati,
2012). Facebook’s privacy settings are arranged in such a way that once a setting’s purpose has been modified, new settings are created or adapted to fill in the gaps. This slicing action that Facebook continues to perform on its users’ privacy settings is a constant source of confusion, one that quite naturally leads to some users becoming too irritated to fix them (Mediati, 2012). It should make users curious about the legality of Facebook’s changes. What was once private is now just another element of the public domain online. Sadly, a privacy setting alteration isn’t the only problem that Facebook users have started to become irritated about.
When companies
share client information, it shouldn’t be immediately interpreted as a terrible
offense so long as the privacy of the individuals whose data is being shared is
secured properly; hospitals need to share patient information with other
hospitals and law enforcement bureaus need to collect information relevant to
criminal cases. Because of this, an entire subfield of cryptography, known as
secure multi-party computation (SMC), has emerged. The purpose of an SMC is to allow different parties to compute results from the private data that the other party members possess while maintaining the privacy of the particular inputs, the users’ personal information (Ishai, Kushilevitz,
Ostrovsky, & Sahai, 2009, p. 1). When designing a method to calculate the
result, privacy as control plays a very large role in defining the methodology
for protecting information. Being able to control exactly how the information is
directed to other parties relates directly to the “control” concept of privacy.
From the perspective of privacy as access, the rule is uniform: the inputs must be strictly protected from being accessed in all ways except to provide the appropriate output of the query or computation.
A common example of an SMC dilemma is the millionaires’ problem: two millionaires want to determine which of them is richer without either one revealing their actual wealth to the other (Sheikh, Mishra, & Kumar, 2011). The same pattern generalizes to any parties that want a joint result, such as a shared total, without pooling their raw data.
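A minimal sketch of the idea in Python, using additive secret sharing; the three hospital case counts are invented for illustration. Each party splits its private input into random shares that only reveal the input when all of them are combined, so the joint total emerges without any single party seeing another’s raw number.

```python
import random

PRIME = 2**61 - 1  # share arithmetic is done modulo a large prime

def share(secret, n_parties):
    """Split a secret into n additive shares that sum to it mod PRIME."""
    shares = [random.randrange(PRIME) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % PRIME)
    return shares

# Three hospitals want a total case count without revealing their own counts.
inputs = [120, 45, 300]                      # each hospital's private input
all_shares = [share(x, 3) for x in inputs]   # each row is one party's shares

# Party i collects the i-th share from every party and publishes only the
# sum of those shares; no one ever sees another party's raw input.
partial_sums = [sum(column) % PRIME for column in zip(*all_shares)]
print(sum(partial_sums) % PRIME)             # 465, the joint result
```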
It’s undeniable that business-to-business (B2B) PETs are nice to have available, but like all PETs, the difficulty of implementing them in a way that allows the businesses to operate in a trusting and straightforward manner becomes a deterrent to their use (Phillips, 2004, p. 10).
Not only that, but because there are only two different models for the
implementation of an SMC (Ideal Model Paradigm and Real Model Paradigm) and that
both have flaws in the area of trust, designing the perfect algorithm for each
situation can be very difficult or impossible depending upon the privacy
regulations of either party or the competence of the business itself (Phillips,
2004, p. 10; Sheikh, Mishra, & Kumar, 2011, p. 2). The Ideal Model Paradigm is
designed with an intermediary trusted third party (TTP) that executes the
request of either party. In this way, neither party has direct access to the
information. The Real Model Paradigm is designed with a protocol that either
party can use to achieve the desired computation. Trusting the TTP in the Ideal Model, or designing a protocol that maintains the privacy of either party’s data in the Real Model, presents the largest challenge in developing a quality SMC.
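A toy rendering of the Ideal Model Paradigm for the millionaires’ problem might look like the following sketch, where the names and figures are invented; the entire trust burden sits inside one function that both parties must believe will neither leak nor misuse their inputs.

```python
# Ideal Model Paradigm sketch: a trusted third party (TTP) receives both
# private inputs and reveals only the agreed-upon result of the computation.

def trusted_third_party(alice_wealth, bob_wealth):
    """The TTP computes the comparison and discloses only who is richer."""
    return "Alice" if alice_wealth > bob_wealth else "Bob"

# Each party trusts the TTP with its input but learns nothing else:
# Bob never sees Alice's figure, and vice versa.
print(trusted_third_party(alice_wealth=4_000_000, bob_wealth=7_500_000))
# -> Bob, the richer party, without either figure being published
```

The Real Model replaces that single trusted function with a protocol, such as the secret-sharing exchange sketched earlier, so that no party ever holds both inputs.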
Although the challenge is complicated when businesses are sharing their clients’ personal information, it only makes it more obvious that everyone should learn how they can protect themselves and their information when they venture online.
Protecting Our Privacy
As the news outlets and blogs continue to report each new privacy
infringement from the large Internet power-houses such as Facebook and Google,
there are motivated people in the world who find the time and resources to
develop PETs (Hill, 2012; Sullivan, 2011). While the developers of these PETs
have gained support from informed users online, one particular so-called PET,
known as the Platform for Privacy Preferences (P3P), has some users questioning its usefulness, or whether it’s actually a PET at all.
Ruchika Agrawal, an IPIOP Science Policy Analyst at the Electronic
Privacy Information Center (EPIC), is one of these people (Zoom Information, 2012). She has concluded that “P3P fails as a privacy-enhancing mechanism
because P3P does not aim at protecting personal identity, does not aim at
minimizing the collection of personally identifiable information, and is on a
completely different trajectory than the one prescribed by the definition of
PETs” (Agrawal, 2002, p. 4). P3P is a protocol that gives websites the ability
to declare how they intend to interact with a user’s information. In this way,
the browser can restrict access to sites that don’t meet the user’s predefined
preferences. This would be fine enough if it worked properly and were supported by more than just Microsoft’s Internet Explorer.
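P3P also defines a compact form of a site’s policy that is sent in an HTTP response header, which makes the declare-and-check cycle easy to sketch in Python; the target URL and the set of disallowed tokens below are illustrative assumptions, not a complete preference language.

```python
import urllib.request

# Compact-policy tokens this user refuses to accept; IVA, IVD, and OTP are
# real P3P tokens (individual analysis, individual decision, other purposes),
# but treating exactly these as deal-breakers is an illustrative choice.
DISALLOWED_TOKENS = {"IVA", "IVD", "OTP"}

def p3p_compact_policy(url):
    """Return the set of P3P compact-policy tokens a site declares, if any."""
    with urllib.request.urlopen(url) as response:
        header = response.headers.get("P3P", "")
    # A compact policy is embedded in the header as: CP="CAO PSA OUR ..."
    start = header.find('CP="')
    if start == -1:
        return None
    end = header.find('"', start + 4)
    return set(header[start + 4:end].split())

tokens = p3p_compact_policy("http://example.com/")  # hypothetical site
if tokens is None:
    print("No P3P policy declared; a strict browser could block its cookies.")
elif tokens & DISALLOWED_TOKENS:
    print("Policy conflicts with user preferences:", tokens & DISALLOWED_TOKENS)
else:
    print("Policy is acceptable under these preferences.")
```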
When it comes down to it, P3P is all about the cookies. A cookie is a
file that a website can store on a user’s computer with the intention of being
accessed in the future. The main function of a cookie is to provide the website
with the understanding of who the user is and what their history is with the
site (Phillips, 2004, p. 7). Cookies remove the burden of saving user-specific
information on the web server, so long as the information can be deleted without
severely impacting the user experience. This is great for keeping track of
information as the user goes from page-to-page, so long as the cookie isn’t
designed to expire or is outright blocked from being created (Leenes & Koops,
2005, p. 5). Firefox and Chrome label cookies as either third-party (used to record the user’s browsing history across many sites) or first-party (used to individualize the user to the site being visited) (Opentracker, 2012). There are many benefits to this aspect of website design, but too often the misuse of cookies becomes a public nuisance.
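To make the mechanism concrete, here is a minimal Python sketch of a first-party cookie as a server might set it; the site name, value, and lifetime are invented for illustration.

```python
from http.cookies import SimpleCookie

# A first-party cookie as a site might set it: it identifies the user to
# this site only and is designed to expire.
cookie = SimpleCookie()
cookie["session_id"] = "a1b2c3"
cookie["session_id"]["domain"] = "example.com"  # hypothetical site
cookie["session_id"]["max-age"] = 3600          # expire after one hour
cookie["session_id"]["httponly"] = True         # hidden from page scripts

# The header the server would send in its HTTP response, e.g.:
# Set-Cookie: session_id=a1b2c3; Domain=example.com; HttpOnly; Max-Age=3600
print(cookie.output())
```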
There’s been a lot of negative press about cookies ever since 2000, when the White House disclosed that its drug policy office had been tracking which ads were most effective at moving users from the site they were viewing to the drug office’s website (Lacey, 2000). This was done by having DoubleClick, an advertising company based out of New York and now owned by Google (Google, 2007), place tracking cookies on visitors’ machines. Google itself now offers an opt-out plugin for its advertising cookies (Google, 2012).
Thankfully, there is another option that accomplishes the same task but from a more respectable source: Adblock Plus (Palant, 2012). With Adblock Plus, a plug-in that can be installed through the user’s browser, ads that would otherwise create and update their cookies are never rendered in the page, so long as the ad matches a filter list the user subscribes to. This is a welcome extension to any popular browser, but it has had an obvious drawback for websites that depend upon ad revenue (Evans, 2007). Webmasters and small websites are therefore taking an economic hit due, in part, to
the fear of the tracking performed by DoubleClick and other online advertising
companies.
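The blocking itself is conceptually simple. A toy Python version of Adblock-style rule matching might look like the following, where the two rules are illustrative and real Adblock Plus filter syntax is considerably richer.

```python
import re

# Toy filter list; in Adblock Plus syntax, "||host^" roughly means
# "this host, any scheme, any path". These entries are illustrative.
FILTERS = ["||doubleclick.net^", "||ads.example.com^"]

def filter_to_regex(rule):
    """Translate the simple '||host^' rule form into a URL regex."""
    host = rule.strip("|^")
    return re.compile(r"^https?://([^/]*\.)?" + re.escape(host) + r"(/|$)")

BLOCKED = [filter_to_regex(rule) for rule in FILTERS]

def should_block(url):
    """Return True if the URL matches any subscribed filter rule."""
    return any(pattern.search(url) for pattern in BLOCKED)

print(should_block("https://ad.doubleclick.net/banner.js"))  # True: never rendered
print(should_block("https://news.example.org/story.html"))   # False: loads normally
```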
To be fair to those websites that use the advertisement business model,
there are other options. An anonymizer or proxy server can be utilized and in
doing so both parties are satisfied. The web master gets paid because the ads
are being rendered, and if the anonymizer doesn’t block cookies, the advertising
company gets to track the movement of users online. The only catch is that the
advertising company won’t be able to know the IP address of the user they’re
monitoring, making it impossible to truly know a user’s identity. That’s the entire purpose of an anonymizer: to make the webpage request come from another location, giving the impression that the user’s computer actually sits where the proxy server does (Agrawal, 2002, p. 2; Phillips, 2004, p. 7; “Take
Control”, 2000). A website like Anonymizer.com is a perfect example, although
rather expensive for the average user (Anonymizer, 2012). There are free
anonymizers, but the quality of the service is exactly what the user pays for.
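In practice, routing traffic through such a service is a one-line change in most HTTP clients. A minimal Python sketch using the third-party requests library follows; the proxy address is a placeholder rather than a real service.

```python
import requests  # third-party HTTP library, assumed to be installed

# Route requests through an anonymizing HTTP proxy; the address is a
# placeholder a real service would replace with its own host and port.
proxies = {
    "http": "http://proxy.example.net:8080",
    "https": "http://proxy.example.net:8080",
}

# The destination server sees the proxy's IP address, not the user's.
response = requests.get("http://example.com/", proxies=proxies, timeout=10)
print(response.status_code)
```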
Using a proxy has its downsides, cost and consistent uptime being relative to each other. Beyond those two, a proxy is by design inherently slower than accessing the web server that hosts the website directly. Another downside is the reliability of the host of the proxy service.
If, for instance, a user wanted to use a proxy in order to anonymously inject
malicious SQL commands into the Sony Pictures website with the goal of
publishing all of their user data, the proxy service can and does submit to
court orders to identify the user (Martin, 2011). Obviously breaking the law in
that manner should result in legal action, but what if it was against the law to
criticize the government?
There are still options for those who cannot exercise free speech and do
not want to use a centralized anonymizing service for fear of being identified
and persecuted. Tor, a decentralized network of volunteer-run routers that allows clients to browse the Internet anonymously by using onion routing, has
steadily grown in use right up until about 2008 (AlSabah, Bauer, Goldberg,
Grunwald, McCoy, Savage, & Voelker, 2011, p. 1). Tor is definitely a great way
to browse the Internet anonymously, but unfortunately Tor has become tainted
with child pornography, the illegal sale of guns, and drug transactions
(Schultz, 2012). This leads to issues of wanting to communicate anonymously
while at the same time not being associated with unethical activities. If this
weren’t enough, because of its design, Tor is quite slow at sending and
receiving information. Because of this slowness, studies have been done to find ways of changing Tor’s design to fix the problem (AlSabah et al., 2011, p. 2). Alas, no clear solution to date avoids also destabilizing its anonymity guarantees.
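For those willing to accept the slowdown, pointing an application at a locally running Tor client is straightforward. A minimal Python sketch, assuming Tor’s default SOCKS port of 9050 and the requests library installed with SOCKS support:

```python
import requests  # requires requests[socks] for SOCKS proxy support

# A local Tor client normally exposes a SOCKS5 proxy on port 9050. The
# "socks5h" scheme makes the proxy resolve DNS too, so name lookups do
# not leak outside the Tor circuit.
tor_proxy = {
    "http": "socks5h://127.0.0.1:9050",
    "https": "socks5h://127.0.0.1:9050",
}

# check.torproject.org reports whether the request arrived via Tor.
response = requests.get("https://check.torproject.org/",
                        proxies=tor_proxy, timeout=60)
print("Congratulations" in response.text)  # True when traffic exits via Tor
```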
Since proxy operators can be compelled to identify their users and Tor carries the stigma of its worst traffic, the only other quick solution might just be encryption. Encrypting the connection
and request of the user doesn’t hide where the user is or what server they
connected to, but it does provide a degree of deniability on the content of the
request. Deciding on which encryption method to use can be daunting for the
uneducated user, so perhaps a good choice could be the one that almost had its
developer thrown in prison for developing and distributing it (Radcliff, 2002).
Yes, the Pretty Good Privacy (PGP) encryption program was so profound that while its creator, Phil Zimmermann, was accepting the prestigious Pioneer award
from the Electronic Frontier Foundation, he stated “I think it’s ironic ... that
the thing I’m being honored for is the same thing that I might be indicted for”
(Levy, 1995). Nothing excites a government’s politicians more than a new, easy
means for the public to protect themselves.
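Using PGP today is far less dramatic than its history suggests. A minimal Python sketch using the python-gnupg wrapper follows; the recipient address is a placeholder for a public key already imported into the local keyring, and GnuPG itself must be installed.

```python
import gnupg  # python-gnupg wrapper around an installed GnuPG binary

gpg = gnupg.GPG()  # uses the default local keyring

# Encrypt a message so that only the holder of the matching private key
# can read it; the address stands in for a key already in the keyring.
encrypted = gpg.encrypt("meet at the usual place",
                        recipients=["alice@example.com"])

if encrypted.ok:
    print(str(encrypted))  # ASCII-armored ciphertext, safe to e-mail
else:
    print("Encryption failed:", encrypted.status)
```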
Government Reaction and Interaction
It was mainly the 1991 Senate bill that was going to ban cryptography that motivated Phil Zimmermann to release PGP for free (Levy, 1995).
The reason that the government fears strong encryption is that it frustrates compelled disclosure; courts have already weighed whether a defendant can be forced to decrypt her own laptop (Green, 2010; Hofmann & Fakhoury, 2011). This type of intrusion doesn’t only happen in court rooms; it happens far too often for every person crossing the borders of the United States, where travelers’ devices can be searched without a warrant (Schoen, Hofmann, & Reynolds, 2011).
Every year
Freedom Not Fear, an anti-surveillance organization, organizes a broad
international protest of Big Brother (Rodriguez, 2012). Their goal is to raise
the awareness of and to demand a change in the way that civil liberties are
infringed upon when it comes to mass surveillance. This wouldn’t be necessary if individuals’ privacy weren’t being totally discarded as nationwide camera networks record and centralize public footage, like that of the CCTV surveillance cameras on many popular street corners; some officers even call this outrageous setup of street cameras the “ring of steel” (Bowe, 2012). At this
stage, where Internet-related PETs are not applicable, demanding legislation
that protects privacy might well be the new anti-totalitarian PET. If only there
were a means to fight back directly and anonymously online.
This is, of
course, exactly the purpose of Wikileaks.org, to provide a website where
journalists and whistleblowers can anonymously submit their knowledge, however
classified it may be, in order to create public awareness. Wikileaks provides an
environment to exercise free speech while at the same time maintaining the
user’s anonymity. The cost of this protection has given some state officials
pause due to the very nature of the knowledge being spread. The Secretary of
State, Hillary Clinton, and others have stated that the cost could very well be
paid with innocent lives (Connolly, 2010). They would demand that the identities
of those that have submitted confidential diplomatic cables be known and that
they be prosecuted. The American image is so easily tarnished that, in their
embarrassment, they continue to attack Wikileaks’ founder Julian Assange even
today (Beckford, 2012). With something as profound as the Constitution guaranteeing free speech, it’s a shame that exercising that guarantee can still invite retaliation.
Conclusion
Every person who interacts with the Internet runs the risk of exposing facts about themselves that they otherwise would not have exposed in a physically public domain. To safeguard against this injustice, PETs have been created,
shared, and utilized throughout much of the world. Issues with user
comprehension, suspicious centralized systems, and unconstitutional intervention
from law enforcement steadily plague individuals and flourish best when these
people have no knowledge that their privacy is being violated or when they have
come to accept these things as commonplace (Chappell, 2012; Green, 2010; Schoen
et al., 2011).
This feeling of tug-of-war between a person’s privacy and online policy
can only be shifted toward the individual’s side ever-so-slightly with the
application of PETs. In this way, users attempt to isolate themselves from the
outside, wearing a digital mask inside of a digital bomb shelter, as it were.
But they have no control over the information that is purposely stripped from them, unbeknownst to them, and shared among information-gathering behemoths.
Government legislation needs to be the complementary PET in the online privacy battle so that the people have control of their information, even when, and especially if, it has been stolen from them in secret. The European governments are clearly moving in that direction as they formulate comprehensive measures to protect publicly tradable client information, but there is still the issue of citizens’ personal information. Governments have to be more proactive when
developing ethical privacy legislation in an exponentially advancing computer
age, rather than being retroactive in response to criticism. This needs to
happen for everyone; this needs to happen now.
References
Agrawal, R.
(2002). Why is P3P not a PET? W3C Workshop
on the Future of P3P. Retrieved from
http://www.w3.org/2002/p3p-ws/pp/epic.pdf
AlSabah, M., Bauer, K., Goldberg, I., Grunwald, D., McCoy, D., Savage, S., & Voelker, G. M. (2011). DefenestraTor: Throwing out windows in Tor. Privacy Enhancing Technologies Symposium.
Anonymizer, Inc.
(2012, October 20). Hide IP and anonymous web browsing software – Anonymizer.
Retrieved from http://anonymizer.com/
Beckford, M.
(2012, October 8). Julian Assange’s backers told to pay 93,500 pounds over bail
breach. The Telegraph. Retrieved from
http://www.telegraph.co.uk/news/worldnews/wikileaks/9594015/Julian-Assanges-backers-told-to-pay-93500-over-bail-breach.html
Birnhack, M., & Elkin-Koren, N. (2011). Does law matter online? Empirical evidence on privacy law compliance.
Bowe, R. (2012,
September 11). Freedom Not Fear: CCTV surveillance cameras in focus.
Electronic Frontier Foundation.
Retrieved from
https://www.eff.org/deeplinks/2012/09/freedom-not-fear-cctv-surveillance-cameras-focus
Chappell, K. (2012). I always feel like somebody's watching
me. Ebony, 67(10), 25-26.
Connolly, K.
(2010, December 1). Has release of Wikileaks documents cost lives?.
BBC News, Washington. Retrieved from
http://www.bbc.co.uk/news/world-us-canada-11882092
DeLoughry. (1999). Privacy problems hurt consumers' trust
in Net. Internet World, 5(34), 20.
Doctorow, C. (2012). The curious case of Internet privacy.
Technology Review, 115(4), 65-66.
Eckersley, P.
(2011, March 16). Tracking Protection Lists: A privacy enhancing technology that
complements Do Not Track. Electronic
Frontier Foundation. Retrieved from
https://www.eff.org/deeplinks/2011/03/tracking-protection-lists
European
Commission’s Directorate General for Justice. (2012, April 4). Protection of
personal data – Justice. Retrieved from
http://ec.europa.eu/justice/data-protection/index_en.htm
Evans, M. (2007,
September 11). Adblock Plus is still evil.
Mark Evans Tech. Retrieved from
http://www.markevanstech.com/2007/09/11/adblock-plus-is-still-evil/
Fowler, G.
(2012, October 4). Facebook: One billion and counting.
The Wall Street Journal. Retrieved from
http://online.wsj.com/article/SB10000872396390443635404578036164027386112.html
Google. (2012,
October 21). Google advertising cookie opt-out plugin. Retrieved from
http://www.google.com/ads/preferences/plugin/
Google. (2007,
April 13). Google to acquire DoubleClick. Retrieved from
http://googlepress.blogspot.com/2007/04/google-to-acquire-doubleclick_13.html
Green, D. (2010, October 13). Passwords and prosecutions.
NewStatesman. Retrieved from
http://www.newstatesman.com/blogs/the-staggers/2010/10/police-drage-password-sex
Hill, K. (2012,
August 16).
Hofmann, M. &
Fakhoury, H. (2011, July 8). EFF’s Amicus Brief in support of Fricosu.
Electronic Frontier Foundation: Defending
your rights in the digital world. Retrieved from
https://www.eff.org/node/58527
Ishai, Y., Kushilevitz, E., Ostrovsky, R., & Sahai, A. (2009). Zero-knowledge proofs from secure multiparty computation. SIAM Journal on Computing, 39(3), 1121-1152.
Lacey, M. (2000,
June 22). Drug office ends tracking of web users.
New York Times. Retrieved from
http://www.nytimes.com/2000/06/22/us/drug-office-ends-tracking-of-web-users.html
Leenes, R., &
Koops, B. (2005). ‘Code’: Privacy's death or saviour?.
International Review Of Law, Computers & Technology, 19(3), 329-340.
Levy, S. (1995, April 24). The encryption wars: Is privacy
good or bad?. Newsweek, 125(17), 55.
M Law Group.
(2012, February 2). New draft European data protection regime. Retrieved from
http://www.mlawgroup.de/news/publications/detail.php?we_objectID=227&lang=en
Martin, A.
(2011, September 23). LulzSec hacker exposed by the service he thought would
hide him. The Atlantic Wire. Retrieved
from
http://www.theatlanticwire.com/technology/2011/09/lulzsec-hacker-exposed-service-he-thought-would-hide-him/42895/
Mediati, N. (2012). Social network privacy settings
compared. PC World, 30(9), 37-38.
Morda, D.
(2011). Five steps to configuring privacy on Google Plus (+).
Branded Clever. Retrieved from
http://www.brandedclever.com/five-steps-to-configuring-privacy-on-google-plus/
Murphy, D. (2011, April 10). Google abandons Street View in Germany.
Null, C. (2012). 'Liberate' your archived data from
Google?. PC World, 30(9), 25-26.
Opentracker.
(2012, October 21). Third-party cookies vs first-party cookies |
Opentracker.net. Retrieved from
http://www.opentracker.net/article/third-party-cookies-vs-first-party-cookies
Palant, W.
(2012). Adblock Plus for Chrome – for annoyance-free web surfing. Retrieved from
http://adblockplus.org/en/
Phillips, D. J. (2004). Privacy policy and PETs.
New Media & Society, 6(6), 691-706.
Radcliff, D. (2002, July 22). PGP on shaky ground.
Computerworld, 36(30), 33.
Rodriguez, K.
(2012, September 14). Freedom Not Fear: Creating a surveillance-free Internet.
Electronic Frontier Foundation.
Retrieved from
https://www.eff.org/deeplinks/2012/09/creating-surveillance-free-internet-movement-freedom-not-fear
Schartum, D.
(2001). Privacy enhancing employment of ICT: Empowering and assisting data
subjects. International Review Of Law,
Computers & Technology, 15(2), 157-169.
Schoen, S., Hofmann, M., & Reynolds, R. (2011). Defending privacy at the U.S. border: A guide for travelers carrying digital devices. Electronic Frontier Foundation.
Schultz, D.
(2012, August 17). A Tor of the Dark Web.
Sorry for the Spam: The Adventures of Dan Schultz. Retrieved from
http://slifty.com/2012/08/a-tor-of-the-dark-web/
Shark. (2007,
December 15). Anonymous web searching (& decentralized search engines).
FileShareFreak. Retrieved from
http://filesharefreak.com/2007/12/15/anonymous-web-searching-decentralized-search-engines
Sheikh, R.,
Mishra, D., & Kumar, B. (2011). Secure multiparty computation: From millionaires
problem to anonymizer. Information
Security Journal: A Global Perspective, 20(1), 25-33.
Sullivan, D. (2011, June 15). Google’s “Me on the Web” pushes Google Profiles – take that Facebook?. Search Engine Land.
Take control of your own privacy online. (2000). Consumer Comments, 24(5), 2.
Thomas, K.
(2011, June 16). Google’s ‘Me on the Web’ tool alerts you to personal data
leaks. PC World. Retrieved from
http://www.pcworld.com/article/230436/Googles_Me_on_the_Web_Keeps_User_Data_Under_Wraps.html
Zoom Information, Inc. (2012, October 20). Ruchika Agrawal, IPIOP Science Policy Analyst, Electronic Privacy Information Center.