Table of Contents
- Introduction
- Theory
- Equality
- Objectivity
- Bias
- Traffic
- Relevance
- Self-Interest
- Transparency
- Manipulation
- Conclusion
# Some Skepticism About Search Neutrality
## [James Grimmelmann](http://james.grimmelmann.net) ##
_In the last few years, some search-engine critics have suggested that dominant search engines (i.e. Google) should be subject to "search neutrality" regulations. By analogy to network neutrality, search neutrality would require even-handed treatment in search results: It would prevent search engines from playing favorites among websites. Academics, Google competitors, and public-interest groups have all embraced search neutrality._
_Despite this sudden interest, the case for search neutrality is too muddled to be convincing. While "neutrality" is an appealing-sounding principle, it lacks a clear definition. This essay explores no fewer than eight different meanings that search-neutrality advocates have given the term. None of them would lead to sensible regulation. Some are too ill-defined to measure; others measure the wrong thing._
_Search is inherently subjective: it always involves guessing the diverse and unknown intentions of users. Regulators, however, need an objective standard to judge search engines against. Most of the common arguments for search neutrality either duck the issue or impose on search users a standard of "right" and "wrong" search results they wouldn't have chosen for themselves. Search engines help users avoid the websites they don't want to see; search neutrality would turn that relationship on its head. As currently proposed, search neutrality is likely to make search results spammier, more confusing, and less diverse._
This is an HTML version of an essay originally published at pages 435--59 in the collection [The Next Digital Decade: Essays on the Future of the Internet](http://nextdigitaldecade.com/) (TechFreedom 2010). The print version is also available as a [PDF](http://nextdigitaldecade.com/ndd_book.pdf). For the HTML version, I have simplified the footnotes and added a slightly updated bibliography. I would like to thank Aislinn Black and Frank Pasquale for their comments. The essay is available for reuse under the [Creative Commons Attribution 3.0 United States license](http://creativecommons.org/licenses/by/3.0/us/).
-----
### Introduction ###
>The perfect search engine would be like the mind of God.
>>--[Sergey Brin](http://www.technologyreview.com/web/14065/)
>The God that holds you over the pit of hell, much as one holds a spider, or some loathsome insect, over the fire, abhors you, and is dreadfully provoked; his wrath towards you burns like fire; he looks upon you as worthy of nothing else, but to be cast into the fire ...
>>--[Jonathan Edwards](http://edwards.yale.edu/archive?path=aHR0cDovL2Vkd2FyZHMueWFsZS5lZHUvY2dpLWJpbi9uZXdwaGlsby9nZXRvYmplY3QucGw/Yy4yMTo0Ny53amVv)
>If God did not exist, it would be necessary to invent him.
>>--[Voltaire](http://www.whitman.edu/VSA/trois.imposteurs.html)
Search engines are attention lenses; they bring the online world into focus. They can redirect, reveal, magnify, and distort. They have immense power to help and to hide. We use them, to some extent, always at our own peril. And out of the many ways that search engines can cause harm, the thorniest problems of all stem from their ranking decisions.
What makes ranking so problematic? Consider an example. The U.K. technology company [Foundem](http://www.foundem.co.uk/) offers "vertical search"--it helps users compare prices for electronics, books, and other goods. That makes it a [Google competitor](http://www.google.com/prdhp). But in June 2006, Google applied a "penalty" to Foundem's website, causing all of its pages to [drop dramatically](http://www.searchneutrality.org/foundem-google-story) in Google's rankings. It took more than three years for Google to remove the penalty and restore Foundem to the first few pages of results for searches like "[compare prices shoei xr-1000](http://www.google.com/search?q=compare+prices+shoei+xr-1000)." Foundem's traffic, and hence its business, fell off sharply as a result. The experience led Foundem's co-founder, Adam Raff, to become an outspoken advocate: creating the site [searchneutrality.org](http://www.searchneutrality.org/about), filing [comments](http://www.foundem.co.uk/FCC_Comments.pdf) with the Federal Communications Commission (FCC), and taking his story to the op-ed pages of [The New York Times](http://www.nytimes.com/2009/12/28/opinion/28raff.html?_r=1), calling for legal protection for the Foundems of the world.
Of course, the government doesn't get involved every time a business is harmed by a bad ranking--or Consumer Reports would be [out of business](http://caselaw.lp.findlaw.com/scripts/getcase.pl?court=us&vol=466&invol=485). Instead, search-engine critics base their case for regulation on the immense power of search engines, which can "[break the business of a Web site that is pushed down the rankings](http://www.nytimes.com/2010/07/15/opinion/15thu3.html)." They have the power to shape what millions of users, carrying out [billions of searches](http://pff.org/issues-pubs/books/factbook_10th_Ed.pdf) a day, see. At that scale, search engines are the new mass media--or perhaps the new _meta_ media--capable of shaping public discourse itself. And while power itself may not be an evil, abuse of power is.
Search-engine critics thus aim to keep search engines--although in the U.S. and much of the English-speaking world, it might be more accurate to say simply "Google"--from abusing their dominant position. The hard part comes in defining "abuse." After a decade of various attempts, critics have hit on the idea of "neutrality" as a governing principle. The idea is explicitly modeled on network neutrality, which would "forbid operators of broadband networks to discriminate against third-party applications, content or portals." [[Barbara van Schewick](http://papers.ssrn.com/sol3/papers.cfm?abstract_id=812991)] Like broadband Internet service providers (ISPs), search engines "accumulate great power over the structure of online life." [[Frank Pasquale](http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1134159)] Thus, perhaps search engines should similarly be required not to discriminate among websites.
For some academics, this idea is a thought experiment: a way to explore the implications of network neutrality ideas. For others, it is a real proposal: a preliminary agenda for action. Lawyers for ISPs fighting back against network neutrality have seized on it, either as a _reductio ad absurdum_ or as a way to kneecap their bitter rival Google. Even the _New York Times_ has gotten into the game, running an [editorial](http://www.nytimes.com/2010/07/15/opinion/15thu3.html) calling for scrutiny of Google's "editorial policy."[1](#note1) Since _New York Times_ editorials, as a rule, reflect no independent thought but only a kind of prevailing conventional wisdom, it is clear that search neutrality has truly arrived on the policy scene.
Notwithstanding its sudden popularity, the case for search neutrality is a muddle. There is a fundamental misfit between its avowed policy goal of protecting users and most of the tests it proposes to protect them. Scratch beneath the surface of search neutrality and you will find that it would protect not search users, but websites. In the search space, however, websites are as often users' _enemies_ as not; the whole point of search is to help users avoid the sites they don't want to see.
In short, search neutrality's ends and means don't match. To explain why, I will deconstruct eight proposed search-neutrality principles:
1. _Equality_: Search engines shouldn't differentiate at all among websites.
2. _Objectivity_: There are correct search results and incorrect ones, so search engines should return only the correct ones.
3. _Bias_: Search engines should not distort the information landscape.
4. _Traffic_: Websites that depend on a flow of visitors shouldn't be cut off by search engines.
5. _Relevance_: Search engines should maximize users' satisfaction with search results.
6. _Self-interest_: Search engines shouldn't trade on their own account.
7. _Transparency_: Search engines should disclose the algorithms they use to rank web pages.
8. _Manipulation_: Search engines should rank sites only according to general rules, rather than promoting and demoting sites on an individual basis.
As we shall see, all eight of these principles are unusable as bases for sound search regulation.
I would like to be clear up front about the limits of my argument. Just because search neutrality is incoherent, it doesn't follow that search engines deserve a free pass under antitrust, intellectual property, privacy, or other well-established bodies of law.[2](#note2) Nor is search-specific legal oversight out of the question. Search engines are capable of doing dastardly things: According to _BusinessWeek_, the Chinese search engine Baidu explicitly [shakes down websites](http://www.businessweek.com/magazine/content/09_02/b4115021710265.htm), demoting them in its rankings unless they buy ads. It's easy to tell [horror](http://craphound.com/scroogled.html) [stories](http://whimsley.typepad.com/whimsley/2008/03/mr-googles-guid.html) about what search engines might do that are just plausible enough to be genuinely scary. My argument is just that search neutrality, as currently proposed, is unlikely to be workable and quite likely to make things worse. It fails at its own goals, on its own definition of the problem.
### Theory ###
Before delving into the specifics of search-neutrality proposals, it will help to understand the principles said to justify them. There are two broad types of argument for search neutrality, one focused on users and one on websites. A search engine that misuses its ranking power might be seen either as _misleading users_ about what's available online, or as _blocking_ websites from reaching users.[3](#note3) Consider the arguments in turn.
_Users_: Search helps people find the things they want and need. Good search results are better for them. And since search is both subjective and personal, users themselves are the ones who should define what makes search results good. The usual term for this goal is "relevance": relevant results are the ones that users themselves are most satisfied with. All else being equal, good search policy should try to maximize relevance.
A libertarian might say that this goal is trivial. Users are free to pick and choose among search engines and other informational tools.[4](#note4) They will naturally flock to the search engine that offers them the most relevant results; the market will provide just as much relevance as it is efficient to provide. There is no need for regulation; relevance, being demanded by users, will be supplied by search engines. And this is exactly what [search](http://www.google.com/corporate/tech.html) [engines](http://help.yahoo.com/l/us/yahoo/search/indexing/ranking-01.html) [themselves](http://sp.ask.com/en/docs/about/ask_technology.shtml) say: relevance is their principal, or only, goal.
The response to this point of view--most carefully argued by Frank Pasquale ([\[1\]](http://papers.ssrn.com/sol3/papers.cfm?abstract_id=888327), [\[2\]](http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1134159), [\[3\]](http://www.law.umaryland.edu/academics/journals/jbtl/issues/3_1/3_1_061_Pasquale.pdf), [\[4\]](http://nextdigitaldecade.com/ndd_book.pdf#page=348), [\[5\]](http://nextdigitaldecade.com/ndd_book.pdf#page=402), [\[6\]](http://www.law.northwestern.edu/lawreview/v104/n1/105/LR104n1Pasquale.pdf), [\[7\]](http://www.lawschool.cornell.edu/research/cornell-law-review/upload/Bracha-Pasquale-Final.pdf))--is best described as "liberal." It focuses on maximizing the effective autonomy of search users, but questions whether market forces actually enable users to demand optimal relevance. For one thing, it questions whether users can actually detect deviations from relevance. The user who turns to a search engine, by definition, doesn't yet know what she's looking for or where it is. Her own knowledge, therefore, doesn't provide a fully reliable check on what the search engine shows her. The information she would need to know that the search engine is hiding something from her may be precisely the information it's hiding from her--a relevant site that she didn't know existed.
Perhaps just as importantly, structural features of the search market can make it hard for users to discipline search engines by switching. Search-neutrality advocates have argued that search exhibits substantial barriers to entry. The web is so big, and search algorithms so complex and refined, that there are substantial fixed costs to competing at all. Moreover, the rise of [personalized search](http://www.concurringopinions.com/archives/2008/02/personalized_se.html) both creates switching costs for individual users and also makes it harder for them to share information about their experiences with multiple search engines.
_Websites_: The case for protecting websites reaches back into free speech theory. Jerome Barron's 1967 article, [_Access to the Press--A New First Amendment Right_](http://www.judgewatch.org/lawsuit-nyt/outreach/law-schools/Barron-Access-to-Press.pdf), argued that freedom of speech is an empty right in a mass-media society unless one also has access to the mass media themselves. He thus argued that newspapers should be required to open their letters to the editor and their advertising to all points of view. Although his proposed right of access is basically a [dead letter](http://caselaw.lp.findlaw.com/scripts/getcase.pl?navby=case&court=US&vol=418&page=241) as far as First Amendment doctrine goes, it captured the imaginations of media-law scholars and media advocates.
Scholars have begun to adapt Barron's ideas to online intermediaries, including search engines. Dawn Nunziato's book [_Virtual Freedom_](http://www.sup.org/book.cgi?id=10874) draws extensively on Barron to argue that Congress may need to "authorize the regulation of dominant search engines to require that they provide meaningful access to content." [Jennifer Chandler](http://law.hofstra.edu/pdf/Academics/Journals/LawReview/lrv_issues_v35n03_i07.pdf) applies Barron's ideas to propose a "right to reach an audience" that would give website owners various protections against exclusion and demotion by search engines.[5](#note5) Similarly, Frank Pasquale suggests bringing "universal service" over into the search space, perhaps through a government-provided search engine.
The Barronian argument for access, however, needs to be qualified. The free-speech interest in access to search engine ranking placement is really _audiences_' free speech interest; the real harm is that search users have been deprived of access to the speech of websites, not that websites have been deprived of access to users. Put another way, websites' access interest is derivative of users' interests. In the [Supreme Court's words](http://www.law.cornell.edu/supct/html/historics/USSC_CR_0505_0672_ZS.html), "The First Amendment protects the right of every citizen to 'reach the minds of willing listeners.'" Or, in Jerome Barron's, "[T]he point of ultimate interest is not the words of the speakers but the minds of the hearers." With these purposes in mind, let us turn to actual search-neutrality proposals.
### Equality ###
[Scott Cleland](http://precursorblog.com/content/why-google-is-not-neutral) observes that Google's "algorithm reportedly has over 1,000 variables/discrimination biases which decide which content gets surfaced." He concludes that "Google is not neutral" and thus should be subject to any FCC network-neutrality regulation. On this view, a search engine does something wrong if it treats websites differently, "surfac[ing]" some, rather than others. This is a theory of neutrality as equality, it comes from the network-neutrality debates, and it is nonsensical as applied to search.
Equality has a long pedigree in telecommunications. For years, common-carrier regulations required the AT&T system to offer its services on equal terms to anyone who wanted a phone. This kind of equality is at the heart of proposed network neutrality regulations: treating all packets identically once they arrive at an ISP's router, regardless of source or contents. Whether or not equality in packet routing is a good idea as a [technical matter](http://itpolicy.princeton.edu/pub/neutrality.pdf), the rule itself is simple enough and relatively clear. One can, without difficulty, identify Comcast's forging of packets to terminate BitTorrent connections as a [violation of the principle](http://hraunfoss.fcc.gov/edocs_public/attachmatch/FCC-08-183A1.pdf). As long as an ISP isn't overloaded to the point of losing too many packets, equality does what it's supposed to: ensures that every website enjoys access to the ISP's network and customers.
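To make the contrast concrete, here is a minimal sketch, entirely my own illustration rather than any ISP's actual code, of what the packet-equality rule demands: a forwarding loop whose only input is arrival order, never a packet's source or contents.

```python
# A minimal sketch (illustrative assumption, not any real router's code)
# of packet-level "equality": forwarding decisions that never branch on
# who sent a packet or what it carries.
from collections import deque

def forward(packet):
    """Stub standing in for putting a packet on the outbound wire."""
    print("forwarded:", packet)

def neutral_router(inbound: deque):
    """Forward packets strictly first-in, first-out, inspecting nothing."""
    while inbound:
        packet = inbound.popleft()  # arrival order is the only input
        forward(packet)             # no branch on source or contents

# A violation is equally easy to spot: any branch like
#     if packet["source"] == "bittorrent-peer": drop(packet)
# treats packets unequally (compare Comcast's forged reset packets).
neutral_router(deque([
    {"source": "a.example", "contents": "video"},
    {"source": "b.example", "contents": "text"},
]))
```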
Try to apply this form of equality to search and the results are absurd. Of course Google differentiates among sites--that's why we use it. Systematically favoring certain types of content over others isn't a defect for a search engine--it's the _point_. If I search for "[Machu Picchu pictures](http://www.google.com/search?q=Machu+Picchu+pictures)," I want to see llamas in a ruined city on a cloud-forest mountaintop, not horny housewives who whiten your teeth while you wait for them to refinance your mortgage. Search inevitably requires some form of editorial control. A search engine cannot possibly treat all websites equally, not without turning into the phone book. But for that matter, even the phone book is not neutral in the sense of giving fully equal access to all comers, as the proliferation of AAA Locksmiths and Aabco Plumbers attests. Differentiating among websites, without something more, is not wrongful.
### Objectivity ###
If search engines must make distinctions, perhaps we should insist that they make correct distinctions. Foundem, for example, [argues](http://www.searchneutrality.org/foundem-google-story) that the Google penalty was unfair by pointing to positive write-ups of Foundem from "the UK's leading technology television programme" and "the UK's leading consumer body," and to its high search ranks on Yahoo! and Bing. The unvoiced assumption here is that search queries can have objectively right and wrong answers. A search on "[James Grimmelmann blog](http://www.google.com/search?q=James+Grimmelmann+blog)" should come back with [my weblog](http://laboratorium.net); anything else is a wrong answer.
But this view of what search is and does is wrong. A search for "[apple](http://www.google.com/search?q=apple)" could be looking for information about Fuji apples, Apple computers, or Fiona Apple. "[bbs](http://www.google.com/search?q=bbs)" could refer to airgun pellets, bulletin-board systems, or bed-and-breakfasts. Different people will have different intentions in mind; even the same person will have different intentions at different times. Sergey Brin's theological comparison of perfect search to the "mind of God" shows us why perfect search is impossible. Not even Google is--or ever could be--omniscient. The search query itself is necessarily an incomplete basis on which to guess at possible results.
The objective view of search, then, fails for two related reasons. First, search users are profoundly diverse. They have highly personal, highly contextual goals. One size cannot fit all. And second, a search engine's job always involves guesswork. Some guesses are better than others, but the search engine will always have to guess. "James Grimmelmann blog" shouldn't take users to Toyota's corporate page--but perhaps they were interested in my guest-blogging at [Concurring Opinions](http://www.concurringopinions.com/archives/author/James-Grimmelmann), or in blogs about me, or they have me mixed up with [Eric Goldman](http://law.scu.edu/faculty/profile/goldman-eric.cfm) and were actually looking for [his blog](http://blog.ericgoldman.org/). [Time Warner Cable's](http://fjallfoss.fcc.gov/ecfs/document/view?id=7020375997) complaint that "significant components of [Google's] Ad Rank scheme are subjective" is beside the point. Search itself is subjective.[6](#note6)
Few scholars go so far as to advocate explicit re-ranking to correct search results. But even those who acknowledge that search is subjective sometimes write as though it were not. Frank Pasquale gives a hypothetical in which "YouTube's results always appear as the first thirty [Google] results in response to certain video queries for which [a rival video site] has demonstrably more relevant content." One might ask, "demonstrably more relevant" _by what standard_? Often the answer will be contentious.
In Foundem's case, what difference should it make that Yahoo! and others liked Foundem? So? That's their opinion. Google had a different one. Who is to say that Yahoo! was right and Google was wrong? One could equally well argue that Google's low ranking was correct and Yahoo!'s high ranking was the mistake. "compare prices shoei xr-1000" is not the sort of question that admits of a right answer. This is why it doesn't help to say that the Foundem vote is four-to-one against Google. If deviation from the majority opinion makes a search engine wrong, then so much for search engine innovation--and so much for unpopular views.[7](#note7)
### Bias ###
Ironically, it is the goal of protecting unpopular views that drives the concern with search engine "bias." [Lucas Introna and Helen Nissenbaum](http://www.nyu.edu/projects/nissenbaum/papers/searchengines.pdf), for example, are concerned that search engines will direct users to sites that are already popular and away from obscure sites. [Alex Halavais](http://www.polity.co.uk/book.asp?ref=9780745642147) calls for "resistance to the homogenizing process of major search engines," including governmental interventions. These are structural concerns with popularity-based search. Others worry about more particular biases. [AT&T](http://fjallfoss.fcc.gov/ecfs/document/view?id=7020377217) complains that "Google's algorithms unquestionably _do_ favor some companies or sites." Scott Cleland objects that Google demotes content from other countries in its country-specific search pages.
The point that a technological system can display bias is one of those profound observations that is at once both startling and obvious. It naturally leads to the question of whether, when, and how one could correct for the bias search engines introduce. But to pull that off, one must have a working understanding of what constitutes search-engine bias. [Batya Friedman and Helen Nissenbaum](http://www.nyu.edu/projects/nissenbaum/papers/biasincomputers.pdf) define a computer system to be "biased" if it "_systematically_ and _unfairly_ discriminates against certain individuals or groups of individuals in favor of others." Since search engines systematically discriminate by design, all of the heavy lifting in the definition is done by the word "unfair." But this just kicks the problem down the road. One still must explain when discrimination is "unfair" and when it is not. Friedman and Nissenbaum's discussion is enlightening, but does not by itself help us identify which practices are abusive.[8](#note8)
The point that socio-technical systems have embedded biases also cuts against search neutrality. We should not assume that if only the search _engine_ could be made properly neutral, the search _results_ would be free of bias. Every search result requires both a user to contribute a search query, and websites to contribute the content to be ranked. Neither users nor websites are passive participants; both can be wildly, profoundly biased.
On the website side, the web is [anything but neutral](http://www.shirky.com/writings/powerlaw_weblog.html). Websites compete fiercely, and not always ethically, for readers. It doesn't matter _what_ the search engine algorithm is; websites will try to game it. Search-engine optimization, or SEO, is as much a fixture of the Internet as spam. Link farms, spam blog comments, hacked websites--you name it, and they'll try it, all in the name of improving their search rankings. A fully invisible search engine, one that introduced no new values or biases of its own, would merely replicate the underlying biases of the web itself: heavily commercial, and subject to a truly mindboggling quantity of spam. Raff says that search algorithms should be "comprehensive." But should users be subjected to a comprehensive presentation of discount Canadian pharmaceutical sites?
On the user side, sometimes the bias is between the keyboard and the chair. Fully de-biasing search results would also require de-biasing search queries--and users' ability to pick which results they click on. Take a search for "jew," for example. Google has been criticized both for returning anti-Semitic sites (to American users) and for _not_ returning such sites (to German users). The inescapable issue is that Google has users who want to read anti-Semitic web pages and users who don't. One might call some of those users "biased," but if they are, it's not Google's fault.
Some bias is going to leak through as long as search engines help users find what they want. And helping users find what they want is such a profound social good that one should be skeptical of trying to inhibit it. Telling users what they _should_ see is a serious intrusion on personal autonomy, and thus deeply inconsistent with the liberal argument for search neutrality. If you want Google to steer users to websites with views that differ from their own, your goal is not properly described as search _neutrality_. In effect, you have gone back to asserting the objective correctness of search results: Certain sites are good for users, like whole grains.
### Traffic ###
The most common trope in the search debates is the website whose traffic vanishes overnight when it disappears from Google's search results.[9](#note9) Because so much traffic flows through Google, it holds websites over the flames of website hell, ready at any instant to let them fall in the rankings. Chandler's proposed right to reach an audience and Foundem's proposed "effective, accessible, and transparent appeal process" attempt to protect websites from being dropped. Dawn Nunziato, for her part, would require search engines to open their sponsored links to political candidates.
A right to continued customer traffic would be a legal anomaly; offline businesses enjoy no such right. Some Manhattanites who take the free IKEA ferry to its store in Brooklyn eat at the [nearby food trucks](http://newyork.seriouseats.com/2008/07/red-hook-vendors-soccer-tacos-guide-how-to-get-there-what-to-eat.html) in the Red Hook Ball Fields. The food truck owners would have no right to complain if IKEA discontinued the ferry or moved its store. Search neutrality advocates, however, would say that RedHookFoodTruck.com has a Jerome Barron-style free-speech interest in having access to the search engine's result pages, and thus has more right to complain if the Google ferry no longer comes to its neighborhood.
But, as we saw above, this is really an argument that _users_ have a _relevance_ interest in seeing the site. If no one actually wants to visit RedHookFoodTruck.com, then its owner shouldn't be heard to complain about her poor search ranking. When push comes to shove, search neutrality advocates recognize that websites must plead their case in terms of users' needs. Chandler's modern right of access is a "right to reach a _willing_ audience," which she describes as "the right to be free of the imposition of discriminatory filters _that the listener would not otherwise have used_." Even Foundem's Adam Raff presents his actual search-neutrality principle in user-protective terms: "search engines should have no editorial policies other than that their results be comprehensive, impartial and _based solely on relevance_." Relevance is, of course, the touchstone of users' interests, not websites'.[10](#note10)
Indeed, looking at the rankings from a website's perspective, rather than from users', can be counterproductive to free-speech values. If users really find other websites more relevant, then making them visit RedHookFoodTruck.com impinges on their autonomy and on _their_ free speech interests as listeners. For any given search query, there may be dozens, hundreds, thousands of competing websites. The _vast majority_ of them will thus have interests that diverge from users'--and every incentive to override users' wishes.
Even when users are genuinely indifferent among various websites, some search neutrality advocates think websites should be protected from "arbitrary" or "unaccountable" ranking changes as a matter of fairness. We should call the websites that currently sit at the top of search engine rankings by their proper name--_incumbents_--and we should look as skeptically on their demands to remain in power as we would on any other incumbent's. The search engine that ranks a site highly has conferred a benefit on it; turning that gratuitous benefit into a permanent entitlement gets the ethics of the situation exactly backwards.
Indeed, giving highly-ranked websites what is in effect a property right in search rankings runs counter to everything we know about how to hand out property rights. Websites don't create the rankings; search engines do. Similarly, search engines are in a better position to manage rankings and prevent waste. And if each individual search ranking came with a right to placement, every search-results page would be an anti-commons in the making.
Thus, it is _irrelevant_ that Foundem had a prominent search placement on Google before it landed in the doghouse. Just as the subjectivity of search means that search engines will frequently disagree with each other, it also means that a search engine will disagree with itself over time. From the outside looking in, we have no basis to say whether the initial high ranking or the subsequent low ranking made more sense. To give Foundem--and every other website currently enjoying a good search ranking--the right to continue where it is would lock in search results for all time, obliterating search-engine experimentation and improvement.
### Relevance ###
Given the importance of user autonomy to search-neutrality theory, relevance is a natural choice for a neutrality principle. In Foundem's words, search results should be "based solely on relevance." Chandler proposes a rule against "discrimination that listeners would not have chosen." [Oren Bracha and Frank Pasquale](http://www.lawschool.cornell.edu/research/cornell-law-review/upload/Bracha-Pasquale-Final.pdf) decry "search engines [that] highlight or suppress critical information" and thereby "shape and constrain [users'] choices"--that is, hide information that users would have found relevant.
Relevance, however, is such an obvious good that its virtue verges on the tautological. Search engines _compete_ to give users relevant results; they exist at all only because they do. Telling a search engine to be more relevant is like telling a boxer to punch harder. Of course, sometimes boxers do throw fights, so it isn't out of the question that a search engine might underplay its hand. How, though, could regulators tell? Regulators can't declare a result "relevant" without expressing a view as to why other possibilities are "irrelevant," and that is almost always going to be contested.
Here's an example: Foundem. Recall that Foundem is a "vertical search site" that specializes in consumer goods. Well, a great many vertical search sites are worthless. (If you don't believe me, please try using a few for a bit.) Like other kinds of sites that simply [roll up existing content](http://www.google.com/support/webmasters/bin/answer.py?answer=66361) and slap some of their own ads on it--Wikipedia clones and local business directories also come to mind--they superficially resemble legitimate sites that provide something of value to users. But only superficially. The "penalties" that reduce vertical search sites' Google ranks aren't an attempt to reduce competition at the expense of relevance; they're an attempt to [_implement_](http://www.theregister.co.uk/2009/11/19/google_hand_of_god/) relevance. There are a few relatively good, usable product-search sites, but most of them are junk and good riddance to them. You're welcome to disagree--search is subjective--but I'd rather have the anti-vertical penalty in place than not. Those who would argue that Google's rankings don't reflect relevance have a heavy burden of proof, in the face of ample, easily verified evidence to the contrary.
In fact, behind almost every well-known story of search engine caprice, there is a more persuasive relevance-enhancing counter-story. For example, [SourceTool](http://sourcetool.com/), another vertical search engine, has [sued](http://docs.justia.com/cases/federal/district-courts/new-york/nysdce/1:2009cv01400/340565/1/) Google under antitrust law for, in effect, demoting it in Google's rankings for search ads. SourceTool, though, is a "directory" with a taxonomic logic of dubious utility--the [United Nations Standard Products and Services Code](http://www.unspsc.org/)--and almost no content of its own. It's the rare user indeed who will find SourceTool relevant. If you care about relevance and user autonomy, you should applaud Google's decision to demote SourceTool.
### Self-Interest ###
In practice, even as search-neutrality advocates claim "relevance" as their goal, they rely on proxies for it. The most common is self-interest. A [Consumer Watchdog report](http://www.consumerwatchdog.org/resources/TrafficStudy-Google.pdf) accuses Google of "an abandonment of [its] pledge to provide neutral search capability" by "steering Internet searchers to its own services" to "muscle its way into new markets." Foundem alleges that Google demotes it and other vertical search sites to fend off competition, and alleges that Google's links to itself give it "an unassailable competitive advantage." Bracha and Pasquale worry that search engines can change their rankings "in response to positive or negative inducements from other parties."
Bad motive may lead to bad relevance, but it's also a bad proxy for it. The first problem is evidentiary. By definition, motivations are interior, personal.[11](#note11) Of course, the law has to guess at motives all the time, but the task is by its nature harder than looking to extrinsic evidence. People get it wrong all the time. In 2009, an Amazon employee with a fat finger [hit a wrong button](http://blog.seattlepi.com/amazon/archives/166384.asp) and categorized tens of thousands of gay-themed books as "adult." An [angry mob](http://www.shirky.com/weblog/2009/04/the-failure-of-amazonfail/) of Netizens assumed the company had deliberately pulled the books from its search engine out of anti-gay animus, and used the Twitter hashtag #amazonfail to express their very public outrage. Amazon's reclassification was a mistake (a quickly corrected one), and a [vivid demonstration](http://techcrunch.com/2009/04/14/guest-post-why-amazon-didnt-just-have-a-glitch) of the power of search algorithms--but not a case of bad motives.
In all but the most blatant of cases, in fact, a search engine will be able to tell a plausible relevance story about its ranking decisions. Proving that a relevance story is pretextual will be extraordinarily difficult, in view of the complexity and subjectivity of search. But it would also be disastrous to adopt the opposite point of view and presume pretext. The absence of bad motive is a negative that it will often be impossible for the search engine to prove. How can it establish, for example, that the engineer who added the anti-vertical penalty didn't have a lunchroom conversation with an executive who played up the competition angle? This is not to say that serious cases of abuse are implausible,[12](#note12) just that investigation will be unusually hard and that false positives will be dangerously frequent.
There _is_ a nontrivial antitrust issue lurking here. In the United States, Google has a dominant market share in both search and search advertising, and one could argue that Google has started to leverage its position in anticompetitive ways. Antitrust, however, approaches such questions with a well-developed analytical toolkit: relevant markets, market power, pro-competitive and anti-competitive effects, and so on. Antitrust rightly focuses on the effects of business practices on consumers; search neutrality should not short-circuit that consumer-centric analysis by overemphasizing the role of a search engine's motives. Some things can be good for Google and good for its users.
Thus, when Google links to its own products, not only can there be substantial technical benefits from integration, but often Google is helping users by pointing them to services that really are better than the competition. Consumer Watchdog, for example, cries foul that Google "put its own [map] service atop all others for generic address searches," and that Google Maps has taken half of the local search market at the expense of previously dominant MapQuest and Yahoo! Maps. But perhaps MapQuest and Yahoo! Maps deserved to lose. Google Maps was [groundbreaking](http://www.zdnet.com/blog/carroll/google-maps-and-innovation/1499) when launched, and years later, it remains one of the best-implemented services on the Internet, with astonishingly clever scripting, flexible route-finding, and a powerful application programming interface (API).
One form of self-interest that may be well-enough defined to justify regulatory scrutiny is the straightforward bribe: a payment from a website to change its ranking, or a competitor's. Search-engine critics argue that search engines should disclose commercial relationships that bear on their ranking decisions. This is a standard, sensible policy response to the fear of stealth marketing. Indeed, the Federal Trade Commission (FTC) has [specifically warned](http://www.ftc.gov/os/closings/staff/commercialalertletter.shtm) search engines not to mix their organic and paid search results. More generally, the [FTC endorsement guidelines](http://ecfr.gpoaccess.gov/cgi/t/text/text-idx?c=ecfr&tpl=/ecfrbrowse/Title16/16cfr255_main_02.tpl) provide that endorsements must "reflect the honest opinions, findings, beliefs, or experience of the endorser" and that any connections between endorser and seller that "might materially affect the weight or credibility of the endorsement" must be fully disclosed. These policies have a natural application to search engines. A search engine that factors payments from sponsors into its ranking decisions is lying to its users unless it discloses those relationships, and this sort of lie would trigger the FTC's jurisdiction.[13](#note13) This isn't a neutrality principle, or even unique to search; it's just a natural application of a well-established legal norm.
### Transparency ###
Search-engine critics generally go further and argue that search engines should also be required to disclose their algorithms in detail:
* Introna and Nissenbaum: "As a first step we would demand full and truthful disclosure of the underlying rules (or algorithms) governing indexing, searching, and prioritizing, stated in a way that is meaningful to the majority of web users."
* Foundem: "Search Neutrality can be defined as the principle that search engines should be open and transparent about their editorial policies ... ."
* Pasquale: "[Dominant search engines] should submit to regulation that
bans stealth marketing and reliably verifies the absence of the practice."
These disclosures are meant to inform users about what they're getting from a search engine (Introna and Nissenbaum), to inform websites about the standards they're being judged by (Foundem), or to inform regulators about what the search engine is actually doing (Pasquale).
Algorithmic transparency is a delicate business. Full disclosure of the algorithm itself runs up against critical interests of the search engine. A fully public algorithm is one that the search engine's competitors can copy wholesale. Worse, it is one that websites can use to create highly optimized search-engine spam. Writing in 2000, long before the full extent of search-engine spam was as clear as it is today, Introna and Nissenbaum thought that the "impact of these unethical practices would be severely dampened if both seekers and those wishing to be found were aware of the particular biases inherent in any given search engine." That underestimates the scale of the problem. Imagine instead your inbox without a spam filter. You would doubtless be "aware of the particular biases" of the people trying to sell you fancy watches and penis pills--but that will do you little good if your inbox contains a thousand pieces of spam for every email you want to read. That is what will happen to search results if search algorithms are fully public; the spammers will win.
For this reason, search-neutrality advocates now acknowledge the danger of SEO and thus propose only limited transparency. Pasquale suggests, for example, that Google could respond to a question about its rankings with a list of a few factors that principally affected a particular result. But search is immensely complicated--so complicated that it may not be possible to boil a ranking down to a simple explanation. When the law demands disclosure of complex matters in simple terms, we get pro forma statements and boilerplate. Consumer credit disclosures and securities prospectuses have brought important information into the open, but they haven't done much to aid the understanding of their average recipient.
Google's algorithm depends on more than [200 different factors](http://www.google.com/corporate/tech.html). Google makes about [500 changes](http://www.wired.com/magazine/2010/02/ff_google_algorithm/all/1) to it a year, based on [ten times as many](http://www.businessweek.com/the_thread/techbeat/archives/2009/10/googles_udi_manber_search_is_about_people_not_just_data.html) experiments. One sixth of the hundreds of millions of queries the algorithm handles daily are queries it has never seen before. The PageRank of any webpage depends, in part, on every other page on the Internet. And even with all the computational power Google can muster, a full PageRank recomputation takes weeks. PageRank is, as algorithms go, elegantly simple--but I certainly wouldn't want to have the job of making Markov chains and eigenvectors "meaningful to the majority of Web users." In practice, any simplified disclosure is likely to leave room for the search engine to bury plenty of bodies.
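For a sense of the machinery behind those numbers, here is a minimal sketch of classic PageRank as power iteration over the link graph's Markov chain, the "Markov chains and eigenvectors" just mentioned. The toy graph, damping factor, and convergence tolerance are illustrative assumptions, not Google's actual parameters.

```python
# A minimal sketch of classic PageRank as power iteration on a Markov
# chain over the link graph. The toy graph, damping factor, and
# tolerance below are illustrative assumptions, not Google's production
# algorithm.

def pagerank(links, damping=0.85, tol=1e-9, max_iter=100):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}                 # uniform starting vector
    for _ in range(max_iter):
        new = {p: (1.0 - damping) / n for p in pages}  # random-jump term
        for page, outlinks in links.items():
            if outlinks:                               # split rank among outlinks
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new[target] += share
            else:                                      # dangling page: spread evenly
                for target in pages:
                    new[target] += damping * rank[page] / n
        delta = sum(abs(new[p] - rank[p]) for p in pages)
        rank = new
        if delta < tol:    # the dominant eigenvector has (nearly) converged
            break
    return rank

# Toy web of four pages. Note that changing any one page's links shifts
# every other page's score--which is why a full recomputation over
# billions of pages takes weeks, as the text notes.
toy_web = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
print(pagerank(toy_web))
```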
Some scholars have suggested that concerns about transparency could be handled through regulatory opacity: The search engine discloses its algorithm to the government, which then keeps the details from the public. This is a promising way of dealing with search engines' operational needs for secrecy, but it sharpens the question of regulators' technical competence. If the record is sealed, they won't have third-party experts and interested amici to walk them through novel technical issues. Everything will hinge on their own ability to evaluate the implications of small details in search algorithms. The track record of agencies and courts in dealing with other digital technologies does not provide grounds for optimism on this score. Pasquale makes an important point that "it is essential that _someone_ has the power to 'look under the hood,'" but it is also important that algorithmic disclosure remain connected to a workable theory of what regulators are looking for and what they would do if they found it.
### Manipulation ###
Perhaps the most interesting idea in the entire search neutrality debate is the "manipulation" of search results. It's a slippery term, and used inconsistently in the search-engine debates--including by me.[14](#note14) In the [dictionary](http://www.oed.com/) sense of "process, organize, or operate on mentally or logically; to handle with mental or intellectual skill," all search results are manipulated and the more skillfully the better. But in the dictionary sense of "manage, control, or influence in a subtle, devious, or underhand manner," it's a bad thing indeed: no one likes to be manipulated.
In practice--although this is rarely made explicit--the concern is with what I have described [elsewhere](http://works.bepress.com/cgi/viewcontent.cgi?article=1012&context=james_grimmelmann) as "hand manipulation." This idea imagines the search engine as having both an automatic, general-purpose ranking algorithm and a human-created list of exceptions. Consumer Watchdog, for example, derides Google's claim to rank results "automatically by algorithms," saying, "It is hard to see how this can still be true, given the increasingly pronounced tilt toward its own services in Google's search results." Foundem calls it "manual intervention," "special treatment," and "manual bias," and documents how Google's public statements have quietly backed away from claims that its rankings are "objective" and "automatic."
Put this way, the distinction between objective algorithm and subjective manipulation is incoherent. Both kinds of decisions come from the same source: the search engine's programmers. Nor can the algorithm provide a stable baseline against which to measure manipulation, since each "manipulation" is a change to the algorithm itself. It's not like Bing has rooms full of employees looking over search results pages and making last-minute tweaks before the pages are delivered to users.
Academics, being more careful with concepts, have focused on intentionality: does the search engine intend the promotions and demotions that will result from an algorithmic change? [Mark Patterson](http://www.fordhamlawreview.org/assets/pdfs/Vol_78/Patterson_Vol_78_May.pdf), for example, refers to "intentional manipulation of results." Bracha and Pasquale sharpen this idea to speak of "highly specific or local manipulations," such as singling out websites for special treatment. Chandler argues that "search engines should not manipulate individual search results except to address instances of suspected abuse." Google itself is [remarkably coy](http://lawmeme.research.yale.edu/modules.php?name=News&file=article&sid=807) about whether and when it changes rankings on an individual basis.
Surprisingly, no one has explained why special-casing in and of itself is a problem. One possibility is that it captures the distinction between individual adjudication and general rulemaking: changes that only affect a few websites trigger a kind of due process interest in individualized procedural protections. There is also a kind of Rawlsian argument here, that algorithmic decisions should be made from behind a veil of ignorance, not knowing which websites they will favor. For whatever reason, local manipulations make people nervous, nervous enough that most of the stories told to instill fear of search engines involve what is or looks like manipulation.
Local manipulation, however, is a distraction. The real goal is relevance. From that point of view, most local manipulations aren't wrongful at all. Foundem should know; it benefited from a local manipulation. The penalty that afflicted it for three years appears to have been a relatively general change to Google's algorithm, one designed to affect a great many low-value vertical search sites. When Foundem was promoted back to prominent search placement, that was actually the manipulation, since it affected Foundem and Foundem alone. Google thus "manipulated" its search results to exempt Foundem from what would otherwise have been a generally applicable rule. To condemn manipulation on the basis of its specificity is to say that Google acted more rightfully when it demoted Foundem in 2006 than when it promoted it back in 2009.[15](#note15)
The point is that local manipulations, being quick and easy to implement, are often a useful part of a search engine's toolkit for delivering relevance. Search-engine optimization is an endless game of loopholing. Regulators who attempt to prohibit unfair manipulations will have to wade quite far into the swamp of white-hat and black-hat SEO.[16](#note16) Prohibiting local manipulation altogether would keep the search engine from closing loopholes quickly and punishing the loopholers--giving them a substantial leg up in the SEO wars. Search results pages would fill up with spam, and users would be the real losers.
### Conclusion ###
Search neutrality gets one thing very right: Search is about user autonomy. A good search engine is more exquisitely sensitive to a user's interests than _any other communications technology_.[17](#note17) Search helps her find whatever she wants, whatever she needs to live a self-directed life. It turns passive media recipients into active seekers and participants. If search did not exist, then for the sake of human freedom it would be necessary to invent it. Search neutrality properly seeks to make sure that search is living up to its liberating potential.
Having asked the right question--_are structural forces thwarting search's ability to promote user autonomy?_--search neutrality advocates give answers concerned with protecting websites rather than users. With disturbing frequency, though, websites are not users' friends. Sometimes they are, but often, the websites want visitors, and will be willing to do what it takes to grab them.
If Flowers by Irene sells a bouquet for $30 that Bob's Flowers sells for $50, then Bob's interest in being found is in direct conflict with users' interest in being directed to Irene. The last thing that Bob wants is for the search engine to maximize relevance. Search-neutrality advocates fear that Bob will pay off the search engine to point users at his site. But that's not the only way the story can play out. Bob could also engage in self-help SEO to try to boost his ranking. In that case, the search engine may respond by demoting his site. And if that happens, then Bob has another card to play: search neutrality itself.
Regulators bearing search neutrality can inadvertently prevent search engines from helping users find the websites they want. The typical model assumed by search neutrality is of a website and a search engine corruptly conspiring to put one over on users. But much, indeed most, of the time, the real alliance is between search engines and users, together trying to sort through the clamor of millions of websites' sales pitches. Giving websites search-neutrality rights gives them a powerful weapon in their wars with each other--one that need not be wielded with users' interests in mind.[18](#note18) Search neutrality will be born with one foot already in the grave of regulatory capture.
There is a profound irony at the heart of the liberal case for search neutrality. Requiring search _engines_ to behave "neutrally" will not produce the desired goal of neutral search _results_. The web is a place where site owners compete fiercely, sometimes viciously, for viewers, and users turn to intermediaries to defend them from the sometimes-abusive tactics of information providers. Taking the search engine out of the equation leaves users vulnerable to precisely the sorts of manipulation search neutrality aims to protect them from. Whether it ranks sites by popularity, by personalization, or even by the idiosyncratic whims of its operator, a search engine provides an _alternative_ to the Hobbesian world of the unmediated Internet, in which the richest voices are the loudest, and the greatest authority on any subject is the spammer with the fastest server. Search neutrality is cynical about the Internet--but perhaps not cynical enough.
----
## Footnotes ##
1. But see Danny Sullivan's [parody](http://searchengineland.com/regulating-the-new-york-times-46521).
2. This essay is not the place for a full discussion of these issues (although we will meet antitrust and consumer protection law in passing). I provide a more detailed map in an [earlier article](http://works.bepress.com/cgi/viewcontent.cgi?article=1012&context=james_grimmelmann).
3. Other arguments for search neutrality reduce to these two. [Oren Bracha and Frank Pasquale](http://www.lawschool.cornell.edu/research/cornell-law-review/upload/Bracha-Pasquale-Final.pdf), for example, are concerned about democracy. They want "an open and relatively equal chance to all members of society for participation in the cultural sphere." Search engines provide that chance if individuals can both find (as users) and be found (as websites) when they participate in politics and culture. Similarly, Bracha and Pasquale's economic efficiency argument turns on users' ability to find market information and their fairness concern speaks to websites' losses of "audience or business." Whatever interest society has in search neutrality arises from users' and websites' interests in it--so we are justified in focusing our attention on users and websites.
4. In Google's words, "[Competition is just one click away.](http://googlepublicpolicy.blogspot.com/2009/05/googles-approach-to-competition.html)"
5. Exclusion from a search index may sound like a bright-line category of abuse, but note that a demotion from, say, #1 to #58,610 will have the same effect. No one ever clicks through 5861 pages of results. Thus, in practice, any rule against exclusion would also need to come with a--more problematic--rule against substantial demotions.
6. This point should not be confused with a considered opinion on the question of how the First Amendment applies to search-ranking decisions. Search engines make editorial judgments about relevance, but they also present information that can only be described as factual (such as maps and addresses), extol their objectivity in marketing statements, and are perceived by users as having an aura of reliability. It is possible to make false statements even when speaking subjectively--for example, I would be lying to you if I said that I enjoy eating scallops. The fact that search engines' judgments are expressed algorithmically, including in ways not contemplated by their programmers, complicates the analysis even further. The definitive First Amendment analysis of search-engine speech has yet to be written.
7. This last point should be especially troubling to Barron-inspired advocates of "access," since the point of such a regime is to promote opinions that are not widely shared.
8. If one fears, with Bracha and Pasquale, that "a handful of powerful gatekeepers" wield disproportionate influence, then the solution is simple: break up the bastards. If they reassemble or reacquire too much power, do it again. Neutrality will always be an imperfect half-measure if power itself is the problem.
9. Here is a partial list of sites, along with the Google programs they were allegedly excluded from:
* 2bigfeet.com, main index (John Battelle, [The Search](http://www.amazon.com/Search-Rewrote-Business-Transformed-Culture/dp/1591840880))
* various sites, AdWords and Google News (Nunziato)
* BMW Germany and Ricoh Germany, main index (Chandler)
* Inner City Press, Google News ([Fox News](http://www.foxnews.com/story/0,2933,331106,00.html))
* ExtremeTech.com and Fotolog.com, AdWords (Cleland)
* Studio Briefing, AdWords and main index ([Fast Company](http://www.fastcompany.com/blog/dan-macsai/popwise/why-did-neutral-google-de-list-webs-oldest-entertainment-publication))
* Foundem, main index (Foundem)
* NCJusticeFraud.com and ChinaIsEvil.com, AdWords ([Langdon v. Google Inc.](http://scholar.google.com/scholar_case?case=6938465105257233691), 474 F. Supp. 2d 622 (D. Del. 2007))
* Kinderstart.com, main index (Kinderstart.com LLC v. Google Inc., No. C 06-2057 JF (RS), 2006 U.S. Dist. LEXIS 82481 (N.D. Cal. July 13, 2006))
* SearchKing.com, main index (Search King Inc. v. Google Tech., Inc., No. CIV-02-1457-M, 2003 U.S. Dist. LEXIS 27193 (W.D. Okla. 2003))
10. The emphases in the quotations in this paragraph are mine, not the original authors'.
11. As an artificial corporate entity, a search engine may not even have motives other than the ones the law attributes to it.
12. Baidu's alleged shakedown, if true, would be an example. Willingness to buy Baidu search ads is not in itself a reliable indicator of relevance to Baidu searchers. But then again, even pay-for-placement was once considered a plausible model for main-column search results--and willingness to pay is not inherently a crazy proxy for relevance. Indeed, search ads today are sold by auction. They're often as relevant as main-column search results, sometimes more so. It might be better to say that Baidu's real problems are monopoly pricing and (compulsory) stealth marketing.
13. Disclosure in common cases need not be onerous. Where, for example, a search engine auctions off sponsored links on its results pages, telling users that those links are auctioned off should generally suffice.
14. In [The Structure of Search Engine Law](http://works.bepress.com/cgi/viewcontent.cgi?article=1012&context=james_grimmelmann), I refer both to a "technical arms race between engines and manipulators" and to "hand manipulation of results [by search engines]."
15. If you are bothered more by demotions than promotions, remember that search rankings are zero-sum: Foundem's 50-place rise is balanced out by 50 one-place falls for other websites. (The short sketch following these notes makes the accounting concrete.)
16. On the distinction between ethical, permitted "white-hat" SEO and unethical, forbidden "black-hat" SEO, see Frank Pasquale, [Trusting (and Verifying) Online Intermediaries' Policing](http://nextdigitaldecade.com/ndd_book.pdf#page=348). I believe that what Pasquale calls the intermediate "grey-hat" zone between the two is generally less grey than he and his sources perceive it to be.
17. Except, perhaps, the library reference desk. Unfortunately, librarians don't scale.
18. This has already happened in trademark law, which is supposed to prevent consumer confusion, but just as often is a form of offensive warfare among companies, consumer interests be damned.
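To make the zero-sum accounting in note 15 concrete, here is a minimal sketch in Python, using hypothetical site names: when one site rises fifty places, the fifty sites it passes each fall exactly one place.

```python
# A minimal sketch of the zero-sum accounting in note 15, using
# hypothetical site names. A ranking is a permutation of the same
# sites, so every rise must be offset by an equal total of falls.

before = [f"site{i}" for i in range(1, 101)]  # sites ranked #1 through #100

# Promote the site ranked #51 to #1--a 50-place rise.
after = before.copy()
mover = after.pop(50)
after.insert(0, mover)

rise = before.index(mover) - after.index(mover)  # the mover climbs 50 places
falls = sum(
    after.index(site) - before.index(site)
    for site in before
    if after.index(site) > before.index(site)
)  # the 50 sites it passed each fall exactly one place

assert rise == falls == 50
print(f"one {rise}-place rise, offset by {falls} one-place falls")
```

The same bookkeeping holds for any reshuffling of a results list: summed over all sites, rises and falls cancel exactly.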
----
## Bibliography ##
### Books ###
Ken Auletta, [Googled: The End of the World As We Know It](http://www.amazon.com/Googled-End-World-As-Know/dp/1594202354) (Penguin 2009)
John Battelle, [The Search: How Google and Its Rivals Rewrote the Rules of Business and Transformed Our Culture](http://www.amazon.com/Search-Rewrote-Business-Transformed-Culture/dp/1591840880) (Portfolio 2005)
Alex Halavais, [The Search-Engine Society](http://www.polity.co.uk/book.asp?ref=9780745642147) (Polity 2009)
Amy N. Langville & Carl D. Meyer, [Google's PageRank and Beyond: The Science of Search Engine Rankings](http://press.princeton.edu/titles/8216.html) (Princeton University Press 2006)
Dawn C. Nunziato, [Virtual Freedom: Net Neutrality and Free Speech in the Internet Age](http://www.sup.org/book.cgi?id=10874) (Stanford University Press 2009)
Ian H. Witten et al., [Web Dragons: Inside the Myths of Search Technology](http://www.elsevierdirect.com/product.jsp?isbn=9780123706096) (Morgan Kaufmann 2007)
### Articles ###
Jerome Barron, [Access to the Press: A New First Amendment Right](http://www.judgewatch.org/lawsuit-nyt/outreach/law-schools/Barron-Access-to-Press.pdf), 80 Harv. L. Rev. 1641 (1967)
Oren Bracha & Frank Pasquale, [Federal Search Commission: Access, Fairness, and Accountability in the Law of Speech](http://www.lawschool.cornell.edu/research/cornell-law-review/upload/Bracha-Pasquale-Final.pdf), 93 Cornell L. Rev. 1149 (2008)
Jennifer A. Chandler, [A Right to Reach an Audience: An Approach to Intermediary Bias on the Internet](http://law.hofstra.edu/pdf/Academics/Journals/LawReview/lrv_issues_v35n03_i07.pdf), 35 Hofstra L. Rev. 1095 (2007)
Alejandro M. Diaz, [Through the Google Goggles: Sociopolitical Bias in Search Engine Design](http://epl.scu.edu/~stsvalues/readings/Diaz_thesis_final.pdf) (May 23, 2005) (unpublished B.A. thesis, Stanford University)
Edward W. Felten, [The Nuts and Bolts of Network Neutrality](http://itpolicy.princeton.edu/pub/neutrality.pdf) (2006)
Batya Friedman & Helen Nissenbaum, [Bias in Computer Systems](http://www.nyu.edu/projects/nissenbaum/papers/biasincomputers.pdf), 14 ACM Trans. on Computer Sys. 330 (1996)
Eric Goldman, [A Coasean Analysis of Marketing](http://papers.ssrn.com/sol3/papers.cfm?abstract_id=912524), 2006 Wis. L. Rev. 1151
Eric Goldman, [Search Engine Bias and the Demise of Search Engine Utopianism](http://www.yjolt.org/files/goldman-8-YJOLT-188.pdf), 8 Yale J. L. & Tech. 188 (2006), available in The Next Digital Decade: Essays on the Future of the Internet 461 (Berin Szoka & Adam Marcus eds. 2010)
Eric Goldman, [Deregulating Relevancy in Internet Trademark Law](http://www.law.emory.edu/fileadmin/journals/elj/54/54.1/Goldman.pdf), 54 Emory L.J. 507 (2005)
Ellen P. Goodman, [Stealth Marketing and Editorial Integrity](http://papers.ssrn.com/sol3/papers.cfm?abstract_id=896239), 85 Tex. L. Rev. 83 (2006)
James Grimmelmann, [Don't Censor Search](http://www.yalelawjournal.org/the-yale-law-journal-pocket-part/intellectual-property/don%E2%80%99t-censor-search/), 117 Yale L.J. Pocket Part 48 (2007)
James Grimmelmann, [The Structure of Search Engine Law](http://works.bepress.com/cgi/viewcontent.cgi?article=1012&context=james_grimmelmann), 93 Iowa L. Rev. 1 (2007)
James Grimmelmann, [Information Policy for the Library of Babel](http://works.bepress.com/cgi/viewcontent.cgi?article=1015&context=james_grimmelmann), 3 J. Bus. & Tech. L. 29 (2008)
James Grimmelmann, [The Google Dilemma](http://works.bepress.com/cgi/viewcontent.cgi?article=1018&context=james_grimmelmann), 53 New York L. Sch. L. Rev. 939 (2009)
Lucas D. Introna & Helen Nissenbaum, [Shaping the Web: Why the Politics of Search Engines Matters](http://www.nyu.edu/projects/nissenbaum/papers/searchengines.pdf), 16 Info. Soc. 169 (2000)
Geoffrey A. Manne & Joshua D. Wright, [Google and the Limits of Antitrust: The Case Against the Antitrust Case Against Google](http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1577556), Harv. J. L. & Pub. Pol'y (forthcoming)
Viva R. Moffat, [Regulating Search](http://jolt.law.harvard.edu/articles/pdf/v22/22HarvJLTech475.pdf), 22 Harv. J. L. & Tech. 475 (2009)
Andrew Odlyzko, [Network Neutrality, Search Neutrality, and the Never-ending Conflict Between Efficiency and Fairness in Markets](http://www.dtc.umn.edu/~odlyzko/doc/rne81.pdf), 8 Rev. Network Econ. 40 (2009)
Sandeep Pandey et al., [Shuffling a Stacked Deck: The Case for Partially Randomized Search Results](http://www.cs.cmu.edu/~spandey/publications/randomRanking.pdf), Proc. 31st Very Large Databases Conf. 781 (2005)
Frank Pasquale, [Dominant Search Engines: An Essential Cultural and Political Facility](http://nextdigitaldecade.com/ndd_book.pdf#page=402), in The Next Digital Decade: Essays on the Future of the Internet 402 (Berin Szoka & Adam Marcus eds. 2010)
Frank Pasquale, [Trusting (and Verifying) Online Intermediaries' Policing](http://nextdigitaldecade.com/ndd_book.pdf#page=348), in The Next Digital Decade: Essays on the Future of the Internet 348 (Berin Szoka & Adam Marcus eds. 2010)
Frank Pasquale, [Beyond Innovation and Competition: The Need for Qualified Transparency in Internet Intermediaries](http://www.law.northwestern.edu/lawreview/v104/n1/105/LR104n1Pasquale.pdf), 104 Nw. U. L. Rev. 105 (2010)
Frank Pasquale, [Asterisk Revisited: Debating a Right of Reply on Search Results](http://www.law.umaryland.edu/academics/journals/jbtl/issues/3_1/3_1_061_Pasquale.pdf), 3 J. Bus. & Tech. L. 61 (2008)
Frank Pasquale, [Internet Nondiscrimination Principles: Commercial Ethics for Carriers and Search Engines](http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1134159), 2008 U. Chi. Legal Forum 263
Frank Pasquale, [Rankings, Reductionism, and Responsibility](http://papers.ssrn.com/sol3/papers.cfm?abstract_id=888327), 54 Clev. St. L. Rev. 115 (2006)
Mark R. Patterson, [Non-Network Barriers to Network Neutrality](http://www.fordhamlawreview.org/assets/pdfs/Vol_78/Patterson_Vol_78_May.pdf), 78 Fordham L. Rev. 2843 (2010)
Barbara van Schewick, [Towards an Economic Framework for Network Neutrality Regulation](http://papers.ssrn.com/sol3/papers.cfm?abstract_id=812991), 5 J. Telecomm. & High Tech. L. 329 (2007)
Rebecca Tushnet, [It Depends on What the Meaning of "False" Is: Falsity and Misleadingness in Commercial Speech Doctrine](http://www.tushnet.com/law/whatisfalse.pdf), 41 Loyola L.A. L. Rev. 101 (2008)
### News, Blogs, and Web ###
Scott Cleland, [Why Google Is Not Neutral](http://precursorblog.com/content/why-google-is-not-neutral), Precursor Blog (Nov. 4, 2009)
Consumer Watchdog, [Inside Google](http://insidegoogle.com/)
Consumer Watchdog, [Traffic Report: How Google Is Squeezing Out Competitors and Muscling into New Markets](http://www.consumerwatchdog.org/resources/TrafficStudy-Google.pdf) (June 2, 2010)
Ben Edelman, [Hard-Coding Bias in Google "Algorithmic" Search Results](http://www.benedelman.org/hardcoding/) (Nov. 15, 2010)
Adam Kovacevich, [Google's Approach to Competition](http://googlepublicpolicy.blogspot.com/2009/05/googles-approach-to-competition.html), Google Policy Blog (May 8, 2009)
Chris Lake, [Foundem vs Google: A Case Study in SEO Fail](http://econsultancy.com/blog/4456-foundem-vs-google-a-case-study-in-seo-fail), Econsultancy (Aug. 18, 2009)
John Lettice, [When Algorithms Attack, Does Google Hear You Scream?](http://www.theregister.co.uk/2009/11/19/google_hand_of_god/), The Register (Nov. 19, 2009)
[The Google Algorithm](http://www.nytimes.com/2010/07/15/opinion/15thu3.html), N.Y. Times (July 14, 2010)
Frank Pasquale, [Could Personalized Search Ruin Your Life?](http://www.concurringopinions.com/archives/2008/02/personalized_se.html), Concurring Opinions (Feb. 7, 2008)
Adam Raff, [Search, But You May Not Find](http://www.nytimes.com/2009/12/28/opinion/28raff.html), N.Y. Times (Dec. 27, 2009)
Clay Shirky, [Power Laws, Weblogs, and Inequality](http://www.shirky.com/writings/powerlaw_weblog.html), Shirky.com (Feb. 8, 2003)
Danny Sullivan, [The New York Times Algorithm & Why It Needs Government Regulation](http://searchengineland.com/regulating-the-new-york-times-46521), Search Engine Land (July 15, 2010)
SearchNeutrality.org:
* [About](http://www.searchneutrality.org/about) (Oct. 9, 2009)
* [Google Penalty Myths](http://www.searchneutrality.org/foundem-google-story/myths-surrounding-google-penalties) (Nov. 19, 2009)
* [Foundem's Google Story](http://www.searchneutrality.org/foundem-google-story) (Aug. 18, 2009)
* [Search Neutrality](http://www.searchneutrality.org/search-neutrality) (Oct. 11, 2009)
Techdirt:
* Karl Bode, [Google Might Stop Violating 'Search Neutrality' If Anybody Knew What That Actually Meant](http://www.techdirt.com/articles/20100504/1324279300.shtml) (May 7, 2010)
* Mike Masnick, [There Is No Such Thing As Search Neutrality, Because The Whole Point Of Search Is To Recommend What's Best](http://www.techdirt.com/articles/20100615/1849299842.shtml) (June 18, 2010)
* Mike Masnick, [A Recommendation Is Not The Same As Corruption](http://www.techdirt.com/articles/20100621/0355239887.shtml) (June 21, 2010)
* Mike Masnick, [Journalism Neutrality Now! Why The Government Needs To Oversee The NY Times' Editorial Neutrality](http://www.techdirt.com/articles/20100716/01484810239.shtml) (July 16, 2010)
Chi-Chu Tschang, [The Squeeze at China's Baidu](http://www.businessweek.com/magazine/content/09_02/b4115021710265.htm), BusinessWeek (Dec. 31, 2008)
Richard Waters, [Unrest over Google’s Secret Formula](http://www.ft.com/cms/s/0/1a5596c2-8d0f-11df-bad7-00144feab49a.html), Financial Times (July 11, 2010)
### Legal Documents ###
Federal Trade Commission, Guides Concerning Use of Endorsements and Testimonials in Advertising [16 CFR part 255](http://ecfr.gpoaccess.gov/cgi/t/text/text-idx?c=ecfr&tpl=/ecfrbrowse/Title16/16cfr255_main_02.tpl)
In the Matter of Preserving the Open Internet; Broadband Industry Practices, GN Docket No. 09-191 (F.C.C.):
* [Comments of AT&T Inc.](http://fjallfoss.fcc.gov/ecfs/document/view?id=7020377217) (Jan. 14, 2010)
* [Comments of Comcast Corporation](http://fjallfoss.fcc.gov/ecfs/document/view?id=7020375772) (Jan. 14, 2010)
* [Comments of Verizon and Verizon Wireless](http://fjallfoss.fcc.gov/ecfs/document/view?id=7020378523) (Jan. 14, 2010)
* [Comments of Time Warner Cable Inc.](http://fjallfoss.fcc.gov/ecfs/document/view?id=7020375997) (Jan. 14, 2010)
* [Reply Comments of Foundem](http://fjallfoss.fcc.gov/ecfs/document/view?id=7020389727) (Feb. 23, 2010)
* [Reply Comments of Google Inc.](http://fjallfoss.fcc.gov/ecfs/document/view?id=7020438889) (Apr. 26, 2010)
* [Reply Comments of Time Warner Cable Inc.](http://fjallfoss.fcc.gov/ecfs/document/view?id=7020437390) (Apr. 26, 2010)
[Letter from Robert W. Quinn, Jr.](http://www.att.com/Common/about_us/public_policy/Letter_to_FCC_Google_Voice.pdf), Senior Vice President, AT&T, to Sharon Gillett, Chief, Wireline Competition Bureau Federal Communications Commission (Sept. 25, 2009)
TradeComet.com LLC v. Google Inc., No. 09-CIV-1400 (S.D.N.Y.):
* [Complaint](http://docs.justia.com/cases/federal/district-courts/new-york/nysdce/1:2009cv01400/340565/1/) (Feb. 17, 2009)
* [Opinion](http://www.courthousenews.com/2010/03/08/Google%20opinion.pdf), 693 F. Supp. 2d 370 (2010)
[Letter from Heather Hippsley](http://www.ftc.gov/os/closings/staff/commercialalertletter.shtm), Acting Assoc. Dir., Div. of Adver. Practices, Fed. Trade Comm., to Gary Ruskin, Executive Dir. at Commercial Alert (June 27, 2002)