Search engines are the new canaries in the mineshaft. As they go, so goes the Internet. This new prominence is the result of two linked forms of convergence. The first is technical convergence. What used to be different categories of technologies increasingly resemble each other. The second is doctrinal convergence. What used to be different areas of law increasingly lay claim to regulating the same activities. Search engines are ground zero for both trends.
This presentation was about fifteen minutes long; its purpose was to get a discussion started, rather than to make any points in particular. Hence the abundance of cross-cutting categorizations.
There are three complementary ways of looking at online activities. The question is whether we can harmonize them.
The law-centric view starts from various important social values, as embedded in particular disciplines of law. It asks first what body of law is most relevant to a given situation, and then applies that body of law's rules. This view is having increasing trouble at the borders between areas.
The activity-centric view starts from basic elementary actions and asks how they should be treated. It thus asks whether there should be an absolute freedom to surf, for example, or what the limits on copying should be. It has trouble because sometimes a troubling pattern of behavior may be strung together from several individual actions that are innocent on their own.
The service-centric view treats each kind of application as the proper subject of its own kind of regulation. It envisions a world of "search law," "virtual world law," "router law," and so on. This view has trouble when applications resemble each other.
These are the main headings under which one might take legal action against a search engine. The list is long. The law-centric approach to thinking about search engines starts here.
Another important slice through the ways search engines could be sued is by who is doing the suing. It could either be someone with whom the search engine interacts (often the owner of an indexed site) or some outsider who is harmed by the search relationship (e.g., because it spreads secret knowledge).
If we take an activity-centric view of search, search engines do three main things. They surf the Internet, finding content. They rate that content by relevance. And they link back to the content in response to searches. This slide lists the leading cases (as of 2004) on those activities.
For the service-centric view, the line between search engine and ISP is the most worrisome. ISPs are involved, to some extent, with all three of the basic activities of search. The particular statutory immunities given to ISPs provide one possible approach to thinking about search engine liability.
These, in turn, are the principal ISP statutory immunities. They seem to be converging to a relatively stable set as courts interpret them. Whether they make sense in the search context is one topic for discussion.
There are many distinctions we might choose to consider relevant in evaluating the conduct of search engines.
We might choose to bias towards encouraging or discouraging them in general. (Thus, for example, the DMCA's notice-and-takedown provision is deliberately biased towards rapid takedown.)
We might say that search engine subjectivity is good, or we might consider it bad. SearchKing treats opinion as protected speech; Zeran arguably treats opinion as a sign of bad faith.
We might also say that individual tailoring is good, or a sign of bad faith. SearchKing raises this issue without resolving it. How important is it for a search engine to have a general algorithm?
Whether a search engine is private or commercial almost certainly factors into a copyright fair use analysis and is probably relevant in other contexts, as well.
Does it matter whether a search engine is being generous or mean to an indexed site? Is it different to link to positive speech than to link to negative speech? These factors, even if not legally relevant, may weigh on a court's mind. They may also determine which sorts of cases are litigated in the first place.
Should law defer to technical norms and Internet customs? The robots.txt wars are probably just the beginning of this debate.
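For concreteness, here is a minimal, hypothetical robots.txt file of the sort at issue in those disputes. Under the (then-informal) Robots Exclusion Protocol, a site owner posts this file at the root of the site to ask crawlers to stay out of some or all of it; the directory and crawler names below are invented for illustration:

```
# Hypothetical robots.txt illustrating the Robots Exclusion Protocol.
# A crawler identifying itself as "BadBot" is asked to stay away entirely;
# all other crawlers are asked to skip the /private/ directory.
User-agent: BadBot
Disallow: /

User-agent: *
Disallow: /private/
```

Compliance is purely voluntary on the crawler's part, which is precisely what makes the legal question interesting: is honoring such a request merely good manners, or does the custom carry legal weight?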