In 2013, there was certainly no shortage of complex issues raised in the policy debate. Some issues were relatively new (e.g., getting the IP Transition underway), some were ones with which we constantly struggle (e.g., spectrum allocation and digital piracy), and some re-emerged from hibernation after we thought everybody had moved on (e.g., handset unlocking and international broadband rankings). Naturally, with so many complex policy issues in play, we also found no shortage of really bad analysis that either gerrymandered the facts, ignored the law, and/or lacked any understanding of basic economic theory. So with New Year's rapidly approaching, I thought I would use our last blog of the year to highlight what we here at the Phoenix Center found to be the most interesting policy issues of 2013, and some examples of where we believe we added constructively to the debate.
IP Transition
As new FCC Chairman Tom Wheeler recently observed, the Commission has spent the last year “listen[ing]” to various arguments about how to proceed with the IP Transition and “now it is the time to act.” While we do not know exactly what ideas Mr. Wheeler is interested in, given the amount of noise we made on the issue, we hope we got his attention.
For example, we issued a paper entitled Searching for a New Regulatory Paradigm: A Comment on AT&T’s Petition for Wire Center Trials, where we focused on firms’ incentives during AT&T’s proposed wire center trials. We presented an economic model showing that, given the FCC’s heavy regulatory oversight of the proposed trials, participating firms are likely to be on their best behavior during these field experiments. As a result, we showed that these trials will provide significant evidence of industry “best” practices (although those practices are expected to be slightly biased against the unconstrained interests of those favoring reform), leaving a trail of precedent applicable to a more widespread implementation of regulatory reform.
We also issued a paper entitled Lessons Learned from the U.S. Unbundling Experience, which offers many lessons applicable to the IP Transition. In particular, our paper showed that (1) before you formulate any regulatory paradigm, it is important to understand the complex underlying economics of the last mile; (2) policymakers must make sure the incentives of the stakeholders are properly aligned (otherwise you create the incentive for sabotage); and (3) policymakers must recognize that any regulatory paradigm can easily be undercut by technological advances.
Finally, last summer we held our ever-popular Rooftop Policy Roundtable to discuss the IP Transition over drinks and cigars. Not only were we honored with a keynote presentation from FCC Commissioner Ajit Pai, but for those who stayed and braved the Rain Gods, we had an amazing discussion with an all-star panel. While you can read my full recap of the event here, perhaps my biggest take-away from our Roundtable is that the IP Transition is not a discrete issue; instead, the IP Transition concerns the whole regulatory ecosystem surrounding the communications industry. Indeed, the Roundtable made clear to me that if we are to develop a cohesive regulatory paradigm for an IP-based world, then we must also include in the conversation such complex and diverse issues as interconnection, copper retirement, universal service, intercarrier compensation, carrier of last resort obligations, etc. Simply stated, the IP Transition is not going to be a quick and easy process for us to work through, but it’s an important one and we shouldn’t be afraid to take it on.
Spectrum/Wireless Issues
In 2013, the Phoenix Center once again focused on a wide range of spectrum issues which continue to occupy center stage among the policy set.
For example, we spent a considerable amount of time continuing our work exploring the relationship between spectrum allocation and industry structure. This work included the publication of our paper Wireless Competition Under Spectrum Exhaust in the Federal Communications Law Journal, having George testify before the Senate Commerce Committee (testimony available here), and having the Federal Communications Commission repeatedly cite our research in its recent 16th CMRS Report as analytical support for breaking the link between market shares and market performance (see here).
We also spent a fair amount of time focusing on the FCC’s upcoming voluntary incentive auctions and, in particular, efforts by the Department of Justice and others to impose some sort of bidder exclusion rules. This research included not only two formal papers (Phoenix Center Policy Bulletin No. 33, Equalizing Competition Among Competitors: A Review of the DOJ’s Spectrum Screen Ex Parte Filing (May 2013) and Phoenix Center Policy Perspective No. 13-03, Will Bidder Exclusions Increase Auction Revenue? A Review of the Arguments (June 11, 2013)), but also a series of blogs (available here, here and here). Yet, perhaps most surprising to us was the fact that even though the Phoenix Center has a long-standing policy of not filing anything at the FCC, T-Mobile nonetheless felt compelled to hire both a special outside counsel and an expert witness to formulate (and subsequently file in the FCC’s record) a response to our research (which was simply posted on our webpage). While we were slightly dismayed by the harsh tone of T-Mobile’s attacks, we were nonetheless flattered that they believed our research had such a major impact on the policy debate. So, consistent with our policy of posting meaningful critiques of our work on our webpage to foster academic debate, you can view T-Mobile’s filing here, as well as our response (which was also not filed at the FCC but simply posted on our webpage) here.
Also, we attempted to tackle the difficult (and heretofore unresolved) issue of how government can use and manage spectrum more efficiently in the hopes of repurposing spectrum for commercial use. In a paper entitled Market Mechanisms and the Efficient Use and Management of Scarce Spectrum Resources, we looked into this issue and concluded that if the goal of spectrum use and management is economic efficiency, then policymakers should expand the private sector’s management of the nation’s scarce spectrum resources.
George was also invited to testify before the House Energy and Commerce Committee at a hearing seeking to determine whether mHealth applications transform smartphones into “medical devices” subject to both the Affordable Care Act’s 2.3% medical device excise tax and regulation by the Food and Drug Administration. For those interested in watching a fascinating hearing, video is available on the Phoenix Center’s YouTube channel here and George’s testimony is available here.
Finally, we are pleased to report that our spectrum work went international in 2013, as we were asked by USAID to author a study on Thailand’s efforts to use market mechanisms to allocate spectrum. We truly appreciated the hospitality we received when we traveled to Bangkok, and we certainly enjoyed noshing our way through Thailand’s famous street food scene. Best of all, we can now check off riding an elephant from our bucket lists. (Pictures available upon request.)
The “Monopolization” Narrative
One of the constant arguments we hear in the broadband debate is that firms are somehow “acting like monopolists” or “exercising market power.” However, as we repeatedly showed, these arguments are often based on an improper application of economic theory and, as such, the facts cited by the “monopolization narrative” proponents regularly belie their arguments.
For example, when Mark Cooper of the Consumer Federation of America attempted a formal analysis of wireless firms’ pricing, profits, and efficiency, George felt compelled to call out its numerous flaws. As George explained in great detail in his blog Price, Profit, and Efficiency: Mark Cooper’s Bungled Analysis, the evidence Mark presented provides no strong basis for suspecting that AT&T and Verizon are exercising undue market power in the mobile wireless industry. Using proper economic techniques, George demonstrated that what Mark’s evidence does suggest is that the companies are reaping positive returns on their investments in superior quality, and that’s a good thing.
And then we had Washington Post technology reporter Tim Lee’s unsupported claim that Comcast is “acting more and more like a monopolist.” Not only was George unable to find any economic argument, formal or informal, in Mr. Lee’s reporting to support the claim that large quality spreads are monopoly behavior, but George also pointed out (again using proper economic techniques) how Mr. Lee’s own data belied his argument. For those interested in reading the complete post, A Response to the WaPo’s Timothy Lee: Why Comcast is NOT Acting Like a Monopolist, it can be found here.
Last, but certainly not least, we had Susan Crawford’s controversial book Captive Audience: The Telecom Industry and Monopoly Power in the New Gilded Age. This was a thought-provoking book, and I was delighted to see that Professor Crawford quoted me by name in the text. As the overwhelming majority of critiques of her book were nothing more than ad hominem attacks, we felt Professor Crawford deserved a legitimate academic critique on the merits. Unfortunately, we found, among numerous other problems, that Professor Crawford did not check her citations carefully regarding, for example, the true cost of building out a nationwide network or the true number of homes cable companies serve in their service territories, making it impossible to assign the book any analytical credibility. As a result, rather than making a substantive contribution to either the debate or the literature through scholarship and attention to detail, Professor Crawford’s sloppy research came off as pure advocacy. A copy of our critique, Sloppy Research Sinks Susan Crawford’s Book, may be downloaded here.
Examining the Relationship Between the DOJ and FCC
Examining the relationship between the antitrust authorities and the Federal Communications Commission has long been a popular subject of research at the Phoenix Center (see, e.g., here). In 2013, I authored two blogs on this topic which we believe warrant mention here.
In my first blog, I argued that it’s time for the FCC to eliminate the ex parte rule that essentially allows it to meet in total secrecy with the DOJ. As I explained, this rule has been increasingly abused over the last several years (see, e.g., the DOJ/FCC tag team in the AT&T/T-Mobile merger). But the fact that I found a video of Assistant Attorney General William Baer proudly testifying before the Senate Judiciary Committee that the DOJ worked “very cooperatively—quietly—with the Federal Communications Commission” in formulating the DOJ’s position championing bidder exclusion rules for the upcoming voluntary incentive auctions is beyond the pale and raises serious due process concerns.
I also authored a blog entitled A Fresh Analytical Start at the FCC, where I argued that if the FCC is to re-establish its reputation as the “expert” agency, then it needs to reject calls to adopt the same analytical approach used by the antitrust authorities. Using four recent case studies—the AT&T/T-Mobile merger, the Phoenix Forbearance Order, Special Access, and the upcoming voluntary incentive auctions—I showed that a typical antitrust “head count” approach leads to more regulation, not less. As such, I argued that if we are truly going to achieve Congress’ stated desire of a “pro-competitive” and “deregulatory” communications marketplace, then the Commission is going to have to employ a more innovative analysis than a boilerplate “antitrust-type” approach can provide. Instead, the FCC must be an expert in the economics of markets characterized by few firms (often burdened by Congress with significant social obligations), not markets characterized by many firms.
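To see why a mechanical “head count” almost inevitably flags a market like wireless, consider the arithmetic of the standard concentration screen, the Herfindahl-Hirschman Index (this is our illustration of the point; the blog itself does not walk through the numbers):

\[
\text{HHI} \;=\; \sum_{i} s_i^{2}, \qquad \text{e.g., four equally sized firms: } 4 \times 25^{2} = 2{,}500,
\]

where \(s_i\) is firm \(i\)’s market share in percentage points. Four equal firms sit right at the 2,500 threshold the DOJ/FTC 2010 Horizontal Merger Guidelines use to define a “highly concentrated” market. In other words, a market that spectrum scarcity and licensing policy limit to a handful of nationwide carriers trips the screen by construction, before any evidence on prices, output, or investment is even considered.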
Intellectual Property Protection
Over the past couple of years, the Phoenix Center has refuted the notion that digital piracy is costless to society (see, e.g., here). However, given the ever-growing popularity of the “information wants to be free” movement, in 2013 the Phoenix Center found itself pulled back into the debate.
For example, in a Policy Perspective entitled Piracy and Movie Revenues: A Critical Review of “A Tale of the Long Tail”, George showed that a claim by researchers at the University of Munich and the Copenhagen Business School that digital piracy actually increases box office sales for some films was an artifact of a poorly-designed statistical model, which was, in part, a consequence of the study’s authors ignoring the basic economics of the box office. Similarly, in a blog I wrote entitled Who is at Fault for On-Line Piracy? According to PiracyData.org, Blame the Victim, I demonstrated that not only was the data used by PiracyData.org unreliable, but the dataset—which purports to show the on-line availability of the most pirated movies on the assumption that digital piracy would be reduced if studios simply made more of their content available on-line faster and cheaper—was irrelevant to the piracy debate in the first instance. (After all, theft is still theft.) The debate about the appropriate legal and economic contours of digital piracy is expected to intensify in the coming years, and we look forward to continuing to contribute to that discussion.
We also looked into the legal ramifications of new technology on the “Transmit Clause” contained in Section 101 of the Copyright Act in a blog entitled The Curious Cases of Aereo, BarryDriller and FilmOn X. Specifically, I examined various courts’ rulings on the legality of new third-party subscription services designed to allow customers to view over-the-air broadcast television via the internet and mobile devices. At issue was a simple legal question: whether these services facilitate a “public performance” of protected works under Section 101 of the Copyright Act (a.k.a. the “Transmit Clause”), which provides, inter alia, that a work is performed publicly whenever such work is “transmit[ted] or otherwise communicate[d] … to the public, by means of any device or process, whether the members of the public capable of receiving the performance or display receive it in the same place or in separate places and at the same time or at different times.” The Second Circuit ruled that the answer is no, finding that these services essentially act as a high-tech DVR; however, two district courts (one in California and one in the District of Columbia) held that the answer is yes and granted preliminary injunctions against further use of the technology. As I explained, these services, at least to the two district courts, were clearly a bridge too far: to them, the technology was patently developed to produce a product designed to look and quack like a souped-up Betamax but act like a (de facto) cable provider. As I also predicted, this issue is now pending before the Supreme Court, and it will be interesting to see whether the Court grants certiorari and, if so, how it eventually rules.
Finally, we also waded into the Byzantine world of Retransmission Consent with a policy paper that provided (as far as we know) the first policy-relevant economic theory of the issue (see here). Taking into account the social contract between the government and broadcasters to serve the “public interest” (e.g., to provide “local” programming and a “diversity of voices” to as many Americans as possible), we showed that the “market” outcome for the license fee under the Retransmission Consent paradigm may not be socially efficient. As our paper explains, broadcast regulation creates a type of positive information externality, and private transactions do not typically account for externalities, meaning the market price for the retransmission fee is theoretically “too high,” both relative to the socially-optimal price and to the market price of an otherwise-equivalent cable network. This “spread” (which we do not quantify) is a consequence of a disharmony between the historical and continuing policy of the broadcast social contract and the “market” approach embodied in the Retransmission Consent regime.
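For readers who want the intuition in symbols, here is a minimal stylized sketch (our illustrative notation and assumptions, not necessarily the model in the paper). Assume the negotiated fee \(f\) is ultimately passed through to subscribers, so subscribership \(q(f)\) falls as the fee rises, and let \(E > 0\) be the per-subscriber external benefit created by the broadcast social contract. The bargained fee and the socially optimal fee then solve

\[
f^{m} \;=\; \arg\max_{f}\, \pi(f), \qquad
f^{*} \;=\; \arg\max_{f}\, \big[\pi(f) + E\,q(f)\big],
\]

where \(\pi(f)\) is the negotiating parties’ joint private surplus. The first-order conditions give \(\pi'(f^{m}) = 0\) but \(\pi'(f^{*}) = -E\,q'(f^{*}) > 0\) because \(q'(f) < 0\); with \(\pi\) concave, it follows that \(f^{*} < f^{m}\). The point of the sketch is simply that a bilateral bargain maximizes \(\pi\) alone and ignores the \(E\,q(f)\) term entirely, so the unquantified “spread” corresponds to \(f^{m} - f^{*}\), which grows with the size of the externality \(E\).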
Handset Unlocking
The Phoenix Center has been looking at the law and economics of handset unlocking—and its broader cousin, “Wireless Carterfone”—for years. While we thought everybody had moved on from this sophistic debate, the issue returned to the forefront (complete with White House intervention as a result of a populist write-in campaign) when the Librarian of Congress ruled that handset unlocking no longer warranted an exemption from the non-circumvention provisions of the Digital Millennium Copyright Act given the abundance of competition for handset devices. As I explained in two blogs (here and here), as well as in a debate on this issue sponsored by our friends at TechFreedom (video available here), the handset unlocking debate is not about the appropriate bounds of DMCA copyright enforcement; instead, the real debate is about how to get an unlocked phone at a subsidized price. Nonetheless, making his first act as the new FCC Chairman the enforcement (or non-enforcement) of copyright law, Tom Wheeler shined the spotlight on the major wireless carriers, encouraging them to set forth formal policies on device unlocking with standardized intervals and triggering events. These policies put increasing pressure on the industry to kill subsidized devices—a death the FCC has been promoting for years. While we are generally not ones to say “see, I told you so” (OK, we are), I would remind folks that we formally demonstrated several years ago that such policies could slow the diffusion of new technology, diminish innovation in mobile handsets, and raise handset prices. So, with the growing demise of handset subsidies, it appears that our predictions were correct (at least in the short term).
Benefits of Broadband
Way back in 2009, George co-authored a seminal study entitled Internet Use and Depression Among the Elderly (subsequently published in the refereed journal Computers in Human Behavior), where he demonstrated a strong statistical relationship between increased Internet use and reduced depression among the elderly. In 2013, George updated this research in a paper entitled Revisiting Internet Use and Depression Among the Elderly, where he expanded the dataset and applied statistical methods that assess depression and Internet use over time. In his update, George found that Internet use reduces depression by 34%, a result slightly larger than the 20-28% reduction found when the Phoenix Center released his original findings in 2009.
Net Neutrality
With the FCC’s Open Internet Order winding its way through the courts in 2013, the fate of the Order now rests in the hands of the D.C. Circuit. While various legal arguments have been bandied about, to our knowledge we are the only ones who have focused on the factual predicate (or lack thereof) of the Commission’s Order. That is to say, in a Perspective we published last year (and re-published in a journal this year), we demonstrated how the Federal Communications Commission deliberately ignored its own evidence in order to support expanded regulatory jurisdiction over IP-based services. Specifically, we showed how the FCC ignored its own financial analysis, conducted as part of its National Broadband Plan, which found that ubiquitous availability via wireline or terrestrial wireless service is unreasonably costly. Even worse, we showed how the agency rejected the National Broadband Plan’s recommendation that it consider using satellite, finding such service to be inferior. Well, guess what? In a blog I wrote this year entitled The FCC Contradicts Their Facts (Again) To Justify Expanded Broadband Regulation, I pointed out that in its 2013 Measuring Broadband America Report, the FCC suddenly found that satellite broadband was the greatest thing since sliced bread. So, with the Commission’s admission that satellite will, in fact, now “support many types of popular broadband services and applications,” the FCC has plainly conceded that the major factual predicate for its invocation of Section 706 as an independent source of legal authority over advanced services is no longer true.
International Broadband Rankings
Way back in 2007/2008, proponents of increased government intervention attempted to use international broadband rankings as evidence of some sort of market failure. Through our extensive work (see here) and the work of others, it was demonstrated that these arguments were pure sophistry, and the debate moved on. That said, some folks just can’t take “no” for an answer and nonetheless seek to take another bite at the apple. For example, in 2013 the New America Foundation twice attempted to provoke an emotional response to a fabricated broadband “crisis” using international ranking data. As George pointed out in two scathing critiques (see here and here), however, New America’s researchers made numerous technical errors and, as such, their work cannot be accorded any analytical credibility. Indeed, George demonstrated—using proper economic techniques—that the evidence New America presented actually suggested the following: first, that currently observed broadband prices in the U.S. (at least in the form of a triple-play bundle) are consistent with competitive outcomes (i.e., the lowest reasonable prices); and second, that an increase in government involvement, up to and including owning and operating a network, is not going to bring significantly lower prices for broadband services in America.
Conclusion
So why is it we do what we do? Because even as policy debates have become increasingly politicized over the years, we still believe (perhaps over-optimistically) that, in the end, substance matters. The policy choices we face are hard, and they should be treated with the respect and analytical rigor they deserve.
We appreciate everybody’s support of our work in 2013, and we look forward to contributing to the debate again in 2014.