Roy Epstein & Paul Malherbe published a serious paper (2011) about comparable royalty rates used to determine patent damages in legal proceedings. However, they make several factual claims whose validity is undermined by the paucity of data they examined. Relying on this incomplete information, Epstein & Malherbe enumerate certain pitfalls of using royalty rates databases, and we challenge those claims as fallacious. See http://www.royepstein.com/Epstein%20&%20Malherbe%20-%202011.03.28%20-%20EPE%20(final%20AIPLA%20revisions).pdf
Herein we offer several friendly caveats addressing Epstein & Malherbe’s particular claims. More research covering the topics discussed below is needed to clarify these issues in detail.
Referring to certain recent court decisions, including Uniloc USA, Inc. v. Microsoft Corp., Lucent Technologies, Inc. v. Gateway, Inc., and i4i Ltd. Partnership v. Microsoft Corp., Epstein & Malherbe infer (p. 5): “These decisions show an increasing demand for economic analysis to support claimed reasonable royalties. Lucent emphasized close scrutiny of 'comparable' licenses used as damages benchmarks to avoid erroneous results.”
Epstein & Malherbe are defeatists about the usefulness of finding comparable royalty rates under the Lucent standard. The referenced authors write (pp. 7, 9): “Lucent severely criticized the use of royalty rates from other license agreements as damages benchmarks without sufficient regard to their comparability. … The Federal Circuit emphasized that all of the licenses used by Lucent’s expert as comparables involved pools and cross-licenses, although the case itself involved essentially a single Lucent patent. That made it impossible to determine the stand-alone value of a single patent, which was the key damages issue in Lucent.” Footnotes omitted. A key grievance noted by the authors is insufficient regard for comparability analysis, not data deficiency. These two issues cannot be conflated.
Further, Epstein & Malherbe state (p. 15): “Given the heightened scrutiny of licenses under Lucent, it would not be surprising if damages experts considered using summary royalty database information or royalty surveys as alternative evidence of comparable royalties.” Dampening enthusiasm, the authors caution (p. 15), in a section called “Pitfalls in Using Royalty Databases”: “there are serious pitfalls in using these information sources in a damages analysis. The district court case IP Innovation L.L.C. v. Red Hat, Inc. foreshadows some of these issues by excluding the plaintiff’s expert in part for uncritical reliance on overall reported industry average royalty rates.” Emphasis added. The problem identified by the authors is uncritical reliance on industry average data, not impaired royalty rates databases.
Epstein & Malherbe continue (p. 16): “To help evaluate potential reliance on royalty databases in analyzing a hypothetical negotiation, we investigated the information in RoyaltySource® more closely. Based on our review, it is highly questionable whether the data could ever satisfy a Lucent standard.” It’s important to note that Epstein & Malherbe considered only one of several available databases. Nor are we persuaded that they examined the referenced database in depth, despite the many years of collected royalty rates information it contains. We restate Epstein & Malherbe’s complaints in italics, followed by our comments in regular type:
First, RoyaltySource® is not limited to patent royalties. This is not a defect of RoyaltySource, because users can introduce search criteria to select only patent agreements. E.g., in RoyaltyStat® (another, independent royalty rates database, which we encourage Epstein & Malherbe to consult before drawing factual conclusions about royalty rates databases in general), we can select the type of agreement on the search form and exclude non-patent agreements. RoyaltyStat contains only royalty rates extracted from license agreements, and the database contains over 1,500 unique "naked" patent license agreements with disclosed consideration, including the royalty rate. This is a large number of patent agreements available for comparability analysis.
Second, the actual licensing agreements are available for only a small fraction of the royalty rates reported in the database. What’s relevant is the number of patent agreements satisfying particular search criteria reflecting the relevant dispute, not a general statement about the small fraction of available agreements. Since William Gosset’s paper “The Probable Error of a Mean” (Biometrika, March 1908), scientists have been accustomed to drawing conclusions from small samples, by convention regarded as containing up to 30 observations. Thus, a small number of naked patent license agreements is not, per se, an impediment to determining comparable royalty rates. We note that one of the authors, Roy Epstein, published A History of Econometrics (North-Holland, 1987), so we shall not belabor the matter of the sample size needed to draw reliable conclusions.
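To make the small-sample point concrete, the sketch below (in Python, using purely hypothetical royalty rates that are not drawn from any database) computes a Student's t confidence interval for the mean royalty rate of a sample of ten agreements, in the spirit of Gosset (1908):

```python
import statistics

# Hypothetical royalty rates (%) from a small sample of comparable
# "naked" patent license agreements (illustrative values only).
rates = [3.0, 4.5, 2.5, 5.0, 3.5, 4.0, 2.0, 4.5, 3.0, 3.5]

n = len(rates)
mean = statistics.mean(rates)
sd = statistics.stdev(rates)   # sample standard deviation (n - 1 divisor)
se = sd / n ** 0.5             # standard error of the mean

# Two-sided 95% critical value of Student's t with n - 1 = 9 degrees
# of freedom (standard tabulated value).
t_crit = 2.262

lower, upper = mean - t_crit * se, mean + t_crit * se
print(f"mean = {mean:.2f}%, 95% CI = ({lower:.2f}%, {upper:.2f}%)")
```

Even with only ten observations, the interval is informative; whether it is tight enough for a given dispute is, again, a factual matter to be tested, not assumed.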
Third, the reported royalties include patent rights bundled with other types of intellectual property, such as copyrights. Here we have two caveats. First, it’s likely that, while using RoyaltySource, the authors can select agreements in which only “naked” patent rights are conveyed. In RoyaltyStat (despite the similarity of trade names, these are two separate databases that should not be confused), this naked-patent restriction can be written on the search form. Second, whether the royalty rates of patent-only license agreements differ from those of agreements conveying more than patent rights is a factual matter. In principle, we can conjecture that an agreement conveying multiple rights can be modeled with dependent royalty rates, each with a different probability. Under this conjecture, a measurable covariance can be subtracted from the total royalty rate conveyed in the agreement, such that a specific (or residual) patent royalty rate can be calculated. It’s unlikely that multiple royalty rates conveyed in a single agreement are independent, with zero covariance. As such, disregarding the covariance is likely to overestimate the patent royalty rate parsed from the total royalty rate.
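One way to formalize this conjecture (our notation, not the authors'), under the assumption that the bundled rate decomposes additively into a patent component and a non-patent component, $r_T = r_P + r_O$, is through the standard variance decomposition:

```latex
\mathrm{Var}(r_T) = \mathrm{Var}(r_P) + \mathrm{Var}(r_O) + 2\,\mathrm{Cov}(r_P, r_O)
```

Backing out the patent component while treating the two components as independent ignores the $2\,\mathrm{Cov}(r_P, r_O)$ term; when the dependence is positive, part of the joint variation is misattributed to the patent component, consistent with the overestimation concern stated above. This is a sketch under the stated additive assumption, not an established result.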
Fourth, the reported patent royalty rates frequently apply to multiple patents and even large portfolios of patents. Our second caveat above applies here as well. This is a factual matter that can be resolved by due diligence. We know that not all patent license agreements with disclosed consideration enumerate the applicable patent numbers. Among the agreements in which the patent numbers are disclosed (a data field that we extract in RoyaltyStat), we can ascertain whether a single or multiple patent numbers are involved.
Fifth, the databases generally do not distinguish licenses reached as part of litigation settlements from those negotiated in the normal course of business. This is an easy task to accomplish by reviewing the license agreements. We can also build a dichotomous data field indicating whether the royalty rate pertains to a legal settlement, and then test for differences in royalty rates using the constructed dummy (0, 1) variable.
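As a minimal sketch of such a dummy-variable test, assuming hypothetical royalty rates tagged 1 for settlement licenses and 0 for ordinary-course licenses (illustrative values, not database extracts), a Welch t-statistic for the difference in mean royalty rates can be computed as follows:

```python
import math
import statistics

# Hypothetical royalty rates (%) with a settlement dummy:
# 1 = license reached as part of a litigation settlement,
# 0 = license negotiated in the normal course of business.
data = [(5.0, 1), (6.5, 1), (5.5, 1), (7.0, 1), (6.0, 1),
        (3.0, 0), (4.0, 0), (3.5, 0), (2.5, 0), (4.5, 0)]

settle = [r for r, d in data if d == 1]
normal = [r for r, d in data if d == 0]

m1, m0 = statistics.mean(settle), statistics.mean(normal)
v1, v0 = statistics.variance(settle), statistics.variance(normal)
n1, n0 = len(settle), len(normal)

# Welch's t-statistic for the difference in mean royalty rates
# (does not assume equal variances across the two groups).
t_stat = (m1 - m0) / math.sqrt(v1 / n1 + v0 / n0)
print(f"difference = {m1 - m0:.2f} points, t = {t_stat:.2f}")
```

A large t-statistic would indicate that settlement and ordinary-course rates differ significantly, and that the two groups should be analyzed separately rather than pooled.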
Sixth, the data are likely biased upward for purposes of assessing representative royalty rates for litigated patents. This is a serious claim, because biased results are not admissible in science. However, Epstein & Malherbe’s claim of bias is itself speculative, made without probative evidence. RoyaltyStat includes over 2,600 license agreements that were originally filed in redacted versions and for which we obtained unredacted copies through FOIA requests. Using such data, we can test whether the FOIA agreements have significantly lower (or otherwise different) royalty rates than agreements disclosed without a request for confidential treatment from the Securities & Exchange Commission (SEC). Again, this is a factual matter that we can test using the comparable license agreements selected to analyze a given case.
In addition to the misleading claims identified above, Epstein & Malherbe erred because they did not consider the less desirable alternatives to using royalty rates databases. In principle, we must confront a theory against facts and against the most reliable alternative; there is no refutation without a more credible alternative. Thus, determining the applicability of one method, such as using comparable royalty rates from databases, is a two-pronged process that cannot be reduced to “naïve empiricism,” especially one based on the consideration of incomplete data. See Imre Lakatos, “Falsification and the Methodology of Scientific Research Programmes,” in Imre Lakatos & Alan Musgrave, Criticism and the Growth of Knowledge (Cambridge University Press, 1970), p. 120 (“the crucial element in falsification is whether the new theory offer any novel, excess information compared to its predecessor and whether some of this excess information is corroborated.”). Emphasis in the original. As Lakatos argues, granted in a hyperbolic tone, “refutation without an alternative shows nothing but the poverty of our imagination in providing a rescue hypothesis.” Emphasis in the original. See also p. 121, footnote 4. We conjecture that using royalty rates extracted from license agreements, such as the information contained and updated in RoyaltySource and RoyaltyStat, among others, provides excess probative information compared to its predecessor, i.e., compared to relying on a list of 15 qualitative factors for which the desirable statistical properties (unbiasedness, minimum variance or reliability) of the results are difficult to ascertain.

In summary, the pessimistic conclusions drawn by Epstein & Malherbe about the pitfalls of using royalty rates databases reflect a hurried approach to data analysis. Some of their conclusions are reductionist (suffering from naïve empiricism), and some are non sequiturs; they are based on incomplete information and thus lack persuasion.
We agree with the authors' statement (p. 17): “This analysis abundantly confirms the findings in Lucent and Red Hat that merely tabulating rates on an industry-specific basis is inadequate to adjust for comparability.” However, this points to the unacceptability of defective analysis, careless diligence, and lack of due consideration of the available license agreements, and not necessarily to defective data. Epstein & Malherbe’s search for comparable royalty rates satisfying the Lucent standard was incomplete, based on a rushed analysis of only one among several important royalty rates databases; thus, their conclusions about the usefulness of the available royalty rates databases are misleading and subject to challenge.
Ednaldo Silva (Ph.D.) is founder and managing director at RoyaltyStat. He helped draft the US transfer pricing regulations and developed the comparable profits method, called TNMM by the OECD. He can be contacted at: email@example.com
RoyaltyStat provides premier online databases of royalty rates extracted from unredacted agreements, normalized company financials (income statement, balance sheet, cash flow), and annual reports. We provide high-quality data, built-in analytical tools, customer training and attentive technical support.