The Search for Benchmarks: When Do Crowds Provide Wisdom?
Citation
Lee, Charles M.C., Paul Ma, and Charles C.Y. Wang. "The Search for Benchmarks: When Do Crowds Provide Wisdom?" Harvard Business School Working Paper, No. 15-032, October 2014. (Revised November 2014.)

Abstract
We compare the performance of a comprehensive set of alternative peer identification schemes used in economic benchmarking. Our results show that the peer firms identified by aggregating informed agents' revealed choices in Lee, Ma, and Wang (2014) perform best, followed by peers with the highest overlap in analyst coverage, in explaining cross-sectional variation in base firms' out-of-sample: (a) stock returns, (b) valuation multiples, (c) growth rates, (d) R&D expenditures, (e) leverage, and (f) profitability ratios. Conversely, peer firms identified by Google and Yahoo Finance, as well as product market competitors gleaned from 10-K disclosures, turned in consistently worse performances. We contextualize these results in a simple model that predicts when information aggregation across heterogeneously informed individuals is likely to improve economic benchmarking.

Terms of Use
This article is made available under the terms and conditions applicable to Open Access Policy Articles, as set forth at http://nrs.harvard.edu/urn-3:HUL.InstRepos:dash.current.terms-of-use#OAP

Citable link to this page
http://nrs.harvard.edu/urn-3:HUL.InstRepos:13350433
Collections
- HBS Scholarly Articles