FINCAP, an unprecedented finance research project
The Finance Crowd Analysis Project (FINCAP) offers an unprecedented meta-scientific view of empirical finance. A total of 164 research teams worked on the project, an exceptional number for the field, drawing researchers from 207 institutions in 34 countries, including central bank economists.
2 TBS Education associate professors involved in #fincap
Two TBS Education associate professors, Anna Calamia and Debrah Meloso, are co-authors of the research paper "Non-standard errors" that grew out of FINCAP. Anna and Debrah are professors in the Department of Economics and Finance and are affiliated with the Finance, Economics, and Econometrics research laboratory at TBS Education. Their research focuses on the functioning of financial markets, and they collaborate on several other projects together.
The first crowd-sourced empirical paper in Economics/Finance
Have you ever wondered whether researchers using the same data set to address the same research question would reach the same conclusions? FINCAP set out to answer this complex question.
This innovative project attracted eminent researchers, including some from the most prestigious French finance departments. FINCAP was run by project coordinators Anna Dreber, Felix Holzmeister, Juergen Huber, Magnus Johannesson, Michael Kirchler, Albert J. Menkveld, Sebastian Neusuess, Michael Razen, and Utz Weitzel of the Stockholm School of Economics, the University of Innsbruck, and the Vrije Universiteit Amsterdam.
Abstract
In statistics, samples are drawn from a population in a data-generating process (DGP). Standard errors measure the uncertainty in sample estimates of population parameters. In science, evidence is generated to test hypotheses in an evidence-generating process (EGP). We claim that EGP variation across researchers adds uncertainty: non-standard errors. To study them, we let 164 teams test six hypotheses on the same sample. We find that non-standard errors are sizeable, on par with standard errors. Their size (i) co-varies only weakly with team merits, reproducibility, or peer rating, (ii) declines significantly after peer-feedback, and (iii) is underestimated by participants.
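To make the abstract's distinction concrete, here is a minimal Python sketch, not the paper's methodology: every hypothetical team analyzes the identical sample, but each applies a slightly different (and individually defensible) outlier cutoff, so estimates disperse across teams even though the data never change. The data-generating process, the trimming rule, and all parameter values below are invented purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy data-generating process (DGP): one shared, skewed sample that
# every "team" receives. (Illustrative only; FINCAP used real trading data.)
n = 5_000
sample = rng.lognormal(mean=0.0, sigma=1.0, size=n)

# Evidence-generating process (EGP): each team makes its own analysis
# choice. Here, a hypothetical choice: where to cut off upper-tail outliers.
def team_estimate(data, upper_q):
    kept = data[data <= np.quantile(data, upper_q)]
    estimate = kept.mean()
    standard_error = kept.std(ddof=1) / np.sqrt(len(kept))
    return estimate, standard_error

# 164 teams, each trimming the upper tail at a slightly different quantile.
cutoffs = rng.uniform(0.95, 1.0, size=164)
results = np.array([team_estimate(sample, q) for q in cutoffs])
estimates, standard_errors = results[:, 0], results[:, 1]

# Standard error: within-team sampling uncertainty.
# Non-standard error: dispersion of estimates ACROSS teams, created
# purely by differing analysis choices on identical data.
print(f"mean standard error: {standard_errors.mean():.4f}")
print(f"non-standard error:  {estimates.std(ddof=1):.4f}")
```

In a toy setup like this, the cross-team dispersion can rival or exceed the average within-team standard error, which echoes, in miniature, the paper's headline finding that non-standard errors are sizeable.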