Papers 2013, 98/3, 587-591. ISSN 2013-9004.

Revisiting How Professors Think across National and Occupational Contexts

Michèle Lamont
Harvard University
mlamont@wjh.harvard.edu

Revisiting How Professors Think through the lens of the evaluative cultures of Spanish peer reviewers and those of American policy experts raises a number of unanticipated challenges. Below I first discuss the three contributions that consider How Professors Think via cross-national comparisons (Díez Medrano, Lasén, and Valiente) before turning to the discussion of cognitive autonomy at the center of Medvetz's comments, which are inspired by his own particularly illuminating study of American think tanks (Medvetz 2012).

Before proceeding, I wish to thank our four colleagues for making time to think seriously about some of the implications of How Professors Think that I had not previously considered. I am greatly appreciative of their thoughtful contributions, as I am of Alvaro Santana-Acuña and Xavier Coller for suggesting this symposium and for orchestrating it so skillfully. They have created a much-valued opportunity for me to reflect on How Professors Think four years after its publication in English, as the book has since made its way to international audiences via translations into Korean, Chinese, and soon Spanish.

How Professors Think concluded by asking whether it is desirable and possible for peer review "a la americana" to diffuse beyond U.S. borders. In the last chapter I described some of the conditions that make this type of evaluative practice possible in the United States (focusing on factors such as the significant demographic weight of the U.S. research community, the spatial distance and decentralization of its institutions of higher education, and the lengthy graduate education process that brings students into close contact with mentors who shape their self-concept while diffusing implicit evaluation standards). This chapter also suggested why it would not be reasonable to expect the same customary rules of evaluation I described to appear in countries where different conditions for scientific work prevail. This has been confirmed in my collaborative writings on evaluative cultures in Canada (SSHRC 2008), China (Lamont and Sun 2012), Finland (Lamont and Huutoniemi 2011), and France (Cousin and Lamont 2009). For instance, the NORFACE peer review system adopted in Finland and widely used in Europe (Lamont and Huutoniemi 2011) favors bringing in international reviewers to counter the localism that often prevails in small academic communities. This system demonstrates the importance of adapting peer review processes to the features of national research communities.

The comments on How Professors Think provided by Lasén, Valiente, and Díez Medrano go very much in the same direction and add considerable nuance and complexity to the problem by considering the Spanish case in light of these authors' own experience as evaluators. Many of the points they raise concern the extent to which the Spanish system of evaluation can converge with U.S. qua international norms. Lasén notes how the growing participation of non-Spanish evaluators (especially U.S. and British) in the Agencia Nacional de Evaluación y Prospectiva (ANEP) is affecting the practices of Spanish panels in such a way that they come to converge with the practices adopted by the U.S. panels I studied.
She notes that the national academic status order is now challenged by a number of international status markers (e.g., publications in top-rated international journals), which senior scholars serving as evaluators often lack, since, when they started their careers, they were not expected to internationalize their work to the extent now demanded of scholars applying for funding. This professional asymmetry creates paradoxes and tensions that Díez Medrano also notes. It is the first of several challenges the Spanish peer review system faces.

A second significant challenge is the mismanagement of peer review by the public administrators in charge of overseeing the system. Lasén mentions a recent case in which panelists were asked to rank applicants across all disciplines, an impossible task from the perspective of the academic expertise required. France has faced comparable episodes, explained in part by a tradition of state centralization that is fundamentally at odds with respect for academic autonomy and the integrity of the peer review system. Administrative interference tarnishes the legitimacy of research evaluation altogether and discourages researchers from getting involved in funded research (either as applicants or as peer reviewers). Thus, we learn that challenges to peer review come not only from insufficiently professionalized, localistic, and clientelistic academics, but also from power-hungry public administrators who overextend the tentacles of governmental power. An obvious conclusion is that those in charge of scientific and research policies need to show the way if they are seriously committed to fostering more universalistic academic communities.

A third challenge has to do with the criteria of evaluation used in prestigious Spanish competitions. Lasén mentions that one such competition puts more weight on the trajectory of candidates than on their project, which is at odds with international standards. In a recent evaluation of Canadian social science and humanities peer review in which I was involved (SSHRC 2008), the international blue-ribbon panel in charge of the evaluation recommended that less weight be given to the past record of candidates as compared to their research proposal, so as to level the playing field for more junior researchers. I presume change is most likely to come from younger generations of Spanish researchers, given their growing involvement in European and international research communities. For this reason, reducing the impact of past records in scoring proposals should foster major changes in the Spanish academic community.

A fourth challenge concerns the dysfunctional consequences of academics competing for a diminishing pool of grant resources and space in prestigious journals, which generates a considerable waste of time and energy. This raises the question of the desirability of adopting more variegated forms and sites of evaluation (e.g., through the creation of electronic journals, as in Italian sociology; see the editors' notes in the introduction), which would encourage the development of a wider range of complementary types of excellence. This approach is to be contrasted with a form of mono-cropping that pushes young academics to submit themselves to a narrow range of standards.
Added to the requirement of writing in a language other than their native tongue, and that of adopting set formats for articles (as described by Abend 2006), such mono-cropping is unlikely to work to the advantage of Spanish academics. The alternative is to let a hundred thousand flowers bloom, with the risk that lower-quality work emerges and that the better researchers are drowned in a climate of "anything goes."

In her contribution, Valiente points out additional challenges, such as that of finding highly qualified and disinterested reviewers given the size of the Spanish research community. She also mentions differences in the culture of evaluators (concerning, for instance, respect for norms of confidentiality, which, it should be noted, is far from perfect in U.S. academia as well), and the fact that Spanish academics are less likely to think of themselves as active members of a scientific community that requires them to take turns serving as reviewers. Most importantly, she suggests that, whereas the existence of informal customary rules of evaluation may "work" in the United States, it could well have pernicious effects in Spain by feeding clientelism. For instance, respect for the rule of "cognitive contextualization" may get in the way of denouncing instances of corruption when evaluators openly seek to favor researchers they are close to. She also stresses that in a context where there are few high-quality proposals, meeting basic standards such as clarity, feasibility, and methodological soundness should be given more weight than criteria such as originality. She concludes by stressing the significance of establishing and consolidating peer review nationally. I would venture that the British Economic and Social Research Council's approach of selecting, training, and rewarding members of a college of assessors could be a useful way forward for raising the standards of peer review in Spain.

As for Díez Medrano, I appreciated his comparison of How Professors Think not only to the Spanish evaluative culture, but also to the European Union's evaluation commissions, where he has gained considerable experience. He notes a growing bifurcation within the Spanish system between researchers who are embracing international norms and those who are not. The former, he argues, put more emphasis on criteria of evaluation such as social and policy significance and methodological rigor, as opposed to theoretical and substantive contribution. These criteria contrast with those preferred by more traditional researchers, who have mostly put weight on alignment with particular theoretical paradigms (Marxism, feminism, etc.). Such differences in criteria of evaluation are a considerable source of conflict. The outcome is a separation between theoretical reflection and empirical research, which is at odds with U.S., but not European, trends. One is left wondering whether anything is shared among the contributions judged significant across these various contexts (between, let us say, Ulrich Beck and Axel Honneth, as representatives of European social theory, on the one hand, and Michael Hout and Alejandro Portes, to take two examples of U.S. middle-range sociology, on the other). Díez Medrano provides a most convincing description of the factors that may explain the current state of peer review in Spain (characterized by the low autonomy of the academic field), a description that ties current practices to the broader features of national and academic contexts.
His analysis should inspire further collective reflection among Spanish academics on the future of peer review in their country and on how to reform the system while avoiding the perils of the over-quantification of excellence measurement. This kind of measurement is often perceived and denounced as a tool of neo-liberal governmental control, as experienced in France in recent years with the creation of the Agence d'évaluation de la recherche et de l'enseignement supérieur (AERES), whose abolition was predictably announced by the socialist government shortly after it took power in 2012.

An additional point made by Díez Medrano concerns the intensity of the involvement of U.S. academics in evaluation as compared to their Spanish counterparts. The image of U.S. academics he provides may imply a generalization of norms of behavior found in some elite research universities to the U.S. academic world as a whole (which certainly includes a fair share of cynical non-participators). Whether the ideal community of readers is more homogeneous in the United States is an empirical matter that will be well worth investigating. Finally, his comment about how disciplinary consensus translates into interdisciplinary deference is a point well taken that should also be ascertained through comparative research, as should the emerging concern about the complex relationship between types of diversity and constructions of excellence in the distinctive context of European research.

Turning finally to Medvetz's contribution, his point of departure is Bourdieu's distinction between practical sense and theoretical reason as each manifests itself in research production. I must confess that while writing about the role of emotion in evaluation, I had forgotten about Bourdieu's (1979) writing on practical sense and had not made the connection between his concerns and mine. Thus, I found Medvetz's comparative discussion of our respective approaches refreshing and informative. More generally, he raises the question of cognitive autonomy in the fields of knowledge production each of us studied. This comparison is significant given that, as Medvetz himself points out, academic and policy experts are now engaged in a most consequential war of influence over policy-relevant knowledge. While the world of think tanks is not one where peer review matters, the latter is omnipresent in academia. Yet, in the world of policy making, as in academia, merit is assessed based on particular "social and interaction dynamics." But are these worlds more similar or more different? Medvetz convincingly points out three determinants (social, generative, and conditional) underlying constructions of autonomy across fields, each determinant feeding various notions of cognitive autonomy. The latter is always an illusion, to the extent that, from a radical interactionist perspective, autonomy is enabled by taken-for-granted agreements about ways to accomplish it. Note that, contra Medvetz, this perspective suggests more similarities than differences between the two worlds we study. Nevertheless, my sense of how this operates converges with the description he offers, and I thank him for situating my contribution within the much-needed broader framework of the sociology of intellectuals and that of social knowledge in the making (Camic et al. 2011).
Much more could be said, and I hope that this exchange marks the beginning of a longer conversation.

Bibliographic References

Abend, Gabriel (2006). "Styles of Sociological Thought: Sociologies, Epistemologies, and the Mexican and US Quests for Truth." Sociological Theory, 24 (1), 1-41.

Bourdieu, Pierre (1979). Le sens pratique. Paris: Éditions de Minuit.

Camic, Charles; Gross, Neil and Lamont, Michèle (2011). Social Knowledge in the Making. Chicago: University of Chicago Press.

Cousin, Bruno and Lamont, Michèle (2009). "Les conditions de l'évaluation universitaire: Quelques réflexions à partir du cas américain." Revue Mouvements, 60, 113-117.

Lamont, Michèle and Huutoniemi, Katri (2011). "Comparing Customary Rules of Fairness: Evidence of Evaluative Practices in Peer Review Panels." In: Camic, Charles; Gross, Neil and Lamont, Michèle (eds.). Social Knowledge in the Making. Chicago: University of Chicago Press.

Lamont, Michèle and Sun, Anna (2012). "How China's Elite Universities Will Have to Change." The Chronicle of Higher Education. URL:

Medvetz, Thomas (2012). Think Tanks in America. Chicago: University of Chicago Press.

Social Sciences and Humanities Research Council of Canada (2008). "Promoting Excellence in Research: An International Blue Ribbon Panel Assessment of Peer Review Practices at the Social Sciences and Humanities Research Council of Canada." URL: