Methodological and Theoretical Pluralism: Good or Bad?

Last week I was in Milan, Italy, attending the International Conference on Public Policy.  Unlike many of my colleagues, I had never attended an international conference before, so this was a very exciting experience for me on a number of levels.

Anyway, a number of things struck me as a result of this conference (and I don’t mean the unbearable heat of Italy in July!).  One was the sheer number of people from different disciplines studying public policy.  On the one hand, it’s a strong sign of a healthy subfield, right?  On the other hand, it seems that a powerful consequence of size and diversity is theoretical and conceptual fragmentation.  In almost every panel I attended, there was significant disagreement about concepts and assumptions within well-established theoretical traditions.  For instance, in the panels on “co-production”, presenters and audience members used the terms “co-management”, “co-creation”, and “co-construction”, among many others, interchangeably or as meaning different yet similar things.

In one of the plenary sessions, political scientist Bryan Jones noted a similar phenomenon.  He believed that the literature on agenda setting, a concept he helped pioneer, had seemingly lost its way.  Much of the new literature on the topic, he argued, was no longer in sync with the theoretical micro-assumptions in which he and others had originally grounded the work, with predictably negative consequences.

It seems to me that the trends Prof. Jones noted in his talk and the lack of conceptual agreement at the panels I attended were partly the result of the growth and democratization of the academy.  In the past, there were fewer journals, fewer scholars, and fewer students entering and finishing PhD programs.  The result, I think, was a smaller set of high-performing scholars writing about public policy (and political science) issues.  The demands of keeping up with the literature were smaller, and the people contributing were the best of the best (I think?!).  As a result, political science and public policy fields and subfields perhaps had more internal conceptual consistency, or at least more consistency in terminology.  Today, however, with the explosion of new journals and PhD programs, the sheer amount of literature is impossible to read and keep up with.  As a result, you get conceptual fragmentation.

In that same plenary panel, Grace Skogstad gave a powerful defence of methodological and theoretical pluralism, and to some extent I agreed with her.  Who doesn’t like pluralism when it comes to publishing our research!?  On the other hand, an important and negative consequence of pluralism that rarely gets mentioned is this trend towards fragmentation.  Embracing pluralism means embracing conceptual blurriness, to some extent.  For instance, I use co-production but Bob uses co-construction.  Do we mean different things?  Well, it doesn’t matter.  What matters is that I cite and speak to the people who favour co-production, and Bob cites and speaks to the co-construction people.  I may try to come up with a new definition of co-production that encompasses co-construction, or I might invent a new term, but there’s no guarantee that anyone will adopt my new definition or term.  Even if some people do, others will continue with their preferred term or definition.  Why?  Because we embrace methodological pluralism.

What’s the alternative to methodological pluralism? I’m not sure.  Maybe radically fewer journals?  Then again, if you believe in the work of John Stuart Mill, then methodological pluralism is perhaps the only way to ensure truth wins out eventually.

Researchers and Scholars! Beware of your Cognitive Biases!

I am in the midst of reading Joseph Heath’s Enlightenment 2.0, which was shortlisted for this year’s Donner Prize.  It covers much of the same ground as other recent books about how humans think, such as Daniel Kahneman’s and Jonathan Haidt’s.  Collectively, these books are having a powerful impact on my views of the world and on my scholarship.

Heath’s book is a great read.  It is very accessible and provides an excellent summary of the literature on cognitive biases and decision making (at least it’s consistent with Kahneman’s and Haidt’s books!).

Among many important and interesting tidbits, Heath argues that one of the major problems that all citizens face, whether they are academics or non-academics, is confirmation bias (and indeed there’s research showing that philosophers and statisticians, who should know better, also suffer from the same cognitive biases).  It’s why some scholars insist that causal inference proceed by trying to reject null hypotheses, rather than by looking for confirming evidence.

Yet confirmation bias exerts such a powerful pull on how we perceive the world and make decisions that, certainly in my subfield, and I assume in many others involving strong normative debates and positions, there is a strong temptation to accept and embrace it.

In the words of Joseph Heath:

The whole “normative sociology” concept has its origins in a joke that Robert Nozick made, in Anarchy, State and Utopia, where he claimed, in an offhand way, that “Normative sociology, the study of what the causes of problems ought to be, greatly fascinates us all”(247). Despite the casual manner in which he made the remark, the observation is an astute one. Often when we study social problems, there is an almost irresistible temptation to study what we would like the cause of those problems to be (for whatever reason), to the neglect of the actual causes. When this goes uncorrected, you can get the phenomenon of “politically correct” explanations for various social problems – where there’s no hard evidence that A actually causes B, but where people, for one reason or another, think that A ought to be the explanation for B. This can lead to a situation in which denying that A is the cause of B becomes morally stigmatized, and so people affirm the connection primarily because they feel obliged to, not because they’ve been persuaded by any evidence.

 

Let me give just one example, to get the juices flowing. I routinely hear extraordinary causal powers being ascribed to “racism” — claims that far outstrip available evidence. Some of these claims may well be true, but there is a clear moral stigma associated with questioning the causal connection being posited – which is perverse, since the question of what causes what should be a purely empirical one. Questioning the connection, however, is likely to attract charges of seeking to “minimize racism.” (Indeed, many people, just reading the previous two sentences, will already be thinking to themselves “Oh my God, this guy is seeking to minimize racism.”) There also seems to be a sense that, because racism is an incredibly bad thing, it must also cause a lot of other bad things. But what is at work here is basically an intuition about how the moral order is organized, not one about the causal order. It’s always possible for something to be extremely bad (intrinsically, as it were), or extremely common, and yet causally not all that significant.

 

I actually think this sort of confusion between the moral and the causal order happens a lot. Furthermore, despite having a lot of sympathy for “qualitative” social science, I think the problem is much worse in these areas. Indeed, one of the major advantages of quantitative approaches to social science is that it makes it pretty much impossible to get away with doing normative sociology.

 

Incidentally, “normative sociology” doesn’t necessarily have a left-wing bias. There are lots of examples of conservatives doing it as well (e.g. rising divorce rates must be due to tolerance of homosexuality, out-of-wedlock births must be caused by the welfare system etc.) The difference is that people on the left are often more keen on solving various social problems, and so they have a set of pragmatic interests at play that can strongly bias judgement. The latter case is particularly frustrating, because if the plan is to solve some social problem by attacking its causal antecedents, then it is really important to get the causal connections right – otherwise your intervention is going to prove useless, and quite possibly counterproductive.

 

In the subfield of Aboriginal politics, there are powerful incentives to ascribe everything that has gone wrong in Aboriginal communities post-contact to the British and later the Canadian state.  Those who try to say otherwise are routinely hammered and ostracized by the public and some members of the academy, often without a moment’s serious consideration of their work.  Say what you want about the books and articles by Tom Flanagan, Frances Widdowson, and Ken Coates, but at least they are providing us with an opportunity to test for confirmation bias.  Causal inference requires eliminating rival explanations!  Otherwise, how can you be sure that A causes B?

In many ways, it is for these reasons that I’ve long been suspicious and wary of ideology (and certainty), whether it comes from the right or the left.  Someone who is hard-core left or right, it seems, is more likely to be driven by confirmation bias.  I’ve seen dozens of episodes in my life where ideologues (from the left and the right), or those with strong views of the political world, refuse to budge when confronted with overwhelming evidence.  It’s irrational, in many ways.  And so I long ago vowed to try to avoid becoming one of them and to embrace uncertainty.  Sure, I will take a strong position in my articles, books, and op-ed columns, but I’m always ready and willing to change my mind.

Perhaps it’s a cowardly way of approaching politics and scholarship (and so I guess I should never run for office!) but for me, it conforms to my goal of striving towards causal inference and certainty.

Reviewing Journal Manuscripts: Some Thoughts

Earlier this week, I received an email from the editors of Political Research Quarterly informing me that I was one of the recipients of the 2014 PRQ Outstanding Reviewer Award.  Odd, right?  But I must admit it was also somewhat gratifying.  Reviewing manuscripts is often a thankless task, and doing a good job rarely produces any tangible benefit for the reviewer.  So the award, from a large and well-respected political science journal, was actually kind of nice.

The first thing I did after receiving the email, of course, was to pull up the reviews I did for PRQ last year.  According to my records, I seem to have reviewed only one manuscript (twice) for the journal, but the review was typical of how I do them now.  At the core of all of my reviews is a position of respect for the author(s) of the manuscript.  Why respect?  Because these authors probably spent months and months on the paper, and it would be disingenuous of me to believe that I have some sort of absolute authority or expertise on the topic.  Also, if we keep the idea of respect front and centre when we review papers, no matter their level of development, then everyone will be happy and the peer review process, wait for it, may actually work to everyone’s advantage!

So, what are some things I keep in mind when I review manuscripts?

1) Always respect what the authors are trying to do, and never read the manuscript in terms of what you wish they had done.  You aren’t a co-author!  As long as the authors make the case that the paper makes a contribution in some form, the task of the reviewer is to assess whether they are successful in making that contribution.  So once I decide whether the paper’s question and answers are a contribution (which is almost always the case), given the journal, I usually focus all of my time on assessing the rigour of the paper (e.g. concepts/theory/methods/data and analysis/conclusions).

2) Subdivide your comments and suggestions into two categories: absolutely necessary changes, and changes that would be nice but are purely optional.  Again, I try to respect the fact that this isn’t my paper and I haven’t been working on it for months and months.  And so I try to identify the things that are clearly necessary to ensure the paper meets the standards of the journal, and then I offer some other things that might be helpful but perhaps aren’t necessary for defending the basic arguments and contribution of the paper.  I always tell the authors which suggestions are necessary for my support, and which are purely optional and can be dismissed if they provide some convincing reasons.

3) Turn around reviews within a week or two.  Without exception, we all hate waiting on reviews.  Everyone I know complains about long delays from reviewers and journals.  Yet delays are the norm!  I don’t get it.  Again, respect that publications matter for careers, for future research, and for public policy development.  So get off your butt and carve out an afternoon or two to review that paper that has been sitting in your inbox.  Respect your colleagues’ careers and the amount of time they put into the paper, and turn around your reviews asap so they can make revisions or send the paper to another journal.

I would say those are the big three things I try to do.  The other thing I do is communicate with the editors when I know, or can pretty much guess, who the authors of the paper are.  Our discipline and subfields are pretty small, and I think it’s important that we declare any possible conflicts of interest to the journal editors.  Let the editors decide and make assessments of manuscripts and referee reports with full information.  That includes letting them know that you reviewed the paper previously for another journal!

What is Community-Engaged Research? A Conversation with Dr. Leah Levac

Over the last decade or so, community-based participatory research has become a more prominent feature of the discipline.  This is especially true in the area of Indigenous studies, where research partnerships with Indigenous communities have become almost the norm.  Although I certainly appreciate and respect the idea of community-based research, I’ve tended not to use it, mainly because I’m uncertain about the tradeoffs involved.  Luckily, I am a visiting professor in the Department of Political Science at the University of Guelph this term, and just down the hall from my office is Dr. Leah Levac, assistant professor of political science at UofG.  Her research, which has been supported by the Trudeau Foundation, the CIHR, and more recently, SSHRC, looks at how government and civil society actors engage “marginalized publics in public policy development and community decision-making”.  In particular, she uses community-engaged research methodologies and approaches to study the participation of women and youth in Canada.  The following is a conversation I had with her about her work and, in particular, how she uses community-based research to work with marginalized populations and individuals in the pursuit of common research goals.


Alcantara: What is community-based research?

Levac: Community-based research is one of several methodological orientations to research that have, at their core, a commitment to social justice and equity, and to working directly with people in communities to address research questions that are important and relevant to their lives. Participatory action research, feminist participatory action research, community-based participatory research, and action research are other names used by community and academic researchers who uphold similar commitments to working with communities to bring about social change. Community-based research is committed to the principles of community relevance, equitable participation, and action and change (Ochocka & Janzen, 2014). Emerging from different contexts and histories, forms of community-based research have developed and been practiced in both the Global North and the Global South. In all cases, community-based research pursues the co-production and dissemination of knowledge, through both its process and its outcomes.

Alcantara: How do you use these methodologies in your work?

Levac: Over the last several years, I have been working with various community partners and academic colleagues to develop and use a feminist intersectional approach to community engaged scholarship (Levac, Stienstra, McCuaig, & Beals, forthcoming; Levac & Denis, 2014). The idea is that we use the principles of community-based research combined with a commitment to feminist intersectionality; a self-reflexive theoretical and methodological orientation to research that recognizes gender as a dimension of inequality, and understands that power exists and operates through the interactions between individual or group identities (e.g., gender, ability, age), systems (e.g., sexism, heterosexism, colonialism), institutions (e.g., governments, schools, family), and social structures (e.g. social class, economic structures, societies). We draw on the work of Collins, Dhamoon, Hankivsky, and others to inform our work. Practically, we apply this methodological orientation by engaging with (primarily) women in communities, along with academic colleagues across disciplines, to develop partnerships that lead to asking and answering research questions that are pressing for our community partners. Based on this commitment to developing shared research goals, we use one or several methods (e.g., community workshops, interviews, focus groups, surveys, photovoice) depending on the question(s) being asked. For example, I collected data through community workshops and focus groups, and then analyzed the data with members of the community, as part of the process for creating a Community Vitality Index in Happy Valley-Goose Bay, Labrador. In another case, I used key informant interviews and community focus groups to identify the key challenges facing women in Labrador West. The result, Keeping All Women in Mind, is part of a national community engaged research project focused on the impacts of economic restructuring on women in northern Canada.

Alcantara: Why have you decided to make this methodology central to your work? What advantages does it bring to your research and to your partners?

Levac: My commitment to community engaged scholarship emerged in part from my personal and professional experiences. I returned to school to pursue graduate studies after working with community organizations and community members – young people in particular – where I witnessed disconnects between researchers’ goals and communities’ experiences, and where I learned more about the lack of equitable public participation in policy development. As I continue along this path, I am motivated by the ways in which this methodological orientation invites the voices of historically marginalized community members into important public conversations. I also appreciate that the approach brings ecological validity. Through our work, we see important instances of leadership emerging, especially in places and ways that the conventional leadership literature largely fails to recognize. Finally, the theoretical grounding of our work points explicitly to social justice and equity goals, which I feel obligated to pursue from my position.

Alcantara: One of the concerns I have long had about this methodology is the potential loss of autonomy for the researcher. Is that a real danger in your experience?

Levac: I think about this in two different ways. On one hand, I do not think it is a danger that is unique to community engaged scholarship. As I understand it, the core concern with autonomy in community engaged scholarship is about how the relationships themselves might influence the findings. However, the lack of relationships can also influence findings (e.g., if there is a lack of appropriate contextual understanding), as can funding arrangements, and so on. What is important then, is to foreground the relationships, along with other important principles such as self-reflexivity and positionality, so that the rigor of the scholarship can be evaluated. Another way to think about this is to consider that within a community engaged scholarship program, there can be multiple research questions under pursuit; some of which are explicitly posed by, and of interest to, the community, and others that are posed by the academic researcher(s). As long as all of these questions are clearly articulated and acceptable to all partners, then independent and collective research pursuits can co-exist. Having said this, I do find that I have had to become less fixated on my own research agenda per se, and more open to projects that are presented to me.

Alcantara: How do you approach divided communities? Here I’m thinking about situations such as working with Indigenous women on issues relating to gender and violence, identity, or matrimonial property rights. How do you navigate these types of situations, where some community members might welcome you while others might oppose you?

Levac: These are obviously difficult situations, and I certainly do not claim to have all of the answers, particularly in Indigenous communities, where I have not spent extensive time. Having said that, there are a couple of important things to keep in mind. First, the ethical protocols and principles of community engaged scholarship demand attention to the question of how communities are constituted. So, for example, an interest-based community and a geographic community are not necessarily coincidental. As a result, a community engaged scholarship project would be interested in how the community defines itself, and therefore might end up working only with people who identify themselves as victims of gendered violence, for example. Second, because relationships are central to all stages of community-based research projects, these methodologies can actually lend themselves to these difficult kinds of contexts. By this, I mean that similar to reconciliation processes, there is an opportunity for community engaged scholarship to play a role in opening dialogues for understanding across social, political, and cultural barriers. This is one of the reasons that community engaged scholarship is widely recognized as being so time intensive.

Alcantara: What kinds of literature and advice would you offer to scholars who want to use this type of methodology in their work for the first time?

Levac: My first and biggest piece of advice is to get involved in the community. All of my research – including and since I completed my PhD – has come about through existing relationships with community organizations and/or other researchers involved in community engaged projects. There are a number of books and authors that can provide a useful grounding, including Reason & Bradbury’s (Eds.) Handbook of Action Research, Minkler & Wallerstein’s Community-Based Participatory Research for Health, and Israel et al.’s Methods for Community-Based Participatory Research for Health. There are also several great peer-reviewed journals – including Action Research and Gateways: International Journal of Community Research and Engagement. Finally, there are many organizations and communities of practice that pursue and support various facets of community engaged scholarship. Guelph hosts the Institute for Community Engaged Scholarship. Other great organizations and centres include Community Based Research Canada, Community Campus Partnerships for Health, and the Highlander Research and Education Centre. Finally, beyond connecting with communities and community organizations, and reading more about the methods and theories of community engaged scholarship, it is really helpful to reach out to scholars using these approaches, who have, in my experience, been more than willing to offer support and suggestions. Feel free to contact me directly at LLevac@uoguelph.ca.

 

Ideology and Political Science: Diversity Matters!

I hate ideology.  Or at least, I’m suspicious of people who are extremely sure and confident about their ideological beliefs.

The discipline of political science is very ideological.  I know from first-hand experience that academics like to sort scholars into different ideological camps, usually based on superficial information (e.g. where you went to school or who you co-authored with) or on a reading of only one publication.  Where do I fall?  Most believe I’m a hard-core right-winger, based on my association with Tom Flanagan (he was my MA supervisor, and we co-authored some books and articles in the past).  Yet the reality is, I’m ideologically confused!

People are usually very surprised to hear that.  They would rather have you fall neatly into one of three ideological camps: left, right, or centre (the latter of which, as my buddy Chris Cochrane will show in his forthcoming book, is not the middle position that people assume it is!).  A year or so ago, I participated in a panel for Steve Paikin’s TV show, The Agenda.  One of the panelists was a very popular and well-known Aboriginal scholar.  Throughout the taping, this person was very cold and detached towards me, right from the first time we met.  By the end, however, he had warmed up considerably, even remarking to me that “you weren’t quite what I expected.”

In any event, I don’t trust ideological certainty and indeed, I value scholarly uncertainty because it facilitates meaningful knowledge production.  Indeed, in my view, an ideal scholarly environment is one where you are surrounded by people who inhabit all parts of the left-right divide but who are open to discussion, debate, and, dare I say it, changing their mind in the face of empirical evidence and logically-sound argument. Surprisingly, however, not all departments agree.

Recently, a number of prominent psychologists published a piece in Behavioral and Brain Sciences that confirms many of my beliefs on this topic.  Although the authors are writing about social psychological science, my hunch is that their findings also apply to the discipline of political science in Canada.  Below is the abstract:

Abstract: Psychologists have demonstrated the value of diversity—particularly diversity of viewpoints—for enhancing creativity, discovery, and problem solving. But one key type of viewpoint diversity is lacking in academic psychology in general and social psychology in particular: political diversity. This article reviews the available evidence and finds support for four claims: 1) Academic psychology once had considerable political diversity, but has lost nearly all of it in the last 50 years; 2) This lack of political diversity can undermine the validity of social psychological science via mechanisms such as the embedding of liberal values into research questions and methods, steering researchers away from important but politically unpalatable research topics, and producing conclusions that mischaracterize liberals and conservatives alike; 3) Increased political diversity would improve social psychological science by reducing the impact of bias mechanisms such as confirmation bias, and by empowering dissenting minorities to improve the quality of the majority’s thinking; and 4) The underrepresentation of nonliberals in social psychology is most likely due to a combination of self-selection, hostile climate, and discrimination. We close with recommendations for increasing political diversity in social psychology.

Check out the article here.

Double Blind Peer Review: Some Thoughts for the First Timers

Double blind peer review is supposed to be the gold standard of academic research.

According to this model, authors submit manuscripts to journal or book editors, who in turn send the papers to experts in the field.  These experts are supposed to evaluate the manuscripts anonymously; neither the reviewer nor the author is supposed to know the other’s identity until after publication, when at least the author’s identity is revealed.

Given how important peer review is to academic success, it’s astounding that we are rarely trained in how to actually do it!

The results of this lack of training, quite frankly, can be very frustrating and sometimes insulting for authors, who frequently, though not always, have to find a way to satisfy a reviewer who:

  • didn’t carefully read the paper but instead briefly skimmed it and the list of references (for their name!);
  • provided few or no comments on how to improve the paper;
  • is fundamentally opposed to your theoretical or methodological choices; and/or
  • is just plain rude and insulting about your intellectual abilities and writing capabilities (apparently, I write like a first-year undergraduate, which may be true if you talk to some of my co-authors!).

Of course, there are also many reviewers out there who provide very helpful comments and thoughtful reviews/rejections. But occasionally, one of the reviewers will commit one or more of the above sins, plus be four months overdue in submitting their review!

Having received and written many peer reviews over the last decade, I’ve developed a number of guidelines for writing my referee reports.  I try to review these guidelines before and after I complete a review.  For those who are just starting the peer review game as a referee, here are some tips and helpful advice to consider.

  1. Accountability and transparency are important!  If you know who the author is, or have a pretty good idea, let the editor know immediately, before doing the review.  Discuss your ability to write a fair and relatively unbiased referee report, and then leave it to the editor to decide whether you should complete the review.
  2. Read the manuscript at least twice!  The first time through should be simply to understand and make sense of the argument, rather than to evaluate it.  Try to figure out exactly what the author is saying and how s/he says it.  Reserve judgment on the author’s theoretical, methodological, and analytical choices until the second read-through.  During your second read-through, carefully analyze the appropriateness of these choices, including the logic behind them and their integration, given the research question.  Don’t “dump and run” or “snipe from the bushes”, as one of my old UofT profs used to say.
  3. During this second reading, check your theoretical and methodological biases at the door!  If you hate political economy, don’t immediately reject a paper for using this framework (indeed, my paper on territorial devolution and my book on treaties both had to deal with reviewers who were extremely hostile mainly on the basis of my chosen theoretical framework, rather than how it was applied or whether alternatives were more appropriate).  Instead, consider how far the author’s framework or methodology takes them in terms of answering the question.  Consider whether there are plausible theoretical alternatives, given the evidence presented.  Consider the nature of the evidence presented, given what currently exists in the literature or elsewhere.  But don’t reject a paper out of hand because you hate constructivism or whatever.  Evaluate from within, or take a pass on reviewing the paper.  Or state your biases upfront to the editor and to the author (see Tip #1 above!).
  4. Provide a thorough list of suggestions, both major and minor.  Rejections should be accompanied by thoughtful and helpful comments about how to improve the paper for resubmission elsewhere. Accepts should say why the paper should be published.  Frequently editors have to deal with split decisions (e.g. one review says accept; the other says reject) and so giving a strong set of reasons for why the paper should be published could push the editor towards acceptance.  Sometimes, during revise and resubmits, I will actually comment on some of the other reviews if I think the author should not take some suggestions very seriously, which again can help editors make more informed decisions.
  5. Provide caveats to your review! I try to preface different sets of comments by saying which ones are really crucial and which ones the authors should consider but do not have to address.  I also try to tell authors that I don’t think they have to address all of my comments, but I think they should address some and tell me why the others do not need to be addressed. As reviewers, we sometimes forget that these aren’t our papers and so we end up trying to co-author them. Instead, I think our role is to provide advice, recognize that authors will disagree with us, and provide space for that give and take, as long as a certain scholarly bar is met.
  6. Provide even more caveats to your review!  Sometimes I’ll be asked to review something that isn’t quite in my wheelhouse.  Given the frequency with which journal editors complain about reviewer fatigue, I almost always accept reviewer invitations, even on papers where I don’t really have any expertise.  In those situations, I always inform the editors and authors about the nature and extent of my expertise (sometimes none!) and note that my comments should be read in that light.  Again, accountability and transparency are important!
  7. Be Nice! I remember once writing a really nasty review of a paper that was terrible on all fronts, and really shouldn’t have been sent out for review.  I’m talking grammatical errors, typos, spelling mistakes, referencing errors, and bad scholarship.  The paper got me in a really bad mood and the tone of the review reflected that fact.  The minute after I hit “submit”, I immediately regretted the tone of the review.  Having been on the receiving end of those reviews from time to time, I’ve come to appreciate how important it is to be, well, nice!  There’s nothing wrong with being critical; it’s part of the job.  However, the delivery is just as important as the content.  Indeed, authors are more likely to incorporate constructive criticism and reject nasty slagging.
  8. Finally, my biggest pet peeve is how long the review process takes.  There’s no one to blame but ourselves!  I’ve waited anywhere from 6 to 18 months for referee reports, which is outrageous.  I think four weeks is a reasonable window in which to find the 5 or 6 hours needed to properly review a journal article.  Six weeks is also reasonable for book manuscripts.  Try to prioritize writing reviews, please!  Authors appreciate quick turnaround times, especially because actual publication can take a long time, and so can finding a home for the paper.  So let’s help each other out and let’s all get to that “review pile” today!

Any other tips/observations? Provide them in the comment section!

A Political Theorist Teaching Statistics: Estimation and Inference

Still more about my experiences this term as a political theorist teaching methods.

How to introduce the core ideas of regression analysis: via concrete visual examples of bivariate relationships, culminating in the Gauss-Markov theorem and the classical regression model?  Via a more abstract but philosophically satisfying story about inference and uncertainty, models and distributions?  Some combination of each?

I took my lead here from my first teacher of statistics, and I want to describe and praise that approach, which still impresses me as quite beautiful in its way.


I remember with some fondness stumbling through Gary King’s course on the likelihood theory of inference just over twenty years ago.  That course, in turn, drew heavily on King’s Unifying Political Methodology, first published in 1989.

I’m too far removed from the methods community to have a sense of how this book is now received. I remember at the time, when I took King’s course, thinking that the discussion of Bayesian inference was philosophically … well, a bit dismissive, whereas nowadays Bayes seems just fine. Revisiting the relevant sections of UPM (especially pp. 28-30) I now think my earlier assessment was unfair.

Still, UPM is easily recognizable as the approach that led Chris Achen to say the following in surveying the state of political methods little more than a decade after King’s book first appeared …

… Even at the most quantitative end of the profession, much contemporary empirical work has little long-term scientific value. “Theoretical models” are too often long lists of independent variables from social psychology, sociology, or just casual empiricism, tossed helter-skelter into canned linear regression packages. Among better empiricists, these “garbage-can regressions” have become a little less common, but they have too frequently been replaced by garbage-can maximum-likelihood estimates (MLEs). …

Given this, it wouldn’t have surprised me if, upon querying methods colleagues, I’d found that UPM remains widely liked, its historical importance for political science acknowledged, but its position in cutting-edge methods syllabi quietly shuffled to the “suggested readings” list.

Is this the case? I doubt it, but even if all that were true, UPM is the book I learned from, and it’s the book I keep taking off the shelf, year after year, to see how certain basic ideas in distribution and estimation theory play out specifically for political questions.

Of course I say that as a theorist: whenever I’ve pondered high (statistical) theory, nothing much has ever been at stake for me personally, as a scholar and teacher. Now, with some pressure to actually do something constructive with my dilettante’s interest in statistics, I wanted to teach with this familiar book ready at hand.

I haven’t been disappointed, and I want to share an illustration of why I think this book should stand the test of time: King’s treatment of the classical regression framework and the Gauss-Markov theorem.

Try googling “the Classical Regression Model” and you’ll get a seemingly endless stream of (typically excellent) lecture notes from all over the world, no small number of which probably owe significant credit to the discussion in William Greene’s ubiquitous econometrics text.  High up on the list will almost certainly be Wikipedia’s (actually rather decent) explanation of linear regression.  The intuition behind the model is most powerfully conveyed in the bivariate case: here is the relationship, in a single year, between a measure of human capital performance and per capita GDP for a sample of countries …

[Figure: Stata scatter plot of a human capital index against per capita GDP for a sample of countries, with fitted regression line]

Now, let’s look at that again but with logged GDP per capita for each country in the sample (this is taken, by the way, from the most recent Penn World Table) …

[Figure: the same scatter plot with logged GDP per capita, with fitted regression line]

The straight line is, of course, universally understood as “the line of best fit,” but that interpretation requires some restrictions, which define the conditions under which calculating that line using a particular algorithm, ordinary least squares (OLS, or simply LS), results in the best linear unbiased predictor, or estimator, of y (thus the acronym BLUE, so common in introductory treatments of the CLRM).  OLS minimizes the sum of squared errors, measured vertically at each value of x (rather than, say, perpendicular to the line).  Together, those conditions are the Gauss-Markov assumptions, named thus thanks to the Gauss-Markov theorem, which, given those conditions (very roughly: uncorrelated errors with mean zero and constant variance, and those errors uncorrelated with x, or with the columns in the multivariate matrix X; normality is often added in the classical model, though the theorem itself doesn’t need it), establishes OLS as the best linear unbiased estimator of the coefficients in the equation that describes that ubiquitous illustrative line,

y_{i} = \beta_{0} + \beta_{1}x_{i} + \epsilon_{i}

or, in matrix notation for multiple x variables,

y = X\beta + \epsilon

… and that’s how generations of statistics and econometrics students first encountered regression analysis: via this powerful visual intuition.
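For readers who want to tinker, here is a minimal R sketch of that visual intuition, using simulated data (the numbers and variable names are my own invention, standing in for the Penn World Table figures):

# Simulated stand-in for the human capital vs. logged GDP per capita scatter.
set.seed(42)
n <- 150
log_gdp <- rnorm(n, mean = 9, sd = 1.2)          # "logged GDP per capita"
hc <- 1.5 + 0.25 * log_gdp + rnorm(n, sd = 0.3)  # "human capital index", with noise

fit <- lm(hc ~ log_gdp)  # OLS: minimizes the sum of squared vertical errors
summary(fit)             # intercept and slope estimates, near 1.5 and 0.25
plot(log_gdp, hc)        # the scatter ...
abline(fit)              # ... with the fitted "line of best fit" drawn through it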

But as King notes in UPM, the intuition was never entirely satisfying upon more careful reflection.  Why the sum of squared errors, rather than, say, the sum of the absolute values of the errors?  And why measure the respective errors vertically, rather than, again, perpendicular to the line we want to fit?

UPM is, so far as I know, unique (or at the very least, extraordinarily rare) in beginning not with these visual intuitions, but instead with a story about inference: how do we infer things about the world given uncertainty? How can we be clear about uncertainty itself? This is, after all, the point of an account of probability: to be precise about uncertainty, and the whole point of UPM was (is) to introduce statistical methods most useful for political science via a particular approach to inference.

So, instead of beginning with the usual story about convenient bivariate relationships and lines of best fit, UPM starts with the fundamental problem of statistical inference: we have evidence generated by mechanisms and processes in the world. We want to know how confident we should be in our model of those mechanisms and processes, given the evidence we have.

More precisely, we want to estimate some parameter \theta, taking much of the world as given. That is, we’d like to know how confident we can be in our model of that parameter \theta, given the evidence we have. So what we want to know is p( \theta | y), but what we actually have is knowledge of the world given some parameter \theta, that is, p( y | \theta ).

Bayes’s Theorem famously gives us the relationship between a conditional probability and its inverse:

p(\theta|y) = \dfrac{p(y|\theta)p(\theta)}{p(y)}

We could contrive to render p(y) as a function of p(\theta) and p(y | \theta) by integrating p(\theta)p(y|\theta) over the whole parameter space \Theta, so that p(y) = \int_\Theta p(\theta) p(y| \theta) \, d\theta, but this still leaves us with the question of how to interpret p(\theta).

These days that interpretive task hardly seems much of a philosophical or practical hurdle, but Fisher’s famous approach to likelihood is still appealing. Instead of arguing about (variously informative) priors, we could proceed instead from an intuitive implication of Bayes’s result: that p(\theta |y) might be represented as some function of our evidence and our background understanding (such as a theoretically plausible model) of the parameter of interest. What if we took much of that background understanding as an unknown function of the evidence that is constant across rival models of the parameter \theta?

Following King’s convention in UPM, let’s call these varied hypothetical models \tilde{\theta}, and then define a likelihood function as follows:

L(\tilde{\theta}|y) = g(y) p(y|\tilde{\theta})

This gives us an appealing way to think about relative likelihoods associated with rival models of the parameter we’re interested in, given the same data …

\dfrac{L(\tilde{\theta_{i}}|y)}{L(\tilde{\theta_{j}}|y)} = \dfrac{g(y) p(y|\tilde{\theta_{i}})}{g(y) p(y|\tilde{\theta_{j}})}

g(y) cancels out here, but that is more than a mere computational convenience: our estimate of the parameter \theta is relative to the data in question, where many features of the world are taken as ceteris paribus for our purposes. These features are represented by that constant function (g) of the data (y). We can drop g(y) when considering the ratio

\dfrac{p(y|\tilde{\theta_{i}})}{p(y|\tilde{\theta_{j}})}

because our use of that ratio, to evaluate our parameter estimates, is always relative to the data at hand.
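A tiny R sketch may help make that cancellation vivid (toy data and numbers of my own; the point is only the mechanics).  Working on the log scale, g(y) never has to be specified at all: comparing two hypothetical values of the parameter just means differencing their log-likelihoods.

set.seed(1)
y <- rnorm(25, mean = 5, sd = 1)  # data generated with a true mean of 5

# Log-likelihood of a hypothetical parameter value, with sigma fixed at 1.
# Any multiplicative g(y) would enter as an additive constant here, and
# would vanish in the difference taken below.
loglik <- function(beta_tilde) sum(dnorm(y, mean = beta_tilde, sd = 1, log = TRUE))

exp(loglik(5) - loglik(4))  # relative likelihood of beta = 5 versus beta = 4: well above 1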

With this in mind, think about a variable like height or temperature. Or, say, the diameter of a steel ring. More relevant to the kinds of questions many social researchers grapple with: imagine a survey question on reported happiness using a thermometer scale (“If 0 is very unhappy and 10 is very happy indeed, how happy are you right now?”). We can appeal to the Central Limit Theorem to justify a working assumption that

y_{i} \sim f_{stn} (y_{i} | \mu_{i}) = \dfrac{e^{-\frac{1}{2}(y_{i}-\mu_{i})^{2}}}{\sqrt{2\pi}}

which is just to say that our variable is distributed as a special case of the Gaussian normal distribution, but with \sigma^{2}=1.

By now you may already be seeing where King is going with this illustration.  The use of a normally distributed random variable to illustrate the concept of likelihood is just that: an illustrative simplification.  We could have developed the concept with any of a number of possible distributions.

Now for a further illustrative simplification: suppose (implausibly) that the central tendency associated with our random variable is constant. Suppose, for instance, that everyone in our data actually felt the same level of subjective happiness on the thermometer scale we gave them, but there was some variation in the specific number they assigned to the same subjective mental state. So, the reported numbers cluster within a range.

I say this is an implausible assumption for the example at hand, and it is, but think about this in light of the exercise I mentioned above (and posted about earlier): there really is a (relatively) fixed diameter for a steel ring we’re tasked to measure, but we should expect measurement error, and that error will likely differ depending on the method we use to do the measuring.

We can formalize this idea as follows: we are assuming E(Y_{i})=\mu_{i} for each observation i.  Further suppose that Y_{i}, Y_{j} are independent for all i \not= j.  So, let’s take the constant mean to be the parameter we want to estimate, and we’ll use some familiar notation for this, replacing \theta with \beta, so that \mu_{i} = \beta for every observation i.

Given what we’ve assumed so far (constant mean \mu = \beta, independent observations), what would the probability distribution look like?  Since p(e_{i}e_{j}) = p(e_{i})p(e_{j}) for independent events e_{i}, e_{j}, the joint distribution over all n observations is given by

\prod_{i}^{n} \dfrac{e^{-\frac{1}{2}(y_{i}-\beta)^{2}}}{\sqrt{2\pi}}

Let’s use this expression to define a likelihood function for \beta:

L(\tilde{\beta}|y) = g(y) \prod_{i}^{n} f_{stn}(y_{i}|\tilde{\beta})

Now, the idea here is to estimate \beta, and we’re doing that by supposing that a lot of background information cannot be known, but can be taken as roughly constant with respect to the part of the world we are examining to estimate that parameter.  Thus we’ll ignore g(y), which represents that unknown background that is constant across rival hypothetical values of \beta.  Then we’ll define the likelihood of \beta given our data, y, with the expression \prod_{i}^{n} f_{stn}(y_{i}|\tilde{\beta}) and substitute in the full specification of the standardized normal distribution with \mu_{i} = \beta,

L(\tilde{\beta}|y) = \prod_{i}^{n} \dfrac{e^{-\frac{1}{2}(y_{i}-\tilde{\beta})^{2}}}{\sqrt{2\pi}}

Remember that we’re less interested here in the specific functional form of L(.) than in relative likelihoods, so any transformation of the probability function that preserves the properties of interest to us, the relative likelihoods of parameter estimates \tilde{\beta}, isn’t really relevant to our use of L(.).  Suppose, then, that we take the natural logarithm of L(\tilde{\beta}|y).  Logarithms turn products into sums: ln(ab) = ln(a) + ln(b), and so for some constant \alpha, ln(\alpha ab) = ln(\alpha) + ln(a) + ln(b).  Since g(y) is an unknown constant anyway, we can simply relabel ln(g(y)) as g(y).  So, the natural logarithm of our likelihood function is

ln L(\tilde{\beta}|y) = g(y) + \sum_{i}^{n} ln(\dfrac{e^{-\frac{1}{2}(y_{i}-\tilde{\beta})^{2}}}{\sqrt{2\pi}})

 

= g(y) + \sum_{i}^{n} ln(\dfrac{1}{\sqrt{2\pi}}) - \dfrac{1}{2}\sum_{i}^{n}(y_{i}-\tilde{\beta})^{2}

 

= g(y) - \dfrac{n}{2}ln(2\pi) - \dfrac{1}{2}\sum_{i}^{n}(y_{i}-\tilde{\beta})^{2}

Notice that g(y) - \frac{n}{2}ln(2\pi) doesn’t include \tilde{\beta}.  Think of this whole expression, then, as a constant term that may shift the relative position of the likelihood function, but that doesn’t affect its shape, which is what we really care about.  The shape of the log-likelihood function is given by

ln L(\tilde{\beta}|y) = -\dfrac{1}{2} \sum_{i}^{n} (y_{i} - \tilde{\beta})^{2}

Now, there are still several steps left to get to the classical regression model (most obviously, weakening the assumption of a constant mean and instead setting \mu_{i}=x_{i}\beta), but this probably suffices to make the general point: using analytic or numeric techniques (or both), we can estimate the parameters of interest in our statistical model by maximizing the likelihood function (thus MLE: maximum likelihood estimation), and that function itself can be defined in ways that reflect the distributional properties of our variables.
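To make that concrete, here is a short R sketch of the constant-mean example (my own illustration, not King’s code): maximize the log-likelihood numerically, and notice that the answer is simply the sample mean, which is also exactly what least squares returns for an intercept-only model.

set.seed(7)
y <- rnorm(50, mean = 3, sd = 1)  # toy data, with sigma fixed at 1 as in the text

# Negative log-likelihood with the constants dropped (optim() minimizes by default).
negll <- function(beta_tilde) 0.5 * sum((y - beta_tilde)^2)

optim(par = 0, fn = negll, method = "BFGS")$par  # the maximum likelihood estimate ...
mean(y)                                          # ... is the sample mean ...
coef(lm(y ~ 1))                                  # ... which is also the least squares estimate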

This is the sense in which likelihood is a theory of inference: it lets us infer not only the most plausible values of parameters in our model given evidence about the world, but also measures of uncertainty associated with those estimates.

While vitally important, however, this is not really the point of my post.

Look at the tail end of the right-hand side of this last equation above. The expression there ought to be familiar: it looks suspiciously like the sum of squared residuals from the classical regression model!

So, rather than simply appealing to the pleasing visual intuitions of line-fitting; or alternatively, appealing to the Gauss-Markov theorem as the justification for least squares (LS), by virtue of yielding the best linear unbiased predictor of parameters \beta (but why insist on linearity? or unbiasedness for that matter?), the likelihood approach provides a deeper justification, showing the conditions under which LS is the maximum likelihood estimator of our model parameters.

This strikes me as a quite beautiful point, and it frames King’s entire pedagogical enterprise in UPM.

Again, there’s more to the demonstration in UPM, but in our seminar at Laurier this sufficed (I hope) not to convince my (math-cautious-to-outright-phobic) students that they need to derive their own estimators if they want to do this stuff, but to give them a sense of how the tools we use in the social sciences have deep, even elegant, justifications beyond pretty pictures and venerable theorems.

Furthermore, and perhaps most importantly, understanding at least the broad brush-strokes of those justifications helps us understand the assumptions we have to satisfy if we want those tools to do what we ask of them.

A Political Theorist Teaching Statistics: Measurement

Another post about my experiences this term as a political theorist teaching methods.

That gloss invites a question, I suppose. I guess I’m a political theorist, whatever that means. A lot of my work has been on problems of justice and legitimacy, often with an eye to how those concerns play out in and around cities, but also at grander spatial orders.

Still, I’ve always been fascinated with mathematics (even if I’m not especially good at it) and so I’ve kept my nose pressed against the glass whenever I can, watching developments in mathematical approaches to the social and behavioural sciences, especially the relationships between formal models and empirical tests.

I was lucky enough in graduate school to spend a month hanging out with some very cool people working on agent-based modeling (although I’ve never really done much of that myself). This year, I was given a chance to put these interests into practice and teach our MA seminar in applied statistical methods.

I began the seminar with a simple exercise from my distant past. My first undergraduate physics lab at the University of Toronto had asked us to measure the diameter of a steel ring. That was it: measure a ring. There wasn’t much by way of explanation in the lab manual, and I was far from a model student. I think I went to the pub instead.

I didn’t stay in physics, and eventually I wound up studying philosophy and politics. It was only a few years ago that I finally saw the simple beauty of that lab assignment as a lesson in measurement. In that spirit, I gave my students a length of string, a measuring tape, and three steel hoops. Their task: detail three methods for finding the diameter of each hoop, and demonstrate that the methods converge on the same answer for each hoop.

[Photos: the steel hoops, string, and measuring tape used in the exercise]

I had visions of elegant tables of measurements, and averages taken over them.  Strictly speaking, that vision didn’t materialize, but I was impressed that everyone quickly understood the intuitions at play, and they did arrive at the three approaches I had in mind (a toy simulation of all three follows the list):

  1. First, use the string and take the rough circumference several times, find the average, then divide that figure by \pi.
  2. Second, use a pivot point to suspend both the hoop and a weighted length of string, then mark the opposing points and measure.
  3. Third, simply take a bunch of measurements around what is roughly the diameter.
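Here is that toy simulation, in R.  The true diameter and the error magnitudes are invented, but the logic mirrors the exercise: each method measures the same quantity with its own noise, and averaging within each method should converge on the same answer.

set.seed(123)
true_d <- 30  # a hypothetical hoop, 30 cm across

circ <- rnorm(10, mean = pi * true_d, sd = 0.5)  # method 1: string around the hoop
d1 <- mean(circ) / pi

plumb <- rnorm(10, mean = true_d, sd = 0.3)      # method 2: pivot and weighted string
d2 <- mean(plumb)

direct <- rnorm(10, mean = true_d, sd = 0.4)     # method 3: repeated direct measures
d3 <- mean(direct)

c(d1, d2, d3)  # three noisy routes converging on roughly 30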

The lesson that took a while to impart here was that I didn’t really care about the exact diameters, and was far more concerned that they attend to the details of the methods used for measurement, and that they explicitly report these details.

In the laboratory sciences, measurement protocol is vitally important.  We perhaps don’t emphasize this simple point enough in the social sciences, but we should: it matters how you measure things, and what you use to make the measurements!

A Political Theorist Teaching Statistics: Stata? R?

What is a political theorist doing teaching a seminar in social science statistics? A reasonable question to ask my colleagues, but they gave me the wheel, so I drove off!

Later I’ll post some reflections on my experiences this term.  For now, I want to weigh in briefly with some very preliminary thoughts on software and programming for statistics instruction at the graduate level, but in an MA programme that doesn’t expect a lot by way of mathematical background from our students.


In stats-heavy graduate departments R seems to be all the rage. In undergraduate methods sequences elsewhere (including here at Laurier) SPSS is still hanging on. I opted for Stata this term, mostly out of familiarity and lingering brand loyalty. If they ever let me at this seminar again, I may well go the R route.

This semester has reassured me that Stata remains a very solid statistical analysis package: it isn’t outrageously expensive, it has good quality control, and its developers encourage a stable and diverse community of users, all of which are vital to keeping a piece of software alive.  Furthermore, the programmers have managed to balance ease of use (for casual and beginning users) with flexibility and power (for more experienced users with more complicated tasks).

All that said, I was deeply disappointed with the “student” version of Stata, which really is far more limited than I’d hoped.  Not that they trick you: you can read right up front what those limits are.  But reading about them online is a whole lot different than running up against them at full steam in the middle of a class demonstration, when you’re chugging along fine until you realize your students cannot even load the data set (the one you thought you’d pared down sufficiently to fit within that modest version of Stata!).

R, in contrast, is not a software package, but a programming environment. At the heart of that environment is an interpreted language (which means you can enter instructions off a command line and get a result, rather than compiling a program and then running the resulting binary file).

R was meant to be a dialect of the programming language S and an open source alternative to S+, a commercial implementation of S. R is not built in quite the same way as S+, however. R’s designers started with a language called Scheme, which is a dialect of the venerable (and beautiful) language LISP.

My sense is that more than a few people truly despise programming in R. They insist that the language is hopelessly clumsy and desperately flawed, but they often keep working in the R environment because enough of their colleagues (or clients, or coworkers) use it. Often these critics will grudgingly concede that, in addition to the demands of their profession or client base, R is still worth the trouble, in spite of the language.

These critics certainly make a good case.  That said, I suspect these people cut their programming teeth on languages like C++, and that, ultimately, while their complaints are presented as practical failings of R, they reflect deeper philosophical and aesthetic differences. (… but LISP is elegant!)

I remain largely agnostic on these aesthetic questions. A language simply is what it is, and if it — and as importantly, the community of users — doesn’t let you do what you want, the way you want, then you find another language.

If you’ve ever programmed before, then R doesn’t seem so daunting, and increasingly there are good graphical user interfaces to make the process of working with R more intuitive for non-programmers. Still, fundamentally the philosophy of R is “build it yourself” … or, more often, “hack together a script to do something based on code someone else has built themselves.”

This latter tendency is true of Stata also, of course, but when you use someone else’s package in Stata, you can be reasonably confident that it’s been checked and re-checked before being released as part of the official Stata environment. That is less often the case with R (although things are steadily improving).

Indeed, not too long ago there were some significant quality-control issues with R packages, and that leaves a lingering worry in the back of your mind: is the code you’ve invoked with a command (“lm”, say, for “linear model”) actually doing what it claims to do?

Advocates of R rejoin that this is not a bug but a feature: that lingering worry ought to inspire you to learn enough to check the code yourself!

They have a point.
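And checking is often easier than it sounds, because most R functions are themselves written in R and will print their own source at the prompt. A quick illustration, using standard base-R tools:

lm                     # typing a function's name (no parentheses) prints its R source
body(lm)               # or print just the function body
getAnywhere("lm.fit")  # finds definitions tucked away inside package namespaces
# Some functions eventually bottom out in compiled C code (.Call or .Internal);
# at that point you would need the R source distribution to dig further.

You won’t audit every package this way, but the fact that you can look under the hood at all is a real difference from closed-source alternatives.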

Peer Review and Social Psychology: Or Why Introductions Are So Important!

Inspired by my colleagues Loren King and Anna Esselment, both of whom regularly make time in their busy schedules to read (I know! A crazy concept!), I’ve started to read a new book that Chris Cochrane recommended: Jonathan Haidt’s The Righteous Mind: Why Good People Are Divided By Politics and Religion.

I’m only in the first third of the book, but one of the main arguments so far is that when humans make moral (and presumably other) judgements, we tend to use our intuitions first and our reasoning second. That is to say, frequently we have gut feelings about all sorts of things and, rather than reasoning out whether our feelings are correct, we instead search for logic, examples, or arguments to support those gut feelings. Haidt effectively illustrates this argument by drawing upon a broad set of published research and experiments he has done over the years.

At the end of chapter 2, he writes:


“I have tried to use intuitionism while writing this book. My goal is to change the way a diverse group of readers … think about morality, politics, religion, and each other …. I couldn’t just lay out the theory in chapter 1 and then ask readers to reserve judgement until I had presented all of the supporting evidence. Rather, I decided to weave together the history of moral psychology and my own personal story to create a sense of movement from rationalism to intuitionism. I threw in historical anecdotes, quotations from the ancients, and praise of a few visionaries. I set up metaphors (such as the rider and the elephant) that will recur throughout the book. I did these things in order to “tune up” your intuitions about moral psychology. If I have failed and you have a visceral dislike of intuitionism or of me, then no amount of evidence I could present will convince you that intuitionism is correct. But if you now feel an intuitive sense that intuitionism might be true, then let’s keep going.”

I found these first few chapters, and this paragraph in particular, to be extremely powerful and relevant to academic publishing (and other things!). If humans tend to behave in this manner (i.e., we frequently rely on gut feelings to make moral judgements and then search for reasons to support those feelings), then the introduction of a journal article is CRUCIAL, both for peer review and afterwards. On the issue of peer review, I can’t tell you how many times I’ve received a referee report that was extremely negative, yet failed to: a) clearly show that the reviewer understood my argument; and b) demonstrate logically why my argument was wrong. I always blamed myself for not being clear enough, which is probably half true! But the real story is that sometimes my introductions were probably ineffective at connecting with reviewers’ intuitions, and so they found reasons to reject the paper.

The lesson here, I think, is that introductions matter! You can’t ask or expect readers to withhold judgement while you present the theory and evidence first. Instead, you have to find a way to tap immediately into their intuitions to make them open to considering the merits of your argument.

Other Experiences Using the Flipped Classroom

I continue to be a fan of the flipped classroom. Others have started to blog about their own experiences. Check them out:

1) Phil Arena, an IR scholar, is using it in his classes. Check out some of his reflections here.

2) Here’s economist John Cochrane’s take:


“A lot of mooc is, in fact, a modern textbook — because the twitter generation does not read. Forcing my campus students to watch the lecture videos and answer some simple quiz questions, covering the basic expository material, before coming to class — all checked and graded electronically — worked wonders to produce well prepared students and a brilliant level of discussion. Several students commented that the video lectures were better than the real thing, because they could stop and rewind as necessary. The “flipped classroom” model works.

The “flipped classroom” component will, I think, be a very important use for online tools in business education especially. Our classes are infrequent — once a week for three hours at best. Our international program meets twice for 5 days in a row, three hours per day. We also offer intensive 1-3 day custom programs. An experience consisting of a strong online component, in which students work a little bit each day over a long period, mixed with online connection through forums, videos, chats, etc., all as background for classroom interaction — which also cements relationships formed online — can be a big improvement on what we do now….

But a warning to faculty: Teaching the flipped classroom is a lot harder! The old model, we pretend to teach, you pretend to learn, filling the board with equations or droning on for an hour and a half, is really easy compared to guiding a good discussion or working on some problems together.”

I think Cochrane is wrong that the old model is as bad as he paints it here. I’ve blogged previously about how some instructors are simply fantastic lecturers and can regularly find that magic sweet spot for learning. But for people like me, I need my flipped classroom crutch to help me be an effective teacher!

More on Knowledge Mobilization: What About Individual Citizens?

There’s been some recent discussion on this blog about the importance of knowledge mobilization. Dr. Erin Tolley, for instance, provided some excellent advice several days ago based on her own experiences in government and academia. But recently I’ve been wondering: what can we academics do to better share our research findings with regular citizens?

My usual strategy has been to write op eds in regional or national newspapers. I have no idea whether this is an effective strategy. I have a hunch that op eds rarely persuade and instead simply reinforce readers’ existing opinions on the issue (one day I’d like to run an experimental study to test this proposition; I just need to convince my colleague, Jason Roy, to do it!). Sometimes I receive emails from interested citizens or former politicians. In one op ed, published in the Toronto Star, I briefly mentioned the Kelowna Accord, dismissing it as a failure. The day after the op ed was published, former Prime Minister Paul Martin called me in my office to tell me why my analysis of the Accord was wrong. That was quite the experience!


But on the issue of communicating research results to interested citizens, I wonder if there is more that I can do. At least once every six months, I receive an email from a First Nation citizen asking for advice. Usually, the questions focus on the rights of individual band members relative to the actions of the band council. One email I received, for instance, asked about potential legal avenues available to members for holding band council members accountable; the writer had somehow come across my paper in Canadian Public Administration on accountability and transparency regimes. Just last week, I received an email from a band member who was fielding questions from fellow band members about the rights of CP holders (i.e., holders of certificates of possession) against a band council that wanted to expropriate their lands for economic development. Apparently, this individual tried to look up my work online, but all of the articles I’ve written on this topic are gated (with the exception of one).

So, what to do?

Well, one easy and obvious solution is to purchase open access rights for these articles, which is something SSHRC is moving towards anyway. That way, anyone can download and read the articles.

But what else can we do? Taking a page from Dr. Tolley’s post, maybe I need to start writing one-page summaries of my findings in plain language and posting them on my website?

Another thing I want to try is to put together some short animated videos that explain my findings. This is what I hope to do with my SSHRC project on First Nation-municipal relations, if Jen and I can ever get this project finished!

Any other ideas? Suggestions welcome!

Should We Change the Grant Adjudication Process? Part 2!

Previously, I blogged about the need to reconsider how we adjudicate research grant competitions.

Others agree:

Researchers propose alternative way to allocate science funding

HEIDELBERG, 8 January 2014 – Researchers in the United States have suggested an alternative way to allocate science funding. The method, which is described in EMBO reports, depends on a collective distribution of funding by the scientific community, requires only a fraction of the costs associated with the traditional peer review of grant proposals and, according to the authors, may yield comparable or even better results.

“Peer review of scientific proposals and grants has served science very well for decades. However, there is a strong sense in the scientific community that things could be improved,” said Johan Bollen, professor and lead author of the study from the School of Informatics and Computing at Indiana University. “Our most productive researchers invest an increasing amount of time, energy, and effort into writing and reviewing research proposals, most of which do not get funded. That time could be spent performing the proposed research in the first place.” He added: “Our proposal does not just save time and money but also encourages innovation.”

The new approach is possible due to recent advances in mathematics and computer technologies. The system involves giving all scientists an annual, unconditional fixed amount of funding to conduct their research. All funded scientists are, however, obliged to donate a fixed percentage of all of the funding that they previously received to other researchers. As a result, the funding circulates through the community, converging on researchers that are expected to make the best use of it. “Our alternative funding system is inspired by the mathematical models used to search the internet for relevant information,” said Bollen. “The decentralized funding model uses the wisdom of the entire scientific community to determine a fair distribution of funding.”

The authors believe that this system can lead to sophisticated behavior at a global level. It would certainly liberate researchers from the time-consuming process of submitting and reviewing project proposals, but could also reduce the uncertainty associated with funding cycles, give researchers much greater flexibility, and allow the community to fund risky but high-reward projects that existing funding systems may overlook.

“You could think of it as a Google-inspired crowd-funding system that encourages all researchers to make autonomous, individual funding decisions towards people, not projects or proposals,” said Bollen. “All you need is a centralized web site where researchers could log-in, enter the names of the scientists they chose to donate to, and specify how much they each should receive.”

The authors emphasize that the system would require oversight to prevent misuse, such as conflicts of interests and collusion. Funding agencies may need to confidentially monitor the flow of funding and may even play a role in directing it. For example they can provide incentives to donate to specific large-scale research challenges that are deemed priorities but which the scientific community can overlook.

“The savings of financial and human resources could be used to identify new targets of funding, to support the translation of scientific results into products and jobs, and to help communicate advances in science and technology,” added Bollen. “This funding system may even have the side-effect of changing publication practices for the better: researchers will want to clearly communicate their vision and research goals to as wide an audience as possible.”

Awards from the National Science Foundation, the Andrew W. Mellon Foundation and the National Institutes of Health supported the work.

From funding agencies to scientific agency: Collective allocation of science funding as an alternative to peer review

Johan Bollen, David Crandall, Damion Junk, Ying Ding, and Katy Börner

Read the paper:

doi: 10.1002/embr.201338068
On the one hand, some might argue that the system would be hijacked by logrolling and network effects. But if the system had strong accountability and transparency measures, requiring researchers to disclose their expenditures, their outcomes, and which researchers they financially supported, I think some of those negative effects would disappear.

It’s a neat idea and someone needs to try it. SSHRC could create a special research fund that worked this way: everyone who applied would receive money (equally divided among the applicants) and would have to donate a fixed portion of it to others, as in the sketch below. Or maybe a university could try it with some internal funding.
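To make the mechanics concrete, here is a toy simulation of the basic circulation idea in R. This is my own back-of-the-envelope sketch with made-up parameters, not the authors’ published model: five scientists each receive a fixed base grant every year, then must pass a fixed fraction of their holdings to colleagues according to their donation preferences.

n <- 5
base <- 100                        # annual unconditional grant (made-up units)
rate <- 0.5                        # fixed fraction each scientist must donate
set.seed(42)                       # reproducible toy example
W <- matrix(runif(n * n), n, n)    # W[i, j]: share of i's donations going to j
diag(W) <- 0                       # no donating to yourself
W <- W / rowSums(W)                # each row of preferences now sums to 1
funds <- rep(base, n)
for (year in 1:20) {
  donated <- rate * funds                  # what each scientist must give away
  funds <- (funds - donated) +             # what each scientist keeps
    as.vector(t(W) %*% donated) +          # plus what colleagues send them
    base                                   # plus next year's unconditional grant
}
round(funds, 1)   # long-run funding concentrates on community favourites

After a few iterations, funding converges toward the researchers the community collectively prefers, which is exactly the PageRank-style intuition Bollen describes.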

Thoughts?

Hat tip to Marginal Revolution for this story.

Should we Apply Gender and Race Analysis to Everything?

According to some, the answer is yes.

So what to make of this article?

At first, I thought it was written “tongue-in-cheek”, but after reading it a second time, I’m not so sure!

And how about this one?

There’s no question that gender and race analyses have an important role to play in political science and we should be taking these perspectives more seriously than is usually the case.  But aren’t some domains more important, pervasive, and influential than others?  Angry Birds? Meh. Movember? Maybe.