With Trudeau on the rise, will Harper stick around?

Published Apr. 21, 2014, in The Guelph Mercury and Waterloo Region Record.

Justin Trudeau has been leader of the federal Liberal party for one year.

How is he doing?

Somewhat better, I think, than most people had anticipated. Although he has not taken the country by storm, neither has he wilted in the glare of public and party expectations. Like all politicians, he has made minor mistakes, but he has demonstrated the quickness of foot to acknowledge his errors, to apologize, to correct course and to carry on. The public has been forgiving.

After one year, he has raised his battered party from third place to first in the polls. Pollsters project he would become prime minister at the head of a minority Liberal government, if an election were held today (which, of course, it won’t be). Based on today’s numbers, LISPOP (Laurier Institute for the Study of Public Opinion and Policy) projects a wafer-thin margin: 127 Liberal seats, 120 Conservative and 81 New Democrat.

Time will tell. What has impressed the political community for many months is the durability of the Liberal revival. It has gone long past the honeymoon phase. Éric Grenier, the poll aggregator and analyst at ThreeHundredEight.com, puts Liberal popular support at 36 per cent. That’s not terrific, but it’s eight points higher than Stephen Harper’s Conservatives, who remain mired at 28 points.

In an article for the Globe and Mail, Grenier reports that the Liberals have consistently led in the national polls for the past 12 months. “The Liberals are up five points on where they stood a year ago, eight points on where they were in the month before naming their new leader, and 14 points compared to the support the Liberals enjoyed in September 2012, just before Mr. Trudeau announced his intentions to run for the leadership,” Grenier writes.

The Liberals have a huge lead in Atlantic Canada, have moved ahead of the NDP in Quebec and stand at 40 per cent in battleground Ontario, up 13 points from their pre-Trudeau level. They have gained ground, but still trail the Tories in the West.

Not all of this improvement is Trudeau’s doing, of course. In Canada, as in other democracies, opposition parties seldom win elections; they become the beneficiaries when governments defeat themselves. That is certainly what happened in 2004-2006, when the Liberals defeated themselves over the sponsorship scandal, bringing Harper’s Conservatives to power.

But Trudeau seems to wear well. He is no longer seen as a kid with a good name and a slender resume. He has established himself as a serious politician. He is also a genuinely likeable politician, and likeability is a significant asset in politics. Bill Davis and Peter Lougheed had it. So did Jean Chrétien in the early years. NDP leader Thomas Mulcair comes across as too hard-edged to be truly likeable. And likeability is simply not part of Stephen Harper’s political wiring.

Ask yourself: if you were inviting a national leader over for a beer and a burger in your backyard, whom would you invite? You would choose someone who is interesting and fun. Harper? No way. Mulcair? Probably not. Elizabeth May? Yeah, maybe. Trudeau? For sure. That likeability is reflected in renewed Liberal popularity, especially among young voters and female voters.

Éric Grenier notes that the Liberals’ year-long lead in the polls is the longest stretch the Harper government has spent trailing in second place since its election in 2006. “The last time a majority government trailed in the polls for as long as the Conservatives have was in the last years of Brian Mulroney’s tenure,” Grenier observes. That was back in 1992-93. Mulroney hung on. Kim Campbell eventually replaced him. And the mighty Tories won just two seats in the 1993 election.

No one is predicting obliteration on that scale for Harper’s Tories. But the question on Ottawa’s lips (it has passed the sotto voce stage) is: will Harper stay on if he is not pretty darned sure he will retain his majority? The smart money says no.

Why Elections Canada? Scrap? Reform?

Suppose you think there is a public goods rationale for government doing more than simply telling citizens where and when to vote. Suppose you accept — in principle, at least — that government has an interest in ensuring a fair political playing field. To be sure, you might disagree about the specific ‘goods’ in question and how they are best provided (as Chris and I do), but still think there is an important regulatory role for government here.

If you think this, then Chris quite reasonably asks: why Elections Canada? I may muse darkly about needing a strong and independent federal agency to stop a slide toward U.S.-style electoral theatrics, but at the end of the day Canada simply isn’t the United States: there is a range of social forces and public actors here that militate against the kind of free-spending, vitriolic, evidence-free acrimony that I fear.

So, even if my characterization of the U.S. system is accurate, why turn to Elections Canada, of all things, to do the work that could be done better by other agencies and non-government actors? That’s the essence of Chris’s challenge to me here.

To be clear, I am not against serious reform to Elections Canada. Indeed, I think a genuinely fair elections act would do just that: reform and empower the agency. I’m also not wedded to the centralized solution I’ve been lobbying for (although I do think there’s a good case for going that route). I might even share some of what I take to be Chris’s more generic suspicion about rushing to centralization of regulatory power as the solution whenever we find something that might vaguely resemble a public good.

I do think there is a public goods rationale for (i) non-partisan voter mobilization; (ii) maintaining the ‘information commons’ around elections in ways that require more than simply telling voters where and when to vote; and (iii) ensuring a fair political playing field. Chris rejects (i), but accepts (ii) and (iii). I’ll readily grant his scepticism about a strong centralized solution for (ii) and (iii). Indeed, if it can be shown that there is an effective and efficient way to provide the goods in question without an agency like Elections Canada, then I’m fine with that. It’s a technical question.

I am tempted, however, to respond to that scepticism by asserting a subsidiarity principle, and if you accept subsidiarity, then it seems as though federal elections invite a federal regulator, with the necessary powers at the federal level. An obvious analogy is policing and intelligence: there’s a reason the OPP doesn’t do CSIS and RCMP work, and vice versa.

Having said that, a contrasting analogy is securities regulation, and it’s interesting that here Canada does go a very different route than most big industrial economies: we don’t have an equivalent to the U.S. Securities and Exchange Commission, for example.

I suppose one could make the case that we do just fine in Canada without an SEC-styled federal agency. People make that case, certainly. Others have concerns. Still others note that, in the U.S., the SEC isn’t powerful, independent, and effective enough to actually do its job, and so the solution is not to go the Canadian route, but to make a better regulatory agency.

It probably won’t surprise any readers of this exchange to learn that I sympathise with the latter complaint, although here as in elections, I’m not wedded in principle to a centralized solution. Again, it’s a technical question.

So, I guess I’m agnostic on the question of whether or not there might be a (uniquely Canadian?) approach to providing the public goods required for fair elections—one that doesn’t need a federal agency like Elections Canada. Of course, if you already have such an agency in place then there may be an efficiency rationale for simply going that route, by reforming and empowering that agency, rather than gutting it.

Again, however, that’s an empirical question, and I’ll happily concede that there might be a plausible case for trashing Elections Canada and instead trying a decentralized approach that manages elections through a bunch of different offices and agencies.

I’ll note, though, that Poilievre and the PMO have not made anything like that case, and are instead pushing for less regulation of campaign spending and content, higher costs of entry to the political game, and more diffuse enforcement and investigatory powers. These are all initiatives that seem to militate against Chris’s optimism that we do things differently up here, and that we can rely on the status quo arrangement to maintain the informational commons around elections.

In short, then, I think I share some of Chris’s reservations in principle. I simply don’t trust this government not to screw things up.

Fair Elections Act Debate: One More Once!

Loren and I agree that the state should have a role in elections.

Where we fundamentally disagree, I think, is on this point:

“I don’t want Canada sliding further toward the U.S. in this respect, so I think we have a compelling interest in sustaining a credible non-partisan state agency [e.g. Elections Canada] to balance and correct the excesses of partisan politics.”

I agree with him that there must be some sort of mechanism in place to “balance and correct the excesses of partisan politics,” but I don’t think it should be Elections Canada.

First, we need to consider “the excesses of partisan politics” argument in terms of degree (i.e., as a continuum). In Canada, we don’t suffer from the excesses that exist in the U.S., so an expanded role for Elections Canada doesn’t make sense. Nor do I think there is any credible (or even anecdotal) evidence that Elections Canada in its current role has created this situation or would be able to correct it in the future.

Second, don’t we already have mechanisms in place that do a pretty good job of correcting partisan misinformation and hyperbole in Canada? We have national, regional, and local newspapers, TV stations, and radio stations that cover elections with summaries and analysis. We have academics in Canada who are constantly in the media, giving interviews, providing seat projections and analyses of polls, and writing op-eds and commentaries on Twitter. We also have many independent pollsters, pundits, and think tanks, all of whom regularly provide analysis of issues, policies, and elections. So why do we need Elections Canada?

Third, why all of this hullabaloo over the information/motivational role of Elections Canada in particular? I agree with Loren that there is “a public interest in leveling the playing field of campaign spending and media access,” but that’s not the job of Elections Canada, nor should it be! It’s the job of Parliament to pass laws and regulations on these issues, and of the police and the judicial system to enforce them.

In any event, I don’t think Canada will turn into the U.S. because of the Fair Elections Act. I don’t think Elections Canada with its present powers can prevent the kind of hyper-partisanship and partisan hyperbole that critics are worried about, nor do I think Elections Canada should be given the sweeping powers that would be needed to actively prevent those types of activities in the future. I do agree, no question, that the Canadian state and civil society should work together to provide information and motivation. But I just don’t see why it should be Elections Canada in particular.

Still More on the Fair Elections Act: What Kind of Informational Role for Elections Canada?

In his latest post, Chris argues against the state treating voting as a positive right, and he asks whether, if my worries about alleged partisan pathologies are persuasive, we should “be asking Elections Canada to do much more than it actually does”?

I suspect Chris means the question to be rhetorical (‘no, of course we shouldn’t!’), but frankly I’d take the gambit here and answer yes: a genuinely Fair Elections Act would empower the agency and expand its mandate, not gut it and consign it to a very narrow (merely procedural) informational role.

(Then again, I also like the idea of Statistics Canada taking a regular and reliable census, and now we don’t have that either, so I’m not holding my breath.)

Does the state have an interest in mobilizing voters in non-partisan ways? I think yes: there’s something morally desirable about the kind of democracy you get when citizens think of voting not only (or chiefly) in partisan terms, but also as part of a greater civic project.

That isn’t to deny the importance of partisanship in democratic politics: I agree with Chris that partisan difference is important, even desirable. Still, democracy should be more than partisan conflict. We need to recognize that, even when we disagree (sometimes passionately), we are still part of a shared public project that is worth maintaining. I worry that, in the U.S., a widespread sense of politics as a shared project is increasingly precarious. (I don’t agree with Michael Sandel on much, but I do on this.)

I don’t want Canada sliding further toward the U.S. in this respect, so I think we have a compelling interest in sustaining a credible non-partisan state agency to balance and correct the excesses of partisan politics.

My moral stance can be disputed, however, just as Chris suggests. If you follow the tradition of, say, Joseph Schumpeter and William Riker, then you’ll emphasize the “liberal” over the “democracy” in “liberal democracy,” and you’ll worry about the state pushing people to exercise their rights. The state’s job is simply to affirm and protect those rights, not to nudge people one way or another.

So let’s grant that point, for the sake of argument: the state may have an interest in socializing young citizens to take seriously their (negative) right to vote, and also in informing adult citizens about how, where, and when to exercise that right. There is no compelling interest, however, in trying to encourage citizens actually to vote. As Chris puts it, “the role of the state with respect to voting is to protect the ability of citizens to participate freely in elections,” not to nudge them toward participation.

Still, even granting that view, the vision of democracy behind the Fair Elections Act seems unjustifiably restrictive in how it understands the kinds of information that the state might have an interest in providing.

Remember Poilievre’s succinct rationale for his proposed reforms?

“There are two things that drive people to vote: motivation and information. Motivation results from parties or candidates inspiring people to vote. Information (the ‘where, when and how’) is the responsibility of Elections Canada.”

What I don’t understand is why we should limit the informational role of Elections Canada to little more than pointing to polling stations and announcing election dates. Even on Chris’s protective and procedural account of democracy, why isn’t there a state interest in correcting the informational pathologies that we know are likely to arise from partisan mobilization strategies during campaigns? Why isn’t there a public interest in leveling the playing field of campaign spending and media access?

So, is citizen participation (sometimes) a public good? I think so, but even if you reject my moral grounds for that position, there is still a compelling ‘public good’ rationale for the state doing more than simply providing the “where, when, and how” of voting, and doing so through an effective and credibly non-partisan agency like Elections Canada.

If we take the negative right of voting seriously, then we should also care about the substantive, rather than simply procedural, features of the informational environment in which voting takes place. Since we know that partisan actors have a clear incentive to distort that environment, why not empower a non-partisan agency to maintain the quality of the informational commons?

This line of reasoning also supports keeping investigative and enforcement powers within the same agency that maintains the informational commons within which citizens exercise their right to vote, and bolstering, not weakening, those powers.

So, I think someone with Chris’s view of liberal democracy has reasons to reject my moral argument for a state interest in mobilizing voters, but not for rejecting a state interest in maintaining a certain kind of public sphere: not only an unbiased informational environment, but also a fair playing field for varied partisan and non-partisan players.

That demands more than simply telling voters where, when, and how to vote.

Of course, if you think that personal rights always trump these kinds of public concerns, then you get the current U.S. system, where any serious attempts to regulate campaign contributions, or to police the volume and content of political advertising, are now considered violations of free speech.

I’m not convinced Canada should strive for such a system. Indeed, I think it would be a disaster (even if we seem to have been stumbling in that direction for some time). It’s pretty clear, however, that by gutting Elections Canada so decisively and bolstering the financial clout of established parties in funding campaigns, Poilievre and the PMO want to move us in just that direction.

That should trouble all of us.

Citizen Participation is a Public Good?

In Loren’s latest post, he argues:

“I want a non-partisan government agency charged with important information and mobilization roles not because I think they can do it best, but because I think citizen participation is a kind of public good, and I’m not especially fond of how that good is provided when we leave it to partisan interests and underfunded NGOs.”

In one sense, I kind of agree with Loren that citizen participation is a sort of public good and that the state should have a role in ensuring that citizens have the opportunity to participate in public policy, or in this case, elections. But the million-dollar question is: what should that role actually entail?

The federal minister believes that the role of Elections Canada should be purely informational. Many of my colleagues, on the other hand, argue that it should be informational AND motivational.

Why? Because we (they?) can’t trust partisan interests and civil society to provide these public goods (specifically, unbiased information and sufficient motivation).

Maybe they are right. Maybe we should distrust partisan interests and civil society and the messages they transmit during elections.

But what does that have to do with Elections Canada?

If we take these criticisms seriously (e.g. “cynical hyperbole and factual distortions aimed to placate the base, then exquisitely refined grassroots campaigning to win at the margins”), then shouldn’t we be asking Elections Canada to do much more than it actually does?

For instance, if we are worried about informational distortions, then shouldn’t we be asking Elections Canada to also provide factual and neutral summaries and commentaries on political campaign messages, press releases, speeches, political platforms, and the like, as they are released during election campaigns? Shouldn’t we also be demanding that Elections Canada conduct and publish its own public opinion polls during the pre-writ and post-writ periods, or at least commentaries on the accuracy of those polls? That might help us avoid situations like the 2011 federal election, when those darn biased and underfunded pollsters failed to predict the orange wave in Quebec!

Unless there is evidence to suggest that Elections Canada can have a significant impact on motivating people to vote (e.g. beyond a 1-2% bump), I don’t think it’s the right tool or body for accomplishing this goal, nor do I see a moral justification for the various activities that critics want Elections Canada to continue to provide. Certainly there may be a moral justification for the state to be involved, but for Elections Canada in particular? I don’t see it.

I also think there’s value in partisanship and partisan differences. Indeed, partisan posturing is part of what makes Canada’s democratic system work, and it is why some jurisdictions with consensus government structures are not so enamoured of their non-partisan systems (talk to someone from the Northwest Territories)!

Finally, given the state of democracy in Canada, at least when it comes to the ability of citizens to exercise their right to vote, I tend to think of the right to vote in Canada as belonging to the category of “negative rights” rather than “positive rights.” In other words, I think the role of the state with respect to voting is to protect the ability of citizens to participate freely in elections, and more specifically, to vote how they please without any undue coercion.

In short, I don’t see what all the fuss is with this particular part of the Fair Elections Act. Maybe I’m wrong. I’ve been wrong before! I’m hoping someone will convince me soon.

Motivating Citizens: Who Should Mobilize Voters?

In a reply to my recent complaints about the Fair Elections Act, my colleague Chris Alcantara asks three very good questions: has Elections Canada been successful in its mobilization efforts? Should it even be trying? And should the state even be involved in promoting turnout in the first place?


Poilievre and the PMO clearly think the answers are “no!” across the board.

In selling the Act, Poilievre cites declining turnout over the past decade as evidence that “public advertising and outreach campaigns of Elections Canada have not worked,” and insists that “Political candidates who are aspiring for office are far better at inspiring voters to get out and cast their ballot than our government bureaucracies.”

That view has considerable intuitive appeal, but as a political theorist I have some reasons for thinking that the result is an unattractive view of democracy.

We know that partisan mobilization strategies work: indeed, the last few rounds of presidential campaigning in the United States show us just how sophisticated and effective partisan mobilization efforts can be. Those campaigns also revealed important moral shortcomings of the approach: cynical hyperbole and factual distortions aimed to placate the base, then exquisitely refined grassroots campaigning to win at the margins, getting out those committed voters and whichever independents and ‘leaners’ can be swayed, state by state, district by district, door to door, Twitter sub-network to Twitter sub-network.

To be sure, the game theorist and data nerd in me marvels at this sophistication (and notes the employability it might portend for some of our students, even possibly for me if Laurier decides to fire all the tenured Arts faculty someday soon, perhaps to better finance a new wave of administrative positions).

The political philosopher in me, however, wonders whether this is the most desirable model for democracy: partisan strategists playing elaborate chess games, with a few scrappy NGOs playing chronic catch-up, struggling to correct the inevitable distortions or outright lies in various target markets, and struggling to motivate citizens with non-partisan appeals.

What do we know, empirically, about partisan versus non-partisan mobilization efforts?

There have been some interesting field experiments in the U.S. addressing just this question, and the findings, while modest, are suggestive: face-to-face canvassing works (although it isn’t the whole story by any means), but it isn’t at all clear that partisan versus non-partisan messaging makes a difference; social pressure matters, and the content of implied social norms seems to be decisive.

So, the evidence isn’t at all clear on Poilievre’s claim that partisans are best positioned to motivate voters. No doubt they are the most interested parties, and if they follow some of the emerging research in the U.S., perhaps they too will move toward non-partisan social pressure cues emphasizing gratitude and high voter turnout (these seem to be the specific framing strategies that work well in the burgeoning experimental literature). But even if partisan actors excel at door-to-door canvassing, partisans are also the most likely to be implicated in factual distortions, cynical manipulation, and gross simplification of complex policy issues.

All part of the game, perhaps?

If we settle for this as the limit of our democratic aspirations, then I suppose so. I prefer to think we might be able to do better.

But how? Why trust a government agency to mobilize voters? Isn’t Elections Canada more likely to waste money appointing political friends and famous faces to fluffy (and expensive) “expert” panels? (I’m not a conservative or libertarian, but I think their respective complaints about this panel are pretty much right on the mark).

I want a non-partisan government agency charged with important information and mobilization roles not because I think they can do it best, but because I think citizen participation is a kind of public good, and I’m not especially fond of how that good is provided when we leave it to partisan interests and underfunded NGOs.

You could, of course, strive to regulate those partisan interests more aggressively, but that would involve strengthening Elections Canada’s regulatory and enforcement powers over things like advertising and campaign contributions, which strikes me as not a very promising route for reform; certainly the current government shows no interest in it. The current act, after all, would increase campaign spending limits, constrain third-party advertising (while leaving the spending and content of party advertising unregulated), remove the enforcement officer from EC, and add no new investigative powers (to compel testimony, for instance).

Or perhaps we could better fund those NGOs and other non-partisan voices, so as to level the playing field for political voice and correct partisan excesses? Again, that seems to be something best suited to an agency like Elections Canada, and insofar as the current mandate of EC involves such programmes, the Act wants to diminish that role (no more support for programmes like StudentVote, for example).

I’d certainly like to see an Elections Canada that can, with sufficient oversight and transparency, develop in-house expertise to engage in both information and mobilization programmes, but can also contract out that work to reputable non-partisan groups who can do the job cheaper and better. I’d like to see them have the funding, independence, and expertise to investigate bad behaviour, enforce regulations, and ensure a level political playing field during elections. I wish the Fair Elections Act were tailored to reform Elections Canada into such an agency.

The Fair Elections Act, as it stands, doesn’t do this. As is so often the case with this government, it insists on bundling together uncontroversial ‘housekeeping’ initiatives with dubious, ill-considered changes (along with some obviously partisan stuff that should have stayed in the dark recesses of Harper’s imagination). It then ignores any and all critics, including a range of experts, lashing out instead with political attacks.

Taking stock

Published Apr. 14, 2014, in The Waterloo Region Record

Politicians by nature are not the most introspective of creatures. They do what they think they have to do, or what their leaders tell them to do. It is a rare politician who pauses to ask why they are doing it, or to question whether it is the right thing, or the best thing, for the country they serve.

That said, members of Parliament have an opportunity this week and next week to take stock. The sudden death of former finance minister Jim Flaherty shocked everyone on Parliament Hill and far beyond. Here was a man who had worked too hard for eight years in the service of the Harper government. In the process he destroyed his health and, suddenly, he was gone before he could even start to enjoy retirement. Many of his former colleagues from all parties, most of whom genuinely liked and respected the feisty little Irishman, are asking themselves whether it is all worth it.

Parliament has become a very nasty place. Back in the day, I spent 15 happy years in the Parliamentary Press Gallery covering the Hill. I barely recognize the place today. In those days, the House of Commons was a rough-and-tumble arena, but respect for the rules and the firm hand of the Speaker prevented it from becoming what it is today: a place where blind partisanship, vitriol and personal attacks have taken over. In those days, there was no Pierre Poilievre and no Orwellian “Ministry of State for Democratic Reform” — for which those of us who were there might, in retrospect, be grateful.

The Commons is not sitting this week or next as MPs take an extended Easter recess. The break will not only enable them to say goodbye to Jim Flaherty — his state funeral is on Wednesday — but also to reflect on the sort of Parliament they want to return to.

Radical change is not in the cards, but MPs could take a few baby steps. On the government side, they could stop parroting the absurdly partisan and abusive lines written for them by the Prime Minister’s Office. On the opposition side, they could tone down the outrage; not everything the government does is wrong, badly motivated or an affront to democracy.

They could take a balanced approach to legislation. If a piece of legislation would make a bad law, they should expose its flaws (or, if on the government side, admit its flaws), then withdraw it or defeat it. If a piece of legislation would make a good law, they should applaud it and pass it.

The so-called Fair Elections Act is the place to start. This is Poilievre’s baby, conceived in the Conservative war room and handed to the young minister by the prime minister. The act surely has critics. Among other things, it’s being called the Unfair Elections Act, an Assault on Democracy Act, an Act to Perpetuate Conservative Governments (Forever), and Stephen Harper’s Revenge Against Elections Canada.

There are many things wrong with the Fair Elections Act, but I’ll mention just two. First, it is unnecessary. There is nothing fundamentally wrong with the existing Canada Elections Act. The act has given Canada some of the fairest and most honest elections in the world. Canadians are the first people other nations call on when they need international election observers. Our rules work.

The second thing that’s wrong is that the Fair Elections Act is thoroughly bad legislation — retrograde, badly motivated, poorly crafted and appallingly partisan. It would discourage turnout by making it more difficult for some (mainly non-Conservative) groups to vote: the poor, the homeless and students. It would politicize enforcement by transferring authority over the rules from the public servants who are custodians of the act today to the agents of the party in power.

Jim Flaherty has reminded us of the fragility of life. Do we need Pierre Poilievre and his Fair Elections Act to remind us of the fragility of our democracy?

Why Elections Canada? Or Why Loren’s Latest Post is Somewhat Puzzling

My colleague Loren King, in his latest post, continues the “pile-on” of the so-called Fair Elections Act and the beleaguered Minister of Democratic Reform, Pierre Poilievre.

He disagrees with the following of Minister Poilievre’s points: a) that it is up to parties and candidates to inspire people to vote; and b) that Elections Canada should be limited to communicating basic information, rather than trying to mobilize people to vote.

Loren’s argument is that “Citizen motivation to take part in their democracy shouldn’t be left to partisan forces. Sincere and informed civic participation is a public good, and there is no inconsistency (indeed, there is considerable virtue) in having Elections Canada involved in both informing voters and encouraging them to take part in public life, especially voting.”

I agree with some of Loren’s points, but I don’t understand this obsession among academics with defending Elections Canada’s role in mobilizing people to vote. The way it’s framed in most cases, it seems like it will be the end of the world if Elections Canada is not allowed to hold voting celebrations or to engage in social media campaigns to get out the vote! The basic message seems to be: “No Elections Canada = the death of democracy and the end of voting as we know it!”

Maybe I’m becoming an old curmudgeon, or maybe Daniel Kahneman’s book is starting to push me to more frequently engage my system 2 thinking in situations when system 1 has been oh so dominant in the past!

But, consider the following (to which I have no answers of course!):

First, is there any evidence that the activities Elections Canada engages in actually produce increased voter turnout?

Second, is Elections Canada the most effective means of motivating people (more specifically, adults!) to vote? Or would this task be better left to political parties and civil society actors (like Fair Vote Canada)?

Third, how active should the state actually be in promoting voter turnout among adults? I agree that the state should be active in socializing youth in schools. Informing and educating students about the roles and duties involved in being a Canadian citizen is exactly the job of the state, and it should be working hard to foster habitual voting among Canada’s youth (especially since the evidence suggests that habitual voting continues into adulthood).

But I admit, I’m not so sure that Elections Canada in particular should have this role.

Loren says that “there is considerable virtue” in having Elections Canada involved in motivation and information. I’m curious about what he means and I hope he will explain soon in his next post!

Pierre Poilievre’s Mistaken View of Democracy

Late to the party here (but I did sign the letter). I don’t have much to add to the excellent public commentary about this misguided act, but there is one point that hasn’t received enough scrutiny, and I think it’s important.

In his public attempt to defend a frankly poor piece of legislation, Pierre Poilievre, Minister for Democratic Reform, asserts the following:

“There are two things that drive people to vote: motivation and information. Motivation results from parties or candidates inspiring people to vote. Information (the “where, when and how”) is the responsibility of Elections Canada. … The Fair Elections Act will require Elections Canada to communicate this basic information, while parties do their job of voter motivation.”

This strikes me as interestingly wrong, betraying a misguided moral vision of what democracy is, and what it could be.

Citizen motivation to take part in their democracy shouldn’t be left to partisan forces. Sincere and informed civic participation is a public good, and there is no inconsistency (indeed, there is considerable virtue) in having Elections Canada involved in both informing voters and encouraging them to take part in public life, especially voting.

We shouldn’t drive a partisan wedge between motivation and information in the way Poilievre so breezily suggests. To do so is to accept a cynical and, frankly, antidemocratic view of Canadian politics.

Think about voting. It is, most of the time and for most people, apparently inconsequential: as political scientists have (in)famously noted, it cannot be justified merely by the expected gains weighed against the very real costs of becoming informed and showing up at the ballot box. And yet it is a vitally important act, one that citizens routinely undertake regardless of the apparent waste of time and resources.

Whatever voting is, then, it isn’t merely a rational act, or a result of partisan haranguing. It is something far more valuable.

In a thoughtful recent commentary, Peter Loewen gets this point exactly right:

“If the decision to vote is really important, it is because it is a small act that tells us something about individuals’ values. It is like so many other democratic and civic acts: small in isolation, grand in aggregation. Seemingly trivial, but in fact deeply revealing of what an individual values and wants. Good societies are made up of these small acts.”

That profoundly important act is not something we should trust to partisan voices. It is the sine qua non of a healthy democracy, and as such, it deserves better than the partisan fate that Harper and Poilievre have in mind.

 

A Political Theorist Teaching Statistics: Estimation and Inference

Still more about my experiences this term as a political theorist teaching methods.

How to introduce the core ideas of regression analysis? Via concrete visual examples of bivariate relationships, culminating in the Gauss-Markov theorem and the classical regression model? Via a more abstract but philosophically satisfying story about inference and uncertainty, models and distributions? Some combination of the two?

I took my lead here from my first teacher of statistics, and I want to describe and praise that approach, which still impresses me as quite beautiful in its way.


I remember with some fondness stumbling through Gary King’s course on the likelihood theory of inference just over twenty years ago. That course, in turn, drew heavily on King’s Unifying Political Methodology, first published in 1989.

I’m too far removed from the methods community to have a sense of how this book is now received. I remember at the time, when I took King’s course, thinking that the discussion of Bayesian inference was philosophically … well, a bit dismissive, whereas nowadays Bayes seems just fine. Revisiting the relevant sections of UPM (especially pp. 28-30) I now think my earlier assessment was unfair.

Still, UPM is easily recognizable as the approach that led Chris Achen to say the following in surveying the state of political methods little more than a decade after King’s book first appeared …

… Even at the most quantitative end of the profession, much contemporary empirical work has little long-term scientific value. “Theoretical models” are too often long lists of independent variables from social psychology, sociology, or just casual empiricism, tossed helter-skelter into canned linear regression packages. Among better empiricists, these “garbage-can regressions” have become a little less common, but they have too frequently been replaced by garbage-can maximum-likelihood estimates (MLEs). …

Given this, it wouldn’t have surprised me if, upon querying methods colleagues, I’d found that UPM remains widely liked, its historical importance for political science acknowledged, but its position in cutting-edge methods syllabi quietly shuffled to the “suggested readings” list.

Is this the case? I doubt it, but even if all that were true, UPM is the book I learned from, and it’s the book I keep taking off the shelf, year after year, to see how certain basic ideas in distribution and estimation theory play out specifically for political questions.

Of course I say that as a theorist: whenever I’ve pondered high (statistical) theory, nothing much has ever been at stake for me personally, as a scholar and teacher. Now, with some pressure to actually do something constructive with my dilettante’s interest in statistics, I wanted to teach with this familiar book ready at hand.

I haven’t been disappointed, and I want to share an illustration of why I think this book should stand the test of time: King’s treatment of the classical regression framework and the Gauss-Markov theorem.

Try googling “the Classical Regression Model” and you’ll get a seemingly endless stream of (typically excellent) lecture notes from all over the world, no small number of which probably owe significant credit to the discussion in William Greene’s ubiquitous econometrics text. High up on the list will almost certainly be Wikipedia’s (actually rather decent) explanation of linear regression. The intuition behind the model is most powerfully conveyed in the bivariate case: here is the relationship, in a single year, between a measure of human capital performance and per capita GDP for a sample of countries …

[Figure: scatter plot of a human capital measure against per capita GDP for a sample of countries, with fitted regression line]

Now, let’s look at that again but with logged GDP per capita for each country in the sample (this is taken, by the way, from the most recent Penn World Table) …

[Figure: the same scatter plot with logged GDP per capita, with fitted regression line]

The straight line is, of course, universally understood as “the line of best fit,” but that interpretation requires some restrictions: conditions under which calculating that line using a particular algorithm, ordinary least squares (OLS, or simply LS), results in the best linear unbiased estimator (thus the acronym BLUE, so common in introductory treatments of the CLRM). OLS minimizes the sum of squared errors, measured vertically at each value of x (rather than, say, perpendicular to the line). Together, those conditions are the Gauss-Markov assumptions, named for the Gauss-Markov theorem, which, given those conditions (very roughly: uncorrelated errors with mean zero and constant variance, themselves uncorrelated with x, or with the columns of the multivariate matrix X; normality is not actually required for the theorem), establishes OLS as the best linear unbiased estimator of the coefficients in the equation that describes that ubiquitous illustrative line,

y_{i} = \beta_{0} + \beta_{1}x_{i} + \epsilon_{i}

or, in matrix notation for multiple x variables,

y = X\beta + \epsilon

… and that’s how generations of statistics and econometrics students first encountered regression analysis: via this powerful visual intuition.
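For concreteness, here is a minimal sketch in R (the environment I weigh against Stata in another post) of the same kind of bivariate fit. The data are simulated, not the actual Penn World Table series, and the variable names are my own illustration:

```r
# A bivariate OLS fit in the spirit of the plots above, using
# simulated data (not the actual Penn World Table series).
set.seed(1)
gdp_pc <- exp(rnorm(100, mean = 9, sd = 1))           # skewed GDP per capita
human_capital <- 2 + 0.8 * log(gdp_pc) + rnorm(100)   # roughly log-linear

fit_log <- lm(human_capital ~ log(gdp_pc))   # OLS after logging GDP

plot(log(gdp_pc), human_capital)
abline(fit_log)      # the familiar "line of best fit"
summary(fit_log)     # the estimated slope should be near the simulated 0.8
```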

But as King notes in UPM, the intuition was never entirely satisfying upon more careful reflection. Why the sum of squared errors, rather than, say, the sum of the absolute values of the errors? And why measure the errors vertically, at each value of x, rather than, again, perpendicular to the line we want to fit?

UPM is, so far as I know, unique (or at the very least, extraordinarily rare) in beginning not with these visual intuitions, but instead with a story about inference: how do we infer things about the world given uncertainty? How can we be clear about uncertainty itself? This is, after all, the point of an account of probability: to be precise about uncertainty, and the whole point of UPM was (is) to introduce statistical methods most useful for political science via a particular approach to inference.

So, instead of beginning with the usual story about convenient bivariate relationships and lines of best fit, UPM starts with the fundamental problem of statistical inference: we have evidence generated by mechanisms and processes in the world. We want to know how confident we should be in our model of those mechanisms and processes, given the evidence we have.

More precisely, we want to estimate some parameter \theta, taking much of the world as given. That is, we’d like to know how confident we can be in our model of that parameter \theta, given the evidence we have. So what we want to know is p( \theta | y), but what we actually have is knowledge of the world given some parameter \theta, that is, p( y | \theta ).

Bayes’s Theorem famously gives us the relationship between a conditional probability and its inverse:

p(\theta|y) = \dfrac{p(y|\theta)\, p(\theta)}{p(y)}

We could contrive to render p(y) as a function of p(\theta) and p(y | \theta) by integrating the joint density p(\theta, y) over the whole parameter space \Theta, giving p(y) = \int_\Theta p(\theta)\, p(y|\theta)\, d\theta, but this still leaves us with the question of how to interpret p(\theta).

These days that interpretive task hardly seems much of a philosophical or practical hurdle, but Fisher’s famous approach to likelihood is still appealing. Instead of arguing about (variously informative) priors, we could proceed instead from an intuitive implication of Bayes’s result: that p(\theta |y) might be represented as some function of our evidence and our background understanding (such as a theoretically plausible model) of the parameter of interest. What if we took much of that background understanding as an unknown function of the evidence that is constant across rival models of the parameter \theta?

Following King’s convention in UPM, let’s call these varied hypothetical models \tilde{\theta}, and then define a likelihood function as follows:

L(\tilde{\theta}|y) = g(y) p(y|\tilde{\theta})

This gives us an appealing way to think about relative likelihoods associated with rival models of the parameter we’re interested in, given the same data …

\dfrac{L(\tilde{\theta_{i}}|y)}{L(\tilde{\theta_{j}}|y)} = \dfrac{g(y) p(y|\tilde{\theta_{i}})}{g(y) p(y|\tilde{\theta_{j}})}

g(y) cancels out here, but that is more than a mere computational convenience: our estimate of the parameter \theta is relative to the data in question, where many features of the world are taken as ceteris paribus for our purposes. These features are represented by that constant function (g) of the data (y). We can drop g(y) when considering the ratio

\dfrac{p(y|\tilde{\theta_{i}})}{p(y|\tilde{\theta_{j}})}

because our use of that ratio, to evaluate our parameter estimates, is always relative to the data at hand.
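To make the ratio concrete, here is a minimal sketch in R; the data and the two rival parameter values are my own invented illustration, not an example from UPM:

```r
# Relative likelihood of two rival parameter values for the same data,
# under a normal model with sigma = 1 (introduced just below).
set.seed(42)
y <- rnorm(25, mean = 1.2, sd = 1)   # simulated data with true mean 1.2

loglik <- function(theta, y) sum(dnorm(y, mean = theta, sd = 1, log = TRUE))

# g(y) cancels in the ratio, so a difference of log-likelihoods suffices:
exp(loglik(1.0, y) - loglik(0.0, y))   # how much better theta = 1 fares than theta = 0
```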

With this in mind, think about a variable like height or temperature. Or, say, the diameter of a steel ring. More relevant to the kinds of questions many social researchers grapple with: imagine a survey question on reported happiness using a thermometer scale (“If 0 is very unhappy and 10 is very happy indeed, how happy are you right now?”). We can appeal to the Central Limit Theorem to justify a working assumption that

y_{i} \sim f_{stn} (y_{i} | \mu_{i}) = \dfrac{e^{-\frac{1}{2}(y_{i}-\mu_{i})^{2}}}{\sqrt{2\pi}}

which is just to say that our variable is distributed as a special case of the Gaussian normal distribution, but with \sigma^{2}=1.
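As a quick sanity check of that claim, in R the density above is just dnorm with sd = 1 (the function name f_stn below is my own shorthand, echoing the notation above):

```r
# The standardized normal density above is dnorm with sd = 1.
f_stn <- function(y, mu) exp(-0.5 * (y - mu)^2) / sqrt(2 * pi)

y <- seq(-3, 3, by = 0.5)
all.equal(f_stn(y, mu = 0), dnorm(y, mean = 0, sd = 1))   # TRUE
```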

By now you may already be seeing where King is going with this illustration. The use of a normally distributed random variable to illustrate the concept of likelihood is just that: an illustrative simplification. We could have developed the concept with any of a number of possible distributions.

Now for a further illustrative simplification: suppose (implausibly) that the central tendency associated with our random variable is constant. Suppose, for instance, that everyone in our data actually felt the same level of subjective happiness on the thermometer scale we gave them, but there was some variation in the specific number they assigned to the same subjective mental state. So, the reported numbers cluster within a range.

I say this is an implausible assumption for the example at hand, and it is, but think about this in light of the exercise I mentioned above (and posted about earlier): there really is a (relatively) fixed diameter for a steel ring we’re tasked to measure, but we should expect measurement error, and that error will likely differ depending on the method we use to do the measuring.

We can formalize this idea as follows: we are assuming E(Y_{i})=\mu_{i} for each observation i. Further suppose that Y_{i}, Y_{j} are independent for all i \not= j. So, let’s take the constant mean to be the parameter we want to estimate, and we’ll use some familiar notation for this, replacing \theta with \beta, so that \mu_{i} = \beta for every observation i.

Given what we’ve assumed so far (constant mean \mu = \beta, independent observations), what would the probability distribution look like? Since p(e_{i}e_{j}) = p(e_{i})p(e_{j}) for independent events e_{i}, e_{j}, the joint distribution over all n observations is given by

\prod_{i}^{n} \dfrac{e^{-\frac{1}{2}(y_{i}-\beta)^{2}}}{\sqrt{2\pi}}

Let’s use this expression to define a likelihood function for \beta:

L(\tilde{\beta}|y) = g(y) \prod_{i}^{n} f_{stn}(y|\tilde{\beta})

Now, the idea here is to estimate \beta, and we’re doing that by supposing that a lot of background information cannot be known, but can be taken as roughly constant with respect to the part of the world we are examining to estimate that parameter. Thus we’ll ignore g(y), which represents that unknown background that is constant across rival hypothetical values of \beta. Then we’ll define the likelihood of \beta given our data, y, with the expression \prod_{i}^{n} f_{stn}(y|\tilde{\beta}) and substitute in the full specification of the standardized normal distribution, with \mu_{i} = \beta,

L(\tilde{\beta}|y) = \prod_{i}^{n} \dfrac{e^{-\frac{1}{2}(y_{i}-\beta)^{2}}}{\sqrt{2\pi}}

Remember that we’re less interested here in the specific functional form of L(.) than in relative likelihoods, so any transformation that preserves the property we care about, the relative likelihoods of parameter estimates \tilde{\beta}, is fair game. Suppose, then, that we take the natural logarithm of L(\tilde{\beta}|y). Logarithms turn products into sums, ln(ab) = ln(a) + ln(b), so the unknown multiplicative constant g(y) becomes an unknown additive constant ln(g(y)), which (abusing notation slightly) we will keep writing as g(y). The log of our likelihood function is then

ln L(\tilde{\beta}|y) = g(y) + \sum_{i}^{n} ln(\dfrac{e^{-\frac{1}{2}(y_{i}-\tilde{\beta})^{2}}}{\sqrt{2\pi}})

 

= g(y) + \sum_{i}^{n} ln(\dfrac{1}{\sqrt{2\pi}}) - \dfrac{1}{2}\sum_{i}^{n}(y_{i}-\tilde{\beta})^{2}

 

= g(y) - \dfrac{n}{2}ln(2\pi) - \dfrac{1}{2}\sum_{i}^{n}(y_{i}-\tilde{\beta})^{2}

Notice that g(y) - \frac{n}{2}ln(2\pi) doesn’t include \tilde{\beta}. Think of this whole expression, then, as a constant term that may shift the position of the likelihood function, but that doesn’t affect its shape, which is what we really care about. The shape of the log-likelihood function is given by

ln L(\tilde{\beta}|y) = -\dfrac{1}{2} \sum_{i}^{n} (y_{i} - \tilde{\beta})^{2}

Now, there are still several steps left to get to the classical regression model (most obviously, weakening the assumption of constant mean and instead setting \mu_{i}=x_{i}\beta), but this probably suffices to make the general point: using analytic or numeric techniques (or both), we can estimate parameters of interest in our statistical model by maximizing the likelihood function (thus MLE: maximum likelihood estimation), and that function itself can be defined in ways that reflect the distributional properties of our variables.
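Here is a minimal numeric illustration of that last point in R (my own sketch, not code from UPM): maximizing the log-likelihood we just derived recovers the sample mean, which is also the least squares answer for an intercept-only model.

```r
# Numeric MLE for the constant-mean model derived above. The maximizer
# should coincide with the sample mean, which is also the LS estimate.
set.seed(7)
y <- rnorm(50, mean = 6.5, sd = 1)    # simulated "thermometer" reports

negloglik <- function(beta, y) 0.5 * sum((y - beta)^2)   # constants dropped

optim(par = 0, fn = negloglik, y = y, method = "BFGS")$par   # numeric MLE
mean(y)             # the analytic maximizer: the sample mean
coef(lm(y ~ 1))     # least squares on an intercept-only model agrees
```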

This is the sense in which likelihood is a theory of inference: it lets us infer not only the most plausible values of parameters in our model given evidence about the world, but also measures of uncertainty associated with those estimates.

While vitally important, however, this is not really the point of my post.

Look at the tail end of the right-hand side of this last equation. The expression there ought to be familiar: up to that factor of -1/2, it is the sum of squared residuals from the classical regression model!

So, rather than simply appealing to the pleasing visual intuitions of line-fitting, or alternatively appealing to the Gauss-Markov theorem as the justification for least squares (LS) by virtue of its yielding the best linear unbiased estimator of the parameters \beta (but why insist on linearity? or unbiasedness, for that matter?), the likelihood approach provides a deeper justification, showing the conditions under which LS is the maximum likelihood estimator of our model parameters.

This strikes me as a quite beautiful point, and it frames King’s entire pedagogical enterprise in UPM.

Again, there’s more to the demonstration in UPM, but in our seminar at Laurier this sufficed (I hope). The point was not to convince my (math-cautious-to-outright-phobic) students that they need to derive their own estimators if they want to do this stuff. What I hope they took away is a sense of how the tools we use in the social sciences have deep, even elegant, justifications beyond pretty pictures and venerable theorems.

Furthermore, and perhaps most importantly, understanding at least the broad brush-strokes of those justifications helps us understand the assumptions we have to satisfy if we want those tools to do what we ask of them.

A Political Theorist Teaching Statistics: Measurement

Another post about my experiences this term as a political theorist teaching methods.

That gloss invites a question, I suppose. I guess I’m a political theorist, whatever that means. A lot of my work has been on problems of justice and legitimacy, often with an eye to how those concerns play out in and around cities, but also at grander spatial orders.

Still, I’ve always been fascinated with mathematics (even if I’m not especially good at it) and so I’ve kept my nose pressed against the glass whenever I can, watching developments in mathematical approaches to the social and behavioural sciences, especially the relationships between formal models and empirical tests.

I was lucky enough in graduate school to spend a month hanging out with some very cool people working on agent-based modeling (although I’ve never really done much of that myself). This year, I was given a chance to put these interests into practice and teach our MA seminar in applied statistical methods.

I began the seminar with a simple exercise from my distant past. My first undergraduate physics lab at the University of Toronto had asked us to measure the diameter of a steel ring. That was it: measure a ring. There wasn’t much by way of explanation in the lab manual, and I was far from a model student. I think I went to the pub instead.

I didn’t stay in physics, and eventually I wound up studying philosophy and politics. It was only a few years ago that I finally saw the simple beauty of that lab assignment as a lesson in measurement. In that spirit, I gave my students a length of string, a measuring tape, and three steel hoops. Their task: detail three methods for finding the diameter of each hoop, and demonstrate that the methods converge on the same answer for each hoop.

[Photos: the steel hoops and the measuring exercise]

I had visions of elegant tables of measurements, and averages taken over them. Strictly speaking, that vision didn’t materialize, but I was impressed that everyone quickly understood the intuitions at play here, and they did arrive at the three approaches I had in mind:

  1. First, use the string and take the rough circumference several times, find the average, then divide that figure by \pi (a short code sketch of this method follows the list).
  2. Second, use a pivot point to suspend both the hoop and a weighted length of string, then mark the opposing points and measure.
  3. Third, simply take a bunch of measurements around what is roughly the diameter.
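Here is a minimal sketch of the first method in R, with hypothetical numbers standing in for my students’ actual measurements:

```r
# Method 1: average repeated circumference measurements, divide by pi.
# These measurements are hypothetical, for illustration only.
circumference_cm <- c(47.8, 48.1, 47.6, 48.0, 47.9)

mean(circumference_cm) / pi   # point estimate of the hoop's diameter
sd(circumference_cm) / pi     # spread, carried through the conversion
```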

The lesson that took a while to impart here was that I didn’t really care about the exact diameters, and was far more concerned that they attend to the details of the methods used for measurement, and that they explicitly report these details.

In the laboratory sciences, measurement protocol is vitally important. We perhaps don’t emphasize the simple point enough in the social sciences, but we should: it matters how you measure things, and what you use to make the measurements!

Everyone’s at fault in Mideast peace talks

Published Apr. 10, 2014, in the Waterloo Region Record.

Rumours emanating from the Middle East peace talks suggest things are not going well.

This is hardly a surprise for anyone who has followed the twists and turns of past negotiations between Israelis and Palestinians. It will inevitably lead to a bout of finger-pointing as to who is at fault, where sympathizers of both sides will quickly blame each other. However, the truth is everyone is at fault, because whatever narratives are spun, neither side is prepared to make the difficult concessions for a real peace treaty to emerge.

Critics of Israel can and will blame the expansion of settlements in the West Bank as the core reason for the impasse, and it is a problem. But Palestinian representatives never acknowledged the legitimacy of an Israeli state even before 1967, when the entire area in question was in Arab hands.


Dr. Manuel Riemer Interviewed by the Centre of Environmental Health Equity

Interview by Julie Rempel from the Centre of Environmental Health Equity.

Community Psychology and Environmental Justice

From your own perspective, what is the specialty of your research?

As a community psychologist, my research focuses on the intricate interactions between community, the environment and justice. These issues cannot be examined independently, as they are intimately connected, and psychology, especially community psychology, can help us to understand these connections.

Traditionally, community psychology includes topics such as oppression, promoting diversity, citizen participation, and striving for social justice. But the current environmental crisis illuminates with unforgiving clarity how closely linked these issues are to the environment and further emphasizes a sense of urgency for the need to act.

Social justice is one of the core values of community psychology, and in order to adequately address inequality, it is essential that environmental issues be an integral part of the discussion. While these environmental threats affect all of us, their impact on our health and well-being is not evenly distributed. For example, as my work and that of others has identified, homeless individuals and those living below the poverty line in big cities are most vulnerable to extreme weather in North-Western countries like Canada. The same pattern is apparent beyond our country’s borders: within developing countries, the poor have the least means to fight vector-borne diseases, and natural disasters like floods and hurricanes are much more likely to devastate developing countries, which have fewer means to protect themselves.


A Political Theorist Teaching Statistics: Stata? R?

What is a political theorist doing teaching a seminar in social science statistics? A reasonable question to ask my colleagues, but they gave me the wheel, so I drove off!

Later I’ll post some reflections on my experiences this term. For now, I want to weigh in briefly with some very preliminary thoughts on software and programming for statistics instruction at the graduate level, but in an MA programme that doesn’t expect a lot by way of mathematical background from our students.


In stats-heavy graduate departments R seems to be all the rage. In undergraduate methods sequences elsewhere (including here at Laurier) SPSS is still hanging on. I opted for Stata this term, mostly out of familiarity and lingering brand loyalty. If they ever let me at this seminar again, I may well go the R route.

This semester has reassured me that Stata remains a very solid statistical analysis package: it isn’t outrageously expensive, it has good quality control, and its makers encourage a stable and diverse community of users, all of which is vital to keeping a piece of software alive. Furthermore, the programmers have managed to balance ease of use (for casual and beginning users) with flexibility and power (for more experienced users with more complicated tasks).

All that said, I was deeply disappointed with the “student” version of Stata, which really is far more limited than I’d hoped. Not that they trick you: you can read right up front what those limits are. But reading about them online is a whole lot different from running up against them full steam in the middle of a class demonstration, when you’re chugging along fine until you realize your students cannot even load the data set (the one you thought you’d pared down sufficiently to fit in that modest version of Stata!).

R, in contrast, is not a software package, but a programming environment. At the heart of that environment is an interpreted language (which means you can enter instructions off a command line and get a result, rather than compiling a program and then running the resulting binary file).

R was meant to be a dialect of the programming language S and an open source alternative to S+, a commercial implementation of S. R is not built in quite the same way as S+, however. R’s designers started with a language called Scheme, which is a dialect of the venerable (and beautiful) language LISP.

My sense is that more than a few people truly despise programming in R. They insist that the language is hopelessly clumsy and desperately flawed, but they often keep working in the R environment because enough of their colleagues (or clients, or coworkers) use it. Often these critics will grudgingly concede that, in addition to the demands of their profession or client base, R is still worth the trouble, in spite of the language.

These critics certainly make a good case. That said, I suspect these people cut their programming teeth on languages like C++ and that, ultimately, while their complaints are presented as practical failings of R, they reflect deeper philosophical and aesthetic differences. (… but LISP is elegant!)

I remain largely agnostic on these aesthetic questions. A language simply is what it is, and if it — and as importantly, the community of users — doesn’t let you do what you want, the way you want, then you find another language.

If you’ve ever programmed before, then R doesn’t seem so daunting, and increasingly there are good graphical user interfaces to make the process of working with R more intuitive for non-programmers. Still, fundamentally the philosophy of R is “build it yourself” … or, more often, “hack together a script to do something based on code someone else has built themselves.”

This latter tendency is true of Stata also, of course, but when you use someone else’s package in Stata, you can be reasonably confident that it’s been checked and re-checked before being released as part of the official Stata environment. That is less often the case with R (although things are steadily improving).

Indeed, there have been, not too long ago, some significant quality-control issues with R packages, and that leaves a lingering worry in the back of your mind as to whether the code you’ve invoked with a command (“lm”, say, for “linear model”) is actually doing what it claims to do.

Advocates of R rejoin that this is not a bug, but a feature: that lingering worry ought to inspire you to learn enough to check the code yourself!

They have a point.