
Epistemics (Part 1: Introduction)

It is the sense of power attached to a sense of knowledge that makes men desirous of believing, and afraid of doubting. This sense of power is the highest and best of pleasures when the belief on which it is founded is a true belief, and has been fairly earned by investigation. … But if the belief has been accepted on insufficient evidence, the pleasure is a stolen one. Not only does it deceive ourselves by giving us a sense of power which we do not really possess, but it is sinful, because it is stolen in defiance of our duty to mankind. That duty is to guard ourselves from such beliefs as from pestilence, which may shortly master our own body and then spread to the rest of the town.

William Clifford, “The ethics of belief”

1. About this series

Effective altruists use the term ‘epistemics’ to describe practices that shape knowledge, belief and opinion within a community. The term is understood broadly enough to encompass diverse categories such as forms of publication; discourse norms; institutional structures; attitudes towards authority; and means of reporting uncertainty.

Effective altruists care a lot about epistemics. They want to have good epistemics and learn to avoid bad epistemics. I think that in some ways, effective altruists already have good epistemics. In other ways, community epistemics could do with improvement.

This series will focus on areas in which community epistemics could be productively improved. Today, I want to introduce the series by giving three examples of the topics that I will be concerned about. Let me emphasize that these are only sketches of topics to be covered in the future: they are intended as previews, not as complete outlines of claims about community epistemics, and certainly not as full defenses of those claims.

2. Money talks

Effective altruists have at their fingertips billions (previously tens of billions) of dollars of pledged funding. They use this funding to influence the direction of research and debate. Effective altruists have established research institutes at major universities including Oxford and Cambridge, and funded lavish prizes, including $100k for papers presented at a single conference symposium and $1.5m for short arguments about the risks posed by artificial intelligence.

Money talks, and this kind of money introduces at least three types of distortions in the direction of research and discussion. First, there is an inflation of authoritativeness and seriousness in which money is used to buy markers of prestige for authors and topics that would otherwise be taken less seriously within debates. Research institutes at leading universities and purchased symposia at major machine learning conferences lend an aura of respectability to a movement often driven more strongly by blogs and forum posts than by scholarly research.

Second, philanthropic spending leads to topical biases in research: topics such as existential risk and AI safety are rocketed to the top of research agendas, not because scholars judge them to be the most serious or pursuit-worthy topics, but simply because research on these topics is more likely to be funded. Scholars willing to do work on these topics find themselves with prime, research-only positions at institutes flush with cash, and these conditions are attractive to many researchers who would otherwise prefer to study something else.

Finally, philanthropic funding leads to opinion biases in which the opinions expressed in published research are skewed towards those popular with funders. Opinion biases can be produced intentionally, as scholars seek to ingratiate themselves with funders. But opinion biases can also be produced unintentionally, as funders seek out and reward work that resonates with them.

I have said before that one thing I admire about effective altruists is their willingness to support scholarly research. Perhaps epistemic distortions are the price to pay for scholars’ full bellies. But even so, we should be honest about what foundations purchase in return for research funding, and how these purchases can shape research in directions unrelated, or even contrary, to truth or scholarly interest.

3. Publication practices

Publication practices matter. The way in which material is published affects its likely audience; how the material will be vetted and evaluated; and what it is likely to discuss. For example, this blog differs from my scholarly work in targeting a wider and more generalist audience; being largely unvetted; and allowing me to discuss a wide range of material that would not be of interest to leading academic journals.

Effective altruists often disseminate their work using a variety of nonscholarly media such as blog posts, forum discussions, podcasts, and reports commissioned and published by philanthropic foundations. While these media have advantages, such as accessibility, they also present unique epistemic challenges.

One question concerns the nature of peer review. Scholarly work is anonymously vetted by independent experts at the cutting edge of their fields, who are given the power to accept, reject, or demand changes to the research they evaluate. By contrast, work published by effective altruists is often unvetted, and even when reviews are commissioned, reviewers often lack terminal degrees, have limited publication records, are ideologically aligned with effective altruists, and are given no power to reject or alter the contents of research.

Another question concerns the evidentiary practices used by different publication media. For example, nonscholarly publications often make substantially fewer citations to other work and often cite their claims to other nonscholarly publications rather than to scholarly articles and books. It may be worth asking how these practices can serve to strengthen or weaken the collective evidential basis for claims made by effective altruists, as well as whether they might serve to promote a coalescing of opinion or a false sense of consensus driven by an overlapping network of citations to similarly-motivated authors.

A final question concerns the authorship of publications. Many materials circulated by effective altruists, including a good number of canonical texts, are written by authors lacking terminal degrees in their fields. These authors are frequently young and inexperienced, and they often lack substantial records of scholarly publication. Outside observers might be forgiven for asking whether these authors are qualified to make confident claims about subject matters which often overlap with traditional research programs, and whether it might be appropriate to show more deference to traditional scholarly work than to publications authored by untrained authors in less reliable formats.

4. Deference and authority

Effective altruism is, in many ways, a remarkably democratic and egalitarian movement. Anyone can post on the EA Forum and have their ideas engaged with by the community. But although anyone can speak, not everyone’s voice is treated equally. There are a number of celebrity figures within the movement whose opinions are much more likely to be read and believed than the opinions of average members.

Many of the questions to be asked about deference and authority have nothing to do with the specific figures to whom authority is granted. For example, too much trust in authority may allow false or undersupported claims to go unchallenged; may promote premature convergence of opinion; and may worsen citation practices as the opinions of authorities begin to be treated as legitimate citations in their own right.

However, at times we should also ask whether the specific individuals granted authority are deserving of our trust. For example, Eliezer Yudkowsky has a track record of bombastic and inaccurate predictions and has been known to contradict scholarly consensus on the basis of what turned out to be elementary confusions. Now, Yudkowsky is taken quite seriously when he urges that humanity has lost the battle for survival against artificial intelligence and should shift the focus of our efforts towards dying with dignity. Does such an individual, or such a claim, deserve the level of deference and respect shown within the community?

Sometimes, questions about the role of EA celebrities go beyond trust and become questions of amplification. Should the voices of leading figures be amplified, even when those doing the amplifying suspect that those voices are wrong? What kind of epistemic price is paid when the voices amplified hold views unsupported by the best available evidence?

5. Looking forward

In addition to the issues discussed above, this series will discuss many other questions about epistemics. For example:

  • In what ways might the familiar biases of self-selecting, homogeneous, internet-driven groups lead to failures of group deliberation?
  • Might the community’s focus on probabilization and statements of credence at times serve to overstate authors’ certainty, or to foster unwarranted convergence of opinion within the community?
  • What role do canonical texts play within the community? Do these texts deserve to play the role that they have been given?
  • Might views imported from Silicon Valley, such as a preference for longshots, quantification, and technology-driven solutions, be given undue weight in new areas to which they are less well-suited?

Please do let me know if there are other questions about epistemics that you would like to hear discussed.



Comments

6 responses to “Epistemics (Part 1: Introduction)”

  1. Jason

    Thanks for starting this series — I think it will have a lot of value. As a reaction to the opening post, I thought I’d share a few thoughts that might be relevant to some future installments. The first two are specifically related to AI but may apply in certain other areas as well.

    * For AI issues in particular: Money indeed does talk, and there’s a lot of money in AI (e.g., Microsoft just committed $10B, and I expect the overall spend in AI capabilities research to continue accelerating). The salaries can be (in the NYT’s words, even back in 2018) “eye-popping.” As can be seen by comparing what typical faculty members in medicine and law can command versus those in sociology or English literature, private-sector market demand has a significant effect on what non-profits need to pay for talent. I think that’s relevant to assessing the total spend in the AI area. Is EA money distorting the field of AI research or correcting for the distortions caused by the big AI firms’ profit motives? (Also, it’s unlikely that that $1.5MM contest would have paid out anywhere near $1.5MM.)

    * The existence of all that private-sector money (and potential massive profits from AI) has and will continue to have a massive influence on what gets researched, talked about, seen as prestigious, etc. So one could look at EA activity in this area through the lens of a balancing effect — that raising the prominence of AI-skeptical viewpoints is necessary to counteract all the influence from corporations with a vested interest in AI and restore some sort of neutral balance to the study of AI.

    * I think it would be helpful to acknowledge the epistemic challenges of the academy as well, at least in passing. Although I think it is a helpful comparison to show some weaknesses of EA epistemic practices, I am less convinced that it is a gold standard. In my own field (law), I could tell you which topics and viewpoints are more likely to get you ahead in legal academia by playing to the views of people who are already tenured law professors. (Note that most US law professors do not have advanced law degrees, so this is not a function of the professoriate being better educated than other highly-qualified lawyers.) The academic system grants pretty strong deference to the tenured faculty member regarding what is worthy of study. That has its advantages, but there’s no way to enforce that the work be socially useful — and people may choose research topics and viewpoints for any number of potentially idiosyncratic reasons. And to the extent that there is a source of significant influence over tenured faculty research agendas, a lot of it comes from what external funders are willing to fund (which raises some of the same concerns as your article does). To be sure, academic epistemics may be on more solid ground overall because power and influence are more decentralized, but they have some downsides as well.

    1. David Thorstad

      Thanks Jason! These are important points to think about.

      **Does money talk in AI?**

      I would tend to agree with you that money talks in AI research. To take an example close to home, a very large number of jobs on the academic job market in philosophy suddenly call for specializations in the philosophy of AI, and indeed my own job was secured largely on the basis of my ability to relate my research to normative issues raised by AI. Philosophers are at once genuinely thankful for the funding and concerned about the effects it may have on the direction of research within the field if we are not careful to retain our emphasis on core areas, questions and insights in philosophy. We think we can take the money while retaining the field’s emphasis on traditional questions, methods, standards and the like, but we think that some care is needed to make sure that really happens.

      I guess you know that it is controversial whether most EA writings about the risks posed by artificial intelligence should be considered as contributions to AI research. Many AI researchers would not see them in this way.

      It might also be controversial whether and to what extent AI companies have a strong ideological agenda to push when funding research. They may largely be interested simply in advancing the state of technological capabilities. I suspect EAs might say that companies are committed to advancing an “AI is safe” agenda, analogous to the “tobacco is safe” agenda of tobacco companies a half-century ago. I suspect many observers would not accept this characterization, both because they do not see tech companies (in contrast to tobacco companies) investing much money in trying to convince us that their products won’t kill us, and also because they might see the safety of AI as a claim on rather firmer epistemic ground.

      **On EA money as a balancing act against private money**

      It is definitely possible for private philanthropy to balance the playing field within a research area. I would tend to resist the characterization of EA money as balancing the research playing field within AI. Largely, I say this for the reasons sketched above: I’m not sure that I would see most EA writings about AI as contributions to the field of AI research. It might be helpful to stress two additional points.

      One point is that EA money, like AI research money, often comes from Silicon Valley. To this extent, it might be fair to view EA funding as a continuation of, rather than a counterbalance to, the research influence of Silicon Valley.

      Another point to stress is that effective altruism is far from exhausted by what the movement says about AI. When we move into other fields, the argument that there is already a massive amount of opposed money to counterbalance EA arguments tends to weaken.

      **On epistemic challenges of the academy**

      It is definitely true that academia faces epistemic challenges, and some fields face more challenges than others. The purpose of this series is to point out some epistemic challenges raised by effective altruism in the interest of solving them when they are fixable, and when they are not fixable, helping to make sure people are aware of and appropriately compensate for potential biases.

  2. S

    You’ve hit upon my bugbear. I keep a distance from EA circles (online and offline) to, ironically, protect my own epistemics. When I first encountered them, I was pleasantly surprised by their degree of thoughtfulness and reflection. But it dawned over time that this is also one of the most insular groups I’ve ever encountered. It’s not just publication practices — for a significant chunk, nearly their entire social life revolves around EA.

    A very eye-opening moment was when a skeptical acquaintance remarked that the main correlate for becoming an AI doomer was hanging out with EAs/rationalists rather than anything to do with knowledge of the subject matter itself. I myself have a relevant PhD and research experience and feel out of place for not being a “believer”.

    Even with active awareness, we can’t protect ourselves from our primate instincts to conform. It’s strange that a community that prides itself on navigating cognitive biases can’t see this risk. Now equipped with a full-blown eschatology, we’re set for religion-level degrees of insularity. I still appreciate (aspects of) EA, but the only solution I found was to keep interaction to an acceptable minimum.

    1. David Thorstad

      Thanks S.

      I hope more people see this comment!

  3. River

    You use the word “scholarly” a lot, so my first question is what exactly do you mean by that? I would hope you mean discovery of truth, otherwise why should I care about “scholarly” things? But you also list it side by side with truth, which suggests you mean something else.

    The impression I get is that you are just going to evaluate EA’s epistemics by how closely it matches the academy, and for that to be useful, you need to make a case that what the academy does is right, in the sense of leading to truth. You can’t take that as a given. To illustrate this point, compare physicists to theologians. (If you are religious, imagine I am talking about the theologians of someone else’s religion.) One succeeds at discovering truth, the other does not, yet both follow the forms of the academy. How do we know? Because physicists build cool things like spaceships and particle colliders, and theologians don’t. Ultimately our knowledge about what epistemic norms work and what epistemic norms don’t has to be grounded in empirical observations, just like anything else. For your argument to work, you need to ground your claims about epistemic norms in empirical observations, not just the observation that the academy uses them.

    Also, you do realize that in practice, a lot of “peer” review is actually delegated to random grad students, who lack terminal degrees, who do not have long publication records, and who sometimes leave the academy to become the EAs you are discussing, right?

    1. David Thorstad

      Hi River,

      Scholarship certainly involves the pursuit of truth, but it also involves a good deal more than that. Some things generally associated with sound scholarship include a high degree of rigor, carefulness, and thoughtfulness in reasoning; a strong knowledge of background literatures; disciplinary grounding in cutting-edge knowledge and methods from the relevant scholarly discipline; advanced training involving a course of graduate education at the hands of leading scholars; structured assessment of a specific type (typically peer review); detailed citation to relevant literature; appropriate deference to scholarly consensus; and the like.

      Early effective altruists recognized the importance of relying on cutting-edge scholarship to guide altruistic decisionmaking. In areas such as global health and development, only the best randomized controlled trials would do for early effective altruists. The recent turn to longtermism has brought something of a liberalization of research standards and in some corners even a tendency to attack canons of scholarly knowledge, along with associated methods of research and assessment. This is not a good look for those longtermists. It is no way to reliably reach the truth, and no way to guide altruistic decisionmaking.

      I am intimately involved in the practice of peer review and I am aware of the broad shape that this practice takes at leading journals. While some leading journals do commission graduate students as reviewers, no leading journal commissions reviewers randomly. When graduate students are chosen as reviewers, they are chosen for their specific expertise combined with either a recognition of their scholarly achievements or else the recommendation of a trusted scholar.

      To be honest, I often think that late-stage graduate students make very good reviewers. When they are appropriately knowledgeable and skilled, they tend to put a good deal of effort into the review process and give detailed comments on the work that they review. Graduate students can sometimes be a bit overzealous in posing challenges to the work that they review, but in general, I would be very happy to have my work reviewed by an appropriately selected late-stage graduate student with the relevant skills and knowledge.

