Downvotes Are Evidence

I also posted this on the Effective Altruism forum.


Downvotes are evidence. They provide information. They can be interpreted, especially when they aren’t accompanied by arguments or reasons.

Downvotes can mean I struck a nerve. They can provide evidence of what a community is especially irrational about.

They could also mean I’m wrong. But with no arguments and no links or cites to arguments, there’s no way for me to change my mind. If I were posting some idea I thought of recently, I could take the downvotes as a sign that I should think it over more. However, if it’s something I’ve done high-effort thinking about for years, and written tens of thousands of words about, then “reconsider” is not a useful action with no further information. I already considered it as best I know how to.

People can react in different ways to downvotes. If your initial reaction is to stop writing about whatever gets downvotes, that is evidence that you care a lot about social climbing and what other people think of you (possibly more than you value truth seeking). On the other hand, one can think “strong reactions can indicate something important” and write more about whatever got downvoted. Downvotes can be a sign that a topic is important to discuss further.

Downvotes can also be evidence that something is an outlier, which can be a good thing.

Downvoting Misquoting Criticism

One of the things that seems to have struck a nerve with some people, and has gotten me the most downvotes, is criticizing misquoting (examples one and two both got to around -10). I believe the broader issue is my belief that “small” or “pedantic” errors are (sometimes) important, and that raising intellectual standards would make a large overall difference to EA’s correctness and therefore effectiveness.

I’ll clarify this belief more in future posts despite the cold reception and my expectation of getting negative rewards for my efforts. I think it’s important. It’s also clarified a lot in prior writing on my websites.

There are practical issues regarding how to deal with “small” errors in a time-efficient way. I have some answers to those issues but I don’t think they’re the main problem. In other words, I don’t think many people want to be able to pay attention to small errors, but are limited by time constraints and don’t know practical time-saving solutions. I don’t think it’s a goal they have that is blocked by practicality. I think people like something about being able to ignore “small” or “pedantic” errors, and practicality then serves as a convenient excuse to help hide the actual motivation.

Why do I think there’s any kind of hidden motivation? It’s not just the disinterest in practical solutions to enable raising intellectual standards (which I’ve seen year after year in other communities as well, btw). Nor is it just the downvotes that are broadly not accompanied by explanations or arguments. It’s primarily the chronic ambiguity about whether people already agree with me and think misquotes are obviously bad, on the one hand, or disagree with me and think I’m horribly wrong, on the other. Getting a mix of responses including both ~“obviously you’re right and you got a negative reaction because everyone already knows it and doesn’t need to hear it again” and ~“you’re wrong and horrible” is weird and unusual.

People generally seem unwilling to actually clearly state what their misquoting policies/attitudes are, but nevertheless say plenty of things that indicate clear disagreements with me (when they speak about it at all, which they often don’t but sometimes do). And this allows a bunch of other people to think there already are strong anti-misquoting norms, including people who do not actually personally have such a norm. In my experience, this is widespread and EA seems basically the same as most other places about it.

I’m not including examples of misquotes, or ambiguous defenses of misquotes, because I don’t want to make examples of people. If someone wants to claim they’re right and make public statements they stand behind, fine, I can use them as an example. But if someone merely posts on the forum a bit, I don’t think I should interpret that as opting in to being some kind of public intellectual who takes responsibility for what he says, claims what he says is important, and is happy to be quoted and criticized. (People often don’t want to directly admit that they don’t think what they post is important, while also not wanting to claim it’s important. That’s another example of chronic ambiguity that I think is related to irrationality.) If someone says to me “This would convince me if only you had a few examples” I’ll consider how to deal with that, but I don’t expect that reaction (and if you care that much you can find two good examples by reviewing my EA posting history, and many many examples of representative non-EA misquotes on my websites and forum).

Upvoting Downvoted Posts

There’s a pattern on Reddit, which I’ve also observed on EA, where people upvote stuff that’s at negative points which they don’t think deserves to be negative. They wouldn’t upvote it if it had positive votes. You can tell because the upvoting stops when it gets back to neutral karma (actually slightly less on EA due to strong votes – people tend to stop at 1, not at the e.g. 4 karma an EA post might start with).

In a lot of ways I think this is a good norm. Some people are quite discouraged by downvotes and feel bad about being disliked. The lack of reasons to accompany downvotes makes that worse for some types of people (though others would only feel worse if they were told reasons). And some downvotes are unwarranted and unreasonable so counteracting those is a reasonable activity.

However, there’s a downside to upvoting stuff that’s undeservedly downvoted. It hides evidence. It makes it harder for people to know what kinds of things get how many downvotes. Downvotes can actually be important evidence about the community. Reddit is larger, and many subreddits have an issue where new posts tend to get a few downvotes that don’t reflect the community and might even come from bots. I’m not aware of EA having this problem. It’s stuff that is downvoted more than normal which provides useful evidence. On EA, a lot of posts get no votes, or just a few upvotes. I believe getting to -10 quickly isn’t normal and is useful evidence of something, rather than something that should just be ignored as meaningless. Also, it only happens to a minority of my posts; the majority get upvotes, not downvotes.


Elliot Temple | Permalink | Messages (0)

Misquoting and Scholarship Norms at EA

Link to the EA version of this post.


EA doesn’t have strong norms against misquoting or some other types of errors related to having high intellectual standards (which I claim are important to truth seeking). As I explained, misquoting is especially bad: “Misquoting puts words in someone else’s mouth without their consent. It takes away their choice of what words to say or not say, just like deadnaming takes away their choice of what name to use.”

Despite linking to lizka clarifying the lack of anti-misquoting norms, I got this feedback on my anti-misquoting article:

One of your post spent 22 minutes to say that people shouldn't misquote. It's a rather obvious conclusion that can be exposed in 3 minutes top. I think some people read that as a rant.

So let me try to explain that EA really doesn’t have strong anti-misquoting norms or strong norms for high intellectual standards and scholarship quality. What would such norms look like?

Suppose I posted a single misquote in Rationality: From AI to Zombies. Suppose it was one word added or omitted, and it didn’t change the meaning much. Would people care? I doubt it. How many people would want to check other quotes in the book for errors? Few, maybe zero. How many would want to post mortem the cause of the error? Few, maybe zero. So there is no strong norm against misquotes. Am I wrong? Does anyone really think that finding a single misquote in a book this community likes would result in people making large updates to their views (even if the misquote is merely inaccurate, but doesn’t involve a large change in meaning)?

Similarly, I’m confident that there’s no strong norm against incorrect citations. E.g. suppose in RAZ I found one cite to a study with terrible methodology or glaring factual errors. Or suppose I found one cite to a study that says something different than what it’s cited for (e.g. it’s cited as saying 60% X but the study itself actually says 45% X). I don’t think anything significant would change based on pointing out that one cite error. RAZ’s reputation would not go down substantially. There’d be no major investigation into what process created this error and what other errors the same process would create. It probably wouldn’t even spark debates. It certainly wouldn’t result in a community letter to EY, signed by thousands of people with over a million total karma, asking for an explanation. The community simply tolerates such things. This is an example of intellectual standards I consider too low and believe are lowering EA’s effectiveness a large amount.

Even most of RAZ’s biggest fans don’t really expect the book to be correct. They only expect it to be mostly correct. If I find an error, and they agree it’s an error, they’ll still think it’s a great book. Their fandom is immune to correction via pointing out one error.

(Just deciding “RAZ sucks” due to one error would be wrong too. The right reaction is more complicated and nuanced. For some information on the topic, see my Resolving Conflicting Ideas, which links to other articles including We Can Always Act on Non-Criticized Ideas.)

What about two errors? I don’t think that would work either. What about three errors? Four? Five? Nah. What exactly would work?

What about 500 errors? If they’re all basically indisputable, then I’ll be called picky and pedantic, and people will doubt that other books would stand up to a similar level of scrutiny either, and people will say that the major conclusions are still valid.

If the 500 errors include more substantive claims that challenge the book’s themes and concepts, then they’ll be more debatable than factual errors, misquotes, wrong cites, simple, localized logic errors, grammar errors, etc. So that won’t work either. People will disagree with my criticism. And then they won’t debate their disagreement persistently and productively until we reach a conclusion. Some people won’t say anything at all. Others will comment 1-5 times expressing their disagreement. Maybe a handful of people will discuss more, and maybe even change their minds, but the community in general won’t change their minds just because a few people did.

There are errors that people will agree are in fact errors, but will dismiss as unimportant. And there are errors which people will deny are errors. So what would actually change many people’s minds?

Becoming a high status, influential thought leader might work. But social climbing is a very different process than truth seeking.

If people liked me (or whoever the critic was) and liked some alternative I was offering, they’d be more willing to change their minds. Anyone who wanted to say “Yeah, Critical Fallibilism is great. RAZ is outdated and flawed.” would be receptive to the errors I pointed out. People with the right biases or agendas would like the criticisms because the criticisms help them with their goals. Other people would interpret the criticism as fighting against their goals, not helping – e.g. AI alignment researchers basing a lot of their work on premises from RAZ would tend to be hostile to the criticism instead of grateful for the opportunity to stop using incorrect premises and thereby wasting their careers.

I’m confident that I could look through RAZ and find an error. If I thought it’d actually be useful, I’d do that. I did recently find two errors in a different book favored by the LW and EA communities (and I wasn’t actually looking for errors, so I expect there are many others – actually there were some other errors I noticed but those were more debatable). The first error I found was a misquote. I consider it basically inexcusable. It’s from a blog post, so it would be copy/pasted not typed in, so why would there be any changes? That’s a clear-cut error which is really hard to deny is an error. I found a second related error which is worse but requires more skill and judgment to evaluate. The book has a bunch of statements summarizing some events and issues. The misquote is about that stuff. And, setting aside the misquote, the summary is wrong too. It gives an inaccurate portrayal of what happened. It’s biased. The misquote error is minor in some sense: it’s not particularly misleading. The misleading, biased summary of events is actually significantly wrong and misleading.

I can imagine writing two different posts about it. One tries to point out how the summary is misleading in a point-by-point way breaking it down into small, simple points that are hard to deny. This post would use quotes from the book, quotes from the source material, and point out specific discrepancies. I think people would find this dry and pedantic, and not care much.

In my other hypothetical post, I would emphasize how wrong and misleading what the book says is. I’d focus more on the error being important. I’d make less clear-cut claims so I’d be met with more denials.

So I don’t see what would actually work well.

That’s why I haven’t posted about the book’s problems previously and haven’t named the guilty book here. RAZ is not the book I found these errors in. I used a different example on purpose (and, on the whole, I like RAZ, so it’s easier for me to avoid a conflict with people who like it). I don’t want to name the book without a good plan for how to make my complaints/criticisms productive, because attacking something that people like, without an achievable, productive purpose, will just pointlessly alienate people.



Organized EA Cause Evaluation

I wrote this for the Effective Altruism forum. Link.


Suppose I have a cause I’m passionate about. For example, we’ll use fluoridated water. It’s poison. It lowers IQs. Changing this one thing is easy (just stop purposefully doing it) and has negative cost (it costs money to fluoridate water; stopping saves money) and huge benefits. That gives it a better cost to benefit ratio than any of EA’s current causes. I come to EA and suggest that fluoridated water should be the highest priority.

Is there any *organized* process by which EA can evaluate these claims, compare them to other causes, and reach a rational conclusion about resource allocation to this cause? I fear there isn’t.

Do I just try to write some posts rallying people to the cause? And then maybe I’m right but bad at rallying people. Or maybe I’m wrong but good at rallying people. Or maybe I’m right and pretty good at rallying people, but someone else with a somewhat worse cause is somewhat better at rallying. I’m concerned that my ability to rally people to my cause is largely independent of the truth of my cause. Marketing isn’t truth seeking. Energy to keep writing more about the issue, when I already made points (that are compelling if true, and which no one has given a refutation of), is different than truth seeking.

Is there any reasonable on-boarding process to guide me to know how to get my cause taken seriously with specific, actionable steps? I don’t think so.

Is there any list of all evaluated causes, their importance, and the reasons? With ways to update the list based on new arguments or information, and ways to add new causes to the list? I don’t think so. How can I even know how important my cause is compared to others? There’s no reasonable, guided process that EA offers to let me figure that out.

Comparing causes often depends on some controversial ideas, so a good list would take that into account and give alternative cause evaluations based on different premises, or at least clearly specify the controversial premises it uses. Ways those premises can be productively debated are also important.

Note: I’m primarily interested in processes which are available to anyone (you don’t have to be famous or popular first, or have certain credentials given to you by a high status authority) and which can be done in one’s free time without having to get an EA-related job. (Let’s suppose I have 20 hours a week available to volunteer for working on this stuff, but I don’t want to change careers. I think that should be good enough.) Being popular, having credentials, or working at a specific job are all separate issues from being correct.

Also, based on a forum search, stopping water fluoridation has never been proposed as an EA cause, so hopefully it’s a fairly neutral example. But this appears to indicate a failure to do a broad, organized survey of possible causes before spending millions of dollars on some current causes, which seems bad. (It could also be related to the lack of any good way to search EA-related information that isn’t on the forum.)

Do others think these meta issues about EA’s organization (or lack thereof) are important? If not, why? Isn’t it risky and inefficient to lack well-designed processes for doing commonly-needed, important tasks? If you just have a bunch of people doing things their own way, and then a bunch of other people reaching their own evaluations of the subset of information they looked at, that is going to result in a social hierarchy determining outcomes.



Criticizing "Against the singularity hypothesis"

I proposed a game on the Effective Altruism forum where people submit texts to me and I find three errors. I wrote this for that game. This is a duplicate of my EA post. I criticize Against the singularity hypothesis by David Thorstad.

Introduction

FYI, I disagree with the singularity hypothesis, but primarily due to epistemology, which isn't even discussed in this article.

Error One

As low-hanging fruit is plucked, good ideas become harder to find (Bloom et al. 2020; Kortum 1997; Gordon 2016). Research productivity, understood as the amount of research input needed to produce a fixed output, falls with each subsequent discovery.

By way of illustration, the number of FDA-approved drugs per billion dollars of inflation-adjusted research expenditure decreased from over forty drugs per billion in the 1950s to less than one drug per billion in the 2000s (Scannell et al. 2012). And in the twenty years from 1971 to 1991, inflation-adjusted agricultural research expenditures in developed nations rose by over sixty percent, yet growth in crop yields per acre dropped by fifteen percent (Alston et al. 2000). The problem was not that researchers became lazy, poorly educated or overpaid. It was rather that good ideas became harder to find.

There are many other reasons for drug research progress to slow down. The healthcare industry, like science in general (see e.g. the replication crisis), is really broken, and some of the problems are newer. Also, maybe they’re putting a bunch of work into updates to existing drugs instead of new drugs.

Similarly, decreasing crop yield growth (in other words, yields are still increasing, but by lower percentages) could have many other causes. Also, decreasing yield growth is a different thing than a decrease in the number of new agricultural ideas that researchers come up with – it’s not even the right quantity to measure to make his point. It’s a proxy for the thing his argument actually relies on, and he makes no attempt to consider how good or bad a proxy it is; I can easily think of some reasons it wouldn’t be a very good one.

The comment about researchers not becoming lazy, poorly educated or overpaid is an unargued assertion.

So these are bad arguments which shouldn't convince us of the author's conclusion.

Error Two

Could the problem of improving artificial agents be an exception to the rule of diminishing research productivity? That is unlikely.

Asserting something is unlikely isn't an argument. His followup is to bring up Moore's law potentially ending, not to give an actual argument.

As with the drug and agricultural research, his points are bad because singularity claims are not based on extrapolating patterns from current data, but rather on conceptual reasoning. He didn't even claim his opponents were doing that in the section formulating their position, and my pre-existing understanding of their views is they use conceptual arguments not extrapolating from existing data/patterns (there is no existing data about AGI to extrapolate from, so they use speculative arguments, which is OK).

Error Three

one cause of diminishing research productivity is the difficulty of maintaining large knowledge stocks (Jones 2009), a problem at which artificial agents excel.

You can't just assume that AGIs will be anything like current software including "AI" software like AlphaGo. You have to consider what an AGI would be like before you can even know if it'd be especially good at this or not. If the goal with AGI is in some sense to make a machine with human-like thinking, then maybe it will end up with some of the weaknesses of humans too. You can't just assume it won't. You have to envision what an AGI would be like, or what many different things it might be like that would work (narrow it down to various categories and rule some things out) before you consider the traits it'd have.

Put another way, in MIRI's conception, wouldn't mind design space include both AGIs that are good or bad at this particular category of task?

Error Four

It is an unalterable mathematical fact that an algorithm can run no more quickly than its slowest component. If nine-tenths of the component processes can be sped up, but the remaining processes cannot, then the algorithm can only be made ten times faster. This creates the opportunity for bottlenecks unless every single process can be sped up at once.

This is wrong due to "at once" at the end. It'd be fine without that. You could speed up 9 out of 10 parts, then speed up the 10th part a minute later. You don't have to speed everything up at once. I know it's just two extra words but it doesn't make sense when you stop and think about it, so I think it's important. How did it seem to make sense to the author? What was he thinking? What process created this error? This is the kind of error that's good to post mortem. (It doesn't look like any sort of typo; I think it's actually based on some sort of thought process about the topic.)
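The math behind the paper’s ten-times figure is just Amdahl’s law, and a small sketch (the function name is mine, not from the paper) shows both the bound and why “at once” is wrong: the formula only depends on which fractions get sped up and by how much, not on when each speedup happens, so sequential speedups reach the same limit as simultaneous ones.

```python
def amdahl_speedup(improved_fraction, factor):
    """Overall speedup when `improved_fraction` of the runtime is sped up
    by `factor` and the remaining fraction is left unchanged (Amdahl's law)."""
    return 1.0 / ((1.0 - improved_fraction) + improved_fraction / factor)

# Speeding up 90% of the work by 10x gives roughly a 5.3x overall speedup.
print(amdahl_speedup(0.9, 10))

# Even an infinite speedup on that 90% is capped at 10x overall,
# whether the component speedups are applied together or one at a time.
print(amdahl_speedup(0.9, float("inf")))
```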

Error Five

Section 3.2 doesn't even try to consider any specific type of research an AGI would be doing and claim that good ideas would get harder to find for that and thereby slow down singularity-relevant progress.

Similarly, section 3.3 doesn't try to propose a specific bottleneck and explain how it'd get in the way of the singularity. He does bring up one specific type of algorithm – search – but doesn't say why search speed would be a constraint on reaching the singularity. Whether exponential search speed progress is needed depends on specific models of how the hardware and/or software are improving and what they're doing.

There's also a general lack of acknowledgement of, or engagement with, counter-arguments that I can easily imagine pro-singularity people making (e.g. responding to the good ideas getting harder to find point by saying some stuff about mind design space containing plenty of minds that are powerful enough for a singularity with a discontinuity, even if progress slows down later as it approaches some fundamental limits). Similarly, maybe there is something super powerful in mind design space that doesn't rely on super fast search. Whether there is, or not, seems hard to analyze, but this paper doesn't even try. (The way I'd approach it myself is indirectly via epistemology first.)

Error Six

Section 2 mixes Formulating the singularity hypothesis (the section title) with other activities. This is confusing and biasing, because we don't get to read about what the singularity hypothesis is without the author's objections and dislikes mixed in. The section is also vague on some key points (mentioned in my screen recording) such as what an order of magnitude of intelligence is.

Examples:

Sustained exponential growth is a very strong growth assumption

Here he's mixing explaining the other side's view with setting it up to attack it (as requiring a super high evidential burden due to such strong claims). He's not talking from the other side's perspective, trying to present it how they would present it (positively); he's instead focusing on highlighting traits he dislikes.

A number of commentators have raised doubts about the cogency of the concept of general intelligence (Nunn 2012; Prinz 2012), or the likelihood of artificial systems acquiring meaningful levels of general intelligence (Dreyfus 2012; Lucas 1964; Plotnitsky 2012). I have some sympathy for these worries.[4]

This isn't formulating the singularity hypothesis. It's about ways of opposing it.

These are strong claims, and they should require a correspondingly strong argument to ground them. In Section 3, I give five reasons to be skeptical of the singularity hypothesis’ growth claims.

Again this doesn't fit the section it's in.

Padding

Section 3 opens with some restatements of material from section 2, some of which was also in the introduction. And look at this repetitiveness (my bolds):

Near the bottom of page 7 begins section 3.2:

3.2 Good ideas become harder to find

Below that we read:

As low-hanging fruit is plucked, good ideas become harder to find

Page 8 near the top:

It was rather that good ideas became harder to find.

Later in that paragraph:

As good ideas became harder to find

Also, page 11:

as time goes on ideas for further improvement will become harder to find.

Page 17

As time goes on ideas for further improvement will become harder to find.

Amount Read

I read to the end of section 3.3 then briefly skimmed the rest.

Screen Recording

I recorded my screen and made verbal comments while writing this:

https://www.youtube.com/watch?v=T1Wu-086frA



Critiquing an Axiology Article about Repugnant Conclusions

I proposed a game on the Effective Altruism forum where people submit texts to me and I find three errors. I wrote this for that game. This is a duplicate of my EA post. I criticize Minimalist extended very repugnant conclusions are the least repugnant by Teo Ajantaival.

Error One

Archimedean views (“Quantity can always substitute for quality”)

Let us look at comparable XVRCs for Archimedean views. (Archimedean views roughly say that “quantity can always substitute for quality”, such that, for example, a sufficient number of minor pains can always be added up to be worse than a single instance of extreme pain.)

It's ambiguous whether by "quality" you mean different quantity sizes, as in your example (substitution between small pains and a big pain), or qualitatively different things (e.g. substitution between pain and the thrill of skydiving).

Is the claim that 3 1lb steaks can always substitute for 1 3lb steak, or that 3 1lb pork chops can always substitute for 1 ~3lb steak? (Maybe more or less if pork is valued less or more than steak.)

The point appears to be about whether multiple things can be added together for a total value or not – can a ton of small wins ever make up for a big win? In that case, don't use the word "quality" to refer to a big win, because it invokes concepts like a qualitative difference rather than a quantitative difference.

I thought it was probably about whether a group of small things could substitute for a bigger thing but then later I read:

Lexical views deny that “quantity can always substitute for quality”; instead, they assign categorical priority to some qualities relative to others.

This seems to be about qualitative differences: some types/kinds/categories have priority over others. Pork is not the same thing as steak. Maybe steak has priority and having no steak can't be made up for with a million pork chops. This is a different issue. Whether qualitative differences exist and matter and are strict is one issue, and whether many small quantities can add together to equal a large quantity is a separate issue (though the issues are related in some ways). So I think there's some confusion or lack of clarity about this.

I didn't read linked material to try to clarify matters, except to notice that this linked paper abstract doesn't use the word "quality". I think, for this issue, the article should stand on its own OK rather than rely on supplemental literature to clarify this.

Actually, I looked again while editing, and I've now noticed that in the full paper (as linked to and hosted by PhilPapers, the same site as before), the abstract text is totally different and does use the word "quality". What is going on!? PhilPapers is broken? Also this paper, despite using the word "quality" in the abstract once (and twice in the references), does not use that word in the body, so I guess it doesn't clarify the ambiguity I was bringing up, at least not directly.

Error Two

This is a strong point in favor of minimalist views over offsetting views in population axiology, regardless of one’s theory of aggregation.

I suspect you're using an offsetting view in epistemology when making this statement concluding against offsetting views in axiology. My guess is you don't know you're doing this or see the connection between the issues.

I take a "strong point in favor" to refer to the following basic model:

- We have a bunch of ideas to evaluate, compare, choose between, etc.
- Each idea has points in favor and points against.
- We weight and sum the points for each idea.
- We look at which idea has the highest overall score and favor that.

This is an offsetting model where points in favor of an idea can offset points against that same idea. Also, in some sense, points in favor of an idea offset points in favor of rival ideas.

I think offsetting views are wrong, in both epistemology and axiology, and there's overlap in the reasons for why they're wrong, so it's problematic (though not necessarily wrong) to favor them in one field while rejecting them in another field.

Error Three

The article jumps into details without enough framing about why this matters. This is understandable for a part 4, but on the other hand you chose to link me to this rather than to part 1 and you wrote:

Every part of this series builds on the previous parts, but can also be read independently.

Since the article is supposed to be readable independently, then the article should have explained why this matters in order to work well independently.

A related issue is I think the article is mostly discussing details in a specific subfield that is confused and doesn't particularly matter – the field's premises should be challenged instead.

And another related issue is the lack of any consideration of win/win approaches, discussion of whether there are inherent conflicts of interest between rational people, etc. A lot of the article topics are related to political philosophy issues (like classical liberalism's social harmony vs. Marxism's class warfare) that have already been debated a bunch, and it'd make sense to connect claims and viewpoints to that existing knowledge. I think imagining societies with different agents with different amounts of utility or suffering, fully out of context of imagining any particular type of society, or design or organization or guiding principles of society, is not very productive or meaningful, so it's no wonder it's gotten bogged down in abstract concerns like the very repugnant conclusion stuff with no sign of any actually useful conclusions coming up.

This is not the sort of error I primarily wanted to point out. However, the article does a lot of literature summarizing instead of making its own claims. I noticed some errors in the summarized ideas, but that's different than errors in the article itself. To point out errors in an article itself, when it's summarizing other ideas, I'd have to point out that it has inaccurately summarized them. That requires reading the cites and comparing them to the summaries, which I don't think would be especially useful or valuable to do. Sometimes people summarize stuff they agree with, so criticizing the content works OK. But here a lot of it was summarizing stuff the author and I both disagree with, in order to criticize it, which doesn't provide many potential targets for criticism. So that's why I went ahead and made some more indirect criticism (and included more than one point) for the third error.

But I'd suggest that @Teo Ajantaival watch my screen recording (below), which has a bunch of commentary and feedback on the article. I expect some of it will be useful and some of the criticisms I make will be relevant to him. He could maybe pick out some things I said and recognize them as criticisms of ideas he holds; sometimes it was hard for me to tell what he believes because he was just summarizing other people's ideas. (When looking for criticism, consider: if I'm right, does it mean you're wrong? If so, then it's a claim by me about an error, even if I'm actually mistaken.) My guess is I said some things that would work as better error claims than some of the three I actually used, but I don't know which things they are. Also, I think if we were to debate, discussing the underlying premises, and whether this sub-field even matters, would actually be more important than discussing within-field details. My disagreement with the niche the article works within is more important than some of the within-niche issues.

Offsetting and Repugnance

This section is about something @Teo Ajantaival also disagrees with, so it's not an error by him. It could possibly be an error of omission if he sees this as a good point that he would have wanted to think of but didn't. To me it looks pretty important and relevant, and problematic to just ignore as if there's no issue here.

If offsetting actually works – if you're a true believer in offsetting – then you should not find the very repugnant scenario to be repugnant at all.

I'll illustrate with a comparison. I am, like most people, to a reasonable approximation, a true believer in offsetting for money. That is, $100 in my bank account fully offsets $100 of credit card debt that I will pay off before there are any interest charges. There do exist people who say credit cards are evil and you shouldn't have one even if you pay it off in full every month, but I am not one of those people. I don't think debt is very repugnant when it's offset by assets like cash.

And similarly, spreading out the assets doesn't particularly matter. A billion bank accounts with a dollar each, ignoring some administrative hassle details, are just as good as one bank account with a billion dollars. That money can offset a million dollars of credit card debt just fine despite being spread out.
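The money-offsetting model above can be sketched in a few lines of code. The numbers and the function name are made up for illustration; the point is just that money is fungible, so only totals matter:

```python
# A minimal sketch of the money-offsetting model: money is fungible, so
# only the total matters, and spreading assets across accounts changes
# nothing. (All numbers are hypothetical.)

def net_worth(accounts, debts):
    """Sum all balances; any dollar of assets offsets any dollar of debt."""
    return sum(accounts) - sum(debts)

# One account holding $1,000 ...
concentrated = net_worth([1000], [100])
# ... offsets $100 of credit card debt exactly as well as
# a thousand accounts holding $1 each.
spread_out = net_worth([1] * 1000, [100])

assert concentrated == spread_out == 900
```

The assertion passing is the whole argument: the offsetting result is identical whether the assets are concentrated or spread out.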

If you really think offsetting works, then you shouldn't find it repugnant to have some negatives that are offset. If you find it repugnant, you disagree with offsetting in that case.

I disagree with offsetting suffering – one person being happy does not simply cancel out someone else being victimized – and I figure most people also disagree with suffering offsetting. I also disagree with offsetting in epistemology. Money, as a fungible commodity, is something where offsetting works especially well. Similarly, offsetting would work well for barrels of oil of a standard size and quality, although oil is harder to transport than money so location matters more.

Bonus Error by Upvoters

At a glance (I haven't read it yet as I write this section), the article looks high effort. It has ~22 upvoters but no comments, no feedback, no hints about how to get feedback next time, no engagement with its ideas. I think that's really problematic and says something bad about the community and upvoting norms. I talk about this more at the beginning of my screen recording.

Update after reading the article: I can see some more potential reasons the article got no engagement (too specialized, too hard to read if you aren't familiar with the field, not enough introductory framing of why this matters) but someone could have at least said that. Upvoting is actually misleading feedback if you have problems like that with the article.

Bonus Literature on Maximizing or Minimizing Moral Values

https://www.curi.us/1169-morality

This article, by me, is about maximizing squirrels as a moral value, and more generally about there being a lot of actions and values which are largely independent of your goal. So if it were minimizing squirrels or maximizing bison instead, most of the conclusions would be the same.

I commented on this some in my screen recording, after the upvoters criticism, maybe 20min in.

Bonus Comments on Offsetting

(This section was written before the three errors, one of which ended up being related to this.)

Offsetting views are problematic in epistemology too, not just morality/axiology. I've been complaining about them for years. There's a huge, widespread issue where people basically ignore criticism – don't engage with it and don't give counter-arguments or solutions to the problems it raises – because it's easier to go get a bunch more positive points elsewhere to offset the criticism. Or if they think their idea already has a ton of positive points and a significant lead, then they can basically ignore criticism without even doing anything. I commented on this verbally around 25min into the screen recording.

Screen Recording

I recorded my screen and talked while creating this. The recording has a lot of commentary that isn't written down in this post.

https://www.youtube.com/watch?v=d2T2OPSCBi4


Elliot Temple | Permalink | Messages (0)

Finding Errors in The Case Against Education by Bryan Caplan

I proposed a game on the Effective Altruism forum where people submit texts to me and I find three errors. I wrote this for that game. This is a duplicate of my EA post.

Introduction

I'm no fan of university nor academia, so I do partly agree with The Case Against Education by Bryan Caplan. I do think social climbing is a major aspect of university. (It's not just status signalling. There's also e.g. social networking.)

I'm assuming you can electronically search the book to read additional context for quotes if you want to.

Error One

For a single individual, education pays.

You only need to find one job. Spending even a year on a difficult job search, convincing one employer to give you a chance, can easily beat spending four years at university and paying tuition. If you do well at that job and get a few years of work experience, getting another job in the same industry is usually much easier.

So I disagree that education pays, under the signalling model, for a single individual. I think a difficult job search is typically more efficient than university.

This works in some industries, like software, better than others. Caplan made a universal claim so there's no need to debate how many industries this is viable in.

Another option is starting a company. That's a lot of work, but it can still easily be a better option than going to university just so you can get hired.

Suppose, as a simple model, that 99% of jobs hire based on signalling and 1% don't. If lots of people stop going to university, there's a big problem. But if you individually don't go, you can get one of the 1% of non-signalling jobs. Whereas if 3% of the population skipped university and competed for 1% of the jobs, a lot of those people would have a rough time. (McDonalds doesn't hire cashiers based on signalling – or at least not the same kind of signalling – so imagine we're only considering good jobs in certain industries so the 1% non-signalling jobs model becomes more realistic.)

When they calculate the selfish (or “private”) return to education, they focus on one benefit—the education premium—and two costs—tuition and foregone earnings.[4]

I've been reading chapter 5 trying to figure out if Caplan ever considers alternatives to university besides just entering the job market in the standard way. This is a hint that he doesn't.

Foregone earnings are not a cost of going to university. They are a benefit that should be added on to some, but not all, alternatives to university. Then university should be compared to the alternatives for how much benefit each gives. When doing that comparison, you should not subtract income available in some alternatives from the benefit of university. Doing that subtraction only makes sense and works out OK if you're only considering two options: university or getting a job earlier. When there are only two options, taking a benefit from one option and instead subtracting it from the other as an opportunity cost doesn't change the mathematical result.
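A toy calculation, with entirely made-up numbers, shows why the subtraction is harmless in the two-option case: the difference between the options comes out the same whether early earnings are counted as a benefit of working or as an "opportunity cost" of university.

```python
# Two-option comparison with hypothetical numbers: the ranking of
# (university) vs (job now) is unchanged by where foregone earnings
# are booked.

tuition = 80_000
education_premium = 300_000   # hypothetical lifetime earnings boost
early_earnings = 120_000      # hypothetical earnings from working instead

# Method 1: count early earnings as a benefit of the job option.
university_1 = education_premium - tuition
job_1 = early_earnings
diff_1 = university_1 - job_1

# Method 2: subtract early earnings from university as opportunity cost.
university_2 = education_premium - tuition - early_earnings
job_2 = 0
diff_2 = university_2 - job_2

assert diff_1 == diff_2  # same comparison result either way
```

With three or more alternatives (e.g. adding a hard job search or starting a company), there's no single "the" foregone income to subtract, so the opportunity-cost bookkeeping stops being a harmless relabeling.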

See also Capitalism: A Treatise on Economics by George Reisman (one of the students of Ludwig von Mises) which criticizes opportunity costs:

Contemporary economics, in contrast, continually ignores the vital connection of income and cost with the receipt and outlay of money. It does so insofar as it propounds the doctrines of “imputed income” and “opportunity cost.”[26] The doctrine of imputed income openly and systematically avows that the absence of a cost constitutes income. The doctrine of opportunity cost, on the other hand, holds that the absence of an income constitutes a cost. Contemporary economics thus deals in nonexistent incomes and costs, which it treats as though they existed. Its formula is that money not spent is money earned, and that money not earned is money spent.

That's from the section "Critique of the Concept of Imputed Income" which is followed by the section "Critique of the Opportunity-Cost Doctrine". The book explains its point in more detail than this quote. I highly recommend Reisman's whole book to anyone who cares about economics.

Risk: I looked for discussion of alternatives besides university or entering the job market early, such as a higher effort job search or starting a business. I didn't find it, but I haven't read most of the book so I could have missed it. I primarily looked in chapter 5.

Error Two

The answer would tilt, naturally, if you had to sing Mary Poppins on a full-price Disney cruise. Unless you already planned to take this vacation, you presumably value the cruise less than the fare. Say you value the $2,000 cruise at only $800. Now, to capture the 0.1% premium, you have to fork over three hours of your time plus the $1,200 difference between the cost of the cruise and the value of the vacation.

(Bold added to quote.)

The full cost of the cruise is not just the fare. It's also the time cost of going on the cruise. It's very easy to value the cruise experience at more than the ticket price, but still not go, because you'd rather vacation somewhere else or stay home and write your book.

BTW, Caplan is certainly familiar with time costs in general (see e.g. the last sentence quoted).

Error Three

Laymen cringe when economists use a single metric—rate of return—to evaluate bonds, home insulation, and college. Hasn’t anyone ever told them money isn’t everything! The superficial response: Economists are by no means the only folks who picture education as an investment. Look at students. The Higher Education Research Institute has questioned college freshmen about their goals since the 1970s. The vast majority is openly careerist and materialist. In 2012, almost 90% called “being able to get a better job” a “very important” or “essential” reason to go to college. Being “very well-off financially” (over 80%) and “making more money” (about 75%) are almost as popular. Less than half say the same about “developing a meaningful philosophy of life.”[2] These results are especially striking because humans exaggerate their idealism and downplay their selfishness.[3] Students probably prize worldly success even more than they admit.

(Bold added.)

First, minor point, some economists have that kind of perspective about rate of return. Not all of them.

And I sympathize with the laymen. You should consider whether you want to go to university. Will you enjoy your time there? Future income isn't all that matters. Money is nice but it doesn't really buy happiness. People should think about what they want to do with their lives, in realistic ways that take money into account, but which don't focus exclusively on money. In the final quoted sentence he mentions that students (on average) probably "prize worldly success even more than they admit". I agree, but I think some of those students are making a mistake and will end up unhappy as a result. Lots of people focus their goals too much on money and never figure out how to be happy (also they end up unhappy if they don't get a bunch of money, which is a risk).

But here's the more concrete error: The survey does not actually show that students view education in terms of economic returns only. It doesn't show that students agree with Caplan.

The issue, highlighted in the first sentence, is "economists use a single metric—rate of return". Do students agree with that? In other words, do students use a single metric? A survey where e.g. 90% of them care about that metric does not mean they use it exclusively. They care about many metrics, not a single one. Caplan immediately admits that, so I don't even have to look the study up. He says 'Less than half [of students surveyed] say the same [very important or essential reason to go to university] about “developing a meaningful philosophy of life.”' Let's assume less than half means a third. Caplan tries to present this like the study is backing him up and showing how students agree with him. But a third disagreeing with him on a single metric is a ton of disagreement. If they surveyed 50 things, and 40 aren't about money, and just 10% of students thought each of those 40 mattered, then maybe around zero students would agree with Caplan that only the single metric matters. (The answers aren't independent, so you can't just use math to estimate this scenario, btw.)

Bonus Error

Self-help gurus tend to take the selfish point of view for granted. Policy wonks tend to take the social point of view for granted. Which viewpoint—selfish or social—is “correct”? Tough question. Instead of taking sides, the next two chapters sift through the evidence from both perspectives—and let the reader pick the right balance between looking out for number one and making the world a better place.

This neglects to consider the classical liberal view (which I believe, and which an economist ought to be familiar with) of the harmony of (rational) interests of society and the individual. There is no necessary conflict or tradeoff here. (I searched the whole book for "conflict", "harmony", "interests" and "classical" but didn't find this covered elsewhere.)

I do think errors of omission are important but I still didn't want to count this as one of my three errors. I was trying to find somewhat more concrete errors than just not talking about something important and relevant.

Bonus Error Two

The deeper response to laymen’s critique, though, is that economists are well aware money isn’t everything—and have an official solution. Namely: count everything people care about. The trick: For every benefit, ponder, “How much would I pay to obtain it?”

This doesn't work because lots of things people care about are incommensurable. They're in different dimensions that you can't convert between. I wrote about the general issue of taking into account multiple dimensions at once at https://forum.effectivealtruism.org/posts/K8Jvw7xjRxQz8jKgE/multi-factor-decision-making-math

A different way to look at it is that the value of X in money is wildly variable by context, not a stable number. Also how much people would pay to obtain something is wildly variable by how much money they have, not a stable number.

Potential Error

If university education correlates with higher income, that doesn't mean it causes higher income. Maybe people who are likely to get high incomes are more likely to go to university. There are also some other correlation isn't causation counter-arguments that could be made. Is this addressed in the book? I didn't find it, but I didn't look nearly enough to know whether it's covered. Actually I barely read anything about his claims that university results in higher income, which I assume are at least partly based on correlation data, but I didn't really check. So I don't know if there's an error here but I wanted to mention it. If I were to read the book more, this is something I'd look into.

Screen Recording

Want to see me look through the book and write this post? I recorded my process with sporadic verbal commentary:

https://www.youtube.com/watch?v=BQ70qzRG61Y



Misquoting Is Conceptually Similar to Deadnaming: A Suggestion to Improve EA Norms

Our society gives people (especially adults) freedom to control many aspects of their lives. People choose what name to go by, what words to say, what to do with their money, what gender to be called, what clothes to wear, and much more.

It violates people’s personal autonomy to try to control these things without their consent. It’s not your place to choose e.g. what to spend someone else’s money on, what clothes they should wear, or what their name is. It’d be extremely rude to call me “Joan” instead of “Elliot”.

Effective Altruism (EA) has written norms related to this:

Misgendering deliberately and/or deadnaming gratuitously is not ok, although mistakes are expected and fine (please accept corrections, though).

I think this norm is good. I think the same norm should be applied to misquoting for the same reasons. It currently isn’t (context).

Article summary: Misquoting is different than sloppiness or imprecision in general. Misquoting puts words in someone else’s mouth without their consent. It takes away their choice of what words to say or not say, just like deadnaming takes away their choice of what name to use.

I’d also suggest applying the deadnaming norm to other forms of misnaming besides deadnaming, though I don’t know if those ever actually come up at EA, whereas misquoting happens regularly. I won’t include examples of misquotes for two reasons. First, I don’t want to name and shame individuals (especially when it’s a widespread problem and it could easily have been some other individuals instead). Second, I don’t want people to respond by trying to debate the degree of importance or inaccuracy of particular misquotes. That would miss the point about people’s right to control their own speech. It’s not your place to speak for other people, without their consent, even a little bit, even in unimportant ways.

I’ll clarify how I think the norm for deadnaming works, which will simultaneously clarify what I think about misquoting. There are some nuances to it. Then I’ll discuss misquoting more and discuss costs and benefits.

Accidents

Accidental deadnaming is OK but non-accidental deadnaming isn’t. If you deadname someone once, and you’re corrected, you should fix it and you shouldn’t do it again. Accidentally deadnaming someone many times is implausible or unreasonable; reasonable people who want to stop having those accidents can stop.

While “mistakes are expected and fine”, EA’s norm is that deadnaming on purpose is not fine nor expected. Misquotes, like deadnaming, come in accidental and non-accidental categories, and the non-accidental ones shouldn’t be fine.

How can we (charitably) judge what is an accident?

A sign that deadnaming wasn’t accidental is when someone defends, legitimizes or excuses it. If they say, “Sorry, my mistake.” it was probably a genuine accident. If they instead say “Deadnaming is not that bad.” or “It’s not a big deal.” or “Why do you care so much?”, or “I’m just using the name on your birth certificate.” then their deadnaming was partly due to their attitude rather than by accident. That violates EA norms.

When people resist a correction, or deny the importance of getting it right, then their mistake wasn’t just an accident.

For political reasons, some people resist using other people’s preferred name or pronouns. There’s a current political controversy about it. This makes deadnaming more common than it would otherwise be. Any deadnaming that occurs in part due to political attitudes is not fully accidental. Similarly, there is a current intellectual controversy about whether misquoting is a big deal or whether, instead, complaining about it is annoyingly pedantic and unproductive. This controversy increases the frequency of misquotes.

However, that controversy about misquotes and precision is separate from the issue of people’s right to control their own speech and choose what words to say or not say. Regardless of the outcome of the precision vs. sloppiness debate in general, misquotes are a special case because they non-consensually violate other people’s control over their own speech. It’s a non sequitur to go from thinking that lower effort, less careful writing is good to the conclusion that it’s OK to say that John said words that he did not say or choose.

People who deadname frequently claim it’s accidental when there are strong signs it isn’t accidental, such as resisting correction, making political comments that reveal their agenda, or being unapologetic. If they do that repeatedly, I don’t think EA would put up with it. Misquoting could be treated the same way.

Legitimacy

Sometimes people call me “Elliott” and I usually say nothing about the misspelling. I interpret it as an accident because it doesn’t fit any agenda. I don’t know why they’d do it on purpose. If I expected them to use my name many times in the future, or they were using it in a place that many people would read it, then I’d probably correct them. If I corrected them, they would say “oops sorry” or something like that; as long as they didn’t feel attacked or judged, and they don’t have a guilty conscience, then they wouldn’t resist the correction.

My internet handle is “curi”. Sometimes people call me “Curi”. When we’re having a conversation and they’re using my name repeatedly, I may ask them to use “curi”. A few people have resisted this. Why? Besides feeling hostility towards a debate opponent, I think some were unfamiliar with internet culture, so they don’t regard name capitalization as a valid, legitimate choice. They believe names should be formatted in a standard way. They think I’m in the wrong by wanting to have a name that starts with a lowercase letter. They think, by asking them to start a name with a lowercase letter, I’m the one trying to control them in a weird, inappropriate way.

People resist corrections when they think they’re in the right in some way. In that case, the mistake isn’t accidental. Their belief that it’s good in some way is a causal factor in it happening. If it was just an accident, they wouldn’t resist fixing the mistake. Instead, there is a disagreement; they like something about the alleged mistake. On the EA forum, you’re not allowed to disagree that deadnaming is bad and also act on that disagreement by being resistant to the forum norms. You’re required to go along with and respect the norms. You can get a warning or ban for persistent deadnaming.

People’s belief that they’re in the right usually comes from some kind of social-cultural legitimacy, rather than being their own personal opinion. Deadnaming and misgendering are legitimized by right wing politics and by some traditional views. Capitalizing the first letter of a name, and lowercasing the rest, is a standard English convention/tradition which some internet subcultures decided to violate, perhaps due to their focus on written over spoken communication. I think misquoting is legitimized primarily by anti-pedantry or anti-over-precision ideas (which is actually a nuanced debate where I think both standard sides are wrong). But viewpoints on precision aren’t actually relevant to whether it’s acceptable or violating to put unchosen words in someone else’s mouth. Also, each person has a right to decide how precise to be in their own speech. When you quote, it’s important to understand that that isn’t your speech; you’re using someone else’s speech in a limited way, and it isn’t yours to control.

When someone asks you not to deadname, you may feel that they’re asking you to go against your political beliefs, and therefore want to resist what feels like politicized control over your speech, which asks you to use your own speech contrary to your values. However, a small subset of speech is more about other people than yourself, so others need to have significant control over it. That subset includes names, pronouns and quotes. When asked not to misquote, instead of feeling like your views on precision are being challenged, you should instead recognize that you’re simply being asked to respect other people’s right to choose what words to say or not say. It’s primarily about them, not you. And it’s primarily about their control over their own life and speech, not about how much precision is good or how precisely you should speak.

Control over names and pronouns does have to be within reason. You can’t choose “my master who I worship” as a name or pronoun and demand that others say it. I’m not aware of anyone ever seriously wanting to do that. I don’t think it’s a real problem or what the controversy is actually about (even though it’s a current political talking point).

Our culture has conflicting norms, but it does have a very clear, well known norm in favor of exact quotes. That’s taught in schools and written down in policies at some universities and newspapers. We lack similarly clear or strong norms for many other issues related to precision. Why? Because the norm against misquoting isn’t primarily about precision. Misquoting is treated differently than other issues related to precision because it’s not your place to choose someone else’s words any more than it’s your place to choose their name or gender.

Misquotes Due to Bias

Misquotes usually aren’t random errors.

Sometimes people make a typo. That’s an accident. Typos can be viewed as basically random errors. I bet there are actually patterns regarding which letters or letter combinations get more typos. And people could work to make fewer typos. But there’s no biased agenda there, so in general it’s not a problem.

Most quotes can be done with copy/paste, so typos can be avoided. If someone has a general policy of typing in quotes and keeps making typos within quotes, they should switch to using copy/paste. At my forum, I preemptively ask everyone to use software tools like copy/paste when possible to avoid misquotes. I don’t wait and ask them to switch to less error-prone quoting methods after they make some errors. That’s because, as with deadnaming, those errors mistreat other people, so I’d rather they didn’t happen in the first place.

Except for typos and genuine accidents, misquotes are usually changed in some way that benefits or favors the misquoter, not in random ways.

People often misquote because they want to edit things in their favor, even in very subtle ways. Tiny changes can make a quote seem more or less formal or tweak the connotations. People often edit quotes to remove some ambiguity, so it reads as an author more clearly saying something than he did.

Sometimes people want their writing to look good with no errors, so they want to change anything in a quote that they regard as an error, like a comma or lack of comma. Instead of respecting the quote as someone else’s words – their errors are theirs to make (or to disagree about whether they’re errors) – they want to control it because they’re using it within their own writing, so they want to make it conform to their own writing standards. People should understand that when they quote, they are giving someone else a space within their writing, so they are giving up some control.

People also misquote because they don’t respect the concept of accurate quotations. These misquotes can be careless with no other agenda or bias – they aren’t specifically edited to e.g. help one side of a debate. However, random changes to the wordings your debate partners use tend to be bad for them. Random changes tend to make their wordings less precise rather than more precise. As we know from evolution, random changes are more likely to make something less adapted to a purpose rather than more adapted.

If you deadname people because you don’t respect the concept of people controlling their name, that’s not OK. If you are creating accidents because you don’t care to try to get names right, you’re doing something wrong. Similarly, if you create accidental misquotes because you don’t respect the concept of people controlling their own speech and wordings, you’re doing something wrong.

Also, imprecision in general is an enabler of bias because it gives people extra flexibility. They get more options for what to say, think or do, so they can pick the one that best fits their bias. A standard example is rounding in their favor. If you’re 10 minutes late, you might round that down to 5 minutes in a context where plus or minus five minutes of precision is allowed. On the other hand, if someone else is 40 minutes late, you might round that up to an hour as long as that’s within acceptable boundaries of imprecision. People also do this with money. Many people round their budget up but round their expenses down, and the more imprecise their thinking, the larger the effect. If permissible imprecision gives people multiple different versions of a quote that they can use, they’ll often pick one that is biased in their favor, which is different than a fully accidental misquote.

Misquotes Due to Precise Control or Perfectionism

Some non-accidental misquotes, instead of due to bias, are because people want to control all the words in their essay (or book or forum post). They care so much about controlling their speech, in precise detail, that they extend that control to the text within quotes just because it’s within their writing. They’re used to having full control over everything they write and they don’t draw a special boundary for quotations; they just keep being controlling. Then, ironically, when challenged, they may say “Oh who cares; it’s just small changes; you don’t need precise control over your speech.” But they changed the quote because of their extreme desire to exactly control anything even resembling their own speech. If you don’t want to give up control enough to let someone else speak in entirely their own words within your writing, there is a simple solution: don’t quote them. If you want total control of your stuff, and you can’t let a comma be out of place even within a quote, you should respect other people wanting control of their stuff, too. Some people don’t fully grasp that the stuff within quotes is not their stuff even though it’s within their writing. Misquotes of this nature come more from a place of perfectionism and precise control, and lack of empathy, rather than being sloppy accidents. These misquotes involve non-random changes to make the text fit the quoter’s preferences better.

Types of Misquotes

I divide misquotes into two categories. The first type changes a word, letter or punctuation mark. It’s a factual error (the quote is factually wrong about what the person said). It’s inaccurate in a clear, literal way. Computers can pretty easily check for this kind of quotation error without needing any artificial intelligence. Just a simple string comparison algorithm can do it. In this case, there’s generally no debate about whether the quote is accurate or inaccurate. There are also some special rules that allow changing quotes without them being considered inaccurate, e.g. using square brackets to indicate changes or notes, or using ellipses for omitted words.
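As a rough illustration (not any particular forum's implementation), the first type of misquote really can be caught by a plain substring check, with no artificial intelligence needed. The function name is made up:

```python
# A minimal sketch of checking the first type of misquote: the quote is
# accurate, in the clear literal sense, iff it appears verbatim in the
# source. (Handling square-bracket edits and ellipses would need extra
# logic; this covers only the basic exact-quote case.)

def is_accurate_quote(quote: str, source: str) -> bool:
    """True iff the quoted text appears verbatim in the source text."""
    return quote in source

source = "I do not think John is great."
is_accurate_quote("I do not think John is great.", source)  # True
is_accurate_quote("John was great.", source)                # False
is_accurate_quote("John is great.", source)                 # True!
```

Note the last line: the substring check passes even though the quote drops the crucial "not". That's the second type of misquote – misleading despite being literally accurate – which is exactly why it takes human judgment rather than a string comparison to catch.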

The second type of misquote is a misleading quote, such as taking words out of context. There is sometimes debate about whether a quote is misleading or not. Many cases are pretty clear, and some cases are harder to judge. In borderline cases, we should be forgiving of the person who did it, but also, in general, they should change it if the person being quoted objects. (Or, for example, if you’re debating someone about Socrates’ ideas, and they’re the one taking Socrates’ side, and they think your Socrates quote is misleading, then you should change it. You may say all sorts of negative things about the other side of the debate, but that’s not what quotation marks are for. Quotations are a form of neutral ground that should be kept objective, not a place to pursue your debating agenda.)

Here’s an example of a misleading quote that doesn’t violate the basic accuracy rules. Suppose you say, “I do not think John is great,” but I quote you as saying “John is great.” The context included an important “not” which has been left out. I think we can all agree that this counts as misquoting even though no words, letters or punctuation marks were changed. And, like deadnaming, it’s very rude to do this to someone.

Small Changes

Sometimes people believe it’s OK to misquote as long as the meaning isn’t changed. Isn’t it harmless to replace a word with a synonym? Isn’t it harmless to change a quote if the author agrees with the changed version? Do really small changes matter?

First of all, if the changes are small and don’t really matter, then just don’t do them. If you think there’s no significant difference, that implies there’s no significant upside, so then don’t misquote. It’s not like it takes substantial effort to refrain from editing a quote; it’s less work not to make changes. And copy/pasting is generally less work than typing.

If someone doesn’t mind a change to a quote, there are still concerns about truth and accuracy. Someone in the audience may not want to read things he believes are exact quotes but which aren’t. He may find that misleading (and EA has a norm against misleading people). Also, if you ever non-accidentally use inaccurate quotes, then reasonable people will doubt that they can trust any of your quotes. They’ll have to check primary sources for any quotes you give, which will significantly raise the cost of reading your writing and reduce engagement with your ideas. But the main issue – putting words in someone’s mouth without their consent – is gone if they consent. Similarly, it isn’t deadnaming to use an old name of someone who consents to be called by either their old or new name.

However, it’s not your place to guess what words someone would consent to say. If they are a close friend, maybe you have a good understanding of what’s OK with them, and I guess you could try to get away with it. I wouldn’t recommend that and I wouldn’t want to be friends with someone who thought they could speak for me and present it as a quote rather than as an informed guess about my beliefs or about what I would say. But if you want to quote your friend (or anyone else) saying something they haven’t said, and you’re pretty sure they’d be happy to say it, there’s a solution: ask them to say it and then quote them if they do choose to say it. On the other hand, if you’re arguing with someone, you’re in a poor position to judge what words they would consent to saying or what kind of wording edits would be meaningful to them. It’s not reasonable to try to guess what wording edits a debate opponent would consent to and then go ahead with them unilaterally.

Inaccurately paraphrasing debate opponents is a problem too, but it’s much harder to avoid than misquoting is. Misquoting, like deadnaming, is something that you can almost entirely avoid if you want to.

The changes you find small and unimportant can matter to other people with different perspectives on the issues. You may think that “idea”, “concept”, “thought” and “theory” are interchangeable words, but someone else may purposefully, non-randomly use each of those words in different contexts. It’s important that people can control the nuances of their wordings when they want to (even if they can’t give explicit arguments for why they use words that way). Even if an author doesn’t (consciously) see any significant difference between his original wording and your misquote, the misquote is still less representative of his thinking (his subconscious or intuition chose to say it the other way, and that could be meaningful even if he doesn’t realize it).

Even if your misquote would be an accurate paraphrase, and won’t do a bunch of harm by spreading severe misinformation, there’s no need to put quote marks around it. If you’re using an edited version of someone else’s words, so leaving out the quote marks would be plagiarism, then use square brackets and ellipses. There’s already a standard solution for how to edit quotes, when appropriate, without misquoting. There’s no good reason to misquote.

Cost and Benefit

How costly is it to avoid misquotes or to avoid deadnaming? The cost is low but there are some reasons people misjudge it.

Being precise has a high cost, at least initially. But misquoting, like misnaming, is a specific case where, with a low effort, people can get things right with high reliability and few accidents. Reducing genuine accidents to zero is unnecessary and isn’t what the controversy is about.

When a mistake is just an accident, correcting it shouldn’t be a big deal. There is no shame in infrequent accidents. Yet attempts to correct misquotes sometimes turn into a much bigger deal, with each party writing multiple messages. It can even initiate drama. That happens because people oppose the policy of not misquoting, not because of any cost inherent in the policy. It’s the resistance to the policy, not the policy itself, which wastes time and energy and derails conversations.

Most of the observed conversational cost of talking about misquotes is due to people’s pro-misquoting attitudes rather than any actual difficulty of avoiding misquotes. This misleads people about how large the cost is.

Similarly, if you go to some right wing political forums, getting people to stop deadnaming would be very costly. They’d fight you over it. But if they were happy to just do it, then the costs would be low. It’s not very hard to occasionally update your memory about the names of a few people. Cost due to opposition to doing something correctly should be clearly differentiated from the cost of doing it correctly.

To avoid misquotes, copy and paste. If you type in a quote from paper, double check it and/or disclaim it as potentially containing a typo. Most books are available electronically, so typing quotes in from paper is usually unnecessary and more costly. Most cases of misquoting that I’ve seen, or had a conflict over, involved a quote that could have been copy/pasted. Copy/pasting is easy, not costly.

Avoiding misquotes also involves never adding quotation marks around things which are not quotes but which readers would think were quotes. For example, don’t write “John said,” followed by a paraphrase with quote marks around it, in order to make it seem more exact, precise, rigorous or official than it is. And don’t put quote marks around a paraphrase because you believe you should use a quote, but you’re too lazy to get the quote, and you want to hide that laziness by pretending you did quote.

Accurate quoting can be more about avoiding bias than about effort or precision. You have to want to do it and then resist the temptation to violate the rules in ways that favor you. For some people, that’s not even tempting. It’s like how some people resist the temptation to steal while others don’t find stealing tempting in the first place. You can get to the point that things aren’t tempting and really don’t take effort to not do. Norms can help with that. Due to better anti-stealing norms, many more people aren’t tempted to steal than aren’t tempted to misquote. Anyway, if someone gives in to temptation and steals, deadnames or misquotes, that is not an accident. It’s a different thing. It’s not permissible at EA to deadname because you gave in to temptation, and I suggest misquoting should work that way too.

What’s the upside of misquoting? Why are many people resistant to making a small effort to change? I think there are two main reasons. First, they confuse the misquoting issue with the general issue of being imprecise. They feel like someone asking them not to misquote is demanding that they be a more precise thinker and writer in general. Actually, people asking not to be misquoted, like people asking not to be deadnamed, don’t want their personal domain violated. Second, people like misquoting because it lets them make biased changes to quotes. People don’t like being controlled by rules that give them less choice of what to do and less opportunity to be flexible in their favor (a.k.a. biased). Many people have a general resistance to creating and following written policies. I’ve written about how that’s related to not understanding or resisting the rule of law.

Another cost of avoiding misquotes is that you should be careful when using software editing tools like spellcheck or Grammarly. They should have automatic quote detection features and warn you before making changes within quotes, but they don’t. These tools encourage people to quickly make many small changes without reading the context, so people may change something without even knowing it’s within a quote. People can also click buttons like “correct all” and end up editing quotes. Or they might decide to replace all instances of “colour” with “color” in their book, do a mass find/replace, and accidentally change a quote. I wonder how many small misquotes in recent books are caused this way, but I don’t think it’s the cause of many misquotes on forums. Again, the occasional accident is OK; perfection is not necessary but people could avoid most errors at a low cost and stop picking fights in defense of misquotes or deadnaming.
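As an illustration of the missing “automatic quote detection” feature, here’s a minimal sketch of a find/replace that leaves quoted spans alone. The function name is hypothetical, and it assumes straight, properly paired double quotes; curly quotes, apostrophes and nested quoting would need more work:

```python
import re

def replace_outside_quotes(text: str, old: str, new: str) -> str:
    """Find/replace that skips anything inside double quotes.

    Splitting on a capturing group keeps the quoted spans as their
    own list items, so we can copy them through unchanged and only
    run the replacement on the unquoted parts.
    """
    parts = re.split(r'("[^"]*")', text)
    return ''.join(
        part if part.startswith('"') else part.replace(old, new)
        for part in parts
    )

text = 'My favourite colour is blue. She wrote, "I love that colour."'
result = replace_outside_quotes(text, 'colour', 'color')
# the author's own 'colour' is Americanized; the quoted one is untouched
```

A mass find/replace built this way would avoid the colour/color accident described above, at the cost of a little extra code.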

If non-accidental misquoting is prohibited at EA, just like deadnaming, then it will provide a primary benefit by defending people’s control over their own speech. It will also provide a secondary benefit regarding truth, accuracy and precision. It’s debatable how large that accuracy benefit is and how much cost it would be worth. However, in this case, the marginal cost of that benefit would be zero. If you change misquoting norms for another reason which is worth the cost by itself, then the gain in accuracy is a free bonus.

There are some gray areas regarding misquoting, where it’s harder to judge whether it’s an error. Those issues are more costly to police. However, most of the benefit is available just by policing misquotes which are clearly and easily avoidable, which is the large majority of misquotes. Doing that will have a good cost to benefit ratio.

Another cost of misquoting is it can gaslight people, especially with small, subtle changes. It can cause them to doubt themselves or create false memories of their own speech to match the misquote. It takes work to double check what you actually said after reading someone quote you, which is a cost. Many people don’t do that work, which leaves them vulnerable. There’s a downside both to doing and to not doing that work. That’s a cost imposed by allowing misquotes to be common and legitimized.

Tables

Benefits and costs of anti-misquoting norms:

Benefits:
- Respect people’s control over their speech
- Accuracy
- Prevent conflicts about misquotes
- No hidden, biased tweaks in quotes you read
- Less time editing quotes
- Quotes and paraphrases differentiated
- Filter out persistent misquoters

Costs:
- Avoiding carelessness
- Resisting temptation
- Not getting to bias quotes in your favor
- Learning to use copy/paste hotkeys
- Not getting full control over quoted text like you have over other text in your post
- Not getting to put quote marks around whatever you want to
- Lose people who insist on misquoting
- Effort to spread and enforce norm

For comparison, here’s a cost/benefit table for anti-deadnaming norms:

Benefits:
- Respect people’s control over their name
- Accuracy
- Filter out persistent deadnamers

Costs:
- Avoiding carelessness
- Resisting temptation
- Lose people who insist on deadnaming
- Not getting to call people whatever you want
- Effort to spread and enforce norm

Potential Objections

If I can’t misquote, how can I tweak a quote wording to fit my sentence? Use square brackets.

If I can’t misquote, how can I supply context for a quote and keep it short? Use square brackets or explain the context before giving the quote.

What if I type in a quote and make a typo? If you’re a good enough typist that you don’t mind typing extra words, I’m sure you can also manage to use copy/paste hotkeys.

What if I’m quoting a paper book? Double check what you typed in and/or put a disclaimer that it’s typed in by hand.

What if an accident happens? As with deadnaming, rare, genuine accidents are OK. Accidents that happen because you don’t really care about deadnaming or misquoting are not fine.

Who cares? People who think about what words to say and not say, and put effort into those decisions. They don’t want someone else to overrule those decisions. Whether you’re one of those people or not, people who think about what to say are people you should want to have on your forum.

Who else cares? People who want to form accurate beliefs about the world and have high standards don’t want to read misquotes and potentially be fooled by them or have to look stuff up in primary sources frequently. It’s much less work for people to not misquote in the first place than for readers (often multiple readers independently) to check sources.

Is it really that big a deal? Quoting accurately isn’t very hard and isn’t that big a deal to do. If this issue doesn’t matter much, just do it in the way that doesn’t cause problems and doesn’t draw attention to quoting. If people would stop misquoting then we could all stop talking about this.

Can’t you just ignore being misquoted? Maybe. You can also ignore being deadnamed, but you shouldn’t have to. It’s also hard enough to have discussions when people subtly reframe the issues, and indirectly reframe what you said (often by replying as if you said something, without claiming you said it), which is very common. Those actions are harder to deal with and counter when they involve misquotes – misquotes escalate a preexisting problem and make it worse. On the other hand, norms in favor of using (accurate) quotes more often would make it harder to be subtly biased and misleading about what discussion partners said.

Epistemic Status

I’ve had strong opinions about misquoting for years and brought these issues up with many people. My experiences with using no-misquoting norms at my own forum have been positive. I still don’t know of any reasonable counter-arguments that favor misquotes.

Conclusion

Repeated deadnaming is due to choice not accident. Even if a repeat offender isn’t directly choosing to deadname on purpose, they’re choosing to be careless about the issue on purpose, or they have a (probably political) bias. They could stop deadnaming if they tried harder. EA norms correctly prohibit deadnaming, except by genuine accident. People are expected to make a reasonable (small) effort to not deadname.

Like deadnaming, misquoting violates someone else’s consent and control over their personal domain. People see misquoting as being about the open debate over how precise people should be, but that is a secondary issue. They should have more empathy for people who want to control their own speech. I propose that EA’s norms should be changed to treat misquoting like deadnaming. Misquoting is a frequent occurrence and the forum would be a better place if moderators put a stop to it, as they stop deadnaming.

Norms that allow non-accidental misquoting alienate some people who might otherwise participate, just like allowing non-accidental deadnaming would alienate some potential participants. Try to visualize in your head what a forum would be like where the moderators refused to do anything about non-accidental deadnaming. Even if you don’t personally have a deadname, it’d still create a bad, disrespectful atmosphere. It’s better to be respectful and inclusive, at a fairly small cost, instead of letting some forum users mistreat others. It’s great for forums to enable free speech and have a ton of tolerance, but that shouldn’t extend to people exercising control over something that someone else has the right to control, such as his name or speech. It’s not much work to get people’s names right nor to copy/paste exact quotes and then leave them alone (and to refrain from adding quotation marks around paraphrases). Please change EA’s norms to be more respectful of people’s control over their speech, as the norms already respect people’s control over their name.


Elliot Temple | Permalink | Messages (0)

How I Misunderstood TCS

I saw the blog post "Taking Children Seriously" Is Bad. I agree. I’ve thought of more and more flaws with TCS as time has gone on, and I’ve written some criticism. I’ve also put warnings/disclaimers on some of my old TCS writing. Also, the TCS founders are bad people who are responsible for a harassment campaign against me. Anyway, I wanted to share some thoughts on how/why I didn’t notice TCS’s flaws sooner.


I think I misunderstood TCS for a bunch of reasons, but in a way where the version of TCS in my head was better than what David and Sarah meant. One thing that happened was DD said there was knowledge on some topics, and I believed him and tried to learn/understand it. Then I created some of it.

DD often let me talk a lot while making some comments, and he didn’t tell me when I was saying things that were new to him, which was misleading. I often thought I was figuring out things he already knew with some hints/help, when actually he was hiding his ignorance from me. The best example of this is my method for avoiding coercion, which was part of my attempt to learn (and organize and write down publicly) existing TCS knowledge, but was actually me creating new knowledge. And I’m not sure that to this day DD learned my avoiding coercion method or agrees with it or likes it. But without my method, how do you always find common preferences (quickly, not given unbounded time)? TCS has no real, substantive, usable answer. Just discuss and try, while trying to not be irrational and not coerce. TCS also lacks details for how to have a rational discussion. I’ve tried to understand/create rational methods more than TCS (or Popper) ever did with ideas like Paths Forward, Impasse Chains, decisive arguments, debate trees, and idea-goal-context decision making. TCS never had methods with that level of specificity and usefulness.

DD told me I was really good at drawing out explicit statements of knowledge he already had. But I think a lot of what happened is I brought up issues – via questions, criticism or explanations – which he hadn’t actually thought of. That prompted him to make new explicit statements to address my new ideas.

TCS had very broad, abstract claims like “problems are soluble”, as well as simple examples and naive advice. Examples of naive advice are that custody courts and child protective services aren’t very dangerous and you shouldn’t worry about them. Also saying that child predators are very rare and not really a concern even when saying that children are full adults in principle and advocating abolishing age of consent laws. Another example of the lack of substance in TCS advice was DD suggesting to tell teachers to let your child use the phone whenever he wants. If teachers (or babysitters, daycare workers, camp workers, etc.) would actually listen to that kind of request, that would be wonderful. But we don’t live in that world. And if we did, parents would be able to think of the idea “ask them to let my child use the phone whenever he wants” without DD’s help. It’s not a very clever idea; most parents could come up with that themselves (if they had the sort of goals where it’d be a good idea – most TCS-inclined parents would want their kid to be able to phone for help but some other parents wouldn’t actually want that).

Another thing that happened, from my perspective, is I won a lot of arguments. I criticized a lot of genuine errors. I thought that was important and useful, and would lead to progress. DD encouraged and liked it. It was useful practice for my own intellectual development. Before I found DD/TCS I was way above average at critical debate, logic, etc. But now I’ve improved a ton compared to my past self. The critical discussions had value for me but weren’t much use for changing the world. It didn’t help people much. They tended not to learn from criticism. And other people in the audience (besides whoever I was directly replying to) tended not to learn much even if they were making very similar mistakes to what I commented on, and they also tended not to learn much from my example about how to debate, think critically, get logic right, etc.

TCS seemed right and important to me because I used ideas related to it and won arguments. That made it seem to me like people were doing worse than TCS and TCS was a clear improvement. While TCS or any sort of gentle parenting has some improvements over mean parenting, I don’t think that was really the issue. I could have won a lot of arguments using other ideas too. The bigger issue is that people are bad at arguing, logic, learning and following ideas correctly, etc. So yeah they wouldn’t get even the basics of TCS right. In some sense, TCS didn’t seem to need more advanced or complex ideas because people weren’t learning and using the main ideas it did say. TCS is like “be way nicer to your kids guys” and then people post about how they’re mean to their kids and blind to it. They needed more practical help. They needed more guidance to actually learn ideas and integrate them into their lives. These are some of the things I’ve been working on with CF. TCS didn’t do that. It wasn’t actually very good.

TCS actually had ideas that were against being organized or methodical, or intentionally following long term goals. It was more like “follow the fun” and “being untidy helps you be creative” which are just personal irrationalities and errors of DD and SFC, not principles with anything to do with Popperian epistemology. I did OK at learning and making progress despite the lack of structure, but most people didn’t, and I think I would have learned more and faster with more organization and structure. I’ve now imposed more structure on my life and organized things more and it is not self-coercive for me; I’m fine with it and find it useful. I understand that for DD it would be self-coercive, but many people can do it some without major downsides, and DD is wrong and should really work on fixing his flaws. TCS never told people to practice anything but practice is a key part of turning intellectual ideas into something that makes a difference in your daily life (rather than only affecting some decisions that you use conscious analysis for, which often leads to clashes between your conscious and subconscious if you don’t do any practice).

This article itself isn’t very organized, but that’s an intentional choice. I’d rather put organizing and editing effort into epistemology articles for the CF website than into this article. I want to write this article cheaply (in terms of resource use like effort). Similarly, I could write a lot of detailed criticism of TCS and of DD’s books, but I don’t want to because I have other things to do. I’ve made some intentional choices about what to prioritize. My CF site has the stuff I think is most important to put energy into. It avoids parenting, relationships and politics. I think stuff about rationality itself is more important because it’s needed to deal with those other topics well. On a related note, I would like to study math and physics, but I don’t, because I don’t want to take the energy away from my philosophy work. TCS discouraged that kind of resource budgeting choice. But I don’t feel bad or self-coerced about it. I think it’s a good choice. I don’t have time or energy to do everything that would be nice to do. Prioritizing is part of life. If you don’t prioritize in a conscious or intentional way, you’ll still end up doing some things and not others. The difference will be some more important things don’t get done. Unintentionally not doing some things because you run out of time and energy won’t lead to better outcomes than making some imperfect, intentional, conscious decisions.

It’s important not to fight with yourself and suppress your desires with willpower. It’s important not to consciously choose some priorities that your subconscious disagrees with. People don’t live up to this perfectly. It’s a good goal to try to do better at, but don’t get paralyzed or sad about it. Just don’t purposefully suppress your desires with willpower while treating that as a good long-term strategy and never improving.

It’s pretty common to like something subconsciously/emotionally/intuitively and also think it’s important. That’s an achievable, realistic thing. Not everyone is really conflicted about prioritizing whatever their main interest or profession is. Some people like something and prioritize it and that works well for them. It’s not really all that special that I like philosophy, and do it, and I’m OK with deprioritizing math and physics even though those would be fun too. I don’t think DD can do it though, which is part of why he started TCS but later abandoned it – he has poor control over his priorities and they’re unstable. In retrospect, when he wrote over 100 blog posts about politics for his blog Setting the World to Rights, that was a betrayal of TCS. He could and should have written 100 articles about parenting instead (or if he didn’t want to, then don’t found a parenting movement and recruit people to join it in the first place – choose the politics blog instead).

Also, by saying things were very abusive, monstrous, etc., TCS implied the current state of the world was better than it is. Saying TCS was practical and immediately achievable also implied the world is better than it is. I didn’t realize how screwed up the world is, and TCS was wrong about it. The world being more screwed up makes TCS thinking less reasonable. (It doesn’t affect abstract principles, but it affects applications.) While TCS implied most of the world is better than it really is, it said all other parenting is really bad. It’s actually pretty common for people to notice errors in their specialty, think it’s a big problem, and assume other specialties aren’t so screwed up. It’s been said that people reading a newspaper article about their profession often see that it’s full of glaring, basic errors … but then for some reason they believe the same newspaper on every other topic. TCS saw parenting errors but believed the same society was reasonable on other topics. (TCS got some of the errors wrong, but there are plenty of real errors in everything, so when you decide to be a harsh critic you’ll often get some things right. Or put another way, everything has lots of room for improvement. If you just try to point out flaws, it’s not so hard to be right sometimes. If you try to suggest viable ways to improve things, that’s much harder, because your suggestions will contain flaws too.)

The best parts of TCS were short, abstract general principles. Their applications of those principles were not so good. The best principles were unoriginal and came from Popper (rationality stuff) or classical liberalism (freedom, cooperative relationships, mutual benefit, win/win solutions). They were open about getting ideas from those two sources. What was more original were the specific applications to parenting, but those weren’t so good…

What happened is I learned TCS by trying to understand and apply the principles myself. I reinvented a lot of the applications while trying to figure out the details, because TCS didn’t have enough details and because I cared much more about the principles than about parenting (so did DD, who, for that reason, should not have founded a parenting movement – it would have been better if he made a philosophy blog instead, as I have done). Anyway, when I worked out applications of the principles myself, I came up with a lot of different conclusions without realizing it. That’s a common thing people do when they read something and don’t discuss much, but I was discussing with DD all the time, and he didn’t tell me that I was coming up with new and different ideas, and he didn’t express disagreement with the stuff I came up with, which was really misleading to me.

An example is that I figured out that TCS implies having only one child (at a time), but DD and SFC didn’t say that and I doubt they believe it, but I don’t recall DD ever expressing disagreement with that idea. TCS also said a bunch of stuff about getting helpers, but what I figured out is its principles suggest that even having a co-parent is very problematic, because it gets in the way of taking individual responsibility for a very hard, unconventional project you’re doing where you need full control and can’t rely on others to be rational participants. Not having a co-parent is also very problematic, so there’s a hard problem there that TCS doesn’t address at all. (Having only one kid has some problems too, btw. There are downsides to address which TCS hasn’t tried to develop knowledge about.) Having little other help besides a co-parent is reasonably realistic though – much more so than having other helpers who are actually TCS. Thinking you could have lots of TCS helpers is also related to the incorrect adequate-society mindset of TCS.

