
Overreaching Discussion

How can we understand, manage and deal with our error rate and our error correction rate/capacity? How can we avoid being overwhelmed with errors? How can we succeed more? How can we know what we're talking about instead of blundering around lost and confused?


Elliot Temple on April 20, 2020

Messages (13)

Overreaching Example

https://twitter.com/jeremyphoward/status/1251900333351149575

This thread catches some "scientists" massively overreaching. It points out their errors.

They are trying to use – as a key part of their professional work! – math and statistics that *they blatantly don't understand the basics of*. They also don't understand basic concepts about what numbers and measurements are and what they mean in reality. These are people who measured something as being between 0 and X and then factored that into their results by ... excluding it entirely, because they don't conceptually understand how their measuring instruments work, what results they got, or what those results mean in reality. (They mixed up a reading of less of something than their tool can detect – which means the real value is between 0 and the tool's detection limit – with the tool failing, as if the reading were bad data to throw out.)

Their basic errors change the conclusion from "masks work" (what their data shows) to "masks don't work".
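To make the measurement issue concrete, here's a toy sketch in Python with made-up numbers (not the study's actual data, and an assumed detection limit): dropping below-detection-limit readings makes the masked samples look like the unmasked ones, while treating them as values between 0 and the detection limit shows a big reduction.

```python
# Made-up numbers for illustration only (not the study's data). "None" marks a
# reading the instrument reported as below its detection limit (LOD), i.e. the
# true value is somewhere between 0 and LOD.
import statistics

LOD = 0.01  # assumed detection limit, arbitrary units

no_mask   = [2.3, 1.8, 3.1, 2.7]
with_mask = [None, None, 2.5, None]  # mostly undetectable; one detectable reading

def drop_undetectable(readings):
    """The mistaken approach: treat below-LOD readings as bad data and discard them."""
    return [r for r in readings if r is not None]

def bound_undetectable(readings):
    """A sounder approach: replace each below-LOD reading with its upper bound, LOD
    (a conservative stand-in; a proper censored-data model would be better still)."""
    return [LOD if r is None else r for r in readings]

print(statistics.mean(no_mask))                        # ~2.48
print(statistics.mean(drop_undetectable(with_mask)))   # 2.5  -> looks like "masks don't work"
print(statistics.mean(bound_undetectable(with_mask)))  # ~0.63 -> masks clearly reduce the measured amount
```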

Here are the paper authors:

https://annals.org/aim/fullarticle/2764367/effectiveness-surgical-cotton-masks-blocking-sars-cov-2-controlled-comparison

> Seongman Bae, MD *; Min-Chul Kim, MD *; Ji Yeun Kim, PhD *; Hye-Hee Cha, BS; Joon Seo Lim, PhD; Jiwon Jung, MD; Min-Jae Kim, MD; Dong Kyu Oh, MD; Mi-Kyung Lee, MD; Seong-Ho Choi, MD; Minki Sung, PhD; Sang-Bum Hong, MD; Jin-Won Chung, MD; Sung-Han Kim, MD

Note that all of them have professional credentials.


curi at 10:25 AM on April 20, 2020 | #16396

https://elischragenheim.com/2016/11/01/when-support-is-truly-required/

> A personal story: I realized the critical importance of that kind of intuition when my first book “Management Dilemmas” was translated into Japanese. The translator sent me several paragraphs asking me “What’s the hell did you mean in this paragraph?” Well, she used much more polite words, but that was the meaning. It was with a lot of pain that I realized that every paragraph she included caused me, when I wrote it, a struggle and eventually dissatisfaction, which I simply ignored. If only I had the wit to ask one of my friends to read and comment. In my later books I have co-authored with others, mainly with Bill Dettmer, to avoid this feeling.

People know they're overreaching/confused, they know stuff isn't good enough, and they do it anyway. This example comes from an exceptional person.


curi at 11:28 AM on April 20, 2020 | #16397

People dislike criticism because they are overreaching, and in that context criticism doesn't lead to error corrections / solutions.


curi at 12:31 PM on April 20, 2020 | #16398

Another Example

Saw this study posted on reddit:

https://www.theguardian.com/environment/2020/apr/20/air-pollution-may-be-key-contributor-to-covid-19-deaths-study

with the title:

> Air pollution may be ‘key contributor’ to Covid-19 deaths

The very first comment: https://www.reddit.com/r/science/comments/g52c9x/air_pollution_may_be_key_contributor_to_covid19/fo0zsiu?utm_source=share&utm_medium=web2x

>> Research shows almost 80% of deaths across four countries were in most polluted regions

> Article doesn't explicitly say they adjusted for population, but hopefully they did.

> Obviously urban areas have not only more polution, but also more people.

It turns out that they didn't adjust for the population.
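A toy calculation (invented numbers, not from the study) shows why the adjustment matters: a region can account for 80% of the deaths simply because it has 80% of the people.

```python
# Invented numbers for illustration: raw death counts vs. per-capita death rates.
polluted      = {"deaths": 800, "population": 8_000_000}  # urban: more pollution, more people
less_polluted = {"deaths": 200, "population": 2_000_000}

share_of_deaths = polluted["deaths"] / (polluted["deaths"] + less_polluted["deaths"])
rate_polluted   = polluted["deaths"] / polluted["population"]
rate_less       = less_polluted["deaths"] / less_polluted["population"]

print(share_of_deaths)           # 0.8 -> "80% of deaths were in the most polluted regions"
print(rate_polluted, rate_less)  # 0.0001 vs 0.0001 -> identical per-capita death rates
```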


Freeze at 9:56 PM on April 20, 2020 | #16406

This is an extended example showing how much people are overreaching all the time. People can't read two short, simple sentences and make a thoughtful judgment of whether those sentences contradict. They can't explain why they think things contradict. They don't know how to point out contradictions. They are stuck with only vague approximations and handwaving. And they make errors like misunderstanding what the word "from" means.

https://curi.us/2319-discussion-analyzing-a-claimed-contradiction

I think these results are broadly representative. You can imagine what they imply about how much people are overreaching when doing harder tasks.


curi at 10:09 AM on April 21, 2020 | #16411

upgrade to 21st century thinking maybe


Anonymous at 4:37 PM on April 25, 2020 | #16440

What are you fully confident of?

That's an apple. That's a pig. That's a chair.

I measured with a ruler and that is 6 inches. I put it on a scale and it's 2 lbs. I used a stopwatch and that was 25 seconds.

Those are common knowledge. For me personally, I can identify chess openings with full confidence. That's a Najdorf, that's a Benoni, that's a Ruy Lopez, that's a Scotch. Similarly I'm very confident about how the pieces move and what's a legal move and what can capture what and whether the king is in check and how to write down a move in several standard notations. Those skills have stuck with me through many years of not playing chess (I forgot lots of the more obscure things and got rusty in various ways, but I still know a lot offhand). I remember because I practiced and studied chess a lot in addition to playing a lot. For me, saying "That's the King's Gambit" is similar to saying "That's a watermelon."

It's not just identifications. One can be fully confident about judgments and recommendations. You can look at a rotten apple and say "don't eat that". Similarly I can look at a chess move and say "don't make that move" and be fully confident. Not for any move. But there are some moves that I can clearly identify as bad. Similarly, you may not be fully confident about how good some apples are, but there are some you can clearly, confidently identify as rotten and unfit to eat.

A plumber can identify pipes and tools that I can't. And he can make recommendations like "Sure, you can hook that washing machine up. It'll work just fine with those pipes." And he can be fully confident of that recommendation. He's not guessing or bullshitting or hoping. He's not confused. He knows that.

*What else do you have confident knowledge of? Brainstorm a bunch.*

The basic goal of learning philosophy is to have confident knowledge of lots of philosophy skills and concepts. The basic problem with overreaching is accepting non-knowledge (unconfident knowledge) as success and moving on. So e.g. someone is confused about how commas work and then writes thousands of commas without ever learning which are correct and which are incorrect. Or someone lacks highly confident knowledge of common logical concepts and errors, but then tries to make complex logical arguments (which involve using many logical concepts and need to avoid logical errors).


curi at 9:58 AM on May 1, 2020 | #16471

A different sort of not-overreaching knowledge (aka "knowledge" or "high confidence knowledge" or "conjectural knowledge" (CR) or "certain knowledge" (Oism)):

I sometimes buy meal kit deliveries like Blue Apron, Homechef, Hello Fresh, Gobble, etc.

One of the reasons is I like learning about different foods and ways of cooking.

Some are easier/faster to cook than others. (I'm counting the time as the amount of time I spend on it, not e.g. time in the oven while I'm doing something else.)

I saw the benefit of the easier/faster ones as primarily that it's less time and effort to make them.

I mostly prefer the easier/faster ones.

Today it occurred to me that they're also better as examples of easy/fast ways to cook things. (This applies more with some than others. I just recently got some easy oven ones. Gobble does easy ones where they pre-prepare a lot for you and you dump it in a pan. That doesn't actually mean you could make the same thing yourself easily.)

So it's not just it saves time when you get it from them and make it. It's also teaching you recipes and cooking techniques that you're more likely to use in the future, yourself, if you want easy ways to cook stuff.

I'm really confident of this (the specific thought about the benefit of the faster/easier-to-cook meal kits demonstrating faster/easier cooking techniques for later use). I think I understand what I'm saying. It's not that complicated, at least from my pov, even though it has a fair number of parts and layers (individual foods, meal kits, delivery, recipes, cooking, eating, etc.). I think I have a good grasp of the components that go into the thought too.


curi at 9:24 PM on May 1, 2020 | #16475

I can self-evaluate whether I'm following a book at my current reading speed and whether to read faster or slower, or go back and reread a section. I can do this while reading not just in retrospect. It's a highly practiced skill that I have a lot of confidence about. Same with fast listening/watching. Knowing how well it's working is crucial. It's also important with regular/slow stuff, e.g. being aware of when you don't focus or think about something else and should reread the last page.

Lots of people, instead of being able to evaluate/monitor/measure how well it's going and make adjustments, just have one standard behavior for reading (or listening or watching) which they use regardless. That works much worse even ignoring speedreading, skimming, reading specific sections, etc. Just for plain old regular reading it makes a big difference to be aware when you miss info.


curi at 9:41 AM on May 2, 2020 | #16476

Confidence

How do you know when you're fully confident? Is it just a feeling? If you're fully confident about "I put it on a scale and it's 2 lbs" like does this mean you're confident about the scale working properly too?


Anonymous at 9:42 AM on May 15, 2020 | #16533

Rational Confidence

#16533 Summary of how to get full confidence about solution X for goal G:

Brainstorm all negatives/criticisms about X that you can.

Come up with satisfactory solutions for *all* of them.

Brainstorm negatives/criticisms about all of your solutions, too. And for X again because now you have more info about it.

Come up with satisfactory solutions for *all* of them.

Repeat until (fully) satisfied.

Note: your thinking/brainstorming process should include web searching, looking in books, asking others, etc., as appropriate. It doesn't have to be isolated thinking.

What should satisfy you? I'd say: you don't think of any more negatives after a reasonable effort (reasonable relative to how important this is, how high the positive and negative stakes are – you have to allocate your effort where it's most needed in life). For unimportant stuff that isn't worth allocating much thought, you may act on it and take the risk, but you shouldn't be very confident it'll actually succeed.

Won't the negatives be endless? Won't they get into infinitely fine details or endless tangents? No, because you only want *relevant* negatives. A relevant negative is a reason that X will fail to achieve its goal, G.

Don't suppress brainstorming negatives that won't prevent goal success. Let yourself add a lot to the list. But do review the negatives for goal-success-relevance after a brainstorming iteration.

Summary of summary: the standard for reaching a conclusion (the point where you act or accept an idea, and you move on to learning about something else) is *no known (relevant, decisive) errors*. Relevant, decisive errors, aka just errors, are errors that prevent/contradict goal success, aka cause failure.

Crucial to all this is specifying clear goals with criteria of success and failure. With vague goals that are ambiguous about whether some outcomes count as success or failure, you aren't in a position to judge what is an error (aka failure-causer).

Of course we can't have infinite precision, but we can define our goals clearly enough to achieve success instead of failure. Some judgment is needed. How can you calibrate your judgment and improve it? How can you decide whether to raise or lower your average view of how much precision your goals need and how thoroughly you need to look for negatives? By whether you're succeeding in life at a high rate. Critically consider whether you're actually satisfied with the results you're getting. If not, more precise goals and more thorough critical thinking should be on the short list of generic things to try to fix it.

A loose rule of thumb for how much to think about typical little (but not tiny) things in life is: 3 minutes. You want to review whether any generic or field-specific criticisms *that you already know* apply, and also have a minute to think of new criticism.
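Here's a minimal sketch of that loop in Python. The two function arguments stand in for human thinking steps (brainstorming criticisms, proposing fixes); their names and the dict format are placeholders I made up, not a real API.

```python
# A minimal sketch of the criticism -> solution loop described above. The two
# callables stand in for human thinking; criticisms are assumed to be dicts
# with a "decisive" flag (my placeholder format).

def reach_conclusion(idea, goal, brainstorm_criticisms, propose_fix, max_rounds=10):
    """Iterate criticism -> fix until there are no known decisive errors left."""
    parts = [idea]  # the candidate idea plus every fix added along the way
    for _ in range(max_rounds):
        # only criticisms that would actually cause the goal to fail count
        # (in real thinking you'd also skip criticisms you've already answered)
        criticisms = [c for p in parts for c in brainstorm_criticisms(p, goal) if c["decisive"]]
        if not criticisms:
            return idea, parts  # no known decisive errors: accept, tentatively
        parts.extend(propose_fix(c, goal) for c in criticisms)
    raise RuntimeError("still finding decisive criticisms; don't accept the idea yet")
```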

---

Yes you have to be confident your scale works if the measurements are important, though you can also narrowly consider your knowledge of how to use a functioning scale and be confident in that. Confidence in a scale is not hard to achieve. You can compare several scales if you doubt it works. But in general we have factories good at mass producing accurate scales, and when there is an error it's usually either small or big enough to disagree with our estimate of what the weight should be (if it's something you can't estimate and it's important, use 2 scales or test your one scale with a couple things you can estimate the weight of). Medium errors are rare, big errors are generally easy to notice, and we usually don't need very precise measurements so small errors don't matter.

A perspective on why medium errors are rare is: an error can be at any order of magnitude. Very roughly: Small errors range from negative infinity to -1 order of magnitude. Medium range from 0 to 2. And big errors from 3 to positive infinity. So medium is a much, much smaller category than the others. This is just for quantity/numerical errors not conceptual errors.
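A toy version of that banding (the exact cutoffs are my own fill-in for the rough ranges, reading "order" as the error's size relative to the true value):

```python
# Classify an error by its base-10 order of magnitude relative to the true value.
# Cutoffs are illustrative choices matching the rough bands above.
import math

def classify_error(measured, true_value):
    if measured == true_value:
        return "small"  # no error at all
    order = math.log10(abs(measured - true_value) / abs(true_value))
    if order <= -1:
        return "small"   # off by 10% or less: usually doesn't matter
    if order < 3:
        return "medium"  # the comparatively narrow band that's hardest to catch
    return "big"         # off by ~1000x or more: easy to notice

print(classify_error(2.05, 2.0))    # small
print(classify_error(5.0, 2.0))     # medium
print(classify_error(5000.0, 2.0))  # big
```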


curi at 10:20 AM on May 15, 2020 | #16535

#16535 Talking about confidence is an attempt to connect to people's existing common sense intuitions. People already have confidence of some things and not others, and ways of judging.

If you want more philosophical precision, a better term to think about is *conclusive knowledge*. What does it take to *reach a conclusion*? (A tentative, fallible conclusion that could be reopened in the future, as Popper emphasized. But still a conclusion. At some point you act, you accept an idea, you move on to something else. You tentatively finish things, including learning. There are lots of things I learned in the past that are not currently open issues, so there must have been some point at which they switched from open to currently-not-open issues. Or, sure, there could be a transition period. But I think a decision at a single point in time is a good enough model for my purposes.)

And that's what I was talking about in #16535 with the no known errors standard for conclusive knowledge.

Building up complex knowledge from pieces with no known errors works *much better* than building with pieces with known errors. It gets you a way lower and more manageable error rate. And ignoring errors makes no sense once you understand errors as things that cause goal failure. The reason people want to ignore some errors is that they're used to conceptualizing minor, ignorable things as errors – if you insist on doing that, then differentiate dealbreaker/decisive errors from other errors, and use the standard of no known decisive errors.

I think those non-failure-causing "errors" are basically opportunities to optimize non-bottlenecks. See Goldratt.


curi at 11:16 AM on May 15, 2020 | #16537
