Reply to Robert Spillane

I'm not trying to make ad hominem remarks. I put effort into avoiding them. It is nevertheless possible that an argument targets idea X, but CR was saying Y, not X. It's also possible that CR makes a statement in its own terminology which is misread by substituting some word meanings with those favored by a rival philosophy. I don't see anything against-the-person about bringing up these issues.

I reject Popper's three worlds. I think there's one world, the physical world. I think minds and ideas have physical existence in that one world, just like running computer software and computer data physically exist. More broadly, the laws of physics say that information exists and specify rules for it (the rules of computation); ideas are a type of information.

I've never selected philosophy ideas by nationality, and never found pragmatism appealing. Nor am I getting material from Quine. And I don't accept the blame for Feyerabend, who made his own bad choices. Here's a list of philosophers I consider especially important: Karl Popper, David Deutsch, Ayn Rand, Ludwig von Mises, William Godwin, Edmund Burke, Thomas Szasz, and some ancient Greeks.

All propositions are synthetic because the laws of logic and math depend on the laws of computation (including information processing) which depend on the laws of physics. Our understanding of physics involves observation, and the particular laws of physics we have are contingent. Epistemology and evolution depend on physics too, via logic and computation, and also because thinking and evolving are physical processes.

Of course I agree with you that the goal is to find truth, not power or bullying or popularity.

Stove on Paley didn't answer my questions, but gave me some indication of some of your concerns, so:

I do not accept any kind of genetic or biological determinism, nor Darwinian "survival of the fittest" morality. Men have free will and are not controlled by a mixture of "influences" like genes, memes, culture, etc. Under "influences" I include claims like "that personality trait is under 60% genetic control" – claims in which genes are said to partially influence, but not fully control, some human behavior.

I have read some of the studies in this field and their quality is terrible. I could tell you how to refute some of their twin studies, heritability claims, etc, but I'm guessing you already know it.

I think "influences" may play a significant role in two ways:

1) A man may like and agree with an "influence", and pursue it intentionally. E.g. his culture praises soldiers, and he finds the profession appealing and chooses to become a soldier. Here the "influence" is actually just an option or piece of information which the man judges.

or

2) "Influences" matter more when a man is irresponsible and passive. If you don't take responsibility for your life, someone or something else may partially fill the void. If you don't actively control your life, then there's room for external control. A man who chooses to play the role of a puppet, and lets "influences" control him, may partially succeed.

Regarding Miller: by your terminology, I'm also a critic of Popper.

When two philosophers cannot agree on basic definitions,

could you give definitions of knowledge and induction? for clarity, i'll be happy to call my different concepts by other words such as CR-knowledge.

You state that 'I disagree with and deny the whole approach of a priori knowledge and the analytic/synthetic dichotomy.' But Popper, as a rationalist, relies on a priori knowledge, i.e. primitive theories which are progressively modified by trial and error elimination.

Inborn theories aren't a priori, they were created by genetic evolution. (They provide a starting point but DO NOT determine people's fate.)

when I try to argue with you, and you disagree with my mode of arguing, which is widely accepted in philosophical circles, it is difficult to know how to respond to your questions.

i think this is important. I have views which disagree with what is, i agree with you, "widely accepted in philosophical circles". it is difficult to understand different frameworks than the standard one, but necessary if you want to e.g. evaluate CR.

For example, with respect to Szasz you write that he 'doesn't write deductions, formal logic and syllogisms'. True, he doesn't use symbolic logic but his life's work was based on the following logic (see Szasz Under Fire, pp.321-2 where he relies on the analytic-synthetic distinction):

"When I [Szasz] assert that (mis)behaviors are not diseases I assert an analytic truth, similar to asserting that bachelors are not married...InThe Myth of Mental Illness, I argued that mental illness does not exist not because no one has yet found such a disease, but because no one can find such a disease: the only kind of disease medical researchers can find is literal, bodily disease."

I acknowledge that I disagree with Szasz about analytic/synthetic. Unfortunately he died before we got to resolve the matter.

However, I think Szasz's main point is that no observations of "patients" could refute him. I agree. Facts about "patients" can't challenge logical arguments.

However, as I explained above, I don't think logic itself is analytic. I think observations which led to a new understanding of physics could theoretically (I don't expect it) play a role in challenging Szasz's logical arguments.

Here is Szasz's logic:

  • Illness affects the human body (by definition);
  • The 'mind' is not a bodily organ;
  • Therefore, the mind cannot be or become ill;
  • Therefore mental illness is a myth.
  • If 'mind' is really the brain or a brain process;
  • Then mental illnesses are brain illnesses.
  • Since brain illnesses are diagnosed by objective medical signs,
  • And mental illnesses are diagnosed by subjective moral criteria;
  • Mental illnesses are not literal illnesses
  • And mental illness is still a myth.

If this is not deductive reasoning, then what is?

That isn't even close to a deductive argument. For example, look how "myth" is used in a conclusion statement (begins with "therefore"), without being introduced previously. You couldn't translate this into symbolic logic and make it work. Deduction has very strict rules, which you haven't followed.
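To illustrate (a rough sketch of my own, using made-up predicate names, not anything Szasz or Spillane wrote): try rendering the first few bullets in first-order logic and the gaps show up immediately:

    1. ∀x (Illness(x) → AffectsBody(x))   [premise: illness affects the body, by definition]
    2. ¬BodilyOrgan(mind)                 [premise: the mind is not a bodily organ]
    3. ¬CanBeIll(mind)                    [doesn't follow from 1 and 2 alone; it needs a further
                                           premise such as ∀x (CanBeIll(x) → BodilyOrgan(x))]
    4. Myth(MentalIllness)                [can't follow at all: the predicate "Myth" and the term
                                           "MentalIllness" appear in no premise, so no rule of
                                           inference introduces them]

Step 3 is at least repairable by adding premises; step 4 isn't, which is the point above.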

As to what is deductive reasoning: no one does complex, interesting philosophy arguments using only deduction. Deduction is fine but limited.

I do appreciate the argument you present. I think it's well done, valuable, and rational. It's just not pure deduction (nor a combination of deduction and induction).

I would normally just call it an "argument". CR doesn't have some special name for what type of reasoning it is. We could call it a CR-argument or CR-reasoning if you like. You ask what's left for reasoning besides induction and deduction, and I'd point to your example and say that's just the kind of thing I think is a typical argument. (Your argument is written to appear to resemble deduction more than is typical, so the style is a bit uncommon, but the actual way it works is typical.)

'The basic point here is to judge an idea by what it says...' Quite so. But how do you do that?

By arguments like the "mental illness" example you provided, and the socialism and price controls example I provided previously. By using arguments to criticize mistakes in ideas. etc.

You write: 'The claim [Stove's and mine] that "there are good reasons to believe inductively-derived propositions" doesn't address Popper's arguments that inductively-derived propositions don't exist.' This follows more than half a page of reasons why they do exist. And, contrary to your claim, I gave you an example of a good (i.e. reasonable, practical, useful) reason to believe an inductively-derived proposition. What more can I say?

You write: 'This is typical. I had an objection to the first sentence following "Inductivists do have answer for you." It made an assumption I consider false. It then proceeds to build on that assumption rather than answer me.' That obnoxious sentence is 'Stove has argued, correctly in my view, that there are good reasons to believe inductively-derived propositions.' What is the assumption you consider false? I then proceed to provide Stove's arguments. Is not this what critical rationalists encourage us to do with their platitudes about fallibility, willingness to argue a point of view? Those arguments, whether valid or invalid, do provide reasons why one might reject Popper's authoritarian pronouncement that inductively-derived propositions don't exist. Of course, they exist, even if Popper does not grant them legitimacy.

We're talking about too many things at once. If you think this is particularly important, I could answer it. I do attempt to continue the discussion of induction below.

You write: 'But as usual with everyone, so far nothing RS has said gives even a hint of raising an anti-CR argument which I don't have a pre-existing answer for.' Well, then future argument is pointless because your 'fallibilism' is specious. If you have already decided in favour of CR, I doubt there are any critical arguments which you will consider. You appear to have developed your personal version of CR and immunised yourself against criticism, a vice which Popper in theory, if not in practice, warned against.

I'm open to changing my mind.

I have discussed these issues in the past and made judgements about some ideas. To change my mind, you'll have to say something new to me. I expect the same the other way around: if I don't have anything to say that you haven't heard before, then I won't change your mind.

I have a lot of previous familiarity with these issues. So far you haven't come near the edges of my pre-existing knowledge. You haven't said something about epistemology which is surprising or new for me (nor has Stove in what I read). Minor details differ, but not main points.

That's OK, I would expect it to take more discussion than we've done so far to get beyond people's already-known arguments.

It's right and proper that we each have initial (level 1) responses ready which cover many issues a critic could raise. And when he responds to one of them, we again have level 2 responses ready for many things he may say next. Very educated, experienced persons may have a dozen levels of responses that they already know. So to change their minds, one has to either say something surprising early on (that they didn't hear or think of previously, so they don't have a pre-existing answer) or else go through the levels and then argue with some of their ideas near the limits of their knowledge.

So far your comments regarding induction have been typical of other inductivists I've spoken with.

A review of Popper's work was published in 1982 in The New York Review (Nov. 18, pp. 67-68, and Dec. 2, pp. 51-56). I could not express my reservations better than this:

'Popper's philosophy of science is profoundly ambiguous: it is, he says, "empirical", but it is left unclear why scientists should consult experience.

The reason for consulting experience is to criticize ideas which contradict experience (because we want ideas which match reality). That is not "left unclear", it's stated clearly by Popper.

It is called "fallibilism", in which "we learn from our mistakes", but it is really an ill-concealed form of skepticism.

The skepticism accusation is an assertion, not an argument.

It claims to surrender the quest for certainty, but it is precisely the standards of this quest - that if one is not certain of a proposition, one can never be rationally justified in claiming it to be true - that underlie Popper's rejection of induction (and the numerous doctrines that stem from this rejection).

Popper did NOT reject induction for being fallible or imperfect, he rejected it for reasons like:

1) Any finite set of data is compatible with infinitely many generalizations, so by what method does induction select which generalizations to induce from those infinite possibilities?

2) How much support does X give Y, in general? And what difference does that make?

Induction fails to meet these challenges, and without answers to them it can't be used at all. These aren't "it's imperfect" type issues; they are things that must be addressed before induction can be used for anything.
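To make challenge (1) concrete, here's a minimal sketch (my illustration, with made-up "laws", not an example from Stove or Popper). Both rules below agree with every observation collected so far, yet disagree about the very next case, so the observations alone can't pick between them – and varying the constant gives infinitely many more such rivals:

    # Observations so far: f(0)=0, f(1)=1, f(2)=2.

    def law_a(x):
        return x  # the "natural" generalization

    def law_b(x, c=7):
        # The extra term vanishes at x = 0, 1 and 2, so law_b matches every
        # observation exactly; each different c gives yet another such law.
        return x + c * x * (x - 1) * (x - 2)

    observations = [0, 1, 2]
    assert all(law_a(x) == law_b(x) for x in observations)  # identical on all data
    print(law_a(3), law_b(3))  # 3 vs. 45: the evidence can't decide

Induction, as defined, owes an answer to how law_b and its infinitely many relatives get excluded before any talk of evidential support can start.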

There have been some attempts to meet these challenges, but I don't think any succeeded, and Popper pointed out flaws in some of them. If you can answer the questions, or give page numbers where Stove does, I will comment.

If you wish to address (2), note that "in general" includes non-mathematical issues, e.g. the beauty of a piece of music or flower. (And if you think induction can't address those beauty issues, then I'm curious what you propose instead. Deduction? Some third thing which will, on examination, turn out to have a lot in common with CR?)



More Robert Spillane Discussion

This reply to Robert Spillane follows up on this previous discussion. Here's a full list of posts related to Spillane.

Thank you for your respectful reply. I think we are making progress.

It has been helpful to have you clarify which parts of Popper you accept.

Great.

I am reminded of an interesting chapter in Ernest Gellner's book Relativism and the Social Sciences (1985, Ch. 1: 'Positivism and Hegelianism'), where he discusses early versus late Popper, supports the former against the latter, and concludes that Popper is (a sort of) positivist. It is an interesting chapter and one I would happily discuss with you.

Like Gellner, I am sympathetic to Popper's 'positivism' but cannot accept his rejection of inductive reasoning. Like you (and Szasz), I reject his 3 Worlds model.

Popper was an opponent of the standard meaning of positivism. I mean something like this dictionary definition: "a philosophical system that holds that every rationally justifiable assertion can be scientifically verified or is capable of logical or mathematical proof, and that therefore rejects metaphysics and theism."

So what sort of "positivism" are you attributing to Popper?

I've ordered the book.

Re your favourite philosophers: you might read Szasz's critical comments on Rand, Branden, Mises, Hayek, Rothbard and Nozick in Faith in Freedom: Libertarian Principles and Psychiatric Practices, (Transaction Publishers, 2004). Even though I received the Thomas Szasz Award in 2006, I told Tom that I could not commit myself to (economic) libertarianism in the way that he did and you appear to do. I accept the primacy of personal freedom but do not accept the economic freedom favoured by libertarians. Indeed, I would have thought that by now, in the age of huge corporations, neo-liberalism is on its last legs. I respect your position, however.

Yes, I'm fully in favor of capitalism.

Yeah, I discussed Faith in Freedom with Szasz, but I don't have permission to share the discussion. One thing Szasz did in the book was use some criticism of Rand from Rothbard. I could tell you criticism of Rothbard's arguments if you wanted, though I think he's best ignored. I do not consider Rothbard or Justin Raimondo to be decent human beings, let alone reliable narrators regarding Rand. I was also unimpressed by Szasz's criticisms of Rand's personal life in the book, and would prefer to focus on her ideas. And I think Szasz made a mistake by quoting Whittaker Chambers' ridiculous slanders.

FYI I only like Rand and Mises from the list of people you mention, and I agree with Szasz that they were mistaken regarding psychiatry. (Rand didn't say much on psychiatry, and some of it was good, as Szasz discusses. But e.g. she got civil commitment partly wrong.)

You may be interested to know that Rand spoke very critically of libertarians, especially Hayek and Friedman (who both sympathized with socialism, as did Popper). She thought libertarians were harming the causes of liberty and capitalism with their unprincipled, bad philosophy. I agree with her.

Rand did appreciate Mises because he was substantially different than the others: he was an anti-anarchy classical liberal, a consistent opponent of socialism, and he was very good at economics.

We have criticisms of many libertarian ideas from the right.

Let me mention that I'm not an orthodox Objectivist. I do not like the current Objectivist leadership like Peikoff, Binswanger, and the Ayn Rand Institute. I am banned from the main Objectivist forum for dissenting regarding epistemology (especially induction, fallibilism and perception). I also dissented regarding psychiatry, but discussion of psychiatry was banned before much was said.

If you're interested, I wrote about what the disagreements were and the decision to ban me. I pointed out various ways my views and actions are in line with Ayn Rand's philosophy and theirs aren't. It clarifies some of my philosophy positions:

http://curi.us/1930-harry-binswanger-refuses-to-think

There was no reply, no counter-argument. I am aware that they will hold a grudge for life because I wrote that.

I also made a public record of what I said in my discussions with them:

http://curi.us/1921-the-harry-binswanger-letter-posts

Warning: my comments are book length.

I have spent my career in the space between neo-positivism (Hume, Stove) and a critical existentialism (Sartre, Szasz). You might see inconsistencies here but I have always agreed with Kolakowski who wrote in his excellent book Positivist Philosophy (pp. 242-3):

'The majority of positivists tend to follow Wittgenstein's more radical rule: they do not simply reject the claims of metaphysics to knowledge, they refuse it any recognition whatever. The second, more moderate version is also represented, however, and according to it a metaphysics that makes no scientific claims is legitimate. Philosophers who, like Jaspers, do not look upon philosophy as a type of knowledge but only as an attempt to elucidate Existenz, or even as an appeal to others to make such an attempt, do not violate the positivist code. This attitude is nearly universal in present-day existential phenomenology. Awareness of fundamental differences between 'investigation' and 'meditation', between scientific 'accuracy' and philosophic 'precision', between 'problems' and 'questioning' or 'mystery' is expressed by all existential philosophers...'

I broadly disagree with attempts to separate some thinking or knowledge from reality.

As an aside: I asked Tom Szasz, since he has been appropriated by some existentialists, whether he accepted that label. He thought about it for an hour and said: 'Yes, I'm happy to be included among the existentialists. However, if Victor Frankl is an existentialist, I'm not!' Frankl, despite his reputation as a humanist/existentialist, boasted of having authorised many lobotomies, and conducted a few himself, on people without their consent.

Your criticism of the analytic/synthetic dichotomy reminds me of Quine but expressed differently. I disagree with you (and Quine) and agree with Hume, Stove and Szasz (and many others) on this issue. I am confident that had Szasz lived for another 50 years, you would not have convinced him that all propositions are synthetic and therefore are either true or false. He and I believe that the only necessities (i.e. necessary truths) in the world are those expressed as analytic propositions, and these tell us nothing about the world of (empirical) facts.

I don't believe necessary truths like that exist. I think people mistake features of reality (the actual reality they live in) for necessary truths. In our world, logic works a particular way, but it didn't necessarily have to. People fail to imagine how some things could be otherwise because they are used to the laws of physics we live with.

If you have a specific criticism of my view, I'll be happy to consider it.

I think I would have persuaded Szasz in much less than 50 years, if I'm right. Or else Szasz would have persuaded me. I don't think it would have stayed unresolved.

I found Szasz extraordinarily rational and open to criticism, more so than anyone else I've ever discussed with.

I'm delighted that you do not buy into Dawkins' nonsense about 'memes' even if you use 'ideas' as if they are things. Stove on Dawkins hits the mark.

There may be a misunderstanding here. I do buy into David Deutsch's views about memes! I accept memes exist and matter. But I think memes are popularly misunderstood and don't lead to the conclusions others have said they do.

I know that Szasz disagreed with me about memes. He did not, however, provide detailed arguments regarding evolution.

'Knowledge' and 'idea' are abstract nouns and therefore, as a nominalist, I'm bound to say they don't exist, except as names.

I consider them the names of either physical objects (like chairs) or attributes of physical objects (like the color red). As a computer hard drive can contain a file, a brain can contain an idea.

I encourage my students to rely less on nouns and more on verbs (from which most nouns originated). You asked for two definitions:

To 'know' means 'to perceive or understand as fact or truth' (Macquarie Dictionary, p.978). Therefore 'conjectural knowledge' is oxymoronic.

This is ambiguous about whether the understanding may be fallible or not.

Do you need a guarantee of truth to have knowledge, or just an educated guess which is correct according to your current best-efforts at understanding?

Why can't one conjecturally (fallibly) understand something to be a fact?

Induction: 'the process of discovering explanations for a set of particular facts, by estimating the weight of observational evidence in favour of a proposition which asserts something about the entire class of facts' (MD, p.904).

Induction: 'a method of reasoning by which a general law or principle is inferred from observed particular instances...The term is employed to cover all arguments in which the truth of the premise, or premises, while not entailing the truth of the conclusion, or conclusions, nevertheless purports to constitute good reasons for accepting it, or them... With the growth of natural science philosophers became increasingly aware that a deductive argument can only bring out what is already implicit in its premises, and hence inclined to insist that all new knowledge must come from some form of induction. (A Dictionary of Philosophy, Pan Books, 1979, pp.171-2).

I agree that those are typical statements of induction. How do you address questions like:

Which general laws, propositions, or explanations should one consider? How are they chosen or found? (And whatever method you answer, how does it differ from CR's brainstorming and conjecturing?)

When and why is one idea estimated to have a higher weight of observational evidence in favor of it than another idea? Given the situation that neither idea is contradicted by any of the evidence.

I think these issues are very important to our disagreement, and to CR's criticism of induction.

You say that 'inborn theories are not a priori'. But a priori means prior to sense experience and so anything 'inborn' must be a priori by definition.

A priori means "relating to or denoting reasoning or knowledge that proceeds from theoretical deduction rather than from observation or experience" (New Oxford American Dictionary).

Inborn theories, which come from genes, don't come from theoretical deduction, nor from observation. Their source is evolution. This definition offers a false dichotomy.

Another definition (OED):

"A phrase used to characterize reasoning or arguing from causes to effects, from abstract notions to their conditions or consequences, from propositions or assumed axioms (and not from experience); deductive; deductively."

that doesn't describe inborn theories from genes.

inborn theories are like the software which comes pre-installed on your computer, which you can replace with other software if you prefer.

inborn theories don't control your life, it's just that thinking needs a starting point. similar to how your life has a starting time and place, which does matter, but doesn't control your fate.

these inborn theories are nothing like analytical ideas or necessary truths. they're just regular ideas, e.g. we might have inborn ideas about the danger of snakes (the details of which ideas are inborn is largely unknown) which were created because of actual encounters with snakes before we were born. but that's still not created by observation or experience, because genes and evolution can neither observe nor experience.

Spillane wrote previously:

Here is Szasz's logic:

  • Illness affects the human body (by definition);
  • The 'mind' is not a bodily organ;
  • Therefore, the mind cannot be or become ill;
  • Therefore mental illness is a myth.
  • If 'mind' is really the brain or a brain process;
  • Then mental illnesses are brain illnesses.
  • Since brain illnesses are diagnosed by objective medical signs,
  • And mental illnesses are diagnosed by subjective moral criteria;
  • Mental illnesses are not literal illnesses
  • And mental illness is still a myth.

If this is not deductive reasoning, then what is?

I denied that this is deduction, and I pointed out that "myth" is introduced for the first time in a conclusion statement, so it doesn't follow the rules of deduction. Spillane now says:

If the example of Szasz's logic is not deductive - the truth of the conclusion is implicit in the premise - what sort of argument is it? If you remove #4, would you accept it as a deductive argument?

I think it deviates from deduction in dozens of ways, so removing #4 won't help. For example, the terms "objective", "subjective" and "literal" are introduced towards the end without using previous premises and syllogisms to establish anything about them. I also consider it incomplete in dozens of ways (as all complex arguments always are). You could try to write it as formal (deductive) logic, but I think you'd either omit most of the content or fail.

I don't think the truth of the conclusion is implicit in the premises. I think many philosophers have massively overestimated what they could translate to equivalent formal deductions. So I regard it simply as an "argument", just like most other arguments which don't fall into the categories non-Popperian philosophers are so concerned with.

And even if some arguments could be rewritten as strict deductions, people usually don't do that, and they can still learn and make progress anyway.

Rather than worrying about what category an argument falls into, CR is concerned with whether you have a criticism of it – that is, an argument for why it's false.

I don't think pointing out "that isn't deduction" is a criticism, because being non-deductive is compatible with being true. (The same comment applies to induction.)

I also don't think that pointing out an idea is incomplete is a criticism without further elaboration. What matters is if the idea can succeed at its purpose, e.g. solve a problem, answer a question, explain an issue. An idea may do that despite being incomplete in some way because the incompleteness may be irrelevant.

My epistemological position should be clear from what I have said above - it is consistent with a moderate form of neo-positivism.

That Popper's fallibilism is ill-concealed skepticism has been argued at length, by many Popper scholars, e.g. Anthony O'Hear. It was even argued in the book review mentioned.

I don't care how many people argued something at what length. I only care if there are specific arguments which are correct.

Are you denying that you are fallible (capable of making mistakes)? Do you think you sometimes have 100% guarantees against error?

Or do you just deny the second part of Popper's fallibilism? His claim that, in the world today, mistakes are common even when people feel certain they're right.

If it's neither of those, then I don't know what your issue with fallibilism is.

I have already given you (in a long quote) examples of inductively-derived propositions that are 'reasonable'. Now they may not be reasonable to a deductivist, but that only shows that deductivists have a rigid definition of 'rational', 'reasonable' and 'logical'. Given that a very large number of observations of ravens has found that they are black without exception, I have no good reason to believe the next one will be yellow, even though it is possible. That the next raven may be yellow is a trivial truth since it is a tautology. Accordingly, I have a good reason to believe that the raven in the next room is black.

OK I'll address this topic after you answer my two questions about induction above.



Discussing Necessary Truths and Induction with Spillane

You often ask me for information/arguments that I have already given you

We're partially misunderstanding each other because communication is hard and we have different ways of thinking. I'm trying to be patient, and I hope you will too.

Please address these two questions about induction. Answering with page numbers from a book would be fine if they directly address it.

I've read lots of inductivist explanations and found they consistently don't address these questions in a clear, specific way, with actual instructions one could follow to do induction if one didn't already know how. I've found that sometimes accounts of induction give vague answers, but not actionable details, and sometimes they give specifics unconnected to philosophy. Neither of those are adequate.

1) Which general laws, propositions, or explanations should one consider? How are they chosen or found? (And whatever method you answer, how does it differ from CR's brainstorming and conjecturing?)

2) When and why is one idea estimated to have a higher weight of observational evidence in favor of it than another idea? Given the situation that neither idea is contradicted by any of the evidence.

These are crucial questions to what your theory of induction says. The claimed specifics of induction vary substantially even among people who would agree with the same dictionary definition of "induction".

I've read everything you wrote to me, and a lot more in references, and I don't yet know what your answers are. I don't mind that. Discussion is hard. I think they are key questions for making progress on the issue, so I'm trying again.

As a fallibilist, you acknowledge that the 'real world' is a contingent one and there are no necessary truths. But is not 1+1=2 a necessary truth? Is not 'All tall men are men' a necessary truth since its negation is self-contradictory?

I'll focus on the math question because it's the easier case to discuss first. If we agree on it, then I'll address the A is A issue.

I take it you also think the solution to 237489 * 879234 + 8920343 is a necessary truth, as well as much more complex math. If instead you think that's actually a different case than 1+1, please let me know.

OK, so, how do you know 1+1=2? You have to figure out what 1+1 sums to. You have to calculate it. You have to perform addition.

The only means you have to calculate sums involve physical objects which obey the laws of physics.

You can count on your fingers, with an abacus, or with marbles. You can use a Mac or iPhone calculator. Or you can use your brain to do the calculation.

Your knowledge of arithmetic sums depends on the properties of the objects involved in doing the addition. You believe those objects, when used in certain ways, perform addition correctly. I agree. If the objects had different properties, then they'd have to be used in different ways to perform addition, or might be incapable of it. (For example, imagine an iPhone had the same physical properties as an iPhone-shaped rock. Then the sequences of touches that currently sum 1 and 1 on an iPhone would no longer work.)

Your brain, your fingers, computers, marbles, etc, are all physical objects. The properties of those objects are specified by the laws of physics. The objects have to be used in certain ways, and not other ways, to add 1+1 successfully. What ways work depends on the laws of physics which say that, e.g., marbles don't duplicate themselves or disappear when arranged in piles.

So I don’t think 1+1=2 is a truth independent of the laws of physics. If there's a major, surprising breakthrough in physics and it turns out we're mistaken about the properties of the physical objects used to perform addition, then 1+1=2 might have to be reconsidered because all our ways of knowing it depended on the old physics, and we have to reconsider it using the new physics. So observations which are relevant to physics are also relevant to determining that 1+1=2.
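A trivial illustration of that dependence (mine, not Deutsch's): when I check the bigger sum from above on a computer, the output is evidence about arithmetic only insofar as our physics of the hardware is right:

    # Run on any computer: trusting this output means trusting the physics
    # of the transistors that carried out the abstract rules of arithmetic.
    print(237489 * 879234 + 8920343)  # prints 208817323769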

This is explained in "The Nature of Mathematics", which is chapter 10 of The Fabric of Reality by David Deutsch. If you know of any refutation of Deutsch's explanation, by yourself or others, please let me know. Or if you know of a view on this topic which contradicts Deutsch's, but which his critical arguments don't apply to, then please let me know.

I believe that Einstein is closer to the truth of what you call the real world than was Aristotle. So when I'm told by this type of fallibilist that we don't know anymore today than we did 400 years ago, I demur.

Neither Popper nor I believe that "we don't know anymore today than we did 400 years ago".

Given your comments on LSD and the a-s dichotomy, after reading this I conclude that you are a fan of late Popper (LP) and I prefer early Popper (EP).

Yes.

You think EP is wrong, and I think LP is right, so I don't see the point of talking about EP.

(I disagree with your interpretation of EP, but that's just a historical issue with no bearing on which philosophy of knowledge ideas are correct. So I'm willing to concede the point for the purpose of discussion.)

Gellner argued that Popper is a positivist in the logical positivist rather than the Comtean positivist sense. His discussion proceeded from the contrasting of positivists and Hegelians and so he put (early) Popper in the positivist camp - Popper was certainly no Hegelian. Of course, Popper never tired of reminding us that he destroyed the positivism of the Vienna Circle and went to great pains to declare himself opposed to neo-positivism. For example, he says that he warmly embraces various metaphysical views which hard positivists would dismiss as meaningless. Moderate positivists, however, accept metaphysical views but deny them scientific status. Does not Popper do this too, even if some of these views may one day achieve scientific status?

Yes: (Late) Popper accepts metaphysical and philosophical views, but doesn't consider them part of science.

CR (late-CR) says non-science has to be addressed with non-observational criticisms, instead of what we do in science, which is a mix of observational and non-observational criticism.

If by fallibilism you mean searching for evidence to support or falsify a theory, I'm a fallibilist. If, however, you mean embracing Popper's view of 'conjectural knowledge' and the inability, even in principle, of arriving at the truth, then I'm not. I believe, against Popper, Kuhn and Feyerabend, that the history of science is cumulative.

No, fallibilism means that (A) there are no guarantees against error. People are capable of making mistakes and there's no way around that. There's no way to know for 100% sure that a proposition is true.

CR adds that (B) errors are common.

Many philosophers accept (A) as technically true on logical grounds they can't refute, but they don't like it, and they deny (B) and largely ignore fallibilism.

I bring this up because, like many definitions of knowing, yours was ambiguous about whether infallibility is a requirement of knowing. So I'm looking for a clear answer about your conception of knowing.



Plateauing

I wrote these comments for the Fallible Ideas discussion group:

Plateauing while learning is an important issue. How do people manage that initial burst of progress? Why does it stop? How can they get going again?

This comes up in other contexts too, e.g. professional gamers talk about it. World class players in e.g. Super Smash Bros. Melee talk about how you have to get through several plateaus to get to the top, and have offered thoughts on how to do that. While they’ve apparently successfully done it themselves, their advice to others is usually not very effective for getting others past plateaus.

One good point I’ve heard skilled gamers say is that plateauing is partly just natural due to learning things with more visible results sometimes, and learning more subtle skills other times. So even if you learn at a constant rate, your game results will appear to have some plateauing anyway. Part of the solution is to be patient, not get disheartened, and keep trying. Persistence is one of the tools for beating plateaus (and persistence is especially effective when part of the plateau is just learning some stuff with less visible benefits – though if you’re stuck on some key point then mere persistence won’t fix that problem).

When gamers talk about “leveling up” their play, or taking their play “to another level”, it implicitly refers to plateaus. If skill increases were just a straight 45 degree line then there’d be no levels, it’d all just blend together. But with plateaus, there are distinguishable different levels you can reach.

It can be really hard to tell how much people plateau because they’re satisfied and don’t care about making further progress vs. they got stuck and rationalize it that way. That applies both to gamers and to philosophy learners. [A poster] in various ways acted like he was done learning instead of trying to get past his plateau – but was that the cause of him being stuck, or was it a reaction to being stuck?


A while after people plateau, they commonly go downhill. They don’t just stay stable, they fall apart. Elements of this have been seen with many posters. (Often it’s ambiguous because people do things like quit philosophy without explaining why. So one can presume they fell apart in some way, some kind of stress got to them, but who knows, maybe they got hit by a car or got recruited by the CIA.)

In general, stagnation is unstable. This is something BoI talks about. It’s rapid progress or else things fall apart. Why? Problems are inevitable. Either you solve them (progress) or things start falling apart (unsolved problems have harmful consequences).

New problems will come up. If your problem solving abilities are failing, you’re screwed. If your problem solving abilities are working, you’ll make progress. You don’t just get to stand still and nothing happens. There are constantly issues coming up threatening to make things worse, and the only solution is problem solving which actually takes you forward.

So anyway people come to philosophy, make progress, get stuck, then fall apart.

A big part of why this happens is they find some stuff intuitively easy, fun, etc, and they get that far, then get stuck at the part where it requires more “work”, organization, studying books, or whatever else they find hard. People have the same issue in school sometimes – they are smart and coast along and find classes easy, then they eventually run into a class where they find the material hard and it can be a rough transition to deal with that or they can just abruptly fail.

Also people get excited and happy and stuff. Kinda like being infatuated with a new person they are dating. People do that with hobbies too. And that usually only happens once per person per hobby. Usually once their initial burst of energy slows down (even if they didn’t actually get stuck and merely were busy for a month) then they don’t know how to get it back and be super interested again.

After people get stuck, for whatever reason, they have a situation with some unsolved problems. What then happens typically is they try to solve those problems. And fail. Repeatedly. They try to get unstuck a bunch and it doesn’t work (or it does work, and then quite possibly no one even notices what happened or regards it as a plateau or being stuck). Usually if people are going to succeed at solving a problem they do it fast. If you can’t solve a problem within a week, will a month or year help? Often not. If you knew how to solve it, you’d solve it now. So if you’re stuck or plateauing it means all your regular methods of solving problems didn’t work. You had enough time to try everything you know how to do and that still didn’t work. Some significant new idea, new creativity, new method, etc, is needed. And people don’t know how to persistently and consistently pursue that in an organized effective way – they can just wait and hope for a Eureka that usually never comes, or go on with their life and hope somehow, someway, something ends up helping with the problem or they find other stuff to do in life instead.

People will try a bunch of times to solve a problem. They aren’t stuck quietly, passively, inactively. They don’t like the problem(s) they’re stuck on. They try to do something about it. This repeated failure takes a toll on their morale. They start questioning their mental capacity, their prospects for a great life, etc. They get demoralized and pessimistic. Some people last much longer than others, but you can see why this would often happen eventually.

And people who are living with this problem they don’t like, and this recurring failure, often turn to evasion and rationalization. They lie to themselves about it. They say it’s a minor problem, or it’s solved. They find some way not to think about it or not to mind it. But this harms their own integrity, it’s a deviation from reason and it opens the door to many more deviations from reason. This often leads to them falling apart in a big way and getting much worse than they were previously.

And people often want to go do something else where their pre-existing methods of thinking/learning/etc work, so they can have success instead of failure. So they avoid the stuff they are stuck on (after some number of prior failures which varies heavily from just a couple to tons). This is a bad idea when they are stuck on something important to their life and end up avoiding the issue by spending their time on less important stuff.

So there’s a common pattern:

  1. Progress. They use their existing methods of making progress and make some progress.

  2. Stuck. They run into some problems which can't be solved with their pre-existing methods of thinking, learning, problem solving, etc.

  3. Staying stuck. They try to get unstuck a bunch and fail over and over.

  4. Dishonesty. They don’t like chronic unsolved problems, being stuck, failing so much, etc. So they find some other way to think about it, other activities to do, etc. And they don’t like the implications (e.g. that they’ve given up on reason and BoI-style progress) so they are dishonest about that too.

  5. Falling apart. The dishonesty affects lots of stuff and they get worse than when they started in various ways.



Lots of Thoughts

BoI is about unbounded progress, and this is very different than what people are used to.

It means any kind of bound – like some topic being off limits – is problematic.

The standard expectation elsewhere is a little of this, a little of that, and great, good job, you’re a success. Around here it’s more like: lots of everything. More, more, more. And it’s hard to find break points for praise and basking in glory b/c there’s always further to go. And anyway were you seeking glory and praise, or interested in learning for its own sake?

What do you want a break for? Don’t you like making progress more than anything else? What else would you want to do? Some rest is necessary, but not resting on one’s laurels.

You’re still at the beginning of infinity. You still have infinite ignorance. Keep going!

People say they want to learn. But how much? How fast? Why not more, faster?

What is there to stop this? To restrain it from intruding on their whole life and disrupting everything? They don’t know, and don’t want to give up or question various things, so, when it comes down to it, they just give up on reason instead.

People expect social structures to determine a lot. If you learn at the pace of your university class, who could ask more of you? If you publish more than enough peer-reviewed papers to keep your job as a professor, aren’t you doing rather well? There are socially approved lifestyles which come with certain expectations for how much you should be learning. Do anything more than that and you’re in extra credit territory – which is awesome but (socially) one can’t be faulted for not getting even more extra credit...

People interact and learn in limited ways. They don’t want to deal with e.g. some philosophy ideas invalidating their whole field – like AGI, psychiatry, most of the social “sciences”. That’s too out of control. People want ideas to be bounded contrary to the inherent reach of the ideas. What an idea applies to is a logical matter, not a human choice, but people aren’t used to that and it disrupts their social structures.


I can break anyone. I can ask questions, criticize errors, and advocate for more progress until they give up and refuse to speak. No one can handle that if I really try. I can bring up enough of people’s flaws that it’s overwhelming and unwanted.

There are limits on what criticism people want to hear, what demons they want to face, what they want to question. Perhaps they’ll expand those limits gradually. But I, in the spirit of BoI, approach things differently. I take all criticism and questions from all comers without limiting rules and without being overwhelmed.

BTW, people, so used to their limits – and to statements like this being lies – still usually won’t ask me much or be very challenging.

I used to be confused by people breaking. I expected people to be more similar to myself. I thought they’d want to know about problems. I thought that of course they’d value truth-seeking above all else. I thought they’d take responsibility for organizing incoming information in good ways instead of being overwhelmed. I thought they’d make rapid progress. Instead, it turns out, people don’t know how to handle such things, and don’t ask, and get progressively more emotional while hiding the problem until they burst.

It’s foreign to me how people are. But it’s pretty predictable now. I stopped giving people unbounded criticism. It’s not appreciated. I just give a small fraction of the criticism I could, to people who come to me – and still that’s usually more than enough that they hate it.

occasionally people here ask for full, maximum criticism. they don’t like the idea that i’m holding back – that i know problems in their lives and their thinking that i’m not telling them, that are going unsolved. (or that i could quickly discover such problems if i thought about them, asked them some questions, etc). i’ve often responded by testing them in some little way which was too much and they didn’t persist in asking for more.

it’s difficult b/c i prefer to be honest and say what i think openly. i generally don’t lie. but i neglect to say lots of things i could. i neglect to energetically pursue things involving other ppl which could/should be pursued if they were better and more capable. i could write 10+ replies each to most posts here with questions and arguments (often conditional on some guesses about incomplete information). there’s so much more to be said, so many connections to other stuff. people don’t want to deal with that. they want bounds on discussion.

they don’t have a grip on Paths Forward. they don’t have a functional home base to avoid Overreaching. they don’t have a beachhead of stuff they’ve gotten right to expand on. whenever they try to deal with unlimited criticism it’s chaos b/c, long story short, they are wrong about everything – both b/c they are at the beginning of infinity and also b/c they aren’t at the cutting edge of what’s already known. and progress from where they are to being way better doesn’t just consist of adding things while keeping what they already know, a ton of it is error correction.

whenever people try to deal with unbounded criticism, everything starts falling apart. their whole childhood and education was a tragic mess and they don’t want to redo it.

people don’t even get started on the Paths Forward project of dealing with all public criticism of ideas. and so basically all their ideas are already refuted and they just think that’s how knowledge is, and if you suddenly demand a new standard of actually getting stuff right – of actually addressing all the problems and criticisms and issues – then they totally lose their footing in the world of ideas b/c they never developed their ideas to that standard. and the project of starting thinking in that way and building up knowledge to that proper standard is fucking daunting and approximately no one else wants to do it.


people don’t like to be picked apart, like DD talking to the cryptoinductivist in FoR ch. 7 and continuing even after the guy conceded (and without even trying to manage the guy’s schedule for him by delaying communications until after he had time to think things over).

FoR and BoI held back 99% of what DD knows.


people want a forum where they go to get small doses of things they already want, while having total control over the whole process. they don’t want to feel bad about some surprise criticism about something they weren’t even trying to talk about.


people all have something they're dishonest about and don't want to talk about.

people all have some anti-rational memes.

and this stuff doesn't stay in neat little boundaries.

all the really powerful, general, abstract, important ideas with tons of reach are threatening to these entrenched no-progress zones.

it doesn't matter if the issue is just some little dumb thing like being scared of spiders. ideas have consequences. how do you feel good about yourself while knowing about some problem and being unwilling/unable to fix it? so you better not know much about spiders – so you better have poor research methods. so you better not know much about memes – so you better not come to understand what the current state of the world is or you'll have questions which memes are part of the answer to.

your best bet is to admit there seems to be a problem there but decide it's a low priority and you're going to do some other stuff and maybe get to it later. that can work with stuff that genuinely isn't very important, like about spiders. then you can learn about memes, and realize maybe you have a nasty meme about spiders, and that isn't destabilizing cuz u already thought there's a problem there, just not one that is affecting your life enough to prioritize over other issues you could work on first.

but what do you do when it isn't a low priority thing? what do you do when it's way harder to isolate than fear of spiders, and has much more immediate and large downsides? like when it's about family, relationships, parenting, your intelligence, your honesty, your rationality?

the more you learn and think and actually start to make some progress with reason, the harder it is to just be a collection of special cases. the more you start to learn and apply some principles and try to be more consistent. and then you run into clashes as you find internal contradictions. and that's not just ignorable, something's gotta give.


people have identities they're attached to. they want to already be wise. if not about everything, about some particular things they think they're good at. that's one of the things people really seem to dislike – when i'm way better at their specialty than they are, when they can't win any arguments with me in their own specialty that i've barely spent time on.

when i found FoR/DD/TCS i was fine with being wrong about more or less everything. i didn't mind. i didn't respect my own existing education in general. i thought school was shit and i'd barely learned anything since elementary school besides some math and programming. i was very good at chess, but i was well aware of the existence of people way better than me at chess – i'd lost tons of chess games and had a positive history of interacting about chess with people i didn't have much chance to beat (both chess friends and chess teachers).

my chess, math and programming have never got especially challenged since finding FoR/etc. but if they were – if there was some whole better way to think about them – i'd like that. i'd be happy. i don't rely on being good at them for identity and self-esteem. my self-esteem comes from more like being rational itself, being interested in learning, being willing to change and fix mistakes, etc. a lot of people actually get some self-esteem along those lines, which makes it all the more problematic for them to try to impose limits on discussion – so they end up twisting themselves up into such dishonest tangles trying to make excuses for why they won't discuss or think anymore in order to end discussion. the internal tangles are so much worse than what you see externally btw. like externally they might just say they are busy and will follow up in a week or two, and then not do that. and then after 3 weeks i write a few paragraphs, and they don't reply, and that's that, externally. but internally it often involves some serious breach of integrity to pull that off, and a whole web of dishonest rationalizations. a lot of these people actually did put a lot of thought into stuff behind the scenes rather than just casually leaving like it's nothing – or suppressed a lot of thought behind the scenes, which has consequences.

i had lefty political views – but they weren't very important to my life. thinking about issues was important to me, but i didn't mind having different thoughts.

lots of people have lots of friends, coworkers, family members, customers, etc, to worry about alienating by changing their mind about politics. i had some of that, but relatively less, and i didn't mind alienating people. if one of my friends doesn't want to reconsider politics and is unwilling to be friends with a right wing person, whatever, i'll just lose respect for them. i don't value people and interactions which are tied to some pre-existing unquestionable conclusions.

happily i haven't lost a job or spouse over my beliefs, but i would be willing to. i have lost potential jobs – e.g. i think it'd be quite hard for me to get hired at Google nowadays given some things i've written in public are the kinds of things Google considers hate speech and fires people for. but on the other hand i also got noticed and got some programming work on account of speaking my mind and having an intelligent blog, so that was good. (i don't do stuff like aggressively bring up politics or philosophy in programming work contexts btw)

you don't need to be popular to have a few friends, coworkers and family members you can get along with. you don't need millions of people to be OK with your beliefs. one job, one spouse and 5 good friends is more than a lot of people have. that's easier to get if you stand out in some ways (so some people like you a lot) than if a lot more people have a very mild positive opinion of you.

anyway lots of people have accomplishments they are proud of. they don't want to switch to the perspective that their accomplishments are all at the beginning of infinity and could really use as much rapid error-correcting progress as they can manage, which they should continue forever.

people are so used to disliking the journey (like learning or work) and liking the destination. so they don't want criticism of the destinations they already reached and to be told they should journey (make progress, change, improve) continuously forever.


btw people get way more offended if you personalize stuff like this (to criticism of them specifically; talking about yourself is alright). that gets in the way of their ability to pretend they are one of the exceptions. they don't want help connecting all this stuff to actual specific flaws in their life and attitudes (or at least not unbounded help of that type – if they could carefully control all the consequences and what they find out, then they might be willing to open that pandora's box a little. but they can't. even if i was totally obedient and stuff, you just can't control, predict and bound the growth of knowledge. it takes severe fucking limits to avoid what's basically the jump to universal progress-making).

and if you don't personalize and you don't call out individuals, mostly everyone just acts like you're talking to someone else. kinda like if someone is hurt you don't want to shout "someone call 911" to the crowd while you try to perform CPR. it's too likely that no one will do it. it's more effective to pick a random person and tell them personally to call 911.


there are legitimate, important, worthwhile questions about how to change while keeping some stability. you need a mind and life situation which is viable for your life throughout the whole process. it's kinda like patching computer software without being able to shut it down and restart it.
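to make the software analogy concrete, here's a rough sketch in python (hypothetical code of my own, not from any real hot-patching system) of swapping out part of a program while it keeps running:

    import types

    class Mind:
        def evaluate(self, idea):
            # old policy: accept everything uncritically
            return True

    mind = Mind()  # the "running program" – no option to restart it

    def evaluate(self, idea):
        # improved policy: reject ideas we have a known criticism of
        return idea not in self.known_criticisms

    # patch the live object one piece at a time: add the new state
    # first, then swap in the new code, so it works at every step
    mind.known_criticisms = set()
    mind.evaluate = types.MethodType(evaluate, mind)

    print(mind.evaluate("some idea"))  # still runs, new behavior

the point being: each individual change has to leave the whole system in a working state, cuz there's no downtime.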

the solution isn't to limit criticism, to block messages, to not find things out. knowing about too many problems to deal with is better than not knowing. it lets you prioritize better. even if you're not very good at prioritizing, you ought to do better with a half-understood list with more stuff on it than with simply less information. (unless the less info is according to a wise design that someone else put effort into. then their knowledge about what you should prioritize could potentially be superior to what overwhelmed-you would come up with initially.)

people need to learn to live with conflict, to live with knowing they are at the beginning of infinity and knowing actual open questions, open leads, open very important things to work on or learn with big consequences.

this is difficult as a practical matter when it comes to emotionally charged issues, identity, self-esteem, major attachments, and stuff with lasting consequences like how one treats one's children. people have a hard time knowing they may well be doing a lot of harm to their child, and then just being emotionally OK with that and proceeding in a calm, reasonable way to e.g. read some relevant books and try to learn more about philosophy of knowledge so they can understand education better so they can later, indirectly, be a better parent. and in the meantime they are doing stuff to their kid which leaves most victims really mentally crippled and irrational for the rest of their lives... and what they are doing violates tons of their own existing values and knowing about that bothers them.

this perspective is wrong though. if they don't hear a damn word about specifically some of their flaws, they should still realize they are at the beginning of infinity and must be doing all sorts of things horribly wrong with all sorts of massive, nasty consequences that are sooooooo far from ideal. not knowing the specific criticisms as applied to their life really shouldn't change their perspective much. but people aren't so good at abstract thinking so they just want to shut up certain messages and not think through or learn all the philosophy of BoI.

BoI (dream of socrates chapter) talks about Hermes' perspective and how tons of stuff the Athenians do looks like the example of stealing and then having disasters and then thinking the solution is even more stealing. that applies to you whether anyone names some of the stuff you're really bad at or not. and hearing some indication of some of the stuff you're fucking up – e.g. using violence and threat of violence against your child, as well as a lot of more subtle but serious stuff – should be purely helpful to deciding what to prioritize, what to do next, and hell it should help with motivation.

i wish i knew some big area(s) i was really bad at and had the option to read stuff about it from people who already put a lot of great thought into it that i don't already know. that'd make things so much easier. i know in theory i must be fucking up all kinds of things, but i don't have a bunch of useful leads being handed to me by others anymore. i used to have that a ton, especially from DD. but also other stuff like i read Szasz and found out about psychiatry – not that i had much in the way of pre-existing views on psychiatry, but still, my little bit of vague thinking on the matter was wrong.

i also never had much of an opinion on induction or economics before learning lots about it. that's something i find kinda weird. how much people who don't know much think they know a bunch and are attached. i usually am good at knowing that i don't know much about something, but when i talk to people about psychiatry i find a large portion of them are like super entrenched with pro-psychiatry views even though they really don't know much about it. same with capitalism/socialism and induction. people who've really never studied the matter have such strong opinions they are so attached to.

an example of something that went less smoothly was Israel. i had picked up some anti-Israel ideas from news articles and i think also from some other TCS discussion people like Justin (i know he had bad views on Israel in the past and changed his mind later than i did and he predated me at the TCS IRC chatroom). anyway DD misidentified me as entrenched with anti-Israel dogma, partly b/c i did know (or thought i knew) a bit about it, and I brought up some information i'd read. but, while i can see how it looked a lot like many other conversations, he was actually mistaken about me and i quickly learned more and changed my mind about Israel (with DD offering guidance like recommending things to read and pointing out a few things).

the misunderstanding is important b/c it lets us examine: what happened when DD thought I was being irrational? he said a few harsh things. which, as a matter of fact, i didn't deserve. but so what? did i spend my time getting offended? no. i just wanted to learn and focused on that. i still expected him to be right about the topic, and just wanted to get info.

i used to say, more or less, that DD was always right about everything. this attitude is important and interesting b/c it appears irrational (deferring to authority). it's also an attitude lots of people would dislike, whereas i enjoyed it – i was thrilled to find a bunch of knowledge (embodied by a particular person – which people find more offensive than books for some reason) better and wiser than myself rather than feeling diminished by comparison.

i was, at the same time, very deferential in some ways and not at all deferential in other ways. this is important and people suck at it.

i did not go "well i lost the last 50 arguments but i bet i'm right about Israel. i bet those dozen articles i read means i know more about it than DD and i'll win the debate this time". that's so typical and so dumb.

but i also did not just accept whatever DD said b/c he said it. i expected him to be right but also challenged his claims. i asked questions and argued, while expecting to lose the debate, to learn more about it. i very persistently brought stuff up again and again until i was fully satisfied. lots of people concede stuff and then think it's done and don't learn more about it, and end up never learning it all that well. sometimes i thought i conceded and said so, but even if i did, i had zero shame about re-opening any topic from any amount of time ago to ask a new question or ask how to address a new argument for any side.

i also fluidly talked about arguments for any side instead of just arguing a particular side. even if i was mostly arguing a particular side, i'd still sometimes think of stuff for DD's side and say that too. ppl are usually so biased and one-sided with their creativity.

after i learned things from DD i found people to discuss them with, including people who disagreed with them. then if i had any trouble thoroughly winning the debate with zero known flaws on my side, zero open problems, zero unanswered criticisms, etc, then i'd go back to DD and expect more and better answers from him to address everything fully. i figured out lots of stuff myself but also my attitude of "DD is always right and knows everything" enabled me to be infinitely demanding – i expected him to be a perfect oracle and just kept asking questions about anything and everything expecting him to always have great answers to whatever level of precision, thoroughness, etc, i wanted. when i wasn't fully convinced by every aspect of an answer i'd keep trying over and over to bring up the subject in more ways – state different arguments and ask what's wrong with them, state more versions of his position (attempting to fix some problem) and ask if that's right, find different ways to think about a question and express it, etc. this of course was very useful for encouraging DD to create more and better answers than he already knew or already had formulated in English words.

i didn't 100% literally expect him to know everything, but it was a good mantra and was compatible with questioning him, debating him, etc. it's important to be able to expect to be mistaken and lose a debate and still have it, eagerly and thoroughly. and to keep saying every damn doubt you have, every counter-argument you think of, to address all of them, even when you're pretty convinced by some main points that you must be badly wrong or ignorant.

anyway the method of not being satisfied with explanations until i'd explained them myself to teach others and win several debates – with no outstanding known hiccups, flaws, etc – is really good. that's the kind of standard of knowledge people need.

standards for what kind of knowledge quality people should aim for are an important topic, btw. people often think their sloppy knowledge is good enough and that more precision isn't needed. why split hairs? this is badly wrong:

  • we're at the beginning of infinity. there's so much wrong with our knowledge and we should strive to make all the progress we can, make it as great as we can.

  • people's actual current knowledge leads to all kinds of tragedies and misery. disasters happen in people's lives. a lot. our knowledge isn't good enough. there's so much we can see wrong with the world that we should want to be better. not just advanced stuff like what's wrong with parenting, but more blatant stuff like how the citizens of North Korea are treated, the threat of NK or Iranian nukes, our poor ability to create a reasonable consensus about foreign policy. or people having broken hearts and bitter divorces. or people having a "mental illness" like "depression" or "autism" and kids and malcontents being drugged into a stupor. and even if you don't think psychiatrists are doing anything wrong you can still see that they are dealing with hard problems and there's room for them to develop better medicines. oh and people die of cancer, car accidents, and stuff – and more generally of aging. and we're still a single-planet civilization that could get wiped out if we don't get to other planets soon enough. and it's not really that hard to list a lot more stuff on a big or small scale. people have mini fights with their family and friends all the time. people get fired, programming projects fail, businesses in all industries fail, people make bad decisions and lose a bunch of money, people don't achieve all that they wish to, people feel bad about things that happen to them (e.g. someone said something mean) and have a bad time with it and find it distracting, people are late to stuff, people's cooking comes out bad.


FI is a method of always being right. cuz either ur right now, or u change ur mind and then ur right. other stuff is a method of staying wrong.

first you have some position that, as far as you know, is right. you've done nothing wrong. even if you're mistaken, you don't know better and you're making reasonable ongoing efforts to seek out new info, learn new things, etc. then someone challenges you, and you realize there's some issues with your view, so your new position is you're undecided pending further thought and info. (that's your intellectual position; in terms of IRL actions u might be mid-project and decide, at this point, it's best not to disrupt it even given the risk you're mistaken.) and then the moment after you're persuaded, your position is you know enough to be persuaded of this new idea. and so who can fault you at any time? you held the right position to hold, given what you knew, at each step.

when ppl argue with me, either they have yet to provide adequate help for me to understand a better idea (so it's ok i haven't adopted the new view yet), or they have in which case i will have successfully adopted the new view (if i haven't successfully done that then apparently the help was inadequate and either they can try to help more or i can work on it without them more, whatever, i'm blameless regardless).
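here's a minimal sketch of that sequence (my own illustrative framing – the states and names are made up):

    # hold your best known idea; suspend judgment when challenged;
    # hold the new idea once persuaded. at every step you hold the
    # right position to hold, given what you know.
    class Position:
        def __init__(self, idea):
            self.idea, self.state = idea, "HOLD"

        def challenged(self):  # a criticism you can't currently answer
            self.state = "UNDECIDED"

        def persuaded(self, new_idea):  # adequate help was provided
            self.idea, self.state = new_idea, "HOLD"

    p = Position("my current view")
    p.challenged()              # rationally undecided, not wrong
    p.persuaded("better view")  # right again, with the new idea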


Elliot Temple | Permalink | Messages (10)

The Four Best Books

The four best books are The Fabric of Reality and The Beginning of Infinity by David Deutsch (DD), and Atlas Shrugged and The Fountainhead by Ayn Rand (AR).

Update: See my unendorsement of the Deutsch books.

Everyone should learn this stuff, but currently only a handful of people in the world know much about all four of these books. This material is life-changing because it deals with broad ideas which are important to most of life, and which challenge many things people currently think they know.

However: they’re way too deep and novel to read once and understand. The ideas are correct to a level of detailed precision that people don't even know is a possible thing to try for. The normal way people read books is inadequate to learn all the wonderful ideas in these books. To understand them, there are two options:

1) be an AR or DD yourself, be on their level or reasonably close, be the kind of person who could invent the ideas in the first place. then you could learn it alone (though it’d still involve many rereadings and piles of supplementary material, unless you were dramatically better than AR or DD.)

this is not intended as an option for people to choose – people on that level are like one in a billion. and even if one could do it, it’s way harder than (2) so it'd be a dumb approach.

2) get help with error correction from other people who already understand the ideas. realistically, this requires a living tradition of people willing to help with individualized replies. it’s plenty hard enough to learn the ideas even with great resources like that. to last, it has to educate new people faster than existing people stop participating or die. (realistically, this method still involves supplementary material, rereadings, etc, in addition to discussion.)

What is the current situation regarding relevant living traditions?

DD

for the DD stuff, there’s only one living tradition available: the Fallible Ideas community.

the most important parts of the DD material are based on Karl Popper's philosophy, Critical Rationalism (CR). there’s some CR-only stuff elsewhere, but the quality is inadequate.

Fallible Ideas

besides reading the books, it's also important to understand how the DD and AR ideas fit together, and how to apply the cohesive whole to life.

there's lots of written material about this on my websites and in discussion archives. the only available living tradition for this is the Fallible Ideas community.

AR

for the AR stuff, there are two living traditions available which i consider valuable. there are also others like Branden fans, Kelley fans, various unserious fan forums, etc, which i don’t think are much help.

the two valuable Rand living traditions disagree considerably on some topics, but they do also agree a ton on other topics.

they are the Fallible Ideas community and the Peikoff/Ayn Rand Institute/Binswanger community. The Peikoff version of Objectivism doesn’t understand CR; it’s inductivist. There are other significant flaws with it, but there’s also a lot of value there. It has really helpful elaborations of what Rand meant on many topics.


Elliot Temple | Permalink | Messages (9)

Discussion About the Importance of Explanations with Andrew Crawshaw

From Facebook:

Justin Mallone:

The following excerpt argues that explanations are what is absolutely key in Popperian philosophy, and that Popper over-emphasizes the role of testing in science, but that this mistake was corrected by physicist and philosopher David Deutsch (see especially the discussion of the grass cure example). What do people think?
(excerpted from: https://curi.us/1504-the-most-important-improvement-to-popperian-philosophy-of-science)

Most ideas are criticized and rejected for being bad explanations. This is true even in science where they could be tested. Even most proposed scientific ideas are rejected, without testing, for being bad explanations.
Although tests are valuable, Popper's over-emphasis on testing mischaracterizes science and sets it further apart from philosophy than need be. In both science and abstract philosophy, most criticism revolves around good and bad explanations. It's largely the same epistemology. The possibility of empirical testing in science is a nice bonus, not a necessary part of creating knowledge.

In [The Fabric of Reality], David Deutsch gives this example: Consider the theory that eating grass cures colds. He says we can reject this theory without testing it.
He's right, isn't he? Should we hire a bunch of sick college students to eat grass? That would be silly. There is no explanation of how grass cures colds, so nothing worth testing. (Non-explanation is a common type of bad explanation!)
Narrow focus on testing -- especially as a substitute for support/justification -- is one of the major ways of misunderstanding Popperian philosophy. Deutsch's improvement shows how its importance is overrated and, besides being true, is better in keeping with the fallibilist spirit of Popper's thought (we don't need something "harder" or "more sciency" or whatever than critical argument!).

Andrew Crawshaw: I see, but it might turn out that grass cures colds. This would just be an empirical fact, demanding scientific explanation.

TC: Right, and if a close reading of Popper yielded anything like "test every possible hypothesis regardless of what you think of it", this would represent an advancement over Popper's thought. But he didn't suggest that.

Andrew Crawshaw: We don't reject claims of the form indicated by Deutsch because they are bad explanations. There are plenty of dangling empirical claims that we still hold to be true but which are unexplained. Deutsch is mistaking the import of his example.

Elliot Temple:

There are plenty of dangling empirical claims that we still hold to be true but which are unexplained.

That's not the issue. Are there any empirical claims we have criticism of, but which we accept? (Pointing out that something is a bad explanation is a type of criticism.)

Andrew Crawshaw: If you think that my burden is to show that there are empirical claims that are refuted but that we accept, then you have not understood my criticism.

For example

Grass cures colds.

Is of the same form as

aluminium hydroxide contributes to the production of a large quantity of antibodies.

Both are empirical claims, but they are not explanatory. That does not make them bad

Neither of them are explanations. One is accepted and the other is not.

It's not good saying that the former is a bad explanation.

The latter has not yet been properly explained by sciences

Elliot Temple: The difference is we have explanations of how aluminum hydroxide works, e.g. from wikipedia: "It reacts with excess acid in the stomach, reducing the acidity of the stomach content"

Andrew Crawshaw: Not in relation to its antibody mechanism.

Elliot Temple: Can you provide reference material for what you're talking about? I'm not familiar with it.

Andrew Crawshaw: I can, but it is still irrelevant to my criticism. Which is that they are both not explanatory claims, but one is held as true while the other not.

They are low-level empirical claims that call out for explanation, they don't themselves explain. Deutsch is misemphasising.

https://www.chemistryworld.com/news/doubts-raised-over-vaccine-boost-theory/3001326.article

Elliot Temple: your link is broken, and it is relevant b/c i suspect there is an explanation.

Andrew Crawshaw: It's still irrelevant to my criticism. Which is that we often accept things like rules of thumb, even when they are unexplained. They don't need to be explained for them to be true or for us to class them as true. Miller talks about this extensively. For instance strapless evening gowns were not understood scientifically for ages.

Elliot Temple: i'm saying we don't do that, and you're saying you have a counter-example but then you say the details of the counter-example are irrelevant. i don't get it.

Elliot Temple: you claim it's a counter example. i doubt it. how are we to settle this besides looking at the details?

Andrew Crawshaw: My criticism is that calling such a claim a bad explanation is irrelevant to those kinds of claims. They are just empirical claims that beg for explanation.

Elliot Temple: zero explanation is a bad explanation and is a crucial criticism. things we actually use have more explanation than that.

Andrew Crawshaw: So?

Elliot Temple: so DD and I are right: we always go by explanations. contrary to what you're saying.

Andrew Crawshaw: We use aluminium hydroxide for increasing anti-bodies and strapless evening gowns, even before they were explained.

Elliot Temple: i'm saying i don't think so, and you're not only refusing to provide any reference material about the matter but you claimed such reference material (indicating the history of it and the reasoning involved) is irrelevant.

Andrew Crawshaw: I have offered it. I re-edited my post.

Elliot Temple: please don't edit and expect me to see it, it usually doesn't show up.

Andrew Crawshaw: You still have not criticised my claim. The one comparing the two sentences which are of the same form, yet one is accepted and one not.

Elliot Temple: the sentence "aluminium hydroxide contributes to the production of a large quantity of antibodies." is inadequate and should be rejected.

the similar sentence with a written or implied footnote to details about how we know it would be a good claim. but you haven't given that one. the link you gave isn't the right material: it doesn't say what aluminium hydroxide does, how we know it, how it was discovered, etc

Elliot Temple: i think your problem is mixing up incomplete, imperfect explanations (still have more to learn) with non-explanation.

Andrew Crawshaw: No, it does not. But to offer that would be to explain. Which is exactly what I am telling you is irrelevant.

What is relevant is whether the claim itself is a bad explanation. It's just an empirical claim.

The point is just that we often have empirical claims that are not explained scientifically yet we accept them as true and use them.

Elliot Temple: We don't. If you looked at the history of it you'd find there were lots of explanations involved.

Elliot Temple: I guess you just don't know the history either, which is why you don't know the explanations involved. People don't study or try things randomly.

Elliot Temple: If you could pick a better known example which we're both familiar with, i could walk you through it.

Andrew Crawshaw: There was never an explanation of how bridges worked. But there were rules of thumb of how to build them. There are explanations of how to use aluminium hydroxide but its actual mechanism is unknown.

Elliot Temple: what are you talking about with bridges. you can walk on strong, solid objects. what do you not understand?

Andrew Crawshaw: That's not how they work. I am talking about the scientific explanation of forces and tensions. It was not always understood despite the fact that they were built. This is the same with beavers' dams, they don't know any of the explanations of how to build dams.

Elliot Temple: you don't have to know everything that could be known to have an explanation. understanding that you can walk on solid objects, and they can be supported, etc, is an explanation, whether you know all the math or not. that's what the grass cure for the cold lacks.

Elliot Temple: the test isn't omniscience, it's having a non-refuted explanation.

Andrew Crawshaw: Hmm, but are you saying then that even bad-explanations can be accepted. Cuz as far as I can tell many of the explanations for bridge building were bad, yet they still built bridges.

Anyway you are still not locating my criticism. You are criticising something I never said it seems. Which is that Grass cures cold has not been explained. But what Deutsch was claiming was that the claim itself was a bad explanation, which is true if bad explanation includes non-explanation, but it is not the reason it is not accepted. As the hydroxide thing suggests.

Elliot Temple: We should only accept an explanation that we don't know any criticism of.

We need some explanation or we'd have no idea if what we're doing would work, we'd be lost and acting randomly without rhyme or reason. And that initial explanation is what we build on – we later improve it to make it more complete, explain more stuff.

Andrew Crawshaw: I think this is incorrect. All animals that can do things refute your statement.

Elliot Temple: The important thing is the substance of the knowledge, not whether it's written out in the form of an English explanation.

Andrew Crawshaw: Just because there is an explanation of how some physical substrate interacts with another physical substrate, does not mean that you need explanations. Explanations are in language. Knowledge not necessarily. Knowledge is a wider phenomenon than explanation. I have many times done things by accident that have worked, but I have not known why.

Elliot Temple: This is semantics. Call it "knowledge" then. You need non-refuted knowledge of how something could work before it's worth trying. The grass cure for the cold idea doesn't meet this bar. But building a log bridge without knowing modern science is fine.

Andrew Crawshaw: Before it's worth trying? I don't think so, rules of thumb are discovered by accident and then re-used without knowing how or why they could work, it just works and then they try it again and it works again. Are you denying that that is a possibility?

Elliot Temple: Yes, denying that.

Andrew Crawshaw: Well, you are offering foresight to evolution then, it seems.

Elliot Temple: That's vague. Say what you mean.

Andrew Crawshaw: I don't think it is that vague. If animals can build complex things like beavers' dams and they should have had knowledge of how it could work before it was worth trying out, then they have a lot of foresight before they tried them out. Or could it be the fact that it is the other way round, we stumble on rules of thumb, develop them, then come up with explanations about how they possibly work. I am more inclined to the latter. The former is just another version of the argument from design.

Elliot Temple: humans can think and they should think before acting. it's super inefficient to act mindlessly. genetic evolution can't think and instead does things very, very, very slowly.

Andrew Crawshaw: But thinking before acting is true. Thinking is critical. It needs material to work on. Which is guesswork and sometimes, if not often, accidental actions.

Elliot Temple: when would it be a good idea to act thoughtlessly (and which thoughtless action) instead of acting according to some knowledge of what might work?

Elliot Temple: e.g. when should you test the grass cure for cancer, with no thought to whether it makes any sense, instead of thinking about what you're doing and acting according to your rational thought? (which means e.g. considering what you have some understanding could work, and what you have criticisms of)

Andrew Crawshaw: Wait, we often act thoughtlessly whether or not we should do. I don't even think it is a good idea. But we often try to do things and end up somewhere which is different to what we expected, it might be worse or better. For instance, we might try to eat grass because we are hungry and then happen to notice that our cold disappeared and stumble on a cure for the cold.

Andrew Crawshaw: And different to what we expected might work even though we have no idea why.

Elliot Temple: DD is saying what we should do, he's talking about reason. Sometimes people act foolishly and irrationally but that doesn't change what the proper methods of creating knowledge are.

Sometimes unexpected things happen and you can learn from them. Yes. So what?

Andrew Crawshaw: But if Deutsch expects that we can only work with explanations. Then he is mistaken. Which is, it seems, what you have changed your mind about.

Elliot Temple: I didn't change my mind. What?

What non-explanations are you talking about people working with? When an expectation you have is violated, and you investigate, the explanation is you're trying to find out if you were mistaken and figure out the thing you don't understand.

Elliot Temple: what do you mean "work with"? we can work with (e.g. form explanations about) spreadsheet data. we can also work with hammers. resources don't have to be explanations themselves, we just need an explanation of how to get value out of the resource.

Andrew Crawshaw: There is only one method of creating knowledge. Guesswork. Or, if genetically, by mutation. Physical things are often made without know-how and then they are applied in various contexts and they might and might not work, that does not mean we know how they work.

Elliot Temple: if you didn't have an explanation of what actions to take with a hammer to achieve what goal, then you couldn't proceed and be effective with the hammer. you could hit things randomly and pray it works out, but it's not a good idea to live that way.

Elliot Temple: (rational) humans don't proceed purely by guesses, they also criticize the guesses first and don't act on the refuted guesses.

Andrew Crawshaw: Look there are three scenarios

  1. Act on knowledge
  2. Stumble upon solution by accident, without knowing why it works.
  3. Act randomly

Elliot Temple: u always have some idea of why it works or you wouldn't think it was a solution.

Andrew Crawshaw: No, all you need is to recognise that it worked. This is easily done by seeing that what you wanted to happen happened. It is non-sequitur to then assume that you know something of how it works.

Elliot Temple: you do X. Y results. Y is a highly desirable solution to some recurring problem. do you now know that X causes Y? no. you need some causal understanding, not just a correlation. if you thought it was impossible that X causes Y, you would look for something else. if you saw some way it's possible X causes Y, you have an initial explanation of how it could work, which you can and should expose to criticism.

Elliot Temple:

Know all you need is to recognise that it works.

plz fix this sentence, it's confusing.

Andrew Crawshaw: You might guess that it caused it. You don't need to understand it to guess that it did.

Elliot Temple: correlation isn't causation. you need something more.

Elliot Temple: like thinking of a way it could possibly cause it.

Elliot Temple: that is, an explanation of how it works.

Andrew Crawshaw: I am not saying correlation is causation, you don't need to explain guesswork before you have guessed it. You first need to guess that something caused something before you go out and explain it. Otherwise what are you explaining?

Elliot Temple: you can guess X caused Y and then try to explain it. you shouldn't act on the idea that X caused Y if you have no explanation of how X could cause Y. if you have no explanation, then that's a criticism of the guess.

Elliot Temple: you have some pre-existing understanding of reality (including the laws of physics) which you need to fit this into, don't just treat the world as arbitrary – it's not and that isn't how one learns.

Andrew Crawshaw: That's not a criticism of the guess. It's ad hominem and justificationist.

Elliot Temple: "that" = ?

Andrew Crawshaw: I am agreeing totally with you about many things

  1. We should increase our criticism as much as possible.
  2. We do have inbuilt expectations about how the world works.

What We are not agreeing about is the following

  1. That a guess has to be backed up by explanation for it to be true or classified as true. All we need is to criticise the guess. Arguing otherwise seems to me a type of justificationism.

  2. That in order to get novel explanations and creations, this often is done despite the knowledge and necessarily has to be that way otherwise it would not be new.

Elliot Temple:

That's not a criticism of the guess. It's ad hominem and justificationist.

please state what "that" refers to and how it's ad hominem, or state that you retract this claim.

Andrew Crawshaw: That someone does not have an explanation. First, because explanations are not easy to come by and someone not having an explanation for something does not in any way impugn the pedigree of the guess or the strategy etc. Second, explanation is important and needed, but not necessary for trying out the new strategy, y, that you guess causes x. You might develop explanations while using it. You don't need the explanation before using it.

Elliot Temple: Explanations are extremely easy to come by. I think you may be adding some extra criteria for what counts as an explanation.

Re your (1): if you have no explanation, then you can criticize it: why didn't they give it any thought and come up with an explanation? they should do that before acting, not act thoughtlessly. it's a bad idea to act thoughtlessly, so that's a criticism.

it's trivial to come up with even an explanation of how grass cures cancer: cancer is internal, and various substances have different effects on the body, so if you eat it it may interact with and destroy the cancer.

the problem with this explanation is we have criticism of it.

you need the explanation so you can try criticizing it. without the explanation, you can't criticize (except to criticize the lack of explanation).

re (2): this seems to contain typos, too confusing to answer.

Elliot Temple: whenever you do X and Y happens, you also did A, B, C, D. how do you know it was X instead of A, B, C or D which caused Y? you need to think about explanations before you can choose which of the infinite correlations to pay attention to.

Elliot Temple: for example, you may have some understanding that Y would be caused by something that isn't separated in space or time from it by very much. that's a conceptual, explanatory understanding about Y which is very important to deciding what may have caused Y.

Andrew Crawshaw: Again, it's not a criticism of the guess. It's a criticism of how the person acted.

The rest of your statements are compatible with what I am saying. Which is just that it can be done and explanations are not necessary either for using something or creating something. As the case of animals surely shows.

You don't know, you took a guess. You can't know before you guess that your guess was wrong.

Elliot Temple: "I guess X causes Y so I'll do X" is the thing being criticized. If the theory is just "Maybe X causes Y, and this is a thing to think about more" then no action is implied (besides thinking and research) and it's harder to criticize. those are different theories.

even the "Maybe X causes Y" thing is suspect. why do you think so? You did 50 million actions in your life and then Y happened. Why do you think X was the cause? You have some explanations informing this judgement!

Andrew Crawshaw: There is no difference between maybe Y and Y. It's always maybe Y. Unless refuted.

Andrew Crawshaw: You are subjectivist and justificationist as far as I can tell. A guess is objective and if someone, despite the fact that they have bad judgement, guesses correctly, they still guess correctly. Nothing mitigates the precariousness of this situation. Criticism is the other component.

Elliot Temple: If the guess is just "X causes Y", period, you can put that on the table of ideas to consider. However, it will be criticized as worthless: maybe A, B, or C causes Y. Maybe Y is self-caused. There's no reason to care about this guess. It doesn't even include any mention of Y ever happening.

Andrew Crawshaw: The guess won't be criticised, what will be noticed is that it shouts out for explanation and someone might offer it.

Elliot Temple: If the guess is "Maybe X causes Y because I once saw Y happen 20 seconds after X" then that's a better guess, but it will still get criticized: all sorts of things were going on at all sorts of different times before Y. so why think X caused Y?

Elliot Temple: yes: making a new guess which adds an explanation would address the criticism. people are welcome to try.

Elliot Temple: they should not, however, go test X with no explanation.

Andrew Crawshaw: That's good, but one of the best ways to criticise it, is to try it again and see if it works.

Elliot Temple: you need an explanation to understand what would even be a relevant test.

Elliot Temple: how do you try it again? how do you know what's included in X and what isn't included? you need an explanation to differentiate relevant stuff from irrelevant

Elliot Temple: as the standard CR anti-inductivist argument goes: there are infinite patterns and correlations. how do you pick which ones to pay attention to?

Elliot Temple: you shouldn't pick one thing, arbitrarily, from an INFINITE set and then test it. that's a bad idea. that's not how scientific progress is made.

Elliot Temple: what you need to do is have some conceptual understanding of what's going on. some explanations of what types of things might be relevant to causing Y and what isn't relevant, and then you can start doing experiments guided by your explanatory knowledge of physics, reality, some possible causes, etc

Elliot Temple: i am not a subjectivist or justificationist, and i don't see what's productive about the accusation. i'm willing to ignore it, but in that case it won't be contributing positively to the discussion.

Andrew Crawshaw: I am not saying that we have no knowledge. I am saying that we don't have an explanation of the mechanism.

Elliot Temple: can you give an example? i think you do have an explanation and you just aren't recognizing what you have.

Andrew Crawshaw: For instance, washing hands and it's link to mortality rates.

Elliot Temple: There was an explanation there: something like taint could potentially travel with hands.

Elliot Temple: This built on previous explanations people had about e.g. illnesses spreading to nearby people.

Andrew Crawshaw: Right, but the use of soap was not derived from the explanation. And that explanation might have been around before, and no such soap was used because of it.

Elliot Temple: What are you claiming happened, exactly?

Andrew Crawshaw: I am claiming that soap was invented for various reasons and then it turned out that the soap could be used for reducing mortality.

Elliot Temple: That's called "reach" in BoI. Where is the contradiction to anything I said?

Andrew Crawshaw: Reach of explanations. It was not the explanation, it was the invention of soap itself. Which was not anticipated or even encouraged by explanations. Soap is invented, used in a context, an explanation might be applied to it. Then it is used in another context and again the explanation is retroactively applied to it. The explanation does not necessarily suggest more uses, nor need it.

Elliot Temple: You're being vague about the history. There were explanations involved, which you would see if you analyzed the details well.

Andrew Crawshaw: So, what if there were explanations "involved"? The explanations don't add anything to the discovery of the uses of the soap. These are usually stumbled on by accident. And refinements to soaps as well for those different contexts.

Andrew Crawshaw: I am just saying that explanations of how the soap works very rarely suggest new avenues. It's often a matter of trial and error.

Elliot Temple: You aren't addressing the infinite correlations/patterns point, which is a very important CR argument. Similarly, one can't observe without some knowledge first – all observation is theory laden. So one doesn't just observe that X is correlated to Y without first having a conceptual understanding for that to fit into.

Historically, you don't have any detailed counter example to what I'm saying, you're just speculating non-specifically in line with your philosophical views.

Andrew Crawshaw: It's an argument against induction. Not against guesswork informed by earlier guesswork, that often turns out to be mistaken. All explanations do is rule things out, unless they are rules for use, but these are developed while we try out those things.

Elliot Temple: It's an argument against what you were saying about observing X correlated with Y. There are infinite correlations. You can either observe randomly (not useful, has roughly 1/infinity chance of finding solutions, aka zero) or you can observe according to explanations.

Elliot Temple: You're saying to recognize a correlation and then do trial and error. But which one? Your position has elements of standard inductivist thinking in it.

Andrew Crawshaw: I never said anything about correlation - you did.

What I said was we could guess that x caused y and be correct. That's what I said, nothing more nothing less.

Andrew Crawshaw: One instance does not a correlation make.

Elliot Temple: You could also guess Z caused Y. Why are you guessing X caused Y? Filling up the potential-ideas with an INFINITE set of guesses isn't going to work. You're paying selective attention to some guesses over others.

Elliot Temple: This selective attention is either due to explanations (great!) or else it's the standard way inductivists think. Or else it's ... what else could it be?

Andrew Crawshaw: Why not? Criticise it. If you have a scientific theory that rules my guess out, that would be interesting. But saying why not this guess and why not that one. Some guesses are not considered by you maybe because they are ruled out by other expectations, or they do not occur to you.

Elliot Temple: The approach of taking arbitrary guesses out of an infinite set and trying to test them is infinitely slow and unproductive. That's why not. And we have much better things we can do instead.

Elliot Temple: No one does this. What they do is pick certain guesses according to unconscious or unstated explanations, which are often biased and crappy b/c they aren't being critically considered. We can do better – we can talk about the explanations we're using instead of hiding them.

Andrew Crawshaw: So, you are basically gonna ignore the fact that I have agreed that expecations and earlier knowledge do create selective attention, but what to isolate is neither determined by theory, nor by earlier perceptions, it is large amount guesswork controlled by criticism. Humans can do this rapidly and well.

Elliot Temple: Please rewrite that clearly and grammatically.

Andrew Crawshaw: It's like you are claiming there is no novelty in guesswork, if we already have that as part of our expectations it was not guesswork.

Elliot Temple: I am not claiming "there is no novelty in guesswork".

Andrew Crawshaw: So we are in agreement, then. Which is just that there are novel situations and our guesses are also novel. How we eliminate them is through other guesses. Therefore the guesses are sui generis and then deselected according to earlier expectations. It does not follow that the guess was positively informed by anything. It was a guess about what caused what.

Elliot Temple: Only guesses involving explanations are interesting and productive. You need to have some idea of how/why X causes Y or it isn't worth attention. It's fine if this explanation is due to your earlier knowledge, or it can be a new idea that is part of the guess.

Andrew Crawshaw: I don't think that's true. Again beavers make interesting and productive dams.

Elliot Temple: Beavers don't choose from infinite options. Can we stick to humans?

Andrew Crawshaw: Humans don't choose from infinite options.... They choose from the guesses that occur to them, which are not infinite. Their perception is controlled by both physiological factors and their expectations. Novel situations require guesswork, because guesswork is flexible.

Elliot Temple: Humans constantly deal with infinite categories. E.g. "Something caused Y". OK, what? It could be an abstraction such as any integer. It could be any action in my whole life, or anyone else's life, or something nature did. There's infinite possibilities to deal with when you try to think about causes. You have to have explanations to narrow things down, you can't do it without explanations.

Elliot Temple: Arbitrary assertions like "The abstract integer 3 caused Y" are not productive with no explanation of how that could be possible attached to the guess. There are infinitely more where that came from. You won't get anywhere if you don't criticize "The abstract integer 3 caused Y" for its arbitrariness, lack of explanation of how it could possibly work, etc

Elliot Temple: You narrow things down. You guess that a physical event less than an hour before Y and less than a quarter mile distant caused Y. You explain those guesses, you don't just make them arbitrarily (there are infinite guesses you could make like that, and also that category of guess isn't always appropriate). You expose those explanations to criticism as the way to find out if they are any good.

Andrew Crawshaw: You are arguing for an impossible demand that you yourself can't meet, even when you have explanations. It does not narrow it down from infinity. What narrows it down is our capacity to form guesses, which is temporal and limited. It's our brain's ability to process and to interpret that information.

Elliot Temple: No, we can deal with infinite sets. We don't narrow things down with our inability, we use explanations. I can and do do this. So do you. Explanations can have reach and exclude whole categories of stuff at once.

Andrew Crawshaw: But it does not reduce it to less than infinite. Explanations allow an infinite amount of things, most of them useless. It's what they rule out, and the things they can rule out is guesswork. And this is done over time. So we might guess this and then guess that x caused y, we try it again and it might not work, so we try to vary the situation and in that way develop criticism and more guesses.

Elliot Temple: Let's step back. I think you're lost, but you could potentially learn to understand these things. You think I'm mistaken. Do you want to sort this out? How much energy do you want to devote to this? If you learn that I was right, what will you do next? Will you join my forum and start contributing? Will you study philosophy more? What values do you offer, and what values do you seek?

Andrew Crawshaw: Mostly explanations take time to understand why they conflict with some guess. It might be that the guess only approximates the truth and then we find later that it is wrong because we look more into the explanation of it.

Andrew Crawshaw: Elliot, if you wish to meta, I will step out of the conversation. It was interesting, yet you still refuse to concede my point that inventions can be created without explanations. But yet this is refuted by the creations of animals and many creations of humans. You won't concede this point and that makes your claims pretty well trivial. Like you need some kind of thing to direct what you are doing. When the whole point is the genesis of new ideas and inventions and theories which cannot be suggested by earlier explanations. It is true that explanations can help, in refining and understanding. But that is not the whole story of human cognition or human invention.

Elliot Temple: So you have zero interest in, e.g., attempting to improve our method of discussion, and you'd prefer to either keep going in circles or give up entirely?

Elliot Temple: I think we could resolve the disagreement and come to agree, if we make an effort to, AND we don't put arbitrary boundaries on what kinds of solutions and actions are allowed to be part of the problem solving process. I think if you make methodology off-limits, you are sabotaging the discussion and preventing its rational resolution.

Elliot Temple: Not everything is working great. We could fix it. Or you could just unilaterally blame me and quit..?

Andrew Crawshaw: Sorry, I am not blaming you for anything.

Elliot Temple: OK, you just don't really care?

Andrew Crawshaw: Wait. I want to say two things.

  1. It's 5 in the morning, and I was working all day, so I am exhausted.

  2. This discussion is interesting, but fragmented. I need to moderate my posts on here, now. And recuperate.

Elliot Temple: I haven't asked for fast replies. You can reply on your schedule.

Elliot Temple: These issues will still be here, and important, tomorrow and the next day. My questions are open. I have no objection to you sleeping, and whatever else, prior to answering.

Andrew Crawshaw: Oh, I know you haven't asked for replies. I just get very involved in discussion. When I do I stop monitoring my tiredness levels and etc.

I know this discussion is important. The issues and problems.

Elliot Temple: If you want to drop it, you can do that too, but I'd want to know why, and I might not want to have future discussions with you if I expect you'll just argue a while and then drop it.

Andrew Crawshaw: Like to know why? I have been up since very early yesterday, like 6. I don't want to drop the discussion I want to postpone it, if you will.

Elliot Temple: That's not a reason to drop the conversation, it's a reason to write your next reply at a later time.

Andrew Crawshaw: I explicitly said: I don't want to drop the discussion.

Your next claim is a non-sequitur. A conversation can be resumed in many ways. I take it you think it would be better for me to initiate it.

Andrew Crawshaw: I will read back through the comments and see where this has led and then I will post something on fallible ideas forum.

Elliot Temple: You wrote:

Elliot, if you wish to meta, I will step out of the conversation.

I read "step out" as quit.

Anyway, please reply to my message beginning "Let's step back." whenever you're ready. Switching forums would be great, sure :)


Elliot Temple | Permalink | Messages (17)

Replies to Gyrodiot About Fallible Ideas, Critical Rationalism and Paths Forward

Gyrodiot wrote at the Less Wrong Slack Philosophy chatroom:

I was waiting for an appropriate moment to discuss epistemology. I think I understood something about curi's reasoning about induction after reading a good chunk of the FI website. Basically, it starts from this:

He quotes from: http://fallibleideas.com/objective-truth

There is an objective truth. It's one truth that's the same for all people. This is the common sense view. It means there is one answer per question.

The definition of truth here is not the same as The Simple Truth as described in LW. Here, the important part is:

Relativism provides an argument that the context is important, but no argument that the truth can change if we keep the context constant.

If you fixate the context around a statement, then the statement ought to have an objective truth value

Yeah. (The Simple Truth essay link.)

In LW terms that's equivalent to "reality has states and you don't change the territory by thinking differently about the map"

Yeah.

From that, FI posits the existence of universal truths that aren't dependent on context, like the laws of physics.

More broadly, many ideas apply to many contexts (even without being universal). This is very important. DD calls this "reach" in BoI (how many contexts does an idea reach to?), I sometimes go with "generality" or "broader applicability".

The ability for the same knowledge to solve multiple problems is crucial to our ability to deal with the world, and for helping with objectivity, and for some other things. It's what enabled humans to even exist – biological evolution created knowledge to solve some problems related to survival and mating, and that knowledge had reach which lets us be intelligent, do philosophy, build skyscrapers, etc. Even animals like cats couldn't exist, like they do today, without reach – they have things like behavioral algorithms which work well in more than one situation, rather than having to specify different behavior for every single situation.
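A programming analogy for reach (my own illustration, not from BoI): a single piece of code written without context-specific assumptions ends up solving problems its author never considered:

    # written once, with numbers in mind...
    def largest(items, key=lambda x: x):
        best = items[0]
        for item in items[1:]:
            if key(item) > key(best):
                best = item
        return best

    # ...but nothing in it is number-specific, so it "reaches" into
    # contexts the author never thought about:
    print(largest([3, 1, 4]))                                 # numbers
    print(largest(["cat", "beaver"]))                         # strings
    print(largest([(2, "b"), (9, "a")], key=lambda t: t[0]))  # records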

The problem with induction, with this view is that you're taking truths about some contexts to apply them to other contexts and derive truths about them, which is complete nonsense when you put it like that

Some truths do apply to multiple contexts. But some don't. You shouldn't just assume they do – you need to critically consider the matter (which isn't induction).

From a Bayesian perspective you're just computing probabilities, updating your map, you're not trying to attain perfect truth

Infinitely many patterns both do and don't apply to other contexts (such as patterns that worked in some past time range applying tomorrow). So you can't just generalize patterns to the future (or to other contexts more generally) and expect that to work, ala induction. You have to think about which patterns to pay attention to and care about, and which of those patterns will hold in what ranges of contexts, and why, and use critical arguments to improve your understanding of all this.
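To make that concrete, here's a standard curve-fitting illustration (the numbers are made up): any finite set of observations is consistent with infinitely many patterns which disagree about the very next case:

    xs = [0, 1, 2, 3, 4]  # five observations fitting y = 2x

    def a(x):  # the "obvious" pattern
        return 2 * x

    def b(x):  # agrees with a() on every observation so far...
        return 2 * x + x*(x-1)*(x-2)*(x-3)*(x-4)

    assert all(a(x) == b(x) for x in xs)  # identical on all the data
    print(a(5), b(5))  # 10 vs 130 -- they diverge at the next case

Adding any multiple of that product term gives yet another pattern that fits, so there are infinitely many. The data alone can't choose between them; you need critical arguments about which explanations hold in which contexts.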

We do [live in our own map], which is why this mode of thought with absolute truth isn't practical at all

Can you give an example of some practical situation you don't understand how to address with FI thinking, and I'll tell you how or concede? And after we go through a few examples, perhaps you'll better understand how it works and agree with me.

So, if induction is out of the way, the other means to know truth may be by deduction, building on truth we know to create more. Except that leads to infinite regress, because you need a foundation

CR's view is induction is not replaced with more deduction. It's replaced with evolution – guesses and criticism.

So the best we can do is generate new ideas, and put them through empirical test, removing what is false as it gets contradicted

And we can use non-empirical criticism.

But contradicted by what? Universal truths! The thing is, universal truths are used as a tool to test what is true or false in any context since they don't depend on context

Not just contradicted by universal truths, but contradicted by any of our knowledge (lots of which has some significant but non-universal reach). If an idea contradicts some of our knowledge, it should say why that knowledge is mistaken – there's a challenge there. See also my "library of criticism" concept in Yes or No Philosophy (discussed below) which, in short, says that we build up a set of known criticisms that have some multi-context applicability, and then whenever we try to invent a new idea we should check it against this existing library of known criticisms. It needs to either not be contradicted by any of the criticisms or include a counter-argument.
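Here's a rough sketch of the library-of-criticism idea in code (illustrative only – the particular checks and the data format are placeholders I made up):

    # each criticism is a check with multi-context applicability
    def lacks_explanation(idea):
        return idea.get("explanation") is None

    def contradicts_physics(idea):
        return idea.get("violates_physics", False)

    library = [lacks_explanation, contradicts_physics]

    def accept(idea):
        for criticism in library:
            # an idea survives by not being contradicted, or by
            # including a counter-argument to the criticisms that apply
            applies = criticism(idea)
            answered = criticism.__name__ in idea.get("answers", [])
            if applies and not answered:
                return False
        return True

    print(accept({"explanation": None}))                            # False
    print(accept({"explanation": "solid objects support weight"}))  # True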

But they are so general that you can't generate new idea from them easily

The LW view would completely disagree with that: laws of physics are statements like every other, they are solid because they map to observation and have predictive power

CR says to judge ideas by criticism. Failure to map to observation and lack of predictive power are types of criticism (absolutely not the only ones), which apply in some important range of contexts (not all contexts – some ideas are non-empirical).

Prediction is great and valuable but, despite being great, it's also overrated. See chapter 1 of The Fabric of Reality by David Deutsch and the discussion of the predictive oracle and instrumentalism.

http://www.daviddeutsch.org.uk/books/the-fabric-of-reality/excerpt/

Also you can use them to explain stuff (reductionism) and generate new ideas (bottom-up scientific research)

From FI:

When we consider a new idea, the main question should be: "Do you (or anyone else) see anything wrong with it? And do you (or anyone else) have a better idea?" If the answers are 'no' and 'no' then we can accept it as our best idea for now.

The problem is that by having a "pool of statements from which falsehoods are gradually removed" you also build a best candidate for truth. Which is not, at all, how the Bayesian view works.

FI suggests evolution is a reliable way to suggest new ideas. It ties well into the framework of "generate by increments and select by truth-value"

It also highlights how humans are universal knowledge machines, and that anything (in particular, an AGI) created by a human would have knowledge that humans can attain too

Humans as universal knowledge creators is an idea of my colleague David Deutsch which is discussed in his book, The Beginning of Infinity (BoI).

http://beginningofinfinity.com

But that's not an operational definition: if an AGI creates knowledge much faster than any human, they won't ever catch up and the point is moot

Yes, AGI could be faster. But, given the universality argument, AGIs won't be more rational and won't be capable of modes of reasoning that humans can't do.

The value of faster is questionable. I think no humans currently maximally use their computational power. So adding more wouldn't necessarily help if people don't want to use it. And an AGI would be capable of all the same human flaws like irrationalities, anti-rational memes (see BoI), dumb emotions, being bored, being lazy, etc.

I think the primary cause of these flaws, in short, is authoritarian educational methods which try to teach the kid existing knowledge rather than facilitate error correction. I don't think an AGI would automatically be anything like a rational adult. It'd have to think about things and engage with existing knowledge traditions, and perhaps even educators. Thinking faster (but not better) won't save it from picking up lots of bad ideas just like new humans do.

That sums up the basics, I think. The Paths Forward thing is another matter... and it is very, very demanding

Yes, but I think it's basically what effective truth-seeking requires. I think most of the truth-seeking people do is not very effective, and the flaws can actually be pointed out as failures to meet Paths Forward (PF) standards.

There's an objective truth about what it takes to make progress. And separate truths depending on how effectively you want to make progress. FI and PF talk about what it takes to make a lot of progress and be highly effective. You can fudge a lot of things and still, maybe, make some progress instead of going backwards.

If you just wanna make a few tiny contributions which are 80% likely to be false, maybe you don't need Paths Forward. And some progress gets made that way – a bunch of mediocre people do a bunch of small things, and the bulk of it is wrong, but they have some ability to detect errors so they end up figuring out which are the good ideas with enough accuracy to slowly inch forwards. But, meanwhile, I think a ton of progress comes from a few great (wo)men who have higher standards and better methods. (For more arguments about the importance of a few great men, I particularly recommend Objectivism. E.g. Roark discusses this in his courtroom speech at the end of The Fountainhead.)

Also, FYI, Paths Forward allows you to say you're not interested in something. It's just, if you don't put the work into knowing something, don't claim that you did. Also you should keep your interests themselves open to criticism and error correction. Don't be an AGI researcher who is "not interested in philosophy" and won't listen to arguments about why philosophy is relevant to your work. More generally, it's OK to cut off a discussion with a meta comment (e.g. "not interested" or "that is off topic" or "I think it'd be a better use of my time to do this other thing...") as long as the meta level is itself open to error correction and has Paths Forward.

Oh also, btw, the demandingness of Paths Forward lowers the resource requirements for doing it, in a way. If you're interested in what someone is saying, you can be lenient and put in a lot of effort. But if you think it's bad, then you can be more demanding – so things only continue if they meet the high standards of PF. This is win/win for you. Either you get rid of the idiots with minimal effort, or else they actually start meeting high standards of discussion (so they aren't idiots, and they're worth discussing with). And note that, crucially, things still turn out OK even if you misjudge who is an idiot or who is badly mistaken – b/c if you misjudge them, all you do is invest fewer resources initially, but you don't block finding out what they know. You still offer a Path Forward (specifically that they meet some high discussion standards) and if they're actually good and have a good point, then they can go ahead and say it with a permalink, in public, with all quotes being sourced and accurate, etc. (I particularly like asking for simple things which are easy to judge objectively, like those, but there are other, harder things you can reasonably ask for, which I think you picked up on in your judgement of PF as demanding. Like you can ask people to address a reference that you take responsibility for.)

BTW I find that merely asking people to format email quoting correctly is enough barrier to entry to keep most idiots out of the FI forum. (Forum culture is important too.) I like this type of gating because, unlike moderators making arbitrary/subjective/debatable judgements about things like discussion quality, it's a very objective issue. Anyone who cares to post can post correctly and say any ideas they want. And it lacks the unpredictability of moderation (it can be hard to guess what moderators won't like). This doesn't filter on ideas, just on being willing to put in a bit of effort for something that is productive and useful anyway – proper use of nested quoting improves discussions and is worth doing and is something all the regulars actively want to do. (And btw if someone really wants to discuss without dealing with formatting, they can use e.g. my blog comments, which are unmoderated and don't expect email quoting, so there are still other options.)
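
For readers unfamiliar with the convention, properly nested plain-text quoting looks roughly like this (a made-up exchange for illustration):

    > > Should new ideas be tested empirically?
    >
    > Yes, for the ones that make empirical claims.

    Agreed – and non-empirical criticism handles the rest.

Each level of > marks one message further back in the thread, and each reply goes directly under the exact text it responds to, which keeps long discussions easy to follow.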

It is written very clearly, and also wants to make me scream inside

Why does it make you want to scream?

Is it related to moral judgement? I'm an Objectivist in addition to a Critical Rationalist. Ayn Rand opens ch. 8 of The Virtue of Selfishness, "How Does One Lead a Rational Life in an Irrational Society?", with this paragraph:

I will confine my answer to a single, fundamental aspect of this question. I will name only one principle, the opposite of the idea which is so prevalent today and which is responsible for the spread of evil in the world. That principle is: One must never fail to pronounce moral judgment.

There's a lot of reasoning for this which goes beyond the one essay. At present, I'm just raising it as a possible area of disagreement.

There are also reasons about objective truth (which are part of both CR and Objectivism, rather than only Objectivism).

The issue isn't just moral judgement but also what Objectivism calls "sanction": I'm unwilling to say things like "It's ok if you don't do Paths Forward, you're only human, I forgive you." My refusal to actively do anti-judgement stuff, and approve of PF alternatives, is maybe more important than any negative judgements I've made, implied or stated.

It hits all the right notes motivation-wise, and a very high number of Rationality Virtues. Curiosity, check. Relinquishment, check. Lightness, check. Argument, triple-check.

Yudkowsky writes about rational virtues:

The fifth virtue is argument. Those who wish to fail must first prevent their friends from helping them.

Haha, yeah, no wonder a triple check on that one :)

Simplicity, check. Perfectionism, check. Precision, check. Scholarship, check. Evenness, humility, precision, Void... nope nope nope PF is much harsher than needed when presented with negative evidence, treating them as irreparable flaws (that's for evenness)

They are not treated as irreparable – you can try to create a variant idea which has the flaw fixed. Sometimes you will succeed at this pretty easily, sometimes it’s hard but you manage it, and sometimes you decide to give up on fixing an idea and try another approach. You don’t know in advance how fixable ideas are (you can’t predict the future growth of knowledge) – you have to actually try to create a correct variant idea to see how doable that is.

Some mistakes are quite easy and fast to fix – and it’s good to actually fix those, not just assume they don’t matter much. You can’t reliably predict mistake fixability in advance of fixing it. Also the fixed idea is better, and this sometimes helps lead to new progress, and you can’t predict in advance how helpful that will be. If you fix a bunch of “small” mistakes, you have a different idea now and a new problem situation. That’s better (to some unknown degree) for building on, and there’s basically no reason not to do this. The benefit of fixing mistakes in general, while unpredictable, seems to be roughly proportional to the effort (if it’s hard to fix, then it’s more important, so fixing it has more value). Typically, the small mistakes are a small effort to fix, so they’re still cost-effective to fix.

That fixing mistakes creates a better situation fits with Yudkowsky’s virtue of perfectionism.

(If you think you know how to fix a mistake but it’d be too resource-expensive and unimportant, what you can do instead is change the problem. Say “You know what, we don’t need to solve that with infinite precision. Let’s just define the problem we’re solving as being to get this right within +/- 10%. Then the idea we already have is a correct solution with no additional effort. And solving this easier problem is good enough for our goal. If no one has any criticism of that, then we’ll proceed with it...”)

Sometimes I talk about variant ideas as new ideas (so the original is refuted, but the new one is separate) rather than as modifying and rescuing a previous idea. This is a terminology and perspective issue – “modifying” and “creating” are actually basically the same thing with different emphasis. Regardless of terminology, substantively, some criticized flaws in ideas are repairable via either modifying or creating to get a variant idea with the same main points but without the flaw.

PF expects to have errors all over the place and act to correct them, but places a burden on everyone else that doesn't (that's for humility)

Is saying people should be rational burdensome and unhumble?

According to Yudkowsky's essay on rational virtues, the point of humility is to take concrete steps to deal with your own fallibility. That is the main point of PF!

PF shifts from True to False by sorting everything through contexts in a discrete way.

The binary (true or false) viewpoint is my main modification to Popper and Deutsch. They both have elements of it mixed in, but I make it comprehensive and emphasized. I consider this modification to improve Critical Rationalism (CR) according to CR's own framework. It's a reform within the tradition rather than a rival view. I think it fits the goals and intentions of CR, while fixing some problems.

I made educational material (6 hours of video, 75 pages of writing) explaining this stuff which I sell for $400. Info here:

https://yesornophilosophy.com

I also have many relevant, free blog posts gathered at:

http://curi.us/1595-rationally-resolving-conflicts-of-ideas

Gyrodiot, since I appreciated the thought you put into FI and PF, I'll make you an offer to facilitate further discussion:

If you'd like to come discuss Yes or No Philosophy at the FI forum, and you want to understand more about my thinking, I will give you a 90% discount code for Yes or No Philosophy. Email [email protected] if interested.

Incertitude is lack of knowledge, which is problematic (that's for precision)

The clarity/precision/certitude you need is dependent on the problem (or the context if you don’t bundle all of the context into the problem). What is your goal and what are the appropriate standards for achieving that goal? Good enough may be good enough, depending on what you’re doing.

Extra precision (or something else) is generally bad b/c it takes extra work for no benefit.

Frequently, things like lack of clarity are bad and ruin problem solving (cuz e.g. it’s ambiguous whether the solution means to take action X or action Y). But some limited lack of clarity, lower precision, hesitation, whatever, can be fine if it’s restricted to some bounded areas that don’t need to be better for solving this particular problem.

Also, about the precision virtue, Yudkowsky writes,

The tenth virtue is precision. One comes and says: The quantity is between 1 and 100. Another says: the quantity is between 40 and 50. If the quantity is 42 they are both correct, but the second prediction was more useful and exposed itself to a stricter test.

FI/PF has no issue with this. You can specify required precision (e.g. within plus or minus ten) in the problem. Or you can find you have multiple correct solutions, and then consider some more ambitious problems to help you differentiate between them. (See the decision chart stuff in Yes or No Philosophy.)

PF posits time and again that "if you're not achieving your goals, well first that's because you're not faillibilist". Which is... quite too meta-level a claim (that's for the Void)

Please don't put non-quotes in quote marks. The word "goal" isn't even in the main PF essay.

I'll offer you a kinda similar but different claim: there's no need to be stuck and not make progress in life. That's unnecessary, tragic, and avoidable. Knowing about fallibilism, PF, and some other already-known things is enough that you don't have to be stuck. That doesn't mean you will achieve any particular goal in any particular timeframe. But what you can do is have a good life: keep learning things, making progress, achieving some goals, acting on non-refuted ideas. And there's no need to suffer.

For more on these topics, see the FI discussion of coercion and the BoI view on unbounded progress:

http://beginningofinfinity.com

(David Deutsch, author of BoI, is a Popperian and a founder of Taking Children Seriously (TCS), a parenting/education philosophy created by applying Critical Rationalism; it's where the ideas about coercion come from. I developed the specific method of creating a succession of meta problems to help formalize and clarify some TCS ideas.)

I don't see how PF violates the Void virtue. (Aspects of the Void, btw, relate to Popper's comments on "Who should rule?", cuz part of what Yudkowsky is saying in that section is: don't enshrine some criterion of rationality to rule. My perspective is, instead of enshrining a ruler or a ruling idea, make error correction itself the most primary thing. Yudkowsky says something that sorta sounds like: care about the truth instead of your current conception of the truth – which happily does help keep it possible to correct errors in your current conception.)

(this last line is awkward. The rationalist view may consider that rationalists should win, but not winning isn't necessarily a failure of rationality)

That depends on what you mean by winning. I'm guessing I agree with it the way you mean it. I agree that all kinds of bad things can happen to you, and stuff can go wrong in your life, without it necessarily being your fault.

(this needs unpacking the definition of winning and I'm digging myself deeper I should stop)

Why should you stop?


Justin Mallone replied to Gyrodiot:

hey gyrodiot feel free to join Fallible Ideas list and post your thoughts on PF. also, could i have your permission to share your thoughts with Elliot? (I can delete what other ppl said). note that I imagine elliot would want to reply publicly so keep that in mind.

Gyrodiot replied:

@JUSTINCEO You can share my words (only mine) if you want, with this addition: I'm positive I didn't do justice to FI (particularly in the last part, which isn't clear at all). I'll be happy to read Elliot's comments on this and update in consequence, but I'm not sure I will take time to answer further.

I find we are motivated by the same "burning desire to know" (sounds very corny) and disagree strongly about method. I find, personally, the LW "school" more practically useful, strikes a good balance for me between rigor, ease of use, and ability to coordinate around.

Gyrodiot, I hope you'll reconsider and reply in blog comments, on FI, or on Less Wrong's forum. Also note: if Paths Forward is correct, then the LW way does not work well. Isn't that risk of error worth some serious attention? Plus isn't it fun to take some time to seriously understand a rival philosophy which you see some rational merit in, and see what you can learn from it (even if you end up disagreeing, you could still take away some parts)?


For those interested, here are more sources on the rationality virtues. I think they're interesting and mostly good:

https://wiki.lesswrong.com/wiki/Virtues_of_rationality

https://alexvermeer.com/the-twelve-virtues-of-rationality/

http://madmikesamerica.com/2011/05/the-twelve-virtues-of-rationality/

That last one says, of Evenness:

With the previous three in mind, we must all be cautious about our demands.

Maybe. Depends on how "cautious" would be clarified with more precision. This could be interpreted to mean something I agree with, but also there are a lot of ways to interpret it that I disagree with.

I also think Occam's Razor (mentioned in that last link, not explicitly in the Yudkowsky essay), while having some significant correctness to it, is overrated and is open to specifications of details that I disagree with.

And I disagree with the "burden of proof" idea (I cover this in Yes or No Philosophy) which Yudkowsky mentions in Evenness.

The biggest disagreement is empiricism. (See the criticism of that in BoI, and FoR ch1. You may have picked up on this disagreement already from the CR stuff.)


Elliot Temple | Permalink | Messages (2)

Empiricism and Instrumentalism

Gyrodiot commented defending instrumentalism.

I'm going to clarify what I mean about "instrumentalism" and "empiricism". I don't know if we actually disagree or there's a misunderstanding.

FI has somewhat of a mixed view here (reason and observation are both great), and objects to an extreme focus on one or the other. CR and Objectivism both say you don't have to, and should not, choose between reason and observation. We object to the strong "rationalists" who want to sit in an armchair and reason out what reality is like without doing any science, and we object to the strong "empiricists" who want to look at reality and do science without thinking.

Instrumentalism means that theories are only or primarily instruments for prediction, with little or no explanation or philosophical thought. Our view is that observation and prediction are great and valuable, but aren't alone in being so great and valuable. Some important ideas – such as the theory of epistemology itself – are primarily non-empirical.

There's a way some people try to make philosophy empirical. It's: try different approaches and see what the results are (and try to predict the results of acting according to different philosophies of science). But how do you judge the results? What's a good result? More accurate scientific predictions, you say. But which ones? How do you decide which predictions to value more than others? Or do you say every prediction is equal and go for sheer quantity? If quantity, why, and how do you address that with only empiricism and no philosophical arguments? And you want more accurate predictions according to which measures? (E.g. do you value lower error size variance or lower error size mean, or one of the infinitely many possible metrics that counts both of them in some way?)
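
To make the metric point concrete, here's a toy computation of my own (made-up numbers): two predictors can each come out "more accurate" depending on which error metric you pick, and picking the metric is itself a philosophical judgement.

    # Which predictor is "more accurate"? It depends on the metric.
    from statistics import mean, variance

    errors_a = [1.0, 1.0, 1.0, 1.0]  # steady, modest errors
    errors_b = [0.0, 0.0, 0.0, 3.0]  # usually perfect, occasionally way off

    print(mean(errors_a), mean(errors_b))          # 1.0 vs 0.75: B wins on mean error
    print(variance(errors_a), variance(errors_b))  # 0.0 vs 2.25: A wins on error variance

No observation settles which metric matters more; that depends on explanations about what the predictions are for.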

How do you know which observations to make, and which portion of the available facts to record about what you observe? How do you interpret those observations? Is the full answer just to predict which way of making observations will lead to the most correct predictions later on? But how do you predict that? How do you know which data will turn out useful to science? My answer is you need explanations of things like which problems science is currently working on, and why, and the nature of those problems – these things help guide you in deciding what observations are relevant.

Here are terminology quotes from BoI:

Instrumentalism   The misconception that science cannot describe reality, only predict outcomes of observations.

Note the "cannot" and "only".

Empiricism   The misconception that we ‘derive’ all our knowledge from sensory experience.

Note the "all" and the "derive". "Derive" refers to something like: take a set of observation data (and some models and formulas with no explanations, philosophy or conceptual thinking) and somehow derive all human knowledge, of all types (even poetry), from that. But all you can get that way are correlations and pattern-matching (to get causality instead of correlation you have to come up with explanations about causes and use types of criticism other than "that contradicts the data"). And there are infinitely many patterns fitting any data set, of which infinitely many both will and won't hold in the finite future, so how do you choose if not with philosophy? By assuming whichever patterns are computable by the shortest computer programs are the correct ones? If you do that, you're going to be unnecessarily wrong in many cases (because that way of prediction is often wrong, not just in cases where we had no clue, but also in cases when explanatory philosophical thinking could have done better). And anyway how do you use empiricism to decide to favor shorter computer programs? That's a philosophy claim, open to critical philosophy debate (rather than just being settled by science), of exactly the kind empiricism was claiming to do without.

Finally I'll comment on Yudkowsky on the virtue of empiricism:

The sixth virtue is empiricism. The roots of knowledge are in observation and its fruit is prediction.

I disagree about "roots" because, as Popper explained, theories are prior to observations. You need a concept of what you're looking for, by what methods, before you can fruitfully observe. Observation has to be selective (like it or not, there's too much data to record literally all of it) and goal-directed (instead of observing randomly). So goals and ideas about observation method precede observation as "roots" of knowledge.

Note: this sense of preceding does not grant debating priority. Observations may contradict preceding ideas and cause the preceding ideas to be rejected.

And note: observations aren't infallible either: observations can be questioned and criticized because, although reality itself never lies, our ideas that precede and govern observation (like about correct observational methods) can be mistaken.

Do not ask which beliefs to profess, but which experiences to anticipate.

Not all beliefs are about experience. E.g. if you could fully predict all the results of your actions, there would still be an unanswered moral question about which results you should prefer or value, which are morally better.

Always know which difference of experience you argue about.

I'd agree with often but not always. Which experience is the debate about instrumentalism and empiricism about?


See also my additional comments to Gyrodiot about this.


Elliot Temple | Permalink | Messages (0)

Accepting vs. Preferring Theories – Reply to David Deutsch

David Deutsch has some misconceptions about epistemology. I explained the issue on Twitter.

I've reproduced the important part below. Quotes are DD, regular text is me.

There's no such thing as 'acceptance' of a theory into the realm of science. Theories are conjectures and remain so. (Popper, Miller.)

We don't accept theories "into the realm of science", we tentatively accept them as fallible, conjectural, non-refuted solutions to problems (in contexts).

But there's no such thing as rejection either. Critical preference (Popper) refers to the state of a debate—often complex, inconsistent, and transient.

Some of them [theories] are preferred (for some purposes) because they seem to have survived criticism that their rivals haven't. That's not the same as having been accepted—even tentatively. I use quantum theory to understand the world, yet am sure it's false.

Tentatively accepting an idea (for a problem context) doesn't mean accepting it as true, so "sure it's false" doesn't contradict acceptance. Acceptance means deciding/evaluating it's non-refuted, rivals are refuted, and you will now act/believe/etc (pending reason to reconsider).

Acceptance deals with the decision point where you move past evaluating the theory: you reach a conclusion (for now, tentatively). You don't consider things forever; sometimes you make judgements and move on to thinking about other things. Ofc it's fluid and we often revisit.

Acceptance is a clearer word than preference for up-or-down, yes-or-no decisions. Preference often means believing X is better than Y, rather than judging X to have zero flaws (that you know of) & judging Y to be decisively flawed, no good at all (variant of Y could ofc still work)

Acceptance makes sense as a contrast against (tentative) rejection. Preference makes more sense if u think u have a bunch of ideas which u evaluate as having different degrees of goodness, & u prefer the one that currently has the highest score/support/justification/authority.


Update: DD responded, sorta:

You are blocked from following @DavidDeutschOxf and viewing @DavidDeutschOxf's Tweets.


Update: April 2019:

DD twitter blocked Alan, maybe for this blog post critical of LT:

https://conjecturesandrefutations.com/2019/03/16/lulie-tanett-vs-critical-rationalism/

DD twitter blocked Justin, maybe for this tweet critical of LT:

https://twitter.com/j_mallone/status/1107349577538158592


Elliot Temple | Permalink | Messages (8)