"Mental Illness" Discussion with Andrew Adams

From Twitter.

Andrew Adams

What are your thoughts on having harsher regulations? E.g., making it harder for the mentally ill to access guns, etc.?

Elliot Temple

I favor much less regulation of guns. I am especially opposed to "mental illness" laws: http://szasz.com/manifesto.html
http://fallibleideas.com/books#szasz

Andrew Adams

I will read up on him and will get back to you. In the meanwhile I'll ask you this question: if mental disorders could be detected like heart diseases or kidney diseases are detected, would you favor regulation on people who show signs of severe unstable moods or psychopathy?

[quoting from Szasz manifesto] "Classifying thoughts, feelings, and behaviors as diseases is a logical and semantic error". I have to disagree. The brain is a biological organ and the diseases related to it are well studied and well known. Would you give a gun to someone who showed signs of psychopathy or borderline personality disorder, for example?

Elliot Temple

i am open to regulation of medically detectable defects, like requiring someone with testably bad vision to wear glasses when driving.

it would need to be an actual medical test like for cancer, not a person judging someone's mood. preferably detectable at autopsy.

Andrew Adams

I'm glad we agree on that. I come from a human behavioral biology background, and what Szasz claims seems bizarre. I will read his writings.

Elliot Temple

Szasz agrees with that too, FYI. Though he would point out that would be a physical brain illness, not an illness of a mind.

Minds have bad ideas, which are different than illnesses.

Andrew Adams

And why can't the brain, which is a biological organ like the heart and kidney, cause a problem that can be called a disease? It seems to me that the fact that we can't diagnose mental diseases the way we diagnose other organs, due to the brain's complexity, makes you believe we can't have mental disorders. Imagine you had diabetes, and for some reason science couldn't yet see what's going on, but you certainly had all the symptoms that a diabetic person has. Wouldn't it be absurd to not call that a disease?

Elliot Temple

There are brain diseases, but they aren't "mental illnesses", they are "brain illnesses". And schizophrenia and autism are myths.

Though, as with many myths, there is some true element: some people behave in socially-disapproved ways and others want to stigmatize them.

With depression: true that some people are VERY sad and have MAJOR life problems. False that their bad situation, bad coping ideas, etc., is illness

it doesn't "make" me believe anything, it's one problematic issue, among many, for psychiatry.

"mind disorders" (bad ideas, a disordered mind) is a problem, but is not the same category of thing as cancer.

i think this is too hard to follow on twitter with every message divided into multiple tweets. could you reply on my blog comments?

Andrew Adams

If you hallucinate and hear sounds in your head, is that due to bad ideas? Or the way your brain is wired?

Sure. Under the same page you posted the longer answer?

Elliot Temple

i made a fresh page: http://curi.us/2047-discussion

Andrew Adams

Can we please communicate through the direct message? That's much faster and we can engage each other.

Elliot Temple

i would prefer the blog. i will talk here if you're unwilling and you give me permission to quote anything here.

Andrew Adams

You can quote anything you want.

So let's see. You believe the brain is a biological organ, right? No souls and superstition there, correct?

Elliot Temple

yes

Andrew Adams

And a biological thing can get fucked up, as we are seeing with all the other things. Right?

Elliot Temple

the mind is software and the details of the brain are irrelevant to the mind in the same way that you can run the same software on different PCs.

yes brain damage is a thing, just like e.g. a ram stick going bad.

Andrew Adams

Okay this is where I think you're wrong, with all due respect. I'll explain. The software is there because of the way the brain is wired. It's not something separate built on top of those neurons. Your behavior is due to the wiring. That's why if the wiring gets screwed up, you have no way to upgrade the software. Those two are not independent.

It's like saying to a diabetic person: man, stop it with the insulin thing. It's getting really annoying.

Elliot Temple

How did you find me? Have you read David Deutsch?

Andrew Adams

I found you through Deutsch. I haven't read his books yet, but I've listened to his interviews and am somewhat aware of his positions.

Elliot Temple

The computer's behavior is due to the data on the disks, the wiring of the CPU, etc. It's not something separate either. If you hit computer components with a hammer, it affects the software that's running.

So the cases remain analogous.

hardware and software bugs are different things. both exist. right?

and there are also features which some people don't like and call bugs

Andrew Adams

But software is just instructions for the cpu, which is the hardware. If you damage the cpu, the hardware is not gonna perform as usual. Do you agree?

The software I meant

Elliot Temple

i agree that hitting a cpu or brain with a hammer can screw up the software currently running.

Andrew Adams

The software is not gonna perform as usual.

Okay. Then if the wiring is screwed up due to anomalies that we observe throughout biological organisms, does that make what's happening an illness?

Elliot Temple

if there's physical brain damage, that's an illness or injury.

Andrew Adams

But it doesn't have to be a hammer hitting your brain. It's more subtle than that. The wiring can be screwed up.

It's a biological thing again. Anomalies exist.

Elliot Temple

Yes, for example in my non-expert understanding, Alzheimer's involves some brain damage which causes some memory loss issues.

Andrew Adams

You don't have to hit it from outside to screw it up.

Sure

It's the wiring that gets screwed up, hence bugs in the software.

Environment can influence your wiring.

Elliot Temple

for example, an environment with radiation. sure.

Andrew Adams

But some disorders have been proven to have, like, an 80% genetic cause.

Elliot Temple

correlation isn't causation.

Andrew Adams

It's a causation study.

Elliot Temple

have you read the studies you're referring to?

Andrew Adams

yes

Elliot Temple

ok link one you think contains no flaws.

which involves a typical "mental illness"

Andrew Adams

Behavioral geneticists did experiments with schizophrenia. They did it on twins that were adopted at birth.

I'll find the links and send it to you

Elliot Temple

i've read studies too, as has Szasz. Schizophrenia will work fine.

just one, please.

Andrew Adams

I mean even if it's done by environment, doesn't make your argument stronger. Do you agree? No matter the cause, something is screwed up up there.

Sure.

Elliot Temple

i don't think i've stated my case. i began earlier by saying that hardware and software problems are different categories of things, and both are real. do you agree?

Andrew Adams

I don't think software is separate from hardware in the brain. All the behaviors we have are due to neurons' connections to each other. As I said, software is just a set of instructions for the CPU, but the difference in the brain is that the software is not programmed separately, but hardwired in the neurons. Say you're a kind person, right? I could theoretically open your mind, change a few neurons, and you become evil. We could do that if we had the technology, right?

And again, whether the behaviour is shaped by genes or environment is irrelevant.

Elliot Temple

you can also open up a computer and edit stuff to change what it does. that's the same thing.

Andrew Adams

So do you agree neurons getting screwed up is not really different from brain damage?

Elliot Temple

you can arrange your neurons in a bad configuration by forming bad ideas. you can make unwise life decisions, believe a bunch of crap from a cult, and it physically affects the arrangement of your neurons. this – people having ideas, for better or worse – is different than Alzheimer's or brain cancer.

Andrew Adams

No!!! You don't have to be an evil person to have your wiring screwed up!

Elliot Temple

the data in a computer can be screwed up due to a hard disk malfunction or due to software that writes bad data. one is a hardware error, one is a software error. they are different things.
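
The hardware-error/software-error distinction here can be sketched concretely. The following toy Python example is illustrative only: the data, the buggy save routine, and the simulated bit flip are all made up, but they show how identical-looking corruption can come from two different categories of cause.

```python
good_data = bytes([1, 2, 3, 4])

# Software error: a buggy routine writes the wrong values to storage.
def buggy_save(data):
    return bytes(b + 1 for b in data)  # off-by-one bug corrupts every byte

# Hardware error: the write was correct, but a bit later flips on the disk.
def bit_flip(data, byte_index, bit):
    corrupted = bytearray(data)
    corrupted[byte_index] ^= 1 << bit
    return bytes(corrupted)

from_software = buggy_save(good_data)
from_hardware = bit_flip(good_data, 0, 0)

# Both copies now differ from the original, but the fixes live in different
# places: correct the code in one case, repair the physical medium in the other.
assert from_software != good_data
assert from_hardware != good_data
```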

Andrew Adams

Why do you assume evil things are the only environmental factors that cause brain problems?

Elliot Temple

i didn't assume that. i'm trying to say that bad ideas exist. you seem to be resisting this and saying it's all just neurons.

i'm trying to use the simplest, most clearcut cases as initial examples.

people get indoctrinated into cults, and that's not an illness. right?

Andrew Adams

I'm not denying people can believe in bad ideologies and get brainwashed. But for some it's just the wiring that gets screwed up, whether genetically, or by a certain environment, or by nutrition, for example.

Do all people that have diabetes have had bad diets in their life?

Absolutely not.

It's sometimes merely genetic.

Elliot Temple

i'm not trying to say all people, at this time. i'm trying to establish a category exists and some stuff is in it. some people are healthy and join a cult and it's a big mistake and it has some physical effect on their neurons (e.g. they form memories of cult ideas, which then physically exist in their brain), but it's still not an illness or brain damage. it's a different thing. right?

Andrew Adams

If you were born with one of your neurons, for example, only one centimeter to one side, you could become a more violent person. Do you agree with that?

It's just biology.

There was a man who murdered his whole family and then went to the street and mass murdered a bunch of people. They opened his brain for autopsy and found out he had two tumors in his brain.

and tumors are not the only thing that can cause that.

Elliot Temple

can you answer my question?

Andrew Adams

You can be genetically born with some kind of screwed up neurons.

I answered it. I agree that ideologies can change your neurons.

But those are not the only cases.

Some people can't just help it.

It's like saying to a diabetic person to stop it with his insulin

Elliot Temple

And you agree that ideologies are not brain damage or illness, even when a neuron changes?

Andrew Adams

brain damage IS a change of neurons

Elliot Temple

so you think that all people adopting bad ideologies count as brain damaged and ill?

Andrew Adams

Not all changes are brain damage, but brain damage is a change in neurons

You can change your neurons and become too generous and kind

Elliot Temple

so you agree that a person can adopt a cult ideology, have neurons change, but they are not ill and are not brain damaged?

Andrew Adams

First, the fact that they have done evil things could be due to the way their neurons were wired in the first place. I mean, you couldn't choose your original brain wiring, could you? Second, there is a difference between adopting cult-like behavior and the diseases that are categorized as mental illness. People get moody, see things, get depressed, get anxious. These are not things you see on TV or in cults and adopt.

Elliot Temple

Why won't you give a straight answer?

Andrew Adams

The fact that they first joined the cult is due to the wiring of their brain.

Elliot Temple

do you think most people are brain-damaged or not?

Andrew Adams

Yeah

By that definition

I don't believe in free will

You are nothing but the wiring of your brain

Elliot Temple

do you think most people have brain illnesses/diseases? and so you would call most people "mentally ill"?

Andrew Adams

and 90% of the environment you grew up in you didn't choose

Most people have different wirings, and most don't lead to extreme behavior, but some of them are extreme. So all people are mentally ill, but only some are on the extreme side.

There is no such thing as perfect wiring

Elliot Temple

you're not using words in the way psychiatry in general does, nor the way Szasz does. this makes the discussion difficult.

Andrew Adams

Some are lucky and don't get bad wirings due to anomalies.

Some do.

Do you believe in genetics?

Elliot Temple

i believe i have genes.

that question isn't very clear.

Andrew Adams

and do you believe genes determine the wiring of your initial brain?

Elliot Temple

mostly, yes. there may be some other factors in the womb.

Andrew Adams

prenatal effects true. Which you didn't choose.

So if I'm a person who by chance was born with screwed-up wiring,

am I considered ill if my behavior leads to extremely bad outcomes that are hurtful?

Elliot Temple

are you a native English speaker?

Andrew Adams

No

I'm typing very fast too; my spelling and grammar are not normally this bad

Elliot Temple

I don't think your genes control your whole life. I think people make decisions in their life, and they're responsible for lots of what happens in their life.

Andrew Adams

But the wiring that you originally inherit is genetic, right?

Elliot Temple

Your genes create an initial brain with an intelligent mind. They set that up. If they didn't do that, you'd be screwed. But once you have that, then you have a chance to think for yourself.

The operation of your intelligent mind, not your genes, controls most of what happens in your life, such as what ideas you accept.

To understand a person's life, you need to analyze how intelligence works, rather than genes.

And to know much about a person, you usually need to look at their ideas not their neurons.

Andrew Adams

But you are denying that the early years of your life and the original wiring can have a huge impact.

If you were born with a set of neurons that made you a little more aggressive in school, or gave you a little less IQ, or made you a little more depressed.

Elliot Temple

The original wiring has the impact: creates intelligence software. Your early years have a big impact because your intelligence is actively learning and thinking during that time.

Andrew Adams

Exactly

Elliot Temple

IQ is a myth.

Andrew Adams

Did you choose to be born to your family?

Elliot Temple

no.

Andrew Adams

So those crucial early years that you didn't have control over may set your neurons up in a way that can lead you to join a cult in the future. Or the way your original neurons were determined by your genes.

Elliot Temple

Having bad parents is hard and I think they can have some partial responsibility for what their children do, especially at younger ages.

However, you can still make good life choices even if you have bad parents. Especially once you're an adult and free to control your own life.

Andrew Adams

Is it possible that someone is born with a brain that is genetically wired a little screwed up?

Elliot Temple

You have power over what happens in your life. Everything isn't determined by fate.

Andrew Adams

Not fate, but genes and the environment you didn't choose early in life.

Elliot Temple

I don't think anyone is born a little screwed up, no. Either you have functioning intelligence software or you don't. There's no such thing as 95% intelligent.

That's not Szasz's idea, btw. It was developed by David Deutsch and me.

Andrew Adams

What kind of reasoning is that? Are all people either the same height or midgets?

The brain is biological

Elliot Temple

It has to do with universality, which is covered in DD's books.

Andrew Adams

What universality?

Universality of computation?

Elliot Temple

there are other types of universality besides computation, such as universal knowledge creators (intelligences).

Andrew Adams

If you were born autistic, could you be the person you are now?

Elliot Temple

i think autism is a myth.

Andrew Adams

In what sense?

Elliot Temple

some parents don't like their children, and fight with them. they call those children "autistic" to stigmatize them.

Andrew Adams

What?! Are you serious?

Elliot Temple

it has nothing to do with a brain problem. it's just a disagreement, a moral conflict.

this is DD's view too.

Andrew Adams

Have you met an autistic person?

Elliot Temple

i have met a person who has been called autistic, yes.

Andrew Adams

Well, attributing all your ideas to DD doesn't make them right.

So you think a moral conflict caused that?

Elliot Temple

why don't you read this and point out which statement is false? http://web.archive.org/web/20030620082122/http://www.tcs.ac:80/Articles/DDAspidistraSyndrome.html

DD's views are not automatically true, but you shouldn't call them unserious.

Andrew Adams

Asperger is not autism

Elliot Temple

so do you think DD is correct about everything in that article?

Andrew Adams

what year was this written?

Elliot Temple

1997 like it says

it doesn't matter.

Andrew Adams

It matters!

https://www.webmd.com/brain/autism/tc/aspergers-syndrome-symptoms#1

Read this

20 years ago

Elliot Temple

what about it?

Andrew Adams

Totally different symptoms than what DD was mocking 20 years ago. A lot has changed.

Elliot Temple

no, it's the same thing as before, e.g. "Talk a lot, usually about a favorite subject. One-sided conversations are common. Internal thoughts are often verbalized." is exactly the kind of thing DD was mocking.

how is talking a lot about your favorite subject an illness?

Andrew Adams

Asperger's, they say, is a mild case of autism, so the symptoms are watered-down symptoms of autism. If you have ever seen an autistic child, how could you say it's due to a moral conflict?

https://www.scientificamerican.com/article/discovery-of-18-new-autism-linked-genes-may-point-to-new-treatments/

Scientific American, 2017

Elliot Temple

what exactly do you think could not be a parent-child conflict?

Andrew Adams

regarding autism

Elliot Temple

by autism-linked genes they mean correlated. as before, if you have a causation study, provide a link.

Andrew Adams

Let's say even it is due to parents. Does that make the person ill due to his parent's actions that he didn't choose?

Elliot Temple

in order for something to be shown to cause autism, autism would also need to be carefully defined.

Andrew Adams

How about the way they all look and act?

Elliot Temple

i don't see what hating your parents, and not being a total conformist, has to do with being ill.

Andrew Adams

that's due to moral conflict too?

Elliot Temple

i don't know what looks and actions you're referring to.

Andrew Adams

But you were saying everyone can choose the right path and stuff

Elliot Temple

in my understanding, the people called "autistic" look and act in a wide variety of different ways. they aren't all the same.

Andrew Adams

Just watch autistic kids on youtube

Elliot Temple

i have

Andrew Adams

And you think they are all due to bad parenting?

Elliot Temple

in short, yes.

Andrew Adams

Okay, let's even say you're right and autism is 0% genetic and due to moral conflict of parents.

Elliot Temple

and in many cases, i don't think anything is wrong with the kid.

i think the kid is fine and the parent just doesn't like him.

Andrew Adams

They can't make eye contact, they don't react facially or understand emotions, they can't think big picture.

I've met them and they all had the same problems.

You can't deny it's a problem

Elliot Temple

some people don't like to make the socially normal amount of eye contact. i don't see anything wrong with that.

Andrew Adams

Do you believe they are abnormal?

Elliot Temple

some people called autistic seem completely normal in the videos on youtube. others seem abnormal, yes, but i don't see anything bad about not making eye contact.

i don't think everyone should be a conformist who spends their whole life trying really hard to fit in and be normal.

learning what facial expressions to make, in what situations, so that people think you're normal is a skill. some people are more interested in other skills.

Andrew Adams

Is there any mental disease that you attribute to genetics?

Elliot Temple

no. all the ones with genetic, disease or injury causes are already called regular illnesses, like Alzheimer's, not "mental illnesses" like schizophrenia and autism

Andrew Adams

How about down syndrome?

Elliot Temple

that's a defective chromosome. regular illness.

Andrew Adams

So genes can get screwed up but not neurons in the brain?

Elliot Temple

people are very mean to down syndrome persons similar to how they treat "autistic" people, though. that part is similar.

Andrew Adams

So genes can get screwed up but not neurons in the brain?

Elliot Temple

bad ideas aren't caused by genes.

good ideas also aren't caused by genes. genes set up intelligence software. from there, you have to look at how intelligence and ideas work, not at genes.

it's like if you buy a house, you don't blame the construction workers for when you yell at your wife in the house.

the genes are the construction workers.

they built the brain in the first place, but that doesn't mean they're controlling it later.

Andrew Adams

So a gene can make you like a down syndrome kid, but the same gene structure that codes for neurons can't in any way make you more aggressive or psychopathic?

Why do you assume that?

Elliot Temple

i'm not assuming it, it's implied by what's currently known about epistemology, computation, science, etc

i've studied it extensively.

Andrew Adams

Send me a study that says genes have no effect on how the neurons function later in life

Elliot Temple

my argument doesn't consist of a study.

it consists of understanding concepts like universality.

and putting them together to help you analyze and interpret various evidence, studies, behaviors, etc

why don't you send me a correct genes cause (not correlated) mental illness study? you said you had one. i don't think they exist. prove me wrong?

Andrew Adams

http://psycnet.apa.org/record/1984-06924-001

Elliot Temple

if you want to understand my argument, you should start by reading DD, szasz and http://bactra.org/weblog/520.html

Andrew Adams

I just sent you a scientific study

You are taking two people and denying what the whole community of geneticists and neuroscientists believes

that genes have effects, if not most of the effect

You believe a down syndrome kid can be born like that due to a single gene, but the millions of genes that code for intelligence have no way of affecting the way neurons will function in future life

Elliot Temple

i'm not judging which people have how much authority, i'm looking at arguments

i've gotten a copy of the study you sent and will take a look

Andrew Adams

Just read the abstract

Elliot Temple

did you read the whole paper?

Andrew Adams

I'm not saying genes are 100%

Yeah. I even studied that in class at Stanford

Elliot Temple

ok, why would i only read the abstract?

i don't understand

Andrew Adams

but even if they are 5%

Ok read the whole thing if you want.

Elliot Temple

not everything comes in amounts. let's talk about how many houses are haunted by a ghost. you can't just say "well it may not be 100% but at least 5%"

Andrew Adams

But you are the one who says the effect is zero

I'm just saying even if it's 5% (and most scientists would laugh at you for saying it's zero), it can lead to mental illnesses that are not only caused by your actions.

The same way a single gene can cause down syndrome

why is the brain an exception to biology?

explain that to me?

Elliot Temple

i read the abstract. it says it's a meta correlation study. that's what "concordance" means. i also looked at the start and it doesn't attempt to define "schizophrenia".

i agree that many people would laugh at me. that's not an argument.

Andrew Adams

tell me why a single gene can cause down syndrome but can't affect neurons?

Elliot Temple

down syndrome is different than you think.

let's try to stick to one thing at a time. this study first.

Andrew Adams

No, you refused to give a study, so let's talk

Elliot Temple

i'm talking about the study you gave.

Andrew Adams

yeah what about it?

Elliot Temple

you said it was a causation (not correlation) study, but the abstract says it's a correlation study.

Andrew Adams

How is it correlation?

Elliot Temple

it studies concordances (correlations) between genes and being diagnosed with schizophrenia.

Andrew Adams

They separate twins at birth, and measure if they both get schizophrenia

how's that correlation?

Identical twins

But you didn't tell me how down syndrome is different

Elliot Temple

the point of a twin study is to say they have the same genes, so if they both get schizophrenia that's evidence that schizophrenia is caused by genes. right?

one thing at a time please.

Andrew Adams

Ok

yeah

Elliot Temple

and they separate them. cuz if they are raised by the same parents, you could blame the parents.

Andrew Adams

Sure

Or the environment

Elliot Temple

that's what a correlation study means.

it's saying "when X happened, Y happened".

or when X happened, Y is more likely to happen

X is having certain genes, and Y is being diagnosed with schizophrenia
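
What a twin study's concordance figure actually records can be sketched with code. The pairs below are made-up data, not figures from the study under discussion; the point is that concordance only counts co-occurrence ("when X happened, Y happened") without identifying any causal mechanism.

```python
# Hypothetical identical-twin pairs: (twin_a_diagnosed, twin_b_diagnosed).
pairs = [
    (True, True), (True, False), (False, False),
    (True, True), (False, False), (True, False),
]

# Pairwise concordance: among pairs where at least one twin is diagnosed,
# the fraction of pairs where both are.
affected = [p for p in pairs if p[0] or p[1]]
concordant = [p for p in affected if p[0] and p[1]]
rate = len(concordant) / len(affected)
print(f"pairwise concordance: {rate:.2f}")  # prints 0.50 for this made-up data
```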

Andrew Adams

So two identical twins get separated at birth, and both still get schizophrenia later on.

Is it chance that all those parents raised them schizophrenic?

Elliot Temple

it's: we wrote down when X happened, and wrote down when Y happened, and we analyzed the data, and then we found a correlation.

do you understand that this is a correlation study?

Andrew Adams

It's a correlation yeah. But controlled.

How can you explain this study?

Elliot Temple

ok, so why did you tell me it wasn't a correlation study?

did you not know what correlation means until today?

Andrew Adams

Why does it matter?

tell me

How do you explain the study?

Elliot Temple

you were mistaken. i'm trying to find out what happened.

Andrew Adams

Of course they're not gonna find out genes by twin separation

Elliot Temple

we're having a debate, and you were wrong, and then you don't want to talk about it at all?

Andrew Adams

So my mistake of saying causation ruins the whole study for you?

Explain the study to me

Elliot Temple

i'm trying to find out what's going on. why did this mistake happen? i don't know what it means yet.

if you didn't know what a correlation is before today, then your understanding of every study you read in the past is unreliable.

regarding correlations, you should read this: http://bactra.org/weblog/520.html

Andrew Adams

Explain the study

Elliot Temple

this article will explain to you a lot of things about correlations so you can understand the study better.

Andrew Adams

You lost the debate so you're trying to attack me personally

Explain the study

Elliot Temple

i can't explain it to you because you don't have the background knowledge to understand the issues. you need to learn more. when i tried to give explanations earlier, you didn't understand. you need to read more. read this to find the answer to the study: http://bactra.org/weblog/520.html

Andrew Adams

You're the one who said you don't go by scientific studies and you have your own rules.

Elliot Temple

i don't go by flawed studies when i know the flaws.

the webpage explains some of the flaws with correlation studies.

Andrew Adams

Explain the study to me then

Why when they're separated they still get schizophrenia?

Elliot Temple

there are lots of possibilities. i can't tell you exactly what happened. it's not known.

if you read the webpage, http://bactra.org/weblog/520.html then you can find out what some explanations of the study are.

Andrew Adams

And you still haven't answered why down syndrome can be caused by a single gene but genes have no effect on your neurons' functions in future life.

Elliot Temple

we're still talking about this.

and the answer has to do with universality, which you haven't read about yet.

Andrew Adams

Sure because you can't answer that

Tell me about it

Elliot Temple

i think you're getting angry and impatient, and it's very hard to give you a lecture covering thousands of pages of material, requiring years of study, when you're in a bad mood and hostile.

Andrew Adams

So neurons are not bound to biology because of universality?

Elliot Temple

minds are universal knowledge creators. there can't be minds with 99% of the universal repertoire b/c there is a jump to universality.

but you don't know what this means. it's in BoI.

Andrew Adams

Minds are just a biological organ that can get flawed due to genes.

Elliot Temple

that isn't a counter-argument. it doesn't say why my understanding of the jump to universality is wrong, or my epistemology is wrong.

Andrew Adams

So you can't summarize your universality argument in 2-3 sentences?

Elliot Temple

minds are universal knowledge creators. there can't be minds with 99% of the universal repertoire b/c there is a jump to universality.

that is 2 sentences.

Andrew Adams

Okay

Elliot Temple

i can't teach you the contents of the book BoI in 2-3 sentences.

Andrew Adams

Alright. Well, I enjoyed the conversation. I'll read that. I had no intention of fighting or something like that. And I don't debate to win.

I admit that my knowledge is limited and I can be wrong. So what you say might be right.

I'll read it

Elliot Temple

you should read this to learn about correlations http://bactra.org/weblog/520.html

it's very important to this field.

Andrew Adams

Are you angry?

Elliot Temple

no

Andrew Adams

Okay

Elliot Temple

in a gene-environment interaction, sometimes it wouldn't happen at all unless BOTH the gene and that part of the environment were there. in that case, it's incorrect to say the gene causes 40% of it. it couldn't happen at all without the environmental factor. what you have to do is figure out what the gene actually does, what part of the environment is involved, and what the causal mechanism is.

the problem with the twin studies is they don't do this. they don't know the answer.

plus they are correlating with schizophrenia diagnoses, which is different than schizophrenia (which isn't even defined)

there are no studies which do this with autism or schizophrenia. all the published studies are just correlations without understanding it.

an example of a gene-environment interaction is: a gene makes infants cry more during the first 3 months, and then does nothing. parents in our culture are meaner to infants that cry more. this meanness results in higher rates of ADHD diagnoses in school later. correlation studies would report this as finding a gene for ADHD, but that's incorrect.
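
That ADHD example can be turned into a toy simulation. Every rate below is invented for illustration; the point is only that a hypothetical gene with no direct effect on diagnosis still ends up correlated with diagnosis, because the causal path runs through the environment.

```python
import random

random.seed(0)  # deterministic toy run

def simulate(n=100_000):
    with_g = without_g = diag_with_g = diag_without_g = 0
    for _ in range(n):
        gene = random.random() < 0.5                        # half carry the gene
        cries_more = gene                                   # the gene's ONLY effect
        mean_parent = cries_more and random.random() < 0.8  # cultural response
        # Diagnosis depends only on the environment (parental meanness),
        # never directly on the gene.
        diagnosed = random.random() < (0.30 if mean_parent else 0.05)
        if gene:
            with_g += 1
            diag_with_g += diagnosed
        else:
            without_g += 1
            diag_without_g += diagnosed
    return diag_with_g / with_g, diag_without_g / without_g

rate_gene, rate_no_gene = simulate()
# A correlation study would see gene carriers diagnosed far more often and
# report "a gene for ADHD", yet here the gene never causes the diagnosis.
assert rate_gene > rate_no_gene
```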

Andrew Adams

I understand the study is not perfect and it's a correlation. You didn't expect them to find the actual genes in a twin study, did you?

Elliot Temple

you said you had a study about the causes.

i knew there aren't any. that's why i challenged you.

Andrew Adams

And I didn't quote this study as the final truth, but as a little bit of evidence that genes play some role.

Elliot Temple

it isn't any evidence. it's the same as the ADHD study example.

there are many other problems with correlation studies, which you can learn about at the link.

Andrew Adams

To say that a correlation study is completely meaningless is absurd. It sheds some light on the topic for further studies.

Elliot Temple

calling something absurd isn't an argument.

look at the ADHD example. it sheds NO light on ADHD

Andrew Adams

I explained why it's absurd in the next sentence

Elliot Temple

claiming it sheds light is not an argument that it sheds light. that's an assertion.

Andrew Adams

I'm not here to defend that study again. You've gotten preoccupied with that and have ignored all other things I've said.

Elliot Temple

you are defending that type of study

Andrew Adams

You told me about universality and how it makes the brain different from other organs

Elliot Temple

but you don't have any arguments which address what i said or the link i gave.

Andrew Adams

I'm gonna study that

Elliot Temple

ok

Andrew Adams

So universality will explain to me why Down syndrome is affected by a gene but neurons' functions in the later life of a person are not affected by any gene. I'm not challenging it. Just making sure that's what you're saying.

Elliot Temple

it is a part of the explanation. there's a lot of things to understand.

Andrew Adams

What else?

Elliot Temple

i think it works better to start with IQ and why that's wrong.

Andrew Adams

Why IQ?

IQ is just a test

Elliot Temple

because the idea behind IQ is that some people are 10% smarter than other people.

Andrew Adams

What does it have to do with mental illness?

Elliot Temple

and this is due to genes or hardware.

and we can use universality to see that that is false.

it's a simpler argument than trying to talk about down's syndrome.

Andrew Adams

Some people could be wired to be faster at learning or doing mathematical computations but I don't believe in quantifying it the way IQ does.

Elliot Temple

from understanding universality, we can find out that all people are capable of learning the same things.

that includes people who are claimed to have lower IQs or down syndrome.

their genes gave them the same capabilities as everyone else in terms of what things they can learn, what knowledge they can create, what they can think of.

Andrew Adams

But can it be harder for some people?

Elliot Temple

no

it's harder for people with brain damage like alzheimer's. and it's harder for people after they have bad ideas.

Elliot Temple

but they aren't born with it being harder for them (except in RARE cases of being born brain damaged)

Andrew Adams

And what's the evidence for universality?

Elliot Temple

it's more a logical argument. but we have built universal computers.

Andrew Adams

Right

But we have faster computers, right?

Some have better specs

Elliot Temple

this makes almost no difference to the lives of most people

Andrew Adams

But you just said all people learn at the same rate

Elliot Temple

no, i said they are capable of learning the same things

and most people don't max out their CPU

they use maybe 10% of their brain's computing capacity

Elliot Temple

so it doesn't matter if it's slightly slower or faster.

Andrew Adams

Oh okay I get what you mean by universality

Elliot Temple

b/c there is more they aren't using

Andrew Adams

All things that compute are eventually capable of learning all the things there are

Elliot Temple

and no one cares if you write a great book in 37 months or 36 months. being slightly faster isn't what makes a genius.

Andrew Adams

So the fact that some people are faster is biological?

Elliot Temple

that is possible, but it doesn't really matter.

stuff like "autism" isn't thinking 3% slower than someone else.

Andrew Adams

Oh okay.

Elliot Temple

and bad ideas make people 1000x better or worse at thinking.

or good ideas

so it's the ideas that are important

Andrew Adams

Gotcha

Thank you

Elliot Temple

sure

:)

Andrew Adams

I'll read more on it

Elliot Temple

i don't know a lot about down syndrome. it's possible they think significantly slower and it matters. more likely, i think, is that a brain defect causes random errors. genes can't control you like telling you to be a Republican, but if genes build your brain wrong it can cause random data to be deleted or changed sometimes which makes it harder and slower to think (you have to spend more time double checking things, kinda like using checksums)

random error doesn't make someone have certain opinions or be aggressive.

i don't think stuff like "autism" and "schizophrenia" is related to physical brain problems, but down syndrome could be.

i don't think it's like the "mental illnesses" from what i know.

it's more objective and consistent, and has a medical test.

instead of just talking to someone and then lots of different psychiatrists would reach different conclusions about the same person.

Andrew Adams

I see

watch this please and let me know what you think

https://www.youtube.com/watch?v=RG5fN6KrDJE&index=7&list=PL848F2368C90DDC3D

Elliot Temple

90 minutes? hmm. will you sign up for my newsletter and discussion forum in return? :)

Andrew Adams

First 40 minutes would suffice actually

Sure!

Elliot Temple

awesome

fallibleideas.com/newsletter

https://groups.yahoo.com/neo/groups/fallible-ideas/info

i may not watch for a few days. i'll post some comments to my forum or blog.

i will also get this conversation posted somewhere

Andrew Adams

Just signed up for the newsletter.

Cool!

Do I have to have a yahoo account to join the group?

Elliot Temple

no, you can also send a blank email to fallible-ideas-subscribe@yahoogroups.com and then confirm

Andrew Adams

Awesome just joined the group too

I look forward to your thought on the video

Elliot Temple

ok :)

Andrew Adams

Hi, this just crossed my mind. What do you think of savants?

https://www.youtube.com/watch?v=H2HiLtgGdVg

Like these two guys for example

Elliot Temple

brains are computers. mostly "savants" do stuff that's actually pretty easy to do with a PC, like memorizing things or math. it's just a bit of a quirk that they organize their minds differently than most people and are able to use some hardware features that other people are bad at using.

most people don't want to do the things savants do. they aren't interested.

some people memorize hundreds of pokemon names and various facts about them all. but if you do digits of pi, people get way more impressed for some reason.

others memorize hundreds of bible quotes. remembering lots of stuff is actually pretty common.

Andrew Adams

I see. Thanks.


Elliot Temple | Permalink | Messages (0)

Discussion: Politicizing the Las Vegas Tragedy

From Facebook:

Evan O'Leary:

What is with people who don't like things to be "politicized"? Do you not want people you tribally dislike to say reasonable things because then you'll have to disagree with them because you were born with nothing but an amygdala for a brain?
EDIT: good point made in the comments, exploiting people's emotions to manipulate their political beliefs while they're in a less rational state is bad

Elliot Temple:
i take it you're insulting right wingers including classical liberals who believe in freedom regarding the issue of gun control. i'd suggest being more clear about what your point is in the future.

so, regarding gun control: instead of insulting people, i think it'd be better to try to investigate, in an objective, scholarly way, whether the factual claims in this book are correct or incorrect:
https://www.amazon.com/War-Guns-Yourself-Against-Control-ebook/dp/B01HH5HN8W/

Evan O'Leary:
 I'd suggest being less paranoid, you're wrong about what I'm arguing

Elliot Temple:
 then clarify

Evan O'Leary:
 There's nothing in my post that needs clarification, people on the left get mad at the NRA for "politicizing" shootings too when they say fewer people would have died if one of the hostages had been carrying a gun

Elliot Temple:
 do you have an example of that? for example, Hillary chose to politicize the shooting rather than accuse the NRA of politicizing. By contrast, I saw many right wingers complaining about Hillary politicizing it.

Evan O'Leary:
 Sure, let me find it. There was some hostage situation in recent years when people said open carry would have prevented it

Elliot Temple:
 Hold on, let's stick to the Vegas shooting and representative examples! I'm sure somewhere in history you'll find one example.

Evan O'Leary:
 Not just open carry but also when refugees commit shootings the right politicizes it with immigration

Elliot Temple:
 Are you in favor of gun control or against it?

Evan O'Leary:
 Can't find the hostage situation rn, do you disagree with the immigration point?

I'm not sure what to think about gun control

Elliot Temple:
 I agree that the right sometimes politicizes shootings, but in my understanding the dominant trend after the Vegas shooting – which is the context of your post – was the left politicizing it and the right criticizing the politicization. If I'm mistaken because I didn't see a broad enough sample of political messaging, I'd appreciate the correction. If you saw it similarly, then wasn't your post a reaction to some right winger comments?

Evan O'Leary:
 It was caused by me seeing right winger comments and seeing a problem with the "don't politicize" part of the argument, not the "gun control has downsides" part

Elliot Temple:
 views on gun control are relevant here. e.g. consider Hillary's pivot to bringing up silencers. was that relevant and reasonable, or just unreasonably trying to use the tragedy in an unrelated way? people who have knowledge about silencers and gun rights are going to have a different perspective on Hillary's comments than someone who is neutral. Part of their reaction – which you took issue with – was due to knowledge of the issues, not tribalism and amygdalae.

Elliot Temple:
 Hunters want suppressors to prevent damage to their ears and their dogs' ears, and to be better able to hear each other and prevent dangerous hunting miscommunications. That's what Hillary pivoted from the tragedy to.

https://www.frontpagemag.com/point/268035/how-hillary-clintons-tweet-showcases-cynicism-gun-daniel-greenfield

Elliot Temple:
 A reasonable response would be to call Hillary Clinton dishonest, because her comments were an attempt to shoehorn an unrelated agenda where it didn't fit and mislead the public. The discussion is ready to go straight into the mud. But do we want a bunch of mud slinging and character attacks and typical political dirty fighting to be the centerpiece of the national discussion of the Vegas tragedy? As much as I'm personally pretty willing to debate anything, I do see why people could object to this!


Elliot Temple:
 and the reason some people don't want a bunch of murder to be politicized is because of their respect for life and human dignity.

Evan O'Leary:
 What about politics inherently lacks respect for that

Elliot Temple:
 many political discussions aren't respectful of the gravity of mass murder, as i'm sure you've observed

Evan O'Leary:
 Is that because they're political?

Elliot Temple:
 Partly, yes. Some types of discussions are more known for human decency than others.

Evan O'Leary:
 The only political discussions which lack respect for life and dignity are the ones with bad political arguments

Any solution to this issue is going to be one of policy, so even if politics causes irrationality in humans, our other choice is having murder problems which don't seem less important than irrationality

Elliot Temple:
 "The only political discussions which lack respect for life and dignity are the ones with bad political arguments"

So, most of them? Do you see the problem?

Elliot Temple:
 No one is objecting to debating the issues at some point, and trying to make the discussions civil. But there are questions about the appropriate immediate comments from public figures. Should they prioritize attempting a dirty political sound bite, or perhaps is it better to begin by saying something about their respect for human life and how sad they are about the tragedy, and then try to debate gun control issues in the normal ways afterwards?

Evan O'Leary:
 The better explanation is irrationality, not politics

Evan O'Leary:
 "Don't politicize" is a problematic criterion, and we have a better criterion, "don't be irrational"

Elliot Temple:
 People debate what is irrational or not. Being more specific is good sometimes.

Elliot Temple:
 Of course it's a problematic criterion. They aren't having extensive serious discussions with both sides engaging with each other. It's not a very intellectual forum.

Justin Mallone:
some on the left have definitely taken the tone of "fuck talking about respect for human life. now is the time for drastic political action." one example is literally not attending a moment of silence as a political protest due to insufficient gun control: http://www.washingtontimes.com/news/2017/oct/5/jackie-speier-congressional-moment-silence-shootin/

Evan O'Leary:
 A better criterion would be "don't politicize too soon after tragedies", but even that creates problems that aren't clearly improvements, because people lose political motivation after tragedies

Elliot Temple:
 that's roughly what lots of them meant, though the issue isn't entirely a matter of time. part of the issue is what you say in the time before the political debate. and your actual attitudes, not just statements.

Elliot Temple:
 and btw they primarily meant for the anti-politicization comments to apply to public figures, and people participating in the hashtags/slogans/yelling kind of politics, not discussions on serious debating forums.

Justin Mallone:
I saw a formulation of the don't-politicize idea from a right-winger (FYI Elliot, it was Tracinski) that just said wait 72 hours after a tragedy. very modest standard but people couldn't even come close to that

Elliot Temple:
 some major voices on the left are really eager to proclaim that they know the solution to tragedies like this. some major voices on the right disagree, and think they have better solutions, but are more willing to try to set that disagreement aside briefly to have some unity in mourning.

Elliot Temple:
 can we pray together and try to think things over for a few days before we go back to squabbling over the same bitter disagreement we've been fighting about for decades?

^ I think that's a reasonable attitude.

Elliot Temple:
 can we, in the wake of the tragedy, use it as a reminder that we're on the same side, instead of using it as leverage to be divisive?

Elliot Temple:
 unfortunately i honestly don't think Hillary Clinton is on the same side as the rest of us. but i can sympathize with people who take the above kind of attitude, and i think most of the left are reasonably decent people.

Elliot Temple:
https://www.youtube.com/watch?v=slDjxJMWJn4


Elliot Temple | Permalink | Messages (0)

Yes or No Philosophy Discussion with Andrew Crawshaw

From Facebook:

Alan Forrester: https://curi.us/1963-can-winwin-solutions-take-too-long

Assigning weights to ideas never really fitted very well with critical rationalism. Evolution doesn't assign points to genes: they either survive and get copied or they don't. The same is true for an idea: it either solves a problem or it doesn't. This post is relevant to whether there is always a solution to a problem or if we have to weigh ideas to avoid throwing away conflicting ideas that might be okay.

BC: "The same is true for an idea: it either solves a problem or it doesn't." quote

Well, who determines whether a problem is solved or not, or even what the problem is? The problem of the basis, empirical or otherwise? The search for the algorithm to end all algorithms?

Elliot Temple: problems are solved, or not, in objective reality. people try to understand this with guesses and criticism, as always. there's no authorities. "who determines...?" is begging for an authoritarian answer just like "who should rule?"

BC: "A problem is perceived as such when the progress to a goal by an obvious route is impossible and when an automatism does not provide an effective answer." (W D Wall) What determines the goal?

Elliot Temple: people are free to determine their own goals, by thinking (guesses and criticism).

BC: So what point is being made?

Elliot Temple: you asked tangential questions. i answered. it was your responsibility for them to have a point.

Andrew Crawshaw: I think, Bruce, that the point is that CR should be about either-or claims about truth and falsity. What I don't understand is why this would be incompatible with measures of verisimilitude. I do not know if either Forrester or Temple is averse to verisimilitude per se. I think they are critical of the idea that we can build a theory of critical preference on top of this, which was Popper's hope.

Am I right in suggesting, Elliot, that you think we should only act under the circumstance that there is a single exit strategy, as it is called, and if there is not a single exit strategy, that there are ways of making the circumstance such that there is a single exit strategy, therefore getting rid of the need for critical preferences?

Elliot Temple: Ideas either solve a problem or they don't solve it. A criticism either explains why an idea doesn't solve a problem, or fails to. There's no room here for amounts of goodness of ideas, which is a core idea of justificationism. Yes I think critical preferences are a mistake. See:

https://yesornophilosophy.com/argument

http://curi.us/1585-critical-preferences

http://curi.us/1917-rejecting-gradations-of-certainty

Andrew Crawshaw: Yes, I have read that. Suppose, given that I have a cold, there are two ways of alleviating it, but they are incompatible solutions, i.e. they cannot be taken together. Say they are both to hand and both are explained as being effective by the scientific theories we have at our disposal. Would you say then that it is not right to take either?

Elliot Temple: What does "that" refer to? I gave 3 links.

Elliot Temple: > Would you say then that it is not right to take either?

no. i don't know where that's coming from.

Andrew Crawshaw: There is only one link showing. And it says Fallible ideas - Yes or No Philosophy.

Elliot Temple: all 3 links are showing, please look in the text of the post.

Andrew Crawshaw: Okay, I was just clearing up whether I might have misinterpreted you. So your theory applies only to what theories we should act on?

Elliot Temple: No. I don't know where you're getting that interpretation either. I think it would help if you quoted the text you're talking about

Andrew Crawshaw: I am responding to your reply to my comment. I asked about single exit strategies; the scenario I gave was not a single exit strategy, and I was wondering how you would answer it.

Elliot Temple: Come up with a theory about what to do that you don't have a criticism of. E.g. "I should take medicine A now b/c i don't have a better idea and it's way better than nothing and it's not worthwhile to spend more time deciding". You can form an idea like that and see if you have a criticism of it or not.

Andrew Crawshaw: But you could substitute Medicine B in your theory and the situation would still be symmetrical.

Elliot Temple: So what?

Elliot Temple: If your theory is that it's best to take one medicine, but not both or neither, and it doesn't matter which one then it's ok to choose arbitrarily or randomly. you don't have a criticism of doing so.

Andrew Crawshaw: Now, you might think my question peculiar. Say I have medicine A and medicine B, everything is exactly the same as it is in the previous scenario, except that medicine B is in the bathroom and medicine A is to hand. Could this be part of preferential decision in favour of A? Even though it's not a criticism of it as a solution?

Elliot Temple: Yes. "Why would I want to go walk to the bathroom for no reason?" is a criticism. Everything else being equal (which it usually isn't), in general I'd rather not go walk to get something.

Andrew Crawshaw: But there is a difference between the two types of criticism: one is of the solution, whether it would actually solve the problem if carried out, and the other has to do with whether there are other factors. The other factors being about preference.

Elliot Temple: The idea "medicine B as a solution to problem 1" and "medicine B as a solution to problem 2" are different ideas. A criticism may apply to only one of them. The criticism that i don't want to walk and get B doesn't matter for B as a solution to problem 1 (cure my illness), but does criticize choosing B for problem 2 (what action should i take in my life right now, with the situation that A and B medicines are equally good, and the only difference is one is further away and i'd rather not go get it).

This is explained at length in my Yes or No Philosophy.

Andrew Crawshaw: Isn't it slightly unhelpful to add your preference to the formulation of the problem? I mean, in other words, that you can just keep extending the formulation of the problem as you think about how to carry it out. It seems to me no different than weighing up preferences.

Elliot Temple: Preferences need to be dealt with by critical thinking, not weighing. Weighing doesn't work. Also explained in my Yes or No Philosophy.

Elliot Temple: Weighing is also criticized in BoI and in various blog posts. Did you read the 3 I linked you? You can find more relevant posts e.g. here which is linked at the bottom of a link i gave you: http://curi.us/1595-rationally-resolving-conflicts-of-ideas

Andrew Crawshaw: Maybe I did not communicate properly. The problem is that I want to administer medicine. I have a preference...I would rather not walk. Therefore I go for medicine A. What's changed by reformulating the problem to contain the preference?

Elliot Temple: The point isn't where you notionally put the preference – it's part of the situation in any case. The point is you have a criticism of one option (walking is too hard) and not the other.

Elliot Temple: So one always can and should act on a single, non-refuted idea.

Elliot Temple: You never have to act on a refuted idea, or try to choose between non-refuted ideas by a method other than conjectures and criticism. Such an alternative method would actually be a huge problem for epistemology and basically destroy CR.

Andrew Crawshaw: The administering of medicine B has not been refuted qua alleviating my cold.

Elliot Temple: Right, I said that too.

Andrew Crawshaw: I am not sure of the difference between critical preference and your theory. Seems to be the same theory redescribed. I will have to think about it a little.

Andrew Crawshaw: Thanks for the links, I will read them more carefully over the next week.

Andrew Crawshaw: Oh, Elliot, could you give me the chapter of BoI, where weighing is criticised.

Elliot Temple: 13. Choices

Andrew Crawshaw: Thanks


Elliot Temple | Permalink | Messages (0)

Discussion About the Importance of Explanations with Andrew Crawshaw

From Facebook:

Justin Mallone:

The following excerpt argues that explanations are what is absolutely key in Popperian philosophy, and that Popper over-emphasizes the role of testing in science, but that this mistake was corrected by physicist and philosopher David Deutsch (see especially the discussion of the grass cure example). What do people think?
(excerpted from: https://curi.us/1504-the-most-important-improvement-to-popperian-philosophy-of-science)

Most ideas are criticized and rejected for being bad explanations. This is true even in science where they could be tested. Even most proposed scientific ideas are rejected, without testing, for being bad explanations.
Although tests are valuable, Popper's over-emphasis on testing mischaracterizes science and sets it further apart from philosophy than need be. In both science and abstract philosophy, most criticism revolves around good and bad explanations. It's largely the same epistemology. The possibility of empirical testing in science is a nice bonus, not a necessary part of creating knowledge.

In [The Fabric of Reality], David Deutsch gives this example: Consider the theory that eating grass cures colds. He says we can reject this theory without testing it.
He's right, isn't he? Should we hire a bunch of sick college students to eat grass? That would be silly. There is no explanation of how grass cures colds, so nothing worth testing. (Non-explanation is a common type of bad explanation!)
Narrow focus on testing -- especially as a substitute for support/justification -- is one of the major ways of misunderstanding Popperian philosophy. Deutsch's improvement shows how its importance is overrated and, besides being true, is better in keeping with the fallibilist spirit of Popper's thought (we don't need something "harder" or "more sciency" or whatever than critical argument!).

Andrew Crawshaw: I see, but it might turn out that grass cures colds. This would just be an empirical fact, demanding scientific explanation.

TC: Right, and if a close reading of Popper yielded anything like "test every possible hypothesis regardless of what you think of it", this would represent an advancement over Popper's thought. But he didn't suggest that.

Andrew Crawshaw: We don't reject claims of the form indicated by Deutsch because they are bad explanations. There are plenty of dangling empirical claims that we still hold to be true but which are unexplained. Deutsch is mistaking the import of his example.

Elliot Temple:

There are plenty of dangling empirical claims that we still hold to be true but which are unexplained.

That's not the issue. Are there any empirical claims we have criticism of, but which we accept? (Pointing out that something is a bad explanation is a type of criticism.)

Andrew Crawshaw: If you think that my burden is to show that there are empirical claims that are refuted but that we accept, then you have not understood my criticism.

For example

Grass cures colds.

Is of the same form as

aluminium hydroxide contributes to the production of a large quantity of antibodies.

Both are empirical claims, but they are not explanatory. That does not make them bad

Neither of them are explanations. One is accepted and the other is not.

It's not good saying that the former is a bad explanation.

The latter has not yet been properly explained by sciences

Elliot Temple: The difference is we have explanations of how aluminum hydroxide works, e.g. from wikipedia " It reacts with excess acid in the stomach, reducing the acidity of the stomach content"

Andrew Crawshaw: Not in relation to its antibody mechanism.

Elliot Temple: Can you provide reference material for what you're talking about? I'm not familiar with it.

Andrew Crawshaw: I can, but it is still irrelevant to my criticism. Which is that they are both not explanatory claims, but one is held as true while the other not.

They are low-level empirical claims that call out for explanation; they don't themselves explain. Deutsch is misemphasizing.

https://www.chemistryworld.com/news/doubts-raised-over-vaccine-boost-theory/3001326.article

Elliot Temple: your link is broken, and it is relevant b/c i suspect there is an explanation.

Andrew Crawshaw: It's still irrelevant to my criticism. Which is that we often accept things like rules of thumb, even when they are unexplained. They don't need to be explained for them to be true or for us to class them as true. Miller talks about this extensively. For instance, strapless evening gowns were not understood scientifically for ages.

Elliot Temple: i'm saying we don't do that, and you're saying you have a counter-example but then you say the details of the counter-example are irrelevant. i don't get it.

Elliot Temple: you claim it's a counter example. i doubt it. how are we to settle this besides looking at the details?

Andrew Crawshaw: My criticism is that calling such a claim a bad explanation is irrelevant to those kinds of claims. They are just empirical claims that beg for explanation.

Elliot Temple: zero explanation is a bad explanation and is a crucial criticism. things we actually use have more explanation than that.

Andrew Crawshaw: So?

Elliot Temple: so DD and I are right: we always go by explanations. contrary to what you're saying.

Andrew Crawshaw: We used aluminium hydroxide for increasing antibodies, and strapless evening gowns, even before they were explained.

Elliot Temple: i'm saying i don't think so, and you're not only refusing to provide any reference material about the matter but you claimed such reference material (indicating the history of it and the reasoning involved) is irrelevant.

Andrew Crawshaw: I have offered it. I re-edited my post.

Elliot Temple: please don't edit and expect me to see it, it usually doesn't show up.

Andrew Crawshaw: You still have not criticised my claim. The one comparing the two sentences which are of the same form, yet one is accepted and one not.

Elliot Temple: the sentence "aluminium hydroxide contributes to the production of a large quantity of antibodies." is inadequate and should be rejected.

the similar sentence with a written or implied footnote to details about how we know it would be a good claim. but you haven't given that one. the link you gave isn't the right material: it doesn't say what aluminium hydroxide does, how we know it, how it was discovered, etc

Elliot Temple: i think your problem is mixing up incomplete, imperfect explanations (still have more to learn) with non-explanation.

Andrew Crawshaw: No, it does not. But to offer that would be to explain. Which is exactly what I am telling you is irrelevant.

What is relevant is whether the claim itself is a bad explanation. It's just an empirical claim.

The point is just that we often have empirical claims that are not explained scientifically yet we accept them as true and use them.

Elliot Temple: We don't. If you looked at the history of it you'd find there were lots of explanations involved.

Elliot Temple: I guess you just don't know the history either, which is why you don't know the explanations involved. People don't study or try things randomly.

Elliot Temple: If you could pick a better known example which we're both familiar with, i could walk you through it.

Andrew Crawshaw: There was never an explanation of how bridges worked. But there were rules of thumb for how to build them. There are explanations of how to use aluminium hydroxide, but its actual mechanism is unknown.

Elliot Temple: what are you talking about with bridges. you can walk on strong, solid objects. what do you not understand?

Andrew Crawshaw: That's not how they work. I am talking about the scientific explanation of forces and tensions. It was not always understood despite the fact that bridges were built. This is the same with beavers' dams: beavers don't know any of the explanations of how to build dams.

Elliot Temple: you don't have to know everything that could be known to have an explanation. understanding that you can walk on solid objects, and they can be supported, etc, is an explanation, whether you know all the math or not. that's what the grass cure for the cold lacks.

Elliot Temple: the test isn't omniscience, it's having a non-refuted explanation.

Andrew Crawshaw: Hmm, but are you saying then that even bad-explanations can be accepted. Cuz as far as I can tell many of the explanations for bridge building were bad, yet they stil built bridges.

Anyway you are still not locating my criticism. You are criticising something I never said, it seems. Which is that "grass cures colds" has not been explained. But what Deutsch was claiming was that the claim itself was a bad explanation, which is true if bad explanation includes non-explanation, but that is not the reason it is not accepted. As the hydroxide thing suggests.

Elliot Temple: We should only accept an explanation that we don't know any criticism of.

We need some explanation or we'd have no idea if what we're doing would work, we'd be lost and acting randomly without rhyme or reason. And that initial explanation is what we build on – we later improve it to make it more complete, explain more stuff.

Andrew Crawshaw: I think this is incorrect. All the animals that can do things refute your statement.

Elliot Temple: The important thing is the substance of the knowledge, not whether it's written out in the form of an English explanation.

Andrew Crawshaw: Just because there is an explanation of how some physical substrate interacts with another physical substrate, does not mean that you need explanations. Explanations are in language. Knowledge not necessarily. Knowledge is a wider phenomenon than explanation. I have many times done things by accident that have worked, but I have not known why.

Elliot Temple: This is semantics. Call it "knowledge" then. You need non-refuted knowledge of how something could work before it's worth trying. The grass cure for the cold idea doesn't meet this bar. But building a log bridge without knowing modern science is fine.

Andrew Crawshaw: Before it's worth trying? I don't think so, rules of thumb are discovered by accident and then re-used without knowing how or why they could work, it just works and then they try it again and it works again. Are you denying that that is a possibility?

Elliot Temple: Yes, denying that.

Andrew Crawshaw: Well, you are offering foresight to evolution then, it seems.

Elliot Temple: That's vague. Say what you mean.

Andrew Crawshaw: I don't think it is that vague. If animals can build complex things like beaver dams and they should have had knowledge of how it could work before it was worth trying out, then they have a lot of foresight before they tried them out. Or could it be the fact that it is the other way round: we stumble on rules of thumb, develop them, then come up with explanations about how they possibly work. I am more inclined to the latter. The former is just another version of the argument from design.

Elliot Temple: humans can think and they should think before acting. it's super inefficient to act mindlessly. genetic evolution can't think and instead does things very, very, very slowly.

Andrew Crawshaw: But thinking before acting is right. Thinking is critical. It needs material to work on. Which is guesswork and sometimes, if not often, accidental actions.

Elliot Temple: when would it be a good idea to act thoughtlessly (and which thoughtless action) instead of acting according to some knowledge of what might work?

Elliot Temple: e.g. when should you test the grass cure for cancer, with no thought to whether it makes any sense, instead of thinking about what you're doing and acting according to your rational thought? (which means e.g. considering what you have some understanding could work, and what you have criticisms of)

Andrew Crawshaw: Wait, we often act thoughtlessly whether or not we should. I don't even think it is a good idea. But we often try to do things and end up somewhere which is different to what we expected, it might be worse or better. For instance, we might try to eat grass because we are hungry and then happen to notice that our cold disappeared and stumble on a cure for the cold.

Andrew Crawshaw: And different to what we expected might work even though we have no idea why.

Elliot Temple: DD is saying what we should do, he's talking about reason. Sometimes people act foolishly and irrationally but that doesn't change what the proper methods of creating knowledge are.

Sometimes unexpected things happen and you can learn from them. Yes. So what?

Andrew Crawshaw: But if Deutsch expects that we can only work with explanations, then he is mistaken. Which is, it seems, what you have changed your mind about.

Elliot Temple: I didn't change my mind. What?

What non-explanations are you talking about people working with? When an expectation you have is violated, and you investigate, the explanation is you're trying to find out if you were mistaken and figure out the thing you don't understand.

Elliot Temple: what do you mean "work with"? we can work with (e.g. form explanations about) spreadsheet data. we can also work with hammers. resources don't have to be explanations themselves, we just need an explanation of how to get value out of the resource.

Andrew Crawshaw: There is only one method of creating knowledge. Guesswork. Or, if genetically, by mutation. Physical things are often made without knowing how, and then they are applied in various contexts and they might and might not work; that does not mean we know how they work.

Elliot Temple: if you didn't have an explanation of what actions to take with a hammer to achieve what goal, then you couldn't proceed and be effective with the hammer. you could hit things randomly and pray it works out, but it's not a good idea to live that way.

Elliot Temple: (rational) humans don't proceed purely by guesses, they also criticize the guesses first and don't act on the refuted guesses.

Andrew Crawshaw: Look there are three scenarios

  1. Act on knowledge
  2. Stumble upon solution by accident, without knowing why it works.
  3. Act randomly

Elliot Temple: u always have some idea of why it works or you wouldn't think it was a solution.

Andrew Crawshaw: No, all you need is to recognise that it worked. This is easily done by seeing that what you wanted to happen happened. It is non-sequitur to then assume that you know something of how it works.

Elliot Temple: you do X. Y results. Y is a highly desirable solution to some recurring problem. do you now know that X causes Y? no. you need some causal understanding, not just a correlation. if you thought it was impossible that X causes Y, you would look for something else. if you saw some way it's possible X causes Y, you have an initial explanation of how it could work, which you can and should expose to criticism.

Elliot Temple:

Know all you need is to recognise that it works.

plz fix this sentence, it's confusing.

Andrew Crawshaw: You might guess that it caused it. You don't need to understand it to guess that it did.

Elliot Temple: correlation isn't causation. you need something more.

Elliot Temple: like thinking of a way it could possibly cause it.

Elliot Temple: that is, an explanation of how it works.

Andrew Crawshaw: I am not saying correlation is causation, you don't need to have explained guesswork before you have guessed it. You first need to guess that something caused something before you go out and explain it. Otherwise what are you explaining?

Elliot Temple: you can guess X caused Y and then try to explain it. you shouldn't act on the idea that X caused Y if you have no explanation of how X could cause Y. if you have no explanation, then that's a criticism of the guess.

Elliot Temple: you have some pre-existing understanding of reality (including the laws of physics) which you need to fit this into, don't just treat the world as arbitrary – it's not and that isn't how one learns.

Andrew Crawshaw: That's not a criticism of the guess. It's ad hominem and justificationist.

Elliot Temple: "that" = ?

Andrew Crawshaw: I am agreeing totally with you about many things

  1. We should increase our criticism as much as possible.
  2. We do have inbuilt expectations about how the world works.

What we are not agreeing about is the following

  1. That a guess has to be backed up by explanation for it to be true or classified as true. All we need is to criticise the guess. Arguing otherwise seems to me a type of justificationism.

  2. That in order to get novel explanations and creations, this often is done despite the knowledge and necessarily has to be that way otherwise it would not be new.

Elliot Temple:

That's not a criticism of the guess. It's ad hominem and justificationist.

please state what "that" refers to and how it's ad hominem, or state that you retract this claim.

Andrew Crawshaw: That someone does not have an explanation. First, because explanations are not easy to come by and someone not having an explanation for something does not in any way impugn the pedigree of the guess or the strategy etc. Second, explanation is important and needed, but not necessary for trying out the new strategy, X, that you guess causes Y. You might develop explanations while using it. You don't need the explanation before using it.

Elliot Temple: Explanations are extremely easy to come by. I think you may be adding some extra criteria for what counts as an explanation.

Re your (1): if you have no explanation, then you can criticize it: why didn't they give it any thought and come up with an explanation? they should do that before acting, not act thoughtlessly. it's a bad idea to act thoughtlessly, so that's a criticism.

it's trivial to come up with even an explanation of how grass cures cancer: cancer is internal, and various substances have different effects on the body, so if you eat it it may interact with and destroy the cancer.

the problem with this explanation is we have criticism of it.

you need the explanation so you can try criticizing it. without the explanation, you can't criticize (except to criticize the lack of explanation).

re (2): this seems to contain typos, too confusing to answer.

Elliot Temple: whenever you do X and Y happens, you also did A, B, C, D. how do you know it was X instead of A, B, C or D which caused Y? you need to think about explanations before you can choose which of the infinite correlations to pay attention to.

Elliot Temple: for example, you may have some understanding that Y would be caused by something that isn't separated in space or time from it by very much. that's a conceptual, explanatory understanding about Y which is very important to deciding what may have caused Y.

Andrew Crawshaw: Again, it's not a criticism of the guess. It's a criticism of how the person acted.

The rest of your statements are compatible with what I am saying. Which is just that it can be done and explanations are not necessary either for using something or creating something. As the case of animals surely shows.

You don't know, you took a guess. You can't know before you guess that your guess was wrong.

Elliot Temple: "I guess X causes Y so I'll do X" is the thing being criticized. If the theory is just "Maybe X causes Y, and this is a thing to think about more" then no action is implied (besides thinking and research) and it's harder to criticize. those are different theories.

even the "Maybe X causes Y" thing is suspect. why do you think so? You did 50 million actions in your life and then Y happened. Why do you think X was the cause? You have some explanations informing this judgement!

Andrew Crawshaw: There is no difference between maybe Y and Y. It's always maybe Y. Unless refuted.

Andrew Crawshaw: You are subjectivist and justificationist as far as I can tell. A guess is objective, and if someone, despite the fact that they have bad judgement, guesses correctly, they still guess correctly. Nothing mitigates the precariousness of this situation. Criticism is the other component.

Elliot Temple: If the guess is just "X causes Y", period, you can put that on the table of ideas to consider. However, it will be criticized as worthless: maybe A, B, or C causes Y. Maybe Y is self-caused. There's no reason to care about this guess. It doesn't even include any mention of Y ever happening.

Andrew Crawshaw: The guess won't be criticised; what will be noticed is that it shouts out for explanation, and someone might offer it.

Elliot Temple: If the guess is "Maybe X causes Y because I once saw Y happen 20 seconds after X" then that's a better guess, but it will still get criticized: all sorts of things were going on at all sorts of different times before Y. so why think X caused Y?

Elliot Temple: yes: making a new guess which adds an explanation would address the criticism. people are welcome to try.

Elliot Temple: they should not, however, go test X with no explanation.

Andrew Crawshaw: That's good, but one of the best ways to criticise it, is to try it again and see if it works.

Elliot Temple: you need an explanation to understand what would even be a relevant test.

Elliot Temple: how do you try it again? how do you know what's included in X and what isn't included? you need an explanation to differentiate relevant stuff from irrelevant

Elliot Temple: as the standard CR anti-inductivist argument goes: there are infinite patterns and correlations. how do you pick which ones to pay attention to?

Elliot Temple: you shouldn't pick one thing, arbitrarily, from an INFINITE set and then test it. that's a bad idea. that's not how scientific progress is made.

Elliot Temple: what you need to do is have some conceptual understanding of what's going on. some explanations of what types of things might be relevant to causing Y and what isn't relevant, and then you can start doing experiments guided by your explanatory knowledge of physics, reality, some possible causes, etc

Elliot Temple: i am not a subjectivist or justificationist, and i don't see what's productive about the accusation. i'm willing to ignore it, but in that case it won't be contributing positively to the discussion.

Andrew Crawshaw: I am not saying that we have no knowledge. I am saying that we don't have an explanation of the mechanism.

Elliot Temple: can you give an example? i think you do have an explanation and you just aren't recognizing what you have.

Andrew Crawshaw: For instance, washing hands and its link to mortality rates.

Elliot Temple: There was an explanation there: something like taint could potentially travel with hands.

Elliot Temple: This built on previous explanations people had about e.g. illnesses spreading to nearby people.

Andrew Crawshaw: Right, but the use of soap was not derived from the explanation. And that explanation might have been around before, and no such soap was used because of it.

Elliot Temple: What are you claiming happened, exactly?

Andrew Crawshaw: I am claiming that soap was invented for various reasons and then it turned out that the soap could be used for reducing mortality.

Elliot Temple: That's called "reach" in BoI. Where is the contradiction to anything I said?

Andrew Crawshaw: Reach of explanations. It was not the explanation, it was the invention of soap itself. Which was not anticipated or even encouraged by explanations. Soap is invented, used in a context, an explanation might be applied to it. Then it is used in another context and again the explanation is retroactively applied to it. The explanation does not necessarily suggest more uses, nor need it.

Elliot Temple: You're being vague about the history. There were explanations involved, which you would see if you analyzed the details well.

Andrew Crawshaw: So, what if there were explanations "involved"? The explanations don't add anything to the discovery of the uses of the soap. These are usually stumbled on by accident. And refinements to soaps as well for those different contexts.

Andrew Crawshaw: I am just saying that explanations of how the soap works very rarely suggest new avenues. It's often a matter of trial and error.

Elliot Temple: You aren't addressing the infinite correlations/patterns point, which is a very important CR argument. Similarly, one can't observe without some knowledge first – all observation is theory laden. So one doesn't just observe that X is correlated to Y without first having a conceptual understanding for that to fit into.

Historically, you don't have any detailed counter example to what I'm saying, you're just speculating non-specifically in line with your philosophical views.

Andrew Crawshaw: It's an argument against induction. Not against guesswork informed by earlier guesswork, that often turns out to be mistaken. All explanations do is rule things out. Unless they are rules for use, but these are developed while we try out those things.

Elliot Temple: It's an argument against what you were saying about observing X correlated with Y. There are infinite correlations. You can either observe randomly (not useful, has roughly 1/infinity chance of finding solutions, aka zero) or you can observe according to explanations.

Elliot Temple: You're saying to recognize a correlation and then do trial and error. But which one? Your position has elements of standard inductivist thinking in it.

Andrew Crawshaw: I never said anything about correlation - you did.

What I said was we could guess that X caused Y and be correct. That's what I said, nothing more, nothing less.

Andrew Crawshaw: One instance does not a correlation make.

Elliot Temple: You could also guess Z caused Y. Why are you guessing X caused Y? Filling up the potential-ideas with an INFINITE set of guesses isn't going to work. You're paying selective attention to some guesses over others.

Elliot Temple: This selective attention is either due to explanations (great!) or else it's the standard way inductivists think. Or else it's ... what else could it be?

Andrew Crawshaw: Why not? Criticise it. If you have a scientific theory that rules my guess out, that would be interesting. But you're just saying why not this guess and why not that one. Some guesses are not considered by you maybe because they are ruled out by other expectations, or they do not occur to you.

Elliot Temple: The approach of taking arbitrary guesses out of an infinite set and trying to test them is infinitely slow and unproductive. That's why not. And we have much better things we can do instead.

Elliot Temple: No one does this. What they do is pick certain guesses according to unconscious or unstated explanations, which are often biased and crappy b/c they aren't being critically considered. We can do better – we can talk about the explanations we're using instead of hiding them.

Andrew Crawshaw: So, you are basically gonna ignore the fact that I have agreed that expecations and earlier knowledge do create selective attention, but what to isolate is neither determined by theory, nor by earlier perceptions, it is large amount guesswork controlled by criticism. Humans can do this rapidly and well.

Elliot Temple: Please rewrite that clearly and grammatically.

Andrew Crawshaw: It's like you are claiming there is no novelty in guesswork; if we already have that as part of our expectations it was not guesswork.

Elliot Temple: I am not claiming "there is no novelty in guesswork".

Andrew Crawshaw: So we are in agreement, then. Which is just that there are novel situations and our guesses are also novel. How we eliminate them is through other guesses. Therefore the guesses are sui generis and then deselected according to earlier expectations. It does not follow that the guess was positively informed by anything. It was a guess about what caused what.

Elliot Temple: Only guesses involving explanations are interesting and productive. You need to have some idea of how/why X causes Y or it isn't worth attention. It's fine if this explanation is due to your earlier knowledge, or it can be a new idea that is part of the guess.

Andrew Crawshaw: I don't think that's true. Again beavers make interesting and productive dams.

Elliot Temple: Beavers don't choose from infinite options. Can we stick to humans?

Andrew Crawshaw: Humans don't choose from infinite options.... They choose from the guesses that occur to them, which are not infinite. Their perception is controlled by both physiological factors and their expectations. Novel situations require guesswork, because guesswork is flexible.

Elliot Temple: Humans constantly deal with infinite categories. E.g. "Something caused Y". OK, what? It could be an abstraction such as any integer. It could be any action in my whole life, or anyone else's life, or something nature did. There's infinite possibilities to deal with when you try to think about causes. You have to have explanations to narrow things down, you can't do it without explanations.

Elliot Temple: Arbitrary assertions like "The abstract integer 3 caused Y" are not productive with no explanation of how that could be possible attached to the guess. There are infinitely more where that came from. You won't get anywhere if you don't criticize "The abstract integer 3 caused Y" for its arbitrariness, lack of explanation of how it could possibly work, etc

Elliot Temple: You narrow things down. You guess that a physical event less than an hour before Y and less than a quarter mile distant caused Y. You explain those guesses, you don't just make them arbitrarily (there are infinite guesses you could make like that, and also that category of guess isn't always appropriate). You expose those explanations to criticism as the way to find out if they are any good.

Andrew Crawshaw: You are arguing for an impossible demand that you yourself can't meet, even when you have explanations. It does not narrow it down from infinity. What narrows it down is our capacity to form guesses, which is temporal and limited. It's our brain's ability to process and to interpret that information.

Elliot Temple: No, we can deal with infinite sets. We don't narrow things down with our inability, we use explanations. I can and do do this. So do you. Explanations can have reach and exclude whole categories of stuff at once.

Andrew Crawshaw: But it does not reduce it to less than infinite. Explanations allow an infinite amount of things, most of them useless. It's what they rule out, and what they can rule out is guesswork. And this is done over time. So we might guess this, and then guess that X caused Y; we try it again and it might not work, so we try to vary the situation and in that way develop criticism and more guesses.

Elliot Temple: Let's step back. I think you're lost, but you could potentially learn to understand these things. You think I'm mistaken. Do you want to sort this out? How much energy do you want to devote to this? If you learn that I was right, what will you do next? Will you join my forum and start contributing? Will you study philosophy more? What values do you offer, and what values do you seek?

Andrew Crawshaw: Mostly explanations take time to understand why they conflict with some guess. It might be that the guess only approximates the truth and then we find later that it is wrong because we look more into the explanation of it.

Andrew Crawshaw: Elliot, if you wish to meta, I will step out of the conversation. It was interesting, yet you still refuse to concede my point that inventions can be created without explanations. But yet this is refuted by the creations of animals and many creations of humans. You won't concede this point, and that makes your claims pretty well trivial. Like you need some kind of thing to direct what you are doing. When the whole point is the genesis of new ideas and inventions and theories which cannot be suggested by earlier explanations. It is true that explanations can help, in refining and understanding. But that is not the whole story of human cognition or human invention.

Elliot Temple: So you have zero interest in, e.g., attempting to improve our method of discussion, and you'd prefer to either keep going in circles or give up entirely?

Elliot Temple: I think we could resolve the disagreement and come to agree, if we make an effort to, AND we don't put arbitrary boundaries on what kinds of solutions and actions are allowed to be part of the problem solving process. I think if you make methodology off-limits, you are sabotaging the discussion and preventing its rational resolution.

Elliot Temple: Not everything is working great. We could fix it. Or you could just unilaterally blame me and quit..?

Andrew Crawshaw: Sorry, I am not blaming you for anything.

Elliot Temple: OK, you just don't really care?

Andrew Crawshaw: Wait. I want to say two things.

  1. It's 5 in the morning, and I was working all day, so I am exhausted.

  2. This discussion is interesting, but fragmented. I need to moderate my posts on here, now. And recuperate.

Elliot Temple: I haven't asked for fast replies. You can reply on your schedule.

Elliot Temple: These issues will still be here, and important, tomorrow and the next day. My questions are open. I have no objection to you sleeping, and whatever else, prior to answering.

Andrew Crawshaw: Oh, I know you haven't asked for replies. I just get very involved in discussion. When I do I stop monitoring my tiredness levels and etc.

I know this discussion is important. The issues and problems.

Elliot Temple: If you want to drop it, you can do that too, but I'd want to know why, and I might not want to have future discussions with you if I expect you'll just argue a while and then drop it.

Andrew Crawshaw: Like to know why? I have been up since very early yesterday, like 6. I don't want to drop the discussion I want to postpone it, if you will.

Elliot Temple: That's not a reason to drop the conversation, it's a reason to write your next reply at a later time.

Andrew Crawshaw: I explicitly said: I don't want to drop the discussion.

Your next claim is a non-sequitur. A conversation can be resumed in many ways. I take it you think it would be better for me to initiate it.

Andrew Crawshaw: I will read back through the comments and see where this has led and then I will post something on the Fallible Ideas forum.

Elliot Temple: You wrote:

Elliot, if you wish to meta, I will step out of the conversation.

I read "step out" as quit.

Anyway, please reply to my message beginning "Let's step back." whenever you're ready. Switching forums would be great, sure :)


Elliot Temple | Permalink | Messages (17)

Freeze Discussion

This is a discussion topic for Freeze. Other people are welcome to make comments. Freeze has agreed not to post under other names in this topic.


Elliot Temple | Permalink | Messages (32)

Overreaching Discussion Plus Analysis

This discussion is a typical example of dealing with overreaching people who don't know basic stuff about how to read literally, the meanings of words, sentence grammar, simple logic, etc., and don't want to. They want to talk about complicated stuff – and never reach agreement and understanding – instead of searching for some common ground to build on. And they have very little common ground to build on because they haven't mastered some standard, generic language and logic skills. (It's common to be bad at that stuff but how are you supposed to have a conversation when you read A as B, write false dichotomies and non sequiturs, write ambiguous clarifications of your ambiguous statements, etc?)

The beginning is a discussion about discussion methodology, e.g. the existence of culture clash, inferential distance gaps, and differing background knowledge. That was rejected. There are various detailed parts about particular basic errors in the ballpark of logic and language, stuff like reading something non-literally (that is, reading A as saying B). I suggest looking for errors. Being able to analyze this kind of thing is an important skill that will help you learn to have a productive conversation.

The later part of the log has analysis and post-mortem discussion about the prior discussion. I wrote a bunch of interesting stuff (IMO) near the end.

You can join the Discord chat here.


Freeze:

seems like background knowledge is important
and we should be open to hearing about background knowledge in a discussion, not just what we think is relevant, but what they think is relevant

Freeze:
I think people assume that if their discussion partner brings up some seemingly irrelevant background knowledge, they're doing it to be evasive

curi:
http://fallibleideas.com/communication-is-hard https://wiki.lesswrong.com/wiki/Inferential_distance (3 linked blog posts + 2 external links) actually one of the links is dead here is the archive for it https://web.archive.org/web/20120523083248/http://www.greatplay.net/essays/the-sad-truth-of-inferential-distance

curi:
also http://fallibleideas.com/social-communication http://fallibleideas.com/originality

curi:
i particularly like

here's a metaphor to help understand the issue: everyone's mind has its own programming language.

curi:
explanation is in originality article but is very relevant to communication

TheRat:
@Shadow Starshine Seems like you've been debating/discussing the vegan topic a while. Do you know of Vegans that you find have thought about their positions and are not averse to reading like AY?

Shadow Starshine:
Perspective Philosophy would be my highest recommendation

Shadow Starshine:
If you're looking for people who aren't necessarily well read, but open minded, I have others I can recommend as well

TheRat:
Oh I heard of him through AY just calling hime trash, so he must be good lol.

TheRat:
him*

Shadow Starshine:
haha that's always a good litmus test

TheRat:
I want to practice debating more

TheRat:
voice or text either one.

Freeze:
a litmus test indicates if something is acidic or basic, but doesn't tell you the exact pH value right? So are you saying that if AY calls someone trash, then you know roughly that they're worth looking into further to find exact value of discussing with them?

Shadow Starshine:
yes that's the joke

TheRat:
Freeze it was probably more tongue in cheek.

Freeze:
well there's a joke part to it, which is that if AY thinks someone is bad, they might be good

Freeze:
but i'm asking about the informative part of it

Freeze:
if there was any intended

Shadow Starshine:
Nah, I wouldn't take that literally

Freeze:
ah ok

Shadow Starshine:
Some people AY doesn't like aren't worth talking to either

Freeze:
right

Freeze:
so it's mostly a misleading joke

TheRat:
It was a joke because I was joking too that PP must be good because AY called him trash. Not meant to be a misleading joke

TheRat:
just a joke

Freeze:
yeah, only misleading if taken with any level of seriousness/informative value

Shadow Starshine:
Anywho, debating can be fun, but it's good to set up good expectations

Freeze:
if taken as a joke i agree, it's not misleading

Shadow Starshine:
It's easy to get caught up trying to "win" debates

Freeze:
so your expectations aren't about conclusions necessarily, but just exchange of ideas/learning/progress, so you're rarely disappointed?

Shadow Starshine:
I'd say that's correct

Freeze:
cool

TheRat:
yeah well that's the issue I had with AY and Avi. I don't think they made any effort in understanding what I was saying. Just trying to catch me in a mistake.

TheRat:
though Avi did a little

Freeze:
I want to learn more about curi's idea of truth-seeking

Freeze:
it seems important

Shadow Starshine:
Right, they only care about "winning", which is another way of saying trying to make someone look stupid

Freeze:
certain things i do in discussions are not truth-seeking, and i can find out what they are

Freeze:
right

Freeze:
discrediting the other person without refuting their ideas is bad for you

JustinCEO:
serious truth seeking from animal rights advocates would involve written discussion and something like a list (or tree) of args they've received from ppl who disagree and the refutation of the arg (or at least their attempt at one 🙂 )

Shadow Starshine:
I also don't worry about changing my mind on the spot if I'm wrong. I've noticed that often times, a week or two later, I'll think more about an intuition I had and change my mind then

JustinCEO:
the lack of that kinda thing and heavy emphasis on voice is very compatible with wanting to pwn people instead of truth seek

Shadow Starshine:
I think it's good to just acknowledge that human tendency

JustinCEO:
the thing about real time voice stuff is u can't think carefully about it over time and then formulate your reply

Freeze:
i think we can shorten that timespan by learning more about reason

JustinCEO:
this can lead to stuff seeming more plausible when it has errors

Freeze:
if i change my mind a week later, i can learn to do it a few days later, and eventually a few minutes later

JustinCEO:
cuz you literally don't have enough time to identify and point out the error in mid-conversation

Freeze:
what im curious about is why that human tendency exists, how it works, and how we can shape it

Freeze:
is it based on wanting to be right? is it something else?

Shadow Starshine:
Not really sure that works Freeze, in fact, I think there's certain things that if you changed your mind too soon, I wouldn't think you took it seriously

Freeze:
right, the idea is not to change your mind until you are fully, rationally persuaded

Freeze:
and sometimes that requires an unconscious process of thought

TheRat:
well I am not sure we can say for sure that voice > text for truth seeking. Could find good discussions and make progress via voice too. There is also a lot of inexplicit information that is valuable from voice. I don't think one should prefer one over the other too heavily. I'd lean on text for clarity and long term discussion.

Freeze:
but most times i think that can be progressed by explicit discussion

TheRat:
but not by a lot

JustinCEO:

1.: reasoning that is superficially plausible but actually fallacious
-a definition of sophistry.

voice chat is more amenable to sophistry because it makes stuff seem more superficially plausible (to the participants or the viewers) due to the time constraints

Shadow Starshine:
Consider that, if ideas are brain states, then there would be a speed at which brain states can relate to other ideas they affect, and physically change

curi:
freeze it's cultural not a "human tendency"

Shadow Starshine:
I made the claim it's a human tendency

curi:
ok u2

Freeze:
yeah like im wondering in places like dalio's company or FI culture where people have learnt to be wrong all the time and have rational discussion without feeling bad about it, would they still take a week to change their mind on something or would that gap shorten? I think people can learn to make unconscious stuff conscious and ask questions about it. We can be more honest over time

Shadow Starshine:
Mmmm I'm unsure. On one hand, I agree that not letting your ego hinder you would speed up a process dramatically. But even if that wasn't a hindrance, certain things take time to ponder

curi:
speed limits on brain stuff would be short, don't think they're relevant. you could make the same argument re e.g. how fast a computer can update a spreadsheet.
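
The spreadsheet comparison can be made concrete with a quick timing sketch (a rough illustration only; the toy dependency chain and cell count are made up, and nothing here claims brains work this way):

```python
import time

# A toy "spreadsheet": 100,000 cells where each cell depends on the one
# before it, so a change must propagate through every dependent cell.
cells = list(range(100_000))

start = time.perf_counter()
total = 0
for value in cells:
    total = total + value  # propagate the update through each dependent cell
elapsed = time.perf_counter() - start

print(f"recomputed {len(cells)} cells in {elapsed:.4f} seconds")
```

On ordinary hardware this finishes in a small fraction of a second, which is the point of the comparison: raw physical computation speed limits are nowhere near the days-or-weeks timescale on which people deliberate.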

Shadow Starshine:
I see no reason to accept that

Freeze:
the unconscious deliberation part you're referring to may not always need to be long or necessarily unconscious

curi:
you broadly don't seem to accept my mental model of what a person is, that a mind is software running on a (fast) computer, etc. do you have an alternative model specified somewhere?

Shadow Starshine:
I'm not even sure what you mean by mind. But I'm sure we could have a conversation about what a person is. But before that occurs, are you suggesting that you should be considered right unless you're proven wrong?

Freeze:
I think a model should be considered right if it doesn't have any unrefuted criticisms

JustinCEO:
i think in that particular question, he was just asking if u have a model specified somewhere

Freeze:
or competing models with also unrefuted criticisms

curi:

are you suggesting that you should be considered right unless you're proven wrong?

no

Shadow Starshine:
I'm just asking

Shadow Starshine:
It could have been meant to challenge me in that way

TheRat:

I think a model should be considered right if it doesn't have any unrefuted criticisms

TheRat:
why?

Shadow Starshine:
Well, there's some parts of what your model has that I don't see a reason to accept, whether I could refute them or not.

Shadow Starshine:
Like, if you say that it's analogous to updating a spreadsheet

Shadow Starshine:
why would I think that's true

Shadow Starshine:
out of all possibilities

Shadow Starshine:
why should I hold to that one?

Shadow Starshine:
There's possibly hundreds of assertions I can't refute

curi:
i was saying your argument was inadequately differentiated from the spreadsheet one

curi:
there are physical limits on computation speed but in general, as in that example and many many others, they are short

Shadow Starshine:
why do you say they are short

curi:
e.g. b/c electrons move fast

Shadow Starshine:
Okay, electrons move fast, now how does that give me a full model of what's going on

JustinCEO:
that's one detail, i didn't take it as specifying the entire model

curi:
i just said that you didn't give a model that shows why it'd be slow and in lots of cases computation is fast

Shadow Starshine:
You just agreed I didn't have to give a model to doubt your claim

Shadow Starshine:
so why bring that up

curi:
i was doubting your claim re slow

Shadow Starshine:
When I talk about it, I mean phenomenologically, how it seems to us, is that people take time to deliberate

Shadow Starshine:
They stew over issues

Shadow Starshine:
before changing their mind

curi:
but you said

curi:

Consider that, if ideas are brain states, then there would be a speed at which brain states can relate to other ideas they affect, and physically change

curi:
which is about physical limits on changing of physical brain states

curi:
i disagreed with the specific thing you said there

curi:
you are welcome to make other comments about how people often slowly deliberate for weeks. they do.

curi:
but if you think that's because of physical speed limits involved, i disagree

Shadow Starshine:
Right I suppose that's true. You're right. What I think though, is that if neurologically, a brain has to change how the connections work, and different connections are different ideas, then for those physical changes to occur, it would take time

Shadow Starshine:
But I suppose what you're thinking is that brain states don't change, they just compute?

TheRat:

When I talk about it, I mean phenomenologically, how it seems to us, is that people take time to deliberate
They stew over issues
before changing their mind
That seems right.

Consider that, if ideas are brain states, then there would be a speed at which brain states can relate to other ideas they affect, and physically change

That doesn't seem right.

curi:
i don't think you know what "compute" is based on your sentence

Shadow Starshine:
What I'm taking it to mean is that the physical structure doesn't change of the neurons, it's just the electron and neurotransmitters passing data along

Shadow Starshine:
Do you have a different idea in mind?

Shadow Starshine:
You said "electrons are fast" so i'm assuming you are talking about the electrochemical signaling

curi:
there are large communication failures here. i regard you as adding a bunch of context to my statements, e.g. i wasn't specifically talking about brains when i mentioned electrons. i regard you as inadequately literal and precise about what you say, so you end up making claims that aren't really what you meant. and more broadly i think you don't have the background knowledge to discuss this effectively.

Shadow Starshine:
Right I don't care about any of that

Shadow Starshine:
Sounds like excuses no offense

Shadow Starshine:
Like unless you're willing to demonstrate any of those claims

Shadow Starshine:
I'm just gonna disregard that entire paragraph

curi:
if you want to untangle things you'll have to acknowledge the broad situation and then try to discuss where and how to begin the untangling. if you don't want to acknowledge the complexity of the problem, and make an organized and large effort to deal with it, then we can do something else like talk occasionally in generalities and hope to have partial understanding.

curi:
but you can't have it both ways and demand detailed explanations from me while ignoring issues like large inferential distance

Shadow Starshine:
I'm not ignoring anything, I'm just not accepting your assertion prima facie

TheRat:

and more broadly i think you don't have the background knowledge to discuss this effectively.

I don't much like this sentiment. But I don't quite know why. Rubs me the wrong way.

Shadow Starshine:
It just reads like posturing to me

curi:

I'm just gonna disregard that entire paragraph

I'm not ignoring anything

this is an example of the inadequate precision

Shadow Starshine:
Ignoring would imply im not reading it

Shadow Starshine:
disregarding it means I've read it and found it valueless

curi:
you don't seem to be offering value or to be curious to learn

Shadow Starshine:
Again, more assertions and posturing

TheRat:
😦

Shadow Starshine:
I don't care what you think of my personality, if you want to explain to me why you disagree

Shadow Starshine:
that's fine

curi:
you're flaming me over epistemology differences while rejecting the very concept that we have conflicting background knowledge

Shadow Starshine:
I don't think that's correct either

Shadow Starshine:
I reject your characterization of me

TheRat:
Where did this discussion get off track so hard. I thought SS was just trying to understand your position. I feel like this is Felix all over again and I'll have egg on my face about it later, but once again I find myself confused at the hostility.

Shadow Starshine:
same

curi:
what hostility (by me)?

TheRat:
Yes it seems to me you're being hostile.

curi:
we were not even close to on the same page re the original topic. i said so. he didn't want to consider it.

JustinCEO:

there are large communication failures here. i regard you as adding a bunch of context to my statements, e.g. i wasn't specifically talking about brains when i mentioned electrons.

JustinCEO:

I'm just gonna disregard that entire paragraph

Shadow Starshine:
We talked about brain speed. I claimed I was talking phenomenologically, he showed I wasn't, I agreed, but offered another thought, then he decided I was unworthy of discussion

Shadow Starshine:
and started character assassinating me

JustinCEO:
curi was offering an important clarification there

JustinCEO:
which you explicitly said you were ignoring

Freeze:
(disregarding)

curi:
i don't think believing someone lacks particular background knowledge is character assassination

JustinCEO:
disregarding ok

Shadow Starshine:
I disregarded the paragraph talking about my intentions and abilities

Freeze:
but yeah i agree, disregarding that entire paragraph also throws out that clarification

Freeze:
and eliminates discussion around why the clarification was necessary

Shadow Starshine:
If he can prove I don't have those capabilities, fine

Shadow Starshine:
But just claiming it

Shadow Starshine:
isn't useful

JustinCEO:
so SS you're saying you're gonna engage with the clarification now?

Shadow Starshine:
I\

curi:
have you read this paper? https://arxiv.org/pdf/quant-ph/0104033

Freeze:

there are large communication failures here. i regard you as adding a bunch of context to my statements, e.g. i wasn't specifically talking about brains when i mentioned electrons. i regard you as inadequately literal and precise about what you say, so you end up making claims that aren't really what you meant. and more broadly i think you don't have the background knowledge to discuss this effectively.
Did you mean you only want to disregard this part?
i regard you as inadequately literal and precise about what you say, so you end up making claims that aren't really what you meant. and more broadly i think you don't have the background knowledge to discuss this effectively.

Shadow Starshine:
Im unsure what you guys are calling "the clarification"

Freeze:
the clarification:

i regard you as adding a bunch of context to my statements, e.g. i wasn't specifically talking about brains when i mentioned electrons.

Shadow Starshine:
yes the top paragraph

TheRat:
the clarification being curi doesn't think he has what it takes to have that discussion. Which seems to me hostile and unhelpful. Doesn't seem like much of a clarification, more of a conclusion.

Freeze:
disregarding the entire paragraph also disregards the clarification

JustinCEO:
ya

Freeze:
the clarification was about the brain

JustinCEO:
right

TheRat:
oh

Shadow Starshine:
I dont see how that's a clarification about the brain

Shadow Starshine:
That's just talking about me?

TheRat:
Damn I lost the plot big time.

JustinCEO:

i wasn't specifically talking about brains when i mentioned electrons.

JustinCEO:
is that talking about you, Shadow Starshine?

Shadow Starshine:
I didn't disregard that part

Shadow Starshine:
It was that first paragraph freeze posted

Shadow Starshine:
It just talks about me as a person

TheRat:
there are large communication failures here. i regard you as adding a bunch of context to my statements, e.g. i wasn't specifically talking about brains when i mentioned electrons. i regard you as inadequately literal and precise about what you say, so you end up making claims that aren't really what you meant. and more broadly i think you don't have the background knowledge to discuss this effectively.

Freeze:
curi is pointing out an example where you added context to his statements that wasn't there in his wording, and explains that there are large communication failures.

curi:
saying a discussion has communication failures and talking about some of the discussion activities from my perspective is not focused on you as a person.

Shadow Starshine:
Well he seems to imply the communication failures are all on my part

curi:
i did not

Shadow Starshine:
The communications failure seems a conclusion, upon which the premises are my imprecision and lack of background knowledge

Shadow Starshine:
How else did you mean it?

curi:
no, that's another communication failure

curi:
i would think there was a communication failure regardless of the causes

Freeze:
the imprecision and lack of background knowledge include that example, and i think curi was saying further discussion would have to happen about that communication failure

curi:
i had other reasons to think that. your messages did not respond to me in a way where it seemed like we were understanding each other.

Shadow Starshine:
Okay, then that's fine. Then I'll only disregard the parts about my lack of background knowledge and imprecision

Shadow Starshine:
unless some demonstration shows those to be the case

curi:
what background knowledge is relevant to what claims is an important part of discussions

Shadow Starshine:
I didn't just say that background knowledge isn't relevant did I?

Shadow Starshine:
I'm saying you haven't shown ME to have a lack.

Shadow Starshine:
Yet you've claimed I have such a lack

Freeze:

i regard you as adding a bunch of context to my statements, e.g. i wasn't specifically talking about brains when i mentioned electrons.
so you don't disregard the above, but you do disregard:
i regard you as inadequately literal and precise about what you say, so you end up making claims that aren't really what you meant. and more broadly i think you don't have the background knowledge to discuss this effectively.

Shadow Starshine:
I'd rather be shown

Shadow Starshine:
than asserted at

curi:
do you really want me to show you lack some background knowledge?

curi:

have you read this paper? https://arxiv.org/pdf/quant-ph/0104033

curi:
i could go through 50 more examples

Shadow Starshine:
Yes, of course I want you to show it, why would I accept it just because you said it?

Shadow Starshine:
Do you think that's how it should work?

JustinCEO:
i defer to curi having more philosophy background knowledge

Shadow Starshine:
He doesn't even know me

Freeze:
Also, was this ever addressed/acknowledged?

i don't think you know what "compute" is based on your sentence

JustinCEO:
compared to me

TheRat:
but we're not talking about you tho Justin.

Freeze:
curi pointing out that you may not know what compute means is also relevant to background knowledge lacking

Shadow Starshine:
It could be, but I gave an example of what I meant by it

JustinCEO:
"Do you think that's how it should work?" is asking about a general principle, or i took it to be, anyways

Freeze:
J is bringing that up as an example of background knowledge mattering I think

curi:
your messages did not appear to be informed by some background knowledge that mine are informed by, and you didn't seem to be reading my messages in accordance with some of my background knowledge.

Shadow Starshine:
in the context of a brain

curi:
this is a major communication issue

Shadow Starshine:
and he didn't correct it

Shadow Starshine:
or offer anything

Freeze:

What I'm taking it to mean is that the physical structure doesn't change of the neurons, it's just the electron and neurotransmitters passing data along
Do you have a different idea in mind?
You said "electrons are fast" so i'm assuming you are talking about the electrochemical signaling

Freeze:
was this your example of computing in the context of the brain?

Freeze:
im trying to find it

Shadow Starshine:
yes

Freeze:
ah ok

Shadow Starshine:
That's what I asked if he meant

Shadow Starshine:
I still don't know

curi:
understanding the large perspective gap is important to productive conversation. you have to take it into account when interpreting. i at least know that i don't know what you mean by lots of comments. you often jump to conclusions about what i'm saying that aren't what i meant.

TheRat:
This is an example of text being superior for sure though. Hard to follow as is, without quotes I can't even imagine.

Freeze:
ye

Shadow Starshine:
I don't think that's true, I spend most my time asking you what you mean by things

JustinCEO:
lol ya imagine this on voice, total chaos

Shadow Starshine:
I literally said "here's what I mean by compute, what do you mean?"

Shadow Starshine:
and you didn't answer

curi:
please don't put non-quotes in quote marks

Freeze:
SS:

What I'm taking it to mean is that the physical structure doesn't change of the neurons, it's just the electron and neurotransmitters passing data along
Do you have a different idea in mind?
You said "electrons are fast" so i'm assuming you are talking about the electrochemical signaling

Shadow Starshine:
If you want to be productive, just tell me what you mean

curi:
slow down, i'll get you an example after this example re different background knowledge re quote usage

Shadow Starshine:
what does the re mean?

Freeze:
regarding i think

curi:

{Attachments}
https://cdn.discordapp.com/attachments/304082867384745994/658489281685487626/unknown.png

Freeze:
ah

Freeze:
about/concerning

TheRat:
so same thing basically, I always assumed as regarding.

curi:
discord seems to have prevented copy/paste with a software update, hmm

JustinCEO:
i just ran the discord plain text log maker

curi:
u can copy/paste tiny amounts but not select all and a bunch of stuff

curi:
thx j

Shadow Starshine:
I was having that issue as well I can't even post a link

JustinCEO:
lol j discord project suddenly ESSENTIAL?

JustinCEO:
what version do u guys have

JustinCEO:
of discord

TheRat:
Web

curi:
mac

JustinCEO:
o i never use web

TheRat:
No issues for me I think

Shadow Starshine:
What I'm generally frustrated about though, is when i ask someone to clarify their own position, and offer up mine, and instead of giving me their position or understanding, they just defer to talking about "the conversation going all wrong", or specific character flaws I supposedly have.

Shadow Starshine:
It doesn't seem too hard to progress the conversation instead by just offering up what you mean

curi:
we're all alike in our infinite ignorance. it's not a character flaw. the belief that it is is another difference in background knowledge leading to communication failures.

JustinCEO:
i think curi wants to progress the conversation and there was a big misunderstanding

Shadow Starshine:
You can say "No, that's not what compute means to me (or community X), it instead means..."

JustinCEO:
and some disagreement about discussion methodology

curi:
what compute means is complicated.

Shadow Starshine:
If he wanted to continue the conversation, he made it sound like he didn't

Freeze:
I think he meant that it would need to be a more involved discussion than you might have thought, because he saw a lot of things going wrong re communication that presumably you were ok with because you kept moving forward rather than bringing those things up

Freeze:
so it might require more patience/tolerance/discussion of background knowledge/discussion of discussion to progress than initially expected

Freeze:
i think it was a disagreement about how best to progress the discussion

TheRat:
Its interesting you see that Freeze. I thought curi was dismissing SS as not having enough background knowledge to continue. It didn't strike me as an invitation to continue in more depth.

Freeze:
hmm

Freeze:
I rarely see curi eliminate people from discussion. instead it seems like he asks for people to consider flaws in their current approach that are making it unnecessarily difficult for truth-seeking or learning

TheRat:
Maybe a link might have been more helpful? Like mention the communication gap and the potential knowledge gap, and say here read this and that might help with the gap?

Freeze:
im reading the multiverse paper rn

Freeze:
it seems to be an example of background knowledge that might be relevant

TheRat:
well yes I personally don't think he was, but I know curi. I am saying it seems. Like If I pretend I don't know curi, I would have reacted that way

Shadow Starshine:
thats how I took it TheRat. If it was like Freeze suggested, that would be fine

Freeze:
but i dont see yet how specifically it relates to this discussion

curi:
[5:59 PM] curi: understanding the large perspective gap is important to productive conversation. you have to take it into account when interpreting. i at least know that i don't know what you mean by lots of comments. you often jump to conclusions about what i'm saying that aren't what i meant.
[5:59 PM] Shadow Starshine: I don't think that's true, I spend most my time asking you what you mean by things

for example:

Shadow Starshine:
Like, if you say that it's analogous to updating a spreadsheet

Shadow Starshine:
why would I think that's true

Shadow Starshine:
out of all possibilities

Shadow Starshine:
why should I hold to that one?

Shadow Starshine:
There's possibly hundreds of assertions I can't refute

it's hard to tell what you think i said/meant, but based on your reply i'm confident it's dissimilar to what i had in mind. e.g. you seem to think i suggested you should hold to a particular possibility out of many. that's not what i was saying. it looks like you believe i was making an analogy that i wasn't.

Shadow Starshine:
I also don't know what you mean by "I often jump to conclusions"

Shadow Starshine:
Are there examples of me doing that?

curi:
this is an example

Shadow Starshine:
You said "I often do it"

curi:
note the communication failure again

Shadow Starshine:
Are there other examples?

Shadow Starshine:
what is often?

curi:
wait, stop

Freeze:
the conclusions were: curi is asking you to hold a particular possibility, and curi is making an analogy but he wasn't

Freeze:
two conclusions/examples is enough to start a productive discussion i think

curi:

You said "I often do it"

I didn't, and i had just asked you not to put non-quotes in quotes.

Shadow Starshine:
No, it's right up there

Freeze:
proper quote:

you often jump to conclusions about what i'm saying that aren't what i meant.

Shadow Starshine:
You just quoted yourself

curi:
i regard this as a major problem. you apparently thought it was ignorable or you don't know what quotes are.

Freeze:
SS I also have a disagreement with how you used quotes here:

I literally said "here's what I mean by compute, what do you mean?"

curi:
either way there's a major difficulty due to clashing background knowledge/assumptions/etc

Freeze:
i think you were clarifying what you meant and summarizing what you had said. but to use quotes and the word literally was wrong I think

Shadow Starshine:
I get it, you guys only like quotes when its literal

JustinCEO:
ya you should never ever say literally before a paraphrase

Shadow Starshine:
and I use them as paraphrases

curi:
you don't get it, the gap in perspective is larger than you're realizing

curi:
you're trying to downplay the perspective difference and different approach to communication

JustinCEO:
if you're trying to have a serious discussion anyyways

TheRat:
This has gone hyper meta.

Shadow Starshine:
curi, I'm getting tired of this. If you're gonna make claims like I often jump to conclusions, then I expect you to show that's the case

Shadow Starshine:
Don't just throw that out there

curi:
you just jumped to a conclusion that "I get it"

Freeze:
i think there is a substantive difference in meaning between:

You said "I often do it"

and curi's actual quote:

you often jump to conclusions about what i'm saying that aren't what i meant.
also it was about specific conclusions, conclusions about what curi is saying when that's not what he meant

TheRat:
What's the difference Freeze?

Freeze:
which is important to address for productive communication

Shadow Starshine:
How is that jumping to a conclusion, is it not what you guys are saying? Don't use literally and don't use quotes without it actually being word for word?

Shadow Starshine:
Now what other examples do you have

Shadow Starshine:
Because you said it BEFORE I said any of these things

Shadow Starshine:
you are literally only using post hoc examples

Shadow Starshine:
I want examples that occurred BEFORE you made the claim

curi:
i gave you an example a minute ago, which you didn't recognize as an example, which shows the large communication problem

Freeze:

you don't get it, the gap in perspective is larger than you're realizing
you're trying to downplay the perspective difference and different approach to communication

Shadow Starshine:
You said I often do it

curi:
re "I get it", is that a conclusion you're willing to test?

Shadow Starshine:
How am I supposed to test it, you either agree with my understanding of your desire of quotations or you dont

Freeze:
SS:

I get it, you guys only like quotes when its literal
curi, i think this "I get it" is only referring to the quote/paraphrasing issue. It may not be referring to the overall disagreement or perspective gap.

curi:
that was a yes or no question. your answer is not a yes or no. this again indicates perspective and communication gap.

Shadow Starshine:
exactly

curi:
ik that freeze

Shadow Starshine:
do you just get meta over and over again to avoid answering anything?

curi:

do you just get meta over and over again to avoid answering anything?

this is a meta comment while still not answering my direct question.

Shadow Starshine:
You dont answer anything I bring up

Shadow Starshine:
Do you think you are the sole driver of this conversation?

curi:

is that a conclusion you're willing to test?

i'm trying to demonstrate some claims to you, but you aren't being responsive.

Freeze:
how would we go about testing this specific statement?

I get it, you guys only like quotes when its literal

Shadow Starshine:
I want you to answer things I'm saying to you

curi:
if you want me to demonstrate any claims to you, you have to be responsive when i try to do so.

Shadow Starshine:
Now sure, test that claim if you can. I think you want something out of quotes, I stated what I think you want, am I right or wrong

curi:

is that a conclusion you're willing to test?

Freeze:

Now sure, test that claim if you can

Freeze:
i think his answer is yes

curi:
that is not an answer

Freeze:
although yeah it seems to carry a bunch of additional stuff

Shadow Starshine:
my god

curi:
if he meant that as "yes", it's an example of his lack of precision

JustinCEO:
curi asked if SS would be willing to test claim

Shadow Starshine:
Are you being purposely obtuse?

JustinCEO:
SS replied that curi can test claim if he can

curi:
no, as i told you we have a perspective gap, different background knowledge, and some communication failures.

Freeze:
i dont think he is, i think he sees real communication gaps and is trying to build a mutual understanding of them

curi:
i suggest you read the inferential distance articles linked earlier

Freeze:
(in response to purposely obtuse)

Shadow Starshine:
I don't think hes trying to build anything

Shadow Starshine:
any other person would know what I'm saying

JustinCEO:
so SS specified the wrong actor in his reply

Shadow Starshine:
in fact, multiple people in this chat

Shadow Starshine:
seem to know what im saying

curi:
could you say what you mean instead of trying to rely on me guessing it? i've spent the last half hour trying to tell you that relying on guessing what each other means isn't going to work because we're too different.

Shadow Starshine:
I am saying what I mean

Shadow Starshine:
I don't agree to your framing

Freeze:
SS:

Are you being purposely obtuse?
curi:
no, as i told you we have a perspective gap, different background knowledge, and some communication failures.
btw i see this is an example of a direct question and a direct answer happening

curi:

is that a conclusion you're willing to test?

you haven't answered this question.

Shadow Starshine:
Yes I have

curi:
quote?

Shadow Starshine:
I can't copy paste

Freeze:
i think i can find

Shadow Starshine:
I said sure

Freeze:

Now sure, test that claim if you can

Shadow Starshine:
then I asked you to confirm if what I said was accurate

Freeze:
J pointed out a mix up of actors which might be relevant

curi:
that is not a "sure" answer to my question.

Shadow Starshine:
yes it is

Shadow Starshine:
now take it as one

Shadow Starshine:
and progress

Shadow Starshine:
stop wasting time

curi:
you're jumping to the conclusion that i'm wasting time

Shadow Starshine:
I am

Freeze:
there seems to be a disagreement about whether or not this meta discussion is progress. I think it is, but you think it's a waste of time. How would we resolve this disagreement?

Shadow Starshine:
I honestly think you're wasting time

curi:
i think it could be progress if SS wanted to resolve our differences, but he doesn't

Shadow Starshine:
I think you don't

curi:
he isn't willing to actually face the gap in viewpoint and try to deal with it

Shadow Starshine:
and you just keep trying to frame the discussion

Shadow Starshine:
to make it sound like its my bad

Shadow Starshine:
and not yours

Shadow Starshine:
its incredibly obvious I answered in the affirmative

curi:
you are resisting trying to sort out our communication differences

Shadow Starshine:
no, you are

Shadow Starshine:
you aren't just taking it as a yes

Shadow Starshine:
if you did

Shadow Starshine:
we could move on

curi:
you didn't and still haven't said "yes"

JustinCEO:
SS you're the one who keeps bringing up personal dynamics while other people are focused on interpreting statements, pointing out ambiguities or errors, and explaining stuff.

TheRat:
I wish there was a way to halt meta and get back on track 😦

Shadow Starshine:
I have said yes, I told you it meant yes, this isn't complicated

curi:
you aren't being precise enough

Freeze:
SS, did you see Justin's statements about the mix-up of actors? Quotes:
J:

curi asked if SS would be willing to test claim
SS replied that curi can test claim if he can
so SS specified the wrong actor in his reply

Shadow Starshine:
I am being precise enough

curi:
you never said "yes"

curi:
but you claim to have said "yes"

Shadow Starshine:
I don't care, I said sure

curi:
you are wrong in a literal, precise way

Shadow Starshine:
No, I said that I answered in the affirmative

Shadow Starshine:
I never said that I only said "yes"

curi:
your response to being wrong in a literal, precise way is "I don't care, I said [thing that isn't "yes"]"

Shadow Starshine:
show me where I said that

curi:

I have said yes,

Freeze:
I think SS is saying that he clarified later that his "Now sure, test that claim if you can" means yes

JustinCEO:
this is related to the questionable quotation usage earlier

Shadow Starshine:
buddy, if you think what's gonna happen

curi:
that text doesn't mean yes. it's not even coherent.

Shadow Starshine:
is that I'm gonna use the exact terminology you want in the exact format you want

Shadow Starshine:
and its either that or its my fault

Shadow Starshine:
you're deluded

Shadow Starshine:
you can either take it as a yes

Shadow Starshine:
and continue

Shadow Starshine:
or I think you're doing this to avoid

Shadow Starshine:
having to justify earlier statements

Freeze:
I think this was relevant to a potential misunderstanding:

SS, did you see Justin's statements about the mix-up of actors? Quotes:
J:
curi asked if SS would be willing to test claim
SS replied that curi can test claim if he can
so SS specified the wrong actor in his reply

curi:
i said you hadn't said "yes". you claimed that you had. now you're moving the goalposts to whether a previous comment meant yes

Shadow Starshine:
I've said my piece dude

Shadow Starshine:
I'm not interested

curi:
do you think you can estimate, with over 95% confidence, how many times i've banned or suspended someone for misquoting, or given a warning that i will do that if they do it again?

Shadow Starshine:
Take the yes or don't

TheRat:
This is why I am not a fan of meta. Look how 0 progress was made the moment we went meta. Even if curi is 100% right, this conversation to me highlights my issue with going meta. It is like a black hole. I have never escaped a meta discussion, and I have never seen anyone escape it. 😦

Shadow Starshine:
I'm not bothering to answer such an irrelevant question

curi:
that question is a test of your claim to get it re my view of quoting.

curi:
your failure to see the relevance shows some sorta communication failure and perspective gap.

Shadow Starshine:
Buddy are you taking the yes or not

curi:
the sort that i've claimed is happening ~constantly

Freeze:
He took the yes by asking you the question that tests the claim

Shadow Starshine:
or are you going to keep saying "sure" isn't precise enough

Shadow Starshine:
I want him to acknowledge his acceptance then

Freeze:
curi is saying

do you think you can estimate, with over 95% confidence, how many times i've banned or suspended someone for misquoting, or given a warning that i will do that if they do it again?
is the test

Shadow Starshine:
not implicitly

Shadow Starshine:
I want this explicit

curi:
i don't think you've said yes but i was trying a different approach anyway because i don't think you have the background knowledge to be able to speak precisely.

Shadow Starshine:
What is your different approach

Shadow Starshine:
is it accepting the affirmative?

curi:

do you think you can estimate, with over 95% confidence, how many times i've banned or suspended someone for misquoting, or given a warning that i will do that if they do it again?

Shadow Starshine:
No, before we move on, tell me you've accepted the affirmative

Shadow Starshine:
is that your different approach?

curi:
i accept that you now mean it, but i don't accept that factually you've said it.

Shadow Starshine:
great good enough

curi:
yes that quote is the different approach

Freeze:
@TheRat i think this meta discussion is progress by the way

Freeze:
I don't think meta discussion is a black hole that we can't get out of

JustinCEO:
@TheRat meta often comes up when there's already a problem and people are pointing stuff out to try to address the problem. so the universe of cases in which meta comes up is already slanted towards discussions where there's some kinda issue. so you can't judge conversational problems as necessarily being attributable to the meta itself.

Shadow Starshine:
To answer your question, i'm not making a claim about everything surrounding your usage of quotes, but merely how you like them being used. I don't care about what days you used it, how many times, and what you were wearing while you did it. Neither do I care about who you banned for not doing it. I'm merely expressing how you want quotes used. Did I get that part right or wrong?

Shadow Starshine:
That is what "I get it" meant

Freeze:

{Attachments}
https://cdn.discordapp.com/attachments/304082867384745994/658497979493253140/unknown.png

curi:
i asked you to stop doing something and then you did it again

curi:
then you claimed to get my perspective on the matter, and i doubt that you do

Shadow Starshine:
I want you to stop doing things as well

Freeze:
yes, although he said he gets it after the second time it happened and there was further discussion about it that clarified the issue

Shadow Starshine:
I want you to acknowledge what I'm saying

Shadow Starshine:
can you do that?

curi:
i can't read your mind as well as i believe you want me to.

Shadow Starshine:
I want you to read what I'm typing not my mind

curi:
so for example you wrote

curi:

I get it, you guys only like quotes when its literal

Shadow Starshine:
Is that a true or false statement?

curi:
i read this. by reading it, i noticed that the text "its" is an error. is that what you want?

Shadow Starshine:
Can you just answer that

Shadow Starshine:
...

Shadow Starshine:
are you serious

TheRat:
I very much disagree @Freeze. There is no way you can tell me with a straight face that progress has been made. I don't think SS is any closer to understanding curi's position on computation. Hell, progress hasn't even been made within meta yet.

Shadow Starshine:
you're bothered that "its" isn't "it's"

curi:
wanting me to guess that you meant something other than what you wrote is in the mind reading category.

curi:
i can do it some but not enough for how you're talking.

Shadow Starshine:
you're being serious right now?

curi:
i am being serious

Shadow Starshine:
wow

curi:
your correction is still wrong

Freeze:
@TheRat I think there's lots of progress. curi and SS better understand a) that there is a large perspective gap b) the perspectives of each other regarding the perspective gap
there's also lots of valuable discussion to look at, quote, make a discussion tree out of later etc.

Shadow Starshine:
Sorry, at this point I can't imagine you're worth talking to

curi:
will you read the inferential distance articles?

curi:
your attitude is irrational in a way that has been explained by quite a few ppl

Freeze:
Inferential distance articles: https://ptb.discordapp.com/channels/304082867384745994/304082867384745994/658476068436705320

Freeze:
https://wiki.lesswrong.com/wiki/Inferential_distance

Shadow Starshine:
Listen, I'll talk to Freeze and TheRat and people, but I'm done with curi

Shadow Starshine:
I don't think he's being honest

Freeze:
hmm

Freeze:
he disagrees about that, but ok

Freeze:
i dont know how we make progress on the gap in perspective about curi's honesty

curi:
see, the communication gap is bad enough that he's claiming bad faith. typical thing as explained in the articles.

Shadow Starshine:
I talk with a lot of philosophers, none of them said that a conversation was too confusing to move forward because of the wrong "its"

Freeze:

{Attachments}
https://cdn.discordapp.com/attachments/304082867384745994/658499801507168272/unknown.png

curi:
i didn't say that.

Shadow Starshine:
Either he honestly means that, in which case he speaks in a way so annoying I'd had to try and broach it

Freeze:
btw I think curi is saying he can do some mind-reading but not enough to communicate effectively based on how you have been writing

Shadow Starshine:
Or he's dishonest

Shadow Starshine:
And it's a waste of my time

Freeze:
I took the its as an example of imprecision, but in a way that curi can mindread past

JustinCEO:
ya SS you're just making stuff up now

TheRat:
FWIW, SS. I get where you're coming from but curi is not acting in bad faith. Idk a way out of this type of meta black hole tbh though. I definitely don't think progress ever gets made contra Freeze.

Freeze:
but the other examples are ones he can't mindread past, like the computation stuff

curi:
SS doesn't want to take one issue at a time and proceed carefully to reach a conclusion, but also demands repeatedly that i demonstrate my claims.

Shadow Starshine:
Are you guys honestly taking the perspective that you'd have to be a "mind reader" to understand that sentence?

curi:
no

curi:
you are misunderstanding quite badly

Shadow Starshine:
Then I'm sorry, but you're too imprecise for me to understand

Shadow Starshine:
must be a background knowledge problem

JustinCEO:
why hatefully flame?

Freeze:
he is showing systematic lack of precision in your words. some of it curi can read past, like "its", but others he can't figure out what you meant accurately. He is saying even if you fix the "its" to "it's" the sentence, the quote:

I get it, you guys only like quotes when its literal
is still a misunderstanding and needs to be addressed. That's my interpretation

Shadow Starshine:
I'm literally mimicing, and you call it flaming

JustinCEO:

To imitate or ape for sport; to attempt to excite laughter or derision by acting or speaking like another; to ridicule by imitation.

Shadow Starshine:
Who here doesn't understand what I mean with the "I get it, you guys only like quotes when its literal"

Shadow Starshine:
Someone other than curi

Shadow Starshine:
Please raise your hand

curi:
i do understand what you mean

curi:
you keep misstating my position and ignoring my corrections

Freeze:
but he sees a misunderstanding there, specifically in "I get it" i think

Shadow Starshine:
Not what I'm asking, freeze, what do you think that sentence means?

Freeze:
even "You guys only like quotes when its literal" might be misunderstood or incomplete in a meaningful way for further communication

Shadow Starshine:
Well lets find out

Shadow Starshine:
Freeze, what do you take from it

Shadow Starshine:
Anyone can take a stab at it

JustinCEO:

I'm literally mimicing, and you call it flaming

JustinCEO:
definitions of mimic commonly involve ridicule, derision

JustinCEO:
which i would consider flaming

TheRat:
I take it to mean that we don't like it, but it doesn't tell me if you agree to stop quoting in that manner.

Shadow Starshine:
Good, I didn't agree

Shadow Starshine:
But what is it I think you don't like?

JustinCEO:
the specific word you chose to bring up to exonerate yourself from flaming charge is not helpful for your case.

TheRat:
Non literal quotes

JustinCEO:
did SS block me or is he ignoring me? 🤔

Shadow Starshine:
I can't reply to everything

JustinCEO:
ok

Shadow Starshine:
so I'm choosing to narrow this

JustinCEO:
in principle that's fine but doing so without saying anything is hella confusing and ambiguous

Shadow Starshine:
It seems TheRat understood what I was trying to say

Freeze:

I get it, you guys only like quotes when its literal
I think it means:
Explicitly: I understand that you guys only like quotes that are quotes. You don't like paraphrasing dressed up as quotes. I understand that you didn't like my paraphrasing of quotes.

Implicitly a few different things I was wondering about: Is there hostility in this statement? It seems to sort of be saying... you guys use quotes this way, but most people don't and it's not actually important, but I get that that's how you want to do it. Another implicit thing seemed to be like... you guys are only discussing my paraphrasing because you can't answer my other issues and this is your way of being pedantic and stubborn.
Issues I see: You didn't address the fact that you used quotes as paraphrasing, and you didn't offer your opinion on whether it matters or not to use quotes precisely and literally. The way that misquoting relates to communication gap was also not something addressed, but I think that could happen in future discussion. Without any of this additional stuff though, we don't actually know where we agree and where we disagree about quotes.

TheRat:
Disclaimer: I suffer from ridiculous headaches. Currently in the middle of one so I am prone to miss a lot when this happens. But yeah I think I understood what you meant.

Freeze:
migraine? 😦

Shadow Starshine:
well you got the explicit message right, but the implicit part was a possibility that I purposely didn't state

Shadow Starshine:
I never agreed to your usage

TheRat:
I don't think it's a migraine as I understand them. I never quite knew the difference, but I was told that migraines come with like sensitivity to light and vision impairment.

Shadow Starshine:
But I wanted to know that I had what curi wanted correct

Shadow Starshine:
first

Freeze:
yeah i think curi knows you didnt agree to our usage

Freeze:
and believed that that discussion about agreement/disagreement would be relevant to further communication

Shadow Starshine:
I didn't agree ot disagree

TheRat:
Freeze didn't say you agreed to disagree

Freeze:
so i think the misunderstanding is around whether or not you wanted to discuss further and believed there was more to learn about the role of quoting in discussions

Freeze:
@TheRat about what?

TheRat:
No i meant toward SS

Freeze:
ah

TheRat:
Ok let me take a stab at explaining. The problem with quote usage is not just that we dislike non-literal usage. It is that curi felt you were paraphrasing his ideas in the wrong manner, and responding to the wrong paraphrase. Using quotes helps mitigate that perspective gap.

curi:
ot = or

curi:
u mindread his typo wrong

curi:
as to

curi:
example of how it can be non-trivial and go wrong

TheRat:
So when you said I get it, SS, you didn't quite get it.

TheRat:
Does that help?

curi:
(i'm fairly confident re typo interpretation tho not 100%)

Shadow Starshine:
You guys are taking "I get it" as "I get everything". I think that's problematic

Shadow Starshine:
When I say "I get it" followed by a statement, that statement is what I get

TheRat:
Btw I know these discussions are quite draining. So if you're too tired to continue we can pick it up another time. (Not assuming anything just putting it out there)

Freeze:
So SS if:
You were just looking for clarification from curi on whether you understood the conclusion about quotes which is that it's important to use quotes literally, then I think curi did clarify that he believes you didn't truly understand it. People have been warned/banned for misquoting because of the background knowledge curi has around quotes and how important it is to interact with people's text in an intellectually honest way through quotation.
I think curi believed that although you said that you got that literal quotes are important to us, you would have been surprised if you discovered just how seriously we actually take quoting. Your potential surprise would indicate a gap in understanding and background knowledge, which I think curi wanted to address ahead of time and as part of making progress on the communication gap

TheRat:
Ok so the problem with that. Maybe taking I get it to mean I get everything is problematic. However, the problem is that the main reason why quotes was brought up was not addressed, not even a little bit. In this case the mis-paraphrase and response to the mis-paraphrase.

Shadow Starshine:
Right, I may not understand to the degree that quotes are important to you. I may not even care. What I wanted to establish, however, was not the degree of importance, which you're right, I am ignorant of, but the qualifications of getting it correct

Freeze:
Ok, so I think that may have been a genuine misunderstanding then

Freeze:
If for example curi replied: Yes, that is our standard for quoting, but I believe you don't understand why we have that standard, and you disagree implicitly about the value of that standard. Discussing that disagreement is important to our communication.

Shadow Starshine:
Right, that would have been a good response

TheRat:
That might have been better yes

Freeze:
Yes, but it would have required some mindreading from curi, or us

Shadow Starshine:
But I did clarify multiple times

Shadow Starshine:
and he wouldn't acknowledge it

Shadow Starshine:
Which is utterly frustrating

Shadow Starshine:
If someone is so far from any common language communication as that

Shadow Starshine:
I'm not sure it's worth building up anything

Shadow Starshine:
especially when all changes seem to be required on my end

Freeze:
the thing is individual clarifications don't address imprecision in past communication, if that imprecision is a consistent issue

TheRat:
I don't think that would require mindreading Freeze.

TheRat:
SS was clear with what he meant regarding quotes

Freeze:
like if you imprecisely communicate in the same way 3 times, then just asking what you meant each time and moving forward might be less effective than trying to figure out why the imprecise communication happened

curi:
SS didn't merely clarify multiple times. he made additional false claims, while also actually willfully refusing to give a clear answer on the basis of (false claims that he'd already given answers he hadn't). this is one of many examples of how his approach to text doesn't engage well with what ppl (he or others) literally said, which is relevant to quote usage.

TheRat:
right well I am in disagreement with freeze and curi about the value of going meta. Even if progress is slow in non meta, and clarifications are needed. I think progress happens. I haven't seen meta not just come to a screeching halt.

TheRat:
but I have a lot less discussion experience too

Shadow Starshine:
I don't mind meta discussions, I have them with other people

Shadow Starshine:
they don't go that badly

Freeze:
I might also be biased because I genuinely enjoy meta

Freeze:
I find that I learn a lot about discussion and thinking in general

Freeze:
and analysis

Freeze:
but i am kind of exhausted, which is interesting

curi:
most of what i said was not meta

Shadow Starshine:
I stand by curi just framing things rather than being clarifying

Freeze:
I'd like to discuss the inferential distance articles at some point

Shadow Starshine:
If you look at what he just wrote

Shadow Starshine:
it doesn't offer clarity

TheRat:
ok well maybe I mean meta in the wrong way then. I thought the moment you said he lacked the sufficient knowledge to continue the discussion regarding computation, I thought that was the beginning of the meta train.

Shadow Starshine:
It's just assertion

curi:
rat i mean, once ur talking about a meta topic, not every statement within that topic is meta. many are object statements re that topic.

Freeze:
well when curi says:

most of what i said was not meta
it makes me think that he means most of what he said was directly relevant/topical to the discussion

Shadow Starshine:
I will counter assert that I think curi's approach doesn't work

Freeze:

most of what i said was not meta

I will counter assert that I think curi's approach doesn't work
is this a counter assertion to the first quote from curi?

Shadow Starshine:
no

Freeze:
oh

Freeze:
ok, i think i understand

TheRat:
I wonder if maybe it had an addition of something like: Maybe if you read this link you might get a better idea of what I mean by computation. And then link.

curi:
https://www.lesswrong.com/posts/HLqWn5LASfhhArZ7w/expecting-short-inferential-distances

A clear argument has to lay out an inferential pathway, starting from what the audience already knows or accepts. If you don’t recurse far enough, you’re just talking to yourself.

SS doesn't want to recurse enough, calls it my approach but he also disagrees with EY and many others.

Shadow Starshine:
That's also false

curi:
https://www.lesswrong.com/posts/sBBGxdvhKcppQWZZE/double-illusion-of-transparency

In desperation, I recursed all the way back to Bayes's Theorem, the ultimate foundation stone of -

He didn't know how to apply Bayes's Theorem to update the probability that a fruit is a banana, after it is observed to be yellow. He kept mixing up p(b|y) and p(y|b).

Shadow Starshine:
You can spend as much time framing our discussion as you like

Shadow Starshine:
if you actually want to stop doing that and just talk to me directly feel free to DM me

curi:
what is false? you want to recurse more than i think you do?

curi:
you refused to discuss further b4 we recursed enough to find common ground. said the gap btwn us was too big (after spending a lot of the conversation denying gap size)

Shadow Starshine:
Again, i don't agree with your characterization of the events

curi:
what do you disagree with?

TheRat:
Wait what do you think about what I said. A resource to help close that gap

curi:
you aren't being specific. which is just the sort of issue our way of using quotes addresses.

Freeze:
Shadow Starshine, Today at 6:44 PM

Then I'm sorry, but you're too imprecise for me to understand

must be a background knowledge problem

Shadow Starshine, Today at 7:12 PM
If you look at what he just wrote
it doesn't offer clarity
It's just assertion

I will counter assert that I think curi's approach doesn't work

Is this second example re counter assertion a similar process to the first example of mimicking?

Shadow Starshine:
That I had a refusal to discuss further. I had a refusal for your methods and approach.

Shadow Starshine:
Which isn't a refusal to continue

Shadow Starshine:
That's just your characterization

curi:
do you deny that i was trying to recurse further to common ground?

TheRat:
The second quote is not mimicry

Shadow Starshine:
I don't know whether you were, I have no reason to believe you were.

Shadow Starshine:
And in either case, it was a bad approach and was not based on my refusal

Shadow Starshine:
no that wasn't mimicry

curi:

Sorry, at this point I can't imagine you're worth talking to

isn't this a "refusal to discuss further"?

curi:
more refusal here and to make scrolling back up easier: https://discordapp.com/channels/304082867384745994/304082867384745994/658499339018305565

Shadow Starshine:
At that point yes. I had basically tried to continue with you multiple times and found you weren't perceptive to any approaches I had taken and I didn't think at that point you were acting in good faith

Shadow Starshine:
Nor do you seem to answer anything I put out there

curi:
so why did you deny my characterization of the conversation, on the basis of specifically denying what you now concede is true?

curi:
you doing that kind of thing, over and over, is why conversation doesn't work.

Shadow Starshine:
Because it makes it sound like I refused earlier, or that the initial problem was my refusal. If you're saying that, instead, there was a refusal later after it had broken down, that I can accept

Shadow Starshine:
No

Shadow Starshine:
the reason it doesn't work

Shadow Starshine:
is because you write sentences like "you are doing that kind of thing, over and over, is why conversation doesn't work."

Shadow Starshine:
And you do THAT over and over

JustinCEO:
that's a typo lol

JustinCEO:
in the attempted quote

Shadow Starshine:
And those sort of sentences

Shadow Starshine:
don't help any discussion

Shadow Starshine:
It's just framing

JustinCEO:
i find it kinda funny cuz like

JustinCEO:
that specific issue

JustinCEO:
on which

JustinCEO:
this community

JustinCEO:
has a serious position

JustinCEO:
has been brought up

JustinCEO:
repeatedly

JustinCEO:
and

JustinCEO:
you are offering

curi:

Because it makes it sound like I refused earlier

what is "it" here and how does it do that? this is part of a pattern where you raise new, vague claims in response to being wrong about a previous claim, which prevents anything resolving.

JustinCEO:
your reason for why discussion doesn't work

JustinCEO:
and again

JustinCEO:
you violated a norm on which there is a considered position/attitude as it relates to discussion

JustinCEO:
i think maybe u think its us just being fussy pedants

Shadow Starshine:
You aren't being specific what my refusal was about or why it occurred, and your sentence makes it sound like the problem rests on my refusal, rather than that being a conclusion about what occurred after there were problems

JustinCEO:
the quoting thing

JustinCEO:
but that's not it

curi:
which text wasn't specific but should have been?

Shadow Starshine:
give me a sec I can't copy paste

TheRat:
I think he means

you doing that kind of thing, over and over, is why conversation doesn't work.

curi:
copy/paste works for me for short amounts of text

curi:
mb relog

curi:
if you prefer to move to a better forum like curi website we can do that.

Shadow Starshine:
I dont know if its the laptop

Shadow Starshine:
"you refused to discuss further b4 we recursed enough to find common ground"

Shadow Starshine:
This implies that we were finding common ground, or that this recursive process was occurring or that you had intentions to do so

curi:
ok so factually you admit you refused to discuss further at a certain point in time. at that point in time, had we recursed far enough to find common ground?

Shadow Starshine:
Do you see my contention?

curi:
yes, i implied the process was occurring and/or that i had intentions for it to occur. i agree.

Shadow Starshine:
Great, I don't accept that

curi:
do you accept that we'd recursed for several levels?

Shadow Starshine:
No, I don't accept that what we were doing what recursing

Shadow Starshine:
was*

curi:
ok, so you're denying my statement based on a different concept of recursion than what i know and mean ... basically you have different background knowledge than me, interpreted my statement using your background knowledge, and jumped to the conclusion that it's false. which is the same kind of thing you wanted me to give you examples of you doing, and i was trying to.

Shadow Starshine:
If what you're implying is that you have some proprietary notion of recursion that I failed to acquire, then sure, that's a possibility. perhaps every word you're saying doesn't mean what I commonly think it means and I've jumped to conclusions that we are even communicating

Shadow Starshine:
In which case, my bad

curi:
i believe i have the standard meaning, as meant by EY in the article, but that you don't.

Shadow Starshine:
Perhaps you are just randomly stating words

Shadow Starshine:
Oh, well, I shouldn't jump to the conclusion that by article you mean what I mean by article

Shadow Starshine:
Or that when you say you, you mean me

Shadow Starshine:
I'd hate to be so presumptuous

curi:
do you want to recurse on this? e.g. i could ask you to give an example of what you think recursing is.

Freeze:
this is sarcasm, right SS?

curi:
and i could give one from the conversation previously.

Shadow Starshine:
I really don't think you're honestly trying to sort anything out curi, so no, not really. If you were, you woulda just told me what you meant by compute

Shadow Starshine:
and not waste 3 hours of my time

Shadow Starshine:
or however long its been

Shadow Starshine:
2?

Perspective Philosophy:
you're bothered that "its" isn't "it's" Is that what is being argued right now?

curi:
no

Perspective Philosophy:
ive got to keep reading then

Perspective Philosophy:
brb

curi:
i told you earlier that "compute" is complicated. to use article examples, it's like a young earth creationist asking for one paragraph explanation of evolution that is convincing to him.

Shadow Starshine:
It would have been a better use of time to discuss it than any of this

Shadow Starshine:
instead, you just made assertions

Shadow Starshine:
I asked you to defend them

Shadow Starshine:
you didnt

Shadow Starshine:
and here we are, you picking at grammar and quotes

TheRat:
oh Hello PP!

curi:
i don't agree. i thought your discussion methodology was and is inadequate to make progress and that disagreement needs to be addressed.

TheRat:
Welcome

Shadow Starshine:
instead of actually addressing real issues

Freeze:
are all forms of information processing computational?

Shadow Starshine:
And I think that is of you

Shadow Starshine:
So you go ahead and think that of me

Shadow Starshine:
and I'll think it of you as well

curi:
ok well i've written at length about my discussion methodology, linked you relevant articles, etc., but you have not presented your position on the matter and, i think, don't want to

Shadow Starshine:
Why do you write about what you think I want and my intentions

Shadow Starshine:
Why not just ask

Shadow Starshine:
It's obnoxious

curi:
you want me to ask you something which i believe you've already communicated about many times, but previously you were upset with me for asking something which you believed you had already communicated about many times

Perspective Philosophy:
@TheRat Hello!

Freeze:
But when curi exposes what he thinks you want to you (or what he thinks your intentions are) it allows for better understanding of what he's thinking, and you can point out where he's wrong. We all have these thoughts in our head and if we don't put them out there to be clarified or contested, the thoughts affect the discussion in an unseen way instead of an openly attributable way

Freeze:
Like I wouldn't want to misunderstand you in a way that I don't realize and not find out until way later. I'd prefer to say what I think you mean and have you correct me

TheRat:
I do agree with SS that an attempt at explaining computation or linking to a resource that explains computation in a way curi endorses would have been a much better approach than going meta. I feel like past 2-3 hours the progress has been quite minimal if any.

Shadow Starshine:
curi if you hold that belief state that's not my fault

curi:
there were dozens of large problems in the conversation, from my perspective, rat. i didn't think they were ignorable.

Freeze:
curi believes SS understanding computation would have required some steps of recursion though, and relevant background knowledge

curi:
if i didn't discuss discussion methodology i would have stopped discussing.

Perspective Philosophy:
I got to that point

curi:
SS, do you want to present your position on discussion methodology?

TheRat:
I don't doubt it curi. I just think that going meta seems to always end in no progress.

Shadow Starshine:
No, curi, I don't. What I wanted was for you to talk about what you mean by compute.

curi:
why did you derail by questioning my correct understanding of what you'd already communicated?

curi:
and saying i should have asked

Perspective Philosophy:
You said there are large communication failures here. i regard you as adding a bunch of context to my statements, e.g. i wasn't specifically talking about brains when i mentioned electrons. i regard you as inadequately literal and precise about what you say, so you end up making claims that aren't really what you meant. and more broadly i think you don't have the background knowledge to discuss this effectively.

Seems that the better course would have been to talk about the subject matter and alleviate any discrepancies as the conversation flows. This is how two people come to speak the same language, the meta-discussion is only going to hinder that process.

Perspective Philosophy:
@curi

Shadow Starshine:
Because I comment on multiple things, but I'm telling you what I'd REALLY like is to bypass all of it

curi:
the discrepancies were too large and complicated, and layering on each other, to do it that way PP

curi:
needed some acknowledgement of the situation and attempt to take it into account

Freeze:
Like it seems like the more relevant thing in curi's eyes than an article about computation are the articles about inferential distance: https://ptb.discordapp.com/channels/304082867384745994/304082867384745994/658476068436705320

To SS, the more relevant thing is computation itself, resources on that or discussion about computation specifically, as well as discussion about SS's example of computation:

What I'm taking it to mean is that the physical structure doesn't change of the neurons, it's just the electron and neurotransmitters passing data along
Do you have a different idea in mind?
You said "electrons are fast" so i'm assuming you are talking about the electrochemical signaling

this difference of ideas about what is most relevant to the progression of the discussion is a topical, substantive disagreement that needs to be addressed. It's not something where we should just say that curi should do what SS wants or SS should do what curi wants. If SS just concedes to curi and reads the articles (without being rationally persuaded of curi's perspective) I think that would be wrong and harmful to the discussion. Same for if curi agrees to what SS wants without being rationally persuaded as to why that is best for the discussion.

Shadow Starshine:
What am I conceding?

Perspective Philosophy:
so you had some entry-level requirements, what were they? It might help the situation to understand these requirements.

Perspective Philosophy:
@curi

curi:
his epistemology, discussion methodology and approach to precision are quite different than mine, in addition to him not sharing my view of physics and computation.

Perspective Philosophy:
well you surely can talk to people who disagree?

Freeze:

What am I conceding?
Not what you're conceding, but what you shouldn't concede without being rationally persuaded of it first (if it's true), which is the idea that the inferential distance concept and application is more immediately relevant and important to the discussion than the understanding of computation.

Shadow Starshine:
What I would have wanted from the discussion is you saying "I agree with your definition" or "I disagree with that definition, and here's why (insert short snippet of where the disagreement may be)"

Freeze:
which relates to what TheRat said here:

I do agree with SS that an attempt at explaining computation or linking to a resource that explains computation in a way curi endorses would have been a much better approach than going meta. I feel like past 2-3 hours the progress has been quite minimal if any.

Shadow Starshine:
Then I would proceed to ask questions about it

curi:
sure but if he wants to understand what i think about consciousness he needs to understand some other stuff first. the direct approach was tried and wasn't close to working.

Shadow Starshine:
You already told me consciousness isn't in your lexicon

Shadow Starshine:
and was contextual

Shadow Starshine:
I'm not concerned with that word anymore

Perspective Philosophy:
okay, well why don't you try explaining it to me, perhaps in the most basic terms you can. That way all parties can hopefully gain some insight?

Shadow Starshine:
We were talking about why you think people changing their mind was analogous to an updated spreadsheet

curi:
PP i don't want to explain something to you unless you are interested, have a question. has to be a real learning process. and there are a lot of topics i'd suggest are more interesting. i don't want you to act as a go between to lure me into making statements for SS's benefit.

curi:
what particularly interests me is the problem of how to talk with someone when errors are accumulating in the discussion faster than they're being cleared up.

curi:
especially when the rate of clearing them up is very low. hardly any conversational branches seem to get resolved.

Perspective Philosophy:
What do you think of Habermas on communicative action?

Shadow Starshine:
That sounds good, but it doesn't seem to be working out for you. I'd suggest you stop making statements that involve people's intentions.

curi:
not familiar

Shadow Starshine:
Or abilities

Shadow Starshine:
and just stick to argumentation

curi:

We were talking about why you think people changing their mind was analogous to an updated spreadsheet

that is a gross misstatement of what i said, and it's like the 20th one today.

Shadow Starshine:
Okay mr precision

Shadow Starshine:
show me 1-19

Shadow Starshine:
I want them numbered

curi:
what's in it for me?

curi:
i'll bet $10,000 on whether i can do it.

Shadow Starshine:
You're the precise one

Shadow Starshine:
How would we figure out if you actually accomplished the goal

Shadow Starshine:
or whether any of your points count

TheRat:
We've reached meta levels I didn't think were possible 😦

curi:
are you actually interested if we got terms and a referee?

Shadow Starshine:
You'll give me $10,000 if you're wrong?

curi:
and you give me 10k if i'm right.

Shadow Starshine:
then quite possibly, though I'd wager $1000

Shadow Starshine:
since 10,000 is out of my wheelhouse

curi:
no i'm not making the effort for that amount of money.

Shadow Starshine:
well you can't say I'm not willing to call it

curi:
and if you don't have the money, i don't want to take 1k from you anyway

Shadow Starshine:
I have 1k

Shadow Starshine:
And I'll gladly take another

curi:
if 10k is out of your wheelhouse, losing 1k would be a big deal for you.

Shadow Starshine:
That's a stupid statement

JustinCEO:
can you just do fewer examples of imprecision for $1k?

TheRat:
You rich mofos

curi:
that'd be way easier tho

Shadow Starshine:
1k is not a big deal to me

Shadow Starshine:
and 10k is out of my wheelhouse

Shadow Starshine:
both those are true

TheRat:
1k would ruin me If I lost it

Shadow Starshine:
I have 8k in the bank, I barely use money

Shadow Starshine:
I literally can't pay 10k, but 1k wouldn't halt anything I do

Shadow Starshine:
so that statement was false

Perspective Philosophy:
if i had 1k id still be in debt

Shadow Starshine:
so your excuse of breaking the bank on me is false, and I stand by not backing down

Shadow Starshine:
but if 1k isn't worth your time, fine

Shadow Starshine:
you're still wrong about that hypothetical

curi:
how many misstatements of my views do you think you made?

curi:
do you remember when i pointed out several when you were ignoring me?

Shadow Starshine:
you're the one who said 20, don't ask me questions about it. And just because you think they occurred doesn't mean they did

Shadow Starshine:
hence why we need a referee

Shadow Starshine:
Now, do you agree your statement, that if 10k is too much then 1k would be a big deal to me, was false?

Shadow Starshine:
are you able to concede that?

curi:
i don't think betting 12.5% of your savings is a reasonable amount

Shadow Starshine:
I didnt ask that

curi:
i don't agree with you

Shadow Starshine:
you said it was a big deal to me

Shadow Starshine:
I showed that it wasnt

Shadow Starshine:
Do you concede

curi:
i just told you i don't.

Shadow Starshine:
No, you said reasonable amount

Freeze:

I have 8k in the bank, I barely use money
I literally can't pay 10k, but 1k wouldn't halt anything I do
Does him barely using money change your perspective on the contextual validity of betting $1k on something he's confident in being right about to an objective referee?

Shadow Starshine:
But sure, if you're saying you don't agree and that it IS a big deal to me

curi:
you seem to doubt my claim re ~20 misstatements. so i have a question for you, "how many misstatements of my views do you think you made?"

Shadow Starshine:
can you prove that

Perspective Philosophy:
okay, I think this is ridiculous. That being said, please tell me what is reasonable in terms of gambling? What is a reasonable risk to take?

Perspective Philosophy:
@curi

TheRat:
I think this is a silly discussion but SS could easily not find losing 12% of his money a big deal, while curi also is right that 12% of your entire bank account can be considered a big deal.

Shadow Starshine:
That question is irrelevant to the conversation, you show that it's a big deal to me

curi:
google poker bankroll management

Shadow Starshine:
right now

Shadow Starshine:
it COULD be considered a big deal

Shadow Starshine:
but it isn't by me

Freeze:
https://www.cardschat.com/poker-bankroll-management.php

Perspective Philosophy:
@curi That doesn't give me a justification as to why that is a 'Reasonable' amount

Shadow Starshine:
Now he says it is

Shadow Starshine:
I want proof

curi:
you don't think poker knowledge constitutes any kind of argument re reasonable approach to this subject?

Shadow Starshine:
I don't care about "reasonableness", what I'm asking is whether I think it is a big deal

Shadow Starshine:
You said it is

Shadow Starshine:
Prove it

Shadow Starshine:
stop asking questions

Shadow Starshine:
make an argument

Shadow Starshine:
or concede

Shadow Starshine:
that you can't make that claim

Freeze:
curi:

if 10k is out of your wheelhouse, losing 1k would be a big deal for you.
I'm trying to figure out if this is a claim about how SS would feel or if it is an objective moral/financial claim

Freeze:
SS seems to have interpreted it as the former

Freeze:
But it might be the latter

JustinCEO:
its absolutely objective

Perspective Philosophy:
First, this is not poker, so I don't care about that. Second, the risk to reward ratio is an aspect of rationality, and so unless you can determine the risk to be without rational justification, then it very possibly could be reasonable

curi:
yeah that's meant as a statement about reality, not about his opinions

Freeze:
Like me losing my arm would be an objectively big deal even if I felt 0 emotional distress or burden intellectually or something

JustinCEO:
totally crystal clear to me

JustinCEO:
no ambiguity

Freeze:
ok so that's where the misunderstanding is

Shadow Starshine:
Then qualify the statement about reality. In what way is it a big deal to me

Shadow Starshine:
In what way does it affect me

Shadow Starshine:
Or would in any way be problematic to my life

curi:
PP there is literature about how to bet well. idk why you're rejecting it out of hand. see also the Kelly criterion.
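
For reference, the Kelly criterion curi points to can be sketched in a few lines. The probabilities and stakes below are illustrative assumptions for the 1k-of-8k example under discussion, not figures anyone in the chat stated:

```python
# Kelly criterion sketch: for a bet paying b-to-1 with win probability p,
# the bankroll fraction maximizing long-run log growth is f* = p - (1 - p) / b.

def kelly_fraction(p: float, b: float) -> float:
    """Optimal fraction of bankroll to stake on a b-to-1 bet won with probability p."""
    return p - (1 - p) / b

# Even a bettor 60% confident in an even-money (1-to-1) bet should stake
# only about 20% of bankroll.
print(kelly_fraction(0.6, 1.0))     # ~0.2

# Staking 1k of an 8k bankroll (12.5%) at even money implies roughly
# 56%+ confidence just to be Kelly-optimal.
print(kelly_fraction(0.5625, 1.0))  # ~0.125
```

This is a standard formula from the bankroll-management literature, not an argument either party made explicitly in the discussion.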

Freeze:
but SS clarifying context that he isn't spending much money might change that objective claim. The same way having a trust that kicks in a year later with $10m in it would also change the context.

Perspective Philosophy:
the bet is determined by the game, is it not? I'd suggest reading MacIntyre's critique of Rawlsian maximin reasoning, or even Nozick's

Shadow Starshine:
Give me a criteria of "big deal", and how you plan to prove it's the case

Freeze:
expenses and savings both go into objective financial claims right? if his expenses happen to be $200 a month and will be so for the next two years (For example) then maybe $1k from an $8k savings account could be objectively fine to risk

curi:
PP do you mean that if he thinks he is a 99% favorite in the game, that changes things? b/c the actual thing i claimed is losing would be a big deal.

Shadow Starshine:
I'm saying losing it is not a big deal

Freeze:
in objective reality

Freeze:
that is an interesting discussion to be had maybe

Shadow Starshine:
Just concede the statement

Perspective Philosophy:
that doesn't make his risk unreasonable, as losing isn't guaranteed. If losing was guaranteed then all risk would be unreasonable.

TheRat:
Well SS could say, I want to set 1000 on fire now, and its not a big deal to me.

curi:
i don't think you're understanding what i said, PP. have you reread my actual message?

Freeze:
so this isn't about the risk he's taking. It's just about:
Is SS losing $1,000 today a big deal or not (objectively)

Freeze:
SS says losing it is not a big deal

Freeze:
curi says it is

Shadow Starshine:
you notice how he doesn't bother showing how it is a big deal and just gets distracted by other shit?

Perspective Philosophy:
i don't think betting 12.5% of your savings is a reasonable amount

Perspective Philosophy:
"Reasonable"

curi:
PP that statement was made in context

TheRat:
well how would curi know what is a big deal to SS or not? Isn't that a claim on his qualia of losing that money?

Shadow Starshine:
@TheRat Hes not bothering to clarify

Freeze:
relevant context:
curi:

if 10k is out of your wheelhouse, losing 1k would be a big deal for you.
SS:
I have 8k in the bank, I barely use money
I literally can't pay 10k, but 1k wouldn't halt anything I do
so that statement was false

Perspective Philosophy:
Okay, so if we talk about it being a big deal, then we've created an unfalsifiable statement if it doesn't relate to rationality

curi:
the context was i wouldn't want to take his 1k b/c

losing 1k would be a big deal for you.

so my position doesn't depend on the game odds.

curi:
he denied this rather than claiming the odds were favorable enough

Perspective Philosophy:
either it's about reason or about Shadow's evaluation. which is it?

curi:
what's "it" in your message?

curi:
first one

Perspective Philosophy:
it was referring to your position and the territory of this current discussion

curi:
my position is multi-part, so that's a false dichotomy

Perspective Philosophy:
excuse me im going to shoot myself

Freeze:
SS is saying objectively that losing $1k would not be a big deal based on his context in life, since he barely spends money, and losing $1k wouldn't halt anything he does. But maybe part of curi's argument is that not halting things in life is not the only or most relevant objective measure for financial decisions.

Shadow Starshine:
curi hasn't made an argument

Shadow Starshine:
If he doesn't make an argument now

curi:
ok if making logic errors results in you not wanting to converse further instead of wanting to learn (or reach a conclusion and potentially teach), then we shouldn't talk.

Shadow Starshine:
I'm taking it as a concession

TheRat:

if 10k is out of your wheelhouse, losing 1k would be a big deal for you.

Perspective Philosophy:
well it's not objective, i could agree with that. Unless he means it's objectively the case that he doesn't give a shit

Freeze:
im reading this poker bankroll thing

Freeze:
that seemed to be part of curi's argument

Freeze:
as relevant knowledge

Freeze:
but im realizing first off that you and curi have different views on objective morality/objective knowledge right?

Shadow Starshine:
right, so no argument

Perspective Philosophy:
It doesn't matter, because I would like to think we could talk within a general language community?

curi:
argument for what, SS?

Shadow Starshine:
why it's a big deal

curi:
what doesn't matter PP?

Shadow Starshine:
Asked like 5 times now

Shadow Starshine:
he just avoids the question

curi:
google poker bankroll management, i told you already

Shadow Starshine:
I'm not googling shit, just type your argument

curi:
that was my argument

Shadow Starshine:
it's not an argument

curi:
i disagree

Shadow Starshine:
asking someone to google

TheRat:

if 10k is out of your wheelhouse, losing 1k would be a big deal for you.

That alone is not talking about poker bankroll or anything of the sort Freeze. Without context it is a claim on SS's experience of losing 1k. Which I don't think can be made objective, he may well not care at all about setting 1k aflame.

Shadow Starshine:
is not an argument

curi:
i guess we have a perspective gap on epistemology, as i said

Freeze:
PP, do you subscribe to the justified true belief conception of epistemology?

Shadow Starshine:
Write down your argument

Shadow Starshine:
in this chat

Freeze:
PP:

That doesn't give me a justification as to why that is a 'Reasonable' amount

curi:
i didn't make any statement about his mental experiences, rat.

TheRat:
That's true.

TheRat:
Big deal could mean many things

Shadow Starshine:
Your argument at this point couldn't have anything to do with how I think of it, because it would be false. It can't be about how it negatively impacts my life, because that would be false.

Shadow Starshine:
What do you have left?

Shadow Starshine:
Write it down

Freeze:
@TheRat I asked for clarification about that here Rat, curi explained it was objective. You interpreted it as subjective but didn't ask for clarification. I asked because I wasn't sure. J interpreted it as objective. https://ptb.discordapp.com/channels/304082867384745994/304082867384745994/658521491713032202

Freeze:
Edited assumed out of the above ^

TheRat:
I was talking about the quote alone, as I said, "without context"

curi:
rat's comment is fine IMO

Freeze:
i just wasn't sure how to interpret curi's statement. it seems like he could have meant it as objective or subjective

Shadow Starshine:
I'm taking his lack of typing what he meant for the last 10 minutes

Shadow Starshine:
to just be dishonesty

Perspective Philosophy:
wait so @curi Your position on "big deal" is an esoteric notion based on bankroll management

Freeze:

That alone is not talking about poker bankroll or anything of the sort Freeze. Without context it is a claim on SS's experience of losing 1k. Which I don't think can be made objective, he may well not care at all about setting 1k aflame.
I think we agree that SS can feel no emotional distress or anguish from burning 1k and for that to still be objectively a bad thing for him/big deal to his financial situation

TheRat:
that's because you know curi, and J does. I am trying to take a perspective from someone who doesn't know curi and apply it. Without context includes that too Freeze.

JustinCEO:
that losing 1/8th of your small savings would be a big deal seems really common sensical to me, not remotely esoteric

Freeze:
the poker bankroll was a later link in response to PP asking how it can be objective

Shadow Starshine:
1/8th of a savings is not a big deal if it doesn't actually impact your life

Shadow Starshine:
I don't even use half of it

Freeze:
depends on context a bit right J. Like SS had context alongside saying he had $8k in bank, which is that he barely spends money

Freeze:
that is as relevant as the $8k number imo

JustinCEO:
ya but u can have an emergency man

curi:
i don't think bankroll management is esoteric re betting

Shadow Starshine:
Anyone can have an emergency of any amount, that's a vague statement

Shadow Starshine:
stop trying to defend this nonsense

Shadow Starshine:
he isn't giving any argument

Shadow Starshine:
nor clarifying

TheRat:
What are the odds this conversation would ever return to computation?

Perspective Philosophy:
either big deal is subjective or objective. If it's objective then what does it relate to?

Freeze:
right, we'd need an explanation for why $8k is significantly better for you in your savings account than $7k. Why that difference is a big deal in objective reality.

Shadow Starshine:
He made a statement, can't back it up

Shadow Starshine:
and is wasting time

Shadow Starshine:
I take this as a concession

Freeze:
@TheRat is odds the right way to think about it? I think if people ask for it to be about computation now or later, it can return to computation quite easily

curi:
my objective evaluation of the bet sizing and loss impact relates to my understanding of bankroll management and bet sizing theory

curi:
i don't see what's difficult about this

Shadow Starshine:
so write it

Shadow Starshine:
dont tell me what it relates to

Shadow Starshine:
just write it down

curi:
i'm not writing a poker blog post for you in real time

curi:
if you want to learn you can read sources

Shadow Starshine:
you're full of shit

curi:
there's no point in me repeating it

curi:
you think i'm full of shit ... as in i'm bluffing about knowing anything about poker or having read this stuff b4?

Shadow Starshine:
I think you're full of shit of having an actual argument

curi:
you think these arguments don't exist at all?

Perspective Philosophy:
Okay, so your evaluation was based on an esoteric notion which would then need to be grounded and justified. You would then also have to say that Shadow's non-acceptance of said understanding was unreasonable.

Shadow Starshine:
I think YOU don't have an ARGUMENT about the statement IT'S A BIG DEAL

curi:
i don't see what's esoteric about gambling knowledge in context of bet sizing

TheRat:
maybe we should start our own bet Freeze lol. Does this conversation return to computation (excluding you and me) on its own.

Freeze:
i think he wants an argument for how bankroll management relates to 1k being objectively a big deal for SS's situation

Shadow Starshine:
Stop asking me these stupid tangential questions

Freeze:
@TheRat i don't predict people

curi:
SS, you think i couldn't write comments on bet sizing without looking them up first? is that what you mean?

TheRat:
I was joking.

Freeze:
ah

Shadow Starshine:
Why are you asking me that

TheRat:
❤️ u

curi:
it's hard to figure out what you're saying

Shadow Starshine:
do you think that's what I mean?

Shadow Starshine:
I literally wrote in caps what I mean

Shadow Starshine:
how can you not understand

curi:
i told you i have an argument re bet sizing

curi:
i don't know in what sense you think i don't have it?

Shadow Starshine:
I don't think you have an argument to relate it to what you said to me

JustinCEO:
Shadow Starshine, were you discussing a bet before?

Shadow Starshine:
If you do, write it

curi:
you don't think i have any argument that could relate bet sizing and bankroll management theory to a particular example of a possible bet?

Shadow Starshine:
Stop asking me questions, write an argument

Shadow Starshine:
just write it

curi:
is that what you meant?

Shadow Starshine:
write your argument

curi:
if you won't clarify what you meant, i'm not going to answer you.

Shadow Starshine:
write the argument that shows 1k loss to me is a big deal

Perspective Philosophy:
well, considering that you're saying he'd be wrong in his understanding of what "big deal" means, you would be concluding that most individuals have a wrong understanding of value, and that value is independent of the individual's evaluation of the value of money.

Bankroll management would have to assume a unified value of money for each individual, then assume that all risk that goes beyond its limits would be unjustified based on this assumed theory of value.

Shadow Starshine:
that has been the question for the last 15 minutes

Shadow Starshine:
write that argument

Shadow Starshine:
you made a statement

Shadow Starshine:
back it up

curi:
PP, i take it you're not familiar with bankroll management literature or the Kelly criterion?

curi:
you're incorrectly characterizing what it says.

Perspective Philosophy:
I don't see how that is relevant to a term that is used within general language?

Perspective Philosophy:
You are using subject specific vocabulary.

curi:
there is literature explaining how to use math to evaluate these matters

curi:
my comment was referring to that kinda math, not to ppl's arbitrary opinions

Freeze:

Okay so your evaluation was based on an esoteric notion from which would then need to be grounded and justified. You would then also have to say that shadows unacceptance of said understanding was unreasonable.
@Perspective Philosophy I disagree with grounding/justification

Freeze:
foundational epistemology differences here

Shadow Starshine:
define "big deal", use a definition that would be generally acceptable by people, then show me how a 1k loss meets that definition

Shadow Starshine:
This isn't hard

Perspective Philosophy:
are you guys trolling

TheRat:
No.

Freeze:
You've mentioned justification twice PP

Freeze:
I asked you about that here: https://ptb.discordapp.com/channels/304082867384745994/304082867384745994/658524369579933726

Perspective Philosophy:
Then why do you think there is a problem with epistemology here @Freeze

curi:
i'm not trolling and don't get how knowing some math is what gets you to accuse me

Shadow Starshine:
more dishonest framing

Freeze:
Because justification is a chimera. knowledge does not need to be justified, is not grounded, and grows through conjecture and criticism

Perspective Philosophy:
I would argue that truth is justified true belief that also doesn't contain any relevant falsehood

Freeze:
ok

Freeze:
that's an important discussion to have

Freeze:
and is why I asked you if that's what you believe

curi:
PP, that position disagrees with Critical Rationalism. have you heard of it?

Freeze:
do you know that critical rationalism has arguments against that belief?

curi:
this is a CR group FYI.

Shadow Starshine:
It's quite clear im never getting an argument

Shadow Starshine:
which is the only way this discussion would really progress

Perspective Philosophy:
but wait, if we don't require justification, then why do we need critique

Perspective Philosophy:
?

TheRat:
I don't think the argument hinges on his epistemology though. Not directly. By "justified" I think in this case PP meant unargued, unexplained. Which is fine.

curi:
have you heard of Karl Popper?

Perspective Philosophy:
I have

curi:
are you familiar with his criticism of JTB?

Perspective Philosophy:
yes if you mean his position on JTB not requiring absolute certitude

curi:
that is not his position.

curi:
where did you get that?

Perspective Philosophy:
one sec ill link a source

Perspective Philosophy:
okay, so this is taken from SEP on Popper

Perspective Philosophy:
If such conclusions are shown to be true, the theory is corroborated (but never verified). If the conclusion is shown to be false, then this is taken as a signal that the theory cannot be completely correct (logically the theory is falsified), and the scientist begins his quest for a better theory.

This is his fourth step on the growth of human knowledge. You'll see it clearly says that a statement is NEVER verified. That is because he argued absolute verification was impossible and that the scientific model relies upon falsification to determine truth. The statement that incurs the least falsehood is closer to the truth than the statement which incurs more.

TheRat:
So verificationism is a form of justificationism but this is not his refutation of justificationism

curi:
the issue isn't whether Popper denied verification.

Perspective Philosophy:
he doesnt refute justification

Perspective Philosophy:
he refutes the need for verification

curi:

his position on JTB not requiring absolute certitude

what you wrote here implies that Popper thinks JTB is possible to acquire. he does not.

Perspective Philosophy:
He does argue that we are justified in our beliefs, but that our justification is based on the removal of relevant falsehood. That is why i said earlier I would argue that truth is justified true belief that also doesn't contain any relevant falsehood

Freeze:

He does argue that we are justified in our beliefs but that our justification is based on the removal of relevant falsehood.

Freeze:
Is this based on that statement from SEP above?

curi:

Like many other philosophers I am at times inclined to classify philosophers as belonging to two main groups—those with whom I disagree, and those who agree with me. I might call them the verificationists or the justificationist philosophers of knowledge or of belief, and the falsificationists or fallibilists or critical philosophers of conjectural knowledge. I may mention in passing a third group with whom I also disagree. They may be called the disappointed justificationists—the irrationalists and sceptics.
The members of the first group—the verificationists or justificationists—hold, roughly speaking, that whatever cannot be supported by positive reasons is unworthy of being believed, or even of being taken into serious consideration.

curi:
have you read any Popper?

curi:
that's C&R

curi:

1 With Hume, knowledge is a kind of justified true belief. This whole approach clashes with mine.

RASc

Perspective Philosophy:
I have read popper and the problem was in language

curi:
which Popper have you read?

curi:

  1. Humanism, Science, and the Inductive Prejudice.

There is no probabilistic induction. Human experience, in ordinary life as well as in science, is acquired by fundamentally the same procedure: the free, unjustified, and unjustifiable invention of hypotheses or anticipations or expectations, and their subsequent testing. These tests cannot make the hypothesis ‘probable’.

RASc

curi:

That we cannot give a justification-or sufficient reasons- for our guesses does not mean that we may not have guessed the truth; some of our hypotheses may well be true.[31]

OK

Perspective Philosophy:
falsificationists also believe in JTB. On the other hand, if you understand justificationist to be synonymous with verificationist, then the problem is with those terms

curi:
you're just asserting claims about the beliefs of my school of thought, while i'm giving quotes of Popper?

TheRat:
No, Like I said it is a form of, not a synonym. (re verificationism)

Perspective Philosophy:
1 sec

Perspective Philosophy:
That we cannot give a justification-or sufficient reasons- for our guesses does not mean that we may not have guessed the truth; some of our hypotheses may well be true.[31] That's not knowledge though. his point is that our statements can be true without our knowing why.

an example would be saying, without looking, "It is raining outside", and it is. The statement is true but it would not be classed as knowledge, because it doesn't meet the epistemological criterion. Once we investigate the statement and determine its truth value, then it becomes knowledge.

There is no probabilistic induction. Human experience, in ordinary life as well as in science, is acquired by fundamentally the same procedure: the free, unjustified, and unjustifiable invention of hypotheses or anticipations or expectations, and their subsequent testing. These tests cannot make the hypothesis ‘probable’.

He is rejecting induction not JTB.

he goes on to say
They can only corroborate it--and this only because 'degree of corroboration' is just a label attached to a report, or an appraisal of the severity of tests passed by the hypothesis.....for there is considerable intuitive force in the assertion that the probability of a law increases with the number of its observed instances. I have attempted to explain this intuitive force by pointing out that probability and degree of corroboration have not been properly distinguished.

Perspective Philosophy:
To prove my point that he does believe in truth and knowledge, here is a quote from SEP. What you will notice is that he doesn't believe in certitude, but does accept truth as being any statement which avoids relevant falsehood and describes reality.

Popper was initially uneasy with the concept of truth, and in his earliest writings he avoided asserting that a theory which is corroborated is true—for clearly if every theory is an open-ended hypothesis, as he maintains, then ipso facto it has to be at least potentially false. For this reason Popper restricted himself to the contention that a theory which is falsified is false and is known to be such, and that a theory which replaces a falsified theory (because it has a higher empirical content than the latter, and explains what has falsified it) is a ‘better theory’ than its predecessor. However, he came to accept Tarski’s reformulation of the correspondence theory of truth, and in Conjectures and Refutations (1963) he integrated the concepts of truth and content to frame the metalogical concept of ‘truthlikeness’ or ‘verisimilitude’. A ‘good’ scientific theory, Popper thus argued, has a higher level of verisimilitude than its rivals, and he explicated this concept by reference to the logical consequences of theories. A theory’s content is the totality of its logical consequences, which can be divided into two classes: there is the ‘truth-content’ of a theory, which is the class of true propositions which may be derived from it, on the one hand, and the ‘falsity-content’ of a theory, on the other hand, which is the class of the theory’s false consequences (this latter class may of course be empty, and in the case of a theory which is true is necessarily empty).

Perspective Philosophy:
Popper offered two methods of comparing theories in terms of verisimilitude, the qualitative and quantitative definitions. On the qualitative account, Popper asserted:

``Assuming that the truth-content and the falsity-content of two theories t1 and t2 are comparable, we can say that t2 is more closely similar to the truth, or corresponds better to the facts, than t1, if and only if either:

(a) the truth-content but not the falsity-content of t2 exceeds that of t1, or

(b) the falsity-content of t1, but not its truth-content, exceeds that of t2. (Conjectures and Refutations, 233).``

curi:

He is rejecting induction not JTB.

when he says unjustified he's rejecting the J in JTB.

Human experience, in ordinary life as well as in science, is acquired by fundamentally the same procedure: the free, unjustified, and unjustifiable invention of hypotheses or anticipations or expectations, and their subsequent testing.

Perspective Philosophy:
Only if, as I suggested, you take 'justified' to mean 'verified'.

curi:

To prove my point that he does believe in truth and knowledge

you're getting way off topic. i said Popper rejects JTB. yes he believes in truth and conjectural knowledge.

curi:
he literally said our knowledge is "unjustified, and unjustifiable" and you think it's compatible with "justified" b/c of something about verification? what?

Perspective Philosophy:
he's arguing that when we obtain our knowledge through induction it's unjustifiable.

curi:
no, he thinks all knowledge is unjustifiable

Perspective Philosophy:
he'd argue verification is not a justifiable methodology, as it also relies upon induction: the acceptance of immediate experience as fundamentally true.

curi:

on my view, all views—good and bad—are in this important sense baseless, unfounded, unjustified, unsupported.)

RASc

Freeze:

Human experience, in ordinary life as well as in science, is acquired by fundamentally the same procedure: the free, unjustified, and unjustifiable invention of hypotheses or anticipations or expectations, and their subsequent testing.

curi:

In so far as my approach involves all this, my solution of the central problem of justification—as it has always been understood—is as unambiguously negative as that of any irrationalist or sceptic.

RASc

Freeze:
he's talking about the fundamental growth of knowledge

curi:
have you read Popper on episteme and doxa?

curi:
in WoP

TheRat:
Actually Popper thought induction to be impossible, so I don't think that's quite the right interpretation. Indeed he is talking about knowledge there. @pp

curi:

Yet I differ from both the sceptic and the irrationalist in offering an unambiguously affirmative solution of another, third, problem which, though similar to the problem of whether or not we can give valid positive reasons for holding a theory to be true, must be sharply distinguished from it. This third problem is the problem of whether one theory is preferable to another—and, if so, why. (I am speaking of a theory’s being preferable in the sense that we think or conjecture that it is a closer approximation to the truth, and that we even have reasons to think or to conjecture that it is so.)
My answer to this question is unambiguously affirmative. We can often give reasons for regarding one theory as preferable to another. They consist in pointing out that, and how, one theory has hitherto withstood criticism better than another. I will call such reasons critical reasons, in order to distinguish them from those positive reasons which are offered with the intention of justifying a theory, or, in other words, of justifying the belief in its truth.
Critical reasons do not justify a theory, for the fact that one theory has so far withstood criticism better than another is no reason whatever for supposing that it is actually true. But although critical reasons can never justify a theory, they can be used to defend (but not to justify) our preference for it: that is, our deciding to use it, rather than some, or all, of the other theories so far proposed. Such critical reasons do not of course prove that our preference is more than conjectural: we ought to give up our preference should new critical reasons speak against it, or should a promising new theory be proposed, demanding a renewal of the critical discussion.

RASc

curi:

Giving reasons for one’s preferences can of course be called a justification (in ordinary language). But it is not a justification in the sense criticized here. Our preferences are ‘justified’ only relative to the present state of our discussion.
Postponing until later the important question of the standards of preference for theories, I will now give Bartley’s view of the new problem situation which has arisen. He describes the situation very strikingly by saying that, after having given a negative solution to the classical problem of justification, I have replaced it by the new problem of criticism, a problem for which I offer an affirmative solution.
This transition from the problem of justification to the problem of criticism, Bartley suggests, is fundamental; and it gives rise to misunderstandings because almost everybody takes it implicitly for granted that everybody else (I included) accepts the problem of justification as the central problem of the theory of knowledge.
For according to Bartley all philosophies so far have been justificationist philosophies, in the sense that all assumed that it was the prima facie task of the theory of knowledge to show that, and how, we can justify our theories or beliefs.

RASc

curi:
he goes on and on

curi:

Bartley observes that my approach has usually been mistaken for some form of justificationism, though in fact it is totally different from it.

curi:
you can argue Popper was wrong but you're factually mistaken about what his views are, what positions he takes, what he thinks from his perspective

Perspective Philosophy:
@TheRat The full quote explains how he rejects the new theories of induction released every day. What he's saying is that this theory of knowledge is inadequate.

@curi It is worth noting that early Popper rejected truth whilst late Popper did not and accepted the correspondence theory of truth.

curi:
that is inaccurate. i don't know why you're still trying to lecture me on what Popper said.

curi:

  Language analysts regard themselves as practitioners of a method peculiar to philosophy. I think they are wrong, for I believe in the following thesis.

  Philosophers are as free as others to use any method in searching for truth. There is no method peculiar to philosophy.

LScD

curi:

  *1Not long after this was written, I had the good fortune to meet Alfred Tarski who explained to me the fundamental ideas of his theory of truth. It is a great pity that this theory—one of the two great discoveries in the field of logic made since Principia Mathematica—is still often misunderstood and misrepresented. It cannot be too strongly emphasized that Tarski's idea of truth (for whose definition with respect to formalized languages Tarski gave a method) is the same idea which Aristotle had in mind and indeed most people (except pragmatists): the idea that truth is correspondence with the facts (or with reality). But what can we possibly mean if we say of a statement that it corresponds with the facts (or with reality)? Once we realize that this correspondence cannot be one of structural similarity, the task of elucidating this correspondence seems hopeless; and as a consequence, we may become suspicious of the concept of truth, and prefer not to use it. Tarski solved (with respect to formalized languages) this apparently hopeless problem by making use of a semantic metalanguage, reducing the idea of correspondence to that of 'satisfaction' or 'fulfilment'.

  As a result of Tarski's teaching, I no longer hesitate to speak of 'truth' and 'falsity'. And like everybody else's views (unless he is a pragmatist), my views turned out, as a matter of course, to be consistent with Tarski's theory of absolute truth. Thus although my views on formal logic and its philosophy were revolutionized by Tarski's theory, my views on science and its philosophy were fundamentally unaffected, although clarified.

LScD

curi:
note LScD is early Popper

Perspective Philosophy:
Giving reasons for one’s preferences can of course be called a justification (in ordinary language). But it is not a justification in the sense criticized here

I made this point. What I said is that if we take Justification to mean Verification, then Popper rejects it.

Popper does not, on the other hand, reject JTB once it is qualified (per Gettier) to not contain relevant falsehood; hence he accepts truth, just not absolute certitude.

If we take a justified true belief to mean (b) the falsity-content of t1, but not its truth-content, exceeds that of t2,

then we only have a semantic issue.

Unless you're arguing he rejects knowledge.

Perspective Philosophy:
Anyway guys, I'm done for tonight, it's 6am. Perhaps we can clarify things further another time.

curi:
Popper accepts conjectural knowledge which is a different thing than JTB

curi:
the things you're saying are typical of the secondary sources which misrepresent Popper

curi:
Popper himself wrote lengthy replies to some of these myths

curi:
such as, repeating:

my approach has usually been mistaken for some form of justificationism, though in fact it is totally different from it.

which you did not respond to, nor the many other statements like it, including explanations of the differences

curi:
you also never said what you'd read nor answered the specific question re relevant parts of WoP

curi:
since you seem far less familiar with Popper, and to be an opponent, why not believe what I, a more familiar advocate, tell you CR says?

Vox Dialectica:
Lol

Vox Dialectica:
Nice posturing

curi:
we could be discussing whether the CR view is correct instead of him debating what it actually is

Freeze:
I need to find a replacement for quotation marks

Freeze:
for hypothetical scenarios where we want to represent hypothetical speech

Freeze:
ill use >> for now

jordancurve:
I use asterisks. John said *I want lunch* and James said *I want to play Chess*.

curi:
jkl

Freeze:
oh yeah this could work

Freeze:
this could work although people use italics for emphasis too

JustinCEO:
https://discordapp.com/channels/304082867384745994/304082867384745994/658372850515836940

I think @Kate wants to claim that she's only having problems interacting with @curi. I am one example of another person who has problems interacting with @Kate. Kate's dishonesty is to the point that I don't wanna engage with her anymore

TheRat:
What happened J?

JustinCEO:
?

TheRat:
You said you had problems interacting with Kate. What were those problems?

JustinCEO:
Big picture there is a years long pattern of evasion which never gets resolved. This makes discussions difficult to have and also seem pointless.

JustinCEO:
See "Evadin' Kate" series on FI for various examples

JustinCEO:
I compiled many

JustinCEO:
https://www.dropbox.com/s/8228kqdt0vtn5uo/fmapp%252Fprint.pdf?dl=0

JustinCEO:
Perma link to post would be better but busy atm

JustinCEO:
and on phone

JustinCEO:
Kate basically won't actually concede having ongoing moral problems that are live issues right now and causing problems.
She'll concede stuff in the past tense or as theoretical possibilities

Also, I do reserve the right to challenge anyone criticizing me about a particular example of irrationality. You are irrational, too, as well as fallible. I'm not going to concede irrationality unless I see it for myself. And you shouldn't want me to.

I have problems I'll actually concede right now. But with Kate, for every claim she's doing something wrong and it's causing problems, she wants a full trial each time, without explaining how the past problems were resolved and with no evidence of bad past character admissible

TheRat:
Hmm. Hard to gather anything from that post other than assertions that she has a pattern of evasions. Her attempts at introspection you deemed insufficient. I see, for example, she said that reliving the mistakes is not helping her figure out what to do next but it is making her upset. Which is fine; rumination is not a good thing, and no psychologist would recommend that type of introspection.

Is the problem with you J that she didn't agree to your way of introspection?

JustinCEO:
TheRat

JustinCEO:
Look up Evadin Kate series

JustinCEO:
If you want more details

JustinCEO:
The post has that series as background and explicitly mentions it

JustinCEO:
It's unfair of you to characterize something as mere assertions when u haven't read relevant background material

TheRat:
I said based on that link

JustinCEO:
You're taking a position on whether it's assertions based on insufficient knowledge. You should be more neutral if you haven't read enough yet rather than coming to jumping-the-gun judgments

JustinCEO:
Based on a link to Chinese language material I could say it just looks like a bunch of meaningless characters to me cuz I can't read Chinese. I don't think that'd be a very interesting or useful statement in general

TheRat:
It's fine to make assessments within bounded scenarios, as long as you don't take them outside the bounded scenario

TheRat:
I am now curious on what you find a sufficient introspection

TheRat:
not necessarily tied to Kate

TheRat:
but like in general

TheRat:
what are your criteria for sufficient introspection about mistakes

JustinCEO:

A suggestion: try writing 10k words of introspection on the topic of your PATTERN OF EVASION. Connect what you write to the details of concrete examples of this PATTERN OF EVASION. If you manage to do that successfully, maybe you could say that you’ve begun to start to address the PATTERN OF EVASION issue meaningfully.

JustinCEO:
That would be a start

JustinCEO:
And yes, I've written 10k words of introspection about a mistake

TheRat:

Based on a link to Chinese language material I could say it just looks like a bunch of meaningless characters to me cuz I can't read Chinese. I don't think that'd be a very interesting or useful statement in general

TheRat:
Not a fair comparison, I don't think. I asked about your problems with Kate and you linked that, which presumably you thought would answer my question.

TheRat:
So its not like a bunch of meaningless symbols

JustinCEO:
I linked it to give some brief indication of some issues, not as a self-contained and complete summary you'd find persuasive with zero follow up

TheRat:
Is the introspection qualification the number of words? Probably not right I assume.

JustinCEO:
Number of words can give some indication that meaningful effort was made

TheRat:
is it like necessary but not sufficient? minimum 10K words of introspection? Is that for patterns of mistakes? I doubt you mean for mistakes made the 1st time?

TheRat:
Yes that definitely shows effort.

JustinCEO:
It's hard to solve a serious issue in way less than 10k words of some kind of discussion (whether introspective self discussion or with another person)

JustinCEO:
I don't think 10k number is super significant but it's an okay ballpark figure for some purposes

TheRat:
What else do you need besides # of words? Or do you have an example of acceptable introspection to you that I can learn from?

JustinCEO:
Being willing to go back and explain why you wrote each word you did in part of some conversation that failed to make progress can be another good indicator of decent introspection. Like willingness to explain each "heh" or "lol" instead of treating your mind as a black box that just outputs random words with no explanation possible

JustinCEO:
That is a common issue that comes up which is why I mention it

JustinCEO:
People have an attitude of not wanting to take anything too seriously, including their own words, "jokes" etc

JustinCEO:
I have had that specific issue

JustinCEO:
Just so you don't think I'm up on Mount Olympus pronouncing judgment on the mortals or something

JustinCEO:
http://curi.us/2095-youre-a-complex-software-project-introspection-is-auditing

TheRat:
Ty J. I am at work for the next 10 hours so I'll be off and on.

Shadow Starshine:
@JustinCEO Looking at the snippet there, what do you think consciousness is?

Shadow Starshine:
wait n/m curi wrote that

JustinCEO:
http://curi.us/2194-discussion-policy-quotes-or-youre-presumed-wrong

Shadow Starshine:
This "presumed wrong" is problematic. It reminds me of Ask Yourself trying to force every discussion into syllogisms and if you don't, then you're the unreasonable one.

Shadow Starshine:
These sorts of things are discussion tools, they shouldn't be used as barriers

Shadow Starshine:
If someone paraphrases me, and that paraphrasing seems correct, I'm not going to ask for a quote

Shadow Starshine:
Only if there was an actual dispute, if someone says I said something that I don't think I did, would I ask for a quote

Shadow Starshine:
I may also add, that being able to successfully paraphrase someone marks a level of progress

Shadow Starshine:
It shows that you've entered the way they see things

JustinCEO:
If you can't quote accurately you won't paraphrase accurately

Shadow Starshine:
I neither accept that necessary relationship nor see its connection to what I wrote

JustinCEO:
Expecting otherwise is like expecting to do translation between two languages, one of which you're struggling in

JustinCEO:
I can't debate at length right now btw

Shadow Starshine:
np

Shadow Starshine:
That translation to me is a necessary part of the process. It's why I spend so much time just asking what people mean by certain words/phrases

Shadow Starshine:
Then I will try and repeat it back in my own way until common ground is formed

AnneB:
I find that quoting accurately helps me be correct more often. It forces me to go back and see what the person actually said, which is, more often than I like, not what I thought they said.

Shadow Starshine:
Sure, but don't take what I said as "quoting is bad"

Shadow Starshine:
I made a specific criticism

AnneB:

I may also add, that being able to successfully paraphrase someone marks a level of progress
I agree with this. I can't always do this and I'd like to be able to.

Shadow Starshine:
Well I think failing to do so is part of the process. Things like "So what I think you're saying is... X", and if they go "not quite" or "not at all", it sorta tells you how close you're getting

AnneB:
yeah

TheRat:
That seems pretty good to me too. Trying to reach understanding vs proving someone wrong or incapable.

curi:
what is "that"?

TheRat:

Well I think failing to do so is part of the process. Things like "So what I think you're saying is... X", and if they go "not quite" or "not at all", it sorta tells you how close you're getting

curi:
he got quite mad at me for me telling him how close he was getting

curi:
that isn't an honest statement about how he approaches discussion

curi:
he wouldn't accept direct feedback from me about how close he was

curi:
he'd challenge me about it

curi:
quite early in the discussion i tried to bring up that issue about how far apart we were in communication and understanding, and he spent hours resisting it, refusing to engage with the concept, and flaming me

TheRat:
What would you have done differently on your end? Do you think you made any missteps? Could there have been ways to rephrase things that maintained a mutual desire to understand each other?

curi:
that's a general comment, not specifically related to my messages today?

TheRat:
Not a comment. A question: do you think there were any missteps that made it worse, or do you think the conversation was doomed no matter how you approached it?

curi:
you didn't answer my question

TheRat:
I don't remember your messages today tbh curi. Probably a mistake to engage while at work as I am too distracted. I was just talking about the conversation yesterday specifically there.

curi:
i'm referring to messages from within the last 15 minutes

curi:
i said 6 things to you just now and then you asked questions, and i was trying to clarify if the questions related to those messages or not

curi:
i take it not related

TheRat:
Related only insofar as we were talking about SS's methodology, but not directly related. It was a separate question from today's messages.

curi:
ok

curi:
there could have been a solution leading to SS learning about my ideas but i don't know one, it's very hard.

curi:
it doesn't violate a law of physics though. words like "could" are very strong. idk if you really meant it. ppl use them a lot when they shouldn't.

TheRat:
I was thinking in terms of the way things were phrased specifically. Like some of the recommendations I gave. Do you think a less confrontational, more doors-open approach might have been better? Like the example I gave about computation. It went something like ~Here is a link to a book that talks about computation in a way that I endorse, but I think our perspective gap is deeper than computation and might hinder progress~ Would that not have been a better approach? If not, why not?

curi:
i told you i don't have a book link to solve that problem

TheRat:
for reference this is what you wrote,

there are large communication failures here. i regard you as adding a bunch of context to my statements, e.g. i wasn't specifically talking about brains when i mentioned electrons.

i regard you as inadequately literal and precise about what you say, so you end up making claims that aren't really what you meant. and more broadly i think you don't have the background knowledge to discuss this effectively.

TheRat:
I separated it into 2 parts in that quote

TheRat:
I think the first part seems fine. The second part, starting at "i regard", is what I think could be phrased better.

curi:
what is confrontational about sharing information about my perspective?

TheRat:
give me a few, got a customer.

TheRat:
When you put it that way... but I think examples might be useful here. Perhaps instead of saying "I find you inadequately literal", you can show where he misread or misrepresented you. And how that imprecision is hindering progress. A good time to request copy paste quotes (or a moratorium until his copy paste feature returns).

I guess just asserting that he is not precise is not quite as helpful as showing why he is not adequately precise.

Have you had more success with explanations vs assertions?

curi:

you can show where he misread or misrepresented you.

i did give an example of that.

throughout the discussion, i wasn't able to give an example of that which he accepted. he didn't want to understand one.

curi:
even though he even misquoted repeatedly and said literally before one of them

curi:
he basically thinks i'm pedantic ... which is more or less agreement that he isn't discussing at the same level of detail as i am ... but he won't consider it that way

curi:

(or a moratorium until his copy paste feature returns).

i offered him a venue switch which he didn't take. no excuses for tech issues. his responsibility for what he says.

TheRat:
Yes but I meant as a first offering. Not once the conversation got derailed and he was potentially tilted.

curi:

{Attachments}
https://cdn.discordapp.com/attachments/304082867384745994/658805276862054572/unknown.png

curi:

Yes but I meant as a first offering.

curi:
that is not responsive

curi:

Have you had more success with explanations vs assertions?

he was understanding little of what i said, so writing something complicated wouldn't make sense.

TheRat:
Yes I agree you did explain later. But I am not sure that beginning with an explanation is necessarily more complicated than beginning with an assertion. I don't have a quote in mind atm, but for example, personally I've had more success when you said "You're being sloppy in your answer because I didn't ask you that, I asked you this" instead of when you just asserted "you're being sloppy".

I feel like that leaves the door open to further understanding. I think an assertion seems more of a door getting shut, even if you do not intend it that way.

curi:

Yes I agree you did explain later.

you aren't agreeing

curi:

i regard you as adding a bunch of context to my statements, e.g. i wasn't specifically talking about brains when i mentioned electrons.

that's an example

TheRat:
Yes that's true, that was an example. Although, when you brought up electrons I remember it was fairly vague, like "electrons move quickly". You were talking about information processing speed in regard to computation, but the brain assumption (connecting electrons to biochemical reactions) doesn't strike me as a conversation halter. It is quite a complicated subject.

curi:
You’re objecting to the example cuz it doesn’t demonstrate everything alone? That is a goalpost move.

TheRat:
Yes I think so.

curi:
Ok. Do you agree goalpost moves are bad? Unclear if/what you’re conceding.

TheRat:
Yes the example is insufficient for what I had in mind in regards to furthering civil discussions.

TheRat:
I am not sure what it means to move goalposts in this scenario. I am trying to figure out if there was a better way to have approached that conversation. The way it was approached clearly didn't work.

TheRat:
Btw It is unclear whether you think you approached the conversation perfectly or if you made any missteps. By missteps I mean in regard to trying to keep the conversation going in a civil manner.

curi:

Btw It is unclear whether you think you approached the conversation perfectly or if you made any missteps.

that's a false dichotomy

curi:
the thing you asked for originally was an example of my claim. i provided it. you then changed the ask (goalpost move) to something non-standard: an example that would, alone, convince someone of the claim (that i got to via multiple examples not one big one).

TheRat:
Actually, sorry you did answer.

there could have been a solution leading to SS learning about my ideas but i don't know one, it's very hard.

You don't think you made any missteps. So you think the conversation going badly was entirely on SS?

curi:
that's a bit ambiguous (relies on cultural default criteria for what reaches the level of a misstep, which takes a lot more than mere imperfection) but i think the answer is yes.

curi:
this conversation is itself a pretty typical example of an inferential distance problem. it has some of the features of the SS conversation. TheRat's comments appear to me to lack certain background knowledge i'm using, re issues like logic and language, which makes it hard to discuss. (i don't think dropping words like "you" makes things better. but the sentence starting with TheRat is an example of what it looks like. it could also be done using quotes without actually naming the author of the quotes in a context like this where the source of the quote is adequately implied.)

JustinCEO:

[10:43 AM] JustinCEO: I linked it to give some brief indication of some issues, not as a self-contained and complete summary you'd find persuasive with zero follow up

JustinCEO:

[10:30 AM] JustinCEO: It's unfair of you to characterize something as doing mere assertions when u haven't read relevant background material

curi:
also i think there were some no-fault errors/problems. your question could be taken as asking if of the problems that were someone's fault, all were SS's. or of all problems. i think the first is the more typical meaning and is what i can say yes to.

TheRat:
Ok I see.

AnneB:
TheRat and curi may not agree on what the problems with the conversation were.

curi:
unclear which thing(s) you see.

TheRat:
Yes. Though I am far more fuzzy about it than he is. Like I am still not convinced that we had to go meta so soon. I have not considered the conversation as much once it did go meta, as to me at that point it was a lost cause.

AnneB:
My point is that before discussing who could have done something different to make the conversation go better, you should discuss what criteria you're using to decide whether the conversation went well.

TheRat:

unclear which thing(s) you see.

I think I see your view better and how my framing was a false dichotomy (either curi's fault or SS's fault). But there were faultless problems (I don't know which ones yet), and according to curi, of the problems that could be someone's fault, they were all SS's fault.

curi:
of the at-fault problems at the level of a misstep (rather than mere imperfection), all SS's

TheRat:

My point is that before discussing who could have done something different to make the conversation go better, you should discuss what criteria you're using to decide whether the conversation went well.
I don't think anyone would say the conversation went well. I've been just assuming it did not go well. I doubt ss or curi would say it went well.

AnneB:
People might have different reasons for saying it didn't go well, even if they agree that it didn't go well.

curi:
it went well for the purpose of clarifying what kind of person SS is and that he doesn't want to think and both can't and won't logic. roughly like that. i could have made that judgment beforehand but i mostly don't think others could have, and it had the productive purpose of double checking my judgment.

curi:
and it had the productive purpose of giving SS several opportunities for learning and a better life, of several different types

curi:
it also had the secondary purpose of providing an example for discussion with people like TheRat who are (irrationally, afaict) unwilling to have discussions about pre-existing archived examples instead.

TheRat:
I am not convinced that this was a revelation that SS can't or won't do logic.

TheRat:
There's a lot of missing background knowledge

curi:
which aspect of the dozens of logical errors did you find unconvincing?

curi:
(logic broadly. there isn't really a proper name for it. i'm including precision, reading and literalness stuff, including e.g. misquotes)

TheRat:
I considered the conversation a bust once it went meta. I don't find it convincing to judge people once they've tilted. Well, do you consider his misuse of quotes a logical error, or a methodological hindrance to discussions?

TheRat:
oh

curi:
there's a skillset people need to read things, think about them logically, figure out what they mean, etc. it should be automated and habitual to the point it holds up to a large extent while tilted.

curi:
similar to how if you're adequately good at it, it also holds up when very tired or distracted

TheRat:
Why?

curi:
you can also see in his behavior a sort of prioritization of tilt and emotion over logic (at least if you think he had the skill to do better if not tilted), a value choice about what to put first.

curi:
basically no one can do level 90 of this skill reliably without automating level 30. roughly like you can't reliably run an obstacle course if you aren't able to walk automatically enough to walk while angry.

curi:
or like how everyone good at calculus can do basic arithmetic correctly in a really automatic way, even if tired or tilted

TheRat:
I don't have a good answer to that.

curi:
also ppl don't develop these kinda skills really far without better self-awareness about sources of error such as tilt and other emotions

curi:
if you go through a process of learning to think and converse rationally, and develop those skills, you run into your emotional problems some (if you have big ones) and you do something to manage them at least to a moderate extent.

curi:
like how all pro overwatch players manage their tilt some, not ~zero.

TheRat:
You mentioned you developed these skills by losing debates to DD for years. So did you catch yourself getting angry and did what, tell yourself to calm down, or what method specifically? Or did you never get emotional during debates?

curi:
i had a lot of these skills before that

curi:
i didn't get angry about this kind of thing

curi:
i played chess calmly from age 4 or 5

TheRat:
So it just came naturally to you to be calm since age 4

curi:
not about everything, not perfectly, but about logical correctness type stuff and being wrong about ~clearly objective issues, yeah

TheRat:
It came naturally to you so you don't have a clear methodology then right?

TheRat:
(to hone a skill that came to you already)

curi:
i have lots of relevant writing and methodology

TheRat:
But these are based on what you think it would take though right? You haven't experienced this yourself. Are there success cases of people who have followed your methodology?

curi:
i have experienced emotions. i think you're getting the wrong idea just cuz i said stuff like not getting mad about specific categories of things like making the wrong chess move or a scientific fact.

TheRat:
I didn't mean to imply you don't feel emotions. I meant you have not had to hone the skill you mentioned, re not tilting and staying logical and not feeling emotional during debates.

TheRat:
You were already there at 4

curi:
i broadly don't think there are different methods for dealing with that than other emotions

curi:
http://fallibleideas.com/emotions

TheRat:
Can I assume there are no success cases following your methodology?

curi:
No

curi:
I think you’re wrong to focus on emotions. You can walk and do single digit multiplications even when super emo, right?

TheRat:
The methodology,

First, be calm. Take your time, there isn't as much rush or pressure as it feels like. Emotional reactions are often immediate. Instead, act thoughtfully and slowly; think things through; don't react until you're ready.

Second, be self-aware. Pay attention to, and keep track of, what you do and think and feel, and compare it to your values and how you want to be. Whenever it doesn't match, then think about what would match and at least form a quick guess at how to do better next time. Replay conversations and events in your head and look for things you could have done better, and things you wish you hadn't done. Look for emotions you felt, and any problems they caused. You can also look for emotions you didn't feel but would have liked to. Don't worry too much about changing; just notice everything, pay attention, and form some ideas about what'd be better and guesses at how to do it, and try imagining yourself acting in the new way.

With practice you'll learn to notice things faster. Instead of hours later while reflecting, you'll notice minutes later. You'll have ideas what to do better, and spot things you wish you didn't do or feel. Then with more skill, you'll start to notice in seconds.

If you can notice within seconds, and you act and feel slowly, you'll be able to notice before you've done or felt anything. Then you can do something else! Now you have better control over your life.

TheRat:

I think you’re wrong to focus on emotions.
Why?
You can walk and do single digit multiplications even when super emo, right?
I think so.

curi:
That’s why. There is a skills issue

curi:
He made logic errors while calm too anyway. As have you.

TheRat:
Yes.

Why should we consider skills such as doing addition the same as having emotional control?

curi:
It’s not. It shows high quality skills hold up anyway. Aren’t ruined by emotion.

TheRat:
oh.

curi:
Similarly a highly tilted OWL player still plays at GM level.

curi:
Cuz like 90% of his skill is too automatic to go away

TheRat:
I think I understand. If one acquires a certain level of mastery of a skill, one can still perform without significant hindrance to that skill despite being tilted.

curi:
Ya

curi:
Similarly strong chess players can keep a large part of their skill when playing moves in under 1 second while dead tired

TheRat: (pinned)
Ok so with that in mind. Let's say SS tilted and his logic mastery is not high enough, so the tilt significantly affected his logic. So let's say typically he's at 60/100 and tilted he's like 10/100. Based on that conversation, I don't think even then we can say he can't do logic. We can say he has not reached a level of mastery to make tilt essentially meaningless. What am I missing here?

curi:
So if you see someone play chess at amateur level. There are no excuses. They aren’t good.

curi:
Busy atm btw

Shadow Starshine:
Is this guy still spending all his time trying to frame the discussion?

Shadow Starshine:
I feel like he just got his ego hurt

TheRat:
no I was asking him about ways to have better conversations

TheRat:
and we ended here

Shadow Starshine:
I just read the back and forth

Shadow Starshine:
It amazes me anyone is like that

Shadow Starshine:
Oh well

Shadow Starshine:
I think the best way forward in that discussion was for curi to answer my questions honestly

TheRat:
I don't know if that's fair. I asked him questions and engaged him. What is he to do? ignore me?

curi:
He’s still tilted but also partly he’s just like this

Shadow Starshine:
Give me a snippet of comparative definition of computation

Shadow Starshine:
So I can go "Ah, I see how we differ"

Shadow Starshine:
And then try and make sense of the two different views

Shadow Starshine:
curi, you're about as good a character judge as Ask Yourself

JustinCEO:
i disagree

Shadow Starshine:
You are welcome to disagree

Shadow Starshine:
But in general, from what I can tell, curi has problems relating with other people

Shadow Starshine:
where I actually find quite a bit of success shortening gaps in understanding with people

TheRat:
That's everyone brother

JustinCEO:
i have seen curi engage patiently with hostile people way beyond the point at which i would have given up

Shadow Starshine:
Does patient mean good at relating?

Shadow Starshine:
Someone can be both calm and not understanding

JustinCEO:
patience is required to understand other people when there is a gap in perspective

Shadow Starshine:
Even if it was required it wouldn't make it sufficient

TheRat:
I can sympathize with difficulties relating to others. If the dude was perfectly logical at age 4. Must make it tough to relate to others.

Shadow Starshine:
Do you buy that?

TheRat:
I think my framing is a bit misleading

JustinCEO:
@Shadow Starshine you claimed curi has problems relating to other people. I claimed that I've seen curi demonstrate tremendous patience, which I regard as relevant to relating to/understanding people. You now bring up that patience isn't a sufficient skill to enable understanding/relating to other people, but I never claimed that one skill was by itself sufficient.

TheRat:
perfectly is not what he meant

JustinCEO:
so it's unclear to me what you're arguing with/about

Shadow Starshine:
Right, so are you saying that your point about curi demonstrating patience was NOT a rebuttal to my point?

Shadow Starshine:
If it was a rebuttal, then you haven't made an inferential connection, and my counter demonstrates this

TheRat:
That's interesting Justin because I would have assumed as SS did that your example of patience was meant to refute the understanding others position

Shadow Starshine:
if it wasn't then it was just some tangential thing you were saying

JustinCEO:
Right Rat, you keep expecting my single examples to be a complete self-contained case

JustinCEO:
that happened earlier too

Shadow Starshine:
That's also not what anyone is saying

Shadow Starshine:
We are expecting an inference structure from your statement to the one that I made

TheRat:
I don't think complete, but at least a major point

JustinCEO:
"refute" would be decisive

TheRat:
or why else bring it up?

Shadow Starshine:
in that it's a related point

Shadow Starshine:
If "refute" is too decisive for you, then use the understanding that it was meant to in part counter what I'm saying

TheRat:
btw J I am assuming I am mistaken as I have the least amount of philosophy discussion here

Shadow Starshine:
Was it meant in part to counter what I was saying?

TheRat:
I was pointing out that I would have made the same assumption

JustinCEO:
part of the reason to bring up curi's patience is to indicate to Shadow Starshine in concrete terms that perspectives other than his exist re: curi's ability to relate to/understand other people

Freeze:

btw J I am assuming I am mistaken as I have the least amount of philosophy discussion here
Is this the right way to go about it?

Freeze:
I don't think J would want you to do that either but I'm not sure

Shadow Starshine:
So is that a yes or a no Justin

curi:

I don't think complete, but at least a major point

it was a major point. right J?

Shadow Starshine:
Was it meant to counter my statement?

Shadow Starshine:
in part?

TheRat:
Doesn't mean I agree with Justin blindly due to his longer experience here. But I am approaching it more humbly

TheRat:
nothing wrong with that Freeze I don't think

TheRat:
but I don't want to meta the meta

Freeze:
Can a point be brought up to further a discussion without being intended to refute the other side entirely?

Freeze:
Like to add more info for context or discussion

JustinCEO:
sure, major point @curi

Freeze:
I think J's point kind of does that as well as counter-argues a bit

Shadow Starshine:
Well I'm trying to establish if it was directed to me, in context, due to what I said, as the start of a counter example

Shadow Starshine:
Should be an easy yes or no question

Freeze:

part of the reason to bring up curi's patience is to indicate to Shadow Starshine in concrete terms that perspectives other than his exist re: curi's ability to relate to/understand other people

JustinCEO:

"Was it meant to counter my statement?
[9:16 PM] Shadow Starshine: in part?"

TheRat:
Yes Justin was disagreeing with SS in regards to curi's ability to relate to others, and the patience was a way to present an argument of why he disagreed. What is wrong with that?

JustinCEO:
sure, i was trying to contradict, give a different perspective

curi:
why did rat think J's point wasn't major after it came up that it wasn't complete?

Shadow Starshine:
Okay great, now we've established how it started

TheRat:
what do you mean curi?

Shadow Starshine:
My retort to that point was that it was not sufficient to be patient. Meaning, that you could have patience, and still not relate to others

Shadow Starshine:
Do you understand that point?

Freeze:
I think Rat thought it was a major point

Freeze:

I don't think complete, but at least a major point

Freeze:
After it came up that it wasn't complete, he clarified that he thought it was at least a major point

TheRat:
ah yes

TheRat:
Yes.

Freeze:
Rat was the first person to use the phrase major point

curi:
the context was what J's message wasn't

curi:
you seemed to say you thought it wasn't a major point

Freeze:
which context? im reading again

JustinCEO:
I understood what you were saying @Shadow Starshine. I understand e.g. the difference between a necessary and sufficient condition. I didn't think your statement was responsive though, cuz as I said, I wasn't saying or implying that patience by itself would be sufficient for understanding others. So it seemed irrelevant.

Freeze:
putting meta quotes in #other for later analysis

TheRat:
What would have been a proper response to that in your view J?

Shadow Starshine:
If you agree that it's not sufficient, then telling me curi is patient doesn't refute my point. So my comment was meant to show you that you needed to keep making further arguments.

Shadow Starshine:
Do you now understand the nature of my comment?

Shadow Starshine:
If it WAS sufficient, it would have been good enough standing alone

Shadow Starshine:
but since it's not, it is not good enough to refute

Shadow Starshine:
Hence the importance

JustinCEO:
so let's back up a bit

JustinCEO:

Shadow Starshine:
curi, you're about as a good character judge as Ask Yourself

JustinCEO:
that's a hostile flame

JustinCEO:

Shadow Starshine:
But in general, from what I can tell, curi has problems relating with other people

Shadow Starshine:
Justin don't derail, do you now understand the nature of my comment or not?

JustinCEO:
no no hang on please

Shadow Starshine:
Tell me if we are in agreement

Shadow Starshine:
then proceed

Shadow Starshine:
with what you think the problem is

Shadow Starshine:
Don't just ignore my comments

JustinCEO:
i'm going to reply to you after i'm finished making my point

Shadow Starshine:
Just tell me if you agree or not first

Shadow Starshine:
then do so

JustinCEO:
I've said what i'm going to do

Shadow Starshine:
This is not an unreasonable request

Shadow Starshine:
I don't want you to sidetrack

Shadow Starshine:
for no reason

curi:
rat do you think this behavior by SS is just tilt and not some kinda wrong attitudes to discussion?

Shadow Starshine:
Just say "I agree" or "I don't agree"

curi:
or missing skills and methods

curi:
btw i'll get back to ur msg later

curi:
i have stuff to say

curi:
i'll pin it

TheRat:
Seems fine to me. He is answering a specific argument presented by J.

TheRat:
Ok

curi:
he's being ridiculous right now

JustinCEO:
this statement was an assertion based on your own impressions (that's not a criticism, just a description):

Shadow Starshine:
But in general, from what I can tell, curi has problems relating with other people

curi:
we should analyze i guess

Shadow Starshine:
I wrote a bunch of lines to J, he hasn't in any way acknowledged them

curi:
Pinned a message.

JustinCEO:

he hasn't in any way acknowledged them

JustinCEO:
please don't lie

Shadow Starshine:
How am I lying

JustinCEO:
i did acknowledge them "in any way", i said i would reply after i made my point

JustinCEO:
that was an acknowledgement

Shadow Starshine:
Let me be more precise, you haven't acknowledged their content before moving on

curi:
rat does this seem to you like a logic/whatever type error J is running into right now?

JustinCEO:

JustinCEO:
i have seen curi engage patiently with hostile people way beyond the point at which i would have given up

JustinCEO:
so that's me offering my own assertion based on my own impressions

JustinCEO:
which you should not find persuasive on its own, btw!

TheRat:
I don't think I get the question.

Shadow Starshine:
we've already established this

Shadow Starshine:
are you going to address what I wrote

curi:
do you see how there is a skills issue in the "please don't lie" conversation branch?

curi:
they are having an issue b/c J has much greater skills at logic, language, precision, reading, etc, than SS

Shadow Starshine:
Justin, you said my point seemed irrelevant, I wrote to you why it was relevant

Shadow Starshine:
I'm waiting for a response to that

TheRat:
Hmm. I don't see it. I don't think he has acknowledged it yet either. Though planning to acknowledge in the future is acknowledging? I think there's an inference jump we can make from SS's claim that he hasn't acknowledged it yet, and it seems reasonable to me.

curi:

I wrote a bunch of lines to J, he hasn't in any way acknowledged them

curi:
J literally acknowledged that those lines exist

curi:
i think you don't see it b/c of your own skills lack

JustinCEO:
i'm afk 5 mins

Shadow Starshine:
I wasn't asking him to acknowledge their existence, but their content

curi:
or your lack of respect for literal meanings. it's hard to tell how much is skill vs. attitude

TheRat:
Might be attitude

Shadow Starshine:
I think it's your lack of skills on understanding what people mean

TheRat:
I don't take literal too literal

TheRat:
and I try to guess what people mean more

TheRat:
than their literal exact phrasing

TheRat:
is that lack of skill or bad attitude

Shadow Starshine:
I think it's a disregard for the principle of charity

curi:
being able to figure out what statements mean as written, and keeping that in mind and generally not contradicting it, is a basic starting point for being able to do more complex analysis or hold conversations

Shadow Starshine:
Right then why are you so bad at figuring out what things mean

curi:
skipping that step is one of the reasons conversations fall apart. it should be automated.

TheRat:
I feel like conversations fall apart when you take the literal meanings of what people wrote vs the spirit of what they meant

TheRat:
hmm

Shadow Starshine:
curi, I could copy paste this convo elsewhere, and I'll bet you that other people will get what I'm saying

TheRat:
That's another gap we have it seems

Shadow Starshine:
and you're gonna find yourself on the outside of understanding

Shadow Starshine:
claiming that you're the only smart and logical one

curi:
you generally need to say stuff that isn't wrong (literally) to get anywhere when ppl disagree much, when there's much culture clash, when there's much inferential distance, etc.

curi:
that should be a basic starting point to get ppl to have something in common, a shared understanding of some objective facts about reality that can be built on

Shadow Starshine:
The only reason there's so much culture clash is because you're dedicated to a style that is so foreign to what other people mean

Shadow Starshine:
I have a very easy time relating to many different language games

Shadow Starshine:
I can use words differently based on my interlocutors use

Shadow Starshine:
what do you do?

Shadow Starshine:
Bitch and complain about how other people talk

Shadow Starshine:
and then act like you have 'skills'

Shadow Starshine:
It's honestly a joke

curi:
when ppl won't or can't participate in that, then communication across much of a perspective gap mostly just doesn't work. it's hard to find good alternatives/replacements, esp generic ones, to get some other common ground.

Shadow Starshine:
It does in fact work, it works for me all the time

Shadow Starshine:
I think what you'll find is that the shit you do doesn't work

Shadow Starshine:
and you're projecting that onto others

Shadow Starshine:
then convincing yourself its everyone else's problem

curi:
it's like EY talking about recursing down to bayes' theorem in inferential distance article. but it's to something considerably more simple, basic and generic. and if you still can't find agreement on stuff like what words mean and how words fit together to form sentences and what those sentences then mean, you're pretty damn screwed.

curi:
you could go to arithmetic instead of words but that's harder to build on

Shadow Starshine:
buddy you ain't listening

curi:
plus most ppl hate math

curi:
and arithmetic would seem more pedantic, would be resisted even more by ppl like SS

TheRat:
Man philosophy is hard lol. I am so confused atm.

Shadow Starshine:
im wasting my breath here

curi:
did u read the inferential distance articles?

TheRat:
Not yet.

curi:
ok i think that'd help

TheRat:
I was thinking like in this scenario where J took what SS said literally

TheRat:
I don't think that seems like a good idea

TheRat:
you say otherwise

TheRat:
and that if you don't always take things literally the conversation falls apart

TheRat:
but J taking it literally made the conversation fall apart anyway

curi:
SS, by not developing the skill for literal communication and/or not wanting to do it, is dealing with culture clash and inferential distance inappropriately

curi:
he's using methods that don't work

TheRat:
but what if the culture clash is of our own making

TheRat:
like he said

curi:
J is right to try to focus on more basic, simple, easier things to start with

TheRat:
like 99% of the time progress can be made

TheRat:
except in FI

Shadow Starshine:
Curi, by taking things literally and not using common understanding or the principle of charity, you cause shit to fall apart. Your methods are a hindrance to yourself.

curi:
if you can't agree on easier stuff, doesn't make much sense to do harder stuff

curi:
what progress elsewhere?

TheRat:
I am talking what if

TheRat:
what if the culture clash is of our own making

TheRat:
like he said

curi:
you can sometimes skip easy stuff when you have a lot in common, esp in more cooperative interactions, but J/SS have major disagreements so shouldn't be skipping

curi:
i have SS blocked and haven't been reading his msgs for a while

TheRat:
oh

Shadow Starshine:
TheRat, may I ask of you, in my back and forth between J and I, what do you think I was saying?

curi:
was talking to rat + generically

curi:
i don't know what culture clash of own making means.

JustinCEO:

If you agree that it's not sufficient, then telling me curi is patient doesn't refute my point. So my comment was meant to show you that you needed to keep making further arguments.

TheRat:
Ok SS. My attempt at summary (entirely from memory). You said curi seems to have a problem relating to people. J said he disagreed because curi has demonstrated a lot of patience. You said you can be patient and still not understanding. J said patience is required for understanding. SS said even if it is required it is not sufficient. J said he never claimed it was sufficient. I have to refresh my memory on the rest

Shadow Starshine:
That seems accurate to me. Does my point on it not being sufficient seem relevant to the conversation at hand?

curi:
that kind of memory is a skill that ppl vary dramatically at

TheRat:
it does seem relevant

JustinCEO:
i said curi is patient to in part contradict your view that curi is bad at relating to other people, by bringing up a relevant skill to relating to other people that i'd seen some concrete examples of in action. this wasn't meant as a self-contained airtight logical proof. the discussion started with you indicating your perspective, and then i indicated mine, and then you tried to pretend that i was trying to do an airtight logical refutation of what you'd said. you didn't offer an airtight logical proof to begin with -- you just indicated your perspective, @Shadow Starshine.

curi:
and it's something ppl can practice and take steps to get better at

Shadow Starshine:
Okay, so I've demonstrated my efficacy of getting points across

curi:
ya i think it's relevant a lot. some ppl seem to forget msgs from a few min ago and get lost, or misremember recent wordings in ways that change the meaning.

TheRat:
I think you're right curi. I haven't looked much into the literature of memory improvement.

Shadow Starshine:
@JustinCEO I did not pretend that you were trying to do an airtight logical refutation.

Shadow Starshine:
I showed that it wasn't sufficient, and that you needed further arguments

curi:
what i've done a lot, for many years, is try to remember things then reread and test my beliefs to find errors.

Shadow Starshine:
I was not denying your ability to continue to do so

TheRat:

what i've done a lot, for many years, is try to remember things then reread and test my beliefs to find errors.

This seems like a good idea to practice. Writing it down sec.

JustinCEO:
if you wanted more arguments, why didn't you say something like "Oh, really? Well I don't find that persuasive, but could you give me some concrete examples so I can judge for myself?" is that what you really wanted?

JustinCEO:
or "i'm not interested in patience, but has curi actually convinced specific people of specific things you can point to?"

curi:
i think making discussion trees for conversations is also good practice for memory as well as understanding structure

Shadow Starshine:
Because I don't know how definitive you thought your comment was. I'm not a mind reader. I'm merely demonstrating it isn't good enough. If you thought it was, that would give you pause. If you thought it wasn't, you'd continue

JustinCEO:

{Attachments}
https://cdn.discordapp.com/attachments/304082867384745994/658864430771077121/unknown.png

TheRat:
I think I disagree with that tree J

curi:
i think going node by node in a discussion tree is the kind of method that can work for ppl with very different background knowledge, but that ppl generally don't want to do it (and the tree method is itself background knowledge they don't have and don't wanna learn)

TheRat:
I don't think SS strawmanned you

Shadow Starshine:
there is a big problem here in that you seem to be implying that by me stating your comment is insufficient, that means that you don't know that it's insufficient, or that I'm making a claim that you think it is sufficient, neither of those necessarily follow

Shadow Starshine:
You make statement X. I say statement X lacks property Y.

You don't need to say "Are you saying that I'm saying it does have property Y?" - No.
"Are you saying that I don't know it lacks property Y?" - No.

Shadow Starshine:
It only means exactly what I said

curi:
so pending things for me are the pinned msg and waiting on explanation re creating culture clash

curi:
iirc

TheRat:
Hmm. Yeah the tree thing. I haven't quite thought about why I refuse to make trees. I feel like it would be tedious and time consuming. I think it might be similar with me and working out. Explicitly it all makes sense; I should work out. But I still don't do it.

curi:
in many cases, i think trees would save time. this is the same issue as how being more literal and "pedantic" would save time even tho it seems like it takes extra time. but the reduced misunderstandings makes it a large time save overall.

Freeze:
J seemed to make that tree pretty quickly and at the same time as the conversation was happening, even though he had to go afk for 5 min

TheRat:
yeah but after all that he could have just strawmanned me when you said x, and SS would have replied like he did

curi:
or at least trees would save time if the other guy wasn't hostile to them. e.g. the vegans never engaged with my trees

Shadow Starshine:
I've already asked TheRat to paraphrase the convo, he did so successfully and understands my point. I really don't see a problem with my approach

TheRat:
I think your tree is better curi because there wasn't commentary

curi:
my vegan trees were quite opinionated

TheRat:
such as the vegans are misrepresenting me

TheRat:
you just took their argument

curi:
ya they were less parochial

curi:
semi afk

TheRat:
Something is up tho. SS agreed that my paraphrasing of what he said is accurate. So indeed his method of communicating works for at least me. But not J or curi, and I would guess Freeze. So what's going on there?

TheRat:
I wonder if this is a case of maximizing traits vs optimizing

TheRat:
like maximizing literal accuracy

TheRat:
leads to lack of understanding

TheRat:
I feel like we're doing a reverse AY by accident here

TheRat:
trying to catch him in a literal misstep instead of seeking to understand

TheRat:
like AY did to us.

curi:
rat i think SS likes your summary b/c you share some of his biases

DavetheDastard:
AY caused a great deal of pain to all of us

JustinCEO:
@Shadow Starshine I read your bringing up the sufficiency stuff as trying to offer some kind of refutation of what I was saying. If you're basically saying it was an oblique request for more evidence re:curi understanding/relating to people, then i don't think that was very good request methodology but uh ok i guess. So what sort of evidence do you want? What would convince you that your perspective on curi is wrong?

curi:
or in other words, you're more similar to him so your thoughts are more agreeable to him

TheRat:
so my biases made it easier for me to understand him?

TheRat:
hmm.

curi:
a lot of what happens isn't understanding, it's just saying things he's fine with

Shadow Starshine:
@JustinCEO Great, we've made progress. Do you agree that I was not necessarily saying that you didn't know that it wasn't sufficient nor making a claim that you DID say it was sufficient?

DavetheDastard:
It all depends on who said what first, the first one who said it is most valid

DavetheDastard:
Who’s oldest?

DavetheDastard:
That’ll help us find out who was right first about anything

TheRat:
Please don't do that Dave.

Shadow Starshine:
Dave I have 11 fingers

DavetheDastard:
But isn’t that how science works?

Shadow Starshine:
I think that basically concludes it

curi:

J said he disagreed because curi has demonstrated a lot of patience.

rat your paraphrase for J is wrong. J didn't say that "because"

Shadow Starshine:
unless someone is holding the jack of hearts

DavetheDastard:
You’re a mutant

TheRat:
well a paraphrase is not meant to be taken literal

TheRat:
afaik

curi:
@DavetheDastard how'd you find this server? what are your goals here?

curi:
rat you substantially changed the meaning

DavetheDastard:
I was invited

TheRat:
only if I had said something like "solely because"

DavetheDastard:
And I don’t hold any goals

Shadow Starshine:
I take the "because" as a positive reason towards disagreeing

Freeze:
Welcome Dave

Shadow Starshine:
aka "this is a premise"

Freeze:
Are you a friend of Starshine?

DavetheDastard:
I'm after rational philosophy!

curi:
@DavetheDastard you seem to be being disruptive to this conversation. can you go talk in #other for now cuz this channel is busy currently.

Freeze:
Nice :D

DavetheDastard:
Yeah I’m friends with SS

Shadow Starshine:
Dave is a Welsh philosopher working on his PhD.

DavetheDastard:
Yeah

Shadow Starshine:
good lad, being cheeky

TheRat:
maybe a better paraphrase would have been something like one reason J disagrees with SS re relating to others is that curi displays great patience.

JustinCEO:
sure @Shadow Starshine, though my current theory compatible with what you said involves a major miscommunication that was at least partially your fault

DavetheDastard:
Second year PhD, studying late wittgensteinian noncognitivism within religious language

Shadow Starshine:
@JustinCEO Since we've agreed with some things I'm not necessarily saying, what problems do you think are still left that need addressing that were my fault?

Shadow Starshine:
From my personal perspective, I think you were taking me to be saying more than I was actually saying

Shadow Starshine:
you also said it was irrelevant, but I think it turned out relevant

curi:
rat explain re creating culture clash ?

curi:
and i'm not trying to catch SS in a misstep, i found the dozens of them prevented him from understanding what i was saying

curi:
and also that he denies the problem exists and won't take steps to try to deal with it

TheRat:
Let's assume for a moment that SS is right, and we created a culture in FI, in particular around methodological discussion, and we're wrong. Then the clash that exists is of our own making, and need not exist when talking to other people. Like AY making people write in syllogisms when the argument is clear: maybe it's meant to make things more clear, but it bogs down conversations instead

TheRat:
something like that.

curi:
to deal with this kind of perspective gap we need to find some common ground but he won't acknowledge that problem, won't meet me at getting facts and basics right, and doesn't have an alternative.

JustinCEO:
SS you should have asked for more examples/arguments if you wanted them. If you didn't, and you then start bringing up whether the one example I brought up was sufficient to prove a case/refute a claim of yours, and I then guess that maybe you think that's what I was trying to do, that guess is totally reasonable and the miscommunication is mostly your fault IMHO.

curi:

Let's assume for a moment that SS is right.

right about what?

TheRat:
That we're at fault for the culture clash

JustinCEO:
from my perspective, you apparently wanted something (more examples/argument) you didn't ask for (and when I asked why you didn't explicitly ask, you said, somewhat incredibly in my view, "I'm not a mind reader.")

curi:
what does that mean?

curi:
i already think i'm primarily responsible for the existence of the culture clash. i'm the one who chose to learn ideas further from the mainstream.

Shadow Starshine:
@JustinCEO I didn't make an assumption about whether you thought it was sufficient; from my perspective you may or may not have thought that, so my response was expected to either get agreement or disagreement. The reason I don't ask for more examples is because I don't know the nature of your argument, in so much as why it was incomplete, so rather than make assumptions about what you should do next, I merely point out the problem and let it evolve from there. Does that make sense?

Shadow Starshine:
@JustinCEO For example, if you DID think it was sufficient, then I don't want more examples. I want to address that you think that. If you don't, then I do want more examples. But I can't know what to expect beforehand and would rather not assume.

JustinCEO:
stated that way, what you were trying to do makes some sense, but I disagree with the methodology. The major issue, which came up, is that in pointing out "a problem", you have to make a guess about what the argument is trying to accomplish -- what it is trying to solve. If you wanted to know the nature of my argument, you could have asked that directly, rather than bringing up criticisms based on what you thought i was maybe trying to do.

Shadow Starshine:
Well the problem is the only part I'm confident in, that it's not sufficient. But I simply don't know which way to stem from there until I get a response back. "Yes it is" would go one way. "True, but there's also X, Y, Z" would go another.

Shadow Starshine:
I could ask you the nature, sure

Shadow Starshine:
But I also don't mind things evolving more naturally

JustinCEO:
the lack of sufficiency is only a problem in a context where somebody wants it to be sufficient

JustinCEO:
criticisms are contextual

JustinCEO:
http://curi.us/1592-criticism-is-contextual

Shadow Starshine:
I don't think that's true. The lack of sufficiency can either counter that you do think its sufficient, or be a request for more arguments. I think it serves both purposes

Shadow Starshine:
And that is contextual with where you are willing to take it

Shadow Starshine:
I think me requesting more arguments pre-emptively would have been an assumption

Shadow Starshine:
perhaps you didn't think more were needed

Shadow Starshine:
I can't know

JustinCEO:
re the last point, I already said you could have just asked the nature of my argument/why i was bringing up patience. i think it's fair to assume i had some argument/point. also if u asked for examples and i was like "you don't need more, i already justified my position definitively and authoritatively" that would have also been a cheap way to get more info about my position

Shadow Starshine:
I think the only suggestion you've offered I find reasonable is "What is the nature of your argument?"

Shadow Starshine:
but I don't like the pathway of asking for more arguments

JustinCEO:
FWIW

JustinCEO:
i used to do more of your type of approach re: offering crits right off the bat. i try to ask more questions up first way more now. i think i have more of a sense of how big perspective gaps can be and that plays into my questions-up-first approach some.

JustinCEO:

I don't think that's true. The lack of sufficiency can either counter that you do think its sufficient, or be a request for more arguments. I think it serves both purposes

JustinCEO:
it failed to communicate a request for more arguments quite badly in this conversation

Shadow Starshine:
That's fair. I have nothing against the approach of asking your suggested question.

But I think the failure of communication rests on what you assumed I was saying.

Shadow Starshine:
You seemed to take me to be saying other propositions I was not saying

TheRat:
Hmm, this might be an example of being too biased toward explicit statements and neglecting the implicit that DD was talking about.

JustinCEO:
my fault was i should have asked why you were bringing it up

JustinCEO:
the sufficiency stuff

JustinCEO:
like immediately

Shadow Starshine:
Well I'm not sure I agree I should have asked your question, but I think it's a good suggestion none-the-less and would have been successful

curi:
Rat, explicit is easier. It’s the place to start seeking common ground. Add inexplicit second. Which I do a lot of but can’t communicate it very well to SS and others who misread my explicit statements

curi:
The perspective gap on inexplicit is significantly larger in general

TheRat:
I wonder if there is a point of diminishing returns though. Like I said earlier about maybe if you maximize literal accuracy, you start to hinder understanding.

TheRat:
so if you maximize explicit literal accuracy

curi:
Yes but we didn’t even get basics right let alone overoptimize

TheRat:
you throw a big baby with the bathwater

TheRat:
oh

curi:
He misread me over and over

curi:
He kept making incorrect statements that were wrong on many levels including literal and couldn’t agree with even the literal errors existing

curi:
got keyboard again briefly

curi:
it's very hard to get the gist of what someone says when you don't know much about it, disagree, and misunderstand it on a literal level

curi:
it's very hard to tell someone the implications of what they said when you disagree with them about the literal meaning and the implications are bad for them, things they want to deny. it's step skipping to tell someone that reading btwn the lines they're wrong without telling them why the lines are wrong first.

curi:
it works ok when you can ignore a few literal errors to reach a conclusion you and the other guy both agree makes sense (even if you disagree with it for some more advanced reason, but u see why it's at least semi-reasonable). that's "steelmanning" or principle of charity. but when you think something is trash and/or highly ambiguous (too hard to guess any reasonable meaning, though so vague it could mean something reasonable), and there's no way to correct the literal errors to get non-trash (as far as you know), then you can't just interpret what they meant so that they like your interpretation. they don't want you to interpret them as meaning something wrong.

curi:
it's not literally impossible but it's very hard in general and rarely an effective approach

curi:
ppl don't want you to reply to something that isn't what they literally said and prove why it's trash

curi:
that's actually unfair to them

curi:
you shouldn't put words in their mouth

curi:
they like it occasionally when they see the point but mostly it doesn't work when the perspective difference is significant

curi:
when someone is wrong and confused, it's very hard to guess which particular confusions they will consider the strongest, steeliest version of their viewpoint.

curi:
and do a better job than they did at coming up with that

curi:
you can do it somewhat when you are culturally similar to them or know a ton about their subculture. especially if you know the person really well, like years of discussion history. it's an unreasonable thing to ask for in general.

curi:
none of the analysis changes if they're right. if i think they're wrong and confused, b/c i'm wrong and confused, and then i take their statement, change it in a way that seems a little less wrong and confused to me, and try to argue with that

curi:
it's just gonna make things worse

curi:
you need stuff in common to know what corrections to ppl's errors to make. i do have enough in common with SS to correct "its" to "they're" but not for a lot of the corrections he wanted. and he managed to misunderstand that grammar point in an illogical way and correct it incorrectly (its -> it's IIRC) and never fix it. the thing that happened with that was a recurring pattern: I brought up X for purpose Y, and he responded as if I brought up X for purpose Z, where Z is normally some kinda generic typical purpose (so it's more like dropping context than inserting a different one)

curi:
the electrons thing was the same broad pattern (brought up for purpose Y, but he took in context Z)

curi:
these context errors seemed to be caused in significant part by his lack of literal understanding of words, quotes, replies, and those kinda issues

curi:
it's quite hard to guess which non-literal misreadings ppl might have and preemptively write to avoid them.

curi:
there are so many possible misreadings. even if you have a 99% success rate at avoiding, can still have tons of them happen.

curi:
when ppl will read X as Y, there are really straightforwardly infinitely many misreadings and nothing you can do besides guess, based on shared culture, which ones ppl might do

TheRat:
So what's the solution?

curi:
start with the easiest, most objective stuff for common ground

curi:
or like binary search backwards towards it

curi:
if you do stuff that's too simple and easy, and unnecessarily basic, you'll quickly be able to escalate

curi:
but ppl like SS don't want to back off far enough, to easy enough stuff, to actually have common ground and agree on and resolve ~anything

curi:
and the stuff he thinks is hair splitting pedantry is actually too hard for him. which is super common

curi:
so it's really hard to find anything you can talk about productively

curi:
if he had a better attitude, and tried, he might have been able to get that stuff right 95% of the time but that's too hard, that's not reliable enough for a basic building block to do complex, advanced stuff using. you can go a level or 2 past 95% reliability but not very far. every level past exponentially increases the usages of the building block, resulting in exponentially more opportunities for error, so error rate has to be VERY low to build many levels past

curi:
this is all a bit approximate for various complicated reasons, but is the gist

curi:
the main approximation is that i wrote in terms of mainstream foundationalist thinking which is actually wrong as CR says but is approximately correct for many purposes
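
[The reliability argument above involves some implicit arithmetic. A minimal sketch of it, assuming the 95% figure from curi's message and an assumed, purely illustrative doubling of building-block usages per level:]

```python
# Illustration of the error-rate point (numbers assumed, not from the chat):
# a basic skill acts as a building block, and each level built on top of it
# uses that block more times. With a fixed per-usage success rate, the odds
# that every usage goes right collapse as levels stack up.
reliability = 0.95  # curi's "95% of the time" figure, per usage

for level in (1, 2, 4, 8):
    usages = 2 ** level  # assumption: usages double with each level built
    p_no_errors = reliability ** usages
    print(f"level {level}: {usages} usages -> {p_no_errors:.6f} chance of zero errors")
```

[Even at 95% per-usage reliability, 256 usages leave roughly a 1-in-500,000 chance of an error-free run, which is the sense in which 95% "is not reliable enough for a basic building block".]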

TheRat:
Do you have examples of long term discussions with other philosophers who did this? (followed binary search backwards toward common ground then built up).

curi:
Afk

curi:
Pinned a message.

curi:
this method has been used lots with ppl i talk with long term, e.g. justin and alisa. there was a little example on FI w/ anon re torturing kids discussion recently. stuff like that is common. you can also see longterm patterns over many discussions like e.g. justin discusses more precisely in lots of ways, including quote usage, compared to 20 years ago

curi:
it doesn't yield fast, large results with new ppl – in the sense of them e.g. being able to make expert level CR comments after 5 hours of discussion – b/c they need years of learning to be competent b/c there's a lot of human knowledge and school sux. no huge shortcuts. but you can get quick results about smaller things in individual conversations, e.g. my conversation with you where at the ~10th level of the conversation we got some common ground and then we quickly resolved a bunch of prior levels

curi:
similarly, i successfully dealt with a very difficult conversation with Andy re min wage http://curi.us/2145-open-discussion-economics he was majorly uncooperative, confused, sabotaging, etc., but i managed to get him to talk about some simple enough things that he could get them right (partly after corrections b/c it was simple enough he could actually be corrected and see why the correction was right) and then from there we built pretty quickly to conclusions re min wage as a whole.

curi:
a lot of subjects are simpler than ppl realize, and much more open to rational analysis reaching decisive conclusions. i picked min wage b/c it's like that (contrary to Andy's prior belief about how complicated it is, how there are lots of arguments on both sides, etc.). if a conversation can make progress at all, then lots of things can be accomplished. ppl overestimate difficulty of lots of stuff b/c they are used to conversations with ~zero progress. but if u can make "hyperliteral pedantic" progress, that's more than zero, and then u can quickly sort out some issues that many ppl never figure out.

curi:
ppl are also used to the blind leading the blind. when neither side knows the answer already, reaching conclusions is way harder.

curi:
and finding common ground or being more literal won't fix that super common problem that no1 talking actually has good ideas on the topic.

curi:
in that case they really ought to figure out how to engage with existing knowledge. (unless they are trying to invent new ideas, in which case they ought to know relevant existing knowledge and have lots of common ground that way already)

curi:
most ppl just won't do it and have a halfway reasonable conversation tho

curi:

Ok so with that in mind. Let's say SS tilted and his logic mastery is not high enough that the tilt significantly affected his logic. So let's say typically hes at 60/100 and tilted hes like 10/100. Based on that conversation, I don't think even then we can say he can't do logic. We can say he has not reached a level of mastery to make tilt essentially meaningless. What am I missing here?

the drop from tilt shouldn’t be that big cuz you ~can’t get to level 60 with only up to level 10 practiced, automated and highly reliable. you can only effectively build a few levels past the lowest level where your knowledge is shoddy.

so the further you go along, the percentage of non-automated, non-mastery type knowledge drops. at first it’s 100%. then maybe you master 5 levels and are working on 3 more so it’s 37.5%. to get to the point you can do level 60 at all, early stages of learning it, you normally want to master at least level 55. you can skip ahead a little just to explore, especially once you know that much.

conclusion: tilt doesn’t make all that much difference unless it’s actual bad faith where they are being wrong on purpose.

curi:
it’s a big difference in a competitive game like OW or LoL where you are playing other people on your level, and then if you tilt and play a bit worse you’re at a meaningful competitive disadvantage. you’d still stomp nubs (relative to u) while tilted tho. and you’d still go to the right lane in LoL and do all kinds of other stuff (unless you actually stop trying, but you’d at least know how to do that if you’re a decent player).

for this tho, if you wanna talk complicated philosophy subjects, then stuff like how to read a sentence correctly was 50 levels ago, should have mastered it ages ago. it should stick with you when tilted similar to how you can still read individual words with nearly 100% accuracy when tilted.

there were many, many signs that SS’s skill levels at all sorts of really generic discussion skills were way too low. including pre-tilt signs. but some of the errors while tilted were so low level they put a lower upper bound than before on how high his mastery skill could be.

(there are many different dimensions of skill, some weren’t bounded much at all pre-tilt, so another thing the tilted evidence did was fill out the picture and show low skill on a bunch more dimensions. the dimensions are partially but not fully independent btw.)

curi:
2000char limit sux

curi:
it's not just SS. for example PP's response to me pointing out he made a logical error was "excuse me im going to shoot myself" and then not to say anything else about the matter https://discordapp.com/channels/304082867384745994/304082867384745994/658523456966623242

curi:
here's PP's error:

curi:
[8:15 PM] Perspective Philosophy: either its about reason or about shadows evaluation. which is it?
[8:16 PM] curi: what's "it" in your message?
[8:16 PM] curi: first one
[8:16 PM] Perspective Philosophy: it was referring to your position and the territory of this current discussion
[8:16 PM] curi: my position is multi-part, so that's a false dichotomy

curi:
he's just too irrational to make progress with. he doesn't want to work on the project of getting things right and getting his ideas to connect with reality.

curi:
this error is so basic. is the kind of thing that should have been mastered ages ago. most ppl haven't but they can't actually do philosophy in that case. or economics, or psychology, or ... which is what we see in the world :/

curi:
if someone doesn't care about false dichotomy errors, or doesn't understand them and isn't curious, you also won't be able to correct them on their misreadings of Popper even when it's reading stuff like "unjustified" as compatible with justified. there just isn't enough connection to truth and reality there and/or enough skill to understand what anything means.

curi:
where else are we going to get common ground if not some of the most objective and generically useful knowledge that humanity has?

curi:
pure math? programming? mathematicians and programmers have to get details right in a literal way to do their jobs!

curi:
words have meanings, there are rules for combining them into sentences, people who don't respect this, and don't learn the meanings and rules, are extremely hard to talk with or find common ground with for a rational discussion about ideas that are different than what they know. they just get by in life by talking with other similar people who already know what they mean a ton, saying really simple stuff, and being misunderstood (and misunderstanding others) but glossing it over and hiding the problem.]


Elliot Temple | Permalink | Messages (0)

DavetheDastard Chat

This is an example of how bad at thinking people are regarding facts, logic and details. I think that's because they think socially instead.

Are you better? Join some discussions and test your belief and create documented evidence of your higher quality discussions.

The key point is that I disagreed with him about some philosophy stuff and asked if he was interested in discussing. His replies included "Hahaha what" and "I’ll be willing to have a discussion to demonstrate that you’re talking utter nonsense in an unnecessarily hostile manner". Those replies indicated that he wasn't intellectually interested in the matter. I took that, combined with no answer to my question (re interest), as a "no", and told him so. He later followed up on that initial conversation in the chat below.

After this chat, Dave came back the next day to ask if I was "actually retarded" before leaving the server.

This is from the public Fallible Ideas Discord.


DavetheDastard:
Freeze Today at 02:45
maybe this relates to the point you had that Dave might be wasting his career

curi Today at 02:48
it's related sure, but it isn't really what i was talking about. i think the specialization he works in is building complicated ideas on top of many layers of false premises, and that it doesn't have paths forward or engage seriously with other schools of thought which criticize it

Freeze Today at 02:49
ah

curi Today at 02:49
his response was typical: he wasn't interested and also, despite the sort of work he allegedly does, he misread what i said and didn't respond meaningfully to my question.
the sort of work he allegedly does requires being great at accurate, detail reading of words, to the point that's easy and automatic


Curi, could you be more specific with the following:
In your initial message, you never explained in detail how my area of study rests on false premises - could you please identify which premises my area of study rests upon, and then demonstrate them to be "false"?
In what way(s) does my field of study fail to interact with other areas of study? Could you name a few, but then clarify how and why those areas of study would be relevant; i.e., my field of study does not relate with the study of Hospitality, but we wouldn't say that that is a problem. Also, whilst we're at it, could you acknowledge the fields of study which my area does interact with?
Your second message says that I wasn't interested [in your message informing me that my career is a waste due to the area of which it focuses is based on false premises] , and that despite the work I allegedly do, I misread what you said. Further, you say that I didn't respond meaningfully to your question.
You only asked one question, and that was whether I would be willing to discuss the matter with you.
I said that I would discuss the matter with you, and I then invited you to a VC to discuss the matter directly. I had hoped that to be a meaningful answer.
curi:

In your initial message, you never explained in detail how my area of study rests on false premises - could you please identify which premises my area of study rests upon, and then demonstrate them to be "false"?

this is not responsive to what i already said to you and how our conversation went.
curi:

You only asked one question, and that was whether I would be willing to discuss the matter with you.

that is not the question i asked. you should read.

DavetheDastard:

https://discordapp.com/channels/304082867384745994/482766203983626255/658871776708919316 in my considered and professional opinion, i think your specialization is based on incorrect premises and so it's a waste of a career. is that something you're interested in discussing? @DavetheDastard

DavetheDastard:

you said that

DavetheDastard:

that is your initial message, is it not?
And is it not true that in it you have stated that my area of study rests on false (incorrect) premises?
Also, is it not the case that you asked a single question, which is what I reported?

curi:

read it.

DavetheDastard:

I have

curi:

have you read it today?

DavetheDastard:

I have just copied and pasted it to you

curi:

where?

DavetheDastard:

in what way is me asking you to identify the incorrect premises which you have asserted to exist, not responsive to what you have said?

DavetheDastard:

I have pasted it here from #slow

curi:

oic, you didn't quote it and it started with a link

curi:

very confusing

DavetheDastard:

the link is what you posted in your initial message.

curi:

ok well, factually, does it ask if you're willing to discuss?

DavetheDastard:

if I have failed to quote directly, then so had you

DavetheDastard:

"is that something you're interested in discussing?

curi:

when i said you didn't quote it, i meant you didn't do this:

quote

DavetheDastard:

right, would that make a notable difference to you? it would merely alter the layout

curi:

it would have prevented the confusion, but nvm

curi:

is interest in discussing something the same as willingness?

DavetheDastard:

I would have hoped you would recognise your message to me, the one which you have been referencing in other messages.
In the context which you asked it, it would be a fair reading to believe that you had asked me to interact with you, and not merely wonder if I am interested in discussing this area. One would have thought that I would be interested in discussing the field of study which I specialise in.

DavetheDastard:

Look, if you are going to be unbelievably difficult in communication, then I am not wasting my time with you.
If you do not want to actually engage and have a discussion, fine, but in that case, would you please refrain from making further comments about my field of study and specifically my engagement in it.

curi:

the belief that the other guy is difficult re communication is symmetric. you aren't offering a symmetry breaker. i am offering one: your way of communicating contradicts the dictionary.

DavetheDastard:

pardon?

curi:

which is the first word that you don't follow?

DavetheDastard:

it's just that you seem to be saying that I am contradicting the dictionary in my communication, and yet you are not capitalising your use of "I", in contrary to the dictionary.

curi:

in contrary to the dictionary.

DavetheDastard:

are you or are you not asking whether I would like to meaningfully discuss the topic of whether my area of study rests upon incorrect premises?

DavetheDastard:

sorry, is that a direct quote? Oughtn't we mark that with quotation marks?

curi:

i had in mind dictionary definitions, not minor typos or informalities like omitting trailing periods in one sentence messages

curi:

curi:

that is what a block quote indicator looks like on discord

DavetheDastard:

that isn't quotation marks

DavetheDastard:

a quotation marks appears as follows - "X is the case"

DavetheDastard:

usually accompanied with a time stamp for mutual reference

DavetheDastard:

are you not familiar with references?

curi:

do you want to try to actually resolve an issue?

DavetheDastard:

again, I am only having this discussion with you to see whether or not you want to meaningfully discuss the question of whether my area of study rests on incorrect premises. If you do not wish to have such a discussion, and perhaps it may be fair to reason that you would likely not to have this current one either, then merely tell me so, and I will leave this discussion here. In doing so, however, I ask that you refrain from asserting that I am wasting my career on a field of study due to faults of that study, until you directly inform me of those premises and the nature of their falsity.

curi:

do you want to try to actually resolve an issue?

DavetheDastard:

are you struggling to understand my previous message?

DavetheDastard:

the only issue I wish to resolve is the question of whether my field of study rests upon incorrect premises, hence me having earlier asked you to identify those premises and to explain to me how they are incorrect.

curi:

you brought up other issues which you now grant you don't wish to resolve. that was inappropriate.

DavetheDastard:

my lord.

DavetheDastard:

you have just acknowledged that I have brought up issues in order for them to be resolved, and that due to you not acknowledging or engaging them, it is I who has been inappropriate.

curi:

i refer you to https://discordapp.com/channels/304082867384745994/647276416857276426/659136940607799309

DavetheDastard:

right - to which bit in particular?
your not wanting to have a vc?

DavetheDastard:

this is a waste of time, I'm leaving; please ping me in the future if you wish to meaningfully engage on the question of whether my field of study rests upon incorrect premises.

curi:

i don't like talking with people who aren't interested and also aren't adequately literate or precise.

curi:

and who don't want to address that problem e.g. by reading my articles on how to discuss or the Inferential Distance articles

curi:

or making a serious effort to propose a way forward that works from my pov

curi:

i don't like when people say things like "my lord" instead of recognizing the large culture clash, being tolerant and curious, trying to deal with it rationally instead of assuming bad faith, etc.


Vegan Footsoldier Chat

This chat followed immediately after the DavetheDastard Chat. That's what my first four messages relate to.

This is from the public Fallible Ideas Discord.


curi:

i don't like talking with people who aren't interested and also aren't adequately literate or precise.

curi:

and who don't want to address that problem e.g. by reading my articles on how to discuss or the Inferential Distance articles

curi:

or making a serious effort to propose a way forward that works from my pov

curi:

i don't like when people say things like "my lord" instead of recognizing the large culture clash, being tolerant and curious, trying to deal with it rationally instead of assuming bad faith, etc.

footsoldier:
(pinned)
"i don't like talking with people who [...don't want to read...] my articles on how to discuss"

footsoldier:

are you aware that doesn't sound so cool curi?

curi:

i suggest that your next message do something to persuade me of your good faith interest in productive discussion.

footsoldier:

So far all I have gotten from this server is people talking meta, being arrogant and getting caught up on miscommunication and nitpicking. I'm not interested in persuading you of my good faith, I'm interested in talking to cool people. I've come in here with an open mind but you aren't doing much to convince me you aren't arrogant and annoying to talk to. So at this point my interest in being in this server has waned to close to zero. Feel free to ban me, at least it will end my umming and arring about leaving 😂

curi:

you quoted me in a misleading way to make me look bad, then didn't give any explanation of what you think is bad. neither you nor any of your friends has made a reasonable effort to objectively establish any significant error by me or in FI philosophy. there hasn't even been a claim that a particular set of messages was objectively, rationally adequate to win the debate. see e.g. https://curi.us/2232-claiming-you-objectively-won-a-debate

footsoldier:

I never claimed I won any debate

footsoldier:

I came here to debate

footsoldier:

and got meta

footsoldier:

and links to articles

curi:

you're judging me and some ideas negatively. the rational way to do that is to back up that claim with arguments and attempt to actually win the debate re whether your arguments are correct or not, rather than just arbitrarily believing it or believing it while ignoring counter-arguments, or other errors like that.

footsoldier:

what am I arguing for?

curi:

you claimed e.g.

being arrogant and getting caught up on miscommunication and nitpicking.

curi:

but you aren't arguing your negative beliefs like these

curi:

so it's unreasonable to conclude they're correct, because you might be incorrect and aren't taking adequate steps to find out if you're mistaken

curi:

establishing a negative claim in argument is one of the reasonable ways to reach a negative judgment about people. if you have some other method for reaching negative judgments that you think is rational and truth-seeking-compatible, you haven't explained it and i wasn't able to figure it out from your comments.

footsoldier:

you are proving my point

footsoldier:

I came to discuss and got meta

footsoldier:

and you are continuing meta

curi:

this is discussion of how to be rational

curi:

if that topic doesn't interest you, you're on the wrong server

footsoldier:

so the goals in this server are to discuss the meta of discussion?

footsoldier:

if so then cool but i didn't know that

footsoldier:

i thought this was a general philo server

curi:

discussing how to think, learn, judge ideas, etc – epistemology – is a common topic here

curi:

you can also discuss other things

footsoldier:

well epistemology doesn't mean talking about a conversation about epistemology

curi:

however, if your discussion methodology differs from that of others, then you may run into major problems. as has happened with e.g. SS and ppl here.

footsoldier:

for example, if I said to you, can music be judged objectively?

footsoldier:

and you said, I don't like the way you posed the question

footsoldier:

and then we argue the meta

footsoldier:

then that's meta

footsoldier:

and not the actual discussion itself

curi:

you aren't similar to the FI people. you don't seem to want to learn our ideas about how to think, learn and discuss. you want discussion to just work automatically without doing anything to bridge the gap.

curi:

it's really up to you if you're interested enough to learn something about ideas that are different than your current ideas, or not.

curi:

but if you try to ignore that this is the situation you're in when you come here, it isn't going to work well.

footsoldier:

i appreciate that you are interested in meta discussion

footsoldier:

but what if I want to discuss my previous example, whether music can be judged objectively... is that appropriate to do here? Or would we only want to discuss the way we can discuss whether music can be objective without actually discussing whether music can be objectively judged??

curi:

you can try but i expect you to run into problems like when i think you read something i say in a non-literal, biased way

curi:

then, in my understanding, you won't want to discuss or try to solve that problem

curi:

and i won't think the original discussion is productive given ongoing, unsolved problems like that

footsoldier:

ok that's a fun goal. Let us attempt to discuss whether music can be objectively judged but before we get to that, let us immediately resolve any outstanding issues

footsoldier:

please can you give the first issue to resolve?

curi:

i don't know if you're being sarcastic or what you mean

footsoldier:

what is it I have said? Are we speaking about the quote you felt I manipulated?

footsoldier:

or are there other things?

footsoldier:

is the [mis]quote the most important issue here?

curi:

my biggest concern is that i predict certain types of problems will come up and that you then won't want to continue in a way i regard as productive, as i just explained.

footsoldier:

well that is irrational

footsoldier:

both you and I can predict anything we like

footsoldier:

or are you claiming to have access to future knowledge? This is suddenly a bizarre conversation.

curi:

my take on this is that you aren't reading and understanding what i say, and that you aren't responding in a way that's good at clarifying.

footsoldier:

can you point to an example?

curi:

me: i expect ... you won't want to

you: what is it I have said?

curi:

i talked about an expectation and used future tense. your response was to ask about the past.

footsoldier:

what is your prediction based off?

footsoldier:

I must have prompted you to think that

footsoldier:

otherwise you are just being mystical

curi:

you don't understand where

then, in my understanding, you won't want to discuss or try to solve that problem

is coming from and how it relates to anything prior in the discussion?

footsoldier:

no because I am currently tying to discuss and solve problems RIGHT NOW

footsoldier:

LOL

curi:

can you point to an example?

do you accept the first example i gave?

curi:

you didn't respond

footsoldier:

could you confirm what constitutes your example? I do not feel you have given a clear example yet.

curi:

footsoldier:

You seem to be confused as to the flow of the conversation.

  1. ME - let us discuss objectivity in music
  2. YOU - I don't think we will get very far because I expect issues to arise
  3. ME - how so? Can you give an example of previous issues which have arisen so we can resolve?
  4. YOU - I am simply predicting that issues will arise - and the fact you assume I am referencing a previous example when in fact I am just predicting is an example of such an issue.

NOW - it seems that this example is derailing from the conversation. It seems your objections are prophetic. If there are no CURRENT issues preventing us from discussing objectivity in music, could we stop getting caught up in meta and move to the conversation of objectivity in music?

curi:

could you try responding more directly to what i said? e.g. do you agree that i talked about the future and you responded about the past? if so, why did you do that?

footsoldier:

i already explained this......

footsoldier:

you cannot make a prediction based upon nothing

footsoldier:

but this is derailing

curi:

i talked about an expectation and used future tense. your response was to ask about the past.

footsoldier:

what is your expectation predicated upon?

curi:

you aren't responding to me.

footsoldier:

i am

footsoldier:

is it because you think you can make predictions based upon nothing previously being observed?

curi:

why did you respond with the wrong tense?

footsoldier:

because I assumed that you weren't foolish enough to base predictions on nothing

footsoldier:

so skipped a step

footsoldier:

and went straight to asking you what your predictions were based upon

footsoldier:

so again, what are your predictions based upon?

footsoldier:

because now your predictions are based upon something which occurred after the fact of you originally predicting

curi:

ok so here i am worrying about miscommunications followed by them not being fixed b/c discussing miscommunications is meta discussion and you expressed your negative opinions of meta discussion ... and what you do is skip steps which makes miscommunications more likely and larger.

footsoldier:

lol

curi:

ok gl talking with someone else

footsoldier:

you've confirmed my opinion of you mate

footsoldier:

You assert we cannot discuss any given topic because you prophesize issues will arise - therefore we are limited to discussing how we ought to discuss but which is itself a discussion. Either you are a comedian or need to pull your head out of your ass. Leaving the server. No interesting discussion to be had here.


Elliot Temple | Permalink | Messages (0)

Shadow Starshine Chat

Shadow Starshine (SS) was already tilted before this discussion, ever since I decided we had an Inferential Distance problem and he refused to read any articles to find out what that means.

He also wouldn't discuss the concept when I and others wrote several explanations for him in chat messages. In short it means there are major differences in our background knowledge and premises, and we need to find some points of common ground to build on (but SS refused to try to do that).

For context, here is the SS discussion tree I made for part of my discussion with SS, and here is the VSE discussion tree that he was actually talking about below. I had been unable to get substantive responses to either tree, and sadly that doesn't change in this chat.

This is from the public Fallible Ideas Discord.


curi:

That mindmap by curi is so dishonest

SS, are you going to write down an error in it? then, step 2, explain why you think that error was made dishonestly?

Shadow Starshine:

@curi I'm not writing down an error in it. It's dishonest because of where you started it, like taking a video clip of someone talking out of context to make it seem like there's another problem.

curi:

can you point out how any node is misleading or wrong b/c of missing context? that is, an error.

Shadow Starshine:

The first node makes it look like TheRat's question should be answered or was at all the topic at hand.

Shadow Starshine:

What should be shown is how that question was an avoidance of something already being asked of him.

curi:

which mindmap are you talking about? i figured it was the one about you.

Shadow Starshine:

Negative, TheRat and VSE

curi:

ok, you think rat's question shouldn't have been answered b/c of some message(s) i didn't include in the graph. the appropriate thing for you to do is quote those messages, right? then explain how they indicate rat's question shouldn't be answered.

Shadow Starshine:

Well I don't hold a belief that this would be a fruitful use of my time in a discussion with you in particular, but if someone else wishes to understand where I stand on that and why, they may ask.

curi:

if you won't argue your case, don't make claims here

curi:

you just say over and over that i'm wrong but you never substantiate it

Shadow Starshine:

That's rich coming from you

curi:

i'm the one who makes trees, writes articles, gives details

curi:

i ask you for details when you try to make claims that i'm wrong

curi:

you don't give them

JustinCEO:

ya there's a huge effort asymmetry

curi:

if you think i fucked up in some previous part of a discussion

curi:

provide it

curi:

you just keep referring to my past bad behavior that you don't specify or argue

curi:

when i try to go thru issues with you in detail

Shadow Starshine:

The amount of fuck ups you make is simply not worth my time, especially in a discussion where I want to convince you of them. I just told you if someone else has the same questions, i'll put in the effort.

curi:

you consistently stop part way

curi:

so that they don't get resolved

curi:

you haven't established i was wrong a single time

curi:

you have never made a case i was wrong about anything

curi:

that you have even claimed was objectively adequate

Shadow Starshine:

curi your position has been noted

curi:

since you don't want to resolve disagreements or argue your flaming-adjacent claims, you're on the wrong server.

Shadow Starshine:

Yet again, I'll say this for the third time

Shadow Starshine:

I'm interested in it with other people other than you

curi:

you say i made many fuckups but haven't explained even one

JustinCEO:

so it seems spurious that someone is going to tell me that I've been doing it wrong and that they can tell me the underpinning problems.

SS fyi your ultra hostile attitude towards curi doesn't really serve you well in helping establish your case as a veteran debater to whom a certain level of respect/deference should be granted re: judging discussion issues.

Shadow Starshine:

If I'm on the wrong server specifically because of my disinterest with you in particular, fine. But you can't say my disinterest is categorical.

Shadow Starshine:

and that's one error you can note right now

curi:

his methodology for objectively establishing an error is to write a sloppy sentence or two, then assume he's done

curi:

amazing

Shadow Starshine:

@JustinCEO depends who I'm trying to convince

Shadow Starshine:

I've already established I think curi is a waste of time, far beyond the amount of effort it would take

Shadow Starshine:

just to engage with some shit tier blogger

JustinCEO:

dude wtf

JustinCEO:

so hostile jeez

JustinCEO:

i'm out

curi:

https://curi.us/2232-claiming-you-objectively-won-a-debate


Elliot Temple | Permalink | Messages (2)

TheRat Chat

An example of how irrational people are and how hard that is to deal with. Think you're better at reasoning yourself, or better able to engage with people productively? Test your skills in discussions and share transcripts for critical analysis. If you never test how good you are, or take other steps to get good, you should assume you're highly irrational. Highly irrational is the default.

This is from the public Fallible Ideas Discord.


TheRat:

However, I reject your summary of the discussion.

JustinCEO:

hey Rat, do you think curi puts in a fair amount of effort in general re: explaining things carefully, doing things like making discussion trees, referring people to resources relevant to the point at hand, etc?

JustinCEO:

@TheRat

TheRat:

I think he does both things. Put a lot of effort in some things, and make unargued, unexplained assertions too.

JustinCEO:

Rat has there been any instance you can point to where there was no path forward, nothing you could have done to try to address some conversational impasse? Where curi left no route for making progress in the discussion? If you answer in the affirmative, link or quote please

TheRat:

and tilts when someone even slightly presses him to explain himself

TheRat:

like yesterday

TheRat:

called it "demanding" and nonsense of the sort

curi:

rat have you ever done this? https://curi.us/2232-claiming-you-objectively-won-a-debate

curi:

or ever used my debate policy?

JustinCEO:

@TheRat I suggest you consider whether an organized attempt to demonstrate what you regard as curi's unreasonableness (with quotes, discussion tree, whatever) might be a better use of your time than venting in the chat.

JustinCEO:

make it clear as day for us all if you can. the more clearly right you are, the easier your task should be.

TheRat:

I think the situation from yesterday is quite clear

curi:

oh that reminds me, the vegan never got back to me who was reading BoI and said he would write 3 blog posts then debate me (i said i'd take 3 instead of 20 for debate policy)

JustinCEO:

TheRat: I think he does both things. Put a lot of effort in some things, and make unargued, unexplained assertions too.

JustinCEO:

do you think there might be a relationship between the effort you put into some area and what sort of things you can quickly come to a correct judgment about within that area?

TheRat:

What's the relevance?

TheRat:

nobody cares about his alleged skills at coming to a conclusion. What matters is his explanations of his conclusions

TheRat:

which he fails to do

JustinCEO:

[1:34 PM] TheRat: I think the situation from yesterday is quite clear

Do you think you explained why you regard the situation as clear, Rat?

TheRat:

Don't shift it

TheRat:

He made the assertions

TheRat:

not me

JustinCEO:

Do you concede you've made assertions?

curi:

why is rat doing meta discussion?

curi:

he says meta sux?

TheRat:

Let me put it as clear as I can, and hopefully you'll see it but you have a blindspot for curi so I don't have high hopes. Curi makes assertions he refuses to explain, what efforts he puts in other areas or how good he is at getting to conclusions etc.. is utterly irrelevant. Does he explain his assertions? No. If he asserts "You don't know how to do X" and is asked for an explanation, saying "What is your system to do X" is not an explanation. It is a dodge. He already made the assertion "You don't know how to do X" and he refuses to explain himself. This is an ongoing pattern with curi I have labelled PatternB.

JustinCEO:

Rat do you concede making assertions or not

TheRat:

Irrelevant

JustinCEO:

humor me?

TheRat:

Yes, but after we have resolved the problem of PatternB

JustinCEO:

by "humor me?" i was asking for an immediate reply on that discrete issue

JustinCEO:

Y/N?

TheRat:

I don't want to go off topic because as we have seen that never works.

JustinCEO:

one char direct reply would be lower effort than non-substantive reply alternatives!

TheRat:

also let him defend himself. You shouldn't fight his battles

JustinCEO:

this isn't a battle

TheRat:

he's hurting you by making you his proxy, you aren't thinking for yourself.

TheRat:

its not good

JustinCEO:

you're being disrespectful and offensive

TheRat:

You've successfully derailed the conversation. I'll go back to

Curi makes assertions he refuses to explain, what efforts he puts in other areas or how good he is at getting to conclusions etc.. is utterly irrelevant. Does he explain his assertions? No. If he asserts "You don't know how to do X" and is asked for an explanation, saying "What is your system to do X" is not an explanation. It is a dodge. He already made the assertion "You don't know how to do X" and he refuses to explain himself. This is an ongoing pattern with curi I have labelled PatternB.

TheRat:

and I refuse to move from that until he addresses it. Or concedes he makes unargued assertions frequently.

TheRat:

I am under no delusions that he will do either.

JustinCEO:

maybe part of the reason you won't give a one character reply to me in good faith is that you view discussion as a battle

TheRat:

Irrelevant Justin, please refer to my quote.

TheRat:

Since curi is clearly not afk but crying in his own channel, I can safely assume he is here and has read what I wrote. His inability to defend it here (this channel) I am willing to take as a concession that he is incapable of defending his assertions. And I can drop the matter of PatternB.

JustinCEO:

he tried to engage with you, just now

JustinCEO:

here

TheRat:

He failed to address the issue.

JustinCEO:

"crying in his own channel" you're being a really douchebag rat

JustinCEO:

super hostile flaming

TheRat:

Irrelevant Justin

TheRat:

please refer to my quote

JustinCEO:

You can't force a mind I guess. gl i'm afk

TheRat:

I accept your concession curi.

TheRat:

ill bbl

curi:

i wrote 2 msgs to rat and he hasn't responded yet... https://discordapp.com/channels/304082867384745994/304082867384745994/660595900346925077 [The link goes to the message "rat have you ever done this? https://curi.us/2232-claiming-you-objectively-won-a-debate"]

curi:

he's just baiting by lying

curi:

he thinks the nastier the accusations, the more social pressure he's exerting

curi:

and having them be false makes them extra annoying to facts-and-logic oriented ppl. bonus!?

curi:

and it's baiting by making it look fairly easy to correct b/c it's simple, basic factual errors. but even this isn't actually fixable b/c he won't engage with reality.

TheRat:

I already said that asking me what I have done is not an explanation to your assertion

TheRat:

please read more carefully

curi:

if you think i've made an error, see https://elliottemple.com/debate-policy

TheRat:

Still not an explanation

TheRat:

Imagine if anyone thought that flew as an explanation. "Vaccines don't work." Why? "See my debate policy www.blogsmahfeels.com"


Elliot Temple | Permalink | Messages (3)

Social Reality and Real Reality

There are two broad mindsets for how to deal with life: dealing with reality and dealing with social reality, social dynamics, social metaphysics, social climbing. Some people second-handedly focus on the opinions of other people, while others focus on dealing with nature, logic, facts and science. Most people do a mix of both, but many individual statements or actions are primarily related to one mindset.

I conjecture that social reality is the primary mechanism of static memes. That's how they make people irrational, prevent critical thinking, etc.

Social thinking is the primary reason people fail at being rational intellectuals. It's an ongoing cause of misunderstanding and conflict because e.g. I say something and people read it according to a non-literal, social meaning. Social thinkers aren't very connected to real reality because they're focusing on a whole separate reality.

Some of the messages I comment on in this post are from TheRat Chat.

This is from the public Fallible Ideas Discord.


curi:

looked thru #off-topic stuff. good tries alan, GISTE, J, anne. i thot u all did fine re logic. didn't address the bad faith tho which is why that didn't work

curi:

e.g.

You should read the response above if you need a good laugh.

curi:

when ppl are saying things like that, they are basically admitting their bad faith

curi:

the basic underlying problem is 2 different approaches to life: facts/logic/physical-reality and social climber approach, social rules, social dynamics, social metaphysics

curi:

is a whole separate way of thinking

curi:

it's the way of static memes

curi:

VSE is a cargo culter. his scientific logical mindset sounding statements are fakery for social posturing, which is why he can't engage with arguments, he just knows the kind of things to say but not how to understand arguments and respond substantively.

curi:

he thinks everyone is like this. doesn't know real scientists, logicians, etc., exist

curi:

this is one of the reasons they hate meta. meta lets you call them out. if you only respond with topical statements, it actually helps them make it look like a real discussion

curi:

if you call out non sequiturs or otherwise talk about their errors, or the overall situation, that's a threat to them

curi:

if you just make statements re e.g. logic of scientific discovery, they can derail and fake forever. that's what they know how to do.

curi:

ppl ignore stuff they find socially inconvenient and then get mad about "meta" if u comment on this

curi:

and they have double standards: they make all kinds of demands that you do things, answer things, etc. which are meta comments

curi:

anyway when you see the kind of people who never give direct answers to questions, and don't read statements in a way that's very well connected to the dictionary meanings of the words, it's b/c they don't think in terms of facts and logic, they think in terms of social meanings

curi:

that's the big divide in the world which prevents ppl from engaging with FI and ruins their minds

curi:

that's the key principle of irrationality

curi:

social metaphysics doesn't do error correction.

curi:

ppl who manage to do some programming, science, engineering, math, etc., often either 1) don't get along well with ppl socially (especially common with the best ones) or 2) it's an exception which they turn off when they aren't in professional mode, like DD talking about the scientists leaving the lecture hall, going to the meal hall, and then going into social dynamics mode and not being scientists anymore.

curi:

[1:18 PM] TheRat: I count it after one asks, and he refuses. For example, he has yet to explain his assertion that I don't know what progress in a conversation looks like. After I asked for an explanation he asked me for my mode. Which is irrelevant as to how he came up with the assertion himself.
[1:19 PM] TheRat: model*

curi:

that is factually false. i've already corrected him on his factual errors many times. he has an unlimited supply of them.

curi:

sometimes he changes mindset and is able to think some but right now he's in a hostile and social mode, so he loses touch with reality and its facts.

curi:

the English language is closely connected to reality, as i blogged about yest, which is why social metaphysicians won't use it right

curi:

that's why all the conflicts re words

curi:

and the misreadings which are egregious from factual pov

curi:

there isn't a proper name for the reality/facts/logic/science side

curi:

that identifies it

curi:

cuz it's the default and there is a broad assumption that everyone is on that side

curi:

scientific mindset is too narrow

curi:

i like the contrast of social reality/metaphysics vs. real reality/metaphysics

curi:

but that's custom terminology

curi:

"a fair amount" is literally a quantifier

curi:

the examples are endless

curi:

he's just cargo culting to sound like a logician

curi:

TheRat: The whole writing in a channel one can't respond to is the most bizarre behaviors I have seen him display.

calling stuff bizarre is an example of a social judgment. similar to abnormal.

curi:

the use of "whole" is social

curi:

there's a factual error too. can you spot it?

curi:

the mindset of behaviors being on display, and putting things in terms of who has seen who do what, is also socially oriented.

JustinCEO:

they can (and have) responded

curi:

indeed

curi:

it's a double pronged error. cuz he maybe meant can't respond in the channel, which is not what he said. would that be true?

JustinCEO:

well i read him as speaking of e.g. VSE and SS, who i think are locked to off-topic. so if that's what he meant (can't respond in the channel) it seems true, unless maybe TheRat was gonna serve as a go-between or they were to ask for an unmute

JustinCEO:

needed to think about that one lol

curi:

he said "one" not VSE or SS

JustinCEO:

ahhh

curi:

so consider if TheRat could reply here or not

curi:

talking about issues like discussion methodology or social vs. actual metaphysics is meta discussion. anything where you point out patterns of error instead of individual errors is meta discussion. their hostility to meta discussion is part of how they protect their racket. they have an unlimited source of errors and they don't want the pattern or source to be discussed.

curi:

also "behaviors" is an error, should be singular. and he left out the word "thing" after "respond to" to match "whole".

curi:

And i don't think the use of "most" is an honest, logical, factual thought.

JustinCEO:

i thought the complaining about channel thing was interesting cuz

JustinCEO:

in the face of the hostility level these folks have demonstrated

JustinCEO:

very standard approach would be to kick

JustinCEO:

for discord

JustinCEO:

and you figured out a way to not kick, to allow some discussion to proceed

JustinCEO:

and get flamed for it

JustinCEO:

TT

curi:

Thought: People are dishonest because (one reason, not only) honesty is related to reality and they are acting in social reality which has its own rules. They are often honest re social rules, in some sense, e.g. they will back off when 100 people say they're wrong (as SS accused me of being unwilling to do – he was calling me socially dishonest).

JustinCEO:

i thought the off-topic channel was a rather elegant/clever solution

JustinCEO:

like the server's purpose is for people interested in FI

JustinCEO:

including people who disagree, that's fine

curi:

yes tho i don't think rat was talking about OT channel and u haven't given a direct answer.

JustinCEO:

but it's not really about enabling hostile flaming, appeals to authority and active disinterest in this community's ideas...

JustinCEO:

oh re: answer you mean "[4:46 PM] curi: so consider if TheRat could reply here or not"

curi:

yes

curi:

the reason i can't back off to simpler stuff and get common ground with ppl is they back off to simpler social claims while i back off to simpler facts and logic.

JustinCEO:

well he could reply to stuff said here in another channel

curi:

could he reply in #contributors ?

JustinCEO:

oh lol

JustinCEO:

i see

JustinCEO:

yes he could

curi:

it seems like you thought the answer was "no" but didn't want to disagree with me, or assumed i had some other point in mind, so were avoiding direct response

JustinCEO:

ya i was leaning no but didn't wanna respond right away

JustinCEO:

actually

JustinCEO:

yeah

curi:

so why is the answer yes? u didn't explain.

JustinCEO:

right

JustinCEO:

he could become a contributor

curi:

yes. $2 for a month is not an impenetrable barrier to replying.

curi:

he also acts like i'm talking to him but not letting him reply

curi:

but i was talking to my contributors

curi:

my messages were aimed at the audience of ppl who like my stuff

curi:

back to main theme: notice how often they accuse me of social errors

curi:

i think lots of those are real opinions involving some (social) thought and not just lying in ways they hope to get away with.

curi:

whereas their accusations of factual errors are all cargo cult stuff, skin deep, no details, no examples, no analysis (sometimes a little bit of that stuff, which is always fake cardboard cutouts and they derail if you try to look behind the curtain)

curi:

TheRat: nobody cares about his alleged skills at coming to a conclusion. What matters is his explanations of his conclusions

curi:

notice the social emphasis re what people care about

curi:

and the disrespect for facts. "nobody" is egregiously factually false

curi:

second-handed.

curi:

but that sort of factually false exaggeration like "nobody" is allowable in social rules. it's actually encouraged. it's like saying "you were 3 hours late" to someone who was 2 hours late. if they correct you, they have to admit to being 2 hours late and spend time focusing attention on that fact which is bad for them. so it's lose/lose for them.

curi:

ppl will be like "holy shit how are you defending being 2 hours late?"

curi:

in social rules, a lot of stuff can be taken out of context. i think the context rules are different.

curi:

the social context is stuff like how prestigious someone is, not what is the parent statement of a statement.

curi:

social stuff has so much selective attention. hypocrisy is a facts and logic concern related to consistency and general principles.

curi:

the social world has other general principles like that low status is bad and that the appearance of effort is bad (with exceptions but it has a lot of generality).

curi:

but it doesn't worry about consistency like if you say X is bad when Joe does it, then X should be deemed bad when you do it. the person who is doing it is major differentiating context in social metaphysics. what you can get away with socially is a big issue based on your social status.

JustinCEO:

i was reading about an applied example of effort is bad

curi:

in some sense they see it as not being hypocritical b/c ppl with different social status levels doing the same actions are not the same things

curi:

just like we think "ofc joe can lift that and bob can't, joe is stronger"

JustinCEO:

the idea of "sprezzatura" as applied to male fashion

curi:

what ur allowed to do or say is based on ur social status level

curi:

and that's a thing they're always taking into account as relevant, differentiating context

JustinCEO:

That's the interesting dichotomy of good style: you want to look good but you also don't want to look like you're trying too hard.

There needs to be an element of nonchalance or sprezzatura (aka artful dishevelment) to your look.

curi:

going into details like node by node analysis of discussion is high effort

curi:

so the social ppl super resist it whether they could do it or not

curi:

not b/c they are avoiding effort itself – they will sometimes e.g. put lots of effort into days of derailing and BS, and make the conversations use more resources not less – but more b/c appearance of effort (as judged in a particular way that isn't very factually accurate) is socially bad and they internalized that social rule

curi:

you have a blindspot for curi

social statement

curi:

who is allies with who

curi:

I don't want to go off topic because as we have seen that never works.

social re what the group has seen. that's how something is determined to be true

curi:

it's so ingrained they are bad at hiding it

curi:

also let him defend himself. You shouldn't fight his battles

heh, nice example simultaneous to me saying they're bad at hiding it

curi:

and he goes and openly admits he views discussion as battle

curi:

and he's talking about the sources of statements, treating the same arguments as different depending on who says them

curi:

social metaphysics is very interested in sources of ideas. it needs those to judge ideas by the social status of the speaker.

curi:

and rat is saying: you shouldn't be allies with that guy cuz he's a pariah

curi:

he wouldn't tell a marxist you shouldn't fight marx's battles for him.

curi:

he wouldn't do it with a live and high status person either, like he wouldn't tell a DD fan not to fight DD's battles for him meaning don't argue in favor of FoR and BoI.

curi:

he's hurting you by making you his proxy, you aren't thinking for yourself.

rat wants to talk about who is doing what to who

curi:

who is whose ally and what is the relative status of the ppl in the group

curi:

[2:25 PM] TheRat: its not good

he thinks justin is being hurt by having a low status place in my group

curi:

he also claims i'm the actor here, the puppet master, that i'm "making" justin, which is a good example of lack of interest in physical reality and its facts

curi:

there's also the fact that i responded to rat and he ignored me

curi:

and responded to justin only

curi:

so who exactly chose that rat should be talking with J instead of me directly?

curi:

but rat is talking social facts, which don't care about facts

curi:

You've successfully derailed the conversation.

says the guy who won't answer one question, and claims to dislike meta discussion but keeps doing it

JustinCEO:

was gonna say this if Rat conceded assertions: even if curi did make explanationless assertions -- which I doubt, but let's stipulate it for the heck of it -- even if he did, and you also made assertions, @TheRat , then at the very best ya'll would be a symmetrical position re: making some assertions in the conversation. Reason doesn't say making assertions is okay cuz the other guy started it... but instead of trying to bridge the gap of (at best, for you) mistakes on both sides, Rat, you seem more into being mad, flaming people you disagree with as not thinking for themselves, etc.

JustinCEO:

writing here cuz rat didn't wanna engage

curi:

no one explains all their assertions

curi:

methodology is needed re which to explain, when, why

curi:

and there is the whole regress issue

JustinCEO:

i guess part of the issue is

curi:

when you explain one u make other assertions

curi:

is like how u can't define all the words

JustinCEO:

i view explanationless assertion as

JustinCEO:

something for which there is no explanation available

JustinCEO:

like a bluff

curi:

you need some common ground so u don't have to explain everything infinitely

curi:

and I refuse to move from that until he addresses it.

curi:

i factually already addressed it and rat just ignored me

curi:

he is making unexplained, unargued assertions

curi:

he's cargo culting what a principled stand looked like

curi:

but it's so divorced from reality

curi:

Since curi is clearly not afk but crying in his own channel,

curi:

social comment

curi:

it's all this social stuff about what people have done what actions and who the burdens should fall on, who deserves what treatment and which people should do what actions in the future

curi:

rat enjoyed sending my debate policy to ppl. did he think it was a good way to socially bully them? now when he has an issue with me he doesn't want to use it. does he think the policy is too unfair or unreasonable to use? but then why did he keep linking others to it to challenge them? more social dynamics crap going on?

curi:

enjoyed is the wrong word. i'd guess it's true but the issue is more that he seemed to think it was good and rational to do that.

curi:

Imagine if anyone thought that flew as an explanation. "Vaccines don't work." Why? "See my debate policy www.blogsmahfeels.com"

curi:

can you spot the 4+ social attacks here?

JustinCEO:

anti-blogger flaming, comparison to low status vaccine deniers, "mah feels" to claim the status of being more rational vs an emotional person

JustinCEO:

struggling to get to 4

curi:

u missed the biggest one!

JustinCEO:

😄 doh

curi:

he smeared me as a person who thinks differently than anyone else

JustinCEO:

ah

curi:

u cud phrase it other ways but he's saying ~everyone thinks i'm wrong and he put stuff blatantly in second handed terms of what ppl think

curi:

the msg has other issues like he's misrepresenting what i said

curi:

and the method of imagining a counterfactual world instead of analyzing

curi:

and the appeal to the obvious dumbness of the scenario rather than arguing why

curi:

and the not saying his conclusion: that would be bad

curi:

the structure is "Imagine if X."

curi:

with no conclusion statement b/c it's assumed to be so obvious it doesn't need saying

curi:

there's also no direct connection btwn the msg and what i said, and no attempt at one

curi:

that's only implied

curi:

he focused on social instead of logic

curi:

the point of it, the purpose, was the 4 social smears

curi:

@Freeze perhaps you can learn something about how and why ppl quit FI. or perhaps you can try to talk to him.

curi:

lol/sigh @ the unargued assertion (got schooled, which is also an anti-student social smear) in the msg accusing me of unargued assertions

curi:

@Mingmecha you also asked re ppl quitting FI

GISTE:

I was asking SS about what he meant by one of his statements that included the word “force”, where he misused the word. After some back and forth I asked him to restate without the word “force”. He was surprised that I wanted that. He said something like that he couldn’t do it cuz force is what he meant. That made no damn sense. Like he wanted me to make sense of his statement despite it containing a word that he knew didn’t really fit. And he put so much effort in the meta, effort that he instead could have put into restating without the word “force”.

GISTE:

I was pretty surprised by that.

GISTE:

I didn’t know that people did that.

GISTE:

So that’s something I learned. And SS said shortly after that convo that he thinks I didn’t learn anything from the meta discussion.

curi:

good answer GISTE


Elliot Temple | Permalink | Messages (9)

Discussion with "Critical Rationalist"

From Discord.

Critical Rationalist:

I’m new to this app. Someone recommended that I come here. I am pursuing a master’s degree in philosophy. My undergraduate degree was in psychology (concentration in applied theory and research). I would count myself as a Neo-Popperian (which should be unsurprising given my username). I look forward to tuning into the conversations you guys have.

curi:

What’s a Neo-Popperian?

Critical Rationalist:

Neo just means new or modified. It’s a shorthand way of saying “Popperian with some caveats”

Critical Rationalist:

Karl Popper influenced my epistemology more than any thinker, but I don’t think he was right about everything

curi:

What was he wrong about?

Critical Rationalist:

I think that the demarcation problem (insofar as it is a problem at all) is not best solved by a single criterion. Insofar as there is a correct definition of a term (like “science”), its definition will be cashed out in terms of family resemblance.

Critical Rationalist:

That’s probably my biggest disagreement with Popper. In Popperian fashion, I welcome criticism.

Critical Rationalist:

(I’m also happy to explain what I said with concrete examples)

Freeze:

What do you think of Popper's political philosophy?

JustinCEO:

@Critical Rationalist what do you think of Popper's critical preference idea

Critical Rationalist:

@JustinCEO @Freeze Very much on board with both his political philosophy and critical preference

curi:

Hi. Have you seen much of my stuff? I’m an Objectivist.

Critical Rationalist:

No, I haven’t

Critical Rationalist:

Do you have a blog or something?

Critical Rationalist:

(I know what objectivism is though)

curi:

Popper didn’t learn econ or give counter arguments but disagreed with free market minimal govt

curi:

How’d you find this server?

Critical Rationalist:

Someone recommended it to me

Critical Rationalist:

I met them at a party actually

curi:

https://elliottemple.com

curi:

Have you read Deutsch?

Critical Rationalist:

I see that you’ve talked with David Deutsch!

Critical Rationalist:

Yes! I love Deutsch.

Critical Rationalist:

He has never made explicit his ethical commitments, other than the fact that he is a) a realist, and b) not a utilitarian.

Critical Rationalist:

(Not in what I’ve read)

curi:

DD was an Ayn Rand fan and libertarian. He favors capitalism, individualism, minimal govt or anarchism. I got those ideas from him and his discussion community (which this is a continuation of, we had IRC back then) initially.

Critical Rationalist:

Well, no one is perfect.

curi:

What do you mean?

Critical Rationalist:

Sorry, that was a bad attempt at humour.

Misconceptions:

imagine this scenario. a bunch of kids are playing. 1 kid is mean to the others. so the other kids get away from him. the alone kid cries because he's now alone and he wants to play with the rest of the kids. the parent hears the crying of the alone kid and he learns about what happened. he doesn't hear about the part where that kid was being mean though. and the parent decides that the other kids have to include the alone kid. is this utilitarian ethics in action?

Critical Rationalist:

I have immense respect for DD. He was my introduction to Popperian thought. But I am not a Randian.

curi:

Is there a written criticism you think is good?

Critical Rationalist:

Of Randianism?

Critical Rationalist:

None that I’ve read

Misconceptions:

That action is not optimific. It leads to lower overall happiness, the kid getting further bullied, and other kids not enjoying his company. Not utilitarian

curi:

of Objectivism. the term "Randianism" is disrespectful FYI.

Critical Rationalist:

Sorry I knew the term objectivism, but was unaware that Randianism was viewed as a pejorative

curi:

np

Misconceptions:

What is wrong with Randian? is Popperian bad too?

curi:

Rand didn't want her name used that way

curi:

Is there something you think would change my mind if I read it?

Critical Rationalist:

I’ve never read any criticism of Rand

Critical Rationalist:

I’ll go further

curi:

why disagree then?

Critical Rationalist:

I actually think egoism (a family of ethical theories of which objectivism is a species) is perfectly defensible

Critical Rationalist:

I think that actions which maximize your own welfare can be called genuinely good.

Critical Rationalist:

Actions which maximize the welfare of others (even when they conflict with your own) can also be called genuinely good

Critical Rationalist:

How do you decide between the two axioms when they conflict (egoism and utilitarianism)? Henry Sidgwick says that although they agree in most cases, there is no rational standard for deciding between them when they conflict.

Misconceptions:

Is your claim that one must not disagree with theories until one has criticism of it? @curi

curi:

Why else would one disagree?

Misconceptions:

There are infinite many theories, you agree with all of them?

curi:

no

Misconceptions:

Henry Sidgwick says

Why should we care what he says?

Critical Rationalist:

We shouldn’t

Misconceptions:

so why bring it up?

Critical Rationalist:

I’m giving credit to where I got this idea from.

curi:

Is there a conflict you have in mind?

Critical Rationalist:

Do I give money to life-saving charities? That’s one salient example.

curi:

Like cancer research?

curi:

Or like handing out fresh water in africa? or what?

Critical Rationalist:

Like the latter. The case I have in mind is the Against Malaria Foundation. They make bednets that save lives inexpensively.

Misconceptions:

@Critical Rationalist btw there's multiple Utilitarianism versions. Not all are about GHP.

Critical Rationalist:

Yes. Eg preference satisfaction

Critical Rationalist:

I’m defending the version that is a) most well known and b) the one I agree with

curi:

I think Africa's problems are political and that kind of charity is like pouring water into a leaky bucket. The real issues here are more about tyranny, which isn't a conflict between individual or group benefit, it's bad in both ways.

Misconceptions:

You'd think with that name you'd agree with Popper's version of utilitarianism.

curi:

@Misconceptions hi, how'd you find this server?

Critical Rationalist:

I’m not a sycophant. I agree with theorists when their arguments work. I think Popper got some things wrong. Any fallibilist should expect their heroes to get some things wrong.

Misconceptions:

Did I accuse you of being a sycophant?

Critical Rationalist:

Fair enough. My use of the term was not needed.

Critical Rationalist:

I just wanted to clarify that I am not a Popper devotee or something.

Misconceptions:

Hi, @curi Reddit.

curi:

where on reddit?

curi:

Popper made comments advocating TV censorship and a 51% share of all public companies being owned by the government. I think some of his beliefs contradict others so you couldn't agree with him about everything even if you wanted to.

Misconceptions:

Your post against Ollie's ANTIFA vid.

curi:

ah cool. which subreddit was it posted to? i didn't see.

GISTE:

@Critical Rationalist this line of discussion is still pending: curi said: "I think Africa's problems are political and that kind of charity is like pouring water into a leaky bucket. The real issues here are more about tyranny, which isn't a conflict between individual or group benefit, it's bad in both ways."

Misconceptions:

You didn't post it?

Critical Rationalist:

Oh sorry I was typing and forgot to finish

curi:

i don't recall posting it but possibly i did in the past.

Critical Rationalist:

@curi Yes that’s an interesting factual claim. It might turn out that giving to charities in Africa is on the whole counterproductive. But suppose it factually turned out to be the case that on balance, donating to African charities contributed more to their welfare and did NOT detract from their political progress. Philosophically, what would you say then?

curi:

i think you could help more people, a larger amount, by addressing the political problems, rather than donating to the victims who are being victimized on an ongoing basis (which is why they're so poor). and i think that can be done with mutual benefit – more civilized, productive countries to trade with.

Critical Rationalist:

Yes, and you could be right about that factual claim.

Misconceptions:

Dancing around the question tho

Critical Rationalist:

Do you think there are no cases in which self-interest and benefiting others come apart? It would be a miracle if that was true.

curi:

i don't think conflicts of interest exist in any cases. so if you want me to replace this hypothetical with a different one where i agree there's a conflict, i can't do it.

curi:

this is a standard (classical) liberal position which is also held by Objectivism

curi:

my comments re replacing were addressed to @Misconceptions' comment about dancing.

Critical Rationalist:

I’m in a lab that is burning down. I’m dying of a disease x (I’m the only person who has it), and millions of people are dying from disease y. The lab has one room with the cure for disease x (last of its kind). The lab has another room which has the cure for disease y. I only have time to go into one room before the building burns down. Which room should I enter?

Misconceptions:

The point I think the KritRAT was making was that Donating your money in this hypothetical scenario does not further your selfish interests but it does help others. What do?

curi:

i also don't think it's necessarily sacrificial to donate to benefit others. if you value life and want to promote life, and combat mosquitos, i don't see anything wrong with that. i think it's a variety of shaping the world more to your liking.

GISTE:

hmm, i thought Misconceptions was talking to Critical Rationalist when he said the dancing comment

Critical Rationalist:

Ok, so what about the case I just described?

Misconceptions:

That sounds like a rejection of egoism. Value life = value other's lives.

curi:

the lab scenario is an emergency situation which is generally a bad way to understand how to live a good life in general in normal situations. i don't have strong opinions about it. i think an egoist can pick either room. you have to choose values to pursue in life. saving millions of people is a good accomplishment for a whole career. one can be happy with that.

Misconceptions:

That sounds like another tango my friend.

Critical Rationalist:

If you define egoism so broadly as to include living in accordance with the values you hold, then it becomes empty. Choosing literally any set of values and acting upon them would count as egoistic so long as you hold the values.

Misconceptions:

I am curious about your real answer regarding the lab situation too mr @curi

Critical Rationalist:

By empty, I mean it is not an alternative to other ethical systems. It doesn’t add new content or help you decide in moral dilemmas.

curi:

i don't accept all values, but i do accept valuing human life – it's a wonderful thing.

Critical Rationalist:

It’s not clear to me then in what sense you’re an egoist

curi:

i'm describing Rand's position

Misconceptions:

ok what door would Rand take?

Critical Rationalist:

If I’m not mistaken, Rand thought that altruism was unethical

curi:

yes, as do i

Critical Rationalist:

At least, altruism for its own sake

Misconceptions:

So Rand and curi would take the self cure.

curi:

no

curi:

have you read Atlas Shrugged?

Critical Rationalist:

If the other cure is not altruistic, then nothing is

Misconceptions:

The Plot Thickens

Misconceptions:

my reading of AS is irrelevant to whether you would take x or y door my good man.

curi:

AS contains a relevant scene

Critical Rationalist:

What counts as altruistic according to you curi?

curi:

i guess you guys would consider John Galt an altruist

Critical Rationalist:

I haven’t read as, but I’m curious about your take on this dilemma

Misconceptions:

well it seems that if you do not take the self cure, you're sacrificing yourself for the benefit of others

Critical Rationalist:

Literally

Misconceptions:

and you said you would not take the self cure

curi:

if you want to understand the Objectivist way of thinking, this is a bad place to start.

Critical Rationalist:

Curi, you said self interest and benefiting others NEVER conflict

Critical Rationalist:

And I used this to show why that claim is false

Critical Rationalist:

It is very easy to imagine scenarios where they come apart

curi:

do you agree that i'm right about all non-emergency scenarios? we should start with easier cases before harder ones.

curi:

then you will see the main ideas of the theory.

Critical Rationalist:

There probably are cases in the real world where they come apart, but that’s an empirical question not a philosophical question

curi:

and learn something about how to apply them.

Misconceptions:

To be clear, you would not take the self cure right?

Misconceptions:

your position regarding where to start has been noted

curi:

so for example, a common alleged counter-example is two men apply for the same job, and there's just one spot. do you think that's a conflict of interest?

Misconceptions:

I'd like to conclude the lab scenario

Misconceptions:

before we move on

Critical Rationalist:

Curi, I think it is a sign of philosophical skill to be able to apply your philosophy to fresh moral dilemmas, not just to dilemmas that you have practiced dealing with

Misconceptions:

I agree my critical rodent friend.

curi:

i did give you an answer, but if you want to learn about Objectivism you're taking the wrong approach.

Critical Rationalist:

It’s unclear to me how your answer is consistent with egoism

Misconceptions:

curi how is sacrificing yourself to save the lives of others not altruism?

Critical Rationalist:

I think the egoistic answer has to be self cure

curi:

right, so let's talk about how this works in general before trying to apply it to an edge case.

Critical Rationalist:

Or else it is not egoism except in a trivial sense

Critical Rationalist:

Sure, give your explanation of the General case

curi:

so for example, a common alleged counter-example is two men apply for the same job, and there's just one spot. do you think that's a conflict of interest?

Critical Rationalist:

I’ll grant that there isn’t

curi:

why isn't there?

Misconceptions:

I would not have abandoned your lab scenario for a previously practiced scenario so easily

Critical Rationalist:

I could concoct different explanations. eg I would rather live in a society where employers evaluate on merits

Critical Rationalist:

I agree, it is easier to give an account of why self interest and benefiting others converge in those cases

Critical Rationalist:

Misconceptions: I try to be charitable

Misconceptions:

Charity is evil!

Critical Rationalist:

I don’t play debate games, I’m interested in what the other person thinks

Misconceptions:

Get you some bootstraps

Critical Rationalist:

Especially someone who knows David Deutsch personally (that’s very cool byw)

Critical Rationalist:

*btw

curi:

yes, employers evaluating on merits is important. many benefits. and part of the mindset here is wanting good general policies rather than insisting on short term personal benefit in the immediate situation, regardless of overall consequences. right?

curi:

in the lab scenario, i don't see a clear principle (like evaluating job candidates on merit) that would be violated by either choice. yeah dying sucks but we don't have immortality yet anyway and it's a major accomplishment to pursue and helps shape reality more to my (non-arbitrary, i claim) preferences. on the other hand, nothing was specified in the example about me having any obligation to those people. like it isn't my job to save their cure. i don't have a contract making this part of my job duties. i don't know why all these people have allowed their lives to be dependent on this one lab without any backup copies of the info, but it seems unreasonable.

Critical Rationalist:

You say that your preferences for human life are non-arbitrary. Say a bit more about why they are non-arbitrary

curi:

i think promoting and contributing to a beginning of infinity and the growth of knowledge is good. also e.g. i value the kind of society which allows men to live peacefully, cooperate voluntarily, and control nature. is that enough or did you want a different type of info?

Critical Rationalist:

Yes that’s exactly what I want

Critical Rationalist:

Very good. So, you think all of those ends are good and worth pursuing. Furthermore, you think they are good and worth pursuing in a case when they conflict with self-interest. That’s not a problem! I just don’t think you’re really an egoist (but I don’t care much about the terms). You think it is empirically the case that in most cases self-interest and benefiting others converge on the same answer, but in the case where they don’t, you go for benefitting others

curi:

i didn't say what room i'd pick. and i think by your standards Rand isn't an egoist either. John Galt said he'd kill himself if they threatened Dagny's life (to pressure and control him). he didn't put his own life first no matter what.

Critical Rationalist:

Interesting.

Critical Rationalist:

So yes I don’t care what term we use. Rand would (according to that) not be an egoist in the traditional sense.

Critical Rationalist:

The fact that it is even a question for you problematizes your self-description as an egoist. Maybe you should define egoism

Critical Rationalist:

Ben

Critical Rationalist:

Brb

curi:

I guess you'd also think an egoist in the military must betray his country and comrades if he gets into a very dangerous situation where he thinks that'll (significantly? or even 1%?) improve his odds of personally living?

curi:

whereas i think you can sign up for the military. it's risky but it's an option. and if you do, you should follow general policies like your contract with your employer and your duties to your fellow soldiers to follow military strategy instead of getting them killed. If you don't want to risk your life, don't sign up. but if you do sign up and follow the basic rules you agreed to, it's possible to succeed and have a good life. it's not hopeless. it's a way to make a try for it. so it's ok if you don't have a better option.

GISTE:

i don't recall curi calling himself an egoist

curi:

Egoism is a term used by Objectivism. I consider it an overly fancy word but it's OK. The basic point is the self is very important and valuable, and pursuing self-interest is good. But the point is not to maximize years of life regardless of all other considerations like quality of life and the state of the world.

curi:

If that was the meaning, an egoist would have to get all his groceries delivered to reduce the risk of dying in a car accident.

curi:

I don't know anyone who advocates that. Certainly not Rand.

curi:

Egoism means e.g. that it's not my duty to sacrifice my preferences or values to other people's preferences or values. I should reject that. But it doesn't mean rejecting all values broader than my continued physical existence. An egoist is allowed to care about e.g. colonizing the stars and spend money towards that goal even if he doesn't expect to see it, and even though not spending that money on medical care lowers his life expectancy a little.

curi:

An egoist also may value his model trains above additional medical care.

GISTE:

so traditional egoism is nonsense like how the traditional selfishness concept is nonsense?

curi:

@GISTE take a look at info like https://plato.stanford.edu/entries/egoism/ and see if you can find it saying to maximize life expectancy over all other values

Critical Rationalist:

@curi thank you for the replies

Critical Rationalist:

I really should go to bed now, but I definitely have more to say

Critical Rationalist:

@curi I read through your comments again. If egoism (for you and Rand) only means that pursuing self-interest is good and worth doing, I’ll accept your definition

curi:

Did someone link squirrels yet?

Critical Rationalist:

But someone could still say “I value what’s in the Bible and want to follow it”

Critical Rationalist:

Egoism (in this broad sense) has nothing to say to such a person

JustinCEO:

http://curi.us/1169-morality

Critical Rationalist:

Was the Carlo Elliot dialogue a response to me?

JustinCEO:

It's squirrel thing curi mentioned, and is relevant to morality discussion

curi:

It’s DD’s and my view rejecting moral foundationalism. Mostly afk.

Critical Rationalist:

Conveniently for y’all I’m not a moral foundationalist

Critical Rationalist:

Does anyone have thoughts on Popper’s solution to the problem of induction? I think it is very compelling. His approach is to accept Hume’s conclusion that it is invalid to draw conclusions about the likelihood of events in the future based on observations of the past. He says that we instead have various competing theories which are criticized and (when applicable) tested. The theories which best survive our attempts at refutation, we tentatively accept (for the time being).

JustinCEO:

I dont think I really got crit of drawing conclusions on past data until someone explained that the reason we expect sun to rise is not cus we've seen it rise a bunch of times but because we have an explanatory model of sunrises. Change the model or some variables in it (cuz eg sun expanding in later stages of being a star or whatever) and your expectation of what will happen changes

Critical Rationalist:

Yes exactly

Critical Rationalist:

The model is what is held up to empirical tests and tentatively accepted in the absence of disconfirmation.

curi:

re egoism, Objectivism is a system. i'm not very interested in terminology, but the overall ideas about how to think about morality, what sort of values are good and bad, what sorts of methods of achieving values are effective and ineffective, etc. when you look at the whole picture here, you find substantial disagreements with most people. the exact nature or starting point of those disagreements is hard to discover because most people don't organize their moral thinking much and don't want to go through the issues point by point (and if they do that, it often changes their view, which complicates finding out what they thought before).

curi:

re solution to induction, i think it's important to talk about how conjectures and refutations is an evolutionary process and evolution is the only known reasonable theory of how new knowledge is created. induction never actually offered a rival theory to evolution. also, although I think Popper's idea is good, and adequate to solve the problem of induction narrowly, i think it's missing some things. specifically the idea of best surviving attempts at refutation is vague and leaves people using a lot of intuition to fill in the gaps.

Critical Rationalist:

@curi I’m not interested in terms either, so that’s a fair response. Does objectivism give us a standard by which to decide between values that people hold? For example, if I (as a utilitarian) value maximizing happiness (everyone’s counts equally), does objectivism have anything to say to me? If so, what?

curi:

yes Objectivism has a lot to say about what values to hold. as does BoI, btw: don't hold values incompatible with error correction, don't hold values incompatible with unbounded progress.

Critical Rationalist:

Well... that sounds more Popperian than objectivist

curi:

i think you misread

Critical Rationalist:

But ok, I’m a utilitarian. I believe in error correction and unbounded progress.

Freeze:

i think objectivism might say don't hold values that sacrifice your preferences for others'

Freeze:

because they are counterproductive

Critical Rationalist:

Well, as a utilitarian I sacrifice my happiness for others, but since I want to do that, I suppose in a certain sense I’m not sacrificing my preferences.

Critical Rationalist:

Utilitarianism (and many other ethical systems) seem compatible with @curi’s standards

Freeze:

Jordan

jordancurve:

@Critical Rationalist Any comment as to your alleged misreading of curi's comment on values?

Critical Rationalist:

Where was that alleged?

jordancurve:

https://discordapp.com/channels/304082867384745994/304082867384745994/662830621898571806

Critical Rationalist:

sorry there’s a lot to keep track of

curi:

i'm going to be mostly AFK soon FYI

Critical Rationalist:

Ok sure, I’ll grant that.

Critical Rationalist:

I’ll grant his standards are objectivist

Critical Rationalist:

I maintain that they are compatible with many (maybe most) ethical theories

curi:

my 2 examples were from BoI not Oism

Freeze:

yeah

curi:

they are Oism-compatible though.

Critical Rationalist:

Right that’s what I thought

Critical Rationalist:

Boi=Deutsch

Critical Rationalist:

Anyways, breaks over

jordancurve:

@Critical Rationalist If that's what you thought, then why did you write "that sounds more Popperian than objectivist"?

Critical Rationalist:

I’ll see y’all later

Critical Rationalist:

I count Deutsch as a Popperian (as would he)

Freeze:

i think the misreading allegation had to do with you expecting them to be more oist when curi said them after the BoI part. were you more asking for objectivist values that aren't Popperian or Deutschian?

Critical Rationalist:

But yes Deutschian would have been more accurate

curi:

Oism says the way for individuals or society to get ahead is by the pursuit of individual self-interest in peaceful ways. this is how to help others. trying more directly to help others is broadly (not always) counterproductive and people shouldn't be guilted into it or told it's a moral ideal.

Freeze:

my disagreement isn't about your use of Popperian over Deutschian

jordancurve:

@Critical Rationalist curi said, paraphrased: Boi suggests these values. You replied, "that sounds more Popperian than objectivist". That still looks like a non-sequitur to me, most likely due to a misreading.

curi:

Oism rejects ideas like that the profit motive, or greed, are inherently anti-social or bad for anyone, and rejects seeing the purpose of my life as being to help others instead of to help myself.

JustinCEO:

re: moral ideal, i think someone said earlier (mb @Critical Rationalist ? i'm not sure, correct me if wrong) that the strong form of altruism was rare. but even holding altruism as a moral ideal has a big effect on ppl's thinking

curi:

Oism broadly thinks each person should look out for himself and a few people who play a substantial, valuable role in his life (family, close friends), and take personal responsibility for getting good outcomes for himself, and people should cooperate especially via the economic division of labor and specialization, and also in other voluntary ways (like friendship) when they want to. this is not how most people see life.

JustinCEO:

even if ppl don't actually practice altruism consistently, it still has a (bad) effect on the world

curi:

Oism says e.g. that Bill Gates did more good for the world as microsoft founder/CEO than with his charity efforts afterwards.

Augustine:

Why is that?

curi:

when you trade for mutual benefit, it's hard to screw that up. both sides think they are benefitting. they can make mistakes but it's a good thing similar to solo actions that you think benefit you. and with business you have tools like profit and loss to help you judge what's effective and efficient. when you do charity you lose those mechanisms to help you get good outcomes. it's hard to know what's a good use of resources. it's hard to measure. the recipients can say "sure this is good for me" but it's hard to tell how good it is for them and compare it to alternative uses of resources. the free market system compares resource uses to alternatives and does optimization there.

curi:

and competition between charities for fundraising dollars is a different sort of thing (more marketing based for example) than competition by companies for customers.

Critical Rationalist:

@curi given your description of oism, I think it is an empirical claim not a philosophical one. It might be true (and likely is to a large extent) that self-interest produces more benefit than being altruistic. But that’s a claim for economists and sociologists to confirm or disconfirm.

Critical Rationalist:

I have to go again, but that would be my initial reaction

curi:

Economics is primarily a matter of logic and math, not empirical

Critical Rationalist:

There is behavioral economics, which is more empirical

curi:

That isn’t where Oism gets these ideas

Critical Rationalist:

To the extent that economics is insufficiently empirical, I would just amend my comment to say “it is for better economics to corroborate or disconfirm”

Freeze:

DD:

The whole concept of bias is a misconception. So-called 'biases' are just errors. Thinking is error correction—which biases are not immune to.

Hence patterns of errors in the outcomes of thinking are not explained by biases but by whatever is sabotaging error correction.

Freeze:

I also thought of behavioural economics when you mentioned sociology alongside economists

Freeze:

but I've been questioning that stuff lately

Freeze:

a lot of it seems based on ideas that contradict CR epistemology

Freeze:

in terms of knowledge and how it's created and the role ideas play in minds

Critical Rationalist:

Have to go again unfortunately. I’ll try to return tomorrow

curi:

Bye CR

Critical Rationalist:

@GISTE “do you agree with these 2 interpretations of your view? (1) a headache has inherent negative value and that it's automatically bad. (2) if i have a headache, and choose to not immediately take pain meds because i prefer to continue philosophy discussion for a few more minutes before taking pain meds, that is a sacrifice.”

Critical Rationalist:

Is this a true story?

Critical Rationalist:

But yes that is a sacrifice. If the pleasure derived from philosophy discussion outweighs the headache, then it would be prudent to make the sacrifice

curi:

Do you think all purchases are sacrifices because you give up money?

Critical Rationalist:

In a trivial sense, sure

Critical Rationalist:

But they are worthwhile sacrifices (sometimes)

curi:

In the same sense as what you just said re headache?

Critical Rationalist:

Yes exactly

Critical Rationalist:

Though the pleasure created could be in others or long term

curi:

I think it’s an error to view all action as sacrifice just because some hypothetical other scenario would be superior.

Critical Rationalist:

No I’m not using sacrifice in that sense

Critical Rationalist:

I would say sacrifice is giving up some good for an end

Critical Rationalist:

The end could be such that it makes the sacrifice worth it, or not

GISTE:

Is that ends justifies the means logic ?

Critical Rationalist:

Absolutely

curi:

All action involves giving up alternatives

Critical Rationalist:

What else could justify the means?

Critical Rationalist:

In other words, I don’t see how one can show that some means are bad unless they tend to have bad consequences

Critical Rationalist:

People sometimes say “ends justify the means” to defend lying, violence etc

Critical Rationalist:

But those “means” are bad precisely because they have bad consequences

curi:

Busy soon btw.

Critical Rationalist:

No worries

curi:

Not caught up much but:

There are two different ways an idea can be empirical.

1) The idea was inspired by evidence. We used evidence to help develop the idea.

2) The idea makes claims about observable facts, so we could use evidence to test the idea.

The main ideas of economics, as I view it, are neither 1 nor 2. They are about logical and mathematical analysis of abstract, hypothetical situations. The starting point of economics isn't seeing what sort of economies worked well in the past and trying to optimize that. It's theoretical analysis of certain ideas and principles.

Economics is very hard to test because we can't do controlled experiments for most issues. Even if we could test, it's often not the best approach, as DD pointed out: https://curi.us/1504-the-most-important-improvement-to-popperian-philosophy-of-science

Some people try to make economics more empirical. For example, if they want to know about minimum wage, they look at cities, states or countries which created or changed a minimum wage and then look at the results (and sometimes they can find two similar places, and one creates a minimum wage, and one doesn't, and do a comparison). I reject this sort of empirical approach to economics in general. Not 100% useless but generally not much use.

If you want to understand minimum wage, you should consider concepts like supply and demand, and do mathematical calculations to see what they mean in some simplified scenarios.

And when rival economists disagree, the way to resolve this isn't by getting more data. A better approach is to figure out what's different about each of their systems and look for logical errors.

curi:

Applying economics to real world scenarios has various difficulties but can be done to a reasonable approximation. With minimum wage, after figuring out its consequences in a simple scenario, we can play with that scenario. Start adding extra complications and see what changes. E.g. increase or decrease the ratio of workers to employers and see if minimum wage has different results. Or you could add part time workers to your model, or add a simplified stock market, or whatever you think is relevant. That lets you learn about the connections between minimum wage and the other stuff you model.

You can also see how it's a form of price control and follows the general logic of price controls (price maximums cause shortages when low enough to matter; price minimums cause surpluses when high enough to matter – minimum wage causes a surplus of labor (unemployment) by preventing the price of labor from reaching the market clearing price). You can also understand why that is based on simple principles. The principles are things like what a trade is, what the division of labor is, what supply and demand are, what a buyer and a seller are, etc.

For complex real situations, we can see them as similar to an abstract concept – an inexact but pretty good fit – except for e.g. 8 extra factors that we identified as potentially important differences. Then we can consider the effects of each of the factors. And then we can often make some empirical predictions. But if we're wrong, while it can be an error in our economic logic, it's often an error somewhere else, like there was another factor in the real situation, which is important to the result, but which we didn't take into account.
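
The price-floor logic curi describes can be sketched numerically. This is a minimal illustration with invented linear supply and demand curves; none of the numbers come from the discussion:

```python
# Toy labor market with made-up linear curves (illustrative only):
# demand for labor falls as the wage rises, supply of labor rises with it.

def labor_demand(wage):
    """Jobs employers want to fill at a given wage."""
    return max(0.0, 1000.0 - 20.0 * wage)

def labor_supply(wage):
    """Workers seeking jobs at a given wage."""
    return max(0.0, 40.0 * wage)

# Market-clearing wage: supply equals demand, 40w = 1000 - 20w, so w = 1000/60.
clearing_wage = 1000 / 60  # ≈ 16.67

# A minimum wage set above the clearing wage acts as a binding price floor.
minimum_wage = 25
surplus = labor_supply(minimum_wage) - labor_demand(minimum_wage)

print(round(clearing_wage, 2))  # 16.67
print(surplus)                  # 500.0 workers who want jobs but can't get them
```

Adding the extra factors curi mentions (part-time workers, a different ratio of workers to employers, etc.) would just mean extending these two functions and recomputing.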

curi:

same issue with Objectivist morality and self-interest. we get conclusions like that by thinking more like this https://elliottemple.com/essays/liberalism rather than by empirical observation.

curi:

“Critical Rationalist: If you all believe so much in the power (and easiness) of rational criticism, I would like to see someone defend @curi’s and @Freeze’s original claims which led to this.”

which claims by me?

GISTE:

@curi, maybe cr was talking about this https://discordapp.com/channels/304082867384745994/304082867384745994/663051081818963991

curi:

“I just think as a practice it should be less common. I want @curi to rule out the following claim with philosophy: ‘everyone would be better off if they were altruistic’.”

that claim is too vague to begin criticism.

curi:

it's ambiguous between: each individual would be better off if he did it himself, or everyone as a group would be better off if everyone did it

curi:

it doesn't specify what is and isn't altruistic behavior

curi:

and it doesn't specify what better off means

curi:

also i would expect to use economics in my response and i don't know which economics is accepted or denied.

curi:

like are we accepting the benefits of private property, division of labor, capitalism and trade? or not? if not, what is claimed instead?

curi:

which of the claims about those are errors and why?

curi:

if we accept that stuff, how does altruism interact with it? like are some trades altruistic? which ones?

curi:

@Critical Rationalist

curi:

also if i missed some major point to respond to, let me know, cuz i'm not gonna be reading everything. (this applies to everyone). if you really want my attention you can use curi or FI forums btw. i encourage ppl to do that but some seem to prefer discord without much explanation of why. http://fallibleideas.com/discussion

Critical Rationalist:

@curi

“1) The idea was inspired by evidence. We used evidence to help develop the idea.
2) The idea makes claims about observable facts, so we could use evidence to test the idea.
The main ideas of economics, as I view it, are neither 1 nor 2. They are about logical and mathematical analysis of abstract, hypothetical situations.”

Yes, but those logical models depend on assumptions about the world that are 2. The claim that humans are best approximated as rational self-interested utility maximizers is a claim economists could be wrong about. We might not have evolved to be like that. To the extent that that assumption (the rationality assumption) is violated, economic models will be less than perfect. Surely the point of economic models is to predict real economic behavior. Economic models are not toys for smart people to play with.

Critical Rationalist:

“Some people try to make economics more empirical. For example, if they want to know about minimum wage, they look at cities, states or countries which created or changed a minimum wage and then look at the results (and sometimes they can find two similar places, and one creates a minimum wage, and one doesn't, and do a comparison). I reject this sort of empirical approach to economics in general. Not 100% useless but generally not much use.”

Our disagreement might run deeper than I thought, because that is exactly the sort of economics I’m in favour of. I’m also in favour of abstract mathematical modeling, but if the modeling does not approximate real world exchange of goods, then it is useless. It is, as I said, a toy for smart people to play with. The proof of the pudding is in the eating.

Critical Rationalist:

“If you want to understand minimum wage, you should consider concepts like supply and demand, and do mathematical calculations to see what they mean in some simplified scenarios.”

There is nothing wrong with those concepts, but those mathematical calculations include empirical assumptions about human nature that could be false. If we evolved to NOT be rational self-interested utility maximizers, the equations will just be false (or at least, imperfect approximations).

“And when rival economists disagree, the way to resolve this isn't by getting more data. A better approach is to figure out what's different about each of their systems and look for logical errors.”

Or look for empirical assumptions that are false. Which of the following claims do you disagree with:
1. The goal of economics is to describe and predict actual economic interactions
2. Actual economic interactions are affected by human nature
3. Economic models make assumptions about human nature
4. Those assumptions could be false for accidental reasons about how we happened to evolve

“that claim (that a world of altruists would be better) is too vague to begin criticism.
it's ambiguous between: each individual would be better off if he did it himself, or everyone as a group would be better off if everyone did it”

You as an objectivist believe that it is the case that a world wherein people were altruistic would be a worse world. What did you have in mind when you asserted that? Tell me what you mean by altruistic, and we will select people who fit the description. No matter what description you give, it quickly becomes an empirical question whether people who fit that description interact better and produce more wealth.

curi:

Semi afk. Didn’t read @Critical Rationalist messages yet but I’m thinking we should narrow down the discussion and pick a specific point to focus on and try to reach agreement about. Make sense to you? Topic suggestion?

Critical Rationalist:

@curi I agree

Critical Rationalist:

I just had left a lot that I hadn’t responded to from 3 people so I just did a volley

Critical Rationalist:

If I had to narrow down what I see as my main disagreement with you, it would be this: the exact form that human nature has taken is a contingent fact of evolution. We are a certain way, and we could have easily been different if evolution had gone differently. Given that, we cannot know a priori what human nature is like (contingent facts have to be discovered through empirical testing). Your claims about which ethical systems will produce more wealth or welfare depend on assumptions about human nature. Therefore (given that assumptions about human nature cannot be known a priori, because they are contingent results of evolution), your claims about the effects of ethical systems cannot be known a priori.

Critical Rationalist:

Everything I said above would also apply to economic theories.

jordancurve:

@Critical Rationalist I don't know what you mean by "human nature". The closest meaningful term that comes to mind is "universal knowledge creator", but since you're familiar with Deutsch, I guess you would have used that instead if that's what you meant.

Critical Rationalist:

No. I mean things like how we respond to incentive structures, under what circumstances we will cooperate or not cooperate, what makes people respond tribalistically or not, whether people develop better under strict parenting or permissive parenting

Critical Rationalist:

Those are all relevant to what the impact of different ethical systems will be

jordancurve:

So... human nature = the way people think about various ideas in Western culture today?

jordancurve:

I don't think that's what you mean, but it's again my best guess at something coherent (to me) that roughly matches (maybe) what you're talking about.

Critical Rationalist:

What do you think I mean?

jordancurve:

idk, the closest match I have so far is "the way people think about various ideas in Western culture today"

Critical Rationalist:

If you think it’s incoherent, point out how

Critical Rationalist:

Did I mention or imply western culture?

jordancurve:

I don't understand what you're talking about well enough to criticize it other than for being vague (to me)

Critical Rationalist:

I just listed some traits

jordancurve:

Does traits = ideas?

Critical Rationalist:

It is an open empirical question to what extent humans develop better under strict parenting, for example

Critical Rationalist:

Or... to what extent do we naturally feel empathy for suffering strangers

Critical Rationalist:

Some primates are fairly empathetic

Critical Rationalist:

Others are not

Critical Rationalist:

Which kind are we?

Critical Rationalist:

Open empirical question

Critical Rationalist:

I could give examples like this all day

Critical Rationalist:

And the answers to these questions really matter when we try to design societies

Critical Rationalist:

@curi these examples are relevant to our topic

curi:

i regard my main, important ideas about economics or parenting styles to apply to aliens too, not to be human-specific. do you disagree with that?

Critical Rationalist:

I stand by the idea that economic models can only be true to the extent that their assumptions about human nature are true (eg that humans or aliens are self-interested rational utility-maximizers). Whether or not those assumptions are true is an accidental fact of evolution. There is no law of nature that says humans or aliens must be a certain way. It depends what selection pressures we happened to face.

curi:

i think the relevant assumptions about human nature are very limited. e.g.: intelligence. made of matter. have preferences.

curi:

separate individuals

curi:

no magic

Critical Rationalist:

Well, even those are empirical claims (albeit ones that are so obvious that it is not worth challenging them)

curi:

i'm not saying 100% non-empirical

Critical Rationalist:

Good

curi:

tangentially i actually think the laws of logic, epistemology and computation are all due to the laws of physics, and so are technically empirical matters.

Critical Rationalist:

Do you think the assumptions you listed are premises from which you can deduce logically (ie with no empirical social science data) that egoism works better than altruism in society?

Critical Rationalist:

And are you so confident in this deduction that no amount of empirical social science data could change your mind?

curi:

i probably left out a few premises and i use critical argument in general not strictly deduction, but basically yes.

Critical Rationalist:

Well, I’m afraid you’ll have to spell that out

curi:

big clashes with empirical data would result in me trying to figure out what's going on. lots of the sorts of studies people do today could not change my mind.

Critical Rationalist:

Explain to me the transition from those assumptions to egoism works better

curi:

or i should say, not with the sort of results they actually get. i guess if a minimum wage study found wages went up a trillion times in a city (after inflation adjustments) i'd start investigating wtf happened there.

Critical Rationalist:

Do you mean a trillion fold or a trillion times in a row?

curi:

fold

Critical Rationalist:

Your critical argument is so powerful that you need a trillionfold increase in wages to even consider that your argument is wrong?

curi:

that was an example not a minimum

Critical Rationalist:

What would the minimum be

Critical Rationalist:

Ballpark

Critical Rationalist:

Although frankly

Critical Rationalist:

To me

Critical Rationalist:

What matters more is not the size of the increase

curi:

varies heavily by context. just if something really unusual happened, which does not appear to be explainable by any of the typical factors, i'd be curious what caused it.

Critical Rationalist:

But the number of replications

Critical Rationalist:

If dozens of different natural experiments were done (ie neighbouring states or provinces with minimum wage increases) and all of them found a particular result, that would count more than one natural experiment with a huge effect size

curi:

if they all got 10% wage increases it'd mean nothing to me

curi:

but a trillion percent increase is very hard to explain by any explanations i already know of

Critical Rationalist:

But if it is just one natural experiment

Critical Rationalist:

It could be so many other factors

Critical Rationalist:

Replications are (rightly) much more impressive to social scientists than single studies with big effects

Critical Rationalist:

It is easy to get big effects by chance with a single study

curi:

you're speaking general rules of thumb. i'm not debating that.

Critical Rationalist:

It is much harder to get small effects that replicate really well (and btw, 10% wage increase is huge)

curi:

i understand what you're saying

Critical Rationalist:

Ok, I want you to spell out this critical argument

Critical Rationalist:

Because... you’re hypothetically willing to discount dozens of replicated natural experiments on the basis of this argument

Critical Rationalist:

It better be airtight

curi:

do you have an opinion of minimum wage laws? do you know much econ? is it a good topic to use? may afk any time btw

Critical Rationalist:

Well, I guess I originally had in mind the argument that egoism makes society better

curi:

my arguments re egoism involve econ, that isn't a separate topic

Critical Rationalist:

I figured they’d be related

Critical Rationalist:

Well, I would like to see it spelled out

Critical Rationalist:

I suspect I’ll be able to follow without a technical understanding of Econ

curi:

ok. just to know where to start, what is your current view on min wage?

curi:

yeah my econ arguments aren't especially technical

Critical Rationalist:

Oh I’m very open minded about this

Critical Rationalist:

There are some natural experiments of the sort I’m describing that indicate min wage increases employment

Critical Rationalist:

But they are few in number

Critical Rationalist:

I accept that the models generally predict the opposite

Critical Rationalist:

I’m not here to defend any particular view of economics

curi:

ok

Critical Rationalist:

I’m not even attacking the idea that egoism harms society

Freeze:

around here was a minimum wage discussion between Andy and curi that was interesting: http://curi.us/2145-open-discussion-economics#10988

Critical Rationalist:

I’m attacking the idea that the claim that “egoism helps society” can be known a priori

curi:

to be clear, my claim: not strictly a priori, but approximately. we don't need to do empirical studies about it, and it doesn't depend on parochial details like that our planet has oil or trees on it.

Critical Rationalist:

Not those parochial details

Critical Rationalist:

But details about the kind of creatures humans are

Critical Rationalist:

How empathetic are we

Critical Rationalist:

How rational are we

Critical Rationalist:

Do we engage in systematic errors of reasoning

Critical Rationalist:

How selfish are we under normal conditions

Critical Rationalist:

(not “how selfish should we be for optimal results”)

Critical Rationalist:

We are primates. The product of an unguided process. It really matters what kind of creatures we are.

curi:

yeah, my arguments don't use claims about those things as premises in the usual sense. however, i do have some claims about the irrelevance of standard claims along those lines.

jordancurve:

I regard people's degree of empathy and rationality as a product of the ideas they hold, not as some kind of immutable property of humans.

jordancurve:

Contra "the kind of creatures humans are"

curi:

yeah that. it's part of the universal knowledge creator view of BoI.

Critical Rationalist:

The extent to which empathy is caused by their ideas is a question of psychology and neuroscience

Critical Rationalist:

In fact, I think there is good reason to think that most of our responses are the result of automatic unconscious processing

Critical Rationalist:

But even if you don’t agree with that, how can you rule it out? It is certainly possible that unconscious automatic processing (NOT ideas) leads to empathy. How can you rule that possibility out?

Critical Rationalist:

How can you rule out that empathy is in the non-idea part of unconscious processing?

curi:

This internet is cutting out. The quick outline is you do epistemology first and then use that to evaluate models [of] minds. I’ll give some details but not today.

Critical Rationalist:

I definitely want that spelled out when curi comes back

Critical Rationalist:

Our minds could have evolved many different ways

Critical Rationalist:

Evolution is a contingent process, with lots of random events and shifting selection pressures

Critical Rationalist:

There is no way to sit on your armchair and figure out how evolution happened

Critical Rationalist:

And our minds are products of evolution

jordancurve:

Empathy involves understanding other people. If our ability to empathize were limited by non-universal hardware (which I take to follow from the hypothetical that empathy is part of "the non-idea part of unconscious processing"), then there could exist situations in which it would be impossible for us to understand the other person enough to empathize with them. This would contradict the unbounded reach of human understanding that is argued for in The Beginning of Infinity. Therefore our ability to empathize is not controlled by non-universal hardware.

jordancurve:

Or at least, the final sentence follows unless there's some other objection I didn't think of, which is quite possible. 🙂

Critical Rationalist:

Ok, maybe there are some situations in which our current empathetic capacities (which we’ll suppose are constituted of non-universal hardware) cannot empathize with others

Critical Rationalist:

But maybe our rational capacities do have the unbounded character Deutsch speaks of. I’m willing to grant that

Critical Rationalist:

But I see no contradiction between supposing that a) empathy is non-universal and b) rationality is unbounded

curi:

Do you think you understand and agree with what BoI says about universality and jump to universality?

jordancurve:

To the extent that empathy is a matter of ideas, any hard-wired limitation on human empathy contradicts the universality of human thought argued for in BoI. @Critical Rationalist

Critical Rationalist:

The claim that empathy is a matter of ideas is precisely what I’m challenging

Critical Rationalist:

I have not read BoI in its entirety. The universality chapter was one of the ones I skimmed

jordancurve:

If you're looking for things to argue with or learn about, curi has collected a list of unrefuted and potentially controversial ideas here: http://curi.us/2238-potential-debate-topics

Critical Rationalist:

@jordancurve I went through that page and identified around 50 claims. I disagree with around 35 of them (quite strongly in most cases)

curi:

Most of these things don’t have an explicit Popper view, have to apply Cr principles

Critical Rationalist:

@curi If you know which claims on your list are DDs views, I’d be interested in knowing

Critical Rationalist:

These are my core commitments, and the thinkers who influenced me:

Critical Rationalist:

Critical rationalism* (epistemology): Karl Popper, David Deutsch, Alex Rosenberg (helpful critic)

Utilitarianism, moral realism* (ethics): Henry Sidgwick, Joshua Greene, Peter Singer

Metaphysical naturalism (metaphysics): Sean Carroll, Dan Dennett, Alex Rosenberg

Social democracy, centre-leftism (politics): Karl Popper, Noam Chomsky, Thomas Sowell (helpful critic)

Compatibilism (free will): Dan Dennett, David Hume, Giulio Tononi

Panprotopsychism (consciousness): David Chalmers, Christopher Koch, Giulio Tononi

Evolutionary psychology* (human nature): David Buss, Steven Pinker, David Buller (helpful critic)

* with caveats

curi:

Most are. Is there a particular thing you’re curious about?

Critical Rationalist:

Trump, romance, global warming

curi:

No, yes, yes

Critical Rationalist:

Global warming... are you sure about that?

curi:

Yes

Critical Rationalist:

Because I seem to remember hearing him say in a ted talk that the right response is to trust the experts

Critical Rationalist:

In this context

curi:

He was trying to be diplomatic and choose words very exactly to not literally lie

Critical Rationalist:

Has DD ever been married?

curi:

I don’t discuss my personal life let alone his

Critical Rationalist:

Haha fair enough

Critical Rationalist:

Anyways, there is obviously lots to talk about

Critical Rationalist:

I will probably have to push away in a week or so when my next semester starts

Critical Rationalist:

But this will be a looming temptation

curi:

Re romance there was an Autonomy Respecting Relationships forum

Critical Rationalist:

Next semester I’m working on finishing my MA in philosophy, but I’ll also be volunteering as a research assistant for that horrid discipline of psychology

Critical Rationalist:

😉

curi:

DD supported Bush but has been gradually shifting more politically left

Critical Rationalist:

I think Popper is left wing to a first approximation

curi:

Yeah but not far left like Hillary, Bernie, SJWs

Critical Rationalist:

Hillary is left in your book?

curi:

Yes!?

Critical Rationalist:

*far left??

curi:

Yes she is an Alinskyite who called a hundred million Americans deplorables

Critical Rationalist:

She’s centrist even by the standards of the Democratic Party

Critical Rationalist:

And by international standards, the democrats themselves are quite centrist

Critical Rationalist:

Bernie, Warren, the squad, they are squarely in the left

Critical Rationalist:

But they’re a minority in the dems

Critical Rationalist:

In terms of Hillary’s concrete policy proposals, she’s quite centrist

curi:

I don’t agree

Critical Rationalist:

On foreign policy she has a long history of being hawkish (arguably center-right)

Critical Rationalist:

Calling 100 million Americans deplorables is elitist and dismissive, but not leftist

curi:

She did it because she’s far enough left of them to hate them

Critical Rationalist:

How do you know that’s why she said it?

Critical Rationalist:

Btw just to be clear I’m not a Hillary fan

Critical Rationalist:

I’m just a little surprised

curi:

I have read a lot of political info that you probably haven’t

curi:

Leads to perspective differences

Critical Rationalist:

That’s... not a good way to engage in conversation

curi:

? It shouldn’t be surprising to reach significantly different conclusions based on different info

Critical Rationalist:

I might have been reading more into that comment than was there

curi:

Just on phone not giving details. Around more tomorrow probably

curi:

Almost done traveling

Critical Rationalist:

Ok @curi, here is one issue from the list of debate topics: genes have no direct influence over our intelligence or personalities. That is an empirical conjecture. As Popperians, what do we do when we make empirical conjectures? We try to test them. If genes had no influence over those traits, then people who share all of their genes but none of their environment should not be similar. Identical twins raised in separate adoptive families fit this description. They are in fact massively similar. In terms of IQ scores and personality tests (which I am sure you’re skeptical of), but also behavioral measures: how much education they get, income, even political values (yes, really). Just go to google scholar and look up “heritability estimate twin studies” and then any trait. These heritability estimates are derived from the kind of twin and adoption studies that I’m describing.

Critical Rationalist:

To make this concrete, suppose John and Bob are identical twins raised in separate families. They would be similar in terms of cognitive ability (as measured by IQ tests), political beliefs (though of course not 100% identical), and measurable behaviors. Get massive samples of “Johns and Bobs”, and you find similarities like this replicate well. What is your explanation for this?

curi:

While I have some empirical comments on that issue (e.g. re low data quality), I think the important issues are primarily theoretical. We need a complex theoretical framework with which to interpret the data. We need models of how genes and minds work, explanations of causal mechanisms, rival ideas, criticism, etc. Popper says observation is theory laden, and fairly often there is a lot of theory involved, a lot of background knowledge that makes some difference.

curi:

So e.g. I think the theory points in http://bactra.org/weblog/520.html are important to interpreting the data correctly. They explain e.g. what "heritability" is. One needs an understanding of that to know what to make of the data. They also explain in general some limitations of correlations and statistics.

Critical Rationalist:

Well, on the data quality issue, the findings of behavioral genetics are VERY well-replicated. See https://journals.sagepub.com/doi/pdf/10.1177/1745691615617439

Critical Rationalist:

I've taught AP Psychology (which contains a chapter on heritability and individual differences) several times, and took psychology statistics courses during my undergrad. Heritability has a precise meaning: the percentage of population variance in a trait that is caused by genetic differences. For example, people in the population differ on height (i.e. height is variable). What percentage of this variability is due to genetic differences? Around 90%. That means 90% of the differences between people are due to genes. We can estimate this with twin and adoption studies.
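
The twin-study estimation CR describes is commonly done with Falconer's formula, h² = 2(r_MZ − r_DZ), which compares correlations between identical (MZ) and fraternal (DZ) twins. A minimal sketch; the correlations below are invented for illustration, not data from any study discussed here:

```python
def falconer_h2(r_mz, r_dz):
    """Heritability estimate from MZ (identical) and DZ (fraternal) twin correlations.

    MZ twins share ~100% of their genes, DZ twins ~50% on average, so doubling
    the gap between the two correlations estimates the genetic share of variance.
    """
    return 2 * (r_mz - r_dz)

# Hypothetical twin correlations for a trait like height:
h2 = falconer_h2(0.86, 0.41)
print(round(h2, 2))  # 0.9, i.e. "90% of the variance" in the sense described above
```

As the article curi linked stresses, a figure like this is a statistic about variance in a particular population under strong assumptions, not a direct measure of genetic causation.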

curi:

“the percentage of population variance in a trait that is caused by genetic differences.”

that is not the meaning.

Critical Rationalist:

So, if your article has a different account of heritability than the one I've described, I can say with some confidence that it is at odds with contemporary behavioral genetics. I've read summaries of the literature from Eric Turkheimer, Steven Pinker, and the article above (which was written by 4 leading experts; they summarized dozens of studies).

Critical Rationalist:

Oh it isn't? Please give me the definition.

curi:

the article is by an expert geneticist FYI

curi:

To summarize: Heritability is a technical measure of how much of the variance in a quantitative trait (such as IQ) is associated with genetic differences, in a population with a certain distribution of genotypes and environments. Under some very strong simplifying assumptions, quantitative geneticists use it to calculate the changes to be expected from artificial or natural selection in a statistically steady environment. It says nothing about how much the over-all level of the trait is under genetic control, and it says nothing about how much the trait can change under environmental interventions. If, despite this, one does want to find out the heritability of IQ for some human population, the fact that the simplifying assumptions I mentioned are clearly false in this case means that existing estimates are unreliable, and probably too high, maybe much too high.

curi:

the term "associated with" is not, and does not mean, caused by

JustinCEO:

"associated with" is more like "correlated with" if my understanding is correct

curi:

i've mostly looked at the actual literature instead of summaries of the literature FYI. i think this is a better method.

Critical Rationalist:

I actually agree with that definition. But the best explanation of the pattern of associations is that the genes are playing a causal role. This is not just my view. Here is a quote from an article from Nature (written by leading experts): "IQ heritability, the portion of a population's IQ variability attributable to the effects of genes"

Critical Rationalist:

https://www.nature.com/articles/41319

Critical Rationalist:

But yes, the data are logically compatible with other causal explanations.

GISTE:

i guess most people (including most "experts") agree with you, but that doesn't mean that's the right position. @Critical Rationalist

curi:

your ideas about what is a good explanation in a particular case are not a matter of heritability. they are something else.

Critical Rationalist:

@curi What is your rival theory for why identical twins raised apart are similar on every trait we can measure?

Critical Rationalist:

@GISTE He was citing the definition given by a geneticist. I agree with everything in the definition, but it was incomplete.

Critical Rationalist:

Or at least, it is compatible with a causal explanation. The causal explanation is the theory which (I would contend) best survives theoretical criticism.

Critical Rationalist:

If someone disagrees, they better offer a better theory.

GISTE:

@Critical Rationalist i was referencing this: "But the best explanation of the pattern of associations is that the genes are playing a causal role. This is not just my view."

Critical Rationalist:

It is true that that explanation is not just my view, but I am willing to defend it on its own terms.

Critical Rationalist:

The fact that experts believe it does not make it true.

Critical Rationalist:

Here is my explanation for why identical twins raised apart are similar for psychological traits: genes influence them. Does someone have an alternative explanation?

curi:

i don't agree with your take on the dataset, but setting that aside the basic explanation is gene-environment interactions, e.g. a gene for height can be correlated with basketball skill but it doesn't provide basketball skill, that isn't the kind of thing it does.

betterbylearning:

@Critical Rationalist I find it easiest to think about this matter of genetic causes by way of example. Suppose our culture regards red-haired people as volatile, easily angered and less rational personalities. And so when people, generally, encounter a red-haired child they treat him or her differently from other children. They try to explain stuff less and invoke violence / control over redheads more quickly. So then someone comes along and does a twin study. They find that, in fact, genes are associated with adults who are less rational and more prone to violence. But it could be (likely is) that genes cause red hair, red hair causes cultural mistreatment, and cultural mistreatment causes less rationality and more violence. Not that genes directly cause less rationality and more violence. If the culture changed, the result would change without the genes changing at all.
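
[Editor's note: the red-hair scenario can be turned into a toy simulation showing how a gene can be strongly "associated with" an outcome even when its only direct effect is cosmetic and the causal work is done by cultural treatment. All names and numbers below are invented purely to illustrate the structure of the argument:]

```python
import random

random.seed(0)

def measured_trait(red_hair_gene: bool) -> float:
    """Hypothetical model: the gene only causes red hair; the culture's
    mistreatment of redheads, not the gene itself, lowers the trait."""
    mistreated = red_hair_gene               # culture reacts to the visible trait
    base = random.gauss(100, 15)             # everyone draws from the same pool
    return base - (20 if mistreated else 0)  # treatment effect, not gene effect

people = [(gene, measured_trait(gene)) for gene in [True, False] * 5000]
redheads = [t for gene, t in people if gene]
others = [t for gene, t in people if not gene]

# The gene is strongly associated with the outcome...
print(sum(redheads) / len(redheads) < sum(others) / len(others))  # True
# ...yet setting the treatment effect to 0 (a cultural change) would erase
# the gap without altering a single gene.
```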

Critical Rationalist:

"i don't agree with your take on the dataset" Please be more specific. Are you denying that identical twins raised apart have similar IQs, similar personalities, etc.?

curi:

i think you're overstating that.

Critical Rationalist:

They are much more similar than strangers, but not 100% the same.

Critical Rationalist:

I can give you precise numbers if you want.

Critical Rationalist:

I'm still waiting for an alternative explanation.

curi:

the basic explanation is gene-environment interactions, e.g. a gene for height can be correlated with basketball skill but it doesn't provide basketball skill, that isn't the kind of thing it does.

betterbylearning:

@Critical Rationalist I intended to suggest an alternative explanation via my example. There could be some trait genes cause, which people culturally decide means they should treat people differently. The different treatment then causes outcomes like IQ (or basketball skill).

curi:

DD's example is an infant smiling gene, which causes infants to smile more and does nothing later. This could end up associated with all sorts of stuff because it leads to different treatment by parents in our culture.

Critical Rationalist:

Yes, these would all (for me) count as ways that the genes can cause human differences.

curi:

a gene which causes infant smiling is quite different than a gene which causes intelligence, right?

Critical Rationalist:

Ok, so now you want to have a specific empirical discussion of HOW genes cause intelligence.

Critical Rationalist:

Maybe they do so by structuring the brain differently

Critical Rationalist:

Maybe they cause more height

curi:

no, i don't want to discuss empirical matters, i want to discuss how to view a simplified example

betterbylearning:

I think it comes down to what problem you're trying to solve with the "genes cause" explanation.

Critical Rationalist:

maybe they cause smiling (which in turn causes more attention)

curi:

suppose by premise it's the smiling thing. that is very different than a brain structure gene right? worth knowing the difference? worth making statements which differentiate the two cases?

Critical Rationalist:

So my original question was what your explanation was for the fact that identical twins raised apart are similar in terms of personality and IQ scores

Critical Rationalist:

and your response is: it is possible that genes cause this difference by making children smile more

Critical Rationalist:

I agree

curi:

ok so have the twin studies differentiated between these two scenarios?

betterbylearning:

If you're trying to enumerate all causes, including indirect ones, then I don't have an objection to including genes. But if you're trying to figure out what you'd have to change to get greater IQ, genes don't make that list, culture does.

Critical Rationalist:

the twin studies do not establish HOW genes cause intelligence

Critical Rationalist:

to be clear, you both have only established that one possible way that genes cause intelligence is through eliciting cultural responses

curi:

if you agree the twin studies might be about infant smiling genes, and that one should be careful not to make statements talking about genetic intelligence when genetic infant smiling is the actual thing, then you should not make statements that studies have shown genetic intelligence. right?

curi:

they'd just be inconclusive

Critical Rationalist:

they are inconclusive about the exact mechanism by which genes have their effects yes

Critical Rationalist:

but

Critical Rationalist:

your page said something to the effect of genes do not influence intelligence

Critical Rationalist:

you claim to know that this is true

Critical Rationalist:

not that "it is possible that genes have their influence indirectly"

curi:

yes, so there are multiple issues involved with that

JustinCEO:

Does a study consistent with very different causal mechanisms tell us anything more than that a correlation exists?

curi:

one is: some people think twin studies refute my position. you brought that up. they do not. they are compatible with it.

curi:

another is my actual reasoning

Critical Rationalist:

you believe (correct me if I'm wrong) "genes do NOT directly influence intelligence"

curi:

my comments re twin studies were just trying to defend my view from refutation, not tell you the positive reasons for it

curi:

do you agree that i've succeeded at this limited goal?

Critical Rationalist:

Yes, actually I would agree that your view is not logically incompatible with the results of twin studies.

curi:

ok great

Critical Rationalist:

The DD example is a possible explanation of the twin studies findings which would be such that the genes have an indirect effect on intelligence

Critical Rationalist:

So, how do you rule out the possibility of direct influence?

Critical Rationalist:

The quote from the website is this: "Genes (or other biology) don’t have any direct influence over our intelligence or personality."

curi:

to understand what ways genes may affect intelligence, one needs a model of how minds work and an epistemology.

Critical Rationalist:

Make your case

curi:

for example, if we model minds as buckets, then we could imagine (without knowing all the details, that's ok) that there is a gene which causes a brain to be a larger sized bucket which lets more knowledge be poured into it in total.

curi:

similarly there could be genes that make the entrance to the bucket wider or narrower, allowing knowledge to be poured in at a higher or lower rate.

Critical Rationalist:

Sure, I'm willing to discard the bucket model

Critical Rationalist:

I've read Objective Knowledge (which you seem to be alluding to)

curi:

in this model, it's fairly easy to propose genetic mechanisms. however the model has problems.

Critical Rationalist:

Ok, so we've ruled out the bucket model

Critical Rationalist:

go on

curi:

my model says that brains are universal classical computers. they're Turing-complete. this highly limits the relevance of hardware differences. minds are a type of software. basically we get an operating system pre-loaded which grants us intelligence (the ability to conjecture and refute) and then we develop our own apps/ideas during our life. intelligence differences, in the sense of thinking quality differences, are due to better or worse ideas.

Critical Rationalist:

"brains are universal classical computers. they're Turing-complete."

Critical Rationalist:

And you established this without the smallest amount of neuroscience data, right?

Critical Rationalist:

You're going to have to spell out how you know that brains are universal classical computers

curi:

i wouldn't say zero. but not much.

Critical Rationalist:

And also, I assume you mean that only human brains are like this. Chimpanzee brains are not classical computers, right?

curi:

do you know what a universal classical computer is? They are covered in FoR. not sure if you've read that.

curi:

no, chimpanzee brains are also universal classical computers.

Critical Rationalist:

I've read maybe half of it

Critical Rationalist:

A Turing machine? Capable of computing anything that can be computed

curi:

yes

Critical Rationalist:

but classical as in non-quantum (only 0s and 1s)

Critical Rationalist:

Interesting, how do you know that human brains are classical computers

curi:

do you mean classical as opposed to quantum?

Critical Rationalist:

no, classical as opposed to whatever chimpanzee brains are doing

curi:

i said chimp brains are also classical

Critical Rationalist:

oh sorry I misread that

curi:

so are PCs and iphones

Critical Rationalist:

yes yes those definitely are

Critical Rationalist:

now... you also think chimpanzees are less intelligent than humans...

curi:

i don't think chimps are intelligent at all

Critical Rationalist:

so it is possible for brains (which are classical computers) to differ in their intellectual capacity, yes?

curi:

it's important to differentiate differences due to software from differences due to hardware

Critical Rationalist:

well, I think there are several more steps you must go through before you can rule out that genes directly influence intelligence

curi:

sure, i gave an outline

Critical Rationalist:

where?

curi:

my model says that brains are universal classical computers. they're Turing-complete. this highly limits the relevance of hardware differences. minds are a type of software. basically we get an operating system pre-loaded which grants us intelligence (the ability to conjecture and refute) and then we develop our own apps/ideas during our life. intelligence differences, in the sense of thinking quality differences, are due to better or worse ideas.

Critical Rationalist:

"are universal classical computers. they're Turing-complete. this highly limits the relevance of hardware differences."

Critical Rationalist:

but wait... chimpanzees also have universal classical computers which are turing-complete

Critical Rationalist:

are the number of hardware differences (that are relevant to intelligence) between humans and chimpanzees "highly limited"?

curi:

yes

Critical Rationalist:

so if a chimpanzee was raised with the same software as a human, it could be as intelligent?

curi:

not all software comes from parenting

Critical Rationalist:

by the way

Critical Rationalist:

this isn't limited to chimpanzees I assume

curi:

right

Critical Rationalist:

but I won't even go there

Critical Rationalist:

you think if a chimpanzee was raised in the same parenting (and wider social) context, it would be as intelligent as a human?

curi:

no

Critical Rationalist:

what other sources of software are there?

Critical Rationalist:

in your view

curi:

genes do something roughly like an operating system install disk does

Critical Rationalist:

ok so genes can influence software?

curi:

initially

Critical Rationalist:

and install software that makes an organism more intelligent, initially?

curi:

if you drop the "more" then yes

Critical Rationalist:

so you know that human genes produce the exact same intelligence software in each human

Critical Rationalist:

how do you know that?

curi:

no

Critical Rationalist:

so... do you think that human genes produce different intelligence software in different humans?

curi:

so, genes do not produce the exact same hardware brains in each person, but small variations in hardware, such as having 1% more neurons, have only limited importance. they don't change certain key issues like being a universal computer or not. (setting aside cases of major brain damage and people who can't hold conversations, learn math, etc.)

variations in intelligence software don't matter much either for the same basic reason: the important issue is whether a universality is present or not present. for the software, either it is or isn't a universal knowledge creator.

Critical Rationalist:

does the software in chimpanzee classical computer brains have universality? I'm inferring "no"

curi:

it doesn't have universal knowledge creation. (there are different types of universality)

curi:

in my view, the term intelligence has two separate meanings. one is binary: intelligent or not. this refers to universal knowledge creation or not. the second is a matter of degree, and relates to thinking quality. this is the kind of difference we see between healthy people, and is due to different knowledge, especially methodology stuff.

Critical Rationalist:

I'm skeptical of your account of the human mind, but I'll grant it and see if what you're saying follows or not

curi:

ok

Critical Rationalist:

Here is an empirical possibility that seems compatible with your account

curi:

btw i may afk soon but will continue later

Critical Rationalist:

Well, actually, multiple possibilities

Critical Rationalist:

The software could come prepackaged with ideas already in place. Some of those ideas could be encoded unconsciously (and thus be inaccessible to deliberative reflection and change). If the latter is true, then the ideas IN PRINCIPLE could be changed (with technology) but not with pure thought. Absent dramatic changes in technology, if that were true, some people would be more limited if bad ideas were encoded into the unconscious by our genes.

Critical Rationalist:

Let's start with that possibility

curi:

what sort of limit? would this limit limit the repertoire of knowledge they could create, or not?

Critical Rationalist:

suppose empathy turns out to be harmful

Critical Rationalist:

but suppose its effect on conscious thinking is unidirectional

Critical Rationalist:

empathy affects our conscious thinking, but not the other way around

Critical Rationalist:

but the underpinnings of empathy are unconscious, and determined by our genes

Critical Rationalist:

suppose it prevents certain people from becoming objectivists

curi:

objectivism is a type of knowledge. so you're talking about a person who is not a universal knowledge creator, right?

Critical Rationalist:

their unconsciously caused empathy overrides their conscious thinking or at least strongly influenced it

Critical Rationalist:

they in principle could be

Critical Rationalist:

their linguistic capacities are capable of conjecturing objectivism and criticizing it

Critical Rationalist:

but they refuse to accept it, because their empathy overrides it

Critical Rationalist:

(empathy being, ex hypothesi, something unconsciously caused and built by genes)

Critical Rationalist:

This is obviously very hypothetical, but this is the kind of thing you need to rule out

curi:

this empathy is an extra, unnecessary complication tacked onto a simpler model, and without clear details about where it fits into the conjecture and refutation model.

Critical Rationalist:

but it is a possibility

Critical Rationalist:

we could have been selected to have this empathy

curi:

i don't think one can see whether it's a possibility without clarifying the thing being claimed.

Critical Rationalist:

whenever we think of people who are suffering

curi:

but in any case it's a possibility that we're all puppets of advanced aliens, living in a simulation, etc., etc.

curi:

that sort of possibility is the wrong way to make judgments about what to tentatively, fallibly believe

Critical Rationalist:

we have a software program that says "be concerned about this for its own sake"

Critical Rationalist:

and it overrides the outputs of conscious deliberative thinking

Critical Rationalist:

but it itself is outside the reach of deliberative thinking

Critical Rationalist:

there is nothing contradictory about this hypothesis

Critical Rationalist:

but your theory (seems to) require that it is false

curi:

people aren't born knowing what suffering is conceptually and how to recognize it in other people, so how could preloaded software deal with it? that's similar to proposing preloaded software for doing calculus even though we aren't born knowing arithmetic or algebra.

Critical Rationalist:

"people aren't born knowing what suffering is conceptually and how to recognize it in other people"

Critical Rationalist:

how do you know that?

curi:

do you think they are?

curi:

i conjectured they aren't and considered the matter, and alternatives, critically.

curi:

i didn't seek an airtight proof, i used CR methods.

Critical Rationalist:

the preloaded empathy software program could be one that is ready to develop as soon as the organism develops the concept of suffering

Critical Rationalist:

you said earlier that the preloaded software admits of individual differences

Critical Rationalist:

as long as it is possible that some of those individual differences are realized as unconscious programs (which are not amenable to being changed with reflection), then it is possible that those individual differences are consequential

Critical Rationalist:

(consequential by your standards)

curi:

busy

curi:

what does software being ready to develop mean? develop in what ways by what means?

curi:

and what, if anything, prevents a person from simply not running this software?

Critical Rationalist:

However you think the universal knowledge creation software develops in brains, this software develops the same way

Critical Rationalist:

What prevents the person from not running the software is that it is inaccessible to conscious reflection

curi:

but i don't think that develops. more like it's there, fully formed, when the computer is first turned on.

JustinCEO:

kinda like a BIOS?

Critical Rationalist:

Does a zygote have the universal knowledge creation software?

Critical Rationalist:

Obviously not

curi:

your conception of conscious reflection is not specified in terms of the things in this model. i think it's a higher level issue.

Critical Rationalist:

Do adults have it? Yes

Critical Rationalist:

Somewhere in the middle it develops

curi:

if you're talking about development in terms of e.g. creating and attaching proteins that form the brain, then do you think people's brains grow at age 10, or whatever, re empathy?

Critical Rationalist:

Yes that’s an empirical possibility that you haven’t ruled out

curi:

do you believe that?

Critical Rationalist:

But no, I was just responding to your assertion that the universal knowledge creation software doesn’t develop

Critical Rationalist:

Which... of course it has to develop

Critical Rationalist:

Somewhere between zygotehood and adulthood

curi:

do you think macos develops at some point in the imac factory?

Critical Rationalist:

Yes they are built

curi:

what is "they"?

Critical Rationalist:

You mean macs right?

curi:

no i said macos

JustinCEO:

macOS, mac Operating System

Critical Rationalist:

Oh sorry

Critical Rationalist:

I think I have a way to make this more concrete (in terms of your system)

Critical Rationalist:

You think the universal knowledge creation software is innate

Critical Rationalist:

How do you know that there are not other softwares that a) sometimes override the universal knowledge creation software, and b) cannot be overridden by the universal knowledge creation software because they are unconscious

Critical Rationalist:

*Unconscious and insulated from inputs from the universal knowledge creation software. This is just (on my hypothesis) how the brain is designed

curi:

is there a proposal of that nature which you find convincing?

curi:

i think the key issue here is that i'm judging by critical thinking, not by airtight proof that logically covers every possibility

Critical Rationalist:

Good.

Critical Rationalist:

I would say that it is perfectly possible that evolution could have produced such softwares, and I wouldn’t put confidence in any theories that hadn’t been subjected to experimental tests

Critical Rationalist:

My analogy I used yesterday was this: imagine that there was a theory that people had conjectured about the sun

Critical Rationalist:

In the absence of any data at all

curi:

is there a specific proposal which you find plausible, which explains the nature of the software, the selection pressure to create it, gives details about what it does, etc., which you think stands up to criticism?

Critical Rationalist:

I could tell a just so story

curi:

but i'm not asking for just so stories, i'm asking for ideas which you think survive criticism. a just so story is a story you have a criticism of.

Critical Rationalist:

That’s not the definition of a just so story

JustinCEO:

is there a specific proposal which you find plausible

If you're calling something a just so story that's a pretty good indicator you don't find it plausible, so bringing up just so stories is non-responsive

Critical Rationalist:

I don’t have a view about what is plausible in cases like this. My view is that we should not settle on a perspective with much confidence in the absence of data

curi:

do you see some major flaw in my model?

curi:

g2g

Critical Rationalist:

It is logically possible and internally coherent

Critical Rationalist:

Whether or not it is true ought to be settled with empirical tests

curi:

is there a specific alternative model which you think can stand up to criticism? that we need a test to differentiate btwn it and my model?

Critical Rationalist:

Sure. I’ll put forward this as an alternative model

Critical Rationalist:

I don’t believe it, but I think it is also internally coherent and logically possible

Critical Rationalist:

There is other software that a) sometimes override the universal knowledge creation software, and b) cannot be overridden by the universal knowledge creation software because the (occasionally) overriding software is unconscious

Critical Rationalist:

If you want it to be more specific

Critical Rationalist:

I’ll say that the software is “empathy for kin”

Critical Rationalist:

There are plausible reasons why there would be selective pressures that favour it

Critical Rationalist:

And we’ll suppose that the empathy for kin overrides the universal knowledge software, but the reverse cannot happen (because of how the brain is built)

curi:

When I asked about a flaw in my model, I meant any type of flaw. Anything bad about it. But with emphasis on a problem with the model itself and its application to the world, not an issue in its ability to exclude alternatives, which is a somewhat separate matter. Just lacking logical errors isn't the whole question.

For the alternative empathy model, I think it's too vague to begin serious critical analysis. For example, you've introduced unconsciousness as a concept which is connected to the ability of software subroutines to write to certain locations in memory. Something like that? A lot more details would be needed to know what's going on there. Similarly, empathy for kin is underspecified. And simple examples of what you have in mind are underspecified. Like does this empathy for kin software take over my muscles and control my arm motions in some situations, and i'm like a puppet who watches helplessly as I can't control my limbs? If not that, what is it like? it somehow (how?) controls my conscious opinions, like mind control rather than puppetry? is the empathy for kin software able to create knowledge?

Critical Rationalist:

I’m going to bed now, but I’ll just say this. You are asking for a level of detail in my theory that you have not provided for your own. I can make similar requests for specificity. It will be easy to make my account as detailed as yours. So tell me, how do our classical computational capacities give rise to the creative ability to create new explanations? What selection pressures gave rise to that ability?

curi:

moving from #fi @Critical Rationalist https://discordapp.com/channels/304082867384745994/304082867384745994/663953331714261002

i'm open to more questions. i don't know what areas you find problematic or want to know more about. i think if you provide details for your ad hoc theory, you will run into problems just like how fleshing out the theory that DD will float when jumping off a building, in FoR ch 7, led to difficulties.

the selection pressure for intelligence may have been the value of better tool use, for example. we don't know the exact mechanism but there are several stories that work ok and, afaik, no criticism for why this wouldn't work. DD presents one in BoI re meme replication.

curi:

another possibility is it helped with communication and language, which enabled more effective group hunting

curi:

Yes, but those logical models depend on assumptions about the world that are:

  2. The claim that humans are best approximated as rational self-interested utility maximizers is a claim economists could be wrong about.

That isn't one of my claims. Think of a claim more like "everything else being equal, when demand for a product increases and supply stays the same, then the price must be raised to avoid shortages". there are premises here like that each buyer will pay up to a certain price for the product, rather than e.g. be willing to pay any even number of dollars but not an odd number of dollars. i'm aware that has non-zero connection to the empirical world. it is nevertheless different than doing a bunch of studies and science experiments to try to figure things out, which is my point. the empirical aspects of this claim are more limited than the empirical aspects of the claim that force equals mass times acceleration. the actual debates that take place re economics claims like my example are primarily non-empirical. do you agree there's a notable difference there? if so, what terminology would you like to use to keep this distinction clear? just calling my idea re demand and shortages "empirical" doesn't differentiate it from an issue like whether a particular vaccine works for humans and to prevent a particular parochial disease from earth.

you are welcome to try to point out empirical problems with economic models when you have them, but i don't think you'll have many empirical complaints about my core economic claims. i don't expect you to say "maybe Joe likes buying things with prime numbered prices. we better do a big study to see how many people buy in that way".
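
[Editor's note: the demand/shortage claim can be put in a toy model: hold supply and price fixed, shift demand up, and a shortage appears; raising the price removes it. The linear demand curve and all numbers below are invented for illustration:]

```python
def quantity_demanded(price: float, demand_shift: float = 0.0) -> float:
    """Toy linear demand curve: buyers want less as the price rises."""
    return max(0.0, 100.0 + demand_shift - 2.0 * price)

SUPPLY = 60.0  # fixed quantity available

# At the old price, quantity demanded equals supply (no shortage):
p_old = 20.0
assert quantity_demanded(p_old) == SUPPLY

# Demand increases but the price stays put: a shortage appears.
shortage = quantity_demanded(p_old, demand_shift=30.0) - SUPPLY
print(shortage)  # 30.0

# Raising the price (here to 35) restores balance:
p_new = 35.0
assert quantity_demanded(p_new, demand_shift=30.0) == SUPPLY
```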

curi:

  1. Actual economic interactions are affected by human nature

i think a claim like my example above is approximately (but not literally 100%) independent of controversial conceptions of human nature like how empathetic or rational people are.

Your claims about which ethical systems will produce more wealth or welfare depend on assumptions about human nature.

What sort of human nature do you think would make not having division of labor be more productive than having it? Got anything plausible enough to merit a study to try to test what people are like?

curi:

[re human nature] I mean things like how we respond to incentive structures, under what circumstances we will cooperate or not cooperate, what makes people respond tribalistically or not, whether people develop better under strict parenting or permissive parenting

It is an open empirical question to what extent humans develop better under strict parenting, for example

I stand by the idea that economic models can only be true to the extent that their assumptions about human nature are true (eg that humans or aliens are self-interested rational utility-maximizers). Whether or not those assumptions are true is an accidental fact of evolution. There is no law of nature that says humans or aliens must be a certain way. It depends what selection pressures we happened to face.

You have a different model of how minds and personalities work than I do. Deciding which model is correct will initially involve specifying the models more, specifying our epistemologies more, and doing philosophical debate about those sorts of issues. Depending how those discussions went, it's possible an issue would come up where doing an empirical test made sense, but I doubt it. I wouldn't expect our discussion to get stuck over disagreeing about an empirical fact. (This does not mean we'd never mention anything empirical. I would expect some simple, uncontroversial empirical facts to be mentioned.)

(I'm now caught up. If i didn't respond to a specific thing you want a reply to, feel free to quote it and ask for a reply.)

Critical Rationalist:

I’ll stick with the issue of the empathy software for now. I’ve read chapter 7 of FoR several times, and I do not think my model suffers from the same problems. Very powerful kin empathy software could arise from selection pressures. Genes that favour altruistic behavior towards kin at (almost) any cost actually make good evolutionary sense.

Critical Rationalist:

The reason for an overriding kin empathy software is clear: it gets more genes into the next generation. By contrast, all you have said is that “maybe it helped with tool use”. But why not just have tool creation software? A universal knowledge creation software seems wasteful.

Critical Rationalist:

I think this whole approach is backwards. In evolutionary biology (which Deutsch is not an expert in) what you are supposed to do is empirically discover what traits organisms (in this case, humans) have, and then reverse engineer those traits.

Critical Rationalist:

Crucially, I did not see in your response an explanation of how a classical computer could instantiate creativity. I asked “how do our classical computational capacities give rise to the creative ability to create new explanations?” You do not have a detailed account of how this happens. Do you see now that it is unfair to ask for a similar level of detail in my account? I will provide details for mine when you provide details for yours.

Critical Rationalist:

As it stands, I can tell an evolutionary story that is at least as plausible as yours. Neither of us has spelled out the details about how such software will be instantiated.

Critical Rationalist:

I guess I might as well give my two cents about your response to my economics arguments. The one example of an assumption you gave is instructive. It says “all else being equal, this will tend to happen”. There is an implicit claim in there about human nature; it is just one that is so uncontroversial that it is rational to accept it without doing an empirical study. But crucially, its connection to the real world is mediated by the “all things being equal” clause. Widespread errors in thinking or other elements of human nature could systematically prevent such a claim from mapping onto the real world. Don’t get me wrong, the kind of economics you’re describing has its virtues. I just think it is possible that human nature is such that our behavior systematically differs from the predictions of economic models. @curi

GISTE:

@Critical Rationalist Selection pressures are not responsible for creating new genes. They are instead responsible for selecting the (already existing) genes that cause their hosts to have more grandchildren than compared to rival genes. (Disclaimer: I don't claim to be an expert on this.)

Critical Rationalist:

Yes that’s true. Random mutations create the genes, and then selection pressures eliminate the harmful ones and keep the beneficial ones.

Critical Rationalist:

But natural selection is also a cumulative process. So you can get new traits over time with repeated instances of variation and selection.

Critical Rationalist:

@curi after this weekend I’ll probably have to stop commenting for the sake of school. There’s one topic I really wanted to ask about: All Women Are Like That. How can you hold to this in light of your belief that people have free will, are not determined by genes, and have universal knowledge creation software? Are women not people? Or is it just a coincidence that all women have used their unbounded free will incorrectly?

Critical Rationalist:

I’d also like you to share what specific traits you think all women share.

Alisa:

The AWALT phenomenon is due to things like culture and the prevalence of certain static memes, not genes

curi:

I was planning to make a discussion tree to organize our discussion but I'll drop that and try to do some quicker replies today. AWALT vs NAWALT is a specific debate about redpill/PUA ideas that you can google. the shared traits of women in question are related to romance and relationship behavior. the overall issue is what alisa says: culture, including static memes, is a major force in life.

Critical Rationalist:

And not a single woman has escaped the grasp of these static memes? Despite the fact that they have free will and universal knowledge creation software?

curi:

the all means something more like "i'm not convinced that a single NAWALT sighting posted to a redpill forum is actually true"

jordancurve:

Critical Rationalist: You seem to be unaware of what the word "all" means when used outside of a formal logical context

Critical Rationalist:

I’m confused. You’ll have to be more precise. Does “all” mean most?

Critical Rationalist:

Precision of hypotheses is a Popperian virtue. It makes them more amenable to rational and empirical refutation

curi:

there is an ongoing problem where people fool themselves into thinking their gf is different. AWALT is pushback against that. and i don't think any documented exceptions exist.

Critical Rationalist:

Loose and vague hypotheses are impossible to criticize

jordancurve:

You can criticize them for being vague.

curi:

you're wrong to call something loose and vague when, as i said, there are ongoing discussions about it. you can read tons more info about what it means if you want to.

curi:

the proper noun does not precisely summarize all the meaning.

curi:

this is typical of proper nouns

curi:

such as Critical Rationalism

Critical Rationalist:

Also, speaking of precision, give me precisely what traits all (whatever that means) women share in common

JustinCEO:

That just means being rationalistic critically rite

curi:

you can read about the traits if you want to learn. if you are expecting to learn this topic by being told a list of 10 traits each given 3 words of explanation, you're dramatically underestimating the complexity of the issue

Critical Rationalist:

Why don’t you give me the most well-evidenced example, and as thorough an explanation as you want

curi:

because you need the redpill/PUA intellectual framework first before interpreting an example

Critical Rationalist:

Just as a basis for discussion

Critical Rationalist:

I have some passing familiarity with it. Try me. See how far you can get

curi:

were you already familiar with AWALT?

Critical Rationalist:

No that particular term was new to me

Critical Rationalist:

Totally serious

curi:

that sounds like near-zero familiarity

curi:

do you know what AFC is?

Critical Rationalist:

Hence “passing”

curi:

shit test? neg? hoops? two-set? DHV?

Critical Rationalist:

Haha wow I’m definitely less familiar than I thought

curi:

mystery method?

Critical Rationalist:

Ok, do you at least think that this is the kind of theory that should be put to empirical tests?

curi:

yes it's extensively field-tested.

Critical Rationalist:

AWALT is extensively field tested?

curi:

yes

Critical Rationalist:

Interesting

Critical Rationalist:

I’m genuinely curious, name me just one trait that “all” women share in common

curi:

all this stuff was developed with a heavy empirical testing emphasis. lots of the theory was created to explain observed patterns.

Critical Rationalist:

Ie not one documented exception

curi:

valuing social status as she perceives it (not everyone is into actors as high status).

curi:

if i said all parents were coercive, it wouldn't mean that there was any single thing (e.g. playing with matches) for which all parents coerce.

Critical Rationalist:

Yes, but in this case you said “all women are like that”. “Like that” has to mean something.

Critical Rationalist:

As far as your example, sure. I would wager that’s true of all humans (not just women). Completely innocuous

Critical Rationalist:

Sure, all women value status.

Critical Rationalist:

Completely banal

curi:

the issue ppl are debating is roughly: is there a woman who is immune to PUA?

Critical Rationalist:

Ok that’s more interesting

Critical Rationalist:

Since you’ve agreed that this is an issue that should be subject to empirical tests

Critical Rationalist:

This is what Popper said we must do before an empirical test: specify in advance what observations would falsify the theory (in this case “no women are immune to PUA”).

Critical Rationalist:

So, what empirical observation would falsify the claim that “no women are immune to PUA”? If you’re going to do an empirical test Popper-style, you have to answer that question.

Critical Rationalist:

If you systematically reinterpret the results to make them consistent with your theory, you’re doing what Popper (rightly) accused Freud and Marx of doing.

curi:

you seem to want a single decisive test to settle this conclusively. no one has done one or knows how to do one.

curi:

hence the ongoing debates

Critical Rationalist:

You said you believe this issue should be subject to empirical tests.

curi:

PUA approaches have been broadly tested on many women to help refine them; they aren't ivory tower speculation

Critical Rationalist:

So you believe the theory has been subject to tests, but can you explain to me what an empirical test is, in Popper’s theory?

Critical Rationalist:

To be clear, I’m not asking about the relative advantage of PUA. It might be on average better than other methods

Critical Rationalist:

Im talking about testing this theory: no women are immune to PUA

Critical Rationalist:

You admit that this is the sort of claim that should be tested empirically

curi:

people have said over and over "my gf is different" and they seem to be wrong every time. and ppl keep saying it. that's the issue AWALT is about.

Critical Rationalist:

So, explain to me how, according to Popper, we empirically test theories

Critical Rationalist:

you also said the issue is “are any women immune to PUA”

Critical Rationalist:

Implying that this was part of the meaning of awalt

curi:

right: different than the other girls who PUA works on.

Critical Rationalist:

Good

Critical Rationalist:

You believe that issue should be empirically tested

curi:

no one on either side has any idea for how to test it in the way you want. some things are hard to test.

Critical Rationalist:

How does Popper believe we should perform empirical tests?

curi:

nevertheless, there is nothing even resembling a documented counter example AFAIK

curi:

and there are many, many documented examples where AWALT turned out correct

curi:

and ppl don't respect this situation and are super biased

Critical Rationalist:

I would like an answer to my question

curi:

a test is an observation aimed to potentially refute an idea. the best tests address a clash between 2+ ideas, such that at least one has to be refuted by any outcome of the test.

Critical Rationalist:

Good, exactly. For Popper, an empirical test only counts as a test if it is a genuine attempt at refutation

Critical Rationalist:

So... if you have not specified in advance the conditions for falsification, then for Popper, you have not actually empirically tested a theory

curi:

no

Critical Rationalist:

So, given that you and PUAs have not specified the conditions for falsification in advance, you have not actually performed empirical tests

Critical Rationalist:

No? Are you alleging that I’ve misunderstood Popper? I’m happy to provide quotes

curi:

you said "So" like you're following on what I said, but then you introduced a new thing: specifying conditions in advance.

Critical Rationalist:

Do you think Popper thought you could specify the conditions for falsification after the experiment?

curi:

we never fully specify anything, as Popper explained

curi:

if you mean that the conditions for falsification have to be partially specified in advance, i'll agree, but that's a different claim.

Critical Rationalist:

I’ll brb with quotes.

Critical Rationalist:

Also, it goes without saying you can disagree with Popper on this issue

curi:

do you agree that "we never fully specify anything"?

Critical Rationalist:

In a certain sense. But I’ll get the quotes

Critical Rationalist:

Yes, there is a certain sense in which we cannot fully specify anything (I'm interested for you to spell out why that's relevant).

Critical Rationalist:

But here's the quote. "Confirming evidence should not count except when it is the result of a genuine test of the theory; and this means that it can be presented as a serious but unsuccessful attempt to falsify the theory."

Critical Rationalist:

So, have you (or the PUAs) made "serious...attempt(s) to falsify the theory" that no women are immune to PUA?

curi:

i don't understand why you dug up a quote that doesn't mention specifying falsification conditions in advance. also please only post sourced quotes at my forums.

curi:

and yes PUAs have searched widely for NAWALTs

Critical Rationalist:

It is from Conjectures and Refutations. Page 36 http://www.rosenfels.org/Popper.pdf

Critical Rationalist:

So they have made genuine attempts to falsify theory and have failed to do so?

Critical Rationalist:

So... what kind of observation would count as falsification?

curi:

a NAWALT

Critical Rationalist:

What observations would count as observation of a NAWALT

curi:

that's complicated and involves understanding a bunch of theory with which to interpret data

Critical Rationalist:

As far as specifying in advance, this quote comes from the next page.

Critical Rationalist:

"Some genuinely testable theories, when found to be false, are still upheld by their admirers--for example by introducing ad hoc some auxiliary assumption, or by re-interpreting the theory ad hoc in such a way that it escapes refutation. Such a procedure is always possible, but it rescues the theory from refutation only at the price of destroying, or at least lowering, its scientific status."

curi:

if you can point to that ever being done with AWALT, i'd be interested

JustinCEO:

Right ad hoc stuff bad

JustinCEO:

Ppl want to find a NAWALT tho

Critical Rationalist:

NAWALT is too broad

Critical Rationalist:

I'm talking about an observation that would refute this theory: "no women are immune to PUA"

Critical Rationalist:

You said that^

Critical Rationalist:

as a concrete example of what AWALT means

Critical Rationalist:

Don't give me jargon. Tell me what observation would refute this claim "no women are immune to PUA"

curi:

read the Girls Chase book if you want to begin to understand what we're talking about

Critical Rationalist:

If you have subjected your theory to Popperian tests, then you should be able to answer that question

Critical Rationalist:

Does the Girls Chase book explain what observation would falsify the theory that "no women are immune to PUA"? What chapter explains that?

curi:

i don't think you're trying your best to understand my perspective. you're trying to shoehorn the discussion into your preconceived notions of how to be Popperian.

curi:

while neglecting issues like the use of complex theoretical frameworks to interpret data

Critical Rationalist:

@curi you're doing exactly what GISTE was doing

curi:

and you seem to want to be able to test and debate something without understanding the topic.

Critical Rationalist:

refusing to answer questions when it gets difficult

Critical Rationalist:

you told GISTE that he should answer the question

Critical Rationalist:

you should abide by your own standard

curi:

i've just spent a while answering your question. you don't like the answer.

curi:

the specifications re the testing are complicated and you don't have the background knowledge to discuss them.

curi:

that's your answer.

Critical Rationalist:

Really? I missed it. What observations would count as a falsification of this theory: "no women are immune to PUA"

JustinCEO:

If a complex theoretical framework is required to interpret data, then pointing out that fact and a concrete place where you can get info with which to develop such a framework is not a dodge

Critical Rationalist:

you said at one point "a NAWALT". That's not an observation.

Critical Rationalist:

That is too flexible.

curi:

it gets less flexible if you learn the field. you just aren't familiar with the constraints involved and can't be told them in 5min while adversarial.

Critical Rationalist:

Adversarial? I'm asking genuine questions. I am willing to hear you explain it in detail. I place no time limits on your explanation (it doesn't have to be within 5 minutes).

curi:

but if that was true you'd read multiple books as part of the conversation.

Critical Rationalist:

Remember what you said to giste, and remember what you said on your page: picky arguments matter

JustinCEO:

CR u seem unwilling to let curi incorporate a book as part of his explanation so your length claim seems false

Critical Rationalist:

Sometimes recommending a book is a way of avoiding conversation.

Critical Rationalist:

I will read the book if you can tell me which chapter answers my question. Which chapter (or chapters) answer this question: What observations would count as a falsification of this theory: "no women are immune to PUA"

Critical Rationalist:

I doubt the author of the book even considers a question as technical as that

Critical Rationalist:

If I'm wrong, I want page numbers

curi:

there is no chapter with a direct answer to that question; it provides some of the framework with which to discuss that matter, as i told you.

JustinCEO:

CR you seem to be implicitly conceding that your no time limit claim is false by raising arguments against reading books

Critical Rationalist:

@curi if during our debate about the software of the mind, I required you to read all of "How the Mind Works" by Steven Pinker (without specifying which parts were relevant), would that have been a fair request?

curi:

i routinely respond to books during discussions

Critical Rationalist:

Do you read the books in their entirety?

Critical Rationalist:

Would you read all of "How the Mind Works" if I asked you to?

curi:

you're welcome to propose a better way to become familiar with the field, or to point out problems with Girls Chase.

curi:

it's up to you whether you're interested in learning about this. idc

Critical Rationalist:

@curi that isn't answering my question

curi:

you seem to want a really short version containing certain specific things, which i don't have to offer you.

Critical Rationalist:

I'm wondering if you think it is legitimate to require a conversation partner to read a whole book

curi:

i didn't require you to

Critical Rationalist:

You can't apply a standard to someone else if you won't apply it to you

Critical Rationalist:

ok

curi:

https://curi.us/2235-discussions-should-use-sources

curi:

and i proposed the book as a potential way to make progress. if you have a better one, feel free to suggest it.

Critical Rationalist:

Well, I have a different rival theory of how women work

Critical Rationalist:

It is explained in How the Mind Works (which does deal extensively with sexuality)

Critical Rationalist:

I propose that you read that book before we continue

curi:

does it cover shit tests?

Critical Rationalist:

No...

Critical Rationalist:

I'm just saying, for you to understand my perspective, you have to understand the details of my theoretical framework

curi:

since shit tests have been observed many times, why aren't they covered and explained?

Critical Rationalist:

And I can't explain my theoretical framework in conversation, so you have to read How the Mind Works

Critical Rationalist:

Unless

curi:

do you mean that or are you just trying to mirror what you think i said?

Critical Rationalist:

you can propose an alternative way

Critical Rationalist:

Evolutionary psychology (my own view of how human sexuality works) is a complicated theory that takes time to understand

Critical Rationalist:

if I'm going to be expected to read a book (or a comparable alternative), I think this would be fair

Critical Rationalist:

we would both have a better understanding of each other's approach

curi:

i'm already familiar with evo psych

Critical Rationalist:

what is the evolutionary psychology explanation for sex differences in human jealousy?

curi:

the evo psych framework is compatible with more than one explanation for that.

Critical Rationalist:

(you asked me questions about the PUA theories to see how familiar I was)

Alisa:

I don't know evo psych, but I would say: the asymmetrical resources each sex invests in child rearing

Critical Rationalist:

Name one that has been offered for jealousy

Critical Rationalist:

Alisa: not quite

Alisa:

Fair. Was just a guess.

Critical Rationalist:

That is an explanation of many sex differences tho

Critical Rationalist:

So it was a good guess

curi:

i don't read much at that level of detail b/c it's irrelevant to my (DD's) criticisms of evo psych

Critical Rationalist:

right, so just as I don't have a detailed understanding of PUA, you don't have a detailed understanding of evo psych

Critical Rationalist:

so... if it is fair for you to propose a book, it is fair for me to propose a book

curi:

if you were familiar with some higher level PUA theory and had a refutation of it, and skipped some details, that would be comparable.

curi:

it would still not put you in a position to debate AWALT vs. NAWALT given PUA/redpill premises though

curi:

i haven't tried to jump into a debate between different applications of evo psych

Critical Rationalist:

right, in order to do that, I need to know details. Well, in order to understand what I deem to be the correct explanation (i.e. the rival theory for why women do particular things), you need to know details about evo psych

Critical Rationalist:

Becoming familiar with higher level PUA theory does not require details.

Critical Rationalist:

by "in order to do that", I mean AWALT and NAWALT

curi:

i don't know what you want to get out of this. you seem to want to call me Wrong about an issue you don't know or care about.

curi:

b/c you didn't like the choice of words that make up a particular jargon

curi:

which were, i will readily grant, not chosen in a way to make friends with the mainstream, and aren't normally used for outreach

JustinCEO:

Perhaps a different topic would be more fruitful to discuss??

Critical Rationalist:

@curi you listed this as a debate topic on your page. I read through your list and this issue jumped out at me. I am deeply interested in human sexuality (I mean, who isn't?). You are trying to read bad motivations into my behavior. And now you are saying "you just want to prove me wrong". You are doing exactly what Giste did when he accused me of being in debate mode

curi:

if you're deeply interested then why don't you begin reading material from this school of thought?

Critical Rationalist:

Also like him, you are refusing to answer my questions. When Giste did this, you (rightly) called him out on it (no hard feelings giste).

curi:

until you find some objection to it

JustinCEO:

Ya read to first objection

curi:

you're trying to jump into the middle of an internal debate you aren't familiar with

Critical Rationalist:

@curi by affirming PUA, you are implicitly rejecting evo psych. You are thus taking sides on an issue when you don't understand the rival theory. You're in the same position as me (but a mirror image)

curi:

what are you talking about? PUAs routinely use evo psych explanations.

Critical Rationalist:

I guess I should say your version of pua, they are compatible

Critical Rationalist:

yes I've actually heard that, that's fair

JustinCEO:

You guys could both read to first objection on a suggested book

Critical Rationalist:

I think this matters, though

curi:

my objections to evo psych have nothing to do with PUA

Critical Rationalist:

Let me use an analogy

Critical Rationalist:

Let's think about Einstein's theory

Critical Rationalist:

The paradigm case of a falsifiable theory

curi:

wait slow down

curi:

by affirming PUA, you are implicitly rejecting evo psych

do you retract this?

Critical Rationalist:

Oh yes 100%

Critical Rationalist:

Anyways like I was saying

Critical Rationalist:

The theoretical details of Einstein's theory are very hard to understand

Critical Rationalist:

much harder to understand than PUA or evo psych

curi:

You are thus taking sides on an issue when you don't understand the rival theory.

do you mean that i don't understand what NAWALT means?

Critical Rationalist:

No, I meant the rival theory, evo psych. But I retracted the implication that they are rival theories

Critical Rationalist:

Anyways

Critical Rationalist:

Despite the theoretical sophistication, Einstein was still able to say "this is the observation that will refute my theory" in clear terms.

curi:

yes because he was dealing with stuff that's much easier to measure and do math about, etc.

Critical Rationalist:

@curi I won't talk by implication. I do not think you have a clear understanding of what observations will falsify this claim "no women are immune to PUA"

curi:

other fields, like those involving human behavior, have a much harder time measuring things. takes more theory to do that.

Critical Rationalist:

I strongly suspect that you do not have an answer.

Critical Rationalist:

I was texting someone else in the group, and I am not the only one with this suspicion

Critical Rationalist:

When you don't answer a question, it makes you look bad.

curi:

can you quote a question i didn't answer?

Critical Rationalist:

What observations would count as a falsification of this theory: "no women are immune to PUA"

curi:

i did respond to that

Critical Rationalist:

So, tell me what the observations are?

curi:

do you remember me responding?

Critical Rationalist:

well, you did say NAWALT. But that is not a statement about what you would observe. Let me say something about that answer. It is actually just a tautology. A NAWALT is just "a woman who is not like that". In other words, you are just answering by saying the observation that would falsify the theory is the observation that the theory doesn't predict

Critical Rationalist:

That would be like Einstein saying "an observation that is not predicted by general relativity would falsify the theory"

curi:

do you remember me responding?

Critical Rationalist:

But what Einstein actually said was "if you see the points of light here rather than here, that falsifies the theory".

Critical Rationalist:

Yes I do now remember, you said NAWALT

curi:

you didn't remember before?

Critical Rationalist:

But I'm explaining why that is insufficient

Critical Rationalist:

No I forgot about that answer when I was typing. Thank you for helping me remember.

curi:

do you agree that a response you consider insufficient is different than no response?

Critical Rationalist:

yes of course

curi:

do you retract everything you said comparing me to GISTE?

Critical Rationalist:

Well, during the earlier part of the conversation

Critical Rationalist:

I followed up to your NAWALT answer by insisting on something more specific

Critical Rationalist:

that was approximately when you started proposing that I read a book

Critical Rationalist:

(if I remember correctly)

Critical Rationalist:

Which is still not answering the question

curi:

AWALT and NAWALT are jargon terms which refer to many books, articles and discussions. thousands of pages of material. is there a particular part of that literature which you think is inadequately specific?

Critical Rationalist:

But I am asking for specificity in terms of what observation counts as an instance of a NAWALT in a Popperian test. I bet that none of the material you mention gives specificity in that sense

Critical Rationalist:

And if they do, just quote it or point me to page numbers

curi:

you want physics-like specification. the field doesn't have that.

curi:

do you think evo psych has that?

Critical Rationalist:

Not physics level, but evo psych theorists make predictions and test them.

Critical Rationalist:

They do say in advance what would count as falsification of their specific hypotheses

curi:

PUAs have made and tested many predictions.

Critical Rationalist:

I'm more than happy to give examples

Critical Rationalist:

Ok great!

curi:

e.g. "I think X would be a good opener". then try it 20 times.

Critical Rationalist:

Tell me what predictions follow from this theory (the original topic): "no women are immune to PUA"

Critical Rationalist:

Remember, if that theory is empirically testable in a Popperian sense, if the predictions are not corroborated, the theory should be considered falsified

curi:

it predicts things like e.g. Joe Newbie will never find a NAWALT, and if he claims to have found one he's fooling himself.

Critical Rationalist:

"if he claims to have found one he's fooling himself" this sounds suspiciously like an ad hoc hypothesis designed to save the theory from refutation

Critical Rationalist:

but again

curi:

if you review the literature and find inappropriate use of ad hoc hypotheses, feel free to point them out.

Critical Rationalist:

that is not an observational prediction I can test. I need to know what observations count as an instance of a NAWALT

curi:

you will find in most cases that Joe is fooling himself in highly repetitive ways that were already written about at length.

Critical Rationalist:

in most cases?

curi:

that's the typical discussion

curi:

the concepts AWALT and NAWALT are not specified as exactly as you'd like (like physics). i already told you this but you keep bringing it up. i don't see the point.

Critical Rationalist:

let me give you an example of how evo psych works

Critical Rationalist:

so you can see what I mean

Critical Rationalist:

one evo psych explanation of male homosexuality

Critical Rationalist:

was that genes for being gay also lead to increased giving to kin. This means gay uncles invest more in nieces and nephews than straight uncles.

Critical Rationalist:

Because of kin selection, those genes can be selected for

Critical Rationalist:

This theory lends itself to a prediction: gay uncles should be measurably more generous to kin than straight uncles

Critical Rationalist:

That turns out to not be true

Critical Rationalist:

So the theory is falsified

Critical Rationalist:

Now, let me give you this

Critical Rationalist:

your example of "this pickup line is superior"

Critical Rationalist:

that is DEFINITELY testable

Critical Rationalist:

I would never dispute that

Critical Rationalist:

it is very easy to run natural experiments on that

curi:

PUA is a body of knowledge that has used lots of testing

curi:

that's all i said

Critical Rationalist:

but this claim "no women are immune to PUA"

Critical Rationalist:

I think it should be testable

curi:

i also said there were no known documented counter examples to AWALT

Critical Rationalist:

what would count as a documented counterexample?

Critical Rationalist:

tell me

curi:

if you have one you think qualifies, let me know

Critical Rationalist:

no, you have to explain what observation would count as someone qualifying

Critical Rationalist:

maybe your explanation won't be complete

curi:

it's explained in a very roundabout, complicated way for thousands of pages

Critical Rationalist:

but get me started

curi:

that's all u get, sorry

curi:

that's what exists for that debate

curi:

also i think an evo psych example with a passed test would be more enlightening.

Critical Rationalist:

a different theory of male homosexuality is this

Critical Rationalist:

there is a gene on the X chromosome (males have one, females have two) which causes increased attraction to men. In males this makes them gay; in females it makes them extra fertile. This would allow the gene to continue to exist.

Critical Rationalist:

This theory makes a prediction.

Critical Rationalist:

Female relatives of gay men (who share that gene on the X chromosome) should have more children on average

curi:

that prediction doesn't follow

Critical Rationalist:

For now, this theory has in fact been corroborated

Critical Rationalist:

Why not?

curi:

how do you get from increased attraction to more children? could easily result in fewer children.

Critical Rationalist:

You might have misunderstood

curi:

do you mean the gene does different things for the different genders?

Critical Rationalist:

one way of reading it is that the gene makes the holder want to have sex with men more

curi:

what does that have to do with fertility?

Critical Rationalist:

I mean fertility in the sense of producing more children

Critical Rationalist:

in women, wanting sex with men leads to more children (in our evolutionary past, no condoms)

curi:

that's what i'm saying doesn't follow

curi:

wanting sex and getting sex are different things

Critical Rationalist:

ok good, so a good followup experiment would measure the number of sex partners

Critical Rationalist:

now, as you know

Critical Rationalist:

when observations occur as the theory predicts

Critical Rationalist:

it doesn't prove the theory, it only corroborates it

curi:

are you going to respond to me?

Critical Rationalist:

which is why you try to do as many tests as you can

Critical Rationalist:

what question?

curi:

the non sequitur issue

Critical Rationalist:

well, given evolutionary dynamics, there are always men who want to have sex with women (for reasons having to do with differential parental investment, which @Alisa mentioned)

Critical Rationalist:

so increased desire for sex (in women) would reliably lead to more sex

curi:

do you think it reliably leads to more sex today?

Critical Rationalist:

because they are the gatekeepers (as a PUA I'm sure you believe this)

Critical Rationalist:

yes, if women want more sex, they will usually get it

curi:

can you think of any reasons they wouldn't? any ways this can go wrong?

Critical Rationalist:

of course! hence the need to do followup experiments! corroboration does not equal proof

Critical Rationalist:

just like with Einstein

curi:

hold on

Critical Rationalist:

the fact that the starlight was where it was does not PROVE he was right

curi:

when you have a problem with the logic of your theory, testing it more times doesn't help

Critical Rationalist:

there are other explanations

Critical Rationalist:

Ok, lets compare this with Einstein

curi:

the tests are all premised on that logic

Critical Rationalist:

his theory predicted that starlight would be here rather than here

Critical Rationalist:

but there are other possible reasons for the light to be in that location

curi:

you're saying something like "X will cause Y which will cause Z so we'll measure Z to learn about X", right?

Critical Rationalist:

no

Critical Rationalist:

we say "x will cause y which will cause z"

Critical Rationalist:

we look to see if there is z

Critical Rationalist:

if there is no z, theory is falsified

Critical Rationalist:

if there is a z, the theory is not proven right
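The x-causes-y-causes-z schema CR describes can be sketched as a minimal decision rule. This sketch is illustrative only; the function name and outcome labels are invented here, not taken from the discussion:

```python
# Minimal sketch of the hypothetico-deductive schema: a theory (plus auxiliary
# assumptions) entails a prediction z; observation then checks for z.

def evaluate(theory_predicts_z: bool, z_observed: bool) -> str:
    """Classify a test outcome in Popperian terms."""
    if not theory_predicts_z:
        return "no test"        # the theory makes no claim about z
    if not z_observed:
        return "falsified"      # the predicted z is absent
    return "corroborated"       # z is present: the theory survived, not proven

# The theory "x will cause y which will cause z" predicts z:
print(evaluate(theory_predicts_z=True, z_observed=False))  # falsified
print(evaluate(theory_predicts_z=True, z_observed=True))   # corroborated
```

The asymmetry is the point of the schema: a missing z refutes, while an observed z only lets the theory survive that test.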

Critical Rationalist:

same with Einstein

curi:

so if Y would cause Z or not-Z, then the test doesn't work right due to the theory being logically confused?

Critical Rationalist:

"x (Einsteinian gravity) will cause y (curved spacetime) will cause z (star light here rather than here)"

Critical Rationalist:

if by y you mean "increased sexual desire", then we have other theoretical reasons for believing that (in women) increased sex drive will cause more sex partners (z)

Critical Rationalist:

parental investment theory

Critical Rationalist:

As I've said, I'm sure you already agree with that anyways

curi:

i asked if there were reasons it could lead to less sex. you said yes. but then instead of investigating this problem you suggested running extra tests which are premised on the idea that more attraction would lead to more sex.

Critical Rationalist:

are there possible reasons that spacetime could lead to the light NOT being where Einstein predicted?

Critical Rationalist:

yes, there could be other forces acting on the light that we don't know about

Critical Rationalist:

there are always possibilities like that

Critical Rationalist:

(which you can test on their own)

curi:

suppose, hypothetically, that increased attraction reduces the amount of sex a woman has by 50%. then would the results of your proposed tests be misleading?

Critical Rationalist:

you mean if women who wanted sex more had 50% less sex?

curi:

yes

Critical Rationalist:

yes, then the prediction would not follow

curi:

ok and could you solve this problem by doing more tests?

curi:

test it 100 times instead of 10

Critical Rationalist:

no

Critical Rationalist:

you would test that claim

curi:

[2:36 PM] curi: can you think of any reasons they wouldn't? any ways this can go wrong?
[2:36 PM] Critical Rationalist: of course! hence the need to do followup experiments! corroboration does not equal proof

Critical Rationalist:

I mean followup experiments with different methodologies

Critical Rationalist:

i.e. test for a relationship between female sex drive and number of sex partners

Critical Rationalist:

Ok

Critical Rationalist:

Everyone who is watching

curi:

ok do you think that testing has been done?

Critical Rationalist:

I want you all to take note of something

Critical Rationalist:

(before I answer @curi's next volley of questions)

Critical Rationalist:

I do not know if that testing has been done or not

Critical Rationalist:

I asked @curi for specific observational predictions based on his theory. He said "NAWALT". When I asked him to explain what observations would count as an instance of NAWALT, he said "it's explained in a very roundabout, complicated way for thousands of pages. that's all u get, sorry". When he asked me for specific observational predictions based on evo psych, I answered. I gave real world examples from real experiments. I gave one example of an experiment that FALSIFIED an evo psych hypothesis, and I gave one example of an experiment that CORROBORATED an evo psych hypothesis. He asked a followup question about whether the corroborating experiment actually counted as corroboration, and I explained why it does by comparing it to the case of Einstein. I tried to use as little jargon as possible. If @curi asks me to explain any jargon I left unexplained, I will be happy to do so. There is a clear asymmetry here.

Critical Rationalist:

If anyone thinks my account of this conversation is inaccurate, I encourage you to read it for yourself.

curi:

do you think there exist examples of PUA openers or concepts which were falsified?

Critical Rationalist:

I stated (and never disputed) that the relative efficacy of openers is falsifiable.

curi:

ok so some evo psych ideas and some PUA ideas are relatively easy to test. so what?

Critical Rationalist:

You have not explained how "no women are immune to PUA" is falsifiable.

Critical Rationalist:

If you think there are some evo psych ideas that are not falsifiable, please tell me what you think they are.

Critical Rationalist:

I don't think there is an analogous unfalsifiable claim.

curi:

i asked for an example of an evo psych idea that passed some testing. the example you gave depends on an untested (as far as you know) premise which one can immediately think of major flaws with. why do you think that constitutes meaningful corroboration?

Critical Rationalist:

What did I say in response?

Critical Rationalist:

Did you read my Einstein analogy?

Critical Rationalist:

Einstein's prediction that starlight would be "here rather than here" requires untested assumptions

Critical Rationalist:

You always need auxiliary assumptions to get from a theory to a prediction (this is well understood in philosophy of science). You then can test those assumptions after

Critical Rationalist:

Do you disagree with Popper? Do you not think that Einstein's theory was meaningfully corroborated by the 1919 test?

curi:

how do you differentiate your method from the following: i think there is a gene which makes people like to eat fish. i assume, without testing, that liking to eat fish gives people better skin quality which leads to being more attractive which leads to more sex. i measure babies and correlate it to that gene. i say my whole theory is corroborated.

Critical Rationalist:

how would that explain male homosexuality?

curi:

it doesn't. it's a different theory.

Critical Rationalist:

... that is the reason the explanation was conjectured

Critical Rationalist:

so I would criticize your theory because it doesn't explain what it is supposed to explain

curi:

i'm giving a toy example to discuss a concept. does that make sense to you?

Critical Rationalist:

No. The claim that there is a gene on the x chromosome that leads to increased attraction to males was postulated to explain male homosexuality

Critical Rationalist:

that is why it was postulated

curi:

do you know what a toy example is?

Critical Rationalist:

your theory does not explain that datum at all

Critical Rationalist:

so it would be criticized on that basis

curi:

busy?

curi:

do you think any untested assumptions are allowable and it's still corroboration, or only certain categories?

Critical Rationalist:

Untested assumptions are allowable so long as they can be tested later

Critical Rationalist:

And as long as they’re consistent with other theories etc

curi:

anything which can be tested later is allowable?

curi:

oh, consistent with which other theories?

Critical Rationalist:

Well, yeah you could put additional constraints. Consistent with other corroborated theories etc

Critical Rationalist:

You still haven’t engaged with my Einstein analogy

curi:

your premise (female more attracted to men -> more sex) is inconsistent with many theories.

curi:

that's why i objected to it

Critical Rationalist:

Oh yeah, no, i meant consistent with theories that are well corroborated. thank you for the objection

Critical Rationalist:

Allows me to clarify

curi:

it's inconsistent with many high quality theories, not just arbitrary junk

curi:

i'm not talking about the space of logically possible theories

Critical Rationalist:

@curi this line of questioning is important and interesting

curi:

it's inconsistent with a variety of things that i and many other people believe and have extensive reasons for

curi:

there are many books about such things

Critical Rationalist:

But I’m going to have to remind you of the asymmetry

Critical Rationalist:

When you asked for a specific experimental test of an evo psyc theory

Critical Rationalist:

I gave you an example

Critical Rationalist:

A concrete example of how an observation can rule out an evo psyc theory

Critical Rationalist:

For any evo psych theory

curi:

i think it's a good example of the quality of the work in the field b/c it assumed a very questionable premise.

Critical Rationalist:

I’d be happy to do this for you

Critical Rationalist:

But when I challenged a specific PUA theory

curi:

AWALT is like a meta study

Critical Rationalist:

you only said “NAWALT”, and couldn’t tie it to a concrete observation

curi:

it's a belief about the overall state of many other tests, ideas, debates, etc.

Critical Rationalist:

You could not specify what observations would falsify the theory

Critical Rationalist:

Even though you think the theory is testable (in Popper’s sense)

Critical Rationalist:

When I explain how my theories are testable

Critical Rationalist:

I give details

Critical Rationalist:

I answer followup

curi:

but your details are problematic

Critical Rationalist:

You think so

Critical Rationalist:

I explained why they aren’t with the Einstein analogy

Critical Rationalist:

Which you haven’t responded to

curi:

want me to give details that you consider problematic? would that satisfy you?

Critical Rationalist:

But you haven’t even BEGUN to do the same for your theory

Critical Rationalist:

Well, it’s not just enough for me (or you) to consider something problematic

Critical Rationalist:

What matters is arguing for their problematic nature

curi:

you seem to think saying stuff i consider bad quality research is a good start. i don't know why you think that should count for a lot.

Critical Rationalist:

You tried, and I responded (my response has been left alone)

Critical Rationalist:

It doesn’t matter what you consider to be bad

Critical Rationalist:

You have to argue that it is bad

curi:

i asked if you could think of reasons your premise is false

curi:

you said yes

curi:

instead of asking for mine

Critical Rationalist:

I criticized that argument

curi:

so we didn't go into those details because you conceded

Critical Rationalist:

The same is true for Einstein’s prediction

Critical Rationalist:

Which Popper thought was a paradigm case of empirical testing

Critical Rationalist:

Again, still waiting

curi:

can you think of reasons that matter that the premise would be false, not just picky logically-possible stuff? this is what i meant in the first place.

Critical Rationalist:

The reasons in the Einstein case also matter

curi:

what is your best argument that the premise is false that you know of?

Critical Rationalist:

There really could be other forces interacting with the curvature of space time

Critical Rationalist:

And don’t forget

Critical Rationalist:

“Picky” isn’t bad

curi:

you're trying to dismiss infinitely many possible objections b/c there are always infinitely many possible objections. this was not the point i was making

Critical Rationalist:

No that’s not the response I made.

Critical Rationalist:

At a later point I will explain my Einstein response again if you wish. For now I have to go. I recommend that you read my Einstein response as I originally put it, and really try to understand it.

curi:

i already know what you're saying but you aren't following me and you keep trying to fix this by explaining CR to me.

curi:

You always need auxiliary assumptions to get from a theory to a prediction (this is well understood in philosophy of science). You then can test those assumptions after

Critical Rationalist:

Note again that you have not even begun to do something analogous for your theory. I think I’ve explained the problem.

curi:

that comment deals with the infinity of possible objections

Critical Rationalist:

But yeah I really do have to go for now. Take a look at the passages about x causing y which causes z

Critical Rationalist:

Your argument against the corroboration of the evo psyc theory would work almost exactly the same way against the corroboration of Einstein’s theory

curi:

you don't know what my argument is

curi:

you made incorrect assumptions about it

curi:

i don't have an objection re Einstein. while your assumption contradicts ideas bordering on common sense.

curi:

that's a difference. it's not "something could be wrong" but actual known criticism. like if someone assumed 2+2=5 as a premise, that has known criticism in a way Einstein's premises did not.

curi:

i illustrated this with a toy example where i put an intentionally dumb premise in the middle, but you didn't understand it and also wouldn't followup and try to clear up the issue.

curi:

curi:

i'm giving a toy example to discuss a concept. does that make sense to you?

CR:

No.

curi:

do you know what a toy example is?

CR:

[no answer]

curi:

you switch topics frequently without resolving them. however one asymmetry in the discussion is that we've established and mutually agreed that you made mistakes. while you have not established any specific mistake by me.

curi:

my guess is you will lose patience and stop discussing prior to https://curi.us/2232-claiming-you-objectively-won-a-debate

curi:

you will give up without an impasse chain https://elliottemple.com/essays/debates-and-impasse-chains and without providing some other written methodology by which you think you won any specific debate point.

curi:

i hope i'm mistaken about this. i haven't given up. curious what you think about discussion goals like those.

curi:

i think you're overly focused on making inconclusive comments re big picture instead of resolving specific small conversational branches.

Critical Rationalist:

One quick point of clarification. When I said "no" in response to your toy example, I was not saying that I didn't understand your example. I understood your toy example, I just thought it was inadequate as a rival theory to mine.

curi:

i asked a direct question, and you gave a direct answer, but you weren't answering and then ignored me when i tried to clarify further?

Critical Rationalist:

I understand your toy example.

curi:

your prior comments had indicated you did not understand it.

curi:

you kept trying to relate it to homosexuality, which it did not mention.

curi:

and you persisted in that after i clarified that it wasn't related to homosexuality

curi:

I understood your toy example, I just thought it was inadequate as a rival theory to mine.

this statement is self-contradictory. the second half shows you don't understand it.

curi:

b/c it wasn't a rival to yours.

Critical Rationalist:

Ok, I see. So your example is meant to criticize the link between the theory (a gene on the x chromosome causes homosexuality) and the prediction (female relatives of male homosexuals will have more sex partners)

Critical Rationalist:

The link between the theory and prediction is called an auxiliary assumption. Do you know what an auxiliary assumption is?

Critical Rationalist:

You are essentially saying "you haven't corroborated the auxiliary assumption (in this case, that women who want more sex will get more sex as a result)"

curi:

that is not what i'm saying, no

Critical Rationalist:

Ok, please clarify.

Critical Rationalist:

Here is my claim

curi:

i said i think the assumption is bad.

Critical Rationalist:

So you agree that it is legitimate in principle to use untested auxiliary assumptions? You just think this particular auxiliary assumption conflicts with other (well corroborated) theories?

curi:

i didn't say how well corroborated the other theories were. we often use non-empirical criticism, e.g. logical points.

curi:

i agree it's legitimate in principle, but you have to use critical thinking to limit it, not do it arbitrarily.

Critical Rationalist:

Ok sure, so you just think that this particular auxiliary assumption conflicts with other well corroborated OR logically unrefuted theories?

curi:

is this the research you're talking about? https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1691850/pdf/15539346.pdf

Critical Rationalist:

Yes. More than one experiment (to the best of my memory) has been done in this area.

curi:

does one of the other papers talk about attraction to males?

Critical Rationalist:

No, strictly speaking they are agnostic to the exact mechanism by which the gene on the x chromosome causes increased female fecundity.

curi:

so the claim you made, as an example of something corroborated, is not part of the research?

curi:

and the assumption i doubted is also not part of the research?

Critical Rationalist:

Strictly speaking, the claim made by the researchers is that the gene on the x chromosome causes homosexuality in males but increased fecundity in females. It is agnostic as to mechanism. The idea that the gene causes increased attraction to males strikes me as plausible. However, if you think it matters that this mechanism is not described by the researchers, I'm happy to use a different example of corroborated evo psych theories.

curi:

you're not speaking strictly, though. e.g. you speak of "the gene" but they don't. right?

Critical Rationalist:

Oh yes they have localized a gene

Critical Rationalist:

One second

curi:

https://www.newscientist.com/article/dn6519-survival-of-genetic-homosexual-traits-explained/

Camperio-Ciani stresses that whatever the genetic factors are, there is no single gene accounting for his observations.

is Camperio-Ciani wrong or misreported?

Critical Rationalist:

When I say they have localized a gene

Critical Rationalist:

I do not mean "the" gene that explains homosexuality.

Critical Rationalist:

It is a gene which makes a male more likely to be homosexual.

Critical Rationalist:

Complex traits like homosexuality are polygenic.

Critical Rationalist:

One gene that was localized by this kind of research was Xq28

curi:

is Xq28 a gene?

Critical Rationalist:

yes

Critical Rationalist:

Now, I'm not particularly interested in the details of this example (if it happens to have a false auxiliary assumption, I can give many other examples of corroborated evo psych theories: patterns of male vs female sexual jealousy, sex differences in preference for casual sex)

curi:

https://bmcgenomics.biomedcentral.com/articles/10.1186/1471-2164-7-29

Well known for its gene density and the large number of mapped diseases, the human sub-chromosomal region Xq28 has long been a focus of genome research.

why would a gene contain gene density?

Critical Rationalist:

The point is to say that this is how you are supposed to test theories.

Critical Rationalist:

Make a theory, use some auxiliary assumptions (you still have not indicated if you understand what these are) to form predictions, then test the predictions.

curi:

why would a sub-region of a gene contain at least 11 genes?

Critical Rationalist:

There are competing definitions of "genes". I found one article which said "the study hypothesized that some X chromosomes contain a gene, Xq28, that increases the likelihood of an individual to be homosexual."

Critical Rationalist:

Maybe that article had a different definition of gene, maybe it was a simple mistake.

curi:

what definition of gene are you using, and what is a competing definition that you disagree with?

Critical Rationalist:

I have no opinion on the definition of gene. I will use whatever definition you want me to.

Alisa:

https://en.wikipedia.org/wiki/Xq28

Critical Rationalist:

It makes no difference to the content of the prediction whether we define Xq28 as one gene or 11 genes.

curi:

you aren't reading carefully even though i'm talking about details. that's inappropriate to productive discussion

curi:

no one said Xq28 had 11 genes.

Critical Rationalist:

@curi my point is that it does not matter how many genes are in Xq28.

curi:

my point is that you were factually mistaken. i think getting facts and statements correct matters. you don't seem interested. i regard this as an impasse.

Critical Rationalist:

@curi this is not an impasse (in the sense of a deadlock in debate). However many genes you think are in Xq28, I will grant that fact to you.

Alisa:

That is not responsive to his point that you were mistaken and that getting facts and details correct matters.

curi:

the impasse isn't the number of genes in Xq28. i don't think you understood what i said. your repeated misreadings of what i say, along with lack of clarifying questions or interest, is a second impasse.

Critical Rationalist:

What is the impasse? Please explain it to me.

Alisa:

you were factually mistaken. i think getting facts and statements correct matters. you don't seem interested.

curi:

your disinterest in focusing on making correct statements and caring about errors in them.

Critical Rationalist:

Ok, I also want to make correct statements. If you know how many genes are in Xq28, I will be happy to find out (so I can be correct).

curi:

do you agree that you made an error?

Critical Rationalist:

This particular fact (how many genes are there) does not have relevance to the debate (unless you can show otherwise). But I agree that correct statements are better than incorrect ones.

Critical Rationalist:

Which statement of mine was an error?

curi:

that's a yes or no question.

Critical Rationalist:

If you can show me which statement was an error, I'll agree.

Critical Rationalist:

I am a fallibilist, so I expect that sometimes I will make errors.

curi:

that's not an answer to the question. your unwillingness or inability to understand and answer questions is an impasse.

JustinCEO:

Taking a position on whether you made an error is sticking your neck out. If you're wrong in your evaluation that would warrant further analysis re why you missed that error

Freeze:

are these impasse chains in action?

Critical Rationalist:

Ok, I don't think I made an error.

Critical Rationalist:

Show me one, and I'll concede that I made one.

curi:

@Freeze not exactly, no clear chains

Freeze:

ah

Alisa:

is Xq28 a gene?

yes

For one

Freeze:

just unconnected impasses

curi:

[4:40 PM] curi: is Xq28 a gene?
[4:41 PM] Critical Rationalist: yes

Freeze:

still interesting

Critical Rationalist:

Some people define it as a gene.

curi:

what definition of gene are you using, and what is a competing definition that you disagree with?

Critical Rationalist:

But actually the more reputable sources (from my glancing) define it as having more genes.

Critical Rationalist:

So sure, I concede that was an error. It is actually many genes.

Critical Rationalist:

A "gene complex"

curi:

why did you change your mind even though i didn't give new information?

Critical Rationalist:

Because you pointed out an error that I made.

curi:

i don't think you understood my question

curi:

when you say "you pointed out an error that I made" you seem to be referring to me giving you new information, contrary to the question.

Critical Rationalist:

I couldn't think of any error I made.

Critical Rationalist:

Alisa pointed one out. And I double checked the sources, and confirmed that it was an error

curi:

You forgot about the issue of whether Xq28 is a gene when evaluating and making a claim re whether you had made an error?

Critical Rationalist:

I also wasn't sure earlier because one source said it was a gene

Critical Rationalist:

But the more reputable sources said it was multiple genes

Critical Rationalist:

So I now concede that it was an error

Critical Rationalist:

These are all fair things to be saying.

curi:

i've asked a yes or no question. i'm still waiting for an answer.

Critical Rationalist:

No, it was not in my mind when you asked about whether I had made an error.

curi:

do you mean "yes"?

JustinCEO:

curi:

You forgot about the issue of whether Xq28 is a gene when evaluating and making a claim re whether you had made an error?

Critical Rationalist:

Sorry, yes.

curi:

it's hard to organize and make progress in discussions with frequent errors. because you're talking about one thing and then an error comes up, and you talk about that, and another error comes up. this can happen a lot if the rate of errors is faster than or similar to the rate of error corrections. does this abstract issue make sense to you?

Critical Rationalist:

Yes, the abstract issue makes sense to me. I concede the error, and agree that errors make conversations harder. I said Xq28 is one gene when it is in fact many genes. You are free to continue with any line of argument you had.

curi:

ok. i appreciate that. many people quit around here if not earlier.

it's hard to answer some of your complicated, bigger picture questions and points, in a way that satisfies you, when communication about some of the smaller chunks is breaking down often. that's my basic answer re AWALT. does that make sense?

this discussion community has been trying to examine issues rigorously for 25 years. it has developed some complicated ideas about how to do that. if you're interested in learning the methodology, that'd be great. if not, it's possible to have discussions but expectations have to be lower. do you think that's fair?

Critical Rationalist:

I think this will be my last comment for the night. Given that I only have two days left before I leave my family for Georgia, it might be my last comment for a while. Here is why I do not think that is fair. Debates about evo psych have also gone back decades (longer than 25 years). There are also complicated ideas about how to do that (in fact, more complicated: they involve statistical analysis and genetics). Despite the fact that the debates about evo psych theories have been going on longer, and have more complicated methodologies, I was still able to explain (in plain English) what observations would falsify specific evo psych theories. I think it is reasonable to expect you (as a Popperian) to be able to do the same. You have a hypothesis (no women are immune to PUA) and you have been unwilling to explain what data would falsify it. You have said (and I agree) that some PUA hypotheses are testable, but I started this conversation by contesting that particular claim (i.e. the AWALT claim). I don't think there are any evo psych hypotheses for which I could not explain (in plain English) what evidence would count as falsification of the hypothesis. But if there were, I would just admit "yes, that particular hypothesis is not falsifiable". I am not claiming in any way to have "won the debate". I view this more as a conversation. I am merely saying that @curi held me to a different standard. He extensively criticized my examples of how to test evo psych hypotheses, but was unwilling to give his own example of how to test the hypothesis which was the subject of debate. I could not even begin criticism of his position, because he flatly refused to answer the crucial question.

curi:

your explanations re evo psych contained errors which have not yet been untangled, so you did not yet succeed at doing that.

curi:

that = " was still able to explain (in plain English) what observations would falsify specific evo psych theories."

curi:

you're also comparing research into evo psych using standard methodology with research into discussion methodology. and doing it after i just gave several demonstrations of how your discussion contributions were inadequately rigorous, hence my suggestion that better methodology is needed to deal with that ongoing problem.

curi:

the standard i was trying to hold you to was not being mistaken. i do hold myself to that too.

curi:

i did not agree to debate AWALT with you (you call it the subject of the debate) and you didn't seem to listen to me about that.

JustinCEO:

CR seems more interested in showing curi has some purported double standard than in trying to achieve mutual understanding

curi:

AWALT is an all X are Y claim, similar to "all swans are white". you can test it by looking for counter examples. in order to judge what is a counter example you have to learn and use the redpill/PUA theoretical framework to interpret the data. i don't know a simple summary to redpill a bluepill person in a couple paragraphs so that they could do that, especially not when they're argumentative and not asking questions to learn about PUA.

curi:

the data is much messier than physics b/c e.g. no PUA has a 100% success rate

curi:

so 10 guys can try to get a girl using their flawed PUA, all fail, and that doesn't imply she's a NAWALT

curi:

this is dangerous b/c ppl could make endless excuses to get rid of counter examples, as CR said. nevertheless it's the situation. i asked if he knew of that danger happening but he didn't. which makes sense because he's unfamiliar with the literature and not in a position to join the AWALT debate.

curi:

AWALT is not 100% rigorously defined. worse, it's considerably less airtightly specified than many other existing ideas. nevertheless it does have some content, and if data started clashing with it in big ways the reasonable people would start changing their mind.

curi:

people mean stuff by it that has limited flexibility

curi:

but no single field report could refute AWALT

curi:

no more than observing one family for one day could refute the idea that they are coercive parents.

JustinCEO:

do lesbians use PUA?

curi:

no idea

JustinCEO:

i wondered cuz lots of lesbian relationships fall into gendered patterns where there's like the boy lesbian and girl lesbian

JustinCEO:

so i was wondering if it'd work for the boy lesbians

curi:

you could try to RCT whether PUAs have better pickup results on average than ppl without PUA training, but that won't tell you whether AWALT or NAWALT.

curi:

you can't directly test whether a particular woman is a NAWALT b/c any number of PUA attempts failing on her is compatible with AWALT

curi:

that doesn't mean those failures would be meaningless. we'd try to come up with explanations of the data.

curi:

it could indicate e.g. a systematic error in PUA training such that many PUAs fail on that woman. which would be unsurprising. no one thinks PUA is perfect as understood today. the issue is whether that kind of stuff works.

curi:

CR was uninterested in the problem situation this debate stems from

curi:

which is ppl actually want to find a NAWALT and other ppl think it's a hopeless quest

curi:

this has consequences like MGTOW, which believes AWALT and consequently rejects women

JustinCEO:

ya i mentioned that earlier i think re: wanting to find NAWALT

curi:

the actual nature of the debate is kinda like, stylized:

MGTOW: u'll never find a unicorn, RIP
Joe: my gf is GREAT, why u dissing her? i totally understand that redpill is right in general and > 90% of girls are like that, but she's special, just look harder
MGTOW: link me her facebook
Joe: ok
MGTOW: here are 8 examples of AWALT behavior i found on her wall
Joe: fuck you

curi:

then, after consistently dealing with challenges like this, CR comes along and says AWALT theory is not subject to empirical testing.

JustinCEO:

i think if u assume PUAs are like misogynists or something (which is a conventional view) you would have the opposite expectation, that they want to say AWALT

curi:

b/c it's hard to tell him how to find AWALT behaviors on an FB page

curi:

there's no simple formula for that

curi:

i can't write a bot to scrape that data

curi:

i can't get that data from a survey

curi:

it takes creative, critical thinking

curi:

note this debate is btwn ppl who think redpill is 99% right and ppl who think 100%, NOT btwn ppl who think redpill is 50% right or 5% right or 0% right. the debate with them is different. CR didn't seem to understand this when i explained earlier. but then blames me for not being able to give a short explanation, just cuz he didn't understand the one i gave? meanwhile he did not give one that satisfied me, but claimed asymmetry b/c he gave one!

curi:

anyway the big thing, to me, is he makes lots of mistakes, he admits he makes lots of mistakes, he ought to be super interested in talking with someone who can catch and correct his mistakes (and who he can't do that to, as yet). but it's not clear that he is.

curi:

and now he's leaving, probably for a while, without trying to do those things or explain alternatives or concede he has a lot to learn and express interest in learning it.

curi:

[4:26 AM] GISTE: Before I address your question, I have a point to make and a clarifying question about what you said:
(1) I think you’re implying that all of your previous comments are compatible with Popperian epistemology. I’ve been reading your comments and I disagree with many of them re epistemology. So that means that you and I disagree on what Popperian epistemology really is, how it works, and how it applies to the non-epistemology topics we’re discussing.
(2) To clarify, are you saying that you have to look at data (observe) before coming up with a theory? @Critical Rationalist
[4:31 AM] Critical Rationalist: I don’t hear a question from 1).

This is a misreading by CR. GISTE clearly stated that he had a point and a question, then provided a point and a question. CR assumed, not only without it being said but directly contrary to the text, that there would be two questions.

curi:

[5:00 AM] GISTE: (1) You’ve seen me disagree with Popper on stuff re epistemology, so I don’t get the “sacred text” comment. (Recall that we talked about Popper’s critical preferences idea and I gave you a link to a curi blog post that explains that Popper’s idea is wrong and incompatible with the rest of Popperian epistemology, while curi’s correction to that idea is compatible with the rest of Popperian epistemology.)
(2) Ok. I recommend that you engage with @curi or @alanforr about this because they are experts on this and I’m not. For now I’ll explain something that I’m not sure will help you understand my view. (This is my vague memory and these are not actual quotes.) Popper once gave a lecture where he said to his students “Observe”. The students said, “observe what?” Popper replied with something like, “Exactly, you have to have an idea (theory) about what to observe before you can observe”. This was to point out that theory always comes before observation.
(3) selective pressures cannot “give rise” to anything. I tried to come up with an interpretation of your question that makes sense from my perspective (which includes my understanding of epistemology) but I did not succeed. I could try to come up with a question that tries to get at what I think you’re trying to get at, and then answer that question. So here’s my question: what selective pressures could have possibly selected for the genes that made flying dinosaur bones lighter? Answer: flying dinosaurs that had genes that made their bones lighter resulted in those dinosaurs being able to fly more, higher, longer, etc, which resulted in those dinosaurs having more grandchildren than compared to the dinosaurs that had rival genes.

@Critical Rationalist
[5:04 AM] Critical Rationalist: I agree, the “sacred text” comment was unnecessarily provocative. The passage you cite is roughly what I had in mind.

This is an error because GISTE did not cite a passage.

curi:

[6:18 AM] Critical Rationalist: Children can only learn language during a certain period of time. If they try to learn a language after a certain age, it is virtually impossible to attain full fluency. Furthermore, learning a language as an adult is incredibly effortful, whereas doing so as a child is effortless.

How do you know it's effortless for children?

The reason you think this data contradicts my view is that you don't know what my view is. You're trying to argue with it before understanding the basics. This isn't an issue we overlooked.

These data seem best explained by specialized language acquisition capacities (which only function for a limited time), not a general learning capacity.

this claim contradicts some theories in epistemology, which are in BoI, which CR hasn't learned or found any flaw in. if theory and data are incompatible you have to say "i don't know", but the data is compatible, the only issue here is the theory-violating explanation of the data seems more intuitive.

curi:

[6:30 AM] GISTE: AFAIK = as far as i know
[6:31 AM] Critical Rationalist: Lol typed it into google incorrectly

here CR thinks making an error is funny.

Measure the degree of corruption by society (however you define it) and see if it predicts the difficulty of learning language.
[9:19 AM] Critical Rationalist: Are you willing to put your money where your mouth is and make that prediction?

CR doesn't understand the things he's trying to argue with. you can't just measure that. our concept doesn't map to a measuring device. he's dramatically underestimating the complexity of the human condition by proposing (in later messages) very naive, simplistic proxies for corruption which are very dissimilar to our thinking on the matter.

more broadly he's dramatically downplaying the role of philosophy and critical thinking compared to KP and DD.

curi:

[9:54 AM] Critical Rationalist: Both my theory and his lack theoretical specificity

this comment on me comes from misreading what i actually said. he's glossing over the details and specifics of the points i made. could go through it in detail but he won't thank or reward me, or start trying to learn FI.

[10:26 AM] Critical Rationalist: There is no account of how a universal classical computer could creatively conjecture new explanations

KP gave one. P1 -> TT -> EE -> P2. Also known as "evolution" or "conjecture and refutation". that doesn't mention computers. is the problem/objection related to some imagined limit of computers? what?

Critical Rationalist:

@curi as I said, I’ll be stepping out for a while. I’ll just say one thing. Since you’re holding my words to a very high standard, it is only fair for the same standard to be applied to you.

Critical Rationalist:

The reason you think this data contradicts my view is that you don’t know what my view is.

Critical Rationalist:

Did I say that this data contradicts your view?

curi:

Freeze:

"very powerful evidence against curi's account"

Freeze:

the evidence is the data?

Critical Rationalist:

Does “powerful evidence against” mean the same thing as “contradicting”?

Freeze:

i think so

curi:

do you think that data is compatible with my account? why, then, would it be very powerful evidence against? i myself think that the data, as you present it, refutes my account.

Critical Rationalist:

I do not think data needs to logically contradict a theory to be evidence against it. The point is that you misrepresented what I said. I elsewhere explained that what I meant was that the data are better explained by an alternative model.

Critical Rationalist:

That might not be your epistemology, but you made an error when presenting my position.

curi:

in the quote, i didn't make a statement about what you said.

Critical Rationalist:

You presented my position. You said > The reason you think this data contradicts my view is that you don’t know what my view is.

curi:

since your presentation of your data does contradict my account (IMO), and you thought it was strong evidence against, and Critical Rationalism considers evidence against something to be contradicting data, and you said you were a Critical Rationalist, i made a reasonable guess given incomplete information.

Critical Rationalist:

Ok, but it was an error nonetheless

curi:

no, making a reasonable guess using incomplete information is not an error. it's a correct action.

Critical Rationalist:

You are presupposing an incorrect definition of error. Error means mistake or false statement

Critical Rationalist:

Error: “the state or condition of being wrong in conduct or judgment.”

curi:

was my conduct wrong?

Critical Rationalist:

No, your statement was wrong

curi:

was my judgment wrong, meaning i should have made a different judgment in that situation?

Critical Rationalist:

Wrong as in factually wrong, not ethically wrong

JustinCEO:

That's the very definition I would have chosen to contradict u CR

Critical Rationalist:

No, it was wrong in the sense that it was factually incorrect

curi:

so i didn't make a conduct or judgment error?

Critical Rationalist:

The first definition of wrong is “not correct or true; incorrect”

Critical Rationalist:

Your statement was incorrect, therefore it was an error

JustinCEO:

The very definition that you first chose doesn't talk about factual correctness

curi:

you're moving the goalposts

curi:

and what do you think is evidence against a theory which doesn't contradict it? how does that work?

Critical Rationalist:

Furthermore, if we accept your definition of error, then my claim that Xq28 was a single Gene was not an error: it was a reasonable guess based on incomplete information (I looked at a source which said it was a gene)

curi:

i don't agree

Critical Rationalist:

That’s beside the point

Critical Rationalist:

The point is, your statement was incorrect. It was an error

curi:

how did you manage to find a source that's worse than wikipedia or reading link previews on google?

JustinCEO:

CR imho u r scrambling badly while trying to catch curi out

curi:

seems like an error

JustinCEO:

You should be less adversarial

curi:

and why did you double down on it by making a claim re differing definitions of gene while being unable to provide any definitions?

JustinCEO:

Night

Critical Rationalist:

I’m not revisiting it in detail

curi:

i think if i restate something you communicated, and then you call it factually false, the error is yours for communicating it, not mine for talking about your views in terms of what you said.

curi:

further, you're claiming i'm factually wrong but have yet to explain the real state of affairs, as you claim it to be, which differs from what i thought it was.

Critical Rationalist:

Then the same is true for you

JustinCEO:

There he goes again

curi:

i haven't yet explained that Xq28 is more than one gene?

Critical Rationalist:

You accused me of not understanding your view. In that case, the fault is yours

Critical Rationalist:

If we use the same standard

curi:

where did i miscommunicate?

Critical Rationalist:

Where did I miscommunicate?

curi:

i told you where i got my interpretation of your position. you have yet to point out any error in my way of reading.

curi:

did you forget?

Critical Rationalist:

I did not forget. It is a rhetorical question. I do not believe that I miscommunicated

curi:

i gave an account which you have not responded to

curi:

so that's an asymmetry

curi:

asking where you miscommunicated, while remembering that i already told you and it's pending your reply, is unreasonable

Critical Rationalist:

I believe that data can decide between two theories when one theory predicts it, but the other does not.

Critical Rationalist:

That is not the same as the data contradicting the latter theory, but it does constitute evidence against it

curi:

i think you're too tilted to continue, and are just trying to win a pedantic point to save face because you lost a bunch of points, and that you can't actually win but are just going to keep throwing nonsense at me without regard for the quality of your arguments, and this is an impasse.

Critical Rationalist:

No, I just explained what I mean by evidence being against a theory without contradicting it

Critical Rationalist:

Which is what you asked for.

curi:

asking where you miscommunicated, while remembering that i already told you and it's pending your reply, is unreasonable

curi:

among many other things

Critical Rationalist:

Alright, this will actually be my last comment. The reason i did this little exercise is because your own accusations of errors are levied against me when they were clearly good faith misunderstandings. For example, I admitted that I mistyped something into google and you called this an error. I am showing why that approach is problematic. I think you’re projecting. You accusing me of being combative is odd coming from someone who criticized me for mistyping something into google.

curi:

asserting they were "clearly good faith" is an unreasonable way to speak to me. you can't reasonably expect me to agree with that.

Critical Rationalist:

Do you think I mistyped something into google in bad faith? (I was referring to the errors you pointed out in your volley, eg when I mistyped something into google, or when I said giste “cited” something when he only alluded to it. Those were clearly not in bad faith.)

curi:

someone who criticized me for mistyping something into google.

i didn't do that. you're lost b/c you keep misreading things and getting facts wrong. then you build conjectures using those errors.

curi:

Alright, this will actually be my last comment.

false

curi:

(I was referring to the errors you pointed out in your volley,

you didn't specify a limit on which errors from today you meant.

curi:

i was criticizing you for laughing, not for the typo.

curi:

i was criticizing your attitude not your mistyping. again you're too tilted, incompetent or whatever to read.

curi:

that's common and fixable if you want to work at improving. it takes effort to gain skills. but it doesn't sound like you want to make progress.

curi:

re epistemology, does he mean that observing my desk is powerful evidence against evolution, which did not predict it? or only if i propose a theory of intelligent design which includes a prediction of my desk?

curi:

i wonder why he thinks "effortless" learning doesn't contradict my model. does he know that contradicts Popper?

curi:

he thinks my model merely fails to predict that some learning will be effortless? odd misconception.

curi:

conjecturing and refuting is effort.

curi:

there's no actual data that anyone learned anything effortlessly.

curi:

he was ignoring that my model interprets the data differently

curi:

[1:20 PM] Critical Rationalist: Let me try to spell out the contradiction with a concrete example

curi:

there's also the dictionary meanings

curi:

I'm not contradicting you, I'm just saying you're totally wrong. - CR, 2020

curi:

A general learning capacity would work equally well through the life span, but language acquisition works optimally during a particular period of life

isn't he saying: curi's model would predict X, but the data is Y. isn't he referring to contradiction?

curi:

i still read this as a misprediction issue where my model allegedly differs from empirical reality, and i think he was being dishonest to try to catch me in an error.

curi:

he wasn't talking about something where my model has no predictions, so that was an unreasonable elaboration. he gave a case which, besides the direct problems with it, doesn't apply here.

curi:

he had just stated a prediction himself (which is correct as a first approximation, though fails to consider some factors)

curi:

it was a poor claim about what my model predicts, but he did make such a claim and contradict it.

curi:

right after mentioning something, which i highlighted, that does contradict my model (the idea of effortless learning, which tbh i don't think any serious school of thought claims).

curi:

i don't think he thought his point through beyond his initial statement that he hadn't said contradict, and i said contradict

curi:

but he wasn't even paying enough attention to notice i didn't say he said that word.

curi:

i was describing his thinking, not making statements re his word use

curi:

note that none of the errors he made were rescuable by saying e.g. "oh i was speaking loosely, and reasonably, and meant..."

curi:

no additional clarifications of his statements would help them

curi:

they were actually wrong

curi:

it wasn't stuff like typos where he'd say "oh i didn't mean that, that text doesn't represent the ideas in my head perfectly"

curi:

they were all substantive thinking mistakes

curi:

he's partly trying to smear my criticism by making low quality criticism and then calling it parallel.

curi:

i wasn't trying to hurt him by correcting him about several things in a row. in retrospect i did hurt him. i avoided those sorts of corrections for quite a bit of discussion b/c i know most ppl dislike them and can't handle them, and he broadcast plenty of the usual signs that he would dislike it. however, he kept pushing me in picky ways, trying to get more details, etc. he was basically bluffing aggressively by pretending he wanted that sort of discussion to pressure me. he thought it was a game of chicken whereas, actually, i simply can discuss carefully and rigorously.

curi:

he pretended he was OK with it at first, and pretended it had been successful, but after these later comments he clearly wasn't.

curi:

he interpreted correction re social status and wanted to do this back to me:

curi:

He had turned to go. Francon stopped him. Francon’s voice was gay and warm:
“Oh, Keating, by the way, may I make a suggestion? Just between us, no offense intended, but a burgundy necktie would be so much better than blue with your gray smock, don’t you think so?”
“Yes, sir,” said Keating easily. “Thank you. You’ll see it tomorrow.”

curi:

  • FH

curi:

but i didn't want to let him b/c he accused me of an intellectual error instead of using something unimportant to save face with

curi:

he wanted to save face in a more substantial way that denied the meaning of what had happened, as well as detracted from my intellectual reputation, whereas Francon didn't do that, he was just saying he's not a total pushover and he's still the boss.

curi:

both of which are true

curi:

anyway i didn't offer him a way out where he gets to be a competent person capable of rigorous intellectual discussion with an adequately low error rate to make progress. i don't think he's there yet. but he's too attached to already being there to try to fix it, so he's maf.

curi:

by trying to tear me down he was trying to show my criticisms were trivial and unimportant, no one is immune to that standard of pedantry, no one lives up to the standards of competence i propose, etc.

curi:

but when he tried to have that discussion, he was tilted to the point of making a lot more errors than before

curi:

and his judgment of what point he could safely win was grossly unreasonable

curi:

b/c he wasn't updating his thinking regarding the new info he had. he just kept trying to do what worked in the past.

curi:

sadly his career is posturing and social climbing re this stuff, he's really invested in that game

curi:

mb he'll come back and say i'm making erroneous assumptions, he's going to be a rich socialite, the phil MA with TA work is just a hobby

curi:

the thing i was actually trying to communicate re his thoughts was something i thought his perspective (as judged by his msgs) was not taking into account.

curi:

when he said Xq28 is a gene, and doubled down on it, he was trying to say it is in fact a single gene. which is wrong.

curi:

he was saying this in service of his claim that he was speaking strictly correctly

curi:

he chains his errors together – defending each with a new one

curi:

they aren't random. they're systematically biased

curi:

ppl don't like being outclassed. it's so fukt. i did like it when i talked with DD initially.

curi:

he's still in school and i've been a professional philosopher for a long time, and i have the best education/credentials in the field, but he can't take losing to me. he can only take (maybe) losing to ppl who he perceives as higher social status than he perceives me.

curi:

he did not discuss his social status judgments and their accuracy or relevance

curi:

the alleged asymmetry re AWALT and evo psych was interesting

curi:

i gave a short statement which he didn't accept. he gave one that i didn't accept.

curi:

the asymmetry was that i accepted that he hadn't accepted mine, and talked about how to solve this problem, how to make progress, what can be done. meanwhile, he did not accept that i hadn't accepted his.

curi:

so his ideas are better than mine because he denies reality.

curi:

he repeatedly tried to invoke this asymmetry, as if i'd accepted his examples in some significant way, when i hadn't.

curi:

he like couldn't face that his short, simple summary info was not convincing to me.

curi:

it works on everyone else!

curi:

despite the fact that he doesn't know the basic facts of the topic

curi:

which are, in his experience, not relevant to getting most ppl to agree that he's clever.

curi:

he thinks everything in evo psych is readily testable. but how would you test whether being more attracted to men in general leads to more children? survey questions will not measure degrees of attraction accurately. how does anyone know how their attraction levels in their head, on average, compare to those of other people? his general policy, which we saw re measuring mental corruption, was just use terrible proxies to measure things cuz testing > not testing.

curi:

it's bad enough trying to survey to accurately measure a mental state that we have no good way to quantify. it's much worse trying to get people to make relative comparisons between their mental states and other people's non-quantified mental states.

curi:

when we quantify attraction normally we do it relatively to our own experience. i was much more attracted to sue than sarah.

curi:

ppl will pick words to communicate. they will say "i am super attracted to Nadalie". but this reflects 1) relative comparisons to their other attractions 2) social incentives to brag about this, play it up or down, etc. 3) how much they use strong terms in general. and, ok, 4) some crude estimates re behavior. e.g. they were willing to put effort into a date, so they should be using stronger language than someone who isn't putting in effort. roughly like that.

curi:

these behaviors are affected by tons of factors other than attraction.

curi:

including: attraction can result in putting in less effort b/c of playing hard to get

curi:

this also all neglects different types of attraction. treats it as a single trait which it's really not.

curi:

this was covered in BoI re happiness

curi:

The connection with happiness would still involve comparing subjective interpretations which there is no way of calibrating to a common standard

curi:

etc

curi:

So how does explanation-free science address the issue? First, one explains that one is not measuring happiness directly, but only a proxy such as the behaviour of marking checkboxes on a scale called ‘happiness’. All scientific measurements use chains of proxies. But, as I explained in Chapters 2 and 3, each link in the chain is an additional source of error, and we can avoid fooling ourselves only by criticizing the theory of each link – which is impossible unless an explanatory theory links the proxies to the quantities of interest. That is why, in genuine science, one can claim to have measured a quantity only when one has an explanatory theory of how and why the measurement procedure should reveal its value, and with what accuracy.

curi:

but he reads BoI, likes it, doesn't notice it contradicts a field he likes, doesn't notice the field in general has no rebuttal, and then is surprised when a DD colleague doesn't make concessions re his claims about it

Critical Rationalist:

There is a lot to talk about in your last volley, including some very important issues related to philosophy of science. May is when my upcoming semester in grad school ends. When I come back, I might return to those issues.

But there is one distinction I want to make. It will be helpful when you and I have future conversations. There is a difference between not addressing something and refusing to address something. For example, you said “he did not discuss his social status judgments and their accuracy or relevance”. This is me not addressing something. I agree that there are things I did not address.

However, this is normal. For example, here is one question of mine that you never answered:

The link between the theory and prediction is called an auxiliary assumption. Do you know what an auxiliary assumption is?

Make a theory, use some auxiliary assumptions (you still have not indicated if you understand what these are) to form predictions, then test the predictions.

Now, if I had failed to answer a question two times in a row, you would have been very critical of me. But again, that is still just not addressing something. When you failed to answer my question about auxiliary assumptions, I decided to be charitable and assume you had just not gotten around to it (you are free to answer now if you want). I would never criticize someone for simply not addressing something (as you did with the auxiliary assumption question). In a conversation this complex, people will sometimes get sidetracked, or other things happen.

It is not reasonable to condemn someone for not addressing something. What is reasonable is to expect people to not flatly refuse to address something. A blanket refusal to answer a question (i.e. a statement to the effect of “no, I will not answer your question”) is a hindrance to progress in a conversation. Crucially, at no point did I do this.

jordancurve:

It is not reasonable to condemn someone for not addressing something.

Unless I missed it, you didn't quote anyone doing this.

curi:

https://my.mindnode.com/tvuTuLmRpf7YbREDvBAhKDoFvi4wkBcPfDXje3bB @Critical Rationalist (should work on desktop. if on android, ask for a pdf export. if on ios, download the free mindnode app and open in that)

jordancurve:

I think it would be clearer to refer to him as CRist and reserve CR for critical rationalism.

curi:

did i refer to him as CR?

curi:

oh the title

curi:

i didn't even think of the filename as something that would be shared

curi:

it's not part of the tree

Critical Rationalist:

I was trying to explain that evo psych makes testable predictions. How would it help my case if Xq28 were a gene instead of a series of genes? I grant that it is a set of genes. Does that show that evo psych is not making testable predictions? If not, what does the fact that Xq28 is a set of genes show?

curi:

that is non-responsive to BoI c12

curi:

it's also non-responsive to the biased errors problem

Critical Rationalist:

How is it a biased error?

Critical Rationalist:

Does this error favour my side?

curi:

it says how in the tree

Critical Rationalist:

@curi did you understand my distinction between "not responding" and "refusing to respond"?

curi:

yes

Critical Rationalist:

I read the purple part of the tree.

Critical Rationalist:

I did say when explaining the evo psych theory that it talked about a specific gene. It in fact was about a set of genes. But that is still a testable prediction. It doesn't help my case to say it is one gene: saying "a set of genes" is still a testable prediction.

Critical Rationalist:

that is non-responsive to BoI c12

Critical Rationalist:

I agree. I haven't responded to that yet, just like you have not responded to the auxiliary hypothesis question. Note again the difference between "not responding" and "refusing to respond".

Freeze:

I think non-responsive in this context means something more like, This doesn't address the arguments that criticize it or offer better explanations

curi:

@Critical Rationalist did you delete messages from the log?

Critical Rationalist:

I deleted one of my messages that said "my last mistake"

curi:

Please don't delete anything here

Critical Rationalist:

Sounds good

Critical Rationalist:

I await a response to my above messages.

curi:

https://elliottemple.com/debate-policy

Critical Rationalist:

Since @curi has shared that tree here, I will say what I said in "Slow". I was trying to explain that evo psych makes testable predictions. I said this to @curi

Critical Rationalist:

I did say when explaining the evo psych theory that it talked about a specific gene. It in fact was about a set of genes. But that is still a testable prediction. It doesn't help my case to say it is one gene: saying "a set of genes" is still a testable prediction.

Critical Rationalist:

@curi has not responded in "slow". So I'll ask the question again here.

Critical Rationalist:

How would it help my case if Xq28 were a gene instead of a series of genes? I grant that it is a set of genes. Does that show that evo psych is not making testable predictions? If not, what does the fact that Xq28 is a set of genes show?

jordancurve:

How would it help my case if Xq28 were a gene instead of a series of genes?

It would help the case that you are familiar enough with the topic to discuss it without making blatantly false statements.

jordancurve:

Does that show that evo psych is not making testable predictions?

No, that's in BoI ch. 12.

jordancurve:

what does the fact that Xq28 is a set of genes show?

See above.

Critical Rationalist:

@jordancurve Does it have any relevance to my claim that evo psych makes testable predictions? What matters is not how familiar or smart I am, what matters is the ideas I put forward.

jordancurve:

Does [the fact that Xq28 is not a gene] have any relevance to my claim that evo psych makes testable predictions?

jordancurve:

Not that I know of.

Critical Rationalist:

The claim that evo psych makes testable predictions is what I was arguing for.

Critical Rationalist:

So you don't know of any way that my error was relevant to that^ claim.

jordancurve:

No, and I don't think anyone said your error was relevant to that claim.

Critical Rationalist:

In slow, this conversation happened

Critical Rationalist:

I asked this:

Critical Rationalist:

How is it (my gene mistake) a biased error?
Does this error favour my side?

Critical Rationalist:

@curi said this

Critical Rationalist:

it says how in the tree

jordancurve:

Indeed.

Critical Rationalist:

That was a direct response to me.

Critical Rationalist:

So, he thinks that this error favours my side.

jordancurve:

Yes.

jordancurve:

One of your "sides", to be more precise.

Critical Rationalist:

Please explain.

jordancurve:

It says so right in the purple node of the tree!

jordancurve:

Do you want to try to re-read it once more before I explain it?

Critical Rationalist:

But it does not favour my side in the sense that it shows that evo psych is testable.

jordancurve:

No it doesn't, but no one (except you?) thought it did

Critical Rationalist:

Xq28 is a set of genes. Granted. Does that mean evo psych isn't testable?

Critical Rationalist:

Does that count against my claim that evo psych is testable?

jordancurve:

I think I answered this earlier. No. That argument comes from BoI ch 12

Critical Rationalist:

Good.

jordancurve:

Not that I know of, but I'm no expert.

curi:

@jordancurve check IMs

Critical Rationalist:

So my error (claiming that Xq28 is a single gene, instead of a set of genes) does not count against my argument that evo psych is testable.

jordancurve:

Again, not that I know of.

Critical Rationalist:

The Boi chp 12 argument is an interesting argument, one that I'm willing to answer.

jordancurve:

It counts against your claim that you didn't make any errors.

Critical Rationalist:

Yes 100%

Critical Rationalist:

But surely, what matters is not me, but the ideas I'm putting forward.

jordancurve:

If you make a claim about yourself, then you matter.

Critical Rationalist:

We all agree, don't we, that the ideas are what matter?

Critical Rationalist:

Yes, I've retracted that claim.

JustinCEO:

Truth is what matters. Errors lead one away from truth and have to be dealt with in a serious and systematic way in order to get at the truth effectively. Concessions and retractions of errors are not a serious and systematic solution to the thing giving rise to the errors in the first place. The errors CR has made in the discussions with curi are not mere unavoidable byproducts of human fallibility and will sabotage making discussion progress if not rigorously and thoroughly addressed

curi:

https://curi.us/2190-errors-merit-post-mortems

Critical Rationalist:

"Second, an irrelevant “error” is not an error... The fact that my measurement is an eighth of an inch off is not an error. The general principle is that errors are reasons a solution to a problem won’t work."

Critical Rationalist:

That's from @curi's post.

Critical Rationalist:

So, by his standard, this error has to be relevant. It has to be "a reason a solution to a problem won't work". Why does my error qualify as relevant in @curi's sense?

jordancurve:

It's relevant to your claim about not having made an error.

curi:

you don't understand the standard in the post. this is another example of the same kind of lack of rigor that the xq28 error was

Critical Rationalist:

"The small measurement “error” doesn’t prevent my from succeeding at the problem I’m working on, so it’s not an error."

Critical Rationalist:

The problem I was working on was showing that evo psych is testable

curi:

is "is Xq28 a gene?" a problem?

Critical Rationalist:

It was not the problem I was working on, no.

curi:

when i asked that question, and you answered, you were not working on that problem?

Critical Rationalist:

The problem I was working on was "is evo psych testable"

Critical Rationalist:

Not on the problem "is Xq28 a gene".

Critical Rationalist:

That is not a problem I'm working on.

jordancurve:

!

curi:

so your answer that it's not a gene was not an attempt to solve the problem "is Xq28 a gene?"?

JustinCEO:

Problems have subproblems and you can make mistakes at the subproblem level and that affects your ability to claim you have solved the higher level problem

JustinCEO:

Like if I make an addition error in a complicated mathematical expression

JustinCEO:

Boom answer wrong

Critical Rationalist:

No, it was an attempt to solve the problem of whether evo psych is testable. I try to answer all questions when having a conversation about a topic.

Critical Rationalist:

So, by your standard, the gene mistake does not qualify as an error.

Critical Rationalist:

Now look. I don't care what you call it.

Critical Rationalist:

Error, mistaken definition, whatever

JustinCEO:

Hang on nobody's conceded

So, by your standard, the gene mistake does not qualify as an error.

Critical Rationalist:

I was trying to argue that evo psych was testable.

Critical Rationalist:

That is the problem we were trying to solve.

JustinCEO:

Don't try to move on before that gets thoroughly resolved

Critical Rationalist:

The problem I was trying to solve was whether evo psych was testable.

Critical Rationalist:

Whether Xq28 is one gene or many genes does not affect THAT^ claim.

curi:

you clearly don't understand what the post means re problems and problem solving. so you haven't understood the standard in the post. that would be ok if you weren't then trying to use your misunderstanding as a bludgeon to win a debating point.

Critical Rationalist:

@curi the post does not define the term "problem" or "problem-solving". The word "problem" only occurs twice.

jordancurve:

It's written for people familiar with CR

Critical Rationalist:

The problem that I was trying to solve was this: "is evo psych testable".

Critical Rationalist:

I am familiar with CR

JustinCEO:

Why didn't CR ask something like "Ok then what am I missing?" re: the post and curi's comments about not understanding the standard

Critical Rationalist:

Because sometimes when I ask @curi a question he refuses to answer.

Critical Rationalist:

But I will try with this one, since you've recommended that I do so.

curi:

http://fallibleideas.com/problems

curi:

among many other things. your denial of subproblems or working on multiple problems at once is contrary to the mainstream, quite bizarre, and not something you can expect to be covered preemptively.

curi:

anyway you interpreted something i wrote, using your intellectual framework assumptions, to conclude basically that i was contradicting myself. the more reasonable conclusion is different framework.

JustinCEO:

Ya I found the replies in that vein shocking

JustinCEO:

Shocking re:

among many other things. your denial of subproblems or working on multiple problems at once is contrary to the mainstream, quite bizarre, and not something you can expect to be covered preemptively.

curi:

among many other things

i meant that the link is one of many pieces of literature.

curi:

I am familiar with CR

right you were familiar enough with CR to know that a question is a type of problem, but some of your other comments had nothing to do with CR

jordancurve:

Because sometimes when I ask @curi a question he refuses to answer.

Yesterday you made a similar claim ("When you [curi] don't answer a question, it makes you look bad") and yet, when challenged, you were unable to quote a single question that curi didn't answer. Has that changed?

Critical Rationalist:

When I first heard about this group, I was excited to talk with other people who were familiar with Karl Popper. Despite being in a master's program in philosophy, I rarely encounter people who know his work closely. But the quality of discourse is on the whole negative (though there have been some exceptions). You have been obsessing over the fact that I said Xq28 is one gene instead of many genes, despite the fact that it is not relevant to the problem I was trying to solve (is evo psych testable). @curi will criticize me for failing to address things (despite the fact that I try my very best to answer every question). When it is pointed out that everyone (including him) sometimes fails to address things, he ignores it. For example, this is the fourth time I have prompted you to answer this question: "do you know what an auxiliary hypothesis is?" And as I have already pointed out several times, when I challenged him to provide a testable prediction that followed from his theory, he refused to do so. He claims that the claim "no women are immune to PUA" has been subject to empirical tests. However, in order to be an empirical test, it has to be a genuine attempt at falsification. I read @curi's most recent volley on this topic. What a Popperian should be able to say for his theory is this: "if we observe X, then the theory is falsified". In the case of Einstein, he could answer this question concretely: if we see the starlight here, then the theory is falsified. I could do this for evo psych: "if male homosexuals do not invest more in their nieces and nephews, then the theory is falsified".

curi:

you aren't using this method or proposing a different one https://curi.us/2232-claiming-you-objectively-won-a-debate

Critical Rationalist:

@curi said that "any number of PUA attempts failing on her is compatible with AWALT. that doesn't mean those failures would be meaningless. we'd try to come up with explanations of the data." This is exactly the strategy that Marxists and Freudians used (which Popper criticized). When Marxist and Freudian predictions did not come true, they would explain away the apparent falsification. They would systematically protect their theory from refutation. The way to avoid doing this is to specify in advance what observations would count as falsification. @curi has not said what observations would count as falsification. Until he does so, he cannot claim that his theory is testable in a Popperian sense.

Critical Rationalist:

This forum is no longer worth my time. I will be deleting my account. If any of you want to contact me for one on one discussion, please email me at davidangus1996@gmail.com

jordancurve:

jfc

curi:

[redpill] rationalization hamster

curi:

he doesn't want to debate to a conclusion in an organized way. he just wants to declare victory and hide.

jordancurve:

C R, you didn't have to go out like that!

Critical Rationalist:

It is too bad. I heard from people who were glad I had joined this group.

curi:

after conceding he made a bunch of errors, and never establishing any error by me, his conclusion is not "wow someone who is better at not making errors than me, amazing!" (which is a part of how i reacted to DD initially), it's just to ignore all the objectively established facts and be [redpill] solipsistic

Critical Rationalist:

I had moments where I enjoyed it too.

Critical Rationalist:

But it is no longer worth my time.

curi:

got any paths forward to go with that?

curi:

if you're wrong, how will you find out?

Critical Rationalist:

Yes, finish my master's degree in philosophy (where peer review is a part of the process of writing, so errors are caught), and then pursue a doctorate. That is my path forward. I thought this would be a fun outlet. I was wrong.

Critical Rationalist:

I'm not directing this at anyone personally. You are all free to email me with questions or discussion topics.

curi:

that's not a path forward

JustinCEO:

How will you find out if you're wrong about your judgment of this group and whether it's worth your time

JustinCEO:

Why not try discussing a small discrete and less controversial issue to conclusion instead of giving up totally

Critical Rationalist:

I'll have to live with that. I have ways of spending my time that I know are productive.

jordancurve:

That doesn't sound very critical rationalist.

Critical Rationalist:

My hypothesis that this group is a good use of my time has been falsified by the evidence.

jordancurve:

lol sigh

curi:

there are arguments that the ways you're spending your time are not only unproductive but counter-productive. you have not refuted them nor cited any refutation, but wish to ignore them with no way to fix it if you're wrong.

jordancurve:

Well, C R, I wish you would just take a break. Don't delete your account. Maybe you'll want to say something else some day. Why not leave the option open.

jordancurve:

Okay, we have your email if we want to contact you in the mean time.

jordancurve:

Like people say "delete your account" but I've never seen someone actually do it.

curi:

[2:20 PM] Critical Rationalist: The Boi chp 12 argument is an interesting argument, one that I'm willing to answer.

I guess that was a lie?

JustinCEO:

😦

curi:

his parting shot included further statements ignoring the existence of those arguments

curi:

as if the state of the debate was me not answering him, rather than us waiting for his answer

curi:

he seems to be criticizing me for admitting duhem-quine applies to AWALT, on the implied basis that he doesn't think it applies to evo psych. he should read more Popper!

curi:

you will notice he has no solutions

curi:

no ideas about how to solve this problem

curi:

no reading recommendations to fix us

curi:

no discussion methodology documents he thinks we should try using

curi:

popper says we can learn from each other, despite culture clash, by an effort.

curi:

but he just gives up with ppl who are willing to try more and in fact are bursting at the seams with dozens of proposed solutions

curi:

but he won't read ours nor suggest his own

curi:

that's a big asymmetry

jordancurve:

My hypothesis that this group is a good use of my time has been falsified by the evidence.

Come on. Really? He has to know, when he's not tilted, that evidence admits of multiple interpretations. Observations are theory-laden.

curi:

that's a bitter social comment which means "these guys aren't adequately falsificationists like real CRs"

jordancurve:

He didn't even seem to try to establish that the rival interpretations of the evidence were false.

JustinCEO:

"fun outlet" sounds like maybe he wasn't expecting tons of pushback and crit, given conventional views on what's fun

jordancurve:

*any rival interpretations

curi:

that's one of his main rationalizations to preserve his pretense of self-esteem

curi:

he didn't quote any unfun msg

curi:

he wanted to use unsourced paraphrases to attack msgs

curi:

[redpill] nothing personal, teehee

JustinCEO:

What are the [brackets] doing there exactly

curi:

tagging the msg. i'm gonna write a blog post to explain

JustinCEO:

Okay 👌

curi:

expressing a redpill perspective is different than expressing something i fully agree with

JustinCEO:

Ah

curi:

but i think worthwhile to consider

curi:

a little like /s is not your usual voice

JustinCEO:

Rite

curi:

@curi has not said what observations would count as falsification. Until he does so, he cannot claim that his theory is testable in a Popperian sense.

does he not know enough about BoI c12 to know that's covered there?

curi:

if so, why did he say BoI c12 is interesting and he'd be willing to answer, as if he knew what it said?

GISTE:

CRist makes a particular mistake repeatedly. He thinks that an interpretation of data using one theoretical framework can be used as evidence contradicting another theoretical framework. He did this a bunch in the discussion about the BOI model of the human mind, and in the discussion about PUA/AWALT. we tried to explain his error many times, but he did not get it, nor did he ask about it, nor did he criticize it.

curi:

think he'll learn about and fix the error from his MA + the peer review process?

GISTE:

well those things are not focussed on finding and fixing mistakes, so i'd guess no.

GISTE:

if he did learn about and fix that error, it would be despite his MA + peer review process, not because of it.

curi:

https://curi.us/2278-second-handedness-examples#15054

curi:

there was something else he said about other ppl telling him to join or msging him about his participation here but i didn't find it when searching

curi:

https://curi.us/2279-red-pill-comments#15055

curi:

OT C R dared claim familiarity with red pill and PUA while not knowing what a neg is, or AWALT, or a bunch of other standard terms

curi:

similar to how he didn't finish either of DD's books but initially presented himself as a knowledgeable fan

curi:

he has really low standards for knowing about something

curi:

shit test? mystery method? AFC? no? what have you heard of? no answer.

JustinCEO:

think he'll learn about and fix the error from his MA + the peer review process?

Peer review in fields like Philosophy is currently more about signaling a certain sort of conformity in language and method than it is about error correction

JustinCEO:

And also

JustinCEO:

There's political stuff like eg:

Metaphysics, traditionally a highly abstract and impractical area of inquiry, is the area of philosophy that has had perhaps the most high-profile political scuffles in the past few years. This is because there are significant political overtones to questions about the nature of race and ethnicity, or the nature of sex and gender. The Hypatia affair, which I wrote about for this magazine two years ago, crystallized many of the dynamics surrounding these issues. My contention is not that questions about race/ethnicity and sex/gender are improper for philosophical inquiry, but that philosophical inquiry is threatened by the political fervor that surrounds these questions. In the debates between gender-critical feminists and their detractors (who call them “Trans-Exclusionary Radical Feminists”), for instance, it is often taken as a given that the political demands of feminism should determine our views on the metaphysics of sex and gender; at issue is which version of feminism is given pride of place.

JustinCEO:

https://quillette.com/2019/07/26/the-role-of-politics-in-academic-philosophy/

curi:

sex, gender, race and ethnicity are not metaphysical issues

curi:

philosophers so confused

curi:

[1:58 PM] Critical Rationalist: I agree. I haven't responded to that yet, just like you have not responded to the auxiliary hypothesis question. Note again the difference between "not responding" and "refusing to respond".

curi:

[2:05 PM] Critical Rationalist: I await a response to my above messages.
[2:07 PM] curi: https://elliottemple.com/debate-policy

curi:

When it is pointed out that everyone (including him) sometimes fails to address things, he ignores it. For example, this is the fourth time I have prompted you to answer this question: "do you know what an auxiliary hypothesis is?"

curi:

i did answer right there

curi:

not the first time he confused 1) not liking my answer 2) me not answering


Elliot Temple | Permalink | Messages (9)

Confusion About Overreaching

I'm sharing this chatlog because if you feel like you're suppressing/repressing to avoid overreaching, something is going wrong. Don't accept that; there's a problem there. (This is from the Fallible Ideas Discord which you can join.)


Freeze: Does overreaching get in the way of you doing what you want to do, or do your wants mostly follow your understanding of overreaching?
Freeze: One thing I've been thinking about is... If someone learns rationality and reason, does that mean they would rarely if ever desire things that would be overreaching?
Freeze: Is the general regret or disappointment I feel at not being able to discuss interesting topics a symptom of irrational ideas I've learnt?
Freeze: In the sense that if I had learnt rationality better, I would find the simple stuff interesting because I'd know that it's required for the more complex stuff
Freeze: So if I find the grammar boring, it might be a sign that I'm not reasoning well
curi: overreaching isn't about goals but methods. you can work towards SENS/immortality, for example, without overreaching, by taking low error rate steps to work on the project.
Freeze: Right.
Freeze: And as part of a well reasoned process to progress SENS, doing something like analyzing sentences wouldn't feel offtopic. It would feel like part of the topic, if one is rational
curi: managing your error rate is your best chance to succeed at a big, hard project. it doesn't take anything away from you. there isn't a downside.
Freeze: So if I'm feeling bad about it, something's going wrong in my reasoning where it seems like a downside even if I logically know it isn't
curi: sentences are really important and useful and people who don't have enough mastery of that tool ought to work on it, ya
Freeze: Right
Freeze: So I need to learn to convince myself so that I'm wholeheartedly doing things like grammar in a way that it's interesting
curi: dealing with questions is another big tool. i posted to FI about it today
JustinCEO: for me grammar stuff was pretty clearly on topic for various things
JustinCEO: first of all i actually have inherent interest in grammar
JustinCEO: i think it's fun, on its own, without needing to justify it somehow
JustinCEO: but also, i like to write stuff, and am a lawyer, heh
Freeze: When I find discussing epistemology more fun than something like grammar, it seems like I'm operating on bad ideas rather than good ones. I don't know how exactly to go about changing those ideas so that grammar becomes more fun first
curi: yeah i developed some interest in grammar too cuz i've written a lot
curi:

Is the general regret or disappointment I feel at not being able to discuss interesting topics a symptom of irrational ideas I've learnt?

what can't you discuss?
Freeze: I find a lot of things inherently interesting, and I tend to get dragged along by whatever is happening in the moment
Freeze: like pasta discussions or cheese
Freeze: Well some discussions would be overreaching
JustinCEO: i don't think you've gotten crit re: food discussions
curi: i don't think the pasta was a reply to me
Freeze: Although I liked the post someone wrote on FI that said something like, This system is designed so that you should never have to discuss less than you usually do and it involved stuff like labelling overreaching
Freeze: and labelling confident statements
JustinCEO: btw i found food an especially easy topic to learn something about
Freeze: well what I meant by that J is that I don't seem well in control of what I find interesting
curi: yeah cooking with recipes is very learnable field. lots of tutorials and shit.
JustinCEO: one thing that helps is that there are tons of people making detailed instructions which include videos and pictures
Freeze: And it's weird that I can find pasta/cheese inherently interesting sometimes, but not grammar
curi: did you read my essay?
Freeze: Maybe because the grammar becomes this obstacle rather than an inherently interesting topic
curi: ppl have preconceptions about what grammar is like
Freeze: Only some of it curi, like the first half
JustinCEO: grammar has skool connotation
curi: and my essay is pretty atypical
JustinCEO: skool is cancer for interests
Freeze: I'll read through it tonight. It seems like when I put something up as a barrier to doing something else, it becomes less interesting
JustinCEO: well if u think of stuff as a barrier
Freeze: like I love vegetables today, but as a kid I disliked them, maybe because they were compulsory or a barrier to eating better tasting food
JustinCEO: that means u are not convinced it is necessary
JustinCEO: to do X well
Freeze: Right, or maybe it means I want to do X poorly
Freeze: for some reason
JustinCEO: so you have some disagreement with ppl saying u should do the thing
JustinCEO: or yeah
JustinCEO: right
JustinCEO: u could want to
JustinCEO: e.g.
Freeze: like maybe I think doing X poorly would be more fun than doing grammar well
JustinCEO: social chit chat
JustinCEO: about
JustinCEO: X
JustinCEO: instead of actually do something meaningful with it, learn about it seriously
JustinCEO: i have that issue
Freeze: It's weird but I seem to find failing at CR discussion more fun than succeeding at grammar discussion. But maybe I should try more grammar discussion since I haven't really had much aside from that one comma splice exchange
Freeze: social chit chat is fun, and feels like learning sometimes
Freeze: like when you talk about food
Freeze: or legal stuff
Freeze: I remember something DD wrote about conversation being one of the best learning methods
Freeze:

One cannot make many such investments in one's life. I should say, of course, that the most educational thing in the world is conversation. That does have the property that it is complex, interactive, and ought to have a low cost, although often between children and adults it has a high cost and high risk for the children, but it should not and need not.

Apart from conversation, all the complex interactive things require a huge initial investment, except video games, and I think video games are a breakthrough in human culture for that reason.

Freeze: https://www.takingchildrenseriously.com/video_games_a_unique_educational_environment
JustinCEO: I think it's important to separate the issue of conversation being a good learning method (it is) from the issue of valuing not-particularly-serious conversation over other ways to spend your time that would actually be more productive/helpful for learning and life
Freeze: I have been excited to read The Goal every night, which was interesting to note and observe in myself
Freeze: The story was cool
JustinCEO: i liked The Goal
Freeze: Reading books sometimes seems like a conversation with the author
JustinCEO: well it's not interactive so that's a difference
JustinCEO: you either have to do a bunch of self-discussion or talk about the book with other knowledgable ppl
Freeze: Right, although I find myself asking a lot of questions of the book, to myself
Freeze: Which is self-discussion I guess
JustinCEO: peikoff knew much more of Rand than is in her books
JustinCEO: and Rand knew more of Rand than is in Peikoff but she dead, and Peikoff dead soon :frowning:
curi: @Freeze re overreaching, whatever you're interested in but don't think you should work on, i suggest you make a project planning tree where you clearly lay out the interest, the things you think it'd take to succeed at it, the prerequisites or components of those and so on down the hierarchy a ways. you will then see specifically 1) what skills, tools, resources, etc. you think you're missing before you do X 2) how those things relate to X, what the chain of connections is. and then you can critically consider it, share it, etc., to maybe find out about errors, alternative learning paths, etc.
curi: if you don't care about something np, but if you have regret or negative feeling, it's worth investigating and getting clear in your mind what you think is in your way and why.
Freeze: ty curi
curi: this works somewhat as an example: https://my.mindnode.com/p3ZX6Py8iVnutKEbf9NSnyocjDs1MMERUdg8Qozk

that + more nodes + label which nodes are done/not-done = much clearer idea of what's standing in the way of building a skyscraper


Here's the FI post about asking questions. Note: you can join the FI email discussion group to read emails like this.

Here's the skyscraper related project planning tree as a PDF permalink.


Elliot Temple | Permalink | Messages (3)

Project Planning Discussion

This is a discussion about rational project planning. The major theme is that people should consider what their project premises are. What claims are they betting their project success on the correctness of? And why? This matter requires investigation and consideration, not just ignoring it.

By project I mean merely a goal-directed activity. It can be, but doesn't have to be, a business project or multi-person project. My primary focus is on larger projects, e.g. projects that take more than one day to finish.

The first part is discussion context. You may want to skip to the second part where I write an article/monologue with no one else talking. It explains a lot of important stuff IMO.


Gavin Palmer:

The most important problem is The Human Resource Problem. All other problems depend on the human resource problem. The Human Resource Problem consists of a set of smaller problems that are related. An important problem within that set is the communication problem: an inability to communicate. I classify that problem as a problem related to information technology and/or process. If people can obtain and maintain a state of mind which allows communication, then there are other problems within that set related to problems faced by any organization. Every organization is faced with problems related to hiring, firing, promotion, and demotion.

So every person encounters this problem. It is a universal problem. It will exist so long as there are humans. We each have the opportunity to recognize and remember this important problem in order to discover and implement processes and tools which can facilitate our ability to solve every problem which is solvable.

curi:

you haven't explained what the human resource problem is, like what things go in that category

Gavin Palmer:

The thought I originally had long ago - was that there are people willing and able to solve our big problems. We just don't have a sufficient mechanism for finding and organizing those people. But I have discovered that this general problem is related to ideas within any organization. The general problem is related to ideas within a company, a government, and even those encountered by each individual mind. The task of recruiting, hiring, firing, promoting, and demoting ideas can occur on multiple levels.

curi:

so you mean it like HR in companies? that strikes me as a much more minor problem than how rationality works.

Gavin Palmer:

If you want to end world hunger it's an HR problem.

curi:

it's many things including a rationality problem

curi:

and a free trade problem and a governance problem and a peace problem

curi:

all of which require rationality, which is why rationality is central

Gavin Palmer:

How much time have you actually put into trying to understand world hunger and the ways it could end?

Gavin Palmer:

How much time have you actually put into building anything? What's your best accomplishment as a human being?

curi:

are you mad?

GISTE:

so to summarize the discussion that Gavin started. Gavin described what he sees as the most important problem (the HR problem), where all other problems depend on it. curi disagreed by saying that how rationality works is a more important problem than the HR problem, and he gave reasons for it. Gavin disagreed by saying that for the goal of ending world hunger, the most important problem is the HR problem -- and he did not address curi's reasons. curi disagreed by saying that the goal of ending world hunger is many problems, all of which require rationality, making rationality the most important problem. Then Gavin asked curi about how much time he has spent on the world hunger problem and asked if he built anything and what his best accomplishments are. Gavin's response does not seem to connect to any of the previous discussion, as far as I can tell. So it's offtopic to the topic of what is the most important problem for the goal of ending world hunger. Maybe Gavin thinks it is on topic, but he didn't say why he thinks so. I guess that curi also noticed the offtopic thing, and that he guessed that Gavin is mad. then curi asked Gavin "are you mad?" as a way to try to address a bottleneck to this discussion. @Gavin Palmer is this how you view how the discussion went or do you have some differences from my view? if there are differences, then we could talk about those, which would serve to help us all get on the same page. And then that would help serve the purpose of reaching mutual understanding and agreement regarding whether or not the HR problem is the most important problem on which all other problems depend.

GISTE:

btw i think Gavin's topic is important. as i see it, its goal is to figure out the relationships between various problems, to figure out which is the most important. i think that's important because it would serve the purpose of helping one figure out which problems to prioritize.

Gavin Palmer:

Here is a google doc linked to a 1-on-1 I had with GISTE (he gave me permission to share). I did get a little angry and was anxious about returning here today. I'm glad to see @curi did not get offended by my questions and asked a question. I am seeing the response after I had the conversation with GISTE. Thank you for your time.

https://docs.google.com/document/d/1XEztqEHLBAJ39HQlueKX3L4rVEGiZ4GEfBJUyXEgVNA/edit?usp=sharing

GISTE:

to be clear, regarding the 1 on 1 discussion linked above, whatever i said about curi are my interpretations. don't treat me as an authority on what curi thinks.

GISTE:

also, don't judge curi by my ideas/actions. that would be unfair to him. (also unfair to me)

JustinCEO:

Curi's response tells me he does not know how to solve world hunger.

JustinCEO:

Unclear to me how that judgment was arrived at

JustinCEO:

I'm reading

JustinCEO:

Lowercase c for curi btw

JustinCEO:

But I have thought about government, free trade, and peace very much. These aren't a root problem related to world hunger.

JustinCEO:

curi actually brought those up as examples of things that require rationality

JustinCEO:

And said that rationality was central

JustinCEO:

But you don't mention rationality in your statement of disagreement

JustinCEO:

You mention the examples but not the unifying theme

GISTE:

curi did not say those are root problems.

JustinCEO:

Ya 🙂

JustinCEO:

Ya GISTE got this point

JustinCEO:

I'm on phone so I'm pasting less than I might otherwise

JustinCEO:

another way to think about the world hunger problem is this: what are the bottlenecks to solving it? first name them, before trying to figure out which one is like the most systemic one.

JustinCEO:

I think the problem itself could benefit from a clear statement

GISTE:

That clear statement would include causes of (world) hunger. Right ? @JustinCEO

JustinCEO:

I mean a detailed statement would get into that issue some GISTE cuz like

JustinCEO:

You'd need to figure out what counts and what doesn't as an example of world hunger

JustinCEO:

What is in the class of world hunger and what is outside of it

JustinCEO:

And that involves getting into specific causes

JustinCEO:

Like presumably "I live in a first world country and have 20k in the bank but forgot to buy groceries this week and am hungry now" is excluded from most people's definitions of world hunger

JustinCEO:

I think hunger is basically a solved problem in western liberal capitalist democracies

JustinCEO:

People fake the truth of this by making up concepts called "food insecurity" that involve criteria like "occasionally worries about paying for groceries" and calling that part of a hunger issue

JustinCEO:

Thinking about it quickly, I kinda doubt there is a "world hunger" problem per se

GISTE:

yeah before you replied to my last comment, i immediately thought of people who choose to be hungry, like anorexic people. and i think people who talk about world hunger are not including those situations.

JustinCEO:

There's totally a Venezuela hunger problem or a Zimbabwe hunger problem tho

JustinCEO:

But not really an Ohio or Kansas hunger problem

JustinCEO:

Gavin

I try to be pragmatic. If your solution depends on people being rational, then the solution probably will not work. My solution does depend on rational people, but the number of rational people needed is very small

GISTE:

There was one last comment by me that did not get included in the one on one discussion. Here it is. “so, you only want people on your team that already did a bunch of work to solve world hunger? i thought you wanted rational people, not necessarily people that already did a bunch of work to solve world hunger.”

JustinCEO:

What you think being rational is and what it involves could probably benefit from some clarification.

Anyways I think society mostly works to the extent people are somewhat rational in a given context.

JustinCEO:

I regard violent crime for the purpose of stealing property as irrational

JustinCEO:

For example

JustinCEO:

Most people agree

JustinCEO:

So I can form a plan to walk down my block with my iPhone and not get robbed, and this plan largely depends on the rationality of other people

JustinCEO:

Not everyone agrees with my perspective

JustinCEO:

The cop car from the local precinct that is generally parked at the corner is also part of my plan

JustinCEO:

But my plan largely depends on the rationality of other people

JustinCEO:

If 10% or even 5% of people had a pro property crime perspective, the police could not really handle that and I would have to change my plans

Gavin Palmer:

World hunger is just an example of a big problem which depends on information technology related to the human resource problem. My hope is that people interested in any big problem could come to realize that information technology related to the human resource problem is part of the solution to the big problem they are interested in as well as other big problems.

Gavin Palmer:

So maybe "rationality" is related to what I call "information technology".

JustinCEO:

the rationality requirements of my walking outside with phone plan are modest. i can't plan to e.g. live in a society i would consider more moral and just (where e.g. a big chunk of my earnings aren't confiscated and wasted) cuz there's not enough people in the world who agree with me on the relevant issues to facilitate such a plan.

JustinCEO:

anyways regarding specifically this statement

JustinCEO:

If your solution depends on people being rational, then the solution probably will not work.

JustinCEO:

i wonder if the meaning is If your solution depends on [everyone] being [completely] rational, then the solution probably will not work.

Gavin Palmer:

There is definitely some number/percentage I have thought about... like I only need 10% of the population to be "rational".

GISTE:

@Gavin Palmer can you explain your point more? what i have in mind doesn't seem to match your statement. so like if 90% of the people around me weren't rational (like to what degree exactly?), then they'd be stealing and murdering so much that the police couldn't stop them.

JustinCEO:

@Gavin Palmer based on the stuff you said so far and in the google doc regarding wanting to work on important problems, you may appreciate this post

JustinCEO:

https://curi.us/2029-the-worlds-biggest-problems

JustinCEO:

Gavin says

A thing that is sacred is deemed worthy of worship. And worship is based in the words worth and ship. And so a sacred word is believed to carry great worth in the mind of the believer. So I can solve world hunger with the help of people who are able and willing. Solving world hunger is not an act done by people who uphold the word rationality above all other words.

JustinCEO:

the word doesn't matter but the concept surely does for problem-solving effectiveness

JustinCEO:

people who don't value rationality can't solve much of anything

nikluk:

Re rationality. Have you read this article and do you agree with what it says, @Gavin Palmer ?
https://fallibleideas.com/reason

GISTE:

So maybe "rationality" is related to what I call "information technology".
can you say more about that relationship? i'm not sure what you have in mind. i could guess but i think it'd be a wild guess that i'm not confident would be right. (so like i could steelman your position but i could easily be adding in my own ideas and ruin it. so i'd rather avoid that.) @Gavin Palmer

Gavin Palmer:

so like if 90% of the people around me weren't rational (like to what degree exactly?), then they'd be stealing and murdering so much that the police couldn't stop them.
I think the image of the elephant rider portrayed by Jonathan Haidt is closer to the truth when it comes to some word like rationality and reason. I actually value something like compassion above a person's intellect: and I really like people who have both. There are plenty of idiots in the world who are not going to try and steal from you or murder you. I'm just going to go through these one by one when able.

Gavin Palmer:

https://curi.us/2029-the-worlds-biggest-problems
Learning to think is very important. There were a few mistakes in that article. The big one in my opinion is the idea that 2/3 of the people can change things. On the contrary our government systems do not have any mechanism in place to learn what 2/3 of the people actually want nor any ability to allow the greatest problem solvers to influence those 2/3 of the people. We aren't even able to recognize the greatest problem solvers. Another important problem is technology which allows for this kind of information sharing so that we can actually know what the people think and we can allow the greatest problem solvers to be heard. We want that signal to rise above the noise.

The ability to solve problems is like a muscle. For me - reading books does not help me build that muscle - they only help me find better words for describing the strategies and processes which I have developed through trial and error. I am not the smartest person - I learn from trial and error.

curi:

To answer the questions: I have thought about many big problems, such as aging death, AGI, and coercive parenting/education. Yes I've considered world hunger too, though not as a major focus. I'm an (experienced) intellectual. My accomplishments are primarily in philosophy research re issues like how learning and rational discussion work. I do a lot of educational writing and discussion. https://elliottemple.com

curi:

You're underestimating the level of outlier you're dealing with here, and jumping to conclusions too much.

Gavin Palmer:

https://fallibleideas.com/reason
It's pretty good. But science without engineering is dead. That previous sentence reminds me of "faith without works is dead". I'm not a huge fan of science for the sake of science. I'm a fan of engineering and the science that helps us do engineering.

curi:

i don't think i have anything against engineering.

Gavin Palmer:

I'm just really interested in finding people who want to help do the engineering. It's my bias. Even more - it's my passion and my obsession.

Gavin Palmer:

Thinking and having conversations is fun though.

Gavin Palmer:

But sometimes it can feel aimless if I'm not building something useful.

curi:

My understanding of the world, in big picture, is that a large portion of all efforts at engineering and other getting-stuff-done type work are misdirected and useless or destructive.

curi:

This is for big hard problems. The productiveness of practical effort is higher for little things like making dinner today.

curi:

The problem is largely not the engineering itself but the ideas guiding it – the goals and plan.

Gavin Palmer:

I worked for the Army's missile defense program for 6 years after I graduated from college. I left because of the reason you point out. My hope was that I would be able to change things from within.

curi:

So for example in the US you may agree with me that at least around half of political activism is misdirected to goals with low or negative value. (either the red tribe or blue tribe work is wrong, plus some of the other work too)

Gavin Palmer:

Even the ones I agree with and have volunteered for are doing a shit job.

curi:

yeah

curi:

i have found a decent number of people want to "change the world" or make some big improvement, but they can't agree amongst themselves about what changes to make, and some of them are working against others. i think sorting that mess out, and being really confident the projects one works on are actually good, needs to come before implementation.

curi:

i find most people are way too eager to jump into their favored cause without adequately considering why people disagree with it and sorting out all the arguments for all sides.

Gavin Palmer:

There are many tools that don't exist which could exist. And those tools could empower any organization and their goal(s).

curi:

no doubt.

curi:

software is pretty new and undeveloped. adequate tools are much harder to name than inadequate ones.

Gavin Palmer:

adequate tools are much harder to name than inadequate ones.
I don't know what that means.

curi:

we could have much better software tools for ~everything

curi:

"~" means "approximately"

JustinCEO:

Twitter can't handle displaying tweets well. MailMate performance gets sluggish with too many emails. Most PDF software can't handle super huge PDFs well. Workout apps can't use LIDAR to tell ppl if their form is on point

curi:

Discord is clearly a regression from IRC in major ways.

Gavin Palmer:

🤦‍♂️

JustinCEO:

?

JustinCEO:

i find your face palm very unclear @Gavin Palmer; hope you elaborate!

Gavin Palmer:

I find sarcasm very unclear. That's the only way I know how to interpret the comments about Twitter, MailMate, PDF, LIDAR, Discord, IRC, etc.

curi:

I wasn't being sarcastic and I'm confident Justin also meant what he said literally and seriously.

Gavin Palmer:

Ok - thanks for the clarification.

JustinCEO:

ya my statements were made earnestly

JustinCEO:

re: twitter example

JustinCEO:

twitter makes it harder to have a decent conversation cuz it's not good at doing conversation threading

JustinCEO:

if it was better at this, maybe people could keep track of discussions better and reach agreement more easily

Gavin Palmer:

Well - I have opinions about Twitter. But to be honest - I am also trying to look at what this guy is doing:
https://github.com/erezsh/portal-radar

It isn't a good name in my opinion - but the idea is related to having some bot collect discord data so that there can be tools which help people find the signal in the noise.

curi:

are you aware of http://bash.org ? i'm serious about major regressions.

JustinCEO:

i made an autologging system to make discord chat logs on this server so people could pull information (discussions) out of them more easily

JustinCEO:

but alas it's a rube goldberg machine of different tools running together in a VM, not something i can distribute

Gavin Palmer:

Well - it's a good goal. I'm looking to add some new endpoints in a pull request to the github repo I linked above. Then I could add some visualizations.

Another person has built a graphql backend (which he isn't sharing open source) and I have created some of my first react/d3 components to visualize his data.
https://portal-projects.github.io/users/

Gavin Palmer:

I think you definitely want to write the code in a way that it can facilitate collaboration.

curi:

i don't think this stuff will make much difference when people don't know what a rational discussion is and don't want one.

curi:

and don't want to use tools that already exist like google groups.

curi:

which is dramatically better than twitter for discussion

Gavin Palmer:

I'm personally interested in something which I have titled "Personality Targeting with Machine Learning".

Gavin Palmer:

My goal isn't to teach people to be rational - it is to try and find people who are trying to be rational.

curi:

have you identified which philosophical schools of thought it's compatible and incompatible with? and therefore which you're betting on being wrong?

curi:

it = "Personality Targeting with Machine Learning".

Gavin Palmer:

Ideally it isn't hard coded or anything. I could create multiple personality profiles. Three of the markets I have thought about using the technology in would be online dating, recruiting, and security/defense.

curi:

so no?

Gavin Palmer:

If I'm understanding you - a person using the software could create a personality that mimics a historical person for example - and then parse social media in search of people who are saying similar things.

Gavin Palmer:

But I'm not exactly sure what point you are trying to make.

curi:

You are making major bets while being unaware of what they are. You may be wrong and wasting your time and effort, or even be doing something counterproductive. And you aren't very interested in this.

Gavin Palmer:

Well - from my perspective - I am not making any major bets. What is the worst case scenario?

curi:

An example worst case scenario would be that you develop an AGI by accident and it turns us all into paperclips.

Gavin Palmer:

I work with a very intelligent person that would laugh at that idea.

curi:

That sounds like an admission you're betting against it.

curi:

You asked for an example seemingly because you were unaware of any. You should be documenting what bets you're making and why.

Gavin Palmer:

I won't be making software that turns us all into paperclips.

curi:

Have you studied AI alignment?

Gavin Palmer:

I have been writing software for over a decade. I have been using machine learning for many months now. And I have a pretty good idea of how the technology I am using actually works.

curi:

So no?

Gavin Palmer:

No. But if it is crap - do you want to learn why it is crap?

curi:

I would if I agreed with it, though I don't. But a lot of smart people believe it.

curi:

They have some fairly sophisticated reasons, which I don't think it's reasonable to bet against from a position of ignorance.

Gavin Palmer:

Our ability to gauge if someone has understanding on a given subject is relative to how much understanding we have on that subject.

curi:

Roughly, sure. What's your point?

Gavin Palmer:

First off - I'm not sure AGI is even possible. I love to play with the idea. And I would love to get to a point where I get to help build a god. But I am not even close to doing that at this point in my career.

curi:

So what?

Gavin Palmer:

You think there is a risk I would build something that turns humans into paperclips.

curi:

I didn't say that.

Gavin Palmer:

You said that is the worst case scenario.

curi:

Yes. It's something you're betting against, apparently without much familiarity with the matter.

curi:

Given that you don't know much about it, you aren't in a reasonable position to judge how big a risk it is.

curi:

So I think you're making a mistake.

curi:

The bigger picture mistake is not trying to figure out what bets you're making and why.

curi:

Most projects have this flaw.

Gavin Palmer:

My software uses algorithms to classify input data.

curi:

So then, usually, somewhere on the list of thousands of bets being made, are a few bad ones.

curi:

Does this concept make sense to you?

Gavin Palmer:

Love is most important in my hierarchy of values.

Gavin Palmer:

If I used the word in a sentence I would still want to capitalize it.

curi:

is that intended to be an answer?

Gavin Palmer:

Yes - I treat Love in a magical way. And you don't like magical thinking. And so we have very different world views. They might even be incompatible. The difference between us is that I won't be paralyzed by my fears. And I will definitely make mistakes. But I will make more mistakes than you. The quality and quantity of my learning will be very different than yours. But I will also be reaping the benefits of developing new relationships with engineers, learning new technology/process, and building up my portfolio of open source software.

curi:

You accuse me of being paralyzed by fears. You have no evidence and don't understand me.

curi:

Your message is not loving or charitable.

curi:

You're heavily personalizing while knowing almost nothing about me.

JustinCEO:

i agree

JustinCEO:

also, magical thinking can't achieve anything

curi:

But I will also be reaping the benefits of developing new relationships with engineers

curi:

right now you seem to be trying to burn a bridge with an engineer.

curi:

you feel attacked in some way. you're experiencing some sort of conflict. do you want to use a rational problem solving method to try to address this?

curi:

J, taking my side here will result in him feeling ganged up on. I think it will be counterproductive psychologically.

doubtingthomas:

J, taking my side here will result in him feeling ganged up on. I think it will be counterproductive psychologically.
Good observation. Are you going to start taking these considerations into account in future conversations?

curi:

I knew that years ago. I already did take it into account.

curi:

please take this tangent to #fi

GISTE:

also, magical thinking can't achieve anything
@JustinCEO besides temporary nice feelings. Long term it's bad though.

doubtingthomas:

yeah sure

JustinCEO:

ya sure GISTE, i meant achieve something in reality

curi:

please stop talking here. everyone but gavin

Gavin Palmer:

You talked about schools of philosophy, AI alignment, and identifying the hidden bets. That's a lot to request of someone.

curi:

Thinking about your controversial premises and civilizational risks, in some way instead of ignoring the matter, is too big an ask to expect of people before they go ahead with projects?

curi:

Is that what you mean?

Gavin Palmer:

I don't see how my premises are controversial or risky.

curi:

Slow down. Is that what you meant? Did I understand you?

Gavin Palmer:

I am OK with people thinking about premises and risks of an idea and discussing those. But in order to have that kind of discussion you would need to understand the idea. And in order to understand the idea - you have to ask questions.

curi:

it's hard to talk with you because of your repeated unwillingness to give direct answers or responses.

curi:

i don't know how to have a productive discussion under these conditions.

Gavin Palmer:

I will try to do better.

curi:

ok. can we back up?

Thinking about your controversial premises and civilizational risks, in some way instead of ignoring the matter, is too big an ask to expect of people before they go ahead with projects?

did i understand you, yes or no?

Gavin Palmer:

no

curi:

ok. which part(s) is incorrect?

Gavin Palmer:

The words controversial and civilizational are not conducive to communication.

curi:

why?

Gavin Palmer:

They indicate that you think you understand the premises and the risks and I don't know that you understand the idea I am trying to communicate.

curi:

They are just adjectives. They don't say what I understand about your project.

Gavin Palmer:

Why did you use them?

curi:

Because you should especially think about controversial premises rather than all premises, and civilizational risks more than all risks.

curi:

And those are the types of things that were under discussion.

curi:

A generic, unqualified term like "premises" or "risks" would not accurately represent the list of 3 examples "schools of philosophy, AI alignment, and identifying the hidden bets"

Gavin Palmer:

I don't see how schools of philosophy, AI alignment, and hidden bets are relevant. Those are just meaningless words in my mind. The meaning of those words in your mind may contain relevant points. And I would be willing to discuss those points as they relate to the project. But (I think) that would also require that you have some idea of what the software does and how it is done. To bring up these things before you understand the software seems very premature.

curi:

the details of your project are not relevant when i'm bringing up extremely generic issues.

curi:

e.g. there is realism vs idealism. your project takes one side, the other, or is compatible with both. i don't need to know more about your project to say this.

curi:

(or disagrees with both, though that'd be unusual)

curi:

it's similar with skepticism or not.

curi:

and moral relativism.

curi:

and strong empiricism.

curi:

one could go on. at length. and add a lot more using details of your project, too.

curi:

so, there exists some big list. it has stuff on it.

curi:

so, my point is that you ought to have some way of considering and dealing with this list.

curi:

some way of considering what's on it, figuring out which merit attention and how to prioritize that attention, etc.

curi:

you need some sort of policy, some way to think about it that you regard as adequate.

curi:

this is true of all projects.

curi:

this is one of the issues which has logical priority over the specifics of your project.

curi:

there are generic concepts about how to approach a project which take precedence over jumping into the details.

curi:

do you think you understand what i'm saying?

Gavin Palmer:

I think I understand this statement:

there are generic concepts about how to approach a project which take precedence over jumping into the details.

curi:

ok. do you agree with that?

Gavin Palmer:

I usually jump into the details. I'm not saying you are wrong though.

curi:

ok. i think looking at least a little at the big picture is really important, and that most projects lose a lot of effectiveness (or worse) due to failing to do this plus some common errors.

curi:

and not having any conscious policy at all regarding this issue (how to think about the many premises you are building on which may be wrong) is one of the common errors.

curi:

i think being willing to think about things like this is one of the requirements for someone who wants to be effective at saving/changing/helping the world (or themselves individually)

Gavin Palmer:

But I have looked at a lot of big picture things in my life.

curi:

cool. doesn't mean you covered all the key ones. but maybe it'll give you a head start on the project planning stuff.

Gavin Palmer:

So do you have an example of a project where it was done in a way that is satisfactory in your mind?

curi:

hmm. project planning steps are broadly unpublished and unavailable for the vast majority of projects. i think the short answer is no one is doing this right. this aspect of rationality is ~novel.

curi:

some ppl do a more reasonable job but it's really hard to tell what most ppl did.

curi:

u can look at project success as a proxy but i don't think that'll be informative in the way you want.

Gavin Palmer:

I'm going to break soon, but I would encourage you to think about some action items for you and I based around this ideal form of project planning. I have real-world experience with various forms of project planning to some degree or another.

curi's Monologue

curi:

the standard way to start is to brainstorm things on the list

curi:

after you get a bunch, you try to organize them into categories

curi:

you also consider what is a reasonable level of overhead for this, e.g. 10% of total project resource budget.

curi:

but a flat percentage is problematic b/c a lot of the work is general education stuff that is reusable for most projects. if you count your whole education, overhead will generally be larger than the project. if you only count stuff specific to this project, you can have a really small overhead and do well.

curi:

stuff like reading and understanding/remembering/taking-notes-on/etc one overview book of philosophy ideas is something that IMO should be part of being an educated person who has appropriate background knowledge. but many ppl haven't done it. if you assign the whole cost of that to one project it can make the overhead ratio look bad.

curi:

unfortunately i think a lot of what's in that book would be wrong and ignore some more important but less famous ideas. but at least that'd be a reasonable try. most ppl don't even get that far.

curi:

certainly a decent number of ppl have done that. but i think few have ever consciously considered "which philosophy schools of thought does my project contradict? which am i assuming as premises and betting my project success on? and is that a good idea? do any merit more investigation before i make such a bet?" ppl have certainly considered such things in a disorganized, haphazard way, which sometimes manages to work out ok. idk that ppl have done this by design in the way i'm recommending.

curi:

this kind of analysis has large practical consequences, e.g. > 50% of "scientific research" is in contradiction to Critical Rationalist epistemology, which is one of the more famous philosophies of science.

curi:

IMO, consequently it doesn't work and the majority of scientists basically waste their careers.

curi:

most do it without consciously realizing they are betting their careers on Karl Popper being wrong.

curi:

many of them do it without reading any Popper book or being able to name any article criticizing Popper that they think is correct.

curi:

that's a poor bet to make.

curi:

even if Popper is wrong, one should have more information before betting against him like that.

curi:

another thing with scientists is the majority bet their careers on a claim along the lines of "college educations and academia are good"

curi:

this is a belief that some of the best scientists have disagreed with

curi:

a lot of them also have government funding underlying their projects and careers without doing a rational investigation of whether that may be a really bad, risky thing.

curi:

separate issue: broadly, most large projects try to use reason. part of the project is that problems come up and people try to do rational problem solving – use reason to solve the problems as they come up. they don't expect to predict and plan for every issue they're gonna face. there are open controversies about what reason is, how to use it, what problem solving methods are effective or ineffective, etc.

curi:

what the typical project does is go by common sense and intuition. they are basically betting the project on whatever concept of reason they picked up here and there from their culture being adequate. i regard this as a very risky bet.

curi:

and different project members have different conceptions of reason, and they are also betting on those being similar enough things don't fall apart.

curi:

commonly without even attempting to talk about the matter or put their ideas into words.

curi:

what happens a lot when people have unverbalized philosophy they picked up from their culture at some unknown time in the past is ... BIAS. they don't actually stick to any consistent set of ideas about reason. they change it around situationally according to their biases. that's a problem on top of some of the ideas floating around our culture being wrong (which is well known – everyone knows that lots of ppl's attempts at rational problem solving don't work well)

curi:

one of the problems in the field of reason is: when and how do you rationally end (or refuse to start) conversations without agreement. sometimes you and the other guy agree. but sometimes you don't, and the guy is saying "you're wrong and it's a big deal, so you shouldn't just shut your mind and refuse to consider more" and you don't want to deal with that endlessly but you also don't want to just be biased and stay wrong, so how do you make an objective decision? preferably is there something you could say that the other guy could accept as reasonable? (not with 100% success rate, some people gonna yell at you no matter what, but something that would convince 99% of people who our society considers pretty smart or reasonable?)

curi:

this has received very little consideration from anyone and has resulted in countless disputes when people disagree about whether it's appropriate to stop a discussion without giving further answers or arguments.

curi:

lots of projects have lots of strife over this specific thing.

curi:

i also was serious about AI risk being worth considering (for basically anything in the ballpark of machine learning, like classifying big data sets) even though i actually disagree with that one. i did consider it and think it merits consideration.

curi:

i think it's very similar to how physicists in 1940 were irresponsible if they were doing work anywhere in the ballpark of nuclear stuff and didn't think about potential weapons.

curi:

another example of a project management issue is how does one manage a schedule? how full should a schedule be packed with activities? i think the standard common sense ways ppl deal with this are wrong and do a lot of harm (the basic error is overfilling schedules in a way which fails to account for variance in task completion times, as explained by Eliyahu Goldratt)

curi:

i meant there an individual person's schedule
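Goldratt's point about variance can be sketched with a toy simulation (the task counts, durations, and distribution here are invented for illustration, not from the discussion): if a schedule is packed by estimating each task at its typical (median) duration, delays accumulate down the chain while early finishes rarely propagate, so the chain as a whole runs late more often than not.

```python
import random

random.seed(0)  # deterministic toy run

def chain_duration(n_tasks=10, median=5.0):
    """Total duration of n sequential tasks whose times are skewed:
    usually near the median, occasionally much longer."""
    return sum(random.lognormvariate(0, 0.6) * median for _ in range(n_tasks))

planned = 10 * 5.0  # schedule packed with per-task median estimates
trials = [chain_duration() for _ in range(10_000)]
late_fraction = sum(t > planned for t in trials) / len(trials)
# late_fraction comes out well above one half in this toy model:
# the tightly packed schedule finishes late most of the time.
```

Each individual task is on time half the time, yet the whole chain is late far more than half the time, which is the basic overfilling error described above.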

curi:

similarly there is problem of organizing the entire project schedule and coordinating people and things. this has received a ton of attention from specialists, but i think most ppl have an attitude like "trust a standard view i learned in my MBA course. don't investigate rival viewpoints". risky.

curi:

a lot of other ppl have no formal education about the matter and mostly ... don't look it up and wing it.

curi:

even riskier!

curi:

i think most project managers couldn't speak very intelligently about early start vs. late start for dependencies off the critical path.

curi:

and don't know that Goldratt answered it. and it does matter. bad decisions re this one issue result in failed and cancelled projects, late projects, budget overruns, etc.
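to make the early vs. late start question concrete: the standard way to even state it is a forward/backward pass over the task graph, which gives each task its earliest start, latest start, and slack. this is a minimal sketch with a hypothetical four-task project, not Goldratt's answer (his Critical Chain approach goes well beyond computing slack):

```python
# tiny hypothetical task graph (durations in days)
duration = {"A": 3, "B": 2, "C": 4, "D": 1}
preds = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}
order = ["A", "B", "C", "D"]  # already topologically sorted

# forward pass: earliest each task can start/finish
early_start, early_finish = {}, {}
for t in order:
    early_start[t] = max((early_finish[p] for p in preds[t]), default=0)
    early_finish[t] = early_start[t] + duration[t]

# backward pass: latest each task can start without delaying the project
project_end = max(early_finish.values())
late_finish, late_start = {}, {}
for t in reversed(order):
    succs = [s for s in order if t in preds[s]]
    late_finish[t] = min((late_start[s] for s in succs), default=project_end)
    late_start[t] = late_finish[t] - duration[t]

for t in order:
    slack = late_start[t] - early_start[t]
    print(t, "early start:", early_start[t],
          "late start:", late_start[t], "slack:", slack)
```

here A, C and D have zero slack (they're the critical path), while B could start on day 3 or day 5. the question the chat raises is which of those you should actually pick, and why.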

curi:

lots of ppl's knowledge of decision making processes extends about as far as pro/con lists and ad hoc arguing.

curi:

so they are implicitly betting a significant amount of project effectiveness on something like "my foundation of pro/con lists and ad hoc arguing is adequate knowledge of decision making processes".

curi:

this is ... unwise.

curi:

another generic issue is lying. what is a lie? how do you know when you're lying to yourself? a lot of ppl make a bet roughly like "either my standard cultural knowledge + random variance about lying is good or lying won't come up in the project".

curi:

similar with bias instead of lying.

curi:

another common, generic way projects go wrong is ppl never state the project goal. they don't have clear criteria for project success or failure.

curi:

related, it's common to make basically no attempt to estimate the resources needed to complete the project successfully, estimate the resources available, and compare those two things.

curi:

goals and resource budgeting are things some ppl actually do. they aren't rare. but they're often omitted, especially for more informal and non-business projects.

curi:

including some very ambitious change-the-world type projects, where considering a plan and what resources it'll use is actually important. a lot of times ppl do stuff they think is moving in the direction of their goal without seriously considering what it will take to actually reach their goal.

curi:

e.g. "i will do X to help the environment" without caring to consider what breakpoints exist for helping the environment that make an important difference and how much action is required to reach one.

curi:

there are some projects like "buy taco bell for dinner" that use low resources compared to what you have available (for ppl with a good income who don't live paycheck to paycheck), so you don't even need to consciously think through resource use. but for a lot of bigger ones, one ought to estimate e.g. how much time it'll take for success and how much time one is actually allocating to the project.

curi:

often an exploratory project is appropriate first. try something a little and see how you like it. investigate and learn more before deciding on a bigger project or not. ppl often don't consciously separate this investigation from the big project or know which they are doing.

curi:

and so they'll do things like switch to a big project without consciously realizing they need to clear up more time on their schedule to make that work.

curi:

often they just don't think clearly about what their goals actually are and then use bias and hindsight to adjust their goals to whatever they actually got done.

curi:

there are lots of downsides to that in general, and it's especially bad with big ambitious change/improve the world goals.

curi:

one of the most egregious examples of the broad issues i'm talking about is political activism. so many people are working for the red or blue team while having done way too little to find out which team is right and why.

curi:

so they are betting their work on their political team being right. if their political team is wrong, their work is not just wasted but actually harmful. and lots of ppl are really lazy and careless about this bet. how many democrats have read one Mises book or could name a book or article that they think refutes a major Mises claim?

curi:

how many republicans have read any Marx or could explain and cite why the labor theory of value is wrong or how the economic calculation argument refutes socialism?

curi:

how many haters of socialism could state the relationship of socialism to price controls?

curi:

how many of them could even give basic economic arguments about why price controls are harmful in a simple theoretical market model and state the premises/preconditions for that to apply to a real situation?
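the simple theoretical model referred to there can be written in a few lines. this sketch uses made-up linear demand and supply curves; the preconditions it assumes include a binding ceiling, downward-sloping demand, upward-sloping supply, and no black market:

```python
# made-up linear curves: Qd = 100 - 2p, Qs = 10 + 4p
def demand(p):
    return 100 - 2 * p

def supply(p):
    return 10 + 4 * p

# market-clearing price: 100 - 2p = 10 + 4p  ->  p = 15, Q = 70
p_eq = 15
assert demand(p_eq) == supply(p_eq)

# impose a price ceiling below equilibrium
p_cap = 10
shortage = demand(p_cap) - supply(p_cap)  # 80 demanded, 50 supplied
print("quantity traded:", min(demand(p_cap), supply(p_cap)))  # 50
print("shortage:", shortage)  # 30
```

at the capped price, buyers want 80 units but sellers only offer 50, so trade falls from 70 units to 50 and a 30-unit shortage appears. whether this applies to a real situation depends on whether the stated preconditions hold there.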

curi:

i think not many even when you just look at people who work in the field professionally. let alone if you look at people who put time or money into political causes.

curi:

and how many of them base their dismissal of solipsism and idealism on basically "it seems counterintuitive to me" and reject various scientific discoveries about quantum mechanics for the same reason? (or would reject those discoveries if they knew what they were)

curi:

if solipsism or idealism were true it'd have consequences for what they should do, and people's rejections of those ideas (which i too reject) are generally quite thoughtless.

curi:

so it's again something ppl are betting projects on in an unreasonable way.

curi:

to some extent ppl are like "eh i don't have time to look into everything. the experts looked into it and said solipsism is wrong". most such ppl have not read a single article on the topic and could not name an expert on the topic.

curi:

so their bet is not really on experts being right – which if you take that bet thousands of times, you're going to be wrong sometimes, and it may be a disaster – but their bet is actually more about mainstream opinion being right. whatever some ignorant reporters and magazine writers claimed the experts said.

curi:

they are getting a lot of their "expert" info fourth hand. it's filtered by mainstream media, talking heads on TV, popular magazines, a summary from a friend who listened to a podcast, and so on.

curi:

ppl will watch and accept info from a documentary made by ppl who consulted with a handful of ppl who some university gave expert credentials. and the film makers didn't look into what experts or books, if any, disagree with the ones they hired.

curi:

sometimes the info presented disagrees with a majority of experts, or some of the most famous experts.

curi:

sometimes the film makers have a bias or agenda. sometimes not.

curi:

there are lots of issues where lots of experts disagree. these are, to some rough approximation, the areas that should be considered controversial. these merit some extra attention.

curi:

b/c whatever you do, you're going to be taking actions which some experts – some ppl who have actually put a lot of work into studying the matter – think is a bad idea.

curi:

you should be careful before doing that. ppl often aren't.

curi:

politics is a good example of this. whatever side you take on any current political issue, there are experts who think you're making a big mistake.

curi:

but it comes up in lots of fields. e.g. psychiatry is much less of an even split but there are a meaningful number of experts who think anti-psychotic drugs are harmful not beneficial.

curi:

one broad criterion for which areas to look into some before betting your project on them is controversy. another is big risk areas (it's worse if you're wrong, like AI risk or e.g. there's huge downside risk to deciding that curing aging is a bad cause).

curi:

these are imperfect criteria. some very unpopular causes are true. some things literally no one currently believes are true. and you can't deal with every risk that doesn't violate the laws of physics. you have to estimate plausibility some.

curi:

one of the important things to consider is how long does it take to do a good job? could you actually learn about all the controversial areas? how thoroughly is enough? how do you know when you can move on?

curi:

are there too many issues where 100+ smart ppl or experts think ur initial plan is wrong/bad/dangerous, or could you investigate every area like that?

curi:

relying on the opinions of other ppl like that should not be your whole strategy! that gives you basically no chance against something your culture gets systematically wrong. but it's a reasonable thing to try as a major strategy. it's non-obvious to come up with way better approaches.

curi:

you should also try to use your own mind and judgment some, and look into areas you think merit it.

curi:

another strategy is to consider things that people say to you personally. fans, friends, anonymous ppl willing to write comments on your blog... this has some merits like you get more customized advice and you can have back and forth discussion. it's different to be told "X is dangerous b/c Y" from a book vs. a person where you can ask some clarifying questions.

curi:

ppl sometimes claim this strategy is too time consuming and basically you have to ignore ~80% of all criticism you're aware of according to your judgment, with no clear policies or principles to prevent biased judgments. i don't agree and have written a lot about this matter.

curi:

i think this kind of thing can be managed with reasonable, rational policies instead of basically giving up.

curi:

some of my writing about it: https://elliottemple.com/essays/using-intellectual-processes-to-combat-bias

curi:

most ppl have very few persons who want to share criticism with them anyway, so this article and some others have talked more about ppl with a substantial fan base who actually want to say stuff to them.

curi:

i think ppl should write down what their strategy is and do some transparency so they can be held accountable for actually doing it in addition to the strategy itself being something available for ppl to criticize.

curi:

a lot of times ppl's strategy is roughly "do whatever they feel like" which is such a bias enabler. and they don't even write down anything better and claim to do it. they will vaguely, non-specifically say they are doing something better. but no actionable or transparent details.

curi:

if they write something down they will want it to actually be reasonable. a lot of times they don't even put their policies into words in their own head. when they try to use words, they will see some stuff is unreasonable on their own.

curi:

if you can get ppl to write anything down what happens next is a lot of times they don't do what they said they would. sometimes they are lying pretty intentionally and other times they're just bad at it. either way, if they recognize their written policies are important and good, and then do something else ... big problem, even in their own view.

curi:

so what they really need are policies with some clear steps and criteria where it's really easy to tell if they are being done or not. not just vague stuff about using good judgment or doing lots of investigation of alternative views that represent material risks to the project. actual specifics like a list of topic areas to survey the current state of expert knowledge in, with a blog post summarizing the research for each area.

curi:

as in they will write a blog post that gives info about things like what they read and what they think of it, rather than them just saying they did research and their final conclusion.

curi:

and they should have written policies about ways critics can get their attention, and about what circumstances they will end or not start a conversation in to preserve time.

curi:

if you don't do these things and you have some major irrationalities, then you're at high risk of a largely unproductive life. which is IMO what happens to most ppl.

curi:

most ppl are way more interested in social status hierarchy climbing than taking seriously that they're probably wrong about some highly consequential issues.

curi:

and that for some major errors they are making, better ideas are actually available and accessible right now. it's not just an error where no one knows better or only one hermit knows better.

curi:

there are a lot of factors that make this kind of analysis much harder for ppl to accept. one is they are used to viewing many issues as inconclusive. they deal with controversies by judging one side seems somewhat more right (or sometimes: somewhat higher social status) instead of actually figuring out decisive, clear cut answers.

curi:

and they think that's just kinda how reason works. i think that's a big error and it's possible to actually reach conclusions. and ppl actually do reach conclusions. they decide one side is better and act on it. they are just doing that without having any reason they regard as adequate to reach that conclusion...

curi:

some of my writing about how to actually reach conclusions re issues http://curi.us/1595-rationally-resolving-conflicts-of-ideas

curi:

this (possibility of reaching actual conclusions instead of just saying one side seems 60% right) is a theme which is found, to a significant extent, in some of the other thinkers i most admire like Eliyahu Goldratt, Ayn Rand and David Deutsch.

curi:

Rand wrote this:

curi:

Now some of you might say, as many people do: “Aw, I never think in such abstract terms—I want to deal with concrete, particular, real-life problems—what do I need philosophy for?” My answer is: In order to be able to deal with concrete, particular, real-life problems—i.e., in order to be able to live on earth.
You might claim—as most people do—that you have never been influenced by philosophy. I will ask you to check that claim. Have you ever thought or said the following? “Don’t be so sure—nobody can be certain of anything.” You got that notion from David Hume (and many, many others), even though you might never have heard of him. Or: “This may be good in theory, but it doesn’t work in practice.” You got that from Plato. Or: “That was a rotten thing to do, but it’s only human, nobody is perfect in this world.” You got it from Augustine. Or: “It may be true for you, but it’s not true for me.” You got it from William James. Or: “I couldn’t help it! Nobody can help anything he does.” You got it from Hegel. Or: “I can’t prove it, but I feel that it’s true.” You got it from Kant. Or: “It’s logical, but logic has nothing to do with reality.” You got it from Kant. Or: “It’s evil, because it’s selfish.” You got it from Kant. Have you heard the modern activists say: “Act first, think afterward”? They got it from John Dewey.
Some people might answer: “Sure, I’ve said those things at different times, but I don’t have to believe that stuff all of the time. It may have been true yesterday, but it’s not true today.” They got it from Hegel. They might say: “Consistency is the hobgoblin of little minds.” They got it from a very little mind, Emerson. They might say: “But can’t one compromise and borrow different ideas from different philosophies according to the expediency of the moment?” They got it from Richard Nixon—who got it from William James.

curi:

which is about how ppl are picking up a bunch of ideas, some quite bad, from their culture, and they don't really know what's going on, and then those ideas affect their lives.

curi:

and so ppl ought to actually do some thinking and learning for themselves to try to address this.

curi:

broadly, a liberal arts education should have provided this to ppl. maybe they should have had it by the end of high school even. but our schools are failing badly at this.

curi:

so ppl need to fill in the huge gaps that school left in their education.

curi:

if they don't, to some extent what they are at the mercy of is the biases of their teachers. not even their own biases or the mistakes of their culture in general.

curi:

schools are shitty at teaching ppl abstract ideas like an overview of the major philosophers and shitty at teaching practical guidelines like "leave 1/3 of your time slots unscheduled" and "leave at least 1/3 of your income for optional, flexible stuff. don't take on major commitments for it"

curi:

(this is contextual. like with scheduling, if you're doing shift work and you aren't really expected to think, then ok the full shift can be for doing the work, minus some small breaks. it's advice more for ppl who actually make decisions or do knowledge work. still applies to your social calendar tho.)

curi:

(and actually most ppl doing shift work should be idle some of the time, as Goldratt taught us.)

curi:

re actionable steps, above i started by addressing the risky bets / risky project premises, with first brainstorming things on the list and organizing into categories. but that isn't where project planning starts.

curi:

it starts with more like

curi:

goal (1 sentence). how the goal will be accomplished (outline. around 1 paragraph worth of text. bullet points are fine)

curi:

resource usage for major, relevant resource categories (very rough ballpark estimates, e.g. 1 person or 10 or 100 ppl work on it. it takes 1 day, 10 days, 100 days. it costs $0, $1000, $1000000.)

curi:

you can go into more detail, those are just minimums. often fine to begin with.
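the minimum plan elements above (goal, outline, ballpark resources needed vs. available) can be written down this simply. the project and all the numbers here are hypothetical, purely to show the comparison step:

```python
# hypothetical mini-plan with order-of-magnitude estimates only
plan = {
    "goal": "publish a 10-article intro series on project planning",
    "outline": ["pick topics", "draft articles", "edit", "publish weekly"],
    "needed":    {"people": 1, "days": 100, "dollars": 0},
    "available": {"people": 1, "days": 60,  "dollars": 500},
}

# compare needed vs. available; any shortfall means revise the plan or budget
shortfalls = {r: need - plan["available"][r]
              for r, need in plan["needed"].items()
              if need > plan["available"][r]}
print("shortfalls:", shortfalls)  # {'days': 40}
```

even at this crude level, the comparison surfaces a 40-day gap before any work starts, which is exactly the kind of thing ppl skip and then discover via a failed project.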

curi:

for big, complicated projects you may need a longer outline to say the steps involved.

curi:

then once u have roughly a goal and a plan (and the resource estimates help give concrete meaning to the plan), then you can look at risks, ways it may fail.

curi:

the goal should be clearly stated so that someone could clearly evaluate potential outcomes as "yes that succeeded" or "no, that's a failure"

curi:

if this is complicated, you should have another section giving more detail on this.

curi:

and do that before addressing risks.

curi:

another key area is prerequisites. can do before or after risks. skills and knowledge you'll need for the project. e.g. "i need to know how to wash a test tube". especially notable are things that aren't common knowledge and you don't already know or know how to do.

curi:

failure to succeed at all the prerequisites is one of the risks of a project. the prerequisites can give you some ideas about more risks in terms of intellectual bets being made.

curi:

some prerequisites are quite generic but merit more attention than they get. e.g. reading skill is something ppl take for granted that they have, but it's actually an area where most ppl could get value from improving. and it's pretty common that ppl's reading skills are low enough that it causes practical problems if they try to engage with something. this is a common problem with intellectual writing but it comes up plenty with mundane things like cookbooks or text in video games that provides information about what to do or how an ability works. ppl screw such things up all the time b/c they find reading burdensome and skip reading some stuff. or they read it fast, don't understand it, and don't have the skill to realize they missed stuff.

curi:

quite a few writers are not actually as good at typing as they really ought to be, and it makes their life significantly worse and less efficient.

curi:

and non-writers. cuz a lot of ppl type stuff pretty often.

curi:

and roughly what happens is they add up all these inefficiencies and problems, like being bad at typing and not knowing good methods for resolving family conflicts, and many others, and the result is they are overwhelmed and think it'd be very hard to find time to practice typing.

curi:

their inefficiencies take up so much time they have trouble finding time to learn and improve.

curi:

a lot of ppl's lives look a lot like that.


Elliot Temple | Permalink | Messages (2)