Chat about Guidance with InternetRules

curi:
FI doesn’t have a lot of guided learning. it requires ppl to be able to guide themselves some. reading stuff is one of the most guided parts. i have lists of stuff ppl can read.
most ppl aren’t very good at guiding themselves instead of being told what to do
like it’s useful to learn a variety of standard things that our society knows. some economics, psychology, history, grammar, science, etc.
a lot of ppl won’t go find resources and work on that on their own
it’s not realistic to get really good at FI without knowing mainstream stuff too like an “educated” person would know
ppl are used to a teacher telling them what to do
InternetRules:
so like with games i dont think there is much guided learning in say OW. but if u were following the SMO speedrunning guide that would be like guided learning how to speedrun
curi:
but they need their own “motor” as Rand calls it. their own ability to run their own lives. a lot of why they dislike criticism is because they feel like it bosses them around or makes them do things. they don’t know how to run their own lives and use criticism to make their life better.
InternetRules:
with OW u can be like "what things should i learn? forwards backwards? mechanics? team comps? counters?"
curi:
OW has decent guidance available. you can vod review your own play, watch strategy videos, watch vod reviews done by top players, etc
there are videos with guidance about things to practice
but it’s less guided than school
there isn’t a textbook telling you stuff step by step
InternetRules:
so like if u have someone vod review your gameplay and they are like "u r really bad at X" then that would be more guided if u try to practice X
curi:
and yeah some speedruns have really good guidance like smallant’s guide vids
that’s better guidance than schools give
InternetRules:
if they say like "u should practice X, Y, and Z in what ever order u want" that would be less guided i think
if they were like "do X, then Y, then Z" that would be more guided
curi:
guidance can be a problem b/c it’s hard for one set of instructions to work well for everyone. ppl have different goals, preferences, prior knowledge, likes, dislikes, questions, confusions, etc.
often ppl try to fit themselves to the guidance even though it’s not a great fit. but sometimes they won’t do that at all or complain a bunch. their attitude depends on how badly they want to learn it and what alternatives they have and the social status of the guide.
InternetRules:
right now i think im trying to figure out like what guidance is and what counts as guidance.
curi:
if they are happy to treat someone as an authority they will defer more and try to adjust themselves to make it work, and blame themselves if it doesn’t. but with people who society doesn’t endorse as a good master/leader/etc then people can be way more hostile.
InternetRules:
Toohey seems very much like that
curi:
guidance is like advice, tips, instructions ... stuff that tells you how or what to do.
InternetRules:
there was a part that said he like caused some of his students or something, some ppl he advised killed themselves cuz they didnt like life or something
so you can also guide yourself
curi:
it’s always partial. ppl can’t control you entirely with every detail. even with like slaves being whipped to pick cotton, no one told them how to close their hand around the cotton and which muscles to use. they were expected to make it work somehow and whipped if they screwed it up to motivate them.
InternetRules:
instead of like someone else telling u "ur bad at X" you could watch your own vod and be like "im bad at X" then start working on X. both of those would be guidance i think
curi:
actually a lot of school is like that. teachers are really bad at teaching but they know how to keep punishing kids to pressure the kid to learn it himself somehow.
pioneers don’t have guidance. they go exploring and try to figure things out. like taking a wagon to oregon in the past without a good map or knowing what you’ll do there, and you hopefully arrive and try to figure out how to make a life there. you know some basic stuff in advance like there are trees and rivers. you can farm and hunt. but you figure out details yourself.
when a new video game comes out, some ppl explore it and figure out what works well in the game and whether it’s a good game. most ppl play badly at first and then watch some guides to tell them what’s good.
InternetRules:

often ppl try to fit themselves to the guidance even though it’s not a great fit. but sometimes they won’t do that at all or complain a bunch. their attitude depends on how badly they want to learn it and what alternatives they have and the social status of the guide.

so you should try to change the guidance in some way if it doesnt fit u i think. maybe its like "ok u should learn strategy then mechanics" but u really like mechanics and doing things like trickshots so u could decide to do that first before strategy parts.

that would be like changing the guidance to suit you better
and you could look at specific parts of the strategy part of the guidance to help you get more trickshots, like u could learn about positioning which helps you stay alive longer and be in better spots so you get more and better trickshots

guidance can be a problem b/c it’s hard for one set of instructions to work well for everyone. ppl have different goals, preferences, prior knowledge, likes, dislikes, questions, confusions, etc.

im trying to think about the "ppl have different goals" part and how that affects guidance

preferences, likes, and dislikes make immediate sense to me.

if a guide is telling u how to be good at the game and like win more, but u specifically just want to do trickshots and dont care too much about winning, u just dont want to be like hard throwing with ur trickshots, then i think thats like a different goal than the guide is intending
so i guess like the info section in discord has some guided stuff. like "Everyone should read the FI articles" and saying some stuff that ppl should read and discuss and like to introduce yourself
unguided would be like if u had the FI articles but u didnt tell ppl to read them. and ppl like just by themselves think "oh FI articles those seem relevant and important i should read those"
ok so theres like benefits to guided stuff and also drawbacks i think
like if ur trying to do something new that no one has really done before but ur super used to using a bunch of guided learning that would be harder
curi:
i’d have more guided stuff but it takes work to make it
InternetRules:

guidance is like advice, tips, instructions ... stuff that tells you how or what to do.

so like telling someone about the "site:" command on search engines would be guidance about how to use search engines

when a new video game comes out, some ppl explore it and figure out what works well in the game and whether it’s a good game. most ppl play badly at first and then watch some guides to tell them what’s good.

so like if u watch pros play and try to copy that, i think the pros gameplay would also be guidance
curi:
sure re “site:”. guiding ppl for how to do small things is common and often works well. there are YT videos on how to repair a leaky sink or change a bike tire or whatever.
InternetRules:

pioneers don’t have guidance. they go exploring and try to figure things out. like taking a wagon to oregon in the past without a good map or knowing what you’ll do there, and you hopefully arrive and try to figure out how to make a life there. you know some basic stuff in advance like there are trees and rivers. you can farm and hunt. but you figure out details yourself.

so like maybe after a year guidance would be given to like ppl trying to get to the west coast, like the pioneers who already made it could give tips like:
after X miles food is really scarce for Y amount of miles, so make sure u have enough food.
and that would be guidance
curi:
how to learn to think well is harder to guide. it’s a big, broad topic and the best actions depend on what you already know, what emotions block what options, your available resources, what your friends think and how much you want to stay similar to them, etc. it gets really complicated. that’s one of the reasons for more self-guidance. you can take into account your situation way better than someone who is writing an essay for everyone and who doesn’t know about your personal situation.
InternetRules:
so like some ppl could write guidance on specific things, like mb ur more compatible with objectivism than CR, so you start with objectivism, or the other way around, and then u could like find guided content for those specific topics
so you could guide yourself on what to start learning first i guess

how to learn to think well is harder to guide. it’s a big, broad topic and the best actions depend on what you already know, what emotions block what options, your available resources, what your friends think and how much you want to stay similar to them, etc. it gets really complicated. that’s one of the reasons for more self-guidance. you can take into account your situation way better than someone who is writing an essay for everyone and who doesn’t know about your personal situation.

so like having an overview of like how to learn to think well could be good, it like could point you to a bunch of stuff then u can decide which ones to start with. but having a direct guide like "start with objectivism, then CR, then read szasz, then popper" would not work out for everyone
wait CR is pretty related to popper i think its like kind of his philosophy
curi:
ya CR = popper
InternetRules:
but like guided learning like that might work for some ppl who are already compatible, but if for some reason u really hate objectivism then it would be better to start with other stuff
curi:
when people change guidance, there’s a risk they screw it up. they often don’t understand why it is the way it is or what kind of changes would work well or not.
InternetRules:
so like maybe u want to do popper before deutsch, but like reading deutsch first might help u understand popper better
curi:
an example of changing guidance is it says “practice X” and you don’t want to do that b/c you think it’s boring or you think you’re already good at X and don’t need practice.
InternetRules:
so like if its like "learn addition, then times tables" u might think ur already good enough at addition and just skip to times tables
but u dont even know what 5 + 3 is offhand
curi:
books that aren’t “educational” – like textbooks and homeschool work books – are generally pretty unguided. like The Fountainhead doesn’t tell you what steps to take to change your life. science books will tell you some science ideas but they won’t guide you through practicing it, thinking through questions you have, and otherwise learning it. they usually make it pretty easy to read the book and keep thinking “yeah i agree” but then after you’re done you can’t actually do the math or figure out what’ll happen in some experiments because you didn’t really learn it well.
pro gameplay is an example. that helps compared to not having an example. but it isn’t guidance like “do this then do this”. you have to decide what to do from the example.
InternetRules:
so i think like figuring out bosses in vindictus is more self-guided
curi:
legendarma makes tutorials but ya lots of ppl just practice until they figure it out
ok with you to blog this chat?
InternetRules:
its fine to blog this chat yes


Elliot Temple | Permalink | Messages (0)

Andy B Harassment Continues

Andy B has been harassing my FI community using many false identities. He left after I caught and exposed him, but he returned in Aug 2020. He’s written over 100 new curi.us messages under the names Periergo and Anonymous, and his Periergo Less Wrong account has been banned by Less Wrong for targeted harassment against me.

Unfortunately, he succeeded at his goal of destroying my discussions with Less Wrong.

Andy’s actions – including threats, doxxing, spamming, infiltrating the FI Discord with multiple sock puppets for months, and posting hundreds of harassing curi.us messages – violate multiple laws. He’s attacked several other FI members, not just me. His real name is unknown.

If anyone is actually willing to discuss this matter, I will provide additional evidence as appropriate. I have extensive documentation. I already posted evidence, and none of the facts are disputed.

Andy’s Friends

Andy is a David Deutsch (DD) fan who is friends with the “CritRat” DD fan community, including the “Four Strands” subgroup. They have turned a blind eye to Andy’s actions. They’ve refused to ask him to stop or to say that they think harassment is bad. The CritRat community is toxic and has also been an ongoing source of (milder) trouble from people besides Andy.

Andy’s friends include many of DD’s associates and CritRat community leaders. They know what he’s done but apparently don’t care. They’re providing him with encouragement and legitimacy in a social group, and some of them have egged him on. The public communications with Andy that I link below are all from months after Andy’s harassment was exposed.

  • Lulie Tanett has friendly tweets with Andy (related, she tweets saying we need to use force and threats, which she considers a useful “technology”). She’s DD’s current closest associate and long time IRL friend, who he often promotes on Twitter and does joint projects like videos with. She’s promoted on DD’s website. She has a history of knowingly associating with people like online harassers, doxxers and spam botters.
  • Sarah Fitz-Claridge follows Andy on Twitter. She co-founded Taking Children Seriously with DD and is his long time IRL friend. She has a hateful attitude towards ET.
  • Sarah’s husband has friendly communications with Andy on Twitter. He’s had discussions with DD for many years. He’s said hateful things about ET.
  • Brett Hall tweets with Andy (examples 2 and 3). He’s promoted on DD’s website and by DD’s tweets, and he’s said hateful things about ET.
  • Samuel Kuypers tweets with Andy. He’s promoted on DD’s website and recently co-authored a physics paper with DD.
  • Bruce Nielson tweets with Andy (more). He’s a Four Strands leader/moderator.
  • Aaron Stupple tweets with Andy. He’s a Four Strands leader/moderator.
  • Dennis Hackethal talks with Andy publicly and was co-moderator of a DD related subreddit with Andy. He’s a Four Strands leader/moderator who has libeled and plagiarized ET. DD has promoted him on Twitter.

All of these people, as well as DD, have so far refused to communicate about this problem. They apparently have no interest in a truce or deescalation. They’re making the problem worse.

They’ve stated no grievances against FI, no terms they want, no willingness to negotiate, and no approaches to problem solving that they’d try. They’ve given no explanation of how they view the Andy problem, and they haven’t said anything to discourage the harassment coming from their community. They haven’t made no contact requests either; they just ghost me and others without explanation. (Except Dennis asked me not to email him again about Andy, which I haven’t.) I’m willing to communicate using proxies, involve a neutral mediator, or take other reasonable steps.

The situation is asymmetric. The FI community is peaceful. Harassment doesn’t come from FI towards CritRats or anyone else. If any FI member did harass someone, I’d ask them to stop or ban them, rather than encouraging them. (Or I’d discuss my doubts about the accusation, if I had any. What I wouldn’t do is ignore the matter with no comment, and ghost the victim, while continuing a friendly relationship with the person accused of extensive harassment, illegal actions and aggressive force.)

Warning

Andy hasn’t harassed FI since his Less Wrong account was banned recently. Maybe he’s decided to leave me alone because he got caught again? I hope so. Or maybe he’ll continue on any day.

Despite Andy’s repeated aggression against FI, as well as the misdeeds of other CritRats, I would still prefer to deescalate the situation.

But this is a chronic problem which is doing major harm, and Andy has a pattern of returning to harass again. I’ve been extraordinarily patient and forgiving, but this can’t go on forever. Andy started harassing us two years ago. If any CritRats are willing to speak to me about deescalating or improving this situation, please contact me (comment below, email curi@curi.us or use Discord). So far the communications of myself and others just get ignored by CritRats. They’ve repeatedly ghosted the victims instead of the harassers.

So I’m issuing a warning: If Andy comes back to harass me again, I will hold his supporters accountable. If you’re encouraging Andy while not even giving lip service to peace, and you’re refusing to communicate about any conflict resolution, then I will blame you and take defensive actions like writing about how you’re violating my rights and sharing evidence. I’ll particularly criticize the community leaders, especially the top leader, DD. If (like me) you don’t want this outcome, clean up your community and stop harassing FI.


Elliot Temple | Permalink | Message (1)

Less Wrong Banned Me

habryka wrote about why LW banned me. This is habryka’s full text plus my comments:

Today we have banned two users, curi and Periergo from LessWrong for two years each. The reasoning for both is bit entangled but are overall almost completely separate, so let me go individually:

The ban isn’t for two years. It’s from Sept 16 2020 through Dec 31 2022.

They didn’t bother to notify me. I found out in the following way:

First, I saw I was logged out. Then I tried to log back in and it said my password was wrong. Then I tried to reset my password. When submitting a new password, it then gave an error message saying I was banned and until what date. Then I messaged them on intercom and 6 hours later they gave me a link to the public announcement about my ban.

That’s a poor user experience.

Periergo is an account that is pretty easily traceable to a person that Curi has been in conflict with for a long time, and who seems to have signed up with the primary purpose of attacking curi. I don't think there is anything fundamentally wrong about signing up to LessWrong to warn other users of the potentially bad behavior of an existing user on some other part of the internet, but I do think it should be done transparently.

It also appears to be the case that he has done a bunch of things that go beyond merely warning others (like mailbombing curi, i.e. signing him up for tons of email spam that he didn't sign up for, and lots of sockpupetting on forums that curi frequents), and that seem better classified as harassment, and overall it seemed to me that this isn't the right place for Periergo.

Periergo is a sock puppet of Andy B. Andy harassed FI long term with many false identities, but left for months when I caught him, connected the identities, and blogged it. But he came back in August 2020 and has written over 100 comments since returning, and he made a fresh account on Less Wrong for the purpose of harassing me and disrupting my discussions there. He essentially got away with it. He stirred up trouble and now I’m banned. What does he care that his fresh sock puppet, with a name he’ll likely never use again anywhere, is banned? And he’ll be unbanned at the same time as me in case he wants to further torment me using the same account.

Curi has been a user on LessWrong for a long time, and has made many posts and comments. He also has the dubious honor of being by far the most downvoted account in all of LessWrong history at -675 karma.

I started at around -775 karma when I returned to Less Wrong recently and went up. I originally debated Popper, induction and cognitive biases at LW around 9 years ago and got lots of downvotes. I returned around 3 years ago when an LW moderator invited me back because he liked my Paths Forward article. That didn’t work out and I left again. I returned recently for my own reasons, instead of because someone incorrectly suggested that I was wanted, and it was going better. I knew some things to expect, and some things that wouldn’t work, and I'd just read LW's favorite literature, RAZ.

BTW, I don’t know how my karma is being calculated. My previous LW discussions were at the 1.0 version of the site where votes on posts counted for 10 karma, and votes on comments counted for 1 karma. When I went back the second time, a moderator boosted my karma enough to be positive so that I could write posts instead of just comments. LW 2.0 allows you to write posts while having negative karma and votes on posts and comments are worth the same amount, but your votes count for multiple karma if you have high karma and/or use the strong vote feature. I don’t know how old stuff got recalculated when they did the version 2.0 website.

Overall I have around negative 1 karma per comment, so that’s … not all that bad? Or apparently the lowest ever. If downvotes on the old posts still count 10x then hundreds of my negative karma is from just a few posts.

In general, I think outliers should be viewed as notable and potentially valuable, especially outliers that you can already see might actually be good (as habryka says about me below). Positive outliers are extremely valuable.

The biggest problem with his participation is that he has a history of dragging people into discussions that drag on for an incredibly long time, without seeming particularly productive, while also having a history of pretty aggressively attacking people who stop responding to him. On his blog, he and others maintain a long list of people who engaged with him and others in the Critical Rationalist community, but then stopped, in a way that is very hard to read as anything but a public attack. It's first sentence is "This is a list of ppl who had discussion contact with FI and then quit/evaded/lied/etc.", and in-particular the framing of "quit/evaded/lied" sure sets the framing for the rest of the post as a kind of "wall of shame".

I consider it strange to ban me for stuff I did in the distant past but was not banned for at the time.

I find it especially strange to ban me for 2 years over stuff that’s already 3 or 9 years old (the evaders guest post by Alan is a year old, and btw "evade" is standard Objectivist philosophy terminology). I already left the site for longer than the ban period. Why is a 5 year break the right amount instead of 3? habryka says below that he thinks I was doing better (from his point of view and regarding what the LW site wants) this time.

They could have asked me about that particular post before banning me, but didn’t. They also could have noted that it’s an old post that only came up because Andy linked it twice on LW with the goal of alienating people from me. They’re letting him get what he wanted even though they know he was posting in bad faith and breaking their written rules.

I, by contrast, am not accused of breaking any specific written rule that LW has, but I’ve been banned anyway with no warning.

Those three things in combination, a propensity for long unproductive discussions, a history of threats against people who engage with him, and being the historically most downvoted account in LessWrong history, make me overall think it's better for curi to find other places as potential discussion venues.

I didn’t threaten anyone. I’m guessing it was a careless wording. I think habryka should retract or clarify it. Above habryka used “attack[]” as a synonym for criticize. I don’t like that but it’s pretty standard language. But I don’t think using “threat[en]” as a synonym for criticize is reasonable.

“threaten” has meanings like “state one's intention to take hostile action against someone in retribution for something done or not done” and “express one's intention to harm or kill“ (New Oxford Dictionary). This is the one thing in the post that I strongly object to.

I do really want to make clear that this is not a personal judgement of curi. While I do find the "List of Fallible Ideas Evaders" post pretty tasteless, and don't like discussing things with him particularly much, he seems well-intentioned, and it's quite plausible that he could be an amazing contributor to other online forums and communities. Many of the things he is building over on his blog seem pretty cool to me, and I don't want others to update on this as being much evidence about whether it makes sense to have curi in their communities.

I do also think his most recent series of posts and comments is overall much less bad than the posts and comments he posted a few years ago (where most of his negative karma comes from), but they still don't strike me as great contributions to the LessWrong canon, are all low-karma, and I assign too high of a probability that old patterns will repeat themselves (and also that his presence will generally make people averse to be around, because of those past patterns). He has also explicitly written a post in which he updates his LW commenting policy towards something less demanding, and I do think that was the right move, but I don't think it's enough to tip the scales on this issue.

So I came back after 3 years, posted in a way they liked significantly better … I’m building cool things and plausibly amazing while also making major progress at compatibility with LW … but they’re banning me anyway, even though my old posts didn’t get me banned.

More broadly, LessWrong has seen a pretty significant growth of new users in the past few months, mostly driven by interest in Coronavirus discussion and the discussion we hosted on GPT3. I continue to think that "Well-Kept Gardens Die By Pacifism", and that it is essential for us to be very careful with handling that growth, and to generally err on the side of curating our userbase pretty heavily and maintaining high standards. This means making difficult moderation decision long before it is proven "beyond a reasonable doubt" that someone is not a net-positive contributor to the site.

In this case, I think it is definitely not proven beyond a reasonable doubt that curi is overall net-negative for the site, and banning him might well be a mistake, but I think the probabilities weigh heavily enough in favor of the net-negative, and the worst-case outcomes are bad-enough, that on-net I think this is the right choice.

I don’t see why they couldn’t wait for me to do something wrong to ban me, or give me any warning or guidance about what they wanted me to do differently. I doubt this would have happened this way if Andy hadn’t done targeted harassment.

At least they wrote about their reasons. I appreciate that they’re more transparent than most forums.

In another message, habryka clarified his comment about others not updating their views of me based on this ban:

The key thing I wanted to communicate is that it seems quite plausible to me that these patterns are the result of curi interfacing specifically with the LessWrong culture in unhealthy ways. I can imagine him interfacing with other cultures with much less bad results.

I also said "I don't want others to think this is much evidence", not "this is no evidence". Of course it is some evidence, but I think overall I would expect people to update a bit too much on this, and as I said, I wouldn't be very surprised to see curi participate well in other online communities.

I’m unclear on what aspect of LW culture that I’m a mismatch for. Or put another way: I may interface better with other cultures which have or lack what particular characteristics compared to LW?


Also, LW didn't explain how they decided on ban lengths. 2.3 year bans don't correspond to solving the problems raised. Andy or I could easily wait and then do the stuff LW doesn't want. They aren't asking us to do anything to improve or to provide any evidence that we've reformed in some way. Nor are they asking us to figure out how we can address their concerns and prevent bad outcomes. They're just asking us to wait and, I guess, counting on us not to hold grudges. Problems don't automatically go away due to time passing.

Overall, I think LW’s decision and reasoning are pretty bad but not super unreasonable. I wouldn’t expect better at most forums and I’ve seen much worse. Also, I’m not confident that the reasoning given fully and accurately represents the actual reasons. I'm not convinced that they will ban other people using the same reasoning (that they didn't break any particular rules but might be a net-negative for the site), especially considering that "the moderators of LW are the opposite of trigger-happy. Not counting spam, there is on average less than one account per year banned." (source from 2016, maybe they're more trigger-happy in 2020, I don't know).


Elliot Temple | Permalink | Messages (13)

Max Microblogging

This is Max's discussion thread. Max has agreed to only post as "Max" in this thread.


Elliot Temple | Permalink | Messages (14)

curi's Microblogging

This is a thread for me to post stuff that's smaller than a blog post. You can reply and discuss here but don't start your own topics here. You can do that in Open Discussion or at any relevant post.


Elliot Temple | Permalink | Messages (3)

Eliezer Yudkowsky Is a Fraud

Eliezer Yudkowsky tweeted:

EY:

What on Earth is up with the people replying "billionaires don't have real money, just stocks they can't easily sell" to the anti-billionaire stuff? It's an insanely straw reply and there are much much better replies.

DL:

What would be a much better reply to give to someone who thinks for example that Elon Musk is hoarding $100bn in his bank account?

EY:

A better reply should address the core issue whether there is net social good from saying billionaires can't have or keep wealth: eg demotivating next Steves from creating Apple, no Gates vaccine funding, Musk not doing Tesla after selling Paypal.

Eliezer Yudkowsky (EY) frequently brings up names (e.g. Feynman or Jaynes) of smart people involved with science, rationality or sci-fi. He does this throughout RAZ. He communicates that he's read them, he's well-read, he's learned from them, he has intelligent commentary related to stuff they wrote, etc. He presents himself as someone who can report to you, his reader, about what those books and people are like. (He mostly brings up people he likes, but he also sometimes presents himself as knowledgeable about people he's unfriendly to, like Karl Popper and Ayn Rand, who he knows little about and misrepresents.)

EY is a liar who can't be trusted. In his tweets, he reveals that he brings up names while knowing basically nothing about them.

Steve Jobs and Steve Wozniak were not motivated by getting super rich. Their personalities are pretty well known. I guess EY never read any of the biographies and hasn't had conversations about them with knowledgeable people. Or maybe he doesn't connect what he reads to what he says. (I provide some brief, example evidence at the end of this post in which Jobs tells Ellison "You don’t need any more money." EY is really blatantly wrong.)

EY brings up Jobs and Wozniak ("Steves") to make his assertions sound concrete, empirical and connected to reality. Actually he's doing egregious armchair philosophizing and using counter examples as examples.

Someone who does this can't be trusted whenever they bring up other names either. It shows a policy of dishonesty: either carelessness and incompetence (while dishonestly presenting himself as a careful, effective thinker) or outright lying about his knowledge.

There are other problems with the tweets, too. For example, EY is calling people insane instead of arguing his case. And EY is straw manning the argument about billionaires having stocks not cash – while complaining about others straw manning. Billionaires have most of their wealth in capital goods, not consumption goods (that's the short, better version of the argument he mangled), and that's a more important issue than the incentives that EY brings up. EY also routinely presents himself as well-versed in economics but seems unable to connect concepts like accumulation of capital increasing the productivity of labor, or eating the seed corn, to this topic.

Some people think billionaires consume huge amounts of wealth – e.g. billions of dollars per year – in the form of luxuries or other consumption goods. Responding to a range of anti-billionaire viewpoints, including that one, by saying basically "They need all that money so they're incentivized to build companies." is horribly wrong. They don't consume anywhere near that much wealth per year. EY comes off as justifying them doing something they don't do that would actually merit concern if they somehow did it.

If Jeff Bezos were building a million statues of himself, that'd be spending billions of dollars on luxuries/consumption instead of production. That'd actually somewhat harm our society's capital accumulation and would merit some concern and consideration. But – crucial fact – the real world looks nothing like that. EY sounds like he's conceding that that's actually happening instead of correcting people about reality, and he's also claiming it's obviously fine because rich people love their statues, yachts and sushi so much that it's what inspires them to make companies. (It's debatable, and there are upsides, but it's not obviously fine.)


Steve Jobs is the authorized biography by Walter Isaacson. It says (context: Steve didn't want to do a hostile takeover of Apple) (my italics):

“You know, Larry [Ellison], I think I’ve found a way for me to get back into Apple and get control of it without you having to buy it,” Jobs said as they walked along the shore. Ellison recalled, “He explained his strategy, which was getting Apple to buy NeXT, then he would go on the board and be one step away from being CEO.” Ellison thought that Jobs was missing a key point. “But Steve, there’s one thing I don’t understand,” he said. “If we don’t buy the company, how can we make any money?” It was a reminder of how different their desires were. Jobs put his hand on Ellison’s left shoulder, pulled him so close that their noses almost touched, and said, “Larry, this is why it’s really important that I’m your friend. You don’t need any more money.”

Ellison recalled that his own answer was almost a whine: “Well, I may not need the money, but why should some fund manager at Fidelity get the money? Why should someone else get it? Why shouldn’t it be us?”

“I think if I went back to Apple, and I didn’t own any of Apple, and you didn’t own any of Apple, I’d have the moral high ground,” Jobs replied.

“Steve, that’s really expensive real estate, this moral high ground,” said Ellison. “Look, Steve, you’re my best friend, and Apple is your company. I’ll do whatever you want.”

(Note that Ellison, too, despite having a more money-desiring attitude, didn't actually prioritize money. He might be the richest man in the world today if he'd invested heavily in Steve Jobs' Apple, but he put friendship first.)


Elliot Temple | Permalink | Messages (3)

Learning Updates Thread

If you want to learn philosophy or rational thinking, you need to do some stuff on a regular basis. Read books, write notes, write outlines, write articles, do journaling, study stuff, have discussions, etc.

I suggest you write a short, weekly update. How did your week go? What did you do? Did you make progress on your goals? (Figure out some goals and write them down. If in doubt, talk about it or read and watch a wide variety of things.) Do you want to make any changes going forward? Sharing this update is optional. You could do it like journaling.

Write a longer, monthly update. Reflect more on how learning is going, what's working or not working, whether you should adjust any goals or stop or start any projects, what got done or not, etc.

Sharing monthly updates is recommended. If you don't share monthly updates or explain why not, I will not regard you as actually trying to learn philosophy.

I think it'd be best if a bunch of people shared monthly updates at the same time. So let's use the first of the month. Post them below. Put the month and your name in the title field when posting a monthly update, and leave the title blank for anything else, so the monthly updates stand out more.

Posting on your own website and sharing a link here is fine too. With the link, include at least one paragraph of text with some summary and some info to interest people in clicking the link.


Elliot Temple | Permalink | Messages (15)

Mathematical Inconsistency in Solomonoff Induction?

I posted this on Less Wrong 10 days ago. At the end, I summarize the answer they gave.


What counts as a hypothesis for Solomonoff induction? The general impression I’ve gotten in various places is “a hypothesis can be anything (that you could write down)”. But I don’t think that’s quite it. E.g. evidence can be written down but is treated separately. I think a hypothesis is more like a computer program that outputs predictions about what evidence will or will not be observed.

If X and Y are hypotheses, then is “X and Y” a hypothesis? “not X”? “X or Y?” If not, why not, and where can I read a clear explanation of the rules and exclusions for Solomonoff hypotheses?

If using logic operators with hypotheses does yield other hypotheses, then I’m curious about a potential problem. When hypotheses are related, we can consider what their probabilities should be in more than one way. The results should always be consistent.

For example, suppose you have no evidence yet. And suppose X and Y are independent. Then you can calculate P(X or Y) in terms of P(X) and P(Y). You can also calculate the probabilities of all three based on their lengths (that’s the Solomonoff prior). These should always match but I don’t think they do.

The non-normalized probability of X is 1/2^len(X).

So you get:

P(X or Y) = 1/2^len(X) + 1/2^len(Y) - 1/2^(len(X)+len(Y))

and we also know:

P(X or Y) = 1/2^len(X or Y)

since the left hand sides are the same, that means the right hand sides should be equal, by simple substitution:

1/2^len(X or Y) = 1/2^len(X) + 1/2^len(Y) - 1/2^(len(X)+len(Y))

Which has to hold for any X and Y.

We can select X and Y to be the same length and to minimize compression gains when they’re both present, so len(X or Y) should be approximately 2len(X). I’m assuming a basis, or choice of X and Y, such that “or” is very cheap relative to X and Y, hence I approximated it to zero. Then we have:

1/2^(2len(X)) = 1/2^len(X) + 1/2^len(X) - 1/2^(2len(X))

which simplifies to:

1/2^(2len(X)) = 1/2^len(X)

Which is false (since len(X) isn’t 0). And using a different approximation of len(X or Y) like 1.5len(X), 2.5len(X) or even len(X) wouldn’t make the math work.

So Solomonoff induction is inconsistent. So I assume there’s something I don’t know. What? (My best guess so far, mentioned above, is limits on what is a hypothesis.)
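The claimed mismatch is easy to check numerically. Here's a minimal sketch, assuming the non-normalized prior p(H) = 1/2^len(H), hypothetical 10-bit lengths for X and Y, and a near-free "or" (so len(X or Y) ≈ 2·len(X)), as in the argument above:

```python
# Numeric sketch of the claimed inconsistency. The lengths are made up;
# any equal lengths show the same kind of gap.
def prior(length):
    # Non-normalized Solomonoff-style prior: 1/2^len(H)
    return 2.0 ** -length

len_x = len_y = 10       # assumed program lengths for X and Y
len_or = len_x + len_y   # len(X or Y) ~ 2*len(X); "or" approximated as free

# Inclusion-exclusion for independent X and Y:
p_rule = prior(len_x) + prior(len_y) - prior(len_x) * prior(len_y)

# Direct length-based prior for the combined hypothesis:
p_length = prior(len_or)

# The two ways of computing P(X or Y) disagree by about a factor of 2^len(X):
print(p_rule / p_length)  # → 2047.0, i.e. roughly 2**11
```

With these assumptions the inclusion-exclusion value is about 2000 times larger than the length-based value, matching the algebra above.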

Also here’s a quick intuitive explanation to help explain what’s going on with the math: P(X) is both shorter and less probable than P(X or Y). Think about what you’re doing when you craft a hypothesis. You can add bits (length) to a hypothesis to exclude stuff. In that case, more bits (more length) means lower prior probability, and that makes sense, because the hypothesis is compatible with fewer things from the set of all logically possible things. But you can also add bits (length) to a hypothesis to add alternatives. It could be this or that or a third thing. That makes hypotheses longer but more likely rather than less likely. Also, speaking more generally, the Solomonoff prior probabilities are assigned according to length with no regard for consistency amongst themselves, so it’s unsurprising that they’re inconsistent unless the hypotheses are limited in such a way that they have no significant relationships with each other that would have to be consistent, which sounds hard to achieve and I haven’t seen any rules specified for achieving that (note that there are other ways to find relationships between hypotheses besides the one I used above, e.g. looking for subsets).


Less Wrong's answer, in my understanding, is that in Solomonoff Induction a "hypothesis" must make positive predictions like "X will happen". Probabilistic positive predictions – assigning probabilities to different specific outcomes – can also work. Saying X or Y will happen is not a valid hypothesis, nor is saying X won't happen.

This is a very standard trick by so-called scholars. They take a regular English word (here "hypothesis") and define it as a technical term with a drastically different meaning. This isn't clearly explained anywhere and lots of people are misled. It's also done with e.g. "heritability".

Solomonoff Induction is just sequence prediction. Take a data sequence as input, then predict the next thing in the sequence via some algorithm. (And do it with all the algorithms and see which do better and are shorter.) It's aspiring to be the oracle in The Fabric of Reality but worse.


Elliot Temple | Permalink | Messages (5)

Use RSS to Subscribe to Blogs

RSS feeds let you get updates when a website has new stuff. You can subscribe to sites you're interested in and then get notifications about new material instead of checking for it yourself. This works especially well with sites that don't update often.

You should subscribe to my feeds:

You need an RSS Reader app. I like Vienna, a free open source Mac app. There are many others, e.g. BazQux is a web app that my friend likes.

Many apps will let you import RSS feeds instead of adding them all yourself. Download my subscriptions to import. After importing, you can delete whatever you don't want.

You should also sign up for my free email newsletter. I'll send you around one email every two weeks.

Most blogs and similar sites have RSS feeds. Usually you can use their home page and the RSS app will find the correct feed URL for you. You can also subscribe to a YouTube channel or podcast.
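Under the hood, a feed is just XML that your reader app fetches and parses. Here's a minimal sketch of what that looks like, using Python's standard library and a made-up example feed (not a real feed from any actual site):

```python
# Parse a (hypothetical) RSS 2.0 feed and list its entries, which is
# roughly what an RSS reader does after downloading the feed URL.
import xml.etree.ElementTree as ET

feed_xml = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example Blog</title>
    <item>
      <title>First Post</title>
      <link>https://example.com/first</link>
    </item>
    <item>
      <title>Second Post</title>
      <link>https://example.com/second</link>
    </item>
  </channel>
</rss>"""

root = ET.fromstring(feed_xml)
for item in root.iter("item"):
    # Print each entry's title and link
    print(item.findtext("title"), "-", item.findtext("link"))
```

A real reader would also download the XML over HTTP, remember which items you've already seen, and notify you about new ones.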

Don't rely on getting all your info from social media sites. Don't just read whatever's in your Facebook, Twitter or Reddit feed. Choose and subscribe to some sites yourself.


Elliot Temple | Permalink | Messages (2)

Clarifying My Beliefs

This post will clarify a few of my ideas that people have concerns or misunderstandings about.

Politics

Freedom and Capitalism

I want a small government with limited power. The proper function of government is to protect people against force – e.g. military, cops, courts. I want a society with tons of freedom including economic freedom (which is what “capitalism” actually means). See my essay on liberalism. I’m not necessarily opposed to all anarchist ideas (though most are awful), but I think we should aim for minimal government and try that for a while before thinking we know in advance what further reforms would be a good idea.

I respect thinkers like Ludwig von Mises and Ayn Rand. I do not respect lots of their followers or the people commonly associated with them (e.g. I disagree with Hayek, Rothbard, most libertarians and most Objectivists). I also disagree with most Republicans and most Democrats.

In general I like individual thinkers, not groups.

I think political philosophy and economics are more important than politics. By politics I mean stuff like current events, news and election issues. Current political issues include abortion, gun control, immigration, racism, feminism, rent control, tax policy, government-run healthcare and environmentalist policies. People should learn how to think effectively about general principles before trying to debate those specific controversies.

People are partially right to complain about corporations and Wall Street. Many of their arguments are incorrect, but there is shady, unfair, exploitive stuff going on. But the problem is mostly government involvement in the economy and lack of economic freedom. For example, the main source of monopolies is government laws that make it harder to compete with existing companies, e.g. by increasing barriers to entry.

Trump

I like Donald Trump better than Hillary Clinton or Joe Biden. I think Trump has moderate political views and likes America. I disagree with his protectionist economic ideas (like tariffs and trade wars), though I do agree with his intuition that there’s some problem related to trade with China and that some sort of action should be taken. I think Trump screwed up by hiring a bunch of establishment Republicans and he handled coronavirus badly.

I appreciate that Trump is somewhat, partially challenging the ruling elite class of journalists, media pundits, unelected political influencers, professors, politicized non-profits, lobbyists, bankers, administrators and bureaucrats. Most politicians in both parties are part of that elite which is oppressing and ripping off the American people (the middle and lower classes, and the wealthy people without friends in high places).

Tribalism

Most people bring a tribalist, follower mindset to politics. They cheer for their team, just like sports. They’re super biased. They don’t understand the other side(s) very well. They don’t rationally study or debate the issues.

This doesn’t mean each tribe is equally right or wrong. Currently, I agree with Republicans more than Democrats. That’s despite being an atheist and growing up in a Democrat family in a heavily Democrat area. I do disagree with lots of Republican ideas.

Immigration

Immigration has been used for decades to try to dilute Western civilization by bringing in people who think in other ways and have other values. Western countries have been doing a bad job of standing up for their values and assimilating immigrants. There are ongoing debates about whether Western values are worth standing up for. While this debate is ongoing, I think immigration should be slowed way down. I don’t think bringing in immigrants who agree with you, and vote for your tribe, is a legitimate way to resolve a debate.

I don’t think white people have a monopoly on Western values. I don’t think genetics are destiny. I know many other people who criticize immigration are racist in some way (that doesn’t merit all the tribalist hatred they’re receiving, which often comes from people who are even more racist). They think there are race-related IQ genes and that there is such a thing as a biological “human nature” which is controlled by genes and therefore can vary by race. I have a strong “left wing” position – shared by right winger Ayn Rand – that ideas and culture are what matter, not biology. (On a related note, I think gender roles are socially constructed, but I don’t believe some of the other ideas commonly associated with that claim. While males and females biologically differ in some physical characteristics, I don’t think biology is the cause of observed mental differences like personality traits or math success.)

I disagree with many economic arguments against immigration. In a capitalist society, immigrants don’t drive down wages. That’s because, as the workforce gets larger and wages go down, it’s easier to start a business (because you can hire employees more cheaply), so more businesses get started, which pushes wages back up. However, currently US regulations are hostile to starting new businesses. When it’s a huge burden to start a business and hire people, then immigration can drive wages down. But “they took our jobs” is a bad argument.

I think the USA should screen immigrants better. Instead of letting so many immigrants in by lottery or extended family ties (including birthright citizenship granted to babies born here by tourists), I think immigrants should be admitted more based on having American values and being ready and able to do productive work. Although I think IQ tests (and the concept of IQ itself) are highly flawed and culturally biased, I think they’d be better than nothing for an immigration screening method. English language proficiency tests would also help.

Identity Politics

I’m opposed to identity politics. Instead of doing affirmative action or having race-based groups like “Black Lives Matter”, I think we should stop looking at people’s skin color. I want a more color blind approach.

I do not think racism, sexism, homophobia, white privilege, etc., are solved issues. There are significant problems there (on the part of both Democrats and Republicans). The current activism – like riots and cancel culture – is making things worse and is making it harder to reform anything.

PUA

Lots of “pickup artists” are idiots. Sometimes their idiocy crosses the line into assault. People like Good Looking Loser, Russel Hartley and RooshV are awful.

In 1994, the alt.seduction.fast (ASF) discussion group was started on Usenet. Some people there figured out some good ideas about how dating and social dynamics work. Of course some people there were dumb, too. Representatives of that group, which I respect, include:

It was a discussion community. Many people participated productively and there are archives of ideas people liked, e.g. Classic PUA Writings. Many of them also went out and met people in person. They weren’t just armchair philosophers.

The ASF people aren’t perfect but I think they have some genuine knowledge. They managed to analyze, describe and understand social dynamics in ways that other groups haven’t. This information is useful for all members of our culture, male or female, in order to better understand the unwritten rules of our society. And although the ASF focus relates to dating, many of the social dynamics principles apply to other social situations too.

Other so-called pickup artists vary. Some learned a lot from the ASF crowd or participated in those discussions. Others didn’t and are usually clueless. Some of the ASF knowledge has spread elsewhere but it’s often mixed up with bad ideas too. The “red pill”, “mgtow (men going their own way)”, “MRA (men’s rights advocates)” and “manosphere” groups often have some ASF knowledge mixed in along with some of their own bad ideas.

Because so many fakers try to sell their pickup artist advice (advice that doesn’t work and is often offensive), the ASF people pretty much stopped using the terms “pickup artist” and “PUA”. I’ve been using the term “PUA” anyway but I’ll consider calling them the ASF community or specifying individuals in order to reduce confusion.

Claims about how our culture works are not claims that it should be that way. I’m not in favor of social climbing, promiscuity or pandering to whatever other people want. I’m also against lying to or tricking women (or anyone). The ASF people, contrary to some of the attacks on them, are more anti-lying than the typical person.

FI Members

There are no senior members of FI who have been around a while (years), learned FI well, and who make good role models. Don’t try to copy anyone or assume they’re good and you should try to be like them.

I don’t endorse anyone’s learning behavior, and I certainly don’t endorse their lecturing behavior. Some newer members have potential (and older members could change) but none have established themselves as doing a great job.

Don’t try to copy me either. That will lead to cargo culting. You have to learn things yourself and follow your own judgment. I’m too different in too many ways. You should expect to misunderstand me a lot, not to be able to do what you think I do and have it work in your own life situation.

Being a Discord moderator is not an endorsement of someone’s ideas. Being in a video with me or having a guest post on curi.us is not an endorsement of a person in general.

On a related note, I think everyone but me should be posting anonymously. (Because of cancel culture. And by posting anonymously I mean use something that isn't your real name or connected to your real name.) I’d prefer to be anonymous myself. I think it’s way too late for me to change (and maybe too hard to stay anonymous when e.g. selling stuff, meeting people IRL, and developing a reputation as a public intellectual) but everyone else should go anonymous. What’s the benefit of using your real name?

Twitter

I’m going to stop posting on Twitter in general. Most of what I posted was just retweets without me writing anything. I dislike Upvotes and Likes in general (pointless) but I found retweets ok (shows stuff to my friends/fans) and tried them for a while. Retweets were not endorsements. I never treated Twitter like a discussion forum or serious place. I will continue to read Twitter because I like a few people there. I’m going to stop retweeting because Twitter has an awful, tribalist political culture which I don’t want to contribute to. Plus Twitter shadowbanned me and is part of the cancel culture which is trying to suppress right wing speech.

I think FI people like Khaaan and Justin are tribalist tweeters who don’t understand the other side(s) of the debate well. That doesn’t stop them from being right or sharing good info over 50% of the time. But they ought to learn how to think rationally instead of doing so much politics. Even if they were going to do politics stuff, their approach is basically unproductive because they’re so biased for their tribe.

Offensive Comments

I’m not careful about what I say all the time. I don’t believe in political correctness. I think misunderstandings will happen whatever you do and it’s not worth the effort to walk on eggshells around everyone. Better if people mostly have thick skins rather than police their own speech.

If you dislike something I say, you can ask about it or criticize it. (Try to understand what it is before attacking it, please.) We might disagree. If so, I’ll have a thought-out position that you can hopefully respect, even if I didn’t explain it all upfront. I can’t preemptively explain everything I think every time I mention a topic. People can ask questions or read my writing to find out more.

I often disagree with all mainstream positions on a topic in some way. When disagreeing with one view, I don’t always communicate what I think about all the other positions. This leads to misunderstandings because people assume if you criticize one tribe then you must be part of an opposing tribe.

Lots of “jokes” reveal genuine racism or other bigotry. Speech is meaningful. I’m open to rational questions and criticism – I won’t just automatically dismiss issues as minor. But please try not to assume what I think and don’t begin the conversation in an adversarial way.


Elliot Temple | Permalink | Messages (45)

Social Maneuvering

People prefer giving orders to taking them. They prefer granting permission (or not) to having to ask permission. They prefer questioning others over being questioned. They prefer more pressure on others to answer to them, and less pressure on themselves to answer to others.

People try to figure out how to achieve these things. This begins in early childhood when they discover that their parents have social (and physical) power over them. Then they have to answer to their teachers, to babysitters, to the parents at a friend’s house they’re visiting, to adult relatives, and more.

They find at the peer level that some people get what they want more, are listened to more, are respected more, and so on. Doing better with peers is realistically achievable in the short term.

They find they look weak when they’re insecure, needy, unconfident, reactive, seeking approval from others, acting like a follower not a leader, etc. They learn how to act to get others to put in more effort than they do, to have people come to them, to get approval while looking like they aren’t trying to get approval, to hide their effort, to make comments highlighting others’ weaknesses without being considered an aggressor, to recognize and follow trends a little on the early side. They learn to hide weakness and ignorance. They learn to be dishonest. Some things are not even really considered dishonest, socially, they’re just normal. But they aren’t how a scientist thinks. A scientist volunteers relevant info in pursuit of truth instead of looking for opportunities to withhold unfavorable info. And in short, doing anything less than an idealized scientist would is dishonest to some extent.

People discover there are both formal and informal social power structures. Being a teacher, parent or boss is a formal position. It’s an explicit label. The leader of a group of friends is an informal position. There’s no contract or clear rules. It can just change as opinions change.

Sometimes formal and informal social power have a mismatch. A general may not have the respect of the soldiers he gives orders to. A boss may struggle to get his subordinates to listen to him – formally he’s in charge but informally people don’t see him that way.

Mismatches aren’t terribly common. People whose informal social power is significantly below their formal position often get replaced. More often, people don’t get positions in the first place if they don’t have an appropriate informal social status. Mismatches are often caused by giving out positions due to favoritism instead of merit, e.g. getting a position for one’s child who didn’t (socially) earn it (and often didn’t earn it on objective world merit, like knowledge and skill, either). Sometimes mismatches develop over time – things started out OK but a person lost respect or got undermined or something over time. Status can be unstable. People often get promotions to positions they’re expected to probably be able to handle, but there’s no guarantee and it doesn’t always work out.

However, mismatches are extremely common when no one cares what the subordinates think or want. If the subordinates are there voluntarily – e.g. customers or people who could get a different job or transfer to a different division in the company – it puts pressure against mismatches. However, when the subordinates are children, prison inmates, involuntary psychiatry patients, or the elderly in an old folks home, then the people in power may be hated by their subordinates and stay in power anyway. This is also a big problem with the government and its citizens – there is some accountability but generally not enough.

Anyway, people learn how to behave so they do well in terms of informal social status. There are incentives and benefits there, and it’s also one of the main things that leads to gaining and keeping formal social status.


Elliot Temple | Permalink | Messages (2)

Social Dialog with Analysis

Communications and actions have two main interpretations: social and objective.

Objective interpretations look at the literal meaning. They use rational and scientific analysis. They try to avoid logical errors. They aim to account for all the evidence and contradict none. They don’t judge the truth of ideas by the attributes of the person who thought of or communicated the idea.

Social interpretations consider the speaker or actor in relation to other people. What is he trying to do to or get from others? How does the action/communication affect the status of the actor and others? Is someone being needy or reactive? Is someone showing weakness? Is someone socially attacking someone else? Is someone becoming more or less connected with something high or low status (e.g. tribe allegiance signals).

Social interpretations are allowed to focus on some evidence and ignore or contradict other evidence. They can be illogical and unscientific. Some evidence is impolite to use or mention. Some conclusions are jumped to on a basis like “since the social meaning of that action/communication is so strong and obvious, you must have done it intentionally and chosen that social meaning on purpose, no matter what you say about a misunderstanding or that you were focusing on an objective goal.”

Example (try to read hyper literally, and look at this really logically, in order to understand Sue’s perspective):

Joe: I don’t understand what you said.
Sue: What are you planning to do about that?
Joe: I just asked you to explain.
Sue: You didn’t make a request or ask a question.
Joe: I just did. wtf!

Joe is focused on the social world while Sue is in objective, logical mode. Joe expects that, when he speaks, Sue will guess what he wants from her and why he said it. Joe takes this for granted so much that he doesn’t notice the difference between asking for something explicitly and hinting at it – he uses hints and thinks he asked. Sue thinks Joe’s statements provide information and that it isn’t her job to read Joe’s mind. Mind reading always runs into a bunch of ambiguity that’s hard to make guesses about, and the whole point of a conversation is for people to communicate their ideas themselves.

The conversation can easily get worse. Let’s continue it:

Sue: Quote?
Joe: “I don’t understand what you said.”

This is actually somewhat unrealistic. People like Joe often won’t quote at all or will misquote. Often they ambiguously indicate which message they mean, e.g. “the statement we’re talking about” or “the message that starts with ‘I’”. Sometimes they say that asking for quotes is unreasonable or unnecessary, or they stop replying. But let’s not get distracted by those problems.

Sue: Is that an imperative or interrogative sentence?
Joe: No.
Sue: What are the verb and grammatical subject?
Joe: “do understand” and “I”.

These are atypically accurate and patient grammar answers by Joe. Things could have gotten a lot worse here. They’re atypical because they’re objective-mode answers, not social-mode answers.

The social-mode meaning of “Is that an imperative or interrogative sentence?” is that Sue is acting like a teacher and putting Joe in an inferior student role. Sue asks the questions and Joe has the role of being questioned. Sue can initiate things of her choice and Joe has to react to Sue’s whims. She’s pushing this kind of framing on the situation. So typically someone like Joe will avoid answering the question in order to deny and push back against that framing. He’ll try to get Sue answering his questions or otherwise establish social power over her. He’ll avoid compliance on purpose. He might say something especially sophisticated to try to prove how grown up he is. He’ll think Sue is calling him dumb by asking him a fairly basic grammar question about the sort of thing he was supposed to have learned in school over 10 years ago.

Sue: So isn’t it a declarative statement about yourself?
Joe: You knew what I meant.

This doesn’t answer Sue’s question. It’s also asking for mind reading. It’s the sort of thing that only works with similar people. It’s a reasonable guess about most people from Joe’s subculture (they knew what he wanted and are being difficult on purpose), though there’s evidence throughout that Sue has a pretty different perspective on the world than Joe and genuinely found it problematic to assume instead of relying on communication. Joe’s attitude makes conversation very hard with people very different from himself. It’s bad at engaging with other frameworks or points of view.

Asking multiple questions in a row amplifies the teacher/student dynamic. That increases the pressure on Joe to break out of it. Sue isn’t thinking about the social meaning of what she says, so she doesn’t control it, but Joe keeps looking for it and reacting to it. If Sue were to consider the social meaning of each of her statements before saying it, she’d find it much harder to converse. Like if asking clarifying questions is socially aggressive (both in a “you answer to me” sense and a “you were unclear” accusation sense), what should she do to fix it? You can’t just skip clarifying questions in general. And how many are needed is out of her control. Sue can try to minimize the number of clarifying questions Joe needs to ask her, but it’s up to Joe to minimize how many Sue needs to ask him.

Sue: I thought I did. I thought you were providing information about the state of your understanding.
Joe: I was asking for help.
Sue: But that isn’t what your words mean. Why don’t you use standard English to say what you mean?

Joe feels highly insulted. But from an objective perspective, it’s a reasonable thing to be wondering and talking about. Joe literally says X and then acts like he’d communicated Y. Why not just say what he means?

Joe: Why don’t you just put two and two together?

Joe doesn’t answer the question; instead he asks a counter-question that’s socially insulting to Sue (it implies she’s dumber than a 4 year old who can’t add 2+2 correctly).

Sue: There are dozens of reasonable ways to proceed given the information you provided. That’s why I asked which one you were planning.

Even with such an insulting question that wasn’t meant to be answered, Sue still takes it at face value and tries to explain the answer.

Joe: Why won’t you just tell me what you meant?
Sue: You haven’t asked me to.

Again Sue immediately answers Joe’s question. She’s responsive and still operating with good will. She doesn’t care about who is reacting to who and how it looks for social power. And Joe’s tilted (since Sue’s first or second message in the example dialog) but Sue isn’t.

Joe: I just did.
Sue: When?
Joe: Right now. I just asked.
Sue: Quote?
Joe: “Why won’t you just tell me what you meant?”
Sue: I answered that question.

This is similar to how the dialog started. Things haven’t been sorted out. Even though Sue explained her perspective earlier, Joe still isn’t taking it into account and adjusting his communications and expectations. Sue, meanwhile, doesn’t know what to change to make things work better. She knows she’s logically right. She thinks Joe ought to try discussing in a way that isn’t logically wrong and that it’s easier to have conversations which build on that foundation. If Joe doesn’t have the skill to do that, he should try to learn it and ask for help instead of trying to have a conversation he’s incapable of handling productively.

Joe: After all this, I can’t get any answers out of you. Goodbye forever.

Objectively, Sue did answer all of Joe’s questions and was responsive to all direct, explicit requests. But Sue kept ignoring the social world meanings of both Joe’s and her own words.

In the social world, direct requests are often too pushy. People often phrase statements as questions to weaken them (“That is a dog?” which is expressing some uncertainty and making it easier for the person to disagree or confirm) and questions as statements (“I wonder if that’s a dog.” which is asking if the other person thinks it’s a dog).

Sue: I don’t understand why people come to discussion forums when they clearly don’t have a basic grasp of English and logic, and they also aren’t aiming to learn those things. What they’re doing will never work.
Joe: wtf! Leave me alone.
Sue: I did. I didn’t expect you to return. I’m just post morteming my discussion and hoping someone else may have insight. This has nothing to do with you.
Joe: You’re flaming me and attacking my reputation.
Sue: I’m just analyzing public evidence. If I made an error you can point it out. I’m not trying to flame; I’m aiming for accuracy. If you don’t want people to think about what you say, don’t post it. If you want to look good when analyzed, improve your skill level. Now leave me in peace. I’ve got several more thoughts to post.
Joe: You have no right! I didn’t sign up to be treated this way!
Sue: People thinking about and discussing things you said is exactly what you signed up for when you posted them.
Joe: [Leaves and holds a long-lasting grudge.]
Sue: When he said “You knew what I meant.” I was tolerant, lenient and generous by letting him change topics in the middle of my grammar point. Right as I was getting to a conclusion he ignored my question, seemingly because he knew he was about to lose the debate. I wasn’t rewarded for being so helpful. He didn’t reciprocate with good will towards me. Maybe I would have been better off repeating my question until he answered it, or pointing out that he wasn’t answering.
Sue: I don’t understand how, after a long conversation about how he hadn’t made a particular request, he still didn’t get it enough to realize he still hadn’t actually made that request. Which new statement did he think constituted a request to explain something to him?
Sue: Does anyone understand why most people are like this? I have such good will but it never seems to be enough. On my initiative, I asked about his plans, thinking perhaps to help with them. My reward was that he derailed the conversation. And as usual he doesn’t want to discuss what went wrong. He did begin the process of clearing up misunderstandings, but he was creating new misunderstandings faster than we could resolve stuff. Why won’t people just calm down and use English in a simple, correct way? Why do they rush through conversations and make huge messes and then give up?
Sue: What is life like for such a person? Do none of his conversations work? What happens when two Joes talk? Is it pure chaos or does it seem to work, somehow? Maybe if they both say and want sufficiently stereotyped things they can stay on the same page with almost no communication, merely by assuming stereotypes.
Sue: Why is no one else talking? Is this a dead forum? Does no one here care about trying to understand how to have rational conversations with typical people? Don’t you guys run into problems like these and want to figure out what to do about them? Or are you all similar to Joe and hiding it with your silence?
Joe: [Reads all this and intensifies his grudge.]

It could easily go worse than this more quickly. But I wanted to draw it out a bit and show the ongoing perspective clashes.

The other people on the forum are more attuned to the social world than Sue.

Sue doesn’t think that attuning herself more to the social world will actually result in intellectual progress on topics like science, epistemology, AGI, etc. People need to think objectively to contribute to those topics anyway.

Routinely, people’s social status is inaccurate in some way. Then the truth threatens the status. What could Sue do when talking to people who need to admit weakness and try to learn some stuff but who prefer to dishonestly pretend to be better than they are and who don’t want Sue to speak the truth? What’s to be done with such people besides detecting and ignoring them (and maybe criticizing them if they’re public figures)?

Lots of people try to debate sorta like Sue but they aren’t actually very good at logic themselves. Joe has experience with that. He’s dealt with people who are mostly focused on social, and screw up logic, but they do some Sue stuff as an act. He assumes Sue is like that too. He doesn’t actually have the skill to judge whether Sue got anything wrong or not. But Joe thinks he does. So what happens is Joe will misunderstand something, think Sue made a mistake, and conclude that Sue isn’t as logical as she thinks she is. From Joe’s pov, Sue seems to be about as good at logic as Joe or worse – because whenever she uses superior skill there’s a good chance that Joe doesn’t get it, and when Joe doesn’t get it there’s a good chance that he attributes the error to Sue. (These things are not a matter of random chance. But if you look at many similar events, you’ll find sometimes it goes one way and sometimes the other. And it’s too hard to analyze the detailed causality.)

Most real discussions have a lot of intentional social by both parties. This is a stylized example that turns up the contrast between characters. Actually what sort of social counts as “intentional” is a tricky question. Most of it is automated in childhood. Most social by adults is done without conscious intention at the time they do it. So “I wasn’t trying to social you” is no defense – in the past you learned how to do that kind of social to people and practiced it until it was second nature. You’re responsible for that! This explains part of why people have trouble turning it off. And it explains why people assume that most social is intentional in the sense of there was an intent in the past when the person learned to do it. They aren’t doing it now randomly or accidentally. The main excuse is “autism” which is the most standard term for a person who didn’t learn, practice and automate a bunch of social dynamics (or somehow managed to stop and change), so then they might actually honestly not be playing the social game.

This is all complicated by the social rules of evidence. Lots of social dynamics are deniable even when everyone knows they were done on purpose. You did them right and people approve, so you’re allowed to get away with them rather than be called out for the social manipulations (if the call out is sufficiently socially savvy it can often work, but just a blunt, direct logic-focused callout doesn’t work). So it’s common that everyone knows social happened and it wasn’t an accident, but everyone pretends not to know.


Elliot Temple | Permalink | Messages (23)

Analyzing Quotes Objectively and Socially

stucchio and Mason

stucchio retweeted Mason writing:

"Everything can be free if we fire the people who stop you from stealing stuff" is apparently considered an NPR-worthy political innovation now, rather than the kind of brain fart an undergrad might mumble as they come to from major dental work https://twitter.com/_natalieescobar/status/1299018604327907328

There’s no substantial objective-world content here. Basically “I disagree with whatever is the actual thing behind my straw man characterization”. There’s no topical argument. It’s ~all social posturing. It’s making assertions about who is dumb and who should be associated with what group (and, by implication, with the social status of that group). NPR-worthy, brain fart, undergrad, mumble and being groggy from strong drugs are all social-meaning-charged things to bring up. The overall point is to attack the social status of NPR by associating it with low status stuff. Generally smart people like stucchio (who remains on the small list of people whose tweets I read – I actually have a pretty high opinion of him) approve of that tribalist social-political messaging enough to retweet it.

Yudkowsky

Eliezer Yudkowsky wrote on Less Wrong (no link because, contrary to what he says, someone did make the page inaccessible. I have documentation though.):

Post removed from main and discussion on grounds that I've never seen anything voted down that far before. Page will still be accessible to those who know the address.

The context is my 2011 LW post “The Conjunction Fallacy Does Not Exist”.

In RAZ, Yudkowsky repeatedly brings up subculture affiliations he has. He read lots of sci fi. He read 1984. He read Feynman. He also refers to “traditional rationality” which Feynman is a leader of. (Yudkowsky presents several of his ideas as improvements on traditional rationality. I think some of them are good points.) Feynman gets particular emphasis. I think he got some of his fans via this sort of subculture membership signaling and by referencing stuff they like.

I bring this up because Feynman has a book title "What Do You Care What Other People Think?": Further Adventures of a Curious Character. This is the sequel to the better known "Surely You're Joking, Mr. Feynman!": Adventures of a Curious Character.

Yudkowsky evidently does care what people think and has provided no indication that he’s aware that he’s contradicting one of his heroes, Feynman. He certainly doesn’t provide counter arguments to Feynman.

Downvotes are communications about what people think. Downvotes indicate dislike. They are not arguments. They aren’t reasons it’s bad. They’re just opinions. They’re like conclusions or assertions. Yudkowsky openly presents himself as taking action because of what people think. It’s also basically just openly saying “I use power to suppress unpopular ideas”. Yudkowsky also gave no argument himself, nor did he endorse/cite/link any argument he agreed with about the topic.

Yudkowsky is actually reasonably insightful about social hierarchies elsewhere, btw. But this quote shows that, in some major way, he doesn’t understand rationality and social dynamics.

Replies to my “Chains, Bottlenecks and Optimization”

https://www.lesswrong.com/posts/Ze6PqJK2jnwnhcpnb/chains-bottlenecks-and-optimization

Dagon

I think I've given away over 20 copies of _The Goal_ by Goldratt, and recommended it to coworkers hundreds of times.

Objective meaning: I took the specified actions.

Social meaning: I like Goldratt. I’m aligned with him and his tribe. I have known about him for a long time and might merit early adopter credit. Your post didn’t teach me anything. Also, I’m a leader who takes initiative to influence my more sheep-like coworkers. I’m also rich enough to give away 20+ books.

Thanks for the chance to recommend it again - it's much more approachable than _Theory of Constraints_, and is more entertaining, while still conveying enough about his worldview to let you decide if you want the further precision and examples in his other books.

Objective meaning: I recommend The Goal.

Social meaning: I’m an expert judge of which Goldratt books to recommend to people, in what order, for what reasons. Although I’m so clever that I find The Goal a bit shallow, I think it’s good for other people who need to be kept entertained and it has enough serious content for them to get an introduction from. Then they can consider if they are up to the challenge of becoming wise like me, via further study, or not.

This is actually ridiculous. The Goal is the best known Goldratt book, it’s his best seller, it’s meant to be read first, and this is well known. Dagon is pretending to be providing expert judgment, but isn’t providing insight. And The Goal has tons of depth and content, and Dagon is slandering the book by condescending to it in this way. By bringing up Theory of Constraints, Dagon is signaling he reads and values less popular, less entertaining, less approachable non-novel Goldratt books.

It's important to recognize the limits of the chain metaphor - there is variance/uncertainty in the strength of a link (or capacity of a production step), and variance/uncertainty in alternate support for ideas (or alternate production paths).

Objective meaning (up to the dash): Goldratt’s chain idea, which is a major part of your post, is limited.

Social meaning (up to the dash): I’ve surpassed Goldratt and can look down on his stuff as limited. You’re a naive Goldratt newbie who is accepting whatever he says instead of going beyond Goldratt. Also calling chains a “metaphor” instead of “model” is a subtle attack to lower status. Metaphors aren’t heavyweight rationality (while models are, and it actually is a model). Also Dagon is implying that I failed to recognize limits that I should have recognized.

Objective meaning continued: There’s some sort of attempt at an argument here but it doesn’t actually make sense. Saying there is variance in two places is not a limitation of the chain model.

Social meaning continued: saying a bunch of overly wordy stuff that looks technical is bluffing and pretending he’s arguing seriously. Most people won’t know the difference.

Most real-world situations are more of a mesh or a circuit than a linear chain, and the analysis of bottlenecks and risks is a fun multidimensional calculation of forces applies and propagated through multiple links.

Objective meaning: Chains are wrong in most real world situations because those situations are meshes or circuits [both terms undefined]. No details are given about how he knows what’s common in real world situations. And he’s contradicting Goldratt, who actually did argue his case and know math. (I also know more than enough math, and Dagon never continued with enough substance to strain either of our math skill sets.)

Social meaning: I have fun doing multidimensional calculations. I’m better than you. If you knew math so well that it’s a fun game to you, maybe you could keep up with me. But if you could do that, you wouldn’t have written the post you wrote.

It’s screwy how Dagon presents himself as a Goldratt superfan expert and then immediately attacks Goldratt’s ideas.

Note: Dagon stopped replying without explanation shortly after this, even though he’d said how super interested in Goldratt stuff he is.

Donald Hobson

I think that ideas can have a bottleneck effect, but that isn't the only effect. Some ideas have disjunctive justifications.

Objective meaning: bottlenecks come up sometimes but not always. [No arguments about how often they come up, how important they are, etc.]

Social meaning: You neglected disjunctions and didn’t see the whole picture. I often run into people who don’t know fancy concepts like “disjunction”.

Note: Disjunction just means “or” and isn’t something that Goldratt or I had failed to consider.

Hobson then follows up with some math, socially implying that the problem is I’m not technical enough and if only I knew some math I’d have reached different conclusions. He postures about how clever he is and brings up resistors and science as brags.

I responded, including with math, and then Hobson did not respond.

TAG

What does that even mean?

Objective meaning: I don’t understand what you wrote.

Social meaning: You’re not making sense.

He did give more info about what his question was after this. But he led with this, on purpose. The “even” is a social attack – that word isn’t there to help with any objective meaning. It’s there to socially communicate that I’m surprisingly incoherent. It’d be a subtle social attack even without the “even”. He didn’t respond when I answered his question.

abramdemski

There is another case which your argument neglects, which can make weakest-link reasoning highly inaccurate, and which is less of a special case than a tie in link-strength.

Objective meaning: The argument in the OP is incomplete.

Social meaning: You missed something huge, which is not a special case, so your reasoning is highly inaccurate.

The way you are reasoning about systems of interconnected ideas is conjunctive: every individual thing needs to be true.

Objective meaning: Chain links have an “and” relationship.

Social meaning: You lack a basic understanding of the stuff you just said, so I’ll have to start really basic to try to educate you.

But some things are disjunctive: some one thing needs to be true.

Objective meaning: “or” exists. [no statement yet about how this is relevant]

Social meaning: You’re wrong because you’re an ignorant novice.

(Of course there are even more exotic logical connectives, such as implication or XOR, which are also used in everyday reasoning. But for now it will do to consider only conjunction and disjunction.)

Objective meaning: Other logic operators exist [no statement yet about how this is relevant].

Social meaning: I know about this like XOR, but you’re a beginner who doesn’t. I’ll let you save face a little by calling it “exotic”, but actually, in the eyes of everyone knowledgeable here, I’m insulting you by suggesting that for you XOR is exotic.

Note: He’s wrong, I know what XOR is (let alone OR). So did Goldratt. XOR is actually easy for me, and I’ve used it a lot and done much more advanced things too. He assumed I didn’t in order to socially attack me. He didn’t have adequate evidence to reach the conclusion that he reached; but by reaching it and speaking condescendingly, he implied that there was adequate evidence to judge me as an ignorant fool.
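The conjunctive vs. disjunctive distinction under discussion can be sketched in a few lines of code. This is a hypothetical toy of my own construction, not anything from Goldratt or abramdemski, and the min/max scoring is an illustrative assumption: a chain (AND structure) is only as strong as its weakest link, while a set of independent alternative justifications (OR structure) is as strong as its strongest member.

```python
def chain_strength(links):
    """Conjunctive (AND) structure: every link must hold, so the
    chain is only as strong as its weakest link."""
    return min(links)

def disjunctive_strength(supports):
    """Disjunctive (OR) structure: any one justification suffices,
    so the idea is as strong as its strongest support."""
    return max(supports)

# A chain with one weak link is weak overall:
print(chain_strength([0.9, 0.95, 0.3, 0.99]))       # -> 0.3

# Independent alternatives: one strong support carries the idea:
print(disjunctive_strength([0.3, 0.8, 0.5]))        # -> 0.8
```

The point of the contrast is that improving a non-weakest link (conjunctive case) or a non-strongest support (disjunctive case) doesn’t change the result, which is the bottleneck idea in both directions.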

Perhaps the excess accuracy in probability theory makes it more powerful than necessary to do its job? Perhaps this helps it deal with variance? Perhaps it helps the idea apply for other jobs than the one it was meant for?

Objective meaning: Bringing up possibilities he thinks are worth considering.

Social meaning: Flaming me with some rather thin plausible deniability.

I skipped the middle of his post btw, which had other bad stuff.

johnswentworth

I really like what this post is trying to do. The idea is a valuable one. But this explanation could use some work - not just because inferential distances are large, but because the presentation itself is too abstract to clearly communicate the intended point. In particular, I'd strongly recommend walking through at least 2-3 concrete examples of bottlenecks in ideas.

This is an apparently friendly reply but he was lying. I wrote examples but he wouldn’t speak again.

There are hints in this text that he actually dislikes me and is being condescending, and that the praise in the first two sentences is fake. You can see some condescension in the post, e.g. in how he sets himself up like a mentor telling me what to do (and note the unnecessary “strongly” before “recommend”). And how does he know the idea is valuable when it’s not clearly communicated? And his denial re inferential distance is actually both unreasonable and aggressive. The “too abstract” and “could use some work” are also social attacks, and the “at least 2-3” is a social attack (it means do a lot) with a confused objective meaning (if you’re saying do >= X, why specify X as a range? You only need one number.)

The objective world meaning is roughly that he’s helping with some presentation and communication issues and wants a discussion of the great ideas. But it turns out, as we see from his following behavior, that wasn’t true. (Probably. Maybe he didn’t follow up for some other reason like he died of COVID. Well not that because you can check his posting history and see he’s still posting in other topics. But maybe he has Alzheimer’s and he forgot, and he knows that’s a risk so he keeps notes about stuff he wants to follow up on, but he had an iCloud syncing error and the note got deleted without him realizing it. There are other stories that I don’t have enough information to rule out, but I do have broad societal information about them being uncommon, and there are patterns across the behavior of many people.)

MakoYass

I posted in comments on different Less Wrong thread:

curi:

Are you interested in extended discussion about this, with a goal of reaching some conclusions about CR/LW differences, or do you know anyone who is?

MakoYass:

I am evidently interested in discussing it, but I am probably not the best person for it.

Objective meaning: I am interested. My answer to your question is “yes”. I have agreed to try to have a discussion, if you want to. However, be warned that I’m not very good at this.

Social meaning: The answer to your question is “no”. I won’t discuss with you. However, I’m not OK with being declared uninterested in this topic. I love this topic. How dare you even question my interest when you have evidence (“evidently”) that I am interested, which consists of me having posted about it. I’d have been dumb to post about something I’m not interested in, and you were an asshole to suggest I might be dumb like that.

Actual result: I replied in a friendly, accessible way attempting to begin a conversation, but he did not respond.

Concluding Thoughts

Conversations don’t go well when a substantial portion of what people say has a hostile (or even just significantly different) social (double) meaning.

It’s much worse when the social meaning is the primary thing people are talking about, as in all the LW replies I got above. It’s hard to get discussions where the objective meanings are more emphasized than the social ones. And all the replies I quoted re my Chains and Bottlenecks post were top level replies to my impersonal article. I hadn’t said anything to personally offend any of those people, but they all responded with social nastiness. (Those were all the top level replies. There were no decent ones.) Also it was my first post back after 3 years, so this wasn’t carrying over from prior discussion (afaik – possibly some of them were around years ago and remembered me. I know some people do remember me because they mentioned it. Actually TAG said later, elsewhere, to someone else, that he knew about me from being on unspecified Critical Rationalist forums in the past).

Even if you’re aware of social meanings, there are important objective meanings which are quite hard to say without getting offensive social meaning. This comes up with talking about errors people make, especially ones that reveal significant weaknesses in their knowledge. Talking objectively about methodology errors and what to do about them can also be highly offensive socially. Also objective, argued judgments of how good things are can be socially offensive, even if correct (actually it’s often worse if it’s correct and high quality – the harder to plausibly argue back, the worse it can be for the guy who’s wrong).

The main point was to give examples of how the same sentence can be read with an objective and a social meaning. This is what most discussions on rationalist forums where explicit knowledge of social status hierarchies is common look like to me. It comes up a fair amount on my own forums too (less often than at LW, but it’s a pretty big problem IMO).

Note: The examples in this post are not representative of the full spectrum of social behaviors. One of the many things missing is needy/chasing/reactive behavior where people signal their own low social status (low relative to the person they’re trying to please). Also, I could go into more detail on any particular example people want to discuss (this post isn’t meant as giving all the info/analysis, it’s a hybrid between some summary and some detail).


Update: Adding (on same day as original) a few things I forgot to say.

Audiences pick up on some of the social meanings (which ones, and how they see them, varies by person). They see you answer and not answer things. They think some should be answered and some are ignorable. They take some things as social answers that aren’t intended to be. They sometimes ignore literal/objective meanings of things. They judge. It affects audience reactions. And the perception of audience reactions affects what the actual participants do and say (including when they stop talking without explanation).

The people quoted could do less social meaning. They’re all amplifying the social. There’s some design there; it’s not an accident. It’s not that hard to be less social. But even if you try, it’s very hard to avoid any problematic social meanings, especially when you consider that different audience members will read stuff differently, according to different background knowledge, different assumptions about context, different misreadings and skipped words, etc.


Elliot Temple | Permalink | Messages (14)

Formatting Test Thread

You can post messages (comments) here to test whether they're formatted how you expect. Messages here won't appear on the Recent Messages page or in the Messages RSS feed.

Discussion info link.


Elliot Temple | Permalink | Messages (19)

Baxter Praises Slavery

In the sci-fi novel Ring, Stephen Baxter wrote (my emphasis):

Lieserl had learned about the Qax. The Qax had originated as clusters of turbulent cells in the seas of a young planet. Because there were so few of them the Qax weren't naturally warlike -- individual life was far too precious to them. They were natural traders; the Qax worked with each other like independent corporations, in perfect competition. They had occupied Earth simply because it was so easy -- because they could. The only law governing the squabbling junior races of the Galaxy was, Lieserl realized, the iron rule of economics. The Qax enslaved mankind simply because it was an economically valid proposition. They had to learn the techniques of oppression from humans themselves. Fortunately for the Qax, human history wasn't short of object lessons.

This is explicitly claiming that slavery is economically beneficial, just like the 1619 project claims. There's something awful about how many people are fans of slavery. Immoral but practical!? No, it's immoral and impractical. Free trade is better at creating wealth. These people never read Mises or offer actual economic arguments.

People produce more (and do better thinking, which leads to e.g. scientific progress) when they aren't slaves or oppressed. And free trade avoids wasting resources on task masters, whips, chains, lasting resentment decades later, etc.

Baxter, by contrast, thinks that slavery is so great that it's worth learning how to oppress people in order to enslave them, even though all you knew how to do before was trade, and you'd already gotten very rich and powerful by trade.

BTW, in the story, the Qax ended up much worse off as a result of oppressing humanity. Lieserl is reviewing history and knows that outcome. Baxter's own story is a counter-example to his claim.

PS I liked the book. This was just one minor part.


Elliot Temple | Permalink | Message (1)