
Elliot Temple on August 2, 2020

Messages (64)

https://www.lesswrong.com/posts/ZEz38ae84AEvJkvgH/what-are-you-looking-for-in-a-less-wrong-post?commentId=d9vkoWDqSN5ZPSHFM

> [Question] What are you looking for in a Less Wrong post?

The main issue for me, in deciding whether to write comments, is whether I think discussion to a conclusion is available. Rationalists can't just agree to disagree, but in practice almost all discussions end without agreement and without the party choosing to end the discussion explaining their reasons for ending it. Just like at most other forums, most conversations seem to have short time limits which are very hard to override regardless of the content of the discussion.

I'm interested in things like finding and addressing double cruxes and otherwise getting some disagreements resolved. I want conversations where at least one of us learns something significant. I don't like for us each to give a few initial arguments and then stop talking. Generally I've already heard the first few things that other people say (and often vice versa too), so the value in the conversation mostly comes later. (The initial part of the discussion where you briefly say your position mostly isn't skippable. There are too many common positions, that I've heard before, for me to just guess what you think and jump straight into the new stuff.)

I occasionally write comments even without an expectation of substantive discussion. That's mostly because I'm interested in the topic and can use writing to help improve my own thoughts.


curi at 1:51 PM on August 2, 2020 | #16939 | reply | quote

https://www.lesswrong.com/posts/ZEz38ae84AEvJkvgH/what-are-you-looking-for-in-a-less-wrong-post?commentId=d9vkoWDqSN5ZPSHFM

curi:

>>Rationalists can’t just agree to disagree

TAG:

> If you read all the way through the rationalwiki article on Aumann's Theorem, there is a clear explanation as to why it cannot apply in practice.

curi:

(quoting https://www.readthesequences.com )

> He said, “Well, um, I guess we may have to agree to disagree on this.”

>

> I [Yudkowsky] said: “No, we can’t, actually. There’s a theorem of rationality called Aumann’s Agreement Theorem which shows that no two rationalists can agree to disagree. If two people disagree with each other, at least one of them must be doing something wrong.”

...

> Robert Aumann’s Agreement Theorem shows that honest Bayesians cannot agree to disagree

...

> Regardless of our various disputes, we [Yudkowsky and Hanson] both agree that Aumann’s Agreement Theorem extends to imply that common knowledge of a factual disagreement shows someone must be irrational.

...

> Nobel laureate Robert Aumann—who first proved that Bayesian agents with similar priors cannot agree to disagree

Do you think I'm misunderstanding the sequences or do you disagree with them?

Just because it's not fully proven in practice by math doesn't mean it isn't a broadly true and useful idea.
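For reference, a compact statement of the theorem under discussion (added for context; not part of the original exchange): if two Bayesian agents share a common prior $P$, their private information is given by partitions $\mathcal{I}_1$ and $\mathcal{I}_2$, and it is common knowledge that their posteriors for an event $A$ are $q_1 = P(A \mid \mathcal{I}_1)$ and $q_2 = P(A \mid \mathcal{I}_2)$, then

$$q_1 = q_2.$$

The "stringent conditions" TAG refers to below are the common prior and the common-knowledge requirement.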


curi at 1:32 PM on August 3, 2020 | #16950 | reply | quote

#16950 TAG responded by doubling down on accusing me of not doing my homework, ignoring my question, and straw manning (maybe due to being confused) what I said about math.

https://www.lesswrong.com/posts/ZEz38ae84AEvJkvgH/what-are-you-looking-for-in-a-less-wrong-post

> It is fully proven by the math, but it requires a set of stringent conditions about honesty and shared information which are unlikely to obtain in real world situations. As explained in the rationality article. Did you read it?

I (intentionally) said the kind of thing Yudkowsky also says. TAG is triggered and out to get me. I'm not clear on why. I don't plan to respond.

I also got this response in the same thread, from the OP whose question I answered:

> Thanks for the answer! I didn't think of it that way, but I actually agree that I prefer when the post crystallize both sides of the disagreement, for example in a double crux.

This is friendly but confused. He's agreeing with me but his summary of what I said isn't close. I was talking about availability of discussion and he isn't.


curi at 2:58 PM on August 3, 2020 | #16951 | reply | quote

I see you don't plan to respond to TAG anymore, but you mention that you don't like ending a discussion without an explanation of why the discussion has ended. Are you planning on explaining to TAG why you will not respond anymore? Or will that be thought of as too aggressive or something?

Have you thought about why you want to end a discussion without explanation? If so could you explain your thought process there?

I am trying to find actionable tips on how to spot and end conversations that are not fruitful in a more efficient manner. Nip it in the bud as they say.


Periergo at 4:10 PM on August 4, 2020 | #16964 | reply | quote

> I see you don't plan to respond to TAG anymore, but you mention that you don't like ending a discussion without an explanation of why the discussion has ended. Are you planning on explaining to TAG why you will not respond anymore? Or will that be thought of as too aggressive or something?

That would violate the social norms of LW. It wouldn't be productive. It'd alienate people. It'd result in people writing hostile meta-discussion comments to me and being pushy for replies. It'd result in more discussion with the wrong people, like TAG, instead of people that'd be better to talk with.

LW is not a free speech forum. If I persisted enough with these matters, the result would be moderator action against me, as has happened both previous times I posted at LW (~9 and ~3 years ago).

This isn't special. There are other things I'm also doing differently to try to find some common ground and interact with them productively instead of being horribly misunderstood. I'm trying to use more cultural defaults and to follow LW customs more. I'm trying to avoid meta discussion and discussion I think is low quality. Part of the way their forum works is you're supposed to ignore a lot of stuff you disagree with (when the disagreement involves viewing it as low quality) – not all such things but a significant amount. I think that's bad but there's a lack of reasonable competition/alternatives and I also think there's some room for diversity of forums (and LW is broadly way better for discussion than e.g. twitter, reddit and facebook).

TAG's comments didn't communicate discussion interest. If he or anyone else really cares they can ask me, on LW or on my own forums. They have options. I actually provide free speech oriented forums where that kind of meta discussion is within the bounds of normal behavior instead of discouraged. I doubt he knows that, but that's his choice, and I don't know a good way to fix it.

> I am trying to find actionable tips on how to spot and end conversations that are not fruitful in a more efficient manner. Nip it in the bud as they say.

One of my main ideas about how to rationally end discussions (without mutual agreement) is: https://www.elliottemple.com/essays/debates-and-impasse-chains

Another tip is to have a debate policy and then ask people to use it (or suggest an alternative policy) if they want to continue. At that point almost everyone will choose to have the conversation end. And if they do continue, it's now on better terms. My debate policy is https://www.elliottemple.com/debate-policy

Even without a written policy, merely asking people questions like "Do you want to try to persistently discuss this to a conclusion instead of stopping without explanation after a bit?" will end most discussions in a way that gives info about why it's ending (different, incompatible discussion goals if you want a serious, effortful discussion and they don't).


curi at 4:29 PM on August 4, 2020 | #16965 | reply | quote

Thank you for the ideas. What you link appears to be about debate. Does this apply equally to discussions that I do not consider to be debates? For example, I am just curious to know how or why people arrived at certain conclusions and to compare that with my own thinking, without looking to persuade them in any way.

I think it should apply equally but I don't want to assume.


Periergo at 5:37 PM on August 4, 2020 | #16967 | reply | quote

#16967 I'd modify a bit for discussion without intent to persuade. You can use some of the same specifics and use the underlying idea of saying your goals for the discussion and getting agreement about them upfront (or agreeing to the other guy's goals if he has some alternative that is acceptable to you, or aborting).


curi at 5:44 PM on August 4, 2020 | #16968 | reply | quote

> #16967 I'd modify a bit for discussion without intent to persuade. You can use some of the same specifics and use the underlying idea of saying your goals for the discussion and getting agreement about them upfront (or agreeing to the other guy's goals if he has some alternative that is acceptable to you, or aborting).

I have a very important reason to not want to waste time. That's one goal I have. To not waste time.

With that in mind, when you say, "saying your goals for the discussion and getting agreement about them upfront..." I realize this is not something I have thought about. I don't have a clear goal of what I want in most conversations. For the most part, I am letting my curiosity lead the way purely by intuition.

For example, within this discussion, I have a fuzzy goal of wanting to learn methods of not wasting time when discussing with people, but I don't have a clear cut idea of what I am looking for. I don't know if this is necessarily a bad thing, but you have made me more aware of it.


Periergo at 5:56 PM on August 4, 2020 | #16969 | reply | quote

G Gordon Worley III writes:

> I don't know of any research to point you to but just wanted to say I think you're right we have reason to be suspect of the normative correctness of many irrationality results. It's not that people aren't ever "irrational" in various ways, but that sometimes what looks from the outside like irrationality is in fact a failure to isolate from context in a way that humans not trained in this skill can do well.

> I seem to recall a post here a while back that made a point about how some people on tasks like this are strong contextualizers and you basically can't get them to give the "rational" answer because they won't or can't treat it like mathematical variables where the content is irrelevant to the operation, but related to the ideas shared in this post.

Yeah, (poor) context isolation is a recurring theme I've observed in my discussions and debates. Here's a typical scenario:

There's an original topic, X. Then we talk back and forth about it for a bit: C1, D1, C2, D2, C3, D3, C4, D4. The C messages are me and D is the other guy.

Then I write a reply, C5, about a specific detail in D4. Often I quote the exact thing I'm replying to or explain what I'm doing (e.g. a statement like "I disagree with A because B" where A was something said in D4).

Then the person writes a reply (more of a non sequitur from my pov) about X.

People routinely try to jump the conversation back to the original context/topic. And they make ongoing attempts to interpret things I say in relation to X. Whatever I say, they often try to jump to conclusions about my position on X from it.

I find it very hard to get people to stop doing this. I've had little success even with explicit topic shifts like "I think you're making a discussion methodology mistake, and talking about X won't be productive until we get on the same page about how to discuss."

Another example of poor context isolation is when I give a toy example that'd be trivial to replace with a different toy example, but they start getting hung up on specific details of the example chosen. Sometimes I make the example intentionally unrealistic and simple because I want it to clearly be a toy example and I want to get rid of lots of typical context, but then they get hung up specifically on how unrealistic it is.

Another common example is when I compare X and Y regarding trait Z, and people get hung up b/c of how X and Y compare in general. Me: X and Y are the same re Z. Them: X and Y aren't similar!

I think Question-Ignoring Discussion Pattern is related, too. It's a recurring pattern where people don't give direct responses to the thing one just said.

And thanks for the link. It makes sense to me and I think social dynamics ideas are some of the ones most often coupled/contextualized. I think it’s really important to be capable of thinking about things from multiple perspectives/frameworks, but most people really just have the one way of thinking (and have enough trouble with that), and for most people their one way has a lot of social norms built into it (because they live in society – you need 2+ thinking modes in order for it to make sense to have one without social norms, otherwise you don’t have a way to get along with people. Some people compromise and build fewer social norms into their way of thinking because that’s easier than learning multiple separate ways to think).


curi at 12:01 PM on August 5, 2020 | #16978 | reply | quote

I got linked to this in the LW discussion. I thought it was interesting enough to highlight:

https://www.lesswrong.com/posts/7cAsBPGh98pGyrhz9/decoupling-vs-contextualising-norms

I also looked at the 3 further reading links at the bottom + the followup to one of them.


curi at 1:05 PM on August 7, 2020 | #16994 | reply | quote

#16994 Erisology is very interesting.


Periergo at 4:34 PM on August 7, 2020 | #16996 | reply | quote

I wrote some short comments at:

https://www.lesswrong.com/posts/iAnXcZ5aGZzNc2J8L/the-law-of-least-effort-contributes-to-the-conjunction

https://www.lesswrong.com/posts/KpXAPsik5SkoComHM/social-dynamics

I'm not going to repeat them. Basically people don't understand what I say and assume that they did understand. Then they e.g. attribute ideas to me that contradict what I said or don't notice things I said. It's not productive so far due to low quality.

I expect the discussions to either die fast or maybe escalate: if someone challenges me more I may ask if they want to debate to a conclusion. It's socially aggressive to ask that in response to their first set of messages. It's easier to get away with later if they continue. I don't want halfway discussions where people are trying to bicker with me while having one foot out the door and reserving the right to stop talking, without explanation, at any moment. If they won't take it seriously, I'd prefer they leave me alone or label their comments appropriately. Of course they don't want to do that – they commonly want to pretend to be having a serious discussion while not doing it and while maintaining enough social plausible deniability that it's hard to call them out.


curi at 1:43 PM on August 10, 2020 | #17130 | reply | quote

#17130 on social plausible deniability see http://curi.us/2361-social-dynamics-summary-notes and the first comment.

---

A common attitude is something like: it was your job to keep me interested and earn my attention on an ongoing basis in the discussion. Write good stuff and people will want to continue.

So people often blame you when people exit conversations with you.

With discussions what normally happens is people stop midway for social reasons.

I don't want to take on the responsibility of keeping people socially happy (happy with the social dynamics of the conversation). I don't want that job.

I'm happy to take on the job of providing objective, rational value in a discussion. But what happens is people leave for social reasons and then lie (to themselves more than to me) that they left for rational reasons. Often they don't explain what they're doing much so the lie is ambiguously implied and mostly unstated, which makes it even harder to deal with.

If I fail to do a good job re objective value in a discussion, I want people to *say so and argue their case*. It could be their error rather than mine. We should try to discuss and solve it before giving up.

Problems are inevitable. There's no way to get very far with people who give up at the first sign of trouble instead of engaging in problem solving.

So there are two basic things I want from people in order to have a substantial discussion with them. I want to have ~zero responsibility for managing social dynamics and keeping them socially happy and interested. And I want some substantial resilience, perseverance and willingness to engage in problem solving before giving up.

Such things are hard to come by and people don't want to admit that so it's awkward. Responses to such requests are largely social, e.g. accusing me of being too demanding for my social status (relative to the person I'm asking for stuff from) – not knowing my place (but ~never said that explicitly). I also get socially smeared as chasing the other person, wanting stuff from them, being tryhard, etc.

But the real problem is most people don't want and don't know how to do rational discussion. They spend their life worrying about social dynamics. So even if they turned the social crap off for a bit, it wouldn't help, because that's all they know. If they were going to do rationality they'd have to start near the beginning and build up their skills – like a child (as they'd see it), and they really won't want to admit such weakness and do that.


curi at 1:54 PM on August 10, 2020 | #17134 | reply | quote

It's hard to get attention on LW because attention is mostly allocated by social dynamics but people don't admit that and will be hostile if you talk about it. But I don't want to social climb and they will mostly just blame my objective content quality (without anyone ever discussing it to a conclusion) and deny social dynamics are happening.


curi at 2:48 PM on August 10, 2020 | #17146 | reply | quote

I streamed reading and responding to some LW comments:

https://youtu.be/QDLGcVMwbS8


curi at 12:52 PM on August 11, 2020 | #17324 | reply | quote

https://www.lesswrong.com/posts/iAnXcZ5aGZzNc2J8L/the-law-of-least-effort-contributes-to-the-conjunction

Anyone know how to productively, mutually beneficially handle cases like this?

My plan is not to reply.


curi at 3:14 PM on August 11, 2020 | #17329 | reply | quote

#17329 It may not be possible to move forward productively if he doesn't want to.

One idea would be to state that his conclusion (that "The Law of Least Effort Contributes to the Conjunction Fallacy" is false) is based on an erroneous understanding of LLE.

A potential problem with my suggestions is that you may have to invest more time with this person and explain how he interpreted it wrong. His non-engagement specifically about that leads me to believe he has no interest in further discussion. In short, you might end up wasting your time.

Perhaps, "I think your interpretation of LLE is mistaken. If we are to have a fruitful discussion we have to fix this, but if you're not interested in having that discussion, we can end the discussion here."

I don't think you haven't thought of these, but in case you haven't, maybe it helps. If you have thought of them and rejected them, I'd like to know why, for my benefit. Thanks!


Periergo at 4:03 PM on August 11, 2020 | #17330 | reply | quote

> In short, you might end up wasting your time.

Yes, his responses are low quality, and the default discussion standard is that people quit in the middle without explanation, before either addressing my arguments to them *or* clarifying their claims to my satisfaction (so before either they or I could learn anything substantive).

So I don't want to try to sort out his mess and then soon get ignored. Nor do I want to attempt to engage in the social maneuvering that would (presumably) get and hold his attention. I think he (like people in general) allocates attention by social dynamics, not content quality. So the position "Just say good stuff and earn his attention rationally" doesn't apply IMO. (And I was already trying to do that and what did he do? Skim what I wrote, not pay attention to or engage with what I said, not point out any reason my material was low quality, and then summarize the standard view as if I hadn't bothered to get wikipedia-level knowledge before writing 3 articles about it.) But I can't say all this or I'll just piss people off. Most forums don't tolerate such things (mine does).

> Perhaps, "I think your interpretation of LLE is mistaken. If we are to have a fruitful discussion we have to fix this, but if you're not interested in having that discussion, we can end the discussion here."

I think he'll consider this socially aggressive and try to fight back (or maybe ignore me), and also the audience will dislike it.

He doesn't want to take the blame for not being willing to do rational truth seeking or anything along those lines. He'll feel attacked and unfairly pressured. He might respond e.g. by saying an obfuscated version of "learn your place and earn attention instead of demanding it".


curi at 4:11 PM on August 11, 2020 | #17331 | reply | quote

> I think he'll consider this socially aggressive and try to fight back (or maybe ignore me), and also the audience will dislike it.

Really? I would not have guessed this. Though sometimes I have been told to "chill out" when in my mind I was not aggressive at all.


Periergo at 5:09 PM on August 11, 2020 | #17345 | reply | quote

#17345 It's borderline accusing him of not wanting to have a fruitful, rational discussion.

People are easily offended.


curi at 5:11 PM on August 11, 2020 | #17347 | reply | quote

Yes, I think I am easily offended too, but not to that level. Hmm. Maybe I should consider learning to be less offensive, but that feels like a waste of time to me. If they are reasonable, they would be willing to listen to clarifications; if they are not reasonable, why should I bother with them in the first place?


Periergo at 5:24 PM on August 11, 2020 | #17349 | reply | quote

To be clear: I'm not advising anyone to avoid offending LW people (or others).

I have offended them before. I don't think I'll learn much by offending them again now. I'm trying something different. I have some ability to do this due to already studying social dynamics and irrationality for independent reasons.


curi at 5:33 PM on August 11, 2020 | #17351 | reply | quote

Re my https://www.elliottemple.com/essays/debates-and-impasse-chains **gjm wrote**:

https://www.lesswrong.com/posts/iAnXcZ5aGZzNc2J8L/the-law-of-least-effort-contributes-to-the-conjunction

> So, to summarize the proposal behind that link:

>

> - an "impasse", here, is anything that stops the original discussion proceeding fruitfully;

> - when you encounter one, you should switch to discussing the impasse;

> - *that* discussion may also reach an impasse, which you deal with the same way;

> - it's OK to give up unilaterally when you accumulate enough impasses-while-dealing-with-impasses-while-dealing-with-impasses;

> - you propose that a good minimum would be a *chain of five or more impasses*.

>

> I think only a small minority of discussions are so important as to justify a commitment to continuing until the fifth chained impasse.

> I do agree that there's a genuine problem you're trying to solve with this stuff, but I think your cure is very much worse than the disease. My own feeling is that for all but the most important discussions nothing special is needed; if anything, I think there are bigger losses from people feeling *unable* to walk away from unproductive discussions than from people walking away when there was still useful progress to be made, and so I'd expect that measures to make it harder to walk away will on balance do more harm than good.

>

> (Not *necessarily*; perhaps there are things one could do that make premature walking-away harder but don't make not-premature walking-away harder. I don't know of any such things, and the phenomenon you alluded to earlier, that premature walking-away often *feels like* fully justified walking away to the person doing it, makes it harder to contrive them.)

>

> I also think that, in practice, if A thinks B is being a bozo then having made a commitment to continue discussion past that point often *won't* result in A continuing; they may well just leave despite the commitment. (And may be right to.) Or they may continue, but adding *resentment at being obliged to keep arguing with a bozo* to whatever other things made them want to leave, and the ensuing discussion is not very likely to be fruitful.

>

> I guess I haven't yet addressed one question you asked: would I like to address the premature-ending problem if it weren't too expensive? If there were a magical spell that would arrange that henceforth discussions wouldn't end when (1) further discussion would in fact be fruitful and (2) the benefits of that discussion would exceed the costs, for both parties -- then yes, I think I'd be happy for that spell to be cast. But I am super-skeptical about *actually possible measures* to address the problem, because I think the opposite problem (of effort going into discussions that are not in fact productive enough to justify that effort) is actually a bigger problem, and short of outright magic it seems very difficult to improve one of those things without making the other worse.
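To make the bookkeeping concrete, here is a minimal sketch (added for illustration; it is not code from the essay or from gjm's comment, and the names are invented) of the rule summarized in the bullet list above: each unresolved impasse reached while discussing the previous impasse deepens a chain, and unilateral exit is treated as reasonable once the chain reaches a stated minimum depth.

```python
# Minimal sketch of impasse-chain tracking (illustrative only).

MIN_CHAIN_DEPTH = 5  # the essay's suggested minimum; a depth of 3 is offered later in this thread

class Discussion:
    def __init__(self):
        # Each entry is an impasse reached while trying to resolve the previous one.
        self.impasse_chain = []

    def hit_impasse(self, description):
        # Instead of quitting silently, state the impasse and switch to discussing it.
        self.impasse_chain.append(description)

    def resolve_current_impasse(self):
        # Resolving the newest impasse lets the discussion drop back a level.
        if self.impasse_chain:
            self.impasse_chain.pop()

    def may_end_unilaterally(self):
        # The proposal: ending without mutual agreement is reasonable once
        # impasses about impasses have stacked this deep.
        return len(self.impasse_chain) >= MIN_CHAIN_DEPTH
```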


curi at 1:44 PM on August 12, 2020 | #17375 | reply | quote

#17375 https://www.lesswrong.com/posts/iAnXcZ5aGZzNc2J8L/the-law-of-least-effort-contributes-to-the-conjunction

I'm glad that you seem to have largely understood me and also given a substantive response about your main concern. That is fairly atypical. I'm also glad that you agree that there are important issues in this general area.

I will agree to discuss to a length 3 impasse chain with you (rather than 5) if that'd solve the problem (I doubt it). I'd also prefer to discuss impasse chains and discussion ending issues (which I consider a very important topic) over the conjunction fallacy or law of least effort, but I'm open to either.

I think you're overestimating how much effort it takes to create length 5 impasse chains, but I know that's not the main issue. Here's an example of a length 5 impasse chain which took exactly 5 messages, and all but the first were quite short. It wasn't a significant burden for me (especially given my interest in the topic of discussion methods themselves) and in fact was considerably faster and easier for me than some other things I've done in the past (I try very hard to be open to critical discussion and am interested in policies to enable that). If it had taken more than 5 messages, that would have only been because the other guy said some things I thought were actually good messages.

Discussion ending policies and the problems with walking away with no explanation are a problem that particularly interests me and that I'd write a lot about regardless of what you did or didn't do. I actually just wrote a bunch about it this morning before seeing your comment. By contrast, I don't want to discuss the LoLE stuff with you without some sort of precommitment re discussion ending policies because I think your messages about LoLE were low quality and explaining the errors is not the type of writing I'd choose just for my own benefit. (This kind of statement is commonly hard to explain without offending people, which is awkward because I do want to tell people why I'm not responding, and it often would only take one sentence. And I don't think it should cause offense: we disagree, and I expect your initial perspective is that there were quality issues with what I wrote, so I expect symmetry on this point anyway, no big deal.) It's specifically the discussions which start with symmetric beliefs that the other guy is wrong in ways I already understand, or is saying low quality stuff, or otherwise isn't going to offer significant value in the early phases of the discussion, that especially merit using approaches like impasse chains to enable discussion. The alternative to impasse chains is often *no discussion*. But I'd rather offer the impasse chain method over just ignoring people (though due to risk of offending people, sometimes I just say nothing – but at least I have a publicly posted debate policy and paths forward policy, as well as the impasse chain article, so if anyone really cares they can find out about and ask for options like that.)

As a next step, you can read and reply to – or not – what I wrote anyway about impasse chains today: *Rationally Ending Discussions*.

You may also, if you want, indicate your good faith interest in the topic of too much effort going into bad discussions, and how that relates to rationally ending discussions. If you do, I expect that'll be enough for me to write something about it even with no formal policy. (I didn't say much about that in the Rationally Ending Discussions linked in the previous paragraph, but I do have ideas about it.) Anyone else may also indicate this interest or request a discussion to agreement or impasse chain with me (I'm open to them on a wide range of topics including basically anything about rationality, and I don't think we'll have much trouble finding a point of disagreement if we try to).


curi at 1:44 PM on August 12, 2020 | #17376 | reply | quote

Reading through all the sequences in order. 70% done. My favorite part so far:

https://www.readthesequences.com/Science-And-Rationality-Sequence


curi at 4:04 PM on August 12, 2020 | #17377 | reply | quote

reply to gjm

https://www.lesswrong.com/posts/iAnXcZ5aGZzNc2J8L/the-law-of-least-effort-contributes-to-the-conjunction

> But if you reckon my comments are low-quality and I'm likely to bail prematurely, you'll have to decide for yourself whether that's a risk you want to take.

I have decided and I don't want to take that risk in this particular case.

But I believe I'm socially prohibited from saying so or explaining the analysis I used to reach that conclusion.

This is a significant issue for me because I have a similar judgment regarding most responses I receive here (and at most forums). But it's problematic to just not reply to most people while providing no explanation. But it's also problematic to violate social norms and offend and confuse people with meta discussion about e.g. what evidence they've inadvertently provided that they're irrational or dumb. And often the analysis is complex and relies on lots of unshared background knowledge.

I also think I'm socially prohibited from raising this meta-problem, but I'm trying it anyway for a variety of reasons including that there are some signs that you may understand what I'm saying. Got any thoughts about it?


curi at 2:26 PM on August 13, 2020 | #17392 | reply | quote

Short reply I wrote to someone who said my solution (impasse chains) sucked:

Do you have any proposal for how to solve the problems of people being biased then leaving discussions at crucial moments to evade arguments and dodge questions, and there being no transparency about what's going on and no way for the error to get corrected?


curi at 2:27 PM on August 13, 2020 | #17393 | reply | quote

https://www.lesswrong.com/posts/iAnXcZ5aGZzNc2J8L/the-law-of-least-effort-contributes-to-the-conjunction

gjm wrote:

> LW is less constrained than most places by such social norms. However, to some extent those social norms are in place because breaking them tends to have bad results on net, and my experience is that a significant fraction of people who want to break them may *think* they are doing it to be frank and open and honest and discuss things rationally without letting social norms get in the way, but *actually* are being assholes in just the sort of way people who violate social norms usually are: they enjoy insulting people, or want to do it as a social-status move, or whatever. And a significant fraction of people who say they're happy for norms to be broken "at" them may *think* they are mature and sensible enough not to be needlessly offended when someone else says "I think you're pretty stupid" (or whatever), but *actually* get bent out of shape as soon as that happens.

>

> If it's any consolation, I have my own opinions about the likely outcome of such a discussion, some of which I too might be socially prohibited from expressing out loud :-).

I hereby grant you and everyone else license to break social norms at me. (This is not a license to break rational norms, including rational moral norms, which coincide with social norms.) I propose trying this until I get bent out of shape once. I do have past experience with such things including on 4chan-like forums.

I agree with you about common cases.

What I don't see in your comment is a solution. Do you regard this as an important, open problem?


curi at 12:28 PM on August 14, 2020 | #17402 | reply | quote

Is gjm really upset/mad/hostile/whatever? Somewhat annoyed? Fine?

My initial intuition was: fine. We're just playing. It's all in good fun.

When I considered it and did some analysis, I decided: really upset.

I thought of this and planned to post it earlier before seeing his long new comment that was posted with a significantly shorter turnaround time than usual.


curi at 2:57 PM on August 14, 2020 | #17406 | reply | quote

#17406 My response to gjm:

> I'd be more interested in discussing Popper and Bayes stuff than your LoLE comments. Is there any literature which adequately explains your position on induction, which you would appreciate criticism of?

> FYI I do not remember our past conversations in a way that I can connect any claims/arguments/etc to you individually. I also don't remember if our conversations ended by either of our choice or were still going when moderators suppressed my participation (slack ban with no warning for mirroring my conversations to my forum, allegedly violating privacy, as well as repeated moderator intervention to prevent me from posting to the LW1.0 website.)


curi at 3:15 PM on August 14, 2020 | #17407 | reply | quote

reply to gjm

https://www.lesswrong.com/posts/iAnXcZ5aGZzNc2J8L/the-law-of-least-effort-contributes-to-the-conjunction

> This discussion was on Slack (which unfortunately hides all but the most recent messages unless you pay them, which LW doesn't).

Well, fortunately, I did save copies of those discussions. You could find them in the FI archives if you wanted to. (Not blaming *you* at all but I do think this is kinda funny and I don't regret my actions.)

FYI, full disclosure, on a related note, I have mirrored recent discussion from LW to my own website. Mostly my own writing but also some comments from other people who were discussing with me, including you. See e.g. http://curi.us/2357-less-wrong-related-dicussion and http://curi.us/archives/list_category/126

I don't plan to review the 3 year old discussions and I don't want to re-raise anything that either one of us saw negatively.

> If you are interested in pursuing any of those discussions, maybe I can make a post summarizing my position and we can proceed in comments there.

Sure but I'd actually mostly prefer literature, partly because I want something more comprehensive (and more edited/polished) and partly because I want something more suitable for quoting and responding to as a way to present and engage with rival, mainstream viewpoints which would be acceptable to the general public.

Is there any literature that's *close enough* (not exact) or which would work with a few modifications/caveats/qualifiers/etc? Or put together a position mostly from selections from a few sources? E.g. I don't *exactly* agree with Popper and Deutsch but I can provide selections of their writing that I consider close enough to be a good starting point for discussion of my views.

I also am broadly in favor of using literature in discussions, and trying to build on and engage with existing writing, instead of rewriting everything.

If you can't do something involving literature, why not? Is your position non-standard? Are you inadequately familiar with inductivist literature? (Yes answers are OK but I think relevant to deciding how to proceed.)

And yes feel free to start a new topic or request that I do instead of nesting further comments.

> what I think about what Popper thinks about induction

I actually think the basics of induction would be a better topic. What problems is it trying to solve? How does it solve it? What steps does one do to perform an induction? If you claim the future resembles the past, how do you answer the basic logical fact that the future always resembles the past in infinitely many ways and differs in infinitely many ways (in other words, infinitely many patterns continue and infinitely many are broken, no matter what happens), etc.? What's the difference, if any, between evidence that doesn't contradict a claim and evidence that supports it? My experience with induction discussions is a major sticking point is vagueness and malleability re what the inductivist side is actually claiming, and a lack of clear answers to initial questions like those above, and I don't actually know where to find any books which lay out clear answers to this stuff.

Another reason for using literature is I find lots of inductivists don't know about some of the problems in the field, and sometimes deny them. Whereas a good book would recognize at least some of the problems are real problems and try to address them. I have seen inductivist authors do that before – e.g. acknowledging that any finite data set underdetermines theory or pattern – just not comprehensively enough. I don't like to try to go over known ground with people who don't understand the ideas on their own side of the debate – and do that in the form of a debate where they are arguing with me and trying to win. They shouldn't even have a side yet.
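As a concrete illustration of that underdetermination point (an added example, not one from the discussion): any finite data set is exactly fit by more than one pattern, and the candidate patterns diverge on the very next data point. A minimal sketch:

```python
# Illustrative sketch: the data 1, 2, 4, 8 fits "doubling" exactly, but it also
# fits the interpolating cubic (x^3 - 3x^2 + 8x) / 6 exactly; the two "patterns"
# disagree about the next observation.

xs = [1, 2, 3, 4]
ys = [1, 2, 4, 8]

def doubling(x):
    return 2 ** (x - 1)

def cubic(x):
    return (x**3 - 3 * x**2 + 8 * x) / 6

for x, y in zip(xs, ys):
    assert doubling(x) == y and cubic(x) == y  # both patterns fit all the data so far

print(doubling(5), cubic(5))  # 16 vs 15.0 -- the patterns diverge on the next point
```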

> I *think* I looked at that argument in particular because you said you found it convincing

FYI I'm doubtful that I said that. It's not what convinced me. My guess is I picked it because someone asked for math. I'd prefer not to focus on it atm.


curi at 12:40 PM on August 15, 2020 | #17419 | reply | quote

RIP

gjm wrote:

> I also saved a copy of much of the Slack discussion. (Not all of it -- there was a lot -- but substantial chunks of the bits that involved me.) Somehow, I managed to save those discussions without posting other people's writing on the public internet without their consent.

> You do not have my permission (or I suspect anyone else's) to copy our writing on LW to your own website. Please remove it and commit to not doing it again. (If you won't, I suspect you might be heading for another ban.)

> (I haven't looked yet at the more substantive stuff in your comment. Will do shortly. But please stop with the copyright violations already. Sheesh.)

No. Quoting is not a copyright violation. And I won't have a discussion with you without being able to mirror it. Goodbye and no discussion I guess?


curi at 1:17 PM on August 15, 2020 | #17424 | reply | quote

#17424 I don't understand how people think the internet works but note there exist other mirrors of the thread:

https://web.archive.org/web/20200815205343/https://www.lesswrong.com/posts/iAnXcZ5aGZzNc2J8L/the-law-of-least-effort-contributes-to-the-conjunction

http://archive.is/wdIh2

Does gjm think archive.is is a copyright violator and that lesswrong should ask archive.org for exclusion from their archives?


curi at 1:57 PM on August 15, 2020 | #17425 | reply | quote

People on LW tried telling me I'm solving the wrong problem. Dropping discussions isn't much of an issue. Other problems are way bigger deals.

But what happened when I talked with people on LW? Every single discussion was dropped without resolution.

You can go look at other topics that don't involve me. How many discussions happen? How far do they go? When and why do they end?

~All discussions end, so the stopping procedures are a big deal that comes up all the time.

The vast majority end fast and without explanation. Some problem comes up early on and isn't solved and no one even tries to solve it.

You might say that's a good system. Filter out most conversations quickly. Pick a few better ones to put more effort into. Fine. Go find all the conversations that went a bit longer. All the ones that people put noticeable effort into and which had several back and forths. How did those end? Mostly badly or ambiguously (silence).

The vast majority of conversations fail, even if you filter down to just the ones that got off the ground. And the vast majority of the time people don't say *why* they failed. Neither side gets clarity and misunderstandings don't get cleared up.

Am I missing something? This problem looks huge and ubiquitous to me. Do others see it differently?


curi at 4:19 PM on August 15, 2020 | #17430 | reply | quote

I think LW operates with implied, unwritten demands about discussion behavior, ending procedures, etc. They are lax and chaotic in some ways. But they can still be quite pressuring. gjm actually explicitly accused me of not being open to discussion and suggested I should label my posts to warn ppl if I don't want to discuss. He was pressuring me to reply to comments while also simultaneously arguing at some length that it's important never to pressure anyone to discuss more than they want to.

"Never discuss more than you feel like" is such a recipe for whim and bias to determine outcomes. I think it's unrealistic and LW doesn't really use it. I think people judge others routinely for not answering stuff. Like if they think something is a good point or question and you don't answer then they judge you negatively instead of assuming you had a good reason. But when I suggest explicit rules for ending conversations they tell me to just assume people have good reasons and give them unlimited leeway and space. And when I say "sure that's one type of conversation but under those terms a lot of discussions won't happen and I think it's good to offer other conversations too" then no one seems willing to understand what I said.


curi at 5:10 PM on August 15, 2020 | #17433 | reply | quote

After bothering to click the link and glance at the stuff he was calling illegal and demanding I delete (like this thread, which has quotes from LW), gjm wrote:

> I have looked, now. I agree that what you've put there so far is probably OK both legally and morally.

He then, in the same comment, suggested that my previous actions (sharing my LW slack discussions on FI) were illegal. If he thought so, why did he initiate a conversation with me? I think he did it in order to attack me. That's why his initial comments to me were low quality – which I noticed and I didn't want to engage with them. He has moments of better quality writing/thinking when he isn't so hostile, but he had undisclosed hostile 3 year old memories of me at the outset. Then he got more hostile over me noticing the low quality and judging it.

The guy is toxic and I don't plan to talk with him further.


curi at 9:59 AM on August 16, 2020 | #17445 | reply | quote

I told gjm this:

> gjm, going forward, I don't want you to comment on my posts, including this one.


curi at 5:28 PM on August 16, 2020 | #17450 | reply | quote

Viliam reply

https://www.lesswrong.com/posts/iAnXcZ5aGZzNc2J8L/the-law-of-least-effort-contributes-to-the-conjunction

> I have noticed something similar in art. Artists are supposed to create their art in *mysterious* ways. If it can be explained, it is not the *true* art. For example, many people who aspire to write novels are horrified when you suggest to them that they should attend a writing workshop. Even giving them evidence that many successful authors attended workshops at some moment of their career does not remove the visceral opposition to the idea; if you learn the art from others, it is *fake*; if you use a known mechanism, it is *fake*.

> Together with your example of women not losing status for doing make-up, it seems to me that the problem is not effort per se, but rather *effort made publicly*. Even the hard-working scientist and CEO are supposed to do their work behind the curtain, *mysteriously*.

> Why is mysterious work high-status, but non-mysterious work low-status?

> From instrumental perspective, it would seem that mysterious work is easier to protect against copying. If I do my work in public, my advantage is fragile; anyone could observe me and do the same thing. If you could install a hidden camera on your CEO, maybe you would find out that their everyday work is actually quite simple and you could do it too -- but you can't, and therefore even in the hypothetical case you could do the work, you will never know, and you will never get it.

> But socially, this explanation gets it backwards. Yes, the CEO *could* order the employees to show him all their work, *could* investigate their processes in detail, *could* ask them to provide a documentation on all their trade secrets, and their only options would be to obey or to lose their jobs... but the CEO most likely will not do that; at least not for the purpose of copying their trade secrets. When the CEO asks someone to explain themselves, it is usually done to show them who is the boss. Socially, the privacy of your work is not an instrument to get power, but rather a symbol of *already having it*.

> If other people can scrutinize your work and ask you to explain yourself, you are low-status.

> If you can close the door and tell everyone to fuck off, you are high-status.

> (Hence the popularity of open spaces, from the perspective of management. Hence the popularity of remote work, from the perspective of workers.)

> > LoLE comes from a community where many thousands of people have put a large effort into testing out and debating ideas.

> And they did it publicly. Which explains why they are treated as low-status.

> tl;dr -- the problem of "public effort" is the *lack of privacy* (which signals low status), not the effort itself

Thanks for the reply. I think privacy is important and worth analyzing.

But I'm not convinced of your explanation. I have some initial objections.

I view LoLE as related to some other concepts such as *reactivity* and *chasing*. Chasing others (like seeking their attention) is low status, and reacting to others (more than they're reacting to you) is low status. Chasing and reacting are both types of effort. They don't strike me as privacy related. However, for LoLE only the appearance of effort counts (Chase's version), so to some approximation that means public effort, so you could connect it to privacy that way.

Some people do lots of publicly visible work. There are Twitch streamers, like Leffen and MajinPhil, who stream a lot of their practice time. (Some other people do stream for a living and stream less or no practice.) Partly I think it's OK because they get paid to stream. But partly I think it's OK because they are seen as wanting to do that work - it's their passion that they enjoy. Similarly I think one could livestream their gym workouts, tennis practice sessions, running training, or similar, and making that public wouldn't ruin their status. Similarly, Brandon Sanderson (a high status fantasy author) has streamed himself answering fan questions while simultaneously signing books by the hundreds (just stacks of pages that aren't even in the books yet, not signing finished books for fans), and he's done this in video rather than audio-only format. So he's showing the mysterious process of mass producing a bunch of signed books. And I don't think Sanderson gets significant income from the videos. I also don't think that Jordan Peterson putting up recordings of doing his job - university lectures - was bad for his status (putting up videos of his lecture prep time might be bad, but the lecturing part is seen as a desirable and impressive activity for him to do, and that desirability seems like the issue to me more than whether it's public or private). The (perceived) *option* to have privacy might sometimes matter more than actually having privacy.

I think basically some effort isn't counted as effort. If you like doing it, it's not real work. Plus if it's hidden effort, it usually can't be entered into evidence in the court of public opinion, so it doesn't count. But my current understanding is that **if** 1) it counts as effort/work; and 2) you're socially allowed to bring it up **then** it lowers status. I see privacy as an important thing helping control (2) but effort itself, under those two conditions, as the thing seen as undesirable, bad, something you're presumed to try to avoid (so it's evidence of failure or lack of power, resources, helpers, etc), etc.


curi at 5:29 PM on August 16, 2020 | #17451 | reply | quote

2nd Viliam Reply

https://www.lesswrong.com/posts/iAnXcZ5aGZzNc2J8L/the-law-of-least-effort-contributes-to-the-conjunction

> > Chasing others (like seeking their attention) is low status, and reacting to others (more than they're reacting to you) is low status.

> > The (perceived) *option* to have privacy might sometimes matter more than actually having privacy.

> Yes, and yes.

> I think that the *content* of the work matters, too. Like, if I think that university professors are high status, then watching a professor giving lectures is simply watching someone demonstrating high status. (And this is relative to my status, because if I am upper-class and I think of all people doing useful work - including professors - as losers, then watching the professor's lecture is in my eyes just confirmation of his low status.)

> Maybe another important thing is how your work is.... oriented. I mean, are you doing X to impress someone specific (which would signal lower status), or are you doing X to impress people in general but each of them individually is unimportant? A woman doing her make-up, a man in the gym, a professor recording their lesson... is okay if they do it for the "world in general"; but if you learned they are actually doing all this work to impress one specific person, that would kinda devalue it. This is also related to optionality: is the professor required to make the video? is the make-up required for the woman's job?

> By the way, status is not a dichotomy, so it's like: not having to make any effort > making an effort to impress the world in general > making an effort to impress a specific person. Also, the specific work is associated with some status, but doing that work well is relatively better than doing it poorly. So, publishing your work has *two* effects: admitting that you do X, and demonstrating that you are competent at X. And the privacy also impacts the perceived competence: can you watch the average lesson recorded by a hidden camera, or only the best examples the professor decided to share?

> > But my current understanding is that **if** 1) it counts as effort/work; and 2) you're socially allowed to bring it up **then** it lowers status.

> Seems correct. "I spend 12 hours a day working on my hobby" sounds cool (unless the hobby is perceived as inherently uncool); "I spend 12 hours a day doing my job" sounds uncool (unless the job is perceived as inherently cool and enjoyable).

> Maybe another important thing is how your work is.... oriented. I mean, are you doing X to impress someone specific (which would signal lower status), or are you doing X to impress people in general but each of them individually is unimportant? A woman doing her make-up, a man in the gym, a professor recording their lesson... is okay if they do it for the "world in general"; but if you learned they are actually doing all this work to impress one specific person, that would kinda devalue it. This is also related to optionality: is the professor required to make the video? is the make-up required for the woman's job?

That makes sense.

You can also orient your work to a group, e.g. a subculture. As long as it's a large enough group, this rounds to orienting to the world in general.

Orienting to smaller groups like your high school, workplace or small academic niche (the 20 other high status people who read your papers) is fine from the perspective of people in the group. To outsiders, e.g. college kids, orienting to your high school peers is lame and is due to you being lame enough not yet to have escaped high school. Orienting to a few other top academics in a field could impress many outsiders - it shows membership in an exclusive club (high school lets in losers/everyone and hardly any of the current highest status people are in the club).

I think orienting to a single person can be OK if 1) it's reciprocated; and 2) they are high enough status. E.g. if I started making YouTube videos exclusively to impress Kanye West, that's bad if he ignores me, but looks good for me if he responds regularly (that'd put me as clearly lower status than him, but still high in society overall). Note that more realistically my videos would also be oriented to Kanye fans, not just Kanye personally, and that's a large enough group for it to be OK.

I didn't have other immediate, specific comments but I generally view these topics as important and hard to find quality discussion about. Most people aren't red-pilled and hate PUAs/MRAs/etc or at least aren't familiar with the knowledge. And then the PUAs/MRAs/etc themselves mostly aren't philosophers posting on rationalist forums ... most of them are more interested in other stuff like getting laid, using their knowledge of social dynamics to gain status, or political activism. So I wanted to end by saying that I'm open to proposals for more, similar discussion if you're interested.


curi at 7:16 PM on August 17, 2020 | #17459 | reply | quote

Viliam's 3rd reply re PUA

https://www.lesswrong.com/posts/iAnXcZ5aGZzNc2J8L/the-law-of-least-effort-contributes-to-the-conjunction?commentId=vBbX6dMLJMQxFTsqD

> I find this topic difficult to discuss, because as an (undiagnosed) aspie, I probably miss many obvious things about social behavior, which means that I work with *incomplete data*. If I find a counter-example to a hypothesis, that's probably useful, but if the hypothesis sounds plausible to me, that means little, because I can easily overlook quite obvious things.

> I am intellectually aware of the taboo against the "PUA/MRA/etc" cluster. My interpretation is that for a man, showing weakness is low-status, and empathy towards low-status men is also low-status, so discussing male-specific problems in empathetic way means burning your social karma like wildfire. (The socially sanctioned way to discuss male-specific problems is to be condescending and give obviously dysfunctional advice, thus enforcing the status quo. Enforcing status quo is obviously the thing high-status people approve of, and that is what ultimately matters, socially.) But I do not feel the taboo viscerally. I hope I gained enough politically-incorrect creds by writing this paragraph to make the following paragraphs not seem like an automatic dismissal of an inconvenient topic.

> The difficult thing about learning "how people function" is that, simply said, everyone lives in a bubble. Not only is the bubble shaped by our social class, profession, hobbies, but even by our beliefs, including our beliefs about "how people function". Which is, from epistemic perspective, a really fucked up situation. Like, for whatever reason, you create a hypothesis "most X are Y"; then you instinctively start noticing the X who are Y, and avoiding and filtering out of your perception the X who are not Y; then at the end of the day you collect all the data you observed and conclude that, really, almost all X *are* Y. It doesn't always work like this, sometimes something pierces your bubble painfully enough to notice, but it happens often. And it's not just about your perception; if you believe that all X are Y, sometimes the X who are not Y will avoid *you*; so even if you later improve your attention, you still get filtered data. I don't want to go full postmodern here, but this stuff really is crazy.

> So I wonder how much of the "PUA/MRA/etc" knowledge is really about the world in general, and how much is a description of their own bubble. Do the PUAs really have a good model of an average human, or just a good model of a drunk woman who came to a nightclub wanting to get laid? Do the MRAs generalize from their own bitter divorce a bit too much? How many edgy hypotheses are selected for their edginess rather than because they model the reality well? Also, most wannabe PUAs suck at being PUAs, which makes their models even less useful. The entire community is selecting for people who have some kinds of problems with social interaction, which on one hand allows them to have a lot of unique insights, but on the other hand probably creates a lot of common blind spots. Maybe it's a community where the blind are trying to lead the blind, and the one-eyed are the kings. And the whole business of "selling the advice that will transform your life" actively selects for Dark Arts.

> So... it's complicated. I would like to learn from people who are guided neither by social taboos nor by edginess. And I am not sure if I could contribute much beyond an occassional sanity check. Furthermore, I think it is important to actually go out and interact with real people; and I don't even follow my own advice here. (The COVID-19 situation provides unique problems but also unique opportunities. If people socialize in smaller groups, outside, and don't touch each other, it means less sensory overload. When the entire world is weird, any individual weirdness becomes less visible.) I am completely serious here: good theory is useful, but practice is irreplaceable; and I think *my* most serious mistake is lack of practice.

> Sometimes I even wonder whether I overestimate how much the grass is greener on the other side. Like, I know a few attractive and popular people, who got divorced recently, which in my set of values constitutes a serious fail, especially when it happens to people who probably did not suffer by lack of options. Apparently, just like intelligence, social skills are also a tool many use to defeat themselves.


curi at 11:18 AM on August 18, 2020 | #17462 | reply | quote

replying to Viliam

#17462

https://www.lesswrong.com/posts/iAnXcZ5aGZzNc2J8L/the-law-of-least-effort-contributes-to-the-conjunction

> Do the PUAs really have a good model of an average human, or just a good model of a drunk woman who came to a nightclub wanting to get laid?

PUAs have evidence of efficacy. The best is hidden camera footage. The best footage that I’m aware of, in terms of confidence the girls aren’t actors, is Mystery’s VH1 show and the Cajun on Keys to the VIP. I believe RSD doesn’t use actors either and they have a lot of footage. I know some others have been caught faking footage.

My trusted friend bootcamped with Mystery and provided me with eyewitness accounts similar to various video footage. My friend also learned and used PUA successfully, experienced it working for him in varied situations … and avoids talking about PUA in public. He also observed other high profile PUAs in action IRL.

Some PUAs do daygame and other venues, not just nightclubs/parties. They have found the same general social principles apply, but adjustments are needed like lower energy approaches. Mystery, who learned nightclub style PUA initially, taught daygame on at least one episode of his TV show and his students quickly had some success.

PUAs have also demonstrated they’re effective at dealing with males. They can approach mixed-gender sets and befriend or tool the males. They’ve also shown effectiveness at befriending females who aren’t their target. Also standard PUA training advice is to approach 100 people on the street and talk with them. Learning how to have smalltalk conversations with anyone helps people be better PUAs, and also people who get good at PUA become more successful at those street conversations than they used to be.

I think these PUA Field Reports are mostly real stories, not lies. Narrator bias/misunderstandings and minor exaggerations are common. I think they’re overall more reliable than posts on r/relationships or r/AmITheAsshole, which I think also do provide useful evidence about what the world is like.

There are also notable points of convergence, e.g. Feynman told a story ("You Just Ask Them?” in *Surely You’re Joking*) in which he got some PUA type advice and found it immediately effective (after his previous failures), both in a bar setting and later with a “nice” girl in another setting.

> everyone lives in a bubble

I generally agree but I also think there are some major areas of overlap between different subcultures. I think some principles apply pretty broadly, e.g. LoLE applies in the business world, in academia, in high school popularity contests, and for macho posturing like in the Top Gun movie. My beliefs about this use lots of evidence from varied sources (you can observe people doing social dynamics ~everywhere) but also do use significant interpretation and analysis of that evidence. There are also patterns in the conclusions I’ve observed other people reach and how e.g. their conclusion re PUA correlates with my opinion on whether they are a high quality thinker (which I judged on other topics first). I know someone with different philosophical views could reach different conclusions from the same data set. My basic answer to that is that I study rationality, I write about my ideas, and I’m publicly open to debate. If anyone knows a better method for getting accurate beliefs, please tell me. I would also be happy to pay for useful critical feedback if I knew any good way to arrange it.

Business is a good source of separate evidence about social dynamics because there are a bunch of books and other materials about the social dynamics of negotiating raises, hiring interviews, promotions, office politics, leadership, managing others, being a boss, sales, marketing, advertising, changing organizations from the bottom-up (passing on ideas to your boss, boss’s boss and even the CEO), etc. I’ve read a fair amount of that stuff but it’s not my main field (which is epistemology/rationality).

There are also non-PUA/MGTOW/etc relationship books with major convergence with PUA, e.g. The Passion Paradox (which has apparently been renamed *The Passion Trap*). I understand that to be a mainstream book:

> **About the Author**

> Dr. Dean C. Delis is a clinical psychologist, Professor of Psychiatry at the University of California, San Diego, School of Medicine, and a staff psychologist at the San Diego V.A. Medical Center. He has more than 100 professional publications and has served on the editorial boards of several scientific journals. He is a diplomate of the American Board of Professional Psychology and American Board of Clinical Neuropsychology.

The main idea of the book is similar to LoLE. Quoting my notes from 2005 (I think this is before I was familiar with PUA): “The main idea of the passion paradox is that the person who wants the relationship less is in control and secure, and therefore cares about the relationship less, while the one who wants it more is more needy and insecure. And that being in these roles can make people act worse, thus reinforcing the problems.”. I was not convinced by this at the time and also wrote: “I think passion paradox dynamics could happen sometimes, but that they need not, and that trying to analyse all relationships that way will often be misleading.” Now I have a much more AWALT view.

> The entire community is selecting for people who have some kinds of problems with social interaction

I agree the PUA community is self-selected to mostly be non-naturals, especially the instructors, though there are a few exceptions. In other words, they do tend to attract nerdy types who have to explicitly learn about social rules.

> Sometimes I even wonder whether I overestimate how much the grass is greener on the other side.

My considered opinion is that it’s not, and that blue pillers are broadly unhappy (to be fair, so are red pillers). I don’t think being good at social dynamics (via study or “naturally” (aka via early childhood study)) makes people happy. I think doing social dynamics effectively clashes with rationality and being less rational has all sorts of downstream negative consequences. (Some social dynamics is OK to do, I’m not advocating zero, but I think it’s pretty limited.)

I don’t think high status correlates well with happiness. Both for ultra high status like celebs, which causes various problems, and also for high status that doesn’t get you so much public attention.

I think rationality correlates with happiness better. I would expect to be wrong about that if I was wrong about which self-identified rational people are not actually rational (I try to spot fakers and bad thinking).

I think the people with the best chance to be happy are *content and secure* with their social status. In other words, they aren’t actively trying to climb higher socially *and* they don’t have to put much effort into maintaining their current social status. The point is that they aren’t putting much effort into social dynamics and focus most of their energy on other stuff.

> I am intellectually aware of the taboo against the "PUA/MRA/etc" cluster.

I too am intellectually aware of that but don’t intuitively feel it. I also refuse to care and have publicly associated my real name with lower status stuff than PUA. I have gotten repeated feedback (sometimes quite strongly worded) about how my PUA ideas alienate people, including from a few long time fans, but I haven’t stopped talking about it.

> I would like to learn from people who are guided neither by social taboos nor by edginess. And I am not sure if I could contribute much beyond an occassional sanity check.

I’d be happy to have you at my discussion forums. My community started in 1994, which is (not entirely) coincidentally the same year as alt.seduction.fast. The community is fairly oriented around the work of David Deutsch (the previous community leader and my mentor) and myself, as well as other thinkers that Deutsch or I like. A broad variety of topics are welcome (~anything that rationality can be applied to).


curi at 11:19 AM on August 18, 2020 | #17463 | reply | quote

I edited a note into my comment:

> I too am intellectually aware of that but don’t intuitively feel it. I also refuse to care and have publicly associated my real name with lower status stuff than PUA. I have gotten repeated feedback (sometimes quite strongly worded) about how my PUA ideas alienate people, including from a few long time fans, but I haven’t stopped talking about it.

[Edit for clarity: I mostly mean hostile feedback from alienated people, not feedback from people worrying I'll alienate others.]


curi at 12:02 PM on August 18, 2020 | #17464 | reply | quote

#17463 Viliam replied with a discussion ender:

> You made a lot of good points.

...

> But of course there are also other reasons to expand social skills, such as increasing my income, or increasing my impact on the world.

I disagree. Rand disagrees. Oh well. (I didn't say this on LW.)

> Thanks for the debate!


curi at 6:35 PM on August 18, 2020 | #17468 | reply | quote

PUA confuses the having mode with the being mode. Turning relationships into consumerism. Shallow and meaningless.

Having mode as in, you have to have food, water etc...

Being mode is an entirely different thing. Confusing these two leads to deep unhappiness. It is the equivalent of confusing intelligence with wisdom.

so PUA could be "successful" at the having mode. No doubt. But it will always fail at the being mode.


Periergo at 9:35 PM on August 18, 2020 | #17469 | reply | quote

#17469 PUA helps people be more successful at their goals. It isn't equivalent to promiscuity and is helpful for other goals too. The majority of people interested would be happy to get one gf/wife.

You're attacking stuff without knowing much about it.


curi at 10:55 AM on August 19, 2020 | #17471 | reply | quote

No, you don't understand. I was not talking about promiscuity.

I am not talking about decadent romanticism either.


Periergo at 7:05 PM on August 19, 2020 | #17491 | reply | quote

reply to "misc raw responses to a tract of Critical Rationalism" by MakoYass

https://www.lesswrong.com/posts/bsteawFidASBXiywa/misc-raw-responses-to-a-tract-of-critical-rationalism?commentId=uruKAbBMcLQcTqQ35

Hi, Deutsch was my mentor. I run the discussion forums where we've been continuously open to debate and questions since before LW existed. I'm also familiar with Solomonoff induction, Bayes, RAZ and HPMOR. Despite several attempts, I've been broadly unable to get (useful, clear) answers from the LW crowd about our questions and criticisms related to induction. But I remain interested in trying to resolve these disagreements and to sort out epistemological issues.

**Are you interested in extended discussion about this, with a goal of reaching some conclusions about CR/LW differences, or do you know anyone who is?** And if you're interested, have you read FoR and BoI?

I'll begin with one comment now:

> I am getting the sense that critrats frequently engage in a terrible Strong Opinionatedness where they let themselves wholely believe probably wrong theories

~All open, public groups have lots of low quality self-proclaimed members. You may be right about some critrats you've talked with or read.

But that is not a CR position. CR says we only ever believe theories *tentatively*. We always know they may be wrong and that we may need to reconsider. We can't 100% count on ideas. Wholely believing things is not a part of CR.

If by "wholely" you mean with a 100% probability, that is also not a CR position, since CR doesn't assign probabilities of truth to beliefs. If you insist on a probability, a CRist might say "0% or infinitesimal" (Popper made some comments similar to that) for all his beliefs, never 100%, while reiterating that probability applies to physical events so the question is misconceived.

Sometimes we *act*, *judge*, *decide* or (tentatively) *conclude*. When we do this, we have to choose something and not some other things. E.g. it may have been a close call between getting sushi or pizza, but then I chose only pizza and no sushi, not 51% pizza and 49% sushi. (Sometimes meta/mixed/compromise views are appropriate, which combine elements of rival views. E.g. I could go to a food court and get 2 slices of pizza and 2 maki rolls. But then I'm acting 100% on *that* plan and not following either original plan. So I'm still picking a single plan to wholely act on.)


curi at 10:19 AM on August 22, 2020 | #17561 | reply | quote

Rationality is a harsh mistress. It's demanding.

I think people differentiate poorly between pressure applied by rationality itself and pressure applied by me (the messenger).

Example: https://www.lesswrong.com/posts/7LmJWNQFchmivhNrT/rationally-ending-discussions?commentId=zQXjKHpsEp4SbBbkW


curi at 10:23 AM on August 22, 2020 | #17562 | reply | quote

A Solomonoff induction fan, who wrote some critical stuff related to DD and CR, expressed some interest in discussing. I wrote this.

https://www.lesswrong.com/posts/bsteawFidASBXiywa/misc-raw-responses-to-a-tract-of-critical-rationalism?commentId=X2zr9HXNqsFw9Scj2

A place to start is considering what problems we're trying to solve.

Epistemology has problems like:

What is knowledge? How can new knowledge be created? What is an error? How can errors be corrected? How can disagreements between ideas be resolved? How do we learn? How can we use knowledge when making decisions? What should we do about incomplete information? Can we achieve infallible certainty (how?)? What is intelligence? How can observation be connected to thinking? Are all (good) ideas connected to observation or just some?

Are those the sorts of problems you're trying to solve when you talk about Solomonoff induction? If so, what's the best **literature** you know of that outlines (gives high level explanations rather than a bunch of details) how Solomonoff induction plus some other stuff (it should specify what stuff) solves those problems? (And says which remain currently unsolved problems?)

(My questions are open to anyone else, too.)


curi at 9:39 AM on August 23, 2020 | #17593 | reply | quote

#17593 With a lot of these questions, I am with the pragmatists: good enough is good enough. If it works, it works; and if the people in their ivory towers want to spend yet another **millennium arguing these things to no conclusion**, let them be. Science advances all the same. idk if Kuhn said this, but knowledge advances one funeral at a time, which is of no practical use for the individual. "what is knowledge, what is an error." bah. Did it solve the problem and not kill you? ok, good enough!

And for the individual, these debates that have spanned the centuries... Again, good enough is good enough. This, I think, is what Peterson gets right. There is Matter, and there is what Matters. Focus on the latter, as life is too short to waste it on the former, but I also don't judge if your view differs.


Periergo at 4:09 PM on August 24, 2020 | #17606 | reply | quote

I talked with Max about a discussion he had on Less Wrong:

https://youtu.be/TTCKfMnNZgU

(skip the first couple minutes)


curi at 9:08 PM on August 25, 2020 | #17618 | reply | quote

https://www.lesswrong.com/posts/FuGfR3jL3sw6r8kB4/ricraz-s-shortform?commentId=zQR6jqhHthrJfecns

> they're willing to accept ideas even before they've been explored in depth

People also *reject* ideas before they've been explored in depth. I've tried to discuss similar issues with LW before, but the basic response was roughly "we like chaos where no one pays attention to whether an argument has ever been answered by anyone; we all just do our own thing with no attempt at comprehensiveness or organizing who does what; having organized leadership of any sort, or anyone who is responsible for anything, would be irrational" (plus some suggestions that I'm low social status and that therefore I personally deserve to be ignored. There were also suggestions – phrased rather differently but amounting to this – that LW will listen more if published ideas are rewritten, not to improve on any flaws, but so that the new versions can be published at LW before anywhere else, because the LW community's attention allocation is highly biased towards that).


curi at 3:32 PM on September 3, 2020 | #17780 | reply | quote

I posted a reply to *Mathematical Inconsistency in Solomonoff Induction?* where I used fancy LaTeX for the maths. I also considered what happens when l(X) != l(Y).

Even though LW apparently supports LaTeX, you can't do LaTeX references to equations.

I'm copying the post here but it's going to be ugly. Best viewed on LessWrong or my site:

* https://www.lesswrong.com/posts/hD4boFF6K782grtqX/mathematical-inconsistency-in-solomonoff-induction?commentId=yYEvizv5Jey5dXh83

* https://xertrov.github.io/fi/posts/2020-09-04-reply-to-math-contradiction-in-solomonoff-induction/

-----

I went through the maths in OP and it seems to check out. I think the core inconsistency is that SI implies $l(X \cup Y) = l(X)$. I'm going to redo the maths below (breaking it down step-by-step more). curi has $2l(X) = l(X)$ which is the same inconsistency given his substitution. I'm not sure we can make that substitution but I also don't think we need to.

Let $X$ and $Y$ be independent hypotheses for Solomonoff induction.

According to the prior, the non-normalized probability of $X$ (and similarly for $Y$) is: **(1)**

\begin{equation}

P(X) = \frac{1}{2^{l(X)}} \label{eq_prior}

\end{equation}

What is the probability of $X\cup Y$? **(2)**

\begin{equation}

\begin{split}

P(X\cup Y) & = P(X) + P(Y) - P(X\cap Y) \\

& = \frac{1}{2^{l(X)}} + \frac{1}{2^{l(Y)}} - \frac{1}{2^{l(X)}} \cdot \frac{1}{2^{l(Y)}} \\

& = \frac{1}{2^{l(X)}} + \frac{1}{2^{l(Y)}} - \frac{1}{2^{l(X)} \cdot 2^{l(Y)}} \\

& = \frac{1}{2^{l(X)}} + \frac{1}{2^{l(Y)}} - \frac{1}{2^{l(X) + l(Y)}} \label{eq_std_prob}

\end{split}

\end{equation}

However, by Equation (1) we have: **(3)**

\begin{equation}

P(X\cup Y) = \frac{1}{2^{l(X\cup Y)}} \label{eq_or}

\end{equation}

thus **(4)**

\begin{equation}

\frac{1}{2^{l(X\cup Y)}} = \frac{1}{2^{l(X)}} + \frac{1}{2^{l(Y)}} - \frac{1}{2^{l(X) + l(Y)}} \label{eq_both}

\end{equation}

This must hold for *any and all* $X$ and $Y$.

curi considers the case where $X$ and $Y$ are the same length. Starting with Equation (4), we get **(5)**:

\begin{equation}

\begin{split}

\frac{1}{2^{l(X\cup Y)}} & = \frac{1}{2^{l(X)}} + \frac{1}{2^{l(Y)}} - \frac{1}{2^{l(X) + l(Y)}} \\

& = \frac{1}{2^{l(X)}} + \frac{1}{2^{l(X)}} - \frac{1}{2^{l(X) + l(X)}} \\

& = \frac{2}{2^{l(X)}} - \frac{1}{2^{2l(X)}} \\

& = \frac{1}{2^{l(X)-1}} - \frac{1}{2^{2l(X)}}

\end{split}

\end{equation}

but **(6)**

\begin{equation}

\frac{1}{2^{l(X)-1}} \gg \frac{1}{2^{2l(X)}}

\end{equation}

and **(7)**

\begin{equation}

0 \approx \frac{1}{2^{2l(X)}}

\end{equation}

so: **(8)**

\begin{equation}

\begin{split}

\frac{1}{2^{l(X\cup Y)}} & \simeq \frac{1}{2^{l(X)-1}} \\

\therefore l(X\cup Y) & \simeq l(X)-1 \label{eq_cont_1} \\

& \square

\end{split}

\end{equation}

curi has slightly different logic and argues $l(X\cup Y) \simeq 2l(X)$ which I think is reasonable. His argument means we get $l(X) \simeq 2l(X)$. I don't think those steps are necessary but they are worth mentioning as a difference. I think Equation (8) is enough.

I was curious about what happens when $l(X) \neq l(Y)$. Let's assume the following: **(9)**

\begin{equation} \label{eq_len_ineq}

\begin{split}

l(X) & < l(Y) \\

\therefore \frac{1}{2^{l(X)}} & \gg \frac{1}{2^{l(Y)}}

\end{split}

\end{equation}

so, from Equation (2): **(10)**

\begin{equation} \label{eq_or_ineq}

\begin{split}

P(X\cup Y) & = \frac{1}{2^{l(X)}} + \frac{1}{2^{l(Y)}} - \frac{1}{2^{l(X) + l(Y)}} \\

\lim_{l(Y) \to \infty} P(X\cup Y) & = \frac{1}{2^{l(X)}} + \cancelto{0}{\frac{1}{2^{l(Y)}}} - \cancelto{0}{\frac{1}{2^{l(X) + l(Y)}}} \\

\therefore P(X\cup Y) & \simeq \frac{1}{2^{l(X)}}

\end{split}

\end{equation}

by Equation (3) and Equation (10): **(11)**

\begin{equation} \label{eq_cont_2}

\begin{split}

\frac{1}{2^{l(X\cup Y)}} & \simeq \frac{1}{2^{l(X)}} \\

\therefore l(X\cup Y) & \simeq l(X) \\

\Rightarrow l(Y) & \simeq 0

\end{split}

\end{equation}

but Equation (9) says $l(X) < l(Y)$ --- this contradicts Equation (11).

So there's an inconsistency regardless of whether $l(X) = l(Y)$ or not.
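A quick numerical sanity check of Equation (4), sketched in Python. The helper names `prior` and `implied_union_length` are made up for illustration; this isn't anyone's actual SI implementation. It solves Equation (4) for $l(X\cup Y)$ given assumed lengths and shows the implied length tracks $\min(l(X), l(Y))$ rather than anything like $l(X) + l(Y)$:

```python
import math

def prior(length):
    # Non-normalized prior from Equation (1): 2^-length
    return 2.0 ** -length

def implied_union_length(lx, ly):
    # Right-hand side of Equation (4): P(X) + P(Y) - P(X)P(Y)
    p_union = prior(lx) + prior(ly) - prior(lx + ly)
    # Solve 2^-l = p_union for l, i.e. the length the prior would have to assign to "X or Y"
    return -math.log2(p_union)

for lx, ly in [(10, 10), (50, 50), (10, 20), (50, 500)]:
    print(f"l(X)={lx}, l(Y)={ly} -> implied l(X or Y) ~= {implied_union_length(lx, ly):.3f}")

# Equal lengths come out near l(X) - 1, matching Equation (8);
# very unequal lengths come out near min(l(X), l(Y)), matching Equation (11).
```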


max at 7:23 AM on September 4, 2020 | #17805 | reply | quote

#17805

> I'm not sure we can make that substitution but I also don't think we need to.

Why couldn't we do that substitution?

And yeah I thought it'd be inconsistent in lots of cases. The system is not designed for this type of consistency, so why would it have it in general?

But their answer is: it's a more limited system, which can't handle things like the hypothesis "i think my brother OR my sister stole my money", and they just failed to advertise the limitations and how badly the system departs from common sense and reasonable expectation.


curi at 10:23 AM on September 4, 2020 | #17810 | reply | quote

> Why couldn't we do that substitution?

From LW OP:

>> We can select X and Y to be the same length and to minimize compression gains when they’re both present, so len(X or Y) should be approximately 2len(X). I’m assuming a basis, or choice of X and Y, such that “or” is very cheap relative to X and Y, hence I approximated it to zero.

I don't have a good reason for thinking it's wrong, just that I didn't fully understand why it's okay. I noticed the maths worked out anyway so didn't think about it too much.

Particularly:

> such that “or” is very cheap relative to X and Y

I think that means there's not much overhead to the program, so `X or Y` is roughly like concatenating them.

I'm not convinced there couldn't be more optimisations; you mention the case of 1.5len(X) instead of 2len(X) still not working out, which I buy. But why include it at all? If the core contradiction is len(X or Y) = len(X), I think it's neater to leave it out.

So I think you can make some substitution, but I don't know which, and I don't think we need to worry about it anyway.


Anonymous at 11:17 AM on September 4, 2020 | #17813 | reply | quote

Also, I did the maths on paper for the case l(X) < l(Y) first, and the substitution isn't needed for that. I changed the order in my post.


max at 11:28 AM on September 4, 2020 | #17814 | reply | quote

This reply that I got is not useful to me.

https://www.lesswrong.com/posts/9AWoAAA59hN9PEwT7/why-would-code-english-or-low-abstraction-high-abstraction?commentId=pKnPXR7zMXw6L8KTT

I don't think he wants to talk about what went wrong, what would be useful (he could try again if I gave feedback), or how I could write posts so as to reduce the rate of people writing replies of this nature. I doubt anyone here seriously wants to talk about it either, but nevertheless there it is.


curi at 5:41 PM on September 4, 2020 | #17825 | reply | quote

I left a reply to curi's post on abstraction: https://www.lesswrong.com/posts/9AWoAAA59hN9PEwT7/why-would-code-english-or-low-abstraction-high-abstraction?commentId=vJmvJwnkdNENFkhpu

---

Brevity of code and English can correspond via abstraction.

I don't know why brevity in low and high abstraction programs/explanations/ideas would correspond (I suspect they wouldn't). If brevity in low/high abstraction stuff corresponded, wouldn't that be contradictory? If a simple explanation in high abstraction is also simple in low abstraction, then abstraction feels broken; typically ideas only become simple *after* abstraction. Put another way: the reason to use abstraction is to make ideas/things that are highly complex into things that are less complex.

I think Occam's Razor makes sense only if you take into account abstractions (note: O.R. itself is still a rule of thumb regardless). Occam's Razor doesn't make sense if you think about all the *extra* stuff an explanation invokes, partially because that body of knowledge grows as we learn more, and good ideas become more consistent with the population of other ideas over time.

When people think of short code they think of doing complex stuff with a few lines of code. e.g. `cat asdf.log | cut -d ',' -f 3 | sort | uniq`. When people think of (good) short ideas they think of ideas which are made of a few well-established concepts that are widely accessible and easy to talk about. e.g. we have seasons because energy from sunlight fluctuates ~sinusoidally through our annual orbit.

One of the ways SI can use abstraction is via the abstraction being encoded in the program, the program inputs, and the observation data.

(I think) SI uses an arbitrary alphabet of instructions (for both programs and data), so you can design particular abstractions into your SI instruction/data language. Of course the program would be a bit useless for any other problem than the one you designed it for, in this case.

> Is there literature arguing that code and English brevity usually or always correspond to each other?

I don't know of any.

> If not, then most of our reasons for accepting Occam’s Razor wouldn’t apply to SI.

I think some of the reasoning makes sense in a pointless sort of way. e.g. the hypothesis `1100` corresponds to the program "output 1 and stop". The input data is from an experiment, and the experiment was "does the observation match our theory?", and the result was `1`. The program `1100` gets fed into SI pretty early, and it matches the predicted output. The reason this works is that SI found a program which has info about 'the observation matching the theory' already encoded, and we fed in observation data with that encoding. Similarly, the question "does the observation match our theory?" is short and elegant like the program. The whole thing works out because all the real work is done elsewhere (in the abstraction layer).
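To make that concrete, here's a toy sketch (my own illustration, with a made-up one-line "interpreter"; real SI enumerates arbitrary Turing machine programs, which this does not) of how the shortest matching program gets the biggest prior weight while all the real work hides in the encoding:

```python
from itertools import product

def run_toy_program(bits, n_out):
    # Made-up toy interpreter: treat the program's bits as literal output,
    # repeated/truncated to n_out symbols. It exists only to illustrate the prior.
    return "".join(bits[i % len(bits)] for i in range(n_out))

def shortest_matching_program(observation, max_len=8):
    # Enumerate programs in length order; the first match is the shortest one,
    # so among matching programs it carries the largest prior weight 2^-length.
    for length in range(1, max_len + 1):
        for combo in product("01", repeat=length):
            program = "".join(combo)
            if run_toy_program(program, len(observation)) == observation:
                return program, 2.0 ** -length
    return None, 0.0

# "Does the observation match our theory?" already boiled down to a single bit: "1".
print(shortest_matching_program("1"))  # -> ('1', 0.5)
```

The output looks impressive only because the question was already reduced to one bit before the toy inducer ever saw it; that reduction is where the abstraction layer did the real work.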


Anonymous at 8:19 AM on September 5, 2020 | #17850 | reply | quote

I think there might be a game of plausible deniability played at LW. You win by writing things that pretend not to be questions or parts of an ongoing dialog. Winning posts should be impersonal and "unbiased" (i.e. not have preconceived ideas except those that LW already accepts). If you post stuff about yourself it should be from the frame of reference of a passive empirical observer.

You can't tell ppl that they're wrong; instead you dress up all fancy and wait to be judged favorably.


Anonymous at 8:35 AM on September 5, 2020 | #17852 | reply | quote

#17852 I agree that something like that happens.


curi at 8:49 AM on September 5, 2020 | #17853 | reply | quote

I finished reading *Rationality: From AI to Zombies* (RAZ) by Eliezer Yudkowsky yesterday. https://www.readthesequences.com

I think it has notable good ideas despite the errors. So overall I liked it.

It's a long book. I took a break in the middle to read other books.

I think ~every LW response I get that I think is bad could be criticized in terms of something written in RAZ. I think they don't live up to their own material well.


curi at 5:04 PM on September 5, 2020 | #17869 | reply | quote

I left some more replies on curi's thread re transitive simplicity across abstraction layers.

I think people are continually forgetting that the conversations started from a question. It's more like they're responding on autopilot b/c they don't know how to answer properly.

---

> I think that the argument about emulating one Turing machine with another is the best you're going to get in full generality.

In that case I especially don't think that argument answers the question in OP.

I've left some details in another reply about why I think the *constant overhead* argument is flawed.

> So while SI and humans might have very different notions of simplicity at first, they will eventually come to have the same notion, after they see enough data from the world.

I don't think this is true. I do agree *some* conclusions would be converged on by both systems (SI and humans), but I don't think simplicity needs to be one of them.

> If an emulation of a human takes X bits to specify, it means a human can beat SI at binary predictions at most X times(roughly) on a given task before SI wises up.

Uhh, I don't follow this. Could you explain or link to an explanation please?

> The quantity that matters is how many bits it takes to specify the mind, not store it(storage is free for SI just like computation time).

I don't think that applies here. I think that data is *part* of the program.

> For the human brain this shouldn't be too much more than the length of the human genome, about 3.3 GB.

You would have to raise the program like a human child in that case^1. Can you really make the case you're predicting something or creating new knowledge via SI if you have to spend (the equiv. of) 20 human years to get it to a useful state?

How would you ask multiple questions? Practically, you'd save the state and load that state in a new SI machine (or whatever). This means the data is part of the program.

Moreover, if you did have to raise the program like any other newborn, you have to use some non-SI process to create all the knowledge in that system (because people don't use SI, or if they do use SI, they have *other* system(s) too).

1: at least in terms of knowledge; though if you used the complete human genome arguably you'd need to simulate a mother and other ppl too, but they have to be good simulations after the first few years, which is a regress problem. So it's probably easier to instantiate it in a body and raise it like a person b/c human people are already suitable. You also need to worry about it becoming mistaken (intuitively one disagrees with most people on most things we'd use an SI program for).

---

> The solution to the "large overhead" problem is to amortize the cost of the human simulation over a large number of English sentences and predictions.

That seems a fair approach in general, like how can we use the program efficiently/profitably, but I don't think it answers the question in OP. I think it actually implies the opposite effect: as you go through more layers of abstraction you get more and more complexity (i.e. simplicity doesn't hold across layers of abstraction). That's *why* the strategy you mention needs to be over ever larger and larger problem spaces to make sense.

So this would still mean most of our reasoning about Occam's Razor wouldn't apply to SI.

> A short English sentence then adds only a small amount of marginal complexity to the program - i.e. adding one more sentence (and corresponding predictions) only adds a short string to the program.

I'm not sure we (humanity) know enough to claim only a short string needs to be added. I think GPT-3 hints at a counter-example b/c GPT has been growing geometrically.

Moreover, I don't think we have any programs or ideas for programs that are anywhere near sophisticated enough to answer meaningful Qs - unless they just regurgitate an answer. So we don't have a good reason to claim to know what we'll need to add to extend your solution to handle more and more cases (especially increasingly technical/sophisticated cases).

Intuitively I think there is (physically) a way to do something like what you describe efficiently because humans are an example of this -- we have no known limit for understanding new ideas. However, it's not okay to use this as a hypothetical SI program b/c such a program does other stuff we don't know how to do with SI programs (like taking into account itself, other actors, and the universe broadly).

If the hypothetical program does stuff we don't understand and we also don't understand its data encoding methods, then I don't think we can make claims about how much data we'd need to add.

I think it's reasonable there would be *no* upper limit on the amount of data we'd need to add to such a program as we input increasingly sophisticated questions. I also think it's intuitive there's no upper limit on this data requirement (for both people and the hypothetical programs you mention).


Anonymous at 6:23 PM on September 6, 2020 | #17893 | reply | quote

#17893 None of the discussion is about the topic I was trying to bring up. They aren't trying to talk about what high level stuff is. They just seem to have assumed that there exists some Turing machine that assigns arbitrary simplicity or complexity to anything at all, and that this sidesteps everything I was saying. (If that's correct, it means you aren't even putting forward a substantive idea without picking a Turing machine. But they don't even try to explain why it's correct; they just assume it.)


curi at 6:36 PM on September 6, 2020 | #17894 | reply | quote
