
What I Sell

Philosophy Education

My speciality is rationality, problem solving, and how to think or learn. I focus on ideas useful in real life. I have original philosophical ideas (Critical Fallibilism) inspired most by Critical Rationalism, Objectivism, and Theory of Constraints.

I'm experienced at many applications, e.g. how philosophy can bring insight to artificial intelligence, parenting, relationships, psychology, or politics.

I sell digital products (PDFs, videos...) and personalized services (tutoring, advice, consulting...).

Business Consulting

Eli Goldratt died and his successors aren't creative enough. They're doing what he already explained how to do (which is worthwhile). I can apply Theory of Constraints ideas in ways that others wouldn't think of.

Unique Insight

I can do (for example) design, economics and science. But I'm not a designer, economist or scientist. I'm a philosopher. If you just want some regular design, hire a designer.

But what if you want the best that money can buy? What if regular designers aren't satisfying you? You still hire a designer, but you also hire me. I will provide design insights which are different than what you'd get from any designer. You could hire 100 designers and I'll still tell you some things that none of them do, because none of them are philosophers.

I've read and thought about design. I'm not a novice but it's not my profession. Apart from philosophy, I'm a generalist. Some people do design full time. I bring a broader, philosophical perspective. I can do this for most topics where good thinking makes a big difference. When standard results aren't enough, talk to me.


Elliot Temple on November 27, 2020

Messages (18)

This post doesn't bother with credibility/expertise signaling. If you don't know me but might be interested, you can ask me for that.


curi at 11:28 PM on November 27, 2020 | #1

Value based pricing deals with outcomes and their value. It's hard to sell much if you can't offer particular outcomes. It's hard to offer philosophy outcomes because people fail a lot and "have a 10% chance at improving" is not the kind of outcome that appeals to people. But their learning and progress is in their hands, not mine. I can't control their results. They have to understand things themselves. And a lot of failures are due to things they don't want to hear about like dishonesty.

It's easier to offer outcomes if they are a lot more limited like "learn 10 things about topic X". That's small and simple enough that I can make it work for a lot of ppl even if they are bad at learning. But will their life improve? Will they be able to use those 10 things to actually accomplish much? That's much harder to achieve. People usually learn stuff by listening/reading a bit, saying it makes sense to them, and forgetting after a few months. If they want to actually use it to accomplish something impressive they'll have to practice it (and learn how to judge success and failure) and question/criticize/doubt it and deal with lots of objections so they can learn it more thoroughly and think through how it works in many different scenarios (and practice them). That's way more work and whether someone actually does that stuff is outside of my control, so I can't offer the sorts of impressive, awesome outcomes that require the person to do such things.

A type of outcome that's easier to offer is like "I do the thinking and I tell you the answer". But that gets into "trust me" too much b/c I use lines of reasoning that ppl aren't able to follow in detail, and they're mostly pretty bad at debating or discussing stuff at a medium level of detail and then being satisfied. It can work sometimes but it runs into a lot of trouble with people disagreeing. And it's really hard for people to use conclusions they don't understand or disagree with, even if they make some sort of effort to use it anyway.


curi at 11:54 PM on November 27, 2020 | #2

Design/etc + philosophy is more powerful than design alone. Add the power of philosophy to your thing.

Difficult sell because people don't know what philosophy is, let alone how it could help with most topics.


curi at 12:29 AM on November 28, 2020 | #3

It's hard to present ppl's philosophy problems and offer solutions. Not just because I can't control their learning process, but also b/c they don't recognize the existence of most of their problems.


curi at 12:33 AM on November 28, 2020 | #4

https://twitter.com/lakens/status/1331668781194768385

>> Just published in @PsychScience a pre-registered replication of #egodepletion effect. Tks to Kathleen Vohs & @BJSchmeichel for leading. Bayesian meta-analysis showed ego-depletion effect was 4 times more likely under the null than alternative hypothesis. https://www.researchgate.net/publication/346303522_A_multi-site_preregistered_paradigmatic_test_of_the_ego_depletion_effect

> This massive study shows ego-depletion does not exist. The defense used to be: But there is a meta-analysis of 200 studies showing the effect is real!

> Now reflect on the waste. How many non-significant studies were file-drawered to get 200 type 1 errors?

Ego depletion is the idea that people have a limited amount of willpower that they use up. So if you do two things in a row that require willpower, you'll be less successful at the second.

People can't tell when they're doing philosophy. This is partly a philosophical issue. Social scientists have been screwing it up for hundreds of studies due to lack of philosophical insight. They need ideas that aren't in their field to handle a topic like this. It's an interdisciplinary topic but they don't recognize that.

The situation is difficult because 99% of philosophers would not be useful for this, because they're broadly incompetent and ineffective. But that doesn't mean you can do without philosophy. 99% of psychologists are also incompetent and ineffective at philosophy, so having them do the philosophy parts doesn't work either. People need to understand what sort of philosophical issues are involved and how to judge when some reasonable philosophical knowledge is being used or not. Something like that. But even that takes significantly more philosophical skill than is generally found in our culture.


curi at 12:07 PM on November 28, 2020 | #5

#2

> It's hard to sell much if you can't offer particular outcomes. It's hard to offer philosophy outcomes because people fail a lot and "have a 10% chance at improving" is not the kind of outcome that appeals to people. But their learning and progress is in their hands, not mine. I can't control their results. They have to understand things themselves. And a lot of failures are due to things they don't want to hear about like dishonesty.

One way of dealing with this is by structuring your philosophy assistance as a partnership rather than a sales transaction.

One possible downside: You would have to vet your "customers" (partners) and their plans like they have to vet you. You definitely wouldn't want to take on just anyone or any plan to apply your philosophy skill. But maybe you'd like that better.

Another possible downside: Instead of getting paid all up front you get paid in part or in full later via equity ownership and/or a royalty in dollars or percentage of sales. That's bad if you need money right away, but possibly allows you to capture a higher percentage of the value you create long term.

Maybe you could live with those possible downsides or maybe not. But if you could, the benefit would be the same basic value prop as lots of other business partnerships: risk sharing and skill diversity. Business outcomes vary widely and people expect that. "Have a 10% better chance at success" by bringing on a partner with skills the existing owner(s) lack is something people regularly do.


Andy Dufresne at 2:52 PM on November 28, 2020 | #6

#6 It's pretty impossible to trust anyone new/recent with that kinda thing. Even veterans quit FI sometimes. New people are unreliable. They can easily disagree with something, get offended by my ideas, take some criticism badly, etc. Most people quit in the first year. Having a partnership with someone who turned hostile to FI would be a mess. And the people who become passive lurkers don't work well either.


curi at 5:13 PM on November 28, 2020 | #7

#7 And I don't think you can treat it as just a biz transaction like a partnership with a designer or economist. Typical people don't know what my area of expertise is, what to defer to me on, etc. Nor do they know how to defer without understanding (even if they wanted to) when they find out I'm criticizing some stuff they thought was in their own expertise, or I'm saying they need to do X but they don't actually understand how and need to learn some concepts in order to be able to follow directions.

To work with me, people need some understanding of what I do and some sort of agreement or approval about it, but that kinda thing is often withdrawn even after being present initially.


curi at 5:18 PM on November 28, 2020 | #8

I especially liked the "Unique Insight" section.

This is a good post for selling your consulting services to people who already respect you. There's a need for that, because knowing that you're good at explaining things or winning arguments or coming up with philosophy ideas doesn't necessarily imply, to your average fan, that you would be able to help in a lot of subject areas where you're not an expert. And the post explains what form that help could take: you could work in tandem with people who *are* subject matter experts but *are not* philosophy experts, and provide guidance and new ideas that non-philosophers wouldn't think of.


Alisa at 6:13 PM on November 28, 2020 | #9

#9 Yes. Asking me to do a web design or science experiment from scratch, alone, doesn't make sense. But I can advise, critique and speak to high level strategy, which adds value mostly if you want something significantly better than the norm. That's a partial, inadequate description of the power of philosophy, but is reasonably understandable.


curi at 6:19 PM on November 28, 2020 | #10

#10 Another thing I can do is look at a field or topic area, research it, and figure out which existing ideas are any good and which aren't, and why. E.g. serotonin, coronaviruses, nutrition, free will, IQ tests or ego depletion would be appropriate topics.


curi at 6:21 PM on November 28, 2020 | #11

#7 and #8

All the reasons you give for possible failure make sense. They're real and significant risks. It seems to me that:

Effectively engaging you as a business consultant (whatever the details of the arrangement) is a high value / high risk proposition. It could create or destroy significant value. The risk is not primarily what you charge. My intuition is that in most cases half-assing, misapplying, or starting then reversing your suggestions could be a lot worse than keeping the status quo.

You are in a better position than your potential customers to determine if certain aspects of a given engagement make it more likely to produce or destroy value. I'll stipulate for discussion that if value is destroyed it's all or mostly the fault of the potential customer, not you. Nevertheless, you know more than they do about things like:

- Suitability of the business to improvement by application of your unique skills.

- What kind of results can be reasonably expected as well as how quickly and how clear those results are likely to be.

- How much help the owners are actually going to need in order to understand and make productive use of your suggestions.

- Whether the background / history of the owners indicates they're more or less likely to misunderstand, quit, take criticism badly, get offended, become passive, etc. In short: the likelihood and severity of them sabotaging the engagement before it can produce positive value.

In a better world perhaps your potential customers would know some or all of this themselves. But I think they don't.

Putting myself in the shoes of a potential customer, it's my fault if significant value gets destroyed in engaging you as a business consultant. But I have poor visibility into how likely that actually is in my particular case. And I have (correct AFAIK) intuition that the risk is large rather than small. So unless I have both the resources and inclination to "take a flyer", the responsible choice is that I shouldn't engage you because it's too risky.

I think that's one source of the problem:

> It's hard to sell much if you can't offer particular outcomes. It's hard to offer philosophy outcomes because people fail a lot and "have a 10% chance at improving" is not the kind of outcome that appeals to people.

I think people are behaving reasonably within the context of what they know and avoiding a large risk of value destruction. Partnership might not be the best way to mitigate some of that risk, but I think it'd help if you had some way of addressing it.


Andy Dufresne at 5:17 PM on November 29, 2020 | #12

> Effectively engaging you as a business consultant (whatever the details of the arrangement) is a high value / high risk proposition. It could create or destroy significant value. The risk is not primarily what you charge. My intuition is that in most cases half-assing, misapplying, or starting then reversing your suggestions could be a lot worse than keeping the status quo.

I think you're significantly overestimating this risk because doing that kind of thing significantly increases the risk of a failure *that stands out* (a weird/abnormal failure) as against a failure that doesn't stand out (maybe isn't even noticed). The effect is more about changing the failure type than increasing the failure risk.

A lot of the conflicts I have with people relate to ways that they are already failing that are visible to me but not (previously) to them. On gaining some initial visibility, people often deny a failing, feel bad about it, or have other negative reactions. This can then result in something visibly bad happening, but the main alternative was ongoing hidden failure causing ongoing problems in life with causes that aren't understood.

When a visibly negative outcome happens, what is the *relative* result compared to that not happening? It could easily be about the same, better or worse. I see no clear reason to think it tends to be worse, and there is a clear general principle for why it'd tend to be better: the truth is powerful, valuable, etc. (even just a bit of it). This can manifest in many ways. I think it's a trend.

I'm talking here about objectively better or worse. Better or worse from one's perspective can be affected by losing an "ignorance is bliss" type situation, without things actually being worse, and I do grant that finding out about things skews results that way compared to just not knowing. I think that's a good thing (better to skew things towards truth than towards ignorance is bliss) and so do most people who would ever consider hiring me (or having a discussion with me).

It also depends on context and type of thing. Some things are more risky while acting conventionally than others. People are unaware of many of the risky ones and why they're risky.

> You are in a better position than your potential customers to determine if certain aspects of a given engagement make it more likely to produce or destroy value. I'll stipulate for discussion that if value is destroyed it's all or mostly the fault of the potential customer, not you. Nevertheless, you know more than they do about things like:

Yes. Discussing issues like this is typical in presales conversation. They're pretty hard to speak to preemptively b/c I'm not targeting a specific audience like married-with-no-kids age 30-50 Objectivist programmers. (Some of my articles do speak preemptively to some things. Some potential clients would be unable to connect the article to their particular situation anyway without personalized comments.)

> I think people are behaving reasonably within the context of what they know and avoiding a large risk of value destruction.

That sounds similar to saying people are behaving reasonably by:

- not posting at FI

- not reading DD, Rand, Goldratt, Mises

- going through life conventionally

- not trying to be rational


curi at 6:28 PM on November 29, 2020 | #13

#13

> I think you're significantly overestimating this risk because doing that kind thing significantly increases the risk of a failure *that stands out* (weird/abnormal failure) as against a failure that doesn't stand out (maybe isn't even noticed). The effect is more about changing failure type than increasing failure risk.

I think different kinds of things can happen that might fit the above description. I'm not confident I understand your thought process well enough to come up with a scenario that'd fit what you have in mind. So I'll try some possibilities.

The scenario that seems most likely given what you wrote is a business that's currently generating 10 units of value in one domain. In an engagement you discover and point out that in the process the business is also destroying 15 units of value in another domain that wasn't being noticed or addressed. Now the net loss (5 units of value destruction) that was happening all along becomes visible. The owner fails to learn enough to fix the problem, but now sees the loss he was previously ignoring. Because of that loss he feels bad, gives up / starts acting worse, and maybe causes even more value destruction.

Another possible scenario is a business that's currently generating 10 units of value but you can see some reason why it will probably (but conventionally) start losing 10 units of value in the future if nothing unconventional is done. If the owner screws up the changes though, it turns the business into a visible and unconventional loss. When the owner does, in fact, screw it up that takes a loss that would have been attributed to "ordinary" and unpredictable bad business luck and turns it into a loss that's attributed to the owner's bad management.

Another scenario is a business that's currently generating 10 units of value the owner thinks might be able to be improved. You convince the owner that with some changes the business should be generating 50 units of value. The owner now sees that possibility, thinks that he ought to be doing that, explains to others how it's achievable and he's going to do it. But because of his own failures he achieves only 15 units of value. That's still an improvement! But now the owner is unhappy and looks like he failed because he missed expectations by a large amount (35 units).

Maybe one or more of these are what you had in mind or maybe something(s) different.

What I had in mind in #12 with:

> My intuition is that in most cases half-assing, misapplying, or starting then reversing your suggestions could be a lot worse than keeping the status quo.

...was something like a business that's currently generating 10 units of value. You convince the owner that with some changes it should be generating 50 units of value. The owner attempts to make the necessary changes. Because of the owner's failures in learning / implementing your ideas and also not leaving things alone, the business starts losing 10 units of value.

Do you think that the scenario I had in mind is actually unlikely?


Andy Dufresne at 4:37 PM on November 30, 2020 | #14

#13

> > I think people are behaving reasonably within the context of what they know and avoiding a large risk of value destruction.

>

> That sounds similar to saying people are behaving reasonably by:

>

> - not posting at FI

>

> - not reading DD, Rand, Goldratt, Mises

>

> - going through life conventionally

>

> - not trying to be rational

For the majority of people, yes I think they're behaving reasonably in regard to those things given their life situation and background.

I don't think FI's knowledge is developed to the point where most people can engage with it and not incur a large risk of making their lives worse than convention.

What is needed includes things like FI knowing what to say to an average person so that when they read FI they understand FI's criticisms of convention, know when they understand something unconventional well enough to try it instead of convention, and know how to not feel bad about doing convention in the meantime.


Andy Dufresne at 4:52 PM on November 30, 2020 | #15

> The scenario that seems most likely given what you wrote is a business that's currently generating 10 units of value in one domain. In an engagement you discover and point out that in the process the business is also destroying 15 units of value in another domain that wasn't being noticed or addressed. Now the net loss (5 units of value destruction) that was happening all along becomes visible. The owner fails to learn enough to fix the problem, but now sees the loss he was previously ignoring. Because of that loss he feels bad, gives up / starts acting worse, and maybe causes even more value destruction.

Sure and the other examples are fine too. More generally, people can go from being unaware of some problems to visibly, consciously failing to solve them.

Doing poorly at solving a problem you know about is often better than, or at least no worse than, being unaware of that problem.

> Do you think that the scenario I had in mind is actually unlikely?

I think it's reasonably unlikely for a business. There's risk everywhere in life; this could happen with any change to a business; but I don't think I'm a particularly high risk for it. I'm capable of limiting suggestions and foreseeing risks. One way it can happen is if I start telling someone an idea and then they're like "great" and run off to do it, while I'm saying "wait, that was the rough draft, hold on, it definitely won't work without changes and more understanding first!" but they ignore me. Or more subtly, I may ask too few screening questions, not challenge stuff enough, etc., possibly b/c the client is a bit sensitive to criticism so I basically have a limited budget for doing that stuff. Even so, I can calibrate suggestions to what I think will be workable in the scenario, given the constraints, and I'm pretty good at such things and don't regard it as especially risky.

Businesses tend to deal in fairly concrete things and it's easier to give them clear, specific advice.

I think similar problems are more common in a non-biz context, and in particular in a philosophy idea context where things are less concrete. The biggest cause of this is that ppl are often using free resources instead of paying me to make my goal be to actually help them. Some ppl will make decisions about pretty important stuff like e.g. monogamy or rationality with a $0 budget. And they didn't bother to read some of the warnings and additional material, ignored the two times I tried to suggest exploring the ideas further, and also never bothered to tell me they were planning to take any risky action irl instead of just speculating about theory. So then maybe they do something dumb.

Or they post to FI, don't pay, engage in debate, and start jumping to conclusions, like that they should immediately stop doing X in their life b/c they found out about a criticism of X they couldn't refute.

Or they post to FI, try to learn FI, get stuck, and spiral downward. Or really, I don't think that's very dangerous for newbies. The main danger to newbies is that they have a negative experience and leave, and it really didn't cost them much (often they already got some bigger benefits anyway). Some ppl have been involved in FI in a bigger way, and then it has more capability to have a significant positive or negative effect, and due to esp. dishonesty sometimes some things go visibly wrong, but it's hard to tell if they would have been any better off in some alternative world.

> For the majority of people, yes I think they're behaving reasonably in regard to those things given their life situation and background.

Do you also think that vs. more or less everything outside the mainstream? Socialism? Pure capitalism? Objectivism? Theory of Constraints? Critical Rationalism? Or is FI somehow more dangerous than DD's books? (The main safety feature of those is that they broadly avoid telling you how the ideas in the book connect to real world stuff like parenting, politics, etc. This means on the one hand that some ppl don't make the connections at all, but others make the connections wrong. And there are plenty of other books which are more willing to give concrete advice and have something unconventional to say – do you regard all those books as equally dangerous to FI, or are you viewing FI as different in some major way?)


curi at 8:25 PM on November 30, 2020 | #16

#16

> do you also think that vs more or less everything outside the mainstream?

I don't think casually looking at non-mainstream things without either understanding the need for or making major life changes - which I'll call "dabbling" - is particularly dangerous. The greatest danger in dabbling is when people start to believe their own bullshit about being serious and move from dabbling into actually making major life changes. Which they're typically not ready for.

Dabbling in socialism, pure capitalism, objectivism, DD's books, etc. is all most people who are "into" those topics actually do. And I think that dabbling is reasonable for most people as compared to trying to make major life changes based on them.

I don't even think dabbling in FI is a big risk on its own. But FI makes it harder to just dabble than most of the other topics you listed. FI's particular combination of honesty + relation to common concretes + interactivity + urgency (maybe some other factors) is harder to understand significant parts of and still remain a dabbler.

There are examples of people that do more than dabble in socialism. They become left wing activists of various sorts, many of whom I think you agree are dangerous. Not recommended / not reasonable.

It's harder to think of lots of examples of the other ones though.

Who *actually* more than dabbles in Objectivism and Pure Capitalism? Some Libertarians (which can be dangerous and unreasonable - tax evaders, radicals, financial schemers of various sorts)? ARI leaders (bad, though not necessarily screwing their lives up)?

Who *actually* more than dabbles in DD's books and CR other than FI people? My experience here is pretty thin so maybe there are lots I don't know about.

I don't know enough about TOC in the real world to know whether it's common for people to more than dabble in it either and if so, what its dangers are.

One thing I can think of other than FI that lots of people *actually* more than dabble in is non-mainstream religions. I'm talking about Moonies, Nation of Islam, FLDS, Branch Davidians, Scientology, etc. They lack FI's honesty, but they're also tied to concretes, interactive, and they carry a message of urgency. They're hard to dabble in. I think most people who stay away from them, rather than find out what they teach and why it's wrong, are behaving reasonably.

Another thing I can think of other than FI that lots of people *actually* more than dabble in is homeschooling. That's also concrete, interactive, and has a message of urgency. Can be dangerous too (lots of horror stories).

Nevertheless, as of 2019 I probably would have said most people (at least parents of school-age kids) were unreasonably avoiding learning about homeschooling. Because of Corona I'm not sure if that's still the case. I know a lot more people are homeschooling now than last year, but I don't know enough details about who, how many, what they're doing, etc. to express an opinion.

I think one thing homeschooling has going for it that the other examples don't is mainstream options are so bad as to be kinda dangerous too. And homeschooling is kinda mainstream already.

Maybe a better comparison is unschooling, which fewer people do / is less mainstream. But still not a lot of dabblers - people who know anything significant about unschooling tend to either try to do unschooling or reject it. I think unschooling is dangerous. So I think most people who avoid learning about unschooling are acting reasonably.


Andy Dufresne at 8:06 PM on December 1, 2020 | #17

> Who *actually* more than dabbles in DD's books and CR other than FI people? My experience here is pretty thin so maybe there are lots I don't know about.

There are academics who write CR related stuff. I broadly don't think they're very good.

> Who *actually* more than dabbles in Objectivism and Pure Capitalism?

Some Mises Institute people and some of their allies. Some of them do OK with academic writing or teaching. Some do stuff like podcasts or blogs.

Lots of leaders in that community have major flaws. Similar to ARI leaders, I wouldn't blame the issue primarily on dabbling. It's more about being wrong. E.g. some of them dislike Reisman and have attacked his work with shoddy arguments and then refused to engage in much back-and-forth discussion to try to resolve matters.

> I don't know enough about TOC in the real world to know whether it's common for people to more than dabble in it either and if so, what its dangers are.

Lots of people have taken TOC seriously, learned a significant amount, and made major changes at their company. This includes low level people and high level people (in management hierarchy). Goldratt has talked about some of the major problems that have come up. One of the big issues is that the majority of people who have tried TOC are not a top executive. They work in a particular division rather than having a position where they deal with multiple divisions. Then what happens is TOC works great and it starts tribalist infighting with other divisions who look bad in comparison. The people who had success with TOC started asking other people to do it, but they also had a lot of ownership over it as their initiative, their thing, so if everyone follows their lead they are gonna get a ton of credit, which creates resistance. Goldratt thinks it's important that some top executives are involved from the start and that TOC is a company-wide initiative at the beginning, so it works as a unified global thing instead of a local improvement in one branch of the company. Getting top execs interested before showing how great it works in one branch, as a demo, has its own difficulties.

Goldratt provides some commentary and analysis about the difficulties with TOC implementations in the POOGI Forum letters which follow his 8 session GST training program/course that was designed to be good at getting top management on board with TOC and showing them the big picture and TOC's solutions for every major branch of the company.

There are somewhat similar ideas with learning FI where a person has multiple divisions/branches/parts of themselves and one part wants rationality but it's not a unified, global effort involving their whole personality/mind.

> I think one thing homeschooling has going for it that the other examples don't is mainstream options are so bad as to be kinda dangerous too

Mainstream philosophy options are dangerously bad!

If I write something new and you think it's dangerous in some specific way, please point out your concern.


curi at 8:43 PM on December 6, 2020 | #18
