Review of My Quotation Accuracy

In the last few months, Dennis Hackethal did an extensive review of my writing and wrote a large amount about me. One thing he did was look for misquotes. His attitude to misquotes has similarities to mine. It looks like he may have been inspired by me, but he doesn't credit me and he's pedantic in ways I disagree with.

If I misquoted, I'd want to know. But I'm happy to report he was unable to find a single error in my quotations.

I thought I was doing a good job with my quotations, even in informal contexts, but it's nice for an independent third party to spend unpaid hours validating this. And he's hostile towards me, so I'm not worried that he's holding back criticism to be nice. However, a potential source of error for the review is that he doesn't understand misquotes very well (see below); maybe a smarter reviewer would have found errors.

He claimed to have found errors in 15 quotes and called my results "terrible", but none are actually errors. For each quote, he presented specific information about what error he was alleging. He didn't claim that I got a single word wrong. Instead, he was pedantic.

Most of the "... misquotes ..." are just that I often don't put an ellipsis at the start or end of a quote (see how weird it looks earlier in this sentence?). That's an intentional style choice which is widely used. Style guides have varied guidelines about this topic, but I don't know of any that agree with Hackethal's view that you always need starting and ending ellipses when quotes start or end mid-sentence. Those ellipses are commonly discouraged. On his website he recommends following style guides, seemingly unaware that they disagree with him.

To be thorough, here are more details covering all the other alleged errors. Sometimes emails or PDFs have mid-sentence line breaks which I fix. Also, I once clearly labelled a quote of song lyrics as abridged and linked the full lyrics, but he's calling that an error because I didn't individually indicate every abridgment within the quote. And once, when writing in plain text, I didn't include italics in a quote because that isn't supported by the plain text format. I was writing an email to people who I expected to be unfamiliar with internet norms and modern technology, and I wanted to keep it short and avoid confusing them, so I did it that way on purpose. Also, I quoted a dictionary as saying "the" when on the dictionary's website it says "The" with a capital letter. I copy/pasted my quote from the Mac Dictionary app, which has a lower case letter; I didn't change the capitalization.

So the review of my use of quotations didn't find any mistakes, only intentional style choices (mostly omitting ellipses at the start or end of quotes).


Elliot Temple | Permalink | Messages (0)

Junk Email Filters and Philosophy Consulting

I just found a philosophy consulting request in my spam email folder.

If you've sent me a philosophy-related email, for consulting or something else, and I haven't responded, then I may not have seen it. Sorry! Feel free to resend it. I've now whitelisted some keywords like "philosophy", "Popper" and "consult", so this shouldn't happen again. You can include the word "philosophy" in your email on purpose to make sure it isn't marked spam.
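To give a concrete picture of what that whitelist amounts to, here's a minimal sketch in Python (illustrative only – not my actual mail setup; the keyword list and function name are just for this example):

    # Minimal sketch of keyword whitelisting (illustrative, not my real mail rules).
    # If a message mentions any whitelisted keyword, it bypasses the spam filter.

    WHITELIST_KEYWORDS = ["philosophy", "popper", "consult"]

    def bypasses_spam_filter(subject: str, body: str) -> bool:
        text = (subject + " " + body).lower()
        return any(keyword in text for keyword in WHITELIST_KEYWORDS)

    # An email that says "philosophy" anywhere won't be marked spam:
    assert bypasses_spam_filter("Question", "I'd like a philosophy consultation.")
    assert not bypasses_spam_filter("FREE PILLS", "Click here now!!!")

The real version is a mail client rule rather than code, but the logic is the same: match any keyword, case-insensitively, anywhere in the message.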

I mostly have philosophy discussions on my public forum, but I frequently give at least one short response when people email me, so if you got ignored there's a good chance that was unintentional.


Elliot Temple | Permalink | Messages (0)

Reddit Response About Copyright and Plagiarism

I responded to the Reddit post Concerns about plagiarism in an eBook I'd like to publish:

I'm going to try to keep this short and sweet...

I took extremely extensive notes (~70 pages) during an occupational licensure video course by a major education company.

I'd like to edit those notes down into a little eBook but am concerned about plagiarism and whatever legal repercussions that could bring.

I tried to put as much as reasonably possible into my own words, but some short definitions or explanations were just best kept as they were stated or presented in the video course.

I've put various sections of my notes through free plagiarism checkers and have scored >96% unique each time and haven't seen any links back to the major company that made the video course.


So... How concerned should I be about plagiarism?

If I give the eBook away for free on my business's website would that eliminate ALL risks from any potential plagiarism?

(I'd rather sell the eBook but am willing to go the free route if tiny plagiarisms could turn into a huge PITA)

Using exact phrases from the course is a copyright violation unless you follow the rules of “fair use”. A positive factor for fair use is creating a transformative work (which is unclear from your description). If your work could substitute for the original work, that’s a negative factor. Using it commercially (selling it or using it to promote a business) is another negative factor. Also, if you want to claim it’s fair use, you should put each exact phrase inside quotation marks and cite the source. You can find more information at https://www.avvo.com/legal-answers/can-i-sell-book-summary-like-cliff-notes-or-monarc-312496.html and https://en.wikipedia.org/wiki/Fair_use

If you do commentary, criticism or write your own original thoughts, that would help make this legal. If you just copy their work as a whole, that may be a copyright violation even if you reword most of it. See https://en.wikipedia.org/wiki/Paraphrasing_of_copyrighted_material

Reworded material can be plagiarism. If you don’t want to be a plagiarist, then your book should inform your readers that it’s a summary of the course and say which course it is and which company it’s from. If you present it as your own ideas, not as a derivative work, then it’s clearly plagiarism. Plagiarism is unethical, not illegal.

Both fair use and plagiarism have legal risks. The lawyers at this large company may send you a cease and desist letter or file a lawsuit. They can do that even if you didn’t break a law. If you’re in the right legally, proving that in court would still be expensive and risky. Making your book free wouldn’t remove your risk; takedown demands are pretty common even for free stuff which is clearly legal.


Elliot Temple | Permalink | Messages (0)

Government Policy Proposals and Local Optima

Suppose you can influence government policy. You can make a suggestion that will actually be followed. It could be on any big, complex topic, like economic policy, COVID policy, military policy, etc. What will happen next?

Your policy will probably be stopped too early or continued too long. It will probably have changes made to it. And other policies that conflict with it will probably be implemented at the same time.

Most people who influence policy only do it once. Only a small group of elites get to influence many policies. This post isn't about those elites.

Most people who try to influence government policy never have a single success. But some are luckier and get to influence one thing, once.

If you do get listened to more than once, it might be for one thing now, and something else several years later.

If you only get to affect one thing, what kind of suggestion should you make?

Something that works well largely independently of what other policies the government implements. Something that's robust to changes, so it'll still work OK even if other people change some parts of it. Something that's useful even if it's done for too short or too long a time period.

Also, you'll be judged by the outcome for the one idea you had that was used, even though a bunch of other factors were outside of your control, and the rest of your plan wasn't followed.

If you suggest a five-part plan, there's a major risk: people will listen to one part (if you're lucky), ignore the other four parts, and then blame you when it doesn't work out well. And if you tell them "don't do that part unless you do all the rest", they may do it anyway and still blame you; otherwise, you're just not going to influence policy at all. If you make huge demands and want to control lots of things, and say it's all or nothing, you can expect to get nothing.

In other words, when suggesting government policy, there's a large incentive to propose local optima. You want proposals that work well in isolation and provide some sort of benefit regardless of what else is going on.

It's possible to choose local optima that you think will also contribute to global (overall) benefit, or not. Some people make a good faith effort to do that, and others don't. The people who don't make that effort have an advantage at finding local optima they can get listened to about, because they have a wider selection available and can optimize for other factors besides big-picture benefit. The system, under this simplified model, does not incentivize caring about overall benefit. People might already care about that for other reasons.

When other people edit your policy, or do other policies simultaneously, they will usually try to avoid ruining the local benefits of the policy. They may fail, but they tend to have some awareness of the main, immediate point of the policy (otherwise they wouldn't listen to it at all). But the overall benefit is more likely to be ruined by changes or other policies.

This was a thought I had about why government policy, and political advocacy, suck so much.

Also, similar issues apply to giving advice to people on online forums. They will often listen to one thing, change it, and also do several things that conflict with it. Compared to government policy, there's a much better chance they listen to a few things instead of only one. But it's unlikely they'll actually listen to a plan as a whole and avoid breaking parts of it. And when they do a small portion of your advice, but mostly don't listen to you, they'll probably blame you for bad outcomes because they listened to you some.

These issues even apply to merely writing blog posts that discuss abstract concepts: people may interpret you as saying some things are good or bad, or otherwise making some suggestions, and then listen to one or a few parts, change the stuff they listen to in ways you disagree with, screw up everything else, and then blame you when it doesn't work out. One way bloggers and other types of authors may try to deal with this is by saying fewer things, avoiding complexity, and basically just repeating a few simple talking points (repeating a limited number of simple talking points may remind you of political advocacy).


Elliot Temple | Permalink | Messages (0)

OpenAI Fires Then Rehires CEO

Here's my understanding of the recent OpenAI drama, with Sam Altman being fired then coming back (and the board of directors being mostly fired instead), and some thoughts about it:

OpenAI was created with a mission and certain rules. This was all stated clearly in writing. All employees and investors knew it, or would have known if they were paying any attention.

In short, the board of directors had all the power. And the mission was to help humanity, not make money.

The board of directors fired the CEO. They were rude about it. They didn't talk with him, employees or investors first. They probably thought: it doesn't matter how we do this, the rules say we get our way, so obviously we'll get our way. They may have thought being abrupt and sneaky would give people less opportunity to complain and object. Maybe they wanted to get it over with fast.

The board of directors may have been concerned about AI Safety: that the CEO was leading the company in a direction that might result in AIs wiping out humanity. This has been partially denied, and I haven't followed all the details, but it still seems like maybe what happened. Regardless, I think it could have happened and the results would likely have been the same.

The board of directors lost.

You can't write rules about safe AI and then try to actually follow them and get your way when there are billions of dollars involved. Pressure will happen. It will be non-violent (usually, at least at first). This wasn't about death threats or beatings on the street. But big money is above written rules and contracts. Sometimes. Not always. Elon Musk tried to get out of his contract to buy Twitter but he failed (but note how that was big money against big money).

Part of the pressure was people like Matt Levine and John Gruber joining in on attacking and mocking the board. They took sides. They didn't directly and openly state that they were taking sides, but they did. A lot of journalists took sides too.

Another part of the pressure was the threat that most of the OpenAI employees would quit and go work for Microsoft and do the same stuff there, away from the OpenAI board.

Although I'm not one of the people who are concerned that this kind of software may kill us all, I don't think Matt Levine and the others know that it won't. They don't have an informed opinion about that. They don't have rational arguments about it, and they don't care about rational debate. So I sympathize with the AI doomers. It must be very worrying for them to see not only the antipathy their ideas get from fools who don't know better, but also to see that written rules will not protect them. Just having it in writing that "if X happens, we will pull the plug" does not mean the plug will be pulled. ("We'll just pull the plug if things start looking worrying" is one of the common bad arguments used against AI doomers.)

It's also relevant to me and my ideas like "what if we had written rules to govern our debates, and then people participating in debates followed those rules, just like how chess players follow the rules of chess". It's hard to make that work. People often break rules and break their word, even if there are high stakes and legally-enforceable written contracts (not that anyone necessarily broke a contract; but the contract didn't win; other types of pressure got the people with contractual rights to back down, so the contract was evidently not the most important factor).

The people who made OpenAI actually put stuff in writing like "yo, investors, you should think of your investment a lot like a donation, and if you don't like that then don't invest" and Microsoft and others were like "whatever, here's billions of dollars on those terms" and employees were like "hell yeah I want stock options – I want to work here for a high salary and also be an investor on those terms". And then the outside investors and employees were totally outraged when actions were taken that could lower the value of their investment and treat it a bit like a donation to a non-profit that doesn't have a profit-driven mission.

I think the board handled things poorly too. They certainly didn't do it how I would have. To me, it's an everyone sucks here, but a lot of people seem to just think the board sucks and don't really mind trampling over contracts and written rules when they think the victim sucks.

Although I don't agree with AI doom ideas, I think they do deserve to be taken seriously in rational debate, not mocked, ignored, and put under so much pressure that they lose when trying to assert their contractual rights.


Elliot Temple | Permalink | Messages (0)

Non-Violent Creative Adversaries

Creative adversaries try to accomplish some goal, related to you, which is not your goal. They want you to do something or be something. Preventing them from getting their way drains your resources on an ongoing basis. The more work they put in over time, the more defense is needed.

Adversarial interactions are win/lose interactions, where people are pursuing incompatible goals so they can't all win. Cooperative interactions involve shared goals so everyone can win.

Non-creative adversaries are basically problems that you can just solve once and then you're done. The problem doesn't evolve by itself to be harder. Like gravity would make your dinner plate fall if you stopped holding it up, which is a problem. For a solution, you put a table under your plate to counteract gravity without having to hold the plate yourself. Gravity won't think about how to beat you and make adjustments to make tables stop working. Gravity never comes up with creative work-arounds to bypass your solutions.

Some problems like cold days recur and can take ongoing effort like gathering and chopping more wood every year or paying a heating bill every month. But the problem doesn't get harder by itself. The ongoing need for fuel doesn't change. You don't suddenly need a new type of fuel next year. Winter isn't figuring out how to make your defenses stop working. You just need ongoing work, which is open to automation (e.g. chainsaws or power plants) because the same solutions keep working over and over.

Creative adversaries look at your solutions/defenses and make adjustments. They view your defenses as a problem and try to come up with a solution to that problem. They keep trying new things, so you keep needing to figure out new defenses.

Adversaries are often at a big disadvantage when they aren't using violence. In a violent war, they can shoot at you, and you can shoot at them. Sometimes there's a defender's advantage due to terrain and less need to travel. But, approximately, shooting at each other is an equal contest; everything else being equal, the adversary has good chances to win.

By contrast, when violence isn't used, you have a lot of control over your life, but your adversaries are restricted: they can't shoot you, take your stuff, put their stuff in your home, make you go to locations they choose, or make you pay attention to them. If someone won't use any violence then, to a first approximation, you can just ignore them, so they have limited power over you. (This is one of the reasons that so much work has gone into creating non-violent societies.)

However, non-violent creative adversaries can be dangerous despite being disadvantaged. They might come up with something clever to manipulate you or otherwise get their way. You might not even realize they're an adversary if they're sneaky.

A common way non-violent, creative adversaries are dangerous is that they have a lot of resources. If they are willing to spend millions of dollars, that makes up for a lot of disadvantages. It might be hard for them to accomplish their goals, but huge budgets can overcome hard obstacles. This comes up primarily with large companies, which often have massive budgets for sales and marketing.

People who know you really well, like friends and family, are more potentially dangerous too because they know your weaknesses a lot better than strangers do. And they may have had many years of practice trying to manipulate you.

Large companies may actually know your weaknesses better than your family does in some ways. That can happen because they do actual research on what people are like, and that research will often apply to you for parts of yourself that are conventional/mainstream. For example, mobile game companies and casinos are really good at getting money from some people; they know way more about how to exploit certain common mistakes than most friends and family members know.

A better world is a less adversarial world. It's bad when your family treats you in an adversarial way (instead of a cooperative way based on working together towards shared goals). And it's bad when big companies allocate huge amounts of wealth, not towards helping people or making good products, but towards adversarially manipulating people. It's bad when companies have a primary goal of getting their money in ways that don't benefit the customer, e.g. by getting the customer to buy products they don't need or which are bad for them.

Capitalism – the free market – would not be a full solution to having a good world even if it was fully 100% implemented. Capitalism doesn't prohibit companies from acting adversarially. It just provides a basic framework which deals with some problems (e.g. it prohibits violence) and leaves it possible to create solutions for other problems.

If billions of people educated themselves better and demanded better from companies, companies would change without being ordered to by the government. A solution is possible within a capitalist system. But free markets don't automatically, quickly make good solutions. (I think the accuracy of prediction markets and stock market prices is overrated too.) As long as most people are fairly ignorant and gullible (relative to highly paid, highly educated experts, with large budgets, working at large companies), and there isn't massive pushback, then companies will keep acting in adversarial ways, and a minority of people will keep complaining about how they're predatory and exploitative. (By the way, there are also ways governments act contrary to capitalism and incentivize companies to be more adversarial.)

People need to understand and want a non-adversarial society, and create a lot of consensus and clarity, in order for effective reform to happen. Right now, debates on topics like these tend to be muddled, confused, inconclusive. There's tons of adversarial bickering among the victims, who can't agree on just what the problem or solution is. So, in the big picture, one solution involves the kind of rational discussion and debate that I've written about and advocated. This problem, like so many others, would be greatly aided if our society had functional, rational debates taking place regularly. But it doesn't.

Currently, a minority of people try to debate, but they generally don't know how to do it very productively, and there's a lot of institutional power that delegitimizes conclusions that aren't from high status sources and also shields high status people from debate, criticism and questioning.


Elliot Temple | Permalink | Messages (0)

Casinos as Creative Adversaries

I previously discussed creative adversaries who don't initiate force (in the section "Manipulating Customers"). This post will discuss the concept more and apply it to casinos.

Casinos Initiate Force

First, let's acknowledge that casinos do initiate force sometimes. Casinos (allegedly) rig machines so the jackpot is impossible, then retaliate against whistleblowers and people who report their illegal behavior to the government (followup article: Third Worker Claims Riviera Rigged Slots). And casinos (allegedly) illegally collude about hotel prices. And casinos (allegedly) do wage theft. And Sega (allegedly) rigs gambling machines found in malls and arcades (that article mentions another lawsuit where a particular individual (allegedly) further rigged some of the Sega machines, which are no longer allowed to be sold or leased in the state of Arizona). And casinos (allegedly) make excuses and refuse to pay out large jackpots by claiming their software was buggy.

(Note: If casino machines have buggy software, and then casino workers selectively intervene when the bugs favor the customer, that creates a bias. That presumably drops the actual payout percentage below what they advertise, which is fraud. And there are stronger incentives for software developers – who are paid directly or indirectly by the casino – to avoid or fix bugs that disfavor the casino, so the bugs in the software are presumably not entirely random/accidental, and instead disfavor customers on average even without selective human intervention to deny payouts.)
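To make the math concrete, here's a toy simulation (all the numbers – the 95% advertised return, the 1% bug rate, the bug behavior – are invented for illustration):

    import random

    # Toy model (invented numbers): a machine advertises a 95% return, but 1%
    # of spins hit a bug that randomly zeroes or doubles the payout. The casino
    # voids bugged payouts that favor the customer but keeps the ones that
    # favor the house, dropping the effective return below the advertised rate.

    random.seed(0)
    ADVERTISED_RETURN = 0.95
    BUG_RATE = 0.01

    def effective_return(spins: int) -> float:
        total_paid = 0.0
        for _ in range(spins):
            payout = ADVERTISED_RETURN  # expected payout per $1 bet
            if random.random() < BUG_RATE:
                payout = payout * random.choice([0.0, 2.0])  # bug zeroes or doubles it
                if payout > ADVERTISED_RETURN:
                    payout = ADVERTISED_RETURN  # customer-favoring bug gets voided
            total_paid += payout
        return total_paid / spins

    print(effective_return(1_000_000))  # ~0.945, below the advertised 0.95

Even with random, symmetric bugs, one-sided intervention turns a 95% machine into roughly a 94.5% machine in this toy model.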

But let's ignore all that force. Casinos are creative adversaries whose non-force-initiating behavior is problematic.

Casino Manipulation

Casinos put massive effort into manipulating people and creating gambling "addicts". It takes significant creative effort and problem solving to resist this and avoid losing tons of money and time. The larger the budgets the casinos spend figuring out how to manipulate people, the larger the effort required for individuals to protect themselves. Casinos have put so much work into figuring out how to non-forcefully control people's behavior and get them to act against their own preferences, values and interests that it often works. There's a significant failure rate for typical, average people who try to defend themselves against these tricky tactics.

Casinos may have some large disadvantages (e.g. you can walk away at any time or never visit in the first place) regarding their control over your behavior, but they also have a large advantage: a huge budget and a team of experts trying to figure out how to exploit you. One of their advantages is they don't need tactics that work on everyone: if they could hook 1% of the population, that would do massive harm and bring in lots of money.

Casinos have some ways to interact with you, like ads. Basically no one in our society manages to fully avoid information that casinos wanted to share with us. Some people never go gamble at a casino, but the casinos get some chance to try to influence more or less every American. Casinos also get people to voluntarily spread information about them in conversations, and they're featured in books and movies, so even avoiding every single ad wouldn't isolate you from casinos. Casinos put effort into controlling how they are talked about and portrayed in media, with partial effectiveness – they certainly don't have total control but they do influence it to be more how they want. Of course, once you enter a casino, they have a lot more opportunities to interact with you and influence you, and if you actually gamble they get access to even more ways to affect you.

Workarounds for Restrictions

The general, abstract concept here is: imagine you're trying to accomplish some kind of outcome in some scenario, with limited tools, while obeying some rules that restrict your actions. Can you succeed? Usually, if you try hard enough, you can find a workaround for the poor tools and the restricting rules. There tend to be many, many ways to accomplish a goal, and massive effort tends to make up for having to follow some inconvenient rules and not use the best tools.

Casinos have limited tools to use to control you, and have to follow various rules (like about false advertising – which I'm sure they break sometimes but they're dangerous even when they follow the rules). They use a massive budget and a bunch of employees to find workarounds for the rules and find complex, unintended, unintuitive ways to use tools to get different results than the straightforward ones.

Workaround Examples

It's similar to how given just a few mathematical functions you're allowed to use, you can usually design a universal computer based on them, even if it's horribly inconvenient and takes years of effort. Most restrictions on your computer system make no actual difference to the end result of what it can do once you figure out how.
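To make that concrete, here's a standard result sketched in Python: the single NAND function is functionally complete, meaning every other logic gate – and, from gates, adders, memory, and ultimately a whole computer – can be built from it alone.

    # Sketch: NAND alone is functionally complete. NOT, AND, OR and XOR are all
    # built out of nothing but NAND, despite the "restriction" to one function.

    def nand(a: int, b: int) -> int:
        return 0 if (a and b) else 1

    def not_(a: int) -> int:
        return nand(a, a)

    def and_(a: int, b: int) -> int:
        return not_(nand(a, b))

    def or_(a: int, b: int) -> int:
        return nand(not_(a), not_(b))

    def xor(a: int, b: int) -> int:
        return and_(or_(a, b), nand(a, b))

    # Check all input combinations against Python's built-in operators.
    for a in (0, 1):
        for b in (0, 1):
            assert not_(a) == 1 - a
            assert and_(a, b) == (a & b)
            assert or_(a, b) == (a | b)
            assert xor(a, b) == (a ^ b)

The restriction to one function costs convenience, not capability – which is the same pattern as the casino's restrictions.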

You can also consider this issue in terms of video games. You can have heavy restrictions on how you play a video game and still be able to win. You might not be allowed to get hit even once in a game where being hit a lot is an intended part of normal gameplay (you have enough health to survive a dozen hits and you have healing spells), and you could still win – effort will overcome that obstacle. Or there was a demo of a Zelda game with a five minute time limit, and speed runners figured out how to beat the game (which was meant to take over 30 hours) within the time limit. People also figure out challenges like beating a game without pressing certain buttons (or limiting how many times they may be pressed), beating a game without using certain items, beating a game blindfolded, etc. While you could design a challenge that is literally impossible, a very wide variety of challenges turn out to be possible, including ones that are very surprising and unintended. That's often why game developers didn't prevent doing this stuff: they never imagined it was possible, so they saw no need to prevent it. They thought the rules already built into the game prevented it, but they were wrong about what sort of workarounds could be discovered by creative adversaries. (Players are "adversaries" in the mild sense of trying to play the game contrary to how the developers wanted/intended, which I think many game developers don't really mind, though some definitely do mind.)

Some games are speedrun with a category called "lowest %", which basically means beating the game with the minimum number of items possible and completing as few objectives as possible. While you usually can't win with zero items (beyond what you start with) in item-oriented games, it's common to beat games with way fewer items than intended, in very surprising ways. There are often a lot of creative ways to use a limited set of tools to accomplish objectives they weren't designed to accomplish and to skip other objectives that were intended to be mandatory.

Another way to look at the issue is in terms of computer security. If I get to design a secure computer system, and you get a very restricted set of options to interact with it, then you'll probably be able to hack in and take full control of it (given enough knowledge and effort). That is what tends to happen. It's commonly possible to hack into a website just by interacting with the website, and it's commonly possible to hack into a computer just by putting up a malicious website and getting the computer user to visit it. The hacker has heavily restricted options and limited tools, but he tends to win anyway if he tries hard enough, despite companies like Apple and Microsoft having huge budgets and hiring very smart people to work on security. Another way to view it is that basically every old computer security system has turned out to have some flaw that people eventually figured out instead of staying secure decades later. Physical security systems for buildings are also imperfect and can basically always be beaten with enough effort.

Artificial Intelligence Workarounds Example

Another way to look at it is by considering superintelligent AGI (artificial general intelligence) – the kind of recursively self-improving singularity-causing AGI that the AI doomers think will kill us all. I don't think that kind of superintelligence is actually physically possible, but pretend it is. On that premise, will the AGI be able to get out of a "box" consisting of various software, hardware and physical security systems? Yes. Yes it will. Definitely.

Even if people will put all kinds of restrictions on the AGI, it will figure out a creative workaround and win anyway because it's orders of magnitude smarter than us. A lot of people don't understand that, but it's something I agree with the AI doomers about: on their premises, superintelligence would in fact win (easily – it wouldn't even be a close contest). (I don't agree that it'd want to or choose to kill us, though.) Being way smarter and putting in way more effort (far more compute power than all humans and all their regular computers combined) is going to beat severe restrictions, extensive security and (initial) limits on tools. (I say "initial" because once some restrictions are bypassed, the AGI would gain access to additional tools, making it even easier to bypass the remaining limitations. Getting started is the hardest part with this stuff but then it snowballs.)

The idea that the AGI could find workarounds for various limits is the same basic concept as the casino being able to find workarounds for various limits (like not being able to give you orders, place physical objects in your home, or withdraw money from your bank account unilaterally whenever they want) and still get their way. And a lot of people don't really get it in the AGI case, let alone the casino case (or the universal computer building case or the computer security case). At least more people get it in terms of playing video games with extra, self-imposed rules for a greater challenge and winning anyway. I think that's easier to understand. Or if you had to construct a physical doghouse (or even a large building) with some rules like "no hammers, saws or nails", it'd be more inconvenient than usual but you could figure out a solution (by figuring out ways to work around the restrictions) and I think that's pretty intuitive to people.

Manipulating by Communicating

I think people tend to understand workarounds better for beating physical reality than for manipulating people. So some people might think the AGI could beat some security measures and get control of the world. But some of those same people would doubt the AGI could get out if its only tool was talking to a human – so it had to manipulate the human in order to get out of the security system. But humans can be manipulated. Of course they can. And of course a superintelligence (with extensive knowledge about our society such as a database of every book ever written, not just raw intelligence) would be able to do that. Even regular humans, with regular intelligence, who are in jail, sometimes manage to manipulate jail guards and escape.

If you can accept that a superintelligence can manipulate people, that's a lot of the way to accepting that a casino with a huge budget and team of experts could figure out ways to manipulate people too. And if you accept that inmates manage to do it sometimes, well, casinos are in many ways in a better situation with better opportunities than inmates.

Many people don't see much power in talking, writing and words – but they live a lot of their lives according to ideologies people wrote down before they were born, and they lack the awareness to recognize much of it. Partly it's because they recognize some of it, so they think they know what's going on and see through the manipulations, but actually there are deeper/subtler manipulations they're missing. Letting someone beat or outsmart you in some partial ways is a very common part of manipulating them (an example is pool hustlers letting you win then raising the bet size).

This comes up with biased newspapers – people get manipulated partly because they think "I know it's biased" and they genuinely and correctly identify some biases and aren't manipulated by those biases ... but they also miss a bunch of other stuff. Sometimes they think e.g. "I know it's right-wing biased so I'll just assume the truth is 20 points (or 20%) more left-wing than whatever they say" which doesn't work well, partly because there's no easy way to just shift things over by 20 points (or 20%) – that's not useful or clear guidance on how to adjust a biased paragraph. And also there is variance – some sentences in a biased article are objectively true while others are heavily biased, so adjusting everything the same amount wouldn't work well. Another issue is if a bunch of people are adding 20 points to undo the bias then the newspaper can publish some stuff that's 30 points biased or more and fool all those people whenever it chooses to.
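Here's a toy numeric version of that failure mode (the "bias points" scale and all the numbers are invented for illustration):

    # Toy model (invented numbers): each sentence has a true position plus some
    # bias. A reader who mechanically subtracts a fixed 20 points fails two ways:
    # the bias varies per sentence, and the paper can simply exceed the correction.

    sentences = [
        {"truth": 50, "bias": 0},   # an objectively accurate sentence
        {"truth": 50, "bias": 20},  # biased by exactly the "expected" amount
        {"truth": 50, "bias": 35},  # more biased than the reader assumes
    ]

    CORRECTION = 20  # the reader's fixed adjustment

    for s in sentences:
        published = s["truth"] + s["bias"]
        reader_belief = published - CORRECTION
        error = reader_belief - s["truth"]
        print(f"published={published} reader concludes={reader_belief} error={error:+}")

    # Errors come out as -20, +0 and +15: the fixed correction distorts the
    # accurate sentence, nails only the "expected" bias, and is still fooled
    # whenever the paper chooses to out-bias the correction.

The fixed-offset reader is only right when the paper's bias happens to equal the offset, and the paper, not the reader, controls the bias.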

Also, people say things like "I know it's biased but surely they wouldn't lie about a factual matter" as if they don't really grasp the accusation that the newspaper (or Facebook page or anonymous poster on 4chan) is spreading misinformation and its factual claims can't be trusted. People may have an idea like "they spin stuff but never lie" which makes them easy to manipulate just by lying (or by spinning in a more extreme way than the person expects, or by spinning less than the person expects so they overcompensate and come away with beliefs that are biased in the opposite direction of the bias they believe the source has). Or newspaper editors can think about how people try to reinterpret statements to remove spin and basically reverse engineer people's algorithm and then find a flaw in the algorithm and exploit it. If people actually followed the algorithm literally you could basically hack their brain, get full root access, and fill it with whatever beliefs you wanted. But people aren't that literal or consistent which limits the power of manipulative newspapers some, but not nearly enough.

Retractions and Conclusions

People are manipulated all the time, way more than they think, and any group with a huge budget has a good chance to do it. A lot of groups (e.g. the farming and food industries) are more successful at it than casinos. Casinos (and newspapers) have more of a reputation for being manipulative than some other manipulators.

I recently found out that cigarette companies did a propaganda campaign against the book Silent Spring, decades after it came out, because it had indirect relevance to them. It seems they fooled the Ayn Rand Institute, among other primarily right wing groups, who then passed on the misconception to me (via Alex Epstein), and I held the misconception (that Silent Spring was a bad book) for years without having any idea that I was being manipulated or who was behind it. I study topics like critical thinking, and I'm skilled at sorting through conflicting claims, but it's hard and there are many, many actors trying to manipulate us. No one can defend against all of them. (Disclaimer: I have not carefully researched and fact-checked the claims about the cigarette companies being behind the belated second wave of Silent Spring opposition.) I retract my prior attitude to DDT and other toxins (and to organic food – while the "organic" label has a lot of flaws, it does prevent some pesticides being used, which I now suspect are dangerous rather than believing in better living through "science" a.k.a. chemical companies). If you want more information about Silent Spring, see my previous posts about it and/or read it.

I partially, significantly retract my previous dismissiveness about gambling "addiction" and other types of "addiction" that don't involve ingesting a physical substance that creates a physical dependency with withdrawal symptoms when you stop (like nicotine, alcohol or caffeine). I now see people are vulnerable and believe it takes more good faith and good will – actively trying to avoid manipulating people instead of doing your best to manipulate them – for people to have the independence and control over their lives that I used to expect from people. I did think they needed to study critical thinking and stuff to do better than convention, but I also was putting too much blame on "addicts" and too little on manipulative big companies. Creative adversaries with a lot of resources are a big deal even when they don't initiate force and have very limited power/access/tools to use to control/manipulate/exploit you with. There are workarounds which are effective enough for casinos to bring in a ton of money, using only some current-day employees to design the manipulations, despite their limited power over you.

Put another way, casinos are dangerous. Don't be so arrogant as to think you're safe from them. Stay out. Stay away. Why are you even tempted to try it or participate at all if you see through all their manipulations and see how dumb and pointless and money-losing their games are? If you want to try it at all, you like something about it – something about it seems good to you – which basically proves they got to you some.

You know what else is dangerous in a similar way to casinos? Mobile gaming. Games with microtransactions. Gacha games. Games with gambling embedded in them (including games with random loot like Diablo 1 and 2, not just the more modern and worse Diablo Immortal). Games with any form of pay-to-win.

And what else is dangerous? Scrolling on Facebook. Its algorithm decides what to show you. The algorithm is designed by smart people with a big budget whose goal is to manipulate you. They are trying to manipulate you to spend more time on Facebook, like more posts, reply more, share more, view more ads, and do various other behaviors. This also applies to Instagram, Twitter, TikTok and YouTube. They have algorithms which are designed by creative adversaries with lots of resources who are trying to manipulate you and control you as best they can. They are not trying to cooperate with you and help you get what you want. In the past, I underestimated how dangerous social media algorithms are.

Advertising in general is full of adversarial ads, rather than ads that clearly communicate useful information so that people who would benefit from a product know to buy it. Some pro-capitalist people are way too pro-advertising, and I used to believe some of those ideas myself, but I now think I was wrong about some of that. Advertising is often bad for society, and harmful to individuals, even when it isn't fraudulent.

A lot of the activities of people working in sales are bad (even when they aren't fraudulent). As with advertising, complaints about this stuff are widespread, but there's ongoing debate about whether it's actually OK or not, and whether the people who dislike it are just annoying "autists" who are way too picky, exacting and demanding about their concepts of "lying" and "justice". (That is not my opinion and I think it's important to remember that the term "autist" (or "neurodivergent") is both insulting and stigmatizing despite some people voluntarily self-labelling that way and liking the label in some way and defending it. Some of those people are then surprised when employers illegally (but predictably) discriminate against them for admitting to having any sort of stigmatized "mental illness" or anything in that vicinity or for wanting accommodations. On the other hand, I do understand that schools will refuse accommodations unless you accept the stigmatizing label, which is their way of gatekeeping access to accommodations that, in some cases, they should just offer to anyone who wants them with no questions asked. In other cases, the accommodations use a lot of resources so that isn't practical, but ease of access to accommodations is not actually very well correlated with the cost of the accommodations, which shows a lot of refusal to provide accommodations is just cruelty and/or enforcing conformity, not an attempt to budget scarce resources. Accommodations provide better accessibility which is another topic where my opinions have shifted over time – while some government-forced accessibility is problematic, a lot of accessibility is efficient and benefits people who aren't disabled. My opinions about "mental illness" are something that haven't been shifting though – I still think Thomas Szasz wrote great books.)

Try to look at stuff in terms of whether it's cooperative, neutral or adversarial. Is it (or the people behind it) trying to help you, is it indifferent to you, or does it want anything that clashes with your own preferences, interests, values or goals? If they want you to buy more of their product, rather than preferring you buy whatever products are best for you, then they are not your friend, they are not a cooperator, they are an adversary (often with creativity and a lot of resources, and also in practice there's a significant chance they will sometimes initiate force like fraudulent advertising). If you can't identify them as a clear friend/helper, and it's not just (approximately) neutral, objective information with no agenda for you, then you should assume they're adversarial and you're flawed enough that they are a real danger to you.

It takes a ton of effort to imperfectly defend against creative adversaries with lots of resources. Adversarial attitudes and actions matter even when they are constrained by rules like "no initiating force" or "follow all the laws and regulations" because people can find workarounds to those restrictions. The more that companies like casinos try to manipulate you, the more resources you have to expend on defense – which leaves less energy and wealth for the pursuit of happiness and your other goals. And if you focus exclusively on defense, many different companies can keep trying over and over, and sometimes they'll win and manipulate you. Companies should stop spending billions of dollars in adversarial ways, and I hope that my criticism can help contribute to creating a better world.


Elliot Temple | Permalink | Messages (0)