
This is a draft of a post I am writing for LessWrong. I'm posting it here in order to get feedback because I'm not sure if it makes sense anywhere outside of my head. --Woozle 14:18, 23 August 2010 (EDT)

Spot the Loonie!

or
How to Identify the Essential Elements of Rationality from Quite a Long Way Away

(Disclaimer: I recognize that it may be a little bit bold for me to be addressing this topic -- rationality -- at such a fundamental level, when I have barely contributed two posts to this community and when I barely comprehend -- as yet -- many of the more highly-respected arguments which have been posted here. What I am attempting to do is to fill a gap I perceive. If the gap has already been filled, then please point me at the work I have overlooked. Thank you.)

In an earlier post, I argued that moral decisions could be made using rational processes. A commenter argued that moral decisions were not subject to rational analysis, and asked me how to define "rationality" in the context of a moral decision.

I had to go off and think about this for a while. If it's true that moral decisions cannot be made on a rational basis, then it should be impossible for me to find an example of a moral decision that was made rationally, right? I'm pretty sure (i.e. I believe quite strongly, but in a way which I hope is amenable to updating), however, that this is not the case -- that I could easily find examples of both rational and irrational moral decisions, and that most people would agree with my determinations.

The question, then, is: by what objective criteria do we determine whether a decision is rational? If we can agree on the criteria, and I can't find an example that satisfies them, then I will have to update my belief.

At this point, I kind of came up short: I was not able to find anything within LessWrong -- a site dedicated to the topic of rationality -- that suggested working guidelines by which to recognize rationality (or irrationality) in the wild. (Perhaps more tellingly, none of the more experienced members of this site stepped up and said "Oh, that -- we worked that out ages ago; you might start by reading this series of essays...")

Seven and a Half Million Years Later

I've now thought about this question enough that I have an answer to offer (though, from past experience posting my ideas on LessWrong, I don't expect you're going to like it).

This article presents what I take to be more or less canon for this site, wherein rationality is defined (as I understand it) in terms of goals rather than recipes -- the first goal being to properly update one's beliefs based on evidence, and the larger goal, which the first serves, being to achieve one's wishes by basing one's actions on those accurate beliefs.

(This is compatible with a definition of rationality which seems to be accepted within the philosophical community, i.e. "apportioning one's beliefs according to the evidence", hence the word "ratio" within the word "rationality".)
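
(To make "apportioning one's beliefs according to the evidence" concrete, here is a minimal Bayesian-update sketch in Python. The coin scenario and all of the numbers are invented purely for illustration; nothing beyond Bayes' rule itself is being claimed here.)

    def bayes_update(prior, p_e_given_h, p_e_given_not_h):
        """Return P(H|E) from P(H), P(E|H), and P(E|~H) via Bayes' rule."""
        p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
        return p_e_given_h * prior / p_e

    # Invented example: prior belief 0.1 that a coin is biased toward heads;
    # we then observe a head (probability 0.9 if biased, 0.5 if fair).
    print(round(bayes_update(0.1, 0.9, 0.5), 3))   # 0.167 -- belief moves with the evidence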

Another disclaimer:

  • If I could provide an entirely logical algorithm by which to detect the presence of rationality across any given domain of possible decisions, then I would have essentially solved AI within that domain. Since we don't yet have AI that understands human ethics, my algorithm must depend rather heavily on the only known Engine of General Rationality, i.e. the human mind.
  • The following definition of rationality, and the algorithm it describes, represents an attempt to capture and document (however incompletely) a synthesis of the best known methods of overcoming that engine's worst flaw (bias).
  • I anticipate being accused of circular argument. I would say it is more like a recursive algorithm, or a feedback loop -- both being acceptable and widely-used tools for overcoming limitations of available components.
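
(To give "feedback loop" some concreteness, here is a minimal sketch, in Python, of a loop that overcomes the limitations of its components: Heron's method reaches a highly accurate square root using only crude arithmetic steps, each of which is individually inexact, by feeding every output back in as the next input. The function name is my own invention.)

    def sqrt_by_feedback(x, steps=20):
        """Approximate sqrt(x) by feeding each rough guess back in as the next input."""
        guess = max(x, 1.0)                      # any positive starting guess will do
        for _ in range(steps):
            guess = (guess + x / guess) / 2.0    # Heron/Newton correction step
        return guess

    print(sqrt_by_feedback(2.0))                 # ~1.4142135623730951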

Types of Rationality

I think what we're actually talking about here is rules that govern dialogue [between sentient beings] with the end of ensuring that the overall process remains rational despite individual irrationality (bias, deceit, etc.) and the vagueness inherent in the communication protocol (natural language).

In other words, I'm not so much talking about rationality in individuals as about the rationality of a conversation between two or more people.

A rational conversation must be capable of sustaining rationality -- maintaining the cognitive integrity and viability of ideas and judgments as they are launched back and forth between brains (or "Rationality Processing Environments", as we call them here in the Marketing Division of the Sirius Cybernetics Corporation, Corporate Outreach Unit): as they are prepared for the harsh environment of extra-rational space (i.e. encoded in words), and as they are extracted from their life-support and turned back into thoughts -- so that the ideas created in one place reach their targets with their basic meaning intact, and so they can be copied again and again if they are found to be useful.

In this usage, appearance is everything: it is more important for you to be able to demonstrate the rationality of your thinking than for your thinking to actually be rational -- because if I can't see the rationality in your argument, then I can't trust that what you are thinking makes sense (even if it actually does), and the dialogue breaks down. Likewise, I have to judge the merit of your conclusions based solely on the valid arguments which support them, even if those arguments don't reflect the thinking processes you actually used.

It is the thought processes which must be on trial -- not individual reputations. Civilization cannot be sustained by depending on individuals to be right.

Definition of Terms

Whereas "rationality" is the act of applying a set of tools (reasoning processes) to a "problem" in such a way as to produce the "best" answer within (given constraints of time, computing power, and other resources)...

...and whereas a "problem" is any decision which must be made
...and whereas rational decisions are made by applying a specific set of reasoning processes to a specific set of inputs
...and whereas "best" refers to the output of whatever ethical function we are using to evaluate the decision
...and whereas "inputs" (or "input set") are a particular set of evaluations culled from available data
...and whereas "reasonable" means "something that an overwhelming majority of informed reasonable people would agree is acceptable"...

The Envelope Please

The key elements of rationality are:

  • the use of documented reasoning processes:
    • using the best known process(es) for a given class of problem
    • stating openly which particular process(es) you are using
    • documenting any new processes you decide to use
  • making every reasonable effort to verify that:
    • your inputs are reasonably accurate, and
    • there are no significant flaws in the reasoning process you are using (or in your application of it), and
    • there are no significant evaluations you have omitted from your input set
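
(Here is a minimal sketch of what this checklist might look like if forced into code. The class and field names are my own invention, not any established API, and each boolean stands in for a human judgment that can't actually be automated.)

    from dataclasses import dataclass

    @dataclass
    class Decision:
        """One decision, annotated the way the checklist above asks for."""
        process_name: str            # which documented reasoning process was used
        process_documented: bool     # any new process has been written down
        inputs_verified: bool        # inputs checked for reasonable accuracy
        process_checked: bool        # process (and its application) checked for flaws
        omissions_checked: bool      # input set checked for significant omissions

    def checklist_failures(d):
        """Return the checklist items this decision fails; an empty list passes."""
        checks = [
            (d.process_documented, "reasoning process not documented"),
            (d.inputs_verified, "inputs not verified for accuracy"),
            (d.process_checked, "process not checked for flaws"),
            (d.omissions_checked, "input set not checked for omissions"),
        ]
        return [msg for ok, msg in checks if not ok]

    d = Decision("cost-benefit analysis", True, True, False, True)
    print(checklist_failures(d))     # ['process not checked for flaws']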

What This Doesn't Mean

"Rationality" is *not* the same as being correct.

It's about arriving at processes to produce the best answers, and gradually refining (or sometimes significantly overturning) those processes over time. You can "rationally" make what turns out to be a very bad decision -- but rationality won't let you repeat that mistake if there's any way to avoid it.

It's about showing your work so others can learn from it (and so you can learn from them) -- explaining your logic, putting your cards on the table. ("Rationalists playing poker" -- or any situation where secretiveness is itself a rational strategy -- is a bit of an edge case for my definition. I would resolve the conflict by saying that we don't know whether a given player is playing rationally until the post-game debriefing, when they explain their reasoning. The degree of uncertainty about their reasoning defines the degree to which we must treat them as irrational actors.)

(Yudkowsky says: "If you fail to achieve a correct answer, it is futile to protest that you acted with propriety." I add: this is correct if he is referring to the "propriety" of the particular reasoning process chosen. There is still the propriety of ultimately achieving a repeatable correct answer by making and documenting a thousand different mistakes.)

It's not necessarily about ignoring hunches and intuition. If your "hunches" on a certain topic have a demonstrable success rate, and that rate happens to be higher than the rate achieved by any other process yet known to you, then it would not necessarily be a violation of rationality to follow the hunch. You would be using the methods of rationality to make the best use of a non-rational input. (You would, however, be sacrificing the possibility of passing this method on to others whose hunches are less reliable than yours.)
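
(A sketch of what a "demonstrable success rate" might look like in practice: keep a running tally per method, hunches included, and prefer whichever method has the best observed record. The track records below are invented, and the class is mine, not an existing library.)

    from collections import defaultdict

    class MethodTracker:
        """Tally the observed success rate of each decision method, hunches included."""
        def __init__(self):
            self.tally = defaultdict(lambda: [0, 0])   # method -> [successes, trials]

        def report(self, method, success):
            self.tally[method][0] += int(success)
            self.tally[method][1] += 1

        def best_method(self):
            """Return the method with the highest observed success rate so far."""
            return max(self.tally, key=lambda m: self.tally[m][0] / self.tally[m][1])

    t = MethodTracker()
    for outcome in (True, True, False, True):   # invented track record for a hunch
        t.report("hunch", outcome)
    for outcome in (True, False, False):        # invented record for formal analysis
        t.report("formal analysis", outcome)
    print(t.best_method())                      # "hunch" -- following it is defensible here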

The Question to the Ultimate Answer

So -- can we agree that these are reasonable criteria for deciding what is "rational" and what isn't?

Point of Departure

I am pointedly disagreeing with Yudkowsky's assertion that "You cannot rely on anyone else to argue you out of your mistakes; you cannot rely on anyone else to save you; you and only you are obligated to find the flaws in your positions; if you put that burden down, don't expect anyone else to pick it up" -- because that methodology works even less reliably than its polar opposite (i.e. depending entirely on other people to find flaws). We can't depend on individuals to be reliably right, for at least three reasons I can think of (in short: individuals go away, individuals may be corrupted, and rationally trustworthy individuals are as yet a rare breed).

When we can reliably both teach and test for rationality, then perhaps we can arrive at something better.