
This is a draft of a post I am writing for LessWrong. I'm posting it here in order to get feedback because I'm not sure if it makes sense anywhere outside of my head. Thanks to Swingerzetta for a substantial review of version 2. --Woozle 20:02, 23 August 2010 (EDT)

Spot the Loonie!

or
How to Identify the Essential Elements of Rationality from Quite a Long Way Away

(Disclaimer: I recognize that it may be a little bit bold for me to be addressing this topic -- rationality -- at such a fundamental level, when I have barely contributed two posts to this community and when I barely comprehend -- as yet -- many of the more highly-respected arguments which have been posted here. What I am attempting to do is to fill a gap I perceive. If the gap has already been filled, then please point me at the work I have overlooked. Thank you.)

In an earlier post, I argued that moral decisions could be made using rational processes. A commenter argued that moral decisions were not subject to rational analysis, and asked me how to define "rationality" in the context of a moral decision.

I had to go off and think about this for a while. If it's true that moral decisions cannot be made on a rational basis, then it should be impossible for me to find an example of a moral decision which was made rationally, right? I'm pretty sure, however (i.e. I believe quite strongly, but in a way which I hope is amenable to update), that this is not the case, and that I could easily find examples of both rational and irrational moral decisions (and that most people would agree with my determinations).

The question, then, is: by what objective criteria do we determine whether a decision is rational? If we can agree on the criteria, and I can't find an example that satisfies them, then I will have to update my belief.

At this point, I kind of came up short: I was not able to find anything within LessWrong -- a site dedicated to the topic of rationality -- that suggested working guidelines by which to recognize rationality (or irrationality) in the wild. (Perhaps more tellingly, none of the more experienced members of this site stepped up and said "Oh, that -- we worked that out ages ago; you might start by reading this series of essays...")

Seven and a Half Million Years Later

I've now thought about this question enough that I have an answer to offer (although I really have no idea if you're going to like it).

Without getting into a huge amount of detail on this iteration, the answer basically is... is...

Within the context of moral decision-making, rationality consists of {applying logic to available data} to arrive at {a set of solutions to a given ethical problem, from which {the best solution can be chosen by evaluating each one with a previously-chosen ethical algorithm}}.

Note that an "ethical algorithm" probably cannot be implemented by current computational methods and will, I suspect, always involve one or more intelligences. That's just a hunch, though, and the argument doesn't depend on it. The point is that we have a method for deciding which solution is the best, that we apply it rigorously -- and that it too should be arrived at by rational means.
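To make the shape of that definition concrete, here is a minimal sketch in Python. Everything in it is a hypothetical illustration, not working philosophy: `generate_solutions` stands in for {applying logic to available data}, and `ethical_score` stands in for the previously-chosen ethical algorithm -- the part I just said probably requires an intelligence, so don't read this as a claim that the hard step is implementable today.

    def rational_moral_decision(problem, data, generate_solutions, ethical_score):
        # Step 1: apply logic to the available data to arrive at a set
        # of candidate solutions to the ethical problem.
        candidates = generate_solutions(problem, data)
        if not candidates:
            raise ValueError("no solutions found for this problem")
        # Step 2: evaluate each candidate with the previously-chosen
        # ethical algorithm, and choose the best one.
        return max(candidates, key=ethical_score)

    # Toy usage: split 10 units of a resource between two parties; the
    # "ethical algorithm" here simply prefers the most even split.
    splits = lambda problem, total: [(x, total - x) for x in range(total + 1)]
    evenness = lambda s: -abs(s[0] - s[1])
    print(rational_moral_decision("divide fairly", 10, splits, evenness))  # (5, 5)

The structure is the point, not the toy content: the generation step and the evaluation step are separate, and all of the moral weight lives in the evaluation function.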

Yes, it's turtles all the way down -- but that's no reason to think we can't get where we want to go by adding more turtles, is it?

What Was the Question Again?

So, how does this help us objectively decide whether or not a given moral decision was rational?

It sort of doesn't. That definition, however, isn't quite the whole picture.

I think what we're actually talking about here is rules that govern dialogue [between sentient beings], with the aim of ensuring that the overall process remains rational despite individual irrationality (bias, deceit, etc.) and the vagueness inherent in the communication protocol (natural language).

In other words, I'm not so much talking about rationality in individuals as the rationality of a conversation between two or more people.

This is fortunate, because if we are looking at conversations instead of purely internal states, there are sanity-checks we can perform.

A rational conversation must be capable of sustaining rationality: maintaining the cognitive integrity and viability of ideas and judgments as they are launched back and forth between brains (or "Rationality Processing Environments", as we call them here in the Marketing Division of the Sirius Cybernetics Corporation, Corporate Outreach Unit). The ideas must survive being prepared for the harsh environment of extra-rational space (i.e. encoded in words), and then, at the other end, being extracted from their life-support and turned back into thoughts -- so that an idea created in one place reaches its targets with its basic meaning intact, and can be copied again and again if it is found to be useful.
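The property I'm after is roughly the conversational analogue of a round-trip test in serialization. This is a loose analogy only -- the names below are illustrative, and natural language offers no such lossless check, which is exactly the problem:

    import json

    # Loose analogy: an idea "survives transport" if encoding it for
    # the wire and decoding it at the other end preserves its content.
    def survives_transport(idea: dict) -> bool:
        encoded = json.dumps(idea)     # prepared for extra-rational space
        decoded = json.loads(encoded)  # extracted from life-support
        return decoded == idea         # basic meaning intact?

    print(survives_transport({"claim": "X implies Y", "confidence": 0.8}))  # True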

In this usage, appearance is everything: it is more important for you to be able to demonstrate the rationality of your thinking than for your thinking to actually be rational -- because if I can't see it in your argument, then I can't trust that what you are thinking makes sense (even if it actually does), and thus the dialogue breaks down. Likewise, I have to decide the merit of your conclusions based solely on any valid arguments which support them, even if they don't reflect the actual thinking processes you used.

It is the thought processes which must be on trial -- not individual reputations. Civilization cannot be sustained by depending on individuals to be right.

On this point, I apparently disagree with Yudkowsky's assertion that "You cannot rely on anyone else to argue you out of your mistakes; you cannot rely on anyone else to save you; you and only you are obligated to find the flaws in your positions; if you put that burden down, don't expect anyone else to pick it up." That methodology works even less reliably than its polar opposite (i.e. depending entirely on other people to find flaws).

We can't depend on individuals to be reliably right for at least three reasons I can think of (in short: individuals go away, individuals may be corrupt, and rationally trustable individuals are as yet a rare breed).

When we can reliably test for rationality in individuals, then perhaps we can arrive at something better.