This is a draft of a post I am writing for LessWrong. I'm posting it here in order to get feedback because I'm not sure if it makes sense anywhere outside of my head. Thanks to Swingerzetta for a substantial review of version 2 and further review of version 3, and to Harena for an insight about the possible meaning of "rational morality". --Woozle 15:07, 25 August 2010 (EDT)--
Spot the Loonie!
or
(Disclaimer: I recognize that it may be a little bit bold for me to be addressing this topic -- rationality -- at such a fundamental level, when I have barely contributed two posts to this community and when I barely comprehend -- as yet -- many of the more highly-respected arguments which have been posted here. What I am attempting to do is to fill a gap I perceive. If the gap has already been filled, then please point me at the work I have overlooked. Thank you.)
In an earlier post, I argued that moral decisions could be made using rational processes. A commenter, Jack, argued that moral decisions were not subject to rational analysis, and asked me how I would define "rationality" in the context of a moral decision.
I had to go off and think about this for a while. If it's true that moral decisions cannot be made on a rational basis, then it should be impossible for me to find an example of a moral decision which was made rationally, right? I'm pretty sure (i.e. I believe quite strongly, but in a way which I hope is amenable to update), however, that this is not the case, and that I could easily find examples of both rational and irrational moral decisions (and that most people would agree with my determinations).
This article presents what I assume to be more or less canon for this site, wherein rationality is defined (as I understand it) in terms of a minor goal and a major goal which the minor serves -- to wit: the ability to properly update one's beliefs based on evidence, and thereby to achieve one's wishes by basing one's actions on accurate beliefs.
In other words, "rationality consists of applying logic to available data to arrive at a non-empty set of solutions to any given problem, from which the best solution can be chosen by evaluating each one for minimum distance to your goals."
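To make that concrete, here is a minimal sketch in Python of the selection step being described; the candidate options, the target value, and the distance function are all invented placeholders for illustration, not anything specified in this post.

<pre>
# A rough sketch of the definition above: derive a non-empty set of
# candidate solutions, then pick the one whose predicted outcome lies
# closest to your goals. Everything concrete here (the options, the
# target value, the distance function) is an invented placeholder.

def choose_rationally(candidates, distance_to_goals):
    """Return the candidate with minimum distance to the stated goals."""
    if not candidates:
        raise ValueError("empty solution set -- the procedure never gets started")
    return min(candidates, key=distance_to_goals)

# Toy usage: three options scored by how far they fall from a target of 10.
predicted_outcome = {"option_a": 7, "option_b": 9, "option_c": 4}
best = choose_rationally(list(predicted_outcome),
                         lambda name: abs(10 - predicted_outcome[name]))
print(best)  # option_b
</pre>

The "moral" version in the next section is the same selection step; only the candidate actions and the goals they are measured against become specifically ethical ones.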
The question then is, by what objective criteria do we determine whether a decision is rational? If we can agree on the criteria, and I can't find an example that satisfies them, then I will have to update my belief.
At this point, I kind of came up short: I was not able to find anything within LessWrong -- a site dedicated to the topic of rationality -- that suggested working guidelines by which to recognize rationality (or irrationality) in the wild. (Perhaps more tellingly, none of the more experienced members of this site stepped up and said "Oh, that -- we worked that out ages ago; you might start by reading this series of essays..." -- so this essay is, in part, me escalating the question to where someone knowledgeable is likely to see it.)
Seven and a Half Million Years Later
I've now thought about this question enough that I have an answer to offer (although I really have no idea if you're going to like it).
Without getting into a huge amount of detail on this iteration, the answer basically is... is...
Within the context of moral deciding, rationality consists of applying logic to available data to arrive at a non-empty set of solutions to a given ethical problem, from which the best solution can be chosen by evaluating each one for minimum distance to your ethical goals.
In other words, "moral rationality" is just rationality applied to goals that are specifically ethical in nature. I would be tempted to say "Duhhh!", except that maybe I misunderstood what he was asking.
Here's what he actually said:
There is no such thing as "rationally deciding if an action is right or wrong". This has nothing to do with particularism. It's just a metaethical position. I don't know what can be rational or irrational about morality.
By "moral particularism", Jack apparently means this (http://plato.stanford.edu/entries/moral-particularism/) -- but he says that's not the basis of his objection.
By "metaethical", I think he must be referring to this meaning (http://lesswrong.com/lw/sk/changing_your_metaethics/) -- where I am choosing an overall moral value from which all other moral values must follow.
Or, in other words, he may be thinking that I am judging the rationality of what I call a "moral system" (http://issuepedia.org/Moral_system) -- a set of morals (rules) by which morality is judged.
Well... yes and no. For the purposes of determining the attributes necessary for recognizing rationality, "no".
For the purposes of my original argument, though, this does open up an important question: what happens if our moral systems have different goals (aka terminal values -- http://lesswrong.com/lw/l4/terminal_values_and_instrumental_values/)? I could probably write a whole other post on this, so I'll just jump to my conclusion.
I would argue that while you can't strictly say that one terminal value is more rational than another, attempting to defend any terminal value other than the "common good" (at least, common to the group you're conversing with) is at best a sub-optimal strategy. If we don't share your goals, then your rational arguments for achieving those goals mean absolutely nothing to us. STILL WORKING THIS IDEA OUT.
The Question to the Ultimate Answer
So, how does this help us objectively decide whether or not a given moral decision was rational?
It sort of doesn't. But that definition, by itself, isn't quite the whole picture.
I think what we're actually talking about here is rules that govern dialogue [between sentient beings], with the aim of ensuring that the overall process remains rational despite individual irrationality (bias, deceit, etc.) and the vagueness inherent in the communication protocol (natural language).
In other words, I'm not so much talking about rationality in individuals as about the rationality of a conversation between two or more people.
This is fortunate, because if we are looking at conversations instead of purely internal states, there are sanity-checks we can perform.
A rational conversation must be capable of sustaining rationality -- maintaining the cognitive integrity and viability of the ideas and judgments as they are encoded in transmittable form and decoded at the other end. (I suddenly get this image of ideas as being like space travelers who must be carefully prepared for the harsh environment of extra-rational space.)
Looked at another way, appearance is everything in a conversation: it is more important for your thinking to appear rational than for your thinking to actually be rational (you know what I am saying, darleengs?) -- because if I can't see it in your argument, then I can't trust that what you are thinking makes sense (even if it actually does), and rational dialogue can't happen. Likewise, I must decide the merit of your conclusions based solely on any valid arguments which support them, even if those arguments aren't the actual thinking processes you used to reach your conclusion.
It is the thought processes which must be on trial -- not individual reputations. Civilization cannot be sustained by depending on individuals to be right.
On this point, I apparently disagree with Yudkowsky's assertion that "You cannot rely on anyone else to argue you out of your mistakes; you cannot rely on anyone else to save you; you and only you are obligated to find the flaws in your positions; if you put that burden down, don't expect anyone else to pick it up." -- because that methodology works even less reliably than its polar opposite (i.e. depending entirely on other people to find flaws).
We can't depend on individuals to be reliably right for at least three reasons I can think of (in short: individuals go away, individuals may be corrupt, and rationally trustable individuals are as yet a relatively rare breed).
When you can reliably test for rationality in individuals, then perhaps we can arrive at something better; I'm attempting to devise a solution that works with what we have now -- because if we wait for that better solution, we spend a lot more time losing, and failure to use imperfect improvements to our rationality does not prevent lethal failures of rationality.
A Theory About the Brontosaurus
Getting back to the original question yet again (ahem ahem), as slightly modified -- how can we tell if a dialogue is rational or not?
I propose that the key elements of a rational conversation are:
- the use of documented reasoning processes:
  - using the best known process(es) for a given class of problem
  - stating openly which particular process(es) you are using
  - documenting any new processes you decide to use
- making every reasonable effort to verify that:
  - your inputs are reasonably accurate, and
  - there are no significant flaws in the reasoning process you are using (or in your application of it), and
  - there are no significant inputs you are ignoring
I can define in more detail any terms that leave too much wiggle-room.
If an argument satisfies all of these requirements, it is at least provisionally rational (what I'll call "first-draft rational"). If it fails any one of them, then it's not rational.
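As an illustration only -- the field names and the strict all-or-nothing test below are my own shorthand for the checklist, not part of the proposal itself -- the "first-draft rational" test might be encoded something like this:

<pre>
# Sketch of the "first-draft rational" test: an argument qualifies only if
# every item on the checklist holds; failing any single item disqualifies it.
# The field names are invented shorthand for the criteria listed above.

from dataclasses import dataclass

@dataclass
class ArgumentChecklist:
    uses_best_known_process: bool            # best known process(es) for this class of problem
    states_process_openly: bool              # says which process(es) are being used
    documents_new_processes: bool            # any newly adopted process is written down
    inputs_reasonably_accurate: bool         # reasonable effort made to verify the inputs
    no_significant_flaws_in_reasoning: bool  # ...in the process or in how it is applied
    no_significant_inputs_ignored: bool

def is_first_draft_rational(checklist: ArgumentChecklist) -> bool:
    """True only if every requirement on the checklist is met."""
    return all(vars(checklist).values())

# Toy usage: one unchecked box is enough to fail the test.
example = ArgumentChecklist(True, True, True, True, False, True)
print(is_first_draft_rational(example))  # False
</pre>

The all-or-nothing conjunction simply mirrors the "fails any one of them" rule; nothing finer-grained is being claimed here.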
This is not a circular definition (defining "rationality" by referring to "reasonable" things, where "reasonable" depends on people being "rational"); it is more like a recursive algorithm: each "reasonable effort" can itself be judged by the same criteria, so the question bottoms out in smaller and more tractable checks rather than looping back on itself.
This is not one great moral principle; it is more like a working process which itself can be refined over time. It is an attempt to apply the processes of science (or at least the same reasoning which arrived at those processes) to ethical discourse.
So... can we agree on this?