This is a draft of a post I am writing for LessWrong. I'm posting it here in order to get feedback because I'm not sure if it makes sense anywhere outside of my head. Thanks to Swingerzetta for a substantial review of version 2 and further review of version 3, and to Harena for an insight about the possible meaning of "rational morality". --Woozle 15:07, 25 August 2010 (EDT)

Spot the Loonie!

or
How to Identify the Essential Elements of Rationality from Quite a Long Way Away

(Disclaimer: I recognize that it may be a little bit bold for me to be addressing this topic -- rationality -- at such a fundamental level, when I have barely contributed two posts to this community and when I barely comprehend -- as yet -- many of the more highly-respected arguments which have been posted here. What I am attempting to do is to fill a gap I perceive. If the gap has already been filled, then please point me at the work I have overlooked. Thank you.)

In an earlier post, I argued that moral decisions could be made using rational processes. A commenter (Jack) argued that moral decisions were not subject to rational analysis, and asked me how to define "rationality" in the context of a moral decision.

I had to go off and think about this for a while. If it's true that moral decisions cannot be made on a rational basis, then it should be impossible for me to find an example of a moral decision which was made rationally, right? I'm pretty sure (i.e. I believe quite strongly, but in a way which I hope is amenable to update), however, that this is not the case, and that I could easily find examples of both rational and irrational moral decisions (and that most people would agree with my determinations).

This article (http://lesswrong.com/lw/31/what_do_we_mean_by_rationality/) presents what I assume to be more or less canon for this site, wherein rationality is defined (as I understand it) as a minor goal and a major goal to which the minor is subservient -- to wit: the ability to properly update one's beliefs based on evidence, thereby being able to achieve one's wishes by basing one's actions on accurate beliefs.

In other words:

Rationality consists of identifying obstacles to your goals, applying logic to available data to arrive at a set of solutions to each obstacle, and choosing the best solution from that set by evaluating each one for minimum distance to your goals.
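
(For readers who think better in code, here's a loose Python sketch of that procedure. The obstacle-finding, solution-generating, and goal-distance functions are hypothetical placeholders for whatever actual reasoning gets done; nothing here is meant as a serious model.)

  # Loose illustrative sketch only: identify obstacles, generate candidate
  # solutions from the available data, pick the candidate with minimum
  # "distance" to the goals. The three function arguments stand in for the
  # real reasoning work.
  def rational_choice(goals, world, find_obstacles, propose_solutions, distance_to_goals):
      decisions = {}
      for obstacle in find_obstacles(goals, world):
          candidates = propose_solutions(obstacle, world)  # "logic applied to available data"
          # choose the solution that leaves us closest to our goals
          decisions[obstacle] = min(candidates, key=lambda s: distance_to_goals(s, goals))
      return decisions

  # Toy usage: one goal (save $1000), one obstacle (a shortfall), and three
  # candidate solutions scored by how far short of the goal each leaves us.
  goals = {"savings": 1000}
  world = {"savings": 200}
  print(rational_choice(
      goals, world,
      find_obstacles=lambda g, w: ["shortfall"],
      propose_solutions=lambda obstacle, w: [("do nothing", 0), ("side job", 500), ("sell car", 900)],
      distance_to_goals=lambda s, g: max(0, g["savings"] - (world["savings"] + s[1])),
  ))
  # -> {'shortfall': ('sell car', 900)}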

The question then is, by what objective criteria do we determine whether a decision is rational? If we can agree on the criteria, and I can't find an example that satisfies them, then I will have to update my belief.

At this point, I kind of came up short: I was not able to find anything within LessWrong -- a site dedicated to the topic of rationality -- that suggested working guidelines by which to recognize rationality (or irrationality) in the wild. (Perhaps more tellingly, none of the more experienced members of this site stepped up and said "Oh, that -- we worked that out ages ago; you might start by reading this series of essays..." -- perhaps giving the issue its own post will escalate it to the point where this will happen.)

Seven and a Half Million Years Later

I've now thought about this question enough that I have an answer to offer (although I really have no idea if you're going to like it).

Without getting into a huge amount of detail on this iteration, the answer basically is... is...

Within the context of moral decision-making, rationality consists of identifying obstacles to your ethical goals, applying logic to available data to arrive at a set of solutions to each obstacle, and choosing the best solution from that set by evaluating each one for minimum distance to your ethical goals.

In other words, "moral rationality" is just rationality applies to goals that are specifically ethical in nature. I would be tempted to say "Duhhh!", except that maybe I misunderstood what he was asking.

Here's what he actually said:

There is no such thing as "rationally deciding if an action is right or wrong". This has nothing to do with particularism. It's just a metaethical position. I don't know what can be rational or irrational about morality.

The phrase "rationally deciding if an action is right or wrong" is too simple, and I think somewhat misleading. I don't think it's a phrase I ever used, so I don't know where Jack got it from.

By "moral particularism", Jack apparently means this -- but he says that's not the basis of his objection.

By "metaethical", I think he must be referring to this meaning -- where I am choosing an overall moral value from which all other moral values must follow.

Or, in other words, he may be thinking that I am judging the rationality of what I call a "moral system" (http://issuepedia.org/Moral_system) -- a set of morals (rules) by which the morality of an action may be judged.

Well... yes and no. I would argue that it's generally irrational to argue for any goal that doesn't support the good of your audience -- but that's irrelevant to the argument I'm making here. In the area of rationality and moral goals, we just need to agree on what the goals of a decision are when we try to figure out whether the decision is rational; they don't even have to be our actual goals.

The Question to the Ultimate Answer

So, have we worked out how to objectively decide whether or not a given moral decision was rational?

Not quite. There's one more bit of understanding I need to spell out.

In my original post, I was suggesting that rationality was a possible attribute of a conversation about morals. Here's what I said:

My main hypothesis in starting Issuepedia is that it is, in fact, possible to be rational about politics, to overcome its "mind-killing" qualities -- if given sufficient "thinking room" in which to record and work through all the relevant (and often mind-numbing) details involved in most political issues in a public venue where you can "show your work" and others may point out any errors and omissions. I'm trying to use wiki technology as an intelligence-enhancing, bias-overcoming device.

I'm not so much talking about moral rationality in individuals, but rather the rationality of a conversation between two or more people -- rules for governing dialogue [between sentient beings] with the end of ensuring that the overall process remains rational despite individual irrationality (bias, deceit, etc.) and the vagueness inherent in the communication protocol (natural language).

This is fortunate, because if we are looking at conversations instead of purely internal states, there are sanity-checks we can perform.

A rational conversation must be capable of sustaining rationality -- maintaining the cognitive integrity and viability of the ideas and judgments as they are encoded in transmittable form and decoded at the other end. (I suddenly get this image of ideas as being like space travelers who must be carefully prepared for the harsh environment of extra-rational space.)

Looked at another way, appearance is everything in a conversation: it is more important for your thinking to appear rational than for your thinking to actually be rational (you know what I am saying, darleengs?) -- because if I can't see it in your argument, then I can't trust that what you are thinking makes sense (even if it actually does), and rational dialogue can't happen. Likewise, I must decide the merit of your conclusions based solely on any valid arguments which support them, even if those arguments aren't the actual thinking processes you used to reach your conclusion.

It is the thought processes which must be on trial -- not individual reputations. Civilization cannot be sustained by depending on individuals to be right.

On this particular point, I apparently disagree with Yudkowsky's assertion that "You cannot rely on anyone else to argue you out of your mistakes; you cannot rely on anyone else to save you; you and only you are obligated to find the flaws in your positions; if you put that burden down, don't expect anyone else to pick it up." -- because that methodology works even less reliably than the polar opposite (i.e. depending entirely on other people to find flaws).

We can't depend on individuals to be reliably right for at least three reasons I can think of (in short: individuals go away, individuals may be corrupt, and rationally trustable individuals are as yet a relatively rare breed).

When we can reliably test for rationality in individuals, then perhaps we can arrive at something better; I'm attempting to devise a solution that works with what we have now -- because if we wait for that better solution, we spend a lot more time losing (http://lesswrong.com/lw/7i/rationality_is_systematized_winning/), and refusing to use imperfect improvements to our rationality does nothing to prevent lethal failures of rationality (http://issuepedia.org/Threats_to_civilization, http://lesswrong.com/lw/u2/the_sheer_folly_of_callow_youth/).

A Theory About the Brontosaurus

Getting back to the original question yet again (ahem ahem), as slightly modified -- how can we tell if a dialogue is rational or not?

I propose that the key elements of a rational conversation are:

  • the use of documented reasoning processes:
    • using the best known process(es) for a given class of problem
    • stating openly which particular process(es) you are using
    • documenting any new processes you decide to use
  • making every reasonable effort to verify that:
    • your inputs are reasonably accurate, and
    • there are no other reasoning processes which might be better suited to this class of problem, and
    • there are no significant flaws in your application of the reasoning processes you are using (http://lesswrong.com/lw/mu/trust_in_math/), and
    • there are no significant inputs you are ignoring

I can define in more detail any terms that leave too much wiggle-room.

If an argument satisfies all of these requirements, it is at least provisionally rational. If it fails any one of them, then it's not rational.
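
(Again just a sketch, not a claim about how this would really be implemented: the checklist above expressed as data, where "provisionally rational" means every item checks out and a single failure means "not rational". The yes/no judgments themselves would still have to come from the participants or reviewers.)

  # Hypothetical sketch: the proposed criteria as a checklist, where an
  # argument is provisionally rational only if every check passes.
  CHECKLIST = [
      "best known process(es) used for this class of problem",
      "process(es) being used are stated openly",
      "any new processes are documented",
      "inputs verified as reasonably accurate",
      "no better-suited reasoning process was overlooked",
      "no significant flaws in how the process was applied",
      "no significant inputs were ignored",
  ]

  def provisionally_rational(judgments):
      """judgments: dict mapping each checklist item to True or False."""
      return all(judgments.get(item, False) for item in CHECKLIST)

  # Example: an argument that fails even one check is not (yet) rational.
  judgments = {item: True for item in CHECKLIST}
  judgments["no significant inputs were ignored"] = False
  print(provisionally_rational(judgments))  # -> False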

This is not a circular definition (defining "rationality" by referring to "reasonable" things, where "reasonable" depends on people being "rational"); it is more like a recursive algorithm, where large ambiguous problems are split up into smaller and smaller sub-problems until we get to a size where the ambiguity is negligible.
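
(If a code analogy helps, the recursive shape I have in mind looks something like the toy sketch below -- with the splitting and the judgment of "simple enough" still supplied by people, which is the whole point.)

  # Toy sketch of the recursive refinement idea: keep splitting a question
  # into sub-questions until each piece is unambiguous enough to settle
  # directly, then require all the pieces to check out. The three function
  # arguments are placeholders for human judgment, not real code.
  def settle(question, simple_enough, split, settle_directly):
      if simple_enough(question):
          return settle_directly(question)
      return all(settle(sub, simple_enough, split, settle_directly)
                 for sub in split(question))
  # e.g. settle("is this argument rational?", ...) bottoms out in small,
  # checkable claims rather than looping back on itself.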

This is not one great moral principle (http://lesswrong.com/lw/lq/fake_utility_functions/); it is more like a working process which itself can be refined over time. It is an attempt to apply the processes of science (or at least the same reasoning which arrived at those processes) to ethical discourse.

So... can we agree on this?