Transhumanism 101
_Originally written as a post on G+, but abandoned near the end; needs reformatting and finishing._
Major aspects of transhumanism:
- Indefinite life extension (ILX)
- Human-equivalent AI (HEAI)
- "Mind-uploading"
Indefinite Life Extension (ILX)
So, Ray Kurzweil (about whom I could say more) has said he wants to live forever. I don't see anything _inherently_ problematic in that ideal. What is problematic is the current socio-economic context: we have people who can't afford basic medical care, on a planet that is already overpopulated.
There are worst-case scenarios at both extremes:
A. If ILX is an expensive procedure that only a few could afford, and the quality of public medical care doesn't improve, then we'd have a few people living lives extended through privilege while most live lives unfairly shortened by deprivation.
B. If ILX is a cheap procedure -- perhaps even one that doubles as a cure for many now-incurable diseases like cancer -- we could easily end up with a population explosion, as old people stop dying off while young people continue to reproduce. (The sketch below shows how quickly that compounds.)
Obviously ILX tech, in whatever form it might arrive, poses serious ethical issues with regard to economic disparity and planetary carrying-capacity.
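To make scenario B concrete, here's a minimal sketch in Python. All of the rates are made-up round numbers chosen for illustration (roughly status-quo-shaped), not demographic forecasts:

```python
# Toy projection for scenario B: births continue, deaths (mostly) stop.
# The rates below are made-up round numbers, not real demographic data.

def project(pop, birth_rate, death_rate, years):
    """Compound the population by (1 + births - deaths) once per year."""
    for _ in range(years):
        pop *= 1 + birth_rate - death_rate
    return pop

world = 8e9  # roughly today's population

# Status quo: ~1.8% births, ~0.8% deaths -> ~1% net annual growth
print(f"no ILX, 100 years:    {project(world, 0.018, 0.008, 100):.2e}")

# Cheap ILX: deaths drop to accidents only (~0.05%), births unchanged
print(f"cheap ILX, 100 years: {project(world, 0.018, 0.0005, 100):.2e}")
```

In this toy run, removing most deaths roughly doubles the century-end population relative to the status quo. The specific figures don't matter; the compounding does.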
Fortunately, ILX looks to be a harder problem than either overpopulation or economic inequality. Both of those have practical solutions (better quality of life and a minimum income, respectively -- each of which reinforces the other); the main obstacles are political, not practical. (Some might disagree; I'm fine with debating those points.)
The form of ILX which most appeals to me, however, would at least not cause a population explosion. I'll get to that under "Mind-uploading" below.
Human-equivalent AI (HEAI)
I have run across people who are passionately opposed to the idea that HEAI is even possible. Based on past experience I'm not sure if meaningful discussion on that topic is possible, but I'm willing to give it a try if anyone here is of that persuasion.
In any case, HEAI is a key aspect of transhumanism; if you don't buy it, you probably won't see the point of the rest of it.
If HEAI does happen, everything changes -- because you now have something like a person who can become more intelligent by adding more memory and CPU power. (Transhumanists admittedly tend to gloss over the difficulty of scaling up intelligence; so will I, for now, but that's another discussion we could get into.)
If you allow the premise that an HEAI could make itself more intelligent through hardware improvements, however, you arrive at the conclusion that we would soon have AI smarter than any human.
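As a purely illustrative sketch (the starting point and growth constant are my own arbitrary assumptions, not part of the argument above), this is what the compounding looks like if a more capable system also gets better at improving its own hardware:

```python
# Toy model of the hardware-scaling premise: capability scales with
# hardware, and a more capable system improves its hardware faster.
# Both constants are arbitrary assumptions chosen to show the shape
# of the curve, not predictions.

ai = 1.0   # start at human-equivalent capability (1.0 = one human)
k = 0.1    # assumed: fraction of capability turned into gains per cycle

for cycle in range(1, 16):
    ai *= 1 + k * ai   # smarter AI -> larger improvement next cycle
    print(f"cycle {cycle:2d}: {ai:10.1f}x human-level")
```

The shape is the point: because each gain feeds the next, growth under this premise is faster than exponential -- which is why "soon" is plausible even from a modest start.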
And that's when things get interesting, because understanding the motives of a super-HEAI could well be impossible for regular humans. We don't know what it might want, and it could easily be smart enough to trick us into carrying out its wishes whether or not that was in our best interests. (Kind of like Fox News viewers.)
The point at which this happens has been dubbed "the Singularity" (by SF author Vernor Vinge). People I respect, including PZ Myers, think the Singularity is a load of bunk. I think PZ is misunderstanding it. The idea isn't that we're going to suddenly become so intelligent that we all vanish into some sort of cognitive black hole, or that everything will be transformed into "grey goo"[2] or computronium[3]. I mean, those are scenarios, but they're more nightmare edge-cases than they are what the Singularity is about.
What it's about is just that there will be a time past which things will simply be so different from how they are now that it's essentially impossible to make meaningful predictions about them -- a sort of socio-temporal event horizon. (And in a way, it's an ongoing thing that is just accelerating now.)[4]
"Mind-uploading"
This is another thing about which there is much debate as to whether it's even possible, never mind how difficult it might be. (Roger Penrose theorizes[5] that quantum states are involved; I think that's getting needlessly mystical, but it is a possibility -- and it could make this an _extremely_ difficult problem to solve, kind of like FTL travel.)
The basic idea, though, is that the mind is the result of the operation of natural laws -- and as such, it should be possible to build a computational equivalent.[6] Having done that, if you could read the exact state of a real person's brain at any given time and reproduce those conditions with sufficient precision inside the simulation, you would have effectively "copied" the person into an AI, aka "uploaded" them.[7]
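To pin down what "copying" means in these input/output terms, here's a deliberately trivial sketch; TinyMind is a hypothetical stand-in for a brain-state snapshot, not a claim about real neuroscience:

```python
# A "mind" as a deterministic input/output device (see note 6): if you
# duplicate its exact state, the copy's behavior is indistinguishable
# from the original's -- until their inputs diverge (see note 7).
import copy

class TinyMind:
    def __init__(self, state):
        self.state = state  # stands in for a full brain-state snapshot

    def respond(self, stimulus):
        # Deterministic: the output depends only on state + input.
        self.state = (self.state * 31 + stimulus) % 1_000_003
        return self.state

original = TinyMind(state=42)
original.respond(7)               # the original lives a little first

upload = copy.deepcopy(original)  # "read the exact state" and reproduce it

# Identical inputs now produce identical outputs in both instances...
assert original.respond(99) == upload.respond(99)
# ...but there are two of them, and different future inputs will make
# their states -- and their lives -- diverge.
```

Note that this is a copy, not a move: the original is still running afterward, which is exactly the issue note 7 gestures at.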
Notes
2. https://en.wikipedia.org/wiki/Grey_goo
3. https://en.wikipedia.org/wiki/Computronium
4. I wrote more about this, but set it aside; can post on request.
5. https://en.wikipedia.org/wiki/The_Emperor%27s_New_Mind
6. Insert debate here about whether a simulated mind can really "think" or not. In my view, the mind is fundamentally a device with inputs and outputs, i.e. an information-processing device; as such, a simulation is equivalent to the real thing.
7. Insert long discussion here about "moving" vs. "copying", and all the technical and ethical issues involved.