
Anarchism, Transhumanism, and the Singularity

Discussion in 'General political debates' started by NGNM85, Sep 9, 2009.

  1. Ivanovich


    Jan 31, 2010
     
    Well, I never heard of this till just now, though it's interesting that I mentioned something similar in that other thread about the 'wisdom of creating something more powerful than ourselves'. OK, I have to say there seems to be a bit of hype here; there are too many limits for the growth rate of computing power to continue to infinity. That's just bullshit, so they should really drop the 'singularity' name - it just makes the thing look like a pop pseudoscience rant. That computer AI may surpass human intelligence, well, OK, that's more realistic, maybe inevitable. I'm not sure - there are some major differences between the biological and the artificial - but anyway, if it happened, no doubt it would cause some big changes in society. But what the hell, it wouldn't be the first time, and we are an adaptable bunch.
     
  2. back2front


    Nov 26, 2009
     
    There's a lot more to this than meets the eye.

    One thing is certain: if we don't manage to wipe ourselves out (salient point), technology (including robotics) will continue to advance in leaps and bounds. Remember that the technology at our popular disposal is years behind the cutting edge. Whether we agree with it or not, the fact remains that it will occur, because we are an inquisitive species and are propelled by that curiosity.

    Science fiction nerds will no doubt be aware of Isaac Asimov and his Three Laws of Robotics:

    1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
    2. A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
    3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

    Asimov is widely credited with coining the term 'robotics' in 1941, and in his stories he tried to overcome the 'tedious idea' that mad scientists inevitably create Frankenstein's monsters. He implied that just because something is potentially dangerous does not mean it should be avoided. Asimov was also reworking the Faust legend, in which Faust makes a deal with the devil in return for knowledge.

    The central problem with technology is who wields it and what its applications are. Generally speaking, technological advances can be seen in all aspects of society, but the drive behind them tends to be either military or industrial. This internet we all use, now commonplace, began as a military project, for example. Yet the majority of resources go into the enhancement of long- and short-range weapons of mass destruction, or into factory complexes where machines take the place of labour.

    In respect of the latter, enter Bob Black and his essay "The Abolition of Work". Black, echoing Kropotkin (of whom more later), suggested that humans would be better off removing the drudgery of labour by inventing machines to do their work for them. This would leave them more time to pursue intellectual or leisure interests, in turn creating a more fulfilling existence. This was a slight advance on Kropotkin, who had, in the wake of workers fighting for the 8-hour day, suggested that workers should strive next for a 4-5 hour working day, again with the idea that they would be free to pursue their own interests. What was important about this point was that Kropotkin envisaged that in an anarchist society with, say, the proposed 4-hour working day established, there would be great strides in scientific and, subsequently, societal development. Freed from the constraint of cost (with money abolished) and with minds free to pursue their own interests (freed from time constraints, at least to a degree), society would be transformed, and in a significantly shorter time span than under capitalism, which is restricted by cost, time and selfish interests.

    Black then promotes the idea of robotics (indirectly) to further this goal: let's build machines that do the dirty work so we have more time to do what we like. Of course, under capitalism the idea is to reduce cost and expand profit at the expense of workers, so in that context the construction of such machines directly harms the welfare of workers.

    I think the central objection I would make is that unless selfish interests are removed (which in turn spells the destruction of capitalist society), I would be highly wary of many of these developments, including AI.

    In saying that, if Asimov's Three Laws were hard-wired into all future robotics, then we might have no need to fear. Again, we need to get beyond the Frankenstein syndrome. For me, though, such developments can only be carried out in a post-capitalist society. I don't accept that they will of themselves necessarily create that society; that will have to come from workers, from the bottom up. Until such times I view all these things extremely cautiously, but I certainly don't think they should be dismissed outright.

    NGNM85 is correct to suggest that we need to develop ideas of anarchism and libertarian socialism. Much of what we continue to promote in that respect may be, as Murray Bookchin suggests, "archaic"...
     
  3. NGNM85


    Sep 8, 2009
     
    The most cited example, for obvious reasons, is Moore's law: the number of transistors that can fit on a chip has doubled every year and a half to two years, and it has been doing so for decades, like clockwork. However, you are quite right that there are limitations. Barring an existential disaster, it's likely we'll reach the end of Moore's law within the next 20 years, maybe sooner. You can only make these components so small before you hit the limit, the .1 barrier, at which point electrons start leaking out everywhere and it all gets fucked up. Still, this is not set in stone. Lawrence Krauss (regular contributor to Scientific American, and author of "The Physics of Star Trek") predicts for various reasons that Moore's Law could continue for over 500 years, although I haven't read anything he's published on the subject. There are other ideas, like 3-dimensional, or layered, chips, though these have the problem of exponentially increasing heat, which makes it difficult to keep the system from frying itself. However, as Kurzweil points out, there are always new paradigms; Moore's law is just the latest. Here's a diagram from Kurzweil: http://en.wikipedia.org/wiki/File:pPTMooresLawai.jpg
    Again, we see the exponential "J" curve. What might this new paradigm be? Most likely, I think, quantum computers. The CIA has put a substantial amount of money into this. Potentially, quantum computers would make a Cray look like a pocket calculator. However, there's the problem of keeping the qubits perfectly in line and preventing decoherence, etc.
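For what it's worth, the doubling arithmetic behind that "J" curve is easy to sketch. Here's a minimal Python example; the 18-month doubling period and the starting count of one million transistors are illustrative assumptions, not figures from this thread:

```python
# Project a transistor count forward under an assumed Moore's-law doubling.
def transistors(initial: int, years: float, doubling_years: float = 1.5) -> int:
    """Exponential growth: one doubling every `doubling_years` years."""
    return int(initial * 2 ** (years / doubling_years))

# Starting from an assumed 1 million transistors, 15 years gives 10 doublings:
print(transistors(1_000_000, 15))  # 1,024,000,000 (a 1024x increase)
```

The point of the sketch is just how fast the curve runs away: every fixed interval multiplies the count, so 15 years at an 18-month pace is a thousandfold increase.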

    The name has some problems, but I think it works as well as any. It was originally coined by SF author and mathematician Vernor Vinge. Incidentally, his ex-wife Joan D. Vinge's novel "Catspaw" is one of the greatest SF novels I've ever read.

    Agreed.
     
  4. NGNM85


    Sep 8, 2009
     
    Word.

    I went back and re-read "I, Robot" a few months ago, and what really amazed me was how far ahead of his time Asimov was. Virtually all the things AI research is wrestling with today, he had already thought of decades ago.

    Right on.

    This is what I mean when I say we need to draw a fine line between the science and the institutions that usurp it for their own purposes. Oppenheimer and Einstein were not driven to create an instrument of mass death. That's the perverse objective of the generals, politicians, and clerics when they get their hands on it.

    Exactly; as Chomsky says, the fundamental tendency in anarchism is towards a modern, sophisticated society. At the moment machines are displacing workers, creating tension and unemployment, which also has to do with the nature of predatory economic institutions; but anyhow, I think the endpoint of this is ultimately beneficial: essentially an end to labor as we know it, freeing us up to do all sorts of other things. Here is a compilation of videos of manufacturing robots; it's absolutely amazing, and keep in mind, they're only going to get faster and more efficient.
    http://singularityhub.com/2010/02/11/no ... n-factory/

    Exactly.

    We should obviously continue to try to change this. However, as I've said before, I wonder if we are heading towards a point where these structures simply become untenable, for example with inexhaustible clean energy, and so forth. To paraphrase Ozymandias in "Watchmen": power structures largely rest on scarcity; when resources approach infinity, power gets diffused and much tougher to hang on to.

    As for AI, I'd say there are still reasons for concern. You mentioned the Three Laws. This is a good first principle: to create a "friendly" AI. However, we must acknowledge that the more this thing eclipses us in intelligence, the more ineffectual any barriers we might design for it will become. It would probably be a good idea not to connect it to the internet until we can be relatively sure it won't present any problems. I think an AI would be very different from us. First of all, most of our attitudes and behaviors are shaped by our biological evolution: our moral sense, romantic love, and the desire to procreate, for example. It probably wouldn't have these natural tendencies. That said, just because it had no system of morality wouldn't make it necessarily hostile. I don't think we have to worry about getting into "Hasta la vista, baby" territory anytime soon.

    Maybe. Again, I think we might have to move on from this conception of the proletariat, which was constructed at a time when the west had a substantial manufacturing sector and most of that labor was done by hand. That no longer applies to most workers in the west. I think we need to update the conception for the 21st century.

    A very good idea.
     
  5. ASA


    Nov 2, 2009
     
    just remember not to create a treehouse, cheers y'all
     
  6. Ivanovich


    Jan 31, 2010
     
    I worked as an operator in '81 on an IBM mainframe. This thing had 512 KB of RAM and a 1 GB hard disk, filled a room, and cost £200k. Today you can get a mobile phone for £20 that rivals it. Considering the massive increase in computing power, it's remarkable that society has changed so little since then. The familiar is just too comforting for (most of) society to give up easily, I think.
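As a back-of-the-envelope check on that comparison (not part of the original post): taking the 512 KB mainframe figure at face value and assuming 512 MB of RAM for a cheap modern phone, the growth works out to exactly ten doublings.

```python
import math

mainframe_ram = 512 * 1024       # 512 KB, per the post above
phone_ram = 512 * 1024 ** 2      # 512 MB, an assumed figure for a cheap phone

growth = phone_ram // mainframe_ram   # overall growth factor
doublings = math.log2(growth)         # number of capacity doublings
print(growth, doublings)  # 1024 10.0
```

Spread over roughly three decades, that's one doubling every three years or so for this particular pair of machines, a slower pace than the canonical Moore's-law figures discussed earlier in the thread.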
     
  7. Rathryn


    Oct 21, 2009
     
    I think, personally, that there are too many unknown variables in what humanity can achieve before we try to 'upgrade' ourselves with nanotech or bionics just for the sake of upgrading. If, however, we look at people who actually NEED these upgrades to improve their lives (missing limbs, blindness, pacemakers), hell, I'm all for it.
    I'm just a tad more interested in what humans can achieve on their own, the unlocked potential, if you will.
     
  8. NGNM85


    Sep 8, 2009
     
    I think we have many people today who are pushing those boundaries: people like Noam Chomsky and Stephen Hawking displaying the upper levels of human cognitive ability, Olympic and professional athletes displaying the pinnacle of physical capability, also artists, writers, musicians. I think the key is to help more people get to the place where they can start realizing their potential.
    As for human enhancement, I'm paraphrasing, I forget the source, but essentially, at this point in medical science there is no clear dividing line between medicine and human enhancement. Take aging, for example. It's very likely aging is simply the result of a few specific illnesses, or breakdowns in our biochemistry; treating these will lead to longer lifespans and eventually, perhaps, "negligible senescence", an effective end to aging.
    I think human enhancement should be given serious consideration and forethought. It's a general truism that the greater the potential impact of a course of action, the greater the obligation to act responsibly. I don't see this as any different. That's why we discuss these issues: to explore them, to understand them, to make the right choices.
     
  9. ASA


    Nov 2, 2009
     
    soz but this thread is aws, don't get ag with me, carry on
     
  10. Rathryn


    Oct 21, 2009
     
    Point very well taken, but I mentioned it because it's an important part of my own philosophy, if you will. I grew up a martial artist, and the adaptability of the human body has always amazed me; not to mention that humans actually develop less body hair (on average) than they did a few generations ago, simply because there is even less need for it.
    But looking at more martial-arts-inclined adaptations: people involved with 'breaking' actually have a modified bone structure, with a thicker mesh, to withstand the pressures involved. And more generally, if a person exercises regularly, their mitochondria become more adept at burning fat for fuel than a non-exercising person's would.
     
  11. A Better World


    Feb 28, 2010
     
    The idea fascinates me from an intellectual standpoint, but I can't see it helping us at all. Technology has brought us to the point we're at now, and freedom from genes sounds a lot like the Holocaust to me. Wiping out the undesirable traits of human beings is control. I'm quite primitivist in my beliefs, and although I realize we've come to a point we probably can't turn back from, more technology does not seem like the solution to our problems. Not only has weaponry destroyed the lives of millions, but communication technology has destroyed true communication, entertainment technology has destroyed true enjoyment, prefabricated technology has destroyed our ability to create, etc. Solving human problems by extending the cause looks like the same old shit to me.
     
  12. Rathryn


    Oct 21, 2009
     
    I fail to understand where the 'freedom from genes' comes from, but I might've misread something on the linked pages.
    If I remember correctly, the primary goals are to extend life comfortably and to try to eradicate disease, which to me doesn't sound too bad.
     
  13. Ivanovich


    Jan 31, 2010
     
    I dunno, 70 years is enough for anyone, I think, though making sure everyone gets there, without too much pain, is a worthwhile goal.
     
  14. Rathryn


    Oct 21, 2009
     
    Which IS the goal, if I understand correctly, though by the time we get to that stage there will most likely be people who corrupt the idea(l)s and use them for certain people only.
     
  15. NGNM85


    Sep 8, 2009
     
    I'm not advocating anything of the sort. Obviously, there are certain theoretical types of genetic engineering that should be discouraged. However, if we can isolate horrible genetic diseases like cystic fibrosis or Huntington's, why wouldn't we? I have trouble believing that these horrible illnesses contribute anything vital or valuable to the human race. I don't see any justification for keeping them, outside of some vague religious nonsense. Frankly, I think any deity whose divine plan included this degree of suffering and misery wouldn't deserve any consideration. I don't think Stephen Hawking would be drastically less brilliant if he weren't crippled by a horrible disease. In fact, this kind of preemptive action could allow us to keep more of the Stephen Hawkings or Picassos who are felled by some awful sickness. In the coming years genetic medicine should become the norm. We may have the tools to rid the world of sickle-cell anemia, etc., and honestly, I think it's about time.


    This is an area of fundamental disagreement. I think it confuses the cause with the means. Science, which is really just applied reason, is at the very least neutral. Monolithic power structures use technology to oppress people because monolithic structures generally exist to oppress people. The science isn't the problem. Einstein and Oppenheimer were not naturally inclined to create an instrument of mass death; that's what the politicians, generals, and clerics wanted, because that's the limit of the scope of their vision.
     
  16. NGNM85


    Sep 8, 2009
     
    I can't speak for the author, but that's essentially how I read it. Anarchism is freedom from artificial constraints placed on us by governments, corporations, and religion. Transhumanism is freedom from biological constraints: limits on how smart we can be, how strong we can be, how long we can live, and how healthy we can be for how much of that finite lifespan.

    As Yudkowsky says in the essay I posted:

    "Suppose you find an unconscious six-year-old girl lying on the train tracks of an active railroad. What, morally speaking, ought you to do in this situation? Would it be better to leave her there to get run over, or to try to save her? How about if a 45-year-old man has a debilitating but nonfatal illness that will severely reduce his quality of life – is it better to cure him, or not cure him?"

    The point being that we are morally obligated to try to save lives. You can't just let the 45-year-old die and say, "Fuck it, he had his chance." There is no magic dividing line. To paraphrase Nick Bostrom: one of the primary causes of death is aging, so if you really want to save lives, and you apply these ethical principles consistently, aging is something you have to address.
     