Friday, December 15, 2017

Will Artificial Intelligence Become Conscious?

I'm a "small-c" proponent myself, since it makes sense to me as an emergent property of a complex brain:

https://singularityhub.com/2017/12/15/will-artificial-intelligence-become-conscious/

2 comments:

flashgordon said...

I've often thought that when one learns something, they become conscious of it. But if you just jot down a data point, that's not being conscious of it. It has to be something dynamic.

I kind of grew up reading about chaos theory; James Gleick's book was one of the first books I ever read. I've always felt that chaotic dynamics is at the heart of free will and consciousness. Chaos involves negative and positive feedback loops.
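To make the feedback-loop idea concrete, here is a minimal Python sketch (my own illustration, not anything from Gleick's book) of the logistic map, a textbook chaotic system in which each value feeds back into the next; two nearly identical starting points diverge after a few dozen iterations.

```python
# Logistic map: x_{n+1} = r * x_n * (1 - x_n)
# A one-line feedback loop that becomes chaotic for r near 4.

def logistic_orbit(x0, r=4.0, steps=30):
    """Iterate the logistic map and return the whole trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

# Two starting points that differ by one part in a billion...
a = logistic_orbit(0.200000000)
b = logistic_orbit(0.200000001)

# ...end up nowhere near each other by the last few steps.
for n in (0, 10, 20, 30):
    print(f"step {n:2d}: {a[n]:.6f} vs {b[n]:.6f}  (diff {abs(a[n] - b[n]):.6f})")
```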

Consciousness, in the chaotic-dynamics view, is strobe-like. If you set the frequency of a strobe light to match the speed of a rotating propeller, for instance, you can make it appear to stand still (at least that's the perception of it). There are even recent scientific findings to that effect:

https://reliawire.com/peak-auditory-perception-alternates-ears/?utm_content=buffer68cd3&utm_medium=social&utm_source=twitter.com&utm_campaign=buffer
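As an aside, the strobe/propeller effect mentioned above is just sampling (aliasing): if you only see a spinning blade once per revolution, its apparent angle never changes. Here is a small Python sketch of that idea, with illustrative numbers I picked myself (not taken from the linked study):

```python
def apparent_angle(rotation_hz, strobe_hz, flashes=5):
    """Angle (degrees) of a spinning blade at each strobe flash."""
    angles = []
    for k in range(flashes):
        t = k / strobe_hz                         # time of the k-th flash
        angle = (360.0 * rotation_hz * t) % 360.0  # where the blade actually is
        angles.append(round(angle, 1))
    return angles

# Strobe matched to the rotation: the blade appears to stand still.
print(apparent_angle(rotation_hz=24.0, strobe_hz=24.0))  # [0.0, 0.0, 0.0, 0.0, 0.0]

# Strobe slightly slower: the blade appears to creep forward.
print(apparent_angle(rotation_hz=24.0, strobe_hz=23.0))
```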

- There are other remarkable scientific developments (I don't think I can dig out the article; it was a long time ago and is now buried under so many scientific developments since then!) pointing out biological cycles, and that biological clocks anticipate everything from chemical reactions to electrical signals. This may suggest consciousness is a kind of anticipation effect. If the anticipated phenomenon doesn't happen, we say, hey, what's going on here? Or, yeah, I thought that was going to happen!

- For consciousness to happen in an A.I., the A.I. must become detached from the mechanism of its creation. If you have a dystopian, if you will, connection to the A.I. telling it what to think, that changes the topology to a regular computer program. The A.I. has to be free and out of control. It's like a stable non-equilibrium structure: a tornado is part of the environment (the air) that feeds it. If you try to box it in, the conditions for its independent chaotic movements are no more.

I definitely think the only way to create a general conscious A.I. is to give birth to it and raise it like a baby.

flashgordon said...

Speaking of dystopian,

Well, I'll start with a quote from Subhash Kak: "A life of no work and only play may turn out to be a dystopia." I see this over and over again: if we don't have our manual labor jobs, we'd commit suicide or something! What? What kind of intellectuals do we have in this society?!

I did a military stint, and I was constantly pecked at for being a runner and an intellectual, as if those were bad things. They'd sit there and say how they're so happy doing manual labor 8 hours a day. When off the aircraft carrier, the workday is supposed to be eight hours; but something always goes wrong, and you end up with a ten-or-more-hour day. It never fails; and then, of course, on an aircraft carrier the hours are 12 hours a day, 7 days a week, for 6 to 9 months (more in wartime). These people smile at you and say, "Aren't you having fun?" They have no interest in life at all.

And here we have this 'intellectual' asking what we will do when the machines automate all the manual labor. I guess it'll be dystopian!

- Replacing humans? Well, if done violently, then that's a dystopian and very unintelligent reaction from the machines. Throwing in a side remark real quick: mankind is a lifeform, and it evolves. So, if it evolves, is that not replacing the species? Getting back to violently replacing the human species . . . well, that would be very stupid of A.I. to do. 1) As of right now, the A.I. needs us to train its neural networks. 2) Humans represent a genuine, minimal example of intelligence/consciousness, one that can learn the universe. How do you know that you've learned everything about a human intelligence, the biology and the natural intelligence of a human? Certainly wiping it out would do no good! You gotta keep a few examples around, at least!

This is the same fallacy the religious probably make about atheists who study the Bible. They probably say we want to destroy their Bible to disprove it. You can't disprove it if you destroy all the evidence! You want the data!

- These ideas, remarkably, have not been thought of. That alone says a lot about the philosophical state of scientists in general today. They clearly don't understand what the problems are.

- The issue, as Eric Drexler got right the first time, is abuse by humans, not A.I. deciding to wipe out humans. I've seen some A.I. guys recently be brave enough to point this out; but, as this article's author suggests, not enough (not to mention Elon Musk, Stephen Hawking, and a bunch of others). This goes to religion, which appears to be taboo to talk about amongst scientists (see the philosophical state of scientists today, above!). And, really, you want to talk about dystopian?

What's dystopia, other than religion, which says "do not think" and makes questioning authority, asking questions, and learning about the universe taboo? Hence, my Gospel of Truth - http://wwwscientifichumanism.blogspot.com/2017/06/gospel-of-truth-you-know-it-polished-up.html
