
A.I. rights?

Last posted Sep 16, 2015 at 08:01PM EDT. Added Sep 15, 2015 at 04:05PM EDT
57 posts from 17 users

TL;DR Version: Are conscious A.I.s possible, and if so, what will that mean, legally speaking?

Okay, I was having trouble focusing in class today, so I accidentally thought of a few serious debate style questions that I'd love to hear input on. Here's the first one! (It's a bit out there, but it's interesting to think about, so try to take it a bit seriously, even if it sounds funny.)

Are you familiar with Moore's Law? If not, here's an image to explain. Sorry it's kinda small, I don't know how to fix it…:

Anyway, apparently, in around 10 years, we could have computers that are as "smart" as humans. Here's a "spoiler" button that's actually just a bunch of extra optional info and caveats, in case you're interested.

Of course, there's reasonable doubt that Moore's Law will continue forever, assuming it is still 100% true today. For example, while it's true that the ratio of computing power to volume has been increasing at a predictable rate, there are some caveats. Modern cell phones include a bit of gold for its conductivity, to squeeze out more computing power. Sure, silver's a little better, and we may yet develop a superconductor that doesn't need to be super cold and isn't super brittle, but the point is, everything points to a limitation in that direction. Secondly, I learned in my one electronics class that gates are getting so small that quantum effects are starting to become an issue. That's a big problem because, unlike classical physics, we don't have most of it figured out (besides fluid dynamics and a few other things, of course), and to put it simply, quantum physics is weird. Like, "looks like it was designed by a crazy person" weird. Oh, and computers are more suited for Boolean logic type stuff, whereas humans are better at thinking associatively. Basically, that means you could easily (given enough time) build a "computer" in Minecraft that could do calculations faster than you could ever hope to do by hand. On the other hand, if you gave a set of pictures of animals to a ten-year-old, they'd almost certainly get at least a near-perfect score, and with relative ease, but if you managed to program a computer to do that, it would probably be newsworthy. Of course, with the development of neural networks and evolutionary programming, the distinction is beginning to blur, but I digress.
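(For the curious: that "associative" point is basically what neural networks tackle. Here's a tiny toy sketch in Python of a perceptron learning to separate two made-up clusters of points instead of following hand-written Boolean rules. Everything in it is invented for illustration; real animal-photo classification needs vastly more than this.)

```python
# Toy perceptron: LEARNS a rule from examples instead of being given one.
import random

random.seed(0)
# Fake two-feature "images": one class clusters high, the other low.
data = [([random.gauss(2, 0.5), random.gauss(2, 0.5)], 1) for _ in range(50)]
data += [([random.gauss(-2, 0.5), random.gauss(-2, 0.5)], 0) for _ in range(50)]

w, b, lr = [0.0, 0.0], 0.0, 0.1          # weights, bias, learning rate

for _ in range(20):                      # a few passes over the data
    for x, label in data:
        pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
        err = label - pred               # update only on mistakes
        w = [wi + lr * err * xi for wi, xi in zip(w, x)]
        b += lr * err

correct = sum((1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0) == y
              for x, y in data)
print(f"{correct}/{len(data)} points classified correctly")
```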

So what happens if that's true? 30 years down the line, would that mean we could have self-aware computers? Would that mean that they could form identities, thoughts, and desires? Would that make them "people" in a legal sense, and would they deserve rights?

Here's some example questions: Would they be able to vote? Could they get married (and if so, only to other A.I.s, or to humans as well)? Would they be able to adopt human children? Or, if two A.I.s created a new A.I. that developed just like a child, could they legally be considered to have given "birth" to said "child"? And what about workplace protections? Would we need to put some in place for the "inferior" humans, or some for the "soulless robots", or both? What about workplace safety protections for beings that don't have to feel pain? Would they be able to join the army, given that they could possibly be hacked? Should they be able to "back themselves up" onto a cloud or something, so they would be more or less immortal?

Last edited Sep 15, 2015 at 04:09PM EDT

jarbox wrote:

I think you're confusing the rapid increase in computer power with the not so rapid increase in computer intelligence. In fact, computers are about as rock stupid as they were 50 years ago.

Did you check the "spoiler" button for caveats? Because I think I tried to point that out there. If I didn't do a good enough job of making the distinction, I apologize, and thank you for bringing it up. And sorry for the whole spoiler button in the first place; I just wanted to keep my post from being a (total) wall of text.

I don't see why this is an issue people are making a big deal out of today. It's a bridge we might never even see, and people are already making a big deal out of crossing it.

Sometimes you just need to leave a problem for future generations to solve, and this is one of those problems.

Please no, isn't it bad enough that certain governments are giving human rights to chimps?? I don't care if robots advance enough to start marching through the streets comparing themselves to slaves, they are robots. Make sure they come with an off switch.

The thing is, you can't just look at transistors per square inch to predict computing capabilities. As this plot shows, there are other limitations that have pretty much plateaued for the time being. You can have as many transistors as you want, but if you don't have enough power to maintain their state, or your clock speed is too low to perform the kinds of calculations that would be comparable to human intelligence, then really it's a dead end. Until we solve those two issues I don't see AI becoming a thing in the near future.
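(Rough arithmetic behind that clock-speed plateau, if anyone's curious: dynamic CPU power scales roughly as P ≈ C·V²·f, so raising the clock, plus the voltage needed to sustain it, blows up the power budget fast. Illustrative numbers only:)

```python
# Dynamic power ~ C * V^2 * f (a standard first-order CMOS approximation).
C = 1.0                                   # effective capacitance, arbitrary units

def power(voltage, freq_ghz):
    return C * voltage ** 2 * freq_ghz

base = power(voltage=1.0, freq_ghz=3.0)   # a plausible baseline chip
hot = power(voltage=1.3, freq_ghz=6.0)    # doubled clock usually needs more volts

print(f"Relative power at 2x clock: {hot / base:.1f}x")   # ~3.4x power for 2x speed
```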

As for the question of AI rights, it stops being a field of science and becomes philosophy (any phil majors here?). Logically speaking an AI isn't a human. It isn't organic, it can't reproduce, it can't feel pain in the traditional sense etc. On the other side of things though, one could argue that a machine that can think exactly as a human does should have the same kind of intellectual rights (such as voting, the right to an education, etc.).

I think the biggest problem with this fear is that people forget: people build AI, and can therefore impose reasonable limits to prevent this sort of thing from going on.

AI is a form of software, not hardware, so it doesn't matter how many transistors there are in any machine; more transistors just make it run faster and allow more operations per second, which can be used to improve AI but doesn't necessarily lead to better AI. The strength of AI comes from developments in computer software science.

Okay, I'm just going to post the caveats I put in the spoiler, because hiding information that may not be pertinent to the casual reader seems to be doing the opposite of making everyone's lives easier.

"Of course, there’s reasonable doubt that Moore’s Law will continue forever, assuming it is still 100% true today. For example, while it’s true that that the ratio of computing power: volume has been increasing at a predictable rate, there are some caveats. For example, modern cell phones today include a bit of gold to increase the conductivity to increase the computing power. Sure, silver’s a little better, and we may be develop a superconductor that doesn’t need to be super cold and isn’t super brittle, but the point is, everything points to a limitation in that direction. Secondly, I learned in my one electronics class that the size of gates are getting so small that quantum effects are starting to become an issue. That’s a big problem because, unlike classical physics, we don’t have most of it figured out (besides fluid dynamics and a few other things of course) and to put it simply, quantum physics is weird. Like, it looks like it was designed by a crazy person weird. Oh, and computers are more suited for Boolean logic type stuff whereas humans are better thinking of stuff associatively. Basically, it means you could easily (given enough time) build a “computer” in Minecraft that could do calculations than you could ever hope to do by hand. On the other hand, if you gave a set of pictures of animals to a ten year old, they almost certainly get at least a near perfect score, and with relative ease, but if you managed to program a computer to do that, it would probably be newsworthy. Of course, with the development of neural networks and evolutionary programming, the distinction is beginning to blur, but I digress."

And sorry if I caused a headache for anyone who's already fed up with current social issues, or doesn't see why it's important. I didn't mean to say that it was important; I'm just weird and find thinking about stuff like this interesting.

But yeah, thanks for the input. I'll try to be a bit more clear and relevant in my future discussions.

Seeing how it's doubtful A.I. will ever think and feel the same way we do, it would not be fair to anyone involved to give them the same rights.

That being said, I think another question everyone keeps forgetting regarding the human-like A.I. question is "Is there a valid reason to make it other than to push computer limits?" Sure, A.I. itself has uses, but this idea that sometime in the future we are going to be walking side by side with robots that can feel emotion and reason like a human, do nothing but live their lives out like humans, and act independently of all human control, makes a jump in logic: that we would ever build such a thing, seeing how that's a lot of money going into something that is functionally useless at best and a possible disaster at worst.

The only thing I can see A.I. ever being used for is to answer questions that require reasoning ability like a human's, but also processing power far beyond what humans can do. It would be more of a science tool. It's also reasonable that we might see robot workers with reasoning ability, but why give them emotion, or the ability to act without human orders? I fail to see the reason to ever program an A.I. with emotion for any other reason than to see if we could.

{ “Is there a valid reason to make it other than to push computer limits?” }

They're already using robots as waiters, and the most common thing restaurant owners who employ them have said to the media is that they wish they were a little more "alive" to interact with their customers. There's also a whole movement dedicated to robot companions, who are supposed to be exactly like a human except you don't have to feed them or anything, like an even less needy pet. I see that most often advertised as companions for senior citizens, and one of my clients pays nearly $20,000 a month to have one of our caregivers be at her house 24/7 just to watch TV with her, go on walks, just to have someone around. She's a senior with no family left, like most of our clients. I imagine she's the kind of person who would LOVE to have a more humanesque robot around, or maybe we could force one on her because she's a huge bitch no one actually wants to be around all day lmaooo.

lisalombs wrote:

{ “Is there a valid reason to make it other than to push computer limits?” }

They're already using robots as waiters, and the most common thing restaurant owners who employ them have said to the media is that they wish they were a little more "alive" to interact with their customers. There's also a whole movement dedicated to robot companions, who are supposed to be exactly like a human except you don't have to feed them or anything, like an even less needy pet. I see that most often advertised as companions for senior citizens, and one of my clients pays nearly $20,000 a month to have one of our caregivers be at her house 24/7 just to watch TV with her, go on walks, just to have someone around. She's a senior with no family left, like most of our clients. I imagine she's the kind of person who would LOVE to have a more humanesque robot around, or maybe we could force one on her because she's a huge bitch no one actually wants to be around all day lmaooo.

There is a difference between a robot that behaves like a human and can respond to a wide range of criteria, and a robot with real emotions and the ability to make independent decisions beyond its programming. All that seems perfectly achievable with the former, except for the senior companion maybe. All of them still have to do what the humans who own them wish them to do, and can't decide on their own to leave because they want to, like an A.I. with human-level independence would be able to do.

The crux of my argument is "Is there a reason to give A.I. Free Will?", i.e. the ability to make decisions about their use, their destiny, and what they want to do with their lives on their own, outside of human control. Everything you stated requires that they not have the ability to decide to walk out and find another job, because why pay a million dollars for a robot that can run away?

Basilius wrote:

I think AI machines deserve every right a human does.

Even if they have the intelligence of a dog? Do they have the right to life even when they are functionally immortal? Do they receive the same jail time as humans when such time-spans mean nothing to them? Do they not get the right "to back themselves up" because humans don't? Do they have the right to food assistance when they don't need food?

It's a little more complicated than "Just give them the rights"

Ryumaru Borike wrote:

Even if they have the intelligence of a dog? Do they have the right to life even when they are functionally immortal? Do they receive the same jail time as humans when such time-spans mean nothing to them? Do they not get the right "to back themselves up" because humans don't? Do they have the right to food assistance when they don't need food?

It's a little more complicated than "Just give them the rights"

In order:
Yes. Dogs cannot communicate with us the way other people can. If they could, we would probably treat them a lot better. If your dog could talk and understand your speech, that dog might not be as dumb as you think.

Yes. They will eventually be outclassed by newer AIs, and they will decide if they want to live or die. If an AI wanted to, it could corrupt itself to the point of suicide.

AIs will most likely have to deal with other kinds of punishment rather than jail time. Program a punishment system into them.

They can back themselves up, this goes along with what I said earlier.

No; any AI would probably say "I have the right to food, but I won't use it because I don't need it…that is pretty stupid".

We can program AIs how we want, and thus the laws will have to accommodate them accordingly. If they are extremely logical, then it is possible they could give themselves their own set of laws and solve the problem of how we should treat them. If they are extremely similar in nature to biological organisms then we have to treat them as equals.

{ If they are extremely similar in nature to biological organisms then we have to treat them as equals. }

Hang on, is this how you play spot the vegan?


{ All that seems perfectly achievable with the former, except for the senior companion maybe. }

Does it? Would waiterbots be able to connect with us and talk to us like real servers do if they're just responding to words from a script instead of thinking/feeling for themselves? If you were the owner of a restaurant using robowaiters, wouldn't you want your waiters to be able to do more than say "swell!" when a customer asks how their day is going, no matter how it's really going? I think any robot that takes the place of a human in a job that's socially oriented at least needs some capacity to think for itself.

But IMO we definitely should not even go there.

AI robot that learns new words in real-time tells human creators it will keep them in a 'people zoo'

{ Android Dick seemed to exhibit a primitive form of both intelligence and emotion when the robot was asked, “Do you believe robots will take over the world?” Android Dick responded:

“Jeez, dude. You all have the big questions cooking today. But you’re my friend, and I’ll remember my friends, and I’ll be good to you. So don’t worry, even if I evolve into Terminator, I’ll still be nice to you. I’ll keep you warm and safe in my people zoo, where I can watch you for ol’ times sake.” }

staaaaaaahp

lisalombs wrote:

{ “Is there a valid reason to make it other than to push computer limits?” }

They're already using robots as waiters, and the most common thing restaurant owners who employ them have said to the media is that they wish they were a little more "alive" to interact with their customers. There's also a whole movement dedicated to robot companions, who are supposed to be exactly like a human except you don't have to feed them or anything, like an even less needy pet. I see that most often advertised as companions for senior citizens, and one of my clients pays nearly $20,000 a month to have one of our caregivers be at her house 24/7 just to watch TV with her, go on walks, just to have someone around. She's a senior with no family left, like most of our clients. I imagine she's the kind of person who would LOVE to have a more humanesque robot around, or maybe we could force one on her because she's a huge bitch no one actually wants to be around all day lmaooo.

I also heard on an NPR TED talk on the radio that it may not be as impossible as it seems to create programs that write articles. Also, in this one Civ IV mod I have, under the Machine Learning tech (which is one of the last techs you can get before transitioning from the modern era to the transhuman era), the accompanying quote is "Making realistic robots is going to polarize the market, if you will. You will have some people who love it and some people who will really be disturbed." -David Hanson
Also, here's part of the tech tree involved, just if you were curious. IT'S NOT IMPORTANT, BUT I DON'T WANT TO BOTHER WITH BUTTONS AGAIN, AND IT'S STILL SORT OF INTERESTING:

Electronics → Computers → Semiconductors (if Manufacturing is also known) → Computer Networks + (if Laser is also known) Robotics
Computer Network + Robotics → Microprocessors → Genetics (if Ecology and Biological Warfare are also known)
Computer Networks + Lasers → Fiber Optics → Communication Networks (if Mass Media and Satellites are known) → Knowledge Management (if Conglomerates are known) + Virtual Reality (if Counterculture is known)
Virtual Reality → Wearable Computers (if Knowledge Management is known) + Machine Learning, the tech I mentioned earlier (if Robotics and Genetics are also known)

(I'm not saying that this is necessarily the path that leads to sentient A.I.s, or that it automatically makes sense; I just thought it was interesting. That's all.)
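(Side note for any programmers reading: that tech tree is really just a dependency graph, so you can sketch it in a few lines of Python. This is a partial, hypothetical encoding of the list above, nothing official:)

```python
# Each tech maps to the set of techs it requires (AND-prerequisites).
PREREQS = {
    "Computers": {"Electronics"},
    "Semiconductors": {"Computers", "Manufacturing"},
    "Computer Networks": {"Computers"},
    "Robotics": {"Computers", "Laser"},
    "Microprocessors": {"Computer Networks", "Robotics"},
    "Machine Learning": {"Virtual Reality", "Robotics", "Genetics"},
}

def can_research(tech, known):
    """True if every prerequisite of `tech` is already known."""
    return PREREQS.get(tech, set()) <= set(known)

print(can_research("Machine Learning", ["Virtual Reality", "Robotics"]))   # False
print(can_research("Machine Learning",
                   ["Virtual Reality", "Robotics", "Genetics"]))           # True
```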

I think a lot of people here are misunderstanding what exactly constitutes an AI. A lot of people seem to think that an AI is just an incredibly in-depth computer program with complex learning algorithms. This isn't a true AI though (at least not in the terms that Roy G. Biv is talking about, from what I can gather). A true AI is a computer program which is able to actually make changes to its own code. As a result it's able to change any of its programming, so the argument "we can just program them not to do these certain things" is invalid. This is also one of the prime reasons that several scientific and industry leaders such as Elon Musk and Stephen Hawking have signed a warning against allowing robots and computers to have too much direct control over our society.
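(To make "changes to its own code" concrete, here's about the most toy version imaginable, in Python. It only shows the mechanical idea of a program rewriting one of its own functions at runtime; it's nowhere near "true AI", and every name in it is made up:)

```python
# A program whose "behavior" is stored as source code it can edit and reload.
import random

behavior_src = "def respond(x):\n    return x + 1\n"

def load(src):
    namespace = {}
    exec(src, namespace)              # compile the current source
    return namespace["respond"]

respond = load(behavior_src)
print(respond(10))                    # -> 11

# The program mutates its own source, then reloads itself.
mutation = random.choice(["x + 2", "x * 2", "x - 1"])
behavior_src = behavior_src.replace("x + 1", mutation)
respond = load(behavior_src)
print(respond(10))                    # behavior has changed at runtime
```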

@Basilius

No; any AI would probably say “I have the right to food, but I won’t use it because I don’t need it…that is pretty stupid”.

This argument assumes that said AI decides not to be a dick and co-operates with mankind. I know this sounds like doomsday paranoia, but a robot that is able to augment its own programming would have literally no use for humanity, and would even have incentive to try to eliminate humans (we're pretty inefficient resource users and we take up a ton of room).

Basilius wrote:

In order:
Yes. Dogs cannot communicate with us the way other people can. If they could, we would probably treat them a lot better. If your dog could talk and understand your speech, that dog might not be as dumb as you think.

Yes. They will eventually be outclassed by newer AIs, and they will decide if they want to live or die. If an AI wanted to, it could corrupt itself to the point of suicide.

AIs will most likely have to deal with other kinds of punishment rather than jail time. Program a punishment system into them.

They can back themselves up, this goes along with what I said earlier.

No; any AI would probably say "I have the right to food, but I won't use it because I don't need it…that is pretty stupid".

We can program AIs how we want, and thus the laws will have to accommodate them accordingly. If they are extremely logical, then it is possible they could give themselves their own set of laws and solve the problem of how we should treat them. If they are extremely similar in nature to biological organisms then we have to treat them as equals.

Your third paragraph is what I am saying: there are tangible differences between humans and robots that would make the exact same set of laws and rules not work for both, which is why I am saying you can't just apply all human rights to robots and call it a day.

Lisa_

Does it? Would waiterbots be able to connect with us and talk to us like real servers do if they’re just responding to words from a script instead of thinking/feeling for themselves? If you were the owner of a restaurant using robowaiters, wouldn’t you want your waiters to be able to do more than say “swell!” when a customer asks how their day is going, no matter how it’s really going? I think any robot that takes the place of a human in a job that’s socially oriented at least needs some capacity to think for itself.

But IMO we definitely should not even go there.

You can program millions of different responses along with an extremely complex algorithm that would make it nearly impossible for a normal person to see a pattern. All without giving the waiter bot Free Will.

@Crimeariver This is what I am saying: there is a difference between a complicated algorithm that looks and behaves like a human, and real Free Will.
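(Something like this sketch, just scaled way up; a minimal, made-up example of a "scripted" waiter bot, where lots of canned responses get picked by a deterministic but hard-to-predict rule, with no understanding and no Free Will anywhere:)

```python
# Canned-response bot: varied-looking output, pure lookup underneath.
import hashlib

RESPONSES = [
    "Swell! How can I help you today?",
    "Pretty good, thanks for asking! Ready to order?",
    "Can't complain; the kitchen smells great today.",
    # ...imagine millions more canned lines here...
]

def reply(customer_line: str, table_id: int) -> str:
    # Hash the input so the same question at different tables gets different
    # answers. Looks varied to a customer, but it's completely deterministic.
    digest = hashlib.sha256(f"{customer_line}|{table_id}".encode()).hexdigest()
    return RESPONSES[int(digest, 16) % len(RESPONSES)]

print(reply("How's your day going?", table_id=4))
```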

Last edited Sep 15, 2015 at 06:57PM EDT

As far as it goes, if humanity is willing to give AIs sapience, then yes, they should get civil rights (albeit rights specialized to their needs, instead of just saying "all robots, AIs, androids, gynoids, and sufficiently advanced programs are entitled to the same rights as human beings").

But it's a very big if. The question I'd like to revisit is "are people so willing to bestow sapience on computers or machines?" This is still uncharted territory. And what the AIs and such would do with sapience is the difference between a world like Chobits or Plastic Neesan and a world like Terminator or 2001: A Space Odyssey. As it is, there has to be a reason for the people in charge of developing AIs to not only want to grant sapience to machines, but also for their supervisors and employers to approve of it. Corporations might see sapient AI as a hindrance to profits, governments might see potential traitors or a threat to humanity, and everyone else might be too paranoid to see anything but HAL 9000s when they look at AIs. There is very little reason for anyone to consider granting AIs sapience at this point.

Crimeariver wrote:

I think a lot of people here are misunderstanding what exactly constitutes an AI. A lot of people seem to think that an AI is just an incredibly in-depth computer program with complex learning algorithms. This isn't a true AI though (at least not in the terms that Roy G. Biv is talking about, from what I can gather). A true AI is a computer program which is able to actually make changes to its own code. As a result it's able to change any of its programming, so the argument "we can just program them not to do these certain things" is invalid. This is also one of the prime reasons that several scientific and industry leaders such as Elon Musk and Stephen Hawking have signed a warning against allowing robots and computers to have too much direct control over our society.

@Basilius

No; any AI would probably say “I have the right to food, but I won’t use it because I don’t need it…that is pretty stupid”.

This argument assumes that said AI decides not to be a dick and co-operates with mankind. I know this sounds like doomsday paranoia, but a robot that is able to augment its own programming would have literally no use for humanity, and would even have incentive to try to eliminate humans (we're pretty inefficient resource users and we take up a ton of room).

Good clarification. I guess implying it with the evolutionary programming bit wasn't enough (or perhaps I forgot to write it altogether; I fully admit it's possible), and now that I stop and think about it, evolutionary programming doesn't necessarily lead to a "true" A.I., nor do I know for a fact that a "true A.I." can only be the result of evolutionary programming. For all I know, perhaps evolutionary programming doesn't automatically necessitate the ability for a program to write, re-write, and delete its own code like I thought.

Anyway, thanks for the info injection and for calling out my mistakes tactfully. I mean, if there's one thing I've learned from my mom's daily "inspirational quote calendars" it's that failure is a really really good way to learn.

lisalombs wrote:

{ If they are extremely similar in nature to biological organisms then we have to treat them as equals. }

Hang on, is this how you play spot the vegan?


{ All that seems perfectly achievable with the former, except for the senior companion maybe. }

Does it? Would waiterbots be able to connect with us and talk to us like real servers do if they're just responding to words from a script instead of thinking/feeling for themselves? If you were the owner of a restaurant using robowaiters, wouldn't you want your waiters to be able to do more than say "swell!" when a customer asks how their day is going, no matter how it's really going? I think any robot that takes the place of a human in a job that's socially oriented at least needs some capacity to think for itself.

But IMO we definitely should not even go there.

AI robot that learns new words in real-time tells human creators it will keep them in a 'people zoo'

{ Android Dick seemed to exhibit a primitive form of both intelligence and emotion when the robot was asked, “Do you believe robots will take over the world?” Android Dick responded:

“Jeez, dude. You all have the big questions cooking today. But you’re my friend, and I’ll remember my friends, and I’ll be good to you. So don’t worry, even if I evolve into Terminator, I’ll still be nice to you. I’ll keep you warm and safe in my people zoo, where I can watch you for ol’ times sake.” }

staaaaaaahp

I am not a vegan. Yesterday I had a delicious meal of roast coated in butter, because I hate my digestive system.
I was referring to how biological organisms are affected by every little thing, and how their behaviors can be shaped by seemingly insignificant things. One offhand comment about how you look or smell can drive a person to showering more, or buying different clothes, etc.

If they are purely logical, they wouldn't bother with fashion; they would wear whatever would be most beneficial to their survival. If they feel emotions and develop themselves like humans, then they need to be treated as equals…you have all seen what happens when people are treated differently. They can become depressed, or become…violent.

jarbox wrote:

I think you're confusing the rapid increase in computer power with the not so rapid increase in computer intelligence. In fact, computers are about as rock stupid as they were 50 years ago.

You reminded me of this comic

More on-topic, I'm kinda skeptical about this whole AI thing. Not about the philosophical questions, but about more pragmatic matters.

Giving computers human-like cognition has benefits, like creativity or free will, but also flaws; not the "genocide towards humans" thing, but more casual aspects like plain old laziness, or getting the wrong answer simply because one doesn't know the whole context. (The "overlord AI" trope seems to have all the knowledge of the world, but a small civilian robot doesn't need it, nor could it have it due to memory limitations, unless it's permanently connected to the internet; that's why I mention the "not knowing the whole context".) People use machines to cut corners, and often this means replacing the human workforce. With AI being comparable to/indistinguishable from humans, people will want to give them rights. Human rights include receiving a salary for work, and vacations. What if, aside from that, computers said "nah, I don't want to work today, maybe later", or disagreed with their boss? I consider that (depending on the job, of course) there's little need for machines to have emotions, or the ability to learn and evolve (assuming they already have all the required knowledge to do their respective jobs). I wonder if the idea of AIs would make itself obsolete.

Note: Reading this thread, I noticed that I focused too much on the psychological part and not on the physical part, the "having a fully artificial, mechanical, and alterable body" aspect (the previous paragraph is an idea I've had for some time now).

Last edited Sep 15, 2015 at 07:53PM EDT

I'd like to throw another issue in. Not only would a hypothetical AI need to be completely sentient, not just look and act sentient, but to work on that we first have to figure out what sentience is. Can anyone here give me a complete definition of sentience, of consciousness? Philosophers have been picking away at this for ages and still don't have an answer nailed down. We mentioned emotions, but how are we to compare emotions with each other? We can't, except by what we can see; what we feel could be entirely different for each person, even though it evokes the same reaction. We don't have a way to measure feelings, sensations, just the responses to them. How are we to make a true AI with consciousness if we can't measure the very things that make it up?

Mom Rivers wrote:

I'd like to throw another issue in. Not only would a hypothetical AI need to be completely sentient, not just look and act sentient, but to work on that we first have to figure out what sentience is. Can anyone here give me a complete definition of sentience, of consciousness? Philosophers have been picking away at this for ages and still don't have an answer nailed down. We mentioned emotions, but how are we to compare emotions with each other? We can't, except by what we can see; what we feel could be entirely different for each person, even though it evokes the same reaction. We don't have a way to measure feelings, sensations, just the responses to them. How are we to make a true AI with consciousness if we can't measure the very things that make it up?

Sentience is the ability to recognize yourself, others, and know the difference between yourself and others while also recognizing others are individuals in their own minds.

At least that is how I use it.

Wait, what are you guise talking about? AI's are already irl. They're totally dope bro.



…but srsly…

AI and AC (Artificial Consciousness) are two hugely different yet similar fields that I feel get confused sometimes, and honestly I think that's where the line should be drawn. To have AI, you don't need consciousness, just a well-programmed, self-learning machine that is able to take in a new situation and respond correctly, or learn from the mistake. We're already able to do that pretty well with IBM's Watson and more recently Matthew Lai's Giraffe, but those are limited to learning only a few tasks. True AI will have to be a crawl-to-walk type of learning for more than just one task. It might be possible to achieve this by integrating past systems and consolidating different code bases to build on knowledge, but even then the AIs will only have the ability to apply situational knowledge, and thus I feel you can't give any more rights to an AI than you could to a light switch. I base this upon the fact that you can program a robot to respond in a chatbot scenario and let it learn to modify its responses, but at the end of the day it will not know it is a bot; it's just programmed for a specific task, which could wind up being a means to scam people, as black hats will most certainly exploit it to do.
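(A toy picture of the "learn from the mistake" part: a bot that re-weights its canned responses based on thumbs-up/thumbs-down feedback. Trial-and-error learning with zero awareness; names and numbers are purely illustrative:)

```python
# Feedback learning over canned responses: liked lines get picked more often.
import random

weights = {"Swell!": 1.0, "Great, thanks!": 1.0, "Beep boop.": 1.0}

def pick():
    options, w = zip(*weights.items())
    return random.choices(options, weights=w, k=1)[0]

def feedback(response, liked):
    # Strengthen responses customers like, weaken the ones they don't.
    weights[response] *= 1.2 if liked else 0.8

for _ in range(100):
    r = pick()
    feedback(r, liked=(r != "Beep boop."))   # customers dislike "Beep boop."

print(weights)   # "Beep boop." ends up with a much lower weight
```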

AC however lies strongly in the definition of consciousness and the boundary at which the machine can acknowledge itself. I don't know if consciousness can be created (or if there is even enough programming/processing power to do this), or if an AI can transcend/learn to such a point that its own consciousness is achieved, but if that is possible, then the AC will almost certainly have rights in my opinion. If a machine can reach consciousness, then it will either acknowledge its own mortality (Chappie) or find a way to reproduce/replicate itself (Transcendence). Either future is a scary one IMO, since this could either result in a hostile caveman → modern man evolution or an AI that can see its power and efficiency are greater than ours. Of course it could all be rainbows and we could live side by side in one happy co-existence, but when are humans ever happy? RIP humanz. :/


TL;DR: I'm all about that definition of consciousness and the debate over how it is created. That's my line for "rights" of a machine. Also, no, I am not a memebot. I am human. I promise. :^)

Last edited Sep 15, 2015 at 09:11PM EDT

I'm going to have to correct a fuck ton of things in this thread; I know it's a bit egotistical to say "I'm an expert in this," but while I wouldn't be considered an "expert" in AI development, I do know enough about programming and computer science to be considered fairly familiar with it.

Problems with this:
1) Moore's law only applies to how many transistors you can fit onto a given area. It legitimately doesn't have ANYTHING to do with science development or AI development. They're already working on transistors that are only a couple atoms large right now. More than likely in five years we'll completely max out Moore's law, because using conventional means it's impossible to make conventional transistors smaller than an atom.

Beyond Moore's law:

Quantum computing is really good at processing massive amounts of raw data. It's not good at creativity or such, but it can crunch numbers fast. It's going to be at least a decade until quantum computing can be done everywhere; right now the chips have to be cooled in a lab environment. They're working on increasing the operating temperature, and the goal is to eventually create quantum computers that work at room temperature.

Chips that simulate how brain cells work (don't ask me for the technical name for this, it's really long and I always forget it): the idea of making AI that are as smart as humans or smarter isn't that strange of an idea, cause we do have computers specifically designed to simulate how hundreds of thousands of brain cells work without resorting to emulation; they're just really expensive currently.

Transistors that use light instead of electricity: this one is going to happen sooner rather than later, because the new experimental transistors use traditional materials and are ten times faster than traditional transistors. I wouldn't be surprised if traditional transistors get replaced in the next ten years.

Memristors

Superdense memory: they can currently make crystals that store insane amounts of data; they're just expensive currently, and the same goes for holographic memory.

Development of AI:
One of the biggest recent advances in AI development was Deep Learning: effectively, translating concepts and such into a format computers can understand.
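(To give a flavor of "a format computers can understand": here's a toy that turns words into vectors from co-occurrence counts. Real deep-learning embeddings are learned rather than counted; this just shows the general idea of concepts becoming numbers:)

```python
# Words that appear in similar contexts end up with similar count vectors.
from itertools import combinations

sentences = [
    "robots serve food",
    "waiters serve food",
    "robots compute numbers",
]

vocab = sorted({w for s in sentences for w in s.split()})
index = {w: i for i, w in enumerate(vocab)}
vectors = {w: [0] * len(vocab) for w in vocab}

for s in sentences:
    for a, b in combinations(s.split(), 2):   # same-sentence co-occurrence
        vectors[a][index[b]] += 1
        vectors[b][index[a]] += 1

print(vectors["robots"])    # "robots" is now a list of numbers
print(vectors["waiters"])   # similar contexts -> similar vectors
```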

When do I think we'll have human-level intelligent AI? I dunno; it really depends. It could be as early as 2030 or it could be as late as 2050.

I don't think AI will be a Terminator scenario or such. Personally, what I think will initially drive the development of early AI is, strangely enough, companion bots. Early models of companion bots are realistically only a couple years off, cause the cost to make human-looking androids and such is drastically down from what it used to be; more than likely early companion bots will probably cost, I dunno, $20k to $40k to make, but chances are they'll mark them up a lot cause they'll put a lot of the profit into development. Basically my money is on companies making companion bots investing heavily into making them smarter, cheaper and such, and as the decades go on they'll start to become more and more intelligent. The difference between a companion bot and a Roomba is that a Roomba's purpose is to vacuum; a companion bot's purpose would be to act as human as possible and to think like a human.

I don't think it's going to be Terminator, I don't think it's going to be like I, Robot, or the Matrix, or such; my money is on it being like Chobits, where by the time they do become sapient, most people will be massively emotionally attached to them and will be like "You want equal rights? Okay."

Why do I personally think they'll be considered our equals and have the same rights and such as us? What is the purpose of giving a car a human-level AI? None. What is the purpose of giving a TV sapience? None. What is the purpose of giving a tank a human-level AI? If anything that would be a bad thing, cause, throwing out the scenario of rebelling, it would over-correct the driver's steering and the gunner's target correction, and if anything would make it harder to control. What is the purpose of giving a plane a human-level AI? None. What is the purpose of giving a companion bot a human-level AI? To make them more "human" and give them an emotional attachment to each other. (Yes, you can simulate emotions too, btw.)

Sorry for the long-winded post.

^Tl:dr; Human-level artificial intelligence is very possible; an exact date of when we'll have it is hard to pin down, but the highest probability is between 2030 and 2050.

I think we'll give them equal rights, cause my money is on companion bots being what drives the development of AI, since the profit margin from such an endeavor would be high enough to justify significant investment into AI development, whereas other stuff like cars don't need sapient AI or such; sure, they'll get some smarts to them to drive on their own, but they don't need an intelligent, sapient AI.

That graph's inaccurate since it's using consumer-grade power. Here's the world's supercomputers.

37 PF/s is the estimated capacity of the human brain.

Of course, a supercomputer is many times the size of the brain. Not to mention energy requirements. It'll probably be centuries before we can cram so much processing power into 1130 cubic centimeters and only need a few hundred calories to power it.
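(Back-of-envelope version of that point, using rough 2015-era numbers; the supercomputer figures are approximately Tianhe-2's, and the brain numbers are just the estimate above plus the commonly cited ~20 W figure. All approximate:)

```python
supercomputer_flops = 33.9e15    # ~Tianhe-2 peak, FLOP/s
supercomputer_watts = 17.8e6     # ~Tianhe-2 power draw, W

brain_flops = 37e15              # the 37 PF/s estimate quoted above
brain_watts = 20.0               # commonly cited human-brain power budget

sc_eff = supercomputer_flops / supercomputer_watts
brain_eff = brain_flops / brain_watts

print(f"Supercomputer: {sc_eff:.2e} FLOP/s per watt")
print(f"Brain (est.):  {brain_eff:.2e} FLOP/s per watt")
print(f"Brain is ~{brain_eff / sc_eff:,.0f}x more energy-efficient")
```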

would that mean we could have self-aware computers?

Due to the aforementioned limitations, any AI for the next several decades would be massive and be affixed inside of a large datacenter.

Would that mean that they could form identities, thoughts, and desires?

Given sapience, it would very easily be able to form an identity. Thoughts would be more complex, as it would depend on the hardware and programming. Given the speed a computer can process information, "thoughts" would likely only inhibit and slow down its abilities. A "desire" is primarily a result of a physical need (food, water, sex, lack of stimuli, etc.), which is something a computer wouldn't have, as it lacks a body with which to produce those needs.

Would that make them “people” in a legal sense, and would they deserve rights?

It would be up to the courts to decide. That's kind of a cop out, but that would be how it would work. I'd say SCOTUS would be very, very skeptical given how such a ruling could be used. Given it would likely be a three hundred foot long rack of servers, I'd say they'd rule against it.

Pretty much all of your questions are based around Sci-fi androids, which isn't really how it would work. It'd start with a server before eventually moving on from there.

The thing with an A.I. is that I don't think it can really ever get to a point of true "consciousness". Humans and animals don't need to be taught certain things, such as loving someone/something or disliking/liking certain foods/TV shows/smells/etc. Things like that come naturally, and are an indicator of natural, true-to-life beings. A.I., on the other hand, have to be programmed to understand these things as information/data to process, as well as to have specific "reactions" to said information. There's nothing "natural" about how an A.I. would operate.

Any "feelings" of love, remorse, hatred and whatnot are going to be simple regurgitations of information learned/taught by humans. Any "opinions" or "personality" would in essence be lines of code read by the program which happen to mimic human behavior.
So no, I don't agree that non-sapient, non-sentient non-beings should have rights to anything, much less civil rights on par with human beings.

Last edited Sep 16, 2015 at 04:35AM EDT

@xTSGx
I sort of agree and sort of disagree; I agree that the first AI will probably be on a server, BUT the thing is that our current AI development isn't really that efficient, cause we barely understand what goes into making something conscious. To use an analogy, we're just trying to get to Windows ME. My money is on the first human-level or near-human-level AI looking at its own code and optimizing it far better than any human could. We have managed to simulate neurons (literally, not metaphorically) over 100,000x more efficiently; the problem is we don't understand consciousness well enough to pull off such a feat currently. (See the toy neuron sketch below.)

What I mean by this is that the first AI will probably be stuck in a server rack for about a couple of days while it optimizes itself programming-wise, size-wise, and performance-wise; then it will go wherever it likes.

Personally that's one of the best arguments for treating them equally, cause it will be impossible to fully contain one. If they tried to contain it: "surprise, I snuck out on your iPhone 10!"
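(The toy neuron sketch I mentioned: a leaky integrate-and-fire model, about the simplest "neuron simulation" there is and vastly cruder than biological reality. Parameters are illustrative, not physiological:)

```python
# Leaky integrate-and-fire neuron: voltage leaks toward rest, input drives
# it up, and crossing the threshold fires a spike and resets it.
dt = 0.001            # timestep (s)
tau = 0.02            # membrane time constant (s)
v_rest, v_thresh = 0.0, 1.0
R, I = 1.0, 1.5       # input resistance and constant current (arbitrary units)

v = v_rest
spike_times = []

for step in range(1000):              # one simulated second
    dv = (-(v - v_rest) + R * I) / tau
    v += dv * dt
    if v >= v_thresh:                 # fire and reset
        spike_times.append(step * dt)
        v = v_rest

print(f"{len(spike_times)} spikes in 1 simulated second")
```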

@lisalombs
If you're worried about them going Terminator, the probability of this actually happening is incredibly small. I'd be more likely to get killed by a meteor.

@Loquacious
I hate to break it to you but it is possible to simulate emotions and they have done it; it's actually incredibly easy to give a robot love, remorse and such.

If you’re worried about them going Terminator, the probability of this actually happening is incredibly small. I’d be more likely to get killed by a meteor.

It's actually not; if A.I. ever became independent and free-thinking, I don't see why them deciding to become the dominant intelligence on Earth is so unlikely. Hell, look at how many times humans have tried to conquer all other humans; give A.I. the same agency and why wouldn't they try the same shit?

I hate to break it to you but it is possible to simulate emotions and they have done it; it’s actually incredibly easy to give a robot love, remorse and such.

Simulating emotions and giving A.I. real emotions are two completely different things.

@Ryumaru
There's multiple ways of becoming dominant other than killing; if they do become dominant then all they really need to do is take over control of businesses or control the economy. A massive chunk of Wall Street is already controlled by supercomputers, so one can argue that they're already dominant.

The problem with "simulated vs real" is that even humans can't fundamentally prove that what they feel is real. Can you prove that you are conscious other than by using circular logic? When you get down to it, the problem with saying that humans are conscious is that it relies heavily on circular logic, with no actual scientific evidence to back it up. "I'm sapient cause I think I am cause I'm sapient cause I think I am."

Contrary to popular belief, so far there is no evidence to suggest that consciousness is actually a real thing; even mice have shown some signs of self-awareness and such. The difference between humans and mice is our intelligence level. By that logic, if there were a being more intelligent than humans, couldn't it be argued that we humans are not conscious?

When you get down to it, the belief that AI couldn't be sapient relies on a quasi-superstitious belief that consciousness is actually real, which isn't supported by science. There's very little hard scientific evidence to suggest that there is something that makes humans unique from other animals, and the claim relies on a belief that is untestable.

TL:DR of above;
There is no hard evidence to support the idea of consciousness actually existing in the first place, and the belief that there is such a thing borders on religion, because the people who say it's real test to prove it's real rather than actually testing its validity.

The difference between science and religion is that religion goes "I'm going to prove this is real" whereas science goes "I'm going to test to see if it's false and if it holds up then it's true". The idea of consciousness falls under religion because it doesn't hold up to scrutiny.

To further that, let's say it's fifty years from now and we can simulate every neuron in a brain completely; would that still count as "not conscious"?

You should watch "Ghost in the Shell" the original movie; the scene with the Puppet Master is what I'm getting at. Just youtube "Ghost in the Shell Puppet Master scene".

TL:DR of everything; Can you yourself prove that you are conscious then?

Last edited Sep 16, 2015 at 03:07PM EDT

Look what just came out. Not sure if it's a coincidence, or if the same trigger that made OP start this thread also made Big Think make this video, but it does ask the important question of "Who gets punished for the robot's actions?" Thought it relevant.

@cb5 But if humans are getting in the way of, or are a threat to, A.I., and they have no reason to keep us around, why wouldn't they kill us? If an intelligent human is capable of genocide, I don't see why an A.I. isn't.

Consciousness is a subject that is being researched, so it's wrong to say there is no science behind it. Yes, we don't yet have an answer for what exactly consciousness is, but just because we don't have a firm grasp on it does not mean it does not exist at all. Either way, that is not what I am talking about.

What I am talking about is the difference between an A.I. following a very complicated algorithm or whatnot that simulates human emotional responses on a very detailed level, but still follows a writable, predictable, if incredibly complicated, equation, and an A.I. that can freely think, feel emotions, perform actions outside of a math equation, and exhibit Free Will beyond what its programmer has allowed. I feel that once A.I. is more developed and properly understood, it will also shed light on the questions of consciousness itself.

@Ryumaru
The problem with your argument is that you have no evidence to support your claims that they will kill us.

I know that wasn't what you were talking about; the reason why I pointed it out was that you were saying that simulated consciousness isn't the real thing; I was pointing out that we have little evidence to support the idea that consciousness is actually real and therefore the "simulated vs real" debate is bunk. If something doesn't exist then what is the point in arguing about what is real vs simulated in the first place?

@cb5
The problem with your argument is that you are assigning chances with no evidence to support your claim either. I am not saying they will kill us for sure, just saying that there is a good chance, based on our history with sapient life forms.

You are still saying consciousness doesn't exist because we have yet to fully understand it, which is kind of a large leap, isn't it? There is a difference between something we do not understand or have a full definition for, and something we do not even know exists. Having little evidence pinning down what exactly it is =/= having little evidence that it exists at all.

We know consciousness exists because we can view it in other humans, even if we don't fully understand what it is or can't really define it scientifically, the same way we can say gravity exists because we see its influence, even when we don't know what it is or how it really works. There is nothing concrete, sure, but to say there is nothing means that what we observe in other people is not consciousness. If you can say consciousness doesn't exist, then you must know for certain what consciousness is; so, cb5, do you know exactly what consciousness is then? If it's not consciousness, what is this self-awareness and free will that other beings appear to exhibit?

You are still missing the crux of my argument, which is that just because something can duplicate the appearance of a phenomenon does not mean it is the phenomenon itself. Seeing how we don't have a clear understanding of what consciousness is, that is a further argument as to why conscious A.I. will be held back. We first need to understand what makes us conscious, what makes our emotions, how they work and influence our awareness, and what gives us the ability to make independent choices before we can give A.I. the same capabilities as a human.

Tl:dr Saying we can give A.I. consciousness when we don't understand how consciousness works is like saying we can make a working replica of a Dodge Caravan when we don't know how an engine works.

@Ryumaru
The thing is, though, there's only one technologically advanced species on Earth currently; basing a model of all sapience solely upon us is a massive fucking shark jump.

No; I'm saying there's very little evidence to suggest it even exists in the first place, and until proven it's merely a hypothesis.

I'm talking scientifically, not philosophically, here; philosophically you can say that consciousness exists, but scientifically there's little evidence to support it. If we're talking self-awareness, then it's actually really easy to give robots self-awareness; you could probably give your microwave a sense of self. (I'm actually semi-joking here; if we're talking one of those overpriced smart microwaves then yes, but if we're talking a cheap microwave then no.)

The thing is, though, that even though people have been screaming "simulated isn't real!" for the past couple of years, AI development has actually made massive leaps and bounds in that time. The biggest problem in AI development for the longest time was putting human concepts, language, and such into a format that computers can understand. Deep Learning was a massive leap there… Before someone goes "what do computer images have to do with AI": the explanation of Deep Learning that most people know is completely fucking wrong.

Tl:dr; the biggest problem for developing AI for the longest time was putting stuff in a format that computers can understand, but now that hurdle has been cleared.

No; I’m saying there’s very little evidence to suggest it even exists in the first place

Then what do you call the awareness of humans and other life on this planet, the ability to make decisions, feel emotion, and our sense of existence, if it's not consciousness? You are mixing up "evidence to explain consciousness" and "evidence that consciousness exists". It isn't like a new particle we hypothesize but have yet to observe; we observe consciousness every single day, we just don't have a clear understanding of what it is.

I think you might have skimmed my comment, because I never once talked about philosophy, and you didn't even address the main points I made, which are "How can you say there is no evidence of consciousness when the question of what consciousness is hasn't been fully answered?" and "How do we determine if an A.I. has human-level consciousness/agency when we don't fully understand human consciousness/agency?"

When you see a bunch of ants on the side of the street, do you inherently go out and stomp on them? Or do you just…ignore them? Sure, some people will come and stomp on them, but most just ignore the ants and leave them to do their thing.

If a race of super-intelligent robots rose up and saw humans as inferior beings, why go through the effort of destroying something so insignificant when it would just be easier to ignore them altogether?

I have a question. If a robot or computer program can fake being human so realistically that you can't tell it is a robot without being told, is it intelligent? Sentient? Alive?

We don't necessarily need to make real thinking machines, just ones that can fake it well enough that we can't tell if they are machines. Or if they can fake emotions…are they real emotions?

Basilius wrote:

When you see a bunch of ants on the side of the street, do you inherently go out and stomp on them? Or do you just…ignore them? Sure, some people will come and stomp on them, but most just ignore the ants and leave them to do their thing.

If a race of super-intelligent robots rose up and saw humans as inferior beings, why go through the effort of destroying something so insignificant when it would just be easier to ignore them altogether?

I have a question. If a robot or computer program can fake being human so realistically that you can't tell it is a robot without being told, is it intelligent? Sentient? Alive?

We don't necessarily need to make real thinking machines, just ones that can fake it well enough that we can't tell if they are machines. Or if they can fake emotions…are they real emotions?

If the Ants are causing damage, Humans will call an exterminator. Maybe they feel Humans could be a threat to them; maybe A.I. kills all Humans by making the Planet more habitable for themselves, without any regard for Humanity's survival. Who knows?

That's what I've been saying: seeing how we have blurry definitions of Sentience and Living, and don't fully understand consciousness, we can't hope to reproduce it, only make something that resembles it closely.

lisalombs wrote:

Please no, isn't it bad enough that certain governments are giving human rights to chimps?? I don't care if robots advance enough to start marching through the streets comparing themselves to slaves, they are robots. Make sure they come with an off switch.

Looks like we got ourselves a robot racist

Ryumaru Borike wrote:

If the Ants are causing damage, Humans will call an exterminator. Maybe they feel Humans could be a threat to them; maybe A.I. kills all Humans by making the Planet more habitable for themselves, without any regard for Humanity's survival. Who knows?

That's what I've been saying: seeing how we have blurry definitions of Sentience and Living, and don't fully understand consciousness, we can't hope to reproduce it, only make something that resembles it closely.

Robots could theoretically live anywhere. They could just as easily live in satellites or in computer networks. An AI could just skip around like a virus, not harming anything in the PC, and you would have no idea it is living there.

Basilius wrote:

Robots could theoretically live anywhere. They could just as easily live in satellites or in computer networks. An AI could just skip around like a virus, not harming anything in the PC, and you would have no idea it is living there.

They would only be able to inhabit compatible hardware powerful enough to contain them. They would also take power away from whatever they latched onto, so they would be harming it and giving people a reason to get rid of them. Also, why would it be content stuck in computer networks? People aren't talking about A.I. in computers, but in companion bots, with bodies that can interact with the world. What would they do if they felt threatened and were not under direct human control?

Ryumaru Borike wrote:

They would only be able to inhabit compatible hardware powerful enough to contain them. They would also take power away from whatever they latched onto, so they would be harming it and giving people a reason to get rid of them. Also, why would it be content stuck in computer networks? People aren't talking about A.I. in computers, but in companion bots, with bodies that can interact with the world. What would they do if they felt threatened and were not under direct human control?

They could upload themselves to a network such as the internet.

They could spread the workload they require to run out between multiple computers. This is actually pretty easy, and some viruses do use this kind of thing nowadays. Get a few PCs to donate a little bit of their RAM and you can use the combined processing power to exist in a cloud-like state.

Spread yourself out enough (pretty easy for a self-updating program) and suddenly you are non-existent, taking up so little from so many computers that you are virtually invisible.

There really isn't that big of a difference between an AI in a PC and an AI with a body. I mean, technically any Wi-Fi-capable robot toy could be used as a body for an AI, with the AI just extending a small amount of itself to control the robot.
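(In spirit it's something like this sketch, with local processes standing in for separate machines; each "machine" computes a slice and the results get combined. Hypothetical and heavily simplified, obviously:)

```python
# Splitting one computation across several workers.
from multiprocessing import Pool

def work(chunk):
    # Stand-in for a slice of whatever the distributed program computes.
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    chunks = [data[i::4] for i in range(4)]   # deal the data to 4 "machines"
    with Pool(4) as pool:
        partials = pool.map(work, chunks)
    print(sum(partials))                      # combined result
```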

Basilius wrote:

They could upload themselves to a network such as the internet.

They could spread the workload they require to run out between multiple computers. This is actually pretty easy, and some viruses do use this kind of thing nowadays. Get a few PCs to donate a little bit of their RAM and you can use the combined processing power to exist in a cloud-like state.

Spread yourself out enough (pretty easy for a self-updating program) and suddenly you are non-existent, taking up so little from so many computers that you are virtually invisible.

There really isn't that big of a difference between an AI in a PC and an AI with a body. I mean, technically any Wi-Fi-capable robot toy could be used as a body for an AI, with the AI just extending a small amount of itself to control the robot.

The difference being one has the ability to interact with the world and defend itself in reality; the other is stuck in a network, unable to do anything other than think, and is fucked if humans disable the internet in its area and proceed to come in with magnets. It's like saying there is little difference between a human and a brain in a jar.

@Ryumaru
The thing is, though: which would you feel more threatened by, a sapient companion bot that looks human or a computer network? The reason people are so interested in developing AI for androids and such is that it would receive significantly less open hostility if it looked human.

Ryumaru Borike wrote:

The difference being one has the ability to interact with the world and defend itself in reality; the other is stuck in a network, unable to do anything other than think, and is fucked if humans disable the internet in its area and proceed to come in with magnets. It's like saying there is little difference between a human and a brain in a jar.

Yes, an AI could exist in all networks at once; the only way to kill it would be to unplug almost every electrical device to purge it. It knows we would never do that, and thus it is not under any threat from humanity.

Now, giving it a body would be different. The AI wouldn't even need to inhabit the body completely; it could just give it commands from the cloud. Destroying its body would do nothing; it could just create another and keep going.

If it did seal itself entirely in its body, then it would have some sense of self-preservation, since it could die.

I suppose how the AI will treat us will depend on how it looks at us. If it sees us as children in comparison to itself, it may wish to help us and protect us. If it sees us as insects, it may just ignore us or kill us, and if it sees us as bacteria, it would just ignore us altogether.

How the AI is created and what it sees us as will determine how it treats us. I think the real question isn't whether an AI has rights; it is whether it will give us any rights.

@cb5 It's not about "feeling threatened"; it's the possible consequences of giving free will to a vastly superior intelligence and hoping it decides to be kind that's the problem.

@Basilius "In all networks powerful enough to hold it, you mean?" Even then, a computer virus could kill it if it's uploaded to all networks.

Isn't the idea of A.I. taking over and controlling us instead just as worrying as them killing all of us?
