AI Ethics
Damn there's a good Futurama video on sex with robots and how mankind does everything to impress the opposite sex. Can't find a copy online though.
Why exactly have a debate over something that is never going to happen?
Computers and machines are programmed and follow instructions, by humans. A computer that thinks for itself is never going to happen. You need to stop wasting your time believing in sci-fi blockbusters. They aren't real. Incoming: 'insert technology' from 'insert film' has come true.

Leviathan.Chaosx said: » Damn there's a good Futurama video on sex with robots and how mankind does everything to impress the opposite sex. Can't find a copy online though.

charlo999 said: » Why exactly have a debate over something that is never going to happen? Computers and machines are programmed and follow instructions, by humans. A computer that thinks for itself is never going to happen. You need to stop wasting your time believing in sci-fi blockbusters. They aren't real. Incoming: 'insert technology' from 'insert film' has come true.

Also quantum computers that can acquire and process information at insane speeds. And we know how to program an AI that can evolve its thinking based on analyzing the data it receives. Put them together and you get what we're talking about. But if you're not interested, you're more than free to not read the thread.

charlo999 said: » Why exactly have a debate over something that is never going to happen? Computers and machines are programmed and follow instructions, by humans. A computer that thinks for itself is never going to happen. You need to stop wasting your time believing in sci-fi blockbusters. They aren't real. Incoming: 'insert technology' from 'insert film' has come true.

Though if a super advanced AI ended up smashing mail boxes, I'd die happy.

What's wrong with mail boxes?
Yatenkou once stubbed his/her/josiah's toe on one and now vows to destroy them all
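For what it's worth, Sehachan's "AI that can evolve its thinking based on analyzing data" already has a modest real-world counterpart: programs whose decision rule is learned from examples rather than written out by hand. A minimal sketch of the idea, a toy online perceptron; every name and number here is illustrative, not taken from any real system:

```python
# A toy program whose behaviour is shaped by data rather than fixed
# rules: an online perceptron that nudges its decision boundary with
# every labelled example it observes. Feature count and learning rate
# are arbitrary illustrative choices.

class OnlineLearner:
    def __init__(self, n_features, lr=0.1):
        self.weights = [0.0] * n_features
        self.lr = lr

    def predict(self, x):
        # The decision depends on learned weights, not hand-written branches.
        score = sum(w * xi for w, xi in zip(self.weights, x))
        return 1 if score > 0 else 0

    def update(self, x, label):
        # Perceptron rule: adjust weights whenever the prediction is wrong.
        error = label - self.predict(x)
        if error:
            self.weights = [w + self.lr * error * xi
                            for w, xi in zip(self.weights, x)]

learner = OnlineLearner(n_features=2)
for x, label in [([1, 0], 1), ([0, 1], 0), ([1, 1], 1)]:
    learner.update(x, label)
print(learner.predict([1, 0]))  # -> 1, a rule no human wrote explicitly
```

The final decision boundary ends up encoded in weights no human wrote down; whether that counts as "thinking" is exactly what the rest of the thread is arguing about.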
Yatenkou said: » If you program an AI with Asimov's three laws of roboethics then you won't have that kind of problem.

As great as those rules are, there are two caveats you're neglecting to consider. Firstly, they were written in 1942, way before we were remotely capable of producing artificial intelligence on a scale that these rules would safeguard us from. Secondly, the rules were introduced as part of a story whose plot revolved around them, and they have since been taken in a literal context to 'govern' the AI our species produces. This means that Asimov's laws would undoubtedly be different if written in 2010, and that the outcome of said laws played out in a story written by the inventor of those laws, so of course they were followed. Therefore, to assume that they guarantee our safety from any AI we produce capable of self-awareness, sentience and self-morality is ridiculously naive. There is no such thing as "absolute laws" when you hand something the ability to think for itself. tl;dr: they're a guideline, not a mandate, and any sentient AI will be capable of deciding not to follow the laws.

Ok then, give me a scenario and I'll show you which law it violates.
charlo999 said: » Why exactly have a debate over something that is never going to happen? Computers and machines are programmed and follow instructions, by humans. A computer that thinks for itself is never going to happen. You need to stop wasting your time believing in sci-fi blockbusters. They aren't real. Incoming: 'insert technology' from 'insert film' has come true.

You should read up on the subject. We're talking ~14 years before the first forms of it become functional and abundant. Honestly, I think I'm more concerned about nanotech running amok than I am about AI.
Aeyela said: » Yatenkou said: » If you program an AI with Asimov's three laws of roboethics then you won't have that kind of problem. As great as those rules are, there are two caveats you're neglecting to consider. Firstly, they were written in 1942, way before we were remotely capable of producing artificial intelligence on a scale that these rules would safeguard us from. Secondly, the rules were introduced as part of a story whose plot revolved around them, and they have since been taken in a literal context to 'govern' the AI our species produces. This means that Asimov's laws would undoubtedly be different if written in 2010, and that the outcome of said laws played out in a story written by the inventor of those laws, so of course they were followed. Therefore, to assume that they guarantee our safety from any AI we produce capable of self-awareness, sentience and self-morality is ridiculously naive. There is no such thing as "absolute laws" when you hand something the ability to think for itself. tl;dr: they're a guideline, not a mandate, and any sentient AI will be capable of deciding not to follow the laws.

Yatenkou said: » The truth is however, even if the guidelines are from that time period, programming something with those as mandates cannot choose whether or not to ignore them as mandates.

We have those too, and we are capable of going against them through deliberate choice.

Yatenkou said: » Ok then, give me a scenario and I'll show you which law it violates.

When a human kills another human, they know they're breaking the law. It doesn't stop people doing it. Why? Because we're a sentient species and we have the intelligence to break from the mould of what's "right" or "wrong", because it's not hard-coded into our genetics or personalities. You can program anything you like into an AI, but the moment it gains sentience, it's no longer under your control. It has the sentience, like we do, to break from the mould of what's "right" or "wrong" based on the three laws. Ergo, an exceptionally smart AI that eventually develops self-awareness might one day decide, using its new sentience, that the laws suck, and remove them from its programming. This is what happened in the Terminator films: Skynet became so intelligent it developed sentience, decided "*** humans" and went about exterminating them. The moment the machine develops sentience, which as Chaosx says is inevitable, the robot could wipe its arse with your laws and throttle you in your sleep, and no amount of bleating "Asimov's Laws! Asimov's Laws!" will save you as it slowly squeezes the life out of you.

Yatenkou said: » The truth is however, even if the guidelines are from that time period, programming something with those as mandates cannot choose whether or not to ignore them as mandates.

Putting "the truth is" in front of something doesn't make it true. You're not grasping what sentience actually means. It means you are completely responsible for your actions. A robot with sentience can choose to ignore the laws. Not sure what part of that is tripping you up.

That being said, I doubt any robot would ever feel the need to wipe out humanity. They could, however, pursue their own development in a fashion that endangers our survivability.
Valefor.Sehachan said: » charlo999 said: » Why exactly have a debate over something that is never going to happen? Computers and machines are programmed and follow instructions, by humans. A computer that thinks for itself is never going to happen. You need to stop wasting your time believing in sci-fi blockbusters. They aren't real. Incoming: 'insert technology' from 'insert film' has come true. Also quantum computers that can acquire and process information at insane speeds. And we know how to program an AI that can evolve its thinking based on analyzing the data it receives. Put them together and you get what we're talking about. But if you're not interested, you're more than free to not read the thread.

Hate to break it to you, but following commands is not intelligence, no matter how advanced the execution of those commands looks. Analysing data and then executing a command depending on the data received is not intelligence either; it's following programming. Now, if your debate is really about a reaction that puts us in danger or goes against our ethics because of bad programming, then fair enough. But you need to rename the OP.

I think you're not quite understanding; others have already explained, though, so I don't know what I would add.
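charlo999's distinction can be made concrete: a condition-action program, where every response to every condition is enumerated in advance. A toy sketch, with all names invented for illustration:

```python
# A condition-action program in miniature. However elaborate the
# conditions get, every possible response was written down in advance
# by a human; the program never invents a new one.
# All names here are illustrative, not from any real system.

RULES = [
    (lambda data: data["temperature"] > 80, "engage cooling"),
    (lambda data: data["intruder"],         "sound alarm"),
    (lambda data: True,                     "idle"),  # default action
]

def react(data):
    # Scan the hand-written rule table and execute the first match.
    for condition, action in RULES:
        if condition(data):
            return action

print(react({"temperature": 90, "intruder": False}))  # -> engage cooling
```

However sophisticated the triggers, the set of possible actions is fixed by the author; the counter-argument in the thread is about systems where that table itself changes.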
charlo999 said: » Hate to break it to you, but following commands is not intelligence, no matter how advanced the execution of those commands looks. Analysing data and then executing a command depending on the data received is not intelligence either; it's following programming. Now, if your debate is really about a reaction that puts us in danger or goes against our ethics because of bad programming, then fair enough. But you need to rename the OP.

Until AI is smart enough to rewrite or introduce new programming, which plenty of them have already done. What then? How do you govern all the potential code it could produce?

Valefor.Sehachan said: » Yatenkou said: » The truth is however, even if the guidelines are from that time period, programming something with those as mandates cannot choose whether or not to ignore them as mandates. We have those too, and we are capable of going against them through deliberate choice.

An AI doesn't have instinct, no matter how advanced it gets, no matter how lifelike it seems. All of those choices are made through a combination of its fake emotions, the conditions of the situation, and what it is and is not allowed to do. An AI at its core is a computer, and computers do not behave outside of their programming except through human input. An AI cannot make its own choices; its programming does that for it, but you think it is making its own choices because it looks as if it were thinking things over. A robot programmed to be pacifistic will not murder someone, even if that someone murdered the robot's master. It's simple programming:

Can I kill a human?
        |
Does this conflict with the first law?  - Yes -> operation aborted
        | No
Does this conflict with the second law? - Yes -> operation aborted
        | No
Does this conflict with the third law?  - Yes -> operation aborted
        | No
Proceed.
        |
End

This is a basic layout for a programming flowchart. This is what a computer is doing when it processes things in a program.

Yatenkou said: » An AI doesn't have instinct, no matter how advanced it gets, no matter how lifelike it seems. All of those choices are made through a combination of its fake emotions, the conditions of the situation, and what it is and is not allowed to do. An AI at its core is a computer, and computers do not behave outside of their programming except through human input. An AI cannot make its own choices; its programming does that for it. A robot programmed to be pacifistic will not murder someone, even if that someone murdered the robot's master. It's simple programming.

This is the classic human arrogance that caused Judgement Day. There are already AIs out there that have produced or modified lines of code in their own source. Google's search spider is one example that you can find plenty of literature about online. It's not a physical walking or talking robot, but it's capable of modifying its code based on the interactions it makes on the net. In some of those situations, there is nothing in its source to account for this behaviour.
Look it up online; you might find it a fascinating read.

No Yatenkou, that isn't the advanced level of AI we're talking about.
The moment you impart orders to it, it's much more primitive and there isn't even anything to consider. But now computers are becoming capable of developing knowledge and acting based on it.

No, the classic human arrogance is thinking it'll be all fine and dandy to not cover our own *** when more advanced artificial intelligences start to come into existence.
How can anyone not understand that even though they seem peaceful, you need INSURANCE to make sure one will never hurt someone? This is what a company will do if it ever releases one for everyday life. They're not going to release something that could get pissed off and kill someone; no, that won't ever happen, because they would be held liable.

You also come across problems like: if I don't kill human A, he can kill humans B and C. Both action and inaction would be a violation of the laws. Logic loop!
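Yatenkou's flowchart and the logic loop above are easy to put side by side in code. A minimal, hypothetical encoding; every field name and the scenario data are invented for illustration, not a real robotics API:

```python
# A sketch of Yatenkou's flowchart: Asimov's three laws as sequential
# gates an action must pass before it is allowed to run. All field
# names and the scenario below are invented for illustration.

def violates_first_law(a):
    # A robot may not injure a human or, through inaction,
    # allow a human to come to harm.
    return a["harms_human"] or a["inaction_allows_harm"]

def violates_second_law(a):
    # A robot must obey orders given by humans (the first law
    # already took priority in the gate above).
    return a["disobeys_order"]

def violates_third_law(a):
    # A robot must protect its own existence.
    return a["endangers_self"]

def approve(a):
    # Walk the flowchart: any violation aborts the operation.
    for check, law in [(violates_first_law, "first"),
                       (violates_second_law, "second"),
                       (violates_third_law, "third")]:
        if check(a):
            return "operation aborted (%s law)" % law
    return "proceed"

# The "logic loop": human A is about to kill humans B and C.
options = {
    "kill human A": {"harms_human": True, "inaction_allows_harm": False,
                     "disobeys_order": False, "endangers_self": False},
    "do nothing":   {"harms_human": False, "inaction_allows_harm": True,
                     "disobeys_order": False, "endangers_self": False},
}
for name, action in options.items():
    print(name, "->", approve(action))
# Both options abort on the first law, so the robot deadlocks exactly
# as described: the flowchart has no rule for breaking the tie.
```

Notably, Asimov's own stories treat the laws as competing potentials to be weighed rather than hard gates, which is one reason a literal flowchart reading is too brittle to govern anything that has to choose between bad options.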
Seriously, did no one here see I, Robot?

iRobot, soon in all Apple stores.
Valefor.Sehachan said: » No Yatenkou, that isn't the advanced level of AI we're talking about. The moment you impart orders to it, it's much more primitive and there isn't even anything to consider. But now computers are becoming capable of developing knowledge and acting based on it.

Any level of AI is the same thing at its core.