Relevant I'd think:

[YouTube video: Driverless Cars, Good Idea, Horrible Idea?]

And on that note... I'm 100% for it. I just want it to be done right and feel safe about it, which is asking a LOT.
Siren.Kyte:

Rooks said: » Hi, random programmer here. No, we should not have driverless cars. If you feel otherwise, then you have a much rosier picture of software and the people who make it than I do.

Hi, random human here. If you feel otherwise, you have a much rosier picture of humans than I do.

Clinpachi said: » I'm 100% for it. I just want it to be done right and feel safe about it, which is asking a LOT.

Rooks said: » Hi, random programmer here. No, we should not have driverless cars. If you feel otherwise, then you have a much rosier picture of software and the people who make it than I do.

Seconding this. When you've been in the industry long enough, you see some ***.

On a serious, ethical note:
The problem is rather easy in a toy scenario where all cars are driverless: everyone acts consistently and could potentially follow the same set of rules. But in reality, if such a technology were to become legal, we'd see both human-driven and driverless cars on the streets. As you might imagine, this is more complicated, because the agents need to reason about both human and non-human actors in the world. Not an insurmountable problem, but not a trivial one either. I am not aware of how the car developers are addressing it.

However, this is the real problem these companies are facing: in a real-life scenario, you, as a driver, might have to solve moral dilemmas. In an extreme case, you might have to decide between saving your own life and saving a pedestrian's life. In the real world, that decision is on you, the driver. With driverless cars, Google and Tesla programmers now have to decide whether the passengers or the pedestrians should be saved in these situations. You now add the politics of driver morality to the actual production and deployment of these cars! It's extremely messy, since these decisions are no longer made ad hoc, but are programmed directly into the logic of the vehicles themselves (or perhaps into some appendage software). This is a real problem facing the industry (speaking as somebody actually in AI research).
Altimaomega:
Asura.Floppyseconds said: » These things are not issues.

Yeah.. these things are issues. Soon as a comp car kills someone, that someone's family is going to be suing the person who owned it, the person who designed the software, and the company that sold it. I'm sure the person who owned the car signed something that says they would take all responsibility. I and many millions of other people have not. The legality alone will keep these cars off the roads.

Asura.Floppyseconds said: » Cars already autobrake, and have for years.
Altimaomega:
Phoenix.Dabackpack said: » In a real-life scenario, you, as a driver, might have to solve moral dilemmas. In an extreme case, you might have to decide between saving your own life and saving a pedestrian's life. In the real world, that decision is on you, the driver.

This was already covered on page one.

Asura.Floppyseconds said: » The rest is silly to debate about as there is not going to be a school bus of kids and a cliff and such an accident all happening. That is just a ridiculous notion.

I dunno if Floppy lives in the real world however..

Phoenix.Dabackpack:

Altimaomega said: » This was already covered on page one. [...] I dunno if Floppy lives in the real world however..

I know, I just wanted to state that this literally is not a solved problem in the industry. And @Floppy, it doesn't matter "if it will never happen", because the agent needs to be able to handle those situations if they do occur. This isn't a trivial point. Autonomous car producers literally need to consult and decide how to prioritize safety between passengers, pedestrians, and other cars. I'm not bullshitting you; this is real life, not theorycrafting.

All of the above is for a truly autonomous vehicle, though.

EDIT: Reiterating that the above is for pure automation. If the driver still has agency over the car (regarding actual driving and split-second decision making), then the above points are less important. However, manufacturers are attempting pure automation in the future. (And by "pure automation", I mean having all humans in the car either serving as passengers or giving directions, not actually affecting the motion of the automobile.)

Asura.Floppyseconds said: » Yeah, okay. Is that the same real world that envisions a cartoon scenario of a bus of children, a car, a cliff? That the only two options are to either die off a cliff or hit the bus? How *** realistic is that?

This particular scenario is cartoonish, but there are very real scenarios that are similar: if you're driving and a deer jumps in front of you, what do you do? Common knowledge would tell you to hit the deer, even if it kills it. But what if it's a toddler? (Both real things that have happened to me.)

To be honest with you, I feel like the videos for both Google and Tesla are hiding one simple fact: somebody was out of view of the camera with their hand on a kill switch or safety mechanism. Even more likely, the driver was probably instructed to watch carefully and be ready to intervene at any moment. We're just being shown the good parts and strides.
Marketing and brand recognition are playing key roles to interest you in this and make you think great things about their company. Also, the legal issues Dabackpack outlined are on point too.

Phoenix.Dabackpack:

Asura.Floppyseconds said: » The toddler, much like Bambi, would be dead without the system in place. So it is not like we need to blame the system for being faced with an event that can't be stopped. It doesn't matter if kittens, babies, or hail falls in front of your car as you are driving. With or without a system, if you can't stop then you can't stop and nothing can be done. Therefore I do not believe it is a factor in the way it is being presented.

I should have specified an alternative. Usually, it's "you either hit the deer, or you swerve the car and endanger yourself." In a purely autonomous situation, the cognition is redirected from human to machine. So yes, the manufacturer is now directly involved in the moral dilemma, a priori.

Programmatically, it's not difficult to implement these decisions. But the auto producers need to say something like "Okay, we will always prioritize the driver over any pedestrians in this situation." That can be a huge legal liability, and I think it's a discussion we as a society need to have.
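To make that concrete, here's a minimal sketch of what "deciding who to prioritize" looks like once somebody actually writes it down. Everything here is hypothetical (the categories, the weights, the function names); it's a toy illustration, not any real manufacturer's logic. The point is that the moral choice becomes a literal, auditable constant in the codebase:

    # Toy sketch only -- a hypothetical policy table, not any real vendor's logic.
    from dataclasses import dataclass

    # Hard-coded priority weights: this dict IS the ethical policy.
    # Whoever writes these numbers has made the moral call in advance.
    PRIORITY = {
        "occupant": 1.0,    # people inside the vehicle
        "pedestrian": 1.0,  # people outside the vehicle
        "animal": 0.1,
        "property": 0.01,
    }

    @dataclass
    class Outcome:
        maneuver: str     # e.g. "brake_and_hit", "swerve_off_road"
        endangered: dict  # category -> expected count harmed

    def expected_harm(outcome: Outcome) -> float:
        """Weighted harm score under the fixed priority table."""
        return sum(PRIORITY[cat] * n for cat, n in outcome.endangered.items())

    def choose(outcomes: list[Outcome]) -> Outcome:
        """Pick the maneuver with the lowest weighted harm."""
        return min(outcomes, key=expected_harm)

    # The deer/toddler swerve dilemma, reduced to two options:
    options = [
        Outcome("brake_and_hit", {"pedestrian": 1}),
        Outcome("swerve_off_road", {"occupant": 1}),
    ]
    print(choose(options).maneuver)  # the answer hinges entirely on PRIORITY

Bump "occupant" to 1.1 and you've just encoded "always prioritize the driver over any pedestrians". That one-line diff is exactly the kind of thing that ends up as an exhibit in a courtroom.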
Asura.Floppyseconds:

Phoenix.Dabackpack said: » In a purely autonomous situation, the cognition is redirected from human to machine. So yes, the manufacturer is now directly involved in the moral dilemma, a priori.

Doesn't matter, because the human is supervising the car. They are still "driving". So it is not the fault of the manufacturer until there is no longer human control over the vehicle. This includes the ability to intervene. As I said, "driving".

As for the swerve: considering the ability of sensors, in an improved system I would expect it to be much better than a human reacting to the situation. First off, it can calculate the actions MUCH faster than a human, and then it can react MUCH faster. All that is left would be improving the system and sensors. If there is room, swerve, and if there is not, then you don't swerve.

Phoenix.Dabackpack:

That's why I specified pure autonomy. And I am absolutely certain that is a prospect Google or Tesla will work towards, so this is a problem they will need to face. If the driver is still 100% engaged, not listening to music or reading a book, and is still able to take total control at any time, then there isn't a problem (yet).

It doesn't matter if this toy example is avoided with stronger sensors. This is a decision-making problem, not a technological one. Any decision-making system needs to be able to make decisions over any domain of perceptual data. It might decide "well, I'll just do nothing if a child runs across", but programmers absolutely need to make those judgments in order to give defined behavior in every scenario.
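Concretely, "defined behavior in every scenario" looks something like the toy sketch below. The obstacle categories and action names are hypothetical, not from any real perception stack; the thing to notice is that the fallback branch is a deliberate decision someone had to sign off on:

    # Toy sketch only -- hypothetical categories and actions, nothing vendor-specific.
    from enum import Enum, auto

    class Obstacle(Enum):
        VEHICLE = auto()
        PEDESTRIAN = auto()
        LARGE_ANIMAL = auto()
        DEBRIS = auto()
        UNKNOWN = auto()  # the perception system WILL produce this class

    def respond(obstacle: Obstacle, can_stop_in_time: bool) -> str:
        """Return a maneuver for every possible classification.

        No input may fall through undefined: somebody decided, in advance,
        what the car does when it doesn't know what it's looking at.
        """
        if can_stop_in_time:
            return "brake_to_stop"
        if obstacle in (Obstacle.PEDESTRIAN, Obstacle.LARGE_ANIMAL):
            return "brake_and_swerve_if_clear"
        if obstacle is Obstacle.VEHICLE:
            return "brake_and_hold_lane"
        # DEBRIS, UNKNOWN, and anything added later: a chosen default, not a crash.
        return "brake_and_hold_lane"

    # Sanity check: every classification maps to a reviewed action.
    VALID_ACTIONS = {"brake_to_stop", "brake_and_swerve_if_clear", "brake_and_hold_lane"}
    for obs in Obstacle:
        assert respond(obs, can_stop_in_time=False) in VALID_ACTIONS

The happy path is never the interesting part; the interesting part is who decides what UNKNOWN maps to.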
Altimaomega:
Asura.Floppyseconds said: » I really don't care about sue happy Americans. There is a person behind the wheel because the system is not perfect. It is the driver's fault and not the automaker.

Don't matter if you care or not, in real life this *** happens. We are talking about automated cars here; nobody is behind the wheel. Please try and keep up.

Asura.Floppyseconds said: » Cars themselves without the system can't even handle unplowed or icy roads.

So what is your point?

Asura.Floppyseconds said: » Is that the same real world that envisions a cartoon scenario of a bus of children, a car, a cliff? That the only two options are to either die off a cliff or hit the bus? How *** realistic is that?

Asura.Floppyseconds said: » The toddler, much like Bambi, would be dead without the system in place. So it is not like we need to blame the system for being faced with an event that can't be stopped. It doesn't matter if kittens, babies, or hail falls in front of your car as you are driving. With or without a system, if you can't stop then you can't stop and nothing can be done. Therefore I do not believe it is a factor in the way it is being presented.

Asura.Floppyseconds said: » Doesn't matter because the human is supervising the car. They are still "driving". So it is not the fault of the manufacturer until there is no longer human control over the vehicle. This includes the ability to intervene. As I said, "driving".

Why have an automated car that you have to "drive"?

Phoenix.Dabackpack:

I really do see the utility of having "semi-driverless" cars, in which cognitive load is taken off humans and put onto machines, but I don't think that's the topic of discussion at the moment.
Asura.Floppyseconds said: » In a completely autonomous situation? As of now they are programmed to stop. If the calculations come out that it is not possible to stop, I imagine it could be programmed in the same breath to examine the options around the car for a distance to swerve. This is already better than leaving it up to a human in this semi-hypothetical situation. If it can't, then it just brakes while it hits it. Cars already autobrake, and they react to these situations faster than humans do, so hit-for-hit it will be better.

(Resetting the quote train.) Yes, I think that is the proper course of action in a low-level assessment of the environment (low-level meaning low levels of abstraction); there's a toy sketch of this stop-or-swerve check a little further down. For a purely autonomous situation, I think the automobiles just need to behave predictably almost all the time, at least to start. But in the quest of "making things better and safer" (which is always on our minds as scientists), developers will want to have the agents think at a higher level of awareness: "well, if I'm about to hit a kid on the highway but there's a car behind me, what should I do?" instead of "well, there's a kid in front of me, what should I do?", for example. And eventually at higher levels of abstraction: "do I save the passenger or two pedestrians?"

For purely autonomous vehicles, the developers are in a pinch because they will need to make decisions about subjective analyses regarding human safety. It might be really easy: "well, we'll always stop in scenario X". But that will reduce the appeal of the vehicles and will reduce the likelihood of autonomous vehicles being accepted legally and socially. It's a difficult situation.

Kalila:

I think I'd rather have pilotless flying transport vehicles / aircraft than driverless cars. It just seems to me that there is less room for error up in the air, where you navigate in three dimensions, than on the ground, where you really only navigate in a two-dimensional plane but with incline/decline slopes.
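Here's that toy stop-or-swerve sketch. The braking-distance formula is standard physics (reaction travel plus v^2 / (2*mu*g)); the latency, friction value, and function names are my own illustrative assumptions, not real vehicle parameters:

    # Toy sketch of the low-level stop-or-swerve check discussed above.
    G = 9.81  # m/s^2, gravitational acceleration

    def stopping_distance(speed_mps: float, reaction_s: float, friction: float) -> float:
        """Reaction-time travel plus braking distance: v*t + v^2 / (2*mu*g)."""
        return speed_mps * reaction_s + speed_mps ** 2 / (2 * friction * G)

    def plan(obstacle_m: float, speed_mps: float,
             gap_left: bool, gap_right: bool,
             reaction_s: float = 0.1,    # assumed machine latency; humans need ~1.5 s
             friction: float = 0.7) -> str:  # assumed dry asphalt
        """Brake if we can stop in time; otherwise swerve only if a gap exists."""
        if stopping_distance(speed_mps, reaction_s, friction) <= obstacle_m:
            return "brake_to_stop"
        if gap_left:
            return "brake_and_swerve_left"
        if gap_right:
            return "brake_and_swerve_right"
        return "full_brake_unavoidable_impact"  # "just brakes while it hits it"

    # At 100 km/h (~27.8 m/s) the car needs ~59 m to stop with 0.1 s latency,
    # versus ~98 m with a 1.5 s human reaction time. Obstacle at 40 m:
    print(plan(obstacle_m=40.0, speed_mps=27.8, gap_left=False, gap_right=True))
    # -> brake_and_swerve_right

With a 1.5 s human reaction time, the same inputs force the swerve (or the impact) far more often, which is the "reacts faster than a human" point in a nutshell.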
If there was a driverless car, and it detected a car with a real driver slowing down in front of it, it would detect that and slow down as well, most likely at a faster reaction rate than a human could (a rough sketch below puts numbers on this), >>>IF<<< there wasn't anything wrong with the detection software. There will most likely be cases where the software/hardware doesn't detect anything and the car just ends up running into something.

However, in a perfect software world, let's say there isn't anything that could go wrong with the detection software/hardware. Then what about other drivers? We don't have patience, at all. Some of us do, or try to practice it, but when it comes to drivers on the road, we are very impatient. A driverless vehicle in front of us would most likely be going the EXACT speed limit. People would rage, hard. They would constantly be passing it, because on the highway it's very rare that people are going the exact speed limit, and when someone is, everyone is passing them.

Also, what about crash detection: an oncoming driver about to hit the driverless vehicle. Is the software going to come to an automatic stop as soon as it detects a threat of collision? That might not be the best thing; it might get false threats and make random stops on the highway. That would be horrible. Also, what if it does get hit, what does it do then? Does it go into hyper-detection mode and try to come to a safe stop while avoiding other drivers in other lanes?

To me, it would seem safest if we had roads meant only for driverless vehicles, and once you're about to get onto a driver-only road, the car warns you to take the controls while it's coasting down the off-ramp or whatever. Once the driver responds and the car recognizes that they have full control over the car, it stops controlling it completely. But what if the driver doesn't respond? What if they ended up falling asleep? Not the craziest thing that could happen; it would probably be quite common, honestly. Do they just stay on the ramp, go back up an on-ramp onto the driverless road, and circle? Does it pull over to a stop? A rest stop? There would need to be solutions to problems like this. Though it could have a backup system where it keeps its detection software/hardware running to assist the driver and prevent accidents, which I think luxury cars have been doing for a while now. I could be wrong, but I think I've seen commercials advertising stuff like that for Mercedes-Benz or w/e.
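To put rough numbers on the "react faster than a human" point above, here's a minimal time-to-collision sketch. The two-second thresholds and all names are my own assumptions for illustration, not anything from a shipping system:

    # Rough sketch of the "detect the car ahead slowing and react" loop.
    def time_to_collision(gap_m: float, closing_mps: float) -> float:
        """Seconds until impact if neither car changes speed; inf if the gap is opening."""
        return gap_m / closing_mps if closing_mps > 0 else float("inf")

    def follow_control(gap_m: float, own_mps: float, lead_mps: float) -> str:
        """Re-run every sensor frame (tens of ms); a human adds ~1-1.5 s on top."""
        ttc = time_to_collision(gap_m, own_mps - lead_mps)
        headway_s = gap_m / own_mps if own_mps > 0 else float("inf")
        if ttc < 2.0:
            return "hard_brake"
        if ttc < 4.0 or headway_s < 2.0:  # classic two-second following rule
            return "ease_off_and_match_speed"
        return "maintain_speed"

    # Car ahead, 30 m away, has slowed to 20 m/s while we're doing 27.8 m/s:
    print(follow_control(gap_m=30.0, own_mps=27.8, lead_mps=20.0))
    # -> ease_off_and_match_speed

The machine's advantage isn't cleverness, it's that this check runs dozens of times a second with no attention lapses; the false-threat problem raised above is exactly about how those thresholds get tuned.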
Phoenix.Dabackpack:

I think the problem is that it's "not enough" to say "well, we're better than human drivers!" Crashes will still happen, obviously, and even in these scenarios the cars will likely still be better than human drivers. But part of the process of reaching that "final result" is making decisions on the road. That's a fundamental part of driving. Safe drivers make decisions, and the developers understand that the decisions they make in programming the agents are really important.

EDIT: Driving is one part "knowing the rules of the road" and one part "knowing how to make safe driving decisions and how to handle unique scenarios", or something like that. The driverless cars are great at the former, but special attention needs to be given to the subjective latter portion. The subjective stuff is always the stuff we have problems with.

Kalila said: » I think I'd rather have pilotless flying transport vehicles / aircraft than driverless cars. It just seems to me that there is less room for error up in the air, where you navigate in three dimensions, than on the ground, where you really only navigate in a two-dimensional plane but with incline/decline slopes.

This is also a very controversial prospect in the AI community! Well, for reasons regarding airspace laws and the like. Any drone that flies at a certain altitude is subject to aviation law: you need a permit or some kind of allowance to fly at certain altitudes. I don't know what the specifics are, but someone at NIST said something along the lines of "Amazon's drone service is not going to happen on a grand scale because it's a huge safety liability, regarding both airplanes and national security."
Altimaomega:
Phoenix.Dabackpack said: » I think the problem is that it's "not enough" to say "well, we're better than human drivers!" [...] Safe drivers make decisions, and the developers understand that the decisions they make in programming the agents are really important.

Now that Floppy has caught back up, that brings us back to this:

Altimaomega said: » Soon as a comp car kills someone, that someone's family is going to be suing the person who owned it, the person who designed the software, and the company that sold it. I'm sure the person who owned the car signed something that says they would take all responsibility. I and many millions of other people have not. The legality alone will keep these cars off the roads.