INTRODUCTION
Autonomous vehicles (AVs), also known as self-driving cars, are attracting attention around the world as a way to advance smart mobility and sustainable cities (Lim and Taeihagh, 2019). However, when something goes wrong on the road, an AV may have to choose between two maneuvers: save the passengers inside the car while putting pedestrians at risk, or save the pedestrians on the road and put its own passengers at risk. How an AV makes this decision depends on how it is programmed, in other words, on what ethical choice its software tells it to make (Ackerman, 2020). AVs would already be on the market if there were clear ethical rules to follow when confronted with such situations (Ackerman, 2020). Nevertheless, there is an endless number of possible ethical problems, and the most ethical course of action varies from person to person (Ackerman, 2020). Defining the algorithms that will help AVs make these moral decisions therefore remains a tough challenge (Bonnefon, Shariff and Rahwan, 2016).
BACKGROUND ANALYSIS
AVs have the potential to offer improved safety, better traffic efficiency with less congestion, reduced carbon dioxide emissions, and the elimination of most traffic accidents. Yet not all accidents are avoidable by AVs; there will be some crashes in which an AV must make a difficult ethical decision that causes unavoidable harm (Martin et al., 2017). For example, it could save pedestrians by swerving and sacrificing its own passengers, or save its own passengers and kill several pedestrians. Even if these situations never occur, AV programming still needs ethical rules for what decision to make when faced with such hypothetical situations (Li et al., 2018). Therefore, these types of decisions must be made before AVs are released as a global commodity (Bonnefon, Shariff and Rahwan, 2016). Algorithms form the basis of decision making in AVs, allowing them to perform the driving task autonomously (Lim and Taeihagh, 2019). The algorithms that control AVs need to embed moral principles directing their decisions in situations of unavoidable harm (McManus and Rutchick, 2018).
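To make this concrete, the following is a minimal sketch of what a utilitarian "minimise casualties" rule could look like in code. It is an illustration only: the Maneuver type, the candidate maneuvers, and the casualty estimates are invented for this example, and they stand in for the far harder perception and prediction problems a real AV would face.

    # A minimal sketch of a utilitarian "minimise expected casualties" rule.
    # All names here (Maneuver, the candidate options, the casualty
    # estimates) are hypothetical illustrations, not any manufacturer's
    # actual software.
    from dataclasses import dataclass

    @dataclass
    class Maneuver:
        name: str
        expected_passenger_casualties: float
        expected_pedestrian_casualties: float

        def total_expected_casualties(self) -> float:
            return (self.expected_passenger_casualties
                    + self.expected_pedestrian_casualties)

    def choose_utilitarian(candidates):
        """Pick the maneuver that minimises total expected casualties."""
        return min(candidates, key=Maneuver.total_expected_casualties)

    options = [
        Maneuver("stay_course", 0.0, 10.0),  # passengers safe, ten pedestrians hit
        Maneuver("swerve", 1.0, 0.0),        # one passenger sacrificed
    ]
    print(choose_utilitarian(options).name)  # -> swerve

The point of the sketch is that the moral principle has to be stated explicitly as an objective for the algorithm to optimise; a different principle (for example, always protecting passengers) would simply be a different objective function.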
To understand how people feel about the potential for AVs to make ethical decisions, Jean-Francois Bonnefon, Azim Shariff, and Iyad Rahwan conducted six online surveys with a total of 1,928 participants between June and November 2015 (Ackerman, 2020). They noticed a possible concern with MTurk studies: some participants may already be familiar with the testing materials, since the same materials are used by many research groups. They therefore created their own testing materials, which had never been used in a published MTurk study (Bonnefon, Shariff and Rahwan, 2016). In study one, with 182 participants, 76% believed it would be more moral for AVs to sacrifice one passenger than to kill ten pedestrians. Participants also strongly expressed a moral preference for utilitarian AVs programmed to minimise the number of casualties (Bonnefon, Shariff and Rahwan, 2016). In study two, participants were presented with dilemmas that varied the number of pedestrians' lives at stake. Participants' moral approval increased with the number of lives that could be saved, and remained strong even in treatments in which they had to imagine themselves or a family member in the vehicle (Li et al., 2018). In study three, participants were presented with a social dilemma and asked how likely they would be to buy an AV programmed to minimise casualties, meaning it would sacrifice them or a co-rider in unavoidable circumstances, as well as how likely they would be to buy an AV programmed to prioritise its own passengers under all circumstances (Bonnefon, Shariff and Rahwan, 2016).
Although the reported likelihood of buying an AV was low even for the self-protective option, participants reported an even lower likelihood of buying one when they imagined that they or a family member would be sacrificed for the greater good (Bonnefon, Shariff and Rahwan, 2016). In other words, participants thought utilitarian AVs were the most moral and would welcome them on the road, but they would not prefer to buy one for themselves (Ackerman, 2020). In study four, participants were given 100 points to allocate between different types of algorithms to indicate, first, how moral the algorithms were; second, how comfortable participants were with AVs being programmed in a given manner; and third, how likely participants were to buy an AV programmed in a given manner (Bonnefon, Shariff and Rahwan, 2016). Once again, participants approved of utilitarian, self-sacrificing AVs but would not buy one for themselves (Bonnefon, Shariff and Rahwan, 2016). In study five, participants were asked about their attitudes towards legally enforcing utilitarian sacrifices, and as in the earlier studies, the perceived morality of the sacrifice was high (Bonnefon, Shariff and Rahwan, 2016). In the last study, participants were asked how likely they would be to buy an AV whose algorithms were regulated by the government (Bonnefon, Shariff and Rahwan, 2016). Participants were hesitant to accept governmental regulation of utilitarian AVs (Ackerman, 2020); overall, they were less likely to buy an AV with such regulation than without it (Ackerman, 2020).
Consequently, it is not only the passengers of AVs who have a say in what is the ethically right way for an AV to be programmed and to behave; so do the manufacturers who program the AVs and the government, which may regulate the kind of programming manufacturers can offer (McManus and Rutchick, 2018). The survey findings suggest that people believe everyone would be better off with utilitarian AVs that minimise casualties on the road, yet the same people have a personal incentive to ride in AVs that will protect them at all costs (Faulhaber et al., 2018). This creates a social dilemma: if both utilitarian AVs and self-protective AVs were available on the market, more people would be willing to ride in self-protective AVs than in utilitarian ones (Bonnefon, Shariff and Rahwan, 2016). In addition, regulators may face the difficulty of whether to enforce utilitarian AVs, which most people disapprove of, or to delay the approval of AVs, given that the lives saved by making AVs utilitarian may be outnumbered by the deaths caused by delaying AV adoption (McManus and Rutchick, 2018).
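A toy calculation can make the dilemma explicit. The sketch below uses invented casualty figures to show how a market of individually rational, self-protective buyers can produce far more total casualties than a utilitarian fleet; the numbers are hypothetical and only illustrate the direction of the effect.

    # A toy model of the social dilemma, under invented assumptions: in
    # each unavoidable-harm scenario, a utilitarian AV sacrifices its
    # single passenger (1 casualty), while a self-protective AV saves its
    # passenger at the cost of several pedestrians (10 casualties).
    CASUALTIES_PER_SCENARIO = {"utilitarian": 1, "self_protective": 10}

    def total_casualties(fleet):
        """Expected casualties if each AV in the fleet meets one such scenario."""
        return sum(CASUALTIES_PER_SCENARIO[style] for style in fleet)

    # People endorse utilitarian AVs for others but buy self-protective
    # ones for themselves, so the market drifts towards the second fleet:
    print(total_casualties(["utilitarian"] * 100))      # 100
    print(total_casualties(["self_protective"] * 100))  # 1000

Each individual buyer improves their own expected outcome by choosing the self-protective option, yet collectively the fleet produces ten times the casualties, which is exactly the structure of a social dilemma.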
LEGAL RESOURCES AVAILABLE IN AUSTRALIA
In Australia, the National Transport Commission (NTC), an intergovernmental agency, was tasked by the Transport and Infrastructure Council in November 2016 with leading a number of reforms to the regulation of autonomous vehicles, especially in relation to legal and safety issues (Preparing for Automated Vehicles, 2020). Subsequently, the Transport and Infrastructure Council agreed on a set of actions: to develop national enforcement guidelines that clarify the regulatory concepts of control and proper control for different levels of driving automation; to develop options for managing government access to autonomous vehicle data that balance road safety and network efficiency outcomes, and the efficient enforcement of traffic laws, with adequate privacy protection for autonomous vehicle users; to review injury insurance schemes to identify any eligibility barriers for owners of autonomous vehicles, or for those involved in a crash with an autonomous vehicle; to develop legislative reform options that clarify the application of current driver and driving laws to autonomous vehicles and establish legal obligations for autonomous driving system entities; and to design and develop a safety assurance scheme for autonomous road vehicles (Preparing for Automated Vehicles, 2020).