Britain formally promulgates robot ethics standards: no harm, no deception, no addiction

In his science fiction, Isaac Asimov described the Three Laws of Robotics, which govern robot behavior: a robot may not harm a human being, must obey human orders, and must protect its own existence. Now the British Standards Institution has officially released a set of robot ethics guidelines that are more complex and mature than Asimov's three laws. It is time for society to develop a code of conduct for robots, and the document gives a striking sense of science fiction meeting reality.

The British Standards Institution (BSI) is a national standards body with more than 100 years of history and considerable international authority. The document's full title is "Guide to the ethical design and application of robots and robotic systems" (hereafter "the Guide"), and its main audience is robot design researchers and manufacturers, whom it guides in carrying out moral risk assessments of their robots. The ultimate goal is to ensure that the intelligent robots humans produce can be integrated into the existing norms of human society.

The standard, codenamed BS 8611, was released on September 15 at the "Social Robotics and AI" conference in Oxford, England. Alan Winfield, professor of robotics at the University of the West of England, said this is the industry's first published standard on the ethical design of robots.

Although the document's content is dry, the scenarios it depicts seem to leap straight out of science fiction. Robots that deceive, that foster addiction, or that can learn beyond the scope of their intended capabilities are all classified as hazards that designers and manufacturers must consider.
The Guide opens with broad principles: robots should not be designed solely or primarily to kill or injure humans; humans, not robots, are the responsible agents; and it must be possible to identify the person responsible for any given robot's behavior.

The Guide also addresses some controversial topics, for example whether it is acceptable for humans to form emotional bonds with robots, especially robots designed to interact with children and the elderly. Noel Sharkey, a professor at the University of Sheffield, argues that this is an example of robots deceiving people inadvertently: robots have no emotions, but people sometimes come to believe otherwise. He pointed to a recent study in which small robots were placed in a kindergarten; the children grew very fond of them and believed the robots were more perceptive than their family pets.

The Guide suggests that designers should pay particular attention to robot transparency, but scientists say this is hard to apply in practice, because people do not know exactly how AI systems, especially deep learning systems, make their decisions.