
By Nadia Zaifulizan

Artificial Intelligence (AI) constantly works towards the best strategy for achieving its set objectives. The best strategy means the pathway that leads to the objective in the fastest, most productive, and most efficient manner, regardless of any other unspecified elements. This is beneficial for achieving big goals set to a high standard, but it can also lead to unintended consequences. Any element not included among the factors the AI model is designed to consider will simply be ignored in the pursuit of the objective. If an AI is designed to ensure a train gets from point A to point B, anything in the train’s pathway that was never specified will be disregarded as long as the train reaches the destination as efficiently as possible.
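A purely illustrative sketch makes the point concrete (the route data and field names below are hypothetical): an optimizer scores candidate routes only on the factors written into its objective, so a field the designer never specified, such as obstacles on the track, can never influence the decision.

```python
# Hypothetical route data: the "obstacles" field exists in the world,
# but the objective below knows nothing about it.
routes = [
    {"name": "express", "travel_time": 30, "obstacles": 3},  # fast but risky
    {"name": "scenic",  "travel_time": 55, "obstacles": 0},  # slow but clear
]

def objective(route):
    # The designer specified only speed; obstacles are an "unspecified element".
    return route["travel_time"]

best = min(routes, key=objective)
print(best["name"])  # -> "express": the obstacles never entered the decision
```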

This is among the reasons why human consideration is important when designing an AI. Since an AI’s priority is achieving the set objective and improving how it achieves it, it is those who design these systems who need to steer their direction. According to Stuart Russell, who leads the Center for Human-Compatible AI, setting the “wrong” objectives may have dire consequences as AI systems become more intelligent. An AI system’s abilities grow the more it is used. When the objectives set for AI systems are not specified completely and correctly, problems occur: systems that become more intelligent and more capable than humans will be increasingly able to find loopholes around laws and restrictions.

Russell goes further in his views on objectives and AI by proposing a constitutional requirement that the AI must be beneficial to human beings. This is a departure from the standard model of specifying a fixed objective for AI systems. With the “beneficial for humans” requirement built in, an AI system has an incentive to turn to humans for information, feedback, and permission before going further.
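A toy sketch can illustrate the deference behaviour this implies (this is not Russell’s formal model; the confidence values and helper function below are hypothetical): the agent acts on its own only when it is confident an action benefits humans, and otherwise pauses to ask a human first.

```python
def ask_human(action: str) -> bool:
    # Placeholder for a real feedback channel (hypothetical); a deployed
    # system would route this request to an actual human reviewer.
    print(f"requesting permission for: {action}")
    return False  # default to "no" until a human explicitly approves

def act(action: str, benefit_confidence: float, threshold: float = 0.9) -> str:
    if benefit_confidence >= threshold:
        return f"executed: {action}"
    # Uncertain whether the action is beneficial: defer to a human.
    return f"executed: {action}" if ask_human(action) else f"withheld: {action}"

print(act("slow down for scheduled stop", benefit_confidence=0.97))   # acts
print(act("clear unknown object from track", benefit_confidence=0.40))  # asks
```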

When an AI is created, its creators often do not completely understand the knowledge acquired by the intelligent system, especially since such understanding is not a prerequisite for building one. The relevant knowledge is acquired in volume as the system works towards the objectives set for it, and creators often cannot predict the full extent of what their AI can do. Accountability is therefore a concern. According to Brad Smith, the President and Chief Legal Officer of Microsoft, technology may be considered neutral, but technologists cannot be. He believes that the creators of intelligent systems should take seriously the moral consequences of the work they do.

Current trends indicate that some entities exploit behavioural predictions generated from discreet surveillance of users. This AI-based exploitation is extremely profitable and very common, yet such profit-driven activity often goes undetected. Without realising it, users are fed inputs they did not choose, nudging them to respond in ways that benefit the corporate entities.

It is people who are most affected by the outcomes of AI systems, so humanity should be a central consideration when designing an AI. Accountability in the AI sector includes responsibility for the purpose, execution, and application of such systems. It means that an AI must be built on objectives and goals that are not harmful to humanity, source its inputs ethically, and use its acquired insights responsibly.
