Who Decides the Future of AI?

While machines have not yet taken over every human task, they continue to become more autonomous and human-like in their performance. From Google translating content between dozens of languages to Facebook identifying people through facial recognition, we can observe the rapid emergence of human-like cognitive functions in artificially intelligent systems.

Although we cannot halt the evolution and rise of artificial intelligence (AI), now is the time to regulate and shape where it is headed. Technology, especially AI, is only safe when everyone has access to the knowledge of how it works and how to use it. If that knowledge is concentrated in the hands of a few powerful corporations, it becomes difficult for ordinary people to use or understand the technology. Meanwhile, those influential companies will continue to prosper and compound their power through AI's capabilities, leaving everyone else unable to catch up.

It is crucial to control the power and availability of AI to prevent dominance by a handful of companies with vast data and funding. This is especially important for protecting smaller firms and universities.

As for AI itself, it should be regulated in its application to specific problems. For example, while AI could help identify the best-suited treatment for patients in medicine, it can also contribute greatly to global instability. Applied to building and deploying unconventional armaments, AI enables autonomous weapons and killer robots that can escalate warfare without direct human involvement. These weapons pose a risk because they can be hacked by private citizens, non-state actors, or anyone knowledgeable in AI. To mitigate these security risks, they should be banned by the international community at an early stage.

Research into AI itself, however, should not be regulated. The underlying principles of computer science are general across countless applications, and each application calls for its own regulations to ensure that knowledge is used safely and fairly.

To ensure that AI has a safe and controllable future, small groups of people should not be allowed to monopolize access to it and generate wealth from it. If AI is not regulated, the future will depend on the decision-making of those groups alone.

We cannot trust unknown groups not to use AI for destructive purposes. We must ensure that AI is used for educational, medical, scientific, and social purposes, so that it does not harm broader communities or the security of the world.
