Malicious Use of Artificial Intelligence
A new research report – written by 26 authors from 14 institutions, spanning academia, civil society, and industry – warns of the potential malicious use of Artificial Intelligence (AI).
The report says that AI and machine learning capabilities are growing at an unprecedented rate, and that these technologies have many beneficial applications, ranging from machine translation to medical image analysis.
Countless more such applications are being developed and can be expected over the long term. However, according to the report, less attention has historically been paid to the ways in which artificial intelligence can be used maliciously.
This report surveys the landscape of potential security threats from malicious uses of artificial intelligence technologies, and proposes ways to better forecast, prevent, and mitigate these threats.
The report focuses on the types of attacks that are likely to occur if adequate defenses are not developed, and makes several recommendations in response to the changing threat landscape.
It suggests that policymakers should collaborate closely with technical researchers to investigate, prevent, and mitigate potential malicious uses of AI.
Researchers and engineers in artificial intelligence should take the dual-use nature of their work seriously, allowing misuse-related considerations to influence research priorities and norms, and proactively reaching out to relevant actors when harmful applications are foreseeable.
Further, the report recommends identifying best practices in research areas with more mature methods for addressing dual-use concerns, such as computer security, and importing them where applicable to the case of AI.