Artificial Intelligence (AI) has emerged as a breakthrough technology of the 21st century, with impactful applications ranging from medical diagnostics and treatment to autonomous systems that gather data through sensor networks and “learn” from experience.
Cognizant of Voltaire’s admonition that with great power comes great responsibility, and further recognizing the myriad moral and ethical challenges associated with AI applications, ISSAI operates in accordance with the following ethical principles:
1. Societal Well-Being – sometimes described as “AI for Good”: AI systems should prioritize benefits to humanity and stewardship of the environment, emphasizing sustainability and observing the Hippocratic credo of “first, do no harm”.
2. Human-Centered Values – AI systems should respect human rights, the rule of law, and the democratic values of freedom and dignity. They should respect the privacy and anonymity of individuals, incorporating data protection and observing the values of equality, non-discrimination, diversity, social justice, and internationally recognized labor rights.
3. Transparency – AI systems rely on algorithms and learning methodologies that can be inscrutable; it is therefore imperative to ensure responsible disclosure of a system’s design, methodologies, capabilities, limitations, and risks, so that humans can understand and challenge its outcomes.
4. Technical Resilience and Robustness – an AI system must operate safely and securely, with engineered fault tolerance and the capacity to detect risks and avoid harm in the event of an error or system failure.
5. Accountability – organizations and individuals developing, using, and/or operating AI systems should be accountable for their proper functioning in line with the above principles.