1. Develop modular applications in which the pieces of an AI system can be easily changed to accommodate the requirements of different jurisdictions.
Every geography has a unique set of requirements and priorities that help determine what is legal, ethical, and permissible. For example, much of the groundbreaking work around AI has happened in the United States and China. Both countries have taken more of a laissez-faire approach to overseeing AI development, each in pursuit of its respective goals of technological innovation and economic leadership. By taking a modular approach to application development, companies can more easily adapt their AI systems to the regulations and requirements of a particular region (the EU, in this case), country, or industry vertical.
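To make the idea concrete, here is a minimal sketch in Python of what such a modular design could look like: region-specific rules sit behind a single compliance interface, and the core application only talks to that interface. The names and values here (CompliancePolicy, EUPolicy, LoanScreeningApp, the 0.9 review threshold) are illustrative assumptions, not anything prescribed by the EC guidance.

```python
from dataclasses import dataclass
from typing import Protocol


@dataclass
class Prediction:
    label: str
    confidence: float
    requires_human_review: bool = False


class CompliancePolicy(Protocol):
    """Swappable, region-specific compliance module (hypothetical interface)."""

    def check(self, prediction: Prediction) -> Prediction:
        ...


class EUPolicy:
    """Illustrative EU-style module: low-confidence decisions get human oversight."""

    def check(self, prediction: Prediction) -> Prediction:
        if prediction.confidence < 0.9:
            prediction.requires_human_review = True
        return prediction


class PermissivePolicy:
    """Illustrative lighter-touch module: predictions pass through unchanged."""

    def check(self, prediction: Prediction) -> Prediction:
        return prediction


class LoanScreeningApp:
    """Core application logic depends only on the policy interface."""

    def __init__(self, policy: CompliancePolicy):
        self.policy = policy

    def decide(self, score: float) -> Prediction:
        raw = Prediction(
            label="approve" if score >= 0.5 else "deny",
            confidence=max(score, 1 - score),
        )
        return self.policy.check(raw)


# The same application, adapted to different regimes by swapping one module.
eu_app = LoanScreeningApp(policy=EUPolicy())
other_app = LoanScreeningApp(policy=PermissivePolicy())
print(eu_app.decide(0.7))     # flagged for human review under the stricter policy
print(other_app.decide(0.7))  # passes through unchanged
```

Because the application depends only on the interface, adapting it to a new jurisdiction means swapping one module rather than rewriting the decision logic.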
2. Study the seven key requirements for AI outlined in the EC white paper.
It won't be enough to slightly change your AI strategy. To be compliant, companies must apply the spirit of the guidelines throughout the process, from the design of an AI system through development, training, deployment and beyond. Doing this effectively starts with a full understanding of the EC guidelines (summarized below) and their implications.
- Human agency and oversight—Ensure that people have oversight and can use an AI system without relinquishing autonomy.
- Technical robustness and safety—Minimize unintentional and unexpected harm, and prevent unacceptable harm, including harm to physical and mental health.
- Privacy and data governance—Protect the individual’s privacy and ensure the quality and integrity of data, including insights or decisions that a system generates.
- Diversity, non-discrimination and fairness—Honor inclusion and diversity to ensure a fair and equitable system.
- Societal and environmental wellbeing—Be mindful of an AI system’s potential impact on individuals, society and the environment, and carefully monitor for negative impact.
- Transparency—Document data sets, decisions, and processes as a means of recourse.
- Accountability—Enable auditors (both external and internal) to evaluate an AI system's decisions and how it reached them; a simple record-keeping sketch follows this list.
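As one way to picture the transparency and accountability requirements in practice, the following sketch appends every decision, along with the model and data versions behind it and a plain-language rationale, to a log that internal or external auditors could later review. The record structure, field names, and file path (DecisionRecord, audit_log.jsonl) are hypothetical, not a format the guidelines prescribe.

```python
import json
from dataclasses import asdict, dataclass, field
from datetime import datetime, timezone


@dataclass
class DecisionRecord:
    model_version: str
    dataset_version: str  # which training data produced the model
    inputs: dict          # the features the system actually used
    output: str           # the decision it reached
    rationale: str        # human-readable explanation of how it got there
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


def log_decision(record: DecisionRecord, path: str = "audit_log.jsonl") -> None:
    """Append one decision record as a JSON line for later auditing."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")


log_decision(DecisionRecord(
    model_version="credit-model-1.4.2",
    dataset_version="applications-2024-q1",
    inputs={"income": 54000, "debt_ratio": 0.31},
    output="approve",
    rationale="score 0.82 exceeded the 0.5 approval threshold",
))
```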