AI Governance Framework

To institutionalize responsible AI adoption, a National AI Governance Framework is essential: it establishes the agreed scope and limits of what developers and stakeholders may do, as well as AI’s intended function, while keeping in mind the economic, political, and socio-cultural impact on society.

In 2022, the DTI gathered almost a hundred stakeholders from government, industry, the academe, and CSOs to collect ideas and recommendations for the formulation of the national framework. The rich insights and discussions were reflected in a published White Paper, which now serves as a concrete foundation for the framework’s formulation.

Some of the AI principles identified in this work are the following:

  • AI should promote inclusive growth, sustainable development, and well-being;
  • An AI system has to be human-centric and fair; fairness is essential to treating people with dignity and respect;
  • An AI system has to be robust, performing reliably and safely;
  • The proper functioning of AI systems depends on AI actors assuming responsibility and accountability in their roles, environments, or contexts;
  • Transparency of policies, rules, and regulations governing AI systems is indispensable; and
  • AI should be trustworthy. With trust, people are willing to test whether AI systems can operate safely, reliably, and consistently even under difficult, if not unexpected, conditions.

Towards this end, the DTI has been convening an AI Working Group, composed of government agencies, the academe, technology providers, and other organizations. The group is in charge of identifying how the framework can harmonize the various policies and programs, and of determining gaps in the country’s policies, in order to develop the country’s AI ecosystem. Institutional mechanisms have to be in place to advance and expand the work of formulating the national AI governance framework.