The Resurgence of Artificial Intelligence
2023 was all about Artificial Intelligence, or AI, and this tech darling’s worldwide rise is poised to continue in 2024. AI’s benefits are well documented as its applications continue to evolve. Naturally, issues and downsides are also anticipated, and many pundits have predicted that AI-related disputes will increase in 2024. It was therefore no surprise that 2023 closed with The New York Times suing OpenAI and Microsoft for alleged infringement of intellectual property rights. The New York Times alleges that OpenAI developed its AI by using The New York Times’ publications without due regard for intellectual property rights.
The trend in Indonesia is no different when it comes to AI. In anticipation of the negative impacts of its increasing use, the Indonesian government has taken steps to regulate it. Similar to the approach taken in other jurisdictions, the Indonesian government has adopted a ‘soft law’ approach to AI governance, under which ethical guidelines are the norm.
In this client update, we take a closer look at the two most recent guidelines on the AI governance framework, which were issued almost simultaneously by the Ministry of Communication and Informatics (“Ministry”) and the Financial Services Authority (Otoritas Jasa Keuangan or “OJK”).
Ministry of Communication and Informatics Circular Letter No. 9 of 2023
Who is subject to the Circular Letter?
It should be noted at the outset that a ‘circular letter’ is not among the official regulatory forms under the Indonesian hierarchy of regulations. This means that a circular letter is inward looking and not binding on any parties other than its specific addressees. In the case of Ministry of Communication and Informatics Circular Letter No. 9 of 2023 (“Circular Letter”), its addressees are: (i) business actors operating under the 62015 business classification category; (ii) public electronic service operators (PSE publik); and (iii) private electronic service operators (PSE privat).[1]
The 62015 business classification category was introduced in 2021[2] to cater to business activities that involve AI-based programming. The regulation lists examples of AI, namely machine learning, natural language processing, and expert systems. Among the specific requirements that apply to businesses operating under the 62015 business classification category is the obligation to ‘prepare and implement internal company policy on data and ethics of artificial intelligence’. The scope of activities under the 62015 business classification category covers the consultation, analysis, and programming of AI technologies.
What is the key requirement?
There is really only one key requirement under the Circular Letter, namely that any AI programming activities must be based on ethics. Any business actors and electronic service operators that carry out AI programming activities must develop and implement internal guidelines on AI ethics. The purpose of developing AI ethics guidelines is to ensure that AI is adopted in consideration of certain ethical principles, such as prudence, safety, and an orientation towards positive impact. Nine ethical values are highlighted in the Circular Letter, and we elaborate on these values below:
- Inclusivity: AI developers must consider equality, justice, and order for the common good in producing information and innovation.
- Humanity: AI developers must protect human rights, social relations, belief systems, and individual opinions and thoughts.
- Safety: the security of users and data must be ensured, and the rights of users of the electronic system must be prioritised so that no party is harmed.
- Accessibility: all users must have an equal right to access AI-based technology.
- Transparency: the implementation of AI must be based on transparency of the data used, to avoid misuse of data.
- Credibility and accountability: information produced by AI and distributed to the public must be trustworthy and accountable.
- Personal data protection: AI providers must ensure that personal data protection requirements under the prevailing law are adhered to.
- Environmental sustainability: AI providers must carefully consider the impact of AI on humanity, the environment, and other living beings.
- Intellectual property: the implementation of AI is subject to the principles of intellectual property rights protection under the prevailing law.
Intellectual property protection
As mentioned above, one of the ethical values in the Circular Letter is intellectual property, namely that the implementation of AI is subject to the principles of protecting intellectual property rights based on the prevailing laws and regulations.
There are three main issues in protecting intellectual property rights in the context of AI development (particularly generative AI): first, the use of intellectual property belonging to other parties as training data; second, the authorship of works generated by AI; and third, the ownership of works generated by AI.
As of now, there is no clear regulation on these three issues. It is, therefore, pertinent that the Ministry of Law and Human Rights, through the Directorate General of Intellectual Property, issue clear and concrete guidelines to follow up on the Circular Letter.
Responsibilities
In addition to setting out the ethical principles and values, the Ministry places further responsibilities on business actors and electronic service operators that utilise AI technology. First, they must ensure that the AI technology they use is not the sole determinant when developing policies or making decisions that affect human livelihoods. Second, AI technology should be utilised as a tool to enhance innovation and assist in problem-solving. Lastly, when and if such rules are enacted, they must comply with any regulations governing the utilisation of AI to ensure the safety and rights of users of digital mediums.
The Ministry acknowledges that it lacks real enforcement power under the Circular Letter until a binding regulation on AI is enacted. However, a senior official from the Ministry has indicated that the Ministry has the option to exercise its right under Article 40A of the recent second amendment to the Electronic Information and Transactions Law (EIT Law) if it is of the view that the AI ethics requirements are not being adhered to. Article 40A essentially states that the government is responsible for creating a digital ecosystem that is fair, accountable, safe, and innovative, and that, in order to discharge this responsibility, it may instruct electronic system operators (which must comply) to make ‘adjustment(s)’ to their electronic system(s) and/or to carry out other specific actions. We understand that Article 40A is intentionally drafted broadly to accommodate any situation.
OJK AI Ethics Guidelines for the Financial Technology Sector
At the outset, OJK observes that the financial technology, or fintech, sector has transitioned to data-driven business processes that utilise AI, particularly machine learning. The link between big data and AI is indeed the new reality. OJK welcomes this development but is also cautious of the potential risks or, in its parlance, the ‘unprecedented risk’.
To mitigate these risks, OJK argues that a code of conduct is necessary. The code of conduct is intended to guide fintech operators and related parties in ensuring that the AI-based applications they use fulfil the principles of being beneficial, fair and accountable, transparent and explicable, and robust and secure, or, in short, that they constitute ‘responsible and trustworthy AI’.
The guidelines provide the following elaboration of the basic principles for responsible and trustworthy AI:
- Adherence to Pancasila (Indonesia’s state philosophy): this is to ensure that the AI utilised is in line with the national interest.
- Fair and accountable: fintech operators must adhere to privacy and non-discriminatory practices towards consumers.
- Transparent and explicable: fintech operators must be able to explain how AI is utilised in their business processes from start (input) to end (output), thereby ensuring transparency and the assurance of a ‘human in the loop’.
- Robustness and security: fintech operators must ensure that the AI applications they use are robust based on acceptable parameters and can withstand cyberattacks or have the means to recover from them.
The guidelines close with a list of 37 supporting factors for the four basic principles mentioned above.
Closing
Considering AI’s rapid development and its expected future potential, the government’s early intervention to establish an AI governance framework should be commended. The choice of ethical guidelines and codes of conduct is also appropriate for now and is in line with the worldwide trend. They provide AI developers and programmers with guidance on the key principles that must be adopted without curbing the innovation and creativity of this nascent sector. The indicator of when to move to stricter governance and regulation will be informed by customers. In this regard, the respective regulators will need to keep their ears close to the ground and diligently follow the development of this exciting technology.
[1] We discuss PSE publik and PSE privat in more detail in our February 2021 client update. Click here to read.
[2] Ministry of Communication and Informatics Regulation No. 3 of 2021.