By: Atty. Edsel F. Tupaz, Atty. Gabriel G. Tabeta, & Atty. Julia S. Unarce
February 5, 2025
Summary
The National Privacy Commission’s AI Advisory outlines guidelines for AI systems processing personal data under the Data Privacy Act of 2012, emphasizing transparency, accountability, and fairness obligations. It requires AI developers, providers, and deployers to assess their operations for personal data processing, adopt governance mechanisms, and ensure data subject rights management. The AI Advisory encourages the use of Privacy-Enhancing Technologies to minimize regulatory exposure and mandates mechanisms for human intervention in AI decision-making. By aligning AI systems with data privacy laws, the NPC aims to foster responsible and ethical AI practices in the Philippines.
The NPC’s AI Advisory sets guidelines for AI systems processing personal data, focusing on transparency, accountability, and fairness.
The National Privacy Commission’s (NPC) Guidelines on the Application of Republic Act No. 10173 or the Data Privacy Act of 2012 (the Act), its Implementing Rules and Regulations (IRR), and the Issuances of the Commission to Artificial Intelligence (AI) Systems Processing Personal Data (the AI Advisory) has spurred dialogue among Philippine privacy and AI governance professionals. A reading of the AI Advisory shows that AI systems that process personal data may fall within the ambit of data privacy laws and regulations. Edsel F. Tupaz, Gabriel G. Tabeta, and Julia S. Unarce, from Gorriceta Africa Cauton & Saavedra, explore the key provisions of the AI Advisory, including its scope, transparency and accountability obligations, and PIC obligations regarding data subject rights.
Scope of the AI Advisory
The AI Advisory clarifies in its first section that it applies only when the processing of personal data is involved in the development or deployment of AI systems, including their training and testing. The scope of the AI Advisory covers the processing of personal data throughout the AI system lifecycle. This means that AI developers, providers, deployers, and other persons involved in the entire AI lifecycle – from planning, design, data collection and processing, and AI model building, through verification and validation, to deployment and monitoring – should consider all salient provisions of the AI Advisory in light of their obligations under the Act.
Consequently, persons and entities involved in the AI system lifecycle should assess whether their operations involve the processing of personal information. This is vital since the applicability of the guidelines under the AI Advisory to a particular AI system activity will determine the extent and coverage of their obligations. It follows that AI developers, providers, and deployers can lessen their regulatory exposure by minimizing, or even avoiding outright, personal data processing activities. For instance, using anonymized or synthetic data throughout the AI system lifecycle, pursuant to the principle of data minimization, may reduce such exposure.
The AI Advisory embodies the general rules and principles found under the Act, namely, that the personal information controller (PIC) and the personal information processor (PIP) must adhere to the general data privacy principles in the development or deployment of AI systems. These are discussed below.
Transparency obligations
The AI Advisory requires PICs to provide specific minimum information regarding AI systems in addition to those already required by the Act, its IRR, and the NPC’s issuances on data subject rights and consent.
PICs shall inform data subjects of the nature, purpose, and extent of the processing of personal data when such processing is involved in the development or deployment of AI systems, including training and testing. Furthermore, PICs should be able to explain the AI system’s processing activities to the data subject. Information regarding the AI system should be presented to data subjects in a layered privacy notice that is accessible to, and readily understood by, members of its target audience.
PICs should inform the data subjects of the following:
- the purpose for the processing of personal information;
- the factors and inputs considered by the AI system;
- the risks associated with the AI processing;
- the expected output of the AI system;
- the impact of the AI system on data subjects;
- and any applicable dispute mechanisms available to data subjects.
PICs should ensure that any information about such processing is easy to access, concrete and definitive, understood by members of their target audience, and presented in a simple manner using clear and plain language while retaining necessary technical terms.
Accountability obligations
The NPC reiterated in the AI Advisory the principle of accountability, which makes PICs responsible for all personal data under their control or custody. In particular, PICs shall be accountable for the outcomes and consequences of processing whenever personal information is involved. This responsibility extends to personal data that has been transferred to a third party for processing.
The accountability principle, therefore, makes AI developers, providers, and deployers responsible for the consequences of AI system outputs produced from personal data processing activities. The accountability principle is an especially important aspect of governance for AI systems whose outputs may impact data subjects or produce legal effects. For example, employers using AI systems to screen job applicants may need to observe the AI Advisory’s accountability obligations to ensure full compliance.
The accountability obligations of the AI Advisory require PICs to adopt demonstrable measures of compliance and governance mechanisms in accordance with the Act. The obligation to adopt demonstrable measures requires PICs and PIPs to maintain the necessary documentation of their policies and procedures in compliance with the Act, its IRR, and other issuances of the NPC in relation to their AI systems. Documentation may include Privacy Impact Assessment (PIA) reports, NPC registration certificates, privacy notices and consent forms, and centralized privacy manuals in relation to the use of AI systems.
On the other hand, the obligation to adopt governance mechanisms requires PICs to institute appropriate and effective mechanisms to ensure responsible and ethical processing of personal data. The NPC provided a checklist of measures that may be adopted to fulfill this obligation, which include:
- the conduct of a PIA;
- integration of Privacy by Design and Privacy by Default;
- implementation of common industry security standards;
- continuous monitoring of AI systems’ operations;
- creation of a dedicated AI ethics board;
- regular retraining and scrubbing of AI systems;
- and institution of mechanisms for human intervention in AI decision-making and review of AI system output.
The institution of mechanisms for human intervention is required for AI systems whose outputs may pose significant risks to the rights and freedoms of data subjects. These mechanisms should involve meaningful human intervention that is carried out by persons with necessary competence and authority.
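For illustration only, the escalation logic behind such a mechanism can be sketched in a few lines of Python. This is a hypothetical sketch, not a construct drawn from the AI Advisory: the confidence threshold, field names, and routing rule are all assumptions, and a production system would queue flagged cases for a reviewer with the necessary competence and authority.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    outcome: str
    confidence: float
    reviewed_by_human: bool = False

# Hypothetical threshold below which an AI output must be escalated
# to a human reviewer rather than released automatically.
CONFIDENCE_THRESHOLD = 0.85

def human_review(outcome: str, confidence: float) -> Decision:
    # Placeholder: a real system would route the case to a reviewer with
    # the authority to override the AI output; here we only flag it.
    return Decision(outcome, confidence, reviewed_by_human=True)

def route_decision(outcome: str, confidence: float) -> Decision:
    """Release the AI output directly only when confidence is high;
    otherwise escalate for meaningful human review."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return Decision(outcome, confidence)
    return human_review(outcome, confidence)
```

The design point is that escalation is automatic and recorded, so the PIC can later demonstrate that human intervention actually occurred for high-risk outputs.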
Fairness obligations
The AI Advisory requires PICs to ensure that their AI systems do not process personal data in a manner that is manipulative or unduly oppressive to data subjects. Use cases that may be considered manipulative or unduly oppressive include the unconsented production of deepfakes, the employment of subliminal or deceptive techniques to impair a data subject’s decision-making, and social scoring applied in contexts unrelated to those from which the data was originally generated or collected.
Specific fairness obligations require PICs to implement mechanisms to identify and monitor biases in AI systems, and to limit both such biases and their impact on data subjects. Assessments and audits of AI systems should also consider the presence and impact of systemic bias, human bias, and statistical bias on AI outputs and data subjects. It is important to note that this fairness obligation does not require outright elimination of all biases in AI decision-making, as such biases may be difficult to remove or may even be essential to the safe operation of AI systems (e.g., ‘pro-human’ biases in AI systems interacting with persons). Rather, the AI Advisory imposes an obligation only to ‘limit’ such biases and any manipulative or unduly oppressive impact on data subjects.
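As a concrete illustration of what "monitoring bias" can mean in practice, one common audit metric is the ratio between the lowest and highest selection rates across demographic groups. The sketch below is a hypothetical example, not a method prescribed by the AI Advisory; the group labels and data are invented for illustration.

```python
from collections import Counter

def selection_rates(decisions):
    """decisions: list of (group, selected) pairs, where selected is a bool."""
    totals, selected = Counter(), Counter()
    for group, chosen in decisions:
        totals[group] += 1
        if chosen:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_ratio(decisions):
    """Ratio of the lowest to the highest group selection rate.
    Values well below 1.0 suggest the AI output disadvantages a group
    and should trigger further review."""
    rates = selection_rates(decisions)
    return min(rates.values()) / max(rates.values())

# Hypothetical screening outcomes for two groups:
decisions = [("A", True)] * 8 + [("A", False)] * 2 \
          + [("B", True)] * 4 + [("B", False)] * 6
```

Here group A is selected at a rate of 0.8 and group B at 0.4, giving a ratio of 0.5 – the kind of disparity a periodic audit would flag for human assessment rather than automatically "fix."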
Further, in response to reports of companies falsely marketing AI products and services that in fact rely on purely human work, the NPC has prohibited PICs from engaging in ‘AI washing’, or the practice where PICs overstate the involvement of AI systems in their products or services. This is to avoid misleading data subjects into sharing their personal data under the false belief that little to no human intervention will be involved.
Other obligations of the PIC
In line with the PIC’s fairness obligation is its duty to maintain the accuracy of personal data, which helps ensure the fairness of AI system outputs. Meanwhile, pursuant to the principle of data minimization, PICs shall exclude, by default, any personal data that is unlikely to improve the development or deployment of AI systems, including their training and testing.
Lastly, in light of the general data privacy principle to process personal data pursuant to a lawful basis, PICs who are involved in the AI lifecycle must determine the most appropriate lawful basis under the Act prior to the processing of personal data.
Obligations to data subjects
Data subject rights management is an essential discussion among AI governance practitioners due to the need to balance data subject rights and the practical realities of developing and training AI systems. Personal data is often incorporated into large AI training and verification datasets, becoming untraceable and indiscernible from other pieces of data AI developers and deployers store and process.
From the outset, the AI Advisory states that the mere fact that personal data has been incorporated into datasets does not automatically make the exercise of data subject rights unreasonable. The AI Advisory reinforces this point by providing that a PIC’s failure to provide mechanisms allowing for the meaningful exercise of data subject rights negates any claim that a data subject access request (DSAR) it receives in relation to personal data processed by an AI system is unreasonable.
AI developers, providers, and deployers are now expected to implement mechanisms to ensure that data subjects may exercise their rights, while allowing for responsible and ethical processing of personal data in the development or deployment of AI systems, including their training and testing. Though certain rights – such as the rights to object, rectification, erasure, or blocking – may be difficult to honor in the context of many AI systems, the NPC requires PICs to implement effective mechanisms or alternative measures to carry out, as far as possible, the intended effect of these rights. In recognition of potential advancements in the field, the NPC suggests the use of Privacy-Enhancing Technologies (PETs) in the development or deployment of AI systems. PETs offer opportunities for AI system developers and deployers to incorporate Privacy by Design in their AI systems while minimizing, or even avoiding, regulatory exposure. AI developers and deployers may consider using anonymization and pseudonymization tools, AI-generated synthetic data, and federated learning, among other existing and emerging PETs, to enhance their AI systems’ privacy compliance.
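By way of illustration, one of the simplest PETs mentioned above – pseudonymization – can be sketched with a keyed hash that replaces a direct identifier before a record enters a training dataset. This is a hypothetical sketch, not an NPC-endorsed implementation; the secret key, field names, and sample record are invented, and in practice the key would be stored and rotated separately from the dataset.

```python
import hashlib
import hmac

# Hypothetical secret held apart from the training data; without it,
# the pseudonyms cannot be linked back to the original identifiers.
SECRET_KEY = b"store-and-rotate-me-securely"

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier with a deterministic, keyed pseudonym."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()

# Invented sample record: the email is pseudonymized, the non-identifying
# attribute is kept for model training.
record = {"email": "juan.delacruz@example.com", "tenure_years": 4}
training_record = {**record, "email": pseudonymize(record["email"])}
```

Because the hash is deterministic, records belonging to the same data subject remain linkable inside the dataset (useful for honoring a DSAR across a corpus), while the raw identifier never enters the training pipeline.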
Conclusion
The NPC’s AI Advisory marks a significant step forward in aligning the Philippines’ data privacy framework with the rapidly evolving global landscape of AI. By explicitly applying the Act and its IRR to AI systems processing personal data, the NPC brings home the point that AI is not exempt from legal and ethical scrutiny under data protection laws. The AI Advisory’s focus on transparency, accountability, fairness, and data subject rights sets a comprehensive standard for AI governance. By emphasizing obligations such as Privacy by Design, bias mitigation, and meaningful human intervention, the NPC encourages organizations to adopt responsible and ethical AI practices. This proactive approach not only enhances the protection of data subjects but also fosters trust and innovation in AI technologies, placing the Philippines among the growing number of countries with privacy-focused AI regulation.
Edsel F. Tupaz is a Senior Partner, Head of the Data Privacy, Cybersecurity and AI Initiatives Practice Group, and Head of the Special Projects and Infrastructure Group. Edsel is dual-qualified under the Philippine and New York Bars, with over 20 years of expertise across data privacy and protection, technology, cybersecurity, AI, infrastructure, government procurement, corporate law, and banking and financial services. He holds a Master of Laws from Harvard Law School and economics and law degrees from Ateneo (both with honors), served as Managing Technical Editor of the Harvard Human Rights Journal, and is listed in the Experts Directory for Philippine privacy law on OneTrust DataGuidance. He is a Certified Information Privacy Professional – Europe (CIPP/E) and a Certified Information Privacy Manager (CIPM) under the IAPP. He was a challenger at the Alan Turing Institute’s Data Challenge – Policy Priorities and AI for Sustainable Development Goals (2023-2024), was awarded “Data Privacy & Protection Lawyer of the Year” at the 2023 Philippine Law Awards, and is recognized among the Top 100 Lawyers in the Philippines by Asia Business Law Journal.
Gabriel G. Tabeta is a Junior Associate and currently a member of the Data Privacy, Cybersecurity & AI, Tax, and Technology Media & Telecommunications Departments of the Firm. Gabriel is involved in the various data privacy and AI initiatives of the Firm, working with foreign and domestic clients to ensure their projects and operations comply with the country’s data privacy regulations. Gabriel also assists in processing reportorial requirements for businesses looking to make their entry into the Philippine market.
Julia Antoinette S. Unarce is a Junior Associate at the Firm and a member of its Litigation, Data Privacy, Anti-Money Laundering, and Environmental, Social, and Governance (ESG) Departments. Her practice spans a broad range of legal matters, including civil, criminal, and administrative cases. She actively represents clients before courts and administrative agencies, and is involved in the negotiation, settlement, and pre-litigation resolution of disputes. In addition to litigation, she provides advisory services on evolving regulatory frameworks, assisting clients with business formation, compliance with reportorial obligations, policy development, and other corporate and regulatory matters.
This article was also published on OneTrust DataGuidance. You may find the full article here: https://www.dataguidance.com/opinion/philippines-npc-releases-guidelines-ai-systems