The Israeli Privacy Protection Authority (the “Authority”) recently published draft guidelines on the applicability of the Privacy Protection Law (the “Law”) to artificial intelligence systems (the “Guidelines”). The Guidelines outline the Authority’s interpretation of the Law in the context of developing and using AI systems, and explain how the Authority intends to exercise its powers in that area — particularly in anticipation of the entry into force of Amendment 13 to the Law (“Amendment 13”), expected in August 2025.
This update reviews the main points of the draft Guidelines that are relevant to companies and organizations, and offers practical steps for preparation.
Informed Consent and Notification Requirements
According to the Guidelines, the use of personal data in AI systems requires obtaining informed consent from the data subject. The Authority’s position is that in order to obtain valid consent, the following information must be presented to the data subject:
- the purposes for which the data is being collected;
- the recipients of the data and the purposes of disclosure;
- whether the person is legally required to provide the data;
- the types of data to be used and their sources;
- a description of the risks associated with each data processing purpose;
- how the AI system will process the data; and
- the fact that the data is being collected via an AI-based automated system (bot), if this may significantly affect consent.
Accountability
The Authority states that accountability practices are especially important in the development and use of AI systems. Key practices include:
- Appointing a Data Protection Officer (DPO) – Under Amendment 13, many organizations are required to appoint a DPO, who will serve as the professional authority and knowledge center for privacy matters. The Authority views the DPO as the most suitable and skilled figure to address issues related to AI use. Organizations that train AI models are “very likely” to be required to appoint a DPO under the amended Law.
- Conducting a Data Protection Impact Assessment (DPIA) – A DPIA is a methodological process that comprehensively and systematically analyzes the privacy impact of data processing on the data subject, identifies the full range of privacy risks, assesses alternatives, and proposes risk mitigation measures. The Authority’s position is that conducting a DPIA, prior to using artificial intelligence systems to process personal information, is the best and recommended way to verify and demonstrate that the use of these systems meets privacy protection requirements.
- Policy for Using Generative AI Tools – The Guidelines recommend adopting a policy to reduce the risk of exposing personal data when using generative AI tools such as ChatGPT, Gemini, Claude, and Copilot. The policy should address, among other issues:
– who in the organization may use generative AI tools;
– who can authorize their use;
– which tools are permitted;
– what types of data can be uploaded to such services;
– limiting the retention period of prompts;
– limiting the use of input data for algorithm training; and
– training employees on the specific security risks of these systems.
Web Scraping
The Guidelines state that scraping personal data from the internet to train AI models, without informed consent from the data subject, constitutes unlawful privacy infringement. Even when individuals publish data about themselves or others on social media, this generally cannot be considered informed consent for AI training purposes.
Furthermore, operators of online platforms (e.g., social media or dating apps) must take steps to prevent data scraping from the platforms they manage. Additionally, the Authority’s view is that unauthorized scraping may constitute a “serious security incident” that must be reported to the Authority.
Data Subject Rights
Under the Law, every individual has the right to access their personal data held in any database, and to request the correction of data that is inaccurate, incomplete, or not up to date.
In light of the importance of accuracy and reliability in AI-processed data, the Authority intends to strictly enforce the rights of access, correction, and deletion with regard to AI systems. The Guidelines also note that correcting inaccurate data in AI outputs may require changes to the algorithm that generated them.
Practical Steps for Implementing the Guidelines
The Guidelines show a trend of increased involvement by the Authority in shaping private sector data collection and processing standards. They also align with legislative and regulatory developments in other countries that aim to mitigate AI-related risks.
Recommended steps for implementation:
- Map AI Usage in the Organization: Identify all AI systems and processes that handle personal data and assess the data sources used to train AI models.
- Update Privacy Documents: Revise privacy policies and notices to data subjects to explicitly reference the use of AI and include the expanded disclosure obligations outlined in the Guidelines. Consider including disclosures in AI tools that interact directly with data subjects (e.g., chatbots).
- Conduct DPIAs: Carry out DPIAs for any AI systems that process personal data, and address identified privacy risks.
- Develop a Policy on External AI Services: Define rules for using services like ChatGPT, specify data types that must not be entered into such services, and train employees accordingly. Review each tool’s terms of use and privacy policy, particularly regarding the use of input data (e.g., for training models or sharing information with third parties).
- Appoint a DPO: Many companies are required, under Amendment 13, to appoint a DPO. Even if not strictly required, appointing a DPO is advisable to help integrate privacy principles into organizational practices and meet legal obligations.
- Scraping Review: Organizations involved in data scraping should assess the types of data collected, review the platforms’ terms of use, and adopt policies to reduce legal risks.
In light of the expected entry into force of Amendment 13 to the Privacy Protection Law in August 2025, and the expanded enforcement powers it will grant the Privacy Protection Authority, companies should ensure as soon as possible that they properly implement privacy protection requirements, including in connection with the use of artificial intelligence systems.
The draft Guidelines are open for public comment until June 5, 2025.
Please feel free to reach out to us should you have any questions.
This client update was prepared with the assistance of Reut Cohen.