As AI becomes deeply embedded in how businesses operate, the regulatory landscape is finally catching up. The European Union’s Artificial Intelligence Act (EU AI Act) – the first comprehensive AI regulation in the world – is being rolled out in stages, and August 2026 marks a major milestone for mandatory disclosure and transparency requirements. These changes will affect Irish companies that develop, deploy or use AI systems in the EU market, even if they are headquartered outside Europe.
The Act was adopted in 2024 and applies on a staggered timeline.
- Prohibited AI practices (e.g., social scoring, untargeted scraping of facial images) are already banned.
- On 2 August 2026, the mandatory disclosure and transparency requirements for many AI systems officially take effect across the EU, including Ireland.
Companies must be ready to demonstrate compliance or risk enforcement actions and significant penalties.
What the Mandatory Disclosure Requirements Entail
1. Transparency for AI Interactions
Users must be told, in plain language, when they are interacting with an AI system rather than a person. This covers, among others:
- Chatbots & conversational agents
- AI-generated media (text, images, video, audio)
- Content recommendation systems
The aim is to preserve user trust and avoid deception. If your website or product uses an AI tool that customers interact with, you’ll need to explicitly disclose that fact to people in plain language.
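As a loose illustration of that disclosure duty (not legal guidance), a chat widget could attach a plain-language notice to the opening message of every conversation. The function name and wording below are hypothetical, not prescribed by the Act:

```python
# Hypothetical sketch: prepend a plain-language AI disclosure to a chatbot's
# first reply. The wording and structure are assumptions for illustration.
DISCLOSURE = "You are chatting with an AI assistant, not a human agent."

def first_reply(bot_answer: str, is_first_turn: bool) -> str:
    """Attach the disclosure to the opening message of a conversation."""
    if is_first_turn:
        return f"{DISCLOSURE}\n\n{bot_answer}"
    return bot_answer

print(first_reply("How can I help you today?", is_first_turn=True))
```

The point is simply that the notice is shown before the interaction begins, rather than buried in a privacy policy.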
2. Labelling & Disclosure of AI-Generated Content
Certain AI output, especially content that could influence public opinion or decisions, must be labelled or otherwise identifiable as machine-generated. This includes deepfakes and synthetic media used in marketing, advertising, reporting, or social media. This change pushes companies to rethink how AI content is presented to audiences and to build transparent notification systems into their digital touchpoints.
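One way to build labelling into a publishing pipeline is to tag AI-generated content with both a machine-readable flag and a visible notice before it goes out. This is a minimal sketch under assumed field names; the Act does not prescribe a particular schema:

```python
# Hypothetical sketch: attach a machine-readable marker and a visible label
# to AI-generated marketing copy before publishing. Field names and label
# wording are assumptions for illustration only.
def label_ai_content(content: dict) -> dict:
    labelled = dict(content)
    labelled["ai_generated"] = True  # machine-readable marker for downstream systems
    labelled["body"] = content["body"] + "\n\n[This content was generated with AI.]"
    return labelled

post = {"title": "Spring sale", "body": "Our biggest discounts yet."}
print(label_ai_content(post)["body"])
```

Keeping the flag separate from the visible label means internal tools and public audiences can each get the form of disclosure that suits them.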
3. Documentation and Internal Records
Companies must maintain documentation showing how their AI systems work, how they manage risk, and what safeguards are in place. This documentation will be essential if authorities request compliance evidence during audits or investigations. For AI systems categorised as “high-risk”, Irish providers will also need:
- Risk assessments and mitigation plans
- Processes for human oversight
- Cybersecurity safeguards
- Ongoing monitoring and reporting frameworks
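The documentation points above could be captured in a simple internal record per system. The schema below is purely an assumption for illustration; the Act does not mandate any particular format:

```python
# Hypothetical sketch: a minimal internal record mirroring the documentation
# points above. All field names and example values are illustrative.
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    name: str
    risk_tier: str                                   # e.g. "high-risk", "limited-risk"
    risk_mitigations: list = field(default_factory=list)
    human_oversight_process: str = ""
    cybersecurity_safeguards: list = field(default_factory=list)
    monitoring_cadence: str = ""                     # e.g. "quarterly review"

record = AISystemRecord(
    name="Loan pre-screening model",
    risk_tier="high-risk",
    risk_mitigations=["bias testing before each release"],
    human_oversight_process="Credit officer reviews every automated rejection",
    cybersecurity_safeguards=["access logging", "model file signing"],
    monitoring_cadence="quarterly review",
)
print(record.name)
```

Even a lightweight register like this makes it far easier to answer a regulator's request for evidence quickly.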
How Irish Companies Can Prepare
Even if your AI systems don’t fall into the highest-risk category, the transparency requirements will still apply in many cases. Here’s how to get ready:
1. Identify All AI Systems in Use
Inventory your company’s AI tools, from customer support bots to recommendation algorithms, and classify them according to the AI Act’s risk framework.
2. Update Customer and User Communications
Make sure users are informed when they’re interacting with AI, and clearly label any AI-generated content using language that is easy to understand.
3. Build or Strengthen Documentation
Pull together technical documentation, risk assessments, and operational processes so you’re ready if regulators request evidence of compliance.
4. Educate Teams & Stakeholders
Train product, legal, and marketing teams on what counts as a disclosure under the Act. Embedding awareness early avoids last-minute compliance stress.
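Step 1 above can be sketched in code: build an inventory of AI tools and bucket each one by a rough, simplified tiering. The tier names and rules here are illustrative assumptions only; real classification requires legal review against the Act's actual risk categories:

```python
# Hypothetical sketch of an AI-tool inventory with a simplified risk bucketing.
# The attributes and tier labels are assumptions, not the Act's legal tests.
INVENTORY = [
    {"tool": "support chatbot",     "user_facing": True,  "affects_rights": False},
    {"tool": "CV screening model",  "user_facing": False, "affects_rights": True},
    {"tool": "product recommender", "user_facing": True,  "affects_rights": False},
]

def rough_tier(tool: dict) -> str:
    """Very coarse first-pass bucketing; not a substitute for legal analysis."""
    if tool["affects_rights"]:
        return "high-risk (full documentation duties)"
    if tool["user_facing"]:
        return "limited-risk (transparency duties)"
    return "minimal-risk"

for t in INVENTORY:
    print(f'{t["tool"]}: {rough_tier(t)}')
```

A first pass like this helps surface which systems need attention first, before the detailed compliance work begins.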
A New Era of Trust & Transparency
For Irish companies navigating this shift, planning ahead and embedding transparency into your AI strategy now will set you apart as a forward-thinking and trustworthy partner in the AI age.
If you want help auditing your AI usage or shaping a compliance roadmap, Pennypop are here to help. Let’s make sure you’re ready for August 2026 and beyond. Get in touch!