Frequently Asked Questions

AI Act Compliance & Regulatory Requirements

What is the EU AI Act and who does it apply to?

The EU AI Act is a European law that regulates the use of artificial intelligence by categorizing AI systems based on risk levels. It applies to both technology providers and companies that use AI systems, requiring them to comply with transparency, labeling, and governance obligations. The law is already in force, with implementation deadlines staggered between 2025 and 2027. [Source]

What are the main risk categories defined by the EU AI Act?

The EU AI Act defines four main risk categories for AI systems: unacceptable risk (banned uses, effective February 2, 2025), high risk (operational requirements from August 2, 2026), limited risk (mandatory transparency from August 2, 2026), and minimal risk (no additional obligations). Each category has specific compliance requirements and deadlines. [Source]

What are the compliance deadlines for the EU AI Act?

Key deadlines include: bans on unacceptable risk uses from February 2, 2025; operational requirements for high-risk systems from August 2, 2026 (or August 2, 2027 for regulated products); mandatory transparency for limited risk systems from August 2, 2026; and transparency obligations for General Purpose AI (GPAI) models from August 2, 2025 (with a transition period until August 2, 2027 for existing models).

What does the Italian AI Act require from businesses?

The Italian AI Act, enacted in September 2025, supports the EU AI Act by introducing coordination and supervision measures. It designates the Agency for Digital Italy (AgID) and the National Cybersecurity Agency (ACN) as authorities, establishes a €1 billion fund for AI and cybersecurity, and emphasizes transparency, traceability, and human oversight in sensitive sectors. [Source]

How does the Italian AI Act relate to the EU AI Act?

The EU AI Act provides the overarching regulatory framework, while the Italian AI Act focuses on national coordination, supervision, and enforcement. The Italian law aligns authorities and responsibilities, impacting how businesses interact with regulators and implement controls.

What are the penalties for non-compliance with the AI Act?

The Italian AI Act introduces safeguards and penalties for abuses such as harmful deepfakes and reinforces requirements for transparency and human oversight. Specific penalties depend on the nature and severity of the violation, as outlined in the legislation. [Source]

Labeling, Metadata & Audit Trails

When must content be labeled as AI-generated under the AI Act?

Content must be labeled as AI-generated when it is synthetic (created directly by AI) or manipulated in a way that could appear real. A clear label and, where technically feasible, a digital watermark readable by software and platforms are required. Including this information in metadata reduces the risk of error. [Source]

What are the requirements for metadata and provenance under the AI Act?

The AI Act requires that content includes homogeneous metadata on its origin, transformations, and versions. Provenance is a fundamental requirement, ensuring transparency and traceability from the moment of content creation. This supports accountability, incident investigation, and compliance audits. [Source]

How does THRON help automate labeling and metadata management for AI Act compliance?

THRON Platform, with its native AMBRA AI, automates the generation of consistent metadata, including titles, descriptions, tags, and captions. This reduces manual work, minimizes errors, and ensures that each asset is equipped with the necessary metadata for compliance, discoverability, and traceability throughout its lifecycle. [Source]

What is an audit trail and why is it important for AI Act compliance?

An audit trail is a verifiable history of actions and events related to AI system operations. The AI Act requires event logs and continuous monitoring for high-risk systems, supporting investigations, audits, and continuous improvement. Without a readable audit trail, compliance remains theoretical. [Source]

How does THRON's Automation Studio support audit trail requirements?

THRON's Automation Studio provides a centralized control center for designing, activating, and monitoring automations. It maintains execution history with searchable logs, respects role permissions, and sends error notifications, producing a readable audit trail for both internal and external audits.

Are logs mandatory for companies deploying AI systems?

Yes, companies (deployers) must retain logs generated by high-risk AI systems for an appropriate period, supporting traceability and investigations. Providers must set up systems that automatically record events, as required by the AI Act.
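
The regulation leaves "an appropriate period" to the deployer's own risk assessment. As a minimal sketch of enforcing whatever retention policy you choose (the 180-day figure and the field names below are placeholder assumptions, not legal requirements), a retention check could look like this:

    from datetime import datetime, timedelta, timezone

    RETENTION_PERIOD = timedelta(days=180)  # placeholder policy; the AI Act does not fix a number

    def must_be_retained(log_timestamp: str, now: datetime | None = None) -> bool:
        """Return True if a log entry (ISO 8601 timestamp) is still within the retention period."""
        now = now or datetime.now(timezone.utc)
        return now - datetime.fromisoformat(log_timestamp) <= RETENTION_PERIOD

    print(must_be_retained("2025-09-01T12:00:00+00:00"))  # True or False depending on today's date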

How do consistent metadata and automation help with compliance?

Consistent metadata and a readable audit trail form the evidentiary basis for compliance. A console with logs and execution history makes processes demonstrable without slowing down operations, supporting both regulatory and operational needs.

General Purpose AI (GPAI) & Model Selection

What are General Purpose AI (GPAI) models and what are their obligations under the AI Act?

General-purpose AI (GPAI) models are AI models that can serve a wide range of use cases and be integrated into many downstream systems. The AI Act imposes transparency obligations for GPAI from August 2, 2025, including documentation, user instructions, copyright compliance, and publishing a summary of training sources. For models exceeding certain capacity thresholds, additional risk assessment and mitigation measures are required. [Source]

What documentation is required for GPAI models?

For GPAI models, companies must provide documentation, user instructions, copyright compliance evidence, and a summary of training data sources. If the model poses systemic risk, additional risk assessment and mitigation are required. The European Commission provides practical templates, such as the Model Documentation Form, to support compliance. [Source]

How can companies select suitable GPAI models for compliance?

Companies should use the European Commission's Code of Practice and Guidelines to evaluate GPAI models, ensuring transparency on training data, copyright respect, and systemic risk management. Including these requirements in RFPs and contracts helps reduce compliance uncertainty and timelines.

THRON Platform Features & Capabilities

What is THRON and how does it support AI Act compliance?

THRON is a SaaS platform that centralizes digital asset management (DAM) and product information management (PIM), enhanced with AI capabilities. It helps companies comply with the AI Act by automating metadata generation, labeling, audit trails, and providing tools for traceability and governance of AI-generated content. [Source]

What are the key features of THRON's AMBRA AI?

AMBRA AI, native to the THRON Platform, generates consistent texts in multiple languages, proposes tags aligned with existing attributes, and automatically creates captions. This ensures assets are accessible, searchable, and compliant with metadata requirements, reducing repetitive work and errors. [Source]

How does THRON's platform help with content traceability?

THRON provides a framework with fields, templates, consistent taxonomies, and centralized versioning. This ensures that every piece of content is traceable from creation to distribution, supporting compliance with provenance and audit trail requirements.

What integrations does THRON offer for workflow automation?

THRON offers next-generation API connectors and certified connectors for platforms such as AEM, Magento, Shopify, Drupal, WordPress, and SFCC. It also integrates with ERP, creative suites, IT security tools, and user experience tools, enabling seamless workflow automation and compliance. [Source]

Does THRON provide APIs for custom integrations?

Yes, THRON provides modern, robust, and easy-to-implement APIs that allow integration with any system or endpoint. These APIs ensure secure, high-performance integrations, simplifying workflows and enhancing speed. [Source]
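
As an illustration only (the base URL, endpoint path, authentication header, and payload fields below are hypothetical placeholders, not THRON's documented API), a typical REST integration that updates an asset's metadata from an external system might look like this:

    import requests

    # Hypothetical example: pushing metadata for an asset to a DAM over REST.
    # URL, token, and field names are placeholders, not THRON's actual API.
    BASE_URL = "https://dam.example.com/api/v1"
    ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"

    payload = {
        "title": "Spring campaign hero image",
        "description": "Synthetic hero image generated for the spring campaign.",
        "tags": ["campaign", "hero", "ai-generated"],
    }

    response = requests.put(
        f"{BASE_URL}/assets/12345/metadata",
        json=payload,
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        timeout=10,
    )
    response.raise_for_status()  # fail loudly if the update was rejected
    print(response.status_code)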

What technical documentation does THRON offer for IT teams?

THRON provides detailed technical documentation, including platform architecture, security, and data management. Resources include downloadable infographics, white papers, and comprehensive guides, making them essential for IT and digital leaders. [Source]

How does THRON ensure security and compliance?

THRON is fully compliant with GDPR and holds internationally recognized certifications, including ISO 27001:2022, ISO 9001:2015, ISO 27017:2015, and ISO 27018:2019. It employs secure development practices, advanced threat detection, data encryption, and leverages AWS infrastructure for robust security and data integrity. [Source]

Use Cases, Benefits & Customer Success

Who can benefit from using THRON for AI Act compliance?

THRON is ideal for IT teams, CIOs, data and content governance managers, marketing, e-commerce, operations, and digital leaders in industries such as fashion, manufacturing, retail, beauty & pharma, sporting goods, ceramics, interior design, and automotive. [Source]

What business impact can companies expect from using THRON?

Companies using THRON can achieve up to 90% time savings in asset search, 50% reallocation of time to strategic activities, 99.9% service availability, and a 317% ROI with a payback period of less than 6 months (Forrester TEI study). Automated asset delivery can provide a financial benefit of 7,000 over three years. [Source]

What customer feedback has THRON received regarding ease of use?

Customers highlight THRON's ability to simplify asset search and retrieval, automate channel updates, and enable no-code workflow management. For example, a Chief Marketing Officer at a manufacturing company noted that asset retrieval is now "done in two clicks," and a Digital Product Manager in fashion no longer worries about manual updates. [Source]

Can you share specific case studies of companies using THRON?

Yes, THRON has been successfully adopted by companies such as Fiorentina (sports), Selle Royal Group (manufacturing), Chervò (fashion), CAME (manufacturing), LAGO (interior design), PLATUM (e-commerce), Whirlpool (appliances), Dainese (sporting goods), Pettenon Cosmetics (beauty & pharma), and Atlas Concorde (ceramics). Detailed case studies are available on THRON's customers page. [Source]

What industries are represented in THRON's case studies?

Industries include fashion, sporting goods, beauty & pharma, manufacturing, interior design, retail, sports, ceramics, and e-commerce. Examples are available for each sector on THRON's customers page. [Source]

How quickly can THRON be implemented and how easy is it to start?

THRON B2B Area can be activated in less than a week. Customers receive support from a dedicated Customer Success Manager, customized onboarding, comprehensive training, and access to guides and videos, ensuring a smooth and efficient start. [Source]

What support and resources does THRON provide to customers?

THRON offers technical assistance, a Customer Success Program with personalized onboarding and training, self-service resources (guides, videos, troubleshooting), and monthly release notes to keep customers updated on platform developments. [Source]

What are the main pain points that THRON addresses for businesses?

THRON addresses pain points such as excessive manual work, inconsistent brand communication, scattered product content, slow time-to-market, workflow inefficiencies, and the need for secure, centralized management of digital assets and product information. [Source]

How does THRON compare to other DAM and PIM solutions?

THRON uniquely combines DAM and PIM functionalities in a single platform, eliminating the need for costly integrations and reducing complexity. It automates workflows, provides a single source of truth, supports omnichannel delivery, and offers a Customer Success Program to accelerate ROI. These features differentiate THRON from other solutions that may require multiple tools or integrations. [Source]

THRON logo containing the European flag; next to it, the words "European AI Act".

AI Act: guide to labels, logs and governance

The AI Act is the European law that regulates the use of artificial intelligence by risk level. It establishes what is prohibited, when more stringent requirements are needed, and introduces transparency obligations for content generated or modified by AI. It primarily concerns those who build, integrate, and govern digital platforms and processes in companies: IT teams, CIOs, and data & content governance managers. For them, the most important operational aspects are the traceability of AI-generated content, the quality of metadata, and a readable control system that documents how AI is used.

AI Bubble that intersects three fundamental concepts for IT Teams, CIOs and Data & Content Governance Teams: traceability, metadata and control systems.

The law is already in force, but now the more operational rules come into play: informing and labeling AI-generated content, making its provenance traceable with consistent metadata, consciously choosing general purpose models (GPAI), and maintaining activity logs that demonstrate how AI operates on a daily basis.

In this article, you’ll find practical information on: what changes for labels and watermarking of AI-created content, what metadata and logs are needed to demonstrate the origin and the audit trail, how to set up human monitoring and control, what the Italian AI Act provides, and what criteria to follow to choose the most suitable artificial intelligence models for your company.

EU AI Act implementation schedule and deadlines

The European AI Act starts from a simple idea: not all artificial intelligence technologies are the same. For example, a customer service chatbot cannot have the same rules as a system that evaluates CVs for hiring. This is why the regulation divides AI systems into categories of increasing risk. Each category has specific obligations and, above all, specific deadlines by which compliance is required. The law applies to those who use these systems in their companies, not just to technology providers, so you are responsible for complying with the rules.

Diagram with orange circles representing the AI risk levels: unacceptable, high, limited, minimal and GPAI, with related deadlines.

Here is the complete calendar with the related areas of use:

  • Unacceptable risk: Bans effective February 2, 2025.
    We’re talking about uses that violate rights and freedoms, such as subliminal or deceptive manipulation, social scoring of people, and the untargeted mass collection of facial images to create unauthorized databases.
  • High risk: Operational requirements from August 2, 2026.
    This includes systems used in employment and HR (CV screening), education (student assessment), biometrics, migration, and justice. Risk management, data quality, technical documentation, event logging, and human oversight are required. If the system is incorporated into regulated products, the transition will be extended until August 2, 2027.
  • Limited risk: Mandatory transparency from August 2, 2026.
    If AI creates or substantially modifies content that is potentially misleading, it must be reported. Simple photo editing, such as removing a background, does not require labeling, but if synthetic content is generated, transparency is required, and metadata must indicate its origin. Chatbots must make it clear that they are AI systems, not real people.
  • Minimal risk: No additional specific obligations and no dedicated deadlines beyond the applicable general regulations. This includes most current applications, such as spam filters or AI functions in video games.
  • General Purpose Models (GPAI): Transparency obligations from August 2, 2025.
    Models already on the market before this date must comply by August 2, 2027. They are not a “risk level,” but have dedicated rules because they support so many use cases.

The real challenge is not just meeting the AI Act deadlines, but building AI governance that strengthens customer trust. When a company must label its promotional images or website content as “artificially generated,” it risks conveying a perception of less authenticity and care. Those managing digital content and data have the opportunity to anticipate these scenarios by immediately organizing processes, responsibilities, and strategic controls that avoid trade-offs between compliance and corporate reputation, transforming regulatory obligations into competitive advantage.

Labeling AI-generated content

The AI Act introduces transparency and marking requirements: outputs must be marked clearly, whether they are text, images, audio, or video.

Deepfakes (content where AI replaces one person’s face or voice with that of another) must be labeled and, where technically possible, given a digital watermark that machines can recognize. This helps track the content throughout its lifecycle, from creation to distribution.
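
As a simple illustration of a machine-readable marker (not a robust watermark, and not a technique mandated by the AI Act; file names are placeholders), a disclosure can be embedded directly into an image's metadata, here using the Pillow library's PNG text chunks:

    from PIL import Image
    from PIL.PngImagePlugin import PngInfo

    def embed_ai_disclosure(src_path: str, dst_path: str, generator: str) -> None:
        """Embed a basic AI-generation disclosure as PNG text chunks (illustrative only)."""
        image = Image.open(src_path)
        info = PngInfo()
        info.add_text("ai_generated", "true")
        info.add_text("generator", generator)  # which model or tool produced the image
        image.save(dst_path, pnginfo=info)

    embed_ai_disclosure("hero_raw.png", "hero_labeled.png", generator="image-model-x")

    # Software and platforms can read the marker back from the file
    print(Image.open("hero_labeled.png").text)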

To accompany adoption, the Commission published the General-Purpose AI Code of Practice (10 July 2025) and Guidelines (31 July 2025) that clarify when and how the obligations apply to GPAI providers. The Code does not replace the law, but it is a voluntary tool for demonstrating compliance; the Guidelines specify its purpose and operational expectations.

Italian AI Act: What does the law provide?

In September 2025 Italy approved a national law that supports the AI Act with coordination and supervision measures, tasking the Agency for Digital Italy (AgID) and the National Cybersecurity Agency (ACN) with implementing the legislation. A €1 billion fund is also planned to support AI and cybersecurity in the country. For businesses, this translates into clear contact persons, strengthened controls, and greater attention to transparency, traceability, and human supervision in sensitive sectors.

Italy: €1 billion investment in artificial intelligence and cybersecurity.

European AI Act and metadata: data provenance and quality

In the most sensitive use cases, the regulation requires documentation and event logging, that is, operational traces that an AI system records throughout its lifecycle, such as the start or end of an execution. Without homogeneous metadata on the origin, transformations, and versions of content, it becomes difficult to ensure accountability, investigate incidents, or trace the origin of data. Provenance is not an additional label, but a fundamental requirement: content must include the necessary information from the moment of creation, so that transparency is native and not added later.
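
What "homogeneous metadata" can look like in practice is easiest to see with a small sketch; the field names below are an illustrative assumption, not a schema prescribed by the AI Act:

    from dataclasses import dataclass, field

    @dataclass
    class Transformation:
        """One step in the content's history (e.g. crop, translation, AI rewrite)."""
        tool: str
        action: str
        timestamp: str  # ISO 8601

    @dataclass
    class ProvenanceRecord:
        """Illustrative provenance metadata attached at creation time."""
        asset_id: str
        origin: str                       # e.g. "human", "ai-generated", "ai-assisted"
        source_model: str | None = None   # generating model, if any
        version: int = 1
        transformations: list[Transformation] = field(default_factory=list)

    record = ProvenanceRecord(asset_id="SKU-123-hero", origin="ai-generated", source_model="text-model-y")
    record.transformations.append(
        Transformation(tool="editorial-review", action="human approval", timestamp="2026-02-01T09:00:00Z")
    )
    record.version += 1  # every change produces a new, traceable version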

In this context, AMBRA AI, THRON Platform’s native artificial intelligence, helps simplify AI Act compliance and save time by automating repetitive steps from the first upload to the platform. THRON Platform provides the framework with fields and templates, consistent taxonomies, and centralized versioning. AMBRA AI works within this framework:

  • Generates consistent texts in the necessary languages, such as titles and descriptions, respecting style and editorial guidelines.
  • Proposes tags aligned with existing attributes, identifying subjects and contexts in content through visual recognition.
  • Automatically creates captions to improve the accessibility and searchability of your assets.

Screenshot with example product sheet and “Generate with AMBRA AI” button to generate a description.

The result is less repetitive work, fewer errors, and faster publishing. Thus, each piece of content enters the platform with a consistent set of metadata, ready to accommodate labels and warnings for synthetic content and to be tracked throughout its lifecycle. This reduces manual work, makes content more discoverable, and facilitates compliance.

How to design a readable audit trail

Metadata quality must be accompanied by operational traceability. The law requires event logs and continuous monitoring for high-risk systems; it also requires deployers to retain logs “for an appropriate period” to support investigations, audits, and continuous improvement. In essence: without a verifiable history of executions, compliance remains only theoretical.

THRON’s new Automation Studio offers a single control center for designing, activating, and monitoring automations. It displays available automations with status and description, centralizes configuration, maintains execution history with searchable logs, respects role permissions, and sends error notifications, producing a readable audit trail for internal and external audits.
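
Automation Studio provides this history out of the box. Purely to make the underlying idea concrete (an append-only, structured execution log; the file name and field names below are assumptions, not THRON's actual log format), one automation run could be recorded like this:

    import json
    from datetime import datetime, timezone

    LOG_FILE = "automation_audit.jsonl"  # append-only JSON Lines file (illustrative)

    def log_execution(automation: str, actor: str, status: str, details: dict) -> None:
        """Append one structured, timestamped entry per automation run."""
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "automation": automation,
            "actor": actor,     # user or service account that triggered the run
            "status": status,   # e.g. "success" or "error"
            "details": details,
        }
        with open(LOG_FILE, "a", encoding="utf-8") as f:
            f.write(json.dumps(entry) + "\n")

    log_execution(
        automation="auto-tagging",
        actor="service:ambra-ai",
        status="success",
        details={"asset_id": "SKU-123-hero", "tags_added": 4},
    )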

General-Purpose AI (GPAI): Selection Guidelines

For those integrating general purpose models (GPAI), the European Commission has published a voluntary Code of Practice and Guidelines that explain what to document and how: transparency on training data, respect for copyright, and management of systemic risks.

The Code also includes practical templates, such as the Model Documentation Form, which are useful for supplier evaluations. The GPAI rules came into effect on August 2, 2025: including these requirements in RFPs and contracts reduces uncertainty and compliance times.
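
A lightweight way to carry these requirements into procurement (the checklist items paraphrase the obligations discussed above; the structure itself is just an illustrative sketch) is to score every candidate model against the same documentation points:

    # Illustrative GPAI supplier-evaluation checklist for RFPs. The items paraphrase the
    # documentation obligations discussed above; the structure is an assumption.
    GPAI_CHECKLIST = [
        "Technical documentation and user instructions provided",
        "Summary of training data sources published",
        "Copyright-compliance policy documented",
        "Systemic-risk assessment and mitigation (if capacity thresholds are exceeded)",
        "Model Documentation Form (or equivalent) completed",
    ]

    def evaluate_supplier(name: str, answers: dict[str, bool]) -> None:
        """Print which checklist items a candidate GPAI supplier satisfies."""
        print(f"Supplier: {name}")
        for item in GPAI_CHECKLIST:
            mark = "OK" if answers.get(item, False) else "MISSING"
            print(f"  [{mark}] {item}")

    evaluate_supplier("Model provider A", {GPAI_CHECKLIST[0]: True, GPAI_CHECKLIST[1]: True})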

FAQ

What is the “EU AI Act” in a nutshell?
It is the European law that regulates AI by dividing it into risk levels: it provides transparency, controls for the most sensitive cases, and specific rules for GPAIs. It is already in force, with implementation staggered between 2025 and 2027.

When should I label content as AI-generated?
When the output is synthetic (i.e. created directly by artificial intelligence) or manipulated in a way that may appear real, a clear label is required and, where technically feasible, a watermark that software and platforms can read. Recording this information in the metadata reduces the risk of error.

If I use a general-purpose model (GPAI), what should I do today?
Ensure documentation, user instructions, copyright compliance, and publish a summary of training sources. If the model exceeds certain capacity thresholds (systemic risk), additional risk assessment and mitigation measures are required.

Are logs also mandatory for those who deploy the system?
Yes: deployers must retain logs generated by high-risk systems for an appropriate period of time, supporting traceability and investigations. Providers must set up systems that automatically record events.

What does the Italian AI Act mean for businesses?
It establishes coordinating authorities (AgID, ACN), introduces safeguards and penalties for abuses such as harmful deepfakes, and reinforces the requirement for transparency and human oversight. For businesses, this means traceable processes and clear points of contact.

What is the relationship between the AI Act and the “Italian AI Act”?
The European framework is directly applicable, while the Italian framework focuses on coordination, supervision and specific sanctions, aligning authorities and responsibilities. For companies, this changes the way controls are carried out and how they collaborate with regulators.

How do metadata and compliant automation help me?
Consistent metadata and a readable audit trail form the evidentiary basis for compliance; a console with logs and execution history makes processes demonstrable without slowing down operations.
