The AI Act is here: what should marketing teams do?

While most of us were out of office, the AI Act came into effect. It had been announced well in advance, so it didn't come as a surprise; in fact, we wrote about it before. Yet many companies either don't know about this new piece of legislation, or simply don't care.

But they should care. The AI Act is the world’s first comprehensive regulation on artificial intelligence. It aims to ensure that AI is safe, transparent, and aligned with fundamental rights.

Now that the Act is in force, companies across the globe — especially those operating in Europe — must take swift action to adapt to these new rules. Here’s a guide on what businesses should prioritize to comply with the AI Act and future-proof their operations.

Understand the key provisions

The AI Act categorizes AI systems based on their risk level: from minimal or no risk to high risk and unacceptable risk. Each category comes with its own set of rules:

  • Unacceptable risk: AI systems considered too dangerous (e.g., social scoring by governments). These are banned outright.
  • High-risk AI: This category includes AI systems used in critical areas like healthcare, law enforcement, and employment. These have to meet strict requirements around transparency, accuracy, cybersecurity, and human oversight.
  • Limited-risk and minimal-risk AI: These systems face fewer restrictions but may still need to meet transparency obligations.

Action Step: Companies need to classify their AI systems under these categories to understand their obligations. Reviewing the AI Act’s annexes, which detail the high-risk sectors, is critical for determining compliance needs for your organization.

Inventory and risk assessment

One of the first steps for any organization is to make an inventory of all AI systems currently in use. This includes not only AI developed in-house but also third-party AI solutions integrated into business processes. When you make that inventory, be sure to include the AI tools that individual coworkers use. From experience, we know that many people use tools on an individual basis, so team leaders may not even be aware of some of the AI in use in your organization.

Key Considerations:

  • Identify AI use cases: Document where AI is being used, its purpose, and the potential risks involved.
  • Evaluate risk levels: For each AI system, evaluate whether it poses a high risk under the Act’s criteria. This could involve evaluating how decisions are made, the sensitivity of the data used, and the potential societal impact of the AI’s outputs on your marketing team AND your organization.
 

By mapping out AI systems, you can prioritize compliance efforts based on risk levels and ensure that high-risk AI systems are addressed first.
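To make this concrete, here is a minimal sketch of what such an inventory could look like in code. The tool names and risk classifications are purely illustrative; your own list will depend on the Act's criteria and on the systems you actually use.

```python
from dataclasses import dataclass

# Risk categories defined by the AI Act, ordered from most to least restricted.
RISK_ORDER = ["unacceptable", "high", "limited", "minimal"]

@dataclass
class AISystem:
    name: str          # the tool, in-house or third-party
    purpose: str       # what the system is used for
    vendor: str        # "in-house" or the third-party provider
    risk_level: str    # one of RISK_ORDER, per the Act's criteria

def prioritize(inventory):
    """Sort the inventory so the most heavily regulated systems come first."""
    return sorted(inventory, key=lambda s: RISK_ORDER.index(s.risk_level))

# Illustrative entries -- the tools and classifications are hypothetical.
inventory = [
    AISystem("chatbot", "website FAQ assistant", "in-house", "limited"),
    AISystem("cv-screener", "rank job applicants", "AcmeHR", "high"),
    AISystem("spell-checker", "copy editing", "in-house", "minimal"),
]

for system in prioritize(inventory):
    print(f"{system.risk_level:>10}: {system.name} ({system.purpose})")
```

Even a simple list like this makes it obvious which systems need attention first.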

Documentation and accountability

The AI Act requires companies to provide clear documentation about how their AI systems work, how data is processed, and how decisions are made. This transparency is crucial, especially for high-risk systems, which will need to meet strict reporting requirements.

Action Steps:

  • Create detailed documentation: This should cover how the AI system was trained, the datasets used, how it was tested for bias, and how decisions are audited.
  • Assign accountability: Establish roles and responsibilities for ensuring AI compliance within your organization. This may involve creating new positions, like a compliance officer specifically for AI governance.
  • Set up regular audits: High-risk AI systems will require ongoing monitoring and periodic audits to ensure compliance with the legal requirements.
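As a rough illustration, a documentation record for a hypothetical high-risk system could track the fields above and flag when a periodic audit is due. The field names and the audit interval are assumptions for the example, not a prescribed format:

```python
from datetime import date, timedelta

# One documentation record per AI system. The fields mirror what the Act
# asks for: training data, bias testing, accountability, and audit history.
doc_record = {
    "system": "cv-screener",            # hypothetical high-risk system
    "training_data": "2019-2024 anonymized application data",
    "bias_tests": ["gender parity check", "age-group error rates"],
    "accountable_owner": "AI compliance officer",
    "last_audit": date(2025, 1, 15),
    "audit_interval_days": 180,         # assumed: audit twice a year
}

def audit_due(record, today):
    """Return True if the next periodic audit is overdue."""
    next_audit = record["last_audit"] + timedelta(days=record["audit_interval_days"])
    return today >= next_audit

print(audit_due(doc_record, date(2025, 9, 1)))  # prints True: audit is overdue
```

Keeping records in a structured form like this makes the regular audits much easier to schedule and prove.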

Data governance and quality

Data quality is at the heart of the AI Act's focus on fairness and transparency. The Act emphasizes the need for high-quality, representative datasets to avoid bias, particularly for high-risk systems. And let's be frank here: even if there were no AI Act, it STILL would be best practice to make sure your data quality is as high as possible and is governed the right way. If you're in marketing or sales, you want to make sure you handle data in your CRM with care.

Action Steps:

  • Ensure data diversity: AI systems should be trained on diverse and representative datasets to avoid discriminatory outcomes.
  • Regularly update data: Keep training data updated to reflect changes that are relevant to your organization.
  • Set up data governance practices: Put strong data governance practices in place to ensure data is handled responsibly, and that sensitive data is protected.
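As a simple illustration of what a data-diversity check can look like, the sketch below flags segments that dominate a dataset. The 60% threshold and the toy CRM-style records are assumptions for the example only; a real audit would run against your actual datasets and your own thresholds.

```python
from collections import Counter

def representation_shares(records, key):
    """Compute each segment's share of the dataset for a given attribute."""
    counts = Counter(r[key] for r in records)
    total = sum(counts.values())
    return {label: n / total for label, n in counts.items()}

def flag_imbalance(shares, max_share=0.6):
    """Flag segments whose share exceeds the (assumed) 60% threshold."""
    return [label for label, share in shares.items() if share > max_share]

# Toy records -- illustrative only.
records = [
    {"region": "EU"}, {"region": "EU"}, {"region": "EU"},
    {"region": "EU"}, {"region": "US"},
]

shares = representation_shares(records, "region")
print(flag_imbalance(shares))  # EU makes up 80% of records, so it gets flagged
```

A check like this won't prove a dataset is unbiased, but it is a cheap first signal that one segment is crowding out the others.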

Make it explainable and transparent

One of the key principles of the AI Act is the need for transparency, especially in decision-making processes. For AI systems that impact individuals, companies need to ensure that AI models are explainable and that users can understand how decisions are made. Examples include AI systems used for hiring employees or making credit decisions.

Action Steps:

  • Develop explainable AI: Ensure that users can understand how your AI system reaches its conclusions. This might involve using simpler models or developing tools to explain decisions made by more complex models.
  • Provide user transparency: Inform users when they are interacting with AI, especially in high-risk scenarios. Make it clear when decisions are made by machines. And provide pathways for human intervention when necessary.

Make sure your vendors comply

Many organizations rely on third-party AI solutions to enhance their operations. These external providers also need to comply with the AI Act. As a result, you need to carefully assess your AI vendors and ensure they meet all the regulatory standards.

Action Steps:

  • Conduct vendor due diligence: Make sure that your AI vendors comply with the AI Act’s requirements. This includes reviewing their data practices, transparency reports, and how they handle risk management.
  • Negotiate compliance clauses: Update contracts with AI vendors to ensure they adhere to the same high standards expected by the AI Act. Establish clear responsibilities if any compliance issues arise.
 
Of course, if you notice a vendor doesn't comply with regulations and has no plan or desire to do so in the future, just drop them and look for other solutions. Otherwise, you risk legal action against your organization.

Train your team

Compliance with the AI Act may require organizational changes and training for your staff. Employees should know the AI Act’s rules well, especially those involved in data, AI development, and compliance.

Action Steps:

  • Provide targeted training: Ensure that anybody in your organization working with AI understands the new regulatory framework. This training should cover everything from data governance to transparency obligations.
  • Build AI literacy: Educate your team about the implications of AI in their daily work and the company’s obligations under the AI Act. Having a workforce that understands AI ethics and compliance can reduce risk.

The AI Act sets a new standard for regulating artificial intelligence, and organizations will need to adapt quickly to stay compliant. Those that apply the steps outlined in this blog post will not only comply with the law but also position themselves as leaders. By addressing these areas, organizations can minimize risk, avoid fines, and build trust with their target audience. That way, regulatory compliance becomes a strategic asset that your marketing and sales teams will benefit from in their daily work. So if you haven't done so yet, now is the time to act: make sure you are using AI systems that are designed, developed, and deployed with care, and in line with the AI Act's principles.
