As mentioned in Part 1, the EU AI Act imposes proactive obligations that can be difficult for a startup to satisfy. Many early-stage tech companies struggle to work through these regulations and determine how they apply to their products. Nevertheless, with the right strategy, startups can overcome potential compliance hurdles and turn them into a competitive edge.
As stated in Part 1, startups should consider data quality and how they would mitigate potential instances of bias. This aligns with Article 10 of the Act, under which high-risk AI systems must be built on high-quality data and providers must guard against bias. That standard is hard to meet for startups that rely on several unstructured data sources. Well-resourced companies can address the challenge with advanced bias-detection algorithms and automated data-cleaning processes. Startups that lack the funding for such tooling should instead build data pipelines that include regular checks on the diversity and fairness of their data, and involve relevant stakeholders to keep the data relevant and representative. Periodic third-party audits can further validate the integrity of their data and help demonstrate compliance with the EU AI Act.
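For illustration only, the sketch below shows one way such a pipeline check might flag under-represented groups and gaps in outcome rates. The column names, thresholds, and toy data are assumptions for the example, not requirements of the Act.

```python
import pandas as pd

# Illustrative sketch: a lightweight fairness check that could run as a step
# in a data pipeline. Column names ("gender", "outcome") are hypothetical.
def check_representation(df: pd.DataFrame, group_col: str, min_share: float = 0.10) -> dict:
    """Flag groups whose share of the dataset falls below a minimum threshold."""
    shares = df[group_col].value_counts(normalize=True)
    return {group: float(share) for group, share in shares.items() if share < min_share}

def check_outcome_gap(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    """Measure the gap in positive-outcome rates across groups (a rough proxy for bias)."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return float(rates.max() - rates.min())

if __name__ == "__main__":
    data = pd.DataFrame({
        "gender": ["F", "M", "M", "M", "F", "M", "M", "M", "M", "M"],
        "outcome": [1, 1, 0, 1, 0, 1, 1, 0, 1, 1],
    })
    print("Under-represented groups:", check_representation(data, "gender", min_share=0.30))
    print("Outcome-rate gap:", check_outcome_gap(data, "gender", "outcome"))
```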
Ensuring transparency and explainability can also be a challenge for startups. Advanced AI systems, such as deep learning models, tend to operate as “black boxes,” making it difficult for humans to understand how they reach their decisions. Explainable AI (XAI) techniques can help startups trace how complex AI systems arrive at their outputs. Another way to improve transparency is to make documentation user-friendly for all audiences, for example with flowcharts or interpretability dashboards that illustrate the decision logic inside AI models. Startups can also work with data scientists to strike a balance between model accuracy and explainability.
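As a rough example of one XAI technique, the sketch below uses permutation importance, a model-agnostic method available in scikit-learn, to show which features a model relies on most. The synthetic dataset and the choice of model are placeholders, not a prescribed approach.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Illustrative sketch: permutation importance as a simple explainability check.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much performance drops;
# large drops indicate features the model relies on heavily.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for idx, importance in enumerate(result.importances_mean):
    print(f"feature_{idx}: importance = {importance:.3f}")
```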
Startups also need to develop robust documentation and records for compliance purposes. Under the Act, providers must maintain detailed documentation of AI system development, data sources, and decision-making audit trails, as part of an effort to improve traceability and accountability. Many startups lack the funding or resources to manage detailed documentation. One possible solution is to adopt standardized documentation practices from the start. Startups can use version control systems to track changes to documents and records, producing a clear audit trail. Startups with the financial resources can adopt compliance management software to automate documentation and keep records consistent and accurate over time.
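A minimal sketch of what a standardized, version-controllable record might look like is shown below. The field names, directory layout, and hashing choice are illustrative assumptions rather than anything the Act prescribes.

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

# Illustrative sketch: writing a model record that can be committed to version
# control as part of an audit trail. All field names are hypothetical.
def write_model_record(model_name: str, version: str, data_sources: list[str],
                       training_data_path: str, out_dir: str = "records") -> Path:
    data_bytes = Path(training_data_path).read_bytes()
    record = {
        "model_name": model_name,
        "version": version,
        "data_sources": data_sources,
        # Hash of the training data file, so later audits can verify it is unchanged.
        "training_data_sha256": hashlib.sha256(data_bytes).hexdigest(),
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }
    out_path = Path(out_dir) / f"{model_name}-{version}.json"
    out_path.parent.mkdir(parents=True, exist_ok=True)
    out_path.write_text(json.dumps(record, indent=2))
    return out_path
```

Committing each generated record to the same repository as the model code keeps the documentation and its change history in one place.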
The EU AI Act requires human oversight and governance, but many organizations have yet to put these in place. To ensure compliance, businesses should formulate policies that clarify everyone's roles, build monitoring systems that provide real-time visibility into AI behavior, and train employees to intervene when needed. Startups should establish a governance framework for monitoring and managing AI. Ingraining these capabilities into the culture of a startup helps ensure that human oversight continues to serve its purpose as the company grows.
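As a rough illustration of human-in-the-loop oversight, the sketch below routes low-confidence predictions to a review queue. The threshold, queue, and function names are hypothetical; a real system would also log these events and notify the responsible team.

```python
# Illustrative sketch: escalate uncertain automated decisions to a human reviewer.
REVIEW_THRESHOLD = 0.80
human_review_queue: list[dict] = []

def decide(case_id: str, prediction: str, confidence: float) -> str:
    """Return the automated decision, or escalate to human review when uncertain."""
    if confidence < REVIEW_THRESHOLD:
        human_review_queue.append({
            "case_id": case_id,
            "prediction": prediction,
            "confidence": confidence,
        })
        return "pending_human_review"
    return prediction

if __name__ == "__main__":
    print(decide("case-001", "approve", 0.95))  # returned automatically
    print(decide("case-002", "reject", 0.62))   # escalated to a human
    print("Queued for review:", human_review_queue)
```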
Finally, a culture of accountability and compliance helps ensure that ethical guidelines for AI systems are consistently followed. Startups should treat compliance as an ongoing process that runs through the entire operation, not a one-time affair. This can be managed from the outset by embedding compliance efforts in the product development process. Startups should also bring together engineers, data scientists, and legal experts to establish an integrated approach to meeting the ethical criteria for AI systems, and can reinforce these efforts by recognizing the employees and teams that uphold the highest standards of ethical responsibility.
Despite the compliance challenges, executing some or all of the strategies outlined above can help startups comply with the EU AI Act. Rather than treat these challenges as setbacks, startups should embrace them as an opportunity to build trust among consumers, earn a positive brand reputation, and establish themselves as leaders in the ethical use of AI systems.