In Brief:
- Big Tech AI models are underperforming in critical areas, including cybersecurity and bias mitigation.
- Non-compliance with the EU AI Act could result in penalties up to 7% of global turnover.
- Companies have 30 days to take action and improve compliance.
A new AI compliance tool, developed by Swiss startup LatticeFlow in collaboration with ETH Zurich and Bulgaria’s INSAIT, has revealed critical gaps in some of the most prominent AI models, including those from Meta, OpenAI, and Alibaba. The AI checker, designed to evaluate models against the European Union’s AI Act, highlighted concerns around cybersecurity resilience and discriminatory output, key areas where models are underperforming.
With the EU’s AI Act set to come into full effect over the next two years, this tool provides a vital early assessment of how well AI models comply with forthcoming regulations. Notably, models such as OpenAI’s GPT-3.5 Turbo and Meta’s Llama 2 received relatively low scores in areas such as resilience to prompt hijacking and discriminatory outputs.
Companies that fail to comply with the AI Act face significant penalties, including fines of up to 35 million euros or 7% of global annual turnover, whichever is higher. The findings offer a roadmap for companies to improve their AI models in alignment with the EU’s requirements, as non-compliance could lead to heavy financial consequences.
With tools like this one emerging, the EU is making its strongest move yet towards holding AI providers accountable for ethical and secure operations, pushing tech giants to fine-tune their models or face severe repercussions.