A new AI compliance tool, developed by Swiss startup LatticeFlow in collaboration with ETH Zurich and Bulgaria’s INSAIT, has revealed critical gaps in some of the most prominent AI models, including those from Meta, OpenAI, and Alibaba. The AI checker, designed to evaluate models against the European Union’s AI Act, highlighted concerns around cybersecurity resilience and discriminatory output, key areas where models are underperforming.
With the EU’s AI Act set to come into full effect over the next two years, this tool provides a vital early assessment of how well AI models comply with forthcoming regulations. Notably, models such as OpenAI’s GPT-3.5 Turbo and Meta’s Llama 2 received relatively low scores in areas such as prompt hijacking resistance and discriminatory output.
Companies that fail to comply with the AI Act face significant penalties, including fines of up to 35 million euros or 7% of global annual turnover, whichever is higher. The findings offer a roadmap for companies to bring their AI models into alignment with the EU’s requirements, as non-compliance could carry heavy financial consequences.
With this new tool, the EU is making its strongest move yet towards holding AI accountable for ethical and secure operations, pushing tech giants to fine-tune their models or face severe repercussions.