WORLDWIDE HOMOLOGATION, REGULATORY COMPLIANCE, TYPE APPROVAL SPECIALIST

Trump White House Unveils AI Guidelines Prioritizing U.S.-Made Tech and Streamlined Procurement – GLOBAL

Global

30-04-2025

Trump White House Unveils AI Guidelines Prioritizing U.S.-Made Tech and Streamlined Procurement

The White House Office of Management and Budget (OMB) released two memos on April 3 outlining updated guidance for the use and acquisition of artificial intelligence (AI) in federal agencies. These memos, issued under President Trump, replace previous Biden-era directives but retain many of the same frameworks, such as the roles of chief AI officers and risk management for high-impact AI systems.

Memo M-25-21 sets out three priorities for federal AI use: innovation, governance, and public trust. Memo M-25-22 addresses AI procurement, emphasizing U.S.-made AI, risk management, and cross-agency collaboration, and sets a 200-day deadline for creating a repository of federal AI procurement tools.

Despite maintaining key oversight structures, some specific AI use cases listed under Biden—like election-related tools or unauthorized voice replication—were omitted. Implementation concerns persist, as watchdog groups question whether all agencies, especially newer Trump-era entities like the Department of Government Efficiency (DOGE), will adhere to the rules.

Finally, the memos follow a broader Trump initiative to support AI infrastructure, including identifying federal land for AI data centers, continuing a policy initially advanced by the Biden administration.


European Commission Launches “AI Continent” Strategy to Lead in Trustworthy AI with €20B Investment and Startup Support – GLOBAL

Global

30-04-2025

European Commission Launches “AI Continent” Strategy to Lead in Trustworthy AI with €20B Investment and Startup Support

The European Commission has launched the AI Continent Action Plan, a bold initiative to position the EU as a global leader in artificial intelligence. Building on the €200 billion InvestAI initiative, the plan focuses on boosting AI development and adoption across five key areas.

Key measures include establishing 13 AI factories and up to 5 AI gigafactories, expanding data infrastructure, and implementing a new Cloud and AI Development Act to triple the EU’s data centre capacity. The plan also prioritizes increasing access to high-quality data, promoting AI in strategic sectors like healthcare and the public sector, and strengthening AI talent through education and international recruitment.

To support businesses in complying with upcoming regulations, the Commission will launch an AI Act Service Desk offering guidance on implementation. The action plan reflects the EU’s goal to turn its research and industrial strengths into AI leadership while promoting innovation, competitiveness, and responsible development.

Spain Plans Major Fines for Unlabelled AI Content – GLOBAL

Global

31-03-2025

Spain Plans Major Fines for Unlabelled AI Content

Spain has approved a draft bill to enforce strict regulations on AI-generated content, including hefty fines of up to €35 million or 7% of global turnover for companies that fail to label such content. The bill aligns with the EU’s AI Act and targets the spread of deepfakes and AI misuse, especially those posing risks to democracy and vulnerable groups.

The legislation, still pending approval from the lower house, bans subliminal manipulation, AI-based biometric profiling, and behavior-based scoring systems, while allowing biometric surveillance for national security. Oversight will be handled by Spain’s new AI agency, AESIA, with other regulators covering specific sectors like privacy, finance, and elections.

California Unveils AI Policy Report, Prompting Response from Anthropic – GLOBAL

Global

31-03-2025

California Unveils AI Policy Report, Prompting Response from Anthropic

California has released a draft AI policy report, reinforcing its leadership in the field. Commissioned by Governor Gavin Newsom and led by experts including Stanford’s Dr. Fei-Fei Li, the report advocates for responsible AI development through evidence-based policies, transparency, safety standards, and public input. Key focus areas include using AI for the public good, strengthening education and workforce training, and addressing AI-related risks. Newsom emphasized California’s pivotal role in shaping the future of AI, given its status as home to many of the world’s top AI companies. The state invites feedback on the report until April 8, 2025.

AI company Anthropic supports the initiative, highlighting alignment with its own transparency and safety practices. Both the state and Anthropic stress the importance of collaborative governance and economic impact monitoring to ensure AI benefits society while supporting innovation.


EU Unveils Draft Code of Practice for AI Providers, Emphasizing Risk Management and Copyright Compliance – GLOBAL

Global

29-11-2024

EU Unveils Draft Code of Practice for AI Providers, Emphasizing Risk Management and Copyright Compliance

On 14 November 2024, the EU introduced a draft Code of Practice for providers of general-purpose AI models with systemic risks. Developed through collaboration across four Working Groups, the draft focuses on transparency, risk identification, technical mitigation, and governance.

It is based on six key principles, including alignment with Union values, proportionality to risks, and future-proofing for AI advancements. Providers are required to continuously assess and mitigate risks throughout the AI lifecycle, with a focus on proportional risk management. Copyright compliance is emphasized, with providers needing to adhere to EU laws and conduct due diligence on training data. Additionally, the draft introduces an Acceptable Use Policy (AUP) outlining conditions for AI model use, including security, privacy, and compliance enforcement.

Singapore Unveils Guidelines and Companion Guide for Securing AI Systems – GLOBAL

Global

04-11-2024

Singapore Unveils Guidelines and Companion Guide for Securing AI Systems

On October 15, 2024, the Cyber Security Agency of Singapore (CSA) published the Singapore Guidelines on Securing AI Systems. These guidelines emphasize the need for AI systems to be secure by design and secure by default, allowing system owners to proactively manage security risks throughout the AI lifecycle. The guidelines aim to protect AI systems from both traditional cybersecurity threats, such as supply chain attacks, and emerging risks, including adversarial machine learning.

Organizations are encouraged to enhance awareness and provide training on security risks associated with AI, ensuring that all personnel are equipped to make informed decisions regarding AI adoption. Additionally, the guidelines recommend establishing incident management procedures.

Accompanying the guidelines is a Companion Guide on Securing AI Systems, which offers voluntary practical measures, security controls, and best practices to assist system owners in implementing effective security strategies.

G7 Data Protection Authorities Focus on Ethical AI Development and Data Privacy – GLOBAL

Global

04-11-2024

G7 Data Protection Authorities Focus on Ethical AI Development and Data Privacy

From October 9 to 11, 2024, the G7 Data Protection Authorities (DPA) held a roundtable focused on promoting the ethical development of Artificial Intelligence (AI). Key topics of discussion included Data Free Flow with Trust (DFFT), emerging technologies, and enhancing enforcement cooperation among member nations.

The G7 Action Plan aims to foster DFFT through international collaboration, regulatory alignment among G7 data protection authorities, and addressing privacy concerns related to emerging technologies such as AI. Following the roundtable, a statement was issued highlighting the critical role of DPAs in ensuring that AI technologies are trustworthy and comply with established data protection principles, including fairness, accountability, transparency, and security.

The DPAs emphasized their human-centric approach, prioritizing the protection of individual rights and freedoms in the context of personal data processing within AI systems. The meeting underscored the essential role of DPAs in overseeing personal data, particularly in the responsible development and deployment of generative AI applications.

New Jersey Proposes Law Requiring AI Companies to Conduct Safety Tests and Report Findings – GLOBAL

Global

04-11-2024

New Jersey Proposes Law Requiring AI Companies to Conduct Safety Tests and Report Findings

On October 7, 2024, Senator Troy Singleton (D) introduced bill S3742, which mandates that artificial intelligence companies in New Jersey conduct annual safety tests on their technologies. The bill defines an “artificial intelligence company” as any entity engaged in the sale, development, deployment, use, or offering of AI technology within the state.

The safety tests will focus on assessing potential cybersecurity threats, identifying data biases and inaccuracies, and ensuring compliance with both state and federal laws. Additionally, the bill requires companies to report the results of these tests to the Office of Information Technology (OIT) through an annual report. This report must include details about the technologies tested, descriptions of the safety tests conducted, third-party involvement, and the outcome of each test. The OIT is also tasked with establishing minimum requirements for AI safety tests performed in New Jersey.

U.S. Department of Labor Issues Guidance on AI and Worker Well-Being – GLOBAL

Global

04-11-2024

U.S. Department of Labor Issues Guidance on AI and Worker Well-Being

On October 16, 2024, the U.S. Department of Labor (DOL) announced the release of a new roadmap aimed at enhancing worker well-being, titled *Artificial Intelligence and Worker Well-Being: Principles and Best Practices for Developers and Employers.* This document builds on guidance issued in May regarding AI principles for worker well-being and outlines specific best practices to help developers and employers implement the eight established principles effectively.

The roadmap responds to President Biden’s Executive Order on the safe and trustworthy use of AI, underscoring the critical need to inform workers—particularly those from underserved communities—and to engage them in the design, development, testing, and oversight of AI systems used in the workplace.

Germany and France Release Joint Recommendations for AI Coding Assistants – GLOBAL

Global

04-11-2024

Germany and France Release Joint Recommendations for AI Coding Assistants

On October 4, 2024, the French Cybersecurity Agency (ANSSI) and the German Federal Office for Information Security (BSI) jointly released a set of recommendations aimed at ensuring the secure use of AI coding assistants. This initiative comes in response to the growing integration of AI tools in programming, highlighting both their potential benefits and the inherent risks they pose.

The recommendations are grounded in a comprehensive analysis that identifies various risks associated with AI programming assistants, while also proposing effective mitigation strategies. Among the key recommendations, the agencies advise users to exercise caution when employing AI tools, stressing the importance of robust security measures to protect against potential vulnerabilities.

Additionally, the guidelines emphasize the necessity of conducting thorough risk assessments before deploying AI coding assistants. They recommend that experienced developers closely review any code generated by these AI tools to ensure its reliability and security. This multi-faceted approach is designed to mitigate risks effectively and safeguard sensitive data from potential threats, underscoring the importance of vigilance in the rapidly evolving landscape of AI technology.