WORLDWIDE HOMOLOGATION, REGULATORY COMPLIANCE, TYPE APPROVAL SPECIALIST

AI Action Plan and Executive Orders Issued by Trump Administration: Implications for Employment Professionals – GLOBAL

Global

31-07-2025

On July 23, 2025, the Trump Administration released the “America’s AI Action Plan” along with three Executive Orders aimed at accelerating U.S. AI leadership through deregulation, infrastructure investment, and a rejection of DEI (Diversity, Equity, and Inclusion) principles in federal AI guidance. The Plan promotes a centralized federal approach to AI governance, discouraging restrictive state regulations by tying federal funding to states’ AI policies. For labor and employment practitioners, key developments include the directive to revise the NIST AI Risk Management Framework to remove references to DEI, and the establishment of a Department of Labor-led AI Workforce Research Hub focused on job impacts, retraining, and skills development. While the “Preventing Woke AI” Executive Order applies only to federal AI procurement, it may influence private-sector AI tools over time. Importantly, the Plan does not alter existing employment discrimination laws, such as Title VII, nor does it impose new obligations on employers, though it signals long-term shifts in federal AI policy that could shape future workforce and compliance landscapes.

AI models posing systemic risks receive guidance on complying with EU AI regulations – GLOBAL

Global

31-07-2025

On July 18, the European Commission issued guidelines to help providers of AI models deemed to pose systemic risks comply with the EU's AI Act, whose obligations for general-purpose models take effect on August 2, 2025. These models, including foundation models from companies such as Google, OpenAI, Meta, Anthropic, and Mistral, face stricter requirements because of their potential impact on public safety and fundamental rights. Obligations include risk assessments, incident reporting, and cybersecurity measures. Foundation models must also meet transparency rules, such as providing technical documentation and summaries of training data. The guidelines aim to ease regulatory burdens and clarify compliance expectations ahead of the start of enforcement in 2026.
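The obligations above are procedural rather than technical, but their shape can be illustrated in code. As a purely illustrative sketch, not the Act's wording, the class, field names, and duty labels below are all this example's own assumptions for how a provider might track its compliance material:

```python
from dataclasses import dataclass, field

@dataclass
class ModelTransparencyRecord:
    """Illustrative container for the material the AI Act's transparency
    rules ask general-purpose model providers to maintain. All field and
    duty names are this sketch's assumptions, not the Act's wording."""
    model_name: str
    provider: str
    technical_documentation: dict = field(default_factory=dict)
    training_data_summary: str = ""
    systemic_risk: bool = False  # models deemed to pose systemic risk carry extra duties

    def obligations(self) -> list[str]:
        """Return the obligation categories that apply to this model."""
        duties = ["technical documentation", "training data summary"]
        if self.systemic_risk:
            duties += ["risk assessment", "incident reporting",
                       "cybersecurity measures"]
        return duties

record = ModelTransparencyRecord(
    model_name="example-foundation-model",
    provider="ExampleAI",
    training_data_summary="Public web text and licensed corpora.",
    systemic_risk=True,
)
print(record.obligations())
```

In this sketch, a model below the systemic-risk threshold would report only the two baseline transparency duties.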

Dutch government allocates €70 million for new AI facility – GLOBAL

Global

30-06-2025

The Dutch government has pledged €70 million to build an AI plant in Groningen, aiming to establish a European research hub for AI in fields like agriculture, healthcare, energy, and defense. Additional funding may come from the EU and the Groningen region, bringing the total to €200 million. Set to open in 2026, the plant is part of efforts to strengthen Europe’s digital independence and reduce reliance on U.S. tech firms.

U.S. lawmakers propose bill to ban Chinese AI technology from government agencies – GLOBAL

Global

30-06-2025

A bipartisan group of U.S. lawmakers has introduced the “No Adversarial AI Act,” a bill aiming to ban executive agencies from using AI models developed in China, Russia, Iran, and North Korea. Sparked by security concerns, particularly around Chinese firm DeepSeek’s alleged ties to China’s military, the bill mandates the creation of a list of banned foreign AI systems. Exemptions would require approval from Congress or the Office of Management and Budget.

McKinsey Survey Finds Responsible AI Key to Unlocking Full Potential of Technology – GLOBAL

Global

30-05-2025

The rapid advancement of generative AI and large language models (LLMs) is fueling widespread adoption across various business functions, with organizations aiming to enhance productivity, efficiency, and innovation. However, realizing these benefits depends on the safe and responsible deployment of AI technologies. Responsible AI (RAI) practices are a foundational component of a broader AI trust strategy, essential for generating confidence among customers, employees, and stakeholders in how organizations develop and use AI.

RAI involves addressing key issues such as data governance, explainability, fairness, privacy, security, and transparency. By embedding these principles into their AI initiatives, organizations can mitigate risks, strengthen accountability, and build long-term trust—ultimately maximizing the impact of their AI solutions.

A recent McKinsey survey of over 750 leaders from 38 countries sheds light on the current state of RAI implementation in enterprises across diverse industries, including technology, healthcare, finance, legal, and engineering. Using the McKinsey AI Trust Maturity Model—a comprehensive framework with four main dimensions (strategy, risk management, data and technology, and operating model) and 21 subdimensions—the survey evaluates organizations’ RAI maturity across four levels, from early-stage foundational efforts to fully developed, proactive programs. The findings offer valuable insights into how organizations are progressing on their responsible AI journeys and where gaps still exist.
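The survey's framework, four dimensions each rated on four maturity levels, lends itself to a simple aggregate score. Here is a minimal sketch: the dimension names and the endpoint labels ("foundational", "proactive") come from the text above, while the two intermediate labels and the floor-average aggregation rule are this example's own assumptions, not McKinsey's methodology:

```python
# Hypothetical scoring sketch for a four-dimension, four-level maturity model.
# Dimension names follow the survey's framework; the intermediate level labels
# and the aggregation rule (plain average, rounded down) are assumptions.
DIMENSIONS = ("strategy", "risk management", "data and technology", "operating model")
LEVELS = {1: "foundational", 2: "developing", 3: "established", 4: "proactive"}

def maturity_level(ratings: dict[str, int]) -> str:
    """Map per-dimension ratings (1-4) to an overall maturity label."""
    missing = set(DIMENSIONS) - set(ratings)
    if missing:
        raise ValueError(f"missing ratings for: {sorted(missing)}")
    if not all(1 <= r <= 4 for r in ratings.values()):
        raise ValueError("ratings must be between 1 and 4")
    overall = sum(ratings[d] for d in DIMENSIONS) // len(DIMENSIONS)  # floor average
    return LEVELS[overall]

example = {"strategy": 3, "risk management": 2,
           "data and technology": 2, "operating model": 2}
print(maturity_level(example))  # floor((3+2+2+2)/4) = 2 -> "developing"
```

A floor average means an organization only reaches "proactive" when it rates highly across all four dimensions, mirroring the survey's point that maturity gaps in any one area hold back the whole program.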

Trump White House Unveils AI Guidelines Prioritizing U.S.-Made Tech and Streamlined Procurement – GLOBAL

Global

30-04-2025

The White House Office of Management and Budget (OMB) released two memos on April 3 outlining updated guidance for the use and acquisition of artificial intelligence (AI) in federal agencies. These memos, issued under President Trump, replace previous Biden-era directives but retain many of the same frameworks, such as the roles of chief AI officers and risk management for high-impact AI systems.

Memo M-25-21 sets out three priorities for federal AI use: innovation, governance, and public trust. Memo M-25-22 addresses AI procurement, emphasizing U.S.-made AI, risk management, and collaboration, and introduces a 200-day deadline for creating a repository of federal AI procurement tools.

Despite maintaining key oversight structures, some specific AI use cases listed under Biden—like election-related tools or unauthorized voice replication—were omitted. Implementation concerns persist, as watchdog groups question whether all agencies, especially newer Trump-era entities like the Department of Government Efficiency (DOGE), will adhere to the rules.

Finally, the memos follow a broader Trump initiative to support AI infrastructure, including identifying federal land for AI data centers, continuing a policy initially advanced by the Biden administration.

European Commission Launches “AI Continent” Strategy to Lead in Trustworthy AI with €20B Investment and Startup Support – GLOBAL

Global

30-04-2025

The European Commission has launched the AI Continent Action Plan, a bold initiative to position the EU as a global leader in artificial intelligence. Building on the €200 billion InvestAI initiative, the plan focuses on boosting AI development and adoption across five key areas.

Key measures include establishing 13 AI factories and up to 5 AI gigafactories, expanding data infrastructure, and implementing a new Cloud and AI Development Act to triple the EU’s data centre capacity. The plan also prioritizes increasing access to high-quality data, promoting AI in strategic sectors like healthcare and the public sector, and strengthening AI talent through education and international recruitment.

To support businesses in complying with upcoming regulations, the Commission will launch an AI Act Service Desk offering guidance on implementation. The action plan reflects the EU’s goal to turn its research and industrial strengths into AI leadership while promoting innovation, competitiveness, and responsible development.

Spain Plans Major Fines for Unlabelled AI Content – GLOBAL

Global

31-03-2025

Spain has approved a draft bill to enforce strict regulations on AI-generated content, including hefty fines of up to €35 million or 7% of global turnover for companies that fail to label such content. The bill aligns with the EU’s AI Act and targets the spread of deepfakes and AI misuse, especially those posing risks to democracy and vulnerable groups.

The legislation, still pending approval from the lower house, bans subliminal manipulation, AI-based biometric profiling, and behavior-based scoring systems, while allowing biometric surveillance for national security. Oversight will be handled by Spain’s new AI agency, AESIA, with other regulators covering specific sectors like privacy, finance, and elections.
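The bill mandates labeling of AI-generated content but this summary does not specify a format. As a purely illustrative sketch (the class name and label wording are this example's assumptions, not anything the draft bill prescribes), a publishing pipeline might attach a disclosure like this:

```python
from dataclasses import dataclass

@dataclass
class LabeledContent:
    """Illustrative wrapper for content that may require an AI disclosure.
    The label wording below is this sketch's assumption; the Spanish draft
    bill mandates labeling but this summary gives no prescribed format."""
    body: str
    ai_generated: bool

    def render(self) -> str:
        # Prepend a human-readable disclosure when the content is AI-generated.
        if self.ai_generated:
            return "[AI-generated content] " + self.body
        return self.body

print(LabeledContent("Synthetic market summary.", True).render())
```

Human-authored content passes through unchanged, while anything flagged as AI-generated carries the disclosure wherever it is displayed.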

California Unveils AI Policy Report, Prompting Response from Anthropic – GLOBAL

Global

31-03-2025

California has released a draft AI policy report, reinforcing its leadership in the field. Commissioned by Governor Gavin Newsom and led by experts including Stanford's Dr. Fei-Fei Li, the report advocates responsible AI development through evidence-based policies, science-based standards, transparency, and public input. Key focus areas include using AI for the public good, strengthening education and workforce training, and addressing AI-related risks; the state invites feedback on the report until April 8, 2025. Newsom emphasized California's pivotal role in shaping the future of AI, given its status as home to many of the world's top AI companies.

AI company Anthropic supports the initiative, highlighting alignment with its own transparency and safety practices. Both the state and Anthropic stress the importance of collaborative governance and economic impact monitoring to ensure AI benefits society while supporting innovation.

EU Unveils Draft Code of Practice for AI Providers, Emphasizing Risk Management and Copyright Compliance – GLOBAL

Global

29-11-2024

On 14 November 2024, the EU introduced a draft Code of Practice for providers of general-purpose AI models with systemic risks. Developed through collaboration across four Working Groups, the draft focuses on transparency, risk identification, technical mitigation, and governance.

It is based on six key principles, including alignment with Union values, proportionality to risks, and future-proofing for AI advancements. Providers are required to continuously assess and mitigate risks throughout the AI lifecycle, with a focus on proportional risk management. Copyright compliance is emphasized, with providers needing to adhere to EU laws and conduct due diligence on training data. Additionally, the draft introduces an Acceptable Use Policy (AUP) outlining conditions for AI model use, including security, privacy, and compliance enforcement.