WORLDWIDE HOMOLOGATION, REGULATORY COMPLIANCE, TYPE APPROVAL SPECIALIST

Trump White House Unveils AI Guidelines Prioritizing U.S.-Made Tech and Streamlined Procurement – GLOBAL

Global

30-04-2025

Trump White House Unveils AI Guidelines Prioritizing U.S.-Made Tech and Streamlined Procurement

The White House Office of Management and Budget (OMB) released two memos on April 3 outlining updated guidance for the use and acquisition of artificial intelligence (AI) in federal agencies. These memos, issued under President Trump, replace previous Biden-era directives but retain many of the same frameworks, such as the roles of chief AI officers and risk management for high-impact AI systems.

Memo M-25-21, which covers agency use of AI, sets out three priorities: innovation, governance, and public trust. Memo M-25-22 addresses AI procurement, emphasizing U.S.-made AI, risk management, and collaboration, and introduces a 200-day deadline for creating a federal repository of AI procurement tools.

Although the memos maintain key oversight structures, some specific AI use cases listed under Biden, such as election-related tools and unauthorized voice replication, were omitted. Implementation concerns persist, as watchdog groups question whether all agencies, especially newer Trump-era entities like the Department of Government Efficiency (DOGE), will adhere to the rules.

Finally, the memos follow a broader Trump initiative to support AI infrastructure, including identifying federal land for AI data centers, continuing a policy initially advanced by the Biden administration.


European Commission Launches “AI Continent” Strategy to Lead in Trustworthy AI with €20B Investment and Startup Support – GLOBAL

Global

30-04-2025

European Commission Launches “AI Continent” Strategy to Lead in Trustworthy AI with €20B Investment and Startup Support

The European Commission has launched the AI Continent Action Plan, a bold initiative to position the EU as a global leader in artificial intelligence. Building on the €200 billion InvestAI initiative, the plan focuses on boosting AI development and adoption across five key areas.

Key measures include establishing 13 AI factories and up to 5 AI gigafactories, expanding data infrastructure, and implementing a new Cloud and AI Development Act to triple the EU’s data centre capacity. The plan also prioritizes increasing access to high-quality data, promoting AI in strategic sectors like healthcare and the public sector, and strengthening AI talent through education and international recruitment.

To support businesses in complying with upcoming regulations, the Commission will launch an AI Act Service Desk offering guidance on implementation. The action plan reflects the EU’s goal to turn its research and industrial strengths into AI leadership while promoting innovation, competitiveness, and responsible development.

Spain Plans Major Fines for Unlabelled AI Content – GLOBAL

Global

31-03-2025

Spain Plans Major Fines for Unlabelled AI Content

Spain has approved a draft bill to enforce strict regulations on AI-generated content, including hefty fines of up to €35 million or 7% of global turnover for companies that fail to label such content. The bill aligns with the EU’s AI Act and targets the spread of deepfakes and AI misuse, especially those posing risks to democracy and vulnerable groups.

The legislation, still pending approval from the lower house, bans subliminal manipulation, AI-based biometric profiling, and behavior-based scoring systems, while allowing biometric surveillance for national security. Oversight will be handled by Spain’s new AI agency, AESIA, with other regulators covering specific sectors like privacy, finance, and elections.
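
As a rough illustration of the penalty ceiling described above, the short Python sketch below computes a company's maximum fine exposure, assuming the cap works as it does under the EU AI Act, i.e. the higher of the fixed amount and the turnover-based percentage; the function name and the example turnover figure are purely illustrative and not taken from the draft bill.

# Illustrative sketch only: maximum fine exposure for unlabelled AI content
# under the Spanish draft bill, assuming (as under the EU AI Act) that the
# cap is the greater of the fixed amount and the turnover-based percentage.

FIXED_CAP_EUR = 35_000_000   # EUR 35 million fixed cap
TURNOVER_RATE = 0.07         # 7% of global turnover


def max_fine_exposure(global_turnover_eur: float) -> float:
    """Return the maximum possible fine for a company with the given
    global turnover, taking the higher of the two statutory caps."""
    return max(FIXED_CAP_EUR, TURNOVER_RATE * global_turnover_eur)


# Example: a company with EUR 2 billion in global turnover.
# 7% of turnover (EUR 140 million) exceeds the EUR 35 million fixed cap.
print(f"EUR {max_fine_exposure(2_000_000_000):,.0f}")  # EUR 140,000,000

Under this assumption, the 7% figure becomes the binding cap for any company whose global turnover exceeds EUR 500 million.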

California Unveils AI Policy Report, Prompting Response from Anthropic – GLOBAL

Global

31-03-2025

California Unveils AI Policy Report, Prompting Response from Anthropic

California has released a draft AI policy report, reinforcing its leadership in the field. Led by experts including Stanford’s Dr. Fei-Fei Li and commissioned by Governor Gavin Newsom, the report advocates for responsible AI development through evidence-based policies, transparency, safety standards, and public input. Newsom emphasized California’s pivotal role in shaping the future of AI, given its status as home to many of the world’s top AI companies.

Key focus areas include using AI for the public good, strengthening education and workforce training, and addressing AI-related risks. The state invites feedback on the report until April 8, 2025.

AI company Anthropic supports the initiative, highlighting alignment with its own transparency and safety practices. Both the state and Anthropic stress the importance of collaborative governance and economic impact monitoring to ensure AI benefits society while supporting innovation.


EU Unveils Draft Code of Practice for AI Providers, Emphasizing Risk Management and Copyright Compliance – GLOBAL

Global

29-11-2024

EU Unveils Draft Code of Practice for AI Providers, Emphasizing Risk Management and Copyright Compliance

On 14 November 2024, the EU introduced a draft Code of Practice for providers of general-purpose AI models with systemic risks. Developed through collaboration across four Working Groups, the draft focuses on transparency, risk identification, technical mitigation, and governance.

It is based on six key principles, including alignment with Union values, proportionality to risks, and future-proofing for AI advancements. Providers are required to continuously assess and mitigate risks throughout the AI lifecycle, with a focus on proportional risk management. Copyright compliance is emphasized, with providers needing to adhere to EU laws and conduct due diligence on training data. Additionally, the draft introduces an Acceptable Use Policy (AUP) outlining conditions for AI model use, including security, privacy, and compliance enforcement.

NBTC Chairman Leads Thailand’s Telecom Transformation with AI, Boosting Mobile Operators and Economic Growth – THAILAND

Thailand Regulatory Authority

Thailand

27-11-2024

NBTC Chairman Leads Thailand’s Telecom Transformation with AI, Boosting Mobile Operators and Economic Growth

At the ‘THAILAND 2025: Opportunity • Hope • Reality’ event, Clinical Professor Dr. Saran Boonbaichaipruk, Chairman of the National Broadcasting and Telecommunications Commission (NBTC), emphasized the transformative potential of AI in shaping Thailand’s future. He stressed the NBTC’s commitment to promoting AI access across all regions, ensuring that even remote areas can benefit from advanced digital technologies via telecommunications systems.

Dr. Saran discussed how AI is already integrated into daily life through devices like smartphones and IoT, collecting and analyzing personal data to improve user experiences. He noted that AI has the potential to solve complex problems that humans cannot, while also raising ethical and safety concerns. The NBTC aims to support AI adoption in key sectors, including healthcare, finance, transportation, and education.

Looking ahead, Dr. Saran outlined plans for international cooperation to enhance AI infrastructure, increase frequency allocation, and improve network efficiency. He highlighted the growing demand for mobile broadband in Thailand, with 5G playing a crucial role in sectors such as agriculture, urban management, and education. The NBTC will expand 5G coverage to ensure equitable access to high-speed internet and modernize Universal Service Obligation (USO) centers, particularly for the public health system.

The Chairman also announced initiatives to foster innovation and fair competition, including Regulatory Sandboxes to allow AI trials in areas like predictive maintenance and data-driven services. The NBTC will work closely with the private sector to create a robust AI ecosystem while ensuring safety standards and fair competition.

For more information kindly click here: Official Announcement

Indonesia’s Deputy Minister Nezar Patria Calls for Stronger Stakeholder Collaboration to Boost AI Research – INDONESIA

Indonesia Wireless Regulatory Services

Indonesia

27-11-2024

Indonesia's Deputy Minister Nezar Patria Calls for Stronger Stakeholder Collaboration to Boost AI Research

Deputy Minister of Communication and Digital Nezar Patria emphasized the importance of collaboration between stakeholders and the AI industry to strengthen AI research and development in Indonesia. Speaking at the World Public Relations Forum 2024 in Bali, Patria highlighted two key challenges: high costs for AI research and development, and a gap in digital talent. He stressed the need for all stakeholders to contribute to developing the necessary talent pool to optimize AI technology.

Patria also noted that a national strategy and regulations are being prepared to help integrate AI into Indonesia’s digital economy, ensuring its responsible adoption. He urged early preparation to harness AI for human benefit, aiming to maximize its potential while minimizing risks.

For more information kindly click here: Official Announcement

Singapore Unveils Guidelines and Companion Guide for Securing AI Systems – GLOBAL

Global

04-11-2024

Singapore Unveils Guidelines and Companion Guide for Securing AI Systems

On October 15, 2024, the Cyber Security Agency of Singapore (CSA) published the Singapore Guidelines on Securing AI Systems. These guidelines emphasize the need for AI systems to be secure by design and secure by default, allowing system owners to proactively manage security risks throughout the AI lifecycle. The guidelines aim to protect AI systems from both traditional cybersecurity threats, such as supply chain attacks, and emerging risks, including adversarial machine learning.

Organizations are encouraged to enhance awareness and provide training on security risks associated with AI, ensuring that all personnel are equipped to make informed decisions regarding AI adoption. Additionally, the guidelines recommend establishing incident management procedures.

Accompanying the guidelines is a Companion Guide on Securing AI Systems, which offers voluntary practical measures, security controls, and best practices to assist system owners in implementing effective security strategies.

G7 Data Protection Authorities Focus on Ethical AI Development and Data Privacy – GLOBAL

Global

04-11-2024

G7 Data Protection Authorities Focus on Ethical AI Development and Data Privacy

From October 9 to 11, 2024, the G7 Data Protection Authorities (DPAs) held a roundtable focused on promoting the ethical development of Artificial Intelligence (AI). Key topics of discussion included Data Free Flow with Trust (DFFT), emerging technologies, and enhancing enforcement cooperation among member nations.

The G7 Action Plan aims to foster DFFT through international collaboration, regulatory alignment among G7 data protection authorities, and addressing privacy concerns related to emerging technologies such as AI. Following the roundtable, a statement was issued highlighting the critical role of DPAs in ensuring that AI technologies are trustworthy and comply with established data protection principles, including fairness, accountability, transparency, and security.

The DPAs emphasized their human-centric approach, prioritizing the protection of individual rights and freedoms in the context of personal data processing within AI systems. The meeting underscored the essential role of DPAs in overseeing personal data, particularly in the responsible development and deployment of generative AI applications.

New Jersey Proposes Law Requiring AI Companies to Conduct Safety Tests and Report Findings – GLOBAL

Global

04-11-2024

New Jersey Proposes Law Requiring AI Companies to Conduct Safety Tests and Report Findings

On October 7, 2024, Senator Troy Singleton (D) introduced bill S3742, which mandates that artificial intelligence companies in New Jersey conduct annual safety tests on their technologies. The bill defines an “artificial intelligence company” as any entity engaged in the sale, development, deployment, use, or offering of AI technology within the state.

The safety tests will focus on assessing potential cybersecurity threats, identifying data biases and inaccuracies, and ensuring compliance with both state and federal laws. The bill also requires companies to report the results of these tests to the New Jersey Office of Information Technology (OIT) through an annual report, which must include details about the technologies tested, descriptions of the safety tests conducted, third-party involvement, and the outcomes of each test. The OIT is further tasked with establishing minimum requirements for AI safety tests performed in New Jersey.
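
To make the reporting requirement more concrete, the following Python sketch shows one hypothetical way the report fields enumerated in the bill (technologies tested, test descriptions, third-party involvement, and outcomes) could be structured; the class names, field names, and example values are illustrative and are not drawn from the bill text.

# Hypothetical sketch of the annual report fields described in S3742;
# names and values are illustrative, not taken from the bill.
from dataclasses import dataclass, field
from typing import List


@dataclass
class SafetyTestResult:
    technology_tested: str                  # AI technology covered by the test
    test_description: str                   # what the safety test assessed
    third_parties_involved: List[str] = field(default_factory=list)
    outcome: str = ""                       # result of the test


@dataclass
class AnnualSafetyReport:
    company_name: str
    reporting_year: int
    tests: List[SafetyTestResult] = field(default_factory=list)


# Example: a single test entry covering a bias and accuracy assessment.
report = AnnualSafetyReport(
    company_name="Example AI Co.",
    reporting_year=2025,
    tests=[SafetyTestResult(
        technology_tested="Customer-service chatbot",
        test_description="Data bias and accuracy assessment",
        third_parties_involved=["Independent audit firm"],
        outcome="No material bias found",
    )],
)
print(len(report.tests))  # 1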