WORLDWIDE HOMOLOGATION, REGULATORY COMPLIANCE, TYPE APPROVAL SPECIALIST

NBTC Chairman Leads Thailand’s Telecom Transformation with AI, Boosting Mobile Operators and Economic Growth – THAILAND

Thailand Regulatory Authority

Thailand

27-11-2024

NBTC Chairman Leads Thailand’s Telecom Transformation with AI, Boosting Mobile Operators and Economic Growth

At the ‘THAILAND 2025: Opportunity • Hope • Reality’ event, Clinical Professor Dr. Saran Boonbaichaipruk, Chairman of the National Broadcasting and Telecommunications Commission (NBTC), emphasized the transformative potential of AI in shaping Thailand’s future. He stressed the NBTC’s commitment to promoting AI access across all regions, ensuring that even remote areas can benefit from advanced digital technologies via telecommunications systems.

Dr. Saran discussed how AI is already integrated into daily life through devices like smartphones and IoT, collecting and analyzing personal data to improve user experiences. He noted that AI has the potential to solve complex problems that humans cannot, while also raising ethical and safety concerns. The NBTC aims to support AI adoption in key sectors, including healthcare, finance, transportation, and education.

Looking ahead, Dr. Saran outlined plans for international cooperation to enhance AI infrastructure, increase frequency allocation, and improve network efficiency. He highlighted the growing demand for mobile broadband in Thailand, with 5G playing a crucial role in sectors such as agriculture, urban management, and education. The NBTC will expand 5G coverage to ensure equitable access to high-speed internet and modernize Universal Service Obligation (USO) centers, particularly for the public health system.

The Chairman also announced initiatives to foster innovation and fair competition, including Regulatory Sandboxes to allow AI trials in areas like predictive maintenance and data-driven services. The NBTC will work closely with the private sector to create a robust AI ecosystem while ensuring safety standards and fair competition.

For more information, click here: Official Announcement

Indonesia’s Deputy Minister Nezar Patria Calls for Stronger Stakeholder Collaboration to Boost AI Research – INDONESIA

Indonesia Wireless Regulatory Services

Indonesia

27-11-2024

Indonesia’s Deputy Minister Nezar Patria Calls for Stronger Stakeholder Collaboration to Boost AI Research

Deputy Minister of Communication and Digital Nezar Patria emphasized the importance of collaboration between stakeholders and the AI industry to strengthen AI research and development in Indonesia. Speaking at the World Public Relations Forum 2024 in Bali, Patria highlighted two key challenges: high costs for AI research and development, and a gap in digital talent. He stressed the need for all stakeholders to contribute to developing the necessary talent pool to optimize AI technology.

Patria also noted that a national strategy and regulations are being prepared to help integrate AI into Indonesia’s digital economy, ensuring its responsible adoption. He urged early preparation to harness AI for human benefit, aiming to maximize its potential while minimizing risks.

For more information, click here: Official Announcement

Singapore Unveils Guidelines and Companion Guide for Securing AI Systems – GLOBAL

Global

04-11-2024

Singapore Unveils Guidelines and Companion Guide for Securing AI Systems

On October 15, 2024, the Cyber Security Agency of Singapore (CSA) published the Singapore Guidelines on Securing AI Systems. These guidelines emphasize the need for AI systems to be secure by design and secure by default, allowing system owners to proactively manage security risks throughout the AI lifecycle. The guidelines aim to protect AI systems from both traditional cybersecurity threats, such as supply chain attacks, and emerging risks, including adversarial machine learning.

Organizations are encouraged to enhance awareness and provide training on security risks associated with AI, ensuring that all personnel are equipped to make informed decisions regarding AI adoption. Additionally, the guidelines recommend establishing incident management procedures.

Accompanying the guidelines is a Companion Guide on Securing AI Systems, which offers voluntary practical measures, security controls, and best practices to assist system owners in implementing effective security strategies.

G7 Data Protection Authorities Focus on Ethical AI Development and Data Privacy – GLOBAL

Global

04-11-2024

G7 Data Protection Authorities Focus on Ethical AI Development and Data Privacy

From October 9 to 11, 2024, the G7 Data Protection Authorities (DPAs) held a roundtable focused on promoting the ethical development of Artificial Intelligence (AI). Key topics of discussion included Data Free Flow with Trust (DFFT), emerging technologies, and enhancing enforcement cooperation among member nations.

The G7 Action Plan aims to foster DFFT through international collaboration, regulatory alignment among G7 data protection authorities, and addressing privacy concerns related to emerging technologies such as AI. Following the roundtable, a statement was issued highlighting the critical role of DPAs in ensuring that AI technologies are trustworthy and comply with established data protection principles, including fairness, accountability, transparency, and security.

The DPAs emphasized their human-centric approach, prioritizing the protection of individual rights and freedoms in the context of personal data processing within AI systems. The meeting underscored the essential role of DPAs in overseeing personal data, particularly in the responsible development and deployment of generative AI applications.

New Jersey Proposes Law Requiring AI Companies to Conduct Safety Tests and Report Findings – GLOBAL

Global

04-11-2024

New Jersey Proposes Law Requiring AI Companies to Conduct Safety Tests and Report Findings

On October 7, 2024, Senator Troy Singleton (D) introduced bill S3742, which mandates that artificial intelligence companies in New Jersey conduct annual safety tests on their technologies. The bill defines an “artificial intelligence company” as any entity engaged in the sale, development, deployment, use, or offering of AI technology within the state.

The safety tests will focus on assessing potential cybersecurity threats, identifying data biases and inaccuracies, and ensuring compliance with both state and federal laws. Additionally, the bill requires companies to report the results of these tests to the Office of Information Technology (OIT) through an annual report. This report must include details about the technologies tested, descriptions of the safety tests conducted, any third-party involvement, and the outcomes of each test for review by the OIT. The OIT is also tasked with establishing minimum requirements for AI safety tests performed in New Jersey.

U.S. Department of Labor Issues Guidance on AI and Worker Well-Being – GLOBAL

Global

04-11-2024

U.S. Department of Labor Issues Guidance on AI and Worker Well-Being

On October 16, 2024, the U.S. Department of Labor (DOL) announced the release of a new roadmap aimed at enhancing worker well-being, titled *Artificial Intelligence and Worker Well-Being: Principles and Best Practices for Developers and Employers.* This document builds on guidance issued in May regarding AI principles for worker well-being and outlines specific best practices to help developers and employers implement the eight established principles effectively.

The roadmap responds to President Biden’s Executive Order on the safe and trustworthy use of AI, underscoring the critical need to inform workers—particularly those from underserved communities—and to engage them in the design, development, testing, and oversight of AI systems used in the workplace.

Germany and France Release Joint Recommendations for AI Coding Assistants – GLOBAL

Global

04-11-2024

Germany and France Release Joint Recommendations for AI Coding Assistants

On October 4, 2024, the French Cybersecurity Agency (ANSSI) and the German Federal Office for Information Security (BSI) jointly released a set of recommendations aimed at ensuring the secure use of AI coding assistants. This initiative comes in response to the growing integration of AI tools in programming, highlighting both their potential benefits and the inherent risks they pose.

The recommendations are grounded in a comprehensive analysis that identifies various risks associated with AI programming assistants, while also proposing effective mitigation strategies. Among the key recommendations, the agencies advise users to exercise caution when employing AI tools, stressing the importance of robust security measures to protect against potential vulnerabilities.

Additionally, the guidelines emphasize the necessity of conducting thorough risk assessments before deploying AI coding assistants. They recommend that experienced developers closely review any code generated by these AI tools to ensure its reliability and security. This multi-faceted approach is designed to mitigate risks effectively and safeguard sensitive data from potential threats, underscoring the importance of vigilance in the rapidly evolving landscape of AI technology.

Bridging the Gap: Evaluating Machine Learning Tools in Healthcare to Enhance Diagnosis and Treatment – GLOBAL

Global

31-10-2024

Bridging the Gap: Evaluating Machine Learning Tools in Healthcare to Enhance Diagnosis and Treatment

Healthcare systems are increasingly leveraging digital technologies, generating vast amounts of data that machine-learning algorithms can analyze to support diagnosis, prognosis, triage, and treatment of diseases. However, the integration of these algorithms into clinical practice is often impeded by insufficient evaluation across different environments. To address this, guidelines for evaluating machine learning tools in healthcare (ML4H) have been developed, focusing on assessing models for bias, interpretability, robustness, and potential failure modes. 

This study employed an ML4H audit framework across three use cases, revealing varied outcomes while underscoring the need for context-specific quality assessment and detailed evaluation. The paper recommends enhancements for future ML4H evaluation frameworks and explores the challenges associated with assessing bias, interpretability, and robustness. Standardized evaluation and reporting of ML4H quality are crucial for effectively integrating machine learning algorithms into medical practice.