WORLDWIDE HOMOLOGATION, REGULATORY COMPLIANCE, TYPE APPROVAL SPECIALIST

Singapore Unveils Guidelines and Companion Guide for Securing AI Systems – GLOBAL

04-11-2024

On October 15, 2024, the Cyber Security Agency of Singapore (CSA) published the Singapore Guidelines on Securing AI Systems. These guidelines emphasize the need for AI systems to be secure by design and secure by default, allowing system owners to proactively manage security risks throughout the AI lifecycle. The guidelines aim to protect AI systems from both traditional cybersecurity threats, such as supply chain attacks, and emerging risks, including adversarial machine learning.

Organizations are encouraged to enhance awareness and provide training on security risks associated with AI, ensuring that all personnel are equipped to make informed decisions regarding AI adoption. Additionally, the guidelines recommend establishing incident management procedures.

Accompanying the guidelines is a Companion Guide on Securing AI Systems, which offers voluntary practical measures, security controls, and best practices to assist system owners in implementing effective security strategies.

G7 Data Protection Authorities Focus on Ethical AI Development and Data Privacy – GLOBAL

04-11-2024

From October 9 to 11, 2024, the G7 Data Protection Authorities (DPAs) held a roundtable focused on promoting the ethical development of Artificial Intelligence (AI). Key topics of discussion included Data Free Flow with Trust (DFFT), emerging technologies, and enhancing enforcement cooperation among member nations.

The G7 Action Plan aims to foster DFFT through international collaboration, regulatory alignment among G7 data protection authorities, and addressing privacy concerns related to emerging technologies such as AI. Following the roundtable, a statement was issued highlighting the critical role of DPAs in ensuring that AI technologies are trustworthy and comply with established data protection principles, including fairness, accountability, transparency, and security.

The DPAs emphasized their human-centric approach, prioritizing the protection of individual rights and freedoms in the context of personal data processing within AI systems. The meeting underscored the essential role of DPAs in overseeing personal data, particularly in the responsible development and deployment of generative AI applications.

New Jersey Proposes Law Requiring AI Companies to Conduct Safety Tests and Report Findings – GLOBAL

04-11-2024

On October 7, 2024, Senator Troy Singleton (D) introduced bill S3742, which would require artificial intelligence companies in New Jersey to conduct annual safety tests on their technologies. The bill defines an "artificial intelligence company" as any entity engaged in the sale, development, deployment, use, or offering of AI technology within the state.

The safety tests would focus on assessing potential cybersecurity threats, identifying data biases and inaccuracies, and ensuring compliance with both state and federal laws. The bill also requires companies to report the results of these tests to the Office of Information Technology (OIT) in an annual report, which must include details about the technologies tested, descriptions of the safety tests conducted, any third-party involvement, and the outcomes of each test. The OIT is further tasked with establishing minimum requirements for AI safety tests performed in New Jersey.

U.S. Department of Labor Issues Guidance on AI and Worker Well-Being – GLOBAL

04-11-2024

On October 16, 2024, the U.S. Department of Labor (DOL) announced the release of a new roadmap aimed at enhancing worker well-being, titled "Artificial Intelligence and Worker Well-Being: Principles and Best Practices for Developers and Employers." This document builds on guidance issued in May 2024 regarding AI principles for worker well-being and outlines specific best practices to help developers and employers implement the eight established principles effectively.

The roadmap responds to President Biden’s Executive Order on the safe and trustworthy use of AI, underscoring the critical need to inform workers—particularly those from underserved communities—and to engage them in the design, development, testing, and oversight of AI systems used in the workplace.

Germany and France Release Joint Recommendations for AI Coding Assistants – GLOBAL

04-11-2024

On October 4, 2024, the French Cybersecurity Agency (ANSSI) and the German Federal Office for Information Security (BSI) jointly released a set of recommendations aimed at ensuring the secure use of AI coding assistants. This initiative comes in response to the growing integration of AI tools in programming, highlighting both their potential benefits and the inherent risks they pose.

The recommendations are grounded in a comprehensive analysis that identifies various risks associated with AI programming assistants, while also proposing effective mitigation strategies. Among the key recommendations, the agencies advise users to exercise caution when employing AI tools, stressing the importance of robust security measures to protect against potential vulnerabilities.

Additionally, the guidelines emphasize the necessity of conducting thorough risk assessments before deploying AI coding assistants, and recommend that experienced developers closely review any code generated by these tools to ensure its reliability and security. This multi-faceted approach is designed to mitigate risks effectively and safeguard sensitive data from potential threats, underscoring the importance of vigilance in the rapidly evolving landscape of AI technology.

Bridging the Gap: Evaluating Machine Learning Tools in Healthcare to Enhance Diagnosis and Treatment – GLOBAL

31-10-2024

Healthcare systems are increasingly leveraging digital technologies, generating vast amounts of data that machine-learning algorithms can analyze to support the diagnosis, prognosis, triage, and treatment of diseases. However, the integration of these algorithms into clinical practice is often impeded by insufficient evaluation across different environments. To address this, guidelines for evaluating machine learning tools in healthcare (ML4H) have been developed, focusing on assessing models for bias, interpretability, robustness, and potential failure modes.

This study employed an ML4H audit framework across three use cases, revealing varied outcomes while underscoring the need for context-specific quality assessment and detailed evaluation. The paper recommends enhancements for future ML4H evaluation frameworks and explores the challenges associated with assessing bias, interpretability, and robustness. Standardized evaluation and reporting of ML4H quality are crucial for effectively integrating machine learning algorithms into medical practice.