
Trustworthy Artificial Intelligence has the potential to help achieve important, agreed policy goals such as inclusive growth and the Sustainable Development Goals. It can also amplify today’s social, economic and geopolitical problems, and at worst it can be deployed for nefarious and destructive purposes. The path the world takes with AI is ultimately a policy choice. While interest in medical AI has grown and private-sector activity is expanding rapidly, widespread use is still limited. This gives policymakers an opportunity to stay ahead of the curve: to discuss how best to take advantage of the real opportunities AI presents, while also putting in place mechanisms to prevent, reduce, and contain its risks.

The value-based G20 AI Principles aim to foster innovation and trust in AI by promoting the responsible stewardship of trustworthy AI while ensuring respect for human rights and democratic values. They identify five complementary, values-based principles: inclusive growth, sustainable development and well-being; human-centered values and fairness; transparency and explainability; robustness, security and safety; and accountability.

Trustworthy Artificial Intelligence


In addition to, and consistent with, these value-based principles, the G20 AI Principles set out five recommendations to policymakers on national policy and international cooperation for trustworthy AI: invest in AI research and development; foster a digital ecosystem for AI; shape an enabling policy environment for AI; build human capacity and prepare for labor-market transformation; and cooperate internationally for trustworthy AI. The principles and recommendations most relevant to health are arguably the following: foster a digital ecosystem for AI by supporting the safe, fair, legal and ethical sharing of data; operationalize the AI principles, in particular by ensuring the transparency, explainability and accountability of AI outputs; provide regulatory oversight and guidance that encourages innovation in trustworthy AI; build human capacity (among healthcare workers, but also patients and residents, who must become comfortable with ‘smart’ machines); and allocate long-term public investment (in areas where resources are increasingly scarce).

AI in Healthcare

AI cannot work as intended without good data to learn from. High-quality, representative, real-world data are essential to minimize the risk of errors and biases. Creating an environment in which such data – especially personal health data – are available to AI researchers and developers in a secure way that respects individual privacy and autonomy is fundamental. This requires robust frameworks for managing health data, within and across countries.


A lack of trust among patients, the public, data custodians and other stakeholders about how data are used and protected is a major obstacle to data use and sharing. Personal health data are sensitive, and privacy is understandably one of the most commonly cited barriers to using them. However, the potential benefits of using personal health data to generate new knowledge should not be understated – for example in testing much-needed drugs and vaccines, as the COVID-19 crisis is currently highlighting. Healthcare leaders should communicate the benefits of using health data, challenge the assumption that risk lies only in using data, and draw attention to the benefits foregone by individuals and society when data are not put to work. It is also essential to dispel the idea that there is an inherent trade-off between data protection and the secondary use of health data.

A risk-management approach and the careful implementation of good practices can enable both data protection and data use. Formal, periodically updated risk-management processes should address risks such as data loss, re-identification, breach or other misuse, especially when new programs are established or new methods introduced. For a number of reasons (e.g., representativeness and breadth of input), many healthcare AI applications will benefit significantly from cross-border collaboration in the processing of personal health data for purposes that serve the public interest. This includes identifying and removing barriers to effective cross-border cooperation in the processing of personal health data, as well as engaging relevant experts and organizations to develop mechanisms that enable the efficient exchange and interoperability of health data, including standards for data and terminology exchange.
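One common good practice in this space is pseudonymization: replacing direct identifiers with keyed hashes before data are shared for research. The sketch below is purely illustrative – the record format, the `MRN-` identifier scheme, and the key-management arrangement are assumptions, not a prescribed standard – and pseudonymization reduces, but does not eliminate, re-identification risk.

```python
import hmac
import hashlib

# Hypothetical secret key; in practice this would be generated and held
# by the data custodian and never shared with the receiving party.
SECRET_KEY = b"replace-with-a-custodian-managed-secret"

def pseudonymize(patient_id: str) -> str:
    """Replace a direct identifier with a keyed hash (a pseudonym).

    The same patient always maps to the same pseudonym, so records can
    still be linked across datasets, but the mapping cannot be reversed
    without the key.
    """
    return hmac.new(SECRET_KEY, patient_id.encode(), hashlib.sha256).hexdigest()

# Two records for the same (hypothetical) patient link under one pseudonym,
# and the pseudonym reveals nothing about the original identifier.
record_a = pseudonymize("MRN-000123")
record_b = pseudonymize("MRN-000123")
assert record_a == record_b
assert record_a != "MRN-000123"
```

Using a keyed HMAC rather than a plain hash matters here: without the secret key, an attacker who can enumerate plausible identifiers cannot simply hash them all and match the results.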


Data sharing across jurisdictions is central to advancing AI research in areas such as cancer and rare diseases, which require sufficiently large, representative and complete datasets (and pooling data may also reduce AI-related carbon emissions). Cross-border data sharing is also important in a pandemic such as COVID-19, when infection spreads globally and coordinated action is required. Given the fundamental importance of good data to AI, failure to implement robust governance models will hinder, and ultimately stall, the potential benefits of this powerful technology.


Agreement on the value-based elements of the G20 AI Principles is an important achievement, but only a beginning. Implementing them consistently across countries will be a challenge. For example, AI actors should commit to transparency, explainability and responsible disclosure regarding AI systems (G20 AI Principles). Implementing this in practice can be technically difficult: many modern AI systems are cloud-based and geographically distributed, with computing architectures whose parallelism is indeterminate and whose physical location at any point in time cannot be identified. In principle, generating and maintaining a complete log of every processing step would support traceability, but this may be too costly and cumbersome to implement. The result is limited traceability, and performance that is essentially non-reproducible and non-retestable in a particular patient’s or clinician’s deployment. An AI system may therefore contain or produce errors and failures – incorrect, unexpected deviations from its specifications, validation testing and hazard analysis – that pose specific problems for regulators, courts, developers and the public.
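Short of the full processor-level logging the text describes as impractical, a lighter-weight option is a tamper-evident audit trail of individual inferences. The sketch below is a minimal illustration, not a regulatory-grade design: the field names, the example inputs and risk scores, and the hash-chaining scheme are all assumptions introduced here.

```python
import hashlib
import json
from datetime import datetime, timezone

def log_inference(prev_hash: str, model_version: str,
                  inputs: dict, output: dict) -> dict:
    """Create one tamper-evident audit record for a model inference.

    Each entry embeds the hash of the previous entry, so any later
    alteration of an earlier record breaks the chain and is detectable.
    Inputs are stored only as a hash, which limits what the log itself
    reveals about patient data.
    """
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "input_hash": hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()).hexdigest(),
        "output": output,
        "prev_hash": prev_hash,
    }
    entry["entry_hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()
    return entry

# Hypothetical usage: chain two inference records together.
genesis = "0" * 64
e1 = log_inference(genesis, "model-v1.2", {"age": 61, "bp": 140}, {"risk": 0.82})
e2 = log_inference(e1["entry_hash"], "model-v1.2", {"age": 47, "bp": 118}, {"risk": 0.21})
assert e2["prev_hash"] == e1["entry_hash"]
```

Such a log does not make a distributed system reproducible, but it does give regulators and courts a verifiable record of which model version produced which output, and when.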

AI actors are also responsible, within their respective roles, for the proper functioning of their algorithms. The European Union recently released a report stating that producers of digital content, or of products that integrate emerging digital technologies, are liable for damage caused by defects in their products – even if the defect results from changes made to the product, while under the producer’s control, after it has been placed on the market.


Ensuring the robustness, security, and safety of AI algorithms and applications is paramount, and the FDA recently proposed a framework for managing algorithms that evolve over time. Among its expectations is that developers monitor how their algorithms change, to ensure they continue to function as designed, and notify the agency when they observe changes; unexpected changes may require re-evaluation.
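In practice, monitoring an evolving algorithm often means comparing the distribution of its live outputs against the distribution seen at validation time. The sketch below uses the Population Stability Index (PSI), a common drift metric, with the usual rule-of-thumb threshold of about 0.2 as a re-evaluation trigger; the sample scores, the binning, and the threshold are illustrative assumptions, not FDA requirements.

```python
import math

def psi(expected: list, observed: list, bins: int = 10) -> float:
    """Population Stability Index between two score samples in [0, 1].

    PSI near 0 means the observed distribution matches the expected one;
    values above ~0.2 are conventionally taken to signal significant drift.
    """
    edges = [i / bins for i in range(bins + 1)]

    def frac(sample, lo, hi):
        count = sum(1 for x in sample
                    if lo <= x < hi or (hi == 1.0 and x == 1.0))
        return max(count / len(sample), 1e-6)  # floor avoids log(0)

    total = 0.0
    for lo, hi in zip(edges, edges[1:]):
        e, o = frac(expected, lo, hi), frac(observed, lo, hi)
        total += (o - e) * math.log(o / e)
    return total

# Hypothetical risk scores recorded at validation time vs. in live use.
validation_scores = [0.1, 0.2, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]
drifted_scores    = [0.7, 0.8, 0.8, 0.9, 0.9, 0.9, 0.95, 0.95, 0.99, 0.99]

assert psi(validation_scores, validation_scores) < 0.01  # no drift
assert psi(validation_scores, drifted_scores) > 0.2      # drift: re-evaluate
```

A production monitor would of course track many features and outcomes, not a single score, but the principle is the same: detect when behavior departs from what was validated, then escalate.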

Operationalizing the value-based AI Principles will require significant investments of financial and political capital. AI in healthcare will be a lucrative business, and it will take considerable political will to enact legislation that guarantees openness, transparency, and accountability. For example, some advocate that personal health data are like any other good owned by the data subject, who should be free to sell or exchange them. While the question of ownership can be debated, there is little doubt that commodifying medical data in this way would encourage poorer, more disadvantaged people to sell theirs. Beyond the ethical problems with such a policy, from a purely technical standpoint it would create sampling bias in the data used to train AI models. It might increase the representation of patients of lower socioeconomic status in AI algorithms, but there are other ways to improve representation that do not involve one group of patients selling their medical data.


AI is new territory for health policymakers, providers, and the public. To promote AI that patients and the community can trust, the policy environment must be conducive to AI and include a risk-management approach. Such regulation can allow healthcare developers and providers to test and evaluate innovative products, services, and business models in a live environment, with appropriate oversight and safeguards that shield the broader health system from potential unintended risks and consequences.

The Principles also specify that AI actors should create and publish AI usage policies and keep users, consumers, and others informed about the use of AI. It is encouraging to see broad consensus on the need for AI principles and values, with at least 84 public–private initiatives describing high-level principles, values, and tenets to guide the ethical development, deployment, and governance of AI. However, a multitude of frameworks threatens to hamper international cooperation. States have a responsibility to build on this shared set of values to develop and implement the necessary policies, regulations, and legal frameworks. Consistency across jurisdictions benefits everyone, and the practical and technological challenges facing some AI principles can probably be better overcome through international cooperation for trustworthy AI.


So far, there is no evidence that AI will replace humans in healthcare, but there is a widespread view that it will fundamentally reshape human tasks, skills, and responsibilities. Given the scale at which AI can change the healthcare landscape, the way healthcare workers – and indeed the entire workforce – are educated, trained, and socialized will need to adapt. The approach will need to be multidisciplinary, involving AI developers, implementers, health-system leaders, frontline clinical teams, ethicists, humanists, and patients and caregivers; each brings a distinct perspective and lived experience, and each needs to be heard.

New jobs and professions will be needed to realize the potential health benefits of AI: trainers, explainers, and maintainers. Trainers will give AI systems meaning, purpose, and direction; explainers will use their knowledge of both the technical and the applied domains to explain how AI algorithms can be trusted to support decisions; and maintainers will help maintain, interpret, and monitor the behavior and unintended consequences of AI systems.


Preparing the health system to manage risk and get the most out of AI requires long-term strategic investment. Strategic, coordinated, and sustained resources are needed to ensure that AI delivers the desired health, social, and economic outcomes, on a trajectory similar to the successful industrial revolutions of the past. Public resources are always scarce, but they must be found, given the profound opportunity AI offers to deliver better and more equitable health outcomes – and to act as a counterweight to private investment.

Besides developing AI tools themselves, governments and public institutions should dedicate resources to guardrails that ensure the technology has maximum benefit and minimal harm, and to checks and balances that steer the private sector in the right direction. This includes establishing and maintaining sound policy frameworks, and building the institutional and human capacity to verify AI technology as needed and to use it appropriately and cost-effectively.

Evaluating the economic benefits of AI, and its advantages over conventional and cheaper techniques, must also be an important consideration given AI’s comparative cost. National health systems have often under-invested in information systems, despite the primary importance of information, communication, and knowledge in this sector. To be clear, the fiscal space to invest in steering AI in healthcare must be found. Fortunately, implementing AI (and digital technology more broadly) offers an opportunity to reduce waste and bring efficiency to an inefficient and wasteful industry. Unlike drugs or medical devices, AI is a ‘general-purpose’ technology that can be applied to almost any aspect or activity of healthcare. Rather than creating new processes or activities (although there are valuable research applications of AI – in drug discovery, for example), AI can essentially make existing administrative and clinical processes more efficient and fairer. With roughly a fifth of health spending wasted or even harmful, this is a huge opportunity to improve outcomes and value.

Application of TAI Principles Across the AI Lifecycle

Hekate welcomes the goal of addressing the concerns and expectations that surround AI. Medical technologies are enabling more accurate disease prevention, diagnosis, treatment, and monitoring, and innovation in this field continues to evolve through breakthroughs in science and the digital revolution, including Artificial Intelligence (AI). While AI in healthcare today can largely be governed under existing regulatory frameworks, autonomous, highly iterative AI models and technologies may require additional (technology-neutral) guideline development and/or an international regulatory approach – one that facilitates rapid regulatory approval through the product-improvement cycle and enables these devices to improve continuously while providing effective safeguards.


Hekate is committed to being a cooperative and trusted partner to health organizations as they lay the groundwork for a comprehensive societal framework for AI. A common AI policy should be forward-looking, dynamic, and sustainable, encouraging all stakeholders involved in AI development to work together towards trustworthy AI deployment in Vietnam and beyond.