Trump administration updates AI strategy, with emphasis on transparency, data integrity

The strategic plan boils down to eight strategies for how government can better enable the development of safe and effective AI and machine learning technologies for healthcare and other industries:

- Make long-term investments in AI research, prioritizing next-generation applications that can help “drive discovery and insight and enable the United States to remain a world leader in AI.”
- Develop more effective strategies for human-AI collaboration, with a focus on AI systems that “effectively complement and augment human capabilities.”
- Understand the “ethical, legal, and societal implications of AI” and how they can be addressed through the technology.
- Work to ensure AI systems’ safety and security, and spread knowledge of “how to design AI systems that are reliable, dependable, safe, and trustworthy.”
- Create high-quality, shared public datasets and environments for AI training and testing.
- Measure and evaluate AI with standards and benchmarks, eventually arriving at a broad set of evaluative techniques, including technical standards.
- Better understand the workforce needs of AI researchers and developers nationwide, and work strategically to foster an AI-ready workforce.
- Expand existing public-private partnerships, and create new ones, to speed advances in AI, promoting opportunities for sustained investment in R&D and for “transitioning advances into practical capabilities, in collaboration with academia, industry, international partners, and other non-Federal entities.”

The 50-page document takes special interest in ensuring that the data used to power AI is trustworthy and that the algorithms used to process it are understandable, not least in healthcare.

“A key research challenge is increasing the ‘explainability’ or ‘transparency’ of AI,” according to the report. “Many algorithms, including those based on deep learning, are opaque to users, with few existing mechanisms for explaining their results. This is especially problematic for domains such as healthcare, where doctors need explanations to justify a particular diagnosis or a course of treatment. AI techniques such as decision-tree induction provide built-in explanations but are generally less accurate. Thus, researchers must develop systems that are transparent, and intrinsically capable of explaining the reasons for their results to users.”
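The contrast the report draws, with deep learning models being opaque while decision-tree induction explains itself, can be made concrete with a few lines of code. The sketch below is a minimal illustration and not drawn from the report itself; the use of scikit-learn, the breast-cancer demonstration dataset, and the depth-3 tree are all assumptions chosen for brevity.

```python
# Minimal sketch (illustrative only): a decision tree exposes its own rules,
# which is the kind of built-in explanation the report contrasts with opaque
# deep-learning models. Dataset and library choice are assumptions.
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True, as_frame=True)

# A shallow tree keeps the rule set short enough for a clinician to read.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# export_text prints the learned if/then rules, i.e. the model's own rationale.
print(export_text(tree, feature_names=list(X.columns)))

# For a single case, decision_path shows exactly which rules fired.
sample = X.iloc[[0]]
print("Predicted class:", tree.predict(sample)[0])
print("Nodes visited:", tree.decision_path(sample).indices.tolist())
```

Here export_text renders the full set of learned if/then rules, and decision_path lists the nodes a single case traverses, a trace that can be read directly as the reasoning behind that prediction.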



