What Is the Right Evidence Standard for AI?

Innovations in medications and medical devices are required to undergo extensive evaluation, often including randomized clinical trials and postmarketing surveillance, to validate clinical effectiveness and safety. If AI is to directly influence and improve clinical care delivery, then an analogous evidence standard is needed to demonstrate improved outcomes and an absence of unintended consequences. The evidence standard for AI tasks is currently ill-defined but should likely be proportionate to the task at hand. For example, validating the accuracy of AI-enabled imaging applications against current quality standards for traditional imaging is likely sufficient for clinical use. However, as AI applications move to prediction, diagnosis, and treatment, the standard of proof should be significantly higher.1 To this end, the US Food and Drug Administration is actively considering how best to regulate AI-fueled innovations in care delivery, attempting to strike a reasonable balance among innovation, safety, and efficacy.