Under Section 1557 of the Affordable Care Act (ACA), all covered entities, including radiologists and radiology practices, are responsible for preventing discrimination in practice.
In May 2024, the Department of Health and Human Services (HHS) published an updated final rule in the Federal Register that includes requirements related to the use of patient care decision support tools by covered entities.[i]
The Radiological Society of North America (RSNA) recommended that radiologists and radiology practices that use patient care decision support tools (including AI algorithms) ask AI vendors the following questions to ensure they remain compliant with ACA requirements.[ii] Below are the answers to these questions for Annalise Enterprise Critical Care AI.
1. Does the AI software fall under the jurisdiction of Section 1557 or any state-based non-discrimination laws?
Annalise Critical Care AI aligns with Section 1557 of the ACA by ensuring controls are in place to minimize bias that could lead to discrimination. Annalise AI is committed to transparency in algorithm development and surveillance so that end users of the device can remain compliant. In developing, training, and monitoring its AI models, Annalise AI takes steps to meet all FDA requirements for minimizing bias and for monitoring changes in performance that could adversely affect patients through any form of bias.
2. Does the software consider any input variables protected under Section 1557 or state non-discrimination laws, such as race, color, national origin, sex, age, or disability? If yes, please state which variables and how they are used in the tool’s decision-making process.
Annalise Critical Care AI will analyze all conformant cases regardless of race, color, national origin, or disability. Annalise Critical Care AI analyzes images for patients who are 22 years of age and older, so the software uses patient age as an input to ensure images are analyzed according to the device’s intended use.
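For illustration only, here is a minimal sketch of how a deploying site might gate studies on patient age before routing them to the AI, using the DICOM PatientAge attribute; the helper names, the routing decision, and the parsing details are assumptions for this example and are not part of the Annalise product.

```python
import re

# DICOM PatientAge (0010,1010) is a four-character string such as "045Y",
# "003M", "021W", or "120D" (a number followed by Days/Weeks/Months/Years).
_AGE_PATTERN = re.compile(r"^(\d{3})([DWMY])$")

def age_in_years(patient_age: str):
    """Convert a DICOM PatientAge string to approximate years; return None if malformed."""
    match = _AGE_PATTERN.match(patient_age.strip())
    if not match:
        return None
    value, unit = int(match.group(1)), match.group(2)
    per_year = {"D": 365.25, "W": 52.18, "M": 12.0, "Y": 1.0}
    return value / per_year[unit]

def in_intended_population(patient_age: str, minimum_years: int = 22) -> bool:
    """Return True only when the study falls within the device's intended use (22+ years)."""
    years = age_in_years(patient_age)
    return years is not None and years >= minimum_years

# Example: only in-scope studies would be routed to the AI worklist.
assert in_intended_population("045Y")
assert not in_intended_population("017Y")
```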
3. What steps does the vendor take to mitigate potential harm to patients in protected groups?
The Annalise Critical Care AI model:
- Is trained on one of the largest datasets in the world, drawn from a diverse patient population.
- Is trained to detect a comprehensive range of findings, including very rare and challenging ones.
- Has been independently validated on a US population across a range of patient demographics, disease characteristics, and technical factors, and does not show significantly different performance across subgroups (a simplified illustration of this kind of subgroup analysis follows this list).
- Is transparent about performance with regards to a range of demographic variables, with information readily available in Annalise Enterprise Performance Guides.
- Has been scrutinized by the FDA through the 510(k) program, which includes checks for unintended bias.
- Is used globally across a range of populations, with ongoing research and post-market surveillance to monitor effectiveness across these populations.
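As a simplified, hypothetical illustration of the subgroup analysis referenced above (not Annalise AI's actual validation code), the sketch below computes sensitivity and specificity per subgroup from a labeled validation set and flags any subgroup whose sensitivity differs from the overall value by more than a chosen margin; the record fields and the 0.05 margin are assumptions.

```python
from collections import defaultdict

def subgroup_performance(records, margin=0.05):
    """records: dicts with 'subgroup', 'label' (ground truth 0/1), 'prediction' (0/1).
    Returns per-subgroup sensitivity/specificity and flags large sensitivity gaps."""
    counts = defaultdict(lambda: {"tp": 0, "fn": 0, "tn": 0, "fp": 0})
    for r in records:
        for group in ("overall", r["subgroup"]):
            c = counts[group]
            if r["label"] == 1:
                c["tp" if r["prediction"] == 1 else "fn"] += 1
            else:
                c["tn" if r["prediction"] == 0 else "fp"] += 1

    def sensitivity(c):
        return c["tp"] / (c["tp"] + c["fn"]) if (c["tp"] + c["fn"]) else None

    def specificity(c):
        return c["tn"] / (c["tn"] + c["fp"]) if (c["tn"] + c["fp"]) else None

    overall_sens = sensitivity(counts["overall"])
    report = {}
    for group, c in counts.items():
        s = sensitivity(c)
        flagged = (
            group != "overall"
            and s is not None and overall_sens is not None
            and abs(s - overall_sens) > margin
        )
        report[group] = {"sensitivity": s, "specificity": specificity(c), "flagged": flagged}
    return report

# Example usage with toy data (fields are illustrative only):
cases = [
    {"subgroup": "female", "label": 1, "prediction": 1},
    {"subgroup": "male", "label": 1, "prediction": 0},
    {"subgroup": "male", "label": 0, "prediction": 0},
]
print(subgroup_performance(cases))
```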
4. Does the vendor audit software performance to ensure it does not inadvertently discriminate against protected groups? If yes, what are the frequency and criteria of such audits?
Annalise AI monitors model performance on an ongoing basis by:
- Running local validation studies
- Collecting feedback proactively and passively
- Monitoring customer complaints
- Conducting performance investigations (a simplified illustration of this kind of ongoing check follows this list)
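The sketch below is a purely illustrative example of such an ongoing check, not Annalise AI's internal tooling: it compares a recent window of confirmed feedback against a baseline sensitivity and signals when an investigation may be warranted; the baseline value, window, and tolerance are assumptions.

```python
def audit_recent_performance(feedback, baseline_sensitivity, tolerance=0.05):
    """feedback: list of dicts with 'truth' and 'prediction' (0/1) from recent confirmed cases.
    Returns (recent_sensitivity, alert); alert=True suggests opening a performance
    investigation because sensitivity fell more than `tolerance` below baseline."""
    positives = [f for f in feedback if f["truth"] == 1]
    if not positives:
        return None, False  # nothing to assess in this window
    detected = sum(1 for f in positives if f["prediction"] == 1)
    recent_sensitivity = detected / len(positives)
    alert = recent_sensitivity < baseline_sensitivity - tolerance
    return recent_sensitivity, alert

# Example: baseline sensitivity of 0.90 assumed from a prior validation study.
window = [
    {"truth": 1, "prediction": 1},
    {"truth": 1, "prediction": 0},
    {"truth": 0, "prediction": 0},
]
print(audit_recent_performance(window, baseline_sensitivity=0.90))
```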
5. How does the vendor ensure transparency around non-discrimination compliance?
Annalise Critical Care AI software meets all FDA requirements to demonstrate generalizability to the intended population. Subgroup performance is submitted and reviewed by the FDA and is readily available in Annalise Critical Care AI Performance Guides.
6. Does the vendor provide training to its staff and clients on non-discrimination and best practices in healthcare software?
Annalise AI programmers and software validation teams are trained on, and follow, the Good Machine Learning Practice for Medical Device Development: Guiding Principles published by the FDA. These principles include ensuring that models are trained on data representative of the intended patient population and that bias and discrimination are minimized.
Annalise AI is also a member of AdvaMed and follows its Code of Ethics and Principles on Health Equity.
Annalise AI trains all users on the intended use of the software and solicits feedback whenever the device is not performing as intended. This allows Annalise AI to track performance across locations and use cases.
Annalise AI is committed to maintaining controls that minimize bias, to ongoing transparency in model performance and monitoring, and to continued efforts to identify areas where potential discrimination can be mitigated.
[i] Federal Register, May 6, 2024: Nondiscrimination in Health Programs and Activities. https://www.federalregister.gov/documents/2024/05/06/2024-08711/nondiscrimination-in-health-programs-and-activities
[ii] RSNA, FAQ for Section 1557 of the ACA: https://www.rsna.org/-/media/files/rsna/practice-tools/faq-for-section-1557-acapdf?rev=14c0422c641f4be1aa7a0d9a97a090b8&hash=275A60D09574CCCAFEF55BCDA5836CAF