FDA AI/ML Software as Medical Device (SaMD)

Evaluate compliance of FDA AI/ML-based Software as a Medical Device (SaMD) against the FDA's 2021 guidance, covering predetermined (locked) versus adaptive algorithms, algorithm change protocols, and real-world performance monitoring.

Medtech, Healthcare, Biotech, Technology | 25 minutes | 15 questions

1. Algorithm Characterization

Is your AI/ML algorithm characterized as predetermined or adaptive?*

Predetermined (locked) algorithms do not change after deployment; adaptive algorithms continue learning from real-world data

Have you documented the intended use and clinical indication?*

Clear definition of medical purpose, target population, clinical workflow integration

Is the ML model architecture and training methodology documented?*

Model type (CNN, transformer, etc.), hyperparameters, training/validation approach
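
One lightweight way to keep this documentation traceable is a machine-readable record stored alongside the model artifact. The sketch below is a minimal, hypothetical example; the ModelCard fields and example values are illustrative, not an FDA-prescribed schema.

```python
from dataclasses import dataclass, field, asdict
import json


@dataclass
class ModelCard:
    """Minimal, machine-readable record of architecture and training methodology."""
    model_name: str
    model_type: str                 # e.g. "CNN", "transformer"
    intended_use: str
    hyperparameters: dict = field(default_factory=dict)
    training_data_version: str = ""
    validation_approach: str = ""   # e.g. "5-fold CV plus held-out test set"


# Hypothetical example of a documented release.
card = ModelCard(
    model_name="pneumonia-cxr-classifier",
    model_type="CNN (DenseNet-121 backbone)",
    intended_use="Triage aid for adult chest X-rays; not for standalone diagnosis",
    hyperparameters={"learning_rate": 1e-4, "batch_size": 32, "epochs": 50},
    training_data_version="cxr-train-2024-03",
    validation_approach="Patient-level split; 5-fold CV plus held-out external test set",
)

# Store the card next to the model weights so every released version is traceable.
with open("model_card.json", "w") as f:
    json.dump(asdict(card), f, indent=2)
```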

2. Risk Management

Have you conducted risk analysis per ISO 14971 for AI/ML-specific hazards?*

Risks include bias, overfitting, dataset shift, adversarial attacks, and lack of explainability

Are AI/ML risk controls implemented (monitoring, alarms, human oversight)?*

Safety controls for algorithm errors, edge cases, distribution shift
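
One common oversight control is a confidence gate that routes low-confidence predictions to human review instead of returning an automated result. The sketch below is illustrative only; the thresholds and the needs_human_review routing are hypothetical and would be set during the device's risk analysis, not taken from FDA guidance.

```python
import numpy as np


def triage_prediction(probabilities: np.ndarray,
                      confidence_floor: float = 0.80,
                      margin_floor: float = 0.20) -> dict:
    """Route one prediction to automated output or human review.

    probabilities : class probabilities from the model for one case.
    confidence_floor / margin_floor : illustrative thresholds; real values
    come from the device's risk analysis and validation data.
    """
    top = float(np.max(probabilities))
    runner_up = float(np.sort(probabilities)[-2]) if probabilities.size > 1 else 0.0
    margin = top - runner_up

    needs_review = top < confidence_floor or margin < margin_floor
    return {
        "predicted_class": int(np.argmax(probabilities)),
        "confidence": top,
        "needs_human_review": needs_review,   # alarm / human-oversight hook
    }


# Example: an ambiguous case is flagged for review rather than auto-reported.
print(triage_prediction(np.array([0.55, 0.45])))
```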

3. Algorithm Change Protocol

Is an Algorithm Change Protocol (ACP) established for adaptive algorithms?*

SaMD Pre-Specifications (SPS) describe the anticipated modifications; the Algorithm Change Protocol describes the methods used to implement and validate them

Does the ACP define what modifications trigger regulatory submission?*

Classification of changes: no new submission, new 510(k), or PMA supplement

4. Data Management

Is training data representative of intended use population and diverse?*

Dataset demographics and clinical characteristics should match the target population
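
A simple starting point is to compare the dataset's demographic composition against reference figures for the intended-use population, e.g. with a chi-square goodness-of-fit test. In the sketch below the age bands, counts, and target proportions are made-up placeholders.

```python
from scipy.stats import chisquare

# Hypothetical counts per age band in the training set.
dataset_counts = {"18-40": 1200, "41-65": 2600, "66+": 1200}

# Hypothetical expected proportions for the intended-use population.
target_proportions = {"18-40": 0.30, "41-65": 0.45, "66+": 0.25}

total = sum(dataset_counts.values())
observed = [dataset_counts[g] for g in target_proportions]
expected = [target_proportions[g] * total for g in target_proportions]

stat, p_value = chisquare(f_obs=observed, f_exp=expected)
print(f"chi-square={stat:.1f}, p={p_value:.4f}")
if p_value < 0.05:
    print("Dataset composition differs from the target population; investigate.")
```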

Have you addressed bias and fairness in training datasets?*

Evaluate performance across demographic subgroups and mitigate identified bias
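
Subgroup evaluation usually means stratifying the validation metrics by demographic attributes and looking for gaps. A minimal sketch, assuming labels and predictions are already collected in a pandas DataFrame with a hypothetical "sex" column; the data and the 0.10 tolerance are illustrative.

```python
import pandas as pd
from sklearn.metrics import recall_score

# Hypothetical validation results: true label, model prediction, subgroup attribute.
df = pd.DataFrame({
    "y_true": [1, 0, 1, 1, 0, 1, 0, 1],
    "y_pred": [1, 0, 0, 1, 0, 1, 1, 1],
    "sex":    ["F", "F", "F", "F", "M", "M", "M", "M"],
})

# Sensitivity (recall on the positive class) computed per subgroup.
per_group = {
    group: recall_score(g["y_true"], g["y_pred"])
    for group, g in df.groupby("sex")
}
print(per_group)

# Flag large gaps between subgroups for follow-up mitigation.
if max(per_group.values()) - min(per_group.values()) > 0.10:  # illustrative tolerance
    print("Sensitivity gap between subgroups exceeds tolerance; investigate and mitigate.")
```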

Is data provenance and quality documented?*

Data sources, labeling procedures, quality control, curation methodology
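
One concrete quality-control habit that supports provenance is fingerprinting each curated data file, so the exact bytes used for training can be re-identified later. A minimal sketch using SHA-256; the example file path is hypothetical.

```python
import hashlib
from pathlib import Path


def file_sha256(path: Path, chunk_size: int = 1 << 20) -> str:
    """Return the SHA-256 digest of a file, read in chunks to handle large files."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


# Record the digest alongside source and labeling metadata for each dataset version.
# Example (hypothetical path):
# print(file_sha256(Path("data/cxr-train-2024-03.parquet")))
```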

5. Performance Monitoring

Do you monitor real-world performance of AI/ML algorithms post-deployment?*

Track accuracy, sensitivity, specificity, false positives/negatives in production
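
Once ground truth is available for a sample of production cases, the same metrics reported in the submission can be recomputed on live data. A minimal sketch with scikit-learn; the arrays below stand in for a small batch of adjudicated production cases.

```python
import numpy as np
from sklearn.metrics import confusion_matrix

# Hypothetical adjudicated production batch: ground truth vs. model output.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0, 1, 0])

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()

sensitivity = tp / (tp + fn)          # true positive rate
specificity = tn / (tn + fp)          # true negative rate
accuracy = (tp + tn) / len(y_true)

print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}, "
      f"accuracy={accuracy:.2f}, FP={fp}, FN={fn}")
```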

Are data drift and model degradation monitored continuously?*

Detect distribution shift that indicates retraining is needed
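
A common, lightweight drift check compares the distribution of an input feature (or a model output score) between the validation-time reference data and a recent production window, for example with a two-sample Kolmogorov-Smirnov test. A minimal sketch with SciPy; the data is simulated to show a detectable shift, and the alert threshold is illustrative.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Reference distribution captured at validation time (e.g. a feature or model score).
reference = rng.normal(loc=0.0, scale=1.0, size=5000)

# Recent production window with a simulated shift in the mean.
production = rng.normal(loc=0.4, scale=1.0, size=1000)

stat, p_value = ks_2samp(reference, production)
print(f"KS statistic={stat:.3f}, p={p_value:.2e}")

if p_value < 0.01:   # illustrative alert threshold
    print("Distribution shift detected; review data and consider retraining.")
```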

Is there a process to update/retrain models when performance degrades?*

Defined performance thresholds trigger model updates, with revalidation before redeployment
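
The update process is easier to audit when the triggering thresholds are explicit in code or configuration rather than implicit in someone's judgment. A minimal sketch; the metric names and floors are hypothetical examples of pre-specified acceptance criteria, not values from any guidance.

```python
# Hypothetical pre-specified acceptance criteria (e.g. from the validation report / ACP).
THRESHOLDS = {"sensitivity": 0.90, "specificity": 0.85}


def retraining_required(current_metrics: dict) -> list:
    """Return the list of metrics that fell below their pre-specified floor."""
    return [
        name for name, floor in THRESHOLDS.items()
        if current_metrics.get(name, 0.0) < floor
    ]


# Example: monthly production metrics trip the sensitivity floor.
breaches = retraining_required({"sensitivity": 0.87, "specificity": 0.91})
if breaches:
    print(f"Thresholds breached: {breaches}. Retrain and revalidate before redeployment.")
```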

6. Transparency & Explainability

Are model predictions explainable to clinicians?*

Techniques: SHAP, LIME, attention maps, saliency maps for interpretability
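
The item names SHAP, LIME, attention maps, and saliency maps; as a simple model-agnostic stand-in, the sketch below uses scikit-learn's permutation importance to rank which inputs drive predictions on synthetic data with hypothetical feature names. SHAP or LIME would be applied in a similar spirit but can additionally provide per-case explanations for clinicians.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic stand-in for a validation set, with hypothetical clinical feature names.
X, y = make_classification(n_samples=500, n_features=5, n_informative=3, random_state=0)
feature_names = ["age", "heart_rate", "lab_value_a", "lab_value_b", "lab_value_c"]

model = RandomForestClassifier(random_state=0).fit(X, y)

# Permutation importance: how much does shuffling each feature degrade performance?
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, importance in sorted(zip(feature_names, result.importances_mean),
                               key=lambda pair: -pair[1]):
    print(f"{name:12s} {importance:.3f}")
```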

Is model performance (accuracy, limitations) disclosed to users?*

Labeling includes performance metrics and intended-use limitations
