Medical artificial intelligence (AI) technologies, through their capacity to decipher enormous data sets, identify meaningful patterns beyond what human intelligence can recognize, and in some cases render decisions without human assistance, are poised to transform healthcare. As with any powerful technology, careful ethical analysis is needed if we are to realize the benefits of AI while avoiding its perils. Four perspectives are available. The first is technological sentimentalism, which resists novel technologies that seem to displace a more natural way of inhabiting the world. The second is technological messianism, which uncritically welcomes novel technology as intrinsically good and the answer to all human problems. The third, common today, is technological pragmatism, which weighs benefits and risks in a utilitarian framework that emphasizes empirical facts but disregards moral values, treating them as opinions without consequence or validity. The fourth and preferred perspective is technological responsibilism, which considers not only outcomes but also the moral values embedded in the design and implementation of technology. Technological responsibilism respects the deeply human attributes of voluntary responsibility, moral agency, and character. Morally responsible use of AI is needed if healthcare professionals are to sustain their focus, not on technology, but on patients.
Biomedical enhancements have the potential to extend human capacities and significantly improve human life. Consequently, their widespread use may yield greater benefits than current biopharmaceutical interventions. Ethical assessment of novel biomedical technologies prior to their widespread adoption is therefore important.