Technology and Ethics Glossary
25 essential terms — because precise language is the foundation of clear thinking in Technology and Ethics.
Algorithmic Bias: Systematic discrimination in the outputs of algorithmic systems resulting from biased data, flawed design, or amplification of existing social inequalities.
Autonomous Weapons: Weapon systems capable of selecting and engaging targets without human intervention, raising profound ethical and legal questions about accountability.
Care Ethics: An ethical framework that emphasizes relationships, vulnerability, and the responsibility to care for those affected by our actions, particularly the most vulnerable.
Dark Patterns: User interface design techniques that manipulate or deceive users into making choices they did not intend, undermining autonomy and informed consent.
Data Minimization: The principle of collecting only the minimum personal data necessary for a specified purpose, reducing privacy risks.
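The principle above can be sketched in code: tie each processing purpose to an explicit allow-list of fields and discard everything else at collection time. This is a minimal illustration; the purposes and field names are invented for the example, not drawn from any regulation.

```python
# Hypothetical sketch of data minimization: each purpose declares the
# fields it needs, and collection keeps only those fields.
# Purposes and field names are illustrative assumptions.

PURPOSE_FIELDS = {
    "shipping": {"name", "street", "city", "postal_code"},
    "newsletter": {"email"},
}

def minimize(record: dict, purpose: str) -> dict:
    """Return a copy of `record` containing only the fields the purpose requires."""
    allowed = PURPOSE_FIELDS[purpose]
    return {k: v for k, v in record.items() if k in allowed}

signup = {"name": "Ada", "email": "ada@example.org", "birthdate": "1815-12-10"}
print(minimize(signup, "newsletter"))  # {'email': 'ada@example.org'}
```

The design point is that minimization happens before storage: fields outside the allow-list (here, `birthdate`) never enter the system, rather than being filtered out later.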
Deontology: An ethical framework that evaluates actions based on adherence to rules, duties, and rights rather than consequences.
Digital Autonomy: An individual's capacity to make free and informed decisions about their engagement with digital technologies and data.
Digital Privacy: The right to control personal information in digital contexts, including the collection, use, sharing, and retention of data.
Dual-Use Technology: Technology that can serve both beneficial and harmful purposes, creating dilemmas about its development and distribution.
Ethical Impact Assessment: A systematic evaluation of the potential ethical risks and societal impacts of a technology before it is deployed.
Ethics Washing: The practice of using superficial ethics rhetoric, without substantive changes to harmful practices, as a public relations strategy.
Explainability: The capability of an AI system to provide human-understandable explanations of its decisions and reasoning processes.
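One concrete form explainability can take, sketched below under simplifying assumptions: for a linear scoring model, the per-feature contribution (weight times value) is itself a human-readable explanation, since the contributions sum to the final score. The weights and feature names are invented for this illustration and do not describe any real system.

```python
# Illustrative only: a toy linear model whose "explanation" is the
# additive contribution of each feature. All weights are assumptions.

WEIGHTS = {"income": 0.5, "debt": -1.0, "tenure_years": 0.25}

def score(applicant: dict) -> float:
    """Overall score: the sum of weighted feature values."""
    return sum(WEIGHTS[f] * applicant[f] for f in WEIGHTS)

def explain(applicant: dict) -> dict:
    """Breakdown of the score into each feature's additive contribution."""
    return {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}

applicant = {"income": 4.0, "debt": 1.0, "tenure_years": 4.0}
print(score(applicant))    # 2.0
print(explain(applicant))  # {'income': 2.0, 'debt': -1.0, 'tenure_years': 1.0}
```

The contrast this example is meant to draw: for opaque models, no such decomposition falls out of the model's structure, which is why explainability for complex systems is an open technical and ethical problem rather than a bookkeeping exercise.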
GDPR: The General Data Protection Regulation, an EU law establishing comprehensive data protection rights and obligations for organizations handling personal data.
Human-in-the-Loop: A design principle ensuring that humans retain meaningful decision-making authority in automated systems, especially for high-stakes decisions.
Informed Consent: The principle that individuals should understand and voluntarily agree to the terms under which their data is collected and used.
Predictive Policing: The use of algorithms to forecast criminal activity, which can perpetuate racial bias when trained on historically biased policing data.
Privacy by Design: An approach that embeds privacy protections into the design and architecture of technology systems from the outset, rather than adding them later.
Responsible Research and Innovation: A framework for integrating ethical reflection, public engagement, and the anticipation of social impacts into the research and innovation process.
Right to Be Forgotten: The right of individuals to request erasure of their personal data from digital systems when it is no longer necessary or lawful to retain it.
Surveillance Capitalism: An economic model based on the extraction and commodification of personal behavioral data for prediction and behavioral modification.
Technological Solutionism: The tendency to frame complex social, political, and cultural problems as having simple technological solutions, overlooking systemic causes.
Trolley Problem: A thought experiment in ethics concerning the morality of diverting harm from many to few, widely applied to autonomous vehicle decision-making.
Utilitarianism: An ethical framework that evaluates actions based on their consequences, seeking to maximize overall well-being or minimize overall harm.
Value Sensitive Design: A design methodology that accounts for human values (privacy, fairness, autonomy, trust) throughout the technology development process.
Virtue Ethics: An ethical framework focused on the character and moral virtues of the agent, asking what kind of person or society a particular action or technology cultivates.