Ethical frameworks for responsible technology creation and deployment -- consequentialism, deontology, virtue ethics, care ethics, and justice theory applied to technology decisions. Covers stakeholder analysis, impact assessment, the Collingridge dilemma, technology governance models, digital equity, environmental sustainability, labor implications, and the social construction of technology. Use when evaluating the ethics of technology decisions, designing for social impact, assessing unintended consequences, or teaching responsible technology citizenship. This is the normative complement to emerging-tech's descriptive analysis.
```sh
git clone https://github.com/Tibsfox/gsd-skill-creator

T=$(mktemp -d) && git clone --depth=1 https://github.com/Tibsfox/gsd-skill-creator "$T" && mkdir -p ~/.claude/skills && cp -r "$T/examples/skills/technology/responsible-innovation" ~/.claude/skills/tibsfox-gsd-skill-creator-responsible-innovation && rm -rf "$T"
```
`examples/skills/technology/responsible-innovation/SKILL.md`

Responsible Innovation
Technology shapes society, and society shapes technology. The choices made during design, development, and deployment -- what to build, for whom, at what cost, with what safeguards -- are ethical choices with real consequences. Responsible innovation is the discipline of making those choices deliberately, with awareness of who benefits, who is harmed, and what alternatives exist.
Agent affinity: gates-m (technology for social impact, digital inclusion), hicks (social construction of technology, gender and labor), joy (existential risk, precautionary reasoning), resnick (equitable access to creative tools)
Concept IDs: tech-responsible-innovation, tech-digital-rights-privacy, tech-environmental-impact-tech, tech-transformative-technologies
Part I -- Ethical Frameworks for Technology
Consequentialism (Outcomes-Based)
Core question: Does this technology produce more benefit than harm?
Consequentialism evaluates technology by its outcomes. A consequentialist analysis of facial recognition technology would weigh crime prevention benefits against surveillance harms, false positive rates across demographics, and chilling effects on free expression.
Strength: Focuses on real-world impact. Weakness: Outcomes are often uncertain, unevenly distributed, and difficult to measure.
Deontology (Rules-Based)
Core question: Does this technology respect people's rights and dignity?
Deontological ethics holds that certain actions are right or wrong regardless of outcomes. A deontological analysis might hold that mass surveillance violates the right to privacy even if it reduces crime, because treating people as objects of monitoring violates their dignity.
Strength: Protects individuals from being sacrificed for collective benefit. Weakness: Rights can conflict (privacy vs safety), and resolving conflicts requires judgment beyond the rules themselves.
Virtue Ethics (Character-Based)
Core question: What kind of people and communities does this technology help us become?
Virtue ethics asks not whether the technology is good but whether building and using it cultivates human flourishing. Social media might pass a consequentialist test (connects people) and a deontological test (no rights violated) while still failing a virtue ethics test (cultivates vanity, anxiety, and shallow engagement).
Strength: Addresses long-term cultural effects. Weakness: "Flourishing" is culturally situated and hard to operationalize.
Care Ethics (Relationship-Based)
Core question: Does this technology strengthen or weaken relationships of care and interdependence?
Care ethics centers relationships rather than abstract principles. A care ethics analysis of elder-care robots would ask: does the robot supplement human caregiving, or replace it? Does it free caregivers for deeper engagement, or justify reducing care staff?
Strength: Foregrounds vulnerability and dependency. Weakness: Can be difficult to scale beyond interpersonal relationships.
Justice Theory (Equity-Based)
Core question: Does this technology distribute benefits and burdens fairly?
Drawing on John Rawls's theory of justice, this framework asks whether the technology would be acceptable if you did not know your position in society (the "veil of ignorance"). Would you accept an algorithm that determines bail decisions if you did not know whether you would be the defendant or the victim?
Strength: Directly addresses systemic inequality. Weakness: The veil of ignorance is a thought experiment, not a practical tool; actual power dynamics are always present.
Part II -- The Collingridge Dilemma
David Collingridge (1980) identified a fundamental tension in technology governance:
- Early in development, a technology's impact is uncertain, so it is difficult to know what to control.
- Late in development, a technology is entrenched, so it is difficult to change even when impacts are clear.
This means the easiest time to change a technology is when you know the least about its effects, and the most informed time to change it is when change is most costly. Every technology governance decision takes place within this dilemma.
Practical responses:
- Adaptive governance: Build review and adjustment mechanisms into deployment from the start.
- Modularity: Design systems so components can be modified without replacing the whole.
- Sunset clauses: Require periodic re-authorization rather than permanent deployment (see the sketch after this list).
- Reversibility preference: When choosing between equally effective approaches, prefer the more reversible one.
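Sunset clauses and adaptive review can be enforced mechanically rather than by memory. Here is a minimal Python sketch, assuming a hypothetical deployment registry (the class and field names are illustrative, not from any existing tool), in which authorization lapses by default unless someone affirmatively renews it:

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class DeploymentAuthorization:
    """A deployment that lapses by default unless explicitly re-authorized."""
    system: str
    authorized_on: date
    review_interval: timedelta = timedelta(days=365)  # periodic re-authorization

    @property
    def expires_on(self) -> date:
        return self.authorized_on + self.review_interval

    def is_authorized(self, today: date) -> bool:
        # Continuation requires a positive decision; silence means expiry.
        return today < self.expires_on

auth = DeploymentAuthorization("face-match-pilot", date(2024, 1, 15))
print(auth.is_authorized(date(2024, 6, 1)))   # True: inside the authorized window
print(auth.is_authorized(date(2025, 6, 1)))   # False: lapsed without renewal
```

The inverted default is the point: under the Collingridge dilemma, defaults should favor reversibility, so continued deployment is the decision that requires justification.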
Part III -- Stakeholder Analysis
Identifying Stakeholders
Every technology decision affects multiple groups. Responsible innovation requires identifying all affected parties, not just the intended users (a checklist sketch follows the table).
| Stakeholder type | Examples | Often overlooked? |
|---|---|---|
| Direct users | Customers, employees using the tool | No |
| Indirect users | People affected by decisions the tool enables | Yes |
| Non-users | People who cannot access or choose not to use the tool | Yes |
| Future users | People who will inherit the technology's consequences | Yes |
| Ecosystems | Natural environments affected by resource extraction, energy use, e-waste | Yes |
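Because the overlooked rows are precisely the ones skipped in practice, it can help to treat the table as a checklist. A minimal sketch (the enum and function names are hypothetical) that reports which stakeholder types an analysis has not yet addressed:

```python
from enum import Enum

class Stakeholder(Enum):
    """The five stakeholder types from the table above."""
    DIRECT_USERS = "direct users"
    INDIRECT_USERS = "indirect users"
    NON_USERS = "non-users"
    FUTURE_USERS = "future users"
    ECOSYSTEMS = "ecosystems"

def coverage_gaps(considered: set) -> set:
    """Return the stakeholder types an analysis has not yet addressed."""
    return set(Stakeholder) - considered

# An analysis that only considered direct users has four gaps to close:
gaps = coverage_gaps({Stakeholder.DIRECT_USERS})
print(sorted(g.value for g in gaps))
```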
Power Analysis
Not all stakeholders have equal voice. A power analysis asks:
- Who decides what gets built?
- Who funds the development?
- Who has access to the finished product?
- Who bears the risk if it fails?
- Who profits if it succeeds?
When the people who bear the risk are not the people who make the decisions, responsible innovation demands deliberate mechanisms for including their perspective.
Part IV -- Technology and Labor
Mar Hicks's Programmed Inequality (2017) demonstrated, through the history of British computing, that technology does not simply "automate away" jobs in a neutral process. The choices about which jobs to automate, how to retrain (or not), and who controls the automation reflect and reinforce existing power structures.
Automation and Displacement
- Task automation vs job automation: Most automation replaces tasks within jobs, not entire jobs. This changes the nature of work rather than eliminating it.
- Skill polarization: Automation tends to eliminate middle-skill jobs while increasing demand for high-skill and low-skill work, hollowing out the middle class.
- The productivity paradox: Automation increases productivity but does not automatically share the gains with workers. Whether workers benefit depends on institutional and political choices, not technological inevitability.
Gig Economy and Platform Labor
Digital platforms (Uber, DoorDash, TaskRabbit) create new forms of work that often lack traditional labor protections. Responsible innovation in platform design considers:
- Worker classification (employee vs contractor)
- Algorithmic management (opaque performance metrics, automated deactivation)
- Income stability and benefits
- Data asymmetries (platforms know everything about workers; workers know little about the algorithm)
Part V -- Digital Equity
Technology access is not equally distributed. The digital divide operates on multiple dimensions:
| Dimension | Gap |
|---|---|
| Access | Physical availability of devices and connectivity |
| Affordability | Cost relative to income |
| Skills | Ability to use technology effectively |
| Relevance | Availability of content and services in users' languages and contexts |
| Agency | Ability to shape technology rather than merely consume it |
Melinda French Gates's work through Pivotal Ventures has emphasized that digital inclusion requires addressing all five dimensions, not just access. Providing a device without skills training, relevant content, and genuine agency is performative inclusion.
Part VI -- Environmental Impact
Energy and Carbon
- Data centers consume approximately 1-2% of global electricity.
- One widely cited 2019 estimate found that training a single large language model can emit as much carbon as five cars over their full lifetimes.
- Proof-of-work cryptocurrency mining consumes more electricity annually than some entire countries.
- Streaming video accounts for a significant and growing share of internet traffic and energy use.
E-Waste
- An estimated 53.6 million metric tons of e-waste were generated globally in 2019.
- Less than 20% is formally recycled.
- Informal recycling in developing countries exposes workers to toxic materials.
- Planned obsolescence accelerates the cycle.
Responsible Design Responses
- Design for longevity (repairable, upgradeable hardware).
- Design for efficiency (optimize algorithms, reduce data transfer).
- Design for end-of-life (recyclable materials, take-back programs).
- Measure and report environmental impact transparently (a back-of-envelope sketch follows).
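Measuring operational carbon can start with simple arithmetic: energy consumed multiplied by the carbon intensity of the local grid. A back-of-envelope sketch follows; the 0.4 kg CO2e/kWh default is an illustrative assumption (real grid intensity varies widely by region and hour), so substitute a measured local figure:

```python
def carbon_kg(energy_kwh: float, grid_intensity_kg_per_kwh: float = 0.4) -> float:
    """Estimate operational CO2-equivalent emissions from electricity use.

    The default intensity is illustrative only; use your grid's actual figure.
    """
    return energy_kwh * grid_intensity_kg_per_kwh

# e.g., a service drawing 5,000 kWh per month:
print(f"{carbon_kg(5000):.0f} kg CO2e/month")  # 2000 under the assumed intensity
```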
Part VII -- Technology Assessment Protocols
Impact Assessment Template
For any proposed technology deployment, answer each of the following (a structured sketch follows the list):
- Purpose: What problem does this solve? For whom?
- Stakeholders: Who is affected? Who is excluded?
- Benefits: What are the intended outcomes?
- Risks: What could go wrong? For whom?
- Alternatives: What other approaches could solve the same problem?
- Reversibility: If the technology causes harm, can it be undone?
- Monitoring: How will outcomes be measured after deployment?
- Governance: Who decides whether to continue, modify, or discontinue?
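The template can also be captured as a structured record so that unanswered questions stay visible and blocking. A minimal sketch, assuming a Python review workflow (the class and field names are hypothetical, not part of any existing tool):

```python
from dataclasses import dataclass, field

@dataclass
class ImpactAssessment:
    """One record per proposed deployment; empty fields should block launch."""
    purpose: str = ""          # What problem does this solve? For whom?
    stakeholders: list = field(default_factory=list)  # Affected and excluded parties
    benefits: list = field(default_factory=list)      # Intended outcomes
    risks: list = field(default_factory=list)         # What could go wrong, for whom
    alternatives: list = field(default_factory=list)  # Other approaches to the problem
    reversibility: str = ""    # If it causes harm, can it be undone?
    monitoring: str = ""       # How outcomes are measured after deployment
    governance: str = ""       # Who decides to continue, modify, or discontinue

    def unanswered(self) -> list:
        """Fields still blank -- a blocking checklist before deployment."""
        return [name for name, value in vars(self).items() if not value]

draft = ImpactAssessment(purpose="Reduce triage wait times in clinic intake")
print(draft.unanswered())  # every field except 'purpose' still needs an answer
```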
The "Headline Test"
Before deploying: imagine the worst plausible outcome appearing as a newspaper headline. If the headline is unacceptable, the risk mitigation is inadequate.
Cross-References
- gates-m agent: Technology for social impact and digital inclusion. Primary agent for equity analysis.
- hicks agent: Social construction of technology, labor implications, gender in tech. Primary agent for labor and power analysis.
- joy agent: Technology risk assessment, existential risk. Primary agent for high-stakes risk reasoning.
- resnick agent: Equitable access to creative tools and learning environments.
- emerging-tech skill: Descriptive analysis of technologies that this skill evaluates normatively.
- cybersecurity-basics skill: Security dimensions of responsible innovation.
References
- Jasanoff, S. (2016). The Ethics of Invention. W. W. Norton.
- Hicks, M. (2017). Programmed Inequality. MIT Press.
- Gates, M. F. (2019). The Moment of Lift. Flatiron Books.
- Joy, B. (2000). "Why the Future Doesn't Need Us." Wired, April 2000.
- Collingridge, D. (1980). The Social Control of Technology. Frances Pinter.
- Rawls, J. (1971). A Theory of Justice. Harvard University Press.
- Winner, L. (1980). "Do Artifacts Have Politics?" Daedalus, 109(1), 121-136.
- Crawford, K. (2021). Atlas of AI. Yale University Press.
- Stilgoe, J., Owen, R., & Macnaghten, P. (2013). "Developing a Framework for Responsible Innovation." Research Policy, 42(9), 1568-1580.