Lecture - Ethics and Responsible Innovation MT22, I
Notes
- Course Structure
- 4 interactive lectures
- 2 practical sessions in Hilary term (2 hours)
- Written assessment (S+/S/S-)
- Types of harms
- Deliberate harms
- Accidental harms, i.e. unintentional side effects
- Algorithmic bias - systematic and repeatable errors in a computer system that create unfair outcomes, such as privileging one arbitrary group of users over others.
- Allocative harms - when a system provides different groups unequal opportunities, resources or capabilities
- Occur a lot in ML systems
- Often caused by dataset sample bias (see the sketch after this list)
- E.g. voice recognition or face recognition disproportionately performing worse on minorities
- Representational harms - algorithmically curated or created depictions that are discriminatory or otherwise harmful
- E.g. Google image results showing mostly white men when you search for "CEO"
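A minimal sketch of how sample bias can produce an allocative harm, using synthetic data (the group setup, the sizes, and the `make_group` helper are all illustrative assumptions, not from the lecture): one model is trained on a sample dominated by group A, so its accuracy on the under-represented group B suffers.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, signal_dim):
    # Hypothetical setup: two features, and the true label depends on a
    # different feature for each group.
    X = rng.normal(size=(n, 2))
    y = (X[:, signal_dim] > 0).astype(int)
    return X, y

# Sample bias: group A dominates the training data, group B is rare.
Xa, ya = make_group(9000, signal_dim=0)  # majority group A
Xb, yb = make_group(1000, signal_dim=1)  # minority group B
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Evaluate on fresh data from each group: the model has mostly learned
# group A's pattern, so group B gets systematically worse predictions.
for name, dim in [("A", 0), ("B", 1)]:
    Xt, yt = make_group(2000, signal_dim=dim)
    print(f"accuracy on group {name}: {model.score(Xt, yt):.2f}")
```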
- Algorithmic decision making
- Support human decision making = decision support
- CV screening
- Making decisions in place of humans = automated decision making
- Fraud detection
- Not always a bad thing – humans have bias too and so automated systems can sometimes do a better job
- Reasons for concern
- Sample bias
- Feature horizon - not seeing everything that might be relevant
- Fallibility of human judgement - human biases might be baked in
- Inscrutability - inability to easily tell what the model is really learning and using for inference
- Feedback loops - the system's own decisions shape the data it later learns from (see the sketch below)
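A minimal sketch of a feedback loop, with made-up numbers (the two-region patrol setup and the reinforcement rule are illustrative assumptions, not from the lecture): both regions have the same true incident rate, but the system only observes incidents where it sends patrols, so a small initial skew becomes self-fulfilling.

```python
import numpy as np

true_rate = np.array([0.5, 0.5])  # the two regions are actually identical
patrol = np.array([0.55, 0.45])   # slight initial bias toward region 0

for _ in range(100):
    observed = patrol * true_rate   # you only see incidents where you patrol
    hot = int(np.argmax(observed))  # region 0 looks "higher risk"
    patrol[hot] += 0.05             # so it gets extra patrols next round
    patrol = patrol / patrol.sum()  # renormalise the allocation shares

print(patrol.round(3))  # ~[0.997, 0.003]: heavily skewed despite equal true rates
```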