Ethics and Responsible Innovation HT23, Written Reflection Plan


  • Artificial general intelligence (henceforth referred to as AGI), which I would like to work on in the future
    • “Would like to work on in the future” seems a little off as a phrase, almost arrogant. I want to say something about a career focussed on AI, which might involve many different things.
  • What it is: most AI systems to date have been “weak AI”, i.e. narrowly focussed on one specific task such as identifying objects in images. One description of AGI is as “strong AI”, which can achieve good performance across a wide variety of domains.
  • Recently, Microsoft researchers released a paper, “Sparks of Artificial General Intelligence”, arguing that new large language models such as GPT-4 have some capacity to reason, solve problems and think abstractly in a variety of different situations.
  • My ethical concern: Aristotle posited the idea of “flourishing” (eudaimonia), a state of non-superficial happiness that comes from fulfilling one’s potential, capabilities and talents. Flourishing is intimately tied to contributing to a community and feeling like you have a sense of purpose.
    • I think I need to elaborate more on how flourishing is tied to finding a sense of purpose.
  • When AGI happens, assuming that all the hurdles and complexity of AI alignment have been tackled (itself an interesting ethical problem), many jobs will of course be automated. But sufficiently strong AGI would also automate the process of scientific discovery, invention, etc.
  • Since the Scientific Revolution in the 16th and 17th centuries, the idea of scientific and technological progress has been an important part of society. Will humans still be able to experience a sort of “cultural flourishing” once AGI becomes capable enough to automate this? Will we be left without a purpose?
    • The idea of “cultural flourishing” seems a bit weak; how can I make this more personal and tie it to one’s individual flourishing?
    • I want to make an analogy here but can’t think of one: something about feeling really proud of yourself and happy with your progress, but then something or someone comes along that is infinitely better than you and makes you feel almost worthless.
    • How to tie this back to ethics?
  • Some ways to address these concerns:
    • Using AI as a tool rather than as an agent, so that it works alongside humanity rather than “above” it (there are some interesting ideas in Nick Bostrom’s “Superintelligence” about how AI could be used as an oracle; may be worth elaborating).
    • Using AI as a “guardian angel” (for lack of a better term), preventing catastrophe and reducing suffering while largely letting humanity figure things out on its own.
    • Or perhaps this is actually not a problem, since AGI would be able to devise a solution itself in order to maximise humanity’s potential.

Overall personal notes:

  • It would be good to include more about what we learnt in the course, maybe about AI inheriting bias from its creators, so that its vision of the future of humanity is heavily influenced by their views. This also brings up ideas of moral relativism. One of the things we learnt in the course was the dangers of “algorithmic decision making”, which comes up a lot in the above.
  • Another thing we learnt in the course was the “Outcome Lens”, a consequentialist-inspired view of technology; the “Process Lens”, a deontology-inspired view; and the “Structure Lens”, which is about considerations of fairness. It would be good to include something about these different lenses.
  • Should make it clear, at the start of the section where I talk about my own ethical concerns, that this is all conditioned on AGI being safely developed and the technical alignment problem being solved, and that it doesn’t address shorter-term ethical concerns such as current AI systems that are already having a palpable impact.


