AIMA - Introduction


In which we try to explain why we consider artificial intelligence to be a subject most worthy of study, and in which we try to decide what exactly it is, this being a good thing to decide before embarking.

This chapter introduces some key definitions, such as potential interpretations of “artificial intelligence”, presents the subject as the synergy of several other fields, and provides a history of the different approaches and progress in the field of AI.

3rd Edition

Notes

What Do We Mean By AI?

You could define artificial intelligence in four different ways, depending on whether you focus on thought or behaviour, and on fidelity to humans or to rationality:

  • Thinking humanly: “The exciting new effort to make computers think… machines with minds, in the full and literal sense”.
  • Thinking rationally: “The study of the computations that make it possible to perceive, reason and act.”
  • Acting humanly: “The art of creating machines that perform functions that require intelligence when performed by people.”
  • Acting rationally: “Computational intelligence is the study of the design of intelligent agents.”

To do something humanly is to do it in the same way a human would. To do something rationally is to do it in the “right way”: taking the ideal set of actions, given the information currently available, that will achieve the best outcome. Humans aren’t always rational – that’s pretty much the topic of [[Thinking, Fast and Slow, Kahneman]].

Acting Humanly

One way to measure how well an AI can “act humanly” is through the Turing Test, since passing it requires several different capabilities (natural language processing, knowledge representation, automated reasoning and machine learning). Although it’s a good measure of human-like intelligence, AI researchers haven’t put much effort into trying to pass it.

The quest for artificial flight succeeded when the Wright brothers and others stopped imitating birds and started using wind tunnels and learning about aerodynamics.

It’s more important to study the underlying principles of intelligence rather than trying to duplicate an exemplar – i.e. a human.

Thinking Humanly

The idea with this approach is that once we have a sufficiently precise theory of the mind, it becomes possible to express the theory as a computer program. The field of “cognitive science” therefore pursues precise and testable theories of the human mind.

Thinking Rationally

Thinking rationally means “thinking correctly”.

Aristotle came up with syllogisms: patterns of argument that always yield correct conclusions when given correct premises, i.e. irrefutable reasoning processes (e.g. “Socrates is a man; all men are mortal; therefore, Socrates is mortal”). These syllogisms and “laws of thought” provided the basis for the field of logic.

The logicist tradition in AI hopes to use the laws of logic to solve problems and create intelligent systems (a toy sketch of the idea follows the list below). This, however, is difficult:

  • It’s hard to state informal knowledge in formal terms, especially when you have imperfect or uncertain information.
  • Even though there exist programs which could hypothetically solve any problem stated in logical notation, they would require enormous amounts of computing power. “There is a big difference between solving a problem ‘in principle’ and solving it in practice”.
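To make the second bullet concrete, here is a toy forward-chaining sketch over hand-written rules – not from the book, and all the symbols and rules are invented for illustration:

```python
# Toy sketch of the logicist idea (not from the book): knowledge is a set of
# if-then rules over symbols, and new facts are derived by forward chaining.
from typing import List, Set, Tuple

Rule = Tuple[Set[str], str]  # (premises, conclusion)

def forward_chain(facts: Set[str], rules: List[Rule]) -> Set[str]:
    """Apply every rule whose premises are all known until nothing new follows."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

# Invented example: an instantiated "all men are mortal" rule.
rules = [
    ({"man(socrates)"}, "mortal(socrates)"),
    ({"mortal(socrates)", "alive(socrates)"}, "will_die(socrates)"),
]
print(forward_chain({"man(socrates)", "alive(socrates)"}, rules))
```

This only works because every fact is stated formally and with certainty, and because the rule set is tiny – which is exactly what the two difficulties above are about.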

Acting Rationally

This is the approach to defining intelligence that the book takes, focussing on the general principles of rational agents and on components for constructing them.

Acting rationally means acting in a way that achieves the best expected outcome. This is distinct from “thinking rationally”, which is all about reasoning logically to the conclusion that will best achieve one’s goals. For example, recoiling from a hot stove shouldn’t require careful deliberation; it should be done automatically.
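
As a rough illustration of “best expected outcome” (my own sketch, not the book’s – the actions, probabilities and utilities are made up):

```python
# Minimal sketch of a rational agent: pick the action with the highest expected
# utility given the outcome probabilities it currently believes.
from typing import Dict

def expected_utility(outcomes: Dict[str, float], utility: Dict[str, float]) -> float:
    """Sum of P(outcome | action) * U(outcome)."""
    return sum(p * utility[o] for o, p in outcomes.items())

def rational_action(actions: Dict[str, Dict[str, float]], utility: Dict[str, float]) -> str:
    """Choose the action whose expected utility is highest."""
    return max(actions, key=lambda a: expected_utility(actions[a], utility))

# Invented toy model: should the agent take an umbrella?
utility = {"dry": 10.0, "wet": -5.0, "dry_but_encumbered": 8.0}
actions = {
    "take_umbrella": {"dry_but_encumbered": 1.0},
    "leave_umbrella": {"dry": 0.7, "wet": 0.3},
}
print(rational_action(actions, utility))  # -> "take_umbrella" (8.0 vs 5.5)
```

The hot-stove reflex corresponds to skipping this deliberation entirely when there isn’t time to compute it – the “limited rationality” idea in the flashcards below.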

Flashcards

What are the two different ways you could view the performance of an AI?


  • If it acts humanly
  • If it acts rationally

What does it mean for a system to be rational?


It does the “right thing”, given what it knows

What is a syllogism?


A form of reasoning that always yields correct conclusions when given correct premises.

What is the logicist tradition in AI?


Using the laws of logic to create intelligent systems.

What is a rational agent?


An agent that acts to achieve the best expected outcome.

Why is acting rationally in complex environments not always possible?


It requires too much computing power.

What is acting with limited rationality?


Acting appropriately when there is not enough time to do all the computation.

What did Gödel’s Incompleteness Theorem show?


Any formal system as strong as Peano arithmetic contains true statements that are undecidable (they have no proof within the system).

What does it mean for a problem to be intractable?


The time required to solve the problem grows exponentially with the size of the input.

If a problem can be shown to belong to a class that’s NP-complete, what does that mean for the problem?


It is likely to be intractable – no polynomial-time algorithm is known for it (unless P = NP).

Why can computing power alone not create intelligent systems in complex environments?


Complex environments mean computational demands grow very quickly.

What is Bayes’ Theorem used for?


Updating probabilities in light of new evidence.
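
As a quick reference (the standard form, not anything specific to these notes), Bayes’ theorem turns a prior P(H) into a posterior P(H | E) after observing evidence E:

```latex
P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)},
\qquad
P(E) = P(E \mid H)\,P(H) + P(E \mid \neg H)\,P(\neg H)
```

For example, with made-up numbers P(H) = 0.01, P(E | H) = 0.9 and P(E | ¬H) = 0.05, the posterior is 0.009 / (0.009 + 0.0495) ≈ 0.15.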

What is decision theory?


A framework for making decisions under uncertainty that maximise utility.
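
Written out (standard notation, not the notes’ own), the prescription is simply to pick the action with the highest expected utility:

```latex
a^{*} = \operatorname*{arg\,max}_{a} \sum_{s} P(s \mid a)\, U(s)
```

Probability theory supplies P(s | a); utility theory supplies U(s).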

Before the 18th century, where did people think consciousness might exist?


In the heart, brain or spleen.

What “amazing conclusion” can we draw from neuroscience?


A collection of simple cells can lead to thought, action and consciousness.

In ‘Design for a Brain’, how does Ashby argue you could create intelligence?


Using homeostatic devices containing feedback loops to achieve stable behavior.

Why is understanding language a really difficult problem?


It requires understanding the subject matter and context rather than just the structure of sentences.

What is Hebbian learning?


A rule for updating the connection strengths between neurons in order to learn: connections between neurons that are active at the same time are strengthened.
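
A minimal sketch of what such an update could look like (my own illustration – the function, inputs and learning rate are invented, not from the book):

```python
# Hebbian-style update: the connection strength between two units grows in
# proportion to how strongly they are active at the same time.
def hebbian_update(w: float, pre: float, post: float, lr: float = 0.1) -> float:
    """Return the new connection strength: w + lr * pre * post."""
    return w + lr * pre * post

w = 0.0
for pre, post in [(1.0, 1.0), (1.0, 1.0), (0.0, 1.0)]:
    w = hebbian_update(w, pre, post)  # only co-active steps change the weight
print(w)  # 0.2
```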

What is true about computable functions and neurons?


Any computable function could be computed by some network of connected neurons.

What’s another term for “artificial intelligence” that more narrowly describes the process of acting rationally?


Computational rationality.

What is a microworld?


A limited environment for exploring problems that require intelligence to solve.

What was the “combinatorial explosion” in AI research?


The rapid (often exponential) growth in the number of possibilities a program has to explore as problems get bigger – the reason early AI systems that worked on toy examples failed miserably on wider selections of problems.

What were the “weak methods” that arose in the first decade of AI research?


General purpose search mechanisms that strung together elementary reasoning steps to find complete solutions.

Why are “weak methods” called weak?


Because they do not scale up to large problems.

What is a knowledge-intensive system?


A system where expertise is derived from large numbers of special-purpose rules.

Who are the “neats” in AI research?


Those who believe that AI theories should be grounded in mathematical rigor.

Who are the “scruffies” in AI research?


Those who want to try out lots of ideas, write some programs and assess what seems to be working.

What do proponents of HLAI (human-level AI) want?


For AI research to go back to creating machines that can think, learn and create.

What does AGI look for?


A universal algorithm for learning and acting in any environment.

What has recent availability of large data sets shown?


Many difficult problems can be solved by using lots of data rather than complicated rules.

Commentary

I quickly found out that rewriting the entire textbook would be a long and boring process. My process was going to be:

  • Highlight key phrases and definitions
  • Write notes
  • Convert notes into flashcards

But I think it’s much more streamlined if the whole “write notes” process is skipped:

  • Highlight key phrases and definitions
  • Write flashcards

I can still use [[Sergeant]] to practice applying the knowledge.

4th Edition

Flashcards

What is the standard model definition of AI?


AI is the study and construction of agents that do the right thing.

The standard model for AI is “AI is the study and construction of agents that do the right thing”. What is the main issue with this?


It assumes the “right thing” can be fully and correctly specified, but our objectives are often vague or uncertain, so the objective given to the machine may not match what we actually want.

What is the value alignment problem?


The problem of making sure the values and objectives programmed into a machine are aligned with human values.

Summary

  • AI is the synthesis of a lot of different fields coming together.
  • The standard model – that AI is the design and creation of agents that do the right thing – is how AI has been framed in the past, but it doesn’t account for uncertainty about whether the objectives we give machines match human objectives.
  • AI goes through hype cycles, which lead to cuts in funding and enthusiasm.
  • AI has matured considerably.
  • We need to start thinking about the risks that AI could incur.

Questions

  • Do you think the current enthusiasm around deep learning techniques is just another hype cycle?


