Title: Legal and Moral Complexities of Artificial General Intelligence
1. Legal and Moral Complexities of Artificial General Intelligence
- Peter Voss
- Adaptive A.I. Inc.
2. Topics
- What is AGI, and how does it differ from conventional AI?
- Key Uncertainties
- AGI: Savior or Mortal Danger?
- Moral Implications
- Legal Implications
7. AGI: The forgotten science
- Real AI: Human-level learning and understanding

Artificial General Intelligence (AGI) | Conventional AI
Focus on acquiring knowledge and skills | Focus on having knowledge and skills
Acquisition via learning | Acquisition via programming
General ability, using abstraction and context | Domain-specific, rule-based and concrete
Ongoing, cumulative, adaptive, grounded, self-directed learning | Relatively fixed abilities; externally initiated improvements
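To make the "acquisition via programming" vs. "acquisition via learning" row concrete, here is a minimal, hypothetical Python sketch (not part of the original slides): one function whose behaviour is fixed by a hand-written rule, and one whose decision boundary is acquired from example data.

```python
# Toy illustration (hypothetical, not from the slides): contrasting
# "acquisition via programming" with "acquisition via learning"
# for the task of deciding whether a number counts as "large".

# Conventional AI style: the rule is hand-coded and fixed by a programmer.
def is_large_programmed(x):
    return x > 10  # threshold chosen by a human, never updated

# Learning style: the threshold is acquired from labelled examples.
def fit_threshold(examples):
    """examples: list of (value, is_large) pairs; returns a learned threshold."""
    large = [v for v, label in examples if label]
    small = [v for v, label in examples if not label]
    # place the boundary midway between the two classes seen so far
    return (max(small) + min(large)) / 2

examples = [(2, False), (7, False), (13, True), (20, True)]
threshold = fit_threshold(examples)  # 10.0 for this data

def is_large_learned(x, t=threshold):
    return x > t

print(is_large_programmed(12), is_large_learned(12))  # True True
```

Feeding the learner different examples shifts its behaviour without touching its code, which is the distinction the table draws.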
8. Implications of AGI
- Human-level learning and understanding
- Self-aware
- Self-improvement (Ready-to-Learn)
- Seed AI
- Human Augmentation / Integration
9. Key Questions
- How soon?
- How powerful? Hard limits to intelligence?
- Will there be a hard take-off?
- Can we put the genie back into the bottle?
- Will it have a mind or agenda of its own?
- Can't we first integrate AGIs into humans?
11. AGI: Our Savior?
- Do we need AGI to save us from ourselves?
- Dangers of biotech
- Dangers of nanotech
- Dangers of social breakdown
- AGI can potentially help by:
  - providing tools to prevent disaster
  - protecting us directly (universal policeman)
  - helping to alleviate poverty and suffering
  - making us more moral
13. AGI: A Mortal Danger?
- What is the real risk?
- An AGI with a mind of its own, or one without?
- Little evidence that AGI will be detrimental to humans unless specifically designed to be!
- Original applications or training may have a large impact (a2i2 vs. military).
- The power of AGI in (the wrong) human hands is a bigger concern.
- Mitigating factor: AGI's positive moral influence
14. Moral Implications: AGI-Human Interaction
- How should we treat AGIs?
- Will they desire life and liberty? And to pursue happiness?
- How will we treat AGIs?
- Will they be moral amplifiers? Make us more what we are? Bring out our fears? Our best?
- How will AGIs act towards us?
- Rationally: they better understand the consequences of their actions. They lack primitive evolutionary survival instincts.
15. Moral Implications: Human Development and Integration
- How will AGIs change human morality?
- They will change the world: major impact on law, politics, and social justice (The Truth Machine). More rationality. Less material poverty and desperation. Coping with change.
- Will they help move us up Maslow's hierarchy?
- Implications of radical Intelligence Augmentation
- We will be much more like AGI than humans: AGI thought and morality will dominate.
16. Moral Implications: Rational Ethics
- Rational Personal Ethics
- Principles for Optimal Living
- AGIs will have rational virtues
- AGI as wise oracle / mentor
- AGIs will help us become more virtuous
17. Legal Implications
- What are the primary legal issues?
  - To protect humans, or AGIs? (or governments!)
  - Will AGIs want life, power, protection?
- Can the legal system respond fast enough to prevent or limit potential risks of AGI?
- Future of the legal system: AGI judges? Truth-based?
18. Summary
- AGI is fundamentally different from conventional AI
- Human-level AGI may well arrive in 3 to 6 years
- AGI will improve very rapidly beyond this stage
- AGIs are unlikely to have their own agenda
- AGIs are our best hope to protect and improve the human condition, and to improve our morality
- Powerful AGIs will arrive long before significant IA
- Legal issues are more likely to center around limiting the production or use of AGIs, rather than protecting AGIs
- Legal mechanisms will be ineffective
19. What to do?
- Contact me: peter_at_optimal.org
20. References
- AGI: Artificial General Intelligence
  - http://adaptiveai.com/faq/index.htm
  - http://adaptiveai.com/research/index.htm
- The Truth Machine, by James Halperin
- Existential Risks: Analyzing Human Extinction Scenarios
  - http://nickbostrom.com/existential/risks.html
- True Morality: Rational Principles for Optimal Living
  - http://optimal.org/peter/rational_ethics.htm
- Why Machines will become Hyper-Intelligent before Humans do
  - http://optimal.org/peter/hyperintelligence.htm
21. What to do
- Find ways to ensure(?) that AGI technology is used in ways that benefit humans
- Create forums for AGI researchers and developers to gain a better understanding of crucial issues and options, to allow us to make better decisions
- Get smart, wise people from outside of the AGI community involved in this process