Transcript and Presenter's Notes

Title: Ethical


1
Ethical Mathematical Adventures in Coding Better
Children
  • Jeff Medina
    Senior Systems Engineer, Lockheed Martin TSS
    Associate Director, The Singularity Institute
    for Artificial Intelligence
    Fellow, The Institute for Ethics and Emerging
    Technologies

2
But first: A Brief Digression to Bring
Everyone-Less-ε to Near-Agreement on the
Presumption of the Moral Significance of
Behaviorally-Personish Artificial Beings
3
  • Moral Equivalence in Practice (MEIP): An
    artificial being should be treated at least as
    well as if she were a natural being in the
    nearest cognitive-behavioral class.
  • So, for example:
  • treat artificial people like people,
  • and artificial puppies like puppies,
  • but treat artificial
    puppies-with-people-level-minds like people,
    and so forth.
  • More succinctly: TREAT ALIFE LIKE LIFE.

4
Why MEIP? A brief argument
  • P(x ∈ Y | for nearly all F, F(y) ⇒ F(x)) > 0,
    and arguably ≥ 0.5: if x shares nearly every
    feature F with the natural members y of class
    Y, the chance that x genuinely belongs to Y is
    non-negligible.
  • If x ∈ Y but humans act as if x ∉ Y, massive
    suffering ensues, scaling linearly with C(X),
    with C(X) itself plausibly growing
    exponentially.
  • In the opposite case, the expected negative
    utility (from wasted resources) is less than
    the above, due to ALife per-capita resource
    efficiency. (A numerical sketch follows below.)
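To make the claimed asymmetry concrete, here is a
minimal Python sketch. Every number in it (the
probability, the per-capita harm and waste, the
population) is an invented illustration, not a
figure from the talk.

# Hypothetical sketch of the MEIP expected-disutility
# asymmetry. All constants are illustrative assumptions.

def expected_disutility(p_moral_patient: float,
                        harm_if_mistreated: float,
                        waste_if_overtreated: float,
                        population: int) -> dict:
    """Expected negative utility of the two possible policies.

    p_moral_patient:      P(x in Y), the chance the artificial
                          being really is in the morally
                          significant class Y.
    harm_if_mistreated:   per-capita suffering if a moral
                          patient is treated as a mere thing.
    waste_if_overtreated: per-capita resource cost of treating
                          a mere thing as a patient (small, per
                          the ALife-efficiency premise).
    population:           C(X), the number of such beings.
    """
    treat_as_thing = p_moral_patient * harm_if_mistreated * population
    treat_as_patient = (1 - p_moral_patient) * waste_if_overtreated * population
    return {"treat_as_thing": treat_as_thing,
            "treat_as_patient": treat_as_patient}

# Even at even odds the asymmetry favors MEIP, and the gap
# widens as C(X) grows (plausibly exponentially).
print(expected_disutility(p_moral_patient=0.5,
                          harm_if_mistreated=10.0,
                          waste_if_overtreated=1.0,
                          population=1_000_000))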

5
What responsibility do we have for the lives we
create?
  • Parents... Gods... and A.G.I. developers? All
    quite similar, as it turns out.

6
Case 1The responsibility of parents toward
their children
  • Procreative Beneficence (Savulescu 2001):
    Parents should select the child, of the
    possible children they could have, who is
    expected to have the best life, or at least as
    good a life as the others, based on the
    relevant, available information.

7
Analogue to Case 1: For those who would build (or
birth) A.G.I.
  • Programmatic Beneficence! (Medina, just now):
    The A.G.I.-expectant should select the A.G.I.,
    of the possible A.G.I. they could build, who is
    expected to have the best life, or at least as
    good a life as the others, based on the
    relevant, available information. (A
    selection-rule sketch follows below.)
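Read as a bare decision rule, Programmatic
Beneficence is an argmax over candidate designs.
In this sketch, expected_life_quality is a
hypothetical stand-in for scoring a design on "the
relevant, available information":

from typing import Callable, Sequence

def select_agi(designs: Sequence[str],
               expected_life_quality: Callable[[str], float]) -> str:
    """Pick the candidate A.G.I. whose expected life is best,
    or at least as good as any alternative's."""
    return max(designs, key=expected_life_quality)

# Hypothetical usage with made-up scores; ties are broken by
# iteration order, mirroring the "at least as good" clause.
scores = {"design_a": 0.6, "design_b": 0.9, "design_c": 0.9}
print(select_agi(list(scores), scores.get))  # -> design_b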

8
Case 2: Theodicy, Epicurus et al.'s problem of
evil
  • If (insert favorite deity here) is
  • maximally morally good → wants to help us,
  • omnipotent → is capable of helping us,
  • and omniscient... → knows how to help us...

9
WHY DOES LIFE SUCK?!? (relatively speaking)
10
This reasoning reverse-implicates obligations for
A.G.I. creators:
With greater control over a sentient being's life
comes greater responsibility to do right by it.
11
But how much is enough?
  • Do we satisfice? Arbitrary! Nooooo...
  • Do we maximize? Yes! But how?

12
Considerations
  • The intelligence of the A.G.I. designer
    constrains the achievable gain dI, given n
    resources and t time.
  • Does finding further improvements require
    similar intelligence and resources, or more? If
    more, how much more? Is the scaling linear,
    exponential, ...? (A toy cost model follows
    below.)
  • Create the A.G.I. at intelligence equal to or
    greater than the creator's, and provide n
    resources to the A.G.I. for use in improving
    herself? (Bad idea in the case of humans, but
    maybe less so for A.G.I. How malleable is your
    design after the initial developmental period?)
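A toy Python model of that scaling question. The
cost schedules here are assumptions chosen only to
contrast the two regimes, not claims about real
A.G.I. development:

# How many unit gains in intelligence I can a fixed resource
# budget n buy, given a cost schedule for the k-th improvement?

def improvement_steps(budget: float, cost_of_step) -> int:
    steps, spent = 0, 0.0
    while spent + cost_of_step(steps) <= budget:
        spent += cost_of_step(steps)
        steps += 1
    return steps

linear = lambda k: 1.0 + k        # each step a bit harder
exponential = lambda k: 2.0 ** k  # each step doubles in cost

for name, schedule in [("linear", linear),
                       ("exponential", exponential)]:
    print(name, improvement_steps(budget=100.0,
                                  cost_of_step=schedule))
# linear -> 13 steps; exponential -> 6 steps. Under exponential
# difficulty a creator's head start buys far fewer improvements,
# which bears on whether to hand the n resources to the A.G.I.
# herself.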