Introduction to Neural Networks - PowerPoint PPT Presentation

1
Introduction to Neural Networks
  • Gianluca Pollastri, Head of Lab
  • School of Computer Science and Informatics
  • Complex and Adaptive Systems Labs
  • University College Dublin
  • gianluca.pollastri@ucd.ie

2
Credits
  • Geoffrey Hinton, University of Toronto.
  • I borrowed some of his slides for the Neural
    Networks and Computation in Neural Networks
    courses.
  • Paolo Frasconi, University of Florence.
  • This guy taught me Neural Networks in the first
    place (and I borrowed some of his slides too!).

3
Recurrent Neural Networks (RNN)
  • One of the earliest versions: Jeffrey Elman,
    1990, Cognitive Science.
  • Problem: it isn't easy to represent time with
    Feedforward Neural Nets; usually time is
    represented with space.
  • Attempt to design networks with memory.

4
RNNs
  • The idea is to have discrete time steps, and to
    consider the hidden layer at time t-1 as an
    input at time t.
  • This effectively removes cycles: we can model the
    network with an FFNN, and model memory
    explicitly.

5
[Diagram: a recurrent network whose feedback connection passes through a delay element d.]
6
BPTT
  • BackPropagation Through Time.
  • If Ot is the output at time t, It the input at
    time t, and Xt the memory (hidden state) at time
    t, we can model the dependencies as follows:
  • Xt = f( Xt-1 , It )
  • Ot = g( Xt , It )
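As a concrete illustration, here is a minimal NumPy sketch of these two maps, assuming f is a single tanh layer and g a linear one (the weight names and shapes are illustrative assumptions, not from the slides):

    import numpy as np

    def f(x_prev, i_t, Wx, Wi):
        # memory update: Xt = f(Xt-1, It)
        return np.tanh(Wx @ x_prev + Wi @ i_t)

    def g(x_t, i_t, Wo, Wg):
        # output map: Ot = g(Xt, It)
        return Wo @ x_t + Wg @ i_t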

7
BPTT
  • We can model both f() and g() with (possibly
    multilayered) networks.
  • We can transform the recurrent network by
    unrolling it in time.
  • Backpropagation works on any DAG. An RNN becomes
    one once it's unrolled (see the sketch below).
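Concretely, unrolling is nothing more than running the same f and g inside a loop over time; a sketch reusing the illustrative f and g defined above:

    def unroll(I, x0, f_params, g_params):
        # The unrolled computation is a DAG, so ordinary
        # backpropagation applies to it directly.
        X, O = [x0], []
        for i_t in I:
            X.append(f(X[-1], i_t, *f_params))
            O.append(g(X[-1], i_t, *g_params))
        return X, O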

8
[Diagram: the recurrent network again, with the delay element d on the feedback connection.]
10
gradient in BPTT
  • GRADIENT(I, O, T)
  • I = inputs, O = outputs, T = targets
  • T ← size(O)
  • X0 ← 0
  • for t ← 1..T
  •   Xt ← f( Xt-1 , It )
  • for t ← 1..T
  •   Ot ← g( Xt , It )
  •   g.gradient( Ot - Tt )
  •   dt ← g.deltas( Ot - Tt )
  • for t ← T..1
  •   f.gradient( dt )
  •   dt-1 ← f.deltas( dt )
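A runnable NumPy version of this routine, under the illustrative assumptions that f is a single tanh layer, g is linear, and the error is squared (none of this is fixed by the slides):

    import numpy as np

    def bptt_gradient(I, targets, Wx, Wi, Wo):
        # Forward: unroll the recurrence in time, with X[0] = 0.
        T = len(I)
        n = Wx.shape[0]
        X = [np.zeros(n)]
        O = []
        for t in range(T):
            X.append(np.tanh(Wx @ X[t] + Wi @ I[t]))
            O.append(Wo @ X[t + 1])
        # Backward: accumulate gradients through the unrolled DAG.
        gWx, gWi, gWo = np.zeros_like(Wx), np.zeros_like(Wi), np.zeros_like(Wo)
        dx = np.zeros(n)                      # delta flowing back through memory
        for t in reversed(range(T)):
            do = O[t] - targets[t]            # squared-error delta at the output
            gWo += np.outer(do, X[t + 1])
            dx += Wo.T @ do                   # error entering Xt from g
            da = dx * (1.0 - X[t + 1] ** 2)   # back through the tanh
            gWx += np.outer(da, X[t])
            gWi += np.outer(da, I[t])
            dx = Wx.T @ da                    # error passed on to Xt-1
        return gWx, gWi, gWo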

13-23
[Animated figure: the RNN unrolled in time. Successive slides add the memory states Xt-2, Xt-1, Xt, Xt+1, Xt+2 and the corresponding outputs Ot-2, Ot-1, Ot, Ot+1, Ot+2 along the time axis.]
24
What I will talk about
  • Neurons
  • Multi-Layered Neural Networks
  • Basic learning algorithm
  • Expressive power
  • Classification
  • How can we actually train Neural Networks?
  • Speeding up training
  • Learning just right (not too little, not too
    much)
  • Figuring out whether you got it right
  • Feed-back networks?
  • Anecdotes on real feed-back networks (Hopfield
    Nets, Boltzmann Machines)
  • Recurrent Neural Networks
  • Bidirectional RNN
  • 2D-RNN
  • Concluding remarks

25
Bidirectional Recurrent Neural Networks (BRNN)
26
BRNN
  • Ft = φ( Ft-1 , Ut )
  • Bt = β( Bt+1 , Ut )
  • Yt = η( Ft , Bt , Ut )
  •  
  • φ(), β() and η() are realised with NNs
  • φ(), β() and η() are independent of t
    (stationary)

30
Inference in BRNNs
  • FORWARD(U)
  • T ← size(U)
  • F0 ← BT+1 ← 0
  • for t ← 1..T
  •   Ft ← φ( Ft-1 , Ut )
  • for t ← T..1
  •   Bt ← β( Bt+1 , Ut )
  • for t ← 1..T
  •   Yt ← η( Ft , Bt , Ut )
  • return Y
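A hedged NumPy sketch of this inference pass, assuming single tanh layers for φ and β and a linear η (all weight names and shapes are illustrative):

    import numpy as np

    def brnn_forward(U, Wf, Wfu, Wb, Wbu, Wyf, Wyb, Wyu):
        # Ft = tanh(Wf Ft-1 + Wfu Ut): left-to-right context
        # Bt = tanh(Wb Bt+1 + Wbu Ut): right-to-left context
        # Yt = Wyf Ft + Wyb Bt + Wyu Ut
        T = len(U)
        nf, nb = Wf.shape[0], Wb.shape[0]
        F = [np.zeros(nf) for _ in range(T + 1)]   # F[0] = 0
        B = [np.zeros(nb) for _ in range(T + 2)]   # B[T+1] = 0
        for t in range(1, T + 1):
            F[t] = np.tanh(Wf @ F[t - 1] + Wfu @ U[t - 1])
        for t in range(T, 0, -1):
            B[t] = np.tanh(Wb @ B[t + 1] + Wbu @ U[t - 1])
        return [Wyf @ F[t] + Wyb @ B[t] + Wyu @ U[t - 1]
                for t in range(1, T + 1)]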

31
Learning in BRNNs
  • GRADIENT(U, Ŷ)    (Ŷ = targets)
  • T ← size(U)
  • F0 ← BT+1 ← 0
  • for t ← 1..T
  •   Ft ← φ( Ft-1 , Ut )
  • for t ← T..1
  •   Bt ← β( Bt+1 , Ut )
  • for t ← 1..T
  •   Yt ← η( Ft , Bt , Ut )
  •   dFt , dBt ← η.backprop&gradient( Yt - Ŷt )
  • for t ← T..1
  •   dFt-1 ← φ.backprop&gradient( dFt )
  • for t ← 1..T
  •   dBt+1 ← β.backprop&gradient( dBt )
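One way to sanity-check a hand-written gradient routine like this one (or the BPTT one earlier) is a central finite-difference check. A small sketch, where loss_fn is assumed to be a zero-argument closure that runs the network and returns the scalar error:

    import numpy as np

    def grad_check(loss_fn, W, gW, eps=1e-5):
        # Compare the analytic gradient gW of loss_fn w.r.t. W
        # against central finite differences, entry by entry.
        num = np.zeros_like(W)
        it = np.nditer(W, flags=['multi_index'])
        for _ in it:
            idx = it.multi_index
            saved = W[idx]
            W[idx] = saved + eps
            loss_plus = loss_fn()
            W[idx] = saved - eps
            loss_minus = loss_fn()
            W[idx] = saved                   # restore the weight
            num[idx] = (loss_plus - loss_minus) / (2 * eps)
        return np.max(np.abs(num - gW))      # should be tiny (~1e-7)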

32
What I will talk about
  • Neurons
  • Multi-Layered Neural Networks
  • Basic learning algorithm
  • Expressive power
  • Classification
  • How can we actually train Neural Networks?
  • Speeding up training
  • Learning just right (not too little, not too
    much)
  • Figuring out whether you got it right
  • Feed-back networks?
  • Anecdotes on real feed-back networks (Hopfield
    Nets, Boltzmann Machines)
  • Recurrent Neural Networks
  • Bidirectional RNN
  • 2D-RNN
  • Concluding remarks

33
2D RNNs
Pollastri & Baldi 2002, Bioinformatics; Baldi & Pollastri 2003, JMLR.
34-39
2D RNNs
[Figure slides: 2D-RNN architecture diagrams; no transcript available.]