PETSc and Neuronal Networks - PowerPoint PPT Presentation

1
PETSc and Neuronal Networks
  • Toby Isaac
  • VIGRE Seminar, Wednesday, November 15, 2006

2
Tips for general ODEs
3
Tips for general ODEs
  • Recall: have an input program to convert to PETSc
    binary format (sketch below)
  • e.g. Vec for initial values, Mat for a linear ODE,
    adjacency/connectivity Mat
  • PetscBinaryView for arrays of scalars (see
    exampleinput.c)
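
A minimal sketch of such an input program, written against the current PETSc API (PetscCall error handling); the file name, sizes, and values are illustrative, not from the talk:

```c
#include <petsc.h>

int main(int argc, char **argv)
{
  Vec         u0;      /* initial values */
  Mat         A;       /* adjacency/connectivity matrix */
  PetscViewer viewer;
  PetscInt    n = 4;   /* illustrative network size */

  PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));

  PetscCall(VecCreate(PETSC_COMM_WORLD, &u0));
  PetscCall(VecSetSizes(u0, PETSC_DECIDE, n));
  PetscCall(VecSetFromOptions(u0));
  PetscCall(VecSet(u0, -65.0));                        /* e.g. resting potential */

  PetscCall(MatCreate(PETSC_COMM_WORLD, &A));
  PetscCall(MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, n, n));
  PetscCall(MatSetFromOptions(A));
  PetscCall(MatSetUp(A));
  PetscCall(MatSetValue(A, 0, 1, 1.0, INSERT_VALUES)); /* connection: cell 0 -> cell 1 */
  PetscCall(MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY));
  PetscCall(MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY));

  /* objects are read back later (VecLoad/MatLoad) in the order written */
  PetscCall(PetscViewerBinaryOpen(PETSC_COMM_WORLD, "net.dat", FILE_MODE_WRITE, &viewer));
  PetscCall(VecView(u0, viewer));
  PetscCall(MatView(A, viewer));
  PetscCall(PetscViewerDestroy(&viewer));

  PetscCall(VecDestroy(&u0));
  PetscCall(MatDestroy(&A));
  PetscCall(PetscFinalize());
  return 0;
}
```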

4
Tips for general ODEs
  • To keep scalar parameters organized (end time,
    dt, number of cells, etc.) use a PetscBag
    (sketch below)
  • Allows you to save a struct in binary and read it
    in to all processors
  • No need to keep track of the order in which
    scalars are written/read
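
A PetscBag sketch; the struct fields, default values, and file name are illustrative assumptions:

```c
#include <petsc.h>

typedef struct { /* illustrative parameter struct */
  PetscReal tfinal; /* end time        */
  PetscReal dt;     /* time step       */
  PetscInt  ncells; /* number of cells */
} Params;

PetscErrorCode LoadParams(PetscBag *bag, Params **p)
{
  PetscViewer viewer;

  PetscFunctionBeginUser;
  PetscCall(PetscBagCreate(PETSC_COMM_WORLD, sizeof(Params), bag));
  PetscCall(PetscBagGetData(*bag, (void **)p));
  /* register each field once; write/read order never matters again */
  PetscCall(PetscBagRegisterReal(*bag, &(*p)->tfinal, 100.0, "tfinal", "end time"));
  PetscCall(PetscBagRegisterReal(*bag, &(*p)->dt, 0.01, "dt", "time step"));
  PetscCall(PetscBagRegisterInt(*bag, &(*p)->ncells, 1000, "ncells", "number of cells"));
  /* read values saved earlier with PetscBagView on a binary viewer;
     every processor ends up with the same struct */
  PetscCall(PetscViewerBinaryOpen(PETSC_COMM_WORLD, "params.dat", FILE_MODE_READ, &viewer));
  PetscCall(PetscBagLoad(viewer, *bag));
  PetscCall(PetscViewerDestroy(&viewer));
  PetscFunctionReturn(PETSC_SUCCESS);
}
```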

5
Tips for general ODEs
  • Recall: using Load functions, parallel layout
    is specified at read-in
  • Exception: arrays only go to the first processor
  • Use MPI_Bcast to send those arrays to all
    processors (sketch below)
  • e.g. a piecewise constant injected current
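
A sketch of that pattern; the array length, file name, and helper name are assumptions:

```c
#include <petsc.h>

/* Broadcast an array that PetscBinaryRead placed only on rank 0,
   e.g. a piecewise constant injected-current table. */
PetscErrorCode BcastCurrents(PetscInt n, PetscScalar **inj)
{
  PetscMPIInt rank;
  int         fd;

  PetscFunctionBeginUser;
  PetscCall(PetscMalloc1(n, inj));
  PetscCallMPI(MPI_Comm_rank(PETSC_COMM_WORLD, &rank));
  if (rank == 0) {
    PetscCall(PetscBinaryOpen("currents.dat", FILE_MODE_READ, &fd));
    PetscCall(PetscBinaryRead(fd, *inj, n, NULL, PETSC_SCALAR));
    PetscCall(PetscBinaryClose(fd));
  }
  /* MPIU_SCALAR matches PetscScalar whether real or complex */
  PetscCallMPI(MPI_Bcast(*inj, (PetscMPIInt)n, MPIU_SCALAR, 0, PETSC_COMM_WORLD));
  PetscFunctionReturn(PETSC_SUCCESS);
}
```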

6
Tips for general ODEs
  • A TS object keeps track of the settings for
    time-stepping
  • Same old song: TSCreate and TSDestroy
  • TSSetType: forward Euler, backward Euler,
    ode45, (pseudo-timestepping)
  • TSSetProblemType: linear, nonlinear
    (sketch below)
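
A minimal creation sketch using the current type names (TSEULER, TSBEULER, TSRK, TSPSEUDO; TSRK is PETSc's adaptive Runge-Kutta, the analogue of ode45):

```c
#include <petsc.h>

/* Create a time-stepper, pick a method, declare the problem type.
   Pair with TSDestroy(&ts) when finished. */
PetscErrorCode MakeStepper(TS *ts)
{
  PetscFunctionBeginUser;
  PetscCall(TSCreate(PETSC_COMM_WORLD, ts));
  PetscCall(TSSetType(*ts, TSRK));                /* or TSEULER, TSBEULER, TSPSEUDO */
  PetscCall(TSSetProblemType(*ts, TS_NONLINEAR)); /* or TS_LINEAR */
  PetscCall(TSSetFromOptions(*ts));               /* honor -ts_type etc. at run time */
  PetscFunctionReturn(PETSC_SUCCESS);
}
```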

7
Tips for general ODEs
  • TSSetSolution: set initial conditions
  • TSSetRHSFunction/TSSetRHSMatrix
  • The specified function has the format
    rhsfunc(ts, t, u, du, void *args)
  • Create a struct for passing additional arguments
    (sketch below)
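
A sketch of an RHS callback with a context struct, in the current API (where the registration call also takes an optional work Vec); the leak dynamics and field names are illustrative:

```c
#include <petsc.h>

typedef struct { /* extra arguments for the RHS; fields illustrative */
  PetscReal gleak, vrest;
} AppCtx;

/* du/dt = -gleak * (u - vrest): placeholder leak dynamics */
PetscErrorCode RHSFunction(TS ts, PetscReal t, Vec u, Vec du, void *args)
{
  AppCtx *ctx = (AppCtx *)args;

  PetscFunctionBeginUser;
  PetscCall(VecCopy(u, du));
  PetscCall(VecShift(du, -ctx->vrest));
  PetscCall(VecScale(du, -ctx->gleak));
  PetscFunctionReturn(PETSC_SUCCESS);
}

/* registration:
     TSSetSolution(ts, u0);                           set initial conditions
     TSSetRHSFunction(ts, NULL, RHSFunction, &ctx);   NULL: TS makes its own work Vec */
```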

8
Tips for general ODEs
  • TSSetRHSJacobian, if the method calls for it
  • TSSetInitialTimeStep (that is, initial time and
    initial time step)
  • TSSetDuration
  • TSRKSetTolerance (sketch below)
  • Controls absolute error over the whole time of
    integration: a bit sketchy
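
A configuration sketch. TSSetInitialTimeStep, TSSetDuration, and TSRKSetTolerance are the 2006-era names used on the slide; current PETSc splits them into the calls below (all values illustrative):

```c
#include <petsc.h>

PetscErrorCode ConfigureStepping(TS ts)
{
  PetscFunctionBeginUser;
  PetscCall(TSSetTime(ts, 0.0));        /* initial time      (was TSSetInitialTimeStep) */
  PetscCall(TSSetTimeStep(ts, 0.01));   /* initial time step (was TSSetInitialTimeStep) */
  PetscCall(TSSetMaxTime(ts, 100.0));   /* final time        (was TSSetDuration)        */
  PetscCall(TSSetMaxSteps(ts, 100000)); /* max steps         (was TSSetDuration)        */
  PetscCall(TSSetTolerances(ts, 1e-6, NULL, 1e-6, NULL)); /* (was TSRKSetTolerance)     */
  PetscFunctionReturn(PETSC_SUCCESS);
}
```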

9
Tips for general ODEs
  • If only interested in the final state, run TSStep
    to execute
  • If interested in progress along the way, you need
    a monitor function
  • Runs after every time step; can output, plot,
    change parameters, change the time step, etc.

10
Tips for general ODEs
  • Multiple monitor functions can run, e.g. one for
    parameter changes, one for output
  • Attention IAF modelers: you can change the state
    vector too! (sketch below)
  • Syntax: TSSetMonitor,
  • monitor(ts, iter#, t, u, void *args)
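
A monitor sketch. Current PETSc registers monitors with TSMonitorSet (the slide's TSSetMonitor is the older name); the integrate-and-fire threshold and reset values are illustrative:

```c
#include <petsc.h>

/* Runs after every accepted step; implements an IAF-style reset by
   modifying the state vector in place, then prints progress. */
PetscErrorCode Monitor(TS ts, PetscInt iter, PetscReal t, Vec u, void *args)
{
  PetscScalar *x;
  PetscInt     i, nlocal;

  PetscFunctionBeginUser;
  PetscCall(VecGetLocalSize(u, &nlocal));
  PetscCall(VecGetArray(u, &x));
  for (i = 0; i < nlocal; i++) {
    if (PetscRealPart(x[i]) > -50.0) x[i] = -65.0; /* spike -> reset */
  }
  PetscCall(VecRestoreArray(u, &x));
  PetscCall(PetscPrintf(PETSC_COMM_WORLD, "step %" PetscInt_FMT ", t = %g\n",
                        iter, (double)t));
  PetscFunctionReturn(PETSC_SUCCESS);
}

/* registration: TSMonitorSet(ts, Monitor, &ctx, NULL);
   several monitors may be registered and run in order */
```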

11
Tips for Homogeneous Nets
12
Tips for Homogeneous Nets
  • Most dependency occurs within a cell: bad to have
    one cell divided across processors
  • No guarantee that PETSC_DECIDE won't split your
    vector this way

13
Tips for Homogeneous Nets
  • Have a vector y of length = number of cells
  • PETSc evenly distributes this vector
  • nlocal = VecGetLocalSize(y)
  • VecCreateMPI(…, neqns*nlocal,
    PETSC_DETERMINE, &x) (sketch below)
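
A sketch of the layout-matching trick, assuming neqns equations per cell:

```c
#include <petsc.h>

/* Build the state vector x so each processor owns exactly neqns values
   for every cell it owns in the per-cell vector y: no cell is split. */
PetscErrorCode MatchLayout(PetscInt cells, PetscInt neqns, Vec *x)
{
  Vec      y;
  PetscInt nlocal;

  PetscFunctionBeginUser;
  PetscCall(VecCreate(PETSC_COMM_WORLD, &y));
  PetscCall(VecSetSizes(y, PETSC_DECIDE, cells)); /* one entry per cell */
  PetscCall(VecSetFromOptions(y));
  PetscCall(VecGetLocalSize(y, &nlocal));         /* cells on this rank */
  PetscCall(VecCreateMPI(PETSC_COMM_WORLD, neqns * nlocal,
                         PETSC_DETERMINE, x));    /* whole cells stay local */
  PetscCall(VecDestroy(&y));
  PetscFunctionReturn(PETSC_SUCCESS);
}
```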

14
Tips for Homogeneous Nets
  • VecSetBlockSize: set this to the number of
    equations per cell
  • VecStrideGather: send the value at the same index
    in each block to another vector
  • VecStrideScatter: send values from a vector to
    the same index in each block (sketch below)
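
A sketch of the stride pair, taking index 0 within each block as the voltage equation (an assumption):

```c
#include <petsc.h>

/* Pull equation 0 out of every cell block, modify it, push it back.
   v must have length = number of cells, with a layout matching x's blocks. */
PetscErrorCode TouchVoltages(Vec x, PetscInt neqns, Vec v)
{
  PetscFunctionBeginUser;
  PetscCall(VecSetBlockSize(x, neqns));               /* neqns eqns per cell  */
  PetscCall(VecStrideGather(x, 0, v, INSERT_VALUES)); /* x[i*neqns+0] -> v[i] */
  PetscCall(VecShift(v, 1.0));                        /* illustrative change  */
  PetscCall(VecStrideScatter(v, 0, x, INSERT_VALUES));/* v[i] -> x[i*neqns+0] */
  PetscFunctionReturn(PETSC_SUCCESS);
}
```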

15
Tips for Homogeneous Nets
  • Paradigm for ease/simplicity: gather like
    indices, make changes, scatter back
  • VecStrideGatherAll/VecStrideScatterAll take the
    state vector and break it up into an array of
    vectors, one for each equivalent index

16
Tips for Homogeneous Nets
  • In RHSFunction, Vec U and Vec DU are inputs
  • Declare arrays Vec u[neqns], du[neqns]
  • VecStrideGatherAll at the start
  • Set du[i] in terms of u for each i
  • VecStrideScatterAll at the end (sketch below)
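
A sketch of that pattern for a two-equation cell model. The dynamics are placeholders, and the per-equation work vectors are assumed to be created once (length = number of cells) and kept in the context struct:

```c
#include <petsc.h>

#define NEQNS 2

typedef struct {
  Vec u[NEQNS], du[NEQNS]; /* per-equation work vectors, length = #cells */
} AppCtx;

/* U and DU are assumed to carry block size NEQNS (VecSetBlockSize) */
PetscErrorCode RHSFunction(TS ts, PetscReal t, Vec U, Vec DU, void *args)
{
  AppCtx *c = (AppCtx *)args;

  PetscFunctionBeginUser;
  /* split the interleaved state into one vector per equation */
  PetscCall(VecStrideGatherAll(U, c->u, INSERT_VALUES));
  /* placeholder dynamics: du[0] = u[1], du[1] = -u[0] */
  PetscCall(VecCopy(c->u[1], c->du[0]));
  PetscCall(VecCopy(c->u[0], c->du[1]));
  PetscCall(VecScale(c->du[1], -1.0));
  /* interleave the results back into DU */
  PetscCall(VecStrideScatterAll(c->du, DU, INSERT_VALUES));
  PetscFunctionReturn(PETSC_SUCCESS);
}
```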

17
Tips for Homogeneous Nets
  • For very large networks and large numbers of
    processors, message passing will take its toll
  • Order cells so that connections occur between
    close numbers
  • MatGetOrdering, MATORDERING_RCM, MatPermute
    (sketch below)
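
A sketch of the reordering step on the connectivity matrix (MATORDERINGRCM is the current spelling of MATORDERING_RCM):

```c
#include <petsc.h>

/* Reduce the bandwidth of the connectivity matrix with reverse
   Cuthill-McKee so connected cells get nearby indices. */
PetscErrorCode ReorderNet(Mat A, Mat *Aperm)
{
  IS rowperm, colperm;

  PetscFunctionBeginUser;
  PetscCall(MatGetOrdering(A, MATORDERINGRCM, &rowperm, &colperm));
  PetscCall(MatPermute(A, rowperm, colperm, Aperm));
  PetscCall(ISDestroy(&rowperm));
  PetscCall(ISDestroy(&colperm));
  PetscFunctionReturn(PETSC_SUCCESS);
}
```

The same permutation can be applied to the initial-value Vec so the cell numbering stays consistent.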

18
Tips for Inhomogeneous Nets
19
Tips for Inhomogeneous Nets
  • VecSetBlockSize is no longer an option
  • Can be reproduced with the more generic
    VecGather/VecScatter
  • Requires the creation of arrays of VecScatter
    objects, one for each state
  • A VecScatter is created from two IS index objects,
    one into the FROM Vec and one into the TO Vec
    (sketch below)
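
A sketch of building one such scatter for a "voltage" state, assuming idx[] lists the (possibly irregular) global positions of each cell's voltage in the state vector x:

```c
#include <petsc.h>

/* Scatter the n voltage entries, located at global indices idx[] of the
   state vector x, into a contiguous vector v of length n. */
PetscErrorCode MakeVoltageScatter(Vec x, Vec v, PetscInt n,
                                  const PetscInt idx[], VecScatter *sc)
{
  IS from, to;

  PetscFunctionBeginUser;
  PetscCall(ISCreateGeneral(PETSC_COMM_WORLD, n, idx, PETSC_COPY_VALUES, &from));
  PetscCall(ISCreateStride(PETSC_COMM_WORLD, n, 0, 1, &to));
  PetscCall(VecScatterCreate(x, from, v, to, sc));
  PetscCall(ISDestroy(&from));
  PetscCall(ISDestroy(&to));
  /* use: VecScatterBegin/End(*sc, x, v, INSERT_VALUES, SCATTER_FORWARD),
     then SCATTER_REVERSE to push modified values back */
  PetscFunctionReturn(PETSC_SUCCESS);
}
```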

20
Tips for Inhomogeneous Nets
  • Different types gathered on separate processors,
    or mixed?
  • Gathered is easiest to implement: specify which
    processor, treat like the homogeneous case
  • Mixed is faster: balanced processor load,
    potentially less message passing

21
Tips for Inhomogeneous Nets
  • If there are ODEs for each connection, then the
    need for mixed distribution is greater
  • Ideally (if the disparity in eqns isn't great):
  • Lump all cells together, RCM permute, equal
    cells per processor

22
Tips for Adaptive Time Step
23
Tips for Adaptive Time Step
  • PETSc RK45 < ode45
  • Will not integrate across discontinuities
  • For discontinuities you know of (e.g. a time
    dependent forcing function):
  • Loop: TSSetDuration to the discontinuity, TSStep
    (sketch below)
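
A sketch of that loop with the current API, where the slide's TSSetDuration/TSStep pair corresponds to TSSetMaxTime/TSSolve; the breakpoint array tdisc[] is assumed given:

```c
#include <petsc.h>

/* Integrate piecewise between known discontinuities so the adaptive RK
   method never steps across one. tdisc[] holds the breakpoint times. */
PetscErrorCode SolvePiecewise(TS ts, Vec u, PetscInt nd, const PetscReal tdisc[])
{
  PetscInt k;

  PetscFunctionBeginUser;
  for (k = 0; k < nd; k++) {
    PetscCall(TSSetMaxTime(ts, tdisc[k])); /* run exactly up to the jump */
    PetscCall(TSSetExactFinalTime(ts, TS_EXACTFINALTIME_MATCHSTEP));
    PetscCall(TSSolve(ts, u));             /* continues from current time */
    /* ... apply the jump here, e.g. switch the forcing function ... */
  }
  PetscFunctionReturn(PETSC_SUCCESS);
}
```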