1
Benchmarking for Physical Synthesis
  • Igor Markov and Prabhakar Kudva
  • The Univ. of Michigan / IBM

2
In This Talk
  • Benchmarking vs benchmarks
  • Benchmarking exposes new research questions
  • Why industry should care about benchmarking
  • What is (and is not) being done to improve
    benchmarking infrastructure
  • Not in this talk, but in a focus group
  • Incentives for verifying published work
  • How to accelerate a culture change

3
Benchmarking
  • Design benchmarks
  • Data model / representation → Instances
  • Objectives (QOR metrics) and constraints
  • Algorithms, methodologies → Implementations
  • Solvers → ditto
  • Empirical and theoretical analyses, e.g.,
  • Hard vs easy benchmarks (regardless of size)
  • Correlation between different objectives
  • Upper / lower bounds for QOR, statistical behavior, etc.
  • Dualism between benchmarks and solvers
  • For more details, see http://gigascale.org/bookshelf

4
Industrial Benchmarking
  • Growing size and complexity of VLSI chips
  • Design objectives
  • Area / power / yield / etc.
  • Design constraints
  • Timing / FP / fixed-die partitions / fixed IPs / routability / pin access / signal integrity
  • Can the same algorithm excel in all contexts?
  • Sophistication of layout and logic motivates open benchmarking for synthesis and P&R

5
Design Types
  • ASICs
  • Lots of fixed I/Os, few macros, millions of
    standard cells
  • Design densities: 40-80% (IBM)
  • Flat and hierarchical designs
  • SoCs
  • Many more macro blocks, cores
  • Datapaths + control logic
  • Can have very low design densities: < 20%
  • Microprocessor (µP) Random Logic Macros (RLM)
  • Hierarchical partitions are LSPR instances (5-30K)
  • High placement densities: 80-98% (low whitespace)
  • Many fixed I/Os, relatively few standard cells
  • Note: Partitioning with Terminals (DAC '99, ISPD '99, ASP-DAC '00)

6
Why Invest in Benchmarking
  • Academia
  • Benchmarks can identify / capture new research
    problems
  • Empirical validation of novel research
  • Open-source tools/BMs can be analyzed and tweaked
  • Industry
  • Evaluation and transfer of academic research
  • Support for executive decisions (which tools are relatively weak and must be improved)
  • Open-source tools/BMs can be analyzed and tweaked
  • When is an EDA problem (not) solved?
  • Are there good solver implementations?
  • Can they solve existing benchmarks?

7
Participation / Leadership Necessary
  • Activity 1: Benchmarking platform / flows
  • Activity 2: Establishing common evaluators
  • Static timing analysis
  • Congestion / yield prediction
  • Power estimation
  • Activity 3: Standard-cell libraries
  • Activity 4: Large designs with bells & whistles
  • Activity 5: Automation of benchmarking

8
Activity 1: Benchmarking Platform
  • Benchmarking platform: a reasonable subset of
  • data model
  • specific data representations (e.g., file
    formats)
  • access mechanisms (e.g., APIs)
  • reference implementation (e.g., a design
    database)
  • design examples in compatible formats
  • Base platforms available (next slide)
  • More participation necessary
  • regular discussions
  • additional tasks / features outlined

9
Common Methodology Platform
(Flow diagram) Common Model (OpenAccess?); Synthesis (SIS, MVSIS); BLIF → Bookshelf format; Placement (Capo, Dragon, Feng Shui, mPL, ...)
Blue flow exists; common-model hooks to be done
11
Placement Utilities
  • http://vlsicad.eecs.umich.edu/BK/PlaceUtils/
  • Accept input in the GSRC Bookshelf format
  • Format converters (a reader sketch follows this list)
  • LEF/DEF → Bookshelf
  • Bookshelf → Kraftwerk (DAC '98; BP, EJ)
  • BLIF (SIS) → Bookshelf
  • Evaluators, checkers, postprocessors and plotters
  • Contributions in these categories are welcome
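
To make the converters' target concrete, here is a minimal, hypothetical sketch (Python) of a reader for Bookshelf placement files; the .nodes/.pl line layouts below are assumptions based on the public GSRC Bookshelf conventions, not code from PlaceUtils.

```python
# Hypothetical sketch of a Bookshelf placement reader. Assumed conventions:
# a .nodes line reads "name width height [terminal]", a .pl line reads
# "name x y : orientation"; '#' starts a comment, "UCLA ..." is a header.

def read_nodes(path):
    """Return {name: (width, height, is_terminal)} from a .nodes file."""
    nodes = {}
    with open(path) as f:
        for line in f:
            line = line.split("#")[0].strip()        # drop comments
            if not line or line.startswith("UCLA") or ":" in line:
                continue                              # skip header/count lines
            parts = line.split()
            nodes[parts[0]] = (float(parts[1]), float(parts[2]),
                               "terminal" in parts[3:])
    return nodes

def read_placement(path):
    """Return {name: (x, y)} from a .pl file."""
    placement = {}
    with open(path) as f:
        for line in f:
            line = line.split("#")[0].strip()
            if not line or line.startswith("UCLA"):
                continue
            parts = line.split()
            placement[parts[0]] = (float(parts[1]), float(parts[2]))
    return placement
```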

12
Placement Utilities (cont'd)
  • Wirelength Calculator (HPWL) (see the sketch after this list)
  • Independent evaluation of placement results
  • Placement Plotter
  • Saves gnuplot scripts (→ .eps, .gif, ...)
  • Multiple views (cells only, cells + nets, rows, ...)
  • Probabilistic Congestion Maps (Lou et al.)
  • Gnuplot scripts
  • Matlab scripts
  • Better graphics, including 3-D fly-by views
  • .xpm files (→ .gif, .jpg, .eps, ...)
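
As a concrete reference, a minimal sketch of the HPWL metric the Wirelength Calculator reports: for each net, the half-perimeter of the bounding box of its pins, summed over all nets (cell positions stand in for pin positions here, i.e., pin offsets within cells are ignored for brevity).

```python
# Minimal sketch of an independent HPWL (half-perimeter wirelength) evaluator.

def hpwl(nets, positions):
    """nets: {net_name: [cell_name, ...]};  positions: {cell_name: (x, y)}."""
    total = 0.0
    for pins in nets.values():
        xs = [positions[c][0] for c in pins]
        ys = [positions[c][1] for c in pins]
        total += (max(xs) - min(xs)) + (max(ys) - min(ys))
    return total

# Example: one 3-pin net with a 4 x 2 bounding box gives HPWL = 6.
print(hpwl({"n1": ["a", "b", "c"]},
           {"a": (0, 0), "b": (4, 1), "c": (2, 2)}))
```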

13
Placement Utilities (cont'd)
  • Legality checker (a minimal check is sketched below)
  • Simple legalizer
  • Layout Generator
  • Given a netlist, creates a row structure
  • Tunable whitespace, aspect ratio, etc.
  • All available as binaries/Perl at http://vlsicad.eecs.umich.edu/BK/PlaceUtils/
  • Most source codes are shipped with Capo
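
For illustration, a minimal legality check in the spirit of the checker above; this is a sketch assuming integer coordinates, equal-height cells, and uniform rows (real checkers also verify site alignment and fixed-cell constraints).

```python
# Sketch of a row-based legality check: cells must sit on a row boundary,
# stay inside the core, and not overlap their neighbors within a row.

def is_legal(cells, row_height, num_rows, core_width):
    """cells: list of (x, y, width). Returns True iff the placement is legal."""
    rows = {}
    for x, y, w in cells:
        if y % row_height != 0 or not (0 <= y < num_rows * row_height):
            return False                       # cell not aligned to a row
        if x < 0 or x + w > core_width:
            return False                       # cell outside the core area
        rows.setdefault(y, []).append((x, w))
    for cells_in_row in rows.values():
        cells_in_row.sort()
        for (x1, w1), (x2, _) in zip(cells_in_row, cells_in_row[1:]):
            if x1 + w1 > x2:                   # neighbors overlap
                return False
    return True
```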

14
Activity 2: Creating Evaluators
  • Contribute measures/analysis tools for
  • Timing Analysis
  • Congestion/Yield
  • Power
  • Area
  • Noise, ...

15
Challenges for Evaluating Timing-Driven
Optimizations
  • QOR is not clearly defined
  • Max path length? Worst setup slack?
  • With false paths or without?...
  • Evaluation methods are not replicable (often shady)
  • Questionable delay models, technology parameters
  • Net topology generators (MST, single-trunk Steiner trees)
  • Inconsistent results: path delays < Σ gate delays
  • Public benchmarks?...
  • Anecdote: TD-place benchmarks in Verilog (ISPD '01)
  • Companies guard netlists, technology parameters
  • Cell libraries, area constraints

16
Metrics for Timing Reporting
  • STA is non-trivial: use PrimeTime or PKS
  • Distinguish between optimization and evaluation
  • Evaluate setup slack using commercial tools (a toy slack computation is sketched after this list)
  • Optimize individual nets and/or paths
  • E.g., net-length versus allocated budgets
  • Report all relevant data
  • How was the total wirelength affected?
  • Were per-net and per-path optimizations
    successful?
  • Did that improve worst slack or did something
    else?
  • Huge slack improvements were reported in some 1990s papers, but wire delays were then much smaller than gate delays
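
To illustrate the optimization-vs-evaluation distinction, here is a toy worst-slack computation over a combinational DAG with fixed per-gate delays; all names and numbers are illustrative, and the model ignores false paths, slews, loads, and interconnect, which is exactly why the slide recommends a commercial STA tool for evaluation.

```python
# Toy static timing sketch: arrival times by longest path, slack = required - arrival.

def worst_slack(gates, edges, primary_inputs, required):
    """gates: {g: delay}; edges: {g: [fanin, ...]}; required: {output: time}."""
    arrival = {}
    def at(g):                                # arrival time via memoized DFS
        if g not in arrival:
            fanins = edges.get(g, [])
            arrival[g] = gates.get(g, 0.0) + max((at(f) for f in fanins),
                                                 default=0.0)
        return arrival[g]
    for pi in primary_inputs:
        arrival[pi] = 0.0
    return min(required[o] - at(o) for o in required)

# Two-gate chain: delays 2 + 3 = 5 against a required time of 6 -> slack 1.
print(worst_slack({"g1": 2.0, "g2": 3.0}, {"g1": ["i"], "g2": ["g1"]},
                  ["i"], {"g2": 6.0}))
```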

17
Benchmarking Needs for Timing Optimization
  • A common, reusable STA methodology
  • High-quality, open-source infrastructure
  • False paths + realistic gate/delay models
  • Metrics validated against physical synthesis
  • The simpler the better, but they must be good predictors
  • Buffer insertion profoundly impacts layout
  • The use of linear wirelength in timing-driven layout assumes buffer insertion (min-cut vs quadratic; see the sketch below)
  • Apparently, synthesis is affected too
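
A back-of-the-envelope sketch of the buffering point: unbuffered Elmore-style delay grows quadratically with wire length, while splitting the wire into buffered segments keeps growth roughly linear, which is what justifies linear wirelength as a timing proxy. All constants below are made up, not from any real technology.

```python
# Quadratic (unbuffered) vs roughly linear (buffered) wire delay.
r, c = 0.1, 0.2          # wire resistance/capacitance per unit length (made up)
t_buf = 5.0              # fixed delay of one inserted buffer (made up)

def unbuffered(L):
    return 0.5 * r * c * L * L               # Elmore-style distributed RC delay

def buffered(L, k):
    seg = L / k                              # k equal segments, buffer between
    return k * 0.5 * r * c * seg * seg + (k - 1) * t_buf

for L in (100, 200, 400):
    k = max(1, round(L / 100))               # roughly one buffer per 100 units
    print(L, round(unbuffered(L), 1), round(buffered(L, k), 1))
```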

18
Vertical Benchmarks
  • Tool flow
  • Two or more EDA tools, chained sequentially (potentially, part of a complete design cycle)
  • Sample contexts: physical synthesis, place & route, retiming followed by sequential verification
  • Vertical benchmarks
  • Multiple, redundant snapshots of a tool flow; sufficient info for detailed analysis of tool performance
  • Herman Schmit @ CMU maintains a respective slot in the VLSI CAD Bookshelf
  • See http://gigascale.org/bookshelf
  • Include flat gate-level netlists
  • Library information (< 250 nm)
  • Realistic timing and fixed-die constraints

19
Infrastructure Needs
  • Need common evaluators of delay / power
  • To avoid inconsistent / outdated results
  • Relevant initiatives from Si2
  • OLA (Open Library Architecture)
  • OpenAccess
  • For more info, see http://www.si2.org
  • Still no reliable public STA tool
  • Sought: OA-based utilities for timing/layout

20
Activity 3: Standard-Cell Libraries
  • Libraries carry technology information
  • Impact of wire delays increases in recent technology generations
  • Cell characteristics must be compatible
  • Some benchmarks in the Bookshelf use 0.25 µm and 0.35 µm libraries
  • Geometry info is there; timing, in some cases
  • Cadence test library?
  • Artisan libraries?
  • Use commercial tools to create libraries
  • Prolific, Cadabra, ...

21
Activity 4: New Benchmarks Needed to Confirm / Defeat Tool Tuning
  • Data on tuning from the ISPD '03 paper "Benchmarking for Placement", Adya et al.
  • Observe that
  • Capo does well on Cadence-Capo, grid-like circuits
  • Dragon does well on IBM-Place (IBM-Dragon)
  • FengShui does well on MCNC benchmarks
  • mPL does well on PEKO
  • This is hardly a coincidence
  • Motivation for more / better benchmarks
  • P.S. Most differences above have been explained, and all placers above have been improved

22
Activity 4: Large Benchmark Creation
  • www.opencores.org has large designs
  • May be a good starting point: use vendor tools to create BLIF files (post results)
  • Note: there may be different ways to convert
  • A group of design houses (IBM, Intel, LSI, HP) is planning a release of new large gate-level benchmarks for layout
  • Probably no logic information

23
Activity 5: Benchmarking Automation
  • Rigorous benchmarking is laborious, and the risk of errors is high
  • How do we keep things simple / accessible?
  • Encapsulate software management in an ASP
  • Web uploads for binaries and source in tar.gz with Makefiles
  • Web uploads for benchmarks
  • GUI for N×M simulations; tables are created automatically
  • GUI for composing tool flows; flows can be saved/reused (see the sketch after this list)
  • Distributed back-end includes job scheduling
  • Email notification of job completion
  • All files created are available on the Web (permissions / policies)
  • Anyone can re-run / study your experiment or interface with it
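
As a sketch of the saved/reused-flow idea: a stored flow can be just an ordered list of tool invocations run against a shared working directory. The JSON schema and tool names below are hypothetical placeholders, not an actual Bookshelf.exe interface.

```python
import json, pathlib, subprocess

def run_flow(flow_file, workdir):
    """flow_file: JSON list of {"tool": ..., "args": [...]} steps."""
    steps = json.loads(pathlib.Path(flow_file).read_text())
    for i, step in enumerate(steps, 1):
        cmd = [step["tool"], *step["args"]]
        print(f"step {i}: {' '.join(cmd)}")
        subprocess.run(cmd, cwd=workdir, check=True)   # stop on first failure

# A saved flow could chain, e.g., a placer and an evaluator (hypothetical tools):
# [{"tool": "place", "args": ["design.aux"]},
#  {"tool": "hpwl",  "args": ["design.pl"]}]
```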

31
Follow-on Action Plan
  • Looking for volunteers to beta-test Bookshelf.exe
  • Particularly, in the context of synthesis verification
  • Contact Igor: imarkov@eecs.umich.edu
  • Create a joint benchmarking group from industry and academia
  • Contact Prabhakar: kudva@us.ibm.com
  • Regular discussions
  • Development based on common infrastructure