Decompression After The Cache For Compressed Code Execution
1
Decompression After The Cache For Compressed Code Execution
  • By
  • Yujia Jin
  • Rong Chen

2
Overall Picture
[Diagram: Instruction ROM, instruction cache, CPU]
3
Overall Picture
[Diagram: Instruction ROM, instruction cache, CPU, with the decompressor (decomp), CLB, and LAT inserted after the cache]
4
Pipeline
[Diagram: pipeline stages, including instruction fetch (IF)]
5
Traditional Compression Algorithm
  • Break the program into n-instruction blocks and
    compress each block serially at the byte level.
  • Problems
  • An 8-byte instruction takes 8 cycles to
    decompress.
  • Not true random access: a jump from block x to
    the nth instruction in block y requires the
    first n-1 instructions to be decompressed first.

6
Compression Algorithm
Use the LAT to keep track of whether each instruction is compressed.
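A minimal sketch of that idea follows. The LAT is modeled here as a per-instruction "compressed?" flag, and the `decompress` transform is a hypothetical stand-in; the slides do not specify the table layout or the codec:

```python
def fetch(pc, lat, memory, decompress):
    """Fetch one instruction with true random access.

    lat[pc] records whether the word at pc is stored compressed; only that
    single instruction is decompressed, so any jump target is reachable
    without decoding its predecessors.
    """
    word = memory[pc]
    return decompress(word) if lat[pc] else word

# Hypothetical example: 'decompression' here is just a bit flip.
memory = {0: 0xAA, 1: 0x12, 2: 0xBB}
lat = {0: False, 1: True, 2: False}   # only instruction 1 is stored compressed
assert fetch(1, lat, memory, lambda w: w ^ 0xFF) == 0xED  # 0x12 ^ 0xFF
assert fetch(0, lat, memory, lambda w: w ^ 0xFF) == 0xAA  # passed through as-is
```

The design trade-off is the one the conclusion states: per-instruction flags buy random access, but flagging and storing some instructions uncompressed lowers the overall compression ratio.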
7
Result
8
Result
[Chart: ijpeg in SPEC95]
9
Branch Compensation Cache
  • Problem with branch instructions
  • Whenever the pipeline encounters a branch
    instruction, it wastes 3 stages (bubbles)

[Diagram: pipeline trace with IF, DEC, and CLB stages — the machine finds out it's a jump at decode and has to wait, three bubble (BUB) slots are inserted, then a new pipeline is restarted at the branch target and proceeds]
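The cost of these bubbles adds up quickly. A back-of-the-envelope model (the instruction and branch counts below are made-up numbers; only the 3-cycle penalty comes from the pipeline diagram):

```python
BRANCH_PENALTY = 3  # bubble stages per taken branch, per the pipeline diagram

def total_cycles(n_instructions, n_taken_branches):
    """Ideal one-instruction-per-cycle pipeline plus branch bubbles."""
    return n_instructions + BRANCH_PENALTY * n_taken_branches

# Hypothetical workload: if 1 in 5 instructions is a taken branch,
# the program needs 60% more cycles than the ideal pipeline.
assert total_cycles(1000, 200) == 1600
```

This is the overhead the branch compensation cache on the next slide is meant to recover.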
10
Branch Compensation Cache
  • Solution
  • Add a branch compensation cache (BCC) that
    pre-stores the target instructions. Whenever the
    PC jumps, check whether the target instruction
    is already there.

[Diagram: on a BCC hit ("yes"), the target instruction proceeds directly to execute (IE)]
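The BCC lookup can be sketched as follows. The fully associative dictionary organization, the fill-on-miss policy, and the miss penalty are assumptions for illustration; the slides only say the cache pre-stores target instructions and is checked on a PC jump:

```python
BRANCH_PENALTY = 3  # bubbles paid on a BCC miss, as in the pipeline diagram

class BranchCompensationCache:
    """Pre-stores instructions at branch targets so a taken jump can issue
    immediately instead of stalling while the target is fetched and
    decompressed."""

    def __init__(self):
        self.lines = {}  # target PC -> pre-decompressed instruction

    def on_jump(self, target_pc, fetch_and_decompress):
        """Return (instruction, stall_cycles) for a PC jump."""
        if target_pc in self.lines:          # hit: no bubbles
            return self.lines[target_pc], 0
        instr = fetch_and_decompress(target_pc)
        self.lines[target_pc] = instr        # fill on miss
        return instr, BRANCH_PENALTY

bcc = BranchCompensationCache()
_, stall1 = bcc.on_jump(0x40, lambda pc: f"instr@{pc:#x}")
_, stall2 = bcc.on_jump(0x40, lambda pc: f"instr@{pc:#x}")
assert (stall1, stall2) == (3, 0)  # first jump misses, the repeat jump hits
```

As the next slide notes, the hit/miss behavior is that of an ordinary cache: repeated jumps to the same target hit and cost nothing.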
11
Branch Compensation Cache
  • Results
  • The branch compensation cache behaves like an
    ordinary cache.

[Chart: ijpeg in SPEC95]
12
Result
[Chart: ijpeg in SPEC95]
13
Conclusion
  • Our approach can increase the instruction cache
    hit rate, which may translate into instruction
    cache area savings.
  • Our current compression algorithm provides true
    random access at the cost of a lower compression
    ratio.
  • Possible improvements
  • Provide optimal table entries.
  • Provide multiple levels of tables.