Computer Architecture Course Blog

Additional exercises added to Homework 4; now due April 11!

08 Apr 2014

The following four exercises have been added to Homework 4. It is now due before class on April 11; late submissions will not be accepted!

B.8

The LRU replacement policy is based on the assumption that if address A1 is accessed less recently than address A2 in the past, then A2 will be accessed again before A1 in the future. Hence, A2 is given priority over A1. Discuss how this assumption fails to hold when a loop larger than the instruction cache is being continuously executed. For example, consider a fully associative 128-byte instruction cache with a 4-byte block (each block holds exactly one instruction). The cache uses an LRU replacement policy.

a. What is the asymptotic instruction miss rate for a 64-byte loop with a large number of iterations?

b. Repeat part (a) for loop sizes 192 bytes and 320 bytes.

c. If the cache replacement policy is changed to most recently used (MRU) (replace the most recently accessed cache line), which of the three above cases (64-, 192-, or 320-byte loops) would benefit from this policy?

d. Suggest additional replacement policies that might outperform LRU.
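
If you want to sanity-check your answers, here is a minimal Python sketch (an illustration only, not part of the assignment) of the fully associative 32-line cache under both LRU and MRU. It replays a straight-line loop body, assuming one 4-byte instruction per block, and reports the miss rate of the final iteration as an estimate of the asymptotic rate:

```python
from collections import OrderedDict

def miss_rate(loop_bytes, policy="LRU", cache_bytes=128, block=4, iters=200):
    """Steady-state miss rate of a fully associative cache on a repeated loop."""
    nlines = cache_bytes // block            # 32 lines for the stated cache
    addrs = range(0, loop_bytes, block)      # one fetch per 4-byte instruction
    cache = OrderedDict()                    # keys ordered least- to most-recent
    misses = accesses = 0
    for it in range(iters):
        measure = (it == iters - 1)          # count only the last iteration
        for a in addrs:
            accesses += measure
            if a in cache:
                cache.move_to_end(a)         # hit: promote to most recent
            else:
                misses += measure
                if len(cache) == nlines:
                    # LRU evicts the least recent line, MRU the most recent.
                    cache.popitem(last=(policy == "MRU"))
                cache[a] = True
    return misses / accesses

for size in (64, 192, 320):
    print(size, "LRU:", miss_rate(size), "MRU:", miss_rate(size, "MRU"))
```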

B.9

Increasing a cache's associativity (with all other parameters kept constant) statistically reduces the miss rate. However, there can be pathological cases where increasing a cache's associativity would increase the miss rate for a particular workload. Consider a direct-mapped cache compared to a two-way set associative cache of equal size. Assume that the set associative cache uses the LRU replacement policy. To simplify, assume that the block size is one word. Now construct a trace of word accesses that would produce more misses in the two-way associative cache. (*Hint*: Focus on constructing a trace of accesses that are exclusively directed to a single set of the two-way set associative cache, such that the same trace would exclusively access two blocks in the direct-mapped cache.)
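
A small simulator can verify a candidate trace before you write it up. The sketch below is only an illustration, assuming one-word blocks with addresses given as block numbers; the trace at the end is one hypothetical shape such a trace can take, and constructing and justifying your own is the point of the exercise:

```python
def dm_misses(trace, nblocks):
    """Direct-mapped cache: one block per set, set = address mod nblocks."""
    sets = [None] * nblocks
    misses = 0
    for a in trace:
        s = a % nblocks
        if sets[s] != a:
            misses += 1
            sets[s] = a
    return misses

def twoway_lru_misses(trace, nblocks):
    """Two-way set associative, LRU; nblocks // 2 sets of two blocks each."""
    sets = [[] for _ in range(nblocks // 2)]   # each list ordered LRU -> MRU
    misses = 0
    for a in trace:
        ways = sets[a % (nblocks // 2)]
        if a in ways:
            ways.remove(a); ways.append(a)     # hit: promote to MRU
        else:
            misses += 1
            if len(ways) == 2:
                ways.pop(0)                    # evict the LRU way
            ways.append(a)
    return misses

# Hypothetical trace: blocks 0 and 2 alternate in one direct-mapped set while
# block 1 sits undisturbed in the other, but all three thrash the single
# two-way set.
trace = [0, 1, 2] * 10
print("direct-mapped:", dm_misses(trace, 2))         # 21 misses
print("two-way LRU:  ", twoway_lru_misses(trace, 2)) # 30 misses
```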

2.11

Consider the usage of critical word first and early restart on L2 cache misses. Assume a 1 MB L2 cache with 64-byte blocks and a refill path that is 16 bytes wide. Assume that the L2 can be written with 16 bytes every 4 processor cycles, the time to receive the first 16-byte block from the memory controller is 120 cycles, each additional 16-byte block from main memory requires 16 cycles, and data can be bypassed directly into the read port of the L2 cache. Ignore any cycles to transfer the miss request to the L2 cache and the requested data to the L1 cache.

a. How many cycles would it take to service an L2 cache miss with and without critical word first and early restart?

b. Do you think critical word first and early restart would be more important for L1 caches or L2 caches, and what factors would contribute to their relative importance?
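
For part (a), one way to lay out the arithmetic is sketched below. It assumes the critical 16-byte beat can be returned first and that the 4-cycles-per-16-B L2 write port never stalls a refill whose beats arrive only every 16 cycles:

```python
BLOCK_BYTES = 64      # L2 block size
BEAT_BYTES  = 16      # refill path width
FIRST_BEAT  = 120     # cycles until the first 16 B arrive from memory
NEXT_BEAT   = 16      # cycles per additional 16 B
beats = BLOCK_BYTES // BEAT_BYTES                    # 4 beats per block

# Without critical word first / early restart, the load waits for the
# whole block to arrive before the requested word is forwarded.
print("without CWF/ER:", FIRST_BEAT + (beats - 1) * NEXT_BEAT)  # 168 cycles

# With critical word first, memory returns the critical beat first and it
# is bypassed straight into the L2 read port, so the miss is serviced as
# soon as that beat lands; the remaining beats fill in behind it.
print("with CWF/ER:   ", FIRST_BEAT)                            # 120 cycles
```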

2.12

You are designing a write buffer between a write-through L1 cache and a write-back L2 cache. The L2 cache write data bus is 16 B wide and can perform a write to an independent cache address every 4 processor cycles.

a. How many bytes wide should each write buffer entry be?

b. What speedup could be expected in the steady state by using a merging write buffer instead of a nonmerging buffer when zeroing memory by the execution of 64-bit stores if all other instructions could be issued in parallel with the stores and the blocks are present in the L2 cache?

c. What would the effect of possible L1 misses be on the number of required write buffer entries for systems with blocking and nonblocking caches?
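
For part (b), a back-of-the-envelope sketch follows. The total byte count is an arbitrary illustration value, and the sketch assumes each pair of consecutive 64-bit stores merges into one full-width buffer entry while the L2 write port is the only bottleneck:

```python
BUS_BYTES    = 16     # L2 write data bus width
WRITE_CYCLES = 4      # one independent L2 write every 4 cycles
STORE_BYTES  = 8      # 64-bit stores

total_bytes = 4096    # hypothetical region being zeroed

# Non-merging buffer: each store drains as its own, half-used L2 write.
nonmerging = (total_bytes // STORE_BYTES) * WRITE_CYCLES

# Merging buffer: two adjacent 8 B stores combine into one 16 B entry,
# so the buffer drains one full-width write per 16 B of data.
merging = (total_bytes // BUS_BYTES) * WRITE_CYCLES

print("steady-state speedup:", nonmerging / merging)   # 2.0
```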