The cache system can be located between the CPU and the MMU (i.e., a logical cache), or between the MMU and the system random access memory (i.e., a physical cache). What factors determine the optimum location of cache memory?
What will be an ideal response?
If the cache is on the CPU side of the MMU, it is caching logical addresses (i.e., the addresses generated by the processor). The advantage of this is speed: the cache can perform a lookup immediately, without first waiting for the MMU to translate the address. However, when a context switch (task switch) takes place and the MMU page tables are reloaded, the mapping between logical and physical memory changes and the logical cache must be flushed. This takes time.
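To get a rough feel for that cost (the cache and line sizes here are assumed purely for illustration, not taken from the question): flushing a 32 KiB write-back logical cache with 64-byte lines means visiting 32768 / 64 = 512 lines and writing every dirty one back to main memory before the new task can begin refilling the cache, and this penalty recurs on every context switch.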
Another problem with a logical cache is that more than one logical address can map to the same physical address (shared data or code). If two such aliased addresses are cached, the cache ends up holding two separate entries for the same physical location. It is then possible for the copy behind one cached logical address to be updated, leaving the other one stale.
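As a concrete illustration of this aliasing (a sketch, not part of the original answer; it assumes a POSIX system, and the file path and the names va1/va2 are invented for the example), the short C program below maps the same file page at two different virtual addresses. A purely logical cache would treat va1 and va2 as unrelated entries even though they name the same physical memory; real processors typically avoid the stale-copy hazard by physically tagging or indexing the cache, or by constraining such synonyms.

/* Illustrative sketch only: create two virtual addresses that alias the
 * same physical page by mapping one file twice (POSIX assumed). */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    char path[] = "/tmp/alias-demo-XXXXXX";
    int fd = mkstemp(path);                     /* backing file for the page */
    if (fd < 0 || ftruncate(fd, 4096) != 0) {
        perror("setup");
        return 1;
    }

    /* Map the same file page at two independent virtual addresses. */
    char *va1 = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    char *va2 = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (va1 == MAP_FAILED || va2 == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    strcpy(va1, "written through va1");         /* update via one alias      */
    printf("va1=%p  va2=%p  read through va2: \"%s\"\n",
           (void *)va1, (void *)va2, va2);      /* visible via the other     */

    munmap(va1, 4096);
    munmap(va2, 4096);
    close(fd);
    unlink(path);
    return 0;
}

Running this prints two different virtual addresses whose contents are the same physical data; a logical cache indexed on those addresses would hold two entries for that one location, which is exactly the hazard described above.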
If the cache is on the memory side of the MMU, the addresses cached are physical addresses. This can incur a delay because address translation must complete before the cache lookup. However, because physical addresses are cached, each physical location has only one entry (no aliasing), context (task) switching causes no problem, and the cache need not be flushed.