To avoid this situation without paying the high implementation price of a fully associative cache memory, the set associative mapping scheme can be used, which combines aspects of both direct mapping and associative mapping. Set associative mapping, which is also known as set-direct mapping, is described in the next section.

7.6.3 SET ASSOCIATIVE MAPPED CACHE

The set associative mapping scheme combines the simplicity of direct mapping with the flexibility of associative mapping. Set associative mapping is more practical than fully associative mapping because the associative portion is limited to just a few slots that make up a set, as illustrated in Figure 7-15.

[Figure 7-15: A set associative mapping scheme for a cache memory. Each cache slot holds valid and dirty bits, a tag, and a block of 32 words. Slots are grouped in pairs into sets: Slots 0 and 1 form Set 0, and so on up through Set 2^13 - 1. Main memory consists of Blocks 0 through 2^27 - 1.]

For this example, two blocks make up a set, and so it is a two-way set associative cache. If there are four blocks per set, then it is a four-way set associative cache.

Since there are 2^14 slots in the cache, there are 2^14 / 2 = 2^13 sets. When an address is mapped to a set, the direct mapping scheme is used, and then associative mapping is used within a set. The format for an address has 13 bits in the set field, which identifies the set in which the addressed word will be found if it is in the cache. There are five bits for the word field as before, and there is a 14-bit tag field; together these make up the 32 bits of the address, as shown below:

    Tag        Set        Word
    14 bits    13 bits    5 bits

As an example of how the set associative cache views a main memory address, consider again the address (A035F014)_16. The leftmost 14 bits form the tag field, followed by 13 bits for the set field, followed by five bits for the word field, as shown below:

    Tag: 1 0 1 0 0 0 0 0 0 0 1 1 0 1   Set: 0 1 1 1 1 1 0 0 0 0 0 0 0   Word: 1 0 1 0 0

As before, the partitioning of the address field is known only to the
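The field partitioning above can be checked with a short sketch. The field widths (14-bit tag, 13-bit set, 5-bit word) come from the text; the function name and output formatting are ours:

```python
def decode(addr):
    """Split a 32-bit address into (tag, set, word) fields for the
    two-way set associative cache described in the text."""
    word = addr & 0x1F            # low 5 bits select a word within the block
    set_ = (addr >> 5) & 0x1FFF   # next 13 bits select one of 2^13 sets
    tag = (addr >> 18) & 0x3FFF   # high 14 bits are the tag
    return tag, set_, word

tag, set_, word = decode(0xA035F014)
print(f"tag={tag:014b} set={set_:013b} word={word:05b}")
# -> tag=10100000001101 set=0111110000000 word=10100
```

The printed bit groups match the partitioning of (A035F014)_16 shown above.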
cache, and the rest of the computer is oblivious to any address translation.

Advantages and Disadvantages of the Set Associative Mapped Cache

In the example above, the tag memory increases only slightly from the direct mapping example, to 14 x 2^14 bits (a 14-bit tag for each of the 2^14 slots), and only two tags need to be searched for each memory reference. The set associative cache is almost universally used in today's microprocessors.

7.6.4 CACHE PERFORMANCE

Notice that we can readily replace the cache direct mapping hardware with associative or set associative mapping hardware, without making any other changes to the computer or the software. Only the runtime performance will change between methods.

Runtime performance is the purpose behind using a cache memory, and there are a number of issues that need to be addressed as to what triggers a word or block to be moved between the cache and the main memory. Cache read and write policies are summarized in Figure 7-16. The policies depend upon whether or not the requested word is in the cache. If a cache read operation is taking place and the referenced data is in the cache, then there is a cache hit, and the referenced data is immediately forwarded to the CPU. When a cache miss occurs, the entire block that contains the referenced word is read into the cache.

In some cache organizations, the word that causes the miss is forwarded to the CPU as soon as it is read into the cache, rather than waiting for the remainder of the cache slot to be filled; this is known as a load-through operation. For a non-interleaved main memory, if the word occurs in the last position of the block, then no performance gain is realized, since the entire slot is brought in before load-through can take place. For an interleaved main memory, the order of accesses can be organized so that a load-through operation will always result in a performance gain.

[Figure 7-16: Cache read and write policies. Read, data in the cache: forward to CPU. Read, data not in the cache: load-through (forward the word as the cache line is filled), or fill the cache line and then forward the word. Write, data in the cache: write-through (write data to both cache and main memory), or write-back (write data to cache only; defer the main memory write until the block is flushed). Write, data not in the cache: write-allocate (bring the line into the cache, then update it), or write-no-allocate (update main memory only).]

For write operations, if the word is in the cache, then there may be two copies of the word: one in the cache and one in main memory. If both are updated simultaneously, this is referred to as write-through. If the write is deferred until the cache line is flushed from the cache, this is referred to as write-back. Even if the data item is not in the cache when the write occurs, there is the choice of bringing the block containing the word into the cache and then updating it, known as write-allocate, or updating it in main memory without involving the cache, known as write-no-allocate.
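The read and write policies above can be sketched behaviorally. The toy simulator below (class and method names and the tiny sizes are ours, and LRU replacement is an assumption; the text does not specify a replacement policy) models a two-way set associative cache that loads the full block on a read miss and uses write-back with write-allocate on writes:

```python
class TwoWaySetAssocCache:
    """Toy two-way set associative cache with LRU replacement,
    write-back, and write-allocate (sizes are illustrative only)."""

    def __init__(self, num_sets=8, words_per_block=4):
        self.num_sets = num_sets
        self.words_per_block = words_per_block
        # Each set holds two slots, each with valid/dirty bits, a tag, and a block.
        self.sets = [[{"valid": False, "dirty": False, "tag": None, "data": None}
                      for _ in range(2)] for _ in range(num_sets)]
        self.hits = self.misses = 0

    def _fields(self, addr):
        # With power-of-two sizes this is exactly the tag|set|word bit split.
        word = addr % self.words_per_block
        block = addr // self.words_per_block
        return block // self.num_sets, block % self.num_sets, word

    def read(self, memory, addr):
        tag, set_idx, word = self._fields(addr)
        slots = self.sets[set_idx]
        for i, slot in enumerate(slots):
            if slot["valid"] and slot["tag"] == tag:   # cache hit
                self.hits += 1
                slots.append(slots.pop(i))             # mark most recently used
                return slot["data"][word]
        self.misses += 1
        victim = slots.pop(0)                          # LRU slot is the victim
        if victim["valid"] and victim["dirty"]:        # write-back on eviction
            base = (victim["tag"] * self.num_sets + set_idx) * self.words_per_block
            memory[base:base + self.words_per_block] = victim["data"]
        base = addr - word                             # load the whole block
        slots.append({"valid": True, "dirty": False, "tag": tag,
                      "data": memory[base:base + self.words_per_block]})
        return slots[-1]["data"][word]

    def write(self, memory, addr, value):
        # Write-allocate: reuse read() to bring the block in on a miss,
        # then update only the cached copy (write-back defers main memory).
        tag, set_idx, word = self._fields(addr)
        self.read(memory, addr)
        for slot in self.sets[set_idx]:
            if slot["valid"] and slot["tag"] == tag:
                slot["data"][word] = value
                slot["dirty"] = True
                return

# Demo: a write updates the cache only; main memory lags until eviction.
mem = list(range(128))
cache = TwoWaySetAssocCache()
cache.read(mem, 0)        # miss: block 0 loaded into set 0
cache.write(mem, 1, 99)   # hit in cached block 0; main memory untouched
print(mem[1], cache.read(mem, 1))   # -> 1 99
```

Write-through would instead be a two-line change: update `memory[addr]` inside `write` as well and leave the dirty bit clear; write-no-allocate would skip the block fill on a write miss and update main memory directly.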