Path: sparky!uunet!usc!wupost!spool.mu.edu!umn.edu!msus1.msus.edu!msus1.msus.edu!news
Newsgroups: comp.sys.amiga.hardware
Subject: Re: Data/Instruction Cache & BURST modes on 68030? Why/when?
Message-ID: <1992Dec24.122601.1957@msus1.msus.edu>
From: lkoop@TIGGER.STCLOUD.MSUS.EDU (LaMonte Koop)
Date: 24 Dec 92 12:26:01 -0600
Reply-To: lkoop@TIGGER.STCLOUD.MSUS.EDU
References: <hellerS.724958427@batman> <72192@cup.portal.com> <smcgerty.725067816@unix1.tcd.ie> <72292@cup.portal.com>
Organization: SCS GP/Engineering Cluster
Nntp-Posting-Host: tigger.stcloud.msus.edu
Lines: 36

In article <72292@cup.portal.com>, Tony-Preston@cup.portal.com (ANTHONY FRANCIS PRESTON) writes:
>It is simple: if you have both data and instructions in the same
>cache, you have data and instructions bumping each other out of
>the same cache.  The net result is that an instruction can be
>replaced by data, or a data item replaced by an instruction,
>causing a loss of a hit in the cache.  Cache obeys a square law
>that says to double the hit rate, you need to quadruple the
>cache size (4 times as much cache to get twice the hit rate).  By going
>to an 8K cache, you have doubled the cache size, but have twice the
>amount of access to it.

Well, it's not really quite as simple as that.  Many factors affect cache
hit rates, including relative data/instruction access densities and usage
patterns for a particular system.  The associativity used in the cache can
also greatly affect hit rates (for example, a 64-way set-associative cache
will generally yield a better hit rate than a simple 4-way set-associative
arrangement).

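To make the associativity effect a bit more concrete, here is a toy simulator
sketch.  The 4K size and 16-byte lines are borrowed from the 68040's on-chip
caches; the interleaved code/data address trace is made up purely for
illustration.  It runs the same trace through a direct-mapped cache and a
4-way set-associative cache with LRU replacement:

/* Toy cache simulator: counts hits for a direct-mapped vs. a 4-way
 * set-associative cache over the same address trace.  Geometry and the
 * access pattern are illustrative assumptions, not measured behavior. */
#include <stdio.h>
#include <stdlib.h>

#define CACHE_BYTES 4096     /* 4K, like one of the 68040 on-chip caches */
#define LINE_BYTES  16       /* 16-byte lines */

/* Simulate a cache of the given associativity with LRU replacement.
 * Returns the number of hits over the trace. */
static long simulate(const unsigned long *trace, long n, int ways)
{
    int sets = CACHE_BYTES / LINE_BYTES / ways;
    unsigned long *tags = calloc((size_t)sets * ways, sizeof *tags);
    int *valid = calloc((size_t)sets * ways, sizeof *valid);
    int *age = calloc((size_t)sets * ways, sizeof *age);
    long hits = 0;

    for (long i = 0; i < n; i++) {
        unsigned long line = trace[i] / LINE_BYTES;
        int set = (int)(line % (unsigned long)sets);
        unsigned long tag = line / (unsigned long)sets;
        int victim = 0, hit = 0;

        for (int w = 0; w < ways; w++) {
            int idx = set * ways + w;
            age[idx]++;
            if (valid[idx] && tags[idx] == tag) {
                hit = 1;
                age[idx] = 0;                 /* most recently used */
            }
            if (age[idx] > age[set * ways + victim])
                victim = w;                   /* oldest way so far */
        }
        if (hit) {
            hits++;
        } else {
            int idx = set * ways + victim;    /* replace the LRU way */
            tags[idx] = tag;
            valid[idx] = 1;
            age[idx] = 0;
        }
    }
    free(tags); free(valid); free(age);
    return hits;
}

int main(void)
{
    /* Made-up trace: sequential "instruction" fetches from one region
     * interleaved with "data" accesses from a region that maps onto the
     * same cache sets. */
    long n = 100000;
    unsigned long *trace = malloc((size_t)n * sizeof *trace);
    for (long i = 0; i < n; i++) {
        if (i % 2 == 0)
            trace[i] = 0x1000 + (unsigned long)(i % 2048) * 4;  /* code */
        else
            trace[i] = 0x9000 + (unsigned long)(i % 2048) * 4;  /* data */
    }

    printf("direct-mapped hits: %ld / %ld\n", simulate(trace, n, 1), n);
    printf("4-way assoc.  hits: %ld / %ld\n", simulate(trace, n, 4), n);
    free(trace);
    return 0;
}

With this particular trace the direct-mapped run thrashes (the code and data
lines keep landing on the same entries), while the 4-way run hits on roughly
every other access.  Real traces are messier, which is the point: the hit
rate depends on the access pattern at least as much as on raw cache size.
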
> By having 2 separate caches, you get almost the same performance, but
>don't have data and instructions hitting the same cache locations.  What
>I mean is, if a data item caches to location 0 in the cache and the
>instruction also does, the Motorola way will be faster.  Otherwise they will
>be the same speed.  If the instruction mix is spread over 4K or less, Motorola
>will be the same speed; if over 8K the Intel way may be faster.  In

This is true to a point.  However, it isn't the deciding reason for using
a non-unified arrangement, though it does allow for a better separation.  One
important advantage of using separate code/data caches is concurrent access to
cache operands, which is not possible in a unified arrangement.  You will still
see the instruction/data mix effects described above with a unified cache, though.

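As a back-of-the-envelope sketch of the collision being described (the
256-byte, 16-entry, 16-byte-line geometry matches the 68030's on-chip caches;
the two addresses are invented for illustration), here is how an instruction
fetch and an operand access can land on the same entry of a direct-mapped
cache:

/* Decompose addresses into entry index and tag for a 68030-style
 * 256-byte direct-mapped cache (16 entries of 16 bytes).  The two
 * addresses below are hypothetical. */
#include <stdio.h>

#define ENTRIES     16
#define LINE_BYTES  16

static unsigned index_of(unsigned long addr)
{
    return (unsigned)((addr / LINE_BYTES) % ENTRIES);  /* which entry */
}

static unsigned long tag_of(unsigned long addr)
{
    return addr / LINE_BYTES / ENTRIES;                /* identifies line */
}

int main(void)
{
    unsigned long code = 0x00002000UL;   /* hypothetical instruction fetch */
    unsigned long data = 0x00074000UL;   /* hypothetical operand access    */

    printf("code: index %u, tag %lx\n", index_of(code), tag_of(code));
    printf("data: index %u, tag %lx\n", index_of(data), tag_of(data));

    /* Both map to index 0 with different tags.  In one unified cache they
     * would fight over the same entry; with separate instruction and data
     * caches each lives in its own array, and the two caches can be looked
     * up at the same time. */
    return 0;
}

That last point is the concurrency advantage above: with split caches the
instruction fetch and the operand access don't have to take turns at a single
cache port, regardless of whether their indexes happen to collide.
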
----------------------------------------
LaMonte Koop -- SCSU Electrical/Computer Engineering
Internet: lkoop@tigger.stcloud.msus.edu -OR- f00012@kanga.stcloud.msus.edu
"You mean you want MORE lights on this thing???"
---------------------------------------------------------------------------