Path: sparky!uunet!ferkel.ucsb.edu!taco!rock!stanford.edu!agate!doc.ic.ac.uk!uknet!acorn!armltd!dseal
From: dseal@armltd.co.uk (David Seal)
Newsgroups: comp.arch
Subject: Re: Registerless processor
Message-ID: <9841@armltd.uucp>
Date: 18 Nov 92 18:13:11 GMT
References: <1992Nov13.181654.11692@fcom.cc.utah.edu>
Sender: dseal@armltd.uucp
Distribution: comp
Organization: A.R.M. Ltd, Swaffham Bulbeck, Cambs, UK
Lines: 50

In article <1992Nov13.181654.11692@fcom.cc.utah.edu> drdavis@u.cc.utah.edu
(Darren R. Davis) writes:

>I have been pondering an idea for a machine architecture: a processor
>that has no registers. I am familiar with some architectures that have
>done this. My twist on this theme is to have a very large cache on chip
>for memory locations (effectively becoming registers). This goes
>against the RISC idea of having very large register sets with load/store
>instructions. This machine would just reference memory, with the most
>common addresses becoming cached internally to the processor, giving very
>fast access. Does anyone know of such a machine, and what are your
>thoughts on this kind of architecture? I envision the cache being
>something like 8K or greater, giving a large effective register set.

One common difference between the registers and the cache is that the
register bank is multi-ported, while the cache is single-ported. This
results in the registers supplying a much higher effective bandwidth to the
ALU and other parts of the processor than the cache can. To get equivalent
performance out of a registerless machine, you're going to have to
multi-port the cache - which will make it take up similar amounts of space,
power, etc., as a large register bank.

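The port-count argument can be sketched with a toy model. The specific
figures here (a register file with 2 read ports, a cache with 1) are
illustrative assumptions, not numbers from the discussion above:

```python
# Toy model of operand-fetch bandwidth. Assumed port counts:
# a typical RISC register file with 2 read ports vs. a single-ported cache.

def cycles_to_fetch(operands: int, read_ports: int) -> int:
    """Cycles needed to read `operands` values when only `read_ports`
    reads can proceed per cycle (ceiling division)."""
    return -(-operands // read_ports)

# Feeding a 2-input ALU on each instruction:
reg_file_cycles = cycles_to_fetch(2, read_ports=2)  # 1 cycle
cache_cycles = cycles_to_fetch(2, read_ports=1)     # 2 cycles
print(reg_file_cycles, cache_cycles)
```

With only one read port, every two-operand instruction pays an extra cycle
just fetching its operands - that is the bandwidth gap multi-porting closes.
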
Another big difference is that the register bank is usually addressed
statically (i.e. the compiler decides what register is used) and has
comparatively few entries, while the cache is addressed dynamically
(i.e. the processor decides on the fly what cache entry to use) and has
comparatively many entries. A couple of side effects of this are:

(a) Register access is generally faster than cache access.

(b) Register addresses (i.e. register numbers) are generally shorter than
cache addresses could possibly be, which in turn are shorter than the
addresses that are actually used for the cache (i.e. memory addresses).
This improves code density considerably, with various effects on speed
(denser code requires less memory bandwidth, uses the available cache
more efficiently, etc.).

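The code-density point in (b) can be made concrete with some assumed
encoding widths - 32 registers, the 8K cache from the original question
(taken here as 32-bit words), and 32-bit memory addresses, all of which
are illustrative choices rather than figures from the post:

```python
import math

REGS = 32                      # assumed register count -> 5-bit register number
CACHE_ENTRIES = 8 * 1024 // 4  # assumed 8 KB cache of 32-bit words -> 2048 entries
MEM_ADDR_BITS = 32             # assumed full memory address width

reg_bits = math.ceil(math.log2(REGS))             # 5 bits per register operand
cache_bits = math.ceil(math.log2(CACHE_ENTRIES))  # 11 bits per cache-entry operand

# Operand-field cost of a 3-operand instruction (e.g. ADD d, s1, s2):
print(3 * reg_bits)        # 15 bits with register numbers
print(3 * cache_bits)      # 33 bits with cache-entry addresses
print(3 * MEM_ADDR_BITS)   # 96 bits with full memory addresses
```

Even addressing only cache entries more than doubles the operand encoding;
full memory addresses make it over six times larger, which is where the
density (and hence fetch-bandwidth) advantage of register numbers comes from.
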
Overall, for these and other reasons, I believe the experience of the
computing community is that this is one of the situations where having two
levels in the memory hierarchy (i.e. registers *and* cache) is more
cost-effective than just having one. Some of the reasons are in fact very
similar to those for having both cache and main memory rather than just main
memory; some are specific to registers, which tend to be addressed
differently to other layers of the hierarchy because of instruction set
considerations.

David Seal
dseal@armltd.co.uk

All opinions are mine only...