- Path: sparky!uunet!olivea!spool.mu.edu!yale.edu!yale!mintaka.lcs.mit.edu!ai-lab!zurich.ai.mit.edu!gjr
- From: gjr@zurich.ai.mit.edu (Guillermo J. Rozas)
- Newsgroups: comp.lang.scheme.c
- Subject: Re: SPARC native code compiler
- Message-ID: <GJR.93Jan22090613@chamarti.ai.mit.edu>
- Date: 22 Jan 93 17:06:13 GMT
- References: <XJAM.93Jan21200905@cork.CS.Berkeley.EDU>
- Reply-To: gjr@zurich.ai.mit.edu
- Distribution: inet
- Organization: M.I.T. Artificial Intelligence Lab.
- Lines: 39
- NNTP-Posting-Host: chamartin.ai.mit.edu
- In-reply-to: xjam@cork.CS.Berkeley.EDU's message of 21 Jan 93 20:09:05
-
- In article <XJAM.93Jan21200905@cork.CS.Berkeley.EDU> xjam@cork.CS.Berkeley.EDU (The Crossjammer) writes:
-
- | From: xjam@cork.CS.Berkeley.EDU (The Crossjammer)
- | Newsgroups: comp.lang.scheme.c
- | Date: 21 Jan 93 20:09:05
- |
- | I'm interested in porting the native code compiler to the SPARC
- | architecture if it hasn't been done already. Who should I contact?
-
- To our knowledge, there is no back end for the Sparc architecture.
-
- Depending on how much time you have, there are two possibilities.
-
-
- Option A: Write a full native port. To find out how to do this, ftp the
- compiler porting guide from
-
- altdorf.ai.mit.edu:archive/scheme-7.2/compdoc.tar.Z
-
- It is very slightly out of date, but the differences should not be
- important.
-
- This will take a considerable amount of time (~3 weeks for an expert).
-
-
- Option B: We have an almost-finished back end that generates C. It
- is far enough along to compile most of the runtime system (all but
- three files).
- The output is both larger and slower than the native version, but much
- faster than the interpreter. Besides some debugging and cleaning up
- of rough edges, one of the things that is missing is the ability to
- load compiled files dynamically -- right now it is all done
- statically, when the "microcode" is generated. You could have a
- working system much faster if you took this approach.
-
-
- Even if you decide to go with option A, you may be interested in
- option B as a reference. Its RTL rules expand into macroized-C
- fragments rather than into instructions for any particular
- architecture, so they may be easier to understand.
-