- Newsgroups: comp.parallel
- Path: sparky!uunet!charon.amdahl.com!pacbell.com!decwrl!elroy.jpl.nasa.gov!sdd.hp.com!ncr-sd!ncrcae!hubcap!fpst
- From: cpr4k@holmes.acc.Virginia.EDU (Christian P. Roberts)
- Subject: Re: PVM & HeNCE
- Message-ID: <1992Nov16.164435.19568@murdoch.acc.Virginia.EDU>
- Sender: usenet@murdoch.acc.Virginia.EDU
- Organization: Academic Computing Center - U.Va.
- References: <1992Nov15.011251.3464@colorado.edu>
- Date: Mon, 16 Nov 1992 16:44:35 GMT
- Approved: parallel@hubcap.clemson.edu
- Lines: 21
-
- Regarding the following from Christos Triantafillou
- <christos@bohemia.cs.colorado.edu>:
- >>There is a rumor that PVM treats a whole parallel system
- >>(e.g. an iPSC) like a single parallel node, that is, you can
- >>send messages between different iPSC systems but not
- >>between an iPSC's nodes. Is that true?
-
- Are you referring to the Intel hypercube here? Do you mean that
- you are trying to run a hypercube program, with the Intel machine
- acting as a single node in PVM? And are you asking whether you can
- still have communication between the nodes of the hypercube (which
- I would presume you can), or whether you can have communication
- between a node of the hypercube and another PVM node, for example,
- a Cray?
-
- --
-
- Chris Roberts ITC/Academic Computing Center
- cpr4k@virginia.edu University of Virginia
- (804) 982-4693 Charlottesville, Virginia 22903