Newsgroups: comp.databases
Path: sparky!uunet!elroy.jpl.nasa.gov!swrinde!gatech!cc.gatech.edu!terminus!brent
From: brent@terminus.gatech.edu (Brent Laminack)
Subject: Re: 500'000 records - who does best?
Message-ID: <brent.725766347@cc.gatech.edu>
Sender: news@cc.gatech.edu
Organization: Georgia Tech College of Computing
References: <18971@mindlink.bc.ca> <0t0avvy@Unify.Com>
Date: Thu, 31 Dec 1992 01:45:47 GMT
Lines: 31

>I believe that this thread started out as a "can anyone do this REASONABLY"

>UNIFY 2000 can handle this (500K rows @ 30K per row). If you have the disk space,
>we have the database. :-)

>Mischa is right... it's not the users, it's the size of the database. The
>UNIFY 2000 database package can have individual table segments in the 16+Mb
>category, allowing over 500 rows per segment, so the table would only require
>10000 segments.

Is this the same Unify company that tells me to keep all tables to 16 or
fewer segments, because performance is said to degrade beyond that? If
memory serves, 10000 >>> 16.

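As a sanity check on the quoted arithmetic (assumed figures from the post: 500,000 rows at roughly 30 KB each, segments in the 16 MB range), the table works out to on the order of a thousand segments rather than ten thousand; either way, it is far beyond a 16-segment guideline:

```python
import math

# Figures quoted in the thread: 500,000 rows at ~30 KB each,
# table segments in the "16+Mb" range. Exact sizes are assumptions.
row_size_kb = 30
segment_size_kb = 16 * 1024
total_rows = 500_000

rows_per_segment = segment_size_kb // row_size_kb       # 546, i.e. "over 500"
segments_needed = math.ceil(total_rows / rows_per_segment)

print(rows_per_segment)   # 546
print(segments_needed)    # 916 -- on the order of 1000, and still >> 16
```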
>You would still have plenty of room for indices and the rest.

>Performance is impossible to judge without getting your hands a little dirty.
>It depends on hardware, access patterns, indexing strategies, and a whole lot
>more, but in general UNIFY 2000 is very hard to beat in the performance area.

There are other considerations, however. Ask Unify to do an outer join.
They can't.

No brag, just facts.

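For readers unfamiliar with the term: an outer join keeps unmatched rows from one side of the join, padding the missing columns with nulls, rather than silently dropping them as an inner join does. A minimal sketch in Python (the tables and column names here are hypothetical examples, not from the thread):

```python
# Illustration of a LEFT OUTER JOIN using plain Python dicts.
# All data and field names below are made up for demonstration.
customers = [
    {"cust_id": 1, "name": "Acme"},
    {"cust_id": 2, "name": "Globex"},   # has no matching orders
]
orders = [
    {"order_id": 10, "cust_id": 1, "total": 99.50},
]

def left_outer_join(left, right, key):
    """Every row of `left` appears at least once; unmatched rows get None."""
    result = []
    for l in left:
        matches = [r for r in right if r[key] == l[key]]
        if matches:
            for r in matches:
                result.append({**l, **r})
        else:
            # An inner join would drop this row entirely.
            result.append({**l, "order_id": None, "total": None})
    return result

rows = left_outer_join(customers, orders, "cust_id")
# "Globex" survives with NULL-filled order columns; an inner join would omit it.
```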
Brent Laminack (brent@cc.gatech.edu)