Newsgroups: comp.databases
Path: sparky!uunet!cis.ohio-state.edu!zaphod.mps.ohio-state.edu!usc!elroy.jpl.nasa.gov!ames!agate!rsoft!mindlink!a269
From: Mischa_Sandberg@mindlink.bc.ca (Mischa Sandberg)
Subject: Re: 500'000 records - who does best?
Organization: MIND LINK! - British Columbia, Canada
Date: Tue, 29 Dec 1992 19:47:16 GMT
Message-ID: <18996@mindlink.bc.ca>
Sender: news@deep.rsoft.bc.ca (Usenet)
Lines: 58

> Mark Baldridge writes:
> >It isn't the number of users, it's the basic size of the tables.
> >One of our clients has pushed a table to 900k rows, 250Mb on an RS/6000
> >with 50Mb memory dedicated to the Sybase server. Working with it
> >feels like retiling the bathroom while an elephant uses the john.
>
> Maybe I am missing something, but we routinely have customers with 250 to
> 800 megabyte tables and 200-300 users on RS6000s. Earlier this year, we
> did a benchmark with 1600 users, all with under one or two (I do not
> remember now) second response time. The benchmark was to ensure that the
> fully configured system could handle the expected 2500 users.
>
> We use uniVerse.

Pardon my ignorance as to what uniVerse is.

I suspect we are talking about different kinds of apps here, in which
case you may find Sybase or Oracle or what-have-you to be plenty good
(but perhaps you could expand on what uniVerse is or does). Sorry,
from your original posting I thought you were having a problem.

- "one or two ... second response time" suggests that your client users
- are perform transactions on a limited number of rows, and from the numbers
- I suspect that your chief concern is TPS. B-tree or (even better) hashed
- access time doesn't rise dramatically with table size, and you probably
- aren't even really interested in clustering and contiguous allocation.
- Furthermore, I'd wager that database changes once the system is in production
- are not a major concern, either.
-
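To put rough numbers on the "doesn't rise dramatically" claim, here is a
back-of-envelope sketch in Python; the fan-out figure is invented but
typical for a B-tree index page:

    import math

    def btree_levels(rows, fanout=200):
        """Approximate index depth, assuming ~200 keys per page
        (an invented but plausible fan-out)."""
        return max(1, math.ceil(math.log(rows, fanout)))

    def hash_probes(rows):
        """An idealised hashed file: roughly one probe regardless of
        table size, as long as the load factor stays reasonable."""
        return 1

    for n in (500_000, 900_000, 5_000_000):
        print(f"{n:>9} rows: ~{btree_levels(n)} B-tree levels, "
              f"~{hash_probes(n)} hash probe(s)")

The index stays at about three levels across that whole range, which is
why sheer table growth doesn't hurt this kind of workload much.
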
In any case, I'm impressed by anything running on a Unix box, presumably
with TCP/IP connections, actually handling 1600 or more concurrent
requests, which amounts to 1K TPS or better --- assuming that is what you
indeed mean by "1600 users". Our systems "handle" a few hundred users,
but with only a couple dozen connected at one time :-).

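For what it's worth, the 1K TPS figure is just Little's law (throughput
is roughly concurrency divided by response time), taking 1.5 seconds as
my guess at the midpoint of "one or two". A quick sketch:

    concurrent_users = 1600   # "1600 users", assumed all actively submitting
    response_time_s = 1.5     # guessed midpoint of "one or two" seconds

    # Little's law: throughput ~= concurrency / response time (no think time)
    tps = concurrent_users / response_time_s
    print(f"~{tps:.0f} transactions per second")   # ~1067 TPS
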
The kind of transactions that concern me are high-activity requests,
i.e., where nearly every transaction involves a significant chunk
of the table. Our servers receive remote updates over a WAN that
are EACH typically 1-10k rows; and alterations to the running system
occur on a monthly basis, typically requiring all rows to be updated
(this is the way the apparel business works).

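For contrast, here is a sketch of what one of those batch-style feeds
looks like next to a single OLTP-style touch; the table and column names
are invented for illustration, not lifted from our system:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE styles (style_no INTEGER PRIMARY KEY, price REAL)")
    conn.executemany("INSERT INTO styles VALUES (?, ?)",
                     [(i, 10.0) for i in range(10_000)])

    # A hypothetical remote update batch: (new_price, style_no) pairs, ~5k rows.
    batch = [(12.5, i) for i in range(0, 10_000, 2)]

    # OLTP-style: a tiny transaction touching a single row.
    with conn:
        conn.execute("UPDATE styles SET price = ? WHERE style_no = ?", (9.99, 42))

    # Batch-style: one transaction touching half the table, as the WAN feed arrives.
    with conn:
        conn.executemany("UPDATE styles SET price = ? WHERE style_no = ?", batch)

Handling the same feed a row at a time would mean thousands of tiny
transactions, which is where clustering and contiguous allocation start
to matter.
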
So, we may well be talking about completely different problem domains.
I'd be interested in hearing more about the specifics of yours.
Did I miss it when you mentioned exactly what these 30K records contain?

--
Mischa Sandberg ... Mischa_Sandberg@mindlink.bc.ca
                 or uunet!van-bc!rsoft!mindlink!Mischa_Sandberg
*-*-*-*-*-*-*-*-*-*-*
Engineers think equations are an approximation of reality.
Physicists think reality is an approximation of the equations.
Mathematicians never make the connection.