-
- 286-Based NetWare v2.1x
- File Service Processes
-
- The Final Word
-
-
- Systems Engineering Division
- April 1990
- Novell, Inc.
- 122 East 1700 South
- Provo, UT. 84606
-
- Disclaimer
-
-
- Novell Inc. makes no representations or warranties with respect to the
- contents or use of this report, and specifically disclaims any express or
- implied warranties of merchantability or fitness for any particular
- purpose. Further, Novell Inc. reserves the right to revise this report and
- to make changes in its content at any time, without obligation to notify
- any person or entity of such revision or changes.
-
- (c) Copyright 1990 by Novell, Inc., Provo, Utah
-
- All rights reserved. This report may be stored electronically, and
- reproduced for your use, as long as no part of this report is omitted or
- altered. Further, this report, in part or whole, may not be reproduced,
- photocopied, stored in a retrieval system, or transmitted, in any form or
- by any means, electronic, mechanical, photocopying, recording, or
- otherwise, for any publication, general, trade, user group or otherwise,
- without the express prior written consent of Novell, Inc.
- Preface
- -------
-
- The following report is a preliminary excerpt from an upcoming Novell
- Systems Engineering Division Research report entitled "NetWare Internals
- and Structure". The actual report may differ slightly from this excerpt;
- however, the content will be the same. This particular excerpt provides an
- in-depth explanation of File Service Processes (FSP) under 286-based
- NetWare v2.1x. This includes ELS I and II, Advanced Dedicated, Advanced
- Non-Dedicated, and SFT. Because of the way in which FSPs are allocated, the
- following excerpt will also provide a detailed explanation of RAM allocated
- in the DGroup data segment under 286-based NetWare v2.1x.
-
- Please note that NetWare 386 incorporates, among other things, a completely
- different memory scheme than 286-based NetWare. As a result, none of this
- discussion of limitations or memory segments applies to NetWare 386.
-
- The most evident problem experienced by users is a shortage of File
- Service Processes. This problem has recently surfaced for two reasons.
- The first is the growing tendency towards building bigger and more
- complex server configurations. The second is the addition of
- functionality and features to the NetWare OS. This shortage of FSPs has
- prompted a variety of explanations, both from within Novell and from
- outside it. While some of these explanations have been partially
- correct, this excerpt provides the actual mechanics and breakdown of
- this component of the 286-based NetWare operating system.
-
- After reading this report you should be able to understand all the factors
- affecting FSP allocation, as well as be able to correctly recognize when
- a server has insufficient FSPs. Additionally you will have several options
- for dealing with FSP starved servers.
-
-
-
- Page 1 Copyright 1990 by Novell, Inc., Provo, Utah
-
- File Service Processes
- ----------------------
-
- A File Service Process is a process running in the NetWare Server that
- services File Service Packets. These are typically NetWare Core Protocol
- (NCP) requests. Workstations, or clients, in a NetWare network request
- services from the File Server through NCP requests. When a workstation
- wants to read a file, the NetWare shell builds a packet with the
- appropriate NCP request for reading the correct file, and then sends it off
- to the server.
-
- At the server, the NCP request is handed off to a FSP. The FSP processes
- the NCP request. It is the only process running in the NetWare server that
- can process a NCP request. The FSP does this in one of two ways. It either
- processes the request directly, or it can schedule additional processes,
- in order to service the request.
-
- Because there are various processes with various lengths of run time that
- can be used in the servicing of a workstation's NCP request, File Service
- Processes become a potential bottleneck at the server. The following is an
- example of this:
-
- A workstation sends a NCP request that asks that a certain block of data
- be read from the server's disk. The FSP servicing the NCP request schedules
- the appropriate process to retrieve the information from the disk, and then
- instructs this disk process to "wake it up", when it has the information.
- The FSP then "goes to sleep" waiting for completion of the disk process.
-
- If no other FSPs are available to run, then no other NCP requests can be
- processed until this first request is finished. During this time period,
- the server is forced to process lower priority processes (if any are
- scheduled to run) until the disk request is completed and the FSP returns
- with another request. The server will also delay or ignore any new NCP
- requests that come in during this time period.
-
- It should be noted that a FSP will, in practical terms, only go to sleep
- while it waits for information to come back from a disk request. There are
- typically no other processes in the NetWare server used by the FSP that
- would cause the FSP to "go to sleep".
-
-
-
-
- When a server does not have enough FSPs, performance will typically
- degrade, especially in heavy data movement environments (large file
- copies, database environments, etc.). The problem created was depicted in
- the previous scenario: the file server must process NCP requests in a
- serial fashion rather than a parallel one, creating a longer waiting line
- for requests.
-
- (How many of us have expressed frustration at seeing only one bank teller
- servicing the waiting line, especially on a Friday afternoon.)
-
- Additionally, because there is only a certain amount of buffer space
- available on a server for incoming packets, packets coming in after this
- buffer space is filled are trashed. The workstations must then spend more
- time resending requests, which reduces performance for the workstation and
- also reduces performance for the network due to the increased traffic over
- the cable.
-
- However, not all degradation can be attributed to a lack of FSPs, even in
- the aforementioned heavy data movement environments. In some instances bad
- or intermittent NICs, either at the server, or at another node, can create
- the very same performance degradations.
-
- Before deciding that a network problem is due to FSP shortages, you should
- consult the following FCONSOLE statistic:
-
- The very first indication of FSP problems is shown under FCONSOLE -> LAN
- I/O Stats -> File Service Used Route. The number on this line is the
- number of File Service Packets (or NCP requests) that had to wait because
- no FSP was able to service them.
-
- Take this number and divide it by the number of File Service Packets (also
- on this screen), which indicates the number of File Service Packets
- serviced by this server. This ratio should be below 1%.
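- The ratio check above can be sketched in a few lines (the function and
- parameter names are illustrative, not part of FCONSOLE):

```python
def fsp_wait_ratio(used_route, file_service_packets):
    """Fraction of File Service Packets that had to wait for a FSP.
    Both inputs come from the FCONSOLE LAN I/O Statistics screen."""
    if file_service_packets == 0:
        return 0.0
    return used_route / file_service_packets

# A reading of 120 waits out of 50000 packets is comfortably under
# the 1% guideline; 600 waits out of 50000 is over it.
```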
-
-
-
-
-
- Care should be taken when using this FCONSOLE diagnostic method. The
- problem is that the File Service Used Route counter will roll over easily,
- especially in a FSP starved environment. While it is usually easy to see
- whether this counter is about to roll over on your server (you will be
- able to see the number steadily increasing as you look at it), it is still
- recommended that these numbers be taken several times over the course of
- the day, or several days, to see if there are any radical differences in
- the percentages. It is especially useful to do this calculation during
- heavy utilization times on your server. In some instances you might not
- experience FSP starvation until a particular application is running or a
- certain activity takes place.
-
- Lastly, as will be explained later in this document, attaching a FSP
- amount to a particular option is inaccurate. For example, saying that a
- certain LAN board "takes up 2 FSPs" is misleading, because one server's
- FSP allocation can be radically different from another's. The most
- accurate information that can be given for server options is in DGroup
- bytes used, not FSPs.
-
-
-
- DGroup Data Segment
- -------------------
-
-
-
- The DGroup data segment is the most important segment of memory for the
- NetWare Operating System. It consists of a single 64K block of RAM, which
- cannot be changed due to the way in which pre-80386 Intel microprocessors
- segment RAM into 64K blocks. This 64K block of RAM exists (and is required
- in order for the Server to even operate) in the smallest of server
- configurations, as well as the largest. Adding or removing RAM does not
- affect this block at all. (Indeed this is RAM that is part of the "Minimum
- RAM required" specification of the NetWare OS.)
-
- The DGroup data segment contains various integral components which serve
- as the heart of the NetWare OS. Briefly, these components are the Global
- Static Data area, the Process Stack area, the Volume and Monitor Table
- area, Dynamic Memory Pool 1, and the File Service Process Buffer area. The
- components all reside within this 64K data segment mostly for performance
- reasons. In past versions of NetWare, some
- components were removed from the DGroup Data segment in order to
- accommodate increased functionality as it was added to the OS. However,
- further removal of components from this area with the current version of
- the OS would necessitate major changes.
-
- The Global Static Data area contains all the global variables defined in
- the NetWare OS. This area also contains all of the global variables defined
- by the LAN and disk, (or VADD), drivers.
-
- The Process stacks area provides stack space for all of the various NetWare
- processes.
-
- The Volume and Monitor tables contain information for the Monitor screen
- of the file server, as well as information on all of the disk volumes
- mounted on the server.
-
- Dynamic Memory Pool 1 is used by virtually all NetWare processes and
- routines as either temporary or semi-permanent workspace.
-
-
-
-
-
-
-
- The File Service Process Buffers are the buffers where incoming File
- Service Packets are placed. An interesting side note is that File Service
- Processes are not a part of DGroup itself. However, the number of File
- Service Process Buffers directly determines how many File Service
- Processes are allocated.
-
- The following graphic illustrates the five components and their minimum
- to maximum RAM allocation:
-
-
- |===============================================================|
- | |
- | Global Static Data: 28-40 KB |
- | |
- |===============================================================|
- | |
- | Process Stacks: 7-11 KB |
- | |
- |===============================================================|
- | |
- | Volume & Monitor Tables: 1-12 KB |
- | |
- |===============================================================|
- | |
- | Dynamic Memory Pool 1: 16-21 KB |
- | |
- |===============================================================|
- | |
- | File Service Process Buffers: 2-12 KB |
- | |
- |===============================================================|
-
-
-
- The following portion of the report will go into more detail on each one
- of these DGroup components.
-
-
-
-
-
-
-
-
-
-
-
-
-
- |===============================================================|
- | |
- | Global Static Data: 28-40 KB |
- | |
- |===============================================================|
-
-
-
-
- The Global Static Data Area is typically the largest single segment of
- DGroup allocated.
-
- The Global Static Data Area contains all of the global variables defined
- by the operating system code. This number has increased not only with each
- successive version of the OS, but with most revisions as well. A table of
- OS DGroup allocation is included for comparison.
-
- This area also contains all of the global variables defined in both the
- NetWare NIC Drivers and the Disk drivers. Tables for disk and NIC Driver
- DGroup allocations are also included.
-
- When loading multiple NIC Drivers, the variables are allocated in DGroup
- once for each NIC Driver. If the same NIC Driver is loaded twice, then the
- variables are allocated twice. For example, if you configure two NE2000s
- into the OS, then the DGroup allocation is 812 bytes (2 times 406 bytes).
-
- When loading multiple Disk drivers, the variables are also allocated in
- DGroup once for each Disk driver. However, if the same Disk driver is
- loaded multiple times, the variables are still only allocated once. For
- example, if you configure the ISA disk driver and the Novell DCB driver
- into the OS, then the DGroup allocation is 292 plus 783, or 1075 bytes.
- However, if you configure two Novell DCBs into the OS, then the DGroup
- allocation is only 783 bytes, and not 1566.
-
- The only user configurable options for this component of DGroup are the
- types and number of NIC and Disk drivers that will be loaded into the OS.
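- A minimal sketch of the two allocation rules above, using byte counts from
- the driver tables in this report (the dictionary and function names are
- illustrative only, not NetWare structures):

```python
# Per-driver DGroup costs, in bytes, from the tables in this report.
NIC_BYTES = {"NE1000": 301, "NE2000": 406, "NE/2": 356}
DISK_BYTES = {"ISA": 292, "DCB": 783}

def driver_dgroup(nics, disks):
    """NIC driver variables are allocated once per board configured;
    disk driver variables are allocated once per driver *type*, no
    matter how many boards of that type are present."""
    per_board = sum(NIC_BYTES[n] for n in nics)        # each instance counts
    per_type = sum(DISK_BYTES[d] for d in set(disks))  # duplicates collapse
    return per_board + per_type
```

- For example, two NE2000s contribute 812 bytes, while two DCBs contribute
- only 783, matching the examples above.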
-
-
-
-
-
-
-
-
-
-
-
-
- Operating System and Disk Driver DGroup Allocation Tables
- ---------------------------------------------------------
-
-
-
- |===============================================================|
- | |
- | Operating System |
- | |
- |===============================================================|
- | |
- | DGroup |
- | Operating System Allocation |
- | |
- |===============================================================|
- | |
- | Advanced NetWare v2.15 28,454 Bytes |
- | Advanced NetWare v2.15 (Non-Dedicated) 28,518 Bytes |
- | SFT NetWare v2.15 28,444 Bytes |
- | SFT with TTS NetWare v2.15 28,596 Bytes |
- | Advanced NetWare v2.15c 28,466 Bytes |
- | Advanced NetWare v2.15c (Non-Dedicated) 28,530 Bytes |
- | SFT NetWare v2.15c 28,466 Bytes |
- | SFT with TTS NetWare v2.15c 28,608 Bytes |
- | |
- |===============================================================|
-
-
-
- |===============================================================|
- | |
- | Disk Drivers |
- | |
- |===============================================================|
- | |
- | DGroup |
- | Disk Drivers Allocation |
- | |
- |===============================================================|
- | |
- | IBM AT hard disk controller 170 Bytes |
- | Novell Disk CoProcessor - AT 783 Bytes |
- | IBM PS/2 Model 30 286 MFM 138 Bytes |
- | IBM PS/2 MFM disk controller 152 Bytes |
- | IBM PS/2 ESDI disk controller 180 Bytes |
- | Industry Standard ISA or AT Controller 292 Bytes |
- | |
- |===============================================================|
-
-
-
-
-
-
-
-
-
- NIC Driver DGroup Allocation Table
- ----------------------------------
-
-
- |===============================================================|
- | |
- | NIC Driver DGroup Allocation |
- | |
- |===============================================================|
- | DGroup |
- | NIC Driver Allocation |
- |===============================================================|
- | Ethernet |
- | Novell Ethernet NE1000 301 Bytes |
- | Novell Ethernet NE2000 406 Bytes |
- | Novell Ethernet NE/2 356 Bytes |
- | Novell Ethernet NE2000 W/AppleTalk 881 Bytes |
- | Novell Ethernet NE/2 W/AppleTalk 837 Bytes |
- | Micom-Interlan NP600 243 Bytes |
- | 3Com 3C501 EtherLink 403 Bytes |
- | 3Com 3C505 EtherLink Plus (2012) 405 Bytes |
- | 3Com 3C505 EtherLink Plus (1194) 573 Bytes |
- | 3Com 3C505 Etherlink Plus W/AppleTalk 798 Bytes |
- | 3Com 3C503 EtherLink II 388 Bytes |
- | 3Com 3C523 EtherLink/MC 258 Bytes |
- |===============================================================|
- | Token Ring |
- | IBM Token-Ring 644 Bytes |
- | IBM Token-Ring Source Routing 3920 Bytes |
- |===============================================================|
- | Arcnet |
- | Novell RX-Net 256 Bytes |
- | Novell RX-Net/2 -- SMC PS110 259 Bytes |
- | SMC Arcnet/Pure Data 256 Bytes |
- |===============================================================|
- | Other Protocols |
- | Novell NL1000 & NL/2 (AppleTalk) 108 Bytes |
- | Novell Star Intelligent NIC 160 Bytes |
- | AT&T StarLAN 103 Bytes |
- | Corvus Omninet 162 Bytes |
- | IBM PC Cluster 1044 Bytes |
- | IBM PCN (Original Adapter) 606 Bytes |
- | IBM PCN II & Baseband 696 Bytes |
- | Gateway Communications Inc. G/NET 241 Bytes |
- | Proteon ProNET-10 P1300/P1800 356 Bytes |
- | Generic NetBIOS 1526 Bytes |
- | IBM Async (Com1/Com2) 3203 Bytes |
- | Async WNIM 9942 Bytes |
- | Telebit P.E.P. Modem/WNIM 9952 Bytes |
- |===============================================================|
-
-
-
-
-
-
-
- |===============================================================|
- | |
- | Process Stacks: 7-11 KB |
- | |
- |===============================================================|
-
-
-
-
- There is a stack area allocated for each NetWare Server process. Each of
- the stacks ranges from 80 to 1000 bytes. The following are the stack space
- requirements for the NetWare processes:
-
-
-
-
- |===============================================================|
- | |
- | Standard Operating System processes: 7136 bytes |
- | |
- | TTS Stack: 250 bytes |
- | |
- | Print Spooler Stack: 668 bytes |
- | (This is allocated once for |
- | each port spooled in Netgen) |
- | |
- |===============================================================|
-
-
-
-
- Note that print spooler stacks are created only for spooled ports that are
- defined in Netgen. Print servers and / or print queues do not impact FSP
- allocation.
-
- The only user configurable options for this component of DGroup are
- loading TTS and configuring spooled ports in Netgen.
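- The stack space arithmetic for this component, covering the two
- user-configurable pieces listed above, can be sketched as follows (the
- function name is illustrative):

```python
def process_stack_bytes(tts=False, spooled_ports=0):
    """Standard OS process stacks (7136 bytes) plus the two
    user-configurable pieces: the TTS stack (250 bytes) and one
    print spooler stack (668 bytes) per port spooled in Netgen."""
    return 7136 + (250 if tts else 0) + 668 * spooled_ports
```

- An SFT server with TTS and five spooled ports would thus use 10726 bytes
- of DGroup for process stacks.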
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
- |===============================================================|
- | |
- | Volume & Monitor Tables: 1-12 KB |
- | |
- |===============================================================|
-
-
- The Monitor table is used by the console to store information required to
- display the Monitor screen. This table size is fixed, not configurable.
-
-
- |===============================================================|
- | |
- | Monitor Table Size: 84 bytes |
- | |
- |===============================================================|
-
- The Volume table is used to maintain information on each of the disk
- volumes mounted on the server. The size of memory allocated for this table
- is dependent on the size of the mounted volumes, as well as the number of
- directory entries allocated in Netgen. Therefore this is the user
- configurable portion of this DGroup component.
-
- Please note that mounted volume size is used for these tables and
- therefore mirrored drives are not counted twice. The following are the
- Volume table memory requirements:
-
-
- |===============================================================|
- | |
- | For each volume mounted on the server: 84 bytes |
- | |
- | For each MB of disk space mounted: 1.75 bytes |
- | (This number is rounded to |
- | the next highest integer.) |
- | |
- | For each 18 directory entries on all volumes: 1 byte |
- | (This number is rounded to |
- | the next highest integer.) |
- | |
- |===============================================================|
-
- For example, if you had a server with only one volume (SYS:) mounted, with
- a size of 145MB and 9600 directory entries, the Volume and Monitor Tables
- would require (1*84) + (145*1.75) + (9600/18) + 84 bytes of DGroup memory,
- or 956 bytes (rounded up).
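- The computation above can be sketched as follows (rounding is applied to
- the totals, as in the example; the function name is illustrative):

```python
from math import ceil

def volume_monitor_bytes(volumes):
    """volumes: list of (size_in_mb, directory_entries) tuples, one
    per mounted volume.  Mirrored drives count once (mounted size)."""
    total = 84                                            # Monitor table
    total += 84 * len(volumes)                            # per volume
    total += ceil(1.75 * sum(mb for mb, _ in volumes))    # per MB mounted
    total += ceil(sum(d for _, d in volumes) / 18)        # per 18 dir entries
    return total
```

- For the single 145MB SYS: volume with 9600 directory entries, this yields
- the 956 bytes computed above.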
-
-
-
-
- |===============================================================|
- | |
- | Dynamic Memory Pool 1: 16-21KB |
- | |
- |===============================================================|
-
-
-
-
- Dynamic Memory Pool 1 is used by virtually all NetWare processes and
- routines as temporary workspace. Workspace from 2 to 1024 bytes, with 128
- being the average, is allocated to a NetWare process, used, and then
- recovered upon completion.
-
- Additionally, there are several NetWare processes and routines that hold
- memory allocated out of DMP 1, either on a semi-permanent basis, or until
- the process or routine finishes. A table of these semi-permanent DMP 1
- allocations is included.
-
- If there is no Dynamic Memory Pool 1 RAM available for a process or
- routine, the workstation will likely display "Network Error: out of dynamic
- workspace during <operation>", where <operation> refers to the name of the
- DOS call that was being tried. In some instances, with some versions of the
- OS, running out of DMP 1 RAM can cause print jobs to either disappear
- (until more DMP 1 RAM is freed) or be lost completely. It has also been
- reported that under some earlier versions of NetWare 286, running out of
- DMP 1 RAM can cause the server to lock up without displaying an ABEND error
- message.
-
- Please note that references to Dynamic Workspace in an error message can
- refer to the unavailability of RAM in either Dynamic Memory Pools 1, 2 or
- 3. Use the FCONSOLE => Summary => Statistics screen to determine the exact
- memory pool involved.
-
- Based upon the way in which DMP 1 is allocated, it is very difficult to
- manipulate the size of this pool. Therefore the major user configurable
- options for this DGroup component are in the semi-permanent DMP 1
- allocations.
-
-
-
-
-
-
-
-
-
-
-
-
- |===============================================================|
- | |
- | Semi-Permanent Dynamic Memory Pool 1 Allocations: |
- | |
- |===============================================================|
- | |
- | Drive Mappings 14 bytes per map assignment, |
- | per workstation |
- | |
- |*Additional Drive information 612 bytes per physical drive |
- | |
- |*Process Control Blocks 28 bytes each (30 allocated |
- | initially) |
- | |
- |*Semaphores 6 bytes each (40 allocated |
- | initially) |
- | |
- | Auto Remirror Queue 4 bytes per drive to be |
- | remirrored |
- | |
- | Apple MAC file support 4 bytes per open MAC file |
- | |
- | Workstation support 8 bytes per logged in |
- | workstation |
- | |
- |*Disk Storage Tracking Process 960 bytes (if Accounting is |
- | enabled) |
- | |
- | Spool Queue entries 44 bytes per spooled print |
- | job |
- | |
- | Queue Management System 28 bytes per queue |
- | |
- | QMS Queue servers 5 bytes per queue server up |
- | to a maximum of 25 queue |
- | servers |
- | |
- |*Volume Names up to 16 bytes per mounted |
- | volume |
- | |
- |*VAPs 128 bytes per VAP, for stack |
- | space |
- | |
- |===============================================================|
-
-
- Please note that the DMP 1 allocations marked with an asterisk (*) are,
- in practical terms, permanent allocations.
-
-
-
-
-
-
-
-
-
- |===============================================================|
- | |
- | File Service Process Buffers: 2-12 KB |
- | |
- |===============================================================|
-
-
-
-
- The File Service Process buffers are the buffers allocated in DGroup for
- incoming File Service Request Packets. The number of FSP buffers available
- directly determines how many FSPs your server will have, in a one-to-one
- relationship. If you have 4 FSP buffers available, then you will have 4
- FSPs. The maximum FSPs available for any server configuration is 10.
-
- The following is the breakdown of memory requirements for each File
- Service Process buffer:
-
-
- |===============================================================|
- | |
- | Reply buffer 94 bytes |
- | Workspace 106 bytes |
- | Stack space 768 bytes |
- | Receive buffer 512-4096 bytes |
- | |
- |===============================================================|
-
- The total size of the FSP buffer is dependent upon the largest packet size
- of any NIC Driver installed in the File Server. The exact packet size
- constitutes the receive buffer portion of the FSP buffer.
-
- As an example, if you have configured an Ethernet driver with a packet
- size of 1024 bytes, and an Arcnet driver using 4096 byte packets, then the
- FSP buffers for this server will be 5064 bytes, (4096 + 768 + 106 + 94).
- This provides ample evidence that configuring a server with NIC drivers of
- varied packet sizes can be very inefficient. If at all possible, moving
- different NIC drivers to external bridges can remedy this situation.
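- A sketch of this sizing rule, using the 94 + 106 + 768 bytes of fixed
- per-buffer overhead from the table above (names are illustrative):

```python
# Fixed per-buffer overhead, in bytes, from the table above.
REPLY, WORKSPACE, STACK = 94, 106, 768

def fsp_buffer_size(packet_sizes):
    """Each FSP buffer must hold the largest packet any installed NIC
    driver can deliver, so a single large-packet driver inflates
    every FSP buffer on the server."""
    return max(packet_sizes) + REPLY + WORKSPACE + STACK
```

- A 1024-byte driver alone yields 1992-byte buffers; adding a 4096-byte
- driver inflates every buffer to 5064 bytes, as in the example above.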
-
-
-
-
-
-
-
-
-
-
-
- Additional File Service Process Buffer RAM
- ------------------------------------------
-
-
- |===============================================================|
- | |
- | Additional Reply buffer 94 bytes |
- | Memory Set aside for DMA workaround 0-4095 bytes |
- | |
- |===============================================================|
-
-
- Additionally, there is a one-time allocation of a single additional reply
- buffer of 94 bytes. Lastly, if any NIC driver configured into the OS
- supports DMA access, there may be additional memory that will be set aside
- (unused).
-
- The problem is due to the fact that in some PCs, the DMA chip cannot
- handle addresses correctly across physical 64K RAM boundaries. Therefore,
- if the receive buffer of a FSP buffer straddles a physical 64K RAM
- boundary, then the OS will skip the memory (depending on the size of the
- receive buffer it could be 0-4095 bytes) and not use it. This problem can
- be eliminated by changing to a non-DMA NIC. It is also conceivable that
- changing the Volume Tables could shift the data structures enough that no
- receive buffer straddles a boundary. The following graphic depicts this
- workaround.
-
-
- |===============================================================|
- | ---------- | | |
- | | Global | | | |
- | | Static | | | |
- | | Data | | | |
- | ---------- | Physical | |
- | | Process | | | |
- | | Stacks | | Block | |
- | ---------- | | |
- | | Volume/ | | 64K | |
- | | Monitor | | | |
- | | Tables | | RAM | |
- | ---------- | | |
- | | DMP 1 | | | Unused RAM, |
- | ---------- --|------------|--------\ skipped for DMA |
- | | FSP |---============---------/ workaround, if less |
- | | Buffers | ============ than a complete |
- | ---------- | Physical | receive buffer |
- | | Block | |
- | | 64K | |
- | | RAM | |
- |===============================================================|
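- The skipped-RAM arithmetic described above can be sketched as follows (a
- simplified model that assumes the physical start address of the receive
- buffer is known; names are illustrative):

```python
def dma_skip(start, receive_buffer_size):
    """Bytes the OS would set aside (unused) so that a receive buffer
    beginning at physical address `start` does not straddle a 64K
    physical RAM boundary.  Returns 0 if the buffer already fits."""
    next_boundary = (start // 0x10000 + 1) * 0x10000
    if start + receive_buffer_size > next_boundary:
        return next_boundary - start   # skip forward to the boundary
    return 0
```

- With a 4096-byte receive buffer the skip ranges from 0 bytes (the buffer
- ends exactly at a boundary) up to 4095 bytes (it starts one byte past the
- last position that fits).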
-
-
- |===============================================================|
- | |
- | NIC Driver Packet and DGroup Buffer Sizes (bytes) |
- | |
- |===============================================================|
- | Max.Packet DGroup H/W |
- | NIC Driver Size Buffer DMA |
- |===============================================================|
- | Ethernet |
- | Novell Ethernet NE1000 1024 1992 No |
- | Novell Ethernet NE2000 1024 1992 No |
- | Novell Ethernet NE/2 1024 1992 No |
- | Novell Ethernet NE2000 W/AppleTalk 1024 1992 No |
- | Novell Ethernet NE/2 W/AppleTalk 1024 1992 No |
- | Micom-Interlan NP600 1024 1992 Yes |
- | 3Com 3C501 EtherLink 1024 1992 No |
- | 3Com 3C505 EtherLink Plus (2012) 1024 1992 Yes |
- | 3Com 3C505 EtherLink Plus (1194) 1024 1992 Yes |
- | 3Com 3C505 ELink Plus W/AppleTalk 1024 1992 Yes |
- | 3Com 3C503 EtherLink II 1024 1992 Yes |
- | 3Com 3C523 EtherLink/MC 1024 1992 No |
- |===============================================================|
- | Token Ring |
- | IBM Token-Ring 1024 1992 No |
- | IBM Token-Ring Source Routing 1024 1992 No |
- |===============================================================|
- | Arcnet |
- | Novell RX-Net 512 1480 No |
- | Novell RX-Net/2 -- SMC PS110 512 1480 No |
- | SMC Arcnet/Pure Data 512 1480 No |
- |===============================================================|
- | Other Protocols |
- | Novell NL1000 & NL/2 (AppleTalk) 1024 1992 No |
- | Novell Star Intelligent NIC 512 1480 No |
- | AT&T StarLAN 512 1480 No |
- | Corvus Omninet 512 1480 No |
- | IBM PC Cluster 512 1480 No |
- | IBM PCN (Original Adapter) 1024 1992 Yes |
- | IBM PCN II & Baseband 1024 1992 No |
- | Gateway Communications Inc. G/NET 1024 1992 No |
- | Proteon ProNET-10 P1300/P1800 1024 1992 No |
- | Generic NetBIOS 2048 3016 No |
- | IBM Async (Com1/Com2) 512 1480 No |
- | Async WNIM 512 1480 No |
- | Telebit P.E.P. Modem/WNIM 512 1480 No |
- |===============================================================|
-
- The preceding table shows various NIC Drivers and their Packet and FSP
- Buffer sizes, and whether or not they use DMA. The following tables show
- the exact steps in allocating DGroup RAM, as well as the biggest impacters
- on DGroup RAM allocation and some steps for alleviating FSP shortages.
-
-
-
- |===============================================================|
- | |
- | DGroup RAM Allocation Process |
- | |
- |===============================================================|
- | |
- | The following is the step by step process of allocating |
- | RAM in Dgroup: |
- | |
- | 1) The OS first allocates the Global Static Data Area of |
- | DGroup. This includes OS, NIC, and Disk variables. |
- | |
- | 2) The Process Stacks are allocated next. |
- | |
- | 3) The Volume and Monitor tables are next allocated. |
- | |
- | 4) 16KB is set aside for Dynamic Memory Pool 1. |
- | |
- | 5) The remaining DGroup RAM is used to set up File Service |
- | Process buffers. |
- | |
- | 6) First 94 bytes is set aside as an additional reply buffer. |
- | |
- | 7) Next 0-4095 bytes may be set aside, (unused), if any |
- | installed NIC Driver supports DMA. |
- | |
- | 8) Then the remaining RAM is divided by the total FSP buffer |
- | size up to a maximum of 10. |
- | |
- | 9) The remainder DGroup RAM that could not be evenly made |
- | into a FSP buffer is added to Dynamic Memory Pool 1. |
- | |
- |===============================================================|
-
- A server configured with a NE2000 NIC and a SMC Arcnet NIC (using the
- Novell driver) would require a FSP buffer size of 1992, which is the FSP
- buffer size of the NIC with the larger packet size (the NE2000 has a 1024
- byte packet size as opposed to the 512 byte packet size of the Arcnet
- card).
-
- If, after allocating all the prior DGroup data structures, there remained
- 7500 bytes of DGroup available for FSP buffers, the allocation would be as
- follows: subtract 94 bytes from the 7500 for the one additional reply
- buffer, then divide the remainder by the 1992 FSP buffer size, giving 3
- FSP buffers and a remainder of 1430 to be added to Dynamic Memory Pool 1.
- The following is the computation:
-
- (7500 - 94) / 1992 = 3 with remainder of 1430
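- Steps 6, 8, and 9 of the allocation process can be sketched as follows
- (step 7, the DMA set-aside, is omitted for simplicity; the function name
- is illustrative):

```python
def allocate_fsps(free_dgroup, fsp_buffer_size, max_fsps=10):
    """Reserve the one-time extra reply buffer (step 6), carve whole
    FSP buffers out of what is left, 10 at most (step 8), and return
    the remainder to Dynamic Memory Pool 1 (step 9)."""
    remaining = free_dgroup - 94                        # step 6
    fsps = min(remaining // fsp_buffer_size, max_fsps)  # step 8
    to_dmp1 = remaining - fsps * fsp_buffer_size        # step 9
    return fsps, to_dmp1
```

- With 7500 bytes free and a 1992-byte buffer this yields 3 FSPs and 1430
- bytes returned to DMP 1, matching the computation above.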
-
-
-
-
-
- Once this process is understood, it becomes very easy to see how close
- any particular server is to gaining another FSP.
-
-
- |===============================================================|
- | |
- | 1) First, figure out the FSP buffer size. |
- | |
- | 2) Next take the Maximum RAM in Dynamic Memory Pool 1, and |
- |    subtract 16384 bytes from it.                              |
- |    (16384 bytes is the fixed size of DMP 1)                   |
- | |
- | 3) Lastly subtract that difference from the FSP buffer size. |
- | |
- | That amount is how many bytes short that server |
- | configuration is from gaining an additional FSP. |
- | |
- |===============================================================|
-
- For example, if the server configuration had a 1992-byte FSP buffer size
- and the maximum DMP 1 was 16804 bytes, then in order to gain an additional
- FSP you would have to free up an additional 1572 bytes of DGroup. The
- computation is:
-
- 1992 - (16804 - 16384) = 1572
-
- Two solutions for this configuration would be to remove three spooled
- printer ports, or to reduce directory entries by 28296. Either of these
- would free up the necessary DGroup RAM.
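- The shortfall computation above can be sketched as follows (the function
- name is ours; 16384 bytes is the fixed size of DMP 1 given in the text):

```python
DMP1_FIXED_SIZE = 16384  # fixed size of Dynamic Memory Pool 1

def bytes_short_of_next_fsp(fsp_buffer_size, dmp1_maximum):
    # RAM that spilled into DMP 1 beyond its fixed size would otherwise
    # have counted toward another FSP buffer; the rest must be freed.
    overflow = dmp1_maximum - DMP1_FIXED_SIZE
    return fsp_buffer_size - overflow

print(bytes_short_of_next_fsp(1992, 16804))  # 1572
```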
-
-
- Troubleshooting
- ---------------
-
-
-
-
- |===============================================================|
- | |
- | Troubleshooting |
- | |
- |===============================================================|
- | |
- | The following factors have the biggest impact on DGroup |
- | RAM allocation, in order of importance: |
- | |
- |===============================================================|
- | |
- | 1) NIC Driver Packet Size |
- | |
- | 2) Amount of Disk space mounted and |
- | Directory Entries allocated |
- | |
- | 3) LAN and Disk driver Global variables |
- | |
- | 4) Possible DMA compatibility allowances, |
- | (if the NIC Driver uses DMA) |
- | |
- | 5) Spooled Ports defined in Netgen |
- | |
- | 6) TTS |
- | |
- |===============================================================|
-
-
-
- The following is the methodology used to define this list:
-
- 1) The NIC Driver packet size has the most significant impact on the
- allocation of FSPs, because it determines the divisor used to allocate FSP
- buffers. The larger the packet size, the larger the FSP buffer, and the
- fewer the FSPs.
-
- 2) Larger disk configurations can conceivably have the second largest
- impact on DGroup RAM allocation. Mounting a single 2GB volume with 10,000
- directory entries would require 13584 bytes of DGroup RAM alone.
-
-
-
-
-
-
- 3) The NIC and Disk driver global variables can be significant in size.
- The Async WNIM driver alone requires 9942 bytes of DGroup RAM.
-
- 4) The maximum DGroup RAM lost to this workaround is 4095 bytes.
-
- 5) The maximum DGroup RAM that can be allocated to print spooler stacks
- is 3340 bytes.
-
- 6) The TTS process stack uses 250 bytes of DGroup RAM, and the global
- static data variables for TTS can add another 142 to 152 bytes.
-
-
-
- |===============================================================|
- | |
- | DGroup RAM Management Methods |
- | |
- |===============================================================|
- | |
- | 1) Remove TTS (If Possible) |
- | |
- | 2) Remove Printers in Netgen (Possibly to Print Servers) |
- | |
- | 3) Decrease Directory Entries |
- | |
- | WARNING: Before decreasing directory entries, please read |
- | the Directory Entries information in the next |
- | section. If you incorrectly reduce your directory |
- | entries, it is possible that you will lose files. |
- | |
- | 4) Change NIC Drivers to non-DMA ones |
- | (If current drivers use DMA) |
- | |
- | 5) Decrease NIC Driver Packet size |
- | (Or move large packet drivers to bridge) |
- | |
- | 6) Decrease Disk space |
- | |
- | 7) Use Dynamic Memory Pool 1 patch for qualified servers |
- | |
- |===============================================================|
-
-
- The order of this list is an attempt to minimize the impact these changes
- would have on a given server. It goes without saying that some of these
- changes will be impossible for certain configurations. For example, if you
- are using a TTS database system, removing TTS is not an option. Likewise,
- removing spooled printers from your server may be impractical or
- impossible.
-
-
-
- Directory Entries
- -----------------
-
-
- When a 286-based NetWare server sets up a directory entry block based upon
- the number of directory entries defined in Netgen, it allocates that as one
- block. As directory entries are used up (by files, directories, and
- trustee privileges), the block is filled more or less sequentially. As the
- directory entry block is filled, NetWare keeps track of the peak directory
- entry used: the highest-numbered entry used in the directory entry block.
-
- However, directory entries are added sequentially only as long as prior
- directory entries are not deleted. When directory entries are deleted,
- "holes" are created in the directory entry block that are filled by
- subsequent new directory entries.
-
-
- |===============================================================|
- | |
- | ----------- |
- | New Server | | |
- | Directory | | |
- | Entry Block | Used | |
- | ----------- | Entries | |
- | | | |
- | | | |
- | /---|-----------|--- Peak Directory |
- | Total / | | Entries Used |
- | Free ---| | Free | |
- | Directory ---| | Entries | |
- | Entries \ | | |
- | \--- ----------- |
- | |
- | Used Server ----------- ------------------- |
- | Directory | | | |
- | Entry Block | Used | |"Live" |
- | ----------- | | | Block |
- | /---|-----------|--\ Deleted Files |-of |
- | -- | Free | - Directory | Dir. |
- | Total | \---|-----------|--/ Entries | Entries |
- | Free -| | Used | | |
- | Directory -| /---|-----------|--- Peak Directory-- |
- | Entries | / | | Entries Used |
- | - | Free | |
- | \ | | |
- | \--- ----------- |
- | |
- |===============================================================|
-
-
- This directory entry block fragmentation is not corrected either under
- NetWare or by running VREPAIR. To compute the amount of directory block
- fragmentation, perform the following steps:
-
- 1) Pull up the FCONSOLE => Statistics => Volume Information => (select a
- volume) screen.
-
- 2) Take the Maximum Directory Entries number and subtract the Peak
- Directory Entries Used number from it. This is the number of free directory
- entries that can be safely manipulated via Netgen, without loss of files.
-
- 3) Next, take the Current Free Directory Entries number and subtract the
- number from (2) from it. This number is the amount of free directory
- entries that are inside your "live" block of directory entries, and it
- indicates how much fragmentation of your directory block exists. The higher
- the proportion of free directory entries inside your "live" block (the
- number from (3)) to those outside it (the number from (2)), the more
- fragmented your directory block is.
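- The steps above amount to the following computation (a sketch; the
- function name and the example FCONSOLE figures are hypothetical):

```python
def directory_fragmentation(max_entries, peak_used, current_free):
    # Step 2: free entries outside the "live" block --
    # these can be safely removed via Netgen without loss of files.
    safe_free = max_entries - peak_used
    # Step 3: free entries trapped inside the "live" block --
    # a measure of directory block fragmentation.
    fragmented_free = current_free - safe_free
    return safe_free, fragmented_free

# Hypothetical figures: 10000 maximum, 8000 peak used, 4000 currently free.
print(directory_fragmentation(10000, 8000, 4000))  # (2000, 2000)
```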
-
- When you down a server and run Netgen in order to reduce directory
- entries, the directory entry block is simply truncated to the new number.
- Netgen does not check whether the directory entries that are about to be
- deleted are in use. If you have a fragmented directory block and you
- reduce the directory entry block based upon the number of free directory
- entries you have available, it is entirely likely that you will be deleting
- directory entries that are in use.
-
- This will cause the files, directories, and trustee privileges defined in
- those directory entries to be lost. Running VREPAIR will salvage the files
- and some of the directory structure, but will save most files in the root
- directory with names created by VREPAIR, typically something like
- VF000000.000. For a more complete description of the operation of VREPAIR,
- consult section 8 in the SFT/Advanced NetWare 286 Maintenance manual.
-
-
- The only safe number to use in determining how much you can reduce your
- directory entries by is your Peak Directory Entries Used number. You can
- reduce your total directory entries to near this number without loss of
- files. Please note that you can quickly calculate whether manipulating the
- directory entries will buy your server any FSPs by checking your Maximum
- Directory Entries number against your Peak Directory Entries Used number.
- The difference between these two represents the number of directory
- entries you can manipulate. You can then calculate whether reducing this
- number will aid your FSP situation.
-
- If you recognize that you have a significant amount of directory block
- fragmentation, you can elect to defragment it using the following method:
-
- 1) Make a complete backup of the volume(s) whose directory entry block you
- wish to defragment.
-
- 2) Make note of how many directory entries you have USED.
-
- Do this by rerunning FCONSOLE and selecting the Statistics => Volume
- Information => (select a volume) screen. Next, take the Maximum Directory
- Entries number and subtract the Current Free Directory Entries number from
- it. This is the number of directory entries that you have used, and need
- in order to restore all of your volume.
-
- You should now calculate how many total directory entries you wish to have.
- If you do not have a set number in mind, taking this number of used
- directory entries and adding half again to it is a good starting point. For
- example, if you have used 6000 directory entries, allocating 9000 is a good
- start.
-
- 3) Down the server and rerun Netgen.
-
- 4) Reinitialize the selected volume(s).
-
- 5) Reset the number of directory entries to the number you calculated you
- needed from (2).
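- The arithmetic in step (2) can be sketched as follows (the function name
- is ours; the 6000-to-9000 example matches the text):

```python
def suggested_directory_entries(max_entries, current_free):
    used = max_entries - current_free  # entries needed to restore the volume
    # "Half again" rule of thumb for the new total allocation.
    return used, used + used // 2

print(suggested_directory_entries(10000, 4000))  # (6000, 9000)
```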
-
-
-
-
- Dynamic Memory Pool 1 Patch
- ---------------------------
-
-
-
- Finally, be aware that Novell has a patch fix for some FSP-starved server
- configurations. This patch has been available through LANSWER technical
- support for qualified configurations, which is the only Novell-supported
- method of distributing it. The patch also appears to be available from
- other sources in an unsupported fashion; however, warnings should precede
- its use.
-
- The patch consists of three files: a general-purpose debug-type program
- called PATCH.EXE, a patch file to be used with the PATCH.EXE program, and
- a README file. The instructions for the PATCH program are listed on the
- following page. The patch program works by taking the patch instruction
- file and patching the specified file. The patch instruction file consists
- of three lines: a pattern line, an offset, and a patch line, in that
- order. The following is the Novell-supplied patch instruction file called
- SERVPROC.
-
-
- |===============================================================|
- | |
- | Novell Dynamic Memory Pool 1 Patch Instructions |
- | |
- |===============================================================|
- | |
- | 8B E0 05 00 1F |
- | 4 |
- | 08 |
- | |
- |===============================================================|
-
- This means that the patch program will search the specified file (one of
- the OS .OBJ files) for the pattern 8B E0 05 00 1F, and will replace the
- byte 1F with 08. (You can read the PATCH program instructions for a
- further explanation by simply running the PATCH.EXE program without any
- parameters.)
-
- What this does is change a portion of the fixed size of Dynamic Memory
- Pool 1. The 1F byte represents a fixed size for this portion of DMP 1 of
- 7936 bytes, or 1F00h. The patch program changes that byte to 08, for 800h
- or 2048 bytes. This change means that DMP 1 will be reduced by 5888 bytes,
- or about 5.75KB. (7936 - 2048 = 5888)
-
-
-
- Understanding the operation of the patch allows the number of bytes by
- which DMP 1 is reduced to be changed. In other words, you can use any
- value from 1F down to 00 in the patch line of the patch instruction file.
- This will decrease DMP 1 in 256-byte increments, as follows:
-
-
- 1E Decrease DMP 1 by 256 bytes
- 1D Decrease DMP 1 by 512 bytes
- 1C Decrease DMP 1 by 768 bytes
- 1B Decrease DMP 1 by 1024 bytes
- ...
- 08 Decrease DMP 1 by 5888 bytes
- ...
- 00 Decrease DMP 1 by 7936 bytes
-
- Remember that in actuality, the hex number you are changing is the two-
- digit hex number above followed by 00. In the first case you would be
- patching the number 1F00h (the original value) to 1E00h. Subtracting the
- two gives a difference of 256 bytes. It is also conceivable that the patch
- could be used in this manner to increase the fixed size of this portion of
- DMP 1, by using numbers greater than 1F. You would again be changing this
- fixed size in 256-byte increments for every increment of one above 1F.
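- The relationship between the patch-line byte and the DMP 1 reduction can
- be sketched as follows (the function name is ours; 1F is the original
- value from the text):

```python
ORIGINAL_BYTE = 0x1F  # original byte (high byte of 1F00h = 7936 bytes)

def dmp1_reduction(patch_byte):
    # The patched byte is the high byte of a 16-bit value (xx00h),
    # so each step of 1 in the byte is 256 bytes.
    return (ORIGINAL_BYTE - patch_byte) * 256

print(dmp1_reduction(0x1E))  # 256
print(dmp1_reduction(0x08))  # 5888
print(dmp1_reduction(0x00))  # 7936
```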
-
- It is strongly recommended that you do not alter the patch line numbers
- in an effort to provide a quick fix for your server. If you have not
- performed the DGroup RAM calculations, and you do not know exactly what you
- are gaining in FSPs and losing in DMP 1, you should not be altering the
- patch. You should also be aware that in some configurations, if you are not
- reducing DMP 1 by at least one FSP buffer size, you will not gain any
- additional FSPs.
-
-
-
-
- The current patch value of 08 was arrived at because it will provide the
- following FSP gains, expressed as a minimum-to-maximum range of FSPs:
-
-
- 512-Byte Packet NIC Drivers 3-4 Additional FSPs
- 1024-Byte Packet NIC Drivers 2-3 Additional FSPs
- 2048-Byte Packet NIC Drivers 1-2 Additional FSPs
- 4096-Byte Packet NIC Drivers 1-2 Additional FSPs
-
- The warnings for use of this patch should be self-evident by now. If you
- run short of Dynamic Memory Pool 1 you will get erratic, and sometimes
- fatal, server behavior. Also, altering the patch numbers is not a
- guaranteed or supported function of the patch. These numbers and this
- explanation were arrived at by understanding how the patch works and then
- performing the calculations. If you feel the need to use the patch, you
- should use it as it is supplied.
-
- Before sending a user the patch, LANSWER will verify that the maximum
- allocated for Dynamic Memory Pool 1 is at least 6K greater than the peak
- used. If you receive the patch through other means you should, at the
- minimum, check those numbers yourself.
-
- Due to the nature of these types of fixes, which patch the operating
- system, it is always recommended that you try all prior means of curing an
- FSP-starved server before resorting to this type of patch.
-
-
-
-
- Final Notes
- -----------
-
-
-
-
- 286-based NetWare v2.1x was designed on the principle that enhanced
- client-server computing is the foundation of future computer networking.
- Therefore technologies such as a non-preemptive server OS, disk mirroring,
- transaction tracking, and hardware independence, all implemented with
- exceptional speed and security, formed the basis of the technology and
- design that went into 286-based NetWare. The fact that almost all other
- current network implementations are now borrowing heavily from these ideas
- matters little. The additional fact that more and more users are placing
- larger amounts of computer resources into their NetWare LANs only reaffirms
- the sound concepts behind the design. However, as with any design,
- limitations define the playing field.
-
- One conclusion that can be drawn from this report is echoed on the final
- page of the SFT NetWare 286 In-Depth Product Definition:
-
- "*NOTE: Maximums listed are individual limits. You will not be able to
- use these specifications at maximum levels at all times. You may have to
- purchase additional hardware to achieve some of these maximums."
-
- After understanding the relationship between DGroup RAM allocation, the
- separate DGroup RAM components, and File Service Processes, it becomes
- evident that, for example, setting up an Advanced NetWare 286 server with
- 100 workstations, 2GB of disk space, and four NICs is inadvisable, if
- possible at all. Many of the limitations of this type of configuration can
- be largely attributed to current hardware limitations.
-
- NetWare design technologies remain firmly focused on furthering network
- computing, and NetWare 386 is the next logical design step. NetWare 386
- introduces a completely new memory scheme that renders all of the current
- discussion on FSP and DGroup limitations academic. Using a completely
- dynamic memory and module management system, NetWare 386 is beginning to
- introduce a new set of features and technologies that will represent the
- new standard for computer networks. And as the next generation of computer
- hardware becomes available, we will find that NetWare 386 will be there,
- ready and waiting.
-