4. New Features

This chapter describes MPT features released since the MPT 1.5 release. Text that applies to IRIX systems only is marked "IRIX systems only." Text that applies to Linux systems only is marked "Linux systems only."

4.1 New Features for MPT Release 1.5.1

This section describes the features that were new in the MPT 1.5.1 release.

4.1.1 Enhancement to the MPI_Abort Function

The MPI_Abort function now returns the error code to the run environment. In addition, if an MPI process is terminated by a signal such as SIGBUS or SIGSEGV, a message reporting the signal number that caused the abort is displayed.

4.1.2 ROMIO Version 1.0.3

As of MPT release 1.5.1, MPT includes ROMIO 1.0.3. The major change in ROMIO 1.0.3 is that the library automatically compensates for the restrictions imposed by IRIX on buffer and file alignment and transfer size for files using direct I/O. Any nonconforming portions of the I/O operation are performed with buffered I/O, and the remainder (if any) uses direct I/O. Direct I/O can be used only on XFS filesystems on single-system-image IRIX systems.

If the environment variable MPIO_DIRECT_READ or MPIO_DIRECT_WRITE is set to "TRUE", all such files opened by ROMIO use direct I/O. Alternatively, individual files can be selected for direct I/O by using the info keys "direct_read" and "direct_write".
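For example, a single file can be requested for direct I/O through those info keys when the file is opened. The following minimal C sketch shows the general pattern; the file name is a placeholder, and the key values ("true" here) are an assumption that should be confirmed against the MPI(1) and ROMIO documentation for your release.

    #include <mpi.h>

    /* Open one file with the ROMIO "direct_read"/"direct_write" info keys
       (sketch only; the file name and key values are illustrative). */
    int main(int argc, char **argv)
    {
        MPI_Info info;
        MPI_File fh;

        MPI_Init(&argc, &argv);

        MPI_Info_create(&info);
        MPI_Info_set(info, "direct_read", "true");   /* request direct I/O reads  */
        MPI_Info_set(info, "direct_write", "true");  /* request direct I/O writes */

        MPI_File_open(MPI_COMM_WORLD, "datafile",
                      MPI_MODE_RDWR | MPI_MODE_CREATE, info, &fh);

        /* ... MPI-IO read/write calls ... */

        MPI_File_close(&fh);
        MPI_Info_free(&info);
        MPI_Finalize();
        return 0;
    }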
4.1.3 cc-NUMA Enhancements (IRIX systems only)

The following changes have been made to the cc-NUMA support provided by MPI and SHMEM with this release:

* When operating on hosts running IRIX 6.5.11 or higher, the default placement topology is now TOPOLOGY_CPUCLUSTER. This placement topology is designed to place processes as close together as possible, taking into account disabled CPUs.

* For Origin 3000 series systems, placement of MPI and SHMEM processes when setting MPI_DSM_PPM for sparse placement has been improved. When MPI_DSM_PPM is set to 2, processes are placed on separate CPU buses within a node.

* An option has been added to allow you to specify a list of CPUs to use for an MPI or SHMEM job. For details, see the MPI(1) and intro_shmem(3) man pages.

* Support has been added to the MPI product for nondefault page sizes for the user data segment.

4.1.4 XPMEM Driver (IRIX systems only)

Each of the SGI SN architectures can be partitioned into multiple independent systems. With the 1.5.1 release, MPI supports the XPMEM (cross partition) device driver, which allows MPI processes running on one partition to communicate with MPI processes on a different partition via the NUMAlink network.

The NUMAlink network is powered by block transfer engines (BTEs), which can be viewed as cache-coherent DMA engines. BTEs reside on the Bedrock ASICs and can be used to copy data from one physical memory range to another. BTE data transfers do not require processor resources, and they can be performed across partition boundaries.

Currently, the MPI/XPMEM driver supports the send/receive model, including single-copy mode. For more information regarding single-copy transfers, see the MPI_BUFFER_MAX environment variable. The MPI-2 one-sided feature has not yet been implemented using the XPMEM driver.

When using the single-copy mechanism, MPI processes on one partition can transfer data to MPI processes on a different partition, regardless of where the data resides. To achieve optimal performance using single-copy transfers, you should align the user send and receive data buffers on cacheline (128-byte) boundaries, as shown in the sketch following this section.

Note: We have occasionally experienced MPI program hangs when running with XPMEM across partitions on IRIX versions 6.5.13, 6.5.14, and 6.5.15. Due to this condition, we do not recommend running MPI across partitions using the XPMEM driver on any of these IRIX versions. IRIX 6.5.16 resolves these problems.

The XPMEM driver feature requires an Origin 3000 or Origin 300 system running in partitioned mode. New environment variables to support this feature are as follows:

MPI_USE_XPMEM
     Requires the MPI library to use the XPMEM driver as the interconnect when running across multiple hosts or running with multiple binaries. For more information on selecting an interconnect, see "Default Interconnect Selection" in these relnotes. If MPI is successful in configuring the XPMEM driver, the following message appears at startup when you use the -v (verbose) option:

     MPI: Using XPMEM NUMAlink Layer

MPI_XPMEM_VERBOSE
     Allows additional MPI initialization information to be printed in the standard output stream.

Note: This feature is disabled by default.
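As a hedged illustration of the cacheline-alignment advice above, the following C sketch over-allocates a buffer and rounds the pointer up to a 128-byte boundary. The buffer size and the manual-alignment approach are illustrative assumptions only; an aligned allocator such as memalign could be used instead where available.

    #include <stdio.h>
    #include <stdlib.h>

    #define CACHELINE 128   /* cacheline size cited for single-copy transfers */

    /* Return a pointer aligned to a 128-byte boundary inside a slightly
       larger malloc'ed block.  Keep the original pointer for free(). */
    static void *cacheline_align(void *raw)
    {
        unsigned long addr = (unsigned long)raw;
        return (void *)((addr + CACHELINE - 1) & ~(unsigned long)(CACHELINE - 1));
    }

    int main(void)
    {
        size_t nbytes = 1 << 20;                 /* illustrative buffer size */
        void *raw = malloc(nbytes + CACHELINE);  /* pad so we can round up   */
        double *sendbuf = (double *)cacheline_align(raw);

        printf("send buffer at %p (128-byte aligned)\n", (void *)sendbuf);

        /* ... use sendbuf as the send or receive buffer in MPI calls ... */

        free(raw);
        return 0;
    }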
4.1.5 Shared Memory Optimization with Myrinet

An interoperability problem between GM and MPI on IRIX systems has been fixed in the MPT 1.5.1 release. MPI applications using Myrinet (GM) can now also use globally accessible memory, which allows for faster communication between MPI processes on the same IRIX host. For details about using globally accessible memory in MPI applications, see the section on buffering in the MPI(1) man page.

4.1.6 Array Services Support (Linux systems only)

By default on Linux systems, mpirun uses a launcher that performs rsh calls. However, this launcher does not clean up the remaining processes if one or more of them terminates abnormally. With this release, users who are running version 1.5.1 of Linux Base Software (LBS) can enable Array Services to assist in launching MPI programs by setting the MPI_ARRAY_LAUNCH environment variable before invoking the mpirun command. Array Services helps not only with job launch but also with cleaning up stray MPI processes. Array Services-assisted launching of MPI programs will become the default in a future release.

If the mpirun command displays a message that Array Services is not running, check that it is installed correctly, as follows:

1. Determine whether the Array Services package is installed by invoking the rpm -q arraysvcs command. If it is installed, refer to the array_services, arrayd.conf, and arrayd.auth man pages for proper configuration.

2. Determine whether the daemon is running by invoking the ps -ef | grep arrayd command. If the daemon is not running, start it as root with the following command:

       /etc/rc.d/init.d/array start

   If the daemon is running but you have made configuration changes, restart it as root with the following command:

       /etc/rc.d/init.d/array restart

3. Determine whether your configuration contains any errors by invoking the /usr/etc/ascheck command. Remember that after making configuration changes, you must restart the arrayd daemon, as previously noted.

Note: On IRIX systems, mpirun uses Array Services by default.

4.2 New Features for MPT Release 1.5.2

This section describes the features that were new in the MPT 1.5.2 release.

4.2.1 GSN Driver for libst 1.0 (IRIX systems only)

Support for the GSN driver for libst 1.0 has been dropped with this release.

4.2.2 GSN Driver (IRIX systems only)

GSN (ST protocol) interconnect capabilities are now available for MPI programs running across multiple hosts. The minimum requirements for this feature are IRIX version 6.5.12, GSN version 3.0, and libst version 2.0.

The MPI GSN driver now supports configurations of more than 128 processes per host. This allows 256- or 512-processor hosts to use the GSN interconnect for message passing when running large MPI jobs across multiple hosts.

Environment variables to support this feature are as follows:

MPI_USE_GSN
     Requires the MPI library to use the GSN (ST protocol) OS bypass driver as the interconnect when running across multiple hosts or running with multiple binaries. For more information on selecting an interconnect, see "Default Interconnect Selection" in these relnotes. If MPI is successful in configuring the GSN driver, the following message appears at startup when using the -v (verbose) option:

     MPI: Using GSN (ST protocol) OS bypass

MPI_GSN_VERBOSE
     Allows additional MPI/GSN/ST initialization information to be printed in the standard output stream.

MPI_GSN_DEVS
     Sets the order for opening GSN adapters.

For details about these variables, see the MPI(1) man page.

4.2.3 USE MPI Statement

Support for the Fortran 90 USE MPI statement on Linux IA64 systems has been dropped with this release.

4.2.4 dplace Interoperability Enhancements

When running on hosts with IRIX 6.5.13 or higher installed, MPI and SHMEM programs now make use of new internal features in dplace for performance improvements.

4.2.5 Dissemination/Butterfly Barrier Enabled by Default

For MPI or SHMEM jobs using 64 or more processors on a given host, the dissemination/butterfly barrier is now enabled by default. To disable the dissemination/butterfly algorithm, set the MPI_BAR_COUNTER environment variable.

4.2.6 MPI_Get_address Functionality

A new man page, MPI_Get_address, has been added in this release. This function is an MPI-2 feature that gets the address of a location in memory and returns it as an MPI_Aint value. It replaces the MPI-1 function MPI_Address, which returned an int value, making MPI_Get_address better suited for the 64-bit ABI. It is currently available only with ABI 64 for Fortran.
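The following minimal C sketch, offered as a general illustration rather than release-specific code, shows the MPI-2 call in place of the deprecated MPI-1 interface: MPI_Get_address returns the address in an MPI_Aint, which is wide enough to hold a 64-bit address.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        double buf[4];
        MPI_Aint addr;

        MPI_Init(&argc, &argv);

        /* MPI-2 replacement for MPI_Address: the result is an MPI_Aint,
           so it is not truncated to the size of an int on 64-bit ABIs. */
        MPI_Get_address(buf, &addr);
        printf("buf is at MPI_Aint address %ld\n", (long)addr);

        MPI_Finalize();
        return 0;
    }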
4.2.7 MPI_REQUEST_MAX Limit Increased

The maximum limit for the MPI environment variable MPI_REQUEST_MAX has been increased from 16,384 to 65,536.

4.2.8 Short Message Latency Reduction

Latency for short messages has been reduced on Origin 3000 systems.

4.3 New Features for MPT Release 1.5.3

This section describes the features that were new in the MPT 1.5.3 release.

4.3.1 Enhanced Single-Copy Mode (IRIX systems only)

Single-copy send/receive mode has been enhanced for processes residing on the same host. The XPMEM driver provides this new functionality and increased bandwidth capability. You can enable the enhanced single-copy mode by setting the MPI_BUFFER_MAX and MPI_XPMEM_ON variables. For more information on single-copy mode, see the MPI_BUFFER_MAX environment variable description in the MPI(1) man page.

Previous releases of MPT offered the single-copy feature only if the sender data resided in the symmetric data segment, the symmetric heap, or the global heap. The MPT 1.5.3 release supports the single-copy feature for basic predefined MPI data types from any sender data location, including the stack and the private heap. Both the MPI_XPMEM_ON and MPI_BUFFER_MAX variables must be set to enable these enhancements; both are disabled by default.

If the following additional conditions are met, the block transfer engine (BTE) is invoked instead of bcopy to provide increased bandwidth:

* The send and receive buffers are cache-aligned.

* The amount of data to transfer is greater than or equal to the MPI_XPMEM_THRESHOLD value.

The MPI_XPMEM_THRESHOLD environment variable can be used to specify the minimum message size, in bytes, for which messages are transferred via the BTE, provided all of the above conditions are met. The default threshold is 8192 bytes.

Note: The XPMEM driver does not support checkpoint/restart at this time. If you enable these XPMEM enhancements, you will not be able to checkpoint and restart your MPI job. For more information about the XPMEM driver in general, see "XPMEM Driver" in these relnotes.

The single-copy enhancements require an Origin 3000 or Origin 300 server running IRIX release 6.5.15 or greater.

4.3.2 MPI_Alloc_mem and MPI_Free_mem Functionality

MPI-2 functionality has been added for the MPI_Alloc_mem and MPI_Free_mem functions. MPI_Alloc_mem allocates special memory and is used in conjunction with MPI_Free_mem, which frees the dynamically allocated memory. Currently, this functionality is implemented only on IRIX (on Linux systems, MPI_Alloc_mem returns MPI_ERR_NO_MEM). Additionally, while the C bindings work with both -n32 and -64 applications, the Fortran bindings work only with -64 applications.
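As a minimal sketch of the new calls (not release-specific code; the buffer size is arbitrary), the following C fragment allocates a buffer with MPI_Alloc_mem and releases it with MPI_Free_mem. The error-handler change is included only so that the MPI_ERR_NO_MEM case described above can be observed instead of aborting the job.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        void *buf;
        int rc;

        MPI_Init(&argc, &argv);

        /* Let MPI calls return error codes instead of aborting. */
        MPI_Errhandler_set(MPI_COMM_WORLD, MPI_ERRORS_RETURN);

        /* Request 1 MB of "special" memory from the MPI library. */
        rc = MPI_Alloc_mem((MPI_Aint)(1 << 20), MPI_INFO_NULL, &buf);
        if (rc == MPI_ERR_NO_MEM) {
            /* Expected on Linux in this release, where the feature
               is not implemented. */
            printf("MPI_Alloc_mem not available; fall back to malloc\n");
        } else {
            /* ... use buf as a message buffer ... */
            MPI_Free_mem(buf);
        }

        MPI_Finalize();
        return 0;
    }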
4.3.3 MPI_Transfer_of_handles Functions

The MPI-2 MPI_Transfer_of_handles functions have been added for the MPT 1.5.3 release. These functions convert a Fortran integer handle to the corresponding C handle, and vice versa. For example, the MPI_Comm_f2c function converts an integer Fortran communicator handle to an MPI_Comm C communicator handle. Conversely, MPI_Comm_c2f converts a C communicator handle to an integer Fortran communicator handle. Other functions for converting between Fortran integers and C handles are as follows:

MPI_Group_c2f / MPI_Group_f2c
MPI_Type_c2f / MPI_Type_f2c
MPI_Request_c2f / MPI_Request_f2c
MPI_Op_c2f / MPI_Op_f2c
MPI_Win_c2f / MPI_Win_f2c
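The following minimal C sketch (illustrative only) round-trips a communicator handle between its C and Fortran representations; in a real application the MPI_Fint value would typically be passed to or received from Fortran code.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Fint fcomm;   /* Fortran integer handle */
        MPI_Comm ccomm;   /* C handle               */
        int rank;

        MPI_Init(&argc, &argv);

        /* Convert the C communicator to a Fortran integer handle ... */
        fcomm = MPI_Comm_c2f(MPI_COMM_WORLD);

        /* ... and convert it back to a C handle. */
        ccomm = MPI_Comm_f2c(fcomm);

        MPI_Comm_rank(ccomm, &rank);
        printf("rank %d: handle round trip succeeded\n", rank);

        MPI_Finalize();
        return 0;
    }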
4.4 New Features for MPT Release 1.6

This section describes the features that are new in the MPT 1.6 release.

4.4.1 PVM Unbundled from MPT 1.6

SGI has unbundled PVM from the MPT product into a stand-alone package available through the SGI software download page. PVM 3.3 is now in a courtesy mode of support. We do not plan future PVM bugfix releases, but we will provide a downloadable SGI version of PVM to customers who wish to continue using this product. As an alternative, customers can obtain versions of PVM from the http://www.epm.ornl.gov/pvm/ website.

In the future, SGI will continue to provide efficient and feature-rich implementations of the MPI and SHMEM message-passing APIs. However, customer interest in further optimization of the SGI PVM library is decreasing.

In prior MPT releases, installation of the PVM software was required. In the MPT 1.5.3 release package, we permitted PVM to be uninstalled, which allowed customers to try out the effect of removing the PVM software from a system before upgrading to MPT 1.6.

4.4.2 Default Interconnect Selection

In previous SGI MPI releases, the software attempted to find HIPPI 800 hardware on the hosts when MPI was launched across multiple IRIX hosts. Other interconnects, such as GSN, Myrinet, and XPMEM, had to be selected manually via an environment variable.

Starting with the MPT 1.6 release, this search algorithm has been significantly modified. By default, if MPI is being run across multiple hosts, or if multiple binaries are specified on the mpirun startup command, the software searches for interconnects in the following order (for IRIX systems):

1. XPMEM (NUMAlink)
2. GSN
3. MYRINET
4. HIPPI 800
5. TCP/IP

The only supported interconnect on Linux systems at this time is TCP/IP.

MPI uses the first interconnect it can detect and configure correctly. Only one interconnect is configured for the entire MPI job, with the exception of XPMEM. If XPMEM is found on some hosts but not on others, one additional interconnect is selected for the hosts that are not on the NUMAlink cluster.

You can specify a mandatory interconnect by setting one of the following new environment variables. These variables are assessed in the order shown:

1. MPI_USE_XPMEM
2. MPI_USE_GSN
3. MPI_USE_GM
4. MPI_USE_HIPPI
5. MPI_USE_TCP

If MPI determines that the requested interconnect is unavailable or not configured properly, an error message is printed to stdout and the job is terminated. For a mandatory interconnect to be used, all of the hosts listed on the mpirun command line must be connected via this device; if they are not, the job is terminated. XPMEM is an exception to this rule, however. If MPI_USE_XPMEM is set, one additional interconnect can be selected via the MPI_USE variables. Messaging between the partitioned hosts uses the XPMEM driver, while messaging between nonpartitioned hosts uses the second interconnect. If a second interconnect is required but not selected by the user, MPI chooses the interconnect to use based on the default hierarchy.

If the global -v verbose option is used on the mpirun command line, a message is printed to stdout indicating the multihost interconnect used for the job.

The following interconnect selection environment variables have been deprecated in the MPT 1.6 release: MPI_GSN_ON, MPI_GM_ON, and MPI_BYPASS_OFF. If any of these variables are set, MPI prints a warning message to stdout, and the previous meanings of these variables are ignored.

4.4.3 MPI-2 Process Manager Interface Functionality

MPI-2 functionality has been added for the MPI_Comm_spawn and related functions and attributes described in sections 5.3, 5.5.1, and 5.5.3 of the MPI-2 standard. In the MPT 1.6 release, this functionality is available only for MPI jobs running on a single IRIX host. For further details, see the mpirun(1) and MPI(1) man pages.
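The following C sketch shows the general shape of an MPI_Comm_spawn call; the "worker" executable name and the spawn count of 4 are placeholders, and on MPT 1.6 the parent and spawned processes must all run on the same IRIX host, as noted above.

    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Comm intercomm;

        MPI_Init(&argc, &argv);

        /* Spawn 4 copies of a hypothetical "worker" binary; the result is
           an intercommunicator connecting the parent job to the children. */
        MPI_Comm_spawn("worker", MPI_ARGV_NULL, 4, MPI_INFO_NULL,
                       0 /* root */, MPI_COMM_WORLD,
                       &intercomm, MPI_ERRCODES_IGNORE);

        /* ... communicate with the spawned processes via intercomm ... */

        MPI_Finalize();
        return 0;
    }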
4.4.4 MPI-2 Replacements for Deprecated Datatype Functions

The MPI-2 standard defines a set of functions that are to be used as replacements for several deprecated functions. These new functions take advantage of the type INTEGER(KIND=MPI_ADDRESS_KIND), so that addresses can be a full 64 bits wide and are not limited to the size of a default integer.

For C, the following new functions have been added for ABI 32 and ABI 64:

MPI_Type_create_hindexed
MPI_Type_create_hvector
MPI_Type_create_struct

The Fortran versions of these routines have been added for ABI 64 only. In addition, for Fortran, the MPI-2 functions MPI_TYPE_GET_CONTENTS and MPI_TYPE_GET_ENVELOPE have been added for ABI 64.

4.4.5 Improved MPI/OpenMP Interoperability (IRIX only)

In this release, improved placement algorithms are available for use with MPI/OpenMP hybrid applications. This feature is available on Origin 300 and Origin 3000 series servers. For details about this feature, see the MPI(1) man page.

4.4.6 Reduced Startup Time Overhead

Changes have been made to reduce the startup time overhead observed for MPI jobs involving more than 128 MPI processes per host.

4.4.7 Status Value Now Used for Certain I/O Operations

In this release, a change was made to allow MPI_Get_elements() and MPI_Get_count() to be used on the status variable passed into read, write, MPIO_Test(), and MPIO_Wait() calls to determine the transfer length of the associated I/O operation. Previously, the status value was not filled in for such calls.
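As an illustration of this behavior (a hedged sketch; the file name, element count, and datatype are placeholders), the following C fragment reads from a file with MPI-IO and then queries the returned status to find out how many elements were actually transferred.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_File fh;
        MPI_Status status;
        int buf[1000];
        int nread;

        MPI_Init(&argc, &argv);

        MPI_File_open(MPI_COMM_WORLD, "datafile", MPI_MODE_RDONLY,
                      MPI_INFO_NULL, &fh);

        /* The read fills in the status variable ... */
        MPI_File_read(fh, buf, 1000, MPI_INT, &status);

        /* ... so MPI_Get_count (or MPI_Get_elements) can report how many
           items were actually transferred. */
        MPI_Get_count(&status, MPI_INT, &nread);
        printf("read %d MPI_INT elements\n", nread);

        MPI_File_close(&fh);
        MPI_Finalize();
        return 0;
    }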