This document (see Publishing this document) is intended for Savannah administrators, not Savannah users. Savannah is a SourceForge clone based on the SourceForge-2.0 software. It is dedicated to the GNU projects.
Because of the highly specific nature of the software, Savannah is a fork of the SourceForge-2.0 software; attempting to make it modular and configurable would be a waste of time. The whole Savannah software is available from CVS and is managed by the Savannah project. The ChangeLog explains the modifications made to the original code.
Savannah currently provides the CVS frontend. Check the Task List for details on planned developments.
Setting up Savannah is not an easy task because it has to integrate existing habits and projects without breaking anything. However, the SourceForge Installation Guide by Guillaume Morin helps a lot in understanding the software.
Savannah is installed on the machine subversions.gnu.org. The root of the installation is /subversions/sourceforge. All the software that is not system-wide and is needed to run Savannah is installed in this directory. The structure of this directory is similar to FHS-2.1. In the following table the path names are relative to the installation root. All directories covered by the SourceForge Installation Guide are omitted.
tmp
src/savannah
src/savannah/www
src/savannah/gnuscripts
The whole Savannah software is available from CVS and is managed by the Savannah project.
In order to install changes committed to the savannah project CVS tree, proceed as follows:
login subversions
su -
export CVS_RSH=ssh
cd /subversions/sourceforge/src/savannah
cvs -q update
At a given time only one person can be in charge of approving or rejecting projects submitted to Savannah. The /admin/ interface is not fit for concurrent access. The user assigned to this task is the one in charge. It is his responsibility to find someone else before leaving :-)
The current administrator task is assigned to the person currently in charge of approving the projects submitted to Savannah.
The password of the admin user is known by Loic Dachary, Guillaume Morin, Hugo Gayosso and Jaime E. Villate.
Each project registered on Savannah that is not part of the GNU project is granted a publicly available file download area at
http://freesoftware.fsf.org/download/projectname/
Each project member can upload files to this directory by using scp or rsync over ssh to the following location:
freesoftware.fsf.org:/upload/projectname/
Sample commands for doing this are:
#
# Copy an entire tree verbatim; this may imply removing files on
# freesoftware.fsf.org.
#
rsync --delete -av --rsh=ssh . freesoftware.fsf.org:/upload/projectname
#
# Copy a single file with scp
#
scp -q file.tar.gz freesoftware.fsf.org:/upload/projectname
#
# Copy a single file with rsync over ssh
#
rsync --rsh=ssh file.tar.gz freesoftware.fsf.org:/upload/projectname
A reminder is included in the Project Admin page of each project.
For each project registered on Savannah there may be two CVS repositories: one to store the sources of the project and one to store its web pages. The sources repository is in /subversions/cvs/software and the web repository is reached through /webcvs; the /cvsroot symbolic link points to /subversions/cvs/software.
Existing projects that migrate to Savannah may want their CVS repository to be transferred to subversions. Time is of the essence for such an operation, since the project contributors want to work on the new repository on subversions and stop using the old one. When the author asks cvs-hackers@gnu.org, ask him to send the tarball by mail or to send a URL from which it can be downloaded. Make an appointment with him and guarantee that the repository will be untarred on subversions within 24 hours at most. The project contributor must first create a project on subversions. When you have the tarball, untar it at /cvsroot/project. Make sure it does not contain a CVSROOT that would override the existing CVSROOT; if it does, manually copy only the history and val-tags files. Make sure the imported repository ends up under /cvsroot/project/project and does not pollute the root of the repository.
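As a rough illustration of the import step itself (the tarball name and its internal layout are hypothetical; adapt the checks to the actual archive):

# Inspect the archive before extracting: where does it unpack, and does it
# carry its own CVSROOT?
tar tzf /tmp/project-cvsrepo.tar.gz | head
tar tzf /tmp/project-cvsrepo.tar.gz | grep CVSROOT
# Extract so that the modules end up under /cvsroot/project/project.
cd /cvsroot/project
tar xzf /tmp/project-cvsrepo.tar.gz
# If the archive contained a CVSROOT, keep the existing one and copy only
# the history and val-tags files by hand, e.g.:
#   cp imported-CVSROOT/history imported-CVSROOT/val-tags CVSROOT/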
When a project has a license that is not website, a source repository is created under /subversions/cvs/software/project with a private CVSROOT that only contains anoncvs. The developers of the project have access to the CVSROOT directory.
The group project is created to grant write access to the repository to all the members of the project.
When a Savannah project is assigned the website license, it only has a portion of the webcvs repository and no source CVS repository. If the html_cvs field for a given Savannah project is empty, it is not associated with a part of the webcvs repository.
Access to the CVSROOT directory allows project developers to add commit notification by doing the following, replacing project with the name of their project:
cvs -d subversions.gnu.org:/cvsroot/project co CVSROOT

In CVSROOT/commitinfo:
^project /usr/local/bin/commit_prep -T project -r

In CVSROOT/loginfo:
^project /usr/local/bin/log_accum -T project -C -m project-commit@gnu.org -s %{sVv}
The email address must exist; it will not be automatically generated.
For compatibility with the cvs setup before Savannah was introduced, /subversions/cvs/common contains repositories that existed before Savannah. When a project is registered in Savannah, a symbolic link is created (/subversions/cvs/software/project/project) that points to the already existing /subversions/cvs/common/project directory.
The /cvs symbolic link points to /subversions/cvs/common so that people already using it to access their repositories can continue to do so. Before Savannah existed a pserver access was available and Savannah continues to maintain it for these projects, updating the CVSROOT/passwd files with user/password pairs that are in the Savannah database.
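For reference, the entries kept in sync follow the usual cvs pserver CVSROOT/passwd format, username:crypted-password, with an optional third field naming the system user cvs runs as. The entries below are made up and only illustrate the layout:

jrandom:xs4R2tUavVFAw:anoncvs
jdoe:Yp1GgD0hJQxXk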
The sf_backup script builds tarballs for each repository in the /subversions/cvs/software directory. Those tarballs are stored in the /subversions/cvs/software.backups directory and made available at the savannah.gnu.org:/cvs.backups URL. The tarballs are regenerated daily, but only if at least one file in the repository is more recent than the tarball.
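A minimal sketch of the freshness check this implies (not the actual sf_backup code; the per-repository tarball naming is an assumption):

cd /subversions/cvs/software
for repo in */; do
    repo=${repo%/}
    tarball=/subversions/cvs/software.backups/$repo.tar.gz
    # Rebuild the tarball only if it is missing or some file is newer than it.
    if [ ! -f "$tarball" ] || [ -n "$(find "$repo" -newer "$tarball" | head -n 1)" ]; then
        tar czf "$tarball" "$repo"
    fi
done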
When a project has an html_cvs field that is not empty in the groups table, a web repository is created in /webcvs/html_cvs. By default the html_cvs field has the value /software/project/, but it may be edited via savannah.gnu.org/admin/. See the gnujobs, greve and bravegw projects for examples.
If a project is tagged as non-gnu (gnu field in table groups set to N) it is given a space in the /non-gnu/project directory instead.
When a Savannah project is assigned the website license, it only has a portion of the webcvs repository and no source CVS repository. If the html_cvs field for a given Savannah project is empty, it is not associated with a part of the webcvs repository.
The group webproject is created to grant write access to the repository to all the members of the project.
The whole www.gnu.org web was imported into /webcvs.
When a project is registered on Savannah and a directory already exists for it in the repository (either .../software/project or the value of the html_cvs field), a chgrp -R webproject is done on this directory to grant the members of the project write access to this portion of the web repository, and only this one.
The www project in Savannah is treated in a special way. All the members of the www project have access to the whole repository in /webcvs. It means that they are always included in every webproject created.
Since CVS is not able to handle symbolic links, a simple mechanism has been implemented on the machine hosting www.gnu.org to allow webmasters to control symbolic links from the CVS tree.
The special file .symlinks contains a list of file name pairs, one per line. For instance:
foo.html index.html
bar.html other.html
is a valid .symlinks file. Every night a script reads all the .symlinks files, prepends ln -s to each line and executes the result. Well, in reality it is not that simple, but you get the idea. A .symlinks file can only be used to control symbolic links in the directory where it is located. File names containing / are ignored.
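A simplified sketch of what that nightly pass amounts to (the real script does more checking; the web tree path is the gnudist one mentioned just below and is otherwise an assumption):

# In every directory of the web tree that has a .symlinks file, recreate the
# listed links, skipping any name that contains a slash.
find /home/www/html -name .symlinks | while read -r f; do
    dir=$(dirname "$f")
    while read -r target link; do
        [ -z "$target" ] && continue
        case "$target$link" in */*) continue ;; esac
        ( cd "$dir" && ln -sf "$target" "$link" )
    done < "$f"
done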
The /webcvs/CVSROOT/loginfo file contains a trigger that updates the gnudist.gnu.org:/home/www/html directory whenever a commit is done. There is a single CVSROOT for all the projects that have a web repository.
The /subversions/sourceforge/src/savannah/gnuscripts/sf_www_sync.c program was derived from the /usr/local/bin/webcvs.c program. It is called on each commit to keep the www.gnu.org web site in sync with the CVS repository.
The idea is to run a cvs update -l (to prevent recursion) in the directory where the commit was done. Since the command is called once for each directory where a commit did something, there is no need for recursion.
The %{s} argument given in the loginfo file is a single argument that lists the directory and all the files involved. As a special case, if the directory was added, the file list is replaced by '- New directory'. This is fragile, since adding files named -, New and directory would produce the same effect, but that is unlikely.
There are three cases to take into account:
In order to prevent a security compromise, the directory name is quoted.
The traces of all the updates are kept in /var/log/sf_sync_www.log.
The special project www has write access to the whole /webcvs repository. It is possible to create projects that limit the write access of their members to a subdirectory of the /webcvs repository only. For instance, the bravegw Savannah project only gives write access to the /webcvs/brave-gnu-world part of the repository.
A project bound to a specific subdirectory grants write access to the whole tree under this subdirectory. There is no way, for instance, to grant write access on /webcvs/thispart to group B and write access on /webcvs/thispart/subdir to group A. If you do this, group B wins and gets write access to /webcvs/thispart recursively, and group A gets access to nothing. If you see a way to overcome this limitation, let us know.
The sf_www script generates the map of Savannah projects that is published at www.gnu.org. It writes the file in /subversions/sourceforge/src/server/standards and commits it. The server/standards directory is a read-write checkout of the www.gnu.org web CVS. The sf_www script is run once a day from the crontab.
A more webmaster-oriented document explains the organisation of the www.gnu.org CVS tree and the rationale for its usage.
Savannah uses MySQL and the sourceforge database. The root user has a ~/.my.cnf file that defines the user and password, so it is not necessary to specify them on the command line.
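In practice root can therefore query the database directly; for example, using fields mentioned elsewhere in this document:

# No --user/--password options needed, ~/.my.cnf supplies them.
mysql sourceforge -e "SELECT html_cvs FROM groups WHERE gnu = 'N' LIMIT 5"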
A read-only access to the sourceforge database is granted to the following machines:
fr.fsf.org
The sf_xml script builds a daily XML dump of the public information from the Savannah database into the savannah.gnu.org/savannah.xml file.
In addition, a dump containing information that users may not want published, such as e-mail addresses and ssh keys, is built in /subversions/sourceforge/dumps/savannah.xml. The command line sf_xml --private is used to generate this dump.
A set of XSLT files can be written in the /subversions/sourceforge/dumps directory to build custom files from the savannah.xml file located in the same directory. This is used, for instance, for account creation information files. If an XSLT file is created (a.xsl for instance), the Makefile must be updated to add the a.txt file to the list of dependencies of the all goal. For instance:
all: accounts-fsffr.txt accounts.txt myown.txt
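The processing behind such a rule presumably boils down to an XSLT run like the following (xsltproc is an assumption, the Makefile may use another processor; a.xsl is the hypothetical stylesheet from the example above):

cd /subversions/sourceforge/dumps
xsltproc a.xsl savannah.xml > a.txt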
The generation of both savannah.xml files and the XSLT processing is run daily from the crontab.
The MySQL database named sourceforge that holds all the information used by Savannah is dumped daily. See http://savannah.gnu.org/projects/sysadmin/ to find out where the dumps are stored. The dumps are compressed and rotated daily with a maximum of 30, as described in /subversions/sourceforge/dumps/logrotate.conf. The sf_backup script takes care of all this and is called from the crontab.
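The daily dump amounts to something like the following sketch (not the actual sf_backup code; the dump file name is an assumption):

# Dump the whole sourceforge database; credentials come from ~/.my.cnf.
mysqldump sourceforge > /subversions/sourceforge/dumps/sourceforge.dump
# Compression and the 30-dump rotation are then handled by logrotate.
logrotate /subversions/sourceforge/dumps/logrotate.conf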
The tables people_skill and people_skill_level are loaded from the skill database maintained by CJN (http://cjn.sourceforge.net/). The script /subversions/sourceforge/src/savannah/gnuscripts/sf_skill loads the XML skill files from CJN and replaces the content of the tables in the sourceforge database.
If some proprietary software shows up in the skill list, add it to the %ignore table in the sf_skill script and re-run it.
cd /subversions/sourceforge/src/savannah/gnuscripts
edit sf_skill
sf_skill
cvs commit -m 'Ignore proprietary software xxxx'
It is convenient to use Savannah to manage accounts on a machine that is completely unrelated to Savannah itself. For instance, the fsffr project lists all the users who should have an account on the france.fsfeurope.org machine.
A cron job on the remote machine can fetch the list of users from Savannah and update the password files accordingly. Adding a user to the machine can then be done by adding the user as a developer of the project.
A guide to installing the savannahusers script on the target machine is available in the savannahusers manual page. This chapter deals with the necessary actions on the savannah.gnu.org machine, not on the target machine.
In order for remote machines to take advantage of Savannah for account management, a list of all Savannah users is dumped daily, both in XML format and text format (XML Dump).
The access to the user information is restricted and has to be done in the following way:
rsync --rsh=ssh xmlbase@savannah.gnu.org: .
The user xmlbase on savannah.gnu.org is only used for this purpose. The ssh public key of the user doing the rsync on the remote machine must be registered in the authorized_keys file of xmlbase. That user will only be allowed to access a single file. You do not need to give the command you want to execute; this information is already in the authorized_keys file:
command="rsync --server --sender . /subversions/sourceforge/dumps/savannah.xml" 1024 35 1325...
Two files may be accessed in this way:
savannah.xml
accounts.txt
loic
Loic Dachary
loic@gnu.org
1024 35 14482406825620879676223610524821306708503540742800...

rodolphe
Rodolphe Quiedeville
rq@lolix.org
1024 35 13773675641076158303518150007131532895996406770957...
1024 35 13392800240284295490871092259529193810644583890958...
Each account block is separated by an empty line. The first line is the unique user name. The second line is the full name of the user. The third line is the e-mail address of the user. The following lines are the content of the authorized_keys file.
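To illustrate the format, here is a small awk sketch (not the real savannahusers script) that prints, for each block, the user name, the e-mail address and the number of keys:

awk '
BEGIN { RS = ""; FS = "\n" }   # blank-line separated blocks, one field per line
{ printf "%s <%s>: %d key(s)\n", $1, $3, NF - 3 }
' accounts.txt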
It is possible to generate files specific to a given target machine quite easily. For instance, the accounts-fsffr.txt file is a selection of the users who are members of the fsffr project. The Makefile in the dumps directory is responsible for the creation of these files. It uses XSLT to select the relevant information from the savannah.xml dump.
Address all questions and requests to savannah-hackers@gnu.org and log support requests on the web.
Savannah features a way to link a project with its mailing lists, which are handled by Mailman on fencepost. The purpose of this section is to explain the link between Savannah and Mailman.
Some details regarding the setup of Mailman can be found in the sysadmin.texi file at http://savannah.gnu.org/projects/sysadmin/.
Before Savannah was available, some mailing lists had already been created, and some of the GNU packages have been migrated to Savannah since then. From the list administration page of each Savannah project, it is possible to make the link between these packages and their mailing lists (the file is savannah.gnu.org/www/mail/admin/index.php). The administrator of the Savannah project has to fill in a form with the names of its mailing lists and the admin password of the list.
When a Savannah project administrator chooses to add a mailing list for his or her project, an entry is added to the Savannah database. This information is dumped by the sf_xml script. A cron job on fencepost.gnu.org reads that dump and finds which lists must be created. It then launches the newlist binary and updates the alias file.
This cron job that creates mailing lists can be found in gnuscripts/Mailman/mailing_lists_create.pl in the CVS tree (see http://savannah.gnu.org/projects/savannah/ to get the source tree).
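In outline, the job boils down to something like the following sketch (not the real mailing_lists_create.pl; extract_pending_lists, the owner address and the password generation are hypothetical placeholders):

# Create every list found in the Savannah dump that does not exist yet.
for list in $(extract_pending_lists savannah.xml); do
    newlist "$list" "$list-owner@gnu.org" "$(generate_password)"
    # ...followed by an update of the mail alias file for the new list.
done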
The parameters needed to bind an existing mailing list to a Savannah project (list name and password) are checked by a CGI script installed on fencepost.gnu.org. The related files are in the gnuscripts/Mailman directory of the Savannah sources (see http://savannah.gnu.org/projects/savannah/ to get them). They can be installed on fencepost via a Makefile (details are in sysadmin.texi).
The mailing lists of the Savannah projects that are not part of the GNU project are hosted under the domain freesoftware.fsf.org. This domain and the corresponding Mailman installation are hosted on the fencepost.gnu.org machine.
Mailman was installed in /com/mailer/freesoftware.
Savannah will try to send mail to users under various circumstances (bug report notifications, account creation, etc.). In some cases it will use the real mail address of the user, in others it will use user@savannah.gnu.org. In order for the user@savannah.gnu.org address to work properly for outgoing mail, the /etc/email-addresses file is updated automatically every 5 minutes with the following command:
sf_aliases
The user@savannah.gnu.org address can never be used to receive mail, for the good reason that savannah.gnu.org does not listen on the SMTP port.
People who have a simple name@gnu.org alias but no Kerberos account cannot create an account on Savannah. When they ask savannah-hackers@gnu.org to unlock the account name, tell them to create an account using a fake user name and to send this user name to savannah-hackers@gnu.org. When you receive that user name, substitute the fake login name with the desired one:
mysql -e "update user set user_name = 'desired' where user_name = 'fake'" sourceforge
You must be root to run this script. You are advised to run it in /subversions/sourceforge/tmp, although this is not mandatory.
The sf_migrate script creates a Savannah project for an existing project in the /subversions/cvs/common directory. It is done in three steps:
--add
When explaining the situation to a user added to Savannah in this way, one could say it like this. If you have a Kerberos account on gnu.org, use the same login and password on Savannah and change the password immediately afterwards: it will not change your Kerberos password, just the Savannah password. If you only have a pserver account, use the same login and password on Savannah. If you have both, use the Kerberos account login and password. If you have neither and access CVS using SSH public keys, ask cvs-hackers@gnu.org to give you a password. This last case requires human interaction to prevent someone from stealing your account name.
--bind
--mail
When a user with SSH access through a public key was added by sf_migrate, she/he will be instructed to ask cvs-hackers@gnu.org for a password. The sf_pass script can be used to set her/his password. The user must send a mail requesting the password and containing the encrypted password. Instruct the user to generate the encrypted password using the following command:
perl -e 'print crypt("mypassword", join "", (".", "/", 0..9, "A".."Z", "a".."z")[rand 64, rand 64])'
When the user sends the encrypted password, set it using:
sf_pass --set thename cryptpass | mysql sourceforge
After 24 hours, check that the user has logged in and lock the account if that is not the case. This is to prevent obvious holes.
sf_pass --unset thename | mysql sourceforge
You must be root to run this script and must export CVS_RSH=ssh. You are advised to run it in /subversions/sourceforge/tmp, although this is not mandatory.
It is run from the crontab and the output is logged in /var/log/sf_cvs.log.
The sf_cvs script generates a shell script that synchronizes the system files with the state of the Savannah database (sourceforge). This script only generates lines if something needs to be done. Once the resulting script has been executed, another run must not display any action, unless the database was modified in the meantime; a by-hand run is sketched after the task list below.
It performs the following tasks, in this order.
Add new projects
Update existing projects
Add missing users
Remove users
Update existing users
Update the groups of anoncvs
Update the CVS password file
Create download area for non-GNU projects
Update xinetd.conf
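A by-hand run therefore looks roughly like this (the sync.sh name is arbitrary):

cd /subversions/sourceforge/tmp
export CVS_RSH=ssh
sf_cvs > sync.sh     # generate the pending actions
sh -x sync.sh        # apply them
sf_cvs               # a second run should now produce no output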
The HTML version of this document is published in two places:
Savannah Administration Guide and Savannah Administration Guide. The source is stored in the subversions.gnu.org:/cvsroot/savannah CVS repository, in the $Source: /webcvs/server/standards/README.savannah.html,v $ file (and/or the subversions.gnu.org:/cvs co gnudocs repository? - FIXME). To facilitate the publication process you can edit it in the subversions.gnu.org:/subversions/sourceforge/src/gnudocs directory and then issue a
make publish
The publish goal assumes that the Savannah document root is in ../savannah/www and that a read-write checkout of the www.gnu.org/server/standards directory is in ../server/standards. It will format the document to HTML and commit the changes to the repository.
The SSL certificate for savannah.gnu.org was generated in /etc/apache-ssl/. Check the README file there for a log of the commands. There have been many discussions regarding a root certificate for GNU and the use of a PKI; at some point the Savannah certificate will be generated using a proper root certificate.
Statistics for savannah.gnu.org web usage are generated using webalizer.
The sf_stat script does the job on a daily basis (called from the crontab), using the webalizer.conf in the /subversions/sourceforge/src/savannah/www/webalizer directory and moving the generated report into the same directory.
The sf_stat script is also called before rotating logs, as specified in the /etc/apache-ssl/cron.conf script.
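The underlying invocation is presumably along these lines (the exact flags used by sf_stat are an assumption; the log path is the one listed in the Logs section):

webalizer -c /subversions/sourceforge/src/savannah/www/webalizer/webalizer.conf \
    /var/log/apache-ssl/access.log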
The Savannah crontab jobs are in /etc/cron.d/savannah. Every cron command output is sent to savannah-hackers@gnu.org.
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/X11R6/bin:/subversions/sourceforge/bin:/usr/local/mysql/bin
MAILTO=savannah-hackers@gnu.org
#
# Build /etc/aliases,
# http://savannah.gnu.org/savannah.html#Mails%20and%20aliases
#
*/5 * * * * root sf_aliases
#
# Build www map,
# http://savannah.gnu.org/savannah.html#Web%20CVS%20and%20Projects
#
10 4 * * * root sf_www
#
# Sync projects with CVS related system files,
# http://savannah.gnu.org/savannah.html#Users%20and%20CVS%20synchronization
#
17 * * * * root cd /subversions/sourceforge/tmp ; sf_cvs | ( date ; sh -x ) >> /var/log/sf_cvs.log 2>&1
#
# Daily backups of the Savannah database,
# http://savannah.gnu.org/savannah.html#Database%20Backups
#
7 5 * * * root sf_backup
#
# Daily XML dump of Savannah public information
# http://savannah.gnu.org/savannah.html#XML%20Dump
#
7 6 * * * root sf_xml > /subversions/sourceforge/src/savannah/www/savannah.xml
14 6 * * * root sf_xml --private > /subversions/sourceforge/dumps/savannah.xml
30 6 * * * root make -s -C /subversions/sourceforge/dumps all
#
# Daily web server statistics
# http://savannah.gnu.org/savannah.html#Web%20Usage%20Statistics
#
7 7 * * * root sf_stat
The logs of Savannah are in /var/log. They are rotated by the /etc/logrotate.d/savannah configuration file of logrotate.
/var/log/sf_cvs.log
/var/log/sf_sync_www.log
    These two files must be read-write for everyone.
/var/log/apache-ssl/access.log
All software that is not system-wide but only used for the purpose of Savannah must be installed with the /subversions/sourceforge prefix.
The MySQL installation is an exception that must be fixed: it is installed with the /usr/local/mysql prefix. It was not installed from the Debian package because I (loic@gnu.org) was not able to fix the MySQL-3.23 package to make it work on potato.
The large number of groups a user can belong to (more than 32) requires modifying some basic programs (namely useradd and usermod).
Gordon Matzigkeit <gord@fig.org> modified /usr/local/src/cvs-1.10.8/src/server.c to overcome the limit built into glibc.
Here is the patch applied to /usr/local/src/shadow-19990827. The modified usermod and useradd have been installed in /subversions/sourceforge/bin.
*** ./debian/rules.~1~ Fri Feb 9 02:05:06 2001 --- ./debian/rules Fri Feb 9 02:05:41 2001 *************** *** 38,44 **** ifneq ($(DEB_HOST_GNU_SYSTEM),gnu) include debian/scripts/login.mk package-list += binary-login ! config_options += --with-libpam control_defs += -DMODDEP="(>= 0.72-5)" endif --- 38,44 ---- ifneq ($(DEB_HOST_GNU_SYSTEM),gnu) include debian/scripts/login.mk package-list += binary-login ! # config_options += --with-libpam control_defs += -DMODDEP="(>= 0.72-5)" endif *** ./build-tree/shadow-19990827/libmisc/addgrps.c.~1~ Mon Dec 28 12:34:41 1998 --- ./build-tree/shadow-19990827/libmisc/addgrps.c Fri Feb 9 03:04:47 2001 *************** *** 20,25 **** --- 20,28 ---- * already there. Warning: uses strtok(). */ + #undef NGROUPS_MAX + #define NGROUPS_MAX 512 + int add_groups(const char *list) { *** ./build-tree/shadow-19990827/src/usermod.c.~1~ Fri Jul 9 09:27:38 1999 --- ./build-tree/shadow-19990827/src/usermod.c Fri Feb 9 03:05:52 2001 *************** *** 74,79 **** --- 74,82 ---- #define VALID(s) (strcspn (s, ":\n") == strlen (s)) + #undef NGROUPS_MAX + #define NGROUPS_MAX 512 + static char *user_name; static char *user_newname; static char *user_pass; *** ./build-tree/shadow-19990827/src/groups.c.~1~ Mon Jun 7 09:40:45 1999 --- ./build-tree/shadow-19990827/src/groups.c Fri Feb 9 03:15:54 2001 *************** *** 42,47 **** --- 42,50 ---- static void print_groups P_((const char *)); int main P_((int, char **)); + #undef NGROUPS_MAX + #define NGROUPS_MAX 512 + /* * print_groups - print the groups which the named user is a member of * *** ./build-tree/shadow-19990827/src/id.c.~1~ Mon Jun 7 09:40:45 1999 --- ./build-tree/shadow-19990827/src/id.c Fri Feb 9 03:16:34 2001 *************** *** 50,55 **** --- 50,58 ---- static void usage P_((void)); int main P_((int, char **)); + #undef NGROUPS_MAX + #define NGROUPS_MAX 512 + static void usage(void) { *** ./build-tree/shadow-19990827/src/useradd.c.~1~ Fri Feb 9 02:06:01 2001 --- ./build-tree/shadow-19990827/src/useradd.c Fri Feb 9 03:28:52 2001 *************** *** 53,58 **** --- 53,61 ---- #endif #include "faillog.h" + #undef NGROUPS_MAX + #define NGROUPS_MAX 512 + #ifndef SKEL_DIR #define SKEL_DIR "/etc/skel" #endif *** ./build-tree/shadow-19990827/src/newgrp.c.~1~ Fri Feb 9 02:06:00 2001 --- ./build-tree/shadow-19990827/src/newgrp.c Fri Feb 9 03:29:10 2001 *************** *** 49,54 **** --- 49,57 ---- static GETGROUPS_T *grouplist; #endif + #undef NGROUPS_MAX + #define NGROUPS_MAX 512 + static char *Prog; static int is_newgrp;
The sshd daemon has been rebuilt with the following patch so that CVS ssh operations get the proper set of groups. The sources are in /usr/local/src/openssh-1.2.3/ and the corresponding Debian package is at /usr/local/src/ssh_1.2.3-9.2loic_i386.deb. The package was put on hold using dselect to prevent accidental upgrades. Note that this patch may have a hideous impact on users who have real accounts and use ssh, since most of the commands that deal with groups have not been recompiled to handle more than the limit of 32 groups; for instance, the id command will dump core. Here is the patch applied to the distribution:
*** sshd.c.~1~ Fri Mar 17 04:40:18 2000 --- sshd.c Tue Feb 13 06:32:17 2001 *************** *** 147,152 **** --- 151,240 ---- const char *display, const char *auth_proto, const char *auth_data, const char *ttyname); + #ifdef AUTH_SERVER_SUPPORT + #ifdef HAVE_GETSPNAM + #include <shadow.h> + #endif + #endif /* AUTH_SERVER_SUPPORT */ + + /* The GNU C Library currently has a compile-time limit on the number of + groups a user may be a part of, even if the underlying kernel has been + fixed, and so we define our own initgroups. */ + #include <grp.h> + static int + xinitgroups (char *user, gid_t gid) + { + struct group *grp; + gid_t *buf; + int buflen, ngroups; + + /* Initialise the list with the specified GID. */ + ngroups = 0; + buflen = 16; + buf = malloc (buflen * sizeof (*buf)); + buf[ngroups ++] = gid; + + setgrent (); + while ((grp = getgrent ())) + { + /* Scan the member list for our user. */ + char **p = grp->gr_mem; + while (*p && strcmp (*p, user)) + p ++; + + if (*p) + { + /* We found the user in this group. */ + if (ngroups == buflen) + { + /* Enlarge the group list. */ + buflen *= 2; + buf = realloc (buf, buflen * sizeof (*buf)); + } + + /* Add the group id to our list. */ + buf[ngroups ++] = grp->gr_gid; + } + } + endgrent (); + + /* Return whatever setgroups says. */ + buflen = setgroups (ngroups, buf); + free (buf); + return buflen; + } + #define initgroups xinitgroups + + /* This worked fine, and was adopted into glibc, until setgroups got a + similar limitation, so we override it as well. */ + #include <linux/posix_types.h> + #include <sys/syscall.h> + #include <errno.h> + + int + setgroups (size_t n, const gid_t *groups) + { + size_t i; + __kernel_gid_t kernel_groups[n]; + + for (i = 0; i < n; i ++) + kernel_groups[i] = groups[i]; + + { + long res; + __asm__ volatile ("int $0x80" + : "=a" (res) + : "0" (__NR_setgroups),"b" ((long)(n)), + "c" ((long)(kernel_groups))); + + if ((unsigned long)(res) >= (unsigned long)(-125)) { + errno = -res; + res = -1; + } + return (int) (res); + } + } + /* * Remove local Xauthority file. */
The cron daemon was recompiled from /usr/local/src/cron-3.0pl1/ with the following patch applied, to fix the NGROUPS_MAX limit.
*** do_command.c.~1~ Tue Jun 12 06:35:48 2001 --- do_command.c Tue Jun 12 06:25:48 2001 *************** *** 30,35 **** --- 30,112 ---- # include <syslog.h> #endif + /* The GNU C Library currently has a compile-time limit on the number of + groups a user may be a part of, even if the underlying kernel has been + fixed, and so we define our own initgroups. */ + #include <grp.h> + static int + xinitgroups (char *user, gid_t gid) + { + struct group *grp; + gid_t *buf; + int buflen, ngroups; + + /* Initialise the list with the specified GID. */ + ngroups = 0; + buflen = 16; + buf = malloc (buflen * sizeof (*buf)); + buf[ngroups ++] = gid; + + setgrent (); + while ((grp = getgrent ())) + { + /* Scan the member list for our user. */ + char **p = grp->gr_mem; + while (*p && strcmp (*p, user)) + p ++; + + if (*p) + { + /* We found the user in this group. */ + if (ngroups == buflen) + { + /* Enlarge the group list. */ + buflen *= 2; + buf = realloc (buf, buflen * sizeof (*buf)); + } + + /* Add the group id to our list. */ + buf[ngroups ++] = grp->gr_gid; + } + } + endgrent (); + + /* Return whatever setgroups says. */ + buflen = setgroups (ngroups, buf); + free (buf); + return buflen; + } + #define initgroups xinitgroups + + /* This worked fine, and was adopted into glibc, until setgroups got a + similar limitation, so we override it as well. */ + #include <linux/posix_types.h> + #include <sys/syscall.h> + #include <errno.h> + + int + setgroups (size_t n, const gid_t *groups) + { + size_t i; + __kernel_gid_t kernel_groups[n]; + + for (i = 0; i < n; i ++) + kernel_groups[i] = groups[i]; + + { + long res; + __asm__ volatile ("int $0x80" + : "=a" (res) + : "0" (__NR_setgroups),"b" ((long)(n)), + "c" ((long)(kernel_groups))); + + if ((unsigned long)(res) >= (unsigned long)(-125)) { + errno = -res; + res = -1; + } + return (int) (res); + } + } static void child_process __P((entry *, user *)), do_univ __P((user *)); *************** *** 240,246 **** */ setgid(e->gid); # if defined(BSD) || defined(POSIX) ! initgroups(env_get("LOGNAME", e->envp), e->gid); # endif setuid(e->uid); /* we aren't root after this... */ chdir(env_get("HOME", e->envp)); --- 317,323 ---- */ setgid(e->gid); # if defined(BSD) || defined(POSIX) ! xinitgroups(env_get("LOGNAME", e->envp), e->gid); # endif setuid(e->uid); /* we aren't root after this... */ chdir(env_get("HOME", e->envp)); *** cron.c.~1~ Tue Jun 12 06:35:35 2001 --- cron.c Tue Jun 12 06:17:13 2001 *************** *** 25,35 **** #include "cron.h" #include <signal.h> - #if SYS_TIME_H - # include <sys/time.h> - #else # include <time.h> - #endif static void usage __P((void)), --- 25,31 ----
The ssh service is bound to lsh, with a fallback to ssh for protocol version 1. The startup of lsh is done with the /etc/init.d/lsh script.
The version of lsh installed is 1.2.1, compiled in /usr/local/src/lsh-1.2.1. It includes a patch for dealing with the NGROUPS_MAX problem described in another chapter.
An entropy initialization bug was fixed with the following patch:
Index: lshd.c
===================================================================
RCS file: /lysator/cvsroot//nisse/lsh/src/lshd.c,v
retrieving revision 1.112.2.1
diff -u -a -r1.112.2.1 lshd.c
--- lshd.c  2001/04/17 21:42:16  1.112.2.1
+++ lshd.c  2001/04/25 18:32:47
@@ -480,9 +480,6 @@
       else
         argp_error(state, "All user authentication methods disabled.");

-      /* Start background poll */
-      RANDOM_POLL_BACKGROUND(self->random->poller);
-
       break;
     }
   case 'p':
@@ -751,6 +748,13 @@
       return EXIT_FAILURE;
     }

+  /* NOTE: We have to do this *after* forking into the background,
+   * because otherwise we won't be able to waitpid() on the background
+   * process. */
+
+  /* Start background poll */
+  RANDOM_POLL_BACKGROUND(options->random->poller);
+
   {
     /* Commands to be invoked on the connection */
     struct object_list *connection_hooks;
A patch to gracefully handle utf8 errors in passwords was applied:
diff -u -r1.31 server_userauth.c
--- server_userauth.c  2001/02/25 22:38:20  1.31
+++ server_userauth.c  2001/06/25 12:26:57
@@ -343,9 +343,12 @@
   connection->dispatch[SSH_MSG_USERAUTH_REQUEST]
     = make_userauth_handler(self->methods, self->services,
                             c, e,
+                            /* Use the connection's exception handler as
+                             * parent, in order to get reasonable
+                             * handling of EXC_PROTOCOL. */
                             make_exc_userauth_handler(connection,
                                                       self->advertised_methods,
-                                                      AUTH_ATTEMPTS, e,
+                                                      AUTH_ATTEMPTS, connection->e,
                                                       HANDLER_CONTEXT));
 }
The error message issued by lsh when encountering this error was:
Unhandled exception of type 0x1000: Invalid utf8 in password.
A dsa internal error bug was fixed with the following patch:
diff -u -a -r1.26 dsa.c
--- dsa.c  2001/02/08 16:33:01  1.26
+++ dsa.c  2001/07/15 16:18:15
@@ -525,7 +525,8 @@
       break;

     default:
-      fatal("do_dsa_sign: Internal error.");
+      fatal("do_dsa_sign: Internal error, unexpected algorithm %a.\n",
+            algorithm);
     }
   mpz_clear(r);
   mpz_clear(s);
The error message issued by lsh when encountering this error was:
do_dsa_sign: Internal error.
It seems that werror("...%a...") couldn't handle the atom 0, which is used to represent any algorithm not present in lsh's list in atoms.in. It was fixed with the following patch:
--- src/werror.c  2001/07/04 18:37:56  1.61
+++ src/werror.c  2001/09/12 07:23:51
@@ -429,11 +429,11 @@
     case 'a':
       {
         int atom = va_arg(args, int);
-
-        assert(atom);
-        werror_write(get_atom_length(atom), get_atom_name(atom));
-
+        if (atom)
+          werror_write(get_atom_length(atom), get_atom_name(atom));
+        else
+          werror_write(9, "<unknown>");
         break;
       }
     case 's':
The error message issued by lsh when encountering this error was:
do_exc_connection_handler: Raising exception locking connection. (type 1048577), using handler installed by handshake.c:355: do_handshake
This shows that "ssh-dss" is selected as the host algorithm to use, which is identified internally by lsh(d) as the integer ATOM_SSH_DSS. However, when that integer has been passed all the way down to dh_make_server_msg and do_dsa_sign, that value has been replaced by zero. It was fixed with the following patch:
--- src/server_keyexchange.c  2001/02/25 22:38:20  1.47
+++ src/server_keyexchange.c  2001/09/19 11:24:19
@@ -135,9 +135,8 @@
     {
       hostkey_algorithm = ATOM_SSH_DSS_KLUDGE_LOCAL;
     }
-  else
 #endif
-    dh->hostkey_algorithm = hostkey_algorithm;
+  dh->hostkey_algorithm = hostkey_algorithm;

   dh->algorithms = algorithms;
The error message issued by lsh when encountering this error was:
Client version: SSH-1.99-2.0.13 (non-commercial)
Server version: SSH-1.99-lshd_1.2.1 lsh - a free ssh
Selected keyexchange algorithm: diffie-hellman-group1-sha1 with hostkey algorithm: ssh-dss
The people involved in this installation are Niels Moller (author of lsh), Gordon Matzigkeit (author of the NGROUPS_MAX patch) and Loic Dachary who did the installation.
Should a problem occur with this version of lsh, one has to send a bug report to Niels Moller <nisse@lysator.liu.se>, including the relevant /var/log/syslog lines (tagged with lshd) and a stack trace of the core dump, if available. To get the stack trace, do the following:
$ gdb /usr/local/sbin/lshd /core
gdb> bt
With the appropriate information, Niels is usually able to provide a patch within a very short delay.
The menu.lst file used by grub is installed at (/dev/hdb2)/boot/grub/menu.lst or, in grub jargon, (hd1,1)/boot/grub/menu.lst.
To access it:
mount /dev/hdb2 /rescue
edit /rescue/boot/grub/menu.lst
umount /rescue
The service-entrance.gnu.org machine has two serial lines going to savannah.gnu.org: one gives access to the console, the other allows power-cycling the machine. More information on this subject may be found in sysadmin.texi (http://savannah.gnu.org/projects/sysadmin/).
A full Debian installation was done on /dev/hdb2 and can be used if, for some reason, the regular installation is so corrupted that even a single boot will not work. This emergency installation is labeled as such in the grub menu.
When booting on this emergency partition, the file systems of the regular installation are mounted under the /subversions.gnu.org/ directory.
The grub menu file (menu.lst) is located on this partition, as explained above.
The kernel was rebuilt in /usr/src/kernel-source-2.2.19pre17-2.2.19pre17 and installed from /usr/src/kernel-image-2.2.19pre17_512_i386.deb. It was recompiled with the following patch applied to raise the maximum number of groups per process to 512.
*** include/asm-i386/param.h.~1~  Tue Aug 1 11:08:17 1995
--- include/asm-i386/param.h  Sat May 26 15:44:10 2001
***************
*** 8,14 ****
  #define EXEC_PAGESIZE  4096

  #ifndef NGROUPS
! #define NGROUPS  32
  #endif

  #ifndef NOGROUP
--- 8,14 ----
  #define EXEC_PAGESIZE  4096

  #ifndef NGROUPS
! #define NGROUPS  512
  #endif

  #ifndef NOGROUP
*** include/linux/limits.h.~1~  Tue Dec 2 16:44:40 1997
--- include/linux/limits.h  Sat May 26 13:47:52 2001
***************
*** 3,9 ****

  #define NR_OPEN  1024

! #define NGROUPS_MAX  32      /* supplemental group IDs are available */
  #define ARG_MAX  131072      /* # bytes of args + environ for exec() */
  #define CHILD_MAX  999       /* no limit :-) */
  #define OPEN_MAX  256        /* # open files a process may have */
--- 3,9 ----

  #define NR_OPEN  1024

! #define NGROUPS_MAX  512     /* supplemental group IDs are available */
  #define ARG_MAX  131072      /* # bytes of args + environ for exec() */
  #define CHILD_MAX  999       /* no limit :-) */
  #define OPEN_MAX  256        /* # open files a process may have */
The configuration of the IDE disks is done in /etc/init.d/hdparm. It boosts the transfer rate from 4.4 MB/s to 23.4 MB/s.
hdparm -k 0 -d 1 -c 3 -m 16 -a 16 -u 1 -X66 /dev/hda
hdparm -k 0 -d 1 -c 3 -m 16 -a 16 -u 1 -X66 /dev/hdb
hdparm -k 0 -d 1 -c 3 -m 16 -a 16 -u 1 -X66 /dev/hdc