HSS UNIX Public Cookbook (Jan 09, 2006)

Chris Malek


Table of Contents

Preface
1. Maintaining Operating Systems
1.1. Stress testing a new machine before deployment
1.2. Restoring /etc/passwd on a Solaris box when root doesn't exist
1.3. How to add a swap file under Solaris
1.4. Erasing GRUB or LILO from the master boot record
1.5. Configuring X11
1.6. Specifying a once only default boot image to GRUB or LILO
1.7. Setting up a Fedora Core 3 box for backup to a USB drive
2. Hardware
2.1. Replacing a hard drive in olympia's RAID array
2.2. Converting a drive from Sun disk labels to DOS partitioning
2.3. Adding an additional hard drive to a Solaris 7 box
2.4. Burning a DVD+R or DVD+RW under Fedora
2.5. Fixing sound device permissions under Fedora Core 4
3. E-mail
3.1. Deleting a Mailman mailing list
3.2. Migrating your Mailman lists to a new server
3.3. Setting up redundant Mailman servers
3.4. Saving a copy of outgoing mail to a "Sent Mail" folder in IMP
3.5. Upgrading the Horde PostgreSQL database from Horde 2 to Horde 3
4. Apache
4.1. Using PAM to authenticate users in Apache 2.x under Linux
4.2. Using PAM to authenticate users in Apache 1.3.x under Solaris
4.3. Getting pwauth to work with pam_pwdfile in the context of mod_auth_external
4.4. Setting up an SSL VirtualHost on a NameVirtualHost IP address
4.5. Redirecting from HTTP to HTTPS
4.6. Setting up a Windows XP compatible WebDAV Folder under Apache 1.3.x
4.7. Setting up a Windows XP compatible WebDAV Folder under Apache 2.x
4.8. Enabling CGI scripts for people
5. Databases
5.1. Setting the MySQL root password
5.2. Dumping and restoring a MySQL database
5.3. Dumping and restoring a PostgreSQL database
6. RPM Building and Maintenance
6.1. Setting up a non-root RPM build environment
6.2. Automatically adding the platform name to the RPM release
6.3. Adding a code patch to an RPM
6.4. Building the JPackage java 1.4.2 nosrc RPM
6.5. Building the JPackage java 1.5.0 nosrc RPM
6.6. Building RPMs of Perl modules
6.7. Building RPMs of Python modules
6.8. Building unsquashable Fedora replacement packages
6.9. Adding a user and group for a service in an RPM for a Fedora system
6.10. Adding a menu entry and icon for a GUI program for a Fedora system
7. Programming
7.1. Importing an existing software package into a CVS repository
7.2. Importing an existing software package into a Subversion repository
7.3. Importing a CVS repository into Subversion
7.4. Tagging a release in Subversion
7.5. Importing the contents of another Subversion repository into yours
7.6. Populating a Subversion repository
A. asciidoc include tree for this document
B. Style Guide for HSS Cookbook recipes
B.1. Recipe Template
B.2. Code fragments or command sequences
B.3. Text markup
C. Maintaining the HSS UNIX Cookbook
C.1. Building the HTML and PDF versions of this book
Index

Preface

This document is the repository for the various procedures we use here in Caltech's Division of Humanities and Social Sciences to maintain our UNIX systems.

This is the public version of the Cookbook. Nothing inappropriate for strangers — nothing that could compromise the security of our systems or the privacy of our users — should appear here. If you see something which looks like it shouldn't be in here, be a pal and e-mail me about it at cmalek@caltech.edu.

Chapter 1. Maintaining Operating Systems

1.1. Stress testing a new machine before deployment

1.1.1. Problem

I like to stress test the machine to ensure that the RAM is good, that the hard drive has no bad blocks, etc. This way, we hopefully root out any hardware failures before we deploy the machine, and thus save ourselves some embarrassment.

I also like to take some disk subsystem and computational benchmarks from the machine so I can get an idea of how fast it is compared to our other systems.

1.1.2. Solution

1.1.2.1. Diagnostics

If the machine comes with its own diagnostic tools, then run a complete set of those on the machine.

If the machine doesn't come with its own diagnostic tools, use our copy of The Ultimate Boot CD v3.2 (a CD packed full of freely downloadable system diagnostic tools) to test the system.

  • Run memtest86 (in the Mainboard Tools menu) to test the RAM. I just run it once. Run it overnight if you suspect memory instabilities.
  • Run PC-Config (in the Mainboard menu) to find out the model and vendor for the system's hard disk. Do "Alt-W" and select "more hardware".
  • If you have a few days to set up the machine, run the Mersenne Prime Test (in the Mainboard Tools menu) for 24 or more hours. This will bring to light any instabilities with the CPU subsystem.
  • Run the appropriate vendor diagnostic tool from the Hard Disk Utilities menu to test the hard disk.

1.1.2.2. Benchmarking

After installing Linux on the system, run the following two benchmarks. These programs are installed by default as RPMs on all our systems.

First run /usr/bin/bonnie++.

touch /cf_hold
cd /var/tmp
mkdir bonnie++
cd bonnie++
bonnie++ -n 16
rm /cf_hold

Update /home/sysadmin/public_html/io-benchmarks.html with the results from the bonnie++ run.

Then run lmbench.

touch /cf_hold
cd /home/sysadmin/data/lmbench
/usr/bin/lmbench/config-run
/usr/bin/lmbench/results
cd results
make > /home/sysadmin/public_html/lmbench.txt
rm /cf_hold

1.1.3. Discussion

I no longer stress test new machines from reputable vendors like Dell and IBM after never having found a problem with such. I do benchmark them, though.

Dell and IBM (at least) ship such tools with their machines. If you can't find the CDs, go to the appropriate company's website and download them.

Just being able to successfully install Linux over the net is a bit of a diagnostic test.

The homepage for The Ultimate Boot CD is http://ubcd.sourceforge.net/; the maintainers update the disk image every few months, so check the site for updates. The Western Digital DLG Diagnostic tools don't seem to work on the 2.21 version Ultimate Boot CD.

memtest86 can take a long time — one to two hours, depending on processor speed and the amount of RAM the machine has.

If you can't find the CD, you can burn a new one from our local copy of the ISO image: /home/sysadmin/isos/ubcd*.iso.

For /usr/bin/lmbench/config-run, take the defaults for everything except "Mail results" — say "no" for that.

We do "touch /cf_hold" before we run our benchmarks so that cfengine won't corrupt them.

1.2. Restoring /etc/passwd on a Solaris box when root doesn't exist

1.2.1. Problem

/etc/passwd on one of our Solaris boxes has gotten garbled or deleted somehow, so there's no entry for the root user. Now you can't even log in in single user mode: Solaris will ask you for the root password even then, but won't let you log in, even if /etc/shadow still exists.

1.2.2. Solution

What we're going to do is boot from the Solaris 7 install CD in single user mode, then mount the partition containing /etc and restore the passwd file manually.

If the system is running, you'll have to hard boot the system because you won't be able to get in as root (because you deleted the root account, bad boy).

If you're at a serial terminal, make sure the terminal is connected to the proper machine, and hit "Break". If you're using the Sun keyboard and monitor combo, hit Stop-A.

Insert the install CD into the CD drive, and boot the computer.

At the "ok" prompt, enter "sync". The computer will try to sync the filesystems to disk and reboot.

When you see it start initializing the RAM, stop it again using Break or Stop-A. At the "ok" prompt, enter "boot cdrom -s" and wait for it to boot.
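The console interaction for those steps looks like this:

ok sync
(the machine syncs its filesystems and resets; hit Break or Stop-A
again while it is initializing the RAM)
ok boot cdrom -s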

When you get a shell prompt, enter the following commands:

fsck /dev/rdsk/c0t0d0s0
mount /dev/dsk/c0t0d0s0 /a
cd /a/etc
mv passwd passwd.orig
cp /etc/passwd ./passwd
cat passwd.orig >> ./passwd
reboot

The system should come up, but it'll be hurting because it has only the most basic set of accounts. Once the strongbox master has pushed the full set of passwd files to it, reboot.

1.3. How to add a swap file under Solaris

1.3.1. Problem

Sometimes the swap partition we add normally on our Solaris boxes just isn't enough, and we need to add more swap space without rebuilding the machine.

1.3.2. Solution

Create the swap file:

mkfile 512m /var/spool/swapfile

The file size can be specified in megabytes (m), blocks (b), or kilobytes (the default).

Add the file to the swap space:

swap -a /var/spool/swapfile

See if the new swap space is available:

swap -l

Make the new swap space be mounted at boot time by adding the new swap information to /etc/vfstab:

/var/spool/swapfile - - swap - no -

1.4. Erasing GRUB or LILO from the master boot record

1.4.1. Problem

You have a machine which used to be a dual boot Linux/Windows and now you want it to be single boot Windows. You've formatted the Linux partition as a Windows drive, but now when you boot, the computer hangs trying to run LILO or GRUB.

1.4.2. Solution

1.4.2.1. Option 1

If you have a Win98 or Win2000 (?) computer available, make a bootable DOS floppy, copy fdisk.exe to it (use Start -> Find to locate fdisk.exe on the Windows computer's hard drive). Then boot the GRUB/LILO afflicted box with that floppy, and do:

fdisk /mbr

You can also go out onto the 'Net and look at http://www.bootdisk.com: last time I checked, they had Win98 boot disk images which contained fdisk.exe.

1.4.2.2. Option 2

If you have a Windows XP install CD, boot from it and press the "R" key during setup to start the Recovery Console. Select your Windows XP installation from the list and enter the Administrator password. At the input prompt, enter the command "FIXMBR" and confirm the query with "y". The MBR will be rewritten and GRUB/LILO will be erased. Enter "exit" to reboot the computer. Note that for this, you will have to know the Administrator password for the afflicted computer. If you don't know it, this ain't gonna help you.

1.5. Configuring X11

1.5.1. Problem

Different versions of Linux have different ways of configuring X (I mean in an automated fashion — you can always create your XF86Config/xorg.conf file manually).

1.5.2. Solution

Redhat 7.3:

Xconfigurator

Redhat 9:

redhat-config-xfree86

Fedora Core 1:

redhat-config-xfree86

Fedora Core 2:

system-config-display

Fedora Core 3:

system-config-display

1.6. Specifying a once only default boot image to GRUB or LILO

1.6.1. Problem

Sometimes, you want to reboot a box and have it boot a specific boot image (kernel, or Windows install) that is not the default kernel from within Linux.

Perhaps you are logged in remotely to a box, and need to revert to an older kernel to test something, or perhaps you have one of those DVI flat panel setups in which you get no VGA output on your display, and thus can't see the boot loader menu.

1.6.2. Solution

For the examples, let's say that your normal default is to boot into Linux, but you want to boot into Windows on the next boot only, and your Windows image label in /etc/lilo.conf or /etc/grub.conf is windows.

1.6.2.1. LILO

If you use LILO as your boot loader, just before you reboot, do:

(as root)
lilo -R <desired once only image>

Example:

(as root)
lilo -R windows

1.6.2.2. grub

If you use grub as your boot loader, just before you reboot, do:

(as root)
grub --batch >/dev/null 2>/dev/null <<EOT
savedefault --default=# --once
quit
EOT

Where "#" is the number of the boot image in /etc/grub.conf. The first boot image is 0, the second is 1, etc.

Example:

(as root)
#!/bin/sh
grub --batch >/dev/null 2>/dev/null <<EOT
savedefault --default=4 --once
quit
EOT
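If you're not sure which number a given image is, a little helper like this can print each title in /etc/grub.conf with its index (a sketch; grub_images is a made-up name, not part of our standard toolset):

```shell
# grub_images: print each "title" line in a grub config with its index,
# i.e. the number you would pass to "savedefault --default=#".
# (Hypothetical helper; pass a config path, or default to /etc/grub.conf.)
grub_images() {
    grep '^title' "${1:-/etc/grub.conf}" | nl -b a -v 0
}
```

For example, "grub_images /boot/grub/grub.conf" on a box whose second title entry is windows shows that entry numbered 1.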

1.7. Setting up a Fedora Core 3 box for backup to a USB drive

1.7.1. Problem

Sometimes we have boxes that have data stored on their local hard drives for various reasons. Some people have very large (200GB+) datasets that we just don't have room for on the filer; some have privacy or access concerns for certain data. We still want this data to be backed up.

We'd like to buy an external USB hard drive (which can be had for about $1/GB now) and back up to that.

1.7.2. Solution

1.7.2.1. Connect the drive, and format it as ext3

Connect the drive to the host, wait a bit and run dmesg to figure out what device in /dev you'll be accessing the drive via.

> dmesg
usb-storage: device found at 2
usb-storage: waiting for device to settle before scanning
  Vendor: LaCie     Model: Big Disk G379     Rev:
  Type:   Direct-Access                      ANSI SCSI revision: 04
usb-storage: device scan complete
SCSI device sda: 796594175 512-byte hdwr sectors (407856 MB)
sda: assuming drive cache: write through
SCSI device sda: 796594175 512-byte hdwr sectors (407856 MB)
sda: assuming drive cache: write through
 sda: sda1

You may have to do modprobe usb-storage before plugging in the drive.

Once Linux detects the drive, run fdisk on the device, delete the vfat partition, add a Linux partition (type 83), and write your changes.
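The fdisk session looks roughly like this (a sketch; partition numbers and prompts will vary with your drive and fdisk version):

> fdisk /dev/sda

Command (m for help): d        (delete the factory vfat partition)
Selected partition 1

Command (m for help): n        (new primary partition; take the default
                                start and end cylinders)

Command (m for help): t        (set the partition type to 83, Linux)
Selected partition 1
Hex code (type L to list codes): 83

Command (m for help): w        (write the table and exit)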

Then format the new partition:

(as root)
mkfs.ext3 /dev/${device}

1.7.2.2. Install the rsnapshot package on the host

Get the RPM from here:

http://www.hss.caltech.edu/yum/fedora-core-3/en/os/i386/RPMS.extras/rsnapshot-1.1.6-1.noarch.rpm

1.7.2.3. Make an rsnapshot.conf for the host, if necessary

Here's a sample /etc/rsnapshot.conf that keeps 30 days of backups, backs up /local to /misc/usbdrive/snapshots and logs what it's doing to /var/log/rsnapshot. If you want to do something different than this, you'll need to make a special copy of rsnapshot.conf for the host.

# rsnapshot.conf

snapshot_root   /misc/usbdrive/snapshots
no_create_root  0

cmd_cp          /bin/cp
cmd_rm          /bin/rm
cmd_rsync       /usr/bin/rsync
cmd_logger      /usr/bin/logger

interval        daily   30

link_dest       1
verbose         2
loglevel        3


logfile /var/log/rsnapshot

lockfile        /var/lock/subsys/rsnapshot

# Here's the important bit -- this is the directory we're backing up
# TAB SEPARATED!!!
backup  /local/         local/

1.7.2.4. Add an automounter entry to mount the drive on demand

Add this line to /etc/auto.master, if it doesn't already exist:

/misc           /etc/auto.misc

Then create /etc/auto.misc, if necessary, and add this line:

usbdrive -fstype=ext3  :/dev/$device

where $device is the "sd[a-z][0-9]" device you discovered, above.

Restart the automounter:

(as root)
/sbin/service autofs restart

1.7.2.5. Add a cron job to run rsnapshot

Create /etc/cron.daily/rsnapshot, and give it the following contents:

#!/bin/bash
/usr/bin/rsnapshot daily

1.7.3. Discussion

1.7.3.1. rsnapshot

rsnapshot is an implementation of Mike Rubel's rsync snapshot idea (http://www.mikerubel.org/computers/rsync_snapshots/). It uses rsync and hard links to keep a number of daily snapshots of the directory being backed up: go to the snapshot from 3 days ago, and you should see what the directory looked like 3 days ago.
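You can demonstrate the hard link trick by hand in a scratch directory (purely an illustration of the idea; none of these paths mean anything to rsnapshot):

```shell
# Demonstrate the rsync-snapshot hard link trick in a scratch directory.
demo=$(mktemp -d)
mkdir "$demo/data"
echo "hello" > "$demo/data/file"

# "cp -al" copies the directory tree but hard links the files instead of
# copying their contents; this is how successive snapshots share storage.
cp -al "$demo/data" "$demo/snap.0"

# The snapshot's file is the very same inode; its link count is now 2.
stat -c '%h' "$demo/data/file"    # prints "2"

rm -rf "$demo"
```

A file that changes between snapshots gets a fresh copy from rsync, so only changed files cost additional disk space.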

The default configuration for rsnapshot is:

  • The snapshots will be written to /misc/usbdrive/snapshots.
  • /local is the directory to be backed up.
  • Backups occur daily at 4:05 am.
  • We keep 30 days of snapshots.

1.7.3.2. Automounting vs. udev and /media

I make an automounter entry instead of relying on udevd to mount the drive under /media because I want a consistent name across all my systems which do this. udevd will mount the drive under /media/<token> where token is the disk label (maybe? not sure about where the name comes from) on the drive, which varies with manufacturer, model, etc.

1.7.3.3. Formatting the drive as ext3

The drive will come from the manufacturer formatted with a Windows (probably VFAT) filesystem. This doesn't do us any good: if we copy our data to that filesystem, we lose our permissions and ownership information. So we need to repartition and reformat.

Chapter 2. Hardware

2.1. Replacing a hard drive in olympia's RAID array

2.1.1. Problem

olympia's Arena IDE RAID array has 8 drives: 7 are live, and one is a hot spare. When the RAID controller determines that one of the live drives has gone bad, it pulls it out of the array, inserts the hot spare in its place, and rebuilds the array. The dead drive should now be replaced with one of the cold spares.

2.1.2. Solution

2.1.2.1. Wait until the RAID rebuild completes

First, wait until the RAID rebuild has completed. If you look at the LCD panel on the front of the array, you'll see that it displays a percentage — this is how much of the rebuild it has completed.

If you touch any of the buttons on the front of the array, you'll lose the rebuild status display. So don't touch anything.

The RAID rebuild takes around 4.5 hours.

2.1.2.2. Remove the old drive

The drive caddies are locked into the array. Keys that will unlock them are attached to the drive case.

After the rebuild has completed, unlock the drive caddy with the red LED and remove it from the array. Remove the drive and toss it — I've tested drives the array rejects, and they sure are dead.

2.1.2.3. Insert the replacement drive

I store the cold spares on top of the array, so everyone will be able to find them easily.

Take one of the cold spares, make sure that it is an IDE master (by default, the drives are set to "Cable select", but I've made the appropriate modifications to the cold spares), and insert it into the caddy.

Then re-insert the caddy into the array, and lock it.

If you look at the LCD display on the array after you do this, you should see the indicator for that slot change from "X" to "I" and at last to "S" ("No drive" -> "Identifying drive" -> "Hot spare").

The new drive is now the new hot spare.

2.1.2.4. Buy a replacement cold spare

I like to have two cold spares. Now that we've used one of the cold spares, we only have one left, and need to buy a new one.

The array was originally stocked with Seagate ST3120023A 120GB drives; I've replaced them with ST3120026A 120GB drives without problems.

2.2. Converting a drive from Sun disk labels to DOS partitioning

2.2.1. Problem

I've got these hard drives I scrounged from a Sun and I want to use them on my Linux box. They've got Sun disk labels on them: how do I change them to use regular old DOS partitioning like my other disks?

2.2.2. Solution

2.2.2.1. Ultimate Boot CD

This is the easier way if the drive is the primary drive in a computer on which you want to install Linux.

Boot the computer with a copy of the Ultimate Boot CD, go to the "Filesystem Utilities" section and choose FDISK.

Then just make your partitions, and presto, you have a new partition table.

2.2.2.2. From Linux

Attach the drive to your Linux box, or boot the box it's in with a Linux Live CD (Knoppix, for example).

Zero out the partition table area of the disk with dd (using /dev/sdc for this example):

dd if=/dev/zero of=/dev/sdc bs=1024 count=10

Then just run fdisk on that disk and use "w" to write a new map — fdisk will write a fresh DOS partition map, like so:

> sudo fdisk /dev/sdc
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.


The number of cylinders for this disk is set to 17274.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
   (e.g., DOS FDISK, OS/2 FDISK)
Warning: invalid flag 0x0000 of partition table 4 will be corrected by
w(rite)

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

2.3. Adding an additional hard drive to a Solaris 7 box

2.3.1. Problem

Once in a while, I need to add an additional hard drive to a Solaris 7 box, possibly because the box needs more space.

2.3.2. Solution

First, log into the machine as root and do:

(as root)
touch /reconfigure
halt

Physically add the disk, power on the machine, and enter "boot -r" at the OK prompt. I'm not sure whether you have to do both "boot -r" and "touch /reconfigure".

After the machine finishes booting, log in as root and run format.

We use the format command under Solaris to partition the disk. The man page for format is accessible via "man -s 1m format". (if you don't use the "-s 1m", you'll get the tcl format man page).

When you run format, you will be presented with a menu of all the disks you can use. Choice "0" is your boot drive.

Type "partition" to go into the partition menu, and "0"-"7" to set up your partitions. Type "label" when you're done to write the partition table you've created to disk, and then "quit" to get back to the shell prompt.

Now run newfs on the partitions you created. You only have to newfs the partitions you expect to mount, not swap partitions.

An example of running newfs is this:

newfs /dev/rdsk/c0t2d0s0

Now mount the disks, add them to vfstab, etc.
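For example, if you had run newfs on slice 0 of the new disk and wanted it mounted on /export/data at boot (a made-up mount point for illustration), the /etc/vfstab line would be:

/dev/dsk/c0t2d0s0  /dev/rdsk/c0t2d0s0  /export/data  ufs  2  yes  -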

2.3.3. Discussion

The "touch /reconfigure" tells the kernel to go build the /devices tree for the new device at the next reboot. When you reboot, you'll see a "Configuring devices …" message among the boot messages, which you would not see on a normal boot up.

If this is an IDE disk you're adding, make sure you have the Master/Slave jumper set correctly before installing it.

You will of course have to figure out what device in /dev/rdsk corresponds to the partition you want to run newfs on. Right now I can't remember how to figure this out.

2.4. Burning a DVD+R or DVD+RW under Fedora

2.4.1. Problem

The cdrecord that comes with Fedora Core 3 is patched to be able to write to DVD-R, DVD-RW and DVD-RAM drives, but not DVD+R or DVD+RW drives. cdrecord will happily burn DVDs for you, but you won't be able to use them: it treats DVD+R/DVD+RW drives as DVD-R/DVD-RW, which just doesn't work.

You don't need to use this recipe if you have a DVD-R/DVD-RW capable drive — just use the cdrecord that comes with Fedora Core 3.

2.4.2. Solution

2.4.2.1. Get cdrecord-ProDVD

cdrecord-ProDVD knows how to write to drives that only do DVD+R and DVD+RW.

This seems to be the distribution point for it:

http://gd.tuwien.ac.at/utils/schilling/cdrecord/ProDVD/

Go there and download the latest binary version appropriate to your architecture.

2.4.2.2. Configure cdrecord-ProDVD

cdrecord-ProDVD will look in ~/.prodvd.conf to figure out the device to write to, and what speed to use for that device.

You can reference files in /dev for the device. Here's an example ~/.prodvd.conf:

DVD RW DEVICE = /dev/hdd
DVD RW SPEED  = 4

2.4.2.3. Using cdrecord-ProDVD

Make your ISO image as usual with mkisofs. Then load a blank DVD+R or DVD+RW into your drive, and burn it like so:

(as root)
prodvd <iso image>

2.4.3. Discussion

2.4.3.1. Security Keys

cdrecord-ProDVD is free only for private, non-commercial use and requires a security key to unlock write speeds greater than 1. It comes as a binary package only, and ships with a time limited key. When the key expires, you have to go back to the distribution website and download a new copy of cdrecord-wrapper.sh: the key is in that file.

2.5. Fixing sound device permissions under Fedora Core 4

2.5.1. Problem

Permissions on the sound devices in Fedora Core 4 are so restrictive that normal users can't play sounds or adjust mixer settings.

2.5.2. Solution

Since FC3 and later use udevd to manage devices in /dev, we can't just chmod things to our liking in /dev and expect it to work for very long: upon reboot, udevd will return the permissions on our devices to what it thinks they should be, and we're back in the same boat we started in.

What you need to do is edit /etc/udev/rules.d/50-udev.rules and change the permissions on the audio devices to 0666:

# audio devices
KERNEL=="dsp*",                 MODE="0666"
KERNEL=="audio*",               MODE="0666"
KERNEL=="midi*",                MODE="0666"
KERNEL=="mixer*",               MODE="0666"
KERNEL=="sequencer*",           MODE="0666"
KERNEL=="sound/*",              MODE="0666"
KERNEL=="snd/*",                MODE="0666"
KERNEL=="beep",                 MODE="0666"
KERNEL=="admm*",                MODE="0666"
KERNEL=="adsp*",                MODE="0666"
KERNEL=="aload*",               MODE="0666"
KERNEL=="amidi*",               MODE="0666"
KERNEL=="dmfm*",                MODE="0666"
KERNEL=="dmmidi*",              MODE="0666"
KERNEL=="sndstat",              MODE="0666"

… and then, later in the file …

# alsa devices
KERNEL=="controlC[0-9]*",       NAME="snd/%k" MODE="0666"
KERNEL=="hw[CD0-9]*",           NAME="snd/%k" MODE="0666"
KERNEL=="pcm[CD0-9cp]*",        NAME="snd/%k" MODE="0666"
KERNEL=="midi[CD0-9]*",         NAME="snd/%k" MODE="0666"
KERNEL=="timer",                NAME="snd/%k" MODE="0666"
KERNEL=="seq",                  NAME="snd/%k" MODE="0666"

Then make the changes take effect by doing:

sudo start_udev

Now test your changes by running a sound producing program as a normal user; XMMS, for instance.

Chapter 3. E-mail

3.1. Deleting a Mailman mailing list

3.1.1. Problem

When the usefulness of a mailing list has ended, we want to remove it from Mailman so that we don't have an ever growing roster of mailing lists on the admin and listinfo pages.

3.1.2. Solution

You just delete the directories associated with the list and that's that:

(as root)

rm -rf /infosys/mailman/lists.hss.caltech.edu/lists/<list name>
rm -rf /infosys/mailman/lists.hss.caltech.edu/archives/private/<list name>
rm -rf /infosys/mailman/lists.hss.caltech.edu/archives/private/<list name>.mbox
rm -rf /infosys/mailman/lists.hss.caltech.edu/archives/public/<list name>
rm -rf /infosys/mailman/lists.hss.caltech.edu/archives/public/<list name>.mbox
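Since the recipe is the same every time, you can wrap it in a small shell function (a sketch; delete_mmlist and the MAILMAN_ROOT override are my inventions, and the function does no sanity checking beyond requiring a list name):

```shell
# delete_mmlist: remove all of the Mailman data for one list.
# MAILMAN_ROOT defaults to our list server's data directory, but can
# be overridden (handy for testing on a scratch tree).
delete_mmlist() {
    root="${MAILMAN_ROOT:-/infosys/mailman/lists.hss.caltech.edu}"
    list="$1"
    if [ -z "$list" ]; then
        echo "usage: delete_mmlist <list name>" >&2
        return 1
    fi
    rm -rf "$root/lists/$list" \
           "$root/archives/private/$list" \
           "$root/archives/private/$list.mbox" \
           "$root/archives/public/$list" \
           "$root/archives/public/$list.mbox"
}
```

Run it as root: "delete_mmlist oldlist".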

3.2. Migrating your Mailman lists to a new server

3.2.1. Problem

You had your lists on "oldlists.domain" and you want to move them to a server called "newlists.domain". It's not as simple as just tar'ing up the appropriate config directories, because the old Mailman data refers to "oldlists.domain" internally all over the place, and the Mailman install on your new machine will believe the configuration data when constructing URLs.

Additionally, you'll have troubles if the list archives on the new server live in a different place in the filesystem than on the old server.

3.2.2. Solution

3.2.2.1. Configure Mailman on the new server

At least set DEFAULT_URL_HOST in mm_cfg.py.

3.2.2.2. Transfer the list data

Tar up the /data, /lists and /archives directories on the old server, and transfer the tar files to the new server.

(as root, on old server)
cd <mailman var directory>
# Probably /var/lib/mailman
tar zcf mmdata.tar.gz data
tar zcf mmlists.tar.gz lists
tar zcf mmarchives.tar.gz archives

Now copy mmdata.tar.gz, mmlists.tar.gz and mmarchives.tar.gz to the new server.

3.2.2.3. Install the list data on the new server

Now install the list data on the new server

(as root, on new server)
cd <mailman var directory>
# Probably /var/lib/mailman
tar zxf mmdata.tar.gz
tar zxf mmlists.tar.gz
tar zxf mmarchives.tar.gz

3.2.2.4. Maybe fix the list archive paths

The mailman lists you copied over to the new server store the absolute path to their archive directories within the config files themselves. If the path to the mailman archive directories on the new server is different than the directory on the old server, you need to fix those paths. If you don't, you won't be able to find the list archives.

This bash script fixes those paths.

#!/bin/bash

newroot="<new mailman var directory>"
mailmanbin="<new mailman bin directory>"

for list in `ls $newroot/lists`; do
    python -i $mailmanbin/withlist -l $list <<EOF
m.private_archive_file_dir
m.private_archive_file_dir='$newroot/archives/private/$list.mbox'
m.public_archive_file_dir
m.public_archive_file_dir='$newroot/archives/public'
m.archive_directory
m.archive_directory='$newroot/archives/private/$list'
m.Save()
EOF
done

3.2.2.5. Maybe fix the list URLs

The mailman lists you copied over to the new server store the URL to their admin pages, etc. within the config files. If the URL to the Mailman CGIs is different on the new server than on the old server, you need to fix this. If you don't, none of your lists will show up on the "listinfo" page, and you won't be able to configure any of the lists via the "admin" pages or moderate messages on the "admindb" pages.

This bash script changes the URLs in the config files to "mm_cfg.DEFAULT_URL_PATTERN % mm_cfg.DEFAULT_URL_HOST" (that's python code, FYI), where DEFAULT_URL_PATTERN and DEFAULT_URL_HOST are defined in mm_cfg.py.

#!/bin/bash

newroot="<new mailman var directory>"
mailmanbin="<new mailman bin directory>"

for list in `ls $newroot/lists`; do
    $mailmanbin/withlist -l -r fix_url $list
done

3.2.2.6. Regenerate the aliases

The mailman binary may have a different path on the new server, so regenerate the aliases.

<new mailman bin directory>/genaliases

Be aware that the aliases file may show up in a different place on the new server than on the old.

3.2.2.7. Maybe set up redirects on your old server

If you want to allow people to get to the new list management pages via the old URLs, set up the following redirects on your old server for each list:

RedirectMatch /mailman/(.*)/mylist(.*) http://newlists.domain/mailman/$1/mylist$2
RedirectMatch /pipermail/mylist(.*) http://newlists.domain/pipermail/mylist$1

3.2.3. Discussion

I got some of this from a mailman-users mailing list posting:

http://mail.python.org/pipermail/mailman-users/1999-August/002009.html

and some from the Mailman FAQ:

http://www.python.org/cgi-bin/faqw-mm.py?req=show&file=faq03.004.htp

3.2.3.1. Caveat: StringIO errors

From the Mailman FAQ:

One caveat: you may start getting notices about a missing StringIO module. If this happens to you, delete the following directories from the mailman directory and redo the above steps:

archives data lists locks logs qfiles spam

3.2.3.2. Caveat: Fedora Core 3

Under RH9, the sitewide admin password was stored in /var/mailman/data/adm.pw, but in FC3, it's stored in /etc/mailman/adm.pw.

Restore your admin password by running /usr/lib/mailman/bin/mmsitepass.

Under RH9, the Mailman aliases file was in /var/mailman/data/aliases, but in FC3, it's stored in /etc/mailman/aliases

Adjust your MTA's alias file settings appropriately.
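With Postfix as the MTA, for example, that means making sure alias_maps in main.cf points at the new location (a sketch; your main.cf may list other maps as well):

alias_maps = hash:/etc/aliases, hash:/etc/mailman/aliases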

3.3. Setting up redundant Mailman servers

3.3.1. Problem

I have two SMTP servers for my domain: a primary and a secondary.

I run Mailman as my mailing list manager on the primary SMTP server. The lists that it maintains are important for my organization — most of them are internal lists. Thus I want the secondary SMTP server to be able to process and distribute incoming list mail properly.

I want the following things to be true:

  • Both servers should have the same list data: membership, and user and list options
  • Both servers should update the list archives when mail comes in.
  • The primary SMTP server should be canonical: all list membership and options changes should be done there, and be propagated to the secondary.
  • List membership or options changes should be propagated from the primary to the secondary as quickly as possible.
  • The secondary SMTP server should be able to receive, process and deliver list mail if the primary is down.
  • The two servers should not interfere with each other in that they should not stomp on each other's state (which means that we need to know what state a Mailman server keeps).

3.3.2. Solution

3.3.2.1. Environment

  • My OS is Fedora Core 3.
  • My MTA is Postfix 2.1.5.
  • I use Mailman 2.1.5.
  • My primary MX is at priority 0.
  • My secondary MX is at priority 10.

Your mileage may vary with these instructions if your setup is different from this.

3.3.2.2. Overview

I'm going to store list data (membership, options, and archives) in /infosys/mailman/<hostname>, which is mounted from my NFS server. I'll mount this directory from both the primary and secondary SMTP server, and thus, they will both see the same list membership data and options, and will be able to append to the same archives.

I'm going to store the alias files and adm.pw (which holds the site admin password) in the same directory, so the secondary SMTP server will have its aliases updated immediately when lists are added or deleted on the primary.

Finally, I'm going to store all the moderation data (messages and sign-ups to be moderated) in /infosys/mailman/<hostname> as well. This is so that if one of the hosts goes down, I still have access to pending moderation requests.

Both machines should run the web interface. The primary needs it so that we can manage the lists, of course. The secondary needs it so that we can handle messages that need to be moderated.

All list maintenance, aside from moderation, should be done on the primary. Only the primary will be able to list the existing lists in the listinfo and admin views.

3.3.2.3. Set up the Mailman directories

First, move the list data from the primary to the NFS directory:

(as root, on the primary)
mkdir -p /infosys/mailman/<hostname>
cd /var/lib/mailman

# archives
rsync -av archives /infosys/mailman/<hostname>
rm -rf archives
ln -s /infosys/mailman/<hostname>/archives .

# list data
rsync -av lists /infosys/mailman/<hostname>
rm -rf lists
ln -s /infosys/mailman/<hostname>/lists .

# moderation data
rsync -av data/ /infosys/mailman/<hostname>/data.primary
rm -rf data
ln -s /infosys/mailman/<hostname>/data.primary ./data

# config dir
rsync -av /etc/mailman/ /infosys/mailman/<hostname>/etc
rm -rf /etc/mailman
ln -s /infosys/mailman/<hostname>/etc /etc/mailman

Now set up the secondary:

(as root, on the secondary)
cd /var/lib/mailman

# archives
rm -rf archives
ln -s /infosys/mailman/<hostname>/archives .

# list data
rm -rf lists
ln -s /infosys/mailman/<hostname>/lists .

# moderation data
rsync -av data/ /infosys/mailman/<hostname>/data.secondary
rm -rf data
ln -s /infosys/mailman/<hostname>/data.secondary ./data

# config dir
rm -rf /etc/mailman
ln -s /infosys/mailman/<hostname>/etc /etc/mailman
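Once both machines are set up, a quick sanity check is worth running. The helper below is not part of the recipe — it's a hypothetical sketch (the function name is mine) that succeeds only if a path is a symlink whose target lives under the shared NFS root:

```shell
# Hypothetical helper, not from the recipe above: succeed only if $2 is
# a symlink whose target lives under the shared root $1.
is_shared_link() {
    target=$(readlink "$2") || return 1    # fails if $2 isn't a symlink
    case "$target" in
        "$1"/*) return 0 ;;
        *)      return 1 ;;
    esac
}

# On either server, something like:
#   is_shared_link /infosys/mailman/myhost /var/lib/mailman/lists && echo ok
```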

3.3.2.4. mm_cfg.py

/infosys/mailman/<hostname>/etc/mm_cfg.py should be a symlink to /usr/lib/mailman/Mailman/mm_cfg.py, so that each server gets its own version of mm_cfg.py.

Add these lines to mm_cfg.py on the primary, replacing the existing definitions for DEFAULT_URL_HOST and DEFAULT_EMAIL_HOST, and replacing "primary.yourdomain" and "yourdomain" appropriately:

DEFAULT_URL_HOST   = 'primary.yourdomain'
DEFAULT_EMAIL_HOST = 'yourdomain'

DELIVERY_MODULE = 'SMTPDirect'
SMTPHOST = 'localhost'
SMTPPORT = 25
MTA = 'Postfix'

Add these lines to mm_cfg.py on the secondary, replacing the existing definitions for DEFAULT_URL_HOST and DEFAULT_EMAIL_HOST, and replacing "secondary.yourdomain" and "yourdomain" appropriately:

DEFAULT_URL_HOST   = 'secondary.yourdomain'
DEFAULT_EMAIL_HOST = 'yourdomain'

DELIVERY_MODULE = 'SMTPDirect'
SMTPHOST = 'localhost'
SMTPPORT = 25
MTA = 'Postfix'

3.3.2.5. Mailman cron jobs

Mailman mails out monthly emails to people, reminding them that they are subscribed to our mailing lists. It also sends out mails once per day to disabled members, reminding them that they are disabled.

Since the primary already does these things, the secondary doesn't need to. Edit /etc/cron.d/mailman and comment out the lines referring to /usr/lib/mailman/cron/disabled and /usr/lib/mailman/cron/mailpasswds.
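If you'd rather script that edit than do it by hand, something like this works (a sketch; the helper name is mine, and the paths assume the stock FC3 mailman package):

```shell
# Hypothetical helper: comment out the reminder cron jobs in the given
# crontab so that only the primary sends reminders.
disable_reminder_jobs() {
    sed -i -e '/cron\/disabled/s/^/#/' \
           -e '/cron\/mailpasswds/s/^/#/' "$1"
}

# On the secondary, as root:
#   disable_reminder_jobs /etc/cron.d/mailman
```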

3.3.2.6. Postfix

Ensure that the alias_maps line in /etc/postfix/main.cf for both your servers contains "hash:/etc/mailman/aliases".
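For example, the relevant main.cf line might read (a sketch; the exact list of maps depends on your site):

```
alias_maps = hash:/etc/aliases, hash:/etc/mailman/aliases
```

After changing it, rebuild the hash database with "postalias /etc/mailman/aliases" and reload Postfix.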

3.3.3. Discussion

I've tested this setup, and it seems to work.

My only fear comes in regards to the archives — if you have a high traffic list, and somehow have both servers accepting many mails for it and appending simultaneously to the archives, can the archives become corrupted?

In my case, since my primary MX is at priority 0 and the secondary is at 10, the secondary should only be accepting mail if the primary is down, or if the primary is so overloaded it can't accept incoming mail. So I'm not too worried, especially with my typical traffic levels.

3.3.3.1. Anomalies

  • On the backup server, all the links at the top of the admin page for all your lists will be wrong: they will point to the primary server. To get to the appropriate places on the backup server, manually enter the full URL.

3.3.3.2. First time setup

If you're setting up your lists for the first time, do this on the primary:

(on primary, as root)
cd /usr/lib/mailman/bin
./mmsitepass
./newlist mailman

3.3.3.3. State

Mailman state that should not be squashed by another mailman instance is:

  • queued up messages waiting to go out: /var/spool/mailman
  • log files: /var/log/mailman
  • locks: /var/lock/mailman
  • moderation items: /var/lib/mailman/data

3.3.4. See Also

[Mailman-Users] Multiple Servers for a List?
http://www.mail-archive.com/mailman-users@python.org/msg13293.html

3.4. Saving a copy of outgoing mail to a "Sent Mail" folder in IMP

3.4.1. Problem

People like to be able to go back and look at the mail they've sent out. By default, IMP doesn't save outgoing mail, but it can be configured to do so.

3.4.2. Solution

  1. Login, and click Options.
  2. Click Personal Information.
  3. Click Edit your identities.
  4. Select your identity in the "Your identities" dropdown list. If you haven't set one up specifically for you, then select "Default Identity".
  5. Check the "Save sent mail" check box.
  6. In the "Sent mail folder:" dropdown list, either select an existing folder, or select "Create new sent mail folder".

From now on, all outgoing mail sent within IMP will be saved to the folder you chose in step 6.

3.5. Upgrading the Horde PostgreSQL database from Horde 2 to Horde 3

3.5.1. Problem

You have the Horde database (used for storing preferences) on one PostgreSQL server, but want to migrate it to another. I'm assuming you have two servers here: one with Horde 2 and a populated PostgreSQL server, and another with Horde 3 and an unpopulated server.

3.5.2. Solution

First, dump the Horde 2 database from the original PostgreSQL server.

sudo su - postgres
export PGDATA=/var/lib/pgsql/data
# For Solaris:
# export PGDATA=/var/pgsql/data
cd /tmp
pg_dump -d -h localhost horde > horde.sql

3.5.2.1. IMP

Extract from horde.sql all the "INSERT INTO horde_prefs" statements into a file called horde_prefs.sql.

We need to adjust the values in the rows of the horde_prefs table. The dump adds extra whitespace to the first three columns that we don't want.

sed "s/'\([a-z0-9_@.]\+\) \+',/'\1',/g" horde_prefs.sql > horde_prefs_fixed.sql
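To see what the sed expression does, here's a made-up sample row run through it (the e-mail address and values are invented for illustration):

```shell
# The padded columns lose their trailing spaces; unpadded fields are
# left alone.
echo "INSERT INTO horde_prefs VALUES ('chris@hss.caltech.edu   ','horde   ','theme','value');" \
  | sed "s/'\([a-z0-9_@.]\+\) \+',/'\1',/g"
# → INSERT INTO horde_prefs VALUES ('chris@hss.caltech.edu','horde','theme','value');
```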

Start up postgresql on the new server, if it's not currently running:

sudo service postgresql start

Create the stub Horde 3 databases:

(as the postgres user)
export PGDATA=/var/lib/pgsql/data
cd /var/www/html/horde/scripts/sql
psql -d template1 -f create.pgsql.sql
psql -qc "ALTER USER horde WITH PASSWORD '<password>';" template1 postgres
psql -d horde -f horde_users.sql
psql -d horde -f horde_prefs.sql
psql -d horde -f horde_datatree.sql

Read in the horde_prefs rows from the old server:

(as the postgres user)
export PGDATA=/var/lib/pgsql/data
psql -d horde -f horde_prefs_fixed.sql

Make sure the servers.php entry in IMP for your IMAP server looks like this:

$servers['imap'] = array(
    'name' => '<domain>',
    'server' => 'localhost',
    'hordeauth' => false,
    'protocol' => 'imap/notls',
    'port' => 413,
    'folders' => '',
    'namespace' => 'INBOX.',
    'maildomain' => '<domain>',
    'smtphost' => 'localhost',
    'smtpport' => 25,
    'realm' => '<domain>',
    'preferred' => true,
    'dotfiles' => false,
    'hierarchies' => array()
);

The "realm" setting is vital: without it, the Horde prefs engine will look for prefs settings with prefs_uid set to <username> instead of <username>@<domain>, which was the Horde 2 default. This applies to Turba address book entries, too.

3.5.2.2. Turba

Extract from horde.sql all the statements relating to turba_objects ("INSERT INTO turba_objects", "CREATE TABLE turba_objects", etc.) into a file called turba_objects.sql.

Read in the turba_objects rows from the old server:

(as the postgres user)
export PGDATA=/var/lib/pgsql/data
psql -d horde -f turba_objects.sql

Now add the extra columns that Turba 2.0 needs:

(as the postgres user)
export PGDATA=/var/lib/pgsql/data
cd /var/www/html/horde/turba/scripts/upgrades
psql -d horde -f 1.2_to_2.0.mysql.sql

Don't fear the "mysql" in the file name — there are only straight SQL statements in that file.

3.5.3. Discussion

3.5.3.1. Update: Apr 1, 2005

Recent versions of Horde include a postgresql migration script.

Chapter 4. Apache

4.1. Using PAM to authenticate users in Apache 2.x under Linux

4.1.1. Problem

My people have a lot of passwords to remember: one for their email, one for their NT account, and then those for any other accounts they have at non-Caltech sites (Yahoo, Ebay, Amazon, etc.). Minimizing the number of passwords our people have to know helps both them, and us.

I also want to minimize the number of different accounts people have here simply so that I have fewer different accounts to set up for each new user.

Thus, when possible, if we need to set up a password protected web page, it would be nice to be able to authenticate via PAM instead of via an htpasswd or htdigest file.

This recipe is specifically for Apache 2.x for our Fedora and RH9 systems. For our Solaris systems, see Using PAM to authenticate users in Apache 1.3.x under Solaris.

4.1.2. Solution

We use a combination of mod_auth_external, pwauth, and groupcheck (an HSS local package) to authenticate our users via PAM.

I've chosen to distribute a master group file (think /etc/group, but for AuthGroupFile style groups) which contains all our AuthGroupFile style groups. I've done this so that we can use the same Apache directives in all the cases in which we want to restrict by group. First add these directives to httpd.conf, either in the global context or in a <VirtualHost> block — they cannot go in an .htaccess file:

AddExternalAuth pwauth /usr/bin/pwauth
SetExternalAuthMethod pwauth pipe

AddExternalGroup groupcheck "/usr/bin/groupcheck -e -f /etc/httpd/conf/htgroups.basic"
SetExternalGroupMethod groupcheck pipe

Now add this block either to an .htaccess file or to a <Location>, <Files>, or <Directory> block. If you're using the .htaccess method, ensure that you have "AllowOverride AuthConfig" in httpd.conf for the directory into which you want to put the .htaccess file.

AuthType Basic
AuthName <authname>
AuthExternal pwauth
GroupExternal groupcheck
Require group <groupname>

4.1.3. Discussion

Here are some generic instructions:

4.1.3.1. Configure mod_auth_external

The mod_auth_external RPM we use drops a file into /etc/httpd/conf.d with an appropriate LoadModule directive, so we don't have to add that.

4.1.3.2. Virtual hosts

If you want to use mod_auth_external in a virtual host, you must include the AddExternalAuth, SetExternalAuthMethod, AddExternalGroup and SetExternalGroupMethod inside the <VirtualHost> block. The <VirtualHost> does not inherit them from the global context.

4.1.3.3. Restricting access by valid-user

Add this stanza either in the global context, or in a <VirtualHost> block. If you want to enable this for a virtual host, you must add this inside the <VirtualHost> block.

AddExternalAuth pwauth /usr/bin/pwauth
SetExternalAuthMethod pwauth pipe

Add this stanza either in a <Location>, <Files>, or <Directory> block, or in an .htaccess file. If you're using the .htaccess method, ensure that you have "AllowOverride AuthConfig" in httpd.conf for the directory into which you want to put the .htaccess file.

AuthType Basic
AuthName <authname>
AuthExternal pwauth
Require valid-user

4.1.3.4. Restricting access by group

If you want to also restrict users by UNIX group, add these lines to the same place you added the "AddExternalAuth" line:

AddExternalGroup groupcheck "/usr/bin/groupcheck -e"
SetExternalGroupMethod groupcheck pipe

If you want to restrict users by AuthGroupFile style group (if, for instance, you have to admit some users who do not have UNIX accounts), add these lines, instead:

AddExternalGroup groupcheck "/usr/bin/groupcheck -f /path/to/group.file"
SetExternalGroupMethod groupcheck pipe

If you want to restrict users by UNIX group or AuthGroupFile style group (if, for instance, you want to admit some users who do not have UNIX accounts, plus some users who are in a standard UNIX group), add these lines, instead:

AddExternalGroup groupcheck "/usr/bin/groupcheck -e -f /path/to/group.file"
SetExternalGroupMethod groupcheck pipe
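For reference, an AuthGroupFile style group file uses one group per line, with the group name followed by a colon and a space-separated member list (assuming groupcheck follows the usual AuthGroupFile layout; the group and user names below are made up):

```
editors: alice bob carol
webadmins: dave
```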

Then change your authorization block like this:

AuthType Basic
AuthName <authname>
AuthExternal pwauth
GroupExternal groupcheck
Require group <groupname>

4.1.3.5. Caveat: People without UNIX accounts

This isn't going to work for you out of the box if you need to give access to a site to people both with and without UNIX accounts.

4.1.3.6. Caveat: AuthGroupFile

When using AuthExternal, you can't restrict access to a group of people with AuthGroupFile — Apache will refuse to authenticate anyone if you specify an AuthGroupFile line when using AuthExternal. Use GroupExternal instead, as in the examples above.

4.1.4. See Also

4.1.4.1. mod_auth_pam

There is also mod_auth_pam, made (apparently) by the PAM maintainers, and you would think that this is what we would want to use. We don't use it because we use shadow passwords here, and mod_auth_pam can't read /etc/shadow.

/etc/shadow is owned by root, and is mode 0600. PAM is a library, and PAM enabled programs execute the PAM code as the EUID of the program. Since httpd runs as the apache user, it can't read /etc/shadow, and thus can't get the password hash for the user.
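You can see the problem directly with the shell's -r test, which checks readability for the current user — the same check PAM's code effectively runs into inside httpd. A tiny illustration (the helper is hypothetical; any root-owned mode 0600 file behaves the same way toward a non-root user):

```shell
# Report whether the current user can read the given file.
can_read() {
    [ -r "$1" ] && echo "readable" || echo "not readable"
}

# As the apache user:
#   can_read /etc/shadow
```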

If we ever go to LDAP here, mod_auth_pam will become viable again.

4.2. Using PAM to authenticate users in Apache 1.3.x under Solaris

4.2.1. Problem

Our users have a lot of passwords to remember: one for their email, one for their NT account, one for their ITS account, and then those for any other accounts they have at non-Caltech sites (Yahoo, Ebay, Amazon, etc.). Minimizing the number of passwords our people have to know helps both them, and us.

I also want to minimize the number of different accounts people have here simply so that I have fewer different accounts to set up for each new user.

This recipe is specifically for Apache 1.3.x for our Solaris systems. For our Linux systems, see Using PAM to authenticate users in Apache 2.x under Linux.

4.2.2. Solution

We use a combination of mod_auth_external, pwauth, and groupcheck (an HSS local package) to authenticate our users via PAM.

I've chosen to distribute a master group file (think /etc/group, but for AuthGroupFile style groups) which contains all our AuthGroupFile style groups. I've done this so that we can use the same Apache directives in all the cases in which we want to restrict by group. First add these directives to httpd.conf, either in the global context or in a <VirtualHost> block — they cannot go in an .htaccess file:

LoadModule external_auth_module /software/libexec/apache/mod_auth_external.so
AddModule mod_auth_external.c

AddExternalAuth pwauth /software/bin/pwauth
SetExternalAuthMethod pwauth pipe

AddExternalGroup groupcheck "/software/bin/groupcheck -e -f /etc/apache/htgroups.basic"
SetExternalGroupMethod groupcheck pipe

Now add this block either to an .htaccess file or to a <Location>, <Files>, or <Directory> block. If you're using the .htaccess method, ensure that you have "AllowOverride AuthConfig" in httpd.conf for the directory into which you want to put the .htaccess file.

AuthType Basic
AuthName <authname>
AuthExternal pwauth
GroupExternal groupcheck
Require group <groupname>

4.2.3. Discussion

Here are some generic instructions:

4.2.3.1. Restricting access by valid-user

Add this stanza either in the global context, or in a <VirtualHost> block. If you want to enable this for a virtual host, you must add this inside the <VirtualHost> block.

AddExternalAuth pwauth /software/bin/pwauth
SetExternalAuthMethod pwauth pipe

Add this stanza either in a <Location>, <Files>, or <Directory> block, or in an .htaccess file. If you're using the .htaccess method, ensure that you have "AllowOverride AuthConfig" in httpd.conf for the directory into which you want to put the .htaccess file.

AuthType Basic
AuthName <authname>
AuthExternal pwauth
Require valid-user

4.2.3.2. Restricting access by group

If you want to also restrict users by UNIX group, add these lines to the same place you added the "AddExternalAuth" line:

AddExternalGroup groupcheck "/software/bin/groupcheck -e"
SetExternalGroupMethod groupcheck pipe

If you want to restrict users by AuthGroupFile style group (if, for instance, you have to admit some users who do not have UNIX accounts), add these lines, instead:

AddExternalGroup groupcheck "/software/bin/groupcheck -f /path/to/group.file"
SetExternalGroupMethod groupcheck pipe

If you want to restrict users by UNIX group or AuthGroupFile style group (if, for instance, you want to admit some users who do not have UNIX accounts, plus some users who are in a standard UNIX group), add these lines, instead:

AddExternalGroup groupcheck "/software/bin/groupcheck -e -f /path/to/group.file"
SetExternalGroupMethod groupcheck pipe

Then change your authorization block like this:

AuthType Basic
AuthName <authname>
AuthExternal pwauth
GroupExternal groupcheck
Require group <groupname>

4.2.3.3. Caveat: People without UNIX accounts

This isn't going to work for you out of the box if you need to give access to a site to people both with and without UNIX accounts.

4.2.3.4. Caveat: AuthGroupFile

When using AuthExternal, you can't restrict access to a group of people with AuthGroupFile — Apache will refuse to authenticate anyone if you specify an AuthGroupFile line when using AuthExternal. Use GroupExternal instead, as in the examples above.

4.2.4. See Also

4.2.4.1. mod_auth_pam

There is also mod_auth_pam, made (apparently) by the PAM maintainers, and you would think that this is what we would want to use. We don't use it because we use shadow passwords here, and mod_auth_pam can't read /etc/shadow.

/etc/shadow is owned by root, and is mode 0600. PAM is a library, and PAM enabled programs execute the PAM code as the EUID of the program. Since httpd runs as the apache user, it can't read /etc/shadow, and thus can't get the password hash for the user.

If we ever go to LDAP here, mod_auth_pam will become viable again.

4.3. Getting pwauth to work with pam_pwdfile in the context of mod_auth_external

4.3.1. Solution

mod_auth_external is a module for Apache that allows you to call an external program to do the actual authentication. Importantly for us, that allows us to call a program that is SUID root, and thus will have rights to read /etc/shadow.

The external program we use is pwauth, which is PAM aware. This means we can use several different authentication methods for several different sets of users when authenticating people to our websites.

There are two important groups: those with UNIX shell accounts, and those without. Another way of looking at this is that the former group represents people who have full access to all our services — e-mail, UNIX shell, Samba and web services authentication — while the latter represents people who only have access to web services.

pam_pwdfile is a PAM module which can allow PAM to authenticate against an htpasswd style password file. Using pam_pwdfile allows us to separate users into "real" users (those in /etc/passwd) and limited access users (those in the pam_pwdfile password file).

But pwauth doesn't work out of the box with pam_pwdfile.

4.3.2. Discussion

4.3.2.1. Patch pam_pwdfile to provide pam_sm_acct_mgmt()

First, you need to patch pam_pwdfile to supply a pam_sm_acct_mgmt() callback function — pwauth expects this, and will fail the authentication if pam_pwdfile doesn't provide it. Here's the patch:

diff -Naur pam_pwdfile-0.99/pam_pwdfile.c pam_pwdfile-0.99.new/pam_pwdfile.c
--- pam_pwdfile-0.99/pam_pwdfile.c      2003-12-20 11:21:19.000000000 -0800
+++ pam_pwdfile-0.99.new/pam_pwdfile.c  2005-06-23 17:27:55.000000000 -0700
@@ -58,6 +58,7 @@
 #include <security/pam_appl.h>

 #define PAM_SM_AUTH
+#define PAM_SM_ACCOUNT
 #include <security/pam_modules.h>

 #include "md5.h"
@@ -414,6 +411,13 @@
     return PAM_SUCCESS;
 }

+/* another expected hook */
+PAM_EXTERN int pam_sm_acct_mgmt(pam_handle_t *pamh, int flags, int argc, const char **argv)
+{
+    return PAM_SUCCESS;
+}
+
+
 #ifdef PAM_STATIC
 struct pam_module _pam_listfile_modstruct = {
     "pam_pwdfile",

4.3.2.2. Ensure that pwauth is not compiled with UNIX_LASTLOG=1

Ensure that, when building pwauth, you do not have UNIX_LASTLOG enabled in config.h. If it is enabled, pwauth tries to do some sanity checks on the UID of the user authenticating. Since users in pam_pwdfile's password file don't have UIDs, pwauth always rejects the login.
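A quick way to catch this before building (a hypothetical helper; the function name is mine):

```shell
# Succeed if config.h enables UNIX_LASTLOG — i.e. the build would
# produce a pwauth that rejects pam_pwdfile users.
lastlog_enabled() {
    grep -q '^[[:space:]]*#define[[:space:]]\{1,\}UNIX_LASTLOG' "$1"
}

# Before building pwauth:
#   lastlog_enabled config.h && echo "disable UNIX_LASTLOG first"
```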

4.3.2.3. Set up /etc/pam.d/pwauth properly

Make /etc/pam.d/pwauth look like this:

auth       sufficient   pam_pwdfile.so pwdfile $passwordfile
auth       sufficient   pam_stack.so service=system-auth
account    sufficient   pam_pwdfile.so pwdfile $passwordfile
account    sufficient   pam_stack.so service=system-auth

where $passwordfile is the full path to your pam_pwdfile password file.
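You can exercise the whole PAM stack from the command line before involving Apache: pwauth reads the username and the password on successive lines of stdin and reports the result via its exit status (0 means authenticated). The wrapper below is a hypothetical convenience, written so it works with any pwauth-compatible checker:

```shell
# Feed a username/password pair to a pwauth-style checker on stdin and
# report the result based on its exit status.
check_pwauth() {
    # $1 = checker binary, $2 = username, $3 = password
    printf '%s\n%s\n' "$2" "$3" | "$1" && echo "auth ok" || echo "auth failed"
}

# For example:
#   check_pwauth /usr/bin/pwauth someuser somepassword
```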

4.3.3. Discussion

The RPMS we use in HSS for pam_pwdfile and pwauth have the appropriate patches and configurations.

4.4. Setting up an SSL VirtualHost on a NameVirtualHost IP address

4.4.1. Problem

Sometimes I just want to SSL encrypt a page or two (maybe because people have to enter passwords there) and I don't want to burn a whole static IP for it.

If you're using name-based virtual hosting on that site, then setting this up can be tricky.

4.4.2. Solution

The key to this is to make sure that all sites only listen on the ports they need to listen on.

Here are the appropriate Apache directives:

Listen 1.2.3.4:80
Listen 1.2.3.4:443

ServerName realname.domain

# The SSL virtual host
<VirtualHost 1.2.3.4:443>
    ServerName realname.domain
    ....
</VirtualHost>

NameVirtualHost 1.2.3.4:80

# Non-SSL name-based virtual host #1
<VirtualHost 1.2.3.4:80>
    ServerName othername1.domain
    ...
</VirtualHost>

# Non-SSL name-based virtual host #2
<VirtualHost 1.2.3.4:80>
    ServerName othername2.domain
    ...
</VirtualHost>

# Further non-SSL name-based virtual hosts
# below here.
...

4.4.3. Discussion

A side effect of setting up SSL on realname.domain is that all name-based virtual hosts will be accessible via HTTPS. But there's a catch.

The Apache documentation on SSL with name-based virtual hosts (see "Name-based vs. IP-based Virtual Hosts" in the Apache 2.x docs) says:

Name-based virtual hosting cannot be used with SSL secure servers
because of the nature of the SSL protocol.

This is almost true.

You can set up SSL for the main (non-virtual host) site name ("realname.domain" in the example above), and it will work fine. Plus, SSL will work on the name-based virtual hosts in that your connection to them will be SSL encrypted.

The problem is that every name-based virtual host will use the same SSL certificate. Apache figures out which SSL certificate to present to the client based on the IP address, and since all the name-based virtual hosts have the same IP address, Apache will present the same certificate (the one named in the SSL virtual host block) for all sites.

So if you go to https://othername1.domain, your browser will present a warning like:

You have attempted to establish a connection with "othername1.domain".
However, the security certificate presented belongs to
"realname.domain". It is possible, though unlikely, that someone may be
trying to intercept your communication with this web site.

If you can't live with that, you'll need to do real IP-based virtual hosting for "othername1.domain", and obtain a separate SSL certificate and virtual host for it.

4.5. Redirecting from HTTP to HTTPS

4.5.1. Problem

You have a site or page that you want to be accessible only via https://, but to avoid confusion you want people who go to the http:// site to be directed automatically to the https:// site.

4.5.2. Solution

4.5.2.1. Option 1: mod_rewrite

One solution is to use mod_rewrite. Put these lines either in the global context, outside all <VirtualHost> blocks (for redirecting from the main site to the main site's SSL version), or inside the appropriate non-SSL <VirtualHost> block.

This only seems to work for directories.

RewriteEngine on
RewriteRule /path/to/directory/(.*) https://site.domain/path/to/directory/$1
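If you want to redirect an entire site rather than a single directory, a commonly used variant is the following (a sketch of my own, not from the recipe above; "site.domain" is a placeholder):

```
RewriteEngine on
RewriteCond %{HTTPS} !=on
RewriteRule ^/(.*)$ https://site.domain/$1 [R=301,L]
```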

4.5.2.2. Option 2: Redirect

Here's an example of using Redirect. You must put the Redirect line in a <VirtualHost> block and not in the global context — especially if you're redirecting /. You'll get a redirection loop if you put the Redirect outside a <VirtualHost> block because virtual hosts inherit Redirect lines from the global context.

<VirtualHost 1.2.3.4:80>
    ServerName site.domain
    ServerAdmin webmaster@domain
    Redirect /path https://site.domain/path
</VirtualHost>

<VirtualHost 1.2.3.4:443>
    ServerName site.domain
    ServerAdmin webmaster@domain
    ...
</VirtualHost>

4.6. Setting up a Windows XP compatible WebDAV Folder under Apache 1.3.x

4.6.1. Problem

Under Windows XP, you're supposed to be able to go to "My Network Places", click "Add Network Place …", enter an "http://" URL to a WebDAV repository, and have the repository mounted like a network drive on your box. This does work, so long as you don't want to password protect your WebDAV repository.

The agony comes when you do want to password protect it. Whatever XP uses to access "http://" URLs in "Add Network Place …" is broken, broken, broken.

Using either Digest (mod_auth_digest, not mod_digest, which just plain doesn't work anywhere) or Basic authentication, Windows XP seems convinced that it is supposed to be presenting NTLM (LanManager) credentials to the WebDAV server, which are of the form "hostname/username". The WebDAV server is, of course, looking for "username".

You can tell this is happening because your first authentication attempt will fail, and XP will show the password dialog with the Username field filled in as "hostname/username".

4.6.2. Solution

I know two solutions to this problem: use mod_encoding, or use an https:// URL.

4.6.2.1. mod_encoding

Obtain mod_encoding from http://webdav.todo.gr.jp/download/, compile it up, and put it somewhere Apache can see it.

Add the following code to your httpd.conf file:

LoadModule encoding_module    /path/to/mod_encoding.so
LoadModule headers_module     /path/to/mod_headers.so
LoadModule dav_module         /path/to/libdav.so
LoadModule auth_module        /path/to/mod_auth.so

AddModule mod_encoding.c
AddModule mod_headers.c
AddModule mod_dav.c
AddModule mod_auth.c

#
# Broken WebDAV for Windows XP
#
BrowserMatch "^WebDAVFS/1.[012]" redirect-carefully
BrowserMatch "Microsoft Data Access Internet Publishing Provider" redirect-carefully
BrowserMatch "Microsoft-WebDAV-MiniRedir/5.1.2600" redirect-carefully
BrowserMatch "^WebDrive" redirect-carefully
BrowserMatch "^WebDAVFS" redirect-carefully

<IfModule mod_headers.c>
    Header add MS-Author-Via "DAV"
</IfModule>

<IfModule mod_encoding.c>
    EncodingEngine on
    NormalizeUsername on
</IfModule>

Next, add your DAV directives to httpd.conf. If "directory" is the directory you wish to DAV-ify (just an example — modify to suit your needs):

DAVLockDB /path/to/lockfile/DAVLock
DAVMinTimeout 600

<Directory "directory">
    DAV On
    AuthType Basic
    AuthName test
    AuthUserFile  /path/to/passwd/file
    AuthGroupFile /dev/null
    Require user test
</Directory>

Ensure that the directory into which you're writing your lock file (/path/to/lockfile) is writable by the user which runs the httpd process!

Now populate the passwd file:

htpasswd -c /path/to/passwd/file test

Ensure that the passwd file is readable by the user that runs httpd!

Restart httpd, and you should have a working setup.

4.6.2.2. Use an https:// URL

See Setting up a Windows XP compatible WebDAV Folder under Apache 2.x and just follow the instructions; they should be directly translatable to Apache 1.3.x.

4.7. Setting up a Windows XP compatible WebDAV Folder under Apache 2.x

4.7.1. Problem

Under Windows XP, you're supposed to be able to go to "My Network Places", click "Add Network Place …", enter an "http://" URL to a WebDAV repository, and have the repository mounted like a network drive on your box. This does work, so long as you don't want to password protect your WebDAV repository.

The agony comes when you do want to password protect it. Whatever XP uses to access "http://" URLs in "Add Network Place …" is broken, broken, broken.

What happens is that instead of presenting you with the Basic Auth login window (which displays the value set by the AuthName directive), you get an authentication window with a key icon on it — the NetBIOS authentication window, maybe. You cannot authenticate to the Apache server from that window.

4.7.2. Solution

No need for mod_auth_msfix, or another special Apache module which munges the authentication data from the XP machine.

No need to append the port number to the hostname in the URL, or to hack the registry, or to add "header add MS-Author-Via "DAV"" to your httpd.conf, or extra BrowserMatch directives.

Just put your "DAV On" and your Basic auth directives in your SSL virtual host, and then direct your XP users to access your WebDAV repository only via https://. That's it.

Here is how to set up a basic WebDAV directory which will be accessible to your XP machines. These instructions are for a Fedora Core 3 machine, and have the following assumptions:

  • The user that runs httpd is called apache
  • Your DocumentRoot is /var/www/html
  • httpd.conf lives in /etc/httpd/conf
  • /etc/httpd/conf.d/ssl.conf has an SSL virtual host section for the main IP address served by the web server.

Make the repository directory:

(as root)
mkdir /var/www/html/davtest
chown apache /var/www/html/davtest

If your repository directory is not owned by apache, the web server will not be able to write to it, and thus your XP client will not be able to write to it.

Make sure you have this line in /etc/httpd/conf/httpd.conf (If you're using the stock httpd.conf from Fedora, you do):

DAVLockDB /var/lib/dav/lockdb

Make a password file:

(as root)
htpasswd -c /etc/httpd/conf/davtest.passwd test

Then, add this stanza inside the <VirtualHost _default_:443></VirtualHost> block in /etc/httpd/conf.d/ssl.conf:

<Directory "/var/www/html/davtest">
    DAV On
    AuthType Basic
    AuthName "WebDAV Test"
    AuthUserFile  /etc/httpd/conf/davtest.passwd
    AuthGroupFile /dev/null
    require valid-user
</Directory>

And restart your web server:

(as root)
service httpd restart

Now point your XP host at it, and you should be able to authenticate and use the WebDAV repository.

4.7.2.1. Self signed SSL certificates

If you're using a self-signed certificate, and you haven't installed your Certifying Authority's root key in the XP client's certificate store, you'll be presented with a message about it. You won't be able to proceed until you install the certificate into your certificate store.

4.7.3. Discussion

4.7.3.1. Other authentication schemes

I describe using Basic authentication here, but I would bet that Digest, et al. would work as well. The real thing we're doing here is convincing XP that no, this is not a CIFS share.

4.7.3.2. XP version and patch levels

First, I'm talking about XP Professional, not XP Home. I don't have any XP Home machines.

Second, I'm talking about XP machines with Service Pack 2 installed. I don't have any SP1 or earlier XP boxes lying around, so your mileage may vary if you're using one of those as your DAV client.

4.7.3.3. Why does this work?

Why does connecting via https://host.domain/davdir work when connecting via http://host.domain/davdir fails?

XP uses a different DAV client library for the https:// URLs than for the http:// URLs. The one used for http:// URLs is broken, while the one used for https:// URLs is not.

Below are some Apache logs from my DAV server machine which show this. Accesses are from the same XP client. Note that for the http:// accesses, the client is "Microsoft-WebDAV-MiniRedir/5.1.2600", while for the https:// accesses, the client is "Microsoft Data Access Internet Publishing Provider DAV".

4.7.3.4. http://

1.2.3.4 - - [13/Apr/2005:16:40:21 -0700] "OPTIONS / HTTP/1.1" 200 - "-" "Microsoft-WebDAV-MiniRedir/5.1.2600"
1.2.3.4 - - [13/Apr/2005:16:40:21 -0700] "PROPFIND /davtest HTTP/1.1" 500 624 "-" "Microsoft-WebDAV-MiniRedir/5.1.2600"
1.2.3.4 - - [13/Apr/2005:16:40:21 -0700] "PROPFIND /davtest HTTP/1.1" 500 624 "-" "Microsoft Data Access Internet Publishing Provider DAV"
1.2.3.4 - - [13/Apr/2005:16:41:19 -0700] "PROPFIND /davtest HTTP/1.1" 401 491 "-" "Microsoft-WebDAV-MiniRedir/5.1.2600"
1.2.3.4 - - [13/Apr/2005:16:41:19 -0700] "PROPFIND /davtest HTTP/1.1" 401 491 "-" "Microsoft-WebDAV-MiniRedir/5.1.2600"
1.2.3.4 - - [13/Apr/2005:16:41:19 -0700] "PROPFIND /davtest HTTP/1.1" 401 491 "-" "Microsoft-WebDAV-MiniRedir/5.1.2600"

4.7.3.5. https://

1.2.3.4 - - [13/Apr/2005:17:03:48 -0700] "OPTIONS / HTTP/1.1" 200 - "-" "Microsoft Data Access Internet Publishing Provider Protocol Discovery"
1.2.3.4 - test [13/Apr/2005:17:03:48 -0700] "OPTIONS /davtest HTTP/1.1" 200 - "-" "Microsoft Data Access Internet Publishing Provider Protocol Discovery"
1.2.3.4 - test [13/Apr/2005:17:03:48 -0700] "PROPFIND /davtest HTTP/1.1" 207 906 "-" "Microsoft Data Access Internet Publishing Provider DAV"
1.2.3.4 - test [13/Apr/2005:17:03:48 -0700] "PROPFIND /davtest HTTP/1.1" 207 964 "-" "Microsoft Data Access Internet Publishing Provider DAV"

4.7.4. See Also

mod_auth_msfix
http://www.luluware.com/index.php?option=com_content&task=view&id=13&Itemid=38
FileStorage entry on FastMail Wiki
http://wiki.fastmail.fm/index.php/FileStorage
Microsoft Web Folder Client (MSDAIPP.DLL) Versions and Issues List
http://greenbytes.de/tech/webdav/webfolder-client-list.html
Microsoft WebDAV Mini-Redirector (MRXDAV.SYS) Versions and Issues List
http://greenbytes.de/tech/webdav/webdav-redirector-list.html

Microsoft Knowledge Base articles:

#315621: Cannot Add FQDN Web Folders that Require Basic Authentication to "My Network Places"
http://support.microsoft.com/?kbid=315621
#831805: Cannot verify WebDAV support for some Web sites from a Windows XP-based computer
http://support.microsoft.com/?kbid=831805
#841215: You cannot connect to a document library in Windows SharePoint Services by using Windows shell commands or by using Explorer View
http://support.microsoft.com/default.aspx?scid=kb;en-us;841215

4.8. Enabling CGI scripts for people

4.8.1. Problem

Someone has come to you and asked to run a CGI script. How do we handle this? CGI scripts are potential security hazards and resource sinks: if poorly written, they can consume a lot of memory and disk throughput.

Home directories do not have ExecCGI on by default.

4.8.2. Solution

First, check out the hss-cf-repository package from our Subversion repository (read Maintaining configuration files fully, and follow the directions there for checking out the repository and adding directories and files to it).

Now decide where you're going to host this script.

4.8.2.1. On corinth

Set up a directory for them.

mkdir /infosys/www/corinth.hss.caltech.edu/docroot/$username
chown $username /infosys/www/corinth.hss.caltech.edu/docroot/$username

Add a CGI enabling stanza to apache/httpd.conf/corinth.hss.caltech.edu:

<Directory "/infosys/www/corinth.hss.caltech.edu/docroot/$username/">
    AllowOverride None
    Options +ExecCGI
</Directory>

And publish the changes.

4.8.2.2. In a home directory

Make a directory for them for the CGI stuff. Probably best not to make a generic cgi-bin directory for them, as they'll feel that they can start dumping things there. Although if they're smart (and this is Caltech, after all, so they're smart) they'll probably figure out that the directory you made for them is equivalent to a cgi-bin directory. So what choo gonna do.

(as them)
mkdir /home/$username/public_html/$directory

Add a CGI enabling stanza to apache/vhost.d/memphis.hss.caltech.edu/www.hss.caltech.edu/www.hss.caltech.edu:

<Directory "/home/$username/public_html/$directory/">
    AllowOverride None
    Options +ExecCGI
</Directory>

And publish the changes.

4.8.2.3. Afterwards

Tell the person:

  • Put their files in the directory we created
  • CGI scripts must end in ".cgi"
  • chmod a+x *.cgi
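A trivial test script makes it easy to verify the whole chain (a sketch; hello.cgi is a hypothetical name, and it's written to a scratch directory here, so substitute the directory created above):

```shell
# Minimal test CGI: the header line, a blank line, then the body.
# Written to a scratch directory for illustration; use the real CGI dir.
dir=$(mktemp -d)
cat > "$dir/hello.cgi" << 'EOF'
#!/bin/sh
echo "Content-type: text/plain"
echo ""
echo "CGI is working."
EOF
chmod a+x "$dir/hello.cgi"
```

If the hello.cgi URL shows "CGI is working." instead of the script source, the ExecCGI stanza is doing its job.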

4.8.3. Discussion

The real questions regarding setting up someone for a CGI are: should we do it at all, and if so, what URL should it be accessible via.

4.8.3.1. Should we do it at all?

This is just a matter of technical and political judgement.

  • Is this an appropriate thing to do on a divisional server? Legal? Ethical? Consistent with our image?
  • Risk level: is the CGI dangerous (a security risk) or resource intensive?

4.8.3.2. What URL?

Where should we run this CGI?

  • Get Susan involved if this is to have a www.hss.caltech.edu URL that is not in a home directory.
  • Low risk? www.hss.caltech.edu/~$username URLs are ok
  • High resource usage, but low danger: corinth or an appropriate group server.

Chapter 5. Databases

5.1. Setting the MySQL root password

5.1.1. Problem

When we first install a machine, the root user for the MySQL database has no password.

5.1.2. Solution

mysqladmin -u root -p password '<password>'
mysqladmin -u root -h <FQDN> -p password '<password>'

Each time mysqladmin asks you for a password, just hit enter.

5.2. Dumping and restoring a MySQL database

5.2.1. Problem

I configure MySQL on our servers to write its data to the local drive (for speed and good locking). If we're going to rebuild the machine, or if we're going to upgrade MySQL in such a way that the database needs to be dumped and restored, we need to get the unique data out of the database somehow, and then back in.

5.2.2. Solution

5.2.2.1. Backups

You'll need to know the password for the mysql root user for this.

To dump, in general:

mysqldump -u root -p -A --opt > ~/<hostname>.sql

5.2.2.2. Restores

To restore, first set the root password on the database appropriately and then:

mysql -u root -p mysql < <hostname>.sql
mysql -u root -p mysql -e 'flush privileges;'

If you don't do the flush privileges;, all MySQL users other than root will have no privileges — you won't be able to log in as them, or access any databases. Maybe restarting mysql will accomplish the same thing.

5.3. Dumping and restoring a PostgreSQL database

5.3.1. Problem

I configure PostgreSQL on our servers to write its data to the local drive (for speed and good locking). If we're going to rebuild the machine, or if we're going to upgrade PostgreSQL in such a way that the database needs to be dumped and restored, we need to get the unique data out of the database somehow, and then back in.

5.3.2. Solution

5.3.2.1. Backups

su - postgres
pg_dumpall > <hostname>.sql

5.3.2.2. Restores

su - postgres
psql -e template1 < <hostname>.sql

Chapter 6. RPM Building and Maintenance

6.1. Setting up to use a non-root RPM build environment

6.1.1. Problem

The usual build directories are in /usr/src/redhat, which is owned by root. Building RPMs as root is a bad thing, because while building the package we may accidentally strew files all over our live filesystem and really hose up our systems.

Thus, we need a way to build RPMs as a non root user, typically us.

It would also be nice for our build directories to persist through reinstalls of our build machine, as we might have packages in there we're working on and don't want to lose.

6.1.2. Solution

6.1.2.1. Make the build environment

I use /infosys/rpmbuild to hold the user writable build environments for our OSes. In it, there is one build environment for each of the platforms we support.

This directory is on the file server, so we won't lose our data when we rebuild our machines. I used to store all this stuff on my machine's local hard drive, and would have to remember to save and restore my build environment each time I rebuilt my machine.

To make a new build environment for a new OS, do this:

os_version=$(rpm -qf --qf='%{VERSION}' /etc/redhat-release)
os_name=$(if grep -i fedora /etc/redhat-release > /dev/null 2>&1 ; then echo fedora-core; else echo redhat; fi)
topdir="$os_name-$os_version"

cd /infosys/rpmbuild
mkdir $topdir
mkdir $topdir/SPECS
mkdir $topdir/SOURCES
mkdir $topdir/SRPMS
mkdir $topdir/RPMS
mkdir $topdir/BUILD

chgrp -R cf /infosys/rpmbuild/$topdir
chmod -R g+w /infosys/rpmbuild/$topdir
chmod -R g+s /infosys/rpmbuild/$topdir
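The same tree can be built with a single mkdir call using brace expansion (just a compact equivalent of the mkdirs above; a scratch path is used here for illustration, so substitute /infosys/rpmbuild and the real $topdir):

```shell
# Brace expansion generates the five subdirectory paths; mkdir -p
# creates the top directory and all of them in one call.
topdir=$(mktemp -d)/fedora-core-3
mkdir -p "$topdir"/{SPECS,SOURCES,SRPMS,RPMS,BUILD}
```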

6.1.2.2. Setup your ~/.rpmmacros file

To use the build environments, add these lines to your "~/.rpmmacros" file:

%fulldistversion %(rpm -qf --qf='%{VERSION}' /etc/redhat-release)
%distname %(if grep -i fedora /etc/redhat-release > /dev/null 2>&1 ; then echo fedora-core; else echo redhat; fi)
%_topdir    /infosys/rpmbuild/%{distname}-%{fulldistversion}

When working on a package, cd to the build directory appropriate to the platform you're building the package for, and work on your RPMs there.
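The %() fragments are plain shell, so you can sanity check them outside of rpm. Here's the %distname logic run against a sample release file (the release string shown is an assumption, standing in for the real /etc/redhat-release):

```shell
# Simulate the %distname shell fragment against a sample release file.
release_file=$(mktemp)
echo "Fedora Core release 3 (Heidelberg)" > "$release_file"
if grep -i fedora "$release_file" > /dev/null 2>&1; then
    distname=fedora-core
else
    distname=redhat
fi
echo "$distname"
```

On a Fedora box this prints fedora-core, so %_topdir resolves to something like /infosys/rpmbuild/fedora-core-3.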

6.2. Automatically adding the platform name to the RPM release

6.2.1. Problem

When I build RPMs, I want it to satisfy these criteria:

  • When I build the package, I want it named
%{package}-%{version}-0.(tg|hss).%{revision}.(rh73|rh9|fc1).%{arch}.rpm
  • I don't want to have to edit the spec file every time I want to rebuild the package for a new platform.
  • I don't want a lot of code in the spec file itself which figures out the platform and sets the release properly, as it adds more code to the spec file, and if I decide to change the naming scheme, I have to touch all my spec files again.
  • Finally, if someone downloads my SRPM and wants to modify and rebuild it, I don't want them to need to have all my macros. (ATrpms RPMs really annoy me because of this — you have to have his special build environment to build his RPMs. Grrr.)

6.2.2. Solution

We can satisfy all these requirements with appropriate use of rpm macros.

First, add these lines to your "~/.rpmmacros" file:

%distversion %(rpm -qf --qf='%{VERSION}' /etc/redhat-release | sed "s/\\.//")
%distinitials %(if grep -i fedora /etc/redhat-release > /dev/null 2>&1 ; then echo fc; else echo rh; fi)
%disttag %{distinitials}%{distversion}
%tgrelease() %1.tg.%2.%{disttag}
%hssrelease() %1.hss.%2.%{disttag}
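The sed in %distversion deletes only the first dot from the version string, which is how "7.3" collapses into the "73" of "rh73". A quick check of just that fragment:

```shell
# sed "s/\.//" removes the first '.' only: 7.3 -> 73, 9 stays 9.
distversion=$(echo "7.3" | sed "s/\.//")
echo "$distversion"
```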

Add these lines to the spec file of RPMs destined for the Toughguy repository:

%{!?tgrelease: %define tgrelease() %2}
Release: %tgrelease 0 <real release number>

… and these lines to the spec file of RPMs that are HSS specific:

%{!?hssrelease: %define hssrelease() %2}
Release: %hssrelease 0 <real release number>

6.2.3. Discussion

For "%distversion" and "%distinitials", the "%()" construction sets the value of the macro to the stdout of the enclosed shell script. For some reason, for this you don't use the "%define" prefix; the assignment won't work if you do use "%define".

For "%tgrelease" and "%hssrelease" the "()" after the macro name names it as a macro that takes arguments. The "%2" gets replaced with the 2nd argument to the macro upon expansion; similarly, "%1" will be replaced with the 1st argument.

You use such macros like so:

%macroname <arg1> <arg2> ...

You can see an example of this in the "Release: …" line.

This line:

%{!?tgrelease: %define tgrelease() %2}

says: "if the tgrelease macro is not defined at build time, define it as %2", which turns %tgrelease into a macro that simply expands to its second argument, the real release number. That way anyone rebuilding the SRPM without our ~/.rpmmacros still gets a sane Release.
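To make that concrete (assuming a Fedora Core 3 build host, so %disttag is fc3, and a hypothetical real release number of 5), the line

Release: %tgrelease 0 5

expands to "Release: 0.tg.5.fc3" when the ~/.rpmmacros definitions are present, and to just "Release: 5" when only the in-spec fallback definition is in effect.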

6.2.4. See Also

/usr/share/doc/rpm-*/macros
This file was helpful in teaching me how macros work.

6.3. Adding a code patch to an RPM

6.3.1. Problem

From time to time, we find an RPM whose code must be patched, either to fix a security problem, or to add a feature.

6.3.2. Solution

RPM allows code patches via "Patch:" lines in the preamble, and via the "%patch" macro in the "%setup" section.

First, create the patching environment:

rpm -i <package>-<version>-<revision>.src.rpm
cd <RPM build environment>/SPECS
rpmbuild -bp <package>.spec
pushd ../BUILD
mv <package>-<version> <package>-<version>.orig
popd
rpmbuild -bp <package>.spec
pushd ../BUILD/<package>-<version>

Make your changes to files in the BUILD/<package>-<version> directory.

Now make the patch file:

cd ..
diff -Naur <package>-<version>.orig <package>-<version> > ../SOURCES/<descriptive name>.patch
popd

That finishes the actual patch building. Now to add it to the spec file. Edit the spec file and add this line to the preamble:

Patch<n>: <descriptive-name>.patch

and this line to the "%setup" section:

%patch<n> -p1

Where "<n>" is the patch file number: if there are no other patches listed in the preamble, use "<n> = 1", otherwise use a number that doesn't conflict with the existing patches.
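The whole round trip can be sketched in a scratch directory (hypothetical package foo; this just demonstrates why the diff -Naur paths and "%patch<n> -p1" pair up: -p1 strips the leading <package>-<version>/ component from each path in the patch):

```shell
# Build a .orig/modified pair, generate the patch, then apply it with
# patch -p1, the same strip count rpmbuild's %patch<n> -p1 uses.
work=$(mktemp -d)
cd "$work"
mkdir foo-1.0.orig foo-1.0
echo "old line" > foo-1.0.orig/file.c
echo "new line" > foo-1.0/file.c
diff -Naur foo-1.0.orig foo-1.0 > foo-fix.patch || true  # diff exits 1 when files differ
mkdir unpatched
cp foo-1.0.orig/file.c unpatched/
( cd unpatched && patch -p1 < ../foo-fix.patch )
```

After this runs, unpatched/file.c contains "new line".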

6.3.3. Discussion

If you're using the HSS RPM build environment, replace <RPM build environment> with /infosys/rpmbuild/<os token>.

Try to name your patch file something like "<package>-<description>.patch" so we'll know that it's a patch for a package named "<package>" that does something vaguely like "<description>".

6.3.3.1. Partial patches

I had a situation where I had modified some files in a live directory and wanted to port the changes I had made back into the RPM.

I tar'ed up the directory and wanted to use that in the "diff -Naur …" statement, but ran into problems because the live directory only had a subset of all the files in the build directory; my patch ended up with tons of sections which would remove the "missing" files (files in the build directory which were not in the live one) when the patch was applied.

What I ended up doing was using

cd ..
diff -aur <package>-<version>.orig <package>-<version> | egrep -v "^Only in <package>-<version>.orig" > ../SOURCES/<descriptive name>.patch
popd

6.4. Building the JPackage java 1.4.2 nosrc RPM

6.4.1. Problem

JPackage provides loads of Java related RPMs, including one for Java itself. Due to licensing concerns, the RPM for java they provide is actually only an SRPM — without the source. They call it a "nosrc" SRPM.

The instructions for building the actual binary java RPM are sparse, and actually, wrong.

Every time we roll out a new OS, I have to rediscover how to build the RPM.

6.4.2. Solution

6.4.2.1. Download the necessary packages

Download the nosrc SRPM from JPackage: java-1.4.2-sun.

You will also need their jpackage-utils package: jpackage-utils

Finally, get the java distribution from sun. Go to http://java.sun.com/j2se/1.4.2/download.html and get the J2SE SDK distribution. You end up with a file called "j2sdk-1_4_2_04-linux-i586.bin".

6.4.2.2. Prepare to build

Install jpackage-utils onto your Java build machine.

rpm -i jpackage-utils-1.5.38-1jpp.noarch.rpm
# or
yum install jpackage-utils

Install the package to the build environment:

rpm -i java-1.4.2-sun-1.4.2.04-1jpp.nosrc.rpm

Install the java distribution into the SOURCES directory

cp j2sdk-1_4_2_04-linux-i586.bin /infosys/rpmbuild/fedora-core-1/SOURCES
chmod a+x /infosys/rpmbuild/fedora-core-1/SOURCES/j2sdk-1_4_2_04-linux-i586.bin

6.4.2.3. Build the RPM

cd /infosys/rpmbuild/fedora-core-1/SPECS
rpmbuild -ba java-1.4.2-sun.spec | auto-install-rpm

6.4.3. Discussion

Add jpackage-utils to the yum repository, because we'll need it to build other java packages from JPackage.

6.5. Building the JPackage java 1.5.0 nosrc RPM

6.5.1. Problem

JPackage provides loads of Java related RPMs, including one for Java itself. Due to licensing concerns, the RPM for java they provide is actually only an SRPM — without the source. They call it a "nosrc" SRPM.

The instructions for building the actual binary java RPM are sparse, and actually, wrong.

Every time we roll out a new OS, I have to rediscover how to build the RPM.

6.5.2. Solution

6.5.2.1. Download the necessary packages

Download the nosrc SRPM from JPackage: java-1.5.0-sun.

You will also need their jpackage-utils package: jpackage-utils

Finally, get the java distribution from sun. Go to http://java.sun.com/j2se/1.5.0/download.html and get the J2SE 5.0 JDK distribution. You end up with a file called "jdk-1_5_0-linux-i586.bin".

6.5.2.2. Prepare to build

Install jpackage-utils onto your Java build machine.

rpm -i jpackage-utils-1.5.38-1jpp.noarch.rpm
# or
yum install jpackage-utils

Install the package to the build environment:

rpm -i java-1.5.0-sun-1.5.0-2jpp.nosrc.rpm

Install the java distribution into the SOURCES directory

cp jdk-1_5_0-linux-i586.bin /infosys/rpmbuild/fedora-core-1/SOURCES
chmod a+x /infosys/rpmbuild/fedora-core-1/SOURCES/jdk-1_5_0-linux-i586.bin

6.5.2.3. Build the RPM

cd /infosys/rpmbuild/fedora-core-1/SPECS
rpmbuild -ba java-1.5.0-sun.spec | auto-install-rpm

6.5.3. Discussion

Add jpackage-utils to the yum repository, because we'll need it to build other java packages from JPackage.

6.6. Building RPMs of Perl modules

6.6.1. Problem

There are a gazillion Perl modules out there in the world, and we have only a tiny fraction of them installed on our systems. Thus, it is likely that we will need to install one that we don't have at some point. Since all the rest of our perl modules are installed as RPMs, we want to build this additional one as an RPM, as well.

6.6.2. Solution

The Perl module RPM::Specfile (installed on all our systems) provides us with the handy script cpanflute2 to do all the work of building perl module RPMs.

  1. Go to http://search.cpan.org and download the module to /tmp.
  2. Un-tar the module, read its README and determine if there are any optional modules to download. If so, and they seem useful, download them and build RPMs of them.
  3. Run cpanflute2 on the tar file:
release=<some release string>

cpanflute2 --buildall --release=$release --test <tar filename>

6.6.3. Discussion

cpanflute2 (or rpm itself, not sure which) should be able to figure out the Requires: in terms of perl modules needed by the module. If you know of other requirements that don't get auto-detected (non-Perl programs, for example), add them with the "--requires" flag to cpanflute2.

Similarly, if you find that you need certain packages installed just to build this module RPM, add those with the "--buildrequires" flag.

You can supply your name and e-mail address with the "--name" and "--email" flags.

6.7. Building RPMs of Python modules

6.7.1. Problem

There are a lot of Python modules out there in the world, and we have only a fraction of them installed on our systems. Thus, it is likely that we will need to install one that we don't have at some point. Since all the rest of our python modules are installed as RPMs, we want to build this additional one as an RPM, as well.

Additionally, it's likely that we'll need to install a version of the module for each of the several different versions of python that are installed on a machine: python core code gets updated with cool things much more quickly than RedHat or Fedora are willing to deal with, so I usually have two versions of Python installed — the RedHat/Fedora old version, and a more current one.

6.7.2. Solution

If this is a python module that I want to build for both Fedora Core 3 and Fedora Core 1 systems and older, and want the module for the older systems to be installed for Python 2.3.x, I use this spec file stub:

%define rname <real name>
%define version <version>

%{!?_with_python23: %{!?_without_python23: %define _without_python23 --without-python23}}

%if %{?_with_python23:1}%{!?_with_python23:0}
%define python python2.3
%else
%define python python
%endif

Summary: Python <something> module
%if %{?_with_python23:1}%{!?_with_python23:0}
Name: python-%{rname}-py2.3
%else
Name: python-%{rname}
%endif
Version: %{version}
Release: 1
Source0: %{rname}-%version.tar.gz
License: GPL
Group: Development/Python
BuildRoot: %{_tmppath}/%{name}-buildroot
Requires: %{python}
BuildRequires: %{python}

%description
python-<module name> provides a python interface to <something>.

%prep
%setup -q -n %rname-%version

%build
%{python} setup.py build

%install
rm -rf %buildroot
%{python} setup.py install --root %buildroot

%clean
rm -rf %buildroot

%files
%defattr(-,root,root,-)
%_libdir/python*/site-packages/*

Where:

  • rname is the name on the tar file. So if the tar file is "foo-1.0.0.tar.gz", rname would be "foo".
  • This code was written for RedHat 9 and Fedora Core 1, where the default version of python is v2.2 and the more current version is v2.3.
  • To build the module for the default version of python, do the following (you'll end up with an RPM named python-<rname>-<version>-<release>.toughguy.<distro>.i386.rpm)

    rpmbuild -ba <specfile>.spec
  • To build the module for the current version of python, do the following (you'll end up with an RPM named python-<rname>-py2.3-<version>-<release>.toughguy.<distro>.i386.rpm)

    rpmbuild -ba --with python23 <specfile>.spec

If I only want to build the RPM for Fedora Core 3, which uses Python 2.3.x natively, I use this stub:

%define rname <real name>
%define version <version>

%define python python

Summary: Python <something> module
Name: python-%{rname}
Version: %{version}
Release: 1
Source0: %{rname}-%version.tar.gz
License: GPL
Group: Development/Python
BuildRoot: %{_tmppath}/%{name}-buildroot
Requires: %{python}
BuildRequires: %{python}

%description
python-<module name> provides a python interface to <something>.

%prep
%setup -q -n %rname-%version

%build
%{python} setup.py build

%install
rm -rf %buildroot
%{python} setup.py install --root %buildroot

%clean
rm -rf %buildroot

%files
%defattr(-,root,root,-)
%_libdir/python*/site-packages/*

6.8. Building unsquashable Fedora replacement packages

6.8.1. Problem

Sometimes you need to install your own version of a package and ensure that it won't be overwritten when Fedora releases updates for that package.

Example:

For the hosts here that use MySQL 4.x, all packages which depend on the mysql shared libraries — php, for example — must be rebuilt to use the new shared libraries.

All I need to do is to install MySQL 4.x and rebuild the php RPM. But since I don't change the version and release in the spec file, if Fedora ever updates php so that the release is later than that of our replacement package, our special mysql4-linked php gets overwritten by the updated Fedora mysql3-linked php, and the php-mysql package no longer works.

6.8.2. Solution

The solution is to increase the Epoch in the spec file. If there's no "Epoch: <number>" line in the spec file, add this line to the header:

Epoch: 1

Otherwise, increase the epoch by one.

6.8.2.1. Sub-packages

If the package has sub-packages which depend on each other — if there is a "Requires:" line in the sub-package which requires another sub-package or the main package — and the "Requires:" references a specific version, prepend the version string with "<epoch>:" where <epoch> is the epoch you set above.

Example:

In the Fedora Core 3 php-4.3.9 spec file, the devel sub package header looks like this:

%package devel
Group: Development/Libraries
Summary: Files needed for building PHP extensions.
Requires: php = %{version}-%{release}

I add this to the main package header:

Epoch: 1

And then change the requires line in the main package to this:

Requires: php = 1:%{version}-%{release}

6.8.3. Discussion

When yum is trying to determine which version of a package to install, the epoch takes precedence over the version and release. So if two packages have different epochs, but the same version and release, the package with the greater value of epoch gets installed.

I think it's a safe bet that Fedora will not be changing the Epoch of a package during the life cycle of a Fedora release. Thus our package will always take precedence over the official Fedora updates.

6.8.3.1. Sub-packages

If you don't fix the "Requires:" lines in the sub package headers, when yum goes to update your packages, it will claim that it can't find those requires. Prepending the specific epoch to the version fixes that.

6.9. Adding a user and group for a service in an RPM for a Fedora system

6.9.1. Problem

When packaging up a daemon into an RPM, sometimes we want the daemon to run as a particular user instead of root. If that user does not exist, we need to create the account for the user, and potentially a group.

6.9.2. Solution

These instructions contain paths appropriate for Fedora systems.

6.9.2.1. You just need a certain username

Add this line at the top of the file:

%define service_uname <uname>

Add these lines after the %install block:

%pre
/usr/sbin/useradd -c "<description>" -M -n -r %{service_uname} &> /dev/null || :

6.9.2.2. You need user with a specific UID

Add these lines at the top of the file:

%define service_uname <uname>
%define service_uid <uid>

Add these lines after the %install block:

%pre
/usr/sbin/useradd -c "<description>" -M -u %{service_uid} %{service_uname} &> /dev/null || :

6.9.2.3. You need a user with a specific UID, and a group with a specific GID

Add these lines at the top of the file:

%define service_uname <uname>
%define service_uid <uid>
%define service_group <group name>
%define service_gid <gid>

Add these lines after the %install block:

%pre
/usr/sbin/groupadd -g %{service_gid} %{service_group} &> /dev/null || :
/usr/sbin/useradd -c "<description>" -M -n -u %{service_uid} -g %{service_group} %{service_uname} &> /dev/null || :

6.9.3. Discussion

The "-r" flag to useradd says "create a system account": UID < 500.

The "-M" flag to useradd says "don't create the home directory".

The "-n" flag to useradd says "don't create a group with same name as the user".

If you need a home directory to be created in a certain place, replace the "-M" flag to useradd with "-d <homedir>". If you want the files from /etc/skel to be copied to the new home dir, use "-m <homedir>" instead of "-d <homedir>".

See "man useradd" for a full description.

6.10. Adding a menu entry and icon for a GUI program for a Fedora system

6.10.1. Problem

If you're packaging up a GUI program in an RPM, it can be nice to add an entry to the global GNOME/KDE menu system, but it sure isn't obvious how to do so.

6.10.2. Solution

To add a menu entry for your program, you need to install a ".desktop" file into /usr/share/applications and possibly install an icon for the menu entry into /usr/share/pixmaps.

6.10.2.1. Install the icon

If you're going to install a special icon for your application, first choose and size your icon. It should be a PNG file, and should be 32x32.

Copy your icon to your SOURCES directory, and add these code bits:

Preamble:

Source99: <icon file>

%install section:

%{__mkdir_p} $RPM_BUILD_ROOT%{_datadir}/pixmaps
%{__install} -m 644 %{SOURCE99} $RPM_BUILD_ROOT%{_datadir}/pixmaps/<appname>.png

%files section:

%{_datadir}/pixmaps/<appname>.png

6.10.2.2. Install the .desktop file

Now for the menu entry file:

Preamble:

Prereq:      desktop-file-utils >= 0.9

%install section:

%{__mkdir_p} $RPM_BUILD_ROOT%{_datadir}/applications
cat << EOF > $RPM_BUILD_ROOT%{_datadir}/applications/%{name}.desktop
[Desktop Entry]
Encoding=UTF-8
Name=<short name>
Comment=<description>
Type=Application
Exec=<exec line>
Icon=<icon>
Categories=<categories>
X-Desktop-File-Install-Version=0.3
EOF
  • short name: the name to appear in the menu
  • description: text that will appear during a mouse over of the icon
  • exec line: the command line to use when running the application (If you need it, see the List of valid Exec parameter variables in the Desktop Entry Specification section of the standards document listed in See Also)
  • icon: the icon filename in /usr/share/pixmaps
  • categories: the menu categories this app should be listed under (see Discussion for more detail). Make sure one of them is X-Red-Hat-Base if you want your app listed in a top level menu instead of an "Extras" section.

%post and %postun sections

%post
update-desktop-database %{_datadir}/applications

%postun
update-desktop-database %{_datadir}/applications

6.10.3. Discussion

6.10.3.1. Icons

There are many locations for icons on a Fedora Core 3 system: /usr/share/pixmaps, /usr/share/icons, etc. I know that installing an icon into /usr/share/pixmaps works, but I'm sure you must be able to install them elsewhere, as well.

For instance, I know you can create icons of different sizes and put them into different directories, and the menu will use the most appropriately sized one.

6.10.3.2. Categories

The Categories list in the .desktop file is a semicolon separated list of desktop entry categories. Go to the Desktop Menu Specification and look at Appendix A: Registered Categories to get a list of acceptable categories.
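For example, a hypothetical network application that should appear in a top level Internet menu might use:

Categories=Application;Network;X-Red-Hat-Base;

(Application and Network come from the registered list; X-Red-Hat-Base is the Red Hat specific category.)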

6.10.3.3. GNOME and KDE Applications

Gnome 2.2 applications: add "StartupNotify=true" to the .desktop file.

KDE applications: add "X-KDE-StartupNotify=true" to the .desktop file.

Chapter 7. Programming

7.1. Importing an existing software package into a CVS repository

7.1.1. Problem

You have a software package you've been working on that you now want to place under revision control via CVS.

7.1.2. Solution

Let $package be the name of the directory containing your sources. Let $label be a label with which you want to tag the release.

export CVSROOT=<cvsroot directory>

cd $package
cvs import -kk -m "Comment about starting new project" \
    $package $label initial
cd ..
mv $package $package.bak
cvs co $package

If the "cvs co" gives you grief about versions, then do "cvs co -r HEAD $package".

7.2. Importing an existing software package into a Subversion repository

7.2.1. Problem

You have a software package you've been working on that you now want to place under Subversion on svn.hss.caltech.edu.

7.2.2. Solution

Let $package be the name of the directory containing your sources.

svnhost=<subversion server hostname>

svn import $package http://$svnhost/svn/trunk/$package
mv $package $package.bak
svn co http://$svnhost/svn/trunk/$package

7.3. Importing a CVS repository into Subversion

7.3.1. Problem

You've been using CVS to maintain your code base for a while, but now you want to migrate to using Subversion. You want to import your CVS repository into your Subversion repository while keeping your revision history, branches and tags.

7.3.2. Solution

Use the program cvs2svn to import your CVS data into your Subversion repository.

If it's not installed on your system, you can get it like so:

http://www.hss.caltech.edu/yum/fedora-core-3/en/os/i386/RPMS.extras/cvs2svn-1.1.0-0.tg.2.fc3.noarch.rpm

First, make sure all extant changes to things in your CVS repository have been committed. Duh.

Since (as described in the man page) cvs2svn uses the current directory as scratch space for data files, let's change to the local drive for speed.

cd /tmp
cvs2svn --existing-svnrepos -s <svn repo topdir> <cvs repo topdir>

7.3.3. Discussion

If you use the Apache + mod_dav_svn method to access your repository, the repository needs to be owned by the apache user.

The group "svn" contains you and the apache user so that you can use svnadmin to modify the repository, and apache can read and write to it.

7.3.3.1. A more complicated import

If there are a lot of projects in the CVS repository, we want to think about what kind of subversion layout to import them into. We can import these into subversion in one of three ways:

  1. One repository per project
  2. Group projects by similarity and create a repository for each group
  3. All projects in one repository

See "Choosing a Repository Layout" section from

http://svnbook.red-bean.com/en/1.1/ch05s04.html#svn-ch-5-sect-6.1

for some pros and cons of each method.

Some more specific details are:

Creating a new repository requires someone to login to the Subversion server, run svnadmin, create a subversion access control file, and add a stanza to the Apache config. So structure (1) implies administrative overhead for each project to be created.

On the other hand, Subversion revisions apply to the entire repository, not to projects or files. So do commit messages. Hooks also fire for any change in the repository. So structure (3) implies that revision numbers and commit messages can be ambiguous, and that hooks may need to be somewhat complicated.

Structure (2) has both problems, at a lesser level.

Here's a shell script which can sort projects into any of structures (1), (2) or (3).

#!/bin/bash

topdir="/infosys/subversion"
workingcvsdir="/scratch/cvs"
realcvsdir="/infosys/cvs"
etcdir="/infosys/subversion/etc"

function create_access () {

    if ! test -e $etcdir/$1.access; then
        cat > $etcdir/$1.access << __EOF__
[/]
* = rw
__EOF__

    fi
}

function create () {

    if ! test -e $topdir/$1; then
        svnadmin create --fs-type fsfs $topdir/$1
    fi
    create_access $1
}


function import_single () {
   # Usage: import_single <repository name> <cvs project name>
   #
   # Resulting structure looks like
   #
   # /
   #   /branches
   #   /tags
   #   /trunk

   cvs2svn --existing-svnrepos -s $topdir/$1 $workingcvsdir/$2
}

function import_multi () {
   # Usage: import_multi <repository name> <cvs project name>
   #
   # Resulting structure looks like
   #
   # /
   #   /$project
   #     /$project/branches
   #     /$project/tags
   #     /$project/trunk

   cvs2svn --dump-only --dumpfile /tmp/$2.dump  $workingcvsdir/$2
   svn mkdir file://$topdir/$1/$2 --message ""
   svnadmin load --parent-dir $2 $topdir/$1 < /tmp/$2.dump
   rm /tmp/$2.dump
}

function fixperms () {
    chown -R apache:svn $topdir/$1
    chmod g+w $topdir/$1
    find $topdir/$1 -type d | xargs chmod g+s
}

# ---------------------
#         MAIN
# ---------------------

rsync -a ${realcvsdir}/ $workingcvsdir

for f in repos1 repos2; do
    create $f
done

import_single repos1 project1
import_multi repos2 project2
import_multi repos2 project3

for f in repos1 repos2; do
    fixperms $f
done

rm -rf $workingcvsdir

7.4. Tagging a release in Subversion

7.4.1. Problem

Under CVS, you could use "cvs tag" to tag all files in a project with a symbolic tag. You might do this to freeze the code at a certain release of the software, so that at a future date you could be sure of retrieving the code for that release from the repository. How do you do this with Subversion?

7.4.2. Solution

Subversion has no separate tagging command. Instead, you make a (cheap) copy of a project, name the copy for the release, and put it in the "tags" subdirectory.

svnhost = <subversion server hostname>

svn copy -m "<log message>" http://$svnhost/svn/trunk/<project> \
   http://$svnhost/svn/tags/<project>-<version>
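Later, you can list the available tags, or retrieve the frozen code, with the same hostname and project placeholders:

```shell
# See which releases have been tagged:
svn list http://$svnhost/svn/tags/

# Check out the code exactly as it was at that release:
svn co http://$svnhost/svn/tags/<project>-<version>
```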

7.5. Importing the contents of another Subversion repository into yours

7.5.1. Problem

Sometimes people who have been using their own Subversion repositories want to have their repository maintained by us as part of our repository.

7.5.2. Solution

First, make a dump of the old repository on the machine where it currently resides. Do the dump with the same version of Subversion that the repository was created with, since newer versions of Subversion may not be backwards compatible.

svnadmin dump /path/to/old/repository > repo.dump
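If you're unsure whether the versions match, "svnadmin --version" on each machine will tell you what you're working with:

```shell
# Run on both the old machine and the Subversion server;
# ideally the first line of output matches on both.
svnadmin --version | head -1
```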

Copy the resulting repo.dump file to your Subversion server.

Prepare a holding directory for the old repository in our repository:

svnhost = <subversion server hostname>
svn mkdir -m "Created holding directory for <old repos name> repository" \
    http://$svnhost/svn/<old repos name>

… and import the dump file:

repospath = <path to the directory which contains your repository (conf, dav, db dirs, etc.)>

svnadmin load $repospath --parent-dir <old repos name> < repo.dump

At this point you can elect to either leave things as they are or merge the files from the old repository into our main repository trunk directory.

7.5.2.1. 1. Leave things as they are

If you leave things as they are, the URL for items from the old repository is:

svnhost = <subversion server hostname>

http://$svnhost/svn/<old repo name>/{trunk,tags,branches}/<target>

7.5.2.2. 2. Merge the old repository into the main repository

If you want to merge the old repository into the main one, use "svn move" to move everything from

svnhost = <subversion server hostname>

http://$svnhost/svn/<old repo name>/{trunk,tags,branches}

to

http://$svnhost/svn/{trunk,tags,branches}

resolving any naming conflicts as you do so.
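For example, to move a single project's trunk (the project name "myproj" is just an illustration):

```shell
svn move -m "Merge myproj from the old repository into the main trunk" \
    http://$svnhost/svn/<old repo name>/trunk/myproj \
    http://$svnhost/svn/trunk/myproj
```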

Then,

svn remove http://$svnhost/svn/<old repo name>

7.6. Populating a Subversion repository

7.6.1. Problem

So you've created your repository with "Setting up a Subversion server under Fedora" or "Adding a new subversion repository", and now you want to add projects and files to it.

7.6.2. Discussion

I'm skipping the typical "Solution" section for this recipe, because there is no one solution for repository structure.

Your first stop should be the "Version Control with Subversion" book. It will explain things in much greater detail and breadth than I will.

7.6.2.1. Repositories, files and directories

A repository is a container for files and directories. Your repository is empty — it currently contains no files or directories (except "/", the root directory).

When it comes down to it, Subversion doesn't really have a concept of a project. What it knows about are files and directories, each of which has a property set and a change history. We humans are free to group the repository's directories and files into projects in our minds, but Subversion doesn't do that internally.

7.6.2.2. Organizational structures

The first thing you should decide is how you are going to structure your repository. Subversion does not require repositories to have any particular directory structure. You have two options:

  1. Use your repository for a single project
  2. Use your repository for many projects

See "Choosing a Repository Layout" for some pros and cons of each.

Some more specific details are:

Creating a new repository requires someone (typically a systems administrator) to log in to the Subversion server, run svnadmin, create a subversion access control file, and add a stanza to the Apache config. So structure (1) implies administrative overhead for each project to be created. Which implies delay for you, unless you are an admin.

On the other hand, Subversion revisions apply to the entire repository, not to projects or files. So do commit messages. Hooks also fire for any change in the repository. So structure (2) implies that revision numbers and commit messages can be ambiguous, and that hooks may need to be somewhat complicated.

I use structure (2) for the sysadmin repository, while Charlie Hornberger has started using structure (1) for certain projects. I may also start using (1) for certain projects.

7.6.2.3. Examples

Let's say you're going with (2). The first thing you need is a name for your project; we'll use "test". Let's make a directory for it:

svn mkdir -m "<log message>" https://svn.hss.caltech.edu/<repo name>/test

At this point you could create some organizational placeholders under "test" which will hold different kinds of code snapshots. For instance

svn mkdir -m "<log message>" https://svn.hss.caltech.edu/<repo name>/test/trunk
svn mkdir -m "<log message>" https://svn.hss.caltech.edu/<repo name>/test/tags
svn mkdir -m "<log message>" https://svn.hss.caltech.edu/<repo name>/test/branches

"trunk" holds the main development branch, "tags" holds tagged releases (like what you would make with "cvs tag" under CVS) and "branches" holds alternate code development branches. Note that these names are arbitrary; you could call them "icecream", "pinetree" and "subaru" if you felt like it. You could also not create these directories (and just use https://svn.hss.caltech.edu/<repo name>/test to hold your code) or create many more directories ("tags-release", "tags-unstable", etc.). This is just an organizational structure, but one used by many people.
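Note that each URL-side "svn mkdir" above is a separate commit; if you prefer a single commit, you can pass all three URLs to one invocation:

```shell
# One commit creating all three directories at once:
svn mkdir -m "Create trunk/tags/branches layout for test" \
    https://svn.hss.caltech.edu/<repo name>/test/trunk \
    https://svn.hss.caltech.edu/<repo name>/test/tags \
    https://svn.hss.caltech.edu/<repo name>/test/branches
```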

Now let's checkout your project:

svn co https://svn.hss.caltech.edu/<repo name>/test/trunk test

You should now have an empty directory called "test" in the current directory. This is your working copy of the "test" project. Now create files within the "test" directory, tell subversion about them with "svn add <filename>" and commit your changes with "svn commit".
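A typical first cycle might look like this (the README file and its contents are just an example):

```shell
cd test
echo "A scratch project for learning Subversion." > README
svn add README
svn commit -m "Add README"
```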

If you have existing code you want to import into the "test" project in subversion, you could do

svn import -m "<log message>" <local project directory> \
    https://svn.hss.caltech.edu/<repo name>/test/trunk

Appendix A. asciidoc include tree for this document

cookbook.asc:
    users.asc:
        adduser.asc
        addapacheuser.asc
        adduserdavid.asc
        davidcvsweb.asc
        newsysadmin.asc
        forwarding.asc
        vacationnospam.asc
        installxwin32.asc
        multiusershare.asc
        changeshellenvironment.asc
        firefoxpdf.asc
        sharecalendar.asc
    install.asc:
        newdistro.asc
        stresstest.asc
        newinstall.asc
        standalone.asc
        laptop.asc
        upgrade-linux.asc
        nvidia.asc
        cinemadisplay23.asc
        dell2005fpw.asc
    hostspecific.asc:
        monitoringhostupgrade.asc
        mailhostupgrade.asc
        rebuildstrongboxmaster.asc
        rebuildknossos.asc
        rebuildbackupserver.asc
        rebuildalexandria.asc
        rebuildpylos.asc
        rebuildcoruscant.asc
    maintain.asc:
        modifycf.asc
        stresstest.asc
        patchsolaris.asc
        machine-labels.asc
        changeipaddress.asc
        addinganipaddress.asc
        movinganipaddress.asc
        restoresolarispasswd.asc
        solarisaddswap.asc
        uninstallbootloader.asc
        configurex.asc
        onceonlybootdefault.asc
        linuxusbdrive.asc
    software.asc:
        installbugzilla.asc
        setupsubversion.asc
        newsvnrepo.asc
        linuxvpnclient.asc
        mathematica.asc
    hardware.asc:
        changedrive.asc
        suntopcpartmap.asc
        addharddrivesolaris.asc
        fedoradvd+rw.asc
        fc4sound.asc
    email.asc:
        updatespamrules.asc:
            spamrulesets.asc
        spamlists.asc
        deletemailmanlist.asc
        movemailmanlists.asc
        dualmailman.asc
        impsavesent.asc
        migratehordedb.asc
    ssl.asc:
        makesslcert.asc
        makethawtesslcert.asc
        couriercertificates.asc
        renewcertificate.asc
        renewcacert.asc
    monitoring.asc:
        cactiaddhost.asc
        diskiosnmp.asc
        diskusagechange.asc
        webalizer.asc
    printing.asc:
        addprinterlprng.asc
        addprinter.asc
        setupmacprinter.asc
        maclprng.asc
        setuplexmarkprinter.asc
    apache.asc:
        cronologfiler.asc
        vhost-setup.asc
        facultysearch.asc
        httpdandpamlinux.asc
        httpdandpamsolaris.asc
        pwauth-pwdfile.asc
        sslnamevirtualhost.asc
        redirecttossl.asc
        hsswebdav.asc
        apache1xpwebdav.asc
        apache2xpwebdav.asc
        cgiscripts.asc
    databases.asc:
        setmysqlpasswd.asc
        dumprestoremysql.asc
        dumprestorepostgresql.asc
        setupmysql.asc
        setuppostgresql.asc
        dailymysqldump.asc
        dailypostgresdump.asc
    rpms.asc:
        rpmbuildroot.asc
        autorelease.asc
        addpatch.asc
        java1.4.2rpm.asc
        java1.5.0rpm.asc
        perlmodulerpm.asc
        pythonmodulerpm.asc
        fedorareplacementpkgs.asc
        rpmadduser.asc
        addiconandmenurpm.asc
    programming.asc:
        cvsimport.asc
        svnimport.asc
        svncvsimport.asc
        svnfreezetag.asc
        importsvnrepo.asc
        populate-svn-repo.asc
    backups.asc:
        restore.asc
        filerfullrestore.asc
        filerforcefull.asc
        caladanforcefull.asc
    cookbookincludes.asc
    cookbookstyle.asc:
        ../generalstyle.asc
    cookbookmaintainence.asc:
        buildcookbook.asc

Appendix B. Style Guide for HSS Cookbook recipes

B.1. Recipe Template

I've modelled the HSS Cookbook on O'Reilly's Perl Cookbook. Each recipe should be structured like so:

Short, yet descriptive, subject
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Problem
^^^^^^^
A brief description of the problem this recipe tries to solve.
Solution
^^^^^^^^
The recipe itself.  Keep this terse: put any explanation or exposition
into the Discussion section.
Discussion
^^^^^^^^^^
Elaborations on steps in the Solution section, exposition, anecdotes, etc.
go here.

B.2. Code fragments or command sequences

A single command in a block of normal text should be enclosed in quotes and monospaced, like so: "postfix reload".

Code fragments or command sequences should be enclosed in asciidoc literal blocks:

--------------------------------------
if code_fragment:
    self.enclose_in_literal_block()
--------------------------------------
--------------------------------------
cd /install/cf/snmp/snmpd.conf
make install
--------------------------------------

If a command sequence should be run on a particular machine, or as root, put that in parenthesis on the first line of the literal block:

--------------------------------------
(on delphi, as root)
postfix reload
--------------------------------------

If a command in a command sequence takes a context specific argument, represent that argument as a brief description enclosed in angle brackets:

--------------------------------------
useradd -c <comment> <username>
--------------------------------------

B.3. Text markup

Names of programs
Use monospaced fonts for the name of programs: postfix, /usr/bin/useradd

Appendix C. Maintaining the HSS UNIX Cookbook

C.1. Building the HTML and PDF versions of this book

C.1.1. Problem

You want to rebuild the web pages for the HSS UNIX Cookbook or build the printable book.

C.1.2. Solution

The HSS UNIX Cookbook is part of the hss-runbook package, in the HSS sysadmin Subversion repository. You must be in the "sysadmins" group in /infosys/subversion/etc/subversion.access to be able to check out the hss-runbook package.

Check it out from the repository:

svn checkout https://svn.hss.caltech.edu/svn/trunk/hss-runbook
cd hss-runbook/cookbook

To build the HTML pages, do:

make html

To install the HTML pages to their live locations in ~sysadmin/public_html, do:

make install

To build the PDF pages, do:

make pdf

Index