Mar 26 - Sorry, maintenance underway, things will be fragile for a bit

Unix Admin Notes

This is a miscellaneous collection of things I want to remember when administering my machines. These are notes for me managing my Debian systems, so there's little effort made to explain things beyond what I need to know. Maybe someone else will find something useful here too, but you should treat nothing here as completely accurate.

  • What's in an ISO image?
      sudo mount -o loop -t iso9660 /tmp/floppy.iso /mnt/iso
      mkdir /tmp/jjj
      sudo mount -o loop /mnt/iso/boot/boot.img /tmp/jjj
      Now you can see a directory structure of data in /tmp/jjj

  • What's in an initrd file?
      sudo mkdir /dev/shm/initrd
      sudo mount -o loop -t cramfs /boot/initrd.img-2.6.14-mos /dev/shm/initrd
      Note you can get a shell just before the pivot is done by changing
      DELAY in /etc/mkinitrd/mkinitrd.conf

  • Set up locales to fix errors like this:
       perl: warning: Setting locale failed.
       perl: warning: Please check that your locale settings:
          LANGUAGE = "en_US",
          LC_ALL = (unset),
          LANG = "en_US"
       are supported and installed on your system.
       perl: warning: Falling back to the standard locale ("C").

    You only need to install the package locales. In particular, people seem to think localeconf is a bad thing.

     sudo vi /etc/environment    # Should just have LANG="en_US.UTF-8"
     sudo vi /etc/locale.gen     # If exists, it should look like:
                                 #  en_US.UTF-8 UTF-8
                                 #  en_US ISO-8859-1
     sudo dpkg-reconfigure locales
     Enable just the two en_US languages above. Default should be 'en_US'
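
     To confirm the locales were actually generated, a quick check (glibc
     spells the generated names en_US.utf8 and en_US):

     ```shell
     # List generated locales; after dpkg-reconfigure both en_US entries
     # should appear here
     locale -a | grep -i 'en_us' || echo 'en_US not generated yet'
     ```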

  • Snatch X11 screen image from a remote machine.

    Sometimes I'll need to run a LiveCD to do diagnostics or some other long-running maintenance. Recent versions of the Dell Remote Access Controller (RAC) have their Java console broken for 95% of the browsers in the universe. This hack lets me snatch the X11 screen image so I can tell when the maintenance is complete. Note, this only applies if the remote machine has an X11 session open. It's also proof of why root should normally never run X11 sessions.

      ssh -X remotemachine        # SSH with X support to remote machine
      ls -la /root/.Xauthority    # Xauthority for owner of console session
      export DISPLAY=:0.0         # DISPLAY value for local console
      xauth merge /root/.Xauthority   # Allow access to new DISPLAY
      xwd -root -out /tmp/console.xwd # Grab console of local machine
      scp -p /tmp/console.xwd user@host:/tmp    # Screen shot to MY machine
      gimp /tmp/console.xwd       # View remote console on my machine

  • Make a package for stable from another release.
      Find the package of interest in some other distribution
      Move to the package description. In the blue box on the right-hand side find 'Download Source Package'
        click on and save the *.orig.tar.gz file
        click on and save the *diff.gz file
      Move to a place where the source is to be extracted and built
        Note this may be on a machine with the correct libs installed
        cd ~/src
        tar xzvf /tmp/packagename.orig.tar.gz
        cd packagename
        gunzip -c /tmp/packagename*diff.gz | patch -Np1
      Make the package  (requires packages devscripts fakeroot)
        debuild -i -us -uc -b
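
      The patch step above works like this; here is a toy run with a
      made-up one-line "package" tree (names here are hypothetical, not
      a real Debian package):

      ```shell
      # Build a tiny tree and a -p1 diff, then apply it the same way the
      # Debian *.diff.gz is applied to the unpacked .orig source
      rm -rf /tmp/demo && mkdir -p /tmp/demo/pkg && cd /tmp/demo
      echo 'hello' > pkg/greeting
      cat > fix.diff <<'EOF'
      --- pkg.orig/greeting
      +++ pkg/greeting
      @@ -1 +1 @@
      -hello
      +hello, debian
      EOF
      cd pkg && patch -Np1 < ../fix.diff
      cat greeting                  # prints: hello, debian
      ```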

  • Set up Debian mirror
      Create mirror and local package place
        cd /home/placeformirror
        mkdir debian
      Find the mirror script from the web. Edit it and set TO=
      to /home/placeformirror/debian and
      MAILTO="proper@mailtoaddress"
      sudo nohup         # No messages, takes a long time the 1st time
      Set up a web server like lighttpd. Configure /etc/lighttpd/lighttpd.conf
      Start server with sudo /etc/init.d/lighttpd restart
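
      A minimal lighttpd config for serving the mirror might look like
      this (the document-root path is the one assumed above; a real
      config will have more in it):

      ```
      # /etc/lighttpd/lighttpd.conf (minimal sketch)
      server.document-root = "/home/placeformirror"
      server.port          = 80
      ```

      Clients would then point sources.list at something like
      deb http://yourserver/debian stable main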

  • Create a partition for devices > 2TB. Careful here: parted makes changes to the disk immediately. Run 'sudo parted' and then:
        select /dev/sdc
        unit MB
        print                     # Note size of device
          Disk geometry for /dev/sdc: 5624000MB
          Disk label type: none   # <=== should be msdos (<2TB) or gpt (>2TB)
        mklabel msdos  (or gpt)   # Probably latter
        # Note that "parted" uses MB for the partition boundaries
        mkpart primary ext3 0 5624000    # 'ext3' is only a hint (we'll mkfs.xfs later); size is in MB, hence the extra 000
        print                     # Should look like you expect 
      After the partition is created, do mkfs.xfs etc as usual
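
      The reason gpt is needed above: a classic msdos (MBR) label stores
      partition sizes as 32-bit sector counts, so with 512-byte sectors
      it tops out at 2 TiB:

      ```shell
      # 2^32 sectors * 512 bytes/sector, expressed in TiB
      echo $(( 4294967296 * 512 / (1024 * 1024 * 1024 * 1024) ))TiB   # prints 2TiB
      ```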

  • Set up NFS
      Install the packages
        sudo apt-get install nfs-kernel-server
      sudo vi /etc/hosts.allow      # add the line allowing your clients
      sudo vi /etc/exports
        #   Allow cluster clients to get at NFSROOT
      Restart server if need be:
        sudo /etc/init.d/nfs-kernel-server restart
      To just re-read the exports file, send the daemon a HUP signal:
        sudo /etc/init.d/nfs-kernel-server reload
      Useful commands:
        exportfs -f          # Flush all data
        exportfs -u          # Unexport all directories
        exportfs -r          # Reexport all directories
        rpcinfo -p           # See if NFS is up
      Mount NFS drive on the remote machine
        mount  /remotemachine/home
        vi /etc/fstab
          remotemachine:/home  /remotemachine/home  nfs   ro,soft,intr,proto=udp  0    0
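
      A hedged example of what the /etc/exports entry could look like
      (the 192.168.1.0/24 subnet is made up; use your cluster's network):

      ```
      # /etc/exports -- share /home read-only with the cluster subnet
      /home  192.168.1.0/24(ro,sync,no_subtree_check)
      ```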

  • Software RAID setup
     Install software packages
       sudo apt-get install mdadm
      Create the device.  In theory mdadm --auto can do this, but... (still needed?)
        sudo mknod /dev/md0 b 9 0
      Find and allocate the partitions using 'fdisk /dev/sdX'
      Create the RAID device
        sudo mdadm --create --verbose /dev/md0 --auto yes --level 1 \
           --raid-devices 2 /dev/sda4 /dev/sdb4
        This can start an initialization of the device in the background
        that can take a very, very long time. While this is going on
        the system is nearly unusable. Patience, grasshopper.
      Create filesystem on disk
        sudo mkfs.xfs -f /dev/md0
      Create a config file so the device is known at boot to the /etc/init.d/mdadm* scripts
        sudo vi /etc/mdadm/mdadm.conf
          DEVICE  /dev/sdX1 /dev/sdX2 
          ARRAY   /dev/md0 devices=/dev/sdX1,/dev/sdX2
      Queries to see the device:
        sudo mdadm --detail --scan
        sudo cat /proc/mdstat 
      Add the device to /etc/fstab to be automatically mounted at boot
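
      For example (the /data mount point is hypothetical; pick your own).
      Note you can also generate the ARRAY line above with
      'sudo mdadm --detail --scan':

      ```
      # /etc/fstab -- mount the new array at boot
      /dev/md0   /data   xfs   defaults   0   2
      ```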

  • Set up ATA over Ethernet (AoE)
      Be sure kernels on server (where physical disk is to be found)
      and client (where AOE disk will be mounted) have CONFIG_ATA_OVER_ETH set.
        grep ATA_OVER /boot/config-`uname -r`
      Load the aoe module: sudo modprobe aoe
        sudo tail /var/log/syslog      # See 'kernel: aoe: AoE v32 initialised'
        Install software:  sudo apt-get install vblade
        Start the daemon to manage the device. Each AoE device is identified
        by a major/minor pair, with major between 0-65535 and minor between
        0-255. AoE runs directly over Ethernet (no IP layer), so we need to
        indicate which ethernet card we'll use. Here we export /dev/sda1 with
        a major value of 0 and a minor of 1 on the eth0 interface:
        vbladed 0 1 eth0 /dev/sda1
        On the client, check as above that the kernel has AoE enabled
        and load the aoe module.
        Install software:  sudo apt-get install aoetools
        Find AoE devices:
          sudo aoe-discover         # Probe the network for exported devices
          ls -al /dev/etherd/       # See details on devices found/created
        Treat the new device (/dev/etherd/e0.1) just like any other device
        mkfs.xfs /dev/etherd/e0.1
        mkdir /aoe
        mount -t xfs /dev/etherd/e0.1 /aoe
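
        The device node name follows directly from the major/minor pair
        given to vbladed; a tiny helper of my own (not part of aoetools)
        makes the mapping explicit:

        ```shell
        # Map an AoE major/minor (shelf/slot) pair to its device node name
        aoe_dev() {
            printf '/dev/etherd/e%d.%d\n' "$1" "$2"
        }

        aoe_dev 0 1    # prints /dev/etherd/e0.1, the device exported above
        ```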

  • Set up LVM to combine many partitions together as one 'virtual disk'
      Install software
        sudo apt-get install lvm2 dmsetup xfsprogs
      Partition disks, fdisk each with one partition (e.g. /dev/sdb1)
      Create physical volumes
        sudo pvcreate /dev/sdb1
        sudo pvcreate /dev/sdc1
        sudo pvcreate /dev/sdd1
        sudo pvscan
      Create a volume group and make it available
        sudo vgcreate fhome /dev/sdb1 /dev/sdc1 /dev/sdd1
        sudo vgchange -a y fhome
        sudo vgdisplay
        sudo lvcreate --size 3493G --name fanhome fhome
        sudo lvdisplay
      Rename a volume group
        sudo vgdisplay -v shome
        sudo vgchange -a n /dev/shome
        sudo vgrename /dev/shome /dev/ext
        sudo vgchange -a y /dev/ext
        sudo vgdisplay -v
        sudo vgdisplay -v ext
      Rename a logical volume
        sudo lvdisplay  /dev/ext/snowhome
        sudo lvrename /dev/ext/snowhome /dev/ext/nfsdata
        sudo lvdisplay  /dev/ext/nfsdata
      Make a file system on the disks
        sudo mkfs.xfs -f /dev/fhome/fanhome
        sudo mount /dev/fhome/fanhome /home
        sudo mount -t xfs /dev/ext/nfsdata /mnt
      And again
        sudo vgcreate shome /dev/sde1
        sudo vgchange -a y shome
        sudo lvcreate --size 698G --name snowhome shome
        sudo mkfs.xfs -f /dev/shome/snowhome
        sudo mount /dev/shome/snowhome /snowhome
      Add a new drive to a VG
        sudo vgdisplay -v     # See what disks are used
        # create partition, format as usual
        umount /BAK/backups
        sudo pvcreate /dev/sdf1   # Prepare new partition
        sudo vgextend bak /dev/sdf1
        sudo lvresize --size 5.999T /dev/bak/backups    # Increase size
        sudo mount /dev/bak/backups /BAK/backups
        sudo xfs_growfs /BAK/backups

  • Set up iSCSI support
      Install software. Requires iscsi-tcp compiled into the kernel (or as a module).
        sudo apt-get install open-iscsi parted
      Find the initiator name for this node:
        tail -1 /etc/iscsi/initiatorname.iscsi
      Make sure that this initiator is defined to the iSCSI device
        firefox --no-remote http://raid3
        set Host LUN -> Create iSCSI Initiator
      Figure out what iSCSI initiators are around and their LUNs
        sudo iscsiadm -m discovery -t sendtargets -p
      This creates files in /etc/iscsi/nodes AND changes the node.startup value.
      Do not do this unless you then correct the startup values.
      sudo vi /etc/iscsi/iscsid.conf     # Make sure 'node.startup = automatic'
    * Set up iscsi devices to be started automatically
      cd /etc/iscsi/nodes
      sudo vi */*/default
        change 'node.startup = manual' to 'node.startup = automatic'
      The following did not work for me:
        To automate login to a node, use the following with the record ID (record ID
        is the targetname and portal) of the node discovered in the discovery above:
      	iscsiadm -m node -T targetname -p ip:port --op update -n node.conn[0].startup -v automatic
      Perhaps restart and check what happened    
        sudo /etc/init.d/open-iscsi restart
        sudo tail -20 /var/log/syslog
      Find out what is going on
        sudo iscsiadm -m session
        tcp: [1],1
      See statistics about activity
        sudo iscsiadm -m session -s -r ID
      Make file system as usual and mount
        mkfs.xfs -f /dev/sdc1