My ongoing experiences with Ubuntu, and later Mythbuntu, as a media center with MythTV. I'm also using the system for a virtual machine server, a mediawiki server and a general all around home infrastructure base.

Sunday, March 29, 2009

Trying out 'unattended upgrades' to replace cron-apt

A while back I posted about a problem I was having with my auto-update process sometimes getting hung up reinstalling a kernel update over and over. A reader left a comment saying they had not had a similar problem with the "unattended-upgrades" package.

I poked around the net, but could not find a lot of information about this package. The best information I found was this blog post on vanutsteen.nl, which was enough to go on.

At this point I've installed unattended-upgrades on one of my virtual machines. I haven't noticed any difference so far, but I haven't had a kernel upgrade like the one that caused my problems, so I'm in wait-and-see mode for now.

Here is what I did to install unattended-upgrades, based on the vanutsteen.nl post.

First, I installed the package and got rid of cron-apt:
# apt-get install unattended-upgrades update-notifier-common
# apt-get remove cron-apt
Then I edited /etc/apt/apt.conf.d/50unattended-upgrades and uncommented the following line:
"Ubuntu intrepid-updates";
Then I edited /etc/apt/apt.conf.d/10periodic to look like:
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Download-Upgradeable-Packages "1";
APT::Periodic::AutocleanInterval "1";
APT::Periodic::Unattended-Upgrade "1";
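For reference, my understanding of those settings (each value is an interval in days, so "1" means daily); the comments below are my own interpretation, not from the package documentation:

```
APT::Periodic::Update-Package-Lists "1";           // refresh the package lists (apt-get update) daily
APT::Periodic::Download-Upgradeable-Packages "1";  // download pending upgrades daily
APT::Periodic::AutocleanInterval "1";              // clean obsolete packages out of the cache daily
APT::Periodic::Unattended-Upgrade "1";             // run the unattended-upgrade script daily
```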
That's it. I'll post again when I come to some conclusion about unattended upgrades vs cron-apt.

Redoing auto-updates to reduce email

I've gotten a little tired of the amount of email I receive daily from my automatic package updates, particularly now that I've installed logwatch, which provides much of the same information.

I have two types of machines that I want to update:
  1. Virtual machines that I just want to auto update and I don't want to hear about it unless there is an error. Yes, I understand I risk configuration problems, but so far this hasn't been a problem so I'm happy living on the edge.

  2. My key server and my laptop. On these machines I take the conservative route: download the packages and then, once a week on the weekend, sit down and manually install them. (If some critical security update occurs, I assume I will hear about it via other channels and jump in manually.)
On all systems: make cron-apt quiet
I started by editing the cron-apt configuration on all my systems so that it only sends email on errors, by changing the MAILON value to "error":
# vi /etc/cron-apt/config
# grep ^MAILON /etc/cron-apt/config
MAILON="error"
Key Server and Laptop
Then I turned to my server system and laptop. I wanted to run cron-apt daily and then once per week send myself a reminder to update (assuming there is something to update, which generally there is). I could just run cron-apt weekly, but this way, if there is a critical security update, it will already be downloaded and I just have to install it.

To run cron-apt daily, I added a symlink to /etc/cron.daily:
# ln -s /usr/sbin/cron-apt /etc/cron.daily/
Then I created the following script to send me a weekly reminder to install the upgrades (based on the script from this page):
# vi /etc/cron.weekly/show-upgrades
# chmod +x /etc/cron.weekly/show-upgrades
And here is the script:
#!/bin/sh
tmpFile=$(mktemp)
apt-get --simulate dist-upgrade > "$tmpFile" 2>&1
if test $? -ne 0 ; then
    echo "Error running apt-get --simulate dist-upgrade:"
    cat "$tmpFile"
    rm -f "$tmpFile"
    exit 1
fi
grep Inst "$tmpFile" > /dev/null
if test $? -eq 0; then
    echo "Upgrades are pending. Run 'apt-get dist-upgrade' to install."
    grep Inst "$tmpFile"
    echo
    echo "Full output:"
    cat "$tmpFile"
fi
rm -f "$tmpFile"
exit 0
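The "Inst" lines the script greps for name the package as their second field. If I ever want just a terse list of package names in the reminder email, a small awk filter would do it; the sample input below is fabricated for illustration (real input comes from 'apt-get --simulate dist-upgrade'):

```shell
# Print just the package names from simulated-upgrade output.
cat <<'EOF' | awk '/^Inst/ { print $2 }'
Inst bash (3.2-0ubuntu18 Ubuntu:8.10/intrepid-updates)
Conf bash (3.2-0ubuntu18 Ubuntu:8.10/intrepid-updates)
Inst libc6 (2.8~20080505-0ubuntu9 Ubuntu:8.10/intrepid-security)
EOF
# prints "bash" and "libc6", one per line (Conf lines are skipped)
```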
Virtual Machines
Now, turning to my virtual machines. What I wanted here was an automatic daily upgrade of all new packages, with no output unless there was an error. This involved editing the auto-update script I had created previously:
# vi /etc/cron.daily/auto-update
Here is the script:
#!/bin/bash
/usr/sbin/cron-apt
tmpFile=$(mktemp)
/usr/bin/apt-get -y dist-upgrade > "$tmpFile" 2>&1
if test $? -ne 0 ; then
    echo "Error doing '/usr/bin/apt-get -y dist-upgrade':"
    cat "$tmpFile"
fi
rm -f "$tmpFile"
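One hazard with a script like this is two copies running at once (say, a manual run colliding with the cron run). A sketch of a guard using flock from util-linux; the lock file path here is just an example:

```shell
#!/bin/bash
# Hold an exclusive lock on fd 9 for the life of the script; a second
# copy started while the first is running gives up immediately.
exec 9> /tmp/auto-update.lock
if ! flock -n 9; then
    echo "auto-update already running; exiting"
    exit 0
fi
echo "lock acquired"
# ... the cron-apt / apt-get dist-upgrade work would go here ...
```

The lock is released automatically when the script exits and fd 9 is closed, so there is no stale-lockfile cleanup to worry about.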

host clock rate changes again

Updated April 16, 2009: And the answer is: running 'vmware-config.pl' (which I did recently after a kernel update) causes the 'host.useFastClock = FALSE' line to disappear from /etc/vmware/config.

Original post follows...

Argh, once again the syslog on my VMware server was filling up with host clock rate change messages:
# tail /var/log/syslog
Mar 29 08:31:46 casey kernel: [1894820.064260] [925]: host clock rate change request 54 -> 58
Mar 29 08:31:57 casey kernel: [1894831.040153] [925]: host clock rate change request 58 -> 24
Mar 29 08:31:57 casey kernel: [1894831.040272] [925]: host clock rate change request 24 -> 54
Mar 29 08:31:59 casey kernel: [1894832.676158] [925]: host clock rate change request 54 -> 58
Once again, "host.useFastClock = FALSE" was missing from /etc/vmware/config.

Once again, I added it and restarted vmware (/etc/init.d/vmware restart).

Once again, this cleared up the problem.

I decided to add a daily check for this line to try and figure out why it disappears:
# vi /etc/cron.daily/check-vmware-config
# cat /etc/cron.daily/check-vmware-config
#!/bin/sh
grep "useFastClock" /etc/vmware/config > /dev/null
if test $? -ne 0 ; then
    echo "WARNING: useFastClock line not found in /etc/vmware/config"
fi
# chmod +x /etc/cron.daily/check-vmware-config
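If the line keeps vanishing, a more aggressive version of this check could restore it instead of just warning. A sketch; it operates on a scratch copy of the config here so it is safe to run anywhere, but a real version would point CONFIG at /etc/vmware/config:

```shell
#!/bin/sh
# Scratch copy standing in for /etc/vmware/config (illustration only).
CONFIG=$(mktemp)
echo 'some.other.setting = "TRUE"' > "$CONFIG"

# If the useFastClock line is missing, append it.
if ! grep -q "useFastClock" "$CONFIG"; then
    echo "useFastClock missing from $CONFIG; restoring it"
    echo 'host.useFastClock = FALSE' >> "$CONFIG"
fi
grep useFastClock "$CONFIG"
rm -f "$CONFIG"
```

A real version would also need to restart VMware after restoring the line, as described above.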
Nothing to do now but sit back and wait....

ddclient woes

I previously set up automatic updates of my DynDNS account. This had been working fine until this morning, when I got the following email from DynDNS:
Your account vwelch at DynDNS.com is due to expire in 5 days.

DynDNS.com expires accounts that have no activity during a 30 day period.
So I logged into my server and checked to see what was going on. The first thing I noticed was that I had a lot of ddclient processes running, hung reading from whatismyip.org:
# ps auxwww | grep ddclient | wc
23 386 2486
# ps auxwww | grep ddclient
...
root 22048 0.0 0.1 6196 4184 ? SN Mar10 0:00 ddclient - read from whatismyip.org port 80
root 24041 0.0 0.1 6196 4224 ? SN Mar24 0:10 ddclient - read from whatismyip.org port 80
root 24791 0.0 0.1 6196 4180 ? SN Mar17 0:00 ddclient - read from whatismyip.org port 80
root 26332 0.0 0.1 6196 4224 ? SN Mar11 0:04 ddclient - reading from whatismyip.org port 80
root 28622 0.0 0.1 6196 4224 ? SN Mar25 0:09 ddclient - read from whatismyip.org port 80
My guess here is that at some point there was a network problem that gummed everything up. I started by killing off all of these hung processes:
# killall -9 ddclient
# ps auxwww | grep ddclient
root 15680 0.0 0.0 3236 796 pts/1 S+ 08:12 0:00 grep ddclient
Looking at the daily cron script I had previously written, the use of the '-daemon' flag seems wrong since I'm running it regularly from cron. When the script runs, I want it to do its thing and exit. So I took the -daemon flag out, leaving /etc/cron.daily/ddclient looking as follows:
#!/bin/sh
/usr/sbin/ddclient -syslog
After this, I ran the script, pointed my browser at DynDNS, and saw that my Last Updated field was current.
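Since the root cause was ddclient hanging on a network read, another option would be to wrap the cron invocation in the timeout command from GNU coreutils, e.g. 'timeout 300 /usr/sbin/ddclient -syslog', so a stuck run gets killed instead of piling up. A quick demonstration of timeout's behavior, with sleep standing in for a hung ddclient:

```shell
# timeout kills the command when the limit expires and exits with
# status 124, the conventional code for a timed-out command.
timeout 1 sleep 10
echo "exit status: $?"
```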

Wednesday, March 18, 2009

Using cronic to reduce "cram" (cron spam)

I'm working on reducing the amount of email I'm getting from my cron jobs. One thing I stumbled across is cronic. I used it as follows to quiet my cron jobs.

First, to install it:
# wget -O /usr/local/bin/cronic http://habilis.net/cronic/cronic
...
# chmod 755 /usr/local/bin/cronic
Now to use it, edit your cron scripts to use cronic as a wrapper. Before:
# cat /etc/cron.daily/backup-web-server
#!/bin/sh
/usr/local/bin/backup-web-server.py /etc/backup-web-server.conf
And after:
# vi /etc/cron.daily/backup-web-server
# cat /etc/cron.daily/backup-web-server
#!/bin/sh
/usr/local/bin/cronic /usr/local/bin/backup-web-server.py /etc/backup-web-server.conf
That's it. Now, unless your cron job writes to stderr or exits non-zero, it will produce no output.
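For the curious, cronic itself is just a short shell script. The core idea can be sketched in a few lines; this is my own simplified version, not the real cronic:

```shell
#!/bin/sh
# quietly: run a command, show its output only if it exits non-zero
# or writes anything to stderr -- otherwise stay silent, so cron
# has nothing to mail.
quietly() {
    out=$(mktemp); err=$(mktemp)
    "$@" > "$out" 2> "$err"
    result=$?
    if [ "$result" -ne 0 ] || [ -s "$err" ]; then
        echo "'$*' exited $result"
        echo "--- stdout ---"; cat "$out"
        echo "--- stderr ---"; cat "$err"
    fi
    rm -f "$out" "$err"
    return "$result"
}

quietly true                      # produces no output at all
quietly sh -c 'echo oops >&2'     # prints the report, including "oops"
```

The real cronic also keeps a trace of the run and formats its report more carefully, so use it rather than this sketch for anything serious.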

Friday, March 13, 2009

Fixing apt-cache-report.pl

Update: Looks like this is fixed as of Ubuntu 9.04

I previously reported apt-cache-report was not working. In this post I will set about fixing it.

What I discovered is that the apt-cacher log format has changed. Old log:
# gzip -dc /var/log/apt-cacher/access.log.6.gz | head -1
Thu Oct 30 22:01:39 2008|192.168.1.12|MISS|189|security.ubuntu.com_ubuntu_dists_hardy-security_Release.gpg
New log:
# head -1 /var/log/apt-cacher/access.log
Tue Mar 10 01:45:59 2009|19924|127.0.0.1|EXPIRED|189|archive.ubuntu.com_ubuntu_dists_intrepid-updates_Release.gpg
So the new format has an extra column (the second column; I have no idea what it is) and is rearranged a little. I started changing /usr/share/apt-cacher/apt-cacher-report.pl to compensate for this. Note that it still needed to be able to parse the old logs, so some logic was involved to distinguish the two formats.
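The distinguishing logic boils down to counting columns. The same trick the Perl change uses can be sketched with awk against the two sample log lines above:

```shell
# Split on '|' and key off the field count: 5 fields = old format,
# 6 fields = new format (with the mystery second column).
parse() {
    echo "$1" | awk -F'|' '{
        if (NF == 5) { result = $3; bytes = $4 }  # old: date|ip|result|bytes|object
        else         { result = $4; bytes = $5 }  # new: date|?|ip|result|bytes|object
        print result, bytes
    }'
}

parse 'Thu Oct 30 22:01:39 2008|192.168.1.12|MISS|189|security.ubuntu.com_ubuntu_dists_hardy-security_Release.gpg'
parse 'Tue Mar 10 01:45:59 2009|19924|127.0.0.1|EXPIRED|189|archive.ubuntu.com_ubuntu_dists_intrepid-updates_Release.gpg'
# prints "MISS 189" then "EXPIRED 189"
```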

# cd /usr/share/apt-cacher/
# cp apt-cacher-report.pl apt-cacher-report.pl.orig
# vi apt-cacher-report.pl
# diff -c apt-cacher-report.pl.orig apt-cacher-report.pl

*** apt-cacher-report.pl.orig 2009-03-13 15:35:14.000000000 -0500
--- apt-cacher-report.pl 2009-03-13 17:04:45.000000000 -0500
***************
*** 109,121 ****
{
#$logfile_line =~ s/ /\+/g;
@line = split /\|/, $logfile_line;
! $req_date = $line[0];
! # $req_ip = $line[1];
! $req_result = $line[2];
! $req_bytes = 0;
! $req_bytes = $line[3] if $line[3] =~ /^[0-9]+$/;
! # $req_object = $line[4];
!
$lastrecord = $req_date;
if(!$firstrecord) {
$firstrecord = $req_date;
--- 109,133 ----
{
#$logfile_line =~ s/ /\+/g;
@line = split /\|/, $logfile_line;
! if (scalar(@line) == 5)
! {
! # 5 columns == old format
! $req_date = $line[0];
! # $req_ip = $line[1];
! $req_result = $line[2];
! $req_bytes = 0;
! $req_bytes = $line[3] if $line[3] =~ /^[0-9]+$/;
! # $req_object = $line[4];
! } else {
! # Assume new 6 column format
! $req_date = $line[0];
! # I don't know what $line[1] is
! # $req_ip = $line[2];
! $req_result = $line[3];
! $req_bytes = 0;
! $req_bytes = $line[4] if $line[4] =~ /^[0-9]+$/;
! # $req_object = $line[5];
! }
$lastrecord = $req_date;
if(!$firstrecord) {
$firstrecord = $req_date;


Now to test things by rerunning the apt-cacher daily cron job:

# /etc/cron.daily/apt-cacher
...this takes a while (~5 min)...

And then I browsed to http://localhost:3142/report/ and saw an updated report:

summary

Item              Value
Report generated  2009-03-13 17:04:50
Administrator     root@localhost
First request     Thu Oct 30 22:01:39 2008
Last request      Fri Mar 13 16:46:42 2009
Total requests    37119
Total traffic     11.487 GB

cache efficiency


            Cache hits         Cache misses        Total
Requests    24007 (64.67%)     13112 (35.32%)      37119
Transfers   8.018 GB (69.8%)   3.468 GB (30.19%)   11.487 GB

If anyone knows where I should submit this patch to, please drop me a comment.

Saturday, March 7, 2009

Fixing MythTV Database: Table './mythconverg/credits' is marked as crashed

My MythTV box became unresponsive earlier this week, prompting a power cycle. It seemed to come back up OK, except a cron job was spitting out the following email every 10 minutes:
DBD::mysql::st execute failed: Table './mythconverg/credits' is marked as crashed and last (automatic?) repair failed at /usr/share/perl5/MythTV/Program.pm line 195.
DBD::mysql::st fetchrow_array failed: fetch() without execute() at /usr/share/perl5/MythTV/Program.pm line 196.
I tried backing up the database manually and got a similar error:
# /etc/cron.weekly/mythtv-database
mythconverg.credits
warning : Table is marked as crashed and last repair failed
warning : 1 client is using or hasn't closed the table properly
error : Found 123809 keys of 124267
error : Corrupt
mysqldump: Got error: 144: Table './mythconverg/credits' is marked as crashed and last (automatic?) repair failed when using LOCK TABLES
Googling turned up the solution, but I didn't find the optimize_mythdb.pl script where the poster indicated. I did find it elsewhere, copied it over, and set permissions on it:
# mkdir /usr/share/mythtv/contrib
# cp /usr/share/doc/mythtv-backend/contrib/optimize_mythdb.pl /usr/share/mythtv/contrib
# chmod 755 /usr/share/mythtv/contrib/optimize_mythdb.pl
At this point I was able to run the script as the poster indicated:
# /usr/share/mythtv/contrib/optimize_mythdb.pl
Repaired/Optimized: `mythconverg`.`archiveitems`
Analyzed: `mythconverg`.`archiveitems`
....snip...
Now the backup script ran cleanly:
# /etc/cron.weekly/mythtv-database
#
The last thing I did, as the comments in the optimize_mythdb.pl script suggest, was add the script to the daily anacron list (update March 15, 2009: anacron apparently doesn't run scripts with a .pl suffix, so I needed to link this to a filename without the suffix):
# ln -s /usr/share/mythtv/contrib/optimize_mythdb.pl /etc/cron.daily/optimize_mythdb
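The reason the rename is needed, as I understand it: run-parts, which cron and anacron use on Debian/Ubuntu to execute these directories, only runs files whose names consist entirely of letters, digits, underscores, and hyphens. A dot fails that test, which is easy to check with the equivalent regular expression:

```shell
# Mimic run-parts' filename filter: only names matching this
# pattern would be executed from /etc/cron.daily.
printf '%s\n' optimize_mythdb.pl optimize_mythdb | grep -E '^[a-zA-Z0-9_-]+$'
# prints only: optimize_mythdb
```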