
All times are UTC - 6 hours





[ 7 posts ]
PostPosted: Sun Jun 04, 2006 3:04 pm 
Joined: Sun Jun 27, 2004 12:49 am
Posts: 17
Location: Ashton, Ontario, Canada
I'm sure many of you are like me: afraid that if I make a change, such as installing a new ivtv driver or even upgrading to the latest KnoppMyth release, I'll break something and have the family impatiently standing over my shoulder until I fix it...

tjc's new backup/restore tools in R5C7 will provide a recovery mechanism for many cases.

Here's something that will provide an extra safety net for more extreme cases. The following script will create a backup of the primary KnoppMyth partition, which can be used as an emergency recovery vehicle when things REALLY go astray.
Code:
#!/bin/bash
# Utility to back up the primary KnoppMyth partition
# Submitted by Ken Scales (ashtonp)
# Mostly a wrapper around a tar command to back up a single partition.

if test $(id -u) != 0; then
 echo "Error: You must be root to run this script!"
 exit 1
fi

DATE=$(/bin/date +%Y%B%d)

echo ""
echo "Note: disregard any messages about \"different filesystem\" or"
echo "  \"socket\" not being archived. These are intentional."
sleep 3
echo ""
echo "Starting backup for file system partition \"/\"..."
echo ""
echo "  Stopping mythtv-backend..."
/etc/init.d/mythtv-backend stop
echo "  Stopping mysql..."
/etc/init.d/mysql stop

# For an uncompressed archive, uncomment the next 2 lines, and comment the 2
# similar lines following them. Also see similar changes near end of script.
#echo "Executing: /bin/tar clvf /myth/backup/slash_$DATE.tar --exclude=/var/log / 2>&1 > /myth/backup/tar_clvf_slash_$DATE.log"
# /bin/tar clvf /myth/backup/slash_$DATE.tar --exclude=/var/log / 2>&1 > /myth/backup/tar_clvf_slash_$DATE.log

echo "Executing: /bin/tar czlvf /myth/backup/slash_$DATE.tgz --exclude=/var/log / 2>&1 > /myth/backup/tar_czlvf_slash_$DATE.log"
/bin/tar czlvf /myth/backup/slash_$DATE.tgz --exclude=/var/log / 2>&1 > /myth/backup/tar_czlvf_slash_$DATE.log

echo ""
echo "  \"tar\" file creation complete."
echo ""
echo "  Restarting mysql..."
/etc/init.d/mysql start

echo "  Restarting mythtv-backend..."
/etc/init.d/mythtv-backend start

echo ""
# For uncompressed archive, uncomment the next 2 lines, and comment the 2
# similar lines following them. Also see similar changes earlier in script.
#echo "Backup was created as: \"/myth/backup/slash_$DATE.tar\"."
#echo "  Logfile is in: \"/myth/backup/tar_clvf_slash_$DATE.log\"."
echo "Backup was created as: \"/myth/backup/slash_$DATE.tgz\"."
echo "  Logfile is in: \"/myth/backup/tar_czlvf_slash_$DATE.log\"."
echo ""
echo "File system backup for partition \"/\" is complete."
echo ""

Paste the above code into a file (say "/usr/local/bin/backup_slash_partition"), then make it executable:
Code:
chmod 755 /usr/local/bin/backup_slash_partition

This script temporarily shuts off both mythtv-backend and mysql, so make sure that no events are scheduled before you run it.

To run the script, as "root":
Code:
/usr/local/bin/backup_slash_partition

This will create a compressed file in /myth/backup containing an image of the main "/" partition (excluding log files and sockets). On my system (AMD Athlon 2800+; KnoppMyth R5B7) it takes about 5 minutes to run and creates a compressed backup of about 900 MB.

Note the backup filename displayed (specifically, "slash_YYYYMonthDD.tgz" -- substitute appropriately below.)
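Before relying on the backup, it's worth confirming the archive is readable. A quick check is to list its contents with "tar tzf"; the demo below round-trips a scratch file (the paths here are illustrative, the real check would run against your /myth/backup/slash_YYYYMonthDD.tgz):

```shell
# Demo in a throwaway directory; substitute your real backup path.
demo=$(mktemp -d)
echo "hello" > "$demo/file.txt"
tar czf "$demo/backup.tgz" -C "$demo" file.txt
# 'tar tzf' lists the archive without extracting; a zero exit
# status means gzip and tar both consider the file intact.
if tar tzf "$demo/backup.tgz" > /dev/null; then
    result="archive OK"
else
    result="archive DAMAGED"
fi
echo "$result"
rm -rf "$demo"
```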

There are several cases where this backup file could be used for recovery, but one might be an unsuccessful upgrade installation. In this case, the steps would (ROUGHLY) be:
Code:
reboot with the KnoppMyth installation CD
select "exit" from the installation menu
umount /dev/hda1
mkfs.ext3 /dev/hda1
mount /dev/hda1 /mnt/hdinstall
mkdir /mnt/myth
mount /dev/hda4 /mnt/myth
# Change the above to reflect where your /myth partition is located (/dev/hda3 on R5[BC]7 fresh installs)
cd /mnt/hdinstall
tar xzvf /mnt/myth/backup/slash_YYYYMonthDD.tgz
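The cd-and-unpack step above can be rehearsed safely in a scratch directory before you wipe anything; a minimal sketch with made-up file names:

```shell
# Round-trip one file through tar the same way the restore step does.
scratch=$(mktemp -d)
mkdir "$scratch/root" "$scratch/restore"
echo "etc config" > "$scratch/root/settings.conf"
# Stand-in for the backup created earlier:
tar czf "$scratch/slash.tgz" -C "$scratch/root" settings.conf
# The restore step: cd into the target mount point and unpack.
(cd "$scratch/restore" && tar xzf "$scratch/slash.tgz")
restored=$(cat "$scratch/restore/settings.conf")
echo "$restored"
rm -rf "$scratch"
```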

You may also need to update lilo:
Code:
chroot /mnt/hdinstall
lilo
exit

Reboot, and you should be back to the point where you took your snapshot, including all database and recording info. If your database has been updated since the snapshot was taken, use the standard KnoppMyth Backup/Restore tools (e.g., tjc's toolbox) to recover those changes.

Note: any changes to the /myth directory (e.g., /myth/avimanager, /myth/mythburn, and /myth/nuv2disc) made during a KnoppMyth upgrade/installation will not be undone by this method, and they may need to be re-installed. Also, if you are using LVM for your /myth partition, you will need to do some extra steps that I'm not familiar with.

_________________
ashtonp (AKA Ken)
KnoppMyth-ing since July, 2004...
Linux Registered User #79908


PostPosted: Sun Jun 04, 2006 4:09 pm 
Joined: Thu Mar 25, 2004 11:00 am
Posts: 9551
Location: Arlington, MA
That's a whole lot of data (roughly 3.1 GB on my box). I'd recommend using the j option for your tar command to get bzip2 compression, rather than z, which just gets you gzip. I used bzip2 for compressing the old DB dumps, despite the time penalty, because it gave me an extra 20-30% reduction in disk usage.
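The change tjc suggests is a one-letter swap in the tar flags: z selects gzip, j selects bzip2. A tiny demonstration on scratch data (the real script would write to /myth/backup; bzip2 must be installed):

```shell
src=$(mktemp -d)
echo "some data" > "$src/file.txt"
tar czf "$src/demo.tgz"     -C "$src" file.txt   # z = gzip
tar cjf "$src/demo.tar.bz2" -C "$src" file.txt   # j = bzip2
# Both archives list back cleanly with the matching flag:
gz_ok=$(tar tzf "$src/demo.tgz" > /dev/null && echo yes)
bz_ok=$(tar tjf "$src/demo.tar.bz2" > /dev/null && echo yes)
echo "gzip: $gz_ok  bzip2: $bz_ok"
rm -rf "$src"
```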


PostPosted: Sun Jun 04, 2006 5:07 pm 
Joined: Sun Jun 27, 2004 12:49 am
Posts: 17
Location: Ashton, Ontario, Canada
tjc wrote:
I'd recommend using the j option for your tar command to get bzip2 compression, rather than z which just gets you gzip. I used this for compressing the old DB dumps despite the time penalty there because it gave me an extra 20-30% reduction in disk usage.

Glad you raised that point. When I first started making these backups for myself, I didn't use compression at all -- I knew my storage availability, and I watched it closely. However, when I thought about sharing the idea with others, I realized that many folks would be more concerned with file size.

In fact my first thought was to use the "j" (bzip2) option for the better compression it gives, but then remembered that the backend is stopped while the backup is being created, and I decided to compromise a bit on the compression to minimize the duration.

It's easily changed if folks would prefer that.

Cheers.



PostPosted: Sun Jun 04, 2006 7:42 pm 
Joined: Thu Mar 25, 2004 11:00 am
Posts: 9551
Location: Arlington, MA
When you're doing it "on the fly" it doesn't seem to add significant overhead. I'm guessing that other factors predominate then.


PostPosted: Sun Jun 04, 2006 8:51 pm 
Joined: Sun Jun 27, 2004 12:49 am
Posts: 17
Location: Ashton, Ontario, Canada
ashtonp wrote:
In fact my first thought was to use the "j" (bzip2) option for the better compression it gives, but then remembered that the backend is stopped while the backup is being created, and I decided to compromise a bit on the compression to minimize the duration.

Well, I thought I'd gather some comparison data so folks could weigh the trade-off and provide their opinions.

I created an alternate script that uses bzip2 compression, then used the 'time' utility to determine how long each version took:
    For gzip compression (tar czlvf):
    real 4m0.581s
    user 3m7.055s
    sys 0m17.819s
    -rw-r--r-- 1 root root 900164106 Jun 4 21:28 slash_2006June04.tgz

    For bzip2 compression (tar cjlvf):
    real 17m52.152s
    user 16m20.330s
    sys 0m18.166s
    -rw-r--r-- 1 root root 809244478 Jun 4 21:48 slash_2006June04.tar.bz2

So using bzip2 reduced the file size in my case by about 10% (91M, or about 3-6 minutes of video, or 90 minutes of mp3s), but increased the run time from 4 minutes to almost 18 minutes. (tjc, I suspect the difference from what you observed is due to the types and sizes of the files being compressed in each case.)

Another approach would be to make it a 2-stage process, first doing an uncompressed tar (during which the backend would be suspended), then running bzip2 on the tarfile. But now what was originally a simple utility starts taking on additional complexity. And from the user's perspective, it would probably still take about 18 minutes to complete.
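A sketch of that two-stage idea, with the service stop/start reduced to a comment and scratch paths standing in for /myth/backup (an assumption, not the script as posted; bzip2 must be installed):

```shell
work=$(mktemp -d)
echo "payload" > "$work/file.txt"
# Stage 1: plain, uncompressed tar -- the only part needing downtime.
tar cf "$work/slash.tar" -C "$work" file.txt
# (Here the real script would restart mysql and mythtv-backend.)
# Stage 2: compress at leisure; bzip2 replaces slash.tar with slash.tar.bz2.
bzip2 "$work/slash.tar"
two_stage_ok=$([ -f "$work/slash.tar.bz2" ] && echo yes)
echo "$two_stage_ok"
rm -rf "$work"
```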

Cheers.



PostPosted: Sun Jun 04, 2006 10:20 pm 
Joined: Thu Mar 25, 2004 11:00 am
Posts: 9551
Location: Arlington, MA
Yeah, the numbers are eye-opening. Your results are worse than I remember, but they confirm my basic observation. There's enough other disk-bound stuff going on that the extra compression overhead is less obvious. Unfortunately, it's far from negligible. My tests with the standard backup show about the same 4-to-1 time ratio for about the same wimpy 10% space savings. The extra time was made even less obvious by the fact that the overall time for the standard backup is shorter. On the other hand, my timing ratios for compressing mythconverg.sql are far more dramatic.
Code:
root@black2:/myth/backup# time bzip2 mythconverg.sql

real    3m35.315s
user    3m27.658s
sys     0m0.815s
root@black2:/myth/backup# ls -l mythconverg.sql.bz2
-rw-------  1 root root 9691910 May 16 21:02 mythconverg.sql.bz2
root@black2:/myth/backup# time bunzip2 mythconverg.sql.bz2

real    0m9.685s
user    0m8.578s
sys     0m0.664s
root@black2:/myth/backup# time gzip mythconverg.sql

real    0m5.985s
user    0m5.509s
sys     0m0.266s
root@black2:/myth/backup# ls -l mythconverg.sql.gz
-rw-------  1 root root 13173224 May 16 21:02 mythconverg.sql.gz
root@black2:/myth/backup# time gunzip mythconverg.sql.gz

real    0m1.882s
user    0m1.120s
sys     0m0.515s


Here you can see the 25% reduction in size, which is nice, but frankly the extra time (22x) is so absurd that I'm thinking about sending Cecil an updated version of the backup/restore scripts which uses gzip for compression and autodetects the right decompression options. That way the space-hungry can recompress after the fact if needed and everything still works.
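Autodetecting the right decompression option can be as simple as switching on the archive's extension; a minimal sketch (not the actual KnoppMyth restore code, and the helper name is made up):

```shell
# Hypothetical helper: map a backup file name to the tar decompression flag.
decompress_flag() {
    case "$1" in
        *.tar.bz2|*.tbz2) echo "j" ;;
        *.tar.gz|*.tgz)   echo "z" ;;
        *.tar)            echo ""  ;;
        *)  echo "unrecognized archive: $1" >&2; return 1 ;;
    esac
}
flag=$(decompress_flag "slash_2006June04.tgz")
# The restore would then run: tar x${flag}f <archive>
echo "$flag"
```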


PostPosted: Mon Jun 05, 2006 8:12 pm 
Joined: Sun Jun 27, 2004 12:49 am
Posts: 17
Location: Ashton, Ontario, Canada
tjc wrote:
Yeah, the numbers are eye opening.

Wow, your example of 22x execution time for a 25% space saving sure does grab one's attention!

I thought about this discussion today, and I realized that the 91M of extra compression that I got with bzip2 would represent about 8 hours of download time on my 28.8k dial-up, though it doesn't represent very much compared to the 580G storage on my Myth box. Different perspectives...

Hmm... maybe we should re-title this thread "Duel of the zippers"?
Cheers.



Powered by phpBB® Forum Software © phpBB Group
