rageek

A place for Unix Thoughts and Ideas

Taming OSX Time Machine Backups

OSX’s Time Machine backup feature is very simple to enable and use.

Unfortunately, it is almost too simple: there is no mechanism for capping the amount of storage used for backups, and they will eventually grow to take over a drive of any size.

Really, the best way to work with it is to dedicate a partition to Time Machine and nothing else.

Time Machine will prune backups as they age and when you run out of space, but depending on that behavior is very limiting.

It turns out that Time Machine has a very handy command line interface called tmutil for listing and deleting backups. It also has some additional compare commands that look like they could be very useful for tracking down changed files.

m-m:~ $ tmutil
Usage: tmutil help <verb>

Usage: tmutil version

Usage: tmutil enable

Usage: tmutil disable

Usage: tmutil startbackup [-b|--block]

Usage: tmutil stopbackup

Usage: tmutil enablelocal

Usage: tmutil disablelocal

Usage: tmutil snapshot

Usage: tmutil delete snapshot_path ...

Usage: tmutil restore [-v] src dst

Usage: tmutil compare [-a@esmugtdrvEX] [-D depth] [-I name]
       tmutil compare [-a@esmugtdrvEX] [-D depth] [-I name] snapshot_path
       tmutil compare [-a@esmugtdrvEX] [-D depth] [-I name] path1 path2

Usage: tmutil setdestination mount_point
       tmutil setdestination [-p] afp://user[:pass]@host/share

Usage: tmutil addexclusion [-p] item ...

Usage: tmutil removeexclusion [-p] item ...

Usage: tmutil isexcluded item ...

Usage: tmutil inheritbackup machine_directory
       tmutil inheritbackup sparse_bundle

Usage: tmutil associatedisk [-a] mount_point volume_backup_directory

Usage: tmutil latestbackup

Usage: tmutil listbackups

Usage: tmutil machinedirectory

Usage: tmutil calculatedrift machine_directory

Usage: tmutil uniquesize path ...

Use `tmutil help <verb>` for more information about a specific verb.

The following is an example of listing my backups and then deleting one. Read more of this post
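
Since the full example sits behind that link, here is a minimal sketch of the same workflow (the volume and snapshot paths are illustrative placeholders, not my actual backups):

tmutil listbackups
# each line of output is a snapshot path that can be handed to delete
sudo tmutil delete /Volumes/TimeMachine/Backups.backupdb/m-m/2012-08-20-221510
# uniquesize reports how much space a given snapshot holds exclusively
tmutil uniquesize /Volumes/TimeMachine/Backups.backupdb/m-m/2012-08-27-231802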

Programmatically determining the closest server

We recently updated our server naming standard to be location agnostic.

Due to this change, I had to work out a new mechanism to programmatically locate the nearest server for my imaging, build, and update scripts.

My solution uses ping to measure the average round-trip time to each server and compares the averages to determine the closest one.

If the averages are the same, it uses the first server.

 
FIRSTSR=server1
SECSR=server2

case `uname -s` in
SunOS)
        FST=`ping  -vs $FIRSTSR 20 5 | awk -F/ '/^round|^rtt/{printf("%d\n",$6+.5)}'`
        SST=`ping  -vs $SECSR 20 5 | awk -F/ '/^round|^rtt/{printf("%d\n",$6+.5)}'`
;;
*)
        FST=`ping -s 20 -c 5 -v $FIRSTSR | awk -F/ '/^round|^rtt/{printf("%d\n",$6+.5)}'`
        SST=`ping -s 20 -c 5 -v $SECSR | awk -F/ '/^round|^rtt/{printf("%d\n",$6+.5)}'`
;;
esac;

if [ $FST -le $SST ]; then
        echo "Using $FIRSTSR for nfs mount"
else
        echo "Using $SECSR for nfs mount"
fi

Easily add log output to any shell script

I wrote this script a couple of years ago to better track and save the output of my build scripts during the JumpStart process.

I was looking for an easy way to have my script output go to both the console and a log file, without having to make extensive changes to all of my scripts.

The script works by creating new file descriptors and re-mapping stdout and stderr.
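
The idea, as a minimal sketch (this is not the actual include_log.sh, just the general technique it is built on; file names are placeholders):

# create a fifo and have a background tee copy everything written to it into the log
LOGFILE=/var/tmp/myscript.log
PIPEFILE=/tmp/myscript.$$.pipe
mkfifo "$PIPEFILE"
tee -a "$LOGFILE" < "$PIPEFILE" &

# save the original stdout/stderr on spare descriptors, then re-map both
# into the fifo so everything the script prints also lands in the log
exec 3>&1 4>&2
exec > "$PIPEFILE" 2>&1

# pre_exit reverses the mapping before the script exits or reboots
pre_exit() {
        exec 1>&3 2>&4
        rm -f "$PIPEFILE"
}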

Here is a download link for log_include.sh

It can be added to any script by sourcing it at the top of the script.

if [ -f /admin/include_log.sh ]; then
    # Adds script logging output
    # run pre_exit prior to exiting/rebooting to close log and reset stdout/stderr
    #
    . /admin/include_log.sh
fi

By default, when sourced, it will immediately start writing to the default log directory specified in log_include.sh, with a log file named after the calling script and the date:


Saving output to /var/sadm/system/include_log/solaris_qa_082712_2318.log

Optionally, variables can be set in the calling script prior to sourcing to modify the logging behavior:

Variables:

_PIPEFILE – Name of the FIFO file; defaults to /tmp/${Script basename}.pipe-_${date & time}
_CONSOLEOUT – Write output to the console in addition to stdout (no effect if already running on the console)
_CONSOLEDEV – Path to the console character device on the system
_LOGFILENAME – The full path of the output log file
_LOGDIR – The directory to write logs to; defaults to the _DEFAULT_LOGDIR variable
_LOGFILEBASE – The base part of the filename; defaults to the calling script name

This should work on both Linux and Solaris.

Prior to exiting your scripts, you will want to call the pre_exit function, which will close the log file and reset stdout/stderr.

Configuring ODM devices in a Solaris 11 Zone

I was happy to see that Symantec supplied an IPS repository for Storage Foundation 6.0pr1.

I was disappointed to see that the documentation for installing and enabling Storage Foundation in Solaris 11 zones was incomplete and didn’t work.

After digging through the documentation and performing a little troubleshooting, here is the procedure for installing and enabling ODM support for Solaris 11 zones.

1. The first step is to add the IPS repository as a publisher and install the packages, then unset the publisher.

root@testzone: # ls
VRTSpkgs.p5p  info
pkg set-publisher -P -g `pwd`/VRTSpkgs.p5p Symantec
pkg install --accept   VRTSvlic VRTSodm VRTSperl

If you are using the zone with VCS, you can also install the three VCS packages specified in the install docs:

pkg install --accept   VRTSvcs VRTSvcsag VRTSvcsea

Unset the publisher

pkg unset-publisher Symantec

2. Now we will update the zone configuration to add the lofs mount for the Veritas license files and the ODM device mappings, and to grant the zone permission to perform an ODM mount. You will want to reboot the zone after this step.
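
The exact commands are behind the link below; as a rough sketch (the zone name and property values here are assumptions based on the Veritas documentation, not verified output):

zonecfg -z testzone
add fs
set dir=/etc/vx/licenses/lic
set special=/etc/vx/licenses/lic
set type=lofs
end
add device
set match=/dev/vxportal
end
add device
set match=/dev/fdd
end
add device
set match=/dev/odm
end
set fs-allowed=vxfs,odm
commit
exit

zoneadm -z testzone reboot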
Read more of this post

Determining the global zone name from inside a Solaris 11 zone

The default security of Solaris zones masks the name of the host’s global zone.

In my environment we don’t have a requirement to mask the global zone name, and it is very useful for our internal customers as well as our engineering staff to have this info easily available.

There are a couple of ways to go about this. I had two primary requirements for my solution: the information provided to the zone needed to stay accurate and up to date if I migrate the zone to another host, and it should not be obvious that this information is being provided to the zone.

One easy solution is to create a file on the global zone and loopback mount it into the zone.
A very simple solution, but LOFS mounts appear in the zone’s df output, and I think that is a little too obvious.

Previously, I used a trick with the arp cache and probe-based IPMP to figure out the global zone.
But in Solaris 11, transitive probing and virtual NICs have put an end to that.

My solution for the problem uses lofi and device mapping to make the information available to the zone.

lofiadm is a great tool that allows you to map files to devices, which is very handy for mounting ISOs or updating boot miniroots. The backing file can contain arbitrary data, but its size must be a multiple of 512 bytes, which can be arranged using the mkfile command and cat.

The first thing I do is generate the file in the global zone. I do this by using the hostname command and then creating another file to pad the original file out to 512 bytes. Read more of this post
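
The remaining steps are behind the link above; the overall idea, as a hedged sketch (file names, the zone name, and the lofi device number are assumptions):

# in the global zone: write the hostname, then pad the file out to 512 bytes
hostname > /var/tmp/gzname
SZ=`wc -c < /var/tmp/gzname`
mkfile `expr 512 - $SZ` /var/tmp/gzpad
cat /var/tmp/gzpad >> /var/tmp/gzname

# map the file to a lofi device and present that device to the zone
lofiadm -a /var/tmp/gzname
zonecfg -z testzone "add device; set match=/dev/lofi/1; end; commit"

# inside the zone (after a reboot), the name can be read back from the device:
# dd if=/dev/lofi/1 bs=512 count=1 2>/dev/null | strings | head -1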

Mapping OS Devices to Veritas Device names in Veritas 5.1

With Veritas 5.1, if you are using native DMP, enclosure-based naming is mandatory.

Now this is good, as 5.1 has enhancements where the device name includes the array’s LDEV number, which makes storage grows much easier to handle.

But it can be a bit of a PITA if you are building fresh, or you don’t have the LDEV number and are just trying to find the Veritas device name for a set of LUNs.

This is an expansion of a one-liner I posted a while back; it has been expanded to include the Veritas device name in the output.

Here it is; you can copy/paste it into a bash session. You can change the field order and padding by modifying the awk statement.

for c in $(iostat -En | grep Soft | awk '{print $1}' | cut -d t -f1 | sort | uniq); do 
for i in `iostat -En | grep Soft | awk '{print $1}' | grep "$c"`;do 
vxdisk list $i &>/dev/null || continue
DEV=`vxdisk list $i | grep Device | awk '{print $2}'`
SZ=$(iostat -En $i | grep Size | cut -d'<' -f2)
echo "$i ${SZ%% *} $DEV" | awk '{printf ( "%s\t%s %4d GB (%d MB)\n", $1, $3, $2/1024/1024/1024+.05, $2/1024/1024+.05) }'
done | sort -t d +1 -n; done

This is the output of vxdisk list

Read more of this post

Strictly Limiting ZFS ARC cache size

On the majority of my servers I use ZFS just for the root filesystem, and allowing the ARC to grow unchecked is counterproductive for tracking server utilization and for running some applications.

Consequently, I severely limit the amount of memory used and cap it at 100MB.

If you’re going to limit the ARC cache, just about every ZFS tuning guide suggests capping it via zfs:zfs_arc_max.

However, I was digging into the memory utilization of one of my Tibco servers and noticed that the ZFS ARC cache was quite a bit larger than the value specified in /etc/system:

root@testserver # kstat zfs:0:arcstats:size | grep size | awk '{printf "%2dMB\n",  $2/1024/1024+0.5}'
1990MB

I had actually noticed this in the past, but didn’t research it any further since it was insignificant compared to the free RAM on that server.

I checked a couple of other servers and noticed that it was consistently around 2GB on most of them.

root@testserver # grep zfs /etc/system
set zfs:zfs_arc_max = 104857600

Checking kstat, I noticed a minimum parameter for ZFS that I hadn’t seen before, with a value very similar to my ARC size:

root@testserver # kstat -p zfs:0:arcstats | head -4
zfs:0:arcstats:c        2101237248
zfs:0:arcstats:class    misc
zfs:0:arcstats:c_max    104857600
zfs:0:arcstats:c_min    2101237248

Referring to the Oracle Solaris Tunable Parameters guide, the zfs_arc_min parameter defaults to 1/32nd of physical memory, with a floor of 64MB. That works out to 2GB on a 64GB system and 4GB on a 128GB one.

So I now include both the maximum and minimum values in /etc/system, and the limits behave as expected:

root@testserver # grep zfs_arc /etc/system
set zfs:zfs_arc_max = 104857600
set zfs:zfs_arc_min = 104857600

root@testserver # kstat -p zfs:0:arcstats | head -3
zfs:0:arcstats:c        104857600
zfs:0:arcstats:c_max    104857600
zfs:0:arcstats:c_min    104857600

root@testserver # kstat zfs:0:arcstats:size | grep size | awk '{printf "%2dMB\n",  $2/1024/1024+0.5}'
100MB

Now that I have realized that the majority of my servers have ARC caches sized at 1/32nd of RAM, I can take a good look at whether I should increase my intended defaults or leave them as is.

Vertically mounting the new Airport Express

Apple recently released an updated version of their Airport Express product.

I have been very pleased with my Airport Extreme. However, now that I finally got an iPad and have been using it in various spots around the house, I have been noticing a definite drop-off in wireless reception in the back half of my house.

This is easily solved by “extending” my current wireless network.

Now, I like all the improvements in the new Airport Express, primarily the ability to extend the network on both the 2.4GHz and 5GHz bands, but I’m one of those people who really liked how nicely the old one just hung off an outlet on the wall.

Looking at the plug for the power cord, I realized it looked very familiar and fit perfectly with my leftover power plug from my MacBook Pro.

Now I can happily let it sit on the wall.

I will admit that this is probably not the optimal placement for the antenna, but for how I’m using it, it is perfect!

ILOM Quick Reference

Now that ALOM mode has been eliminated with the T3/T4 platforms, ILOM is the interface going forward.

While I think ALOM is easier to remember (IMHO), ILOM does provide an interface that is consistent across Sun platforms, and I’m sure there are some other good reasons for going this direction.

Here is a quick reference for common tasks on the ILOM.

There is also a nice cheat sheet provided by Oracle for iLom Basic CLI Commands

Power On/Off/Reset system 

start /SYS
stop /SYS
stop -force /SYS
reset /SYS

Start console

start /SP/console
start -script /SP/console
start -script -force /SP/console

Updating firmware

load -source tftp://10.0.0.10/ilom.X4150_X4250-3.0.6.15.c.r62872.pkg
load -source http://10.0.0.10/ilom.X4150_X4250-3.0.6.15.c.r62872.pkg

Configure Network
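
The full set of commands is behind the link below; as a hedged sketch, a static address is typically set through the SP’s pending network properties (the addresses here are placeholders):

set /SP/network pendingipdiscovery=static
set /SP/network pendingipaddress=10.0.0.20
set /SP/network pendingipnetmask=255.255.255.0
set /SP/network pendingipgateway=10.0.0.1
set /SP/network commitpending=true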

Read more of this post

Customizing Your Solaris 11 Auto Installer builds using a first boot script

Unlike Solaris 10, Solaris 11 has no equivalent to finish scripts.

Instead, you create a script that is installed as an IPS package during the install, run at first boot, and then removed.

Oracle outlines this here
http://docs.oracle.com/cd/E23824_01/html/E21798/firstboot-1.html

I will go over the process, give the example from my site, and cover adding it to your AI manifest.

The first thing you will need to do is create the firstboot SVC manifest and the script itself.

The manifest is described here:
http://docs.oracle.com/cd/E23824_01/html/E21798/firstboot-2.html

And the logic behind the first boot script here:
http://docs.oracle.com/cd/E23824_01/html/E21798/glcfe.html
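
If you are on Solaris 11.1 or later, the manifest can be generated rather than written by hand; a hedged sketch (the service name and script path are placeholders):

# generate an SMF manifest that runs the first boot script at boot
svcbundle -o first-boot-script.xml \
    -s service-name=site/first-boot-script \
    -s start-method=/opt/site/first-boot-script.sh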

As with all things Unix, there are multiple ways and philosophies for managing build scripts.

For my Solaris 11 baseline, I chose to write a first boot script that determines my closest package repository and then mounts and copies over my directory of admin scripts.
It then runs a script called first_boot_config.sh, which actually does all of the work on the first boot. Some could argue that it would be better to have all the work done in the original script run at boot and then version the IPS package, but I was looking to keep things simple, consistent with my previous builds, and easy to update, especially while I was refining my Solaris 11 baseline. I might move it all into one script in the future.
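
As a rough sketch, the wrapper ends up looking something like this (paths, the service name, and the package name are placeholders, not my actual script):

#!/bin/sh
# first boot wrapper: do the site-specific work once, then remove itself
. /lib/svc/share/smf_include.sh

# pick the closest repository / admin server and run the real config script
# (the selection logic is the ping comparison covered earlier on this blog)
# mount -F nfs server1:/export/admin /admin && /admin/first_boot_config.sh

# make sure this service never runs again, then clean up its package
svcadm disable svc:/site/first-boot-script:default
pkg uninstall pkg:/site/first-boot-script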

The general flow of my system build is:
Read more of this post
