rageek

A place for Unix Thoughts and Ideas

Tag Archives: Unix

Programmatically determining the closest server

We recently updated our server naming standard to be location agnostic.

Due to this change, I had to work out a new mechanism to programmatically locate the nearest server for my imaging, build, and update scripts.

My solution uses ping to measure the average round-trip time to each candidate server and compares the results to determine the closest one.

If the averages are equal, the first server is used.

 
FIRSTSR=server1
SECSR=server2

# Pull the average RTT from the ping summary line. Splitting on "= "
# and then "/" keeps the average in the same field position whether or
# not the line ends with a stddev (Solaris) or mdev (Linux) figure.
avg_rtt() {
        awk -F'= ' '/^round|^rtt/{split($2,a,"/"); printf("%d\n",a[2]+.5)}'
}

case `uname -s` in
SunOS)
        FST=`ping -vs $FIRSTSR 20 5 | avg_rtt`
        SST=`ping -vs $SECSR 20 5 | avg_rtt`
;;
*)
        FST=`ping -s 20 -c 5 -v $FIRSTSR | avg_rtt`
        SST=`ping -s 20 -c 5 -v $SECSR | avg_rtt`
;;
esac

if [ "$FST" -le "$SST" ]; then
        echo "Using $FIRSTSR for nfs mount"
else
        echo "Using $SECSR for nfs mount"
fi

Easily add log output to any shell script

I wrote this script a couple of years ago to better track and save the output of my build scripts during the jumpstart process.

I wanted an easy way to send my script output to both the console and a log, without having to make extensive changes to all of my scripts.

The script works by creating new file descriptors and re-mapping stdout and stderr.
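The descriptor re-mapping technique can be sketched in a few lines of shell (a minimal illustration of the idea; the file names here are examples, not the ones log_include.sh actually uses):

```shell
#!/bin/sh
# Save the original stdout/stderr on spare descriptors, then re-map
# both into a fifo that tee copies to the console and a log file.
LOGFILE=/tmp/demo_include.log
PIPEFILE=/tmp/demo_include.pipe

rm -f "$LOGFILE" "$PIPEFILE"
mkfifo "$PIPEFILE"
tee -a "$LOGFILE" < "$PIPEFILE" &   # reader: copies fifo traffic to the log
TEEPID=$!

exec 5>&1 6>&2                      # save the original stdout/stderr
exec > "$PIPEFILE" 2>&1             # everything now flows through the fifo

echo "this line goes to both the console and $LOGFILE"

# the pre_exit step: restore the descriptors, which closes the fifo's
# write end, lets tee see EOF, and flushes the log
exec 1>&5 2>&6 5>&- 6>&-
wait $TEEPID
rm -f "$PIPEFILE"
```

Because tee is started before the descriptors are re-mapped, its stdout is still the original console, so output lands in both places.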

Here is a download link for log_include.sh

It can be added to any script by sourcing it at the top of the script.

if [ -f /admin/include_log.sh ]; then
    # Adds script logging output
    # run pre_exit prior to exiting/rebooting to close log and reset stdout/stderr
    #
    . /admin/include_log.sh
fi

By default, this writes to the log directory specified in log_include.sh.

When sourced, it immediately begins logging to that directory, with a log file named after the calling script and the date:


Saving output to /var/sadm/system/include_log/solaris_qa_082712_2318.log

Optionally, variables can be set in the calling script prior to sourcing to modify the logging behavior:

Variables:

_PIPEFILE – Specify name of Fifo file, defaults to /tmp/${Script basename}.pipe-_${date & time}
_CONSOLEOUT – Write output to Console in addition to stdout (no effect if running on console)
_CONSOLEDEV – Path to Console Character Device on System
_LOGFILENAME – The full path name of output log file
_LOGDIR – The directory to use for writing logs to, defaults to _DEFAULT_LOGDIR variable
_LOGFILEBASE – The base part of the filename to use, default is to use the calling script name

This should work on both Linux and Solaris.

Prior to exiting your scripts, call the pre_exit function, which will close the log file and reset stdout/stderr.
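Putting the pieces together, a calling script might look like this (the variable names come from the list above; the values and the patching command are hypothetical examples):

```shell
#!/bin/sh
# Override logging defaults before sourcing (example values).
_LOGDIR=/var/log/build          # write logs here instead of the default
_LOGFILEBASE=qa_build           # base the log name on this, not the script name

if [ -f /admin/include_log.sh ]; then
    . /admin/include_log.sh     # from here on, output is copied to the log
fi

echo "Patching server..."       # reaches both the console and the log

# close the log and restore stdout/stderr before exiting; guarded so the
# script still runs cleanly when the include was not found
if type pre_exit >/dev/null 2>&1; then
    pre_exit
fi
```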

Strictly limiting ZFS ARC cache size

On the majority of my servers I use ZFS only for the root filesystem, and allowing the ARC to grow unchecked is counterproductive both for tracking server utilization and for running some applications.

Consequently, I severely limit the amount of memory it can use, capping it at 100MB.

If you're going to limit the ARC cache, just about every ZFS tuning guide suggests capping it via zfs:zfs_arc_max.

However, while digging into the memory utilization of one of my Tibco servers, I noticed that the ZFS ARC was quite a bit larger than the value specified in /etc/system:

root@testserver # kstat zfs:0:arcstats:size | grep size | awk '{printf "%2dMB\n",  $2/1024/1024+0.5}'
1990MB

I had actually noticed this in the past, but didn't research it further since it was insignificant compared to the free RAM on that server.

I checked a couple of other servers and found it was consistently around 2GB on most of them.

root@testserver # grep zfs /etc/system
set zfs:zfs_arc_max = 104857600

In checking kstat, I noticed a minimum parameter (c_min) that I hadn't paid attention to before, with a value suspiciously close to my observed ARC size.

root@testserver # kstat -p zfs:0:arcstats | head -4
zfs:0:arcstats:c        2101237248
zfs:0:arcstats:class    misc
zfs:0:arcstats:c_max    104857600
zfs:0:arcstats:c_min    2101237248

Referring to the Oracle Solaris Tunable Parameters guide, zfs_arc_min defaults to 1/32 of physical memory, with a 64MB floor: 2GB on a 64GB system, 4GB on a 128GB one.
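That 1/32 rule is easy to sanity-check against the kstat c_min values (a quick illustration, nothing more):

```shell
# zfs_arc_min defaults to physmem/32 (with a 64MB floor), so:
for mem_gb in 64 128; do
    awk -v m="$mem_gb" 'BEGIN { printf "%dGB RAM -> %dGB c_min\n", m, m/32 }'
done
# -> 64GB RAM -> 2GB c_min
# -> 128GB RAM -> 4GB c_min
```

On the 64GB test server above, 64GB/32 = 2GB, which matches the 2101237248-byte c_min that kstat reported.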

So I now set both the maximum and minimum values in /etc/system, and the limit holds as intended.

root@testserver # grep zfs_arc /etc/system
set zfs:zfs_arc_max = 104857600
set zfs:zfs_arc_min = 104857600

root@testserver # kstat -p zfs:0:arcstats | head -3
zfs:0:arcstats:c        104857600
zfs:0:arcstats:c_max    104857600
zfs:0:arcstats:c_min    104857600

root@testserver # kstat zfs:0:arcstats:size | grep size | awk '{printf "%2dMB\n",  $2/1024/1024+0.5}'
100MB

Now that I have realized that most of my servers have their ARC minimum set to 1/32 of RAM, I can take a hard look at whether I should raise my intended defaults or leave them as they are.