Monthly Archives: May 2010

[Solaris Tip] savecore: not enough space

Our server panicked, and there is no space left on the /var partition to hold the crash dump. Worry no more: there is still a way to generate the core files with the help of savecore, so Sun Support won't have to wait for another panic before they get their dump.

From the man pages of savecore(1M):

The savecore utility  saves  a  crash  dump  of  the  kernel (assuming 
that one was made) and writes a reboot message in the shutdown log. It is
invoked by the dumpadm service  each time the system boots.

Check the /var/adm/messages for the size of the core to be retrieved:

# grep savecore /var/adm/messages
May 26 16:01:51 solaris savecore: [ID 570001 auth.error] reboot after panic: 
sync initiated
May 26 16:01:51 solaris savecore: [ID 353609 auth.error] not enough space in 
/var/crash/solaris (4692 MB avail, 48151 MB needed)

Find a filesystem with enough free space to hold the dump, such as /tmp:

# df -h /tmp
Filesystem             Size   Used  Available Capacity  Mounted on
swap                   169G    88K       169G     1%    /tmp

Go to that directory and create a temporary directory to save the core files:

# cd /tmp
# mkdir corefiles
# cd corefiles

Then regenerate the core files:

# savecore -dv .

The -d flag tells savecore to disregard the dump-already-saved flag, -v makes it verbose, and the . (dot) is the directory where the dump files are written.
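Before pointing savecore at a directory, it helps to confirm the space is really there. Below is a minimal sketch of such a check; REQUIRED_MB is whatever /var/adm/messages reported as "needed", and the parsing assumes df's usual single-line, 1K-block output:

```shell
# Hypothetical pre-flight check: does $DIR have at least $REQUIRED_MB free?
# REQUIRED_MB comes from the "needed" figure savecore logged.
REQUIRED_MB=48151
DIR=/tmp

# df -k reports 1K blocks; the "avail" column is the 4th field.
# (Assumes the filesystem name does not wrap df's output onto two lines.)
AVAIL_MB=$(( $(df -k "$DIR" | awk 'NR==2 {print $4}') / 1024 ))

if [ "$AVAIL_MB" -ge "$REQUIRED_MB" ]; then
    echo "$DIR has enough space (${AVAIL_MB} MB available)"
else
    echo "$DIR is too small (${AVAIL_MB} MB available, ${REQUIRED_MB} MB needed)"
fi
```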

See you on my next note!

This entry was posted in Uncategorized on by .

[How To] Restore Solaris with ufsrestore

We had a hardware failure and cannot seem to boot the system. Our only option is to restore from backup. The good thing is that we foresaw this kind of incident and took the liberty of backing up our OS. We will now use ufsrestore to bring our server back up and running.

From the man pages of ufsrestore(1M):

The ufsrestore utility  restores  files  from  backup  media created  with the 
ufsdump command. ufsrestore's actions are controlled by the key argument. The 
key is exactly one function letter  (i,  r, R , t, or x) and zero or more 
function modifiers (letters). The key string contains no SPACE  characters.  
Function  modifier arguments are listed on the command line in the same order 
as their corresponding  function modifiers appear in the key string.

Boot the machine into cdrom single user-mode:

ok boot cdrom -s

Re-partition your disk to match the old disk's layout. Then create a filesystem on each slice with newfs:

# newfs /dev/rdsk/c0t0d0s0

Then mount it on a temporary mount point:

# mount /dev/dsk/c0t0d0s0 /a

Verify the existence of the tape:

# mt -f /dev/rmt/0 status

or
# mt status

If the tape drive is not recognized, rebuild the device links:

# devfsadm -C
or
# devfsadm -c tape
or
# drvconfig; tapes; devlinks

Rewind the tape:

# mt rewind

Change to the directory where you want to restore and start the restoration:

# cd /a
# ufsrestore rvf /dev/rmt/0n

After the restoration, install the bootblk:

# cd /a/usr/platform/`uname -i`/lib/fs/ufs/
# installboot bootblk /dev/rdsk/c0t0d0s0

Then restart and boot your way to your newly restored Solaris OS.

# init 0
ok boot
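Put together, the restore boils down to a short sequence. The sketch below wraps each step in a hypothetical run() helper that only echoes the command, so the order can be reviewed safely before executing it for real on the damaged host; it assumes the same disk c0t0d0s0 and tape /dev/rmt/0n as above, and sun4u stands in for whatever uname -i would report:

```shell
# Dry-run sketch of the ufsrestore procedure; run() echoes each step
# instead of executing it. Drop the echo to perform the real restore.
run() { echo "+ $*"; }

run newfs /dev/rdsk/c0t0d0s0                  # recreate the root filesystem
run mount /dev/dsk/c0t0d0s0 /a                # mount it on /a
run mt -f /dev/rmt/0 rewind                   # rewind the backup tape
run cd /a                                     # restore relative to /a
run ufsrestore rvf /dev/rmt/0n                # pull the dump off the tape
run cd /a/usr/platform/sun4u/lib/fs/ufs       # platform dir (from uname -i)
run installboot bootblk /dev/rdsk/c0t0d0s0    # reinstall the boot block
```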

There you have it, fellow SysAdmins! See you on my next note!


Getting the most out of your cores

While the battle of the cores is underway with Intel, AMD and IBM, enterprises must also pay attention to the servers, systems and applications used to take advantage of this new processing power.

One of technology’s most interesting battles is occurring among server chip makers like IBM Corp., AMD Inc. and Intel Corp. These vendors are continually looking to one-up each other — pushing out eight-, 12- and 16-core processors — and bringing unprecedented levels of high performance computing power to enterprise IT shops.

But while the focus is often on the number of cores these new chips offer, enterprises must also pay attention to the servers, systems and applications used to take advantage of this new processing power.

Neil Bunn, a technology architect for deep computing at IBM Canada Ltd., said that one of the challenges with technology in the last couple of years has been the dramatic increase in the number of processor cores available to applications. The software industry, he said, has lagged in developing apps to correctly parallelize workloads to take advantage of multiple cores.

“Being able to run a single application or a single job against an extremely large system is definitely a very large issue in HPC today,” Bunn said. “In fact, it’s an issue where there are a lot of perspectives on how we’re going to solve it, but no clearly defined path on which one is going to win out.”

More on Getting the most out of your cores


[How To] Backup Solaris with ufsdump

As the good old saying goes – an apple a day keeps the doctor away – a good OS backup likewise keeps the headaches away when hard times come. Enter ufsdump, a useful command to help us back up our Solaris operating system.

Based on the man pages of ufsdump(1M):

ufsdump backs up all files specified by files_to_dump  (usually either a whole 
file system or files within a file system changed after a certain date) to 
magnetic tape, diskette, or disk file.

The ufsdump command can only be used on unmounted file  systems,  or  those  
mounted  read-only.  Attempting  to dump a mounted, read-write file system might  
result  in  a  system disruption  or the inability to restore files from the 
dump. Consider using the fssnap(1M) command to create a file  system  snapshot  
if  you  need a point-in-time image of a file system that is mounted.

If a filesystem was mounted with the logging option,  it  is strongly  
recommended that you run ufsdump as the root user. Running the command as  a 
non-root user might result in  the creation of an inconsistent dump.

Here are the steps to use this command, given that our root (/) partition resides on c0t0d0s0:

It is recommended to bring the system to single-user mode:

# init s
or
# reboot -- -s

Check the partition for any inconsistencies:

# fsck /dev/rdsk/c0t0d0s0

Insert the tape into the drive and verify:

# mt -f /dev/rmt/X status    (where X is the drive number)

Back up the system:

# ufsdump 0uf /dev/rmt/0n /
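The 0 in 0uf is the dump level: level 0 is a full dump, while levels 1-9 dump only what changed since the last lower-level dump (the u modifier records the dump date in /etc/dumpdates to make that work). A hypothetical weekly rotation might pick the level like this:

```shell
# Hypothetical schedule: full dump (level 0) on Sunday, incrementals
# (level 9) the rest of the week. Day name from date(1); assumes an
# English locale.
case "$(date +%a)" in
    Sun) LEVEL=0 ;;   # weekly full dump
    *)   LEVEL=9 ;;   # daily incremental against the last full
esac
echo "would run: ufsdump ${LEVEL}uf /dev/rmt/0n /"
```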

[Solaris Tip] Trim wtmpx file

Our root (/) partition is nearing 100% utilization, and upon further investigation the wtmpx file is the main culprit. We need to trim or flush this file, but we must keep a backup of it for audit purposes.

Well from the wtmpx(1) man pages:

The utmpx and wtmpx files are extended database files that have superseded
the obsolete utmp and wtmp database files.

The utmpx database contains user access and accounting information for commands
such as who(1), write(1), and login(1). The wtmpx database contains the history
of user access and accounting information for the utmpx database.

If you really need this data for your accounting, here are the steps to safely convert it to human-readable form and truncate it:

# /usr/lib/acct/fwtmp < /var/adm/wtmpx > /tmp/wtmpx.orig
# cat /dev/null > /var/adm/wtmpx
# gzip /tmp/wtmpx.orig
# cp /tmp/wtmpx.orig.gz /var/adm/

There you have it: a truncated wtmpx file with a full, zipped backup of the old one. I strongly recommend naming the backup wtmpx.<date> rather than wtmpx.orig, in case you need to truncate again in the future.

Update:
Below is the modified procedure, giving the backup wtmpx a date in its filename:

# /usr/lib/acct/fwtmp < /var/adm/wtmpx > /tmp/wtmpx.`date +%Y%m%d`
# cat /dev/null > /var/adm/wtmpx
# gzip /tmp/wtmpx.`date +%Y%m%d`
# mv /tmp/wtmpx.`date +%Y%m%d`.gz /var/adm/
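The same rotate-then-truncate pattern works for any growing log. Here it is demonstrated on a scratch file (fwtmp itself exists only on Solaris, so the conversion step is left out). Note that truncating with cat /dev/null keeps the file's inode, which matters when a running daemon still holds the file open:

```shell
# Demo of the backup-then-truncate pattern on a scratch file.
LOG=/tmp/wtmpx.demo
printf 'old accounting records\n' > "$LOG"

STAMP=$(date +%Y%m%d)
cp "$LOG" "/tmp/wtmpx.$STAMP"      # dated backup first
cat /dev/null > "$LOG"             # truncate in place, same inode
gzip -f "/tmp/wtmpx.$STAMP"        # compress the backup
```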

Different UNIX Shell

I have stumbled upon a great site which gives an in-depth guide to the different shells a UNIX system can have. As a fellow Systems Administrator, I would like to share this great find with everyone, and hopefully it can help us fully understand each shell's pros and cons.

The table below lists most of the features that I think would make you choose one shell over another. It is not intended to be a definitive list and does not include every possible feature of every possible shell. A feature is only considered to be in a shell if it is in the version that comes with the operating system, or if it is available as compiled directly from the standard distribution. In particular, the C shell specified below is the one available on SunOS 4.*; a considerable number of vendors now ship either tcsh or their own enhanced C shell instead (and they don't always make it obvious that they are shipping tcsh).

[Table: Different UNIX Shells – feature comparison of sh, bash, ksh, csh and zsh]

Key to the table above.

Y      Feature can be done using this shell.
N      Feature is not present in the shell.
F      Feature can only be done by using the shell's function mechanism.
L      The readline library must be linked into the shell to enable this feature.

Notes to the table above

1. This feature was not in the original version, but has since become almost standard.
2. This feature is fairly new and so is often not found on many versions of the shell; it is gradually making its way into standard distributions.
3. The vi emulation of this shell is thought by many to be incomplete.
4. This feature is not standard, but unofficial patches exist to provide it.
5. A version called 'pdksh' is freely available, but does not have the full functionality of the AT&T version.
6. This can be done via the shell's programmable completion mechanism.
7. Only by specifying a file via the ENV environment variable.

For more information follow the site here.


[Solaris Tip] Merge Files

Following up on our last post about splitting large files, we will now discuss merging those pieces back together so we can use the file again. We will also cover verifying the result with an MD5 hash and a checksum.

To merge a small number of split files:

# cat filename.tar.gz.splitaa > filename.tar.gz
# cat filename.tar.gz.splitab >> filename.tar.gz
# cat filename.tar.gz.splitac >> filename.tar.gz
# cat filename.tar.gz.splitad >> filename.tar.gz

To merge all the files at once:

# cat filename.tar.gz.split* > filename.tar.gz

If the split files are too many for the shell's argument list, you can use this workaround:

# ls filename.tar.gz.split* | xargs cat > filename.tar.gz

To verify the integrity of the merged file, here are two ways to check.

Using digest to get the MD5 hash:

# digest -a md5 filename.tar.gz
04164efcfc801d813dbcb624626a38d5

Using cksum:

# cksum filename.tar.gz
1904556195      11875601        filename.tar.gz
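The split and merge steps can be sanity-checked end to end on a small scratch file. This sketch uses cksum so it works anywhere (digest -a md5 is Solaris-specific); the filenames are made up for the demo:

```shell
# Round-trip check: split a scratch file, merge the pieces, compare checksums.
ORIG=/tmp/merge.demo
dd if=/dev/zero of="$ORIG" bs=1024 count=1024 2>/dev/null   # 1 MB test file
split -b 300k "$ORIG" "$ORIG.split"      # pieces: .splitaa, .splitab, ...
cat "$ORIG".split* > "$ORIG.merged"      # the glob expands in sorted order
cksum "$ORIG" "$ORIG.merged"             # CRC and size must match
```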

[Solaris Tip] Split Large Files

We have a very large core file that we need to send to our vendor for analysis. The gzip'd file of the core is 20GB, and the vendor's FTP server does not accept files that large. We therefore need to split our very large file into smaller chunks that the FTP server will accept.

Based on the man pages – split(1)

NAME
split - split a file into pieces

SYNOPSIS
split [-linecount | -l linecount]  [-a suffixlength] [  file
[name]]

split [ -b  n | nk | nm] [-a suffixlength] [ file [name]]

DESCRIPTION
The split utility reads file and writes it in linecount-line
pieces  into  a  set  of output-files. The name of the first
output-file is name with aa appended, and so on lexicograph-
ically,  up  to  zz  (a  maximum  of 676 files). The maximum
length of  name  is  2  characters  less  than  the  maximum
filename  length  allowed by the filesystem.

Check the file to send:

# ls -l
-rw-r-----   1 root     root     21474836480 Apr  8 10:33 core_files.tar.gz

Split the file into 200MB chunks:

# split -b 200m core_files.tar.gz core_files.tar.gz.split

List the generated files:

# ls -l
-rw-r-----   1 root     root    21474836480 Apr  8 10:33 core_files.tar.gz
-rw-r--r--   1 root     root    209715200 Apr  8 12:52 core_files.tar.gz.splitaa
-rw-r--r--   1 root     root    209715200 Apr  8 12:53 core_files.tar.gz.splitab
-rw-r--r--   1 root     root    209715200 Apr  8 12:53 core_files.tar.gz.splitac
-rw-r--r--   1 root     root    209715200 Apr  8 12:53 core_files.tar.gz.splitad
-rw-r--r--   1 root     root    209715200 Apr  8 12:53 core_files.tar.gz.splitae
-rw-r--r--   1 root     root    209715200 Apr  8 12:53 core_files.tar.gz.splitaf
<truncated>
-rw-r--r--   1 root     root    28265984 Apr  8 12:53 core_files.tar.gz.splitat

You will notice that split(1) appended two-letter suffixes (aa, ab, and so on) to the generated files so that their order is preserved.


VirtualBox 3.1.8 Released

Oracle released VirtualBox 3.1.8, a maintenance release of VirtualBox 3.1 that improves stability and fixes regressions. It also supports new platforms like Ubuntu 10.04 (Lucid Lynx). Presently, VirtualBox runs on Windows, Linux, Macintosh and OpenSolaris hosts and supports a large number of guest operating systems, including but not limited to Windows (NT 4.0, 2000, XP, Server 2003, Vista, Windows 7), DOS/Windows 3.x, Linux (2.4 and 2.6), Solaris and OpenSolaris, and OpenBSD. See the Changelog for details on the updates. The download is available from the VirtualBox or Oracle Web sites. …

Read this article:
VirtualBox 3.1.8 Released


How to Remove EMC Dead Paths on Solaris

We have already discussed how to add EMC storage to a Solaris box. Now we have dead paths on our system, left over from storage that the storage team has decommissioned. Here are the steps to remove these dead paths from our system.

Check your current paths:

# /etc/powermt display
Symmetrix logical device count=20
CLARiiON logical device count=50
==============================================================================
----- Host Bus Adapters ---------  ------ I/O Paths -----  ------ Stats ------
### HW Path                        Summary   Total   Dead  IO/Sec Q-IOs Errors
==============================================================================
2304 [email protected][email protected][email protected] optimal     121      1       -     0      0
2305 [email protected][email protected][email protected] optimal     120      0       -     0      0

Remove the dead paths:

# /etc/powermt check
Warning: CLARiiON device path c7t4d0s0 is currently dead.
Do you want to remove it (y/n/a/q)? y
# /etc/powermt config
# /etc/powermt save

Verify that the dead paths are removed:

# /etc/powermt display
Symmetrix logical device count=20
CLARiiON logical device count=50
==============================================================================
----- Host Bus Adapters ---------  ------ I/O Paths -----  ------ Stats ------
### HW Path                        Summary   Total   Dead  IO/Sec Q-IOs Errors
==============================================================================
2304 [email protected][email protected][email protected] optimal     120      0       -     0      0
2305 [email protected][email protected][email protected] optimal     120      0       -     0      0