OpenIndiana 2019.10 Released


I’m hardly getting any chance to work on Sun/Oracle hardware and Solaris anymore, but I still like tracking the releases. OpenIndiana is a free Solaris distribution based on the Illumos project, and OpenIndiana 2019.10 has just been released.

Improvements in OpenIndiana 2019.10

  • Latest improvements from the Illumos project (namely illumos-gate)
  • Lots of documentation updated and moved from old wiki to https://docs.openindiana.org
  • Python-dependent tools (including IPS) upgraded to support Python 3
    • IPS ships Python 3.5 tools and libraries
  • Bash updated; sudo updated; vim updated – these are like my top 3 commands! 🙂
  • Development toolchain is refreshed
  • Server software packages updated
    • nginx 1.16.1
    • BIND 9.14
    • Samba 4.11 (plus lots of improvements for SMBv3 from Illumos)

More information can be found here: OpenIndiana 2019.10 Release Notes.

Installing OpenIndiana 2019.10

This section is a placeholder: I plan on downloading the OpenIndiana ISO image and installing it in a virtual machine.

I will update this post with links to the OpenIndiana install notes and first screenshots shortly.





How To Confirm Solaris 11 version


I’ve finally gotten the time to work on another Unix Tutorial project – Install Solaris 11 in a VirtualBox VM. Will publish step-by-step instructions next weekend, so for now it’s just a quick post about a topic long overdue: confirming Solaris 11 version.

Use pkg Command to Confirm Solaris 11 Version

One of the newest but also most recommended ways to confirm the Solaris 11 release version is to use the pkg command. Specifically, we use it to inspect the “entire” package, which is a virtual package made for indicating and enforcing a Solaris 11 release:

greys@solaris11:~$ pkg info entire
Name: entire
Summary: Incorporation to lock all system packages to the same build
Description: This package constrains system package versions to the same
build. WARNING: Proper system update and correct package
selection depend on the presence of this incorporation.
Removing this package will result in an unsupported system.
Category: Meta Packages/Incorporations
State: Installed
Publisher: solaris
Version: 11.4 (Oracle Solaris 11.4.0.0.1.15.0)
Branch: 11.4.0.0.1.15.0
Packaging Date: 17 August 2018 at 00:42:03
Size: 2.53 kB
FMRI: pkg://solaris/entire@11.4-11.4.0.0.1.15.0:20180817T004203Z

As you can see from the output, my brand new Solaris 11 VM is sporting the Solaris 11.4 release.

Use /etc/release to Confirm Solaris 11 Version

This is the more traditional way, one that has worked since at least Solaris 8. Simply inspect the /etc/release file: it should indicate both the Solaris release and the platform it’s running on – in my case, Solaris 11.4 on x86:

greys@solaris11:~$ cat /etc/release
Oracle Solaris 11.4 X86
Copyright (c) 1983, 2018, Oracle and/or its affiliates. All rights reserved.
Assembled 16 August 2018

Use uname command to Confirm Solaris 11 Version

Another fairly traditional approach is to use the uname command. As you can see below, it will show you the OS release (5.11) and the release version (11.4.0.15.0):

greys@solaris11:~$ uname -a
SunOS solaris11 5.11 11.4.0.15.0 i86pc i386 i86pc
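If you want to pick those two fields out in a script, a bit of awk does the job. A minimal sketch, run here against the sample line above rather than a live `uname -a` call:

```shell
#!/bin/sh
# Extract the interesting fields from uname -a output. The sample line
# is taken from the transcript above; on a live box you'd use:
#   line=`uname -a`
line="SunOS solaris11 5.11 11.4.0.15.0 i86pc i386 i86pc"

# Field 3 is the OS release, field 4 is the release version
echo "$line" | awk '{print "OS release:      " $3}'
echo "$line" | awk '{print "Release version: " $4}'
```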





Veritas Infoscale Availability

I have just started my next full time contract with a major integrator here in Dublin in a capacity of Senior Linux and Solaris engineer.

It is quite fun to refresh my Solaris knowledge and to pick it up where I left off:

  • explore Solaris 10 to Solaris 11 migrations
  • catch up on major software package upgrades (like Veritas)
  • revisit zones and LDOM management
  • refresh ZFS commands

I didn’t know, but Veritas has moved away from Symantec in recent years, and its biggest offerings are now rebranded as Veritas Infoscale. So Veritas Cluster Server is now Veritas Infoscale Availability.

Can’t wait to learn what improvements and new features there are in Infoscale, but I also hope most of my VCS and VSF command line knowledge still works!





Troubleshooting: "du: fts_read failed: No such file or directory" error

I’ve just spent a good few hours trying to find any clues to a problem I was having: the du command would fail with a mysterious “fts_read” error, and there didn’t seem to be any good answers on the net explaining why. I figured this post will be found someday and might save someone a lot of time. It’s a lengthy post, and I believe the first one on this blog that is truly “advanced” in a technical sense.

My scenario and symptoms of the du fts_read problem

Before going into any further details, I’d like to briefly explain what I’ve been trying to do: my task was to automate the scanning of large directories with the du command, to later parse this data and produce space usage trends for the most important directories.

Nothing fancy, just a du command in its simplest form:

solaris$ du -k /bigdir

What puzzled me is that du would fail with the fts_read error at seemingly random parts of the /bigdir I was scanning. The fact that I was running it from a cronjob didn’t help at all, and the whole problem seemed all the more mysterious because the same command line seemed to work when run manually, but failed in cronjobs.

The error appeared to be fatal, so the du scan would stop and return something like this:

solaris$ du -k /bigdir
...
15372   /bigdir/storage1
887612  /bigdir/storage2
du: fts_read failed: No such file or directory

The strangest thing was that the files and directories appeared to be all there, nothing was missing and so the error message seemed bogus.

I also think it was the cron involvement that distracted me from fts_read, but as soon as I got a failure while running the command manually, I fully turned my attention to fts_read.

What is fts_read?

This was the first question I asked myself after a few consecutive failures of my original command. After looking up fts_read, I discovered this in its man page:

The fts functions are provided for traversing file hierarchies. A simple overview is that the fts_open () function returns a “handle” on a file hierarchy, which is then supplied to the other fts functions. The function fts_read () returns a pointer to a structure describing one of the files in the file hierarchy.

On its own, this didn’t help much, so I ended up looking into the source code of the du command I was using (it was du from the latest GNU coreutils package):

static bool
du_files (char **files, int bit_flags)
{
  bool ok = true;

  if (*files)
    {
      FTS *fts = xfts_open (files, bit_flags, NULL);

      while (1)
        {
          FTSENT *ent;

          ent = fts_read (fts);
          if (ent == NULL)
            {
              if (errno != 0)
                {
                  /* FIXME: try to give a better message  */
                  error (0, errno, _("fts_read failed"));
                  ok = false;
                }
              break;
            }
          FTS_CROSS_CHECK (fts);

          ok &= process_file (fts, ent);
        }

      /* Ignore failure, since the only way it can do so is in failing to
         return to the original directory, and since we're about to exit,
         that doesn't matter.  */
      fts_close (fts);
    }

  if (print_grand_total)
    print_size (&tot_dui, _("total"));

  return ok;
}

The main thing I gained from this code is that the error cannot be worked around – otherwise a solution would have been implemented in this function. If an fts_read problem occurs, it’s a big deal, and du terminates right there on the spot.

Further googling also confirmed that the same error was seen across various operating systems, which suggested it’s not an issue specific to my OS (Solaris 10u4) or my du implementation.

The reason the fts_read error happens in du

Since I couldn’t find much more in the code, I had to think about my scenario of using du. Eventually I realized what was happening, and I bet it’s the same reason so many others have seen this issue before.

The error message wasn’t bogus, and was telling me the exact reason for the du command failing:

du: fts_read failed: No such file or directory

I started double-checking the files and directories and realized that, due to the huge size of my /bigdir, it was taking 3h+ to scan it in full. This means that as the du command drilled down into all its subdirectories, some files and even whole subdirs were no longer there.

This surely did upset the du command. Not only does it seem suspicious to have a whole directory missing from where you expected it to be, but it’s also a problem for calculating and reporting the disk usage stats, and so there’s really no other option but to abort the du mission.

This also explained the seemingly random nature of the failure – the churn of the underlying data in /bigdir isn’t distributed evenly, which means that some directories are only changed or removed once a week, while others can be created, processed and removed within a few minutes. It’s just that in some cases I was lucky to run du at a relatively quiet time when most of the files and directories in /bigdir weren’t moving around.

Is there a workaround for the fts_read du issue?

The fts_read function itself is only a messenger in this case. The real problem occurs at a deeper level, inside the fts_build function, which does the actual scan of directories and files under a specified mountpoint.

Not being a developer, I can’t really confirm a workaround possibility, but I have a theory, and it’s explained below.

Work-around 1: Start at a lower level of subdirectories tree

The first thing to try is obviously to drop the idea of starting your scan this high up in the directory structure. Instead of doing

solaris$ du -k /bigdir

start doing

solaris$ du -k /bigdir/storage1/dataset1
solaris$ du -k /bigdir/storage2/dataset2

The idea here is that du will build and scan the subdirectory tree much further down the directory structure. Each command will take less time to run, which means there’s less of a chance that data underneath it will be changed (or removed) during the scan.
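This workaround can be automated with a simple loop over the first-level subdirectories, so that one failed subtree doesn’t abort the whole report. A sketch, using the /bigdir placeholder from the examples above:

```shell
#!/bin/sh
# Work-around 1 as a loop: scan each first-level subdirectory on its
# own, so one subtree with vanishing files doesn't abort the report.
# /bigdir is the placeholder path from the examples above.
TOP=/bigdir

for dir in "$TOP"/*/; do
        # Keep going even if du hits a vanished file in this subtree
        du -sk "$dir" 2>/dev/null || echo "WARN: du failed in $dir" >&2
done
```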

Work-around 2: Use --max-depth to limit how far du will go

Another option that I think might help is the --max-depth command line parameter for du: it prevents du from drilling down too deep. This means that only the higher-level subdirectories need their disk usage stats calculated and reported.

Depending on the nature of your data, larger (higher level) subdirectories are less likely to disappear right in the middle of your du scan, and hence the likelihood of fts_read (and, ultimately, fts_build underneath it) not finding something there is much lower.
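For example, with GNU du (which is what I was running), a two-level limit looks like this – deeper files are still counted into the totals, just not reported individually:

```shell
# Report only two levels below /bigdir; deeper directories are still
# counted into the totals, but not listed individually (GNU du option)
du -k --max-depth=2 /bigdir
```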

Now, the fts_build function is smart enough to check and double-check subdirectories when scanning a directory tree, but I believe it can get really upset if it travels down a deep enough subdirectory tree and then finds itself in the middle of nowhere – not only without an immediate subdirectory to scan, but also without a few levels of parent directories.

Here’s an example of what I mean:

If we let du go to an unlimited depth (the default behavior), then scanning /bigdir might lead fts_build into a directory like this:

/bigdir/storage1/dataset1/subset1/dir1

Now, if during the scan of files in this directory someone removes /bigdir/storage1/dataset1 altogether, fts_build will lose the files it’s working on, attempt to go back one level (chdir back to dir1), fail, attempt to go one level further up (subset1), and may still fail and eventually abort.

We’re going to need the help of a seasoned developer to confirm this theory of mine, but that’s only to prove the workaround. The reason for the failure stays the same: the larger your directory structure, the more likely it is to be changing dynamically, and the more likely it is that some files and directories won’t be there at the time of a scan.




How to see future file timestamps in Solaris

I know I’ve spoken about timestamps already, but I’d like to expand the topic further.

While Linux systems have the great GNU stat command, there’s no such thing in Solaris by default, so you usually depend on the ls command with various options to look at a file’s creation, modification or access time.

The standard /bin/ls command in Solaris doesn’t always show you the full timestamp – usually when the time is too far in the past or a bit into the future – so today I’m going to show you a trick to work around this and still confirm such timestamps for any file.

Standard ls command in Solaris doesn’t always show full timestamps

Here’s an example: BigBrother monitoring suite creates np_ files for internal tracking of times to send out email notifications. It deliberately alters the timestamps so that they’re set for a future date – that’s how it tracks the time elapsed between the event and the next notification about it.

However, not all of these np_ files are shown with their full timestamps, some just show the date, with no time:

solaris$ ls -l *myserver1*
-rw-r--r--   1 bbuser   bbgroup       48 Jan  9  2009 [email protected]_myserver1.conn
-rw-r--r--   1 bbuser   bbgroup       50 Jan  9 10:41 [email protected]_myserver1.cpu
-rw-r--r--   1 bbuser   bbgroup       51 Jan  9 10:41 [email protected]_myserver1.disk
-rw-r--r--   1 bbuser   bbgroup       53 Jan  9 10:36 [email protected]_myserver1.memory
-rw-r--r--   1 bbuser   bbgroup       51 Jan  9 10:41 [email protected]_myserver1.msgs
-rw-r--r--   1 bbuser   bbgroup       52 Jan  9  2009 [email protected]_myserver1.procs

If you remember, the default behaviour of ls is to show the modification time of each file. So in this example you can see that two files have only the date, and not the time, of their modification timestamps shown. For the other files, the full timestamp is present.

Before we continue, let’s confirm the current time on this Solaris server:

solaris$ date
Fri Jan  9 10:44:06 GMT 2009

For the two files with only the date shown, ls recognizes that a file can’t really have a future modification timestamp and only shows the part it agrees with – the date, which is valid (today).

What can we do? First, double-check the other times – like the last status change time (ctime) of these files:

solaris$ ls -lc *myserver1*
-rw-r--r--   1 bbuser   bbgroup       48 Jan  9 10:14 [email protected]_myserver1.conn
-rw-r--r--   1 bbuser   bbgroup       50 Jan  9 09:56 [email protected]_myserver1.cpu
-rw-r--r--   1 bbuser   bbgroup       51 Jan  9 09:56 [email protected]_myserver1.disk
-rw-r--r--   1 bbuser   bbgroup       53 Jan  9 09:51 [email protected]_myserver1.memory
-rw-r--r--   1 bbuser   bbgroup       51 Jan  9 09:56 [email protected]_myserver1.msgs
-rw-r--r--   1 bbuser   bbgroup       52 Jan  9 10:31 [email protected]_myserver1.procs

All of them show correct full timestamps from some time in the past, so that’s okay.

How to show future timestamps in Solaris

And now comes the moment to reveal the little trick I was talking about. Even though the standard /bin/ls command won’t show you the future timestamps, you can still check them using the /usr/ucb/ls version of the ls command. The syntax is very similar, but you can also see the future timestamps:

solaris$ /usr/ucb/ls -al *myserver1*
-rw-r--r--   1 bbuser         48 Jan  9 10:59 [email protected]_myserver1.conn
-rw-r--r--   1 bbuser         50 Jan  9 10:41 [email protected]_myserver1.cpu
-rw-r--r--   1 bbuser         51 Jan  9 10:41 [email protected]_myserver1.disk
-rw-r--r--   1 bbuser         53 Jan  9 10:36 [email protected]_myserver1.memory
-rw-r--r--   1 bbuser         51 Jan  9 10:41 [email protected]_myserver1.msgs
-rw-r--r--   1 bbuser         52 Jan  9 11:16 [email protected]_myserver1.procs

Looking at them, you can see that BigBrother simply set the modification time of these files to 45 minutes into the future.
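You can reproduce this trick yourself on a Linux box: set a file’s modification time into the future with touch -t, then read it back. A small sketch, assuming GNU date (its -d and -r options are not available in the default Solaris date):

```shell
#!/bin/sh
# Set a file's mtime 1 day into the future and read it back.
# Assumes GNU date: -d computes a date, -r prints a file's mtime.
f=$(mktemp)

# touch -t [[CC]YY]MMDDhhmm sets the modification time explicitly
future=$(date -d '+1 day' +%Y%m%d%H%M)
touch -t "$future" "$f"

echo "current time: $(date)"
echo "file mtime:   $(date -r "$f")"
```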

That’s it for today – hope you liked this trick!





How To Parse Text Files Line by Line in Unix scripts

I’m finally back from my holidays and thrilled to be sharing the next of my Unix tips with you!

Today I’d like to talk about parsing text files in Unix shell scripts. This is one of the really popular areas of scripting, and there are a few quite typical limitations that everyone comes across.

Reading text files in Unix shell

If we agree that by “reading a text file” we mean going through all the lines of a plain text file in order to somehow process the data, then the cat command is the simplest demonstration of such a procedure:

redhat$ cat /etc/redhat-release
Red Hat Enterprise Linux Client release 5 (Tikanga)

As you can see, there’s only one line in the /etc/redhat-release file, and we see what this line is.

But if for whatever reason you wanted to read this file from a script and assign the whole release information line to a Unix variable, using the cat output would not work as expected:

bash-3.1$ for i in `cat /etc/redhat-release`; do echo $i; done;
Red
Hat
Enterprise
Linux
Client
release
5
(Tikanga)

Instead of reading a line of text from the file, our one-liner splits the line and outputs every word on a separate line. This happens because of shell syntax parsing – Unix shells treat whitespace as the delimiter between elements of a list, so in a for loop the shell interpreter treats each line containing spaces as a list of elements, splits it, and returns the elements one by one.

How to read text files line by line

Here’s what I decided: if I can’t make Unix shell ignore the spaces between words of each line of text, I’ll disguise these spaces. Since my solution was getting pretty bulky for a one-liner, I’ve made it into a script. Here it is:

bash-3.1$ cat /tmp/cat.sh
#!/bin/sh
FILE=$1
UNIQUE='-={GR}=-'
#
if [ -z "$FILE" ]; then
        exit;
fi;
#
for LINE in `sed "s/ /$UNIQUE/g" $FILE`; do
        LINE=`echo $LINE | sed "s/$UNIQUE/ /g"`;
        echo $LINE;
done;

As you can see, I’ve introduced the idea of a UNIQUE variable containing a unique combination of characters that I can use to replace the spaces in the original string. This combination needs to be unique in the context of your text files, because later we turn each string back into its original form by replacing all instances of the $UNIQUE text with plain spaces.

Since most of my needs required this functionality for relatively small text files, this rather expensive (in terms of CPU cycles) approach proved quite usable and pretty fast.

Update: please see comments to this post for a much better approach to the same problem. Thanks again, Nails!

Here’s how my script would work on the already known /etc/redhat-release file:

bash-3.1$ /tmp/cat.sh /etc/redhat-release
Red Hat Enterprise Linux Client release 5 (Tikanga)

Exactly what I wanted! Hopefully this little trick will save some of your time as well. Let me know if you like it or know an even better one yourself!

Related books

If you want to learn more, here’s a great book:


Classic Shell Scripting





How To Show a Process Tree in Unix

Showing your processes in a hierarchical list is very useful for confirming the relationships between the processes running on your system. Today I’d like to show you how to get tree-like process lists using various commands.

Showing a process tree with ptree

In Solaris, there are quite a few commands which make the life of any system administrator much easier: the process commands (p-commands). One that I particularly like is the ptree command, which shows you a list of processes.

As you run the command, you get a hierarchical list of all the processes running on your Solaris system, along with their process IDs (PIDs). To me, this is a very useful command, because it shows you exactly how each process relates to the others on your system.

Here’s a fragment of the ptree output:

bash-3.00$ ptree
7     /lib/svc/bin/svc.startd
  250   /usr/lib/saf/sac -t 300
    268   /usr/lib/saf/ttymon
  260   -sh
    5026  -csh
9     /lib/svc/bin/svc.configd
107   /usr/lib/sysevent/syseventd
136   /usr/lib/picl/picld
140   /usr/lib/crypto/kcfd
159   /usr/sbin/nscd
227   /usr/sbin/rpcbind
234   /usr/lib/nfs/statd
235   /usr/sbin/keyserv
236   /usr/lib/netsvc/yp/ypserv -d
  237   rpc.nisd_resolv -F -C 8 -p 1073741824 -t udp
241   /usr/lib/nfs/lockd
247   /usr/lib/netsvc/yp/ypbind
263   /usr/lib/utmpd
286   /usr/sadm/lib/smc/bin/smcboot
  287   /usr/sadm/lib/smc/bin/smcboot
  288   /usr/sadm/lib/smc/bin/smcboot

Showing a process tree with pstree

In most Linux distributions you can find the pstree command, which is very similar to ptree.

Here’s how you may use it (-p shows PIDs and -l uses the long output format):

ubuntu$ pstree -pl
init(1)─┬─NetworkManager(5427)
        ├─NetworkManagerD(5441)
        ├─acpid(5210)
        ├─apache2(6966)─┬─apache2(2890)
        │               ├─apache2(2893)
        │               ├─apache2(7163)
        │               ├─apache2(7165)
        │               ├─apache2(7166)
        │               ├─apache2(7167)
        │               └─apache2(7168)
        ├─atd(6369)
        ├─avahi-daemon(5658)───avahi-daemon(5659)
        ├─bonobo-activati(7816)───{bonobo-activati}(7817)
...

Showing a process tree with ps --forest

The ps command found in Linux has a --forest option, which shows you a tree structure of processes.

In my experience, the best way is to use it like this:

ubuntu$ ps -aeF --forest
UID        PID  PPID  C    SZ   RSS PSR STIME TTY          TIME CMD
...
107       5473     1  0 10037  4600   0 Apr28 ?        00:00:02 /usr/sbin/hald
root      5538  5473  0  5511  1288   0 Apr28 ?        00:00:00  \_ hald-runner
root      5551  5538  0  6038  1284   0 Apr28 ?        00:00:01      \_ hald-addon-input: Listening on /dev/input
107       5566  5538  0  4167   992   1 Apr28 ?        00:00:00      \_ hald-addon-acpi: listening on acpid socke
root      5600  5538  0  6038  1272   1 Apr28 ?        00:00:15      \_ hald-addon-storage: polling /dev/scd0 (ev
root      5476     1  0 10272  2532   0 Apr28 ?        00:00:00 /usr/sbin/console-kit-daemon
root      5627     1  0 12728  1176   1 Apr28 ?        00:00:00 /usr/sbin/sshd
root      9151  5627  0 17536  3032   0 10:53 ?        00:00:00  \_ sshd: greys [priv]
greys     9162  9151  0 17538  1892   1 10:54 ?        00:00:00      \_ sshd: greys@pts/3
greys     9168  9162  0  5231  3820   1 10:54 pts/3    00:00:00          \_ -bash
greys     9584  9168  0  3802  1124   0 11:27 pts/3    00:00:00              \_ ps -aeF --forest

This output is for demonstration purposes only: I’ve taken out the first lines of the output because they weren’t serving the purpose of this example very well.

From this fragment of the output you can see how you get all the vital information about each process. I really like this way of running the ps command.
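If you’re on a system with neither ptree nor pstree, you can approximate a process tree from plain ps output with a bit of awk. A sketch, not a polished tool: it indexes every process by its parent PID and prints children indented under their parents:

```shell
#!/bin/sh
# Approximate a process tree from plain ps output: index every process
# by its parent PID, then print each child indented under its parent.
ps -eo pid,ppid,comm | awk '
NR > 1 { comm[$1] = $3; parent[$1] = $2 }
function show(p, depth,    c, i, pad) {
        pad = ""
        for (i = 0; i < depth; i++)
                pad = pad "  "
        print pad comm[p] "(" p ")"
        for (c in parent)
                if (parent[c] == p && c != p)
                        show(c, depth + 1)
}
END {
        # Roots are processes whose parent is not in the listing
        # (e.g. init, whose PPID 0 is never shown by ps)
        for (i in comm)
                if (!(parent[i] in comm))
                        show(i, 0)
}'
```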

That’s it for today! Do you know any other neat way of looking at processes tree? Let me know!




Solaris Devices

This is a very brief introduction to navigating the device paths in Solaris. I’m using Solaris 10 installed on a Sun v490 for all the commands shown below.

Device files in Solaris

Even though all the block and character special device files are traditionally found under the /dev directory, if you look closer at your Solaris 10 setup you will notice that these are not the device files themselves, but just symbolic links to device files under the /devices directory.

Solaris uses /devices directory for representing all the physical hierarchy of installed devices and buses found on your hardware system.

The directory tree under /devices mirrors the actual physical configuration of your hardware and looks like this:

bash-3.00# ls /devices
iscsi                              pci@8,700000:reg
iscsi:devctl                       pci@9,600000
memory-controller@0,400000         pci@9,600000:devctl
memory-controller@0,400000:mc-us3  pci@9,600000:intr
memory-controller@2,400000         pci@9,600000:reg
memory-controller@2,400000:mc-us3  pci@9,700000
options                            pci@9,700000:devctl
pci@8,600000                       pci@9,700000:intr
pci@8,600000:devctl                pci@9,700000:reg
pci@8,600000:intr                  pseudo
pci@8,600000:reg                   pseudo:devctl
pci@8,700000                       scsi_vhci
pci@8,700000:devctl                scsi_vhci:devctl
pci@8,700000:intr

Most of these names are directories, so if you cd into the /devices/pseudo directory, you will see all the software (hence “pseudo”) devices present in your system. Other directories refer to various elements on the system bus; for instance, the directory /devices/pci@9,600000/SUNW,qlc@2/ represents the built-in FC-AL host adapter which manages the hard drives on my system.

Device instance numbers

As you know, it’s quite possible to have more than one device of the same kind in a system. Because of this, all physical devices are mapped to instance numbers. Even if you have only one device of a particular kind, it still gets an instance number. Numbering starts at 0.

For example, on Sun v490 there are 2 on-board gigabit network interfaces, and they’re referred to as ce0 and ce1. Similarly, all other devices are numbered and mapped.

All the physical device mappings to their instances are recorded in the /etc/path_to_inst file. That’s the file Solaris uses to keep instance numbers persistent across reboots:

bash-3.00# more /etc/path_to_inst
#
#       Caution! This file contains critical kernel state
#
"/pseudo" 0 "pseudo"
"/scsi_vhci" 0 "scsi_vhci"
"/options" 0 "options"
"/pci@8,700000" 0 "pcisch"
"/pci@8,700000/ide@6" 0 "uata"
"/pci@8,700000/ide@6/sd@0,0" 1 "sd"
"/pci@8,600000" 1 "pcisch"
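You can turn these entries into the familiar driver+instance names (sd1, pcisch0 and so on) with a bit of awk. A sketch run against a few of the sample lines quoted above; on a live system you’d point awk at /etc/path_to_inst itself:

```shell
#!/bin/sh
# List driver instances from a path_to_inst-style file. The sample
# lines below are from the listing above; on a live Solaris box you
# would read /etc/path_to_inst directly instead.
cat <<'EOF' > /tmp/path_to_inst.sample
"/pseudo" 0 "pseudo"
"/pci@8,700000" 0 "pcisch"
"/pci@8,700000/ide@6/sd@0,0" 1 "sd"
"/pci@8,600000" 1 "pcisch"
EOF

# Each line is: "physical path" instance "driver"
# Strip the quotes, then print driver+instance followed by the path
awk '!/^#/ { gsub(/"/, ""); printf "%s%s\t%s\n", $3, $2, $1 }' /tmp/path_to_inst.sample
```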



How To Mount An ISO image

Mounting an ISO image of a CD/DVD before burning it is one of the basic steps to verify you’re going to get exactly the desired result. It’s also a neat trick for accessing files from a CD/DVD image when you only need a file or two and not the whole CD. Why burn it at all when you can access the files much quicker and easier by simply mounting the ISO image?

Every Unix OS has a way to access an ISO filesystem, and today I’ll give you examples for Linux and Solaris. In both cases, the two things you need for the example to work are the ISO image itself and an available mount point (basically, an empty directory) on your filesystem to mount it under.

Here’s how to mount an ISO in Linux:

# mount -o loop /net/server/linux-bootcd.iso /mnt

Here’s how to mount an ISO in Solaris:

First, you need to associate your ISO image with a virtual device:

# lofiadm -a /net/server/linux-bootcd.iso

The lofiadm approach allows you to have virtual devices associated with as many ISO images as you like, and you can view the list of current associations at any moment:

# lofiadm
Block Device File
/dev/lofi/1 /net/server/linux-bootcd.iso

To mount a virtual device, you use the following command:

# mount -F hsfs /dev/lofi/1 /mnt
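The two procedures can be wrapped in one helper that picks the right commands for the current OS. This is a dry-run sketch – it only prints the commands instead of executing them, since actually mounting requires root, and it assumes the lofi device will be /dev/lofi/1 (on a real system, lofiadm -a prints the device it actually allocated):

```shell
#!/bin/sh
# Print the commands needed to mount an ISO on the current OS.
# Dry-run sketch: echoes the commands rather than running them,
# because mounting requires root privileges. The /dev/lofi/1 device
# is an assumption; lofiadm -a reports the real one.
mount_iso() {
        iso=$1
        mnt=$2
        case `uname -s` in
        Linux)
                echo "mount -o loop $iso $mnt"
                ;;
        SunOS)
                echo "lofiadm -a $iso"
                echo "mount -F hsfs /dev/lofi/1 $mnt"
                ;;
        *)
                echo "unsupported OS" >&2
                return 1
                ;;
        esac
}

mount_iso /net/server/linux-bootcd.iso /mnt
```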





How To: Find Out the Release Version of Your UNIX

Different UNIX-like operating systems store information about their release versions differently. If you know which OS you have but aren’t sure about the version, here’s how you can find out:

RedHat Linux

bash-3.1$ cat /etc/redhat-release
Red Hat Enterprise Linux Client release 5 (Tikanga)

Ubuntu Linux

bash-3.1$ cat /etc/issue
Ubuntu 6.10 \n \l

SUSE Linux

~> cat /etc/SuSE-release
SUSE Linux Enterprise Desktop 10 (x86_64)
VERSION = 10

Sun Solaris

bash-2.03$ cat /etc/release
                       Solaris 8 2/04 s28s_hw4wos_05a SPARC
           Copyright 2004 Sun Microsystems, Inc.  All Rights Reserved.
                            Assembled 08 January 2004
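All of these checks can be combined into one small helper that tries the well-known files in turn and prints the first line of whichever exists. A sketch of my own (detect_release is a hypothetical name, not a standard utility); it takes an optional root prefix so it can be pointed at any directory tree:

```shell
#!/bin/sh
# Print the OS release string by checking the well-known files in turn.
# Takes an optional root prefix (defaults to /) so it can be tested
# against any directory tree. detect_release is a made-up helper name.
detect_release() {
        root=${1:-}
        for f in etc/redhat-release etc/SuSE-release etc/release etc/issue; do
                if [ -f "$root/$f" ]; then
                        head -1 "$root/$f"
                        return 0
                fi
        done
        echo "unknown"
}

detect_release
```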
