Unix Tutorial Digest – February 28th, 2019




Monitor processes, CPU and RAM with htop

I’ve been using htop for so long that it’s now my go-to tool for a visual representation of key process performance metrics on a server: CPU usage, RAM, swap, load average and the most resource-hungry processes.

htop command for process monitoring

This is how a default htop screen looks on a properly configured colour-capable terminal: just run “htop” without any parameters.

[screenshot: default htop screen]

How To Install htop in Linux

htop is available via the EPEL repository for CentOS/Red Hat/Fedora systems:

greys@rhel:~ $ yum whatprovides htop
Loaded plugins: fastestmirror, langpacks
Determining fastest mirrors
* base: centos.quelquesmots.fr
* epel: mirror.ibcp.fr
* extras: centos.mirrors.proxad.net
* updates: centos.crazyfrogs.org
htop-2.2.0-3.el7.x86_64 : Interactive process viewer
Repo : epel

Once EPEL is activated, you’ll be able to just install htop with yum.
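
On CentOS, that typically boils down to two commands – a quick sketch, assuming the epel-release package is available in your configured repos:

root@centos:~ # yum install epel-release
root@centos:~ # yum install htop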

How To Install htop in macOS

On macOS I’ve been using brew to install htop:

greys@maverick:~ $ brew install htop

or

greys@maverick:~ $ brew upgrade htop
==> Upgrading 1 outdated package:
htop 2.0.2 -> 2.2.0_1
==> Upgrading htop
==> Installing dependencies for htop: ncurses
==> Installing htop dependency: ncurses
==> Downloading https://homebrew.bintray.com/bottles/ncurses-6.1.mojave.bottle.tar.gz
######################################################################## 100.0%
==> Pouring ncurses-6.1.mojave.bottle.tar.gz
==> Caveats
ncurses is keg-only, which means it was not symlinked into /usr/local,
because macOS already provides this software and installing another version in
parallel can cause all kinds of trouble.

If you need to have ncurses first in your PATH run:
echo 'export PATH="/usr/local/opt/ncurses/bin:$PATH"' >> ~/.bash_profile

For compilers to find ncurses you may need to set:
export LDFLAGS="-L/usr/local/opt/ncurses/lib"
export CPPFLAGS="-I/usr/local/opt/ncurses/include"

For pkg-config to find ncurses you may need to set:
export PKG_CONFIG_PATH="/usr/local/opt/ncurses/lib/pkgconfig"

==> Summary
🍺 /usr/local/Cellar/ncurses/6.1: 3,869 files, 8.3MB
==> Installing htop
==> Downloading https://homebrew.bintray.com/bottles/htop-2.2.0_1.mojave.bottle.tar.gz
######################################################################## 100.0%
==> Pouring htop-2.2.0_1.mojave.bottle.tar.gz
==> Caveats
htop requires root privileges to correctly display all running processes,
so you will need to run `sudo htop`.
You should be certain that you trust any software you grant root privileges.
==> Summary
🍺 /usr/local/Cellar/htop/2.2.0_1: 11 files, 188KB
Removing: /usr/local/Cellar/htop/2.0.2... (11 files, 185KB)
==> Caveats
==> ncurses
ncurses is keg-only, which means it was not symlinked into /usr/local,
because macOS already provides this software and installing another version in
parallel can cause all kinds of trouble.

If you need to have ncurses first in your PATH run:
echo 'export PATH="/usr/local/opt/ncurses/bin:$PATH"' >> ~/.bash_profile

For compilers to find ncurses you may need to set:
export LDFLAGS="-L/usr/local/opt/ncurses/lib"
export CPPFLAGS="-I/usr/local/opt/ncurses/include"

For pkg-config to find ncurses you may need to set:
export PKG_CONFIG_PATH="/usr/local/opt/ncurses/lib/pkgconfig"

==> htop
htop requires root privileges to correctly display all running processes,
so you will need to run `sudo htop`.
You should be certain that you trust any software you grant root privileges.

That’s it for today. Hope you find the htop command useful!


How To: Remove Old Kernels in CentOS

For dedicated servers and virtual machines that you keep upgrading in place, you will eventually reach a situation where a number of old kernel packages are installed. That’s because when you update OS packages and a new kernel gets installed, the old ones are not auto-removed – allowing you to fall back if there are issues with the latest kernel.

How To List Old Kernels in CentOS/Red Hat Linux

The rpm -q command comes to the rescue! Just run it for the kernel packages:

root@centos:~ # rpm -q kernel
kernel-3.10.0-327.28.3.el7.x86_64
kernel-3.10.0-327.36.3.el7.x86_64
kernel-3.10.0-693.21.1.el7.x86_64
kernel-3.10.0-957.5.1.el7.x86_64

You can use the uname command to verify the current kernel you’re running:

root@centos:~ # uname -a
Linux centos.ts.fm 3.10.0-957.5.1.el7.x86_64 #1 SMP Fri Feb 1 14:54:57 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux

How To Remove Old Linux Kernels in CentOS

There’s actually a special command for doing this, but it’s probably not installed by default. It’s part of the yum-utils package that you may have to install like this first:

root@centos:~ # yum install yum-utils

Now that it’s installed, we’ll use the package-cleanup command. It takes the number of most recent kernels that you want to keep. So if you want to keep just the currently used kernel, the number should be 1. I recommend you keep 2 kernels – current and the one before it, so the count should be 2.

Just to be super sure, the package-cleanup --oldkernels command will ask you if you’re positive about removing the listed kernel packages before proceeding:

root@centos:~ # package-cleanup --oldkernels --count=2
Loaded plugins: fastestmirror, langpacks
--> Running transaction check
---> Package kernel.x86_64 0:3.10.0-327.28.3.el7 will be erased
---> Package kernel.x86_64 0:3.10.0-327.36.3.el7 will be erased
--> Finished Dependency Resolution
epel/x86_64/metalink | 22 kB 00:00:00

Dependencies Resolved

===============================================================
Package Arch Version Repository Size
=============================================================== 
Removing:
kernel x86_64 3.10.0-327.28.3.el7 @centos-updates 136 M
kernel x86_64 3.10.0-327.36.3.el7 @updates 136 M

Transaction Summary
=============================================================== 
Remove 2 Packages

Installed size: 272 M
Is this ok [y/N]: y
Downloading packages:
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Erasing : kernel.x86_64 1/2
Erasing : kernel.x86_64 2/2
Verifying : kernel-3.10.0-327.36.3.el7.x86_64 1/2
Verifying : kernel-3.10.0-327.28.3.el7.x86_64 2/2

Removed:
kernel.x86_64 0:3.10.0-327.28.3.el7 kernel.x86_64 0:3.10.0-327.36.3.el7

Complete!

… and no, don’t worry about being left without any Linux kernels! I checked: specifying count=0 will not result in package-cleanup killing your operating system:

root@centos:~ # package-cleanup --oldkernels --count=0
Loaded plugins: fastestmirror, langpacks
Error should keep at least 1 kernel!
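
If you’d rather stop old kernels from piling up in the first place, yum has the installonly_limit option in /etc/yum.conf that caps how many kernel packages are kept as new ones get installed. A minimal sketch of the relevant line:

installonly_limit=3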

That’s it for today. Hope you enjoyed the article!


Restart Stopped Containers in Docker


Sometimes an issue on one of your servers may interrupt your Docker-based development and stop all the containers that you haven’t fully configured to be auto-started just yet. In such cases, it will be useful for you to know how to find stopped containers and restart them all using a single command.

List Stopped Containers in Docker

Using the filtering functionality of the docker ps command, we can quickly get all the necessary information for the stopped containers:

root@xps:~# docker ps -a -f status=exited
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
014a746dbb9d wordpress "docker-entrypoint.s…" 21 hours ago Exited (0) 21 hours ago romantic_fermi
080cf6412ac4 hello-world "/hello" 3 days ago Exited (0) 3 days ago modest_mestorf

Since we want to restart these containers, we’ll need to pass their container IDs to another command, like docker start.

Hence the command above should be run with the -q parameter, which skips all the non-essential info and only returns the list of container IDs:

root@xps:~# docker ps -a -q -f status=exited
014a746dbb9d
080cf6412ac4

Restart all the Stopped Containers in Docker

Now all that’s left to do is pass the output of the above command to docker start, as shown below. One by one, the container IDs will appear as Docker restarts them:

root@xps:~# docker start $(docker ps -a -q -f status=exited)
014a746dbb9d
080cf6412ac4
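
By the way, if some of these containers should come back on their own after future restarts, you can set a restart policy on an existing container with docker update – shown here against one of the container IDs from above:

root@xps:~# docker update --restart unless-stopped 014a746dbb9d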

Sure enough, when we do docker ps now, we should see these containers:

root@xps:~# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
9e7115e34496 wordpress "docker-entrypoint.s…" 19 hours ago Up 19 hours 127.0.0.1:80->80/tcp, 127.0.0.1:443->443/tcp wordpress
014a746dbb9d wordpress "docker-entrypoint.s…" 21 hours ago Up 2 seconds 80/tcp romantic_fermi
c397a72fbd58 mariadb:latest "docker-entrypoint.s…" 21 hours ago Up 21 hours 3306/tcp db

I can see the 014a746dbb9d container, but the other one is not running. Want to know why? It’s because this was a Hello, world Docker container – it’s not meant to keep running in the background. Instead, it shows its Hello, world message and exits. It’s usually run like this:

root@xps:~# docker run hello-world

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
1. The Docker client contacted the Docker daemon.
2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
(amd64)
3. The Docker daemon created a new container from that image which runs the
executable that produces the output you are currently reading.
4. The Docker daemon streamed that output to the Docker client, which sent it
to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
$ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
https://hub.docker.com/

For more examples and ideas, visit:
https://docs.docker.com/get-started/


That's it for today. Enjoy!


Pro Puppet

Pro Puppet, 1st Edition

Back in 2011, James Turnbull partnered with Jeffrey McCune to produce a marvelous and technically complete sequel to his first book, Pulling Strings with Puppet. They called that sequel Pro Puppet.

I’m moving my Unix book reviews from another website – I want to keep them here on Unix Tutorial in the Unix Book Reviews section before I start publishing more recent reviews. This review is for the 1st edition of Pro Puppet, but I know there’s been a 2nd edition of Pro Puppet, written in 2013.

As should be obvious from the title, this book is aimed at experienced users of the Puppet configuration management system – most likely seasoned systems administrators who have been managing systems with Puppet for a while but feel there is room for improvement.

The Pro Puppet book does not disappoint: not only is it updated introductory material for those of you only discovering Puppet, but it is also a step-by-step guide, with full source code examples, to solving the more complex issues facing a serious Puppet deployment – scalability, Puppet modules, stored configurations and MCollective are just some of the topics explained in plenty of detail.

Puppet basics revisited

The first few chapters talk about the fundamental features and basic scenarios of deploying the Puppet management system within your environment. You’ll learn about the super-easy way of describing Puppet nodes and using node inheritance in the Puppet server’s config.

Naturally, there are full-text examples of creating your own configurations using Puppet classes and modules. There’s even a quick intro in case you decide to write a function or two – these are Puppet functionality elements running on the server side.

Class inheritance is shown quite expertly – not just the basics of having separate modules for managing different services with Puppet, but the actual class-based approach to stopping and starting services. Essentially, you can have the same class used for installing software (like a DNS or NTP server), and then have the flexibility of using different classes for toggling the enabled/disabled state of the freshly installed service on different nodes.

Cool stuff you can do with Puppet

Apparently, there is now a new provider specifically for auditing files – very similar to the File one, except it only reports compliance in terms of permissions and ownership for a given file. There is enough flexibility to get the audit reports exactly the way you need them.

Another really cool thing I’ve learned is that it’s possible and quite convenient to have classes require specific files to be in place before the class functionality is applied.  I’ve been familiar with dependencies before but benefited from extra examples involving custom classes.

I always thought it would be great to use the Puppet system for deploying the Puppet infrastructure itself – server and nodes. Turns out, this is entirely possible – the book includes an example of a completely self-referential Puppet deployment.

Scaling Puppet environment

There are quite a few challenges you’ll be facing when your Puppet environment grows large enough. The Pro Puppet book gives you advice for most scenarios.

First things first – you have got to use multiple deployment environments, for instance test/dev/prod. From the Puppet server perspective, this means getting familiar with how you describe these environments in the puppet.conf file and creating separate directories for your modules. The approach given in the book will help you cater both for different environments (multiple nodes belonging to the production or test environment) and for properly managing the stages of Puppet module development.
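
Just to give an idea of the general shape – a sketch with invented paths, using the config-file environment syntax of that Puppet era:

[production]
modulepath = /etc/puppet/environments/production/modules
manifest = /etc/puppet/environments/production/manifests/site.pp

[testing]
modulepath = /etc/puppet/environments/testing/modules
manifest = /etc/puppet/environments/testing/manifests/site.pp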

The really good thing is that you’ll have plenty of examples of how to manage it all with a source control system (git).

When it comes to horizontally scaling the server side of Puppet, you’ll find a lot of instructions for fronting the Puppet master with the Apache web server via the mod_rails (Passenger) module. Naturally, the most probable scenarios are described and provided with solutions, so if you’re stuck for some immediate help on making your crawling Puppet server run nice and fast, you’ll find some easy-to-follow steps.

What I enjoyed throughout the book is its attention to detail: it’s easy to see how some chapters address not just an isolated issue but the full-scale solution. In the case of scaling, you’ll certainly appreciate the hints on automating data synchronization between Puppet backends – unless they reside in the same Unix environment, you’ll need some behind-the-scenes tricks to make sure all the backends are in full sync, be it for the Puppet modules/files or the SSL certificates for the Puppet CA element.

Externalizing Puppet configs (storing node info in a database)

As soon as your Puppet nodes.pp file grows past the first few hundred hosts, you’ll get the feeling that things could be greatly improved if you managed the nodes list in a database of some sort. The Puppet server comes with such an abstraction planned from the very beginning, so it should be easy enough for you to externalize the nodes configuration. You can start off by using an external text file or even a shell script, and the same approach and interface can be taken for Ruby or Perl, LDAP or MySQL.

Full-text examples make it very easy to get started; you are ready to plug whole scripts into your infrastructure, as even LDAP ldif files are provided for your convenience.
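
To give you a flavour of the interface: an external node classifier is simply an executable that receives the node name as its first argument and prints YAML describing that node. A minimal shell sketch (the hostname pattern and class names here are invented):

#!/bin/bash
# minimal external node classifier: Puppet passes the node name
# as the first argument and expects YAML on stdout
case "$1" in
  web*) printf 'classes:\n  - apache\n' ;;
  *)    printf 'classes:\n  - base\n' ;;
esac

You would then point the Puppet master at such a script with node_terminus = exec and external_nodes = /path/to/script in puppet.conf.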

Exporting and storing configs of your Puppet managed nodes

You can configure the Puppet master to use a MySQL DB for storing all the configs related to managed nodes. In contrast with the nodes list externalization, this functionality will actually store metadata about your nodes – things like Facter facts – which normally reside locally on each node. Once configured, such a setup may prove very useful for syncing configurations between nodes.

A really cool example given in the book is the one for collecting public ssh keys and then distributing them in the form of an updated known_hosts file.

Puppet modules using Puppet Forge

If you end up using Puppet for managing your environments, it will be only a matter of time before you get curious enough to attempt development of your own Puppet module. You are in luck: the Pro Puppet book will give you all the info you need to get started. Apart from learning how to use Puppet Forge for downloading new Puppet modules for use in your environment, there are some steps for configuring multiple source control trunks to take care of all the stages of a typical module development lifecycle. And if you think your newly created module will make a good addition to Puppet Forge, there are instructions on how to upload your module.

Extending Puppet and Facter

If you want to get the most out of your Puppet deployment, you’ll probably appreciate the sections of the book talking about Puppet improvements like writing your own functions (remember, they run on the server side!) or custom Facter facts. There are always many different ways to make your changes or deploy custom facts, and even if they are not shown in every single detail, there is certainly enough information to show you how things are done and help you get moving in the right direction with your Puppet infrastructure.

Using MCollective with Facter and Puppet

One of the reasons many people buy the Pro Puppet book is the chapter on Marionette Collective – MCollective. It’s a message bus solution for rapid scanning of your Unix servers and for instant command execution. Instead of using SSH or a similar mechanism to connect to each client, MCollective relies on a messaging system like ActiveMQ or RabbitMQ (both freely available online), so that all the clients listen to a queue and execute commands as soon as something relevant shows up.

The really powerful way to use MCollective is to leverage the power of custom Facter facts. Essentially this means that you abstract away from a static list of nodes and instead use specific facts about each node to compile the list you’re interested in. Instead of generating a list of hosts, you can have MCollective instantly compile a list based on the OS flavor or an environment-description fact, and target your query at that list.
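
On the command line this looks something like the following – a hedged example, as the exact filter syntax depends on your MCollective version:

greys@maverick:~ $ mco find -W "operatingsystem=CentOS"

This would discover every node whose Facter operatingsystem fact equals CentOS, without you maintaining any host lists.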

Summary for the Pro Puppet book

Without a doubt, this is one of the most useful books you can find on Puppet configuration management today. Whether you’re after a high-level introduction or enjoy all the possible technical details, you will find Pro Puppet to be very relevant, highly educational and amazingly thorough on quite a number of Puppet-related topics.


Confirm VMware Tools version


If you need to find out the version of VMware Tools in a VM, here is the command to run inside the guest:

greys@ubuntu:~$ vmware-toolbox-cmd --version
10.2.0.1608 (build-7253323)
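
The same tool exposes a few more useful subcommands; for instance, this one reports whether guest-to-host time synchronisation is enabled:

greys@ubuntu:~$ vmware-toolbox-cmd timesync status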


screenFetch in Linux Mint


Great stuff – I have just installed Linux Mint 19.1 on my Dell XPS 13 laptop! Naturally, one of the first things to run is the screenFetch utility.

Install screenFetch on Linux Mint

Based on Ubuntu Linux, Linux Mint enjoys abundant software repositories, which means it’s super easy to install screenFetch on the new system:

root@xps:~# apt-get install screenfetch
Reading package lists... Done
Building dependency tree 
Reading state information... Done
Recommended packages:
scrot
The following NEW packages will be installed:
screenfetch
0 upgraded, 1 newly installed, 0 to remove and 245 not upgraded.
Need to get 50.6 kB of archives.
After this operation, 236 kB of additional disk space will be used.
Get:1 http://archive.ubuntu.com/ubuntu bionic/universe amd64 screenfetch all 3.8.0-8 [50.6 kB]
Fetched 50.6 kB in 0s (308 kB/s) 
Selecting previously unselected package screenfetch.
(Reading database ... 249721 files and directories currently installed.)
Preparing to unpack .../screenfetch_3.8.0-8_all.deb ...
Unpacking screenfetch (3.8.0-8) ...
Setting up screenfetch (3.8.0-8) ...
Processing triggers for man-db (2.8.3-2ubuntu0.1) ...
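
Once installed, you simply run screenfetch. That scrot package from the Recommended list above is what screenFetch uses if you also ask it to save a screenshot of the result with the -s option:

root@xps:~# screenfetch -s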

screenFetch in Linux Mint 19.1

This is the output of screenFetch on my laptop:

[screenshot: screenFetch output on Linux Mint 19.1]


Bootable USB from ISO with Etcher


I have decided to try using a Dell XPS 13 laptop in addition to my MacBook Pro 15. It is no coincidence that the Dell XPS 13 – I have been researching it for a while – is one of the best laptops available for Linux these days; Dell even sells an Ubuntu version of it.

I got the Windows 10 version of the laptop, but I think it’s easy enough to install Linux Mint for dual boot. I’ll be installing from a USB disk or microSD card, so Etcher seemed like the perfect tool for the job.

Download OS ISO image

I found Linux Mint 19.1 image and downloaded it from the local mirror:

greys@maverick:/Volume/Stuff/dist/ISOs $ wget http://ftp.heanet.ie/pub/linuxmint.com/stable/19.1/linuxmint-19.1-cinnamon-64bit.iso
--2019-02-21 22:11:47-- http://ftp.heanet.ie/pub/linuxmint.com/stable/19.1/linuxmint-19.1-cinnamon-64bit.iso
Resolving ftp.heanet.ie... 193.1.193.64
Connecting to ftp.heanet.ie|193.1.193.64|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1973911552 (1.8G) [application/octet-stream]
Saving to: ‘linuxmint-19.1-cinnamon-64bit.iso’

linuxmint-19.1-cinnamon-64bit.iso 100%[=======================================================================>] 1.84G 8.37MB/s in 4m 22s

2019-02-21 22:16:14 (7.18 MB/s) - ‘linuxmint-19.1-cinnamon-64bit.iso’ saved [1973911552/1973911552]

Now that the image is downloaded, let’s verify it’s valid – follow the Linux Mint Verify ISO instructions.

First, we download a plain text file with the SHA-256 checksums:

greys@maverick:/Volume/Stuff/dist/ISOs $ wget https://ftp.heanet.ie/mirrors/linuxmint.com/stable/19.1/sha256sum.txt
--2019-02-21 22:17:24-- https://ftp.heanet.ie/mirrors/linuxmint.com/stable/19.1/sha256sum.txt
Resolving ftp.heanet.ie... 193.1.193.64
Connecting to ftp.heanet.ie|193.1.193.64|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 584 [text/plain]
Saving to: ‘sha256sum.txt’

sha256sum.txt 100%[=======================================================================>] 584 --.-KB/s in 0s

2019-02-21 22:17:25 (5.86 MB/s) - ‘sha256sum.txt’ saved [584/584]

Here’s how it looks:

greys@maverick:/Volume/Stuff/dist/ISOs $ cat sha256sum.txt
b580052c4652ac8f1cbcd9057a0395642a722707d17e1a77844ff7fb4db36b70 *linuxmint-19.1-cinnamon-32bit.iso
bb4b3ad584f2fec1d91ad60fe57ad4044e5c0934a5e3d229da129c9513862eb0 *linuxmint-19.1-cinnamon-64bit.iso
ca86885e2384373f8fbb2121e2abb6298674e37fc206d3f23661ab5f1f523aba *linuxmint-19.1-mate-32bit.iso
5bc212d73800007c7c3605f03c9d5988ad99f1be9fc91024049ea4b638c33bb4 *linuxmint-19.1-mate-64bit.iso
039d619935c2993e589705e49068a6fa4dc4f9a5eb82470bc7998c9626259416 *linuxmint-19.1-xfce-32bit.iso
7b53b29a34cfef4ddfe24dac27ee321c289dc2ed8b0c1361666bbee0f6ffa9f4 *linuxmint-19.1-xfce-64bit.iso

We can now use the standard file checksum technique: the result of the shasum command must match the one found in the sha256sum.txt file exactly. As you can see, it does:

greys@maverick:/Volume/Stuff/dist/ISOs $ shasum -a 256 linuxmint-19.1-cinnamon-64bit.iso
bb4b3ad584f2fec1d91ad60fe57ad4044e5c0934a5e3d229da129c9513862eb0 linuxmint-19.1-cinnamon-64bit.iso
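
If you’d rather not eyeball two long strings, you can let the tool do the comparison. On a Linux box with a reasonably recent GNU coreutils, this would be something along these lines, printing an OK line for the image it finds in the checksum file:

sha256sum --check --ignore-missing sha256sum.txt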

All seems to be correct, so let’s start Etcher and burn the ISO image.

IMPORTANT: please remember that Etcher is perfect for Linux and Unix-like operating systems, but not so good for Windows ISOs. The tool should warn you if you attempt to burn a Windows ISO. All hope is not lost, though – work on supporting Windows ISOs is in progress.

We’ll be burning the Linux ISO, so I don’t expect any issues.

Using Etcher to burn ISO image to USB

First, we click the Select image button:

[screenshot: Etcher – Select image button]

… and pick the linuxmint iso image from the list of available ISOs:

[screenshot: picking the linuxmint ISO image]

Next, we click the obvious Select drive button:

[screenshot: Etcher – Select drive button]

and simply click the USB flash drive. It’s a 16GB SanDisk in my case:

[screenshot: selecting the 16GB SanDisk USB drive]

The ISO will be burned in a few minutes:

[screenshot: Etcher burning the ISO image]

… followed by an even faster USB disk verification process:

[screenshot: USB disk verification in progress]

and we’re done:

[screenshot: Etcher – flash complete]


chown example


One of the most useful and powerful basic Unix commands, chown allows you to change the ownership of specified files and directories – both the owning user and the group.

chown Must be Run as root

chown is one of those commands that must be run as root. Running it as a regular user will not work: even for their own files, a user can’t change the ownership so that they belong to another user.

Basic chown Example

I’ll start in my home directory, /home/greys. Let’s use the touch command to create a file named try and then use the sudo command to become root:

[greys@rhel8 ~]$ touch try
[greys@rhel8 ~]$ sudo -i
[sudo] password for greys:
[root@rhel8 ~]# cd /home/greys
[root@rhel8 /home/greys]# ls -ald try
-rw-rw-r--. 1 greys greys 0 Feb 20 06:44 try

As you can see, the file rightfully belongs to me and my group: greys:greys.

Let’s change the owner and owner group to root:

[root@rhel8 /home/greys]# chown root:root try
[root@rhel8 /home/greys]# ls -al try
-rw-r--r--. 1 root root 0 Feb 20 06:44 try

chown with Verbose Reporting

If we use the -v command line option for chown, it will confirm every action:

[root@rhel8 /home/greys]# chown -v greys:greys try
changed ownership of 'try' from root:root to greys:greys
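
Another everyday variation is recursive mode: the -R option applies the ownership change to a directory and everything under it. The path below is just an illustration:

[root@rhel8 /home/greys]# chown -R greys:greys /home/greys/projects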

chown Using Reference File

A really cool way of using chown – and also a great gateway into shell scripting – is making chown inspect a given (reference) file and then apply its ownership information to other specified files. You’re making chown confirm the owner and group of one file and then apply them to lots of other files – all without really knowing or specifying the actual ownership info. That’s why such a file is called a reference file.

For instance, look at the chrony config files in the /etc directory. See how the /etc/chrony.keys file belongs to root:chrony?

[root@rhel8 /home/greys]# ls -al /etc/chrony.*
-rw-r--r--. 1 root root 1083 Apr 4 2018 /etc/chrony.conf
-rw-r-----. 1 root chrony 481 Apr 4 2018 /etc/chrony.keys

Here’s how you can make chown apply the same ownership details to my /home/greys/try file:

[root@rhel8 /home/greys]# chown --reference=/etc/chrony.keys try
[root@rhel8 /home/greys]# ls -la try
-rw-r--r--. 1 root chrony 0 Feb 20 06:44 try

Pretty cool, huh?
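
The same trick scales to whole directory trees if you combine it with find – a sketch with a made-up path:

[root@rhel8 /home/greys]# find /srv/app -type f -exec chown --reference=/etc/chrony.keys {} +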


How To: List Files with SELinux Contexts


When running an SELinux-based setup, it might be useful to know how to quickly inspect files and directories to confirm their current SELinux context.

What is SELinux Context?

Every process and file in an SELinux-based environment can be labeled with additional information that helps fulfill RBAC (Role-Based Access Control), TE (Type Enforcement) and MLS (Multi-Level Security).

An SELinux context is the combination of such additional information:

  • user
  • role
  • type
  • level

In the following example we can see that unconfined_u is the SELinux user, object_r is the role, user_home_dir_t is the object type (user home directory) and the SELinux sensitivity level (in MCS terminology) is s0:

drwx------. 17 greys greys unconfined_u:object_r:user_home_dir_t:s0 4096 Feb 19 12:14 .

Use ls -Z to show SELinux Context

Using the ls command with the -Z option will show the SELinux contexts. This command line option is made to be combined with other ls options:

[greys@rhel8 ~]$ ls -alZ .
total 64
drwx------. 17 greys greys unconfined_u:object_r:user_home_dir_t:s0 4096 Feb 19 12:14 .
drwxr-xr-x. 3 root root system_u:object_r:home_root_t:s0 19 Jan 15 17:34 ..
-rw-------. 1 greys greys unconfined_u:object_r:user_home_t:s0 2035 Feb 19 12:14 .bash_history
-rw-r--r--. 1 greys greys unconfined_u:object_r:user_home_t:s0 18 Oct 12 17:56 .bash_logout
-rw-r--r--. 1 greys greys unconfined_u:object_r:user_home_t:s0 218 Jan 28 17:42 .bash_profile
-rw-r--r--. 1 greys greys unconfined_u:object_r:user_home_t:s0 312 Oct 12 17:56 .bashrc
drwx------. 12 greys greys unconfined_u:object_r:cache_home_t:s0 4096 Jan 21 06:41 .cache
drwx------. 14 greys greys unconfined_u:object_r:config_home_t:s0 278 Jan 21 06:41 .config
drwx------. 3 greys greys unconfined_u:object_r:dbus_home_t:s0 25 Jan 20 18:28 .dbus
drwxr-xr-x. 2 greys greys unconfined_u:object_r:user_home_t:s0 6 Jan 20 18:28 Desktop
drwxr-xr-x. 2 greys greys unconfined_u:object_r:user_home_t:s0 6 Jan 20 18:28 Documents
drwxr-xr-x. 2 greys greys unconfined_u:object_r:user_home_t:s0 6 Jan 20 18:28 Downloads
-rw-------. 1 greys greys unconfined_u:object_r:pulseaudio_home_t:s0 16 Jan 15 19:15 .esd_auth
-rw-------. 1 greys greys unconfined_u:object_r:iceauth_home_t:s0 1244 Jan 20 18:46 .ICEauthority
-rw-------. 1 greys greys unconfined_u:object_r:user_home_t:s0 3434 Jan 22 18:06 id_rsa_4k
-rw-r--r--. 1 greys greys unconfined_u:object_r:user_home_t:s0 737 Jan 22 18:06 id_rsa_4k.pub
-rw-rw-r--. 1 greys greys unconfined_u:object_r:user_home_t:s0 21 Jan 28 17:53 infile2.txt
-rw-------. 1 greys greys unconfined_u:object_r:user_home_t:s0 38 Jan 22 18:05 .lesshst
drwxr-xr-x. 3 greys greys unconfined_u:object_r:gconf_home_t:s0 19 Jan 20 18:28 .local
drwxr-xr-x. 2 greys greys unconfined_u:object_r:audio_home_t:s0 6 Jan 20 18:28 Music
-rw-rw-r--. 1 greys greys unconfined_u:object_r:user_home_t:s0 0 Jan 22 18:01 newkey
drwxr-xr-x. 2 greys greys unconfined_u:object_r:user_home_t:s0 6 Jan 20 18:28 Pictures
drwxrw----. 3 greys greys unconfined_u:object_r:home_cert_t:s0 19 Jan 20 18:28 .pki
drwxr-xr-x. 2 greys greys unconfined_u:object_r:user_home_t:s0 6 Jan 20 18:28 Public
drwxrwxr-x. 4 greys greys unconfined_u:object_r:user_home_t:s0 165 Jan 16 11:00 screenFetch
-rw-------. 1 greys greys unconfined_u:object_r:xauth_home_t:s0 150 Jan 20 18:44 .serverauth.1859
-rw-------. 1 greys greys unconfined_u:object_r:xauth_home_t:s0 50 Jan 20 18:39 .serverauth.1893
drwx------. 2 greys greys unconfined_u:object_r:ssh_home_t:s0 70 Jan 22 18:07 .ssh
-rw-rw-r--. 1 greys greys unconfined_u:object_r:user_home_t:s0 0 Jan 21 07:49 system_u:object_r:shell_exec_t:s0
drwxr-xr-x. 2 greys greys unconfined_u:object_r:user_home_t:s0 6 Jan 20 18:28 Templates
drwxr-xr-x. 2 greys greys unconfined_u:object_r:user_home_t:s0 6 Jan 20 18:28 Videos
-rw-------. 1 greys greys unconfined_u:object_r:user_home_t:s0 2874 Jan 29 04:40 .viminfo
-rw-------. 1 greys greys unconfined_u:object_r:xauth_home_t:s0 260 Feb 19 12:14 .Xauthority
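
If you need just the context of a single file or directory in a script, GNU stat can print it on its own – assuming a coreutils build with SELinux support. For example, for the .ssh directory from the listing above:

[greys@rhel8 ~]$ stat -c %C .ssh
unconfined_u:object_r:ssh_home_t:s0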
