Secure Keyboard Input in bash Scripts


I needed to create a very simple bash script that asks for a password from the keyboard and then uses it further in the script. Turns out, there's a way to do it with the bash built-in command called read.



Standard user input in bash

Here's how normal user input works: you invoke the read built-in and pass it a variable name. The script prompts the user for input, and whatever they type is shown (echoed) back to the terminal – so you can see what you type.

First, let’s create the script:

$ vi input.sh

this will be the content for our file:

#!/bin/bash
echo "Type your password, please:"
read PASS
echo "You just typed: $PASS"

Save the file (press Esc, then type :wq) and make it executable:

$ chmod a+rx input.sh

Now we can run the script and see how it works:

$ ./input.sh
Type your password, please:
mypass
You just typed: mypass

Works great, but seeing the password as it's typed is not ideal. In a real-world script I wouldn't print it back afterwards either.

Secure keyboard input in bash

Turns out, the read built-in supports this scenario – just update the read line of the script to this:

read -s PASS

-s is short for silent – it stops read from echoing back the characters you type.

Save the script and run it again. This time the typing is not shown, but the echo command afterwards still prints our input just fine:

$ ./input.sh
Type your password, please:
You just typed: mypass

Pretty cool, huh?
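A small variation I sometimes use (not required for the script above): read also accepts -p to print the prompt on the same line and -r to keep backslashes literal. Since -s also hides the newline you press, it's worth printing one yourself:

#!/bin/bash
read -r -s -p "Type your password, please: " PASS
echo
echo "You just typed: $PASS"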

Show Next Few Lines with grep


Now and then I have to find information in text files – logs, config files, etc. – in a slightly unusual way: I have a hostname or parameter name to search by, but the information I'm actually interested in is not on the same line. A plain grep just won't do – so I use one of the really useful command line options of the grep command.

Default grep Behaviour

By default, grep shows you just the lines of text files that match the parameter you specify.

I'm looking for the IP address of the home office server called “server” in the .ssh/config file.

Here’s what default grep shows me:

greys@maverick:~ $ grep server .ssh/config
Host server

This is useful in the sense that it confirms I do have a server profile defined in my .ssh/config file, but it doesn't show any further lines of the config, so I still can't see the IP address of my “server”.

Show Next Few Lines with grep

This is where grep's -A option (A for After) becomes really useful: specify the number of lines to show after each line that matches the search pattern and enjoy the result!

I’m asking grep to show me 2 more lines after the “server” line:

greys@maverick:~ $ grep -A2 server .ssh/config
Host server
     HostName 192.168.1.55
     ForwardAgent yes

Super simple but very powerful way of using grep. I hope you like it!
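If the lines you need come before the match instead, grep has companion options: -B shows a number of lines before each match, and -C shows context on both sides of it. For example, either of these would print the surrounding Host block from the same config (exact output depends on your file):

$ grep -B2 ForwardAgent .ssh/config
$ grep -C1 HostName .ssh/config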

How To Confirm Symlink Destination

Today I'll show you how to confirm a symlink's destination using the readlink command.

Back to basics day: a very simple tip on working with one of the most useful things available in Unix and Linux filesystems: symbolic links (symlinks).

How Symlinks Look in ls command

When using long-form ls output, symlinks will be shown like this:

greys@mcfly:~/proj/unixtutorial $ ls -la
total 435736
-rwxr-xr-x 1 greys staff 819 14 Dec 2018 .gitignore
drwxr-xr-x 3 greys staff 96 20 Nov 09:27 github
drwxr-xr-x 4 greys staff 128 20 Nov 10:58 scripts
lrwxr-xr-x 1 greys staff 30 10 Dec 20:40 try -> /Users/greys/proj/unixtutorial
drwxr-xr-x 44 greys staff 1408 20 Nov 10:54 wpengine

Show Symlink Destination with readlink

As with pretty much everything in Unix, there’s a simple command (readlink) that reads a symbolic link and shows you the destination it’s pointing to. Very handy for scripting:

greys@mcfly:~/proj/unixtutorial $ readlink try
/Users/greys/proj/unixtutorial

It has a very basic syntax: you just specify a file or directory name, and if it’s a symlink you’ll get the full path to the destination as the result.

If readlink returns nothing, this means the object you’re inspecting isn’t a symlink at all. Based on the outputs above, if I check readlink for the regular scripts directory, I won’t get anything back:

greys@mcfly:~/proj/unixtutorial $ readlink scripts
greys@mcfly:~/proj/unixtutorial $
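A related option worth knowing about: GNU readlink on Linux also supports -f, which follows every symlink in the path and prints the fully resolved (canonical) destination. The readlink bundled with older macOS releases doesn't have this option, so treat it as a Linux-side extra:

$ readlink -f try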

Convert Epoch Time with Python


I'm slowly improving my Python skills, mostly by Googling and combining multiple answers to code solutions to my systems administration tasks. Today I decided to write a simple converter that takes an Epoch timestamp as a parameter and tells you the time and date it corresponds to.

datetime and timezone in Python

Most of the work is done by the fromtimestamp() method of the datetime class in the datetime module. Because I also like seeing the time in UTC, I use the timezone class from the same module as well.

epoch.py script

#!/Library/Frameworks/Python.framework/Versions/3.6/bin/python3

import sys
from datetime import datetime, timezone

if len(sys.argv) > 1:
    print("This is the Epoch time: ", sys.argv[1])

    try:
        timestamp = int(sys.argv[1])
        if timestamp <= 0:
            # treat zero/negative values the same way as non-numeric input
            raise ValueError

        timedate = datetime.fromtimestamp(timestamp)
        timedate_utc = datetime.fromtimestamp(timestamp, timezone.utc)

        print("Time/date: ", format(timedate))
        print("Time/date in UTC: ", format(timedate_utc))
    except ValueError:
        print("Timestamp should be a positive integer, please.")
else:
    print("Usage: epoch.py <EPOCH-TIMESTAMP>")

FIXME: I’ll revisit this to re-publish script directly from GitHub.

Here’s how you can use the script:

greys@maverick:~/proj/python $ ./epoch.py 1566672346
 This is the Epoch time:  1566672346
 Time/date:  2019-08-24 19:45:46
 Time/date in UTC:  2019-08-24 18:45:46+00:00
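If you want to cross-check that output straight from the shell, GNU date can do the same conversion – the @ prefix tells it the argument is an Epoch timestamp:

$ date -d @1566672346
$ date -u -d @1566672346

The first command prints local time, the second prints UTC.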

I implemented basic checks:

  • script won’t run if no command line parameters are passed
  • an error message will be shown if the command line parameter isn't a positive number (and therefore can't be used as a timestamp)

Do you see anything that should be changed or can be improved? Let me know!

Advice: Safely Removing a File or Directory in Unix


Removing files and directories is a very common task – in some environments, support engineers or automation scripts delete hundreds of files per day. That's why I think it's important to be familiar with the different approaches and safety mechanisms you should use when removing Unix files and directories. This article collects a number of principles that should help make your day-to-day file and directory operations safer.



DISCLAIMER: one can never be too careful when using a command line, and removing files or directories in Unix is no exception. Please take extra care and spend time planning and understanding commands and command line options before executing them on production data. I'm sharing my own advice and my approach, but DO YOUR OWN RESEARCH as I accept no responsibility for any possible loss caused by direct or indirect use of the suggested commands.

Safely Removing Files and Directories

Advice in this article is equally applicable to commands you type and to the automation solutions you create. Be it a single command line or a complex Ansible playbook – a safety mindset should be applied whenever you're creating an actionable plan for working with important data.

If you can think of any more advice related to this topic, please let me know!

1. Double-check Directories Before Removing

I wouldn't call this out if it hadn't saved me so many times. No matter who made the request, no matter how urgent the task is, no matter how basic and obvious the directory name seems, always double-check directories before removing them!

The basic approach is: replace the rm/rmdir command with ls -l first.

So instead of

$ rm -rf /etc /bin

you type ls command and review the output:

$ ls -l /etc /bin

Things you’re checking for are:

  1. Is this a user/task specific directory or a global directory?
  2. Does it seem to be part of the core OS?
  3. Will removing these files break any functionality you can think of?
  4. Does the directory contain any files?
  5. Does the number of files seem different from what you expected?

For instance: you're asked to delete an empty directory. Do a quick ls, and if it has files – double-check whether they should be deleted as well. Also, check whether it's one of the common core OS directories like /etc or /bin or /var – it could be that you were given the name by mistake, and removing the directory without checking would turn it into an even bigger mistake.

2. Consider Moving Instead of Removing

In troubleshooting, many removal requests are really about freeing up a directory or tidying up the filesystem structure. The issue is usually the file and directory names rather than the space they take up.

So if filesystem space is not an issue right now, consider moving the directory instead of removing (deleting) it completely:

$ mv /home/greys/dir1 /home/greys/dir1.old

The end result will be that /home/greys/dir1 directory is gone, but you still have time to review and recover files from /home/greys/dir1.old if necessary.

3. Use root Privilege Wisely

Hint: don’t use root unless you absolutely have to. If the request is to remove a subdirectory in some application path – find out what user the application is running as and become that user before removing the directory.

For instance:

# su - javauser
$ rm -rf /opt/java/logs/debug

Run as the root user, this lets you become javauser and then attempt to remove the /opt/java/logs/debug directory (the debug subdirectory of /opt/java/logs).

If there's an issue (like getting a permission denied error) – review and find out what the problem is instead of just becoming root and removing the directory or files anyway. Specifically, permission denied suggests that the files belong to another user or group, meaning they are potentially used and needed by something else, not just the application you're working on.

4. Double-check Any Masks or Variables

If you're dealing with variables or expanding filename masks, double-check that they have correct, non-empty values.

Consider this:

$ rm -rf /${SOMEDIR}

If you're not careful validating it, it's quite possible that $SOMEDIR is not initialised (or is only initialised in some other user's session), resulting in a vastly different command with catastrophic results (yes, I know: this exact example below is NOT that bad, because as a regular user it simply won't work – but run as root it will result in OS self-destruct):

$ rm -rf /
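One way to guard against the unset-variable case is bash's ${VAR:?message} expansion: if the variable is empty or unset, the shell prints the message and refuses to run the command instead of silently expanding it to nothing. A quick sketch using the same variable name:

$ rm -rf "/${SOMEDIR:?SOMEDIR is not set, refusing to continue}"

In a non-interactive script, this also stops execution at that point rather than carrying on with a broken path.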

Similarly, if there are filenames to be expanded, verify that the expansion works as intended. A very important thing to realise is that your filename masks and substitutions are expanded as your current user, even when the resulting command is run via sudo.

$ <SUDO> rm /$(ls /root) 
ls: cannot access '/root/': Permission denied
 rm: cannot remove '/': Is a directory

The example above uses shell expansion: it runs the ls /root command, which would return valid values if you had enough permissions. Run as a regular user, though, it gives an error and also alters the path passed to the rm command. It ends up as if you had tried to run the following with sudo privileges:

$ rm /

Again, I'm not spelling out the full commands, as it's all too easy to break your Unix-like OS beyond repair when you run them as root without double-checking.

5. echo Each Command Before Running

The last principle I find very useful is to prepend any potentially dangerous command with echo. This means your shell will attempt to expand any command line parameters and substitutions, but will then show the resulting command line instead of actually executing it:

greys@becky:~ $ echo rm -rf /opt/java/logs/${HOSTNAME}
rm -rf /opt/java/logs/becky

See how it expanded ${HOSTNAME} variable and replaced it with the actual hostname, becky?

Use echo just to be super sure about what you think the Unix shell will execute.

That’s it for today, hope you like this collection of safety principles. Let me know if you want more articles of this kind!

Projects: Automatic Keyboard Backlight for Dell XPS in Linux

 


Last night I finished a fun mini project as part of Unix Tutorial Projects. I have written a fairly basic script that can be added as a root cronjob to automatically control the keyboard backlight on my Dell XPS 9380.

Bash Script for Keyboard Backlight Control

As I wrote just a couple of days ago, it's actually quite easy to turn the keyboard backlight on or off on a Dell XPS in Linux (and this probably works with other Dell laptops too).
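For reference, these are the sysfs reads and writes the script below is built around – checking the current brightness level and setting a new one (the write needs root):

$ cat /sys/devices/platform/dell-laptop/leds/dell::kbd_backlight/brightness
# echo 3 > /sys/devices/platform/dell-laptop/leds/dell::kbd_backlight/brightness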

Armed with that knowledge, I’ve written the following script:

#!/bin/bash

WORKDIR=/home/greys/scripts/backlight
LOCKFILE=backlight.kbd
LOGFILE=${WORKDIR}/backlight.log
KBDBACKLIGHT=`cat /sys/devices/platform/dell-laptop/leds/dell::kbd_backlight/brightness`

HOUR=`date +%H`

echo "---------------->" | tee -a $LOGFILE
date | tee -a $LOGFILE

if [ $HOUR -lt 4 -o $HOUR -gt 21 ]; then
    echo "HOUR $HOUR is rather late! Must turn on backlight" | tee -a $LOGFILE
    BACKLIGHT=3
else
    echo "HOUR $HOUR is not too late, must turn off the backlight" | tee -a $LOGFILE
    BACKLIGHT=0
fi

if [ $KBDBACKLIGHT -ne $BACKLIGHT ]; then
    echo "Current backlight $KBDBACKLIGHT is different from desired backlight $BACKLIGHT" | tee -a $LOGFILE

    # look for a lock file modified within the last 24 hours (1440 minutes)
    FILE=`find ${WORKDIR} -mmin -1440 -name ${LOCKFILE}`

    echo "FILE: -$FILE-"

    if [ -z "$FILE" ]; then
        echo "No lock file! Updating keyboard backlight" | tee -a $LOGFILE

        echo $BACKLIGHT > /sys/devices/platform/dell-laptop/leds/dell::kbd_backlight/brightness
        touch ${WORKDIR}/${LOCKFILE}
    else
        echo "Lockfile $FILE found, skipping action..." | tee -a $LOGFILE
    fi
else
    echo "Current backlight $KBDBACKLIGHT is the same as desired... No action needed" | tee -a $LOGFILE
fi

How My Dell Keyboard Backlight Script Works

This is what my script does when you run it as root (it won't work if you run it as a regular user):

  • it determines the WORKDIR (I defined it as /home/greys/scripts/backlight)
  • it starts writing log file backlight.log in that $WORKDIR
  • it checks for lock file backlight.kbd in the same $WORKDIR
  • it checks the current hour to decide whether it's late enough to be dark – for now that window is any hour after 21 (9pm) or before 4 (4am)
  • it checks the current keyboard backlight level ($KBDBACKLIGHT variable)
  • it compares this status to the desired state (based on which hour that is)
  • if we need to update keyboard backlight setting, we check for lockfile.
    • If a recent enough file exists, we skip updates
    • Otherwise, we set the backlight to new value
  • all actions are added to the $WORKDIR/backlight.log file

Log file looks like this:

greys@xps:~/scripts $ tail backlight/backlight.log 
---------------->
Tue May 28 00:10:00 BST 2019
HOUR 00 is rather late! Must turn on backlight
Current backlight 2 is different from desired backlight 3
Lockfile /home/greys/scripts/backlight/backlight.kbd found, skipping action...
---------------->
Tue May 28 00:15:00 BST 2019
HOUR 00 is rather late! Must turn on backlight
Current backlight 2 is different from desired backlight 3
Lockfile /home/greys/scripts/backlight/backlight.kbd found, skipping action...

How To Activate Keyboard Backlight cronjob

I have added this script to the root user's crontab. On Ubuntu 19.04 running on my XPS laptop, this is how it was done:

greys@xps:~/scripts $ sudo crontab -e
[sudo] password for greys:

I then added the following line:

*/5 * * * * /home/greys/scripts/backlight.sh

Depending on where you place a similar script, you'll need to update the full path to it in the crontab line (mine lives under /home/greys/scripts), and also update the WORKDIR variable in the script itself.
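To confirm the entry is in place, you can list root's crontab:

greys@xps:~/scripts $ sudo crontab -l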

Keyboard Backlight Unix Tutorial Project Follow Up

Here are just a few things I plan to improve:

  • see if I can access Ubuntu’s Night Light settings instead of hardcoding hours into the script
  •  fix the timezone – should be IST and not BST for my Dublin, Ireland location
  • Just for fun, try logging output into one of system journals for journalctl

How To: Show Colour Numbers in Unix Terminal

I’m decorating my tmux setup and needed to confirm colour numbers for some elements of the interface. Turns out, it’s simple enough to show all the possible colours with a 1-liner in your favourite Unix shell – bash shell in my case.

Using ESC Sequences For Colours

I'll explain how this works in full detail in a separate post sometime, but for now I'll just give you an example and show how it works:

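The screenshot I originally had here showed a command along these lines – reconstructed from the breakdown below, with a trailing reset sequence (\e[0m) added so the colour doesn't leak into your prompt:

$ echo -e "\e[38;5;75mhello color\e[0m"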

So, in this example, this is how we achieve colorized text output:

  1. the echo command uses the -e option to enable interpretation of ESC sequences
  2. \e[38;5;75m is the ESC sequence selecting colour number 75
  3. \e[38;5; is the special prefix telling the terminal that we want to use the 256-colour mode

List 256 Terminal Colours with Bash

Here's how we list the colours: we loop from 1 to 255 (0 would be black), use the ESC syntax to switch the colour to the $COLOR variable's value, and then print the $COLOR number itself in that colour:

for COLOR in {1..255}; do echo -en "\e[38;5;${COLOR}m${COLOR} "; done; echo;

Running this in a properly configured 256-color terminal prints each number in its own colour.


Bash Script to Show 256 Terminal Colours

Here's the same 1-liner converted into a proper script for better portability and readability:

#!/bin/bash

for COLOR in {1..255}; do
    echo -en "\e[38;5;${COLOR}m"
    echo -n "${COLOR} "
done

echo

If you save this as bash-256-colours.sh and chmod a+rx bash-256-colours.sh, you can now run it every time you want to refresh your memory or pick different colours for some use.

Check For Available Updates with YUM

If you’re using CentOS, Fedora or Red Hat Linux, you are probably familiar with the yum package manager. One of the really useful options for yum is checking whether there are any available updates to be installed.

Check For Updates with YUM

If you use the check-update command with yum, it will show you a list of any available updates:

root@centos:~ # yum check-update
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
* base: rep-centos-fr.upress.io
* epel: mirror.in2p3.fr
* extras: rep-centos-fr.upress.io
* updates: ftp.pasteur.fr

ansible.noarch 2.7.8-1.el7 epel
datadog-agent.x86_64 1:6.10.1-1 datadog
libgudev1.x86_64 219-62.el7_6.5 updates
nginx.x86_64 1:1.15.9-1.el7_4.ngx nginx
oci-systemd-hook.x86_64 1:0.1.18-3.git8787307.el7_6 extras
polkit.x86_64 0.112-18.el7_6.1 updates
systemd.x86_64 219-62.el7_6.5 updates
systemd-libs.i686 219-62.el7_6.5 updates
systemd-libs.x86_64 219-62.el7_6.5 updates
systemd-python.x86_64 219-62.el7_6.5 updates
systemd-sysv.x86_64 219-62.el7_6.5 updates

Using yum check-update in Shell Scripts

One thing that I didn't know and am very happy to discover is that yum check-update is actually designed with shell scripting in mind. It returns a specific exit code, and you can use that value to decide what to do next.

As usual: return value 0 means everything is fully updated, so no updates are available (and no action is needed). A value of 100 would mean you have updates available.

All we need to do is check the return code in the $? variable, with something like this:

#!/bin/bash

yum check-update

if [ $? -eq 100 ]; then
    echo "You've got updates available!"
else
    echo "Great stuff! No updates pending..."
fi

Here's how running this would look if we saved it as check-yum-updates.sh:

root@s2:~ # ./check-yum-updates.sh
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
* base: rep-centos-fr.upress.io
* epel: mirror.in2p3.fr
* extras: rep-centos-fr.upress.io
* updates: ftp.pasteur.fr

ansible.noarch 2.7.8-1.el7 epel
datadog-agent.x86_64 1:6.10.1-1 datadog
libgudev1.x86_64 219-62.el7_6.5 updates
nginx.x86_64 1:1.15.9-1.el7_4.ngx nginx
oci-systemd-hook.x86_64 1:0.1.18-3.git8787307.el7_6 extras
polkit.x86_64 0.112-18.el7_6.1 updates
systemd.x86_64 219-62.el7_6.5 updates
systemd-libs.i686 219-62.el7_6.5 updates
systemd-libs.x86_64 219-62.el7_6.5 updates
systemd-python.x86_64 219-62.el7_6.5 updates
systemd-sysv.x86_64 219-62.el7_6.5 updates
You've got updates available!
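If a script only cares about the return code and not the package list itself, yum's -q (quiet) option plus a redirect keeps the noise down. A minimal sketch along the same lines:

#!/bin/bash

yum -q check-update > /dev/null 2>&1
RC=$?

if [ $RC -eq 100 ]; then
    echo "Updates available"
elif [ $RC -eq 0 ]; then
    echo "System is up to date"
else
    echo "yum check-update reported an error (return code $RC)"
fi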

I’ll revisit this post soon to show you a few more things that can be done with yum check-update functionality.

pwd command and PWD variable


The pwd command, as you probably know, reports the current working directory in your Unix shell. The name is short for "print working directory". In addition to the pwd command, there's also a special variable – one of the shell environment variables – called PWD that you can use.

pwd command

Just to remind you, pwd command is a super simple way of confirming where you are in the filesystem tree.

Usually, your shell session starts in your home directory. For me and my username greys, this means /home/greys in most distributions:

greys@xps:~$ pwd
/home/greys

If I then use the cd command to visit some other directory, the pwd command will help me confirm it:

greys@xps:~$ cd /tmp
greys@xps:/tmp$ pwd
/tmp

PWD environment variable

Most Unix shells maintain PWD as a variable. It holds the same information that the pwd command reports, but saves child processes the trouble of running pwd or calling getcwd() just to confirm the working directory they inherited from their parent.

So, you can just do this to confirm $PWD value:

greys@xps:/tmp$ echo $PWD
/tmp

… which really helps in shell scripting, because you can do something like this:

#!/bin/bash
echo "Home directory: $HOME"
echo "Current directory: $PWD"

if [ "$HOME" != "$PWD" ]; then
    echo "You MUST run this from your home directory!"
    exit 1
else
    echo "Thank you for running this script from your home directory."
fi

When we run this, the script compares the standard $HOME variable (your user's homedir) to the $PWD variable and behaves differently depending on whether they match.

I’ve created and saved pwd.sh in my projects directory for bash scripts: /home/greys/proj/bash:

greys@xps:~/proj/bash$ ./pwd.sh 
Home directory: /home/greys
Current directory: /home/greys/proj/bash
You MUST run this from your home directory!

If I now change back to my home directory:

greys@xps:~/proj/bash$ cd /home/greys/

… the script will thank me for it:

greys@xps:~$ proj/bash/pwd.sh
Home directory: /home/greys
Current directory: /home/greys
Thank you for running this script from your home directory.

Have fun using pwd command and $PWD variable in your work and shell scripting!

Review Latest Logs with tail and awk

Part of managing any Unix system is keeping an eye on the vital log files.

Today I was discussing one such scenario with a friend, and we arrived at a pretty cool example involving the awk command and, eventually, bash command substitution.

Let's say we have a directory with a bunch of log files, all constantly updated at different times and intervals. Here's how I might get the last 10 lines of the most recently updated log file:

root@vps1:/var/log# cd /var/log
root@vps1:/var/log# ls -altr *log
-rw-r--r-- 1 root root 32224 Jul 10 22:49 faillog
-rw-r----- 1 syslog adm 0 Jul 25 06:25 kern.log
-rw-r--r-- 1 root root 0 Aug 1 06:25 alternatives.log
-rw-r--r-- 1 root root 2234 Aug 8 06:34 dpkg.log
-rw-rw-r-- 1 root utmp 294044 Aug 15 22:32 lastlog
-rw-r----- 1 syslog adm 12248 Aug 15 22:35 syslog
-rw-r----- 1 syslog adm 5160757 Aug 15 22:40 auth.log

Ok, now we just need to get that filename from the last line (auth.log).

The most obvious way would be to use the tail command to extract the last line, and awk to print the 9th field of that line – which is the filename:

root@vps1:/var/log# ls -altr *log | tail -1 | awk '{print $9}'
auth.log

Pretty cool, but can be optimised using awk’s END clause:

root@vps1:/var/log# ls -altr *log | awk 'END {print $9}'
auth.log

Alright. Now we want to show the last 10 lines of that file, which we can use tail -10 for.

A really basic approach is to assign the result of that pipeline to a variable in bash and then use the variable, like this:

root@vps1:/var/log# FILE=`ls -altr *log | tail -1 | awk '{print $9}'`
root@vps1:/var/log# tail -10 ${FILE}
Aug 15 22:40:37 vps1 sshd[26578]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=159.65.145.196
Aug 15 22:40:39 vps1 sshd[26578]: Failed password for invalid user Fred from 159.65.145.196 port 47934 ssh2
Aug 15 22:40:39 vps1 sshd[26578]: Received disconnect from 159.65.145.196 port 47934:11: Normal Shutdown, Thank you for playing [preauth]
Aug 15 22:40:39 vps1 sshd[26578]: Disconnected from 159.65.145.196 port 47934 [preauth]
Aug 15 22:41:15 vps1 sshd[26580]: Connection closed by 51.15.4.190 port 44958 [preauth]
Aug 15 22:42:02 vps1 sshd[26585]: Connection closed by 13.232.227.143 port 40054 [preauth]
Aug 15 22:43:23 vps1 sshd[26587]: Connection closed by 51.15.4.190 port 52454 [preauth]
Aug 15 22:44:08 vps1 sshd[26589]: Connection closed by 13.232.227.143 port 47542 [preauth]
Aug 15 22:45:01 vps1 CRON[26604]: pam_unix(cron:session): session opened for user root by (uid=0)
Aug 15 22:45:01 vps1 CRON[26604]: pam_unix(cron:session): session closed for user root

But an even shorter (better?) way to do this is to use command substitution in bash: the output of a command becomes part of another command line (a string value, in our case).

Check it out:

root@vps1:/var/log# tail -10 $(ls -altr *log | tail -1 | awk '{print $9}')
Aug 15 22:42:02 vps1 sshd[26585]: Connection closed by 13.232.227.143 port 40054 [preauth]
Aug 15 22:43:23 vps1 sshd[26587]: Connection closed by 51.15.4.190 port 52454 [preauth]
Aug 15 22:44:08 vps1 sshd[26589]: Connection closed by 13.232.227.143 port 47542 [preauth]
Aug 15 22:45:01 vps1 CRON[26604]: pam_unix(cron:session): session opened for user root by (uid=0)
Aug 15 22:45:01 vps1 CRON[26604]: pam_unix(cron:session): session closed for user root
Aug 15 22:45:26 vps1 sshd[26610]: Connection closed by 51.15.4.190 port 59872 [preauth]
Aug 15 22:46:15 vps1 sshd[26612]: Connection closed by 13.232.227.143 port 55030 [preauth]
Aug 15 22:46:23 vps1 sshd[26608]: Connection closed by 18.217.190.140 port 40804 [preauth]
Aug 15 22:47:28 vps1 sshd[26614]: Connection closed by 51.15.4.190 port 39044 [preauth]
Aug 15 22:48:20 vps1 sshd[26616]: Connection closed by 13.232.227.143 port 34286 [preauth]

So in this example $(ls -altr *log | tail -1 | awk '{print $9}') is a substitution – bash executes the command in the parentheses and then passes the resulting value on for further processing (it becomes a parameter for the tail -10 command).

In our command above, we’re essentially executing the following command right now:

root@vps1:/var/log# tail -10 auth.log

except that auth.log is replaced by whichever log file was updated most recently – so it could become syslog or dpkg.log if one of them is the newest file the next time you run the command.
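As an aside, if you don't need the long listing at all, ls -t sorts files by modification time with the newest first, so the same idea can be written without awk – something like:

root@vps1:/var/log# tail -10 "$(ls -t *log | head -1)"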
