RHEL 8 has more software repositories available via various subscriptions than ever. Each subscription maps your operating system to a number of related repos, giving you fine-grained control over installing and updating software.
I had to learn how to list repos because I wanted to install Ansible packages, turning one of my servers into an Ansible deployment server. Although Ansible is an open-source project, it’s not a core element of Red Hat Enterprise Linux, and that means it’s not available via core RHEL 8 repositories.
Instead, you need to find and enable the Ansible repo in RHEL 8 (I’ll show how it’s done in the next few days).
How To List Software Repositories in RHEL 8
Simply run the subscription-manager command with the repos parameter and you’ll get quite a number of repositories reported back (I’m only showing the first few):
root@rhel8:~ # subscription-manager repos
+----------------------------------------------------------+
Available Repositories in /etc/yum.repos.d/redhat.repo
+----------------------------------------------------------+
Repo ID: rhel-atomic-7-cdk-2.4-rpms
Repo Name: Red Hat Container Development Kit 2.4 (RPMs)
Repo URL: https://cdn.redhat.com/content/dist/rhel/atomic/7/7Server/$basearch/cdk/2.4/os
Enabled: 0
Repo ID: satellite-tools-6.6-for-rhel-8-x86_64-eus-rpms
Repo Name: Red Hat Satellite Tools 6.6 for RHEL 8 x86_64 - Extended Update Support (RPMs)
Repo URL: https://cdn.redhat.com/content/eus/rhel8/$releasever/x86_64/sat-tools/6.6/os
Enabled: 0
Repo ID: codeready-builder-for-rhel-8-x86_64-rpms
Repo Name: Red Hat CodeReady Linux Builder for RHEL 8 x86_64 (RPMs)
Repo URL: https://cdn.redhat.com/content/dist/rhel8/$releasever/x86_64/codeready-builder/os
Enabled: 0
Repo ID: satellite-tools-6.7-for-rhel-8-x86_64-rpms
Repo Name: Red Hat Satellite Tools 6.7 for RHEL 8 x86_64 (RPMs)
Repo URL: https://cdn.redhat.com/content/dist/layered/rhel8/x86_64/sat-tools/6.7/os
When I say “quite a number”, I mean it: a lot of subscriptions are available.
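With that many repos in the output, it helps to boil it down. Here’s a quick awk sketch that prints just each repo ID with its enabled flag; it assumes you saved the output to a file first (repos.txt is an illustrative filename):

```shell
# Save the listing first:
#   subscription-manager repos > repos.txt
# Then print each Repo ID together with its Enabled flag:
awk -F': *' '
  /^Repo ID:/ { id = $2 }
  /^Enabled:/ { print id " enabled=" $2 }
' repos.txt
```

The -F': *' tells awk to split each line on a colon followed by spaces, so $2 is the value after labels like "Repo ID:".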
I’m catching up on my systemd knowledge, so this is almost a note to myself: a summary of the systemd unit types (yes, it’s a lot more than just startup scripts!).
How To Tell a systemd Unit Type
The quickest way to determine a systemd unit type is to just look at the last part of the unit file. For instance, if I list systemd units in /lib/systemd/system directory, I’ll find quite a mix. Here’s a fragment:
-rw-r--r-- 1 root 1196 Jan 29 18:07 systemd-time-wait-sync.service
-rw-r--r-- 1 root 659 Jan 29 18:07 systemd-tmpfiles-clean.service
-rw-r--r-- 1 root 490 Feb 14 2019 systemd-tmpfiles-clean.timer
-rw-r--r-- 1 root 732 Jan 29 18:07 systemd-tmpfiles-setup-dev.service
-rw-r--r-- 1 root 772 Jan 29 18:07 systemd-tmpfiles-setup.service
-rw-r--r-- 1 root 635 Feb 14 2019 systemd-udevd-control.socket
-rw-r--r-- 1 root 610 Feb 14 2019 systemd-udevd-kernel.socket
The last part of each filename shows the type of the particular unit: service, timer or socket (there are more types; see below).
Types of systemd Units
Here are the systemd unit types I’ve come across so far; they’re probably the most common ones:
service – the one you’ve probably heard about: a unit type for configuring and managing a software service (startup/shutdown), just like init scripts used to do, but in a far more flexible way
socket – units describing sockets that systemd listens on, like the udev control and kernel sockets shown above, typically used to start the matching service on demand
device – anything and everything for managing device files – stuff like operating files in the /dev filesystem, etc
mount – the systemd way of managing filesystem mounts; for now these are mostly special-purpose filesystems for internal OS use, while more traditional filesystems like / or /var are still managed in /etc/fstab
timer – a scheduling mechanism for running low-level tasks like OS self-healing and maintenance – this is where mdcheck (software RAID arrays) runs and how apt/yum repos are updated
target – similar to milestones in Solaris 10, this is a boot management mechanism: you create targets with meaningful names which become logical points of alignment for system initialisation and startup. There are targets for printing, rebooting, system update or multi-user mode, so other systemd units can be dependencies and dependants of such targets
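In shell terms, the unit type really is just the filename suffix after the last dot, so you can extract it with parameter expansion – a quick sketch using a few of the unit names from the listing above:

```shell
# The text after the last dot in a unit file name is its type
for unit in systemd-tmpfiles-clean.timer systemd-udevd-control.socket \
            systemd-time-wait-sync.service; do
  echo "$unit is a ${unit##*.} unit"
done
```

The `${unit##*.}` expansion strips everything up to and including the last dot, leaving only the suffix.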
When reinstalling servers with a new version of the operating system, or simply reprovisioning VMs under the same hostname, you eventually run into the Host Key Verification Failed scenario. It should be easy enough to fix, once you’re positive it’s a valid infrastructure change.
Host Key Verification
Host key verification happens when you attempt to access a remote server with SSH. Before verifying that you have a user on the remote server and that your password or SSH key matches that remote user, the SSH client must do basic sanity checks at a lower level.
Specifically, the SSH client checks whether you have connected to this remote server before, and whether anything has changed since last time (it shouldn’t have).
Server (host) keys must not change during the normal life cycle of a server – they are generated at server/VM build stage (when OpenSSH starts up for the first time) and remain the same: they are the server’s identity.
This means that if your SSH client has one key fingerprint for a particular server and then suddenly detects a different one, it’s flagged as an issue: at best, you’re looking at a new, legitimate server replacement with the same hostname. At worst, someone’s trying to intercept your connection and/or pretend to be your server.
Host Key Verification Failed
Here’s how I get this error on my MacBook (s1.unixtutorial.org doesn’t really exist; it’s just a hostname I’m using as an example):
greys@maverick:~ $ ssh s1.unixtutorial.org
Warning: the ECDSA host key for 's1.unixtutorial.org' differs from the key for the IP address '51.159.18.142'
Offending key for IP in /Users/greys/.ssh/known_hosts:590
Matching host key in /Users/greys/.ssh/known_hosts:592
Are you sure you want to continue connecting (yes/no)?
At this stage your default answer should always be “no”, followed by an inspection of the known_hosts file to confirm what happened and why the identity appears to be different.
If you answer no, you’ll get the Host Key Verification Failed error:
greys@maverick:~ $ ssh s1.unixtutorial.org
Warning: the ECDSA host key for 's1.unixtutorial.org' differs from the key for the IP address '51.159.18.142'
Offending key for IP in /Users/greys/.ssh/known_hosts:590
Matching host key in /Users/greys/.ssh/known_hosts:592
Are you sure you want to continue connecting (yes/no)? no
Host key verification failed.
How To Solve Host Key Verification Errors
The output above actually tells you what to do: inspect the known_hosts file and look at lines 590 and 592 specifically. One of them is likely to be obsolete, and if you remove it the issue will go away.
Specifically, if you (like me) just reinstalled a dedicated server or VM with a new OS but kept the original hostname, then the issue is expected (the new server definitely generated a new host key), so the solution is indeed to remove the old key from the known_hosts file and re-attempt the connection.
First, I edited the /Users/greys/.ssh/known_hosts file and removed line 590. You simply need to find the line with the given number, or look for the hostname you just tried to SSH into (s1.unixtutorial.org in my case).
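Instead of editing known_hosts by hand, you can also let ssh-keygen do the cleanup – its -R option removes every entry recorded for a given hostname (a sketch; -f points at the known_hosts file to modify and a backup is kept as known_hosts.old):

```shell
# Remove all known_hosts entries for this hostname
ssh-keygen -R s1.unixtutorial.org -f ~/.ssh/known_hosts
```

This is handy when a host has several entries (by name and by IP), as in the output above.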
We can try reconnecting now, answer yes and connect to the server:
greys@maverick:~ $ ssh s1.unixtutorial.org
The authenticity of host 's1.unixtutorial.org (51.159.xx.yy)' can't be established.
ECDSA key fingerprint is SHA256:tviW39xN2M+4eZOUGi8UFvBZoHKaLaijBA581Nrhjac.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 's1.unixtutorial.org,51.159.xx.yy' (ECDSA) to the list of known hosts.
Activate the web console with: systemctl enable --now cockpit.socket
Last login: Fri Feb 7 21:18:35 2020 from unixtutorial.org
[greys@s1 ~]$
As you can see, the output now makes a lot more sense: our SSH client can’t establish the authenticity of the remote server s1.unixtutorial.org – that’s because we removed any mention of that server from our known_hosts file in the previous step. Answering yes adds the info about s1.unixtutorial.org back, so any later SSH sessions will work just fine:
greys@maverick:~ $ ssh s1.unixtutorial.org
Activate the web console with: systemctl enable --now cockpit.socket
Last login: Sat Feb 8 18:31:39 2020 from 93.107.36.193
[greys@s1 ~]$
Copying Host Keys to New Server
I should note that in some cases your setup or organisation may require the same host keys to be kept even across a server reinstall. In that case, you’ll need to grab the SSH host keys from the last known backup of the old server and re-deploy them onto the new server – I’ll show how in one of the future posts.
I needed to create a very simple bash script that asks for a password using keyboard input and then uses it further in the script. Turns out there’s an easy way to do it with the bash built-in command read.
Standard user input in bash
Here’s how normal user input works: you invoke read and pass it a variable name. The user is prompted for input by the bash script, and the input is shown (echoed) back in the terminal as it’s typed – so you can see what you type.
First, let’s create the script:
$ vi input.sh
This will be the content of our file:
#!/bin/bash
echo "Type your password, please:"
read PASS
echo "You just typed: $PASS"
Save the file (press Esc, then type :wq) and make it executable:
$ chmod a+rx input.sh
Now we can run the script and see how it works:
$ ./input.sh
Type your password, please:
mypass
You just typed: mypass
Works great, but seeing the typed password is not ideal. In a real-world example I wouldn’t print the password back either.
Secure keyboard input in bash
Turns out, read supports this scenario – just update the script to use this:
read -s PASS
-s is short for silent: the input is not echoed back to the terminal.
Save the script and run it again. This time the typing will not be shown, but the later command should output our input just fine:
$ ./input.sh
Type your password, please:
You just typed: mypass
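For completeness, read also has a -p option for printing the prompt inline, so the whole script can be condensed – a sketch of the same behaviour:

```shell
#!/bin/bash
# -p prints the prompt, -s suppresses echoing of the typed characters
read -s -p "Type your password, please: " PASS
echo    # print a newline, since -s also hides the Enter keypress
echo "You just typed: $PASS"
```

Note the extra echo: without it, the next output would appear on the same line as the hidden input.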
Understanding the difference between shutting a Unix system down and halting it is rather important.
Graceful shutdown
It’s important that your Unix/Linux system completes startup or shutdown in a graceful manner. This means that every process gets a chance to stop properly, rather than being killed. It also means these processes are stopped in an orderly manner – specifically, following the order of the appropriate startup/shutdown scripts or dependencies.
The shutdown command does exactly that: it puts your desktop or server into a state of stopping services and prepares the system to be powered off.
Depending on your Unix/Linux implementation and the command line options, the shutdown command will do a number of things before powering the system off:
a warning about the pending shutdown is broadcast to all the users logged into your system
a grace period (usually 1 minute) is started before shutdown proceeds
stop scripts are executed to correctly stop networked services
login attempts are blocked (new users won’t be able to log in)
processes are gracefully killed – meaning they can save data before shutting down
Relevant shutdown command options
If you want the shutdown procedure to begin immediately, specify the keyword now. If you want the server to power off and stay down, specify -h (for halt); if you want it to be rebooted, specify -r (for reboot).
IMPORTANT: If you don’t specify -h or -r, your Unix multiuser environment will be stopped and most OS services shut down, but the physical/virtual hardware will not be powered off. You’ll probably end up in a single user mode – where you can run admin commands as root.
Complete shutdown command for immediately bringing server down:
$ sudo shutdown -h now
Ungraceful shutdown: halt
The halt command is another way to stop your Unix-like environment, but it’s more aggressive: no shutdown scripts or graceful process completion is allowed – it just stops the Unix kernel.
halt also doesn’t really power your system off – it just stops your Unix/Linux environment from running. You still need to press the power button or flip the power switch.
Sometimes it’s not enough to know that a certain package is installed on your Linux system: you want the full list of files installed by the package, with their exact locations. This is where the dpkg-query command can help.
Get List of Files Installed by a Package in Ubuntu
I mentioned the xz-utils package for XZ archives yesterday, so let’s look at xz-utils again. This is how I can get the full list of files installed by it:
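The command output wasn’t preserved in my notes here, but it’s dpkg-query with its -L (list files) option; on a Debian/Ubuntu system this would be:

```shell
# List every file the xz-utils package installed, with full paths
dpkg-query -L xz-utils
```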
A simple grep makes the previous example even more useful. Let’s say we just want to know whether a package installs any binaries; here’s how we can check:
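Sticking with the same xz-utils example, filtering the file list for bin does the trick:

```shell
# Show only the executables a package ships (paths containing "bin")
dpkg-query -L xz-utils | grep bin
```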
Sometimes your testing scenario is so far ahead of your infrastructure that you don’t have the time or opportunity to procure proper SSL certificates for your website.
If a certificate is missing or expired, or there’s a domain name mismatch in the certificate of the website you’re connecting to, most browsers and command line tools will warn you.
For instance, curl will show you something like this:
greys@mcfly:~ $ curl https://unixtutorial.test
curl: (60) SSL: no alternative certificate subject name matches target host name 'unixtutorial.test'
More details here: https://curl.haxx.se/docs/sslcerts.html
curl failed to verify the legitimacy of the server and therefore could not
establish a secure connection to it. To learn more about this situation and
how to fix it, please visit the web page mentioned above.
If you really know what you’re doing, it’s possible to ignore SSL warnings and attempt to download the content anyway.
WARNING: by really knowing what you’re doing I mean understanding what SSL errors mean. For instance, the one above suggests that the webserver doesn’t have a domain like unixtutorial.test in its certificates – so even though the download may succeed, we’ll probably get the wrong content (some other website’s content).
How To Make curl Ignore SSL Warnings
Specify the --insecure option for curl and it will ignore the SSL warnings and download the content anyway:
greys@mcfly:~ $ curl --insecure https://unixtutorial.test
Site Not Configured | 404 Not Found
@import url(//fonts.googleapis.com/css?family=Open+Sans:300);
body { color: #000;
As I predicted, the webserver returned content, but it’s actually a “Not Found” page because there’s no such website (unixtutorial.test is a fictitious domain).
Input/output redirection is a fundamental functionality in Unix and Linux. It’s a great way of manipulating data exchange between commands that you run.
Today I just want to show you an example of using input/output redirection – follow my steps and let me know if you get the same results. Any questions – please get in touch and I’ll update this post and the Unix Input/Output Redirect reference.
Standard Output Redirect
Let’s say we want to create a simple text file with a message “Hello”. One way to do this would be to output the Hello message using echo command, and then to redirect its standard output using the > redirection:
greys@maverick:~ $ echo Hello > /tmp/try.hello
The basic use of redirection: you run any command you like, finish the command line with the > sign to invoke redirection, and specify the file the output should be written to.
USEFUL: this standard output stream is called STDOUT.
If we check contents of the /tmp/try.hello file using cat command now, we can see our Hello:
greys@maverick:~ $ cat /tmp/try.hello
Hello
Since we can redirect output of any commands like this, we can redirect the result of this cat /tmp/try.hello command into another file, perhaps /tmp/try.hello2, and it will then also contain our Hello message:
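That second redirect would look like this:

```shell
# redirect cat's STDOUT into another file
cat /tmp/try.hello > /tmp/try.hello2
cat /tmp/try.hello2    # prints Hello (the message we created earlier)
```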
Standard Input Redirect
Similar to Standard Output, there’s also Standard Input – called STDIN. It sources the content of a specified file and uses it as input for whatever you’re running.
So we use the < sign to redirect (take) input from a file. For instance:
greys@maverick:~ $ cat < /tmp/try.hello
Hello
Now, this is the simplest example and not the most useful one: most commands in Unix expect an input file as one of the first parameters anyway. So we don’t really have to forward input like this – we could just run “cat /tmp/try.hello“.
But it’s important to recognise the difference: in this example with STDIN redirection above, cat command is not aware of any input files. It’s run without parameters and as such expects someone to type input or source it using redirection just like we did.
Standard Error Output Redirect
Now, what happens if the command you run generates an error? That’s not a standard command behaviour or standard output. It’s an error message, or standard error output: STDERR.
What this means is that Unix/Linux is rather clever: error messages are treated as a separate destination. So even though your console shows both errors and standard output, redirection treats them separately.
Here’s an example. I’m trying to cat a non-existent file:
greys@maverick:~ $ cat /tmp/try.hello3
cat: /tmp/try.hello3: No such file or directory
This “cat: /tmp/try.hello3: No such file or directory” is an error message, not the standard output. That’s why, when I’m redirecting it to a file using standard output redirection, nothing is captured and put into the redirection output file:
greys@maverick:~ $ cat /tmp/try.hello3 > /tmp/redirected.out
cat: /tmp/try.hello3: No such file or directory
greys@maverick:~ $ cat /tmp/redirected.out
greys@maverick:~ $
Pretty cool, huh?
In order to redirect specifically error messages, we need to use special form of redirection, for STDERR error messages. We use number 2 before the redirection symbol, which refers to STDERR:
greys@maverick:~ $ cat /tmp/try.hello3 2> /tmp/redirected.out
greys@maverick:~ $ cat /tmp/redirected.out
cat: /tmp/try.hello3: No such file or directory
Two things happened:
Our command returned no output, because all of its result (the standard error it generated) was forwarded to the /tmp/redirected.out file
The /tmp/redirected.out file now contains our error message
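A related idiom worth knowing (not shown above) captures both streams into the same file: redirect STDOUT with >, then point STDERR at it with 2>&1:

```shell
# 2>&1 means "send stream 2 (STDERR) to wherever stream 1 (STDOUT) points"
cat /tmp/try.hello3 > /tmp/redirected.out 2>&1
cat /tmp/redirected.out    # now shows the error message
```

The order matters: 2>&1 must come after the > redirect, or STDERR will still go to the terminal.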
I think this is enough for one post. I’ll copy most of it into the Unix redirects reference page and come back some other day with more.
So, long story… After upgrading to macOS Catalina, my years-old automount.sh script running via cron stopped working. It was a long enough journey to fix the script itself (sudo permissions, the PATH variable missing some important directories when run from a script), but after the script was fixed I faced another problem: cron processes kept piling up.
Why is this a problem? Eventually my MacBook would end up with more than 10 thousand (!) cron-related processes and would simply run out of process space – no command could be typed, no app could be started. Only a shutdown and power-on would fix it.
I’ve been looking at this problem for quite some time, and now that I’m closer to solving it I’d like to share first findings.
What is this cron thing?
I don’t remember if I’ve mentioned cron much on Unix Tutorial, so here’s a brief summary: cron is a system service that helps you schedule and regularly run commands. It uses crontabs: files which list a recurrence pattern and the command line to run.
Here’s an example of a crontab entry; each asterisk represents a parameter like “day of the week”, “hour”, “minute”, etc. An asterisk means “every value”, so the line below would run my script every minute:
* * * * * /Users/greys/scripts/try.sh
And here’s my automounter script’s schedule – it runs every 15 minutes (so I’m specifying all the valid times at 15-minute intervals: 0 minutes, 15 minutes, 30 minutes and 45 minutes):
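The crontab line itself didn’t survive in my notes here, but based on that description it would look something like this (the script path is illustrative):

```
# minute field lists every 15-minute mark; all other fields are "any"
0,15,30,45 * * * * /Users/greys/scripts/automount.sh
```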
Every user on your Unix-like system can have a crontab (and yes, there’s a way to prohibit cron use for certain users), and usually the root or adm user has lots of OS-specific tidy-up scripts on Linux and Solaris systems.
The thing with cron is that it’s supposed to be the scheduler that runs your tasks regularly and always stays in the shadows. It’s not meant to pile processes up, as long as the scripts invoked from cron work correctly.
Debugging cron in macOS
Turns out, /usr/sbin/cron has quite a few options for debugging in macOS:
-x debugflag[,...]
Enable writing of debugging information to standard output. One or more of the
following comma separated debugflag identifiers must be specified:
bit currently not used
ext make the other debug flags more verbose
load be verbose when loading crontab files
misc be verbose about miscellaneous one-off events
pars be verbose about parsing individual crontab lines
proc be verbose about the state of the process, including all of its offspring
sch be verbose when iterating through the scheduling algorithms
test trace through the execution, but do not perform any actions
What I ended up doing is:
Step 1: Kill all the existing crons
mcfly:~ greys$ sudo pkill cron
Step 2: Quickly start an interactive debug copy of cron as root
When I say “quickly”, I’m referring to the fact that the cron service is managed by launchd in macOS, meaning that when you kill it, it respawns pretty much instantly.
So I would get this error:
mcfly:~ root# /usr/sbin/cron -x ext,load,misc,pars,proc,sch
-sh: kill: (23614) - No such process
debug flags enabled: ext sch proc pars load misc
log_it: (CRON 24156) DEATH (cron already running, pid: 24139)
cron: cron already running, pid: 24139
And the approach I took is kill that last running process and restart cron in the same command line:
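Based on the log above, that one-liner would be something along these lines (the PID is taken from the “cron already running” message):

```
# kill the respawned cron, then immediately start a debug copy of it
kill 24139 ; /usr/sbin/cron -x ext,load,misc,pars,proc,sch
```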
I’ll admit: this is probably way too much information, but when you’re debugging an issue there’s no such thing as too much – you’re getting all the clues you can get to try and understand the problem.
In my case, nothing was found: cron would start my cronjob, let it finish, report everything was done correctly and then still somehow leave an extra process behind:
[17464] TargetTime=1579264860, sec-to-wait=0
[17464] load_database()
[17464] spool dir mtime unch, no load needed.
[17464] tick(41,12,16,0,5)
user [greys:greys::…] cmd="/Users/greys/scripts/try.sh"
[17464] TargetTime=1579264920, sec-to-wait=60
[17464] do_command(/Users/greys/scripts/try.sh, (greys,greys,))
[17464] main process returning to work
[17464] TargetTime=1579264920, sec-to-wait=60
[17464] sleeping for 60 seconds
[17473] child_process('/Users/greys/scripts/try.sh')
[17473] child continues, closing pipes
[17473] child reading output from grandchild
[17474] grandchild process Vfork()'ed
log_it: (greys 17474) CMD (/Users/greys/scripts/try.sh)
[17473] got data (56:V) from grandchild
I’m still going to revisit this with a proper fix, but there’s at least an interim one for now: forward all the output from each cronjob to /dev/null.
In daily (Linux-based) practice, I don’t redirect cronjobs output because if there’s any output generated – it’s likely an error that I want to know about. cron runs a command, and if there’s any output, it sends an email to the user who scheduled the command. You see the email, inspect and fix the problem.
But in macOS Catalina it seems this won’t work without further tuning. Perhaps some mailer-related permissions are missing, but the fact is that any output generated by your cronjob will make a cron process keep running (even though your cronjob script has completed successfully).
So the temporary fix for me was to turn my crontab from this:
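The exact lines weren’t preserved here, but the change follows the pattern described above (the script path and schedule are the illustrative ones from earlier):

```
# before: any output from the job leaves a stuck cron process behind
0,15,30,45 * * * * /Users/greys/scripts/automount.sh

# after: all output (STDOUT and STDERR) is discarded
0,15,30,45 * * * * /Users/greys/scripts/automount.sh > /dev/null 2>&1
```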
That’s it for now! I’m super glad I finally got to the bottom of this – it took a few sessions of reviewing and updating my script, because frankly I focused on the script and not on the OS itself.