How To Extract XZ Files

XZ archives can be unpacked using XZ Utils

I’ve come across XZ files more than once, most recently when downloading Kali Linux. The format should be fairly common knowledge by now, but some operating systems still don’t support it out of the box – so I decided to research and document it.

What is an XZ file?

XZ is a modern lossless compression format that is more efficient than gzip and bzip2. Many Linux distros use XZ for compressing their software packages or ISO images. XZ is an open-source format maintained by the Tukaani Project.

Support for XZ in tar

Modern Linux distros (certainly Ubuntu 19.x and CentOS 7.x, which I checked today) come with XZ Utils installed, which allows the tar command to automatically unpack XZ files.
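
For example (the archive name here is just an illustration), extracting an XZ-compressed tarball on such a system needs no extra options – tar detects the compression automatically:

$ tar xvf archive.tar.xz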



If xz-utils didn’t come preinstalled, you can install it like this in Ubuntu/Debian:

# apt install xz-utils

or like this in CentOS/Fedora/RedHat:

# yum install xz

gzip/gunzip Support for XZ in macOS

Although tar in macOS doesn’t support XZ format natively:

greys@mcfly:~/Downloads $ tar xzvf kali-linux-2020.1-rpi3-nexmon-64.img.xz
tar: Error opening archive: Unrecognized archive format

… you can still use gunzip to decompress the XZ file in macOS:

greys@mcfly:~/Downloads $ ls -al kali-linux-2020*
-rw-r--r--@ 1 greys  staff      259647 30 Jan 22:57 kali-linux-2020-1.png
-rw-r--r--@ 1 greys  staff  1048328860 30 Jan 23:06 kali-linux-2020.1-rpi3-nexmon-64.img.xz
greys@mcfly:~/Downloads $ gunzip kali-linux-2020.1-rpi3-nexmon-64.img.xz
greys@mcfly:~/Downloads $ ls -ald kali-linux-2020.1-rpi3-nexmon-64.img
-rw-r--r--  1 greys  staff  6999999488 30 Jan 23:06 kali-linux-2020.1-rpi3-nexmon-64.img

xz-utils in macOS

If you insist on managing XZ files using xz utils, you’ll need to install them with brew:
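
$ brew install xz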

… and then a whole bunch of XZ commands become available:

xz
xzcat
xzcmp
xzdec
xzdiff
xzegrep
xzfgrep
xzgrep
xzless
xzmore
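
A couple of typical invocations, using the Kali image from earlier as an example:

greys@mcfly:~/Downloads $ xz -l kali-linux-2020.1-rpi3-nexmon-64.img.xz     # list compressed/uncompressed sizes
greys@mcfly:~/Downloads $ xz -dk kali-linux-2020.1-rpi3-nexmon-64.img.xz    # decompress; -k keeps the original .xz file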





Kali Linux 2020.1


This time I’m definitely installing it on my Raspberry Pi 4!

Kali Linux just published its first release of the year 2020, with the following very welcome improvements:

  • Single installer image – I find it refreshing that there’s only one image to download, which then lets you choose the desktop environment of your preference (GNOME, KDE, MATE or LXDE)
  • the kali user is the new default – it used to be root
  • Python 2 is officially End Of Life
  • Theme refinements (kali-undercover looks even more like Windows now)

Just to remind you: Kali Linux is a Debian-based Linux distro made by security professionals for security assessments, penetration testing and digital forensics. 





Uninstalling minikube

Deleting minikube

I don’t remember how, but I ended up with two Kubernetes installs on my mcfly macOS desktop: the one that came with Docker Desktop for macOS, and the minikube variety that I must have downloaded and installed at some point in the past.

What is minikube?

minikube is a local Kubernetes environment for testing and development purposes. It spins up a lightweight virtual machine (will work even on a modest laptop) and runs an entire Kubernetes cluster in it.
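
If you haven’t used it before, a typical session looks something like this (a quick sketch – minikube start picks a supported VM driver for your system automatically):

greys@mcfly:~ $ minikube start
greys@mcfly:~ $ minikube status
greys@mcfly:~ $ kubectl get nodes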

Kubernetes (I haven’t written about it on Unix Tutorial yet) is an open-source system for managing containerized applications – deploying, scaling and failing them over using cluster architecture.

Deleting minikube

It’s great that minikube has support for such scenarios, so I just stopped it and invoked the delete command like this:

greys@mcfly:~ $ minikube stop
✋  Stopping "minikube" in hyperkit …
🛑  "minikube" stopped.
greys@mcfly:~ $ minikube delete
🔥  Deleting "minikube" in hyperkit …
💔  The "minikube" cluster has been deleted.
🔥  Successfully deleted profile "minikube"

To be sure nothing is left behind, I also deleted the minikube configuration directory:
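
greys@mcfly:~ $ rm -rf ~/.minikube      # default location of minikube state and config – adjust if yours differs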

and even the binary symlink itself:
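
greys@mcfly:~ $ which minikube                  # check where the binary or symlink actually lives
greys@mcfly:~ $ rm /usr/local/bin/minikube      # the path here is an assumption – use whatever which reported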

The Docker Desktop variety of Kubernetes is now the only one left, so I can continue my experiments and will publish more in the coming days.





Linux Kernel 5.5


Yesterday Linux Kernel 5.5 was released, almost exactly 2 months after the Linux Kernel 5.4 release.

Here’s what I find interesting in this release:

  • improved Raspberry Pi 4 support – that’s pretty cool! I must give it a try, since my RPi4 is still not running any software
  • further filesystem improvements: exFAT, ext4 and btrfs all get further refinements. Notable is direct I/O via iomap for ext4
  • initial Thunderbolt 3 support – that’s pretty cool, I have a TB3 add-in card for my 5K monitor, as you may remember
  • AMD Navi GPU overclocking support
  • NVMe drive temperature reporting – also very convenient. I’m getting this info on my MacBook out of the box, so it will be interesting to try this NVMe functionality on a Linux desktop.





How To Ignore SSL Warnings in curl

Making curl ignore SSL warnings

Sometimes your testing scenario or cycle is so far ahead of your infrastructure that you don’t even have the time or opportunity to procure proper SSL certificates for your website.

If there’s a missing or expired certificate, or a domain name mismatch in the certificate of the website you’re connecting to, most browsers and command line tools will warn you.

For instance, curl will show you something like this:

greys@mcfly:~ $ curl https://unixtutorial.test
 curl: (60) SSL: no alternative certificate subject name matches target host name 'unixtutorial.test'
 More details here: https://curl.haxx.se/docs/sslcerts.html
 curl failed to verify the legitimacy of the server and therefore could not
 establish a secure connection to it. To learn more about this situation and
 how to fix it, please visit the web page mentioned above.

If you really know what you’re doing, it’s possible to ignore SSL warnings and attempt to download the content anyway.

WARNING: by really knowing what you’re doing I mean understanding what SSL errors mean. For instance, the one above suggests that the webserver doesn’t have a domain like unixtutorial.test in its certificates – so even though the download may succeed, we’ll probably get the wrong content (some other website’s content).

How To Make curl Ignore SSL Warnings

Specify the --insecure option for curl and it will ignore the SSL warnings and download the content anyway:

greys@mcfly:~ $ curl --insecure https://unixtutorial.test

Site Not Configured | 404 Not Found

(rest of the HTML output trimmed)

As I predicted, the webserver returned content, but it’s actually a “Not Found” page, because there’s no such website configured (unixtutorial.test is a fictitious domain).
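
Side note: -k is the short form of --insecure, so the same request can also be written as:

greys@mcfly:~ $ curl -k https://unixtutorial.test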

That’s it for today!





Unix Input/Output Redirection

Error Output STDERR Redirection

Input/output redirection is a fundamental functionality in Unix and Linux. It’s a great way of manipulating data exchange between commands that you run.

There are lots of examples here, and this probably calls for a Unix Input/Output Redirection Reference page (I’ll create it soon).

Today I just want to show you an example of using input/output redirection – follow my steps and let me know if you get the same results. Any questions – please get in touch and I’ll update this post and the Unix Input/Output Redirection reference.

Standard Output Redirect

Let’s say we want to create a simple text file with the message “Hello”. One way to do this would be to output the Hello message using the echo command, and then to redirect its standard output into a file using the > redirection:

greys@maverick:~ $ echo Hello > /tmp/try.hello

The basic use of redirection is: you run any command you like, finish the command line with the > sign that invokes redirection, and specify the file the redirected output should be written to.

USEFUL: Such standard output is called STDOUT.

If we check the contents of the /tmp/try.hello file using the cat command now, we can see our Hello:

greys@maverick:~ $ cat /tmp/try.hello
Hello

Since we can redirect the output of any command like this, we can redirect the result of this cat /tmp/try.hello command into another file, perhaps /tmp/try.hello2, and it will then also contain our Hello message:

greys@maverick:~ $ cat /tmp/try.hello > /tmp/try.hello2
greys@maverick:~ $ cat /tmp/try.hello2
Hello
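
One related form worth noting while we’re here: the >> redirection appends to a file instead of overwriting it:

greys@maverick:~ $ echo Hello again >> /tmp/try.hello
greys@maverick:~ $ cat /tmp/try.hello
Hello
Hello again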

Standard Input Redirect

Similar to Standard Output, there’s also Standard Input – called STDIN. It takes the content of a specified file and uses it as input for whatever you’re running.

So we use the < sign to redirect (take) input from a file. For instance:

greys@maverick:~ $ cat < /tmp/try.hello
Hello

Now, this is the simplest example and not the most useful one: most commands in Unix expect an input file as one of the first parameters anyway. So we don’t really have to forward input like this – we could just run “cat /tmp/try.hello”.

But it’s important to recognise the difference: in this example with STDIN redirection above, the cat command is not aware of any input files. It’s run without parameters and as such expects someone to type the input or to source it using redirection, just like we did.

Standard Error Output Redirect

Now, what happens if the command you run generates an error? That’s not a standard command behaviour or standard output. It’s an error message, or standard error output: STDERR.

What this means is that Unix/Linux is rather clever – error messages are treated as a separate output stream. So even though in your console you’ll see both errors and standard output together, redirection will treat them separately.

Here’s an example. I’m trying to cat a non-existent file:

greys@maverick:~ $ cat /tmp/try.hello3
cat: /tmp/try.hello3: No such file or directory

This “cat: /tmp/try.hello3: No such file or directory” is an error message, not standard output. That’s why, when I redirect it to a file using standard output redirection, nothing is captured and put into the redirection output file:

greys@maverick:~ $ cat /tmp/try.hello3 > /tmp/redirected.out
cat: /tmp/try.hello3: No such file or directory
greys@maverick:~ $ cat /tmp/redirected.out
greys@maverick:~ $

Pretty cool, huh?

In order to redirect error messages specifically, we need to use a special form of redirection for STDERR. We put the number 2, which refers to STDERR, before the redirection symbol:

greys@maverick:~ $ cat /tmp/try.hello3 2> /tmp/redirected.out
greys@maverick:~ $ cat /tmp/redirected.out
cat: /tmp/try.hello3: No such file or directory

Two things happened:

  1. Our command returned no output, because all of its result (the standard error it generated) got forwarded to the /tmp/redirected.out file
  2. The /tmp/redirected.out file now contains our error message
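
A closely related pattern, combining what we’ve just seen: you can send both STDOUT and STDERR to the same file by redirecting STDOUT first and then pointing STDERR (descriptor 2) at it with 2>&1:

greys@maverick:~ $ cat /tmp/try.hello /tmp/try.hello3 > /tmp/all.out 2>&1

The /tmp/all.out file will now contain both the Hello line and the “No such file or directory” error message.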

I think this is enough for one post. I’ll copy most of this into the Unix redirects reference page and come back some other day with more on this topic.

Have fun!





DEBUG: cron keeps piling up in macOS

cron processes piling up in macOS Catalina

So, long story… After upgrading to macOS Catalina, my years-old automount.sh script running via cron stopped working. It’s been a long journey of fixing the script itself (sudo permissions, the PATH variable missing some important directories when run from a script), but after the script was fixed I faced another problem: cron processes kept piling up.

Why is this a problem? Eventually, my MacBook would end up with more than 10 thousand (!) cron-related processes and would simply run out of process space – no command could be typed, no app could be started. Only a shutdown and power-on would fix this.

I’ve been looking at this problem for quite some time, and now that I’m closer to solving it I’d like to share first findings.

What is this cron thing?

I don’t remember if I’ve mentioned cron much on Unix Tutorial, so here’s a brief summary: cron is a system service that helps you schedule and regularly run commands. It uses crontabs: files which list a recurrence pattern and the command line to run.

Here’s an example of a crontab entry: each asterisk represents a parameter like “day of the week”, “hour”, “minute”, etc. An asterisk means “every value”, so the line below would run my script every minute:

* * * * * /Users/greys/scripts/try.sh
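
For reference, the five fields before the command are, in order (the /path/to/command below is just a placeholder):

# ┌──────── minute (0-59)
# │ ┌────── hour (0-23)
# │ │ ┌──── day of month (1-31)
# │ │ │ ┌── month (1-12)
# │ │ │ │ ┌ day of week (0-7; 0 and 7 are both Sunday)
# │ │ │ │ │
  * * * * * /path/to/command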

And here’s my automounter script; it runs every 15 minutes (so I’m specifying all the valid times at a 15-minute interval – 0 minutes, 15 minutes, 30 minutes and 45 minutes):

0,15,30,45 * * * * /Users/greys/scripts/automount.sh

Every user on your Unix-like system can have a crontab (and yes, there’s a way to prohibit cron use for certain users), and usually the root or adm user has lots of OS-specific tidy-up scripts on Linux and Solaris systems.

The thing with cron is it’s supposed to be this scheduler that runs your tasks regularly and then always stays in the shadows. It’s not meant to be piling processes up, as long as your scripts invoked from cron are working correctly.

Debugging cron in macOS

Turns out, /usr/sbin/cron has quite a few options for debugging in macOS:

 -x debugflag[,...]
         Enable writing of debugging information to standard output.  One or more of the
         following comma separated debugflag identifiers must be specified:

         bit   currently not used
         ext   make the other debug flags more verbose
         load  be verbose when loading crontab files
         misc  be verbose about miscellaneous one-off events
         pars  be verbose about parsing individual crontab lines
         proc  be verbose about the state of the process, including all of its offspring
         sch   be verbose when iterating through the scheduling algorithms
         test  trace through the execution, but do not perform any actions

What I ended up doing is:

Step 1: Kill all the existing crons

mcfly:~ greys$ sudo pkill cron

Step 2: Quickly start an interactive debug copy of cron as root

mcfly:~ root# /usr/sbin/cron -x ext,load,misc,pars,proc,sch

When I say “quickly” I’m referring to the fact that the cron service is managed by launchd in macOS, meaning you kill it and it respawns pretty much instantly.
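
If you want to confirm that launchd is indeed managing cron, you can check its job list (the exact job label may differ between macOS releases, so I just grep for cron):

mcfly:~ root# launchctl list | grep -i cron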

So I would get this error:

mcfly:~ root# /usr/sbin/cron -x ext,load,misc,pars,proc,sch
-sh: kill: (23614) - No such process
debug flags enabled: ext sch proc pars load misc
log_it: (CRON 24156) DEATH (cron already running, pid: 24139)
cron: cron already running, pid: 24139

And the approach I took was to kill that last running process and restart cron in the same command line:

mcfly:~ root# kill -9 24281; /usr/sbin/cron -x ext,load,misc,pars,proc,sch
debug flags enabled: ext sch proc pars load misc
[24299] cron started
[24299] load_database()
        greys:load_user()
linenum=1
linenum=2
linenum=3
linenum=4
linenum=5
linenum=6
linenum=7
load_env, read <* * * * * /Users/greys/scripts/try.sh &> /dev/null>
load_env, parse error, state = 7
linenum=0
load_entry()…about to eat comments
linenum=1
linenum=2
...

I’ll admit: this is probably way too much information, but when you’re debugging an issue there’s no such thing as too much – you’re getting all the clues you can get to try and understand the problem.

In my case, nothing was found: cron would start my cronjob, let it finish, report everything was done correctly and then still somehow leave an extra process behind:

[17464] TargetTime=1579264860, sec-to-wait=0
[17464] load_database()
[17464] spool dir mtime unch, no load needed.
[17464] tick(41,12,16,0,5)
user [greys:greys::…] cmd="/Users/greys/scripts/try.sh"
[17464] TargetTime=1579264920, sec-to-wait=60
[17464] do_command(/Users/greys/scripts/try.sh, (greys,greys,))
[17464] main process returning to work
[17464] TargetTime=1579264920, sec-to-wait=60
[17464] sleeping for 60 seconds
[17473] child_process('/Users/greys/scripts/try.sh')
[17473] child continues, closing pipes
[17473] child reading output from grandchild
[17474] grandchild process Vfork()'ed
log_it: (greys 17474) CMD (/Users/greys/scripts/try.sh)
[17473] got data (56:V) from grandchild

Here’s how the processes would look:

0 17464 17213   0 12:40pm ttys003    0:00.01 /usr/sbin/cron -x ext,load,misc,pars,proc,sch
0 17473 17464   0 12:41pm ttys003    0:00.00 /usr/sbin/cron -x ext,load,misc,pars,proc,sch
0 17476 17473   0 12:41pm ttys003    0:00.00 (cron)
0 17520 17464   0 12:42pm ttys003    0:00.00 /usr/sbin/cron -x ext,load,misc,pars,proc,sch
0 17523 17520   0 12:42pm ttys003    0:00.00 (cron)

How To Avoid Crons Piling Up in Catalina

I’m still going to revisit this with a proper fix, but there’s at least an interim one identified for now: you must redirect all the output from each cronjob to /dev/null.

In daily (Linux-based) practice, I don’t redirect cronjob output, because if any output is generated, it’s likely an error that I want to know about: cron runs a command, and if there’s any output, it sends an email to the user who scheduled the command. You see the email, inspect it and fix the problem.

But in macOS Catalina, it seems this won’t work without further tuning. Perhaps there are some mailer-related permissions missing or something like that, but the fact is that any output generated by your cronjob will make the cron process keep running (even though your cronjob script has completed successfully).

So the temporary fix for me was to turn my crontab from this:

0,15,30,45 * * * * /Users/greys/scripts/automount.sh
* * * * * /Users/greys/scripts/try.sh

to this:

0,15,30,45 * * * * /Users/greys/scripts/automount.sh >/dev/null 2>&1
* * * * * /Users/greys/scripts/try.sh >/dev/null 2>&1

That’s it for now! I’m super glad I finally solved this – it took a few sessions of reviewing and updating my script, because frankly I focused on the script and not on the OS itself.





Advanced USB discovery with usb-devices

usb-devices example

If you’re working with USB devices a lot, the Linux platform offers a great way to learn more about what’s connected to your system: the usb-devices command.

Using usb-devices Command

Simply type usb-devices and enjoy the output. Below is an example from one of my virtual machines:

greys@mint:~ $ usb-devices
T:  Bus=01 Lev=00 Prnt=00 Port=00 Cnt=00 Dev#=  1 Spd=480 MxCh=15
D:  Ver= 2.00 Cls=09(hub  ) Sub=00 Prot=00 MxPS=64 #Cfgs=  1
P:  Vendor=1d6b ProdID=0002 Rev=04.15
S:  Manufacturer=Linux 4.15.0-54-generic ehci_hcd
S:  Product=EHCI Host Controller
S:  SerialNumber=0000:00:1d.7
C:  #Ifs= 1 Cfg#= 1 Atr=e0 MxPwr=0mA
I:  If#= 0 Alt= 0 #EPs= 1 Cls=09(hub  ) Sub=00 Prot=00 Driver=hub

T:  Bus=01 Lev=01 Prnt=01 Port=00 Cnt=01 Dev#=  6 Spd=480 MxCh= 0
D:  Ver= 2.00 Cls=00(>ifc ) Sub=00 Prot=00 MxPS=64 #Cfgs=  1
P:  Vendor=203a ProdID=fffa Rev=01.00
S:  Manufacturer=Parallels
S:  Product=Virtual Printer (Print to PDF (Mac Desktop))
S:  SerialNumber=TAG11d87aca0
C:  #Ifs= 1 Cfg#= 1 Atr=c0 MxPwr=0mA
I:  If#= 0 Alt= 0 #EPs= 1 Cls=07(print) Sub=01 Prot=01 Driver=usblp

T:  Bus=01 Lev=01 Prnt=01 Port=01 Cnt=02 Dev#=  9 Spd=480 MxCh= 0
D:  Ver= 2.00 Cls=00(>ifc ) Sub=00 Prot=00 MxPS=64 #Cfgs=  1
P:  Vendor=203a ProdID=fffa Rev=01.00
S:  Manufacturer=Parallels
S:  Product=Virtual Printer (Brother HL-L5200DW series)
S:  SerialNumber=TAG1c8860e8b
C:  #Ifs= 1 Cfg#= 1 Atr=c0 MxPwr=0mA
I:  If#= 0 Alt= 0 #EPs= 1 Cls=07(print) Sub=01 Prot=01 Driver=usblp

Understanding usb-devices Output

There’s a lot of information there, but almost every line is useful.

The first line confirms the USB port location – the system bus and the port on it. It’s also a great way of seeing the speed of the port – 480Mbit/s suggests it’s a USB 2.0 port:

T:  Bus=01 Lev=01 Prnt=01 Port=01 Cnt=02 Dev#=  9 Spd=480 MxCh= 0

There are a few lines documenting the device manufacturer and model name. Because it’s a virtual machine running on my Parallels Desktop software, the manufacturer is reported as Parallels. But the next line clarifies that this is actually the Brother laser printer from my home office network:

S:  Manufacturer=Parallels
S:  Product=Virtual Printer (Brother HL-L5200DW series)

There’s a line that I imagine will be more useful on laptops, reporting the power consumption for this USB port:

C:  #Ifs= 1 Cfg#= 1 Atr=c0 MxPwr=0mA

And last, but not least, is the line confirming the device driver responsible for the functionality of the USB device – this should be very handy for troubleshooting:

I: If#= 0 Alt= 0 #EPs= 1 Cls=07(print) Sub=01 Prot=01 Driver=usblp
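
If you’re only interested in a couple of these fields across all connected devices, the output greps nicely – for example, to see just the product names and the drivers handling them:

greys@mint:~ $ usb-devices | grep -E 'Product=|Driver='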

I’ll run this usb-devices command on my Dell XPS laptop with Ubuntu 19.10 soon and will share my findings.





systemd services Status

Example of systemctl status

I’ve just learned by accident that it’s possible to run systemctl status without specifying the name of a systemd service – this way you get a listing and the status of all available services in a neat tree structure.

SystemD services

As you may remember, startup services are no longer managed by /etc/init.d scripts in Linux. Instead, systemd services are created – this is handy for both managing services and confirming their status (journalctl is great for showing the latest status messages, like an error log).
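
For a single service, the usual combination is systemctl status for the current state and journalctl for recent log messages – for example, for the SSH daemon (assuming it’s installed under the ssh.service unit name, as on Debian/Ubuntu):

greys@sd-147674:~$ systemctl status ssh.service
greys@sd-147674:~$ journalctl -u ssh.service --since today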

Show systemd Services Status with systemctl

Run without any parameters, the systemctl status command will show you a tree structure like this:

greys@sd-147674:~$ systemctl status
 ● sd-147674
     State: running
      Jobs: 0 queued
    Failed: 0 units
     Since: Sat 2019-11-23 08:45:20 CET; 1 months 20 days ago
    CGroup: /
            ├─user.slice
            │ └─user-1000.slice
            │   ├─user@1000.service
            │   │ └─init.scope
            │   │   ├─19250 /lib/systemd/systemd --user
            │   │   └─19251 (sd-pam)
            │   └─session-1309.scope
            │     ├─19247 sshd: greys [priv]
            │     ├─19264 sshd: greys@pts/0
            │     ├─19265 -bash
            │     ├─19278 systemctl status
            │     └─19279 pager
            ├─init.scope
            │ └─1 /sbin/init
            └─system.slice
              ├─systemd-udevd.service
              │ └─361 /lib/systemd/systemd-udevd
              ├─cron.service
              │ └─541 /usr/sbin/cron -f
              ├─bind9.service
              │ └─587 /usr/sbin/named -u bind
              ├─systemd-journald.service
              │ └─345 /lib/systemd/systemd-journald
              ├─mdmonitor.service
              │ └─484 /sbin/mdadm --monitor --scan
              ├─ssh.service
              │ └─599 /usr/sbin/sshd -D
              ├─openntpd.service
              │ ├─634 /usr/sbin/ntpd -f /etc/openntpd/ntpd.conf
              │ ├─635 ntpd: ntp engine
              │ └─637 ntpd: dns engine
              ├─rsyslog.service
              │ └─542 /usr/sbin/rsyslogd -n -iNONE
...

In this output, you can see systemd service names like cron.service or ssh.service, and then under them the process name and numerical process ID that indicate how each service is provided.

INTERESTING: Note how openntpd.service is provided by 3 separate processes: the main ntpd process and two child ntpd processes (the NTP engine and the DNS engine).





Unix Tutorial – Annual Digest – 2019

As promised, this is my very first annual summary of interesting things in my industry (Unix/Linux administration) and on my Unix Tutorial blog.

Please get in touch to arrange a technical consultation or book a training!

Unix Tutorial News

2019 has been a tremendous year for my blog: almost a million visits to my posts and pages, hundreds of interesting topics researched and even more planned for the year ahead.

Here are just some of the notable changes on Unix Tutorial:

Unix and Linux News

Quite a few great changes happened in 2019:

Software News

  • Brave 1.0 browser got released – my primary browser that keeps blocking ads and trackers at an impressive rate
  • VirtualBox 6.1 released – this is the must-have software on Linux and Windows platforms, such a great and stable desktop virtualization product
  • tmux 3.0 arrived – I already upgraded tmux on my macOS systems to the 3.0a version
  • Firefox established new release cycle so improved versions are made available much sooner now
  • Homebrew 2.0.0 was released
  • Perl 6 (can’t believe it’s been around since 2015!) was renamed to Raku
  • Swift 5 was released by Apple
  • Java SE 12 arrived
  • HTTP/3 gained adoption and full support in Chrome and Firefox. Naturally, nginx led the way with an HTTP/3 module.
  • Jekyll 4 arrived – I really like using it for my static sites, so I upgraded my macOS systems to it
  • Glimpse, a fork of GIMP graphics editor, finally became available

Scary Stuff

It didn’t always seem like it, but 2019 turned out to be a very scary year in terms of exploits, hardware and software vulnerabilities, and hacks of major software repos:

  • Docker Hub was hacked and information about 190k users (including password hashes) got leaked in April
  • PEAR (PHP) repository got hacked
  • Even more hardware attacks got identified for both Intel and AMD processors
  • GitHub, Bitbucket and GitLab all got affected by ransom attacks encrypting repositories
  • In May Firefox had that incident with intermediary certificates which instantly blocked browser extensions in millions of browsers
  • In September, Richard Stallman was forced to resign from the Free Software Foundation

We live in exciting times. It’s been fun to try new products and services in 2019 and all the things indicate that 2020 will be even more impressive in terms of innovations and rapid adoption of new standards and technologies.

That’s it for the Year of 2019!
