Unix Tutorial Digest – March 31st, 2019




tmux Sessions Survive Desktop Logouts

tmux-session-running-date.png

Just realised another cool reason for using tmux even for local sessions on my Dell XPS laptop with Linux Mint 19: tmux sessions will survive desktop logouts! I started a basic loop reporting the current timestamp every second, and it hasn’t been interrupted at all even though I logged out from GNOME and then got back in.
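
In case you want to repeat the experiment, these are the standard tmux commands involved (the session name is arbitrary):

tmux new -s clock                    # start a named session
while true; do date; sleep 1; done   # the timestamp loop, inside tmux
# log out of the desktop session, log back in, then:
tmux attach -t clock                 # the loop is still going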

I’m still tweaking the Linux Mint desktop and need to log out from time to time. I mostly worked on local projects in the past few days and had Terminal tabs, instead of tmux, managing my sessions. One of the times I logged out, I lost output in about 10 separate tabs showing various stages of progress on the same mini project. Not cool!

Turns out, tmux survives a graphical environment logout – which means I’m not going to lose anything unless I reboot the laptop or something like that. This is one of the fundamental features of tmux, but somehow I’ve been forgetting to use it for local laptop work.

See Also




How To Install Jekyll in Linux Mint 19

jekyll-serve-glebreys-unixtutorial.jpg

I’m still fascinated with the Jekyll approach to website management, and am working on converting one of my blogs (not Unix Tutorial just yet!) to a Jekyll website. This short post shows how to install Jekyll on a Linux Mint system.

Install Ruby and Bundler

First things first: you need to install Ruby:

greys@xps:~/proj$ sudo apt install ruby
Reading package lists... Done
Building dependency tree 
Reading state information... Done
The following package was automatically installed and is no longer required:
libssh-4
Use 'sudo apt autoremove' to remove it.
The following additional packages will be installed:
libruby2.5 rake ruby-test-unit ruby2.5
Suggested packages:
ri ruby-dev
Recommended packages:
fonts-lato libjs-jquery
The following NEW packages will be installed:
libruby2.5 rake ruby ruby-test-unit ruby2.5
0 upgraded, 5 newly installed, 0 to remove and 317 not upgraded.
Need to get 3,227 kB of archives.
After this operation, 14.8 MB of additional disk space will be used.
Do you want to continue? [Y/n] y
Get:1 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 ruby2.5 amd64 2.5.1-1ubuntu1.1 [48.6 kB]
Get:2 http://archive.ubuntu.com/ubuntu bionic/main amd64 ruby amd64 1:2.5.1 [5,712 B]
Get:3 http://archive.ubuntu.com/ubuntu bionic/main amd64 rake all 12.3.1-1 [45.1 kB]
Get:4 http://archive.ubuntu.com/ubuntu bionic/main amd64 ruby-test-unit all 3.2.5-1 [61.1 kB]
Get:5 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 libruby2.5 amd64 2.5.1-1ubuntu1.1 [3,066 kB]
Fetched 3,227 kB in 1s (3,576 kB/s) 
Selecting previously unselected package ruby2.5.
(Reading database ... 272296 files and directories currently installed.)
Preparing to unpack .../ruby2.5_2.5.1-1ubuntu1.1_amd64.deb ...
Unpacking ruby2.5 (2.5.1-1ubuntu1.1) ...
Selecting previously unselected package ruby.
Preparing to unpack .../ruby_1%3a2.5.1_amd64.deb ...
Unpacking ruby (1:2.5.1) ...
Selecting previously unselected package rake.
Preparing to unpack .../archives/rake_12.3.1-1_all.deb ...
Unpacking rake (12.3.1-1) ...
Selecting previously unselected package ruby-test-unit.
Preparing to unpack .../ruby-test-unit_3.2.5-1_all.deb ...
Unpacking ruby-test-unit (3.2.5-1) ...
Selecting previously unselected package libruby2.5:amd64.
Preparing to unpack .../libruby2.5_2.5.1-1ubuntu1.1_amd64.deb ...
Unpacking libruby2.5:amd64 (2.5.1-1ubuntu1.1) ...
Processing triggers for libc-bin (2.27-3ubuntu1) ...
Processing triggers for man-db (2.8.3-2ubuntu0.1) ...
Setting up rake (12.3.1-1) ...
Setting up ruby2.5 (2.5.1-1ubuntu1.1) ...
Setting up ruby (1:2.5.1) ...
Setting up ruby-test-unit (3.2.5-1) ...
Setting up libruby2.5:amd64 (2.5.1-1ubuntu1.1) ...
Processing triggers for libc-bin (2.27-3ubuntu1) ...

… and now use the gem command to install the bundler gem:

greys@xps:~/proj$ sudo gem install bundler
Successfully installed bundler-2.0.1
Parsing documentation for bundler-2.0.1
Done installing documentation for bundler after 1 seconds
1 gem installed

Install the Jekyll gem

Excellent, we’re really close to getting this working.

Only one small problem: when installing the jekyll gem, there’s an error:

greys@xps:~/proj$ sudo gem install jekyll
Building native extensions. This could take a while...
ERROR:  Error installing jekyll:
ERROR: Failed to build gem native extension.

    current directory: /var/lib/gems/2.5.0/gems/ffi-1.10.0/ext/ffi_c
/usr/bin/ruby2.5 -r ./siteconf20190329-23694-7cpq9s.rb extconf.rb
mkmf.rb can't find header files for ruby at /usr/lib/ruby/include/ruby.h

extconf failed, exit code 1

Gem files will remain installed in /var/lib/gems/2.5.0/gems/ffi-1.10.0 for inspection.

Results logged to /var/lib/gems/2.5.0/extensions/x86_64-linux/2.5.0/ffi-1.10.0/gem_make.out

I first thought I did something wrong, or that I had an old version of Ruby (2.5.0, as you can see). But no: the minimum required version for Jekyll is Ruby 2.1.0, so it should all work.

The hint is in the error message:

mkmf.rb can't find header files for ruby at /usr/lib/ruby/include/ruby.h

This include file is indeed missing, because previously we installed just the Ruby binaries, but not the development packages.

Once we install the ruby-dev package:

greys@xps:~/proj$ sudo apt install ruby-dev
Reading package lists... Done
Building dependency tree 
Reading state information... Done
The following package was automatically installed and is no longer required:
libssh-4
Use 'sudo apt autoremove' to remove it.
The following additional packages will be installed:
ruby2.5-dev
Recommended packages:
ruby2.5-doc
The following NEW packages will be installed:
ruby-dev ruby2.5-dev
0 upgraded, 2 newly installed, 0 to remove and 317 not upgraded.
Need to get 68.3 kB of archives.
After this operation, 351 kB of additional disk space will be used.
Do you want to continue? [Y/n] y
Get:1 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 ruby2.5-dev amd64 2.5.1-1ubuntu1.1 [63.7 kB]
Get:2 http://archive.ubuntu.com/ubuntu bionic/main amd64 ruby-dev amd64 1:2.5.1 [4,604 B]
Fetched 68.3 kB in 0s (198 kB/s) 
Selecting previously unselected package ruby2.5-dev:amd64.
(Reading database ... 273441 files and directories currently installed.)
Preparing to unpack .../ruby2.5-dev_2.5.1-1ubuntu1.1_amd64.deb ...
Unpacking ruby2.5-dev:amd64 (2.5.1-1ubuntu1.1) ...
Selecting previously unselected package ruby-dev:amd64.
Preparing to unpack .../ruby-dev_1%3a2.5.1_amd64.deb ...
Unpacking ruby-dev:amd64 (1:2.5.1) ...
Setting up ruby2.5-dev:amd64 (2.5.1-1ubuntu1.1) ...
Setting up ruby-dev:amd64 (1:2.5.1) ...

The header files for Ruby should appear now (although in a slightly different path):

/usr/include/ruby-2.5.0/ruby/ruby.h
/usr/include/ruby-2.5.0/ruby.h

… and the Jekyll gem install will complete:

greys@xps:~/proj$ sudo gem install jekyll
Building native extensions. This could take a while...
Successfully installed ffi-1.10.0
Fetching: rb-inotify-0.10.0.gem (100%)
Successfully installed rb-inotify-0.10.0
Fetching: rb-fsevent-0.10.3.gem (100%)
Successfully installed rb-fsevent-0.10.3
Fetching: listen-3.1.5.gem (100%)
Successfully installed listen-3.1.5
Fetching: jekyll-watch-2.2.1.gem (100%)
Successfully installed jekyll-watch-2.2.1
Fetching: sass-listen-4.0.0.gem (100%)
Successfully installed sass-listen-4.0.0
Fetching: sass-3.7.3.gem (100%)

Ruby Sass is deprecated and will be unmaintained as of 26 March 2019.

* If you use Sass as a command-line tool, we recommend using Dart Sass, the new
primary implementation: https://sass-lang.com/install

* If you use Sass as a plug-in for a Ruby web framework, we recommend using the
sassc gem: https://github.com/sass/sassc-ruby#readme

* For more details, please refer to the Sass blog:
http://sass.logdown.com/posts/7081811

Successfully installed sass-3.7.3
Fetching: jekyll-sass-converter-1.5.2.gem (100%)
Successfully installed jekyll-sass-converter-1.5.2
Fetching: concurrent-ruby-1.1.5.gem (100%)
Successfully installed concurrent-ruby-1.1.5
Fetching: i18n-0.9.5.gem (100%)
Successfully installed i18n-0.9.5
Fetching: http_parser.rb-0.6.0.gem (100%)
Building native extensions. This could take a while...
Successfully installed http_parser.rb-0.6.0
Fetching: eventmachine-1.2.7.gem (100%)
Building native extensions. This could take a while...
Successfully installed eventmachine-1.2.7
Fetching: em-websocket-0.5.1.gem (100%)
Successfully installed em-websocket-0.5.1
Fetching: colorator-1.1.0.gem (100%)
Successfully installed colorator-1.1.0
Fetching: public_suffix-3.0.3.gem (100%)
Successfully installed public_suffix-3.0.3
Fetching: addressable-2.6.0.gem (100%)
Successfully installed addressable-2.6.0
Fetching: jekyll-3.8.5.gem (100%)
Successfully installed jekyll-3.8.5
Parsing documentation for ffi-1.10.0
Installing ri documentation for ffi-1.10.0
Parsing documentation for rb-inotify-0.10.0
Installing ri documentation for rb-inotify-0.10.0
Parsing documentation for rb-fsevent-0.10.3
Installing ri documentation for rb-fsevent-0.10.3
Parsing documentation for listen-3.1.5
Installing ri documentation for listen-3.1.5
Parsing documentation for jekyll-watch-2.2.1
Installing ri documentation for jekyll-watch-2.2.1
Parsing documentation for sass-listen-4.0.0
Installing ri documentation for sass-listen-4.0.0
Parsing documentation for sass-3.7.3
Installing ri documentation for sass-3.7.3
Parsing documentation for jekyll-sass-converter-1.5.2
Installing ri documentation for jekyll-sass-converter-1.5.2
Parsing documentation for concurrent-ruby-1.1.5
Installing ri documentation for concurrent-ruby-1.1.5
Parsing documentation for i18n-0.9.5
Installing ri documentation for i18n-0.9.5
Parsing documentation for http_parser.rb-0.6.0
Installing ri documentation for http_parser.rb-0.6.0
Parsing documentation for eventmachine-1.2.7
Installing ri documentation for eventmachine-1.2.7
Parsing documentation for em-websocket-0.5.1
Installing ri documentation for em-websocket-0.5.1
Parsing documentation for colorator-1.1.0
Installing ri documentation for colorator-1.1.0
Parsing documentation for public_suffix-3.0.3
Installing ri documentation for public_suffix-3.0.3
Parsing documentation for addressable-2.6.0
Installing ri documentation for addressable-2.6.0
Parsing documentation for jekyll-3.8.5
Installing ri documentation for jekyll-3.8.5
Done installing documentation for ffi, rb-inotify, rb-fsevent, listen, jekyll-watch, sass-listen, sass, jekyll-sass-converter, concurrent-ruby, i18n, http_parser.rb, eventmachine, em-websocket, colorator, public_suffix, addressable, jekyll after 26 seconds
17 gems installed

Great stuff! Now we can serve my project and browse to the Jekyll page (I’m skipping the Jekyll steps for starting a new website, because they’re described in my Jekyll website with GitHub Pages article):

greys@xps:~/proj/glebreys.com$ bundle exec jekyll serve
Configuration file: /home/greys/proj/glebreys.com/_config.yml
Source: /home/greys/proj/glebreys.com
Destination: /home/greys/proj/glebreys.com/_site
Incremental build: disabled. Enable with --incremental
Generating... 
Jekyll Feed: Generating feed for posts
done in 0.614 seconds.
Auto-regeneration: enabled for '/home/greys/proj/glebreys.com'
Server address: http://127.0.0.1:4000/
Server running... press ctrl-c to stop.
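
One note: bundle exec expects a Gemfile in the project directory. If your site doesn’t have one yet, a minimal Gemfile (just an illustration; pin versions to your liking) looks like this:

source 'https://rubygems.org'
gem 'jekyll'

Run bundle install once after creating it, and bundle exec jekyll serve will pick it up from then on.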

That’s it for today!

See Also




Remove Virtual Machine in KVM

linux kvm unixtutorial

I’ve been tidying up some of my dedicated servers and needed to remove a few of the VMs in a KVM setup. This post shows you how to use the virsh command to do just that.

List virtual machines using virsh

As you can see, there are quite a few VMs not running and possibly pending decommission:

root@s2:/ # virsh list --all

Id Name State
----------------------------------------------------
1 m running
2 dbm1 running
3 v15 running
- centos7 shut off
- elk shut off
- infra shut off
- jira shut off
- v10.ts.im shut off
- v9.ts.im shut off

List VM storage using virsh

The centos7 VM was definitely there for some quick test, so it should be safe to remove.

Let’s confirm the virtual disk files it has:

root@s2:/ # virsh dumpxml --domain centos7 | grep source
<source file='/var/lib/libvirt/images/rhel7.0-3.qcow2'/>
<source bridge='vbr1'/>
<source bridge='vbr0'/>

The virtual disk is a reasonably large file:

root@s2:/var/lib/docker/containers # ls -lad /var/lib/libvirt/images/rhel7.0-3.qcow2
-rw------- 1 root root 17182752768 Apr 11 2018 /var/lib/libvirt/images/rhel7.0-3.qcow2
root@s2:/var/lib/docker/containers # du -sh /var/lib/libvirt/images/rhel7.0-3.qcow2
17G /var/lib/libvirt/images/rhel7.0-3.qcow2

Remove KVM virtual machine with storage files

Time to remove our virtual machine and its virtual storage:

root@s2:/var/lib/docker/containers # virsh undefine centos7 --remove-all-storage
Domain centos7 has been undefined
Volume 'vda'(/var/lib/libvirt/images/rhel7.0-3.qcow2) removed.
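
One caveat: the centos7 VM above was already shut off. If you need to remove a VM that’s still running, power it off first; a quick sketch:

root@s2:/ # virsh destroy centos7
root@s2:/ # virsh undefine centos7 --remove-all-storage

(virsh destroy is an immediate power-off; virsh shutdown is the graceful alternative.)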

That’s it for today!

See Also




How To Confirm Solaris 11 version

oracle-solaris-11.jpg

I’ve finally gotten the time to work on another Unix Tutorial project – Install Solaris 11 in a VirtualBox VM. I’ll publish step-by-step instructions next weekend, so for now it’s just a quick post about a topic long overdue: confirming the Solaris 11 version.

Use pkg Command to Confirm Solaris 11 Version

One of the most recent, but also the most recommended, ways to confirm the Solaris 11 release version is to use the pkg command. Specifically, we use it to inspect the “entire” package, which is a virtual package made for indicating and enforcing a Solaris 11 release:

greys@solaris11:~$ pkg info entire
Name: entire
Summary: Incorporation to lock all system packages to the same build
Description: This package constrains system package versions to the same
build. WARNING: Proper system update and correct package
selection depend on the presence of this incorporation.
Removing this package will result in an unsupported system.
Category: Meta Packages/Incorporations
State: Installed
Publisher: solaris
Version: 11.4 (Oracle Solaris 11.4.0.0.1.15.0)
Branch: 11.4.0.0.1.15.0
Packaging Date: 17 August 2018 at 00:42:03
Size: 2.53 kB
FMRI: pkg://solaris/[email protected]:20180817T004203Z

As you can see from the output, my brand new Solaris 11 VM is sporting the Solaris 11.4 release.
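
If you only need the version line itself, grep does the trick:

greys@solaris11:~$ pkg info entire | grep Version
Version: 11.4 (Oracle Solaris 11.4.0.0.1.15.0)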

Use /etc/release to Confirm Solaris 11 Version

This is the more traditional way, one that’s worked since at least Solaris 8. Simply inspect the /etc/release file and it should indicate both the Solaris release and the platform it’s running on – in my case it’s Solaris 11.4 on x86:

greys@solaris11:~$ cat /etc/release
Oracle Solaris 11.4 X86
Copyright (c) 1983, 2018, Oracle and/or its affiliates. All rights reserved.
Assembled 16 August 2018

Use uname Command to Confirm Solaris 11 Version

Another fairly traditional approach is to use the uname command. As you can see below, it will show you the OS release (5.11) and the release version (11.4.0.15.0):

greys@solaris11:~$ uname -a
SunOS solaris11 5.11 11.4.0.15.0 i86pc i386 i86pc

See Also




Erasing disks with dd

Using the dd command to quickly copy something, or to generate a file full of random or zero bytes, is a really old trick. And almost every sysadmin knows how to use this command to erase disks:

dd if=/dev/zero of=/dev/sdX

But while this worked really well in the past, the really large capacity of today’s hard disks means you need a trick or two if you want to track the progress of running dd against a 2TB or 3TB disk.

As part of upgrading the NAS server in my home office, I wanted to re-sell some of the older disks, which meant even disks with encrypted filesystems needed to be erased.

DISCLAIMER

Use this article and its examples at your own risk. It’s very easy to confuse disk names and accidentally erase the wrong disk (including the boot disk, or the only disk you have on your server), so double-check disk device names, their sizes and mounted filesystems before using root superpowers to run any of the commands shown.

Please do your own research, as I will accept no responsibility for any data corruption or data loss caused by using the dd examples below.

General approach to erasing disks with dd

Best results are usually achieved when you zero the disk – write zeros to all the data blocks available on the storage device.

IMPORTANT: Test your approach and learn the dd command basics on a regular file in your home directory, as a regular user. For me, that can be /home/greys/file1:

mint ~ $ dd if=/dev/zero of=/home/greys/file1 bs=64M count=10
10+0 records in
10+0 records out
671088640 bytes (671 MB) copied, 0.484989 s, 1.4 GB/s

Once you are comfortable enough with dd, you can identify which disk you need and then erase it as shown below.

Here’s how one would erase the /dev/sdf disk:

mint ~ # dd if=/dev/zero of=/dev/sdf bs=64M
188+0 records in
188+0 records out
12616466432 bytes (13 GB) copied, 81.2798 s, 155 MB/s
199+0 records in
199+0 records out
13354663936 bytes (13 GB) copied, 86.0636 s, 155 MB/s
211+0 records in
211+0 records out
14159970304 bytes (14 GB) copied, 91.2245 s, 155 MB/s
553+0 records in
553+0 records out
37111201792 bytes (37 GB) copied, 239.566 s, 155 MB/s
1212+0 records in
1212+0 records out
81335943168 bytes (81 GB) copied, 525.101 s, 155 MB/s
2201+0 records in
2201+0 records out
147706609664 bytes (148 GB) copied, 958.028 s, 154 MB/s
3537+0 records in
3537+0 records out
237364051968 bytes (237 GB) copied, 1546.11 s, 154 MB/s
5219+0 records in
5219+0 records out
350241161216 bytes (350 GB) copied, 2296.43 s, 153 MB/s
5228+0 records in
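
Those repeated blocks of records in / records out stats in the output above are what GNU dd reports when it receives the USR1 signal; that’s the classic way to request a progress snapshot from another terminal (assuming only one dd process is running):

mint ~ $ sudo kill -USR1 $(pgrep -x dd)

Recent GNU coreutils (8.24 and newer) can also report progress natively via dd’s status=progress option, e.g. dd if=/dev/zero of=/dev/sdf bs=64M status=progress.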

But for even larger disks, it’s probably best to use the pv command to show progress.

First, install the pv command (shown here for RHEL-based systems; on a Debian-based distro like Linux Mint it would be sudo apt install pv):

$ sudo yum install pv

Now, let’s split the dd command in two using a Unix pipe. What we do is use one of the default behaviours of most commands in Unix: if there’s no parameter specifying the input file, the command reads the standard input stream. Likewise, if there’s no output file specified, the standard output stream is used.

A Unix pipe, as you may know already, takes the standard output from one command and forwards it (pipes it into) the standard input of the next command.

Thus our initial example of:

mint ~ $ dd if=/dev/zero of=/home/greys/file1 bs=64M count=10

can be rewritten with pipe and two dd commands like this:

 mint ~ $ dd if=/dev/zero | dd of=/home/greys/file1 bs=64M count=10
dd: warning: partial read (66048 bytes); suggest iflag=fullblock
0+10 records in
0+10 records out
101376 bytes (101 kB) copied, 0.00823315 s, 12.3 MB/s

Excellent! Now, we simply insert pv between the two dd commands. This is how the command looks:

mint ~ $ dd if=/dev/zero | pv | dd of=/home/greys/file1 bs=64M count=10
dd: warning: partial read (2048 bytes); suggest iflag=fullblock
35.5kiB 0:00:00 [79.9MiB/s] [ <=> ]
0+10 records in
0+10 records out
36352 bytes (36 kB) copied, 0.00952251 s, 3.8 MB/s

If you compare the outputs, pv provides the transfer throughput in megabytes per second, plus a progress indicator (<=>) that moves simply to confirm that the data transfer is still in progress.

Erase disk with dd and pv commands

Armed with the dd and pv knowledge, let’s start erasing the /dev/sdf disk.

VERY IMPORTANT: please pause and double-check that you know which disk you want to erase. I’ve specifically given you an example of the /dev/sdf disk, which won’t even exist unless you have a multi-disk system, but please check again and again so that you don’t wipe the wrong disk.

mint ~ # dd if=/dev/zero | pv | dd of=/dev/sdf bs=64M
0+335852 records in6MB/s] [ <=> ]
0+335851 records out
5909268992 bytes (5.9 GB) copied, 134.987 s, 43.8 MB/s
0+343712 records in7MB/s] [ <=> ]
0+343712 records out
6043738624 bytes (6.0 GB) copied, 137.992 s, 43.8 MB/s
0+345803 records in5MB/s] [ <=> ]
0+345803 records out

Now, a really secure way to erase a disk would be to fill it with random numbers, but using /dev/urandom (or the even slower /dev/random) instead of /dev/zero will take so much longer that it’s not going to be feasible for securely erasing 1TB disks this way.

So the next best thing is to overwrite each data block on the disk at least twice, preferably with different data. Let’s assume we let the previous command complete and got the disk filled with zeros.

This little trick below uses the tr command to translate zeros into 1s. Specifically, this changes all the bits that were 0 into 1s, so you’ll end up writing a lot of characters with hex code $FF instead of $00 onto the target device. \377 is the octal value for the hex number $FF (the tr command doesn’t take hex values as parameters).

Our final command is this:

mint ~ # tr '\0' '\377' < /dev/zero | dd bs=64M of=/dev/sdf
0+93225 records in
0+93224 records out
768778240 bytes (769 MB) copied, 8.70987 s, 88.3 MB/s
0+111582 records in
0+111581 records out
919052288 bytes (919 MB) copied, 10.5316 s, 87.3 MB/s
0+120755 records in
0+120754 records out
994623488 bytes (995 MB) copied, 11.4484 s, 86.9 MB/s
0+126488 records in
0+126487 records out
1042391040 bytes (1.0 GB) copied, 12.0236 s, 86.7 MB/s
0+9158660 records in
0+9158659 records out
75524096000 bytes (76 GB) copied, 883.944 s, 85.4 MB/s
0+33863403 records in
0+33863403 records out
280563752960 bytes (281 GB) copied, 3354.46 s, 83.6 MB/s
0+35904256 records in
0+35904256 records out
297578594304 bytes (298 GB) copied, 3576.74 s, 83.2 MB/s

As you can see, the throughput is almost halved because of the translation, but the result is that all the bits on the disk will definitely be overwritten now.
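
Worth mentioning: the shred utility from GNU coreutils automates exactly this kind of multi-pass overwrite, so you don’t have to chain dd and tr by hand. A sketch, with the same check-your-device-name caveat:

mint ~ # shred -v -n 2 -z /dev/sdf

This writes two passes of random data followed by a final pass of zeros, printing progress as it goes.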

See Also




Does Docker Need Hardware Virtualization?

docker-containers-unixtutorial

This is a quick post to explain that, by default, Docker does not need hardware virtualization (VT-x).

Is Docker a Virtualization?

In the sense of allowing you to run multiple independent environments on the same physical host, yes. Docker containers allow you to run processes in isolation from each other and from the base OS – you decide and specify whether you want the base system to share any resources (IP addresses, TCP ports, directories with files) with any of the containers.

The key difference from KVM or VMware virtualization is that Docker does not use hardware virtualization. Instead, it leverages Linux kernel functionality: namespaces and control groups.

Linux namespaces are provided and supported by the Linux kernel to allow separation (virtualization) of the process ID space (PID numbers), network interfaces, interprocess communication (IPC), mount points and kernel information.

Control groups in Linux allow accurate resource control: using control groups allows Docker to limit CPU or memory usage for each container.
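
These limits surface directly as docker run flags; a quick illustration (the image and the values are arbitrary):

docker run -d --cpus 1.5 --memory 512m nginx

Here the container gets at most one and a half CPU cores and 512MB of RAM, enforced by control groups rather than a hypervisor.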

Does Docker use Hardware Virtualization?

The short answer is: no. Docker needs a 64-bit Linux OS running a modern enough kernel to operate properly. Which means that if you have such a system happily running on your hardware, even without hardware virtualization support, it will be plenty for Docker.

Now, this gets a bit tricky when you’re talking about Docker on Windows or macOS. They don’t have a native Linux environment, so they have to run a Linux virtual machine that hosts the Docker engine. You then typically have command-line tools installed in your base OS (Windows or macOS) that allow seamless management of the Docker containers in the Docker VM.

Does Your CPU Support Hardware Virtualization?

You can grep the special /proc/cpuinfo file for a quick answer:

  • if it contains vmx – you have an Intel CPU and it supports HW virtualization
  • if it contains svm – you have an AMD CPU and it supports HW virtualization
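
For example, a one-liner like this prints the relevant flag once per CPU thread (no output means no hardware virtualization support):

grep -E 'vmx|svm' /proc/cpuinfo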

Here’s how this looks on my XPS laptop:

intel-proc-cpuinfo.png

See Also




Get AWS Instance Info with ec2-metadata

aws-amazon-web-services-logo.png

If you are just starting with AWS, you may not know that Amazon Linux images have a special command for confirming the most useful information about an EC2 instance: ec2-metadata.

Instance info with ec2-metadata

If you just run the command without options, you’ll get to see all the available meta info.

Here’s an example from the vps2.unixtutorial.org instance (give me a shout if you want access to it for learning Linux!):

[ec2-user@ip-10-10-0-245 ~]$ ec2-metadata
ami-id: ami-466768ac
ami-launch-index: 0
ami-manifest-path: (unknown)
ancestor-ami-ids: not available
block-device-mapping:
ami: /dev/xvda
root: /dev/xvda
instance-id: i-0fe057ab7e5ff2XYZ
instance-type: t2.nano
local-hostname: ip-10-10-0-245.eu-west-1.compute.internal
local-ipv4: 10.10.0.245
kernel-id: not available
placement: eu-west-1b
product-codes: not available
public-hostname:
public-ipv4: 52.209.10.113
public-keys:
keyname:unixtutorial
index:0
format:openssh-key
key:(begins from next line)
ssh-rsa //P6MR9UiNDVDOi7vZ7Um/O+nJwtfVNnqiPfzRFqvm11yIo1WmiyvC3Ilhcowqfd2WJBSb3gwuJVxZk9paBp+CvVU2i99OJO+ss10656g3hBgS2xlMatPWyM/Ab6uZOO0X6NbTL/kSbThsnSyZadue36Qt1pcPWoIp0cV unixtutorial
ramdisk-id: not available
reservation-id: r-0b1d0abebXYZ
security-groups: launch-wizard-15
user-data: not available

Command line options for ec2-metadata

There are also lots of command line options to show individual fields from this output.

Confirm AMI image ID with ec2-metadata

[root@ip-10-10-0-245 ~]# ec2-metadata -a
ami-id: ami-466768ac

Confirm Current EC2 Instance ID with ec2-metadata

[root@ip-10-10-0-245 ~]# ec2-metadata -i
instance-id: i-0fe057ab7e5ff2f1d

Confirm Internal IP addresses with ec2-metadata

[root@ip-10-10-0-245 ~]# ec2-metadata -o
local-ipv4: 10.10.0.245

Confirm external IP address with ec2-metadata

This will only work if you actually have an external IPv4 address mapped to your instance.

[root@ip-10-10-0-245 ~]# ec2-metadata -v
public-ipv4: 52.209.10.113
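
Under the hood, ec2-metadata is a convenience wrapper around the EC2 instance metadata service, so you can also fetch individual fields over HTTP with curl:

[root@ip-10-10-0-245 ~]# curl -s http://169.254.169.254/latest/meta-data/instance-id
i-0fe057ab7e5ff2f1d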

Are you interested in more AWS-related posts and Unix tutorials? Please let me know!

See Also




Unix Tutorial Projects: Compiling Brave browser on Linux Mint

brave-logotype-full-color

Some of you may have noticed: I added the link to Brave browser to the sidebar here on Unix Tutorial. That’s because I’m giving this new browser a try, and I support its vision of rewarding content producers via Brave’s Basic Attention Token cryptocurrency. If you aren’t using Brave already, download and try it using my link.

In this Unix Tutorial Project, just because it seems fun and educational enough, I’ll attempt compiling Brave browser on my Dell XPS 13 laptop running Linux Mint 19. There’s a much easier way to install Brave browser from the official repositories: official instructions here.

Make sure you have enough disk space

This project surprised me a bit. I had 20GB of space and thought it would be enough! Then I saw that the git download alone would be almost 15GB, but I still hoped I had enough.

I was wrong! I ended up resizing the Windows 10 partition on my laptop to free up space for another 100GB Linux filesystem.

The final space consumption is 67GB. That’s a lot of source code, plus an impressive number of object files (32 thousand of them!), the intermediary binary files generated when compiling a large project that are linked together to make up the final binary:

root@xps:/storage/proj# du -sh brave-browser
67G brave-browser

Prepare Linux Mint 19 for Compiling Brave Browser

Following instructions from https://github.com/brave/brave-browser/wiki/Linux-Development-Environment, I first installed the packages:

greys@xps:~$ sudo apt-get install build-essential libgnome-keyring-dev python-setuptools npm
[sudo] password for greys: 
Reading package lists... Done
Building dependency tree 
Reading state information... Done
build-essential is already the newest version (12.4ubuntu1).
The following package was automatically installed and is no longer required:
libssh-4
Use 'sudo apt autoremove' to remove it.
The following additional packages will be installed:
gir1.2-gnomekeyring-1.0 gyp libc-ares2 libgnome-keyring-common libgnome-keyring0 libhttp-parser2.7.1 libjs-async libjs-inherits libjs-node-uuid libjs-underscore
libssl1.0-dev libssl1.0.0 libuv1-dev node-abbrev node-ansi node-ansi-color-table node-archy node-async node-balanced-match node-block-stream node-brace-expansion
node-builtin-modules node-combined-stream node-concat-map node-cookie-jar node-delayed-stream node-forever-agent node-form-data node-fs.realpath node-fstream
node-fstream-ignore node-github-url-from-git node-glob node-graceful-fs node-gyp node-hosted-git-info node-inflight node-inherits node-ini node-is-builtin-module node-isexe
node-json-stringify-safe node-lockfile node-lru-cache node-mime node-minimatch node-mkdirp node-mute-stream node-node-uuid node-nopt node-normalize-package-data node-npmlog
node-once node-osenv node-path-is-absolute node-pseudomap node-qs node-read node-read-package-json node-request node-retry node-rimraf node-semver node-sha node-slide
node-spdx-correct node-spdx-expression-parse node-spdx-license-ids node-tar node-tunnel-agent node-underscore node-validate-npm-package-license node-which node-wrappy
node-yallist nodejs nodejs-dev python-pkg-resources
Suggested packages:
node-hawk node-aws-sign node-oauth-sign node-http-signature debhelper python-setuptools-doc
Recommended packages:
javascript-common libjs-jquery nodejs-doc
The following packages will be REMOVED:
libssh-dev libssl-dev
The following NEW packages will be installed:
gir1.2-gnomekeyring-1.0 gyp libc-ares2 libgnome-keyring-common libgnome-keyring-dev libgnome-keyring0 libhttp-parser2.7.1 libjs-async libjs-inherits libjs-node-uuid
libjs-underscore libssl1.0-dev libuv1-dev node-abbrev node-ansi node-ansi-color-table node-archy node-async node-balanced-match node-block-stream node-brace-expansion
node-builtin-modules node-combined-stream node-concat-map node-cookie-jar node-delayed-stream node-forever-agent node-form-data node-fs.realpath node-fstream
node-fstream-ignore node-github-url-from-git node-glob node-graceful-fs node-gyp node-hosted-git-info node-inflight node-inherits node-ini node-is-builtin-module node-isexe
node-json-stringify-safe node-lockfile node-lru-cache node-mime node-minimatch node-mkdirp node-mute-stream node-node-uuid node-nopt node-normalize-package-data node-npmlog
node-once node-osenv node-path-is-absolute node-pseudomap node-qs node-read node-read-package-json node-request node-retry node-rimraf node-semver node-sha node-slide
node-spdx-correct node-spdx-expression-parse node-spdx-license-ids node-tar node-tunnel-agent node-underscore node-validate-npm-package-license node-which node-wrappy
node-yallist nodejs nodejs-dev npm python-pkg-resources python-setuptools
The following packages will be upgraded:
libssl1.0.0
1 upgraded, 80 newly installed, 2 to remove and 286 not upgraded.
Need to get 10.7 MB of archives.
After this operation, 37.7 MB of additional disk space will be used.
Do you want to continue? [Y/n] y
Get:1 http://archive.ubuntu.com/ubuntu bionic/universe amd64 libgnome-keyring-common all 3.12.0-1build1 [5,792 B]
Get:2 http://archive.ubuntu.com/ubuntu bionic/universe amd64 libgnome-keyring0 amd64 3.12.0-1build1 [56.1 kB]
...
Get:81 http://archive.ubuntu.com/ubuntu bionic/universe amd64 npm all 3.5.2-0ubuntu4 [1,586 kB]
Fetched 10.7 MB in 2s (6,278 kB/s)
Extracting templates from packages: 100%
Preconfiguring packages ...
(Reading database ... 267928 files and directories currently installed.)
...

You should end up with a whole bunch of npm (node-*) packages installed.

You need to install the gperf package as well – npm run build (the last step below) failed for me because gperf wasn’t found.

greys@xps:~$ sudo apt-get install gperf

Clone Brave Browser git Repo

We’re now ready to clone the repo:

greys@xps:~/proj$ git clone [email protected]:brave/brave-browser.git
Cloning into 'brave-browser'...
Enter passphrase for key '/home/greys/.ssh/id_rsa': 
remote: Enumerating objects: 43, done.
remote: Counting objects: 100% (43/43), done.
remote: Compressing objects: 100% (36/36), done.
remote: Total 6466 (delta 27), reused 17 (delta 7), pack-reused 6423
Receiving objects: 100% (6466/6466), 1.28 MiB | 833.00 KiB/s, done.
Resolving deltas: 100% (4425/4425), done.

and then do npm install. This is how it should look:

git-clone-brave-browser.png

Download Chromium source code using npm

The npm run init command will download the source code of the Chromium browser (the open-source project that Chrome is built on), which Brave is based on. This should take a while – on my 100Mbit connection it took 25 minutes to download 13.5GB (that’s compressed, mind you!) of Chromium’s source code, and then another 25 minutes to download the rest of the dependencies:

greys@xps:~/proj/brave-browser$ npm run init

> [email protected] init /home/greys/proj/brave-browser
> node ./scripts/sync.js --init

git submodule sync
git submodule update --init --recursive
Submodule 'vendor/depot_tools' (https://chromium.googlesource.com/chromium/tools/depot_tools.git) registered for path 'vendor/depot_tools'
Submodule 'vendor/jinja' (git://github.com/pallets/jinja.git) registered for path 'vendor/jinja'
Cloning into '/home/greys/proj/brave-browser/vendor/depot_tools'...
Cloning into '/home/greys/proj/brave-browser/vendor/jinja'...
Submodule path 'vendor/depot_tools': checked out 'eb2767b2eb245bb54b1738ebb7bf4655ba390b44'
Submodule path 'vendor/jinja': checked out '209fd39b2750400d51bf571740fe5ba23008c20e'
git -C /home/greys/proj/brave-browser/vendor/depot_tools clean -fxd
git -C /home/greys/proj/brave-browser/vendor/depot_tools reset --hard HEAD
HEAD is now at eb2767b2 Roll recipe dependencies (trivial).
gclient sync --force --nohooks --with_branch_heads --with_tags --upstream
WARNING: Your metrics.cfg file was invalid or nonexistent. A new one will be created.

________ running 'git -c core.deltaBaseCacheLimit=2g clone --no-checkout --progress https://chromium.googlesource.com/chromium/src.git /home/greys/proj/brave-browser/_gclient_src_JunGAS' in '/home/greys/proj/brave-browser'
Cloning into '/home/greys/proj/brave-browser/_gclient_src_JunGAS'...
remote: Sending approximately 14.36 GiB ... 
remote: Counting objects: 161914, done 
remote: Finding sources: 100% (949/949) 
Receiving objects: 3% (362855/12095159), 163.33 MiB | 10.38 MiB/s 
[0:01:00] Still working on:
[0:01:00] src
Receiving objects: 5% (632347/12095159), 267.23 MiB | 9.94 MiB/s 
[0:01:10] Still working on:
[0:01:10] src
...
├─┬ [email protected] 
│ ├── [email protected] 
│ ├─┬ [email protected] 
│ │ └── [email protected] 
│ ├── [email protected] 
│ ├── [email protected] 
│ ├── [email protected] 
│ ├─┬ [email protected] 
│ │ ├─┬ [email protected] 
│ │ │ └── [email protected] 
│ │ └─┬ [email protected] 
│ │   ├─┬ [email protected] 
│ │   │ ├── [email protected] 
│ │   │ └─┬ [email protected] 
│ │   │   └── [email protected] 
│ │   └── [email protected] 
│ └── [email protected] 
└── [email protected] 

npm WARN [email protected] requires a peer of ajv@^5.0.0 but none was installed.
npm run build

> [email protected] build /home/greys/proj/brave-browser/src/brave/components/brave_sync/extension/brave-crypto
> browserify ./index.js -o browser/crypto.js

Hook '/usr/bin/python src/brave/script/build-simple-js-bundle.py --repo_dir_path src/brave/components/brave_sync/extension/brave-crypto' took 27.09 secs
Running hooks: 100% (83/83), done.

Build Brave Browser from Source Code

Here we go! Let’s build this thing. Should take an hour or two on a fast PC:

greys@xps:~/proj/brave-browser$ npm run build Release

This is a release build, meaning it’s a fully performance-optimised, release-grade build of the source code. If you’re going to contribute to the Brave browser open source project, you should know that npm run build (without the Release parameter) will produce a debug build.

This is how the end of the process looks (it took a few hours to compile on the 8-core CPU of my XPS laptop):

brave-browser-fully-compiled.png

Start the Newly built Brave Browser

This is it! Let’s try starting the browser; this should complete our Unix Tutorial project for today:
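
If I remember the project’s docs correctly, the freshly built browser can be launched with another npm script; treat this as an assumption and check the brave-browser wiki if it doesn’t work for you:

greys@xps:~/proj/brave-browser$ npm start Release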

brave-browser-you-are-not-a-product.png

And the About page, just for the history books:

brave-browser-about.png

That’s it for today!

See Also




Check For Available Updates with YUM

If you’re using CentOS, Fedora or Red Hat Linux, you are probably familiar with the yum package manager. One of the really useful options for yum is checking whether there are any available updates to be installed.

Check For Updates with YUM

If you use the check-update parameter with yum, it will show you the list of any available updates:

root@centos:~ # yum check-update
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
* base: rep-centos-fr.upress.io
* epel: mirror.in2p3.fr
* extras: rep-centos-fr.upress.io
* updates: ftp.pasteur.fr

ansible.noarch 2.7.8-1.el7 epel
datadog-agent.x86_64 1:6.10.1-1 datadog
libgudev1.x86_64 219-62.el7_6.5 updates
nginx.x86_64 1:1.15.9-1.el7_4.ngx nginx
oci-systemd-hook.x86_64 1:0.1.18-3.git8787307.el7_6 extras
polkit.x86_64 0.112-18.el7_6.1 updates
systemd.x86_64 219-62.el7_6.5 updates
systemd-libs.i686 219-62.el7_6.5 updates
systemd-libs.x86_64 219-62.el7_6.5 updates
systemd-python.x86_64 219-62.el7_6.5 updates
systemd-sysv.x86_64 219-62.el7_6.5 updates

Using yum check-update in Shell Scripts

One thing that I didn’t know, and am very happy to discover, is that yum check-update is actually meant for shell scripting: it returns a specific code after running, and you can use that value to decide what to do next.

As usual, a return value of 0 means everything is fully updated, so no updates are available (and no action is needed). A value of 100 means you have updates available (a value of 1 indicates an error).

All we need to do is check the return code variable $? with something like this:

#!/bin/bash

yum check-update

if [ $? -eq 100 ]; then
    echo "You've got updates available!"
else
    echo "Great stuff! No updates pending..."
fi

Here is how running this would look if we saved it as the check-yum-updates.sh script:

root@s2:~ # ./check-yum-updates.sh
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
* base: rep-centos-fr.upress.io
* epel: mirror.in2p3.fr
* extras: rep-centos-fr.upress.io
* updates: ftp.pasteur.fr

ansible.noarch 2.7.8-1.el7 epel
datadog-agent.x86_64 1:6.10.1-1 datadog
libgudev1.x86_64 219-62.el7_6.5 updates
nginx.x86_64 1:1.15.9-1.el7_4.ngx nginx
oci-systemd-hook.x86_64 1:0.1.18-3.git8787307.el7_6 extras
polkit.x86_64 0.112-18.el7_6.1 updates
systemd.x86_64 219-62.el7_6.5 updates
systemd-libs.i686 219-62.el7_6.5 updates
systemd-libs.x86_64 219-62.el7_6.5 updates
systemd-python.x86_64 219-62.el7_6.5 updates
systemd-sysv.x86_64 219-62.el7_6.5 updates
You've got updates available!
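
For unattended use (say, from cron) a quieter variant is handy; this sketch suppresses the package list and only reports the outcome:

#!/bin/bash

# -q keeps yum quiet; we only care about the return code
yum -q check-update > /dev/null 2>&1
rc=$?

if [ $rc -eq 100 ]; then
    echo "Updates available on $(hostname)"
elif [ $rc -ne 0 ]; then
    echo "yum check-update failed with code $rc" >&2
fi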

I’ll revisit this post soon to show you a few more things that can be done with yum check-update functionality.

See Also