Confirm VMware Tools version


If you need to find out the version of VMware Tools running in a remote VM, here is the command line to do it:

greys@ubuntu:~$ vmware-toolbox-cmd --version
10.2.0.1608 (build-7253323)
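
And since the VM is remote, the same check works over SSH without logging in interactively – a quick sketch, assuming the VM is reachable and your user can run the command:

ssh greys@ubuntu vmware-toolbox-cmd --version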





How To: Install VMware Tools in Ubuntu

I’m testing VMware Workstation 15 on my new laptop these days, and I thought it was a great opportunity to finally test and document the procedures for installing and upgrading VMware Tools.

Install VMware Tools for a VM

Kick off the VMware Tools install

Preferably with the VM shut down, select the VMware Tools installation from the VMware Workstation menu. It still works even if the VM is online, as it was in my case.


Log in and mount the virtual CD that has VMware Tools

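With the virtual CD connected to the VM, mounting it looks roughly like this (the device may be /dev/sr0 on your system; /mnt matches the transcript below):

root@ubuntu:~# mount /dev/cdrom /mnt
root@ubuntu:~# ls /mnt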

Unpack the VMware Tools

This will show a lot of files, but I’m just showing you the first few lines of the output:

root@ubuntu:/mnt# cd /tmp
root@ubuntu:/tmp# tar xvf /mnt/VMwareTools-10.3.2-9925305.tar.gz
vmware-tools-distrib/
vmware-tools-distrib/bin/
vmware-tools-distrib/bin/vm-support
vmware-tools-distrib/bin/vmware-config-tools.pl
vmware-tools-distrib/bin/vmware-uninstall-tools.pl
vmware-tools-distrib/vgauth/
vmware-tools-distrib/vgauth/schemas/
vmware-tools-distrib/vgauth/schemas/xmldsig-core-schema.xsd
vmware-tools-distrib/vgauth/schemas/XMLSchema.xsd
...

Run the VMware Tools installer

root@ubuntu:/tmp# cd vmware-tools-distrib/
root@ubuntu:/tmp/vmware-tools-distrib# ls
bin caf doc etc FILES INSTALL installer lib vgauth vmware-install.pl
root@ubuntu:/tmp/vmware-tools-distrib# ./vmware-install.pl
The installer has detected an existing installation of open-vm-tools packages
on this system and will not attempt to remove and replace these user-space
applications. It is recommended to use the open-vm-tools packages provided by
the operating system. If you do not want to use the existing installation of
open-vm-tools packages and use VMware Tools, you must uninstall the
open-vm-tools packages and re-run this installer.
The packages that need to be removed are:
open-vm-tools
Packages must be removed with the --purge option.
The installer will next check if there are any missing kernel drivers. Type yes
if you want to do this, otherwise type no [yes]

INPUT: [yes] default

Creating a new VMware Tools installer database using the tar4 format.

Installing VMware Tools.

In which directory do you want to install the binary files?
[/usr/bin]

INPUT: [/usr/bin] default

What is the directory that contains the init directories (rc0.d/ to rc6.d/)?
[/etc]

INPUT: [/etc] default

What is the directory that contains the init scripts?
[/etc/init.d]

INPUT: [/etc/init.d] default

In which directory do you want to install the daemon files?
[/usr/sbin]

INPUT: [/usr/sbin] default

In which directory do you want to install the library files?
[/usr/lib/vmware-tools]

INPUT: [/usr/lib/vmware-tools] default

The path "/usr/lib/vmware-tools" does not exist currently. This program is
going to create it, including needed parent directories. Is this what you want?
[yes]

INPUT: [yes] default

In which directory do you want to install the documentation files?
[/usr/share/doc/vmware-tools]

INPUT: [/usr/share/doc/vmware-tools] default

The path "/usr/share/doc/vmware-tools" does not exist currently. This program
is going to create it, including needed parent directories. Is this what you
want? [yes]

INPUT: [yes] default

The installation of VMware Tools 10.3.2 build-9925305 for Linux completed
successfully. You can decide to remove this software from your system at any
time by invoking the following command: "/usr/bin/vmware-uninstall-tools.pl".

Before running VMware Tools for the first time, you need to configure it by
invoking the following command: "/usr/bin/vmware-config-tools.pl". Do you want
this program to invoke the command for you now? [yes]

INPUT: [yes] default


You have chosen to install VMware Tools on top of an open-vm-tools package.
You will now be given the option to replace some commands provided by
open-vm-tools. Please note that if you replace any commands at this time and
later remove VMware Tools, it may be necessary to re-install the open-vm-tools.

WARNING: It appears your system is missing the required /usr/bin/vmhgfs-fuse

Initializing...


Making sure services for VMware Tools are stopped.

Stopping VMware Tools services in the virtual machine:
VMware User Agent (vmware-user): done
Unmounting HGFS shares: done
Guest filesystem driver: done


The module vmci has already been installed on this system by another installer
or package and will not be modified by this installer.

The module vsock has already been installed on this system by another installer
or package and will not be modified by this installer.

The module vmxnet3 has already been installed on this system by another
installer or package and will not be modified by this installer.

The module pvscsi has already been installed on this system by another
installer or package and will not be modified by this installer.

The module vmmemctl has already been installed on this system by another
installer or package and will not be modified by this installer.

The VMware Host-Guest Filesystem allows for shared folders between the host OS
and the guest OS in a Fusion or Workstation virtual environment. Do you wish
to enable this feature? [yes]

INPUT: [yes] default

The vmxnet driver is no longer supported on kernels 3.3 and greater. Please
upgrade to a newer virtual NIC. (e.g., vmxnet3 or e1000e)

VMware automatic kernel modules enables automatic building and installation of
VMware kernel modules at boot that are not already present. This feature can
be enabled/disabled by re-running vmware-config-tools.pl.

Would you like to enable VMware automatic kernel modules?
[yes]

INPUT: [yes] default

Creating a new initrd boot image for the kernel.
update-initramfs: Generating /boot/initrd.img-4.4.0-116-generic
The configuration of VMware Tools 10.3.2 build-9925305 for Linux for this
running kernel completed successfully.

Enjoy,

--the VMware team

Found VMware Tools CDROM mounted at /mnt. Ejecting device /dev/sr0 ...
umount: /mnt: target is busy
(In some cases useful info about processes that
use the device is found by lsof(8) or fuser(1).)
/usr/bin/eject: unmount of `/mnt' failed
Eject Failed: If possible manually eject the Tools installer from the guest
cdrom mounted at /mnt before canceling tools install on the host.
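
This error usually just means some process still has /mnt open; a manual cleanup along these lines should work (lsof shows the offending process):

root@ubuntu:/# lsof /mnt
root@ubuntu:/# umount /mnt
root@ubuntu:/# eject /dev/sr0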

Reboot the VM and check VMware Tools kernel modules

Finally, reboot the VM and confirm that the VMware Tools kernel modules are loaded.

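One way to verify from the terminal – a rough check, since exact module names vary between kernels and VMware Tools versions (expect entries like vmw_balloon, vmw_vmci, vmxnet3 or vmmemctl):

greys@ubuntu:~$ lsmod | grep -E "vmw|vmxnet|vmmemctl"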

That’s it – let me know if you want me to answer any other questions!





Ignore SSL errors in VMware PowerCLI


I have a few weeks left on a couple of the dedicated servers I’m no longer using, so I figured it would be a good opportunity to refresh my VMware skills and perhaps learn something new with ESXi 6.5 (one of the best free hypervisors available, fully using hardware virtualization).

Turns out, you can install and use VMware PowerCLI on macOS – pretty cool! As a prerequisite, you need to install PowerShell using Homebrew first. All the commands below are PowerShell ones (you start it by typing pwsh).
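
For reference, the setup looked roughly like this on my machine (Homebrew syntax changes over time, so double-check the current package name):

$ brew cask install powershell
$ pwsh
PS /Users/greys> Install-Module -Name VMware.PowerCLI -Scope CurrentUser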

SSL connection errors in VMware PowerCLI

Because it’s a freshly installed ESXi instance with a self-signed SSL certificate, it will throw a warning when you connect to the server in a browser or via VMware PowerCLI:

PS /Users/greys> Connect-VIServer -Server 62.210.x.y                                                           

Specify Credential
Please specify server credential
User: root
Password for user root: *********

Connect-VIServer : 12/02/2019 23:43:52 Connect-VIServer The SSL connection could not be established, see inner exception.
At line:1 char:1
+ Connect-VIServer -Server 62.210.x.y
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo          : NotSpecified: (:) [Connect-VIServer], ViError
+ FullyQualifiedErrorId : Client20_ConnectivityServiceImpl_Reconnect_SoapException,VMware.VimAutomation.ViCore.Cmdlets.Commands.ConnectVIServer
This is because, by default, PowerCLI doesn’t have a defined approach to invalid SSL certificates.

You can confirm it with the Get-PowerCLIConfiguration command.
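In a fresh install, the InvalidCertificateAction setting in its output shows as Unset – which is what breaks the connection here:

PS /Users/greys> Get-PowerCLIConfiguration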

Ignore Invalid SSL Certificates in VMware PowerCLI

Let’s update the InvalidCertificateAction setting to “Ignore”.

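This is a one-cmdlet change (add -Scope User if you want the setting to persist for your user):

PS /Users/greys> Set-PowerCLIConfiguration -InvalidCertificateAction Ignore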

And now our connection should work just fine:

PS /Users/greys> Connect-VIServer -Server 62.210.x.y

Specify Credential
Please specify server credential
User: root
Password for user root: *********

Name Port User
---- ---- ----
62.210.x.y 443 root

PS /Users/greys>

That’s it! I’ll publish more as I learn, so stay tuned!





Remove Unused Volumes in Docker


You can list unused volumes using the filtering option of the docker volume ls command. Once they’re identified, it’s easy enough to remove such volumes altogether.

List dangling volumes in Docker

Using the dangling filter, we can list all the volumes that aren’t referenced by any container:

root@s5:~ # docker volume ls -q -f "dangling=true"
7b8baf0d804862c1a5daf8b5770b7d21ce7051d429815412e15b5ded95cb1d36
15a2e1552ab9ac37d7ab4f65d168894c3ea7d798000e90d8ab6426f5fecf42f3
81c68768733ae6fce19d256e20859aa6772875448126c7655c8d455c0067fdb3
83f9e08b9e23f4218189fe115b96c88eec59142f9ba9c39f7e7181298a7c9a70
92cf29ef5e3d42ab77d6467aae7a201f2aa04d92f369ffd6b09b84f217398565
307f5debc78882b6b46695c8f7a66298e0f90bdbf5b89833d6096775a1eed843
3013b332c97cff9e344ac48aa1d850075dba83ab68d177e340c4d357834812df
3910ca56fae2a73b546ebd9e8e665bea9770287f5a86a8c217b03e731843e9b7
5577dfc4792afad0bca22cd7d3193bfdf5148d0440e3a4bdec3d8cfeea5cccd0
374764091f95d2a998045b81ad0bc860cddf8474500e4069df275b400c6c43fe
a754d3eda4c754952ffdbd62d311d1ae7db80ff94721a2df56a5092c53bb2e10
a4031051a268838724a1d7ca2cd5306c345074d2893556a7a16c1399c9682085
afa4d8b0cfd547f27246156cd6337f9a38d856ee54e998b501c6e99b62bb190b
bf2c587c94688e358ebf622ce892a77f684242c0bf4d638e2e4937bab29b99b5
c5d5c25dcfc4e001fffc7a4d72e2f9506512f9b2720ad6d1d10c85c47aa1dab8
c83435df3fd742e8095c2933ea4032968db712726bbb8bcbdda433f1f019dd63
cb5732ba63df4dc76c3d1655b02f869517ad3edcc214515ef8f4ad6abd087eed
d2d5479a0c376001951fe446c018e02808211d0cc2a0989a2beb52dd54aa19a1
d8aaf5d10c1034573558009eb6dcdfb65443cdb3f82f40ffa6a3c1d79419e7c8
dcf8989f67dda9a65792d9b2a29083d1e24363cd75cdc3c9ef5c179274416c64
e07e0049de9fac106a82be1cb61fe1734087d97f4c6c553d121991dacf376c5d
e53ee798261fd220db2f023713cfe34ca7ed101165addd63352e3d6090c251f6
f7092dc19a8bb1b5a388d15f9b7ab4f4bf7aa6986ab57f4af3a2a703b737bfed

Excellent! Now let’s remove these volumes.

Remove unused (dangling) volumes in Docker

We’ll use the docker volume ls output as the list of arguments for the docker volume rm command.

As always, we’ll get confirmation about each volume that’s just been removed:

root@s5:~ # docker volume rm $(docker volume ls -q -f "dangling=true")
7b8baf0d804862c1a5daf8b5770b7d21ce7051d429815412e15b5ded95cb1d36
15a2e1552ab9ac37d7ab4f65d168894c3ea7d798000e90d8ab6426f5fecf42f3
81c68768733ae6fce19d256e20859aa6772875448126c7655c8d455c0067fdb3
83f9e08b9e23f4218189fe115b96c88eec59142f9ba9c39f7e7181298a7c9a70
92cf29ef5e3d42ab77d6467aae7a201f2aa04d92f369ffd6b09b84f217398565
307f5debc78882b6b46695c8f7a66298e0f90bdbf5b89833d6096775a1eed843
3013b332c97cff9e344ac48aa1d850075dba83ab68d177e340c4d357834812df
3910ca56fae2a73b546ebd9e8e665bea9770287f5a86a8c217b03e731843e9b7
5577dfc4792afad0bca22cd7d3193bfdf5148d0440e3a4bdec3d8cfeea5cccd0
374764091f95d2a998045b81ad0bc860cddf8474500e4069df275b400c6c43fe
a754d3eda4c754952ffdbd62d311d1ae7db80ff94721a2df56a5092c53bb2e10
a4031051a268838724a1d7ca2cd5306c345074d2893556a7a16c1399c9682085
afa4d8b0cfd547f27246156cd6337f9a38d856ee54e998b501c6e99b62bb190b
bf2c587c94688e358ebf622ce892a77f684242c0bf4d638e2e4937bab29b99b5
c5d5c25dcfc4e001fffc7a4d72e2f9506512f9b2720ad6d1d10c85c47aa1dab8
c83435df3fd742e8095c2933ea4032968db712726bbb8bcbdda433f1f019dd63
cb5732ba63df4dc76c3d1655b02f869517ad3edcc214515ef8f4ad6abd087eed
d2d5479a0c376001951fe446c018e02808211d0cc2a0989a2beb52dd54aa19a1
d8aaf5d10c1034573558009eb6dcdfb65443cdb3f82f40ffa6a3c1d79419e7c8
dcf8989f67dda9a65792d9b2a29083d1e24363cd75cdc3c9ef5c179274416c64
e07e0049de9fac106a82be1cb61fe1734087d97f4c6c553d121991dacf376c5d
e53ee798261fd220db2f023713cfe34ca7ed101165addd63352e3d6090c251f6
f7092dc19a8bb1b5a388d15f9b7ab4f4bf7aa6986ab57f4af3a2a703b737bfed

And just to be sure, let’s re-run the command that lists unused Docker volumes:

root@s5:~ # docker volume ls -q -f "dangling=true"
root@s5:~ #

As you can see, there’s nothing returned now – which means all the volumes were indeed removed.
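
By the way, Docker 1.13 and newer also ship a built-in shortcut for exactly this task – it asks for confirmation and then removes all unused local volumes:

root@s5:~ # docker volume prune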





Migrate Docker container to new server


There are many ways of migrating Docker containers to a different server; today I’ll show you one of the possible approaches.

IMPORTANT: it’s a beginner’s tutorial for copying basic Docker containers (no external dependencies like additional networks or storage volumes).

If you have filesystem volumes attached to your original Docker container, this procedure will not be enough. I’ll publish a more advanced tutorial soon – stay tuned.

This is a simple enough procedure. Steps 1, 2 and 3 should be done on the old server, Steps 4, 5 and 6 should be done on the new server. All you need is root access on both servers and a way to transfer images between the two servers (scp, for instance).

Step 1: Stop Docker container

I’m going to transfer the database container called db (container ID c745794419a9 below):

root@oldserver:/ # docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1b8b1657736e datadog/agent:latest "/init" 9 months ago Up 26 hours (healthy) 8125/udp, 8126/tcp dd-agent
c745794419a9 mariadb:latest "docker-entrypoint.s…" 9 months ago Up 29 minutes 3306/tcp db
32cd3e477546 nginx:latest "nginx -g 'daemon of…" 12 months ago Up 26 hours 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp nginx

Let’s stop the container:

root@oldserver:/ # docker stop db
db

…then make sure it’s down:

root@oldserver:/ # docker ps --all | grep c745794419a9
c745794419a9 mariadb:latest "docker-entrypoint.s…" 9 months ago Exited (0) About an hour ago db

Step 2: Commit Docker container to image

root@oldserver:/ # docker commit c745794419a9
sha256:9d07849ed7c73f8fecd1e5e3e2aedc3592eea6b02f239fa6efba903f1a1ef835
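
Tip: docker commit also accepts an optional repository:tag argument, which is easier to refer to later than the bare SHA (the name here is illustrative):

root@oldserver:/ # docker commit c745794419a9 db-migration:latest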

Step 3: Save Docker image to a file

root@oldserver:/ # docker save 9d07849ed7c73f8fecd1e5e3e2aedc3592eea6b02f239fa6efba903f1a1ef835 > s5-db.tar

Step 4: Transfer Docker image file
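
Any file transfer method will do here; for example, with scp (the destination path is illustrative):

root@oldserver:/ # scp s5-db.tar root@newserver:/root/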

Step 5: Load Docker image from a file

On the new server, we load the image with docker load. Note that it keeps the same image ID:

root@newserver:/ # cat s5-db.tar | docker load
4bcdffd70da2: Loading layer [==================================================>] 129.3MB/129.3MB
ae12d30e1dfc: Loading layer [==================================================>] 345.1kB/345.1kB
7a065b613dee: Loading layer [==================================================>] 3.178MB/3.178MB
cb2872ddbc2c: Loading layer [==================================================>] 1.536kB/1.536kB
328a5e02ea3f: Loading layer [==================================================>] 15.05MB/15.05MB
736f4a72442b: Loading layer [==================================================>] 25.6kB/25.6kB
3fbb3db5b99e: Loading layer [==================================================>] 5.12kB/5.12kB
fbf207c08d17: Loading layer [==================================================>] 5.12kB/5.12kB
c61ded92b25c: Loading layer [==================================================>] 257MB/257MB
74569dcf2238: Loading layer [==================================================>] 8.704kB/8.704kB
b954e0840314: Loading layer [==================================================>] 1.536kB/1.536kB
9b819b273348: Loading layer [==================================================>] 2.56kB/2.56kB
Loaded image ID: sha256:9d07849ed7c73f8fecd1e5e3e2aedc3592eea6b02f239fa6efba903f1a1ef835

Step 6: Start Docker container

Now let’s start a Docker container from this image:

root@newserver:/ # docker run -d --name db-new 9d07849ed7c73f8fecd1e5e3e2aedc3592eea6b02f239fa6efba903f1a1ef835
1ca6041d6e1e6c661234e24b16c0d23b0a302586f8628809020d5469e3acd405

As you can see, it’s running now:

root@newserver:/ # docker ps | grep db-new
1ca6041d6e1e 9d07849ed7c7 "docker-entrypoint..." 5 seconds ago Up 3 second





Docker: Stop All Containers


Now and then, especially when working in a development environment, you need to stop multiple Docker containers. Quite often, you need to stop all of the currently running containers. I’m going to show you one of the possible ways.

Docker: Stop a Container

You need to use a container name or container ID with the docker stop command.

For example, I have an nginx load balancer container:

root@s5:~ # docker ps -f name=nginx
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
32cd3e477546 nginx:latest "nginx -g 'daemon of…" 11 months ago Up About a minute 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp nginx

Based on this output, I can stop my nginx container like this:

root@s5:~ # docker stop nginx
nginx

… or like that:

root@s5:~ # docker stop 32cd3e477546
32cd3e477546

Docker: Stop Multiple Containers

Since I also have a MariaDB container named db, I might need to stop it together with nginx.

Here’s the info on the db container:

root@s5:~ # docker ps -f name=db
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c745794419a9 mariadb:latest "docker-entrypoint.s…" 9 months ago Up 4 seconds 3306/tcp db

If I ever decide to stop both nginx and db together, I can do it like this:

root@s5:~ # docker stop nginx db
nginx
db

Docker: Stop All Containers

As you can see from previous examples, docker stop simply takes a list of containers to stop. If there’s more than one container, just use space as a delimiter between container names or IDs.

This also allows us to use a clever shell expansion trick: you can run some other command and pass its output to the docker stop command.

For instance, this shows us the list of all the IDs for currently running Docker containers:

root@s5:~ # docker ps -q
510972d55d8c
1b8b1657736e
c745794419a9
32cd3e477546

What we can do now is pass the result of this command as the parameter for the docker stop command:

root@s5:~ # docker stop $(docker ps -q)
510972d55d8c
1b8b1657736e
c745794419a9
32cd3e477546

And just to check, running docker ps now won’t show any running containers:

root@s5:~ # docker ps -q

IMPORTANT: make sure you double-check what you’re doing! Specifically, run docker ps -q and compare it to the full docker ps output before stopping anything. Once the containers are stopped, you may not have an easy way to generate the list of the same containers to restart them.
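
A simple safeguard is to save the list to a file first, so that exactly the same set of containers can be restarted later:

root@s5:~ # docker ps -q > /tmp/running.txt
root@s5:~ # docker stop $(cat /tmp/running.txt)
root@s5:~ # docker start $(cat /tmp/running.txt)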

In my case, I’m just specifying them manually as the parameters for docker start:

root@s5:~ # docker start 510972d55d8c 1b8b1657736e c745794419a9 32cd3e477546
510972d55d8c
1b8b1657736e
c745794419a9
32cd3e477546

That’s it for today! Hope you enjoyed this quick how-to – let me know if you have any questions about Docker and whatnot!





screenFetch in RHEL 8

Look what I have finally installed in one of my VirtualBox 6.0 virtual machines yesterday:

[Screenshot: screenFetch output in a RHEL 8 VirtualBox VM]

Yes, you guessed it right – the installation steps for screenFetch on CentOS work in Red Hat just fine!

Red Hat Enterprise Linux 8 beta

I’m surprised that screenFetch isn’t reporting the release version. I had to use the hostnamectl command to get it:

[greys@rhel8 ~]$ hostnamectl
   Static hostname: rhel8
         Icon name: computer-vm
           Chassis: vm
        Machine ID: 02b5e17ce41846fbaa965ee1c3678162
           Boot ID: b36e64b343934359843d2e76db34e8af
    Virtualization: oracle
  Operating System: Red Hat Enterprise Linux 8.0 Beta (Ootpa)
       CPE OS Name: cpe:/o:redhat:enterprise_linux:8.0:beta
            Kernel: Linux 4.18.0-32.el8.x86_64
      Architecture: x86-64
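
Another quick way to confirm the release, assuming the standard Red Hat release file is in place:

[greys@rhel8 ~]$ cat /etc/redhat-release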





HW Virtualization

[Diagram: Hardware Virtualization – Desktop Virtualization Example]

This is a quick follow-up to my rather popular What Hardware Virtualization Really Means post. I constantly see “hw virtualization” queries to this blog, so I’d like to expand a bit on the topic and the key terminology around hardware virtualization.

HW virtualization definition

HW virtualization means the same thing as hardware virtualization: it’s a virtualization solution where the end result (a virtualization unit, usually a Virtual Machine) provides a completely isolated virtual representation of a hardware platform, running on top of specialised software and hardware.

The purpose of hw virtualization is to let you run multiple virtual environments with virtual hardware, all sharing the physical hardware available on your system. This achieves higher density: instead of running one OS environment per physical server, you can potentially run dozens of identical or different OS environments on the same server.

The most common examples of such solutions are desktop virtualization products like VMware Workstation, VMware Fusion and VirtualBox.

All of these solutions allow you to create a virtual machine by specifying the desired number of virtual CPUs, the virtual disks and the allocated memory.
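
For instance, here’s roughly what this looks like with VirtualBox’s command-line tool (the VM name and sizes are just examples):

$ VBoxManage createvm --name "testvm" --ostype Linux_64 --register
$ VBoxManage modifyvm "testvm" --cpus 2 --memory 2048

The GUI wizards in all of these products collect the same parameters.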

Hardware Virtualization vs Hardware Emulation

If you look closely, most of the well-known solutions today provide virtualized environments of the same architecture: x86. This usually means the host (physical server) must be of the same hardware architecture, because the virtualization software provides all the VMs with virtualized interfaces to the real hardware on your system. The most important part of that is the processor (CPU) and its instruction set.

If you are looking for a way to run different (non-x86) architectures inside virtual environments, you are going to need a different kind of software, called an emulator. Such solutions exist, and some even provide virtualization capabilities (they let you run multiple emulated VMs on top of the same physical system), but at a performance cost: an emulator has to implement every CPU instruction in relatively slow software code, instead of virtualizing the CPU and giving virtual machines abstracted access to the physical processor, where instructions run much faster.

HAV – Hardware Assisted Virtualization

This is actually what most people mean when they say hw virtualization: they’re referring to hardware assisted virtualization on desktop and server grade hardware.

Namely, both Intel and AMD processors have special functionality that allows for much more flexible and performant virtualization. Both operating systems and virtualization software check for such HAV support on your system and will usually fail to virtualize on older processors which don’t have hardware assisted virtualization.
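
On Linux, a quick way to check whether your processor advertises these extensions is to look at /proc/cpuinfo (vmx means Intel VT-x, svm means AMD-V; a non-zero count means HAV is present, though it may still be disabled in BIOS/UEFI):

$ grep -E -c 'vmx|svm' /proc/cpuinfo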

Running a Typical x86 Virtual Machine

Once a VM is created, you can attach a virtual DVD drive to it and slot a virtual disc into it – an ISO image of your favourite operating system. Such a virtual machine can then boot and run the installation from the ISO image, allowing you to complete the OS install and end up with a virtual environment that looks and feels like a real desktop PC – only running rather slowly, and inside a window of your desktop virtualization software:

[Screenshot: VirtualBox 6 running a CentOS 7.4 VM]

To improve the virtualization and performance of certain components like I/O, memory management and virtual networking, all such virtualization solutions supply a collection of purpose-built drivers: VMware Tools or open-vm-tools, for example. Installing these drivers lets you access VM disks from your host system, set advanced display resolutions, etc.

Running Multiple Virtual Machines

The really attractive benefit of using hardware virtualization is running multiple VMs on the same physical system. Using the desktop virtualization example diagram at the top of this post, you can see how a single desktop system running Ubuntu Linux can host and run many virtual machines at once.

Here are examples of various virtualization processes happening in a typical desktop virtualization scenario:

  • Processor virtualization – so instead of 1 physical CPU with 4 cores, you can present 1 or more virtual CPUs (vCPUs) to each of the virtual machines
  • Disk is virtualized and shared – instead of presenting your whole desktop’s disk to each VM, you can create relatively small virtual disks – one or more of them assigned and attached exclusively to each VM.
  • RAM is shared – each VM is presented with a small portion of the actual RAM physically available on the desktop.
  • Network virtualization – each VM has its own virtual network adapter (and you can have more than one), with static or DHCP IP addresses and various network access modes – shared or NAT, etc
  • USB and DVD drives – you can map a physical resource like DVD drive or USB port into a particular VM – this means that the installation DVD with your favourite OS can be used to boot and install OS inside such a VM.





Docker – List Containers


If you’re just getting started with Docker containers, you may be a bit confused that there doesn’t seem to be a command called “list” to show the containers available on your system. There is indeed no such command, but the listing functionality is certainly there.

List currently running Docker containers

You need the docker ps command – it lists containers in a readable table:

root@dcs:~ # docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1b8b1657736e datadog/agent:latest "/init" 8 months ago Up 8 months (healthy) 8125/udp, 8126/tcp dd-agent
c745794419a9 mariadb:latest "docker-entrypoint.s…" 8 months ago Up 8 months 3306/tcp db
32cd3e477546 nginx:latest "nginx -g 'daemon of…" 11 months ago Up 4 months 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp nginx

Executed without any command line options, docker ps shows only the active containers – the ones running at this very moment.

List all the Docker containers

If you experience trouble with one of the containers – where it starts and immediately goes offline – docker ps won’t help: by the time you run it, the container will have disappeared from the list.

This is where you need to use the --all command line option:

root@dcs:~ # docker ps --all
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f4b4ae616898 wordpress "docker-entrypoint.s…" 7 months ago Exited (0) 7 months ago competent_johnson
1b8b1657736e datadog/agent:latest "/init" 8 months ago Up 8 months (healthy) 8125/udp, 8126/tcp dd-agent
c745794419a9 mariadb:latest "docker-entrypoint.s…" 8 months ago Up 8 months 3306/tcp db
4c82fa3d5d1c mariadb:latest "docker-entrypoint.s…" 9 months ago Exited (1) 9 months ago mysql
78fd23e82bba confluence:latest "/sbin/tini -- /entr…" 11 months ago Exited (143) 10 months ago wiki_https
73c9ca67c77b confluence:latest "/sbin/tini -- /entr…" 11 months ago Exited (143) 11 months ago wiki
56728d0f1ab5 mariadb:latest "docker-entrypoint.s…" 11 months ago Exited (0) 10 months ago mariadb
32cd3e477546 nginx:latest "nginx -g 'daemon of…" 11 months ago Up 4 months 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp nginx
496b0d371a70 hello-world "/hello" 11 months ago Exited (0) 11 months ago stoic_brattain

List containers filtered by a specified criteria

You’ll soon realise that on a busy Docker host you probably need to apply some filter when listing containers. This functionality allows you to filter lists by many common Docker container properties.

For example, this is how we can list just the containers with a specific name:

root@dcs:~ # docker ps -f name=nginx
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
32cd3e477546 nginx:latest "nginx -g 'daemon of…" 11 months ago Up 4 months 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp nginx

And this is how you can show just the Docker containers with “exited” status:

root@dcr:~ # docker ps -f status=exited
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f4b4ae616898 wordpress "docker-entrypoint.s…" 7 months ago Exited (0) 7 months ago competent_johnson
4c82fa3d5d1c mariadb:latest "docker-entrypoint.s…" 9 months ago Exited (1) 9 months ago mysql
78fd23e82bba confluence:latest "/sbin/tini -- /entr…" 11 months ago Exited (143) 10 months ago wiki_https
73c9ca67c77b confluence:latest "/sbin/tini -- /entr…" 11 months ago Exited (143) 11 months ago wiki
56728d0f1ab5 mariadb:latest "docker-entrypoint.s…" 11 months ago Exited (0) 10 months ago mariadb
496b0d371a70 hello-world "/hello" 11 months ago Exited (0) 11 months ago stoic_brattain

That’s it for today! Let me know if you’re using Docker and whether you need help with anything!





VirtualBox 6.0

Turns out, VirtualBox 6.0 was released on December 18th, 2018.

Looking at the release notes, I have found the following interesting features that I’ve yet to try:

VirtualBox 6.0

  • Nested virtualization – available only on AMD CPUs for now – this allows you to install a hypervisor like KVM or VirtualBox inside a VirtualBox guest VM; it still needs hw virtualization.
  • Hyper-V support – apparently, VirtualBox will detect if it’s running on a Windows server with Hyper-V activated and will use Hyper-V as its virtualization engine – although it might run slower than native VirtualBox or Hyper-V guest VMs
  • Moving stuff – both disk images and VM metadata can now be moved very easily to a new location
  • Closing VMs improved – there’s now an option to keep the same hardware UUID when closing a guest VM
  • FUSE mount for vdisk images – on Mac OS hosts it’s possible to use the vboximg-mount command for raw access to virtual disks (rough sketch right after this list)
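
I haven’t tried vboximg-mount myself yet; going by the VirtualBox 6.0 documentation, the invocation is roughly as follows (verify the exact flags with vboximg-mount --help):

$ vboximg-mount --list
$ vboximg-mount --image=<disk-image-UUID> /some/mountpoint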

I’ve updated my VirtualBox software page with the above notes and will be testing these features and sharing what I find.
