Linux Kernel 5.4

This past Sunday saw the announcement of Linux Kernel 5.4, and this release brings a number of significant improvements.

I think it will be cool to try the following:

  • new virtiofs filesystem – a FUSE-based implementation for sharing physical host filesystems with virtual machine guests.
  • exFAT and sdFAT implementations – although my issues on my Linux laptop have more to do with the card reader than with the exFAT filesystem on the microSD cards.
  • booting from CIFS (Windows share) – I don't quite know how it works, but it sounds too cool not to try!
  • lockdown module – a feature aimed at minimising access to the Linux kernel even for the root user – meaning no direct access to memory and device ports, limited calls and fully controlled debugfs and kprobes.

Lots of new graphics cards have been added to both the AMD and Intel drivers, so it will be interesting to see if anything improves for my Ubuntu 19.10 laptop.

Show Process Limits Using /proc Filesystem

I think I've mentioned the special /proc filesystem before: it's available in Linux distros and helps you obtain system and process information via normal files arranged in a special structure. Today I'd like to show you another cool trick /proc has.

Show Process Info Using /proc

Just to remind you, here’s what I mean: on my Red Hat PC I have this sshd daemon process running:

root@redhat:/ # ps -aef | grep [o]penssh
root 5130 1 0 Oct03 ? 00:00:00 /usr/sbin/sshd -D [email protected],[email protected],aes256-ctr,aes256-cbc,[email protected],aes128-ctr,aes128-cbc [email protected],[email protected],[email protected],[email protected],hmac-sha2-256,hmac-sha1,[email protected],hmac-sha2-512 -oGSSAPIKexAlgorithms=gss-gex-sha1-,gss-group14-sha1- [email protected],ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group14-sha256,diffie-hellman-group16-sha512,diffie-hellman-group18-sha512,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1 -oHostKeyAlgorithms=rsa-sha2-256,ecdsa-sha2-nistp256,[email protected],ecdsa-sha2-nistp384,[email protected],rsa-sha2-512,ecdsa-sha2-nistp521,[email protected],ssh-ed25519,[email protected],ssh-rsa,[email protected] -oPubkeyAcceptedKeyTypes=rsa-sha2-256,ecdsa-sha2-nistp256,[email protected],ecdsa-sha2-nistp384,[email protected],rsa-sha2-512,ecdsa-sha2-nistp521,[email protected],ssh-ed25519,[email protected],ssh-rsa,[email protected]

So the sshd process ID (PID) is 5130. That means I can use /proc filesystem to learn quite a bit about the process:

root@redhat:/ # cd /proc/5130
root@redhat:/proc/5130 # ls
 attr        cmdline          environ  io         mem         ns             pagemap      sched      smaps_rollup  syscall        wchan
 autogroup   comm             exe      limits     mountinfo   numa_maps      patch_state  schedstat  stack         task
 auxv        coredump_filter  fd       loginuid   mounts      oom_adj        personality  sessionid  stat          timers
 cgroup      cpuset           fdinfo   map_files  mountstats  oom_score      projid_map   setgroups  statm         timerslack_ns
 clear_refs  cwd              gid_map  maps       net         oom_score_adj  root         smaps      status        uid_map

Each file or directory in this /proc/5130 location shows some information specific to this PID 5130.

For instance, if we list the files in the fd directory there, we'll see all the files and sockets opened by sshd at the moment:

root@redhat:/proc/5130 # ls -al fd/*
lr-x------. 1 root root 64 Oct 3 14:10 fd/0 -> /dev/null
lrwx------. 1 root root 64 Oct 3 14:10 fd/1 -> 'socket:[39555]'
lrwx------. 1 root root 64 Oct 3 14:10 fd/2 -> 'socket:[39555]'
lr-x------. 1 root root 64 Oct 3 14:10 fd/3 -> /dev/urandom
lr-x------. 1 root root 64 Oct 3 14:10 fd/4 -> /var/lib/sss/mc/passwd
lrwx------. 1 root root 64 Oct 3 14:10 fd/5 -> 'socket:[45446]'
lrwx------. 1 root root 64 Oct 3 14:10 fd/6 -> 'socket:[45450]'
lr-x------. 1 root root 64 Oct 3 14:10 fd/7 -> /var/lib/sss/mc/group
lrwx------. 1 root root 64 Oct 3 14:10 fd/8 -> 'socket:[45452]'
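
If all you need is a quick count of the open file descriptors, counting the entries in fd works just as well (a simple one-liner, not part of the original listing – the 9 matches the fd/0 to fd/8 entries above):

root@redhat:/proc/5130 # ls fd | wc -l
9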

TODO: I'll be sure to write a separate post on the /proc filesystem with a more thorough walkthrough.

Show Process Limits Using /proc

One of the files in each /proc subdirectory is called limits, and it's super useful for confirming the current OS limits applied to the process in question.

So for the sshd process with PID 5130, here’s what we can see:

root@redhat:/proc/5130 # cat limits
Limit                     Soft Limit           Hard Limit           Units
Max cpu time              unlimited            unlimited            seconds
Max file size             unlimited            unlimited            bytes
Max data size             unlimited            unlimited            bytes
Max stack size            8388608              unlimited            bytes
Max core file size        unlimited            unlimited            bytes
Max resident set          unlimited            unlimited            bytes
Max processes             127372               127372               processes
Max open files            1024                 4096                 files
Max locked memory         16777216             16777216             bytes
Max address space         unlimited            unlimited            bytes
Max file locks            unlimited            unlimited            locks
Max pending signals       127372               127372               signals
Max msgqueue size         819200               819200               bytes
Max nice priority         0                    0
Max realtime priority     0                    0
Max realtime timeout      unlimited            unlimited            us

Basically, this confirms that I haven't fine-tuned anything on this new desktop just yet – the open files limit of 1024 is small because this isn't a server that needs to serve lots of files simultaneously.
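
By the way, if you don't want to hunt for the PID manually every time, the whole check can be collapsed into a one-liner (a sketch that assumes pgrep is installed and that -o picks the sshd master process you're after):

cat /proc/$(pgrep -o -x sshd)/limits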

Hope you find this useful!

How To Install Jekyll in Linux Mint 19

I'm still fascinated with the Jekyll approach to website management, and I'm working on converting one of my blogs (not Unix Tutorial just yet!) to a Jekyll website. This short post shows how to install Jekyll on a Linux Mint system.

Install Ruby and Bundler

First things first: you need to install Ruby:

greys@xps:~/proj$ sudo apt install ruby
Reading package lists... Done
Building dependency tree 
Reading state information... Done
The following package was automatically installed and is no longer required:
libssh-4
Use 'sudo apt autoremove' to remove it.
The following additional packages will be installed:
libruby2.5 rake ruby-test-unit ruby2.5
Suggested packages:
ri ruby-dev
Recommended packages:
fonts-lato libjs-jquery
The following NEW packages will be installed:
libruby2.5 rake ruby ruby-test-unit ruby2.5
0 upgraded, 5 newly installed, 0 to remove and 317 not upgraded.
Need to get 3,227 kB of archives.
After this operation, 14.8 MB of additional disk space will be used.
Do you want to continue? [Y/n] y
Get:1 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 ruby2.5 amd64 2.5.1-1ubuntu1.1 [48.6 kB]
Get:2 http://archive.ubuntu.com/ubuntu bionic/main amd64 ruby amd64 1:2.5.1 [5,712 B]
Get:3 http://archive.ubuntu.com/ubuntu bionic/main amd64 rake all 12.3.1-1 [45.1 kB]
Get:4 http://archive.ubuntu.com/ubuntu bionic/main amd64 ruby-test-unit all 3.2.5-1 [61.1 kB]
Get:5 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 libruby2.5 amd64 2.5.1-1ubuntu1.1 [3,066 kB]
Fetched 3,227 kB in 1s (3,576 kB/s) 
Selecting previously unselected package ruby2.5.
(Reading database ... 272296 files and directories currently installed.)
Preparing to unpack .../ruby2.5_2.5.1-1ubuntu1.1_amd64.deb ...
Unpacking ruby2.5 (2.5.1-1ubuntu1.1) ...
Selecting previously unselected package ruby.
Preparing to unpack .../ruby_1%3a2.5.1_amd64.deb ...
Unpacking ruby (1:2.5.1) ...
Selecting previously unselected package rake.
Preparing to unpack .../archives/rake_12.3.1-1_all.deb ...
Unpacking rake (12.3.1-1) ...
Selecting previously unselected package ruby-test-unit.
Preparing to unpack .../ruby-test-unit_3.2.5-1_all.deb ...
Unpacking ruby-test-unit (3.2.5-1) ...
Selecting previously unselected package libruby2.5:amd64.
Preparing to unpack .../libruby2.5_2.5.1-1ubuntu1.1_amd64.deb ...
Unpacking libruby2.5:amd64 (2.5.1-1ubuntu1.1) ...
Processing triggers for libc-bin (2.27-3ubuntu1) ...
Processing triggers for man-db (2.8.3-2ubuntu0.1) ...
Setting up rake (12.3.1-1) ...
Setting up ruby2.5 (2.5.1-1ubuntu1.1) ...
Setting up ruby (1:2.5.1) ...
Setting up ruby-test-unit (3.2.5-1) ...
Setting up libruby2.5:amd64 (2.5.1-1ubuntu1.1) ...
Processing triggers for libc-bin (2.27-3ubuntu1) ...

… and now use the gem command to install the bundler gem:

greys@xps:~/proj$ sudo gem install bundler
Successfully installed bundler-2.0.1
Parsing documentation for bundler-2.0.1
Done installing documentation for bundler after 1 seconds
1 gem installed

Install the Jekyll gem

Excellent, we’re really close to getting this working.

Only one small problem: when installing the jekyll gem, there's an error:

greys@xps:~/proj$ sudo gem install jekyll
Building native extensions. This could take a while...
ERROR:  Error installing jekyll:
ERROR: Failed to build gem native extension.

    current directory: /var/lib/gems/2.5.0/gems/ffi-1.10.0/ext/ffi_c
/usr/bin/ruby2.5 -r ./siteconf20190329-23694-7cpq9s.rb extconf.rb
mkmf.rb can't find header files for ruby at /usr/lib/ruby/include/ruby.h

extconf failed, exit code 1

Gem files will remain installed in /var/lib/gems/2.5.0/gems/ffi-1.10.0 for inspection.

Results logged to /var/lib/gems/2.5.0/extensions/x86_64-linux/2.5.0/ffi-1.10.0/gem_make.out

I first thought I had done something wrong, or that I had an old version of Ruby (2.5.0, as you can see). But no – it seems the minimal required version for Jekyll is Ruby 2.1.0, so it should all work.

The hint is in the error message:

mkmf.rb can't find header files for ruby at /usr/lib/ruby/include/ruby.h

This include file is indeed missing, because so far we have only installed the Ruby binaries, not the development packages.

Once we install the ruby-dev package:

greys@xps:~/proj$ sudo apt install ruby-dev
Reading package lists... Done
Building dependency tree 
Reading state information... Done
The following package was automatically installed and is no longer required:
libssh-4
Use 'sudo apt autoremove' to remove it.
The following additional packages will be installed:
ruby2.5-dev
Recommended packages:
ruby2.5-doc
The following NEW packages will be installed:
ruby-dev ruby2.5-dev
0 upgraded, 2 newly installed, 0 to remove and 317 not upgraded.
Need to get 68.3 kB of archives.
After this operation, 351 kB of additional disk space will be used.
Do you want to continue? [Y/n] y
Get:1 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 ruby2.5-dev amd64 2.5.1-1ubuntu1.1 [63.7 kB]
Get:2 http://archive.ubuntu.com/ubuntu bionic/main amd64 ruby-dev amd64 1:2.5.1 [4,604 B]
Fetched 68.3 kB in 0s (198 kB/s) 
Selecting previously unselected package ruby2.5-dev:amd64.
(Reading database ... 273441 files and directories currently installed.)
Preparing to unpack .../ruby2.5-dev_2.5.1-1ubuntu1.1_amd64.deb ...
Unpacking ruby2.5-dev:amd64 (2.5.1-1ubuntu1.1) ...
Selecting previously unselected package ruby-dev:amd64.
Preparing to unpack .../ruby-dev_1%3a2.5.1_amd64.deb ...
Unpacking ruby-dev:amd64 (1:2.5.1) ...
Setting up ruby2.5-dev:amd64 (2.5.1-1ubuntu1.1) ...
Setting up ruby-dev:amd64 (1:2.5.1) ...

The header files for Ruby should appear now (although in a slightly different path):

/usr/include/ruby-2.5.0/ruby/ruby.h
/usr/include/ruby-2.5.0/ruby.h

and the Jekyll gem install will complete:

greys@xps:~/proj$ sudo gem install jekyll
Building native extensions. This could take a while...
Successfully installed ffi-1.10.0
Fetching: rb-inotify-0.10.0.gem (100%)
Successfully installed rb-inotify-0.10.0
Fetching: rb-fsevent-0.10.3.gem (100%)
Successfully installed rb-fsevent-0.10.3
Fetching: listen-3.1.5.gem (100%)
Successfully installed listen-3.1.5
Fetching: jekyll-watch-2.2.1.gem (100%)
Successfully installed jekyll-watch-2.2.1
Fetching: sass-listen-4.0.0.gem (100%)
Successfully installed sass-listen-4.0.0
Fetching: sass-3.7.3.gem (100%)

Ruby Sass is deprecated and will be unmaintained as of 26 March 2019.

* If you use Sass as a command-line tool, we recommend using Dart Sass, the new
primary implementation: https://sass-lang.com/install

* If you use Sass as a plug-in for a Ruby web framework, we recommend using the
sassc gem: https://github.com/sass/sassc-ruby#readme

* For more details, please refer to the Sass blog:
http://sass.logdown.com/posts/7081811

Successfully installed sass-3.7.3
Fetching: jekyll-sass-converter-1.5.2.gem (100%)
Successfully installed jekyll-sass-converter-1.5.2
Fetching: concurrent-ruby-1.1.5.gem (100%)
Successfully installed concurrent-ruby-1.1.5
Fetching: i18n-0.9.5.gem (100%)
Successfully installed i18n-0.9.5
Fetching: http_parser.rb-0.6.0.gem (100%)
Building native extensions. This could take a while...
Successfully installed http_parser.rb-0.6.0
Fetching: eventmachine-1.2.7.gem (100%)
Building native extensions. This could take a while...
Successfully installed eventmachine-1.2.7
Fetching: em-websocket-0.5.1.gem (100%)
Successfully installed em-websocket-0.5.1
Fetching: colorator-1.1.0.gem (100%)
Successfully installed colorator-1.1.0
Fetching: public_suffix-3.0.3.gem (100%)
Successfully installed public_suffix-3.0.3
Fetching: addressable-2.6.0.gem (100%)
Successfully installed addressable-2.6.0
Fetching: jekyll-3.8.5.gem (100%)
Successfully installed jekyll-3.8.5
Parsing documentation for ffi-1.10.0
Installing ri documentation for ffi-1.10.0
Parsing documentation for rb-inotify-0.10.0
Installing ri documentation for rb-inotify-0.10.0
Parsing documentation for rb-fsevent-0.10.3
Installing ri documentation for rb-fsevent-0.10.3
Parsing documentation for listen-3.1.5
Installing ri documentation for listen-3.1.5
Parsing documentation for jekyll-watch-2.2.1
Installing ri documentation for jekyll-watch-2.2.1
Parsing documentation for sass-listen-4.0.0
Installing ri documentation for sass-listen-4.0.0
Parsing documentation for sass-3.7.3
Installing ri documentation for sass-3.7.3
Parsing documentation for jekyll-sass-converter-1.5.2
Installing ri documentation for jekyll-sass-converter-1.5.2
Parsing documentation for concurrent-ruby-1.1.5
Installing ri documentation for concurrent-ruby-1.1.5
Parsing documentation for i18n-0.9.5
Installing ri documentation for i18n-0.9.5
Parsing documentation for http_parser.rb-0.6.0
Installing ri documentation for http_parser.rb-0.6.0
Parsing documentation for eventmachine-1.2.7
Installing ri documentation for eventmachine-1.2.7
Parsing documentation for em-websocket-0.5.1
Installing ri documentation for em-websocket-0.5.1
Parsing documentation for colorator-1.1.0
Installing ri documentation for colorator-1.1.0
Parsing documentation for public_suffix-3.0.3
Installing ri documentation for public_suffix-3.0.3
Parsing documentation for addressable-2.6.0
Installing ri documentation for addressable-2.6.0
Parsing documentation for jekyll-3.8.5
Installing ri documentation for jekyll-3.8.5
Done installing documentation for ffi, rb-inotify, rb-fsevent, listen, jekyll-watch, sass-listen, sass, jekyll-sass-converter, concurrent-ruby, i18n, http_parser.rb, eventmachine, em-websocket, colorator, public_suffix, addressable, jekyll after 26 seconds
17 gems installed
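
Before serving anything, it doesn't hurt to confirm the gem actually landed on the PATH (a quick optional check, not part of the original session – it should report the version installed above):

greys@xps:~/proj$ jekyll -v
jekyll 3.8.5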

Great stuff! Now we can serve my project and browse to the Jekyll page (I'm skipping the Jekyll steps for starting a new website because they're described in my Jekyll website with GitHub Pages article):

greys@xps:~/proj/glebreys.com$ bundle exec jekyll serve
Configuration file: /home/greys/proj/glebreys.com/_config.yml
Source: /home/greys/proj/glebreys.com
Destination: /home/greys/proj/glebreys.com/_site
Incremental build: disabled. Enable with --incremental
Generating... 
Jekyll Feed: Generating feed for posts
done in 0.614 seconds.
Auto-regeneration: enabled for '/home/greys/proj/glebreys.com'
Server address: http://127.0.0.1:4000/
Server running... press ctrl-c to stop.
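
If you'd rather pin Jekyll per project instead of relying on the system-wide gem, Bundler can manage it via a Gemfile – a rough sketch of the extra steps (run inside the project directory):

bundle init                  # creates an empty Gemfile in the project directory
bundle add jekyll            # records the jekyll dependency in the Gemfile
bundle exec jekyll serve     # same serve command as above, now pinned by the Gemfile
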
That’s it for today!

GitHub: Private Repositories are Free Now

Octocat – GitHub.com

Good news, everyone!

Starting yesterday, GitHub allows free accounts to have an unlimited number of private repositories. The number of collaborators for such repos is limited to 3, but this is still a massive improvement and something I've personally been waiting for. There are just too many little things in a sysadmin's life that could benefit from git tracking but don't justify a premium price tag.

Updated GitHub pricing

This is how pricing looks now:

Screen Shot 2019-01-08 at 16.45.51.png

How To Create a Private Repository in GitHub

Assuming you already have a GitHub account and you're logged in, creating a new repository is fairly straightforward:

Screen Shot 2019-01-08 at 09.30.23.png

Previously, selecting the Private type of repo would show a pop-up asking for a paid upgrade of your account, but as you can see in the screenshot above, this is no longer the case!

Once you click the Create Repository button, you should see your brand new repo:

Screen Shot 2019-01-08 at 09.30.35.png

Adding your SSH key to your GitHub account

If you haven’t done this yet, now would be the time to access Settings in your profile (open URL https://github.com/settings/profile in another browser tab) and go to the SSH and GPG keys section there.

This will let you upload your existing SSH key that you can later use for accessing your GitHub repositories:

Screen Shot 2019-01-08 at 09.32.45.png

As seen on the screenshot, you give the SSH key a title and then copy-paste the whole public key (I'm not including mine fully in the screenshot).

A good sign that your key has been added is a listing like this:

Screen Shot 2019-01-08 at 09.33.10.png
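
If you don't have a keypair to upload yet, generating one only takes a moment (standard ssh-keygen usage; the email is just a label for the key):

ssh-keygen -t rsa -b 4096 -C "you@example.com"     # accept the default ~/.ssh/id_rsa location
cat ~/.ssh/id_rsa.pub                              # this is the public key you paste into GitHub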

 

Connecting to your GitHub repo using SSH

Going back to your GitHub repository, in the top right section you should see a green button called Clone or download. If you click it, you'll see a window with the URL of your private repo. Don't forget to click Use SSH there, and you should see something like this:

Screen Shot 2019-01-08 at 09.31.44.png

Copy this URL onto your Linux/Unix desktop and run the following in the command line:

greys@maverick:~/proj/unixtutorial/github $ git clone git@github.com:greys/unixtutorial.git
Cloning into 'unixtutorial'...
remote: Enumerating objects: 3, done.
remote: Counting objects: 100% (3/3), done.
remote: Total 3 (delta 0), reused 0 (delta 0), pack-reused 0
Receiving objects: 100% (3/3), done.

You should see a new subdirectory created in your location:

greys@maverick:~/proj/unixtutorial/github $ ls
unixtutorial

and if you change into that directory, it would contain your private GitHub repository copy – which at this early stage only has the README.md file:

greys@maverick:~/proj/unixtutorial/github $ cd unixtutorial/
greys@maverick:~/proj/unixtutorial/github/unixtutorial $ ls
README.md
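
From here the usual git workflow applies – a first change could look something like this (notes.md is just an example file):

cd unixtutorial
echo "first note" > notes.md
git add notes.md
git commit -m "Add first note"
git push origin master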

That's it! Hope you liked the good news about free private GitHub repositories – stay tuned for more news and Unix/Linux how-to's!


How To Check RAID Progress with /proc/mdstat

I explained how to read /proc/mdstat in my recent post How To Identify RAID Arrays in Linux, so today is a super quick follow-up using one of my systems.

I use a Synology NAS in my office, and the disks in its storage array are getting old, so I decided to swap them out one by one over the next few months. Synology runs a Linux-based proprietary OS called DSM, which ultimately relies on software RAID configured and managed with md devices. So all the setup is done using the web-based GUI, but I always like double-checking what's going on by logging directly onto the appliance.

Here’s how I use /proc/mdstat to track the faulty disk replacement:

# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md2 : active raid6 sda5[9] sdb5[1] sdh5[7] sdg5[6] sdf5[8] sde5[4] sdd5[3] sdc5[2]
 17552612736 blocks super 1.2 level 6, 64k chunk, algorithm 2 [8/7] [_UUUUUUU]
 [==>..................] recovery = 13.4% (393456896/2925435456) finish=1578.6min speed=26731K/sec

This tells me:

  • md2 is the name of a RAID array device
  • RAID type is RAID6 (confirmed by both the raid6 personality and the level 6 label)
  • my array consists of 8 disks (sda5/sdb5/…/sdh5)
  • RAID chunk size is 64K (the 64k chunk part)
  • 7 devices are up (that’s what each of the Us mean in the [_UUUUUUU] section)
  • 1 device is down (that’s what the underscore _ means in [_UUUUUUU])
  • Array is going through a recovery procedure, we’re 13.4% there with another 26 hours (finish=1578.6min) to go
  • Speed of the RAID array recovery (effectively that’s the speed of populating the new disk with parity-based data from other disks in the array) is 26.1MB/sec (speed=26731K/sec)
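
Since a rebuild like this takes over a day, re-running cat manually gets old quickly – the watch command can keep an eye on it instead (a simple sketch, adjust the interval to taste):

watch -n 30 cat /proc/mdstat     # refresh the RAID status every 30 seconds, Ctrl-C to stop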



Bash Scripts – Examples

I find it most useful when I approach any learning with a clear goal (or at least a specific enough task) to accomplish.

Here’s a list of simple Bash scripts that I think could be a useful learning exercise:

  • what's your name? – asks for your name and then shows a greeting (see the sketch right after this list)
  • hello, world! – a greeting that also shows the hostname and lists the server's IP addresses
  • write a system info script – one that shows you the hostname, disk usage, network interfaces and maybe the system load
  • directory info – a script that counts the disk space taken up by a directory and shows the number of files and directories in it
  • users info – shows the number of users on the system, their full names and home directories (all taken from the /etc/passwd file)
  • list virtual hosts on your webserver – I actually have this as a login greeting on my webservers: a small script that highlights which websites (domains) your server has in its web server (Apache or Nginx) configuration.
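
As a taste of what the first item could look like, here's a minimal sketch (plain Bash, nothing fancy):

#!/bin/bash
# what's your name? – ask for a name and print a greeting
read -r -p "What's your name? " name
echo "Hello, ${name:-stranger}! Welcome to $(hostname)."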

Do you have any more examples of what you’d like to do in a Linux shell? If not, I’ll just start with the examples above. The plan is to replace each example name in the list above with a link to the Unix Tutorial post.




Using Dropbox with Unix

Although last week saw some pretty exciting developments in cloud storage (the Google Drive announcement and SkyDrive's free 25GB of space), the truth is that Dropbox is still the king of the cloud storage hill – it's hands down the easiest to use and integrate.

I've been a Dropbox user for a few years now, but have only started using it actively in the last 12 months or so. It's been an invaluable tool for me thanks to its integration with 1Password, the password tool of my choice. Dropbox also helps with lots of day-to-day tasks, and that's why I decided it's time to share some of the tips.

Having used Dropbox extensively on Windows systems (XP on a laptop and Win7 on desktops), I've recently moved on to using Dropbox with my Mac OS X desktop and Linux hosting.

So here are the top tips for using Dropbox with Unix – each one does wonders for me and so I hope you like them as well.

Important: If you’re not a Dropbox user yet, please use this link to sign up – it means I’ll get a small bonus (extra 500MB to my free account) for referring you.

Storing all the common apps and tools in Dropbox

Dropbox is really smart when it comes to uploading your files into cloud storage and making them universally accessible across all the devices that you choose to pair with your Dropbox account.

One thing I particularly like using Dropbox for is storing the latest (or sometimes not the latest, but verified to be fully working) versions of apps and tools I find handy to have on my desktops. In addition to having installers for all your favourite tools available on each workstation, a Dropbox account is also handy for simply storing all the necessary software in one location. When traveling, for example, I can open my Dropbox account and safely download the exact version of a particular tool that I need. It has saved me a lot of time because I don't have to go to each website and search for that download link.

Synchronizing scripts and config files between hosting systems

I have a dedicated server and use it for running a number of Ubuntu VMs. I've created a separate Dropbox account for my hosting needs, and this means that I now have 2.5GB of space available for my VMs to exchange files or store immediate backups. Because Dropbox takes care of synchronizing all the content (and it has a LAN sync feature, meaning VMs transfer files directly to each other instead of uploading back to the Dropbox site), it's super easy and super fast to have a particular script updated and deployed to multiple systems.

I'm not quite there yet with actually running stuff like important automation or whole websites straight from a Dropbox directory, but I use it for deploying scripts and configs all the time – once I get something working properly on one VM, I can then hop from one system to another and run the same set of commands against the files which are synchronized by Dropbox.

Transferring files to and from my hosting

This is a very recent addition to the things I do with Dropbox, but it's an incredibly useful one. Having set up a separate Dropbox account for hosting, I shared one of its folders with my personal Dropbox account, and this means that transferring any files to and from my hosting has become so much easier. By putting a file into a local directory on my desktop, I have it accessible across all the VMs on my hosting within seconds.

Likewise, if I'm reading logs or working on updating a particular config file, I can always copy it into the Dropbox directory and have it synced back to my desktop.

Prior to this setup I had to rely on scp (passwordless logins using a passphrase), and although it was pretty convenient to use, the Dropbox approach is much more robust. Because files appear to be local, you get to work with them and manage directories as you like. You don't have to remember the directory tree structure or follow any naming conventions – your files are the same across all the systems and you don't have to remember to sync.

Keeping backups of DBs or websites in Dropbox

Since the majority of my websites are publicly available blogs, I don't consider most of the backups to be sensitive information. To be clear, I don't store my passwords (the wp-config.php file or htpasswd files) in my Dropbox account, but everything else gets copied into it as a first-level backup. I have also been doing automatic backups to Amazon's S3 storage for about 5 years now, which means I can recover from most disasters quickly enough.

The reason Dropbox wins is that I don't have to pay for each minor transfer or for storing an extra gigabyte or two – and yes, every little helps, even though Amazon's services are quite affordable. Another major reason I started doing backups to Dropbox is that it's a local directory – I don't have to use any extra tools to access all the backups in a simple directories/files structure. With Amazon's S3 this is also possible, but the setup is not as trivial.

Using Dropbox for controlling Unix systems remotely

With a few minutes and a really simple script, it's possible to set up your own mission control for all the VMs in your hosting.

For example, if you create a cronjob which looks for a particular file, you can control which DB server your systems will connect to, or which directory the latest important log file gets copied into.

I'm also playing with services management based on the Dropbox account. If a certain file is present, I keep a particular service running; as soon as the file is gone, my cronjob gracefully stops the service. A slightly more sophisticated approach involves storing service-name and system-name associations in a Dropbox-synchronized file – this allows for more flexibility, as I can specify which service I want to be running on which nodes.
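
A bare-bones version of that cronjob could look something like this (just a sketch – the flag file path and the nginx service name are made-up examples):

#!/bin/bash
# Keep nginx running while the Dropbox-synced flag file exists; stop it once the file is gone.
FLAG=/home/greys/Dropbox/control/run-nginx

if [ -f "$FLAG" ]; then
    service nginx status >/dev/null 2>&1 || service nginx start
else
    service nginx status >/dev/null 2>&1 && service nginx stop
fi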

Sure enough, this isn't the most straightforward way to manage your systems, but such an approach can be used on the go from your iPhone. For example, I can restart a webserver by simply touching a file from my iPhone, while previously I would have had to find the nearest computer I trust, download an SSH client, connect to the box and only then fix the problem.

Have I convinced you enough? Did you like any tips, or do you have some more perhaps? Let me know in the comments section!

PS: if you don't have a Dropbox account yet – or perhaps I persuaded you to create a separate one for your hosting – please use this link so that I get some extra Dropbox space for referring you. Thanks!

 




Passwordless SSH with encrypted homedir in Ubuntu

Quite recently I came across a very interesting issue: while configuring passwordless SSH access (it's public-key based, so depending on how you have it configured it may not be completely passwordless) to some of my VPS servers, I found that the same keypair just wouldn't work on one of the servers.

Not only that, but the behaviour was quite bizarre: upon my first attempt to connect, the public key would get rejected and a regular password would be requested by the ssh session. But once I had successfully logged in with my password, any subsequent ssh connections would happily authenticate with my public key and let me in without a problem.

Those of you using home dir encryption in Ubuntu are probably smiling right now! 🙂 But because I have never consciously configured or used this feature, it took me a good few hours to troubleshoot the issue and come up with the fix.

Why public-key based SSH doesn’t work with encrypted home directories

The answer is quite simple: before your server can decide whether you are providing a valid and trusted SSH key, it must read your public key stored in your homedir. But if your homedir is encrypted, this becomes a classic chicken-and-egg scenario – until you log in and therefore decrypt your homedir, the server won't gain access to your public key. Only you wouldn't be needing the public key by then, would you?

Store your authorized SSH keys outside your encrypted home directory

If you happen to like your homedir encryption AND would like to use public/private key SSH authentication, there is a way out: you need to store your authorized keys outside of your encrypted homedir.

The usual access restrictions and directory/file permissions still apply, so the only thing you're changing is moving your authorized keys outside of the encrypted homedir on your server. This way things will work exactly as you expect: you authenticate with your private key, and this results in your homedir being automatically mounted and decrypted.

Here are the steps to make this happen. You’re going to need superuser privileges for my scenario because it caters for all the users on your Ubuntu server, not just one account that belongs to you (use sudo to become root).

Step 1: create a directory structure for your authorized keys.

First, the main directory. I created it under /var – it seems quite a safe choice since this directory is unlikely to grow and is equally unlikely to get removed by accident.

# mkdir /var/openssh

Perfect! Now we need to create user-specific directories, just to keep this dir really tidy. My username is "greys", so here is the directory:

# mkdir /var/openssh/greys
# chown greys /var/openssh/greys

Step 2: copy the existing authorized_keys file into the new location

(you must log in as your username for this, otherwise the homedir will stay encrypted)

$ cp /home/greys/.ssh/authorized_keys /var/openssh/greys
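
Depending on your sshd settings (StrictModes in particular), it's also worth making sure the new location isn't writable by anyone else – something along these lines should do (an extra precaution, not part of the original steps):

# chmod 755 /var/openssh
# chmod 700 /var/openssh/greys
# chmod 600 /var/openssh/greys/authorized_keys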

Step 3: update SSHd config with new location for authorized_keys file

You’re going to do this as root once again:

# vi /etc/ssh/sshd_config

update the value of the AuthorizedKeysFile so that it looks like this:

AuthorizedKeysFile        /var/openssh/%u/authorized_keys

Step 4: Restart SSH service

# service ssh restart
ssh start/running, process 3708
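
Before closing your current session, it's a good idea to test from a second terminal – verbose mode shows whether the key is actually offered and accepted (the hostname below is just a placeholder):

$ ssh -v greys@your-server

Look for debug lines like "Offering public key" and "Authentication succeeded (publickey)" in the output.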

That’s it! Give it a try and let me know how it worked out.

Use /proc/version to identify your Linux release

Hi everyone, I’m finally back from my holidays, and simply cannot wait to share some more Unix tips with all of you!

Today I'll talk a bit more about yet another way of learning version information about your Linux OS: the /proc/version file. I mentioned it briefly in one of the previous posts, but I'd like to finish the explanation.

What you can learn from /proc/version

This file will not show you the name of the actual OS release, but will instead give you specifics about the version of the Linux kernel used in your distribution, and confirm the version of the GCC compiler used to build it.

If you cat the /proc/version file, this is what you’re going to see (I’m using a RedHat 5.2 system for this):

rhel52# cat /proc/version
Linux version 2.6.18-92.el5 ([email protected]) (gcc version 4.1.2 20071124 (Red Hat 4.1.2-41)) #1 SMP Tue Apr 29 13:16:15 EDT 2008

In this output, you get to see the following information:

  1. Exact version of the Linux kernel used in your OS: Linux version 2.6.18-92.el5
  2. Name of the user who compiled your kernel, and also a host name where it happened: [email protected]
  3. Version of the GCC compiler used for building the kernel: gcc version 4.1.2 20071124
  4. Type of the kernel – SMP here means a Symmetric MultiProcessing kernel, one that supports systems with multiple CPUs or multiple CPU cores
  5. Date and time when the kernel was built: Tue Apr 29 13:16:15 EDT 2008

It's absolutely normal for the kernel to be older than your overall OS release. My example above, generated on a RedHat Enterprise Linux 5.2 system (RHEL 5.2), shows the kernel build date to be Apr 29, 2008. But the actual RHEL 5.2 release became available to customers only a month later, on May 21st, 2008 (here's the original RedHat 5.2 announcement).

The reason your kernel is a bit older than the rest of the distribution is that the kernel is only one part of the final product you're getting – it may take a while to compile and integrate the rest of the OS before it can be released.

Different ways to find out Linux release information

By now, you should know quite a few ways of confirming release information about your Linux distro. Just to remind you, here they are:
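
A quick sketch of the usual options (exact availability depends on the distro):

cat /proc/version          # kernel version, builder and the GCC used (covered in this post)
cat /etc/redhat-release    # release name on RedHat-based systems
lsb_release -a             # distro description where the LSB tools are installed
uname -a                   # kernel release and machine architecture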

This should be more than enough even for the most curious Linux users. Enjoy!

How To Monitor Linux Memory Usage with Watch Command

Hi all, today I'm going to teach you not one but two really cool things in one post! First, I'll introduce you to the advanced memory usage stats available on Linux systems through the /proc/meminfo file, and then I'll explain the basics of using the watch command.

Memory usage with /proc/meminfo

As you know, quite a few Unix-like systems use so-called pseudo filesystems like /proc. It's not a real filesystem, but just a convenient representation of the processes managed by your Unix OS. On Linux systems, this directory also contains quite a few files allowing you to access various information about your system. /proc/meminfo is one such file; it gives you access to most of the memory usage stats.

To get a snapshot of the current state of memory usage on your Linux system, simply cat the /proc/meminfo file:

ubuntu$ cat /proc/meminfo
MemTotal:       523008 kB
MemFree:         35336 kB
Buffers:         85560 kB
Cached:         137220 kB
SwapCached:      24480 kB
Active:         327420 kB
Inactive:        91308 kB
HighTotal:           0 kB
HighFree:            0 kB
LowTotal:       523008 kB
LowFree:         35336 kB
SwapTotal:     1048568 kB
SwapFree:       998960 kB
Dirty:             504 kB
Writeback:           0 kB
Mapped:         212232 kB
Slab:            39140 kB
CommitLimit:   1310072 kB
Committed_AS:   655992 kB
PageTables:       4748 kB
VmallocTotal: 34359738367 kB
VmallocUsed:       628 kB
VmallocChunk: 34359737739 kB

This probably gives you more information about memory usage than you'll ever want to know, but there are quite a few really useful stats there, like MemFree or SwapFree – they're useful for making sure your OS environment is healthy enough in terms of having plenty of free memory for efficient operation.
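
If you only care about a couple of those fields, grep narrows the output down nicely (standard grep usage; the values are from the listing above):

ubuntu$ grep -E 'MemFree|SwapFree' /proc/meminfo
MemFree:         35336 kB
SwapFree:       998960 kB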

Using watch command to track progress

The watch command is a really neat tool which does a simple but incredibly useful thing: it repeatedly runs a given command line and shows you its output. So you're effectively monitoring the progress of some process by watching the relevant files.

The default interval is 2 seconds, which is frequent enough for most needs.

Here’s how you use this command:

ubuntu$ watch cat /proc/meminfo

So it's the same command we used in the previous example, cat /proc/meminfo, but this time we're asking the watch command to re-run it every 2 seconds and show us the output.

The result of running a watch command is going to be a constantly refreshed console showing something like this:

Every 2.0s: cat /proc/meminfo       Fri Feb 13 03:51:01 2009

MemTotal:       523008 kB
MemFree:         46396 kB
Buffers:         82636 kB
Cached:         131044 kB
SwapCached:      24480 kB
Active:         308512 kB
Inactive:        99372 kB
HighTotal:           0 kB
HighFree:            0 kB
LowTotal:       523008 kB
LowFree:         46396 kB
SwapTotal:     1048568 kB
SwapFree:       998960 kB
Dirty:             832 kB
Writeback:           0 kB
Mapped:         211076 kB
Slab:            39132 kB
CommitLimit:   1310072 kB
Committed_AS:   654860 kB
PageTables:       4856 kB
VmallocTotal: 34359738367 kB
VmallocUsed:       628 kB
VmallocChunk: 34359737739 kB

This output gets refreshed every 2 seconds, so the numbers shown are constantly updated.
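
If the default 2 seconds is too frequent, or you'd like changes highlighted between refreshes, watch has options for both (a quick extra example):

ubuntu$ watch -n 5 -d cat /proc/meminfo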

That's it for today! There are limitless possibilities for monitoring various processes using the watch command, and I'll be sure to cover them in the future – but for now, have a great weekend and I hope Friday the 13th turns out great!
