Card Reader Issues in Ubuntu 19.04 on Dell XPS 13 9380


It appears there’s a long-standing issue with various microSD card readers under Linux. In my particular case, it happens on a Dell XPS 13 9380 laptop running the latest Ubuntu 19.04 with all the updates as of early July 2019. I’ll update this post once I confirm the fix.

Card Reader Device on Dell XPS 13 9380

I believe this is the device I have:

root@xps:~ #  lspci | grep -i reader
01:00.0 Unassigned class [ff00]: Realtek Semiconductor Co., Ltd. RTS525A PCI Express Card Reader (rev 01)
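
To double-check which kernel driver is actually handling the device, lspci can print that as well. This is just a quick sanity check, nothing Dell-specific assumed:

$ lspci -k | grep -A 3 -i reader

The “Kernel driver in use” line in that output tells you which module to focus on when debugging.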

mmc0: error -110 whilst initialising SD card

The error message is a bit strange: I’m not trying to initialise my SD card, I just want to read it. It’s a pretty standard 128GB microSD by SanDisk, but I think part of the problem is that it’s a high-speed SDXC card, and the card reader can’t handle it because it runs at slower speeds by default.

Here’s how the error looks:

Jul 2 14:02:43 xps kernel: [18743.768947] mmc0: error -110 whilst initialising SD card
Jul 2 14:02:44 xps kernel: [18745.108865] mmc0: error -110 whilst initialising SD card
Jul 2 14:02:46 xps kernel: [18746.452902] mmc0: error -110 whilst initialising SD card

Reloading SDHCI Kernel Module with debug_quirks

One of the commonly suggested fixes for this problem is to reload the sdhci kernel module with debug quirk parameters that are supposed to help with the voltage switching required for higher card speeds.

Unfortunately, this fix didn’t work for me:

$ sudo modprobe sdhci debug_quirks2="0x80000000"

Syslog reports that module has been reloaded:

Jul 06 12:22:01 xps kernel: sdhci: Secure Digital Host Controller Interface driver 
Jul 06 12:22:01 xps kernel: sdhci: Copyright(c) Pierre Ossman

… but when I insert the card I still get the same error:

Jul 06 12:24:43 xps kernel: mmc0: error -110 whilst initialising SD card
Jul 06 12:24:45 xps kernel: mmc0: error -110 whilst initialising SD card
Jul 06 12:24:46 xps kernel: mmc0: error -110 whilst initialising SD card

I’m glad I also have an external card reader with a USB-C interface: it works just fine and has no trouble accessing the same microSD card. Ideally, though, I want to fix this issue for the built-in card reader.
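
For the record, if the quirk does help on your hardware, the usual way to make it permanent (rather than re-running modprobe after every boot) is an options line in /etc/modprobe.d. This is just a sketch with a filename I made up, and on Ubuntu the initramfs needs rebuilding so the option is applied at boot:

# /etc/modprobe.d/sdhci-quirks.conf (hypothetical filename)
# pass the same quirk to the sdhci module every time it loads
options sdhci debug_quirks2=0x80000000

$ sudo update-initramfs -u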

Trying NTP from Cloudflare


I’ve read a really interesting article on the Cloudflare blog about NTP – Network Time Protocol – and its current set of security issues. As usual, before the actual service offering is introduced, Cloudflare lists the available alternatives and even provides a history of the NTP protocol and its implementations – something I really enjoyed reading.

Confirming current NTPd State in Linux

becky, one of the Raspberry Pi systems I have, had the default Debian NTP pools configured and running like this:

greys@becky:~ $ ntpq -p
remote refid st t when poll reach delay offset jitter
==============================================================================
0.debian.pool.n .POOL. 16 p - 64 0 0.000 0.000 0.001
1.debian.pool.n .POOL. 16 p - 64 0 0.000 0.000 0.001
2.debian.pool.n .POOL. 16 p - 64 0 0.000 0.000 0.001
3.debian.pool.n .POOL. 16 p - 64 0 0.000 0.000 0.001
-euphoric.ca 213.251.128.249 2 u 156 512 377 111.267 3.890 0.954
-de-user.deepini 195.13.23.5 3 u 149 512 377 43.721 3.085 0.623
*194.80.204.184 .GPS. 1 u 29 64 377 29.516 0.410 0.082
+ntp-ext.cosng.n 146.213.3.181 2 u 230 256 377 51.244 -0.209 0.534
-kabel.akku.expr .DCFa. 1 u 232 256 377 47.555 -2.329 22.182
+bray.walcz.net 140.203.204.77 2 u 11 256 377 10.880 -0.030 0.434

Trying the time.cloudflare.com NTP

I decided to add time.cloudflare.com as a pool to the /etc/ntp.conf file.

I changed section of the /etc/ntp.conf file from this:

pool 0.debian.pool.ntp.org iburst
pool 1.debian.pool.ntp.org iburst
pool 2.debian.pool.ntp.org iburst
pool 3.debian.pool.ntp.org iburst

to this:

pool time.cloudflare.com iburst
pool 0.debian.pool.ntp.org iburst
pool 1.debian.pool.ntp.org iburst
pool 2.debian.pool.ntp.org iburst
pool 3.debian.pool.ntp.org iburst

After restarting NTPd:

greys@becky:~ $ sudo systemctl restart ntp

… we can now see new NTP servers in the mix, specifically two new servers from Cloudflare:

greys@becky:~ $ ntpq -p
remote refid st t when poll reach delay offset jitter
==============================================================================
time.cloudflare .POOL. 16 p - 64 0 0.000 0.000 0.001
0.debian.pool.n .POOL. 16 p - 64 0 0.000 0.000 0.001
1.debian.pool.n .POOL. 16 p - 64 0 0.000 0.000 0.001
2.debian.pool.n .POOL. 16 p - 64 0 0.000 0.000 0.001
3.debian.pool.n .POOL. 16 p - 64 0 0.000 0.000 0.001
+162.159.200.1 10.52.8.83 3 u 13 64 377 10.284 -0.916 0.432
-162.159.200.123 10.52.8.83 3 u 12 64 377 10.464 -0.763 0.498
+ec2-52-17-231-7 193.120.142.71 2 u 11 64 377 10.457 -0.630 0.343
-tbag.heanet.ie 140.203.204.77 2 u 79 128 377 10.354 -1.393 0.589
*ntp4.bit.nl .PPS. 1 u 16 64 377 27.093 -0.242 0.122

No Easy NTS Implementation Yet

I wanted to give the NTS (Network Time Security) implementation a try, but it seems that’s not possible yet with the standard NTPd in Raspbian/Debian. The article describes NTS at quite some length, so it will be fascinating to see yet another core service properly secured with TLS or a similar approach.

I’m not sure I want to switch yet another service of mine to Cloudflare across all the servers and systems just yet, but generally it’s an interesting idea. So we’ll see how this works (I must really start graphing NTP in my monitoring setup)!
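
Speaking of graphing: one rough way to feed a monitoring script is to pull the offset of the currently selected peer straight out of ntpq. The selected peer is the line prefixed with an asterisk, and offset is the ninth column of the -pn output – just a sketch, adjust to taste:

$ ntpq -pn | awk '/^\*/ {print $9}'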

Show All TCP Connections with lsof


I’ve mentioned the lsof command briefly a few times, but I think it deserves a few more mentions simply because it’s such a universally useful tool.

Show TCP Connections with lsof

Using the -i tcp option, you can get lsof to report all the TCP connections currently open by any process on your system.

For example (it’s a long list so I’m just showing the first few lines):

greys@MacBook-Pro:/ $ lsof -i tcp | head -10
COMMAND     PID  USER   FD   TYPE             DEVICE SIZE/OFF NODE NAME
cloudd      361 greys  197u  IPv4 0x90d8806378f8ff3d      0t0  TCP localhost:55919->localhost:nfsd-status (ESTABLISHED)
cloudd      361 greys  199u  IPv4 0x90d88063ab22823d      0t0  TCP localhost:65345->localhost:nfsd-status (ESTABLISHED)
rapportd    368 greys    3u  IPv4 0x90d8806374a56f3d      0t0  TCP *:65115 (LISTEN)
rapportd    368 greys    4u  IPv6 0x90d88063935504fd      0t0  TCP *:65115 (LISTEN)
rapportd    368 greys   11u  IPv4 0x90d8806378f91bbd      0t0  TCP macbook-pro.localdomain:65115->iphonex.localdomain:61268 (ESTABLISHED)
rapportd    368 greys   14u  IPv4 0x90d880637350a8bd      0t0  TCP macbook-pro.localdomain:65115->glebs-ipad-2.localdomain:59156 (ESTABLISHED)
SetappAge   555 greys    3u  IPv4 0x90d880637c43c53d      0t0  TCP localhost:codasrv (LISTEN)
SetappAge   555 greys    5u  IPv6 0x90d8806379b196fd      0t0  TCP localhost:codasrv (LISTEN)
Spillo      637 greys    9u  IPv4 0x90d880637350bbbd      0t0  TCP *:8490 (LISTEN)
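
If the full list is too noisy, lsof can also filter by TCP state itself, which I find handier than piping into grep. For example, showing only listening sockets or only established sessions:

$ lsof -iTCP -sTCP:LISTEN
$ lsof -iTCP -sTCP:ESTABLISHED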

Show TCP Connections of Specific Process

For individual processes, it’s easier to just show everything lsof can find about a process and then grep for TCP.

I have an SSH session to one of my local systems:

greys@MacBook-Pro:/ $ ps -aef | grep ssh
501 11070 11053 0 8:35pm ttys008 0:05.59 ssh server

… so this lsof command example shows me that process PID 11070 has a TCP session open to the ssh port (22) on the server.localdomain server (macOS adds .localdomain everywhere):

greys@MacBook-Pro:/ $ lsof -p 11070 | grep TCP
ssh 11070 greys 3u IPv4 0x90d880638e66623d 0t0 TCP macbook-pro.localdomain:63830->server.localdomain:ssh (ESTABLISHED)
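
Alternatively, lsof can do the filtering on its own: the -a flag ANDs the conditions together, so the -p and -i tcp selections can be combined without grep. Something like this should produce a very similar result:

$ lsof -a -p 11070 -i tcp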

Pretty cool!

SSH: Too Many Authentication Failures


Here I was, trying to ssh from my XPS laptop to the MacBook Pro for some quick command, when SSH started giving me the “too many authentication failures” error. I decided to capture my findings here as a blog post.

Too Many Authentication Failures

Here’s how the error looked from my Ubuntu 19.04 command line:

greys@xps:~ $ ssh greys@maverick 
Received disconnect from 192.168.1.200 port 22:2: Too many authentication failures 
Disconnected from 192.168.1.200 port 22

The weird thing is that this was happening without any password prompt, so at first it seemed really strange: you get authentication failures, yet you haven’t actually tried authenticating at all.

Why Too Many Authentication Failures Occur

So yes, these errors happen when you attempt to log in with some credentials and are denied access a few times in a row because the credentials are incorrect.

Something as fundamental as the SSH client and server is rarely wrong about such basic things. So, thinking about the error a bit more (and Googling around, of course), I realised that the authentication attempts were being made using the SSH keys I have configured on my Ubuntu laptop. There are quite a few of them, and the SSH client was offering them one after another to the MacBook’s SSH daemon in an attempt to log me in.

So I never got asked for a password because my SSH client had already offered a few SSH keys, and the remote SSH server counted each offer as an authentication attempt. When this maxed out the SSH server’s limit, I got the error.

MaxAuthTries Setting

Related to the error above is the MaxAuthTries setting in the /etc/ssh/sshd_config file.

This option is usually set to something fairly reasonable; in macOS Mojave it’s set to 6 by default. But because I had changed it to 3 in the past for better security, it locked me out once my SSH client offered more than 3 SSH keys to log in.

Working around the Too Many Authentication Attempts Problem

There are a number of approaches, all of them to do with the SSH identities used for remote access. These are managed by the SSH agent, a piece of software that usually starts automatically when you log in to your laptop and keeps track of the usernames and SSH keys to try when you access things remotely.

Disable SSH agent temporarily

So the easiest fix is to disable the SSH agent temporarily and try logging in again (for password logins this does the trick).

I’ll quickly show the steps, but I will need to write a separate, proper post on using the SSH agent soon.

Step 1: we check user variables for SSH_AUTH_SOCK

This variable will usually confirm whether you have an SSH agent running. If it exists and points to a valid file, that’s the Unix socket used by your SSH agent:

greys@xps:~ $ env | grep SSH 
SSH_AUTH_SOCK=/home/greys/.ssh/ssh-auth-sock.xps 
SSH_AGENT_PID=1661

Step 2: we reset the SSH_AUTH_SOCK variable

Let’s set this variable to an empty value and check it:

greys@xps:~ $ SSH_AUTH_SOCK= 
greys@xps:~ $ env | grep SSH 
SSH_AUTH_SOCK= 
SSH_AGENT_PID=1661
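
By the way, if you only need this for a single login, you can prefix the assignment to the ssh command itself – the variable is then emptied just for that one invocation:

greys@xps:~ $ SSH_AUTH_SOCK= ssh greys@maverick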

That’s it, logins to the MacBook laptop should now work again:

greys@xps:~ $ ssh greys@maverick 
Password: 
Last login: Wed Jun 12 12:31:33 2019 from 192.168.1.60 
greys@maverick:~ $
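
Another approach that leaves the agent alone entirely is to tell the SSH client to offer only one explicitly named key: with IdentitiesOnly=yes it stops cycling through everything the agent holds. A sketch – the key path below is just an example, use whichever key the remote side actually knows about:

greys@xps:~ $ ssh -o IdentitiesOnly=yes -i ~/.ssh/id_rsa greys@maverick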

That’s it for today! Quite a few advanced topics in just one post, so I’ll be sure to revisit it and expand with further posts on the concepts of SSH ports, the SSH agent, passwordless SSH and generating SSH keys.

Use OfflineIMAP For Receiving Email


This week’s Unix Tutorial Project is super geeky and fun: I’m setting up a text-based email archive system using Mutt (NeoMutt, actually), OfflineIMAP and hopefully NotMuch. I’ll publish a project summary at the weekend.

Why use OfflineIMAP

OfflineIMAP is an open-source tool for downloading your email messages and storing them locally in the Maildir format (meaning each email message is stored in a separate file, and each folder/Gmail tag is a separate directory).

As the name suggests, this tool’s primary objective is to let you read your emails offline. Contrary to the other part of the name, offlineimap is NOT an IMAP server implementation.

I’d like to explore OfflineIMAP/Neomutt setup as a backup/archive solution for my cloud email accounts. I used to be with Fastmail but switched to gSuite email last year. I think it’s very important to keep local copies of any information you have in any cloud – no matter how big/reliable the service provider is, there are many scenarios where your data could be completely lost, and responsibility for keeping local backups is always with you.

Both Gmail and Fastmail are perfect for web browser use, but any local email software is invariably bulkier and slower compared to the web interface. I’m not giving up on finding an acceptably performant and reliable solution though.

This is one of the most recent attempts to download all emails and to have them easily searchable on my local PCs and laptops.

OfflineIMAP Configuration Steps

I’m only learning this tool, so this is probably the most basic usage:

  1. Confirm your mail server details (IMAP)
  2. Confirm your mailbox credentials (for Google, gSuite and even Fastmail you need to generate an app password – it’s separate and different from your primary email password)
  3. Create .offlineimaprc file in your home directory as shown below
  4. If necessary, create a credentials file (for now – with a cleartext app password for email access) – mine is /home/greys/.creds/techstack.pass (see the example right after this list)
  5. Run offlineimap (first time and every time you want your email refreshed)
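
For step 4 above, something as simple as this does the job – just make sure the file isn’t readable by anyone else, since it holds the app password in clear text (the password string is obviously a placeholder):

$ mkdir -p ~/.creds
$ echo 'my-app-password' > ~/.creds/techstack.pass
$ chmod 600 ~/.creds/techstack.pass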

My .offlineimaprc file

Here’s what I have in my .offlineimaprc file for this experiment:

[general]
ui = ttyui
accounts = techstack

[Account techstack]
localrepository = techstack-local
remoterepository = techstack-remote

[Repository techstack-local]
type = Maildir
localfolders = ~/Mail/techstack/

[Repository techstack-remote]
type = Gmail
remoteuser = [email protected]
remotepassfile = ~/.creds/techstack.pass
maxconnections = 5
ssl = yes
sslcacertfile = /etc/ssl/certs/ca-certificates.crt
folderfilter = lambda foldername: foldername not in ['Archive']
expunge = no

You can have multiple accounts in this one config file; they are listed in the accounts setting (accounts = techstack, unixtutorial would mean two accounts: the techstack one and one for my Unix Tutorial email).

The localfolders parameter specifies that I want OfflineIMAP to create a Mail directory in my home directory (so /home/greys/Mail) and a techstack subdirectory inside it – meaning you can have account subdirectories there like /home/greys/Mail/techstack and /home/greys/Mail/personal, etc.

You define two repositories, a local one and a remote one. The task of OfflineIMAP is to keep the two in sync.

IMPORTANT: a really important parameter is maxconnections. The default is 3, and I’ve changed it to 5 for quicker email sync. Setting it to a higher value resulted in failures – probably because Google’s servers rate-limit my connection.

CRITICAL: the expunge parameter is set to yes by default, so you must set it to no if your plan is to keep emails on the mail server after you sync them. By default they will be removed from the server as soon as they are downloaded, meaning the Gmail app won’t see any messages. Once deleted, it would be rather tricky to restore all the emails – so it’s important to get this setting right from the very start. Since my primary usage is still web and Gmail app based, I certainly want all my emails to stay in Google’s cloud even after I download them using OfflineIMAP – that’s why I configured expunge = no.

As you can see, this config references the /home/greys/.creds/techstack.pass file. This file holds a clear-text application password I generated for my email address in the gSuite admin panel. My understanding is that this can be improved, so I’ll do a follow-up post later.

How To Use OfflineIMAP

Simply run the offlineimap command and you should see something like this:

greys@xps:~ $ offlineimap 
OfflineIMAP 7.2.2
Licensed under the GNU GPL v2 or any later version (with an OpenSSL exception)
imaplib2 v2.57 (system), Python v2.7.16, OpenSSL 1.1.1b 26 Feb 2019
Account sync techstack:
*** Processing account techstack
Establishing connection to imap.gmail.com:993 (techstack-remote)
Folder 2016 [acc: techstack]:
Syncing 2016: Gmail -> Maildir
Folder 2016/01-January [acc: techstack]:
Syncing 2016/01-January: Gmail -> Maildir
Folder 2016/02-February [acc: techstack]:
Syncing 2016/02-February: Gmail -> Maildir
Folder 2016/01-January [acc: techstack]:

As you can see, it processes the techstack account, connects to Gmail and starts processing remote folders (Gmail tags) like 2016, 2016/01-January, 2016/02-February etc. – these are the tags I have in my gSuite account.

The initial download will take a while – my 150K messages took almost 3 days to download.
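
Once the initial sync is done, a crontab entry keeps the archive fresh without any manual effort. A sketch, assuming offlineimap is installed as /usr/bin/offlineimap and that your version supports the -o (run once) and -u quiet flags:

*/30 * * * * /usr/bin/offlineimap -o -u quiet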

That’s all for today, hope you give OfflineIMAP a try!

How To: Change Graphics Mode for GRUB Bootloader


One of the remaining things to fix on my new Dell XPS 13 laptop has been the graphics mode of the GRUB bootloader that got activated with my Ubuntu 19.04 install. Somehow GRUB is smart enough to recognise the laptop’s 4K resolution, so the GRUB boot menu looks so tiny that I can’t read any of the text (there’s no scaling applied to fonts). I finally decided to fix this.

Graphics modes in GRUB bootloader

GRUB is a simple enough piece of software that traditionally used the text console for presenting its boot menu. In the last few years it introduced a graphics mode: you still see a text menu with boot options, but it’s rendered in graphics mode rather than shown in text mode.

Turns out, there’s a special option in /boot/grub/grub.cfg file that allows you to select a graphics resolution:

set gfxmode=1024x768

Change graphics mode for GRUB

To update this value properly, I suggest you edit the GRUB_GFXMODE line in the /etc/default/grub file:

GRUB_GFXMODE=1024x768

IMPORTANT: the 1920x1080 mode is NOT supported, so don’t specify it. 1024x768 is a safe resolution that should be available on most hardware. I’ll write another post soon expanding on the topic of GRUB bootloader graphics resolutions.
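
If you want to see which modes your particular machine actually offers before committing, GRUB can tell you itself: press c at the boot menu to get a GRUB prompt and run the command below (on older BIOS systems the command is vbeinfo instead, if I remember correctly):

grub> videoinfo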

Once this is done, regenerate the GRUB configuration files:

$ sudo update-grub

To verify that our resolution of 1024x768 made it into the config, grep for it:

greys@xps:~ $ grep 1024 /boot/grub/grub.cfg 
set gfxmode=1024x768

That’s it, you can reboot your PC or laptop now to enjoy a different resolution.

Using Chrony for time keeping

chrony is the default NTP service supplied with Red Hat Enterprise Linux 7 and newer RHEL versions, so it’s been around for some time. It’s a great Network Time Protocol implementation that aims to replace ntpd.

How To Install chrony

There are packages for Linux, FreeBSD, NetBSD, macOS and Solaris. On Red Hat Enterprise Linux, it gets installed like this:

root@s2:~ # yum install chrony
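
On RHEL 7 the service is managed by systemd, so after installing it you would typically start it and enable it at boot (the unit is called chronyd, as far as I recall):

root@s2:~ # systemctl enable chronyd
root@s2:~ # systemctl start chronyd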

Check current chrony status

The tracking command shows whether chrony is currently synchronised and which server it’s getting the time from:

root@s2:~ # chronyc tracking
Reference ID : C39AAED1 (leeto.nicolbolas.org)
Stratum : 3
Ref time (UTC) : Sat Jun 01 22:49:11 2019
System time : 0.000001060 seconds fast of NTP time
Last offset : +0.000001358 seconds
RMS offset : 0.000014120 seconds
Frequency : 14.198 ppm slow
Residual freq : +0.000 ppm
Skew : 0.060 ppm
Root delay : 0.001231604 seconds
Root dispersion : 0.000468358 seconds
Update interval : 64.9 seconds
Leap status : Normal

Check chrony time sources

The sources command reports your time synchronisation peers – the time-keepers your server can get time from. As you can see, the server from the previous (tracking) output, leeto.nicolbolas.org, is on this list of sources as well.

root@s2:~ # chronyc sources
210 Number of sources = 4
MS Name/IP address Stratum Poll Reach LastRx Last sample
===============================================================================
^* leeto.nicolbolas.org 2 6 377 8 +4139ns[+5545ns] +/- 1749us
^- eterna.binary.net 2 10 377 451 +92us[ +83us] +/- 89ms
^- web01.webhd.nl 3 9 377 206 -362us[ -358us] +/- 78ms
^- 148.ip-193-70-90.eu 3 9 377 139 -3724us[-3723us] +/- 66ms
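
chronyc also has a sourcestats subcommand which shows the estimated drift and offset for each of those sources – worth checking alongside sources:

root@s2:~ # chronyc sourcestats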

This should be enough for a first try of Chrony! The next step will be to set up a GPS receiver on becky, a Raspberry Pi I have. chronyc sources should report a local reference clock differently from a remote server, I think. We’ll see soon enough!

Controlling Dell Laptop Keyboard Backlight from Command Line


It didn’t take me long to start wondering if there was a way to control the keyboard backlight on my Dell XPS 13 9380 laptop, and of course there is a simple enough solution.

Check Current Keyboard Backlight Level

IMPORTANT: this is specific to Dell laptops (possibly even to Dell XPS models only).

Go to the /sys/devices/platform/dell-laptop/leds/dell::kbd_backlight directory and cat the file called brightness:

root@xps:/sys/devices/platform/dell-laptop/leds/dell::kbd_backlight # cat brightness
1

You can confirm the max value for brightness in a similar way:

root@xps:/sys/devices/platform/dell-laptop/leds/dell::kbd_backlight # cat max_brightness
2

Set Keyboard Backlight Brightness

If you just echo the numeric value you want into the brightness file, it will immediately be applied to your laptop’s keyboard.

IMPORTANT: you need to do this as the root user.

Turning backlight off:

root@xps:/sys/devices/platform/dell-laptop/leds/dell::kbd_backlight # echo 0 > brightness

Setting it to max brightness:

root@xps:/sys/devices/platform/dell-laptop/leds/dell::kbd_backlight # echo 2 > brightness
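
If you’d rather not switch to a root shell, the usual workaround is sudo with tee, because a plain sudo echo with a redirect would still perform the redirection as your own user:

$ echo 1 | sudo tee /sys/devices/platform/dell-laptop/leds/dell::kbd_backlight/brightness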

I’m really excited! Must make this into a cron job.

See Also

  1. Ubuntu 19.04
  2. Check For Available Updates with apt
  3. Use htop for monitoring CPU and memory

 




How To: Show Colour Numbers in Unix Terminal

I’m decorating my tmux setup and needed to confirm colour numbers for some elements of the interface. Turns out, it’s simple enough to show all the possible colours with a one-liner in your favourite Unix shell – bash in my case.

Using ESC sequences For Using Colours

I’ll explain how this works in full detail in a separate post sometime, but for now I’ll just give you an example and show how it works:

hello-color-bash-output.png

So, in this example, this is how we achieve colorized text output:

  1. the echo command uses the -e option to enable ESC sequences
  2. \e[38;5;75m is the ESC sequence selecting colour number 75
  3. \e[38;5; is the prefix that tells the terminal we want to use the 256-colour style (a background-colour variant is shown right after this list)
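
The same scheme with 48 instead of 38 selects the background colour, and \e[0m resets everything back to defaults – a quick example:

$ echo -e "Text on a colour 75 background: \e[48;5;75m     \e[0m"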

List 256 Terminal Colours with Bash

Here’s how we get the colours now: we create a loop from 1 to 255 (0 would be black) and use the ESC syntax to switch the colour to the $COLOR variable’s value. We then output the $COLOR value itself, which is just a number:

for COLOR in {1..255}; do echo -en "\e[38;5;${COLOR}m${COLOR} "; done; echo;

Here’s how running this will look in a properly configured 256-colour terminal:

bash-show-256-colors.png

Bash Script to Show 256 Terminal Colours

Here’s the same one-liner converted into a proper script for better portability and readability:

#!/bin/bash

# Print the numbers 1-255, each one in its own 256-colour foreground colour
for COLOR in {1..255}; do
    echo -en "\e[38;5;${COLOR}m"    # switch the foreground to colour ${COLOR}
    echo -n "${COLOR} "             # print the colour number itself
done

echo -e "\e[0m"                     # reset colours and end with a newline

If you save this as bash-256-colours.sh and chmod a+rx bash-256-colours.sh, you can run it any time you want to refresh your memory or pick different colours for some use.

Test SSHd config on a different SSH port


Sometimes you need to tweak the SSH daemon on an important system and you just don’t know whether particular settings will break connectivity to the server or not. In such cases it’s best to test the new SSHd config using a separate SSH daemon instance and a separate SSH port – debug it there and only then apply the new settings to your primary SSHd configuration.

Creating New SSHd Config

The easiest way is to start by copying the /etc/ssh/sshd_config file – you will need sudo/root privileges for that:

greys@s2:~ $ sudo cp /etc/ssh/sshd_config /home/greys

I then just removed everything I didn’t need from it, leaving the bare minimum. These are the parameters I kept (I ended up renaming my config to /home/greys/sshd_config.minimal after the edits):

greys@s2:~ $ grep -v ^# /home/greys/sshd_config.minimal | uniq -u
Port 2222
HostKey /etc/ssh/ssh_host_rsa_key

RSAAuthentication yes
PubkeyAuthentication yes

AuthorizedKeysFile /var/ssh/%u/authorized_keys

PasswordAuthentication no

UsePAM yes

I only updated the Port parameter – you can pick any other number instead of 2222.
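
Before starting anything, it’s also worth letting sshd sanity-check the file: the -t flag parses the configuration and host keys and simply reports any errors (nothing assumed here beyond the path to my minimal config):

$ sudo /usr/sbin/sshd -t -f /home/greys/sshd_config.minimal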

Starting SSH daemon with custom config file

There are a few rules for testing an SSH configuration using a separate file:

  • you need to have sudo/root privileges (mostly to avoid messing with the host SSH keys)
  • it’s better to increase the verbosity level to see what’s going on
  • it’s best to run SSHd in foreground (non-daemon) mode

With these principles in mind, here’s the command line to test the config shown above:

greys@s2:~ $ sudo /usr/sbin/sshd -f /home/greys/sshd_config.minimal -ddd -D
debug2: load_server_config: filename /home/greys/sshd_config.minimal
debug2: load_server_config: done config len = 194
debug2: parse_server_config: config /home/greys/sshd_config.minimal len 194
debug3: /home/greys/sshd_config.minimal:1 setting Port 2222
debug3: /home/greys/sshd_config.minimal:10 setting HostKey /home/greys/ssh_host_rsa_key
debug3: /home/greys/sshd_config.minimal:12 setting RSAAuthentication yes
/home/greys/sshd_config.minimal line 12: Deprecated option RSAAuthentication
debug3: /home/greys/sshd_config.minimal:13 setting PubkeyAuthentication yes
debug3: /home/greys/sshd_config.minimal:18 setting AuthorizedKeysFile /var/ssh/%u/authorized_keys
debug3: /home/greys/sshd_config.minimal:20 setting PasswordAuthentication no
debug3: /home/greys/sshd_config.minimal:22 setting UsePAM yes
debug1: sshd version OpenSSH_7.4, OpenSSL 1.0.2k-fips 26 Jan 2017
debug1: private host key #0: ssh-rsa SHA256:g7xhev6zJefXRFc0ClAG4rzpFI1Ts8H7PhQ/h3PTmLM
debug1: rexec_argv[0]='/usr/sbin/sshd'
debug1: rexec_argv[1]='-f'
debug1: rexec_argv[2]='/home/greys/sshd_config.minimal'
debug1: rexec_argv[3]='-ddd'
debug1: rexec_argv[4]='-D'
debug3: oom_adjust_setup
debug1: Set /proc/self/oom_score_adj from 0 to -1000
debug2: fd 3 setting O_NONBLOCK
debug1: Bind to port 2222 on 0.0.0.0.
Server listening on 0.0.0.0 port 2222.
debug2: fd 4 setting O_NONBLOCK
debug3: sock_set_v6only: set socket 4 IPV6_V6ONLY
debug1: Bind to port 2222 on ::.
Server listening on :: port 2222.

That’s it, the configuration is ready to be tested (assuming the firewall on the server doesn’t block port 2222).
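
On a RHEL/CentOS 7 box like this one (judging by the OpenSSL 1.0.2k-fips string in the output above), opening the test port temporarily usually means firewalld – a sketch, and remember to remove the rule again once testing is over:

$ sudo firewall-cmd --add-port=2222/tcp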

Testing SSH connectivity using Different SSH Port

Here’s my login session in a separate window, connecting from my MacBook Pro to the s2 server on SSH port 2222 (I have masked my static IP with aaa.bbb.ccc.ddd and my s2 server’s IP with eee.fff.ggg.hhh):

greys@MacBook-Pro:~ $ ssh s2 -p 2222
Warning: untrusted X11 forwarding setup failed: xauth key data not generated
Last login: Fri May 24 15:53:59 2019 from aaa.bbb.ccc.ddd
debug3: Copy environment: XDG_SESSION_ID=14813
debug3: Copy environment: XDG_RUNTIME_DIR=/run/user/1000
Environment:
USER=greys
LOGNAME=greys
HOME=/home/greys
PATH=/usr/local/bin:/usr/bin
MAIL=/var/mail/greys
SHELL=/bin/bash
SSH_CLIENT=aaa.bbb.ccc.ddd 64168 2222
SSH_CONNECTION=aaa.bbb.ccc.ddd 64168 eee.fff.ggg.hhh 2222
SSH_TTY=/dev/pts/14
TERM=xterm-256color
XDG_SESSION_ID=14813
XDG_RUNTIME_DIR=/run/user/1000
SSH_AUTH_SOCK=/tmp/ssh-ajOUyvbR6i/agent.20996
greys@s2:~ $ uptime
16:18:08 up 86 days, 17:32, 2 users, load average: 1.00, 1.02, 1.05

Pretty cool, huh?

See Also