Now that I’m monitoring my logs via centralised RSyslog, I regularly notice SSH attacks as they happen. When it becomes obvious that someone’s trying to brute-force SSH, I don’t always wait for fail2ban to fix the issue – sometimes I ban the offending IP myself.
How To Ban a Specific IP with fail2ban
Assuming a standard install, we’ll use the fail2ban-client command to tell the sshd jail to ban a specific IP.
Here’s how it works:
root@s1:/etc/fail2ban # fail2ban-client -vvv set sshd banip 202.70.66.228
30 7F0B121F6640 fail2ban.configreader INFO Loading configs for fail2ban under /etc/fail2ban
30 7F0B121F6640 fail2ban.configreader DEBUG Reading configs for fail2ban under /etc/fail2ban
31 7F0B121F6640 fail2ban.configreader DEBUG Reading config files: /etc/fail2ban/fail2ban.conf
31 7F0B121F6640 fail2ban.configparserinc INFO Loading files: ['/etc/fail2ban/fail2ban.conf']
31 7F0B121F6640 fail2ban.configparserinc TRACE Reading file: /etc/fail2ban/fail2ban.conf
31 7F0B121F6640 fail2ban.configparserinc INFO Loading files: ['/etc/fail2ban/fail2ban.conf']
31 7F0B121F6640 fail2ban.configparserinc TRACE Shared file: /etc/fail2ban/fail2ban.conf
32 7F0B121F6640 fail2ban INFO Using socket file /var/run/fail2ban/fail2ban.sock
32 7F0B121F6640 fail2ban INFO Using pid file /var/run/fail2ban/fail2ban.pid, [INFO] logging to SYSLOG
32 7F0B121F6640 fail2ban HEAVY CMD: ['set', 'sshd', 'banip', '202.70.66.228']
48 7F0B121F6640 fail2ban HEAVY OK : 1
48 7F0B121F6640 fail2ban.beautifier HEAVY Beautify 1 with ['set', 'sshd', 'banip', '202.70.66.228']
1
48 7F0B121F6640 fail2ban DEBUG Exit with code 0
Once you become comfortable, you can omit the -vvv option and skip all this verbose output:
root@s1:/etc/fail2ban # fail2ban-client set sshd banip 202.70.66.229
1
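Before banning anything manually, I usually check which IPs are hammering the server. Here’s a hedged sketch of scanning an sshd log for failed logins – it runs against a tiny generated sample here; on a real system, point LOG at /var/log/auth.log (Debian/Ubuntu) or /var/log/secure (RHEL family):

```shell
# point LOG at your real sshd log; a tiny generated sample is used here
LOG=$(mktemp)
cat > "$LOG" <<'EOF'
Jan 1 00:00:01 s1 sshd[101]: Failed password for root from 202.70.66.228 port 22 ssh2
Jan 1 00:00:02 s1 sshd[102]: Failed password for invalid user admin from 202.70.66.228 port 22 ssh2
Jan 1 00:00:03 s1 sshd[103]: Failed password for root from 198.51.100.7 port 22 ssh2
EOF
# lines end with 'from <IP> port <N> ssh2', so the IP is always 3 fields from the end
awk '/Failed password/ {print $(NF-3)}' "$LOG" | sort | uniq -c | sort -rn
```

The counts make it obvious which IP deserves the fail2ban-client treatment first.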
Nothing particularly new, but I found my blogging stride and enjoyed a very productive month. Lots of Ansible automation progress and a rework of my home office Raspberry Pi setup were my main focus.
Unix Tutorial in Russian is now a completely separate website. I’m going to find more time to add translations to it – it’s been growing nicely.
Unix and Linux News
The New Releases section will get a few additions based on February 2020 releases:
I’ve been refreshing my gleb.reys.net website recently and ran into a weird error: pushing the latest changes to GitHub resulted in a prompt for my username and password. I figured I should write down what the issue was and how easy it was to fix.
git Repo Asks for Username/Password
greys@mcfly:~/proj/gleb.reys.net $ git push
Username for 'https://github.com': greys
Password for 'https://greys@github.com':
remote: Invalid username or password.
fatal: Authentication failed for 'https://github.com/greys/greys.github.io/'
greys@mcfly:~/proj/gleb.reys.net $ git remote
origin
greys@mcfly:~/proj/gleb.reys.net $ git remote show origin
greys@mcfly:~/proj/gleb.reys.net $ git remote -v
origin  https://github.com/greys/greys.github.io (fetch)
origin  https://github.com/greys/greys.github.io (push)
greys@mcfly:~/proj/gleb.reys.net $ git push
fatal: The current branch master has no upstream branch.
To push the current branch and set the remote as upstream, use
git push --set-upstream origin master
greys@mcfly:~/proj/gleb.reys.net $ git push --set-upstream origin master
Enumerating objects: 7, done.
Counting objects: 100% (7/7), done.
Delta compression using up to 16 threads
Compressing objects: 100% (4/4), done.
Writing objects: 100% (4/4), 568 bytes | 568.00 KiB/s, done.
Total 4 (delta 3), reused 0 (delta 0)
remote: Resolving deltas: 100% (3/3), completed with 3 local objects.
To github.com:greys/greys.github.io.git
   018a8c0..e4f79d4  master -> master
Branch 'master' set up to track remote branch 'master' from 'origin'.
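A common way to get rid of the username/password prompt altogether is switching the remote from HTTPS to SSH. Here’s a sketch in a throwaway repo (in the real repo only the set-url line is needed; it assumes your SSH public key is already added to your GitHub account):

```shell
# throwaway repo just for demonstration
cd "$(mktemp -d)"
git init -q .
git remote add origin https://github.com/greys/greys.github.io
# switch from HTTPS (password prompts) to SSH (key-based, no prompts)
git remote set-url origin git@github.com:greys/greys.github.io.git
git remote -v
```

After the switch, git push authenticates with your SSH key and never asks for a password.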
Raspberry Pi systems use microSD cards and are therefore more prone to storage corruption than typical servers with hard disks or SSDs. Such corruption is especially tricky when the only storage available to the Raspberry Pi is the microSD card that booted Raspbian OS.
How To Force Filesystem Check
The best approach is to update the /boot/cmdline.txt file to force a filesystem check and repair on the next boot.
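Here’s a sketch of the change, run against a copy of the file (the root=PARTUUID value below is a placeholder – on a real Pi, edit /boot/cmdline.txt itself and keep a backup). The fsck.mode=force and fsck.repair=yes kernel parameters tell systemd’s fsck to check and repair the root filesystem on boot:

```shell
# working on a copy; cmdline.txt must remain a single line
CMDLINE=$(mktemp)
echo 'console=tty1 root=PARTUUID=00000000-02 rootfstype=ext4 elevator=deadline rootwait' > "$CMDLINE"
# fsck.mode=force runs a full check on next boot, fsck.repair=yes fixes errors automatically
sed -i 's/$/ fsck.mode=force fsck.repair=yes/' "$CMDLINE"
cat "$CMDLINE"
```

Once the Pi boots and the check completes, remove these two options again – otherwise every boot will force a full fsck.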
It’s taken me exactly two weeks to recognise the value of a native editor like Sublime Text 3 for Ansible automation and Python scripting. I’m so impressed with the benefits of this approach that after another gentle reminder to perhaps make the purchase already, I decided to hit the button.
WARNING: this is going to be a summary of my approach to and view of local text editing on a MacBook. HOWEVER, I just swiped something incorrectly and WordPress lost the whole post I’d been typing up for about an hour. So I’ll have to revisit and fill in the blanks later.
My Background with Text Editors
Why Not Just Use Editor in Remote Session?
Can’t I use gVim or neoVim Instead?
Great Reasons to Use Native Editor
I’m listing things I enjoy specifically in Sublime Text 3 while editing Ansible playbooks on my MacBook, but many modern editors bring similar improvements compared to an out-of-the-box vim session on a typical Linux instance (an admittedly unfair comparison).
Functional Benefits of Editing Locally
better view of file navigation – Sublime Text 3 has a folder view on macOS and even highlights git status for all the files
easier editing of multiple files with tabs – I like most apps with tabs in their interface, and Sublime Text 3 definitely feels right with its approach. Tabs are easy to create and navigate, and files are quick to find using fuzzy search on filenames.
great syntax highlighting, navigation and indentation – Sublime Text 3 shows a minimap preview of your whole file on the right side of the window, gives you line numbers and shows how many spaces/tabs each line has – making indentation much easier
bigger, more flexible access to source code and repos – your workstation can hold multiple projects and configuration files for various servers. You won’t get this on individual servers, and probably not even on a dedicated automation server like an Ansible/Puppet server, because those are usually dedicated to a single client/project
quicker search in files and across the filesystem – I’m only learning how to use this properly in ST3, but there are plenty of ways to find something in a whole directory tree pretty quickly
convenient access to plugins – this is definitely one of my favourites. In many companies you just don’t get the same flexibility with installing software or plugins on servers. At best, you’re discouraged from installing unsigned software downloaded from the Internet (anything not originating from a signed software repo); at worst, you’re completely blocked – firewall rules or AWS security groups would need updating before you could download anything directly on the server. Uploading a plugin via scp is always an option, but many add-ons have dependencies which are a pain to resolve without a readily available Internet connection.
Quality Improvements in DevOps tasks
less risk of breaking syntax in some config – you’re editing comfortably, there are plugins checking and highlighting syntax, and there may be commit hooks and peer reviews in your process to minimise risk
automatically more global and generic thinking – because you’re probably working on automation (Ansible) rather than actual config files, your mindset and your approach to solving an issue become more global. I notice that even when creating a playbook for just one server or VM, I’m already building it with scalability in mind – no hardcoded hostnames or functions, all the necessary flags and host-specific variables. If time allows, I even implement the same software install/configuration for both Debian and RedHat families of systems – just in case.
better source code control – most of the things I write locally will be automation or documentation, meaning they almost definitely end up on github. There’s better history, better change tracking and a more flexible way of rolling things back – all the usual benefits of using git
definitely less temptation to mess around with any server directly – I have all the information, configs and templates locally in my git repo. Logging in remotely to a server to fix a problem becomes an extra step, if not a hassle – and I like it this way, because it means my solution will be that bit more robust by the time I get to deploy it
I blogged about Generic Colouriser (grc) last week, because I’m using it now to monitor syslog messages in my centralised RSyslog setup. I also mentioned that grc supports many standard commands in addition to parsing common types of log files.
Colorized ls Output
Many Linux distros and even macOS support colorized file listings with the ls command. Here’s how it usually looks:
Colorized ls Output with grc
Compare the example above to how grc colorizes the same list of files:
Obviously, the focus is on file permissions and ownership info.
I really like this – it must be of great use to those of us just getting familiar with file/directory permissions in Unix/Linux.
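If you want this colorizing by default, one common approach is wrapping commands with grc via shell aliases – a sketch of possible ~/.bashrc additions, assuming grc is installed and on your PATH:

```shell
# wrap common commands with grc for colorized output
# (trim the list to the commands you actually use)
alias ls='grc ls'
alias ping='grc ping'
alias tail='grc tail'
alias netstat='grc netstat'
```

Open a new shell (or source ~/.bashrc) and the colorized output becomes the default.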
One of my Raspberry Pi servers started attempting to obtain a DHCP IP address, ignoring its static IP address configuration.
Not sure why, but it appeared I’d be getting an extra DHCP address from the same network segment, in addition to the static IP the Raspberry Pi already had.
Normally I’d just disable the service, but since my home office network is fairly static, I figured I would just remove the DHCP package.
WARNING: do not follow my steps unless you’re in the same situation and pretty sure you’re using static IP addressing.
Double Check that You’re Using Static IP
Check your /etc/network/interfaces file – it should have something similar for your primary interface; for a wired network connection that’s eth0:
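The snippet from my system isn’t shown here, but a typical static eth0 stanza looks something like this (the address matches the ip a output below; the gateway and DNS values are placeholders for your own network):

```
auto eth0
iface eth0 inet static
    address 192.168.1.99
    netmask 255.255.255.0
    gateway 192.168.1.1
    dns-nameservers 192.168.1.1
```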
Also, run ip a and make sure you’re seeing this same IP among the active interfaces:
greys@s7:~ $ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether b8:27:ee:66:88:ff brd ff:ff:ff:ff:ff:ff
inet 192.168.1.99/24 brd 192.168.1.255 scope global eth0
valid_lft forever preferred_lft forever
Remove ISC DHCP Client
So I did this:
root@srv:~# apt-get remove isc-dhcp-client
Reading package lists… Done
Building dependency tree
Reading state information… Done
The following packages were automatically installed and are no longer required:
libdns-export1104 libisc-export1100
Use 'apt autoremove' to remove them.
The following additional packages will be installed:
dhcpcd5
Suggested packages:
dhcpcd-gtk
The following packages will be REMOVED:
isc-dhcp-client
The following NEW packages will be installed:
dhcpcd5
0 upgraded, 1 newly installed, 1 to remove and 207 not upgraded.
All cool? Not really. If you read carefully, you’ll notice that I removed isc-dhcp-client, but dhcpcd5 was automatically installed instead – and started making DHCP requests again.
Remove dhcpcd5
Next step then! Let’s remove dhcpcd5:
root@srv:~# apt-get remove dhcpcd5
Reading package lists… Done
Building dependency tree
Reading state information… Done
The following additional packages will be installed:
isc-dhcp-client
Suggested packages:
avahi-autoipd isc-dhcp-client-ddns
The following packages will be REMOVED:
dhcpcd5
The following NEW packages will be installed:
isc-dhcp-client
0 upgraded, 1 newly installed, 1 to remove and 207 not upgraded.
Much better!
Or is it? If you look closer, you’ll see that this command installed isc-dhcp-client back.
Delete both DHCP Client Packages
This time I specified both packages to be removed. I even used apt-get purge instead of apt-get remove – to make sure any config files are removed as well:
root@srv:~# apt-get purge dhcpcd5 isc-dhcp-client
Reading package lists… Done
Building dependency tree
Reading state information… Done
The following packages were automatically installed and are no longer required:
libdns-export1104 libisc-export1100
Use 'apt autoremove' to remove them.
The following additional packages will be installed:
pump
The following packages will be REMOVED:
dhcpcd5* isc-dhcp-client*
The following NEW packages will be installed:
pump
0 upgraded, 1 newly installed, 2 to remove and 207 not upgraded.
When this installed pump (apparently another BOOTP/DHCP client – I’d never even heard of it before), I got curious.
Having researched online, it appears one can configure a static IP on a Raspberry Pi using DHCP client configs. Doesn’t sound right to me! There’s also the systemd way of disabling the dhcpcd.service, but at this stage I wasn’t looking for half measures.
Having carefully considered this, I decided to uninstall the whole lot. This also removed the wicd* (Wired and Wireless Network Connection Manager) packages – another set of tools for managing network interfaces and connections.
I’m honestly surprised how involved network interface and IP address configuration is! Since I’m not using any of these niceties, and because this is a static server-like environment where I’m not switching Wi-Fi networks or changing connection profiles all the time, I’m comfortable letting it all go.
Uninstalling DHCP Clients, pump and Wicd
WARNING: be super sure you’re using static IP addressing on your Raspberry Pi system before running the next command.
Here’s the final uninstall command:
root@s7:~# apt-get remove dhcpcd5 isc-dhcp-client pump
Reading package lists… Done
Building dependency tree
Reading state information… Done
Package 'isc-dhcp-client' is not installed, so not removed
Package 'pump' is not installed, so not removed
The following packages were automatically installed and are no longer required:
libdns-export1104 libisc-export1100 openresolv python-wicd
Use 'apt autoremove' to remove them.
The following packages will be REMOVED:
dhcpcd5 wicd wicd-daemon wicd-gtk
0 upgraded, 0 newly installed, 4 to remove and 207 not upgraded.
FINALLY! No more DHCP requests from this server 🙂
PS: on a somewhat related note, I think I’ll upgrade all those 207 packages – but first I’ll complete a reboot to check that the network configuration still works with the static IP.
I noticed that this server hasn’t been terribly available since the upgrade to Ubuntu 19.04: more than once I went looking for the server and found it offline.
Now that I’m finally progressing with centralized RSyslog setup at home, I noticed that the UniFi controller server was reporting the following in logs recently:
So it appears that power management has improved enough to start putting this server to sleep every hour or so.
Since this is a recent enough version of Ubuntu, I figured there should be a way to disable power management using systemctl. Turns out, there is.
Confirm Sleep Status with systemd
IMPORTANT: I didn’t run this command on the server itself, so the output below is from another system: I’m running it on my XPS laptop with Ubuntu, just to show the expected output.
root@server:/ # sudo systemctl mask sleep.target suspend.target hibernate.target hybrid-sleep.target
Created symlink /etc/systemd/system/sleep.target → /dev/null.
Created symlink /etc/systemd/system/suspend.target → /dev/null.
Created symlink /etc/systemd/system/hibernate.target → /dev/null.
Created symlink /etc/systemd/system/hybrid-sleep.target → /dev/null.
root@server:/etc/pm/sleep.d#
This is obviously a very simple way of disabling power management, but I like it because it’s standard and logical enough – there’s no need to edit config files or create cronjobs to manually control sleep functionality.
The service is dead, no power management is happening and most importantly, my server has been up for 12 hours now.
greys@server:~$ systemctl status sleep.target
● sleep.target
Loaded: masked (Reason: Unit sleep.target is masked.)
Active: inactive (dead)
(Re)enabling Sleep in Ubuntu with systemctl
When the time comes and I’d like to re-enable power management and sleep/hibernation, this is the command I’ll run:
root@server:/ # sudo systemctl unmask sleep.target suspend.target hibernate.target hybrid-sleep.target
I have just started deploying my new VPS server online: vps1.unixtutorial.org
The idea is that I’ll be hosting a number of VPS servers to help you learn Linux basics. vps1 is Ubuntu; future VPS servers will probably be CentOS and Amazon Linux 2 (hosted on AWS). All of them will be accessible over SSH (probably on a non-standard port) with shell access.
vps1.unixtutorial.org specs
Location: Paris, Scaleway datacentre
OS version: Ubuntu Bionic 18.04.2 LTS
CPU: 2 vCPUs @ 2GHz
RAM: 2GB
Disk: 20GB SSD (16GB available)
Here’s how it looks:
Basic Unix Tutorial VPS rules
you get standard user access with Bash shell
no root access, but sudo can be provided for specific commands
no Internet access (you can log in and run local commands, but can’t download or upload anything from the VPS server); no port forwarding
zero-tolerance policy for hacking/exploit testing – if I notice anyone trying something like this, access will be revoked
limited resource usage – please don’t leave tasks running in background, etc.
Access to Unix Tutorial VPS servers
VPS access is completely FREE, but you need to become a Unix Tutorial supporter on Patreon or a member of my Unix Tutorial group on Facebook. Please get in touch with me either via the group or the Patreon page regarding your username setup:
preferred username (mine is greys)
your full name (will be visible to other users of the VPS server)
your email (so that I can email you the initial password)
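Once your account is set up, an ~/.ssh/config entry saves typing the full hostname and port every time – a sketch, with Port 2222 as a placeholder for whatever non-standard port ends up being announced:

```
# ~/.ssh/config - Port 2222 is a placeholder; User is your chosen username
Host vps1
    HostName vps1.unixtutorial.org
    Port 2222
    User greys
```

After that, ssh vps1 is enough to connect.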
Getting Help and Learning More
If you need guidance or mentorship with learning Linux, I provide some of this at the Active Learner membership level. If you want even more support or need a specific lab setup for learning a particular skill – this is possible with the Club Member level on Patreon.