Ban Specific IP Manually with fail2ban

fail2ban

Now that I’m monitoring my logs using centralised RSyslog, I regularly notice SSH attacks as they happen. When it becomes obvious that someone’s trying to brute-force SSH, I don’t always wait for fail2ban to catch it – sometimes I ban the offending IP myself.

How To Ban Specific IP with fail2ban

Assuming a standard install, we’ll use the fail2ban-client command to tell the sshd jail to ban a specific IP.

Here’s how it works:

root@s1:/etc/fail2ban # fail2ban-client -vvv set sshd banip 202.70.66.228
30 7F0B121F6640 fail2ban.configreader     INFO  Loading configs for fail2ban under /etc/fail2ban
30 7F0B121F6640 fail2ban.configreader     DEBUG Reading configs for fail2ban under /etc/fail2ban
31 7F0B121F6640 fail2ban.configreader     DEBUG Reading config files: /etc/fail2ban/fail2ban.conf
31 7F0B121F6640 fail2ban.configparserinc  INFO    Loading files: ['/etc/fail2ban/fail2ban.conf']
31 7F0B121F6640 fail2ban.configparserinc  TRACE     Reading file: /etc/fail2ban/fail2ban.conf
31 7F0B121F6640 fail2ban.configparserinc  INFO    Loading files: ['/etc/fail2ban/fail2ban.conf']
31 7F0B121F6640 fail2ban.configparserinc  TRACE     Shared file: /etc/fail2ban/fail2ban.conf
32 7F0B121F6640 fail2ban                  INFO  Using socket file /var/run/fail2ban/fail2ban.sock
32 7F0B121F6640 fail2ban                  INFO  Using pid file /var/run/fail2ban/fail2ban.pid, [INFO] logging to SYSLOG
32 7F0B121F6640 fail2ban                  HEAVY CMD: ['set', 'sshd', 'banip', '202.70.66.228']
48 7F0B121F6640 fail2ban                  HEAVY OK : 1
48 7F0B121F6640 fail2ban.beautifier       HEAVY Beautify 1 with ['set', 'sshd', 'banip', '202.70.66.228']
1
48 7F0B121F6640 fail2ban                  DEBUG Exit with code 0 

Once you become comfortable, you can omit the -vvv option and skip all this verbose output:

root@s1:/etc/fail2ban # fail2ban-client set sshd banip 202.70.66.229
1
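The same client can also show what’s currently banned and reverse a ban. Here’s a sketch, assuming a standard install with the sshd jail enabled (the IP is the one from the example above); it’s guarded so it’s a no-op on systems without fail2ban:

```shell
# Inspect the sshd jail and unban an IP if needed
JAIL=sshd
IP=202.70.66.228
if command -v fail2ban-client >/dev/null 2>&1; then
  fail2ban-client status "$JAIL"            # shows failure/ban counters and the banned IP list
  fail2ban-client set "$JAIL" unbanip "$IP" # lift the ban
else
  echo "fail2ban-client not found; would run: fail2ban-client set $JAIL unbanip $IP"
fi
```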

That’s it for today! Have fun!

See Also




Unix Tutorial Digest – March 9th, 2020

Unix Tutorial Digest

Here’s the monthly summary of Unix/Linux news and Unix Tutorial posts.

Please get in touch to arrange a technical consultation with me or suggest a useful link for the next digest here at Unix Tutorial.

Unix Tutorial News

Nothing particularly new, but I found my blogging stride and enjoyed a very productive month. Lots of Ansible automation progress and a rework of my home office Raspberry Pi setup were my main focus.

Unix Tutorial in Russian is now a completely separate website; I’m going to find more time to add translations to it – it’s been growing nicely.

Unix and Linux News

The New Releases section will get a few additions based on February 2020 releases:

Software News

Interesting and Useful

Unix Tutorial articles

Full list of posts published on Unix Tutorial in February 2020, 29 posts in total:

That’s it for the month of February 2020!

See Also




git push Asks for Username and Password

I’ve been refreshing my gleb.reys.net website recently and hit a weird issue: pushing the latest changes to GitHub resulted in a prompt for my username and password. I figured I should write down what the issue was and how easy it was to fix.

git Repo Asks for Username/Password

greys@mcfly:~/proj/gleb.reys.net $ git push
Username for 'https://github.com': greys
Password for 'https://greys@github.com':
remote: Invalid username or password.
fatal: Authentication failed for 'https://github.com/greys/greys.github.io/'
greys@mcfly:~/proj/gleb.reys.net $ git remote
origin
greys@mcfly:~/proj/gleb.reys.net $ git remote show
origin
greys@mcfly:~/proj/gleb.reys.net $ git remote -v
origin https://github.com/greys/greys.github.io (fetch)
origin https://github.com/greys/greys.github.io (push)

At first I couldn’t see anything wrong – but then it clicked: the origin remote was using HTTPS instead of SSH, so git couldn’t use my SSH key and kept prompting for credentials instead.

Update The Origin in git Repository

greys@mcfly:~/proj/gleb.reys.net $ git remote rm origin
greys@mcfly:~/proj/gleb.reys.net $ git remote -v

greys@mcfly:~/proj/gleb.reys.net $ git remote add origin git@github.com:greys/greys.github.io.git
greys@mcfly:~/proj/gleb.reys.net $ git remote -v
origin git@github.com:greys/greys.github.io.git (fetch)
origin git@github.com:greys/greys.github.io.git (push)
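Removing and re-adding origin works, but the same switch can be done in one step with git remote set-url. Here’s a sketch demonstrating it in a throwaway repository, using the same URLs as above:

```shell
# Demonstrate switching origin from HTTPS to SSH in a scratch repo
tmp=$(mktemp -d)
cd "$tmp"
git init -q demo
cd demo
git remote add origin https://github.com/greys/greys.github.io
# one-step equivalent of 'git remote rm origin' + 'git remote add origin ...':
git remote set-url origin git@github.com:greys/greys.github.io.git
git remote get-url origin
```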

Push (with set-upstream)

greys@mcfly:~/proj/gleb.reys.net $ git push
fatal: The current branch master has no upstream branch.
To push the current branch and set the remote as upstream, use

git push --set-upstream origin master

greys@mcfly:~/proj/gleb.reys.net $ git push --set-upstream origin master
Enumerating objects: 7, done.
Counting objects: 100% (7/7), done.
Delta compression using up to 16 threads
Compressing objects: 100% (4/4), done.
Writing objects: 100% (4/4), 568 bytes | 568.00 KiB/s, done.
Total 4 (delta 3), reused 0 (delta 0)
remote: Resolving deltas: 100% (3/3), completed with 3 local objects.
To github.com:greys/greys.github.io.git
018a8c0..e4f79d4 master -> master
Branch 'master' set up to track remote branch 'master' from 'origin'.

That’s it! I’m using Netlify for automatic build and hosting of my Jekyll website, so a minute or two after the git push shown above the website got refreshed to the latest version. Nice!

See Also




Force Filesystem Repair in Raspbian

Raspberry Pi

Raspberry Pi systems use microSD cards for storage and are therefore more prone to filesystem corruption than typical servers with hard disks or SSDs. Such corruption is especially tricky when the only storage available to the Raspberry Pi is the microSD card that booted Raspbian OS.

How To Force Filesystem Check

The best approach is to update the /boot/cmdline.txt file to force a filesystem repair on the next boot.

Change the file from this:

greys@becky:~ $ cat /boot/cmdline.txt
dwc_otg.lpm_enable=0 console=serial0,115200 console=tty1 root=/dev/mmcblk0p7 rootfstype=ext4 elevator=deadline fsck.repair=yes rootwait plymouth.enable=0

to this:

greys@becky:~ $ cat /boot/cmdline.txt
dwc_otg.lpm_enable=0 console=serial0,115200 console=tty1 root=/dev/mmcblk0p7 rootfstype=ext4 elevator=deadline fsck.repair=yes fsck.mode=force rootwait plymouth.enable=0
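The edit can also be scripted with sed. This sketch rehearses the change on a temporary copy of the line – drop the copy and point sed at /boot/cmdline.txt (as root) to apply it for real:

```shell
# Rehearse the cmdline.txt edit on a temp file first
f=$(mktemp)
printf '%s\n' 'dwc_otg.lpm_enable=0 console=tty1 root=/dev/mmcblk0p7 rootfstype=ext4 elevator=deadline fsck.repair=yes rootwait plymouth.enable=0' > "$f"
# insert fsck.mode=force right after fsck.repair=yes
sed -i 's/fsck\.repair=yes/fsck.repair=yes fsck.mode=force/' "$f"
cat "$f"
```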

And then just reboot:

greys@becky:~ $ sudo shutdown -r now

See Also




Why I Bought Sublime Text 3

Registered Copy of Sublime Text 3

It’s taken me exactly two weeks to recognise the value of using a native editor like Sublime Text 3 for Ansible automation and Python scripting. I’m so impressed with the benefits of this approach that, after another gentle reminder to perhaps make the purchase already, I decided to hit the button.

WARNING: this is going to be a summary of my approach and view of local text editing on MacBook. HOWEVER, I just swiped something incorrectly and WordPress lost the whole post I’ve been typing up for about an hour. So I’ll have to revisit and fill in the blanks later.

My Background with Text Editors

Why Not Just Use Editor in Remote Session?

Can’t I use gVim or neoVim Instead?

Great Reasons to Use Native Editor

I’m listing things I enjoy specifically in Sublime Text 3 when editing Ansible playbooks on my MacBook, but many modern editors bring similar improvements compared (perhaps unfairly) to an out-of-the-box vim session on a typical Linux instance.

Functional Benefits of Editing Locally

  • better view of file navigation – Sublime Text 3 has folder view on macOS and even highlights git status for all the files
  • easier editing of multiple files with tabs – I like most apps with tabs in their interface, Sublime Text 3 definitely feels right with their approach. Tabs are easy to create and navigate, files are quick enough to be found using fuzzy search in filenames.
  • great syntax highlighting, navigation and indentation – Sublime Text 3 shows a big preview of your whole file in the right section of the window, gives you line numbers and shows how many spaces/tabs you have in each line – making it much easier to keep indentation consistent
  • bigger, more flexible access to source code and repos – your workstation can contain multiple projects and configuration files for various servers – you won’t get this on individual servers and probably won’t get it even on a dedicated automation server like Ansible/Puppet server – because they’re usually dedicated to a client/project
  • Quicker search in file and filesystem – I’m only learning how to use this properly in ST3, but there are plenty of ways to find something in a whole tree of directories pretty quickly
  • Convenient access to plugins – this is definitely one of my favourites. In many companies you just don’t get the same flexibility to install software or plugins on servers. At best you’re discouraged from installing software that doesn’t originate from a signed software repo; at worst you’re completely blocked – firewall rules or AWS security groups need to be updated before you can download anything directly on the server. Uploading a plugin via scp is always an option, but many add-ons have dependencies which are a pain to resolve without a readily available Internet connection.

Quality Improvements in DevOps tasks

  • less risk to break syntax in some config – you’re comfortably editing, there are plugins checking and highlighting syntax, there may be commit hooks and peer reviews in your process to minimize risk
  • automatically more global and generic thinking – because you’re probably working on automation (Ansible) rather than actual config files, your mindset and your approach to solving an issue become more global. I notice that even when creating a playbook for just one server or VM, I’m already building it with scalability in mind – no hardcoded hostnames or functions, all the necessary flags and host-specific variables. If time allows, I even implement the same software install/configuration for both the Debian and RedHat families of systems – just in case.
  • better source code control – most of the things I write locally are automation or documentation, meaning they almost certainly end up on GitHub. There’s better history, better change tracking and a more flexible way of rolling things back – all the usual benefits of using git
  • definitely less temptation to mess around with any server directly – I have all the information, all the configs and templates here locally in my git repo. Logging in remotely to some server to fix a problem becomes an extra step if not a hassle – and I like it this way, because it means my solution will be that bit more robust by the time I get to deploy it

See Also




Log fail2ban Messages to Syslog

fail2ban logging into syslog

With quite a few servers accepting SSH connections and protecting themselves using fail2ban, you very quickly recognize one thing: it makes a lot of sense to centralize fail2ban reporting using syslog.

To update fail2ban logging, you need to edit the /etc/fail2ban/fail2ban.conf file and replace this:

logtarget = /var/log/fail2ban.log

with this:

logtarget = SYSLOG
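On a fleet of servers this one-line change is easy to script. This sketch rehearses the substitution on a temporary copy of the config – point sed at /etc/fail2ban/fail2ban.conf (as root) to apply it for real:

```shell
# Rehearse the logtarget change on a temp copy of the config
f=$(mktemp)
echo 'logtarget = /var/log/fail2ban.log' > "$f"
# replace whatever logtarget is currently set to with SYSLOG
sed -i 's|^logtarget *=.*|logtarget = SYSLOG|' "$f"
cat "$f"
```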

Here’s how my section looks when I’m editing a file with vim:

Switching fail2ban log target to SYSLOG

Reload the fail2ban service and enjoy:

root@s7:/var/log # systemctl reload fail2ban

See Also




Colorized ls with grc

Colorized output of ls command

I blogged about Generic Colouriser (grc) last week, because I’m now using it to monitor syslog messages in my centralised RSyslog setup. I also mentioned that grc supports many standard commands in addition to parsing common types of log files.

Colorized ls Output

Many Linux distros and even macOS support colorized file listings with the ls command. Here’s how it usually looks:

Colorized ls Output with grc

Compare above example to how grc colorizes the same list of files:

Colorized ls with grc

The focus, obviously, is on file permissions and ownership info.

I really like this – it must be of great use to those of us just getting familiar with file/directory permissions in Unix/Linux.
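To make this the default, an alias in ~/.bashrc does the trick. A sketch, guarded so shells without grc keep plain ls:

```shell
# Wrap ls with grc when it's available
if command -v grc >/dev/null 2>&1; then
  alias ls='grc ls'
  # the same pattern works for other commands grc understands, e.g.:
  # alias ping='grc ping'
  # alias netstat='grc netstat'
fi
```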

Have fun!

See Also




Removed DHCP Daemons in Raspbian

Raspberry Pi 4

One of my Raspberry Pi servers started attempting to obtain a DHCP IP address, ignoring its static IP configuration.

Not sure why, but it appeared I’d be getting an extra DHCP address, from the same network segment, in addition to the static IP the Raspberry Pi already had.

Normally I’d just disable the service, but since my home office network is fairly static, I figured I would just remove the DHCP package.

WARNING: do not follow my steps unless you’re in the same situation and pretty sure you’re using static IP addressing.

Double Check that You’re Using Static IP

Check your /etc/network/interfaces file, it should have something similar for your primary interface – in wired network cable case it’s eth0:

auto eth0
iface eth0 inet static
    address 192.168.1.99
    netmask 255.255.255.0
    gateway 192.168.1.1

Also, run ip a and make sure you’re seeing this same IP among the active interfaces:

greys@s7:~ $ ip a
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0:  mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether b8:27:ee:66:88:ff brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.99/24 brd 192.168.1.255 scope global eth0
       valid_lft forever preferred_lft forever

Remove ISC DHCP Client

So I did this:

root@srv:~# apt-get remove isc-dhcp-client
 Reading package lists… Done
 Building dependency tree
 Reading state information… Done
 The following packages were automatically installed and are no longer required:
   libdns-export1104 libisc-export1100
 Use 'apt autoremove' to remove them.
 The following additional packages will be installed:
   dhcpcd5
 Suggested packages:
   dhcpcd-gtk
 The following packages will be REMOVED:
   isc-dhcp-client
 The following NEW packages will be installed:
   dhcpcd5
 0 upgraded, 1 newly installed, 1 to remove and 207 not upgraded.

All cool? Not really. If you read carefully, you’ll notice that I removed isc-dhcp-client, but it automatically installed dhcpcd5 – which started making DHCP requests again.

Remove DHCPcD5

Next step then! Let’s remove DHCPcD5:

root@srv:~# apt-get remove dhcpcd5
 Reading package lists… Done
 Building dependency tree
 Reading state information… Done
 The following additional packages will be installed:
   isc-dhcp-client
 Suggested packages:
   avahi-autoipd isc-dhcp-client-ddns
 The following packages will be REMOVED:
   dhcpcd5
 The following NEW packages will be installed:
   isc-dhcp-client
 0 upgraded, 1 newly installed, 1 to remove and 207 not upgraded.

Much better!

Or is it? If you look closer, you’ll see that this command installed isc-dhcp-client back.

Delete both DHCP Client Packages

This time I specified both packages to be removed. I even used apt-get purge instead of apt-get remove – to definitely destroy any configs:

root@srv:~# apt-get purge dhcpcd5 isc-dhcp-client
 Reading package lists… Done
 Building dependency tree
 Reading state information… Done
 The following packages were automatically installed and are no longer required:
   libdns-export1104 libisc-export1100
 Use 'apt autoremove' to remove them.
 The following additional packages will be installed:
   pump
 The following packages will be REMOVED:
   dhcpcd5* isc-dhcp-client*
 The following NEW packages will be installed:
   pump
 0 upgraded, 1 newly installed, 2 to remove and 207 not upgraded.

When this installed pump (apparently another BOOTP/DHCP client – I had never even heard of it before), I got curious.

After some online research, it appears one can configure a static IP on a Raspberry Pi using the DHCP client’s config. Doesn’t sound right to me! There’s also the systemd way of disabling dhcpcd.service, but at this stage I was not looking for half measures.
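For reference, the half-measure systemd route would look roughly like this. A sketch: on Raspbian the unit is dhcpcd.service, it needs root, and it’s guarded so it’s a no-op on systems without systemctl:

```shell
# Stop dhcpcd and keep it from starting at boot, without removing any packages
SVC=dhcpcd.service
if command -v systemctl >/dev/null 2>&1; then
  sudo systemctl disable --now "$SVC" || echo "could not disable $SVC (root required?)"
fi
```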

Having carefully considered this, I decided to uninstall the whole lot. This also removed the wicd* (Wired and Wireless Network Connection Manager) bunch – another set of packages for managing network interfaces and connections.

I’m honestly surprised at how involved network interface and IP address configuration is! Since I’m not using any of these niceties, and because this is a static server-like environment where I’m not switching Wi-Fi networks or changing connection profiles all the time, I’m comfortable letting it all go.

Uninstalling DHCP Clients, pump and Wicd

WARNING: be super sure you’re using static IP addressing on your Raspberry Pi system before running the next command.

Here’s the final uninstall command:

root@s7:~# apt-get remove dhcpcd5 isc-dhcp-client pump
Reading package lists… Done
Building dependency tree
Reading state information… Done
Package 'isc-dhcp-client' is not installed, so not removed
Package 'pump' is not installed, so not removed
The following packages were automatically installed and are no longer required:
  libdns-export1104 libisc-export1100 openresolv python-wicd
Use 'apt autoremove' to remove them.
The following packages will be REMOVED:
  dhcpcd5 wicd wicd-daemon wicd-gtk
0 upgraded, 0 newly installed, 4 to remove and 207 not upgraded.

FINALLY! No more DHCP requests from this server 🙂

PS: on a somewhat related note, I think I’ll upgrade all those 207 packages – but first I’ll complete a reboot to check that network configuration still works with the static IP.

See Also




How To: Disable Sleep on Ubuntu Server

Ubuntu 19.10

You may remember that I have a small automation server in my home office that’s running Ubiquiti UniFi Controller software and where I upgraded UniFi Controller on Ubuntu 19.04.

I noticed that this server hasn’t been terribly available since upgrade to Ubuntu 19.04: more than once I went looking for the server and it was offline.

Now that I’m finally progressing with my centralized RSyslog setup at home, I noticed that the UniFi controller server has recently been logging sleep-related messages.

So, it appears the power management has improved enough to start bringing this server to sleep every hour or so.

Since this is a recent enough version of Ubuntu, I figured there should be a way to disable power management using systemctl. Turns out, there is.

Confirm Sleep Status with systemd

IMPORTANT: I didn’t run this command on the server, so this is an example from another system: I’m running it on my XPS laptop with Ubuntu, just to show the expected output.

As you can see, my laptop rests well and often:

greys@xps:~ $ systemctl status sleep.target
 ● sleep.target - Sleep
    Loaded: loaded (/lib/systemd/system/sleep.target; static; vendor preset: enabled)
    Active: inactive (dead)
      Docs: man:systemd.special(7)
 Feb 24 13:18:08 xps systemd[1]: Reached target Sleep.
 Feb 26 13:29:31 xps systemd[1]: Stopped target Sleep.
 Feb 26 13:29:57 xps systemd[1]: Reached target Sleep.
 Feb 26 13:30:19 xps systemd[1]: Stopped target Sleep.

Disable Sleep in Ubuntu with systemd

This is what I did on my server:

root@server:/ # sudo systemctl mask sleep.target suspend.target hibernate.target hybrid-sleep.target
 Created symlink /etc/systemd/system/sleep.target → /dev/null.
 Created symlink /etc/systemd/system/suspend.target → /dev/null.
 Created symlink /etc/systemd/system/hibernate.target → /dev/null.
 Created symlink /etc/systemd/system/hybrid-sleep.target → /dev/null.
 root@server:/etc/pm/sleep.d#

This is obviously a very simple way of disabling power management, but I like it because it’s standard and logical enough – there’s no need to edit config files or create cronjobs manually controlling sleep functionality.

The service is dead, no power management is happening and most importantly, my server has been up for 12 hours now.

greys@server:~$ systemctl status sleep.target
● sleep.target
   Loaded: masked (Reason: Unit sleep.target is masked.)
   Active: inactive (dead)
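To double-check that all four targets stayed masked after a reboot, systemctl can list them in one go. A sketch, guarded so it’s a no-op on systems without systemd:

```shell
# List the unit-file state of all sleep-related targets; each should show "masked"
TARGETS="sleep.target suspend.target hibernate.target hybrid-sleep.target"
if command -v systemctl >/dev/null 2>&1; then
  # shellcheck disable=SC2086 -- word-splitting of $TARGETS is intended here
  systemctl list-unit-files $TARGETS || true
fi
```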

(re) Enabling Sleep in Ubuntu with systemctl

When the time comes to re-enable power management and sleep/hibernation, this is the command I’ll run:

root@server:/etc/pm/sleep.d# sudo systemctl unmask sleep.target suspend.target hibernate.target hybrid-sleep.target

That’s all for now. Have a great day!

See Also




VPS1 Server is Online

I have just started deploying my new VPS server online: vps1.unixtutorial.org

The idea is that I’ll be hosting a number of VPS servers for you to learn Linux basics on. vps1 is Ubuntu; future VPS servers will probably be CentOS and Amazon Linux 2 (hosted on AWS). All of them will be accessible over SSH (probably on a non-standard port) with shell access.

vps1.unixtutorial.org specs

  • Location: Paris, Scaleway datacentre
  • OS version: Ubuntu Bionic 18.04.2 LTS
  • CPU: 2 vCPUs @ 2GHz
  • RAM: 2GB
  • Disk: 20GB SSD (16GB available)

Here’s how it looks:

SSH login to vps1 server

Basic Unix Tutorial VPS rules

  • you get standard user access with Bash shell
  • no root access, but sudo can be provided for specific commands
  • no Internet access (you can log in and run local commands, but can’t download or upload anything from the VPS server); no port forwarding
  • zero tolerance policy for hacking/exploit testing – if I notice any of you trying something like this, access will be revoked
  • limited resource usage – please don’t leave tasks running in background, etc.

Access to Unix Tutorial VPS servers

VPS access is completely FREE, but you need to become a Patreon supporter of Unix Tutorial or a member of my Unix Tutorial group on Facebook. Please get in touch with me via the group or the Patreon page to arrange your username setup:

  • preferred username (mine is greys)
  • your full name (will be visible to other users of the VPS server)
  • your email (so that I can email you the initial password)

Getting Help and Learning More

If you need guidance or mentorship with learning Linux, I provide some of this on the Active Learner membership level. If you want even more support or need a specific lab setup for learning a particular skill – this is possible with the Club Member level on Patreon.

See Also