I use Elasticsearch to get a visualization of OSSEC alerts across all of my personal Virtual Private Servers, and I noticed a string of invalid/failed ssh authentications against one of my VPS hosts spanning a few hours. The attacker was staying under the radar by trying a different username every ten or so minutes.
So I wrote a quick frequency rule in OSSEC to trigger an active response:
<rule id="100011" level="10" timeframe="3600" frequency="1"> <if_matched_sid>5710</if_matched_sid> <same_source_ip /> <description>Multiple ssh auths as non-existent user.</description> <group>authentication_failures,</group> </rule>
The above will throw a level 10 alert if the same IP address attempts to log in as an invalid user via ssh three or more times within a window of one hour.
Don’t ask why OSSEC interprets the frequency value as value+2 (so frequency="1" here means the third match triggers the rule). It is documented behavior.
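For reference, the active response itself is wired up in ossec.conf. What follows is only a minimal sketch of binding a rule like this to the stock firewall-drop script; the rules_id binding and the 600-second timeout are illustrative choices, not something you have to copy verbatim:

<!-- ossec.conf (manager side): sketch of pairing rule 100011 with the stock
     firewall-drop active response. The command definition usually already
     exists in the shipped config; the timeout here is arbitrary. -->
<command>
  <name>firewall-drop</name>
  <executable>firewall-drop.sh</executable>
  <expect>srcip</expect>
  <timeout_allowed>yes</timeout_allowed>
</command>

<active-response>
  <command>firewall-drop</command>
  <location>local</location>
  <rules_id>100011</rules_id>
  <!-- drop traffic from the offending srcip for ten minutes -->
  <timeout>600</timeout>
</active-response>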
The problem is that when I tested the rule, it was not firing.
It turns out the OSSEC decoder for ssh-invalid-user is itself invalid.
The stock decoder in 2.9.1 and 2.9.2 (latest as of right now) is:
<decoder name="ssh-invalid-user">
  <parent>sshd</parent>
  <prematch>^Invalid user|^Illegal user</prematch>
  <regex offset="after_prematch"> from (\S+)$</regex>
  <order>srcip</order>
</decoder>
A sample sshd log entry (OpenSSH sshd 7.4p1) that I want the rule to fire on is:
Oct 25 08:00:57 hostname sshd[1234]: Invalid user admin from 1.2.3.4 port 1234
The problem is rooted in how OSSEC constructs decoders. The initial prematch regex that classifies the log entry under the ssh-invalid-user decoder matches “Invalid user”, but the ‘post-match’ regex (offset="after_prematch") then fails because it is left to tokenize only the remainder of the line:
admin from 1.2.3.4 port 1234
Note that there is leading whitespace in the above remainder string.
The post-match regex fails to tokenize this remainder for two reasons (a quick illustration follows the list):
- The invalid username ‘admin’ is never matched: the “ from (\S+)$” regex expects only a single leading space before the ‘from’ string and makes no provision for the username token.
- The “ from (\S+)$” regex is anchored to the end of the string, which conflicts with the trailing ‘port’ keyword as well as the actual numeric port value.
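To see the mismatch concretely, here is a rough stand-in using Python’s PCRE-style re module. OSSEC uses its own os_regex engine rather than PCRE, so this is only an approximation of the behavior; the string and patterns below simply mirror the decoder regexes above:

import re

# The text handed to the after_prematch regex (note the leading space).
remainder = " admin from 1.2.3.4 port 1234"

# Rough PCRE equivalents of the stock and corrected decoder regexes.
stock_regex = r" from (\S+)$"                    # 2.9.x stock decoder
fixed_regex = r" (\w+) from (\S+) port (\S+)$"   # corrected decoder

# The stock pattern cannot match: (\S+)$ would have to end the string,
# but " port 1234" still follows the IP address.
print(re.search(stock_regex, remainder))             # -> None

# The corrected pattern captures the user, srcip, and srcport tokens.
print(re.search(fixed_regex, remainder).groups())    # -> ('admin', '1.2.3.4', '1234')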
To resolve this, the ssh-invalid-user decoder needs to be corrected. The following is what I used:
<decoder name="ssh-invalid-user">
  <parent>sshd</parent>
  <prematch>^Invalid user|^Illegal user</prematch>
  <regex offset="after_prematch"> (\w+) from (\S+) port (\S+)$</regex>
  <order>user,srcip,srcport</order>
</decoder>
I expanded the after_prematch regex to match the format <username> from <ip> port <port>, and tokenized the post-match string so that it also captures the (invalid) username and source port values, along with the IP address.
This resolved the issue, as shown by the ossec-logtest output:
# Output of third invalid user log entry
Oct 25 08:00:57 hostname sshd[1234]: Invalid user admin from 1.2.3.4 port 1234

**Phase 1: Completed pre-decoding.
       full event: 'Oct 25 08:00:57 hostname sshd[1234]: Invalid user admin from 1.2.3.4 port 1234'
       hostname: 'hostname'
       program_name: 'sshd'
       log: 'Invalid user admin from 1.2.3.4 port 1234'

**Phase 2: Completed decoding.
       decoder: 'sshd'
       dstuser: 'admin'
       srcip: '1.2.3.4'
       srcport: '1234'

**Phase 3: Completed filtering (rules).
       Rule id: '100011'
       Level: '10'
       Description: 'Multiple ssh auths as non-existent user.'
**Alert to be generated.
Note that this affected my instance of sshd (OpenSSH 7.4p1), and it is likely that most sshd logs use a similar “Invalid user” log format. So if you use OSSEC, give the fixed decoder a try.
jeremy
Are you masking port 22 or actually using 1234? Allowing 22 or 1234 to all?
If I have a need for direct access (sshd) I use dynDNS and update iptables every x mins, but I always try to connect to home or the DC and reach my gear via the VPN. Everything I run is hub and spoke; even traffic to the VPS passes over the IPsec tunnels and Palo Alto units to filter traffic. Overkill for most.
ocabj
The 1234 in the logs is the port of the originating host, not my VPS. I still run sshd on the default port 22. I used to restrict sshd access on all my VPSes to the VPN only, but I stopped doing that. Since I use key auth only (no passwords accepted) in conjunction with MFA, I found it easier to remote into certain VPSes without having to go through a VPN.
jeremy
good stuff, keys are best!