Intrusion Detection 101: Rinse and Repeat

Erik Hjelmvik

I am a long-time skeptic when it comes to blacklists and other forms of signature-based detection mechanisms. The information security industry has also declared the signature-based anti-virus approach dead several times during the past 10 years. Yet we still rely on anti-virus signatures, IDS rules, IP blacklists, malware domain lists, YARA rules, etc. to detect malware infections and other forms of intrusions in our networks.

What can I say; the world is truly upside down...

Know your Network

In this blog post, first published on www.netresec.com, I'd like to briefly describe an effective, blacklist-free approach for detecting malware and intrusions just by analyzing network traffic. My approach relies on a combination of whitelisting and common-sense anomaly detection (i.e. not the academic statistical anomaly detection algorithms that never seem to work in practice). I also encourage CERT/CSIRT/SOC/SecOps units to practice Sun Tzu's old ”know yourself”, or rather ”know your systems and networks”, approach.

The outdated approach of signature-based detection puts a high administrative burden on IT and security operations today, since all the signature databases must be kept up to date: endpoint AV signatures as well as IDS rules and other signature-based detection methods and threat feeds. Many organizations probably spend more time and money on updating all these blacklists and signature databases than on actually investigating the security alerts these detection systems generate.

My method doesn't rely on any dark magic; it is actually just a simple Rinse-Repeat approach built on the following steps:

  1. Look at the network traffic
  2. Define what's normal (whitelist)
  3. Remove the normal
  4. Go back to number 1...Rinse and Repeat

After looping through these steps a few times you'll be left with some odd network traffic, which will have a high ratio of maliciousness. The key here is, of course, to know what traffic to classify as ”normal”. This is where ”know your systems and networks” comes in.
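
To make the loop concrete, here is a minimal Python sketch of it. The flow-record format and the predicate names are illustrative assumptions on my part, not part of the original method:

# A minimal sketch of the Rinse-Repeat loop. It assumes the traffic has
# already been reduced to flow records, e.g. dicts such as:
#   {"src": "10.0.0.5", "dst": "93.184.216.34", "proto": "tcp", "dport": 443}
# The whitelist predicates are placeholders you define for your own network.

def rinse_repeat(flows, whitelist_predicates):
    """Repeatedly remove whatever a 'normal' predicate matches."""
    remaining = list(flows)
    for is_normal in whitelist_predicates:
        remaining = [f for f in remaining if not is_normal(f)]
        print(f"{len(remaining)} flows left after removing {is_normal.__name__}")
    return remaining  # the odd traffic that deserves a closer look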



What Traffic is Normal?

I recently realized that Mike Poor seems to be thinking along the same lines when I read his foreword to Chris Sanders and Jason Smith's Applied NSM:

The next time you are at your console, review some logs. You might think... "I don't know what to look for". Start with what you know, understand, and don't care about. Discard those. Everything else is of interest. 

Applied NSM

Following Mike's advice, we might, for example, define “normal” traffic as follows (sketched in code after the list):

  • HTTP(S) traffic to popular web servers on the Internet on standard ports (TCP 80 and 443).
  • SMB traffic between client networks and file servers.
  • DNS queries from clients to your name server on UDP 53, where the server successfully answers with an A, AAAA, CNAME, MX, NS or SOA record.
  • ...any other traffic which is normal in your organization. 
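
In code, those bullets might become whitelist predicates like the following sketch; the three server sets are hypothetical placeholders that have to be filled in for your own environment:

# Hedged examples of 'normal' predicates matching the bullets above.
POPULAR_WEB_SERVERS = set()  # e.g. IP addresses resolved from well-known domains
FILE_SERVERS = set()         # your own SMB file servers
NAME_SERVERS = set()         # your own DNS resolvers

def normal_web(flow):
    return (flow["proto"] == "tcp" and flow["dport"] in (80, 443)
            and flow["dst"] in POPULAR_WEB_SERVERS)

def normal_smb(flow):
    return (flow["proto"] == "tcp" and flow["dport"] == 445
            and flow["dst"] in FILE_SERVERS)

def normal_dns(flow):
    return (flow["proto"] == "udp" and flow["dport"] == 53
            and flow["dst"] in NAME_SERVERS)

# Usage with the loop sketched earlier:
# odd_traffic = rinse_repeat(flows, [normal_web, normal_smb, normal_dns])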

Whitelisting IP ranges belonging to Google, Facebook, Microsoft and Akamai as ”popular web servers” will reduce the dataset a great deal, but that's far from enough. One approach we use is to perform DNS whitelisting by classifying all servers with a domain name listed in Alexa's Top 1 Million list as ”popular”.
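
A small sketch of that DNS whitelisting step, assuming the standard top-1m.csv format of one "rank,domain" pair per line:

# Classify a server as 'popular' if its domain name, or any parent domain
# of it, appears in Alexa's Top 1 Million list (top-1m.csv: "rank,domain").

def load_top_domains(path="top-1m.csv"):
    domains = set()
    with open(path) as f:
        for line in f:
            rank, _, domain = line.strip().partition(",")
            if domain:
                domains.add(domain.lower())
    return domains

def is_popular(hostname, top_domains):
    labels = hostname.lower().rstrip(".").split(".")
    # Check www.example.com, then example.com (but not the bare TLD).
    return any(".".join(labels[i:]) in top_domains
               for i in range(len(labels) - 1))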

You might argue that such a method just replaces the old blacklist-updating problem with a new whitelist-updating problem. Well yes, you are right to some extent, but the good part is that the whitelist changes very little over time compared to a blacklist, so it doesn't need to be updated very often. Another great benefit is that the whitelist/rinse-repeat approach also enables detection of 0-day exploits and C2 traffic of unknown malware, since we aren't looking for known badness – just odd traffic.

Hunting with Rinse-Repeat

Mike Poor isn't the only well-merited incident handler who seems to have adopted a strategy similar to the Rinse-Repeat method; Richard Bejtlich (formerly of the US Air Force CERT and GE-CIRT) reveals some valuable insight in his book “The Practice of Network Security Monitoring”:

I often use Argus with Racluster to quickly search a large collection of session data via the command line, especially for unexpected entries. Rather than searching for specific data, I tell Argus what to omit, and then I review what’s left.
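
For illustration, here is a hedged sketch of driving that workflow from Python; the archive name, the aggregation fields and the omission filter are all assumptions on my part, so check them against your own Argus setup:

# Hedged sketch: filter Argus session data by omission, assuming the Argus
# client tools are installed and an archive file named argus.out exists.
import subprocess

# Tell racluster what to OMIT, then review what's left: aggregate the
# remaining records by source address, destination address and dest port.
result = subprocess.run(
    ["racluster", "-r", "argus.out",
     "-m", "saddr", "daddr", "dport",
     "-", "not (port 80 or port 443 or port 53)"],
    capture_output=True, text=True, check=True)
print(result.stdout)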

In his book Richard also mentions that he uses a similar methodology when going on “hunting trips” (i.e. actively looking for intrusions without having received an IDS alert):

Sometimes I hunt for traffic by telling Wireshark what to ignore so that I can examine what’s left behind. I start with a simple filter, review the results, add another filter, review the results, and so on until I’m left with a small amount of traffic to analyze.

The Practice of NSM
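
In the same spirit, here is a small sketch of iterative "filter out the known" hunting from Python. The third-party pyshark wrapper around tshark and the capture file name hunt.pcap are assumptions, not something from Richard's book:

# Start broad, review, then keep appending "and not ..." clauses between
# review rounds until only a small amount of traffic is left to analyze.
import pyshark

display_filter = (
    "not arp and not (udp.port == 53) "
    "and not (tcp.port == 443 and ip.dst == 203.0.113.0/24)"
)
capture = pyshark.FileCapture("hunt.pcap", display_filter=display_filter)
for packet in capture:
    if hasattr(packet, "ip"):
        print(packet.highest_layer, packet.ip.src, "->", packet.ip.dst)
capture.close()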

I personally find Rinse-Repeat Intrusion Detection ideal for hunting, especially in situations where you are handed a big PCAP dataset to answer the classic question “Have we been hacked?”. Unfortunately, the “blacklist mentality” is so ingrained among incident responders that they often choose to crunch these datasets through blacklists and signature databases, only to then review thousands of alerts that are full of false positives. In most situations such approaches are just a huge waste of time and computing power, and I'm hoping to see a change in incident responders' mindsets in the future.

I teach this “rinse-repeat” intrusion detection method in Netresec's Network Forensics Training. In this class students get hands-on experience with a dataset of 3.5 GB / 40,000 flows, which is reduced to just a fraction through a few iterations of the rinse-repeat loop. The remaining part of the PCAP dataset has a very high ratio of hacking attacks as well as command-and-control traffic from RATs, backdoors and botnets.

Download the guide, Optimizing Network Design in Security Projects, for best practices on protecting your company's assets through security network design.

Written by Erik Hjelmvik

Erik Hjelmvik is an experienced incident handler and software developer who has specialized in network forensics and network security monitoring. Erik is also known in the network forensics community for having created NetworkMiner, an open source network forensic analysis tool. Since its release in 2007, NetworkMiner has become a popular tool among incident response teams and law enforcement; today it is used by companies and organizations all over the world and is included on popular live-CDs such as Security Onion and REMnux. Erik is also one of the founders of the Swedish company Netresec, an independent software vendor with spearhead competence in network security monitoring and network forensics. Netresec develops and sells software products specially designed to capture and analyze network traffic on the wire as well as in pcap files.
