
Defensive Best Practices For Destructive Malware

The NSA released a five-page document last week on keeping destructive malware out of your network.  Five pages, with links to hundreds of pages of other NSA documents.  It would probably take a year just to read and absorb them all, and then you still have to implement the suggestions.  Some are simple, some are hard.  As I always say, it is a matter of business risk management to decide what you want to do.  Then implementing it.  Then maintaining it.  Simple, huh?  Not quite, but with the right resources, it is possible.

Here is the condensed version of what the NSA is recommending.  Since the NSA controls its workforce completely, it can do all of this.  You will probably have to pick and choose.

Prevent, Detect and Contain

  • Segregate your network so that when an attacker does get in, he or she cannot roam your entire universe.  An example would be at Target – getting into the vendor management network should not have allowed access to the point of sale system.  In Target’s case, it was way too easy.  This can be a lot of work, but it has little impact on your users once it is set up and almost no performance impact.
  • Protect and restrict administrative privileges.  Unfortunately, the NSA is the poster child for this one.  When Edward Snowden went rogue, he had way too much access.  This is transparent to your users and a pain in the rear for your administrators.  Still, they have the keys to the cookie jar, so you decide.
  • Deploy application whitelisting.  Whitelisting means that only approved versions of approved applications can be installed or run anywhere on your network.  This mostly impacts your users and I would rate the impact high.  If users cannot run downloaded software or infected versions of approved software, it makes the hacker’s job very hard.  (There is a small sketch of the idea just after this list.)
  • Limit workstation-to-workstation communication.  This makes it harder for malware to spread.  I rate the impact low for users and medium for administrators, and it is difficult to implement.
  • Implement robust network boundary defenses such as firewalls.  This takes some effort to implement but when it is done, for the most part, the users won’t notice.  The US Government is working on this – they had thousands of connections to the outside world.  How do you protect that many connections?  How many connections do you have?  What about the ones you don’t know about, like the WiFi connection between someone’s laptop and the personal hotspot they bought from Verizon for $49?
  • Maintain and monitor host and network logging.  This one is completely transparent to the users but takes a lot of work and likely some money.  Every device on your network – from a server to the refrigerator – needs to send its logs to a central server.  Then those logs have to be crunched for unusual events.  Then people have to act on the alerts.  That is what really killed Target.  Their logging and alerting system generated an alert, IT reviewed it and bounced it up the food chain, and management decided not to take any action.  (A minimal sketch of centralized logging follows this list.)
  • Implement mitigations such as Microsoft’s Pass-the-Hash guidance and the Enhanced Mitigation Experience Toolkit (EMET).  I rate these as high pain for both users and administrators.
  • Implement Host Intrusion Prevention Systems (HIPS) to detect anomalous behavior.  I rate this low for the user; higher for the admins (to set up and monitor), with some cost depending on the solution chosen.
  • Finally, patch software in a timely manner so that known bugs cannot be easily exploited.  There is some pain to the user, although a lot of this can be automated with some work.  There is a lot of work for IT to find all the patches, figure out where they need to go, test all the affected systems and deploy the patches.
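
In practice, whitelisting is enforced with operating-system tools such as Windows AppLocker or Software Restriction Policies rather than home-grown code, but the underlying idea is easy to illustrate.  Here is a minimal Python sketch of the hash-comparison approach; the approved hash (the digest of an empty file) and the file path are placeholders, not anything from the NSA document.

```python
import hashlib
from pathlib import Path

# Hypothetical allow-list of SHA-256 digests for approved application binaries.
# A real deployment would pull this from a managed, signed database.
APPROVED_HASHES = {
    # placeholder: this is the digest of an empty file
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def sha256_of(path: Path) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def is_approved(path: Path) -> bool:
    """True only if the binary's digest matches an approved entry."""
    return sha256_of(path) in APPROVED_HASHES

if __name__ == "__main__":
    target = Path("downloads/installer.exe")   # hypothetical file to vet
    if target.exists():
        print("approved" if is_approved(target) else "BLOCKED: not on the whitelist")
```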
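
On the logging item, the heart of it is that every host forwards its events to one collection point, where unusual activity gets flagged and someone acts on it.  This Python sketch assumes a hypothetical central syslog collector at logs.example.internal and an arbitrary failed-login threshold; a real shop would use a SIEM, but the flow is the same.

```python
import logging
import logging.handlers

# Hypothetical central collector; 514/UDP is the traditional syslog port.
CENTRAL_LOG_HOST = ("logs.example.internal", 514)

logger = logging.getLogger("host-events")
logger.setLevel(logging.INFO)
logger.addHandler(logging.handlers.SysLogHandler(address=CENTRAL_LOG_HOST))

FAILED_LOGIN_THRESHOLD = 5   # arbitrary threshold for the example

def record_failed_login(user: str, count: int) -> None:
    """Forward the event; escalate when repeated failures look suspicious."""
    if count >= FAILED_LOGIN_THRESHOLD:
        logger.warning("possible brute force: %d failed logins for %s", count, user)
    else:
        logger.info("failed login for %s (count=%d)", user, count)

if __name__ == "__main__":
    record_failed_login("jsmith", 2)   # logged quietly
    record_failed_login("jsmith", 7)   # flagged for follow-up
```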

Prepare for incident response and recovery

  • Backup, backup, backup.  Then test.  If you cannot restore the backup to a bare metal box, it doesn’t solve the problem.  If new systems are added and not included in the backups, you have a problem.  I know of a company whose backups hadn’t been successful for a year, but no one was checking.  When they had a problem, it suddenly became a huge problem.  (A small automated check like the sketch after this list would have caught that.)
  • Establish an incident response and recovery plan.  Then test it at least once a quarter.  When I was a kid, the regional hospital was affected by a big blackout that covered the whole east coast.  The hospital had generators to provide power.  Unfortunately, no one knew how to get them up and running.  That was embarrassing.  Luckily, no patients died before they did get the generators running.
  • At the conclusion of an incident, conduct a lessons learned exercise and actually learn from the experience.
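
On the “no one was checking” problem, even a trivial automated check beats silence.  This Python sketch assumes a hypothetical backup drop directory and a nightly schedule, and simply raises an alert if the newest backup file is missing or older than expected.

```python
import time
from pathlib import Path

# Hypothetical backup location and maximum acceptable age for a nightly job.
BACKUP_DIR = Path("/var/backups/nightly")
MAX_AGE_HOURS = 26   # one night plus some slack; pick what fits your schedule

def latest_backup_age_hours(directory: Path) -> float | None:
    """Age in hours of the newest file in the directory, or None if empty."""
    files = [p for p in directory.glob("*") if p.is_file()]
    if not files:
        return None
    newest = max(p.stat().st_mtime for p in files)
    return (time.time() - newest) / 3600

if __name__ == "__main__":
    age = latest_backup_age_hours(BACKUP_DIR)
    if age is None:
        print("ALERT: no backup files found at all")
    elif age > MAX_AGE_HOURS:
        print(f"ALERT: newest backup is {age:.1f} hours old")
    else:
        print(f"OK: newest backup is {age:.1f} hours old")
```

None of this replaces actually restoring a backup to a bare metal box, but it catches the quiet failures before they pile up for a year.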

I managed to reduce this to about one printed page.  Actually doing this requires person-years of effort, including planning, implementing, testing, monitoring, training and documentation.  Your goal is to make it harder for the bad guys to attack your system than the one next door.  You don’t need to be perfect.  Just harder to attack than your neighbor.

This is a good checklist to review as part of your business risk mitigation efforts.

Mitch
