Unix best practices: Remember, what you’re protecting is not systems but productivity

Tip
Mar 09, 2015 | 8 mins
Data Center, Linux, Open Source

A waterproof safe dropped into the deepest part of the ocean is not as 'secure' as it is useless

When considering the issue of best practice as it applies to the administration of Unix servers, there are a number of questions you might want to ask yourself. They begin with “What is best practice?”, quickly followed by “How do you pursue best practice?”, and “How do you know when you’re meeting best practice?”.

The word “best” is not intended to suggest that the highest possible standards are applied through the most stringent means. Instead, efficiency and effectiveness are key to developing and adhering to practices that are likely to be both affordable and sustainable.

What is best practice?

Best practices evolve. What was acceptable ten years ago might be considered negligent today. In addition, best practices generally represent a compromise between your goals and your means. Standards that are too high can be impractical or too costly to implement.

Cambridge Dictionaries Online defines best practice as “a working method or set of working methods that is officially accepted as being the best to use in a particular business or industry, usually described formally and in detail”. Webopedia, on the other hand, says that best practice is a business buzzword that describes “a set of defined methods, processes, systems or practices used by a company or organization to meet performance and efficiency standards within their industry or organization. Best practices are guidelines which are used to obtain the most efficient and effective way of completing a task using repeatable and proven procedures.”

Most commonly viewed as a set of industry recommendations, “best practice” doesn’t necessarily mean the most efficient, effective, or secure practices. It does, however, imply due diligence (reasonable steps have been taken) and a high degree of professionalism. When you follow best practice, your standards should be fairly comparable with or better than other organizations in your line of business.

Best practices are, however, context-sensitive and will change with the peculiarities of your business model. They are never a “one-size-fits-all” solution. It’s also good to keep in mind a quote credited to Voltaire: “The best is the enemy of the good” (though he would have said “Le mieux est l’ennemi du bien”). It’s possible to overdo security and end up damaging productivity. Many years ago, when helping to organize a security conference, I noted that a system wrapped in a waterproof safe and dropped into the deepest part of the ocean would not be so much “secure” as useless. What most of us want are systems that are both reliable and available. The CIA (confidentiality, integrity, and availability) model is a good reminder that what we’re protecting is not systems but productivity.

How do you pursue best practice?

Traditional Unix best practices include:

  • disabling unused services. Fewer services means fewer vulnerabilities (a quick example follows this list).
  • enabling logging, especially if you can send log data to a syslog server. Log files can generally tell you a lot about what is happening on your servers (a forwarding example follows this list).
  • implementing log monitoring scripts to alert you to emerging problems. Don’t even try to review logs by hand unless you’re looking for something specific.
  • monitoring performance, disk usage, and memory usage. Give yourself a chance to see performance and capacity problems before they crescendo into outages (a simple disk check is sketched after this list).
  • acting on detected problems in a timely manner (e.g., during a maintenance window the following weekend). The longer you leave problems unresolved, the more time they have to fester and the longer you will have to keep them on your radar.
  • disabling telnet and other insecure services. There is no excuse for not using replacement services that are more secure than their old clear text counterparts.
  • enforcing good passwords by configuring minimum length, aging, character diversity, and non-reuse settings. Keep in mind that good password length standards have increased dramatically in the past few years; recommendations have gone from 8 characters to 12 or 14 (see the settings sketched after this list).
  • periodically resetting root’s password. Ensure that it’s a good password and that almost no one knows what it is. There are better ways to manage a system than sharing the root password.
  • using good backup tools and verifying that backups are successful. If you don’t test your backups periodically, you might be in for a shock when you need them. Besides, being practiced at file recovery will help when the pressure is on (a simple restore test is sketched after this list).
  • planning and testing disaster recovery. Know ahead of time how you would restore services at another location or site and make sure that staff are prepared to do the work if it is ever needed.
  • disallowing suid settings. These settings should never be used, especially when they set the user to root (a quick audit command follows this list).
  • using sudo for superuser tasks. Using sudo allows you to give users specific superuser powers without giving them root’s password (an example entry follows this list).
  • considering using tools such as Tripwire and TCP Wrappers. Add higher levels of security when you can.
  • considering use of SELinux if it’s an option on your servers for significantly improved overall security.
  • committing to periodic patching, particularly when it comes to high impact vulnerabilities.
  • making use (on Linux systems) of host-based firewalls such as iptables and firewalld. Don’t depend on border firewalls to protect your Unix systems. Some threats come from inside, and you may be the only one who knows what needs protection on the systems you manage (a firewalld example follows this list).
  • removing or locking accounts whenever someone leaves the organization or moves to a different job function. Accounts should be locked or removed as soon as possible, but you might want to verify whether an account’s contents should be removed altogether or just blocked (commands for locking and reviewing accounts follow this list).
  • reviewing accounts at least twice a year and closing any overlooked accounts that are no longer needed or no longer valid. Also be careful to identify service accounts (accounts that are used by system services rather than by individuals). If you don’t document what they are for, they might not get the proper review and scrutiny.
  • avoiding shared accounts to the extent possible. You lose accountability and it’s hard to track who is actually using the accounts.
  • planning, testing, implementing and then verifying changes only after you have a backout strategy worked out in case something goes wrong.
  • implementing changes to critical servers in a test environment before moving to the production environment. Your test environments should be as close as possible to the production servers.
  • scanning for weaknesses in the systems you manage. Use vulnerability scans to detect your systems’ weaknesses and, once identified, make plans for addressing them.
  • scanning for signs of compromise. This ties in with using tools like Tripwire and reviewing log files. Be alert.
  • setting up redundant or backup power configurations whenever possible so that systems are unlikely to ever lose power.
  • limiting physical access to your servers.
  • using virtual servers when possible, especially if you can migrate the images to a different hosting platform when a problem arises.
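
A few of the items above lend themselves to quick illustrations. For disabling unused and insecure services on a systemd-based Linux server, something like the following works (older init systems use chkconfig or inetd.conf instead, and the telnet unit name may differ by distribution):

    # see which services are enabled at boot
    systemctl list-unit-files --type=service --state=enabled

    # disable and stop a service you don't need (unit name is an example)
    systemctl disable --now telnet.socket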
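
For the logging and log-monitoring items, a single rsyslog rule forwards everything to a central collector (the hostname here is invented), and even a crude grep run from cron beats reviewing logs by hand:

    # /etc/rsyslog.conf -- forward all messages to a central syslog server (invented hostname)
    *.*  @@loghost.example.com:514    # @@ = TCP; a single @ = UDP

    # a very simple cron-driven check for trouble (log path varies by distribution)
    grep -iE 'error|fail|denied' /var/log/messages | tail -20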
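
A disk-usage check as simple as this one, run from cron, will catch filesystems that are filling up (the 90% threshold is only an example):

    # flag any filesystem that is more than 90% full
    df -hP | awk 'NR>1 && $5+0 > 90 {print "Disk alert:", $6, "at", $5}'

    # quick snapshot of memory and swap (Linux)
    free -m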
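
Password aging and length policies on Linux live largely in /etc/login.defs and PAM; the values below are illustrative, not prescriptive, and the username is a placeholder:

    # /etc/login.defs (affects newly created accounts)
    PASS_MAX_DAYS   180
    PASS_MIN_DAYS   1
    PASS_WARN_AGE   14
    PASS_MIN_LEN    14    # often superseded by pam_pwquality on newer systems

    # apply aging to an existing account
    chage -M 180 -m 1 -W 14 username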
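
Verifying backups need not be elaborate; restoring a sample into a scratch directory and comparing it to the original proves the backup is readable (the /backups path is just an example):

    # back up /etc, then prove the archive can actually be restored
    tar czf /backups/etc-$(date +%F).tar.gz /etc
    mkdir -p /tmp/restore-test
    tar xzf /backups/etc-$(date +%F).tar.gz -C /tmp/restore-test
    diff -r /etc /tmp/restore-test/etc && echo "restore test passed"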
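
Auditing for setuid and setgid files takes a single find command; anything unexpected in the output deserves scrutiny:

    # list setuid/setgid files on local filesystems
    find / -xdev \( -perm -4000 -o -perm -2000 \) -type f -ls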
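
A narrowly scoped sudo grant, always edited with visudo, might look like this (the group name and command list are purely examples):

    # in /etc/sudoers (edit with visudo):
    # let members of a hypothetical webadmins group restart the web server
    %webadmins  ALL=(root)  /usr/sbin/apachectl restart, /usr/sbin/apachectl graceful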
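
On hosts running firewalld, a host-based policy can start with allowing only the services you actually provide (the services shown are examples); the iptables line shows the rough equivalent for ssh:

    # allow ssh and https, then persist the runtime change
    firewall-cmd --add-service=ssh --add-service=https
    firewall-cmd --runtime-to-permanent

    # roughly equivalent iptables rule for ssh
    iptables -A INPUT -p tcp --dport 22 -j ACCEPT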
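
Finally, locking a departed user’s account and spotting stale ones can be done with standard tools (the username is a placeholder):

    # lock the password and expire the account immediately
    usermod -L olduser
    chage -E 0 olduser

    # accounts that have never logged in -- candidates for review
    lastlog | grep "Never logged"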

Another source of information on best practice, though not specific to Unix systems, is ISO 27002. This document, a companion to the ISO 27001 standard, outlines a code of practice for information security and suggests hundreds of what it calls “controls,” steps that you can take to improve your security posture. Unless you’re considering becoming ISO 27001 certified, however, the standard is an expensive way to obtain a list of security best practices. The ISO 27002 document alone (i.e., not the complete ISO 27001 standard) will cost you about $160. It will, on the other hand, provide you with quite a bit of industry consensus on what constitutes good practice for information security, and it goes beyond the scope of system security since it covers physical security and human resource security as well.

How do you know when you’re meeting best practice?

One way to track the security of your servers is to develop a server profile that includes each server’s function, its primary users, its location, and a checklist of best practices that have been implemented. Take note of the important assets — hardware, software, information, and services — that reside on each server so that you are aware of the system’s appeal to anyone who might seek to compromise it. Consider the ways in which those assets might be compromised and how your practices help to reduce the associated risks.
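
There’s no standard format for a server profile; even a plain-text sketch along the following lines (all details invented) is enough to get started:

    hostname:      web01.example.com
    function:      customer-facing web server
    location:      data center A, rack 14
    primary users: web team, on-call administrators
    key assets:    TLS private keys, customer session data
    best-practice checklist: [x] telnet disabled  [x] sudo in use  [ ] SELinux enforcing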

Tracking user access privileges on each of your servers is also an important part of your server profiles. Unless you work in a shop where all servers have the same configuration, keeping track of who has privileged access on hundreds or thousands of servers can be complicated and time-consuming. Making use of an access governance system which collects and reports on this kind of information can go a long way toward helping you maintain your server profiles and an accurate view of access privileges.
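
Even without a dedicated access governance product, the raw data for such a review can be pulled from the servers themselves; for example (group names vary by distribution, and the username is a placeholder):

    # who holds administrative group membership
    getent group wheel sudo

    # what a given user may run via sudo
    sudo -l -U username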

And a closing thought

Achieving best practice should never be considered justification for not reviewing your security posture. Never stop asking “Is this good enough?” and always remember that best practices evolve. And don’t be afraid of exceeding best practice. The care that you take should be commensurate with what you are protecting.

Sandra Henry-Stocker
Unix Dweeb

Sandra Henry-Stocker has been administering Unix systems for more than 30 years. She describes herself as "USL" (Unix as a second language) but remembers enough English to write books and buy groceries. She lives in the mountains in Virginia where, when not working with or writing about Unix, she's chasing the bears away from her bird feeders.

The opinions expressed in this blog are those of Sandra Henry-Stocker and do not necessarily represent those of IDG Communications, Inc., its parent, subsidiary or affiliated companies.