Best Practices for Securing Private Keys and Code-Signing Certificates

By Dean Coclin, Senior Director of Business Development, Symantec

The security world is abuzz over Stuxnet, perhaps the most sophisticated malware attack ever. It appears to have targeted certain facilities in Iran, particularly nuclear facilities, and infiltrated networks one would expect to be actively secured. Stuxnet used many new and innovative techniques to perform this infiltration, including program files digitally signed with the private keys of two legitimate companies.

Such an attack was inevitable. Code signing places certain responsibilities on users, and not all users are responsible. This article explains the measures you can take so your organization’s certificates, private keys, and good name don’t become the tools of malicious hackers.

The Basics of Code Signing

Code signing is a process that uses Public Key Infrastructure (PKI) technology to create a digital signature based on a private key and the contents of a program file, and packages that signature either with the file or in an associated catalog file. Users combine the file, the certificate, and its associated public key to verify the identity of the file signer and the integrity of the file.
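To make the mechanics concrete, here is a minimal sketch in Python using the open-source cryptography package. It shows only the underlying public-key operations; real code-signing formats such as Windows Authenticode wrap them in a standardized signature structure and an X.509 certificate chain, and the file name and key size below are purely illustrative.

    # Minimal sketch of the sign/verify flow (not Authenticode itself).
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa

    # Generate a key pair; in practice the private key is created once and
    # guarded carefully, ideally inside a hardware device.
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=3072)
    public_key = private_key.public_key()

    program_bytes = open("installer.exe", "rb").read()  # hypothetical file name

    # Signing: a digest of the file contents is signed with the private key.
    signature = private_key.sign(program_bytes, padding.PKCS1v15(), hashes.SHA256())

    # Verification: anyone holding the public key (normally carried inside the
    # signer's certificate) can confirm the file has not been altered.
    # verify() raises InvalidSignature if the file or signature was tampered with.
    public_key.verify(signature, program_bytes, padding.PKCS1v15(), hashes.SHA256())
    print("Signature verified; file contents are intact.")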

Code signing starts with public and private keys created by a developer. Developers can create their own digital certificate containing the public key using one of many freely available tools, or can purchase one from a trusted certificate authority (CA) to which they provide the public key. The applicant provides a name for the entity, typically a company, along with other identifying information, and the CA issues a certificate, which is itself signed by the CA.
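For illustration, the sketch below (again Python with the cryptography package) builds the kind of certificate signing request (CSR) a developer might submit to a CA: it bundles the public key with identifying information and is itself signed with the private key. The organization name and output file are hypothetical.

    # Hypothetical sketch: packaging the public key and identifying
    # information as a certificate signing request (CSR) to send to a CA.
    from cryptography import x509
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import rsa
    from cryptography.x509.oid import NameOID

    private_key = rsa.generate_private_key(public_exponent=65537, key_size=3072)

    csr = (
        x509.CertificateSigningRequestBuilder()
        .subject_name(x509.Name([
            x509.NameAttribute(NameOID.COUNTRY_NAME, u"US"),
            x509.NameAttribute(NameOID.ORGANIZATION_NAME, u"Example Software, Inc."),
            x509.NameAttribute(NameOID.COMMON_NAME, u"Example Software, Inc."),
        ]))
        .sign(private_key, hashes.SHA256())  # the request is signed with the private key
    )

    # The CSR -- not the private key -- is what goes to the CA, which vets the
    # identity and returns a certificate signed by the CA.
    with open("codesign.csr", "wb") as f:
        f.write(csr.public_bytes(serialization.Encoding.PEM))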

It is essential that users keep private keys secure and confidential, restricting access only to those who absolutely need them. Anyone who has access to a private key can create software that will appear to be signed by the owner of the certificate.

A reputable CA that sells a code signing certificate will not take the applicant’s word for their identity. The CA will perform checks on the company name, phone number, and other information required to prove identity -- a process that can take up to several days.

With such a certificate and the associated private key, a programmer can digitally sign files distributed with the software. Software such as Windows can check these signatures and apply policies based on them, generally asking users whether they trust the publisher of the file.

When a certificate is compromised, the certificate authority revokes it. The certificate itself contains links where clients can check whether it has been revoked.
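As a rough illustration, the following sketch reads those links out of a certificate: the CRL distribution points and the OCSP responder URL in the Authority Information Access extension. The file name is a placeholder, either extension may be absent in a given certificate, and a real client would go on to fetch and evaluate the revocation data.

    # Sketch: reading the revocation-check links embedded in a certificate.
    from cryptography import x509
    from cryptography.x509.oid import AuthorityInformationAccessOID, ExtensionOID

    with open("codesign.cer", "rb") as f:          # placeholder file name
        cert = x509.load_der_x509_certificate(f.read())

    # CRL distribution points: URLs of certificate revocation lists.
    # (get_extension_for_oid raises ExtensionNotFound if the extension is absent.)
    crl_ext = cert.extensions.get_extension_for_oid(ExtensionOID.CRL_DISTRIBUTION_POINTS)
    for point in crl_ext.value:
        for name in (point.full_name or []):
            print("CRL:", name.value)

    # OCSP responder, from the Authority Information Access extension.
    aia = cert.extensions.get_extension_for_oid(ExtensionOID.AUTHORITY_INFORMATION_ACCESS)
    for desc in aia.value:
        if desc.access_method == AuthorityInformationAccessOID.OCSP:
            print("OCSP:", desc.access_location.value)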

Stuxnet -- What Happened?

Stuxnet is (in)famous for many traits: It exploited a record four zero-day vulnerabilities in Windows, installed rootkits in Windows and in certain PLCs (Programmable Logic Controllers) controlled by Windows PCs, and exhibits code quality far above that of typical malware. Finally, two of its executables are digitally signed with the private keys and code signing certificates of two separate legitimate companies.

How could this have happened? The two companies aren’t saying, and we can only speculate, but it’s well understood that it’s the developers’ responsibility to keep the private key secure. Clearly, Stuxnet’s perpetrators were sophisticated and had good resources at hand, but it’s also likely that the certificate owners did not go as far as they should have in the protection of their private keys.

Keys, Codes, and Breaches

The damage to the reputation of a company that suffers a code signature breach is serious enough. All security decisions reduce to trust decisions at some level, and trust must suffer in the case of such a breach.

The breach almost certainly indicates a severe security weakness, possibly in physical security and certainly in the company’s network security. If the code signing private keys were stolen, what else was? What was modified? Could Trojan horse code have been inserted into the company’s source code? The list of distressing possibilities is long.

Meanwhile, the company must have the CA revoke its code signing certificates. If other private keys were stored in the same manner as those that were stolen, all of them need to be revoked as well, which magnifies the impact of the problem.

The company must also decide whether it can determine the date of the intrusion, or at least establish that the intrusion definitely occurred after a certain date. If so, it should set that date as the effective date of the certificate revocation. If it can’t, it must revoke the certificates outright, and probably revoke every certificate obtained as far back as the acquisition date of the compromised certificate.

The company must replace any code in customers’ hands that was signed with what is now a revoked certificate. This means contacting customers and explaining what happened, which it probably should do in any event. It’s embarrassing, but it’s the right thing to do if the company hopes to regain customer trust.

Best Practices: How to Avoid a Breach

There’s an old joke that the only safe computer is one that’s completely shut off from the rest of the world, and there’s more than a grain of truth to it. That’s why the most secure systems, including those with access to genuine code signing certificates, should have as little connection to the outside world as possible.

Such systems should be protected by applying the principle of least privilege, requiring multi-factor authentication for access to the system and network, blocking all but the most necessary network ports, installing all available security updates, and running an up-to-date antivirus scanner. Give developers at least two systems to work with; actual development systems should be connected to a separate network, with credentials separate from those used for ordinary corporate computing such as e-mail.

Separate Test Signing and Release Signing: You need to test your code when it's signed, but there's no reason to expose your real private keys and signing mechanisms more often or to more users than necessary. One best practice is to set up a parallel code-signing infrastructure using test certificates generated by an internal test root certificate authority. In this way, the code will be trusted only on systems with access to the test CA; if any pre-release code escapes the development/test network, it won't be trusted. These practices minimize the chances that something will be signed which shouldn’t be.
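A sketch of what such a parallel infrastructure might look like, again using Python’s cryptography package: a self-signed internal test root issues a short-lived code-signing certificate that is trusted only where the test root has been deliberately installed. All names, key sizes, and lifetimes below are illustrative.

    # Sketch of a private test-signing setup: a self-signed test root CA issues
    # a short-lived code-signing certificate. The root is installed only on
    # development/test machines, so anything it signs is untrusted elsewhere.
    import datetime
    from cryptography import x509
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa
    from cryptography.x509.oid import ExtendedKeyUsageOID, NameOID

    def make_name(common_name):
        return x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, common_name)])

    now = datetime.datetime.utcnow()

    # 1. The internal test root CA (never distributed outside the test network).
    root_key = rsa.generate_private_key(public_exponent=65537, key_size=3072)
    root_cert = (
        x509.CertificateBuilder()
        .subject_name(make_name(u"Example Internal Test Root CA"))
        .issuer_name(make_name(u"Example Internal Test Root CA"))
        .public_key(root_key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + datetime.timedelta(days=365))
        .add_extension(x509.BasicConstraints(ca=True, path_length=None), critical=True)
        .sign(root_key, hashes.SHA256())
    )

    # 2. A test code-signing certificate issued by that root.
    signer_key = rsa.generate_private_key(public_exponent=65537, key_size=3072)
    signer_cert = (
        x509.CertificateBuilder()
        .subject_name(make_name(u"Example Test Code Signing"))
        .issuer_name(root_cert.subject)
        .public_key(signer_key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + datetime.timedelta(days=30))
        .add_extension(
            x509.ExtendedKeyUsage([ExtendedKeyUsageOID.CODE_SIGNING]), critical=False
        )
        .sign(root_key, hashes.SHA256())  # signed by the test root, not a public CA
    )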

Cryptographic Hardware Modules: Keys stored in software on general-purpose computers are susceptible to compromise. It is more secure, and a best practice, to store keys in a secure, tamper-resistant cryptographic hardware device. These devices are less vulnerable to compromise and, in many cases, theft of the key requires theft of the physical device.

Three types of such devices are typically used:

  • Smart cards
  • Smart card-like devices such as USB tokens
  • Hardware security modules (HSMs)

Such devices are trusted for the most critical applications. For example, Symantec (formerly VeriSign Authentication) has been using HSMs to hold and protect the private keys it uses to sign digital certificates since the company’s founding.

Some HSMs will also never allow the export of keys, which is a significant security benefit: to steal the keys, you would need to steal the actual HSM, and even then, without further credentials, you may not be able to use the keys in it.

FIPS (Federal Information Processing Standards) 140-2 is a NIST standard for conformance testing of cryptographic modules performed by NIST-certified test labs. It has four levels of security at which the device may be certified. For instance, at level 3, in addition to supporting certain cryptographic functions, the device must be highly resistant to tampering.

HSMs, typically in the form of an add-on card or network-attached device, can also perform cryptographic operations internally. Critically, they can generate and operate on keys internally and back them up in encrypted form externally, so the keys need never be stored in plain text outside of the device. Some smart cards also support key generation.
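As a rough sketch of what this looks like in practice, the example below signs a file through the vendor-neutral PKCS#11 interface (here via the open-source python-pkcs11 package) so that the private key never leaves the device. The module path, token label, PIN, and key label are placeholders for whatever your HSM or token vendor supplies.

    # Sketch: signing with a key that never leaves the hardware device, using
    # the PKCS#11 interface. All paths, labels, and the PIN are placeholders.
    import pkcs11

    lib = pkcs11.lib("/usr/lib/softhsm/libsofthsm2.so")   # vendor PKCS#11 module
    token = lib.get_token(token_label="codesign-token")

    program_bytes = open("installer.exe", "rb").read()     # hypothetical file

    with token.open(user_pin="1234") as session:
        # The private key was generated on the device and marked non-exportable;
        # the application only ever refers to it by label.
        key = session.get_key(
            object_class=pkcs11.ObjectClass.PRIVATE_KEY,
            label="release-signing-key",
        )
        # Hashing and signing happen inside the device.
        signature = key.sign(program_bytes, mechanism=pkcs11.Mechanism.SHA256_RSA_PKCS)

    print(len(signature), "byte signature produced; the key never left the HSM")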

HSMs have several important advantages, such as high-performance dedicated cryptographic processors, automation features, and internal key destruction.

Conclusion

Sometimes we don’t take security measures we know to be necessary until the threat becomes more tangible than a theory in a journal. That has been the case with the security of code signing private keys for many development shops.

Fortunately, if you value your intellectual property and your reputation, there are measures you can take to protect them. By securing your developer networks and facilities, formalizing code-signing processes with test signing, placing final code signing in a highly secured environment, and using hardware cryptographic devices to implement the signing, you can make yourself a difficult target to attack.

Dean Coclin is senior director of business development at Symantec, where he is responsible for driving the company’s strategic alliance with distribution, OEM, and technology partners. Coclin has more than 25 years of experience in software, security, and telecommunications, and was a founder of the Internet security firm ChosenSecurity, acquired by PGP (which was acquired by Symantec). He has previous experience with GeoTrust, Betrusted, Baltimore Federal Systems, CyberTrust Solutions, and GTE Government Systems. Coclin holds a BSEE from George Washington University and an MBA from Babson College. You can contact the author at http://www.linkedin.com/pub/dean-coclin/0/546/753