Moon

Copyright © Miklos Szegedi, 2021.

Abstract

This is my last article in the security series. It covers different aspects to take into consideration when dealing with proactive security.

Staffing

First of all, there are many lines of code that you need to maintain. My suggestion is to have a separate engineer for every 20K lines of software code. Developers may still share knowledge across codebases. At more prominent companies, you may need to account for up to six levels of management above the engineers, adding 20-30% to the cost per employee. A good rule of thumb is that a new full-time employee can read and memorize about 1K lines of code a day, so they can ramp up on such a codebase in about a month. Having an assigned codebase helps with integrity. Imagine that a push update or a redirected cloud connection shows a different codebase today than yesterday. An assigned developer will spot any discrepancy right away, preventing incidents. The number of code lines per engineer may vary by operating system, programming language, and industry. Test code or boilerplate code may need fewer engineers, and kernel code may need a more rigorous approach. Keep security staff on the full-time payroll.

Complexity

The size of the codebase also matters. Do not take into account only the sources you own. You need to audit any build system, tool, or even the operating system. Auditing source code is more straightforward than auditing binaries. You must still check binaries, and you should sign them digitally against the sources and the compilers that produced them. Do not check in binaries; keeping the repository source-only makes changes easy to verify. Occasionally verify the active codebase against backups as well.
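
As a minimal sketch of this idea, the Go program below compares a built binary against a digest recorded at build time. The artifact path and the expected digest are made-up placeholders; a real pipeline would sign the digest rather than hard-code it.

    // checksum.go - a minimal sketch of verifying a built binary against a
    // recorded checksum. The file name and the expected digest are hypothetical.
    package main

    import (
        "crypto/sha256"
        "encoding/hex"
        "fmt"
        "io"
        "log"
        "os"
    )

    func main() {
        // Digest recorded (and ideally signed) at build time; placeholder value.
        const expected = "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"

        f, err := os.Open("build/server") // hypothetical build artifact
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()

        h := sha256.New()
        if _, err := io.Copy(h, f); err != nil {
            log.Fatal(err)
        }
        actual := hex.EncodeToString(h.Sum(nil))
        if actual != expected {
            log.Fatalf("binary does not match the recorded checksum: %s", actual)
        }
        fmt.Println("binary matches the recorded checksum")
    }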

Versioning

Agile software practices have gained traction in the past ten years. There are some issues, though. A hundred changes in a year may add at least one vulnerability with a 63% probability, assuming just a one percent error rate per change. Continuous integration is essential, but it has risks. Systems like Golang will be more trustworthy once their codebase becomes more stable. The waterfall model and rare releases are helpful if the company has many object-code customers; it saves time and money.
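
That 63% figure is simply the complement of every change being clean. With a one percent error rate per change and a hundred independent changes:

    P(at least one vulnerability) = 1 - (1 - 0.01)^100 = 1 - 0.99^100 ≈ 0.634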

Dependencies

Dependencies are also a risk. It is essential to fetch and audit every change. Most build systems can cache dependencies locally. It may also be necessary to build dependencies from sources, as Golang does. Every time you add a dependency, consider whether you really need the whole package or only the required algorithms. Also, unrelated or unneeded dependency changes pose additional risks.
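
For Go projects, one way to pin, vendor, and audit dependencies is the standard module tooling; the commands below are a sketch of that workflow.

    go mod tidy      # add missing and drop unused module requirements
    go mod vendor    # copy dependency sources into ./vendor for auditing
    go mod verify    # check that cached module downloads have not been modified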

Regulatory

How many lawyers and government-affairs staff a company employs depends on the culture of the country. It is good to know your rights to protect against government actions. Those are the most sophisticated attacks, and they may involve social engineering, like injecting staff into the workforce. Transparency and publicly disclosing incidents may deter government actors, especially if they rely on rarely used vulnerabilities that disclosure would make obsolete. It is good to clarify that senior management, the CEO, and the CFO are responsible to shareholders, customers, and the general public. Local governments can use the court system. Random hiring of staff with adequate skill sets helps to avoid team pressure.

Common risks

Less advanced attackers may only be an annoyance, but resolving the issues they cause can still be costly. Regular updates help to fix problems found on the dark web. Training helps to prevent scams.

Networking

Secure sockets have been around for a while. It is vital to secure private traffic and to check whether TLS 1.3 is still a reliable way to do it. Sign public traffic digitally, just like binaries; this allows incoming packages to be scanned for errors. Caching signed but unencrypted downloads saves on network traffic. General information does not need to be encrypted, only audited; this is a frequent misunderstanding.
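
As a minimal sketch, the Go program below checks a detached Ed25519 signature on a cached, unencrypted download. The file paths and the publisher key are hypothetical placeholders.

    // verify_download.go - a minimal sketch of checking a detached Ed25519
    // signature on a downloaded artifact. Paths and the key are hypothetical.
    package main

    import (
        "crypto/ed25519"
        "encoding/hex"
        "log"
        "os"
    )

    func main() {
        // Hypothetical publisher public key, distributed out of band.
        pubHex := "3d4017c3e843895a92b70aa74d1b7ebc9c982ccf2ec4968cc0cd55f12af4660c"
        pub, err := hex.DecodeString(pubHex)
        if err != nil {
            log.Fatal(err)
        }

        payload, err := os.ReadFile("downloads/package.tar.gz") // cached, unencrypted download
        if err != nil {
            log.Fatal(err)
        }
        sig, err := os.ReadFile("downloads/package.tar.gz.sig") // detached signature
        if err != nil {
            log.Fatal(err)
        }

        if !ed25519.Verify(ed25519.PublicKey(pub), payload, sig) {
            log.Fatal("signature check failed: tampered download or wrong key")
        }
        log.Println("signature verified")
    }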

Microservices

Many new systems run in a microservice environment as a group of services in separate containers. Each container has its own codebase, dependencies, and build system, and containers should not share secrets like private TLS keys. Encryption should be done in separate containers on machines physically connected to these microservices. A microservice with a backdoor cannot steal TLS certificates and service tokens this way.
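
A minimal sketch of this separation, assuming a hypothetical internal service named app on port 8080 and placeholder certificate paths: a small Go reverse proxy terminates TLS in its own container, so only the proxy container ever mounts the private key.

    // tlsproxy.go - a minimal sketch of terminating TLS in a dedicated container
    // and forwarding plaintext to an internal microservice. Host names, ports,
    // and certificate paths are hypothetical.
    package main

    import (
        "log"
        "net/http"
        "net/http/httputil"
        "net/url"
    )

    func main() {
        // The application container never sees the private key; only this proxy does.
        backend, err := url.Parse("http://app:8080") // reachable only on the private network
        if err != nil {
            log.Fatal(err)
        }
        proxy := httputil.NewSingleHostReverseProxy(backend)

        // Certificate and key are mounted only into this container.
        log.Fatal(http.ListenAndServeTLS(":443", "/secrets/tls.crt", "/secrets/tls.key", proxy))
    }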

Space war

There are some nuanced considerations to take into account. TLS still relies on precise timing, so it may be essential to employ a company-wide time server. Depending on GPS timing is unnecessary for most services. Also, while public key certificate authorities are the industry norm, some organizations may benefit from issuing their certificates from their own self-signed authorities. They reduce their attack surface, and they do not need to disclose sensitive information and organizational practices, like revocations due to lost private keys.
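
As a sketch of such a self-signed authority, the Go program below creates a root certificate with the standard library. The organization name and the ten-year validity are placeholders, and a real deployment would keep the key offline or in an HSM.

    // selfca.go - a minimal sketch of creating a self-signed certificate
    // authority with the Go standard library. Names and validity are placeholders.
    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "log"
        "math/big"
        "os"
        "time"
    )

    func main() {
        key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        if err != nil {
            log.Fatal(err)
        }

        tmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{Organization: []string{"Example Internal CA"}},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().AddDate(10, 0, 0), // placeholder ten-year root
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageCRLSign,
            BasicConstraintsValid: true,
        }

        // Self-signed: the template acts as both certificate and parent.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            log.Fatal(err)
        }

        out, err := os.Create("ca.crt")
        if err != nil {
            log.Fatal(err)
        }
        defer out.Close()
        if err := pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
            log.Fatal(err)
        }
    }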

Hardware

Hardware selection may be a daunting task for IT professionals. You cannot see the entire design. A good rule of thumb is to choose systems with no or just a tiny number of jumpers on the PCB. It helps to avoid barely noticeable tampering, such as altered random number generators or privilege-escalation traps. It is better to disconnect as many ports as possible. USB may be unnecessary for server hardware, but it may be valuable to an attacker who gets into the server room as janitorial staff. The USB stack of kernels may contain vulnerabilities. Vulnerable ports include USB 3, FireWire, SATA, or even the I2C pins of HDMI. Disable suspicious settings in the firmware to turn off side processors with bus access. While ECC RAM may be helpful, it can contain malware that can access the system bus. GPU cards change a lot these days; they have powerful DMA engines and processors that can access the entire PCI bus. It is better to choose a known vendor and not to share hardware across sensitive workloads. Embedded systems should include only as much hardware as needed, so spare components cannot be used for recording or backdoors.

Semiconductors

Chip design is even more distant from the average IT professional. EE designers audit and sign their code, and the chip industry has the most rigorous auditing and testing I have ever seen. Still, many designers use off-the-shelf machines. Malware may open up these design machines and inject backdoors into the blueprints that are difficult to verify. As feature sizes shrink to a few nanometers, complexity increases, so the expectation is that hardware will be the next target of attackers. A general free-market approach in the semiconductor industry will help: if many vendors produce the same processors in many fabs, software and data analytics will eventually catch the faulty lots of chips. It makes the attacker's life more difficult. It is good to register lot IDs and hardware IDs so that they can be linked to security incidents.

Backups

Backups are vulnerable to data leaks. Backup monitoring is rarely as rigorous as production monitoring, so an attacker can leverage backups to steal data. This is especially true for company data that stays unchanged for years, such as customer records. It may be essential to verify production systems against backups to catch tampering and to ensure that the backups are still in place when needed. It makes sense to back up incoming traffic rather than normalized data to make issues reproducible. There are standards for destroying backup hardware.
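
A minimal sketch of such a verification pass, assuming hypothetical production and backup mount points: the Go program below walks a production directory and compares each file's SHA-256 digest with its counterpart in the backup.

    // verify_backup.go - a minimal sketch of verifying a production directory
    // against a backup copy, file by file. The directory paths are hypothetical.
    package main

    import (
        "crypto/sha256"
        "fmt"
        "io"
        "io/fs"
        "log"
        "os"
        "path/filepath"
    )

    // digest returns the SHA-256 of one file.
    func digest(path string) ([32]byte, error) {
        f, err := os.Open(path)
        if err != nil {
            return [32]byte{}, err
        }
        defer f.Close()
        h := sha256.New()
        if _, err := io.Copy(h, f); err != nil {
            return [32]byte{}, err
        }
        var sum [32]byte
        copy(sum[:], h.Sum(nil))
        return sum, nil
    }

    func main() {
        prod, backup := "/srv/data", "/mnt/backup/data" // hypothetical mount points

        err := filepath.WalkDir(prod, func(path string, d fs.DirEntry, err error) error {
            if err != nil || d.IsDir() {
                return err
            }
            rel, err := filepath.Rel(prod, path)
            if err != nil {
                return err
            }
            a, err := digest(path)
            if err != nil {
                return err
            }
            b, err := digest(filepath.Join(backup, rel))
            if err != nil {
                return fmt.Errorf("missing or unreadable in backup: %s: %w", rel, err)
            }
            if a != b {
                return fmt.Errorf("mismatch between production and backup: %s", rel)
            }
            return nil
        })
        if err != nil {
            log.Fatal(err)
        }
        log.Println("backup matches production")
    }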

Cryptocurrencies

While cryptocurrencies may be a lucrative way to leverage idle servers, they have risks as well. They generate constant traffic that is difficult to monitor, and the mining software may have malicious code inside. Also, while they may pay off financially, the servers running them burden the environment, especially if the power comes from burning coal or other fossil fuels.

Virtual

Virtual environments save cost. However, large-scale operations may benefit from running on bare metal, and comparing bare-metal and virtual results may help to resolve issues. Also, boot drives should be easily replaceable. Separating code from computing tasks improves observability; problems are easier to debug if there are fewer parameters on storage that can change.

Updates

One of the main tasks of IT professionals is keeping systems up to date. Not every update pays off. One may argue that knowledge base articles appear at a roughly constant rate, so you introduce new issues with about the same probability as you resolve them. That is true. However, the resolved issues are already known to attackers, while the new ones are less likely to be exploited yet. Updates are useful.

Ransomware

Ransomware has received a lot of publicity recently. A good rule of thumb is to avoid paying attackers, so as not to fuel them with more cash. A better approach is to spend the extra budget on training and on testing backup and restore operations so that IT staff can recover their systems with a button press. Append-only backup systems are better than random-access ones.
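
As a minimal illustration of append-only writes, the Go snippet below opens a hypothetical backup log so that existing records cannot be overwritten through this handle. Real systems would also enforce this at the storage layer, for example with WORM media or object locks.

    // appendlog.go - a minimal sketch of append-only backup records.
    // The file name is a placeholder.
    package main

    import (
        "log"
        "os"
        "time"
    )

    func main() {
        // O_APPEND forces every write to the end of the file, so earlier
        // backup records stay intact through this handle.
        f, err := os.OpenFile("backup.log", os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0o600)
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()

        if _, err := f.WriteString(time.Now().UTC().Format(time.RFC3339) + " snapshot completed\n"); err != nil {
            log.Fatal(err)
        }
    }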

Denial

Denial-of-service attacks may soon become more common with the help of artificial intelligence. Simple asynchronous operations can be made to fault by injecting random delays, and artificial intelligence can learn these weaknesses by training on unit tests. An attacker can silently generate support cases and increase costs, making the company unprofitable. It helps if you fix all flaky test cases. Using the cloud, load balancers, and security companies in case of sudden spikes may help to prevent external attacks.
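
A minimal sketch of that kind of delay injection, using a hypothetical doWork operation: the Go test below injects a random delay and fails intermittently when the timeout assumption is too tight, which is exactly the flakiness worth fixing before an attacker finds it.

    // delay_injection_test.go - a minimal sketch; package and function names
    // are hypothetical.
    package worker

    import (
        "math/rand"
        "testing"
        "time"
    )

    // doWork simulates an asynchronous operation whose completion time varies.
    func doWork(done chan<- bool) {
        // Injected random delay: a real harness would wrap I/O or RPC calls.
        time.Sleep(time.Duration(rand.Intn(50)) * time.Millisecond)
        done <- true
    }

    // TestWithInjectedDelay fails intermittently because the timeout below is
    // tighter than the worst-case injected delay.
    func TestWithInjectedDelay(t *testing.T) {
        done := make(chan bool)
        go doWork(done)
        select {
        case <-done:
            // ok
        case <-time.After(25 * time.Millisecond):
            t.Fatal("operation timed out; the timing assumption is too tight")
        }
    }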

Cloud

Cloud computing is great for bursting, public interfaces, and backups. Some companies may prefer an on-premises backup in the background, and identical on-premises systems also help to fail over cloud scenarios. VPN is a great way to connect to the cloud and keep attackers from analyzing network traffic. I suggest using physical tokens for authentication rather than relying only on passwords.

Tampering

Spoofing and tampering with data have become common these days. Logging, digital signing instead of full encryption, and analytics help to reveal inconsistent data. Restore inconsistent data from backups.

Theft

Data theft can be a big issue since it may only be recognized when the data appears on the dark web. One rule of thumb is not to collect what is not needed. Another is to give staff only remote desktops, so that the edge gets only the required information and never the sensitive raw data. Recording and monitoring may deter insider threats. Once data is exposed, there are reporting requirements that vary by jurisdiction. Collecting your leaked data on the dark web reduces its value and the chances that it stays attractive to attackers; it also exposes incidents to fix. Professional security companies and insurance may help to resolve rare incidents.

Artificial intelligence

Be careful about the extensive use of AI. It is one thing to have to tell your boss that an incident happened; it is a very different thing not to know why. AI may be problematic to debug when an issue occurs. It is great for analytics and training, but I would be wary about extensive real-world use. It is better to design actors with a fail-safe mode in case the AI has a bug. Do not allow backdoors into AI systems. AI may repeat mistakes, so human supervision will limit its use at scale. If AI surrounded us and took care of communication, it would eventually make us obsolete, so we will probably keep our own distinct space, like lanes free of self-driving cars. Also, brain-controlled tools may cause psychological effects by mixing thought, intent, and action, damaging decision making.