The British government recently released a "Code of Practice" for IoT device developers to responsibly design secure devices. Is this list of best practices a precursor to regulation? Should it be?

It is estimated that there are over 23 billion IoT devices currently connected worldwide, and this number is expected to grow to 75 billion by 2025. Securing these devices has become a global conversation.

Individual companies are expected to have security in mind when they design, manufacture, and sell their devices. But, with more and more IoT devices coming into operation, governments are looking at becoming involved.

 

Understanding Security Threats for IoT Devices

While IoT devices are often simple and unable to process large amounts of data (especially when compared to desktop PCs and tablets), they do possess one ability that requires very little processing power: they can send messages across the internet, and they often record data about their environment. One IoT device on its own cannot do much, but combine it with a thousand other devices and suddenly a DDoS (distributed denial-of-service) attack becomes possible.

In order for an unauthorized party to gain access to an IoT device, they first need to hack the device, either physically or remotely.

Physical hacks involve reverse engineering the circuitry to find unsecured bus lines or debug ports that let the attacker reach the firmware. From there, potentially sensitive information such as usernames and passwords can be extracted, along with certificates and even security flaws in the system. Remote hacking involves hijacking the IoT device over the network (i.e., from anywhere in the world): an attacker attempts to log in to the device and replace the firmware with a version containing malware that allows control of the device.

Remote attacks can use brute-force techniques against weak passwords, but more often than not, entry can be gained by trying as few as 60 common username/password combinations.

Surely such malware must be incredibly well engineered if devices fall in under 60 attempts? Does it exploit some special byte-code that can unlock machinery? The true answer is shocking, and it has governments concerned to the point of considering regulating the industry.

 

 

Security Ignorance: A Key Failing for Cybersecurity

Many products on the market take advantage of ready-made platforms such as embedded Linux, which can run on small ARM-based processors. While this provides a useful foundation to build applications on, it also comes with some serious security risks.

Unfortunately, many operating system images ship with default usernames and passwords (such as “admin” and “password”) and, if the user does not change these, almost any attacker can gain entry in seconds. Such systems may also have legacy services running (such as Telnet) that give attackers entry points. Combined with default credentials, the result is a device that can easily be accessed remotely.
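Mitigating this at the firmware level is straightforward. Here is a minimal sketch (the credential list and function names are illustrative, not from any real firmware) of a first-boot check that refuses to keep factory-default credentials:

```python
# Hypothetical first-boot credential check for an IoT device.
# The default list below is illustrative, not from any real product.

DEFAULT_CREDENTIALS = {
    ("admin", "admin"),
    ("admin", "password"),
    ("root", "root"),
    ("root", "12345"),
}

def is_default_credential(username: str, password: str) -> bool:
    """Return True if the pair matches a known factory default."""
    return (username, password) in DEFAULT_CREDENTIALS

def validate_credentials(username: str, password: str) -> None:
    """Reject default pairs and trivially short passwords before saving."""
    if is_default_credential(username, password):
        raise ValueError("factory-default credentials must be changed")
    if len(password) < 12:
        raise ValueError("password too short")
```

A device that refuses to leave setup mode until `validate_credentials` passes is immune to the dictionary-based attacks described below, regardless of how careless the end user is.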

This is what happened with the Mirai infection, which scanned the internet for the IP addresses of IoT devices and then attempted to log in over Telnet using roughly 60 common username/password pairs. Once in, Mirai installed its own malware, turning the IoT device into a bot and adding it to a botnet that could be used to launch DDoS attacks.
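Conceptually, Mirai's credential scan is nothing more than a loop over a fixed dictionary. A minimal sketch of that logic (the pairs below are a handful from Mirai's leaked source code; `try_login` is a stand-in for the actual Telnet handshake):

```python
# A few of the ~60 username/password pairs from Mirai's leaked source code.
COMMON_PAIRS = [
    ("root", "xc3511"),
    ("root", "vizxv"),
    ("admin", "admin"),
    ("root", "default"),
    ("admin", "password"),
]

def probe(try_login, pairs=COMMON_PAIRS):
    """Return the first accepted (username, password) pair, or None.

    try_login(username, password) -> bool is a placeholder for the
    real network step (a Telnet login attempt, in Mirai's case).
    """
    for username, password in pairs:
        if try_login(username, password):
            return (username, password)
    return None
```

Any device whose credentials appear in such a dictionary falls in at most 60 attempts, with no exploit or engineering sophistication required.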

Mirai did not spread because of hardware flaws or unforeseen security holes. The severity of this particular attack came down to engineers at multiple companies lacking a basic understanding of security, failing to appreciate the potential danger of IoT devices as bots, not understanding the Linux systems they shipped, or simple general ignorance.

While these issues are certainly important from an industry standpoint, they're also crucial from an infrastructure standpoint as the IoT integrates into the very systems that run modern life.

 

Hacking Infrastructure, Today and Tomorrow

Hacks of payment systems via streaming platforms (Spotify, Sony, etc.) have spurred various reactions from regulatory bodies. But while information security is a concern, there is considerably more apprehension regarding public services such as traffic control, power distribution, and water.

In the past, such services used electronic equipment, but it was not accessible via the internet, so the only way to target them was through physical intervention. Now, cyber attacks can be launched from remote locations to disrupt these services.

As it turns out, infrastructure attacks are already happening, and have been for years. Ukraine has reportedly sustained years of cyber attacks on its infrastructure, sometimes leaving thousands without electricity.

Some examples are less dire, such as when hackers activated the tornado siren system in Dallas, Texas last year. Activating the system when it isn't needed does have negative repercussions, but the implication that hackers could disable the system when it is needed is more unsettling. 

Concern for the security of high-tech systems is growing by the day. Ballot processing has been an area of contention, as vulnerabilities in electronic voting machines could have far-reaching consequences on a global scale.

The long-heralded smart city concept also represents a veritable quagmire of vulnerabilities. IoT sensor systems embedded throughout cities have promised more efficient utilities and better safety for years. They may also soon become the lynchpin of the autonomous vehicle industry, guiding self-driving cars through V2X (vehicle-to-everything) communication.

With this much of our technological future at stake, preventative action against cyber attacks is paramount. But who defines what cybersecurity best practices are?

 

Cybersecurity Oversight and Best Practices

Security is clearly a point of discussion for governments all over the world. On one hand, oversight infrastructure must first be created for action to even be possible. The US Congress, for example, proposed the Cybersecurity and Infrastructure Security Agency Act of 2018 (CISA).

On the other hand, authorities can issue best practices. The British government recently launched its “Code of Practice”, a set of guidelines for engineers who design IoT products. Suggestions include giving each device unique usernames and passwords, avoiding unencrypted message protocols (e.g., HTTPS instead of HTTP), providing certificates for each IoT device, taking advantage of specialized hardware that can store keys securely, and keeping software up to date.
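As one concrete illustration of the "encrypted protocols" recommendation, here is a minimal sketch of how a device-side client might enforce certificate-verified TLS before sending any telemetry. The `ssl` calls are standard Python library; the function name is my own:

```python
import ssl

def secure_client_context() -> ssl.SSLContext:
    """Build a TLS context that verifies server certificates and hostnames."""
    # create_default_context() loads the system CA store and enables
    # CERT_REQUIRED verification plus hostname checking by default.
    ctx = ssl.create_default_context()
    # Refuse legacy protocol versions with known weaknesses.
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    return ctx

# Wrapping a socket with this context (instead of writing plaintext to
# port 80) is the difference between HTTP and HTTPS at the device level:
#
#   with socket.create_connection(("example.com", 443)) as sock:
#       with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
#           tls.sendall(telemetry_bytes)
```

The key point is that verification is on by default here; a device only ends up speaking plaintext (or skipping certificate checks) when an engineer actively chooses to.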

Currently, these are only suggestions, not law, which means engineers are still free to use default passwords and plain HTTP for communication. However, some security experts, such as Bruce Schneier, believe that intervention is necessary, even to the point of slowing innovation to give security time to catch up. Others question whether making basic security practices mandatory is necessary at all.

Such measures could arguably make it harder for businesses to sell products due to red tape and possible requirements to submit technical documents to a security board. On the other hand, they could prevent damage to critical services and protect economies.

 


As an engineer, I often dislike regulation that makes it harder to get products to market through pointless paperwork and bureaucracy. On the issue of security, however, I am in favor of regulation of some kind. Banks, for example, have to follow strict requirements to ensure that accounts are secure. Does it make sense for hardware and software developers to follow a similar standard? Such a standard could target, for example, microcontrollers that use unsecured buses to communicate with memory modules storing passwords, software that ships with default passwords, or plaintext messaging schemes used over the internet.

What are your reactions to the prospect of regulations placed on security for device design?

 


Comments

2 Comments


  • lisandropm 2018-10-26

    There should be a law stating that if a product gets discontinued its firmware/software should be fully released. Otherwise there would be no way to fix them.

  • macgvr 2018-10-27

    I have grave concerns regarding IOT security. I am not implementing any IOT devices because they are simply not designed with security in mind. I am unwilling to deal with the aftermath of a hack attack as a result of unsecured devices in my home, or the company I work for. I also don’t want to be responsible for DoS attacks because of devices I own. It is insanity to make devices that have default passwords and other such insecurities in our present climate where Internet facing systems are under constant attack. I watch our company’s firewall logs and see a never ending stream of attacks probing for an opening in our network. I have had to disable RDP because hackers were locking users out of their computers by trying to guess their passwords. Until coders wake up and companies start producing secure devices, these devices will have no place in my home or the company I work for.