News

Breaking the System to Fix It: The “Hackers” That Hunt for Security Vulnerabilities

November 25, 2017 by Chantelle Dubois

Sometimes the people trying to hack your devices are an important part of the security ecosystem.

From white hat hackers to security firms, sometimes the people trying to hack our devices are an important part of the security ecosystem.

Typically, the term “hacker” conjures visions of shady individuals hunkered over keyboards, using vast technical knowledge to do nefarious deeds. While there are plenty of well-known ethical hackers who got their start that way, there are also many today who dedicate their careers or research to improving security.

While some do it as a career, working for companies that offer rewards for reported flaws, others just enjoy the challenge of trying to break a system. Others still take it upon themselves to test the security of devices they suspect are vulnerable (just ask electrical engineer Anthony Rose about BLE smart locks).

Academics, security experts, and "hackers" alike dedicate their time and skills to identifying and mitigating security threats. These professionals push the limits of systems to test their resiliency against potential threats, so that flaws can be fixed before they're taken advantage of.

Here's a look at some of the work of the people who meddle with our systems and devices—for our benefit. 

Biometric Security

Biometric security has become a commonplace feature on personal devices. For example, a large number of smartphones and laptops now allow users to log in using their fingerprints.

Most recently, Apple rolled out the iPhone X, which can be unlocked using facial recognition. The company advertised the feature as significantly more secure than fingerprint scanning, citing a 1-in-1,000,000 chance of a false positive recognition compared to 1 in 50,000 for fingerprints. The system is supposed to be “attention aware,” so it knows when you are looking at your device, and it maps 30,000 infrared dots to capture more than just the surface physical features of a 2D image. It’s supposed to work with or without makeup and in different lighting conditions, and in general it’s meant to be one of the most sophisticated facial recognition systems available, using “anti-spoofing neural networks” to prevent false positive scans.
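To put those advertised rates in perspective, here is a quick back-of-the-envelope sketch (the attempt counts are arbitrary illustrations, not figures from Apple or this article) showing how the chance of at least one false accept grows with repeated random attempts at each rate.

```python
# Back-of-the-envelope comparison of the advertised false-accept rates.
# The 1-in-1,000,000 (Face ID) and 1-in-50,000 (fingerprint) figures are the
# ones cited above; the attempt counts below are arbitrary illustrations.

def p_at_least_one_false_accept(rate, attempts):
    """Probability that at least one of `attempts` independent random
    presentations is falsely accepted, given a per-attempt rate."""
    return 1.0 - (1.0 - rate) ** attempts

FACE_ID_RATE = 1 / 1_000_000
FINGERPRINT_RATE = 1 / 50_000

for attempts in (1, 100, 10_000):
    face = p_at_least_one_false_accept(FACE_ID_RATE, attempts)
    finger = p_at_least_one_false_accept(FINGERPRINT_RATE, attempts)
    print(f"{attempts:>6} attempts: Face ID ~{face:.4%}, fingerprint ~{finger:.4%}")
```

Note that these figures describe random impostors; deliberate spoofing of a specific target, which the groups below set out to test, is a different problem entirely.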

However, since the launch of this new feature, a number of groups have been working to show that the facial recognition feature is vulnerable. They do this by putting it to the test.

Wired ran a series of experiments using highly detailed, textured masks made of silicone, vinyl, and gelatin. Experts helped the team create a custom, flush-fitting mask to imitate the likeness of an individual, down to the hair follicles in the eyebrows. In a series of trials, they were never able to successfully unlock the iPhone.

Another group in Vietnam, the security firm Bkav, claims to have beaten the system using a 3D-printed face with paper cutouts of the victim’s eyes, mouth, and nose. The team describes the 3D print as highly detailed, based on a thorough scan of the owner’s face, and a video circulated online showing the unlock in action. However, because their method is far more rudimentary than many other attempts, the authenticity of the demonstration has been questioned.

It has also been shown that twins and family members who closely resemble each other can unlock each other’s phones. So far, though, for a malicious user to unlock a randomly targeted phone, obtaining a high-resolution 3D scan of the owner’s face would take a lot of time and effort, as well as some knowledge of how the device authenticates those features.

Whether or not their methods succeed, the work of these groups helps identify which approaches may or may not beat the iPhone’s facial recognition system. When one does succeed, Apple can take note of how it was done and roll out updates to close the gap.

Infotainment Security in Automobiles

Automotive security is a relatively new domain now that more vehicles are equipped with smart systems. In response, the Future of Automotive Security Technology Research (FASTR) consortium was founded in 2016 by Aeris, Intel, and Uber to address security flaws by developing standards and norms and by building awareness.

Most recently, researchers from Ixia, a security firm, have demonstrated that infotainment systems, standard in most vehicles today, are particularly vulnerable. This is largely because they connect freely to devices like your smartphone, and most users sync them with sensitive data such as contacts, text messages, and call history. Such information is stored securely on the phone, but once it is saved onto the infotainment system, it sits unencrypted, in plain text.

Ixia engineers also highlighted that these infotainment systems have access to, and can control, other vehicle features and information, all through easy-to-access debugging tools. This can allow an attacker to extract information or tamper with settings for systems like GPS.

The researchers demonstrated these vulnerabilities by plugging in a USB stick loaded with Bash scripts. They were not only able to extract this data, but also to upload it regularly by automatically connecting to unsecured Wi-Fi networks, and they could even extract GPS coordinates for near-real-time tracking.
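To make the plain-text problem concrete, here is a minimal sketch (not Ixia's actual scripts; the mount point and file formats are hypothetical assumptions) of how little effort it takes to read synced data once something can run against an infotainment filesystem that stores it unencrypted.

```python
# Illustrative sketch only -- not Ixia's scripts. The mount point and file
# format below are hypothetical assumptions; the point is simply that data
# stored unencrypted is trivially readable by anything that can run on
# (or mount) the infotainment filesystem.

import csv
from pathlib import Path

SYNCED_DIR = Path("/mnt/infotainment/userdata")  # hypothetical mount point


def read_contacts(path):
    """Read a plain-text CSV of synced contacts: no decryption step needed."""
    with path.open(newline="") as f:
        return list(csv.DictReader(f))


def parse_gprmc(sentence):
    """Extract (latitude, longitude) from a standard NMEA $GPRMC sentence."""
    fields = sentence.split(",")
    if not sentence.startswith("$GPRMC") or fields[2] != "A":
        return None  # no valid GPS fix in this sentence
    lat = float(fields[3][:2]) + float(fields[3][2:]) / 60.0
    lon = float(fields[5][:3]) + float(fields[5][3:]) / 60.0
    if fields[4] == "S":
        lat = -lat
    if fields[6] == "W":
        lon = -lon
    return lat, lon


if __name__ == "__main__":
    # A sample logged GPS sentence, parsed without touching any real hardware.
    sample = "$GPRMC,123519,A,4807.038,N,01131.000,E,022.4,084.4,230394,003.1,W*6A"
    print("Position from logged NMEA sentence:", parse_gprmc(sample))

    contacts_file = SYNCED_DIR / "contacts.csv"
    if contacts_file.exists():
        print("Synced contacts recovered:", len(read_contacts(contacts_file)))
```

And since, as Ixia showed, the head unit can be made to join unsecured Wi-Fi networks on its own, there is little to stop results like these from being shipped off the vehicle, which is exactly why encryption at rest matters here.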

This work is important because it highlights just how vulnerable car infotainment systems are, and by showing exactly what can be extracted and how, it helps automakers recognize the need to think more carefully about infotainment system security.

Image courtesy of GM

Google Vulnerability Rewards Program

In 2012, Google launched Pwnium, a competition with over a million dollars' worth of prizes for anyone able to find bugs in the Chrome browser. The largest prize available was $60,000, for flaws that enabled “Chrome/Windows 7 local OS user account persistence using only bugs in Chrome itself”; in other words, a flaw that allowed full control of the system.

Not even two weeks after the launch of the competition, a Russian student named Sergey Glazunov submitted a flaw in the browser. The exploit took advantage of a bug to bypass the Chrome browser sandbox; once outside the sandbox, nearly any action could be taken with full user privileges.

All bugs successfully entered into the competition are patched by Google, helping the company keep its browser as secure as possible. By providing cash incentives, Google gives hackers motivation to report flaws and enlists the help of individuals outside the company.

Google scrapped the "Pwnium" project in 2014 in favor of its Vulnerability Rewards Program, but the spirit is the same. In 2016, Google paid out over $3 million in rewards for reported vulnerabilities (or "bug bounties"). The company's philosophy appears to be that, if people are going to hunt for vulnerabilities in its products, it should be more lucrative to sell that information to Google itself than to the highest bidder.


The skills that make hackers dangerous are the same skills that can identify the vulnerabilities they capitalize on. In the arms race between companies and malicious third parties, it's nice to know that some of the people testing the limits of our technology are on our side.