New Book Announcement: Click Here to Kill Everybody
[2018.09.04] I am pleased to announce the publication of my latest book: Click Here to Kill Everybody: Security and Survival in a Hyper-connected World. In it, I examine how our new immersive world of physically capable computers affects our security.
I argue that this changes everything about security. Attacks are no longer just about data; they now affect life and property: cars, medical devices, thermostats, power plants, drones, and so on. All of our security assumptions treat computers as fundamentally benign: that no matter how bad the breach or vulnerability is, it's just data. That's simply not true anymore. As automation, autonomy, and physical agency become more prevalent, the trade-offs we made for things like authentication, patching, and supply chain security no longer make any sense. The things we've done before will no longer work in the future.
This is a book about technology, and it's also a book about policy. The regulation-free Internet that we've enjoyed for the past decades will not survive this new, more dangerous, world. I fear that our choice is no longer between government regulation and no government regulation; it's between smart government regulation and stupid regulation. My aim is to discuss what a regulated Internet might look like before one is thrust upon us after a disaster.
Click Here to Kill Everybody is available starting today. You can order a copy from Amazon, Barnes & Noble, Books-a-Million, Norton's webpage, or anyplace else books are sold. If you're going to buy it, please do so this week. First-week sales matter in this business.
Reviews so far from the Financial Times, Nature, and Kirkus.
** *** ***** ******* *********** *************
Speculation Attack Against Intel's SGX
[2018.08.16] Another speculative-execution attack against Intel's SGX.
At a high level, SGX is a new feature in modern Intel CPUs which allows computers to protect users' data even if the entire system falls under the attacker's control. While it was previously believed that SGX is resilient to speculative execution attacks (such as Meltdown and Spectre), Foreshadow demonstrates how speculative execution can be exploited for reading the contents of SGX-protected memory as well as extracting the machine's private attestation key. Making things worse, due to SGX's privacy features, an attestation report cannot be linked to the identity of its signer. Thus, it only takes a single compromised SGX machine to erode trust in the entire SGX ecosystem.
News article.
The details of the Foreshadow attack are a little more complicated than those of Meltdown. In Meltdown, the attempt to perform an illegal read of kernel memory triggers the page fault mechanism (by which the processor and operating system cooperate to determine which bit of physical memory a memory access corresponds to, or they crash the program if there's no such mapping). Attempts to read SGX data from outside an enclave receive special handling by the processor: reads always return a specific value (-1), and writes are ignored completely. The special handling is called "abort page semantics" and should be enough to prevent speculative reads from being able to learn anything.
However, the Foreshadow researchers found a way to bypass the abort page semantics. The data structures used to control the mapping of virtual-memory addresses to physical addresses include a flag to say whether a piece of memory is present (loaded into RAM somewhere) or not. If memory is marked as not being present at all, the processor stops performing any further permissions checks and immediately triggers the page fault mechanism: this means that the abort page mechanics aren't used. It turns out that applications can mark memory, including enclave memory, as not being present by removing all permissions (read, write, execute) from that memory.
EDITED TO ADD: Intel has responded:
L1 Terminal Fault is addressed by microcode updates released earlier this year, coupled with corresponding updates to operating system and hypervisor software that are available starting today. We've provided more information on our web site and continue to encourage everyone to keep their systems up-to-date, as it's one of the best ways to stay protected.
I think this is the "more information" they're referring to, although this is a comprehensive link to everything the company is saying about the vulnerability.
** *** ***** ******* *********** *************
New Ways to Track Internet Browsing
[2018.08.17] Interesting research on web tracking: "Who Left Open the Cookie Jar? A Comprehensive Evaluation of Third-Party Cookie Policies":
Abstract: Nowadays, cookies are the most prominent mechanism to identify and authenticate users on the Internet. Although protected by the Same Origin Policy, popular browsers include cookies in all requests, even when these are cross-site. Unfortunately, these third-party cookies enable both cross-site attacks and third-party tracking. As a response to these nefarious consequences, various countermeasures have been developed in the form of browser extensions or even protection mechanisms that are built directly into the browser.
In this paper, we evaluate the effectiveness of these defense mechanisms by leveraging a framework that automatically evaluates the enforcement of the policies imposed to third-party requests. By applying our framework, which generates a comprehensive set of test cases covering various web mechanisms, we identify several flaws in the policy implementations of the 7 browsers and 46 browser extensions that were evaluated. We find that even built-in protection mechanisms can be circumvented by multiple novel techniques we discover. Based on these results, we argue that our proposed framework is a much-needed tool to detect bypasses and evaluate solutions to the exposed leaks. Finally, we analyze the origin of the identified bypass techniques, and find that these are due to a variety of implementation, configuration and design flaws.
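To make the mechanism concrete, here is a minimal sketch of my own (not code from the paper) using Python's standard http.server: it sets one cookie that a browser may attach to cross-site requests and one that it should not. Whether a given browser or extension actually enforces that distinction -- across iframes, redirects, prefetches, and the other request types the paper enumerates -- is exactly what the researchers' framework tests.

    # Minimal sketch: a server that sets cookies with different SameSite
    # policies. The cookie names and values are illustrative only.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class CookieDemo(BaseHTTPRequestHandler):
        def do_GET(self):
            self.send_response(200)
            # May be sent on cross-site requests (third-party context) --
            # this is the behavior that enables cross-site tracking.
            self.send_header("Set-Cookie", "tracker=abc123; SameSite=None; Secure")
            # Should only be sent on same-site requests; the paper shows such
            # policies can sometimes be bypassed via unusual request mechanisms.
            self.send_header("Set-Cookie", "session=xyz789; SameSite=Strict; HttpOnly")
            self.send_header("Content-Type", "text/plain")
            self.end_headers()
            self.wfile.write(b"cookie demo\n")

    if __name__ == "__main__":
        HTTPServer(("localhost", 8000), CookieDemo).serve_forever()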
The researchers discovered many new tracking techniques that work despite all existing anonymous browsing tools. These have not yet been seen in the wild, but that will change soon.
Three news articles. Boing Boing post.
** *** ***** ******* *********** *************
James Mickens on the Current State of Computer Security
[2018.08.20] James Mickens gave an excellent keynote at the USENIX Security Conference last week, talking about the social aspects of security -- racism, sexism, etc. -- and the problems with machine learning and the Internet.
Worth watching.
** *** ***** ******* *********** *************
"Two Stage" BMW Theft Attempt
[2018.08.21] Modern cars have alarm systems that automatically connect to a remote call center. This makes cars harder to steal, since tripping the alarm causes a quick response. This article describes a theft attempt that tried to neutralize that security system. In the first attack, the thieves just disabled the alarm system and then left. If the owner had not immediately repaired the car, the thieves would have returned the next night and -- no longer working under time pressure -- stolen the car.
** *** ***** ******* *********** *************
Good Primer on Two-Factor Authentication Security
[2018.08.22] Stuart Schechter published a good primer on the security issues surrounding two-factor authentication.
While it's often an important security measure, it's not a panacea. Stuart discusses the usability and security issues that you have to think about before deploying the system.
** *** ***** ******* *********** *************
John Mueller and Mark Stewart on the Risks of Terrorism
[2018.08.23] Another excellent paper by the Mueller/Stewart team: "Terrorism and Bathtubs: Comparing and Assessing the Risks":
Abstract: The likelihood that anyone outside a war zone will be killed by an Islamist extremist terrorist is extremely small. In the United States, for example, some six people have perished each year since 9/11 at the hands of such terrorists -- vastly smaller than the number of people who die in bathtub drownings. Some argue, however, that the incidence of terrorist destruction is low because counterterrorism measures are so effective. They also contend that terrorism may well become more frequent and destructive in the future as terrorists plot and plan and learn from experience, and that terrorism, unlike bathtubs, provides no benefit and exacts costs far beyond those in the event itself by damagingly sowing fear and anxiety and by requiring policy makers to adopt countermeasures that are costly and excessive. This paper finds these arguments to be wanting. In the process, it concludes that terrorism is rare outside war zones because, to a substantial degree, terrorists don't exist there. In general, as with rare diseases that kill few, it makes more policy sense to expend limited funds on hazards that inflict far more damage. It also discusses the issue of risk communication for this hazard.
** *** ***** ******* *********** *************
Future Cyberwar
[2018.08.27] A report for the Center for Strategic and International Studies looks at surprise and war. One of the report's cyberwar scenarios is particularly compelling. It doesn't just map cyber onto today's tactics, but completely reimagines future tactics that include a cyber component (quote starts on page 110).
The U.S. secretary of defense had wondered this past week when the other shoe would drop. Finally, it had, though the U.S. military would be unable to respond effectively for a while.
The scope and detail of the attack, not to mention its sheer audacity, had earned the grudging respect of the secretary. Years of worry about a possible Chinese "Assassin's Mace" -- a silver bullet super-weapon capable of disabling key parts of the American military -- turned out to be focused on the wrong thing.
The cyber attacks varied. Sailors stationed at the 7th Fleet's homeport in Japan awoke one day to find their financial accounts, and those of their dependents, empty. Checking, savings, retirement funds: simply gone. The Marines based on Okinawa were under virtual siege by the populace, whose simmering resentment at their presence had boiled over after a YouTube video posted under the account of a Marine stationed there had gone viral. The video featured a dozen Marines drunkenly gang-raping two teenaged Okinawan girls. The video was vivid, the girls' cries heart-wrenching, the cheers of the Marines sickening. And all of it fake. The National Security Agency's initial analysis of the video had uncovered digital fingerprints showing that it was a computer-assisted lie, and could prove that the Marine's account under which it had been posted was hacked. But the damage had been done.
There was the commanding officer of Edwards Air Force Base whose Internet browser history had been posted on the squadron's Facebook page. His command turned on him as a pervert; his weak protestations that he had not visited most of the posted links could not counter his admission that he had, in fact, trafficked some of them. Lies mixed with the truth.
Soldiers at Fort Sill were at each other's throats thanks to a series of text messages that allegedly unearthed an adultery ring on base.
The variations elsewhere were endless. Marines suddenly owed hundreds of thousands of dollars on credit lines they had never opened; sailors received death threats on their Twitter feeds; spouses and female service members had private pictures of themselves plastered across the Internet; older service members received notifications about cancerous conditions discovered in their latest physical.
Leadership was not exempt. Under the hashtag #PACOMMUSTGO a dozen women allegedly described harassment by the commander of Pacific Command. Editorial writers demanded that, under the administration's "zero tolerance" policy, he step aside while Congress held hearings.
There was not an American service member or dependent whose life had not been digitally turned upside down. In response, the secretary had declared "an operational pause," directing units to stand down until things were sorted out.
Then, China had made its move, flooding the South China Sea with its conventional forces, enforcing a sea and air identification zone there, and blockading Taiwan. But the secretary could only respond weakly with a few air patrols and diversions of ships already at sea. Word was coming in through back channels that the Taiwanese government, suddenly stripped of its most ardent defender, was already considering capitulation.
I found this excerpt here. The author is Mark Cancian.
** *** ***** ******* *********** *************
NotPetya
[2018.08.28] Andy Greenberg wrote a fascinating account of the Russian NotPetya worm, with an emphasis on its effects on the company Maersk.
Boing Boing post.
** *** ***** ******* *********** *************
CIA Network Exposed through Insecure Communications System
[2018.08.29] Interesting story of a CIA intelligence network in China that was exposed partly because of a computer security failure:
Although they used some of the same coding, the interim system and the main covert communication platform used in China at this time were supposed to be clearly separated. In theory, if the interim system were discovered or turned over to Chinese intelligence, people using the main system would still be protected -- and there would be no way to trace the communication back to the CIA. But the CIA's interim system contained a technical error: It connected back architecturally to the CIA's main covert communications platform. When the compromise was suspected, the FBI and NSA both ran "penetration tests" to determine the security of the interim system. They found that cyber experts with access to the interim system could also access the broader covert communications system the agency was using to interact with its vetted sources, according to the former officials.
In the words of one of the former officials, the CIA had "[f*cked] up the firewall" between the two systems.
U.S. intelligence officers were also able to identify digital links between the covert communications system and the U.S. government itself, according to one former official -- links the Chinese agencies almost certainly found as well. These digital links would have made it relatively easy for China to deduce that the covert communications system was being used by the CIA. In fact, some of these links pointed back to parts of the CIA's own website, according to the former official.
People died because of that mistake.
The moral -- which is to go back to pre-computer systems in these high-risk sophisticated-adversary circumstances -- is the right one, I think.
** *** ***** ******* *********** *************
Cheating in Bird Racing
[2018.08.30] I've previously written about people cheating in marathon racing by driving -- or otherwise getting near the end of the race by faster means than running. In China, two people were convicted of cheating in a pigeon race:
The essence of the plan involved training the pigeons to believe they had two homes. The birds had been secretly raised not just in Shanghai but also in Shangqiu.
When the race was held in the spring of last year, the Shanghai Pigeon Association took all the entrants from Shanghai to Shangqiu and released them. Most of the pigeons started flying back to Shanghai.
But the four specially raised pigeons flew instead to their second home in Shangqiu. According to the court, the two men caught the birds there and then carried them on a bullet train back to Shanghai, concealed in milk cartons. (China prohibits live animals on bullet trains.)
When the men arrived in Shanghai, they released the pigeons, which quickly fluttered to their Shanghai loft, seemingly winning the race.
** *** ***** ******* *********** *************
Eavesdropping on Computer Screens through the Webcam Mic
[2018.08.31] Yet another way of eavesdropping on someone's computer activity: using the webcam microphone to "listen" to the computer's screen.
** *** ***** ******* *********** *************
Using a Smartphone's Microphone and Speakers to Eavesdrop on Passwords
[2018.09.05] It's amazing that this is even possible: "SonarSnoop: Active Acoustic Side-Channel Attacks":
Abstract: We report the first active acoustic side-channel attack. Speakers are used to emit human inaudible acoustic signals and the echo is recorded via microphones, turning the acoustic system of a smart phone into a sonar system. The echo signal can be used to profile user interaction with the device. For example, a victim's finger movements can be inferred to steal Android phone unlock patterns. In our empirical study, the number of candidate unlock patterns that an attacker must try to authenticate herself to a Samsung S4 Android phone can be reduced by up to 70% using this novel acoustic side-channel. Our approach can be easily applied to other application scenarios and device types. Overall, our work highlights a new family of security threats.
News article.
** *** ***** ******* *********** *************
Five-Eyes Intelligence Services Choose Surveillance Over Security
[2018.09.06] The Five Eyes -- the intelligence consortium of the rich English-speaking countries (the US, Canada, the UK, Australia, and New Zealand) -- have issued a "Statement of Principles on Access to Evidence and Encryption" where they claim their needs for surveillance outweigh everyone's needs for security and privacy.
...the increasing use and sophistication of certain encryption designs present challenges for nations in combatting serious crimes and threats to national and global security. Many of the same means of encryption that are being used to protect personal, commercial and government information are also being used by criminals, including child sex offenders, terrorists and organized crime groups to frustrate investigations and avoid detection and prosecution.
Privacy laws must prevent arbitrary or unlawful interference, but privacy is not absolute. It is an established principle that appropriate government authorities should be able to seek access to otherwise private information when a court or independent authority has authorized such access based on established legal standards. The same principles have long permitted government authorities to search homes, vehicles, and personal effects with valid legal authority.
The increasing gap between the ability of law enforcement to lawfully access data and their ability to acquire and use the content of that data is a pressing international concern that requires urgent, sustained attention and informed discussion on the complexity of the issues and interests at stake. Otherwise, court decisions about legitimate access to data are increasingly rendered meaningless, threatening to undermine the systems of justice established in our democratic nations.
To put it bluntly, this is reckless and shortsighted. I've repeatedly written about why this can't be done technically, and why trying results in insecurity. But there's a greater principle at stake: we need to decide, as nations and as a society, to put defense first. We need a "defense dominant" strategy for securing the Internet and everything attached to it.
This is important. Our national security depends on the security of our technologies. Demanding that technology companies add backdoors to computers and communications systems puts us all at risk. We need to understand that these systems are critical to our society and -- now that they can affect the world in a direct physical manner -- to our lives and property as well.
This is what I just wrote, in Click Here to Kill Everybody:
There is simply no way to secure US networks while at the same time leaving foreign networks open to eavesdropping and attack. There's no way to secure our phones and computers from criminals and terrorists without also securing the phones and computers of those criminals and terrorists. On the generalized worldwide network that is the Internet, anything we do to secure its hardware and software secures it everywhere in the world. And everything we do to keep it insecure similarly affects the entire world.
This leaves us with a choice: either we secure our stuff, and as a side effect also secure their stuff; or we keep their stuff vulnerable, and as a side effect keep our own stuff vulnerable. It's actually not a hard choice. An analogy might bring this point home. Imagine that every house could be opened with a master key, and this was known to the criminals. Fixing those locks would also mean that criminals' safe houses would be more secure, but it's pretty clear that this downside would be worth the trade-off of protecting everyone's house. With the Internet+ increasing the risks from insecurity dramatically, the choice is even more obvious. We must secure the information systems used by our elected officials, our critical infrastructure providers, and our businesses.
Yes, increasing our security will make it harder for us to eavesdrop on, and attack, our enemies in cyberspace. (It won't make it impossible for law enforcement to solve crimes; I'll get to that later in this chapter.) Regardless, it's worth it. If we are ever going to secure the Internet+, we need to prioritize defense over offense in all of its aspects. We've got more to lose through our Internet+ vulnerabilities than our adversaries do, and more to gain through Internet+ security. We need to recognize that the security benefits of a secure Internet+ greatly outweigh the security benefits of a vulnerable one.
We need to have this debate at the level of national security. Putting spy agencies in charge of this trade-off is wrong, and will result in bad decisions.
Cory Doctorow has a good reaction.
Slashdot post.
** *** ***** ******* *********** *************
Reddit AMA
[2018.09.07] I did a Reddit AMA on Thursday, September 6.
** *** ***** ******* *********** *************
Using Hacked IoT Devices to Disrupt the Power Grid
[2018.09.11] This is really interesting research: "BlackIoT: IoT Botnet of High Wattage Devices Can Disrupt the Power Grid":
Abstract: We demonstrate that an Internet of Things (IoT) botnet of high wattage devices -- such as air conditioners and heaters -- gives a unique ability to adversaries to launch large-scale coordinated attacks on the power grid. In particular, we reveal a new class of potential attacks on power grids called the Manipulation of demand via IoT (MadIoT) attacks that can leverage such a botnet in order to manipulate the power demand in the grid. We study five variations of the MadIoT attacks and evaluate their effectiveness via state-of-the-art simulators on real-world power grid models. These simulation results demonstrate that the MadIoT attacks can result in local power outages and in the worst cases, large-scale blackouts.
Moreover, we show that these attacks can rather be used to increase the operating cost of the grid to benefit a few utilities in the electricity market. This work sheds light upon the interdependency between the vulnerability of the IoT and that of the other networks such as the power grid whose security requires attention from both the systems security and power engineering communities.
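As a rough, illustrative calculation (my own numbers, not figures from the paper), the attacks work because high-wattage devices add up quickly:

    # Back-of-envelope sketch: how many compromised high-wattage devices must
    # switch on at once to add a given amount of sudden demand to the grid.
    # The per-device wattage and the 300 MW target are assumptions for
    # illustration, not results from the BlackIoT paper.
    def devices_needed(target_mw: float, device_kw: float = 1.5) -> int:
        """Devices that must turn on simultaneously to add target_mw of load."""
        return round(target_mw * 1000 / device_kw)

    print(devices_needed(300))        # 200000 devices at 1.5 kW each (space heaters)
    print(devices_needed(300, 4.0))   # 75000 devices at 4 kW each (e.g., water heaters)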
I have been collecting examples of surprising vulnerabilities that result when we connect things to each other. This is a good example of that.
Wired article.
** *** ***** ******* *********** *************
Security Vulnerability in Smart Electric Outlets
[2018.09.12] A security vulnerability in Belkin's Wemo Insight "smartplugs" allows hackers to not only take over the plug, but use it as a jumping-off point to attack everything else on the network.
From the Register:
The bug underscores the primary risk posed by IoT devices and connected appliances. Because they are commonly built by bolting on network connectivity to existing appliances, many IoT devices have little in the way of built-in network security.
Even when security measures are added to the devices, the third-party hardware used to make the appliances "smart" can itself contain security flaws or bad configurations that leave the device vulnerable.
"IoT devices are frequently overlooked from a security perspective; this may be because many are used for seemingly innocuous purposes such as simple home automation," the McAfee researchers wrote.
"However, these devices run operating systems and require just as much protection as desktop computers."
I'll bet you anything that the plug cannot be patched, and that the vulnerability will remain until people throw them away.
Boing Boing post. McAfee's original security bulletin.
** *** ***** ******* *********** *************
Security Risks of Government Hacking
[2018.09.13] Some of us -- myself included -- have proposed lawful government hacking as an alternative to backdoors. A new report from the Center for Internet and Society looks at the security risks of allowing government hacking. They include:
• Disincentive for vulnerability disclosure
• Cultivation of a market for surveillance tools
• Attackers co-opt hacking tools over which governments have lost control
• Attackers learn of vulnerabilities through government use of malware
• Government incentives to push for less-secure software and standards
• Government malware affects innocent users
These risks are real, but I think they're much less than mandating backdoors for everyone. From the report's conclusion:
Government hacking is often lauded as a solution to the "going dark" problem. It is too dangerous to mandate encryption backdoors, but targeted hacking of endpoints could ensure investigators access to same or similar necessary data with less risk. Vulnerabilities will never affect everyone, contingent as they are on software, network configuration, and patch management. Backdoors, however, mean everybody is vulnerable and a security failure fails catastrophically. In addition, backdoors are often secret, while eventually, vulnerabilities will typically be disclosed and patched.
The key to minimizing the risks is to ensure that law enforcement (or whoever) report all vulnerabilities discovered through the normal process, and use them for lawful hacking during the period between reporting and patching. Yes, that's a big ask, but the alternatives are worse.
This is the canonical lawful hacking paper.
** *** ***** ******* *********** *************
Quantum Computing and Cryptography
[2018.09.14] Quantum computing is a new way of computing -- one that could allow humankind to perform computations that are simply impossible using today's computing technologies. It allows for very fast searching, something that would break some of the encryption algorithms we use today. And it allows us to easily factor large numbers, something that would break the RSA cryptosystem for any key length.
This is why cryptographers are hard at work designing and analyzing "quantum-resistant" public-key algorithms. Currently, quantum computing is too nascent for cryptographers to be sure of what is secure and what isn't. But even assuming aliens have developed the technology to its full potential, quantum computing doesn't spell the end of the world for cryptography.
Symmetric cryptography is easy to make quantum-resistant, and we're working on quantum-resistant public-key algorithms. If public-key cryptography ends up being a temporary anomaly based on our mathematical knowledge and computational ability, we'll still survive. And if some inconceivable alien technology can break all of cryptography, we still can have secrecy based on information theory -- albeit with significant loss of capability.
At its core, cryptography relies on the mathematical quirk that some things are easier to do than to undo. Just as it's easier to smash a plate than to glue all the pieces back together, it's much easier to multiply two prime numbers together to obtain one large number than it is to factor that large number back into two prime numbers. Asymmetries of this kind -- one-way functions and trap-door one-way functions -- underlie all of cryptography.
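A toy sketch of that asymmetry (my own illustration, with numbers far smaller than anything cryptographic): the "easy" direction is a single multiplication, while the "hard" direction, even by naive trial division, takes tens of thousands of operations -- and the gap explodes as the numbers grow.

    import time

    p, q = 104_729, 1_299_709        # two known primes (toy sizes)
    n = p * q                        # easy direction: one multiplication

    def factor(n):
        """Naive trial division: the hard direction, even at toy sizes."""
        if n % 2 == 0:
            return 2, n // 2
        d = 3
        while d * d <= n:
            if n % d == 0:
                return d, n // d
            d += 2
        return None

    start = time.perf_counter()
    print(factor(n), f"in {time.perf_counter() - start:.4f} s")
    # Real RSA moduli are hundreds of digits long; trial division (and every
    # known classical algorithm) becomes hopeless long before that.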
To encrypt a message, we combine it with a key to form ciphertext. Without the key, reversing the process is more difficult. Not just a little more difficult, but astronomically more difficult. Modern encryption algorithms are so fast that they can secure your entire hard drive without any noticeable slowdown, but that encryption can't be broken before the heat death of the universe.
With symmetric cryptography -- the kind used to encrypt messages, files, and drives -- that imbalance is exponential, and is amplified as the keys get larger. Adding one bit of key increases the complexity of encryption by less than a percent (I'm hand-waving here) but doubles the cost to break. So a 256-bit key might seem only twice as complex as a 128-bit key, but (with our current knowledge of mathematics) it's 340,282,366,920,938,463,463,374,607,431,768,211,456 times harder to break.
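That unwieldy number is just 2 raised to the number of extra key bits, which is easy to check:

    # Each additional key bit doubles the brute-force work, so going from a
    # 128-bit key to a 256-bit key multiplies it by 2^(256-128) = 2^128.
    print(2 ** (256 - 128))
    # 340282366920938463463374607431768211456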
Public-key encryption (used primarily for key exchange) and digital signatures are more complicated. Because they rely on hard mathematical problems like factoring, there are more potential tricks to reverse them. So you'll see key lengths of 2,048 bits for RSA, and 384 bits for algorithms based on elliptic curves. Here again, though, the costs to reverse the algorithms with these key lengths are beyond the current reach of humankind.
This one-wayness is based on our mathematical knowledge. When you hear about a cryptographer "breaking" an algorithm, what happened is that they've found a new trick that makes reversing easier. Cryptographers discover new tricks all the time, which is why we tend to use key lengths that are longer than strictly necessary. This is true for both symmetric and public-key algorithms; we're trying to future-proof them.
Quantum computers promise to upend a lot of this. Because of the way they work, they excel at the sorts of computations necessary to reverse these one-way functions. For symmetric cryptography, this isn't too bad. Grover's algorithm shows that a quantum computer speeds up these attacks to effectively halve the key length. This would mean that a 256-bit key is as strong against a quantum computer as a 128-bit key is against a conventional computer; both are secure for the foreseeable future.
For public-key cryptography, the results are more dire. Shor's algorithm can easily break all of the commonly used public-key algorithms based on both factoring and the discrete logarithm problem. Doubling the key length increases the difficulty to break by a factor of eight. That's not enough of a sustainable edge.
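A rough sketch of the scaling behind those two estimates, using the common simplifications that Grover's algorithm searches a keyspace of size 2^n in about 2^(n/2) steps, and that Shor's algorithm runs in time roughly cubic in the key length:

    # Grover: quantum search of a 2^n keyspace takes ~2^(n/2) steps, so the
    # effective symmetric key length is halved.
    def grover_effective_bits(key_bits: int) -> int:
        return key_bits // 2

    # Shor: runtime grows roughly with the cube of the key length (a common
    # simplification), so doubling the key multiplies the attack cost by ~8.
    def shor_cost_ratio(old_bits: int, new_bits: int) -> float:
        return (new_bits / old_bits) ** 3

    print(grover_effective_bits(256))    # 128 -- still plenty
    print(shor_cost_ratio(2048, 4096))   # 8.0 -- not a sustainable edge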
There are a lot of caveats to those two paragraphs, the biggest of which is that quantum computers capable of doing anything like this don't currently exist, and no one knows when -- or even if -- we'll be able to build one. We also don't know what sorts of practical difficulties will arise when we try to implement Grover's or Shor's algorithms for anything but toy key sizes. (Error correction on a quantum computer could easily be an unsurmountable problem.) On the other hand, we don't know what other techniques will be discovered once people start working with actual quantum computers. My bet is that we will overcome the engineering challenges, and that there will be many advances and new techniques -- but they're going to take time to discover and invent. Just as it took decades for us to get supercomputers in our pockets, it will take decades to work through all the engineering problems necessary to build large-enough quantum computers.
In the short term, cryptographers are putting considerable effort into designing and analyzing quantum-resistant algorithms, and those are likely to remain secure for decades. This is a necessarily slow process, as both good cryptanalysis and transitioning standards take time. Luckily, we have time. Practical quantum computing seems to always remain "ten years in the future," which means no one has any idea.
After that, though, there is always the possibility that those algorithms will fall to aliens with better quantum techniques. I am less worried about symmetric cryptography, where Grover's algorithm is basically an upper limit on quantum improvements, than I am about public-key algorithms based on number theory, which feel more fragile. It's possible that quantum computers will someday break all of them, even those that today are quantum resistant.
If that happens, we will face a world without strong public-key cryptography. That would be a huge blow to security and would break a lot of stuff we currently do, but we could adapt. In the 1980s, Kerberos was an all-symmetric authentication and encryption system. More recently, the GSM cellular standard does both authentication and key distribution -- at scale -- with only symmetric cryptography. Yes, those systems have centralized points of trust and failure, but it's possible to design other systems that use both secret splitting and secret sharing to minimize that risk. (Imagine that a pair of communicants get a piece of their session key from each of five different key servers.) The ubiquity of communications also makes things easier today. We can use out-of-band protocols where, for example, your phone helps you create a key for your computer. We can use in-person registration for added security, maybe at the store where you buy your smartphone or initialize your Internet service. Advances in hardware may also help to secure keys in this world. I'm not trying to design anything here, only to point out that there are many design possibilities. We know that cryptography is all about trust, and we have a lot more techniques to manage trust than we did in the early years of the Internet. Some important properties like forward secrecy will be blunted and far more complex, but as long as symmetric cryptography still works, we'll still have security.
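A minimal sketch of the key-server idea in that parenthetical (my own construction, not a deployed protocol): with XOR-based secret splitting, each of the five servers contributes one share, and all of them are needed to reconstruct the session key, so no single server is a point of compromise.

    import secrets
    from functools import reduce

    def xor_bytes(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    def split_key(key: bytes, n_shares: int = 5) -> list[bytes]:
        """Split key into n_shares pieces; all pieces are required to rebuild it."""
        shares = [secrets.token_bytes(len(key)) for _ in range(n_shares - 1)]
        shares.append(reduce(xor_bytes, shares, key))
        return shares

    def combine(shares: list[bytes]) -> bytes:
        return reduce(xor_bytes, shares)

    session_key = secrets.token_bytes(16)
    shares = split_key(session_key)              # one share per key server
    assert combine(shares) == session_key        # recoverable only with all five

Threshold secret sharing (Shamir's scheme, for example) relaxes the all-shares requirement, which is the "secret sharing" half of the trade-off mentioned above.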
It's a weird future. Maybe the whole idea of number-theory-based encryption, which is what our modern public-key systems are, is a temporary detour based on our incomplete model of computing. Now that our model has expanded to include quantum computing, we might end up back where we were in the late 1970s and early 1980s: symmetric cryptography, code-based cryptography, Merkle hash signatures. That would be both amusing and ironic.
Yes, I know that quantum key distribution is a potential replacement for public-key cryptography. But come on -- does anyone expect a system that requires specialized communications hardware and cables to be useful for anything but niche applications? The future is mobile, always-on, embedded computing devices. Any security for those will necessarily be software only.
There's one more future scenario to consider, one that doesn't require a quantum computer. While there are several mathematical theories that underpin the one-wayness we use in cryptography, proving the validity of those theories is in fact one of the great open problems in computer science. Just as it is possible for a smart cryptographer to find a new trick that makes it easier to break a particular algorithm, we might imagine aliens with sufficient mathematical theory to break all encryption algorithms. To us, today, this is ridiculous. Public-key cryptography is all number theory, and potentially vulnerable to more mathematically inclined aliens. Symmetric cryptography is so much nonlinear muddle, so easy to make more complex, and so easy to increase key length, that this future is unimaginable. Consider an AES variant with a 512-bit block and key size, and 128 rounds. Unless mathematics is fundamentally different than our current understanding, that'll be secure until computers are made of something other than matter and occupy something other than space.
But if the unimaginable happens, that would leave us with cryptography based solely on information theory: one-time pads and their variants. This would be a huge blow to security. One-time pads might be theoretically secure, but in practical terms they are unusable for anything other than specialized niche applications. Today, only crackpots try to build general-use systems based on one-time pads -- and cryptographers laugh at them, because they replace algorithm design problems (easy) with key management and physical security problems (much, much harder). In our alien-ridden science-fiction future, we might have nothing else.
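For what it's worth, the cryptography itself really is trivial -- here is a complete one-time pad in a few lines (my own sketch). Everything hard about it lives outside the code: generating, distributing, protecting, and never reusing a pad as long as all the traffic it will ever carry.

    import secrets

    message = b"attack at dawn"
    pad = secrets.token_bytes(len(message))                     # truly random, same length, used once
    ciphertext = bytes(m ^ k for m, k in zip(message, pad))     # encrypt: XOR with the pad
    recovered  = bytes(c ^ k for c, k in zip(ciphertext, pad))  # decrypt: XOR again
    assert recovered == message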
Against these godlike aliens, cryptography will be the only technology we can be sure of. Our nukes might refuse to detonate and our fighter jets might fall out of the sky, but we will still be able to communicate securely using one-time pads. There's an optimism in that.
This essay originally appeared in IEEE Security and Privacy.
** *** ***** ******* *********** *************
Click Here to Kill Everybody Reviews and Press Mentions
[2018.09.14] It's impossible to know all the details, but my latest book seems to be selling well. Initial reviews have been really positive: Boing Boing, Financial Times, Harris Online, Kirkus Reviews, Nature, Politico, and Virus Bulletin.
I've also done a bunch of interviews -- either written or radio/podcast -- including the Washington Post, a Reddit AMA, "The 1A" on NPR, Security Ledger, MIT Technology Review, and WNYC Radio.
There have been others -- like the Lawfare, Cyberlaw, and Hidden Forces podcasts -- but they haven't been published yet. I also did a book talk at Google that should appear on YouTube soon.
If you've bought and read the book, thank you. Please consider leaving a review on Amazon.
** *** ***** ******* *********** *************
Upcoming Speaking Engagements
[2018.08.31] This is a current list of where and when I am scheduled to speak:
• I'm giving a book talk at Fordham Law School in New York City on September 17, 2018.
• I'm giving an InfoGuard Talk in Zug, Switzerland on September 19, 2018.
• I'm speaking at the IBM Security Summit in Stockholm on September 20, 2018.
• I'm giving a book talk at Harvard Law School's Wasserstein Hall on September 25, 2018.
• I'm giving a talk on "Securing a World of Physically Capable Computers" at the University of Rochester in Rochester, New York on October 5, 2018.
• I'm keynoting at SpiceWorld in Austin, Texas on October 9, 2018.
• I'm speaking at Cyber Security Nordic in Helsinki on October 10, 2018.
• I'm speaking at the Cyber Security Summit in Minneapolis, Minnesota on October 24, 2018.
• I'm speaking at ISF's 29th Annual World Congress in Las Vegas, Nevada on October 30, 2018.
• I'm speaking at Kiwicon in Wellington, New Zealand on November 16, 2018.
• I'm speaking at The Digital Society Conference 2018: Empowering Ecosystems on December 11, 2018.
• I'm speaking at the Hyperledger Forum in Basel, Switzerland on December 13, 2018.
The list is maintained on this page.
** *** ***** ******* *********** *************
Since 1998, CRYPTO-GRAM has been a free monthly newsletter providing summaries, analyses, insights, and commentaries on security technology. To subscribe, or to read back issues, see Crypto-Gram's web page.
You can also read these articles on my blog, Schneier on Security.
Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and friends who will find it valuable. Permission is also granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.
Bruce Schneier is an internationally renowned security technologist, called a security guru by the Economist. He is the author of 14 books -- including the New York Times best-seller Data and Goliath: The Hidden Battles to Collect Your Data and Control Your World -- as well as hundreds of articles, essays, and academic papers. His newsletter and blog are read by over 250,000 people. Schneier is a fellow at the Berkman Klein Center for Internet and Society at Harvard University; a Lecturer in Public Policy at the Harvard Kennedy School; a board member of the Electronic Frontier Foundation, AccessNow, and the Tor Project; and an advisory board member of EPIC and VerifiedVoting.org. He is also a special advisor to IBM Security and the CTO of IBM Resilient.
Crypto-Gram is a personal newsletter. Opinions expressed are not necessarily those of IBM, IBM Security, or IBM Resilient.
Copyright © 2018 by Bruce Schneier.