NATIONAL SECURITY

PROTECTING THE HOMELAND: NEW PERSPECTIVE

 

OVERVIEW: STAYING AHEAD OF STRATEGIC THREATS

 

For nation states in the 21st century, security is less about the risk of nuclear war and maintaining a deterrent, and more about non-kinetic threats to their citizens, their institutions, and the fabric of society. In place of bullets and bombs, the threats to a country’s day-to-day well-being concern cyberspace, the economy, the environment, health, and political security. These threats also come from a wider array of sources than state actors alone: hacktivists, cyber criminals, and terrorists all use online means to achieve their aims. Even pandemics such as COVID-19 have created unprecedented challenges for our general sense of national security and our way of life.

 

Just as the threats faced by nation states have changed, so have the response mechanisms. Alongside conventional forces and, for some, nuclear strike capabilities, the world’s militaries are increasingly likely to field hybrid responses and tactics that involve online digital capabilities.

 

Civilian administrations are also adopting a corresponding posture for the protection of their citizens. Healthcare delivery, economic well-being and the pursuit of sound environmental policies all depend on access to, and the ability to process, large quantities of data that is both accurate and available at the time and place it is needed. In this way, access to good data becomes a key enabler of national security, but it also becomes a national vulnerability in its own right, susceptible to exploitation by the very actors it helps protect against.

 

TABLE OF CONTENTS:

  1. How is data used to address national security risks?

  2. What role do big data and artificial intelligence play?

  3. How do threat actors try to disrupt or circumvent this data-driven view?

  4. How can we ensure the Integrity and Availability of data for decision makers?

  5. What are the current limitations in the use of data to safeguard national security?

  6. Some suggestions for the way ahead

HOW IS DATA USED TO ADDRESS NATIONAL SECURITY RISKS?

 

In a traditional national security context, data has played a central role in informing policymakers and military commanders. This has ranged from alerts about a rapidly evolving threat, such as an increased risk of a terrorist incident, to dry, technical indicators and warnings concerning strategic threats, for example changes to a hostile nation’s deployment of nuclear forces, and long-gestation reporting on unseen threats such as hostile foreign intelligence activity.

 

The same applies to the delivery of intelligence against the far wider range of risks posed to nation states in the 21st century. Understanding the scale and implications of a cyber attack (who the likely perpetrators were and how the attack can be mitigated); assessing the impact of a health emergency such as the COVID-19 pandemic (how readily the disease passes from person to person, how quickly people recover, and so on); or producing detailed assessments of serious and organised crime in order to bring gang members to justice: all rely on the timely delivery and availability of accurate intelligence. The following table illustrates the wide range of risks posed to nation states, in this case for the UK.

[Table: illustrative range of national security risks facing the UK]

As the key underpinning ingredient of information or intelligence, data plays a central role in each scenario. The quality of any decision that results from understanding or synthesising the underlying data depends directly on the quality (and often the quantity) of that data, together with its availability and, in the case of classified intelligence, the ability to maintain the confidentiality of how the data was sourced.

 

WHAT ROLE DO BIG DATA AND ARTIFICIAL INTELLIGENCE PLAY?

 

While data itself can carry significant value when it is turned into information or intelligence, further value can be generated when substantial quantities of data are brought to bear on a subject or set of subjects. The term big data has no numerical definition, but it is generally taken to mean a data set so large that it cannot be meaningfully processed on a single computer. Extracting this additional value also depends on applying the appropriate skills, with data science central among them.
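
As a purely illustrative sketch of what "cannot be meaningfully processed on a single computer" implies in practice, the snippet below uses PySpark, one of several distributed-processing frameworks; the file path and column name are invented for the example. The aggregation is carried out across a cluster rather than on one machine.

    # Illustrative sketch only: counting events per source across a data set
    # too large for one machine. Assumes a PySpark environment; the path and
    # the "source" column are hypothetical.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("event-aggregation").getOrCreate()

    # Each worker reads and processes only its own partitions of the data.
    events = spark.read.csv("hdfs:///data/events.csv", header=True)

    # The grouping and counting are distributed across the cluster, so no
    # single machine ever needs to hold the full data set in memory.
    counts = (events.groupBy("source")
                    .agg(F.count("*").alias("event_count"))
                    .orderBy(F.desc("event_count")))

    counts.show(20)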

 

By applying data science techniques to these large volumes, insight can in some cases be derived even when the underlying data yields little on its own, for example where it is encrypted or where records appear worthless when considered individually. This has been understood for several years by companies storing and processing data at scale, for example search engines and social media platforms - and by intelligence agencies.
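
As a toy illustration of this point (the records and field names below are invented), individual encrypted connection logs reveal almost nothing on their own, but aggregating them can expose which pairs of endpoints communicate unusually often:

    # Toy illustration with invented data: individual encrypted connection
    # records reveal little, but aggregating them exposes patterns, e.g.
    # which (source, destination) pairs communicate most frequently.
    from collections import Counter

    records = [
        {"src": "10.0.0.4", "dst": "198.51.100.7", "bytes": 1200},
        {"src": "10.0.0.4", "dst": "198.51.100.7", "bytes": 900},
        {"src": "10.0.0.9", "dst": "203.0.113.2",  "bytes": 450},
        {"src": "10.0.0.4", "dst": "198.51.100.7", "bytes": 1500},
    ]

    # Count how often each (source, destination) pair appears.
    pair_counts = Counter((r["src"], r["dst"]) for r in records)

    # Pairs that communicate far more often than the rest may merit analysis.
    for (src, dst), n in pair_counts.most_common(3):
        print(f"{src} -> {dst}: {n} connections")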

 

Deriving value from big data on a sustained and dependable basis takes more than data science. Here, intelligence and security agencies look to the power of machine learning, deep learning and other components of artificial intelligence (AI) to generate their intelligence picture.
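
One hypothetical example of such a machine-learning component is unsupervised anomaly detection, sketched below using scikit-learn's IsolationForest over invented network-flow features; a real intelligence pipeline would combine many such models with human review.

    # Hypothetical sketch: flagging anomalous records with an unsupervised
    # model. Assumes scikit-learn is installed; the features are invented.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    # Each row: [bytes_transferred, session_duration_s, distinct_destinations]
    flows = np.array([
        [1200, 30, 2],
        [1100, 28, 2],
        [1300, 35, 3],
        [90000, 600, 45],   # unusually large, long-lived session
        [1250, 31, 2],
    ])

    model = IsolationForest(contamination=0.2, random_state=0).fit(flows)

    # Predictions: -1 marks an outlier, 1 marks an inlier.
    for row, label in zip(flows, model.predict(flows)):
        if label == -1:
            print("anomalous flow:", row)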

 

HOW DO THREAT ACTORS TRY TO DISRUPT OR CIRCUMVENT THIS DATA-DRIVEN VIEW?

 

Dependency on data from external service providers brings with it an additional risk: the delivery of a public service is only as robust as the supply chain behind it. This vulnerability is not only compounded as the supply chain lengthens; it also becomes very hard to mitigate directly.

 

For many years, the concept of the denial-of-service (DoS) attack has been well known, especially when launched from multiple points (a distributed denial-of-service, or DDoS). Without suitable mitigation, these attacks can cause websites and other online services to become unusable. The effect is multiplied when large, widely used service providers and other vendors are customers of the targeted site. This occurred in 2016 when the DNS provider Dyn was targeted by a DDoS attack launched from a large number of malware-infected connected devices such as webcams (the Mirai botnet). The attack disrupted Dyn’s DNS service and caused outages for a number of its clients, including CNN, GitHub and the Swedish government.
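
Purely as an illustration of the throttling principle behind many mitigations (real DDoS defences operate upstream, at network scale, through scrubbing services, anycast and content delivery networks rather than application code), a minimal token-bucket rate limiter might look like this:

    # Illustrative only: a token-bucket rate limiter that throttles requests
    # from a single client. Real DDoS mitigation happens upstream, but the
    # same throttling principle applies.
    import time

    class TokenBucket:
        def __init__(self, rate_per_sec: float, capacity: int):
            self.rate = rate_per_sec        # tokens added per second
            self.capacity = capacity        # maximum burst size
            self.tokens = float(capacity)
            self.last = time.monotonic()

        def allow(self) -> bool:
            now = time.monotonic()
            # Refill tokens for elapsed time, without exceeding capacity.
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= 1:
                self.tokens -= 1
                return True
            return False    # request should be dropped or deferred

    bucket = TokenBucket(rate_per_sec=5, capacity=10)
    print(sum(bucket.allow() for _ in range(100)), "of 100 burst requests allowed")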

 

HOW CAN WE ENSURE THE INTEGRITY AND AVAILABILITY OF DATA FOR DECISION MAKERS?

 

Information and cybersecurity specialists refer to the set of primary security objectives for data and systems as the CIA triad: Confidentiality, Integrity and Availability. In other words, data can only be viewed by individuals with legitimate access (Confidentiality), the data is in its intended form and has not been changed or otherwise tampered with (Integrity), and the data is accessible whenever legitimate users need it (Availability).

 

For many entities, including government departments and agencies, these security objectives are often thought of in terms of protecting their own data and systems. However, the significant and increasing use of data from external sources presents policymakers with a problem: how to ensure the security objectives for data and systems not within their control. This applies especially to the Integrity and Availability of externally sourced data: how can they be sure the data is accurate, or that it will be available when needed, for example when it is used in the delivery of a public service?

 

While the challenge of assuring or validating externally sourced data for accuracy and provenance is well known, there is no simple or single answer. Instead, a combination of measures is needed, ranging from using data from known and trusted sources or collaborators to applying technical controls, such as cryptographic mechanisms applied at the point of data generation. Although this approach has merit, on its own it can be severely constraining, because not all government agencies or departments have the knowledge or capability to identify and implement such controls.
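
As a minimal sketch of one such control (assuming, purely for illustration, that the data producer and the consuming department share a secret key; digital signatures would remove that assumption at the cost of key-management overhead), a keyed hash attached at the point of data generation lets the consumer verify that a record has not been altered in transit:

    # Minimal sketch of a technical integrity control: the data producer
    # attaches an HMAC when the record is generated; the consumer recomputes
    # it on receipt. The shared key below is a placeholder.
    import hmac
    import hashlib

    SECRET_KEY = b"example-shared-key"   # placeholder; manage real keys securely

    def sign(record: bytes) -> str:
        return hmac.new(SECRET_KEY, record, hashlib.sha256).hexdigest()

    def verify(record: bytes, tag: str) -> bool:
        return hmac.compare_digest(sign(record), tag)

    record = b'{"site": "A", "cases": 1284, "date": "2020-04-01"}'
    tag = sign(record)                         # attached by the data producer

    print(verify(record, tag))                 # True: data unchanged
    print(verify(record + b"tampered", tag))   # False: integrity check fails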

 

WHAT ARE THE CURRENT LIMITATIONS IN THE USE OF DATA TO SAFEGUARD NATIONAL SECURITY?

 

As a consequence of what are now considered national security threats, many more agencies and public bodies have become involved in keeping the nation safe, whether in central government, healthcare, the police, immigration and border protection, or the military. But their differing remits, structures and capabilities mean they are not equally able to derive value from data, even when doing so might significantly improve the delivery of their core missions. Often they lack the expertise or the technical capability to collect, store and process data.

 

This does not make public bodies unique. Many, if not most, commercial organisations do not realise the value of the data they store and process, or even, in some cases, that the data might have value at all. There are other similarities between the public and private sectors. Many elements of national security are now less about countering threats (for example, in a traditional military sense, having enough personnel and materiel to deter a potentially hostile state from attacking) and more about assessing and understanding risks, not dissimilar to a core function of a corporation’s board when it considers financial, reputational or legal risks. Unfortunately, public sector bodies are often unable to build the required capability because of competing priorities or budgetary constraints.

 

SOME SUGGESTIONS FOR THE WAY AHEAD

 

Like many enterprises, governments operate in silos, both internally and with each other. With the right access to the appropriate information, their decision-making could be transformed away from these silos, while automation could revamp their manual processes and workflows. As the breadth of national security has increased, so has the number of executive agencies and other entities requiring the ability to generate and consume data-driven insights. For example, law enforcement, border and immigration, and coastguard units could benefit greatly from data-driven insights that generate tactical information in real time and drive live operations.

 

In terms of using self-generated data, governments exist on a spectrum: at one end are those that can generate and consume data without realising its full potential; at the other are those that generate data, share it with others and ingest it from others. In each case, even entities with greater maturity in recognising the value of data can be restricted in identifying and consuming the appropriate data feeds or data analysis from third parties. Emerging advanced technologies, however, can achieve much more, and validated AI, together with the big data behind it, has the potential to transform this situation.

 

At the very least, access to AI-based tools and capabilities would provide enough uplift to enable individual departments to meet their national security responsibilities more effectively and efficiently. Departments and agencies outside the Intelligence Community could generate comparable insights and reshape their current processes and procedures so that they deliver actionable insight and intelligence. Thanks to recent advances, emerging AI technologies require less data and less time to “train” the system, for example in a new language. Typically this is done with supervised learning, using data previously categorised as relevant within a given national security context, for example the language used by people smugglers, including their euphemisms and styles (a simple sketch follows the list below).

  • More specifically, emerging data science and AI techniques can recognise context, style, sarcasm and intent within the data and can derive insights in a way that approximates human intuition.

  • They can detect sentiment, syntax and dialects in modern communication, whether spoken or text-based, and can interpret special characters, emojis and slang to discern the full meaning of a message.
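
As a simple, hypothetical sketch of the supervised approach described above (the messages, labels and model choice below are invented for illustration; a production system would use far larger, validated corpora and modern language models), a classifier can be trained on pre-labelled examples and then applied to new messages:

    # Hypothetical sketch: supervised text classification from pre-labelled
    # examples. Messages and labels are invented; a real system would use
    # large, validated corpora rather than a bag-of-words pipeline.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    messages = [
        "crossing arranged for friday night, usual price",
        "package moves at dawn, keep phones off",
        "happy birthday, see you at dinner on sunday",
        "meeting rescheduled to 3pm, same room",
    ]
    labels = ["of_interest", "of_interest", "benign", "benign"]

    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
    model.fit(messages, labels)

    # A new, unseen message scored against the learned categories.
    print(model.predict(["boat leaves friday, bring the usual payment"]))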

By adopting AI tools, government departments have an opportunity to overhaul their current processes and ways of working, for example with the help of AI-enabled monitoring of an environment to identify people, groups and activities of interest. Human analysts can then conduct assessments and draw conclusions based on considerably reduced volumes of data with significantly higher value. Both individually and collaboratively, government departments and agencies could transform key elements in their processes and services by generating risk insights derived from cutting-edge technology. In turn, this would enable executive agencies to use their resources in a highly targeted way, with demonstrable and measurable changes in effectiveness and efficiency.