- "Tackle" cybercrime.
- Build resilience against cybercrime.
- Contribute to an "open, stable and vibrant" cyberspace.
- Attract & build upon skills to support all of these objectives.
In my view, the fourth is by far the most important of these objectives. There is a great deal that we do not yet know about Internet-mediated harms. Not only is the nature of the threat constantly developing, but so much is still to be learned about the individuals who drive criminality online.
Much of the Strategy is concerned with building defences against such harms. Certainly, resilience is very important, and I am pleased that the Strategy acknowledges that both industry standards and security awareness are important to reducing the opportunities for such harms. However, resilience strategies ultimately amount to nothing more than target hardening, ignoring the underlying issues. No crime, in my view, can adequately be understood as a 'routine activity', and policies which make this mistake can never hope to "tackle" crime.
The Strategy seems to be driven by a belief that it is possible to "design out crime online" (page 30, para. 4.32). Unfortunately, this is simply not possible. As Murray has argued, code-based solutions are ultimately 'worked around' by the active matrix: skilled and motivated users capable, ultimately, of rejecting code-based regulation by subversion. Further, any infrastructure-based solutions (e.g. the 'Internet kill switch', in any case useful only against certain kinds of harm) risk compromising the openness and neutrality of the network (see Carolina on the bordered Internet): an indirectly topical area in light of the Leveson Inquiry into the proper relationship between the state and the free media.
I say that the solution demands a better understanding of the social causes of cybercrimes, of all flavours. Tackle these, and we stand a chance of enjoying a network shaped not just by code, but by common norms. Technical resilience is important, but social resilience and Durkheim's "internal policeman" cannot be overlooked, else in my view we risk "liddism" in our policymaking (Rogers 2002, Reiner 2010).
Clearly, achieving an understanding of cybercrimes in all their flavours is a tough nut to crack. However, I suspect that there are certain commonalities between offenders operating across each 'generation' of cybercrime (Wall 2007). In my dissertation 'Policing the Internet', I relabel Wall's three generations: e-jemmies (crimes committed within discrete computer systems), network crime (crimes exploiting connectedness to inflict harms remotely), and distributed crime (crimes which use networks to distribute and automate the infliction of remote harm). Wall predicts that a fourth generation will exploit emerging ambient intelligence technologies as we move towards the 'Internet of Things'. Analysing this tendency to exploit new technologies could well reveal something important about the nature of criminality more broadly, which matters all the more in light of this impending fourth generation of cybercrime. Offenders for whom the Internet is merely a tool, whose crimes would happen in a world without it, will probably remain the elusive province of ordinary criminologies. For other categories of cyber-offender, however, research into the nature and degree of their social exclusion is needed if we are to understand what it is about the Internet that fuels offending, and how we can work to address it.
Worryingly, the Strategy seems to make assumptions about the nature of offending online: the sentence "criminals still regard exploiting cyberspace as a profitable and *low-risk* option" (emphasis added) betrays an assumption that all cybercriminals are rational agents. Perhaps some behaviour can be so explained, but not all, and in any case there seems to be very little evidence (certainly none of which appears in the Strategy) to support this view.
Buried at page 23 of the Strategy is this precious sentence: "We want a UK where[, inter alia, p]eople are clear that, as in the offline world, we are each responsible for our behaviour in cyberspace". How do we (as a society, not just a state) inject this sense of responsibility, this normative code into the disparate and privatised fabric of cyberspace? Certainly not, in my view, by focusing entirely upon "threat management" in a technical sense.
I cannot stress enough that the Strategy certainly has laudable aims, and much is to be said for its content. It pledges to improve the police response to cybercrime and work internationally to deny cybercriminals 'safe havens' where regulation is weak, backed by a timely pledge to reinforce the Convention on Cybercrime as the UK enters its chairmanship of the Council of Europe. The judiciary are to be encouraged to respond to new threats using existing laws, whilst Parliament reviews the fitness for purpose of the Computer Misuse Act (see my earlier post on the risk of moral panic and over-criminalisation). Security agencies are to be encouraged to focus on the disruption of cybercrime even where convictions are unlikely: again, a laudable and realistic view in light of the inevitable jurisdictional issues inherent in e-crime, but one that flirts with defeatist 'liddism' (see above).
Also to be commended is this official recognition of the crucial role of the private sector. But how is this awkward relationship to be managed? Only 2% of the budget is allocated to this project (page 26), the rest being spent mostly on resilience measures. The Strategy mentions a partnership with the private sector to develop "cyber-relevant sanctions": what are these? Mention is made of "online sanctions for online offences" — does this allude to the controversial disconnection rule, or are we talking about something else? How would these sanctions be meted out, and by whom? On what standard of proof? Questions of legitimacy, oppression and subversion seem far too important to be answered by a fraction of a 2% cut of the cyber security budget. (See Laidlaw on the human rights responsibilities of Internet Information Gatekeepers.)
Returning to the fourth and overarching objective, the Strategy strongly encourages "R&D". What is not clear is whether the government had technical or social research in mind. Technical research is of course important to continue to improve resilience. However, we need social research to answer the question of normativity in cyberspace.
In summary, we seem to be focusing too much on target hardening and disruption. The deterrence and prevention of cybercrimes can only be effective in the long run if we understand the threat as necessarily social rather than merely technological. Is it merely opportunity that drives cybercrime? Is it true that those who commit crimes online would just be committing crimes offline anyway? Perhaps not.
As Clemente points out, the extent to which even this Strategy can be delivered in an age of austerity remains to be seen. However, what I am suggesting need not cost any more. I am simply arguing that the policymakers behind the Strategy seem to be making assumptions about the criminologies of cyberspace that could ultimately be the undoing of their otherwise laudable objectives. The involvement of foreign states and the private sector is crucial if we are to build this "open, stable and vibrant cyberspace", and I am pleased that the Strategy recognises this. However, by focusing too much on target hardening and too little on understanding the causes of cybercrime, we risk losing an opportunity to nip cybercriminality in the bud. My plea therefore is to legislators and independent research councils the world over, to fund the social research that these policies so badly need.