According to modest estimates, by 2020 the global volume of data was expected to exceed 16 zettabytes, that is, 16 trillion GB. Less conservative estimates put annual data production at 90 zettabytes. To put it in context, that is more data produced in one year “than all data produced since the advent of computers.” By 2025, data growth is projected to reach 175 zettabytes, that is, enough data to make the journey to the moon and back five times.
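For readers unaccustomed to these units, the conversion quoted above can be sketched in a few lines (assuming decimal SI prefixes, as data-market estimates conventionally do):

```python
# Unit-conversion check for the figures quoted above (decimal SI units assumed):
# 1 zettabyte (ZB) = 10**21 bytes; 1 gigabyte (GB) = 10**9 bytes.
ZB = 10**21
GB = 10**9

gb_per_zb = ZB // GB  # 10**12, i.e. one trillion GB per zettabyte

# 16 ZB therefore corresponds to 16 trillion GB, as the text states.
print(f"16 ZB = {16 * gb_per_zb:,} GB")
```

The same arithmetic scales the 2025 projection: 175 ZB corresponds to 175 trillion GB.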
Concerns over the management of that growth, and related apprehensions over the velocity, variety, veracity, and value of data, are growing in tandem. Private and public sectors of the economy are increasingly engaged not only in the proliferation of information but also in its effective and responsible management, analysis, and knowledge extraction; elements essential in giving them a competitive advantage in a densely interconnected and highly networked global economy. Not surprisingly, therefore, in the whirl of information-driven business models, questions over data security and data privacy have moved to the forefront of many regulatory regimes around the world, most prominently in the European Union. There, the 2018 General Data Protection Regulation (GDPR), which replaced Directive 95/46/EC, harmonizes privacy and data protection rules among the EU member states and exerts significant extraterritorial regulatory influence over foreign tech companies and foreign service providers wishing to conduct business in the European Union, while also protecting individual consumers, or data subjects. In the United States, the CLOUD Act (Clarifying Lawful Overseas Use of Data Act) promises to provide robust protections for privacy and civil liberties while establishing processes and procedures for law enforcement requests for data in domestic and international contexts.
Analysts see disruptive and transformative potential in data-intensive industries across multiple sectors of the public and economic life of societies. Healthcare, transportation, finance, manufacturing, retail, military, energy, and telecommunications stand to gain from the sui generis explosion of information technologies, which is challenging entrenched value systems and altering the relationship between clients and service providers as well as citizens and their governments. However, the evolving technological landscape is not without its risks. Rapid data growth will require good data governance and strong processes for acquiring, validating, storing, protecting, and processing data. Proofing the system against data corruption, data breaches, and data remanence will remain one of the enduring challenges.
In the twenty-first century, the industrial prowess and health of nations will be measured in bits and bytes. Countries, as well as public and private entities, that lag in providing adequate, reliable, and trustworthy measures for information sharing, data generation, objective interpretation, and data protection will very likely fall behind. More sober-minded assessments predict that the data-driven revolution in artificial intelligence and automation will create technological disruptions responsible for an overwhelming economic segmentation of societies.
Scholars speculate that a prospective data arms race in AI, robotics, or bioengineering may very well lead to ‘data colonialism’ or ‘digital dictatorships’, which unlike their predecessors will utilize algorithmic advantage to restructure entire societies and usher in a dystopian future of surveillance control for the world’s most vulnerable populations.
Research suggests that modern societies face three existential challenges: nuclear war, ecological collapse, and technological disruption. It is the last of the three that has the greatest potential to affect the developmental trajectory of states and societies. Emerging threats to data-intensive industries in the form of data breaches may cripple industries as varied as healthcare, transportation, energy, and finance. Government surveillance, on the other hand, may compound social repression, especially in authoritarian regimes. But citizens of advanced democracies, too, have recently started to question the efficacy of Covid passports and the regime of government-sanctioned medical surveillance they are likely to usher in.
Surveillance and advances in data extraction, however, can have a positive application in crime and terrorism investigations. Expertly performed data tracing and analysis of data points can locate and identify both the victims and perpetrators of domestic and international crimes. The use of data-intensive technologies carries special importance for investigators and litigators of international crimes, particularly those related to incidents of domestic and international terrorism. The Special Tribunal for Lebanon (STL), for example, relies on new methods of data collection and cellular tracing to establish the elements of crime in the 2005 assassination of former Lebanese prime minister Rafic Hariri, a terrorist attack of a domestic nature that also claimed 21 other victims. The case sets a precedent in that it is the first known effort by investigators and legal experts to use cellular data-intensive tracing methods to identify the perpetrators of a crime and establish a reliable timeline of events in order to prosecute the case under the aegis of international criminal law.
Data offers itself as an invaluable tool in the intelligence collection toolbox of developed democracies and authoritarian regimes alike. Under the PRISM program, the United States National Security Agency, operating under the supervision of the U.S. Foreign Intelligence Surveillance Court (FISA Court) pursuant to the Foreign Intelligence Surveillance Act (FISA), is authorized to conduct electronic data collection targeting U.S. citizens and non-U.S. persons. As part of the program, the NSA conducts surveillance of live communication, email, video and voice chat, videos, photos, social networking, and file transfers. The NSA maintains extensive links with trusted U.S. companies to obtain access to data traffic without an explicit warrant. Many other countries, including Canada, Israel, Mexico, New Zealand, and the United Kingdom, have developed similar surveillance programs.
Predictive analytics promises to flag violent crimes before they occur and thwart their recurrence. The diffusion of new technologies thus poses considerable risks to privacy but also offers opportunities for countering perceived threats to national security, the state’s institutional structures, and the continuity of its democratic processes.
On the other hand, widely used facial recognition technologies indexing information based on ethnic group affiliation, criminal history, and special facial features can and often do result in repressive outcomes. The accumulation of social credits based on sophisticated surveillance methods as a precondition for travel or access to basic services has proved controversial in China.
According to media reports, China’s reliance on some 200 million surveillance cameras, with 400 million more expected to be in place by 2021, augurs a future of permanent scrutiny and control, one that can determine the life prospects and career outcomes of those falling under their perpetual gaze. Already, some 1 million Uighurs and other minorities have been subjected to intimidation or detention on national security grounds, and human rights advocates warn against a “dystopian surveillance state … where citizens are constantly monitored for signs of unrest or dissent.” The use of CCTV cameras and control over online infrastructure are closely supervised by the Communist Party’s state entities, such as the Ministry of Industry and Information Technology, under the aegis of the SkyNet Program.
The use of surveillance technologies, however, is on the rise worldwide, with the global market for the technology forecast to reach $7 billion by 2022. Facial recognition for public surveillance, border control, and the identification of unknown suspects is being rapidly incorporated into the contemporary security governance architecture. In established democracies such as the United States, technological innovation and ceaseless data generation appeal to consumer needs for safety and convenience, as with airports and airlines using facial recognition for passenger check-in or iPhone apps monitoring the pulse and daily step count of their owners. Yet the new technological frontier, requiring the ever more intrusive surrender of biometric data from citizens and consumers of elemental goods and services, is a constant in the political and economic equation of fully functioning autocracies and democracies alike.
The overwhelming velocity and volume of data produced have direct implications for the quality of information being extracted, processed, and subsequently utilized for policy and decision making. While preserving an image of transparency, many governments around the world are actively seeking out methods of control and repression. The recent coronavirus outbreak in China illustrates this point well. While the response to the public health crisis was surprisingly swift and decisive for a country otherwise veiled in secrecy, China’s government nevertheless waged an active campaign of censorship to control the flow of information to its citizens and maintain political coherence and social stability, while at the same time reassuring the world community of its vigorous efforts to contain the spread of the virus.
The method of overt state control of information in the digital age has been dubbed by policy experts as ‘digital authoritarianism.’ Its expressed purpose is to “surveil, repress, and manipulate domestic and foreign populations” and in so doing to “reshape the power balance between democracies and autocracies.”
Recent policy briefs focus on two models of digital disinformation campaigns: (1) the Chinese model and (2) the Russian model. In the first instance, state-owned entities subservient to Communist Party doctrine view information technology in terms of economic development and recognize its value to Chinese foreign policy. The government invests heavily in subsidizing its tech sector and AI startups such as SenseTime and Megvii, as well as the more established Huawei and ZTE, to advance its competitive advantage in the information technology sector and strategically export its prowess via the Belt and Road Initiative.
The Russian model on the other hand puts heavy emphasis on government surveillance of data flows with repressive intent. The former Soviet intelligence agency (KGB) developed early prototypes of phone surveillance technologies, which have been subsequently expanded to active monitoring of email and digital data traffic. According to policy analysts, Russia is implementing organized state repression and censorship of internet content, developing its own version of the internet which can be de-linked from the global web to block undesirable content, and is promoting its digital authoritarian practices and technologies across the former Soviet satellite states.
The 2017 declassified report on Russian interference in the U.S. election argues that digital authoritarians exploit a complex “blend of covert intelligence operations – such as cyber activity – with overt efforts by Russian Government agencies, state-funded media, third-party intermediaries, and paid social media users or ‘trolls’” to disrupt the democratic process. To effectively manage the flow of disinformation, policy analysts recommend: (1) implementing targeted sanctions on digital authoritarian regimes and the companies aiding and assisting their efforts; (2) encouraging the policymaking community in the United States and Europe to “offer compelling models of digital surveillance that enhance security while still protecting civil liberties and human rights”; and (3) developing a common code of conduct governing digital exchanges and investing in raising public awareness around data and information manipulation.
Bots and algorithms have become the defining templates for information sharing and communication. Along with responsible reporting and information sharing come disinformation and infodemics. The widespread use of technological advancements for data collection puts significant pressure on governmental and corporate entities to regulate the ‘virality’ of misinformation and disinformation campaigns perpetrated by state and non-state actors, and to substantially enhance their data-protection measures. Targeted advertising and algorithmic decision-making will require a new generation of laws governing their usage. Regulating entities should therefore establish the extent to which rights and fundamental freedoms that accrue to human persons also extend to artificial entities, such as bots. Do bots have free speech rights? Does curtailing the free speech rights of bots violate any of the fundamental rights which normally accrue to human persons? The legal puzzle these questions present, and the lack of explicit guidelines regulating conduct on the web, encourage the proliferation of disinformation campaigns and embolden authoritarian regimes and corporate entities to use data and multiply content to advance their political agendas or commercial ends.
The RDR Corporate Accountability Index, in responding to growing concerns over corporate responsibility in the digital world, recommends that governments undertake measures that regulate and sustain human rights online. Among their recommendations are: (1) protection of human rights through a regime of laws and compliance mechanisms that ensure users’ rights to freedom of expression and privacy; overhaul of surveillance laws; and implementation of robust oversight; (2) corporate accountability requiring companies to conduct risk assessments to identify potential human rights impacts and harms; (3) transparency requiring companies to disclose their adherence to and enforcement of government-issued rules and regulations; (4) remedy giving users a meaningful and effective recourse when their rights are subverted in the digital space; and (5) global collaboration and engagement mobilizing of civil society to proactively advance human rights on the internet and establishing a roadmap for addressing public security threats.
Digital technologies permeate every aspect of the modern-day life of individuals and define the depth and scope of social relations. As more and more of human existence moves online, the intangible impact of technological advancement carries implications for the social wellbeing of individuals and the overall health and wealth of societies. While the digitally enhanced lifestyle augurs an age of ever-growing improvement of the species, it nevertheless carries with it manifold risks. A digitalized body and a networked self, which yield an overabundance of data points, are prime subjects for data mining and the commercialization of the human subject.
Digital anthropologists have studied the effects of human-to-machine interaction and point to a growing reliance on ‘smart’ devices to measure a wide array of functions. Discrete data points that encapsulate our personal lives report on anything from bodily movements to geolocation to our consumer preferences and purchasing habits, giving a new dimension to the notions of selfhood and privacy. “The privatization of the public and publicization of the private” has become a co-extension and a defining feature of human personhood in the twenty-first century, as much as non-coercive disciplinary power/surveillance has modified our behavior.
Modern-day crises reveal the enormous potential of new technologies for the benefit of the individual and society. Autonomous robots digging through a debris field at the World Trade Center site following the terrorist attacks of 9/11, or the use of unmanned aerial vehicles to conduct surveillance and reconnaissance operations in war zones, have proved invaluable. The COVID-19 pandemic showed that the advanced use of digital technologies such as the internet of things (IoT), 5G telecommunication networks, big-data analytics, artificial intelligence (AI) that utilizes deep learning, and blockchain technology, and their subsequent usage in clinics and hospitals, “facilitates the establishment of a highly interconnected digital ecosystem, enabling real-time data collection at scale, which could then be used by AI and deep learning systems to understand healthcare trends, model risk associations, and predict outcomes.”
Yet digitalization and automation can also prove harmful to the civil and human rights of those kept under surveillance. COVID-19 caused an unprecedented explosion of wearable technologies enabling the effective tracking of individuals subject to state-mandated quarantine measures intended to limit contagion. Government entities and national security agencies have relied on tracking bracelets (Hong Kong), smartphone location tracking (Israel), dedicated apps (USA), and telecommunication data (Singapore) to perfect their study of the risks associated with human movement in the age of global pandemic. Civil rights activists worry that such practices may become unduly normalized and in the long term threaten the basic civil liberties and rights of individuals subject to onerous regimes of digital monitoring.
The United States would benefit from national legislation uniformly protecting and regulating data transfers and commercial activities connected with data mining, acquisition, and sale. Rather than relying on evolving case law and patchy data privacy laws, a comprehensive and enforceable regulation would streamline data transfers and endow individual users with clear and specific rights and entitlements. Ensuring clarity around data would also benefit small and large businesses and allow for the development of technologies that sift and by default protect data against commercial and non-commercial use and abuse. The United States would do well to follow the EU’s General Data Protection Regulation framework and mandate respect for individual users’ digital rights.
A comprehensive national agenda and considerable investment in strengthening the digital economy and spurring innovation would give the United States the necessary competitive technological advantage. Public-private partnerships spearheaded by the national security sector would allow the United States to remain a leader in innovative technological advancement, a field in which it is in close competition with China and the European Union.
A lack of legal clarity regarding the extraterritorial application of data processing laws complicates the ability to govern data flows. Emerging threats stemming from the unilateral decision-making powers of democracies and authoritarian regimes alike, in the absence of a binding international legal framework governing data use, further cloud the evolving digital landscape.
Democratic states will need to strike an effective balance between information gathering for national security purposes and the protection of constitutional and human rights. Collectively, the international community will need to address geographic and legalistic gaps in technological advancement and access to digital infrastructure.
The digital economy and the management of big data systems rely entirely on energy systems and infrastructure. Yet little attention is being paid to the mechanisms and investments necessary to maintain a healthy and reliable grid. National security regimes and the private corporate entities involved ought to pay close attention to the ways their virtual exchanges depend on the electric systems enabling electronic data transfers, and ought to support development and investment in this sector.
Source: Geopolitical Monitor