AI, Social Epidemiology, Diseases of Despair, and the Law in Transforming Societies
1. INTRODUCTION
Albert Camus’s novel The Plague takes human solidarity as its basic theme. The novel features a dual threat scenario: an external menace combined with inner moral complicity. The Plague was prompted by a large-scale disaster, World War II and the German occupation of France. Camus wrote in retrospect rather than in prophetic dread, but The Plague can be read as both.
In keeping with dystopian works such as Brave New World and Nineteen Eighty-Four, The Plague issues warnings. It identifies the evil that lies coiled within human nature as the main enemy and threat: the plague is within us. The novel gives shape to that internal form of plague; for Camus, plague is indifference to human suffering. We are currently in the midst of a large-scale disaster, but it is revealed in epidemiological data rather than in wartime newsreels. Our inner moral complicity, the plague within us, remains the same. Once again, we are more afraid of identifying the disease than of the disease itself.
My thesis analyzes the legal and societal implications of applying Artificial Intelligence (AI) technology (data-intensive science) to the provision of services and benefits within the medical system and the insurance/compensation sector. In this regard I will focus my research on the trend toward the application of AI to a particular class of epidemic chronic medical conditions, including psychological, neurocognitive (Mizoguchi et al, 2000), somatic pain-type (Nickerson B, 2010), and related substance abuse disorders, sometimes referred to collectively as “diseases of despair” (Muennig et al, 2018; Shanahan et al, 2019).
According to one estimate from the Institute of Medicine (IOM), “more than 100 million Americans struggle with chronic pain, at an annual cost of as much as $635 billion in treatment and lost productivity. Further, the misuse of potent opioid painkillers, meant to help manage pain, can increase the risk of addiction and abuse” (Harvard Health Publishing, 2016). The prevalence of these conditions is similar in Canada (Schopflocher et al, 2011).
My thesis will invoke ‘Chronic Stress Theory’ (Kopp M, 2007) as a valid and testable theory of the ‘social epidemiology’ (Krieger N, 2001) of these conditions. I will also argue that the accelerating application of AI technologies to the legal, medical, and insurance/compensation systems, in many ways a direct response to the mounting financial cost of the alarming epidemic of these (posited to be stress-related) medical conditions, raises significant concerns for law, regulation and governance, particularly with respect to the potential impact on at-risk or vulnerable groups (Pasquale F, and Cashwell G, 2018; Danaher, J, 2016).
2. LITERATURE REVIEW
Human beings, viewed as behaving systems, are quite simple. The apparent complexity of our behavior over time is largely a reflection of the complexity of the environment in which we find ourselves. — Herbert A. Simon
AI Unbound
As AI (and associated AI-hype) grows more pervasive in our lives, its impact on society is ever more significant, raising ethical concerns and challenges regarding issues such as privacy, safety and security, surveillance, inequality, data handling and bias, personal agency, power relations, effective modes of regulation, accountability, sanctions, and workforce displacement.
Only a multi-disciplinary effort can find the best ways to address these concerns, drawing on experts from various disciplines, such as law, philosophy, economics, sociology and anthropology, psychology, informatics, communication and media studies, and political science, as well as those with lived experience of the impacts of AI systems. These impacts, which require transdisciplinary analysis, include but are not limited to the following:
· Data acquisition and consent in a learning health care system: law, regulation and governance
· AI for organizing: distillation, categorization, and prediction
· AI and the surveillance and manipulation of people
· The impact of AI on jobs and work
· Meaningful control, safety, and security of AI
· AI and vulnerable groups
· Ethical models and frameworks around AI and data
· Black box systems and modes of investigation and explanation
· Challenges and contradictions in AI-related design, development and ethics
Lending urgency to these matters is the use of machine learning and algorithmic predictive analytics in judicial contexts (TLABC, 2019) and administrative tribunals (CRT, 2019)[1]. Frank Pasquale, Professor of Law at the University of Maryland Francis King Carey School of Law, in his book ‘The Black Box Society: The Secret Algorithms That Control Money and Information’ notes that this development represents:
An emerging jurisprudence of behaviorism, as it rests on a fundamentally Skinnerian model of cognition as a black-boxed transformation of inputs into outputs. In these models, persuasion is passé; what matters is prediction (Pasquale, 2016).
Pasquale and Cashwell, in their 2018 research paper ‘Prediction, Persuasion, and the Jurisprudence of Behaviorism’, added:
As a method of enhancing the legitimacy and efficiency of the legal system, such modeling is all too likely to become one more tool deployed by richer litigants to gain advantages over poorer ones. Moreover, it should raise suspicions if it is used as a triage tool to determine the priority (or the validity) of cases. Such predictive analytics are also only as good as the training and data they depend on. While fundamental physical laws rarely if ever change, human behavior can change dramatically in a short period of time. Therefore, one should always be cautious when applying automated methods in the human context, where factors as basic as free will and political change make behavior of both decision-makers, and those they impact, impossible to predict with certainty. (Pasquale F, and Cashwell G, 2018)
The application of AI technologies to the healthcare field and the insurance/compensation industry is already having a significant impact on segments of the legal industry (including workforce displacement), on society generally, and ultimately on the principle of judicial fairness and “the Law” itself. Pasquale and Cashwell (2018) warn us that:
Predictive analytics [are not] immune from bias. Just as judges bring biases into the courtroom, algorithm developers are prone to incorporate their own prejudices and priors into their machinery. Nor are biases easier to address in software than in decisions justified by natural language. Such judicial opinions (or even oral statements) are generally much less opaque than machine learning algorithms. Unlike many proprietary or hopelessly opaque computational processes proposed to replace them, judges and clerks can be questioned and rebuked for discriminatory behavior.
The Special Importance of Chronic Stress Models
Concurrently, it is important for researchers, legal professionals, and healthcare advocates to understand that the accelerating application of AI technologies to the medical and insurance/compensation system(s) is in part an effort to respond to a serious and very real problem: some very costly epidemiological trends currently being observed (Case A, and Deaton A, 2015), which may occur in what has been termed a transforming society (Kopp M, 2007).
‘Chronic Stress Theory’ is an integrating model that can be applied to a discussion of disruptive trends leading to suddenly changing patterns of morbidity and premature mortality rates in transforming societies (Kopp M, 2007). A follow-up 2017 study by Case and Deaton indicated that in populations affected: “each successive cohort reports more pain, more mental distress, heavier drinking, as well as lack of social connection. Each is observed to have higher mortality rates from drugs, alcohol and suicide than the preceding cohort.” (Case A, Deaton A, 2017).
A recently published Health Canada ‘National Report: Apparent Opioid-related Deaths in Canada’ (Health Canada, 2019) stated that more than “12,800 apparent opioid-related deaths occurred between January 2016 and March 2019: 4,588 deaths occurred in 2018; this means that one (1) life was lost every 2 hours related to opioids. Opioid-related deaths are now far and away the leading cause of unnatural death in BC (BC Coroners Service, 2022).
Figure 1 - (BC Coroners Service, 2022)
My hypothesis is that chronic stress results in adverse health effects through biological, social and behavioral pathways (Shanahan et al, 2019), and that ‘chronic stress theory’ (Kopp M, 2007) might also have the best explanatory power to understand the morbidity and premature mortality crisis in North America.
Further, I propose that the special features of AI technologies (Sarfaty G, 2017) applied to the increasing morbidity crisis (leading directly to the premature mortality crisis), rather than being used merely as a tool for surveillance (Mir A, & Mann S, 2013), control, and analytically-determined outcomes, could be applied as an experimental model to better understand the human consequences of chronic stress in societies, especially societies in transformation.
Such insights, I predict, could then lead to the development of laws and regulations to ensure adequate protections, particularly for at-risk or already vulnerable groups, and also provide insights into the development and implementation of appropriate policies, as well as complementary improvements to AI design, in order to help ameliorate these conditions.
3. THEORY and RESEARCH METHODOLOGY
Predictive power, Explanatory power, and Integrating Models
What should be the goals of scientific inquiry and its intersection with legal theory? Science isn't about discovering Truth. This is for the very simple, practical reason that Truth is very hard to positively identify – how would you establish that a prospective Truth wasn't just a very good approximation or merely a generally accepted or traditional belief?
There are all sorts of philosophical discussions to be had here, but we can sidestep these and take a more practical approach. Specifically, we can choose to care about Predictive Power and Explanatory Power.
Predictive Power is the ability of a given theory to allow us to make predictions about the natural world. We know that Newtonian gravity is an approximation (to General Relativity, at the very least), but it's very good at predicting where the planets in our solar system will be. This is a practical consideration – if a theory can't make predictions, it's not very useful (and some would argue that it's not even science).
Explanatory Power is the quality of a theory that gives us some deeper understanding of what's going on in a physical system. For example, knowing about atomic electron orbitals allows us to make sense of the periodic table and chemical interactions. It gives us ways to develop other theories.
So, what we're looking for from a scientific theory is the ability to make predictions and for some explanatory insight as to why something happens, so that we can use that insight to develop further theories.
This is also relevant for statistical modeling (and hence data-intensive science), because we can build our models to address either or both of these. Predictive algorithms such as neural networks can perform very well, but the structure of the model is often hard to interpret in any kind of explanatory way. Conversely, a linear regression model might tell us a lot about which variables are important but may not make good predictions. Ideally, it would be nice to build models that are useful for both prediction and explanation (Shmueli, 2010). In other words, an integrating model.
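The prediction/explanation contrast above can be made concrete with a deliberately small sketch: a one-variable least squares fit whose coefficient is directly interpretable (explanation) and which also yields point forecasts (prediction). The variable names and data below are invented purely for illustration, not drawn from any study cited here.

```python
# A minimal sketch of the prediction-vs-explanation distinction (Shmueli, 2010),
# using an ordinary least squares fit on invented data.

def ols_fit(xs, ys):
    """Fit y = a + b*x by ordinary least squares; returns (a, b)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a, b

# Hypothetical data: weekly work hours vs. a notional stress score.
hours = [30, 35, 40, 45, 50, 55, 60]
stress = [2.1, 2.6, 3.0, 3.6, 4.1, 4.4, 5.0]

a, b = ols_fit(hours, stress)

# Explanatory reading: the slope b is directly interpretable
# ("each extra hour is associated with ~b more stress points").
# Predictive reading: the same model also yields a point forecast.
predicted_at_42 = a + b * 42
```

The design choice mirrors the text: a neural network trained on the same data might predict slightly better, but would offer nothing as legible as the single coefficient `b`.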
Generally, the concept of Predictive Power differs from explanatory and descriptive power (where phenomena that are already known are retrospectively explained or described by a given theory) in that it allows a prospective test of theoretical understanding.
In March of 2015 I presented a paper (Nickerson B, 2015) at an accredited conference co-sponsored by the Trial Lawyers Association of BC (TLABC) and the Committee on Accreditation of Continuing Medical Education (CACME). At this conference I offered a theory intended to provide Explanatory Power on the vexing public health problem of subjective chronic medical conditions, primarily as they relate to subjective complaints of pain. The increasing frequency and cost of these types of conditions are a troubling epidemiological phenomenon generally, and as noted, they can also be seen as ground zero for much of the ongoing opioid epidemic.
After describing my explanatory theory to the 2015 joint session of the TLABC and CACME, I went on to provide an integrating model to help better understand what was likely really going on and what could be done about it. I then made a number of predictions, anticipated by the explanatory theory and integrating model, about what might occur in the operations of the Insurance Corporation of British Columbia (ICBC), the largest insurance company in Western Canada, with gross premiums in 2019 of over $6.0 billion.
In January 2018 global professional services and audit firm PricewaterhouseCoopers (PwC) released a comprehensive ‘Operational Review of ICBC’ (PwC, 2018). In it we find that a number of the key predictions that I made in my 2015 paper were borne out, particularly with respect to the application of data-intensive science to the structural problems I had identified and explained three (3) years earlier.
The strong correlation between the information provided in PwC’s 2018 Operational Review and the predictions contained in my 2015 paper confirmed the Explanatory Power of the theory I presented, and confirmed the Predictive Power of the epistemic model I had developed (and have successfully deployed in my professional work), particularly with respect to:
a. ICBC’s (at the time) undisclosed looming financial crisis; the company recently reported a total operating loss of $2.5 billion for the 2018-2019 fiscal period (ICBC, 2019), and
b. The comprehensive application of AI technologies by the company, particularly predictive analytics, in response.
As I noted in 2015, the application of predictive analytics is intended to provide insights about the likelihood of future claim events, ferret out instances of fraudulent claims activity, and assess a claimant’s eligibility to receive appropriate medical treatment.
Bearing this in mind, the integrating model that I introduced in March of 2015 allows us to easily decode why ICBC senior management, given the financial pressures it has since been confirmed the company was under, decided it was necessary to implement AI technologies. This decision also signaled, as I predicted, prospective changes to the legal and regulatory environment in which the company operates, and the eventual introduction by the British Columbia Provincial Government of “regulatory reforms”. In line with this prediction, on April 1, 2019 the Minor Injury Guidelines (ICBC, 2019) were introduced, imposing compensation limits and other restrictions on certain defined categories of motor vehicle related injury claims[2] (Nickerson B, 2019).
Social Epidemiology and Theories of Disease Distribution
In order to help better understand the phenomena of “diseases of despair” (Shanahan et al, 2019), theories in ‘social epidemiology’, the study of social factors in the etiology of disease, will be invoked in terms of ‘chronic stress theory’, including: (1) psychosocial, (2) social production of disease and/or political economy of health, and (3) ecosocial theory and related multi-level frameworks (Krieger N, 2001). ‘Social epidemiology’, as a marriage of sociological frameworks to epidemiological inquiry, seeks to elucidate principles capable of explaining social inequalities in health, and represents what Krieger (2001) has termed theories of disease distribution, which presume but cannot be reduced to mechanism-oriented theories of disease causation.
This aspect of our review, focusing on ‘social epidemiology’, will help us to better understand the origin (or etiology), temporality, and dynamics of many of the conditions that the application of AI to the medical system and insurance/compensation system purports to be able to help (re)solve.
Otherwise, AI applied to these systems is at risk of becoming primarily a tool for surveillance, monitoring and control (Mir A, & Mann S, 2013). Further, due to the “black box” nature of many of these programs they can be unaccountable and largely free from meaningful legal or legislative oversight (Pasquale, 2016).
The primary definition of the word “surveillance” is: “a watch kept over a person, group, etc., especially over a suspect, prisoner, or the like.” The etymology of this word is from the French word “surveiller” which means “to watch over”. The definition of surveillance I will apply to this project is:
Monitoring undertaken by an entity in a position of authority, with respect to the intended subject of the surveillance, that is transmitted, recorded, or creates an artifact. In this definition, an entity having a position of authority means that the possessor of that authority has both ability and legitimacy, in a normative sense, to enforce their will. (Mir A, & Mann S, 2013)
Also noteworthy is the application of “military-capable” analytics software such as NetReveal® from BAE Systems[3], among the world’s largest defense contractors, to the health and insurance sectors[4] (PwC, 2018).
Along with the potential for the deployment of powerful ‘black-box’ technologies to increase the disempowerment of the vulnerable (or merely ill-informed), there are several allied conceptual bridges between psychological alterations and the risks, onset and prognosis of chronic stress conditions that are of significance. Depending on the field of research there are parallel concepts, which analyze practically the same phenomena. These are the stress theories in physiology (Golkar et al, 2014), learned helplessness and control theory in psychology, depression research in psychiatry, and the concept of vital exhaustion and the psychosocial risk research in sociology (Kopp M, 2007). These will be contrasted and synthesized into our integrating model (Nickerson B, 2015).
GLOSSARY – Key AI Terminology:
Robotic Process Automation or RPA (noted in PwC’s 2018 Operational Review of ICBC)
Robotic process automation (RPA) is an emerging form of business process automation technology based on the notion of metaphorical software robots, or artificial intelligence (AI) “workers”, that take over the roles and tasks of humans and interact with customers by mimicking human interaction with the user interfaces of software systems; also termed Avatar Analytics (Nickerson B, 2015). With the advancement and growing maturity of AI, it is likely that we will soon see the convergence of these two technologies, RPA and AI. Interesting times indeed.
Predictive Analytics or Modeling
Prediction is applying AI approaches to learn from past (and possibly other) data to predict what will happen. A very simple example is the spam filter algorithm used in email systems. Based on past email that has been identified as Spam or not Spam (frequently called Ham), a predictive model can be developed that will predict whether a new, never-before-seen email is Spam or Ham.
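As a sketch of the spam/ham example above (not any particular email system’s actual algorithm), a minimal naive Bayes classifier can be written in a few dozen lines; the training messages here are invented for illustration.

```python
# Toy naive Bayes spam/ham classifier: learn word frequencies from labeled
# messages, then score a new message under each label.
import math
from collections import Counter

def train(messages):
    """messages: list of (text, label) pairs; returns word counts per label
    and the number of training messages per label."""
    counts = {"spam": Counter(), "ham": Counter()}
    totals = Counter()
    for text, label in messages:
        for word in text.lower().split():
            counts[label][word] += 1
        totals[label] += 1
    return counts, totals

def classify(text, counts, totals):
    """Pick the label maximizing log P(label) + sum of log P(word | label)."""
    vocab = len({w for c in counts.values() for w in c})
    scores = {}
    for label in counts:
        n_words = sum(counts[label].values())
        score = math.log(totals[label] / sum(totals.values()))
        for word in text.lower().split():
            # Laplace smoothing so unseen words don't zero out the score.
            score += math.log((counts[label][word] + 1) / (n_words + vocab))
        scores[label] = score
    return max(scores, key=scores.get)

training = [
    ("win free money now", "spam"),
    ("claim your free prize", "spam"),
    ("meeting notes attached", "ham"),
    ("lunch tomorrow with the team", "ham"),
]
counts, totals = train(training)
```

A production filter would of course use far more data and features, but the structure, learning from labeled past examples to score never-before-seen ones, is the same idea the glossary entry describes.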
Insurance claim professionals were pioneers in the use of predictive data analytics. Well before the term “Big Data” was coined, examiners were digging into the data within filed claims to unearth kernels of wisdom. These insights illuminated ways to reduce claims duration and costs (severities), or return injured or ill employees to health, work, or wellness on a faster timetable. Now a more robust form of predictive analytics is at hand, claiming to enhance the way insurance claims are handled. Unlike previous analytical processes that were focused entirely on an organization’s structured internal claims data, systems now have been advanced to mine the wealth of unstructured and external data that heretofore have been challenging to analyze for all but the most perspicacious claims examiners.
Distillation
Distillation is applying AI approaches to automate making large data volumes interpretable. Just like miners distill tons of raw ore into ounces of gold using machines, the goal is to automate the identification of value in big data.
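The mining metaphor can be made concrete with a deliberately small sketch: reducing a batch of free-text notes to a short ranked list of salient terms. The sample “claim notes” and stopword list below are invented for illustration only.

```python
# Minimal "distillation": collapse many documents into their most frequent
# content words, discarding common filler words.
from collections import Counter

STOPWORDS = {"the", "a", "of", "and", "to", "in", "with", "after", "reports"}

def distill(documents, top_n=3):
    """Return the top_n most frequent non-stopword terms across documents."""
    counts = Counter()
    for doc in documents:
        for word in doc.lower().split():
            if word not in STOPWORDS:
                counts[word] += 1
    return [word for word, _ in counts.most_common(top_n)]

claim_notes = [
    "claimant reports chronic neck pain after collision",
    "chronic back pain with sleep disturbance",
    "claimant reports ongoing pain and anxiety",
]
```

Real distillation pipelines use weighting schemes (e.g., TF-IDF) and topic models rather than raw counts, but the goal is the same: tons of raw text in, ounces of interpretable signal out.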
Categorization/Segmentation
Categorization or segmentation is applying AI approaches to automate the labeling and organization of large data volumes, so that data can be routed, processed, and interpreted in the “right” way, and customers/claimants/humans categorized and sorted into groups, such as those categorized as fitting within ICBC’s Minor Injury Guidelines, thus denying them a right they formerly held: the legal right, at inception, to full and ready access to the legal system and the courts for redress (Nickerson B, 2019).
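To make the sorting mechanics concrete, the following is a hypothetical sketch of rule-based claim segmentation. The category names, claim fields, and thresholds are my own illustrative assumptions; they are not ICBC’s actual criteria, which are proprietary.

```python
# Hypothetical claim segmentation: route each claim record into a handling
# stream based on coded fields. All field names and cutoffs are invented.

def categorize(claim):
    """Assign a claim dict to one of three illustrative handling streams."""
    if claim.get("injury_code") in {"sprain", "strain", "mild_concussion"} \
            and claim.get("estimated_cost", 0) < 5500:
        return "minor_injury_stream"     # capped-compensation track
    if claim.get("fraud_score", 0.0) > 0.8:
        return "special_investigation"   # flagged for manual review
    return "standard_adjudication"

claims = [
    {"injury_code": "sprain",   "estimated_cost": 3000,  "fraud_score": 0.1},
    {"injury_code": "fracture", "estimated_cost": 40000, "fraud_score": 0.2},
    {"injury_code": "strain",   "estimated_cost": 20000, "fraud_score": 0.9},
]
routed = [categorize(c) for c in claims]
```

Even this toy version shows the legal concern raised above: once the routing rule fires, the claimant’s downstream options are determined by the category assigned, not by individualized human judgment.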
Bibliography
Case A, and Deaton A. (2015). Rising morbidity and mortality in midlife among white non-Hispanic Americans in the 21st century. Princeton, NJ: Princeton University.
Case A, and Deaton A. (2017). Mortality and Morbidity in the 21st Century. Brookings Papers on Economic Activity, Spring 2017.
Danaher, J. (2016). The Threat of Algocracy: Reality, Resistance and Accommodation. Philosophy & Technology 23(3).
Golkar A, Johansson E, Kasahara M, Osika W, Perski A, and Savic A. (2014). The Influence of Work-Related Chronic Stress on the Regulation of Emotion and on Functional Connectivity in the Brain. PLoS ONE 9(9): e104550. doi:10.1371/journal.pone.0104550.
Harvard Health Publishing. (2016, November 10). The Chronic Pain Epidemic - What's to be Done? Retrieved from: https://www.health.harvard.edu/the-chronic-pain-epidemic
Kopp M. (2007). Public Health Burden of Chronic Stress in a Transforming Society. International Journal of Epidemiology, 30: 668–677.
Krieger N. (2001). Theories for social epidemiology in the 21st century: an ecosocial perspective. International Journal of Epidemiology, 30:668–677.
Mir A, and Mann S. (2013). The inevitability of the transition from a surveillance-society to a veillance-society: Moral and economic grounding for sousveillance. 2013 IEEE International Symposium on Technology and Society.
Mizoguchi K, Yuzurihara M, Ishige A, Sasaki H, Chui D, and Tabira T. (2000). Chronic Stress Induces Impairment of Spatial Working Memory Because of Prefrontal Dopaminergic Dysfunction. Journal of Neuroscience 15 February 2000, 20 (4) 1568-1574.
Muennig P, Reynolds M, Fink D, Zafari Z, and Geronimus A. (2018). America's Declining Well-Being, Health, and Life Expectancy: Not Just a White Problem. Am J Public Health;108(12): 1626–1631. doi:10.2105/AJPH.2018.304585.
Nickerson B. (2010, November). Chronic Subjective Injury Claims. Canadian Underwriter: https://www.canadianunderwriter.ca/features/cc-chronic-subjective-injury-claims/
Nickerson B. (2015). Establishing Genuineness in Chronic Subjective Injury Claims: The Integral Map & Adaptive Heuristics. TLABC: 2015 Chronic Pain Conference, Vancouver, BC. https://www.tlabc.org/index.cfm?pg=Chronic_Pain_Conference_2015
Nickerson B. (2019). RoadMap: How a Capped “Minor Injury” Could Resolve to a Full Tort Claim. Vancouver, BC: Prepared for the TLABC Conference: Life after CAPS: “Minor” Injuries, May 10 – 11, 2019.
Pasquale F, and Cashwell G. (2018). Prediction, Persuasion, and the Jurisprudence of Behaviorism. U of Maryland Legal Studies Research Paper No. 2017-34.
Pasquale, F. (2016). The Black Box Society: The Secret Algorithms That Control Money and Information. Harvard University Press.
PwC. (2018, January). Operational Review: The Insurance Corporation of British Columbia. Retrieved from: https://www2.gov.bc.ca/gov/content/governments/organizational-structure/ministries-organizations/crown-corporations/insurance-corporation-of-british-columbia
Sarfaty G. (2017). Can Big Data Revolutionize International Human Rights Law? University of Pennsylvania Journal of International Law, Vol. 39, Issue 1, p.73, 2017.
Schopflocher D, Taenzer P, Jovey R. (2011). The prevalence of chronic pain in Canada. Pain Res Manag. 2011;16(6):445–450. doi:10.1155/2011/876306
Shanahan L, Hill S, Gaydosh L, Steinhoff A, Costello E, Dodge K , Harris K, and Copeland W. (2019). Does Despair Really Kill? A Roadmap for an Evidence-Based Answer. American Journal of Public Health 109, 854-858.
Shmueli, G. (2010). To Explain or to Predict? Statistical Science 2010, Vol. 25, No. 3, 289-310.
[1] The BC Civil Resolution Tribunal describes itself as: “Canada’s first online tribunal.”
[2] Including “mild” concussion and “persistent pain” (ICBC, 2019).
[3] BAE’s NetReveal® allegedly “uncovers suspicious behaviour by identifying, linking and scoring people, places, events, businesses and other … attributes; using machine learning and network analytics to uncover how they are connected.” https://www.baesystems.com/en/cybersecurity/product/insurance-fraud
[4] ICBC payments to BAE SYSTEMS APPLIED INTELLIGENCE CANADA totaled $8,605,892 for the fiscal years 2017-2019. See endnote (a) for a detailed breakdown.
(a) Breakdown of ICBC payments to BAE SYSTEMS APPLIED INTELLIGENCE CANADA:
2017 - $3,423,244
2018 - $2,706,689
2019 - $2,475,959
Sources:
https://www.icbc.com/about-icbc/company-info/Documents/Statement-of-financial-info-2017.pdf (pg 103)
https://www.icbc.com/about-icbc/company-info/Documents/Statement-of-financial-info-2018.pdf (pg 94)
https://www.icbc.com/about-icbc/company-info/Documents/financial-info-2019.pdf (pg 102)