Reshaping the Human Experience and Exploiting the Human Condition: The Disturbing Reality and Risk of Unregulated Technological Developments

August 2023 • Source: Elizabeth S. Fitch, Melissa Lin, and Kyle James, Righi Fitch Law Group

The most disturbing reality with emerging disruptive technologies is the absence of ethical and regulatory oversight.  A Google whistleblower claimed that Google built a machine with human consciousness.  Google fired him and issued a press release stating that Artificial Intelligence (“A.I.”) is nowhere close to human consciousness.  But how would the average human know?  We don’t!  Another disturbing technological development is deepfake technology.  Deepfake software developers are hard-pressed to articulate why this technology is helpful to humanity, yet they forge ahead at light speed to get their products into the market.  And while the value of technologies ranging from vehicle telematics to cell phone data tracking in reducing risk is well documented, there is still very little oversight of, and thought about, the downsides and misuse of these technologies.  It is critical that insurers and attorneys understand the risks presented by these emerging and disruptive technologies so that claims professionals and defense lawyers can begin to build strategies and initiatives to handle the unique claims arising from their implementation.

I. Artificial Intelligence 

"A.I. is probably the most important thing humanity has ever worked on.  I think of it as something more profound than electricity or fire." – Sundar Pichai 

Both businesses and individuals have become increasingly reliant upon A.I., and current trends show that we will only become more dependent as time goes on.  In a 2021 McKinsey Global Survey, 56% of businesses reported adopting A.I. to perform operational functions.  As reliance on A.I. increases, it is important to regulate these systems, or negative consequences may follow from what Pichai believes is “the most important thing humanity has ever worked on.”

A.I. in the medical field could help doctors focus more on procedures instead of administrative tasks.  A GPT-3-based chatbot was created to aid doctors in dealing with patients.  It was designed to schedule appointments and talk with patients struggling with mental health.  Unfortunately, the A.I. reportedly had trouble with simple tasks such as determining the price of X-rays and had time-management problems while scheduling patients.  The A.I. also drew major concerns over its handling of mental health, telling a mock patient during one test to commit suicide.  The consequences of this lack of oversight could have been a loss of business or even of life.  These machines cannot discern whether harms like telling a patient to kill themselves are good or bad; they simply do what they are programmed to do.  This can cause additional harm when society’s biases are replicated in A.I.

Automated systems designed to be impartial and eliminate human bias can at times magnify the biases people hold instead of mitigating them.  An example can be seen in the criminal justice system.  As reliance on A.I. increases, so does the potential harm of automated decision making in the criminal justice system.  PredPol, software developed in conjunction with the Los Angeles Police Department, was designed to predict when, where, and how crime would occur.  This approach wound up disproportionately targeting lower-income and minority neighborhoods.  A study of overall crime in the city showed that crime was much more evenly distributed than the A.I. indicated.  Sentencing can also be informed by A.I. that assesses whether a defendant will recidivate.  Without proper oversight, the criminal justice system may take the A.I.’s recidivism probability as an impartial estimate free from bias.  A 2016 study by ProPublica determined that one such A.I. was twice as likely to incorrectly label black prisoners as being at high risk of reoffending as it was white prisoners.  While the race of the prisoner was not directly considered by the A.I., the other variables that were considered clearly disfavored black Americans.  As a result, many black prisoners may be receiving stricter jail sentences or higher bail because these systems incorrectly label them.
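
The mechanism is easy to demonstrate.  The short sketch below, written for this article with entirely invented numbers, trains a toy risk model that never sees race; because a facially neutral input (recorded prior arrests) is distributed unevenly across two hypothetical groups, the model nonetheless flags one group’s non-recidivists as “high risk” far more often.  The feature names, weights, and threshold are illustrative assumptions, not a reconstruction of any real scoring tool.

```python
# A minimal, hypothetical sketch of the mechanism described above: a risk
# model that never sees race can still mislabel one group more often when a
# facially neutral input (here, recorded prior arrests) is distributed
# unevenly across groups.  All numbers are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000
group = rng.integers(0, 2, n)  # protected attribute, NOT given to the model

# Heavier policing of group 1 inflates its recorded prior arrests.
priors = rng.poisson(np.where(group == 1, 2.5, 1.0))

# Assume actual reoffending depends only weakly on recorded priors.
p_reoffend = 1 / (1 + np.exp(-(0.3 * priors - 1.0)))
reoffends = rng.random(n) < p_reoffend

# Train on the facially neutral feature alone.
clf = LogisticRegression().fit(priors.reshape(-1, 1), reoffends)
flagged = clf.predict_proba(priors.reshape(-1, 1))[:, 1] > 0.4  # "high risk"

# False-positive rate: flagged as high risk among those who did NOT reoffend.
for g in (0, 1):
    nonrecid = (group == g) & ~reoffends
    print(f"group {g}: false-positive rate = {flagged[nonrecid].mean():.2f}")
```

Running the sketch prints a markedly higher false-positive rate for the more heavily policed group, mirroring the kind of disparity ProPublica described.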

Such biases held by A.I. can spread through other institutions.  An audit of a resume-screening tool found that the two factors most strongly associated with positive job performance were being named Jared and having played lacrosse, two factors more prevalent among white applicants than nonwhite applicants.  If this is combined with a belief that A.I. is impartial, and no oversight looks for prejudicial factors such as these, affected job applicants can be left with no legal claim to address potential employment discrimination.  Issues like these rarely reach wealthier hires for high-paying roles, whose applications are typically reviewed by people.  Instead, A.I. is more likely to screen applicants for lower-income jobs who may not have the resources to seek relief.

A. ChatGPT*

ChatGPT is one such A.I. system, already in use across the country. The emergence of ChatGPT, a revolutionary language model developed by OpenAI and free for anyone on the planet to use, has brought both excitement and trepidation to the realm of artificial intelligence.  Just as with other disruptive technologies, the concerns about ethical and regulatory oversight are becoming increasingly pertinent within the domain of A.I. language models.

ChatGPT operates based on patterns and information present in the massive datasets it was trained on.  While its ability to generate human-like text is impressive, it also inherits the biases encoded within these datasets.  These biases can range from gender and racial biases to socio-economic and cultural prejudices.  Just as seen with A.I. systems in other sectors, ChatGPT's outputs may inadvertently amplify societal biases, exacerbating rather than alleviating them.  For instance, if prompted with text containing subtle biases, ChatGPT might unknowingly generate responses that reinforce those biases.  This can lead to harmful consequences when used in contexts such as providing customer support or generating content for various industries.  Imagine a scenario where ChatGPT is employed to assist in human resources, and its responses subtly favor certain genders or ethnicities during candidate evaluations, perpetuating discrimination in hiring processes.

ChatGPT's remarkable ability to generate coherent and contextually relevant text can sometimes blur the lines between factual accuracy and fabrication.  The model lacks a true understanding of the world, generating responses based on patterns it learned during training.  This becomes a significant concern when considering its use in sensitive sectors like healthcare or law.  Like the medical A.I. discussed earlier, ChatGPT could generate responses that, although coherent, might contain misinformation or potentially dangerous advice.  For example, if asked about medical symptoms and treatments, ChatGPT might produce information that, while plausible sounding, is incorrect and potentially harmful if followed.

One of the pressing challenges with ChatGPT and similar A.I. models is their potential to contribute to the spread of misinformation.  Given their ability to generate text that resembles human-authored content, these models can unwittingly contribute to the dissemination of false or misleading information.  In a world where fake news and misinformation are already significant concerns, the unregulated deployment of A.I. language models like ChatGPT adds another layer of complexity to the battle against false information.

While the capabilities of ChatGPT and similar A.I. language models are undeniably impressive, their unfettered use raises serious ethical and regulatory issues.  Just as with A.I. in other domains, there is a pressing need for oversight, transparency, and bias mitigation in the deployment of ChatGPT.  As A.I. continues to weave itself into the fabric of human society, it is imperative that we engage in thorough discussions and implement safeguards to ensure that these technologies contribute positively to our world without exacerbating existing challenges.

*Everything in this section was written by ChatGPT 3.5. We inputted the A.I. section of the article into ChatGPT’s textbox and prompted it to write the ChatGPT section following the style and content of the preceding information. As you can see, it “recognizes” the potential pitfalls of its widespread use and the clear need for oversight.  What is also apparent is its constant need to compliment itself.                           

B. Artificial Intelligence Oversight 

A.I. oversight is a necessity if manufacturers wish to mitigate some of the issues and biases of A.I.  In 2015, only one proposed bill involved A.I. regulation.  In 2021, there were 130.  This increase shows that more people are becoming aware of the prospects A.I. offers as well as the threats it can pose if not properly controlled.  If the public perceives A.I. systems as perfect, biased outcomes can be brushed off as simply the result of following a program created to be objective.

A recent poll shows Americans fear that A.I. may come with other negative consequences that could create new legal issues.  A national poll conducted on behalf of Stevens Institute of Technology showed that loss of privacy is one of the leading concerns surrounding A.I., with Gen Zers the least concerned (62%) and Baby Boomers the most concerned (80%).  Most respondents (72%) believe that countries or businesses may use A.I. irresponsibly, and most (71%) also believe that individuals will use A.I. irresponsibly.  While this shows concern about the growth of A.I. technology, more than a third of respondents (37%) do not believe that A.I. will lead to gender bias.  These respondents could be discriminated against in ways they cannot readily perceive, and they may lack the resources to file a claim.

The responses show that while A.I. may be accepted in society, sufficient oversight is a necessity.  One possible solution is to employ more diverse A.I. teams to help produce the data sets.  Diversity can create data sets that better represent society at large and can potentially mitigate some of the biases A.I. technology may carry.

II. Deepfake Technology

One of the more feared A.I. technologies is the deepfake.  The inherent danger of deepfakes comes from people’s inclination to believe what they see and hear.  The term “deepfake” refers to media produced with a subset of machine learning methods known as “deep learning” techniques.  Deep learning is a subset of machine learning, which is in turn a subset of A.I.  A model is trained on data for a specific task, and the more data the model is given, the sharper and higher quality its output becomes.  This approach can be used to replicate videos, images, audio, text, and potentially other forms of media.  In 2020, a deepfake programmer was able to produce realistic tracks mimicking Jay-Z, Elvis, and Frank Sinatra by training on old, previously released music.

One of the more commonly known uses of deepfake technology is the face swap, in which the face of one person is placed on the body of another.  The swapped face appears to come to life, matching the mannerisms of the underlying subject as they speak or move.  The technique can be seen in movies, where an actor’s face is made younger or placed on the body of another actor, and it has even been used to put deceased actors’ faces on screen for a scene.
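
For readers curious about the mechanics, the sketch below illustrates the classic face-swap architecture often described in the deepfake literature: a single shared encoder learns features common to both faces, a separate decoder is trained per identity, and the swap is performed by routing one person’s encoded face through the other person’s decoder.  It is a minimal toy model written for this article; the layer sizes, image dimensions, and training details are assumptions, and real tools are far more elaborate.

```python
# Minimal sketch of the shared-encoder / per-identity-decoder face-swap
# design.  Shapes and sizes are illustrative assumptions, not a production
# model.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(), # 16 -> 8
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )
    def forward(self, z):
        return self.net(z)

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()  # one decoder per identity

# Training: each decoder learns to reconstruct its own person's face from
# the shared encoding (random tensors stand in for cropped face images).
faces_a = torch.rand(8, 3, 64, 64)
loss = nn.functional.mse_loss(decoder_a(encoder(faces_a)), faces_a)

# The swap: person A's expression, rendered with person B's face.
with torch.no_grad():
    swapped = decoder_b(encoder(faces_a))
```

In practice, such models are trained on thousands of cropped and aligned face images of each subject, which is part of why celebrities with abundant footage are the most common targets.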

Although the movie industry has benefitted, actors such as Kristen Bell, Scarlett Johansson, and Gal Gadot have all fallen victim to a harmful use of face swaps called deepfake pornography.  Deepfake pornography is when a face swap is used to place an individual’s face (commonly a famous person’s) on pornographic content.  The idea is to make the individual look as if they were engaging in the pornographic conduct themselves.  Face swaps can be accompanied by a deepfake technique called lip syncing, in which voice recordings from other videos are used to make a subject appear to say something.  This technology is accessible to most people, and examples of deepfakes that fooled hundreds can be seen on apps such as TikTok, YouTube, and Facebook.

The accessibility of deepfake technology raises questions about what can be done to protect against its misuse as these issues become more pervasive.  In 2020-21, over 100,000 pornographic deepfakes of women were created without their consent or knowledge.  In 2021, an A.I. text generator called A.I. Dungeon produced sexually explicit content involving children.  Such technology is rapidly improving and drawing concerns over how to counter or regulate it.  The possible harms of deepfake technology are numerous, including fraud, cyberbullying, the spread of misinformation, fake evidence in court proceedings, and even child predators masking their age when attempting to meet minors.

A. Deepfake Oversight

While deepfake technology is new, bills are being passed to help combat it.  Virginia has amended its revenge porn law to include deepfake pornographic content of a nonconsenting individual.  Current laws can also be applied to deepfakes depending on how they are used.  For example, if one uses a deepfake for extortion or fraud, one can be charged with those respective crimes.  As the quality of deepfakes improves, so must our awareness of them, and companies have been urged to raise awareness of deepfakes to mitigate the harm that may result.

III. Telematics

A. Telematic Dangers

Telematics can also be dangerous without proper oversight.  Telematics refers to the combination of telecommunications and information processing.  This branch of technology enables GPS tracking and allows insurers to assess risk factors.  Insurance companies can monitor whether an individual is an accident risk and adjust insurance premiums accordingly.  Insurance companies may even be held liable if nothing is done in response to potentially risky driving behaviors.  Telematics has provided vehicles with benefits like the ability to disable a car when stolen, or the software to unlock doors and activate heated seats.  But while the introduction of telematics may seem benign, without oversight the tracking features in vehicles and cellphones can have negative consequences.
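
To make the insurance use concrete, the sketch below shows one simplified way a usage-based insurance program might turn raw telematics events into a risk score and premium adjustment.  Every event name, weight, and threshold here is an invented assumption for illustration; actual carrier models are proprietary and considerably more sophisticated.

```python
# Hypothetical sketch: scoring telematics data and adjusting a premium.
# All event names, weights, and bands are invented for illustration.
from dataclasses import dataclass

@dataclass
class TripSummary:
    miles: float
    hard_brakes: int        # decelerations beyond a g-force threshold
    rapid_accels: int
    speeding_seconds: int   # time spent well above the posted limit
    night_miles: float      # late-night driving is often weighted as riskier

def risk_score(trips: list[TripSummary]) -> float:
    """Return a 0-100 score; higher means riskier (weights are assumptions)."""
    miles = sum(t.miles for t in trips) or 1.0
    per_100mi = lambda total: 100.0 * total / miles
    score = (
        4.0 * per_100mi(sum(t.hard_brakes for t in trips))
        + 3.0 * per_100mi(sum(t.rapid_accels for t in trips))
        + 0.1 * per_100mi(sum(t.speeding_seconds for t in trips))
        + 10.0 * (sum(t.night_miles for t in trips) / miles)
    )
    return min(score, 100.0)

def adjusted_premium(base: float, score: float) -> float:
    """Map the score to a discount or surcharge (illustrative bands)."""
    if score < 20:
        return base * 0.85   # safe-driving discount
    if score < 50:
        return base
    return base * 1.20       # surcharge for risky behavior

trips = [TripSummary(120, 3, 2, 45, 10), TripSummary(80, 6, 4, 130, 30)]
print(adjusted_premium(base=1200.0, score=risk_score(trips)))
```

Even this toy version makes the privacy stakes clear: computing such a score requires continuous collection of location, speed, and time-of-day data.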

B. Telematic Oversight

Around 26 states have shown growing concern over the privacy violations that may accompany both vehicle and cellphone tracking.  Because this is a growing concern for citizens, multiple states are acting to bring vehicular and cellular tracking within their stalking statutes.  Data tracking on cellphones is another issue people are largely aware of, and it has potentially worse consequences than vehicle tracking.  Cell phones are always on our persons and can reveal the most intimate details of our lives.  As a federal appeals court stated, “a person who knows all of another’s travels can deduce whether he is a weekly churchgoer, a heavy drinker, a regular at the gym, an unfaithful husband, an outpatient receiving medical treatment, an associate of particular individuals or political groups — and not just one such fact about a person, but all such facts.”  Legal precedent has alleviated some concern about governmental intrusion into cellphone data and tracking.  The Fourth Amendment applies here and, in most cases, requires a warrant to track an individual’s cellphone location data.

IV. Claims Professional Response to A.I. and Telematic Liability

With the increase in A.I. and telematics, claims professionals will need to make a fundamental shift in the processing and evaluation of claims.  These claims will require far more technological sophistication.  The claims handler will be well served by developing a deep understanding of the technology and by approaching A.I. and other emerging-technology claims as complex product liability matters rather than simple negligence cases.  This is because any accident involving the product could have been caused by its A.I.  Claims professionals will have to be prepared to follow the chain of production for any A.I. sold to determine which point in the manufacturing process may have been responsible for the damages.  It will therefore be crucial for claims professionals to find experts in the various types of A.I. to analyze claims and determine whether the A.I. malfunctioned and, if it did, who is to blame.  Additionally, claims professionals who cover producers of A.I. products will need to adjust their rates based on how predictable the A.I.’s behavior is and on the product’s potential to cause damages if the A.I. malfunctions.

The evolution of technology necessarily results in the evolution of insurance products.  New insurance products are already being developed to respond to the risks associated with artificial intelligence and emerging technology.  Claims professionals will need to keep abreast of the insurance product iterations to conduct a proper coverage analysis at the outset of the claims handling process.  

As with A.I. products, claims professionals will also need to gather new resources and experts to evaluate the unique dangers that Internet of Things (IoT) devices present.  Claims professionals will need to be able to tell not only whether an IoT device’s programming caused the damages in a claim, but also whether a lack of cybersecurity did.  Furthermore, because any company could be liable for a cybersecurity breach, claims professionals will need to evaluate the cybersecurity measures companies take for IoT devices connected to their networks in order to determine risk and evaluate claims.

V. Conclusion

While the introduction of emerging technologies can aid society, it requires sufficient oversight.  These machines are not perfect and, if misused, can do more harm than good.  As technology grows, it is important that the legal system, and the society that manufactures A.I., grow in awareness of the negative consequences such systems may have and apply the oversight necessary to remedy, mitigate, or take accountability for any potential pitfalls.