FMCSA Publishes Draft Of Medical Examiner’s Handbook For Proposed Regulatory Guidance On Driver Physical Qualifications
August 18, 2022 • Source: Joe Pappalardo, Gallagher Sharp LLP
On August 16, 2022, the Federal Motor Carrier Safety Administration (FMCSA) published a 122-page draft of the new Medical Examiner’s Handbook that could become a guide for certified medical examiners who determine whether a driver meets the physical qualifications for commercial driving. The Agency also proposed changes to the Medical Advisory Criteria now published in the Code of Federal Regulations, 49 CFR part 391, Appendix A.
Under the current regulations, medical examiners make physical qualification determinations of drivers on a case-by-case basis and may refer to the related Medical Advisory Criteria for guidance. See 49 CFR 391.41 through 391.49.
According to the FMCSA, the handbook would give medical examiners clearer information on the specific regulatory requirements for a driver’s physical qualifications and offer further guidance to medical providers making such determinations.
The current draft of the handbook, however, also offers recommendations on identifying drivers at risk for moderate-to-severe obstructive sleep apnea, a condition medical examiners are not currently required to test for under the Federal Motor Carrier Safety Regulations (FMCSRs). Should the Agency formally adopt the handbook, the question becomes whether it will serve as a catalyst for future regulatory requirements under the FMCSRs, such as screening commercial drivers for obstructive sleep apnea.
The debate about regulatory requirements for commercial motor vehicle (CMV) drivers with obstructive sleep apnea is nothing new. In March 2016, the FMCSA published an Advance Notice of Proposed Rulemaking (ANPRM) that would have required CMV drivers who exhibit multiple risk factors for obstructive sleep apnea to undergo evaluation and treatment by a healthcare professional with expertise in sleep disorders. That ANPRM was withdrawn in 2017.
FMCSA is now accepting public comments on the proposed regulatory guidance through September 30, 2022.
When Artificial Intelligence and the Internet of Things Collide
August 8, 2022 • Source: Elizabeth Fitch & Melissa Lin, Righi Fitch Law Group
I. Introduction
As technology continues to advance rapidly, artificial intelligence (“AI”) and internet of things (“IoT”) devices have become an integral part of our daily lives. With the pressure to bring new technology products and programs to market quickly, the exponential increase in the use of these technologies comes with increased liability risks for consumers, programmers, and manufacturers alike. Numerous poorly developed AI systems have shown patterns of discrimination and proposed flawed or even life-threatening solutions to problems. IoT devices have continued to injure and kill people when they malfunction, while also posing a major cybersecurity risk. With the rapid adoption of these technologies, the legal and insurance industries have struggled to deal with the new and unanticipated risks that accompany them. This article addresses the emerging risks associated with the unchecked and unexpected consequences of AI and IoT going rogue, and how claims professionals can best prepare to handle these claims.
II. Emerging Sources of Liability
a) Artificial Intelligence
Both businesses and individuals have become increasingly reliant upon AI, and current trends show that we will only become more dependent as time goes on. In McKinsey’s 2021 global survey, 56% of businesses reported adopting AI to perform operational functions. Businesses are not the only ones employing AI; individuals and governments have also realized the potential AI presents. In 2021, private investment in AI totaled $93.5 billion, more than double the private investment made in 2020. Federal lawmakers in America have started to take notice as well: while only one bill proposing AI regulation was introduced in 2015, 130 were proposed in 2021. This increased attention shows that more people are becoming aware of the prospects AI offers as well as the threats it can pose if not properly controlled. As the prevalence of AI in our society continues to grow, so too will the risk of liability caused by AI malfunctioning.
Microsoft’s “Tay” chatbot offers a cautionary tale of an AI system gone rogue: Tay “learned” from Twitter users to make extremist and bigoted statements. Amazon scrapped a machine-learning tool for selecting top candidate resumes because it discriminated against women. In another notorious gaffe, Google’s photo app labeled a Black couple as “gorillas.” While these incidents caused embarrassment and outrage over the conclusions these faulty AI systems produced, such damages are still on the lower end of the spectrum of harm AI can cause.
Other instances of faulty AI have had the potential to ruin individuals’ lives or cause serious bodily harm or death. For example, many law enforcement agencies now use AI to help them find criminals. Such AI, however, often misidentifies people as criminals, which can lead to false arrests and put innocent people in danger. Amazon’s Rekognition AI misidentified twenty-seven professional athletes as criminals during a test run by the ACLU that compared pictures of people with criminal mugshots. Fortunately, that was only a test, so the AI was not actually being used to arrest anyone. There have, however, been cases where AI was used to arrest innocent people. In June 2020, police in Detroit relied on a facial recognition AI to arrest Robert Julian-Borchak Williams for felony larceny. The program matched Williams to a blurry surveillance image of the real perpetrator shoplifting $3,800 worth of timepieces. As a result, Williams was arrested in front of his wife and children and wrongfully detained. Although Williams was eventually able to prove his innocence, this incident shows the potential for people to be endangered by the mistakes of AI.
Errors made by AI can also have directly lethal consequences, as shown by IBM’s AI, Watson. In 2017, Watson was used to recommend treatments for cancer patients. In one case involving a 65-year-old lung cancer patient who had developed severe bleeding, Watson recommended a drug that could cause a fatal hemorrhage. Fortunately, the supervising doctor knew better and refused to give the patient the medication. But as AI becomes more prevalent in our society, there will surely be cases where professionals incorrectly defer to an AI’s recommendations and severely injure others.
For these reasons, it is clear that AI left to run unchecked can represent major and even existential risks to insureds. Accordingly, claims professionals will need to prepare for the liability threats that AI reliance may cause their clients.
b) Internet of Things
IoT describes devices with sensors, software, and other technologies that connect and exchange data with other devices over the internet. While this connectivity has led to many improvements in our quality of life, the IoT also presents inherent risks. By 2025, an estimated 75 billion active IoT devices will be connected to the internet. Accordingly, claims professionals can expect to see claims ranging from property damage and business interruption caused by threat actors taking down the grid, to wrongful death and catastrophic injury claims.
i) Dangers of Autonomous Vehicles
When IoT devices malfunction, they have the potential to wreak havoc and create unexpected liability exposures. Cautionary tales include everything from rogue crop-dusting IoT devices destroying crops to semi-automated vehicle failures resulting in serious accidents. In the last few years alone, numerous crashes have been caused by drivers relying on the self-driving capabilities of newer vehicles. Tesla’s Autopilot feature has become the poster child of this issue: ever since Tesla began selling cars with Autopilot, drivers have been getting into crashes as they metaphorically, and sometimes literally, fell asleep at the wheel. Most crashes have been relatively minor so far, but more severe accidents have resulted in serious injury or even death. With an estimated 765,000 Tesla vehicles in the United States equipped with Autopilot, and even more vehicles from other manufacturers offering similar features, claims professionals should be prepared to see an increase in crashes involving such malfunctions.
Similarly, Uber’s self-driving car program has not been without controversy. In fact, one of Uber’s test drivers for its fully autonomous vehicles is now facing negligent homicide charges for allowing the car to hit and kill a pedestrian who was walking her bike across the road. Investigations revealed that the test driver, who was supposed to be watching the road, was streaming a television show instead. Although Uber was not held criminally liable, the heavy criticism it received caused it to halt autonomous vehicle testing. The incident did not deter the company for good, however: in 2020, Uber’s self-driving cars were allowed to resume testing in California, along with those of 65 other transport firms, and the company has shown a commitment to improving and implementing its autonomous cars. As such, claims professionals should prepare to see more accidents caused by distracted safety drivers.
ii) IoT Manufacturing Malfunctions
IoT manufacturing malfunctions are becoming an all too frequent occurrence. Robotics in manufacturing plants have proven deadly when they malfunction: there have been many instances of workers being crushed and bludgeoned by robots in these facilities when the machines go haywire. Often these accidents are caused by the robots’ strict adherence to their coding; when a robot encounters a novel situation, it often does not know how to respond. A 2016 study found 144 deaths, 1,391 patient injuries, and 8,061 device malfunctions related to robotic surgical machines between 2000 and 2013. The reported malfunctions during surgeries included uncontrolled movements of the robot, spontaneous switching on and off, loss of video feed, system error codes, and electrical sparks causing burns. Overall, the study found 550 adverse surgical events per 100,000 surgeries, a rate of roughly 0.55%.
There is no denying that AI and IoT devices will become more integrated into our daily lives as time goes on, and new and unanticipated risks will emerge alongside them. Claims professionals will therefore have to adapt to these changing times to understand how to best handle the damages that result from society’s technological advancement.
III. Legal Response to AI and IoT
As AI and IoT are relatively new technologies that have been widely commercially available for only the last few years, there is little established law regarding the liabilities they cause, and no consensus on how damages caused by these technologies should be handled. That has not stopped courts and legal scholars, however, from developing their own legal theories for allocating liability stemming from AI and IoT.
c) Legal Theories for AI Liability
AI presents particularly unique legal liability challenges because, in theory, the software program that was sold to the user will not be the same program that caused the liability. The machine-learning capabilities of AI necessarily result in the software modifying its own behavior as it learns from the data it receives, becoming more efficient or accurate at its task. As such, the legal community is debating who should be held liable for damages caused by AI. Is the consumer who used the AI liable because the defect manifested itself while under his control, or should the manufacturers and programmers have predicted the defect and prevented it from manifesting preemptively? If the suppliers are to blame, which link in the chain of production is at fault for which percentage of damages? Can an AI be treated as a legal entity (as corporations are) and be held directly responsible for damages it causes? To answer these questions, several legal theories have been proposed to allocate liability for damages caused by AI.
One proposed legal theory is to apply the doctrine of Respondeat Superior to AI liability. Respondeat Superior, also known as the “Master-Servant Rule,” holds a principal liable for the actions of an agent who was negligent while acting within the scope of his employment. A similar theory is to treat AI as an ordinary tool: just as a person would be liable for negligently operating heavy machinery, an AI consumer would be liable for negligently implementing AI. Under this theory, a person who buys or uses the AI is liable for damages the AI causes while acting under that person’s control. This creates a principal-agent-like relationship between the consumer and the AI, so that actions taken by the AI can be imputed to the consumer. The consumer therefore cannot evade liability for damages caused by the AI simply because he did not intend for the AI to act the way it did. Under this theory, the AI user could hold the AI developer liable for damages only by proving that the AI was defective while under the developer’s control and that the defect was the proximate cause of the damages.
Alternatively, some believe that AI should be treated as a legal entity, similar to a corporation. The argument for this position is that AI systems are capable of rational thought and independent decisions, so they should be held liable for the damages they cause. The benefit of this approach is that identifying the liable party becomes easy: the AI itself would be directly suable when it malfunctions, rather than the user or creators who could not have anticipated the failure.
Perhaps the most likely approach courts will take is to adapt current product liability law to AI. Product liability law has historically evolved alongside emerging technologies, so it is likely to evolve with the emergence of AI to address novel issues. If that is the case, an AI user could be found liable for damages caused by an AI if he used the AI in a negligent manner. A user would not be liable, however, if he used the AI in a reasonably foreseeable way that inadvertently caused the AI to develop a defect. In such a case, the software companies that developed the AI would likely be found liable under product liability law for failing to anticipate how an AI could develop defects through reasonably foreseeable interactions with humans. Courts would likely apply a risk-utility test to determine whether safety precautions could have been taken to decrease the risk of AI malfunctions without lowering the AI’s utility or unnecessarily increasing the cost of producing it.
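To make that balancing concrete, one simplified way to express a risk-utility calculus (a sketch in the spirit of the Learned Hand formula, not the test of any particular jurisdiction) is:

B < ΔP × L

where B is the burden, or cost, of an additional safety precaution; ΔP is the reduction in the probability of an AI malfunction the precaution would achieve; and L is the magnitude of the loss if the malfunction occurs. When the inequality holds, the precaution was cost-justified, and a producer’s failure to take it points toward liability; when it does not hold, imposing the precaution would sacrifice utility or raise costs without a commensurate safety gain.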
d) Liability for IoT Damages
The IoT claims landscape is equally complex. As IoT cybersecurity risks become more prominent, courts will likely be more willing to find companies that produce IoT devices liable for products lacking adequate security measures. Product liability arises when a product was in a defective condition while under a producer’s control that made it unreasonably dangerous and was the proximate cause of a plaintiff’s damages. IoT producers may not be the only ones held liable for IoT cybersecurity issues; IoT users will likely also face liability.
Most companies today use IoT in some capacity for their day-to-day operations. While conducting business, they inevitably collect sensitive customer data that they have a duty to protect. The problem is that many IoT devices vulnerable to hacking are never monitored and their software is never updated. Unsurprisingly, these devices often get hacked and act as backdoors through which hackers gain broader access to sensitive customer information. As these cyberattacks become more frequent, courts will likely start holding companies to a higher standard of care, requiring proper precautions to ensure that all IoT devices connected to a network are secure. Negligence claims against companies with substandard IoT cybersecurity will therefore likely increase in the years to come.
IV. Claims Professional Response to AI and IoT Liability
With the increase in AI products, claims professionals will need to make a fundamental shift in how they process and evaluate claims. These claims will require far more technological sophistication. The claims handler will be well served by developing a deep understanding of the technology and approaching AI and IoT claims like complex product liability claims rather than simple negligence cases, because any accident involving the product could have been caused by its AI. Claims professionals will have to be prepared to follow the chain of production for any AI sold to determine which point in the manufacturing process may have been responsible for the damages. It will therefore be crucial for claims professionals to find experts in the various types of AI who can analyze claims and determine whether the AI malfunctioned and, if so, who is to blame. Additionally, claims professionals that cover producers of AI products will need to adjust their rates based on how predictable the AI’s behavior is and the product’s potential to cause damages if the AI malfunctions.
The evolution of technology necessarily results in the evolution of insurance products. New insurance products are already being developed to respond to the risks associated with artificial intelligence and IOT devices. Claims professionals will need to keep abreast of the insurance product iterations to conduct a proper coverage analysis at the outset of the claims handling process.
As with AI products, claims professionals will need to gather new resources and experts to evaluate the unique dangers IoT devices present. They will need to determine not only whether an IoT device’s programming caused the damages in a claim, but also whether a lack of cybersecurity did. Furthermore, because any company could be liable for a cybersecurity breach, claims professionals will need to evaluate the cybersecurity measures companies take for IoT devices connected to their networks in order to assess risk and evaluate claims.
V. Conclusion
Evaluating accidents involving IoT devices and artificial intelligence is a unique exercise that requires an understanding of how the IoT and AI contributed to the accident. Ongoing education of everyone in the risk industry, from claims professionals to lawyers, on technology developments and the legal liabilities associated with IoT failures and AI’s unintended consequences will be critical to managing risk.
Colorado Court Grants Harris, Karstaedt, Jamison & Powers, P.C.'s Motion for Dismissal
July 28, 2022 • Source: Jamey Jamison, Harris, Karstaedt, Jamison & Powers, P.C.
Attorneys from Harris, Karstaedt, Jamison & Powers, P.C. filed a motion for dismissal in the Colorado Court of Appeals case, Fierst v. Greenwood Athletics. In this case, the plaintiff signed a membership agreement with the client, an athletic club, which contained an exculpatory clause. The court found that the agreement was clear and unambiguous. Read the full opinion.
Fire Protection Systems Need Fire Protection Engineers
October 21, 2021 • Source: Adam Farnham, Technical Lead, Fire Protection Engineering, Envista Forensics and Andrew Bennett, Assistant Technical Director, Fire & Explosion, Envista Forensics
Fire protection systems are often the silent and forgotten part of a structure until the moment they are needed. We rely on fire protection systems to aid in alerting occupants, calling for help, extinguishing or slowing the fire, and containing the danger if possible. When the systems do not achieve these goals, then what? Besides the possibility of lives lost and damage to property, what should investigators look for during an origin and cause investigation? What are the recovery avenues?
Automatic fire sprinklers are one of many types of fire protection systems. According to National Fire Protection Association (NFPA) data, when fires are large enough to activate automatic sprinklers, the sprinklers function as designed approximately 92% of the time. Approximately 60% of the fires in which sprinklers failed to operate were due to the system being shut off.1
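Putting those two figures together (our arithmetic, not a statistic the NFPA reports directly): if sprinklers fail to operate in roughly 8% of fires large enough to activate them, and about 60% of those failures trace back to a closed valve, then shut-off systems alone account for (1 − 0.92) × 0.60 ≈ 0.048, or roughly 5%, of such fires. In other words, impairment is often a bigger driver of sprinkler failure than hardware defects.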
Fire Protection and Notification Systems
The vast spectrum of fire protection and notification systems is typically designed and installed in response to building code requirements. The choice of system can be driven by hazard classifications, code requirements, or requests made by the owner. Automatic extinguishing systems (AES) can include:
Wet Pipe, Dry Pipe, and Deluge Sprinklers
Wet pipe sprinkler systems are the most widespread active systems in use today. They consist of water-filled piping with individually activated sprinklers within a building, designed to provide a specified density of water spray when thermally activated. Wet pipe systems are monitored for water flow, which is read by the monitoring company as a fire alarm. They are also monitored for valve tamper, which will alarm should the water supply valve close.
Dry pipe sprinkler systems are similar to wet pipe systems in that they possess individual thermally activated sprinklers. Because the piping and sprinklers may be located in areas subject to freezing temperatures, the pipes are pressurized with low-pressure air instead of water, which keeps water out of the piping until the system activates and thereby prevents freezing. When a sprinkler activates, the air pressure drops and a valve at the base of the water supply riser opens, flooding the system. In addition to water flow and valve tamper, these systems are monitored for low air pressure, which occurs when a leak is large enough that the air maintenance device cannot keep the system pressurized. Responding to a low air pressure alarm may save the system from falsely tripping when only an air leak is present.
Deluge water spray systems are found at hazards where rapid fire development is expected, such as chemical processing facilities, large transformers, highway tunnels, or fuel processing and distribution sites. These systems are activated by heat or smoke detection and have open sprinklers, so all the heads spray at once. They are monitored for alarm and trouble points similar to a wet pipe system, but the releasing systems have additional monitoring points, such as loss of main power, ground fault, and supervisory air.
High and Low Expansion Systems
High and low expansion systems typically protect facilities where hydrocarbon fuels are present and subject to pooling, including aircraft manufacturing and repair, truck repair, and fuel processing or storage. Both types work similarly, using a water-based foam to smother a fire. Low-expansion foams have an air-to-water ratio of 20:1 or less; high-expansion foams have an air-to-water ratio greater than 20:1. Water is supplied through deluge valves and fast-acting control valves with proportioning mechanisms that deliver the foam/water mixture to foam generators. The foam generators are located above the hazard and use blowers to form the foam, which quickly blankets the protected hazard, cools the fire, and forms a layer that excludes oxygen.
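To put those ratios in perspective (illustrative figures, not values drawn from the NFPA standards): a low-expansion system operating at 10:1 would turn 100 gallons of foam solution into roughly 1,000 gallons of finished foam, while a high-expansion system operating at 500:1 would turn the same 100 gallons into roughly 50,000 gallons, enough to quickly fill a large protected volume such as a hangar bay and smother a pooled-fuel fire.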
UL 300 and Dry Chemical Extinguishing Systems
The wet agent version of these systems, listed to UL 300, uses a water-based, soap-like solution, while the dry chemical type uses an agent similar to that in a fire extinguisher. These systems typically protect areas such as commercial kitchens, paint spray booths, and other special hazards. Both use pressurized gas to push the agent out of storage tanks, but unlike a handheld extinguisher, they deliver the agent through fixed piping and nozzles at the protected hazard, which is then blanketed with the medium. Additionally, these systems are designed to shut off fuel to the cookline or hazard, and newer installations include an activation alarm.
Gaseous Agent Systems: Carbon Dioxide, Halon/Replacement Agent
Gaseous agent systems are used for protection of hazards that are susceptible to severe damage from water. This can include printing presses, computer servers, and power generation equipment. The systems are specifically designed to blanket the hazard at a specific density and for a specific amount of time and are typically activated by smoke, heat, or flame detection. Monitoring points vary depending on the detection system used but can include system discharge, backup battery failure, power failure, or processing system trouble.
All of these systems are subject to design review and routine testing and maintenance per NFPA specifications. Proper testing and maintenance are the property owner’s responsibility, though these functions can be contracted to a knowledgeable entity. It is important to determine who provided what maintenance on which system and how often; these records will be critical to the analysis.
Monitoring and Notification
All fire protection systems are typically activated and monitored by a detection or alarm system, and these monitoring systems can vary just as much in structure as the fire protection systems themselves. They can provide vital information regarding the timeline of a fire and its progression, as well as any problems that occurred with the system before or during the incident.
Simple fire alarm systems can monitor smoke detectors, include manual pull stations, and report sprinkler water flow. They can also record trouble conditions, such as loss of power or internal failures of alarm-initiating devices (smoke detectors, for example) or wiring. This information can be used to determine where the fire was first noted and at what time.
More complex systems can include multiple reporting data points relative to the type of system employed. For example, very early smoke detection apparatus (VESDA) systems, which use a powered vacuum pump to sample atmospheric conditions at multiple points within a facility, can monitor background particulate levels at various diameters. The systems can report how much and what type of particulate matter was in the air before the fire began, as well as during its incipient growth phase. This provides invaluable information for origin and cause analyses and determinations.
Origin and Cause Investigations
During a fire scene investigation, the investigator needs to focus not only on the fire-damaged area when identifying potential equipment failures, but also on the overall structure and property. The investigator should look at why the fire spread the way it did and how that relates not only to the origin and cause but also to the detection and suppression systems in place.
Since fire investigators are generally the first forensic investigators at a loss, their initial photographic documentation can be extremely important. Securing the loss site is also essential to preserving evidence and mitigating the potential for spoliation. Accordingly, having an investigator on site as soon as possible plays a major role in completing these initial tasks.
Information regarding the activation of a building fire alarm system can also be obtained through neighborhood canvasses and interviews with tenants or employees. Once a system is activated, audio and visual notifications are made to those in the area by way of strobes, tone alerts, and/or verbal directions, depending on the system setup. Automatic extinguishing systems with water flow alarms also provide audio notification both inside and outside a structure. During interviews, investigators should find out what occupants or neighbors heard or observed before or during the incident. Many times, people will take photographs or videos with their phones, which can document the activation or function of the system and aid in establishing the timeline of events.
Subrogation Considerations
While fire protection systems are usually not the direct origin or cause of a fire, should a system fail to activate correctly, there is potential for liability against the system manufacturer or the contractor for negligence or product defects. Each U.S. state has different rules regarding such subrogation. The cause of action against any contractor or manufacturer is likely secondary, meaning the system did not cause the fire; it merely failed and allowed the fire to spread, causing additional damages. Such actions are often governed by the superior equities rule.
For example, in subrogation litigation in California, the doctrine of superior equities is critical in determining whether a right of subrogation exists.2 This restrictive principle prevents an insurer from recovering against a party whose equities are equal or superior to those of the insurer. The doctrine is usually invoked where the insurer is seeking to subrogate against third parties who are not other insurers, and cases against parties who were not the original tortfeasor can be subject to it.
A legal analysis of each set of circumstances is likely needed to determine whether a right of subrogation exists. If it does, an origin and cause investigator must document the system and provide an opinion on the fire’s spread and the additional damages incurred due to the system’s failure. If the system functioned properly, there is likely no cause of action against the fire protection system.
Fire Protection Engineering
When a fire protection system fails or a fire loss occurs, a fire protection engineer can amplify and refine data from the detection, suppression, and activation systems. Once the time stamp information is correlated, detection system data can be used to determine how large the fire likely was when detected and what type of material was likely burning. Activation and control system timing and reporting points also become useful for determining when the extinguishing system was activated. Design criteria can be cross-correlated with the probable effects of the system dampening the fire, or failing to do so.
Detection and alarm system data can also be used by fire protection engineers for modeling purposes. Along with structural parameters, such as wall, ceiling, and floor construction, various configurations and timeframes for known system activations versus hypothetical conditions can be analyzed via computational modeling. This aids the scientific method process of fire analysis and can help reduce error and eliminate unwarranted hypotheses from consideration, refining conclusions of causality.
Thank you to co-author Jordan Everakes, Partner at Grotefeld Hoffmann.
1 Ahrens, M., U.S. Experience with Sprinklers, NFPA, July 2017, NFPA #USS14-REV.
2 Jones v. Aetna Casualty & Surety Co. (1994) 26 Cal.App.4th 1717, 1724; Rokeby–Johnson v. Aquatronics International, Inc. (1984) 159 Cal.App.3d 1076, 1084.