We analyze an injurer’s incentives to improve her information about accident risk. In contrast to the preceding literature, injurers can continuously improve their understanding of the expected harm their activity will impose on others. Regarding social incentives, the marginal benefit from improved risk information is increasing, possibly making either no understanding or a perfect understanding of risk socially optimal. Turning to private incentives when the injurer’s asset constraint is non-binding, strict liability induces the first-best outcome, whereas the negligence rule induces excessive information acquisition. By contrast, when the injurer’s asset constraint is binding, the injurer’s incentives to acquire information about risk are, under both liability rules, too small in many circumstances but can also be excessive in others.
• Analysis of incentives to improve risk information when improvement is continuous.
• Marginal social benefit from improved risk information is increasing.
• With non-binding asset constraint, strict liability induces the first-best outcome.
• With non-binding asset constraint, negligence induces excessive information acquisition.
• With potential judgment proofness, information acquisition incentives often too small.
Rarely does a book - let alone one on torts - come along with true staying power. 'Tort law and the construction of change' is such a book. It stopped me in my tracks when I first read it, and it has been a book to which I have returned again and again while teaching torts and probing new research projects. With 'Tort law and the construction of change', Professors Kenneth Abraham and G. Edward White, who have inspired generations of torts students and scholars, have truly energized and inspired this nearly 20-year veteran in the field.
The concept of distributed moral responsibility (DMR) has a long history. When it is understood as being entirely reducible to the sum of (some) human, individual and already morally loaded actions, then the allocation of DMR, and hence of praise and reward or blame and punishment, may be pragmatically difficult, but not conceptually problematic. However, in distributed environments, it is increasingly possible that a network of agents, some human, some artificial (e.g. a program) and some hybrid (e.g. a group of people working as a team thanks to a software platform), may cause distributed moral actions (DMAs). These are morally good or evil (i.e. morally loaded) actions caused by local interactions that are in themselves neither good nor evil (morally neutral). In this article, I analyse DMRs that are due to DMAs, and argue in favour of the allocation, by default and overridably, of full moral responsibility (faultless responsibility) to all the nodes/agents in the network causally relevant for bringing about the DMA in question, independently of intentionality. The mechanism proposed is inspired by, and adapts, three concepts: back propagation from network theory, strict liability from jurisprudence and common knowledge from epistemic logic. This article is part of the themed issue 'The ethical impact of data science'.
This essay proposes a way of dealing with the strict liability of Internet sellers of other manufacturers' products, such as Amazon under its 'Fulfillment by Amazon' program. I discuss and reject two approaches to the problem that have been proposed by the courts, and advance a view according to which the relevant inquiry is whether Internet intermediaries such as Amazon could have prevented a defective product from reaching the US market. This view accounts in a satisfactory manner for the notion of responsibility that is at the core of US strict products liability law, and avoids the pitfalls of alternative policies. However, since this view also entails a de facto quasi-immunity from lawsuits for Internet intermediaries in many cases, safeguards to that quasi-immunity are also addressed. While the essay focuses on US law, the principles and policies under discussion should be applicable in other jurisdictions as well.
AI-driven vehicles and other artificial intelligence (AI) systems may cause serious injury to people while operating independently. Besides vehicles, progress may be seen in the use of autonomous weapon systems, AI in medicine, and care robots. It seems that soon AI systems will increasingly be making decisions previously made by humans. A Swedish inquiry argued that existing criminal law rules on responsibility are not suitable for automated vehicles (when in self-driving mode): the human in the driver’s seat would not be blamed if an accident occurs. Conversely, the Proposal for a Regulation on Artificial Intelligence places an emphasis, to some extent, on oversight by human beings. A battle for the hearts and minds of people might be underway here. It seems that further exploration of the matter is warranted, especially through the criminal law lens—are proposals where the human user is absolved of blame viable at this point in time?
This article compares the views of Grotius and subsequent authors on the doctrines of necessity and strict liability. This comparison takes place at two levels. On the one hand, there is a comparison of the views of Grotius with those of Pufendorf, Smith, Kant and recent Kantian authors. On the other hand, there is a comparison between the doctrines of necessity and strict liability. This exercise leads to the conclusion that strict liability does not have to be a mere matter of choice opted for by positive law, but in some instances can also be thought of as a requirement of a private law framework expressing the fundamental moral equal freedom of man.