Legal scholarship: Is it high-impact? Should Unjournal evaluate it? Call for participation/scoping
By david_reinstein, Dylan R. @ 2024-12-19T22:44 (+8)
This was written in consultation with Dylan (JD, BCL), and after a few conversations with two legal scholars and others in this field.
The Unjournal (unjournal.org, see our ‘in a nutshell’) focuses on impactful research in quantitative social science and economics – see a discussion of our focus in our Gitbook here; see our output at unjournal.pubpub.org.
We have not yet considered legal scholarship: our team is not familiar with it, our funding is limited, and we would need to adjust our evaluation model and approaches. But we think this could be a strong expansion opportunity. Indeed, even though we didn't ask for it, some people have suggested relevant work in this area. We assess this opportunity below and consider some ‘crux’ questions that will determine its success or failure.
We're looking for lawyers, law students, researchers, and practitioners in relevant areas to help us explore, plan, and pilot an approach. There's a CtA below, linking to this expression of interest form.
The case to expand into legal scholarship
Legal research has a tangible impact, and visible public evaluations can have an influence
Legal research seems to have a concrete impact on global-priority issues, such as animal welfare and AI safety regulations. Legal scholarship doesn’t only inform prioritization and strategy; it directly influences how legislation is written and how courts make decisions.[1] Legal scholarship also shapes students' syllabi and reading, influencing their thinking and practice. Students who encounter these ideas may become lawyers and argue them in court (yielding court-made law). They may write essays or blog posts on these ideas that are read by people in power. They may become policy analysts and plant the seed of an idea in a lawmaker’s mind.[2]
Litigation and jurisprudence are adversarial processes that value authoritative expert opinions and logical counter-arguments. Judges read legal scholarship and cite it in their decisions. Rigorous evaluation and feedback from The Unjournal may strengthen and promote legal research in impactful areas, making this more likely.
Where legal research is cited, lawyers and judges can also use Unjournal evaluations to reinforce their case or demonstrate weaknesses on the other side. If The Unjournal's evaluations are visible, credible, and well-written, they may be directly taken up and cited in future decisions.
A 'gap in the market': lack of peer review
In North America, there is a lack of substantive expert review for the most prestigious and actively-cited research. In fact, the top-ranked law journals are not really peer-reviewed. They are run by law students who have only studied law for about two years, with little to no active research experience. There is some guidance from one or more faculty members, but students make most of the filtering decisions.
According to an Assistant Professor of Law, paraphrased:
There are basically two submission windows per year: February and August. You can submit to 50-100 journals at the same time through a general portal, with a small cost per submission. Papers are assigned to the submissions team at each journal (law students), which must go through large piles of papers. They may read the abstract, maybe the introduction. If they like it, they may read further.
Typically, you get an "offer" from a lower-ranked journal, which you can use to get higher-ranked journals to consider making you an offer. The "offer" is key: after you accept an offer from a journal, there may be some further suggestions, but these are generally optional; a paper that gets (and accepts) an offer will nearly always be published in that journal.
At some of the very top journals, once a paper is prioritized and close to getting an offer, the journal will send it out to experts in the relevant field before the offer is finalized. But this is not done consistently or transparently, and it's not clear how common it is. And most of the work is rejected (perhaps inappropriately) by the student editors before it even reaches this stage.
In response to the criticisms of student-edited law reviews, some peer-reviewed legal journals have emerged. These are typically managed by faculty or professional organizations (e.g., Journal of Legal Studies or Law & Society Review). But these don’t seem to be perceived as the top-ranked/highest-status outlets.
This suggests that credible expert evaluation of legal research could provide a light in this relative darkness. If we can get actual legal scholars to publicly assess this research, this could provide a valuable benchmark: a more informative, sophisticated, higher-prestige standard for judging legal scholarship, and a more useful career metric. The research with the strongest public evaluations may be more heavily cited and used in legislation and case law.[3] By providing this outlet, we would also have a platform to nudge the field towards a greater focus on high-impact areas and approaches.[4]
Cruxes for this project to succeed
Is legal scholarship likely to be impactful?
There’s lots of legal scholarship that we suspect is high impact. Sometimes, a piece of scholarship is cited in a major court decision, and the way the justice cites it suggests it was a defining factor. For example, the right to privacy was first articulated in an 1890 article by Samuel Warren and Louis Brandeis. That article has since been cited many times in major decisions: for example, in Griswold v. Connecticut (1965), which held that states cannot outlaw the use of contraception by married couples, because of their right to marital privacy.
Some resources seem likely to point towards high-impact legal research:
- Legal priorities research: A research agenda
- Institute for Law and AI research
- (Preliminary and Unverified) Impactful Legal Research Ideas (Julian Guidote: covers AI, catastrophic risk, and biosecurity)
- Animal welfare: potentially Studies in Global Animal Law (Peters, 2020), and work coming from the Cambridge Centre for Animal Rights Law and the Harvard Animal Law & Policy Program.
Here are some specific examples:
- Liability for artificial intelligence and other emerging digital technologies
- Tort Law as a Tool for Mitigating Catastrophic Risk from Artificial Intelligence (Weil, 2024)
Do relevant questions persist over time?
Our evaluation process and dissemination are somewhat slow relative to the needs in some areas (although our timeline may be similar to that of standard law reviews).[5] E.g., we may need about six months to prioritize a paper, find evaluators, receive and manage their evaluations, give the authors a chance to respond, and synthesize the results. Will this timeline be too long for our evaluation to still be influential?
Are there ‘big works of legal scholarship’ that have a persistent influence and a longer shelf-life of relevance?
Are there ‘things that can be evaluated meaningfully in a legible way’?
Legal scholarship is not generally assessed using real-world evidence, data, experimentation, statistics, or the scientific method (as is much of the work The Unjournal covers). According to a law professor we spoke to, there may be less of a "ground truth" in legal scholarship than in economics or other areas. There are also different schools of legal thought (originalism, textualism, etc.), and we would need to take steps to ensure that our evaluations don't merely echo ideological disagreements.
But, we still suspect that meaningful evaluations are possible.
To illustrate, law articles can be contrasted with their peers and with existing legislation. Since law is a relatively contained system, a meaningful evaluation looks at how well one author’s idea stands out from the crowd. When an idea stands out, it generally falls into one of three cases:
- The idea rests on a mistaken premise. If most experts can readily see the error, the piece should receive a poor evaluation;
- The idea is completely novel, so its evaluation depends on its logical consistency and robustness against ‘soft’ rules (e.g., culture or expectations); or,
- It calls for overturning previous ideas because something meaningful has changed in the real world (e.g., technological developments), so its evaluation depends on how well it characterizes the issue and the robustness of the solutions that go with it.
To give an example, Weil (2024) claims that some AI logically falls under the category of ‘abnormally dangerous’ tech, and that classifying it as such is desirable and logical. This is a claim about legal definitions and the logical consequence of legal precedents; evaluators can consider the correctness and consistency of each of these (~1-2 above). It’s also an empirical claim about the potential harms from AI; evaluators can consider whether this is an accurate characterization, and whether the stated implications are reasonable.
The above is merely one perspective (Dylan's). There’s more to talk about here, and we would love to hear input (including through the CtA below).
Will people engage?
Will legal scholars participate? Will they join our team?
We need:
- Expertise and credibility
- People to suggest work to evaluate, and/or authors to submit their own research
- People to help us prioritize among these
- Legal scholars to help do these evaluations (with compensation). Will law professors do these, anonymously or signed? Note that they are used to having students do most of this work, so it may be a big ask, even if we offer our typical ~$450 compensation.
- (Ideally) authors to respond to the evaluations.
To make this happen, paraphrasing the Asst. Prof. of Law, "people would have to be convinced that anyone cares." This professor suggested it would make a big difference if we could convince top law journals to consider our evaluations in their screening process (mentioned above) and to note this publicly.[6] Given the large volume of submitted work they need to consider, they may find our evaluations and suggestions helpful.
Can we gain credibility in this space?
Crucial: Finding a qualified, motivated legal scholar willing to provide expertise, credibility, and networks, and take on some leadership (with potential compensation/funding).
Call for participants and involvement
What We’re Looking For
We want to (1) systematically source and prioritize research for evaluation, based on its impact and relevance, and (2) have that research systematically evaluated for its credibility, usefulness, and general quality. We’re looking for people to help us consider and set up a process, an approach, and some criteria and metrics. Our current approach is mainly focused on ~empirical quantitative economics research; this will surely need some adjustments for the legal scholarship context.
Evaluating the expected impact of legal research may be challenging. The timeliness of the subject matter may make it easier to predict whether a judge or legislator will read a piece, but what they will do with it is harder to predict. The impact of past research gives us some ideas, but we need more.
So, we need help figuring out:
I. Where and how to identify relevant legal research to consider for evaluation
II. How to prioritize this – i.e., for a given piece of legal research, how to evaluate its potential for impact to help us know whether to prioritize it for evaluation;
III. How to find, choose, communicate with, reward, and manage the work of potential expert evaluators (aka ‘referees’)
IV. How to ask these evaluators to assess, discuss, and rate the research (see the illustrative sketch after this list), considering, e.g.,
- Its overall credibility and quality, potentially including comparisons to existing work and to existing measures like journal tiers
- Aspects of its quality, e.g.,
- Logical consistency, completeness, reasoning transparency
- Communication
- Understanding and incorporation of previous work
- Accurate and informed depiction of the real-world context and policy issues
- Adherence to law and doctrine.
V. How to promote and communicate these evaluations to maximize their impact and strengthen our initiative.
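
To make item IV more concrete, here is a minimal sketch (in Python, purely for illustration) of how a structured evaluation form for legal scholarship might look. The specific field names, the 0-100 scale, and the use of 90% credible intervals are our assumptions, loosely modeled on the quantitative metrics we already use for economics research; participants in this pilot would be expected to reshape these criteria substantially.

```python
# Purely illustrative: one possible shape for a structured legal-scholarship
# evaluation form, adapting the criteria in item IV above. The field names,
# the 0-100 scale, and the "midpoint plus 90% credible interval" format are
# assumptions, not a finalized rubric.
from dataclasses import dataclass


@dataclass
class Rating:
    midpoint: int   # evaluator's best guess, on a 0-100 scale
    ci_lower: int   # lower bound of a 90% credible interval
    ci_upper: int   # upper bound of a 90% credible interval

    def __post_init__(self) -> None:
        if not (0 <= self.ci_lower <= self.midpoint <= self.ci_upper <= 100):
            raise ValueError("require 0 <= lower <= midpoint <= upper <= 100")


@dataclass
class LegalEvaluation:
    paper_title: str
    overall_credibility_and_quality: Rating
    logical_consistency_and_transparency: Rating
    communication: Rating
    engagement_with_prior_work: Rating
    real_world_and_policy_accuracy: Rating
    adherence_to_law_and_doctrine: Rating
    comparable_journal_tier: str = ""   # rough comparison to existing outlets
    written_comments: str = ""          # the substantive, discursive review


# Example usage with placeholder values (not a real evaluation):
example = LegalEvaluation(
    paper_title="Hypothetical article on AI liability",
    overall_credibility_and_quality=Rating(75, 60, 88),
    logical_consistency_and_transparency=Rating(80, 65, 92),
    communication=Rating(70, 55, 85),
    engagement_with_prior_work=Rating(72, 58, 86),
    real_world_and_policy_accuracy=Rating(68, 50, 82),
    adherence_to_law_and_doctrine=Rating(78, 62, 90),
    comparable_journal_tier="roughly comparable to a strong specialty journal",
)
```

For legal scholarship, the discursive written comments would likely carry most of the weight; the structured ratings would mainly serve to make evaluations comparable across papers and evaluators.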
Following up/CtA
If this is something that you think is important, and you have the bandwidth to contribute ~4 hours of your time to work with ~3 other colleagues on it in early 2025, then we encourage you to fill out this expression of interest form.
We’ll aim to get back to all selected candidates within a few weeks. We will be able to provide some compensation as an honorarium (but it will likely be modest, given our current funding constraints).
If you have any outstanding questions, feel free to reach out at contact@unjournal.org.
- ^
It also has an indirect impact, through the arguments of advocacy groups.
- ^
As Lord Reed, a revered judge in the common law, once put it: "Keynes famously observed that practical men, who believe themselves to be quite exempt from any intellectual influence, are usually the slaves of some defunct economist. Academic analysis of the kind carried out in this book has an influence on the development of the law whether its practitioners are consciously aware of it or not."
- ^
As already noted, the evaluations themselves may also be cited.
- ^
According to a law professor I (David Reinstein) spoke to at a recent EAG, legal scholarship underemphasizes practical scholarship in general. Paraphrasing them: the scholarship is dichotomized into case analysis vs. theoretical analysis (with some under-appreciated comparative law analysis); however, the middle ‘practical’ scholarship isn't being written.
There's a need for research prioritization in the legal scholarship and X-risk space; this is topical and timely.
I suspect this is probably also true in the animal-welfare law space.
- ^
Although their submission-to-decision turnaround time may be faster, research is generally submitted in batches only twice per year.
- ^
While our ultimate aim is for evaluation to replace academic journals, this could be an important step in the right direction.