Topic Modeling the President with AI
Artificial Intelligence (AI), chiefly in the forms of machine learning, natural language processing, and computational topic modeling, is fueling the new generation of e-discovery and contract due diligence tools exploding on the legal market. But AI is also taking hold in my more wonky world of legal academia.
In Topic Modeling the President: Conventional and Computational Methods, recently published in the George Washington Law Review with co-authors John Nay and Jonathan Gilligan, we demonstrate how these tools can tap into large bodies of legal text to help reveal patterns and categories that might not be easily apparent to the human researcher’s eye. The (rather long) article abstract explains our project and the potential for using AI in legal studies:
Law is generally embodied in text, and lawyers have for centuries classified large bodies of legal text into distinct topics—that is, they “topic model” the law. But large bodies of legal documents present challenges for conventional topic modeling methods. The task of gathering, reviewing, coding, sorting, and assessing a body of tens of thousands of legal documents is a daunting proposition. Yet recent advances in computational text analytics, a subset of the field of “artificial intelligence,” are already gaining traction in legal practice settings such as e-discovery by leveraging the speed and capacity of computers to process enormous bodies of documents, and there is good reason to believe legal researchers can take advantage of these new methods as well. Differences between conventional and computational methods, however, suggest that computational text modeling has its own limitations. The two methods used in unison, therefore, could be a powerful research tool for legal scholars.
To explore and critically evaluate that potential, we assembled a large corpus of presidential documents to assess how computational topic modeling compares to conventional methods and evaluate how legal scholars can best make use of the computational methods. We focused on presidential “direct actions,” such as Executive orders, presidential memoranda, proclamations, and other exercises of authority the President can take alone, without congressional concurrence or agency involvement. Presidents have been issuing direct actions throughout the history of the republic, and although the actions have often been the target of criticism and controversy in the past, lately they have become a tinderbox of debate. Hence, although direct actions were long ignored by political scientists and legal scholars, there has recently been a surge of interest in their scope, content, and impact.
Legal and policy scholars modeling direct actions into substantive topic classifications thus far have not employed computational methods. To compare the results of their conventional modeling methods with the computational method, we generated computational topic models of all direct actions over time periods other scholars have studied using conventional methods, and did the same for a case study of environmental-policy direct actions. Our computational model of all direct actions closely matched one of the two comprehensive empirical models developed using conventional methods. By contrast, our environmental-case-study model differed markedly from the only empirical topic model of environmental-policy direct actions using conventional methods, revealing that the conventional methods model included trivial categories and omitted important alternative topics.
Provided a sufficiently large corpus of documents is used, our findings support the assessment that computational topic modeling can reveal important insights for legal scholars in designing and validating their topic models of legal text. To be sure, computational topic modeling used alone has its limitations, some of which are evident in our models, but when used along with conventional methods, it opens doors towards reaching more confident conclusions about how to conceptualize topics in law. Drawing from these results, we offer several use cases for computational topic modeling in legal research. At the front end, researchers can use the method to generate better and more complete topic-model hypotheses. At the back end, the method can effectively be used, as we did, to validate existing topic models. And at a meta-scale, the method opens windows to test and challenge conventional legal theory. Legal scholars can do all of these without “the machines,” but there is good reason to believe we can do it better with them in the toolkit.
Stop and Smell the Smartphones
By Micah Bradley
Do you love waking up to the smell of sizzling bacon? In 2014, Oscar Mayer held a sweepstakes for a device that could plug into an iPhone to emit the aroma of bacon as a morning alarm rang. Oscar Mayer received almost 150,000 applications for the few thousand diffusers, and the company even won “Most Creative Use of Technology” at the Shorty Awards for Social Media.
Though previous scent technologies had limited success, growing interest in aromatherapy products and in scent advertising for brick-and-mortar stores will likely lead to scent diffusion devices for smart phones, or even technological integration into phones themselves. These scents might be triggered by a user through apps for relaxation or by companies through scented advertisements or shopping websites. Some current ventures include oNotes, which connects to phones via Bluetooth and has Spotify-style scent playlists, and Scentee, which sells cartridges that emit scents from phones.
The rise of scent technology raises the question: can you trademark a scent? Though it is possible, reportedly only about ten scents had been trademarked as of three years ago. However, brands have shown an increasing interest in trademarking scents. For example, Verizon recently protected its stores’ “flowery musk scent.”
Trademarking scents is difficult. The scent must be both “nonfunctional and distinctive.” Ironically, if a product’s only purpose is the smell itself (as with a perfume), rather than helping to distinguish a brand, the scent is considered functional and cannot be trademarked. In addition, applying for the trademark presents practical difficulties, such as providing samples of the scent to a government examiner. As of now, Verizon would be able to puff out its protected “flowery musk scent” while other brands have no protection for scents they want consumers to associate with their brands.
Besides intellectual property, two other issues that may come with scent technology are tort and criminal claims. Texting obnoxious smells like farts could result in nuisance claims. Phones could also emit smoke or chemical smells, resulting in criminal or negligence charges.
These technologies are still emerging, and it may be several years before we see their full incorporation into phones or other devices. Clients should stay ahead of the curve, as Verizon has, and trademark their signature scents now.
The Biases Percolating Algorithms: Will AI Facilitate Disparity and Discrimination?
By Emily Lamm
Cryptically crafted and living behind the façade of technology, algorithms have escaped the standards we hold ourselves to. The allure of coding and quantum computing arouses a sense of intrigue and elevates the status of the underlying algorithms. Yet, this charm should not obscure the fact that the authority afforded to technology is constructed and highly sensitive to context. For instance, when a deep-learning neural network is introduced to an incongruous object––an elephant within a living room––pixels are crossed and previously detected objects are misidentified. These types of errors are not uncommon, but they do take on forms far more sinister than an elephant-triggered kerfuffle. High-profile examples include LinkedIn’s platform showing high-paying job ads to men more frequently than women, and law enforcement officials and judges relying upon patently racist AI-powered tools.
On one hand, the United States has developed a robust body of laws combating discrimination. The Equal Protection Clause of the Fourteenth Amendment and Title VII of the Civil Rights Act have been paramount, and the Americans with Disabilities Act of 1990 is considered an immense success in protecting individuals with qualifying disabilities. On the other hand, the United States has no such analogue to offer protection from algorithmic bias. In effect, algorithms––just one step removed from humans––have escaped the rule of law despite being a reflection (or manifestation) of the implicit values of the very humans who created them.
Now, just because there is no general legislation or regulatory scheme to control for algorithmic bias, doesn’t mean there won’t be soon. Other countries have filled this gap by implementing a data protection regime. In due time, perhaps with a change of administration, we will begin to see a drastically different approach to Artificial Intelligence. Although Americans have been rather lackadaisical about data privacy (often trading their Facebook information for a quiz predicting what their child will look like), they have been quick to advocate against discrimination. Just look to the sweeping nature of the civil, women’s, and LGBT rights movements. Accordingly, there are numerous initiatives––launched by the likes of Facebook, IBM, Google, and Amazon––researching algorithmic bias and announcing tools to bolster AI fairness.
Lawyers are also not immune from the mysterious nature of algorithms. Indeed, most litigators interface with them regularly. Every time we run a search in Lexis Advance or Westlaw, the results we see are the product of algorithms hard at work behind the scenes. Recently, Fastcase gave users the option to adjust its research algorithms through factors like relevancy and authoritativeness. Although this tool appears to have little influence upon the results generated, it is responsive to a growing demand for algorithmic accountability. Undoubtedly, lawyers today must embrace and implement technology in order to remain at the forefront of the industry. Nevertheless, lawyers must also continue to be skeptical, discerning, and autonomous thinkers who refuse to grow complacent with inadequate technology.
As the United States citizenry grows increasingly diverse, technology’s “black box” must begin to encompass an intersectional awareness that accounts for the vast array of identities its users embody. Ensuring that technology is implemented and monitored responsibly should be at the forefront of everyone’s mind. Whether it be lobbying for new legislation or updating corporate policies, the time is ripe to seriously consider the role of law in algorithmic bias.
45 Legal Practice Fields that Didn’t Exist 5 Years Ago (or Even Yesterday)
Each year in my Law 2050 class at Vanderbilt Law School, students identify an emerging technological, economic, environmental, or social trend and project it into the future to explore how it might generate law and policy issues needing lawyers’ attention. They write a blog post about it, then a client alert memo, then a bar journal article. They can choose any practice perspective defining who they and their clients are: private practice, government, plaintiffs, public interest, international, etc. The goal is to instill curiosity, entrepreneurship, and writing skills to put them “on the map” as they start out in practice. (For a great example of this exercise in scenario building for lawyers, check out Carolyn Elefant’s excellent ebook: 41 Legal Practice Areas that Didn’t Exist 15 Years Ago.)
I’ve been doing this for six years, and it has amazed me how many new themes come into the picture each year that weren’t on the radar screen the year before. Even the themes that have come up before have evolved so rapidly that they present entirely new dimensions to explore.
Below are this year’s themes—what an impressive list! I’ll bet you haven’t even heard of some of them. I’m really looking forward to reading my students’ bar journal articles to see where they take these:
- Electric scooters
- Malicious audio/video editing
- Social credit system & facial recognition
- CRISPR
- Predictive policing
- Advanced energy storage for wind & solar
- Private space exploration
- Dark web policing
- Social media influencers
- Genealogy technology & policing
- Emerging technology trade controls
- Space trash
- Algorithm bias
- Health insurer role in opioid crisis
- Cannabis law legal conflicts
- Augmented reality
- In vitro fertilization parental tech and parental rights
- Scent technology
- Implantable microchips
- Alexa and criminal enforcement
- Private artificial islands
- Radio frequency electric charging
- Geoengineering—solar radiation management & CO2 removal
- Non-bank fintech
- Biometric privacy
- E-sports industry
- Medical record blockchain
- Cultured meat regulation
- Emotional AI
- NCAA rules for high school pro drafts
- Initial coin offerings
- Data privacy regulation (GDPR)
- Smart microgrids
- Law firm insourcing of non-legal services
- Freebooting video content
- Artificial embryos
- Hyperloop
- 23&Me health testing
- Voice cloning issues
- Arctic circle transportation and minerals
- Alternative legal services providers and legal malpractice
- Blockchain and real estate titles
- Sport betting and machine learning
- Personal data sales and privacy regulation