
One Year In, Reflecting on the Wave of GenAI Transformation, Past and Future

by AJ Shankar

Reflecting on the past year, it’s remarkable to consider the advancements in the field of generative AI, particularly through large language model interfaces like ChatGPT. Launched just one year ago, ChatGPT marked an inflection point in generative AI, spawned a wave of rapid innovation, and set the trajectory for transformations in technology, in the practice of law, and in society as a whole that will continue to play out for years to come.

A true leap forward in generative AI technology, ChatGPT and a host of similar tools can perform tasks that would have seemed impossible just two years ago, whether drafting a speech, writing computer code, or passing the bar exam. These tools are powered by LLMs, or large language models, a type of neural network that can read and write in natural languages, such as English. Trained on billions of documents and refined with human guidance, these LLMs are competent to varying degrees in four key areas: fluency, creativity, knowledge, and logical reasoning. 

But we did not get here overnight. This journey, while seemingly sudden, is rooted in a rich history of AI development that has brought us to a pivotal moment in technological evolution. So while today we reflect on a few key milestones achieved over the past year, it’s important to keep in mind that this technology traces back more than 50 years, to early advances in machine learning and neural networks.

ChatGPT’s Release Marks a Milestone in Public Adoption and Accessibility

OpenAI released ChatGPT on November 30, 2022. Earlier GPT models had already been surprisingly successful at some generative tasks. GPT-2 marked a roughly tenfold increase in the model’s size and power over its predecessor, allowing it to respond to questions, summarize texts, and even translate – though it performed best with short, simple tasks. The subsequent improvement between GPT-2 and GPT-3 was astounding and occurred in an incredibly short amount of time.

With the launch of GPT-3.5 one year ago and its impressive ability to both understand and generate natural language, OpenAI truly created a firestorm. The emergent properties found in earlier models – for example, the ability to demonstrate skills that the models had not been explicitly trained on – were on display in a way not seen before.

One of the most impactful decisions behind ChatGPT’s release was to make it available to the general public, at no cost and with an interface that was easy for anyone to use. In five days, OpenAI’s chatbot had over 1 million users, the fastest consumer adoption growth ever recorded at the time. To put that in perspective, it took previous consumer applications months to years to reach that same level of adoption – ChatGPT reached the million-user milestone over 200 times faster than Netflix and 30 times faster than Zoom.

While this user growth is impressive, the real benefit of this strategy was that anyone could access the technology and experience its potential for themselves. The myriad potential impacts were not lost on the legal profession, either.

As we heard at Everlaw Summit ’23, within weeks nearly every major law firm had convened a panel or working group to investigate how GenAI could impact their practices – a mobilization around technology that is unprecedented in the legal profession.

I think it's fair to say that had OpenAI not made ChatGPT generally available to the public, we would not have seen such quick recognition of its potential. It created an industry.

AI Competition Quickly Heats Up

While ChatGPT was one of the first LLM-powered chatbots to capture the public eye, it is just one example of generative AI and the power of LLMs. GPT is part of a broader ecosystem, including Anthropic’s Claude and Google’s Bard. These tools exemplify the power of LLMs and the competition between them has helped fuel exceptionally fast development in this area.

Tracing back to the earliest GPT models, we’ve witnessed exponential growth in this domain. Each iteration, from GPT-1 to the rumored 1.76 trillion parameters of GPT-4, marks a significant advancement in AI’s capabilities. This trajectory underscores the importance of understanding AI’s roots and its rapid development – and brings us to our second milestone: an infusion of capital into AI research that has helped spur intense competition in the field.


Following the success of ChatGPT’s launch, many major technology companies began investing heavily in LLM technology. Microsoft has reportedly invested more than $13 billion in OpenAI, pledging $10 billion of that in the past year alone. On February 3, 2023, Google announced its partnership with Anthropic, another AI company focused on LLMs, in a deal that mirrored Microsoft’s relationship with OpenAI. 

To put this in context, the U.S. government spent approximately $20 billion annually, adjusted for inflation, on the Apollo program during the height of the Space Race. Microsoft alone is expected to spend over $50 billion a year on generative AI, according to one research firm, which describes this moment as “the largest-scale investment humanity has ever made in a new technological frontier.”

Major platforms soon released core LLM tools in quick succession. Google’s Bard, Bing’s GenAI search engine, GitHub Copilot, Microsoft Copilot, and even Salesforce’s Einstein GPT are just some of the many tools announced or significantly expanded in the early months of 2023. As the range of LLM applications has grown, so has the breadth of use cases they serve.

The competition has been instrumental in driving these advancements. And, in the end, it is the users who benefit.

This can be seen, for example, in the rapid growth of the context window – the amount of information a user can provide to a generative AI chatbot in a single prompt. Over the past year, that limit has grown from several short paragraphs to hundreds of pages of text.

Even more impressively, these large language models can now accept and generate images and audio, explaining, for example, the humor behind a comic strip or narrating a script with human-like speech.


To be clear, these advancements were not simple. They represent massive increases in the amount of data that can be handled by generative AI and require fundamental improvements to how these tools can scale. The breakneck speed at which these tools are developing is possible because of advances in every layer, from improvements in the GPU chips relied on for training neural networks, to the architecture these models are built on.

Meanwhile, the rest of the world gets to build atop these billion-dollar investments and foundational improvements. Pair these developments with enterprise-level applications that provide far greater security and control over user data, and these tools are better positioned than ever to meet the needs of professional use cases.

The landscape of models is becoming increasingly robust, which is a good thing for practitioners everywhere. That’s one of the reasons that at Everlaw, our implementation of generative AI tools is model agnostic and built with flexibility from the get-go, allowing our users to apply the model best suited to a particular task and to benefit from the strongest AI technology as it emerges.
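To make the idea of model-agnostic design concrete, here is a minimal sketch in Python of one common pattern: application code depends only on a shared interface, and each model provider sits behind its own adapter. All names here are hypothetical, and the adapters return canned text rather than calling real vendor APIs; this is an illustration of the general approach, not Everlaw’s actual implementation.

```python
from abc import ABC, abstractmethod


class LLMProvider(ABC):
    """Common interface that every model provider adapter implements."""

    @abstractmethod
    def complete(self, prompt: str) -> str:
        """Send a prompt to the underlying model and return its text response."""


class ProviderA(LLMProvider):
    """Hypothetical adapter; in practice this would wrap one vendor's API."""

    def complete(self, prompt: str) -> str:
        return f"[provider A response to: {prompt[:40]}...]"


class ProviderB(LLMProvider):
    """Hypothetical adapter for a second vendor, behind the same interface."""

    def complete(self, prompt: str) -> str:
        return f"[provider B response to: {prompt[:40]}...]"


def summarize_document(provider: LLMProvider, document_text: str) -> str:
    """Application code depends only on the interface, not on any one model."""
    prompt = f"Summarize the following document:\n\n{document_text}"
    return provider.complete(prompt)


if __name__ == "__main__":
    # Swapping the underlying model is a one-line change at the call site
    # (or a configuration change), with no edits to the application logic.
    print(summarize_document(ProviderA(), "Contract dated January 5..."))
    print(summarize_document(ProviderB(), "Contract dated January 5..."))
```

The value of this kind of separation is that as stronger or cheaper models emerge, they can be adopted by adding a new adapter rather than rewriting the features built on top of them.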

As mentioned above, the potential impact of LLMs has not been lost on the legal profession. Nearly every firm, government agency, and corporate legal department that I have spoken to in the last year is investigating how they might leverage generative AI in their work and how their profession may be impacted overall. 

Indeed, in a midyear survey of more than 245 legal professionals conducted by Everlaw, the International Legal Technology Association (ILTA), and the Association of Certified E-Discovery Specialists (ACEDS), 72% of respondents said the industry is not yet ready for the impacts of generative AI. Forty percent were already using or planning to use GenAI tools anyway. The split reflects a profession on the cusp of massive change, with two in five respondents recognizing the need to get ahead of generative AI before they fall behind.

At Everlaw, we announced our own entry into generative AI and the law in July, with the launch of EverlawAI Assistant. Currently available in beta, EverlawAI Assistant enables legal professionals to leverage the power of LLMs to provide insights at the document and narrative level in ways that promise to transform the litigation and investigations process. 

Integrated into two key workflows – the core review window, where doc-by-doc analysis, tagging, and redaction occur, and Storybuilder, our post-review drafting and narrative-building platform – and supported with direct citations to source documents, EverlawAI Assistant has already had a significant impact on law firm workflows.

With generative AI, attorneys can now “compress into a single sentence the kind of multiyear development that occurs with an attorney that allows him or her to figure out whether a document is, in fact, very useful and also why,” explains Gordon Calhoun, Partner and Chair of the Electronic Discovery, Information Management & Compliance Practice at Lewis Brisbois.

In piloting EverlawAI Assistant, Lewis Brisbois saw immediate impacts on core legal work, reducing the time needed for critical tasks – such as summarizing key documents and preparing for depositions, mediations, dispositive motions, and trials – by at least 50%.

“The benefits of the generative AI that Everlaw has made available are enormous and should have an immediate impact. The boon is that the more mundane, more time-consuming, and often frustrating aspects of document analysis are now going to be the realm of generative AI. This frees up attorneys to do what we’ve always aspired to do, which is to do good by serving clients.”

Applying generative AI to the practice of law requires a deep understanding of both. 

The law, simply, is different. 

The stakes are high and the questions that need to be answered are precise. Similarly, the capabilities of LLMs are unique; features that leverage them must be designed, as much as possible, to lean into their strengths and avoid their weaknesses.

At Everlaw, our approach is guided by our core generative AI principles of control, confidence, transparency, privacy, and security. These principles are the building blocks supporting our central approach to all product development: building for the long term.

Leveraging Generative AI in 2024 and Beyond

Of course, as I write this, the makeup of the AI development landscape continues to shift. But the fundamental power of LLM technology remains impressive – and untied to any one provider. Conscientiously and responsibly built tools like EverlawAI are designed to take advantage of the best GenAI technology wherever it lives.

We are only at the beginning.

There is no question in my mind that generative AI is here to stay. As the past year has shown, limitations associated with GenAI may be significant, but they are falling one by one. The advancements we’ve seen in such a short period of time are unparalleled.

For legal professionals today, the question is not whether generative AI will transform your practice, but how.

The potential for artificial intelligence in the legal industry is enormous. But its application needs to be responsible, deliberate, and well thought out.

Finding a partner who deeply understands both the needs of the profession and the capabilities of these tools is key to a successful transformation.

It’s exciting to look back at the past year and see the rate of innovation that has been achieved so quickly. The years ahead promise even greater breakthroughs. We hope you join us for that journey.