What is generative artificial intelligence?
Generative AI refers to algorithms (you may be familiar with ChatGPT, Gemini, and others) that can create new content based on what they have learned from existing data. Outputs include images, video, audio, code, text, product designs, and more.
What are Large Language Models?
Large Language Models (LLMs) are a type of generative AI that scale to millions - or even billions - of parameters.
Large Language Model AI for the Enterprise
What you need to know
What do you need to know about Large Language Models?
They are not all created equal.
Public LLM
Like ChatGPT, Gemini, and others you may be familiar with, public LLMs are trained on publicly available data and language, and are widely and freely available to the public.

PROS:
- Quick time to value – a very low barrier to entry
- Ability to enable self-serve through prompt design (see the sketch that follows this comparison)
- Vast public knowledge

CONS:
- Very large models can be costly and slow – as the model continues to grow, downstream costs in time and resources grow with it
- Can't continuously learn from feedback
- Accuracy ceiling on critical tasks – public LLMs are trained on public data, not truth

Finetune LLM
These are LLMs that have been trained on customer-specific data, giving a high degree of specificity and accuracy. When deployed in tandem with a Domain LLM, which is trained on domain-specific data - for example, a contact center's conversations and tasks - these LLMs are very useful and powerful.

PROS:
- Pick the right size for the right task
- Continuously learn from feedback data, making the model ever more bespoke and tuned to your particular business needs
- Reach high accuracy for critical tasks

CONS:
- Require effort to train
- Difficult to self-serve
- Must be deployed per use case
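As a concrete illustration of the "self-serve through prompt design" advantage above, here is a minimal sketch of pointing a public LLM at a contact-center task with nothing but a prompt. It assumes the official OpenAI Python client and an API key in the environment; the model name, system prompt, and transcript are illustrative choices, not part of any Cresta product.

```python
# Minimal sketch of "self-serve through prompt design" with a public LLM.
# Assumes the official OpenAI Python client (`pip install openai`) and an
# OPENAI_API_KEY in the environment; model, prompt, and data are illustrative.
from openai import OpenAI

client = OpenAI()

transcript = (
    "Customer: My flight was cancelled and I need to rebook for tomorrow.\n"
    "Agent: I'm sorry about that - let me look at available flights."
)

# Prompt design alone steers a general-purpose model toward a contact-center
# task; no training is required, which is why the barrier to entry is so low.
response = client.chat.completions.create(
    model="gpt-4o",  # any hosted public LLM would do
    messages=[
        {
            "role": "system",
            "content": "You are a contact-center assistant. Summarize the "
                       "conversation and suggest the agent's next step.",
        },
        {"role": "user", "content": transcript},
    ],
)
print(response.choices[0].message.content)
```

The finetune path, by contrast, replaces the generic prompt with a model trained on your own conversations and feedback, which is where the higher accuracy ceiling on critical tasks comes from.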
Here's a simple way to think about it...

A car engine is an underlying technology that powers a wide variety of vehicles. This is analogous to the Generative AI model: GPT.

A consumer car is a general purpose vehicle and very useful for solving a broad range of problems. This is analogous to a Public LLM interface like ChatGPT.

An F1 race car uses the same underlying technology as a consumer car, but is custom-built to solve a narrow set of difficult technical problems like aerodynamics, power/weight ratio, and fast tire changes. This is analogous to the Finetune LLM used by Cresta.
Cresta is a trailblazing leader in our field
A deep bench of expertise and talent:
Cresta was spun out of Stanford's AI Lab with the help of lab director and Google X creator Sebastian Thrun. Cresta co-founder Tim Shi contributed to early research at OpenAI.
Early in-house generative models:
Cresta was one of the first companies to deploy an in-house generative model similar to GPT, powering our Chat Agent Assist. Cresta was the first to develop product features like Suggested Responses and Smart Compose in Agent Assist, with a wholly unique generative modeling approach.
59% of contact centers worldwide go at least partially remote with the onset of the pandemic.
Contact center interaction volumes surge, stressing the importance of time-saving automations for remote agents.
Cresta develops and patents conversational AI response generation in response to these market needs.
First generative models deployed in production for contact centers - in the era before OpenAI’s GPT-3.
Cresta establishes itself as an AI innovation leader with Action-Directed GPT-2 and patented audio analysis for real-time response generation.
These new technologies are leveraged to launch Agent Assist for Voice and Virtual Agent.
Innovative conversational AI response generation to help contact centers deal with pandemic-era surges in interaction volume.
Pioneering GPT applications pave the way for product innovation.
Rapid pace of innovation on new generative AI capabilities.
Cresta develops patent-pending tools to take action based on conversational context.
Cresta introduces generative AI-powered features in QA, Topic Discovery, Conversation Flow Modeling, and Auto Summarization and Note Taking
Cresta is a generative AI leader and innovator.
AI pioneer Ping Wu (cofounder of Google Contact Center AI) becomes CEO of Cresta
Cresta introduces generative AI-powered features in Outcome Insights and Virtual Agent
Cresta launches Ocean-1, the world’s first foundation model for the contact center
Cresta talks the talk - and walks the walk
GPT-4 technology is deployed today in many of Cresta’s existing features. For example...
Generative Virtual Agent
Topic Discovery
Conversation Flow Modeling
Natural-language Guidance Management
Conversation Simulator & Bot QA
Generative AI Bot Widget
Knowledge Assist
Smart Compose & Suggestions
Auto Note Taking
Auto Summarization
Real-Time Transcription
Behavioral & Compliance Hints
Topic Flow & Discovery
Auto QA
Outcome Insights
Emotion & Sentiment Detection
Triggers and Actions (Opera)
Performance Insights
Behavioral Discovery
These capabilities support the contact center before the call, during the call, and after the call.
Bespoke AI products, not just an LLM
Cresta's platform consists of fully fledged, mature, thoughtful products designed around generative AI models.
Our tailored language model starts with a foundation model like GPT-4 and then learns continuously from feedback data (an illustrative sketch of this kind of feedback loop follows below).
We handle all private data with the highest security standards.
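Cresta's actual training pipeline is proprietary, so the snippet below is only a generic sketch of what "learning continuously from feedback data" can look like, using the hosted OpenAI fine-tuning API as a stand-in. The file name, example records, and base model are illustrative assumptions, not Cresta's implementation.

```python
# Generic sketch of a "learn from feedback" loop, NOT Cresta's proprietary
# pipeline. Uses the hosted OpenAI fine-tuning API as a stand-in; file names,
# example data, and the base model are illustrative assumptions.
import json
from openai import OpenAI

client = OpenAI()

# 1. Collect feedback data: suggested replies that agents actually accepted.
accepted_examples = [
    {
        "messages": [
            {"role": "system", "content": "Suggest the agent's next reply."},
            {"role": "user", "content": "Customer: My order arrived damaged."},
            {"role": "assistant", "content": "I'm sorry to hear that - I can "
             "send a replacement right away or issue a full refund."},
        ]
    },
    # ... in practice, thousands of curated examples from real conversations
]

# 2. Write the examples in the JSONL format the fine-tuning API expects.
with open("accepted_agent_responses.jsonl", "w") as f:
    for example in accepted_examples:
        f.write(json.dumps(example) + "\n")

# 3. Upload the file and start a fine-tuning job on top of a base model.
training_file = client.files.create(
    file=open("accepted_agent_responses.jsonl", "rb"),
    purpose="fine-tune",
)
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-4o-mini-2024-07-18",  # illustrative fine-tunable base model
)
print(f"Started fine-tuning job {job.id}; repeat as new feedback accumulates.")
```

Re-running a loop like this as new feedback accumulates is what makes the resulting model increasingly bespoke to a particular business.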
Harness the power
Cresta’s intuitive platform lets you immediately start using generative AI to solve real problems in real time.
With a dedicated integration and delivery team and customer success support to build custom models, you’ll get the results you want and the data-driven insights you need.
REQUEST A DEMO