
A Practical Guide for IT Leaders on Implementing Generative AI

Updated: Apr 18



Generative AI has rapidly emerged as a potentially transformative technology for enterprises. Powered by large language models trained on vast datasets, generative AI systems like ChatGPT can understand and generate human-like text, images, code, and more, simply from a natural language prompt.


As an IT leader, you're likely intrigued by its potential to drive productivity and innovation but also have legitimate concerns about security, governance, and implementation. This practical guide will help you get started on your generative AI journey.


Understanding the Generative AI Landscape

At its core, generative AI refers to AI systems that can generate new content like text, images, videos, or code from just a natural language prompt or input. This is different from traditional machine learning models trained on curated, labeled datasets for specific tasks like computer vision or speech recognition.


The key innovation enabling generative AI is large language models (LLMs): neural networks trained on massive text datasets scraped from the internet, books, academic papers, and other sources to understand and generate natural language. While OpenAI's ChatGPT kicked off the generative AI frenzy, it's just one application of large language models. These same models can be adapted to generate images, code, videos, and more by being exposed to those data types during training.


Addressing Data Privacy and LLM Ownership

A top concern for IT leaders is the risk of sensitive data like customer records, financial models, formulas, or other proprietary information finding its way into these large language models during use and being leaked or misused. This legitimate concern has held back many enterprises from adopting generative AI.
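One lightweight safeguard while governance policies mature is to redact obviously sensitive values from prompts before they leave your network. The sketch below is minimal and illustrative (the patterns and function name are assumptions, not an exhaustive PII solution):

```python
import re

# Illustrative patterns for masking sensitive values in outbound prompts.
# A production filter would cover far more cases (names, IDs, secrets, etc.).
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),  # card-number-like digit runs
}

def redact_prompt(prompt: str) -> str:
    """Replace each match with a bracketed placeholder before the prompt
    is sent to any external LLM endpoint."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt
```

A filter like this can sit in a thin proxy in front of whichever LLM endpoints your policy permits, giving a single enforcement point.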


There are two main considerations here: data privacy and ownership of fine-tuned LLMs. Fine-tuning is a process whereby a foundation (sometimes called base) LLM is modified with new data provided by the enterprise. The fine-tuned model potentially encodes proprietary data.
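For concreteness, OpenAI's fine-tuning API ingests training examples as chat-formatted JSON Lines, one JSON object per line with a "messages" list. A minimal sketch of preparing such data (the example records are invented):

```python
import json

# Invented training examples in the chat format used by OpenAI's
# fine-tuning API: each record holds a "messages" conversation.
examples = [
    {
        "messages": [
            {"role": "system", "content": "You answer internal IT policy questions."},
            {"role": "user", "content": "Can I use ChatGPT with customer data?"},
            {"role": "assistant", "content": "Only through the approved enterprise deployment."},
        ]
    },
]

def to_jsonl(records) -> str:
    """Serialize records as JSON Lines, the format fine-tuning jobs ingest."""
    return "\n".join(json.dumps(r) for r in records)
```

Note that anything placed in these records becomes part of the fine-tuned model's training signal, which is exactly why ownership of the resulting model matters.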


Commercial LLM providers like OpenAI are currently the most well-known offerings in the marketplace. OpenAI states, “We do not train on your business data (data from ChatGPT Team, ChatGPT Enterprise, or our API Platform)”; details can be found at https://openai.com/enterprise-privacy. In practice, this means that prompts sent to the LLM and the responses it returns will not be used by OpenAI to train its foundation LLMs. OpenAI provides the capability for enterprises to fine-tune its foundation LLMs and guarantees that those fine-tuned models are accessible only to the enterprise. Importantly, OpenAI owns those fine-tuned models.


A solution to the data privacy concern is to use a private commercial LLM. Cloud providers like Microsoft Azure and Google Cloud AI offer the ability to provision a secure virtual private cloud (VPC) with your choice of foundation LLM. Such an environment can be configured with strict access controls to ensure that data sent to and from the LLM during inference is not accessible outside the enterprise. The data privacy risk is transferred to the cloud provider; the ownership concern, however, is not addressed in this arrangement.

AWS offers a similar VPC solution and also offers open-source foundation LLMs, which are not owned by a commercial entity.


An alternative to a cloud solution is to host models internally on your own network. This is the classic on-premises solution. It addresses the data privacy risk but typically requires more DevOps effort and has a different cost structure than the cloud solution.

A number of startups now offer open-source LLMs and fine-tuning capabilities with the ability to be hosted in Azure or AWS VPCs. An example is Predibase LLC (https://predibase.com), which allows you to fine-tune an open-source LLM and then download the resulting model and host it outside of Predibase's infrastructure.


We are confident that the data privacy and LLM ownership concerns can be addressed today with existing offerings in the marketplace.


Getting Started with Generative AI

To begin your generative AI journey, a good first step is educating a small, innovative group of employees through in-depth training workshops. Cover core concepts like large language models, the generative AI landscape, security considerations, and hands-on use cases.


Let this seed group start identifying and rapidly prototyping generative AI use cases most relevant to your specific business. Common early use cases include:


  •  Generating marketing content like blogs, ads, social media posts
  •  Automating customer support emails/chat conversations
  •  Analyzing large volumes of customer feedback data
  •  Creating documentation, training guides, manuals
  •  Coding assistance to boost developer productivity
  •  Building chatbots and workflow automation tools
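To give a flavor of what an early prototype looks like, here is a minimal sketch of a prompt template for the customer-support use case; the template text, tone parameter, and function name are all illustrative assumptions:

```python
# Hypothetical prompt template for drafting customer-support replies.
# The wording and constraints are invented for illustration.
SUPPORT_TEMPLATE = (
    "You are a support agent for {company}. Reply to the customer message "
    "below in a {tone} tone, and do not promise refunds.\n\n"
    "Customer message:\n{message}"
)

def build_support_prompt(company: str, message: str, tone: str = "friendly") -> str:
    """Fill the template; the result would be sent to whichever LLM
    endpoint your governance policy permits."""
    return SUPPORT_TEMPLATE.format(company=company, tone=tone, message=message)
```

Even a prototype this small forces useful governance questions: which data may appear in the message, and which endpoint receives it.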


Workshops will help build internal generative AI expertise as you establish data governance policies on what information can and cannot be used with different generative AI models and scenarios.


Future Roadmap

While today's large language models are highly capable of understanding and generating natural language text, the field is evolving rapidly. Multimodal models that can seamlessly combine text, images, video, and speech in both inputs and outputs are already starting to impact organizations. Vision is one of my favorite examples: a technology with a vast range of potential use cases.


There is also an AI model arms race playing out, with rapid advances in model quality, size, and efficiency coming from commercial labs like OpenAI, Anthropic, and Google, as well as from open-source efforts. Building organizational competency with generative AI now allows you to take full advantage of these future developments.
