
The user experience reimagined, thanks to AI


Generating prompts with prompts

The truly transformational power of foundation models is not in how rapidly they can generate taxonomies, data models, business rules, or other knowledge management mainstays for humans to curate. It’s in how businesses employ them to generate prompts that generate other prompts. This capacity goes beyond even chain-of-thought prompting in that one can “create workflows where a prompt creates another prompt, and it goes off and does the work,” Martin explained. “You state a higher-level goal to do something, and the LLM will generate instructions for a worker bee to go off and do some part of a task.”

This notion involves frameworks like LangChain (python.langchain.com/docs/get_started/introduction.html) and is akin to bots creating other bots to do a job. For example, users might need to obtain sentiment analysis for a particular financial market. They can specify that prompt, and the language models will devise other prompts, resulting in discrete actions (sources to use, access via an API, metrics for sentiment, time frame, etc.) to accomplish the larger task. “People are using this to do whole financial strategies to run business ideas,” Martin mentioned.
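To make the pattern concrete, here is a minimal sketch of one prompt generating worker prompts. It is not the exact workflow Martin describes, nor the LangChain API; it assumes the OpenAI Python SDK, and the model name, prompt wording, and the ask() and run_goal() helpers are illustrative only.

# A planner prompt asks the model to write prompts for "worker bee" calls,
# then each generated prompt is executed as its own call.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(prompt: str) -> str:
    """One chat-completion call; the model name is an assumption."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def run_goal(goal: str) -> list[str]:
    # 1. Generate the worker prompts from the higher-level goal.
    plan = ask(
        "Break this goal into a numbered list of self-contained prompts, "
        "one per sub-task, that another LLM call can execute on its own:\n"
        f"{goal}"
    )
    # 2. Run each generated prompt as its own worker call.
    worker_prompts = [
        line.split(".", 1)[1].strip()
        for line in plan.splitlines()
        if line.strip()[:1].isdigit() and "." in line
    ]
    return [ask(p) for p in worker_prompts]

# Example, echoing the market-sentiment scenario above:
# results = run_goal("Assess current sentiment for the semiconductor market, "
#                    "listing sources, sentiment metrics, and time frame.")

In practice, frameworks such as LangChain wrap this planner-and-worker loop in reusable chains and agents rather than hand-rolled parsing.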

Visual applications

In addition to absorbing unstructured text, documents, ideas, and approaches for fulfilling business objectives, foundation models can also generate images, videos, and digital objects. The user experience is transformed when this content is paired with textual applications to simplify requirements for completing tasks so that the model “does it for you,” commented Jeff Kaplan, SkyView Innovations CEO. These visual applications improve application design, workflows, and process automation when someone digitizes a form that contains images, for example, via the method Galal articulated. There are also solutions employing foundation models so “you give it a partial image, and have it generate the background or pan and zoom,” Galal noted.

Users can ask ad hoc questions and get responses in natural language, visualizations, or both. “You say: ‘Show me the budget numbers; now, put that through a chart; no, I want a line chart, not a bar chart; maybe a pie chart’s better,’ and it just does it,” Martin disclosed. It’s even possible for generative AI techniques to create entire business systems, operational settings, and training environments with digital twins, which perhaps epitomize the notion of revamping the user experience with digital transformation.
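As a rough illustration of that conversational charting, the sketch below sends only the user’s request to an LLM to pick a chart type and then renders the data locally. It assumes pandas and matplotlib, reuses the hypothetical ask() helper from the earlier sketch, and the budget figures are invented for illustration.

# The LLM chooses only the presentation; the data itself stays local.
import pandas as pd
import matplotlib.pyplot as plt

budget = pd.DataFrame(
    {"quarter": ["Q1", "Q2", "Q3", "Q4"], "spend": [1.2, 1.4, 1.1, 1.6]}
).set_index("quarter")

def chart_from_request(request: str) -> None:
    # Ask the model to name a chart type, constrained to known options.
    kind = ask(
        "Reply with exactly one word - line, bar, or pie - naming the chart "
        f"type that best fits this request: {request!r}"
    ).strip().lower()
    if kind not in {"line", "bar", "pie"}:
        kind = "bar"  # fall back to a safe default
    budget.plot(kind=kind, y="spend", legend=False)
    plt.title(request)
    plt.show()

# chart_from_request("Show me the budget numbers as a line chart")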

With these digital replications of real-time production and data systems, “things like training and simulations and supply chain optimization could be enormous,” Kaplan said. The visual accuracy with which digital twins match their real-world counterparts in such use cases is impressive. “There is an AI technology that allows you to take a video and then bring it into Unreal Engine, which is the gaming platform that powers Fortnite and things like that, and now you have a full digital twin of your environment,” Kaplan divulged.

Data privacy

Perhaps the foremost concern organizations have about employing LLMs, ChatGPT, and other iterations of foundation models is the issue of privacy. According to Kaplan, it’s not always apparent how to deploy these models so companies can successfully “adopt them to your business processes and meet your security needs, your data needs, your privacy, your authentication.” Browning referenced a scorecard from Stanford University’s Human-Centered Artificial Intelligence website “showing how current foundation models like OpenAI’s GPT-4 and Meta’s LLaMA fail in the current draft of the European Union’s Artificial Intelligence Act. They are being judged on criteria like data sources and governance, copyrighted data, energy consumption, and risk and mitigations.”

The copyrighted data issue is particularly prominent. Users accessing foundation models risk exposing their data to the models and to the builders who train or fine-tune on it, and they risk enabling competitors to profit from that data when those competitors access the same models. Additional privacy considerations for accessing foundation models through public APIs pertain to regulatory compliance, data sovereignty, and personally identifiable information (PII) exposure.
