Visual applications
In addition to absorbing unstructured text, documents, ideas, and approaches for fulfilling business objectives, foundation models can also generate images, videos, and digital objects. Pairing this content with textual applications transforms the user experience by simplifying what users must do to complete tasks, so that the model “does it for you,” commented Jeff Kaplan, SkyView Innovations CEO. These visual applications improve application design, workflows, and process automation when, for example, someone digitizes a form that contains images via the method Galal articulated. There are also solutions employing foundation models so “you give it a partial image, and have it generate the background or pan and zoom,” Galal noted.
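One way to implement the partial-image scenario Galal described is an inpainting or outpainting pipeline. The sketch below is a minimal illustration using the open source diffusers library and a Stable Diffusion inpainting model; the file names and prompt are placeholders, and this is only one possible tool, not necessarily the one the speakers had in mind.

```python
# A minimal sketch of "give it a partial image and have it generate the
# background," using the Hugging Face diffusers inpainting pipeline as one
# possible implementation.
from diffusers import StableDiffusionInpaintPipeline
from PIL import Image

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-inpainting"
)

# "partial.png" holds the known content; white pixels in "mask.png" mark the
# region the model should fill in (both file names are placeholders).
image = Image.open("partial.png").convert("RGB").resize((512, 512))
mask = Image.open("mask.png").convert("RGB").resize((512, 512))

result = pipe(
    prompt="extend the scene with a matching background",
    image=image,
    mask_image=mask,
).images[0]
result.save("completed.png")
```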
Users can ask ad hoc questions and get responses in natural language, visualizations, or both. “You say: ‘Show me the budget numbers; now, put that through a chart; no, I want a line chart, not a bar chart; maybe a pie chart’s better,’ and it just does it,” Martin disclosed. Generative AI techniques can even create digital twins of entire business systems, operational settings, and training environments, which perhaps epitomize the notion of revamping the user experience through digital transformation.
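To make the interaction Martin described concrete, one common pattern is to have the model translate a user's request into a small, structured chart specification that the application then renders. The sketch below assumes such a spec format and renders it with matplotlib; the spec fields, the sample budget figures, and the overall design are illustrative assumptions, not a description of any specific product.

```python
# A minimal sketch of the "describe the chart you want" pattern: an LLM
# (not shown here) would translate the user's request into a small chart
# spec; the application then renders whatever chart type the user settled on.
import matplotlib.pyplot as plt

def render_chart(spec: dict) -> None:
    """Render a chart from a simple spec such as an LLM might emit."""
    fig, ax = plt.subplots()
    labels, values = spec["labels"], spec["values"]
    if spec["type"] == "line":
        ax.plot(labels, values)
    elif spec["type"] == "bar":
        ax.bar(labels, values)
    elif spec["type"] == "pie":
        ax.pie(values, labels=labels)
    ax.set_title(spec.get("title", ""))
    plt.show()

# "Show me the budget numbers ... maybe a pie chart's better"
budget_spec = {
    "type": "pie",
    "labels": ["R&D", "Marketing", "Operations"],
    "values": [40, 25, 35],
    "title": "Budget by department (illustrative data)",
}
render_chart(budget_spec)
```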
With these digital replications of real-time production and data systems, “things like training and simulations and supply chain optimization could be enormous,” Kaplan said. The visual accuracy with which digital twins depict real-world systems for such use cases is impressive. “There is an AI technology that allows you to take a video and then bring it into Unreal Engine, which is the gaming platform that powers Fortnite and things like that, and now you have a full digital twin of your environment,” Kaplan divulged.
Data privacy
Perhaps the foremost concern organizations have about employing LLMs, ChatGPT, and other iterations of foundation models is the issue of privacy. According to Kaplan, it’s not always apparent how to deploy these models so companies can successfully “adopt them to your business processes and meet your security needs, your data needs, your privacy, your authentication.” Browning referenced a scorecard from Stanford University’s Human-Centered Artificial Intelligence website “showing how current foundation models like OpenAI’s GPT-4 and Meta’s LLaMA fail in the current draft of the European Union’s Artificial Intelligence Act. They are being judged on criteria like data sources and governance, copyrighted data, energy consumption, and risk and mitigations.” The copyrighted data issue is particularly prominent. Users accessing foundation models risk exposing their data to the models and to the builders who train or fine-tune them, potentially enabling competitors who access those same models to profit from that data. Additional privacy considerations for accessing foundation models through public APIs pertain to regulatory compliance, data sovereignty, and personally identifiable information (PII) exposure.
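A common, if partial, mitigation for the PII-exposure risk is to redact obvious identifiers before a prompt ever reaches a publicly hosted model. The sketch below is a minimal illustration using regular expressions; the patterns shown are assumptions for demonstration and fall well short of a complete PII-detection strategy.

```python
import re

# Illustrative-only patterns; real PII detection needs far more than regexes.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\b(?:\+?1[-. ]?)?\(?\d{3}\)?[-. ]?\d{3}[-. ]?\d{4}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_pii(text: str) -> str:
    """Replace matched identifiers with labeled placeholders before the text
    is sent to an externally hosted foundation model."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text

prompt = "Summarize the complaint from jane.doe@example.com, phone 555-867-5309."
safe_prompt = redact_pii(prompt)
print(safe_prompt)
# The redacted prompt, not the original, would then be sent to the public API.
```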