Semantic Kernel: A bridge between large language models and your code

April 17, 2023

At first glance, building a large language model (LLM) like GPT-4 into your code might seem simple. The API is a single REST call, taking in text and returning a response based on the input. But in practice, things get much more complicated than that. The API is perhaps better thought of as a domain boundary, where you deliver prompts that define the format the model uses for its output. And that's the critical point: LLMs can be as simple or as complex as you want them to be.
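That "single REST call" can be sketched in a few lines. This is a minimal illustration, assuming the shape of the OpenAI chat-completions endpoint; the URL, model name, and response layout here are assumptions for demonstration, and the actual HTTP POST (with an `Authorization` header) is deliberately omitted so the sketch stays self-contained.

```python
import json

# Assumed endpoint and model name, for illustration only.
API_URL = "https://api.openai.com/v1/chat/completions"

def build_request(prompt: str, model: str = "gpt-4") -> dict:
    """Package plain text into the JSON body the API expects."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

def extract_text(response_body: str) -> str:
    """Pull the model's reply text out of the JSON response."""
    data = json.loads(response_body)
    return data["choices"][0]["message"]["content"]

# A response body in the assumed format:
sample = '{"choices": [{"message": {"role": "assistant", "content": "Hello!"}}]}'
print(extract_text(sample))  # prints: Hello!
```

The simplicity is real at the transport level; the complexity lives entirely in what text you put into `build_request` and how you interpret what comes back.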

When we integrate an AI model into our code, we’re crossing a boundary between two different ways of computing, much as programming a quantum computer resembles designing hardware. Here we’re writing descriptions of how a model is intended to behave, expecting its output to be in the format defined by our prompts. As in quantum computing, where constructs like the Q# language provide a bridge between conventional computing and quantum computers, we need tools to wrap LLM APIs and provide ways to manage inputs and outputs, ensuring that the models remain focused on the prompts we’ve defined and that outputs remain relevant.
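The wrapping pattern described above can be sketched as follows. This is not Semantic Kernel's actual API; it is a hypothetical illustration of the idea, using a prompt template to fix the input format and a validation step to keep the output within the format the prompt defined, with a stubbed model so the sketch runs without a live API.

```python
import json
from typing import Callable

def make_skill(template: str, call_model: Callable[[str], str]) -> Callable[[str], dict]:
    """Wrap a raw LLM call so inputs are templated and outputs are checked."""
    def skill(user_input: str) -> dict:
        prompt = template.format(input=user_input)
        raw = call_model(prompt)
        try:
            # Enforce the JSON format the prompt asked the model for.
            return json.loads(raw)
        except json.JSONDecodeError:
            return {"error": "model output was not valid JSON", "raw": raw}
    return skill

# Stub standing in for the real LLM call, so the example is runnable.
def fake_model(prompt: str) -> str:
    return json.dumps({"summary": prompt[:40]})

summarize = make_skill(
    "Summarize the following as JSON with a 'summary' key: {input}",
    fake_model,
)
print(summarize("LLMs cross a boundary between ways of computing."))
```

The point of the wrapper is that the rest of the program only ever sees validated, structured data: the fuzzy text-in, text-out boundary is contained inside `skill`.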

InfoWorld 