prompty vnext with agentic support #216

The following has been added + corrected in the current Python runtime:

Merge of sample section into inputs

The runtime now treats bare input values like these as samples

inputs:
  firstName: Seth
  lastName: Juarez

and expands them into this at runtime:

inputs:
  firstName:
    type: string
    sample: Seth
  lastName:
    type: string
    sample: Juarez

This makes the sample section effectively deprecated. If this section is used, the runtime will produce warnings but
still adhere to the input type checking described below.
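For reference, a legacy prompty file with a standalone sample section (sketched below; the exact shape of older files may vary) still parses, but the runtime emits a deprecation warning and folds the values into the corresponding inputs:

inputs:
  firstName:
    type: string
  lastName:
    type: string

# deprecated: values here now trigger a warning and are merged into inputs
sample:
  firstName: Seth
  lastName: Juarez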

Input Type Checking

Added advanced type checking for inputs. Sample items are no longer treated as defaults, and execution will result
in a runtime error if input values without defaults are not provided. Previously, items that were not passed into the
runtime silently fell back to their sample values, which caused a number of subtle bugs.
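As a rough sketch of the new behavior (the default key below is an assumption based on the description above, not confirmed spec), an input that declares a type but no default must now be supplied by the caller, while an input with a default can be omitted:

inputs:
  firstName:
    type: string      # no default: omitting firstName at execution time now raises a runtime error
  lastName:
    type: string
    default: Juarez   # assumed syntax: used only when the caller does not supply lastName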

Additional Role Params

Previously, prompty role boundaries could only generate { "role": "value", "content": "value" } items as part of the messages
array. This proved to be a bit limiting, so the following addition was made to the prompty template spec:

system:
You are a helpful assistant

user[name="Seth"]:
What is the meaning of life?

This has the effect of producing:

[{ "role": "system", "content": "You are a helpful assistant" },
 { "role": "user", "name": "Seth", "content": "What is the meaning of life?"}]

In general, any arbitrary key/value pair can be added to the role definition and will be included in the output when
parsed. It is up to the execution invoker, however, to determine what to do with those values. The current
Azure OpenAI invoker discards anything other than role, content, and name (with one exception; more on
that below).
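For example (the cache attribute here is purely hypothetical, chosen only to illustrate an arbitrary key), a role boundary such as

system[cache="ephemeral"]:
You are a helpful assistant

would parse into { "role": "system", "cache": "ephemeral", "content": "You are a helpful assistant" }; the current Azure OpenAI invoker would then drop the cache key before calling the service.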

Strict Mode

The template section has been updated to include a strict mode, and the type property has been renamed to format:

template:
  format: jinja2
  parser: prompty
  strict: True

Under strict mode, only role sections included in the prompt are valid during execution. This means that callers cannot
pass their own system: or user: messages as part of any input. strict defaults to False; setting it to True will throw
an exception if an attempt to inject user-created roles is made.

The way the runtime protects against this is by injecting a nonce, auto-generated per execution, into each message
boundary before the template is rendered. This allows the executor to check for the presence of the valid nonce after
the render has occurred. In general, this mechanism is used by invoker providers to ensure consistency and avoid
message injection attacks. strict mode will be made the default execution mode sometime soon.
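As an illustrative sketch (the question input and its value are hypothetical), a prompty file with strict mode enabled looks like this:

---
template:
  format: jinja2
  parser: prompty
  strict: True
inputs:
  question:
    type: string
---
system:
You are a helpful assistant

user:
{{question}}

If the rendered value of {{question}} contains its own role boundary (for example a line starting with system:), the nonce check described above fails and the executor throws rather than emitting an extra message.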

Tools Section

As tools become more commonplace, the prompty runtime will now accept a tools section both in the prompty
frontmatter as well as in the prompt section. This allows for the creation of static and dynamic tools available
to the runtime, should a provider wish to use them. In the next several weeks, the runtime will use default function tools
the same way function calling is currently supported in the parameters section. More on this soon.
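A rough sketch of what a static tools section in the frontmatter might look like (the get_weather tool is hypothetical and the schema below follows the common function-tool shape; the exact layout accepted by the runtime may differ):

tools:
  - type: function
    function:
      name: get_weather            # hypothetical tool, for illustration only
      description: Look up the current weather for a city
      parameters:
        type: object
        properties:
          city:
            type: string
        required:
          - city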

Inline Images

Added full path checking for inline images, per request.
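For context, inline images are referenced from the prompt body with markdown image syntax; in a sketch like the one below (the file path is illustrative), the runtime now checks the full resolved path before the message is built:

user:
Can you describe what is shown in this picture?
![product photo](./images/product.png)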
