OrrinSDK & Stellr API
Installation & Setup
OrrinSDK Version
v0.1.9
Stellr API Version
v1
Getting Started
The OrrinSDK also grants access to the Stellr API. You will want to install the SDK by running the following command in your terminal:
pip3 install orrin-sdk==0.1.9
It will also be helpful if you have the OrrinCLI installed. This allows for easier management and scaffolding of your projects. Install it by running the following command:
pip3 install orrin-cli==0.1.8.1
Creating a Project
To create a project with OrrinCLI, simply run the following in your terminal:
orrin projects <itp | iabp>
For context, itp stands for init-tool-project, and
iabp stands for init-app-backend-project. You can
use the shorthand versions, or the long ones.
Inherent Project Structure
When creating a project for an app backend or a tool, you will want to ensure you have a project.toml
in the root of the project. Below is an example outline of how your project should be structured (this is handled automatically if the project is initiated with OrrinCLI):
Project
├── main.py
├── project.toml
└── (optional) plugins.yaml
The optional plugins.yaml file is for tools. If your project
does not utilize the OrrinToolsSDK class from the SDK,
then you do not need to worry about this.
project.toml
This is where all metadata surrounding your project lives, such as the name and description. The contents of this file will differ depending on what you are building.
Project Metadata for OrrinAppsSDK
If you are building a backend for an app (which utilizes the OrrinAppsSDK class), your
project.toml file will look similar to the following:
[general]
project_type = "app"
name = "my-app"
desc = "Example Orrin app backend"
version = "1.0.0" # can be 1.0, 1.0.0, or 1.0.0.0
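The version rule in the comment above (two to four dot-separated numeric segments) can be checked with a small validator. This is a hypothetical helper for illustration, not part of OrrinSDK:

```python
import re

# Hypothetical helper (not part of OrrinSDK): a version is valid if it
# has 2 to 4 dot-separated numeric segments, e.g. "1.0" or "1.0.0.0".
VERSION_PATTERN = re.compile(r"^\d+(\.\d+){1,3}$")

def is_valid_version(version: str) -> bool:
    return bool(VERSION_PATTERN.match(version))

print(is_valid_version("1.0"))        # True
print(is_valid_version("1.0.0.0"))    # True
print(is_valid_version("1"))          # False (too few segments)
print(is_valid_version("1.0.0.0.0"))  # False (too many segments)
```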
Project Metadata for OrrinToolsSDK
If you are building a tool (which utilizes the OrrinToolsSDK class), your
project.toml file will look similar to the following:
[general]
project_type="tool"
name="a-tool"
desc="A tool"
version="1.0.0" # can go up to 4 segments (e.g. 1.0.0.0)
[general.nuances]
tool="An action that is available to be called upon"
[general.icons]
24x24="/Path/to/24x24/icon"
32x32="/Path/to/32x32/icon"
48x48="/Path/to/48x48/icon"
64x64="/Path/to/64x64/icon"
[plugins]
supported=true # or false
config="plugins.yaml" # remove if above is false
[metadata]
# any metadata you wish to have for the tool; can be anything
Note:
Only OrrinToolsSDK supports version control. Support for version control will be added to OrrinAppsSDK soon. When you upload a new version of a tool, it will be switched to a "PENDING" status thereby making it ineligible to be shown in the marketplace. However, if users had it activated prior to a new version being submitted, the tool will continue to work for them. All prior versions of a tool are archived, and can be revived upon a developer reaching out to support.
Creating Backend for App (OrrinAppsSDK)
Creating a backend for a standalone app that will be hosted at https://apps.stellr-company.com
is relatively straightforward.
An app backend provides functionality to the frontend. This functionality is unique; it is not just the standard
division of frontend and backend, as a backend built with OrrinAppsSDK
will expand beyond plain logic. Eventually, OrrinAppsSDK will
enable organic agentic workflows to occur based on activity within the app.
How it Works
When creating an app, you will need to first create the backend and upload it. The frontend of the app needs to be able to "attach" itself to a backend, thus the backend must exist first so that there is one for the frontend to attach to.
The backend is split up into individual "actions". Each action performs a separate
task. Each action can also take any sort of arbitrary payload, and lets you configure
that payload so there is no overlap between actions. You will create actions with the
action decorator.
Initiating OrrinAppsSDK
First, ensure you have a project.toml with the necessary
context. This file is required by the OrrinAppsSDK class, as it
will utilize it to assign appropriate metadata about the backend.
The name you provide in project.toml will subsequently
be the name of the overarching app you are creating, so ensure it is appropriate, adheres to Stellr's Terms & Conditions,
and that it clearly portrays the purpose of the app.
The OrrinAppsSDK class only requires one argument: developer_api.
If you don't already have an API key, you can apply to become a developer here. Once you have been accepted, you will
receive an email with your developer API key.
from orrinsdk import OrrinAppsSDK
orrin_sdk = OrrinAppsSDK(developer_api="<your_api_key>")
Creating Actions with action Decorator
Creating an action is very simple: add @orrin_sdk.action(...) on top
of the function that you want to map to the action you are creating.
The action decorator takes the following arguments:
action(
    self,
    name: str,
    required_payload: list = [],
    extra_metadata: dict = {}
)
It is important to note that the name of the action must match exactly the name of the function. Payloads to the action are also optional, and any additional metadata you would like to staple to an action can be put in the extra metadata dictionary. This dictionary can hold any sort of key/value pair.
See the below code for an idea on how to use the decorator to create an action:
@orrin_sdk.action(
    name="action_name",
    required_payload=[{"name": "data", "type": "dict"}]
)
def action_name(data):
    if not isinstance(data, dict):
        return { "status": 400, "message": "invalid_data" }
    # Do something and return a response.
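Since each required_payload entry pairs an argument name with a type string, the hand-written isinstance check can be factored into a reusable helper. The names below (TYPE_MAP, validate_payload) are hypothetical, not part of the SDK:

```python
# Hypothetical helper (not part of OrrinSDK): validate an incoming payload
# dict against a required_payload-style spec of {"name": ..., "type": ...} entries.
TYPE_MAP = {"str": str, "dict": dict, "int": int, "list": list, "bool": bool}

def validate_payload(payload: dict, required: list) -> list:
    """Return a list of problems; an empty list means the payload is valid."""
    problems = []
    for field in required:
        name = field["name"]
        if name not in payload:
            problems.append(f"missing field: {name}")
        elif not isinstance(payload[name], TYPE_MAP[field["type"]]):
            problems.append(f"{name} should be {field['type']}")
    return problems

print(validate_payload({"data": {}}, [{"name": "data", "type": "dict"}]))  # []
print(validate_payload({"data": 1}, [{"name": "data", "type": "dict"}]))   # ['data should be dict']
```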
Finalizing
The most crucial step is finalizing. It's relatively straightforward, but without it none of your actions
will get registered, and your entire backend will not get queued for review and acceptance, which is what enables your
app to go live. So, the last step is calling finalize:
orrin_sdk.finalize()
Creating a Tool (OrrinToolsSDK)
The ability to create custom tools that an AI model can utilize is becoming increasingly important. As AI models and agents continue to improve, the tools they have access to will need to grow substantially to keep pace.
OrrinSDK offers a unique way to inject these custom tools, and their custom schematics,
into SATA models with the OrrinAISDK. More on that later;
for now we will focus on creating tools, and in the next section we will cover plugins (which are effectively agents).
Initiating OrrinToolsSDK
First, ensure you have a project.toml with the necessary
context. This file is required by the OrrinToolsSDK class, as it
will utilize it to assign appropriate metadata about the tool (such as the name, version, description etc).
The name you provide in project.toml will subsequently
be the name of the overarching tool you are creating, so ensure it is appropriate, adheres to Stellr's Terms & Conditions,
and that it clearly portrays the purpose of the tool.
The OrrinToolsSDK class only requires one argument: developer_api.
If you don't already have an API key, you can apply to become a developer here. Once you have been accepted, you will
receive an email with your developer API key.
from orrinsdk import OrrinToolsSDK
orrin_sdk = OrrinToolsSDK(developer_api="<your_api_key>")
Creating Actions with action Decorator
A tool is nothing without its ability to perform an action - or multiple actions.
Creating an action is very simple: add @orrin_sdk.action(...) on top
of the function that you want to map to the action you are creating.
The action decorator takes the following arguments:
action(
    self,
    name: str,
    payload_schema: str,
    desc: str,
    extra_metadata: dict = {}
)
It is important to note that the name of the action must match exactly the name of the function.
Tool actions require a string-based payload schema, which will help the AI model understand how to use the action. This payload schema needs to be in pure JSON - no additional text or markdown should exist in this schema.
The description is a briefing of the action, which is also used to help the AI model understand when and how to use the action. Any additional metadata you would like to staple to an action can be put in the extra metadata dictionary. This dictionary can hold any sort of key/value pair.
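Since the schema must be pure JSON with no surrounding text or markdown, one quick sanity check before registering an action is to confirm the string parses cleanly. is_pure_json below is a hypothetical helper, not part of the SDK:

```python
import json

# Hypothetical helper (not part of OrrinSDK): a payload_schema string is
# acceptable only if it is pure JSON, i.e. json.loads succeeds on it.
def is_pure_json(schema: str) -> bool:
    try:
        json.loads(schema)
        return True
    except json.JSONDecodeError:
        return False

print(is_pure_json('{"data": "<data>"}'))            # True
print(is_pure_json('Here is the schema: {"a": 1}'))  # False
```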
See the below code for an idea on how to use the decorator to create an action:
@orrin_sdk.action(
    name="action_name",
    payload_schema="""{
        "data": "<data>"
    }""",
    desc="This action will take some sort of data, process it, perform research on it, and respond with extra contextual data over it"
)
def action_name(data):
    if not isinstance(data, dict):
        return { "status": 400, "message": "invalid_data" }
    # Do something and return a response.
It is good practice to check the type of data that comes through to your action, to thwart any sort of bugs or future issues/crashes from arising.
Finalizing
The most crucial step is finalizing. It's relatively straightforward, but without it none of your actions
will get registered, and your entire tool will not get queued for review and acceptance, which is what enables your
tool to go live. So, the last step is calling finalize:
orrin_sdk.finalize()
Plugins (Agents; OrrinToolsSDK)
Note:
Currently, plugins only support features that exist in Orrin. Stellr is actively working towards allowing external plugins to be registered.
A plugin is simply a bridge between an AI and an application. This bridge consistently feeds the AI with contextual data. This is where concepts such as "plugin entrypoints" come in - places where the data gets fed, processed, and actions are performed if needed.
Consequently, plugins are the core of the agentic abilities that will be created through the SDK. With plugins, and eventually the support for expansion beyond Stellr products, organic agentic workflows will emerge.
plugins.yaml
In order to create a plugin, you will need to have created a plugins.yaml file.
This file is where all configuration for your plugin(s) will live.
In plugins.yaml, you can configure any number
of plugins, so long as the plugin you are building for exists. As of the current revision, the
only plugin that exists is blink.
Below is an example of what the file should look like:
plugins:
  - for: blink
    type: external # external or internal
    name: build_website
    display_name: Build Website
    actions:
      - action: create_website
        helper: |
          Descriptive rundown on the action. Include any nuances about how the action should be used.
          Use this as an assertive manual for the AI.
        type: <request|function>
type, in regards to the overarching plugin configuration (not its actions),
indicates whether the actions performed are based on actions available within that plugin, or on
external factors (such as requests to third-party APIs). internal
implies that all actions performed on behalf of this plugin will be from its built-in actions. external
implies that all actions performed on behalf of this plugin will not use its built-in actions, and will instead refer to third-party
sources.
type, in regards to the plugin's actions, provides details on what the action will do. If the action
will send a request (GET or POST),
then type needs to be request.
If the action will invoke a function that exists in the codebase, then type needs to be function.
If type is request,
you will need to add the following:
request:
  type: <POST|GET>
  url: <url>
  payload_schema: |
    <your payload schema; JSON ONLY>
It is worth noting that if the type of request is GET, you
will not have to worry about adding payload_schema.
If type is function,
you will need to add the following:
function: <function_name>
payload_schema: |
  <your payload schema; JSON ONLY>
Ensure that the function name you provide exists in the source code. If you do not accept a payload
for the action, then simply leave out payload_schema.
Here is a full working example for a plugin that has an action that will send a GET request:
plugins:
  - for: blink
    type: external
    name: build_website
    display_name: Build Website
    actions:
      - action: create_website
        helper: |
          Recommend this action when the current, and historical, context leads to a conviction
          that creating a website makes sense. If this action is recommended, the <webpage_template>
          needs to be an elaborate explanation on what the webpage needs to:
          1. Look like
          2. Contain (content wise)
          3. The purpose of the webpage
          You can recommend this action if there is a document that the user has opened that
          you believe should be turned into a website. This can be due to the fact that the document
          entails context that is interesting as an idea, or would be better suited displayed
          as a webpage.
        type: request
        request:
          type: GET
          url: https://example_api.com/user
Here is a full working example for a plugin that will call a function:
plugins:
  - for: blink
    type: external
    name: build_website
    display_name: Build Website
    actions:
      - action: create_website
        helper: |
          Recommend this action when the current, and historical, context leads to a conviction
          that creating a website makes sense. If this action is recommended, the <webpage_template>
          needs to be an elaborate explanation on what the webpage needs to:
          1. Look like
          2. Contain (content wise)
          3. The purpose of the webpage
          You can recommend this action if there is a document that the user has opened that
          you believe should be turned into a website. This can be due to the fact that the document
          entails context that is interesting as an idea, or would be better suited displayed
          as a webpage.
        type: function
        function: a_function_name
        payload_schema: |
          { "data": "<data>" }
Once you have your plugins configured, you will need to ensure that the project.toml
is updated to reflect the fact your tool now has configured plugins:
[plugins]
supported=true
config="plugins.yaml"
Plugin Entrypoints
Plugins require an entrypoint, which is simply a function that will consistently receive some sort of (contextual) data. This entrypoint is where you will process the data, respond with actions to perform based on the data, or handoff the processing to Exegesis.
What is Exegesis?
Exegesis is Stellr's protocol/MCP "engine" that processes contextual data, learns intents, maps user actions to available tool/plugin actions, and enables context awareness to become relevant, thereby moving beyond chatbots.
Plugin Entrypoint for Blink
Plugins for features - or entire products - that are owned and operated by Stellr come with built-in
entrypoints. For Blink (currently, the only supported plugin) the entrypoint is blink_plugin_entry.
Below is an example of the entrypoint:
@orrin_sdk.plugin_entry(
    plugin='blink'
)
def blink_plugin_entry(cc: dict, all_context: str):
    if not isinstance(cc, dict):
        return {'status': 400, 'message': 'invalid_type'}
    if not isinstance(all_context, str):
        return {'status': 400, 'message': 'invalid_type'}
    # Here, you can decide to hand off determination of an action to Exegesis - which will
    # require `cc` and `all_context` as a payload - or you can force a recommended action tied to the plugin.
    # If handing off: return {'status': 200, 'handoff': True, 'cc': cc, 'all_context': all_context}
    # If not handing off, DO NOT have `handoff` in your response (unless it is explicitly set to False).
    # If not handing off, there is no explicit schematic for the response.
    return {'status': 200, 'handoff': True, 'cc': cc, 'all_context': all_context}
Though tedious, as an extra safety step you will need to ensure that the name of the function beneath
the plugin_entry decorator matches the expected entrypoint name for that plugin (for custom/external plugins, the entrypoint name will be <plugin_name>_entry).
The two if statements checking the types of the arguments are an absolute must-have. This ensures the data being received is accurate, thereby enabling you - or Exegesis - to process it accordingly.
Entrypoint Options
In the entrypoint, you have the option to process the data yourself, or "hand it off" to Exegesis. With this comes an important distinction in what is returned from the entrypoint.
In processing the data yourself in the entrypoint, you can return virtually anything. You will also be in charge of evaluating the response and performing any sort of additional actions based on the response. This option is less popular and extremely tedious.
If you hand off the contextual data to Exegesis (which is recommended), you will
need to return exactly what you see in the example code from the entrypoint. If you
set handoff to false,
then the response will be treated as a response from manually processing the data.
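The branching described above can be sketched as a plain function (decorator omitted; entrypoint and its third argument are illustrative names, not SDK code):

```python
# Sketch of the two entrypoint outcomes: hand off to Exegesis, or process
# the data manually (in which case `handoff` is left out of the response).
def entrypoint(cc, all_context, hand_off_to_exegesis: bool):
    if not isinstance(cc, dict) or not isinstance(all_context, str):
        return {"status": 400, "message": "invalid_type"}
    if hand_off_to_exegesis:
        # Exegesis requires `cc` and `all_context` in the payload.
        return {"status": 200, "handoff": True, "cc": cc, "all_context": all_context}
    # Manual processing: no explicit response schematic applies.
    return {"status": 200, "result": "processed locally"}

print(entrypoint({}, "ctx", True)["handoff"])  # True
```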
Plugin Actions
If you configured any of the actions for the plugin to call on a function, you will need to register
that function with plugin_action.
The plugin_action decorator takes the following arguments:
plugin_action(
    self,
    plugin: str,
    name: str
)
The function name does not need to match the action name. Name the function freely. Below is an example of registering a function to the according plugin action:
@orrin_sdk.plugin_action(
    plugin="blink",
    name="action_name"
)
def action_name(data):
    if not isinstance(data, dict):
        return { "status": 400, "message": "invalid_data" }
    # Do something and return a response.
In the above example, the implication is that the action action_name is of type function, and that the payload should be a dictionary.
Below is an example of what the plugins.yaml file would look like to configure this action:
- action: action_name
  helper: |
    Descriptive rundown on the action. Include any nuances about how the action should be used.
    Use this as an assertive manual for the AI.
  type: function
  function: action_name
  payload_schema: |
    { "data": "<data>" }
Reminder:
Invoke finalize at the end of your code to ensure all tool actions, plugin configurations,
and plugin actions get registered accordingly and queued for review.
Stellr API (StellrAPI)
The Stellr API has been made officially available via the SDK. It offers every method needed to perform API requests.
The StellrAPI class supports multiple methods; however, we will only cover one of them,
as it directly invokes the other methods in a more structured
manner.
The perform Method
This method is a direct invocation to every other method that exists in the StellrAPI class.
It depends on an action input and the corresponding payload for the request. Below is an overview of the arguments the perform
method takes:
perform(
    self,
    action: Actions,
    startCEPayload: StartCEConnection_Payload = None,
    getCEConnectionHistoryPayload: GetCEConnectionHistory_Payload = None,
    sendCEMessagePayload: SendCEMessage_Payload = None,
    executeRecommendedActionPayload: ExecuteRecommendedAction_Payload = None,
    executeToolPayload: ExecuteTool_Payload = None
)
StellrAPI.Actions
The Actions class contains all (currently) supported API requests
that can be made. Below is the class:
class Actions:
    StartCEConnection = '_start_CE_connection_'
    GetCEConnectionHistory = '_get_CE_connection_history_'
    SendCEMessage = '_send_CE_message_'
    ExecuteRecommendedAction = '_execute_recommended_action_'
    ExecuteTool = '_execute_tool_'
Depending on the action that you provide to the perform method, the corresponding payload argument will be expected.
For example, if you pass Actions.StartCEConnection to action,
the argument startCEPayload will be expected.
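That pairing can be written out explicitly. The mapping below simply restates the Actions values and perform arguments shown above:

```python
# Which perform() argument each Actions value expects, per the text above.
ACTION_TO_PAYLOAD_ARG = {
    "_start_CE_connection_": "startCEPayload",
    "_get_CE_connection_history_": "getCEConnectionHistoryPayload",
    "_send_CE_message_": "sendCEMessagePayload",
    "_execute_recommended_action_": "executeRecommendedActionPayload",
    "_execute_tool_": "executeToolPayload",
}

print(ACTION_TO_PAYLOAD_ARG["_start_CE_connection_"])  # startCEPayload
```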
Payload Classes
Below are the definitions of all supported payload classes to be used with the
perform method:
from typing import Any
from pydantic import BaseModel

class StartCEConnection_Payload(BaseModel):
    tool_id: str

class GetCEConnectionHistory_Payload(BaseModel):
    connection_id: str

class SendCEMessage_Payload(BaseModel):
    connection_id: str
    message: str

class ExecuteRecommendedAction_Payload(BaseModel):
    connection_id: str
    response_id: str

class ExecuteTool_Payload(BaseModel):
    tool_id: str
    action: str
    payload: Any
Example Requests
Starting New Exegesis CE (Centralized Execution) Connection
A big part of the API currently is the support for initiating Exegesis CE Connections.
These connections provide controlled access to tools: they take some form of input and determine
what action from a tool should be performed, if any. Exegesis CE Connections are the only way, other
than utilizing OrrinAISDK, for users to have any
level of access to a tool they do not own in a controlled, structured, simplistic manner.
Below is an example of utilizing the perform method
to initiate a new Exegesis CE Connection:
from orrinsdk import StellrAPI

api = StellrAPI(developer_api="<your_api_key>")

status_code, resp = api.perform(
    action=api.Actions.StartCEConnection,
    startCEPayload=api.StartCEConnection_Payload(tool_id='<tool_id>')
)
If successful, resp will store
the connection ID, which will be needed for performing actions such as SendCEMessage.
Sending Input to Exegesis CE Connection
Sending some sort of input to the connection is easy. Below is a complete example, from initiating the connection to sending input:
from orrinsdk import StellrAPI
import sys

api = StellrAPI(developer_api="<your_api_key>")

status_code, resp = api.perform(
    action=api.Actions.StartCEConnection,
    startCEPayload=api.StartCEConnection_Payload(tool_id='<tool_id>')
)

if status_code != 200:
    print(f"Something went wrong: {resp}")
    sys.exit(1)

status_code, resp2 = api.perform(
    action=api.Actions.SendCEMessage,
    sendCEMessagePayload=api.SendCEMessage_Payload(
        connection_id=resp,
        message="some form of input here"
    )
)

# Handle response
Note:
The only time resp will be a string is when
initiating a new Exegesis CE Connection. For all other requests, resp
will (or, at least should) be a JSON object.
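Given that, it can help to normalize responses by branching on the type of resp. handle_resp is a hypothetical helper, not part of the SDK:

```python
# Hypothetical helper (not part of the SDK): normalizes the two resp shapes
# described in the note - a connection-ID string for StartCEConnection,
# and a JSON object (dict) for every other request.
def handle_resp(status_code, resp):
    if status_code != 200:
        raise RuntimeError(f"request failed: {resp}")
    if isinstance(resp, str):
        return {"connection_id": resp}
    return resp

print(handle_resp(200, "conn-123"))  # {'connection_id': 'conn-123'}
```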
OrrinAI (OrrinAISDK)
In our attempt to make the custom tool (and eventually plugin/agent) schematics available
for SATA models to utilize, we created OrrinAISDK, which
injects the schematics into the model and handles any tool usage by the model.
It's effectively a win-win: you get to utilize the custom tools (and, eventually, plugins/agents) you created with your option of AI models, and your tools/actions are automatically handled by the SDK; no hassle on your end.
Available Models
OrrinAISDK supports most of the SATA models available today.
See the Models class definition below to see all
available models:
class Models(list, Enum):
    # Grok Models
    GROK_4_2_FNR = ["grok", "grok-4.20-0309-non-reasoning"]
    GROK_4_2_FR = ["grok", "grok-4.20-0309-reasoning"]
    GROK_4_1_FNR = ["grok", "grok-4-1-fast-non-reasoning"]
    GROK_4_1_FR = ["grok", "grok-4-1-fast-reasoning"]
    # OpenAI Models
    GPT_5_4 = ["gpt", "gpt-5.4"]
    GPT_5_4_MINI = ["gpt", "gpt-5.4-mini"]
    GPT_5_4_NANO = ["gpt", "gpt-5.4-nano"]
    GPT_5 = ["gpt", "gpt-5"]
    GPT_5_MINI = ["gpt", "gpt-5-mini"]
    GPT_5_NANO = ["gpt", "gpt-5-nano"]
    GPT_O3_DEEP_RESEARCH = ["gpt", "o3-deep-research"]
    GPT_O4_MINI_DEEP_RESEARCH = ["gpt", "o4-mini-deep-research"]
    # Claude
    CLAUDE_OPUS_4_6 = ["claude", "claude-opus-4-6"]
    CLAUDE_SONNET_4_6 = ["claude", "claude-sonnet-4-6"]
Support for Custom Tools, and Built-in Ones
If the model you are going to utilize has built-in tools, you can use them with
OrrinAISDK by passing them into the
tools array.
Example
Below is a complete example on how to utilize the class to inject your tool schematics into an AI model of your choice:
from orrinsdk import OrrinAISDK, Models

ai = OrrinAISDK(
    api_key='<api_key_for_model_you_are_using>',
    model=Models.GPT_5_4_NANO
)

r = ai.chat(
    messages=[
        {
            'role': 'user',
            'content': 'I have a meeting tomorrow at 9am and it\'s an investor meeting.'
        }
    ],
    tools=[ { 'type': 'schema', 'tool_id': 'ad9f2986-8251-4db7-b392-3c4c69a199ff' } ]
)

print(r)
Example with Built-in Tools
Below is an example of utilizing built-in tools that the AI model has, as well as your own:
from orrinsdk import OrrinAISDK, Models

ai = OrrinAISDK(
    api_key='<api_key_for_model_you_are_using>',
    model=Models.GPT_5_4_NANO
)

r = ai.chat(
    messages=[
        {
            'role': 'user',
            'content': 'I have a meeting tomorrow at 9am and it\'s an investor meeting.'
        }
    ],
    tools=[
        {
            'type': 'schema',
            'tool_id': 'ad9f2986-8251-4db7-b392-3c4c69a199ff'
        },
        { 'type': 'web_search' }
    ]
)

print(r)
Creating Backend Actions
In this module, we will give a rundown on creating actions that will be used in a NextJS frontend, as well as how to navigate the OrrinApps Developer Dashboard, handle the status of your app, and communicate with reviewers.
Creating actions will require the utilization of the OrrinAppsSDK class,
which enables a developer to register actions and queue them to be reviewed.
Preface
When creating an app for OrrinApps, it is important to note that the backend for the app must exist before the frontend gets uploaded. Further, both the backend and frontend are subject to review prior to deployment to the OrrinApps marketplace. This review is to ensure the app, and all its logic and functionality, adhere to Stellr's Terms & Conditions and respect Stellr's Privacy Policy.
A developer can have "helper functions" in the source code which actions utilize. There is also an allowlisted set of imports. This list is prone to grow; if there is a library you would like to be added, reach out to support.
Allowed Imports
See below the allowed set of libraries:
orrinsdk
requests
openai
json
datetime
numpy
pandas
beautifulsoup
pydantic
pillow
nltk
tiktoken
anthropic
Note:
This list is prone to grow.
Prerequisites
There are only two prerequisites for the SDK. The first is a developer API key. If you do not have one,
you can apply for an API key here.
The second is having the orrin-sdk Python library
installed, which grants access to orrinsdk
in your code. Run the following to install the library:
pip3 install orrin-sdk
Introduction
For the tutorial app we will be building, the actions exposed to the frontend will consist of:
process_message
make_external_request
Keep in mind, the purpose of this tutorial is to introduce developers to the SDK, how to create with it, and what can be done with it. The actions exist purely to showcase these abilities.
The Code
The overarching codebase will require a project.toml file
and a Python file (name it freely).
project.toml
This file will contain all metadata over the backend. It will also subsequently register the app itself
with OrrinApps. In this file, you will provide the type of project, the name, a brief description of the app, and the public details.
For an OrrinApps backend, your project.toml will look similar to the below:
[general]
project_type="app"
name = "<your_app_name>"
desc = "<description_of_your_app>"
version = "1.0.0" # can be 1.0, 1.0.0, or 1.0.0.0
[public]
display_name = "<display_name_for_your_app>"
display_desc = "<display_description_for_your_app>"
whats_new = "<describe_version>"
authors = [<string_array_of_authors>]
open_source = false # or true if it is open source
copyright = "<your_copyright>"
[[public.dependencies]]
# dependency_name = "<name>"
# type = "<external_or_internal>"
It is worth noting that project_type must be app.
main.py (name your file freely)
The heart of the backend is registering actions; it is relatively straightforward.
It starts with importing the OrrinAppsSDK class
from the orrinsdk library. Next, you will need to initialize
the class and provide your developer API key:
from orrinsdk import OrrinAppsSDK
from openai import OpenAI
import requests
sdk = OrrinAppsSDK(developer_api="<your_api_key>")
# Will be used in the `process_message` action
oai = OpenAI(api_key="<open_ai_api_key>")
process_message Action
Now, let's create the process_message action. We will showcase two ways, to illustrate
how you can code with the SDK.
Below is an example of handing off the message to OpenAI API directly within the action code:
@sdk.action(
    'process_message',
    required_payload=[{'name': 'message', 'type': 'str'}]
)
def process_message(message: str):
    resp = oai.responses.create(
        model="gpt-5",
        input=[
            {
                "role": "system",
                "content": "<system_prompt>"
            },
            {
                "role": "user",
                "content": message
            }
        ],
        tools=[<tools>]
    )
    return { "status": 200, "ai_response": resp }
Below is an example of handing off the message to OpenAI API via a "helper" function:
def prompt_ai(message):
    return oai.responses.create(
        model="gpt-5",
        input=[
            {
                "role": "system",
                "content": "<system_prompt>"
            },
            {
                "role": "user",
                "content": message
            }
        ],
        tools=[<tools>]
    )

@sdk.action(
    'process_message',
    required_payload=[{'name': 'message', 'type': 'str'}]
)
def process_message(message: str):
    return { "status": 200, "ai_response": prompt_ai(message) }
Note:
Support for utilizing OrrinAISDK
within a OrrinApps backend is coming soon.
make_external_request Action
Now, let's create the make_external_request action. This action will utilize the
requests library to make a request, and use its response as the response of the action.
@sdk.action(
    'make_external_request',
)
def make_external_request():
    resp = requests.post("<api_endpoint>", ...)
    return { "status": 200, "resp": resp }
It is worth noting that backend actions do not always need to accept an argument. For the make_external_request
action, we accept no arguments and will just perform an API request.
Finalizing
The most important part is ensuring you invoke the finalize
method, which effectively queues your actions and entire app to be reviewed, and thereby enables a frontend
be "attached" to the backend. The only other thing that will need to be done is running the code. At this point, your code,
if following this tutorial, should look something such as:
from orrinsdk import OrrinAppsSDK
from openai import OpenAI
import requests

sdk = OrrinAppsSDK(developer_api="<your_api_key>")

# Will be used in the `process_message` action
oai = OpenAI(api_key="<open_ai_api_key>")

@sdk.action(
    'process_message',
    required_payload=[{'name': 'message', 'type': 'str'}]
)
def process_message(message: str):
    resp = oai.responses.create(
        model="gpt-5",
        input=[
            {
                "role": "system",
                "content": "<system_prompt>"
            },
            {
                "role": "user",
                "content": message
            }
        ],
        tools=[<tools>]
    )
    return { "status": 200, "ai_response": resp }

@sdk.action(
    'make_external_request',
)
def make_external_request():
    resp = requests.post("<api_endpoint>", ...)
    return { "status": 200, "resp": resp }

sdk.finalize()
With all templated parts of the code filled out, run the script and you will see an output. It will be a 200 status if the backend got registered, which will also display an app ID. Copy this app ID; you will need it for the frontend code. Otherwise, it will display the error code and the error. The error should be descriptive in nature, enabling you to fix the issue promptly.
You can manage your app by going to the Developer Dashboard.
This dashboard will display all apps (and tools) that you have created. Once your app has been approved for deployment to the OrrinApps
marketplace, you will also be able to, at your convenience, toggle your app between live and not live.
Now that you have registered a backend, it is time to create the frontend and attach it to the backend. Proceed to the next module.
Creating Frontend, Attaching to Backend, and Uploading
In this module, we will give a rundown on creating the NextJS frontend, attaching it to a backend, and uploading the UI to be reviewed alongside the backend.
Preface
A frontend for an OrrinApp must follow the creation of its backend, so ensure you have a registered backend first. The frontend, as with any frontend, is the connection between the backend logic and the interface which a user interacts with.
A frontend can be tied to virtually any backend. There are no explicit guardrails regarding this, so ensure the backend you "attach" a frontend to is appropriate for the frontend. If there is ever a mishap, contact support via the Developer Dashboard, and they will assist you.
It is also worth noting that you can deploy your application elsewhere whilst utilizing the SDK. There is no requirement that your app be deployed only to OrrinApps.
Prerequisites
There are 3 prerequisites for the NextJS SDK. The first is a developer API key. If you do not have one,
you can apply for an API key here.
The second is having the @orrin-apps/sdk package installed
as a dependency in your NextJS project. Run the following to install it:
npm install @orrin-apps/sdk
The third is having the orrin-cli package installed, which grants access to the
orrin command in your terminal. Run the following to
install it:
pip3 install orrin-cli
Client
For the Orrin NextJS SDK to work, you will need to instantiate a client, which is where you provide your developer API key and the app ID. You will be provided with the app ID upon submitting your backend for review.
You will want to create a new file - one which we title client.ts - to
store the client object:
import { OrrinClient } from "@orrin-apps/sdk";

export const client = new OrrinClient({
  apiKey: "<developer_api_key>",
  appID: "<app_id>",
});
Provider
The next step is creating a "provider", which is simply the mechanism that exposes
your backend actions to your frontend via the useAction hook.
Ensure you have this code packed in a file of its own - one which we title providers.tsx:
"use client";

import { OrrinProvider } from "@orrin-apps/sdk";
import { client } from "./client";

export function Providers({ children }: { children: React.ReactNode }) {
  return (
    <OrrinProvider client={client}>
      {children}
    </OrrinProvider>
  );
}
Using the Provider
The provider must be used in layout.tsx, effectively wrapping
the core of your app:
import { Providers } from "./providers";

export default function RootLayout({
  children,
}: Readonly<{
  children: React.ReactNode;
}>) {
  return (
    <html lang="en">
      <body>
        <Providers>{children}</Providers>
      </body>
    </html>
  );
}
Using the Actions
The next step is actually using your actions. You now have a provider which effectively exposes
all available actions to your frontend. In order to use these exposed actions, you will need to import
useAction.
It is important to note that the response from executing an action is a wrapper object whose Result field contains your action's return value. So, you will want to have the following type in your code:

type Response = {
  Result: Record<string, any>;
};
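To make this wrapper concrete, here is a standalone sketch (no SDK imports; the payload contents are invented for illustration, and the type is named ActionResponse here only to avoid clashing with the browser's built-in Response):

```typescript
// The wrapper type described above: an action's return value lives
// under the Result key.
type ActionResponse = {
  Result: Record<string, any>;
};

// A hypothetical payload, shaped like what an action such as
// make_external_request might return.
const example: ActionResponse = {
  Result: { status: 200, resp: "hello from the backend" },
};

// Pull a field out of Result, as the example page does when reading resp.
const text: string = example.Result["resp"];
console.log(text); // → "hello from the backend"
```

Since Result is typed as Record<string, any>, reading a missing key simply yields undefined; consider narrowing the type once your action's return shape is stable.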
Below is an example page.tsx:
"use client";

import { useAction } from "@orrin-apps/sdk";
import { useEffect, useState } from "react";

type Response = {
  Result: Record<string, any>;
};

export default function Home() {
  const { execute_process_messages } = useAction<Response>(
    "process_message"
  );
  const [toShow, setToShow] = useState<string>("");

  useEffect(() => {
    const d = async () => {
      try {
        const b = await execute_process_messages({
          "message": "<message>"
        });
        // The action's return value lives under Result (see the Response type).
        setToShow(b.Result["resp"]);
      } catch (e) {
        alert(e);
      }
    };
    d();
  }, []);

  return (
    <div className="flex min-h-screen items-center justify-center bg-zinc-50 font-sans dark:bg-black">
      <main className="flex min-h-screen w-full max-w-3xl flex-col items-center justify-between py-32 px-16 bg-white dark:bg-black sm:items-start">
        <div className="flex flex-col items-center gap-6 text-center sm:items-start sm:text-left">
          <p className="max-w-md text-lg leading-8 text-zinc-600 dark:text-zinc-400">
            {toShow}
          </p>
        </div>
      </main>
    </div>
  );
}
The useAction hook returns a function that invokes your action. You may name this function freely; we named it execute_process_messages.
The arguments you pass to it are the arguments your action expects; if the action takes none, pass none.

In the above code, we placed the call to the action inside a useEffect that runs on mount, immediately sending whatever is provided as message
to the process_message action.
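Conceptually, the relationship between an action's expected arguments and the function useAction returns can be modeled with a type-level sketch. This is not the SDK's actual signature, just a hypothetical model with a stand-in implementation:

```typescript
type ActionResponse = { Result: Record<string, any> };

// A hypothetical model of the function useAction returns: it accepts the
// action's argument object (or nothing) and resolves to the wrapped response.
type Execute<Args extends Record<string, any> | void = void> =
  Args extends void
    ? () => Promise<ActionResponse>
    : (args: Args) => Promise<ActionResponse>;

// process_message expects a { message } argument, so its executor is typed:
const executeProcessMessage: Execute<{ message: string }> = async (args) => ({
  // Stand-in body for illustration; the real executor calls your backend.
  Result: { resp: `echo: ${args.message}` },
});

executeProcessMessage({ message: "hi" }).then((r) => {
  console.log(r.Result["resp"]); // → "echo: hi"
});
```

The point of the sketch: the executor's argument object mirrors the keyword arguments of the Python action, and the resolved value is always the Result wrapper described earlier.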
Uploading the UI
There is one last step left for creating the frontend, and that's uploading it. By uploading it,
the frontend will officially be "attached" to the backend, and both your backend and frontend
will be fully prepared for review. Uploading the UI is handled via the orrin-cli
library (the orrin command via the terminal).
It is important to note that your next.config.js file will need to look like the following
in order for the upload flow to work:
/** @type {import('next').NextConfig} */
const nextConfig = {
  output: 'export',
  trailingSlash: true,
  basePath: '/<your_app_name>/current',
  reactStrictMode: true,
};

module.exports = nextConfig;
Note:
Ensure the app name assigned to basePath is the same
as the name assigned in the backend. If there is a mismatch, there will be an internal server error when rendering your app.
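To illustrate why the names must match, here is a small sketch of how basePath composes the final URL of a page (standard NextJS behavior with output: 'export' and trailingSlash: true; the app name here is a made-up example):

```typescript
// With a static export, every page route is served under the basePath
// prefix, and trailingSlash: true appends a trailing slash.
const appName = "my-app"; // must match the name in your backend's project.toml
const basePath = `/${appName}/current`;

// A hypothetical page route and where it ends up being served from:
const route = "/about";
const servedAt = `${basePath}${route}/`;
console.log(servedAt); // → "/my-app/current/about/"
```

If the backend knows the app by a different name, it will look for the UI under a prefix that the static export was not built for, hence the rendering error.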
Generating Zip
The way that the UI is handled for hosting in the OrrinApps Marketplace is that all the NextJS code will be compiled down to a static output, and then that output gets zipped and uploaded. To generate a zip, which will also generate the static output, you will run the following command in the root of your NextJS project:
orrin ui generate-zip
Important:
If you make updates to your UI, ensure you re-run this command before running the upload command below.
This command will generate a static output and zip it. Following this, you will want to run
the upload command. This requires an --app
argument, which is the backend ID found in the Developer Dashboard.
Attaching the UI
The last step is to upload, and thereby "attach", the UI to the backend.
orrin ui upload --app <app_id>
After running the above command, you will get a response printed in your terminal regarding the request status to upload the UI. If there is an error, the error message will be descriptive in nature, enabling you to promptly fix whatever the issue is.