Orchestration - AI Inference Node

Contents: Overview | Use Cases | Prerequisites | Enabling & Adding the AI Inference Node | Configuring the AI Inference Node | Considerations when Using the AI Inference Node | FAQs | Summary

Overview

The AI Inference Node (Beta) (also referred to as the AI Decision Node) is a premium FlowBuilder action node that lets you run a generative AI prompt during routing. You can send unstructured data (for example, free-text form responses, notes, or comments) to a supported AI provider and save the response into one or more LeanData variables for use in downstream nodes.

The AI Inference Node behaves like a hold-until step: it pauses routing until the AI response arrives or the timeout is reached.

Please note: You are responsible for AI provider usage costs and token consumption through your own API key.

Use Cases

Use the AI Inference Node when you need to make routing decisions from text that is difficult to handle with rules, Regex, or standard field logic. Common use cases include:

- Categorize intent from a "Contact Us" comment or "Reason for Contact" field.
- Summarize long text (for example, recent activity or notes) into a short summary for downstream automation.
- Extract key data (for example, competitor name, domain name, or product mentioned) from a free-text field.
- Analyze sentiment in survey feedback or case comments, then route escalations.

Prerequisites

Before you configure an AI Inference Node, confirm the following:

- You have access to FlowBuilder for the relevant router (Lead, Contact, Account, Opportunity, Case, or Any Object Routing).
- You have a supported AI provider account and an API key for at least one provider: OpenAI or Google Gemini.
- You have the permissions needed to configure LeanData (LeanData Custom Objects Full Access).

Please note: The AI Inference Node does not change LeanData permission sets. Access is controlled by your existing LeanData permissions.
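Conceptually, the node packages your prompt and a record's free text into a provider request, then parses the response into variables. The sketch below builds an OpenAI-style chat-completions payload for the intent-categorization use case; the payload shape, model name, and category list are illustrative assumptions, not LeanData's internal request format.

```python
import json

def build_intent_request(comment: str, model: str = "gpt-4o-mini") -> str:
    """Build an OpenAI-style chat-completions payload (illustrative only).

    The node constructs the real request itself; this sketch just shows the
    kind of structured-classification prompt the use case describes.
    """
    payload = {
        "model": model,
        "messages": [
            {
                "role": "system",
                "content": (
                    "Classify the visitor's intent as one of: "
                    "sales, support, partnership, other. "
                    "Reply with the category only."
                ),
            },
            {"role": "user", "content": comment},
        ],
    }
    return json.dumps(payload)

request_body = build_intent_request("We'd like a demo of your routing product.")
```

Constraining the reply to a fixed category list is what makes the response usable for routing: a downstream decision node can branch on an exact value instead of free text.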
Enabling & Adding the AI Inference Node

Step 1: Authorize your LLM Integration (BYOLLM)

LeanData uses a Bring Your Own LLM (BYOLLM) model. This means:

- You provide your own AI provider API key.
- Any token usage and costs are subject to the terms of your AI provider.
- LeanData does not operate this node using LeanData-managed keys.

To authorize an integration:

1. In the LeanData app, navigate to Integrations.
2. Locate the integration tile for your preferred provider (for example, OpenAI or Google Gemini), then select Get Started.
3. Enter your API Key, then complete the authorization flow.

Step 2: Enable the AI Inference Node

The AI Inference Node is disabled by default. To use it, you must opt in by enabling a setting:

1. Navigate to Admin > Settings > AI Tools tab.
2. Turn on the AI Inference Node toggle.
3. Select the authorized AI provider integration you want to use.

Step 3: Add an AI Inference Node in FlowBuilder

1. Open the FlowBuilder graph you want to update.
2. In the node bar, locate the Actions section.
3. Drag the AI Inference Node onto your graph.

Configuring Your Prompt in the AI Inference Node

When you open the AI Inference Node, a configuration modal appears. Use the steps below to define what the AI should do, where LeanData should store the results, and how to test your setup.

Step 1: Set the prompt

1. Click the Edit Prompt button.
2. In the configuration modal that appears, select a Model from the dropdown. This list shows the models available from your authorized LLM provider.
3. Enter your Prompt in the prompt text area. You can insert variables that were defined earlier in your FlowBuilder graph.

Variables can reference values from the routed record, matched records, and any other previously defined variables. Prompts have a 2,000-character limit.

Step 2: Define outputs

Outputs determine where and how LeanData saves the AI response. Outputs are stored in variables, which can then be referenced in downstream nodes in your FlowBuilder graph.
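Output data types matter because the model's raw response is always text and must be converted before it can be stored in a typed variable. The sketch below shows that conversion for the type names described in this article (text, number, true-false); it is illustrative only and does not reflect how LeanData performs the conversion internally.

```python
def coerce_output(raw: str, data_type: str):
    """Coerce a raw model response into a declared output type.

    Illustrative sketch: the type names mirror the options described
    in this article, not LeanData's internal implementation.
    """
    value = raw.strip()
    if data_type == "text":
        return value
    if data_type == "number":
        return float(value)  # raises ValueError if the model returned prose
    if data_type == "true-false":
        lowered = value.lower()
        if lowered in ("true", "yes"):
            return True
        if lowered in ("false", "no"):
            return False
        raise ValueError(f"Not a true-false answer: {raw!r}")
    raise ValueError(f"Unsupported data type: {data_type}")
```

This is also why the output Instruction box matters: constraining the model to answer with, for example, only "true" or "false" keeps the response storable in the data type you declared.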
1. In the Outputs section, either select an existing variable to populate or create a new variable. To create a new variable, type a new variable name, then select it from the dropdown once it appears.
2. For each output, select the correct data type. Choose a type that matches what you expect the model to return, such as text, number, true-false, or a specific object.
3. Use the output Instruction box to add constraints for each output. This helps guide the model toward a response that can be stored correctly in the output type you selected.
4. To capture additional values, select Add Output and repeat the steps above.

[SCREENSHOT: Outputs section showing variable selection, data type, and instruction fields]

Step 3: Test and refine

After you set your prompt and outputs, validate your configuration in the modal's test area on the right:

1. Select Get Results to generate a test output using your current configuration.
2. If your prompt includes variables as inputs, supply sample values for those inputs in order to test the results.
3. Review the sample output, then adjust your prompt and output configuration as needed.
4. Re-test until the results are consistent and usable.

When you are satisfied, select Done Editing to save.

Step 4: Direct the Node's Edges

After configuring the prompt and outputs, define how the node should behave based on the AI provider's response. In the Advanced Settings section at the bottom of the node configuration modal, configure the exit paths for each outcome. The AI Inference Node has three possible exit paths:

- Next Node: The AI response is successfully received and stored in the output variable(s). Routing continues to the next node you specify.
- Time Out: The AI provider call exceeds the timeout threshold (approximately 1 minute).
  The record follows this path when no response is received in time.
- Error: An error occurs that is not a timeout (for example, an API error, invalid key, or model refusal). The record follows this path when the request fails.

Click Done when you are finished configuring the AI Inference Node.

Considerations when Using the AI Inference Node

- Token usage: The AI Inference Node uses your AI provider's API key. You are responsible for AI provider usage costs and token consumption through your own API key.
- Data and security: Requests are sent using your API key. Make sure your internal security and data policies allow sending the selected fields to your AI provider.
- Character limits: Prompts are limited to 2,000 characters.
- Timeouts and routing behavior: The AI provider call times out after approximately 1 minute. The overall node wait time has a system default of 5 minutes.
- Routing Preview limitations: Routing Preview does not call third-party integrations, including AI providers. Plan to validate with the node's test tooling and in a safe environment before production use.
- Auditing: Audit logs capture the execution status, the input value, the prompt, the response payload, and the outcome (success, timeout, or error).

FAQs

Is the AI Inference Node available for all routers?
The AI Inference Node can be added to graphs in Orchestration for Lead, Contact, Account, Opportunity, Case, and Any Object Routing.

Is the AI Inference Node available in Scheduling graphs?
No. AI calls can take longer than the response times expected for scheduling experiences, so the AI Inference Node is not supported in Scheduling graphs.

What happens if the AI provider returns an error or refuses the prompt?
The record does not continue to the Next Node path. It follows the Time Out path when the call times out, and the Error path when there is an API error, a missing or invalid API key, an invalid prompt, or a model refusal.
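The exit-path behavior described in Step 4 and the FAQ above can be sketched as a simple dispatch. Here `call_provider` is a hypothetical stand-in for the node's provider call, and the default timeout mirrors the approximate 1-minute threshold described above; this is a sketch of the routing behavior, not LeanData internals.

```python
def resolve_exit_edge(call_provider, timeout_seconds: int = 60):
    """Return the edge a record follows: 'next', 'timeout', or 'error'.

    Illustrative sketch of the node's three exit paths.
    """
    try:
        outputs = call_provider(timeout=timeout_seconds)
    except TimeoutError:
        return "timeout", None   # no response within the threshold
    except Exception:
        return "error", None     # API error, invalid key, model refusal, etc.
    return "next", outputs       # response stored in the output variable(s)

edge, outputs = resolve_exit_edge(lambda timeout: {"intent": "sales"})
```

Whichever edge is taken, routing resumes from the node you connected to that path, so every record leaves the node even when the provider fails.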
Summary

The AI Inference Node (Beta) lets you use generative AI prompts in FlowBuilder to classify, extract, summarize, or interpret unstructured text and save the response into LeanData variables. After you set up your AI provider integration and enable the feature, you can add the node to your graphs, define outputs, test safely, and route records using AI-generated outputs.

For questions or additional assistance, please contact LeanData Support.