Execution Nodes¶
Execution Nodes enable your pipeline to perform actions, call external tools, execute code, and integrate with APIs. These nodes form the "action" layer of your workflow, transforming data, triggering external systems, and performing computational tasks.
Available Execution Nodes:
- Function Node - Execute specific toolkit/MCP functions with direct parameter mapping
- Tool Node - LLM-assisted tool selection and execution based on task instructions
- Code Node - Execute custom Python code in a secure sandbox
Function Node¶
The Function Node executes specific tools from Toolkits or MCPs (Model Context Protocol servers) with direct parameter mapping. Unlike the Tool Node, which uses LLM intelligence to decide which tool to call, the Function Node directly invokes a pre-selected tool with explicitly mapped inputs.
Purpose¶
Use the Function Node to:
- Execute specific tools without LLM decision-making overhead
- Call external APIs through toolkit integrations (Jira, Confluence, GitHub, etc.)
- Perform deterministic actions where the tool and parameters are known upfront
- Map pipeline state directly to tool parameters
- Chain multiple tool calls in sequence with precise control
Function Node Scope
Function Nodes can use Toolkits and MCPs only. Agents and Pipelines now have their own dedicated node types.
Parameters¶
| Parameter | Purpose | Type Options & Examples |
|---|---|---|
| Toolkit | Select which Toolkit or MCP contains the tool you want to execute | Toolkits - external service integrations<br>MCPs - Model Context Protocol servers<br>Selection process: 1. Select the Toolkit/MCP from the dropdown. 2. The Tool dropdown appears. 3. Select the specific tool.<br>Example: `jira_toolkit` |
| Tool | Select the specific tool/function to execute from the chosen toolkit | Dropdown populated with all available tools from the selected toolkit.<br>Example (`jira_toolkit`): `create_issue`, `update_issue`, `search_issues`, `list_comments` |
| Input | Specify which state variables the Function Node reads from | Default states: `input`, `messages`<br>Custom states: any defined state variables<br>Example: `project_id`, `issue_title`, `input` |
| Output | Define which state variables the tool's result should populate | Default: `messages`<br>Custom states: specific variables<br>Example: `jira_ticket_id`, `messages` |
| Input Mapping | Map pipeline state variables to the tool's required parameters (appears after tool selection) | F-String - formatted string with variables, e.g. `user_story_{ticket_id}_v{version}.md`<br>Variable - direct state reference, e.g. `generated_content`<br>Fixed - static value, e.g. `production-reports`<br>Categories: required parameters (must be provided) and optional parameters (can be `null`) |
| Interrupt Before | Pause pipeline execution before this node | Enabled / Disabled |
| Interrupt After | Pause pipeline execution after this node for inspection | Enabled / Disabled |
YAML Configuration
```yaml
nodes:
  - id: Function 1
    type: function
    tool: list_comments
    input:
      - project_id
      - issue_title
      - input
    output:
      - jira_ticket_id
      - messages
    structured_output: false
    input_mapping:
      issue_key:
        type: fstring
        value: user_story_{jira_ticket_id}
    transition: END
    toolkit_name: JiraAssistant
    interrupt_before: []
state:
  messages:
    type: list
  input:
    type: str
  project_id:
    type: str
    value: ''
  jira_ticket_id:
    type: str
    value: ''
  issue_title:
    type: str
    value: ''
```
Input Mapping Configuration
The Input Mapping section dynamically displays only the parameters required by the selected tool. Each toolkit/MCP tool has different required and optional parameters. Select your tool first to see available mapping options.
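For reference, a hedged sketch of an `input_mapping` block using all three value types. The parameter names (`project`, `summary`, `description`) are hypothetical and depend on the selected tool; the `fixed` and `fstring` types appear in this page's examples, and `variable` is assumed to follow the same pattern:

```yaml
input_mapping:
  project:             # Fixed - the same static value on every run
    type: fixed
    value: PROD
  summary:             # F-String - interpolates state variables into a template
    type: fstring
    value: 'User story: {issue_title}'
  description:         # Variable - passes a state variable through directly
    type: variable
    value: generated_content
```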
Single Toolkit and Tool Selection
Each Function Node can select only one toolkit and one tool. For multiple tool executions, create separate Function Nodes and chain them together.
Best Practices¶
- Map Required Parameters Correctly: Confirm every required parameter has a mapping before running the pipeline.
- Use the Appropriate Type for Each Parameter:
  - Variable: when the value comes from state
  - F-String: when you need dynamic interpolation
  - Fixed: for static, unchanging values
- Handle Optional Parameters: Set optional parameters to `null` if they are not needed.
- Include Output Variables: Capture important results in output variables.
- Use Interrupts for Debugging: Enable interrupts when testing new integrations.
- Validate State Variables: Ensure input state variables exist before the Function Node executes.
- Choose Function Over Tool Node: Use the Function Node when:
  - You know exactly which tool to call
  - Parameters are straightforward to map
  - No LLM decision-making is needed
  - Performance is critical (no LLM overhead)
- Chain Function Calls: Create workflows by sequencing Function Nodes, as sketched below.
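A minimal chaining sketch, assuming two Jira tools where the first node's result feeds the second through a shared state variable. The `issue_keys` variable and the `jql` parameter of `search_issues` are illustrative assumptions; the tool names follow the `jira_toolkit` examples on this page:

```yaml
nodes:
  - id: Find Issues
    type: function
    toolkit_name: JiraAssistant
    tool: search_issues
    input:
      - project_id
    output:
      - issue_keys             # first node writes its result to state
    input_mapping:
      jql:
        type: fstring
        value: project = {project_id} ORDER BY created DESC
    transition: Read Comments  # hand off to the next Function Node
  - id: Read Comments
    type: function
    toolkit_name: JiraAssistant
    tool: list_comments
    input:
      - issue_keys             # second node reads what the first produced
    output:
      - messages
    input_mapping:
      issue_key:
        type: variable
        value: issue_keys
    transition: END
```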
Tool Node¶
The Tool Node uses LLM intelligence to analyze a task instruction, select appropriate tools from available Toolkits/MCPs, and execute them with LLM-generated parameters. Unlike the Function Node, which requires explicit tool selection and parameter mapping, the Tool Node makes intelligent decisions about which tools to call and how.
Purpose¶
Use the Tool Node to:
- Delegate tool selection to LLM based on natural language instructions
- Handle complex workflows where multiple tools might be needed
- Simplify configuration by avoiding manual parameter mapping
- Leverage LLM reasoning to choose the right tool for the task
- Execute multi-step tool chains dynamically
Parameters¶
| Parameter | Purpose | Type Options & Examples |
|---|---|---|
| Toolkit | Select which Toolkits or MCPs the LLM can choose tools from | Toolkits - external service integrations<br>MCPs - Model Context Protocol servers<br>Selection: multiple toolkits can be selected<br>How it works: 1. Select one or more toolkits. 2. The LLM receives the tool descriptions. 3. The LLM analyzes the task and selects tools. 4. The pipeline executes the selected tools.<br>Example: `confluence_toolkit` |
| Tool | Select a specific tool from the chosen toolkit that the LLM will use | Dropdown populated with all available tools from the selected toolkit.<br>Selection: only one tool can be selected<br>LLM usage: the LLM uses the selected tool to accomplish the task<br>Example: `read_page_by_id` |
| Task | Provide natural language instructions describing what the node should accomplish | Example: `Read the Confluence page with ID {input} and extract the requirements section.` |
| Input | Specify which state variables the Tool Node reads from | Default states: `input`, `messages`<br>Custom states: any defined state variables<br>Example: `page_id`, `topic`, `messages` |
| Output | Define which state variables the tool execution results should populate | Default: `messages`<br>Custom states: specific variables<br>Example: `search_results`, `created_page_id`, `messages` |
| Structured Output | Force the LLM to return results in a structured format matching your output variables | Enabled - the LLM formats tool results into specific state variables<br>Disabled - the LLM returns a free-form summary in `messages`<br>Example: `true` or `false` |
| Interrupt Before | Pause execution before the LLM executes tools | Enabled / Disabled |
| Interrupt After | Pause execution after the LLM executes tools for inspection | Enabled / Disabled |
YAML Configuration
```yaml
nodes:
  - id: Tool 1
    type: tool
    tool: read_page_by_id
    input:
      - input
    output:
      - messages
    structured_output: true
    transition: END
    toolkit_name: confluence_toolkit
    task: Read the Confluence page with ID {input} and extract the requirements section
    interrupt_before:
      - Tool 1
state:
  messages:
    type: list
  input:
    type: str
  project_id:
    type: str
    value: ''
  issue_title:
    type: str
    value: ''
  page_id:
    type: str
    value: ''
```
Best Practices¶
- Write Clear Task Instructions: Provide specific, actionable tasks that clearly describe what the node should accomplish.
- Single Tool Selection: Each Tool Node can select only one tool. The LLM uses that selected tool to accomplish the task based on the provided instructions.
- Use Structured Output for Data Extraction: When you need specific values, enable structured output to extract data into defined state variables.
- Provide Context in Task: Include necessary context from state variables using f-string formatting in the task description, as shown in the sketch after this list.
- Use Interrupts for Debugging: Enable interrupts to review LLM tool execution and results during development.
- Handle Multi-Step Tasks: Break complex workflows into clear, sequential steps in the task description.
- Choose Tool Node Over Function Node: Use Tool Node when:
- Task requires LLM reasoning about how to use the tool
- You want natural language task specification
- Tool parameters are complex or context-dependent
- Monitor Tool Execution: Review the tool execution results to ensure expected behavior and optimize task instructions.
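For illustration, a hedged Tool Node sketch that pulls context from state into the task and returns structured output. The `requirements` state variable and the task wording are illustrative; the toolkit and tool names follow the examples above:

```yaml
nodes:
  - id: Extract Requirements
    type: tool
    toolkit_name: confluence_toolkit
    tool: read_page_by_id
    task: |
      Read the Confluence page with ID {page_id} and extract the
      requirements relevant to project {project_id}.
    input:
      - page_id
      - project_id
    output:
      - requirements        # structured output lands here instead of messages
    structured_output: true
    transition: END
```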
Code Node¶
The Code Node enables secure execution of custom Python code within a sandboxed environment (Pyodide/WebAssembly). It provides full Python capabilities for data processing, calculations, and custom logic without accessing the host system.
Purpose¶
Use the Code Node to:
- Execute custom Python logic for data transformation and processing
- Perform calculations that don't require external tool integrations
- Process pipeline state with full programming control
- Implement business rules and conditional logic in Python
- Transform data formats between pipeline nodes
- Call external APIs directly from Python
Parameters¶
| Parameter | Purpose | Type Options & Examples |
|---|---|---|
| Code | Provide the Python code to execute | Fixed - static Python code block (most common)<br>F-String - code with dynamic variable interpolation<br>Variable - code sourced from state<br>Full-screen editor: the Value field supports full-screen mode with Python syntax highlighting, code validation, and multi-line editing<br>Example (Fixed): `numbers = alita_state.get('numbers', [])`<br>`return {"total": sum(numbers)}` |
| Input | Specify which state variables to inject into the code execution context | How it works: selected state variables become accessible via the `alita_state` dictionary<br>Example: `user_data`, `configuration`, `previous_results`<br>Code access: `user_data = alita_state.get('user_data', {})` |
| Output | Define which state variables the code's return value should populate | Without output variables: the code's return value is added to `messages`<br>With output variables: the code must return a dictionary, and only the listed variables are updated<br>Example: `total`, `average`, `status`, `messages` |
| Structured Output | Enable parsing of the code's return value as structured data for state variable updates | Enabled (`true`): code must return a dictionary; keys matching output variables update state<br>Disabled (`false`): the code's return value goes to `messages`<br>Example: `true` or `false` |
| Interrupt Before | Pause pipeline execution before code execution | Enabled / Disabled |
| Interrupt After | Pause pipeline execution after code execution for inspection | Enabled / Disabled |
YAML Configuration
```yaml
nodes:
  - id: calculate_metrics
    type: code
    code:
      type: fixed
      value: |
        # Access state variables
        scores = alita_state.get('scores', [])
        threshold = alita_state.get('threshold', 70)

        # Calculate metrics
        if scores:
            min_score = min(scores)
            max_score = max(scores)
            avg_score = sum(scores) / len(scores)
            pass_count = sum(1 for s in scores if s >= threshold)
        else:
            min_score = max_score = avg_score = pass_count = 0

        # Return structured data
        return {
            "min_score": min_score,
            "max_score": max_score,
            "avg_score": round(avg_score, 2),
            "pass_count": pass_count
        }
    input:
      - scores
      - threshold
    output:
      - min_score
      - max_score
      - avg_score
      - pass_count
      - messages
    structured_output: true
    transition: END
    interrupt_after:
      - calculate_metrics
state:
  scores:
    type: list
    value: []
  threshold:
    type: int
    value: 70
  min_score:
    type: float
    value: 0.0
  max_score:
    type: float
    value: 0.0
  avg_score:
    type: float
    value: 0.0
  pass_count:
    type: int
    value: 0
  messages:
    type: list
```
State Access in Code
Access pipeline state via alita_state dictionary. When an Alita client is available, it's automatically injected as alita_client for accessing artifacts and other resources.
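A minimal sketch of reading state and guarding for the optional client. Checking `globals()` for `alita_client` is an assumption about the injection mechanism, and no specific client methods are shown since they are not documented on this page:

```python
# Read a state variable with a safe default
config = alita_state.get('configuration', {})

# alita_client is injected only when an Alita client is available,
# so guard for its absence before using it
client = globals().get('alita_client')
if client is not None:
    # artifact or resource access would go here
    pass

return {"has_client": client is not None, "config_keys": list(config.keys())}
```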
Output Variable Filtering
Only variables listed in output will be updated, even if the returned dictionary contains additional keys. Use structured_output: true for proper variable mapping.
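For example, with `output` set to `total` and `messages`, a hypothetical extra key in the returned dictionary is silently dropped:

```python
return {
    "total": 42,                             # listed in output: updates state
    "debug_info": "intermediate values...",  # not listed: discarded
}
```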
Code Execution Environment
Code runs in a sandboxed Pyodide/WebAssembly environment with full Python standard library. Use import micropip; await micropip.install('package-name') for additional packages. Network access is enabled for external API calls.
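A hedged sketch of pulling in an extra package at runtime. The package choice (`python-dateutil`) and the `created_at` state variable are illustrative; top-level `await` is assumed to be supported, as the note's own `await micropip.install(...)` usage suggests:

```python
# Install an extra pure-Python package inside the Pyodide sandbox
import micropip
await micropip.install('python-dateutil')

from dateutil import parser

# Parse a timestamp taken from pipeline state
raw = alita_state.get('created_at', '2024-01-01T00:00:00')
return {"created_year": parser.parse(raw).year}
```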
Best Practices¶
- Return Structured Data: When using `structured_output: true`, always return dictionaries with keys matching the output variables.
- Handle Errors Gracefully: Include try-except blocks to catch and return errors as part of the structured output (see the sketch after this list).
- Validate Input Data: Check that state variables exist and have the expected types before processing, using `alita_state.get()`.
- Use Descriptive Output Variables: Name output variables clearly to indicate their purpose (e.g., `total_revenue` and `average_order_value` instead of `result1` and `result2`).
- Keep Code Focused: Each Code Node should have one clear purpose; avoid combining multiple unrelated operations in a single node.
- Document Complex Logic: Use Python comments to explain business rules, calculations, and non-obvious operations.
- Test with Interrupts: Enable interrupts to review code execution results and debug issues during development.
- Optimize Performance: Avoid heavy computations in frequently called nodes, cache expensive operations when possible, and use efficient data structures.
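A minimal sketch combining input validation and graceful error handling, assuming output variables `total`, `status`, and `error` (the `orders` state variable and all names are illustrative):

```python
# Validate input: orders should be a list of dicts with an 'amount' field
orders = alita_state.get('orders', [])
if not isinstance(orders, list):
    return {"total": 0.0, "status": "error", "error": "orders must be a list"}

try:
    # Sum order amounts, skipping malformed entries
    total = sum(float(o.get('amount', 0)) for o in orders if isinstance(o, dict))
    return {"total": round(total, 2), "status": "ok", "error": ""}
except (TypeError, ValueError) as exc:
    # Surface the failure through structured output instead of raising
    return {"total": 0.0, "status": "error", "error": str(exc)}
```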
Execution Nodes Comparison¶
| Feature | Function Node | Tool Node | Code Node |
|---|---|---|---|
| Purpose | Execute specific tool with explicit parameter mapping | LLM-assisted tool selection and execution | Execute custom Python code |
| Toolkit Types | Toolkits, MCPs | Toolkits, MCPs | N/A (Python sandbox) |
| Tool Selection | Manual (user selects) | Automatic (LLM decides) | N/A |
| Parameter Mapping | Explicit Input Mapping (per tool) | LLM generates parameters from task | State via alita_state |
| Task Instruction | No task field | Required (natural language) | Python code |
| LLM Usage | No LLM | Yes (for tool selection and params) | No LLM |
| Configuration | UI-based parameter mapping | Natural language task + toolkit selection | Python code editor |
| Flexibility | Low (predefined tools) | High (LLM reasoning) | Very High (full Python) |
| Complexity | Medium | Low (natural language) | High (requires Python knowledge) |
| Performance | Fast (direct execution) | Slower (LLM overhead) | Fast (sandboxed execution, no LLM overhead) |
| Structured Output | Not applicable | Supported | Supported |
| Input Mapping | Required parameters + optional | LLM generates from task | alita_state dictionary |
| Use Case | Known tool, explicit parameters | Flexible tool selection, complex workflows | Custom logic, calculations, data processing |
| Best For | Deterministic tool calls (create Jira ticket, search Confluence) | Dynamic tool selection (research and document, multi-step workflows) | Data transformation, business logic, API calls |
When to Use Each Node¶
Function Node ✅¶
Choose Function Node when you:
- Know exactly which tool to call
- Have straightforward parameter mapping
- Need fast, deterministic execution
- Don't require LLM reasoning
- Want explicit control over tool execution
Example: Create a Jira ticket with known project, summary, and description.
Tool Node ✅¶
Choose Tool Node when you:
- Need LLM to decide which tool(s) to call
- Have complex, multi-step workflows
- Want natural language task specification
- Require dynamic tool selection based on context
- Need LLM reasoning about tool parameters
Example: "Search Confluence for authentication docs, then create a Jira ticket summarizing the findings."
Code Node ✅¶
Choose Code Node when you:
- Need custom Python logic
- Require data transformation or processing
- Implement business rules and calculations
- Call external APIs directly
- Have logic too complex for standard nodes
Example: Calculate tiered discounts based on customer segment, order value, and first-order status.
Related¶
- Nodes Overview - Understand all available node types
- Interaction Nodes - LLM and Agent nodes for AI-powered tasks
- Control Flow Nodes - Router, Condition, and Decision nodes
- States - Manage data flow through pipeline state
- Connections - Link nodes together
- YAML Configuration - See complete node syntax examples