Skill Settings: Execution Options
A tutorial on how and when to use the new workflow execution options available in Moltin's Skill Settings.
Introduction
Workflows are only as smart as the way they're configured. In Moltin, you can orchestrate AI agents to handle complex tasks, but how those agents execute tasks matters just as much as what they do. The Skill Settings Execution Options let workspace admins fine-tune three critical aspects of how their workflows run: whether tasks happen one after another or all at once, which AI engine handles the processing, and what happens when something breaks.
These aren't buried toggles in a settings submenu. They're strategic choices that shape everything from how fast your workflows complete to how reliably they recover from errors. Get them right and your agents work like a well-oiled machine. Get them wrong and you'll spend more time debugging than deploying.
Execution Mode: Sequential, Parallel, or Let the AI Decide
Execution Mode controls the fundamental flow of your workflow. Do tasks line up and wait their turn, or do they all fire at once? Or should Moltin's AI figure it out based on task dependencies?
Benefits
The right execution mode can transform a sluggish workflow into something responsive. Sequential execution works when tasks depend on each other. If Agent A needs to finish analyzing a document before Agent B can summarize it, you can't run them simultaneously. That's where sequential shines. It ensures each task completes before the next one starts, maintaining strict order and preventing data conflicts.
Parallel execution does the opposite. It runs multiple independent tasks at the same time. If you've got five agents all pulling different data sources with no overlap, why make them wait? Parallel mode can cut your workflow completion time dramatically. A workflow that took fifteen minutes running tasks one by one might finish in three when tasks run simultaneously.
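Moltin handles this scheduling for you, but the trade-off is easy to see in plain Python. Here's a minimal sketch of the difference, using a simulated I/O-bound task (the task and source names are hypothetical stand-ins, not Moltin's API):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fetch_source(name: str) -> str:
    """Stand-in for an independent data-gathering task (simulated I/O wait)."""
    time.sleep(0.2)
    return f"data from {name}"

sources = ["crm", "tickets", "docs", "analytics", "billing"]

# Sequential: total time is roughly the sum of all task durations.
start = time.perf_counter()
sequential = [fetch_source(s) for s in sources]
sequential_secs = time.perf_counter() - start

# Parallel: independent tasks overlap, so total time approaches the
# duration of the single slowest task rather than the sum of all of them.
start = time.perf_counter()
with ThreadPoolExecutor(max_workers=len(sources)) as pool:
    parallel = list(pool.map(fetch_source, sources))
parallel_secs = time.perf_counter() - start

assert sequential == parallel  # same results either way; only timing differs
print(f"sequential: {sequential_secs:.2f}s, parallel: {parallel_secs:.2f}s")
```

The results are identical in both runs; only the wall-clock time changes. That's the whole bargain: parallel mode buys speed only when the tasks genuinely don't need each other's output.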
Intelligent mode is where Moltin gets interesting. The system analyzes your workflow's task dependencies and makes real-time decisions about which tasks can run in parallel and which need to wait. You don't have to map out the entire dependency tree yourself. The AI handles it. This matters most for complex workflows where some tasks depend on others but many don't. Intelligent mode balances speed with correctness, running parallel tasks when possible while respecting dependencies.
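Conceptually, a dependency-aware scheduler groups tasks into "waves": everything in a wave runs in parallel, and each wave waits on the one before it. Moltin's actual scheduler is internal, but the idea can be sketched with Python's standard-library topological sorter (the workflow and task names are hypothetical):

```python
from graphlib import TopologicalSorter

# Hypothetical workflow: the three fetches are independent, analyze needs
# the CRM data, summarize needs the analysis, and report needs everything.
dependencies = {
    "fetch_crm": set(), "fetch_docs": set(), "fetch_tickets": set(),
    "analyze":   {"fetch_crm"},
    "summarize": {"analyze"},
    "report":    {"summarize", "fetch_docs", "fetch_tickets"},
}

def plan_waves(deps):
    """Group tasks into waves: every task in a wave has all of its
    dependencies satisfied, so the whole wave can run in parallel."""
    sorter = TopologicalSorter(deps)
    sorter.prepare()
    waves = []
    while sorter.is_active():
        ready = sorted(sorter.get_ready())  # all tasks whose deps are done
        waves.append(ready)
        for task in ready:
            sorter.done(task)
    return waves

print(plan_waves(dependencies))
# Wave 1: all three fetches in parallel; then analyze; then summarize;
# then report, once every upstream task has finished.
```

This is what "Intelligent" buys you: the fetches run simultaneously, the dependent chain still runs in order, and you never had to declare the waves yourself.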
How to Use Execution Mode
Open your workflow settings by clicking the gear icon in the bottom right corner of your agent’s editor canvas. You'll see Execution Mode near the bottom of the Skill Settings modal. Click the dropdown menu. Three options appear: Sequential, Parallel, and Intelligent.
Choose Sequential if your workflow has tasks that must complete in a specific order. This is common in data processing pipelines where each step transforms the output of the previous one. Pick Parallel if you're confident your tasks don't depend on each other's results. This works well for workflows that gather information from multiple sources independently. Select Intelligent when you're not sure or when your workflow has a mix of dependent and independent tasks. Let Moltin's AI optimize the execution order for you.
After selecting your mode, update the settings and your change takes effect immediately. Your next workflow run will use the new execution mode.
Engine Type: Langchain or DSPy
Engine Type determines which AI framework powers your workflow. Langchain and DSPy take fundamentally different approaches to how language models process tasks.
Benefits
Langchain is the more mature option. It's a modular orchestration framework that chains together language model calls, data retrieval, and tool use. Its interoperable components and large catalog of third-party integrations make it a solid fit for workflows that need to connect multiple data sources, APIs, or external tools.
If your workflow pulls data from a CRM, queries a knowledge base, and then generates a report, Langchain handles those integrations smoothly. It's also got extensive documentation and a large community, so troubleshooting is easier.
DSPy takes a different route. Instead of hand-crafting prompts for each task, you write structured code rather than brittle prompt strings, and DSPy's built-in algorithms compile that code into effective prompts.
This means less time tweaking prompt language and more time focusing on what you want the AI to accomplish. DSPy automatically optimizes prompts based on your workflow's performance metrics. If a task isn't getting the results you want, DSPy's built-in optimizers can adjust the prompts without you touching them.
Here's where the practical difference shows up. Use Langchain when your workflow needs heavy data integration or when you're connecting to many external services.
For example, a customer support workflow that needs to query Zendesk tickets, search internal documentation, and update a Salesforce record would benefit from Langchain's extensive integration library.
Choose DSPy when your workflow involves multiple LLM calls that need to work together reliably. A research assistant workflow that extracts information from documents, synthesizes findings, and generates structured reports would perform better with DSPy's automatic prompt optimization.
DSPy also carries lower framework overhead in benchmarks: roughly 3.5 milliseconds per call, compared to roughly 10 milliseconds for Langchain.
How to Use Engine Type
In the same Skill Settings modal, locate the Engine Type option below Execution Mode. Click the dropdown. You'll see two choices: Langchain and DSPy.
Select Langchain if your workflow relies on integrating multiple external data sources, APIs, or tools. It's the safer bet for production workflows that need broad compatibility.
Pick DSPy if your workflow makes multiple language model calls and you want those calls optimized automatically. It's especially useful for workflows still in development where you're iterating on task definitions.
The engine type change applies to the next workflow run. You can switch engines anytime to compare performance. Some teams run A/B tests with the same workflow on different engines to see which performs better for their specific use case.
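In Moltin the switch is a dropdown, but the shape of such an A/B test is worth seeing. A minimal harness times the same batch of tasks under two interchangeable backends; the two engine functions here are hypothetical stubs with simulated per-call overhead, not the real frameworks:

```python
import time

# Hypothetical stand-ins for two engines with different per-call overhead.
def engine_a(task: str) -> str:
    time.sleep(0.010)  # simulated framework overhead per call
    return f"A:{task}"

def engine_b(task: str) -> str:
    time.sleep(0.003)
    return f"B:{task}"

def benchmark(engine, tasks):
    """Run every task through the given engine and measure total wall time."""
    start = time.perf_counter()
    results = [engine(t) for t in tasks]
    return results, time.perf_counter() - start

tasks = [f"task-{i}" for i in range(20)]
_, secs_a = benchmark(engine_a, tasks)
_, secs_b = benchmark(engine_b, tasks)
print(f"engine A: {secs_a:.3f}s, engine B: {secs_b:.3f}s")
```

The point isn't the stub numbers; it's that per-call overhead compounds across a workflow, so measure with your real task mix before committing to an engine.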
Continue on Error: Keep Going or Stop Everything
Continue on Error decides what happens when a task fails. Should the workflow halt completely, or should it mark the failed task and move on to the next one?
Benefits
Without Continue on Error enabled, a single failed task stops your entire workflow. That's fine for workflows where every task is critical. If you're processing financial transactions and one step fails, you probably want everything to stop so you can investigate. But many workflows don't need that level of rigidity.
With Continue on Error turned on, failed tasks get marked as failed and the workflow proceeds. The system logs the error details so you can review what went wrong later. This is crucial for workflows that process multiple independent items.
Imagine a workflow that analyzes a hundred customer reviews. If review number 32 causes an error (maybe it's malformed data or an unexpected format), you don't want the other 99 reviews sitting unprocessed. Continue on Error lets the workflow finish the rest and flags the problematic review for manual inspection.
This setting particularly matters for long-running workflows with many tasks. A workflow with fifty tasks that takes an hour to complete becomes a debugging nightmare if you have to restart it from scratch every time a single task hiccups. Continue on Error turns that nightmare into a manageable process: run the workflow, review which tasks failed, fix those specific issues, and rerun just the failed tasks.
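Both behaviors can be sketched in a few lines of Python. The per-item task and sample data below are hypothetical stand-ins; the `continue_on_error` flag mirrors the toggle's semantics:

```python
def analyze_review(text: str) -> dict:
    """Stand-in for a per-item task; blank input simulates malformed data."""
    if not text.strip():
        raise ValueError("malformed review")
    return {"review": text, "length": len(text)}

def run_batch(items, continue_on_error=True):
    """Process every item. On failure, either record the error and move on
    (toggle on) or re-raise and halt the whole batch (toggle off)."""
    results, failures = [], []
    for index, item in enumerate(items):
        try:
            results.append(analyze_review(item))
        except Exception as exc:
            if not continue_on_error:
                raise  # halt: one failure stops everything
            failures.append({"index": index, "error": str(exc)})
    return results, failures

reviews = ["great product", "   ", "works as advertised", "fast shipping"]
results, failures = run_batch(reviews)
print(f"processed {len(results)}, failed {len(failures)}")  # processed 3, failed 1
```

With the flag on, the malformed item lands in a failure log with its index and error; with it off, the same input raises immediately and nothing after it runs. That failure log is exactly what makes the "fix and rerun just the failed tasks" pattern possible.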
How to Use Continue on Error
Find the Continue on Error toggle switch in your Skill Settings panel, beneath the Engine Type dropdown. It's a simple on/off switch.
Turn it on when your workflow processes multiple independent items or when individual task failures shouldn’t block the entire workflow. This is common in batch processing scenarios, data enrichment workflows, or content generation tasks. Leave it off when every task is critical and a failure in one task means the entire workflow’s output is unusable. Financial calculations, compliance checks, and sequential data transformations usually fall into this category.
When Continue on Error is enabled and a task fails, check your workflow execution logs. Failed tasks show up clearly with error details. You can identify patterns (maybe a specific data format always fails), fix the underlying issue, and rerun the workflow or manually reprocess the failed tasks.
Putting It All Together
These three settings work together. A workflow using Intelligent execution mode with DSPy and Continue on Error enabled can adapt its execution order, optimize its prompts automatically, and keep running even when individual tasks fail. That's a resilient, self-optimizing system. Conversely, a workflow using Sequential execution with Langchain and Continue on Error disabled prioritizes strict control and consistency over speed and fault tolerance.
Your choice depends on what you're building. High-stakes workflows that can't tolerate any errors need the conservative approach. Experimental workflows that process large volumes of varied data benefit from the more adaptive settings. The beauty of Moltin's Skill Settings is that you can change these options anytime. Try different combinations. Measure what works. Optimize based on real performance, not assumptions.