Get your important interfaces running smoothly and efficiently with my top tips for super fast Epicor Service Connect workflows.
Here are a few handy tips for improving the speed and performance of your Epicor Service Connect workflows. For workflows that only process small volumes of data and are not used very often, speed and efficiency are not too important. But if you have an interface that is heavily used and deals with large volumes of transactions, it can become a real problem if it doesn’t process quickly. The best workflows are fast and efficient, with clever error handling and no unnecessary steps.
Step 1 – Try to avoid cycling if possible.
There are times when you need to iteratively call a sub-workflow. My advice would be to consider a couple of other options before taking this approach. Every Epicor business object has a method available called “UpdateExt”. This method allows you to push in multiple dirty rows (explained below), and you can even set a flag called “continueprocessingonerror” so that, even if one of the rows you pass in produces an error, processing continues for the other rows in the dataset (any errors that occur are passed back in the response).
As an example of what is meant by multiple dirty rows, you could utilise the “Part.UpdateExt” method to create many parts in a single call. Simply pass “Part.Company”, “Part.PartNum”, “Part.PartDescription” and “Part.RowMod” (set to “U”) into the “Part” section of the dataset to create a Part record in Epicor. The real power comes from realising that you can pass in multiple “Part” sections in order to create many parts at once. So if you are importing a CSV spreadsheet that contains multiple parts, after mapping in the fields just described, you can link the “row” element from your spreadsheet schema to the “Part” element of the Epicor schema. This creates all your Part records in a single method call. Add to this a batching approach (described later) and you can throttle your updates to gain maximum performance.
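To make that concrete, here is a minimal sketch of the kind of data you might map into “Part.UpdateExt”, with two dirty “Part” rows in a single call. The element names follow the fields mentioned above, but the exact schema, company code and part numbers are illustrative and will depend on your Epicor version and data:

```xml
<!-- Minimal sketch of a Part.UpdateExt request dataset with two dirty rows.
     Element names mirror the fields named above; the company code and part
     numbers are made up for illustration. -->
<PartDataSet>
  <Part>
    <Company>EPIC01</Company>
    <PartNum>WIDGET-001</PartNum>
    <PartDescription>Standard widget</PartDescription>
    <RowMod>U</RowMod>
  </Part>
  <Part>
    <Company>EPIC01</Company>
    <PartNum>WIDGET-002</PartNum>
    <PartDescription>Large widget</PartDescription>
    <RowMod>U</RowMod>
  </Part>
</PartDataSet>
```

In practice you would build this in the Service Connect mapping tool by linking your source “row” element to the repeating “Part” element rather than hand-crafting the XML, but the shape of the data being passed in is the same.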
Step 2 – Get rid of unnecessary method calls.
Many of the workflows I come across contain method calls that are simply not needed. Of course the Epicor business object method calls are the meat in your interface sandwich, but including calls that are not required can really slow your workflow down (and this is compounded if the unnecessary calls sit in a sub-workflow that is called iteratively).
A lot of people use Service Connect to mimic the manual process of carrying out a transaction in the application (perhaps using the trace tool available in Epicor). However, following this too closely can mean that you include calls that are not required. My advice would be to utilise the power of the “UpdateExt” method and work backwards from the point of supplying that method with the data it needs. Getting that data may mean additional calls (such as the “GetList” method), but it is a good starting point for producing an efficient workflow.
Step 3 – Use database calls to get the data you need.
A great way to reduce the number of method calls in your workflow is to use a “DB Operation”. These are used to make ODBC calls and can completely remove the need for method calls such as “GetList” or “GetByID”. They are extremely quick and you can retrieve just the fields that you need. If you’re smart about it you can set up a complicated SQL statement as a stored procedure and pass in parameters when you execute it.
Step 4 – Batch your data.
If your interface deals with a large volume of transactions, especially when they relate to a single parent record, then batching your data can really improve performance.
Let’s say, for example, that your interface imports sales orders where a single sales order can have hundreds of lines. Rather than pass those hundreds of lines into the “SalesOrder.UpdateExt” method in one go as suggested earlier, you should batch them into blocks of 50, either via an XSLT transformation (the mapping tool) or, if you’re importing a CSV file, by utilising the “GroupByFields” or “GroupByCount” options on the Channel Configuration settings (see the detailed help in Service Connect on Channel Configuration for a CSV file).
This is one case where calling a sub-workflow and cycling over your batches of 50, adding the lines with the “SalesOrder.UpdateExt” method on each pass, is the way to go.
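If you take the XSLT route, the grouping itself only needs a few lines. The sketch below assumes a flat source document of “row” elements (the names “rows”, “row”, “Batches” and “Batch” are illustrative, not from any particular schema) and wraps every 50 rows in a “Batch” element that a sub-workflow can then cycle over:

```xml
<!-- Illustrative XSLT 1.0 transform: wraps every 50 "row" elements in a
     "Batch" element. Element names are assumptions; adjust to your schema. -->
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/rows">
    <Batches>
      <!-- every 50th row (1st, 51st, 101st, ...) starts a new batch -->
      <xsl:for-each select="row[position() mod 50 = 1]">
        <Batch>
          <!-- copy this row plus the 49 rows that follow it -->
          <xsl:copy-of select=". | following-sibling::row[position() &lt; 50]"/>
        </Batch>
      </xsl:for-each>
    </Batches>
  </xsl:template>
</xsl:stylesheet>
```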
Step 5 – Be clever with error handling.
Error handling is usually one of the last things to be considered when building an interface, but I consider it one of the most important and, depending on how in-depth the requirements turn out to be, it can have quite a large impact on the design of a workflow.
If the number of expected transactions is low and error handling is of minor importance then a Poster element in your workflow, which sends an email detailing the error, would most likely suffice.
For interfaces that process lots of transactions and require quick and efficient error handling, I tend to favour writing any errors to a UD table. This gives the added option of being able to display the details in a dashboard within Epicor. In a recent interface that I developed for a client, I added an updatable dashboard that allowed the user to review the errors in the UD table and tick a line in order to re-process it through the workflow.
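As a rough illustration, an error row written to a UD table (UD01 is used in this sketch) might look something like the following. Which Key, Character and CheckBox columns you use, and what you store in them, is entirely up to you, so treat the field choices and values here as assumptions:

```xml
<!-- Hypothetical error row for a UD table (UD01 used as an example).
     The columns chosen and the values shown are arbitrary illustrations. -->
<UD01DataSet>
  <UD01>
    <Key1>SalesOrderImport</Key1>                          <!-- interface name -->
    <Key2>SO-REF-000123</Key2>                             <!-- source document reference -->
    <Key3>2024-01-31T10:15:00</Key3>                       <!-- timestamp -->
    <Character01>Part WIDGET-001 not found</Character01>   <!-- error message -->
    <CheckBox01>false</CheckBox01>                         <!-- 're-process' tick for the dashboard -->
    <RowMod>A</RowMod>
  </UD01>
</UD01DataSet>
```

The updatable dashboard then only needs to surface these rows and set the tick box when the user wants a line re-processed through the workflow.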
I hope these tips help when you are considering an approach for your next Service Connect development. A workflow that only contains the necessary method calls, using data pulled directly from the database via ODBC calls, combined with a batched approach and clever error handling, can mean the difference between an interface that just ‘works’ and an interface that is super quick!