Hello again fellow Scribers! Brendan Peterson here, Team Lead for the Scribe Services department. In my day-to-day position here at Scribe, I am lucky enough to help our partners and customers achieve success (insert "Awwww" from the audience here). In all seriousness, I love what I do here, and I love helping our customers achieve great things with our product. That said, I'm sure lots of you would rather save yourselves a phone call to our team and have better ways of finding solutions on your own. Hopefully this will help: I'm going to try to purge my brain of the more common things I find people get stuck on, in this (and hopefully more) blog posts. We had a great entry recently from Fuzz about CRM attachments, and I think adding more technical information here will be beneficial to the technical folks out there. **WARNING -- This blog is sure to be incredibly nerdy and technical, so abandon all hope ye who fear techy mumbo jumbo**
So, let's start with the Source side. Obviously we know we can pull data with custom queries here, but how can we easily eliminate rows we've already processed? WELL, update source (cue groan from the audience) is a great, easy way. But it is not without its fair share of pain points (hence the groan I expected earlier).

The most common issue I see is that update source works with a single data object, but stops working once a query is used. The most important thing to keep in mind is that this feature needs a true primary key set up on the base table. If the table has a PK and it's still not working, the second most common issue is the order of the query. This can be twofold, but the takeaway is that the update source feature ONLY works with the very first table referenced in the query. So if I am joining two tables, only the table in that first FROM clause is going to be valid to update. The problem is that we'll allow you to set up an update source on fields that are NOT on that table, because we give you a list of all fields from your query.

So let's take a look at what I did. Using GP as my source, I want to update the SYNCSTATUS1 field from the Shadow table, BUT I have my RM00101 table first in my source query. It's going to let me think I'm setting it up to update that field; it'll run without error, but lo and behold, no updates to my source! I get around this by simply changing my JOIN to use the Shadow table primarily and joining the RM00101 table to it.
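To make the ordering issue concrete, here is a sketch of the two query shapes. The shadow table name (SHADOW_SYNC) and the join key are assumptions for illustration; RM00101 is GP's customer master table.

```sql
-- "Bad" shape: RM00101 leads the FROM clause, so update source can only
-- write back to RM00101 -- pointing it at SYNCSTATUS1 on the shadow
-- table runs without error but silently updates nothing.
SELECT r.CUSTNMBR, r.CUSTNAME, s.SYNCSTATUS1
FROM RM00101 r
JOIN SHADOW_SYNC s ON s.CUSTNMBR = r.CUSTNMBR;

-- "Good" shape: the shadow table leads the FROM clause, so its
-- SYNCSTATUS1 field is a valid update source target.
SELECT r.CUSTNMBR, r.CUSTNAME, s.SYNCSTATUS1
FROM SHADOW_SYNC s
JOIN RM00101 r ON r.CUSTNMBR = s.CUSTNMBR;
```

The result sets are identical; only the table that update source is allowed to touch changes.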
So here is my "Bad" setup, pulling from RM00101 and trying to update the Shadow table's field. You can also see on the SQL side how many rows of data I have. When I ran this, no updates occurred and I ended up with the same 3,455 rows.
Now, here I've fixed the issue. You'll see that I changed the query to pull from the Shadow table first; after running the job, the query returns fewer source rows than it did before.
Now that we've talked a little about the source side, let's go into some issues we see within the DTS itself. My example is a GP to CRM integration; I've got all 3 machines set up on different servers, with different SQL instances too. I'm using the standard Scribe template, so I never have a direct connection between CRM and GP, always having XML as the source. I need to pull back some data that is stored in specific base tables on the SQL side that the adapter itself does not expose. How can I do this without a direct connection to the data source in question? WELL, the answer is gracefully simple (most of the time).

I've set up an example here of moving accounts from GP to CRM, but I only want to move accounts that have placed an order. In my case, I want to check for this in the DTS as opposed to customizing my trigger in GP itself. What I have done is add my GP server (QAV-TRAIN-GP) as a linked server on my SQL server using the sp_addlinkedserver procedure. Now I can create a view on my SCRIBEINTERNAL database that points at my GP database, thereby giving me a way to query data from a base table of a system I am not connected to in the DTS!

Here is basically how to set it up. Our first shot is just of the add-server command and the view I created, pulling from the SOP10100 table in GP. I've created this view under the SCRIBE schema so that our SCRIBE user can access it.
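Here's a rough sketch of that SQL setup. The server name QAV-TRAIN-GP, the SOP10100 table, and the SCRIBE schema come straight from the example above; the GP company database name (TWO), the column list, and the provider choice are assumptions for illustration.

```sql
-- Run on the Scribe SQL instance: register the GP server as a linked server.
EXEC sp_addlinkedserver
    @server     = N'QAV-TRAIN-GP',
    @srvproduct = N'',
    @provider   = N'SQLNCLI',
    @datasrc    = N'QAV-TRAIN-GP';
GO

-- Create a view in SCRIBEINTERNAL, under the SCRIBE schema, that reaches
-- through the linked server into the GP sales header table.
USE SCRIBEINTERNAL;
GO
CREATE VIEW SCRIBE.SOP10100
AS
SELECT SOPNUMBE, SOPTYPE, CUSTNMBR
FROM [QAV-TRAIN-GP].TWO.dbo.SOP10100;
GO
```

Four-part naming (server.database.schema.table) is what lets the local view select across the linked server.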
Now, on the GP side, I need to add my SCRIBE user, as this is how we authenticate "under the covers" with our internal database. If you are using Windows Auth, you must ensure the account you have set up has access to the corresponding database; otherwise this will not work in the Workbench.
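Granting that access on the GP side might look something like the following sketch. Read access to the queried table is all the view needs; the GP database name TWO is again an assumption, and your shop's role choices may differ.

```sql
-- On the GP server: map the SCRIBE login into the GP company database
-- with read access, so the linked-server view can select from SOP10100.
USE TWO;
GO
CREATE USER SCRIBE FOR LOGIN SCRIBE;
ALTER ROLE db_datareader ADD MEMBER SCRIBE;
GO
```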
Now that we have our SQL side set up properly, we can see how this is used in the Workbench. I have XML as my source and CRM as my target. In my data link I am using the SQLQUERY function to pull from the view I created on the SCRIBEINTERNAL database. This formula pulls back a SOPNUMBE from the SOP10100 view (which in turn reads from the SOP10100 table in the GP database) where the CUSTNMBR I provide matches one in the table. I can then combine this with some ISERROR() functions so that if no results are found, the step is skipped.
SQLQUERY( S12, "I", "select SOPNUMBE from SCRIBE.SOP10100 where CUSTNMBR = '%s'" )
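Before wiring that formula into the DTS, it's worth sanity-checking the view directly in Management Studio on the Scribe SQL instance. The customer number below is just a placeholder; substitute one you know exists in GP.

```sql
-- Confirm the view resolves through the linked server and returns
-- order numbers for a known GP customer.
SELECT SOPNUMBE
FROM SCRIBEINTERNAL.SCRIBE.SOP10100
WHERE CUSTNMBR = 'SOMECUSTOMER01';
```

If this errors or comes back empty for a customer with orders, fix the linked server or permissions first; no amount of formula work in the Workbench will save you.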
This can be used for pretty much any SQL-backed system with ease. It's quick, easy, and normally quite effective! It's not perfect, but it should allow you to get a little more from the product with a minimal amount of work.
So, that's it for now, I will absolutely be posting more of these as they come up for your reading pleasure, so stay tuned! If you have any special requests please leave them in the comments section and I'll be sure to work on getting some more blog posts out there for you! Thanks for taking the time to read this, and I hope you found it worthwhile!