
In yesterday’s post, Mike pointed out that when our sales team responds quickly to a lead, we have a higher probability of connecting with the prospect. This is something we had to take seriously. To deliver a solution, we partnered with Marketing Operations and the Account Development teams.

The first step was to determine the systems we needed to interface with. Our leads go directly into Marketo, but that wasn’t the right place to gather them. Marketo pushes leads to our instance of Salesforce (or SFDC) with a reasonable SLA around delivery. Part of this sync from Marketo to SFDC involves filtering the leads and assigning them to an appropriate Account Development team member. It didn’t make sense to rebuild that logic in Anypoint; doing so would also have taken away Marketing’s ability to change their criteria themselves. Instead, we interfaced with SFDC. We worked with Marketing Ops to refine the Salesforce Object Query Language (SOQL) search to narrow down high-quality leads acquired in the last 5 minutes and then assign them to a person.
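To give a feel for that search, here is a minimal Python sketch of building a SOQL query for leads created in the last few minutes. The specific fields and the `Status = 'New'` criterion are illustrative assumptions, not our actual Marketing Ops criteria:

```python
from datetime import datetime, timedelta, timezone

def recent_leads_query(window_minutes=5):
    """Build a SOQL query for leads created in the last N minutes.

    Field names and the status filter are illustrative; the real
    "high quality" criteria live with Marketing Ops.
    """
    cutoff = datetime.now(timezone.utc) - timedelta(minutes=window_minutes)
    # SOQL datetime literals are unquoted ISO-8601 values.
    cutoff_str = cutoff.strftime("%Y-%m-%dT%H:%M:%SZ")
    return (
        "SELECT Id, FirstName, LastName, Email, OwnerId "
        "FROM Lead "
        f"WHERE CreatedDate >= {cutoff_str} "
        "AND Status = 'New'"
    )
```

`OwnerId` is what lets us pick up the person SFDC assigned to each lead.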


Cool. At this point we have the leads we want, so how the heck do we deliver them? SFDC associates each lead with the unique user ID of the person it’s assigned to. We did a lookup of the user lists for SFDC and Slack and matched employees between the two systems by email address. Then we built an object store inside of Mule using the SFDC unique user ID as the key to an object containing name, email, Slack unique user ID, and Slack time zone. This means when a lead falls within the scope of the integration, it quickly resolves the assigned user ID to all the information we need. “Quickly” is kind of key here. We have a self-imposed 5-minute SLA from the time of lead generation. The push from Marketo to SFDC under the right circumstances takes a couple of minutes. Everything we do has to be as close to instant as we can get it. This means minimizing the API calls we make, or, if we have to make them, caching the results that don’t change often, such as user lists.

Now that we have the lead we need to deliver, we will need to maintain state. We used a messaging queue to have a persistent record of the messages we need to send. We push all leads we receive into the queue and stream from the queue into our Slack messaging logic.
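The enqueue-and-stream pattern looks roughly like this. We use an in-memory queue here purely as a stand-in; the real integration uses an external persistent queue so messages survive restarts:

```python
import json
import queue

# Stand-in for an external persistent queue; in-memory only for illustration.
lead_queue = queue.Queue()

def enqueue_lead(lead):
    """Push a lead onto the queue, serialized as it would be on the wire."""
    lead_queue.put(json.dumps(lead))

def drain(handler):
    """Stream queued leads into the messaging logic, one at a time."""
    while not lead_queue.empty():
        handler(json.loads(lead_queue.get()))
```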

We need to build filtering into our processing before it’s ready to go into production. We’re using an external persistent queue, so we have to anticipate times when the two systems are unable to contact each other. So we filter the leads based on whether they were created in the last 5 minutes. Then, and this is really cool, we use the Slack time zone associated with each employee to translate the current time into their local time. Once we have that value, we can filter based on working hours; this means our integration will only ping the AD team member when they’re working. Filtering helps eliminate noise and makes sure we’re only delivering actionable leads. Think of these messages like emails. If you get two thousand emails a day and only two hundred are important, it’s going to be easy to miss things. However, if you only get two hundred a day and one hundred and eighty are important, you’re just more likely to read your email every day.
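Both filters can be expressed as one predicate. This sketch uses a UTC-offset-in-seconds for the time zone (Slack exposes a `tz_offset` field on users); the 9-to-5 working-hours bounds are an assumption for illustration:

```python
from datetime import datetime, timedelta, timezone

def is_actionable(lead_created_at, slack_tz_offset_seconds,
                  now=None, window_minutes=5,
                  workday_start=9, workday_end=17):
    """Return True only for fresh leads assigned to someone on the clock.

    `slack_tz_offset_seconds` mirrors Slack's per-user `tz_offset`;
    the workday bounds are illustrative assumptions.
    """
    now = now or datetime.now(timezone.utc)
    # Drop stale leads that piled up while the systems couldn't talk.
    if now - lead_created_at > timedelta(minutes=window_minutes):
        return False
    # Translate "now" into the assignee's local time via their offset.
    local = now + timedelta(seconds=slack_tz_offset_seconds)
    return workday_start <= local.hour < workday_end
```

Anything that fails the predicate is simply not delivered, which is what keeps the signal-to-noise ratio of these messages high.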

The final step is to handle message delivery and error handling for any potential failures in that process. We interface directly with the Slack API to send messages as a bot user to each individual AD team member. This means they only get the leads targeted to them and aren’t spammed in a channel with everyone else’s leads. We utilize a dead letter queue to manage delivery failures and escalate potential issues for investigation.
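The delivery step can be sketched like this. The Slack call itself (`chat.postMessage` with a user ID as the channel, which opens a DM from the bot) is injected as a callable so the failure path is easy to see; the dead letter queue here is a plain list standing in for the real one:

```python
def deliver(lead, slack_user_id, post, dead_letter):
    """Send a lead as a bot DM; route any failure to a dead letter queue.

    `post` is any callable that performs the Slack `chat.postMessage`
    request and returns the parsed JSON response (injected here so the
    sketch stays self-contained); `dead_letter` stands in for our DLQ.
    """
    message = {
        "channel": slack_user_id,  # a user ID here opens a DM with the bot
        "text": f"New lead: {lead['name']} <{lead['email']}>",
    }
    try:
        response = post(message)
        if not response.get("ok"):
            raise RuntimeError(response.get("error", "unknown_error"))
    except Exception as exc:
        # Park the failure for escalation instead of dropping the lead.
        dead_letter.append({"message": message, "error": str(exc)})
```

Anything that lands in the dead letter queue gets investigated rather than silently lost, which is the whole point of that last safety net.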

All in all, this was one of the more fun projects I have worked on to date. Initially we thought it would be impossible to filter based on local time zone within the 5-minute window, but it ended up being possible through the Anypoint Platform’s capabilities. This integration took about a day to develop the framework and logic, but any subsequent similar integrations would take minutes. All you would have to develop is the triggering; after that, everything is completely reusable. It really can’t be overstated how powerful this is. Let’s say SalesOps wants to be notified when a deal is closed. All you need to do is work with the business unit to determine the logic for that trigger and it’s literally done. This allows you to move so much faster than before. It’s really exciting on a personal level to get to work with a tool that is so empowering. Take a look and see what you think.