Why stress is actually good for you – and four ways to manage it

Reading Time: 7 minutes

Everybody is constantly telling me that stress is bad for my health. But, quite frankly, I’m not so sure. Stress researcher Daniela Kaufer at the University of California, Berkeley, has found that short-term bursts of stress can boost brainpower, productivity and alertness. In my job as an account executive at MuleSoft, I’ve definitely found that to be true. I never know what’s going to be on my plate that day. Whatever I plan is just the appetizer; the main course is always yet to come. There’s always a new problem, new campaigns, new rejections — you just never know what’s going to happen, whether it’s positive or negative. I thrive on spontaneity.

Stress makes me think faster. It makes me think outside the box. I like feeling a little antsy. I like feeling like I’m on the verge of something – when it’s completed it’s more of a rush.

That said, I’ve learned there are some ways to make stress work for me, to turn it to my advantage. You can’t just feel stressed and anxious all the time; there are certain mental tricks that make stress productive, rather than destructive.

  1. Build an early-morning wakeup call into your schedule. The price we pay for living in sunny California is that by the time we wake up, the day is well under way on the East Coast and practically over in Europe. So I start my workday at 5:15 AM. The tradeoff is that after I get my early-morning calls out of the way, I can exercise and get my mind working and active. It’s been a huge advantage.
  2. Don’t be afraid of distractions. The conventional wisdom is that the constant stream of notifications is a negative distraction, but I find if I manage them well, notifications actually increase my mental focus. I like being distracted and being aware of my surroundings. I want to know about what might be an urgent incoming situation right away. The key to dealing with constant interruptions is to develop a strong sense of prioritization and to be a c’est la vie kind of person.
  3. Stay level-headed. Plans are great but I find my day is more about focusing on what’s next and the task at hand. To manage it all, I find I have to be reactive, calm, and confident in my ability to think through a problem on the fly. I do often think about stressful situations and replay them in my head. When things don’t play out as I expect them to, I do get frustrated, but I find that the review helps me learn how to deal with the next situation that comes my way.
  4. Take time for yourself. I play Madden to unwind. I also play basketball, which creates more scenarios where I have to think on the fly. The time I spend away from work either thinking about other things or just relaxing prepares me for the next scenario. Rest provides the flip side to the constructive drive of stress.

What’s amazing to me is that living in the moment and allowing a certain amount of stress into my day actually makes me more productive and helps me accomplish more without my even realizing it. I’ve helped close millions in deals and have gotten to speak with C-level clients from Fortune 100 companies, but because I’m working on the fly and thinking about individual tasks, I actually accomplish more than if I were constantly thinking about the big picture.

I’ve been lucky to accomplish a great deal of personal growth at MuleSoft, and I’m excited to see where my career goes next.


Encrypt Specific XML Tags With The Power Of DataWeave

Reading Time: 5 minutes

DataWeave is a powerful language, and the possibilities of what you can do with it are infinite.

In this blog post, I am going to show you how to select the data inside a specific set of XML tags.

For example, in this case we want to encrypt data inside sensitive XML tags such as an SSN, a credit card number, etc.

We define an array named keyToEncrypt containing the XML tags whose contents will be encrypted (we are encrypting just the contents, not the whole element including the tag).
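
In DataWeave 1.0 (Mule 3) that could be declared like this; the tag names below are only examples:

    %var keyToEncrypt = ["ssn", "creditCardNumber"]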

Then we define a function to send the matching contents to a separate flow, where the encryption actually happens.
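
In the simple masked version used below, this function just returns asterisks instead of calling out to a flow; a minimal sketch:

    %function encrypt(val) "*****"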

After that, we define another function to determine whether the tag we read is part of our keyToEncrypt array; if so, the sizeOf call returns a result greater than 0. Because this is a substring match, it ignores namespace prefixes such as the one in <web:superTag>; the function still matches even if we only search for “superTag”.
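
A possible implementation, assuming the key is compared as a string so that a prefix such as web: is ignored:

    %function needsEncryption(key) (sizeOf (keyToEncrypt filter ((key as :string) contains $))) > 0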

Here comes the star of the show: the function maskSensitiveData goes through the tags at every level trying to match an element of our array; when it matches one, it sends the content to the encrypt function and re-assembles the XML tag content with that data replaced by “*****”, as we specified.
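
A recursive sketch of that function, which walks the object DataWeave builds from the XML and replaces the values of matching keys (using the names defined above):

    %function maskSensitiveData(data)
      (data mapObject {
        ($$): encrypt($) when needsEncryption($$)
              otherwise (maskSensitiveData($) when $ is :object otherwise $)
      }) when data is :object
      otherwise data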

Instead of returning asterisks, you can easily call a (private) flow that encrypts the data passed in with your desired encryption method, using the lookup function.
It would look something like this:

%function encrypt(val) lookup("encryptFlow", val)

We create the encrypt function, which passes the val value to the encryptFlow flow.

Here’s a link that helps you create an encryption flow using PGP: PGP Encryption Using Anypoint Studio

The full DataWeave code will look something like this:
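
(The original post embeds the full script as an image; assembling the pieces above, a masked-output sketch could look like the following. The output directive and the tag names are assumptions; use application/xml if you want XML back instead of JSON.)

    %dw 1.0
    %output application/json
    %var keyToEncrypt = ["ssn", "creditCardNumber"]
    %function encrypt(val) "*****"
    %function needsEncryption(key) (sizeOf (keyToEncrypt filter ((key as :string) contains $))) > 0
    %function maskSensitiveData(data)
      (data mapObject {
        ($$): encrypt($) when needsEncryption($$)
              otherwise (maskSensitiveData($) when $ is :object otherwise $)
      }) when data is :object
      otherwise data
    ---
    maskSensitiveData(payload)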

For the ones interested in calling the encrypting flow with DataWeave, here is the DataWeave code with the call:
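
The only change from the masked sketch above is the encrypt function, which now delegates to the flow:

    %function encrypt(val) lookup("encryptFlow", val)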

This way, we re-assemble the payload, but with the contents of the specified tags replaced by encrypted/masked strings.

IMPORTANT NOTE: Always make sure that the MIME type the transform-message component receives is set correctly, in this case application/xml.
You can simply set it earlier, when you create the message that will be the input of that transform-message component.
Or you can force it in the configuration XML of the DataWeave input-payload:
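
A sketch of what that could look like in the Mule 3 configuration XML (the doc:name value and the abbreviated script body are placeholders):

    <dw:transform-message doc:name="Mask sensitive tags">
        <dw:input-payload mimeType="application/xml"/>
        <dw:set-payload><![CDATA[ ...the DataWeave script shown above... ]]></dw:set-payload>
    </dw:transform-message>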


Let me show you an example.

Let’s take this XML formatted personal information example we might want to protect, and run it through our DataWeave code:

Sample Input Data
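
The original sample is shown as a screenshot; an illustrative equivalent, containing the two sensitive tags used above, might be:

    <?xml version="1.0" encoding="UTF-8"?>
    <person>
        <name>Jane Doe</name>
        <ssn>123-45-6789</ssn>
        <creditCardNumber>4111111111111111</creditCardNumber>
        <city>San Francisco</city>
    </person>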

The DataWeave code, as it appears in the Anypoint Studio transform editor, is shown as a screenshot in the original post.

The data output as a JSON Array, as we specified previously in our output variable in the DataWeave code:
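
The original post shows the output as a screenshot; for the illustrative input above, the masked result would look something like this (a single record is shown here for brevity):

    {
      "person": {
        "name": "Jane Doe",
        "ssn": "*****",
        "creditCardNumber": "*****",
        "city": "San Francisco"
      }
    }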


As you can see, the fields ssn and creditCardNumber, which were members of our keyToEncrypt array, are now masked with the specified “*****” mask.

This has been fully tested with:

Download the Full Working Example

Link to Download the Full Working Example: Download Now!

CloudHub CLI Tool: New version release

Reading Time: 5 minutes

If you have used the CloudHub command-line tools in the past, you might have run into many limitations around usability and the range of available commands.

Now there is a new version with improvements that any user looking for a simplified way to work with CloudHub will appreciate, without losing any feedback or visibility into their actions. The tool is especially useful for support teams and clients who need quick answers about entitlements, organizations, and environments.

Installing CloudHub-CLI Tool

npm install -g cloudhub-cli

Starting to use CloudHub-CLI:

cloudhub-cli [params] [command]

export CLOUDHUB_USERNAME=UserName (example)

export CLOUDHUB_PASSWORD=Password123 (example)

cloudhub-cli

You can put the export lines in your ~/.bash_profile (or similar), then just type cloudhub-cli and the tool will use those values (a consolidated example follows below).

Other optional settings:

 export CLOUDHUB_ORG=orgName

 export CLOUDHUB_ENV=envName

Note: if none of these variables are set, cloudhub-cli will run in interactive mode.
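
Putting it all together, a ~/.bash_profile setup could look like this (all values are placeholders):

    # CloudHub CLI credentials and defaults (placeholder values)
    export CLOUDHUB_USERNAME=UserName
    export CLOUDHUB_PASSWORD=Password123
    export CLOUDHUB_ORG=orgName
    export CLOUDHUB_ENV=envName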

An interesting option this tool provides is the ability to log in directly into a specific environment just by adding @environmentName after the username when logging in.
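
For example, assuming an environment named Production, and assuming the suffix can also be supplied through the username environment variable, it might look like:

    export CLOUDHUB_USERNAME=UserName@Production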

Once logged in, the tool displays a table listing all applications in the selected environment, showing the app name, zip file, status, latest update, vCores, and workers.


Through the help command, the tool lists the available commands together with a description of each one. If you need more information about usage syntax, typing the command name after help shows the required parameters and optional fields for that command.


A valuable feature of this tool is the information it returns about environments and business groups. Running the environment command lists the environments the organization has, their types, and your permissions to them, while business-group shows Owner, Type, Entitlements, and Environments.


Deploying an application

The only required parameters are the application name and the zip file. All other parameters are optional and will use the CloudHub defaults if not specified.
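
As a purely hypothetical illustration of the idea (the actual command syntax is documented in the tool's help; the name and file below are made up):

    cloudhub-cli deploy my-app my-app.zip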


A unique feature of the CloudHub CLI is the option to add properties directly from a .txt file using the --propertiesFile option. If you wish to add more properties, --property will add them without overwriting the ones already on the application. You can modify your app, add persistent queues, and enable or disable static IPs: almost all the actions available on the deployment and settings pages in CloudHub can be performed here. And when in need, hitting the Tab key will autocomplete the text and show the available options for each command.
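
The properties file itself is typically just key=value pairs, one per line; for example (names and values invented):

    # properties.txt
    db.host=prod-db.example.com
    db.username=app_user
    http.port=8081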


With this tool, users can modify and manage their apps, add persistent queues, enable and disable static IPs, and tail and download logs. It is really useful for integrating your workflow with CloudHub: you can use it to deploy or update a CloudHub application when your CI build succeeds, or to automatically move a CloudHub application to a different region. You could even use it to warn you if someone deploys an application into your environment that is using an old runtime. And sometimes, using your terminal is just more fun!

Introduction to Deploying Mule: From Workstation to Production

Reading Time: 3 minutes

We often say that if your business uses MuleSoft, you’ll reap rewards in increased speed, agility, and flexibility that will help your business succeed in today’s hyper-competitive environment. And our customers have proven that to be true. But how exactly do you go from siloed data and entangled point-to-point integrations to smooth, fast, API-led connectivity?

Fortunately, we have an expert to answer that very question. Eugene Berman, a senior enterprise architect at MuleSoft, spends his life teaching businesses of every size and in every industry how to set up and deploy Mule in the simplest, most efficient way possible.

Eugene will be hosting a webinar on Thursday, February 25, 2016 at 10 AM Pacific on how to deploy Mule and develop applications on Anypoint Platform. He’ll be talking about the skillset developers need to work with Mule, how to organize your team to make sure you’re using Anypoint Platform in the best way possible, and his recommendations for the right tools to get the most out of the product.

If you’re thinking about getting started with Anypoint Platform, if you’re a developer using or considering Mule, or if you’re evaluating the platform’s capabilities, this is a must-attend webinar.


Before you get started, feel free to give Anypoint Platform a test drive and see what you think. See you on Thursday, February 25!

MuleSoft at MuleSoft: HR system to Charles Schwab

Reading Time: 9 minutes

Recently we launched a service for MuleSoft employees (Muleys) to be able to manage their stock options online using Charles Schwab’s service. With the solution, there is a requirement to exchange data between our HR system and Schwab in both directions. As you can imagine, accuracy is key where anything financial is concerned and as such, reducing complexity reduces the risk of errors. With over six hundred employees now and forecasting fifty percent growth in headcount this year, we needed to have something automated and efficient. We decided to use our technology to build a solution to help us make this easy for our employees – especially since we needed to launch the solution all at once to our existing employees as well as make it scale over time.

For the first phase of the project, we needed to bring employee data from our HR system and transform it to fit the specification for the CSV that the target system expects. This involved sitting down with experts on both the MuleSoft and the Schwab teams to decide which fields made the most sense to map into the target system. The process of identifying the right data to feed actually took some effort from a human perspective, but thanks to Anypoint Studio, we were able to do the mapping and transformations while collaborating with the business process owners. This rapid feedback loop allowed us to move fast – testing the iterations with the target system until we were satisfied with the results. From there we were able to publish to CloudHub and get the integration running.

Anypoint Studio and CloudHub allowed us to rapidly prototype the solution with the business process owner in the room – and deliver it rapidly as a service running in the cloud. From here I’m going to hand off to Andrew McAllister to talk about the technical side.

MuleSoft at MuleSoft

At our yearly company Meetup we went live with our ESPP (Employee Stock Purchase Plan) using Charles Schwab’s Equiview product. This integration was accomplished using Anypoint Studio and the CloudHub platform as part of our continued initiative to use our own product to connect everything internally.

The initial scope of this project was to take both organization and payroll data, such as year-to-date totals and withholdings, and push this data daily into Charles Schwab Equiview. Charles Schwab is a more traditional system that consumes a tab-delimited text file via SFTP to insert updated data. Certainly our HR system has a built-in EIB (Enterprise Interface Builder) for integrations; however, a few key items made it clear that Anypoint was the best choice for the desired capabilities and time to market.

Key items

Our HR system has calculated fields; however, they are somewhat limited for more complex operations such as filtering results and returning specific object values when a key value meets certain criteria. Anypoint provided the flexibility needed to store criteria for certain business rules and formatting requirements.

Exception strategy and logging are essential to make sure data uploaded to Schwab is accurate.  The HR system has its own audit logs, but if an internal integration was used, we would still need to pull the audit logs via API on a scheduled basis.  Anypoint provided flexibility in this area.

The system must be both performant and reusable. Today we have one HR system, but someday we might need to add a new one. Any new system with a REST or SOAP API would only require modifying a couple of DataWeave transformations to keep the sample payload the same, and the integration would continue to work. Our HR system’s data model and built-in integration tools are proprietary.

Why Anypoint?

DataWeave was the exact tool we needed, as it gave us a very simple way of transforming and mapping data from several calls made to the HR system into one object. Each record, once transformed, was stored in a string buffer and then streamed as tab-delimited data via SFTP. DataWeave was particularly helpful for address data. Charles Schwab has specific requirements for its address fields. In the HR system, certain address fields have address data concatenated for mailing purposes. For international records, the field used for mailing purposes is not consistent with the one used for the USA or other countries. This presented a problem for the consistency needed in Charles Schwab address data. DataWeave allowed us to hide and/or transform the HR system’s address data easily so it was consistent and country-specific when passed to Charles Schwab.

The SFTP connector just worked.

An HTTP payload, whether application/json, plain text, or even just “hello world”, becomes the body of the .txt file streamed via the SFTP connector. This was great for our use case.

Cloudhub gave us the ability to launch our integration in minutes with logging and visibility.  There was zero time spent figuring out where and how we would host this.

Take a look at more of our MuleSoft at MuleSoft projects.

Best Practices for Tuning Mule

Reading Time: 6 minutes

We often get asked to help tune applications running on Mule for optimal performance. Over time, we have developed a methodology that helps us deliver on that request — and we wanted to share that methodology with you.

To-Do Before Tuning

Here are a few questions to ask before tuning. Performance tuning requires affirmative answers for (1) and (2), plus a concise response to (3).

  1. Does the application function as expected?
  2. Is the testing environment stable?
  3. How does the application need to be tuned?

Donald Knuth maintained that “premature optimization is the root of all evil”. Make sure the application runs properly before tuning it.

Performance Tuning Process Overview

Design Phase Tuning

  • Tune Mule’s flows.
  • Tune Mule’s configuration settings.

Runtime Environment Tuning

  • Tune the Java Virtual Machine (JVM).
  • Tune the garbage collection (GC) mechanism.

Operating System Tuning

  • Tune the ulimit (maximum number of open file descriptors).
  • Tune the TCP/IP stack (see the illustrative settings after this list).
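
On Linux, for example, these are the kinds of settings involved; the values below are illustrative only and should be tuned for your own workload:

    # Raise the open-file-descriptor limit for the shell that launches Mule
    ulimit -n 65535
    # Example TCP/IP stack settings applied via sysctl
    sysctl -w net.core.somaxconn=1024
    sysctl -w net.ipv4.tcp_fin_timeout=30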

Use an iterative approach when tuning. Make one change at a time, retest, and check the results. Though it may be tempting to apply several changes at once, that hasty approach makes it difficult to link causes with effects. Making one change at a time makes it apparent how each modification affects performance.

Performance Testing Best Practices

Use a Controlled Environment: Repeatability is crucial to running performance tests.

The testing environment must be controlled and stable. To help ensure stability, use:

  • A dedicated host to prevent other running processes from interfering with Mule and the application
  • A wired, stable network
  • Separate hosts to run other dependent services (e.g., MySQL, ActiveMQ, other backend services)
  • Separate hosts for running load client tools (e.g., Apache Bench, JMeter)

WARNING: A VM on shared hardware is not *controlled*; either use a comparable dedicated setup or make sure it is the only VM running on the server.

Use Representative Workloads

Representative workloads mimic real-world customer use cases. Planning the workload usually includes analysis of the payloads and of user behavior. Payloads should be designed to vary realistically in size, type, and complexity. Their arrival rate can also be adjusted to imitate actual customer behavior by introducing think time in the load-test tool. For testing proxy scenarios, artificial latency may also need to be added to the backend service.

Clarify Requirements and Goals

It is important to specify the performance criteria for an application’s use case. Here, use case refers to an application running in an environment to achieve particular goals. Different types of requirements lead to different configurations. For example, use cases emphasizing throughput should utilize parallel mark-and-sweep (MS) garbage collection (GC). Cases focusing on response time may prefer concurrent MS (CMS). Those GC techniques are themselves tuned differently. As such, a case cannot be tuned until after performance requirements are defined.
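
For example, on a HotSpot JVM those preferences map to flags like the following (heap sizes are illustrative; for a standalone Mule runtime they would typically be set through the wrapper configuration):

    # Throughput-oriented: parallel collectors
    -Xms2g -Xmx2g -XX:+UseParallelGC -XX:+UseParallelOldGC
    # Response-time-oriented: concurrent mark-and-sweep
    -Xms2g -Xmx2g -XX:+UseConcMarkSweepGC -XX:+UseParNewGC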

Here are some questions that may help clarify requirements:

  • What are the expected average and peak workloads?
  • Is the use case emphasis on throughput or response time?
  • What is the minimum acceptable throughput?
  • What is the maximum acceptable response time?

Want to learn more?

Download our Performance Tuning Guide to learn from our experts how to choose the right tools and set performance goals for throughput, latency, concurrency, and large payloads. The guide also explains how to design applications running on Mule for high performance.

What we learned from failed SOA implementations

Reading Time: 9 minutes

It’s been almost 30 years since the concepts behind SOA, specifically the notion of decomposing monolithic applications into discrete functions, were first introduced. Many organizations embarked on the journey towards SOA, but results have been mixed. Though SOA has several benefits and can be a powerful architectural paradigm, many SOA implementations have fallen short. In this blog post, we explore the primary reasons SOA, as an approach, failed to deliver on expectations.

Reason 1: Limited end user engagement

Imagine a band playing a concert where they don’t care about the audience. Many SOA implementations fit this description.  Too often, the backend details of the core systems of record being exposed by SOA projects used data models that only domain experts could understand. This made implementation difficult due to the particularities of backend applications. When the number of services is small, it’s no big deal to solve this by going directly to the responsible colleague. But this quickly becomes unscalable. The expectation that we would build several services and businesses would self-serve never happened at scale for most SOA implementations.

Reason 2: Heavy-handed design-time governance

Traditional design-time governance has both favorable and unfavorable consequences. Building services in ways that keep enterprise data secure is critical, but as SOA implementations grew more complex, this governance model created heavyweight workflows that slowed things down to a crawl or even blocked change without a full impact analysis — which ultimately dissuaded many users from even participating in the process. SOA implementations that succeeded focused more energy on making sure APIs and services were well-designed (and potentially built for a specific purpose), thereby requiring less energy to manage and implement change once deployed in production.

Reason 3: Heavyweight deployment and operations  

Traditional SOA stacks were heavyweight and complex, containing several bits and pieces of software. Adding to this complexity, the software stack demanded extra components to operate, such as a database and application servers. Standards like WS-* tried (and failed) to reduce this complexity by solving for every requirement. Ultimately, this prevented people from operating, developing, changing, and migrating SOA artifacts. Changing one artifact created multiple (and unknown) downstream impacts. Those who succeeded had hardcore integration use cases among core systems of record, which belonged only to central IT teams and had slow release cycles. As integration use cases moved outside of central IT with the advent of SaaS, it became clear that these traditional SOA stacks couldn’t handle the speed of connectivity required.

Reason 4: Bottlenecks on enablement

Designing against proper architectural guidelines and having people and processes in place to approve projects is logical. To achieve this, many SOA programs established an ICC (Integration Competency Centre) or CoE (Centre of Excellence). These centers quickly became bottlenecks at scale. The centralized bodies couldn’t move quickly enough, resulting in approval processes that were lengthy and expensive. In organizations where this center was not politically strong, lines of business simply bypassed it to get things done in an uncontrolled way (e.g., purchasing SaaS). In organizations where these centers were politically strong, those projects were simply blocked or forced to participate in the same slow waterfall process required of traditional use cases.

Reason 5:  Complex standards

SOA wasn’t intended just for SOAP/XML web services, but somehow this became the de facto standard. Despite the many benefits of SOAP, using it came with a high price. SOAP web services require significant investment to build and maintain, and often deliver many more capabilities than are actually required to meet the service objective. REST, on the other hand, has gained popularity due to its simplicity and ease of use. In fact, most APIs created today embrace REST design principles. REST, unlike SOAP, allows you to provide your audience with a stable and explorable web of resources that is very much ready for web scale. Similarly, XML is a verbose standard not really designed for high volumes of traffic outside and between data centres, making the simplicity and freshness of JSON more appealing.

Reason 6: One size doesn’t fit all data models

Canonical data models were meant to provide standards and keep data in sync, yet siloed systems persist. Why? Different parts of the business have different definitions of what a customer, a product, an invoice, or an order actually looks like. Successful SOA implementations are more pragmatic and recognize that domain models are usually more applicable, letting go of the need for a single, centralized canonical data definition.

Conclusions

SOA brings several benefits, but for the reasons outlined above these benefits are often never realized. The good news for those thinking about SOA today: these failings have more to do with approaches, practices, and tools than with the core principles of SOA.

At MuleSoft, we recommend a modern approach to delivering the core principles of SOA with purpose-built design, a federated ownership model, regular audience engagement, easy to use tooling, a center for enablement and domain data models.  We call this API-led connectivity.  Download the whitepaper to learn more.

Something old, something new, something connected, and something blue

Reading Time: 4 minutes

How are individuals being married to technology? In lots of ways, it turns out. Ross Mason has an article in Entrepreneur this week all about the ways in which we’re all developing a closer relationship with our mobile phones and other devices. Let’s take a closer look!

Financial Services. Consumers want ways to ease the payment process and get real-time banking information. So, says Ross, “to meet these demands from consumers, financial institutions broke down their monolithic systems into reusable APIs that can be combined to create new digital services.” This means you can deposit a check via your mobile phone, get immediate banking information via an app, or transfer money via your device.

Healthcare. APIs are playing an increasingly important role in healthcare, but as wearable devices become more prevalent, consumers are taking a bigger role in making healthcare decisions as they now have the information to do it. Ross notes, “These technologies are redefining how physicians and patients collaborate in real time, paving the way for better overall care and hooking consumers on the security and comfort they provide.”

Automotive. As car manufacturers expand the connected universe into vehicles, suddenly the car is becoming an extension of our homes and personal lives. Ross predicts that “in the next few years, car companies will provide 90 percent connectivity through APIs, fostering a closer relationship between individuals and their cars. Cars will no longer serve just as vehicles that move us from point A to point B; in addition, they’ll act as collaborator, entertainer and confidant.” But the connected car isn’t just transforming the in-car experience; the driverless car is becoming a reality.

We are all definitely experiencing a closer relationship with our mobile devices. Does this mean we’re headed to the chapel? Take a look at more resources on how APIs and mobility strategies are changing business forever.

MuleSoft at MuleSoft: building the Salesforce and Slack integration to improve sales

Reading Time: 9 minutes

In yesterday’s post, Mike pointed out that if our sales team responds quickly to a lead, it correlates to a higher probability of connecting with the prospect. This is something we had to take seriously. To deliver a solution, we partnered with Marketing Operations and the Account Development teams.

The first step was to determine the systems we needed to interface with. Our leads go directly into Marketo, but that wasn’t the right place to gather them. Marketo pushes leads to our instance of Salesforce (SFDC) with a reasonable SLA around delivery. Part of this sync from Marketo to SFDC involves filtering and assigning the leads to an appropriate Account Development individual. It didn’t make sense to rebuild that logic in Anypoint; also, if we had done this, we would have removed Marketing’s ability to change their criteria themselves. Instead we interfaced with SFDC. We worked with Marketing Ops to refine the Salesforce Object Query Language (SOQL) query to narrow down high-quality leads acquired in the last 5 minutes and then assign them to a person.
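
As an illustration only (the field list and filters below are invented, not our actual criteria, and the timestamp would be computed at runtime as “five minutes ago”), such a SOQL query could look like:

    SELECT Id, OwnerId, FirstName, LastName, Email, Company
    FROM Lead
    WHERE CreatedDate >= 2016-02-24T18:00:00Z
      AND IsConverted = false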

Cool. At this point we have the leads we want, so how the heck do we deliver them? SFDC associates each lead with the unique user ID of the person it is assigned to. We did a lookup of the user lists for SFDC and Slack and correlated employees between the two systems using email. Then we built an object store inside of Mule using the SFDC unique user ID as the key to an object containing name, email, Slack unique user ID, and Slack time zone. This means when a lead falls under the scope of the integration, it quickly matches the assigned unique user ID with all the information we need. “Quickly” is kind of key here. We have a self-imposed 5-minute SLA from the time of lead generation. The push from Marketo to SFDC under the right circumstances takes a couple of minutes. Everything we do has to be as close to instant as we can get it. This means minimizing the API calls we make, or, if we have to make them, caching the results that don’t change often (e.g., user lists).
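
A single cached entry might look something like this (the IDs and values are made up for illustration):

    {
      "005x0000001AbcD": {
        "name": "Jane Doe",
        "email": "jane.doe@example.com",
        "slackUserId": "U024ABCDE",
        "slackTimezone": "America/Los_Angeles"
      }
    }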

Now that we have the lead we need to deliver, we will need to maintain state. We used a messaging queue to have a persistent record of the messages we need to send. We push all leads we receive into the queue and stream from the queue into our Slack messaging logic.

We need to build filtering into our processing before it’s ready to go into production. We’re using an external persistent queue, so we have to anticipate times where the two systems are unable to contact each other. So we filter the leads based on whether or not they were created in the last 5 minutes. Then, and this is really cool, we use the Slack time zone associated with each employee to translate current time into their local time. Once we have that value, we can filter based on working hours; this means our integration will only ping the AD team member when they’re working. Filtering helps eliminate noise and make sure we’re only delivering actionable leads. Think of these messages like emails. If you get two thousand emails a day and only two hundred are important, it’s going to be easy to miss things. However, if you only get two hundred a day and one hundred and eighty are important, you’re just more likely to read your email every day.

The final step is message delivery, plus error handling for any potential failures in that process. We interface directly with the Slack API to send messages as a bot user to each individual AD team member. This means they only get the leads targeted to them and aren’t spammed in a channel with everyone else’s leads. We utilize a dead letter queue to manage delivery failures and escalate potential issues for investigation.
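
Sending a direct message as a bot through Slack’s Web API is essentially a call to chat.postMessage; a placeholder example (token, user ID, and message text are all invented):

    curl -X POST https://slack.com/api/chat.postMessage \
      -d token=xoxb-your-bot-token \
      -d channel=U024ABCDE \
      -d "text=New lead assigned to you: Jane Doe (Acme Corp)"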

All in all, this was one of the more fun projects I have worked on to date. Initially we thought it would be impossible to filter based on local time zone within the 5-minute window, but it ended up being possible through the Anypoint Platform’s capabilities. This integration took about a day to develop the framework and logic, but any subsequent similar integrations would take minutes. All you would have to develop is the triggering; after that, everything is completely reusable.  It really can’t be overstated how powerful this is. Let’s say SalesOps wants to be notified when a deal is closed. All you need to do is work with the business unit to determine the logic for that trigger and it’s literally done. This allows you to move so much faster than before. It’s really exciting on a personal level to get to work with a tool that is so empowering. Take a look and see what you think.

MuleSoft at MuleSoft: Responsive sales by integrating Salesforce and Slack

Reading Time: 5 minutes

When anyone shows interest in our technology, we want to engage with them as soon as possible. We are excited about what we build at MuleSoft, and any opportunity to share that excitement with someone is a big deal.

One day we were hanging out with some fellow Muleys from the Sales Development and Marketing teams and they were talking about how great it would be to receive a notification in Slack whenever a new lead comes in, and to have that Slack message go directly to the right person and include a link to the opportunity in Salesforce. Citing data analysis done by Lead Response Management, they explained that the time it takes to contact someone who reaches out to you greatly impacts the possibility of connecting or qualifying the sale. They said, “The odds of contacting a lead if called in 5 minutes versus 30 minutes drops 100 times. The odds of qualifying a lead if called in 5 minutes versus 30 minutes drop 21 times.” There is obviously some serious business value in this idea.

We all got excited about this and decided to build a solution. We already use Slack for relevant human to human communication (thus we are engaged with one another in the tool already), so the idea of adding relevant data from our business systems into the mix was really exciting and we jumped on the project. We also have the tools within our platform to make this happen quickly.

Salesforce has queries to help us identify when an object has changed (i.e., a PushTopic on the Lead object) and Slack provides the APIs to send messages to specific people. Anypoint provides us with the tools we need to build a working version in less than a day. We worked directly with excited Muleys in our Marketing and Account Development teams to get quick results that lined up with their goals.

Anypoint made it possible for us to create (using Anypoint Studio) and deliver (using CloudHub) a solution for our internal teams, where they were able to be a direct part of the process, working closely with IT to get a win in record time that produces great benefit. Anypoint delivers real-time, relevant information to the right people using familiar tools. This solution has made it possible to be as responsive to relevant changes in business data as we are to each other: we can know that a change has happened and that the right person is notified.

Tomorrow, Paul Henry from my team will talk about the technical implementation of the solution. Stay tuned!