Creating a Conversational Bot with ChatGPT, MuleSoft, and Slack

Can we create a fully functional conversational bot that leverages the power of a Large Language Model (LLM)? The answer is a resounding yes!

In this post, we’ll guide you through the process of building a robust, interactive conversational bot from scratch. If you have a fresh OpenAI account, you can do this entirely with free accounts and software, since OpenAI gives you $15 of credit to try the API. If not, you’ll need to add credit to your OpenAI account, but this sample app is inexpensive to run.

We’ll use MuleSoft, Slack, and the state-of-the-art ChatGPT to make it happen. Unlike traditional NLP systems, ChatGPT is an LLM designed to understand and generate human-like text. This makes it extremely useful for various language-processing tasks.

So, buckle up and join us as we reveal the secrets to creating an intelligent bot that leverages the advanced capabilities of ChatGPT, an LLM that can enhance team collaboration and productivity, and deliver a seamless user experience. Let’s dive in!

Note: The accounts and software used in this post could have some limitations since MuleSoft gives us trial accounts.

The main purpose is for you to understand and learn the basics of:

  • Implementing the OpenAI REST API (we’ll be using the gpt-3.5-turbo model)
  • How to create a simple backend integration with Anypoint Studio
  • How to build an integration with Slack

Pre-requirements

  • Anypoint Studio’s latest version.
    • Once you have installed Anypoint Studio and created a new Mule project, install the Slack Connector: open the Anypoint Exchange tab, then search for and install the connector.
  • An Anypoint Platform trial account (you can create a 30-day trial).
  • A Slack bot installed on a channel.
  • An OpenAI account with available credit. Remember, OpenAI gives you $15 of credit on your first account. If you previously registered on the OpenAI platform, you will need to add a balance to your account; however, building the sample application in this guide is really cheap.

Once we have everything installed and configured, we can proceed to get the authorization tokens we will need throughout the integration. Save these in your mule-properties.yaml file.

OpenAI API Key

Once you have created your account on OpenAI, you will be able to access your account dashboard, where you will see a tab labeled “API Keys”. Here, you can generate your secret key to make requests to the OpenAI API. Simply click on “Create new secret key”, copy the key, and save it to a text file.

Slack OAuth

In your Slack application, you should already have your bot configured in a channel on Slack. If you don’t know how to do it, you can follow this guide. In the bot’s scope configuration, enable ‘channels:read’, ‘chat:write:bot’, and ‘channels:history’.

This screenshot is an example of what the interface looks like; you will have your own Client ID and Client Secret:

Configuration properties

You can use this sample for your mule-properties.yaml file; you just need to replace the KEYS and IDs with your own.
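
For illustration, a minimal version of the file might look like the sketch below. Only slack.conversationId is referenced by name later in this post; the other property names are assumptions for this sketch:

    # mule-properties.yaml (sample values, replace with your own)
    openai:
      api-key: "sk-REPLACE_WITH_YOUR_SECRET_KEY"   # assumed name; the key generated in the OpenAI dashboard
      model: "gpt-3.5-turbo"
    slack:
      token: "xoxb-REPLACE_WITH_YOUR_BOT_TOKEN"    # assumed name; the bot token from your Slack app
      conversationId: "C0123456789"                # channel ID used by ${slack.conversationId}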

The Integration

Now that we have our bot created in Slack and our API key from the OpenAI dashboard, you can start to see the role of each system and the missing piece that connects them all. That’s right: it’s MuleSoft’s Anypoint Platform.

The Project Structure

The project is divided into a main flow and three sub-flows, split by functionality. We need to do a few things between receiving a message from a user on Slack and replying to it. Please see the image below, and each block’s explanation.

Main Flow

  1. This Mule flow listens for new messages in a Slack channel using the slack:on-new-message-trigger component. The channel is specified using the ${slack.conversationId} property. A scheduling strategy is set to run the flow every 5 seconds using the fixed-frequency component.
  2. Next, the flow checks if the message received is from a user and not from the bot itself. If the message is from the bot, the flow logs a message saying that it is the bot.
  3. The incoming message is then transformed using the DataWeave expression in the Transform Message component. The transformed message is stored in the incomingMessage variable, which contains the user, timestamp, and message text. 
    • If the message is from a user, the incomingMessage.message is checked to see if it equals “new”. If it does, the finish-existing-session-flow is invoked using the flow-ref component. If it doesn’t equal “new”, the check-session-flow is invoked with the target set to incomingMessage.

Overall, this flow handles incoming messages in a Slack channel and uses choice components to determine how to process the message based on its content and source.

The finish-existing-session-flow and check-session-flow are the flows in the application that handle finishing an existing session or checking whether a new session needs to be started.

Finish existing session flow

  • “Finish-existing-session-flow”: terminates the previous session created by the user.

Check session flow

The “check-session-flow” checks whether a user has an existing session and, if not, creates one. The flow follows these steps:

  1. Check if a user has an existing session: This step checks if the user has an existing session by looking up the user’s ID in an object store called “tokenStore”.
  2. Check array messages user: This step checks the object store “store_messages_user” to see if there are any messages stored for the user.
  3. Choice Payload: This step uses a choice component to check if the payload returned from step 1 is true or not.
    • When Payload is true: If the payload from step 1 is true, this step retrieves the existing session ID from the “tokenStore” object store and sets it as a variable called “sessionId”. It also retrieves any messages stored for the user from the “store_messages_user” object store and sets them as a variable called “messageId”. Finally, it logs the “messageId” variable.
    • Otherwise: If the payload from step 1 is not true, this step sets a welcome message to the user and stores it in the “store_messages_user” object store. It generates a new session ID and stores it in the “tokenStore” object store. Finally, it sets the “sessionId” variable and generates a welcome message for the user in Slack format.
  4. At the end of the flow, we interact with the OpenAI API by calling a flow named “make-openai-request-flow”.

The steps in this flow ensure that a user’s session is properly handled and that messages are stored and retrieved correctly.

Make OpenAI request flow

The purpose of this flow is to take a user’s message from Slack, send it to OpenAI’s API for processing, and then return the response to the user via Slack. The flow can be broken down into the following steps:

  1. Transform the user’s message into a format that can be sent to OpenAI’s API. This transformation is done with the DataWeave language in the “Transform Message” component. The transformed payload includes the user’s message, as well as additional data such as the OpenAI model to use and a default message to send if there is an error.
  2. Log the transformed payload using the “Logger” component (optional; used to check that the payload was built correctly).
  3. Send an HTTP request to OpenAI’s API using the “Request to ChatGPT” component. This component includes the OpenAI API key as an HTTP header (a standalone sketch of this call appears after this section).
  4. Store the user’s message and OpenAI’s response in an object store using the “Store message user” component. This allows the application to retrieve the conversation history later (please read more about this in the OpenAI documentation; it helps keep the context of the user’s conversation with ChatGPT, since messages are stored with the roles “user” and “assistant”).
  5. Transform the OpenAI response into a format that can be sent to Slack using the “Make JSON to send through Slack” component. This component creates a JSON payload that includes the user’s original message, the OpenAI response, and formatting information for Slack.
  6. Send the Slack payload as an ephemeral message to the user using the “send answer from chatGPT to Slack” component.
  7. As the final step, we delete the original message sent by the user. Since the bot is deployed on a channel, messages are public; by using ephemeral messages we improve the privacy of what is sent on the Slack channel.
    1. Create a payload to delete the original message from Slack using the “payload to delete sent messages” component.
    2. Send a request to delete the original message from Slack using the “delete sent message” component.

By following these steps, the MuleSoft application can take a user’s message from Slack, send it to OpenAI’s API, and return the response to the user via Slack, while also storing the conversation history for later use.
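
To make the ChatGPT request in step 3 concrete, here is a minimal standalone Java sketch of the same REST call the Mule flow performs with its HTTP Request component. The endpoint, headers, and body shape follow OpenAI’s public chat completions API; the class name and hard-coded message are ours:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class ChatGptRequestSketch {
        public static void main(String[] args) throws Exception {
            // In the Mule app this key comes from mule-properties.yaml
            String apiKey = System.getenv("OPENAI_API_KEY");

            // The conversation travels as an array of role/content pairs;
            // storing "user" and "assistant" messages (step 4) preserves context.
            String body = """
                {
                  "model": "gpt-3.5-turbo",
                  "messages": [
                    {"role": "user", "content": "Hello from MuleSoft!"}
                  ]
                }""";

            HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://api.openai.com/v1/chat/completions"))
                .header("Authorization", "Bearer " + apiKey)
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

            HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

            // The assistant's reply is at choices[0].message.content in the JSON response
            System.out.println(response.body());
        }
    }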

This was created and tested with these versions:
Mule Runtime v4.4.0
Anypoint Studio v7.14
Slack Connector v1.0.16

Oktana is a SOC 2 Certified Salesforce Partner

As members of the Salesforce ecosystem, we are all aware that trust is the #1 core value of Salesforce. Customers trust that data stored in Salesforce is secure. This expectation of trust naturally extends to any partner accessing a company’s Salesforce instance and connected systems.

Oktana is proud to have maintained SOC 2 Type II certification since 2021, which allows us to provide the assurance we meet the highest data security standards. Since 87% of our business over the past three years is within the High Tech industry, including Healthtech and Fintech, this certification also enables our customers to maintain their compliance certification as we meet vendor security requirements.

What is SOC 2 certification?

SOC 2 is a voluntary compliance standard for service organizations, developed by the American Institute of CPAs (AICPA), which specifies how organizations should manage customer data. The standard is based on what they call “Trust Services Criteria”, covering these core concepts:

  • Security – Your data is managed and stored securely
  • Availability – Your data is always available
  • Processing Integrity – Your data remains intact at all times
  • Confidentiality – Your data is treated confidentially
  • Privacy – Your data is not exposed when not necessary

To maintain our SOC 2 certification, we are audited against a set of security controls supporting these Trust Services Criteria.

Why should you care?

To Oktana, this is the bare minimum a Salesforce partner can provide to you given the sensitivity and importance of the data you store in Salesforce. A SOC 2 certified Salesforce partner confirms they will respect your data and help you provide the same level of trust Salesforce provides to you, to your customers.

Here are some of the benefits of working with a SOC 2 certified Salesforce partner:

  • Peace of mind and confidence in data security

By choosing Oktana as your Salesforce partner, you can rest assured we are taking active steps to protect your data. SOC 2 certification is an additional guarantee that we are committed to our customer’s data security and that we have implemented appropriate security controls to protect it, including training our team members.

  • Regulatory compliance 

To meet your own regulatory requirements, you may need to require vendors to be SOC 2 certified. By working with Oktana on your Salesforce implementation, you can be sure we meet the necessary bar to enable you to comply with your regulatory requirements.

  • Risk reduction 

By working with a SOC 2 certified Salesforce partner, you can be sure we have taken proactive measures to protect your data and reduce the risk of data security breaches and associated costs. In line with this, we work with you to ensure your proprietary data does not enter Oktana’s systems. We will use your project management software and repositories and, if you prefer, your VPN and hardware.

  • Competitive advantage 

By choosing to work with a SOC 2 certified provider, you can differentiate your company from the competition and improve your reputation and the trust of your own customers.

Our compliance program is robust, which has enabled us to work with regulated industries, including the public sector at both the state and federal levels. In addition to being SOC 2 certified, we can provide onshore resources to meet other compliance requirements. To learn more, check out our Trust page.

Mastering Acceptance Criteria: 3 tips to write like a Professional

What is Acceptance Criteria?

Acceptance criteria is a term used in software projects for a deliverable that lists a set of pre-defined requirements for a product, service, or feature. These criteria must be met or approved for the work to be “accepted” by the end user and become a functional part of the organization’s solution or software. The criteria are specific to each user story in Agile project management and are elaborated during planning so that, once defined, the development team can use them as a guide for implementation and testing. It is highly recommended to have detailed, measurable criteria that are clear to everyone involved so that a measurable outcome is attainable.

Why is Acceptance Criteria necessary?

Writing requirements in the form of “acceptance criteria” has become the norm in Agile projects. Crafting requirements as digestible deliverables is an integral part of a successful implementation. Acceptance criteria are standard practice for requirement documentation and can easily align different teams around a common understanding of the ask.

It is extremely important that cross-functional teams hold a shared understanding, since each collaborator brings a unique background, ideas, and interpretations, which can lead to misalignment. Moreover, acceptance criteria can vary greatly with each author’s writing style. This is particularly evident on large projects where multiple individuals produce acceptance criteria.

Needless to say, we all have our own preferences on how to write, but it’s important to remember that writing acceptance criteria is a skill that can always be refined and improved, with the ultimate goal of producing a document that reduces implementation ambiguity, is clear to all parties involved, and provides value to the project.

Acceptance Criteria & User Stories

The skill of writing user stories is well defined: understand the project scope, work on your personas, follow the INVEST mnemonic and you’re pretty much set. On the other hand, acceptance criteria is much broader and “open” in terms of definition. There is often a gap between theory and practice. Whilst working on requirement analysis, the real world often presents time constraints, no well-defined scope, and a lack of stakeholder engagement. User stories can reflect a specific goal but the acceptance criteria needs to showcase the behavior in detail so that the user story can be achieved. 

As a Business Analyst in software projects, I am involved during all phases of design and implementation. However, countless times I have seen the expectation of having reached a shared understanding become dismantled at all stages of the project. 

There are many resources out there that cover best practices, but I want to emphasize the importance of actively listening to questions or feedback when reviewing acceptance criteria with the scrum team. This is a critical aspect to know whether it is well written and achieves the goal of the user story. Nailing down a good set of acceptance criteria is a challenge and finding that sweet spot can make your requirements a masterpiece for the team. 

The Goldilocks Principle

What is the Goldilocks principle?

The Goldilocks story can help us think about finding the middle ground when writing effective acceptance criteria, which depends on each project’s particular goals and needs. Aside from the blatant issue of home invasion, Goldilocks does teach an important lesson: nothing at the extremes was “right,” whether eating the porridge, sitting in the chairs, or sleeping in the beds. Yes, the story might have been intended to make you seek the ideal balance in life, but let’s also apply it to writing acceptance criteria. Too vague, and it becomes a solution nightmare. Too detailed, and it removes the wiggle room needed for “issues” or design/tech constraints that occasionally pop up. Too lengthy, and it becomes hard for QA to test effectively; too short, and it might not reflect the implementation needs.

However, let’s go a step further. We don’t always have time to write well. Sometimes we don’t know the stakeholders’ clear scope or vision but have to start anyway; we might not have a strong technical lead, or we might not have access to the right stakeholders to get the information we need. These constraints can lead to unclear and difficult-to-read acceptance criteria.

In the past, I have assessed the project risks applicable to writing acceptance criteria, similar to the ones mentioned above, and devised a strategy on how I can best write them so that they become a valuable key piece of work used by the team.

Tips and recommendations for writing good acceptance criteria

  • Assess your timeline to deliver the acceptance criteria.

An aggressive timeline requires fast output and more generic criteria. Details will need to be fleshed out further when reviewing the user stories (during scrum team refinement) and possibly during implementation. Not ideal, but it is a real-world scenario.

A lengthy timeline gives more time to work alongside stakeholders and fully understand the requirement and its context. We should work on supporting documentation, like process flows or designs, to help teams understand the written criteria.

  • Understand the project’s complexity.

A straightforward project involving simple development work and design gives us the opportunity to write in detail (always respect best practices – like 8-10 criteria max per user story) and call out the specifics, such as errors, exceptions, or alternate behavior.

A highly complex implementation often involves integrations, where it can actually be more beneficial to write more generically, with only the key details, since unforeseen limitations always arise during development. Work with what you know as a basis, and any underlying constraints will naturally come to the surface.

  • The audience: get to know your stakeholders, their engagement, and how invested they are.

If stakeholders do not display much product knowledge or are not very helpful in defining the requirement, they might need you to aid their decision-making. This is an extremely common issue in projects. If this is the case, the acceptance criteria need to be written so they are clear to them, without too much technical jargon. Details can be elaborated with the development team during refinement.

However, if stakeholders are overly involved and have a technical background, this might help you get what you need, but they should not dictate how a criterion is to be met. Here, we need to stick to writing acceptance criteria as statements, not descriptions of how something needs to be achieved – however obvious that may be.

Conclusion

All in all, we can add a lot of value to a project when writing acceptance criteria by taking into consideration all the particulars and risks of the project. This analysis can be done up front, before investing time and effort (and potentially rework), to examine how you’re going to tackle writing the acceptance criteria. Although this is a generalization, it can help you get those acceptance criteria just right.

 

Check out our staff augmentation services

Salesforce TDD (Test-Driven Development)

Hi, I’m Diego and I have several years (I prefer not to say how many, but let’s say “enough”) working in Salesforce. I am also an Agile enthusiast and lover of applying related techniques throughout my work.

I’ve found test-driven development (TDD) can be a great technique for building robust software, but when we work in Salesforce, I find some constraints or restrictions can make it frustrating. In this post, I’m going to show when and how I use TDD while coding in Salesforce.

Disclaimer: The following is written on the basis of what has worked for me in the past. This is not intended to be a formal or exhaustive description. Use it as you see fit, I do not take any responsibility if you screw it up! 🙂

Let’s start at the beginning:

What is TDD?

TDD is an Agile development technique that requires you to write a failing unit test before writing any “production code.”

How are you supposed to do TDD?

First I’ll describe how TDD is done in general (this is the way to go in languages like Java).

  1. Write a unit test and make it fail (a compilation error is considered a failing test). Write no more lines of code than needed.
  2. Write the least possible production code lines to make the test pass (fixing a compilation error is considered a passing test).
  3. Refactor the code.
  4. Repeat until you are done.

Let’s check out an example in Java so you see how it works. In this example, we want to create an advanced calculator for integers.

 

We work in split view when doing TDD

Round 1

Let’s write a failing unit test:
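
Since the original code screenshots are not reproduced here, the snippets that follow are illustrative reconstructions (JUnit 4 style; all names are invented). The first failing test might look like:

    import static org.junit.Assert.assertNotNull;
    import org.junit.Test;

    public class MyCalculatorTest {
        @Test
        public void calculatorCanBeCreated() {
            // Compilation error: MyCalculator does not exist yet
            MyCalculator myCalculator = new MyCalculator();
            assertNotNull(myCalculator);
        }
    }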

Oops, MyCalculator is not defined yet, compilation issue…therefore, it is a failing test.

Let’s make the test pass:
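
Perhaps the smallest thing that compiles:

    public class MyCalculator {
        // Intentionally empty: just enough to make the test compile and pass
    }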

Compilation problem fixed! The test is passing again. Woohoo!

There isn’t much code to refactor yet.

Round 2

Let’s continue with that test to make it fail again.
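
Continuing the illustrative sketch, we might assert on a method that doesn’t exist yet:

    // Inside MyCalculatorTest (import static org.junit.Assert.assertEquals;)
    @Test
    public void getOppositeOfZero() {
        // Compilation error: getOpposite is not defined on MyCalculator yet
        assertEquals(0, new MyCalculator().getOpposite(0));
    }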

Mmm…getOpposite is not defined, ergo compilation issue, ergo failing test.

Let’s fix that. Let’s write the minimum code to fix the test:
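
For example:

    // Inside MyCalculator: the least code that makes the test pass
    public int getOpposite(int number) {
        return 0;
    }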

getOpposite is defined and returns 0 for any parameter (in particular, 0). The test is passing again!!!

Let’s refactor.

We still don’t have much code to refactor, but there are some name changes we could make so the code is easier to read (yup, yup, yup…unit test code is code, too).

Much better now! 😀

Round 3

Let’s add a new minimum test to fail.
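
Sticking with the same illustrative names:

    // Inside MyCalculatorTest
    @Test
    public void getOppositeOfOne() {
        // Fails: getOpposite currently returns 0 for every input
        assertEquals(-1, new MyCalculator().getOpposite(1));
    }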

Right now, getOpposite returns 0 for any parameter… it’s a fail!

Let’s write the minimum code required to make the test pass.
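
Again, the minimum, even if it looks silly:

    // Inside MyCalculator
    public int getOpposite(int number) {
        if (number == 1) {
            return -1;
        }
        return 0;
    }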

Yay! It’s green again! Let’s continue.

Round 4

Let’s add a new failing test.
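
Something like:

    // Inside MyCalculatorTest
    @Test
    public void getOppositeOfFive() {
        // Fails: we return 0 for any value other than 1
        assertEquals(-5, new MyCalculator().getOpposite(5));
    }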

The last test fails (we return 0 for any value other than 1), so now we need to write the minimum code to fix this test:
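
Taken literally, the “minimum” fix is another hard-coded case:

    // Inside MyCalculator: passes all three tests, but clearly not good
    public int getOpposite(int number) {
        if (number == 1) {
            return -1;
        }
        if (number == 5) {
            return -5;
        }
        return 0;
    }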

Test is passing again… but this solution is not good, let’s refactor.
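
The refactored version that handles every case:

    // Inside MyCalculator
    public int getOpposite(int number) {
        return -number;
    }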

Tests are still passing and we’ve solved all the cases! We are done! Well, not really; we still need to document, test more, write more tests and write even more tests…but we’re on the right path.

I hope this silly example gives you a feel for what TDD is and how it is done.

Now, let’s continue with the discussion, focused on Salesforce.

TDD Advantages

  • Code coverage: We get great code coverage without even thinking about it.
  • Testability: The code is designed to be testable by default (we are actually testing every time we change something).
  • Easier to refactor: We should not change or refactor code without having a set of tests we can lean on. As we are constantly writing tests that we know are able to catch bugs (we make it fail at the beginning), we know that we have a set we can rely on.
  • “Better” code: We are constantly refactoring the code, striving for the best possible code.
  • Predictability: After we finish a “round,” we are completely sure the code is working as we designed it to work and we know we didn’t break anything. We can say we have “working software.”
  • Prevents useless work in Salesforce: In Salesforce, aside from Apex, we have plenty of options for making changes, like triggers, workflow rules, process builder, etc. Imagine that a test we just wrote, which changes a value on a contact record, passes before we write any production code. We could discover that another moving part is already taking care of that change (or that we wrote the test badly).
  • Documentation: Tests are a great tool to communicate with other developers (or the future you) how, for example, a class API should be called and the expected results of each case.

TDD Disadvantages

  • Overtrust: Because we are testing continuously and getting green tests, I sometimes have the feeling that the code is working perfectly…but that doesn’t mean it is. We may miss a case, or simply get lazy and leave one out of the unit tests.
  • Slow in Salesforce: TDD is designed on the assumption that compiling and running a test is really fast (a JUnit unit test should run in less than 1ms). In Salesforce, we need several seconds to compile (the code is compiled on the server) and several more seconds to run the test, in my experience usually 10+ seconds. As we are compiling and running tests constantly, we add several minutes of “waiting for Salesforce.” However, this can be mitigated if you think you will need to write/compile/execute tests later anyway – you might as well do it upfront.

 

 

Me when I realize the QA found a case I had not considered when I was doing TDD

I will (probably) use TDD when...

In general, I’ve found that TDD is a great tool in several circumstances and I tend to do it almost always in the below cases.

  • Back-end bug fixes: Doing TDD in this context has two big advantages. First, you make sure you are able to reproduce the bug consistently. Second, and even more important, as you are writing a test specific to the bug, you know you will never introduce that bug again.
  • Back-end work with clear requirements and a clear implementation strategy: In this context, writing tests is going to be easy and implementing the production code will be easy, too, as you know where you are heading when you create the test cases.
  • Back-end work with clear requirements and minor implementation unknowns: In this context, the test is easy to write and the production code may be getting clearer as you move into cases.
  • Back-end work with some requirements discovery: Imagine in our calculator example you write a test to divide by zero and you realize you’ve never discussed that case with the BA. TDD helps you discover opportunities to clarify requirements.

I might do TDD, but it’s unlikely...

  • As part of requirements discovery: You could write unit tests as part of requirements discovery, and discuss it with your stakeholders, BA, or other technical people, but you probably have better techniques to support this process.
  • Front-end work: I’m gonna discuss this briefly later, when we talk about Lightning web components.

I will never do TDD when...

  • I’m doing a prototype: By definition, a prototype or PoC should be discarded after we show it, so I code it as fast as I can, focused on demonstrating the core functionality.
  • I’m experimenting: If I’m trying a new idea, I don’t focus on code quality (again, this is a prototype).
  • I’m evaluating implementation options: There are some cases where you want to compare two implementation options, so focus on having a good-enough-to-decide prototype and throw it away after you decide…then do the stuff well.
  • I don’t care about code quality: I know code quality is not usually negotiable, but in very limited and extreme situations, it may not be the top priority. For example, when everything is screwed up on prod and you need to fix the problem ASAP because the company is losing millions of dollars per minute. In this very extreme circumstance, fix the problem as fast as you can, make your company earn money again, go to sleep (or maybe get a drink) and tomorrow at 10 am (yup, after a stressful night, start working a little later the next day) make the code beautiful with TDD. Make a test that reproduces the bug and then fix and refactor the code properly.

 

 

Me again, but on one of THOSE nights.

  • When creating test code is extremely difficult (but not impossible): In Salesforce there are a few elements that are very hard to test, like working with custom metadata types (CMT). In this scenario, I’d probably split the problem into two parts – one that is TDD-doable using mock data (@TestVisible is your best friend here) and a second, smaller part whose testing I’d consider later (if I even consider it).

How I do TDD in Salesforce

I really don’t do TDD as I defined at the beginning of this article when I’m working in Salesforce. Why? Mainly because of the slower compile/test flow, but also because in Apex we generally start writing integration tests instead of unit tests. Instead of “regular” TDD, I tweaked the formula a bit to work better under Salesforce circumstances.

  1. Write an entire deployable test that checks the flow or use case. Yup, I said deployable, so if I call a method I haven’t created yet, I create it, empty, so I can deploy.
  2. Run it and get a failure.
  3. Write the minimum code that makes that test pass.
  4. Refactor.
  5. Continue with the next flow or use case.
  6. When I’m done with all the flows and use cases, I refactor the code again (splitting methods, checking code cleanliness, documentation). I run the unit test continuously, every few changes to check if everything continues to work as expected.

To make everything clear, let’s view an <could-be-real-world> example.

Requirement:
As a user, I want the values stored in any object copied into a number of specified contact fields. The specified “mappings” will be stored in a custom metadata type called Contact_Mappings__cmt. Contact_Mappings__cmt has two fields:

  • Original_Fields__c Text
  • Mapped_Fields__c Text

Round 1

As I said before, I should start writing an Apex test that tests a business case. The first thing I’m thinking of developing is “The contact should not change if there is no mapping defined.” I have to write a deployable test that is going to fail with the minimum amount of code to make it fail:

We work in split view

As expected, the code deploys but the test fails. So, we need to fix it! We can simply return the same object.

Now it passes, but we don’t have a lot of code to refactor (we could extract some constants in the test).

This is a much better test.

Test still passes!

Round 2

Okay, let’s add another case. What if we check that the User.LastName is copied into the contact when I define the Mapping Lastname => Lastname? Great idea, let’s do it!

I start to write the unit test but… I realize I can’t insert CMT records from a test. Either I give the test seeAllData permission and define the records in the project metadata, or I have to somehow deploy them.

Remember that I said that I don’t do TDD when writing the test is extremely hard? Well, it looks like I’m in one of those situations. At this moment, I can quit writing this blog post and go cry…or I can redefine what I am developing with TDD, leaving all the complexities outside of scope. I imagine you would be very annoyed after reading this far to see me just quit, so let’s go with the second option.

I can’t use the CMT right now, so let’s do something different. What if we use a Map<String, String> where the key is the field in the original object and the value is the list of field names in the Contact object? It might work; later on we just need to read the CMT and create a Map with that information, but spoiler alert…that won’t be covered in this article.

But okay, great, let’s create a map and write the deployable failing test.

And, as expected… it fails.

Let’s write the “minimum” code that makes that test pass

Our new test passes, but the other one failed! Let’s fix that.

Let’s do some refactoring, either in test or production code.

I think the put/get part is messy to read (and has its own meaning), so let’s split it into its own method.

Also, since we want theMap to be injectable in test-case scenarios, the @TestVisible annotation is useful here.
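
Since the screenshots aren’t reproduced, here is a rough Apex sketch of where the code might stand after Round 2 (class, method, and test names are invented; in a real org each class lives in its own file):

    public class ContactMapper {
        // Key: field API name on the source object; value: target Contact field API name.
        // Later this map would be built from the Contact_Mappings__cmt records.
        @TestVisible
        private static Map<String, String> theMap = new Map<String, String>();

        public static Contact copyMappedFields(SObject source, Contact target) {
            for (String originalField : theMap.keySet()) {
                // Copy each mapped value from the source record onto the Contact
                target.put(theMap.get(originalField), source.get(originalField));
            }
            return target;
        }
    }

    @IsTest
    private class ContactMapperTest {
        @IsTest
        static void contactDoesNotChangeWithoutMappings() {
            Contact c = new Contact(LastName = 'Original');
            ContactMapper.copyMappedFields(new User(LastName = 'Ignored'), c);
            System.assertEquals('Original', c.LastName);
        }

        @IsTest
        static void lastNameIsCopiedWhenMapped() {
            ContactMapper.theMap = new Map<String, String>{ 'LastName' => 'LastName' };
            Contact c = ContactMapper.copyMappedFields(new User(LastName = 'Mapped'), new Contact());
            System.assertEquals('Mapped', c.LastName);
        }
    }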

Round 3

Now we should add a new test that executes a new flow and see it fail. I think you get the idea, so I won’t do it here, but just to spell out the cases, I can think of:

  • Mapping a field to multiple fields (separated by a colon)
  • Does nothing if origin field is wrong
  • Does nothing if destination field is wrong
  • Does nothing if types are not compatible
    …and so on

Can we do TDD in Lightning Web Components (or front-end)?

The short answer is yes, we can.

Long answer: Since Jest tests can’t see the component objects, only the generated DOM, it may be harder to do TDD efficiently for front-end solutions. Usually, it is better to test visual code by watching the result and THEN write the tests we need to ensure the code won’t break in the future.

Conclusion

TDD is a best practice that’s good to master so that you can decide the best moment to apply it (I don’t believe there is One Ring to rule them all, One Ring to find them, One Ring to bring them all, and in the darkness bind them, thank you J.R.R. Tolkien). Applied correctly it will make you produce better, more robust code…and fewer bugs, which means…

Homer is happy

Salesforce System Architect: Tips, Role, and Responsibilities

 

Interested in becoming a Salesforce System Architect, but still have some questions about it? Travis, a System Architect in our West Virginia office, has the answers.

Is the Salesforce recommended material enough to prepare me for the Salesforce System Architect Certification? 

Yes. I think that in the beginning, it may be a little overwhelming. Make sure you understand the content well and don’t get discouraged. If some of the details seem like a little too much to memorize, just have a general understanding of everything. If you have excellent general knowledge, you should be okay on the exam. Some of the exams do get more specific than what you would ever actually need in a working situation, like the Identity and Access Management Designer exam in particular. So there are some things that you have to memorize, but I think that if you have a good understanding of all the content, you should be okay.

 

Is the System Architect certification a prerequisite for another certification, or is it the end of the career path?

Source: Trailhead Architect Overview

The next step for a Salesforce System Architect is to become an Application Architect and then a Technical Architect. For the Salesforce System Architect and Application Architect, you need to pass four exams per certification. Once you get the certifications, you’re considered either a System Architect or Application Architect, depending on your achievements.

Becoming a Salesforce Certified Technical Architect (CTA) is a little bit different. There isn’t an exam as far as questions and answers that you go online to take. It’s actually like sitting in front of a board of people who have achieved the Salesforce CTA. They ask you questions about the solutions you present and then decide whether you should be a CTA as well.

Do you recommend following some particular order for the certifications?

I recommend starting with the Salesforce Platform Developer I certification to understand the basics of Salesforce. It’s more specific than the other exams, so it is better to start with that one. Also, none of the other certifications build on each other; all three are separate topics.

How long does it take to become a certified Salesforce System Architect?

It depends. Things were a little different for me. It took me one year, which is probably a little unique. Salesforce says it typically takes three to five years and requires working with the Salesforce platform. But I kind of jumped through things a little bit as I didn’t have to learn much about the background and best practices because I already had experience with that. In my case, I was just learning the Salesforce platform and how to put that together with my experience. It’s going to depend on how much time you have to study, and what your regular work is. If your regular job involves many things related to the exam, you’re going to gain that knowledge more quickly. It just depends on your situation.

What other certifications or technologies would you recommend learning to be a well-rounded Salesforce architect?

As far as the certifications and Salesforce go, there are Certified Consultant certifications for different areas like Sales, Service, and Experience Cloud. You can get those and obtain a deeper knowledge of Salesforce and the technologies they offer. 

The MuleSoft certification is something I’ve seen a lot about lately. I’ve been studying MuleSoft quite a bit because many customers and Salesforce team members are using it. I recommend learning anything popular with your customers and your work area.

The big thing is just having a perfect understanding of the Salesforce platform and the fundamentals because you can apply those fundamentals to anything the customers need. 

Did you use other materials besides Trailhead from Salesforce?

I did for a little bit right before I took the exam. I looked at some practice exams just to know what I should expect. I found some excellent pages, but others were out of date. You have to be careful; there is lousy information floating around that can lead you to make mistakes.

Some useful resources:

What makes a good Salesforce System Architect?

If you want to become a good Salesforce System Architect, you need to:

  • Understand the customer’s requirements and how to translate those into actual functional tasks.
  • Be able to work with others. Most of the time, architects are almost consultants within a company. You’re not just going to start a task, then sit back and work on it in isolation for a month. You need to be directly involved.
  • Communicate well. Whether it’s the people on your team or your customer, you need to learn how to give the best advice in a way they understand.
  • Have patience. Never jump into things too quickly. Always take your time and investigate. It’s okay to have questions. In those situations, it is better to say: ‘I’m going to do a little research. And I’ll get back to you.’

If you want to learn more, we highly recommend going over these two resources:

Becoming a Salesforce System Architect

Have you become a Salesforce Certified System Architect yet? Travis has and he did it in an impressively short amount of time, so we asked him to share his experience. Before we jump into it, let’s get some background on Travis and learn more about him. 

  • He worked with .NET applications for about seven years.
  • Likes Salesforce’s certification paths because you can show your prior knowledge.
  • Changed his career path by complementing his prior knowledge with Salesforce.
  • Has been working with Salesforce for about four years.
 

Challenges to becoming a Salesforce System Architect

Challenge Accepted

 

Switching to a new career path can be daunting. That’s why we asked Travis what challenges came along with the process of getting these certifications.

Whenever we start something new, there is a certain level of anxiety or fear. This situation, though expected, can hit us like a cold shower at 5 am. This is all part of the process of going in-depth on a new topic. Working with new information also opens up a new area to explore, requiring the necessary amount of research. One of the main challenges Travis found is working in areas where you have no past experience or don’t yet know the basics. These areas bring the more interesting challenges.

I just had to spend a lot of time learning Salesforce well, before I moved into the more in-depth topics

Travis. Oktana Salesforce System Architect

For some people, time management is not an issue. But for us mortals, this may not be the case. Becoming a Salesforce System Architect will take a bite out of your daily routine, and that will be an adjustment. This makes sense because most people work and study at the same time. Travis emphasizes the importance of staying motivated and finding the time to study, especially if you work in a different area where you are developing different skills.

Set aside some time every day to go through the Salesforce recommended material

Travis. Oktana Salesforce System Architect

The path seems pretty simple, right? We start with Salesforce Platform Developer I Certification to get a handle on all the basics, then we continue onwards! One key point to remember though is not to get ahead of things. Each module deserves time and attention. It may seem intuitive to go into the nitty-gritty or jump ahead. However, you could bump into uncharted territory down the line, which may derail or put a stop to your master plan, and nobody wants that. That’s why Travis reminds us to give ourselves some space and take the time we need to work through each of the modules.

Focus on one area or topic at a time. It could get a little more confusing if you don’t.

Travis. Oktana Salesforce System Architect

Valuable Resources to become a Salesforce System Architect

Salesforce’s learning platform, Trailhead, helps you accelerate your digital transformation. The System Architect certification consists of four different certifications, each with its own trails and trailmixes to help guide you. As previously mentioned, Travis recommends completing the Platform Developer I certification first, then going on to the other three. These trails are hands-on guides with examples that let you experience what you’re working on. There is also plenty of documentation and other resources. On top of that, the gamified experience will surely help keep you motivated. On the whole, the trails, plus all the other resources, can be overwhelming at first. That’s why keeping calm and studying as much as you can before the exams is your best bet.

Travis loves the data and seeing what works best. What about you? To get an idea of which module you are most interested in, here is a short description of the modules coming after the Platform Developer I Certification. You will also get an example of how to go about taking the exams. This was the path Travis followed. Let’s see if this works for you!

  • Integration Architecture Designer: This certification is more about active learning. Travis mentions following the recommended material. And he also highlights that general best practices were useful along with his prior experience and knowledge. The exam focuses on a specific scenario, where you’ll have to understand the requirements of the customer. This depends on the apps and data your customer needs and how they should be evaluated and handled. 

  • Identity and Access Management Designer: This certification gets more challenging. Experience in this area is a big factor. The exam focuses on details, configuration, and authentication. It’s all a matter of learning new concepts that will broaden your skills. Travis also points out that the customer and their technologies come into play and influence your decision-making process. Understanding the relationship between the customer and Salesforce is a must.    

  • Development Lifecycle and Deployment Designer: Woohoo! We’re here. This one focuses on the development lifecycle. From development to production and testing, you will go through the methods and steps behind it all. Travis reminds us that the process will inevitably be linked to the customer’s needs. Again, the key is to find the best fit for the customer. 

A Day in the life of a Salesforce System Architect

Now that you have an idea of how to tackle the certification, we also asked Travis what his daily work looks like. He shared that his actual work is similar to the exams. Good to know, right? Knowing the customer and their requirements is always a must. To do this, strong communication and learning how to ask the right questions are definite game changers when dealing with customers. The following is a sample of questions you might ask yourself, he says:

Do we already have a lot of existing applications accessing them?

How do we need to sync all these things in real-time?

Travis. Oktana Salesforce System Architect

In a Nutshell

Here are some of the best tips to remember:

  • Prior knowledge and best practices can be really helpful.
  • Try not to get overwhelmed by all the content.
  • Don’t memorize. 
  • Take your time to understand the content you are learning.
  • If you fail the first time, try again! Don’t be discouraged.
  • Don’t hurry! In the end, this is all to further your career.

If you want to know more about the information Travis shared with us, we also recommend checking out our Salesforce System Architect: Tips, Role, and Responsibilities article and our webinar about Travis’ full experience. 

Salesforce DevOps Q&A

Interested in becoming a DevOps engineer, but you still have some questions about it? Sebastian V, an architect in our Uruguay office, has answers.

What DevOps tools have you come across? Would you recommend any Salesforce-related products?

There are many tools, but the most useful ones are repositories. Then we have different automation tools based on the continuous integration approach. The first tool I used was CumulusCI, a powerful toolset for employees and community collaborators. It allows anyone working on an enhancement to NPSP (Nonprofit Success Pack) or EDA (Education Data Architecture), or even a community project, to spin up a Salesforce instance complete with NPSP or EDA already installed and configured.

CumulusCI builds orgs based on repeatable recipes (dependency management, package or application installation, metadata deployment to tailor the org, and more). CumulusCI makes it easy to define fully realized scratch orgs. It also has pipelines, YAML scripts that let you determine how you want a deployment to run. For example, suppose you need to add test data or run one specific Apex class before the deployment; you can customize the way you are going to deploy.

Jenkins is the second tool I have been using. Jenkins is a free and open-source automation server. It helps automate software development related to building, testing, and deploying, facilitating continuous integration and continuous delivery. You can integrate it with different tools, like Slack, GitHub, Assembla, and more. It’s an excellent tool. 

 

Should the production deployment be automated too?

The idea is that we try to automate everything. The thing about production deployments is that sometimes it is an issue because businesses don’t want an automated tool to do those deployments. 

The primary goal of DevOps is to try to automate everything. This is what we call the Value Stream: the process required to turn a business hypothesis into a technology-enabled service that provides value to the customer.

 

Does CumulusCI use the same SalesforceDX commands?

No, it doesn’t use the same ones. For me, Salesforce DX (SFDX) commands are more difficult to understand because of the way they are written. CumulusCI gives you more human-readable commands, and it lets you create your own commands.

 

How different or similar are the profiles of a DevOps engineer and developer? Do DevOps engineers need to be a developer first?

In the past, operations teams have had a different role in IT. DevOps is not trying to change that because there are some things that the operation team needs to continue doing, like monitoring. The DevOps profile adds development, so if you ask me, you can start from both sides.

It is essential to mention that being on the DevOps side requires knowing how to code, because pipelines are written as code. For example, Jenkins pipelines are written in Groovy (a Java-based language). Also, command-line tools use shell commands, so it is better if you know how to write them.

So most DevOps engineers must understand and know how to work with Git, shell commands, and languages like Java.

 

What should I do if I want to get started in DevOps?

You can follow many paths, and there’s a myth that DevOps engineers are the top senior developers; that is not the case. DevOps has a broader spectrum. And, of course, there are DevOps engineers who are awesome!

But if you want to be a DevOps engineer, remember we are not interested in designing applications; we focus on helping improve the development lifecycle. You can take some great certifications online, like AWS Certified DevOps Engineer – Professional and Certified Kubernetes Application Developer (CKAD). They’re a great way to get started.

 

How is DevOps different from agile?

That’s the myth that says DevOps is coming to eradicate agile. That is a lie: DevOps and agile complement each other. Agile focuses on the requirements side and the building part, while DevOps focuses on what comes next; it’s like an extension of the framework. DevOps focuses on the value streams, how to reduce them, and how to automate. So it’s the next step in the new era of the software engineer. You still need agile, a scrum master, a product owner, and your team. But how do you deliver that to the market? Well, you will have to use DevOps.

 

What is the best thing about DevOps?

Our DevOps team keeps growing, so we asked some of the team members for their favorite things about DevOps:

“What I like the most about DevOps is the interrelationship with all teams in general; one is a full stakeholder. The most important thing to know and remember is that DevOps is not a trade, profession, or specialty; it is a philosophy and culture, not only knowledge about the use of tools. At the same time, a DevOps engineer never stops learning; we are constantly learning new tools and ways of working from different resources. Another exciting thing is the automation of processes to launch servers, monitor them, and generate an infinite number of jobs in this area. I can’t pick one favorite tool because I like them all. I use the IntelliJ IDE, and Infrastructure as Code (IaC) with Terraform + Ansible + Puppet + Salt. For CI/CD, I use Jenkins + GitHub, DroneCI for image generation, Cucumber for acceptance tests, and SonarQube for monitoring.”

Marco Ramirez – Oktana DevOps, Bolivia

“I like being able to have a precise and efficient process of moving the new ‘features’ implemented to other environments. So later, as a developer, you can focus on the development itself and not worry too much about deploying the new features.”  

Kevin Monzon – Oktana DevOps, Uruguay

If you are interested in learning more about DevOps, read our latest article: Introduction to DevOps and Continuous Integration. And if you are interested in joining our family and following this career path, check out our open positions.

Introduction to DevOps and Continuous Integration

Sebastian V. has been working as a release manager on our team in Uruguay for more than a year. He is a Salesforce Certified Application Architect who most enjoys designing development life cycles that improve development capacity. Helping developers and the whole team to work as comfortably as possible brings him great satisfaction.

In this article, you will learn the main goal of DevOps, why we need DevOps, and lastly, we’ll explore the Continuous Integration (CI) framework.

What is DevOps?

The term “DevOps” combines the words developers and operations, which are typically two different teams in IT:

One takes care of the building process and the other takes care of maintenance. When we say developer team, we aren’t talking just about developers, we must include lead developers, tech leads, architects, and quality assurance engineers. And when we say operations team we are talking about system admins, software configuration management engineers, DBAs, network engineers, and server engineers. Everyone is responsible for deploying the application, maintaining the servers, databases, and monitoring the logs and traffic. 

The DevOps movement started around 2006 in response to the culture of fear that the industry generated. The agile process was great at solving the issues between gathering requirements and building, but the software industry was still dysfunctional. 

DevOps is a framework with a series of preconditions, activities, processes, and tools designed to solve and prevent problems such as production issues, rolling back incompatible changes, delayed releases, delays going live to market, and total team burnout. It also helps operations teams avoid complex, time-consuming deployment sessions: the more time between updates, the more the environments tend to diverge from each other. The discrepancies between environments then impact developers, making it more difficult to build new features. To sum it up, these challenges make it harder to get anything out to production.

 

What is this framework about and how can it help to solve some of these problems?

DevOps enables us to simultaneously improve organizational performance and the human condition.

  • End-to-end responsibility

Delivery is a team responsibility. The phrase “it works on my machine” is no longer valid. Developers and the operations team need to take ownership together. Both teams must collaborate from the beginning.

Example: Operations teams could prepare scripts that allow developer teams to work comfortably in their individual environments (automatic deployment scripts, automatic test data loads, environment management scripts for cleaning and cloning). If for any reason manual configurations need to be made, the whole team is responsible for documenting the changes needed, to make sure that a piece of code can be deployed to any given environment. The quality assurance team also has a primary role here: they will be the first to receive a finished piece of code, so it’s a great moment to test the scripts and correct any issues with the deployment activity.

  • Small increments over monolithic deliveries

It’s not easy to land many new features on a running server without any errors. Deliveries should be more frequent and with less “density,” meaning fewer features in each delivery. Ideally, we want just one small functional change delivered at a time, so we can deliver several times during a given period.

Example: Imagine you have an open-source free writing tool that crashes once in a while. You soon realize that you usually save your work when you finish a chapter (every two weeks) but this crash happens once a month, so when this happens you may lose almost two weeks of work in the worst case. You start saving frequently and eventually you end up saving after you finish a paragraph, so in the worst-case scenario, you would only lose one paragraph. This gives you a safer and systematic solution.  The same approach applies to software. If for any reason you need to roll back, you only lose the last change and not the complete release. 

  • Automate everything

We must reduce manual procedures as much as we can. This is where the operations team can help most, by defining configuration scripts or data scripts that are bound to the source code, so that when an environment needs an update, these scripts do all the preparation and post-setup work. Sometimes this level of automation is not going to be possible, but the less manual configuration, the better.

Example: Sometimes manual configuration is easier and faster than scripting. Logging in to the environment and selecting an option in the settings menu is easier than creating a script, finding and writing the appropriate code, and finally testing it. We are often tempted to take the easiest way. But what happens when this procedure needs to be done on every deployment, for each server or each environment? After repeating the steps 10+ times, the script will seem like the easiest way.

  • Run unit tests 

Unit testing is the key to delivery reliability, but developers tend to hate it. We sometimes don’t see the value of writing so many tests, especially the obvious ones, because it feels like losing time. More often than not, unit testing is also the part we forget when we estimate a task.

Example: Imagine you only need to add a few lines of code and the task will be done. You estimate it at two story points (story points are units of measure for expressing the overall effort required to implement a product backlog item or any other piece of work). Then, when you write the code, you realize you should unit test it, and once you dive in, there are many tests to write, so the task actually requires more than two story points.
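To illustrate, here is a minimal sketch of that “few lines of code” scenario, using Node.js’s built-in assert module so no extra library is needed. The discountedPrice function and its values are made up for the example.

```javascript
// discount.test.js: run with `node discount.test.js`.
const assert = require("assert");

// The "few lines of code" from the two-story-point estimate...
function discountedPrice(price, percent) {
  if (percent < 0 || percent > 100) throw new RangeError("percent out of range");
  return price - (price * percent) / 100;
}

// ...and the tests that turn up once you dive in.
assert.strictEqual(discountedPrice(100, 10), 90); // the obvious case
assert.strictEqual(discountedPrice(100, 0), 100); // no discount
assert.strictEqual(discountedPrice(0, 50), 0); // free item
assert.throws(() => discountedPrice(100, 150), RangeError); // invalid input

console.log("all tests passed");
```

Four tests for four lines of logic is typical, which is why the testing effort belongs in the estimate from the start.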

 

Continuous Integration (CI)

Continuous Integration (CI) is a development practice where developers integrate working code several times a day, and each integration is verified by automated tests. The word “integrate” refers to the process of adding the changes. Working code is code that has passed all tests; if the code isn’t working, it can’t be integrated. Because each integration is verified, a history of build and test logs accumulates. This traceability is very important, since it helps track issues and find the exact points where defects were introduced.
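As a rough sketch of the “verify each integration” idea, a CI server could run a script like the one below on every push. The npm commands it calls (ci, lint, test) are assumptions about how the project is set up, not a prescription.

```javascript
// ci-verify.js: sketch of a per-integration verification step.
// A CI server would run this on every push; a non-zero exit code blocks the merge.
const { execSync } = require("child_process");

const steps = ["npm ci", "npm run lint", "npm test"];

for (const step of steps) {
  console.log(`\n>>> ${step}`);
  try {
    // Stream output into the CI log, so every integration leaves a trace.
    execSync(step, { stdio: "inherit" });
  } catch (err) {
    console.error(`FAILED: ${step}. Integration rejected.`);
    process.exit(1);
  }
}
console.log("\nAll checks passed. Integration verified.");
```

Whether this logic lives in a script like this or in a pipeline configuration, the principle is the same: only code that passes every check gets integrated, and every run is logged.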

 

Continuous Integration goals

  • Easily detect integration bugs 

Sometimes a test that works in our environment won’t work in another. This points to differences between the environments that we need to address; resolving them keeps the environments synchronized. It’s also worth mentioning that the more up-to-date tests you run, the stronger the application you build.

Development practices like test-driven development (TDD) are based on this principle: tests are written before the software is fully developed. These practices are considered among the most reliable ways of programming and achieve high development velocity. Despite the general belief that writing tests slows developers down, it’s the other way around, because we lose more time trying to find the source of a defect than we spend writing a unit test.

  • Increase code coverage 

Code coverage is used as a metric of how thorough the unit tests are. It’s based on a simple concept: if, after a test run, there is uncovered source code in areas of our application, we cannot be certain that code is correct, so there could be unspotted defects. Platforms like Salesforce won’t let you deploy code that hasn’t reached a given amount of code coverage, which helps enforce this best practice.
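A tiny, made-up example shows what coverage actually measures. The test below exercises only one branch of shippingCost, so a coverage tool (Istanbul/nyc, for instance) would report the express branch as uncovered, which is exactly where an unspotted defect could hide.

```javascript
// shipping.js: a function with two branches, invented for illustration.
function shippingCost(weightKg, express) {
  if (express) {
    // This branch is never executed by the test below,
    // so a coverage tool would flag it as uncovered.
    return 10 + weightKg * 2;
  }
  return 5 + weightKg;
}

// shipping.test.js
const assert = require("assert");
assert.strictEqual(shippingCost(3, false), 8); // only the standard branch is covered
console.log("test passed, but branch coverage is only 50%");
```

The test suite is green, yet half of the function was never run. That gap is what coverage thresholds are designed to surface.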

  • Develop faster and never ship broken code 

Today, organizations adopting DevOps practices often deploy changes hundreds of times per day. In an age where competitive advantage requires fast time to market, organizations that are still driven by old-fashioned practices are doomed to fail. Two major points make a big impact on development velocity: 

  • Hand-offs required before a deployment can take place. Having to rely on other teams only makes the deployment slower, because those teams won’t be prepared and will have to catch up with the task.

  • Debugging. When unexpected errors occur and we don’t have a clue where they come from, fixing them can take many hours, even days. A unit test doesn’t guarantee a defect-free application, but it helps identify where issues come from, so we can catch them at a very early stage, fix them much faster, and only promote working code.

  • Consistency in the build process 

Using CI pipelines streamlines the product release process, helping teams release high-quality code faster and more often, and lets us rely on an automated framework that can repeat the process over and over again. This consistency is achieved by reducing manual procedures and hand-offs.

  • Increase confidence in the software

CI is the result of decades of experience, and it provides value by helping deliver working systems to users faster. It helps produce a higher-quality code base with fewer defects; both the severity and the frequency of defects drop after adopting continuous integration. This ensures that production deployments run a lot more smoothly, by identifying incompatible aspects earlier, which is critical.

If you want to know more about DevOps and how to become a certified DevOps engineer, we recommend the AWS Certified DevOps Engineer – Professional certification. Also, we are looking for developers to grow our DevOps team, so check our available positions.

The Best JavaScript Certification for 2021: Salesforce JavaScript Developer I

 

The Salesforce JavaScript Developer I certification, introduced in 2020, is an excellent way to demonstrate experience developing with one of the most popular web programming languages. JavaScript developers work with front-end and back-end development and even related technologies like Salesforce’s Lightning web components. This credential is a great way to further your development career.

Salesforce JavaScript certification

Tips and secrets to obtain the Salesforce JavaScript Developer I Certification 

The JavaScript Developer I certification includes a multiple-choice exam that validates core JavaScript development skills. A huge benefit of the Lightning web component programming model is that developers write standard JavaScript. Passing the JavaScript Developer I exam demonstrates that you have the standard JavaScript fundamentals required to develop Lightning web components.

To learn more about this certification, we spoke with two developers from our team in Paraguay: Laura S. and David N. They both decided to complete the certification to refresh their knowledge of JavaScript and to demonstrate their abilities. They both work closely with Lightning web components and wanted to expand their knowledge of JavaScript to help with that work.

The certification consists of two parts: the Lightning Web Components Specialist Superbadge and the JavaScript Developer I multiple-choice exam. These two parts can be completed in either order. Laura and David found that it can take approximately four weeks to finish the trailmix and study for the certification, but it all depends on the hours of study you dedicate to it.

The exam is structured around seven main topics.

Laura and David agreed that the most difficult topics are variables, types, and collections, specifically the data types JavaScript handles, since each one has its own methods. Among the easiest topics to understand is asynchronous programming. In the exam, you are given scenarios and asked to apply asynchronous programming concepts, like using the event loop and event monitor, or determining loop outcomes.
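To give an idea of the kind of “determine the output” scenario this involves, here is a small snippet showing how the event loop orders synchronous code, microtasks (Promises), and macrotasks (setTimeout):

```javascript
console.log("1: synchronous");

setTimeout(() => console.log("4: macrotask (setTimeout)"), 0);

Promise.resolve().then(() => console.log("3: microtask (Promise)"));

console.log("2: synchronous");

// Output order: 1, 2, 3, 4.
// Synchronous code runs first; when the call stack empties, the event loop
// drains the microtask queue (Promises) before picking up macrotasks (timers).
```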

David considers understanding JavaScript’s testing functions one of the more interesting topics. Other programming languages require additional libraries to do unit tests, while JavaScript (through Node.js) ships with built-in test utilities such as the assert module. Laura considers server-side JavaScript (Node.js) more interesting: there are many languages that can be used for server-side development, but she prefers JavaScript.

Oktana’s training team was very helpful to Laura and David. The team guided them from the beginning through to obtaining the certification. They gave them access to platforms like Focus on Force and Udemy, where they could practice until they felt ready to take the exam.

 

Why do we recommend this certification?

There are many reasons to want a certification like this. For some, it’s a way to advance their career. For others, it’s an opportunity to structure and strengthen their knowledge of the language.  

In David’s case, he works on the front end and handles web components, so having a deeper understanding of JavaScript helps a lot. It is also a useful tool in some types of projects: for example, in Salesforce projects where you have to modify a component, that work is 100% JavaScript. Additionally, the certification exposes you to new elements you may not have worked with before and that are not commonly taught. There is always more than one way to solve a problem, and the certification helps you discover new functions and approaches that can make your work easier. Finally, JavaScript is the language browsers run natively, so it is the most logical thing to learn. It is very useful!

Laura and David highly recommend this certification and they believe all developers should obtain it. It’s also a great way to learn more about Salesforce.

 

Certification preparation resources 

 

What are you waiting for? Start preparing for this certification today. And if you are interested in other Salesforce certifications, our team strongly suggests pursuing the Salesforce Platform Developer I.