Sunday, 4 February 2018

Building My Own Learning System - Part 2

Introduction

In Part 1 of this blog series I covered the problem I was trying to solve (on-boarding/accrediting internal and external users with the same content, but without opening up all my content to everyone) and the data model to support this. I also mentioned that this isn’t an attempt to rebuild Trailhead, and that is still the case.

In this post I’ll cover the user interface and the elements of the solution that allowed me to test it without having to build out the entire backend.

The “Design”

The first thing I did was to sketch out what I wanted a couple of the components to look like. The original sketches are below, and I think we can all agree that this communicates the entire concept of the look and feel I’m trying to achieve ;)

[Screenshot: original sketches of the components]

It did help me to think about how I wanted things to work though.

Single Page Application

I wanted a single page application (SPA) as I wasn’t going to have the sObjects in the same instance as the client, so lightning navigation between sObjects wasn’t an option. This does present a challenge with regard to bookmarking, but that is something I think I can do by making the SPA support URL parameters. The user might have to jump through a couple of extra hoops, but nothing too arduous, so I felt happy kicking that down the road to a later release.
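As a flavour of what I have in mind, here’s a rough sketch of restoring the SPA state from URL parameters in the component’s init handler - the parameter names and the helper functions are invented for illustration rather than part of the current implementation:

// Hypothetical sketch - restore SPA state from URL parameters on initialisation.
// The 'path' and 'step' parameter names and the loadPath/loadPaths helpers are
// invented for this example.
init : function(component, event, helper) {
    var params=new URLSearchParams(window.location.search);
    var pathId=params.get('path');
    var stepId=params.get('step');
    if (pathId) {
        // deep link - go straight to the bookmarked path/step
        helper.loadPath(component, pathId, stepId);
    }
    else {
        // no parameters - show the default list of paths
        helper.loadPaths(component);
    }
}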

The SPA consists of a central section and a right-hand sidebar. The sidebar contains details of the current content endpoint and allows switching of endpoints, while the central section contains the actual learning content. The page is constructed from a number of lightning components and styled using the SLDS, as I want people to use it from inside Salesforce so it’s important that the styling is familiar.

[Screenshot: the training SPA, with learning content in the central section and endpoint details in the right-hand sidebar]

Fake News

When I’m building an application of this nature, I’ll usually create a fake data provider so that I can get the UI flow without having to put a lot of effort into writing the actual server side implementation. Usually this is because I’m building it out in my spare time and it allows me to get something to throw stones at in place quickly.  As I’m looking at a distributed system in this case, it was even more useful as I didn’t have to create remote content endpoints and manage the integration with them. Instead I created the initial cut of the Apex interface that I want an endpoint to support and then wrote a faker implementation class that would return indicative but hardcoded responses.  This approach has the added benefit of allowing me to iterate on the interface without having to update multiple implementations of it.
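As an illustration, the interface and faker pairing might look something like the following - the type and method names are invented for this sketch rather than lifted from the actual implementation, and in a real org each top-level type would live in its own class file:

// Illustrative sketch only - an endpoint interface and a fake implementation
// that returns indicative, hardcoded responses
public interface ContentEndpoint {
    List<Path> getPaths();
    Path getPath(String pathId);
    void completeStep(String pathId, String stepId);
}

public class FakeContentEndpoint implements ContentEndpoint {
    public List<Path> getPaths() {
        // hardcoded response - no remote content endpoint required
        Path example=new Path();
        example.id='PATH-001';
        example.name='Getting Started';
        return new List<Path>{example};
    }
    public Path getPath(String pathId) {
        return getPaths()[0];
    }
    public void completeStep(String pathId, String stepId) {
        // pretend the step completed successfully
    }
}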

Training Page

My training page is a lightning page with the training SPA added to it. Notice that I didn’t create a header aspect for my SPA - this is because a lightning page automatically adds a header that I can’t customise. The page thus has the lightning experience global header and the standard page header, so if I add a third one then most of the visible area is consumed. 

The SPA initially displays the available training paths from the fake service, laid out as a wrapping grid that shows 3 paths per row for the desktop and 1 per row on mobile:

[Screenshot: grid of available training paths]
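The responsive layout is the sort of thing the SLDS grid utility classes handle - a simplified sketch of the markup is below, with the c:PathTile component invented for the example:

<!-- simplified sketch: slds-wrap lets the tiles flow onto new rows, and the
     sizing classes give 1 tile per row on small screens, 3 on medium and up -->
<div class="slds-grid slds-wrap">
    <aura:iteration items="{!v.paths}" var="path">
        <div class="slds-size_1-of-1 slds-medium-size_1-of-3 slds-p-around_small">
            <c:PathTile path="{!path}"/>
        </div>
    </aura:iteration>
</div>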

Clicking on any of the paths shows the underlying steps that I need to complete:

[Screenshot: the steps for the selected path]

and clicking into a step takes me to the actual content with any questions that have been defined, although for the demo step I’ve chosen here the fake service pretends I’ve completed it:

[Screenshot: step content, which the fake service reports as complete]

And that wraps up this post. In the next instalment I’ll cover the integration with a remote endpoint.

Friday, 26 January 2018

SFDX and the Metadata API Part 4 - VSCode Integration

Introduction

In the previous instalments of this blog series I’ve shown how to deploy metadata, script the deployment to avoid manual polling and carry out destructive changes. All key tasks for any developer, but executed from the command line. On a day to day basis I, like just about any other developer in the Salesforce ecosystem, will spend large periods of the day working on code in an IDE. As it has Salesforce support (albeit still somewhat fledgling) I’ve switched over completely to the Microsoft VSCode IDE. The Salesforce extension does provide a mechanism to deploy local changes, but at the time of writing (Jan 2018) only to scratch orgs, so a custom solution is required to target other instances.

In the examples below I’m using the deploy.js Node script that I created in SFDX and the Metadata API Part 2 - Scripting as the starting point.

Sample Code

My sample class is so simple that I can’t think of anything to say about it, so here it is:

public with sharing class VSCTest1 {
    public VSCTest1() {
        Contact me;
    }
}

and the package.xml to deploy this:

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<Package xmlns="http://soap.sforce.com/2006/04/metadata">
    <types>
        <members>*</members>
        <name>ApexClass</name>
    </types>
    <version>40.0</version>
</Package>

VSCode Terminal

VSCode has a nice built-in terminal in the lower panel, so the simplest and least integrated solution is to run my commands through this. It works, and I get my set of results, but it’s clunky.

[Screenshot: deployment command output in the VSCode terminal]

VSCode Tasks

If I’m going to execute deployments from my IDE, what I’d really like is a way to start them from a menu or shortcut key combination. Luckily the designers of VSCode have foreseen this and have the concept of Tasks. Simply put, a Task is a way to configure VSCode with details of an external process that compiles, builds, tests etc. Once configured, the process will be available via the Task menu and can also be set up as the default build step. 

To configure a Task, select the Tasks -> Configure Tasks menu option and choose the Create tasks.json file from template option in the command bar dropdown:

[Screenshot: the Create tasks.json file from template option]

Then select Others from the resulting menu of Task types:

[Screenshot: the Others task type option]

This will generate a boilerplate tasks.json file with minimal information, which I then add details of my node deploy script to:

{
    "version": "2.0.0",
    "tasks": [
        {
            "label": “build",
            "type": "shell",
            "command": "node",
            "args":["deploy.js"]
        }
    ]
}

I then execute this via the Tasks -> Run Task menu, choosing 'build' from the command bar dropdown and selecting 'Continue without scanning the task output'.

This executes my build in the terminal window much like before, but saves me having to remember and enter the command each time:

[Screenshot: the build task executing in the terminal window]

Sadly I can’t supply parameters to the command when executing it, so if I need to deploy to multiple orgs I need to create multiple entries in the tasks.json file, but for the purposes of this blog let’s imagine I’m living a very simple life and only ever work in a single org!
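If I did need multiple orgs, the workaround would be along these lines - one labelled task per org, assuming deploy.js were extended to read the target org from a parameter:

    "tasks": [
        {
            "label": "build dev",
            "type": "shell",
            "command": "node",
            "args": ["deploy.js", "dev"]
        },
        {
            "label": "build uat",
            "type": "shell",
            "command": "node",
            "args": ["deploy.js", "uat"]
        }
    ]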

Capturing Errors

Executing my command from inside VSCode is the first part of an integrated experience, but I still have to check the output myself to figure out if there are any errors and which files they are located in. For that true developer experience I’d like feedback from the build stage to be immediately reflected in my code. To capture an error I first need to generate one, so I set my class up to fail:

public with sharing class VSCTest1 {
    public VSCTest1() {
        Contact me;
        // this will fail
        me.do();
    }
}

VSCode Tasks can pick up errors, but it requires a bit more effort than simple configuration.

Tasks detect errors via ProblemMatchers - these take a regular expression to parse an error string produced by the command and extract useful information, such as the filename, line and column number and error message. 

While my deploy script has access to the error information, it’s in JSON format which the ProblemMatcher can’t process. Not a great problem though, as my node script can extract the errors from the JSON and output them in regexp friendly format. 

Short Diversion into the Node Script

As I’m using execFileSync to run the SFDX command from my deploy script, if the command returns a non-zero result - which SFDX does if there are failures on the deployment - it will throw an exception and halt the script. To get around this without having to resort to executing the command asynchronously and capturing the stdout, stderr etc, I simply send the error stream output to a file and catch the exception, if there is one. I then check the error output: if it was a failure on deployment, I just use that instead of the regular output stream; if it was a “real” exception, I let the command fail. This is all handled by a single function that also turns the captured response into a JavaScript object:

function execHandleError(cmd, params) {
    var resultJSON;
    var result;
    // send stderr to a file so the output survives any exception
    var err=fs.openSync('/tmp/err.log', 'w');
    try {
        resultJSON=child_process.execFileSync(cmd, params, {stdio: ['pipe', 'pipe', err]});
        result=JSON.parse(resultJSON);
        fs.closeSync(err);
    }
    catch (e) {
        fs.closeSync(err);
        // the command returned non-zero - this may mean the metadata operation
        // failed, or there was an unrecoverable error.
        // An opening brace indicates a JSON response, i.e. a deployment failure
        var errMsg=''+fs.readFileSync('/tmp/err.log');
        var bracePos=errMsg.indexOf('{');
        if (-1!=bracePos) {
            resultJSON=errMsg.substring(bracePos);
            result=JSON.parse(resultJSON);
        }
        else {
            // a "real" exception - rethrow and let the command fail
            throw e;
        }
    }

    return result;
}

Once my deployment has finished, I check to see if it failed and if it did, extract the failures from the JSON response:

if ('Failed'===result.result.status) {
    if (result.result.details.componentFailures) {
        // handle if single or array of failures
        var failureDetails;
        if (Array.isArray(result.result.details.componentFailures)) {
            failureDetails=result.result.details.componentFailures;
        }
        else {
            failureDetails=[];
            failureDetails.push(result.result.details.componentFailures);
        }
        ...
    }
    ...
}

and then iterate the failures and output text versions of them.

for (var idx=0; idx<failureDetails.length; idx++) {
    var failure=failureDetails[idx];
    console.log('Error: ' + failure.fileName +
                ': Line ' + failure.lineNumber +
                ', col ' + failure.columnNumber +
                ' : ' + failure.problem);
}

Back in the Room

Rerunning the task shows any errors that occur:

[Screenshot: error output from the task in the terminal window]

I can then create my regular expression to extract information from the failure text - I used Regular Expressions 101 to create this, as it allows me to baby-step my way through building the expression. Once I’ve got the regular expression down, I add the ProblemMatcher stanza to tasks.json:

"problemMatcher": {
    "owner": "BB Apex",
    "fileLocation": [
        "relative",
        "${workspaceFolder}"
    ],
    "pattern": {
        "regexp": "^Error: (.*): Line (\\d)+, col (\\d)+ : (.*)$",
        "file": 1,
        "line": 2,
        "column": 3,
        "message": 4
    }
}

Now when I rerun the deployment, the problems tab contains the details of the failures surfaced by the script:

[Screenshot: the problems tab listing the deployment failures]

and I can click on the error to be taken to the location in the offending file.

There’s a further wrinkle to this, in that lightning components report errors in a slightly different format - the row/column in the result is undefined, but if it is known it appears in the error message on the following line, e.g.

Error: src/aura/TakeAMoment/TakeAMomentHelper.js: Line undefined, col undefined : 0Ad80000000PTL3:8,2: ParseError at [row,col]:[9,2]
Message: The markup in the document following the root element must be well-formed.

This is no problem for my task, as the ProblemMatcher attribute can specify an array of elements, so I just add another one with an appropriate regular expression:

"problemMatcher": [ {
        "owner": "BB-apex",
        ...
    },
    {
        "owner": "BB-lc",
        "fileLocation": [
            "relative",
            "${workspaceFolder}"
        ],
        "pattern": [ {
            "regexp": "^error: (.*): Line undefined, col undefined : (.*): ParseError at \\[row,col\\]:\\[(\\d+),(\\d+)]$",
            "file": 1,
            "line": 3,
            "column": 4,
        },
        {
            "regexp":"^(.*$)",
            "message": 1
        } ]
    }],

Note that I also specify an array of patterns to match the first and second lines of the error output. If the error message was spread over 5 lines, I’d have 5 of them.

You can view the full deploy.js file at the following GIST, and the associated tasks.json.

Default Build Task

Once the tasks.json file is in place, you can set this up as the default build task by selecting the Tasks -> Configure Default Build Task menu option, and choosing Build from the command drop down menu. Thereafter, just use the keyboard shortcut to execute the default build.
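Under the covers this adds a group entry to the task in tasks.json, which can equally be added by hand - thereafter the default build shortcut (⇧⌘B on a Mac) runs it:

    {
        "label": "build",
        "type": "shell",
        "command": "node",
        "args": ["deploy.js"],
        "group": {
            "kind": "build",
            "isDefault": true
        }
    }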

Saturday, 13 January 2018

Building My Own Learning System - Part 1

Introduction

Before I get started on this post, I want to make one thing clear. This is not Trailhead. It’s not Bob Buzzard’s Trailhead. It’s not a clone or wannabe of Trailhead. While it would be fun to build a clone of Trailhead, all it would be is an intellectual exercise to see how close I could get. So that’s not what I did. I didn’t build my own Trailhead. Are we clear on that? Nor is it MyTrailhead, although it could be used in that way. But again, I’m not looking to clone an existing solution, even if it is still in pilot and likely to stay there for a couple of releases. I’m coming at this from a different angle, as will hopefully become clear from this and subsequent blog posts. Put the word Trailhead out of your mind.

All that said, I was always going to build my own training system. Pretty much every post I’ve written about Trailhead had a list of things I’d like to see, and I can only suppress the urge to write code in this space for so long. This might mean that I moderate my demands, realising how difficult things really are when you have to implement them rather than just think about them in abstract form.

The Problem

Trailhead solves the problem of teaching people about Salesforce at scale, with content that comes from the source and is updated with each release. MyTrailhead is about training/onboarding people into your organisation. The problem I was looking to solve was somewhat different, although closer to MyTrailhead. I wanted a way to onboard people from inside and outside my organisation onto a specific application or technology, but without sending everyone through the same process.

For example, regular readers of this blog or my medium posts will know that I run product development at BrightGen, and that we have a mature Full Force solution in BrightMedia. We also have a bunch of collateral and training material around BrightMedia that I’d like to surface to various groups of people:

  • Internal BrightGen sales team
  • Internal BrightGen developers
  • External customer users

I don’t particularly want a single training system, as this would mean giving external users access to internal systems. It’s also likely that I’ll have a bunch of training information that isn’t BrightMedia specific, and I don’t really want to colocate this with everything else.

Essentially what I’m looking for is a training client that can connect to multiple endpoints, each endpoint containing content specific to a product/application/team. That, and a way to limit who can access the content, allows me to colocate the content with the application, potentially in the packaging org that contains the application.

The First Stirrings of the Solution

Data Model

As the client won’t be accessing data from the same Salesforce org, or potentially any Salesforce org, my front end is backed by a custom Apex class data model rather than sObjects:

[Screenshot: data model diagram]

I’ve deliberately chosen names that are different to Trailhead, because as we all know this isn’t Trailhead. I was very tempted to use insignia rather than badge, as I think that gives it a somewhat British feel, but in the end I decided that would confuse people. Each path has topics associated with it so that I can see how strong a candidate is in a particular field. The path and associated steps are essentially the learning template, while the candidate path/step tracks the progress of a candidate through the path. A path has a badge associated with it and once a candidate completes all steps in the path they are awarded the badge. The same(ish) data model as myriad training systems around the globe.

The records that back this data model live in the content endpoint. Thus the candidate doesn’t have a badge count per se, instead they have a badge count per functional area. In the BrightGen scenario they will have a badge count for BrightMedia, and a separate badge count for other product areas. They can also have multiple paths in progress, striped across content endpoints.
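In code terms the skeleton is a handful of simple Apex classes along the following lines - the field names are illustrative rather than the actual implementation, and each class would live in its own file:

// Illustrative skeleton of the custom class data model - not the real thing
public class Badge {
    public String name;
}

public class Step {
    public String id;
    public String name;
    public String content;
}

public class Path {
    public String id;
    public String name;
    public List<String> topics;   // used to gauge a candidate's strength in a field
    public List<Step> steps;      // the learning template
    public Badge badge;           // awarded when all steps are complete
}

public class CandidateStep {
    public Step step;
    public Boolean complete;
}

public class CandidatePath {
    public String candidateId;
    public Path path;
    public List<CandidateStep> steps;  // tracks progress through the template
}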

User Interface

I created the front end to work against these custom classes as a single page application. As the user selected paths and steps the page would re-render itself to show the appropriate detail. I’m still tweaking this so I’ll cover the details in the next post in the series.

Show me the Code

I don’t plan to share any code in these posts until the series is complete, at which point I’ll open source the whole thing on github, mainly because it isn’t ready yet. I’m pretty sure I’ve got the concepts straight in my head, but the detail keeps changing as I think of different ways of doing things.

Sunday, 24 December 2017

SFDX and the Metadata API Part 3 - Destructive Changes

Introduction

In Part 1 of this series, I covered how you can use the SFDX command line tool to deploy metadata to a regular (non-scratch) Salesforce org, including checking the status and receiving the results of the deployment in JSON format. In Part 2, I showed how to combine the deploy and check into a node script that shows the progress of the deployment.

Creating metadata is only part of the story when implementing Salesforce or building an application on the platform. Unless you have supernatural prescience, it’s likely you’ll need to remove a component or two as time goes by. While items can be manually deleted, that’s not an approach that scales and you are also likely to get through a lot of developers when they realise their job consists of replicating the same manual change across a bunch of instances!

It’s just another deployment

Destroying components in Salesforce is accomplished in the same way as creating them - via a metadata deployment, but with a couple of key differences.

Empty Package Manifest

The package.xml file for destructive changes is simply an empty variant that only contains the version of the API being targeted:

<?xml version="1.0" encoding="UTF-8"?>
<Package xmlns="http://soap.sforce.com/2006/04/metadata">
    <version>41.0</version>
</Package>

Destructive Changes Manifest

The destructiveChanges.xml file identifies the components to be destroyed - it’s the same format as any other package.xml file, but doesn’t support wildcards so you need to know the name of everything you want to deep six. Continuing the theme in this series of using my Take a Moment blog post as the source of examples, here’s the destructive changes manifest to remove the component:

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<Package xmlns="http://soap.sforce.com/2006/04/metadata">
    <types>
        <members>TakeAMoment</members>
        <name>AuraDefinitionBundle</name>
    </types>
    <version>40.0</version>
</Package>

Destroy Mode Engaged

For the purposes of this post I’ve created a directory named destructive and placed the two manifest files in there. I can then execute the following command to remove the app from my dev org.

> sfdx force:mdapi:deploy -d destructive -u keirbowden@googlemail.com -w -1

Note that I’ve specified the -w switch with a value of -1, which means the command will poll the Salesforce server until it completes. As this is a deployment it can also be handled by a node script in the same way that I demonstrated in Part 2 of this series. 
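In script form it would be a minimal tweak to the Part 2 version - point the deploy at the destructive directory:

// the only change from the Part 2 deploy script - deploy the directory
// containing the destructive changes manifests
var deployParams=['force:mdapi:deploy', '-d', 'destructive',
                  '-u', username, '--json'];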

The output of the command is as follows:

619 bytes written to /var/folders/tn/q5mzq6n53blbszymdmtqkflc0000gs/T/destructive.zip using 25.393ms
Deploying /var/folders/tn/q5mzq6n53blbszymdmtqkflc0000gs/T/destructive.zip...

=== Status
Status:  Pending
jobid:  0Af80000003zYCfCAM
Component errors:  0
Components deployed:  0
Components total:  0
Tests errors:  0
Tests completed:  0
Tests total:  0
Check only: false


Deployment finished in 2000ms

=== Result
Status:  Succeeded
jobid:  0Af80000003zYCfCAM
Completed:  2017-12-24T17:14:02.000Z
Component errors:  0
Components deployed:  1
Components total:  1
Tests errors:  0
Tests completed:  0
Tests total:  0
Check only: false

and the component is gone from my org. If I’ve deployed it to a bunch of other orgs, I just need to re-run the command with the appropriate -u switch.
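If there are a lot of them, a quick shell loop saves some typing - the usernames here are invented for the example:

for org in dev@example.com uat@example.com; do
    sfdx force:mdapi:deploy -d destructive -u $org -w -1
done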

Tuesday, 19 December 2017

SFDX and the Metadata API Part 2 - Scripting

Introduction

In Part 1 of this series, I covered how you can use the SFDX command line tool to deploy metadata to a regular (non-scratch) Salesforce org, including checking the status and receiving the results of the deployment in JSON format. In this post I’ll show how the deploy and check can be combined in a node script to allow you to show the progress of the deployment. The examples below are for macOS - if that isn’t your operating system you may have to do some tweaking, although as I’m not using directory names I don’t think that will be the case. These examples also assume that you are in the directory that you cloned the repository into in Part 1.

Needs Node

You’ll need Node.js installed in order to try out the example - you can download it here. You’ll also need the SFDX CLI, but I’m sure everyone has that from the first post in this series, right?

Node has a built-in module, named child_process, that allows you to execute an application on your local disk.  While most things Node and JavaScript are asynchronous, the authors of child_process have given us synchronous versions too. These block the node event loop until the application has finished executing and then return the output of the application as the result. Perfect for scripting.

Executing SFDX

The function that we are interested in is execFileSync, which takes the name (or path) of the application to execute and an array of parameters. In the previous post, my command to execute the deployment was:

> sfdx force:mdapi:deploy -d src -u keirbowden@googlemail.com

To carry out the same operation via the execFileSync function:

child_process.execFileSync('sfdx',
                           ['force:mdapi:deploy', '-d', 'src',
                            '-u', 'keirbowden@googlemail.com']);

Processing the Results

By default the results of the deploy will be returned in a human-readable text format, but supplying an additional parameter of '--json' turns it into JSON format (note this is much-redacted output!):

{
  "status": 0,
  "result": {
      ...
    "id": "0Af80000003ynf6CAA",
    "status": "Succeeded",
    "success": true
  }
}

which JavaScript can easily parse into an object - this is the main reason that I script tools like this in node - parsing JSON in bash scripts is way more difficult.

var jsonResult=child_process.execFileSync('sfdx',
                               ['force:mdapi:deploy', '-d', 'src',
                                '-u', 'keirbowden@googlemail.com', '--json']);
var result=JSON.parse(jsonResult);
console.log('Status = ' + result.result.status);

Outputs ‘Status = Succeeded’.

Polling the Deployment

The result from the execution of the deployment contains an id parameter:

    "id": "0Af80000003ynf6CAA"

which can be used to request a deployment report from the org. 

child_process.execFileSync('sfdx',
                           ['force:mdapi:deploy:report',
                            '-i', result.result.id,
                            '-u', 'keirbowden@googlemail.com', '--json']);

the results of which are again returned in JSON format and can be processed easily through JavaScript.

All together now

Based on the above learning, here’s the sample node script that executes a deployment and then polls the org until it has completed, successfully or otherwise:

#!/usr/local/bin/node

var child_process=require('child_process');
var username='keirbowden@googlemail.com';

var deployParams=['force:mdapi:deploy', '-d', 'src',
                  '-u', username, '--json'];

var resultJSON=child_process.execFileSync('sfdx', deployParams);
var result=JSON.parse(resultJSON);
var status=result.result.status;
while (-1==(['Succeeded', 'Canceled', 'Failed'].indexOf(status))) {
    var msg='Deployment ' + status;
    if ('Queued'!=status) {
        msg+=' (' + result.result.numberComponentsDeployed + '/' +
                    result.result.numberComponentsTotal + ')';
    }
    console.log(msg);
    var reportParams=['force:mdapi:deploy:report', '-i', result.result.id,
                      '-u', username, '--json'];
    resultJSON=child_process.execFileSync('sfdx', reportParams);
    result=JSON.parse(resultJSON);
    status=result.result.status;
}
console.log('Deployment ' + result.result.status);

Breaking this up a little, the child_process module is included and the name of my user assigned to a variable as I’ll be using it in a few places:

var child_process=require('child_process');
var username='keirbowden@googlemail.com';

The deployment is then executed and the results parsed:

var deployParams=['force:mdapi:deploy', '-d', 'src',
                  '-u', username, '--json'];

var resultJSON=child_process.execFileSync('sfdx', deployParams);
var result=JSON.parse(resultJSON);

Then the code enters a loop that continues until the deployment has completed:

var status=result.result.status;
while (-1==(['Succeeded', 'Canceled', 'Failed'].indexOf(status))) {

Next I generate a message to display to the user - the status of the deployment and, if it isn’t queued, the number of components deployed and the total number:

var msg='Deployment ' + status;
if ('Queued'!=status) {
    msg+=' (' + result.result.numberComponentsDeployed + '/' +
                result.result.numberComponentsTotal + ')'
}
console.log(msg);

Then I execute the report on the status of the deployment and assign the results to the existing variable - as far as I’ve been able to tell the results of the deploy and deploy report are the same, but this doesn’t appear to be documented anywhere so may be subject to change:

var reportParams=['force:mdapi:deploy:report', '-i', result.result.id,
                  '-u', username, '--json'];
resultJSON=child_process.execFileSync('sfdx', reportParams);
result=JSON.parse(resultJSON);
status=result.result.status;

and round the loop it goes again.

Executing this script generates the following output:

> node deploy.js
Deployment Queued
Deployment Pending (0/0)
Deployment Pending (0/0)
Deployment Pending (0/0)
Deployment InProgress (0/1)
Deployment Succeeded

Conclusion

This script just scratches the surface of what can be done with the results - for example, if any of the components fail it can raise the alarm in the org or elsewhere. It also polls continuously, which I wouldn’t recommend for a large number of components - I typically sleep for a few seconds in between each request. It also doesn’t do much with the final result aside from writing it to the screen. 
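There’s no built-in sleep in synchronous node code, but a crude sketch of the kind of pause I mean is below - it shells out to the OS sleep command, so it assumes a Unix-like system (macOS/Linux):

// crude synchronous pause - shells out to the OS 'sleep' command
function pause(seconds) {
    child_process.execFileSync('sleep', ['' + seconds]);
}

Calling pause(5) at the top of the polling loop then spaces the report requests out.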

The SFDX force:mdapi:deploy command does have a ‘-w’ option to wait a specified period of time for the deployment to complete, which if set to -1 reports the progress at regular intervals in a very similar way until the deployment completes. This is fine if you are happy to wait until the end before taking any further action, but I like this granularity so that I can take action as soon as something happens. You should use what works for you.

Friday, 15 December 2017

Santa Force is Coming to Town

Introduction

This post comes all the way from Lapland, from the workshop of Santa Force, a long-time Salesforce user. This Salesforce instance has received some enhancements to help with the unique problems of this unique non-profit, which we’ll take a closer look at.

Customisations

There are a few additional fields on user, which don’t necessarily make a lot of sense when viewed in isolation:

[Screenshot: additional fields on the user object]

However, they are vital for a formula field:

[Screenshot: the formula field definition]

So as you can see, you’d better watch out, not pout and not cry. This might seem an odd requirement, but the help text tells you why:

[Screenshot: the formula field help text]

On to Santa Force now, he’s making a list view. The elves created one a few days ago, but there are some issues with it - the name doesn’t look right and there are a few fields missing.

[Screenshot: the elves’ list view]

Santa Force clones the list view, renames it and adds the required fields:

[Screenshot: the cloned and renamed list view with the required fields]

This is much better - he’s making a list view, he’s checked it twice and can now see who has been naughty or nice.

There’s also a process builder that works off another custom field on the contact record - a checkbox field labelled Asleep?

[Screenshot: the process builder based on the Asleep? field]

So Santa Force knows if you are sleeping, and knows if you are awake, because it is posted to his chatter feed:

[Screenshot: the chatter feed post]

Finally, there’s one more field on contact - Goodness. Santa Force can look at this field and determine if you’ve been good or bad - there’s also some useful help text that will guide a contact’s behaviour if they view this through the community.

[Screenshot: the Goodness field with its help text]

Why?

The key question is what is all this information being gathered for? Checking the calendar, we can see that it’s for an event scheduled for 24th December:

[Screenshot: the calendar event scheduled for 24th December]

As you can see, Santa Force is Coming to Town!

Happy Christmas everyone and thanks for reading the Bob Buzzard Blog.

Tuesday, 12 December 2017

SFDX and the Metadata API

Introduction

SFDX became Generally Available in the Winter ’18 release of Salesforce and I was ready for it. However, my use case was our BrightMedia appcelerator, which is mostly targeted at sandboxes and production orgs, where scratch orgs wouldn’t really help that much. The good news is that the SFDX CLI has support for metadata deploy/retrieve operations via the mdapi commands in the force topic.

What you need

In order to deploy metadata you need the directory structure and package.xml manifest - if you’ve used the Force.com migration tool (ant) or the Force CLI, this should be familiar. For the purposes of this blog I’m using the GitHub repository from my Take a Moment blog post, which has the following structure:

src/
src/package.xml
src/aura/
src/aura/TakeAMoment
src/aura/TakeAMoment/TakeAMoment.cmp
src/aura/TakeAMoment/TakeAMoment.cmp-meta.xml
src/aura/TakeAMoment/TakeAMoment.css
src/aura/TakeAMoment/TakeAMomentController.js
src/aura/TakeAMoment/TakeAMomentHelper.js
src/aura/TakeAMoment/TakeAMomentRenderer.js

What you do

The first thing I do is clone the repo to my local filesystem and navigate to the directory created:

 > git clone https://github.com/keirbowden/TakeAMoment.git
Cloning into 'TakeAMoment'...
remote: Counting objects: 20, done.
remote: Total 20 (delta 0), reused 0 (delta 0), pack-reused 20
Unpacking objects: 100% (20/20), done.
> cd TakeAMoment

I then set this up as an SFDX project:

> sfdx force:project:create -n .
create sfdx-project.json
conflict README.md
force README.md
create config/project-scratch-def.json

Next I login to one of my dev orgs:

> sfdx force:auth:web:login
Successfully authorized keirbowden@googlemail.com with org ID …..
You may now close the browser

(For the purposes of this blog my login is ‘keirbowden@googlemail.com’ - substitute your username in the commands below)

Everything is now set up and I can deploy to my dev org:

> sfdx force:mdapi:deploy -d src -u keirbowden@googlemail.com
2884 bytes written to /var/folders/tn/q5mzq6n53blbszymdmtqkflc0000gs/T/src.zip using 36.913ms
Deploying /var/folders/tn/q5mzq6n53blbszymdmtqkflc0000gs/T/src.zip...

=== Status
Status:  Queued
jobid:  0Af80000003ynf6CAA

The deploy request did not complete within the specified wait time [0 minutes].
To check the status of this deployment, run "sfdx force:mdapi:deploy:report"

Sometimes the deployment completes immediately, but most of the time it takes a bit longer and I have to query the status via the command that the SFDX CLI helpfully gives me in the output:

> sfdx force:mdapi:deploy:report
=== Result
Status: Succeeded
jobid: 0Af80000003ynf6CAA
Completed: 2017-12-12T16:28:39.000Z
Component errors: 0
Components checked: 1
Components total: 1
Tests errors: 0
Tests completed: 0
Tests total: 0
Check only: true

And that’s it - my deployment is done!

Why would you do this?

That’s a really good question. For me, the following reasons are good enough:

  1. The SFDX CLI, unlike the Force Migration Tool, uses OAuth to authorise operations, so I don’t need to specify the password in plaintext. It also means that the rest of my team don’t need to learn Ant.
  2. The SFDX CLI, unlike the Force CLI, allows me to fire the deployment off and query the status later, plus it gives me a lot of information in the report.

It’s also clear to me that SFDX is the future, so aligning myself with the SFDX CLI seems a sensible move.

It also allows me to get the status of the deployment as JSON:

> sfdx force:mdapi:deploy:report --json

which gives me a ton of information:

{
  "status": 0,
  "result": {
    "checkOnly": false,
    "completedDate": "2017-12-12T16:28:39.000Z",
    "createdBy": "00580000001ju2C",
    "createdByName": "Keir Bowden",
    "createdDate": "2017-12-12T16:28:09.000Z",
    "details": {
      "componentSuccesses": [
        {
          "changed": "true",
          "componentType": "AuraDefinitionBundle",
          "created": "true",
          "createdDate": "2017-12-12T16:28:36.000Z",
          "deleted": "false",
          "fileName": "src\/aura\/TakeAMoment",
          "fullName": "TakeAMoment",
          "id": "0Ab80000000PEGWCA4",
          "success": "true"
        },
        {
          "changed": "true",
          "componentType": "",
          "created": "false",
          "createdDate": "2017-12-12T16:28:38.000Z",
          "deleted": "false",
          "fileName": "src\/package.xml",
          "fullName": "package.xml",
          "success": "true"
        }
      ],
      "runTestResult": {
        "numFailures": "0",
        "numTestsRun": "0",
        "totalTime": "0.0"
      }
    },
    "done": true,
    "id": "0Af80000003ynf6CAA",
    "ignoreWarnings": false,
    "lastModifiedDate": "2017-12-12T16:28:39.000Z",
    "numberComponentErrors": 0,
    "numberComponentsDeployed": 1,
    "numberComponentsTotal": 1,
    "numberTestErrors": 0,
    "numberTestsCompleted": 0,
    "numberTestsTotal": 0,
    "rollbackOnError": true,
    "runTestsEnabled": "false",
    "startDate": "2017-12-12T16:28:29.000Z",
    "status": "Succeeded",
    "success": true
  }
}

Having the results in JSON also means that I can easily process it in JavaScript, which I’ll cover in my next post.
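As a taste of that, once the output is captured the status is a couple of lines of JavaScript away (a sketch only - the full script appears in the next post):

// sketch: resultJSON holds the captured output of the report command
var result=JSON.parse(resultJSON);
console.log('Status = ' + result.result.status + ', deployed ' +
            result.result.numberComponentsDeployed + '/' +
            result.result.numberComponentsTotal);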
