Salesforce Live Agent

Live Agent – What is it?
Live Agent enables real-time, online chat between an organisation and its customers, prospects etc. Chat sessions can be initiated by clicking a button or link on a web page, or via automated invitation based on page access metrics. For the end user, it can be very convenient to resolve a query through chat, avoiding the usual frustrations of calling a support line – although the end-user experience remains very dependent on the skill and knowledge of the receiving agent. For the organisation, there’s huge potential for call deflection – an expert team efficiently managing multiple concurrent chats (routed by skill) can hugely reduce call volume. The handling cost of a chat session is generally perceived to be one third of the cost of a phone call.

Theoretically therefore we have a win-win situation: online chat should be good for all parties. The double-digit percentage increase in the use of chat year-on-year (according to analysts) is evidence of this. Analysts are also predicting strong growth in live video chat for customer service, with B2C chat options such as Amazon’s Mayday button becoming mainstream.

Salesforce Live Agent was introduced onto the platform in the Spring ’12 release following the acquisition of Activa Live Chat in September 2010.

Key Implementation Concepts
Configuration of Live Agent is more complicated than most functional areas and not something to embark upon without reading the implementation guide. It is possible to deliver a usable solution using the declarative approach; however, a fully branded or custom experience will require technical expertise with Visualforce and standard web technologies such as HTML, CSS and JavaScript.

1. Configuration
A Configuration defines the behaviour and presentation of Live Agent within a Salesforce console. Configurations define the maximum number of active chats, the welcome greeting, sound notifications on various events, supervisor monitoring settings and skills-based chat transfer options. Configurations are assigned to Users and/or User Profiles, typically with separate configurations defined for standard chat users and for supervisors.

2. Deployment
A Deployment defines the behaviour and presentation of Live Agent to the end-user, i.e. customer or prospect. Deployments control the look and feel of the Chat windows displayed, plus options for the saving of transcripts. Multiple deployments may be configured across product lines or host web sites.

Note – each deployment generates a snippet of HTML which should be inserted once into the host page. The referenced endpoint is unique to the Live Agent org and is generated (and displayed) when Live Agent is enabled.

<script type='text/javascript' src='https://c.______.salesforceliveagent.com/content/g/js/31.0/deployment.js'></script>
<script type='text/javascript'>
liveagent.init('https://d.______.salesforceliveagent.com/chat', '___g00000008OMv', '___g0000003MCJN');
</script>

3. Skill
Skills play a key role in routing incoming chats to appropriate agents. In short, multiple skills can be defined (“General Enquiry”, “Product A Enquiry” etc.) and assigned to Users and/or User Profiles. Every inbound chat is related (via the button) to one or more skills.

4. Chat Button
A Chat Button defines the entry point for a chat and also key information relating to routing such as skills required, queuing options and routing type. The routing type can be set to Least Active, Most Available or Choice. Buttons can be fully customised and branded and be set to display pre-chat forms or post-chat pages.

Note – each Chat Button generates a snippet of HTML which should be inserted into the host page.

<a id="liveagent_button_online____g00000008OT8" href="javascript://Chat" style="display: none;" onclick="liveagent.startChat('___g00000008OT8')">
<!-- Online Chat Content -->
</a>
<div id="liveagent_button_offline____g00000008OT8" style="display: none;">
<!-- Offline Chat Content -->
</div>

<script type="text/javascript">
if (!window._laq) { window._laq = []; }
window._laq.push(function(){
liveagent.showWhenOnline('___g00000008OT8', document.getElementById('liveagent_button_online____g00000008OT8'));
liveagent.showWhenOffline('___g00000008OT8', document.getElementById('liveagent_button_offline____g00000008OT8'));
});
</script>

5. Automated Invitation
In principle an Automated Invitation works in the same way as a Chat Button; however, the chat session is initiated via a proactive invitation based on defined sending rules (Seconds on Page, Seconds on Site, Page Views, Url Match, Custom Variable).

6. Quick Text
Live Agent appears as a Channel for predefined Quick Text messages. Such messages can be easily inserted into the chat conversation, for agent convenience and standardisation of messaging. To do so, the character sequence ;; must be entered into the chat text input; this triggers a list of the most recently used messages, and typing additional characters after the ;; filters the list. This is not an ideal user experience, as the agent must be familiar with the message naming. The now retired, standalone version of the Live Agent console (Flash based) had stronger functionality in this area – enabling messages to be found by category.

Extension Points
1. Deployment API
The Deployment API enables JavaScript to be written that can customise the chat window, launch a chat session or specify back-end functionality such as record searching and creation. This API also enables Direct-to-Agent Chat Routing, where routing rules are ignored and chat requests are routed directly to one or more specific agents.
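The snippet below is a sketch only; it assumes the deployment and button snippets shown earlier are already in the host page, and uses a hypothetical visitor name and account number. The setName, addCustomDetail and startChat Deployment API methods should be verified against the Developers Guide.

<script type='text/javascript'>
// Indicative Deployment API usage - names and values are placeholders.
liveagent.setName('Jane Smith'); // visitor name displayed to the agent.
liveagent.addCustomDetail('Account Number', 'A-12345'); // custom detail visible in the console.
liveagent.startChat('___g00000008OT8'); // start a chat against the masked button id used above.
</script>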

2. Pre-Chat Forms / Pre-Chat API
Pre-chat forms introduce a data capture step before the chat starts; the captured data can be used for routing, mapping and auto-query. Mapping relates to mapping page inputs to record attributes in Salesforce. Auto-query relates to records automatically opening when a chat is accepted, e.g. the contact record for the customer – or perhaps a transactional record such as an invoice, purchase, booking etc. In addition to record search, record creation is also supported, e.g. create a contact record if no match exists. Information entered into the pre-chat form can be viewed by the agent before and during the chat session.
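The fragment below is indicative only; the liveagent.prechat input naming convention and the query parameter syntax are assumptions to be verified against the Pre-Chat Forms documentation, and the form action/submission wiring is omitted.

<!-- Hypothetical pre-chat fragment: capture an email and auto-query for a matching Contact. -->
<form method='post' id='prechatForm'>
<input type='text' name='liveagent.prechat:Email' /> <!-- captured detail, visible to the agent -->
<input type='hidden' name='liveagent.prechat.query:Email' value='Contact,Contact.Email' /> <!-- auto-query mapping -->
<input type='submit' value='Start Chat' />
</form>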

3. Customised Chat Window
Standard Live Agent Visualforce components can be used to build completely custom chat windows.

4. Post Chat Page
Finish the chat session with a page displaying a summary, useful links etc.

5. Live Agent REST API
REST resources that enable a custom chat experience to be developed using any programming language or technology capable of addressing a RESTful web service. Exemplar use cases for this API are custom mobile application development and integration of chat functionality into an existing application.

Note, each org enabled for Live Agent exposes a unique Live Agent API endpoint.

e.g. https://d.__2__cs.salesforceliveagent.com/chat/rest/

POST https://{org unique endpoint}/chat/rest/Chasitor/ChasitorTyping
POST https://{org unique endpoint}/chat/rest/Chasitor/ChatMessage
POST https://{org unique endpoint}/chat/rest/Chasitor/ChatEnd
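By way of a sketch, a chat message post is shown below. The session headers (values obtained from an initial System/SessionId request) and the JSON body shape are assumptions to be checked against the REST API Guide.

POST https://{org unique endpoint}/chat/rest/Chasitor/ChatMessage
X-LIVEAGENT-API-VERSION: 30
X-LIVEAGENT-AFFINITY: {affinity token from the SessionId response}
X-LIVEAGENT-SESSION-KEY: {session key from the SessionId response}
X-LIVEAGENT-SEQUENCE: 2

{"text" : "Do you have the item in stock?"}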

Functionality – Agent View
Live Agent in the Salesforce Console

Live Agent - Console View

Key Features -
Manage multiple concurrent chats within the console
Find and open existing records related to chats
Create new records based on incoming chats
Choose records or pages to open as sub tabs of each chat session
New Case, Lead, Account, Contact, VF page (1-5)
Include Suggested Articles from Salesforce Knowledge in Live Agent
Attach a file to the chat session
Supervisor tab for monitoring

Functionality – Customer View

Live Agent - Chat Window View

Key Features -
Initiate and end a chat session via button or link
Initiate and end a chat session via invitation acceptance
Save transcript

Salesforce Knowledge Integration
A chat answer field can be added to article types (the field name must be specifically Chat_Answer__c – Long Text); clicking the Share link for an article will result in the Chat Answer text being pasted into the chat window. A nice convenience for agents.

There is also a Suggested Articles from Chat feature in the Salesforce Console configuration which I assumed would search Salesforce Knowledge based on the Chat text – it seems that all this does is add Knowledge to the Live Agent sidebar.

Licensing
Performance Edition – included.
Enterprise and Unlimited Edition – feature license cost

Note – Live Agent is also available in Developer Edition.

References
Implementation Guide
Developers Guide
REST API Guide

Technical Naming Conventions

Challenge – outside of the ISV development model there is no concept of an application namespace that can be used to group the technical components related to a single logical application. To mitigate this issue, and to provide a means to isolate application-specific components, naming schemes such as application specific prefixes are commonplace.

Risk – without application/module/function namespaces etc. all technical components reside as an unstructured (unpackaged) collection, identified only by their metadata type and name. As such maintainability and future extensibility can be inhibited as the technical components related to multiple logical applications converge into a single unstructured code-base.

Options –
1. Application specific prefix. All components related to a specific application are prefixed with an abbreviated application identifier, e.g. Finance Management = “fm”, HR = “hr”. This option addresses the requirement for isolation, but inevitably causes issues where helper classes or classes related to common objects span multiple applications. This option has the advantage of minimising the effort required to remove functionality related to a logical application; only shared classes would need to be modified.

2. Object centric approach. In considering a Salesforce org as a single consolidated codebase where most components (technical or declarative) relate to a primary data object, a strict object-centric approach can be taken to the naming of technical components. With such a mindset, the concept of a logical application becomes less significant; instead components are grouped against the primary data object and shared across the custom functionality that may be related to the object. A strictly governed construction pattern should promote this concept, with the main class types defined on a per-object basis. Functional logic not related to a single object should only ever reside in a controller class, web service class or helper class. In the controller and web service cases, the class should orchestrate data transactions across multiple objects to support specific functionality. In the helper class case a function-centric approach is appropriate.

In architectural terms, an object-centric data layer is introduced that is called from a function-centric presentation layer.

presentation layer [Object][Function].page –
SalesInvoiceDiscountCalc.page
SalesInvoiceDiscountCalcController.cls

data layer [Object][Class Type].cls –
SalesInvoiceManager.cls
AccountManager.cls

business logic layer [Function][Helper|Utility].cls –
DiscountCalcHelper.cls

The downside of this approach is contention on central classes in the data layer when multiple developers are working in a single org, plus the effort required to remove functionality on a selective basis. In the latter case using a source code management system such as Git with a smart tagging strategy can help to mitigate the issue. Additionally, code commenting should always be used to indicate class dependencies (i.e. in the header comment) and to convey the context in which code runs; this is imperative in ensuring future maintainability.

Recommended Approach -
1. Option 2. In summary, naming conventions should not artificially enforce the concept of a logical application – the composition of which is open to change by Admins. Instead an object-centric approach should be applied that promotes code re-use and discipline in respect of adherence to the applied construction patterns.

Whichever approach is taken, it is highly useful to consider how the consolidated codebase will evolve as future functionality and related code is introduced. A patterns-based approach can mitigate the risk of decreasing maintainability as the codebase size increases.

Salesforce Application Types

In a typical development process requirements are captured and the information synthesised to form a solution design. The constituent components of the solution design are ultimately translated into physical concepts such as a class, page or sub-page view. This analysis, design, build cycle could be iterative in nature or fixed and may have different degrees of detail emerging at different points, however the applied principle is consistent.

In considering the design element of the cycle, interaction design techniques suggest a patterns-based approach where features are mapped to a limited set of well-defined and robust user interface patterns, complemented by policies for concepts that transcend the patterns such as error handling, validation messages and stylistic aspects (fonts, dimensionality etc.). This process delivers efficiency in terms of reusability of code and reduced technical design and testing, but also critically provides a predictable, consistent end-user experience. When building custom applications using the declarative tools, we gain all of these advantages using pre-defined patterns and pre-fabricated building blocks. When building using the programmatic aspects of the platform a similar approach should be taken, meaning follow established patterns and use as much of the pre-fabricated componentry as possible. I can never fathom the driver to invent bespoke formats for pages that display within the standard UI; the end result is jarring for the end-user and expensive to build and maintain.

In addition to delivering a consistent, predictable end-user experience at the component level, the containing application itself should be meaningful and appropriate in type. This point is becoming increasingly significant as the range of application types grows release-on-release and the expanding platform capabilities introduce relevance to user populations outside of the front-office. The list below covers the application types possible at the time of writing (Spring ’14).

Standard Browser App
Standard Browser App (Custom UI)
Console (Sales, Service, Custom)
Subtab App
Community (Internal, External, Standard or Custom UI)
Salesforce1 Mobile
Custom Mobile App (Native, Hybrid, browser-based)
Site.com Site
Force.com Site

An important skill for Salesforce implementation practitioners is the accurate mapping of required end user interactions to application types within an appropriate license model. This is definitely an area where upfront thinking and a documented set of design principles is necessary to deliver consistency.

By way of illustration, the following exemplar design principles strive to deliver consistency across end user interactions.

1. Where the interaction is simple, confined to a single User, the data relates to the User and is primarily modifiable by the User only and has no direct business relevance then a Subtab App (Self) is appropriate. Examples: “My Support Tickets”, “Work.com – Recognition”.
2. Where a grouping of interactions forms a usage profile that requires streamlined, efficient navigation of discrete, immersive, process-centric tasks then a Console app is appropriate. Examples: “IT Helpdesk”, “Account Management”.
3. Where a grouping of interactions forms a usage profile that is non-immersive, non-complex (i.e. aligned with the pattern of record selection and view/edit) and likely to be conducted on constrained devices then Salesforce1 Mobile is appropriate. Examples: “Field Sales”, “Executive Insight”.

Design principles should also provide a strong definition for each application type covering all common design aspects to ensure consistency. For example, all Subtab apps should be built the same way technically, to the same set of standards, and deliver absolute consistency in the end user experiences provided.

Salesforce Release Methodology – Simple Case

A very common challenge addressed by architects working with Salesforce is the definition of an appropriate release methodology. By this I mean the identification of the Salesforce orgs required to support the project delivery whether serial or concurrent in nature, the role and purpose of each org and critically, the means by which change is managed and synchronised across environments. With this latter point, a clear definition of the path-to-production is imperative.

In the large-scale, complex project case there is typically time and expertise available to define a bespoke methodology, with build automation, source code control system integration and so forth tailored to the specifics of the programme environment. There’s an abundance of best-practice information available online to help guide the definition of a release methodology for complex projects. For less complex projects, such as those employing the declarative build model only, there is less information available; in such cases what is typically required is a standardised, best-practice approach that can be adopted as-is.

The remainder of this post provides an outline view of an exemplar release methodology for small-to-medium scale, configuration-centric projects (i.e. no Apex code or technical complexities). This information is provided for reference purposes only.

Environment Strategy
The following diagram outlines the environments and their purpose, the defined release steps and a basic approach to change management.

Release Methodology - Simple Case

Key Principles
1. Isolate development from testing activities. This is the golden rule. Testing requires a stable environment unaffected by ongoing development. Development shouldn’t grind to a halt while system testing and acceptance testing processes are applied.
2. Utilise as few sandboxes as possible. Synchronisation of change is time expensive and error prone; avoid this wherever possible. Preparation of standing data post sandbox refresh can also take time, as can the communication required to establish that a refresh can proceed.
3. Don’t over specify the sandbox type. Sandboxes are an expensive asset, especially full-copy and partial-data sandboxes. Calculate the required storage capacity and map to either Developer or Developer Pro. Retain full-copy sandboxes for purposes that do actually require the copied data.
4. Maintain a Change Control Log in the production org to record all changes (at a reasonably high-level) against applied environments.
5. Use the production org for implementation project collaboration. It can also be a useful adoption tool to create Chatter groups such as “Salesforce: Marketing”, “Salesforce: Finance” where collaboration can occur directly with the business users whilst the project is in flight.
6. Accept that change will inevitably be applied to the production org first; record such changes and apply to development and testing sandboxes asap.
7. Always verify the Change Control Log against the Setup Audit Trail before deployments.
8. Use Change Sets for deployment wherever possible.
9. Encourage a development process where Change Sets are updated continually, rather than retrospectively.
10. Always verify the Change Control Log against the list of Change Set supported components.
11. On larger projects a Change Set partitioning strategy may be required; along functional lines, by team or by component type etc.
12. Ensure releases to production are documented and approved. A simple Deployment Request Form (DRF) template should be defined and used to gain approval. This process is key to communication and governance but also helps the team consider fully the pre- and post- deployment steps, risks and rollback strategy.
13. Post-release. Communicate how business processes have been mapped to Salesforce concepts, and the permissions model. Understanding how things work in simple terms can help avoid end-user frustration with a new system. This can also reduce the support burden, as end-users can often self-diagnose the cause of a problem.

The org strategy diagram above presents an appropriate approach for a serial-release model, i.e. one project or one sprint at a time is being developed, tested then released. In the concurrent-release model, where multiple parallel projects are converging into a single production org, isolated development and test sandboxes will be duplicated per project with an integration (or pre-production) org providing a synchronisation point where the combined state is validated prior to deployment to production.

Apex Trigger Exceptions

Custom Apex Triggers execute on standard CRM objects (Account, Contact, Lead etc.) and custom objects in response to all data modifications applied via the Salesforce web application, API transactions and custom Apex script. As such it is imperative that trigger code adheres to patterns that promote performance and maintainability, guards against recursive behaviour and most critically protects data integrity.

Apex code outside of triggers (Batch Apex, Visualforce Controllers etc.) is completely isolated to its specific context; trigger code is not – the effect of errant code is felt across the standard web app, mobile apps, API-based integration flows, data loads and so on. As such triggers must be used judiciously and avoided wherever possible on the core standard objects. I often come across multiple complex triggers defined on Account, where the logic applied is non-time critical and could have been handled by Batch Apex, or indeed by declarative methods. In certain circumstances this idealistic view is impractical and triggers are a necessary friend. When writing triggers always stick to a proven pattern, good examples provided by community experts are easy to find, and read the relevant sections in the Apex Language Reference if you haven’t done so for a while.

This post aims to provide some practical insight related to exception behaviour within Apex Triggers that may not be obvious from the referenced resources.

Key Concepts
1. DML Processing Mode
All bulk DML transactions on the platform run in either allOrNothing or partial processing mode. In some cases the behaviour is implicit and fixed; in other cases flags can be set to override the default behaviour.

allOrNothing
– Apex DML statements (e.g. update contactsToUpdate;)
– Apex Database methods in their default form (e.g. Database.update(contactsToUpdate);)

partial processing
– Apex Database methods with the optional allOrNone parameter set to false, or with Database.DMLOptions applied (e.g. Database.update(contactsToUpdate, false);)
– API (default behaviour – can be overridden)
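Both modes are illustrated in the sketch below, assuming a populated contactsToUpdate list.

// allOrNothing - any single failure rolls back the entire statement and throws a DmlException.
update contactsToUpdate;

// Partial processing - failures are reported per record, successes are committed.
Database.SaveResult[] results = Database.update(contactsToUpdate, false);
for (Database.SaveResult sr : results) {
    if (!sr.isSuccess()) {
        for (Database.Error err : sr.getErrors()) {
            System.debug(err.getStatusCode() + ' : ' + err.getMessage());
        }
    }
}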

2. .addError()
The .addError(custom message) method can be invoked at the record or field level within the trigger context to flag a record as invalid. This is particularly useful in asserting logical errors. A best practice in regard to the custom messages is to use Custom Labels with a defined Apex Trigger Error category.
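A minimal before insert sketch; the Contact_Email_Required Custom Label is hypothetical.

trigger ContactBeforeInsert on Contact (before insert) {
    for (Contact c : Trigger.new) {
        if (String.isBlank(c.Email)) {
            // Field-level error; the message text is held in a Custom Label.
            c.Email.addError(System.Label.Contact_Email_Required);
        }
    }
}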

3. Runtime Exceptions
If an unhandled runtime exception is thrown then the Apex Transaction will immediately rollback and Apex exception notifications will occur. Structured exception handling can be added to the trigger code to catch runtime exceptions and to apply custom handling logic.
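The sketch below shows the handled case; the helper class is hypothetical. Note the commit behaviour described in the test cases that follow – a handled exception alone does not rollback the transaction.

trigger ContactBeforeUpdate on Contact (before update) {
    try {
        ContactTriggerHelper.applyLogic(Trigger.new); // hypothetical helper - may throw.
    } catch (Exception e) {
        // A handled exception alone allows the transaction to commit (test cases 2.4 and 3.4);
        // applying .addError() forces the rollback.
        for (Contact c : Trigger.new) {
            c.addError('Processing failed: ' + e.getMessage());
        }
    }
}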

Apex Trigger – Exception Test Cases
The following test cases highlight the impact of the .addError() method and runtime exceptions in the context of the 2 processing modes. Tests were conducted using Execute Anonymous and the Apex Data Loader at API version 30.0.

Test Case 1 – Does code continue after .addError()?
Result: Yes (this is necessary in both allOrNothing and partial processing cases to gather a complete set of errors).

Test Case 2.1 – all_or_none – Does .addError() rollback the transaction if any subset of records has .addError() applied?
Result: Yes – Full rollback.

Test Case 2.2 – all_or_none – Does .addError() rollback the transaction if all records have .addError() applied?
Result: Yes – Full rollback.

Test Case 2.3 – all_or_none – Does an un-handled runtime exception rollback the transaction if no records have .addError() applied?
Result: Yes – Full rollback.

Test Case 2.4 – all_or_none – Does a handled runtime exception rollback the transaction if no records have .addError() applied?
Result: No – Records are committed (complete trigger context – regardless of which record caused the runtime exception).

Test Case 3.1 – partial – Does .addError() rollback the transaction if any subset of records has .addError() applied?
Result: Yes – Full rollback of all uncommitted changes, the trigger fires again for the subset of records that have not had addError() applied. Up to 2 retries are performed, after which the “Too many batch retries in the presence of Apex triggers and partial failures.” exception is thrown.

Test Case 3.2 – partial – Does .addError() rollback the transaction if all records have .addError() applied?
Result: Yes – Full rollback.

Test Case 3.3 – partial – Does an un-handled runtime exception rollback the transaction if no records have .addError() applied?
Result: Yes – Full rollback.

Test Case 3.4 – partial – Does a handled runtime exception rollback the transaction if no records have .addError() applied?
Result: No – Records are committed (complete trigger context – regardless of which record caused the runtime exception).

Test Case 4 – partial – In a partial failure case are static variables set within the original trigger invocation reset before retry invocations?
Result: No – Static variables are not reset between trigger invocations. This means that static guard variables defined to protect against recursive behaviour will stop trigger logic being applied to records processed by retry invocations.

Conclusions
1) In all cases .addError() results in the full Apex Transaction being rolled back. Custom logging schemes such as writing exceptions to a Custom Object (including via @future) or sending notification emails are futile gestures when employed in conjunction with .addError().

2) Handled runtime exceptions result in the Apex Transaction being committed, unless .addError() is applied to at least one record.

3) In the partial processing case, governor limits are reset to their original state for each retry trigger invocation, static variables are not.

4) Where static guard variables are used to protect against re-execution of the trigger logic via field updates within update DML transactions, partial processing retry trigger invocations will also be blocked. Such invocations are consistent with those initiated by workflow field updates later in the order of execution.

It is difficult to use static variables to distinguish between a field update trigger execution and a partial processing retry – batch sizes can be equivalent, sequencing is unpredictable etc.

One possible approach is to use a custom field on the target object (Is Workflow Processing?); a field update action would be added to set this to “True” alongside the existing actions. In the trigger code we can now reliably test this field state to identify a workflow field update initiated trigger execution. The trigger code must always reset the field to “False” before returning, such that any partial processing retry executions safely proceed. Where trigger logic resides in an after update trigger, a before update trigger could be added simply for the convenience of resetting the field – in this case a static could be set in the before context and referenced in the after context, as per the sketch below.
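A minimal sketch of the before/after pairing, assuming a hypothetical Is_Workflow_Processing__c checkbox field on Account set to “True” by a workflow field update action (the three components would be defined separately).

public class AccountTriggerState {
    public static Boolean isWorkflowInvocation = false;
}

trigger AccountBeforeUpdate on Account (before update) {
    for (Account a : Trigger.new) {
        if (a.Is_Workflow_Processing__c) {
            // A workflow field update initiated this execution - record the fact
            // and reset the flag so partial processing retries proceed safely.
            AccountTriggerState.isWorkflowInvocation = true;
            a.Is_Workflow_Processing__c = false;
        }
    }
}

trigger AccountAfterUpdate on Account (after update) {
    if (AccountTriggerState.isWorkflowInvocation) {
        AccountTriggerState.isWorkflowInvocation = false; // reset for subsequent invocations.
        return; // logic already applied in the original invocation.
    }
    // .. apply after update logic.
}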

It’s possible there are more efficient ways to provide a robust solution to this. A complete Apex based solution would definitely be preferable to the custom field based solution.

Salesforce Summer 14 Platform Highlights

The Summer ’14 release notes are out (in preview); this post outlines 10 selective highlights related to the Force.com platform (in no order of significance). Given the raft of enhancements to Salesforce1 I’ll cover this area in a separate post.

Custom Permissions (Developer Preview)
Custom permissions enable arbitrary permissions (typically function-centric) to be defined and assigned to users via Permission Set or User Profile; such permissions can be tested in code in a manner consistent with standard function-type permissions (Manage Cases etc.). A new CustomPermission standard object exists, in addition to the related SetupEntityAccess object, to support this.
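In the absence of a dedicated Apex method at this stage, a SOQL-based check appears workable; a sketch assuming a hypothetical Approve_Discounts custom permission:

// Does the current user hold the custom permission via any assigned permission set?
Integer matches = [select count() from SetupEntityAccess
                   where SetupEntityId in (select Id from CustomPermission
                                           where DeveloperName = 'Approve_Discounts')
                   and ParentId in (select PermissionSetId from PermissionSetAssignment
                                    where AssigneeId = :UserInfo.getUserId())];
Boolean hasCustomPermission = (matches > 0);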

External ID Limit
The limit for external Ids on Custom Fields has been raised from 3 to 7. A definite necessity for medium-to-large scale implementations with multiple data integrations or complex query requirements.

Query Plan Tool
The Developer Console has been incrementally enhanced, as has been the case over recent releases; a key feature of the Summer ’14 version is access to the underlying query plan executed for SOQL queries. Very useful insight (impossible to access previously) for query optimisation.

Describe Limits Removed
With Summer ’14 the describe limits for Apex are fully removed. This is excellent news and will hopefully encourage more use of Dynamic Apex and generally a more dynamic approach to development with obvious maintainability and TCO benefits.

Access the Standard PriceBook in Test Code
A long-term gripe for many developers has been the inability to access the standard pricebook in test code, meaning Product-related unit tests had to run in seeAllData=true mode. Now test code can access the standard pricebook Id via Test.getStandardPricebookId(), use this Id in creating PricebookEntry records, and create custom pricebooks and related entries, all with full test data isolation in place. Excellent.
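A minimal test sketch:

@isTest
private class StandardPricebookTest {
    static testMethod void testStandardPricebookEntry() {
        // No seeAllData=true required from Summer '14.
        Id standardPricebookId = Test.getStandardPricebookId();

        Product2 p = new Product2(Name = 'Test Product', IsActive = true);
        insert p;

        PricebookEntry pbe = new PricebookEntry(Pricebook2Id = standardPricebookId,
                                                Product2Id = p.Id,
                                                UnitPrice = 100,
                                                IsActive = true);
        insert pbe;

        System.assertNotEquals(null, pbe.Id);
    }
}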

Apex Flex Queue (Developer Preview)
The Apex Flex Queue is a holding queue for batch jobs submitted when the current job count exceeds the allowed 5 concurrent/queued jobs. Up to 100 jobs can be held on the Apex Flex Queue; entries can be viewed in the setup UI and reordered. The entries on the Flex Queue are processed as system resources permit. This is a great improvement, reducing the need for retry logic applied via Scheduled Apex to handle the full-queue case.

Higher Limits for @future Methods (Pilot)
Additional attributes on the @future annotation enable doubling or tripling of key execution limits: CPU time, heap size, SOQL queries etc. It would appear that increases may affect other asynchronous processes for the org. A similar approach to applying higher limits to Batch Apex classes would be very useful.

Simplified Package Installer
The relatively long-winded package installer now offers a simpler user experience; one-page, one-click install. The detailed configuration options remain, but are accessed via separate links by exception, not as part of the standard course as was previously the case. AppExchange installs use the new UI by default; installs occurring via URL require the addition of the &newui=1 querystring parameter.

Analytics Dashboards API
The REST Analytics API has been extended to cover dashboards (in addition to reports). Capabilities include triggering a dashboard refresh, retrieving dashboard metadata and retrieving a list of the most recently used dashboards. All of which are key functions in incorporating dashboards into custom mobile views.
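Indicatively, and to be verified against the Analytics API documentation, the dashboard resources follow the pattern below.

GET /services/data/v31.0/analytics/dashboards – list the most recently used dashboards
GET /services/data/v31.0/analytics/dashboards/{dashboard id} – retrieve dashboard results
GET /services/data/v31.0/analytics/dashboards/{dashboard id}/describe – retrieve dashboard metadata
PUT /services/data/v31.0/analytics/dashboards/{dashboard id} – trigger a dashboard refresh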

Related Lists as Console Components
Useful ability to define related lists that appear as console components (within a Console app) for a specific page layout. The utility here is one of convenience; no more scrolling vertically, the lists can appear horizontally adjacent to the record details.

Visualforce Controller Class Convention

A quick post to outline an informal convention for the definition of a Visualforce controller class, key maintainability characteristics being predictable structure, readability and prominent revision history. All developers have a subjective preference in this regard, however consistency is key, particularly in the Salesforce context where multiple developers/consultancies contribute to a codebase over its lifetime. A simple, logical approach always makes sense to maximise adoption.

/*
Name:  MyPageController.cls
Copyright © 2014  Force365
======================================================
======================================================
Purpose:
-------
Controller class for the VF page - MyPage.page
======================================================
======================================================
History
------- 
Ver. Author        Date        Detail
1.0  Mark Cane&    2014-05-20  Class creation.
1.1  Mark Cane&    2014-05-21  Initial coding for page initialisation.
*/
public with sharing class MyPageController {
	//& public-scoped properties.	
	public List<MyWrapperClass> wrapperClassInstances { get; set; }
	//& End public-scoped properties.
   				
	//& private-scoped variables.
	private Boolean isInitialised=false;
	//& End private-scoped variables.
	
	//& page initialisation code.
	public MyPageController(){
		initialise();
	}
	
	private void initialise(){ isInitialised=true; }
	//& End page initialisation code.
	
	//& page actions.
	public PageReference saveAction(){
		return null;
	}
	//& End page actions.
	
	//& data access helpers (class methods accessed from binding expressions).
	//& End data access helpers.
	
	//& controller code Helpers (class methods providing helper functions to data access helpers or actions).
	//& End controller code helpers.
	
	//& inner classes (wrapper classes typically, extending SObjects for convenience in the UI).
	public class MyWrapperClass {
		public MyWrapperClass(){}
	}
	//& End inner classes.
}

Salesforce Implementation Audit

This post provides an outline approach to consider when performing an internal audit of an existing (or emerging) Salesforce implementation. As an individual who specialises in the provision of such quality assurance services from an external perspective, I’m convinced that most projects would benefit from a periodic internal review, perhaps augmented by some occasional external perspective and insight (Salesforce services can help here). However this is approached, in the majority case the internal project team will have the requisite experience and competency to deliver such an introspective review; the challenge is often one of finding the right time, or indeed any time, to conduct it. This is why a retrospective build review should be planned every 3 or 4 sprints (or thereabouts – projects differ) with a full implementation audit scheduled every release. The principle being that whilst the build is in flight, periodic sense checks are made on key quality aspects, technical integrity, platform limits etc., with a comprehensive audit applied pre-release (ideally). The latter may need to consider a combined future deployment state where multiple parallel development streams converge into a single production org.

Note, an implementation audit is build-focused (or solution oriented) and should not assess the fit-for-purpose nature of the functionality in respect to business requirements (i.e. the problem-to-solution mapping). The only exception to this arises where an obvious mapping to a standard feature is missed resulting in a “gap” that is unnecessarily filled by a technical solution option.

Note, in order to cut down on the time required to conduct the audit, access to individuals who can describe the functional intent is imperative. In the internal case the programme/project architect should be leading the audit and should be aware of the functional design context.

Note, before diving into the detail of the implementation, it can be highly valuable to re-define the high-level solution architecture (HLSA) in current state terms. The key point being that the macro-level view is often distorted by micro-level design decisions made during the project course. A periodic check is useful to ensure that this organic change is understood and that the integrity of the original architectural vision is maintained.

Indicative review areas are listed below (this is not exhaustive)

Declarative build environment
1. Identify platform limits that are reaching a high percentage of utilisation that may present risk to scalability and future phases of development.
2. Identify any future maintainability risk presented by the conventions applied in the definition of configuration elements (e.g. naming conventions, opportunities for best practice improvements).
3. Identify functional areas where a mapping to standard features could be achieved.
4. Identify security vulnerabilities (org-access, sharing model etc.).

Technical customisations
1. Identify risks to data integrity and application responsiveness.
2. Document risks to scalability and extensibility imposed by platform execution limits.
3. Document deviations from best practice technical patterns, conventions and coding standards.
4. Identify security vulnerabilities introduced by technical componentry.
5. Document deviations from best practice development practices and process.

Integration architecture
1. Identify risk associated with deviations from best practice integration patterns and practices.
2. Identify opportunities to reduce limits consumption.
3. Identify data integrity and scalability vulnerabilities related to the current state integration architecture.

Identity management
1. Identify risk associated with implemented single sign-on processes and related services/infrastructure.
2. Document deviations from best practices related to identity management.

Salesforce Cross-Organization Data Sharing

As a long time Salesforce-to-Salesforce (S2S) advocate and interested follower of the Salesforce integration space, the Winter ’14 pilot of cross-org data sharing, or COD for short, caught my attention recently. This short post covers the essentials only; for the detail refer to the linked PDF at the end of the post.

Note, the pilot is currently available for developer edition (DE) orgs exclusively, which gives some indication of where this feature is in terms of production state.

What is it?
Hub and spoke architecture; the hub provides the metadata definition (selected fields) and data for shared objects, the spoke gets a read-only copy (proxy version) of the object named with the __xo suffix, e.g. myObj__xo.

Setup Configuration takes place at:
Data Management – Cross-Organization Data – (Hub | Spoke) Configuration

Connections are established via an invitation email, generated at the Hub, sent to a Spoke administrator and accepted.

Synchronisation takes place via the [Sync with Hub] action on the Spoke Detail page, which also displays the status of the org connection.

Limitations?
Synchronisation is a manual process, in the pilot release at least.
All custom objects can be shared, but Account is the only standard object that can be.
Not all standard fields on Account can be shared.
API limits apply, in addition to specific platform limits applicable to COD (for example, Spokes per Hub, concurrent Spoke connections).

Exemplar Use cases?
Master data sharing in Multi-org architectures.
Reference data sharing (product catalog, price list etc.) with business partners.

Key Differences with S2S?
Uni-directional synchronisation of data.
Spoke data is read-only.
S2S is not limited to the Account standard object (other limits apply)
S2S implies record-level sharing; COD is object-level (sharing permissions notwithstanding).
S2S initial record sharing can be automated (via ApexTrigger).
S2S externally shared records synchronise automatically.

Final Thoughts
COD is definitely an interesting new addition to the native platform integration capabilities. Data sharing between Salesforce customers via ETL tools or file exchange can be error prone and time expensive to maintain. COD provides utility in terms of simplification, centralised administration and tracking. In considering the pilot release, S2S remains the best fit for transactional data, whilst COD provides coverage of reference data and some master data scenarios.

References
COD Pilot Guide PDF
COD Overview – Salesforce Help

Salesforce OpenID Connect

In addition to the proprietary Authentication Provider types (Facebook, Janrain, Salesforce) Winter ’14 (v29.0) added support for the OpenID Connect protocol, enabling off-platform authentication via any compatible OpenID Provider (Google, PayPal, Amazon and others). This post provides a basic implementation overview.

OpenID Connect what is it?
OpenID Connect is a lightweight authentication (identity verification) protocol built on top of modern web standards (OAuth 2.0, REST and JSON). OpenID Connect supersedes OpenID 2.0 and amongst other goals is intended to promote interoperability, be accessible to developers and to provide greater support for mobile use cases.

The OpenID Connect standard was recently ratified by members of the OpenID foundation and announced publicly at the Mobile World Congress in Barcelona on 26th February 2014. The standard is supported by Google, Microsoft, Salesforce, AOL, Ping and others.

The protocol works on the principle of an “Authorization Server” or OpenID Provider (OP) (e.g. Google) authenticating users on behalf of a “Client” or Relying Party (RP) (e.g. Salesforce). With the current implementation Salesforce can be configured as an RP but not an OP. In this context an Authentication Provider is configured in Salesforce with the type set to OpenID Connect. Note, OP is also referred to as IDP; confusingly we have 3 seemingly interchangeable terms – however OP is the term defined in the standard.

Please refer to the excellent OpenID website for more details in regard to specifications, implementations, useful FAQs etc.

Identity Use Cases
In simple terms, users can single sign-on (SSO) into Salesforce using external web application credentials. A Salesforce user record can be created just-in-time on the first authentication event, subsequent events for the user map to this user record. Note, usefully it’s also possible to map existing users via the [Existing User Linking URL].

A key use case here is B2C portals and communities, however internal users and partners can also use this authentication approach. For internal users SSO via a Google Account could make sense where an enterprise has adopted Google Apps for Business.

What’s important to understand is that Salesforce supports external Authentication Providers for all user types (with the exception of Chatter External), with SAML, OAuth and OpenID Connect protocol support. This provides a high degree of flexibility in terms of how identity management is implemented.

Implementation Steps
The Salesforce help provides a detailed series of steps to follow. The following high-level example shows a basic implementation of a Google Accounts Authentication Provider.

1. Register an OpenID Connect Application.
As per Salesforce Connected Apps, within your Google account an application is required, within which OAuth is configured. Applications are created via the Google developers console. The Redirect URI won’t be known until the Authentication Provider is configured in Salesforce.

Google Developers Console

2. Create an Authentication Provider.
Consumer Key = Client ID
Consumer Secret = Client secret
Scope = profile email openid

Salesforce Auth Provider Detail Page

3. Update OpenID Connect Application.
Copy the Salesforce [Callback URL] to the Google [Redirect URI] field and save.

4. Add Authentication Provider to Login Page (Standard App or Community).
This step requires a My Domain where the internal app login page is customised.

Salesforce Login Page Customisation

5. Configure a Registration Handler class.
Within the Authentication Provider configuration a Registration Handler class can be specified; this class implements the Auth.RegistrationHandler interface and is invoked to create new users or map to existing users in response to authentication events.


global class GoogleAccountsRegistrationHandler implements Auth.RegistrationHandler{
  global Boolean canCreateUser(Auth.UserData data) {
      //Check whether we want to allow creation of a user.
      return true;
  }

  global User createUser(Id portalId, Auth.UserData data){
      if(!canCreateUser(data)) {
          //Returning null or throwing an exception fails the SSO flow.
          return null;
      }
      if(data.attributeMap.containsKey('sfdc_networkid')) {
          //We have a community id, so create a user with community access.
          //A community user also requires a Contact and Account - creation omitted here for brevity.
          return null;
      } else {
          //Create a standard user - the profile name is illustrative only.
          Profile p = [select Id from Profile where Name = 'Standard User' limit 1];

          User u = new User();
          u.username = data.email + '.google';
          u.email = data.email;
          u.firstName = data.firstName;
          u.lastName = data.lastName;
          u.alias = data.email.left(8);
          u.languagelocalekey = 'en_US';
          u.localesidkey = 'en_US';
          u.emailEncodingKey = 'UTF-8';
          u.timeZoneSidKey = 'Europe/London';
          u.profileId = p.Id;
          return u;
      }
  }

  global void updateUser(Id userId, Id portalId, Auth.UserData data){
      User u = new User(id=userId);
      //.. update fields if required.
      update u;
  }
}

Testing
1. Initialisation
The [Test-Only Initialization URL] provided on the Authentication Provider detail page can be pasted into a browser address bar and used to examine the raw output provided back from the Authorization Server.

Test Initialisation Output

2. Authentication
The following screenshots show the basic authentication flow. Note, as with SAML based SSO, errors are appended to the URL querystring.

Customised login page showing the Google Account Authentication Provider.

Login Page With Auth Providers

Clicking the button redirects the browser to the Google Service Login page to authenticate (unless a Google Accounts session exists).

Google Service Login Page

For new users the user consent page is displayed. This page can be customised via the Google Developer console.

User Consent Page

Authentication errors are appended to the URL, as per SAML authentication errors.

Error Page 1

Error Page 2

Finally, a Salesforce session is established. New users can be provisioned automatically, script within the Registration Handler class controls the configuration of such User records.

Auto-provisioned User Detail Page

Implementation Notes
1. Google Developer Console. Remember to turn ON Google+ API access; this is required.

2. The auto-created Registration Handler class template must be modified as the default code will fail in many cases.
canCreateUser is false by default – in most cases this must be changed to true.

The combination of values below doesn’t work if the user isn’t configured with a US locale.
u.languagelocalekey = UserInfo.getLocale();
u.localesidkey = UserInfo.getLocale();
u.emailEncodingKey = 'UTF-8';
u.timeZoneSidKey = 'America/Los_Angeles';

3. Activation code entry appears to fail with an Internal Server Error, but the code has been successful, so subsequent attempts will succeed. This may be specific to my context.

4. As a best practice map the user Id from the OpenID Provider (Google Account Id in the example) into a custom field on the User record. This provides a robust mapping between the 2 system identifiers that can be used by the Registration Handler script.

5. Access can be revoked via the Third Party Account Link related list on the User detail page.

6. Make the OpenID Connect Application name meaningful to the end-users, the Google user consent page will display this in a “[AppName] is requesting access” format, anything weird or meaningless may cause concern.

Protocol Flow
Please treat the diagram below as indicative only; I put this together from a combination of browser profiling and assumptions made on the basis of reading the OpenID Connect specification.

As always, corrections would be appreciated.

OpenID Connect - SF Process Flow

Final Thoughts
OpenID Connect support is a highly useful extension to the Authentication Provider platform capability. For B2C portals and communities it makes sense to offer as many sign-in-via options as possible (Facebook, Google etc.), removing as much friction to user adoption as possible. As a personal opinion, over time I’m becoming less tolerant of having to register a new user account on each and every authenticated web site I interact with, particularly where I view the interaction as transient. Some users may have concerns around data security, i.e. by signing-in with a Google account are they implicitly giving Google access to the data held in the portal? In the majority case however, users will appreciate the improved user experience.

For internal users OpenID Connect makes single sign-on via one or more of a multitude of current and future web platforms incredibly straightforward to implement. In the Salesforce context the key challenge will ultimately relate to reconciliation and rationalisation of identities (i.e. User records).

References
http://openid.net/connect/
http://openid.net/developers/specs/
https://developers.google.com/accounts/docs/OAuth2Login
https://console.developers.google.com/project