Salesforce Environment Hub

Environment Hub was announced back in August 2012 and was initially made available as a Pilot feature of the Winter ’13 release. This post follows up (some 2 years later) on my initial interest in understanding the utility Environment Hub delivers in the context of environment/user management.

Environment Hub – What is it?
In simple terms Environment Hub is an org administration tool that enables multiple orgs (of any type) to be associated and accessed from a central location. Each connected org is termed a Hub Member, with the imposed constraint that each Hub Member org can be parented by only one Hub org. It is therefore imperative that Hub Members are only added where there is no contention over the appropriateness of the parent Hub. Commonality of target production org or packaging org is a good starting point for this consideration, as is the rule that client orgs should never be parented by an ISV or Consultancy Hub org. The parent Hub org should always be the most-accessed org, the credentials for which will become those by which all org access is made.

For larger programmes of work Environment Hub provides a highly useful means to catalogue the org estate and to provide SSO between the constituent orgs. This latter point enables reduced password maintenance, simplified access and centralised user administration, in terms of deactivating a user account in one place only and the ability to view login history in one place.

Environment Hub Tab

Environment Hub is enabled by Salesforce support, who will require confirmation to proceed with the change. From experience, the enablement process can take a few days.

Key Features

Connect Organisation
Connecting Hub Members to the Environment Hub occurs via the entry of an administrator username for the target org and subsequent OAuth authentication and authorisation flow. The User Permission “Connect Organisation to Environment Hub” is required.

Connect Organisation 1

Connect Organisation 2

Connect Organisation 3 - OAuth

Connected Organisation Detail Page

Once an org is connected to the Environment Hub, interesting detail such as the Edition, Org Status and Org Expiry date is revealed. It is also possible to add custom name and description attributes to the Hub Member, which I really like; each org in the estate should be justifiable and have a specific purpose – here we can capture this plus an accountable contact etc. A very useful means of cataloguing and tracking the org estate.

The Company Detail page in the connected org will now show the Environment Hub Org Id value as below.

Company Information Page

In addition to the “User Added” origin, Hub Members are also auto-discovered using existing org-to-org relationships as below.

Auto-discovery types;
sandbox to production.
patch org to release org.
Trialforce source org to Trialforce management org.
release org to LMO.

Create Organisation
New development/test/demo orgs can be created directly within the Environment Hub – this replaces the functionality previously exposed via the Salesforce Partner Portal, for partners at least. I’m unclear how the types of org offered are affected by partnership status etc. or perhaps whether the Environment Hub itself is available only to partners.

Create Organisation

Create Organisation 2

Org types offered;
Development = PDE org.
Test/Demo = pick an Edition for a 30-day time-expired org.

Useful information on the distinction between the different org types can be found here.

Single Sign-on
SSO can be enabled between each Hub Member and the Hub org; in implementation terms this means the Hub org is configured as an Identity Provider, with a Service Provider being configured (automatically via the SSO enablement process) in both the Hub Member org and the Hub org. Both IdP-initiated (via the Environment Hub tab) and SP-initiated (via enablement of the Service Provider as a Login Page Authentication Service) SAML flows are supported. The latter point means SSO can be enforced as the only authentication means – thereby switching off standard Salesforce authentication entirely. Each Hub Member org must have a My Domain configured for SSO to function.

Note, enabling SSO creates a Service Provider in the Hub org; default permissions are provided to the Standard User and System Administrator profiles only. It is therefore a requirement to ensure relevant permissions (Profile or Permission Set) are provided.

Connected Organisation Detail Page SSO

Within the Identity Provider, the Hub Member specific Service Provider configuration is set with “Subject Type=User’s ID determined by Environment Hub”; this setting delegates the user mapping to the Environment Hub settings, defined as below.

3 types of SSO User Mapping;
Method 1. Mapped Users – one-to-one mapping of user names, defined per-user.
Method 2. Federation Id – boolean state; if enabled, users are mapped on common Federation Identifier values.
Method 3. User Name Formula – formula expression; users are mapped via the formula result.

Where multiple mapping types are enabled the precedence order above applies. For SSO between a sandbox and production org, user mapping is implicit and not configured as above.

Customisation
Enabling Environment Hub adds the EnvironmentHubMember standard object, which is open to the declarative build model; custom fields, page layouts, validation rules, workflow rules, approvals etc. In addition, Apex Triggers can be defined on this object, use cases for which may include notifications relating to status changes etc.
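By way of illustration, the sketch below shows a minimal notification trigger on EnvironmentHubMember. Note this is an assumption-laden example; the status field API name (OrgStatus) and the recipient address are illustrative only and should be checked against the actual object definition.

trigger EnvironmentHubMemberTrigger on EnvironmentHubMember (after update) {
	// Illustrative only - the OrgStatus field name is an assumption; verify via Setup.
	List<Messaging.SingleEmailMessage> emails = new List<Messaging.SingleEmailMessage>();

	for (EnvironmentHubMember member : Trigger.new){
		EnvironmentHubMember oldMember = (EnvironmentHubMember)Trigger.oldMap.get(member.Id);

		if (member.OrgStatus != oldMember.OrgStatus){
			Messaging.SingleEmailMessage email = new Messaging.SingleEmailMessage();
			email.setToAddresses(new String[]{ 'org.admin@example.com' }); // illustrative recipient
			email.setSubject('Environment Hub Member status change');
			email.setPlainTextBody('Status changed from ' + oldMember.OrgStatus + ' to ' + member.OrgStatus);
			emails.add(email);
		}
	}

	if (!emails.isEmpty()) Messaging.sendEmail(emails);
}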

A second standard object, EnvironmentHubInvitation, is also added; however this object is inaccessible from the Setup menu (although it can be reached via /p/setup/layout/LayoutFieldList?type=EnvironmentHubInvitation&setupid=EnvironmentHubInvitationFields). I’m assuming this object to be either forward-looking or a legacy of an invitation-based connection model (as per Salesforce to Salesforce). Regardless, records do seem to be added to this object, the logic for which is unclear.

Related Permissions
Hub Org -
Manage Environment Hub
Environment Hub App and Tab access
EnvironmentHubMember standard object access permissions
Environment Hub Connected App
SSO Service Providers e.g. [00_____0000Cj__] Service Provider Access for SSO (by Profile or Permission Set)

Hub Member Org -
Connect Organisation to Environment Hub

Environment Hub in Practice
For consulting projects Environment Hub offers significant value in terms of management and tracking of the org estate and centralisation of user administration. In practice this would require all project contributors to access production (Hub) as the primary org and SSO into secondary (Hub Member) orgs as required. In this model, production could be utilised for project collaboration, bug-tracking, project management etc., which is a common approach. The obvious downside is the requirement to license the project team in production – a big challenge on many projects where user licenses aren’t provisioned until a late stage or business use runs parallel to project activity. In such cases, project contributors could be provisioned with low-end user licenses on a temporary basis. Ideally we could do this with a Chatter Plus license, as users would only require Chatter, Custom Objects (10 or fewer) and Environment Hub access.

For ISV projects, the utility of Environment Hub relates more specifically to the ability to catalogue the multitude of environments required for development, test, i18n, packaging, release and patch purposes, not to mention Trialforce. Efficiency of access across this estate is also a key factor.

References
Environment Hub Online Help

Salesforce1 Lightning

Once again the annual Dreamforce event has been and gone, leaving most practitioners with an uncomfortable knowledge deficit in terms of the real detail of the wealth of new innovations announced. This autumn period, post-Dreamforce, is often a time for investigation; piecing together information gathered from press releases, blog posts, social media and event session replays.

This post provides the output of my own brief investigations into the Salesforce1 Lightning announcement. Note, I may be vague or incorrect in some areas; I have made assumptions (safe ones, I hope).

Salesforce1 Lightning – What is it?
Salesforce1 Lightning is the next generation of the Salesforce1 Platform; it is framed specifically as a platform-as-a-service (PaaS) play and heralded as the world’s number 1. The platform is comprised of a number of new and re-branded technologies that collectively target the rapid delivery of cross-device, responsive applications via clicks-not-code. Responsive in this context relates to dynamic components that adapt their layout to the viewing environment to provide an optimised viewing experience, regardless of the dimensions (i.e. a single view layer that supports desktop computers, smartphones, tablets and wearables).

Salesforce1 Lightning – Key Features
- Lightning Framework (GA now)
Salesforce-developed UI framework for rapid web application development based on the single-page model, loosely coupled components and event-driven programming. The underlying Aura Framework was open-sourced last year and is used internally by Salesforce for the development of Salesforce1 plus recently introduced platform functionality (Opportunity Splits UI etc.). The Lightning Framework represents the availability of a subset of the Aura Framework functionality as a platform service for custom application development.

- Lightning Components (beta now, GA Spring 2015)
In order to build with the Salesforce1 Lightning Framework we need components. Components represent cleanly encapsulated units of functionality from which user interactions (i.e. apps) are composed. Component communication is supported by an event-based architecture.

Standard Components – e.g. Button, Report Chart, Chatter Feed. Standard Components enable custom applications to be composed that inherit the Lightning UI (meaning the Salesforce1 UI).

Custom Components. Using the Lightning Component Framework, custom components can be developed to deliver any required experience (style, branding, function etc.). Developers build components with predominantly client-side behaviours; Apex can be employed to provide server-side controller functionality (a minimal controller sketch follows the component list below).

AppExchange Components. 3rd party commercial components installed from the AppExchange.
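The sketch below illustrates the server-side piece only: an Apex controller exposing a method to a custom component via the @AuraEnabled annotation. The class name and query are illustrative assumptions, not taken from any specific component.

public with sharing class AccountListController {
	// @AuraEnabled exposes the method to the component's client-side controller.
	@AuraEnabled
	public static List<Account> getAccounts(){
		return [select Id, Name, Industry from Account order by Name limit 50];
	}
}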

- Lightning App Builder (beta Spring 2015)
As the name implies the App Builder enables application composition through the drag-and-drop of components – meaning no-code. The builder provides Desktop, Laptop, Tablet, Phone and Smart Watch views onto which optimal component layouts are configured – the responsive behaviour of the components themselves therefore applies within the device type view. This makes good sense, as an optimal laptop experience is not the phone view with proportionally bigger components. This approach is commonplace; wix.com for example works this way – i.e. build a generic device type specific view with components that adjust to the device specific viewing environment.

- Lightning Process Builder (GA Spring 2015)
Re-branded Force.com Flow / Visual Workflow functionality. In this context the point being that the business process logic is configurable through a drag-and-drop visual tool. Edit: my assumption here in relation to re-branding may well be incorrect, it has been suggested that the tool is in fact a distinct technology from Force.com Flow and may coexist. I’ll update this when I understand more – the functional overlap between Force.com Flow and Process Builder looks to be significant.

- Lightning Schema Builder (GA now)
Re-branded Schema Builder functionality. In this context the point being that the data schema is open to visual (i.e. drag-and-drop) manipulation.

- Lightning Connect (GA now)
Re-branded Platform Connect, or External Objects. The ability to configure virtual objects that query external data sources in real-time via the OData protocol. Connect underpins the concept of real-time external data access via the Salesforce1 platform and its constituent mobile applications.
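By way of a hedged illustration, external objects take the __x suffix and can be queried like custom objects; the object and field names below are entirely illustrative and depend on the external data source definition.

String customerId = 'C-00042'; // illustrative external key value
List<Order_Detail__x> details = [select ExternalId, Order_Total__c
                                   from Order_Detail__x
                                   where Customer_Id__c = :customerId]; // query executes against the OData source in real-time
System.debug(LoggingLevel.ERROR, details.size() + ' external records retrieved.');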

- Lightning Community Builder (beta now, GA Spring 2015)
As the name implies, a drag-and-drop tool for the configuration of Salesforce Communities plus the content delivered.

Entry Points
Salesforce1 Lightning Components can be accessed in the following ways.

Standalone App – access via /NAMESPACE/APPNAME.app
Salesforce1 Mobile App Tabs – create a Tab linked to a Lightning Component and add to the Mobile Navigation Menu.
Salesforce1 Mobile App Extensions – currently in Pilot, a means by which custom components can override standard components.

Future – All Visualforce entry points should ultimately become options for Lightning Components. This one is definitely an assumption. Standard components which exhibit the standard Salesforce (Aloha) styling would be required for this.

Considerations
The following list represents some initial thoughts regarding the significance of Salesforce1 Lightning – formed without the insight gained through practical experience.

- Mobile-first.
The design principle is simple: the most effective way to build compelling user experiences across multiple devices is to start with the simplest (or perhaps smallest) case, i.e. mobile. Note, the introduction of wearables possibly invalidates this slightly. The alternate approach of trying to shoe-horn complex desktop experiences onto smaller viewing environments rarely produces anything worthwhile.

- Rapid Development.
Quickly, faster, speed-of-business plus various other speed-related terms litter the press releases for Salesforce1 Lightning. If the name itself isn’t sufficient to highlight this, the platform is all about rapid development. In this context rapid is realised by configuration-over-code; assembling apps from pre-fabricated, road-tested components delivers the shortest development cycle. That said, regardless of the form development takes, a development lifecycle is absolutely necessary – the vision of analysts configuring apps over lunch and releasing immediately to business users is prone to disaster; I don’t believe this to be the message of Salesforce1 Lightning however.

- Technical Barriers versus Organisational Friction.
Removing technical barriers to rapid application development is only one part of the equation, organisations need to be culturally ready to embrace and therefore benefit from such agility. Building an enterprise app in a week makes little sense if it takes 3 months for acceptance, approval and adoption processes to run. The concept of turning IT departments into centres of innovation is an incredibly powerful aspiration, however this relies heavily on empowerment, trust and many other agile principles some organisations struggle with.

- Development model.
The Lightning development model is fully consistent with the age-old Salesforce philosophy of rapid declarative development via pre-fabricated componentry. Application architects, admins, analysts etc. assemble the apps with components supplied by Salesforce, built in-house or procured from the AppExchange. Developers will therefore focus more on building robust and reusable components than actual applications – which makes good sense assuming appropriate skills are applied to component specification and application design. The model requires non-technical resource to adopt a development mindset; this may be problematic for some.

- Developer Skills.
To build with the Salesforce1 Lightning Component Framework developers must be proficient in JavaScript – beyond intermediate level, in my view. Lightning is a comprehensive component-based client-server UI framework with an event-driven architecture. Developers with previous exposure limited to basic JavaScript augmentations to Visualforce face a learning curve. Anyone still under the false impression that JavaScript programming is simple compared to languages such as Java, C, C++ etc. may want to reconsider this before embarking on a Lightning project.

- Use Cases.
The viability of the proposed development model may ultimately come down to the complexity of the use cases/user experiences that can be supported without reverting to custom component development. By their very nature mobile interactions should be simple, but for desktop interactions it will be interesting to understand the scope of the potential for complex applications.

- Adoption.
Salesforce1 Lightning follows a similar paradigm to both Site.com and Force.com Flow, where historically technically oriented tasks are made possible for non-technical users; drag and drop visual composition of business process realisations and interactive web site development respectively. Both technologies, as innovative and empowering as they are, do not appear to have radically changed the development models to which they pertain. An obvious question therefore is whether the empowering technology alone is enough to drive adoption.

References
Aura Documentation Site
Lightning Components Developer’s Guide
Lightning QuickStart

Apex Unit Test Best Practice

This post provides some general best practices in regard to Apex Unit Tests. This isn’t a definitive list by any means; as such, I’ll update the content over time.

Top 10 Best Practices (in no order)

1. TDD. Follow Test Driven Development practice wherever possible. There is no excuse for writing unit tests after the functional code; such an approach is indicative of a flawed development process or lax standards. It’s never a good idea to estimate or deliver functional code without unit tests – the client won’t appreciate an unexpected phase of work at the point of deployment, not to mention the pressure this approach puts on system testing.

2. Code Quality. Ensure unit tests are written to cover as many logical test cases as possible; code coverage is a welcome by-product but should always be a secondary concern. Developers who view unit tests as a necessary evil, or worse, need to be educated in the value of unit tests (code quality, regression testing, early identification of logical errors etc.).

3. Test Code Structure. For some time now I’ve adopted a Test Suite, Test Helper pattern. A suite class groups tests related to a functional area. A test helper class creates test data for a primary object such as Account (i.e. AccountTestHelper.cls); secondary objects such as price book entry would be created within the product test helper class. The suite concept provides a logical and predictable structure; the helper concept emphasises that test data creation should be centralised. A minimal helper sketch is shown below.
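The sketch below shows a minimal form of the helper concept; the method signature mirrors the usage in the example test suite class later in this post, but the field values are illustrative.

public with sharing class AccountTestHelper {
	// Centralised test data creation for the Account object.
	public static List<Account> createAccounts(Integer count, String namePrefix){
		List<Account> accounts = new List<Account>();

		for (Integer i = 0; i < count; i++){
			accounts.add(new Account(Name = namePrefix + ' ' + i));
		}
		insert accounts;
		return accounts;
	}
}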

4. Test Code Structure. Put bulk tests in a separate class e.g. AccountTriggerBulkTestSuite.cls (in addition to AccountTriggerTestSuite.cls). Bulk tests can take a long time to complete – this can be really frustrating when debugging test failures – particularly in production.
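By way of a hedged sketch, a bulk test method typically applies a single DML statement to a 200 record batch, with Test.startTest/Test.stopTest isolating governor limits; the helper signature reuses the AccountTestHelper sketch above and the assertion logic is illustrative.

@isTest(SeeAllData=false)
public with sharing class AccountTriggerBulkTestSuite {
	static testMethod void bulkTestCase1(){
		List<Account> accounts = AccountTestHelper.createAccounts(200, 'Bulk Co');

		Test.startTest();
		for (Account a : accounts) a.Name += ' updated';
		update accounts; // single DML - the trigger receives a 200 record batch
		Test.stopTest();

		System.assertEquals(200, [select count() from Account where Name like '%updated']);
	}
}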

5. Test Code Structure. Ensure test classes contain a limited number of test methods. I tend to limit this to 10. As with point 4, this relates to test execution time; individual methods can’t be selectively executed – the smallest unit of execution is the class.

6. SeeAllData. Use SeeAllData=true only by exception and at the test method level only. Legacy test code related to pricebooks that historically required this can now be refactored to use Test.getStandardPricebookId() (see the example below). Also, set the [Independent Auto-Number Sequence] flag to avoid gaps in auto number sequences through the creation of transient test data.
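The hedged example below shows the refactored pattern; the product and price values are illustrative.

static testMethod void pricebookEntryTestCase1(){
	Product2 product = new Product2(Name = 'Test Product', IsActive = true);
	insert product;

	PricebookEntry entry = new PricebookEntry(
		Pricebook2Id = Test.getStandardPricebookId(), // no SeeAllData=true required
		Product2Id = product.Id,
		UnitPrice = 100,
		IsActive = true);
	insert entry;

	System.assertNotEquals(null, entry.Id);
}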

7. Test Case Types. As the Apex Language reference proposes, write unit tests for the following test case types.

Positive Behaviour – logical tests that ensure the code behaves as expected and provides successful positive outcomes
Negative Behaviour – logical tests for code behaviour where parameters are missing, or records do not adhere to defined criteria – does the code protect the integrity of unaffected records – does the runtime exception handling function as expected
Bulk – trigger related tests primarily – how the code behaves with a batch of 200 records – mix the batch composition to stress the code against governor limits
Restricted User – test relevant combinations of user role and profile – this test case type is prone to failure through sharing model adjustments – triggers should delegate processing to handler classes that have the “with sharing” modifier

8. Debugging. Always use the syntax below for debug statements within code (test and non-test code). An efficient practice is to add sensible outputs whilst writing the code. This approach avoids a code update or re-deployment to add debug statements during error diagnostics. Note – in such cases Checkpoints could be a better approach anyway – particularly in production. The use of the ERROR logging level enables a restrictive log filter to be applied such that a clear debug log is produced and max log size truncation is avoided – note, log filters can also have a positive impact on transaction execution time.

System.debug(LoggingLevel.ERROR, 'my message');

9. Commenting. Always comment test methods verbosely to ensure the test case intent is clear and that the test code can be mapped to the related non-test code. Test classes should be fully self-documenting and be viewed as the primary enabler for the future maintenance of the non-test code.

10. Maintenance. Test code is highly dependent on the environment state. Any configuration change can require test code to be updated; this could be a new mandatory custom field or a sharing model adjustment. In many cases the resultant unit test failure state is not encountered until the next deployment to production, which can’t proceed until the tests are fixed. This scenario will be familiar to many people. The mitigation requires the local administrator to understand the risk, frequently run the full set of unit tests and to manage the test code update cycle proactively.

Example Test Suite Class

/*
Name: RecordMergeTestSuite.cls
Copyright © 2014  CloudMethods
======================================================
======================================================
Purpose:
-------
Test suite covering RecordMerge operations.
Bulk tests are defined in the class RecordMergeBulkTestSuite.cls
======================================================
======================================================
History
------- 
Ver. Author        Date        Detail
1.0  Mark Cane     2014-09-16  Initial development.
*/
@isTest(SeeAllData=false)
public with sharing class RecordMergeTestSuite {
	/*
     Test cases:	
        singleTestCase1 - positive code behaviour/expected outcome test case 1.
        negativeTestCase1 - negative outcome test case 1.
        restrictedUserTestCase1 - positive/negative code behaviour in the context of specific user role/profile combinations.
        ..
        future test cases to cover : * some coverage provided
        1. tbd.
        2. tbd.
    */
    
    /* */
	static testMethod void singleTestCase1() {
		// Test case 1 : positive outcome test case 1.
        setup();

		// Steps - 1. 
		// Logical tests - 1.
    }
    /* */    

    /* */
	static testMethod void negativeTestCase1() {
		// Negative test case 1 : negative outcome test case 1.
        setup();

		// Steps - 1.
		// Logical tests - 1. 
    }
    /* */    

    /* */
	static testMethod void restrictedUserTestCase1() {
		// Restricted user test case 1 : positive/negative code behaviour in the context of specific user role/profile combinations.
		List<User> users;
		List<Account> accounts; // declared here for visibility across the runAs blocks below.
		
		System.runAs(new User(Id = Userinfo.getUserId())){ // Avoids MIXED_DML_OPERATION error (when test executes in the Salesforce UI).
			setup();		    					
			users = UserTestHelper.createStandardUsers(2, 'Sophie', 'Grigson');
		}
		
		System.runAs(users[0]){
			accounts = AccountTestHelper.createAccounts(1, 'Abc Incorporated');
			
			// Steps - 1. 
			// Logical tests - 1.
		}		
    }
    /* */
	
	// helper methods    
    private static void setup(){
   		SettingsTestHelper.setup();    	
    }
    // end helper methods
}

Conceptual Data Modelling

The biggest area of risk on any Salesforce implementation project is the data model. In my view this assertion is beyond question. The object data structures and relationships underpin everything. Design mistakes made in the declarative configuration or indeed technical components such as errant Apex Triggers, poorly executed Visualforce pages etc. are typically isolated and therefore relatively straightforward to remediate. A flawed data model will impact on every aspect of the implementation from the presentation layer through to the physical implementation of data integration flows. This translates directly to build time, build cost and the total cost of ownership.

It is therefore incredibly important that time is spent ensuring the data model is efficient in terms of normalisation, robust and fit for purpose; but also to ensure that LDV (large data volumes) is considered, business critical KPIs can be delivered via the standard reporting tools and that a viable sharing model is possible. These latter characteristics relate to the physical model, meaning the translation of the logical model into the target physical environment, i.e. Salesforce (or perhaps database.com).

Taking a step back, the definition of a data model should journey through three stages: conceptual, logical and physical design. In the majority case most projects jump straight into entity relationship modelling – a logical design technique. In extreme cases the starting point is the physical model, where traditional data modelling practice is abandoned in favour of a risky incremental approach with objects being identified as they are encountered in the build process. In many cases starting with a logical model can work very well and enable a thorough understanding of the data to be developed, captured and communicated before the all-important transition to the physical model. In other cases, particularly where there is high complexity or low understanding of the data structures, a preceding conceptual modelling exercise can help greatly in ensuring the validity and efficiency of the logical model. The remainder of this post outlines one useful technique for performing conceptual data modelling: Object Role Modelling (ORM).

I first started using ORM a few years back on Accounting related software development projects where the data requirements were emergent in nature and the project context was of significant complexity. There was also a need to communicate early forms of the data model in simple terms and show the systematic, fact-based nature of the model composition. The ORM conceptual data model delivered precisely this capability.

ORM – What is it?
Object Role Modelling is a conceptual data modelling technique based on the definition of facts in the form of natural language and intuitive diagrams. ORM models are subject to rigorous data population checks, the addition of logical constraints and iterative improvement. A key concept of ORM is the Conceptual Schema Design Procedure (CSDP), a prescriptive 7-step approach to the application of ORM, i.e. the analysis and design of data. Once the conceptual model is complete and validated, a simple algorithm can be applied to produce a logical view, i.e. a set of normalised entities (ERD) that are guaranteed to be free of redundancy. This generation of a robust logical model directly from the conceptual schema is a key benefit of the ORM technique.

Whilst many of the underlying principles have existed in various forms since the 1970s, ORM as described here was first formalised by Dr. Terry Halpin in his PhD thesis in 1989. Since then a number of books and publications have followed from Dr. Halpin and other advocates. Interestingly, Microsoft made some investment in ORM in the early 2000s with the implementation of ORM as part of the Visual Studio for Enterprise Architects (VSEA) product. VSEA offered tool support in the form of NORMA (Natural ORM Architect), a memorable acronym. International ORM workshops are held annually; the ORM2014 workshop takes place in Italy this month.

In terms of tool support, ORM2 stencils are available for both Visio and Omnigraffle.

ORM Example
The technique is best described in the ORM whitepaper. I won’t attempt to replicate or paraphrase this content; instead, a very basic illustrative model is provided to give nothing more than a sense of how a conceptual model appears.

ORM2 basic example

Final Thoughts
In most cases a conceptual data model can be an unnecessary overhead; however where data requirements are emergent or sufficiently complex to warrant a distinct analysis and design process, the application of Object Role Modelling can be highly beneficial. Understanding the potential of such techniques is, I think, perhaps the most important aspect; a good practitioner should have a broad range of modelling techniques to call upon.

References
Official ORM Site
ORM2 Whitepaper
ORM2 Graphical Notation
Omnigraffle stencil on Graffletopia

Salesforce Release Methodology – Change Control

This post presents a basic model for the control of change within a Salesforce development process. Best practice suggests that all non-trivial projects should implement some degree of governance around environment change, i.e. Change Control. This is perhaps obvious; what isn’t necessarily obvious is how to achieve effective change control without introducing friction to the develop->test->release cycle.

In simplistic terms a change control process should ensure that all changes are applied in a controlled and coordinated manner. The term controlled in this context relates to audit-ability, acceptance and approval. The term coordinated relates to communication, transparency and orchestration of resources. The foundation upon which such control and coordination is achieved is accurate recording of changes and their application to specific environments, the object model below shows one approach to this.

Note, where feasible I recommend using the production org for this purpose, which may be challenging from a licensing perspective, however this approach has many advantages over off-platform alternatives such as Excel spreadsheets for tracking change. Chatter provides excellent support for collaboration on deployments.

Change Control Object Model

Key Principles
1. For most projects tracking change at the component level (Custom Field, layout adjustment etc.) is time expensive and impractical in terms of associated overhead.

2. The model does not require change to be recorded at the component level. Instead change summaries are recorded and the flow of change between environments tracked. The exception to this is Manual Change, where the component type is not supported by the API or Change Set approach, in such cases Manual Changes are recorded individually.

3. Sandbox to sandbox deployments should be recorded (as the internal deployment type) and tracked.

4. A Deployment will be comprised of Manual Changes organised into Pre and Post Actions, plus a set of grouped Automated Changes. Manual changes may be configuration or data in type.

5. A periodic audit should be conducted to compare the Change Control Log for an Environment against the Setup Audit Log within the org.

6. A production deployment should always be preceded by a full deployment verification test (DVT) that replicates exactly the conditions of deployment to the production org.

7. A Deployment that targets the Production org should always require approval. A standard Approval Process should be introduced, with Chatter Post approval where appropriate.

References
Components supported by Change Set
Metadata API Unsupported Component Types

Salesforce Winter 15 Platform Highlights

Once again it’s official: the summer is over and winter is approaching – Winter ’15 that is. Sporting a nice Eskimo logo, the new release rolls out across the sandbox instances imminently, with the main production pod upgrades occurring in mid-October. The detailed rollout schedule can be found here and the all-important Winter ’15 Release Notes here.

This post outlines selected highlights related to the Force.com platform (in no order of significance). As usual with Dreamforce on the near horizon (October), the Winter release is relatively low-key; however even in this context the minimal update in relation to Apex and Visualforce is notable.

- features are GA if not indicated otherwise

Salesforce1 Platform Connect
Salesforce1 Platform Connect introduces the concept of External Objects, where the data is accessed via RESTful web service callout on request, i.e. the object definition exists in Salesforce but the data is queried from the source system on demand. Platform Connect is built on the Open Data Protocol (OData) version 2.0. The key use case here is the ability to provide a seamless view of data across system boundaries without the overhead/inefficiency and latency issues of secondary data persistence. It’s unclear at the time of writing whether modifications to external object data are possible – the OData protocol certainly supports this.

Winter 15 External Object

Data Pipelines – Pilot
Apache Pig scripts can be executed on Apache Hadoop infrastructure running on the Salesforce platform to perform highly scalable data processing/evaluation tasks. Information on this complex area appears limited at this time.

Data.com Duplicate Management – Beta
Point-of-entry duplicate prevention for Accounts, Contacts and Leads. Matching rules can be defined which either allow or prevent duplicates identified via custom rule logic. Cross-object matching rules are not supported. It’s unclear how this feature will be licensed.

Custom Lookup fields on Activities – Beta
A long overdue enhancement to Tasks and Events enabling custom lookup fields to be defined in addition to the what and the who.

Social Customer Service Starter Pack – Pilot
Built on integration to the Radian6 platform the starter pack enables monitoring of 2 social accounts directly within Salesforce, without additional Radian6 licensing. Whilst the starter pack appears to be limited to the creation of cases for all inbound social content, the Social Customer Service feature will enable processing by an Apex class to determine appropriate handling of tweets, posts etc.

Event Log File Access
API access to historical event log data (after 24 hours has elapsed). Event types such as Apex Callouts, API, Report, Login can be used to analyse historical trends and to diagnose technical or limits related support issues.

Salesforce1 Flexible Pages
Flexible Pages now support 3 new component types; reportChart, richText and visualforcePage. As the Salesforce1 development documentation states, flexible pages occupy the middle ground between layouts and Visualforce pages. To my mind such pages are simply composite views – app home pages and the like. With Winter ’15 flexible page construction is still an XML task; there is still no UI builder.

$Permission Global Variable
Enables checking of Custom Permission assignment for the current user within declarative formula expressions.

Login Flows
Force.com Flow now supports post-authentication Login Flows, enabling the declarative configuration of custom login journeys; 2-factor authentication, terms and conditions agreements, product tours etc. Defined flows are assigned to user profiles and display as integrated content within the login page. This capability could be a key factor in increased adoption of Force.com Flow – I hope so. The Winter ’15 release also includes time-based processes and the ability to post to Chatter (without an Apex plug-in), further good reasons to take a second look at Flow.

LinkedIn and Twitter Authentication Provider Types
An obvious extension to the existing set of proprietary authentication providers. As with the Facebook equivalent, custom Apex code in the form of a registration handler is required. Note, custom button images can now be added to an Authentication Provider for display exclusively on community login pages.

External Identity License
A new low cost license type enabling Community authentication and access to Salesforce Identity, limited Chatter features and 2 custom objects. External Identity licensed users are upgradeable to Customer or Partner Community types. This license type may be intended for use cases where community users simply want to collaborate via Chatter, with no requirement for standard CRM functionality, or where very basic custom functionality is sufficient. I have recent experience of the real need for this type of license, it will be interesting to see the price point.

Deploy with Active Apex Jobs
At long last it’s now possible to deploy components referenced by active asynchronous processes (scheduled jobs, Batch Apex, @futures). This behaviour requires a Deployment Setting [Deploy components when corresponding Apex Jobs are pending or in progress] to be set. It’s unclear whether there are any consequences to this, nonetheless this is a great improvement for orgs with a busy batch schedule.

Apex Queueable Interface
A new Apex interface enabling initiation and monitoring of asynchronous processing; an improved @future in other words. At first glance the implementation approach looks like a simpler version of the Batchable interface, with a single method execute(). Queueable classes are invoked via System.enqueueJob(new MyQueueableClass()), which for monitoring purposes returns the ID of the corresponding record in the AsyncApexJob object. Single path job chaining is possible as each execute() method can invoke a single further Queueable class.
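A minimal hedged sketch is shown below; the class names are illustrative (as per the text above) and the chaining line is commented out.

public class MyQueueableClass implements Queueable {
	public void execute(QueueableContext context){
		// Asynchronous processing logic here.
		// Single path chaining - one further job may be enqueued per execution.
		// System.enqueueJob(new NextQueueableClass()); // hypothetical next job in the chain
	}
}

// Invocation and monitoring (e.g. from anonymous Apex).
Id jobId = System.enqueueJob(new MyQueueableClass());
AsyncApexJob job = [select Id, Status from AsyncApexJob where Id = :jobId];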

Salesforce Live Agent

Live Agent – What is it?
Live Agent enables real-time, online chat between an organisation and its customers, prospects etc. Chat sessions can be initiated via clicking a button or link on a web page, or via automated invitation based on page access metrics etc. For the end user, it can be very convenient to resolve a query through chat, avoiding the usual frustrations of calling a support line – although the end-user experience is still very dependent on the skill and knowledge of the receiving agent. For the organisation, there’s huge potential for call deflection – an expert team efficiently managing multiple concurrent chats (routed by skill) can hugely reduce call volume. The handling cost of a chat session is generally perceived to be one third of the cost of a phone call.

Theoretically therefore we have a win-win situation; online chat should be good for all parties. The double-digit percentage increase in the use of chat year-on-year (according to analysts) is evidence of this. Analysts are also predicting strong growth in live video chat for customer service, with B2C chat options such as Amazon’s Mayday button becoming mainstream.

Salesforce Live Agent was introduced onto the platform in the Spring 12 release following the acquisition of Activa Live Chat in September 2010.

Key Implementation Concepts
Configuration of Live Agent is more complicated than most functional areas and not something to embark upon without reading the implementation guide. It is possible to deliver a usable solution using the declarative approach, however a fully branded or custom experience will require technical expertise with Visualforce and standard web technologies such as HTML, CSS and JavaScript.

1. Configuration
A Configuration defines the behaviour and presentation of Live Agent within a Salesforce console. Configurations define the maximum number of active chats, the welcome greeting, sound notifications on various events, supervisor monitoring settings and skills-based chat transfer options. Configurations are assigned to Users and/or User Profiles, typically with separate configurations for standard chat users and for supervisors.

2. Deployment
A Deployment defines the behaviour and presentation of Live Agent to the end-user, i.e. customer or prospect. Deployments control the look and feel of the Chat windows displayed, plus options for the saving of transcripts. Multiple deployments may be configured across product lines or host web sites.

Note – each deployment generates a snippet of HTML which should be inserted once into the host page. The referenced endpoint is unique to the Live Agent org and is generated (and displayed) when Live Agent is enabled.

<script type='text/javascript' src='https://c.______.salesforceliveagent.com/content/g/js/31.0/deployment.js'></script>
<script type='text/javascript'>
liveagent.init('https://d.______.salesforceliveagent.com/chat', '___g00000008OMv', '___g0000003MCJN');
</script>

3. Skill
Skills play a key role in routing incoming chats to appropriate agents. In short, multiple skills can be defined (“General Enquiry”, “Product A Enquiry” etc.) and assigned to Users and/or User Profiles. Every inbound chat is related (via the button) to one or more skills.

4. Chat Button
A Chat Button defines the entry point for a chat and also key information relating to routing such as skills required, queuing options and routing type. The routing type can be set to Least Active, Most Available or Choice. Buttons can be fully customised and branded and be set to display pre-chat forms or post-chat pages.

Note – each Chat Button generates a snippet of HTML which should be inserted into the host page.

<a id="liveagent_button_online____g00000008OT8" href="javascript://Chat" style="display: none;" onclick="liveagent.startChat('___g00000008OT8')">
<!-- Online Chat Content -->
</a>
<div id="liveagent_button_offline____g00000008OT8" style="display: none;">
<!-- Offline Chat Content -->
</div>

<script type="text/javascript">
if (!window._laq) { window._laq = []; }
window._laq.push(function(){liveagent.showWhenOnline('___g00000008OT8', document.getElementById('liveagent_button_online____g00000008OT8'));
liveagent.showWhenOffline('___g00000008OT8', document.getElementById('liveagent_button_offline____g00000008OT8'));
});
</script>

5. Automated Invitation
In principle Automated Invitations work the same way as a Chat Button, however the chat session initiation occurs via a proactive invitation based on defined sending rules (Seconds on Page, Seconds on Site, Page Views, Url Match, Custom Variable).

6. Quick Text
Live Agent appears as a Channel for predefined Quick Text messages. Such messages can be easily inserted into the chat conversation, for agent convenience and standardisation of messaging. To do so, the character sequence ;; must be entered into the chat text input; this will trigger a list of the most recently used messages to appear. Alternatively, additional characters after the ;; will result in a filtered view. Not an ideal user experience, as the agent must be familiar with the message naming. The now retired, standalone version of the Live Agent console (Flash based) had stronger functionality in this area – enabling messages to be found by category.

Extension Points
1. Deployment API
The Deployment API enables JavaScript to be written that can customise the chat window, launch a chat session or specify back-end functionality such as record searching and creation. This API also enables Direct-to-Agent Chat Routing, where routing rules are ignored and chat invitations can be routed directly to one or more specific agents.

2. Pre-Chat Forms / Pre-Chat API
Pre-chat forms introduce a pre-chat data capture step that can be used for routing, mapping and auto-query. Mapping relates to mapping page inputs to record attributes in Salesforce. Auto-query relates to records automatically opening when a chat is accepted, e.g. the contact record for the customer – or perhaps a transactional record such as an invoice, purchase, booking etc. In addition to record search, record creation is also supported, e.g. create a contact record if no match exists. Information entered into the pre-chat form can be viewed by the agent before and during the chat session.

3. Customised Chat Window
Standard Live Agent Visualforce components can be used to build completely custom chat windows.

4. Post Chat Page
Finish the chat session with a page displaying a summary, useful links etc.

5. Live Agent REST API
REST resources that enable a custom chat experience to be developed using any programming language or technology capable of addressing a RESTful web service. Exemplar use cases for this API are custom mobile application development and integration of chat functionality into an existing application.

Note, each org enabled for Live Agent exposes a unique Live Agent API endpoint.

e.g. https://d.__2__cs.salesforceliveagent.com/chat/rest/

POST https://{org unique endpoint}/chat/rest/Chasitor/ChasitorTyping
POST https://{org unique endpoint}/chat/rest/Chasitor/ChatMessage
POST https://{org unique endpoint}/chat/rest/Chasitor/ChatEnd
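By way of a hedged illustration only, the sketch below shows an Apex callout requesting a new chat session from the API; the SessionId resource name and request headers are assumptions to be verified against the REST API guide referenced below, the API version is illustrative, and a Remote Site Setting for the org-specific endpoint would also be required.

HttpRequest req = new HttpRequest();
req.setEndpoint('https://{org unique endpoint}/chat/rest/System/SessionId'); // assumed resource name
req.setMethod('GET');
req.setHeader('X-LIVEAGENT-API-VERSION', '28'); // assumed header name and version
req.setHeader('X-LIVEAGENT-AFFINITY', 'null');

HttpResponse res = new Http().send(req);
System.debug(LoggingLevel.ERROR, res.getBody()); // JSON containing the session id and key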

Functionality – Agent View
Live Agent in the Salesforce Console

Live Agent - Console View

Key Features -
Manage multiple concurrent chats within the console
Find and open existing records related to chats
Create new records based on incoming chats
Choose records or pages to open as sub tabs of each chat session
New Case, Lead, Account, Contact, VF page (1-5)
Include Suggested Articles from Salesforce Knowledge in Live Agent
Attach a file to the chat session
Supervisor tab for monitoring

Functionality – Customer View

Live Agent - Chat Window View

Key Features -
Initiate and end a chat session via button or link
Initiate and end a chat session via invitation acceptance
Save transcript

Salesforce Knowledge Integration
A chat answer field can be added to article types (the field name must be specifically Chat_Answer__c – Long Text); clicking the Share link for an article will result in the Chat Answer text being pasted into the chat window. A nice convenience for agents.

There is also a Suggested Articles from Chat feature in the Salesforce Console configuration which I assumed would search Salesforce Knowledge based on the Chat text – it seems that all this does is add Knowledge to the Live Agent sidebar.

Licensing
Performance Edition – included.
Enterprise and Unlimited Edition – feature license cost

Note – Live Agent is also available in Developer Edition.

References
Implementation Guide
Developers Guide
REST API Guide

Technical Naming Conventions

Challenge – outside of the ISV development model there is no concept of an application namespace that can be used to group the technical components related to a single logical application. To mitigate this issue, and to provide a means to isolate application-specific components, naming schemes such as application specific prefixes are commonplace.

Risk – without application/module/function namespaces etc. all technical components reside as an unstructured (unpackaged) collection, identified only by their metadata type and name. As such maintainability and future extensibility can be inhibited as the technical components related to multiple logical applications converge into a single unstructured code-base.

Options –
1. Application specific prefix. All components related to a specific application are prefixed with an abbreviated application identifier, e.g. Finance Management = “fm”, HR = “hr”. This option addresses the requirement for isolation, but inevitably causes issues where helper classes or classes related to common objects span multiple applications. This option has the advantage of minimising the effort required to remove functionality related to a logical application; only shared classes would need to be modified.

2. Object centric approach. In considering a Salesforce org as a single consolidated codebase where most components (technical or declarative) relate to a primary data object, a strict object-centric approach can be taken to the naming of technical components. With such a mindset, the concept of a logical application becomes less significant, instead components are grouped against the primary data object and shared across the custom functionality that may be related to the object. A strictly governed construction pattern should promote this concept with the main class types defined on a per-object basis. Functional logic not related to a single object should only ever reside in a controller class, web service class or helper class. In the controller and web service cases, the class should orchestrate data transactions across multiple objects to support specific functionality. In the helper class case a function-centric approach is appropriate.

In architectural terms, an object-centric data layer is introduced that is called from a function-centric presentation layer.

presentation layer [Object][Function].page –
SalesInvoiceDiscountCalc.page
SalesInvoiceDiscountCalcController.cls

data layer [Object][Class Type].cls –
SalesInvoiceManager.cls
AccountManager.cls

business logic layer [Function][Helper|Utility].cls –
DiscountCalcHelper.cls
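The hedged sketch below ties the layers together; the method names are invented for illustration and the referenced Manager and Helper classes would be implemented per the conventions above.

public with sharing class SalesInvoiceDiscountCalcController {
	public void applyDiscount(){
		// Function-centric presentation layer - orchestration only.
		// Business logic layer calculates; hypothetical method name.
		Decimal discountRate = DiscountCalcHelper.calculateDiscountRate();

		// Object-centric data layer applies the change; hypothetical method name.
		SalesInvoiceManager.applyDiscount(discountRate);
	}
}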

The downside of this approach is contention on central classes in the data layer when multiple developers are working in a single org, plus the effort required to remove functionality on a selective basis. In the latter case using a source code management system such as Git with a smart tagging strategy can help to mitigate the issue. Additionally, code commenting should always be used to indicate class dependencies (i.e. in the header comment) and to convey the context in which code runs; this is imperative in ensuring future maintainability.

Recommended Approach -
1. Option 2. In summary, naming conventions should not artificially enforce the concept of a logical application – the composition of which is open to change by Admins; instead an object-centric approach should be applied that promotes code re-use and discipline in respect of adherence to the applied construction patterns.

Whichever approach is taken, it is highly useful to consider how the consolidated codebase will evolve as future functionality and related code is introduced. A patterns-based approach can mitigate the risk of decreasing maintainability as the codebase size increases.

Salesforce Application Types

In a typical development process requirements are captured and the information synthesised to form a solution design. The constituent components of the solution design are ultimately translated into physical concepts such as a class, page or sub-page view. This analysis, design, build cycle could be iterative in nature or fixed and may have different degrees of detail emerging at different points, however the applied principle is consistent.

In considering the design element of the cycle, interaction design techniques suggest a patterns-based approach where features are mapped to a limited set of well-defined and robust user interface patterns, complemented by policies for concepts that transcend the patterns such as error handling, validation messages and stylistic aspects (fonts, dimensionality etc.). This process delivers efficiency in terms of reusability of code and reduced technical design and testing, but also critically provides a predictable, consistent end-user experience.

When building custom applications using the declarative tools, we gain all of these advantages using pre-defined patterns and pre-fabricated building blocks. When building using the programmatic aspects of the platform a similar approach should be taken, meaning follow established patterns and use as much of the pre-fabricated componentry as possible. I can never fathom the driver to invent bespoke formats for pages that display within the standard UI; the end result is jarring for the end-user and expensive to build and maintain.

In addition to delivering a consistent, predictable end-user experience at the component level, the containing application itself should be meaningful and appropriate in type. This point is becoming increasingly significant as the range of application types grows release-on-release and the expanding platform capabilities introduce relevance to user populations outside of the front-office. The list below covers the application types possible at the time of writing (Spring ’14).

Standard Browser App
Standard Browser App (Custom UI)
Console (Sales, Service, Custom)
Subtab App
Community (Internal, External, Standard or Custom UI)
Salesforce1 Mobile
Custom Mobile App (Native, Hybrid, browser-based)
Site.com Site
Force.com Site

An important skill for Salesforce implementation practitioners is the accurate mapping of required end user interactions to application types within an appropriate license model. This is definitely an area where upfront thinking and a documented set of design principles is necessary to deliver consistency.

By way of illustration, the following exemplar design principles strive to deliver consistency across end user interactions.

1. Where the interaction is simple, confined to a single User, the data relates to the User and is primarily modifiable by the User only and has no direct business relevance then a Subtab App (Self) is appropriate. Examples: “My Support Tickets”, “Work.com – Recognition”.
2. Where a grouping of interactions forms a usage profile that requires streamlined, efficient navigation of discrete, immersive, process-centric tasks then a Console app is appropriate. Examples: “IT Helpdesk”, “Account Management”.
3. Where a grouping of interactions forms a usage profile that is non-immersive, non-complex (i.e. aligned with the pattern of record selection and view/edit) and likely to be conducted on constrained devices then Salesforce1 Mobile is appropriate. Examples: “Field Sales”, “Executive Insight”.

Design principles should also provide a strong definition for each application type covering all common design aspects to ensure consistency. For example, all Subtab apps should be built the same way technically, to the same set of standards, and deliver absolute consistency in the end user experiences provided.

Salesforce Release Methodology – Simple Case

A very common challenge addressed by architects working with Salesforce is the definition of an appropriate release methodology. By this I mean the identification of the Salesforce orgs required to support the project delivery whether serial or concurrent in nature, the role and purpose of each org and critically, the means by which change is managed and synchronised across environments. With this latter point, a clear definition of the path-to-production is imperative.

In the large-scale, complex project case there is typically time and expertise available to define a bespoke methodology, with build automation, source code control system integration and so forth tailored to the specifics of the programme environment. There’s an abundance of best-practice information available online to help guide the definition of a release methodology for complex projects. For less complex projects, such as those employing the declarative build model only, there is less information available; in such cases what is typically required is a standardised, best-practice approach that can be adopted as-is.

The remainder of this post provides an outline view of an exemplar release methodology for small-to-medium scale, configuration-centric projects (i.e. no Apex code or technical complexities). This information is provided for reference purposes only.

Environment Strategy
The following diagram outlines the environments and their purpose, the defined release steps and a basic approach to change management.

Release Methodology - Simple Case

Key Principles
1. Isolate development from testing activities. This is the golden rule. Testing requires a stable environment unaffected by ongoing development. Development shouldn’t grind to a halt while system testing and acceptance testing processes are applied.
2. Utilise as few sandboxes as possible. Synchronisation of change is time expensive and error prone; avoid this wherever possible. Preparation of standing data post sandbox refresh can also take time, as can the communication required to establish that a refresh can proceed.
3. Don’t over specify the sandbox type. Sandboxes are an expensive asset, especially full-copy and partial-data sandboxes. Calculate the required storage capacity and map to either Developer or Developer Pro. Retain full-copy sandboxes for purposes that do actually require the copied data.
4. Maintain a Change Control Log in the production org to record all changes (at a reasonably high-level) against applied environments.
5. Use the production org for implementation project collaboration. It can also be a useful adoption tool to create Chatter groups such as “Salesforce: Marketing”, “Salesforce: Finance” where collaboration can occur directly with the business users whilst the project is in flight.
6. Accept that change will inevitably be applied to the production org first; record such changes and apply to development and testing sandboxes asap.
7. Always verify the Change Control Log against the Setup Audit Trail before deployments.
8. Use Change Sets for deployment wherever possible.
9. Encourage a development process where Change Sets are updated continually, rather than retrospectively.
10. Always verify the Change Control Log against the list of Change Set supported components.
11. On larger projects a Change Set partitioning strategy may be required; along functional lines, by team or by component type etc.
12. Ensure releases to production are documented and approved. A simple Deployment Request Form (DRF) template should be defined and used to gain approval. This process is key to communication and governance but also helps the team consider fully the pre- and post- deployment steps, risks and rollback strategy.
13. Post-release. Communicate how business processes have been mapped to Salesforce concepts, and the permissions model. Understanding how things work in simple terms can help avoid end-user frustration with a new system. This can also reduce the support burden as end-users can often self diagnose the cause of a problem.

The org strategy diagram above presents an appropriate approach for a serial-release model, i.e. one project or one sprint at a time is being developed, tested then released. In the concurrent-release model, where multiple parallel projects are converging on a single production org, isolated development and test sandboxes will be duplicated per project, with an integration (or pre-production) org providing a synchronisation point where the combined state is validated prior to deployment to production.