Salesforce Cross-Organization Data Sharing

As a long time Salesforce-to-Salesforce (S2S) advocate and interested follower of the Salesforce integration space, the Winter ’14 pilot of cross-org data sharing, or COD for short, caught my attention recently. This short post covers the essentials only; for the details refer to the linked PDF at the end of the post.

Note, the pilot is currently available for Developer Edition (DE) orgs exclusively, which gives some indication of where this feature is in terms of production readiness.

What is it?
Hub and spoke architecture; the hub provides the metadata definition (selected fields) and data for shared objects, while the spoke gets a read-only copy (proxy version) of the object named with the __xo suffix, e.g. myObj__xo.
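By way of illustration, the proxy object should be accessible to Spoke-side Apex as a regular (read-only) object. A minimal, hypothetical sketch – MyObj__c and its fields are assumptions, not pilot specifics:

//Hypothetical - MyObj__c shared from the Hub surfaces as MyObj__xo on the Spoke.
List<MyObj__xo> sharedRecords = [SELECT Id, Name FROM MyObj__xo LIMIT 10];
//Note, the Spoke copy is read-only; DML against the proxy object is not permitted.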

Setup
Configuration takes place at:
Data Management – Cross-Organization Data – (Hub | Spoke) Configuration

Connections are established via an invitation email, generated at the Hub, sent to a Spoke administrator and accepted.

Synchronisation takes place via the [Sync with Hub] action on the Spoke Detail page, which also displays the status of the org connection.

Limitations?
Synchronisation is a manual process, in the pilot release at least.
All custom objects can be shared, but Account is the only standard object that can be shared.
Not all standard fields on Account can be shared.
API limits apply, in addition to specific platform limits applicable to COD (for example, Spokes per Hub, concurrent Spoke connections).

Exemplar Use cases?
Master data sharing in Multi-org architectures.
Reference data sharing (product catalog, price list etc.) with business partners.

Key Differences with S2S?
Uni-directional synchronisation of data.
Spoke data is read-only.
S2S is not limited to the Account standard object (other limits apply).
S2S implies record-level sharing; COD is object-level (sharing permissions notwithstanding).
S2S initial record sharing can be automated (via Apex trigger – see the sketch below).
S2S externally shared records synchronise automatically.
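On the automation point, S2S record sharing is exposed programmatically via the PartnerNetworkRecordConnection object. A minimal sketch of trigger-based sharing follows – the single-connection lookup is illustrative only.

trigger AccountShareS2S on Account (after insert) {
    //Illustrative - assumes a single accepted S2S connection.
    PartnerNetworkConnection conn = [SELECT Id FROM PartnerNetworkConnection
                                     WHERE ConnectionStatus = 'Accepted' LIMIT 1];

    List<PartnerNetworkRecordConnection> shares = new List<PartnerNetworkRecordConnection>();
    for (Account a : Trigger.new) {
        shares.add(new PartnerNetworkRecordConnection(
            ConnectionId = conn.Id,
            LocalRecordId = a.Id,
            SendClosedTasks = false,
            SendOpenTasks = false,
            SendEmails = false));
    }
    insert shares;
}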

Final Thoughts
COD is definitely an interesting new addition to the native platform integration capabilities. Data sharing between Salesforce customers via ETL tools or file exchange can be error-prone and expensive to maintain; COD provides utility in terms of simplification, centralised administration and tracking. Considering the pilot release, S2S remains the best fit for transactional data, whilst COD provides coverage of reference data and some master data scenarios.

References
COD Pilot Guide PDF
COD Overview – Salesforce Help

Salesforce OpenID Connect

In addition to the proprietary Authentication Provider types (Facebook, Janrain, Salesforce) Winter ’14 (v29.0) added support for the OpenID Connect protocol, enabling off-platform authentication via any compatible OpenID Provider (Google, PayPal, Amazon and others). This post provides a basic implementation overview.

OpenID Connect what is it?
OpenID Connect is a lightweight authentication (identity verification) protocol built on top of modern web standards (OAuth 2.0, REST and JSON). OpenID Connect supersedes OpenID 2.0 and amongst other goals is intended to promote interoperability, be accessible to developers and to provide greater support for mobile use cases.

The OpenID Connect standard was recently ratified by members of the OpenID foundation and announced publicly at the Mobile World Congress in Barcelona on 26th February 2014. The standard is supported by Google, Microsoft, Salesforce, AOL, Ping and others.

The protocol works on the principle of an “Authorization Server” or OpenID Provider (OP) (e.g. Google) authenticating users on behalf of a “Client” or Relying Party (RP) (e.g. Salesforce). With the current implementation Salesforce can be configured as an RP but not an OP. In this context an Authentication Provider is configured in Salesforce with the type set to OpenID Connect. Note, the OP is also referred to as an IdP; confusingly we have 3 seemingly interchangeable terms – however OP is the term defined in the standard.

Please refer to the excellent OpenID website for more details in regard to specifications, implementations, useful FAQs etc..

Identity Use Cases
In simple terms, users can single sign-on (SSO) into Salesforce using external web application credentials. A Salesforce user record can be created just-in-time on the first authentication event, subsequent events for the user map to this user record. Note, usefully it’s also possible to map existing users via the [Existing User Linking URL].

A key use case here is B2C portals and communities, however internal users and partners can also use this authentication approach. For internal users SSO via a Google Account could make sense where an enterprise has adopted Google Apps for Business.

What’s important to understand is that Salesforce supports external Authentication Providers for all user types (with the exception of Chatter External) with SAML, OAuth and OpenID Connect protocol support. This provides a high degree of flexibility in terms of how identity management is implemented.

Implementation Steps
The Salesforce help provides a detailed series of steps to follow. The following high-level example shows a basic implementation of a Google Accounts Authentication Provider.

1. Register an OpenID Connect Application.
As per Salesforce Connected Apps, an application is required within your Google account, within which OAuth is configured. Applications are created via the Google Developers Console. The Redirect URI won’t be known until the Authentication Provider is configured in Salesforce.

Screenshot: Google Developers Console

2. Create an Authentication Provider.
Consumer Key = Client ID
Consumer Secret = Client secret
Scope = profile email openid

Screenshot: Salesforce Auth Provider Detail Page

3. Update OpenID Connect Application.
Copy the Salesforce [Callback URL] to the Google [Redirect URI] field and save.

4. Add Authentication Provider to Login Page (Standard App or Community).
This step requires a My Domain where the internal app login page is customised.

Screenshot: Salesforce Login Page Customisation

5. Configure a Registration Handler class.
Within the Authentication Provider configuration a Registration Handler class can be specified; this class implements the Auth.RegistrationHandler interface and is invoked to create new users, or map to existing users, in response to authentication events.


global class GoogleAccountsRegistrationHandler implements Auth.RegistrationHandler {
    global Boolean canCreateUser(Auth.UserData data) {
        //Check whether we want to allow creation of a user.
        return true;
    }

    global User createUser(Id portalId, Auth.UserData data) {
        if (!canCreateUser(data)) {
            //Returning null or throwing an exception fails the SSO flow.
            return null;
        }
        User u = new User();
        //.. populate the mandatory User fields (username, alias, email,
        //.. locale settings, profile etc.) from the Auth.UserData attributes.
        if (data.attributeMap.containsKey('sfdc_networkid')) {
            //We have a community id, so create a user with community access.
            //.. set a community-enabled profile and related contact.
        } else {
            //.. set a standard user profile.
        }
        return u;
    }

    global void updateUser(Id userId, Id portalId, Auth.UserData data) {
        User u = new User(Id = userId);
        //.. update fields if required.
        update u;
    }
}

Testing
1. Initialisation
The [Test-Only Initialization URL] provided on the Authentication Provider detail page can be pasted into a browser address bar and used to examine the raw output provided back from the Authorization Server.

Screenshot: Test Initialisation Output

2. Authentication
The following screenshots show the basic authentication flow. Note, as with SAML based SSO, errors are appended to the URL querystring.

Customised login page showing the Google Account Authentication Provider.

Screenshot: Login Page With Auth Providers

Clicking the button redirects the browser to the Google Service Login page to authenticate (unless a Google Accounts session exists).

Screenshot: Google Service Login Page

For new users the user consent page is displayed. This page can be customised via the Google Developer console.

Screenshot: User Consent Page

Authentication errors are appended to the URL, as-per SAML authentication errors.

Screenshot: Error Page 1

Screenshot: Error Page 2

Finally, a Salesforce session is established. New users can be provisioned automatically; script within the Registration Handler class controls the configuration of such User records.

Screenshot: Auto-provisioned User Detail Page

Implementation Notes
1. Google Developers Console. Remember to turn on Google+ API access – this is required.

2. The auto-created Registration Handler class template must be modified as the default code will fail in many cases.
canCreateUser is false by default – in most cases this must be changed to true.

The combination of values below doesn’t work if the user isn’t configured with a US locale.
u.languageLocaleKey = UserInfo.getLocale();
u.localeSidKey = UserInfo.getLocale();
u.emailEncodingKey = 'UTF-8';
u.timeZoneSidKey = 'America/Los_Angeles';

3. Activation code entry appears to fail with an Internal Server Error, but the code has been successful, so subsequent attempts will succeed. This may be specific to my context.

4. As a best practice map the user Id from the OpenID Provider (Google Account Id in the example) into a custom field on the User record. This provides a robust mapping between the 2 system identifiers that can be used by the Registration Handler script.
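A minimal sketch of this mapping within the Registration Handler – GoogleAccountId__c is a hypothetical custom field on User, and Auth.UserData exposes the provider identifier via the identifier property:

//Within createUser (and updateUser) - persist the OP identifier for future mapping.
u.GoogleAccountId__c = data.identifier; //Hypothetical custom field.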

5. Access can be revoked via the Third Party Account Link related list on the User detail page.

6. Make the OpenID Connect Application name meaningful to end users; the Google user consent page will display this in an “[AppName] is requesting access” format – anything weird or meaningless may cause concern.

Protocol Flow
Please treat the diagram below as indicative only; I put this together from a combination of browser profiling and assumptions made on the basis of reading the OpenID Connect specification.

As always, corrections would be appreciated.

Diagram: OpenID Connect – SF Process Flow

Final Thoughts
OpenID Connect support is a highly useful extension to the Authentication Provider platform capability. For B2C portals and communities it makes sense to offer as many sign-in options as possible (Facebook, Google etc.), removing as much friction to user adoption as possible. As a personal opinion, over time I’m becoming less tolerant of having to register a new user account on each and every authenticated web site I interact with, particularly where I view the interaction as transient. Some users may have concerns around data security, i.e. by signing-in with a Google account are they implicitly giving Google access to the data held in the portal? In the majority case however, users will appreciate the improved user experience. For internal users OpenID Connect makes single sign-on via one or more of a multitude of current and future web platforms incredibly straightforward to implement. In the Salesforce context the key challenge will ultimately relate to reconciliation and rationalisation of identities (i.e. User records).

References
http://openid.net/connect/
http://openid.net/developers/specs/
https://developers.google.com/accounts/docs/OAuth2Login
https://console.developers.google.com/project

Salesforce Naming Conventions – Declarative

This post follows on from my last post on Custom Settings and provides coverage of the wider set of naming conventions I apply across the various component types of the declarative build environment. The list isn’t exhaustive or necessarily better than any other set of standards. Having a set of conventions applied consistently across the build is key; the specifics of those conventions can be subjective. A key principle applied is that of consistency with established standard conventions wherever possible. For example, standard objects and fields don’t have underscores in their API names and follow simple naming patterns; there’s no good reason to deviate from this for custom objects and fields. Naming conventions shouldn’t be considered an area for creative thinking, instead creative energy should be focused on the functional and technical design; the conventions applied should be mundane and predictable.

Convention A – Custom Object

[Object name]. Singular, Pascal Case (upper camel case) and no underscores.
e.g. Data Source -> DataSource__c

Convention B – Custom Field

[Field name]. Pascal Case (upper camel case) and no underscores.

e.g. Date of Birth -> DateOfBirth__c

In the scenario where an implementation is comprised of distinct functional domains, with custom fields relating specifically (and exclusively) to one single domain, the following convention should be applied.

Each functional domain has a defined field name Prefix. e.g. HR, FINANCE, SALES etc.
Fields exclusive to one domain have their API name prefixed with the domain prefix.
e.g. Payroll Number -> FINANCE_PayrollNumber__c
e.g. Industry Segment -> SALES_IndustrySegment__c

This convention allows a logical structure to be applied in isolating fields specific to a single functional domain.

Convention C – Child Relationships

Singular, Pascal Case (upper camel case) and no underscores.
e.g. Account->AccountMetrics__c relationship = Account.AccountMetrics__r
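The relationship name then reads cleanly in SOQL, for example:

//Parent-to-child subquery via the custom relationship name.
List<Account> accounts = [SELECT Id, Name,
                              (SELECT Id FROM AccountMetrics__r)
                          FROM Account];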

Convention D – Page Layout

[[Function] | [Object]] Layout

e.g. Pricing Case Layout
e.g. Pricing Case Close Layout

Convention E – Custom Setting

Custom Setting Label – Pluralised in all cases (e.g. Data Sources). No “Setting[s]” suffix.

API Name – List Settings
- [Data entity that each list entry represents]ListSetting__c

Each record represents an individual entry and as such singular naming is applied, as per objects.

e.g. Analytic Views – AnalyticViewListSetting__c
e.g. Data Sources – DataSourceListSetting__c

API Name – Hierarchy Settings
- [Function of the settings]Settings__c

Each record represents the same set of settings applied at different levels. In concept this differs from objects and list settings; the plural naming reflects this.

e.g. Org Behaviour Settings – OrgBehaviourSettings__c
e.g. My App Settings – MyApplicationSettings__c

Convention F – Workflow Rule

[Object]: [Criteria Description]

Convention G – Workflow Action

Field Update -
[Object]: Set [Field] to [Value]

Email Alert -
[Object]: Send [Template short description]

Task -
[Object]: [Task Subject]

Convention H – Sharing Rule

OBS: [Object] [From selection] to [To selection]
CBS: [Object] [Criteria] to [To selection]

Convention I – Custom Report Type

[Primary Object] with [Child object]s [and [Grandchild object]s]

Convention J – Custom Label

Define sensible categories for the labels. e.g. UI Button Label, UI Text, UI Error Message etc.

Name = [Category with underscores]_[Value with underscores] e.g. UI_Button_Label_Proceed
Category = [Category with underscores] e.g. UI_Button_Label
ShortDescription = [Category] [Value] e.g. UI Button Label Proceed
Value = [Value] e.g. Proceed
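Labels named this way are predictable at the point of use. In Apex the label above is referenced as shown below; in Visualforce the equivalent is the $Label global, i.e. {!$Label.UI_Button_Label_Proceed}.

//Custom Label access in Apex.
String buttonLabel = System.Label.UI_Button_Label_Proceed;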

Convention K – Validation Rule

Single field :
[Field Label] [rule applied]
Mailing City Is Required
Start Date Must Be a Weekday

Multiple fields :
[Field grouping term] [rule applied]
Billing Address Must Be Complete

Cross object :
[Object Name] [Field Label] [rule applied]
Opportunity Stage Is Closed No Edit Of Opportunity Products

Convention L – Publisher Action

[Verb] [Noun]
New Invoice
Update Order
Alert Executive Team

Convention M – User Profile

[[Job Function] | [Department] | [Company]] [[User] | [System Administrator]]
Accounts Payable User
Marketing Executive User
Acme System Administrator

Convention N – Permission Set

Simple Permissions :
[Verb] [Noun]
Delete Case
Convert Leads
Access Sales Console

Complex Permissions :
[Feature Area Descriptor] [User Type]
Work.com Administrator
CloudInvoices User
Knowledge Contributor

Convention O – Public Group

[Grouping term] [[Users] | [Members]]

EU Users
Sales Users
HR Users
Project A Members

Custom Settings Naming Conventions

A quick post to share some thoughts on the standardisation of naming conventions applied to Custom Settings. With Custom Objects it is an obvious best practice to mirror exactly the conventions applied by the Standard Objects; there are no reasons not to adhere to this approach, none. With Custom Settings however there is no comparable reference, and as such a great deal of variation exists in the naming conventions applied. For many this matters little, however I’d make the usual arguments about maintenance, readability, build quality through predictable convention and so forth. There’s also merit in clearly distinguishing Hierarchy from List types and ensuring a meaningful naming for the latter, which can be utilised across many declarative build elements.

I use the naming conventions defined below, for simplicity and clarity reasons. All that really matters is that a standardised approach is taken.

1. Custom Setting Label – Pluralised in all cases (e.g. Data Sources). No “Setting[s]” suffix.

2. API Name - List Settings
- [Data entity that each list entry represents]ListSetting__c

Each record represents an individual entry and as such singular naming is applied, as per objects.

Examples – List
e.g. Analytic Views – AnalyticViewListSetting__c
e.g. Data Sources – DataSourceListSetting__c

3. API Name - Hierarchy Settings
- [Function of the settings]Settings__c

Each record represents the same set of settings applied at different levels. In concept this differs from objects and list settings; the plural naming reflects this.

Examples – Hierarchy
e.g. Org Behaviour Settings – OrgBehaviourSettings__c
e.g. My App Settings – MyApplicationSettings__c

As a further best practice, it is important that any requisite logic applied to the population of the Name field is described in the Custom Setting Description field. If the population is immaterial then simply state as much. Given the inability to relate settings at the attribute level, it can be the case that the Name field plays some role in grouping related settings.

Finally, where possible always retrieve Custom Settings data via the Custom Settings methods – not via SOQL query. In the latter case the Application Cache is bypassed and the query counts against the SOQL query limits within the Apex transaction.
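A minimal sketch using the conventions above – both methods hit the Application Cache and consume no SOQL queries:

//Hierarchy setting - returns the record applicable to the running user (user, profile or org level).
MyApplicationSettings__c appSettings = MyApplicationSettings__c.getInstance();

//List setting - returns all entries, keyed by Name.
Map<String, DataSourceListSetting__c> dataSources = DataSourceListSetting__c.getAll();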

Salesforce Identity Connect

Over recent years spent focused on the Salesforce architecture domain I’ve designed and implemented federated single sign-on (SSO) schemes many times (and the proprietary Delegated Authentication on rare occasions). Whilst each implementation has its nuances in terms of specific access use cases (mobile, composite app, public internet versus corporate network only etc.) and infrastructure deployment topology, there is typically a high degree of commonality. For instance, most corporate environments utilise Active Directory as the primary enterprise identity store – this is inarguable. Equally common in such environments, at least in my experience, is the absence of Active Directory Federation Services (ADFS), or any federated identity service (Ping Identity, Okta etc.). As such the introduction of SSO for Salesforce typically means introducing the infrastructure to support federated SSO as a generic service. Notable exceptions to this are corporates that have transitioned to the Office 365 cloud productivity suite, where federated SSO is highly desirable for the same good reasons as those for Salesforce SSO, namely security, support and end-user experience.

Following the idea that Active Directory is pervasive in today’s corporate infrastructure environments and that Salesforce SSO implementations typically conform to a set of patterns and related technology solutions, implementing SSO becomes a question of mapping a specific set of identity management requirements appropriately.

Example patterns (illustrative and not exhaustive)

SAML Service Provider initiated flow with optional automated user provisioning.
Solution Option – ADFS (possibly with AD groups and conditional custom claims rules) with SAML-based JIT user provisioning.
Rationale – Simple, cost-effective solution using standardised technology. If the IT department supports AD, ADFS shouldn’t cause any alarm.

SAML Service Provider initiated flow with bi-directional identity synchronisation.
Solution Options – ADFS with middleware-based data synchronisation between Salesforce User records and AD User Principals. Or, a 3rd party identity management service offering user synchronisation capability. For a simple one-app point solution, Okta Cloud Connect is a free service to be evaluated alongside equivalent competitor offerings.
Rationale – ADFS has no native identity synchronisation capability. Users can be provisioned into Salesforce, and updates applied to certain fields, via SAML JIT provisioning at the time of access; however there are no background updates and, perhaps most crucially, no automated de-activation of users.

SAML Identity Provider initiated flow with bi-directional identity synchronisation.
Solution Option – 3rd party identity management service.
Rationale – ADFS offers no native user-facing IdP capability, 3rd party services offer great flexibility in this area. Salesforce itself can be used as an IdP – a subject beyond the scope of this post.

As the example patterns show, there are a number of factors to consider and a multitude of enabling technologies, each with different levels of capability (at a reflective price point). Given the complexity of a federated SSO solution, and accepting that an enterprise-wide solution should be considered over a point solution, it is key to understand the solution options available and to consider some degree of future proofing.

A relatively recent arrival in the market is Salesforce Identity Connect (October 2013), an add-on component to Salesforce Identity, providing connection to on-premise identity directories, namely Active Directory. The rest of this post outlines the result of a high-level investigation into the capabilities of Salesforce Identity Connect.

Salesforce Identity Connect
Salesforce Identity Connect, hereafter referred to as SIC, is built on the ForgeRock Bridge Service Provider Edition (SPE) technology and falls into the category of 3rd party identity service.

What is it?
Informal notes in no order.

On-premise identity service with a browser-based admin UI.
Supports Windows, OSX and Linux hosts.
Can be installed on Windows as a service.
Install, configure, run service. Quick to get up and running.
No ADFS proxy equivalent.
An org must be activated for Salesforce identity, this adds a download link to the setup menu and feature licenses.
One SIC instance can support multiple orgs (with different configurations). Sandboxes are supported.
Setup steps: create connections (SF and AD), define mappings, set schedule, configure SSO.
No local storage of users and passwords; user associations are held in a local MySQL db.
Connects to a SF org via OAuth 2.0.
Multiple direct AD Domain Controller connections are not supported, instead a connection should be made to the Global Catalogue where multiple domains must be spanned. Note this approach requires manual modification of the AD schema.
User licenses are managed via Permission Set Licensing, not feature licensing.
Org must have Multiple SAML Configurations enabled.

What does it do?

User Authentication – Federated user authentication is supported via SAML 2.0 Service Provider Initiated and Identity Provider Initiated flows. SP-initiated is the common case, where users access the SF My Domain and are redirected to the IdP (SIC, Identity Connect) for either seamless Integrated Windows Authentication (IWA) or a prompted AD login. For IWA to work there are a number of complex Kerberos-related setup steps to be applied to the SIC host; this can be a challenge as technical expertise in this subject area is limited. There are also end-user browser configuration changes to be applied; for IE users a GPO would cover this, for Firefox a manual configuration would be necessary. Not ideal.

User Synchronisation – Automated background synchronisation, not JIT. Mapping can be attribute-based (AD User attribute to Salesforce User field), AD Group membership to User Profile or AD Group membership to Permission Set. In the attribute case, direct mappings, transformations via JavaScript or default static values can be applied. Where transformations are applied, ternary expressions are a useful convention, ensuring only populated field values are transformed to provide the target value. In the Group to Profile case, a list of AD groups is mapped to specific profiles; such mappings are defined with a precedence order such that the result is always a single profile, regardless of how many groups a user is a member of. A default profile must be defined as a catch-all.

Tracking of users provisioned into Salesforce occurs via Permission Set Licenses for the Identity Connect Permission Set. User associations established by the SIC Reconciliation Report can be manually edited via the web interface. The applied association rules can also be modified where necessary. Synchronisation can be configured to run in Live Update or Scheduled Update mode. The former is more timely but more prone to inconsistency; the latter is a more comprehensive approach with the typical periodic scheduling frequencies. It would be nice to have the option to combine live updates with a scheduled daily synchronisation – this doesn’t appear to be possible.

Licensing?
Salesforce Identity Connect is licensed as an add-on to Salesforce Identity at £1 per user, per month regardless of Edition. Current pricing can be found via this link.

Conclusions
Salesforce Identity Connect provides a capable user authentication and synchronisation service. Good points are the ease of configuration on the Salesforce side plus the multiple org support; the attribute and group mapping functionality is also very nice. The two downsides are the cost, although it’s an expensive area of technology, and the complexity of configuring Integrated Windows Authentication (IWA). The SSO aspect of the service is only meaningful if the SSO is seamless. Coming back to the cost, ADFS is free-of-charge but provides no user synchronisation (other than via SAML JIT). I believe many people will take this route regardless, simply to avoid any cost whatsoever, although £1 per user/month does represent a significant outlay for larger installs and could be hard to justify, particularly for Performance Edition customers. In addition to the run cost there is also the technical complexity of establishing IWA to consider; many IT departments will look nervously at the long list of Kerberos configuration tasks and may resist the approach in terms of the effort, expertise and deployment cost involved.

To be compelling services such as SIC need to be simple and reasonably non-technical to install and run. SIC achieves this nicely for the run aspect, for the install aspect there’s some room for improvement. For many organisations the installation experience will not be an issue, the focus will be on the end-user experience and the administrative functionality, in these areas Salesforce Identity Connect looks a solid offering.

Links
Really good, short walk through video on YouTube

Implementation Guide PDF

Salesforce Implementation Game Plan

Whether you’re managing a commercial software development, leading a consultancy project or building an IKEA table, a game plan is absolutely key to successful delivery. In the latter example IKEA recognises the importance of prescriptive guidance and supplies an instruction leaflet in the box. This however covers only one dimension of successful delivery, the ‘What’ (i.e. what you need to do) – the ‘Who’, the ‘How’ and the ‘When’ are left up to you. In the case of an IKEA table this is acceptable as the resource is probably you (who will likely build the table in your own way regardless of advice received) and the timeline may not be critical. Moving away from this tenuous example, in non-trivial situations all the dimensions of successful delivery are equally significant and must combine cohesively to achieve the defined objective. This calm, controlled, empowering and productive state is precisely what planning is intended to achieve. This success delivery-state is rarely the norm, for various reasons: inexperience, over-optimism, command and control culture, inadequate expertise, process rigidity, poor communication etc. The net effect of such factors is a distress delivery-state where productivity is low and the sense of team is diminished in terms of empowerment, trust and accountability.

Like many people I often find myself misquoting Leo Tolstoy, who didn’t say that failing projects fail for a variety of reasons whilst succeeding projects succeed for the same reason. He did say this however – “All happy families are alike; each unhappy family is unhappy in its own way.” (Leo Tolstoy, Anna Karenina) – which is where the interpretation comes from. I definitely read this somewhere; apologies if this was your book or blog.

So the idea is that all successful projects succeed for the same reason – that reason being, in my view, that the project was able to achieve a success delivery-state, i.e. the plan encompassed all the requisite dimensions and maintained the agility to react and adapt during flight. In this context it matters little which project process, methodology, framework etc. you employ; what matters is that you have a well conceived plan from the outset, or game plan as I like to call it, and execute on that plan in a disciplined manner.

A game plan can take many forms (spreadsheet, picture, diagram, A3 sheet pinned to the wall etc.); whichever way you go, the end result should be an engaging, high-level fusion of vision and planning, tuned for effective communication to your specific audience.

The game plan should influence the detailed planning, but is a higher-level concern that outlines the fundamentals of how the project will succeed, covering the essential aspects only. My preference in the past has been an annotated timeline diagram, showing clearly the basis upon which I’m confident of success – this can be highly effective in terms of establishing confidence within the delivery team and across stakeholders. I don’t believe this is possible with a Gantt chart or spreadsheet, even where progression metrics are added.

By way of illustration the following sections outline an example Game Plan related to a fictitious Salesforce implementation project.

Diagram: Game Plan example

In summary, the game plan concept applied to project delivery can be a powerful tool. It matters little how the game plan is presented or what it contains, simply having one in any form can make a big difference in terms of confidence, focus and communication.

Salesforce Communities at a Glance

This long overdue post provides a quick review of the Salesforce Communities functionality which became GA in the Summer ’13 release.

In my simplistic view Salesforce Communities is a way to bring external users directly into a Salesforce org in a restricted manner with a branded user-experience. This may be for collaboration purposes in the social enterprise sense (i.e. Chatter) or simply to extend a business process out to external parties.

In user license terms a single Community can support internal users and both customer and partner users – a key differentiator from the preceding portal functionality, where portals were limited to either customers or partners with no internal user access. A highly customised portal experience can be delivered using Force.com Sites (and Visualforce etc.), as per the traditional manner – or via Site.com, with its rich drag-and-drop build model. In both cases unauthenticated and authenticated interactions can be delivered. With Communities however the traditional (I mean outdated) portal look-and-feel is replaced with the standard Salesforce aesthetics, with the option to customise the header, footer and colour palette. It’s also straightforward to deliver a branded login page on a custom URL. All in all I think Communities provides a great advancement in terms of delivering branded, collaborative customer/partner interactions using the declarative build tools. Add to that the full collaborative power of Chatter and the Communities proposition is a compelling one.

Salesforce Communities

Communities 1

A very basic Community in preview mode – note the custom colour palette. The rather invasive Global Header can also be seen, thankfully this is now optional page furniture.

Communities 2

Chatter email branding – a previous pain point with branded Chatter implementations.

Communities 3

Branded login page with Authentication Service buttons.

What is it?
In principle a portal technology, in practice a restricted access entry point to a subset of data and functions from an internal Salesforce organisation – presented with a customised/branded UI and custom url.

Key points.
- Internal and external members. A community can contain members with Partner and Customer Community licenses, internal org licenses and legacy portal licenses.

- Custom URL. Communities can be exposed on a custom domain. The default is a subdomain on the force.com domain, as per Force.com Sites etc. Note, I’m unclear on how this works in terms of SSL; at this time it is not possible to upload a server certificate for a custom domain, therefore SSL isn’t supported. This limiting situation may change imminently – I believe a beta for exactly this type of functionality ran last year.

- Communities are built in Preview status, then published.

- Community tabs can be either Salesforce tabs (custom or otherwise) or provided by Site.com.

- Branded login page (header logo and footer text) – see screenshot. Basic customisation is possible; how effective this is will largely depend on the image etc. The login page supports selection of enabled Authentication Providers.

- Branded UI (header, footer images and colour palette). Again, this is light-touch cosmetic customisation and will work where a minimal brand presence is required – which is advisable for an immersive application.

- Accounts are enabled for Partner Communities. Up to 3 roles can be specified (direct children of the Account owner in the role hierarchy – consistent with legacy portal roles). The super user concept is also supported.

- Contacts are enabled for Customer Communities (HVPU – i.e. no role and no sharing rules; record access via Sharing Sets and Share Groups).

- Person Account can’t be enabled for Partner Communities. Person Accounts can’t be enabled as Partner Accounts and therefore can’t access partner portals, so this limitation is consistent.

- User Sharing can be used to control who can see who within the Community (as-per the internal org). A useful capability particularly in scenarios where senior employees may wish to observe a community but not be visible to all.

- Communities support self-registration.

- Mobile access enabled.

- Unauthenticated content, or highly customised interactions can be supported by either Force.com Sites or Site.com.

- Force.com Sites pages inherit the Community branding by default.

- Chatter Email Branding. System emails generated by Chatter can be branded in terms of logo and footer text. This removes the Salesforce branding which has historically been problematic with branded Chatter implementations.

- Ideas, Chatter Answers and Salesforce Knowledge can be enabled within a Community.

- Communities support SSO for internal and external users via the SAML protocol and a SAML-enabled Identity Provider, or via external Authentication Providers (Facebook, OpenID Connect, Janrain and Salesforce).

User Licensing?
Customer Community License – Equates to a High Volume Portal User License (HVPU). Therefore the usual restrictions on HVPU record access apply (no role and no ownership or criteria-based sharing rules). Sharing Sets and Share Groups must be understood when considering an effective record access model.

Partner Community License – Equates to Gold Partner License.

The new Community user license types represent the two extremes of the portal licensing scale, with significant differences in capability and pricing. It is therefore good news that a single Salesforce Community can support both types enabling a blended view to be taken on the license model.

One final observation: the two key pre-requisites to implementing Salesforce Communities are a rigorous audit of the current state security model (record access, profile/permission sets, groups, folders, listviews etc.) and a detailed future state design. In both cases this should cover specifics such as Chatter groups, user-sharing etc. In my view it’s always better to be proactive and pessimistic when it comes to security.

Salesforce Spring ’14 Highlights

After a short period of blindly exploring a pre-release Spring ’14 org, the release notes are now available.

With Salesforce1 released recently it’s unsurprising that Spring ’14 isn’t a significant release in terms of new features and that a lot of the highlights are currently at Pilot status. I expected a low-key release with incremental improvements to Salesforce1, the usual evolution to Chatter and a number of older pre-release features becoming GA – this would seem to be the case. A few personal highlights below in no particular order, following a quick read of the release notes.

Skills and Endorsements (Pilot) – Skills reference data (ProfileSkill) can be added and associated with People (ProfileSkillUser). The skills profile for a User is displayed on the Chatter profile page and each skill can be endorsed by other users (ProfileSkillEndorsement). Skills can be managed via Chatter or via the Setup pages (Manage Users etc.). Interesting LinkedIn style concept.

Feed-Based Page Layouts – The Case Feed experience now available for Account, Contact, Lead, Opportunity and Custom Objects. The concept here is to split the record feed from the record details via tabs, enabling a more focused view with easy switching. For implementations where Chatter is used extensively at the record level, this provides a convenient efficiency.

Sharing Sets Enhancements (Pilot) – Sharing sets for HVPU now support indirect lookups to Account, Contact, Case, Service Contract, User and Custom Object records. Previously only directly related records to the Account and Contact were accessible, now a second level of related records can be accessed.

Launch Flows from Workflow (Flow Triggers) – A new Flow Trigger Workflow Action has been added which enables UI-less flows to be executed as per standard workflow actions (email, task etc.). This could provide a non-code alternative to Apex Triggers in some cases.

Orders / Place Order REST API – Basic standard objects and accompanying REST API to support the order management process (contract, orders and order products). There doesn’t appear to be any relationship (or automated synchronisation) with Opportunity or Quote. Definitely a feature intended for customisation.

API Limit App Quotas (Pilot) – Connected Apps can have 24 hour API quotas set for standard, bulk and streaming API consumption. I like where this is heading in terms of providing proactive API management tools – up to this point it’s all been reactive.

New User Interface for Monitoring Deployments – Clean new UI to view (and cancel) running deployments made via the metadata API.

Visualforce Remote Objects (Developer Preview) – Data proxies for standard objects that make SObject DML operations directly accessible in JavaScript code without the need for @RemoteAction methods. Another highly useful feature for developing in JavaScript which also avoids API limit consumption (as per JavaScript remoting).

Canvas App Accessible from the Salesforce1 Menu – Force.com Canvas apps can now be placed directly into the Salesforce1 navigation menu. I think we’ll see Canvas becoming a more important technology in the future as integration at the application level becomes more widespread. Where some degree of application blending is achievable, UI-level integration can be significantly less expensive to implement than at the data-level.

Independent Auto-Number Sequence for Unit Tests – It’s always puzzled me as to why the auto-number sequence increments in response to the creation of test data records (which are ultimately rolled-back). Surely, the unit test context should be better isolated? Anyway, with Spring ’14 we can now set the flag below to achieve this.

Develop->Apex Test Execution->Options->Independent Auto-Number Sequence

Usage Metrics (Pilot) – ISVs can now access the following metrics (via API only) per subscriber org:

Record count for custom objects
Access count for Visualforce pages (over the last 24 hours)

This feature requires an Environment Hub setup, with one org designated as the reporting org where the metrics are delivered. This long awaited feature will be well received by the ISV community.

View and Manage User Sessions – View and invalidate (end) user sessions as required. A useful administrator feature.

Mass Assign Permission Sets – At long last users can be assigned-to and removed-from Permission Sets en-masse. Previously this was a painful user-at-a-time process.

Analytics API Available in Apex – All the capabilities of the REST Analytics API are now available directly to Apex, enabling reports to run etc. (..Reports.ReportManager.runReport(myReportId)..).
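A minimal sketch of the Apex approach – the report DeveloperName is a hypothetical example:

//Locate and run a report synchronously, including detail rows.
Id myReportId = [SELECT Id FROM Report WHERE DeveloperName = 'My_Report' LIMIT 1].Id;
Reports.ReportResults results = Reports.ReportManager.runReport(myReportId, true);
System.debug(results.getReportMetadata().getName());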

Integration Architecture Patterns

As an architect I’m generally obsessive about three things; patterns, principles and practices. I could probably add to this list but I also prefer to keep things simple. This post is concerned with the first P, Patterns – in the integration architecture context. At what level should they be defined and applied? I tend to consider the logical and physical aspects of data integration flows independently. In the logical case, the focus should be on the definition of an end-to-end business process that spans multiple systems. There should be no technology constraint or perspective applied to the logical view. In the physical case, the logical view should be considered an input, and a technical view defined in full consideration of the following.

Frequency of integration (batch, near-real-time, real-time)
Bi-directional, versus uni-directional
Multi-lateral, versus bi-lateral or uni-lateral
Volumetrics
Security
Protocols and message formats
Reference data dependencies
Technical constraints (API limits model)
Existing enterprise integration technologies (middleware, ESB)
Future maintenance skill sets (technical versus administrator)

Each physical integration flow definition should not be entirely independent; instead groupings should be identified and robust integration patterns designed and documented. The solution components for each pattern would then be developed, tested and re-applied wherever possible. The schematic below provides a fictitious example of this approach.

Integration Patterns

Having a simple set of clearly defined patterns visible to the project team is key, and should be complemented by a project principle that new approaches to physical integration are by exception – nobody has discretion to be creative in this regard. Standardisation is good practice; integration is expensive in terms of technology, implementation time, run cost and maintenance.

Salesforce Platform Limits – Designing for Scale

A Salesforce instance is a constrained environment where limits exist in respect to capacity and execution. Examples of capacity limits are data storage, number of active users, number of custom objects/custom fields/picklists etc.; examples of execution limits are API calls per 24-hour period, SOQL queries executed within an Apex transaction, Viewstate size in Visualforce pages etc. In both the capacity limit and execution limit case it is imperative that the existence and implications of the constraints are factored into the solution design from the outset. Each and every constrained resource must be treated as a precious asset and consumed in an optimised manner, even on seemingly trivial implementation projects. From experience it is often the case that a Salesforce implementation grows (in terms of both use and breadth of functionality) at a rapid rate once it gains traction in an enterprise. If you’ve carelessly exhausted all the constrained resources in the first release, what happens next? Note, some soft limits can be increased by Salesforce on a discretionary or paid-for basis, however this doesn’t negate the need to make responsible design decisions and at the very least to highlight the possible additional cost associated with a particular approach. Hard limits do exist in key areas; the Spanning Relationships Limit, or cross-object reference limit as it is also referred to, is a strong example of this.

Designing for scale simply requires an intelligent consumption of such resources and appropriate solution design decisions in a limited number of areas. The proliferation of Apex and Visualforce related execution limits doesn’t necessarily impact the scalability of the implementation; the impact is isolated to the micro level. The selected limits listed below however apply at the org level (Salesforce instance) and can constrain the scalability of an implementation (in functional terms). This list is not exhaustive, for a complete picture refer to the Salesforce Limits Quick Reference Guide.

Limits Primarily Influenced by User License Model

Asynchronous Apex Method Executions :
This limit includes @future method calls, Batch Apex (start, execute and finish method invocations) and Scheduled Apex (execute method invocations). Future method calls made from Apex Triggers can be a risk in relation to this limit. For example, Apex Triggers which fire on record updates and make callouts via @future methods can cause scalability issues as data volumes grow. In this example it may become necessary to bulk process the modifications via Batch Apex, assuming a batch style of integration is acceptable. What if near real-time (NRT) is necessary?

The calculated limit is the higher of 250K or (200 * user license count), where the applicable licenses to this calculation are full Salesforce and Force.com App Subscription only.
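By way of a worked example, an org with 2,000 applicable licenses has a limit of max(250,000, 200 × 2,000) = 400,000 asynchronous method executions per 24-hour period; the 250K floor applies up to 1,250 such licenses.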

Total API Request Limit :
Enterprise Edition = 1,000 per Salesforce and Salesforce platform license, 200 for Force.com App subscription license
Unlimited/Performance Edition = 5,000 per Salesforce and Salesforce platform license, 200 per Force.com app subscription license

Note, sandboxes have a flat limit of 5M which can give a false impression of the limits applied in production.

All inbound API traffic counts towards this limit, including Outlook plug-ins, Data Loader etc. For implementations with limited Standard users this limit can be restrictive, and it is reasonably common for extension packs to be purchased to mitigate this. In all cases consumption must be optimised by batching updates and use of the Bulk API where possible.

Limits Primarily Influenced by Salesforce Edition

Workflow Time Triggers Per Hour :
Enterprise Edition = 500
Unlimited/Performance Edition = 1000

This limit can be an issue for implementations with high-volume transaction processing throughputs, where time-based workflow is employed to send reminder emails etc. If the hourly limit is exceeded, triggers are processed in the next hour and so on. This may cause issues if the actions are time critical.

Workflow Emails Per Day :
1,000 per standard Salesforce license, capped at 2 million.

Apex Emails Per Day:
1,000 in total. The maximum message count per send is limited per edition.
Enterprise Edition = 500
Unlimited/Performance Edition = 1000

An unlimited number of emails can be sent per day to Users by using the Messaging.SingleEmailMessage.setTargetObjectId() and Messaging.MassEmailMessage.setTargetObjectIds() methods. This includes customer and partner portal users and even high volume portal users.

This limit is critical to understand and to mitigate in a scalable solution design. In short, don’t use Apex to send email unless the recipient is a User. In portal cases use the User Id and not the Contact Id. Prefer Workflow-based email sending, as the limits are considerably higher, and perhaps use Apex script to set criteria picked up by a Workflow rule.
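A minimal sketch of the User-targeted approach – the target User Id is illustrative:

//Email a User (not a Contact) to stay outside the daily Apex email limit.
Id userId = UserInfo.getUserId(); //Example only - any active User Id, portal/community users included.
Messaging.SingleEmailMessage mail = new Messaging.SingleEmailMessage();
mail.setTargetObjectId(userId);
mail.setSaveAsActivity(false); //Mandatory when the target is a User.
mail.setSubject('Case updated');
mail.setPlainTextBody('Your case has been updated.');
Messaging.sendEmail(new Messaging.SingleEmailMessage[] { mail });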

Additional Limits to Consider

Batch Apex (5 pending or active)
Scheduled Jobs (100 scheduled)
Apex Script Characters (3M)
Dynamic Apex Describes
Cross Object References

From a best practice perspective a Platform Limits Reference document should be maintained for all Salesforce implementations, listing the applicable limits and related consumption. This approach surfaces the existence of the limits and should drive design principles such as using Workflow to send customer emails in preference to Apex script. Without an ordered approach where limit consumption is proactively tracked, it is highly likely that expensive refactoring exercises or multi-org strategies become necessary over time that could have been minimised, deferred or entirely avoided.