Salesforce Environment Hub

Environment Hub was announced back in August 2012 and was initially made available as a Pilot feature of the Winter ’13 release. This post follows up (some 2 years later) on my initial interest in understanding the utility Environment Hub delivers in the context of environment/user management.

Environment Hub – What is it?
In simple terms Environment Hub is an org administration tool that enables multiple orgs (of any type) to be associated and accessed from a central location. Each connected org is termed a Hub Member, with an imposed constraint that a Hub Member org can be parented by only one Hub org. It is therefore imperative that Hub Members are added only where there is no contention over the appropriateness of the parent Hub. Commonality of target production org or packaging org is a good starting point for this consideration, as is the rule that client orgs should never be parented by an ISV or consultancy Hub org. The parent Hub org should always be the most-accessed org, as its credentials become those by which all org access is made.

For larger programmes of work Environment Hub provides a highly useful means to catalogue the org estate and to provide SSO between the constituent orgs. The latter enables reduced password maintenance, simplified access and centralised user administration: user accounts can be deactivated in one place and login history viewed in one place.

Environment Hub Tab

Environment Hub is enabled by Salesforce support, who will require confirmation to proceed with the change. From experience, the enablement process can take a few days.

Key Features

Connect Organisation
Connecting Hub Members to the Environment Hub occurs via the entry of an administrator username for the target org and subsequent OAuth authentication and authorisation flow. The User Permission “Connect Organisation to Environment Hub” is required.

Connect Organisation 1

Connect Organisation 2

Connect Organisation 3 - OAuth

Connected Organisation Detail Page

Once an org is connected to the Environment Hub, interesting detail such as the Edition, Org Status and Org Expiry date is revealed. It is also possible to add custom name and description attributes to the Hub Member, which I really like; each org in the estate should be justifiable and have a specific purpose, and here we can capture this plus an accountable contact etc. A very useful means of cataloguing and tracking the org estate.

The Company Detail page in the connected org will now show the Environment Hub Org Id value as below.

Company Information Page

In addition to the “User Added” origin, Hub Members are also auto-discovered using existing org-to-org relationships as below.

Auto-discovery types: sandbox to production, patch org to release org, Trialforce source org to Trialforce management org, release org to LMO.

Create Organisation
New development/test/demo orgs can be created directly within the Environment Hub – this replaces the functionality previously exposed via the Salesforce Partner Portal, for partners at least. I’m unclear how the types of org offered are affected by partnership status etc. or perhaps whether the Environment Hub itself is available only to partners.

Create Organisation

Create Organisation 2

Org types offered;
Development = PDE org.
Test/Demo = pick an Edition for a 30-day time-expired org.

Useful information on the distinction between different org types can be found here.

Single Sign-on
SSO can be enabled between each Hub Member and the Hub org. In implementation terms this means the Hub org is configured as an Identity Provider, with a Service Provider configured (automatically via the SSO enablement process) in both the Hub Member org and the Hub org. Both IdP-initiated (via the Environment Hub tab) and SP-initiated (via enablement of the Service Provider as a Login Page Authentication Service) SAML flows are supported. The latter means SSO can be enforced as the only authentication means, thereby switching off standard Salesforce authentication entirely. Each Hub Member org must have a My Domain configured for SSO to function.

Note, enabling SSO creates a Service Provider in the Hub org; default permissions are provided to the Standard User and System Administrator profiles only. It is therefore a requirement to ensure relevant permissions (Profile or Permission Set) are assigned.

Connected Organisation Detail Page SSO

Within the Identity Provider, the Hub Member specific Service Provider configuration is set with “Subject Type=User’s ID determined by Environment Hub”; this setting delegates the user mapping to the Environment Hub settings, defined as below.

3 types of SSO User Mapping;
Method 1. Mapped Users – 1-to-1 mapping of usernames, defined per-user.
Method 2. Federation Id – boolean state; if enabled, users are mapped on common Federation Identifier values.
Method 3. User Name Formula – formula expression; users are mapped via the formula result.

Where multiple mapping types are enabled the precedence order above applies. For SSO between a sandbox and production org, user mapping is implicit and not configured as above.

Customisation
Enabling Environment Hub adds the EnvironmentHubMember standard object, which is open to the declarative build model; custom fields, page layouts, validation rules, workflow rules, approvals etc. In addition, Apex Triggers can be defined on this object, use cases for which may include notifications relating to status changes etc.
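As a minimal sketch of the latter (assuming an illustrative custom Status__c field on EnvironmentHubMember and a placeholder notification address – substitute your own), a trigger could email an administrator when a member org’s status changes:

[sourcecode language="java"]
trigger EnvironmentHubMemberNotify on EnvironmentHubMember (after update) {
    List<Messaging.SingleEmailMessage> mails = new List<Messaging.SingleEmailMessage>();

    for (EnvironmentHubMember m : Trigger.new){
        EnvironmentHubMember oldM = (EnvironmentHubMember)Trigger.oldMap.get(m.Id);

        // Status__c is an illustrative custom field - substitute the field to be monitored.
        if (m.get('Status__c') != oldM.get('Status__c')){
            Messaging.SingleEmailMessage mail = new Messaging.SingleEmailMessage();
            mail.setToAddresses(new List<String>{ 'org.admin@example.com' });
            mail.setSubject('Hub Member status change: ' + m.Id);
            mail.setPlainTextBody('Status changed from ' + oldM.get('Status__c') + ' to ' + m.get('Status__c'));
            mails.add(mail);
        }
    }
    if (!mails.isEmpty()) Messaging.sendEmail(mails);
}
[/sourcecode]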

A second standard object, EnvironmentHubInvitation, is also added; however this object is inaccessible from the Setup menu (although it can be reached via /p/setup/layout/LayoutFieldList?type=EnvironmentHubInvitation&setupid=EnvironmentHubInvitationFields). I’m assuming this object to be either forward-looking or a legacy of an invitation-based connection model (as per Salesforce to Salesforce). Regardless, records do appear to be added to this object, the logic for which is unclear.

Related Permissions
Hub Org –
Manage Environment Hub
Environment Hub App and Tab access
EnvironmentHubMember standard object access permissions
Environment Hub Connected App
SSO Service Providers e.g. [00_____0000Cj__] Service Provider Access for SSO (by Profile or Permission Set)

Hub Member Org –
Connect Organisation to Environment Hub

Environment Hub in Practice
For consulting projects Environment Hub offers significant value in terms of management and tracking of the org estate and centralisation of user administration. In practice this requires all project contributors to access production (the Hub) as their primary org and SSO into secondary (Hub Member) orgs as required. In this model, production could be utilised for project collaboration, bug-tracking, project management etc., which is a common approach. The obvious downside is the requirement to license the project team in production – a big challenge on many projects where user licenses aren’t provisioned until a late stage or business use runs parallel to project activity. In such cases, project contributors could be provisioned with low-end user licenses on a temporary basis. Ideally we could do this with a Chatter Plus license, as users would only require Chatter, Custom Objects (10 or fewer) and Environment Hub access.

For ISV projects, the utility of Environment Hub relates more specifically to the ability to catalogue the multitude of environments required for development, test, i18n, packaging, release and patch purposes, not to mention Trialforce. Efficiency of access across this estate is also a key factor.

References
Environment Hub Online Help

Salesforce Winter 14 – Platform

Slightly depressing to be saying this in late August but Winter is almost upon us in the shape of the Winter ’14 (or v29.0) release. The Winter ’14 release notes are now available, find below a brief introduction to 10 Force.com highlights (in no order of significance).

1. Increased Sandbox Storage Limits. Developer sandboxes now enjoy a 200MB data storage capacity limit (and yes that is a massive increase from 10MB) enabling record storage to increase from 5k to 100k records. Configuration-only sandboxes double in capacity from 500MB to 1GB, and are renamed to Developer Pro. I understand that existing sandboxes will only increase in capacity post-refresh and will therefore require the production pod (NA1, EU3 etc.) to be upgraded to Winter ’14 before related existing sandboxes can be increased.

2. Performance Edition. Not exclusively a platform highlight but the introduction of Performance Edition is definitely worth a mention. It appears that Performance Edition will replace Unlimited Edition for new customers, in that UE will no longer be available to buy after November 2013. Performance Edition is essentially a bundled (or suite) offering comprised of core CRM, platform, Data.com, Work.com, Identity, Live Agent, Salesforce Knowledge plus at least one full-copy sandbox. The list price differential from UE appears to be $50 ($250 per user/per month increasing to $300), which seems reasonable in relative terms. I had hoped that “Performance” related to high-performance, very large data volume (VLDV) support etc. not this time apparently.

3. Developer Console Improvements. Some nice Visualforce related enhancements including a page preview feature within the console and the ability to inspect the viewstate (beta only). Additionally it is now possible to add and edit Static Resource files, a nice convenience particularly for JavaScript and CSS resources.

4. Apex Statement Limit Elimination. The script statement limit previously applied to an Apex context has now been replaced with an execution time limit: 10 seconds for synchronous contexts, 60 seconds for asynchronous. Time spent waiting for synchronous callouts to complete is not counted, nor is time taken by the database to service queries and DML. I’ve always considered the governor limits applied by the platform to be a positive encouragement to write efficient code respectful of well-defined, constant and predictable limits. The CPU time limit appears to be more variable in nature, i.e. application server load may impact context execution performance, and therefore could potentially be more problematic for developers to mitigate.
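The Limits class exposes the new limit; a quick diagnostic sketch (not a mitigation) to gauge where CPU time is being spent:

[sourcecode language="java"]
// log CPU consumption at key points to identify expensive code paths.
System.debug('CPU used: ' + Limits.getCpuTime() + 'ms of ' + Limits.getLimitCpuTime() + 'ms');

for (Integer i=0; i<10000; i++){
    // some expensive operation
}

System.debug('CPU used after loop: ' + Limits.getCpuTime() + 'ms');
[/sourcecode]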

Excellent blog post on this topic by Dan Appleman

5. Flexible API Limits (Pilot). In short the platform now supports up to a 50% spike in inbound API calls (SOAP or REST) versus the calculated allowance (a function of edition and selected license counts). This additional flexibility makes a great deal of sense given the growth of off-platform composite apps such as mobile, where API call utilisation can be inconsistent. This flexibility isn’t intended for continued use, more to prevent limit exceptions on an occasional basis.

6. Visualforce – HTML5 Development. A new HTML5-friendly component, some additional attributes, plus the ability to pass through attributes to the rendered HTML markup (key for JavaScript library interactions) have been added to Visualforce to enhance the HTML5 developer experience.

7. Visualforce – Server Side Viewstate (Pilot). There isn’t much in the way of detail available on this one; however it would seem that pages will be able to opt for a server-side viewstate, making the page weight considerably less. Quite how this shift to stateful server-side processing will be implemented is unclear; ideally the server-side state cache would be persisted somehow across controllers within a session. This would be extremely useful (and hardly an uncommon requirement), avoiding a custom object solution to persisting state across multiple pages that do not share a common controller.

8. ISVforce – Delete Managed Package Component (Pilot). ISVs can now delete obsolete custom fields and tabs from a released managed package. A subscribing org that upgrades to the new package version will not have deletions applied automatically, instead deleted components will list as Unused Components on the Package Details page. It is therefore up to the ISV to inform their customers. For the ISV this is a great improvement in terms of maintainability.

9. Environment Hub (GA). Provides a central view on all orgs across a multi-org estate, including related orgs such as sandboxes and patch-orgs (which are auto-discovered), but also completely unrelated orgs. This feature requires activation via Salesforce support and will be extremely useful on both large implementation programmes and to ISVs juggling multiple QA, localisation, development, packaging orgs etc. Note, the hub org requires a My Domain to be configured which may be related to single sign-on (SSO), an area of Environment Hub functionality unclear from the release notes.

10. Canvas Apps as Chatter Actions and Chatter Feed Items (Pilot). The UI integration options for standard Canvas apps (i.e. not Visualforce page hosted) are now extended to enable apps to be integrated directly into the Chatter feed as either feed items or publisher actions. In the former case the initial view is collapsed (thumbnail, link and description); clicking the link opens the full canvas app (still in the Chatter feed), enabling a larger display without it permanently dominating the feed. In both cases this is a useful extension to the integration potential for Canvas apps; it will be very interesting to see the real-world use cases for this over the coming months.

Any-org Design Considerations

The concept of any-org development is an interesting one. The strict definition, to my mind, is the development of a set of components (perhaps packaged) that are designed and coded specifically to install and function in any Salesforce org. This is typically an ISV concern, where testing and maintaining a single code base can be highly desirable over managing a base package plus multiple extension packages, or in the worst case multiple independent packages. Either way an ISV needs to maximise the addressable market for a product whilst minimising the ongoing effort to do so. The same drivers do not apply in the single-org case, where a consultancy and/or in-house team are delivering technical components to be installed into a known Salesforce org (or multi-org estate). In the single-org case it is common practice to see technical components designed and coded for the current state of the target org(s), with no consideration of how the org(s) may evolve over time. This can often result in situations where costly technical work is required simply to activate an optional product feature, or to provide user access in another locale. In such cases the situation can often be compounded by the fact that the original development team are no longer available.

In short, in my view some degree of future-proofing should be considered in designing for the single-org model, using the techniques applied by ISVs in the any-org model.

    Any-org Design Considerations

  1. Optional Features
    Examples: Person Accounts, Quotes.

    There are a multitude of optional product features which can be enabled directly in the Salesforce web application UI or via Salesforce support. In the majority of cases such feature activations irreversibly add new objects and fields to the Salesforce instance. From the perspective of keeping simple orgs uncluttered by objects related to unused features this makes perfect sense. From the perspective of designing for the any-org model, this approach poses a few challenges. The main challenge is that Apex code won’t compile where a static reference exists to an object (or field) that doesn’t exist in the org. There is no simple answer to this; instead a selective approach should be taken, accommodating optional features that may already be active (or could be activated in the future) and that have some impact on your code. The approach to achieving this for any-org Apex code basically involves replacing static references with Dynamic SOQL and Dynamic Apex (see coding techniques below).

  2. Multi-currency
    The default currency mode of a Salesforce org is single currency, and the majority stay this way. It is however common to have multi-currency and perhaps advanced currency management (ACM) activated in orgs where business operations are international. Activation of multi-currency often occurs once the Salesforce org has become established, perhaps in a single region. This can be problematic where technical customisations have been added that aren’t currency aware.

    In the any-org case, all Apex code should be multi-currency aware and use Dynamic SOQL to add the CurrencyIsoCode field to all object queries involving currency fields. Additionally, currency aware logic should include checks to ensure that related transactions are the same currency, and that custom analytics are presenting data in the corporate currency (default and therefore expected behaviour for the native reporting functions). Note, the behaviour of aggregate functions involving currency fields must also be handled.

  3. Editions Support
    A key design decision for ISVs is the set of Salesforce editions to be supported by their managed package. This has less relevance to the single-org model, unless the multi-org estate includes different editions.

    It is possible to group editions into two distinct groups;
    1. Group (or Team) Edition and Professional Edition
    2. Enterprise Edition and Unlimited Edition

    In the case of group 1, assume that standard objects such as Product2, Pricebook2, PricebookEntry and RecordType do not exist and ensure no static references to them exist in the code. The OrganizationType field on the Organization object tells us which edition the code is executing within.

    [sourcecode language="java"]
    private static Boolean teamOrProEdition;

    public static Boolean isTeamOrProEdition(){
        if (teamOrProEdition==null){
            // query the Organization object to determine the edition of the executing org.
            List<Organization> orgs = [select OrganizationType from Organization where Id=:UserInfo.getOrganizationId() limit 1];
            if (orgs.size()>0)
                teamOrProEdition=(orgs[0].OrganizationType=='Team Edition' || orgs[0].OrganizationType=='Professional Edition');
        }
        return teamOrProEdition;
    }
    [/sourcecode]

  4. Internationalisation
    Whether an international user base is anticipated or not, it is general software development best practice to externalise string literals into resource files. In the Salesforce context this means Custom Labels. A best practice here is to apply strict categorisation and a meaningful naming convention. Also ensure all literals are externalised, not just labels in the UI – trigger error messages, for example.

    Another consideration for i18n is the use of currency and date formatting helpers. Where UI components do not apply default formatting for an SObject field you need to handle this in code. An i18nHelper class which translates ISO locale and currency codes to date format strings and currency format strings plus symbols respectively can be very helpful.
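    A minimal sketch of the currency formatting aspect (the symbol map and two-decimal scale are illustrative assumptions):

    [sourcecode language="java"]
    public class i18nHelper {
        // illustrative subset - extend per supported currency.
        private static Map<String,String> isoToSymbol = new Map<String,String>{
            'USD' => '$', 'GBP' => '£', 'EUR' => '€'
        };

        public static String formatCurrency(Decimal amount, String isoCode){
            String symbol = isoToSymbol.containsKey(isoCode) ? isoToSymbol.get(isoCode) : isoCode+' ';
            // Decimal.format() applies the context user's locale-specific number formatting.
            return symbol + amount.setScale(2).format();
        }
    }
    [/sourcecode]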

    Useful abbreviations:
    i18n – internationalisation; development practice enabling support for localisation.
    l10n – localisation; act of localising an internationalised software product for a specific locale.

  5. Profile Permissions
    Visualforce pages are preprocessed for components directly bound to SObject fields; where the user profile does not have the required CRUD or FLS permissions, such fields are not displayed or are made read-only, depending on visibility state. This comes as a surprise to many developers who assume that user profile permissions are entirely ignored on Visualforce pages.

    reference: Enforcing_CRUD_and_FLS

    In the any-org model, where direct SObject field binding is used in a Visualforce page, this may require a manual check during initialisation to protect the functional integrity of the page. For example, a custom page with no fields displayed and no explanation is not a great user experience; instead the page should simply inform users they don’t have sufficient permissions, and they can then take this up with their administrators.

    [sourcecode language="java"]
    private Boolean hasRequiredFLS(){
        // rule 1: all custom fields must be accessible.
        // rule 2: check isUpdateable on all fields where inline editing is offered.
        Schema.DescribeFieldResult d;

        Map<String, Schema.SObjectField> siFieldNameToToken=Schema.SObjectType.SalesInvoice__c.fields.getMap();

        for (Schema.SObjectField f : siFieldNameToToken.values()){
            d = f.getDescribe();
            if (!d.isCustom()) continue;
            if (!d.isAccessible()) return false;
        }

        // inline editing requires both fields to be updateable.
        d = siFieldNameToToken.get('InvoiceDate__c').getDescribe();
        if (!d.isUpdateable()){
            this.isInlineEditable=false;
        } else {
            d = siFieldNameToToken.get('DueDate__c').getDescribe();
            this.isInlineEditable=d.isUpdateable();
        }
        return true;
    }
    [/sourcecode]

    Coding Techniques

  1. Dynamic SOQL
    Do not statically reference objects or fields that may not exist in the org. Instead compose Dynamic SOQL queries and execute them via Database.query(). With this approach, the required query can be built using flags which indicate the presence of optional feature fields such as RecordTypeId, CurrencyIsoCode etc. The Apex Language Reference provides good coverage of Dynamic SOQL. Be very careful to ensure that the composed string does not include user-supplied text input – this would open up a vulnerability to SOQL injection attacks.

    [sourcecode language="java"]
    private static Id standardPricebookId;

    public static Id getStandardPricebookId(){
        if (standardPricebookId==null){
            // the standard pricebook always exists, but may be inactive.
            String q='select Id, IsActive from Pricebook2 where IsStandard=true';
            List<SObject> pricebooks = Database.query(q);
            SObject p = pricebooks[0];

            if (!(Boolean)p.get('IsActive')){
                p.put('IsActive',true);
                update p;
            }
            standardPricebookId=(Id)p.get('Id');
        }
        return standardPricebookId;
    }

    public SalesInvoice__c retrieveSalesInvoice(String siId){
        //& Using dynamic Apex to retrieve fields from the fieldset to compose a SOQL query that returns all fields required by the view.
        String q='select Id,Name,OwnerId';
        q+=',TotalGross__c';

        for(Schema.FieldSetMember f : SObjectType.SalesInvoice__c.FieldSets.invoices__Additional_Information.getFields()){
            if (!q.contains(f.getFieldPath())) q+=','+f.getFieldPath();
        }

        // append optional feature fields only where the feature is active in the org.
        if (UserInfo.isMultiCurrencyOrganization()) q+=',CurrencyIsoCode';
        if (AppHelper.isPersonAccountsEnabled()) q+=',PersonEmail,PersonContactId';

        q+=',(select Id,Description__c,Quantity__c from SalesInvoiceLineItems__r order by CreatedDate asc)';
        q+=' from SalesInvoice__c';
        // escape the Id parameter - guards against SOQL injection.
        q+=' where Id=\''+String.escapeSingleQuotes(siId)+'\'';

        List<SalesInvoice__c> invoices = Database.query(q);
        return invoices.isEmpty() ? null : invoices[0];
    }
    [/sourcecode]

  2. Dynamic Apex
    Do not statically reference objects or fields that may not exist in the org. Instead use Dynamic Apex techniques such as global describes and field describes. Where a new SObject is required, use the newSObject() method as shown below; this is particularly useful for unit test data creation. The Apex Language Reference provides good coverage of Dynamic Apex – every developer should be familiar with this topic.

    [sourcecode language="java"]
    public static List<SObject> createPBE(Id pricebookId, List<SObject> products){
        SObject pbe;
        List<SObject> entries = new List<SObject>();

        // dynamic describe - the PricebookEntry object does not exist in all editions.
        Schema.SObjectType targetType = Schema.getGlobalDescribe().get('PricebookEntry');
        if (targetType==null) return null;

        for (SObject p : products){
            pbe = targetType.newSObject();
            pbe.put('Pricebook2Id',pricebookId);
            pbe.put('Product2Id',p.Id);
            pbe.put('UseStandardPrice',false);
            pbe.put('UnitPrice',100);
            pbe.put('IsActive',true);
            entries.add(pbe);
        }
        if (entries.size()>0) insert entries;
        return entries;
    }
    [/sourcecode]

  3. UserInfo Methods
    The UserInfo standard class provides some highly useful methods for any-org coding, such as isMultiCurrencyOrganization(), getDefaultCurrency(), getLocale() and getTimeZone(). The isMultiCurrencyOrganization() method will be frequently used to branch code specific to multi-currency orgs.

    [sourcecode language="java"]
    private static String corporateCurrencyIsoCode;

    public static String getCorporateCurrency(){
        if (corporateCurrencyIsoCode==null){
            // default to the context user's currency.
            corporateCurrencyIsoCode=UserInfo.getDefaultCurrency();

            if (UserInfo.isMultiCurrencyOrganization()){
                // the CurrencyType object exists only in multi-currency orgs, hence Dynamic SOQL.
                String q='select IsoCode, ConversionRate from CurrencyType where IsActive=true and IsCorporate=true';
                List<SObject> currencies = Database.query(q);
                if (currencies.size()>0)
                    corporateCurrencyIsoCode=(String)currencies[0].get('IsoCode');
            }
        }
        return corporateCurrencyIsoCode;
    }
    [/sourcecode]

    Challenges

  1. Unit Test Data
    In the any-org model the creation of unit test data can be a challenge due to the potential existence of mandatory custom fields and/or validation rules. To mitigate the former, Dynamic Apex can be used to identify mandatory fields and their data types such that test data can be added (via a factory pattern of some sort – see the sketch below). In the latter case there is no way to reliably detect a validation rule condition, and as such for ISVs it is a blessing that unit tests do not actually have to pass in a subscriber org (wrong as this may be in principle). In the single-org case we can improve on this (and we have to) by adding a global Validation Rule switch-off flag in an Org Behaviour Custom Setting (see previous post) – this approach is helpful in many areas, but for unit test data creation it can isolate test code from Validation Rules added post-deployment. There’s a trade-off here between protecting unit tests and the risk of using test data that may not adhere to the current set of Validation Rules.
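    A minimal sketch of the Dynamic Apex side of such a factory (the type handling is illustrative and far from exhaustive):

    [sourcecode language="java"]
    public static SObject buildTestRecord(String objectName){
        Schema.SObjectType t = Schema.getGlobalDescribe().get(objectName);
        if (t==null) return null;
        SObject record = t.newSObject();

        for (Schema.SObjectField f : t.getDescribe().fields.getMap().values()){
            Schema.DescribeFieldResult d = f.getDescribe();

            // mandatory = createable, not nillable and no default on create.
            if (!d.isCreateable() || d.isNillable() || d.isDefaultedOnCreate()) continue;

            if (d.getType()==Schema.DisplayType.String) record.put(f, 'test');
            else if (d.getType()==Schema.DisplayType.Double) record.put(f, 1.0);
            else if (d.getType()==Schema.DisplayType.Date) record.put(f, Date.today());
            // extend with further DisplayType handling as required.
        }
        return record;
    }
    [/sourcecode]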

  2. Unit Test Code Coverage
    The addition of multiple conditional code paths, i.e. branching, for the any-org case makes it challenging to achieve a high code coverage percentage in orgs which do not have the accommodated features activated. For example, unit tests executing in a single-currency org will not run code specific to multi-currency, and the code coverage drops accordingly. To mitigate this, consider adding OR conditions to IF branches which include unit test flags, perhaps via Test.isRunningTest(), to cover as much code as possible before leaving the branch (see the sketch below). During coding, always strive to absolutely minimise feature-specific code – this approach will help greatly in respect of unit test coverage.
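    For example (a sketch of the branching technique only):

    [sourcecode language="java"]
    // enter the multi-currency branch during unit tests, even in a single-currency org.
    if (UserInfo.isMultiCurrencyOrganization() || Test.isRunningTest()){
        // multi-currency specific logic - guard any truly currency-dependent work
        // with a further isMultiCurrencyOrganization() check.
    }
    [/sourcecode]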

  3. QA
    In the any-org model, it is imperative to test your code in an org with the accommodated features activated. This will require multiple QA orgs and can increase the overall testing overhead considerably. Also, factor in the lead time required to have features activated by Salesforce support, such as multi-currency and Person Accounts.

  4. Security
    Dynamic SOQL queries open up the possibility of SOQL injection attacks where user-supplied text values are concatenated into an executed SOQL query string. Always sanitise and escape data values where such code behaviour is necessary, as in the example below.
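    For example (the page parameter name is illustrative):

    [sourcecode language="java"]
    // escape user-supplied input before concatenation into a Dynamic SOQL string.
    String accountName = ApexPages.currentPage().getParameters().get('name');
    String q = 'select Id from Account where Name=\'' + String.escapeSingleQuotes(accountName) + '\'';
    List<Account> accounts = Database.query(q);
    [/sourcecode]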

  5. Governor Limits
    The any-org model is highly contingent upon the use of limited resources such as Apex Describes. As a best practice, employ a helper class pattern with cached values, as sketched below.
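    A minimal caching sketch:

    [sourcecode language="java"]
    public class DescribeHelper {
        // cache the global describe - describe calls are a limited resource per context.
        private static Map<String, Schema.SObjectType> globalDescribe;

        public static Schema.SObjectType getType(String objectName){
            if (globalDescribe==null) globalDescribe = Schema.getGlobalDescribe();
            return globalDescribe.get(objectName);
        }
    }
    [/sourcecode]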

    One Approach – Future Proofing Single-org Developments

    Optional Features – selective
    Multi-currency – yes
    Editions Support – no
    i18n – yes
    Unit Test Data – yes
    Profile Permissions – yes

    The list above is my default position on the approach to take for single-org developments; this can change significantly depending on the current state of the org in terms of configuration and customisation, plus the client’s perspective on the evolution of their Salesforce org and attitude toward investing in future-proofing/extensibility. In the single-org, consultancy project case it’s advisable to be completely open and let the client decide whether the additional X% cost is worth the value. The real point is that the conversation takes place and the client has the opportunity to make an informed decision.

Salesforce Query Optimisation

Understanding how the Salesforce Query Optimiser (SQO) works is fundamental to delivering performance at the data layer. Good practice in this regard, as part of an optimised physical data model implementation, is key to delivering efficient data access at the application layer.

In short, certain fields are indexed by default; custom indexes can be added via the External Id and unique field attributes, or by Salesforce support (composite indexes require the latter). The SQO utilises an index statistics table which records the distribution of data in each index. When processing a query, a pre-query is made against the statistics table to determine whether a candidate index is useful, in terms of selectivity thresholds etc.
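To illustrate selectivity (the row counts and thresholds involved are indicative only):

[sourcecode language="java"]
// likely selective - CreatedDate is indexed by default.
List<Account> recent = [select Id, Name from Account where CreatedDate = LAST_N_DAYS:7];

// likely non-selective - a leading wildcard cannot use an index,
// forcing a full scan as the object grows beyond the selectivity thresholds.
List<Account> scanned = [select Id from Account where Name like '%ltd%'];
[/sourcecode]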

The Query & Search Optimization Cheat Sheet provides a very useful reference in terms of the logic applied by the SQO, plus the fields indexed by default. Bookmark it as a guide for the next time you’re writing a complex SOQL query or designing a data model. In the latter case, performance of data retrieval is an often overlooked design consideration.

Salesforce Source Control and Release Process

This post outlines my preferred approach to managing parallel developments on the Salesforce platform in what I refer to as the Converged Programme Model. I readily acknowledge that there’s a multitude of ways to accomplish this, each with its own subjective merits. Before adopting a parallel work-stream model, take the time to understand the technical complexity, process overhead and time investment required. Of particular concern should be the team’s readiness for such a disruptive change. In my experience it’s better to plug any skills gaps upfront, be very prescriptive with process guidance, start small and build out incrementally – the risk otherwise is considerable. Typically resistance will come from individuals unaccustomed to a disciplined approach to software development/release process.

SCC

Objectives

  1. Concurrent Development. Support parallel programme workstreams converging into a shared production Salesforce environment.
  2. Automation. Deliver build automation – reducing the manual overhead required to deploy between environments.
  3. Gold Standard. Deliver a best practice approach – the initial design should scale up and down in response to changing programme conditions.
  4. Non-disruptive. Facilitate a staggered approach to adoption – enabling key benefits to be realised quickly without disrupting productivity.
  5. Minimise Release Overhead. Project branches should be regularly and incrementally updated from the master branch – reducing the inherent risk of divergence over time.

Tools

  1. GitHub
    – Get started with public repositories; upgrade to a paid plan and use private repositories for any source code you don’t want to share with the world at large.

    – Create an Organisation account to enable Team functionality.

    – Key benefit versus Subversion, CVS etc. is fast and efficient branch management; parallel workstreams are managed on branches with frequent merging.

    – It is possible, albeit time expensive, to implement a Git server within the enterprise. In my view the GitHub administration interface alone is worth the price.

  2. Jenkins
    – Deployed on a Windows EC2 instance with an Elastic IP. A free-usage-tier micro instance provides an ideal server host. Using a Linux host can be beneficial in regard to SSH authentication from Jenkins to GitHub, among other advantages; however, pick the operating system/platform the team you’re working with is most familiar with – a Linux host that only one team member can administer makes no sense.

    – On Windows, the Jenkins service should be configured to run as a specific user account (with least privileges assigned). This is required to generate the key files for SSH authentication.

    – Enable Jenkins security, particularly relevant if the host is open to the public web. Lock down the inbound IP ranges via the EC2 security group if possible.

    – Either store the Ant build files (build.xml, build.properties) in the Git repository or use an XCOPY post-build step to copy the files into the workspace from a file system location – as below. I prefer to keep the build files external to Git – there shouldn’t be any need to version manage such files – plus the build.properties file may contain passwords in plaintext.

    Jenkins Job Build Config

    – Install GitHub and Git Plugins
    These are required to build from a GitHub repository and enable build automation via Post-Receive Hooks. Under Jenkins System Configuration, configure “Manually manage hook URLs”; this requires your GitHub repository to have the hook set manually via Service Hooks under repository settings. Add a [Jenkins (GitHub plugin)] service hook like http://yourservername:8080/github-webhook/. The message sent on git-push to the remote repository will trigger any Jenkins job that builds from the updated branch and has the [Build when a change is pushed to GitHub] option set to true.

    – SSH Keys
    In order to use SSH from Jenkins to a private GitHub repository, SSH authentication is required, which uses a generated key pair. The public key is added as a Deploy Key in GitHub under repository settings. This works well, but if you want the same Jenkins user to access multiple repositories over SSH you have a problem, as each Deploy Key must be globally unique across all GitHub repositories. The answer to this is to use aliasing and an SSH config file (refer: http://www.onemogin.com/blog/2011/9/1/jenkins-and-github-multiple-private-projects.html); however this won’t work with Post-Receive Hooks, as the repository URL in the sent message won’t match the aliased repository URL in the Jenkins job – see the typically errant behaviour below from the Jenkins log. I can’t see a way around this at the time of writing this post.

    [sourcecode language="text"]
    FINE: Skipped GitHub Test - buildautomationtest repository because it doesn't have a matching repository.
    May 7, 2013 6:21:35 PM com.cloudbees.jenkins.GitHubWebHook
    FINE: Considering to poke GitHub Test - buildautomationtest repository
    [/sourcecode]

    – Chatter Plugin
    I’m a big fan of this plugin by Simon Fell. I tend to use a dedicated release manager user, e.g. release.manager@force365.com, standard user license capacity permitting, and perform all deployment tasks in this user context. This approach provides clarity on changes made by a deployment versus an actual user, and provides an easy way to be notified of failures etc.

    Key Principles

    1. Fit-for-purpose Org-set
      – Org-set is the terminology I use to describe the collection of orgs, and their roles, required to deliver a project safely to production.

      – One size does not fit all. Pick the minimum set of org roles required to deliver the project. Each org is a time-expensive overhead.

      – Sandbox types. In defining the org-set, factor in the availability of config-only and full-copy sandboxes. The latter should be reserved for cases where infrequent refresh is required. Project-level orgs don’t need to be part of the sandbox estate; Developer Edition orgs, or perhaps Partner Developer Edition orgs, can be employed instead. Full-copy sandboxes are incredibly expensive, valuable resources – use them only when absolutely necessary, and for as wide a set of roles as possible.

      – Connected orgs. For projects involving complex integrations, the complexity involved in creating a connected-org may influence the org-set design – there may be an argument to consolidate roles onto a single test org used for QA and UAT perhaps.

    2. Continuous Integration
      A best practice org-set design for non-trivial technical projects with multiple technical contributors should isolate developer activities into separate developer orgs, with a code-level integration org and Continuous Integration (CI) process in place.

    3. Project-level sandboxes are not refreshed
      Project-level orgs are all built from the Git repository. The Pre-production programme-level org must be refreshed from Production pre-deployment to ensure the deployment is verified against the current state.

    4. Commit to the remote project branch is a commitment that metadata is ready for system testing
      Build automation will deploy a project branch commit to the project QA org. In my experience it pays to be prescriptive in terms of development process.

    5. Commit to the remote master branch is a commitment that metadata is ready for integration testing
      Build automation will deploy a master branch commit to the programme INT (integration) org – this org exists to enable rigorous regression testing to be applied by all project workstreams. Post-deployment, suites of automated tests should be invoked and reports analysed by the test lead on each project.

    6. Test Automation
      It’s a significant resource overhead to execute manual test scripts for each regression test cycle, not to mention error-prone. For non-trivial projects, the investment must be made at an early stage in automated testing. Selenium is a good choice, but the tool utilised doesn’t really matter; what matters is that from the outset of the project the test team start to build up a comprehensive suite of automated test cases covering the key acceptance criteria defined for each user story. The suites then enable automation of regression testing during deployment phases – the same scripts underpin system testing and provide an often overlooked second stage to CI (unit tests + acceptance tests).

    7. GitHub branch design
      – A simple, clean branch design is desirable in the remote repository.

      – Long-lived branches for active project workstreams. Project branches may have sub-branches for each sprint or phase.

      – Long-lived branch for patches. Bug fixes are developed on local branches and committed to the remote support branch when ready for system testing.

      – Consider how important a clean Network Graph is; this is impacted by Git merge versus rebase decisions.

    8. Build automation challenges
      In a perfect world, all metadata component types would be covered by the Metadata API. This isn’t the case, so the nirvana of simply cloning an org configuration doesn’t yet exist. Instead a prescriptive process is required which spans manual configuration tasks, metadata deletion and build automation.

      – Proactive management of change
      A nominated release manager should proactively manage change at a programme-level, advise the project teams on release process and strive to minimise deployment conflicts through early involvement in all project developments. A change log should be maintained listing all changes being made. This could include technical component types (ApexClass, ApexTrigger etc.) being added, modified or deleted, but as a minimum must track configuration changes requiring manual action – enablement of features, field data type changes etc. – and required standing data (custom settings etc.). All changes should be mapped to a Change Type of manual or automated, and the list of orgs to which each change has been deployed tracked. This is clearly an overhead to the project, but without control it is very easy to lose track of the current state of the orgs in use and face significant time expense attempting to rationalise the situation through failing deployments. The release manager, or technical lead, should apply manual tasks to target orgs pre-emptively to minimise automated build failures.

      – Be prepared for build failures
      Automated builds will fail; this is a fact of life where build-dependencies on manual actions exist. Proactive management will only get you so far. Attempting to minimise this is more realistic than elimination.

      – Data
      Automation of data setup in a target org is possible via Ant and the Data Loader CLI, or other similar means. Alternatively a data file could be deployed as a document or static resource and then loaded from an Apex script (as per the ISV approach).

      – Unsupported metadata component types
      Automation is possible using Selenium scripts, which execute at the UI level and can simulate, for example, a user activating a setting. Such scripts can then be integrated into an automated build. This is highly possible, but takes time and expertise with both Ant and Selenium to accomplish.

    9. Programme-level Integration
      The Converged Programme Model involves project workstreams building in isolated org-sets, with frequent merge-from-master actions bringing across any changes to the production state. This approach should surface conflicts early, i.e. during development itself, but to be sure that shared component changes have not introduced functional inconsistencies, regression testing must be applied by each and every project workstream whenever any project does a release. This is a strong argument for test automation.

    10. UAT
      – Project-level or programme-level?
      In principle UAT should always be applied at the local project-level, as the commit to the programme-level integration org is an absolute commitment that the code is production-ready. In practice UAT may be two-tiered: initial user acceptance of new functionality, followed by some form of secondary acceptance testing in Pre-production, in parallel to deployment verification testing.

    11. Path-to-production Change Management
      As with any programme of work, fit-for-purpose Change Management processes should be in place. In this context that means a Change Advisory Board (CAB) should be in place to approve deployments; this must include informed and empowered representation across business and technical functions.

      – A Deployment Request Form (DRF), or similar, should be produced to document the change being released – the impact, pre- and post-deployment tasks, GitHub commit # etc. – plus the approval date or rejection reason. The DRF could be approved by a convened board or via email response.

      – The DRF process is absolutely required for the final deployment to Production, but may also be applied to the Pre-Production deployment, i.e. the commencement of the final step of the path-to-production release flow.

Salesforce Summer 13 – Platform

The Summer ’13 release notes are now available, find below a brief introduction to 10 Force.com highlights (in no order of significance).

1. New Setup interface. Clean new interface for the Setup area, with a top-level “Setup” link in the header region. Administration and Personal settings are now separate, the latter is accessed from a “My Settings” menu option – available on the user name titled menu in the header region. Streamlined and convenient.

2. Record Type assignment via Permission Set. Non-default custom record types can now be assigned via Permission Set, strengthening the concept that Permission Sets encapsulate application permissions.

3. (Universal Single Sign-on) Multiple SSO service provider configurations. A single org can now be configured as a service provider for multiple identity providers. Previously, federated SSO required all users to be authenticated by a single IdP – an unrealistic constraint where the org serves user populations in disparate IT environments.

4. Test methods must be defined in test classes. Anything that enforces good behaviour is a positive step in my mind. A scattered approach to defining test methods can be a maintenance nightmare.

5. Consolidated asynchronous execution limits. Batch Apex, Scheduled Apex and @future methods now share a single set of limits, calculated as 250k or 200 × the standard user count per 24-hour period, whichever is greater. This is really great news for orgs with low standard user counts that utilise @future calls (callouts in trigger scripts perhaps); such orgs have been prone to limit exceptions.

6. System.scheduleBatch(). A new Apex method enabling one-time scheduled invocation of a batch Apex class – a nice convenience, shown below.
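A minimal sketch (MyBatch is an illustrative batchable class):

[sourcecode language="java"]
public class MyBatch implements Database.Batchable<SObject> {
    public Database.QueryLocator start(Database.BatchableContext bc){
        return Database.getQueryLocator('select Id from Account');
    }
    public void execute(Database.BatchableContext bc, List<SObject> scope){
        // unit of work per batch scope.
    }
    public void finish(Database.BatchableContext bc){}
}

// schedule a one-off run 60 minutes from now; returns the scheduled job Id.
String jobId = System.scheduleBatch(new MyBatch(), 'One-off MyBatch run', 60);
[/sourcecode]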

7. Sandbox Templates. For full-copy sandbox refreshes, a template can be defined to specify the objects for which data is copied.

8. Sandbox refresh copies Custom Settings data. This is a long awaited improvement, applies to all sandbox types. Additionally the new Sandbox page UI shows the availability of the different sandbox types.

9. Domain Management. Force.com Sites (and Site.com sites) can now share domains – Custom URLs are defined between the Site and the Domain enabling a many-to-many relationship. For example it is now possible for a custom domain to host a mixture of sites each with a different Custom URL. A new page “Setup>Administer>Domain Management” enables centralised management of domains and custom URLs.

10. Approval Process deployment via Change Sets. Ideally this would also be possible with the Metadata API; that said, any way to automate deployment is great news. The manual configuration of Approval Processes can be a time-expensive process, and historically has been one of the issues detracting from the value of deployment automation.