Friday, 27 April 2018

Plugins in the sandbox, and why you don't get System.Security.Permissions.SecurityPermission

A relatively common error with plugins is "Request for the permission of type 'System.Security.Permissions.SecurityPermission, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089' failed". This is the general message that you get when you have a plugin registered in the sandbox, and it is trying to do something that is not permitted. You may get variations on this error, depending on exactly what the code is trying to do, for example:
  • FileIOPermission - access the file system
  • InteropPermission - most likely using an external assembly (either directly, or one that has been ILMerged into your plugin assembly)
  • System.Net.WebPermission - some form of network access, e.g. trying to access a network resource by IP address (the sandbox only allows access by DNS name)
  • SqlClientPermission - accessing SQL Server
The list can go on, and on. Rather than trying to list everything you can't do, it's a lot simpler to list what you can, which is broadly:
  • Execute code that doesn't try to access any local resources (file system, event log, threading etc)
  • Call the CRM IOrganizationService using the context passed to the plugin
  • Access remote web resources as long as you:
    • Use http or https
    • Use a domain name, not an IP address
    • Do not use the .Net classes for authentication
All of which is pretty restrictive, but is understandable given the sandbox is designed to protect the CRM server. To me, the most annoying one is the last, which makes it pretty much impossible to call other Microsoft web services directly, such as SharePoint or Reporting Services. That said, web calls that follow these rules are perfectly possible; there's a sketch below.
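
As an illustration of what is allowed, a sandbox plugin can still make outbound web calls, provided it follows the rules above. Here's a minimal sketch (the URL is a placeholder, and I've omitted the plugin registration and error handling):

  // Minimal sketch of a sandbox-safe web call from a plugin
  using System;
  using System.Net;
  using Microsoft.Xrm.Sdk;

  public class CallWebResourcePlugin : IPlugin
  {
      public void Execute(IServiceProvider serviceProvider)
      {
          using (var client = new WebClient())
          {
              // https (or http), and a DNS name rather than an IP address
              // Note: no .Net authentication classes - the sandbox won't allow them
              string response = client.DownloadString("https://example.com/api/resource");
              // ... process the response ...
          }
      }
  }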

So, what can you do about it? If you have CRM OnPremise, the simple (and only) solution is to register the assembly outside the sandbox, so that it can run in FullTrust - i.e. do whatever it wants (though still subject to the permissions of the CRM service account or asynchronous service account that it runs under).

And if you've got CRM Online, then the normal solution is to offload the processing to an environment that you have more control over. The most common option is to offload the processing to Azure, using the Azure Service Bus or Azure Event Hubs. The alternative, new to CRM 9, is to send the data to a WebHook, which can be hosted wherever you like.
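
To illustrate the WebHook option: CRM sends the execution context (as JSON) to whatever endpoint you register, so the receiver can be anything that accepts an HTTP POST. Here's a minimal self-hosted sketch using HttpListener - the URL prefix is a placeholder, and a real receiver would validate the caller and deserialise the body:

  using System;
  using System.IO;
  using System.Net;

  class WebHookReceiver
  {
      static void Main()
      {
          var listener = new HttpListener();
          listener.Prefixes.Add("https://+:443/crmwebhook/"); // placeholder prefix
          listener.Start();
          while (true)
          {
              HttpListenerContext context = listener.GetContext();
              using (var reader = new StreamReader(context.Request.InputStream))
              {
                  // CRM posts the RemoteExecutionContext serialised as JSON
                  string body = reader.ReadToEnd();
                  // ... deserialise and process the context here ...
              }
              context.Response.StatusCode = 200;
              context.Response.Close();
          }
      }
  }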

Saturday, 31 March 2018

What's in a name - CRM, Dynamics 365, CDS

Now that I've restarted posting on this blog, I'm struggling to name the technologies consistently. It used to just be CRM (or Microsoft CRM, or Dynamics CRM, or Microsoft Dynamics CRM), but now it's Dynamics 365, or Dynamics 365 for Customer Engagement. And from the platform perspective, it's Common Data Services (CDS).
To an extent, we're necessarily at the whim of Microsoft branding, which can change, but I feel we're close to an overall set of terms that can be consistently applied. As I see it, there are 3 distinct things that can be named:

The overall suite of technologies
This has been Dynamics, Dynamics 365, or Microsoft Business Applications. Of these, Dynamics 365 is definitely the leader, though there has been recent use of Microsoft Business Applications, so we may find that term becoming more popular. To me, the main difference is that Microsoft Business Applications can include technologies such as PowerApps and Flow, which started out under the Office 365 brand.


The applications that Microsoft deliver
We started with the separate Dynamics products (CRM, AX, GP, NAV etc). Several (but not all) were then included within Dynamics 365, along with some new applications (e.g. Talent). From the original CRM application and its implementations, we can now refer to the individual applications: Sales, Customer Service, Marketing, Field Service and Project Service Automation. Here the roadmap is a useful reference. Each application can usefully be referred to individually, but we also need to be able to refer to them collectively, and to distinguish them from the other Dynamics 365 applications (Finance and Operations, Retail, Talent, Business Central) that are not based on the CRM technology. Rather than using the term 'CRM', Microsoft are pushing the term 'Microsoft Dynamics 365 Customer Engagement'. I do mostly understand the Microsoft approach, but it is a lot longer than 'CRM', so I'm going to struggle to move off CRM. For more on this, see Jukka's post.

The platform - i.e. what underpins the applications
'Platform' itself can mean different things to different people, which we won't resolve here, but I'm talking about the technologies that started in CRM, and not just the Azure platform. Here we started with CRM, then the term xRM was introduced, but now (as of March 2018), I think that we should be referring to CDS (Common Data Services). The fact that Common Data Services for Applications and CRM are now the same platform is a huge step. And from now on, I think the platform that started out as CRM is better termed CDS. There are a few details to sort out still; there are 'Common Data Services for Applications' and 'Common Data Services for Analytics', and I reckon only the former truly relates to the original CRM platform, but I'm not certain on that yet.

Overall, I think the picture will soon be reasonably clear, with a few caveats. For the foreseeable future, I expect I'll still preface most presentations by saying that I'll use the terms 'CRM' and 'Dynamics 365' interchangeably, unless there is a reason to differentiate between them, in which case I'll try and explain the difference. Similarly, I'll probably be using 'CRM' and 'xRM' and 'CDS' interchangeably for a while.

Common Data Services Architecture in CDS 2.0

I struggled to think of a good title for this post, and I hope to change it to something more inspirational, as this is a very significant topic.
Microsoft have made several recent announcements in March 2018, but for me the most significant is the PowerApps Spring Update. This may seem a strange thing for me, a CRM MVP, to say, given how much there was on CRM in the Business Applications Spring ’18 Release Notes, but I think it makes sense once you realise that the PowerApps Update describes the new and future Common Data Services (CDS) architecture, and that in this architecture, much of CDS is the CRM platform (aka xRM).
Rather than CDS being a separate layer or component that then communicates with the CRM platform, CDS and CRM are a shared platform.
Strictly, it's not quite as simple as the last sentence makes out, especially as CDS now splits into Common Data Service for Applications and Common Data Service for Analytics (I'm hoping we'll soon get good acronyms to distinguish these), but for now it's worth emphasising that, if using Common Data Service for Applications, you are directly using the same platform components that CRM uses. This has several major implications (all of which are good to my mind):

  1. CDS for Apps can fully use the CRM platform features, such as workflow, business process flows, and calculated fields. This immediately makes CDS a hugely powerful platform, but also means there are no decisions to take on which platform to use, or differences to take into account, because they are the same platform
  2. There are no extra integration steps. Commissioning a CDS environment will give you a CRM organisation, and equally, commissioning a CRM organisation will give you a CDS environment. This is not a duplication of data or platforms, because again, they are the same platform
There's a lot to play with, and explore, but for now this seems a major step forward for the platform, and I feel I'll be writing a lot more about CDS (though I'm still not sure when I'll stop referring to CRM when describing the platform).
The one area that still needs to be confirmed, and which could have a major impact on adoption, is licensing, but I hope we'll get clarity on this soon.

Thursday, 29 March 2018

Concurrent or Consistent - or both

A lesser-known feature that CRM 2016 brought to us is support for optimistic concurrency in the web service API. This may not be as exciting as some features, but as it's something I find exciting, I thought I'd write about it.

Am I an optimist
So, what is it about? Concurrency control is used to ensure data remains consistent when multiple users are making concurrent modifications to the same data. The two main models are pessimistic concurrency and optimistic concurrency. The difference between the two can be illustrated by considering two users (Albert and Brenda), who are trying to update the same field (X) on the same record (Y). In each case the update is actually two steps (reading the existing record, then updating it), and Albert and Brenda try to do the steps in the following time sequence:
  1. Albert reads X from record Y (let's say the value is 30)
  2. Brenda reads record Y (while it's still 30)
  3. Albert updates record Y (Albert wants to add 20, so he updates X to 50)
  4. Brenda updates record Y (she wants to subtract 10, so subtracts 10 from the value (30) she read in step 2, so she updates X to 20) 
If we had no concurrency control, we would have started with 30, added 20, subtracted 10, and found that apparently 30 + 20 - 10 = 20. Arguably we have a concurrency model, which is called 'chaos', because we end up with inconsistent data.
To avoid chaos, we can use pessimistic concurrency control. With this, the sequence is:
    1. Albert reads X from record Y (when the value is 30), and the system locks record Y
    2. Brenda tries to read record Y, but Albert's lock blocks her read, so she sits waiting for a response
    3. Albert adds 20 to his value (30), and updates X to 50, then the system releases the lock on Y
    4. Brenda now gets her response, which is that X is now 50
    5. Brenda subtracts 10 from her value (50), and updates X to 40
    So, 30 + 20 - 10 = 40, and we have consistent data. So we're all happy now, and I can finish this post.
    Or maybe not. Brenda had to wait between steps 2 and 4. Maybe Albert is quick, but then again, maybe he isn't, or he's been distracted, or gone for a coffee. For this to be robust, locks would have to be placed whenever a record is read, and only released when the system knows that Albert is not still about to come back from his extended coffee break. In low latency client-server systems this can be managed reasonably well (and we can use different locks to distinguish between 'I'm just reading' and 'I'm reading and intending to update'), but with a web front-end like CRM, we have no such control. We've gained consistency, but at a huge cost in concurrency. This is pessimistic concurrency.
    Now for optimistic concurrency, which goes like this:
    1. Albert reads X from record Y (when the value is 30), and also reads a system-generated record version number (let's say it's version 1)
    2. Brenda reads record Y (while it's still 30), and the system-generated record version number (which is still version 1, as the record's not changed yet)
    3. Albert adds 20 to his value (30), and updates X to 50. The update is only permitted because Albert's version number (1) matches the current version number (1). The system updates the version number to 2
    4. Brenda subtracts 10 from her value (30), and tries to update X to 20. This update is not permitted, as Brenda's version number (1) does not match the current version number (2). So, Brenda will get an error
    5. Brenda now tries again, reading the current value (50) and version number (2), then subtracting 10, and this time the update is allowed
    The concurrency gain is that Albert, Brenda and the rest of the alphabetical users can read and update with no blocks, except when there is a conflict. The drawback is that the system will need to do something (even if it is just give an error message), when there is a conflict.
    What are the options
    Given this post is about a feature that was introduced in CRM 2016, what do you think happened before (and still happens now, unless you explicitly request optimistic concurrency)? If it's not optimistic concurrency, then it's either pessimistic or chaos. And it's not pessimistic locking: if Microsoft defaulted to that, CRM would grind to a locked halt whenever users tried to access the same records concurrently.

    Maybe I want to be a pessimist
    As chaos sounds bad, maybe you don't believe that CRM would grind to a locked halt, or you're happy that users don't need concurrent access, or you've been asked to prevent concurrent access to records (see note 1). So, can we apply pessimistic locking? The short answer is 'no', and most longer answers also end up at 'no'. Microsoft give us almost no control over locking within CRM (see note 2 for completeness), and definitely no means to hold locks beyond any one call. If you want to prolong the answer as much as you can, you might conceive a mechanism whereby users only get user-level update access to records, and have to assign the record to themselves before they can update it, but this doesn't actually work either, as a user may still be making the update based on a value they read earlier. And you can't make it user-level read access, as the user then wouldn't be able to see a record owned by someone else in order to assign it to themselves.

    OK, I'll be an optimist
    So, how do we use optimistic concurrency? First of all, not every entity is enabled for optimistic concurrency, but most are. This is controlled by the IsOptimisticConcurrencyEnabled property of the entity, and by default it is true for all out-of-box entities enabled for offline sync, and for all custom entities. You can check this property by querying the entity metadata (but not in the EntityMetadata.xlsx document in the SDK, despite what the SDK documentation says)
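
    If you want to check it in code, here's a minimal sketch using RetrieveEntityRequest (assuming service is an IOrganizationService):

    using Microsoft.Xrm.Sdk.Messages;
    using Microsoft.Xrm.Sdk.Metadata;

    var request = new RetrieveEntityRequest
    {
        LogicalName = "account",
        EntityFilters = EntityFilters.Entity // entity-level metadata is sufficient
    };
    var response = (RetrieveEntityResponse)service.Execute(request);
    bool? enabled = response.EntityMetadata.IsOptimisticConcurrencyEnabled;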

    Then, to use optimistic concurrency you need to do at least 2 things, and preferably 3:
    1. In the Entity instance that you are sending to the Update, ensure the RowVersion property is set to the RowVersion that you received when you read this record 
    2. In the UpdateRequest, set the ConcurrencyBehavior to IfRowVersionMatches
    3. Handle any exceptions. If there is a row version conflict (as per my optimistic scenario above), then you get a ConcurrencyVersionMismatch exception. 
    For a code example, see the SDK
    I've described this for an Update request, and you can also use it for a Delete request, and I hope you'll understand why it doesn't apply to a Create request.
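
    Putting the three steps together, here's a minimal sketch (assuming service is an IOrganizationService, and that retrievedAccount was read earlier so its RowVersion is populated; the entity and attribute are just examples):

    using System.ServiceModel;
    using Microsoft.Xrm.Sdk;
    using Microsoft.Xrm.Sdk.Messages;

    // 1. Carry the RowVersion from the earlier read across to the update
    var update = new Entity("account", retrievedAccount.Id);
    update["creditlimit"] = new Money(50000);
    update.RowVersion = retrievedAccount.RowVersion;

    // 2. Ask the platform to check the version on update
    var request = new UpdateRequest
    {
        Target = update,
        ConcurrencyBehavior = ConcurrencyBehavior.IfRowVersionMatches
    };

    // 3. Handle a version conflict
    try
    {
        service.Execute(request);
    }
    catch (FaultException<OrganizationServiceFault> ex)
    {
        // A ConcurrencyVersionMismatch fault means the record changed since we
        // read it - re-read the record and retry, or surface the conflict
    }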

    One word of warning; I believe that some entities fail when using optimistic concurrency - this seems to be the metadata-related entities (e.g. webresource or savedquery). I suspect this is because the metadata-related internals use a different internal (SQL-level) concurrency mechanism from most other entities.

    How much does it matter
    I've left this till last, otherwise you may not have read the rest of the post, as it often doesn't matter. Consistency issues are most relevant if there's a long time between a read and the corresponding update. The classic example is offline usage (hence why it's enabled for out-of-box entities enabled for offline sync). I also see it as relevant for some bulk operations; for example we do a lot of bulk operations with SSIS, and for performance reasons, there's often a noticeable time gap between reads and writes in an SSIS data flow.

    Notes

    1. During CRM implementations, if asked 'Can we do X in CRM?', I very rarely just say no, and I'm more likely to say no for reasons other than purely technical ones. However, when I've been asked to prevent concurrent access to records, then this is a rare case when I go for the short answer of 'no'
    2. We can get a little bit of control over locking within a synchronous plugin, as this runs within the CRM transaction. This is the basis of the most robust CRM-based autonumber implementations (there's a sketch of the pattern after these notes). However, the lock can't be held outside of the platform operation
    3. My examples have concentrated on updating a single field, but any talk of locking or row versions operates at a record level. If Albert and Brenda were changing different fields, then we may not have a consistency issue to address. However, for practical reasons, any system applies locks and row versioning at a record level, not at a field level. Also, even if the updates are to different fields, it is possible that the change each user makes is dependent on other fields that may have changed, so with optimistic concurrency we do get a ConcurrencyVersionMismatch if any field has changed
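
    To expand on note 2, the autonumber pattern relies on updating a 'counter' record early in a synchronous plugin; the exclusive lock taken by that update is held until the CRM transaction commits, serialising concurrent executions. A sketch of the idea (the new_counter entity and its attributes are hypothetical, and service is the IOrganizationService obtained from the plugin context):

    using System;
    using Microsoft.Xrm.Sdk;
    using Microsoft.Xrm.Sdk.Query;

    // Inside a synchronous, pre-operation plugin, i.e. within the CRM transaction.
    // Take an exclusive lock on the counter record - any concurrent execution
    // will block here until this transaction commits or rolls back
    var lockUpdate = new Entity("new_counter", counterId);
    lockUpdate["new_lock"] = DateTime.UtcNow.Ticks.ToString();
    service.Update(lockUpdate);

    // Now it's safe to read and increment the counter
    var counter = service.Retrieve("new_counter", counterId, new ColumnSet("new_value"));
    int next = counter.GetAttributeValue<int>("new_value") + 1;

    var counterUpdate = new Entity("new_counter", counterId);
    counterUpdate["new_value"] = next;
    service.Update(counterUpdate);
    // next is now a number that no concurrent execution can also have been given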


    Friday, 27 June 2014

    Plugin pre-stages - some subtleties

    The CRM SDK describes the main differences in plugin stages here. However, there are some additional differences between the pre-validation and pre-operation stages that are not documented.

    Compound Operations
    The CRM SDK includes some compound operations that affect more than one entity. One example is the QualifyLead message, which can update (or create) the lead, contact, account and opportunity entities. With compound operations, the pre-validation event fires only once, on the original message (QualifyLead in this case) whereas the pre-operation event fires for each operation.
    You do not get the pre-validation event for the individual operations. A key consequence of this is that if, for example, you register a plugin on pre-validation of Create for the account entity, it will not fire if an account is created via QualifyLead. However, a plugin on the pre-operation of Create for the account entity will fire if an account is created via QualifyLead.
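
    If you need to know, within a pre-operation plugin, whether the operation came from a compound message, you can walk up the ParentContext chain. A minimal sketch (for a pre-operation Create plugin on account):

    using Microsoft.Xrm.Sdk;

    var context = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext));

    // Walk up the pipeline to find the originating message
    bool fromQualifyLead = false;
    for (var parent = context.ParentContext; parent != null; parent = parent.ParentContext)
    {
        if (parent.MessageName == "QualifyLead")
        {
            fromQualifyLead = true; // the account is being created as part of lead qualification
            break;
        }
    }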

    Activities and Activity Parties
    I've posted about this before, however it's worth including it in this context. When you create an activity, there will be an operation for the main activity entity, and separate operations to create activityparty records for any attribute of type partylist (e.g. the sender or recipient). The data for the activityparty appears to be evaluated within the overall validation - i.e. before the pre-operation stage. The key consequence is that any changes made to the Target InputParameter that would affect an activityparty will only be picked up if made in the pre-validation stage for the activity entity.
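
    As a sketch of what this means in practice, here's how a pre-validation Create plugin on the email entity could change the recipients (contactId is a placeholder, and context is the IPluginExecutionContext; the same code in pre-operation would be too late to affect the activityparty records):

    using Microsoft.Xrm.Sdk;

    var target = (Entity)context.InputParameters["Target"];

    // Build a replacement 'to' recipient as an activityparty
    var party = new Entity("activityparty");
    party["partyid"] = new EntityReference("contact", contactId);
    target["to"] = new EntityCollection(new[] { party });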

    Monday, 7 April 2014

    Controlling Duplicate Detection

    The CRM SDK messages CreateRequest and UpdateRequest support a configuration parameter "SuppressDuplicateDetection" that provides control over whether duplicate detection rules will be applied - see http://msdn.microsoft.com/en-us/library/hh210213(v=crm.6).aspx. However, this parameter is not available through other programmatic means of creating or updating records (such as the REST endpoint).

    To work around this, I created a plugin that sets the "SuppressDuplicateDetection" parameter based on the value of a boolean attribute that can be included in the Entity instance that is created or updated.
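
    In outline, the plugin looks something like this (a simplified sketch of the approach; the new_suppressduplicatedetection attribute name is illustrative, and the plugin should be registered on the pre-validation stage of Create and Update):

    using System;
    using Microsoft.Xrm.Sdk;

    public class SuppressDuplicateDetectionPlugin : IPlugin
    {
        public void Execute(IServiceProvider serviceProvider)
        {
            var context = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext));
            var target = context.InputParameters["Target"] as Entity;

            if (target != null && target.Contains("new_suppressduplicatedetection"))
            {
                // Copy the attribute value into the message's input parameter...
                context.InputParameters["SuppressDuplicateDetection"] =
                    target.GetAttributeValue<bool>("new_suppressduplicatedetection");
                // ...and remove it, so it doesn't get written to the record
                target.Attributes.Remove("new_suppressduplicatedetection");
            }
        }
    }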

    I've posted the source code to the MSDN Code Gallery here

    I created this because I had a need to apply duplicate detection rules to entities created via the REST endpoint in CRM 2011.

    It may be that this plugin could also be used as a way to revert the CRM 2013 behaviour back to that of CRM 2011, to allow duplicate detection rules to fire on CRM forms. However, I've yet to test this fully; if anybody wants to test it, feel free to do so and make comments on this post. Otherwise, I'll probably update this post if I find anything useful with the CRM 2013 interface.

    Friday, 13 December 2013

    Crm 2013 – Script errors after upgrading an ex-Crm 4.0 organisation


    After a recent upgrade to Crm 2013 of an organisation that had been a Crm 4.0 organisation, there were client script errors when navigating to the Case or Queue entities. The underlying cause was some SiteMap entries that referenced Crm 4.0 urls; these were being redirected to new urls, but seemed to be missing some elements from the query string.
    The SiteMap entries with issues were:

    <SubArea Id="nav_cases" Entity="incident" DescriptionResourceId="Cases_SubArea_Description" Url="/CS/home_cases.aspx" />
    <SubArea Id="nav_queues" Entity="queue" Url="/workplace/home_workplace.aspx" DescriptionResourceId="Queues_SubArea_Description">
      <Privilege Entity="activitypointer" Privilege="Read" />
    </SubArea>

    The fix is to replace them with the following (which come from a default SiteMap in a new Crm 2013 organisation, though I’ve stripped out the GetStarted attributes):

    <SubArea Id="nav_cases" DescriptionResourceId="Cases_SubArea_Description" Entity="incident" />
    <SubArea Id="nav_queues" ResourceId="Homepage_Queues" DescriptionResourceId="Queues_SubArea_Description" Icon="/_imgs/ico_18_2020.gif" Url="/_root/homepage.aspx?etc=2029" >
     <Privilege Entity="queue" Privilege="Read" />
    </SubArea>

    These are the only entries I’ve found so far with problems. I think the entry for Queues is a one-off, but the entry for cases is notable in that the original (Crm 4.0) SiteMap entry included a Url attribute, whereas entries for most other entities did not include the Url attribute. So, it’s possible that other entries that include both the Entity and Url attribute could have the same issue.
    Although annoying at the time, I don’t see this as a major issue, as reviewing the SiteMap will be one of the standard tasks we do for any upgrade to Crm 2013. This is due to the change in navigation layout, which means the overall navigation structure deserves a rethink to make best use of the new layout. When doing this, we find it is best to start with a new, clean SiteMap and edit this into a customer-specific structure for Crm 2013, rather than trying to edit an existing structure. It’s also worth noting that a few of the default permissions have changed (spot the difference above for the privilege to see the Queues SubArea), and it’s worth paying attention to these at upgrade time for future consistency.


    Monday, 9 December 2013

    Crm 2013 – Upgrading from an ex-Crm 1.2 organisation


    This post should only affect a small fraction of Crm 2013 users, but if you do have a CRM organisation that was first created in Crm 1.2, and upgraded through the versions to Crm 2013, you may get an “unexpected error” message when opening account, contact or lead records that had been created in Crm 1.2 (I told you this wouldn’t affect many people, but we do still have, and interact with, customers from Crm 1.2 days).
    The cause of this is the ‘merged’ attribute. Record merging (for accounts, contacts and leads) was introduced in Crm 3.0, and a ‘merged’ attribute was created to track if a record had been merged. For all records created in Crm 3.0 and higher, this attribute was set to false, but for records created in Crm 1.2, the attribute was null.

    This causes a problem in the RTM build of Crm 2013. If you enable tracing, you will see an error like the following:
    Crm Exception: Message: An unexpected error occurred., ErrorCode: -2147220970, InnerException: System.NullReferenceException: Object reference not set to an instance of an object.
       at Microsoft.Crm.BusinessEntities.RecordDisabledMergedNotificationGenerator.BusinessLogic(IBusinessEntity entity, IOrganizationContext context, NotificationAdder notificationAdder)

    So, that’s the problem. There are three ways to fix it:
    • If you’ve already upgraded, then the quick, but unsupported, fix is via direct SQL statements that set the merged attribute to false (see below)
    • If you have not yet upgraded, you can merge each affected record in turn with a dummy record, which will set the merged attribute.
    • You can automate the merge process programmatically by submitting a merge request for each record, and passing appropriate parameters (see the sketch below). I’m not sure if this will work after the upgrade, or only before, as I’ve not tried it
    Unfortunately (but unsurprisingly), the merged attribute is not ValidForUpdate, so you can’t use a simple, supported update request to set the attribute
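
    For the programmatic option, the sketch below submits a MergeRequest for an affected record against a dummy subordinate record (the entity, the ids and the service variable are placeholders; the merge sets the merged attribute, and the dummy record gets deactivated):

    using Microsoft.Xrm.Sdk;
    using Microsoft.Crm.Sdk.Messages;

    var merge = new MergeRequest
    {
        Target = new EntityReference("contact", affectedId), // the ex-Crm 1.2 record
        SubordinateId = dummyId,                             // a throwaway record created for the merge
        UpdateContent = new Entity("contact"),               // no attribute changes to apply
        PerformParentingChecks = false
    };
    service.Execute(merge);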

    The SQL statements for an unsupported fix:

    update contact set merged = 0 where merged is null
    update account set merged = 0 where merged is null
    update lead set merged = 0 where merged is null

    Friday, 6 December 2013

    Crm 2013 – No more ExtensionBase tables


    So, Dynamics Crm 2013 is here, and there’s lots to say about the new UI, and the new features. But, many others are talking about these, so I thought I’d start with what may seem to be an obscure technical change, but it’s one that I welcome, and which is a significant contribution to the stability and performance of Crm 2013.

    With Crm 3.0, Microsoft changed the underlying table structure so that any customisable entity was split across 2 tables; a base table that contained all system attributes, and an extensionbase table for custom attributes. For example, there was an accountbase and an accountextensionbase table. Each table used the entity’s key as the primary key, and the extensionbase table also had a foreign key constraint from its primary key field to the primary key in the base table. Each entity had a SQL view that joined the data from these tables to make them appear as one table to the platform. As I understand it, the main reason for this design was to allow for more custom attributes, as SQL Server had a row-size limit of 8060 bytes, and some of the system attributes were already using ~6000 bytes.

    The same table design was retained in Crm 4.0 and Crm 2011. However, Crm 2011 introduced a significant change to the plugin execution pipeline, which allowed custom plugins to execute within the original SQL transaction. This was a very welcome change that provided greater extensibility. However it did mean that the duration of SQL transactions could be extended, which means that SQL locks may be held for longer, which means potentially more locking contention between transactions. In very occasional circumstances, a combination of certain plugin patterns, the design of the base and extensionbase tables, and heavy concurrent use, could give rise to deadlocks (see below for an example).

    Given this, I’m very glad that the product team retained the facility to have plugins execute within the original transaction (then again, it would be hard to take this facility away from us). It wouldn’t be realistic to ask customers to reduce concurrent usage of CRM, so the only way to reduce the potential deadlock issue was to address the design of the base and extensionbase tables. From my investigations (sorry, but I actually quite like investigating SQL locking behaviour), a substantial improvement could have been made by retaining the table design but modifying the SQL view; however, a greater improvement comes from combining the tables into one. An added advantage of this change is that the performance of most data update operations is also improved.
    Deadlock example

    Here are two SQL statements generated by CRM:
    select
    'new_entity0'.new_entityId as 'new_entityid'
    , 'new_entity0'.OwningBusinessUnit as 'owningbusinessunit'
    , 'new_entity0'.OwnerId as 'ownerid'
    , 'new_entity0'.OwnerIdType as 'owneridtype'
    from new_entity as 'new_entity0'
    where ('new_entity0'.new_entityId = @new_entityId0)  

    And

    update [new_entityExtensionBase]
    set [new_attribute]=@attribute0
    where ([new_entityId] = @new_entityId1)
     
    These were deadlocked, with the SELECT statement being the deadlock victim. The locks that caused the deadlock were:
    • The SELECT statement had a shared lock on the new_entityExtensionBase table, and was requesting a shared lock on new_entityBase table
    • The UPDATE statement had an update lock on the new_entityBase table, and was requesting an update lock on new_entityExtensionBase table
    The likely reason for this locking behaviour was that:
    • Although the SELECT statement was requesting fields from the new_entityBase table, it had obtained a lock on the new_entityExtensionBase table to perform the join in the new_entity view
    • The UPDATE statement that updates a custom attribute (new_attribute) on the new_entity entity would have been the second statement of 2 in the transaction. The first statement would modify system fields (e.g. modifiedon) in the new_entityBase table, and hence place an exclusive lock on a row in the new_entityBase table, and the second statement is the one above, which is attempting to update the new_entityExtensionBase table
    Both operations needed to access both tables, and if you’re very unlucky, then the two operations, working on the same record, may overlap in time, and cause a deadlock.

    The new design in Crm 2013 solves this in three ways:
    1. With just the one entity table, the SELECT statement only needs one lock, and does not need to obtain one lock, then request another
    2. Only one UPDATE statement is required in the transaction, so locks are only required on the one table and they can be requested together, as they would be part of just one statement
    3. Both operations will complete more quickly, reducing the time for which the locks are held
    Of these 3 improvements, either no. 1 or 2 would have been sufficient to prevent deadlocks in this example, but it is gratifying that both improvements have been made. The third improvement would not necessarily prevent deadlocks, but will reduce their probability by reducing overall lock contention, and will also provide a performance improvement.

    Wednesday, 12 June 2013

    SQL Setup error "Registry properties are not valid under this context"

    When using new versions of software (in this case SQL Server 2012 service pack 1), there's always the chance of a new, random error. In this case it was "Registry properties are not valid under this context" when attempting to add a component (the Full-text service) to an existing installation.

    It seems like the issue comes down to the sequence of installing updates, both to the existing installation, and to the setup program. The specific scenario was:
    • The initial install of SQL Server had been done directly from the slipstreamed SQL Server 2012 service pack 1 setup. At this time, the server was not connected to the internet, so no additional updates were applied either to the installed components, or the setup program
    • When attempting to add the Full-text service, the server was connected to the internet, and had the option set to install updates to other products via Microsoft Update. When I started the setup (which used exactly the same initial source), it downloaded an updated setup program, and also found a 145 MB update rollup that would also be installed
    • Part way through the setup steps, setup failed with the message "Registry properties are not valid under this context"
    The problem seemed to be that the setup program was at a more recent update level than the currently installed components. Even though the setup program had identified updates to apply to the current components, it had not yet applied them before crashing out with the error.

    The solution was to go to Microsoft Update and install the SQL Update Rollup, then go back and run SQL Setup to add the extra component. Interestingly, SQL Setup still reported that it had found this 145 MB rollup to apply, even though it was already installed.