A quick recap of Salesforce Data Skew
As we saw in the previous article about data skew in Salesforce, certain configuration patterns appear again and again because they seem like good ideas. However, they increase the probability of performance issues arising in our Org. These patterns cause inefficient data distributions at the storage layer, a problem technically known as Data Skew. Data skew problems in Salesforce are hard to anticipate, and they can have such a dramatic impact on performance that Salesforce calls them “the silent killer”.
Let’s take a look at another of these apparently good ideas which can have a severe impact on performance. Remember that every time a record is inserted or updated, Salesforce must lock the related records. This locking ensures that data integrity is maintained when the transaction is committed to the database. If Salesforce did not lock these records and a related record was deleted in the middle of the save operation, you would face a referential integrity issue.
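To make the locking behaviour concrete, here is a minimal Apex sketch (the record names are purely illustrative): inserting a child record whose lookup points at a parent makes Salesforce lock that parent row until the transaction commits.

```apex
// Contact has a standard lookup to Account. Inserting the Contact
// makes Salesforce lock the parent Account row until the transaction
// commits, so the Account cannot be deleted mid-save and referential
// integrity is preserved.
Account acme = [SELECT Id FROM Account WHERE Name = 'Acme' LIMIT 1];

Contact c = new Contact(
    LastName  = 'Doe',
    AccountId = acme.Id  // this lookup is what triggers the lock on Acme
);
insert c;
// Another transaction trying to update or delete the Acme Account at
// this exact moment has to wait on the lock, and can eventually fail
// with an UNABLE_TO_LOCK_ROW error.
```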
Lookup Skew in Salesforce
So far so good.
Now let’s think about a situation where we have a standard object and we want to assign every single record of that object to a business-specific category. Most of us would think of creating a new Custom Object to represent the category, and then defining a Lookup field on the master object which references this Category Custom Object. This way, categorizing the records of the master object is easy, changing values is straightforward, and reporting is user-friendly and a piece of cake.
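As a minimal sketch, assuming a custom object called Category__c and a lookup field, also called Category__c, added to the Lead standard object (both names are hypothetical), the pattern looks like this:

```apex
// Hypothetical data model: a Category__c custom object, plus a
// Category__c lookup field defined on the Lead standard object.
Category__c warm = new Category__c(Name = 'Warm');
insert warm;

// Categorizing a Lead is now a single field assignment.
Lead l = new Lead(LastName = 'Doe', Company = 'Acme');
l.Category__c = warm.Id;  // lookup to the Warm category record
insert l;

// Re-categorizing is just as easy, and reports can group by category.
Category__c hot = [SELECT Id FROM Category__c WHERE Name = 'Hot' LIMIT 1];
l.Category__c = hot.Id;
update l;
```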
But unfortunately, this can create an undesirable performance issue called Lookup Skew.
Lookup Skew happens when a very large number of records are associated with a single record in a lookup object. Because you can place lookup fields on any object in Salesforce, lookup skew can create problems for any object within your organization.
As we learned at the beginning of this article, every time a record is inserted or updated in the master object, Salesforce must lock the target records selected for each lookup field. So, what happens when we try to load a large volume of data into the master object simultaneously? We will probably run into lock exceptions that cause failures as we insert or update records.
For example: let’s imagine our Leads standard object stores 200,000 records. We decide to classify all these Leads into 4 categories: Hot, Warm, Cold, and Frozen. These categories are themselves records of a custom object, with additional attributes besides their description. So now we want to associate every Lead with one of the 4 categories. After some time, we need to load a large new batch of Leads, all with the Warm category. Since every insert locks the “Warm” category record, inserts running in parallel will fail when they find the “Warm” record already locked by another insert operation.
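Here is a sketch of what that load looks like and how the failures typically surface, again assuming the hypothetical Category__c lookup on Lead. UNABLE_TO_LOCK_ROW is the status code Salesforce returns when a row lock cannot be acquired.

```apex
// Hypothetical bulk load: every new Lead points at the same 'Warm'
// category record, so every parallel batch must lock that one row.
Category__c warm = [SELECT Id FROM Category__c WHERE Name = 'Warm' LIMIT 1];

List<Lead> newLeads = new List<Lead>();
for (Integer i = 0; i < 200; i++) {
    newLeads.add(new Lead(
        LastName    = 'Imported ' + i,
        Company     = 'Acme',
        Category__c = warm.Id  // all records share one lookup target
    ));
}

// With allOrNone = false we get per-record results instead of one
// exception aborting the whole batch.
Database.SaveResult[] results = Database.insert(newLeads, false);
for (Database.SaveResult sr : results) {
    if (!sr.isSuccess()) {
        for (Database.Error err : sr.getErrors()) {
            if (err.getStatusCode() == StatusCode.UNABLE_TO_LOCK_ROW) {
                // Another batch holds the lock on the 'Warm' record.
                System.debug('Lock contention on Warm: ' + err.getMessage());
            }
        }
    }
}
```

Note that within a single Apex transaction these inserts run sequentially and do not contend with themselves; the lock exceptions appear when the load runs in parallel, for example Bulk API batches in parallel mode or several concurrent jobs all pointing at the same “Warm” record.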
Quality Clouds for Salesforce has more than 20 rules to check data model design and data quality consistency, providing Salesforce administrators with high-quality KPIs to uncover hidden issues and track the complexity growth happening in their Orgs. Automated code review for Salesforce is key to keeping a healthy Org.
Okay, show me that trial!
Start your trial and understand your org’s performance bottlenecks