
SQL Server Memory-Optimized Tables in Production – Notes from the Field

Firstly, I have a confession.

I am Sharon Rimer – a DBA using SQL Server 2014 with Memory-Optimized tables in production. Yes, in production.

Now seriously, how many DBAs do you know who can say that?

In the past year, we upgraded our SQL Server from 2012 to 2014. The main reason we did this was, of course, to use the "In-Memory" feature.

I am developing the new system for the Israeli Customs Authority called "Global-Gate". One of our sub-systems has to run a lot of validation checks on import and export declarations.

The validation process has to take no longer than 3 seconds per declaration at any given moment, and there are about 10,000 declarations per day.

We can divide the declarations into two main groups.

  • Personal import/export declarations – an individual who wants to buy or sell an item on the web (like a personal import via eBay, Amazon, etc.) or by any other means.

  • Commercial import/export – declarations submitted to customs by courier companies.

There are 600 validation checks (for now) on each declaration. Each validation is implemented as a single Stored Procedure, and the whole process is managed by .NET code.

When a declaration comes in to be processed, we check what type of declaration it is. The type dictates which validations are necessary for the process to succeed. The list of validations is sent back to the .NET code as a forest (several trees).

This means that a validation runs only if its parent validation completed successfully. On average, 350 different validations run per declaration.

Each declaration received in our system goes through the following steps:

  1. First, collect the right data for the individual declaration, such as customer info, item info, regulations for the particular item, etc. All the data is collected into Memory-Optimized Tables (MOTs); there are about 25 MOTs.

  2. The data is captured from both disk-based tables and MOTs.

  3. The 350 validations are divided among 10 threads (not equally), all using the 25 MOTs plus the disk-based tables simultaneously. Each validation Stored Procedure that "fails" inserts rows into a MOT called "Exception".

  4. At the end of the process, the "Exception" MOT is copied into a base table.

  5. The final step is executed asynchronously by Service Broker and clears the unnecessary data from the MOTs.

In fact, the data that we need for this process stays in the MOTs for only a few seconds before it is cleaned up.
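
To make this concrete, here is a minimal sketch of what such a short-lived staging MOT can look like. The table and column names are illustrative, not our actual schema:

    CREATE TABLE dbo.Exception_MOT
    (
        DeclarationId INT           NOT NULL,
        ValidationId  INT           NOT NULL,
        ErrorMessage  NVARCHAR(500) NULL,

        -- Hash index; BUCKET_COUNT is sized for the expected number of rows
        INDEX IX_Exception NONCLUSTERED HASH (DeclarationId, ValidationId)
            WITH (BUCKET_COUNT = 1048576)
    )
    WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);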

When a courier company ships an item, whether by land, sea or air, it usually works the same way: each item goes on a single declaration, but there are thousands of such "single" declarations.

The system receives thousands of declarations in seconds. That causes a massive load.

Remember what we said earlier –

The validation process has to take no longer than 3 seconds per declaration at any given moment.

This was the main reason for upgrading to SQL Server 2014, right? MOT – NO locks?

Optimistic Concurrency Control for Memory-Optimized Tables

Conflicts and failures are rare, and transactions on memory-optimized tables assume there are no conflicts with concurrent transactions, so operations succeed. Transactions do not take locks or latches on memory-optimized tables to guarantee transaction isolation.

Writers do not block readers. Writers do not block writers. Instead, transactions proceed under the (optimistic) assumption that there will be no conflicts with other transactions. Not using locks and latches and not waiting for other transactions to finish processing the same rows improves performance.

This is not 100% true in SQL Server 2014. In SQL Server 2016? – Maybe. I will explain later in this article.

MOT in depth:

There are several transaction types, called "isolation levels", in the database:

  1. Read Uncommitted.

  2. Read Committed.

  3. Snapshot.

  4. Repeatable read.

  5. Serializable.

For transactions that access MOTs, you can only choose 2–5.

Optimistic concurrency basically means that readers and writers do not block each other.

As stated in the earlier quote from Microsoft, with MOTs writers do not block writers either.

To handle concurrency better in our database, we chose row versioning. On disk-based tables you can choose Snapshot isolation or Read Committed Snapshot. We chose Read Committed Snapshot Isolation, aka RCSI.

When using MOTs, you can turn on MEMORY_OPTIMIZED_ELEVATE_TO_SNAPSHOT, which sets the isolation level for memory-optimized tables to SNAPSHOT (as if you added a WITH (SNAPSHOT) hint to every memory-optimized table reference). We tested our approach, 2012 vs. 2014 (base tables vs. MOTs).
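
For reference, these are the two database-level switches involved (MyDb is a placeholder name):

    -- Row versioning for disk-based tables (RCSI)
    ALTER DATABASE MyDb SET READ_COMMITTED_SNAPSHOT ON;

    -- Elevate the isolation level for memory-optimized tables to SNAPSHOT,
    -- as if every MOT reference carried a WITH (SNAPSHOT) hint
    ALTER DATABASE MyDb SET MEMORY_OPTIMIZED_ELEVATE_TO_SNAPSHOT = ON;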

2014 wins BIG TIME!

Therefore, we deployed the solution to production, where it has been running for the last 6 months. During this time, we have been tuning the solution to optimize performance. All went really well until we tried to test the commercial import/export (courier company) declarations. The load was at 50%. We found that the server could not handle it; more specifically, the MOTs failed.

The server had 8 logical cores (running on a VM) with plenty of memory (256GB).

The process logs each Stored Procedure. The failures were spread across about 100 Stored Procedures at varying times.

The error we received was –

Error 41301: A previous transaction that the current transaction took a dependency on has aborted, and the current transaction can no longer commit.

This was a bit odd.

We realized that because we were working on each declaration in parallel, it was causing problems during times of intense load. On the second attempt we changed the process to be serialized; there was no change at all – this was difficult to swallow.

We saw that in these places the process stopped because of an error inside a complicated Stored Procedure.

We tuned the Stored Procedures and added retry logic to the T-SQL Stored Procedure that calls each validation.
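
A sketch of what such a T-SQL retry wrapper can look like – dbo.Validation_Check and its parameter are placeholders, and 41301/41302/41305/41325 are the transient conflict errors Microsoft's retry guidelines list for memory-optimized tables:

    DECLARE @DeclarationId INT = 42;   -- placeholder input
    DECLARE @retry INT = 3;

    WHILE @retry > 0
    BEGIN
        BEGIN TRY
            EXEC dbo.Validation_Check @DeclarationId = @DeclarationId;
            SET @retry = 0;   -- success: leave the loop
        END TRY
        BEGIN CATCH
            SET @retry -= 1;

            -- Retry only the transient MOT conflict errors;
            -- anything else (or running out of retries) is rethrown
            IF ERROR_NUMBER() NOT IN (41301, 41302, 41305, 41325) OR @retry <= 0
                THROW;
        END CATCH;
    END;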

We ran the load again; there were fewer errors than before, but not significantly fewer.

We moved the retry logic from T-SQL to the .NET code and tried again.

Finally we saw a significant change: all the errors that were caught got a second chance through the retry logic and succeeded.

So, why did the retry in T-SQL not help?

When an error is raised, the connection is terminated (along with the transaction, memory, etc.). On retry from the .NET code, a new connection is established – this takes only a few milliseconds.

We doubled the load (to 100%). The process failed again.

Now the retry logic was not helpful (more than seven failed retries were reached). In the meantime, we tried one more thing: we changed the cores from 8 to 32. With 32 cores the process somehow worked well. This quadrupled the licensing costs, but it worked. We were happy for a while, until we tried again with some real-life load – and the process failed (with many retries) again.

Let's try to better understand the error we received. We know that other transactions influence the current transaction. We know that the big Stored Procedures (or, to be exact, the larger transactions) are causing the problem. Finally, we know that this issue is connected to the MOTs.

We went back to the Stored Procedures, examined several execution plans, and discovered that every Stored Procedure we opened had a TABLE SCAN on a MOT, which indicates the wrong index for the query, or the wrong query for the index.

In a MOT the index is a HASH index. When you define the column or columns for the index, a hash function on those columns returns an address (bucket), and this address is matched to one of the buckets specified by BUCKET_COUNT.

To find a particular row, the query must use the same columns that are used in the hash index.
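
For example, with the Exception_MOT sketch from earlier (hash index on DeclarationId and ValidationId), only an equality predicate on both key columns can seek; anything less falls back to a scan:

    -- Index seek: equality on every key column of the hash index
    SELECT ErrorMessage
    FROM dbo.Exception_MOT
    WHERE DeclarationId = 42 AND ValidationId = 17;

    -- Table scan: the leading column alone cannot be hashed to a bucket,
    -- so the whole MOT is scanned
    SELECT ErrorMessage
    FROM dbo.Exception_MOT
    WHERE DeclarationId = 42;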

We started to review every Stored Procedure in order to change each TABLE SCAN on a MOT into a SEEK operator.

To work efficiently, I wrote a script that finds each scan on a MOT in the plan cache.

This made our job much easier.
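
The exact script is specific to our environment, but the core idea – searching the plan cache for a Table Scan whose target object is memory-optimized – can be sketched like this:

    -- Find cached plans containing a Table Scan on a memory-optimized table
    WITH XMLNAMESPACES
        (DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
    SELECT
        DB_NAME(qp.dbid)                  AS database_name,
        OBJECT_NAME(qp.objectid, qp.dbid) AS object_name,
        cp.usecounts,
        qp.query_plan
    FROM sys.dm_exec_cached_plans AS cp
    CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) AS qp
    WHERE qp.query_plan.exist(
        '//TableScan/Object[@Storage="MemoryOptimized"]') = 1;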

OK, so we found that in SQL Server 2014 memory-optimized tables really are lock-free – as long as you make sure your system is free of table scans on MOTs.

A lesson was learned from this case that you should be aware of; here is what you can take from our experience:

  1. Use short Stored Procedures – try to make each query as simple as possible.

  2. If you can, convert regular Stored Procedures to Natively Compiled Stored Procedures – this will save the server time spent on compiling each Stored Procedure.

  3. When using optimistic locking, add retry logic (in T-SQL or .NET).

  4. No scans on MOTs.

  5. Use a "divide" strategy on big queries – this means that large queries with several joins were separated into multiple queries with temp tables, etc., in between.

  6. Do not use functions on the index column in the WHERE clause – this applies to base tables as well (see the sketch after this list).

  7. Monitor your work.
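
A quick illustration of point 6, using a hypothetical declarations table with an index on ReceivedDate:

    -- Non-sargable: the function wrapped around the indexed column forces a scan
    SELECT DeclarationId
    FROM dbo.Declarations
    WHERE CONVERT(date, ReceivedDate) = '20150601';

    -- Sargable: a range predicate on the bare column can use the index
    SELECT DeclarationId
    FROM dbo.Declarations
    WHERE ReceivedDate >= '20150601'
      AND ReceivedDate <  '20150602';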

Before the free Q&A, I have a few questions of my own:

Why not make all of the base tables that are involved in this process into MOTs?

Firstly, I wanted to take this process and improve it with the minimum amount of work, as fast as possible.

Secondly, if I had changed all the base tables involved in this process, it would have meant changing probably 70% of the DB schema and the app, which would have been a significant step back for the project.

Thirdly, MOTs have a lot of limitations:

  1. Max row size of 8,060 bytes – there are a lot of large-row tables involved in this process.

  2. Indexes – at most 8 indexes, and no filtered indexes.

  3. Index columns cannot be nullable.

  4. Only BIN2 collations (for indexed character columns).

  5. No foreign keys.

  6. No check constraints.

  7. Well, the list of "why not" can be long, and there is plenty of well-written information on the web:

https://www.simple-talk.com/sql/performance/the-promise---and-the-pitfalls---of-in-memory-oltp/

(The Promise - and the Pitfalls - of In-Memory OLTP)

(Why I don’t (yet) recommend In-Memory OLTP to my customers)

I have to agree, they all have a point – not for everyone and not for everything.

Why not create Natively Compiled Stored Procedures and make this process even faster?

Natively Compiled Stored Procedures have their own guidelines. Some of them you can find here –

But the main reason is that our Stored Procedures access both MOTs and base tables, and unfortunately Natively Compiled Stored Procedures do not support referencing both.

What BUCKET_COUNT did I use?

  a. What is the best way to get the correct estimate?

  b. Is a very large BUCKET_COUNT a bad strategy?

Great question! (For a moment I forgot that I was the one asking the questions☺).

a. At first, I would just throw out a number. Then I did some calculations.

The first problem with this is that you do not have an "aggregate DMV" to query, and therefore no way to monitor the maximum bucket usage over time. So, what can you do?

We do have a DMV, but it shows only the state at an exact point in time, not historical aggregations.

I built a little monitoring script that samples the DMV every second during run time, while the system is populated with an excessive number of new varieties of declarations (I doubled the number of declarations expected at high-volume load). Then I rounded my numbers up. The common advice on the web for now is to double your number.
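
The DMV in question is sys.dm_db_xtp_hash_index_stats. A sampling query can look like this – many empty buckets suggest the count is oversized, while long chains suggest it is undersized:

    -- Point-in-time snapshot of hash-index bucket usage in the current database
    SELECT
        OBJECT_NAME(his.object_id) AS table_name,
        i.name                     AS index_name,
        his.total_bucket_count,
        his.empty_bucket_count,
        his.avg_chain_length,
        his.max_chain_length
    FROM sys.dm_db_xtp_hash_index_stats AS his
    JOIN sys.indexes AS i
        ON i.object_id = his.object_id
       AND i.index_id  = his.index_id;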

b. Yes, but only when you scan the table – not when you use a seek predicate.

What method did I use to build my MOTs?

This means that we are going to talk about the "D" in ACID. D stands for Durability. This is the first time in SQL Server that durability is an option the user can change.

You can change it in the database properties, but you must also define it specifically for each MOT.

For MOTs you have two options:

SCHEMA_ONLY – only the schema is persisted; no actual data is saved to disk, so the data is lost in the event of a server restart.

In this state you do not have to define a Primary Key.

SCHEMA_AND_DATA – this saves the schema and the in-memory data to disk in the background, with the option to delay durability for better performance by not immediately flushing the transaction log writes.
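
For comparison, a durable MOT declares SCHEMA_AND_DATA, and delayed durability is a database-level setting (names and numbers here are placeholders):

    -- A durable MOT: both schema and data survive a restart
    CREATE TABLE dbo.Durable_MOT
    (
        Id INT NOT NULL
            PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1024)
    )
    WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);

    -- Allow transactions to opt into delayed (lazy) log flushes
    ALTER DATABASE MyDb SET DELAYED_DURABILITY = ALLOWED;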

Because my MOTs act like temporary tables and the data needs to stay alive only as long as the process runs (as I mentioned before, about 4 seconds), SCHEMA_ONLY was chosen.

Did I use a separate resource pool for the MOTs?

At first, no. I am working on it.

There is a way to dedicate the right percentage of the overall memory specifically to the MOTs through the Resource Governor. This is recommended by Microsoft as well.

I am working on dynamic environments. Each environment has a different configuration and a different amount of memory, and to calculate the right percentage you need to keep a few things in mind: the memory allocated to SQL Server, the BUCKET_COUNT of each MOT, the number of indexes on each MOT, and the row size of each MOT. In my case, the schema and the BUCKET_COUNTs are the same across all 7 of our environments; that leaves the memory on the server, which differs between environments (4GB, 32GB, 256GB). I wrote a script that calculates the right percentage to define. DOWNLOAD_SCRIPT_HERE
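
The binding itself is straightforward – the hard part is choosing the percentage, which is what the script computes. A sketch with placeholder names and numbers:

    -- Create a pool that reserves a share of server memory for the MOTs
    CREATE RESOURCE POOL MOT_Pool
        WITH (MIN_MEMORY_PERCENT = 30, MAX_MEMORY_PERCENT = 30);
    ALTER RESOURCE GOVERNOR RECONFIGURE;

    -- Bind the database to the pool
    EXEC sp_xtp_bind_db_resource_pool
        @database_name = N'MyDb',
        @pool_name     = N'MOT_Pool';

    -- The binding takes effect once the database comes back online
    ALTER DATABASE MyDb SET OFFLINE WITH ROLLBACK IMMEDIATE;
    ALTER DATABASE MyDb SET ONLINE;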

Further reading:

  • Resolve Out Of Memory Issues

  • Bind a Database with Memory-Optimized Tables to a Resource Pool

  • Monitor and Troubleshoot Memory Usage

  • Requirements for Using Memory-Optimized Tables

  • Estimate Memory Requirements for Memory-Optimized Tables

  • Table and Row Size in Memory-Optimized Tables

  • Guidelines for Retry Logic for Transactions on Memory-Optimized Tables

  • Transactions in Memory-Optimized Tables
