
What's New in SQL Server 2019: Table Variable Deferred Compilation

Hi guys,

Here we are together again to continue the discussion started in the previous article, which you can find here.

Previously I told you the history of Table Variables and we also saw their limits.
Today it is time to tell you about the new features that accompany the release of SQL Server 2019, starting with Table Variables.

Are you ready? Get comfortable and let's go!


Table Variables deferred compilation

Let's immediately give a name to the new feature introduced by SQL Server 2019 for Table Variables: Table Variable deferred compilation.

As the name suggests, compilation of the Table Variable becomes deferred.
Deferred in the sense that it no longer happens at the moment the table is declared, but is postponed (deferred) to the moment the table is first used in our query.

More in detail, the cardinality estimate occurs at the first use of the table; after that, no further recompilation takes place.

It is therefore a compromise between recompiling the statement every time in order to get a precise cardinality estimate, and never recompiling and living with a cardinality estimate always equal to 1.

The "trick", as Microsoft makes clear, is only this, but the results it brings are by no means secondary.

Obviously you need to be running SQL Server 2019 with the database compatibility level set to 150.

Now let's check what happens with a side-by-side comparison against SQL Server 2017.


SQL 2017 vs. SQL 2019

Let's open our SSMS and set the database compatibility level to 140.
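If you want to follow along, the compatibility level can be switched with a simple ALTER DATABASE. A minimal sketch, assuming a scratch database named TestDB (substitute your own), and enabling the I/O statistics you will see in the outputs below:

```sql
-- Assumption: a scratch database named TestDB; replace with your own.
-- 140 = SQL Server 2017 behaviour, 150 = SQL Server 2019 behaviour.
ALTER DATABASE TestDB SET COMPATIBILITY_LEVEL = 140;

-- Show logical/physical read counts in the Messages tab:
SET STATISTICS IO ON;
```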

We declare a table variable @T and fill it with 1000 rows.


DECLARE @T TABLE(ID INT PRIMARY KEY, NAME SYSNAME NOT NULL UNIQUE);

INSERT @T SELECT TOP (1000) [OBJECT_ID], NAME FROM SYS.ALL_OBJECTS;

SELECT T.ID, T.NAME
FROM SYS.ALL_OBJECTS O
JOIN @T AS T ON O.[OBJECT_ID] = T.ID
WHERE T.NAME like '%EXEC%'

 
After the execution we see that 10 pages of the table variable have been read (logical reads 10).

       
(30 rows affected)
Table 'sysschobjs'. Scan count 0, logical reads 60, physical reads 0, page server reads 0, read-ahead reads 0, page server read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob page server reads 0, lob read-ahead reads 0, lob page server read-ahead reads 0.
Table '#BD097E09'. Scan count 1, logical reads 10, physical reads 0, page server reads 0, read-ahead reads 0, page server read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob page server reads 0, lob read-ahead reads 0, lob page server read-ahead reads 0.


The execution plan is this:


In the green box in the image above you can see that the SQL Server optimizer estimates that the table contains only one (1) row.
Therefore, to physically execute the JOIN, it uses the NESTED LOOPS operator, which is suitable only as long as one of the inputs has few rows (roughly fewer than 50).

Good!

Now let's set the database compatibility level to 150, run the query again and see what happens!

The number of reads on the table variable remains the same:


(30 rows affected)
Table 'sysschobjs'. Scan count 1, logical reads 22, physical reads 0, page server reads 0, read-ahead reads 0, page server read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob page server reads 0, lob read-ahead reads 0, lob page server read-ahead reads 0.
Table '#AD3FE5D2'. Scan count 1, logical reads 10, physical reads 0, page server reads 0, read-ahead reads 0, page server read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob page server reads 0, lob read-ahead reads 0, lob page server read-ahead reads 0.

But the execution plan changes:


 

The estimated number of rows contained in the table is now 90.
To resolve the JOIN, the optimizer now chooses the faster MERGE JOIN algorithm.

 

Conclusions

With SQL Server 2019 we can finally use Table Variables confident that the cardinality estimate is based on real row counts. The only limitation arises when the number of rows contained in the table varies during the execution of our procedure, because the estimate is not refreshed after the first use.
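A minimal sketch of that limitation (hypothetical code, not from the test above): a statement inside a loop is compiled at its first execution, and later iterations reuse that plan even though the table variable has grown.

```sql
DECLARE @T TABLE (ID INT PRIMARY KEY);
DECLARE @I INT = 0;

WHILE @I < 3
BEGIN
    -- Each pass adds 1000 new rows, so @T grows to 1000, 2000, 3000 rows.
    INSERT @T
    SELECT @I * 1000 + ROW_NUMBER() OVER (ORDER BY [OBJECT_ID])
    FROM (SELECT TOP (1000) [OBJECT_ID] FROM SYS.ALL_OBJECTS) AS S;

    -- This JOIN is compiled on the first pass, when @T holds 1000 rows;
    -- the later passes reuse the same plan and the same estimate.
    SELECT T.ID
    FROM @T AS T
    JOIN SYS.ALL_OBJECTS O ON O.[OBJECT_ID] = T.ID;

    SET @I += 1;
END;
```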

All this happens, moreover, without specifying any RECOMPILE option and without using any trace flags, as we saw in the previous article.
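For comparison, before SQL Server 2019 you had to ask for a fresh estimate explicitly, for example with a statement-level recompile (using the same query as above):

```sql
-- Pre-2019 workaround: force this statement to be recompiled at run
-- time, so the optimizer sees the real row count of @T.
SELECT T.ID, T.NAME
FROM SYS.ALL_OBJECTS O
JOIN @T AS T ON O.[OBJECT_ID] = T.ID
WHERE T.NAME LIKE '%EXEC%'
OPTION (RECOMPILE);
```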

This is just one of the reasons we can give when asked whether it is worth upgrading to SQL Server 2019.

 

 
Now that you know all about Table Variables, all that remains is to invite you to the next post.

See you soon and I hope you enjoyed reading,
Luca
 
Luca Biondi @ SQLServerPerformance blog!


 





