{"id":912,"date":"2016-08-14T14:00:50","date_gmt":"2016-08-14T14:00:50","guid":{"rendered":"http:\/\/nenadnoveljic.com\/blog\/?p=912"},"modified":"2016-08-14T14:00:50","modified_gmt":"2016-08-14T14:00:50","slug":"anti-semi-join-null-cost-calculation","status":"publish","type":"post","link":"https:\/\/nenadnoveljic.com\/blog\/anti-semi-join-null-cost-calculation\/","title":{"rendered":"Anomaly in Cost Calculation of Anti Semi Join With Nullable Columns"},"content":{"rendered":"<h1>Introduction<\/h1>\n<p>While analyzing a long running query on a SQL Server database I discovered a boundary condition which revealed a flaw in the cost calculation of <a href=\"http:\/\/sqlity.net\/en\/1360\/a-join-a-day-the-left-anti-semi-join\/\" target=\"_blank\">Anti Semi Join<\/a>.\u00a0Consequently,\u00a0the wrong cost led\u00a0to a bad execution plan that caused the query to run &#8220;forever&#8221;. In this blog post I&#8217;ll provide a test case, an analysis and a possible workaround.<\/p>\n<p>I&#8217;ll also\u00a0show how the max server memory and\u00a0NOT NULL\u00a0constraint can influence the execution plan selection.<\/p>\n<h1>Setting the Scene<\/h1>\n<p>The test case is a simplified version of the mentioned real world scenario:<\/p>\n<pre><code>create table t1 (a integer)\r\n\r\ncreate table t2 (\r\n  a integer not null ,\r\n  CONSTRAINT [t2_pk] PRIMARY KEY NONCLUSTERED ( [a] ASC )\r\n)\r\n\r\nupdate statistics <span style=\"color: #ff0000;\">t1<\/span> with <span style=\"color: #ff0000;\">rowcount=206057632,pagecount=4121154<\/span>\r\n\r\nupdate statistics <span style=\"color: #ff0000;\">t2<\/span> with <span style=\"color: #ff0000;\">rowcount=2,pagecount=1<\/span><\/code><\/pre>\n<p>I tweaked the statistics to make the optimizer think that the table t1 is very large and t2 extremely small containing just two records.<br \/>\nFurthermore, the maximum SQL Server memory is set to 20000 MB (or less).\u00a0Maybe it is not\u00a0obvious at the moment, but later\u00a0it will 
turn out that the max memory limit has a significant impact on the selection of the execution plan.<\/p>\n<pre><code>SELECT name, value\r\nFROM sys.configurations\r\nWHERE name like 'max server memory%'\r\nORDER BY name \r\n\r\nmax server memory (MB)\t20000<\/code><\/pre>\n<h1>Anti Semi Join<\/h1>\n<p>The following query selects all the records from t1 without corresponding values in t2:<\/p>\n<pre><code>SET SHOWPLAN_text on\r\n\r\nselect count(a) from t1 \r\n  where a not in \r\n    (select a from t2) option (maxdop 1)\r\n\r\n|--Compute Scalar(DEFINE:([Expr1006]=CONVERT_IMPLICIT(int,[Expr1009],0)))\r\n    |--Stream Aggregate(DEFINE:([Expr1009]=COUNT([t1].[a])))\r\n        |--<span style=\"color: #ff0000;\">Nested Loops(Left Anti Semi Join, WHERE:([t1].[a] IS NULL OR [t1].[a]=[t2].[a]))<\/span>\r\n                |--Table Scan(OBJECT:([t1]))\r\n                |--Table Scan(OBJECT:([t2]))<\/code><\/pre>\n<p>The Anti Semi Join returns the inverted result of the highlighted predicate of the Nested Loop. 
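<\/p>\n<p>To see why the NULLs in t1.a need special treatment at all, consider the following sketch (the inserted rows are hypothetical and not part of the original test case):<\/p>\n<pre><code>-- A NULL on the left side of NOT IN makes the predicate UNKNOWN,\r\n-- so such a row can never qualify:\r\ninsert into t1 values (NULL)\r\ninsert into t2 values (1)\r\n\r\nselect * from t1 where a not in (select a from t2)\r\n-- the row with a = NULL is filtered out, which is exactly what the\r\n-- [t1].[a] IS NULL part of the Anti Semi Join predicate enforces<\/code><\/pre>\n<p>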
Since the predicate contains an OR operation, neither Merge Join nor Hash Match, which would be a better option in this case, can be considered as an alternative.<br \/>\nAs the next step, let&#8217;s see what happens to the execution plan if we &#8220;make&#8221; the table t2 much larger, again by manipulating its statistics:<\/p>\n<pre><code>SET SHOWPLAN_text off\r\n\r\nupdate statistics <span style=\"color: #ff0000;\">t2<\/span> with rowcount=<span style=\"color: #ff0000;\">1320764,pagecount=146753<\/span>\r\n\r\n|--Compute Scalar(DEFINE:([Expr1006]=CONVERT_IMPLICIT(int,[Expr1009],0)))\t26060.84\r\n     |--Stream Aggregate(DEFINE:([Expr1009]=COUNT([t1].[a])))\t26060.84\r\n          |--<span style=\"color: #ff0000;\">Hash Match(Right Anti Semi Join, HASH:([t2].[a])=([t1].[a])<\/span>, RESIDUAL:([t1].[a]=[t2].[a]))\t26060.83\r\n               |--Index Scan(OBJECT:([t2].[t2_pk]))\t1.456122\r\n               |--<span style=\"color: #ff0000;\">Nested Loops(Left Anti Semi Join, WHERE:([t1].[a] IS NULL))<\/span>\t24828.88\r\n                    |--Table Scan(OBJECT:([t1]))\t3279.373\r\n                    |--<span style=\"color: #ff0000;\">Row Count Spool\t20626.37\r\n<\/span>                         |--Top(TOP EXPRESSION:((1)))\t0.0032832\r\n                              |--Index Scan(OBJECT:([t2].[t2_pk]))\t0.0032831\r\n\r\n<\/code><\/pre>\n<p>I cut out all irrelevant information from the SHOWPLAN_ALL output above, leaving only the total subtree cost at the end of each line.<\/p>\n<p>Because of the high number of records in the inner table t2, the optimizer has decided to replace the Nested Loop with a more efficient Hash Match, which led to a more complex plan: because of the OR operator, the IS NULL condition must now be checked in a separate branch, i.e. a Nested Loops Anti Semi Join.<\/p>\n<p>At this point, we&#8217;ll also make a mental note of the Row Count Spool operation and its cost. 
Row Count Spool scans its input, counts the number of rows for a given key, and caches the result, making it generally a more efficient operation for checking the existence of a row than a Table Scan (see <a href=\"https:\/\/www.simple-talk.com\/sql\/learn-sql-server\/showplan-operator-of-the-week-row-count-spool\/\" target=\"_blank\">the blog post<\/a> by Fabiano Amorim for a more detailed explanation of the Row Count Spool operation).<\/p>\n<h1>Max Server Memory Impact<\/h1>\n<p>As we keep increasing the memory, nothing really happens &#8211; both the plan and the calculated cost remain the same. However, this is only true until a certain threshold is reached.<\/p>\n<p>Interestingly, setting the max server memory to 30000 MB causes a change in the execution plan:<\/p>\n<pre><code>USE master\r\nEXEC sp_configure 'max server memory (MB)', 30000\r\nRECONFIGURE WITH OVERRIDE\r\nGO \r\nConfiguration option <span style=\"color: #ff0000;\">'max server memory (MB)' changed from 20000 to 30000<\/span>. Run the RECONFIGURE statement to install.\r\n\r\n|--Compute Scalar(DEFINE:([Expr1006]=CONVERT_IMPLICIT(int,[Expr1009],0)))\t21857.26\r\n     |--Stream Aggregate(DEFINE:([Expr1009]=COUNT([t1].[a])))\t21857.26\r\n          |--Hash Match(Right Anti Semi Join, HASH:([t2].[a])=([t1].[a]), RESIDUAL:([t1].[a]=[t2].[a]))\t21857.25\r\n               |--Index Scan(OBJECT:([t2].[t2_pk]))\t1.456122\r\n               |--Nested Loops(Left Anti Semi Join, WHERE:([t1].[a] IS NULL))\t20625.31\r\n                    |--Table Scan(OBJECT:([t1]))\t3279.373\r\n                    |--Top(TOP EXPRESSION:((1)))\t16422.8\r\n                         <span style=\"color: #ff0000;\">|--Table Scan(OBJECT:([t2]))\t16402.19<\/span><\/code><\/pre>\n<p>The Row Count Spool has been replaced by a Table Scan because the cost of the Table Scan amounts to 16402.19 and is therefore considered by the optimizer to be more efficient than the Row Count Spool, which has a cost of 20626.37. 
By changing the maximum server memory and observing the estimated cost, it can easily be verified that the cost is not a function of the memory size for either the Row Count Spool or the Table Scan.<\/p>\n<p>But then again, if the cost calculation doesn&#8217;t depend on the memory limit, why did the optimizer ignore the plan with the Table Scan in spite of its lower cost until the memory was increased? I couldn&#8217;t find any hints that would explain the observed behavior. Possibly, the optimizer speculates that with less memory it is more probable that some pages of the t2 table get evicted from the cache during the Nested Loop operation, so it doesn&#8217;t consider the Table Scan unless there is sufficient memory to keep the whole t2 table cached during the execution.<\/p>\n<p>However, in reality, the query runs far faster with the Row Count Spool plan than with the Table Scan plan. The elapsed times are half a minute and &#8220;forever&#8221;, respectively.<\/p>\n<h1>Table Scan Cost<\/h1>\n<p>So why has the optimizer massively underestimated the amount of work associated with the Table Scan?<\/p>\n<p>To answer this question, I&#8217;ll change the size of the t2 table and observe the estimated cost. In general, the total operation cost of the Table Scan within the Nested Loop is the product of the number of records in the outer source and the CPU cost of a single Table Scan, plus some additional base cost. 
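<\/p>\n<p>A back-of-the-envelope check against the displayed costs (the per-rescan figure below is derived from those numbers and is not a documented SQL Server constant) shows what this formula implies:<\/p>\n<pre><code>-- expected:  cost = outer_rows * cost_per_rescan + base_cost\r\n-- observed total for the inner Table Scan:  16402.19\r\n-- implied per-rescan cost:  16402.19 \/ 206057632 = ~0.00008\r\n-- rescanning a 146753-page table should cost far more than that,\r\n-- so the per-rescan component apparently ignores the table size<\/code><\/pre>\n<p>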
Curiously, in this particular case the cost is not a function of the rowcount of the inner table:<\/p>\n<pre><code>\r\nupdate statistics t2 with <span style=\"color: #ff0000;\">rowcount=1000,pagecount=100\r\n<\/span>|--Table Scan(OBJECT:([t2]))\t<span style=\"color: #ff0000;\">16402.19<\/span>\t\r\n\r\nupdate statistics t2 with <span style=\"color: #ff0000;\">rowcount=1000000,pagecount=100000<\/span>\r\n|--Table Scan(OBJECT:([t2]))\t<span style=\"color: #ff0000;\">16402.19\t\r\n<\/span><\/code><\/pre>\n<p>Even after changing the size of the table by orders of magnitude, the cost hasn&#8217;t changed in the slightest! As a consequence, the Table Scan cost of large tables will be massively underestimated, misleading the optimizer into a very bad plan.<\/p>\n<h1>To NULL or NOT to NULL &#8211; That is the Question<\/h1>\n<p>Hence, we ended up with a paradox &#8211; in order to improve page life expectancy and overall performance, we increased the server memory, but because of the flaw in the cost calculation one query started to run significantly slower after the memory adjustment. I started to look for possible workarounds. An easy way out would have been to rewrite the query to use alternative join methods returning the same result set, but I strove for a solution that would fix all of the queries following the same pattern without touching the code.<\/p>\n<p>If we think about the definition of the problem once again, we quickly come to the conclusion that the whole complexity arises from the fact that the optimizer has to create a separate branch in the execution plan just to handle the NULL values of the table t1. 
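<\/p>\n<p>One such rewrite, shown here only as a sketch and not as the workaround eventually chosen, replaces NOT IN with NOT EXISTS, which is free of the NULL ambiguity and therefore typically doesn&#8217;t need the extra IS NULL branch:<\/p>\n<pre><code>select count(a) from t1\r\n  where not exists\r\n    (select 1 from t2 where t2.a = t1.a) option (maxdop 1)<\/code><\/pre>\n<p>NOT EXISTS and NOT IN treat NULLs in t1.a differently, but for count(a), which ignores NULLs, the results coincide.<\/p>\n<p>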
If t1.a were mandatory, we could declare the column as NOT NULL, giving the optimizer a cue not to bother dealing with NULL records. This, in turn, would significantly reduce the set of execution plan candidates to choose from, making it less probable to hit an optimizer bug under some boundary condition.<\/p>\n<p>Although the column was never NULL on my live system, it was not declared as such. Actually, in the whole schema the application vendor avoided using NOT NULL for no obvious reason.<\/p>\n<p>Therefore, it came as no surprise to me that altering the column t1.a to NOT NULL resulted in a much more efficient and less volatile execution plan:<\/p>\n<pre><code>alter table t1 alter column a integer not null\r\n\r\n|--Compute Scalar(DEFINE:([Expr1006]=CONVERT_IMPLICIT(int,[Expr1009],0)))\t4517.912\r\n     |--Stream Aggregate(DEFINE:([Expr1009]=Count(*)))\t4517.912\r\n          |--<span style=\"color: #ff0000;\">Hash Match(Right Anti Semi Join, HASH:([t2].[a])=([t1].[a]))\t4394.278\r\n<\/span>               |--Index Scan(OBJECT:([t2].[t2_pk]))\t1.456122\r\n               |--Table Scan(OBJECT:([t1]))\t3279.373<\/code><\/pre>\n<p>This plan has a much lower cost than the one devised when NOT NULL was not specified (4517.912 and 21857.26, respectively).<\/p>\n<p>In conclusion, not only does declaring the column as NOT NULL (if appropriate for a given case) ensure data integrity, but it also provides the optimizer with invaluable information, leading to far better and more stable execution plans.<\/p>\n<h1>Versions Affected<\/h1>\n<p>The problem is reproducible in the SQL Server versions 2008R2 and 2014. 
In contrast, it seems to be fixed in the versions 2012 and 2016.<\/p>\n<h1>Summary<\/h1>\n<p>Due to a software bug, the optimizer can devise a bad plan under the following conditions:<\/p>\n<ul>\n<li>an anti semi join is used<\/li>\n<li>NULL values are allowed in the joining column<\/li>\n<li>max server memory is large (&gt; approx. 30000 MB)<\/li>\n<\/ul>\n<p>The problem can be avoided by declaring the columns as NOT NULL (when feasible), which is a good practice anyway, as it provides additional information to the optimizer. By using this information, the optimizer is able to come up with much better and more stable execution plans. The good news is that the problem is fixed in the most recent version (currently 2016).<\/p>\n","protected":false},"excerpt":{"rendered":"<p>The SQL Server optimizer cost calculation of Anti Semi Join over null columns may produce bad execution plans if max server memory is set high. <a href=\"https:\/\/nenadnoveljic.com\/blog\/anti-semi-join-null-cost-calculation\/\" class=\"more-link\">Continue Reading <span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"inline_featured_image":false,"footnotes":""},"categories":[17],"tags":[],"class_list":["post-912","post","type-post","status-publish","format-standard","hentry","category-sql-server"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.4 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>Anomaly in Cost Calculation of Anti Semi Join With Nullable Columns - All-round Database Topics<\/title>\n<meta name=\"description\" content=\"The SQL Server optimizer cost calculation of Anti Semi Join over null columns may produce bad execution plans if max server memory is set high.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link 
rel=\"canonical\" href=\"https:\/\/nenadnoveljic.com\/blog\/anti-semi-join-null-cost-calculation\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Anomaly in Cost Calculation of Anti Semi Join With Nullable Columns - All-round Database Topics\" \/>\n<meta property=\"og:description\" content=\"The SQL Server optimizer cost calculation of Anti Semi Join over null columns may produce bad execution plans if max server memory is set high.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/nenadnoveljic.com\/blog\/anti-semi-join-null-cost-calculation\/\" \/>\n<meta property=\"og:site_name\" content=\"All-round Database Topics\" \/>\n<meta property=\"article:published_time\" content=\"2016-08-14T14:00:50+00:00\" \/>\n<meta name=\"author\" content=\"Nenad Noveljic\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@NenadNoveljic\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Nenad Noveljic\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"8 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\\\/\\\/nenadnoveljic.com\\\/blog\\\/anti-semi-join-null-cost-calculation\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/nenadnoveljic.com\\\/blog\\\/anti-semi-join-null-cost-calculation\\\/\"},\"author\":{\"name\":\"Nenad Noveljic\",\"@id\":\"https:\\\/\\\/nenadnoveljic.com\\\/blog\\\/#\\\/schema\\\/person\\\/51458d9dd86dbbdd19f5add451d44efa\"},\"headline\":\"Anomaly in Cost Calculation of Anti Semi Join With Nullable Columns\",\"datePublished\":\"2016-08-14T14:00:50+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/nenadnoveljic.com\\\/blog\\\/anti-semi-join-null-cost-calculation\\\/\"},\"wordCount\":1272,\"commentCount\":1,\"articleSection\":[\"SQL Server\"],\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\\\/\\\/nenadnoveljic.com\\\/blog\\\/anti-semi-join-null-cost-calculation\\\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/nenadnoveljic.com\\\/blog\\\/anti-semi-join-null-cost-calculation\\\/\",\"url\":\"https:\\\/\\\/nenadnoveljic.com\\\/blog\\\/anti-semi-join-null-cost-calculation\\\/\",\"name\":\"Anomaly in Cost Calculation of Anti Semi Join With Nullable Columns - All-round Database Topics\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/nenadnoveljic.com\\\/blog\\\/#website\"},\"datePublished\":\"2016-08-14T14:00:50+00:00\",\"author\":{\"@id\":\"https:\\\/\\\/nenadnoveljic.com\\\/blog\\\/#\\\/schema\\\/person\\\/51458d9dd86dbbdd19f5add451d44efa\"},\"description\":\"The SQL Server optimizer cost calculation of Anti Semi Join over null columns may produce bad execution plans if max server memory is set 
high.\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/nenadnoveljic.com\\\/blog\\\/anti-semi-join-null-cost-calculation\\\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/nenadnoveljic.com\\\/blog\\\/anti-semi-join-null-cost-calculation\\\/\"]}]},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/nenadnoveljic.com\\\/blog\\\/anti-semi-join-null-cost-calculation\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/nenadnoveljic.com\\\/blog\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Anomaly in Cost Calculation of Anti Semi Join With Nullable Columns\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/nenadnoveljic.com\\\/blog\\\/#website\",\"url\":\"https:\\\/\\\/nenadnoveljic.com\\\/blog\\\/\",\"name\":\"All-round Database Topics\",\"description\":\"Nenad Noveljic\",\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/nenadnoveljic.com\\\/blog\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/nenadnoveljic.com\\\/blog\\\/#\\\/schema\\\/person\\\/51458d9dd86dbbdd19f5add451d44efa\",\"name\":\"Nenad Noveljic\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/a97b796613ea48ec8a7b79c8ffe1c685dcffc920c68121f6238d5caab5070670?s=96&d=mm&r=g\",\"url\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/a97b796613ea48ec8a7b79c8ffe1c685dcffc920c68121f6238d5caab5070670?s=96&d=mm&r=g\",\"contentUrl\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/a97b796613ea48ec8a7b79c8ffe1c685dcffc920c68121f6238d5caab5070670?s=96&d=mm&r=g\",\"caption\":\"Nenad 
Noveljic\"},\"sameAs\":[\"nenad-noveljic-9b746a6\",\"https:\\\/\\\/x.com\\\/NenadNoveljic\"],\"url\":\"https:\\\/\\\/nenadnoveljic.com\\\/blog\\\/author\\\/nenad\\\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Anomaly in Cost Calculation of Anti Semi Join With Nullable Columns - All-round Database Topics","description":"The SQL Server optimizer cost calculation of Anti Semi Join over null columns may produce bad execution plans if max server memory is set high.","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/nenadnoveljic.com\/blog\/anti-semi-join-null-cost-calculation\/","og_locale":"en_US","og_type":"article","og_title":"Anomaly in Cost Calculation of Anti Semi Join With Nullable Columns - All-round Database Topics","og_description":"The SQL Server optimizer cost calculation of Anti Semi Join over null columns may produce bad execution plans if max server memory is set high.","og_url":"https:\/\/nenadnoveljic.com\/blog\/anti-semi-join-null-cost-calculation\/","og_site_name":"All-round Database Topics","article_published_time":"2016-08-14T14:00:50+00:00","author":"Nenad Noveljic","twitter_card":"summary_large_image","twitter_creator":"@NenadNoveljic","twitter_misc":{"Written by":"Nenad Noveljic","Est. 
reading time":"8 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/nenadnoveljic.com\/blog\/anti-semi-join-null-cost-calculation\/#article","isPartOf":{"@id":"https:\/\/nenadnoveljic.com\/blog\/anti-semi-join-null-cost-calculation\/"},"author":{"name":"Nenad Noveljic","@id":"https:\/\/nenadnoveljic.com\/blog\/#\/schema\/person\/51458d9dd86dbbdd19f5add451d44efa"},"headline":"Anomaly in Cost Calculation of Anti Semi Join With Nullable Columns","datePublished":"2016-08-14T14:00:50+00:00","mainEntityOfPage":{"@id":"https:\/\/nenadnoveljic.com\/blog\/anti-semi-join-null-cost-calculation\/"},"wordCount":1272,"commentCount":1,"articleSection":["SQL Server"],"inLanguage":"en-US","potentialAction":[{"@type":"CommentAction","name":"Comment","target":["https:\/\/nenadnoveljic.com\/blog\/anti-semi-join-null-cost-calculation\/#respond"]}]},{"@type":"WebPage","@id":"https:\/\/nenadnoveljic.com\/blog\/anti-semi-join-null-cost-calculation\/","url":"https:\/\/nenadnoveljic.com\/blog\/anti-semi-join-null-cost-calculation\/","name":"Anomaly in Cost Calculation of Anti Semi Join With Nullable Columns - All-round Database Topics","isPartOf":{"@id":"https:\/\/nenadnoveljic.com\/blog\/#website"},"datePublished":"2016-08-14T14:00:50+00:00","author":{"@id":"https:\/\/nenadnoveljic.com\/blog\/#\/schema\/person\/51458d9dd86dbbdd19f5add451d44efa"},"description":"The SQL Server optimizer cost calculation of Anti Semi Join over null columns may produce bad execution plans if max server memory is set 
high.","breadcrumb":{"@id":"https:\/\/nenadnoveljic.com\/blog\/anti-semi-join-null-cost-calculation\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/nenadnoveljic.com\/blog\/anti-semi-join-null-cost-calculation\/"]}]},{"@type":"BreadcrumbList","@id":"https:\/\/nenadnoveljic.com\/blog\/anti-semi-join-null-cost-calculation\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/nenadnoveljic.com\/blog\/"},{"@type":"ListItem","position":2,"name":"Anomaly in Cost Calculation of Anti Semi Join With Nullable Columns"}]},{"@type":"WebSite","@id":"https:\/\/nenadnoveljic.com\/blog\/#website","url":"https:\/\/nenadnoveljic.com\/blog\/","name":"All-round Database Topics","description":"Nenad Noveljic","potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/nenadnoveljic.com\/blog\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Person","@id":"https:\/\/nenadnoveljic.com\/blog\/#\/schema\/person\/51458d9dd86dbbdd19f5add451d44efa","name":"Nenad Noveljic","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/secure.gravatar.com\/avatar\/a97b796613ea48ec8a7b79c8ffe1c685dcffc920c68121f6238d5caab5070670?s=96&d=mm&r=g","url":"https:\/\/secure.gravatar.com\/avatar\/a97b796613ea48ec8a7b79c8ffe1c685dcffc920c68121f6238d5caab5070670?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/a97b796613ea48ec8a7b79c8ffe1c685dcffc920c68121f6238d5caab5070670?s=96&d=mm&r=g","caption":"Nenad 
Noveljic"},"sameAs":["nenad-noveljic-9b746a6","https:\/\/x.com\/NenadNoveljic"],"url":"https:\/\/nenadnoveljic.com\/blog\/author\/nenad\/"}]}},"_links":{"self":[{"href":"https:\/\/nenadnoveljic.com\/blog\/wp-json\/wp\/v2\/posts\/912","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/nenadnoveljic.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/nenadnoveljic.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/nenadnoveljic.com\/blog\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/nenadnoveljic.com\/blog\/wp-json\/wp\/v2\/comments?post=912"}],"version-history":[{"count":1,"href":"https:\/\/nenadnoveljic.com\/blog\/wp-json\/wp\/v2\/posts\/912\/revisions"}],"predecessor-version":[{"id":939,"href":"https:\/\/nenadnoveljic.com\/blog\/wp-json\/wp\/v2\/posts\/912\/revisions\/939"}],"wp:attachment":[{"href":"https:\/\/nenadnoveljic.com\/blog\/wp-json\/wp\/v2\/media?parent=912"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/nenadnoveljic.com\/blog\/wp-json\/wp\/v2\/categories?post=912"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/nenadnoveljic.com\/blog\/wp-json\/wp\/v2\/tags?post=912"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}