Do varchar(max), nvarchar(max) and varbinary(max) columns affect select queries?

















Consider this table:



create table Books
(
Id bigint not null primary key identity(1, 1),
UniqueToken varchar(100) not null,
[Text] nvarchar(max) not null
)


Let's imagine that we have over 100,000 books in this table.



Now we're given data for 10,000 books to insert into this table, some of which are duplicates. So we need to filter out the duplicates first, and then insert the new books.



One way to check for duplicates is this:



select UniqueToken
from Books
where UniqueToken in
(
'first unique token',
'second unique token'
-- 10,000 items here
)


Does the existence of the Text column affect this query's performance? If so, how can we optimize it?



P.S.
I have the same structure for some other data, and it's not performing well. A friend told me that I should break my table into two tables, as follows:



create table BookUniqueTokens 
(
Id bigint not null primary key identity(1, 1),
UniqueToken varchar(100)
)

create table Books
(
Id bigint not null primary key,
[Text] nvarchar(max)
)


And I have to run my duplicate-finding algorithm on the first table only, and then insert data into both of them. He claimed that this way performance gets much better, because the tables are physically separate, and that the [Text] column affects any select query on the UniqueToken column.










sql-server performance






asked yesterday









Saeed Neamati

  • Is there a nonclustered index on UniqueToken? Also, I would not advise an IN with 10k items; I would store them in a temp table and filter the UniqueTokens with this temporary table. More on that here. – Randi Vertongen, yesterday

  • 1) If you are checking for duplicates, why would you include the Text column in the query? 2) Can you please update the question to include a few examples of values stored in the UniqueToken column? If you don't want to share actual company data, modify it, but keep the format the same. – Solomon Rutzky, yesterday

  • @RandiVertongen, yes there is a nonclustered index on UniqueToken. – Saeed Neamati, yesterday

  • @SolomonRutzky, I'm retrieving existing values from the database, to be excluded inside the application code. – Saeed Neamati, yesterday

  • @SaeedNeamati I added an edit based on the NC index existing. If the query in the question is the one that needs to be optimized, and the NC index does not have the Text column included, then I would look at the IN for query optimization. There are better ways to find duplicate data. – Randi Vertongen, yesterday












1 Answer


















Examples



Consider your query with 8 filter predicates in your IN clause on a dataset of 10K records.



select UniqueToken
from Books
where UniqueToken in
(
'Unique token 1',
'Unique token 2',
'Unique token 3',
'Unique token 4',
'Unique token 5',
'Unique token 6',
'Unique token 9999',
'Unique token 5000'
-- 10,000 items here
);


A clustered index scan is used; there are no other indexes present on this test table:



[Execution plan: clustered index scan]



With a data size of 216 Bytes.



You should also note how, even with just 8 values, the OR filters stack up.



The reads that happened on this table:



[STATISTICS IO output: 99 logical reads on the Books table]



Credits to statisticsparser.
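
A minimal sketch of how these read counts can be captured yourself (the message output is what statisticsparser then formats into the tables shown here):

SET STATISTICS IO ON;
SET STATISTICS TIME ON;

select UniqueToken
from Books
where UniqueToken in ('Unique token 1', 'Unique token 2' /* , ... */);

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;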



When you include the Text column in the select part of your query, the actual data size changes drastically:



select UniqueToken,Text
from Books
where UniqueToken in
(
'Unique token 1',
'Unique token 2',
'Unique token 3',
'Unique token 4',
'Unique token 5',
'Unique token 6',
'Unique token 9999',
'Unique token 5000'
-- 10,000 items here
);


Again, a clustered index scan is used, this time with a residual predicate:



[Execution plan: clustered index scan with residual predicate]



But with a dataset of 32KB.



And there are almost 1,000 LOB logical reads:



[STATISTICS IO output: almost 1,000 LOB logical reads]



Now we create the two tables in question and fill them up with the same 10k records, then execute the same select without Text. Remember that we had 99 logical reads when using the Books table.



select UniqueToken
from BookUniqueTokens
where UniqueToken in
(
'Unique token 1',
'Unique token 2',
'Unique token 3',
'Unique token 4',
'Unique token 5',
'Unique token 6',
'Unique token 9999',
'Unique token 5000'
-- 10,000 items here
)


The reads on BookUniqueTokens are lower, 67 instead of 99.



[STATISTICS IO output: 67 logical reads on BookUniqueTokens]



We can track that back to the number of pages in the original Books table and in the new table without the Text column.



Original Books table:



[Page count of the original Books table]



New BookUniqueTokens table:



[Page count of the new BookUniqueTokens table]



So, all the pages + (2 overhead pages?) are read from the clustered index.



Why is there a difference, and why is the difference not bigger? After all, the data size difference is huge (LOB data vs. no LOB data).



Books Data space



[Data space used by the Books table]



BooksWithText Data space



[Data space used by the BooksWithText table]



The reason for this is ROW_OVERFLOW_DATA.



When data gets bigger than 8 KB, the data is stored as ROW_OVERFLOW_DATA on different pages.



Ok, if lob data is stored on different pages, why are the page sizes of these two clustered indexes not the same?



Because of the 24-byte pointer added to the clustered index to track each of these pages.
After all, SQL Server needs to know where it can find the LOB data.



Source
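
To see this breakdown yourself, here is a sketch of a query against the standard catalog views (table names as used above) that splits page counts per allocation unit type, so the in-row pages can be compared with the LOB / row-overflow pages:

-- Page counts per allocation unit type for the test tables.
-- IN_ROW_DATA holds the clustered index data pages; LOB_DATA and
-- ROW_OVERFLOW_DATA hold the values that were pushed off those pages.
select o.name       as table_name,
       au.type_desc as allocation_unit,
       au.total_pages,
       au.used_pages
from sys.allocation_units as au
join sys.partitions as p
    on au.container_id in (p.partition_id, p.hobt_id)
join sys.objects as o
    on p.object_id = o.object_id
where o.name in ('Books', 'BookUniqueTokens')
order by o.name, au.type_desc;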




To answer your questions




He claimed that [Text] column affects any select query on the
UniqueToken column.




And




Does the existence of the Text column affect this query's performance? If so, how can we optimize it?




If the data stored is actually LOB data, and the query provided above is used:



It does bring some overhead due to the 24-byte pointers.



Provided the executions per minute are not crazy high, I would say that this is negligible, even with 100K records.



Remember that this overhead only happens if an index that includes Text is used, such as the clustered index.



But, what if the clustered index scan is used, and the lob data does not exceed 8kb?



If the data does not exceed 8 KB and you have no index on UniqueToken, the overhead could be bigger, even when not selecting the Text column.



Logical reads on 10k records when Text is only 137 characters long (all records):




Table 'Books2'. Scan count 1, logical reads 419




Due to all this extra data being on the clustered index pages.



Again, an index on UniqueToken (without including the Text column) will resolve this issue.



As pointed out by @David Browne - Microsoft, you could also store the data off-row, so as not to add this overhead on the clustered index when not selecting the Text column.




Also, if you do want the text stored off-row, you can force that
without using a separate table. Just set the 'large value types out of
row' option with sp_tableoption.
docs.microsoft.com/en-us/sql/relational-databases




TL;DR



Based on the query given, indexing UniqueToken without including TEXT should resolve your troubles.
Additionally, I would use a temporary table or table type to do the filtering instead of the IN statement.
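
A minimal sketch of that temp-table approach (the #IncomingTokens name and the load step are illustrative, not from the question):

-- Stage the ~10,000 incoming tokens instead of building a 10,000-item IN list.
create table #IncomingTokens
(
    UniqueToken varchar(100) not null primary key
);

-- ...bulk insert / TVP-load the incoming tokens into #IncomingTokens here...

-- Tokens that already exist in Books (the duplicates to exclude in the application):
select i.UniqueToken
from #IncomingTokens as i
where exists (select 1 from Books as b where b.UniqueToken = i.UniqueToken);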



EDIT:




yes there is a nonclustered index on UniqueToken




Your example query is not touching the Text column, and based on the query this should be a covering index.



If we test this on the three tables we previously used (UniqueToken + LOB data, solely UniqueToken, UniqueToken + 137 characters of data in an nvarchar(max) column):



CREATE INDEX [IX_Books_UniqueToken] ON Books(UniqueToken);
CREATE INDEX [IX_BookUniqueTokens_UniqueToken] ON BookUniqueTokens(UniqueToken);
CREATE INDEX [IX_Books2_UniqueToken] ON Books2(UniqueToken);


The reads remain the same for these three tables, because the nonclustered index is used.



Table 'Books'. Scan count 8, logical reads 16, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

Table 'BookUniqueTokens'. Scan count 8, logical reads 16, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

Table 'Books2'. Scan count 8, logical reads 16, physical reads 5, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.


Additional details



by @David Browne - Microsoft




Also, if you do want the text stored off-row, you can force that
without using a separate table. Just set the 'large value types out of
row' option with sp_tableoption.
docs.microsoft.com/en-us/sql/relational-databases/




Remember that you have to rebuild your indexes for this to take effect on already populated data.
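
A sketch of that option applied to the Books table from the question (the follow-up statements are illustrative):

-- Store large value types (nvarchar(max) etc.) off-row for dbo.Books.
EXEC sp_tableoption 'dbo.Books', 'large value types out of row', 1;

-- Existing rows are not moved automatically; per the note above, rebuild
-- (and/or update the column) so already populated data is affected, e.g.:
-- ALTER INDEX ALL ON dbo.Books REBUILD;
-- UPDATE dbo.Books SET [Text] = [Text];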



By @Erik Darling



On



  • MAX Data Types Do WHAT?

Filtering on Lob data sucks.



  • Memory Grants and Data Size

Your memory grants might go through the roof when using bigger datatypes, impacting performance.






        edited yesterday

























        answered yesterday









Randi Vertongen
