SQL server, SQLServerPedia Syndication

SQLCAT whitepapers.

SQLCAT stands for SQL Server Customer Advisory Team. The SQLCAT blog contains amazing posts by Microsoft’s SQL engineers back in Redmond. Reading this blog will give you a lot of insider information about SQL Server.

One thing I love reading in particular is the whitepapers released by the team. They contain best practices and real-life user scenarios. The steps and settings outlined in these whitepapers offer deep insights and points to ponder.

Just reading the abstracts gives me goose bumps. For example: “This white paper explains how Microsoft® adCenter implemented a Microsoft SQL Server® 2008 Analysis Services Scalable Shared Database on EMC® Symmetrix VMAX™ storage. Leveraging TimeFinder® clones and Enterprise Flash Drives with the read-only feature of SQL Server 2008 Analysis Services allowed adCenter to dramatically scale out OLAP while maintaining SLAs and decreasing system outages.” Note that adCenter is what powers the ads on Bing search.

Do visit the blog and enjoy. Here is a link to a consolidated index of all the SQLCAT whitepapers.

 

 


SQL 2005 installation oddity on multicore processors.

There is an issue with installing SQL 2005 on systems that have an odd (as in not even) number of cores/processors. The problematic configurations are as follows:

  • The ratio between logical processors and physical sockets is not a power of 2. For example, the computer has a single socket together with a triple-core processor.
  • The number of physical cores is not a power of 2.
  • The total number of CPUs is an odd number. For example, there are seven or nine CPUs.

Note that these issues are fixed in SP2, but the catch-22 is that the SQL 2005 setup (non-Express editions) is not integrated. That is, you need to install SQL 2005 first and then upgrade to SP2. So how will you get to SP2 if you cannot install SQL 2005 at all? There are three methods mentioned by Microsoft:

  1. Copy the SQL 2005 setup and SP2 files to local folders. Expand the SP2 package. Point the SQL 2005 setup binary to the expanded package and run setup. Run the SP2 patch. Link.
  2. Temporarily change the number of logical processors to one. Start>Run>Msconfig>BOOT.INI>Advanced options>/NUMPROC=1. Install SQL 2005 and upgrade to SP2. Change the processor count back. Link.
  3. Run SQL 2005 setup. When the setup fails, you get a ‘Retry’ option. Replace the Sqlservr.exe and Sqlos.dll files in the BINN folder with the same files from the SP2 package. Retry. Link.

Note: I am not sure if the author meant ‘power of 2’ (1, 2, 4, 8, 16, 32) or ‘multiple of 2’ (2, 4, 6, 8, 10), because I have successfully installed SQL 2005 on a 24-core (physical) machine.


SQL backup mirroring (and splitting).

Did you know that you can mirror your SQL backups? This is handy for those times when you need multiple backup copies, such as one local copy and another on a network share.

You can even split backups over multiple files. If you have an I/O bottleneck during backup, or if you are low on disk space, this backup-splitting feature will come in handy. Below you can see my tests.

The backup file is 5.2 GB in size.

1)     Doing a normal backup:

backup database backupdb_dev to disk='c:\dba\db_normal_bakup.bak'

Processed 656248 pages for database 'backupdb_dev', file 'Backupdb' on file 1.
Processed 2 pages for database 'backupdb_dev', file 'Backupdb_log' on file 1.
BACKUP DATABASE successfully processed 656250 pages in 67.109 seconds (76.397 MB/sec).

 

2)     Backing up to two files on the same spindle:

backup database backupdb_dev
to disk='c:\dba\db_bakup_spindle1_file1.bak',
disk='c:\dba\db_bakup_spindle1_file2.bak'

Processed 656248 pages for database 'backupdb_dev', file 'Backupdb' on file 1.

Processed 1 pages for database 'backupdb_dev', file 'Backupdb_log' on file 1.
BACKUP DATABASE successfully processed 656249 pages in 63.322 seconds (80.966 MB/sec).
Each backup file was 2.6 GB in size.

 

3)     Backing up to two files on two different spindles:

backup database backupdb_dev
to disk='c:\dba\db_bakup_spindle1_file1.bak',
disk='f:\dba\db_bakup_spindle2_file1.bak'

Processed 656248 pages for database 'backupdb_dev', file 'Backupdb' on file 1.
Processed 1 pages for database 'backupdb_dev', file 'Backupdb_log' on file 1.
BACKUP DATABASE successfully processed 656249 pages in 62.775 seconds (81.671 MB/sec).

 

Theoretically, backing up to two different spindles should be faster than writing two files to the same spindle, but in my tests the throughput was nearly identical (80.97 vs. 81.67 MB/sec), so the disks were probably not the bottleneck here.

4)     Doing a backup and mirroring to the same spindle:

backup database backupdb_dev
to disk='c:\dba\db_backup_mirror1.bak'
mirror to disk='c:\dba\db_backup_mirror2.bak'
with format

Processed 656248 pages for database 'backupdb_dev', file 'Backupdb' on file 1.
Processed 1 pages for database 'backupdb_dev', file 'Backupdb_log' on file 1.
BACKUP DATABASE successfully processed 656249 pages in 95.500 seconds (53.685 MB/sec).

Both files were 5.2 GB in size. No surprise there. The backup took only about 50% longer.

5)     Doing a backup and mirroring to different spindles:

backup database backupdb_dev
to disk='c:\dba\db_backup_mirror1.bak'
mirror to disk='f:\dba\db_backup_mirror2.bak'
with format

Processed 656248 pages for database 'backupdb_dev', file 'Backupdb' on file 1.

Processed 1 pages for database 'backupdb_dev', file 'Backupdb_log' on file 1.
BACKUP DATABASE successfully processed 656249 pages in 97.538 seconds (52.563 MB/sec).
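
You can also combine the two features: a striped (split) backup set can itself be mirrored, as long as each mirror lists the same number of devices as the primary set. Below is a minimal sketch using the same hypothetical paths as my tests above; note that, as far as I know, backup mirroring (MIRROR TO) is an Enterprise-only feature.

```sql
-- Stripe the backup across two files and keep a complete mirror of the set.
-- The mirror clause must name the same number of devices as the primary set.
backup database backupdb_dev
to disk='c:\dba\db_backup_stripe1.bak',
   disk='f:\dba\db_backup_stripe2.bak'
mirror to disk='c:\dba\db_mirror_stripe1.bak',
          disk='f:\dba\db_mirror_stripe2.bak'
with format
```

To restore, you supply all the files of any one complete set (either both stripes of the primary, or both stripes of the mirror).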

SQL Server edition comparisons.

I had created an old post about the Enterprise-only features of SQL Server. Whenever I found an interesting feature, I added it to that old post. Today, I found an MSDN link with a feature comparison table that summarizes everything I was trying to capture in that post.

The link above is for the 2008 R2 version. You can switch between 2008, 2008 R2, and 2012.

As of 2012, the different editions of SQL Server are:

  • Enterprise
  • Business Intelligence
  • Standard
  • Web
  • Express with Advanced Services
  • Express with Tools
  • Express

I thought PDW (Parallel Data Warehouse) was the topmost edition, but apparently it is marketed as an appliance.


Implicit and explicit transactions.

There are two ways a query can be committed in SQL Server: implicitly and explicitly. Queries wrapped in begin tran/commit statements are performing explicit transactions. Queries without begin tran/commit statements are committed implicitly (strictly speaking, SQL Server calls this autocommit mode). There are performance implications to not explicitly committing a transaction. Example below:

-- Create the table
CREATE table testins
(
id int constraint cix_id_testins primary key clustered identity(1,1),
some_data varchar(25) null
)

-- implicit transactions, tran1
SET NOCOUNT on
DECLARE @counter int
SET @counter =1
WHILE (@counter < 100000)
begin
    insert into testins VALUES ('Test Data')
    set @counter=@counter+1
end

-- explicit transactions, tran2
SET NOCOUNT on
DECLARE @counter int
SET @counter =1
BEGIN tran
    WHILE (@counter < 100000)
    begin
        insert into testins VALUES ('Test Data')
        set @counter=@counter+1
    end
commit

The implicit transaction (tran1) took 49 seconds to complete, whereas the explicit transaction (tran2) completed in 1 second. The difference in this scenario has to do with log buffer flushes. With the implicit transactions, the log blocks in memory are flushed to disk (the log file) at every commit, i.e. after every single insert, which is far more often than with the explicit transaction, where the log blocks are flushed only when they fill up (and at the final commit). To learn more about SQL Server transaction log files, check out these links: log file logical architecture, log file physical architecture.
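
If you want to observe this effect yourself, one option is to sample the cumulative log-flush counter before and after each batch and subtract. This is a sketch using the standard SQL Server performance-counter DMV; the database name is the one from the example above.

```sql
-- 'Log Flushes/sec' is (despite the name) a cumulative count; sample it
-- before and after a batch and take the difference to see how many log
-- flushes the batch caused.
SELECT instance_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name = 'Log Flushes/sec'
  AND instance_name = 'backupdb_dev'  -- substitute your database name
```

Run the implicit and explicit loops back to back and compare the deltas; the implicit version should show roughly one flush per insert.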


Finding the size of a data type.

When doing capacity planning, it is important to know the space that will be required for a row of data. This also helps with performance: if you can change your schema to fit more rows on a buffer page (without degrading other aspects), you will get better performance.

One function you can use is DATALENGTH, which shows the number of bytes used to represent any expression. Consider the data/expression '20130101', which is a date. To see the effect of different data types on size, see below.

SELECT DATALENGTH(CAST('20130101' as char))  -- 30 bytes
SELECT DATALENGTH(CAST('20130101' as binary))  -- 30 bytes
SELECT DATALENGTH(CAST('20130101' as nvarchar))  -- 16 bytes

SELECT DATALENGTH(CAST('20130101' as char(8)))  -- 8 bytes
SELECT DATALENGTH(CAST('20130101' as varbinary))  -- 8 bytes
SELECT DATALENGTH(CAST('20130101' as text))  -- 8 bytes
SELECT DATALENGTH(CAST('20130101' as varchar))  -- 8 bytes
SELECT DATALENGTH(CAST('20130101' as image))  -- 8 bytes

SELECT DATALENGTH(CAST('20130101' as int))  -- 4 bytes
SELECT DATALENGTH(CAST('20130101' as date))  -- 3 bytes

You can see how 10 times as much data could be stored if the column were a date type rather than the default-length char type.


Breaking down dates using DATEPART.

You may have written ad-hoc queries to pull data for a certain date range. As the report gets popular over time and the date range expands, you may be asked to break the report down further, for example per month or quarter.

The magical function that helps here is DATEPART. With the DATEPART function, you can extract (otherwise tricky) calendar references like quarter, month, week, weekday, etc. without having to guess the correct condition in your where clause.

Below is an example of how this function is used to break down a yearly report into quarters.

SELECT subq1.QuarterPart, sum(subq1.OrderPrice) as QuarterTotal from
(
SELECT
CASE DATEPART(quarter, OrderCompletedDate)
    when 1 then 'Q1'
    when 2 then 'Q2'
    when 3 then 'Q3'
    when 4 then 'Q4'
end as QuarterPart
,OrderPrice
from Orders where OrderCompletedDate >= '2012-01-01' and OrderCompletedDate < '2013-01-01'
)subq1
GROUP by subq1.QuarterPart
order by subq1.QuarterPart asc

Compare this with having to figure out and write where clauses like "where OrderCompletedDate >= '2012-01-01' and OrderCompletedDate < '2012-04-01'". Now imagine doing this for a weekly breakdown.
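
For a weekly breakdown, the same pattern works with DATEPART(week, ...). Here is a sketch against the same hypothetical Orders table as above; note that DATEPART's week numbering depends on the SET DATEFIRST setting and is not the ISO week.

```sql
-- Group the same date range by week number instead of quarter.
-- Week numbering follows SET DATEFIRST, so results may differ across sessions.
SELECT DATEPART(week, OrderCompletedDate) as WeekPart,
       sum(OrderPrice) as WeekTotal
from Orders
where OrderCompletedDate >= '2012-01-01' and OrderCompletedDate < '2013-01-01'
GROUP by DATEPART(week, OrderCompletedDate)
order by WeekPart asc
```

No hand-written boundary conditions for 52 weeks; DATEPART does the bucketing for you.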
