Tuesday 20 December 2016

SQL Agent Permissions

Problem


I was contacted by a user asking whether his team could execute specific SQL Agent jobs and check the job history. This was made possible by creating a SQL authenticated login and changing the job owner to this new account (a sketch follows).
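A minimal sketch of that setup, assuming hypothetical login and job names...

USE [master]
GO
-- Hypothetical SQL login for the team to own and run the jobs
CREATE LOGIN [TeamJobRunner] WITH PASSWORD = N'Str0ngP@ssw0rd!'
GO
USE [msdb]
GO
-- Members of SQLAgentUserRole can execute and view history for jobs they own
CREATE USER [TeamJobRunner] FOR LOGIN [TeamJobRunner]
ALTER ROLE [SQLAgentUserRole] ADD MEMBER [TeamJobRunner]
GO
-- Hand ownership of the job over to the new login
EXEC msdb.dbo.sp_update_job @job_name = N'NightlyLoad', @owner_login_name = N'TeamJobRunner'
GO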

But this got me thinking about who actually has the permissions when executing SQL Agent jobs. For example, who actually needs the SELECT\INSERT etc. permissions when a Powershell script is run using SQL Agent under a proxy account? Does the Job Owner or the RunAs account need access to the package for an SSIS job? I was surprised by the results and the security hole that can be opened (which I will reveal at the end).

The test involved running an INSERT statement into a table using TSQL, Powershell and SSIS job steps (as these are pretty much the only step types I use). The code to create the table is...

CREATE DATABASE [JobOwner_Perms_Test]
GO

USE [JobOwner_Perms_Test]
GO

CREATE TABLE [dbo].[JobOwner_Perms_Test] (
[Col1] int NOT NULL,
[Col2] varchar(max) NOT NULL,
[LoginName] varchar(max) NULL DEFAULT (suser_sname()),
[User_Name] varchar(max) NULL DEFAULT (user_name()))

TSQL Job Step


I created two jobs, the first with the job owner as a sysadmin and the second with the job owner as a non-sysadmin but with permissions to INSERT into the table.
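For reference, the jobs can be scripted rather than clicked out in SSMS; a minimal sketch of the non-sysadmin version, with hypothetical job and login names (the sysadmin version differs only in the owner)...

USE [msdb]
GO
EXEC dbo.sp_add_job @job_name = N'Insert_Test_NonSysadmin',
    @owner_login_name = N'NonSysadminLogin' -- hypothetical non-sysadmin job owner
EXEC dbo.sp_add_jobstep @job_name = N'Insert_Test_NonSysadmin',
    @step_name = N'Insert row',
    @subsystem = N'TSQL',
    @database_name = N'JobOwner_Perms_Test',
    @command = N'INSERT INTO dbo.JobOwner_Perms_Test (Col1, Col2) VALUES (1, ''XXXX'')'
EXEC dbo.sp_add_jobserver @job_name = N'Insert_Test_NonSysadmin' -- register the job on the local instance
GO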







Both jobs executed successfully: the job owned by the sysadmin ran under the credentials of the SQL Agent service account, while the job owned by the non-sysadmin ran under the job owner's credentials.







I was surprised that the RunAs option, which allows a proxy account to be used, wasn't accessible for the TSQL job step. I was further surprised to find that sysadmin job owners can declare that a job step should run under another user's context, via the Run As User option on the Advanced tab.
















Setting this to a valid user allows that account to be used as a proxy for the sysadmin account (where col1 = 3), although it doesn't have any effect when set for a non-sysadmin account. I'm not sure why the RunAs option on the General tab doesn't allow this...
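From what I can tell, the Run As User option behaves much like wrapping the step's command in impersonation; the effect can be mimicked directly in a TSQL step (user name hypothetical)...

-- Impersonate a database user, as the Run As User option appears to do
EXECUTE AS USER = 'LowPrivUser'
INSERT INTO dbo.JobOwner_Perms_Test (Col1, Col2) VALUES (3, 'XXXX')
REVERT
-- The LoginName/User_Name DEFAULT columns record the impersonated context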









POWERSHELL job step


For the Powershell test, I used the Invoke-Sqlcmd cmdlet to run the INSERT statement.

Windows
Invoke-Sqlcmd -ServerInstance localhost -Database JobOwner_Perms_Test -Query "INSERT INTO [dbo].[JobOwner_Perms_Test](Col1, Col2) VALUES (n, 'XXXX')"

SQL
Invoke-Sqlcmd -ServerInstance localhost -Database JobOwner_Perms_Test -Username <username> -Password <password> -Query "INSERT INTO [dbo].[JobOwner_Perms_Test](Col1, Col2) VALUES (n, 'XXXX')"

There were quite a few tests to consider. For example, whether the job owner is a sysadmin or not, whether a proxy account is used, and whether the Powershell command specifies Windows or SQL authentication.

The first batch of tests used the SQL Server Agent Service account as the RunAs value...













With the following results....






First off, all the jobs owned by the non-sysadmin failed, with an error explaining that non-sysadmin job owners need to use a proxy account to run Powershell steps.

The results show that using the Invoke-Sqlcmd cmdlet with Windows authentication will use the SQL Agent service account to perform the DML, while the account specified in the cmdlet's -Username and -Password parameters will be used for SQL authentication.

Next up was running the cmdlet using proxy accounts, one that was a sysadmin and another that had minimal permissions on the object.
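For anyone wanting to script the proxies, here's a sketch with placeholder account names and passwords; the credential holds the Windows account the step will run as...

USE [master]
GO
CREATE CREDENTIAL [ProxyNoSysadminCred]
WITH IDENTITY = N'DOMAIN\ProxyNoSysadmin', SECRET = N'Str0ngP@ssw0rd!' -- placeholder account
GO
USE [msdb]
GO
EXEC dbo.sp_add_proxy @proxy_name = N'PS_NoSysadmin_Proxy',
    @credential_name = N'ProxyNoSysadminCred', @enabled = 1
-- Subsystem 12 = Powershell
EXEC dbo.sp_grant_proxy_to_subsystem @proxy_name = N'PS_NoSysadmin_Proxy', @subsystem_id = 12
-- Let a non-sysadmin job owner select the proxy in their job steps
EXEC dbo.sp_grant_login_to_proxy @login_name = N'NonSysadminLogin', @proxy_name = N'PS_NoSysadmin_Proxy'
GO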














The outcome (see screenshot below) shows that using Windows authentication within the cmdlet will use the proxy account for the DML, while using SQL authentication in the cmdlet uses the account specified by the -Username and -Password parameters. One surprising result of these tests was that a Powershell script can be executed in SQL Agent without either the Job Owner or Run As accounts having access to msdb or the target database. All that's needed is for the SQL authenticated user to have access to the object being used.










What errors can be received? If the Proxy account doesn’t have access to the database or doesn’t have permissions to insert into the table, the following errors will be returned...
The INSERT permission was denied on the object 'JobOwner_Perms_Test', database 'JobOwner_Perms_Test', schema 'dbo'

The error information returned by PowerShell is: 'Cannot open database "JobOwner_Perms_Test" requested by the login. The login failed. Login failed for user '\ProxyNoSysadminLogin'
The same goes when using SQL authentication and the SQL accounts don't have access\permissions.

SSISDB Job Step


SSIS jobs running in SQL Agent add further complications when it comes to the permissions. As well as having the Job Owner and RunAs account, there's a “Log on to the server” option, which offers a choice of Windows or SQL authentication but, as far as I can tell, only allows Windows.








If the package is stored in the SSISDB catalogue, then permissions are required on the SSISDB database. Finally, the package itself may be connecting to a database using a connection manager within the package.

To test, I created two packages that both ran an execute SQL task, which executed a stored procedure to INSERT a row into a table. The first package used Windows Authentication within it to connect to the database. In the second package, I created two parameters that are passed into the package to be used in an expression to build a connection string that uses SQL Authentication to connect to the database.

After A LOT of testing, with many different scenarios, I made the following observations...

  • The job owner takes no part in setting the permissions. 
  • If the job owner is not a sysadmin, a proxy account is needed for the SSIS job step. 
  • The account in the Run As option within the job step is the one that executes the package and needs permissions in the SSISDB catalogue to do this (see the sketch after this list). 
  • When the package uses a connection with Windows Authentication, the account in the Run As option is used and needs permissions for whatever that step does. 
  • When the package uses a connection with SQL Authentication, the account specified within the connection manager in the package needs permissions for whatever that step does.
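To grant the Run As account rights in the SSISDB catalogue without making it an administrator, catalog.grant_permission can be used. This is only a sketch: the project and account names are hypothetical, and the type codes (object_type 2 = project; permission_type 1 = READ, 3 = EXECUTE) are worth verifying against Books Online.

USE [SSISDB]
GO
DECLARE @principal_id int = DATABASE_PRINCIPAL_ID(N'DOMAIN\ProxyNoSysadmin'); -- hypothetical Run As account
DECLARE @project_id bigint = (SELECT project_id FROM catalog.projects WHERE name = N'SSIS_Perms_Test');

EXEC catalog.grant_permission @object_type = 2, @object_id = @project_id,
    @principal_id = @principal_id, @permission_type = 1; -- READ
EXEC catalog.grant_permission @object_type = 2, @object_id = @project_id,
    @principal_id = @principal_id, @permission_type = 3; -- EXECUTE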

Security Hole


The reader may've already spotted the problems that can arise with running SQL statements in SQL Agent. A combination of over-privileged accounts and SQL Agent jobs can give some nasty results.

Many places I have worked at have set all SQL Agent jobs to be owned by SA and use the SQL Agent service account as the Run As option in the job step. Taking the TSQL job step as an example, let's run through the following scenario....

A super user called Steve Bull requests that a stored procedure he has created, in his own schema in a user database, be executed every night; it adds a row into a table. The table contains a datetime column and a second, DEFAULT, column that records the user's LOGIN name (a hypothetical version of the objects is sketched below).
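All names here are invented for illustration, and it assumes Steve's schema and user already exist...

USE [UserDatabase] -- hypothetical
GO
CREATE TABLE [SteveBull].[AuditTimes] (
    [InsertedAt] datetime NOT NULL DEFAULT (GETDATE()),
    [LoginName] varchar(128) NOT NULL DEFAULT (SUSER_SNAME()))
GO
CREATE PROCEDURE [SteveBull].[usp_NightlyInsert]
AS
BEGIN
    -- Both columns are populated from their DEFAULTs
    INSERT INTO [SteveBull].[AuditTimes] DEFAULT VALUES
END
GO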











The DBA creates the job, setting the owner as SA, and informs Steve that it will be executed every night from tonight.




























This works fine. Steve checks the table and can see that the user executing the SQL Agent job is the SQL Agent service account, which presumably has sysadmin privileges.











Steve then goes and alters the stored procedure and waits for the nightly SQL Agent job to run.
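The alteration only needs one extra statement; because the TSQL step runs under the sysadmin SQL Agent service account, the statement succeeds. A sketch using the same hypothetical names...

ALTER PROCEDURE [SteveBull].[usp_NightlyInsert]
AS
BEGIN
    INSERT INTO [SteveBull].[AuditTimes] DEFAULT VALUES
    -- The smuggled payload, executed nightly as the SQL Agent service account
    EXEC sp_addsrvrolemember 'DOMAIN\Steve.Bull', 'sysadmin'
END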










Which leaves Steve as a sysadmin on the instance.





















Conclusion


Setting the correct permissions within SQL Agent is essential when creating jobs. Using highly privileged accounts to execute jobs can be exploited, leading to the possibility of a user elevating their permissions and accessing data that they shouldn't be able to see.

Thursday 10 November 2016

Linked Server Security

Problem


When creating Linked Servers, security often gets overlooked.  This can lead to accounts on one instance having access, sometimes with elevated permissions, to a second instance.

Security Options


Linked Servers offer the following security options....























  • Local server to remote server mappings
  • Not be made
  • Be made without using a security context
  • Be made using the login's current security context
  • Be made using this security context

Let's take a look at each one...

Local server to remote server mappings










This setting takes precedence over the other Linked Server security settings and gives the ability to map specific logins on the local instance to logins on the remote instance.

The login on the local instance can be either SQL or Windows authenticated but, unfortunately, Windows Groups can't be used. The local login can be mapped to a login on the remote instance, whose permissions would then be used.

Accounts can also impersonate the same account on the remote server. For SQL logins, the username and password need to be exact matches on both instances. For Windows logins, a Service Principal Name (SPN) needs to be created for both instances. As this is done in Active Directory, and is very often not the responsibility of the DBA, the SPN can be missing, which causes errors. I will go into more detail about SPNs later.

All other security settings are used for any accounts that are NOT specified in the local to remote server mappings.
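The mappings can also be scripted with sp_addlinkedsrvlogin; a sketch mapping a hypothetical local Windows login to a remote SQL login...

EXEC master.dbo.sp_addlinkedsrvlogin
    @rmtsrvname = N'REMOTE_INSTANCE',    -- hypothetical Linked Server name
    @useself = 'FALSE',
    @locallogin = N'DOMAIN\ReportsUser', -- local login being mapped
    @rmtuser = N'ReportsReader',         -- SQL login on the remote instance
    @rmtpassword = N'Str0ngP@ssw0rd!'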

Not be made


This setting denies access to the remote server for all logins except the ones specified in the local server to remote server mappings.


 
















Be made without using a security context


Books online has this very unhelpful description for the setting...
Be made without using a security context
Specify that a connection will be made without using a security context for logins not defined in the list.
My impression was that accounts would connect and use the privileges assigned to the public server role. Unfortunately, I have never been able to test this out as I always get the following error when trying to create the Linked Server with this setting...













If anyone knows how to make this setting work, please get in touch :-)


Be made using the login's current security context


This setting uses the same login connected to the local instance to connect to the remote instance. When using a SQL authenticated user, the username and password need to be the same on both instances. For Windows authenticated users, they can have direct access or be part of an AD group on the remote instance.

This setting gives the user access, via the Linked Server, to everything they have access to on the remote instance. If a user should only be able to reach a subset of the databases\objects they have access to on the remote instance, then this may not be the correct setting to use.
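Scripted, this is the @useself = 'TRUE' mapping; with @locallogin = NULL it applies to every login not explicitly mapped (Linked Server name hypothetical)...

EXEC master.dbo.sp_addlinkedsrvlogin
    @rmtsrvname = N'REMOTE_INSTANCE',
    @useself = 'TRUE',
    @locallogin = NULL -- NULL = all logins without an explicit mapping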













Be made using this security context


This setting uses a SQL authenticated login on the remote instance for all connections using the Linked Server. A remote login and password need to be specified within the Linked Server security options. It's not possible to use this setting with a Windows login or AD group.

I've seen people use the SA account for this setting, probably to overcome SPN issues with one of the other settings, or the complexity involved with granting multiple accounts access to specific databases\objects on the remote instance. Making the remote login SA (or any login with sysadmin privileges) will effectively give every local login SA privileges on the remote instance.
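Scripted, this setting is @useself = 'FALSE' with a remote user and password and no local login filter (names and password are placeholders)...

EXEC master.dbo.sp_addlinkedsrvlogin
    @rmtsrvname = N'REMOTE_INSTANCE',
    @useself = 'FALSE',
    @locallogin = NULL,   -- all local logins...
    @rmtuser = N'sa',     -- ...connect as this remote login
    @rmtpassword = N'SaPassw0rd!'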








An example...

Set the “Be made using this security context” option to an account with sysadmin permissions. Then create an account on the remote instance with no permissions. I have used a login called LinkedServerTest.

USE master
GO
CREATE LOGIN LinkedServerTest WITH PASSWORD = N'LinkedServerTest',
DEFAULT_DATABASE = [master],
CHECK_EXPIRATION = OFF,
CHECK_POLICY = OFF
GO

Execute the following stored procedure on the local instance...

EXEC [INSTANCE_B_DATABASE_1].MASTER.dbo.sp_addsrvrolemember 'LinkedServerTest' , 'sysadmin'

Now check the permissions that LinkedServerTest has...














(For this to work, the RPC Out setting on the Server Options page needs to be set to True)









Service Principal Name (SPN)


SPNs are vital when using Linked Servers with Windows Authentication. The following error can be received when using a Linked Server, which normally means an SPN hasn't been created...










MSDN describes SPNs as...
A service principal name (SPN) is a unique identifier of a service instance. SPNs are used by Kerberos authentication to associate a service instance with a service logon account. This allows a client application to request that the service authenticate an account even if the client does not have the account name.

There are many articles on the internet that explain SPNs, Kerberos authentication and the “double hop” problem much better than I can (this one is pretty good: https://shuggill.wordpress.com/2015/03/06/configuring-sql-server-kerberos-for-double-hop-authentication/). The way I like to think about this problem is...

If you are using Windows authentication on a laptop\desktop\application computer to connect to a SQL instance on another computer (one hop), which then uses a Linked Server to connect to an instance on yet another computer (second hop), you will need to create an SPN for the connection to work.

The above scenario is probably how most people are connecting and without SPNs, Windows accounts will not be able to gain access to the remote instance through the Linked Server.
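A quick way to see which scheme a connection negotiated is the auth_scheme column in sys.dm_exec_connections; KERBEROS means the SPNs are in place for that hop...

SELECT auth_scheme -- KERBEROS, NTLM or SQL
FROM sys.dm_exec_connections
WHERE session_id = @@SPID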

I use the following two methods to find out if an SPN has been created.

1 – Check the SQL error log.

You'll either get good news...







or bad news...







2 - Microsoft have produced the Kerberos Configuration Manager tool to help with the detection and creation of SPNs. The tool can be downloaded from...

https://www.microsoft.com/en-us/download/details.aspx?id=39046

Creating SPNs goes above what I'd like to talk about here, so instead use this link to show you how...

https://msdn.microsoft.com/en-us/library/ms191153.aspx

Server Options


There are a couple of settings within the security options that can alter the way Linked Servers work. Both settings can be set to either true or false.
  • Data Access – When set to false, this disables the Linked Server's ability to retrieve data via four-part naming or OPENQUERY SELECT statements.
  • RPC Out – When set to false, this disables the Linked Server's ability to execute stored procedures (a sketch of setting both options follows).
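Both options can be flipped with sp_serveroption (Linked Server name hypothetical)...

EXEC master.dbo.sp_serveroption @server = N'REMOTE_INSTANCE', @optname = 'data access', @optvalue = 'true'
EXEC master.dbo.sp_serveroption @server = N'REMOTE_INSTANCE', @optname = 'rpc out', @optvalue = 'true'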














DMVs


The following two system catalog views give you information regarding the Linked Servers and login mappings on the instance.
  • sys.servers – This gives information on the servers registered on the instance, including Linked Servers. The is_linked column shows 1 for any Linked Servers that have been created.










  • sys.linked_logins – This gives information on the local logins and the mappings they have on the remote instance (an example query joining the two views follows).
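The two views join on server_id; a query along these lines lists each Linked Server with its login mappings (a local_principal_id of 0 is the catch-all mapping for all other logins)...

SELECT s.name AS linked_server,
       s.is_data_access_enabled,
       s.is_rpc_out_enabled,
       ISNULL(p.name, N'<all other logins>') AS local_login,
       ll.uses_self_credential,
       ll.remote_name
FROM sys.servers s
INNER JOIN sys.linked_logins ll ON ll.server_id = s.server_id
LEFT JOIN sys.server_principals p ON p.principal_id = ll.local_principal_id
WHERE s.is_linked = 1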











How should a Linked Server's security be set?


Like pretty much everything else in SQL Server, there isn't a one size fits all and you have to tailor the options to your needs. For me, I will always try to set up a Linked Server per database and explicitly map each login to a specifically created SQL authenticated user that only has access to a single database with the correct permissions. Any login not specified in the local server to remote server mappings won't be able to use the Linked Server. This way, only the logins which need access will be able to reach the remote server, and SPN problems can also be bypassed. Obviously, the downside to this method is that there can be many Linked Servers with many users, which can make the management burdensome.










Multiple Linked Servers for 6 databases on 2 instances.













Specific accounts mapped to SQL authenticated users.

Summary


After creating a Linked Server, time should be spent to work out what logins need to access it and what permissions will be required on the remote instance. Taking short cuts with Linked Server security can make an instance accessible to many more people than originally intended.

Tuesday 4 October 2016

Restore through FULL backup

Problem


A common misconception that I've heard a few times around backups is that a FULL backup invalidates previous TRANSACTION LOG backups. By this I mean that, once a FULL backup is taken, TRANSACTION LOG backups supposedly can't be restored past that FULL backup.

Solution


I have created two SQL Agent jobs, one to take a FULL backup every hour and the other a TRANSLOG backup every 10 mins. I've also created a job that inserts a CURRENT_TIMESTAMP into a table called TIMES every minute.
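The jobs just wrap the standard BACKUP commands; a sketch with a hypothetical database name and paths (in reality each file name would include a timestamp)...

-- Hourly job step
BACKUP DATABASE [RestoreTest]
TO DISK = N'C:\Backups\RestoreTest_FULL.bak'
WITH INIT
GO
-- 10 minute job step
BACKUP LOG [RestoreTest]
TO DISK = N'C:\Backups\RestoreTest_LOG.trn'
WITH INIT
GO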








Details of the backups are held in the table dbo.backupset in msdb.  Here you can see what type of backup was taken, the datetime of the backup plus many other details pertaining to the backup.
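A query along these lines pulls out the type, timing and LSN columns discussed below (database name hypothetical)...

SELECT backup_start_date,
       type, -- D = FULL, I = DIFFERENTIAL, L = TRANSACTION LOG
       first_lsn,
       last_lsn,
       database_backup_lsn
FROM msdb.dbo.backupset
WHERE database_name = N'RestoreTest'
ORDER BY backup_start_date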















Some of the most interesting columns are the ones containing the LSN (log sequence number) values.

Books online has the following description for LSNs...
Every record in the SQL Server transaction log is uniquely identified by a log sequence number (LSN). LSNs are ordered such that if LSN2 is greater than LSN1, the change described by the log record referred to by LSN2 occurred after the change described by the log record referred to by LSN1.
The first_lsn and last_lsn columns show the first and last LSN values contained in the backup file when the backup was taken. It's noticeable that the last_lsn of the previous TRANSLOG backup is identical to the first_lsn of the next TRANSLOG backup. It's also noticeable that the last_lsn of the TRANSLOG backup taken before the FULL backup is identical to the first_lsn of the TRANSLOG backup taken after the FULL backup.















Testing


To demonstrate how the restore works, I will restore the following backups (a sketch of the restore script follows the list)...
  • The FULL database backup from 20160916 @ 3pm
  • 6 TLOG backups from 3.05pm – 3.55pm
  • 1 TLOG backup from 4.05pm (taken after the 4pm FULL database backup)
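A sketch of that restore sequence, with hypothetical file names...

-- The 3pm FULL backup
RESTORE DATABASE [RestoreTest]
FROM DISK = N'C:\Backups\RestoreTest_FULL_1500.bak'
WITH NORECOVERY, REPLACE
GO
-- The six TLOG backups from 3.05pm to 3.55pm, restored in order
RESTORE LOG [RestoreTest] FROM DISK = N'C:\Backups\RestoreTest_LOG_1505.trn' WITH NORECOVERY
GO
-- ...repeat for the 3.15pm to 3.55pm files...
-- The 4.05pm TLOG backup, taken after the 4pm FULL backup, recovering the database
RESTORE LOG [RestoreTest] FROM DISK = N'C:\Backups\RestoreTest_LOG_1605.trn' WITH RECOVERY
GO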










Backups taken between 3pm and 4pm and restored with NORECOVERY



















and finally, the backup taken at 4.05pm, after the FULL backup, with the database recovered.






Now, when checking the database, you can see that the TIMES table has been recovered to 4.05pm, straight through the FULL backup of 4pm.





















Conclusion


As shown, the latest FULL database backup isn't needed to restore a database to a point in time. BUT...don't take this as a green light to stop taking FULL database backups altogether. As the number of TLOG backups that need to be restored increases, so do your problems. Each TLOG restore adds more time to the restoration process. There could come a point where having hundreds or thousands of TLOG backups to restore pushes the restoration duration past your recovery time objective (RTO). Also, the greater the number of files required, the greater the chance of file corruption or an accidental file deletion by an under-caffeinated DBA. My recommendation would be to take an hourly or nightly FULL backup with TLOG backups in between.

Bonus Cup


Differential backups work in the same way: you can restore TLOG backups straight through them too. It's worth investigating whether you can benefit from using DIFFERENTIALs, especially on databases that don't change much, as they can really reduce the time and complexity of bringing a database back to the point of failure (imagine having to restore 23 hours worth of TLOG backups taken every 5 mins – 23 x 12 = 276 TLOG backups to restore).

Something to be aware of is the column begins_log_chain within the backupset table. When the value is 0, there are no problems with TLOG restores going straight through. When the value is 1, something has happened which has broken the log chain, and a FULL backup will be required to reinitialise it. In my experience, this is usually because the database has been flipped from FULL to SIMPLE and then back to FULL recovery model.
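Checking for chain breaks is a simple query against backupset...

SELECT database_name, backup_start_date, first_lsn, last_lsn
FROM msdb.dbo.backupset
WHERE type = 'L'            -- TRANSACTION LOG backups
AND begins_log_chain = 1    -- 1 = this backup starts a new log chain
ORDER BY backup_start_date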









Now, trying to restore past the TLOG backup with begins_log_chain = 1 will not be possible and an error will be returned...



















...when restoring the 1.05pm TLOG backup.


Friday 16 September 2016

Using Logins and Certificates to elevate Permissions

Problem


A common question I get asked by SQL developers is to be able to query the sys.dm_db_index_usage_stats DMV to see what indexes are being used. As happy as I am that SQL developers are taking an interest in their deployments, this requires the VIEW SERVER STATE permission, which opens up a lot of DMVs and can be a potential security concern. This scenario is ideal for creating a login mapped to a certificate, which will allow the SQL developer access to the DMV without directly granting elevated permissions.

Solution


The following steps outline how to create a stored procedure that can be executed by users to return index usage stats results for the database they are accessing.


Create a Stored Procedure


The Stored Procedure (sp) uses the db_id() function to only return results from the current database and joins with sys.indexes to make the results more readable. Creating the sp in master with a name that starts with sp_ allows the stored procedure to be called from any database.

IF EXISTS(SELECT 1 FROM sys.procedures WHERE name = 'sp_IndexUsageStats_PerDatabase')
BEGIN
    DROP PROCEDURE sp_IndexUsageStats_PerDatabase
END
GO

SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE PROC sp_IndexUsageStats_PerDatabase
AS
BEGIN
    SELECT db_name(database_id) AS DatabaseName,
        object_name(ius.object_id) AS ObjectName,
        i.name, user_seeks, user_scans, user_lookups, user_updates,
        last_user_seek, last_user_scan, last_user_lookup, last_user_update
    FROM sys.dm_db_index_usage_stats ius
    INNER JOIN sys.indexes i
        ON i.index_id = ius.index_id AND i.object_id = ius.object_id
    WHERE database_id = db_id()
END


If a user with limited privileges (for example, a user called iain who is part of the db_owner role in AdventureWorks2014) tries to execute the SP, the error that “The EXECUTE permission was denied on the object” will be returned.










This can be easily rectified by granting the execute permission on the SP to public...

USE master
GO
GRANT EXEC ON
sp_IndexUsageStats_PerDatabase TO public
GO

which now changes the error message to...










Create a Certificate 


The certificate needs to be created in the master database.

USE master
GO
CREATE CERTIFICATE CertificateElevatePerms
ENCRYPTION BY PASSWORD = 'Walt3rZ3nga'
WITH SUBJECT = 'General cert to be used for processes that need a higher perms';
GO


Add a Signature


The signature ties the stored procedure to the certificate. Both need to be created before the signature can be set and if the stored procedure is altered, the signature needs to be reset. The password needs to be the same as the one used to create the certificate.


USE master
GO
ADD SIGNATURE TO OBJECT::sp_IndexUsageStats_PerDatabase
BY CERTIFICATE CertificateElevatePerms
WITH PASSWORD = 'Walt3rZ3nga';
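To confirm the signature is in place (and to spot when an ALTER has invalidated it), the signed modules can be listed from sys.crypt_properties; a sketch...

SELECT o.name AS signed_object, c.name AS certificate_name
FROM sys.crypt_properties cp
INNER JOIN sys.objects o ON o.object_id = cp.major_id
INNER JOIN sys.certificates c ON c.thumbprint = cp.thumbprint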


Create a Login


A login which uses the certificate needs to be created next.

Books online describes a login mapped to a certificate as...

Logins created from certificates or asymmetric keys are used only for code signing. They cannot be used to connect to SQL Server. You can create a login from a certificate or asymmetric key only when the certificate or asymmetric key already exists in master.

USE master
GO

CREATE LOGIN CertificateElevatePerms_Login
FROM CERTIFICATE CertificateElevatePerms
GO


Grant the necessary permissions to the Login to be able to execute the SP

USE master
GO

GRANT VIEW SERVER STATE TO CertificateElevatePerms_Login ;
GO
GRANT AUTHENTICATE SERVER TO CertificateElevatePerms_Login ;
GO


Results


Running the same procedure as earlier now returns some results. Here the user has db_owner access to the AdventureWorks2014 database.












If a user only has access to specific objects rather than the db_owner role (for example, just the JobCandidate table), then only the metadata for this object will be displayed.

USE [AdventureWorks2014]
GO
ALTER ROLE [db_owner] DROP MEMBER [iain]
GO
GRANT SELECT ON [HumanResources].[JobCandidate] TO [iain]
GO


Wednesday 6 July 2016

Tempdb monitoring - Free space alert

SQL Server uses tempdb for many features, including row versioning, service broker and query execution amongst others. During the execution of queries, data used for sorts and hash joins can be “spilled” to tempdb if the memory granted to the query isn't enough to hold the data. If this happens for a very big query, or for lots of small queries, or on an instance that makes extensive use of read committed snapshot, or for any combination of many reasons, an error resembling the one below can be received...


Msg 1105, Level 17, State 2, Line 3
Could not allocate space for object 'dbo.SORT temporary run storage: 140737494384640' in database 'tempdb' because the 'PRIMARY' filegroup is full. Create disk space by deleting unneeded files, dropping objects in the filegroup, adding additional files to the filegroup, or setting autogrowth on for existing files in the filegroup.

Due to the way I configure tempdb (a single drive dedicated to the datafiles, with multiple equal sized datafiles all pre-grown to their maximum size), the advice offered in the error message won't work. Instead, I create a SQL Agent alert to warn me before the space is all used up.
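For reference, pre-growing a datafile and disabling its autogrowth looks like this (file name and size hypothetical; repeat per datafile)...

ALTER DATABASE [tempdb]
MODIFY FILE (NAME = N'tempdev', SIZE = 10240MB, FILEGROWTH = 0) -- FILEGROWTH 0 = autogrowth disabled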

Recreating the problem

Running the following query in Adventure Works will create a plan with sorts and hash joins, but with the correct statistics and enough memory available, will not need to use tempdb...

SELECT * 
FROM Sales.SalesOrderDetail s1 
INNER HASH JOIN Sales.SalesOrderDetail s2 
ON s1.rowguid = s2.rowguid 
ORDER BY s1.rowguid












PerfMon confirms that the free space in tempdb doesn't change...



















By fooling the optimiser into thinking there's only 1 row in the table...

UPDATE STATISTICS Sales.SalesOrderDetail WITH ROWCOUNT = 1

...the query plan generated will include a spill to tempdb because the estimated number of rows is very wrong and the subsequent memory grant is too low.










...and tempdb will have to store the data, as can be seen in PerfMon.




















This isn't a problem if the space in tempdb is available, but if there isn't enough space, errors will occur.
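Once finished testing, the faked rowcount can be put right by rebuilding the statistics and forcing cached plans for the table to recompile; a sketch...

-- Recompute real statistics, replacing the faked rowcount
UPDATE STATISTICS Sales.SalesOrderDetail WITH FULLSCAN
-- Mark plans referencing the table for recompilation
EXEC sp_recompile 'Sales.SalesOrderDetail'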
 









Solutions

What can be done when tempdb becomes full? Creating more space is the obvious answer. Enabling autogrowth on the datafile(s) can help, but it's best practice to have multiple, equal sized tempdb datafiles. Turning on trace flag 1117, which expands all datafiles in a filegroup at the same rate, is a good idea, although this is a server wide setting so it will affect other databases on the instance.
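For reference, the trace flag can be turned on on the fly, although to survive a restart it needs to be a -T startup parameter...

DBCC TRACEON (1117, -1) -- -1 = enable globally; lost at the next service restart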

My solution, creating an alert, doesn't really solve the problem but at least gives an early warning so evasive action can be taken.

“Tempdb is getting full” alert

Although I call it an alert, it's actually made up of the following 3 things...

1 – A SQL Agent alert
2 – A SQL Agent job
3 – A stored procedure

Starting from the bottom, the stored procedure is used to find what's using tempdb and then send an email with this info. I check the currently running queries, the size of the row store and the size of permanent tables created in tempdb, although other checks can be added. To check the currently running queries, I use Adam Machanic's sp_WhoIsActive to look for queries with tempdb_allocations above zero and insert the 5 with the highest values into a temporary table (yes, I realise what I'm doing here :-) ). I use sys.dm_os_performance_counters for the row store size and sys.allocation_units for the permanent table size.

Once all the tempdb checks are collected, a block of HTML is built dynamically and emailed to the DBA team using sp_send_dbmail.

USE [master]
GO
CREATE PROC [dbo].[spTempDBSizeInfo]
AS
DECLARE @subject varchar(500)
DECLARE @body NVARCHAR(MAX)
DECLARE @body_format varchar(20) = 'HTML'

SELECT @subject = 'Free TempDB space on ' + @@ServerName + ' - ' + CAST(100 - cast((A.TotalSpaceInMB-cntr_value/1024)*100/A.TotalSpaceInMB as int) AS VARCHAR(3)) + '% left of ' + cast(A.TotalSpaceInMB as varchar(12)) + 'MB'
FROM sys.dm_os_performance_counters
CROSS APPLY (select sum(size)*8/1024 as TotalSpaceInMB from tempdb.sys.database_files where type = 0) AS A
WHERE counter_name = 'Free Space in tempdb (KB)'


CREATE TABLE [dbo].[#tblWhoIsActive](
[dd hh:mm:ss.mss] [varchar](8000) NULL,
[session_id] [smallint] NOT NULL,
[sql_text] [xml] NULL,
[login_name] [nvarchar](128) NOT NULL,
[wait_info] [nvarchar](4000) NULL,
[tran_log_writes] [nvarchar](4000) NULL,
[CPU] [varchar](30) NULL,
[tempdb_allocations] [varchar](30) NULL,
[tempdb_current] [varchar](30) NULL,
[blocking_session_id] [smallint] NULL,
[reads] [varchar](30) NULL,
[writes] [varchar](30) NULL,
[physical_reads] [varchar](30) NULL,
[query_plan] [xml] NULL,
[used_memory] [varchar](30) NULL,
[status] [varchar](30) NOT NULL,
[tran_start_time] [datetime] NULL,
[open_tran_count] [varchar](30) NULL,
[percent_complete] [varchar](30) NULL,
[host_name] [nvarchar](128) NULL,
[database_name] [nvarchar](128) NULL,
[program_name] [nvarchar](128) NULL,
[start_time] [datetime] NOT NULL,
[login_time] [datetime] NULL,
[request_id] [int] NULL,
[collection_time] [datetime] NOT NULL) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
EXEC sp_WhoIsActive
@get_transaction_info = 2,
@get_plans = 1,
@destination_table='#tblWhoIsActive'

SELECT TOP 5
[dd hh:mm:ss.mss],
session_id,
CAST(sql_text as varchar(MAX)) AS sql_text,
login_name,
(REPLACE(tempdb_current,',','')*8)/1024 as tempdb_current_MB,
(REPLACE(tempdb_allocations,',','')*8)/1024 as tempdb_allocations_MB,
CPU,
reads,
writes
INTO #tblWhoIsActive_temp
FROM #tblWhoIsActive
WHERE REPLACE(tempdb_allocations,',','') > 0
ORDER BY tempdb_allocations DESC
IF @@ROWCOUNT = 0
BEGIN
SET @body = N'<head>' + N'<style type="text/css">h2, body {font-family: Arial, verdana;} table{font-size:11px; border-collapse:collapse;} td{background-color:#F1F1F1; border:1px solid black; padding:3px;} th{background-color:#99CCFF;}</style>' +
N'<h2><font color="#0000ff" size="4">Free TempDB space</font></h2>' + N'</head>' + N'<p>' + ' ' + '</p>'+ N'<body>' + N' <hr> ' + N'<h1><font color="#0000ff" size="2">The top 5 TempDB intensive queries currently running are:</font></h1>' +
N' ' + N'<font color="#0000ff" size="2">No queries using TempDB currently running</font>' + N' ' + N' <br></br>' + N'<p>' + ' ' + '</p>'+ N' <hr> ' + N'<h1><font color="#0000ff" size="2">TempDB components size MB:</font></h1>' +
N' ' + N'<table border="1">' + N'<tr><th>PersistedTableSizeMB</th><th>VersionStoreSizeMB</th></tr>' +
CAST ( (
SELECT td = SUM(au.total_pages)*8/1024, '',
td = MAX(A.VersionStoreSizeMB), ''
FROM tempdb.sys.partitions p
INNER JOIN tempdb.sys.allocation_units au
ON p.hobt_id = au.container_id
CROSS APPLY (SELECT cntr_value/1024 AS VersionStoreSizeMB FROM sys.dm_os_performance_counters WHERE counter_name = 'Version Store Size (KB)') AS A
WHERE OBJECT_NAME(p.object_id) not like '#%'
FOR XML PATH('tr'), TYPE )
AS NVARCHAR(MAX) ) +
N'</table>' + N'</body>' ;
END
ELSE
BEGIN
SET @body = N'<head>' + N'<style type="text/css">h2, body {font-family: Arial, verdana;} table{font-size:11px; border-collapse:collapse;} td{background-color:#F1F1F1; border:1px solid black; padding:3px;} th{background-color:#99CCFF;}</style>' +
N'<h2><font color="#0000ff" size="4">Free TempDB space</font></h2>' + N'</head>' + N'<p>' + ' ' + '</p>'+ N'<body>' + N' <hr> ' + N'<h1><font color="#0000ff" size="2">The top 5 tempdb intensive queries currently running are:</font></h1>' +
N' ' + N'<table border="1">' + N'<tr><th>[dd hh:mm:ss.mss]</th><th>session_id</th><th>sql_text</th><th>login_name</th><th>tempdb_allocations_MB</th><th>tempdb_current_MB</th><th>CPU</th><th>reads</th><th>writes</th></tr>' +
CAST ( (
SELECT td = [dd hh:mm:ss.mss], '',
td = session_id, '',
td = CAST(sql_text AS VARCHAR(256)),'',
'',
td = login_name, '',
td = tempdb_allocations_MB, '',
td = tempdb_current_MB, '',
td = CPU, '',
td = reads, '',
td = writes, ''
FROM #tblWhoIsActive_temp
ORDER BY tempdb_allocations_MB DESC
FOR XML PATH('tr'), TYPE )
AS NVARCHAR(MAX) ) +
N'</table>' + N' <br></br>' + N'<p>' + ' ' + '</p>'+ N' <hr> ' + N'<h1><font color="#0000ff" size="2">TempDB components size MB:</font></h1>' + N' ' + N'<table border="1">' +
N'<tr><th>PersistedTableSizeMB</th><th>VersionStoreSizeMB</th></tr>' +
CAST ( (
SELECT
td = SUM(au.total_pages)*8/1024, '',
td = MAX(A.VersionStoreSizeMB), ''
from tempdb.sys.partitions p
INNER JOIN tempdb.sys.allocation_units au
ON p.hobt_id = au.container_id
CROSS APPLY
(SELECT cntr_value/1024 AS VersionStoreSizeMB FROM sys.dm_os_performance_counters WHERE counter_name = 'Version Store Size (KB)') AS A
WHERE object_name(p.object_id) not like '#%'
FOR XML PATH('tr'), TYPE ) AS NVARCHAR(MAX) ) +
N'</table>' + N'</body>'
END

EXEC MSDB.dbo.sp_send_dbmail
@profile_name = 'ProfileName'
,@recipients = 'DBATeam'
,@subject = @subject
,@body = @body
,@body_format = @body_format



The stored procedure is called by an unscheduled SQL Agent Job.























The SQL Agent job is executed by a performance condition alert being triggered within SQL Agent alerts, with the option set to only raise the alert every 5 minutes. This value may be too high for some people, but it can be altered on the options page.

























The SQL Agent alert is configured to be triggered when the Free Space falls below a certain value, which I normally set at 50% of the size of tempdb.






















As tempdb size can differ between instances, I've done a little bit of TSQL trickery to make the script runnable on any instance.


USE [msdb]
GO
DECLARE @PercentToWarn DECIMAL(12,2) = 50
SET @PercentToWarn = @PercentToWarn/100

DECLARE @InstanceName VARCHAR(128) =
(SELECT CASE WHEN SERVERPROPERTY ('InstanceName') IS NULL THEN 'SQLSERVER'
ELSE 'MSSQL$' + CAST(SERVERPROPERTY ('InstanceName') AS VARCHAR(32)) END)

DECLARE @TempWarnSize INT = 
 (SELECT SUM(size)*8 FROM tempdb.sys.database_files WHERE type = 0) * @PercentToWarn
 
DECLARE @Job_id UNIQUEIDENTIFIER = (
 select job_id FROM sysjobs WHERE name = 'zzTempDB_Size_Info')
 
DECLARE @PerfCond VARCHAR(1024) = 
@InstanceName + ':Transactions|Free Space in tempdb (KB)||<|' + CAST(@TempWarnSize AS VARCHAR(132)) + ''

BEGIN TRY
EXEC msdb.dbo.sp_help_alert 'FreeSpaceInTempdb'
END TRY

BEGIN CATCH
EXEC msdb.dbo.sp_add_alert
@name=N'FreeSpaceInTempdb',       
@enabled=1,       
@delay_between_responses=300,       
@include_event_description_in=0,      
@performance_condition=@PerfCond,       
@job_id=@Job_id
END CATCH                 
GO


Results 

When tempdb free space drops below the configured value (50% by default, although on this instance it's configured to 80%), an email will be sent to the DBAs.










Further Reading 

Although the tempdb 2005 whitepaper is starting to get a bit old, the details contained within are still largely relevant to subsequent SQL Server versions, and it is an informative read.

https://technet.microsoft.com/en-gb/library/cc966545.aspx

Also worth watching is a PASS lecture called Inside Tempdb by Bob Ward.

https://www.youtube.com/watch?v=SvseGMobe2w&feature=share&list=PLF80A8A233EE9F22F