Monday, 25 September 2017

Policy Based Management and Powershell

Introduction


Policy Based Management (PBM) can be used to inform DBAs when an incorrect setting is being used on a SQL instance and in some cases, can restrict changes being made to the SQL instance. Microsoft explain it as follows...

“Policy-Based Management is a policy based system for managing one or more instances of SQL Server. Use it to create conditions that contain condition expressions. Then, create policies that apply the conditions to database target objects. For example, as the database administrator, you may want to ensure that certain servers do not have Database Mail enabled, so you create a condition and a policy that sets that server option.”

Most settings within the database\instance environment can be monitored with PBM. The feature also includes the ability to use TSQL queries with the ExecuteSQL function. For processes outside of the SQL instance, the ExecuteWQL function can be used. Unfortunately, this is where I have run into problems in the past which has made me explore using Powershell as an alternative.

My PBM checks run daily and any failures populate the following tables

  • syspolicy_policy_execution_history
  • syspolicy_policy_execution_history_details

A stored procedure then queries these tables and sends an email to the DBA team showing which policies failed on each instance.
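
Something like the following query will pull back the failures (a sketch, assuming the usual msdb views, where a result of 0 marks a failed evaluation):

SELECT p.name AS PolicyName,
h.start_date,
d.target_query_expression
FROM msdb.dbo.syspolicy_policy_execution_history h
INNER JOIN msdb.dbo.syspolicy_policies p
ON h.policy_id = p.policy_id
INNER JOIN msdb.dbo.syspolicy_policy_execution_history_details d
ON h.history_id = d.history_id
WHERE h.result = 0 -- failed checks only
AND h.start_date > DATEADD(DAY, -1, GETDATE())
ORDER BY h.start_date DESC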

For this example, I will show how to use Powershell to check the status of SQL Browser and how to report if it fails the policy. The check will be triggered by a SQL Agent job.

Condition and Policy 


The following condition and policy are created.

(screenshot: the condition and policy)

The condition is deliberately set to check that 1 = 0 which will obviously fail!

Powershell 


Two Powershell scripts are created. The first script is used to make the check. It takes in three parameters: the Hostname\Instance name the check is being executed on, the Policy name of the check, and a hashtable containing any variables needed in the script.
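
As a sketch, the parameter block looks something like this (the parameter names match the SQL Agent step shown further down):

param
(
    [string]$HostnameInstanceName,
    [string]$PolicyName,
    [hashtable]$Variables
)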

There's a reference to the second Powershell script and the ability to write out the parameter values using the verbose switch. The second Powershell script needs to be in the location referenced. Although I've used C:\temp, this can be a network share.
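
That part of the script might look like this (assuming the second script lives at C:\temp and exposes a function named Execute-PBM):

# Dot-source the second script so the Execute-PBM function is available
. "C:\temp\Execute-PBM.ps1"

Write-Verbose "HostnameInstanceName: $HostnameInstanceName"
Write-Verbose "PolicyName: $PolicyName"
Write-Verbose "Variables: $($Variables.Keys -join ', ')"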

The script then has the logic to test whether the check passes or fails. In this case, the required status and actual status of the service are retrieved. The required status is a value passed into the script from SQL Agent (which will be shown further down) and the actual status is found using the get-service cmdlet.

The values are compared and either the script finishes if the comparison is equal or the second Powershell script is executed if there's a difference.
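
In sketch form (SQLBrowserRequiredStatus is the hashtable key passed in from SQL Agent further down, and SQLBrowser is the Windows service name):

$requiredStatus = $Variables['SQLBrowserRequiredStatus']
$actualStatus = (Get-Service -Name 'SQLBrowser').Status

if ($actualStatus -eq $requiredStatus)
{
    Write-Verbose "Check passed - nothing to do"
}
else
{
    # Check failed - invoke the policy so the failure is recorded
    Execute-PBM -HostnameInstanceName $HostnameInstanceName -PolicyName $PolicyName
}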

The second Powershell script, or Execute-PBM as it's called, is executed when the required status is different to the actual status. The script takes the Hostname\Instance name and Policy name as parameters. These values are used with the SQL provider for Powershell to set the location to the PBM folder of the SQLSERVER drive. The Policy is then invoked.
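
A sketch of Execute-PBM (assuming the SQLPS module is available; for a default instance the path segment after the host name is DEFAULT rather than the instance name):

function Execute-PBM
{
    param
    (
        [string]$HostnameInstanceName,
        [string]$PolicyName
    )

    Import-Module SQLPS -DisableNameChecking

    # The SQL provider exposes policies under the SQLPolicy folder of the SQLSERVER: drive
    Set-Location "SQLSERVER:\SQLPolicy\$HostnameInstanceName\Policies"

    # Invoke the policy against the instance so the evaluation is recorded
    Get-ChildItem | Where-Object { $_.Name -eq $PolicyName } |
        Invoke-PolicyEvaluation -TargetServerName $HostnameInstanceName
}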

This will then execute the Policy, which is set to fail, and populate the msdb tables mentioned earlier.

SQL Agent Job


The process can be scheduled using SQL Agent.

A job or jobstep can be created to run the script. The jobstep should be of type Operating system (CmdExec).

And the command needs to call the first Powershell script, saved as a ps1 file on the file system, with the three parameters specified.
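
The command looks something like this (the script path and policy name are placeholders; because a hashtable is being passed, -Command is used rather than -File):

powershell.exe -Command "& 'C:\temp\Check-Service.ps1' -HostnameInstanceName '$(ESCAPE_SQUOTE(SRVR))' -PolicyName 'SQL Browser Check' -Variables @{SQLBrowserRequiredStatus='Running'}"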

The HostnameInstanceName parameter uses a SQL Agent token, which passes in the @@ServerName value. The advantage of using the SQL Agent token is that each job\jobstep doesn't need to be manually updated for each instance. The PolicyName needs to be the same as the policy set up in SSMS. The Variables parameter is set up as a hashtable and can accept multiple values as key\value pairs. In this example, the SQLBrowserRequiredStatus is passed into the script with the value of Running. If the script requires further variables then they can be added to the Variables parameter using key = value. Separate each key\value pair with a semi-colon.

Conclusion


Powershell can be used in conjunction with Policy Based Management to extend the scope of checks that can be performed.

Tuesday, 14 March 2017

TempDB Multiple Datafiles

Problem

Most DBAs are aware that (prior to SQL 2016) one of the most common post-installation changes that needs to be made in SQL Server is the addition of extra TempDB datafiles. What seems to be less well known is that ALL these files need to be of equal size and need to expand in equal increments.


The addition of extra TempDB files is to alleviate contention on the PFS, GAM and SGAM pages (these are the pages that help SQL decide which pages to use to store data; meta-pages maybe?). The idea is that the extra datafiles mean more PFS, GAM and SGAM pages, which means TempDB workloads have a greater number of pages they can use and hence spend less time waiting for used pages to become unlatched. The reason for the EQUALLY sized datafiles is the way SQL allocates data to each datafile. SQL will check to see if all the datafiles have the same amount of free space. If so, it will add data in a “round robin” fashion, where each datafile takes it in turn to store data. This is how the contention on the PFS, GAM and SGAM pages is reduced. If one of the TempDB files is of a greater size than the rest and has more free space, SQL will prioritise this one file over the others and use it until the free space becomes the same across all files in the filegroup.

Example

In the following example, I've used sys.dm_io_virtual_file_stats to capture the MB written and IO count on each file in TempDB. A baseline is taken initially, then a query which creates a temporary table and fills it with data is executed 5 times, and finally another snapshot of sys.dm_io_virtual_file_stats is taken and the results used to find the totals.

Queries

Before starting the TempDB workload, the following query is executed

SELECT GETDATE() as dtStart,
iovfs.*,
mf.name,
mf.type_desc
INTO #dm_io_virtual_file_stats_start
FROM sys.master_files mf
INNER JOIN sys.dm_io_virtual_file_stats(2,NULL) iovfs
ON mf.database_id = iovfs.database_id and mf.file_id = iovfs.file_id

The following query is executed 5 times to generate a workload in TempDB

SELECT TOP ( 2000000 )
fn = LEFT( o.name, 64 ),
ln = LEFT( c.name, 64 ),
em = LEFT( o.name, LEN ( c.name )% 5+1 ) + '.'
+ LEFT( c.name, LEN ( o.name )% 5+2 ) + '@'
+ RIGHT( c.name, LEN ( o.name + c.name )% 12 + 1 )
+ LEFT( RTRIM ( CHECKSUM ( NEWID ())), 3 ) + '.com' ,
a = CASE WHEN c.name LIKE '%y%' THEN 0 ELSE 1 END
INTO #temptest
FROM sys.all_objects AS o
CROSS JOIN sys.all_columns AS c

And finally the following SQL is executed to compute the results.

SELECT
DB_NAME ( t1.[database_id] ) [DatabaseName] , t1.[name] [LogicalName] ,
(SUM(t2.[num_of_bytes_written])-SUM(t1.[num_of_bytes_written]))/1048576
AS [Written_MB]
, SUM(t2.[num_of_reads]+t2.[num_of_writes])-SUM(t1.[num_of_reads]+t1.[num_of_writes])
AS [IO_Count]
FROM #dm_io_virtual_file_stats_start AS t1
INNER JOIN
(SELECT iovfs.*,mf.[name],mf.[type_desc]
FROM sys.master_files mf
INNER JOIN sys.dm_io_virtual_file_stats (2,NULL) iovfs
ON mf.[database_id]= iovfs.[database_id] AND mf.[file_id] = iovfs.[file_id]) t2
ON t1.[database_id] = t2.[database_id] AND t1.[file_id] = t2.[file_id]
GROUP BY DB_NAME(t1.[database_id]), t1.[name]
ORDER BY IO_Count DESC

Results

1 TempDB file

Using a single TempDB datafile...

the following values were observed.

(screenshot: results)

2 Equal TempDB files

Using two equal TempDB files

(screenshot: TempDB file configuration)

the following values were observed. Note the almost equal Written_MB and IO_Count.

(screenshot: results)

2 Unequal TempDB files

Using two unequal TempDB files

(screenshot: TempDB file configuration)

the following values were observed. Note the unequal Written_MB and IO_Count.

(screenshot: results)

4 Equal sized TempDB files

Using four equally sized TempDB files

(screenshot: TempDB file configuration)

the following values were observed.

(screenshot: results)

4 Unequal TempDB files

And finally, using four unequally sized TempDB files

(screenshot: TempDB file configuration)

the following values were observed.

(screenshot: results)

Solutions

There are a couple of ways, when creating multiple TempDB files, to make sure they stay evenly sized. My preference is to dedicate a drive to the TempDB datafiles. Divide the drive size by the number of datafiles, create each datafile at that size and don't allow them to autogrow. This can cause problems if an errant query uses all the space in TempDB, in which case the query will need to be rewritten to use less TempDB space or additional space will need to be added to the drive.
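
As a sketch, assuming a dedicated 8GB T: drive and four datafiles, the setup would be along these lines (file names and sizes are illustrative; FILEGROWTH = 0 disables autogrow, and the MODIFY FILE change to the existing file only takes effect after a restart):

ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'T:\tempdb.mdf', SIZE = 2048MB, FILEGROWTH = 0)
ALTER DATABASE tempdb ADD FILE (NAME = tempdev2, FILENAME = 'T:\tempdb2.ndf', SIZE = 2048MB, FILEGROWTH = 0)
ALTER DATABASE tempdb ADD FILE (NAME = tempdev3, FILENAME = 'T:\tempdb3.ndf', SIZE = 2048MB, FILEGROWTH = 0)
ALTER DATABASE tempdb ADD FILE (NAME = tempdev4, FILENAME = 'T:\tempdb4.ndf', SIZE = 2048MB, FILEGROWTH = 0)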

A trace flag, -T1117, can also be used. This trace flag grows all datafiles in a filegroup at the same time. Beware that this is a global trace flag and will affect all databases on the instance.

Conclusion

It's important, when creating multiple TempDB files, that each file is of equal size. Be aware of TempDB's usage patterns and how each datafile will grow if fully used.

Tuesday, 20 December 2016

SQL Agent Permissions

Problem


I was contacted by a user asking if it was possible for his team to execute specific SQL Agent jobs and check the job history. This was made possible by creating a SQL authenticated login and changing the job owner to this new account.

But this got me thinking about who actually has permissions when executing SQL Agent jobs. For example, who actually needs the SELECT\INSERT etc. permissions when a Powershell script is run using SQL Agent under a proxy account? Does the Job Owner or the RunAs account need access to the package for an SSIS job? I was surprised by the results and the security hole that can be opened (which I will reveal at the end).

The test involved running an INSERT statement into a table using a TSQL, Powershell and SSIS job step (as these are pretty much the only steps I use). The code to create the table is...

CREATE DATABASE [JobOwner_Perms_Test]
GO

USE [JobOwner_Perms_Test]
GO

CREATE TABLE [dbo].[JobOwner_Perms_Test] (
[Col1] int NOT NULL,
[Col2] varchar(max) NOT NULL,
[LoginName] varchar(max) NULL DEFAULT (suser_sname()),
[User_Name] varchar(max) NULL DEFAULT (user_name()))

TSQL Job Step


I created two jobs, the first with the job owner as a sysadmin and the second with the job owner as a non-sysadmin but with permissions to INSERT into the table.
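
Both job steps ran a simple INSERT along these lines (the values are illustrative):

INSERT INTO [JobOwner_Perms_Test].[dbo].[JobOwner_Perms_Test] (Col1, Col2)
VALUES (1, 'XXXX')

The two DEFAULT columns then record which login and user actually performed the INSERT.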

Both jobs executed successfully, with the sysadmin account job using the credentials of the SQL Agent service and the non-sysadmin account using job owner credentials.

I was surprised that the RunAs option, which allows a proxy account to be used, wasn't accessible for the TSQL job step. I was further surprised that sysadmin users can declare that the job step runs under another user's context, via the Run As User option on the Advanced tab.

Setting this to a valid user allows that account to be used as a proxy for the sysadmin account (where col1 = 3), although it doesn't have any effect when set for a non-sysadmin account. Not sure why the RunAs option on the General tab doesn't allow this...

(screenshot: results)

Powershell Job Step


For the Powershell test, I used the INVOKE-SQLCMD cmdlet to run the INSERT statement.

Windows
INVOKE-SQLCMD -ServerInstance localhost -Database JobOwner_Perms_Test -Query "INSERT INTO [dbo].[JobOwner_Perms_Test](Col1, Col2) VALUES (n, 'XXXX')"

SQL
INVOKE-SQLCMD -ServerInstance localhost -Database JobOwner_Perms_Test -Username -Password -Query "INSERT INTO [dbo].[JobOwner_Perms_Test](Col1, Col2) VALUES (n, 'XXXX')"

There were quite a few tests to consider. For example, whether the job owner is a sysadmin or not, whether a proxy account is used and if the Powershell string specifies whether to use Windows or SQL authentication.

The first bunch of tests used SQL Server Agent Service as the RunAs value...

(screenshot: job configurations)

With the following results....

(screenshot: results)

First off, all the jobs with a job owner of non-sysadmin failed with the error explaining that non-sysadmin accounts need to use a Proxy account to run.

The results show that using the INVOKE-SQLCMD cmdlet with Windows authentication will use the SQL Agent service account to perform DML actions, while the account specified using the cmdlet's -Username and -Password parameters will be used for SQL authentication.

Next up was running the cmdlet using proxy accounts, one that was a sysadmin and another that had minimal permissions on the object.

(screenshot: job configurations)

The outcome (see screenshot below) shows that using Windows authentication within the cmdlet will use the proxy account for the DML. Using SQL authentication in the cmdlet uses the account specified under the -Username and -Password parameters. One surprising result of these tests was that a Powershell script can be executed in SQL Agent without either the Job Owner or Run As accounts having access to msdb or the target database. All that's needed is for the SQL authenticated user to have access to the object being referenced.

(screenshot: results)

What errors can be received? If the Proxy account doesn’t have access to the database or doesn’t have permissions to insert into the table, the following errors will be returned...
The INSERT permission was denied on the object 'JobOwner_Perms_Test', database 'JobOwner_Perms_Test', schema 'dbo'

The error information returned by PowerShell is: 'Cannot open database "JobOwner_Perms_Test" requested by the login. The login failed. Login failed for user '\ProxyNoSysadminLogin'
The same goes when using SQL authentication and the SQL accounts don't have access\permissions.

SSISDB Job Step


SSIS jobs running in SQL Agent add further complications when it comes to permissions. As well as the Job Owner and RunAs account, there's an option for “Log on to the server” which gives the choice of Windows or SQL authentication but, as far as I can tell, only allows Windows.

If the package is stored in the SSISDB catalogue, then permissions are required on the SSISDB database. Finally, the package itself may be connecting to a database using a connection manager within the package.

To test, I created two packages that both ran an execute SQL task, which executed a stored procedure to INSERT a row into a table. The first package used Windows Authentication within it to connect to the database. In the second package, I created two parameters that are passed into the package to be used in an expression to build a connection string that uses SQL Authentication to connect to the database.

After A LOT of testing, with many different scenarios, I made the following observations...

  • The job owner takes no part in setting the permissions. 
  • If the job owner is not a sysadmin, a proxy account is needed for the SSIS job step. 
  • The account in the Run As option within the job step is the one that executes the package and needs permissions in the SSISDB catalogue to do this. 
  • When the package uses a connection with Windows Authentication, the account in the Run As option is used and needs permissions for whatever that step does. 
  • When the package uses a connection with SQL Authentication, the account specified within the connection manager in the package needs permissions for whatever that step does.

Security Hole


The reader may've already spotted the problems that can arise with running SQL statements in SQL Agent. A combination of over-privileged accounts coupled with SQL Agent jobs can give some nasty results.

Many places I have worked at have set all SQL Agent jobs to be owned by SA and use the SQL Agent service account as the Run As option in the job step. Taking the TSQL job step as an example, let's run through the following scenario....

A super user called Steve Bull asks for a stored procedure he has created in his own schema in a user database to be executed every night, which adds a row into a table. The table contains a datetime column and a second, DEFAULT, column that adds the user's login name.
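
The setup would look something like this (a sketch; the schema, table and procedure names are illustrative):

CREATE TABLE [SteveBull].[NightlyRun] (
[RunDate] datetime NOT NULL DEFAULT (getdate()),
[LoginName] varchar(128) NOT NULL DEFAULT (suser_sname()))
GO

CREATE PROCEDURE [SteveBull].[AddNightlyRow]
AS
INSERT INTO [SteveBull].[NightlyRun] DEFAULT VALUES
GO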

The DBA creates the job, setting the owner as SA, and informs Steve that it will be executed every night from tonight.

This works fine. Steve checks the table and can see that the user executing the SQL Agent job is the SQL Agent service account, presumably with sysadmin privileges.

Steve then goes and alters the stored procedure and waits for the nightly SQL Agent job to run.
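
Continuing the sketch, the altered procedure only needs one extra line (the login name is illustrative):

ALTER PROCEDURE [SteveBull].[AddNightlyRow]
AS
INSERT INTO [SteveBull].[NightlyRun] DEFAULT VALUES
-- Runs under the job's execution context, here the sysadmin SQL Agent service account
EXEC sp_addsrvrolemember 'DOMAIN\Steve.Bull', 'sysadmin'
GO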

Which leaves Steve as a sysadmin on the instance.

Conclusion


Setting the correct permissions within SQL Agent is essential when creating jobs. Using highly privileged accounts to execute jobs can be exploited, leading to the possibility of a user being able to elevate their permissions and being able to access data that they shouldn't be able to see.

Thursday, 10 November 2016

Linked Server Security

Problem


When creating Linked Servers, security often gets overlooked. This can lead to accounts on one instance having access, sometimes with elevated permissions, to a second instance.

Security Options


Linked Servers offer the following security options....

  • Local server to remote server mappings
  • Not be made
  • Be made without using a security context
  • Be made using the login's current security context
  • Be made using this security context

Let's take a look at each one...

Local server to remote server mappings


This setting takes precedence over the other Linked Server security settings and gives the ability to map specific logins on the local instance to logins on the remote instance.

The login on the local instance can be either SQL or Windows authenticated but unfortunately, Windows Groups can't be used. The local login can be mapped to logins on the remote instance, where the permissions of the remote login would then be used.

Accounts can impersonate the same account on the remote server. For SQL logins, the username and password need to be exact matches on both instances. For Windows logins, a Service Principal Name (SPN) needs to be created on both instances. As this is done in Active Directory and very often is not the responsibility of the DBA, it can be missing, which can cause errors. I will go into more detail about SPNs later.

All other security settings are used for any accounts that are NOT specified in the local to remote server mappings.
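
As a T-SQL sketch (server and login names are illustrative), a mapping is created with sp_addlinkedsrvlogin, and, if memory serves, dropping the default mapping gives the “Not be made” behaviour for everyone else:

-- Map a specific local login to a SQL login on the remote instance
EXEC sp_addlinkedsrvlogin
@rmtsrvname = N'INSTANCE_B',
@useself = N'FALSE',
@locallogin = N'DOMAIN\ReportingUser',
@rmtuser = N'RemoteReportingUser',
@rmtpassword = N'StrongPasswordHere'

-- Remove the default mapping so all other logins cannot use the Linked Server
EXEC sp_droplinkedsrvlogin @rmtsrvname = N'INSTANCE_B', @locallogin = NULL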

Not be made


This setting denies access to the remote server for all logins except the ones specified in the local server to remote server mappings.

Be made without using a security context


Books online has the very unhelpful description for this setting...
“Be made without using a security context – Specify that a connection will be made without using a security context for logins not defined in the list.”
My impression was that accounts would connect and use the privileges assigned to the public server role. Unfortunately, I have never been able to test this out as I always get the following error when trying to create the Linked Server with the setting...

(screenshot: the error received)

If anyone knows how to make this setting work, please get in touch :-)


Be made using the login's current security context


This setting uses the same login connected to the local instance to connect to the remote instance. When using a SQL authenticated user, the username and password need to be the same on both instances. For Windows authenticated users, they can have direct access or be part of an AD group on the remote instance.

This setting gives the user, from the local instance, access to everything they have access to on the remote instance. If a user should only be accessing a subset of the databases\objects they have access to on the remote instance, then this may not be the correct setting to use.


Be made using this security context


This setting uses a SQL authenticated login on the remote instance for all connections using the Linked Server. A remote login and password need to be specified within the Linked Server security options. It's not possible to use this setting with a Windows login or AD group.

I've seen people use the SA account for this setting, probably to overcome SPN issues with one of the other settings or the complexity involved with granting multiple accounts access to specific databases\objects on the remote instance. Making the remote login SA (or any login with sysadmin privileges) will effectively give the local login SA privileges on the remote instance.


An example...

Set the “Be made using this security context” to an account with sysadmin permissions. Then create an account on the remote instance with no permissions. I have used a login called LinkedServerTest.

USE master
GO
CREATE LOGIN LinkedServerTest WITH PASSWORD = N'LinkedServerTest',
DEFAULT_DATABASE = [master],
CHECK_EXPIRATION = OFF,
CHECK_POLICY = OFF
GO

Execute the following stored procedure on the local instance...

EXEC [INSTANCE_B_DATABASE_1].MASTER.dbo.sp_addsrvrolemember 'LinkedServerTest' , 'sysadmin'

Now check the permissions that LinkedServerTest has...

(screenshot: LinkedServerTest is now a member of the sysadmin role)

(For this to work, the RPC Out setting on the Server Options page needs to be set to True)

Service Principal Name (SPN)


SPNs are vital when using Linked Servers with Windows Authentication. The following error can be received when using a Linked Server, which normally means an SPN hasn't been created...

(screenshot: the error received)

MSDN describes SPNs as...
A service principal name (SPN) is a unique identifier of a service instance. SPNs are used by Kerberos authentication to associate a service instance with a service logon account. This allows a client application to request that the service authenticate an account even if the client does not have the account name.

There are many articles on the internet that can explain SPNs, Kerberos authentication and the “double hop” problem much better than I can (this one is pretty good https://shuggill.wordpress.com/2015/03/06/configuring-sql-server-kerberos-for-double-hop-authentication/). The way I like to think about this problem is...

If you are using Windows authentication on a laptop\desktop\application computer to connect to a SQL instance on another computer (one hop) which then uses a Linked Server to connect to an instance on another computer (second hop), you will need to create an SPN for the connection to work.

The above scenario is probably how most people are connecting and without SPNs, Windows accounts will not be able to gain access to the remote instance through the Linked Server.

I use the following two methods to find out if an SPN has been created.

1 – Check the SQL error log.

You'll either get good news...

(screenshot: error log entry showing the SPN was successfully registered)

or bad news...

(screenshot: error log entry showing the SPN could not be registered)

2 - Microsoft have produced the Kerberos Configuration Manager tool to help with the detection and creation of SPNs. The tool can be downloaded from...

https://www.microsoft.com/en-us/download/details.aspx?id=39046

Creating SPNs goes above what I'd like to talk about here, so instead use this link to show you how...

https://msdn.microsoft.com/en-us/library/ms191153.aspx

Server Options


There are a couple of settings within the server options that can alter the way Linked Servers work. Both settings can be set to either true or false.
  • Data Access – When set to false, this disables the Linked Server's ability to retrieve data via four part naming or OPENQUERY SELECT statements.
  • RPC Out – When set to false, this disables the Linked Server's ability to execute stored procedures.
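
Both can also be set with T-SQL (the server name is illustrative):

EXEC sp_serveroption @server = N'INSTANCE_B', @optname = N'data access', @optvalue = N'true'
EXEC sp_serveroption @server = N'INSTANCE_B', @optname = N'rpc out', @optvalue = N'true'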

DMVs


The following two DMVs give you information regarding the Linked Servers and login mappings on the instance.
  • sys.servers - This gives the information on the Linked Servers that have been created on the instance. The column is_linked will show a 1 for any linked servers that have been created.

  • sys.linked_logins – This gives the information on the local logins and the mappings they have on the remote instance.
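
Joining the two gives a quick overview of the mappings (a sketch; a local_principal_id of 0 is the catch-all row covering logins not explicitly listed):

SELECT s.name AS ServerName,
sp.name AS LocalLogin,
ll.uses_self_credential,
ll.remote_name
FROM sys.servers s
INNER JOIN sys.linked_logins ll
ON s.server_id = ll.server_id
LEFT JOIN sys.server_principals sp
ON ll.local_principal_id = sp.principal_id
WHERE s.is_linked = 1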

How should a Linked Server's security be set?


Like pretty much everything else in SQL Server, there isn't a one size fits all and you have to tailor the options to your needs. For me, I will always try to set up a Linked Server per database and explicitly map each user to a specifically created SQL authenticated user that only has access to a single database with the correct permissions. Any login not specified in the local server to remote server mappings won't be able to use the Linked Server. This way, only the logins which need access will be able to access the remote server and SPN problems can also be bypassed. Obviously, the downside to this method is that there can be many Linked Servers created with many users, which can make the management burdensome.


Multiple Linked Servers for 6 databases on 2 instances.

Specific accounts mapped to SQL authenticated users.

Summary


After creating a Linked Server, time should be spent to work out what logins need to access it and what permissions will be required on the remote instance. Taking short cuts with Linked Server security can make an instance accessible to many more people than originally intended.