How to configure your own DMV monitors dashboards…? (3/4)

In the last blogs we explained how to configure DMV monitors to build your dashboards for expensive-query statistics and for connection statistics and outage cases. Today we are going to discuss a new dashboard related to waits/overload statistics:

Waits/Overload statistics

As done previously for the earlier dashboards, we start from the relevant DMVs and shape their output for the dashboard.
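
Waits/overload dashboards are typically fed from sys.dm_os_wait_stats. The query below is a minimal sketch only; the TOP limit and the list of benign wait types to exclude are assumptions you would tune for your own environment, not the exact query of this series:

SELECT TOP (10)
       wait_type,
       waiting_tasks_count,
       wait_time_ms,
       signal_wait_time_ms,
       wait_time_ms - signal_wait_time_ms AS resource_wait_time_ms   --time spent waiting on the resource itself
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN (N'SLEEP_TASK', N'LAZYWRITER_SLEEP', N'SQLTRACE_BUFFER_FLUSH',
                        N'WAITFOR', N'BROKER_TASK_STOP', N'CHECKPOINT_QUEUE')     --illustrative exclusion list
ORDER BY wait_time_ms DESC;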

Online Event – DBCC Commands: The Quick and the Dangerous

Presenter: Erin Stellato

Abstract:

There are many DBCC statements at the disposal of Database Administrators and Developers. Some are used for the greater good; others can wreak havoc in your system. The majority of these commands are informational, but if you’re not careful you can introduce chaos or at worst, lose data you may never find again. In this session we will explore DBCC commands that are useful for the DBA and developer, and cover when to use them and how. Expect demonstrations that provide a practical application of these commands – both documented and not – that database professionals can utilize quickly and easily.

Bio:

Erin Stellato is a Senior Database Engineer at Hyland Software, Inc. in Westlake, OH. She has over 11 years of technology experience, and has worked with SQL Server for the last eight years. Erin works with customers to provide support for problems that involve the database and system performance. She is an active member of the SQL Server community and blogs about her experiences. Her areas of interest include Internals, Maintenance strategies, Performance Tuning and High Availability.

Door prize: There will be a raffle for a $50 Amazon gift certificate.  You do not need to register for the meeting, but if you want to enter the raffle, you must register at https://www.livemeeting.com/lrs/8000181573/Registration.aspx?pageName=s6798gl02wk3xg74 no later than 5:00 PM EDT on May 8th.

Live Meeting Link:

https://www.livemeeting.com/cc/8000181573/join?id=J93PGC&role=attend

How to configure your own DMV monitors dashboards…? (2/4)

In the last article we explained how to build your DMV monitors for expensive queries, and today we are going to explain how to build DMV monitors for connectivity and outage cases between the DB and application servers.

Statistics of connections coming from APP server
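
As a minimal sketch of the kind of DMV query such a connection dashboard can start from (the grouping by host, login and program is an illustrative assumption, not the exact statement of the original article):

SELECT s.host_name,                      --the APP server the connection comes from
       s.login_name,
       s.program_name,
       COUNT(*)            AS connections_count,
       MAX(c.connect_time) AS last_connect_time
FROM sys.dm_exec_sessions AS s
INNER JOIN sys.dm_exec_connections AS c ON c.session_id = s.session_id
WHERE s.is_user_process = 1              --ignore system sessions
GROUP BY s.host_name, s.login_name, s.program_name
ORDER BY connections_count DESC;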

Data Warehousing Workshop (4/4)

Following up on our series of data warehousing workshops, we still have numerous cases that might help you when you implement any data warehousing solution.

B - Online Data warehousing solutions for ETL processes:

Extract-Transform-Load (ETL) is one of the most common terms worldwide and is used across different RDBMSs like Oracle, Teradata, Informix, DB2… and SQL Server as well.

With that kind of process, you just have to:

  • Extract the data needed from the source
  • Transform it to an appropriate format & collation
  • Then load it to an appropriate destination

This can be done simply by (a short sketch follows after this list):

  • Using the Import & Export functionality of the DB service
  • Using the DTS components of SSIS (Integration Services)
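
To make the three steps concrete, here is a minimal T-SQL sketch of an ETL load done by hand; the SourceDB.dbo.Orders source, the Orders_Staging destination and the specific conversions are hypothetical names used only for illustration:

--Extract from the source, Transform the format & collation, then Load to the destination
INSERT INTO Workshops.dbo.Orders_Staging (OrderID, CustomerName, OrderDate, Amount)
SELECT o.OrderID,
       o.CustomerName COLLATE SQL_Latin1_General_CP1_CI_AS,   --transform: align the collation
       CONVERT(date, o.OrderDate),                            --transform: a uniform date format
       CAST(o.Amount AS decimal(18, 2))                       --transform: a uniform numeric precision
FROM SourceDB.dbo.Orders AS o;                                --extract: read from the source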

But this process might encounter difficulties with a huge data entity, where there is not enough room for a downtime of hours, or perhaps even days. So the question now is: how can we conduct ETL processes without any downtime for huge data entities, BLOBs (Binary Large Objects) and XML data?

 

This can be done through 2 major steps:

  1. Run your ETL process in parallel with end-user activity, but against a different table than the one serving end users.
  2. Once finished, start to scan all mismatches between the 2 tables through the 3 data warehousing statements below:
--Scan mismatches for new data inserted
set xact_abort on;
set nocount on;
set deadlock_priority low;

ALTER DATABASE [Workshops] SET RECOVERY BULK_LOGGED WITH NO_WAIT

declare @ErrorMessage nvarchar(400)
declare @TotalMessage nvarchar(400)

Begin try
Begin transaction Mismatch_insert with mark 'Mismatch_insert'

      merge Articles_new T using Articles S WITH (NOLOCK) on S.ArticleID = T.ArticleID
      WHEN NOT MATCHED THEN INSERT
              ([Article_Title], [Article_PublishedDate], [Article_PublishedDateView], [Article_UserID]
              ,[Article_Brief], [Article_Details], [Article_Notes], [Article_Keywords]
              ,[Article_IsMarquee], [Article_ReadCount], [Article_AvgRating], [Article_CommentsCount]
              ,[Article_LastModifiedByID], [Article_LastModifiedDate], [Article_ActivatedByID], [Article_IsActive]
              ,[Article_CategoryID], [Article_IsMain], [Article_IsUpdated], [Article_IsSubject]
              ,[Article_IsEditing], [Article_IsArchived], [Article_IsEditingByID], [Articles_ArticlehasComment]
              ,[ArticleVideo_VideoURL], [ArticleVideo_VideoEmbed], [Article_IsWrittenBy], [Article_LikesCount]
              ,[Article_DisLikesCount], [Article_ImageThumb], [Article_ImageDesc])
      VALUES
              (s.[Article_Title], s.[Article_PublishedDate], s.[Article_PublishedDateView], s.[Article_UserID]
              ,s.[Article_Brief], s.[Article_Details], s.[Article_Notes], s.[Article_Keywords]
              ,s.[Article_IsMarquee], s.[Article_ReadCount], s.[Article_AvgRating], s.[Article_CommentsCount]
              ,s.[Article_LastModifiedByID], s.[Article_LastModifiedDate], s.[Article_ActivatedByID], s.[Article_IsActive]
              ,s.[Article_CategoryID], s.[Article_IsMain], s.[Article_IsUpdated], s.[Article_IsSubject]
              ,s.[Article_IsEditing], s.[Article_IsArchived], s.[Article_IsEditingByID], s.[Articles_ArticlehasComment]
              ,s.[ArticleVideo_VideoURL], s.[ArticleVideo_VideoEmbed], s.[Article_IsWrittenBy], s.[Article_LikesCount]
              ,s.[Article_DisLikesCount], s.[Article_ImageThumb], s.[Article_ImageDesc]);

commit transaction Mismatch_insert
End try
Begin catch
 if (XACT_STATE()) = -1
 begin
 ROLLBACK TRANSACTION Mismatch_insert
 Set @ErrorMessage = ERROR_MESSAGE()
 Set @TotalMessage = 'Mismatch scan phase for new inserts has failed due to ......' + @ErrorMessage
 EXECUTE msdb.dbo.sp_notify_operator @name=N'Operator', @subject=N'Failure of Mismatch scan phase for new inserts', @body=@TotalMessage
 end
end catch

ALTER DATABASE [Workshops] SET RECOVERY FULL WITH NO_WAIT
 
--Scan mismatches for new data updated
set xact_abort on;
set nocount on;
set deadlock_priority low;

ALTER DATABASE [Workshops] SET RECOVERY BULK_LOGGED WITH NO_WAIT

declare @ErrorMessage nvarchar(400)
declare @TotalMessage nvarchar(400)

Begin try
Begin transaction Mismatch_update with mark 'Mismatch_update'

Merge Articles_new T using Articles S WITH (NOLOCK) on S.ArticleID = T.ArticleID
WHEN MATCHED and
   (   T.[Article_Title] <> s.[Article_Title]
    or T.[Article_PublishedDate] <> s.[Article_PublishedDate]
    or T.[Article_PublishedDateView] <> s.[Article_PublishedDateView]
    or T.[Article_UserID] <> s.[Article_UserID]
    or T.[Article_Brief] <> s.[Article_Brief]
    or T.[Article_Details] <> s.[Article_Details]
    or T.[Article_Notes] <> s.[Article_Notes]
    or T.[Article_Keywords] <> s.[Article_Keywords]
    or T.[Article_IsMarquee] <> s.[Article_IsMarquee]
    or T.[Article_ReadCount] <> s.[Article_ReadCount]
    or T.[Article_AvgRating] <> s.[Article_AvgRating]
    or T.[Article_CommentsCount] <> s.[Article_CommentsCount]
    or T.[Article_LastModifiedByID] <> s.[Article_LastModifiedByID]
    or T.[Article_LastModifiedDate] <> s.[Article_LastModifiedDate]
    or T.[Article_ActivatedByID] <> s.[Article_ActivatedByID]
    or T.[Article_IsActive] <> s.[Article_IsActive]
    or T.[Article_CategoryID] <> s.[Article_CategoryID]
    or T.[Article_IsMain] <> s.[Article_IsMain]
    or T.[Article_IsUpdated] <> s.[Article_IsUpdated]
    or T.[Article_IsSubject] <> s.[Article_IsSubject]
    or T.[Article_IsEditing] <> s.[Article_IsEditing]
    or T.[Article_IsArchived] <> s.[Article_IsArchived]
    or T.[Article_IsEditingByID] <> s.[Article_IsEditingByID]
    or T.[Articles_ArticlehasComment] <> s.[Articles_ArticlehasComment]
    or T.[ArticleVideo_VideoURL] <> s.[ArticleVideo_VideoURL]
    or T.[ArticleVideo_VideoEmbed] <> s.[ArticleVideo_VideoEmbed]
    or T.[Article_IsWrittenBy] <> s.[Article_IsWrittenBy]
    or T.[Article_LikesCount] <> s.[Article_LikesCount]
    or T.[Article_DisLikesCount] <> s.[Article_DisLikesCount]
    or T.[Article_ImageThumb] <> s.[Article_ImageThumb]
    or T.[Article_ImageDesc] <> s.[Article_ImageDesc]
   )
THEN update set
     [Article_Title] = s.[Article_Title]
    ,[Article_PublishedDate] = s.[Article_PublishedDate]
    ,[Article_PublishedDateView] = s.[Article_PublishedDateView]
    ,[Article_UserID] = s.[Article_UserID]
    ,[Article_Brief] = s.[Article_Brief]
    ,[Article_Details] = s.[Article_Details]
    ,[Article_Notes] = s.[Article_Notes]
    ,[Article_Keywords] = s.[Article_Keywords]
    ,[Article_IsMarquee] = s.[Article_IsMarquee]
    ,[Article_ReadCount] = s.[Article_ReadCount]
    ,[Article_AvgRating] = s.[Article_AvgRating]
    ,[Article_CommentsCount] = s.[Article_CommentsCount]
    ,[Article_LastModifiedByID] = s.[Article_LastModifiedByID]
    ,[Article_LastModifiedDate] = s.[Article_LastModifiedDate]
    ,[Article_ActivatedByID] = s.[Article_ActivatedByID]
    ,[Article_IsActive] = s.[Article_IsActive]
    ,[Article_CategoryID] = s.[Article_CategoryID]
    ,[Article_IsMain] = s.[Article_IsMain]
    ,[Article_IsUpdated] = s.[Article_IsUpdated]
    ,[Article_IsSubject] = s.[Article_IsSubject]
    ,[Article_IsEditing] = s.[Article_IsEditing]
    ,[Article_IsArchived] = s.[Article_IsArchived]
    ,[Article_IsEditingByID] = s.[Article_IsEditingByID]
    ,[Articles_ArticlehasComment] = s.[Articles_ArticlehasComment]
    ,[ArticleVideo_VideoURL] = s.[ArticleVideo_VideoURL]
    ,[ArticleVideo_VideoEmbed] = s.[ArticleVideo_VideoEmbed]
    ,[Article_IsWrittenBy] = s.[Article_IsWrittenBy]
    ,[Article_LikesCount] = s.[Article_LikesCount]
    ,[Article_DisLikesCount] = s.[Article_DisLikesCount]
    ,[Article_ImageThumb] = s.[Article_ImageThumb]
    ,[Article_ImageDesc] = s.[Article_ImageDesc];

commit transaction Mismatch_update
End try
Begin catch
 if (XACT_STATE()) = -1
 begin
 ROLLBACK TRANSACTION Mismatch_update
 Set @ErrorMessage = ERROR_MESSAGE()
 Set @TotalMessage = 'Mismatch scan phase for new updates has failed due to ......' + @ErrorMessage
 EXECUTE msdb.dbo.sp_notify_operator @name=N'Operator', @subject=N'Failure of Mismatch scan phase for new updates', @body=@TotalMessage
 end
end catch

ALTER DATABASE [Workshops] SET RECOVERY FULL WITH NO_WAIT
 
--Scan mismatches for new data deleted
set xact_abort on;
set nocount on;
set deadlock_priority low;

ALTER DATABASE [Workshops] SET RECOVERY BULK_LOGGED WITH NO_WAIT

declare @ErrorMessage nvarchar(400)
declare @TotalMessage nvarchar(400)

Begin try
Begin transaction Mismatch_delete with mark 'Mismatch_delete'

Merge Articles_new T
using (select * from Articles_new WITH (NOLOCK) except select * from Articles WITH (NOLOCK)) S
   on S.ArticleID = T.ArticleID
WHEN MATCHED THEN delete;

commit transaction Mismatch_delete
End try
Begin catch
 if (XACT_STATE()) = -1
 begin
 ROLLBACK TRANSACTION Mismatch_delete
 Set @ErrorMessage = ERROR_MESSAGE()
 Set @TotalMessage = 'Mismatch scan phase for new deletes has failed due to ......' + @ErrorMessage
 EXECUTE msdb.dbo.sp_notify_operator @name=N'Operator', @subject=N'Failure of Mismatch scan phase for new deletes', @body=@TotalMessage
 end
end catch

ALTER DATABASE [Workshops] SET RECOVERY FULL WITH NO_WAIT

Kindly keep in mind the cumulative considerations enumerated in the first articles of this series, besides the ones below:

1- The basic concepts of the mismatch scans are as below:

  • Scanning for any newly inserted data in the old table, to be inserted into the new table where our ETL process was implemented.
  • Scanning for any updated data by looking for records shared between the 2 tables on the PK values but different in any other column.
  • Scanning for any deleted data by using the EXCEPT command to get the data that exists in the new table but no longer exists in the old one, which here means “deleted”.

2- The 3 phases can be undertaken asynchronously without any risk at all, which makes the ETL process more reliable (one possible scheduling sketch follows below).
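
One possible way to run the 3 scans asynchronously is to wrap each script in its own SQL Server Agent job with its own schedule. The sketch below registers only the insert scan; the job name, the schedule and the wrapper procedure usp_Mismatch_Insert_Scan are illustrative assumptions, not part of the original solution:

USE msdb;
EXEC dbo.sp_add_job         @job_name = N'DWH - Mismatch scan (inserts)';
EXEC dbo.sp_add_jobstep     @job_name = N'DWH - Mismatch scan (inserts)',
                            @step_name = N'Run mismatch insert scan',
                            @subsystem = N'TSQL',
                            @database_name = N'Workshops',
                            @command = N'EXEC dbo.usp_Mismatch_Insert_Scan;';   --assumes the script above is wrapped in a proc
EXEC dbo.sp_add_jobschedule @job_name = N'DWH - Mismatch scan (inserts)',
                            @name = N'Nightly 01:00',
                            @freq_type = 4,                                     --daily
                            @freq_interval = 1,
                            @active_start_time = 10000;                         --01:00:00 (HHMMSS)
EXEC dbo.sp_add_jobserver   @job_name = N'DWH - Mismatch scan (inserts)', @server_name = N'(local)';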

I know these were heavyweight workshop examples for data warehousing, but they are realistic examples, which makes them more meaningful and closer to what you could experience in your current work environment.

Therefore, I will come up next with a supplemental series on what is called “Data sources & linked Servers”, to complement your data warehousing projects if you are going to build them from different RDBMSs instead of one RDBMS.

Data Warehousing Workshop (3/4)

In the last blog we were talking about DWH solutions for online reporting, and in this session we are going to build a new DWH solution.

B - Online Data warehousing solutions for Archiving purposes:

I am still using this terminology “online”, since it always represents a challenging point for any DBA or DB analyst who is building a DWH solution.

Fundamentally, the DWH solution for archiving a table is broken into two halves, and each half can run separately at any time, which adds another rich power to this new online archiving solution:

• Bulk insert the needed data from the live table Employees to the archived table Employees_Archived

• Bulk delete from Employees the data matched with Employees_Archived

But keep in mind that this DWH solution still works as an offline solution. To convert it to online data warehousing, we have to break the bulk delete into small batches of an appropriate size according to the volume of transactions and the volume of data (I suggest 1000 records), and leave an appropriate delay between each batch and the next, for example 10 or 15 seconds, to avoid any tangible locks or deadlocks. We can thus mitigate the impact and approach very closely the meaning of “Online DWH”.

First phase of Archiving (Bulk Insert):

 
use Workshops

--Create the Archived table
CREATE TABLE [dbo].[Employees_Archived](
	[empid] [int] IDENTITY(1,1) NOT NULL,
	[empname] [nvarchar](100) NULL,
	[deptid] [int] NULL,
	[Salary] [float] NULL,
 CONSTRAINT [PK_Employees2] PRIMARY KEY CLUSTERED
(
	[empid] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]

GO

--Start the Archiving Solution
set xact_abort on;
set nocount on;
set deadlock_priority low;
declare @ErrorMessage nvarchar(400)
declare @TotalMessage nvarchar(400)

Begin try
Begin transaction Insert_phase1 with mark 'Insert_phase1'

begin

-- Merge live data to the archived table
set identity_insert Employees_Archived on

merge Employees_Archived T using Employees S WITH (NOLOCK) on s.[empid] = T.[empid]
WHEN NOT MATCHED THEN INSERT
([empid]
,[empname]
,[deptid]
,[Salary])
values
(s.[empid]
,s.[empname]
,s.[deptid]
,s.[Salary]);

set identity_insert Employees_Archived off

end
commit transaction Insert_phase1
End try

--End of Try part and start of Catch part
Begin catch

--Checking Failures of Archiving phase 1
if (XACT_STATE()) = -1

begin

--Rollback Archiving phase 1 and send a mail notification
ROLLBACK TRANSACTION Insert_phase1

Set @ErrorMessage = ERROR_MESSAGE()
Set @TotalMessage = 'The 1st phase of Archiving (Bulk Insert) has failed due to ......' + @ErrorMessage
EXECUTE msdb.dbo.sp_notify_operator @name=N'Operator', @subject=N'Failure of the 1st phase of Archiving (Bulk Insert)', @body=@TotalMessage

end

end catch

Second phase of Archiving (Bulk Delete):

 
--Archiving Phase 2 (Bulk Delete)
set xact_abort on;
set nocount on;
set deadlock_priority low;
declare @ErrorMessage nvarchar(400)
declare @TotalMessage nvarchar(400)
declare @count int

Select @count = count(1)
from Employees_Archived t with (nolock)
inner join Employees s with (nolock) on s.empid = t.empid

--Start batching the deletion phase into smaller buckets of an appropriate size like 1000 records
while (@count > 0)

begin

Begin try
Begin transaction Delete_phase1 with mark 'Delete_phase1'

begin

Merge Top (1000) Employees T using Employees_Archived s on s.[empid] = T.[empid]
when matched then delete;

end
commit transaction Delete_phase1
End try

--End of Try part and start of Catch part
Begin catch

--Checking Failures of Archiving phase 2
if (XACT_STATE()) = -1

begin

--Rollback Archiving phase 2 and send a mail notification
ROLLBACK TRANSACTION Delete_phase1

Set @ErrorMessage = ERROR_MESSAGE()
Set @TotalMessage = 'The 2nd phase of Archiving (Bulk Delete) has failed due to ......' + @ErrorMessage
EXECUTE msdb.dbo.sp_notify_operator @name=N'Operator', @subject=N'Failure of the 2nd phase of Archiving (Bulk Delete)', @body=@TotalMessage

end

end catch

set @count = @count - 1000

--Adding a delay time between each batch and the next, like 15 sec, to mitigate heavy locks and thus impacts

waitfor delay '00:00:15'

end

The 1st archiving phase (Bulk Insert) is a simple MERGE command used to migrate data from the Employees table to the Employees_Archived table, as we explained with a similar example in the last blog. The 2nd archiving phase is a WHILE loop based on the @count parameter, which expresses the number of matched records between Employees and Employees_Archived; the whole Try/Catch transaction is included within this loop so it is repeated for each 1000 records, as per our example.

You also have to take the checklist below into consideration:

1- While doing many bulk inserts/deletes, it is heavily recommended to put the DB under the BULK_LOGGED recovery model to save much storage capacity in the log files.

2- The WAITFOR DELAY '00:00:15' does not mark an ugly wait here, even if it looks that way to a DBA through the monitors; it is just a benign wait, like that of Service Broker, for example.

3- The bulk insert and bulk delete phases can be conducted at 2 different time intervals without any risk, thanks to the rich powers of the MERGE command of SQL Server 2008.

4- If any failure happens during the 2nd phase “Bulk Delete”, it will not roll back all data; only the last batch (1000 records in our example) will be rolled back safely, without any data inconsistency, which adds another rich power to this archiving solution.

In the next blog, we will turn our attention to DWH solutions used for online ETL processes.