Recently I went to a client for a one-day data warehouse performance tuning exercise. With only one day, it’s important to find the system’s pain points quickly. I remembered seeing a webinar in which Brent explained the sp_Blitz scripts, so I decided to bring them with me. I couldn’t have made a better choice.
There was a serious indexing problem at the client. They had heard “indexes make reads go faster”, so they slapped a lot of indexes on most of the tables. None of them clustered. I ran the script with its default settings and I quickly got a list of all the problems it could find with the indexes on the data warehouse.
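For reference, this is roughly how you run it. sp_BlitzIndex ships as part of Brent Ozar’s free First Responder Kit; the parameter names below are from that kit, and the database name is of course made up, so check the kit’s documentation for your version:

```sql
-- Default mode: diagnose the most urgent index problems in one database
EXEC dbo.sp_BlitzIndex @DatabaseName = 'MyDataWarehouse';

-- Mode 2 returns a full inventory of every index instead of a diagnosis
EXEC dbo.sp_BlitzIndex @DatabaseName = 'MyDataWarehouse', @Mode = 2;
```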
It gave me an overview of the following items, all of which were immediately actionable:
- Duplicate indexes. Remove the offenders immediately.
- Near-duplicate indexes. If, for example, one index has columns (A, B, C) and another has (A, B), the (A, B) index is redundant: delete it.
- Heaps. Quite a long list, but the script also has a section on which tables are accessed the most, which allowed us to focus on the more important heaps in the data warehouse.
- The so-called work-a-holics: indexes that are used a lot. I focused on making these indexes more efficient: could I turn one into a filtered index, or add some included columns?
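To make those fixes concrete, here is a hypothetical T-SQL sketch. The table, index, and column names are invented for illustration; the patterns are the ones described above:

```sql
-- Near-duplicate: IX_Sales_AB on (A, B) is covered by an index on (A, B, C)
DROP INDEX IX_Sales_AB ON dbo.Sales;

-- Heap fix: give the table a clustered index
CREATE CLUSTERED INDEX CX_Sales_SaleDate ON dbo.Sales (SaleDate);

-- Work-a-holic tune-up: a filtered index with included columns,
-- so frequent queries on recent rows are covered without key lookups
CREATE NONCLUSTERED INDEX IX_Sales_Recent
    ON dbo.Sales (CustomerId)
    INCLUDE (Amount, SaleDate)
    WHERE SaleDate >= '2013-01-01';
```

Note that a filtered index only helps queries whose predicates match (or are stricter than) the WHERE clause of the index.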
Other topics were listed as well, but these were the main ones I focused on.
What’s great is that this script also provides you with the URLs to knowledge articles on the Brent Ozar website. If you don’t understand one of the results, you can immediately look it up and read about it.
By focusing on the results of the sp_BlitzIndex script, I could boost performance in just a few hours of work. This near real-time data warehouse is the source for a reporting application used by dozens of people in the field, and they could immediately tell it was a lot faster. Awesomesauce.
Disclaimer: I was honestly really impressed with the results. I did not get paid by Brent for this blog post. 🙂