Hi there,
I am trying to create a list of all the tables in one database and then list the size of each table. So, for example, I want to create a table with the table name and table size for one DB, e.g.:
Table1 1111KB
Table2 123300MB
Table3 120448KB
etc., for all the tables in a particular DB.
I know there is a stored procedure to list the sizes, 'sp_spaceused', but I am not sure how to script all this together.
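For a single table I can run it directly, something like this ('Table1' here is just a placeholder name):

-- example of running sp_spaceused for one table
EXEC sp_spaceused 'Table1'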
Can anyone help, please?
From
NewToSQL

|||If you don't do an UPDATE STATISTICS (http://msdn.microsoft.com/library/default.asp?url=/library/en-us/tsqlref/ts_ua-uz_1mpf.asp) you'll probably be dealing with GIGO, but you could use:

SELECT
Coalesce(8 * Sum(CASE WHEN si.indid IN (255) THEN si.reserved END), 0) AS blob_kb
, 8 * Sum(CASE WHEN si.indid IN (0, 1) THEN si.reserved END) AS data_kb
, Coalesce(8 * Sum(CASE WHEN si.indid NOT IN (0, 1, 255) THEN si.reserved END), 0) AS index_kb
, so.name
FROM dbo.sysobjects AS so
JOIN dbo.sysindexes AS si
ON (si.id = so.id)
WHERE 'U' = so.type
GROUP BY so.name
ORDER BY so.name
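If all that's wanted is a single figure per table (as in the original post), the same join can be collapsed into one column; a rough variation of the query above, with the same caveat about stale usage numbers:

-- Rough variation: one total-KB figure per user table (the blob, data,
-- and index buckets from the query above folded together).
SELECT so.name
, 8 * Sum(si.reserved) AS total_kb
FROM dbo.sysobjects AS so
JOIN dbo.sysindexes AS si
ON (si.id = so.id)
WHERE 'U' = so.type
GROUP BY so.name
ORDER BY total_kb DESC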
-PatP

|||Or...

USE Northwind
GO
SET NOCOUNT ON
GO
CREATE TABLE #SpaceUsed (
[name] varchar(255)
, [rows] varchar(25)
, [reserved] varchar(25)
, [data] varchar(25)
, [index_size] varchar(25)
, [unused] varchar(25)
)
GO
-- Loop through the user tables in name order and capture the output of
-- sp_spaceused for each one into #SpaceUsed.
DECLARE @tablename nvarchar(128)
, @maxtablename nvarchar(128)
, @cmd nvarchar(1000)

SELECT @tablename = ''
, @maxtablename = MAX(name)
FROM sysobjects
WHERE xtype = 'u'

WHILE @tablename < @maxtablename
BEGIN
    SELECT @tablename = MIN(name)
    FROM sysobjects
    WHERE xtype = 'u' AND name > @tablename

    SET @cmd = 'exec sp_spaceused [' + @tablename + ']'
    INSERT INTO #SpaceUsed EXEC sp_executesql @cmd
END
END
SET NOCOUNT OFF
GO
SELECT * FROM #SpaceUsed
GO
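-- Optional extra step before the DROP below: sp_spaceused returns its size
-- columns as strings such as '152 KB', so sorting by size means stripping
-- the ' KB' suffix and casting first. A rough example:
SELECT [name]
, CAST(REPLACE([reserved], ' KB', '') AS int) AS reserved_kb
FROM #SpaceUsed
ORDER BY reserved_kb DESC
GO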
DROP TABLE #SpaceUsed
GO

|||I generally use
dbcc updateusage(0)
go
select sum(reserved)*8 as "Size in KB", object_name(id)
from sysindexes
where indid in (0, 1, 255)
group by id
order by 1 desc
The usage statistics tend to decay over time, as Pat pointed out.

|||Is that just the index, or the index and the data pages?

|||Both, and text. The indid is what determines it.
indid = 0 = heap
indid = 1 = clustered index
indid = 255 = text/image
What I need is a way to subtract the nonclustered indexes, in the case that they happen to be on separate filegroups. Then I can get a script together to monitor space usage on a multi-filegroup system.
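Something along these lines is roughly what I have in mind, as a sketch against the SQL 2000 system tables (sysindexes.groupid joined to sysfilegroups; the reserved > 0 test is just a rough way to skip statistics-only rows):

select object_name(si.id) as table_name
, si.name as index_name
, fg.groupname as filegroup_name
, si.reserved * 8 as reserved_kb
from sysindexes as si
join sysfilegroups as fg
on fg.groupid = si.groupid
where objectproperty(si.id, 'IsUserTable') = 1
and si.indid between 2 and 254 -- nonclustered indexes only
and si.reserved > 0 -- rough filter to skip statistics-only rows
order by filegroup_name, table_name, index_name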
|||If you look at my original posting, the non-clustered indices are what are reported as index_kb. The data pages are either the heap or the clustered index, and the blob (TEXT and IMAGE) pages are just that; the index_kb is what is left.

-PatP

|||I can't be too certain, but I think that M$ drops all of the index pages into the reserved count for indids 0 and 1. For a test, I created a table, loaded a bit of data into it, ran dbcc updateusage(0), and ran both our scripts. Then I created an index on the table, ran the dbcc again, ran both scripts, and looked at what changed. In the second instance, it looks like the data page count goes up for the table.
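In rough, repeatable form, the test looked something like this (the table and index names are made up for the example):

-- throwaway table with a bit of data
create table dbo.SizeTest (id int identity(1,1) primary key, filler char(200) not null)
insert into dbo.SizeTest (filler)
select top 1000 'x' from sysobjects a cross join sysobjects b
dbcc updateusage(0)
-- run both size scripts here and note the data and index figures
create nonclustered index IX_SizeTest_filler on dbo.SizeTest (filler)
dbcc updateusage(0)
-- run both size scripts again and compare
drop table dbo.SizeTest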
This sort of thing just makes me a little more jealous of the Oracle DBA, who can get all of his size statistics with a lot less back-bending.

|||Was the index you created clustered, or non-clustered? If it was clustered, then that is exactly the behavior that I would expect. If not, then hmmmm...
-PatP

|||Thanks for your help, this worked a treat!

|||I'm just curious, but which suggestion did you end up using?
-PatP

|||I used Brett's in the end, as it was similar to what I had originally tried to do, and because I then put the contents of the table into an Excel spreadsheet.

|||Pat: I added a nonclustered index. The definitions of the columns reserved, used, and dpages in Books Online are almost infuriating. I don't think there is any way to derive the number of reserved pages for a particular index on a separate filegroup from the data. I thought about just rounding the dpages value up to the next multiple of 8, but that does not take into account highly fragmented indexes. It may be that you have to run dbcc showcontig to get the actual values, but that is too resource intensive for a simple monitor.
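If it comes to that, the rough shape would be a call like the one below (the table name is just a placeholder, and the TABLERESULTS / ALL_INDEXES / FAST options are from memory); the tabular output includes a Pages figure per index that could be captured into a work table:

-- rough sketch only; the resource cost is the concern noted above
dbcc showcontig ('dbo.Orders') with tableresults, all_indexes, fast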