
Friday, April 29, 2011

Adobe Flash Live Streaming Technology

WPF Good Links

http://msdn.microsoft.com/en-us/library/cc296381.aspx

http://karlshifflett.wordpress.com/2008/04/10/wpf-sample-series-wpf-mdi-task-switching/

http://blogs.msdn.com/b/llobo/archive/tags/wpf+samples/

http://blogs.msdn.com/b/llobo/archive/2008/02/26/drag-drop-library-updated.aspx

SQL Best practices and Tips

1. Avoid DISTINCT unless you genuinely need to remove duplicates; it forces an extra sort or hash operation.



2. To fetch the last record, SELECT TOP 1 ... ORDER BY ID DESC works, as does WHERE ID = (SELECT MAX(ID) FROM Tbl). Related TOP forms: TOP N, TOP N PERCENT, and TOP N WITH TIES.
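To illustrate tip 2, both queries below return the most recent row of a hypothetical table Tbl keyed by an ever-increasing ID column (the table name is illustrative):

```sql
-- Variant A: TOP 1 with ORDER BY (reads the index backwards)
SELECT TOP 1 * FROM Tbl ORDER BY ID DESC;

-- Variant B: MAX(ID) lookup, a seek on the ID index
SELECT * FROM Tbl WHERE ID = (SELECT MAX(ID) FROM Tbl);

-- Other TOP forms: TOP 10, TOP 10 PERCENT,
-- TOP 10 WITH TIES (WITH TIES requires an ORDER BY clause)
```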



3. Eliminate table scans when their cost is high; an index seek is better for I/O and performance.



4. Prefer explicit INNER JOIN syntax over the old-style equi-join: SELECT col1, col2, col3 FROM table_name1, table_name2 WHERE table_name1.col2 = table_name2.col1;
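Both forms below return the same rows and typically produce the same plan; the explicit JOIN syntax simply keeps the join condition out of the WHERE clause (table and column names are illustrative):

```sql
-- Old-style equi-join:
SELECT t1.col1, t1.col2, t2.col3
FROM table_name1 t1, table_name2 t2
WHERE t1.col2 = t2.col1;

-- Explicit INNER JOIN, same result:
SELECT t1.col1, t1.col2, t2.col3
FROM table_name1 t1
INNER JOIN table_name2 t2 ON t1.col2 = t2.col1;
```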



5. Define the PRIMARY KEY (clustered index) first, then create non-clustered indexes; every non-clustered index stores the clustered index key as its row locator.



6. Use ORDER BY only when required.



7. Use a composite index key when the WHERE clause filters on more than one column.
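As a sketch of tip 7 (index name and literal values are illustrative, columns borrowed from the dbo.[order] example below): if the WHERE clause filters on Contract_No and Status together, one composite index serves both filters:

```sql
-- Composite key: put the most selective column first.
CREATE NONCLUSTERED INDEX IX_Order_Contract_Status
    ON [dbo].[order] ([Contract_No], [Status]);

-- This query can now seek on the composite index:
SELECT [Order_No]
FROM [dbo].[order]
WHERE [Contract_No] = 'C-1001' AND [Status] = 'OPEN';
```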



8. Use INCLUDE columns to build a covering index, for example:
CREATE NONCLUSTERED INDEX [] ON [dbo].[order] ([Contract_No],[Delete_Flag],[Status]) INCLUDE ([Order_No],[Exchange_Id])



Index fragmentation - Health check



SELECT CAST(DB_NAME(database_id) AS varchar(20)) AS [Database Name],
       CAST(OBJECT_NAME(object_id) AS varchar(20)) AS [Table Name],
       index_id, index_type_desc,
       avg_fragmentation_in_percent, avg_page_space_used_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID('SOLO'), OBJECT_ID('dbo.order'), NULL, NULL, 'DETAILED')

SELECT index_id, index_depth,
       CAST(avg_fragmentation_in_percent AS int) AS avg_fragmentation_in_percent,
       page_count
FROM sys.dm_db_index_physical_stats(DB_ID('SOLO'), OBJECT_ID('dbo.order'), NULL, NULL, 'DETAILED')
ORDER BY 1, 2




Replace the NOT IN clause with != when comparing against a single value.
Avoid indexing low-selectivity columns such as Boolean or status flags; such indexes give little benefit.




WHERE SUBSTRING(firstname, 1, 1) = 'm'
can be rewritten like this:
WHERE firstname LIKE 'm%'




If you currently have a query that uses NOT IN, which performs poorly because the SQL Server optimizer has to use a nested table scan, try one of the following alternatives, all of which usually offer better performance:
- Use EXISTS or NOT EXISTS
- Use IN
- Perform a LEFT OUTER JOIN and check for a NULL condition




WHERE clauses that use NOT are not sargable, but can often be rewritten to remove the NOT from the WHERE clause, for example:
WHERE NOT column_name > 5
to
WHERE column_name <= 5





If you can rewrite the WHERE clause so that the column and the function are separated, the query can use an available index, greatly boosting performance. For example:
Function acts directly on the column, so the index cannot be used:
SELECT member_number, first_name, last_name
FROM members
WHERE DATEDIFF(yy, dateofbirth, GETDATE()) > 21
Function has been separated from the column, so an index can be used:
SELECT member_number, first_name, last_name
FROM members
WHERE dateofbirth < DATEADD(yy, -21, GETDATE())





A function on the column prevents index use:
SELECT column_name FROM table_name WHERE LOWER(column_name) = 'name'
Checking both cases explicitly lets the index be used:
SELECT column_name FROM table_name WHERE column_name = 'NAME' OR column_name = 'name'
The second query will run much faster than the first.



Here are the key operators used in the WHERE clause, ordered by their performance. Those operators at the top will produce results faster than those listed at the bottom.
=
>, >=, <, <=
LIKE
<>
The lesson here is to use = as much as possible and <> as little as possible.





Carefully evaluate whether your SELECT query needs the DISTINCT clause or not



While in most cases either option works equally efficiently, there are some instances (such as rows returned from an unsorted heap) where the TOP operator is more efficient than SET ROWCOUNT. Because of this, using the TOP operator is preferable to using SET ROWCOUNT to limit the number of rows returned by a query.
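A minimal sketch of the two ways to limit a result set (table and column names are illustrative):

```sql
-- SET ROWCOUNT: a session-level limit that must be reset afterwards.
SET ROWCOUNT 10;
SELECT OrderID FROM Orders;
SET ROWCOUNT 0;  -- reset, or the limit silently applies to later statements

-- TOP: part of the query itself, handled inside the plan; generally preferred.
SELECT TOP 10 OrderID FROM Orders;
```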





Whenever SQL Server has to perform a sorting operation, additional resources are used. Sorting often occurs when any of the following Transact-SQL statements are executed:
- ORDER BY
- GROUP BY
- SELECT DISTINCT
- UNION
- CREATE INDEX (generally less critical, as it happens much less often)
In many cases these commands cannot be avoided. On the other hand, there are a few ways sorting overhead can be reduced:
- Keep the number of rows to be sorted to a minimum, by returning only those rows that absolutely need to be sorted.
- Keep the number of columns to be sorted to a minimum; don't sort more columns than required.
- Keep the width (physical size) of the sorted columns to a minimum.
- Sort columns with numeric datatypes instead of character datatypes.
Keep these performance-boosting suggestions in mind when using any of the above commands.





USE Northwind
SELECT OrderID
FROM [Order Details]
WHERE UnitPrice > 10
GROUP BY OrderID
or
USE Northwind
SELECT DISTINCT OrderID
FROM [Order Details]
WHERE UnitPrice > 10
Both of the above queries produce the same results, but the second one uses fewer resources and performs faster.





It is a fairly common request to write a Transact-SQL query that compares a parent table and a child table and finds any parent records that don't have a match in the child table. Generally, there are three ways this can be done:
Using NOT EXISTS:
SELECT a.hdr_key
FROM hdr_tbl a
WHERE NOT EXISTS (SELECT * FROM dtl_tbl b WHERE a.hdr_key = b.hdr_key)
Using a LEFT JOIN:
SELECT a.hdr_key
FROM hdr_tbl a
LEFT JOIN dtl_tbl b ON a.hdr_key = b.hdr_key
WHERE b.hdr_key IS NULL
Using NOT IN:
SELECT hdr_key
FROM hdr_tbl
WHERE hdr_key NOT IN (SELECT hdr_key FROM dtl_tbl)





Here's how you might use COUNT(*):
IF (SELECT COUNT(*) FROM table_name WHERE column_name = 'xxx') > 0
Here's a faster way, using IF EXISTS:
IF EXISTS (SELECT * FROM table_name WHERE column_name = 'xxx')
The reason IF EXISTS is faster than COUNT(*) is that the query can stop as soon as the condition is proven true, while COUNT(*) must go through every matching record, whether there is one or thousands, before the condition can be evaluated.






Performing UPDATEs takes extra resources in SQL Server. When performing an UPDATE, follow as many of these recommendations as you can; the more of them you apply, the faster the UPDATE will perform:
- If you are updating a column that has a unique index, try to update only one row at a time.
- Try not to change the value of a column that is also the primary key.
- When updating VARCHAR columns, try to replace the contents with contents of the same length.
- Try to minimize updates to tables that have UPDATE triggers.
- Try to avoid updating columns that will be replicated to other databases.
- Try to avoid updating heavily indexed columns.
- Try to avoid updating a column that has a reference in the WHERE clause to the column being updated.
Of course, you may have very little choice when updating your data, but at least give the above suggestions a thought. [6.5, 7.0, 2000]
If you have created a complex transaction with several parts, one of which is more likely than the others to roll the transaction back, you will get better performance by placing the most-likely-to-fail part at the front of the transaction. That way, if it does roll back, no resources have been wasted on the other, less-likely-to-fail parts.





You often need to INSERT the same value into a column. For example, perhaps you perform 100,000 INSERTs a day into a particular table, and 90% of the time the data inserted into one of the columns is the same value. If this is the case, you can reduce network traffic (along with some SQL Server overhead) by creating that column with a default of the most common value. Then, when the data to insert is the default value, you simply omit the column from the INSERT and let the default be filled in for you.
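A minimal sketch of this default-value technique (table, column, and constraint names are illustrative):

```sql
-- Hypothetical table where 90% of inserted rows have Status = 'OPEN'.
CREATE TABLE Orders (
    OrderID INT IDENTITY(1,1) PRIMARY KEY,
    Amount  MONEY NOT NULL,
    Status  VARCHAR(10) NOT NULL
        CONSTRAINT DF_Orders_Status DEFAULT ('OPEN')
);

-- Common case: omit Status and the default is filled in server-side.
INSERT INTO Orders (Amount) VALUES (99.50);

-- Exceptional case: supply the value explicitly.
INSERT INTO Orders (Amount, Status) VALUES (12.00, 'CLOSED');
```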





We discussed various T-SQL discovery, optimization, and tuning techniques using the SET and DBCC commands. SQL Server performance is a composite of many factors.

Friday, April 22, 2011

FaceBook - My view

Hi All,

I am new to Facebook (FB) and have come to appreciate its features. It helps us remember friends' birthdays, send messages, and share information across groups.

The downside is that it tempts us to log in multiple times a day to read comments and messages, eventually making us addicted to it.

Many of you are sharing YouTube movie songs that already play on every TV music channel, which is ultimately not useful and a waste of bandwidth and time.

I have gradually reduced my time on FB and now spend it reading books, which focuses the mind in one direction, whereas on the web we jump from one link to another and finally accomplish nothing.

I suggest that youngsters not spend too much time on FB. This is the right age to focus on your career and studies.

Facebook is a part of life, not the heart of life.

Warm Regards
Balakrishnan G.

Thursday, April 21, 2011

SQL COALESCE & Top N within Group By

CREATE TABLE EMPLOYEE(
[EMPLOYEEID] INT PRIMARY KEY,
[NAME] NVARCHAR(50),
[MANAGERID] INT
)

GO

INSERT INTO EMPLOYEE VALUES(101,'Mary',102)
INSERT INTO EMPLOYEE VALUES(102,'Ravi',NULL)
INSERT INTO EMPLOYEE VALUES(103,'Raj',102)
INSERT INTO EMPLOYEE VALUES(104,'Pete',103)
INSERT INTO EMPLOYEE VALUES(105,'Prasad',103)
INSERT INTO EMPLOYEE VALUES(106,'Ben',103)

GO

SELECT E1.[NAME],COALESCE(E2.[NAME],'No Manager') AS [MANAGER NAME]
FROM EMPLOYEE E1
LEFT OUTER JOIN EMPLOYEE E2
ON E2.EMPLOYEEID =E1.MANAGERID


SQL Select Top N within Group By

drop table #test
create table #test (testid int identity(1,1), personid int, persondate int)

insert into #test (personid, persondate) values (1, 1)
insert into #test (personid, persondate) values (1, 2)
insert into #test (personid, persondate) values (1, 3)
insert into #test (personid, persondate) values (1, 4)
insert into #test (personid, persondate) values (1, 5)

insert into #test (personid, persondate) values (2, 1)
insert into #test (personid, persondate) values (2, 2)
insert into #test (personid, persondate) values (2, 3)
insert into #test (personid, persondate) values (2, 4)
insert into #test (personid, persondate) values (2, 5)

insert into #test (personid, persondate) values (3, 1)
insert into #test (personid, persondate) values (3, 2)
insert into #test (personid, persondate) values (3, 3)

insert into #test (personid, persondate) values (4, 1)
insert into #test (personid, persondate) values (4, 2)

SELECT * FROM #test

SELECT b.testid, b.personid, b.persondate, count(1) AS RowOrder
FROM #test a, #test b
WHERE a.persondate >= b.persondate AND a.personid = b.personid
GROUP BY b.testid, b.personid, B.persondate
HAVING COUNT(1) BETWEEN 1 AND 1 -- top 1 per group; use BETWEEN 1 AND N for the top N
ORDER BY 1 DESC
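On SQL Server 2005 and later, the same top-N-per-group result can also be obtained with ROW_NUMBER(), which avoids the triangular self-join above; a sketch against the same #test table:

```sql
SELECT testid, personid, persondate
FROM (
    SELECT testid, personid, persondate,
           -- number rows within each personid, earliest persondate first
           ROW_NUMBER() OVER (PARTITION BY personid
                              ORDER BY persondate) AS RowOrder
    FROM #test
) t
WHERE RowOrder = 1  -- use RowOrder <= N for the top N per group
ORDER BY testid DESC
```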

Tuesday, April 5, 2011

Query Tuning Steps

Most DBAs new to query tuning wonder where to start.

Step 1: Run the query in Management Studio and view the actual execution plan. To include the execution plan, press Ctrl+M and then execute the query in SSMS.

Step 2: In the Execution plan tab next to Results, check whether there are any table scans, clustered index scans, or index scans in the plan. If yes, analyze that table's part of the plan thoroughly.

Step 3: Identify the actual number of rows in the table being scanned. If the table is reasonably large, i.e. greater than 2000 rows, check whether there are proper indexes on the table. If the table has fewer than 2000 records, a table scan isn't a problem, and I would rather prefer a table scan on those tables.

Step 4: If there is already an index, analyze why the optimizer preferred a clustered index scan or an index scan rather than a seek. The reason may be fragmentation, outdated statistics, low selectivity, or the query cost.

Step 5: The following query gives the exact percentage of fragmentation in the indexes of a particular table. This example displays the fragmentation status of "Person.Address" in the AdventureWorks database:

SELECT CAST(DB_NAME(database_id) AS varchar(20)) AS [Database Name],
       CAST(OBJECT_NAME(object_id) AS varchar(20)) AS [Table Name],
       index_id, index_type_desc,
       avg_fragmentation_in_percent, avg_page_space_used_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID('AdventureWorks'), OBJECT_ID('Person.Address'), NULL, NULL, 'Detailed')

If avg_fragmentation_in_percent is > 40%, rebuild the index (using the ALTER INDEX ... REBUILD command) to eliminate fragmentation. It's recommended to schedule a rebuild-index job for all tables on a weekly basis. Please NOTE that rebuilding an index is an expensive operation; ensure it's done only during off-production hours.
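The rebuild mentioned in Step 5 can be sketched as follows (the index name is illustrative):

```sql
-- Rebuild every index on the table to remove fragmentation.
ALTER INDEX ALL ON Person.Address REBUILD;

-- Or rebuild a single index by name:
ALTER INDEX IX_Address_StateProvinceID ON Person.Address REBUILD;
```

For lower fragmentation levels, ALTER INDEX ... REORGANIZE is a lighter-weight, online alternative to a full rebuild.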
Step 6: If the indexes are fine, check the statistics. Sometimes the indexes are fine but the query is still slow, because the optimizer cannot choose the correct indexes due to outdated statistics. The following query gives the last time the statistics for an index were updated:

SELECT name AS Stats_Name,
       STATS_DATE(object_id, stats_id) AS Statistics_Update_Date
FROM sys.stats
WHERE object_id = OBJECT_ID('Person.Address')

Statistics should be updated weekly, daily, or on alternate days depending on how frequently the table is modified: the more frequently the table is modified, the more frequently the statistics should be updated. For high-transaction tables you can schedule a job to update statistics on a regular basis. Please NOTE that rebuilding an index automatically updates its statistics as well, so avoid updating statistics if you are rebuilding the index.

Step 7: If you see key lookups in the execution plan, use included columns to create a covering nonclustered index and avoid the expensive lookup operation. This improves query performance, as the logical reads are reduced to a great extent.

Step 8: Ensure that each table has a clustered index, preferably on the primary key columns (by default the primary key is clustered unless you explicitly specify NONCLUSTERED) or on identity columns. The clustered index should always be defined on unique-valued columns such as primary keys or identity columns.

Step 9: If you have a composite index, ensure the most selective field (the one with the most unique values) is the leading column of the index.

Step 10: If you couldn't tune the query further, or if you are clueless, try the Database Engine Tuning Advisor (DTA). Provide the SQL query as an input file and run the DTA; it will provide a list of recommendations to reduce the query cost.
Please do NOT blindly implement the DTA suggestions. Doing so would certainly improve the query's performance, but you would end up creating numerous indexes that are difficult to maintain during maintenance operations. You have to take the call on creating the indexes DTA suggests: check whether each index will be used in most cases, or whether you can rewrite the query to make use of the existing indexes.

Step 11: While tuning stored procedures, ensure their query plans are being cached. The following query provides caching info for stored procedures:

SELECT usecounts, cacheobjtype, objtype, [text]
FROM sys.dm_exec_cached_plans P
CROSS APPLY sys.dm_exec_sql_text(plan_handle) S
WHERE cacheobjtype = 'Compiled Plan'
  AND objtype = 'Proc'
  AND [text] NOT LIKE '%dm_exec_cached_plans%'
  AND S.dbid = 11 -- mention the database id for the respective database (use sp_helpdb to get the dbid)

The value of usecounts increases every time you run the same stored procedure. If there is a caching problem, check for SET options inside the procedure, as most of them cause the query plan to recompile. Also, the plan is flushed every time you run DBCC FREEPROCCACHE or DBCC FLUSHPROCINDB. Never use either in a production environment, since they remove the cache for all procedures, which then have to be recompiled the next time they run. If you suspect a problem in the query plan, you can try the WITH RECOMPILE option, which recompiles the stored procedure every time it runs, and see how the performance is:

CREATE PROC Test
WITH RECOMPILE
AS
Statement 1
Statement 2

Step 12: Finally, if all the above are fine and the query still couldn't be tuned, try to rewrite the query.
In a few cases, as soon as you see the query you know it needs to be rewritten, for example:

Creating a view with TOP 100 PERCENT just to include an ORDER BY clause in the view definition. The view will NOT be sorted unless you explicitly sort it when selecting from it:
Select * from view order by column1 -- result will be sorted
Select * from view -- result will NOT be sorted
Thus there is extra cost involved in the ORDER BY inside the view definition even though the result is NOT sorted. Hence we should avoid ORDER BY in the view definition and instead use SELECT * FROM view ORDER BY column1.

Correlated subqueries cause RBAR (Row By Agonizing Row) and will affect performance.

Avoid using scalar functions in SELECT statements and instead use inline or table-valued functions. Since a scalar function behaves like a cursor, avoid referencing it in the SELECT list.

Source: http://thedbadelight.blogspot.com/2011/01/normal-0-false-false-false-en-us-x-none.html
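The scalar-vs-inline point above can be sketched as follows (function names are illustrative, reusing the members table from the DATEDIFF example; run each CREATE in its own batch):

```sql
-- Scalar UDF: invoked once per row of the calling query, cursor-like cost.
CREATE FUNCTION dbo.fn_FullName (@id INT)
RETURNS NVARCHAR(101)
AS
BEGIN
    DECLARE @name NVARCHAR(101);
    SELECT @name = first_name + ' ' + last_name
    FROM members WHERE member_number = @id;
    RETURN @name;
END;
```

```sql
-- Inline table-valued function: expanded into the calling query's plan,
-- so the optimizer can use indexes on members.
CREATE FUNCTION dbo.itvf_FullName (@id INT)
RETURNS TABLE
AS
RETURN (SELECT first_name + ' ' + last_name AS full_name
        FROM members WHERE member_number = @id);

-- Usage: CROSS APPLY instead of a scalar call in the SELECT list.
-- SELECT m.member_number, f.full_name
-- FROM members m
-- CROSS APPLY dbo.itvf_FullName(m.member_number) f;
```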