
sql server - Constraints check: TRY/CATCH vs Exists()

I have a table with a unique constraint on it:

create table dbo.MyTab
(
    MyTabID int primary key identity,
    SomeValue nvarchar(50)
);
Create Unique Index IX_UQ_SomeValue 
On dbo.MyTab(SomeValue);
Go

Which code is better to check for duplicates (success = 0 if duplicate found)?

Option 1

Declare @someValue nvarchar(50) = 'aaa';
Declare @success bit = 1;
Begin Try
    Insert Into dbo.MyTab(SomeValue) Values (@someValue);
End Try
Begin Catch
    -- let's assume that only constraint errors can happen
    Set @success = 0;
End Catch
Select @success;

Option 2

Declare @someValue nvarchar(50) = 'aaa';
Declare @success bit = 1;
IF EXISTS (Select 1 From dbo.MyTab Where SomeValue = @someValue)
    Set @success = 0;
Else
    Insert Into dbo.MyTab(SomeValue) Values (@someValue);
Select @success;

From my point of view, I believe that Try/Catch is for errors that were NOT expected (like deadlocks, or even constraint violations when duplicates are not expected). In this case it is possible that a user will sometimes try to submit a duplicate, so the error is expected.

I have found an article by Aaron Bertrand which states that checking for duplicates first is not much slower, even if most inserts are successful.

There is also plenty of advice on the net to use Try/Catch (to issue one statement instead of two). In my environment only about 1% of attempts would be unsuccessful, so that reasoning makes sense too.

What is your opinion? What other reasons are there to use option 1 or option 2?

UPDATE: I'm not sure it is important in this case, but the table has an INSTEAD OF UPDATE trigger (for audit purposes; row deletion also happens through an Update statement).
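For illustration only, a hypothetical sketch of the kind of INSTEAD OF UPDATE audit trigger described; the audit table and column names are assumptions, not part of the question's schema:

Create Table dbo.MyTab_Audit
(
    AuditID   int identity primary key,
    MyTabID   int,
    SomeValue nvarchar(50),
    AuditedAt datetime2 not null default sysdatetime()
);
Go

Create Trigger dbo.TR_MyTab_InsteadOfUpdate
On dbo.MyTab
Instead Of Update
As
Begin
    Set NoCount On;

    -- record the old row values for audit
    Insert Into dbo.MyTab_Audit (MyTabID, SomeValue)
    Select d.MyTabID, d.SomeValue
    From deleted As d;

    -- then apply the update ourselves
    Update t
    Set t.SomeValue = i.SomeValue
    From dbo.MyTab As t
    Join inserted As i On i.MyTabID = t.MyTabID;
End;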

1 Reply


I've seen that article, but note that for low failure rates I'd prefer the "JFDI" pattern. I've used this on high-volume systems before (40k rows/second).

In Aaron's code, you can still get a duplicate when testing first under high load with lots of writes (explained here on dba.se). This is important: your duplicates still happen, just less often. You still need exception handling, and you still need to know when to ignore the duplicate error (2627).

Edit: this is explained succinctly by Remus in another answer.
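For reference, the usual way to make the check-first approach race-free is to wrap it in a transaction and take a key-range lock at check time. A minimal sketch, assuming you are willing to hold that lock until the transaction ends (the UPDLOCK/HOLDLOCK hints are the standard pattern, not code from the question):

Declare @someValue nvarchar(50) = 'aaa';
Declare @success bit = 1;

Begin Transaction;

If Exists (Select 1
           From dbo.MyTab With (UPDLOCK, HOLDLOCK)  -- serializes concurrent checks on this key
           Where SomeValue = @someValue)
    Set @success = 0;
Else
    Insert Into dbo.MyTab(SomeValue) Values (@someValue);

Commit Transaction;

Select @success;

Even with the hints, the single-statement JFDI insert stays simpler, which is why it is preferred here for low failure rates.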

However, I would use a separate, nested TRY/CATCH to test only for the duplicate error:

BEGIN TRY

    -- stuff

    BEGIN TRY
        INSERT etc
    END TRY
    BEGIN CATCH
        -- swallow only the duplicate-key error; re-raise everything else
        IF ERROR_NUMBER() <> 2627
            RAISERROR etc
    END CATCH

    -- more stuff

END TRY
BEGIN CATCH
    RAISERROR etc
END CATCH
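For completeness, a minimal sketch tying the JFDI pattern back to the question's @success flag, assuming SQL Server 2012+ for THROW. Note that a unique INDEX raises error 2601 while a unique CONSTRAINT or primary key raises 2627, so both are treated as duplicates here:

Declare @someValue nvarchar(50) = 'aaa';
Declare @success bit = 1;

Begin Try
    Insert Into dbo.MyTab(SomeValue) Values (@someValue);
End Try
Begin Catch
    If ERROR_NUMBER() In (2601, 2627)
        Set @success = 0;   -- expected: duplicate value
    Else
        Throw;              -- anything else is a real error: re-raise it
End Catch;

Select @success;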
