
Thursday, March 29, 2012

Do Sql 2005 and Access not play well together?

Does anyone know whether I can put Access 2003 and SQL Server 2005 on the same server together? I've got an old (well, not that old) application that was farmed out coming back home, and apparently its backend is Access and the frontend is ASP, I think. Anyway, the powers that be would like to upgrade to SQL 2005 and would like everything on the same box together. I've heard rumors that's not possible... Does anyone have any ideas? Thanks a ton.

Nicole (not Anna) Smith

It is NOT a problem to install both Access and SQL Server on the same computer.|||Well, as Arnie said, in general it is no problem. I don't know if this is officially supported (the way running Outlook and Exchange together isn't), but I have also seen it working smoothly many times.

Jens K. Suessmeyer.

http://www.sqlserver2005.de

do process

How can I perform a stored procedure call with parameters from my client application so that the work is handed off to SQL Server? For example, I want to be able to close my application while the process continues running on SQL Server until it is finished, similar to a job but without creating a job.
Thank You
Alf

Some environments and providers (e.g. .NET) will allow you to make a stored procedure call asynchronous, meaning it does not have to wait for the result. So, it is hard to say if your client application can do that, because you forgot to tell us anything about your client application.
A
<ag> wrote in message news:%23iXkfN6QGHA.4740@TK2MSFTNGP14.phx.gbl...
> How can I perform a stored procedure call with parameters from my client
> application so that the work is handed off to SQL Server? For example, I
> want to be able to close my application while the process continues
> running on SQL Server until it is finished, similar to a job but without
> creating a job.
> Thank You
> Alf
>
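A note on the asynchronous suggestion: a client-side async call still dies with the connection, so if the work must survive the application closing, it has to be queued on the server. One hedged sketch of that, even though the original poster wanted to avoid jobs, is a pre-created SQL Agent job started on demand; sp_start_job returns as soon as the job is queued. The job, database, and procedure names below are invented for illustration:

-- one-time setup: wrap the long-running procedure in an Agent job
EXEC msdb.dbo.sp_add_job @job_name = N'RunLongProc';
EXEC msdb.dbo.sp_add_jobstep @job_name = N'RunLongProc',
    @step_name = N'run proc',
    @subsystem = N'TSQL',
    @command = N'EXEC MyDatabase.dbo.MyLongProcedure @param = 1;';
EXEC msdb.dbo.sp_add_jobserver @job_name = N'RunLongProc';

-- at run time: returns immediately; the job keeps running after the client disconnects
EXEC msdb.dbo.sp_start_job @job_name = N'RunLongProc';

Per-call parameters are awkward this way, since the job step command is fixed; on SQL Server 2005, Service Broker activation is the usual answer when the parameters vary per call.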

Tuesday, March 27, 2012

Do I really need a cursor?

I've built an application to import transactions into the database. Bad transactions go in a separate table, and dupe transactions get updated. Currently it takes about 2 hours to import ~40K records using the code below. Obviously I'd like this to run as fast as possible, and since cursors are a real drag, I was wondering if there was a more efficient way to accomplish this.

DECLARE
@contact_id int,
@product_code char(9),
@status_date datetime,
@business_code char(4),
@expire_date datetime,
@prod_status char(4),
@transaction_id int,
@emailAddress varchar(50),
@journal_id int

BEGIN TRAN
DECLARE transaction_import_cursor CURSOR
FOR SELECT transaction_id, product_code, emailAddress, status_date, business_code, expire_date, prod_status from transactions_batch_tmp
OPEN transaction_import_cursor
FETCH NEXT FROM transaction_import_cursor INTO @transaction_id, @product_code, @emailAddress, @status_date, @business_code, @expire_date, @prod_status
WHILE (@@FETCH_STATUS = 0)
BEGIN
SELECT top 1 contacts.contact_id AS contact_id, transactions_batch_tmp.status_date AS status_date, transactions_batch_tmp.product_code AS product_code,
transactions_batch_tmp.business_code AS business_code, transactions_batch_tmp.expire_date AS expire_date,
transactions_batch_tmp.prod_status AS product_status
FROM transactions_batch_tmp INNER JOIN
journal INNER JOIN
contacts ON journal.contact_id = contacts.contact_id ON transactions_batch_tmp.emailAddress = contacts.emailAddress AND
transactions_batch_tmp.product_code = journal.product_code INNER JOIN
products ON transactions_batch_tmp.product_code = products.product_code
WHERE rtrim(ltrim(contacts.emailAddress)) = @emailAddress AND journal.product_code = @product_code
ORDER BY transactions_batch_tmp.status_date desc
IF @@ROWCOUNT = 0
BEGIN
print 'NEW transaction! ' + @product_code + @emailAddress
insert into journal (contact_id, product_code, status_date, business_code, expire_date, entryTypeID, product_status, date_entered)
SELECT distinct rtrim(ltrim(contacts.contact_id)) as cid, rtrim(ltrim(products.product_code)), transactions_batch_tmp.status_date,
rtrim(ltrim(transactions_batch_tmp.business_code)), transactions_batch_tmp.expire_date, 21, rtrim(ltrim(transactions_batch_tmp.prod_status)), getDate()
FROM contacts INNER JOIN (transactions_batch_tmp INNER JOIN products ON transactions_batch_tmp.product_code=products.product_code) ON contacts.emailAddress=transactions_batch_tmp.emailAddress
WHERE transactions_batch_tmp.transaction_id=@transaction_id
END
ELSE
BEGIN
--print 'UPDATE transaction! ' + @product_code + @emailAddress
UPDATE journal
SET status_date =
(SELECT max(tmp.status_date)
FROM transactions_batch_tmp tmp, contacts c, products p, journal j
WHERE tmp.emailaddress = @emailAddress
AND tmp.emailaddress = rtrim(c.emailaddress)
AND c.contact_id = j.contact_id
AND j.product_code = @product_code
AND j.product_code = tmp.product_code)
FROM transactions_batch_tmp tmp, contacts c, products p, journal j
WHERE tmp.emailaddress = @emailAddress
AND tmp.emailaddress = rtrim(c.emailaddress)
AND c.contact_id = j.contact_id
AND j.product_code = @product_code
AND j.product_code = tmp.product_code
END
FETCH NEXT FROM transaction_import_cursor INTO @transaction_id, @product_code, @emailAddress, @status_date, @business_code, @expire_date, @prod_status
END
CLOSE transaction_import_cursor
DEALLOCATE transaction_import_cursor
COMMIT TRAN

/** purge data from temp error table before writing bad records for this batch **/
truncate table tran_import_error;

/** write bad records (missing product code or email address) to temp_error table **/
insert into tran_import_error (transaction_id, product_code, emailAddress, date_entered)
SELECT DISTINCT transactions_batch_tmp.transaction_id, transactions_batch_tmp.product_code, transactions_batch_tmp.emailAddress, getDate()
FROM transactions_batch_tmp
where transactions_batch_tmp.emailaddress not in (select emailaddress from contacts)
OR
transactions_batch_tmp.product_code not in (select product_code from products)

TIA

I don't see anything in your code that requires a cursor. It would run much faster as set-based INSERT and UPDATE statements.|||Well, how would I handle the update part without a cursor? I need to make sure that *only* unique contact_id-product_code values exist in the journal table.

Thanks.|||Add some bit flag and notes columns to your import table. Then you can run data checks against the records prior to importing them. Flag any duplicate or bad records and add a note as to why they were flagged. Then import only the non-flagged records. Delete the non-flagged records when you are done, and you are left with a list of bad records that you can review or discard; see the sketch below.
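A minimal sketch of that staging approach, reusing the table names from this thread. The import_flag and import_note columns and the note texts are invented for illustration; the entryTypeID value 21 is copied from the original cursor code:

-- one-time: add the flag and note columns to the staging table
ALTER TABLE transactions_batch_tmp
    ADD import_flag bit NOT NULL DEFAULT 0,
        import_note varchar(100) NULL;

-- flag records with no matching contact or product
UPDATE transactions_batch_tmp
SET import_flag = 1, import_note = 'unknown email or product'
WHERE emailAddress NOT IN (SELECT emailAddress FROM contacts)
   OR product_code NOT IN (SELECT product_code FROM products);

-- flag all but the most recent row per emailAddress/product_code pair
UPDATE t
SET import_flag = 1, import_note = 'older duplicate'
FROM transactions_batch_tmp t
WHERE t.import_flag = 0
  AND EXISTS (SELECT 1 FROM transactions_batch_tmp t2
              WHERE t2.emailAddress = t.emailAddress
                AND t2.product_code = t.product_code
                AND t2.status_date > t.status_date);

-- one set-based INSERT of the surviving rows
INSERT INTO journal (contact_id, product_code, status_date, business_code, expire_date, entryTypeID, product_status, date_entered)
SELECT c.contact_id, t.product_code, t.status_date, t.business_code, t.expire_date, 21, t.prod_status, getDate()
FROM transactions_batch_tmp t
JOIN contacts c ON c.emailAddress = t.emailAddress
WHERE t.import_flag = 0;

-- keep only the flagged rows for review
DELETE FROM transactions_batch_tmp WHERE import_flag = 0;

The point is that every check and the final load are single set-based statements, so nothing touches the data row by row. Rows that tie exactly on status_date would still need a tie-breaker such as transaction_id, which is the same issue the two DELETE statements discussed below deal with.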

|||blindman - Thanks for your help.

I'm almost there (I hope), but I was wondering if there is a more efficient way to delete the dupe records than having to write two separate queries. I need to keep the most recent product_code-status_date transaction for *each* person. This runs after I insert ALL the records into the journal table.

--delete dupe trans with status_date as the flag
DELETE journal FROM journal, contacts
JOIN
(select product_code, contact_id, max(status_date) as max_status_date
from journal
group by product_code, contact_id) AS G
ON G.[contact_id] = contacts.[contact_id]
WHERE journal.[status_date] < G.[max_status_date]
AND G.[product_code] = journal.[product_code];

--delete dupe trans with journal_id as the flag
DELETE journal FROM journal, contacts
JOIN
(select product_code, contact_id, max(journal_id) as maxID
from journal
group by product_code, contact_id) AS G
ON G.[contact_id] = contacts.[contact_id]
WHERE journal.[journal_id] < G.[MaxID]
AND G.[product_code] = journal.[product_code];

Thanks again.|||The contact table has nothing to do with your delete, except to limit the deleted records to those that have a contact_id. I assume that contact_id is part of journal's natural key and that all records have a valid contact_id, so drop it from both your queries. (If you do need it for filtering, join it in the subquery.)

--delete dupe trans with status_date as the flag
DELETE journal
FROM journal
INNER JOIN
(select product_code, contact_id, max(status_date) as max_status_date
from journal
group by product_code, contact_id) AS G
ON journal.[product_code] = G.[product_code]
and journal.[contact_id] = G.[contact_id]
and journal.[status_date] < G.[max_status_date]

--delete dupe trans with journal_id as the flag
DELETE journal
FROM journal
INNER JOIN
(select product_code, contact_id, max(journal_id) as maxID
from journal
group by product_code, contact_id) AS G
ON journal.[product_code] = G.[product_code]
and journal.[contact_id] = G.[contact_id]
and journal.[journal_id] < G.[MaxID]

It also appears that the first query should handle all product_code/contact_id duplicates except those that share exactly the same status_date. If status_date stores only whole-date values, then I see the point of the second delete statement, but otherwise I wouldn't expect you to get a high rowcount from it.

Now to your question: can this be done as a single SQL statement? Yes, but it would essentially require two nested subqueries, so I don't think you would get a big performance boost from it, and you would certainly have to sacrifice code clarity. I recommend that you leave it as two separate deletes.

Do I need to start the service..

Hi,
I am providing a UI in my application (which uses MSDE) for the user to back up/restore the application database. Today I tried to test it and it didn't work, complaining that SQL Server does not exist. I checked and saw that I had not started the MSSQL$MyApp service; when I started it, the backup and restore started working fine. So does that mean that I have to programmatically start the service in my app before backup/restore? Generally my application does not need to start the service while talking to the database.
Thanks

If you set the service to automatic, it will start when the computer starts. There is always a possibility that it will get stopped manually. So you could check to see if it is started in your app and, if not, start it.
Rand
This posting is provided "as is" with no warranties and confers no rights.
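As a sketch of that check-and-start step, assuming the MSSQL$MyApp instance name from the post: a stopped instance service can be started from a command line (or through the equivalent Windows service control APIs) before the app opens its connection:

net start MSSQL$MyApp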

Sunday, March 25, 2012

Do I need ADO 2.8 to connect to SQL Server 2005 from Access?

We have just upgraded to SQL Server 2005 from 2000. We have an MS Access application that connects and links tables from the new SQL Server database. Users are getting errors when creating records for the first time on the subscriber databases. We have reseeded the primary keys on all the tables to no avail. The only thing we can think of that may be a problem is the version of ADO that we're using as a reference in the MS Access application. We have a reference to ADO 2.5 and don't know whether we should be using 2.8. Any suggestions? Thank you.

You should be able to use MDAC 2.5 for basic functionality (otherwise, and for the new features, you should use the SNAC driver). Is the SQL Server located on the same servers where the application is installed? SQL Server 2005 will install MDAC 2.8, so installing 2.5 over this version might cause an error on the server side, but not if it is only installed on the client side.

HTH, Jens K. Suessmeyer.

http://www.sqlserver2005.de
|||

Thanks for your response Jens. The Access application resides on the same servers as the databases. Both versions of MDAC exist on each server. It looks like the issues occur soon after the data is synch'd in from the Publisher database. The errors occur on forms that have subforms whose row source is a table, as opposed to a query:

Error Number: 30014

Error: The data was added to the database but the data won't be displayed in the form because it doesn't satisfy the criteria in the underlying record source.

After receiving the error, the users exit out of the application and log back in. They are then able to add more records without receiving any errors. But, like in the Groundhog Day movie, the exact same problem occurs the next morning after sync has run.

|||Hello,
I just got the same error message when trying to use my Access application connected to my newly installed SQL Server 2005.

Did you manage to solve the problem?

How?

Thanks

Thursday, March 22, 2012

Do I have the right to modify a third-party application's stored procedures?

Our current concern deals with stored procedures from a third-party application that were modified in order to correct future data inconsistency that was being generated. Since the stored procedures were not encrypted, I was able to modify them and correct the problem. At the same time, we developed a small in-house application to correct the current data inconsistency, and we created new stored procedures in the same database. Now I'm concerned about whether I had the right to modify those stored procedures and, additionally, to create new ones inside this database. Am I restricted somehow from using our full version of MS SQL Server with a scenario like this?

It'd really depend on what sort of deal you have with the vendor of the third-party app.

A lot of the vendors that I have dealt with in the past have allowed the changing of stored procedures and the addition of new ones on an "at your own risk" type of deal (e.g. future versions will not be guaranteed to work, the database is no longer covered by their support, etc.).

I'd suggest having a talk with the vendor; don't tell them you have changed anything, but ask them what their views on changing stored procedures are.

HTH|||If it clearly creates data inconsistency then I at least would have no second thoughts about changing the sp's. I would sort of stick to the "what they don't know can't hurt 'em" plan if this was me...|||I'll try to describe our situation better.

Before I did the changes, the provider notified us that they were closing their doors. Thus, we didn't have any support for some time. After a while, they came to us offering the source code, which we rejected considering the fact that it was a very poorly developed application. Afterwards, we decided to develop, with our own effort, an application to replace the existing one. In short, there is a legal process that has been running for the last 4 years in which they have alleged we committed several violations of their rights, which so far we have disproved. Now they are directing their efforts toward the fact that I modified 2 procs and also created new ones. I know I didn't do wrong. But how can I show the attorneys? Perhaps I need leads to articles, statements or disclaimers from, let's say, Microsoft that contain information about the pros and cons of leaving procs as plain text. How can a solution provider protect proc source code or, even better, their know-how?

Originally posted by rokslide
It'd really depend on what sort of deal you have with the vendor of the third party app.

A lot of the vendors that I have dealt with in the past have allowed the changing of stored procedures and the addition of new ones on an "at your own risk" type of deal (e.g. future versions will not be guaranteed to work, the database is no longer covered by their support, etc.).

I'd suggest having a talk with the vendor; don't tell them you have changed anything, but ask them what their views on changing stored procedures are.

HTH|||This has NOTHING to do with M$

Did you sign a contract with them?

Did you pay them anything?

Did they build this code for you exclusively?

If I contract out, any code I build belongs to the Client (usually)

It's all a matter of what's on paper...

Check this out:

The part about self-employment...

http://weblogs.sqlteam.com/markc/|||Wow, I didn't realize it was this serious. It is possible to encrypt sp's in SQL Server, and if this company has let you have direct access to the database then that would have been a good idea from their side. It is not common for software vendors to expose their source code, and this company you have been in a dispute with should know this.

An sp can be encrypted like this:

CREATE PROCEDURE myProcName WITH ENCRYPTION
AS
...

This way you or any others will not be able to access the source code of the procedure. You can try it yourself for verification:

EXEC sp_helptext myProcName
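(For what it's worth, run against an encrypted procedure, sp_helptext prints a message along the lines of "The text for object 'myProcName' is encrypted." instead of the definition.)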

Now when it comes to your legal rights to change their source code, my belief is that you unfortunately have a weak case. Brett Kaiser is partially right: I believe it comes down to what you have on paper, but since you have used their software you automatically agree that their software is usable, and if you have paid for it as well you acknowledge that they are the rightful owners. I would try to go down the lane of putting the blame on them for not fulfilling their duties as a software vendor: because they didn't perform their duties, you had to remedy the bugs yourself.

Good luck man, I'm sorry to say that I think you'll need it :(|||I apologize...I wasn't meaning to be offensive...

But I guess I was...

Sorry|||What are you suggesting? That they don't have laws in Venezuela? I really hope I misunderstood this message of yours because I found it to be quite rude, but I will give you the benefit of the doubt...|||Frettmaestro, I'm sure Brett didn't mean to be insulting. If you know the different parts of the US well, then you know how those guys from Jersey can be. (Right Brett? :D )

j_shaw, I am not a lawyer anywhere, much less knowledgeable about Venezuelan law, so everything I say here is totally my opinion and not to be taken as legal advice. But here goes: regardless of whether the procedures were encrypted or not, I think you overstepped the bounds by changing them without first establishing ownership or at least permission. If the work was done for you custom, then there's a good chance your company owns them; but if it was a commercial product, there's a good chance that you only had a license to use, not change. And as Brett pointed out, this all comes down to what the paperwork says.

Just because someone doesn't encrypt their procedures doesn't mean you have the right to change them. It may be a stupid move on their part to leave them hanging out there so easy to read and alter, but it's not an excuse to say, "if you didn't want me to do it, you should have locked it". That's like saying if somebody leaves the door to their house unlocked, or a window open, then it's okay to enter and take anything you want. Wrong!

And nobody should be under the misperception that using SQL Server's WITH ENCRYPTION is a guaranteed secure lock. I've heard that it has been cracked. It's still a good idea, but not foolproof.

(P.S. Brett, thanks for the referral!)|||The story is a little longer. My company hired these guys to develop a solution for us. But we are unable to prove it, because in one update they replaced our personalized version with one that they had previously registered.

Originally posted by Frettmaestro
Wow, I didn't realize it was this serious. It is possible to encrypt sp's in SQL Server, and if this company has let you have direct access to the database then that would have been a good idea from their side. It is not common for software vendors to expose their source code, and this company you have been in a dispute with should know this.

An sp can be encrypted like this:

CREATE PROCEDURE myProcName WITH ENCRYPTION
AS
...

This way you or any others will not be able to access the source code of the procedure. You can try it yourself for verification:

EXEC sp_helptext myProcName

Now when it comes to your legal rights to change their source code, my belief is that you unfortunately have a weak case. Brett Kaiser is partially right: I believe it comes down to what you have on paper, but since you have used their software you automatically agree that their software is usable, and if you have paid for it as well you acknowledge that they are the rightful owners. I would try to go down the lane of putting the blame on them for not fulfilling their duties as a software vendor: because they didn't perform their duties, you had to remedy the bugs yourself.

Good luck man, I'm sorry to say that I think you'll need it :(|||Originally posted by AjarnMark
Frettmaestro, I'm sure Brett didn't mean to be insulting. If you know the different parts of the US well, then you know how those guys from Jersey can be. (Right Brett? :D )

(P.S. Brett, thanks for the referral!)

Just being the ignorant American...

But to re-address in (hopefully) a different tone....

There were hardly ANY US laws for software and stuff till just recently...

How and what the laws are for Venezuela...I have no idea...

And wouldn't you want to counter-sue anyway for the time it took to do the data sanitization?

Don't you have the expectation/right to expect their product to work?|||My own cursory knowledge of law probably does not apply, but it does not sound like you made a profit by re-selling the application with your changes, so you should be safe from lawsuits by SCO. When you found that the application could invalidate your data, did you go to the software company and ask for a fix? If they refused, then you may be able to sue them (if you have in writing that the application promises to do so-and-such). Since they are bringing the suit, they have to prove that your changes have somehow damaged them.

In general, I do not allow any schema (or data) updates to a third-party application database, as this can invalidate the service contract. Extra lawsuits have never come into the picture (as far as I know).|||A company I have been working for is currently in a legal dispute with a hosting provider. The case is not at all similar, but this company did not have a signed agreement of any sort, and here in England that didn't matter, because we supposedly accepted their terms and conditions automatically when the first bill was paid and their services were used. The company I worked for will most likely get the case dismissed because the hosting company breached their own contract on several counts, but my point is simply that even though no written contract has been signed you can still be legally obliged. This can of course be different in Venezuela, but I don't know that...

When it comes to altering software, I do believe that you will have to make a case on the fact that their software was incapable of doing what it was supposed to, and because they offered no real help, you had to change it yourself to save your business. You can't go bankrupt because some moron developer doesn't know what he's doing...|||Ok. Let's change the subject.

We also had to create an application to fix the already corrupted data. We created new sprocs and placed them in the same database. Do I have the right to add new objects to the existing database?

Originally posted by Frettmaestro
A company I have been working for is currently in a legal dispute with a hosting provider. The case is not at all similar, but this company did not have a signed agreement of any sort, and here in England that didn't matter, because we supposedly accepted their terms and conditions automatically when the first bill was paid and their services were used. The company I worked for will most likely get the case dismissed because the hosting company breached their own contract on several counts, but my point is simply that even though no written contract has been signed you can still be legally obliged. This can of course be different in Venezuela, but I don't know that...

When it comes to altering software, I do believe that you will have to make a case on the fact that their software was incapable of doing what it was supposed to, and because they offered no real help, you had to change it yourself to save your business. You can't go bankrupt because some moron developer doesn't know what he's doing...|||All in favor say 'eye'! EYE!

I don't think this will get you anywhere; we can say this and that, but it all comes down to the laws of your country, and your best bet is some Venezuelan lawyers ;)|||A couple of things spring to light here...

The story is a little longer. My company hired these guys to develop a solution for us. But we are unable to prove it, because in one update they replaced our personalized version with one that they had previously registered.

Okie... well, if you hired them to develop a solution for you, there should be documents showing it, yes?

If they gave you a solution that was something they developed for others or that they in turn sold to others then they are in breach of contract (assuming standard contract laws apply).

If the application they gave you/developed for you is faulty as suggested...

We also had to create an application to fix the already corrupted data. We created new sprocs and placed them in the same database. Do I have the right to add new objects to the existing database?

...and you gave them the opportunity to fix it (which they didn't), then I don't see what the issue should be legally. Yes, you are on somewhat shaky ground for changing their code, but they are on equally unstable ground for their practices during the development and delivery of the application.

You could also use the facts that their application was faulty and that their code was unprotected to show that they had made false claims about their abilities...

Of course I'm not a lawyer and I haven't studied law (except for what I have come across in the industry) and I don't have any leads/examples to point to, but I'm sure if you raised these points with an attorney he would be able to help you more.|||It's like the line from Animal House...

My advice to you is to drink heavily...

I didn't know you were in pre-med?

Pre-med, SQL Server DBA, what's the difference?

Do I have a disk performance problem?

Hi All:

I would like help identifying if I have a disk performance issue or not. First the background: we have a j2ee application using the MS SQL 2005 JDBC driver and Hibernate on 4 application servers, and an active-passive SQL Server 2005 cluster. All of the servers reside in the same physical rack and switch.

Our application is typically bounded by CPU on the app server, or throughput from the database. Several months ago we were using SQL 2000 and would often max out the CPU on the database server before anything else, but often the database could keep up. Now we have 2005 on a much more powerful machine and more app servers, but we seem to be running up against a problem with throughput from the database.

The issue is not CPU. The total cpu average, as monitored in perfmon on 30-second intervals, stays consistently below 40%. But what concerns me is the Average Disk Queue Read Length, particularly for our E: drive. On this machine, the transaction log, the system and temp dbs, and our application's database are all on separate EMC partitions, connected via fibre channel. The E drive houses the app data and is a 15-way meta device (fifteen 10GB logical devices striped at 960k for a 150GB device) in a RAID-S configuration.

I have read various articles online describing how to interpret the Average Disk Queue Read Length performance counter with regard to SQL Server. Some have said this should not exceed the number of physical spindles * 2. We are seeing values of 32 consistently, averaging over 60 during peak processing hours, and spiking to well over 100 on a scale of 1.0 (3-second sample interval).

So our application servers seem to be waiting on their database calls (a lot of inserts with frequent, but small-resultset, selects) and do not show I/O issues with their local storage, memory, or network interface. The database server again has no CPU, network, or memory issues. I should add that the Average Disk Queue Write Length counter does not have any issues; it's always below 1 (on a 1.0 scale). The EMC Celerra array has both read and write caching. The indexes of the application database are rebuilt weekly and defragmented every day, with stats rebuilt after the defrag.

So how can I further determine where my performance problem lies? All thoughts appreciated! Thanks!

-tuj
First of all, you are always waiting on something. Either the CPU, memory, or hard drives. The question is, is the performance unacceptable and how much will it cost to fix?

Generally, faster processors will help the CPU. Memory will help both applications and the disk cache, eliminating some trips to the hard drives, which are generally the slow dog in the race.

From what you describe, your bottleneck must be the disk access.

There are many schools on how to configure RAID properly. I am concerned by your comments: The E drive houses the app data and is a 15-way meta device (fifteen 10GB logical devices striped at 960k for a 150GB device) in a RAID-S configuration.

Are you saying you have split physical drives into logical drives and then built RAID arrays out of those? You never, ever want to do that. Also, a stripe size of 960K is HUGE. That means every single read/write has to be done in 960K chunks.

Intel has some basic guidelines on configuring RAID here http://www.intel.com/support/motherboards/server/srcs16/sb/cs-020782.htm.

As a general rule: create a RAID-5 array using all the physical drives in a unit. If you have 15 drives in a unit, make a 15 drive RAID-5 array. THEN, you split that into logical drives using the OS. Otherwise, you are killing the performance of the RAID-5.

I am sure we will get conflicting information from other people.

|||Please understand, we are using EMC storage, which uses its own proprietary version of RAID-5 called RAID-S. This is an enterprise-class Symmetrix system. The storage array is connected to the SQL Server via fibre channel and lives in the same rack.

Back to my original question: how can I quantify that I have a disk performance issue? My storage folks tell me that the storage is performing fine, at least from their perspective. How can I tell if I am maxing out my I/O capabilities?


|||

Disk queue length is one indicator of a disk IO issue.

For SQL Server, you can use fn_virtualfilestats to sample the IO stall periodically (say every 3 mins) on the database or file you're concerned about. Check the difference of the IoStallMS between two samples and calculate the IO stalls per minute (ref. BOL for details).

If you have high IoStallMS/min, your DB is suffering an IO issue.

Once one of my clients had 50 sec/min of IO stalls during some period of time. That convinced the SAN team to reconfigure the storage.

|||Ok, so I set up a process to use the fn_virtualfilestats function to sample the IoStallMS value every 1 second.

DECLARE @myDBID int
SET @myDBID = db_id()
SELECT * FROM ::fn_virtualfilestats (@myDBID, -1)
go

On my data file, I see IoStallMS values like '1754339800'. I stored the values in a table, then I found the max IoStallMS value by minute.

select datepart(hh, ts) as hr, datepart(n, ts) as min,
min(ts) as ts, max(iostallms) as stalls, file_id
from io_stall i1
group by datepart(hh, ts), datepart(n, ts), file_id

I then joined the grouping to itself, shifting the join minute by 1. This allowed me to calculate the IoStallMS per minute.

select a.hr, a.min, b.min, a.stalls, b.stalls, b.stalls - a.stalls as diff
from
(select datepart(hh, ts) as hr, datepart(n, ts) as min,
min(ts) as ts, max(iostallms) as stalls, file_id
from io_stall i1
group by datepart(hh, ts), datepart(n, ts), file_id) a
inner join
(select datepart(hh, ts) as hr, datepart(n, ts) as min,
min(ts) as ts, max(iostallms) as stalls, file_id
from io_stall i1
group by datepart(hh, ts), datepart(n, ts), file_id) b
on a.hr = b.hr and a.min + 1 = b.min
and a.file_id = b.file_id
where a.file_id = 1
order by a.hr, a.min

So... my results look like this for my data file:
hr min min2 iostall_a iostall_b b - a
14 20 21 1677684043 1679200642 1516599
14 21 22 1679200642 1679671839 471197
14 22 23 1679671839 1680146618 474779
14 23 24 1680146618 1681345822 1199204
14 24 25 1681345822 1681816656 470834
14 25 26 1681816656 1682260233 443577
14 26 27 1682260233 1682689827 429594
14 27 28 1682689827 1684180645 1490818

These seem to be exceptionally high, no?

On my log file for this database, I get these results:
hr min min2 iostall_a iostall_b b - a
15 11 12 17382490 17473367 90877
15 12 13 17473367 17473370 3
15 13 14 17473370 17473372 2
15 14 15 17473372 17473374 2
15 15 16 17473374 17473375 1
15 16 17 17473375 17473377 2
15 17 18 17473377 17540468 67091
15 18 19 17540468 17543980 3512
15 19 20 17543980 17544013 33

Which in general seem much more reasonable, but it appears to still be spiking.

1. Have I calculated the IoStalls/minute correctly?
2. If so, it looks like I clearly have an I/O problem on my SAN, right?
|||Run this command:

DECLARE @myDBID int
SET @myDBID = db_id()
SELECT ReadBytesPerMS = (BytesRead/IoStallReadMS),
WriteBytesPerMS = (BytesWritten/IoStallWriteMS),
DBFileName = physical_name
FROM ::fn_virtualfilestats (@myDBID, null) vs
JOIN sys.database_files df ON vs.FileID = df.file_id

My results are all >9000, but my situation might be different than yours.
|||My results:

4169 2860 E:\MSSQL2K5\Data\TW.mdf
18802 5731 F:\MSSQL2K5\Data\TW_log.ldf
29696 7235 G:\MSSQL2K5\Data\TW_log2.ldf

So it looks like my reads and writes to E: are much slower?
|||Yes, it does look like your reads and writes are much slower on E. The writes in general are not real fast. Do you have write caching turned on? How much cache do you have on the array controller? Reads are generally twice as fast as writes, but in your case they are 3 times faster. That would indicate the controller is not performing well on writes.

Is the E drive a logical partition on the same physical array? Can you move the log file to a local drive, or another drive, and see if performance increases?

|||Yes, our EMC is configured to cache both reads and writes. Interestingly, the average disk queue WRITE length for the E drive is almost always < 1 while the average disk queue READ length is often over 100. That makes me think that the writes are being cached and written OK, although the numbers from SQL Server say otherwise.

The F and G partitions are separate physical SAN shares from the E drive SAN share. So they should all be physically independent, although they do all live in the same array and utilize the same FC connection to the server. We moved the tempdb and transaction log to separate SAN shares a while back to try to reduce I/O against E.

We see no average disk queue length problems for reads or writes on any drive other than E. We cannot put the data or transaction log on a local drive because the system is a cluster.
|||

The iostall value is pretty high. While running the job, you can also trace the following under physical disk in performance monitor:

Avg Disk sec/Read, Avg Disk sec/Write , Disk Writes/sec, Disk Reads/sec

Based on the RAID configuration (RAID5, RAID10, etc.) and hard drive (spindle) specs, you can estimate how many spindles you need to eliminate the bottleneck.
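As a rough worked example of that estimate (all numbers here are illustrative assumptions, not measurements from this thread): a 15K RPM drive handles on the order of 180 random IOPS, and RAID-5 turns each logical write into roughly 4 physical I/Os (read data, read parity, write data, write parity). So for a workload measured in perfmon at 1200 Disk Reads/sec and 300 Disk Writes/sec:

required spindles = (1200 + 4 * 300) / 180 = 2400 / 180, or about 14

If the volume has fewer spindles behind it than that, queuing like the E: drive is showing would be expected.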

Do getDate() inside SQL Server OR do System.DateTime.Now inside application ?

For inserting the current date and time into the database, is it more efficient and faster to call getDate() inside SQL Server and insert the value

OR

to do System.DateTime.Now in the application and then insert it in the table?

I figure even small differences would be magnified if there is moderate traffic, so every little bit helps.

Thanks.

If you generate the data in code and then pass it to SQL Server over the network via a query, then you are simply generating more traffic for your network (albeit just a few extra bytes). If you use the built-in GETDATE() function you are saving yourself from having to pass that data over the network. I usually use GETDATE() myself.

|||

This depends on what you're doing. If you need the current time, it's best to get it from wherever you actually need it. (I know a guy who only needed the current time in his app, but made a call to the database's GetDate function, which was on another machine. That's not very accurate for the current application time.)

This could be a big issue if your DB and app are on two or more machines and the server OS is older and can't sync the time. I would say that the performance of the two functions would be really, really close.

If you're worried about network bandwidth (if connecting with a sql server instance on another server), then having the database do the insert would be less of a load. If the column always displays the current time, then you could set the default for the column to GetDate(), eliminating the need to manually call the function or insert a value.

The biggest thing to think about is how important this is to you. There is the issue of what is deemed the "current time" -- is the current time when the data was entered by the user at the application level, or when the data is inserted into the table (use the app's Now for the former, SQL's GetDate for the latter)? For instance, there are cases where there may be a delay between a record being built and actually inserted into the DB. Every decision should be made based on frame of reference.

Again, my general suggestion is to use the function from the local source, unless you use the default value suggestion for your table's column.
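
As a minimal sketch of that last suggestion (table and column names below are made up for illustration):

-- SQL Server stamps the row itself; the application never has to
-- generate or transmit the timestamp.
CREATE TABLE Orders (
    OrderID int IDENTITY(1,1) PRIMARY KEY,
    CreatedAt datetime NOT NULL DEFAULT (GETDATE())
)
-- Any INSERT that omits CreatedAt picks up the server's current time:
INSERT INTO Orders DEFAULT VALUES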

|||

If the data is going to be stored in the database, I almost always use GetUTCDate() inside the database.

For web applications, this solves a number of problems. If you need to scale out to a web farm, you don't have to worry about keeping all the web farm machines (and possibly the SQL Server) clocks in sync with each other (Single time source). You don't have to worry about what timezone each of your webservers/users are in (Since the time is in UTC). That means I can take the easy way out and display the result in UTC or a fixed/hardcoded timezone to the user to start, and add user timezone support (user set, or autodetected) as the application matures, and I don't have to convert the data in the database when I do.

For desktop applications accessing the database, obviously keeping all the clocks synced is much more difficult, so not relying on the application layer to provide the current time greatly simplifies things.

In the past, quite often a project starts as a web application and then, as it matures, we build a webservice for it or have desktop applications that need to access/create/import/export the data as well. Or the reverse: it starts as a desktop application, and then a web application is used for other parts/a secondary interface into the data, or it migrates to a web application completely.

|||

Motley:

If the data is going to be stored in the database, I almost always use GetUTCDate() inside the database.

For web applications, this solves a number of problems. If you need to scale out to a web farm, you don't have to worry about keeping all the web farm machines (and possibly the SQL Server) clocks in sync with each other (Single time source). You don't have to worry about what timezone each of your webservers/users are in (Since the time is in UTC). That means I can take the easy way out and display the result in UTC or a fixed/hardcoded timezone to the user to start, and add user timezone support (user set, or autodetected) as the application matures, and I don't have to convert the data in the database when I do.

For desktop applications accessing the database, obviously keeping all the clocks synced is much more difficult, so not relying on the application layer to provide the current time greatly simplifies things.

In the past, quite often a project starts as a web application and then, as it matures, we build a webservice for it or have desktop applications that need to access/create/import/export the data as well. Or the reverse: it starts as a desktop application, and then a web application is used for other parts/a secondary interface into the data, or it migrates to a web application completely.

Thanks Motley. I've never had to figure out hardware allocations for a website before and I'm new to all of this. I understand where you may want 2 or 3 web servers simply due to heavy traffic and just keep 1 database server. But would there ever be a situation where 1 web server is enough and you may need more than 1 database server? Or maybe one might need more than 1 of each kind? Are there rules of thumb to use when deciding how to scale up?

|||

Yes, using multiple web servers that access a single database server is common.

Scaling out to multiple database servers is a LOT more complicated, so it's usually avoided if possible. That usually means building a bigger database server, and/or improving the web application code as much as possible first. This is a lot less common.

Scaling out a database server and only requiring a single web server is probably the rarest of them all. Now there may be reasons for a single web server and multiple database servers, like instancing, or web front-ending multiple database driven applications, but scaling out isn't usually one of them.

|||

Motley:

Yes, using multiple web servers that access a single database server is common.

Scaling out to multiple database servers is a LOT more complicated, so it's usually avoided if possible. That usually means building a bigger database server, and/or improving the web application code as much as possible first. This is a lot less common.

Scaling out a database server and only requiring a single web server is probably the rarest of them all. Now there may be reasons for a single web server and multiple database servers, like instancing, or web front-ending multiple database driven applications, but scaling out isn't usually one of them.


Thanks for the clarification, Motley. I have a question regarding backing up of data. Arguably, the contents of the database are the meat and potatoes of the website. Lose the database information and you lose everything, correct? So what do most people running one database server do to protect their lifeblood? Is a RAID 1 setup sufficient? If one of the drives goes down, I'm ok, right? I've heard of databases having their own protection like failover (not even sure what this is), etc., but I'd imagine a RAID setup would make those extra features unnecessary.
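
Worth noting: RAID 1 only protects against a failed drive; it faithfully mirrors accidental deletes and corruption too, so regular backups are still the baseline protection. A minimal sketch (database name and paths are placeholders):

-- Full backup plus periodic log backups; restore the chain onto new
-- hardware if the whole server is lost.
BACKUP DATABASE MySiteDB TO DISK = 'E:\Backups\MySiteDB_full.bak'
BACKUP LOG MySiteDB TO DISK = 'E:\Backups\MySiteDB_log.trn'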

|||

Motley:

If the data is going to be stored in the database, I almost always use GetUTCDate() inside the database.

For web applications, this solves a number of problems. If you need to scale out to a web farm, you don't have to worry about keeping all the web farm machines (and possibly the SQL Server) clocks in sync with each other (Single time source). You don't have to worry about what timezone each of your webservers/users are in (Since the time is in UTC). That means I can take the easy way out and display the result in UTC or a fixed/hardcoded timezone to the user to start, and add user timezone support (user set, or autodetected) as the application matures, and I don't have to convert the data in the database when I do.

For desktop applications accessing the database, obviously keeping all the clocks synced is much more difficult, so not relying on the application layer to provide the current time greatly simplifies things.

In the past, quite often a project starts as a web application and then, as it matures, we build a webservice for it or have desktop applications that need to access/create/import/export the data as well. Or the reverse: it starts as a desktop application, and then a web application is used for other parts/a secondary interface into the data, or it migrates to a web application completely.

I think I'm going to go with this solution. I can add another column to my user table called something like "timezone" and have them input their GMT offsets when they register their account. Then, on authenticated pages, I would add/subtract the offsets every time I obtain datetime values. Does this sound like an ok route?

I just tested this on my local machine. I'm in NY, so my offset is GMT -5; however, when I looked in the database, the time stored was 4 hours ahead, not 5. What did I do wrong here?

Thanks.

|||

S2kDriver:

I think I'm going to go with this solution. I can add another column to my user table called something like "timezone" and have them input their GMT offsets when they register their account. Then, on authenticated pages, I would add/subtract the offsets every time I obtain datetime values. Does this sound like an ok route?

I just tested this on my local machine. I'm in NY, so my offset is GMT -5; however, when I looked in the database, the time stored was 4 hours ahead, not 5. What did I do wrong here?

Thanks.

Don't let your app decide whether to add or subtract. Simply add the offset to the GMT. You may be having a mathematical error, such as GMT - -5, which would yield GMT + 5. If you simply add the offset, you should be fine, i.e.: GMT + -5 is the same as GMT - 5.

You may also be using GetDate() instead of GetUtcDate() in SQL Server; the former would return your local time instead of GMT/UTC.
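
A sketch of the add-only arithmetic with a stored, signed offset (names are hypothetical):

-- Always ADD the signed offset; -5 yields New York standard time.
DECLARE @OffsetHours int
SET @OffsetHours = -5
SELECT DATEADD(hour, @OffsetHours, GETUTCDATE()) AS UserLocalTime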

|||

I did not do the math yet. All I did was run getUTCDate() on the local machine I'm using to test. I'm in NY (GMT -5), so the value stored in the database should have been 5 hours ahead, but it was only 4. Sorry for the add/subtract confusion, yeah, I was just speaking affirmatively.

I'm sure I'm not using getDate(); if I were, the value stored in the database would have been identical to the time I was seeing on the computer.

I've thought about the DST (daylight saving) factor, but I don't see why that would have any effect, since the database just looks at the computer's time zone settings and compensates to arrive at GMT.

Did anyone else have this similar problem?

Thanks.
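
For what it's worth, daylight saving time would explain exactly this: during DST, New York is UTC-4 rather than UTC-5, and getUTCDate() derives UTC from the machine's local clock and time zone settings, DST included. A quick check of the effective offset:

-- Returns -4 while Eastern Daylight Time is in effect, -5 otherwise.
SELECT DATEDIFF(hour, GETUTCDATE(), GETDATE()) AS server_utc_offset_hours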

Wednesday, March 21, 2012

DMZ Web Server and Internal SQL Server

I have a .net web application that is running on a w2k server located in our
DMZ it has a private address. The SQL server is also running on a w2k server
but it is located in our private network. I have opened port 1433 on the
firewall from the DMZ to Lan and from Lan to DMZ. Since neither server knows
the other exists, I have added an entry in the local hosts file of the
webserver in the DMZ that points to the internal SQL server. Does an entry
also need to be placed in the host file of the SQL server? Also what would
be the proper connection string in the web.config to link these two servers
together? And would anything need to be done on the SQL server?
Thanks
You may need to specify the IP address of the firewall and 1433 to allow
the connection to succeed, since the client won't know how to resolve the
netbios or host name of the server.
Thanks,
Kevin McDonnell
Microsoft Corporation
This posting is provided AS IS with no warranties, and confers no rights.
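
As a sketch of the connection-string piece the question asks about (IP, database name, and credentials below are placeholders; use an address the DMZ host can actually reach, per the reply above):

Data Source=192.0.2.10,1433;Initial Catalog=MyAppDb;User ID=webAppUser;Password=...

The "ip,port" form skips netbios/host-name resolution entirely, which is usually the sticking point across a DMZ.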

DMZ to SQL Server 2nd instance connection issue

Hi,
We are implementing a SharePoint application.
SharePoint installation is completed and the server is in the DMZ (behind the firewall). Name it as SERVER1
Also, we installed SQL Server database (SHAREPOINTSQL) on a server as second instance (we already had one database). Name it as SERVER2.
I could connect SHAREPOINTSQL from any client within the domain (not from DMZ).
When I try to connect to the same database from SERVER1 (in DMZ), I don't see SHAREPOINTSQL listed when I am in SQL Enterprise Manager. I was hoping to see two entries (one for the default database as SERVER2 and the second one as SERVER2\SHAREPOINTSQL). I could see the default database and connection fine.
Do I need to do any special setup to connect from SERVER1 to SERVER2\SHAREPOINTSQL ?
Hope I explained clearly
Thanks
Port 1433 was opened on the firewall. As I said, I could connect from SERVER1 to SERVER2 (default database) without any problem.
Thanks,
Bob
"Bob Robert" wrote:

> Hi,
> We are implementing a SharePoint application.
> SharePoint installation is completed and the server is in the DMZ (behind the firewall). Name it as SERVER1
> Also, we installed SQL Server database (SHAREPOINTSQL) on a server as second instance (we already had one database). Name it as SERVER2.
> I could connect SHAREPOINTSQL from any client within the domain (not from DMZ).
> When I try to connect to the same database from SERVER1 (in DMZ), I don't see SHAREPOINTSQL listed when I am in SQL Enterprise Manager. I was hoping to see two entries (one for the default database as SERVER2 and the second one as SERVER2\SHAREPOINTSQL). I could see the default database and connection fine.
> Do I need to do any special setup to connect from SERVER1 to SERVER2\SHAREPOINTSQL ?
> Hope I explained clearly
> Thanks
|||Any update from the gurus? I am still having this problem.
"Bob Robert" wrote:

> Hi,
> We are implementing a SharePoint application.
> SharePoint installation is completed and the server is in the DMZ (behind the firewall). Name it as SERVER1
> Also, we installed SQL Server database (SHAREPOINTSQL) on a server as second instance (we already had one database). Name it as SERVER2.
> I could connect SHAREPOINTSQL from any client within the domain (not from DMZ).
> When I try to connect to the same database from SERVER1 (in DMZ), I don't see SHAREPOINTSQL listed when I am in SQL Enterprise Manager. I was hoping to see two entries (one for the default database as SERVER2 and the second one as SERVER2\SHAREPOINTSQL). I could see the default database and connection fine.
> Do I need to do any special setup to connect from SERVER1 to SERVER2\SHAREPOINTSQL ?
> Hope I explained clearly
> Thanks
|||I found something interesting. After opening all the ports, I was able to connect to the named instance even though the named instance was using port 1434.
"Bob Robert" wrote:
> Any update from the gurus? I am still having this problem.
|||Bob, to connect to named instances of SQL there are more ports required - 1434 from memory. You'll find info on this on TechNet if you search under SQL.
Steven Collier
SharePoint Portal Server MVP
http://mvp.support.microsoft.com
"Bob Robert" wrote:
> I found something interesting. After opening all the ports, I was able to connect to the named instance even though the named instance was using port 1434.
|||Run "Server Network Utility" on the server machine (the one you cannot connect to, probably the
named instance), and check what port number it is using, Open for that port in the firewalls. Also,
You might want to fix the port number by typing ion the port number...
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
"Bob Robert" <BobRobert@.discussions.microsoft.com> wrote in message
news:955A498C-6CFA-46F1-B9A1-53075DA9AD42@.microsoft.com...
> I found something interesting. After opening all the ports, I was able to connect to the named instance even though the named instance was using port 1434.
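
If you can reach the named instance some other way (for example locally), its error log also records the port it bound at startup; a quick sketch using the undocumented but long-standing xp_readerrorlog:

-- Look for the 'SQL server listening on ...' lines near the top of the log.
EXEC master..xp_readerrorlog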


Monday, March 19, 2012

DMO, VFP and SQL 2005

I have an application written in VFP that is used to manage some database
processes; some of these processes are currently written in DMO. We are in
the process of upgrading to SQL 2005 and I’m running into some trouble. I’m
currently changing some of the processes to T-SQL statements, but I have an
issue where I think I need to use DMO. I’ve installed the backwards
compatible objects but I cannot seem to connect via DMO. I have a default
instance of SQL 2000, a named instance of 2000, and a named instance of 2005
running on a laptop running XP Tablet edition.
Code
ox= CREATEOBJECT("SQLDMO.sqlServer")
?ox.Connect("MA-ENG-PCANINO2\sql2005","sa", "SASA")
Executing that returns this error.
OLE IDispatch exception code 55555 from Microsoft SQL-DMO (ODBC SQLState:
42000): [Microsoft][ODBC SQL Server Driver][SQL Server]To connect to this
server you must use SQL Server Management Studio or SQL Server Management
Objects (SMO)...
Thank you in advance.
Pauly CIt’s working now, I read in another post to unregistered and reregister th
e
SQLDMO.dll - \\Program Files\Microsoft SQL Server\80\Tools\Binn\SQLDMO.dll.
I did just that and now I’m able to successfully connect to SQL 2005 via D
MO.
YIPPEE
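
For anyone hitting the same thing, the unregister/re-register step can be done from a command prompt (path taken from the post above; adjust for your install):

regsvr32 /u "C:\Program Files\Microsoft SQL Server\80\Tools\Binn\sqldmo.dll"
regsvr32 "C:\Program Files\Microsoft SQL Server\80\Tools\Binn\sqldmo.dll"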
"Pauly C" wrote:

> I have an application written in VFP that is used to manage some database
> processes; some of these processes are currently written in DMO. We are in
> the process of upgrading to SQL 2005 and I’m running into some trouble. I’m
> currently changing some of the processes to T-SQL statements, but I have an
> issue where I think I need to use DMO. I’ve installed the backwards
> compatible objects but I cannot seem to connect via DMO. I have a default
> instance of SQL 2000, a named instance of 2000, and a named instance of 2005
> running on a laptop running XP Tablet edition.
> Code
> ox= CREATEOBJECT("SQLDMO.sqlServer")
> ?ox.Connect("MA-ENG-PCANINO2\sql2005","sa", "SASA")
> Executing that returns this error.
> OLE IDispatch exception code 55555 from Microsoft SQL-DMO (ODBC SQLState:
> 42000): [Microsoft][ODBC SQL Server Driver][SQL Server]To connect to this
> server you must use SQL Server Management Studio or SQL Server Management
> Objects (SMO)...
> Thank you in advance.
> Pauly C
>

Sunday, March 11, 2012

Dll error

Hello,
I have developed a vb.net windows application which uses the SQLXMLBulkLoad3 class. For this, I added references to the Microsoft SQLXMLBulkLoad 3.0 Type Library. This works fine through source code and exe on my development pc. While building the windows installer (package and deployment) through the setup wizard, it says "cannot automatically detect dependencies for c:\commonfiles\system\oledb\xblkld3.dll". I ignored that, went ahead, and built the Windows Installer i.e. msi file. On the target pc, I installed the .net framework first, and while installing the application package I got the error "xblkld3.dll failed to register. HRESULT-2147023782." I still went ahead and installed the application. It says "COM object with CLSID{.....} is either not valid or not registered" when it comes to the SQLXMLBulkload module. Should I add anything to the references? Or install any components, or am I missing any dlls? Pls. help me.
Thanks for all your help.
Rgds,
Padmaja.
The suggested way to redistribute SQLXML is to ask the user to install SQLXML
themselves. The installer is available from MSDN.
FYI, these are the dependencies for SQLXML:
- This release is installed using the Microsoft Windows Installer 2.0. You might need to upgrade your installer to Windows Installer 2.0 prior to installing SQLXML 3.0.
- SQLXML requires Microsoft SOAP Toolkit 2.0 to be installed to run Web Services. Users not using Microsoft Windows XP must download the toolkit to use Web Services.
- MDAC 2.6 or later must be installed before the installation can be run.
- This release also has a dependency on MSXML version 4.0 SP2. This version of MSXML is included with the download.
Soap toolkit is not needed for SQLXML Bulkload (xblkld3.dll)
Chandra
This posting is provided "AS IS" with no warranties, and confers no rights.
Use of included script samples are subject to the terms specified at
http://www.microsoft.com/info/cpyright.htm
"Padmaja" <anonymous@.discussions.microsoft.com> wrote in message
news:796EAED4-E157-44B7-BE52-8B0F6A73F2EC@.microsoft.com...
> Hello,
> I have developed an vb.net windows application, which uses
SQLXMLBulkload3class. For this, I added references to Microsoft
SQLXMLBulkload 3.0 Type library. This works fine through sourcecode and exe
in my development pc. While building windows installer (package and
deployment) through setupwizard, says "cannot automatically detect
dependencies for "c:\commonfiles\system\oledb\xblkld3.dll". Ignored that,
went ahead and built the WindowsInstaller i.e. msi file. In the target pc,
I installed .net framework first and while installing the application
package got an error "xblkld3.dll failed to register. HRESULT-2147023782."
still went ahead and installed the application. Says "COM object with
CLSID{.....} is either not valid or not registered" when it comes to
SQLXMLBulkload module. Anything should I add in the references? Or Install
any components or missing any dlls? Pls. Help me.
> Thanks for all your help.
> Rgds,
> Padmaja.
>
|||I checked for SQLXML SP3 & MDAC 2.7. They are already installed. I am getting the error of 'not detecting the dependencies' while building the windows installer. Any help is appreciated.
|||I checked for SQLXML and MDAC2.7. They are already installed. I am getting the error of 'cannot automatically detect dependencies xblkld3.dll' while creating windows installer. Any help is appreciated.
Thanks,
Padmaja.

Wednesday, March 7, 2012

Distribution Subsystem: agent (null) failed.

Win 2k & SQL 2k SP3a
This error appears in my Application Log. How can I fix/remove/delete
this? How can I determine which DB this is from?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Event Type:Error
Event Source:MSSQLSERVER
Event Category:(2)
Event ID:17052
Date:12/23/2005
Time:8:00:11 AM
User:ROUSES.COM\Administrator
Computer:ROUSEONE
Description:
Error: 14151, Severity: 18, State: 1
Replication-Replication Distribution Subsystem: agent (null) failed.
The subscription to publication '(null)' has expired and does not
exist.
Data:
0000: 47 37 00 00 12 00 00 00 G7.....
0008: 09 00 00 00 52 00 4f 00 ...R.O.
0010: 55 00 53 00 45 00 4f 00 U.S.E.O.
0018: 4e 00 45 00 00 00 0d 00 N.E....
0020: 00 00 64 00 69 00 73 00 ..d.i.s.
0028: 74 00 72 00 69 00 62 00 t.r.i.b.
0030: 75 00 74 00 69 00 6f 00 u.t.i.o.
0038: 6e 00 00 00 n...
Run your agents one by one to see if you can manually force this error in
your event log.
Hilary Cotter
Looking for a SQL Server replication book?
http://www.nwsu.com/0974973602.html
Looking for a FAQ on Indexing Services/SQL FTS
http://www.indexserverfaq.com
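
To answer the "which DB is this from" part: the distribution database keeps a row per distribution agent, which ties even a (null)-named agent back to a publisher database and publication. A sketch against SQL 2000's distribution database (assuming the default name 'distribution'):

USE distribution
-- One row per distribution agent; expired or orphaned subscriptions
-- still show up here with their publisher_db and publication.
SELECT name, publisher_db, publication, subscriber_db
FROM dbo.MSdistribution_agents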
"LPR-3rd" <lreames@.gmail.com> wrote in message
news:1135354198.113526.29560@.z14g2000cwz.googlegro ups.com...
> Win 2k & SQL 2k SP3a
> This error appears in my Application Log. How can I fix/remove/delete
> this? How can I determine what dB this is from?
> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>
> Event Type: Error
> Event Source: MSSQLSERVER
> Event Category: (2)
> Event ID: 17052
> Date: 12/23/2005
> Time: 8:00:11 AM
> User: ROUSES.COM\Administrator
> Computer: ROUSEONE
> Description:
> Error: 14151, Severity: 18, State: 1
> Replication-Replication Distribution Subsystem: agent (null) failed.
> The subscription to publication '(null)' has expired and does not
> exist.
> Data:
> 0000: 47 37 00 00 12 00 00 00 G7.....
> 0008: 09 00 00 00 52 00 4f 00 ...R.O.
> 0010: 55 00 53 00 45 00 4f 00 U.S.E.O.
> 0018: 4e 00 45 00 00 00 0d 00 N.E....
> 0020: 00 00 64 00 69 00 73 00 ..d.i.s.
> 0028: 74 00 72 00 69 00 62 00 t.r.i.b.
> 0030: 75 00 74 00 69 00 6f 00 u.t.i.o.
> 0038: 6e 00 00 00 n...
>

Distribution server restore on standby server

Hi,
we have developed an application using transactional replication, with
several publication servers (push) and a distribution server (which is
the only subscriber too). It seems to work fine.
Now we would "protect" the distribution/subscriber server by using a
standby server (note: only the distribution/subscriver server must be
protected, not publication servers) which should replace the working
server in the case it crashes.
Could someone suggest us the best strategy to do this? Thanks in
advance...
Marco
You can manually set this up. Have a look at Strategies for Backing Up and
Restoring Transactional Replication in BOL.
The problem is that this adds to the latency. Transactions remain in your
tlog until the log is dumped. Then they are read from the tlog and written
to the distribution database.
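
The BOL topic mentioned above hinges on the 'sync with backup' option, which is also what causes the latency described: the log reader will not move transactions until they have been backed up. A sketch (the publication database name is hypothetical):

-- Set on the publication database: transactions stay in its log until
-- a log backup captures them, so a restored backup never misses
-- anything that was already sent downstream.
EXEC sp_replicationdboption
    @dbname = 'MyPubDB',
    @optname = 'sync with backup',
    @value = 'true'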
"Marco69" <marcosindona@.virgilio.it> wrote in message
news:ce7beb14.0403250541.5f374a8f@.posting.google.c om...
> Hi,
> we have developed an application using transactional replication, with
> several publication servers (push) and a distribution server (which is
> the only subscriber too). It seems to work fine.
> Now we would "protect" the distribution/subscriber server by using a
> standby server (note: only the distribution/subscriver server must be
> protected, not publication servers) which should replace the working
> server in the case it crashes.
> Could someone suggest us the best strategy to do this? Thanks in
> advance...
> Marco