Thursday, March 22, 2012

Do getDate() inside SQL Server OR System.DateTime.Now in the application?

For inserting the current date and time into the database, is it more efficient and faster to call getDate() inside SQL Server and insert the value

OR

to call System.DateTime.Now in the application and then insert that value into the table?

I figure even small differences would be magnified if there is moderate traffic, so every little bit helps.

Thanks.

If you generate the value in code and then pass it to SQL Server over the network as part of a query, you are simply generating more traffic for your network (albeit just a few extra bytes). If you use the built-in GETDATE() function, you save yourself from having to pass that data over the network. I usually use GETDATE() myself.
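To make the two options concrete, here is a minimal T-SQL sketch; the Orders table, parameter names, and values are made up for illustration and are not from the original posts:

    -- Hypothetical table, just for illustration.
    CREATE TABLE Orders (
        CustomerId INT NOT NULL,
        CreatedAt  DATETIME NOT NULL
    );

    DECLARE @CustomerId INT = 42;
    DECLARE @CreatedAt  DATETIME = '2012-03-22T10:15:00';  -- stand-in for a value the app computed with System.DateTime.Now

    -- Option A: the application generates the timestamp and passes it along with the query.
    INSERT INTO Orders (CustomerId, CreatedAt) VALUES (@CustomerId, @CreatedAt);

    -- Option B: SQL Server generates the timestamp itself, so nothing extra crosses the network.
    INSERT INTO Orders (CustomerId, CreatedAt) VALUES (@CustomerId, GETDATE());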

|||

This depends on what you're doing. If you need the current time, it's best to get it from wherever you actually need it. (I know a guy who only needed the current time in his app, yet made a call to the database's GetDate() function on another machine. That's not a very accurate source for the application's current time.)

This would have been a bigger issue when the DB and app were on two or more machines and the server OS was older and couldn't keep its clock in sync. Performance-wise, I would say the two functions are really, really close.

If you're worried about network bandwidth (say, when connecting to a SQL Server instance on another server), then having the database generate the value is less of a load. If the column should always record the current time, you can set the column's default to GetDate(), eliminating the need to call the function or pass a value at all.
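As a rough sketch of that default-value approach (the UserComments table and column names are hypothetical, not from the original posts):

    -- Hypothetical table; the DEFAULT constraint fills in CreatedAt automatically
    -- whenever an INSERT doesn't supply a value for that column.
    CREATE TABLE UserComments (
        CommentId INT IDENTITY(1,1) PRIMARY KEY,
        Body      NVARCHAR(500) NOT NULL,
        CreatedAt DATETIME NOT NULL CONSTRAINT DF_UserComments_CreatedAt DEFAULT (GETDATE())
    );

    -- The application no longer passes a timestamp at all:
    INSERT INTO UserComments (Body) VALUES (N'hello');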

The biggest thing to think about is how important this is to you. There is also the question of what counts as the "current time": is it when the data was entered by the user at the application level, or when the row is actually inserted into the table? (Use the app's Now for the former, SQL's GetDate() for the latter.) For instance, there are cases where there is a delay between a record being built and actually being inserted into the DB. Every decision should be made based on that frame of reference.

Again, my general suggestion is to use the function from the local source, unless you use the default value suggestion for your table's column.

|||

If the data is going to be stored in the database, I almost always use GetUTCDate() inside the database.

For web applications, this solves a number of problems. If you need to scale out to a web farm, you don't have to worry about keeping the clocks on all the web farm machines (and possibly the SQL Server) in sync with each other, since there is a single time source. You don't have to worry about what timezone each of your web servers/users is in, since the time is in UTC. That means I can take the easy way out and display the result to the user in UTC or a fixed/hardcoded timezone to start, then add user timezone support (user-set or autodetected) as the application matures, without having to convert the data in the database when I do.

For desktop applications accessing the database, obviously keeping all the clocks synced is much more difficult, so not relying on the application layer to provide the current time greatly simplifies things.

Quite often a project starts as a web application and then, as it matures, we build a web service for it or desktop applications that need to access/create/import/export the data as well. Or the reverse: it starts as a desktop application, and then a web application is added as a secondary interface into parts of the data, or it migrates to a web application completely.
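A minimal sketch of that idea, using a made-up AuditLog table and an example -5 hour display offset that is not from the original post:

    -- Hypothetical audit table storing timestamps in UTC via a default constraint.
    CREATE TABLE AuditLog (
        LogId        INT IDENTITY(1,1) PRIMARY KEY,
        Message      NVARCHAR(400) NOT NULL,
        CreatedAtUtc DATETIME NOT NULL CONSTRAINT DF_AuditLog_CreatedAtUtc DEFAULT (GETUTCDATE())
    );

    INSERT INTO AuditLog (Message) VALUES (N'user logged in');

    -- To start, display the UTC value as-is or shift it by one fixed/hardcoded offset
    -- (the -5 here is just an example); per-user timezone support can be layered on
    -- later without touching the stored data.
    SELECT LogId,
           Message,
           CreatedAtUtc,
           DATEADD(HOUR, -5, CreatedAtUtc) AS CreatedAtDisplay
    FROM   AuditLog;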

|||

Motley:

If the data is going to be stored in the database, I almost always use GetUTCDate() inside the database.

For web applications, this solves a number of problems. If you need to scale out to a web farm, you don't have to worry about keeping the clocks on all the web farm machines (and possibly the SQL Server) in sync with each other, since there is a single time source. [...]

Thanks Motley. I've never had to figure out hardware allocations for a website before, and I'm new to all of this. I understand why you might want 2 or 3 web servers simply due to heavy traffic while keeping 1 database server. But would there ever be a situation where 1 web server is enough but you need more than 1 database server? Or might you need more than 1 of each kind? Are there rules of thumb for deciding how to scale up?

|||

Yes, using multiple web servers that access a single database server is common.

Scaling out to multiple database servers is a LOT more complicated, so it's usually avoided if possible. That usually means building a bigger database server and/or improving the web application code as much as possible first. This is a lot less common.

Scaling out the database server while only requiring a single web server is probably the rarest of them all. There may be reasons for a single web server and multiple database servers, like instancing, or web front-ending multiple database-driven applications, but scaling out isn't usually one of them.

|||

Motley:

Yes, using multiple web servers that access a single database server is common.

Scaling out to multiple database servers is a LOT more complicated, so it's usually avoided if possible. [...]


Thanks for the clarification, Motley. I have a question regarding backing up data. Arguably, the contents of the database are the meat and potatoes of the website: lose the database and you lose everything, correct? So what do most people running 1 database server do to protect their lifeblood? Is a RAID 1 setup sufficient? If one of the drives goes down, I'm OK, right? I've heard of databases having their own protection like failover (not even sure what that is), etc., but I'd imagine a RAID setup would make those features unnecessary.

|||

Motley:

If the data is going to be stored in the database, I almost always use GetUTCDate() inside the database. [...] You don't have to worry about what timezone each of your web servers/users is in, since the time is in UTC. That means I can take the easy way out and display the result to the user in UTC or a fixed/hardcoded timezone to start, then add user timezone support (user-set or autodetected) as the application matures, without having to convert the data in the database when I do.

I think I'm going to go with this solution. I can add another column to my user table called something like "timezone" and have them input their GMT offsets when they register their account. Then, on authenticated pages, I would add/subtract the offsets every time I obtain datetime values. Does this sound like an OK route?

I just tested this on my local machine. I'm in NY, so my offset is GMT -5; however, when I looked in the database, the time stored was 4 hours ahead, not 5. What did I do wrong here?

Thanks.

|||

S2kDriver:

I think I'm going to go with this solution. I can add another column to my user table called something like "timezone" and have them input their GMT offsets when they register their account. Then, on authenticated pages, I would add/subtract the offsets every time I obtain datetime values. Does this sound like an OK route?

I just tested this on my local machine. I'm in NY, so my offset is GMT -5; however, when I looked in the database, the time stored was 4 hours ahead, not 5. What did I do wrong here?

Don't let your app decide whether to add or subtract. Simply add the signed offset to the GMT value. You may be making a mathematical error, such as GMT - (-5), which would yield GMT + 5. If you simply add the offset you should be fine, i.e. GMT + (-5) is the same as GMT - 5.
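In T-SQL terms, that "just add the signed offset" rule might look like this sketch (the @OffsetHours variable is a stand-in for the hypothetical per-user timezone column discussed above):

    DECLARE @OffsetHours INT = -5;                 -- the user's stored GMT offset (e.g. New York standard time)
    DECLARE @StoredUtc   DATETIME = GETUTCDATE();  -- the UTC value read from the database

    -- Adding a signed offset handles both directions, so no add-vs-subtract branching is needed:
    -- DATEADD(HOUR, -5, x) is the same as subtracting 5 hours.
    SELECT DATEADD(HOUR, @OffsetHours, @StoredUtc) AS DisplayTime;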

You may also be using GetDate() instead of GetUtcDate() in SQL Server; the former would return your local time instead of GMT/UTC.
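One quick way to check which function is actually in play, and what the server's current offset from UTC is:

    -- GETDATE() returns the server's local time; GETUTCDATE() returns UTC.
    -- The difference between them is the server's current offset from UTC,
    -- which shifts while daylight saving time is in effect.
    SELECT GETDATE()    AS ServerLocalTime,
           GETUTCDATE() AS ServerUtcTime,
           DATEDIFF(HOUR, GETUTCDATE(), GETDATE()) AS CurrentUtcOffsetHours;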

|||

I haven't done the math yet. All I did was run getUTCDate() on the local machine I'm using for testing. I'm in NY (GMT -5), so the value in the database should have been 5 hours ahead, but it was only 4. Sorry for the add/subtract confusion; yeah, I was just speaking affirmatively.

I'm sure I'm not using getDate(); if I were, the value in the database would have been identical to the time I was seeing on the computer.

I've thought about the DST (daylight saving) factor, but I don't see why that would have any effect, since the database just looks at the computer's settings (which time zone it is set to) and compensates to arrive at GMT.

Has anyone else had a similar problem?

Thanks.
