What database strategy to choose for a large web application

Posted by Snoopy on Stack Overflow, 2012-06-30.

I have to rewrite a large database application running on 32 servers. The hardware is up to date; each machine has two quad-core Xeons and 32 GB of RAM.

The database is multi-tenant: each customer has their own file, around 5 to 10 GB each. I run around 50 databases on this hardware. The app is open to the web, so I have no control over the load. There are no really complex queries, so SQL is not required if there is a better solution.

The databases get updated via FTP every day at midnight; apart from that, the database is read-only. C# is my favourite language and I want to use ASP.NET MVC.

I thought about the following options:

  • Use two big machines running SQL Server 2012 to serve the 32 servers with data; the 32 servers run IIS and expose REST services.

  • Denormalize the database and use Redis on each web server, with BookSleeve as the Redis client (see the sketch after this list).

  • Use a combination of SQL Server and Redis.

  • Use SQL Server 2012 together with Hadoop.

  • Use Hadoop without SQL Server.
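
To make the Redis option concrete, here is a minimal sketch of the read path with BookSleeve. The "customer:{tenant}:record:{key}" naming scheme and the connection details are assumptions for illustration, not something stated in the question:

    using System;
    using BookSleeve;

    public class RedisLookup
    {
        private readonly RedisConnection conn;

        public RedisLookup(string host)
        {
            // One multiplexed BookSleeve connection can be shared by all requests.
            conn = new RedisConnection(host);
            conn.Wait(conn.Open());
        }

        // Fetch a denormalized record for a tenant by key.
        // The key scheme below is hypothetical.
        public string GetRecord(int tenantId, string key)
        {
            var redisKey = string.Format("customer:{0}:record:{1}", tenantId, key);
            return conn.Wait(conn.Strings.GetString(0, redisKey));
        }
    }

Since the data is read-only between the nightly FTP loads, the denormalized keys could be rebuilt once at midnight and served without any cache invalidation during the day.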

What is the best approach for a read-only database to get the best performance without losing maintainability? Does MapReduce make sense at all in such a scenario?

The reason for the rewrite is that the old app, written in C++ with ISAM technology, is too slow, and its interfaces are old-fashioned and awkward to use from a website, especially with Ajax.

The app uses a relational data model with many tables, but it is possible to build one accelerator table against which all queries can be performed; everything else in the other tables can then be retrieved by a simple key lookup.
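
If the data stays in SQL Server, that accelerator pattern could look roughly like the following. The table and column names (Accelerator, DetailA, RecordKey, SearchText, Payload) are invented for the example; only the two-step shape, one query against the accelerator table followed by a primary-key lookup, reflects the question:

    using System.Data.SqlClient;

    public static class AcceleratorLookup
    {
        public static string FindDetail(string connectionString, string searchTerm)
        {
            using (var cn = new SqlConnection(connectionString))
            {
                cn.Open();

                // 1) All queries run against the single denormalized accelerator table.
                var findKey = new SqlCommand(
                    "SELECT TOP 1 RecordKey FROM Accelerator WHERE SearchText LIKE @term",
                    cn);
                findKey.Parameters.AddWithValue("@term", searchTerm + "%");
                var key = findKey.ExecuteScalar();
                if (key == null) return null;

                // 2) Everything else is a simple primary-key lookup into a detail table.
                var getDetail = new SqlCommand(
                    "SELECT Payload FROM DetailA WHERE RecordKey = @key", cn);
                getDetail.Parameters.AddWithValue("@key", key);
                return (string)getDetail.ExecuteScalar();
            }
        }
    }

The same two-step shape maps naturally onto Redis as well: the accelerator table becomes the precomputed key space, and the detail lookups become GETs.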
