Search Results

Search found 4 results on 1 page for 'youwhut'.

  • SQL Server to PostgreSQL - Migration and design concerns

    - by youwhut
    Currently migrating from SQL Server to PostgreSQL and attempting to improve a couple of key areas on the way. I have an Articles table:

        CREATE TABLE [dbo].[Articles](
            [server_ref]    [int]          NOT NULL,
            [article_ref]   [int]          NOT NULL,
            [article_title] [varchar](400) NOT NULL,
            [category_ref]  [int]          NOT NULL,
            [size]          [bigint]       NOT NULL
        )

    Data (comma-delimited text files) is dumped on the import server by ~500 (out of ~1000) servers on a daily basis. Importing works like this:

        1. Indexes are disabled on the Articles table.
        2. For each dumped text file:
           - Data is BULK copied to a temporary table.
           - The temporary table is updated.
           - Old data for that server is dropped from the Articles table.
           - The temporary table's data is copied into the Articles table.
           - The temporary table is dropped.
        3. Once this process is complete for all servers, the indexes are built and the new database is copied to a web server.

    I am reasonably happy with this process, but there is always room for improvement as I strive for a real-time (haha!) system. Is what I am doing correct?

    The Articles table contains ~500 million records and is expected to grow. Searching across this table is okay but could be better. For example:

        SELECT * FROM Articles WHERE server_ref = 33 AND article_title LIKE '%criteria%'

    has been satisfactory, but I want to improve the speed of searching; obviously the LIKE is my problem here. Suggestions? Dropping the server_ref filter:

        SELECT * FROM Articles WHERE article_title LIKE '%criteria%'

    is horrendous. Partitioning is a feature of SQL Server Enterprise, i.e. $$$, so free partitioning is one of the many exciting prospects of PostgreSQL. What performance hit will be incurred by the import process (drop data, insert data) and by building the indexes? Will the database grow by a huge amount? The database currently stands at 200 GB and will grow. Copying this across the network is not ideal, but it works.

    I am putting thought into changing the hardware structure of the system. The thinking behind having an import server and a web server is that the import server can do the dirty work (WITHOUT indexes) while the web server (WITH indexes) presents reports. Maybe reducing the system to one server would let me skip the copy across the network: that server would hold two versions of the database, one with indexes for delivering reports and one without for importing new data, and the two would swap daily. Thoughts?

    This is a fantastic system, and believe it or not there is some method to my madness in giving it a big shake-up.

    UPDATE: I am not looking for help with relational databases, but hoping to bounce ideas around with data warehouse experts.
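
    One direction for the LIKE '%criteria%' problem once the data is in PostgreSQL is the pg_trgm extension: a GIN trigram index can serve leading-wildcard LIKE queries that would otherwise force a full scan. The sketch below shows the idea in C# via the Npgsql driver; the connection string, the lowercase articles table name, and the index name are illustrative assumptions rather than details from the question:

        using System;
        using Npgsql; // PostgreSQL ADO.NET driver

        class ArticleSearch
        {
            static void Main()
            {
                // Placeholder connection details.
                using (var conn = new NpgsqlConnection(
                    "Host=localhost;Database=articles;Username=app;Password=secret"))
                {
                    conn.Open();

                    // One-time DDL: a trigram index lets the planner use an index
                    // even for a leading-wildcard LIKE pattern.
                    Run(conn, "CREATE EXTENSION IF NOT EXISTS pg_trgm");
                    Run(conn, "CREATE INDEX idx_articles_title_trgm " +
                              "ON articles USING gin (article_title gin_trgm_ops)");

                    // The query itself is unchanged; only the plan improves.
                    using (var search = new NpgsqlCommand(
                        "SELECT article_ref, article_title FROM articles " +
                        "WHERE server_ref = @server AND article_title LIKE @pattern", conn))
                    {
                        search.Parameters.AddWithValue("server", 33);
                        search.Parameters.AddWithValue("pattern", "%criteria%");

                        using (var reader = search.ExecuteReader())
                            while (reader.Read())
                                Console.WriteLine("{0}: {1}",
                                    reader.GetInt32(0), reader.GetString(1));
                    }
                }
            }

            static void Run(NpgsqlConnection conn, string sql)
            {
                using (var cmd = new NpgsqlCommand(sql, conn))
                    cmd.ExecuteNonQuery();
            }
        }

    Building that index on ~500 million rows is itself a heavy operation, so it belongs with the existing post-import index build. If word-level matching is acceptable, PostgreSQL's built-in full-text search (tsvector/tsquery with a GIN index) is the other common route.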

    Read the article

  • WCF to WCF communication 401, HttpClient

    - by youwhut
    I have a WCF REST service that needs to communicate with another WCF REST service. There are three websites:

        Default Web Site
        Website1
        Website2

    If I set up both services in Default Web Site and connect to the other (using HttpClient) via the URI http://localhost/service, then everything is okay. The desired set-up is to move these two services to the separate websites and, rather than using the URI http://localhost/service, access the service via http://website1.domain.com/service, still using HttpClient. I receive the exception:

        System.ArgumentOutOfRangeException: Unauthorized (401) is not one of the
        following: OK (200), Created (201), Accepted (202),
        NonAuthoritativeInformation (203), NoContent (204), ResetContent (205),
        PartialContent (206)

    I can see this is a 401, but what is going on here? Thanks.
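
    A hedged guess at the cause: moving the call from http://localhost to http://website1.domain.com changes how IIS authenticates the request (for instance, Windows authentication that succeeds implicitly on the local machine but not for the named host), so the service returns 401 before the client library's status check throws. One way to test that theory is to attach explicit credentials to the client. The sketch below uses System.Net.Http.HttpClient for illustration; if the HttpClient in the question is the WCF REST Starter Kit's Microsoft.Http.HttpClient, look for the equivalent credentials setting on its transport settings:

        using System;
        using System.Net;
        using System.Net.Http;

        class ServiceCaller
        {
            static void Main()
            {
                // Assumption: the 401 comes from Windows authentication on the
                // target site that an anonymous request no longer satisfies.
                var handler = new HttpClientHandler
                {
                    Credentials = CredentialCache.DefaultCredentials
                };

                using (var client = new HttpClient(handler))
                {
                    var response = client
                        .GetAsync("http://website1.domain.com/service")
                        .Result;
                    // Expect 200 once authentication succeeds.
                    Console.WriteLine((int)response.StatusCode);
                }
            }
        }

    If default credentials do not help, comparing the IIS authentication settings (anonymous vs. Windows) on Website1 against those on Default Web Site is the next obvious check.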

    Read the article

  • ASP.NET MVC Alter Markup before Output

    - by youwhut
    Hi, excuse my limited knowledge here. In the past I have used Steve Sanderson's method to HTML-encode by default at runtime: http://blog.stevensanderson.com/2007/12/19/aspnet-mvc-prevent-xss-with-automatic-html-encoding/

    I need to alter img src and a href attributes before they are spat out to the user's browser. There is a solution using JavaScript, but it is not ideal for several reasons, and intercepting the compiler is not an option because it would mean unnecessarily using Response.Write for trivial HTML.

    Is there something I can do with HTTP modules or the view engine? Any thoughts? Cheers.
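
    One HTTP-module route is a response filter: wrap Response.Filter in a stream that buffers the generated HTML and rewrites the attributes before anything reaches the browser. The sketch below is a minimal illustration, not a known-good production filter; the module name, the CDN host, and the single img-src rule are invented placeholders:

        using System;
        using System.IO;
        using System.Text;
        using System.Text.RegularExpressions;
        using System.Web;

        public class MarkupRewriteModule : IHttpModule
        {
            public void Init(HttpApplication app)
            {
                app.PostRequestHandlerExecute += (sender, e) =>
                {
                    var ctx = ((HttpApplication)sender).Context;
                    if (ctx.Response.ContentType == "text/html")
                        ctx.Response.Filter = new RewriteFilter(ctx.Response.Filter);
                };
            }

            public void Dispose() { }
        }

        // Buffers the whole response, rewrites it once, then writes it out.
        internal class RewriteFilter : Stream
        {
            private readonly Stream _inner;
            private readonly MemoryStream _buffer = new MemoryStream();

            public RewriteFilter(Stream inner) { _inner = inner; }

            public override void Write(byte[] buffer, int offset, int count)
            {
                // Hold everything so rewrites never straddle chunk boundaries.
                _buffer.Write(buffer, offset, count);
            }

            public override void Close()
            {
                string html = Encoding.UTF8.GetString(_buffer.ToArray());

                // Illustrative rewrite: point root-relative img src at a CDN.
                html = Regex.Replace(html,
                    @"(<img\b[^>]*\bsrc\s*=\s*"")(/[^""]*)",
                    "$1http://cdn.example.com$2",
                    RegexOptions.IgnoreCase);

                byte[] output = Encoding.UTF8.GetBytes(html);
                _inner.Write(output, 0, output.Length);
                _inner.Close();
                base.Close();
            }

            public override void Flush() { }

            // Remaining Stream plumbing.
            public override bool CanRead { get { return false; } }
            public override bool CanSeek { get { return false; } }
            public override bool CanWrite { get { return true; } }
            public override long Length { get { throw new NotSupportedException(); } }
            public override long Position
            {
                get { throw new NotSupportedException(); }
                set { throw new NotSupportedException(); }
            }
            public override int Read(byte[] b, int o, int c) { throw new NotSupportedException(); }
            public override long Seek(long o, SeekOrigin origin) { throw new NotSupportedException(); }
            public override void SetLength(long value) { throw new NotSupportedException(); }
        }

    The module still needs registering in web.config, and buffering the whole response trades memory for correctness; a view-engine hook would also work, but the filter keeps the rewriting out of the views entirely.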

    Read the article

  • Navigate HTML Source while performing WatiN tests

    - by youwhut
    I am performing actions on the page during WatiN tests. What is the neatest method for asserting that certain elements are where they should be by evaluating the HTML source? I am scraping the source, but I am looking for a clean way to navigate the tags pulled back.

    UPDATE: Right now I am thinking about grabbing certain elements within the HTML source using regular expressions and then analysing those matches to see whether other elements exist within them. Other thoughts appreciated.
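
    As an alternative to regular expressions over the scraped source, an HTML parser gives a tree that can be walked with XPath, which stays readable when the assertion is "element A contains element B". A small sketch using the third-party HtmlAgilityPack library; the id and class values are invented for illustration:

        using System;
        using HtmlAgilityPack; // third-party HTML parser

        class SourceAssertions
        {
            // True when the container element exists and holds the expected child.
            static bool ContainsNestedElement(string html)
            {
                var doc = new HtmlDocument();
                doc.LoadHtml(html);

                var container = doc.DocumentNode.SelectSingleNode("//div[@id='results']");
                if (container == null) return false;

                // Relative XPath: search only inside the container.
                return container.SelectSingleNode(".//a[@class='detail-link']") != null;
            }

            static void Main()
            {
                string scraped =
                    "<div id='results'><a class='detail-link' href='#'>x</a></div>";
                Console.WriteLine(ContainsNestedElement(scraped)); // True
            }
        }

    WatiN's own element finders cover many simple existence checks directly, so the parser mainly earns its keep for structural assertions that the WatiN API does not express cleanly.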

    Read the article
