Search Results

Search found 28685 results on 1148 pages for 'query performance'.

Page 326/1148 | < Previous Page | 322 323 324 325 326 327 328 329 330 331 332 333  | Next Page >

  • Url Rewriting with Querystrings

    - by Alon
    In IIS 7.5, I'm trying to rewrite a URL such as /about to /content.asp?p=about, with support for query strings, so if the original URL was /about?x=y, it should rewrite to /content.asp?p=about&x=y. The basic rewriting is now working, but when I try to add a query string it doesn't work. I tried both /about?x=y and /about&x=y. My current rule:

        <rule name="RewriteUserFriendlyURL1" stopProcessing="false">
          <match url="^([^/]+)/?$" />
          <conditions>
            <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />
            <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" />
          </conditions>
          <action type="Rewrite" url="content.asp?p={R:1}" />
        </rule>

    How can I fix this? Thank you.
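
    A hedged sketch of one thing to check, assuming the IIS URL Rewrite module shown above: the <action> element's appendQueryString attribute controls whether the original query string is carried over to the rewritten URL. It defaults to true, so /about?x=y should already become /content.asp?p=about&x=y with the rule as written; making it explicit rules out an outer rule or inherited setting having turned it off. (The /about&x=y form has no query string at all, since the &x=y is just part of the path, so only the ?-form can be forwarded this way.)

        <rule name="RewriteUserFriendlyURL1" stopProcessing="false">
          <match url="^([^/]+)/?$" />
          <conditions>
            <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />
            <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" />
          </conditions>
          <!-- appendQueryString="true" (the default) appends the original query
               string, so /about?x=y rewrites to content.asp?p=about&x=y -->
          <action type="Rewrite" url="content.asp?p={R:1}" appendQueryString="true" />
        </rule>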

    Read the article

  • Better understanding of my SQL transactions

    - by Slew Poke
    I just realized that my application was needlessly making 50+ database calls per user request due to some hidden coding -- hidden in the sense that between LINQ, persistence frameworks and events it just so turned out that a huge number of calls were being made without me being aware. Is there a recommended way to analyze individual transactions going to my SQL 2008 database, preferably with some integration to my Visual Studio 2010 environment? I want to be able to 'spy' on individual transactions being made, but only for certain pieces of my code, and without making serious changes to either the code or database.
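
    One approach that fits the "without making serious changes to either the code or database" constraint is a server-side trace scoped to the application: SQL Server 2008 supports Extended Events, and a session filtered on the client application name captures every statement the app sends without touching the app itself. A rough sketch, assuming the connection string sets Application Name=MyApp (the app and session names below are placeholders); SQL Server Profiler filtered on ApplicationName gives the same visibility with a GUI:

        -- Capture completed statements from one application only (SQL Server 2008+).
        CREATE EVENT SESSION [SpyOnMyApp] ON SERVER
        ADD EVENT sqlserver.sql_statement_completed
        (
            ACTION (sqlserver.sql_text, sqlserver.session_id)
            WHERE (sqlserver.client_app_name = N'MyApp')
        )
        ADD TARGET package0.ring_buffer;   -- in-memory target; nothing written to disk
        GO

        ALTER EVENT SESSION [SpyOnMyApp] ON SERVER STATE = START;
        -- ...exercise the page in question, read the ring_buffer XML from
        -- sys.dm_xe_session_targets, then stop the session when done:
        ALTER EVENT SESSION [SpyOnMyApp] ON SERVER STATE = STOP;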

    Read the article

  • How to fix this simple SQL query?

    - by morpheous
    I have a database with three tables: user_table, country_table, and city_table. I want to write ANSI SQL that fetches all the user data (i.e. user details, including the name of the country of the user's last school and the name of the city they live in now). The problem I am having is that I have to use a self join, and I am getting slightly confused. The schema is shown below:

        CREATE TABLE user_table (id int, first_name varchar(16), last_school_country_id int, city_id int);
        CREATE TABLE country_table (id int, name varchar(32));
        CREATE TABLE city_table (id int, country_id int, name varchar(32));

    This is the query I have come up with so far, but the results are wrong, and sometimes the db engine (MySQL) asks me if I want to show all [HUGE NUMBER HERE] results - which makes me suspect that I am unintentionally creating a cartesian product somewhere. Can someone explain what is wrong with this SQL statement, and what I need to do to fix it?

        SELECT usr.id AS id, usr.first_name, ctry1.name AS loc_country_name,
               ctry2.name AS school_country_name, city.name AS loc_city_name
        FROM user_table usr, country_table ctry1, country_table ctry2, city_table city
        WHERE usr.last_school_country_id = ctry2.id
          AND usr.city_id = city.id
          AND city.country_id = ctry1.id
          AND ctry1.id = ctry2.id;
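
    For what it's worth, the four WHERE conditions above do tie every table together, so the query is not a cartesian product as written; the likely culprit for the wrong results is the last condition, ctry1.id = ctry2.id, which forces the school country and the country of the current city to be the same. A sketch of the same query with that condition dropped and the joins made explicit (same tables and columns as above):

        SELECT usr.id          AS id,
               usr.first_name,
               ctry1.name      AS loc_country_name,
               ctry2.name      AS school_country_name,
               city.name       AS loc_city_name
        FROM user_table usr
        JOIN city_table    city  ON usr.city_id = city.id
        JOIN country_table ctry1 ON city.country_id = ctry1.id
        JOIN country_table ctry2 ON usr.last_school_country_id = ctry2.id;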

    Read the article

  • About the String#substring() method

    - by alain.janinm
    If we take a look at the String#substring method implementation:

        new String(offset + beginIndex, endIndex - beginIndex, value);

    we see that a new String is created with the same original content (the char[] value parameter). So the workaround is to use new String(toto.substring(...)) to drop the reference to the original char[] value and make it eligible for GC (if no more references exist). I would like to know if there is a special reason that explains this implementation. Why doesn't the method itself create a new, shorter String, and why does it keep the full original value instead? The other related question is: should we always use new String(...) when dealing with substring?

    Read the article

  • WordPress website getting hung up for 10-15 seconds

    - by synergy989
    Problem: I have a website which loads fine: http://testupg.videve.com/ Go to the blog section and it begins to load, gets hung up, then loads: http://testupg.videve.com/blog/ It's all one WordPress install. The only pages that get hung up are the blog page itself and any article page; the video pages load fine.

    What I have done and noticed: If I disable all plugins, the blog page loads fine. I narrowed it down to the DisplayBuddy plugins (Featured Posts, Carousel, Slider): as soon as I enable any single one of them, the site loads slowly. The thing that doesn't make sense is that I disabled the sidebar (deleted it from the template) and the article page still loaded slowly, even though it has ZERO instances of this plugin; I disable the plugin and the article page loads fine. How on earth can this be? I am hoping the case above can give a little bit of insight!

    One other thing: the video pages use a custom taxonomy, so I am wondering if it's something to do with the default WordPress taxonomy. Any help is GREATLY appreciated; I have been at this thing for hours and it's time to call in some support. Cheers

    Read the article

  • Which method of adding items to the ASP.NET Dictionary class is more efficient?

    - by ahmd0
    I'm converting a comma-separated list of strings into a dictionary using C# in ASP.NET (omitting any duplicates):

        string str = "1,2, 4, 2, 4, item 3,item2, item 3"; // just a random string for the sake of this example

    and I was wondering which method is more efficient?

    1 - Using a try/catch block:

        Dictionary<string, string> dic = new Dictionary<string, string>();
        string[] strs = str.Split(',');
        foreach (string s in strs)
        {
            if (!string.IsNullOrWhiteSpace(s))
            {
                try
                {
                    string s2 = s.Trim();
                    dic.Add(s2, s2);
                }
                catch { }
            }
        }

    2 - Or using the ContainsKey() method:

        Dictionary<string, string> dic = new Dictionary<string, string>();
        string[] strs = str.Split(',');
        foreach (string s in strs)
        {
            if (!string.IsNullOrWhiteSpace(s))
            {
                string s2 = s.Trim();
                if (!dic.ContainsKey(s2))
                    dic.Add(s2, s2);
            }
        }
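
    As a rule of thumb, the second version is the one to prefer: throwing and swallowing an exception for every duplicate is far more expensive than a ContainsKey lookup, and the empty catch also hides real failures. A sketch of a third option, assuming (as in the snippet above) that the key and value are always the same string, so the structure is really a set:

        // Indexer assignment: adds the key if missing, overwrites if present,
        // so no exception and no separate ContainsKey probe is needed.
        var dic = new Dictionary<string, string>();
        foreach (string s in str.Split(','))
        {
            if (!string.IsNullOrWhiteSpace(s))
            {
                string s2 = s.Trim();
                dic[s2] = s2;
            }
        }

        // If the dictionary only ever maps a string to itself, a HashSet<string>
        // expresses the intent (a set of distinct trimmed items) more directly.
        // Requires "using System.Linq;" for Select/Where.
        var unique = new HashSet<string>(
            str.Split(',').Select(x => x.Trim()).Where(x => x.Length > 0));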

    Read the article

  • MySQL AVG function for the most recent 15 records by date (ORDER BY date DESC) for every symbol

    - by venkatesh
    I am trying to create a SQL statement (for a table which holds stock symbols and prices on specified dates) that shows, for each symbol, the average price over the last 5 days and over the last 15 days. Table columns:

        symbol, open, high, close, date

    The average price is calculated from the last 5 days and the last 15 days. I tried this to get it for one symbol:

        SELECT avg(close), avg(`trd_qty`)
        FROM (
            SELECT * FROM cashmarket
            WHERE symbol = 'hdil'
            ORDER BY `M_day` DESC
            LIMIT 0, 15
        ) s

    ...but I couldn't get the desired list showing the average values for all symbols.
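
    One way to get both averages for every symbol in a single statement is to rank each symbol's rows by date and aggregate conditionally. The sketch below uses window functions, so it assumes MySQL 8.0 or newer (older versions need user variables or a correlated subquery instead), and it uses the column names from the table description above; swap in the real names (M_day, trd_qty, ...) as needed:

        SELECT symbol,
               AVG(CASE WHEN rn <= 5  THEN `close` END) AS avg_close_5d,
               AVG(CASE WHEN rn <= 15 THEN `close` END) AS avg_close_15d
        FROM (
            SELECT symbol, `close`,
                   ROW_NUMBER() OVER (PARTITION BY symbol ORDER BY `date` DESC) AS rn
            FROM cashmarket
        ) ranked
        WHERE rn <= 15
        GROUP BY symbol;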

    Read the article

  • Generated images fail to load in browser

    - by notJim
    I've got a page on a webapp that has about 13 images that are generated by my application, which is written in the Kohana PHP framework. The images are actually graphs. They are cached so they are only generated once, but the first time the user visits the page, and the images all have to be generated, about half of the images don't load in the browser. Once the page has been requested once and images are cached, they all load successfully. Doing some ad-hoc testing, if I load an individual image in the browser, it takes from 450-700 ms to load with an empty cache (I checked this using Google Chrome's resource tracking feature). For reference, it takes around 90-150 ms to load a cached image. Even if the image cache is empty, I have the data and some of the application's startup tasks cached, so that after the first request, none of that data needs to be fetched. My questions are: Why are the images failing to load? It seems like the browser just decides not to download the image after a certain point, rather than waiting for them all to finish loading. What can I do to get them to load the first time, with an empty cache? Obviously one option is to decrease the load times, and I could figure out how to do that by profiling the app, but are there other options? As I mentioned, the app is in the Kohana PHP framework, and it's running on Apache. As an aside, I've solved this problem for now by fetching the page as soon as the data is available (it comes from a batch process), so that the images are always cached by the time the user sees them. That feels like a kludgey solution to me, though, and I'm curious about what's actually going on.

    Read the article

  • Is Java serialization a tool to shrink the memory footprint?

    - by Pentius
    Hey folks, does serialization in Java always shrink the memory that is used to hold an object structure? Or is it likely that serialization will have higher costs? In other words: is serialization a tool to shrink the memory footprint of object structures in Java?

    Edit: I'm totally aware of what serialization was intended for, but thanks anyway :-) But you know, tools can be misused. My question is whether it is a good tool to decrease memory usage. So what reasons can you imagine why memory usage should increase or decrease? What will happen in most cases?
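
    Serialization was designed for persistence and transport, not as an in-memory compression scheme: the serialized form is a separate byte stream, and deserializing it recreates full objects, so it does not shrink the footprint of a live object graph (and the stream is often larger than expected because of class metadata). If the question is how big the serialized form is in a concrete case, that part is easy to measure; a minimal, self-contained sketch:

        import java.io.ByteArrayOutputStream;
        import java.io.IOException;
        import java.io.ObjectOutputStream;
        import java.util.ArrayList;

        public class SerializedSizeDemo {
            public static void main(String[] args) throws IOException {
                ArrayList<Integer> data = new ArrayList<>();
                for (int i = 0; i < 10000; i++) {
                    data.add(i);
                }
                ByteArrayOutputStream bytes = new ByteArrayOutputStream();
                try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
                    out.writeObject(data);   // serialize the whole list
                }
                // Size of the serialized byte stream only; the live list still
                // occupies its own (typically larger) space on the heap.
                System.out.println("Serialized size: " + bytes.size() + " bytes");
            }
        }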

    Read the article

  • How do I strip multiple (optional) parts of a SQL string using .NET Regular Expressions?

    - by Luc
    I've been working on this for a few hours now and can't find any help on it. Basically, I'm trying to split a SQL string into various parts (fields, from, where, having, groupBy, orderBy). I refuse to believe that I'm the first person to ever try to do this, so I'd like to ask for some advice from the StackOverflow community. :) To understand what I need, assume the following SQL string:

        select * from table1 inner join table2 on table1.id = table2.id where field1 = 'sam' having table1.field3 > 0 group by table1.field4 order by table1.field5

    I created a regular expression to group the parts accordingly:

        select\s+(?<fields>.+)\s+from\s+(?<from>.+)\s+where\s+(?<where>.+)\s+having\s+(?<having>.+)\s+group\sby\s+(?<groupby>.+)\s+order\sby\s+(?<orderby>.+)

    This gives me the following results:

        fields  => *
        from    => table1 inner join table2 on table1.id = table2.id
        where   => field1 = 'sam'
        having  => table1.field3 > 0
        groupby => table1.field4
        orderby => table1.field5

    The problem I'm faced with is that if any part of the SQL string is missing after the 'from' clause, the regular expression doesn't match. To fix that, I've tried putting each optional part in its own (...)? group, but that doesn't work. It simply puts all the optional parts (where, having, groupBy, and orderBy) into the 'from' group. Any ideas?
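
    One way that works for simple, single SELECT statements is to make each trailing clause its own optional non-capturing group and switch the earlier quantifiers to lazy, so a missing WHERE/HAVING/GROUP BY/ORDER BY just leaves its group empty instead of breaking the whole match. A sketch (hedged: it assumes the clauses appear in this fixed order and will not cope with subqueries or nested SELECTs):

        using System.Text.RegularExpressions;

        // 'sql' is assumed to hold the statement to split, e.g. the sample above.
        string sql = "select * from table1 inner join table2 on table1.id = table2.id "
                   + "where field1 = 'sam' having table1.field3 > 0 "
                   + "group by table1.field4 order by table1.field5";

        string pattern =
            @"^select\s+(?<fields>.+?)\s+from\s+(?<from>.+?)" +
            @"(?:\s+where\s+(?<where>.+?))?" +
            @"(?:\s+having\s+(?<having>.+?))?" +
            @"(?:\s+group\s+by\s+(?<groupby>.+?))?" +
            @"(?:\s+order\s+by\s+(?<orderby>.+?))?$";

        Match m = Regex.Match(sql, pattern,
            RegexOptions.IgnoreCase | RegexOptions.Singleline);

        // Clauses that are absent simply leave their group unmatched:
        // m.Groups["where"].Success == false and .Value == "".
        string fromClause = m.Groups["from"].Value;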

    Read the article

  • Windows Workflow runs very slowly on my DEV machine

    - by Joon
    I am developing an app that uses WF hosted in IIS as WCF services for the business layer. This runs quickly on any machine running Windows Server 2008 R2, but very slowly on our dev machines running Windows XP SP3. Yesterday, the workflows were as fast on my dev machine as they are on the server for the whole day. Today, they are back to running slowly again (I rebooted overnight). Has anyone else experienced this problem with workflows running slowly on IIS on XP? What did you do to fix it?

    Read the article

  • Get the count of A -> B and B->A without duplicates

    - by TomGasson
    I have a table like so:

        index | from | to
        ------------------
        1     | ABC  | DEF
        2     | ABC  | GHI
        3     | ABC  | GHI
        4     | ABC  | JKL
        5     | ABC  | JKL
        6     | ABC  | JKL
        7     | DEF  | ABC
        8     | DEF  | GHI
        9     | DEF  | JKL
        10    | GHI  | ABC
        11    | GHI  | ABC
        12    | GHI  | ABC
        13    | JKL  | DEF

    and I need to count the total number of times each pair of points occurs (regardless of direction) to get this result:

        A   | B   | count
        -----------------
        ABC | DEF | 2
        ABC | GHI | 5
        ABC | JKL | 3
        DEF | GHI | 1
        DEF | JKL | 2

    So far I can get:

        SELECT `a`.`from` as `A`, `a`.`to` as `B`, (`a`.`count` + `b`.`count`) as `count`
        FROM (SELECT `from`, `to`, count(*) as `count` FROM `table` GROUP BY 1,2) `a`
        LEFT OUTER JOIN (SELECT `from`, `to`, count(*) as `count` FROM `table` GROUP BY 1,2) `b`
            ON `a`.`from` = `b`.`to` AND `a`.`to` = `b`.`from`

    But I'm unsure how to remove the A/B swapped duplicates.
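
    A simpler route than joining the two aggregates is to normalize each row's pair into a canonical order first, so ABC -> DEF and DEF -> ABC fall into the same group. A sketch using MySQL's LEAST/GREATEST (same columns as above; from and to need backticks because they are reserved words):

        SELECT LEAST(`from`, `to`)    AS A,
               GREATEST(`from`, `to`) AS B,
               COUNT(*)               AS `count`
        FROM `table`
        GROUP BY 1, 2
        ORDER BY 1, 2;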

    Read the article

  • In Excel 2010, how can I show a count of occurrences on a specific date within multiple time ranges?

    - by Justin
    Here's what I'm trying to do. I have three columns of data: ID, Date (MM/DD/YY), Time (00:00). I need to create a chart or table that shows the number of occurrences on, say, 12/10/2010 between 00:00 and 00:59, 1:00 and 1:59, etc., for each hour of the day. I can do COUNTIF and get results for the date, but I cannot figure out how to show a summary of the count of occurrences per hour for the 24-hour period. I have months of data and many times each day. An example of the data set is below. Any help is greatly appreciated.

        ID   Date        Time
        221  12/10/2010  00:01
        223  12/10/2010  00:45
        227  12/10/2010  01:13
        334  12/11/2010  14:45

    I would like the results to read:

        Date        Time                Count
        12/10/2010  00:00AM - 00:59AM   2
        12/10/2010  01:00AM - 01:59AM   1
        12/10/2010  02:00AM - 02:59AM   0
        ......(continues for every hour of the day)
        12/11/2010  00:00AM - 00:59AM   0
        .........
        12/11/2010  14:00PM - 14:59PM   1

    And so on. Sorry for the length, but I wanted to be clear.

    EDIT: Here is a sample spreadsheet. Very little data, but I couldn't figure out a better way without having a huge file. Tested in Notepad for formatting and it worked OK on import as CSV.

        PID,Date,Time
        2888759,12/10/2010,0:10
        2888760,12/10/2010,0:10
        2888761,12/10/2010,0:10
        2888762,12/10/2010,0:11
        2889078,12/10/2010,15:45
        2889079,12/10/2010,15:57
        2889080,12/10/2010,15:57
        2889081,12/10/2010,15:58
        2889082,12/10/2010,16:10
        2889083,12/10/2010,16:11
        2889084,12/10/2010,16:11
        2889085,12/10/2010,16:12
        2889086,12/10/2010,16:12
        2889087,12/10/2010,16:12
        2889088,12/10/2010,16:13
        2891529,12/14/2010,16:21
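
    If the Time column holds real Excel time values rather than text, COUNTIFS can do the per-hour bucketing directly. A sketch of one formula, assuming the data sits in columns A:C (ID, Date, Time), the summary row's date is in E2, and the bucket's starting time (0:00, 1:00, ...) is in F2; adjust the references to your layout:

        =COUNTIFS($B:$B, $E2, $C:$C, ">=" & $F2, $C:$C, "<" & $F2 + TIME(1,0,0))

    Filled down and across, that produces the summary table above; a PivotTable with the Time field grouped by Hours is the other common way to get the same result.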

    Read the article

  • How to avoid web server traffic peak resulting from iOS Newsstand app receiving a remote notification?

    - by thomers
    I'm developing an iOS Newsstand app. If the app is suspended or not running and the device is connected to a WLAN, a Newsstand app can be triggered by a push remote notification to download the latest issue (in our case around 100 MB) in the background. I'm using Urban Airship for delivery of the push broadcast. I'm now worried about many, many iOS devices hitting the web server for one big download more or less at the same time, because I expect the majority of the devices will receive the notification in a very short timeframe. Instead of a broadcast to all devices, should I rather send individual notifications to small batches of devices, spreading them out over a longer period of time? And/or would a CDN like Amazon CloudFront solve that issue more easily / anyway?

    Read the article

  • Optimizing NSNumber numberWithInt:

    - by Riviera
    I am profiling an iPhone app and I noticed a strange pattern. In a certain block of code that's called quite frequently...

        [item setQuadrant:[NSNumber numberWithInt:a]];
        [item setIndex:[NSNumber numberWithInt:b]];
        [item setTimestamp:[NSNumber numberWithInt:c]];
        [item setState:[NSNumber numberWithInt:d]];
        [item setCompletionPercentage:[NSNumber numberWithInt:e]];
        [item setId_:[NSNumber numberWithInt:f]];

    ...the first call to [NSNumber numberWithInt:] takes an inordinate amount of time, in the order of 10-15x that of the remaining calls. I've verified that the results are consistent if I shuffle the lines (the first line is always the slow one, by the same ratio). Is there something going on that I'm not aware of? Perhaps this happens because this block is inside a try/catch?

    Read the article

  • Does anybody have any suggestions on which of these two approaches is better for large delete?

    - by RPS
    Approach #1:

        DECLARE @count int
        SET @count = 2000
        DECLARE @rowcount int
        SET @rowcount = @count
        WHILE @rowcount = @count
        BEGIN
            DELETE TOP (@count) FROM ProductOrderInfo
            WHERE ProductId = @product_id
              AND bCopied = 1
              AND FileNameCRC = @localNameCrc

            SELECT @rowcount = @@ROWCOUNT

            WAITFOR DELAY '000:00:00.400'
        END

    Approach #2:

        DECLARE @count int
        SET @count = 2000
        DECLARE @rowcount int
        SET @rowcount = @count
        WHILE @rowcount = @count
        BEGIN
            DELETE FROM ProductOrderInfo
            WHERE ProductId = @product_id
              AND FileNameCRC IN
              (
                  SELECT TOP(@count) FileNameCRC
                  FROM ProductOrderInfo WITH (NOLOCK)
                  WHERE bCopied = 1
                    AND FileNameCRC = @localNameCrc
              )

            SELECT @rowcount = @@ROWCOUNT

            WAITFOR DELAY '000:00:00.400'
        END

    Read the article

  • How to detect whether an EventWaitHandle is waiting?

    - by AngryHacker
    I have a fairly heavily multi-threaded WinForms app that employs EventWaitHandle in a number of places to synchronize access. So I have code similar to this:

        List<int> _revTypes;
        EventWaitHandle _ewh = new EventWaitHandle(false, EventResetMode.ManualReset);

        void StartBackgroundTask()
        {
            _ewh.Reset();
            Thread t = new Thread(new ThreadStart(LoadStuff));
            t.Start();
        }

        void LoadStuff()
        {
            _revTypes = WebServiceCall.GetRevTypes();
            // ...bunch of other calls fetching data from all over the place
            // using the same pattern
            _ewh.Set();
        }

        List<int> RevTypes
        {
            get
            {
                _ewh.WaitOne();
                return _revTypes;
            }
        }

    Then I just call .RevTypes somewhere from the UI and it will return data to me when LoadStuff has finished executing. All this works perfectly correctly; however, RevTypes is just one property - there are actually several dozen of these. And one or several of these properties are keeping the UI from loading quickly. Short of placing benchmark code into each property, is there a way to see which property is holding the UI up? Is there a way to see whether the EventWaitHandle is forced to actually wait?
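
    There is no public "is anyone waiting" flag on EventWaitHandle, but two cheap checks get at the question being asked: WaitOne(0) returns immediately and tells you whether the handle is already signaled (i.e. whether a real wait would have been needed), and a Stopwatch around the wait records how long each property actually stalled the caller. A minimal sketch using the RevTypes property from above:

        using System.Diagnostics;

        List<int> RevTypes
        {
            get
            {
                // WaitOne(0) polls without blocking: true if already signaled,
                // false if a real wait would be required.
                bool wouldBlock = !_ewh.WaitOne(0);

                var sw = Stopwatch.StartNew();
                _ewh.WaitOne();          // the actual (possibly blocking) wait
                sw.Stop();

                if (wouldBlock)
                    Debug.WriteLine("RevTypes blocked the caller for "
                        + sw.ElapsedMilliseconds + " ms");

                return _revTypes;
            }
        }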

    Read the article

  • iPhone Core Data - Access deep attributes with to-many relationships

    - by ncohen
    Hi everyone, let's say I have an entity user which has a one-to-many relationship with the entity menu, which has a one-to-many relationship with the entity meal, which has a many-to-one relationship with the entity recipe, which has a one-to-many relationship with the entity element. What I would like to do is select the elements which belong to a particular user (username = myUsername) and particular menus (minDate < menu.date < maxDate). Does anyone have an idea how to get them? Thanks

    Read the article

  • Query Parameter Value Is Null When Enum Item 0 is Cast with Int32

    - by Timothy
    When I use the first item in a zero-based enum, cast to Int32, as a query parameter, the parameter value is null. I've worked around it by simply setting the first item to a value of 1, but I was wondering what's really going on here? This one has me scratching my head. Why is the parameter value regarded as null, instead of 0?

        enum LogEventType : int
        {
            SignIn,
            SignInFailure,
            SignOut,
            ...
        }

        private static DataTable QueryEventLogSession(DateTime start, DateTime stop)
        {
            DataTable entries = new DataTable();
            using (FbConnection conn = new FbConnection(DSN))
            {
                using (FbDataAdapter adapter = new FbDataAdapter(
                    "SELECT event_type, event_timestamp, event_details FROM event_log " +
                    "WHERE event_timestamp BETWEEN @start AND @stop " +
                    "AND event_type IN (@signIn, @signInFailure, @signOut) " +
                    "ORDER BY event_timestamp ASC", conn))
                {
                    adapter.SelectCommand.Parameters.AddRange(new Object[] {
                        new FbParameter("@start", start),
                        new FbParameter("@stop", stop),
                        new FbParameter("@signIn", (Int32)LogEventType.SignIn),
                        new FbParameter("@signInFailure", (Int32)LogEventType.SignInFailure),
                        new FbParameter("@signOut", (Int32)LogEventType.SignOut)});

                    Trace.WriteLine(adapter.SelectCommand.CommandText);
                    foreach (FbParameter p in adapter.SelectCommand.Parameters)
                    {
                        Trace.WriteLine(p.Value.ToString());
                    }
                    adapter.Fill(entries);
                }
            }
            return entries;
        }
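
    This looks like C# overload resolution rather than anything query- or Firebird-specific: LogEventType.SignIn is a constant whose underlying value is 0, and only a constant 0 converts implicitly to an enum type, so new FbParameter("@signIn", (Int32)LogEventType.SignIn) can bind to a (string, FbDbType) constructor instead of (string, object), leaving Value null - which is also why the non-zero members behave. Two hedged drop-in replacements for that parameter entry, assuming FbParameter follows the usual ADO.NET constructor pattern:

        // Option 1: box to object explicitly so the (string, object) constructor wins
        // even when the enum's underlying value is the constant 0.
        new FbParameter("@signIn", (object)(int)LogEventType.SignIn),

        // Option 2: name the database type and set Value separately, which never goes
        // through the ambiguous overload in the first place.
        new FbParameter("@signIn", FbDbType.Integer) { Value = (int)LogEventType.SignIn },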

    Read the article

  • How can I do this with MySQL partitions

    - by Uffo
    I have a table with millions of rows and I want to create some partitions, but I really don't know how to do this. I mean I want the data with IDs 1 - 10000 to be on partition one, the data with IDs 10001 - 20000 to be on partition two, and so on. Can you give me an example of how to do it? I have searched a lot on the internet and I read a lot of documentation, but I still don't understand how it needs to be done! Best Regards,
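
    RANGE partitioning on the id column does exactly this. A sketch against a hypothetical table name (note that MySQL requires every unique key, including the primary key, to contain the partitioning column, so this only works if id is part of the primary key):

        ALTER TABLE my_big_table
        PARTITION BY RANGE (id) (
            PARTITION p0   VALUES LESS THAN (10001),    -- ids 1     .. 10000
            PARTITION p1   VALUES LESS THAN (20001),    -- ids 10001 .. 20000
            PARTITION p2   VALUES LESS THAN (30001),    -- ids 20001 .. 30000
            PARTITION pmax VALUES LESS THAN MAXVALUE    -- everything above
        );

    New ranges can be appended later with ALTER TABLE ... ADD PARTITION; once a MAXVALUE partition exists, splitting it requires REORGANIZE PARTITION instead.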

    Read the article
