Search Results

Search found 7179 results on 288 pages for 'slow logon'.


  • jQuery fadeIn is not working in Internet Explorer

    - by Nazaf
    I have the following HTML DIV which does not work using FadeIn in IE:

        $(".tip").fadeIn("slow"); /* Is not working in IE. */
        $(".tip").show();         /* Works well in IE, that's weird. */

        <div class="tip" style="width: 220px; display: none;">
            <div class="tip-header">
                <span><b>Title</b></span>
                <div class="right close"><a href="javascript:void(0);">close</a>
                    <img alt="" src="/Images/close-normal.png"/></div>
            </div>
            <div class="tip-content">EBody comes here.</div>
        </div>

        .tip { display: block; z-index: 99999; position: fixed; background-color: #ffffff;
               -moz-box-shadow: 2px 2px 10px 2px rgba(0, 0, 0, 0.6);
               -webkit-box-shadow: 2px 2px 10px 2px rgba(0, 0, 0, 0.6);
               border: solid 1px #82C2FA; -moz-border-radius: 8px; -webkit-border-radius: 8px; }
        .tip-header { padding: 8px; min-height: 10px;
                      -moz-border-radius-topleft: 8px; -moz-border-radius-topright: 8px;
                      -webkit-border-radius-topright: 8px; -webkit-border-radius-topleft: 8px;
                      background-color: #CFE6FD; border-bottom: 1px solid #82C2FA; }
        .tip-header span { font-size: 14px; color: #666666; }
        .tip-content { padding: 8px; text-align: left; font-size: 12px; }
        .close, .whats-this { cursor: pointer; }
        .close a { color: #085FBC; text-decoration: none; }
        .close img { vertical-align: bottom; }
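
    One workaround often suggested for this symptom is to skip fadeIn's internal display handling and animate opacity explicitly, clearing IE's proprietary alpha filter when the animation finishes. This is only a sketch of that idea, not a confirmed fix for this particular markup; the selector and "slow" duration come from the question, and the filter cleanup is an assumption about the cause:

        $(".tip")
            .css("opacity", 0)            // start transparent but rendered
            .show()                       // make the fixed-position div take part in layout
            .animate({ opacity: 1 }, "slow", function () {
                // Old IE implements opacity through a proprietary "filter" style;
                // removing it afterwards avoids blurry ClearType text and stuck filters.
                if (this.style.removeAttribute) {
                    this.style.removeAttribute("filter");
                }
            });

    If this works where fadeIn did not, the difference usually points at how IE applies its opacity filter to the element rather than at the show/hide logic itself.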

    Read the article

  • Stored procedure performance randomly plummets; trivial ALTER fixes it. Why?

    - by gWiz
    I have a couple of stored procedures on SQL Server 2005 that I've noticed will suddenly take a significantly long time to complete when invoked from my ASP.NET MVC app running in an IIS6 web farm of four servers. Normal, expected completion time is less than a second; unexpected anomalous completion time is 25-45 seconds. The problem doesn't seem to ever correct itself. However, if I ALTER the stored procedure (even if I don't change anything in the procedure, except to perhaps add a space to the script created by SSMS Modify command), the completion time reverts to expected completion time. IIS and SQL Server are running on separate boxes, both running Windows Server 2003 R2 Enterprise Edition. SQL Server is Standard Edition. All machines have dual Xeon E5450 3GHz CPUs and 4GB RAM. SQL Server is accessed using its TCP/IP protocol over gigabit ethernet (not sure what physical medium). The problem is present from all web servers in the web farm. When I invoke the procedure from a query window in SSMS on my development machine, the procedure completes in normal time. This is strange because I was under the impression that SSMS used the same SqlClient driver as in .NET. When I point my development instance of the web app to the production database, I again get the anomalous long completion time. If my SqlCommand Timeout is too short, I get System.Data.SqlClient.SqlException: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding. Question: Why would performing ALTER on the stored procedure, without actually changing anything in it, restore the completion time to less than a second, as expected? Edit: To clarify, when the procedure is running slow for the app, it simultaneously runs fine in SSMS with the same parameters. The only difference I can discern is login credentials (next time I notice the behavior, I'll be checking from SSMS with the same creds). The ultimate goal is to get the procs to sustainably run with expected speed without requiring occasional intervention. Resolution: I wanted to to update this question in case others are experiencing this issue. Following the leads of the answers below, I was able to consistently reproduce this behavior. In order to test, I utilize sp_recompile and pass it one of the susceptible sprocs. I then initiate a website request from my browser that will invoke the sproc with atypical parameters. Lastly, I initiate a website request to a page that invokes the sproc with typical parameters, and observe that the request does not complete because of a SQL timeout on the sproc invocation. To resolve this on SQL Server 2005, I've added OPTIMIZE FOR hints to my SELECT. The sprocs that were vulnerable all have the "all-in-one" pattern described in this article. This pattern is certainly not ideal but was a necessary trade-off given the timeframe for the project.

    Read the article

  • MySQL - Calculating fields on the fly vs storing calculated data

    - by Christian Varga
    Hi Everyone, I apologise if this has been asked before, but I can't seem to find an answer to a question that I have about calculating on the fly vs storing fields in a database. I read a few articles that suggested it was preferable to calculate when you can, but I would just like to know if that still applies to the following 2 examples. Example 1. Say you are storing data relating to a car. You store the fuel tank size in litres, and how many litres it uses per 100km. You also want to know how many KMs it can travel, which can be calculated from the tank size and economy. I see 2 ways of doing this: When a car is added or updated, calculate the amount of KMs and store this as a static field in the database. Every time a car is accessed, calculate the amount of KMs on the fly. Because the cars economy/tank size doesn't change (although it could be edited), the KMs is a pretty static value. I don't see why we would calculate it every single time the car is accessed. Wouldn't this waste cpu time as opposed to simply storing it in a separate field in the database and calculating only when a car is added or updated? My next example, which is almost an entirely different question (but on the same topic), relates to counting children. Let's say we have a app which has categories and items. We have a view where we display all the categories, and a count of all the items inside each category. Again, I'm wondering what's better. To perform a MySQL query to count all the items in each category every single time the page is accessed? Or store the count in a field in the categories table and update when an item is added / deleted? I know it is redundant to store anything that can be calculated, but I worry that calculating fields or counting records might be slow as opposed to storing the data in a field. If it's not then please let me know, I just want to learn about when to use either method. On a small scale I guess it wouldn't matter either way, but apps like Facebook, would they really count the amount of friends you have every time someone views your profile or would they just store it as a field? I'd appreciate any responses to both of these scenarios, and any resource that might explain the benefits of calculating vs storing. Thanks in advance, Christian

    Read the article

  • BULK INSERT from one table to another all on the server

    - by steve_d
    I have to copy a bunch of data from one database table into another. I can't use SELECT ... INTO because one of the columns is an identity column. Also, I have some changes to make to the schema. I was able to use the export data wizard to create an SSIS package, which I then edited in Visual Studio 2005 to make the changes desired and whatnot. It's certainly faster than an INSERT INTO, but it seems silly to me to download the data to a different computer just to upload it back again. (Assuming that I am correct that that's what the SSIS package is doing.) Is there an equivalent to BULK INSERT that runs directly on the server, allows keeping identity values, and pulls data from a table? (As far as I can tell, BULK INSERT can only pull data from a file.) Edit: I do know about IDENTITY_INSERT, but because there is a fair amount of data involved, INSERT INTO ... SELECT is kind of slow. SSIS/BULK INSERT dumps the data into the table without regard to indexes and logging and whatnot, so it's faster. (Of course creating the clustered index on the table once it's populated is not fast, but it's still faster than the INSERT INTO ... SELECT that I tried in my first attempt.) Edit 2: The schema changes include (but are not limited to) the following:

    1. Splitting one table into two new tables. In the future each will have its own IDENTITY column, but for the migration I think it will be simplest to use the identity from the original table as the identity for both new tables. Once the migration is over, one of the tables will have a one-to-many relationship to the other.
    2. Moving columns from one table to another.
    3. Deleting some cross-reference tables that only cross-referenced 1-to-1. Instead the reference will be a foreign key in one of the two tables.
    4. Some new columns will be created with default values.
    5. Some tables aren't changing at all, but I have to copy them over due to the "put it all in a new DB" request.

    Read the article

  • Calling a function that resides in the main page from a plugin?

    - by Justin Lee
    I want to call a function from within plugin, but the function is on the main page and not the plugin's .js file. EDIT I have jQuery parsing a very large XML file and building, subsequently, a large list (1.1 MB HTML file when dynamic content is copied, pasted, then saved) that has expand/collapse functionality through a plugin. The overall performance on IE is super slow and doggy, assuming since the page/DOM is so big. I am currently trying to save the collapsed content in the event.data when it is collapsed and remove it from the DOM, then bring it back when it is told to expand... the issue that I am having is that when I bring the content back, obviously the "click" and "hover" events are gone. I'm trying to re-assign them, currently doing so inside the plugin after the plugin expands the content. The issue then though is that is says the function that I declare within the .click() is not defined. Also the hover event doesn't seem to be re-assigning either.... if ($(event.data.trigger).attr('class').indexOf('collapsed') != -1 ) { // if expanding // console.log(event.data.targetContent); $(event.data.trigger).after(event.data.targetContent); $(event.data.target).hide(); /* This Line --->*/ $(event.data.target + 'a.addButton').click(addResourceToList); $(event.data.target + 'li.resource') .hover( function() { if (!($(this).attr("disabled"))) { $(this).addClass("over"); $(this).find("a").css({'display':'block'}); } }, function () { if (!($(this).attr("disabled"))) { $(this).removeClass("over"); $(this).children("a").css({'display':'none'}); } } ); $(event.data.target).css({ "height": "0px", "padding-top": "0px", "padding-bottom": "0px", "margin-top": "0px", "margin-bottom": "0px"}); $(event.data.target).show(); $(event.data.target).animate({ height: event.data.heightVal + "px", paddingTop: event.data.topPaddingVal + "px", paddingBottom: event.data.bottomPaddingVal + "px", marginTop: event.data.topMarginVal + "px", marginBottom: event.data.bottomMarginVal + "px"}, "normal");//, function(){$(this).hide();}); $(event.data.trigger).removeClass("collapsed"); $.cookies.set('jcollapserSub_' + event.data.target, 'expanded', {hoursToLive: 24 * 365}); } else if ($(event.data.trigger).attr('class').indexOf('collapsed') == -1 ) { // if collapsing $(event.data.target).animate({ height: "0px", paddingTop: "0px", paddingBottom: "0px", marginTop: "0px", marginBottom: "0px"}, "normal", function(){$(this).hide();$(this).remove();}); $(event.data.trigger).addClass("collapsed"); $.cookies.set('jcollapserSub_' + event.data.target, 'collapsed', {hoursToLive: 24 * 365}); } EDIT So, having new eyes truly makes a difference. As I was reviewing the code in this post this morning after being away over the weekend, I found where I had err'd. This: $(event.data.target + 'a.addButton').click(addResourceToList); Should be this (notice the space before a.addbutton): $(event.data.target + ' a.addButton').click(addResourceToList); Same issue with the "li.resource". So it was never pointing to the right elements... Thank you, Rene, for your help!!
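
    Since the click and hover handlers keep getting lost every time the collapsed content is pulled out of and re-inserted into the DOM, one alternative worth sketching is event delegation, so the handlers never need re-binding at all. The sketch below assumes jQuery 1.3+ (.live()) and the same class names used in the question; addResourceToList is the question's own function on the main page and is not defined here:

        // Bind once, at page load; .live() also matches elements added later,
        // so re-inserting event.data.targetContent does not drop these handlers.
        $("a.addButton").live("click", addResourceToList);

        // .live() has no hover shorthand in jQuery 1.3/1.4, so bind the two halves:
        $("li.resource").live("mouseover", function () {
            if (!$(this).attr("disabled")) {
                $(this).addClass("over").find("a").css({ display: "block" });
            }
        }).live("mouseout", function () {
            if (!$(this).attr("disabled")) {
                $(this).removeClass("over").children("a").css({ display: "none" });
            }
        });

    Delegation also sidesteps the selector-concatenation problem described in the edit, because the selector no longer has to be rebuilt from event.data.target each time the content is re-inserted.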

    Read the article

  • Finding contained bordered regions from Excel imports.

    - by dmaruca
    I am importing massive amounts of data from Excel that have various table layouts. I have good enough table detection routines and merge cell handling, but I am running into a problem when it comes to dealing with borders. Namely performance. The bordered regions in some of these files have meaning.

    Data Setup: I am importing directly from Office Open XML using VB6 and MSXML. The data is parsed from the XML into a dictionary of cell data. This works wonderfully and is just as fast as using docmd.transferspreadsheet in Access, but returns much better results. Each cell contains a pointer to a style element which contains a pointer to a border element that defines the visibility and weight of each border (this is how the data is structured inside OpenXML, also).

    Challenge: What I'm trying to do is find every region that is enclosed inside borders, and create a list of cells that are inside that region.

    What I have done: I initially created a BFS (breadth-first search) fill routine to find these areas. This works wonderfully and fast for "normal" sized spreadsheets, but gets way too slow for imports into the thousands of rows. One problem is that a border in Excel could be stored in the cell you are checking or as the opposing border in the adjacent cell. That's ok, I can consolidate that data on import to reduce the number of checks needed. One thing I thought about doing is to create a separate graph that outlines the cells using the borders as my edges and using a graph algorithm to find regions that way, but I'm having trouble figuring out how to implement the algorithm. I've used Dijkstra in the past and thought I could do something similar with this. So I can span out using no endpoint to search the entire graph, and if I encounter a closed node I know that I just found an enclosed region, but how can I know if the route I've found is the optimal one? I guess I could flag that to run a separate check from the found closed node to the previous node, ignoring that one edge. This could work, but wouldn't be much better performance-wise on dense graphs. Can anyone else suggest a better method? Thanks for taking the time to read this.

    Read the article

  • How do you remind your Scrum Product Owner about his promises/actions?

    - by Felix Ogg
    ** EDIT: Rephrased the question to re-focus **

    Our Scrum team meets as seldom as possible, but we meet with the product owner every chance we get. We track everyone's agreed action points (particularly theirs). We are 100% agile, but our product owner lives in a traditional world and we remain off-site. We facilitate him in crossing over to our fast-paced world. There's not much wrong. The team and the PO are in good spirits. The PO is present at every meeting and positively energized. Just imagine this person as a 70-year-old, slow grandpa, who is forgetful, yet kind. In reality he isn't, but he is used to a working environment (public servants) that is much slooooower. Manyana-manyana etc. It is frustrating for my team to cooperate: the PO lives in a non-prioritized environment, and everyone in it has learned the productivity technique of NGTD (Not Getting Things Done). He WANTS to, it's just that he forgets or 'sinks' somewhere along the way. We have experimented with:

    - a text file, maintained by the Scrum master (low-tech), which he broadcasts by e-mail every day
    - JIRA, our issue tracker. Turns out this is nice for programmers, but too steep for 'regular people'.

    I Googled for issue tracking webtools but came up empty-handed: all tools are aimed at IT issue tracking, instead of meeting action point tracking/planning for mere mortals. I did find TODO lists like RememberTheMilk, but they don't track comments, and - to be honest - I doubt we could get our product owner to use them (too complicated). We have three requirements:

    - Register action points, assigned to a team member and a deadline
    - Offer anyone a way to 'comment' on the progress of any action point
    - Do not build our own tool from scratch

    We do not need impressive authorization models, multi-project support, workflow, or crosslinking. Is there any trick/tool you use to assist your product owner 'fly' like the rest of the team?

    Communication before tools: I agree with the general consensus that one should not try to apply technology to a communication problem; however, in this case I am merely looking for a tool to save me time in setting up prioritized lists. I found www.thymer.com today, which may be what I am looking for. The guys are cool. It is getting rather feature-bloated though.

    Read the article

  • WPF: Improving Performance for Running on Older PCs

    - by Phil Sandler
    So, I'm building a WPF app and did a test deployment today, and found that it performed pretty poorly. I was surprised, as we are really not doing much in the way of visual effects or animations. I deployed on two machines: the fastest and the slowest that will need to run the application (the slowest PC has an Intel Celeron 1.80GHz with 2GB RAM). The application ran pretty well on the faster machine, but was choppy on the slower machine. And when I say "choppy", I mean the cursor jumped even just passing it over any open window of the app that had focus. I opened the Task Manager Performance window, and could see that the CPU usage jumped whenever the app had focus and the cursor was moving over it. If I gave focus to another (e.g. Excel), the CPU usage went back down after a second. This happened on both machines, but the choppiness was only noticeable on the slower machine. I had very limited time to tinker on the deployment machines, so didn't do a lot of detailed testing. The app runs fine on my development machine, but I also see the CPU spiking up to 10% there, just running the cursor over the window. I downloaded the WPF performance tool from MS and have been tinkering with it (on my dev machine). The docs say this about the "Frame Rate" metric in the Perforator tool: For applications without animation, this value should be near 0. The app is not doing any heavy animation, but the frame rate stays near 50 when the cursor is over any window. The screens I tested on have column headers in a grid that "highlight" and buttons that change color and appearance when scrolled over. Even moving the mouse on blank areas of the windows cause the same Frame rate and CPU usage (doesn't seem to be related to these minor animations). (Also, I am unable to figure out how to get anything but the two default tools--Perforator and Visual Profiler--installed into the WPF performance tool. That is probably a separate question). I also have Redgate's profiling tool, but I'm not sure if that can shed any light on rendering performance. So, I realize this is not an easy thing to troubleshoot without specifics or sample code (which I can't post). My questions are: What are some general things to look for (or avoid) in the code to improve performance? What steps can I take using the WPF performance tool to narrow down the problem? Is the PC spec listed above (Intel Celeron 1.80GHz with 2GB RAM) too slow to be running even vanilla WPF applications?

    Read the article

  • Jquery - Loading a page with .load and selector doesn't execute script?

    - by PirateKitten
    I'm trying to load one page into another using the .load() method. This loaded page contains a script that I want to execute when it has finished loading. I've put together a barebones example to demonstrate: Index.html: <html> <head> <title>Jquery Test</title> <script type="text/javascript" src="script/jquery-1.3.2.min.js"></script> <script type="text/javascript"> $(document).ready(function() { $('#nav a').click(function() { $('#contentHolder').load('content.html #toLoad', '', function() {}); return false; }); }); </script> </head> <body> <div id="nav"> <a href="content.html">Click me!</a> </div> <hr /> <div id="contentHolder"> Content loaded will go here </div> </body> </html> Content.html: <div id="toLoad"> This content is from content.html <div id="contentDiv"> This should fade away. </div> <script type="text/javascript"> $('#contentDiv').fadeOut('slow', function() {} ); </script> </div> When the link is clicked, the content should load and the second paragraph should fade away. However it doesn't execute. If I stick a simple alert("") in the script of content.html it doesn't execute either. However, if I do away with the #toLoad selector in the .load() call, it works fine. I am not sure why this is, as the block is clearly in the scope of the #toLoad div. I don't want to avoid using the selector, as in reality the content.html will be a full HTML page, and I'll only want a select part out of it. Any ideas? If the script from content.html was in the .load() callback, it works fine, however I obviously don't want that logic contained within index.html. I could possibly have the callback use .getScript() to load "content.html.js" afterwards and have the logic in there, that seems to work? I'd prefer to keep the script in content.html, if possible, so that it executes fine when loaded normally too. In fact, I might do this anyway, but I would like to know why the above doesn't work.
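
    This matches documented jQuery behaviour: when .load() is given a URL with a selector appended, the fetched HTML is filtered before insertion and any script blocks are stripped rather than executed; without the selector, the scripts run. One way to keep the per-page logic out of Index.html is the getScript idea mentioned at the end, sketched roughly below (the content.html.js file name and its contents are an assumption, not something from the question):

        // Index.html - load only the fragment, then pull in the fragment's behaviour
        $('#nav a').click(function () {
            $('#contentHolder').load('content.html #toLoad', function () {
                // content.html.js would hold the logic that used to sit inline,
                // e.g. $('#contentDiv').fadeOut('slow');
                $.getScript('content.html.js');
            });
            return false;
        });

    That keeps content.html free to carry its inline script for the normal, non-AJAX case, while the AJAX path explicitly loads the same behaviour from a separate file.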

    Read the article

  • .NET Free memory usage (how to prevent overallocation / release memory to the OS)

    - by Ronan Thibaudau
    I'm currently working on a website that makes heavy use of cached data to avoid roundtrips. At startup we get a "large" graph (hundreds of thousands of different kinds of objects). Those objects are retrieved over WCF and deserialized (we use protocol buffers for serialization). I'm using Red Gate's memory profiler to debug memory issues (the memory didn't seem to fit with how much memory we should need "after" we're done initializing) and end up with this report. Now what we can gather from this report is that:

    1) Most of the memory .NET allocated is free (it may have been rightfully allocated during deserialisation, but now that it's free, I'd like it to return to the OS).
    2) Memory is fragmented (which is bad, as every time I refresh the cache I need to redo the memory-hungry deserialisation process, and this in turn creates large objects that may throw an OutOfMemoryException due to fragmentation).
    3) I have no clue why the space is fragmented, because when I look at the large object heap, there are only 30 instances: 15 object[] are directly attached to the GC and totally unrelated to me, 1 is a char array also attached directly to the GC heap, and the remaining 15 are mine but are not the cause of this, as I get the same report if I comment them out in code.

    So my question is, what can I do to go further with this? I'm not really sure what to look for in debugging / tools, as it seems my memory is fragmented, but not by me, and huge amounts of free space are allocated by .NET, which I can't release. Also, please make sure you understand the question well before answering: I'm not looking for a way to free memory within .NET (GC.Collect), but to free memory that is already free in .NET to the system, as well as to defragment said memory. Note that a slow solution is fine; if it's possible to manually defragment the large heap I'd be all for it, as I can call it at the end of RefreshCache and it's OK if it takes 1 or 2 seconds to run. Thanks for your help!

    A few notes I forgot:

    1) The project is a .NET 2.0 website; I get the same results running it in a .NET 4 pool, idem if I run it in a .NET 4 pool and convert it to .NET 4 and recompile.
    2) These are results of a release build, so a debug build cannot be the issue.
    3) And this is probably quite important: I do not get these issues at all in the webdev server, only in IIS. In the webdev server I get memory consumption rather close to my actual consumption (well, more, but not 5-10X more!).

    Read the article

  • how to make DIV show up on top of other Divs

    - by feelexit
    I upload my code on jsFiddle, you can see it there. http://jsfiddle.net/SrT2U/2/ When you click on the link, the hidden FAQ section will show up, and it will push other divs down. But that is not what I want, I need all other divs stay where they are, and my hidden FAQ section just float on the top. Not sure how to do it. Not even sure if this should be done in HTML, CSS or jQuery. My jQuery code: $(function(){ $(".OpenTopMessage").click(function () { $("#details").slideToggle("slow"); }); }); HTML code: <div style="border: 1px solid #000;"> <span>link</span> <span>&nbsp;&nbsp;|&nbsp;&nbsp;</span> <span>link</span> <span>&nbsp;&nbsp;|&nbsp;&nbsp;</span> <span>link</span> <span>&nbsp;&nbsp;|&nbsp;&nbsp;</span> <span>link</span> <span>&nbsp;&nbsp;|&nbsp;&nbsp;</span> <span>link</span> <span>&nbsp;&nbsp;|&nbsp;&nbsp;</span> </div> <div id="faqBox"> <table width="100%"> <tr><td><a href="#" id="openFAQ" class="OpenTopMessage">this is hte faq section</a></td> </tr> </table> <div id="details" style="display:none"> <br/><br/><br/><br/><br/><br/><br/><br/> the display style property is set to none to ensure that the element no longer affects the layout of the page <br/><br/><br/><br/><br/><br/><br/><br/> </div> </div> <br/><br/> <div style="background:#c00;">other stuff heren the height reaches 0 after a hiding animation, the display style property is set to </div>
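
    What usually keeps the lower divs from being pushed down is taking the hidden block out of the normal document flow: make #faqBox a positioning context and absolutely position #details inside it. A rough sketch using the question's ids (the width, offsets, z-index and colours are placeholder values, and the same rules could just as well live in the stylesheet instead of jQuery):

        // Position #details absolutely so toggling it no longer reflows the divs below.
        $('#faqBox').css('position', 'relative');      // anchor for the absolute child
        $('#details').css({
            position: 'absolute',                      // taken out of normal flow
            top: '100%',                               // opens just below the FAQ link row
            left: 0,
            width: '100%',
            zIndex: 10,                                // stack above the following divs
            background: '#fff',
            border: '1px solid #000'
        });

        $('.OpenTopMessage').click(function () {
            $('#details').slideToggle('slow');
        });

    With the element out of flow, slideToggle only changes its own box, so the red div underneath stays where it is while the FAQ panel slides over it.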

    Read the article

  • Commitment to Zend Framework - any arguments against?

    - by Pekka
    I am refurbishing a big CMS that I have been working on for quite a number of years now. The product itself is great, but some components, the Database and translation classes for example, need urgent replacing - partly self-made as far back as 2002, grown into a bit of a chaos over time, and might have trouble surviving a security audit. So, I've been looking closely at a number of frameworks (or, more exactly, component Libraries, as I do not intend to change the basic structure of the CMS) and ended up with liking Zend Framework the best. They offer a solid MVC model but don't force you into it, and they offer a lot of professional components that have obviously received a lot of attention (Did you know there are multiple plurals in Russian, and you can't translate them using a simple ($number == 0) or ($number > 1) switch? I didn't, but Zend_Translate can handle it. Just to illustrate the level of thorougness the library seems to have been built with.) I am now literally at the point of no return, starting to replace key components of the system by the Zend-made ones. I'm not really having second thoughts - and I am surely not looking to incite a flame war - but before going onward, I would like to step back for a moment and look whether there is anything speaking against tying a big system closely to Zend Framework. What I like about Zend: As far as I can see, very high quality code Extremely well documented, at least regarding introductions to how things work (Haven't had to use detailed API documentation yet) Backed by a company that has an interest in seeing the framework prosper Well received in the community, has a considerable user base Employs coding standards I like Comes with a full set of unit tests Feels to me like the right choice to make - or at least, one of the right choices - in terms of modern, professional PHP development. I have been thinking about encapsulating and abstracting ZF's functionality into own classes to be able to switch frameworks more easily, but have come to the conclusion that this would not be a good idea because: it would be an unnecessary level of abstraction it could cost performance the big advantage of using a framework - the existence of a developer base that is familiar with its components - would partly be cancelled out therefore, the commitment to ZF would be a deep one. Thus my question: Is there anything substantial speaking against committing to the Zend Framework? Do you have insider knowledge of plans of Zend Inc.'s to go evil in 2011, and make it a closed source library? Is Zend Inc. run by vampires? Are there conceptual flaws in the code base you start to notice when you've transitioned all your projects to it? Is the appearance of quality code an illusion? Does the code look good, but run terribly slow on anything below my quad-core workstation?

    Read the article

  • Execution plan issue requires reset on SQL Server 2005, how to determine cause?

    - by Tony Brandner
    We have a web application that delivers training to thousands of corporate students running on top of SQL Server 2005. Recently, we started seeing that a single specific query in the application went from 1 second to about 30 seconds in terms of execution time. The application started throwing timeouts in that area. Our first thought was that we may have incorrect indexes, so we reviewed the tables and indexes. However, similar queries elsewhere in the application also run quickly. Reviewing the indexes showed us that they were configured as expected. We were able to narrow it down to a single query, not a stored procedure. Running this query in SQL Studio also runs quickly. We tried running the application in a different server environment. So a different web server with the same query, parameters and database. The query still ran slow. The query is a fairly large one related to determining a student's current list of training. It includes joins and left joins on a dozen tables and subqueries. A few of the tables are fairly large (hundreds of thousands of rows) and some of the other tables are small lookup tables. The query uses a grouping clause and a few where conditions. A few of the tables are quite active and the contents change often but the volume of added rows doesn't seem extreme. These symptoms led us to consider the execution plan. First off, as soon as we reset the execution plan cache with the SQL command 'DBCC FREEPROCCACHE', the problem went away. Unfortunately, the problem started to reoccur within a few days. The problem has continued to plague us for awhile now. It's usually the same query, but we did appear to see the problem occur in another single query recently. It happens enough to be a nuisance. We're having a heck of a time trying to fix it since we can't reproduce it in any other environment other than production. I have downloaded the High Availability guide from Red Gate and I read up more on execution plans. I hope to run the profiler on the live server, but I'm a bit concerned about impact. I would like to ask - what is the best way to figure out what is triggering this problem? Has anyone else seen this same issue?

    Read the article

  • Improving Javascript Load Times - Concatenation vs Many + Cache

    - by El Yobo
    I'm wondering which of the following is going to result in better performance for a page which loads a large amount of javascript (jQuery + jQuery UI + various other javascript files). I have gone through most of the YSlow and Google Page Speed stuff, but am left wondering about a particular detail. A key thing for me here is that the site I'm working on is not on the public net; it's a business-to-business platform where almost all users are repeat visitors (and therefore with caches of the data, which is something that YSlow assumes will not be the case for a large number of visitors). First up, the standard approach recommended by tools such as YSlow is to concatenate it, compress it, and serve it up in a single file loaded at the end of your page. This approach sounds reasonably effective, but I think that a key part of the reasoning here is to improve performance for users without cached data. The system I currently have is something like this:

    * All javascript files are compressed and loaded at the bottom of the page
    * All javascript files have far-future cache expiration dates, so will remain (for most users) in the cache for a long time
    * Pages only load the javascript files that they require, rather than loading one monolithic file, most of which will not be required

    Now, my understanding is that, if the cache expiration date for a javascript file has not been reached, then the cached version is used immediately; there is no HTTP request sent to the server at all. If this is correct, I would assume that having multiple script tags is not causing any performance penalty, as I'm still not having any additional requests on most pages (recalling from above that almost all users have populated caches). In addition to this, not loading the JS means that the browser doesn't have to interpret or execute all this additional code which it isn't going to need; as a B2B application, most of our users are unfortunately stuck with IE6 and its painfully slow JS engine. Another benefit is that, when code changes, only the affected files need to be fetched again, rather than the whole set (granted, it would only need to be fetched once, so this is not so much of a benefit). I'm also looking at using LabJS to allow for parallel loading of the JS when it's not cached. So, what do people think is a better approach? In a similar vein, what do you think about a similar approach to CSS - is monolithic better?
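
    For the parallel-loading part, here is a rough sketch of the LABjs pattern mentioned above; scripts download in parallel, while .wait() preserves execution order where one file depends on another. The file names and initPage() entry point are placeholders, not names from the question:

        // Assumes LAB.min.js is already on the page; file names are placeholders.
        $LAB
            .script("jquery.min.js").wait()          // later files depend on jQuery
            .script("jquery-ui.min.js")
            .script("app-common.js").wait()
            .script("page-specific.js").wait(function () {
                initPage();                          // hypothetical per-page entry point
            });

    Combined with far-future expiry headers, renaming a file when its contents change (for example app-common.2.js) keeps the "only re-fetch what changed" benefit without relying on conditional requests.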

    Read the article

  • casting doubles to integers in order to gain speed

    - by antirez
    Hello all, in Redis (http://code.google.com/p/redis) there are scores associated with elements, in order to retrieve the elements sorted. These scores are doubles, even if many users actually sort by integers (for instance unix times). When the database is saved we need to write these doubles to disk. This is what is used currently:

        snprintf((char*)buf+1, sizeof(buf)-1, "%.17g", val);

    Additionally, infinity and not-a-number conditions are checked in order to also represent them in the final database file. Unfortunately converting a double into its string representation is pretty slow, while we have a function in Redis that converts an integer into a string representation in a much faster way. So my idea was to check if a double could be cast to an integer without loss of data, and then use that function to turn the integer into a string if this is true. For this to provide a good speedup, of course, the test for integer "equivalence" must be fast. So I used a trick that is probably undefined behavior but that worked very well in practice. Something like that:

        double x = ... some value ...
        if (x == (double)((long long)x))
            use_the_fast_integer_function((long long)x);
        else
            use_the_slow_snprintf(x);

    In my reasoning the double cast above converts the double into a long long, and then back into a double. If the range fits, and there is no decimal part, the number will survive the conversion and will be exactly the same as the initial number. As I wanted to make sure this will not break things on some system, I joined #c on freenode and I got a lot of insults ;) So I'm now trying here. Is there a standard way to do what I'm trying to do without going outside ANSI C? Otherwise, is the above code supposed to work on all the POSIX systems that Redis currently targets? That is, archs where Linux / Mac OS X / *BSD / Solaris are running nowadays? What I can add in order to make the code saner is an explicit check for the range of the double before trying the cast at all. Thank you for any help.

    Read the article

  • Speed up a web service for auto complete and avoid too many method calls.

    - by jphenow
    So I've got my jquery autocomplete 'working,' but its a little fidgety since I call the webservice method each time a keydown() fires so I get lots of methods hanging and sometimes to get the "auto" to work I have to type it out and backspace a bit because i'm assuming it got its return value a little slow. I've limited the query results to 8 to mininmize time. Is there anything i can do to make this a little snappier? This thing seems near useless if I don't get it a little more responsive. javascript $("#clientAutoNames").keydown(function () { $.ajax({ type: "POST", url: "WebService.asmx/LoadData", data: "{'input':" + JSON.stringify($("#clientAutoNames").val()) + "}", contentType: "application/json; charset=utf-8", dataType: "json", success: function (data) { if (data.d != null) { var serviceScript = data.d; } $("#autoNames").html(serviceScript); $('#clientAutoNames').autocomplete({ minLength: 2, source: autoNames, delay: 100, focus: function (event, ui) { $('#project').val(ui.item.label); return false; }, select: function (event, ui) { $('#clientAutoNames').val(ui.item.label); $('#projectid').val(ui.item.value); $('#project-description').html(ui.item.desc); pkey = $('#project-id').val; return false; } }) .data("autocomplete")._renderItem = function (ul, item) { return $("<li></li>") .data("item.autocomplete", item) .append("<a>" + item.label + "<br>" + item.desc + "</a>") .appendTo(ul); } } }); }); WebService.asmx <WebMethod()> _ Public Function LoadData(ByVal input As String) As String Dim result As String = "<script>var autoNames = [" Dim sqlOut As Data.SqlClient.SqlDataReader Dim connstring As String = *Datasource* Dim strSql As String = "SELECT TOP 2 * FROM v_Clients WHERE (SearchName Like '" + input + "%') ORDER BY SearchName" Dim cnn As Data.SqlClient.SqlConnection = New Data.SqlClient.SqlConnection(connstring) Dim cmd As Data.SqlClient.SqlCommand = New Data.SqlClient.SqlCommand(strSql, cnn) cnn.Open() sqlOut = cmd.ExecuteReader() Dim c As Integer = 0 While sqlOut.Read() result = result + "{" result = result + "value: '" + sqlOut("ContactID").ToString() + "'," result = result + "label: '" + sqlOut("SearchName").ToString() + "'," 'result = result + "desc: '" + title + " from " + company + "'," result = result + "}," End While result = result + "];</script>" sqlOut.Close() cnn.Close() Return result End Function I'm sure I'm just going about this slightly wrong or not doing a better balance of calls or something. Greatly appreciated!
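
    Rather than issuing a web-service call on every keydown and re-creating the widget each time, jQuery UI's autocomplete can be wired up once with a remote source; its minLength and delay options then do the throttling for you. The sketch below assumes the service is changed to return a JSON array of {label, value, desc} objects instead of a <script> string - that return-format change is not in the original code:

        $('#clientAutoNames').autocomplete({
            minLength: 2,
            delay: 300,                                  // wait for a pause in typing
            source: function (request, response) {
                $.ajax({
                    type: 'POST',
                    url: 'WebService.asmx/LoadData',
                    data: JSON.stringify({ input: request.term }),
                    contentType: 'application/json; charset=utf-8',
                    dataType: 'json',
                    success: function (data) {
                        // assumes data.d is already a parsed array of suggestion objects
                        response(data.d);
                    },
                    error: function () { response([]); } // keep the widget from hanging
                });
            },
            select: function (event, ui) {
                $('#projectid').val(ui.item.value);
                $('#project-description').html(ui.item.desc);
                return false;
            }
        });

    Separately, LoadData concatenates the input straight into the WHERE clause; switching to a parameterized query is worth doing regardless of the speed issue.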

    Read the article

  • scrollTo (jQuery) won't work in firefox

    - by William
    For some reason, firefox seems to ignore my scrollTo function even though it works in chrome and safari. Here's an example link: http://blog.rainbird.me/post/2358248459/blowholes-are-awesome Chrome and Safari will automatically scroll to the top of the image (with an offset of 20 pixels) It doesn't work in firefox. I'm baffled! code: $(document).ready(function() { $(".photoShell img").lazyload({ placeholder: "http://william.rainbird.me/boston-polaroid/white.gif", threshold: 200 }); window.viewport = { height: function() { return $(window).height(); }, width: function() { return $(window).width(); }, scrollTop: function() { return $(window).scrollTop(); }, scrollLeft: function() { return $(window).scrollLeft(); } }; $(".photoShell img").hide(); $(".photoShell .caption").hide(); $(".photoShell img").load(function() { var maxWidth = viewport.width() - 40; // Max width for the image if(maxWidth > 960){ maxWidth = 960; } var maxHeight = viewport.height() - 50; // Max height for the image var ratio = 0; // Used for aspect ratio var width = $(this).width(); // Current image width var height = $(this).height(); // Current image height // Check if the current width is larger than the max if(width > maxWidth){ ratio = maxWidth / width; // get ratio for scaling image $(this).css("width", maxWidth); // Set new width $(this).css("height", height * ratio); // Scale height based on ratio height = height * ratio; // Reset height to match scaled image width = width * ratio; // Reset width to match scaled image } // Check if current height is larger than max if(height > maxHeight){ ratio = maxHeight / height; // get ratio for scaling image $(this).css("height", maxHeight); // Set new height $(this).css("width", width * ratio); // Scale width based on ratio width = width * ratio; // Reset width to match scaled image } $(this).parents('div.photoShell').css("width", $(this).width() + 22); $(this).parents('div.photoShell').addClass('loaded'); $(this).next(".caption").show(); var scrollNum = $(this).parents('div.photoShell').offset().top; $.scrollTo(scrollNum - 20, {duration: 700, axis:"y"}); $(this).fadeIn("slow"); }).each(function() { // trigger the load event in case the image has been cached by the browser if(this.complete) $(this).trigger('load'); });
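
    If the scrollTo plugin is the part Firefox disagrees with, one plugin-free sketch is to animate scrollTop on both html and body, since Gecko scrolls the document through the html element while WebKit uses body (the offset and duration below are taken from the question):

        var scrollNum = $(this).parents('div.photoShell').offset().top;

        // Animate both roots; whichever one actually scrolls the document wins and
        // the other is a no-op, so this behaves the same in Firefox, Chrome and Safari.
        $('html, body').stop().animate({ scrollTop: scrollNum - 20 }, 700);

    This is only a sketch of the usual cross-browser workaround; it assumes the rest of the image load handler stays as it is.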

    Read the article

  • manipulate variable made up of html before adding it to the dom (new in jQuery 1.4???)

    - by pedalpete
    I thought I had seen this in the first announcement of jQuery 1.4, but can't seem to find anything now. I have a calendar table which is built dynamically from a json ajax response. The table is built in a variable called putHtml. Currently, once the table is added to the DOM, I run a showEvents function which takes each event and adds it to the appropriate cell in the table. Unfortunately, when I have 100 events, that means I am updating the DOM 100 seperate times. Which is getting rather slow. I use the showEvents function to add events dynamically, so it would be really nice if I could just use the same function, and specify to look in the DOM for the cell to add the event to, or look in the variable (assuming I've got it right, and you can actually do this with jQuery). The code I use currenlty is this jQuery('div#calendars').append('putHtml.join('')); for(var e in thisCal.events){ showEvent(thisCal.events[e]); } What I had attempted to do instead was for(var e in thisCal.events){ showEvent(thisCal.events[e],putHtml); } jQuery('div#calendars').append('putHtml.join('')); the showEvents function looks like this function showEvents(event){ var eventDate=event.date; var eventTime=event.time; var eventGroup=event.group; var eventName=event.name; var eventType=event.type; var whereEvent=jQuery('div.a'+eventDate, 'table.'+eventGroup); var putEvent='<div class="event" id="a+'eventDate+'_'+eventTime+'">'+eventName+'</div>' jQuery(whereEvent, 'div#calendar').append(putEvent); if(eventType2){ jQuery(whereEvent, 'div#listings').append(putEvent); } } when attempting to manipulate the variable putHtml before adding to the dom, I was passing putHtml into the showEvent function, so instead of '(whereEvent, 'div#calendar'), I had (whereEvent, putHtml), but that didn't work. of course, the other method to accomplish this would be that when I make each cell, I iterate over the events json, and apply the appropriate html to the cell at the time, but that means repetitively running over the entire json in order to get the event to put in the cell. Is there another/better way to do something like this?
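
    The usual way to make this work is to turn putHtml into a detached jQuery fragment first; selectors can then run against that fragment (passed as a context) exactly as they do against the document, and the whole calendar touches the DOM only once. A rough sketch reusing the question's names, with showEvents simplified to its core - whether it needs further changes for the listings table is left open:

        // Build a detached DOM fragment from the generated markup.
        var $calendar = jQuery(putHtml.join(''));

        // Let showEvents search inside either the fragment or the live document.
        function showEvents(event, context) {
            var whereEvent = jQuery('div.a' + event.date, context || document);
            var putEvent = '<div class="event" id="a' + event.date + '_' + event.time + '">'
                         + event.name + '</div>';
            whereEvent.append(putEvent);
        }

        for (var e in thisCal.events) {
            showEvents(thisCal.events[e], $calendar);    // no DOM touched yet
        }

        jQuery('div#calendars').append($calendar);       // single DOM insertion

    The fragment approach is essentially what jQuery 1.4's improved fragment handling makes cheap: all the per-event appends happen in memory, and the document reflows once.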

    Read the article

  • Getting AJAX data into a PHP file

    - by Max Torstensson
    I'm trying to build a login system with AJAX and PHP. I use a login view where I collect the data with AJAX, which is then posted to my doLogin.php file. My problem is that the PHP file never seems to receive any AJAX data once I build it into a class and a function.

    VIEW:

        public function DoLoginBox() {
            // inloggning form-tagg...
            return '<p>&nbsp;</p>
            <div id="content">
                <h1>Login Form</h1>
                <form id="form1" name="form1" action="Handler/doLogin.php" method="post">
                    <p>
                        <label for="username">Username: </label>
                        <input type="text" name="username" id="username" />
                    </p>
                    <p>
                        <label for="password">Password: </label>
                        <input type="password" name="password" id="password" />
                    </p>
                    <p>
                        <input type="submit" id="login" name="login" />
                    </p>
                </form>
                <div id="message"></div>
            </div>';
        }

    AJAX:

        <script type="text/javascript">
        $(document).ready(function() {
            $("#login").click(function() {
                var action = $("#form1").attr('action');
                var form_data = {
                    username: $("#username").val(),
                    password: $("#password").val(),
                    is_ajax: 1
                };
                $.ajax({
                    type: "POST",
                    url: action,
                    data: form_data,
                    success: function(response) {
                        if(response == 'success')
                            $("#form1").slideUp('slow', function() {
                                $("#message").html("<p class='success'>You have logged in successfully!</p>");
                            });
                        else
                            $("#message").html("<p class='error'>Invalid username and/or password.</p>");
                    }
                });
                return false;
            });
        });
        </script>

    PHP:

        <?php
        require_once ("UserHandler.php");

        class DoLogingHandler {
            public function Login() {
                $is_ajax = !empty($_REQUEST['is_ajax']);
                if(isset($is_ajax) && $is_ajax) {
                    $username = $_REQUEST['username'];
                    $password = $_REQUEST['password'];
                    $UserHandler = new UserHandler();
                    $UserHandler->controllDB($username, $password);
                    if($username == 'demo' && $password == 'demo') {
                        echo "success";
                    }
                }
            }
        }

        $DoLogingHandler = new DoLogingHandler();
        $DoLogingHandler->Login();
        ?>
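
    A quick way to narrow down where the data goes missing is to post the serialized form and look at the raw response before comparing it; stray whitespace or PHP notices printed before "success" are a common reason the comparison never matches. This is only a debugging sketch using the same ids as the question:

        $('#form1').submit(function () {
            $.post(this.action, $(this).serialize() + '&is_ajax=1', function (response) {
                // Inspect exactly what doLogin.php sent back (watch for stray whitespace
                // or error notices printed before "success").
                alert('[' + response + ']');

                if ($.trim(response) == 'success') {
                    $('#form1').slideUp('slow', function () {
                        $('#message').html("<p class='success'>You have logged in successfully!</p>");
                    });
                } else {
                    $('#message').html("<p class='error'>Invalid username and/or password.</p>");
                }
            });
            return false;   // keep the browser from doing a normal submit
        });

    Binding to the form's submit event instead of the button's click also covers submits triggered by the Enter key, which the original click handler does not intercept.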

    Read the article

  • Yet another Memory Leak Issue (memory is still gone when program terminates)- C program on SLES

    - by user1426181
    I run my C program on Suse Linux Enterprise that compresses several thousand large files (between 10MB and 100MB in size), and the program gets slower and slower as the program runs (it's running multi-threaded with 32 threads on a Intel Sandy Bridge board). When the program completes, and it's run again, it's still very slow. When I watch the program running, I see that the memory is being depleted while the program runs, which you would think is just a classic memory leak problem. But, with a normal malloc()/free() mismatch, I would expect all the memory to return when the program terminates. But, most of the memory doesn't get reclaimed when the program completes. The free or top command shows Mem: 63996M total, 63724M used, 272M free when the program is slowed down to a halt, but, after the termination, the free memory only grows back to about 3660M. When the program is rerun, the free memory is quickly used up. The top program only shows that the program, while running, is using at most 4% or so of the memory. I thought that it might be a memory fragmentation problem, but, I built a small test program that simulates all the memory allocation activity in the program (many randomized aspects were built in - size/quantity), and it always returns all the memory upon completion. So, I don't think that's it. Questions: Can there be a malloc()/free() mismatch that will lose memory permanently, i.e. even after the process completes? What other things in a C program (not C++) can cause permanent memory loss, i.e. after the program completes, and even the terminal window closes? Only a reboot brings the memory back. I've read other posts about files not being closed causing problems, but, I don't think I have that problem. Is it valid to be looking at top and free for the memory statistics, i.e. do they accurately describe the memory situation? They do seem to correspond to the slowness of the program. If the program only shows a 4% memory usage, will something like valgrind find this problem?

    Read the article

  • Robust way to save/load objects with dependencies?

    - by mrteacup
    I'm writing an Android game in Java and I need a robust way to save and load application state quickly. The question seems to apply to most OO languages. To understand what I need to save: I'm using a Strategy pattern to control my game entities. The idea is I have a very general Entity class which e.g. stores the location of a bullet/player/enemy and I then attach a Behaviour class that tells the entity how to act: class Entiy { float x; float y; Behavior b; } abstract class Behavior { void update(Entity e); {} // Move about at a constant speed class MoveBehavior extends Behavior { float speed; void update ... } // Chase after another entity class ChaseBehavior extends Behavior { Entity target; void update ... } // Perform two behaviours in sequence class CombineBehavior extends Behavior { Behaviour a, b; void update ... } Essentially, Entity objects are easy to save but Behaviour objects can have a semi-complex graph of dependencies between other Entity objects and other Behaviour objects. I also have cases where a Behaviour object is shared between entities. I'm willing to change my design to make saving/loading state easier, but the above design works really well for structuring the game. Anyway, the options I've considered are: Use Java serialization. This is meant to be really slow in Android (I'll profile it sometime). I'm worried about robustness when changes are made between versions however. Use something like JSON or XML. I'm not sure how I would cope with storing the dependencies between objects however. Would I have to give each object a unique ID and then use these IDs on loading to link the right objects together? I thought I could e.g. change the ChaseBehaviour to store a ID to an entity, instead of a reference, that would be used to look up the Entity before performing the behaviour. I'd rather avoid having to write lots of loading/saving code myself as I find it really easy to make mistakes (e.g. forgetting to save something, reading things out in the wrong order). Can anyone give me any tips on good formats to save to or class designs that make saving state easier?

    Read the article

  • Database Schema Versioning Strategies

    - by Jack Ryan
    I work on a project that uses a reasonably large database, the live version weighing in at somewhere around 60-80GB. The live database is the only real definitive source of our schema, and because of its size duplicating this database is too slow to be done often. This means we have ended up developing our database schema in a pretty ad hoc way, using sql compare to migrate changes from dev dbs to the live system, and only wiping our dev dbs every month or two. I am hoping to get some pointers on how to improve our database development work flow so that we have a little more control. Some things to think about: Currently nobody is really in charge of the database schema, all developers can change it if they need to, though generally these decisions are talked about before they are done. There are stored procedures, functions, and views in the database. These should probably be dumped to files so they can be reloaded on every build. Schema changes should probably be checked in as scripts. We have started to do this recently. However all our scripts must then be numbered (because there may be dependencies between them), and must be re runnable (because our build script currently runs them all in order). This makes them hard to read because they are full of conditionals that check whether tables or columns already exist. This is a step that is often forgotten by developers. Getting a new database should be quick and easy. This is currently a big problem, it takes several hours to get a copy of last nights backup and restore it onto a dev machine. Some mechanism needs to be in place to allow developers to update static data. We have tables that contain data that is never updated through the application, but does potentially need to be changed when we do a new release (often this drives dropdowns). The whole thing needs to be runnable as part of a build script. Are there any tools that can be used to help to do this? Eventually I would like to be at a point where a new DB can be built from scratch without copying any data from the live system. I don't mind writing some scripts to glue all the steps together but each part should be easily editable so that we continue to use it rather than make changes directly on DBs.

    Read the article

  • How can I represent a line of music notes in a way that allows fast insertion at any index?

    - by chairbender
    For "fun", and to learn functional programming, I'm developing a program in Clojure that does algorithmic composition using ideas from this theory of music called "Westergaardian Theory". It generates lines of music (where a line is just a single staff consisting of a sequence of notes, each with pitches and durations). It basically works like this: Start with a line consisting of three notes (the specifics of how these are chosen are not important). Randomly perform one of several "operations" on this line. The operation picks randomly from all pairs of adjacent notes that meet a certain criteria (for each pair, the criteria only depends on the pair and is independent of the other notes in the line). It inserts 1 or several notes (depending on the operation) between the chosen pair. Each operation has its own unique criteria. Continue randomly performing these operations on the line until the line is the desired length. The issue I've run into is that my implementation of this is quite slow, and I suspect it could be made faster. I'm new to Clojure and functional programming in general (though I'm experienced with OO), so I'm hoping someone with more experience can point out if I'm not thinking in a functional paradigm or missing out on some FP technique. My current implementation is that each line is a vector containing maps. Each map has a :note and a :dur. :note's value is a keyword representing a musical note like :A4 or :C#3. :dur's value is a fraction, representing the duration of the note (1 is a whole note, 1/4 is a quarter note, etc...). So, for example, a line representing the C major scale starting on C3 would look like this: [ {:note :C3 :dur 1} {:note :D3 :dur 1} {:note :E3 :dur 1} {:note :F3 :dur 1} {:note :G3 :dur 1} {:note :A4 :dur 1} {:note :B4 :dur 1} ] This is a problematic representation because there's not really a quick way to insert into an arbitrary index of a vector. But insertion is the most frequently performed operation on these lines. My current terrible function for inserting notes into a line basically splits the vector using subvec at the point of insertion, uses conj to join the first part + notes + last part, then uses flatten and vec to make them all be in a one-dimensional vector. For example if I want to insert C3 and D3 into the the C major scale at index 3 (where the F3 is), it would do this (I'll use the note name in place of the :note and :dur maps): (conj [C3 D3 E3] [C3 D3] [F3 G3 A4 B4]), which creates [C3 D3 E3 [C3 D3] [F3 G3 A4 B4]] (vec (flatten previous-vector)) which gives [C3 D3 E3 C3 D3 F3 G3 A4 B4] The run time of that is O(n), AFAIK. I'm looking for a way to make this insertion faster. I've searched for information on Clojure data structures that have fast insertion but haven't found anything that would work. I found "finger trees" but they only allow fast insertion at the start or end of the list. Edit: I split this into two questions. The other part is here.

    Read the article

  • Jquery help : How to implement a Previous/Next & Play/Pause toggle for this script

    - by rameshelamathi
    (function($){ // Creating the sweetPages jQuery plugin: $.fn.sweetPages = function(opts){ // If no options were passed, create an empty opts object if(!opts) opts = {}; var resultsPerPage = opts.perPage || 3; var swDiv = opts.getSwDiv || "swControls"; // The plugin works best for unordered lists, althugh ols would do just as well: var ul = this; var li = ul.find('li'); li.each(function(){ // Calculating the height of each li element, and storing it with the data method: var el = $(this); el.data('height',el.outerHeight(true)); }); // Calculating the total number of pages: var pagesNumber = Math.ceil(li.length/resultsPerPage); // If the pages are less than two, do nothing: if(pagesNumber<2) return this; // Creating the controls div: //var swControls = $('<div class="swControls">'); var swControls = $('<div class='+swDiv+'>'); for(var i=0;i<pagesNumber;i++) { // Slice a portion of the lis, and wrap it in a swPage div: li.slice(i*resultsPerPage,(i+1)*resultsPerPage).wrapAll('<div class="swPage" />'); // Adding a link to the swControls div: swControls.append('<a href="" class="swShowPage">'+(i+1)+'</a>'); } ul.append(swControls); var maxHeight = 0; var totalWidth = 0; var swPage = ul.find('.swPage'); swPage.each(function(){ // Looping through all the newly created pages: var elem = $(this); var tmpHeight = 0; elem.find('li').each(function(){tmpHeight+=$(this).data('height');}); if(tmpHeight>maxHeight) maxHeight = tmpHeight; totalWidth+=elem.outerWidth(); elem.css('float','left').width(ul.width()); }); swPage.wrapAll('<div class="swSlider" />'); // Setting the height of the ul to the height of the tallest page: //ul.height(maxHeight); var swSlider = ul.find('.swSlider'); swSlider.append('<div class="clear" />').width(totalWidth); var hyperLinks = ul.find('a.swShowPage'); hyperLinks.click(function(e){ // If one of the control links is clicked, slide the swSlider div // (which contains all the pages) and mark it as active: $(this).addClass('active').siblings().removeClass('active'); swSlider.stop().animate({'margin-left':-(parseInt($(this).text())-1)*ul.width()},'slow'); e.preventDefault(); }); // Mark the first link as active the first time this code runs: hyperLinks.eq(0).addClass('active'); // Center the control div: swControls.css({ 'right':'10%', 'margin-left':-swControls.width()/2 }); return this; }})(jQuery);
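
    To answer the Previous/Next and Play/Pause part of the title, one approach is to layer extra controls on top of the plugin's existing .swShowPage links and simply re-trigger their click handlers. The sketch below is not part of sweetPages; it assumes the page also contains #prev, #next and #playToggle elements, and the 4000 ms interval is an arbitrary choice:

        var autoTimer = null;

        function showRelativePage(offset) {
            var links   = $('a.swShowPage');
            var current = links.filter('.active');
            var index   = links.index(current[0]);
            var target  = (index + offset + links.length) % links.length; // wrap around
            links.eq(target).click();   // reuse the plugin's own click handler to slide
        }

        $('#prev').click(function (e) { e.preventDefault(); showRelativePage(-1); });
        $('#next').click(function (e) { e.preventDefault(); showRelativePage(1); });

        $('#playToggle').click(function (e) {
            e.preventDefault();
            if (autoTimer) {                       // currently playing -> pause
                clearInterval(autoTimer);
                autoTimer = null;
                $(this).text('Play');
            } else {                               // currently paused -> play
                autoTimer = setInterval(function () { showRelativePage(1); }, 4000);
                $(this).text('Pause');
            }
        });

    Driving everything through the existing .swShowPage links keeps the sliding animation and the active-page highlighting in one place, so the extra controls never get out of sync with the plugin's own state.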

    Read the article

  • Spinning a circle in J2ME using a Canvas.

    - by JohnQPublic
    Hello all! I have a problem where I need to make a multi-colored wheel spin using a Canvas in J2ME. What I need to do is have the user increase the speed of the spin or slow the spin of the wheel. I have it mostly worked out (I think) but can't think of a way for the wheel to spin without causing my cellphone to crash. Here is what I have so far, it's close but not exactly what I need. class MyCanvas extends Canvas{ //wedgeOne/Two/Three define where this particular section of circle begins to be drawn from int wedgeOne; int wedgeTwo; int wedgeThree; int spinSpeed; MyCanvas(){ wedgeOne = 0; wedgeTwo = 120; wedgeThree = 240; spinSpeed = 0; } //Using the paint method to public void paint(Graphics g){ //Redraw the circle with the current wedge series. g.setColor(255,0,0); g.fillArc(getWidth()/2, getHeight()/2, 100, 100, wedgeOne, 120); g.setColor(0,255,0); g.fillArc(getWidth()/2, getHeight()/2, 100, 100, wedgeTwo, 120); g.setColor(0,0,255); g.fillArc(getWidth()/2, getHeight()/2, 100, 100, wedgeThree, 120); } protected void keyPressed(int keyCode){ switch (keyCode){ //When the 6 button is pressed, the wheel spins forward 5 degrees. case KEY_NUM6: wedgeOne += 5; wedgeTwo += 5; wedgeThree += 5; repaint(); break; //When the 4 button is pressed, the wheel spins backwards 5 degrees. case KEY_NUM4: wedgeOne -= 5; wedgeTwo -= 5; wedgeThree -= 5; repaint(); } } I have tried using a redraw() method that adds the spinSpeed to each of the wedge values while(spinSpeed0) and calls the repaint() method after the addition, but it causes a crash and lockup (I assume due to an infinite loop). Does anyone have any tips or ideas how I could automate the spin so you do not have the press the button every time you want it to spin? (P.S - I have been lurking for a while, but this is my first post. If it's too general or asking for too much info (sorry if it is) and I either remove it or fix it. Thank you!)

    Read the article
