Search Results

Search found 6634 results on 266 pages for 'fast fashion'.

Page 168/266

  • Testing perceived performance

    - by Josh Kelley
    I recently got a shiny new development workstation. The only disadvantage of this is that the desktop apps I'm developing now run very, very fast, and so I fear that parts of the code that would be annoyingly slow on end users' machines will go unnoticed during my testing. Is there a good way to slow down an application for testing? I've tried searching around, but all of the results I've been able to find seem pretty fiddly to set up (e.g., manually setting up a high-priority CPU-bound task on the same CPU core as the target app, or running a background process that rapidly interrupts and resumes the target app), and I don't know if the end result is actually a good representation of running on a slower computer (with its slower CPU, slower RAM, slower disk I/O...). I don't think that this is a job for a profiler; I'm interested in the user's perception of end-to-end performance rather than in where the time goes for particular operations.
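    For concreteness, a hedged sketch of the first approach mentioned above (C#, Windows-only; the core index and priority are arbitrary choices, not a recommendation): a "CPU hog" process pinned to the same core as the app under test, crudely approximating a slower CPU. It does not model slower RAM or disk I/O.

        using System;
        using System.Diagnostics;

        // Burns one CPU core so an app pinned to the same core is starved.
        // Usage: CpuHog.exe <coreIndex>
        class CpuHog
        {
            static void Main(string[] args)
            {
                int core = args.Length > 0 ? int.Parse(args[0]) : 0;
                Process self = Process.GetCurrentProcess();
                self.ProcessorAffinity = (IntPtr)(1 << core);          // run only on that core
                self.PriorityClass = ProcessPriorityClass.AboveNormal; // outrank the target app
                while (true) { } // spin forever; kill the process to stop
            }
        }

    Pin the application under test to the same core (via Task Manager or Process.ProcessorAffinity) so the hog competes with it directly.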

    Read the article

  • Same query, different execution plans

    - by A..
    Hi, I am trying to find a solution for a problem that is driving me mad... I have a query which runs very fast on a QA server but is very slow in production. I realised that they have different execution plans, so I have tried recompiling, clearing the execution plan cache, updating statistics, and checking the collation... but I still can't find what's going on. The databases the query runs against are exactly the same, and the SQL Servers have the same configuration. Any new ideas would be much appreciated. Thanks, A.
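    Two hedged diagnostics worth adding to the list above (T-SQL): session-level SET options (ARITHABORT, ANSI_NULLS, and friends) differ per connection and can push the optimizer to a different plan, and diffing the actual plans from both servers usually shows where they diverge.

        -- Run from the same kind of client the application uses, on both servers, and diff:
        DBCC USEROPTIONS;

        -- Capture the actual execution plan on each server for comparison:
        SET STATISTICS XML ON;
        -- ... run the problem query here ...
        SET STATISTICS XML OFF;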

    Read the article

  • Correct Way to Get Date Between Dates In SQL Server

    - by Chuck Haines
    I have a table in SQL Server which has a DATETIME field called Date_Printed. I am trying to get all records in the table which lie between a specified date range. Currently I am using the following SQL:

        DECLARE @StartDate DATETIME
        DECLARE @EndDate DATETIME
        SET @StartDate = '2010-01-01'
        SET @EndDate = '2010-06-18 12:59:59 PM'

        SELECT * FROM table WHERE Date_Printed BETWEEN @StartDate AND @EndDate

    I have an index on the Date_Printed column. I was wondering if this is the best way to get the rows in the table which lie between those dates, or if there is a faster way. The table has about 750,000 records in it right now and it will continue to grow. The query is pretty fast but I'd like to make it faster if possible.
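    A hedged observation and sketch (T-SQL): BETWEEN against an indexed Date_Printed already allows a range seek, so there is little raw speed to gain, but note that '12:59:59 PM' is one second before 1 PM, so rows stamped between 1 PM and midnight on the end date are excluded. A half-open range avoids guessing at the last instant of the day:

        DECLARE @StartDate DATETIME
        DECLARE @EndDate   DATETIME
        SET @StartDate = '2010-01-01'
        SET @EndDate   = '2010-06-19'  -- exclusive upper bound: start of the next day

        SELECT *
        FROM   [table]
        WHERE  Date_Printed >= @StartDate
          AND  Date_Printed <  @EndDate  -- still sargable, still seeks on the index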

    Read the article

  • What is the equivalent of WebClient in JavaScript or jQuery?

    - by kamiar3001
    I am using WebClient, but this won't work because WebClient runs server side and therefore uses a different session from the user's. What is the client-side equivalent in JavaScript or jQuery? Edit: I found a solution, but it gives me an error:

        var html = $.ajax({
            url: mp,
            //complete: hideBlocker,
            async: false
        }).responseText;
        $("#HomeView").hide();
        $("#ContentView").html(html); // this line gives the script error
        $("#ContentView").show("fast");

    The error says: SCRIPT5007: 'undefined' is null or not an object. The line it stops on is: var count = theForm.elements.length; and the browser is Microsoft Internet Explorer 9.0 beta.
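    A hedged guess at the failure, plus a sketch: if mp returns a full ASP.NET page, injecting it into #ContentView also injects the WebForms boilerplate script that expects a theForm variable, which doesn't exist in that context. jQuery's .load() can inject only a fragment of the response and drops the page's scripts (the "#main" selector here is an assumed ID for the content wrapper):

        // Fetch mp, keep only the #main fragment, discard the page's scripts:
        $("#ContentView").load(mp + " #main", function () {
            $("#HomeView").hide();
            $("#ContentView").show("fast");
        });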

    Read the article

  • Setting amount of time to pass when sending emails in a loop

    - by Obay
    Forgive me for this noob question, but is there a setting that enforces a certain amount of time (e.g. milliseconds) that has to pass between sending emails from a script? What is that setting called, and where do I change it? I'll give an example: I used to have a PHP script that sends emails like so:

        for ($i = 0; $i < count($emails); $i++) {
            mail($emails[$i], 'test', 'test');
        }

    It turned out that not all emails were sent successfully, because the script ran so fast that there was not enough time between sends, which the server required. Did I make sense?
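    If no such server setting exists, a hedged workaround sketch in PHP is to throttle in the loop itself (the 250000 µs pause is an arbitrary example value, not a known server requirement):

        foreach ($emails as $address) {
            mail($address, 'test', 'test');
            usleep(250000); // wait 0.25 s between messages
        }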

    Read the article

  • Why a stricter isolation level means better performance in SQL Server

    - by Oleg Zhylin
    When measuring performance of my query I came up with a dependency between isolation level and elapsed time that was surprising to me:

        READUNCOMMITTED - 409024
        READCOMMITTED   - 368021
        REPEATABLEREAD  - 358019
        SERIALIZABLE    - 348019

    The left column is the table hint, and the right column is elapsed time in microseconds (sys.dm_exec_query_stats.total_elapsed_time). Why does a stricter isolation level give better performance? This is a development machine and no concurrency whatsoever happens. I would expect READUNCOMMITTED to be the fastest due to less locking overhead. Update: I did measure this with DBCC DROPCLEANBUFFERS and DBCC FREEPROCCACHE issued, and Profiler confirms there are no cache hits happening. Update 2: The query in question is an OLAP one and we need to run it as fast as possible. Closing the production server off from the outside world to get the computation done is not out of the question if this gives performance benefits.

    Read the article

  • Best method to compress a JSON string in terms of performance and compression ratio

    - by Eric Yin
    I have a JSON string containing all kinds of settings, numbers, strings, etc.; the total JSON string falls fairly consistently into the 10K~50K range. I want to compress it before saving it to the database, so I wonder which compression method I should choose. I am using C# 4. I know I can choose gzip and deflate, but the compression ratio is not good (although the speed is). More specifically: compression can be a little slow (since it happens only once) but the output should be small. Decompression should be lightning fast, since decompression happens a lot. Please give some advice.
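    For reference, a minimal sketch of the built-in baseline being compared against (System.IO.Compression, available in .NET 4). Deflate decompression is very fast, which fits the read-heavy pattern described; third-party LZMA/7-Zip libraries tend to compress smaller but decompress slower, so they trade against the stated requirement.

        using System.IO;
        using System.IO.Compression;
        using System.Text;

        static class JsonBlob
        {
            public static byte[] Compress(string json)
            {
                byte[] bytes = Encoding.UTF8.GetBytes(json);
                using (var output = new MemoryStream())
                {
                    using (var deflate = new DeflateStream(output, CompressionMode.Compress))
                        deflate.Write(bytes, 0, bytes.Length);
                    return output.ToArray(); // store this blob in the database
                }
            }

            public static string Decompress(byte[] blob)
            {
                using (var input = new DeflateStream(new MemoryStream(blob), CompressionMode.Decompress))
                using (var reader = new StreamReader(input, Encoding.UTF8))
                    return reader.ReadToEnd();
            }
        }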

    Read the article

  • Scaling Image to multiple sizes for Deep Zoom

    - by AnthonyWJones
    Let's assume I have a bitmap with a square aspect ratio and a width of 2048 pixels. In order to create the set of files needed by Silverlight's DeepZoomImageTileSource, I need to scale this bitmap to 1024, then to 512, then to 256, etc., down to a 1-pixel image. There are two, I suspect naive, approaches:
    1. For each image required, scale the original full-size image to the required size. However, it seems excessive to scale the full image down to the very small sizes.
    2. Having scaled from one level to the next, discard the original image and use each successive scaled image as the source of the next smaller one. However, I suspect this would generate images in the 256-64 range with poorer fidelity than option 1.
    Note that unlike the Deep Zoom Composer, this tool is expected to act in an on-demand fashion, hence it needs to complete in a reasonable timeframe (30 seconds tops). On the plus side, I'm only creating a single multiscale image, not a pyramid of multiple high-res images. I am outside my comfort zone here; any graphics experts got any advice? Am I wrong about point 2? Is point 1 reasonably performant and I'm worrying about nothing? Option 3?
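    A hedged sketch of option 2 (successive halving) using System.Drawing: because each 2:1 downscale with a high-quality filter averages neighbouring pixels, the cumulative fidelity loss is usually small, though whether it is acceptable here is exactly the open question.

        using System;
        using System.Collections.Generic;
        using System.Drawing;
        using System.Drawing.Drawing2D;

        static class Tiler
        {
            // Yields 1024, 512, ..., 1 from a 2048-px source by repeated halving.
            public static IEnumerable<Bitmap> Pyramid(Bitmap source)
            {
                Bitmap current = source;
                while (current.Width > 1)
                {
                    int w = Math.Max(1, current.Width / 2);
                    int h = Math.Max(1, current.Height / 2);
                    var next = new Bitmap(w, h);
                    using (Graphics g = Graphics.FromImage(next))
                    {
                        g.InterpolationMode = InterpolationMode.HighQualityBicubic;
                        g.DrawImage(current, 0, 0, w, h);
                    }
                    yield return next;
                    current = next;
                }
            }
        }

    A middle path is to render the small levels (say 256 and below) from a cached mid-size level rather than from each other, bounding the accumulated error.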

    Read the article

  • Extremely slow insert from Delphi to Remote MySQL Database

    - by MarkRobinson
    Having a major hair-pulling issue with extremely slow inserts from Delphi 2010 to a remote MySQL 5.09 server. So far, I have tried: ADO using the MySQL ODBC driver, and ZeosLib v7 alpha. I have used batching and direct insert with ADO (using table access), and with Zeos I have used SQL insertion with a Query, then Table direct mode, and also cached updates in Table mode using ApplyUpdates and Commit. I have tried both technologies with compression on and off. So far I have seen pretty much the same result across the board: 7.5 records per second!!! Now, I would from this point assume that the remote server is just slow, but MySQL Workbench is amazingly fast, and the Migration Toolkit managed the initial migration very quickly (to be honest, I don't recall how quickly, which kind of means that it was quick). I'm just about to try the MyDAC components, as we already use SDAC (wish there was a multi-buy discount, or that we'd chosen UniDAC instead now!). Any ideas?
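    A hedged diagnosis and sketch: a flat ~7.5 rows/s regardless of the access layer usually points at one network round trip (and possibly one implicit transaction) per row rather than at the components themselves. Two round-trip-collapsing tactics to try, in MySQL syntax (table and column names assumed):

        -- Many rows per statement, one round trip:
        INSERT INTO mytable (col1, col2)
        VALUES (1, 'a'), (2, 'b'), (3, 'c');  -- extend to hundreds of tuples

        -- And/or one transaction around the whole batch instead of per row:
        START TRANSACTION;
        -- ... the inserts ...
        COMMIT;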

    Read the article

  • Ruby Equivalent of Python Requests Library (HTTP Client)

    - by Hartator
    There is a library in Python that I love called Requests; it is an HTTP client built on urllib3, top-notch :) (http://docs.python-requests.org/en/latest/) I am looking for something similar in Ruby. Basically, what I need is:
    - Upload-file support (multipart/form-data)
    - Easy GET/POST
    - Cookies that can be passed from a response object to a request object (to build a manual login script)
    - Stable and flexible
    - Session support (so cookies don't have to be handled manually if we don't have to)
    I've looked at Typhoeus, but the code example on the home page doesn't work (they have moved code along, and the get method is no longer directly accessible like that), so it's not starting well! :) Curb seems nice and I like curl; there is also RestClient, which seems popular, and em-http seems pretty fast according to benchmarks. There are also Patron and CurlFu, which I haven't had time to try. And of course Net::HTTP. But there doesn't seem to be one mainstream solution that everyone points to. I think a lot of people have been in my situation, and I wonder what they have chosen and why?
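    For comparison, a minimal sketch of a manual login with the standard library's Net::HTTP, illustrating exactly the cookie-juggling the question wants a library to hide (the URL, form field names, and single-cookie handling are assumptions):

        require 'net/http'
        require 'uri'

        # POST the login form and capture the session cookie:
        login = URI('http://example.com/login')
        res = Net::HTTP.post_form(login, 'user' => 'bob', 'pass' => 'secret')
        cookie = res['Set-Cookie']

        # Replay the cookie on the next request by hand:
        page = URI('http://example.com/private')
        req = Net::HTTP::Get.new(page.request_uri)
        req['Cookie'] = cookie
        body = Net::HTTP.start(page.host, page.port) { |http| http.request(req) }.body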

    Read the article

  • What's wrong with my logic here?

    - by stu
    In Java they say don't concatenate Strings; instead you should make a StringBuffer and keep adding to that, and then when you're all done, use toString() to get a String object out of it. Here's what I don't get. They say to do this for performance reasons, because concatenating strings makes lots of temporary objects. But if the goal was performance, then you'd use a language like C/C++ or assembly. The argument for using Java is that it is a lot cheaper to buy a faster processor than it is to pay a senior programmer to write fast, efficient code. So on the one hand, you're supposed to let the hardware take care of the inefficiencies, but on the other hand, you're supposed to use StringBuffers to make Java more efficient. While I see that you can do both, use Java and StringBuffers, my question is: where is the flaw in the logic that you either use a faster chip or you spend extra time writing more efficient software?
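    For concreteness, a minimal sketch of the two idioms being weighed (Java). In a loop, each += builds a brand-new String and copies everything accumulated so far, so the total work grows quadratically with the result length; that is an algorithmic difference, not a constant factor a faster chip can absorb, which is the usual answer to this dilemma.

        public class ConcatDemo {
            public static void main(String[] args) {
                String[] parts = { "al", "pha", "bet" }; // stand-in for real data

                // O(n^2): every += copies the whole accumulated string.
                String s = "";
                for (String part : parts) {
                    s += part;
                }

                // O(n): one growable buffer, one final copy.
                // (StringBuilder is the unsynchronized successor to StringBuffer.)
                StringBuilder sb = new StringBuilder();
                for (String part : parts) {
                    sb.append(part);
                }
                String result = sb.toString();

                System.out.println(s.equals(result)); // true
            }
        }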

    Read the article

  • What database works well with 200+GB of data?

    - by taw
    I've been using MySQL (with InnoDB, on Amazon RDS) because it's the sort of universal default, but it's been ridiculously under-performing, and tweaking it only delays the inevitable. The data is mostly relatively short (<1 kB each) blobs of information about 100Ms of URLs. There is (or should be; MySQL can't seem to handle it) a very high volume of inserts / updates / retrievals but few complex queries; not that complex queries wouldn't be useful, but MySQL is so slow that it's far faster to get the data out, process it locally, and cache the results somewhere. I can keep tweaking MySQL and throwing more hardware at it, but it seems increasingly futile. So what are the options? SQL/relational model/etc. optional: anything will do as long as it's fast, networked, and language-independent.

    Read the article

  • Mixing RewriteRule and ProxyPass in Apache

    - by Taylor L
    I was working on debugging an issue today related to mixing mod_proxy and mod_rewrite together, and I ended up having to use balancer://mycluster in the RewriteRule in order to stop receiving a 404 error from Apache. I have two questions: 1) Is there any other way to get the rewritten URL to go through the balancer without adding balancer://mycluster into the RewriteRule? 2) Is there a way to define all the parameters I defined in ProxyPass (stickysession=JSESSIONID|jsessionid scolonpathdelim=On lbmethod=bytraffic nofailover=Off) in either the <Proxy> block or the RewriteRule? I'm concerned that requests matching the new RewriteRule won't load-balance in the same fashion as those that go through ProxyPass (like /app1/something.do). Below are the relevant sections of the httpd.conf. I am using Apache 2.2.

        <Proxy balancer://mycluster>
            Order deny,allow
            Allow from all
            BalancerMember ajp://my.domain.com:8009 route=node1
            BalancerMember ajp://my.domain.com:8009 route=node2
        </Proxy>

        ProxyPass /app1 balancer://mycluster/app1 stickysession=JSESSIONID|jsessionid scolonpathdelim=On lbmethod=bytraffic nofailover=Off
        ProxyPassReverse /app1 ajp://my.domain.com:8009/app1
        ...

        RewriteRule ^/static/cms/image/(.*)\.(.*) balancer://mycluster/app1/$1.$2 [P,L]
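    On question 2, a hedged sketch: mod_proxy's ProxySet directive attaches the balancer parameters to the <Proxy> block itself, so both ProxyPass and [P]-flag rewrites targeting balancer://mycluster should inherit them (Apache 2.2 syntax; worth verifying against your exact build):

        <Proxy balancer://mycluster>
            ProxySet stickysession=JSESSIONID|jsessionid scolonpathdelim=On lbmethod=bytraffic nofailover=Off
            BalancerMember ajp://my.domain.com:8009 route=node1
            BalancerMember ajp://my.domain.com:8009 route=node2
        </Proxy>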

    Read the article

  • Compiling Allegro source code (C#)

    - by 7331
    I am trying to build a C# project (downloaded code) in Visual Studio Express 2008. I get the error (my translation): The type or namespace name "Allegro" could not be found, for the line using Allegro; I know the 2D graphics library Allegro, of course, but I can't find much information on how to use it in C#. It is being used for visualization in the project I am trying to compile. I also get the warning: This reference could not be resolved. The "Universal" assembly could not be found. I haven't worked with C# before, and I barely know Visual Studio Express. These are newbie mistakes, but I just need a fast solution for this problem. Could someone provide me with a short step-by-step solution?

    Read the article

  • Can you have a Dynamic Data Field which consists of a list of fields?

    - by Telos
    This is a purely theoretical question (at least until I start trying to implement it), but here goes. I wrote a web form a long time ago which has a configurable section for gathering information. Basically, for some customers there are no fields; for other customers there are up to 20 fields. I got it working by dynamically creating the fields at just the right time in the page lifecycle and going through a lot of headaches. Two years later, I need to make some pretty big updates to this web form, and there are some nifty new technologies. I've worked with ASP.NET Dynamic Data just a bit and, well, a half-crazed plan just occurred to me: the Ticket object has a one-to-many relationship to ExtendedField; we'll call that relationship Fields for brevity. Using that, the idea would be to create a FieldTemplate that dynamically generated the list of fields and displayed it. The big questions here would probably be: 1) Can a single field template resolve to multiple web controls without breaking things? 2) Can Dynamic Data handle updating/inserting multiple rows in such a fashion? 3) There was a third question I had a few minutes ago, but coworkers interrupted me and I forgot. So now the third question is: what is the third question? So basically, does this sound like it could work, or am I missing a better/more obvious solution?

    Read the article

  • How to move from untyped DataSets to POCO\LINQ2SQL in legacy application

    - by artvolk
    Good day! I have a legacy application whose data access layer consists of classes where queries are done using SqlConnection/SqlCommand and results are passed to upper layers wrapped in untyped DataSets/DataTables. Now I'm working on integrating this application into a newer one written in ASP.NET MVC 2, where LINQ2SQL is used for data access. I don't want to rewrite the fancy logic that generates the complex queries passed to SqlConnection/SqlCommand in LINQ2SQL (and don't have permission to do so), but I'd like to have the results of these queries as strongly-typed object collections instead of untyped DataSets/DataTables. The basic idea is to wrap the old data access code in a nice-looking "Model" from the ASP.NET MVC point of view. What is the fastest/easiest way of doing this?
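    A minimal sketch of one common wrapping approach (C#, .NET 3.5+): keep the legacy layer returning DataTables and project each row into a POCO with the DataSetExtensions helpers. Ticket and its column names here are assumptions:

        using System.Collections.Generic;
        using System.Data;
        using System.Linq;

        public class Ticket
        {
            public int Id { get; set; }
            public string Subject { get; set; }
        }

        public static class TicketMapper
        {
            // Wraps the untyped result without touching the legacy query logic.
            public static List<Ticket> ToTickets(DataTable table)
            {
                return table.AsEnumerable()
                            .Select(r => new Ticket
                            {
                                Id = r.Field<int>("Id"),
                                Subject = r.Field<string>("Subject")
                            })
                            .ToList();
            }
        }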

    Read the article

  • "No context-sensitive help installed" , "user32.dll" and "uxtheme.dll" AV errors on Delphi with no reason

    - by Javid
    This is very strange, guys. I wrote a simple application. When I make my commands execute fast by moving the mouse (the event is on mouse move), I experience the following errors if I run my application without the debugger (if I run it with the debugger, the application just hangs and nothing happens): 1) "No context-sensitive help installed", although I haven't used help in my app. 2) Access violation errors from the "uxtheme.dll" and "user32.dll" libraries! I think these errors happen when Windows messages are sent quickly one after another. I came across these errors a while ago in a huge application. In both applications I used the SendMessage command, but what am I doing wrong? I'm now using Delphi 2010. Has anyone ever experienced this?!

    Read the article

  • Regex takes too long to match the result

    - by Joe Ijam
    Hi all, I have this regex pattern:

        <(\d+)>(\d+\.\d+|\d{4}\-\d+\-\d+\s+\d{2}:\d{2}:\d{2})(?:\..*?)*\s+(ALER|NOTI)

    and this is my input (which should not match at all):

        <150>2010-12-29 18:11:30.883 -0700 192.168.2.145 80 192.168.2.87 2795 "-" "-" GET HTTP 192.168.2.145 HTTP/1.1 200 36200 0 1038 544 192.168.2.221 80 540 SERVER DEFAULT PASSIVE VALID /joomla/ "-" http://192.168.2.145/joomla/index.php?option=com_content&view=a be4d44e8f3986183a87991398c1c212e=1; be4d44e8f3986183a87991398c1c212e=1

    This correctly returns no match, but it takes too long to produce that result. Since I get thousands of logs/inputs per second, each log/input needs to finish very fast. Sometimes CPU usage reaches 100%. Can anyone help me solve this regex problem? Thanks
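    The slowdown has the classic shape of catastrophic backtracking: (?:\..*?)* nests an unbounded quantifier inside another, so on a non-matching line the engine tries an explosive number of ways to split the text between the two before giving up. A hedged fix sketch, assuming that group is only meant to skip a fractional-seconds tail after the timestamp, is to make the inner piece bounded:

        <(\d+)>(\d+\.\d+|\d{4}-\d+-\d+\s+\d{2}:\d{2}:\d{2})(?:\.\d+)?\s+(ALER|NOTI)

    Each character can now be consumed in essentially one way, so failure is reported quickly. If the engine supports them, an atomic group (?>...) or a possessive quantifier around the tail achieves the same effect.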

    Read the article

  • Clipboard Debugging

    - by Jake Pearson
    In the olden times of .NET 1.1, I could use the SoapFormatter to find out exactly what was getting serialized when I copied an object into the clipboard. Fast forward to 2010, and I tried to do the same trick. It turns out the SoapFormatter does not support generics. Is there an alternative way to find out exactly what binary objects are serialized into the clipboard? For example, let's say I have this class:

        public class Foo
        {
            public List<Goo> Children;
        }

    If I send an instance of it to the clipboard, I would like to take a look at what is in the clipboard to see whether its Children list was included or not. Update: I was finally able to find the over-copied field with the debugger. Visual Studio did its job.
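    A hedged alternative sketch (C#, WinForms; assumes Foo and Goo are marked [Serializable]): rather than reading raw bytes, round-trip the object through the clipboard and inspect what survived. WinForms binary-serializes the object under the given format name and deserializes it on the way back; "FooData" is an arbitrary format name:

        using System;
        using System.Collections.Generic;
        using System.Windows.Forms;

        class ClipboardProbe
        {
            [STAThread] // the clipboard requires an STA thread
            static void Main()
            {
                var foo = new Foo { Children = new List<Goo> { new Goo() } };
                Clipboard.SetData("FooData", foo);            // binary-serialized in
                var copy = (Foo)Clipboard.GetData("FooData"); // deserialized back out

                Console.WriteLine(copy.Children == null
                    ? "Children was not serialized"
                    : "Children came back with " + copy.Children.Count + " item(s)");
            }
        }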

    Read the article

  • Instant Messenger: How does gtalk/yahoo messenger populate the contact list?

    - by Owen
    Hi all, we are currently working on a small IM project which pretty much works like GTalk and Yahoo Messenger. We came across a problem that puzzled us: how do GTalk/YM populate their contact lists? Given that the user has, let's say, more or less 500 contacts, both IMs seem to load the contacts quickly and already sorted. Here are my questions (referring to either):
    1. Does it cache its contacts, e.g. saving them to a file upon exit, so that upon log-in it simply reads the contacts and displays them in its contact list?
    2. Does it always request the vCards upon log-in? Or do they have a vCard push or whatever that simply updates the contacts' profiles (like their status: presence push, available, busy, etc.)?

    Read the article

  • SSIS - Bulk Update at Database Field Level

    - by Adam
    Hello, here's our mission: receive files from clients. Each file contains anywhere from 1 to 1,000,000 records. Records are loaded to a staging area and business-rule validation is applied. Valid records are then pumped into an OLTP database in a batch fashion, with the following rules:
    1. If the record does not exist (we have a key, so this isn't an issue), create it.
    2. If the record exists, optionally update each database field. The decision is made based on one of 3 factors; I don't believe it's important what those factors are.
    Our main problem is finding an efficient method of optionally updating the data at the field level. This applies across ~12 different database tables, with anywhere from 10 to 150 fields in each table (the original DB design leaves much to be desired, but it is what it is). Our first attempt has been to introduce a table that mirrors the staging environment (1 field in staging for each system field) and contains a masking flag. The value of the masking flag represents the 3 factors. We've then put an UPDATE similar to:

        UPDATE OLTPTable1
        SET Field1 = CASE WHEN Mask.Field1 = 0 THEN Staging.Field1
                          WHEN Mask.Field1 = 1 THEN COALESCE(Staging.Field1, OLTPTable1.Field1)
                          WHEN Mask.Field1 = 2 THEN COALESCE(OLTPTable1.Field1, Staging.Field1)
                     END
        ...

    As you can imagine, the performance is rather horrendous. Has anyone tackled a similar requirement? We're an MS shop using a Windows Service to launch SSIS packages that handle the data processing. Unfortunately, we're pretty much novices at this stuff.

    Read the article

  • How to search for files of a particular extension across the entire device?

    - by KayKay
    In my application, I am trying to find all files of a particular extension (like .pdf, .txt, etc.) that are stored on the whole device (not only in the home directory), and I want to list them in a table view. Is it possible to do so? If so, can I associate files of a specific extension so they open in a supporting application (any third-party plug-ins)? I went through numerous documents but couldn't find a solution. Also, how can I index files of these extensions for fast search? Any help is appreciated. Thanks.

    Read the article

  • Usability - How to edit favorites?

    - by Florian
    Hi, I'd like to get some opinions about usability in the following case. Target group: people from 30-50, with low to middle internet affinity. App: I have a website with login. Visitors can save interesting pages in their fav-box for fast access. Here's the actual question: how should visitors edit these favorites? Is it better to give them direct access to drag/drop and delete their favs, or is it better to have an edit button so they have to activate an edit mode first? The fav-link would look like this: | link text to click | icon-drag | icon-delete | Thanks for the input, TC

    Read the article

  • Efficient data structure design

    - by Sway
    Hi there, I need to match a series of user-entered words against a large dictionary of words (to ensure the entered value exists). So if the user entered "orange", it should match the entry "orange" in the dictionary. Now the catch: the user can also enter a wildcard or a series of wildcard characters, say "or__ge", which would also match "orange". The key requirements are:
    - this should be as fast as possible
    - it should use the smallest amount of memory achievable
    If the word list were small, I could use a string containing all the words and use regular expressions. However, given that the word list could contain potentially hundreds of thousands of entries, I'm assuming this wouldn't work. So is some sort of 'tree' the way to go for this...? Any thoughts or suggestions on this would be totally appreciated! Thanks in advance, Matt
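    A minimal sketch of the 'tree' idea (C#): a trie over the dictionary, where a literal character follows exactly one child and a wildcard branches to all children, so the work is bounded by the number of viable prefixes rather than the size of the word list. The '_' wildcard character is taken from the example above:

        using System.Collections.Generic;
        using System.Linq;

        class TrieNode
        {
            public Dictionary<char, TrieNode> Children = new Dictionary<char, TrieNode>();
            public bool IsWord;
        }

        class WordTrie
        {
            private readonly TrieNode root = new TrieNode();

            public void Add(string word)
            {
                TrieNode node = root;
                foreach (char c in word)
                {
                    TrieNode next;
                    if (!node.Children.TryGetValue(c, out next))
                        node.Children[c] = next = new TrieNode();
                    node = next;
                }
                node.IsWord = true;
            }

            public bool Matches(string pattern)
            {
                return Matches(root, pattern, 0);
            }

            private static bool Matches(TrieNode node, string pattern, int i)
            {
                if (i == pattern.Length) return node.IsWord;
                if (pattern[i] == '_') // wildcard: try every branch
                    return node.Children.Values.Any(child => Matches(child, pattern, i + 1));
                TrieNode next;
                return node.Children.TryGetValue(pattern[i], out next)
                       && Matches(next, pattern, i + 1);
            }
        }

    If memory becomes the dominant concern, the same lookup works on a DAWG (a trie with shared suffixes) at the cost of a more involved build step.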

    Read the article

  • Should a given URI in a RESTful architecture always return the same response?

    - by keithjgrant
    This is kind of a follow-up question to this one. So, is having a unique response for any given URI a core tenet of RESTful architecture? A lot of discussion here tends in that direction, but I haven't seen it stated anywhere as a "hard and fast" rule. I understand the value of it (for caching, crawling, passing links, etc.), but I also see things like the Twitter API violate it (a request to http://api.twitter.com/1/statuses/friends_timeline.xml will vary based on the username given), and I understand there are times when it may be necessary; not to mention that a chronologically paged resource will also change as new elements are added. Should I strive to eliminate varied responses from the same URI altogether, or do I just accept that sometimes it isn't practical, and as long as I minimize its occurrence, I'll be in decent shape?

    Read the article
