Search Results

Search found 61615 results on 2465 pages for 'execution time'.


  • What tools do you use when generating markup from mockups?

    - by Paul
    So I endeavor to spend most of my time on the app-server side of things, but from time to time I need to get my hands dirty and generate markup/CSS/JS from a wireframe or mockup. As far as tools go, I've found Browsershots and the Litmus app helpful and, of course, VMs as well for checking things out live in IE 6/7/8. Otherwise I do the heavy lifting in Vim. For generating new markup that's not tied to a target design, some of the CSS frameworks and tools like Sass look useful, but I'm skeptical of their utility when I need to generate markup to match a Photoshop design. So what tips and tools do you keep in your markup-generating utility belt when building solid markup from designs? My list so far: Browsershots, Browserlab.adobe.com, Haml / Less / Sass (not used yet, but I will probably explore them).
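
    For reference, a tiny SCSS sketch of what Sass adds over plain CSS (variables, nesting, colour functions); the selectors and values here are made up purely for illustration:

        // Hypothetical snippet -- variables and nesting are the main draws of Sass/SCSS.
        $brand-color: #336699;
        $gutter: 16px;

        .sidebar {
          width: 240px;
          padding: $gutter;

          a {
            color: $brand-color;

            &:hover {
              color: darken($brand-color, 15%);  // darken() is a built-in Sass function
            }
          }
        }

    Whether that helps when matching a fixed Photoshop comp is debatable, but it does cut repetition once the measurements are pinned down.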

    Read the article

  • Unicorn: Which number of worker processes to use?

    - by blackbird07
    I am running a Ruby on Rails app on a virtual Linux server that is capped at 1 GB RAM. Currently I am constantly hitting the limit and would like to optimize memory utilization. One option I am looking at is reducing the number of Unicorn workers. So what is the best way to determine the number of Unicorn workers to use? The current setting is 10 workers, but the maximum number of requests per second I have seen in Google Analytics Real-Time is 3 (and that only once, at a peak time; 99% of the time it does not go above 1 request per second). So is it a safe assumption that I can, for now, go with 4 workers, leaving room for unexpected bursts of requests? What metrics should I look at to determine the number of workers, and what tools can I use for that on my Ubuntu machine?
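
    A rough sizing sketch for the Unicorn config (the memory figures below are assumptions, not measurements -- check the resident size of one warm worker first, e.g. with ps, and divide the RAM you can spare by that number):

        # config/unicorn.rb -- sketch only.
        #
        # Example arithmetic with assumed numbers: 1024 MB total, ~150 MB reserved for
        # the OS and other processes, ~200 MB RSS per warm worker:
        #   (1024 - 150) / 200  =>  roughly 4 workers
        worker_processes 4

        # Kill requests that hang so a stuck worker frees its memory.
        timeout 30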

    Read the article

  • Does the number of busy worker threads in the CLR ThreadPool affect performance of I/O threads?

    - by andrej351
    We have a Windows service which hosts a number of WCF services and, in an unrelated part of the app, makes extensive use of the TPL Task class to asynchronously do relatively short bits of work. It is my understanding that WCF uses managed I/O threads from the ThreadPool to execute requests. I noticed that after deploying a feature which significantly raised the application's use of Tasks, and therefore its use of ThreadPool worker threads as well, the performance of a couple of web services has become very slow. We're talking minutes instead of less than a second. The number of Tasks actually trying to run at any one time can range between 20 and 1000, which makes me think that any new (last-in) work needing some CPU time could be forced to wait for quite a while. Does the (in my case extremely large) number of busy ThreadPool worker threads affect the ThreadPool's managed I/O threads? Or could these two be connected in any way? Thanks!
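
    One way to see whether worker-thread saturation is actually the problem is to log the pool's headroom while the slowdown is happening; a minimal C# sketch (where and how often you log is up to you):

        using System;
        using System.Threading;

        static class ThreadPoolProbe
        {
            // Logs how many worker and I/O completion-port threads are still available.
            // If the worker side hovers near its maximum while the I/O side stays idle,
            // the Tasks are starving the worker half of the pool.
            public static void LogUsage()
            {
                int maxWorkers, maxIo, availableWorkers, availableIo;
                ThreadPool.GetMaxThreads(out maxWorkers, out maxIo);
                ThreadPool.GetAvailableThreads(out availableWorkers, out availableIo);

                Console.WriteLine(
                    "Worker threads in use: {0}/{1}, I/O threads in use: {2}/{3}",
                    maxWorkers - availableWorkers, maxWorkers,
                    maxIo - availableIo, maxIo);
            }
        }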

    Read the article

  • XNA or C# Pop-up progress bar for the LoadContent() method

    - by Warlax
    Hey people, we wrote a small game using Microsoft's XNA Game Studio 3.1. LoadContent() takes a long time because, in addition to loading models and config files, we also run some one-time (per run) terrain analysis. We are not C# or XNA programmers - we're Java programmers - and we want to be able to give the user some feedback that the system is loading. Preferably this will be a simple pop-up with a progress bar that says something like "loading, please wait". The progress bar doesn't have to go from 0 to 1; it can be one of those 'back and forth' indeterminate bars. I was hoping for some quick copy-paste-ready code to do just that, as it is not a central piece of our project, nor do we need to delve into too much documentation. I appreciate your time, effort, and possible donation. Thanks.
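
    Not copy-paste-ready for any specific project, but a minimal sketch of the usual pattern, assuming the slow part is the CPU-bound terrain analysis rather than the GraphicsDevice content itself: load the font and models in LoadContent as normal, push the analysis onto a background thread, and draw a "loading" message until it finishes. Class, asset and method names below are placeholders.

        using System.Threading;
        using Microsoft.Xna.Framework;
        using Microsoft.Xna.Framework.Graphics;

        public class Game1 : Game
        {
            GraphicsDeviceManager graphics;
            SpriteBatch spriteBatch;
            SpriteFont font;                 // add a .spritefont asset named "Default"
            volatile bool analysisDone;

            public Game1()
            {
                graphics = new GraphicsDeviceManager(this);
                Content.RootDirectory = "Content";
            }

            protected override void LoadContent()
            {
                spriteBatch = new SpriteBatch(graphics.GraphicsDevice);
                font = Content.Load<SpriteFont>("Default");
                // ... load models/config here as before (usually fast) ...

                // Run the expensive one-time analysis off the main thread.
                Thread worker = new Thread(() => { RunTerrainAnalysis(); analysisDone = true; });
                worker.IsBackground = true;
                worker.Start();
            }

            void RunTerrainAnalysis() { /* the expensive per-run analysis */ }

            protected override void Draw(GameTime gameTime)
            {
                graphics.GraphicsDevice.Clear(Color.Black);
                if (!analysisDone)
                {
                    spriteBatch.Begin();
                    spriteBatch.DrawString(font, "Loading, please wait...",
                                           new Vector2(100, 100), Color.White);
                    spriteBatch.End();
                }
                else
                {
                    // normal game rendering
                }
                base.Draw(gameTime);
            }
        }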

    Read the article

  • AbsoluteTime with numeric argument behaves strangely.

    - by dreeves
    This is strange:

        DateList@AbsoluteTime[596523]

    returns {2078, 7, 2, 2, 42, 9.7849}, but

        DateList@AbsoluteTime[596524]

    returns {1942, 5, 26, 20, 28, 39.5596}. The question: what's going on? Note that AbsoluteTime with a numeric argument is undocumented. (I think I now know what it's doing, but figured this is useful to have as a StackOverflow question for future reference; and I'm curious if there's some reason for that magic 596523 number.) PS: I encountered this when writing these utility functions for converting to and from Unix time in Mathematica:

        (* Using Unix time (an integer) instead of Mathematica's AbsoluteTime... *)
        tm[x___] := AbsoluteTime[x]             (* tm is an alias for AbsoluteTime.   *)
        uepoch = tm[{1970}, TimeZone->0];       (* unixtm works analogously to tm.    *)
        unixtm[x___] := Round[tm[x]-uepoch]     (* tm & unixtm convert between unix & *)
        unixtm[x_?NumericQ] := Round[x-uepoch]  (* mma epoch time when given numeric  *)
        tm[t_?NumericQ] := t+uepoch             (* args. Ie, they're inverses.        *)

    Read the article

  • Data structure for a particular problem?

    - by AGeek
    Hi, which data structure can perform insertion, deletion and searching operations in O(1) time in the worst case? We may assume the set of elements are integers drawn from a finite set 1, 2, ..., n, and that initialization can take O(n) time. I can only think of implementing a hash table; implementing it with trees will not give O(1) time complexity for any of the operations - or is it possible? Kindly share your views on this, or on any other data structure apart from these. Thanks.
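
    For keys known to come from 1..n, one standard option the stated constraints allow is a direct-address table (a plain presence array): O(n) initialization, then genuinely O(1) worst-case insert, delete and membership. A sketch in Python:

        class DirectAddressSet(object):
            """O(1) worst-case insert/delete/search for integers drawn from 1..n.

            Initialization is O(n); space stays O(n) no matter how few elements
            are actually stored.
            """

            def __init__(self, n):
                self._present = [False] * (n + 1)   # index 0 unused

            def insert(self, x):
                self._present[x] = True

            def delete(self, x):
                self._present[x] = False

            def search(self, x):
                return self._present[x]

        # Usage sketch
        s = DirectAddressSet(10)
        s.insert(3)
        print(s.search(3))   # True
        s.delete(3)
        print(s.search(3))   # False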

    Read the article

  • Optimizing Python code performance when importing a zipped CSV into a MongoDB collection

    - by mark
    I need to import a zipped CSV into a MongoDB collection, but there is a catch - every record contains a timestamp in Pacific Time, which must be converted to the local time corresponding to the (longitude, latitude) pair found in the same record. The code looks like so:

        def read_csv_zip(path, timezones):
            with ZipFile(path) as z, z.open(z.namelist()[0]) as input:
                csv_rows = csv.reader(input)
                header = csv_rows.next()
                check, converters = get_aux_stuff(header)
                for csv_row in csv_rows:
                    if check(csv_row):
                        row = {
                            converter[0]: converter[1](value)
                            for converter, value in zip(converters, csv_row)
                            if allow_field(converter)
                        }
                        ts = row['ts']
                        lng, lat = row['loc']
                        found_tz_entry = timezones.find_one(SON({'loc': {'$within': {'$box': [
                            [lng - tz_lookup_radius, lat - tz_lookup_radius],
                            [lng + tz_lookup_radius, lat + tz_lookup_radius]]}}}))
                        if found_tz_entry:
                            tz_name = found_tz_entry['tz']
                            local_ts = ts.astimezone(timezone(tz_name)).replace(tzinfo=None)
                            row['tz'] = tz_name
                        else:
                            local_ts = (ts.astimezone(utc) + timedelta(hours=int(lng / 15))).replace(tzinfo=None)
                        row['local_ts'] = local_ts
                        yield row

        def insert_documents(collection, source, batch_size):
            while True:
                items = list(itertools.islice(source, batch_size))
                if len(items) == 0:
                    break
                try:
                    collection.insert(items)
                except:
                    for item in items:
                        try:
                            collection.insert(item)
                        except Exception as exc:
                            print("Failed to insert record {0} - {1}".format(item['_id'], exc))

        def main(zip_path):
            with Connection() as connection:
                data = connection.mydb.data
                timezones = connection.timezones.data
                insert_documents(data, read_csv_zip(zip_path, timezones), 1000)

    The code proceeds as follows: every record read from the CSV is checked and converted to a dictionary, where some fields may be skipped, some titles renamed (from those appearing in the CSV header), and some values converted (to datetime, to integers, to floats, etc.). For each record, a lookup is made into the timezones collection to map the record location to the respective time zone. If the mapping is successful, that time zone is used to convert the record timestamp (Pacific Time) to the respective local timestamp; if no mapping is found, a rough approximation is calculated. The timezones collection is appropriately indexed, of course - calling explain() confirms it.

    The process is slow. Naturally, having to query the timezones collection for every record kills the performance. I am looking for advice on how to improve it. Thanks.

    EDIT: The timezones collection contains 8176040 records, each containing four values:

        > db.data.findOne()
        { "_id" : 3038814, "loc" : [ 1.48333, 42.5 ], "tz" : "Europe/Andorra" }

    EDIT2: OK, I have compiled a release build of http://toblerity.github.com/rtree/ and configured the rtree package. Then I created an rtree dat/idx pair of files corresponding to my timezones collection, so instead of calling collection.find_one I call index.intersection. Surprisingly, not only is there no improvement, it actually runs more slowly now! Maybe rtree could be fine-tuned to load the entire dat/idx pair into RAM (704M), but I do not know how to do it. Until then, it is not an alternative. In general, I think the solution should involve parallelization of the task.
    EDIT3: Profile output when using collection.find_one:

        >>> p.sort_stats('cumulative').print_stats(10)
        Tue Apr 10 14:28:39 2012    ImportDataIntoMongo.profile

            64549590 function calls (64549180 primitive calls) in 1231.257 seconds

        Ordered by: cumulative time
        List reduced from 730 to 10 due to restriction <10>

        ncalls  tottime  percall  cumtime  percall  filename:lineno(function)
             1    0.012    0.012 1231.257 1231.257  ImportDataIntoMongo.py:1(<module>)
             1    0.001    0.001 1230.959 1230.959  ImportDataIntoMongo.py:187(main)
             1  853.558  853.558  853.558  853.558  {raw_input}
             1    0.598    0.598  370.510  370.510  ImportDataIntoMongo.py:165(insert_documents)
        343407    9.965    0.000  359.034    0.001  ImportDataIntoMongo.py:137(read_csv_zip)
        343408    2.927    0.000  287.035    0.001  c:\python27\lib\site-packages\pymongo\collection.py:489(find_one)
        343408    1.842    0.000  274.803    0.001  c:\python27\lib\site-packages\pymongo\cursor.py:699(next)
        343408    2.542    0.000  271.212    0.001  c:\python27\lib\site-packages\pymongo\cursor.py:644(_refresh)
        343408    4.512    0.000  253.673    0.001  c:\python27\lib\site-packages\pymongo\cursor.py:605(__send_message)
        343408    0.971    0.000  242.078    0.001  c:\python27\lib\site-packages\pymongo\connection.py:871(_send_message_with_response)

    Profile output when using index.intersection:

        >>> p.sort_stats('cumulative').print_stats(10)
        Wed Apr 11 16:21:31 2012    ImportDataIntoMongo.profile

            41542960 function calls (41542536 primitive calls) in 2889.164 seconds

        Ordered by: cumulative time
        List reduced from 778 to 10 due to restriction <10>

        ncalls  tottime  percall  cumtime  percall  filename:lineno(function)
             1    0.028    0.028 2889.164 2889.164  ImportDataIntoMongo.py:1(<module>)
             1    0.017    0.017 2888.679 2888.679  ImportDataIntoMongo.py:202(main)
             1 2365.526 2365.526 2365.526 2365.526  {raw_input}
             1    0.766    0.766  502.817  502.817  ImportDataIntoMongo.py:180(insert_documents)
        343407    9.147    0.000  491.433    0.001  ImportDataIntoMongo.py:152(read_csv_zip)
        343406    0.571    0.000  391.394    0.001  c:\python27\lib\site-packages\rtree-0.7.0-py2.7.egg\rtree\index.py:384(intersection)
        343406  379.957    0.001  390.824    0.001  c:\python27\lib\site-packages\rtree-0.7.0-py2.7.egg\rtree\index.py:435(_intersection_obj)
        686513   22.616    0.000   38.705    0.000  c:\python27\lib\site-packages\rtree-0.7.0-py2.7.egg\rtree\index.py:451(_get_objects)
        343406    6.134    0.000   33.326    0.000  ImportDataIntoMongo.py:162(<dictcomp>)
           346    0.396    0.001   30.665    0.089  c:\python27\lib\site-packages\pymongo\collection.py:240(insert)

    EDIT4: I have parallelized the code, but the results are still not very encouraging. I am convinced it could be done better. See my own answer to this question for details.

    Read the article

  • Directory file size calculation - how to make it faster?

    - by Xinxua
    Using C#, I am finding the total size of a directory. The logic is this: get the files inside the folder and sum up their sizes, find any sub-directories, and then recurse into them. I also tried another way: using FSO (obj.GetFolder(path).Size). There's not much difference in time between the two approaches. Now the problem is, I have tens of thousands of files in a particular folder, and it takes at least 2 minutes to find the folder size. Also, if I run the program again, it happens very quickly (5 secs) - I think Windows is caching the file sizes. Is there any way I can bring down the time taken when I run the program for the first time?
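
    For reference, a minimal C# sketch of the recursive approach described above (no caching; a cold first run will still be bound by disk seeks rather than by the code):

        using System;
        using System.IO;

        static class DirSize
        {
            // Sums file lengths recursively; skips folders the process cannot open.
            public static long GetDirectorySize(string path)
            {
                long total = 0;
                try
                {
                    foreach (string file in Directory.GetFiles(path))
                        total += new FileInfo(file).Length;
                    foreach (string dir in Directory.GetDirectories(path))
                        total += GetDirectorySize(dir);
                }
                catch (UnauthorizedAccessException)
                {
                    // ignore folders we are not allowed to read
                }
                return total;
            }
        }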

    Read the article

  • Robocopy for Windows 2003 doesn't support /DST option

    - by Jon
    Does anyone know if it is possible to download the latest Robocopy for Windows 2003? The latest version provides the /DST option, which ignores time stamps changed due to BST (British Summer Time). Every time we do a build and sync our servers after the clocks go +1/-1 hour, it takes hours instead of minutes because Robocopy sees everything as changed. I noticed the newer version is included automatically with Vista/Win7, but the Resource Kit that I downloaded doesn't include a new version of Robocopy for Windows Server 2003. If there is a place to download it from, will it also work on Windows Server 2003? Thanks.

    Read the article

  • Script Speed vs Memory Usage

    - by Doug Neiner
    I am working on an image-generation script in PHP and have gotten it working two ways. One way is slow but uses a limited amount of memory; the second is much faster, but uses 6x the memory. There is no leakage in either script (as far as I can tell). In a limited benchmark, here is how they performed:

        --------------------------------------------
        METHOD | TOTAL TIME | PEAK MEMORY | IMAGES
        --------------------------------------------
        One    |     65.626 |     540,036 |    200
        Two    |     20.207 |   3,269,600 |    200
        --------------------------------------------

    And here is the average of the previous numbers (if you don't want to do your own math):

        --------------------------------------------
        METHOD | TOTAL TIME | PEAK MEMORY | IMAGES
        --------------------------------------------
        One    |      0.328 |     540,036 |      1
        Two    |      0.101 |   3,269,600 |      1
        --------------------------------------------

    Which method should I use, and why? I anticipate this being used by a high volume of users, with each user making 10-20 requests to this script during a normal visit. I am leaning toward the faster method because, though it uses more memory, it runs in a third of the time and would reduce the number of concurrent requests.

    Read the article

  • SOLR and Natural Language Parsing - Can I use it?

    - by andy
    Hey guys, my requirements are pretty similar to this: http://stackoverflow.com/questions/90580/word-frequency-algorithm-for-natural-language-processing

    Using SOLR: While the answer to that question is excellent, I was wondering if I could make use of all the time I spent getting to know SOLR for my NLP. I thought of SOLR because it's got a bunch of tokenizers and performs a lot of NLP, it's pretty easy to use out of the box, it's a RESTful distributed app so it's easy to hook up, and I've spent some time with it, so using it could save me time.

    Can I use SOLR? Although the above reasons are good, I don't know SOLR THAT well, so I need to know whether it would be appropriate for my requirements.

    Ideal usage: Ideally, I'd like to configure SOLR and then be able to send it some text and retrieve the indexed, tokenized content.

    Context: So you guys know, I'm working on a small component of a bigger recommendation engine.

    Read the article

  • C program for this question

    - by sashi
    Suppose that a disk drive has 5000 cylinders, numbered 0 to 4999. The drive is currently serving a request at cylinder 143, and the previous request was at cylinder 125. The queue of pending requests, in the given order, is 86, 1470, 913, 1774, 948, 1509, 1022, 1750, 130. Write a C program to find the total distance (in cylinders) that the disk arm moves to satisfy all the pending requests from the current head position, using the SSTF scheduling algorithm. Seek time is the time for the disk arm to move the head to the cylinder containing the desired sector; the SSTF algorithm always selects the pending request with the minimum seek time from the current head position.
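
    A minimal C sketch of the SSTF calculation described above; the request values and starting head position are hard-coded from the question:

        #include <stdio.h>
        #include <stdlib.h>

        #define NREQ 9

        int main(void)
        {
            int requests[NREQ] = {86, 1470, 913, 1774, 948, 1509, 1022, 1750, 130};
            int served[NREQ] = {0};
            int head = 143;              /* current head position from the question */
            int total = 0;
            int done, i;

            /* SSTF: repeatedly service the pending request closest to the current head. */
            for (done = 0; done < NREQ; done++) {
                int best = -1, bestDist = 0;
                for (i = 0; i < NREQ; i++) {
                    if (!served[i]) {
                        int dist = abs(requests[i] - head);
                        if (best == -1 || dist < bestDist) {
                            best = i;
                            bestDist = dist;
                        }
                    }
                }
                served[best] = 1;
                total += bestDist;
                head = requests[best];
            }

            printf("Total head movement with SSTF: %d cylinders\n", total);
            return 0;
        }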

    Read the article

  • Database indexes and their Big-O notation

    - by miket2e
    I'm trying to understand the performance of database indexes in terms of Big-O notation. Without knowing much about it, I would guess that: querying on a primary key or unique index gives an O(1) lookup time; querying on a non-unique index also gives O(1) time, albeit with a slower '1' than for the unique index(?); and querying on a column without an index gives an O(N) lookup time (full table scan). Is this generally correct? Will querying on a primary key ever give worse performance than O(1)? My specific concern is SQLite, but I'd be interested in knowing to what extent this varies between different databases too.
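
    For what it's worth, primary-key and index lookups in SQLite go through a B-tree, so they are O(log N) rather than strictly O(1), although the log factor is small in practice. You can see which access path SQLite picks for a given query with EXPLAIN QUERY PLAN; a small sketch using Python's standard sqlite3 module (table and column names are made up):

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT, city TEXT)")
        conn.execute("CREATE INDEX idx_users_email ON users (email)")

        def show_plan(sql):
            print(sql)
            for row in conn.execute("EXPLAIN QUERY PLAN " + sql):
                print("   ", row)

        show_plan("SELECT * FROM users WHERE id = 42")          # rowid lookup (B-tree search)
        show_plan("SELECT * FROM users WHERE email = 'a@b.c'")  # uses idx_users_email
        show_plan("SELECT * FROM users WHERE city = 'Oslo'")    # full table scan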

    Read the article

  • ant - copying to /lib/endorsed on Windows 7: library is not available to the next task

    - by kfox
    On Windows 7 I have an Ant target that copies the Xalan library into the JDK's endorsed directory so that the next XSLT transformation task can run. The first time the Ant target runs, the XSLT transformation fails; the second time it runs, the jar file is already in the correct place and the XSLT transformation succeeds. Even on the first run, it looks like the file copied successfully. It feels like a timing issue, but I don't know what I can do to get around it. Here is my copy task:

        <mkdir dir="${java.home}\lib\endorsed"/>
        <copy file="${basedir}\xalan.jar" tofile="${java.home}\lib\endorsed\xalan.jar"/>

    Has anyone seen anything like this before?
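
    One possible way to sidestep the timing/endorsed-directory question altogether (a sketch, not tested here) is to hand Xalan to the <xslt> task directly via a nested classpath instead of copying it into the JDK; the file names below are placeholders:

        <!-- Sketch: give the xslt task its own classpath rather than relying on
             ${java.home}\lib\endorsed being populated in the same Ant run. -->
        <xslt in="input.xml" out="output.html" style="transform.xsl">
            <classpath>
                <pathelement location="${basedir}\xalan.jar"/>
            </classpath>
        </xslt>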

    Read the article

  • Maven and db4o dependency

    - by Jens Jansson
    I'm intrigued to test new frameworks in the Java world, and decided to create a new project that takes advantage of Maven and db4o. I'm starting to get the hang of Maven, but I'm having a hard time adding db4o as a dependency to the project. The first problem is that db4o doesn't exist in the official Maven repositories. The next problem is that db4o seems to have recently restructured their whole site's URIs, so I'm getting 'site not found' messages all the time when I try to navigate their site. I found somewhere a potential Maven repository that should be at https://source.db4o.com/maven, but I get "Error reading archetype catalog https://source.db4o.com/maven - Unable to locate resource in repository" every time I try to access it. So, any suggestions on how to get db4o set up through Maven? I'm managing Maven through Eclipse with the M2Eclipse plugin.
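
    In case it helps, this is the general shape of a pom.xml that pulls from a third-party repository (these elements go inside the <project> element). The repository URL is the one from the question; the groupId, artifactId and version are guesses and would need to be checked against whatever the repository actually serves:

        <!-- Sketch only: coordinates below are placeholders to verify. -->
        <repositories>
            <repository>
                <id>db4o</id>
                <url>https://source.db4o.com/maven</url>
            </repository>
        </repositories>

        <dependencies>
            <dependency>
                <groupId>com.db4o</groupId>
                <artifactId>db4o-full-java5</artifactId>
                <version>7.12</version>
            </dependency>
        </dependencies>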

    Read the article

  • Asynchronous method execution

    - by alexeyndru
    I have a delegate method with the following tasks: get something from the internet (e.g. an image from a web site), process that image in a certain way, and display the result in a subview. Getting the image takes some time, depending on the network's speed, so the result of the processing is displayed in the subview only after a little while. My problem: between fetching the image and showing the result, the device looks unresponsive. Any attempt to show a spinner, or any other method called inside this main procedure, has no effect until the result is processed. How should I change this behaviour? I would like to put up a big spinner during that waiting time. Thank you.

    Read the article

  • Strategy for developing a multi-function ASP.NET web application

    - by user247023
    I'm about to start a new project and want some advice on how to implement it. I need a web application which contains a booking module for reserving time slots, and a time management module which will let employees clock in and clock out. If I am writing an update to the time management module, I don't want to disrupt the availability of the booking engine by releasing a new solution containing both modules. To make things more difficult, there is some shared functionality, like common users, roles and security. Here's a suggestion I've gotten, which sounds a bit cruddy but may be functional: write a 'container' web application which consists of basically a frame plus the authentication/security features, and which has links that load the two independently built and released web applications into the frame. I can see that if I wanted to update the time management module, I would only need to build and release it separately, and the rest of the solution would be untouched. Any better alternatives?

    Read the article

  • Closing any open info windows in Google Maps API v3

    - by hhj
    As the title states, on a given event (for me this happens to be upon opening a new google.maps.InfoWindow), I want to close any other currently open info windows. Right now I can open many at a time, but I want only one open at a time. I am creating the info windows dynamically (i.e. I don't know ahead of time how many will be generated), so in the click event of the current info window (which is where I want all the other open ones closed) I don't have a reference to any of the other open info windows on which to call close(). I am wondering how I can achieve this. I am not an experienced JavaScript programmer, so I don't know if I need to use reflection or something similar here. Would the best way be just to save all the references in some sort of collection, then loop through the list closing them all? Thanks.
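
    A simple reference-tracking sketch is usually enough here, no reflection needed: keep one variable pointing at whichever window is currently open and close it before opening the next. In this sketch, the map variable and the markers are assumed to be created elsewhere:

        // Sketch: `map` and the markers already exist.
        var openInfoWindow = null;

        function attachInfoWindow(marker, contentHtml) {
          var infoWindow = new google.maps.InfoWindow({ content: contentHtml });

          google.maps.event.addListener(marker, 'click', function () {
            if (openInfoWindow) {
              openInfoWindow.close();      // close whichever window is currently open
            }
            infoWindow.open(map, marker);
            openInfoWindow = infoWindow;
          });
        }

    An equally common variant is to reuse a single InfoWindow for the whole map and just call setContent() and open() on each click, which guarantees only one can ever be visible.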

    Read the article

  • What lessons can you learn from software maintenance?

    - by Vasil Remeniuk
    Hello everyone. In a perfect world, all software developers would work with cutting-edge technologies, creating systems from scratch. In real life, almost all of us have to maintain software from time to time (the unlucky ones do it on a regular basis). Personally, I spent the first 2 years of my career fixing bugs at a company that no longer exists (it was taken over by Oracle). Probably the biggest lesson I learned in that time: despite the pressure, always try to get as much information about the domain as possible (even if it's irrelevant to fixing a specific bug or adding a feature) - abstract domain knowledge doesn't lose value as fast as knowledge about trendy frameworks or methodologies. What lessons have you learned from maintenance?

    Read the article

  • "Error opening associated documents" message when loading VS

    - by kumar
    When loading up a solution in VS2008 I get this message: An error was encountered while opening associated documents the last time this solution was loaded. Document load is being skipped during this solution load in order to avoid that error. It shut down immediately the first time I opened it. The next time I opened it, VS popped up a message box but did not shut down at first; however, it did shut down when I clicked a usercontrol or ASPX page. How can I find which document is causing the problem? Thanks...

    Read the article

  • Best practice -- Content Tracking Remote Data (cURL, file_get_contents, cron, et al.)?

    - by user322787
    I am attempting to build a script that will log data that changes every second. The initial thought was "just run a PHP file that does a cURL call every second from cron" - but I have a very strong feeling that this isn't the right way to go about it. Here are my specifications: there are currently 10 sites I need to gather data from and log to a database, and this number will invariably increase over time, so the solution needs to be scalable. Each site spits its data out to a URL every second, but only keeps 10 lines on the page, and can sometimes emit up to 10 new lines each time, so I need to pick up that data every second to ensure I get all of it. As I will also be writing this data to my own DB, there's going to be I/O every second of every day for a considerable length of time. Barring magic, what is the most efficient way to achieve this? It might help to know that the data I am getting every second is very small, under 500 bytes.
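
    Whatever drives the schedule (note that cron's finest granularity is one minute, so the per-second cycle has to live inside a long-running script or daemon), the ten requests can at least be issued in parallel from a single loop rather than one cURL call at a time. A rough PHP sketch using curl_multi; the URL list and the parse/insert step are placeholders:

        <?php
        // Sketch: fetch all source URLs in parallel, then hand each body to your own parser/DB code.
        $urls = array('http://example.com/feed1', 'http://example.com/feed2'); // placeholders

        $mh = curl_multi_init();
        $handles = array();
        foreach ($urls as $url) {
            $ch = curl_init($url);
            curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
            curl_setopt($ch, CURLOPT_TIMEOUT, 2);   // keep one slow site from blocking the cycle
            curl_multi_add_handle($mh, $ch);
            $handles[$url] = $ch;
        }

        $running = null;
        do {
            curl_multi_exec($mh, $running);
            curl_multi_select($mh);                 // wait for activity instead of busy-looping
        } while ($running > 0);

        foreach ($handles as $url => $ch) {
            $body = curl_multi_getcontent($ch);
            // parse $body and insert any new lines into the database here
            curl_multi_remove_handle($mh, $ch);
            curl_close($ch);
        }
        curl_multi_close($mh);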

    Read the article

  • Geocoding service for a startup?

    - by Sologoub
    I'm working on an idea for a service that uses geocoded data (lat/lng) from a US address. The Google Maps API v3 has been awesome, until I read the terms of service and acceptable-use policy a little more closely. The problem is that the terms seem to prohibit use of the Maps API for any commercial use where the site is not freely accessible to the public, such as a subscription-based service. The alternative offered is Google Maps API Premier, but at $10,000 per year minimum it's just not possible at this time. The same goes for the services offered by Yahoo! and MS - the initial fees are small for enterprises, but for a very early stage startup (not even a finished prototype yet!) it's just not doable. The geocoding process needs to be real-time and volume would be very small - a user would enter an address at setup time and only update it if needed. Any help is greatly appreciated!

    Read the article

  • Queue remote calls to a Python Twisted perspective broker?

    - by agartland
    The strength of Twisted (for Python) is its asynchronous framework (I think). I've written an image-processing server that takes requests via Perspective Broker. It works great as long as I feed it less than a couple hundred images at a time. However, sometimes it gets spiked with hundreds of images at virtually the same time, and because it tries to process them all concurrently the server crashes. As a solution, I'd like to queue up the remote_calls on the server so that it only processes ~100 images at a time. It seems like this might be something that Twisted already does, but I can't seem to find it. Any ideas on how to start implementing this? A point in the right direction? Thanks!
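
    Twisted does ship a primitive for exactly this kind of throttling: twisted.internet.defer.DeferredSemaphore. A rough sketch, with processImage standing in for the real per-image work (which may itself return a Deferred):

        from twisted.internet.defer import DeferredSemaphore, gatherResults

        # Allow at most 100 images to be processed concurrently; further calls queue up.
        semaphore = DeferredSemaphore(100)

        def remote_process_batch(images):
            # Each call acquires the semaphore, runs processImage, then releases it.
            deferreds = [semaphore.run(processImage, img) for img in images]
            return gatherResults(deferreds)

        def processImage(img):
            # placeholder for the actual processing code
            pass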

    Read the article

  • How do I create a view with a picker on the bottom and a table view on the top?

    - by Andy
    Hi - first-time asker, long-time lurker. I am trying to create an iPhone view that has a date/time picker on the bottom half of the screen and a grouped, single-section, four-row table view on the top half (almost identical to the one Apple shows in Fig. 2-4 of their View Controller Programming Guide, but then never goes on to explain). Conceptually, I think I understand that what I need is a main view with a pair of subviews - one for the picker and one for the table view. I'm pretty sure I can make the picker work once I have it on-screen, and I'm pretty sure I can make the table view work too. What I can't for the life of me figure out is how, programmatically speaking, to get the two views onto the screen simultaneously. I can lay it out perfectly in Interface Builder, but then it all goes to hell when I switch to Xcode... the view appears with the picker, but no table view. Thanks in advance for any help you can offer.

    Read the article

  • Can Atom be used for things besides syndication feeds?

    - by greim
    Purely in terms of its conceptual model, is the purpose of Atom (and RSS) only to provide a time-sequential series of frequently-updated items, such as "most recent blog posts" or "last twenty SVN commits," or can Atom be legitimately used to represent static and/or non-time-sequential listings/indices? As an example, "index of files under this directory", "dog breeds" or "music genres". Even if there's a date associated with the items, like a file's last modified date, what if you don't necessarily want time to be the primary consideration when you represent that model to your users? The context for this is passing around (generating and consuming) lists of things in a REST-ful environment, hopefully using a well-understood format, where "date something was created/updated" is a pertinent detail, but not the primary consideration. I realize there's probably no right answer, but wanted to get some perspectives. Thanks.
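
    Nothing in the format itself stops a feed from being a static, non-chronological listing; atom:updated is mandatory per feed and per entry, but clients are not obliged to order by it. A minimal hand-written sketch for a "dog breeds" listing (identifiers and dates here are made up):

        <?xml version="1.0" encoding="utf-8"?>
        <feed xmlns="http://www.w3.org/2005/Atom">
          <title>Dog breeds</title>
          <id>urn:example:dog-breeds</id>
          <updated>2010-05-01T00:00:00Z</updated>
          <author><name>example.org</name></author>

          <entry>
            <title>Border Collie</title>
            <id>urn:example:dog-breeds:border-collie</id>
            <updated>2010-05-01T00:00:00Z</updated> <!-- required, but need not drive ordering -->
            <summary>Herding breed originating along the Anglo-Scottish border.</summary>
          </entry>

          <entry>
            <title>Basenji</title>
            <id>urn:example:dog-breeds:basenji</id>
            <updated>2010-05-01T00:00:00Z</updated>
            <summary>Barkless hunting breed from central Africa.</summary>
          </entry>
        </feed>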

    Read the article
