Search Results

Search found 62069 results on 2483 pages for 'unix time'.


  • SSIS process files from folder

    - by RT
    Background: I have a folder that is continuously pumped with files. My SSIS package needs to process the files and then delete them, and it is scheduled to run once every minute. I pick the files up in ascending order of file creation time, build an array of files, and then process and delete them one at a time. Problem: if an instance of my package takes longer than one minute to run, the next instance of the SSIS package will pick up some of the files the previous instance already has in its buffer. By the time the second instance of the package gets around to processing a file, it may already have been deleted by the first instance, creating an exception condition. Is there a way to avoid this exception? Thanks.
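    One pattern that avoids the collision is to have each instance claim a file by moving it into its own working folder before processing it, and to treat a failed move as "another instance already owns this file". Sketched here in Python just to illustrate the idea - in SSIS the same logic would typically live in a Script Task - and the folder names and process() callback are placeholders:

        import os, shutil

        def claim_and_process(source_dir, work_dir, process):
            # Snapshot the folder, oldest files first; tolerate files that vanish
            # between listing and stat'ing because another instance grabbed them.
            entries = []
            for name in os.listdir(source_dir):
                path = os.path.join(source_dir, name)
                try:
                    entries.append((os.path.getctime(path), path, name))
                except OSError:
                    pass
            for _, src, name in sorted(entries):
                dst = os.path.join(work_dir, name)
                try:
                    shutil.move(src, dst)   # the "claim"; raises if the file is already gone
                except (OSError, IOError):
                    continue                # another instance owns this file
                process(dst)
                os.remove(dst)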

    Read the article

  • Unicorn: Which number of worker processes to use?

    - by blackbird07
    I am running a Ruby on Rails app on a virtual Linux server that is capped at 1GB RAM. Currently I am constantly hitting the limit and would like to optimize memory utilization. One option I am looking at is reducing the number of unicorn workers. So what is the best way to determine the number of unicorn workers to use? The current setting is 10 workers, but the maximum number of requests per second I have seen in Google Analytics Real-Time is 3 (reached only once, at a peak time; 99% of the time it does not go above 1 request per second). So is it a safe assumption that I can - for now - go with 4 workers, leaving room for unexpected bursts of requests? What metrics should I look at to determine the number of workers, and what tools can I use for that on my Ubuntu machine?
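    As a rough sizing sketch, the usual rule of thumb is to divide the RAM you can spare for the app by the resident size of one worker and keep a small margin; all numbers below are assumptions to be replaced with measurements from the box (for example, the per-worker RSS reported by ps -o rss -p <worker pid>):

        ram_for_app_mb = 768    # assumed: what is left of the 1 GB cap after the OS and other services
        per_worker_mb  = 150    # assumed: resident size of a single unicorn worker
        spare_workers  = 1      # head-room for unexpected bursts

        workers = max(2, ram_for_app_mb // per_worker_mb - spare_workers)
        print(workers)          # 4 with these assumed numbers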

    Read the article

  • Python - question regarding the concurrent use of `multiprocess`.

    - by orokusaki
    I want to use Python's multiprocessing to do concurrent processing without using locks (locks, to me, are the opposite of multiprocessing), because I want to build multiple reports from different resources at the exact same time during a web request (which normally takes about 3 seconds, but with multiprocessing I can do it in 0.5 seconds). My problem is that if I expose such a feature to the web and get 10 users pulling the same report at the same time, I suddenly have 60 interpreters open at once (which would crash the system). Is this just the common-sense result of using multiprocessing, or is there a trick to get around this potential nightmare? Thanks
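    One way to keep the process count bounded, sketched under the assumption that the report-building functions are picklable, is to share a single fixed-size multiprocessing.Pool across all web requests instead of forking fresh processes per request; concurrent users then queue behind the same handful of workers rather than multiplying interpreters:

        import multiprocessing

        # Created once at application start-up, not per request: the pool size,
        # not the number of concurrent users, now caps how many interpreters exist.
        pool = multiprocessing.Pool(processes=6)

        def build_report_page(report_funcs, args):
            # Each request farms its report pieces out to the shared pool and waits;
            # 10 simultaneous users queue behind 6 workers instead of starting 60 processes.
            async_results = [pool.apply_async(func, (args,)) for func in report_funcs]
            return [result.get(timeout=30) for result in async_results]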

    Read the article

  • Generating Running Sum of Ratings in SQL

    - by Koobz
    I have a rating table. It boils down to:

        rating_value    created
        +2              April 3rd
        -5              April 20th

    So every time someone gets rated, I track that rating event in the database. I want to generate a rating history/time graph where the rating at each point in time is the sum of all ratings up to that point. For example, a person's rating on April 5th might be:

        select sum(rating_value) from ratings where created <= 'april 5th'

    The only problem with this approach is that I have to run it day by day across the interval I'm interested in. Is there some trick to generating a running total from this sort of data? Otherwise, I'm thinking the best approach is to create a denormalized "rating history" table alongside the individual ratings.
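    If the database supports window functions, the whole history can come back from one ordered query instead of one query per day. A sketch using Python's sqlite3 module (this assumes an SQLite build new enough for SUM() OVER, i.e. 3.25+; older databases would need a correlated subquery or the denormalized history table instead):

        import sqlite3

        conn = sqlite3.connect('ratings.db')
        rows = conn.execute(
            "SELECT created, "
            "       SUM(rating_value) OVER (ORDER BY created "
            "                               ROWS UNBOUNDED PRECEDING) AS rating_so_far "
            "FROM ratings ORDER BY created"
        ).fetchall()
        for created, rating_so_far in rows:
            print(created, rating_so_far)   # one row per rating event, with the running total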

    Read the article

  • What is the best way to migrate documents into SharePoint (MOSS) 2007?

    - by Jeramie Mercker
    I'm working with a customer that needs to migrate documents from their current document management system (not SharePoint) into SharePoint MOSS 2007 while retaining document history and metadata. I've written a proof of concept using the SharePoint web services, and that looks promising, but the snag so far is programmatically setting the created date/time and user: the web services allow the fields to be set but implicitly override them with the currently logged-in user and the current date/time. For obvious reasons, I need to be able to keep the original created date/time and user on migration. Does anyone know the best way to approach this problem?

    Read the article

  • How to set up Django app to make cookies work on subdomain

    - by Dzida
    Hi, I have deployed my application on subdomain.domain.com (it runs only on that one subdomain). Everything works fine except that, from time to time, users cannot log in to the application (the message "Looks like your browser isn't configured to accept cookies. Please enable cookies, reload this page, and try again" is shown when trying to log into the admin panel). I've noticed that restarting the web server eliminates the problem for a while. Does anyone have experience setting up a Django project on a subdomain, and can you guide me on how to configure my application so it works correctly without the need for an occasional restart?
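    For what it's worth, one thing to double-check is that the cookie settings match the subdomain the site is actually served from, and that sessions are stored in a backend that survives a web-server restart (the restart-fixes-it symptom often points at an in-process cache backend). A hedged settings.py sketch - the domain values are placeholders:

        SESSION_COOKIE_DOMAIN = 'subdomain.domain.com'   # or '.domain.com' to share the cookie across subdomains
        CSRF_COOKIE_DOMAIN = 'subdomain.domain.com'
        SESSION_ENGINE = 'django.contrib.sessions.backends.db'  # database-backed sessions persist across restarts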

    Read the article

  • Does MVC replace traditional manually created UI, BLL, DAL ?

    - by used2could
    I'm used to creating the UI, BLL, and DAL by hand (sometimes I've used LINQ to SQL or SubSonic for the DAL). I've done several small projects using MVC since its release, and on those projects I've still written a BLL and DAL by hand and then incorporated them into the MVC models/controllers. Looking to optimize my time on projects, this seems like overkill and a potential waste of time. My question is: would it be acceptable to roll a DAL such as SubSonic and use it directly in the models/controllers of my MVC web app? The models and controllers would then act as the BLL. I just see this as a major time saver, since I wouldn't have to worry about another tier. (Agree ? "Please state why" : "Make argument")

    Read the article

  • How to join two wav files using Python?

    - by kaushik
    I am using Python, and I want to join two wav files, appending one at the end of the other. There is a question on the forum which suggests how to merge two wav files, i.e. add the contents of one wav file at a certain offset, but I want to join the two files end to end. I also had a problem playing my own wav file using the winsound module: I was able to play the sound, but only by calling time.sleep for a certain time before playing any Windows sound. The disadvantage of this is that if I want to play a sound longer than time.sleep(N) seconds, the Windows sound will just overlap after N seconds, play, and then stop. Can anyone help? Please kindly suggest how to solve these problems. Thanks in advance
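    For the concatenation itself, the standard-library wave module is usually enough when both clips share the same sample rate, sample width and channel count; a minimal sketch (file names are placeholders):

        import wave

        def join_wavs(first_path, second_path, out_path):
            # Naive concatenation: assumes both files share the same format.
            w1 = wave.open(first_path, 'rb')
            w2 = wave.open(second_path, 'rb')
            out = wave.open(out_path, 'wb')
            out.setparams(w1.getparams())
            out.writeframes(w1.readframes(w1.getnframes()))
            out.writeframes(w2.readframes(w2.getnframes()))
            w1.close(); w2.close(); out.close()

        join_wavs('first.wav', 'second.wav', 'joined.wav')

    For playback, winsound.PlaySound(path, winsound.SND_FILENAME) blocks until the clip finishes by default, so no time.sleep guess is needed (pass winsound.SND_ASYNC only if you explicitly want it to return immediately).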

    Read the article

  • VS2005 project has dependency that is not built

    - by Eyal
    I have a VS2005 solution that contains many projects and dependencies (some C++, some C#); in the past it compiled successfully. When I rebuild the whole solution, it fails on a project claiming that a DLL is missing (a DLL that, according to the dependencies, should have been built first). The thing is that it fails on a random project from time to time (not the same project every time). I'm not sure it is meaningful, but in the output console I see "Deleting intermediate and output files for project". Doesn't VS2005 follow the "Project Build Order" and rebuild every project starting with its dependencies?

    Read the article

  • PHP and JSON comment help..

    - by jay
    Hi guys, thanks for looking. My problem is that I can't seem to get jQuery to display my generated data. Here is my JSON output:

        ("posts":[{"id":"1-2","time":"0","name":"dash","avatar":"http:\/\/www.gravatar.com\/avatar\/9ff30cc2646099e31a4ee4c0376091b0?s=182&d=identicon&r=PG","comment":"rtetretrete tet rt uh utert"},{"id":"2-2","time":"0","name":"james","avatar":"http:\/\/www.gravatar.com\/avatar\/d41d8cd98f00b204e9800998ecf8427e?s=182&d=identicon&r=PG","comment":"fsdfdfsdf\r\n"}])

    and here is my jQuery:

        $(document).ready(function(){
            var url = "comments.php";
            $.getJSON(url, function(json){
                $.each(json.posts, function(i, post){
                    $("#content").append(
                        '<div class="post">'+
                            '<h1>'+post.name+'</h1>'+
                            '<p>'+post.comment+'</p>'+
                            '<p>added: <em>'+post.time+'</em></p>'+
                            '<p>posted by: <strong>'+post.name+'</strong></p>'+
                            '<p>avatar: <strong>'+post.avatar+'</strong></p>'+
                        '</div>'
                    );
                });
            });
        });

    Read the article

  • Help me find some good 'Reflection' reading materials??

    - by IbrarMumtaz
    If it weren't for you guys, I would never have discovered Albahari.com's free threading ebook. My apologies if I come off as extremely lazy, but I need to make efficient use of my time and make sure I'm not just swallowing any garbage off the web. I'm therefore looking for the best, most informative, and reasonably detailed material on Reflection in .NET. Reflection is something that comes up time and time again, and I want to extend my reading beyond what I already know from the official 70-536 book. I'm not a big fan of MSDN, but at the moment that's all I'm using. Has anyone got any other good published reading material off the web that can help with revision for the exam? It would be greatly appreciated! Thanks in advance, Ibrar

    Read the article

  • System.Interactive: Difference between Memoize() and MemoizeAll()?

    - by Joel Mueller
    In System.Interactive.dll (v1.0.2521.0) from Reactive Extensions, EnumerableEx has both a Memoize method and a MemoizeAll method. The API documentation is identical for both of them: Creates an enumerable that enumerates the original enumerable only once and caches its results. However, these methods are clearly not identical. If I use Memoize, my enumerable has values the first time I enumerate it, and seems to be empty the second time. If I use MemoizeAll then I get the behavior I would expect from the description of either method - I can enumerate the result as many times as I want and get the same results each time, but the source is only enumerated once. Can anyone tell me what the intended difference between these methods is? What is the use-case for Memoize? It seems like a fairly useless method with really confusing documentation.

    Read the article

  • Does any centralized version control faster than SVN exist?

    - by Savageman
    Hello, I've been using SVN for a long time and now we're trying out Git. I'm not getting into the centralized/decentralized debate here; my only concern is speed, and the latter tool is much faster. But sometimes I NEED to work with a centralized approach, which is simpler and less complex than the decentralized one. The learning curve is much shorter, which saves a lot of time (while digging into a decentralized system would waste time, given that the learning curve is much longer and we run into more problems working with it). However, SVN is really slow compared to Git, and I don't think that has anything to do with being centralized: decentralized systems also have to deal with server connections and file transfers. So I can easily imagine that a faster implementation of centralized version control could exist. Does anyone have any insight into this?

    Read the article

  • Optimizing python code performance when importing zipped csv to a mongo collection

    - by mark
    I need to import a zipped csv into a mongo collection, but there is a catch - every record contains a timestamp in Pacific Time, which must be converted to the local time corresponding to the (longitude, latitude) pair found in the same record. The code looks like so:

        def read_csv_zip(path, timezones):
            with ZipFile(path) as z, z.open(z.namelist()[0]) as input:
                csv_rows = csv.reader(input)
                header = csv_rows.next()
                check, converters = get_aux_stuff(header)
                for csv_row in csv_rows:
                    if check(csv_row):
                        row = {
                            converter[0]: converter[1](value)
                            for converter, value in zip(converters, csv_row)
                            if allow_field(converter)
                        }
                        ts = row['ts']
                        lng, lat = row['loc']
                        found_tz_entry = timezones.find_one(SON({'loc': {'$within': {'$box': [[lng - tz_lookup_radius, lat - tz_lookup_radius], [lng + tz_lookup_radius, lat + tz_lookup_radius]]}}}))
                        if found_tz_entry:
                            tz_name = found_tz_entry['tz']
                            local_ts = ts.astimezone(timezone(tz_name)).replace(tzinfo=None)
                            row['tz'] = tz_name
                        else:
                            local_ts = (ts.astimezone(utc) + timedelta(hours=int(lng / 15))).replace(tzinfo=None)
                        row['local_ts'] = local_ts
                        yield row

        def insert_documents(collection, source, batch_size):
            while True:
                items = list(itertools.islice(source, batch_size))
                if len(items) == 0:
                    break
                try:
                    collection.insert(items)
                except:
                    for item in items:
                        try:
                            collection.insert(item)
                        except Exception as exc:
                            print("Failed to insert record {0} - {1}".format(item['_id'], exc))

        def main(zip_path):
            with Connection() as connection:
                data = connection.mydb.data
                timezones = connection.timezones.data
                insert_documents(data, read_csv_zip(zip_path, timezones), 1000)

    The code proceeds as follows: every record read from the csv is checked and converted to a dictionary, where some fields may be skipped, some titles renamed (from those appearing in the csv header), and some values converted (to datetime, to integers, to floats, etc.). For each record read from the csv, a lookup is made into the timezones collection to map the record location to the respective time zone. If the mapping is successful, that timezone is used to convert the record timestamp (Pacific Time) to the respective local timestamp. If no mapping is found, a rough approximation is calculated. The timezones collection is appropriately indexed, of course - calling explain() confirms it. The process is slow. Naturally, having to query the timezones collection for every record kills the performance. I am looking for advice on how to improve it. Thanks.
    EDIT The timezones collection contains 8176040 records, each containing four values:

        > db.data.findOne()
        { "_id" : 3038814, "loc" : [ 1.48333, 42.5 ], "tz" : "Europe/Andorra" }

    EDIT2 OK, I have compiled a release build of http://toblerity.github.com/rtree/ and configured the rtree package. Then I created an rtree dat/idx pair of files corresponding to my timezones collection, so instead of calling collection.find_one I call index.intersection. Surprisingly, not only is there no improvement, it actually works more slowly now! Maybe rtree could be fine-tuned to load the entire dat/idx pair into RAM (704M), but I do not know how to do it. Until then, it is not an alternative. In general, I think the solution should involve parallelization of the task.
    EDIT3 Profile output when using collection.find_one:

        >>> p.sort_stats('cumulative').print_stats(10)
        Tue Apr 10 14:28:39 2012    ImportDataIntoMongo.profile

        64549590 function calls (64549180 primitive calls) in 1231.257 seconds

        Ordered by: cumulative time
        List reduced from 730 to 10 due to restriction <10>

        ncalls  tottime  percall  cumtime  percall  filename:lineno(function)
             1    0.012    0.012 1231.257 1231.257  ImportDataIntoMongo.py:1(<module>)
             1    0.001    0.001 1230.959 1230.959  ImportDataIntoMongo.py:187(main)
             1  853.558  853.558  853.558  853.558  {raw_input}
             1    0.598    0.598  370.510  370.510  ImportDataIntoMongo.py:165(insert_documents)
        343407    9.965    0.000  359.034    0.001  ImportDataIntoMongo.py:137(read_csv_zip)
        343408    2.927    0.000  287.035    0.001  c:\python27\lib\site-packages\pymongo\collection.py:489(find_one)
        343408    1.842    0.000  274.803    0.001  c:\python27\lib\site-packages\pymongo\cursor.py:699(next)
        343408    2.542    0.000  271.212    0.001  c:\python27\lib\site-packages\pymongo\cursor.py:644(_refresh)
        343408    4.512    0.000  253.673    0.001  c:\python27\lib\site-packages\pymongo\cursor.py:605(__send_message)
        343408    0.971    0.000  242.078    0.001  c:\python27\lib\site-packages\pymongo\connection.py:871(_send_message_with_response)

    Profile output when using index.intersection:

        >>> p.sort_stats('cumulative').print_stats(10)
        Wed Apr 11 16:21:31 2012    ImportDataIntoMongo.profile

        41542960 function calls (41542536 primitive calls) in 2889.164 seconds

        Ordered by: cumulative time
        List reduced from 778 to 10 due to restriction <10>

        ncalls  tottime  percall  cumtime  percall  filename:lineno(function)
             1    0.028    0.028 2889.164 2889.164  ImportDataIntoMongo.py:1(<module>)
             1    0.017    0.017 2888.679 2888.679  ImportDataIntoMongo.py:202(main)
             1 2365.526 2365.526 2365.526 2365.526  {raw_input}
             1    0.766    0.766  502.817  502.817  ImportDataIntoMongo.py:180(insert_documents)
        343407    9.147    0.000  491.433    0.001  ImportDataIntoMongo.py:152(read_csv_zip)
        343406    0.571    0.000  391.394    0.001  c:\python27\lib\site-packages\rtree-0.7.0-py2.7.egg\rtree\index.py:384(intersection)
        343406  379.957    0.001  390.824    0.001  c:\python27\lib\site-packages\rtree-0.7.0-py2.7.egg\rtree\index.py:435(_intersection_obj)
        686513   22.616    0.000   38.705    0.000  c:\python27\lib\site-packages\rtree-0.7.0-py2.7.egg\rtree\index.py:451(_get_objects)
        343406    6.134    0.000   33.326    0.000  ImportDataIntoMongo.py:162(<dictcomp>)
           346    0.396    0.001   30.665    0.089  c:\python27\lib\site-packages\pymongo\collection.py:240(insert)

    EDIT4 I have parallelized the code, but the results are still not very encouraging. I am convinced it could be done better. See my own answer to this question for details.
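    One direction worth trying, sketched below with several stated assumptions (the grid cell size, the nearest-point tie-break, and that the roughly 8M (lng, lat, tz) tuples fit in the importer's RAM), is to pull the timezone points into an in-process spatial grid once, so the per-record lookup becomes a dictionary access instead of a MongoDB round-trip:

        from collections import defaultdict

        def build_tz_grid(timezones, cell=1.0):
            # One-time pass over the timezones collection: bucket every point by a
            # coarse (lng, lat) cell so later lookups never leave the process.
            grid = defaultdict(list)
            for entry in timezones.find({}, {'loc': 1, 'tz': 1}):
                lng, lat = entry['loc']
                grid[(int(lng // cell), int(lat // cell))].append((lng, lat, entry['tz']))
            return grid

        def lookup_tz(grid, lng, lat, cell=1.0):
            candidates = grid.get((int(lng // cell), int(lat // cell)), [])
            if not candidates:
                return None   # fall back to the rough longitude/15 approximation
            # Nearest candidate within the cell - a first-pass stand-in for the $box query.
            return min(candidates, key=lambda c: (c[0] - lng) ** 2 + (c[1] - lat) ** 2)[2]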

    Read the article

  • Script Speed vs Memory Usage

    - by Doug Neiner
    I am working on an image-generation script in PHP and have gotten it working two ways. One way is slow but uses a limited amount of memory; the second is much faster, but uses 6x the memory. There is no leakage in either script (as far as I can tell). In a limited benchmark, here is how they performed:

        --------------------------------------------
        METHOD | TOTAL TIME | PEAK MEMORY | IMAGES
        --------------------------------------------
        One    |     65.626 |     540,036 |    200
        Two    |     20.207 |   3,269,600 |    200
        --------------------------------------------

    And here is the average of the previous numbers (if you don't want to do your own math):

        --------------------------------------------
        METHOD | TOTAL TIME | PEAK MEMORY | IMAGES
        --------------------------------------------
        One    |      0.328 |     540,036 |      1
        Two    |      0.101 |   3,269,600 |      1
        --------------------------------------------

    Which method should I use and why? I anticipate this being used by a high volume of users, with each user making 10-20 requests to this script during a normal visit. I am leaning toward the faster method because, though it uses more memory, it runs in a third of the time and would reduce the number of concurrent requests.

    Read the article

  • Saving contents of ApplicationState in ASP.Net (MVC)

    - by Saqib
    I have an internal app used to edit XML files on disk. The XML files are loaded into an object model which is stored in ApplicationState, and I need to save this data. One option is to do this every time the user changes some data; however, this seems a bit inefficient - writing the data out to disk each time a change is made. Instead, is it possible to be notified whenever the user closes their browser, and just before the web server exits? That way the data would be saved each time a session ends, plus when the computer shuts down, etc. I thought that Application_End(), Application_Error() and Session_End() in Global.asax would provide this, but these methods don't seem to be called.

    Read the article

  • What tools do you use when generating markup from mockups?

    - by Paul
    I try to spend most of my time on the app-server side of things, but from time to time I need to get my hands dirty and generate markup/CSS/JS from a wireframe or mockup. As far as tools go, I've found Browsershots and the Litmus app helpful, and of course VMs as well for checking things out live in IE 6/7/8. Otherwise I do the heavy lifting in vim. For generating new markup that's not tied to a target design, I think some of the CSS frameworks and tools like Sass look useful, but I'm skeptical of their utility when I need to generate markup to match a Photoshop design. So what tips/tools do you keep in your markup-generating utility belt when building solid markup from designs? My list so far: Browsershots, Browserlab.adobe.com, Haml / Less / Sass (not used yet, but I will probably explore them).

    Read the article

  • Does the number of busy worker threads in the CLR ThreadPool affect performance of I/O threads?

    - by andrej351
    We have a Windows Service which hosts a number of WCF services and, in an unrelated part of the app, makes extensive use of the TPL Task class to asynchronously do relatively short bits of work. It is my understanding that WCF uses managed I/O threads from the ThreadPool to execute requests. I noticed that after deploying a feature which significantly raised the application's use of Tasks (and, as such, its use of ThreadPool worker threads), the performance of a couple of web services became very slow - we're talking minutes instead of less than a second. The number of Tasks actually trying to run at any one time can range between 20 and 1000, which makes me think that any new (last-in) work needing some CPU time could be forced to wait for quite some time. Does the (in my case extremely large) number of busy ThreadPool worker threads affect the ThreadPool's managed I/O threads? Or could these two be connected in any way? Thanks!

    Read the article

  • Directory file size calculation - how to make it faster?

    - by Xinxua
    Using C#, I am finding the total size of a directory. The logic is this: get the files inside the folder, sum up their sizes, check whether there are subdirectories, and then do a recursive search. I also tried another way to do this: using FSO (obj.GetFolder(path).Size). There's not much difference in time between these two approaches. Now the problem is, I have tens of thousands of files in a particular folder, and it takes at least 2 minutes to find the folder size. Also, if I run the program again, it happens very quickly (5 seconds); I think Windows is caching the file sizes. Is there any way I can bring down the time taken when I run the program for the first time?
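    The question is about C#, but purely for illustration, here is the same single-pass idea sketched in Python: os.scandir (3.5+) exposes size information gathered while listing the directory, so on Windows most files do not need a separate stat round-trip, which is usually where the first, uncached run loses its time:

        import os

        def tree_size(root):
            total = 0
            for entry in os.scandir(root):
                if entry.is_dir(follow_symlinks=False):
                    total += tree_size(entry.path)        # recurse into subdirectories
                elif entry.is_file(follow_symlinks=False):
                    total += entry.stat(follow_symlinks=False).st_size
            return total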

    Read the article

  • Database indexes and their Big-O notation

    - by miket2e
    I'm trying to understand the performance of database indexes in terms of Big-O notation. Without knowing much about it, I would guess that:

        Querying on a primary key or unique index will give you O(1) lookup time.
        Querying on a non-unique index will also give O(1) time, although maybe the '1' is slower than for the unique index (?).
        Querying on a column without an index will give O(N) lookup time (a full table scan).

    Is this generally correct? Will querying on a primary key ever give worse performance than O(1)? My specific concern is SQLite, but I'd also be interested in knowing to what extent this varies between different databases.

    Read the article

  • Data Structure for a particular problem??

    - by AGeek
    Hi, which data structure can perform insertion, deletion, and searching in O(1) worst-case time? We may assume the elements are integers drawn from a finite set 1, 2, ..., n, and that initialization can take O(n) time. I can only think of implementing a hash table; implementing it with trees will not give O(1) time complexity for any of the operations - or is that possible? Kindly share your views on this, or on any other data structure apart from these. Thanks.
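    For keys known to come from 1, 2, ..., n, one classic structure with exactly these bounds is a direct-address table: an array of n flags, O(n) to initialize, then O(1) worst-case insert, delete and membership test. A minimal sketch:

        class DirectAddressSet:
            def __init__(self, n):
                # O(n) initialization, as the problem statement allows.
                self.present = [False] * (n + 1)

            def insert(self, x):
                self.present[x] = True      # O(1) worst case

            def delete(self, x):
                self.present[x] = False     # O(1) worst case

            def search(self, x):
                return self.present[x]      # O(1) worst case

    (A plain hash table only gives O(1) in expectation, not in the worst case, which is why the finite 1..n universe matters here.)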

    Read the article

  • How could I evaluate this in code?

    - by WM
    There is a medieval puzzle about an old woman and a basket of eggs. On her way to market, a horseman knocks down the old woman and all the eggs are broken. The horseman will pay for the eggs, but the woman does not remember the exact number she had, only that when she took the eggs two at a time, there was one left over; similarly, there was one left over when she took them three or five at a time. When she took them seven at a time, however, none were left. Write an application that can determine the smallest number of eggs the woman could have had. The answer must be a multiple of seven, because no eggs are left over when they are taken seven at a time. But I have a problem - checking 49:

        49 eggs - 1 = 48 = 2 * 24
        49 eggs - 1 = 48 = 3 * 16
        49 eggs - 4 = 45 = 5 * 9
        49 eggs - 0 = 49 = 7 * 7

    so 49 leaves a remainder of 4, not 1, when the eggs are taken five at a time.
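    A brute-force sketch that only walks the multiples of seven and checks the three remainder conditions (this finds 301, which does satisfy all four clues):

        def smallest_eggs():
            n = 7
            while True:
                if n % 2 == 1 and n % 3 == 1 and n % 5 == 1:
                    return n        # n % 7 == 0 holds by construction
                n += 7

        print(smallest_eggs())   # 301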

    Read the article

  • XNA or C# Pop-up progress bar for the LoadContent() method

    - by Warlax
    Hey people, we wrote a small game using Microsoft's XNA Game Studio 3.1. LoadContent() takes a long time because, besides loading models and config files, we're also running some one-time (per run) terrain analysis. We are not C# or XNA programmers - we're Java programmers - and we want to be able to give the user some feedback that the system is loading. Preferably, this will be a simple pop-up with a progress bar that says something like "loading, please wait". The progress bar doesn't have to be a 0-to-1 progress bar; it can instead be one of those 'back and forth' progress bars. I was hoping for some quick copy-paste-ready code to do just that, as it is not a central piece of our project, nor do we need to delve into too much documentation. I appreciate your time, effort, and possible contribution. Thanks.

    Read the article

  • asynchronous method executing

    - by alexeyndru
    I have a delegate method with the following tasks: get something from the internet (e.g. an image from a web site); process that image in a certain way; and display the result in a subview. Getting the image takes some time, depending on the network's speed, so the processed result is displayed in the subview a little while later. My problem: during the time between fetching the image and showing the result, the device looks unresponsive. Any attempt to show a spinner, or any other method called inside this main procedure, has no effect until the result is processed. How should I change this behaviour? I would like to show a big spinner during that waiting time. Thank you.

    Read the article

  • Closing any open info windows in google maps api v3

    - by hhj
    As the title states, on a given event (for me this happens to be upon opening a new google.maps.InfoWindow), I want to be able to close any other currently open info windows. Right now I can open many at a time, but I want only one open at a time. I am creating the info windows dynamically (i.e. I don't know ahead of time how many will be generated), so in the click event of the current info window (which is where I want all the other open ones closed) I don't have a reference to any of the other open info windows on which to call close(). I am wondering how I can achieve this. I am not an experienced JavaScript programmer, so I don't know whether I need to use reflection or something similar here. Would the best way be just to save all the references in some sort of collection, then loop through the list closing them all? Thanks.

    Read the article
