Search Results

Search found 17727 results on 710 pages for 'large apps'. Showing page 124.


  • Best practice for handling memory leaks in large Java projects?

    - by knorv
    In almost all larger Java projects I've been involved with, I've noticed that the quality of service of the application degrades with the uptime of the container. This is most probably due to memory leaks in the code. The correct way to solve this problem is obviously to trace back to the root cause and fix the leaks in the code. The quick and dirty way is simply restarting Tomcat (or whichever servlet container you're using). These are my three questions:
    1) Assuming you choose to trace the root cause of the problem (the memory leaks), how would you collect data to zoom in on the problem?
    2) Assuming you choose the quick and dirty way of speeding things up by simply restarting the container, how would you collect data to choose the optimal restart cycle?
    3) Have you been able to deploy and run projects over an extended period of time without ever restarting the servlet container to regain snappiness? Or is an occasional container restart something one simply has to accept?
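
    One way to gather the data for both questions 1) and 2) - a minimal sketch, assuming a plain Java 8+ JVM (the class name and the 60-second interval are arbitrary): log heap usage against uptime from inside the container. A used-heap floor that keeps climbing after full GCs is the classic leak signature, and the same curve shows how long the container takes to approach its ceiling, i.e. your restart cycle.

      import java.lang.management.ManagementFactory;
      import java.lang.management.MemoryMXBean;
      import java.lang.management.MemoryUsage;
      import java.util.concurrent.Executors;
      import java.util.concurrent.ScheduledExecutorService;
      import java.util.concurrent.TimeUnit;

      public class HeapLogger {
          public static void main(String[] args) {
              MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
              ScheduledExecutorService scheduler =
                      Executors.newSingleThreadScheduledExecutor();
              // one log line per minute; feed the output to any plotting tool
              scheduler.scheduleAtFixedRate(() -> {
                  MemoryUsage heap = memory.getHeapMemoryUsage();
                  System.out.printf("uptime=%dms used=%d committed=%d max=%d%n",
                          ManagementFactory.getRuntimeMXBean().getUptime(),
                          heap.getUsed(), heap.getCommitted(), heap.getMax());
              }, 0, 60, TimeUnit.SECONDS);
          }
      }

    From there, a heap dump (e.g. via -XX:+HeapDumpOnOutOfMemoryError) narrows down which objects are accumulating.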

    Read the article

  • How can I get an iterable result set from the database using PDO, instead of a large array?

    - by Tchalvak
    I'm using PDO inside a database abstraction library function, query. I'm using fetchAll(), which can get memory intensive if you have a lot of results, so I want to provide an argument to toggle between a fetchAll associative array and a PDO result set that can be iterated over with foreach and requires less memory (somehow). I remember hearing about this, and I searched the PDO docs, but I couldn't find any useful way to do it. Does anyone know how to get an iterable result set back from PDO instead of just a flat array? And am I right that using an iterable result set will be easier on memory? I'm using PostgreSQL, if it matters in this case. The current query function is as follows, just for clarity:

      /**
       * Running bound queries on the database.
       *
       * Use: query('select all from players limit :count', array('count'=>10));
       * Or:  query('select all from players limit :count', array('count'=>array(10, PDO::PARAM_INT)));
       **/
      function query($sql_query, $bindings = array()) {
          DatabaseConnection::getInstance();
          $statement = DatabaseConnection::$pdo->prepare($sql_query);
          foreach ($bindings as $binding => $value) {
              if (is_array($value)) {
                  $statement->bindParam($binding, $value[0], $value[1]);
              } else {
                  $statement->bindValue($binding, $value);
              }
          }
          $statement->execute();
          // TODO: Return an iterable resultset here, and allow switching
          // between array and iterable resultset.
          return $statement->fetchAll(PDO::FETCH_ASSOC);
      }
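
    What appears to work - a minimal sketch against the same function (whether it actually saves memory depends on the driver's buffering, so measure it): PDOStatement implements Traversable, so you can hand the statement itself back and let the caller foreach over it, pulling one row at a time.

      function query($sql_query, $bindings = array(), $fetch_all = true) {
          // ... prepare, bind and execute exactly as above ...
          if ($fetch_all) {
              return $statement->fetchAll(PDO::FETCH_ASSOC);
          }
          $statement->setFetchMode(PDO::FETCH_ASSOC);
          return $statement;
      }

      // Caller iterates without materializing the whole result:
      foreach (query('select name from players', array(), false) as $row) {
          // each $row is fetched on demand
      }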

    Read the article

  • Why use hashing to create pathnames for large collections of files?

    - by Stephen
    Hi, I noticed a number of cases where an application or database stored collections of files/blobs using a hash to determine the path and filename. I believe the intended outcome is a situation where the path never gets too deep and the folders never get too full - too many files (or folders) in a folder making for slower access. EDIT: Examples are often digital libraries or repositories, though the simplest example I can think of (one that can be installed in about 30s) is the Zotero document/citation database. Why do this? EDIT: Thanks Mat for the answer - does this technique of using a hash to create a file path have a name? Is it a pattern? I'd like to read more, but have failed to find anything in the ACM Digital Library.
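
    For concreteness, a minimal sketch of the technique being asked about (the SHA-1 choice and the two-level split are illustrative assumptions, not from any particular system):

      import hashlib

      def blob_path(key: str) -> str:
          """Derive a shallow, evenly spread storage path from a key."""
          digest = hashlib.sha1(key.encode("utf-8")).hexdigest()
          # two 2-hex-digit levels = 65,536 leaf directories, so no single
          # folder ever grows large and the tree is never more than 3 deep
          return "{}/{}/{}".format(digest[:2], digest[2:4], digest)

      # blob_path("citation-42") -> something like 'ab/cd/abcd12...'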

    Read the article

  • Parse large XML file w/ script or use BioPython API?

    - by jeremy04
    Hey guys, this is my first question on here. I'm trying to make a local copy of the UniProtKB in SQL. The UniProtKB is 2.1 GB, and it comes in XML and a special text format used by SwissProt. Here are my options:

    1) Use a SAX parser (XML) - I chose Ruby and Nokogiri. I started writing the parser, but my initial reaction: how would I map the XML schema to the SAX parser?

    2) BioPython - I already have BioSQL/Biopython installed, which literally created my SQL schema for me, and I was able to successfully insert one SwissProt/UniProt txt file into the database. I'm running it right now (crosses fingers) on the entire 2.1 GB. Here is the code I'm running:

      from Bio import SeqIO
      from BioSQL import BioSeqDatabase
      from Bio import SwissProt

      server = BioSeqDatabase.open_database(driver = "MySQLdb", user = "root",
                                            passwd = "", host = "localhost",
                                            db = "bioseqdb")
      db = server["uniprot"]
      iterator = SeqIO.parse(open("/path/to/uniprot_sprot.dat", "r"), "swiss")
      db.load(iterator)
      server.commit()

    Edit: it's now crashing because the transactions are getting locked (since the tables are InnoDB):

      Error Number: 1205 Lock wait timeout exceeded; try restarting transaction.

    I'm using MySQL version 5.1.43. Should I switch my database to PostgreSQL?
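
    One thing worth trying before switching databases - a minimal sketch built on the same BioSQL objects (the chunk size of 1000 is an arbitrary assumption): commit in batches so no single transaction grows big enough to trip InnoDB's lock wait timeout.

      from itertools import islice

      def load_in_chunks(db, server, records, chunk_size=1000):
          """Load records in batches, committing between batches."""
          while True:
              chunk = list(islice(records, chunk_size))
              if not chunk:
                  break
              db.load(iter(chunk))  # BioSQL's load takes any record iterator
              server.commit()

      # usage: load_in_chunks(db, server, SeqIO.parse(handle, "swiss"))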

    Read the article

  • C# slowdown while creating a bitmap - calculating distances from a large List of places for each pixel

    - by user576849
    I'm creating a graphic of the glow of lights above a geographic location based upon Walker's Law: Skyglow = 0.01 * Population * DistanceFromCenter^-2.5. I have a CSV file of places with 66,000 records using 5 fields (id, name, population, latitude, longitude), parsed in the FormLoad event and stored in a List<string[]> named placeDataList.

    Then I set up nested loops to fill in a bitmap using SetPixel. For each pixel on the bitmap, which represents a coordinate on a map (latitude and longitude), the program loops through placeDataList, calculating the distance from that coordinate (pixel) to each place record. The distance (along with population) is used in a calculation to find how much cumulative skyglow is contributed to the coordinate from each place record. So, for every pixel, 66,000 distance calculations must be made.

    The problem is, this is predictably EXTREMELY slow - on the order of one line of pixels per 30 seconds or so on a 320-pixel-wide image. This is unrelated to SetPixel, which I know is also slow, because the speed is similarly slow when adding the distance calculation results to an array.

    I don't actually need to test all 66,000 records for every pixel, only the records within 150 miles (i.e. no skyglow is contributed to a coordinate from a small town 3000 miles away). But to find which records are within 150 miles of my coordinate I would still need to loop through all the records for each pixel. I can't use a smaller number of records, because all 66,000 places contribute to skyglow for SOME coordinate in my map as it loops. This seems like a Catch-22, so I know there must be a better method out there. Like I mentioned, the slowdown is related to how many calculations I'm making per pixel, not anything to do with the bitmap. Any suggestions?

      private void fillPixels(int width)
      {
          Color pixelColor;
          int pixel_w = width;
          int pixel_h = (int)Math.Floor((width * 0.424088664));
          Bitmap bmp = new Bitmap(pixel_w, pixel_h);
          for (int i = 0; i < pixel_h; i++)
              for (int j = 0; j < pixel_w; j++)
              {
                  pixelColor = getPixelColor(i, j);
                  bmp.SetPixel(j, i, pixelColor);
              }
          bmp.Save("Nightfall", System.Drawing.Imaging.ImageFormat.Jpeg);
          pictureBox1.Image = bmp;
          MessageBox.Show("Done");
      }

      private Color getPixelColor(int height, int width)
      {
          int c;
          double glow, d, cityLat, cityLon, cityPop;
          double testLat, testLon;
          int size_h = (int)Math.Floor((size_w * 0.424088664));
          testLat = (height * (24.443136 / size_h)) + 24.548874;
          testLon = (width * (57.636853 / size_w)) - 124.640767;
          glow = 0;
          for (int i = 0; i < placeDataList.Count; i++)
          {
              cityPop = Convert.ToDouble(placeDataList[i][2]);
              cityLat = Convert.ToDouble(placeDataList[i][3]);
              cityLon = Convert.ToDouble(placeDataList[i][4]);
              d = distance(testLat, testLon, cityLat, cityLon, "M");
              if (d < 150)
                  glow = glow + (0.01 * cityPop * Math.Pow(d, -2.5));
          }
          if (glow >= 1)
              glow = 1;
          c = (int)Math.Ceiling(glow * 255);
          return Color.FromArgb(c, c, c);
      }
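
    A common way out of the Catch-22 - a sketch of a spatial grid index, not drop-in code (the 3-degree cell size is an assumption; choose it so one ring of neighboring cells still spans 150 miles in longitude at your highest latitude): bucket the 66,000 places once, then have getPixelColor scan only the 3x3 block of cells around the pixel's coordinate instead of the whole list.

      // assumes the usual System and System.Collections.Generic usings and
      // lives in the same form class as placeDataList
      const double CellSize = 3.0;
      Dictionary<long, List<string[]>> grid = new Dictionary<long, List<string[]>>();

      long CellKey(double lat, double lon)
      {
          long row = (long)Math.Floor(lat / CellSize);
          long col = (long)Math.Floor(lon / CellSize);
          return row * 100000 + col;    // pack the pair into a single key
      }

      void BuildGrid()    // call once, after parsing the CSV
      {
          foreach (string[] place in placeDataList)
          {
              long key = CellKey(Convert.ToDouble(place[3]),
                                 Convert.ToDouble(place[4]));
              if (!grid.ContainsKey(key)) grid[key] = new List<string[]>();
              grid[key].Add(place);
          }
      }

      IEnumerable<string[]> NearbyPlaces(double lat, double lon)
      {
          for (int dr = -1; dr <= 1; dr++)
              for (int dc = -1; dc <= 1; dc++)
              {
                  long key = CellKey(lat + dr * CellSize, lon + dc * CellSize);
                  List<string[]> bucket;
                  if (grid.TryGetValue(key, out bucket))
                      foreach (string[] place in bucket) yield return place;
              }
      }

    Each pixel then typically touches a few hundred records instead of all 66,000, which is where the per-line cost is going.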

    Read the article

  • Why is it a bad idea to use ClientLogin for web apps in the Google API?

    - by Onema
    I just picked up the Google API today to allow some users of our site to upload videos to our own organization's YouTube account. I don't want our users to know our username and password, but rather give them the option of whether to upload videos to YouTube or not. If they choose to do it, they check a checkbox and hit the submit button. I keep seeing over and over in the developer's guide that ClientLogin, which to me looks like the best option to implement what I want to do, is not a good idea for user authentication in web applications. The "AuthSub for web applications" doesn't seem to be the best mechanism for what I want to implement! Any ideas on what to do? Thank you

    Read the article

  • Why does File::Find finish short of completely traversing a large directory?

    - by Stan
    A directory exists with a total of 2,153,425 items (according to Windows folder Properties). It contains .jpg and .gif image files located within a few subdirectories. The task was to move the images into a different location while querying each file's name to retrieve some relevant info and store it elsewhere. The script that used File::Find finished at 20462 files. Out of curiosity I wrote a tiny recursive function to count the items which returned a count of 1,734,802. I suppose the difference can be accounted for by the fact that it didn't count folders, only files that passed the -f test. The problem itself can be solved differently by querying for file names first instead of traversing the directory. I'm just wondering what could've caused File::Find to finish at a small fraction of all files. The data is stored on an NTFS file system.
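
    For comparison, a minimal File::Find counter - a sketch, with D:/images as a hypothetical root (with warnings enabled, File::Find reports directories it cannot open, which is one way a traversal quietly ends early on NTFS: permissions, junction points, over-long paths):

      use strict;
      use warnings;    # lets File::Find's "Can't opendir" warnings surface
      use File::Find;

      my ($files, $dirs) = (0, 0);
      find(sub {
          if    (-f $_) { $files++ }
          elsif (-d _)  { $dirs++ }    # reuse the stat from the -f test
      }, 'D:/images');
      print "files=$files dirs=$dirs\n";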

    Read the article

  • What's the most scalable way to handle somewhat large file uploads in a Python webapp?

    - by Jason Baker
    We have a web application that takes file uploads for some parts. The file uploads aren't terribly big (mostly Word documents and such), but they're much larger than your typical web request and they tend to tie up our threaded servers (Zope 2 servers running behind an Apache proxy). I'm mostly in the brainstorming phase right now and trying to figure out a general technique to use. Some ideas I have are:
    1) Using a Python asynchronous server like Tornado or diesel or gunicorn.
    2) Writing something in Twisted to handle it.
    3) Just using nginx to handle the actual file uploads.
    It's surprisingly difficult to find information on which approach I should be taking. I'm sure there are plenty of details that would be needed to make an actual decision, but I'm more worried about figuring out how to make this decision than anything else. Can anyone give me some advice about how to proceed with this?

    Read the article

  • Is it possible to parse a web page from the client side for a large number of words and, if so, how?

    - by Technoh
    I have a list of keywords, about 25,000 of them. I would like people who add a certain <script> tag on their web page to have these keywords transformed into links. What would be the best way to go about achieving this? I have tried the simple JavaScript approach (an array with lots of elements and regexping/replacing each) and it obviously slows down the browser. I could always process the content server-side if there were a way, from the client, to send the page's content to a cross-domain server script (I'm partial to PHP, but it could be anything), but I don't know of any way to do this. Any other working solution is also welcome.
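
    One client-side variant worth measuring first - a sketch that assumes the keywords are already regex-escaped and that rewriting innerHTML is acceptable for a test (it can match inside tags and drops event handlers; a production version would walk text nodes instead): compile all 25,000 keywords into a single alternation so the page is scanned once rather than 25,000 times.

      // keywords: the 25,000-entry array, longest first so longer phrases win
      keywords.sort(function (a, b) { return b.length - a.length; });
      var pattern = new RegExp('\\b(' + keywords.join('|') + ')\\b', 'gi');

      document.body.innerHTML = document.body.innerHTML.replace(
          pattern,
          function (match) {
              return '<a href="/kw/' + encodeURIComponent(match) + '">' +
                     match + '</a>';
          }
      );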

    Read the article

  • Will .NET 4.0 apps work on Win 2008 R2 Server Core?

    - by markus
    When Windows Server 2008 R2 was launched, the "Server Core" edition started to become useful to me, because it lets me deploy .NET background applications isolated on their own virtual machine instance, with only a small fraction of the disk space overhead of a default Windows Server installation and very few Windows Updates. It comes with a subset of .NET 3.5 SP1 integrated (as an optional feature). Now that .NET 4.0 is released, the redistributables explicitly state that it's not supported on Server Core. Is there any chance that a separate download (e.g. without WPF) will become available for Server Core any time soon? Has anybody heard about it?

    Read the article

  • Shared WCF client code between .NET and Silverlight apps?

    - by Eduardo Scoz
    I'm developing a .NET application that will have both a WinForms and a Silverlight client. Although the majority of the code will be in the server, I'll need quite a bit of logic in the clients as well, and I would like to keep the client library code the same. From what I've figured out so far, I need two different project types, a class library and a Silverlight class library, and have to link the files from one project to the other. This seems kind of lame, but it works for simple code. My problem, though, is that the code generated by svcutil.exe to access WCF services is different from the code generated by slsvcutil.exe, and the Silverlight code is actually incompatible with the .NET one: I get a bunch of problems with the System.ServiceModel.Channels classes when I try to import the class into .NET. Has anybody done anything similar to this before? What am I doing wrong?

    Read the article

  • What are CAD apps written in, and how are they organized?

    - by ldigas
    What are CAD applications (Rhino, AutoCAD) of today written in, and how are they organized internally? I gave AutoCAD and Rhino as examples, although I would love to hear of others as well. I'm particularly interested in knowing what their backends are written in (multiple languages?) and how they are organized, and how they handle their frontend (GUI) in real time. Do they use native Windows APIs or libraries of their own? I imagine that, as good as they may be, the open source solutions on today's market won't cut it. I may be wrong... As most of you who have used them know, they handle, amongst other things, relatively complex rotational operations in real time (shading is not what interests me). I've been doing some experiments with several packages recently, and for some larger models found that there is considerable difference in speed in, for example, programmed rotation (big full ship models) amongst some of them (which I won't name). So I'm wondering about their internals... Also, if someone knows of some book on the subject, I'd be interested to hear of it.

    Read the article

  • Large scale Merge Replication strategy - what can go wrong?

    - by niidto
    Hi, I'm developing a piece of software that uses merge replication and SQL Compact on Windows Mobile 6. At the moment it is running on 5 devices reasonably well. The issues I've come up against are as follows:
    1) The schema has had to change a lot, and will continue to change as the application evolves. There have been various errors replicating these schema changes down to the device, with uploads failing due to schema inconsistencies.
    2) Subscriptions expiring (after 14 days) and being unable to reinitialize with upload - i.e., potential loss of any data not yet synced up to that point.
    Basically, the worst case scenario is data loss, and when merge replication fails, there seems to be no way back to get the data off. My method until now has been to drop and recreate the subscription on the device. I don't hear of many people doing this, though it seems to solve everything. The long term plan is to roll this out to 500+ devices. Any advice from people who have undertaken similar projects on how to minimise data loss, and on error handling code that can recover from sync failures, would be much appreciated. James

    Read the article

  • Is there a performance gain from defining routes in app.yaml versus one large mapping in a WSGIApplication?

    - by jgeewax
    Scenario 1: This involves using one "gateway" route in app.yaml and then choosing the RequestHandler in the WSGIApplication.

      app.yaml:

        - url: /.*
          script: main.py

      main.py:

        import wsgiref.handlers
        from google.appengine.ext import webapp

        class Page1(webapp.RequestHandler):
            def get(self):
                self.response.out.write("Page 1")

        class Page2(webapp.RequestHandler):
            def get(self):
                self.response.out.write("Page 2")

        application = webapp.WSGIApplication([
            ('/page1/', Page1),
            ('/page2/', Page2),
        ], debug=True)

        def main():
            wsgiref.handlers.CGIHandler().run(application)

        if __name__ == '__main__':
            main()

    Scenario 2: This involves defining two routes in app.yaml and then two separate scripts (page1.py and page2.py).

      app.yaml:

        - url: /page1/
          script: page1.py
        - url: /page2/
          script: page2.py

      page1.py:

        import wsgiref.handlers
        from google.appengine.ext import webapp

        class Page1(webapp.RequestHandler):
            def get(self):
                self.response.out.write("Page 1")

        application = webapp.WSGIApplication([
            ('/page1/', Page1),
        ], debug=True)

        def main():
            wsgiref.handlers.CGIHandler().run(application)

        if __name__ == '__main__':
            main()

      page2.py:

        import wsgiref.handlers
        from google.appengine.ext import webapp

        class Page2(webapp.RequestHandler):
            def get(self):
                self.response.out.write("Page 2")

        application = webapp.WSGIApplication([
            ('/page2/', Page2),
        ], debug=True)

        def main():
            wsgiref.handlers.CGIHandler().run(application)

        if __name__ == '__main__':
            main()

    Question: What are the benefits and drawbacks of each pattern? Is one much faster than the other?

    Read the article

  • Parsing large txt files in Ruby taking a lot of time?

    - by hershey92
    Below is the code to download a txt file from the internet (approx 9,000 lines) and populate the database. I have tried a lot, but it takes a long time - more than 7 minutes. I am using Win 7 64-bit and Ruby 1.9.3. Is there a way to do it faster?

      require 'open-uri'
      require 'dbi'

      dbh = DBI.connect("DBI:Mysql:mfmodel:localhost", "root", "")
      #file = open('http://www.amfiindia.com/spages/NAV0.txt')
      file = File.open('test.txt', 'r')
      lines = file.lines
      2.times { lines.next }
      curSubType = ''
      curType = ''
      curCompName = ''
      lines.each do |line|
        line.strip!
        if line[-1] == ')'
          curType, curSubType = line.split('(')
          curSubType.chop!
        elsif line[-4..-1] == 'Fund'
          curCompName = line.split(" Mutual Fund")[0]
        elsif line == ''
          next
        else
          sCode, isin_div, isin_re, sName, nav, rePrice, salePrice, date = line.split(';')
          sCode = Integer(sCode)
          sth = dbh.prepare "call mfmodel.populate(?,?,?,?,?,?,?)"
          sth.execute curCompName, curSubType, curType, sCode, isin_div, isin_re, sName
        end
      end
      dbh.do "commit"
      dbh.disconnect
      file.close

    A line of the data to be inserted looks like this:

      106799;-;-;HDFC ARBITRAGE FUND RETAIL PLAN DIVIDEND OPTION;10.352;10.3;10.352;29-Jun-2012

    Now there are 8,000 such lines. How can I do an insert by combining all of that and calling the procedure just once? Also, does MySQL support arrays and iteration, so as to do such a thing inside the routine? Please give your suggestions. Thanks.
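
    A minimal first change to try - a sketch, assuming Ruby/DBI against MySQL as above: prepare the statement once and wrap all the inserts in a single transaction, instead of re-preparing inside the loop for each of the ~9,000 lines.

      sth = dbh.prepare("call mfmodel.populate(?,?,?,?,?,?,?)")
      dbh.do "start transaction"
      lines.each do |line|
        # ... same parsing as above ...
        sth.execute curCompName, curSubType, curType, sCode, isin_div, isin_re, sName
      end
      dbh.do "commit"
      sth.finish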

    Read the article

  • Setting up SVN (Subversion) to manage our company's files; how to exclude large files from being versioned?

    - by Roeland
    Me and two other guys recently started our own web development company. We each work from our homes and have decided we want to keep one central location for all of our files. These files include Word documents, spreadsheets, client files, designs, etc. - anything pertaining to our company. I have a pretty solid internet connection and a Windows 2008 server box sitting at home, so I set up a Subversion repository. Our file repository will look something like this:

      Clients
        Company A
          Design (Photoshop files, wireframes, concepts)
          Documents (logins, quotes, proposals, etc.)
          Site Backups
        Company B
          Design
          Documents
          Site Backups
      Prospects
        Company C
        Company D
      Our Company
        Our Website
        Documents (contracts, operating procedures)

    My question is in regards to design files. The Photoshop files that my designer works with range in size from 10 MB to 100 MB. I don't think we need to keep these files versioned, as this would eat up space incredibly fast. How do I go about controlling which files get versioned and which files are just stored? What I am thinking is that all documents need to be versioned, and any files other than that should not be. Any help would be appreciated, thanks!

    Edit: I am also curious whether this is the way to go. I just like this system since it keeps versions of all my documents. Also, essentially I will have 3 backups in 3 different locations (3 local copies), so there is no need for separate backups. I am unsure of how SVN would perform as purely a huge file repository.
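
    For the exclusion part, a minimal sketch assuming a standard command-line client (Subversion has no per-file "store but don't version" mode inside a working copy, so the usual approach is ignore patterns that keep the big binaries out of version control entirely):

      # per directory: never pick up Photoshop sources in a Design folder
      svn propset svn:ignore "*.psd" "Clients/Company A/Design"
      svn commit -m "Ignore Photoshop sources in Design" "Clients/Company A/Design"

      # or client-wide, in each user's Subversion config file
      # (~/.subversion/config or %APPDATA%\Subversion\config):
      #   [miscellany]
      #   global-ignores = *.psd *.ai *.indd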

    Read the article

  • How to store a hierarchical k-means tree for a large number of images, using OpenCV?

    - by AquaAsh
    I am trying to make a program that will find similar images in a dataset of images. The steps are:
    1) Extract SURF descriptors for all images.
    2) Store the descriptors.
    3) Apply KNN on the stored descriptors.
    4) Match the stored descriptors to the query image's descriptors using KNN.
    Now each image's SURF descriptors will be stored as a hierarchical k-means tree. Do I store each tree as a separate file, or is it possible to build some sort of single tree with all the images' descriptors, updated as images are added to the dataset? This is the paper I am basing the program on: www.ijest.info/docs/IJEST10-02-03-13.pdf
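
    A single shared tree is how FLANN (bundled with OpenCV) is typically used, and the built index can be saved to one file. A minimal sketch in Python - an assumption about the setup, using the OpenCV 2.4-era cv2.flann_Index binding (check what your build exposes); per_image_descriptors and the row-to-image mapping are hypothetical:

      import cv2
      import numpy as np

      FLANN_INDEX_KMEANS = 2    # hierarchical k-means tree

      # pool every image's SURF descriptors into one float32 matrix,
      # keeping a separate row -> image id mapping to resolve matches
      all_desc = np.vstack(per_image_descriptors).astype(np.float32)

      index = cv2.flann_Index(all_desc,
                              dict(algorithm=FLANN_INDEX_KMEANS, branching=32))
      index.save("surf_kmeans.idx")    # one tree for the whole dataset

      # query: the 2 nearest pooled descriptors for each query descriptor
      idx, dist = index.knnSearch(query_desc, 2, params={})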

    Read the article

  • How to create beautiful Screencasts for your web apps?

    - by Abs
    Hello all, I am trying to create a screencast for my new web app. I have just come across a great example of a screencast and I am wondering what was used to make it: click on the video to play on this page. I am impressed with the animation when the mouse is clicked and with the zooming into images from different angles. Is this done with ActionScript, or is there software that will make my life easier? Thanks all for any help

    Read the article

  • How can I communicate between Windows 8 and WP8 apps using ssl?

    - by Clay Shannon
    I'm considering using either raw notifications (WNS) or sockets for communication between a Windows 8 and a WP8 app. I've found some samples for using sockets, but my questions here are: does WP8 support sending/receiving messages over SSL, and if so, how is it done? Something I need to be true, or to find a workaround for, is that the Windows 8 app has a permanent IP address to which the phone app will send its updates. Typically, a tablet will be running the Windows 8 app, always listening for incoming messages; the phone app will periodically send messages.
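
    On the WP8 side, SSL sockets are available through the WinRT socket API that it shares with Windows 8. A minimal sketch (the address, port and payload are illustrative assumptions; the certificate presented by the listening end has to validate on the phone):

      using System.Threading.Tasks;
      using Windows.Networking;
      using Windows.Networking.Sockets;
      using Windows.Storage.Streams;

      class PhoneClient
      {
          public async Task SendStatusAsync()
          {
              using (var socket = new StreamSocket())
              {
                  // SocketProtectionLevel.Ssl upgrades the stream to SSL/TLS
                  await socket.ConnectAsync(new HostName("192.168.1.10"), "4554",
                                            SocketProtectionLevel.Ssl);
                  var writer = new DataWriter(socket.OutputStream);
                  writer.WriteString("status-update");
                  await writer.StoreAsync();
              }
          }
      }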

    Read the article

  • Does anyone know of a good free bulk upload tool for web apps?

    - by Ev
    I have a web application in which a user has to upload images to a gallery. At the moment they need to upload one image at a time so it's pretty tedious. I'd like to implement a system where they could potentially drag and drop files into the browser, or select a folder to upload. Any ideas? Thanks in advance! (By the way; it's a .Net App if it makes a difference, but I was thinking most of the work would be happening client side so shouldn't matter) -Ev

    Read the article

  • iOS Question: Is There a Framework for Building Time-Based Apps?

    - by dugla
    I have the need for some time-based effects in the iPad app I am building. The UIView class animation capability (beginAnimations/commitAnimations) is exactly the sort of thing I am looking for, but it is restricted to the specific properties of UIView deemed animatable. Ideally, I am looking for a solution that lets me drive a time-based function that can send messages to a class of my own choosing at a rate I specify in the animation. Specifically, I have a function - my implementation of the RenderMan function "smoothstep", which is essentially an ease-in ease-out curve common in animation. It takes [0 - 1] as input and outputs [0 - 1] as the curve is evaluated. I want to drive this function for a duration of my own choosing at a rate of my own choosing. Thanks in advance. -Doug
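
    One way to get exactly this without a third-party framework - a sketch assuming QuartzCore, where startTime, duration and the delegate callback are hypothetical members of the hosting class: CADisplayLink calls back once per display frame, and you evaluate smoothstep on the normalized elapsed time yourself.

      #import <QuartzCore/QuartzCore.h>

      - (void)startAnimationWithDuration:(CFTimeInterval)duration {
          self.startTime = CACurrentMediaTime();
          self.duration = duration;
          CADisplayLink *link =
              [CADisplayLink displayLinkWithTarget:self selector:@selector(tick:)];
          [link addToRunLoop:[NSRunLoop mainRunLoop] forMode:NSDefaultRunLoopMode];
      }

      - (void)tick:(CADisplayLink *)link {
          double t = (CACurrentMediaTime() - self.startTime) / self.duration;
          if (t > 1.0) t = 1.0;
          double eased = t * t * (3.0 - 2.0 * t);    // smoothstep on [0, 1]
          [self.delegate animationDidTick:eased];    // hypothetical callback
          if (t >= 1.0) [link invalidate];
      }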

    Read the article

  • Not sure what happens to my app's objects when using NSURLSession in the background - what state is my app in?

    - by Avner Barr
    More of a general question - I don't understand the workings of NSURLSession when using it in "background session mode". I will supply some simple contrived example code. I have a database which holds objects, such that portions of this data can be uploaded to a remote server. It is important to know which data/objects were uploaded in order to accurately display information to the user. It is also important to be able to upload to the server in a background task, because the app can be killed at any point.

    For instance, a simple profile picture object:

      @interface ProfilePicture : NSObject
      @property int userId;
      @property UIImage *profilePicture;
      // we want to know if the image was uploaded to our server - this could
      // also be a property that is queryable, but let's assume it is attached
      // to this object
      @property BOOL successfullyUploaded;
      @end

    Now let's say I want to upload the profile picture to a remote server - I could do something like:

      @implementation ProfilePictureUploader

      - (void)uploadProfilePicture:(ProfilePicture *)profilePicture
                        completion:(void (^)(BOOL successInUploading))completion {
          NSURLSession *uploadImageSession = ...; // code to set up the upload
                                                  // and call the completion handler
          [uploadImageSession resume];
      }

      @end

    Now somewhere else in my code I want to upload the profile picture and, if it was successful, update the UI and the database to reflect that:

      ProfilePicture *aNewProfilePicture = ...;
      aNewProfilePicture.profilePicture = aImage;
      aNewProfilePicture.userId = 123;
      aNewProfilePicture.successfullyUploaded = NO;

      // write the change to disk
      [MyDatabase write:aNewProfilePicture];

      // upload the image to the server
      ProfilePictureUploader *uploader = [ProfilePictureUploader ...];
      [uploader uploadProfilePicture:aNewProfilePicture
                          completion:^(BOOL successInUploading) {
          if (successInUploading) {
              // persist the change to my db
              aNewProfilePicture.successfullyUploaded = YES;
              [MyDatabase update:aNewProfilePicture];
          }
      }];

    Now obviously if my app keeps running, this ProfilePicture object is successfully uploaded and all is well - the database object has its own internal workings with data structures/caches and whatnot, all callbacks that may exist are maintained, and the app state is straightforward.

    But I'm not clear on what happens if the app dies at some point during the upload. It seems that any callbacks/notifications are dead. According to the API documentation, the uploading is handled by a separate process. Therefore the upload will continue, and my app will be awakened at some point in the future to handle completion. But the object aNewProfilePicture is nonexistent at that point, and all callbacks/objects are gone. I don't understand what context exists at this point. How am I supposed to ensure consistency in my DB and UI (for instance, update the successfullyUploaded property for that user)? Do I need to rework everything touching the DB or UI to correspond with the new API and work in a context-free environment?
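
    For what it's worth, a sketch of the standard recovery pattern - an assumption about the intended usage, not from the question (the identifier and key scheme are illustrative): persist a lookup key on the task itself, and on relaunch recreate a session with the same identifier so the delegate callbacks can find the database record without the original object. When uploads finish while the app is dead, iOS relaunches it into the background and calls application:handleEventsForBackgroundURLSession:completionHandler:, which is where the session should be recreated.

      // creating the upload: tag the task with a persistable key
      NSURLSessionConfiguration *config = [NSURLSessionConfiguration
          backgroundSessionConfigurationWithIdentifier:@"com.example.uploads"];
      // (iOS 8 naming; iOS 7 used backgroundSessionConfiguration:)
      NSURLSession *session = [NSURLSession sessionWithConfiguration:config
                                                            delegate:self
                                                       delegateQueue:nil];
      NSURLSessionUploadTask *task = [session uploadTaskWithRequest:request
                                                           fromFile:imageFileURL];
      task.taskDescription = @"123";    // the userId, not the live object
      [task resume];

      // fires once the session is recreated, even for tasks that completed
      // while the app was not running:
      - (void)URLSession:(NSURLSession *)session
                    task:(NSURLSessionTask *)task
          didCompleteWithError:(NSError *)error {
          if (error == nil) {
              int userId = [task.taskDescription intValue];
              // look the record up by userId, set successfullyUploaded = YES
          }
      }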

    Read the article
