Search Results

Search found 24037 results on 962 pages for 'every'.


  • Suggestions for a Cron like scheduler in Python?

    - by jamesh
    I'm looking for a library in Python which will provide at- and cron-like functionality. I'd quite like to have a pure Python solution, rather than relying on tools installed on the box; this way I can run on machines with no cron.

    For those unfamiliar with cron: you can schedule tasks based upon an expression like:

        0 2 * * 7        /usr/bin/run-backup    # run the backups at 0200 every Sunday
        0 9-17/2 * * 1-5 /usr/bin/purge-temps   # run the purge-temps command every 2 hours between 9am and 5pm, Monday to Friday

    The cron time expression syntax is less important, but I would like to have something with this sort of flexibility. If there isn't something that does this for me out of the box, any suggestions for the building blocks to make something like this would be gratefully received.

    Edit: I'm not interested in launching processes, just "jobs" also written in Python - Python functions. By necessity I think this would be a different thread, but not a different process. To this end, I'm looking for the expressivity of the cron time expression, but in Python. Cron has been around for years, but I'm trying to be as portable as possible. I cannot rely on its presence.
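
    A minimal sketch of the kind of building block being asked for, assuming each job is a plain Python function paired with its own "should it run now?" predicate (a real cron-expression parser is left out); the Job and Scheduler names are made up for illustration:

        import threading
        import time
        from datetime import datetime

        def run_backup():
            print("running backup")            # placeholder job body

        class Job(object):
            def __init__(self, should_run, func):
                self.should_run = should_run   # callable taking a datetime, returning bool
                self.func = func               # the Python function to run

        class Scheduler(threading.Thread):
            # Wakes up once a minute and calls every job whose predicate matches.
            def __init__(self, jobs):
                threading.Thread.__init__(self)
                self.jobs = jobs

            def run(self):
                while True:
                    now = datetime.now()
                    for job in self.jobs:
                        if job.should_run(now):
                            job.func()
                    time.sleep(60 - datetime.now().second)   # re-align to the next minute

        # Roughly "0 2 * * 7": run the backup at 02:00 every Sunday.
        jobs = [Job(lambda t: t.hour == 2 and t.minute == 0 and t.isoweekday() == 7,
                    run_backup)]
        Scheduler(jobs).start()

    Replacing the lambda predicates with a parser for cron expressions would be the remaining piece if the cron syntax itself is wanted.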

    Read the article

  • Does Lucene search work on large documents?

    - by shaon-fan
    Hi there. I have a problem when searching with Lucene. First, indexing works well even for huge documents, such as .pst files (the Outlook mail storage): it can build an index that includes all the information in the .pst. The only problem is that the index is sometimes too large and contains a great many words. When I search with Lucene, it only seems to process the front part of the indexed content; if a word appears only in the back part, it gets no hits in the result. But when I split the indexed content into several parts (in a crude way, while debugging) and search each part, it works fine. So I want to know how to split an index, and what size limit applies to searching? cheers and wait 4 reply.

    Update: following what Coady said, I set the length to the maximum of 2^31-1, but the search results still don't include what I want. Simply put, I convert the document's words to a string array to analyze; one document has 79,680 words including spaces and symbols. When I search for a certain word it returns only 300 hits, although there are actually more than 300 results. For the same reason, a word in the back part of the document still can't be found.

        // set the length
        indexWriter.SetMaxFieldLength(2147483647);

        // search
        IndexSearcher searcher = new IndexSearcher(Program.Parameters["INDEX_LOCATION"].ToString());
        Hits hits = searcher.Search(query);

    This is my code, the same as anyone else's. I found this problem when I needed to count the hits for every word in a document, and that is also when I found that words in the back part of the document couldn't be found. Please help me find out whether there is a searcher length setting somewhere, and how you have dealt with this problem.

    Read the article

  • Using ember-resource with CouchDB - how can I save my documents?

    - by Thomas Herrmann
    I am implementing an application using ember.js and CouchDB. I chose ember-resource as the database access layer because it nicely supports nested JSON documents. Since CouchDB uses the attribute _rev for optimistic locking in every document, this attribute has to be updated in my application after saving the data to CouchDB. My idea is to reload the data right after saving to the database and get the new _rev back with the rest of the document. Here is my code for this:

        // Since we use CouchDB, we have to make sure that we invalidate and re-fetch
        // every document right after saving it. CouchDB uses an optimistic locking
        // scheme based on the attribute "_rev" in the documents, so we reload it in
        // order to have the correct _rev value.
        didSave: function() {
          this._super.apply(this, arguments);
          this.forceReload();
        },

        // reload resource after save is done, expire to make reload really do something
        forceReload: function() {
          this.expire();  // Everything OK up to this location
          Ember.run.next(this, function() {
            this.fetch()  // Sub-document is reset here, and *not* refetched!
              .fail(function(error) {
                App.displayError(error);
              })
              .done(function() {
                App.log("App.Resource.forceReload fetch done, got revision " + self.get('_rev'));
              });
          });
        }

    This works for most cases, but if I have a nested model, the sub-model is replaced with the old version of the data just before the fetch is executed! Interestingly enough, the correct (updated) data is stored in the database and the wrong (old) data is in the in-memory model after the fetch, although the _rev attribute is correct (as are all attributes of the main object). Here is part of my object definition:

        App.TaskDefinition = App.Resource.define({
          url: App.dbPrefix + 'courseware',
          schema: {
            id: String,
            _rev: String,
            type: String,
            name: String,
            comment: String,
            task: { type: 'App.Task', nested: true }
          }
        });

        App.Task = App.Resource.define({
          schema: {
            id: String,
            title: String,
            description: String,
            startImmediate: Boolean,
            holdOnComment: Boolean
            // ... other attributes and sub-objects
          }
        });

    Any ideas where the problem might be? Thanks a lot for any suggestion! Kind regards, Thomas

    Read the article

  • Exception from HRESULT: 0x80020009 (DISP_E_EXCEPTION) in SharePoint, Part 2

    - by BeraCim
    Hi all: Following this post which I posted some time ago, I now get the same error every time I try to rewire two webs' URLs. Basically, this is the code. It runs in a LongRunningOperationJob:

        SPWeb existingWeb = null;
        using (existingWeb = site.OpenWeb(wedId))
        {
            SPWeb destinationWeb = createNewSite(existingWeb);

            existingWeb.AllowUnsafeUpdates = true;
            existingWeb.Name = existingWeb.Name + "_old";
            existingWeb.Title = existingWeb.Title + "_old";
            existingWeb.Description = existingWeb.Description + "_old";
            existingWeb.Update();
            existingWeb.AllowUnsafeUpdates = false;

            destinationWeb.AllowUnsafeUpdates = true;
            destinationWeb.Name = existingWeb.Name;
            destinationWeb.Title = existingWeb.Title;
            destinationWeb.Description = existingWeb.Description;
            destinationWeb.Update();
            destinationWeb.AllowUnsafeUpdates = false;

            // null these for what it's worth
            existingWeb = null;
            destinationWeb = null;
        } // <---- Exception raised here

    Basically, the code is trying to rename the existing site's URL to something else, and have the destination web's URL point to the old site's URL. When I ran this for the first time, I received the exception mentioned in the subject. However, on every run after that, I do not see the exception anymore. The webs DO get rewired... but at the cost of the app dying an unnecessary and terrible death. I'm at a complete loss as to what is going on and need urgent help. Does SharePoint keep any hidden table from me, or does the logic above have a fatal problem? Thanks.

    Read the article

  • Ruby 1.9 GarbageCollector, GC.disable/enable

    - by seb
    I'm developing a Rails 2.3, Ruby 1.9.1 web application that does quite a lot of calculation before each request. For every request it has to calculate a graph with 300 nodes and ~1000 edges. The graph and all its nodes, edges and other objects are initialized for every request (~2000 objects) - actually they are cloned from an uncalculated cached graph using Marshal.load(Marshal.dump()). Performance is quite an issue here. Right now the whole request takes 150 ms on average.

    I then saw that during a request, parts of the calculation randomly take longer. Assuming that this might be the garbage collector kicking in, I wrapped the request in GC.disable and GC.enable, so that the request waits with garbage collecting until calculating and rendering have finished:

        def query
          GC.disable
          calculate
          respond_to do |format|
            format.html { render }
          end
          GC.enable
        end

    The average request now takes about 100 ms (50 ms less). But I'm unsure if this is a good/stable solution; I assume there must be drawbacks to doing that. Does anybody have experience with a similar problem, or see problems with the above code?

    Read the article

  • Windows service hangs

    - by Ram
    Hi, I have a WebDAV client-server application. I'm using the Cassini web server to host the WebDAV server. One important thing here is that I start and stop my web server using a Windows service. Now, I start my Windows service and attach my WebDAV server application's debugger to the Windows service. Everything used to go fine: I could connect to the server using my WebDAV client and debug the server code. However, a couple of days back, when I tried to debug my WebDAV server application (which in turn is attached to a Windows service), my computer froze while I was in the midst of debugging the server app. The freeze does not occur at the same line every time and is therefore not predictable. Every time this happens, I cannot do anything at all with my computer except reboot the system using the power button. What could be the problem? Could it be the service that is freezing, or something else? Any help? Thanks, Ram

    Read the article

  • PHP .htaccess issue, specific/dynamic keywords

    - by Kunal
    Here goes my .htaccess file's content:

        Options +FollowSymLinks
        RewriteEngine On
        RewriteRule ^online-products$ products.php?type=online
        RewriteRule ^land-products$ products.php?type=land
        RewriteRule ^payment-methods$ payment-methods.php
        RewriteRule ^withdrawal-methods$ withdrawal-methods.php
        RewriteRule ^deposit-methods$ deposit-methods.php
        RewriteRule ^product-bonuses$ product-bonuses.php
        RewriteRule ^law-and-regulations$ law-and-regulations.php
        RewriteRule ^product-news$ product-news.php
        RewriteRule ^product-games$ product-games.php
        RewriteRule ^no-products$ no-products.php
        RewriteRule ^page-not-found$ notfound.php
        RewriteCond %{SCRIPT_FILENAME} !-f
        RewriteCond %{SCRIPT_FILENAME} !-d
        RewriteRule ^casinos/(.*)$ product.php?id=$1
        RewriteCond %{SCRIPT_FILENAME} !-f
        RewriteCond %{SCRIPT_FILENAME} !-d
        RewriteRule ^(.*)$ cms.php?link=$1
        ErrorDocument 404 /notfound.php

    What I am trying to achieve here is that the first set of rules applies to some specific keywords which should redirect to specific hard-coded pages, while anything apart from those keywords should be redirected to cms.php as a parameter, as you can see. But the problem is that every keyword is getting redirected to cms.php, whereas I want only the keywords that are not already hard-coded in the .htaccess file to go to cms.php - not every keyword. Example:

        www.sitename.com/online-products   -> www.sitename.com/products.php?type=online
        www.sitename.com/about-the-website -> www.sitename.com/cms.php?id=about-the-website
        www.sitename.com/product-news      -> www.sitename.com/product-news.php

    Another issue I am facing is that I cannot use any keyword with a space. "online-products" is fine, but I can't use "online products". Please help me out with your expert knowledge. Many thanks in advance for your kind help. Appreciate it.

    Read the article

  • Problems setting up AuthLogic

    - by sscirrus
    Hi all, I'm trying to set up a simple login with AuthLogic against my User table. Every time I try, the login fails and I don't know why. I'm sure this is a simple error, but I've been hitting a brick wall with it for a while.

        # user_sessions_controller
        def create
          @user_session = UserSession.new(params[:user_session])
          if @user_session.save
            flash[:notice] = "Login successful!"
          else
            flash[:notice] = "We couldn't log you in. Please try again!"
            redirect_to :controller => "index", :action => "index"
          end
        end

        # _user_login.html.erb (the partial from my index page where users log in)
        <% form_tag user_session_path do %>
          <p><label for="login">Login:</label>
            <%= text_field_tag "login", nil, :class => "inputBox", :id => "login" %></p>
          <p><label for="password">Password:</label>
            <%= password_field_tag "password", nil, :class => "inputBox", :id => "password" %></p>
          <p><%= submit_tag "submit", :class => "submit" %></p>
        <% end %>

    I had Faker generate some data for my User table, but I cannot log in! Every time I try, it just redirects to index. Where am I going wrong? Thanks everybody.

    Read the article

  • Passive and active sockets

    - by davsan
    Quoting from this socket tutorial:

        Sockets come in two primary flavors. An active socket is connected to a remote active socket via an open data connection... A passive socket is not connected, but rather awaits an incoming connection, which will spawn a new active socket once a connection is established... Each port can have a single passive socket bound to it, awaiting incoming connections, and multiple active sockets, each corresponding to an open connection on the port. It's as if the factory worker is waiting for new messages to arrive (he represents the passive socket), and when one message arrives from a new sender, he initiates a correspondence (a connection) with them by delegating someone else (an active socket) to actually read the packet and respond back to the sender if necessary. This permits the factory worker to be free to receive new packets.

    The tutorial then explains that, after a connection is established, the active socket continues receiving data until there are no remaining bytes, and then closes the connection. What I didn't understand is this: suppose there's an incoming connection to the port, and the sender wants to send a little data every 20 minutes. If the active socket closes the connection when there are no remaining bytes, does the sender have to reconnect to the port every time it wants to send data? How do we keep an established connection open for a longer time? Can you tell me what I'm missing here? My second question is: who determines the limit on the number of concurrently working active sockets?
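
    A minimal Python sketch of the distinction the tutorial describes, with one passive (listening) socket bound to the port and a new active socket produced by accept() for each incoming connection; the port number and the one-connection-at-a-time loop are just for illustration:

        import socket

        # Passive socket: bound to the port and listening; it never carries data itself.
        listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        listener.bind(("0.0.0.0", 9000))
        listener.listen(5)

        while True:
            # accept() spawns a new *active* socket for each incoming connection.
            conn, addr = listener.accept()
            while True:
                data = conn.recv(4096)
                if not data:             # empty read: the peer closed its end
                    break
                conn.sendall(b"ack\n")   # the same connection can stay open indefinitely
            conn.close()

    As long as neither side closes the connection, the sender in the example can keep writing every 20 minutes over the same active socket without reconnecting; the number of concurrently open active sockets is bounded by the server side (its accept loop, the listen backlog and OS file-descriptor limits) rather than by the protocol.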

    Read the article

  • De-normalization alternative to a specific MySQL problem?

    - by Booker
    I am facing quite a specific optimization problem. I currently have 4 normalized tables of data. Every second, possibly thousands of users will pull down up-to-date info from these tables using AJAX. The thing is that I can predict relatively easily which subset of data they need: the most recent 100 or so entries in those 4 normalized tables.

    I have been researching de-normalization, but feel that perhaps there is an easier solution. I was thinking that I could run one SQL query every second to condense the needed info, store it in a cached temporary table, and then have all of the user queries just draw from this. This would allow the complex join of the 4 tables to be run only once, and from there the users just need to do a simple lookup on the cached table. I really don't know if this is feasible. Comments on this or any other suggestions would be much appreciated. Thanks!

    Read the article

  • Why do my CouchDB databases grow so fast?

    - by konrad
    I was wondering why my CouchDB database was growing too fast, so I wrote a little test script. This script changes an attribute of a CouchDB document 1200 times and takes the size of the database after each change. After performing these 1200 write steps the database does a compaction step and the db size is measured again. In the end the script plots the database size against the revision numbers. The benchmark is run twice: the first time with the default number of document revisions (_revs_limit = 1000), and the second time with the number of document revisions set to 1. (The original post included a plot of each run's results.)

    For me this is quite unexpected behavior. In the first run I would have expected linear growth, as every change produces a new revision; when the 1000 revisions are reached, the size should stay constant as older revisions are discarded, and after compaction the size should fall significantly. In the second run the first revision should result in a certain database size that then stays constant during the following write steps, as every new revision leads to the deletion of the previous one. I could understand a little overhead being needed to manage the changes, but this growth behavior seems weird to me. Can anybody explain this phenomenon, or correct the assumptions that led to my wrong expectations?

    Read the article

  • Highlight image borders with Ajax request.

    - by Tek
    First, some visualization of the code. I have the following images that are dynamically generated from jQuery. They're made upon user request:

        <img id="i13133" src="someimage.jpg" />
        <img id="i13232" src="someimage1.jpg" />
        <img id="i14432" src="someimage2.jpg" />
        <img id="i16432" src="someimage3.jpg" />
        <img id="i18422" src="someimage4.jpg" />

    I have an AJAX loop that repeats every 15 seconds using jQuery, and it contains the following code. Note: the if statement is inside the Ajax loop, where imgId is the requested ID from the Ajax call.

        // Passes the IDs retrieved from Ajax; if an ID exists on the page, the animation is triggered.
        if ($('#i' + imgId).length) {
            var pickimage = '#i' + imgId;
            var stop = false;
            function highlight(pickimage) {
                $(pickimage).animate({color: "yellow"}, 1000, function () {
                    $(pickimage).animate({color: "black"}, 1000, function () {
                        if (!stop) highlight(pickimage);
                    });
                });
            }
            // Start the loop
            highlight(pickimage);
        }

    It works great, even with multiple images, but I had originally used this with one image. The problem is I need an interrupt. Something like:

        $(img).click(function () {
            stop = true;
        });

    There are two problems:

    1.) This obviously stops all animations. I can't wrap my head around how I could write something that only stops the animation of the image that's clicked.

    2.) The Ajax retrieves IDs, and sometimes those IDs appear more than once every few minutes, which means the animations would be repeated on top of each other if the image already exists. I could use some help figuring out how to detect if an animation is already running on an image, and do nothing if the animation is already triggered on that image.

    Read the article

  • I want my logs sent to me by mail with logrotate

    - by lericson
    Not strictly a question about programming as such, more of a log handling question. Anyway: my company has multiple clients, and each of these clients has a set of logs that I'd very much like to have sent to me by e-mail. Another prerequisite is that they're highlighted with simple HTML. All that is very well; I've managed to make a highlighter for the given log types. So what I do is use logrotate's prerotate stuff to send the logs as an e-mail message. Example:

        /var/log/a.log /var/log/b.log {
            daily
            missingok
            copytruncate
            prerotate
                /usr/bin/python /home/foo/hilight_logs /var/log/{a,b}.log | /usr/sbin/sendmail -FLog\ mailer [email protected] [email protected]
            endscript
        }

    The problem with this approach is basically that logrotate sucks: it'll run the command for every log file specified in the specifier, and to my knowledge there's no way to know which of the log files is being handled (which wouldn't really help anyway). Short of repeating the exact same logrotate stanza up to 10 times on different machines, the only thing I can do is get bogged down with log spam every night. And I grew tired of it today, so I ask.
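
    For what it's worth, a sketch of sidestepping logrotate's per-file behaviour by doing the highlighting and mailing in one standalone Python script (run from cron, or once from a sharedscripts prerotate block); highlight() is a hypothetical stand-in for the existing hilight_logs logic, and the addresses are placeholders:

        import smtplib
        from email.mime.text import MIMEText

        LOGS = ["/var/log/a.log", "/var/log/b.log"]   # one entry per client log

        def highlight(text):
            # Stand-in for the real hilight_logs logic: wrap the log in simple HTML.
            return "<pre>%s</pre>" % text

        def mail_logs(paths, sender, recipient):
            body = []
            for path in paths:
                with open(path) as fh:
                    body.append("<h2>%s</h2>%s" % (path, highlight(fh.read())))
            msg = MIMEText("".join(body), "html")
            msg["Subject"] = "Nightly logs"
            msg["From"] = sender
            msg["To"] = recipient
            smtp = smtplib.SMTP("localhost")
            smtp.sendmail(sender, [recipient], msg.as_string())
            smtp.quit()

        if __name__ == "__main__":
            mail_logs(LOGS, "logmailer@example.com", "me@example.com")

    One such script per machine (or per client list) keeps the mail to a single message per night instead of one per rotated file.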

    Read the article

  • App with label XYZ could not be found

    - by HWM-Rocker
    Hello folks, today I ran into an error and have no clue how to fix it:

        Error: App with label XYZ could not be found. Are you sure your INSTALLED_APPS setting is correct?

    where XYZ stands for the app name that I am trying to reset. This error shows up every time I try to reset it (manage.py reset XYZ). Showing all the SQL code works, and even manage.py validate shows no error. I already commented out every single line of code in models.py that I touched in the last three months (function by function, model by model), and even with no models left I get this error. At http://code.djangoproject.com/ticket/10706 I found a bug report about this error. I also applied one of the patches to locate the error; it raises an exception so you get a traceback, but even there, there is no hint of which of my files the error occurred in. I don't want to paste my code right now, because it is nearly 1000 lines in the file I edited the most. If anyone has had the same error, please tell me where I can look for the problem; in that case I can post the relevant part of the source. Otherwise it would be too much at once. Thank you for helping!

    Read the article

  • Configuring an offscreen framebuffer fails the completeness test

    - by randallmeadows
    I'm trying to create an offscreen framebuffer into which I can do some OpenGL drawing, and then pull the bits out manually. I'm following the instructions here, but in step 4, status is 0 instead of GL_FRAMEBUFFER_COMPLETE_OES. If I insert a call to glGetError() after every gl call, it returns 0 (GL_NO_ERROR) every time. But the values of variables do not change during the calls. E.g., with

        GLuint framebuffer;
        glGenFramebuffersOES(1, &framebuffer);
        glBindFramebufferOES(GL_FRAMEBUFFER_OES, framebuffer);

    the value of framebuffer does not get altered at all (even when I change it to some arbitrary value and re-execute). It's almost as if the gl calls are not actually being made. I'm linking against the OpenGLES framework, and get no compile, link, or run-time errors (or warnings). I'm at a loss as to what to do to fix this. I've tried continuing on with my drawing, but do not see the results I expect; at this point I can't tell whether it's because of the above error or the conversion to a UIImage.

    Read the article

  • A graph-based tuple merge?

    - by user1644030
    I have paired values in tuples that are related matches (and technically still in CSV files). Neither of the paired values is necessarily unique.

        tupleAB = (A####, B###), (A###, B###), (A###, B###)...
        tupleBC = (B####, C###), (B###, C###), (B###, C###)...
        tupleAC = (A####, C###), (A###, C###), (A###, C###)...

    My ideal output would be a dictionary with a unique ID and a list of "reinforced" matches. The way I try to think about it is in a graph-based context. For example, if:

        tupleAB[x] = (A0001, B0012)
        tupleBC[y] = (B0012, C0230)
        tupleAC[z] = (A0001, C0230)

    this would produce:

        output = {uniquekey0001: [A0001, B0012, C0230]}

    Ideally, this would also be able to scale up to more than three tuples (for example, adding a "D" match would result in three additional tuple lists - AD, BD, and CD - and lists four items long; and so forth). In regard to scaling up to more tuples, I am open to having "graphs" that aren't necessarily fully connected, i.e. every node connected to every other node; my hunch is that I could easily filter based on the list lengths. I am open to any suggestions. I think, with a few cups of coffee, I could work out a brute-force solution, but I thought I'd ask the community if anyone was aware of a more elegant solution. Thanks for any feedback.
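
    One hedged sketch of the graph-based approach: treat every pair as an edge and take connected components with a small union-find, assuming the pairs have already been read out of the CSV files as plain tuples of strings:

        def merge_pairs(pairs):
            # Group IDs into connected components; returns {unique key: sorted member list}.
            parent = {}

            def find(x):
                parent.setdefault(x, x)
                while parent[x] != x:                  # walk up to the root
                    parent[x] = parent[parent[x]]      # path halving
                    x = parent[x]
                return x

            def union(a, b):
                parent[find(a)] = find(b)

            for a, b in pairs:
                union(a, b)

            groups = {}
            for node in parent:
                groups.setdefault(find(node), []).append(node)

            return {"uniquekey%04d" % i: sorted(members)
                    for i, members in enumerate(sorted(groups.values()), 1)}

        # The example from the question:
        pairs = [("A0001", "B0012"), ("B0012", "C0230"), ("A0001", "C0230")]
        print(merge_pairs(pairs))
        # -> {'uniquekey0001': ['A0001', 'B0012', 'C0230']}

    The same routine handles extra AD/BD/CD pairs unchanged, and components that are not fully connected can be filtered afterwards, for example by checking member-list lengths as suggested in the question.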

    Read the article

  • Simple continuously running XMPP client in Python

    - by tom
    I'm using python-xmpp to send Jabber messages. Everything works fine except that every time I want to send messages (every 15 minutes) I need to reconnect to the Jabber server, and in the meantime the sending client is offline and cannot receive messages. So I want to write a really simple, indefinitely running XMPP client that is online the whole time and can send (and receive) messages when required. My trivial (non-working) approach:

        import time
        import xmpp

        class Jabber(object):
            def __init__(self):
                server = 'example.com'
                username = 'bot'
                passwd = 'password'
                self.client = xmpp.Client(server)
                self.client.connect(server=(server, 5222))
                self.client.auth(username, passwd, 'bot')
                self.client.sendInitPresence()
                self.sleep()

            def sleep(self):
                self.awake = False
                delay = 1
                while not self.awake:
                    time.sleep(delay)

            def wake(self):
                self.awake = True

            def auth(self, jid):
                self.client.getRoster().Authorize(jid)
                self.sleep()

            def send(self, jid, msg):
                message = xmpp.Message(jid, msg)
                message.setAttr('type', 'chat')
                self.client.send(message)
                self.sleep()

        if __name__ == '__main__':
            j = Jabber()
            time.sleep(3)
            j.wake()
            j.send('[email protected]', 'hello world')
            time.sleep(30)

    The problem here seems to be that I cannot wake it up. My best guess is that I need some kind of concurrency. Is that true, and if so, how would I best go about it?

    EDIT: After looking into all the options concerning concurrency, I decided to go with twisted and wokkel. If I could, I would delete this post.
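
    For the record, a minimal sketch of the threaded variant the question is reaching for, using the same xmpppy calls as above (Client.Process() is what keeps the connection serviced); the server, account details and queue-based hand-off are assumptions for illustration, not tested against a real server:

        import threading
        import time
        import Queue    # "queue" on Python 3
        import xmpp

        class JabberBot(threading.Thread):
            # Keeps one connection online and sends whatever is put on the queue.
            def __init__(self, server, username, passwd):
                threading.Thread.__init__(self)
                self.daemon = True
                self.outbox = Queue.Queue()
                self.client = xmpp.Client(server, debug=[])
                self.client.connect(server=(server, 5222))
                self.client.auth(username, passwd, 'bot')
                self.client.sendInitPresence()

            def run(self):
                while True:
                    self.client.Process(1)        # service incoming stanzas / keep-alives
                    try:
                        jid, text = self.outbox.get_nowait()
                    except Queue.Empty:
                        continue
                    message = xmpp.Message(jid, text)
                    message.setAttr('type', 'chat')
                    self.client.send(message)

            def send(self, jid, text):
                self.outbox.put((jid, text))      # safe to call from other threads

        if __name__ == '__main__':
            bot = JabberBot('example.com', 'bot', 'password')
            bot.start()
            while True:
                bot.send('someone@example.com', 'hello world')
                time.sleep(15 * 60)               # every 15 minutes, as in the question

    Routing outgoing messages through the queue keeps all socket work on the bot's own thread, which avoids two threads writing to the same connection at once.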

    Read the article

  • Project Management and Scheduling Techniques

    - by Alec Smart
    Hello, I know this is probably the nth project management question, but I am trying to move my team onto a more robust project management technique. I'm wondering what the best technique to use is. I know that probably no technique is best, but which are the most popular techniques? Planning poker? Evidence Based Scheduling? COCOMO? Agile? Scrum? XP? Which one should I use?

    Also, suppose I use EBS: wouldn't it be too time-consuming to break down every single activity into fine-grained tasks? E.g. "Design" is a goal - what kind of fine-grained tasks will I have under it? Is this a waste of time, i.e. dividing work into so many micro parts? Usually when I give my programmers a task, I follow up every week, and they complete quite a lot of the task assigned to them (the tasks are very broad, e.g. "module X"). Is EBS worth it? Are there any white papers on it so that I can implement it on my own (instead of using FogBugz)? Most of my projects are web-based projects. Thank you for your time.

    Read the article

  • Natural vs surrogate keys on support tables

    - by Bugeo
    I have read many articles about the battle between natural and surrogate primary keys. I agree with the use of surrogate keys to identify records in tables whose contents are created by the user. But in the case of supporting tables, what should I use? For example, in a hypothetical table "orderStates": if I use a natural key I would have the following data:

        TABLE ORDERSTATES
        {ID: "NEW",        NAME: "New"}
        {ID: "MANAGEMENT", NAME: "Management"}
        {ID: "SHIPPED",    NAME: "Shipped"}

    If I use a surrogate key I would have the following data:

        TABLE ORDERSTATES
        {ID: 1, CODE: "NEW",        NAME: "New"}
        {ID: 2, CODE: "MANAGEMENT", NAME: "Management"}
        {ID: 3, CODE: "SHIPPED",    NAME: "Shipped"}

    Now let's take an example: a user enters a new order. In the case where I use natural keys, in the code I can write this:

        newOrder.StateOrderId = "NEW";

    With surrogate keys, instead, I have an additional step every time:

        stateOrderId_NEW = ...  // retrieve the id corresponding to the record with code "NEW"
        newOrder.StateOrderId = stateOrderId_NEW;

    The same will happen every time I have to move the order to a new status. So, in this case, what are the reasons to choose one key type over the other?

    Read the article

  • Design issue in iPhone dev - generic implementation for game bonuses

    - by Idan
    So, I thought I'd consult you guys about my design, because I sense there might be a better way of doing it. I need to implement a game bonuses mechanism in my app. Currently there are 9 bonuses available, each one based on a different param of the MainGame object. What I had in mind was, at app startup, to initialize 9 GameBonus objects, each with a different SEL (shouldBonus) responsible for checking whether the bonus is valid. Then, at the end of every game, I would just run over the bonuses array and call isBonusValid() with the MainGame object (which is different after every game). How does that sound? The only issue I have currently is that I need to make sure that if some bonuses are accepted, some others won't be (inner stuff)... any advice on how to do that and still maintain a generic implementation?

        @interface GameBonus : NSObject {
            int bonusId;
            NSString *name;
            NSString *description;
            UIImage *img;
            SEL shouldBonus;
        }
        @end

        @implementation GameBonus

        - (BOOL)isBonusValid:(MainGame *)mainGame {
            // intent: invoke the stored shouldBonus selector with the finished game
            return [self performSelector:shouldBonus withObject:mainGame] != nil;
        }

        @end

    Read the article

  • Which software for intranet CMS - Django or Joomla?

    - by zalun
    In my company we are thinking of moving from a wiki-style intranet to a more bespoke CMS solution. The natural choice would be Joomla, but we have a specific architecture. There are a few hundred people who will use the system, and the system should be self-explanatory (easier than a wiki). We use a lot of tools - web, applications, and things integrated with 3rd party software. The superior element which is the glue for all of them is our API. For the intranet tools, for example, we do use Django, but it's used without the ORM, kind of limited to templates and URLs - every application has adequate methods within our API. We do not use the Django admin interface, because it is heavily dependent on the ORM. Because of that, Joomla may be hard to integrate. Every employee should be able to edit most of the pages; authentication and privileges have to be managed by our API.

    How hard is it to plug Joomla into a different authentication process? (extensions only - no hacks) If one knows Django better than Joomla, should Django be used?
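
    On the Django side, the usual hook for a house authentication scheme is a custom authentication backend; a rough sketch, where internal_api is a hypothetical client for the company API and the built-in User model is assumed to stay available for session bookkeeping:

        # settings.py would point at this class, e.g.:
        # AUTHENTICATION_BACKENDS = ('intranet.auth_backend.ApiBackend',)

        from django.contrib.auth.models import User
        import internal_api   # hypothetical wrapper around the company API

        class ApiBackend(object):
            # Check credentials against the internal API instead of Django's own tables.
            def authenticate(self, username=None, password=None):
                if not internal_api.check_credentials(username, password):
                    return None
                # Mirror the user locally so the rest of Django has a User object to hand around.
                user, created = User.objects.get_or_create(username=username)
                return user

            def get_user(self, user_id):
                try:
                    return User.objects.get(pk=user_id)
                except User.DoesNotExist:
                    return None

    Privileges could be resolved the same way inside the backend or in view code that asks the API directly; the Joomla side would need an equivalent authentication plugin.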

    Read the article

  • iPhone: Leak with UIWebView loading Office documents. Any ideas how to avoid it?

    - by Thomas Tempelmann
    While there are already quite a few posts about leaks around UIWebView, mine is a bit more special, I believe, and thus deserves its own post here. I see a reproducible large leak every time I load an Office document such as a Word or Excel file. For instance, every time I display a 180 KB .doc file, I get a 100 KB leak. And that happens on both the simulator and an actual device, running OS 3.1.3. The leak is not visible with the Leaks instrument, only by looking at the malloc instances via the ObjectAlloc instrument. (The original post included a picture of the Instruments trace.)

    I've also made a demo project, UIWebView-Leak.zip, so you can verify this yourself. To see the leak, use the ObjectAlloc instrument, switch to the view where you see individual allocation objects, and sort by size so that you see the large ones in a group, just like in my picture above. Then view an Office document a few times and find the Malloc objects that keep staying "Live" even after the actual UIWebView has been freed.

    Is this a known bug? Or is there any way I can avoid these leaks? I.e., have you successfully shown Office documents on an iPhone without getting such leaks? Note: I've reported this as a bug to Apple now, too (ID 7950594). I am still waiting for someone (including Apple) to confirm this as a true leak, or show why it isn't (i.e. that I'm doing something wrong or making wrong assumptions).

    Read the article

  • Are document-oriented databases any more suitable than relational ones for persisting objects?

    - by Owen Fraser-Green
    In terms of database usage, the last decade was the age of the ORM, with hundreds competing to persist our object graphs in plain old-fashioned RDBMSs. Now we seem to be witnessing the coming of age of document-oriented databases. These databases are highly optimized for schema-free documents but are also very attractive for their ability to scale out and query a cluster in parallel.

    Document-oriented databases also hold a couple of advantages over RDBMSs for persisting data models in object-oriented designs. As the tables are schema-free, one can store objects belonging to different classes in an inheritance hierarchy side by side. Also, as the domain model changes, so long as the code can cope with getting back objects from an old version of the domain classes, one can avoid having to migrate the whole database at every change.

    On the other hand, the performance benefits of document-oriented databases mainly appear when storing deeper documents - in object-oriented terms, classes which are composed of other classes, for example a blog post and its comments. In most of the examples of this I can come up with, though, such as the blog one, the gain in read access would appear to be offset by the penalty of having to write the whole blog post "document" every time a new comment is added. It looks to me as though document-oriented databases can bring significant benefits to object-oriented systems if one takes extreme care to organize the objects in deep graphs optimized for the way the data will be read and written, but this means knowing the use cases up front. In the real world, we often don't know until we actually have a live implementation we can profile.

    So is the case of relational vs. document-oriented databases one of swings and roundabouts? I'm interested in people's opinions and advice, in particular if anyone has built any significant applications on a document-oriented database.

    Read the article

  • MySQL too many connections

    - by MichaelMcCabe
    I hate to bring up a question which is widely asked on the web, but I can't seem to solve it. I started a project a while back, and after a month of testing I hit a "Too many connections" error. I looked into it and "solved" it by increasing max_connections. That worked for a while. Since then, more and more people have started to use the site, and it hit the limit again. When I am the only user on the site and I type "show processlist", it comes up with about 50 connections which are still open (saying "Sleep" in the command column). Now, I don't know enough to speculate why these are open, but in my code I triple-checked, and every connection I open, I close. E.g.:

        public int getSiteIdFromName(String name, String company) throws DataAccessException, java.sql.SQLException {
            Connection conn = this.getSession().connection();
            Statement smt = conn.createStatement();
            ResultSet rs = null;
            String query = "SELECT id FROM site WHERE name='" + name + "' and company_id='" + company + "'";
            rs = smt.executeQuery(query);
            rs.next();
            int id = rs.getInt("id");
            rs.close();
            smt.close();
            conn.close();
            return id;
        }

    Every time I do something else on the site, another load of connections are opened and not closed. Is there something wrong with my code? And if not, what could be the problem?

    Read the article

  • Version Control: multiple version hell, file synchronization

    - by SigTerm
    Hello. I would like to know how you normally deal with this situation. I have a set of utility functions - say 5 to 10 files. Technically they are a static library, cross-platform - SConscript/SConstruct plus a Visual Studio project (not solution). Those utility functions are used in multiple small projects (15+, and the number increases over time). Each project has a copy of a few files or of the entire library, not a link to one central place. Sometimes a project uses one file, sometimes two, some use everything. Normally, the utility functions are included as a copy of every file plus the SConscript/SConstruct or Visual Studio project (depending on the situation). Each project has a separate git repository. Sometimes one project is derived from another, sometimes it isn't. You work on every one of them, in random order. There are no other people (to keep things simpler).

    The problem arises when, while working on one project, you modify those utility function files. Because each project has its own copy of each file, this introduces a new version, which leads to a mess when you later (a week later, for example) try to guess which version has the most complete functionality (i.e. you added a function to a.cpp in one project, and added another function to a.cpp in another project, which created a version fork). How would you handle this situation to avoid "version hell"?

    One way I can think of is using symbolic links/hard links, but it isn't perfect: if you delete the one central storage, it all goes to hell. And hard links won't work on a dual-boot system (although symbolic links will). It looks like what I need is something like an advanced git repository, where the code for the project is stored in one local repository but is synchronized with multiple external repositories. But I'm not sure how to do that, or if it is possible with git. So, what do you think?

    Read the article
