Search Results

Search found 1282 results on 52 pages for 'overhead'.

Page 29 of 52

  • Hibernate distributed 2nd-level cache options

    - by ishmeister
    Not really a question, but I'm looking for comments/suggestions from anyone who has experience using one or more of the following:

      • EhCache with RMI
      • EhCache with JGroups
      • EhCache with Terracotta
      • GigaSpaces Data Grid

    A bit of background: our application is read-only for the most part, but there is some user data that is read-write, and some that is only written (and can also be reasonably inaccurate). In addition, it would be nice to have tools that let us flush and fill the cache at intervals or by admin intervention. Regarding the first option - are there any concerns about the overhead of RMI and the performance of Java serialization?
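
    For the RMI option, replication is configured in ehcache.xml. A minimal sketch with multicast peer discovery and one asynchronously replicated cache - the cache name and every tunable here are illustrative, not recommendations:

        <ehcache>
            <cacheManagerPeerProviderFactory
                class="net.sf.ehcache.distribution.RMICacheManagerPeerProviderFactory"
                properties="peerDiscovery=automatic,
                            multicastGroupAddress=230.0.0.1,
                            multicastGroupPort=4446, timeToLive=1"/>

            <cache name="com.example.ReadMostlyEntity"
                   maxElementsInMemory="10000"
                   timeToLiveSeconds="3600">
                <cacheEventListenerFactory
                    class="net.sf.ehcache.distribution.RMICacheReplicatorFactory"
                    properties="replicateAsynchronously=true"/>
            </cache>
        </ehcache>

    On the overhead question: RMI replication piggybacks on Java serialization, so every cached value must be Serializable, and each update costs one serialization plus a call per peer (batched when asynchronous). That is usually fine for read-mostly data, and painful for write-heavy caches.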

    Read the article

  • Detecting support for a given JavaScript event?

    - by Will
    I'm interested in using the JavaScript hashchange event to monitor changes in the URL's fragment identifier. I'm aware of Really Simple History and the jQuery plugins for this; however, I've concluded that in my particular project it's not worth the added overhead of another JS file. What I would like to do instead is take the "progressive enhancement" route: test whether the hashchange event is supported by the visitor's browser, and write my code to use it if it's available, as an enhancement rather than a core feature. IE 8, Firefox 3.6, and Chrome 4.1.249 support it, which accounts for about 20% of my site's traffic. So: is there a way to test whether a browser supports a particular event? Thanks.
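
    A sketch of the usual detection (the same check Modernizr uses): the handler property exists on window, but IE8 running in IE7 compatibility mode exposes the property without ever firing the event, so document.documentMode has to be consulted too. The polling fallback is illustrative:

        function supportsHashChange() {
            var mode = document.documentMode;
            return ('onhashchange' in window) && (mode === undefined || mode > 7);
        }

        if (supportsHashChange()) {
            window.onhashchange = function () { /* react to location.hash */ };
        } else {
            var lastHash = location.hash;          // degrade to polling
            setInterval(function () {
                if (location.hash !== lastHash) {
                    lastHash = location.hash;
                    // react to location.hash
                }
            }, 100);
        }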

    Read the article

  • Best way to reuse a Runnable

    - by Gandalf
    I have a class that implements Runnable and am currently using an Executor as my thread pool to run tasks (indexing documents into Lucene):

        executor.execute(new LuceneDocIndexer(doc, writer));

    My issue is that my Runnable class creates many Lucene Field objects, and I would rather reuse them than create new ones on every call. What's the best way to reuse these objects? (Field objects are not thread-safe, so I cannot simply make them static.) Should I create my own ThreadFactory? I notice that after a while the program starts to degrade drastically, and the only thing I can think of is GC overhead. I am currently profiling the project to be sure this is even an issue - but for now let's just assume it is.
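
    A common pattern is per-thread reuse via ThreadLocal rather than a custom ThreadFactory: each pool thread keeps its own Document/Field set, and the Runnable refreshes the values with setValue(). A sketch, assuming the pre-4.0 Field API, modern Java, and a constructor reshaped to take the raw strings (field names are illustrative):

        import org.apache.lucene.document.Document;
        import org.apache.lucene.document.Field;
        import org.apache.lucene.index.IndexWriter;

        public final class LuceneDocIndexer implements Runnable {
            // One reusable Document + Fields per pool thread.
            private static final class FieldHolder {
                final Document doc = new Document();
                final Field title = new Field("title", "", Field.Store.YES, Field.Index.ANALYZED);
                final Field body  = new Field("body",  "", Field.Store.NO,  Field.Index.ANALYZED);
                FieldHolder() { doc.add(title); doc.add(body); }
            }

            private static final ThreadLocal<FieldHolder> HOLDER =
                ThreadLocal.withInitial(FieldHolder::new);

            private final String title, body;
            private final IndexWriter writer;

            public LuceneDocIndexer(String title, String body, IndexWriter writer) {
                this.title = title; this.body = body; this.writer = writer;
            }

            public void run() {
                FieldHolder h = HOLDER.get();   // this thread's instances
                h.title.setValue(title);        // reuse instead of new Field(...)
                h.body.setValue(body);
                try {
                    writer.addDocument(h.doc);
                } catch (Exception e) {
                    throw new RuntimeException(e);
                }
            }
        }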

    Read the article

  • Alternatives to PropertyInfo.GetValue() for Mono?

    - by Trilok
    I have a method with the following signature:

        private object GetNestedObject<y>(y objToAccess, string nestedObjectName)

    I'm using reflection to get the nested object from objToAccess and return it. This works well, except that it's really slow (I have to do this a few hundred thousand times). I came across HyperDescriptor, but since I'm running this on Linux and Mono doesn't support TypeDescriptionProviders, I can't use it. Are there any alternatives to using GetValue() in this case? I could always hardcode overrides for each type, but that is not desirable and would add a lot of maintenance overhead in my case.
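
    One alternative that works on Mono (which has shipped System.Linq.Expressions since 2.x) is to compile the property getter into a delegate once and cache it; invoking the cached delegate is close to a direct call. A sketch:

        using System;
        using System.Linq.Expressions;
        using System.Reflection;

        static class FastGetter
        {
            // Builds "(object o) => (object)((T)o).Property" and compiles it.
            public static Func<object, object> Build(Type type, string propertyName)
            {
                PropertyInfo prop = type.GetProperty(propertyName);
                ParameterExpression obj = Expression.Parameter(typeof(object), "obj");
                Expression body = Expression.Convert(              // boxes value types
                    Expression.Property(Expression.Convert(obj, type), prop),
                    typeof(object));
                return Expression.Lambda<Func<object, object>>(body, obj).Compile();
            }
        }

    Cache the returned delegate per (type, property name) pair - for example in a Dictionary - so the reflection and compilation cost is paid only on first access.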

    Read the article

  • Unsafe, super-fast cross-process memory buffer?

    - by John
    Cross-process memory buffers always have some overhead, and my understanding is that it can be quite high. But if you're implementing a cross-process render buffer, safety isn't critical in the way it is for other data - so are there techniques we can use to get "raw" access to a chunk of memory from multiple processes, with no safety nets apart from it not crashing? Or do modern operating systems simply not expose unabstracted memory in a way that makes this possible? I'm working in C++, and the question applies to Windows XP/Vista/7 and Mac OS X 10.5+ (and, less importantly, Linux).
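
    On the POSIX side (Linux / Mac OS X), shm_open plus mmap gives exactly this: a named region mapped into several processes, with no per-access cost beyond ordinary RAM and no safety net beyond page protections - synchronization and tearing are entirely your problem. (The Windows equivalents are CreateFileMapping/MapViewOfFile.) A sketch; the name and size are illustrative:

        #include <fcntl.h>
        #include <sys/mman.h>
        #include <unistd.h>
        #include <cstddef>
        #include <cstring>

        int main() {
            const std::size_t kSize = 1024 * 768 * 4;   // one RGBA frame
            int fd = shm_open("/renderbuf", O_CREAT | O_RDWR, 0600);
            if (fd < 0) return 1;
            if (ftruncate(fd, kSize) != 0) return 1;

            void* buf = mmap(nullptr, kSize, PROT_READ | PROT_WRITE,
                             MAP_SHARED, fd, 0);
            if (buf == MAP_FAILED) return 1;

            // Every process that maps "/renderbuf" sees these same bytes.
            std::memset(buf, 0, kSize);

            munmap(buf, kSize);
            close(fd);
            return 0;
        }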

    Read the article

  • How to permanently prevent specific part of a file from being committed in git?

    - by boutta
    I have cloned a remote SVN repository with git-svn. I have modified a pom.xml file in this cloned repo so that the code compiles. This setup is specific to my machine, so I don't want to push the change back to the remote repo. Is there a way to prevent this (partial) change to a file from being committed? I'm aware that I could use a personal branch, but that would mean a certain merging overhead. Are there other ways? I've looked into this question and this one, but they are about rather temporary changes. Update: I'm also aware of the .gitignore possibilities, but that would exclude the file completely.
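
    One low-ceremony option - a sketch; note it hides all local edits to the file, not just the one hunk - is to mark the file skip-worktree in your clone:

        # Tell the index to pretend pom.xml is unchanged in this clone:
        git update-index --skip-worktree pom.xml

        # Reverse it when you genuinely need to commit the file:
        git update-index --no-skip-worktree pom.xml

    Unlike --assume-unchanged, which is a performance hint git may silently drop, --skip-worktree is intended for this "local config diverges" case - though operations that must touch pom.xml (say, an upstream change to it during rebase) will still surface it for you to resolve.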

    Read the article

  • How can I add floats together in different orders, and always get the same total?

    - by splicer
    Let's say I have three 32-bit floating point values, a, b, and c, such that (a + b) + c != a + (b + c). Is there a summation algorithm, perhaps similar to Kahan summation, that guarantees these values can be summed in any order and always arrive at the exact same (fairly accurate) total? I'm looking for the general case (i.e. not a solution that only deals with 3 numbers). Is arbitrary-precision arithmetic the only way to go? I'm dealing with very large data sets, so I'd like to avoid the overhead of arbitrary-precision arithmetic if possible. Thanks!
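
    Exact summation is cheaper than full arbitrary precision: Shewchuk's algorithm keeps a short list of non-overlapping partial sums and rounds only once at the end, so the result is the correctly rounded sum regardless of input order. Python ships it as math.fsum, which makes for a quick demonstration of the idea before porting it:

        import math
        import random

        values = [random.uniform(-1e6, 1e6) for _ in range(100000)]
        shuffled = random.sample(values, len(values))   # a permutation

        # fsum is exact, so permuting the inputs cannot change the result.
        assert math.fsum(values) == math.fsum(shuffled)

        # Naive left-to-right summation has no such guarantee:
        print(sum(values) == sum(shuffled))  # frequently False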

    Read the article

  • How to optimize indexing of a large number of DB records using Zend_Lucene and Zend_Paginator

    - by jdichev
    I have a script, deployed on a host and run via cron, that indexes all the records in a database table - the index is later used both by the front end of the site and by back-end operations. After the run, the index is about 3-4 MB. The problem is that the script takes a lot of resources (a CPU load of 30+ and a good chunk of memory) and slows the machine down. My question is how to optimize the operation described below. First, a select query is built using the Zend Framework API; this query is then passed to a Paginator factory that returns a paginator, which I use to bound the number of items being indexed at once rather than iterating over too many items. The script iterates over the current page's items with a foreach loop until it reaches the end, then fetches the next page and starts again. I suspect the overhead comes from Zend_Lucene, but I have no idea how this could be improved.
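
    If the cost is per-document segment churn, letting Zend_Search_Lucene buffer documents and merging only at the end usually helps. The batching setters below are real index-object methods in ZF1; the path, field names, and paginator wiring are illustrative:

        <?php
        $index = Zend_Search_Lucene::create('/tmp/search-index');
        $index->setMaxBufferedDocs(500);  // flush to disk every 500 docs
        $index->setMergeFactor(10);       // fewer, larger segment merges

        for ($page = 1; $page <= count($paginator); $page++) {
            $paginator->setCurrentPageNumber($page);
            foreach ($paginator as $row) {
                $doc = new Zend_Search_Lucene_Document();
                $doc->addField(Zend_Search_Lucene_Field::unIndexed('id', $row->id));
                $doc->addField(Zend_Search_Lucene_Field::text('title', $row->title));
                $index->addDocument($doc);
            }
            unset($row);  // let PHP reclaim the page before fetching the next
        }

        $index->optimize();  // the expensive step; do it once, at the end

    Memory-wise, also make sure the paginator issues LIMIT/OFFSET queries (Zend_Paginator_Adapter_DbSelect does) rather than fetching the whole table up front.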

    Read the article

  • Struggling to "clear" a CGLayer -- can it even be done?

    - by Joe Blow
    So I'm doing this repetitively - making a CGLayer, doing some processing, and then releasing it:

        CGLayerRef lair = CGLayerCreateWithContext(
            UIGraphicsGetCurrentContext(), CGSizeMake(1024, 768), NULL);
        CGContextRef cc = CGLayerGetContext(lair);
        // various processing here
        CGContextAddPath(cc, somePath);
        // various processing here
        CGLayerRelease(lair);

    This happens a lot in real time, so surely there is a lot of overhead in making a whole new CGLayer each time? Surely it would be better to keep lair around and start fresh each time? However, I don't know of any way to "erase" a CGLayer back to a blank canvas. There is a function CGContextBeginPath(cc), but it's confusing: it seems to clear only the current path; it does not erase the whole CGLayer. How do you return a CGLayer to a blank canvas? Any ideas?
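
    A sketch of the usual answer: clear the layer's entire rect - in a layer context this resets the pixels to transparent - and then reuse the same CGLayerRef instead of recreating it:

        CGContextRef cc = CGLayerGetContext(lair);
        CGContextClearRect(cc, CGRectMake(0, 0, 1024, 768));  // back to blank
        // ... draw the next frame into cc as before ...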

    Read the article

  • Adding a Jar to ext and using it in Eclipse

    - by Bob Breznak
    I am providing a client with a lab-station image for a school program. There is a library that I'd like to add directly to the JRE so that students can use it without needing to fiddle with the classpath. As this is a basic after-school program, the goal is to get students into programming with as little setup overhead as possible. At this point I have put the library into jre/lib/ext, and Eclipse is able to find it, but it will not allow access to any of the classes. I can see it under the JRE System Library, and its classes and packages show up there, but when I go to use it I get an "Access restriction" error. When I remove the JRE System Library and then add it back in, everything works swimmingly - the library is accessible exactly as intended. Any ideas on how to resolve this?

    Read the article

  • Optimal way to store and pass a date to Javascript

    - by user1493115
    I need to store a date-time value in MySQL and subsequently display it on a webpage. Due to its flexibility, I usually choose to store a Unix timestamp in the database and convert it with PHP's date() to the desired format. This time, however, I would like to use MySQL's DATETIME field (mostly because of the 2038 limit on 32-bit Unix timestamps) and apply the browser's timezone (hence I cannot simply format it on the server and pass the string to the client). I thought of storing the date as a UTC DATETIME in the database and sending it in a well-defined format to the client, where it would be processed further. I would like to avoid a Unix timestamp here, but anything else might add processing overhead. Is there any best practice for date processing in a MySQL/PHP/jQuery environment? Thanks.
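
    A sketch of the hand-off: store UTC in the DATETIME column, emit ISO 8601 with a trailing Z, and let the browser apply its own zone when rendering. The PHP formatting call in the comment and the element id are illustrative, and ISO parsing in new Date() assumes ES5-era browsers:

        // Server side (PHP): gmdate('Y-m-d\TH:i:s\Z', $utcTimestamp)
        var iso = '2010-04-23T18:25:43Z';   // well-defined, timezone-explicit

        var d = new Date(iso);              // parsed as UTC
        // Render in the visitor's local timezone:
        document.getElementById('ts').textContent = d.toLocaleString();

    Older browsers need the six fields parsed by hand and fed to Date.UTC(), but the wire format can stay the same.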

    Read the article

  • 2D Game: Fast(est) way to find x closest entities for another entity - huge amount of entities, high

    - by Pygmy
    I'm working on a 2D game that has a huge number of dynamic entities. For fun's sake, let's call them soldiers, and let's say there are 50000 of them (a number I just thought up; it might be much more or much less). All these soldiers move every frame according to rules - think boids / flocking / steering behaviour. To update each soldier's movement, I need the X soldiers that are closest to the one I'm processing. What would be the best spatial hierarchy to store them in to facilitate calculations like this without too much overhead? (All entities are updated/moved every frame, so it has to handle dynamic entities very well.)
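
    For fully dynamic sets, a uniform grid rebuilt from scratch every frame is hard to beat: filling is O(n), and a neighbour query touches only the cells around the query point. A minimal sketch - pick the cell size on the order of your query radius:

        #include <cmath>
        #include <cstdint>
        #include <unordered_map>
        #include <vector>

        struct Grid {
            float cell;                                          // cell edge length
            std::unordered_map<std::uint64_t, std::vector<int>> cells;

            std::uint64_t key(float x, float y) const {
                auto cx = static_cast<std::int32_t>(std::floor(x / cell));
                auto cy = static_cast<std::int32_t>(std::floor(y / cell));
                return (std::uint64_t(std::uint32_t(cx)) << 32) | std::uint32_t(cy);
            }

            void insert(int id, float x, float y) {              // O(1) per soldier
                cells[key(x, y)].push_back(id);
            }

            // Candidates from the 3x3 block of cells around (x, y); the caller
            // ranks them by real distance and keeps the closest k.
            void query(float x, float y, std::vector<int>& out) const {
                for (int dx = -1; dx <= 1; ++dx)
                    for (int dy = -1; dy <= 1; ++dy) {
                        auto it = cells.find(key(x + dx * cell, y + dy * cell));
                        if (it != cells.end())
                            out.insert(out.end(), it->second.begin(), it->second.end());
                    }
            }
        };

    Clearing and reinserting every frame keeps the structure trivially correct for moving entities, which is exactly where trees that need rebalancing start to hurt.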

    Read the article

  • Which file types are worth compressing (zipping) for remote storage? For which of them the compresse

    - by user193655
    I am storing documents in SQL Server in varbinary(max) fields, optionally using FILESTREAM when a user has: (DB_Size + Docs_Size) ~> 0.8 * ExpressEdition_Max_DB_Size. I currently zip all the files, but that decision dates from when the document read/write layer was developed 10 years ago, when storage was more expensive than it is now. Many files are almost as big zipped as unzipped (a zipped PDF is about 95% of the original size). And unzipping has some overhead, which doubles when I also need to "check in"/update the file, because then I must zip it again. So I was thinking of giving users the option to choose, per file type, whether files get zipped, with some meaningful defaults. From my experience I would impose the following rules:

      1) Zip by default: txt, bmp, rtf
      2) Do not zip by default: jpg, jpeg, Microsoft Office files, Open Office files, png, tif, tiff

    Could you suggest other file types, chosen among the most common, or comment on the ones I listed here?

    Read the article

  • Fast, cross-platform timer?

    - by dsimcha
    I'm looking to improve the D garbage collector by adding some heuristics to avoid garbage collection runs that are unlikely to free much. One heuristic I'd like to add is that the GC should not run more than once per X amount of time (maybe once per second or so). For this I need a timer with the following properties:

      • It must be able to grab the current time with minimal overhead. Calling core.stdc.time takes roughly as long as a small memory allocation, so it's not a good option.
      • Ideally, it should be cross-platform (both OS and CPU) for maintenance simplicity.
      • Super-high resolution isn't terribly important. If the times are accurate to maybe 1/4 of a second, that's good enough.
      • It must work in a multithreaded/multi-CPU context. The x86 rdtsc instruction won't work.
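
    In today's druntime, core.time.MonoTime is essentially this timer: a monotonic clock backed by QueryPerformanceCounter, mach_absolute_time, or clock_gettime depending on the platform, cheap to read and valid across threads and CPUs. A sketch of the rate-limit check (synchronization around the shared timestamp omitted for brevity):

        import core.time;

        __gshared MonoTime lastCollection;

        bool shouldCollect()
        {
            immutable now = MonoTime.currTime;
            if (now - lastCollection < 1.seconds)  // at most one GC per second
                return false;
            lastCollection = now;
            return true;
        }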

    Read the article

  • Unity and web service

    - by zachary
    I had this awesome idea... but I'm afraid it may actually be a bad one. We use Unity for dependency injection, and I generate interfaces for my web services (via partial classes) so that they can be mocked. What I want to do is register the web services in Unity and obtain them via dependency injection. What do you think? Is there too much overhead somewhere? Memory leaks? Is this a bad idea?
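
    A sketch of the registration with classic Unity's API; the interface and client names stand in for your generated proxies:

        using Microsoft.Practices.Unity;

        var container = new UnityContainer();

        // Default (transient) lifetime: a fresh proxy per resolve. Avoid the
        // singleton lifetime for clients that hold network channels, or a
        // faulted/undisposed channel lives as long as the container does.
        container.RegisterType<IInventoryService, InventoryServiceClient>();

        var svc = container.Resolve<IInventoryService>();

    The container itself adds negligible per-resolve overhead; lifetime management of the service clients is where the leak risk actually sits.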

    Read the article

  • Forward a call to a webservice to another webservice?

    - by Luhmann
    I have an HTTPS web service behind a firewall on a machine (A) that I cannot access directly, but I do have access (via VPN) to a machine on the same network (B), from which I can call the web service on machine A. What is the best way of talking to the web service on machine A from the outside via machine B? I could obviously create a service with a matching interface on machine B, call the methods of the web service on machine A, and return the results - but I fear the overhead. Is there another way? Can I somehow forward the request?
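
    If machine B runs an SSH server (an assumption - any TCP relay on B achieves the same), a port forward avoids writing a wrapper service entirely; B just shuffles bytes, so the overhead is one extra network hop. Hostnames below are illustrative:

        # Run from your machine over the VPN: expose A's HTTPS port locally.
        ssh -N -L 8443:machine-a.internal:443 user@machine-b

        # Then call the service as https://localhost:8443/...
        # (expect certificate-name warnings, since the cert names machine A).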

    Read the article

  • What's the best (most efficient) way in ASP.NET to return a whole page into tabbed content?

    - by ijjo
    What I want to do is, every time I click on a tab, replace the content area with pretty much a whole new page. I don't want a full page load, so I want to do it with Ajax, but I'm used to sending back small JSON payloads via page methods. I'm not sure how I would construct a whole new page and return it via Ajax; I would like to simply assign the returned content to a div and be done with it. What's the best way to do this with the least overhead (I know the ScriptManager does Ajax in some inefficient ways)? Or is it better to load the tabbed content in an iframe? FYI, I'm already using jQuery to call lightweight page methods on my ASP.NET page, and that works great.
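
    A sketch of the lightweight route: have the server render just the pane's HTML (a page or handler that emits no <html>/<head> wrapper - the markup and selectors here are illustrative) and let jQuery swap it in, bypassing the ScriptManager entirely:

        $('#tabs a').click(function (e) {
            e.preventDefault();
            // Fetch the server-rendered fragment and replace the pane.
            $('#content').load($(this).attr('href'));
        });

    Compared with an iframe, you keep one scrollbar and one history, and only the fragment crosses the wire.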

    Read the article

  • SQL: unite fields into one result

    - by none
    I know this is "not built in" and not the way a DBA would think of it, but coming at it as a programmer: how can I select, from three fields, the one that is not null, into a single result field? Say we have a table with fields f1, f2, f3, f4, f5, where f2, f3, and f4 are of the same type, and the table contains the tuples:

        (key1, null, null, value1, value2)
        (key2, null, value3, value4, value5)
        (key3, null, null, null, value6)

    Asking for the first tuple should return (key1, value1, value2); for key2 we should get (key2, value3, value5); and for key3 we should get (key3, null, value6). In other words, the middle result field is filled by priority: f2 if it has a value, otherwise f3, otherwise f4. The main goal is to get the result into a single field and avoid the extra work at the consuming end.
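
    This is exactly what COALESCE does: it returns the first non-null of its arguments, matching the f2-then-f3-then-f4 priority above (the table name is illustrative):

        SELECT f1,
               COALESCE(f2, f3, f4) AS merged,
               f5
        FROM   the_table;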

    Read the article

  • Securing database keys for client-side processing

    - by danp
    I have a tree of information which is sent to the client in a JSON object. In that object, I don't want to expose raw IDs coming from the database. I thought of matching on a hash of the ID plus a field in the object (the title, for example) or a salt, but I'm worried this might have a serious effect on processing overhead:

        SELECT * FROM `things` WHERE md5(concat(id, 'some salt')) = md5('1some salt');

    Is there a standard practice for obscuring IDs in this kind of situation?
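
    The overhead worry is justified: hashing inside WHERE, as above, defeats any index on id and forces a full scan. The standard move is to compute the opaque token once, store it, and index it. A sketch - the column name and token recipe are illustrative:

        ALTER TABLE things ADD COLUMN public_id CHAR(32) NOT NULL;
        UPDATE things SET public_id = MD5(CONCAT(id, 'some salt'));
        CREATE UNIQUE INDEX idx_things_public_id ON things (public_id);

        -- Lookups become an ordinary indexed equality test:
        SELECT * FROM things WHERE public_id = 'acbd18db4cc2f85cedef654fccc4a4d8';

    If users must not be able to derive other rows' tokens, a keyed hash (HMAC) or a random token is stronger than md5(id + salt).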

    Read the article

  • Memcached - how to deal with adding/deploying servers

    - by Industrial
    Hi everybody! How do you handle replacing/adding/removing memcached nodes in your production applications? I will have a number of applications - cloned and customized to each customer's needs - running on one and the same web server, so I'd guess there will come a day when some of the nodes have to change. Here's how memcached is normally populated:

        $m = new Memcached();
        $servers = array(
            array('mem1.domain.com', 11211, 33),
            array('mem2.domain.com', 11211, 67)
        );
        $m->addServers($servers);

    My initial idea is to populate the $servers array from the database, cached file-based and regenerated once a day or so, with the option to force an update on the next run of the function that wraps the addServers() call. However, I'm guessing this might add some overhead, since disks are quite slow storage... What do you think?
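
    A sketch of the file-backed variant: exporting the list as a PHP file means APC/OPcache serves it from memory, so the disk is only really touched when the file is regenerated (fetchServersFromDb() is an illustrative stand-in for your DB accessor):

        <?php
        function memcachedServers($forceRefresh = false) {
            $cacheFile = '/var/cache/app/memcached_servers.php';
            if ($forceRefresh || !is_file($cacheFile)
                || filemtime($cacheFile) < time() - 86400) {
                $rows = fetchServersFromDb();  // array(array(host, port, weight), ...)
                file_put_contents(
                    $cacheFile,
                    '<?php return ' . var_export($rows, true) . ';',
                    LOCK_EX
                );
            }
            return include $cacheFile;
        }

        $m = new Memcached();
        $m->addServers(memcachedServers());

    If nodes really will come and go, also consider $m->setOption(Memcached::OPT_LIBKETAMA_COMPATIBLE, true), so consistent hashing remaps only a small slice of keys when the pool changes instead of invalidating most of the cache.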

    Read the article

  • How do I make a "simple" throughput J2EE filter?

    - by Tommy
    I'm looking to create a filter that can give me two things: the number of requests per minute, and the average response time per minute. I already get the individual readings; I'm just not sure how to add them up. My filter captures every request and records the time each request takes:

        public void doFilter(ServletRequest request, ServletResponse response,
                             FilterChain chain) throws IOException, ServletException {
            long start = System.currentTimeMillis();
            chain.doFilter(request, response);
            long stop = System.currentTimeMillis();
            String time = Util.getTimeDifferenceInSec(start, stop);
        }

    This information will be used to create some pretty Google Chart charts. I don't want to store the data in any database - I just want a way to get the current numbers out when requested. As this is a high-volume application, low overhead is essential. I'm assuming my application server doesn't provide this information.
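
    A sketch of one lock-free way to aggregate: two AtomicLongs that the filter bumps on every request, drained by whatever serves the chart (class and method names are illustrative):

        import java.util.concurrent.atomic.AtomicLong;

        public final class MinuteStats {
            private final AtomicLong requests = new AtomicLong();
            private final AtomicLong totalMillis = new AtomicLong();

            // Called from doFilter: stats.record(stop - start);
            public void record(long elapsedMillis) {
                requests.incrementAndGet();
                totalMillis.addAndGet(elapsedMillis);
            }

            /** Returns {count, avgMillis} and resets - call once per minute. */
            public long[] drain() {
                long count = requests.getAndSet(0);
                long total = totalMillis.getAndSet(0);
                return new long[] { count, count == 0 ? 0 : total / count };
            }
        }

    Atomic increments cost nanoseconds, so the filter's overhead stays negligible even under heavy load.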

    Read the article

  • Giving users a "reputation system" - Should I... ?

    - by RadiantHex
    Hi folks, I'm thinking of adding a reputation system to a web application. The site is already in use, so I'm trying to be careful about my choices. I'm developing in Django/Python, which may be relevant. Reputation is generated by all actions that contribute to the site, similar to Stack Overflow's system. I know there are literally millions of ways of implementing this, which is why I feel quite lost. Two alternatives I am not sure about:

      • Keep track of the reasons why reputation was incremented
      • Ignore the reasons, to reduce the complexity of the site and the overhead

    A few pointers and directions would be very much appreciated!
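
    A sketch of the first option in Django terms - one row per event, so the "why" is never lost, with the running total denormalized for cheap reads (model and field names are illustrative):

        from django.conf import settings
        from django.db import models

        class ReputationEvent(models.Model):
            user = models.ForeignKey(settings.AUTH_USER_MODEL,
                                     on_delete=models.CASCADE)
            delta = models.IntegerField()                  # +10, -2, ...
            reason = models.CharField(max_length=32)       # e.g. "answer_upvoted"
            created_at = models.DateTimeField(auto_now_add=True)

        class Profile(models.Model):
            user = models.OneToOneField(settings.AUTH_USER_MODEL,
                                        on_delete=models.CASCADE)
            reputation = models.IntegerField(default=0)    # cached sum of deltas

    The event table costs one insert per action; if that ever becomes a burden, old events can be archived without losing the cached totals - whereas retrofitting reasons later is much harder.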

    Read the article

  • Are spinlocks a good choice for a memory allocator?

    - by dsimcha
    I've suggested to the maintainers of the D programming language runtime a few times that the memory allocator/garbage collector should use spinlocks instead of regular OS critical sections. This hasn't really caught on. Here are the reasons I think spinlocks would be better:

      • At least in synthetic benchmarks I ran, they are several times faster than OS critical sections when there's contention for the memory allocator/GC lock. Edit: empirically, using spinlocks didn't even have measurable overhead in a single-core environment, probably because locks need to be held for such a short time in a memory allocator.
      • Memory allocations and similar operations usually take a small fraction of a timeslice, and even a small fraction of the time a context switch takes, making it silly to context switch on contention.
      • A garbage collection in the implementation in question stops the world anyhow, so there won't be any spinning during a collection.

    Are there any good reasons not to use spinlocks in a memory allocator/garbage collector implementation?
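
    For reference, a minimal test-and-test-and-set spinlock of the kind being proposed, sketched against today's core.atomic (production code would add a pause hint and bounded spinning with an OS-lock fallback under heavy contention):

        import core.atomic;

        void spinLock(shared(bool)* locked)
        {
            while (!cas(locked, false, true))                    // try to acquire
                while (atomicLoad!(MemoryOrder.raw)(*locked)) {} // spin on reads only
        }

        void spinUnlock(shared(bool)* locked)
        {
            atomicStore(*locked, false);
        }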

    Read the article

  • Why is numpy c extension slow?

    - by Bitwise
    I am working with large numpy arrays, and some native numpy operations are too slow for my needs (for example, simple operations such as bitwise A & B). I started looking into writing C extensions to try to improve performance. As a test case, I tried the example given here, implementing a simple trace calculation. I was able to get it to work, but was surprised by the performance: for a (1000, 1000) numpy array, numpy.trace() was about 1000 times faster than the C extension! This happens whether I run it once or many times. Is this expected? Is the C-extension overhead really that bad? Any ideas how to speed things up?
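
    A factor-of-1000 gap usually means the extension walks the array element-by-element through the Python C API (one PyObject per element) instead of striding over the raw buffer the way numpy's own compiled loops do - per-element API overhead, not arithmetic, dominates. It's also worth checking whether the original A & B problem is really temporary-allocation overhead, which can be removed without any C at all. A sketch:

        import numpy as np

        a = np.random.randint(0, 1 << 30, size=(1000, 1000)).astype(np.int32)
        b = np.random.randint(0, 1 << 30, size=(1000, 1000)).astype(np.int32)

        # `a & b` allocates a fresh result array on every call; reusing a
        # preallocated output buffer removes that cost from hot loops.
        out = np.empty_like(a)
        np.bitwise_and(a, b, out=out)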

    Read the article

  • C++ performance when accessing class members

    - by Dr. Acula
    I'm writing something performance-critical and wanted to know whether it makes a difference if I use:

        int test(int a, int b, int c) {
            // Do millions of calculations with a, b, c
        }

    or:

        class myStorage {
        public:
            int a, b, c;
        };

        int test(myStorage values) {
            // Do millions of calculations with values.a, values.b, values.c
        }

    Does this result in basically similar code? Is there extra overhead in accessing the class members? I'm sure this is clear to an expert in C++, so I won't try to write an unrealistic benchmark for it right now.
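
    A sketch of the usual guidance: with optimization enabled, the compiler keeps hot members in registers just like locals, so the member access itself costs nothing; what the second version does add is a copy of the whole object at the call site, which passing by const reference avoids:

        struct myStorage { int a, b, c; };

        // No copy at the call, and the optimizer hoists the member loads
        // out of the loop exactly as it would for three local ints.
        int test(const myStorage& v) {
            int acc = 0;
            for (int i = 0; i < 1000000; ++i)
                acc += v.a * v.b + v.c;
            return acc;
        }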

    Read the article
