Search Results

Search found 14643 results on 586 pages for 'performance comparison'.

Page 469/586 | < Previous Page | 465 466 467 468 469 470 471 472 473 474 475 476  | Next Page >

  • Java NullPointerException when traversing a non-null recordset

    - by Tim
    Hello again - I am running a query on Sybase ASE that produces a ResultSet, which I then traverse, writing the contents out to a file. Sometimes this throws a NullPointerException stating that the ResultSet is null - but only after printing out one or two records. Other times, with the exact same input, I receive no errors. I have been unable to reproduce this error consistently. The error points to the line: output.print(rs.getString(1)); It seems to happen when the query takes a little longer to run, for some reason. The result sets returned so far have been very small (4 to 7 records). Sometimes I have to run the app 3 or 4 times and then the errors just stop, as though the query were getting "warmed up". I've run the query manually and there don't appear to be any performance problems. Thanks again!
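
    For reference, a minimal defensive-traversal sketch in plain JDBC (assuming a java.sql.Connection, the single-column query from the question, and an illustrative PrintWriter; a SQL NULL in column 1 makes rs.getString(1) return null, which is worth guarding against before printing):

        import java.io.PrintWriter;
        import java.sql.Connection;
        import java.sql.ResultSet;
        import java.sql.Statement;

        public class ResultSetDump {
            public static void dump(Connection conn, String sql, PrintWriter output) throws Exception {
                try (Statement stmt = conn.createStatement();
                     ResultSet rs = stmt.executeQuery(sql)) {
                    while (rs.next()) {
                        String value = rs.getString(1);              // null if the column is SQL NULL
                        output.println(value != null ? value : "");  // guard before writing
                    }
                }  // try-with-resources closes rs and stmt even if an exception is thrown
            }
        }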

    Read the article

  • Good reasons not to use XIB files?

    - by mystify
    Are there any good reasons why I should not use XIB / NIB files with a highly customized UI, extensive animations, and very low memory footprint requirements? As a beginner I started with XIBs. Then I found I couldn't do everything I needed in them, and it got really hard to customize things the way I wanted. So in the end I threw all my XIBs away and did it all programmatically. Now when someone asks me if XIB is good, I generally say: yeah, if you want to make crappy, boring interfaces and don't care too much about performance, go ahead. But what else could be a reason not to use XIB? Am I the only iPhone developer who prefers doing everything programmatically for these reasons?

    Read the article

  • What parallel programming model do you recommend today to take advantage of the manycore processors

    - by Doctor J
    If you were writing a new application from scratch today, and wanted it to scale to all the cores you could throw at it tomorrow, what parallel programming model/system/language/library would you choose? Why? I am particularly interested in answers along these axes:
    - Programmer productivity / ease of use (can mortals successfully use it?)
    - Target application domain (what problems is it (not) good at?)
    - Concurrency style (does it support tasks, pipelines, data parallelism, messages...?)
    - Maintainability / future-proofing (will anybody still be using it in 20 years?)
    - Performance (how does it scale on what kinds of hardware?)
    I am being deliberately vague about the nature of the application, in anticipation of getting good general answers useful for a variety of applications.

    Read the article

  • What is the difference between "LINQ to Entities", "LINQ to SQL" and "LINQ to Dataset"?

    - by Marcel
    I've been working with LINQ for quite a while now, but the real differences between the mentioned flavours of LINQ remain a bit of a mystery to me. A successful answer would contain a short differentiation between them: what is the main goal of each flavour, what is the benefit, and is there a performance impact... P.S. I know that there are a lot of information sources out there, but I'm looking for a kind of "cheat sheet" which tells a newbie where to head for a specific goal.

    Read the article

  • Attributed strings in UITableViewCells without WebView?

    - by arnekolja
    Hello, does anyone know if there's a way with 3.0+ to display attributed strings within a UITableViewCell without using a UIWebView? I need to display a string with linked, tappable substrings as the typical detailTextLabel. I wouldn't mind exchanging this UILabel for another type of view, but I think a UIWebView could just be too slow when rendering a table with hundreds of cells. Or does someone have the opposite experience here? So my question is: what's the best way to achieve mixed strings in a very large table without a great performance hit? I've searched for this for almost a whole day now, but I can only find old posts mentioning that there are no attributed strings on the iPhone (outdated, as this was pre-3.0) and/or saying that they use a UIWebView for that. But really, I don't think that would perform very well on large tables, would it? Many, many thanks in advance, Arne

    Read the article

  • Database: relational/not relational/object oriented... What to choose?

    - by Damian
    I'm porting a website that I made for App Engine to run on a dedicated server. It is coded in Java and I'm looking for a database to replace Google Datastore. My first thought was MySQL because everybody uses it, but I don't like SQL and I think I would feel more comfortable using an OODB or something else. With Google Datastore I could modify my models and not worry about the database definition at all. I know that isn't possible with MySQL, and I don't want to lose that. And if I use an OODB, which one should I use? What about performance compared to MySQL? Well, any idea or tip will really help me, since I know nothing about databases.

    Read the article

  • JavaScript filter vs map problem

    - by graham.reeds
    As a continuation of my min/max across an array of objects question, I was wondering about the performance comparison of filter vs map. So I put together a test on the values in my code, as I was going to look at the results in Firebug. This is the code:

        var _vec = this.vec;
        min_x = Math.min.apply(Math, _vec.filter(function(el){ return el["x"]; }));
        min_y = Math.min.apply(Math, _vec.map(function(el){ return el["x"]; }));

    The mapped version returns the correct result; however, the filtered version returns NaN. Breaking it out, stepping through, and finally inspecting the results, it appears that the inner function returns the x property of each element, but the actual array returned from filter is the unfiltered _vec. I believe my usage of filter is correct - can anyone else see my problem?

    Read the article

  • Trying to reduce the speed overhead of an almost-but-not-quite-int number class

    - by Fumiyo Eda
    I have implemented a C++ class which behaves very similarly to the standard int type. The difference is that it has an additional concept of "epsilon", which represents some tiny value that is much less than 1, but greater than 0. One way to think of it is as a very wide fixed-point number with 32 MSBs (the integer parts), 32 LSBs (the epsilon parts), and a huge sea of zeros in between. The following class works, but introduces a ~2x speed penalty in the overall program. (The program includes code that has nothing to do with this class, so the actual speed penalty of this class is probably much greater than 2x.) I can't paste the code that is using this class, but I can say the following: +, -, +=, <, > and >= are the only heavily used operators. Use of setEpsilon() and getInt() is extremely rare. * is also rare, and does not even need to consider the epsilon values at all. Here is the class:

        #include <limits>

        struct int32Uepsilon {
          typedef int32Uepsilon Self;

          int32Uepsilon ()             { _value = 0; _eps = 0; }
          int32Uepsilon (const int &i) { _value = i; _eps = 0; }

          void setEpsilon() { _eps = 1; }

          Self operator+(const Self &rhs) const {
            Self result = *this;
            result._value += rhs._value;
            result._eps   += rhs._eps;
            return result;
          }

          Self operator-(const Self &rhs) const {
            Self result = *this;
            result._value -= rhs._value;
            result._eps   -= rhs._eps;
            return result;
          }

          Self operator-() const {
            Self result = *this;
            result._value = -result._value;
            result._eps   = -result._eps;
            return result;
          }

          Self operator*(const Self &rhs) const {
            return this->getInt() * rhs.getInt();  // XXX: discards epsilon
          }

          bool operator<(const Self &rhs) const {
            return (_value < rhs._value) ||
                   (_value == rhs._value && _eps < rhs._eps);
          }

          bool operator>(const Self &rhs) const {
            return (_value > rhs._value) ||
                   (_value == rhs._value && _eps > rhs._eps);
          }

          bool operator>=(const Self &rhs) const {
            return (_value >= rhs._value) ||
                   (_value == rhs._value && _eps >= rhs._eps);
          }

          Self &operator+=(const Self &rhs) {
            this->_value += rhs._value;
            this->_eps   += rhs._eps;
            return *this;
          }

          Self &operator-=(const Self &rhs) {
            this->_value -= rhs._value;
            this->_eps   -= rhs._eps;
            return *this;
          }

          int getInt() const { return _value; }

        private:
          int _value;
          int _eps;
        };

        namespace std {
          template<>
          struct numeric_limits<int32Uepsilon> {
            static const bool is_signed = true;
            static int max() { return 2147483647; }
          };
        }

    The code above works, but it is quite slow. Does anyone have any ideas on how to improve performance? There are a few hints/details I can give that might be helpful:
    - 32 bits are definitely insufficient to hold both _value and _eps. In practice, up to 24 ~ 28 bits of _value are used and up to 20 bits of _eps are used.
    - I could not measure a significant performance difference between using int32_t and int64_t, so memory overhead itself is probably not the problem here.
    - Saturating addition/subtraction on _eps would be cool, but isn't really necessary.
    - Note that the signs of _value and _eps are not necessarily the same! This broke my first attempt at speeding this class up.
    - Inline assembly is no problem, so long as it works with GCC on a Core i7 system running Linux!

    Read the article

  • Setting up Netbeans/Eclipse for Linux Kernel Development

    - by red.october
    Hi: I'm doing some Linux kernel development, and I'm trying to use Netbeans. Despite its declared support for Make-based C projects, I cannot create a fully functional Netbeans project, even after having Netbeans analyze a kernel binary that was compiled with full debugging information. Problems include:
    - Files are wrongly excluded: some files are incorrectly greyed out in the project, which means Netbeans does not believe they should be included, when in fact they are compiled into the kernel. The main problem is that Netbeans will then miss any definitions that exist in these files, such as data structures and functions, as well as macro definitions.
    - Cannot find definitions: pretty self-explanatory - often, Netbeans cannot find the definition of something. This is partly a result of the above problem.
    - Can't find header files: self-explanatory.
    I'm wondering if anyone has had success setting up Netbeans for Linux kernel development, and if so, what settings they used. Ultimately, I'm looking for Netbeans to be able to either parse the Makefile (preferred) or extract the debug information from the binary (less desirable, since this can significantly slow down compilation), and automatically determine which files are actually compiled and which macros are actually defined. Then, based on this, I would like to be able to find the definitions of any data structure, variable, function, etc. and have complete auto-completion. Let me preface this question with some points:
    - I'm not interested in solutions involving Vim/Emacs. I know some people like them, but I'm not one of them.
    - As the title suggests, I would also be happy to know how to set up Eclipse to do what I need.
    - While I would prefer perfect coverage, something that only misses one in a million definitions is obviously fine.
    - SO's useful "Related Questions" feature has informed me that the following question is related: http://stackoverflow.com/questions/149321/what-ide-would-be-good-for-linux-kernel-driver-development. Upon reading it, that question is more of a comparison between IDEs, whereas I'm looking for how to set up a particular IDE. Even so, the user Wade Mealing seems to have some expertise in working with Eclipse on this kind of development, so I would certainly appreciate his (and of course all of your) answers.
    Cheers

    Read the article

  • Caching for a Custom Repository Adapter for WebSphere Portal Virtual Member Manager

    - by Spike Williams
    I'm looking at writing a custom repository adapter to interact with Virtual Member Manager on WebSphere Portal 6.1. Basically, it's a layer that takes a request in the form of a commonj.sdo.DataObject and passes it on to an external web service, to get various information about our logged-in users that is not otherwise available in LDAP. I'm concerned about the performance hit of going to a service every time we want to pull some permission from the back end. My question is: can Virtual Member Manager handle caching of data going in and out of custom repository adapters, or is that something I'm going to have to build into the adapter myself?

    Read the article

  • Why does Java not have any destructor like C++?

    - by Abhishek Jain
    Java has its own garbage collection implementation, so it does not require a destructor like C++. This makes Java developers lazy about implementing memory management, and garbage collection is very expensive. We could still have a destructor alongside the garbage collector, in which the developer could free resources, saving the garbage collector some work and possibly improving application performance. Why does Java not provide any destructor-like mechanism? The developer does not have control over the GC, but he/she can create objects - so why not also give them the ability to destroy objects?
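
    For what it's worth, the closest thing Java offers to deterministic cleanup is not a destructor but AutoCloseable plus try-with-resources (Java 7+). A minimal sketch, with a purely illustrative "native resource":

        public class NativeBuffer implements AutoCloseable {
            private long handle = allocate();          // hypothetical resource acquisition

            @Override
            public void close() {                      // runs deterministically, unlike finalize()
                release(handle);
                handle = 0;
            }

            private static long allocate()      { return 42L; }   // stand-in for a real allocation
            private static void release(long h) { /* free it */ }

            public static void main(String[] args) {
                try (NativeBuffer buf = new NativeBuffer()) {
                    // use buf; close() is called automatically when this block exits
                }
            }
        }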

    Read the article

  • Java 2D clip area to shape

    - by user2923880
    I'm quite new to graphics in Java and I'm trying to create a shape that clips to the bottom of another shape - the effect I'm after is a white line at the base of the shape that is clipped within the rounded edges. The current way I am doing this is like so:

        g2.setColor(gray);
        Shape shape = getShape(); // round rectangle
        g2.fill(shape);
        Rectangle rect = new Rectangle(shape.getBounds().x, shape.getBounds().y, width, height - 3);
        Area area = new Area(shape);
        area.subtract(new Area(rect));
        g2.setColor(white);
        g2.fill(area);

    I'm still experimenting with the clip methods but I can't seem to get it right. Is this current method OK (performance-wise, since the component repaints quite often), or is there a more efficient way? Thanks in advance.
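
    One possible clip-based sketch (assuming the same getShape(), width and height as above; the class and method names are illustrative): intersect the clip with the rounded shape once, then fill the bottom band with a plain rectangle, which Graphics2D confines to the rounded outline.

        import java.awt.Color;
        import java.awt.Graphics2D;
        import java.awt.Rectangle;
        import java.awt.Shape;

        final class RoundedPainter {
            // shape, width and height play the same roles as in the question.
            static void paintWithBottomBand(Graphics2D g2, Shape shape, int width, int height) {
                g2.setColor(Color.GRAY);
                g2.fill(shape);                               // the rounded rectangle itself

                Shape oldClip = g2.getClip();
                g2.clip(shape);                               // intersect the current clip with the shape
                g2.setColor(Color.WHITE);
                Rectangle b = shape.getBounds();
                g2.fillRect(b.x, b.y + height - 3, width, 3); // bottom 3px band, trimmed to the round edges
                g2.setClip(oldClip);                          // restore the previous clip
            }
        }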

    Read the article

  • Speed up the loop operation in R

    - by Kay
    Hi, I have a big performance problem in R. I wrote a function that iterates over a data.frame object. It simply adds a new column to the data.frame and accumulates something (a simple operation). The data.frame has roughly 850,000 rows. My PC has been working on it for about 10 hours now and I have no idea about the runtime.

        dayloop2 <- function(temp){
          for (i in 1:nrow(temp)){
            temp[i,10] <- i
            if (i > 1) {
              if ((temp[i,6] == temp[i-1,6]) & (temp[i,3] == temp[i-1,3])) {
                temp[i,10] <- temp[i,9] + temp[i-1,10]
              } else {
                temp[i,10] <- temp[i,9]
              }
            } else {
              temp[i,10] <- temp[i,9]
            }
          }
          names(temp)[names(temp) == "V10"] <- "Kumm."
          return(temp)
        }

    Any ideas how to speed up this operation?

    Read the article

  • EJB3.1 Remote invocation - is it distributed automatically? is it expensive?

    - by Hank
    I'm building a JEE6 application with performance and scalability at the forefront of my mind. Business logic and the JPA2 facade are held in stateless session beans (EJB 3.1). As of right now, the SLSBs implement only @Remote interfaces. When a bean needs to access another bean, it does so via RMI. My reasoning behind this is the assumption that, once the application runs on a bunch of clustered application servers, the RMI part allows the execution to be distributed across the whole cluster automagically. Is that a correct assumption? I'm fine with dealing with the downsides of that (objects lose their entityManager session, pass-by-value), at least I think so. But I am wondering if constant remote invocation isn't adding more load than necessary.
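
    For illustration, a hedged sketch of one common alternative: give a bean both a local and a remote business interface, so co-located beans call through the local view (pass-by-reference, no RMI marshalling) while genuinely remote clients still use the remote view. The interface and bean names here are made up, and each type would live in its own source file.

        import javax.ejb.Local;
        import javax.ejb.Remote;
        import javax.ejb.Stateless;

        @Remote
        public interface OrderServiceRemote {
            void placeOrder(String orderId);
        }

        @Local
        public interface OrderServiceLocal {
            void placeOrder(String orderId);
        }

        @Stateless
        public class OrderServiceBean implements OrderServiceLocal, OrderServiceRemote {
            @Override
            public void placeOrder(String orderId) {
                // business logic here; co-located beans would inject OrderServiceLocal via @EJB
            }
        }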

    Read the article

  • How would MVVM be for games?

    - by Benny Jobigan
    Particularly for 2D games, and particularly Silverlight/WPF games. If you think about it, you can divide a game object into its view (the graphic on the screen) and a view-model/model (the state, AI, and other data for the object). In Silverlight, it seems common to make each object a user control, putting the model and view into a single object. I suppose the advantage of this is simplicity, but perhaps it's less clean or has some disadvantages in terms of the underlying "game engine". What are your thoughts on this? What are some advantages and disadvantages of using the MVVM pattern for game development? How about performance? All thoughts are welcome.

    Read the article

  • Lock HTML select element, allow value to be sent on submit

    - by ILMV
    I have a select box (for a customer field) on a complex order form. Once the user starts to add lines to the order, they should not be allowed to change the customer select box (unless all lines are deleted). My immediate thought was to use the disabled attribute, but when the box is disabled the selected value is no longer passed to the target. When the problem arose a while ago, one of the other developers worked around it by looping through all the options and disabling all but the selected one; sure enough the value was passed to the target, and we've been using that since. But now I'm looking for a proper solution: I don't want to loop through all the options because our data is expanding and it's starting to introduce performance issues, and I'd prefer not to re-enable this / all the elements when the submit button is hit. How can I lock the input whilst maintaining the selected option and passing that value to the target script? I would prefer a non-JavaScript solution if possible, but if needed we are running jQuery 1.4.2, so that could be used.

    Read the article

  • How do I temporarily monkey with a global module constant?

    - by Daniel
    Greetings, I want to tinker with the global memcache object, and I ran into the following problems:
    - Cache is a constant
    - Cache is a module
    - I only want to modify the behavior of Cache globally for a small section of code, for a possible major performance gain.
    Since Cache is a module, I can't re-assign it or encapsulate it. I would like to do this, deep in a controller method:

        # ... code ...
        old_cache = Cache
        Cache = MyCache.new
        # ... code that should hit MyCache ...
        Cache = old_cache
        # ... code ...

    However, since Cache is a constant, I'm forbidden to change it. Threading is not an issue at the moment. :) Would it be "good manners" for me to just alias_method the special behavior I need for a small section of code and then later unalias it again? That doesn't pass the smell test IMHO. Does anyone have any ideas? TIA, -daniel

    Read the article

  • Google Web Optimizer -- How long until winning combination?

    - by Django Reinhardt
    I've had an A/B test running in Google Web Optimizer for six weeks now, and there's still no end in sight. Google is still saying: "We have not gathered enough data yet to show any significant results. When we collect more data we should be able to show you a winning combination." Is there any way of telling how close Google is to making up its mind? (Does anyone know what algorithm it uses to decide if there have been any "high confidence winners"?) According to the Google help documentation: "Sometimes we simply need more data to be able to reach a level of high confidence. A tested combination typically needs around 200 conversions for us to judge its performance with certainty." But all of our combinations have over 200 conversions at the moment:
    - 230 / 4061 (Original)
    - 223 / 3937 (Variation 1)
    - 205 / 3984 (Variation 2)
    - 205 / 4007 (Variation 3)
    How much longer is it going to have to run? Thanks for any help.

    Read the article

  • How to work with JavaScript typed arrays without using a for loop

    - by ramesh babu
        var sendBuffer = new ArrayBuffer(4096);
        var dv = new DataView(sendBuffer);
        dv.setInt32(0, 1234);
        var service = svcName;
        for (var i = 0; i < service.length; i++) {
            dv.setUint8(i + 4, service.charCodeAt(i));
        }
        ws.send(sendBuffer);

    How can I do this without using a for loop? The for loop hurts performance when working with huge amounts of data.

    Read the article

  • Is loading a video in a browser multithreaded?

    - by mwilcox
    It's hard to know what is multithreaded in a browser and what isn't. It seems that while a video streams or progressively downloads, it does not affect page performance, so my guess is that it is. Note I'm using Flash video, but this is really about video in general. Any other tips on what else is multithreaded (image loads?) would also be helpful. I know JavaScript is not, and I thought Flash wasn't, but I heard somewhere that it may be (or that it could be done) - though I think that source was not well informed.

    Read the article

  • How to scale an image (in data URI format) in JavaScript (real scaling, not using styling)

    - by 103067513055141045393
    We are capturing a visible tab in a Chrome browser (using the extensions API chrome.tabs.captureVisibleTab) and receiving a snapshot in the data URI scheme (a Base64-encoded string). Is there a JavaScript library that can be used to scale an image down to a certain size? Currently we are styling it via CSS, but have to pay a performance penalty, as the pictures are mostly 100 times bigger than required. An additional concern is the load on the localStorage we use to save our snapshots. Does anyone know of a way to process these data-URI-formatted pictures and reduce their size by scaling them down? References:
    - Data URI scheme: http://en.wikipedia.org/wiki/Data_URI_scheme
    - Chrome Extensions API: http://code.google.com/chrome/extensions/tabs.html
    - The "Recently Closed Tabs" Chrome Extension: http://code.google.com/p/recently-closed-tabs

    Read the article

  • Reading files from an embedded ZIP archive

    - by aix
    I have a ZIP archive that's embedded inside a larger file. I know the archive's starting offset within the larger file and its length. Are there any Java libraries that would enable me to directly read the files contained within the archive? I am thinking along the lines of ZipFile.getInputStream(). Unfortunately, ZipFile doesn't work for this use case since its constructors require a standalone ZIP file. For performance reasons, I cannot copy the ZIP archive into a separate file before opening it. edit: Just to be clear, I do have random access to the file.
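
    As a point of reference, a minimal sketch (assuming the offset is known, as stated) that streams the embedded archive's entries with java.util.zip.ZipInputStream after skipping to the offset. Note this gives sequential streaming of entries, not the ZipFile-style random access asked about; the class and method names are illustrative.

        import java.io.BufferedInputStream;
        import java.io.FileInputStream;
        import java.io.IOException;
        import java.util.zip.ZipEntry;
        import java.util.zip.ZipInputStream;

        public class EmbeddedZipReader {
            // outerFile: the containing file; offset: where the embedded archive starts.
            public static void listEntries(String outerFile, long offset) throws IOException {
                try (FileInputStream fis = new FileInputStream(outerFile)) {
                    long remaining = offset;
                    while (remaining > 0) {                      // position the stream at the archive
                        long skipped = fis.skip(remaining);
                        if (skipped <= 0) throw new IOException("could not reach archive offset");
                        remaining -= skipped;
                    }
                    try (ZipInputStream zis = new ZipInputStream(new BufferedInputStream(fis))) {
                        for (ZipEntry e; (e = zis.getNextEntry()) != null; ) {
                            System.out.println(e.getName());     // or read the entry's bytes from zis here
                            zis.closeEntry();
                        }
                    }
                }
            }
        }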

    Read the article

  • Interpreters: How much simplification?

    - by Ray
    In my interpreter, code like the following

        x=(y+4)*z
        echo x

    parses and "optimizes" down to four single operations performed by the interpreter, pretty much assembly-like:

        add 4 to y
        multiply <last operation result> with z
        set x to <last operation result>
        echo x

    In modern interpreters (for example CPython, Ruby, PHP), how simplified are the "opcodes" that are ultimately run by the interpreter? Could I achieve better performance by keeping the structures and commands for the interpreter more complex and high-level? That would surely be a lot harder, though, wouldn't it?

    Read the article

  • Hibernate criteria with projection not performing query for @OneToMany mapping

    - by Josh
    I have a domain object, Expense, that has a field called initialFields. It's annotated like so:

        @OneToMany(fetch = FetchType.EAGER, cascade = { CascadeType.ALL }, orphanRemoval = true)
        @JoinTable(blah blah)
        private final List<Field> initialFields;

    Now I'm trying to use Projections in order to pull only certain fields for performance reasons, but when I do so the initialFields field is always null. It's the only OneToMany field, and the only field I am trying to retrieve with the projection that behaves this way. If I use a regular HQL query, initialFields is populated appropriately, but of course then I can't limit the fields. Has anyone ever seen anything like this?
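
    For context, a hedged sketch of the kind of Criteria projection described (assuming a Hibernate Session and illustrative scalar properties "id" and "description" on Expense). Projections select scalar columns rather than fully initialized entities, so a mapped collection such as initialFields would typically not be populated this way:

        import java.util.List;
        import org.hibernate.Session;
        import org.hibernate.criterion.Projections;

        public class ExpenseQueries {
            // Each row comes back as an Object[] of the projected scalar values,
            // not as an Expense with its collections initialized.
            @SuppressWarnings("unchecked")
            public static List<Object[]> loadSummaries(Session session) {
                return session.createCriteria(Expense.class)
                        .setProjection(Projections.projectionList()
                                .add(Projections.property("id"))
                                .add(Projections.property("description")))
                        .list();
            }
        }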

    Read the article

  • What goes between SQL Server and Client?

    - by worlds-apart89
    This question is an updated version of a previous question I asked on here. I am new to the client-server model with SQL Server as the relational database. I have read that public access to SQL Server is not secure. If direct access to the database is not good practice, then what kind of layer should be placed between the server and the client? Note that I have a desktop application that will serve as the client and a remote SQL Server database that will provide data to it. The client will input their username and password in order to see their data. I have heard of terms like VPN, ISA, TMG, Terminal Services, proxy server, and so on. I need a fast and secure n-tier architecture. P.S. I have heard of putting web services in front of the database. Can I use WCF to retrieve, update, and insert data? Would that be a good approach in terms of security and performance?

    Read the article
