Search Results

Search found 13249 results on 530 pages for 'virtualized performance'.

Page 469 of 530

  • gcc, strict-aliasing, and horror stories

    - by Joseph Quinsey
    In http://stackoverflow.com/questions/2906365/gcc-strict-aliasing-and-casting-through-a-union I asked whether anyone had encountered problems with union punning through pointers. So far, the answer seems to be No. This question is broader: do you have any horror stories about gcc and strict-aliasing? Background: Quoting from AndreyT's answer in http://stackoverflow.com/questions/2771023/c99-strict-aliasing-rules-in-c-gcc/2771041#2771041: "Strict aliasing rules are rooted in parts of the standard that were present in C and C++ since the beginning of [standardized] times. The clause that prohibits accessing object of one type through a lvalue of another type is present in C89/90 (6.3) as well as in C++98 (3.10/15). ... It is just that not all compilers wanted (or dared) to enforce it or rely on it." Well, gcc is now daring to do so, with its -fstrict-aliasing switch. And this has caused some problems. See, for example, the excellent article http://davmac.wordpress.com/2009/10/ about a Mysql bug, and the equally excellent discussion in http://cellperformance.beyond3d.com/articles/2006/06/understanding-strict-aliasing.html. Some other less-relevant links: http://stackoverflow.com/questions/1225741/performance-impact-of-fno-strict-aliasing http://stackoverflow.com/questions/754929/strict-aliasing http://stackoverflow.com/questions/262379/when-is-char-safe-for-strict-pointer-aliasing http://stackoverflow.com/questions/725138/how-to-detect-strict-aliasing-at-compile-time So to repeat, do you have a horror story of your own? Problems not indicated by -Wstrict-aliasing would, of course, be preferred. And other C compilers are also welcome.
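
    For context, a minimal sketch of the two idioms under discussion: the pointer cast that the strict-aliasing rules forbid, and the union punning that gcc documents as supported. The values are illustrative only; compile with something like gcc -O2 (which enables -fstrict-aliasing) to see any -Wstrict-aliasing warnings:

        #include <inttypes.h>
        #include <stdio.h>

        /* Reads a float object through an incompatible uint32_t lvalue.
         * This is the pattern C89 6.3 / C99 6.5p7 disallow, and the one
         * -fstrict-aliasing is entitled to miscompile. */
        static uint32_t bits_via_cast(float f)
        {
            return *(uint32_t *)&f;
        }

        /* Type punning through a union: gcc explicitly documents this as
         * supported, and it is the usual portable-in-practice alternative. */
        static uint32_t bits_via_union(float f)
        {
            union { float f; uint32_t u; } pun;
            pun.f = f;
            return pun.u;
        }

        int main(void)
        {
            printf("%08" PRIx32 " %08" PRIx32 "\n",
                   bits_via_cast(1.0f), bits_via_union(1.0f));
            return 0;
        }

    Both compile cleanly; only the first is undefined behaviour under the clause quoted above, which is what makes the resulting bugs feel like horror stories when optimization levels change.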

    Read the article

  • What is the relation between programming and mathematics?

    - by Math Grad
    Programmers seem to think that their work is quite mathematical. I understand this when you try to optimize something for performance, find the most efficient algorithm, etc. But it seems patently false when you look at a billing application for a shop, or systems software riddled with I/O calls. So what is it exactly? Is computation and the associated programming really mathematical? Here I have in mind particularly the words of the philosopher Schopenhauer: That arithmetic is the basest of all mental activities is proved by the fact that it is the only one that can be accomplished by means of a machine. Take, for instance, the reckoning machines that are so commonly used in England at the present time, and solely for the sake of convenience. But all analysis finitorum et infinitorum is fundamentally based on calculation. Therefore we may gauge the “profound sense of the mathematician,” of whom Lichtenberg has made fun, in that he says: “These so-called professors of mathematics have taken advantage of the ingenuousness of other people, have attained the credit of possessing profound sense, which strongly resembles the theologians’ profound sense of their own holiness.” I lifted the above quote from here. It seems that programmers are doing precisely the sort of mechanized, base mental activity the grand old man is contemptuous about. So what exactly is the deal? Is programming really the "good" kind of mathematics, or just the baser type, or altogether something else just meant for business, not to be confused with a pure discipline?

    Read the article

  • OpenGL equivalent of GDI's HatchBrush or PatternBrush?

    - by Ptah- Opener of the Mouth
    I have a VB6 application (please don't laugh) which does a lot of drawing via BitBlt and the standard VB6 drawing functions. I am running up against performance issues (yes, I do the regular tricks like drawing to memory). So, I decided to investigate other ways of drawing, and have come upon OpenGL. I've been doing some experimenting, and it seems straightforward to do most of what I want; the application mostly only uses very simple drawing -- relatively large 2D rectangles of solid colors and such -- but I haven't been able to find an equivalent to something like a HatchBrush or PatternBrush. More specifically, I want to be able to specify a small monochrome pixel pattern, choose a color, and whenever I draw a polygon (or whatever), instead of it being solid, have it automatically tiled with that pattern, not translated or rotated or skewed or stretched, with the "on" bits of the pattern showing up in the specified color, and the "off" bits of the pattern left displaying whatever had been drawn under the area that I am now drawing on. Obviously I could do all the calculations myself. That is, instead of drawing as a polygon which will somehow automatically be tiled for me, I could calculate all of the lines or pixels or whatever that actually need to be drawn, then draw them as lines or pixels or whatever. But is there an easier way? Like in GDI, where you just say "draw this polygon using this brush"? I am guessing that "textures" might be able to accomplish what I want, but it's not clear to me (I'm totally new to this and the documentation I've found is not entirely obvious); it seems like textures might skew or translate or stretch the pattern, based upon the vertices of the polygon? Whereas I want the pattern tiled. Is there a way to do this, or something like it, other than brute force calculation of exactly the pixels/lines/whatever that need to be drawn? Thanks in advance for any help.
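
    One thing worth trying before hand-rolling the tiling: legacy (fixed-function) OpenGL has glPolygonStipple, which takes a 32x32 monochrome bit mask that is applied in window coordinates, so it tiles the screen rather than stretching or skewing with the polygon, and fragments whose mask bit is 0 are simply not drawn, leaving whatever was underneath. A rough sketch (plain C against the classic GL API, not the VB6 bindings in question):

        #include <GL/gl.h>
        #include <string.h>

        /* Fill a rectangle with a screen-aligned pattern instead of a solid
         * colour.  The 128-byte mask is 32x32 bits; here it is built at run
         * time as a one-pixel checkerboard, but any hatch pattern works. */
        static void draw_hatched_rect(float x0, float y0, float x1, float y1,
                                      float r, float g, float b)
        {
            GLubyte mask[128];
            int row;
            for (row = 0; row < 32; ++row)
                memset(mask + row * 4, (row & 1) ? 0xAA : 0x55, 4);

            glEnable(GL_POLYGON_STIPPLE);
            glPolygonStipple(mask);
            glColor3f(r, g, b);              /* colour used for the "on" bits */
            glBegin(GL_QUADS);
                glVertex2f(x0, y0);
                glVertex2f(x1, y0);
                glVertex2f(x1, y1);
                glVertex2f(x0, y1);
            glEnd();
            glDisable(GL_POLYGON_STIPPLE);
        }

    Polygon stipple was dropped from modern core profiles, but for a port of GDI-style 2D drawing it maps almost one-to-one onto HatchBrush; the texture route works too, provided the texture coordinates are generated from window space rather than from the polygon's vertices.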

    Read the article

  • Handling a form from a different view and passing form validation through the session in Django

    - by Mo J. Mughrabi
    I have a requirement here to build a comment-like app in my Django project. The app has a view that receives a submitted form, processes it and returns the errors to wherever it came from. I finally managed to get it to work, but I have a doubt that the way I am using it might be wrong, since I am passing the entire validated form in the session. Below is the code.

    comment/templatetags/comment.py:

        @register.inclusion_tag('comment/form.html', takes_context=True)
        def comment_form(context, model, object_id, next):
            """
            comment_form() is responsible for rendering the comment form
            """
            # clear sessions from variable incase it was found
            content_type = ContentType.objects.get_for_model(model)
            try:
                request = context['request']
                if request.session.get('comment_form', False):
                    form = CommentForm(request.session['comment_form'])
                    form.fields['content_type'].initial = 15
                    form.fields['object_id'].initial = 2
                    form.fields['next'].initial = next
                else:
                    form = CommentForm(initial={
                        'content_type' : content_type.id,
                        'object_id' : object_id,
                        'next' : next
                    })
            except Exception as e:
                logging.error(str(e))
                form = None
            return { 'form' : form }

    comment/view.py:

        def save_comment(request):
            """
            save_comment:
            """
            if request.method == 'POST':
                # clear sessions from variable incase it was found
                if request.session.get('comment_form', False):
                    del request.session['comment_form']
                form = CommentForm(request.POST)
                if form.is_valid():
                    obj = form.save(commit=False)
                    if request.user.is_authenticated():
                        obj.created_by = request.user
                    obj.save()
                    messages.info(request, _('Your comment has been posted.'))
                    return redirect(form.data.get('next'))
                else:
                    request.session['comment_form'] = request.POST
                    return redirect(form.data.get('next'))
            else:
                raise Http404

    The usage is by loading the template tag and firing {% comment_form article article.id article.get_absolute_url %}. My doubt is whether passing the validated form data through the session is the correct approach. Would that be a problem? A security risk? Performance issues? Please advise. Update: in response to Pol's question, the reason I went with this approach is that the comment form is handled in a separate app. In my scenario I render objects such as an article, and all I do is invoke the template tag to render the form. What would be an alternative approach for my case? You also shared the Django comments app with me, which I am aware of, but the client I am working with requires a lot of complex work to be done in the comment app; that's why I am working on a new one.

    Read the article

  • How to perform a binary search on IList<T>?

    - by Daniel Brückner
    Simple question - given an IList<T>, how do you perform a binary search without writing the method yourself and without copying the data to a type with built-in binary search support? My current status is the following: List<T>.BinarySearch() is not a member of IList<T>; there is no equivalent of the ArrayList.Adapter() method for List<T>; and IList<T> does not inherit from IList, hence using ArrayList.Adapter() is not possible. I tend to believe that it is not possible with built-in methods, but I cannot believe that such a basic method is missing from the BCL/FCL. If it is not possible, who can give the shortest, fastest, smartest, or most beautiful binary search implementation for IList<T>? UPDATE: We all know that a list must be sorted before using binary search, hence you can assume that it is. But I assume (though I did not verify) it is the same problem with sort - how do you sort an IList<T>? CONCLUSION: There seems to be no built-in binary search for IList<T>. One can use the First() and OrderBy() LINQ methods to search and sort, but they will likely have a performance hit. Implementing it yourself (as an extension method) seems the best you can do.

    Read the article

  • Ajax-heavy JS apps using excessive amounts of memory over time

    - by Shane Reustle
    I seem to have some pretty large memory leaks in an app that I am working on. The app itself is not very complex. Every 15 seconds, the page requests approx 40kb of JSON from the server and draws a table on the page using it. It is cheaper to redraw the table because the data is almost always new. I am attaching a few events to the table, approx 5 per line, 30 lines in the table. I use jQuery's .html() method to put the new HTML into the container and overwrite the existing content. I do this specifically so that jQuery's special cleanup functions go in and attempt to detach all events on the elements inside the element it is overwriting. I then also delete the large HTML variables once they are sent to the DOM, using delete my_var. I have checked for circular references and for attached events that are never cleared a few times, but never REALLY dug into it. I was wondering if someone could give me a few pointers on how to optimize a very heavy app like this. I just picked up "High Performance JavaScript" by Nicholas Zakas, but haven't had much time to get into it yet. To give an idea of how much memory this is using: after ~4 hours, it is using about 420,000 KB in Chrome, and much more in Firefox or IE. Thanks!

    Read the article

  • lost session after redirect_to

    - by PeterWong
    I encountered some strange behavior in my current project, which is about sessions. The strange part is that it was normal in Safari but failed in other browsers (including Chrome, Firefox and Opera). There is a registration form for input of part of the key information (email, password, etc), which is submitted to an action called "create". This is the basic code of the create action:

        @account = Account.new(params[:account])
        if @account.save
          ApplicationController.current_account = @account
          session[:current_account] = ApplicationController.current_account
          session[:account] = ApplicationController.current_account.id
          email = @account.email
          Mailer.deliver_account_confirmation(email)
          flash[:type] = "success"
          flash[:notice] = "Successfully Created Account"
          redirect_to :controller => "accounts", :action => "create_step_2"
        else
          flash[:type] = "error"
          flash[:title] = "Oops, something wasn't right."
          flash[:notice] = "Mistakes are marked below in red. Please fix them and resubmit the form. Thanks."
          render :action => "new"
        end

    Also, I created a before_filter in the application controller, which has the following code:

        ApplicationController.current_account = Account.find_by_id(session[:current_account].id) unless session[:current_account].blank?

    For Safari there is no problem at all. But for the other browsers, session[:current_account] does not exist, and so it produced the following error message:

        RuntimeError in AccountsController#create_step_2
        Called id for nil, which would mistakenly be 4 -- if you really wanted the id of nil, use object_id

    Could anyone please help me?

    Read the article

  • Interface for reading variable-length files with header and footer

    - by John S
    I could use some hints or tips for a decent interface for reading files with special characteristics. The files in question have a header (~120 bytes), a body (1 byte to 3 GB) and a footer (4 bytes). The header contains information about the body, and the footer is only a simple CRC32 value of the body. I use Java, so my idea was to extend the InputStream class and add a constructor such as "public MyInStream(InputStream in)", where I immediately read the header and then direct the overridden read()s at the body. The problem is, I can't give the user of the class the CRC32 value until the whole body has been read. Because the file can be 3 GB large, putting it all in memory is a bad idea. Reading it all into a temporary file is going to be a performance hit if there are many small files. I don't know how large the file is, because the InputStream doesn't have to be a file; it could be a socket. Looking at it again, maybe extending InputStream is a bad idea. Thank you for reading the confused thoughts of a tired programmer. :)
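
    A sketch of the usual trick for streams of unknown length: never hand out the last 4 bytes you have seen, because until EOF you cannot tell whether they are body or footer. This is plain C rather than a Java InputStream wrapper, and it uses zlib's crc32() purely as a stand-in for whichever CRC implementation is available; the footer byte order is assumed big-endian:

        #include <stdio.h>
        #include <string.h>
        #include <zlib.h>

        /* Stream the body of an already-positioned file/pipe (header consumed),
         * computing CRC32 as we go, while holding back the trailing 4 bytes
         * that will turn out to be the footer.  Returns 0 if the stored CRC
         * matches the computed one. */
        int check_body_crc(FILE *in)
        {
            unsigned char buf[4096], hold[4];
            size_t held = 0, n;
            uLong crc = crc32(0L, Z_NULL, 0);

            while ((n = fread(buf, 1, sizeof buf, in)) > 0) {
                size_t total = held + n;
                if (total <= 4) {               /* still all potential footer  */
                    memcpy(hold + held, buf, n);
                    held = total;
                    continue;
                }
                size_t body = total - 4;        /* safe to treat as body       */
                size_t from_hold = body < held ? body : held;
                crc = crc32(crc, hold, (uInt)from_hold);
                /* ...this is also where body bytes would be handed to read()... */
                memmove(hold, hold + from_hold, held - from_hold);
                held -= from_hold;
                size_t from_buf = body - from_hold;
                crc = crc32(crc, buf, (uInt)from_buf);
                memcpy(hold + held, buf + from_buf, n - from_buf);
                held += n - from_buf;           /* held is now exactly 4       */
            }
            if (held != 4)
                return -1;                      /* truncated stream            */
            uLong stored = ((uLong)hold[0] << 24) | ((uLong)hold[1] << 16)
                         | ((uLong)hold[2] << 8) | (uLong)hold[3];
            return stored == crc ? 0 : -1;
        }

    The same hold-back idea translates directly to a java.io.FilterInputStream subclass with a java.util.zip.CRC32: read() only serves bytes once at least 4 newer bytes are buffered, and at end of stream the 4 held bytes are compared against the running checksum.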

    Read the article

  • Is there a language with native pass-by-reference/pass-by-name semantics, which could be used in mod

    - by Bubba88
    Hi! This is a reopened question. I am looking for a language, and a supporting platform for it, where the language could have pass-by-reference or pass-by-name semantics by default. I know the history a little: there were Algol and Fortran, and there still is C++, which could make it possible; but basically what I am looking for is something more modern, where the mentioned value-passing methodology is preferred and the default (implicitly assumed). I ask this question because, to my mind, some of the advantages of pass-by-ref/name seem kind of obvious. For example, when it is used in a standalone agent, copying of values is not necessary (to some extent) and performance wouldn't be downgraded much in that case. So I could employ it in, e.g., a rich client app, or some game-style or standalone service-kind application. The main advantage to me is the clear separation between the identity of a symbol and its current value. I mean that when there is no redundant copying, you know that you're working with the exact symbol/path you have queried/received, and intrinsic boxing of values will not interfere with the actual logic of the program. I know that there is the C# ref keyword, but it's something not so intrinsic, though acceptable. Equally, I realize that pass-by-reference semantics could be simulated in virtually any language (Java as an instant example) and so on... not sure about pass by name :) What would you recommend: create something like a DSL for such needs wherever it is appropriate, or use some languages that I already know? Maybe there is something that I'm missing? Thank you!

    Read the article

  • Collapsing a data frame by selecting one row per group

    - by jkebinger
    I'm trying to collapse a data frame by removing all but one row from each group of rows with identical values in a particular column. In other words, the first row from each group. For example, I'd like to convert this

        > d = data.frame(x=c(1,1,2,4),y=c(10,11,12,13),z=c(20,19,18,17))
        > d
          x  y  z
        1 1 10 20
        2 1 11 19
        3 2 12 18
        4 4 13 17

    into this:

          x  y  z
        1 1 11 19
        2 2 12 18
        3 4 13 17

    I'm using aggregate to do this currently, but the performance is unacceptable with more data:

        > d.ordered = d[order(-d$y),]
        > aggregate(d.ordered, by=list(key=d.ordered$x), FUN=function(x){x[1]})

    I've tried split/unsplit with the same function argument as here, but unsplit complains about duplicate row numbers. Is rle a possibility? Is there an R idiom to convert rle's length vector into the indices of the rows that start each run, which I can then use to pluck those rows out of the data frame?

    Read the article

  • Are document-oriented databases any more suitable than relational ones for persisting objects?

    - by Owen Fraser-Green
    In terms of database usage, the last decade was the age of the ORM, with hundreds competing to persist our object graphs in plain old-fashioned RDBMSs. Now we seem to be witnessing the coming of age of document-oriented databases. These databases are highly optimized for schema-free documents but are also very attractive for their ability to scale out and query a cluster in parallel. Document-oriented databases also hold a couple of advantages over RDBMSs for persisting data models in object-oriented designs. As the tables are schema-free, one can store objects belonging to different classes in an inheritance hierarchy side-by-side. Also, as the domain model changes, so long as the code can cope with getting back objects from an old version of the domain classes, one can avoid having to migrate the whole database at every change. On the other hand, the performance benefits of document-oriented databases mainly appear to come about when storing deeper documents. In object-oriented terms, classes which are composed of other classes; for example, a blog post and its comments. In most of the examples of this I can come up with though, such as the blog one, the gain in read access would appear to be offset by the penalty of having to write the whole blog post "document" every time a new comment is added. It looks to me as though document-oriented databases can bring significant benefits to object-oriented systems if one takes extreme care to organize the objects in deep graphs optimized for the way the data will be read and written, but this means knowing the use cases up front. In the real world, we often don't know until we actually have a live implementation we can profile. So is the case of relational vs. document-oriented databases one of swings and roundabouts? I'm interested in people's opinions and advice, in particular if anyone has built any significant applications on a document-oriented database.

    Read the article

  • Caching issue with JavaScript and ASP.NET

    - by Ed Woodcock
    Hi guys: I asked a question a while back on here regarding caching data for a calendar/scheduling web app, and got some good responses. However, I have now decided to change my approach and start caching the data in JavaScript. I am directly caching the HTML for each day's column in the calendar grid inside the $('body').data() object, which gives very fast page load times (almost unnoticeable). However, problems start to arise when the user requests data that is not yet in the cache. This data is created by the server using an Ajax call, so it's asynchronous, and takes about 0.2s per week's data. My current approach is simply to block for 0.5s when the user requests information from the server, and to cache 4 weeks either side in the initial page load (and 1 extra week per page change request), however I doubt this is the optimal method. Does anyone have a suggestion as to how to improve the situation? To summarise: each week takes 0.2s to retrieve from the server, asynchronously; performance must be as close to real-time as possible (however the data does not need to be fully real-time: most appointments are added by the user, so we can re-cache after that); currently 4 weeks are cached on either side of the initial week loaded, which is not enough; and caching 1 year takes ~21s, which is too slow for an initial load.

    Read the article

  • When is a good time to start thinking about scaling?

    - by Slokun
    I've been designing a site over the past couple days, and been doing some research into different aspects of scaling a site horizontally. If things go as planned, in a few months (years?) I know I'd need to worry about scaling the site up and out, since the resources it would end up consuming would be huge. So, this got me to thinking, when is the best time to start thinking about, and designing for, scalability? If you start too early on, you could easily overcomplicate your design, and make it impossible to actually build. You could also get too caught up in the details, the architecture, whatever, and wind up getting nothing done. Also, if you do get it working, but the site never takes off, you may have wasted a good chunk of extra effort. On the other hand, you could be saving yourself a ton of effort down the road. Designing it from the ground up to be big would make it much easier later on to let it grow big, with very little rewriting going on. I know for what I'm working on, I've decided to make at least a few choices now on the side of scaling, but I'm not going to do a complete change of thinking to get it to scale completely. Notably, I've redesigned my database from a conventional relational design to one similar to what was suggested on the Reddit site linked below, and I'm going to give memcache a try. So, the basic question, when is a good time to start thinking or worrying about scaling, and what are some good designs, tips, etc. for when doing so? A couple of things I've been reading, for those who are interested:

        http://www.codinghorror.com/blog/2009/06/scaling-up-vs-scaling-out-hidden-costs.html
        http://highscalability.com/blog/2010/5/17/7-lessons-learned-while-building-reddit-to-270-million-page.html
        http://developer.yahoo.com/performance/rules.html

    Read the article

  • Button OnClick event (which is in codebehind) doesn't get triggered in MVC 2

    - by rksprst
    I had an MVC 1.0 web application that was in VS 2008; I just upgraded the project to VS 2010, which automatically upgraded MVC to 2.0. I have a bunch of view pages with codebehind files that were manually added. The project worked fine before the upgrade, but now the onclick events don't get triggered. For example, I have an asp:Button with an onclick event that points to a method in the codebehind. When you click the button, the onclick event doesn't get triggered. In fact, when you look at the Page variable, IsPostBack is false. This is really bizarre and I'm wondering if anyone knows what happened and how to fix it. I'm thinking it has something to do with the changes in MVC 2.0, but I'm not sure. Any help is really appreciated, I've been trying to figure this out for a while. (Deleting the codebehinds and moving that logic to the controller is not really an option since there are so many pages, and moving back to VS 2008 is a last resort as I want to make use of some of the VS 2010 features like performance testing.)

    Read the article

  • PostgreSQL, Foreign Keys, Insert speed & Django

    - by Miles
    A few days ago, I ran into an unexpected performance problem with a pretty standard Django setup. For an upcoming feature, we have to regenerate a table hourly, containing about 100k rows of data, 9M on the disk, 10M indexes according to pgAdmin. The problem is that inserting them by whatever method literally takes ages, up to 3 minutes of 100% disk busy time. That's not something you want on a production site. It doesn't matter if the inserts were in a transaction, issued via plain insert, multi-row insert, COPY FROM or even INSERT INTO t1 SELECT * FROM t2. After noticing this isn't Django's fault, I followed a trial and error route, and hey, the problem disappeared after dropping all foreign keys! Instead of 3 minutes, the INSERT INTO SELECT FROM took less than a second to execute, which isn't too surprising for a table <= 20M on the disk. What is weird is that PostgreSQL manages to slow down inserts by 180x just by using 3 foreign keys. Oh, disk activity was pure writing, as everything is cached in RAM; only writes go to the disks. It looks like PostgreSQL is working very hard to touch every row in the referred tables, as 3MB/sec * 180s is way more data than the 20MB this new table takes on disk. No WAL for the 180s case, I was testing in psql directly, in Django, add ~50% overhead for WAL logging. Tried @commit_on_success, same slowness, I had even implemented multi row insert and COPY FROM with psycopg2. That's another weird thing, how can 10M worth of inserts generate 10x 16M log segments?

    Table layout: id serial primary, a bunch of int32, 3 foreign keys to:

        - small table, 198 rows, 16k on disk
        - large table, 1.2M rows, 59 data + 89 index MB on disk
        - large table, 2.2M rows, 198 + 210MB

    So, am I doomed to either drop the foreign keys manually or use the table in a very un-Django way by defining saving bla_id x3 and skip using models.ForeignKey? I'd love to hear about some magical antidote / pg setting to fix this.

    Read the article

  • What is the best open source Java package to filter/match URLs?

    - by Boaz
    Hi, I have a high-performance application which deals with URLs. For every URL it needs to retrieve the appropriate settings from a predefined pool. Every settings object is associated with a URL pattern which indicates which URLs should use these settings. The matching rules are as follows:

        - "google.com" should match all URLs pointing to the google domain (thus, maps.google.com and www.google.com/match are matched).
        - "*.google.com" should match all URLs pointing to a subdomain of google.com (thus, maps.google.com matches, but google.com and www.google.com don't).
        - "maps.google.com" should match all URLs pointing to this specific subdomain.

    Apart from the above rules, every match rule can contain a path, which means that the path part of the URL should start with the match rule path. So: "*.google.com/maps" matches "maps.google.com/maps" but not "maps.google.com/advanced". As you can see, the rules above overlap. In case two rules match the same URL, the most specific should apply; the list above is ranked from least specific to most specific. This seems to be such a standard problem that I was hoping to use a ready-made library rather than program it myself. Google reveals a couple of options but without a clear way to choose between them. What would you recommend as a good library for this task? Thanks, Boaz

    Read the article

  • Looking for combinations of server and embedded database engines

    - by codeelegance
    I'm redesigning an application that will be run as both a single user and multiuser application. It is a .NET 2.0 application. I'm looking for server and embedded databases that work well together. I want to deploy the embedded database in the single user setup and of course, the server in the multiuser setup. Past releases have been based on MSDE but in the past year we've been having a lot of install issues: new installs hanging and leaving the system in an unknown state, upgrades disconnecting the database, etc. I migrated the application to SQL Server 2005 and the install is more reliable (as long as a user doesn't try to install over a broken MSDE installation). Since next year's release will be a complete redesign I figured now's the best time to address the database issue as well. The database has been abstracted from the rest of the application so I just need to choose which database(s) to use and write an implementation for each one. So far I've considered:

        - SQL Server / SQL Server Compact Edition
        - Firebird (same DB engine is available in two different server modes and an embedded dll)

    Each has its own merits but I'm also interested in any other suggestions. This is a fairly simple program and its data requirements are simple as well. I don't expect it to strain whatever database I eventually choose. So easy configuration and deployment hold more weight than performance.

    Read the article

  • Should we deploy a Webkit browser for our intranet applications?

    - by Jeff Meatball Yang
    At my place of employment, we are increasingly finding it difficult to develop for IE, which was historically the easiest browser to target, from an intranet-app point of view. It was already deployed. It already understood NTLM authentication, thus well integrated with our domain-level security. It had neat, albeit non-standard features such as XMLDOM and XmlHTTP. Now, we are increasingly irritated by issues presented by IE: There are several versions: IE 7, 8, and soon 9 beta, which all have slightly different issues related to performance, functionality (especially re:security and zones), and aesthetics. IE 7 and 8 are slower than Webkit-based browsers. Period. There are technology limitations such as missing canvas element, CSS bugs, etc. that make it hard to use 3rd party packages or even consistently write code across IE versions. Users are increasingly using Firefox or Chrome, even for intranet use. Does anyone have experience with making a transition? Any advice would be welcome.

    Read the article

  • How to choose a lightweight database system

    - by adopilot
    I am starting a POS (point of sale) project. The target system is going to be written in C# .NET 2 WinForms, and as the main database server we are going to use MS SQL Server. As we have a lot of POS devices chained together in one store, I would love to have a local backend database system on each POS device. The scenario is the following: when the main server goes down, the POS application should continue working "off-line" with the local database until the connection to the main server comes up again. Now I am in a dilemma over which local database is going to be the most suitable for me. Here are some notes to help point me in the right direction:

        - It should be light: my POS devices are usually old and suffering performance-wise.
        - It should be free: I have a lot of devices and I do not want additional cost besides the main SQL Server.
        - One day I would love to try porting all of this to Mono and Linux.

    Here is what I've researched so far:

        - Simple XML: light, but I am afraid of the performance; my main items table averages 10K records.
        - SQL Express: I am afraid my POS devices are too poor in hardware for SQL Express, and it is also hard to install and configure on each device.
        - The less-known Advantage Database Server, which has a free distribution of its offline ADT system.
        - DBF with an extended library: respect for good old DBFs, but that era is behind me with Clipper and DBFs.
        - MS Access.
        - SQLite: the one I mostly like for now, but I am afraid of how it is going to pair with MS SQL (do they have the same data types?).

    I know there is a lot of subjective data in this, but can someone at least recommend some other lite database systems, or things that I should pay the most attention to before I choose a database?

    Read the article

  • What arguments to use to explain why a SQL DB is far better than a flat file

    - by jamone
    The higher-ups in my company were told by good friends that flat files are the way to go, and that we should switch from MS SQL Server to them for everything we do. We have over 300 servers and hundreds of different databases. From just the few I'm involved with, we have 10 billion records in quite a few of them, with upwards of 100k new records a day and who knows how many updates... A couple of others and I need to come up with a response saying why we shouldn't do this. Most of our stuff is ASP.NET with some legacy ASP. We thought about making a simple console app that tests/times the same interactions against a flat file (stored on the network) and SQL over the network: doing large inserts, searches, updates, etc., along with things like random network disconnects. This would show them how bad flat files can be, especially when you are dealing with millions of records. What things should I use in my response? What should I do with my demo code to illustrate this? My short list so far:

        - Security
        - Concurrent access
        - Performance with large amounts of data
        - Amount of time to do such a massive rewrite/switch
        - Lack of transactions
        - PITA to map relational data to flat files

    I fear that this will be a great post on The Daily WTF someday if I can't stop it now.

    Read the article

  • Floating Point Arithmetic - Modulo Operator on Double Type

    - by CrimsonX
    So I'm trying to figure out why the modulo operator is returning such a large, unusual value. If I have the code: double result = 1.0d % 0.1d; it will give a result of 0.09999999999999995, where I would expect a value of 0. Note this problem doesn't exist using the division operator - double result = 1.0d / 0.1d; will give a result of 10.0, meaning that the remainder should be 0. Let me be clear: I'm not surprised that an error exists, I'm surprised that the error is so darn large compared to the numbers at play. 0.0999 ~= 0.1, and 0.1 is on the same order of magnitude as 0.1d and only one order of magnitude away from 1.0d. It's not like you can compare it to a double.Epsilon, or say "it's equal if the difference is < 0.00001". I've read up on this topic on StackOverflow, in the following posts one two three, amongst others. Can anyone explain why this error is so large? And any suggestions to avoid running into this problem in the future? (I know I could use decimal instead, but I'm concerned about the performance of that.)
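
    For what it's worth, the size of the remainder follows directly from 0.1 not being exactly representable in binary; the same arithmetic can be reproduced in C with fmod (the % operator on doubles in C# behaves the same way here). A small sketch:

        #include <math.h>
        #include <stdio.h>

        int main(void)
        {
            /* The double nearest to 0.1 is
             * 0.1000000000000000055511151231257827021181583404541015625,
             * i.e. slightly MORE than one tenth.  Only nine of those fit into
             * 1.0, so the remainder is roughly 1.0 - 9 * 0.10000000000000000555...,
             * which is just under 0.1 rather than a tiny epsilon. */
            printf("%.17g\n", fmod(1.0, 0.1));  /* prints 0.09999999999999995     */
            printf("%.17g\n", 1.0 / 0.1);       /* prints 10: the division happens
                                                   to round back to exactly 10.0  */
            return 0;
        }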

    Read the article

  • Guid Primary/Foreign Key dilemma in SQL Server

    - by Xience
    Hi guys, I am faced with the dilemma of changing my primary keys from int identities to Guids. I'll put my problem straight up. It's a typical retail management app, with POS and back office functionality, and has about 100 tables. The database synchronizes with other databases and receives/sends new data. Most tables don't have frequent inserts, updates or select statements executing on them. However, some do have frequent inserts and selects on them, e.g. the products and orders tables. Some tables have up to 4 foreign keys in them. If I changed my primary keys from int to Guid, would there be a performance issue when inserting or querying data from tables that have many foreign keys? I know people have said that indexes will be fragmented and that 16 bytes is an issue. Space wouldn't be an issue in my case, and apparently index fragmentation can also be taken care of using the NEWSEQUENTIALID() function. Can someone tell me, from their experience, whether Guids will be problematic in tables with many foreign keys? I'll be much appreciative of your thoughts on it...

    Read the article

  • _dl_runtime_resolve -- When do the shared objects get loaded into memory?

    - by windfinder
    We have a message processing system with high performance demands. Recently we have noticed that the first message takes many times longer than subsequent messages. A bunch of transformation and message augmentation happens as a message goes through our system, much of it done by way of an external lib. I just profiled this issue (using callgrind), comparing a "run" of just one message with a "run" of many messages (providing a baseline of comparison). The main difference I see is the function "do_lookup_x" taking up a huge amount of time. Looking at the various calls to this function, they all seem to be called by the common function _dl_runtime_resolve. Not sure what this function does, but to me this looks like the first time the various shared libraries are being used, and they are then being loaded into memory by the dynamic linker. Is this a correct assumption? That the binary will not load the shared libraries into memory until they are being prepped for use, and therefore we will see a massive slowdown on the first message, but on none of the subsequent ones? How do we go about avoiding this? Note: we operate on the microsecond scale.
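
    A note on the assumption, plus a way to test it: with glibc the libraries themselves are mapped at startup, but by default each imported function is only resolved the first time it is called; _dl_runtime_resolve is the PLT stub that performs that lazy lookup. A hedged sketch of how one might move the cost to startup (the library name is a placeholder for whatever dominates the profile):

        #include <dlfcn.h>
        #include <stdio.h>

        /* The blunt instruments need no code at all:
         *     LD_BIND_NOW=1 ./message_processor     (resolve everything at start)
         *     cc ... -Wl,-z,now                     (same, baked in at link time)
         * The call below attempts the same from inside the process for one
         * library; "libtransform.so" is a hypothetical name.  Whether it covers
         * every PLT entry depends on how the library was originally loaded, so
         * treat this as a warm-up experiment rather than a guaranteed fix. */
        int main(void)
        {
            void *handle = dlopen("libtransform.so", RTLD_NOW | RTLD_GLOBAL);
            if (handle == NULL)
                fprintf(stderr, "eager preload failed: %s\n", dlerror());

            /* ... continue into the normal message-processing loop ... */
            return 0;
        }

    (Link with -ldl on older glibc.) If the first-message spike disappears under LD_BIND_NOW=1, lazy binding was the culprit; if it merely shrinks, demand paging of the library code may also be contributing, and a throwaway warm-up message at startup is the simplest cure.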

    Read the article

  • Call method immediately after object construction in LINQ query

    - by Steffen
    I've got some objects which implement this interface:

        public interface IRow
        {
            void Fill(DataRow dr);
        }

    Usually when I select something out of the db, I go:

        public IEnumerable<IRow> SelectSomeRows()
        {
            DataTable table = GetTableFromDatabase();
            foreach (DataRow dr in table.Rows)
            {
                IRow row = new MySQLRow(); // Disregard the MySQLRow type, it's not important
                row.Fill(dr);
                yield return row;
            }
        }

    Now with .NET 4, I'd like to use AsParallel, and thus LINQ. I've done some testing on it, and it speeds things up a lot (IRow.Fill uses reflection, so it's hard on the CPU). Anyway, my problem is: how do I go about creating a LINQ query which calls Fill as part of the query, so it's properly parallelized? For testing performance I created a constructor which took the DataRow as an argument, however I'd really love to avoid this if somehow possible. With the constructor in place, it's obviously simple enough:

        public IEnumerable<IRow> SelectSomeRowsParallel()
        {
            DataTable table = GetTableFromDatabase();
            return from DataRow dr in table.Rows.AsParallel()
                   select new MySQLRow(dr);
        }

    However, like I said, I'd really love to be able to just stuff my Fill method into the LINQ query, and thus not need the constructor overload.

    Read the article

  • Trouble Running Leaks Instrument

    - by TheGeoff
    I'm having trouble running the Leaks instrument since installing the 3.0 SDK. An NDA disclaimer here: I don't think this is a 3.0 SDK issue, just a configuration problem, so I'm looking for advice on configuring the tools in question, not on the 3.0 SDK per se. Here's the breakdown of the behavior I am seeing. My application is compiled against OS version 2.2. I can run it out of Xcode in debug mode on the Simulator and on a device running 2.2, 2.2.1 or 3.0. If I start it with Performance Tools - Leaks, I get an error message from the OS, "The application xxxx quit unexpectedly", with "Ignore, Report, Relaunch." If I click "Ignore", one of two things will happen: either Leaks tells me it couldn't attach, or Leaks stops responding to input and I have to Force Quit. The interesting thing is that the Simulator starts with the 3.0 OS. If I start Instruments manually and attach to a running 2.2 Simulator, it shows the same behavior. If I attach Leaks to an iPhone device, it works. It seems that once I launch Leaks, my app won't run in the simulator until I do a new build. Any ideas for getting my Simulator/Leaks/Xcode synced back up? Thanks, Geoff

    Read the article
