Search Results

Search found 54446 results on 2178 pages for 'struct vs class'.

Page 289 of 2178

  • Performance of fopen vs stat

    - by Alex Marshall
    Hello, I'm writing several C programs for an embedded system where every bit of performance we can squeeze out matters. Part of that is accessing log files. When determining whether a file exists, is there any performance difference between using open/fopen and stat? I've been using stat on the assumption that it only has to do a quick check against the file system, whereas fopen would have to actually gain access to the file and manipulate internal data structures before returning. Is there any merit to this?
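
    A minimal sketch of the two checks under comparison, assuming a POSIX system (the function names are illustrative):

        #include <stdio.h>
        #include <sys/stat.h>

        /* stat(): a single metadata lookup; no file descriptor
           or FILE object is allocated. */
        int exists_via_stat(const char *path)
        {
            struct stat sb;
            return stat(path, &sb) == 0;
        }

        /* fopen(): allocates a FILE object and a descriptor,
           both of which must then be released again. */
        int exists_via_fopen(const char *path)
        {
            FILE *f = fopen(path, "r");
            if (f == NULL)
                return 0;
            fclose(f);
            return 1;
        }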

    Read the article

  • Bilinear interpolation - DirectX vs. GDI+

    - by holtavolt
    I have a C# app for which I've written GDI+ code that uses Bitmap/TextureBrush rendering to present 2D images, which can have various image processing functions applied. This code is a new path in an application that mimics existing DX9 code, and they share a common library to perform all vector and matrix (e.g. ViewToWorld/WorldToView) operations. My test bed consists of DX9 output images that I compare against the output of the new GDI+ code. A simple test case that renders to a viewport matching the Bitmap dimensions (i.e. no zoom or pan) does match pixel-perfect (no binary diff) - but as soon as the image is zoomed up (magnified), I get very minor differences in 5-10% of the pixels. The magnitude of the difference is 1/256 (occasionally 2/256). I suspect this is due to interpolation differences.

    Question: For a DX9 ortho projection (and identity world space), with a camera perpendicular and centered on a textured quad, is it reasonable to expect DirectX.Direct3D.TextureFilter.Linear to generate output identical to a GDI+ TextureBrush-filled rectangle/polygon using the System.Drawing.Drawing2D.InterpolationMode.Bilinear setting? For this (magnification) case, the DX9 code uses this (MinFilter and MipFilter set similarly):

        Device.SetSamplerState(0, SamplerStageStates.MagFilter, (int)TextureFilter.Linear);

    and the GDI+ path uses:

        g.InterpolationMode = InterpolationMode.Bilinear;

    I thought "bilinear interpolation" was a fairly specific filter definition, but then I noticed that GDI+ has another option, HighQualityBilinear (which I've tried, with no difference - which makes sense given the description "added prefiltering for shrinking").

    Followup question: Is it reasonable to expect pixel-perfect output matching between DirectX and GDI+ (assuming all external coordinates passed in are equal)? If not, why not? Finally, there are a number of other APIs I could be using (Direct2D, WPF, GDI, etc.) - and this question generally applies to comparing the output of "equivalent" bilinear interpolated images across any two of these. Thanks!
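
    For reference, a sketch of the weighted sum both filters nominally compute (an illustration, not either library's actual code); discrepancies of 1/256 are consistent with the two implementations carrying different internal precision or rounding rules at this step:

        // Bilinear interpolation of one 8-bit channel at fractional
        // position (fx, fy) between four neighboring texels.
        static byte Bilinear(byte p00, byte p10, byte p01, byte p11,
                             double fx, double fy)
        {
            double top    = p00 * (1 - fx) + p10 * fx;
            double bottom = p01 * (1 - fx) + p11 * fx;
            double value  = top * (1 - fy) + bottom * fy;
            // Whether this result is truncated or rounded, and at what
            // internal precision, varies by implementation; that alone
            // can produce the 1/256 differences described above.
            return (byte)Math.Round(value);
        }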

    Read the article

  • Delete temp file during finally vs delete output file during catch

    - by Russell
    This is in Java 6. I've seen more than once that people create a temp file, do something with it, then rename it to the output file. Everything is wrapped in a try-finally block, where the temp file is deleted in finally in case something goes wrong in between:

        try {
            // do something with tempFile
            // do something with tempFile
            // do something with tempFile
            tempFile.renameTo(outputFile);
        } finally {
            if (tempFile.exists())
                tempFile.delete();
        }

    I was wondering what the benefits are of doing that, instead of working on the output file directly and deleting it in case of exceptions:

        try {
            // do something with outputFile
            // do something with outputFile
            // do something with outputFile
        } catch (Exception e) {
            if (outputFile.exists())
                outputFile.delete();
        }

    My guess is that deleting the temp file in finally benefits me when the try block can throw many kinds of exceptions. Is my guess right? What else?

    Read the article

  • Checking for empty arrays: count vs empty

    - by Dan McG
    This question on 'How to tell if a PHP array is empty' had me thinking of this question. Is there a reason that count should be used instead of empty when determining whether an array is empty? My personal thought is that if the two are equivalent for the case of empty arrays, you should use empty, because it gives a boolean answer to a boolean question. From the question linked above, it seems that count($var) == 0 is the popular method. To me, while technically correct, it makes no sense. E.g. Q: $var, are you empty? A: 7. Hmmm... Is there a reason I should use count == 0 instead, or is it just a matter of personal taste? As pointed out by others in comments for a now-deleted answer, count will have performance impacts for large arrays because it has to count all elements, whereas empty can stop as soon as it knows the array isn't empty. So, if they give the same results in this case, but count is potentially inefficient, why would we ever use count($var) == 0?
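
    The two checks side by side, as a minimal sketch:

        <?php
        $var = array();

        if (empty($var)) {          // boolean answer to a boolean question
            echo "empty\n";
        }

        if (count($var) == 0) {     // numeric answer compared with zero
            echo "empty\n";
        }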

    Read the article

  • (MyClassName)object vs. object as MyClassName

    - by Matthew Doyle
    Hello all, I was wondering which is the better method of casting objects in C#:

        MyClassName test = (MyClassName)obj;
        MyClassName test = obj as MyClassName;

    I already know that the first way throws an exception if the cast fails, and the second way sets test to null. However, I was wondering why you would do one over the other. I see the first way a lot, but I like the second way because I can then check for null... If there isn't a 'better way' of doing it, what are the guidelines for using one or the other?
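
    A sketch of the usual guidance (the source variable and GetSomething are hypothetical; note also that as only works for reference and nullable types):

        object obj = GetSomething();   // hypothetical source

        // Prefix cast: use when obj *should* be a MyClassName, so a
        // failure is a bug worth an immediate InvalidCastException.
        MyClassName a = (MyClassName)obj;

        // 'as' cast: use when the type is genuinely uncertain and
        // null is a meaningful outcome you check right away.
        MyClassName b = obj as MyClassName;
        if (b != null)
        {
            // safe to use b here
        }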

    Read the article

  • AuthenticationType Negotiate vs NTLM

    - by Claudio Redi
    I have the same code base used on 2 different sites hosted on the same server (IIS 7.5). For some reason, when I check the Identity.AuthenticationType property in the code-behind of an HTTP handler, I see NTLM for one site and Negotiate for the other. This is causing some problems, and I need both of them to use NTLM. Could you help me figure out the cause of this difference? So far I see that both IIS sites are configured the same way, but of course there must be at least one difference that I couldn't detect. Thanks!
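
    One place worth comparing, offered as a pointer rather than a confirmed fix: in IIS 7.5 the schemes offered to clients come from the site's windowsAuthentication provider list, which can differ between sites. A sketch of restricting a site to NTLM (in applicationHost.config, or via the IIS Configuration Editor):

        <system.webServer>
          <security>
            <authentication>
              <windowsAuthentication enabled="true">
                <providers>
                  <clear />
                  <add value="NTLM" /> <!-- offer NTLM only -->
                </providers>
              </windowsAuthentication>
            </authentication>
          </security>
        </system.webServer>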

    Read the article

  • ASP .net MVC Invoking default controller and action vs Setting a startup page

    - by SARAVAN
    Hi, I am developing code on the sample ASP.NET MVC template provided by VS2010. The first time I ran the code without adding anything, the index.aspx page was invoked, which is expected. But for some reason I added a login.aspx and then accidentally set that as the startup page. Now when I run the application, the default startup URL looks like http://localhost/Views/login.aspx. I am thinking this is not a valid MVC routing path, and I get a "the requested resource cannot be found" error. I am not sure how to revert this and make sure the default ../home/index is invoked. Can anyone shed some light on this? Also, should I not set a startup page the way we do in ASP.NET WebForms?
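
    For reference, a sketch of the default route mapping from the VS2010 MVC template's Global.asax.cs; with the startup page cleared (project properties, Web tab, "Current Page"), this mapping is what sends a bare http://localhost/ to /Home/Index:

        public static void RegisterRoutes(RouteCollection routes)
        {
            routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

            routes.MapRoute(
                "Default",                            // route name
                "{controller}/{action}/{id}",         // URL pattern
                new { controller = "Home", action = "Index",
                      id = UrlParameter.Optional }    // defaults
            );
        }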

    Read the article

  • Using static variable in function vs passing variable from caller

    - by Patrick
    I have a function which spawns various types of threads; one of the thread types needs to be spawned every x seconds. I currently have it like this:

        bool isTime( Time t )
        {
            return now() >= t;
        }

        void spawner()
        {
            Time t = now() + offset;
            while( 1 )
            {
                if( isTime( t ) )  // isTime is called in more than one place in the real function
                {
                    launchthread();
                    t = now() + offset;
                }
            }
        }

    but I'm thinking of changing it to:

        bool isTime()
        {
            static Time t = now() + offset;
            if( now() >= t )
            {
                t = now() + offset;
                return true;
            }
            return false;
        }

        void spawner()
        {
            if( isTime() )
                launchthread();
        }

    I think the second way is neater, but I generally avoid statics in much the same way I avoid global data; does anyone have any thoughts on the different styles?

    Read the article

  • Several ifstream vs. ifstream + constant seeking

    - by SpyBot
    I'm writing an external merge sort. It works like this: read k chunks from the big file, sort them in memory, perform a k-way merge, done. So I need to sequentially read from different portions of the file during the k-way merge phase. What's the best way to do that: several ifstreams, or one ifstream plus seeking? Also, is there a library for easy async IO?
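
    A sketch of the several-streams variant, assuming C++11 and hypothetical chunk file names: each reader keeps its own file position, so the k-way merge phase never needs to seek.

        #include <fstream>
        #include <string>
        #include <vector>

        // Open one ifstream per sorted chunk produced by the sort phase.
        std::vector<std::ifstream> openChunks(int k)
        {
            std::vector<std::ifstream> readers;
            for (int i = 0; i < k; ++i)
                readers.emplace_back("chunk_" + std::to_string(i) + ".tmp");
            return readers;
        }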

    Read the article

  • ColdFusion CFC implementation of C# Partial Class?

    - by Brian David Berman
    Does ColdFusion offer a mechanism for splitting CFCs into multiple files? I am NOT talking about extension; I am talking about splitting the SAME CFC into multiple files, the same way C# allows "partial" classes. The reason for this is that I am using T4 to generate a bunch of CFCs, and I want to be able to tag functionality onto a generated CFC from another file. I want to do this in a way that doesn't violate the Open-Closed Principle.
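
    One possibility, hedged because it is an include mechanism rather than a true partial class: CFML allows cfinclude inside a component body, so the generator could emit a single extra line that pulls hand-written methods in from a separate file (file names are hypothetical):

        <!--- Generated.cfc, produced by T4 --->
        <cfcomponent>
            <cffunction name="generatedMethod" returntype="string">
                <cfreturn "from the generator" />
            </cffunction>
            <cfinclude template="GeneratedExtensions.cfm" />
        </cfcomponent>

        <!--- GeneratedExtensions.cfm, the hand-written additions --->
        <cffunction name="handWrittenMethod" returntype="string">
            <cfreturn "added by hand" />
        </cffunction>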

    Read the article

  • Shadows vs Overloads in VB.NET

    - by serhio
    Where C# has new (which I personally see only as a workaround to hide a property that does not have a virtual/Overridable declaration), VB.NET has two concepts: Shadows and Overloads. In which cases should one be preferred over the other?
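
    A small sketch of the distinction: Shadows hides the base member by name, for every signature, while Overloads adds or replaces by signature.

        Class BaseClass
            Public Function Calc(ByVal x As Integer) As Integer
                Return x
            End Function
        End Class

        Class WithShadows
            Inherits BaseClass
            ' Hides *every* base Calc, whatever its signature.
            Public Shadows Function Calc(ByVal x As String) As String
                Return x
            End Function
        End Class

        Class WithOverloads
            Inherits BaseClass
            ' Adds a signature alongside the inherited Calc(Integer).
            Public Overloads Function Calc(ByVal x As String) As String
                Return x
            End Function
        End Class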

    Read the article

  • C# function normal return value VS out or ref argument

    - by misha-r
    Hi people, I've got a method in C# that needs to return a very large array (or any other large data structure, for that matter). Is there a performance gain in using a ref or out parameter instead of the standard return value? I.e., is there any performance or other gain in using void function(sometype input, ref largearray output) over largearray function(sometype input)? Thanks in advance!
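
    For the array case specifically there is no element copy to avoid: arrays are reference types, so a normal return hands back a single reference. A sketch with hypothetical names:

        // Returning the array copies one reference, not the elements.
        static double[] MakeLargeArray(int n)
        {
            var data = new double[n];
            // ... fill data ...
            return data;   // O(1): reference copy only
        }

        // 'out' saves nothing here; it passes the same reference
        // back through a different channel.
        static void MakeLargeArray(int n, out double[] result)
        {
            result = new double[n];
        }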

    Read the article

  • JS/CSS include section replacement, Debug vs Release

    - by Bayard Randel
    I'd be interested to hear how people handle conditional markup, specifically in their master pages, between Release and Debug builds. The particular scenario this applies to is handling concatenated JS and CSS files. I'm currently using the .NET port of YUI Compressor to produce a single site.css and site.js from a large collection of separate files. One thought that occurred to me was to place the JS and CSS include section in a user control or collection of panels and conditionally display the <link> and <script> markup based on the Debug or Release state of the assembly, something along the lines of:

        #if DEBUG
            pnlDebugIncludes.Visible = true;
        #else
            pnlReleaseIncludes.Visible = true;
        #endif

    The panel is really not very nice semantically - wrapping <script> tags in a <div> is a bit gross; there must be a better approach. I would also think that a block-level element like a <div> within <head> would be invalid HTML. Another idea was that this could possibly be handled using web.config section replacements, but I'm not sure how I would go about doing that.
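
    One alternative, sketched under the assumption that <compilation debug="..."> in web.config is the switch that matters: branch at runtime via HttpContext.IsDebuggingEnabled and emit the markup through an asp:PlaceHolder, which (unlike a Panel) renders no wrapper element inside <head>. The control ID and file names are hypothetical:

        // In the master page's code-behind: emit either the raw file
        // list or the single compressed bundle.
        protected void Page_Load(object sender, EventArgs e)
        {
            bool debug = HttpContext.Current.IsDebuggingEnabled;
            phIncludes.Controls.Add(new LiteralControl(debug
                ? "<script src=\"js/fileA.js\"></script>\n" +
                  "<script src=\"js/fileB.js\"></script>"
                : "<script src=\"js/site.js\"></script>"));
        }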

    Read the article

  • Zend_Cache_Backend_Sqlite vs Zend_Cache_Backend_File

    - by Alekc
    Hi, currently I'm using Zend_Cache_Backend_File for caching my project (especially responses from external web services). I was wondering if I could find some benefit in migrating to Zend_Cache_Backend_Sqlite.

    Possible advantages:
    - The file system stays well-ordered (only one file in the cache folder).
    - Removing expired entries should be quicker (my assumption, since Zend wouldn't need to scan each cache entry's internal metadata for its expiry date).

    Possible disadvantages:
    - Finding a record to read may be slower in terms of speed (with files, Zend checks whether the file exists based on the filename, which should be a bit quicker).

    I've tried to search a bit on the internet, but it seems there isn't much discussion about the matter. What do you think? Thanks in advance.

    Read the article

  • Clustered index - multi-part vs single-part index and effects of inserts/deletes

    - by Anssssss
    This question is about what happens with the reorganization of data in a clustered index when an insert is done. I assume that it should be more expensive to do inserts on a table which has a clustered index than on one that does not, because reorganizing the data in a clustered index involves changing the physical layout of the data on disk. I'm not sure how to phrase my question except through an example I came across at work. Assume there is a table (Junk) and there are two queries that are done on the table: the first query searches by Name and the second query searches by Name and Something. As I was working on the database, I discovered that the table had been created with two indexes, one to support each query, like so:

        --drop table Junk1
        CREATE TABLE Junk1
        (
            Name char(5),
            Something char(5),
            WhoCares int
        )

        CREATE CLUSTERED INDEX IX_Name ON Junk1
        (
            Name
        )

        CREATE NONCLUSTERED INDEX IX_Name_Something ON Junk1
        (
            Name, Something
        )

    Now when I looked at the two indexes, it seemed that IX_Name was redundant, since IX_Name_Something can be used by any query that desires to search by Name. So I would eliminate IX_Name and make IX_Name_Something the clustered index instead:

        --drop table Junk2
        CREATE TABLE Junk2
        (
            Name char(5),
            Something char(5),
            WhoCares int
        )

        CREATE CLUSTERED INDEX IX_Name_Something ON Junk2
        (
            Name, Something
        )

    Someone suggested that the first indexing scheme should be kept, since it would result in more efficient inserts/deletes (assume that there is no need to worry about updates to Name and Something). Would that make sense? I think the second indexing method would be better, since it means one less index needs to be maintained. I would appreciate any insight into this specific example, or pointers to more info on maintenance of clustered indexes.

    Read the article

  • Active Record/ORM vs Normal Forms?

    - by Arsenal
    Hello, I've been playing around with Active Record a bit, and I have noticed that Active Record/ORM always uses the following database model when creating a one-to-one relationship:

        Person:  id | country_id | name | ...
        Country: id | tld | name | ...

    Now I wondered: isn't this a violation of the third normal form? It clearly states "every non-prime attribute is non-transitively dependent on every key of the table". Well, this country_id isn't dependent on the person id, is it? So is this wrong, or am I just not getting the point?

    Read the article

  • readObject() vs. readResolve() to restore transient fields

    - by Joonas Pulakka
    According to the Serializable javadoc, readResolve() is intended for replacing an object read from the stream. But is it OK to use it for restoring transient fields, like so:

        private Object readResolve() {
            transientField = something;
            return this;
        }

    as opposed to using readObject():

        private void readObject(ObjectInputStream s)
                throws IOException, ClassNotFoundException {
            s.defaultReadObject();
            transientField = something;
        }

    Is there any reason to choose one over the other when they're used just to restore transient fields?

    Read the article

  • Entity Framework VS LINQ to SQL VS ADO.NET with stored procedures?

    - by BritishDeveloper
    How would you rate each of them in terms of:

    - Performance
    - Speed of development
    - Neat, intuitive, maintainable code
    - Flexibility
    - Overall

    I like my SQL and so have always been a die-hard fan of ADO.NET and stored procedures, but I recently had a play with LINQ to SQL and was blown away by how quickly I was writing out my data access layer, and I have decided to spend some time really understanding either LINQ to SQL or EF... or neither? I just want to check that there isn't a great flaw in any of these technologies that would render my research time useless, e.g. performance is terrible, or it's cool for simple apps but can only take you so far.

    Read the article

  • DriveInfo Class

    - by Asim Sajjad
    How do I execute the following code in Silverlight 3?

        DriveInfo[] drives = DriveInfo.GetDrives();
        foreach (DriveInfo drive in drives)
        {
            if (drive.IsReady)
            {
            }
        }

    It gives me an error, and when I try to add a reference for System.IO I can't find any reference for that DLL. Thanks in advance.

    Read the article

  • Consistency vs Design Guidelines

    - by Adrian Faciu
    Let's say that you get involved in the development of a large project that has already been in development for a long period (more than one year). The project follows some of the current design guidelines, but also has a few that differ and are currently discouraged (mostly naming guidelines). Supposing that you can't/aren't allowed to change the whole project: which should be more important, consistency (following the existing conventions and defying the current guidelines), or the guidelines (creating differences between modules of the same project)? Thanks.

    Read the article
