Search Results

Search found 30896 results on 1236 pages for 'best buy'.

Page 8/1236

  • WCF Best Practice for "Overloaded" methods

    - by Nate Bross
    What is the best practice for emulating overloaded methods over WCF? Typically I might write an interface like this:

        interface IInterface
        {
            MyType ReadMyType(int id);
            IEnumerable<MyType> ReadMyType(String name);
            IEnumerable<MyType> ReadMyType(String name, int maxResults);
        }

    What would this interface look like after you converted it to WCF?
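
    A minimal sketch of one common approach, for reference: WSDL has no notion of overloading, so each CLR overload gets a distinct wire name via OperationContract's Name property (the Name values below are illustrative, and MyType is assumed to be a [DataContract] type):

        using System.Collections.Generic;
        using System.ServiceModel;

        [ServiceContract]
        interface IInterface
        {
            // Each CLR overload maps to a uniquely named WCF operation.
            [OperationContract(Name = "ReadMyTypeById")]
            MyType ReadMyType(int id);

            [OperationContract(Name = "ReadMyTypeByName")]
            IEnumerable<MyType> ReadMyType(string name);

            [OperationContract(Name = "ReadMyTypeByNameCapped")]
            IEnumerable<MyType> ReadMyType(string name, int maxResults);
        }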

  • Best way to store configuration settings outside of web.config

    - by Wil
    I'm starting to consider creating a class library that I want to make generic so others can use it. While planning it out, I came to thinking about the various configuration settings that I would need. Since the idea is to make it open/shared, I wanted to make things as easy on the end user as possible. What's the best way to set up configuration settings without making use of web.config/app.config?
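
    A minimal sketch of one common workaround: ship the library with its own standalone settings file and a small loader, so consumers never have to edit web.config/app.config (the file name and schema below are invented for illustration):

        using System.Collections.Generic;
        using System.Xml.Linq;

        public static class LibrarySettings
        {
            // Reads <setting key="..." value="..."/> pairs from a standalone file.
            private static readonly Dictionary<string, string> Values =
                Load("MyLibrary.settings.xml");

            private static Dictionary<string, string> Load(string path)
            {
                var doc = XDocument.Load(path);
                var result = new Dictionary<string, string>();
                foreach (var e in doc.Root.Elements("setting"))
                    result[(string)e.Attribute("key")] = (string)e.Attribute("value");
                return result;
            }

            public static string Get(string key)
            {
                string value;
                return Values.TryGetValue(key, out value) ? value : null;
            }
        }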

  • Highlight text parts with jQuery, best practice

    - by helle
    Hey guys, I have a list of items containing names, and an event listener that checks for keypress events. If the user types e.g. an A, every name starting with an A should be shown with that leading A in bold. What is the best way to use jQuery to highlight only part of a string? Thanks for your help.

  • best practice when referring to a program's name in C

    - by guest
    What is considered best practice when referring to a program's name? I've seen

        #define PROGRAM_NAME "myprog"
        printf("this is %s\n", PROGRAM_NAME);

    as well as

        printf("this is %s\n", argv[0]);

    I know that the second approach will give me ./myprog rather than myprog when the program is not called from $PATH, and that the first approach will guarantee consistency regarding the program's name. But is there anything else that makes one approach superior to the other?

  • Are there any good references comparing Software Development CM best practices to IT CM best practices?

    - by dkackman
    I have spent my career on the software development side of things, and in the latter part have become more and more involved in the realm of Software Configuration Management. Now I am moving into an IT group and need to ramp up on CM practices from that standpoint. Are there any good references (books, websites, blogs, whatever) out there comparing Software CM practices to IT CM practices? Basically I'm in learning mode and am trying to compare things I already know from the software development side to things on the IT side.

  • Good scanner to buy as of January 2010

    - by AlexV
    I'm planning to buy a scanner soon, and since I haven't purchased one in years I wanted to know which brands/models are the "best". The ability to scan negatives is nice to have but not 100% required. If you are specifying a particular model, please note that it must work under a 64-bit OS (I'm using Windows 7 Ultimate 64-bit). I'm not looking for an all-in-one scanner/fax/printer solution. I want to use this to scan "old" photos from the non-digital era :) And sometimes to scan magazines and books. I am located in Canada, so it would be nice if your suggestion is available here. I usually buy online at NCIX and DirectCanada, if it helps to see what's available there. Edit: My search has led me to two particular models. I'm hesitating between the Canon CanoScan 8800F and the Epson V600. Anyone have good/bad words on either of them?

  • What type of SATA cable should I buy?

    - by Mehper C. Palavuzlar
    I've bought a new optical drive, a Samsung SH-S223C with a SATA connection. Since it is OEM with no box, no cables came with it, so I need two cables now. I have an extra SATA power cable, so that's covered, but I have to buy the data cable. What type of cable should I buy? If you could include a picture of it along with its type, I'd be happy. My motherboard is a Gigabyte P43-ES3G. TIA.

  • Which power supply should I buy?

    - by arayman
    I am going to buy a new Corsair PSU for my IBM ThinkCentre desktop. My previous original PSU was 230 watts. I have not added any hardware to the machine; it's in the same condition as when I bought it. On that basis, please suggest which Corsair power supply, and of what wattage, would be suitable for me to buy, referring to the link below. http://www.corsair.com/en/power-supply-units.html Thanks.

  • Repair BAD Sectors or Buy a new HDD?

    - by Nehal J. Wani
    I have a Seagate internal hard disk drive. I recently opened up my laptop [Dell Inspiron N5010; warranty has expired], cleaned it, and it worked normally after waking up from hibernation. However, when I restarted it, it got stuck on the Windows loading screen, then tried to boot from the Dell recovery partition but failed, giving the error:

        Windows has encountered a problem communicating with a device connected to your computer. This error can be caused by unplugging a removable storage device such as an external USB drive while the device is in use, or by faulty hardware such as a hard drive or CD-ROM drive that is failing. Make sure any removable storage is properly connected and then restart your computer. If you continue to receive this error message, contact the hardware manufacturer. Status: 0xc00000e9. Info: An unexpected I/O error has occurred.

    While cleaning, I had mistakenly touched the round silvery thing at the bottom of the HDD; I don't know whether this caused the problem or not. Since I also have Fedora installed on the same HDD, I can boot from it, but it shows weird read errors when I ask it to mount the Windows partitions. The disk utility also says that the hard disk has many bad sectors and needs to be replaced. I downloaded SeaTools from the Seagate website and ran it; in the long test, I gave it permission to repair the first 100 errors, which it did successfully. Now I am confused about what I should do. Costs:

        a.  Internal HDD, 500 GB: Rs 3518
        b1. External HDD, 500 GB: Rs 3472
        b2. External HDD, 1 TB: Rs 5500
        c.  Internal-to-external converter case: Rs 650

    I have the following options:

    (i) Buy an external HDD and back up my data, then try to repair the HDD's bad sectors. Two cases arise: (a) my internal HDD gets repaired [almost], or (b) it doesn't, and I need to buy another internal HDD to replace the damaged one, OR break the seal of the external one and put it inside my laptop as the internal drive. Breaking the case involves risks.

    (ii) Buy an internal HDD and an internal-to-external converter case [not very reliable], back up my data, then try to repair the HDD's bad sectors. Again two cases arise: (a) my internal HDD gets repaired [almost], or (b) it doesn't, and I just put in the new internal HDD I bought.

    Experts, please guide me as to which will be the best value-for-money option. Also, if an HDD is failing, should I avoid even reading from it, since there is a chance of other sectors failing? That is, is it wrong to read from the HDD without taking a backup first?

  • Best practices for using the Entity Framework with WPF DataBinding

    - by Ken Smith
    I'm in the process of building my first real WPF application (i.e., the first intended to be used by someone besides me), and I'm still wrapping my head around the best way to do things in WPF. It's a fairly simple data access application using the still-fairly-new Entity Framework, but I haven't been able to find a lot of guidance online for the best way to use these two technologies (WPF and EF) together. So I thought I'd toss out how I'm approaching it, and see if anyone has any better suggestions.

    I'm using the Entity Framework with SQL Server 2008. The EF strikes me as both much more complicated than it needs to be, and not yet mature, but Linq-to-SQL is apparently dead, so I might as well use the technology that MS seems to be focusing on. This is a simple application, so I haven't (yet) seen fit to build a separate data layer around it. When I want to get at data, I use fairly simple Linq-to-Entities queries, usually straight from my code-behind, e.g.:

        var families = from family in entities.Family.Include("Person")
                       orderby family.PrimaryLastName, family.Tag
                       select family;

    Linq-to-Entities queries return an IOrderedQueryable result, which doesn't automatically reflect changes in the underlying data; e.g., if I add a new record via code to the entity data model, the existence of this new record is not automatically reflected in the various controls referencing the Linq query. Consequently, I'm throwing the results of these queries into an ObservableCollection, to capture underlying data changes:

        familyOC = new ObservableCollection<Family>(families.ToList());

    I then map the ObservableCollection to a CollectionViewSource, so that I can get filtering, sorting, etc., without having to return to the database:

        familyCVS.Source = familyOC;
        familyCVS.View.Filter = new Predicate<object>(ApplyFamilyFilter);
        familyCVS.View.SortDescriptions.Add(new System.ComponentModel.SortDescription("PrimaryLastName", System.ComponentModel.ListSortDirection.Ascending));
        familyCVS.View.SortDescriptions.Add(new System.ComponentModel.SortDescription("Tag", System.ComponentModel.ListSortDirection.Ascending));

    I then bind the various controls and what-not to that CollectionViewSource:

        <ListBox DockPanel.Dock="Bottom" Margin="5,5,5,5" Name="familyList"
                 ItemsSource="{Binding Source={StaticResource familyCVS}, Path=., Mode=TwoWay}"
                 IsSynchronizedWithCurrentItem="True"
                 ItemTemplate="{StaticResource familyTemplate}"
                 SelectionChanged="familyList_SelectionChanged" />

    When I need to add or delete records/objects, I manually do so from both the entity data model and the ObservableCollection:

        private void DeletePerson(Person person)
        {
            entities.DeleteObject(person);
            entities.SaveChanges();
            personOC.Remove(person);
        }

    I'm generally using StackPanel and DockPanel controls to position elements. Sometimes I'll use a Grid, but it seems hard to maintain: if you want to add a new row to the top of your grid, you have to touch every control directly hosted by the grid to tell it to use a new line. Uggh. (Microsoft has never really seemed to get the DRY concept.) I almost never use the VS WPF designer to add, modify or position controls. The WPF designer that comes with VS is sort of vaguely helpful to see what your form is going to look like, but even then, well, not really, especially if you're using data templates that aren't binding to data that's available at design time. If I need to edit my XAML, I take it like a man and do it manually. Most of my real code is in C# rather than XAML. As I've mentioned elsewhere, entirely aside from the fact that I'm not yet used to "thinking" in it, XAML strikes me as a clunky, ugly language that also happens to come with poor designer and intellisense support, and that can't be debugged. Uggh. Consequently, whenever I can see clearly how to do something in C# code-behind that I can't easily see how to do in XAML, I do it in C#, with no apologies. There's been plenty written about how it's good practice to almost never use code-behind in a WPF page (say, for event handling), but so far at least, that makes no sense to me whatsoever. Why should I do something in an ugly, clunky language with god-awful syntax, an astonishingly bad editor, and virtually no type safety, when I can use a nice, clean language like C# that has a world-class editor, near-perfect intellisense, and unparalleled type safety? So that's where I'm at. Any suggestions? Am I missing any big parts of this? Anything that I should really think about doing differently?

  • Passing integer lists in a SQL query, best practices

    - by Artiom Chilaru
    I'm currently looking at ways to pass lists of integers in a SQL query, trying to decide which of them is best in which situation, what the benefits of each are, and what pitfalls should be avoided :) Right now I know of three ways that we currently use in our application.

    1) Table-valued parameter. Create a new table-valued parameter type in SQL Server:

        CREATE TYPE [dbo].[TVP_INT] AS TABLE(
            [ID] [int] NOT NULL
        )

    Then run the query against it:

        using (var conn = new SqlConnection(DataContext.GetDefaultConnectionString))
        {
            var comm = conn.CreateCommand();
            comm.CommandType = CommandType.Text;
            comm.CommandText = @"
                UPDATE DA
                SET [tsLastImportAttempt] = CURRENT_TIMESTAMP
                FROM [Account] DA
                JOIN @values IDs ON DA.ID = IDs.ID";
            comm.Parameters.Add(new SqlParameter("values",
                downloadResults.Select(d => d.ID).ToDataTable()) { TypeName = "TVP_INT" });
            conn.Open();
            comm.ExecuteScalar();
        }

    The major disadvantage of this method is that Linq doesn't support table-valued parameters (if you create an SP with a TVP parameter, Linq won't be able to run it) :(

    2) Convert the list to binary and use it in Linq! This is a bit better: create an SP, and you can run it within Linq :) To do this, the SP takes an IMAGE parameter, and we use a user-defined function (UDF) to convert it to a table. We currently have implementations of this function written in C++ and in assembly; both have pretty much the same performance :) Basically, each integer is represented by 4 bytes and passed to the SP. In .NET we have an extension method that converts an IEnumerable<Int32> to a byte array. The extension method:

        public static Byte[] ToBinary(this IEnumerable<Int32> intList)
        {
            return ToBinaryEnum(intList).ToArray();
        }

        private static IEnumerable<Byte> ToBinaryEnum(IEnumerable<Int32> intList)
        {
            IEnumerator<Int32> marker = intList.GetEnumerator();
            while (marker.MoveNext())
            {
                Byte[] result = BitConverter.GetBytes(marker.Current);
                Array.Reverse(result);
                foreach (byte b in result)
                    yield return b;
            }
        }

    The SP:

        CREATE PROCEDURE [Accounts-UpdateImportAttempts]
            @values IMAGE
        AS
        BEGIN
            UPDATE DA
            SET [tsLastImportAttempt] = CURRENT_TIMESTAMP
            FROM [Account] DA
            JOIN dbo.udfIntegerArray(@values, 4) IDs ON DA.ID = IDs.Value4
        END

    And we can use it by running the SP directly, or in any Linq query we need:

        using (var db = new DataContext())
        {
            db.Accounts_UpdateImportAttempts(downloadResults.Select(d => d.ID).ToBinary());
            // or
            var accounts = db.Accounts
                .Where(a => db.udfIntegerArray(downloadResults.Select(d => d.ID).ToBinary(), 4)
                    .Select(i => i.Value4)
                    .Contains(a.ID));
        }

    This method has the benefit of using compiled queries in Linq (which will have the same SQL definition and query plan, so they will also be cached), and can be used in SPs as well. Both these methods are theoretically unlimited, so you can pass millions of ints at a time :)

    3) The simple Linq .Contains(). A simpler approach, perfect in simple scenarios, but of course limited by that simplicity:

        using (var db = new DataContext())
        {
            var accounts = db.Accounts
                .Where(a => downloadResults.Select(d => d.ID).Contains(a.ID));
        }

    The biggest drawback of this method is that each integer in the downloadResults variable is passed as a separate parameter, so the query is limited by SQL Server's maximum number of parameters per query (a couple of thousand, if I remember right). So I'd like to ask: which of these do you think is best, and what other methods and approaches have I missed?

  • Best practices for cross platform git config?

    - by Bas Bossink
    Context: A number of my application user configuration files are kept in a git repository for easy sharing across multiple machines and multiple platforms. Amongst these configuration files is .gitconfig, which contains the following settings for handling the carriage return/linefeed characters:

        [core]
            autocrlf = true
            safecrlf = false

    Problem: These settings also get applied on a GNU/Linux platform, which causes obscure errors.

    Question: What are some best practices for handling these platform-specific differences in configuration files?

    Proposed solution: I realize this problem could be solved by having a branch for each platform, keeping the common stuff in master, and merging with the platform branch when master moves forward. I'm wondering if there are any easier solutions to this problem.

  • PHP Flush: How Often and Best Practices

    - by Cory Dee
    I just finished reading this post: http://developer.yahoo.com/performance/rules.html#flush and have already implemented a flush after the top portion of my page loads (head, CSS, top banner/search/nav). Is there any performance hit in flushing? Is there such a thing as doing it too often? What are the best practices? If I am going to hit an external API for data, would it make sense to flush beforehand, so that the user isn't waiting on that data to come back and can at least get some content first? Thanks to everyone in advance.

  • <100% Test coverage - best practices in selecting test areas

    - by Paul Nathan
    Suppose you're working on a project and the time/money budget does not allow 100% coverage of all code/paths. It then follows that some critical subset of your code needs to be tested. Clearly a 'gut-check' approach can be used to test the system, where intuition and manual analysis can produce some sort of test coverage that will be 'ok'. However, I'm presuming that there are best practices/approaches/processes that identify critical elements up to some threshold and let you focus your testing efforts on those blocks. For example, one popular process for identifying failures in manufacturing is Failure Mode and Effects Analysis. I'm looking for processes to identify critical testing blocks in software.

  • Best practices in ASP.NET code-behind pages

    - by patricks418
    Hi, I am an experienced developer, but I am new to web application development. Now I am in charge of developing a new web application, and I could really use some input from experienced web developers out there. I'd like to understand exactly what experienced web developers do in the code-behind pages. At first I thought it was best to have a rule that all database access and business logic should be performed in classes external to the code-behind pages; my thought was that only logic necessary for the web form would be performed in the code-behind. I still think that all the business logic should be performed in other classes, but I'm beginning to think it would be alright if the code-behind had access to the database to query it directly, rather than having to call other classes to receive a dataset or collection back. Any input would be appreciated.
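
    For what it's worth, a minimal sketch of the rule described above, with the code-behind doing nothing but wiring UI events to an external business class (OrderService, GetOpenOrders, and ordersGrid are hypothetical names; ordersGrid would be a grid control declared in the page markup):

        using System;
        using System.Web.UI;

        public partial class OrdersPage : Page
        {
            protected void Page_Load(object sender, EventArgs e)
            {
                if (!IsPostBack)
                {
                    // All querying and business logic live in OrderService,
                    // not in this code-behind.
                    ordersGrid.DataSource = new OrderService().GetOpenOrders();
                    ordersGrid.DataBind();
                }
            }
        }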

  • Best practices for fixed-width processing in .NET

    - by jmgant
    I'm working on a .NET web service that will be processing a text file with a relatively long, multilevel record format. Each record in the file represents a different entity; the record contains multiple sub-types. (The same record format is currently being processed by a COBOL job, if that gives you a better picture of what we're looking at.) I've created a class structure (a DATA DIVISION, if you will) to hold the input data. My question is, what best practices have you found for processing large, complex fixed-width files in .NET? My general approach will be to read the entire line into a string and then parse the data from the string into the classes I've created. But I'm not sure whether I'll get better results working with the characters in the string as an array, or with the string itself. I guess that's the specific question, string vs. char[], but I would appreciate any other pointers anyone has. Thanks.
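
    As a point of comparison, here's a minimal sketch of the string-based approach, cutting each field straight out of the line with Substring (the record name, offsets, and widths below are invented; in practice they would come from the copybook layout):

        public class CustomerRecord
        {
            public string Id;
            public string Name;
            public decimal Balance;

            // Each field is a fixed slice of the input line.
            public static CustomerRecord Parse(string line)
            {
                return new CustomerRecord
                {
                    Id      = line.Substring(0, 8).Trim(),
                    Name    = line.Substring(8, 30).Trim(),
                    Balance = decimal.Parse(line.Substring(38, 10).Trim())
                };
            }
        }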

  • SSRS 2008 Installation Guidelines/Best Practices?

    - by Brad Bowman
    I know I have seen recommendations for installing SSRS 2005 stating that you should separate SSRS from the DB engine that hosts the data sources for your reports, i.e., that you should not install them on the same server. Is there any documentation for SSRS 2008 that provides guidelines/best practices for installation? I am assuming that the same holds true for SSRS 2008 as it did for SSRS 2005, but I have not seen it specifically stated anywhere. We have a project that will utilize SSRS 2008, and there is a difference of opinion on where to install it: on its own dedicated server, or on the same SQL Server as the report data sources. A link to any documentation would be gratefully appreciated; all I am finding, besides what is in BOL, refers to 2005. Thanks in advance!

  • Developing ASP.NET MVC UI Extensions - best approach

    - by user252160
    What are the best practices in developing rich UI extensions for ASP.NET MVC (I mean asynchronous/partial loading, slick effects, skinning, etc.)? I saw that Telerik has an MVC suite of extensions, but I haven't tried them yet, so I cannot comment on them. My biggest concern at the moment is how to structure the code of my extensions so that the C#, ASP markup, and jQuery remain separate from each other, yet encapsulated in a way that makes the extension easy to distribute and reuse. I know that the user control approach had many flaws, yet it sort of allowed application developers to just reference a control, set some parameters, and get it going in a few minutes. I'd like to achieve the same kind of portability/reusability while still keeping the code easy to extend and build upon. Some ideas that come to mind as topics for discussion: template helpers vs. extension-method helpers (or a possible integration of the two); efficient use of jQuery; naming conventions; asynchronous action loading; etc. You can add whatever comes to your mind.
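
    For discussion's sake, a minimal sketch of the extension-method style of encapsulation: the C# lives in an HtmlHelper extension, the jQuery and CSS ship as separate script/style files that hook the emitted class, and callers get the "reference it, set parameters, go" experience (all names below are invented):

        using System.Web.Mvc;

        public static class WidgetExtensions
        {
            // Emits <div id="..." class="collapsible-panel">title</div>;
            // a companion jQuery file attaches behavior to .collapsible-panel.
            public static MvcHtmlString CollapsiblePanel(
                this HtmlHelper html, string id, string title)
            {
                var div = new TagBuilder("div");
                div.GenerateId(id);
                div.AddCssClass("collapsible-panel");
                div.SetInnerText(title);
                return MvcHtmlString.Create(div.ToString());
            }
        }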

  • Best Practices for Managed SalesForce App Development?

    - by Fiid
    We're developing applications for AppExchange and are trying to figure out the best way to do development and release management. There are several issues around this:

    1) Package prefixes. We are developing code in unmanaged mode and releasing as managed, so we have to add all the package prefixes into the code. Is there a way to do this dynamically at runtime? Right now we're using an Ant script, which stops us benefitting from the force.com IDE plugin.

    2) Resource files. We are doing some ajax-ey stuff and as a result have a few different resource files we upload, some of which are multiple-file resources (zip files). Has anyone automated the building of these resources using Ant, and does that work well? Our environment seems very fragile, working for some developers and not others; have other people had this problem? How did you resolve it?

  • Best practices for organizing .NET P/Invoke code to Win32 APIs

    - by Paul Sasik
    I am refactoring a large and complicated code base in .NET that makes heavy use of P/Invoke to Win32 APIs. The structure of the project is not the greatest, and I am finding DllImport statements all over the place, very often duplicated for the same function, and also declared in a variety of ways: the import directives and methods are sometimes declared as public, sometimes private, sometimes as static and sometimes as instance methods. My worry is that refactoring may have unintended consequences, but this might be unavoidable. Are there documented best practices I can follow that can help me out? My instinct is to organize a static/shared Win32 P/Invoke API class that lists all of these methods and associated constants in one file... (The code base is made up of over 20 projects with a lot of windows message passing and cross-thread calls. It's also a VB.NET project upgraded from VB6, if that makes a difference.)
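
    A minimal sketch of that single-home layout, following the common NativeMethods naming convention (the two imports shown are just examples of well-known Win32 signatures):

        using System;
        using System.Runtime.InteropServices;

        // One internal static holder for the native surface; callers use
        // NativeMethods.SendMessage(...) instead of re-declaring imports.
        internal static class NativeMethods
        {
            internal const uint WM_SETTEXT = 0x000C;

            [DllImport("user32.dll", CharSet = CharSet.Auto)]
            internal static extern IntPtr SendMessage(
                IntPtr hWnd, uint msg, IntPtr wParam, IntPtr lParam);

            [DllImport("kernel32.dll", SetLastError = true)]
            [return: MarshalAs(UnmanagedType.Bool)]
            internal static extern bool CloseHandle(IntPtr handle);
        }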

  • Best practices book for CRUD apps

    - by Kevin L.
    We will soon be designing a new tool to calculate commissions across multiple business units. This new compensation scheme is pretty clever and well thought-out, but the complexity that the implementation will involve will make the Hubble look like a toaster. A significant portion of the programming industry involves CRUD apps; updating insurance data, calculating commissions (Joel included) ...even storing questions and answers for a programmer Q&A site. We as programmers have Code Complete for the low-level formatting/style and Design Patterns for high-level architecture (to name just a few). Where’s the comparable book that teaches best practices for CRUD?

    Read the article

  • Mutex names - best practice?

    - by Argalatyr
    Related to this question, what is the best practice for naming a mutex? I realize this may vary with OS and even with version (especially for Windows), so please specify the platform in answering. My interest is in Win XP and Vista. EDIT: I am motivated by curiosity, because in Rob Kennedy's comment under his (excellent) answer to the above-linked question, he implied that the choice of mutex name is non-trivial and should be the subject of a separate question. EDIT2: The referenced question's goal was to ensure only a single instance of an app is running.
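
    For context, a minimal sketch of one widely used naming convention for that single-instance scenario on XP/Vista: a "Global\" prefix, so the name is machine-wide across Terminal Services and fast-user-switching sessions, plus an application-specific GUID (the GUID below is a placeholder):

        using System;
        using System.Threading;

        class Program
        {
            private const string MutexName =
                @"Global\{7F4A2C1E-3B5D-4E88-9C01-5A6B7C8D9E0F}";

            static void Main()
            {
                bool createdNew;
                using (var mutex = new Mutex(false, MutexName, out createdNew))
                {
                    if (!createdNew)
                    {
                        Console.WriteLine("Another instance is already running.");
                        return;
                    }
                    Console.WriteLine("Running as the only instance.");
                    Console.ReadLine();
                }
            }
        }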
