Search Results

Search found 2286 results on 92 pages for 'benefits'.

Page 9/92

  • Is there a resource that explains the benefits of layered programming?

    - by P.Brian.Mackey
    Some developers I know favor what I would call a procedural programming style. I recognize that procedural programming has its uses, albeit not in the business-application world of .NET programming. So let's say we have a WinForms application with a button-click event, and the click handler does everything from UI configuration to the database call and data manipulation. You end up with a method that is hundreds of lines long. Aside from the fact that this code can't be considered testable for various reasons, this style of programming is fragile to change. I can talk about OO, anti-patterns, etc., but any distinct topic I can dream up requires a great deal of explanation before the potential benefits are understood. Short of finding a new job (lots of businesses program this way), how can I teach these kinds of developers to write better code? Obviously we can't sit around a table and discuss pros and cons all day, given time constraints and real work that has to be done, yet intense training is the only fix I can think of. Not that I write perfect code, I most certainly do not. I do believe there are certain best practices that should be followed as a rule, e.g. OO in the context of .NET. The most common excuse I hear is "we can't write code fast enough if we do it like that".
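
    To make the contrast concrete, here is a deliberately tiny sketch of the layering being argued for (hypothetical names, Python only for brevity; the language doesn't matter). The point is that the business rule becomes a small, pure, testable function once it is pulled out of the click handler:

        # Data-access layer: the only code that knows about storage.
        def fetch_orders(customer_id):
            # a real implementation would query the database here
            return [("widget", 3), ("gadget", 1)]

        # Business layer: pure logic, unit-testable without UI or database.
        def total_items(orders):
            return sum(qty for _, qty in orders)

        # UI layer: the button-click handler only wires the layers together.
        def on_button_click(customer_id):
            orders = fetch_orders(customer_id)
            print(f"{total_items(orders)} items on order")

        on_button_click(42)  # prints: 4 items on order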

    Read the article

  • What benefits can I get upgrading my ASP.NET (Webform) + DAL(EF) + Repository + BLL structure to MVC?

    - by Etienne
    I'm in the process of defining the approach that best fits our needs for a big web-application development. For now, I'm thinking of going with an ASP.NET architecture: a DAL using Entity Framework, a Repository concept so the BLL never accesses the DAL directly, and a BLL that calls the repository and performs whatever manipulation is necessary to prepare data for the presentation layer (.aspx files). I don't plan to use ASP.NET controls; I prefer to keep things simple and lightweight using plain HTML and jQuery UI controls, doing most server calls with jQuery Ajax. When needed, I plan to use handlers (.ashx) to call BLL methods that return JSON or HTML to the client for dynamic content. My solution also has a test project that mocks the Repository with in-memory data, so testing BLL methods does not rely on the database... It may be useful to add that we will build a big application over this architecture, with hundreds of tables and stored procedures and a lot of reading and writing to the database. My question is: with this architecture in mind, are there evident advantages to be had from using an MVC3 project instead of the described WebForms-based architecture? Do you see anything in this architecture that may cause us problems during the next steps of development? I know the MVC pattern from using it in other projects with Django... but the Microsoft MVC implementation looks so much more complex and verbose than Django's, which is why I'm hesitating (or waiting for a little push?) right now before jumping in... We are in a real project with deadlines and don't want to slow the development process without any real benefits.
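
    A minimal sketch of the mocked-repository arrangement the question describes (invented names, Python only for brevity; the BLL sees only the repository's interface, so tests never touch the database):

        class InMemoryOrderRepository:
            """Test double: same interface as the EF-backed repository,
            but backed by a dict instead of the database."""
            def __init__(self):
                self._orders = {}

            def add(self, order_id, order):
                self._orders[order_id] = order

            def get(self, order_id):
                return self._orders.get(order_id)

        class OrderService:
            """The BLL: depends only on the repository's interface."""
            def __init__(self, repository):
                self._repository = repository

            def place_order(self, order_id, items):
                order = {"items": items, "status": "placed"}
                self._repository.add(order_id, order)
                return order

        # In a unit test: no database, no EF, just in-memory data.
        service = OrderService(InMemoryOrderRepository())
        assert service.place_order(1, ["widget"])["status"] == "placed"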

    Read the article

  • Do you leverage the benefits of the open-closed principle?

    - by Kaleb Pederson
    The open-closed principle (OCP) states that an object should be open for extension but closed for modification. I believe I understand it, and I use it in conjunction with SRP to create classes that do only one thing. I try to create many small methods that make it possible to extract all the behavior controls into methods that may be extended or overridden in some subclass. Thus, I end up with classes that have many extension points, be it through dependency injection and composition, events, delegation, etc. Consider the following simple, extensible class: class PaycheckCalculator { // ... protected decimal GetOvertimeFactor() { return 2.0M; } } Now say, for example, that the overtime factor changes to 1.5. Since the above class was designed to be extended, I can easily subclass it and return a different overtime factor. But... despite the class being designed for extension and adhering to OCP, I'll modify the single method in question rather than subclassing, overriding the method, and re-wiring my objects in my IoC container. As a result I've violated part of what OCP attempts to accomplish. It feels like I'm just being lazy because the above is a bit easier. Am I misunderstanding OCP? Should I really be doing something different? Do you leverage the benefits of OCP differently? Update: based on the answers, it looks like this contrived example is a poor one for a number of reasons. Its main intent was to demonstrate that the class was designed to be extended by providing methods which, when overridden, would alter the behavior of public methods without the need for changing internal or private code. Still, I definitely misunderstood OCP.
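
    For concreteness, here is roughly what the "OCP-compliant" route looks like (a Python transliteration of the C# snippet above, with invented surrounding details):

        class PaycheckCalculator:
            """Designed for extension: the behavior knob lives in a
            small overridable method rather than an inline constant."""
            def overtime_factor(self):
                return 2.0

            def overtime_pay(self, hourly_rate, overtime_hours):
                return hourly_rate * self.overtime_factor() * overtime_hours

        # Open for extension, closed for modification: the new policy
        # is a subclass; the original class is untouched.
        class ReducedOvertimeCalculator(PaycheckCalculator):
            def overtime_factor(self):
                return 1.5

        calc = ReducedOvertimeCalculator()   # re-wired in the IoC container
        print(calc.overtime_pay(20.0, 10))   # 300.0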

    Read the article

  • Do the benefits of Resin/Quercus outweigh the overhead?

    - by Craige
    Lately, I've been looking more and more into Resin + Quercus as a technology for developing an application of mine. The reason I started looking into it is that this application has heavy reporting needs, many of which cannot (or realistically, should not) be generated in real time. Java would offer a nice backend for queuing and generating reports. Also, with Quercus I would be able to develop my data models in Hibernate and use them "from PHP", thus effectively stretching these models across front- and back-end. The same concept would apply to any front/back-end common business logic, which could be developed in Java libraries. Now, the downside is that whichever front-end (PHP) MVC framework I choose (my goal was Symfony 2), it is unlikely to work without some heavy modification, if it can work at all. Quercus is a pretty close implementation of PHP and is supposed to be compatible with PHP 5.3, so namespaces and closures SHOULDN'T be a problem, but when I tried to run an existing Symfony 1.4 app, I failed miserably. So, my question to you is: do you think the benefits of Resin + Quercus outweigh the overhead of using a not-so-perfect/stable implementation of PHP? If this were your application, and your goal was an end product rather than educational purposes, what would you decide?

    Read the article

  • June Oracle Technology Network NEW Member Benefits - books books and more books!!!

    - by Cassandra Clark
    As we mentioned a few posts ago, we are working to bring Oracle Technology Network members NEW benefits each month. Listed below are several discounts on technology books brought to you by Apress, Pearson, CRC Press, and Packt Publishing. Happy reading!!!
    Apress offers: get 50% off the eBook below using promo code ORACLEJUNEJCCF.
    - Pro ODP.NET for Oracle Database 11g | Edmund T. Zehoo. A comprehensive and easy-to-understand guide to using the Oracle Data Provider (ODP) version 11g on the .NET Framework. It also outlines the core GoF (Gang of Four) design patterns and coding techniques employed to build and deploy high-impact, mission-critical applications using advanced Oracle database features through the ODP.NET provider.
    Pearson offers: get 35% off all titles listed below using code OTNMEMBER.
    - SOA Design Patterns | Thomas Erl | ISBN: 0136135161. In cooperation with experts and practitioners throughout the SOA community, best-selling author Thomas Erl brings together the de facto catalog of design patterns for SOA and service-orientation.
    - Oracle Performance Survival Guide | Guy Harrison | ISBN: 9780137011957. The fast, complete, start-to-finish guide to optimizing Oracle performance.
    - Core JavaServer Faces, Third Edition | David Geary and Cay S. Horstmann | ISBN: 9780137012893. Provides everything you need to master the powerful and time-saving features of JSF 2.0.
    - Solaris Security Essentials | ISBN: 9780137012336. A superb guide to deploying and managing secure computer environments.
    - Effective C#, Second Edition | Bill Wagner | ISBN: 9780321658708. Respected .NET expert Bill Wagner identifies fifty ways you can leverage the full power of the C# 4.0 language to express your designs concisely and clearly.
    CRC Press offers: use code 813DA to get 20% off the title below.
    - Secure and Resilient Software Development. This book illustrates all phases of the secure software development life cycle. It details quality software development strategies that stress resilience requirements with precise, actionable, and ground-level inputs.
    Packt Publishing offers: use the promo code "Java35June" to save 35% off each eBook mentioned below.
    - JSF 2.0 Cookbook | Anghel Leonard | ISBN: 978-1-847199-52-2. Packed with fast, practical solutions and techniques for JavaServer Faces developers who want to push past the JSF basics.
    - JavaFX 1.2 Application Development Cookbook | Vladimir Vivien | ISBN: 978-1-847198-94-5. Fast, practical solutions and techniques for building powerful, responsive Rich Internet Applications in JavaFX.

    Read the article

  • What are the benefits of running chef-server instead of chef-solo?

    - by strife25
    I am looking at automated deployment solutions for my team and have been playing with Chef for the past few days. I've been able to get a simple web app up and running from a base Red Hat VM using chef-solo. Our end goal is to use Chef (or another system) to automatically deploy application topologies to the cloud as we run builds. Our process would basically run like so:
    - Our web app code, dependencies, and Chef cookbooks are stored in SCM.
    - A build is executed and creates a single package for images to acquire and test against.
    - The build engine then deploys new cloud images that run a Chef client to get the packages installed.
    - The images acquire the cookbooks from SCM or the Chef server and install everything to get up and running.
    What are the benefits and/or use cases for getting a Chef server running? Are there any major benefits to having a Chef server hold and acquire the cookbooks from SCM vs. using chef-solo and having a script that will pull the cookbooks from SCM?

    Read the article

  • What are the benefits of a disk install vs. Wubi? And can I migrate my settings easily?

    - by Alex Bixel
    I chose to do the Wubi install because it was short, simple, and easy to reverse (no messing with partitions required). To be honest, I can handle the lack of a hibernate function, and beyond hibernation and negligibly faster hard-disk reads/writes I haven't really heard of other benefits to installing on a separate partition. Yet almost everyone I encounter seems to have opted for the disk installation. Are there more benefits I should be aware of, especially as a college student who wants a fast, efficient machine for documents, web browsing, etc.? (Nothing big like gaming; I can run that on Windows.) Also, I have a fair number of settings and packages installed that I spent a bit of time on and would rather not have to set up again. Is there any way I can migrate all of these settings from the virtual disk on my C:/ drive (the Wubi installation) to the disk installation in another partition? (I have a 16GB USB drive if that'll do the trick.)

    Read the article

  • What Functional features are worth a little OOP confusion for the benefits they bring?

    - by bonomo
    After learning functional programming in Haskell and F#, the OOP paradigm seems ass-backwards with classes, interfaces, and objects. Which aspects of FP can I bring to work that my co-workers can understand? Are any FP styles worth talking to my boss about, so that we can retrain the team to use them? Possible aspects of FP:
    - Immutability
    - Partial application and currying
    - First-class functions (function pointers / functional objects / Strategy pattern)
    - Lazy evaluation (and monads)
    - Pure functions (no side effects)
    - Expressions (vs. statements: each line of code produces a value instead of, or in addition to, causing side effects)
    - Recursion
    - Pattern matching
    Is it a free-for-all where we can do whatever the programming language supports, to the limit the language supports it? Or is there a better guideline?
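
    Several of these aspects translate directly into mainstream languages. A minimal Python sketch (illustrative only) showing pure functions, partial application, first-class functions, and an immutable collection:

        from functools import partial, reduce

        # Pure function: output depends only on inputs, no side effects.
        def net_price(price, tax_rate):
            return price * (1 + tax_rate)

        # Partial application: fix one argument, get a new function back.
        with_vat = partial(net_price, tax_rate=0.20)

        # First-class functions: pass behavior around like data
        # (the Strategy pattern without the class ceremony).
        prices = (100.0, 250.0, 40.0)   # immutability: a tuple, not a list
        total = reduce(lambda acc, p: acc + with_vat(p), prices, 0.0)
        print(total)  # 468.0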

    Read the article

  • What is recommended minimum object size for gzip performance benefits?

    - by utt73
    I'm working on improving page display times, and one of the methods is to gzip content from the webserver. Google recommends: "Note that gzipping is only beneficial for larger resources. Due to the overhead and latency of compression and decompression, you should only gzip files above a certain size threshold; we recommend a minimum range between 150 and 1000 bytes. Gzipping files below 150 bytes can actually make them larger." We serve our content through Akamai, using their network as a proxy and CDN. What they've told me: "Following up on your question regarding the minimum size at which Akamai will compress a requested object when sending it to the end user: the minimum size is 860 bytes." My reply: "What is the reason Akamai's minimum size is 860 bytes? And why, for example, is this not the case for the files Akamai serves for Facebook (see below)? Google recommends gzipping more aggressively, and that seems appropriate on our site, where the most frequent hits, by far, are AJAX calls that are smaller than 860 bytes." Akamai's response: "The reason 860 bytes is the minimum size for compression is twofold: (1) the overhead of compressing an object under 860 bytes outweighs the performance gain; (2) objects under 860 bytes can be transmitted via a single packet anyway, so there isn't a compelling reason to compress them." So I'm here for some fact-checking. Is the 860-byte limit due to packet size the end of this reasoning? Why would high-traffic sites push this down to the 150-byte limit... just to save on bandwidth costs (since CDNs base their charges on bandwidth offloaded from origin), or is there a performance gain in doing so?
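
    The "small objects get bigger" claim is easy to sanity-check locally. A minimal Python sketch (exact byte counts will vary slightly with the zlib build):

        import gzip

        samples = {
            "tiny AJAX-sized JSON": b'{"ok": true, "id": 42}',
            "~150 bytes of text": b"The quick brown fox jumps over the lazy dog. " * 3,
            "~2 KB of repetitive HTML": b"<div class='row'><span>item</span></div>" * 50,
        }

        for label, payload in samples.items():
            compressed = gzip.compress(payload)
            print(f"{label}: {len(payload)} -> {len(compressed)} bytes")
        # The 22-byte payload grows (gzip adds an ~18-byte header/trailer
        # plus deflate framing); the 2 KB payload shrinks dramatically.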

    Read the article

  • I need to know the reasons why learning Linux Shell Scripting (BASH) benefits me as a PHP developer

    - by Ahmad Farouk
    I have been developing websites/applications using the LAMP stack for almost 5 years. Currently I am interested in digging deeper into the Linux OS, specifically Bash, but from a web developer's perspective, not a sysadmin's; I do not intend to administer Linux servers. All I want to know is: does learning shell scripting benefit me as a PHP developer? Does it make me a better, more skilled developer, or is it something irrelevant? Reasons and examples are highly appreciated. Thanks in advance.

    Read the article

  • Benefits and features of different requirements-management systems and tools available?

    - by Gnark
    I am looking for a good comparison of the different professional requirements-management tools available. I am especially interested in the features offered by the different software solutions. In addition to the "obvious" features, I am looking for a professional requirements-management system that supports:
    - multiple languages
    - customizable generation of documentation and history (graphs)
    - search features (e.g. full-text search of comments), ordering, priorities
    - version history
    - bi-directional traceability of changes, artefacts, requirements, changes in requirements, etc.
    Any kind of V-Model XT integration would be a really-nice-to-have feature... Besides that, I'd like to hear any personally motivated recommendations and/or experiences with different requirements-management systems. Any input is highly appreciated. Content consulted: a similar question; a reqm tool with V-Model support; a nice, but too old, paper (pdf); Tools Journal.

    Read the article

  • Benefits of sharing one IP, or preferably assigning a new IP?

    - by Luis Yang
    I'm a bit lost on this topic, so please bear with me. I bought a new VPS running WHM, and it came with just one domain, meaning one IP. All I want to know is: what is the benefit of sharing one IP among the many domains I create for users (bearing in mind the IP belongs to the root domain), or is it a disadvantage? It would also help to know whether it's preferable to create/assign a new IP for each new domain created for users.

    Read the article

  • What benefits does a game design degree have for a hobby game programmer?

    - by sm4
    I am interested in studying game design, not because I want a job in the games industry, but because I am interested in the subject itself. I read the following questions, but they mostly deal with the effects on a career in the game industry: "Should I consider a graduate degree in game development?" and "Game Development Degree vs Computer Science Degree". At first I thought a game development degree could be beneficial, but from the websites of colleges that offer such degrees, I get the impression it's mostly basic programming with examples drawn from games. (This college offers game design degrees, for example.) My question is: can I benefit from such a degree when I already have a degree in Computer Science, I already know programming, I'm already developing a game, and finally, I have this site to help me when I get stuck?

    Read the article

  • What Are The Benefits of a .com Domain Name?

    Dot com, the Internet, and the Web are often used interchangeably by many people, although they are all different things. Dot com has become synonymous with the World Wide Web, although it is just o... [Author: Tanya Smith - Computers and Internet - April 01, 2010]

    Read the article

  • Master-slave vs. peer-to-peer architecture: benefits and problems

    - by Ashok_Ora
    Almost two decades ago, I was a member of a database development team that introduced adaptive locking. Locking, the most popular concurrency-control technique in database systems, is pessimistic. Locking ensures that two or more conflicting operations on the same data item don't "trample" on each other's toes, resulting in data corruption. In a nutshell, here's the issue we were trying to address. In everyday life, traffic lights serve the same purpose. They ensure that traffic flows smoothly and, when everyone follows the rules, there are no accidents at intersections.

    As I mentioned earlier, the problem with typical locking protocols is that they are pessimistic. Regardless of whether there is another conflicting operation in the system or not, you have to hold a lock! Acquiring and releasing locks can be quite expensive, depending on how many objects the transaction touches. Every transaction has to pay this penalty. To use the earlier traffic-light analogy: if you have ever waited at a red light in the middle of nowhere with no one on the road, wondering why you need to wait when there's clearly no danger of a collision, you know what I mean.

    The adaptive locking scheme that we invented was able to minimize the number of locks that a transaction held by detecting whether there were one or more transactions that needed conflicting access; if not, you could get by without holding any lock at all. In many "well-behaved" workloads there are few conflicts, so this optimization is a huge win. If, on the other hand, there are many concurrent, conflicting requests, the algorithm gracefully degrades to the "normal" behavior with minimal cost. We were able to reduce the number of lock requests per TPC-B transaction from 178 requests down to 2! Wow! This is a dramatic improvement in concurrency as well as transaction latency. The lesson from this exercise was that if you can identify the common scenario and optimize for that case, so that only the uncommon scenarios are more expensive, you can make dramatic improvements in performance without sacrificing correctness.

    So how does this relate to the architecture and design of some of the modern NoSQL systems? NoSQL systems can be broadly classified as master-slave sharded or peer-to-peer sharded systems. NoSQL systems with a peer-to-peer architecture have an interesting way of handling changes. Whenever an item is changed, the client (or an intermediary) propagates the changes synchronously or asynchronously to multiple copies (for availability) of the data. Since the change can be propagated asynchronously, there will be some interval of time during which some copies have received the update and others haven't. What happens if someone tries to read the item during this interval? The client in a peer-to-peer system will fetch the same item from multiple copies and compare them to each other. If they're all the same, then every copy that was queried has the same (and up-to-date) value of the data item, so all's good. If not, then the system provides a mechanism to reconcile the discrepancy and to update stale copies.

    So what's the problem with this? There are two major issues. First, IT'S HORRIBLY PESSIMISTIC, because in the common case it is unlikely that the same data item will be updated and read from different locations at around the same time! For every read operation, you have to read from multiple copies. That's pretty expensive, especially if the data are stored in multiple geographically separate locations and network latencies are high. Second, if the copies are not all the same, the application has to reconcile the differences and propagate the correct value to the out-of-date copies. This means that the application program has to handle discrepancies in the different versions of the data item and resolve the issue (which can further add to cost and operation latency).

    Resolving discrepancies is only one part of the problem. What if the same data item was updated independently on two different nodes (copies)? In that case, due to the asynchronous nature of change propagation, you might end up with different versions of the data item in different copies, and the application program also has to resolve conflicts and then propagate the correct value to the copies that are out of date or have incorrect versions. This can get really complicated. My hunch is that there are many peer-to-peer-based applications that don't handle this correctly, and worse, don't even know it. Imagine having hundreds of millions of records in your database: how can you tell whether a particular data item is incorrect or out of date? And what price are you willing to pay for ensuring that the data can be trusted? Multiple network messages per read request? Discrepancy- and conflict-resolution logic in the application, and potentially additional messages? All this overhead, when all you were trying to do was read a data item. Wouldn't it be simpler to avoid this problem in the first place?

    Master-slave architectures like the Oracle NoSQL Database handle this very elegantly. A change to a data item is always sent to the master copy. Consequently, the master copy always has the most current and authoritative version of the data item. The master is also responsible for propagating the change to the other copies (for availability and read scalability). Client drivers are aware of master copies and replicas, and they are also aware of the "currency" of a replica; in other words, each NoSQL Database client knows how stale a replica is. This vastly simplifies the job of the application developer. If the application needs the most current version of the data item, the client driver will automatically route the request to the master copy. If the application is willing to tolerate some staleness (e.g. a version that is no more than 1 second out of date), the client can easily determine which replica (or set of replicas) can satisfy the request and route the request to the most efficient copy. This results in a dramatic simplification of application logic and also minimizes network requests (the driver sends the request to exactly the right replica, not many).

    So, back to my original point. A well-designed and well-architected system minimizes or eliminates unnecessary overhead and avoids pessimistic algorithms wherever possible in order to deliver a highly efficient, high-performance system. If you've ever programmed an Oracle NoSQL Database application, you'll know the difference!
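
    The message-count argument is easy to make concrete with a toy model (plain Python, invented names, not any particular product's protocol):

        import random

        # Each copy of a key holds (version, value); an asynchronous
        # write has reached only two of the three replicas so far.
        replicas = [(2, "new"), (2, "new"), (1, "old")]

        def peer_to_peer_read(copies, read_quorum=3):
            """Fetch several copies, reconcile, repair: many messages."""
            fetched = random.sample(copies, read_quorum)
            winner = max(fetched)                      # highest version wins
            stale = [c for c in fetched if c < winner]
            # A real client would also write the winner back to each
            # stale copy (read repair), hence the extra messages.
            return winner[1], read_quorum + len(stale)

        def master_read(master):
            """Master-slave: the master is authoritative, one fetch."""
            return master[1], 1

        print(peer_to_peer_read(replicas))   # ('new', 4)
        print(master_read(replicas[0]))      # ('new', 1)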

    Read the article

  • What are approaches for analyzing the cost-benefits of a development methodology?

    - by Garrett Hall
    There are many development practices (TDD, continuous integration, cowboy-coding), principles (SOLID, layers of abstraction, KISS), and processes (RUP, Scrum, XP, Waterfall). I have learned you can't follow any of these blindly, but have to consider context and ROI (return on investment). My question is: How do you know whether you are getting a good ROI by following a particular methodology? Metrics, guesstimation, experience? Do analytical methods exist? Or is this just the million-dollar question in software engineering that has no answer?

    Read the article

  • Realize the Benefits of Oracle Fusion Architecture Today; Get on the Path to Oracle Fusion Applications

    Vijay Tella, Vice President and Chief Strategy Officer, Oracle Fusion Middleware, discusses with Cliff the relationship between Oracle Fusion Architecture and Service Oriented Architecture (SOA). They also discuss how Oracle is enabling Fusion Architecture with integration between Oracle Fusion Middleware and the Oracle E-Business Suite, PeopleSoft Enterprise, and JD Edwards Enterprise One suites of applications.

    Read the article

  • Best ways to sell management on the benefits of Open Source Software?

    - by james
    I have worked in a few places where the use of open-source software in the products they produce is strictly forbidden, for various reasons such as: no formal support; a lack of trust in something perceived as "just downloaded from the internet"; and the attitude of "how can it be professional if it's not supported and we don't pay for it?". I'm looking for the best ways to convince/prove to management that things won't fall apart should we use these tools.

    Read the article

  • Benefits of PSD to HTML Service? For Whom and How

    With the advent of the Internet and the e-industry, most companies create websites and hire web development professionals. And it is also true that the process of converting a design into web pages is ... [Author: Manish Rawat - Web Design and Development - June 13, 2010]

    Read the article

  • Multiplication for MVP matrices: Any benefits to doing so within the vertex shader?

    - by Nick Wiggill
    I'd like to understand under what circumstances (if any) it is worth doing the MVP matrix multiplication inside a vertex shader. The vertex shader runs once per vertex, and a single mesh typically contains many vertices, while all MVP inputs remain the same for every vertex in the batch relating to a given draw call (model). Surely, then, you're always better off keeping the multiplications in client code and passing in the whole MVP precalculated as a uniform, avoiding redundant operations across individual vertices?
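
    The arithmetic is easy to demonstrate outside a shader. A minimal numpy sketch (illustrative, not engine code) of precomputing the MVP once per draw call:

        import numpy as np

        def perspective(fov_y, aspect, near, far):
            """A standard OpenGL-style perspective projection matrix."""
            f = 1.0 / np.tan(fov_y / 2.0)
            return np.array([
                [f / aspect, 0.0, 0.0, 0.0],
                [0.0, f, 0.0, 0.0],
                [0.0, 0.0, (far + near) / (near - far), 2 * far * near / (near - far)],
                [0.0, 0.0, -1.0, 0.0],
            ])

        model = np.eye(4)   # from the scene graph
        view = np.eye(4)    # from the camera
        proj = perspective(np.radians(60.0), 16 / 9, 0.1, 100.0)

        # Two 4x4 multiplies per draw call on the CPU...
        mvp = proj @ view @ model   # upload as a single uniform

        # ...instead of P*V*M per vertex in the shader: for a
        # 10,000-vertex mesh that is 2 multiplies vs. 20,000.
        vertex = np.array([0.0, 0.0, -5.0, 1.0])
        print(mvp @ vertex)         # clip-space position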

    Read the article
