Search Results

Search found 16 results on 1 page for 'misnomer'.


  • Is aspect oriented programming a misnomer?

    - by glenviewjeff
    From everything I have learned about "Aspect-Oriented Programming" or "Aspect-Oriented Software Development," labeling it as a programming paradigm or methodology appears to be inaccurate. From what I can tell, it is not a fundamental technique for programming. To nail down what is meant by "paradigm" and "methodology," please refer to the following definitions from the American Heritage Dictionary, and compare how well or poorly "Object-Oriented Programming" applies to each vs. how well AOP fits. Paradigm: a set of assumptions, concepts, values, and practices that constitutes a way of viewing reality for the community that shares them, especially in an intellectual discipline. Methodology: a body of practices, procedures, and rules used by those who work in a discipline or engage in an inquiry; a set of working methods. "Evidence-based medicine" satisfies the definition of a paradigm, but "hysterectomy-based medicine" would be a misnomer because the problem space is too narrow. I am getting the impression that AOP may be misnamed because, based on the "oriented programming" suffix, AOP claims to be both a paradigm and a methodology in the same way "Object-Oriented Programming" is. Both of these terms (paradigm and methodology) indicate a fundamental technique, whereas aspects, as I understand them, are a technology for solving a narrow problem scope, maybe comparable in magnitude to the static variable feature of Java. If it's true that aspects solve a narrow set of problems, and AOP isn't a misnomer, then why shouldn't all programming techniques be given the "oriented programming" suffix, such as "inheritance-oriented programming," "dependency-oriented programming," or "scope-oriented programming?"
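
    To make the "narrow problem scope" point concrete, here is a minimal sketch of a typical aspect: a cross-cutting timing/logging concern factored out of business code. It assumes AspectJ's annotation-style API is available and being woven, and the package name in the pointcut is a placeholder.

        import org.aspectj.lang.ProceedingJoinPoint;
        import org.aspectj.lang.annotation.Around;
        import org.aspectj.lang.annotation.Aspect;

        @Aspect
        public class TimingAspect {

            // Wraps every method in the (hypothetical) com.example.service package.
            @Around("execution(* com.example.service..*(..))")
            public Object time(ProceedingJoinPoint joinPoint) throws Throwable {
                long start = System.nanoTime();
                try {
                    return joinPoint.proceed(); // run the original method
                } finally {
                    System.out.println(joinPoint.getSignature()
                            + " took " + (System.nanoTime() - start) + " ns");
                }
            }
        }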

    Read the article

  • What really happens when we do a BEGIN TRAN in SQL Server 2005?

    - by Misnomer
    Hi all, I came across this issue (or maybe something I just hadn't realized): I did a BEGIN TRAN with some code inside it and never ran a COMMIT or ROLLBACK, because I forgot about it. As a result, many of the database queries, even a simple SELECT TOP 1000, just sat there loading. It has presumably taken locks on the tables, since it would not let me query them, but I wanted to know exactly what happened and what practices should be followed here.
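
    For reference, the discipline this question circles around is pairing every transaction start with a commit or rollback so locks get released. A rough JDBC sketch of the same idea from client code (the connection string, credentials, and table are placeholders, and the Microsoft JDBC driver is assumed to be on the classpath):

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;
        import java.sql.SQLException;

        public class TransactionSketch {
            public static void main(String[] args) throws SQLException {
                try (Connection con = DriverManager.getConnection(
                        "jdbc:sqlserver://localhost;databaseName=Demo", "user", "password")) {
                    con.setAutoCommit(false); // roughly the client-side BEGIN TRAN
                    try (PreparedStatement ps = con.prepareStatement(
                            "UPDATE Accounts SET Balance = Balance - 10 WHERE Id = ?")) {
                        ps.setInt(1, 42);
                        ps.executeUpdate();
                        con.commit();   // releases the locks the update took
                    } catch (SQLException e) {
                        con.rollback(); // never leave the transaction open
                        throw e;
                    }
                }
            }
        }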

    Read the article

  • What are the pros and cons of cascading deletes and updates?

    - by Misnomer
    Hi, maybe this is sort of a naive question, but I think we should always have cascading deletes and updates. Still, I wanted to know: are there problems with them, and when should we not use them? I really can't think of a case right now where you would not want a cascading delete, but I am sure there is one. And what about updates; should they always cascade? So can anyone please list the pros and cons of cascading deletes and updates? Thanks.
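
    For reference, a minimal sketch of what a cascading delete actually does, using an in-memory H2 database purely for illustration (the H2 driver on the classpath and the table names are assumptions): deleting the parent row automatically removes its child rows.

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.ResultSet;
        import java.sql.Statement;

        public class CascadeDeleteSketch {
            public static void main(String[] args) throws Exception {
                try (Connection con = DriverManager.getConnection("jdbc:h2:mem:demo");
                     Statement st = con.createStatement()) {
                    st.execute("CREATE TABLE parent (id INT PRIMARY KEY)");
                    st.execute("CREATE TABLE child (id INT PRIMARY KEY, parent_id INT, "
                             + "FOREIGN KEY (parent_id) REFERENCES parent(id) ON DELETE CASCADE)");
                    st.execute("INSERT INTO parent VALUES (1)");
                    st.execute("INSERT INTO child VALUES (10, 1), (11, 1)");

                    st.execute("DELETE FROM parent WHERE id = 1"); // children are removed too

                    try (ResultSet rs = st.executeQuery("SELECT COUNT(*) FROM child")) {
                        rs.next();
                        System.out.println("child rows left: " + rs.getInt(1)); // prints 0
                    }
                }
            }
        }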

    Read the article

  • Question about architecting an ASP.NET MVC application?

    - by Misnomer
    I have read a little bit about architecture and patterns in order to follow best practices. This is the architecture we have, and I wanted to know what you think of it and any proposed changes or improvements. Presentation Layer: contains all the views, controllers, and any helper classes that the views require; it references the Model Layer and the Business Layer. Business Layer: contains all the business logic, validation, and the security helper classes it uses; it references the Data Access Layer and the Model Layer. Data Access Layer: contains the actual queries (CRUD operations) on the entity classes; it references the Model Layer. Model Layer: contains the Entity Framework model, DTOs, and enums; it does not reference any of the other layers. What are your thoughts on the above architecture? The problem is that I am getting confused by reading about, say, the repository pattern, domain-driven design, and other design patterns. The architecture we have, although not that strict, still seems relatively alright to me and does not really muddle things, but I may be wrong. I would appreciate any help or suggestions here. Thanks!
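
    As a language-agnostic sketch of the dependency direction described above (the class and method names are invented for the illustration), the presentation layer depends on the business layer, the business layer depends on the data access layer, and all of them share the model:

        // Model layer: plain data, references nothing above it.
        class Customer {
            final int id;
            final String name;
            Customer(int id, String name) { this.id = id; this.name = name; }
        }

        // Data access layer: CRUD against the model.
        interface CustomerRepository {
            Customer findById(int id);
        }

        // Business layer: validation and rules; depends only on data access and model.
        class CustomerService {
            private final CustomerRepository repository;
            CustomerService(CustomerRepository repository) { this.repository = repository; }
            Customer getCustomer(int id) {
                if (id <= 0) throw new IllegalArgumentException("id must be positive");
                return repository.findById(id);
            }
        }

        // Presentation layer: a controller calls into the business layer and feeds a view.
        class CustomerController {
            private final CustomerService service;
            CustomerController(CustomerService service) { this.service = service; }
            String show(int id) {
                return "Customer: " + service.getCustomer(id).name;
            }
        }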

    Read the article

  • Best Practices of SEO Web Development

    Custom SEO Web Development is a bit of a misnomer. This is because all web development should be regarded as custom, as there are no two companies in existence that are identical. Here best practices... [Author: Patrick Perkins - Web Design and Development - April 28, 2010]

    Read the article

  • I'm porting my app from iOS to Android: what do I need to know?

    - by kubi
    What pitfalls should I avoid? What Java language paradigms do Objective-C developers consistently misunderstand? I learned to program in Java, but I have worked in nothing but Objective-C for years now. How are the design patterns different between Android and iOS? If you've made the transition yourself, what parts of Android confused you or took you longer to learn than it should have? Is Eclipse the best OS X IDE for Android? For the record, my app is very strongly tied to UIKit and Foundation, so the word "porting" may be a misnomer; I'll actually be completely rewriting it for Android. No code reuse. Also, I'm doing this to learn Android, so I'd rather fail at the port and learn Android than take a shortcut.
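
    For orientation, a minimal sketch of the Android entry point that roughly plays the role of a UIViewController (this only compiles inside an Android project, and the layout resource name is a placeholder):

        import android.app.Activity;
        import android.os.Bundle;
        import android.util.Log;

        public class MainActivity extends Activity {

            // The framework drives the lifecycle; onCreate is the rough analogue of viewDidLoad.
            @Override
            protected void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.main); // layout comes from XML rather than a XIB/storyboard
            }

            @Override
            protected void onPause() {
                super.onPause();
                Log.d("MainActivity", "paused - persist transient state here");
            }
        }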

    Read the article

  • Why have minimal user/handwritten code and do everything in XAML?

    - by Mrk Mnl
    I feel the MVVM community has become overzealous, like the OO programmers in the 90's; it is a misnomer that MVVM is synonymous with "no code-behind". From my closed StackOverflow question: many times I come across posts here about someone trying to do the equivalent in XAML instead of code-behind, their only reason being that they want to keep their code-behind 'clean'. Correct me if I am wrong, but is it not the case that: XAML is compiled too (into BAML) and then at runtime has to be parsed into code anyway; XAML can potentially have more runtime bugs, since they will not be picked up by the compiler at compile time (from incorrect spellings, for example), and these bugs are also harder to debug; and there already is code-behind, like it or not: InitializeComponent(); has to be run, and the .g.i.cs file it lives in contains a bunch of code, though it may be hidden. Is it purely psychological? I suspect it is developers who come from a web background and like markup as opposed to code. EDIT: I don't propose code-behind instead of XAML; use both. I prefer to do my binding in XAML too. I am just against making every effort to avoid writing code-behind, especially in a WPF app; it should be a fusion of both to get the most out of it. UPDATE: It's not even Microsoft's idea; every example on MSDN shows how you can do it both ways.

    Read the article

  • Can VS2010 help me find memory leaks?

    - by Andrew Garrison
    I'm going through the pain right now of finding memory leaks in my application using WinDbg. Luckily, I've found a few good articles that give a very good step-by-step process of how to do it. Still, it is a fairly painful process. Does VS2010 have any built-in features that can ease the burden of finding a memory leak in a Silverlight application? Of course, a memory leak in .NET sounds a bit like a misnomer; what I intend to do is find all objects that are still referencing an object that I believe should be garbage collected. For those that may be interested, here are some good articles on how to get started using WinDbg to find memory leaks in Silverlight: "Finding Memory Leaks In Silverlight With WinDbg" and "Hunting down memory leaks in Silverlight".
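
    The ".NET leak" described here, a live reference chain keeping objects from being collected, looks the same in any garbage-collected runtime. A tiny illustration of the usual culprit, written in Java with invented names: a long-lived registry that is never cleared keeps every listener reachable, so none of them can ever be collected.

        import java.util.ArrayList;
        import java.util.List;

        public class ListenerLeakSketch {

            interface Listener { void onEvent(String event); }

            // Long-lived registry: everything added here stays strongly reachable.
            static final List<Listener> REGISTRY = new ArrayList<>();

            static void openScreen() {
                // The listener (and anything it captures) cannot be garbage collected
                // unless it is also removed from REGISTRY when the screen goes away.
                REGISTRY.add(event -> System.out.println("screen got: " + event));
            }

            public static void main(String[] args) {
                for (int i = 0; i < 10_000; i++) {
                    openScreen(); // simulate opening screens without ever unregistering
                }
                System.out.println("listeners still reachable: " + REGISTRY.size());
            }
        }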

    Read the article

  • Comparison of Extreme Programming (XP) to Traditional Programming Methodologies

    The comparison of extreme programming (XP) to traditional programming methodologies finds similarities with the historic biblical battle between David and Goliath. Goliath of Gath was a Philistine warrior renowned for his size, strength, and battle-tested skills. Much like Goliath, traditional methodologies are known to be cumbersome due to large amounts of documentation, and time-consuming due to the time needed to gather all the information. However, traditional methodologies have been widely accepted by the software development community for years because of their attention to detail regarding project development and maintenance. David was a male Israelite teenager who was small, fearless, and untrained in any type of formal combat. In a similar fashion, extreme programming favors code over documentation, so that time is spent developing the project rather than on cumbersome documentation. Typically, project managers and developers are fearless when they start this type of project because they usually start with little to no documentation, and they expect to be given changes to implement at the start of every new project iteration. Because of the lack of need or desire for documentation in extreme programming projects, they can appear to have no formal process involved in their development. This is a misnomer: because of the constant development iterations and interaction with clients and users, the project quickly takes form, since each iteration allows it to be refined as the customer's needs and desires change. Ravikant Agarwal and David Umphress documented a new approach to extreme programming, called personal extreme programming (PXP), at the ACM Southeast Regional Conference in 2008. PXP is the application of extreme programming's core concepts in a single-developer team environment. PXP focuses on how the main concepts and practices of extreme programming, which typically center on a group environment, can be altered to benefit a single-developer environment. Suzanne Smith and Sara Stoecklin are both advocates of extreme programming, according to the Journal of Computing Sciences in Colleges, and in fact they feel that it should receive more attention in introductory programming classes to allow students to better understand the software development process. Reasons why extreme programming is a good thing: Developers get to do more of what they love: develop. Traditional software development methodologies tend to place extra demands on a project by requiring all requirements and project specifications to be fully defined prior to the start of the implementation phase. A standard 40-hour work week: limiting the work week to 40 hours prevents developers from getting burned out on projects.

    Read the article

  • The term "interface" in C++

    - by Flexo
    Java makes a clear distinction between class and interface. (I believe C# does also, but I have no experience with it.) When writing C++, however, there is no language-enforced distinction between class and interface. Consequently I've always viewed interface as a workaround for the lack of multiple inheritance in Java. Making such a distinction feels arbitrary and meaningless in C++. I've always tended to go with the "write things in the most obvious way" approach, so if in C++ I've got what might be called an interface in Java, e.g.:

        class Foo {
        public:
            virtual void doStuff() = 0;
            virtual ~Foo() = 0;
        };

    and I then decided that most implementers of Foo wanted to share some common functionality, I would probably write:

        class Foo {
        public:
            virtual void doStuff() = 0;
            virtual ~Foo() {}
        protected:
            // If it needs this to do its thing:
            int internalHelperThing(int);

            // Or if it doesn't need the this pointer:
            static int someOtherHelper(int);
        };

    which then makes this no longer an interface in the Java sense. Instead, C++ has two important concepts related to the same underlying inheritance problem: (1) virtual inheritance, and (2) classes with no member variables can occupy no extra space when used as a base ("base class subobjects may have zero size"). Of those I try to avoid #1 wherever possible; it's rare to encounter a scenario where that genuinely is the "cleanest" design. #2 is, however, a subtle but important difference between my understanding of the term "interface" and the C++ language features. As a result of this I currently (almost) never refer to things as "interfaces" in C++ and talk instead in terms of base classes and their sizes. I would say that in the context of C++, "interface" is a misnomer. It has come to my attention, though, that not many people make such a distinction. Do I stand to lose anything by allowing (e.g. protected) non-virtual functions to exist within an "interface" in C++? (My feeling is exactly the opposite: it is a more natural location for shared code.) Is the term "interface" meaningful in C++? Does it imply only pure virtual, or would it be fair to still call a C++ class with no member variables an interface?
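
    For comparison, a quick sketch of the Java-side distinction the question leans on (names invented for the illustration): the classic Java interface carries no implementation, so shared helpers push you toward an abstract base class, mirroring the second C++ version above.

        // The "interface in the Java sense": pure contract, no state, no implementation.
        interface Foo {
            void doStuff();
        }

        // Once implementers want common functionality, it moves into an abstract class.
        abstract class AbstractFoo implements Foo {
            // Shared helper that needs the instance.
            protected int internalHelperThing(int x) {
                return x + 1;
            }

            // Shared helper that does not need the instance.
            protected static int someOtherHelper(int x) {
                return x * 2;
            }
        }

        class ConcreteFoo extends AbstractFoo {
            @Override
            public void doStuff() {
                System.out.println(internalHelperThing(41));
            }
        }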

    Read the article

  • Determine if the "yes" is necessary when doing an SCP

    - by glowcoder
    I'm writing a Groovy script to do an SCP. Note that I haven't run it yet, because the rest of it isn't finished. Now, if you're doing an scp for the first time, you have to accept the host's fingerprint. On future runs, you don't. My current solution, since I get three tries for the password and really only need one (it's not like the script will mistype the password... if it's wrong, it's wrong!), is to pipe in "yes" as the first password attempt. This way, it will accept the fingerprint if necessary and use the correct password as the first attempt. If it didn't need it, it puts "yes" as the first attempt and the correct password as the second. However, I feel this is not a very robust solution, and I know if I were a customer I would not like seeing "incorrect password" in my output. Especially if it fails for another reason, it would be an incredibly annoying misnomer. What follows is the appropriate section of the script in question. I am open to any tactics that involve using scp (or accomplishing the file transfer) in a different way. I just want to get the job done. I'm even open to shell scripting, although I'm not the best at it.

        def command = []
        command.add('scp')
        command.add(srcusername + '@' + srcrepo + ':' + srcpath)
        command.add(tarusername + '@' + tarrepo + ':' + tarpath)

        def process = command.execute()
        process.consumeOutput(out)
        process << "yes" << LS << tarpassword << LS
        process << "yes" << LS << srcpassword << LS
        process.waitFor()

    Thanks so much, glowcoder

    Read the article

  • What was SPX from the IPX/SPX stack ever used for?

    - by Kumba
    Been trying to learn about older networking protocols a bit, and figured that I would start with IPX/SPX. So I built two MS-DOS virtual machines in VirtualBox, and got IPX communications working (after much trial and error). The idea being to get several old DOS games to run, link up to a multiplayer match, interact with each game window, and capture the traffic using Wireshark from the host machine. From this, I got Quake, Masters of Orion 2, and MechWarrior 2 to communicate back and forth. Doom, Doom2, Duke3d, Warcraft, and several others either buggered up under the VM or just couldn't see the other VM on the IPX network. What did I discover? None of the working games used SPX. Not even Microsoft's NET DIAG used SPX. They all ran ONLY on top of IPX. I can't even find SPX examples or use-cases of SPX traffic running over IEEE 802.3 Ethernet II framing. I did find references that it was in abundant use on token ring, but that's it. Yet any IPX-aware application that I've hunted down so far usually advertises itself as "IPX/SPX", which seems to be a bit of a misnomer, since it doesn't seem to use SPX. So what was SPX used for? Any DOS applications out there that use it which will run under my VM setup? Edit: I am aware that IPX is to SPX as IP is to TCP (layer 3 to layer 4), so I expected to see an SPX layer underneath the IPX layer in Wireshark when I ran my tests.

    Read the article

  • Generalized strategy for file server virtualization in Xenserver

    - by Jamie
    I'm not shopping so much as looking for some guidance on good-idea/bad-idea strategies. I'm sure I'm not in the "best practices" budget range. Currently, I have 3 Dell PowerEdges running XenServer in a pool. Each node has an Ubuntu file server, serving about 6TB. One is the primary; the other two are rsync targets for backup. The 6TB is stored on their respective local storage disks as an LVM of 3x2TB virtual disks. The file server VM disks are also stored on the node-local disks. Each node also runs a smattering of lightweight VMs for web, development, Windows VMs, and stuff like that. Several of those VMs' disks reside on a QNAP NAS to play with live migration. These VMs are often clients of the primary file server (all the mail, web content, and user files are stored on the file server, not on the mail, web, and Samba VMs). This all works fine, and is a major step up for us. The downside is that the QNAP is a single point of failure, and the only thing the QNAP is doing is serving migratable VM images, not client data. Someday the PowerEdge local arrays will be full, and we will have to reinvent ourselves again. Is it wise to have heavyweight VMs (like the file server, with its 6+ TB of disks) on a SAN or NAS? Would it be better to keep the VMs lightweight, keep the VM images on a SAN or NAS, and have two or more NAS boxes act as NFS-serving file appliances? A hybrid SAN/NAS that can serve iSCSI for images and NFS for the client VMs? It seems like live migration would be a misnomer if you have to migrate a file server with its entire 6+ TB disk. I recognize there are plenty of ways to skin the cat. We've already skinned it a few ways. What makes sense?

    Read the article

  • Visual Studio 2010 Best Practices

    - by Etienne Tremblay
    I'd like to thank Packt for providing me with a review version of the Visual Studio 2010 Best Practices eBook. In fairness, I also know the author, Peter, having seen him speak at DevTeach on many occasions. I started by looking at the table of contents to see what this book was about; knowing that "best practices" is a real misnomer, I wanted to see what they were. I really like the fact that he starts the book by saying they are not really best practices but rather recommended practices. As a Team Foundation Server user, I found that chapter 2 was more for the open source crowd, and I really skimmed it. The portion on branching was well documented; although I'm not a fan of the testing branch myself, the rest was right on. The section on merging remote changes (the "bring the outside to you" paradigm) is really important and was touched on. Chapter 3 has good, solid practices on low-level constructs like generics and exceptions. Chapter 4 dives into architectural practices like decoupling, distributed architecture, and data-based architecture. DTOs and ORMs are touched on briefly, as is NoSQL. Chapter 5 is about deployment and is really a great primer on all the "packaging" technologies, like Visual Studio Setup and Deployment (deprecated in 2012), ClickOnce, and WiX, the major player outside of commercial solutions. There is a nice section on how to move from VSSD to WiX; this is going to be important in the coming years, given that VS 2012 doesn't support VSSD. In chapter 6 we dive into automated testing practices, including test coverage, mocking, TDD, SpecDD, and continuous testing. Peter covers all those concepts really nicely, albeit succinctly; for a book on recommended practices, I find this is really good. I really enjoyed chapter 7, which gave me a lot of great tips to enhance my Visual Studio "experience". The tips on organizing projects were good. Also, even though I knew about configurations, I like that he put that in there so you can move all your settings to another machine; a lot of people don't know about that. Quick Find and ReSharper are also briefly covered. He touches on macros (deprecated in 2012). Finally, he touches on continuous integration, a very important concept in today's ALM landscape. Chapter 8 is all about parallelization: threads, async, division of labor, reactive extensions. All those concepts are touched on, and again generalized approaches to those modern problems are given. Chapter 9 goes into distributed apps, the most used and accepted practice in the industry for .NET projects. The chapter tackles concepts like scalability, messaging, and the cloud (the flavor of the month in distributed apps, although I think this one will stick ;-)). He also looks at protocols (TCP/UDP) and how to debug distributed apps, and he touches on logging and health monitoring. Chapter 10 tackles recommended practices for web services, starting with implementing WCF services, which goes into all sorts of goodness like how to host in IIS or self-host, how to manually test WCF services, and a section on authentication and authorization. ASP.NET Web services are also touched on in that chapter. All in all a good read, with nice tips and accepted practices. I like the conciseness of the subjects; Peter touches on a lot of things in this book and uses a lot of the current technology flavors to explain the concepts. Cheers, ET

    Read the article

  • Towards an F# .NET Reflector add-in

    - by CliveT
    When I had the opportunity to spend some time during Red Gate's recent "down tools" week on a project of my choice, the obvious project was an F# add-in for Reflector. To be honest, this was a bit of a misnomer, as the amount of time in the designated week for coding was really less than three days, so it was always unlikely that very much progress would be made in such a small amount of time (and that certainly proved to be the case), but I did learn some things from the experiment. Like lots of problems, one useful technique is to take examples, get them to work, and then generalise to get something that works across the board. Unfortunately, I didn't have enough time to do the last stage. The obvious first step is to take a few function definitions, starting with the obvious hello world, moving on to a non-recursive function and finishing with the ubiquitous recursive Fibonacci function.

        let rec printMessage message =
            printfn message

        let foo x =
            (x + 1)

        let rec fib x =
            if (x >= 2) then (fib (x - 1) + fib (x - 2)) else 1

    The major problem in decompiling these simple functions is that Reflector has an in-memory object model that is designed to support object-oriented languages. In particular, it has a return statement that allows function bodies to finish early. I used some of the built-in functionality to take the IL and produce an in-memory object model for the language, but then needed to write a transformer to push the return statements to the top of the tree to make it easy to render the code into a functional language. This tree transform works in some scenarios, but not in others, where we simply regenerate code that looks more like CPS style. The next thing to get working was library-level bindings of values where these values are calculated at runtime.

        let x = [1 ; 2 ; 3 ; 4]
        let y = List.map (fun x -> foo x) x

    The way that this is translated into a set of classes for the underlying platform means that the code needs to follow references around, from the property exposing the calculated value to the class in which the code for generating the value is embedded. One of the strongest selling points of functional languages is algebraic data types, which allow definitions via standard mathematical-style inductive definitions across the union cases.

        type Foo =
            | Something of int
            | Nothing

        type 'a Foo2 =
            | Something2 of 'a
            | Nothing2

    Such a definition is compiled into a number of classes for the cases of the union, which all inherit from a class representing the type itself. It wasn't too hard to get such a decompilation happening in the cases I tried. What did I learn from this? Firstly, that there are various bits of functionality inside Reflector that it would be useful for us to allow add-in writers to access. In particular, there are various implementations of the Visitor pattern which implement algorithms such as calculating the number of references for particular variables, and which perform various substitutions that could be more generally useful to add-in writers. I hope to do something about this at some point in the future. Secondly, when you transform a functional language into something that runs on top of an object-based platform, you lose some fidelity in the representation. The F# compiler leaves attributes in place so that tools can tell which classes represent classes from the source program and which are there for purposes of the implementation, allowing the decompiler to regenerate these constructs again.
    However, decompilation technology is a long way from being able to take unannotated IL and transform it into a program in a different language. For a simple function definition, like Fibonacci, I could write a simple static function and have it come out in F# as the same function, but it would be practically impossible to take a mass of class definitions and have a decompiler translate it automatically into an F# algebraic data type. What have we got out of this? Some data on the feasibility of implementing an F# decompiler inside Reflector, though it's hard at the moment to say how long this would take to do. The work we did is included in the 6.5 EAP for Reflector that you can get from the EAP forum. All things considered, though, it was a useful way to gain more familiarity with the process of writing an add-in and to understand the difficulties other add-in authors might experience. If you'd like to check out a video of Down Tools Week, click here.

    Read the article

  • Need help with implementation of the jQuery LiveUpdate routine

    - by miCRoSCoPiC_eaRthLinG
    Hey all, Has anyone worked with the LiveUpdate function (may be a bit of a misnomer) found on this page? It's not really a live search/update function, but a quick filtering mechanism for a pre-existing list, based on the pattern you enter in a text field. For easier reference, I'm pasting the entire function in here:

        jQuery.fn.liveUpdate = function(list){
            list = jQuery(list);

            if ( list.length ) {
                var rows = list.children('li'),
                    cache = rows.map(function(){
                        return this.innerHTML.toLowerCase();
                    });

                this
                    .keyup(filter).keyup()
                    .parents('form').submit(function(){
                        return false;
                    });
            }

            return this;

            function filter(){
                var term = jQuery.trim( jQuery(this).val().toLowerCase() ), scores = [];

                if ( !term ) {
                    rows.show();
                } else {
                    rows.hide();

                    cache.each(function(i){
                        var score = this.score(term);
                        if (score > 0) { scores.push([score, i]); }
                    });

                    jQuery.each(scores.sort(function(a, b){return b[0] - a[0];}), function(){
                        jQuery(rows[ this[1] ]).show();
                    });
                }
            }
        };

    I have this list, with members as the ID. And a text field with say, qs as ID. I tried binding the function in the following manner:

        $( '#qs' ).liveUpdate( '#members' );

    But when I do this, the function is called only ONCE when the page is loaded (I put in some console.logs in the function) but never after when text is keyed into the text field. I also tried calling the routine from the keyup() function of qs.

        $( '#qs' ).keyup( function() {
            $( this ).liveUpdate( '#members' );
        });

    This ends up going into infinite loops (almost) and halting with "Too much recursion" errors. So can anyone please shed some light on how I am supposed to actually implement this function? Also while you are at it, can someone kindly explain this line to me:

        var score = this.score(term);

    What I want to know is where this member method score() is coming from? I didn't find any such method built into JS or jQuery. Thanks for all the help, m^e

    Read the article
