Search Results

Search found 89673 results on 3587 pages for 'code conversion'.


  • GLSL: is it possible to offset vertices based on heightmap colour?

    - by Rob
    I am attempting to generate some terrain based upon a heightmap. I have generated a 32 x 32 grid and a corresponding heightmap. In my vertex shader I am trying to offset each vertex along the Y axis based upon the colour of the heightmap, with white vertices being higher than black ones.

        //Vertex Shader Code
        #version 330
        uniform mat4 modelMatrix;
        uniform mat4 viewMatrix;
        uniform mat4 projectionMatrix;
        uniform sampler2D heightmap;
        layout (location=0) in vec4 vertexPos;
        layout (location=1) in vec4 vertexColour;
        layout (location=3) in vec2 vertexTextureCoord;
        layout (location=4) in float offset;
        out vec4 fragCol;
        out vec4 fragPos;
        out vec2 fragTex;
        void main()
        {
            // Retrieve the current pixel's colour
            vec4 hmColour = texture(heightmap, vertexTextureCoord);
            // Offset the y position by the value of the current texel's colour value?
            vec4 offset = vec4(vertexPos.x, vertexPos.y + hmColour.r, vertexPos.z, 1.0);
            // Final Position
            gl_Position = projectionMatrix * viewMatrix * modelMatrix * offset;
            // Data sent to Fragment Shader.
            fragCol = vertexColour;
            fragPos = vertexPos;
            fragTex = vertexTextureCoord;
        }

    However, the code I have produced only creates a flat grid; none of the vertices end up higher than any others.
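    One likely cause (an assumption; the question does not confirm it) is texture sampling in the vertex stage: if the heightmap's minification filter expects mipmaps that were never generated, the texture is incomplete and every fetch returns black, so hmColour.r is always 0. A minimal sketch that samples an explicit LOD and drops the shadowed 'offset' attribute (heightScale is an illustrative uniform, not from the original):

        #version 330
        uniform mat4 modelMatrix;
        uniform mat4 viewMatrix;
        uniform mat4 projectionMatrix;
        uniform sampler2D heightmap;
        uniform float heightScale; // assumed scale factor, not in the original code
        layout (location=0) in vec4 vertexPos;
        layout (location=3) in vec2 vertexTextureCoord;
        void main()
        {
            // textureLod avoids relying on implicit derivatives, which are
            // undefined outside the fragment stage
            float height = textureLod(heightmap, vertexTextureCoord, 0.0).r;
            vec4 displaced = vec4(vertexPos.x, vertexPos.y + height * heightScale, vertexPos.z, 1.0);
            gl_Position = projectionMatrix * viewMatrix * modelMatrix * displaced;
        }

    On the application side it is also worth checking that the min filter is GL_LINEAR (or mipmaps are generated) and that the sampler uniform is bound to the texture unit the heightmap is actually attached to.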

    Read the article

  • Talking JavaOne with Rock Star Charles Nutter

    - by Janice J. Heiss
    JavaOne Rock Stars, conceived in 2005, are the top-rated speakers from the JavaOne Conference. They are awarded by their peers, who through conference surveys recognize them for their outstanding sessions and speaking ability. Over the years many of the world’s leading Java developers have been so recognized. We spoke with distinguished Rock Star Charles Nutter. A JRuby Update from Charles Nutter: Charles Nutter of Red Hat is well known as a lead developer of JRuby, an implementation of Ruby written in Java that is tightly integrated with Java to allow for the embedding of the interpreter into any Java application, with full two-way access between the Java and the Ruby code. Nutter is giving the following sessions at this year’s JavaOne: CON7257 – “JVM Bytecode for Dummies (and the Rest of Us Too)” CON7284 – “Implementing Ruby: The Long, Hard Road” CON7263 – “JVM JIT for Dummies” BOF6682 – “I’ve Got 99 Languages, but Java Ain’t One” CON6575 – “Polyglot for Dummies” (Both with Thomas Enebo) I asked Nutter to give us the latest on JRuby. “JRuby seems to have hit a tipping point this past year,” he explained, “moving from ‘just another Ruby implementation’ to ‘the best Ruby implementation for X,’ where X may be performance, scaling, big data, stability, reliability, security, and a number of other features important for today's applications. We're currently wrapping up JRuby 1.7, which improves support for Ruby 1.9 APIs, solves a number of user issues and concurrency challenges, and utilizes invokedynamic to outperform all other Ruby implementations by a wide margin. JRuby just gets better and better.” When asked what he thought about the rapid growth of alternative languages for the JVM, he replied, “I'm very intrigued by efforts to bring a high-performance JavaScript runtime to the JVM. There's really no reason the JVM couldn't be the fastest platform for running JavaScript with the right implementation, and I'm excited to see that happen.” And what is Nutter working on currently? “Aside from JRuby 1.7 wrap-up,” he explained, “I'm helping the Hotspot developers investigate invokedynamic performance issues and test-driving their new invokedynamic code in Java 8. I'm also starting to explore ways to improve the general state of dynamic languages on the JVM using JRuby as a guide, and to help the JVM become a better platform for all kinds of languages.” Originally published on blogs.oracle.com/javaone.

    Read the article

  • Is paper indispensable in a programmer's everyday work?

    - by rwong
    As a programmer who works in a company whose vision is to make the paperless office possible, is there any way I can work effectively while using less paper? I can list at least several kinds of paper I use quite often: a paper notebook, on which I do most of the pre-coding design work and ideas; books; temporary printouts of source code, though not so often (in color, with a 6 point font at 600 DPI); sticky notes, to remind myself of things that should be taken care of within a few days. On the other hand, I also use a wiki and an office text editor. Once in a while I use a diagramming program to make a few flowcharts. Deeper questions: Is there a relationship between paper use and productivity? How can programmers help save the trees? Is paperless software development fundamentally different from the paperless office? Related questions: Do you ever write code with pen and paper, and should we do it more often? What physical tools do you find useful to work as a programmer? What things are essential on a programmer's desk? Stuff every programmer needs while working. Additional info, if it helps: everyone has dual monitors. We have decent project management and issue tracking software (both web-based). Please be constructive. In particular, please address your answer to peer programmers who wish to be flexible and are willing to change their working style in order to become more productive as well as to meet their own personal values. Edited: I removed the company's view because it appeared to be too flamebait. If you need to see my original words, go to the edit history. Deleted: Doxygen and whiteboard. Reason: disregarding my personal experience with these great tools, we never had to print out anything as a consequence of using/not using them. To see my original words, go to the edit history.

    Read the article

  • Version control for game development - issues and solutions?

    - by Cyclops
    There are a lot of version control systems available, including open-source ones such as Subversion, Git, and Mercurial, plus commercial ones such as Perforce. How well do they support the process of game development? What are the issues using VCS with regard to non-text files (binary files), large projects, and so on? What are the solutions to these problems, if any? To organize the answers, let's try one answer per package. Update each package/answer with your results. Also, please list some brief details in your answer about whether your VCS is free or commercial, distributed versus centralized, etc. Update: Found a nice article comparing two of the VCS below - apparently, Git is MacGyver and Mercurial is Bond. Well, I'm glad that's settled... And the author has a nice quote at the end: It’s OK to proselytize to those who have not switched to a distributed VCS yet, but trying to convert a Git user to Mercurial (or vice-versa) is a waste of everyone’s time and energy. Especially since Git and Mercurial's real enemy is Subversion. Dang, it's a code-eat-code world out there in FOSS-land...

    Read the article

  • Should mock objects for tests be created at a high or low level

    - by Danack
    When creating unit tests, what is the best way to create mock objects that provide data to the objects under test? Should they be created at a 'high level' and intercept the calls as soon as possible, or should they be created at a 'low level', so that as much of the real code as possible is still called? E.g. I'm writing a test for some code that requires a NoteMapper object that allows Notes to be loaded from the DB.

        class NoteMapper {
            function getNote($sqlQueryFactory, $noteID) {
                // Create an SQL query from $sqlQueryFactory
                // Run that SQL
                // if null
                //     return null
                // else
                //     return new Note($dataFromSQLQuery)
            }
        }

    I could mock this object at a high level by creating a mock NoteMapper object, so that there are no calls to the SQL at all, e.g.

        class MockNoteMapper {
            function getNote($sqlQueryFactory, $noteID) {
                // $mockData = {'Test Note title', 'Test note text'}
                // return new Note($mockData);
            }
        }

    Or I could do it at a very low level, by creating a MockSQLQueryFactory that, instead of actually querying the database, just provides mock data back, and passing that to the current NoteMapper object. It seems that creating mocks at a high level would be easier in the short term, but that in the long term doing it at a low level would be more powerful and possibly allow more automation of tests, e.g. by recording data in and out of a DB and then replaying that data for tests. Is there a recommended way of creating mocks? Are there any hard and fast rules about which are better, or should they both be used where appropriate?
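    A minimal sketch of the 'low level' variant in PHP, assuming the NoteMapper above is used unchanged; the fake factory class, its query method, and the row shape are illustrative guesses rather than the poster's real API:

        <?php
        // A fake query factory that ignores the SQL and returns canned rows,
        // so the real NoteMapper logic still runs end to end.
        class FakeSqlQueryFactory {
            private $rows;
            public function __construct(array $rows) { $this->rows = $rows; }
            public function query($sql) { return $this->rows; }
        }

        $factory = new FakeSqlQueryFactory(array(
            array('title' => 'Test Note title', 'text' => 'Test note text'),
        ));
        $mapper = new NoteMapper();
        $note = $mapper->getNote($factory, 42); // real mapper logic, fake data

    The trade-off is visible even in this sketch: the fake factory exercises the real mapping code, but it also has to mimic whatever row shape the real SQL layer produces, which is exactly the coupling the high-level mock avoids.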

    Read the article

  • Porting Ruby/NCurses Rogue-Like to .NET and FlatRedBall

    - by ashes999
    I created an awesome rogue-like game in Ruby. For the GUI, I used NCurses. Since I'm using FlatRedBall as my engine of choice for Silverlight game development, I want to port this game over. What is the best way to do this efficiently, and what are the pitfalls I should expect? For example, Ruby is object-oriented, like C#, and I should be able to just convert (rewrite) classes one by one. However, I will run into issues like: The NCurses API. I may need to create my own notion of a "Window", or else rewrite the GUI code. It's one class, but it's BIG. Mix-ins. These are essentially aspect-oriented development. There are a couple of solutions in .NET, like dynamic classes. What else? Also, I should mention that I want to create a C# application out of this. As much as possible, I'll dump reusable and helper code, algorithms, etc. into separate projects and generate reusable DLLs.
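    On the mix-in point, one common C# approximation (a sketch; the interface and names here are purely illustrative, not from the Ruby code) is an interface plus extension methods, which gives shared behaviour without multiple inheritance:

        using System;

        interface IDamageable
        {
            int Health { get; set; }
        }

        static class DamageableMixin
        {
            // Behaviour "mixed in" to any IDamageable implementor.
            public static void TakeDamage(this IDamageable self, int amount)
            {
                self.Health = Math.Max(0, self.Health - amount);
            }
        }

        class Monster : IDamageable
        {
            public int Health { get; set; }
        }

        class Program
        {
            static void Main()
            {
                var m = new Monster { Health = 10 };
                m.TakeDamage(3);             // extension method supplies the shared behaviour
                Console.WriteLine(m.Health); // prints 7
            }
        }

    This covers stateless mix-ins well; mix-ins that carry their own state still need composition or the dynamic approaches the question mentions.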

    Read the article

  • Struggling to get set up with JOGL2.0

    - by thecoshman
    I guess that Game Dev is a more sensible place for my problem than SO. I did have JOGL 1.1 set up and working, but I soon discovered that it did not support the latest OpenGL, so I started work on upgrading to JOGL 2.0; it's not gone too well. Firstly, is it worth me trying to get JOGL to work, or should I just move over to LWJGL? I am fairly comfortable with OpenGL (via C++), and from what I did get working with JOGL 1.1, I seem to be OK adapting to it. Assuming that I stick with JOGL, am I foolish for trying to use JOGL 2.0? From what I can gather, JOGL 2.0 is still in beta, but I am willing to go with it as I want to make use of the latest OpenGL I can. I have been using the Eclipse IDE and have set up a user library for JOGL (here is a screenshot of the configuration), and I have added this user library to my own Eclipse project. The system variable %JOGL_HOME% points to "C:\Users\edacosh\Downloads\JOGL2.0", so that should work fine. Now, the problem I'm actually having: when I try to run my code, on the line

        GLProfile glp = GLProfile.getDefault();

    the code stops with the following message...

        Exception in thread "main" java.lang.NoClassDefFoundError: com/jogamp/common/jvm/JVMUtil
            at javax.media.opengl.GLProfile.<clinit>(GLProfile.java:1145)
            at DiCE.DiCE.<init>(DiCE.java:33)
            at App.<init>(App.java:17)
            at App.main(App.java:12)
        Caused by: java.lang.ClassNotFoundException: com.jogamp.common.jvm.JVMUtil
            at java.net.URLClassLoader$1.run(Unknown Source)
            at java.security.AccessController.doPrivileged(Native Method)
            at java.net.URLClassLoader.findClass(Unknown Source)
            at java.lang.ClassLoader.loadClass(Unknown Source)
            at sun.misc.Launcher$AppClassLoader.loadClass(Unknown Source)
            at java.lang.ClassLoader.loadClass(Unknown Source)
            ... 4 more

    I have also set my project to ensure that it is using jre6 along with jdk6, as I was having some issues. I hope I have given you enough information to be able to help me. It probably doesn't help that I am rather new to Java, having been developing in C++ for ages. Thanks
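    For what it's worth, the missing class com.jogamp.common.jvm.JVMUtil lives in the GlueGen runtime, on which JOGL 2 depends, so this stack trace usually means gluegen-rt.jar is absent from the classpath. A hedged sketch of running with both jars present (the jar names and paths follow the usual JOGL 2 layout and may differ per build):

        javac -cp "%JOGL_HOME%\jogl-all.jar;%JOGL_HOME%\gluegen-rt.jar" App.java
        java -cp ".;%JOGL_HOME%\jogl-all.jar;%JOGL_HOME%\gluegen-rt.jar" App

    In Eclipse terms, that means adding gluegen-rt.jar (and its native-library location) to the same user library as the JOGL jars.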

    Read the article

  • Portal and Content - Components, part 3 – Applied Customization Framework (4 of 7)

    - by Stefan Krantz
    Have you ever been challenged with the situation where your work task asks you to implement functionality in the WebCenter Portal, and you browse through the Resource Catalog (Business Dictionary) and find the functionality you need? However, when you get started there are small shortcomings, and you ask yourself: how can I re-use what is there out of the box? What code would I need to produce similar functions while including my new requirements? Must I write a new taskflow? The answers to the above questions are often simply: you can do a taskflow customization of out-of-the-box taskflows. In this post I will help you understand how to do such a customization. It is best described as a 4-step process (see the image flow in the original article for an illustration). Just to clarify a few naming confusions that might occur when going through the above process: Customization Role is a function within JDeveloper that will allow you to implement view and flow customizations to existing taskflows. WebCenter Portal – Spaces Taskflow Customization Framework: this technology scope does not only refer to WebCenter Spaces; it also includes WebCenter Portal/Framework. A taskflow customization does not overwrite or replace any code; it just creates an additional tip view of the taskflow in the MDS for the current application (WebCenter Portal or WebCenter Spaces). To sum up this simple procedure, I would also like to help you find your way around the main topic for this post series. This post series focuses primarily on Content integration with WebCenter Portal, so where can you find content-related taskflows in the WebCenter Libraries? The list below mentions some useful locations for taskflows and each taskflow's page fragments. Library Reference - WebCenter Document Library Service View. Content Presenter. Path: oracle.webcenter.doclib.view.jsf.taskflows.presenter. Taskflow: contentPresenter.xml - the Content Presenter taskflow. Taskflow: contentPresenterWizard.xml - the publishing wizard to select content, select a template, and preview, including contribution. Document Manager. Path: oracle.webcenter.doclib.view.jsf.taskflows.docManager. Taskflow: documentManager.xml - the Document Manager taskflow, which includes references to document management features including browsing, download, uploading and viewing. For more information on taskflow customizations please see the following documentation: http://docs.oracle.com/cd/E23943_01/webcenter.1111/e10148/jpsdg_taskflows.htm#BACIEGJD

    Read the article

  • libc-bin errors when trying to install php

    - by jonney
    I am trying to update and install PHP on my Ubuntu Server 12.04 using the commands below:

        apt-get upgrade php
        apt-get install php5-curl php5-gd php5-mysql php5-pgsql

    However, I receive this error every time:

        gzip: stdout: No space left on device
        E: mkinitramfs failure cpio 141 gzip 1
        update-initramfs: failed for /boot/initrd.img-3.2.0-34-generic with 1.
        run-parts: /etc/kernel/postinst.d/initramfs-tools exited with return code 1
        Failed to process /etc/kernel/postinst.d at /var/lib/dpkg/info/linux-image-3.2.0-34-generic.postinst line 1010.
        dpkg: error processing linux-image-3.2.0-34-generic (--configure):
        subprocess installed post-installation script returned error exit status 2
        dpkg: dependency problems prevent configuration of linux-image-server:
        linux-image-server depends on linux-image-3.2.0-33-generic; however:
        Package linux-image-3.2.0-33-generic is not configured yet.
        dpkg: error processing linux-image-server (--configure):
        dependency problems - leaving unconfigured
        dpkg: dependency problems prevent configuration of linux-server:
        linux-server depends on linux-image-server (= 3.2.0.33.36); however:
        Package linux-image-server is not configured yet.
        dpkg: error processing linux-server (--configure):
        dependency problems - leaving unconfigured
        Setting up libpq5 (9.1.10-0ubuntu12.04) ...
        No apport report written because the error message indicates it's a follow-up error from a previous failure.
        No apport report written because MaxReports has already been reached
        Setting up php5-curl (5.3.10-1ubuntu3.8) ...
        Setting up php5-pgsql (5.3.10-1ubuntu3.8) ...
        Processing triggers for initramfs-tools ...
        update-initramfs: Generating /boot/initrd.img-3.2.0-32-generic
        gzip: stdout: No space left on device
        E: mkinitramfs failure cpio 141 gzip 1
        update-initramfs: failed for /boot/initrd.img-3.2.0-32-generic with 1.
        dpkg: error processing initramfs-tools (--configure):
        subprocess installed post-installation script returned error exit status 1
        No apport report written because MaxReports has already been reached
        Processing triggers for libc-bin ...
        ldconfig deferred processing now taking place
        Errors were encountered while processing:
        linux-image-3.2.0-33-generic
        linux-image-3.2.0-34-generic
        linux-image-server
        linux-server
        initramfs-tools
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    Not sure what's wrong and why it can't process the linux-image files?
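    The "gzip: stdout: No space left on device" lines while writing /boot/initrd.img-* point at a full /boot partition rather than anything PHP-specific. A common remedy sketch (the kernel version below is an example; always keep the running kernel reported by uname -r):

        df -h /boot                    # confirm the partition is full
        uname -r                       # note the kernel you are running; never remove it
        dpkg -l 'linux-image-*'        # list installed kernels
        sudo apt-get purge linux-image-3.2.0-32-generic   # remove an old, unused kernel
        sudo apt-get -f install        # let dpkg finish the interrupted configuration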

    Read the article

  • SQL SERVER – ERROR – FIX – Msg 3702, Level 16, State 3, Line 1 Cannot drop database “MyDBName” because it is currently in use

    - by pinaldave
    I often travel to do various seminars and presentations at various organizations. During presentations I often create and drop various databases for demonstration purposes. Recently in one of the presentations, when I tried to remove my recently created database, I got the following error.

        Msg 3702, Level 16, State 3, Line 1
        Cannot drop database "MyDBName" because it is currently in use.

    The reason was very simple: my database was in use by another session or window. One option was to go and find the open session and close it right away, followed by dropping the database. As I was in a rush, I quickly wrote the following code and was able to successfully drop the database.

        USE MASTER
        GO
        ALTER DATABASE MyDBName
        SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
        DROP DATABASE MyDBName
        GO

    Please note that I do all this in my demonstrations; do not run the above code in production without proper approvals and supervision. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: SQL, SQL Authority, SQL Error Messages, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
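    For the gentler route the author mentions, here is a sketch of finding the open sessions first (this uses the legacy sys.sysprocesses view for its dbid column, which works across old SQL Server versions; the spid in the KILL line is a placeholder):

        -- List sessions currently attached to the database
        SELECT spid, loginame, hostname
        FROM sys.sysprocesses
        WHERE dbid = DB_ID('MyDBName');

        -- Then close a specific offender before dropping the database
        -- KILL 53;  -- replace 53 with an actual spid from the query above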

    Read the article

  • software architecture (OO design) refresher course

    - by PeterT
    I am lead developer and team lead in a small RAD team. Deadlines are tight and we have to release often, which we do, and this is what keeps the business happy. While we (the development team) are trying to maintain the quality of the code (clean and short methods), I can't help but notice that the overall quality of the OO design and architecture is getting worse over time - the library we are working on is gradually reducing itself to a "bag of functions". Well, we try to use the design patterns, but since we don't really have much time for design as such, we are mostly using the creational ones. I have read Code Complete / Design Patterns (GoF & enterprise) / The Pragmatic Programmer / and many books from the Effective XXX series. Should I re-read them, as I read them a long time ago and have forgotten quite a lot, or have other / better OO design / software architecture books been published since then which I should definitely read? Any ideas or recommendations on how I can get the situation under control and start improving the architecture? The way I see it: I will start improving the architectural / design quality of the software components I am working on, and then will start helping other team members once I find what is working for me.

    Read the article

  • Does the use of mongodb enhance extending/changing database driven applications?

    - by developer10214
    When an application is created which needs to store data, an SQL database is used very often. So have I done in a lot of ASP.NET applications. The resulting applications often have an ORM like the Entity Framework and maybe a business layer. So when such an application needs to be extended (let's say you have to add a comment property to an object), you have to change/extend the database, then the ORM and the business layer and so on. To deploy the changes you have to update the target database and the application. I know that things like code first and fluent can make this approach easier. I tried mongodb; I only used the standard driver, and when I had to extend some objects all I had to do was change the code. So it feels that such approaches are much easier to realize when using mongodb. I don't have much experience with larger applications and mongodb. I know that a SQL database or mongodb doesn't fit all needs, and both have their pros and cons. I want to know if my feeling is right; if so, I would rather choose mongodb than an SQL database.

    Read the article

  • new project; entire node.js app

    - by Jared
    I have been looking into Node.js, express and Nowjs and love how easy it is to have real-time interactions between clients. My background is mostly CodeIgniter MVC using PHP and MySQL. I want to remake a current web project of mine from scratch to make everything better and more real-time with this newer technology. After researching and doing test examples, I want to use node.js, express and Nowjs for the real-time interactions once someone connects via socket.io, to pull data back to clients, but use CodeIgniter for the control of the site and user management, a possible shopping cart/store, and pretty much everything else. This is purely due to time constraints and that I am already familiar with doing it that way. I have been looking at MongoDB as an alternative to MySQL. Basically the app is going to be multiple chat rooms all on one page, with notifications and private messaging. Lots of data transfer and images. Before I started piecing it together I wanted to hear from people who have already done something similar. My model would use CodeIgniter and MySQL to render the page, then connect clients onto a node.js server and broadcast using express and Nowjs. Would MongoDB be better than MySQL for tons of messages and data being stored? Also, does it make sense to not make the whole site on Node.js, and kind of piece it together like that? I was asked to re-post this somewhere else as it was not up to the format for SO; OP from here http://stackoverflow.com/questions/12649469/new-project-need-some-start-up-advice-node-js-app#comment17062924_12649469

    Read the article

  • Advantages of Hudson and Sonar over manual process or homegrown scripts.

    - by Tom G
    My coworker and I recently got into a debate over a proposed plan at our workplace. We've more or less finished transitioning our Java codebase into one managed and built with Maven. Now, I'd like for us to integrate with Hudson and Sonar or something similar. My reasons for this are that it'll provide a 'zero-click' build step to provide testers with new experimental builds, that it will let us deploy applications to a server more easily, and that tools such as Sonar will provide us with well-needed metrics on code coverage, Javadoc, package dependencies and the like. He thinks that the overhead of getting up to speed with two new frameworks is unacceptable, and that we should simply double down on documentation and create our own scripts for deployment. Since we plan on some aggressive rewrites to pay down the technical debt previous developers incurred (gratuitous use of Java's Serializable interface as a file storage mechanism that has predictably bitten us in the ass), he argues that we can document as we go, and that we'll end up changing a large swath of code in the process anyway. I contend that having the accurate metrics that Sonar (or fill in your favorite similar tool) provides gives us a good place to start for any refactoring efforts, not to mention general maintenance -- after all, knowing which classes are the most poorly documented, even if it's just a starting point, is better than seat-of-the-pants guessing. Am I wrong, and trying to introduce more overhead than we really need? Some more background: an alumnus of our company is working at a Navy research lab now and suggested these two tools in particular as ones they've had great success with. My coworker and I have also had our share of friendly disagreements before -- he's more of the "CLI for all, compiles Gentoo in his spare time and uses Git" type and I'm more of a "give me an intuitive GUI, plays with XNA and is fine with SVN" type, so there's definitely some element of culture clash here.

    Read the article

  • How to force remove a package if dpkg removal script fails?

    - by fodon
    I'm trying to remove a package after I deleted the /etc/init.d/disco-master file (in an attempt to remove the package manually). I want to remove the disco-master package. How do I do this now? This is what happens when I do sudo apt-get remove disco-master:

        removing disco-master ...
        invoke-rc.d: unknown initscript, /etc/init.d/disco-master not found.
        dpkg: error processing disco-master (--remove):
        subprocess installed pre-removal script returned error exit status 100
        Errors were encountered while processing: disco-master
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    When I do sudo apt-get install --reinstall disco-master I get the following:

        You might want to run 'apt-get -f install' to correct these:
        The following packages have unmet dependencies:
        disco-master : Depends: disco-node (= 0.4.2+nmu1) but it is not going to be installed
        E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution).

    When I do sudo apt-get -f install I get this:

        Unpacking disco-node (from .../disco-node_0.4.2+nmu1_amd64.deb) ...
        dpkg: error processing /var/cache/apt/archives/disco-node_0.4.2+nmu1_amd64.deb (--unpack):
        trying to overwrite '/usr/lib/disco/master/ebin/disco.app', which is also in package disco-master 0.4.1
        No apport report written because MaxReports is reached already
        dpkg-deb: error: subprocess paste was killed by signal (Broken pipe)
        Errors were encountered while processing: /var/cache/apt/archives/disco-node_0.4.2+nmu1_amd64.deb
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    When I run sudo apt-get remove disco-node I get the following:

        Package disco-node is not installed, so not removed
        You might want to run 'apt-get -f install' to correct these:
        The following packages have unmet dependencies:
        disco-master : Depends: disco-node (= 0.4.1) but it is not going to be installed
        Depends: python-disco (= 0.4.1) but 0.4.2+nmu1 is to be installed
        E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution).

    When I did sudo dpkg -P --force-all disco-master I got:

        Removing disco-master ...
        invoke-rc.d: unknown initscript, /etc/init.d/disco-master not found.
        dpkg: error processing disco-master (--purge):
        subprocess installed pre-removal script returned error exit status 100
        Errors were encountered while processing: disco-master
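    Since every attempt fails in the package's own pre-removal script (which calls the init script that was deleted), one common way out is to neutralize that maintainer script and retry the removal. A sketch, assuming the standard Debian/Ubuntu location for maintainer scripts:

        # Replace the broken prerm script with a no-op, then remove again
        printf '#!/bin/sh\nexit 0\n' | sudo tee /var/lib/dpkg/info/disco-master.prerm
        sudo dpkg --remove disco-master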

    Read the article

  • Advantages of Singleton Class over Static Class?

    Point 1) Singleton: We can get the object of a singleton and then pass it to other methods. Static class: We cannot pass a static class to other methods as we pass objects. Point 2) Singleton: In the future, it is easy to change the logic of creating objects to some pooling mechanism. Static class: Very difficult to implement pooling logic in the case of a static class. We would need to make that class non-static and then make all the methods non-static, so your entire code would need to be changed. Point 3) Singleton: Can a singleton class be inherited by a subclass? The singleton pattern does not impose any restriction on inheritance, so we should be able to do this as long as the subclass is also a singleton. There's nothing fundamentally wrong with subclassing a class that is intended to be a singleton. There are many reasons you might want to do it, and there are many ways to accomplish it. It depends on the language you use. Static class: We cannot inherit a static class from another static class in C#. Think about it this way: you access static members via the type name, like this: MyStaticType.MyStaticMember(); Were you to inherit from that class, you would have to access it via the new type name: MyNewType.MyStaticMember(); Thus, the new item bears no relationship to the original when used in code. There would be no way to take advantage of any inheritance relationship for things like polymorphism.
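    A compact C# sketch of points 1 and 3 together; all names here are illustrative, not from the original text:

        using System;

        class Logger
        {
            private static readonly Lazy<Logger> instance =
                new Lazy<Logger>(() => new Logger());
            public static Logger Instance { get { return instance.Value; } }

            protected Logger() { } // protected, so subclassing remains possible

            public virtual void Log(string msg) { Console.WriteLine(msg); }
        }

        class TimestampedLogger : Logger
        {
            public override void Log(string msg)
            {
                Console.WriteLine(DateTime.Now.ToString("o") + " " + msg);
            }
        }

        class Report
        {
            // Point 1: a singleton is an object, so it can be passed in
            // (and swapped for a subclass) like any other dependency.
            public void Run(Logger logger) { logger.Log("report finished"); }
        }

        class Program
        {
            static void Main()
            {
                new Report().Run(Logger.Instance);
                new Report().Run(new TimestampedLogger());
            }
        }

    None of this is expressible with a static class: you cannot hand MyStaticType itself to Run, and you cannot override its members.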

    Read the article

  • How to design a scriptable communication emulator?

    - by Hawk
    Requirement: We need a tool that simulates a hardware device that communicates via RS232 or TCP/IP, to allow us to test our main application which will communicate with the device. Current flow: User loads script; parse script into commands; user runs script; execute commands. Script / commands (simplified for discussion): Connect RS232 = RS232ConnectCommand; Connect TCP/IP = TcpIpConnectCommand; Send data = SendCommand; Receive data = ReceiveCommand; Disconnect = DisconnectCommand. All commands implement the ICommand interface. The command runner simply executes a sequence of ICommand implementations sequentially, thus ICommand must have an Execute exposure, pseudo code: void Execute(ICommunicator context). The Execute method takes a context argument which allows the command implementations to execute what they need to do. For instance SendCommand will call context.Send, etc. The problem: RS232ConnectCommand and TcpIpConnectCommand need to instantiate the context to be used by subsequent commands. How do you handle this elegantly? Solution 1: Change the ICommand Execute method to: ICommunicator Execute(ICommunicator context). While it will work, it seems like a code smell. All commands now need to return the context, which for all commands except the connection ones will be the same context that is passed in. Solution 2: Create an ICommunicatorWrapper (ICommunicationBroker?) which follows the decorator pattern and decorates ICommunicator. It introduces a new exposure: void SetCommunicator(ICommunicator communicator). And ICommand is changed to use the wrapper: void Execute(ICommunicationWrapper context). Seems like a cleaner solution. Question: Is this a good design? Am I on the right track?
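    A minimal C# sketch of Solution 2, using the names from the question (CommunicationBroker, SetCommunicator); the TCP implementation stub is illustrative:

        using System;

        interface ICommunicator
        {
            void Send(byte[] data);
        }

        // The stable context every command receives; connect commands re-point it.
        class CommunicationBroker : ICommunicator
        {
            private ICommunicator inner;

            public void SetCommunicator(ICommunicator communicator)
            {
                inner = communicator;
            }

            public void Send(byte[] data)
            {
                if (inner == null)
                    throw new InvalidOperationException("Not connected");
                inner.Send(data); // delegate to the current concrete communicator
            }
        }

        interface ICommand
        {
            void Execute(CommunicationBroker context);
        }

        class TcpIpConnectCommand : ICommand
        {
            public void Execute(CommunicationBroker context)
            {
                // Swap in the concrete communicator without changing the
                // context reference the later commands already hold.
                context.SetCommunicator(new TcpIpCommunicator());
            }
        }

        class TcpIpCommunicator : ICommunicator
        {
            public void Send(byte[] data) { /* write to the socket here */ }
        }

    With this shape, Execute keeps its void return, and only the connect commands know that the broker's inner communicator changes.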

    Read the article

  • What are the legal considerations when forking a BSD-licensed project?

    - by Thomas Owens
    I'm interested in forking a project released under a two-clause BSD license: Copyright (c) 2010 {copyright holder} All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: (1) Redistributions of source code must retain the above copyright notice, this list of conditions and the disclaimer at the end. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. (2) Neither the name of {copyright holder} nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. DISCLAIMER THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. I've never forked a project before, but this project is very similar to something that I need/want. However, I'm not sure how far I'll get, so my plan is to pull the latest from their repository and start working. Maybe, eventually, I'll get it to where I want it, and be able to release it. Is this the right approach? How, exactly, does this impact forking of the project? How do I track who owns what components or sections (what's copyright me, what's copyright the original creators, once I start stomping over their code base)? Can I fork this project? What must I do prior to releasing, and when/if I decide to release the software derived from this BSD-licensed work?

    Read the article

  • Best ways to collect location-based user input

    - by user359650
    I'm working on a website where users will be able to register and provide information about their location. In order to prevent users from inputting incorrect data, we don't want users to provide free-text information but instead choose from predefined values as much as possible. We believe there are 2 ways of providing those values: use an API to an external service provider, or create your own local database.

    APIs. Some resources:
    - https://developers.facebook.com/docs/reference/ads-api/get-autocomplete-data/
    - http://developer.yahoo.com/geo/geoplanet/
    Pros:
    - accuracy and completeness of data.
    - no maintenance related to updates of data, as this is taken care of by the API provider.
    - easier/faster to get started (no need to create a local database, just implement the API).
    Cons:
    - degradation of performance when there are availability issues with the external API.
    - outages due to changes to the external API (until your code is updated to reflect those changes).
    - lock-in with the external provider.

    Local database. Some resources:
    - http://developer.yahoo.com/geo/geoplanet/data/
    - http://www.maxmind.com/app/geolitecity
    - http://download.geonames.org/export/dump/
    Pros:
    - no external dependency: improved stability and performance.
    Cons:
    - more work to get started (you need to create the database and the code to interact with it).
    - risks of inaccurate/incomplete data, either initially or over time.
    - more maintenance work to keep the database up to date.

    Assuming the depth of information requested from users is as follows:
    - country: interested in the value; also used to narrow down the list of regions.
    - region (state in the US, county in the UK...): not interested in the value itself, only used to narrow down the list of cities.
    - city: interested in the value (which can be used to work out the related region should we need regional statistics).
    - address: interested in the value, although OPTIONAL.

    Which option (API or local database) would you choose? What tips would you give for the implementation? What other resources can you share?

    Read the article

  • Solving Big Problems with Oracle R Enterprise, Part II

    - by dbayard
    Part II – Solving Big Problems with Oracle R Enterprise

    In the first post in this series (see https://blogs.oracle.com/R/entry/solving_big_problems_with_oracle), we showed how you can use R to perform historical rate of return calculations against investment data sourced from a spreadsheet. We demonstrated the calculations against sample data for a small set of accounts. While this worked fine, in the real world the problem is much bigger, because the amount of data is much bigger - so much bigger that our approach in the previous post won't scale to meet the real-world needs. From our previous post, here are the challenges we need to conquer: The actual data that needs to be used lives in a database, not in a spreadsheet. The actual data is much, much bigger - too big to fit into the normal R memory space and too big to want to move across the network. The overall process needs to run fast - much faster than a single processor. The actual data needs to be kept secured - another reason not to move it from the database and across the network. And the process of calculating the IRR needs to be integrated together with other database ETL activities, so that IRRs can be calculated as part of the data warehouse refresh processes. In this post, we will show how we moved from a sample data environment to working with full-scale data. This post is based on actual work we did for a financial services customer during a recent proof-of-concept. Getting started with the Database: At this point, we have some sample data and our IRR function. We were at a similar point in our customer proof-of-concept exercise - we had sample data but we did not have the full customer data yet. So our database was empty. But this was easily rectified by leveraging the transparency features of Oracle R Enterprise (see https://blogs.oracle.com/R/entry/analyzing_big_data_using_the). The following code shows how we took our sample data SimpleMWRRData and easily turned it into a new Oracle database table called IRR_DATA via ore.create(). The code also shows how we can access the database table IRR_DATA as if it were a normal R data.frame named IRR_DATA. If we go to sql*plus, we can also check out our new IRR_DATA table. At this point, we now have our sample data loaded in the database as a normal Oracle table called IRR_DATA, so we proceeded to test our R function working with database data. As our first test, we retrieved the data for a single account from the IRR_DATA table, pulled it into local R memory, then called our IRR function. This worked. No SQL coding required! Going from Crawling to Walking: Now that we have shown our R code working with database-resident data for a single account, we wanted to experiment with doing this for multiple accounts. In other words, we wanted to implement the split-apply-combine technique we discussed in our first post in this series. Fortunately, Oracle R Enterprise provides a very scalable way to do this with a function called ore.groupApply(). You can read more about ore.groupApply() here: https://blogs.oracle.com/R/entry/analyzing_big_data_using_the1. Here is an example of how we ask ORE to take our IRR_DATA table in the database, split it by the ACCOUNT column, apply a function that calls our SimpleMWRR() calculation, and then combine the results. (If you are following along at home, be sure to have installed our myIRR package on your database server via "R CMD INSTALL myIRR".)
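    The code itself appeared as screenshots in the original post; a minimal sketch of an ore.groupApply() call of this shape (SimpleMWRR, myIRR, IRR_DATA and ACCOUNT come from the post; the connection details and exact arguments here are assumptions):

        library(ORE)
        # Placeholder credentials; use your own database connection details
        ore.connect(user = "rquser", sid = "orcl", host = "dbserver",
                    password = "...", all = TRUE)

        result <- ore.groupApply(
          IRR_DATA,                  # ore.frame proxy for the IRR_DATA table
          INDEX = IRR_DATA$ACCOUNT,  # split the table by account in the database
          function(df) {
            library(myIRR)           # package installed on the database server
            data.frame(ACCOUNT = df$ACCOUNT[1], IRR = SimpleMWRR(df))
          })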
The interesting thing about ore.groupApply is that the calculation is not actually performed in the desktop R environment from which I am running. What actually happens is that ore.groupApply uses the Oracle database to perform the work. The Oracle database is what actually splits the IRR_DATA table by ACCOUNT. Then the Oracle database takes the data for each account and sends it to an embedded R engine running on the database server to apply our R function. Finally, the Oracle database combines all the individual results from the calls to the R function. This is significant because now the embedded R engine only needs to deal with the data for a single account at a time. Regardless of whether we have 20 accounts or 1 million accounts or more, the R engine that performs the calculation does not care. Given that normal R has a finite amount of memory to hold data, the ore.groupApply approach overcomes the R memory scalability problem, since we only need to fit the data from a single account in R memory (not the data for all of the accounts). Additionally, the IRR_DATA does not need to be sent from the database to my desktop R program. Even though I am invoking ore.groupApply from my desktop R program, because the actual SimpleMWRR calculation is run by the embedded R engine on the database server, the IRR_DATA does not need to leave the database server. This is both a performance benefit, because network transmission of large amounts of data takes time, and a security benefit, because it is harder to protect private data once you start shipping it around your intranet. Another benefit, which we will discuss in a few paragraphs, is the ability to leverage Oracle database parallelism to run these calculations for dozens of accounts at once. From Walking to Running: ore.groupApply is rather nice, but it still has the drawback that I run this from a desktop R instance. This is not ideal for integrating into typical operational processes like nightly data warehouse refreshes or monthly statement generation. But this is not an issue for ORE. Oracle R Enterprise lets us run this from the database using regular SQL, which is easily integrated into standard operations. That is extremely exciting, and it is the way we actually did these calculations in the customer proof. As part of Oracle R Enterprise, it provides a SQL equivalent to ore.groupApply which it refers to as "rqGroupEval". To use rqGroupEval via SQL, there is a bit of simple setup needed. Basically, the Oracle Database needs to know the structure of the input table and the grouping column, which we are able to define using the database's pipelined table function mechanisms. Here is the setup script: At this point, our initial setup of rqGroupEval is done for the IRR_DATA table. The next step is to define our R function to the database. We do that via a call to ORE's rqScriptCreate. Now we can test it. The SQL you use to run rqGroupEval uses the Oracle database pipelined table function syntax. The first argument to irr_dataGroupEval is a cursor defining our input. You can add additional where clauses and subqueries to this cursor as appropriate. The second argument is any additional inputs to the R function. The third argument is the text of a dummy select statement. The dummy select statement is used by the database to identify the columns and datatypes to expect the R function to return. The fourth argument is the column of the input table to split/group by.
The final argument is the name of the R function as you defined it when you called rqScriptCreate(). The Real-World Results: In our real customer proof-of-concept, we had more sophisticated calculation requirements than shown in this simplified blog example. For instance, we had to perform the rate of return calculations for 5 separate time periods, so the R code was enhanced to do so. In addition, some accounts needed a time-weighted rate of return to be calculated, so we extended our approach and added an R function to do that. And finally, there were also a few more real-world data irregularities that we needed to account for, so we added logic to our R functions to deal with those exceptions. For the full-scale customer test, we loaded the customer data onto a Half-Rack Exadata X2-2 Database Machine. As our half-rack had 48 physical cores (and 96 threads if you consider hyperthreading), we wanted to take advantage of that CPU horsepower to speed up our calculations. To do so with ORE, it is as simple as leveraging the Oracle Database Parallel Query features. Looking at the SQL used in the customer proof, notice that we use a parallel hint on the cursor that is the input to our rqGroupEval function. That is all we need to do to enable Oracle to use parallel R engines. From the Real-Time SQL Monitor output captured when we ran this during the proof of concept, you can notice a few things: The SQL completed in 110 seconds (1.8 minutes). We calculated rates of return for 5 time periods for each of 911k accounts (the number of actual rows returned by the IRRSTAGEGROUPEVAL operation). We accessed 103m rows of detailed cash flow/market value data (the number of actual rows returned by the IRR_STAGE2 operation). We ran with 72 degrees of parallelism spread across 4 database servers. Most of our 110 seconds was spent in the "External Procedure call" event. On average, we performed 8,200 executions of our R function per second (911k accounts / 110s). On average, each execution was passed 110 rows of data (103m detail rows / 911k accounts). On average, we did 41,000 single time period rate of return calculations per second (each of the 8,200 executions of our R function did rate of return calculations for 5 time periods). On average, we processed over 900,000 rows of database data in R per second (103m detail rows / 110s). R + Oracle R Enterprise: Best of R + Best of Oracle Database. This blog post series started by describing a real customer problem: how to perform a lot of calculations on a lot of data in a short period of time. While standard R proved to be a very good fit for writing the necessary calculations, the challenge of working with a lot of data in a short period of time remained. This blog post series showed how Oracle R Enterprise enables R to be used in conjunction with the Oracle Database to overcome the data volume and performance issues (as well as simplifying the operations and security issues). It also showed that we could calculate 5 time periods of rates of return for almost a million individual accounts in less than 2 minutes.
In a future post, we will take the same R function and show how Oracle R Connector for Hadoop can be used in the Hadoop world. In that next post, instead of having our data in an Oracle database, our data will live in Hadoop, and we will show how to use the Oracle R Connector for Hadoop and other Oracle Big Data Connectors to move data between Hadoop, R, and the Oracle Database easily.

    Read the article

  • VS 11 Beta merge tool is awesome, except for resolving conflicts

    - by deadlydog
    If you've downloaded the new VS 11 Beta and done any merging, then you've probably seen the new diff and merge tools built into VS 11. They are awesome, and by far a vast improvement over the ones included in VS 2010. There is one problem with the merge tool though, and in my opinion it is huge. Basically, the problem with the new VS 11 Beta merge tool is that when you are resolving conflicts after performing a merge, you cannot tell what changes were made in each file where the code is conflicting. Was the conflicting code added, deleted, or modified in the source and target branches? I don't know (without explicitly opening up the history of both the source and target files), and the merge tool doesn't tell me. In my opinion this is a huge fail on the part of the designers/developers of the merge tool, as it actually forces me to either spend an extra minute for every conflict to view the source and target file history, or to go back to using the merge tool in VS 2010 to properly assess which changes I should take. I submitted this as a bug to Microsoft, but they say that this is intentional, by design. WHAT?! So they purposely crippled their tool in order to make it pretty and keep the look consistent with the new diff tool? That's like purposely putting a little hole in the bottom of your cup for design reasons to make it look cool. Sure, the cup looks cool, but I'm not going to use it if it leaks all over the place and doesn't do the job that it is intended for. Bah! But I digress. Because this bug is apparently a feature, they asked me to open up a "feature request" to have the problem fixed. Please go vote up both my bug submission and the feature request so that this tool will actually be useful by the time the final VS 11 product is released.

    Read the article

  • Why the recent shift to removing/omitting semicolons from JavaScript?

    - by Jonathan
    It seems to be fashionable recently to omit semicolons from JavaScript. There was a blog post a few years ago emphasising that in JavaScript semicolons are optional, and the gist of the post seemed to be that you shouldn't bother with them because they're unnecessary. The post, widely cited, doesn't give any compelling reasons not to use them, just that leaving them out has few side-effects. Even GitHub has jumped on the no-semicolon bandwagon, requiring their omission in any internally-developed code, and a recent commit to the zepto.js project by its maintainer has removed all semicolons from the codebase. His chief justifications were: it's a matter of preference for his team, and less typing. Are there other good reasons to leave them out? Frankly, I can see no reason to omit them, and certainly no reason to go back over code to erase them. It also goes against (years of) recommended practice, which I don't really buy the "cargo cult" argument for. So, why all the recent semicolon-hate? Is there a shortage looming? Or is this just the latest JavaScript fad?
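    For reference, the "few side-effects" are not zero; a small sketch of the two classic places where automatic semicolon insertion changes the meaning of JavaScript code:

        // 1. ASI does NOT fire here: the '(' on the next line turns the
        //    previous line into a function call, throwing at runtime.
        var a = 1
        var b = a
        (function () { console.log("IIFE") })()   // TypeError: a is not a function

        // 2. ASI DOES fire here, right after 'return', so the object
        //    literal below is unreachable and f() returns undefined.
        function f() {
          return
          { value: 42 }
        }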

    Read the article

  • Script/tool to import series of snapshots, each being a new revision, into Subversion, populating source tree?

    - by Rob
    I've developed code locally and taken a fairly regular snapshot whenever I reached a significant point in development, e.g. a working build. So I have a longish list of about 40 folders, each folder being a snapshot, in ascending YYYYMMDD date order, e.g.: 20100523 20100614 20100721 20100722 20100809 20100901 20101001 20101003 20101104 20101119 20101203 20101218 20110102. I'm looking for a script to import each of these snapshots as a new Subversion revision to the source tree, the end result being that the HEAD revision is the same as the last snapshot, and the other revisions are numbered in order. Some other requirements: the HEAD revision should not be cumulative of the previous snapshots, i.e., files that appeared in older snapshots but which don't appear in later ones (e.g. due to refactoring etc.) should not appear in the HEAD revision; meanwhile, there should be continuity between files that do persist between snapshots - Subversion should know that there are previous versions of these files and not treat them as brand new files within each revision. Some background about my aim: I need to formally revision control this work rather than keep local private snapshot copies. I plan to release this work as open source, so version controlling would be highly recommended. I am evaluating some of the current popular version control systems (Subversion and Git), but I definitely need a working solution in Subversion. I'm not looking to be persuaded to use one particular tool; I need a solution for each tool I am considering, as I would also like a solution in Git (I will post an answer separately for Git so separate camps of folks who have expertise in Git and Subversion will be able to give focused answers on one or the other). The same question but for Git: Script/tool to import series of snapshots, each being a new edition, into GIT, populating source tree? There is an outline answer for Subversion on stackoverflow.com, but not enough specifics about the script: what commands to use, code to check valid scenarios if necessary - i.e. basically a working script. http://stackoverflow.com/questions/2203818/is-there-anyway-to-import-xcode-snapshots-into-a-new-svn-repository
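    A sketch of the usual shape of such a script (it assumes a working copy checked out in ./wc, the snapshot folders alongside it, and rsync/svn on the PATH; it is untested against edge cases like moved files, which SVN will record as delete+add):

        #!/bin/sh
        # Replay each snapshot folder as one Subversion revision.
        for snap in 20100523 20100614 20100721 20100722 20100809 20100901 \
                    20101001 20101003 20101104 20101119 20101203 20101218 20110102; do
            # Mirror the snapshot into the working copy, preserving .svn metadata
            rsync -a --delete --exclude '.svn' "$snap"/ wc/
            (
                cd wc || exit 1
                svn add --force .                                       # schedule new files
                svn status | awk '/^!/ {print $2}' | xargs -r svn rm    # schedule vanished files
                svn commit -m "Snapshot $snap"
            )
        done

    Because each snapshot is copied over the same working copy, unchanged files keep their history and deletions are recorded explicitly, which satisfies both the non-cumulative-HEAD and the continuity requirements.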

    Read the article

  • Make Your Menu Item Highlighted

    - by Shaun
    When I was working on the TalentOn project (Promotion in MSDN Chinese) I was asked to implement a functionality that makes the top menu items highlighted when the currently viewed page is in that section. This might be a common scenario in web application development, I think.

    Simple Example

    When thinking about the solution for the highlighted menu items, the biggest problem would be how to define the sections (menu items) and the pages each belongs to, rather than making the menu highlighted. With the ASP.NET MVC framework we can use the controller - action infrastructure to achieve it. Each controller would normally have a related menu item on the master page. The menu item would be highlighted if any of the views under this controller are being shown. Some specific menu items would be highlighted only when a particular action was invoked, for example the home page, the about page, etc. The check rule can be specified on demand. For example, I can define that the LogOn and Register actions of the Account controller should make the Account menu item highlighted, while ChangePassword should make the Profile menu item highlighted. I'm going to use an HtmlHelper to render the highlight-able menu item. The key point is that I need to pass in a predicate to check whether the current view belongs to this menu item, which means whether this menu item should be highlighted or not. Hence I need a delegate as its parameter. The simplest code would be like this:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Web;
        using System.Web.Mvc;
        using System.Web.Mvc.Html;

        namespace ShaunXu.Blogs.HighlighMenuItem
        {
            public static class HighlightMenuItemHelper
            {
                public static MvcHtmlString HighlightMenuItem(this HtmlHelper helper,
                    string text, string controllerName, string actionName, object routeData, object htmlAttributes,
                    string highlightText, object highlightHtmlAttributes,
                    Func<HtmlHelper, bool> highlightPredicate)
                {
                    var shouldHighlight = highlightPredicate.Invoke(helper);
                    if (shouldHighlight)
                    {
                        return helper.ActionLink(string.IsNullOrWhiteSpace(highlightText) ? text : highlightText,
                            actionName, controllerName, routeData, highlightHtmlAttributes == null ? htmlAttributes : highlightHtmlAttributes);
                    }
                    else
                    {
                        return helper.ActionLink(text, actionName, controllerName, routeData, htmlAttributes);
                    }
                }
            }
        }

    There are 3 groups of parameters: the first group would be the same as the built-in ActionLink method parameters. It has the link text, controller name, action name, etc. passed in so that I can render a valid link for the menu item. The second group is more focused on the highlighted link text and HTML attributes. I will use them to render the highlighted menu item. The third group, which contains one parameter, is a predicate that tells me whether this menu item should be highlighted or not, based on the user's definition. And then I changed the master page of the sample MVC application. I made the Home and About menus highlighted only when the Index and About actions are invoked, and I added a new menu named Account which should be highlighted for all actions/views under its Account controller. So my master page would be like this:
        <div id="menucontainer">
            <ul id="menu">
                <li><%: Html.HighlightMenuItem(
                        "Home", "Home", "Index", null, null,
                        "[Home]", null,
                        helper => helper.ViewContext.RouteData.Values["controller"].ToString() == "Home"
                               && helper.ViewContext.RouteData.Values["action"].ToString() == "Index") %></li>
                <li><%: Html.HighlightMenuItem(
                        "About", "Home", "About", null, null,
                        "[About]", null,
                        helper => helper.ViewContext.RouteData.Values["controller"].ToString() == "Home"
                               && helper.ViewContext.RouteData.Values["action"].ToString() == "About") %></li>
                <li><%: Html.HighlightMenuItem(
                        "Account", "Account", "LogOn", null, null,
                        "[Account]", null,
                        helper => helper.ViewContext.RouteData.Values["controller"].ToString() == "Account") %></li>
            </ul>
        </div>

    Note: you need to import the namespace "ShaunXu.Blogs.HighlighMenuItem" on the master page to make the extension method created above available.

    So let's see the result. When the home page is shown, the Home menu item is highlighted, since at this moment the controller is Home and the action is Index. If I click the About menu item, it turns highlighted because the action is now About. And if I navigate to the register page, the Account menu item is highlighted, as it should be whenever any action under the Account controller is invoked.

    Fluent Interface

    So far this is a complete example of a highlighted menu item, but it is not perfect yet. The two most common scenarios are: highlight when a particular action is invoked, and highlight when any action under a given controller is invoked. We can create two shortcut methods for them, so that the developer normally does not need to write the predicate at all.

    Another improvement is to make the method more user-friendly - or I should say developer-friendly. As you can see, to add a highlighted menu item we have to pass eight parameters and remember what each of them means. We can make the API read more "fluently", so that developers get hints from Visual Studio IntelliSense as they type. Below is the full code for it:
        using System;
        using System.Web.Mvc;
        using System.Web.Mvc.Html;

        namespace Ethos.Xrm.HR
        {
            #region Helper

            public static class HighlightActionMenuHelper
            {
                public static IHighlightActionMenuProviderAfterCreated HighlightActionMenu(this HtmlHelper helper)
                {
                    return new HighlightActionMenuProvider(helper);
                }
            }

            #endregion

            #region Interfaces

            public interface IHighlightActionMenuProviderAfterCreated
            {
                IHighlightActionMenuProviderAfterOn On(string actionName, string controllerName);
            }

            public interface IHighlightActionMenuProviderAfterOn
            {
                IHighlightActionMenuProviderAfterWith With(string text, object routeData, object htmlAttributes);
            }

            public interface IHighlightActionMenuProviderAfterWith
            {
                IHighlightActionMenuProviderAfterHighlightWhen HighlightWhen(Func<HtmlHelper, bool> predicate);
                IHighlightActionMenuProviderAfterHighlightWhen HighlightWhenControllerMatch();
                IHighlightActionMenuProviderAfterHighlightWhen HighlightWhenControllerAndActionMatch();
            }

            public interface IHighlightActionMenuProviderAfterHighlightWhen
            {
                IHighlightActionMenuProviderAfterApplyHighlightStyle ApplyHighlighStyle(object highlightHtmlAttributes, string highlightText);
                IHighlightActionMenuProviderAfterApplyHighlightStyle ApplyHighlighStyle(object highlightHtmlAttributes);
                IHighlightActionMenuProviderAfterApplyHighlightStyle ApplyHighlighStyle(string cssClass, string highlightText);
                IHighlightActionMenuProviderAfterApplyHighlightStyle ApplyHighlighStyle(string cssClass);
            }

            public interface IHighlightActionMenuProviderAfterApplyHighlightStyle
            {
                MvcHtmlString ToActionLink();
            }

            #endregion

            public class HighlightActionMenuProvider :
                IHighlightActionMenuProviderAfterCreated,
                IHighlightActionMenuProviderAfterOn, IHighlightActionMenuProviderAfterWith,
                IHighlightActionMenuProviderAfterHighlightWhen, IHighlightActionMenuProviderAfterApplyHighlightStyle
            {
                private HtmlHelper _helper;

                private string _controllerName;
                private string _actionName;
                private string _text;
                private object _routeData;
                private object _htmlAttributes;

                private Func<HtmlHelper, bool> _highlightPredicate;

                private string _highlightText;
                private object _highlightHtmlAttributes;

                public HighlightActionMenuProvider(HtmlHelper helper)
                {
                    _helper = helper;
                }

                public IHighlightActionMenuProviderAfterOn On(string actionName, string controllerName)
                {
                    _actionName = actionName;
                    _controllerName = controllerName;
                    return this;
                }

                public IHighlightActionMenuProviderAfterWith With(string text, object routeData, object htmlAttributes)
                {
                    _text = text;
                    _routeData = routeData;
                    _htmlAttributes = htmlAttributes;
                    return this;
                }

                public IHighlightActionMenuProviderAfterHighlightWhen HighlightWhen(Func<HtmlHelper, bool> predicate)
                {
                    _highlightPredicate = predicate;
                    return this;
                }

                public IHighlightActionMenuProviderAfterHighlightWhen HighlightWhenControllerMatch()
                {
                    return HighlightWhen(helper =>
                        helper.ViewContext.RouteData.Values["controller"].ToString().ToLower() == _controllerName.ToLower());
                }

                public IHighlightActionMenuProviderAfterHighlightWhen HighlightWhenControllerAndActionMatch()
                {
                    return HighlightWhen(helper =>
                        helper.ViewContext.RouteData.Values["controller"].ToString().ToLower() == _controllerName.ToLower() &&
                        helper.ViewContext.RouteData.Values["action"].ToString().ToLower() == _actionName.ToLower());
                }

                public IHighlightActionMenuProviderAfterApplyHighlightStyle ApplyHighlighStyle(object highlightHtmlAttributes, string highlightText)
                {
                    _highlightText = highlightText;
                    _highlightHtmlAttributes = highlightHtmlAttributes;
                    return this;
                }

                public IHighlightActionMenuProviderAfterApplyHighlightStyle ApplyHighlighStyle(object highlightHtmlAttributes)
                {
                    return ApplyHighlighStyle(highlightHtmlAttributes, _text);
                }

                public IHighlightActionMenuProviderAfterApplyHighlightStyle ApplyHighlighStyle(string cssClass, string highlightText)
                {
                    return ApplyHighlighStyle(new { @class = cssClass }, highlightText);
                }

                public IHighlightActionMenuProviderAfterApplyHighlightStyle ApplyHighlighStyle(string cssClass)
                {
                    return ApplyHighlighStyle(new { @class = cssClass }, _text);
                }

                public MvcHtmlString ToActionLink()
                {
                    if (_highlightPredicate.Invoke(_helper))
                    {
                        // should be highlighted
                        return _helper.ActionLink(_highlightText, _actionName, _controllerName, _routeData, _highlightHtmlAttributes);
                    }
                    else
                    {
                        // should not be highlighted
                        return _helper.ActionLink(_text, _actionName, _controllerName, _routeData, _htmlAttributes);
                    }
                }
            }
        }

    So on the master page, when I need a highlighted menu item, I can "tell" the helper how it should behave, like this:

        <li>
            <%: Html.HighlightActionMenu()
                    .On("Index", "Home")
                    .With(SiteMasterStrings.Home, null, null)
                    .HighlightWhenControllerMatch()
                    .ApplyHighlighStyle(new { style = "background:url(../../Content/Images/topmenu_bg.gif) repeat-x;text-decoration:none;color:#feffff;" })
                    .ToActionLink() %>
        </li>

    While I type, IntelliSense reminds me what comes next: I want a highlighted action menu, on the Index action of the Home controller, with "Home" as its link text and no additional route data or HTML attributes; it should be highlighted whenever the controller is Home; when highlighted it should use this style; and finally, render it as a link. This is what is often called a fluent interface. If you have used Moq you will recognize the style: it is developer-friendly, self-documenting, and easy to read.

    Summary

    In this post I demonstrated how to implement a highlighted menu item in ASP.NET MVC using its controller-action infrastructure; we can see how ASP.NET MVC helps us organize our web application better. I also said a little more about the fluent interface style and showed how it can make our code better and easier to use.

    Hope this helps, Shaun

    All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article
