Search Results

Search found 59959 results on 2399 pages for 'time complexity'.


  • Any idea for a master thesis in software engineering

    - by medusa
    Hi! I have to choose a thesis topic for my master's degree, and time is limited to about six months. Do you have any ideas? Any personal thesis that was successful? After searching around for some time, I see the most popular topics are related to artificial intelligence, but I don't want something like that, because most of it would be just theory, and boring. A lot of students pick those kinds of studies because they are the most difficult. I would prefer something that doesn't necessarily involve that mathematical complexity but is instead an everyday-life topic, one that leads to concrete ideas, hypotheses, or solutions to actual problems. I hope I've explained my whole idea: I am looking for something different from what the majority of students do, something able to impress the audience... :) I would really, really appreciate any suggestions. Thank you!

    Read the article

  • How to manage security cameras in Ubuntu?

    - by Josh
    I am setting up a server of sorts and chose Ubuntu for the OS, as my dad has it on a few computers. I am unimpressed with Windows and Mac due to all the add-ons and complexity when all I want is something simple. The system will have three purposes: storing my wife's photography work (she is a professional photographer); storing music for quick access from our entertainment system (it will run through the TV in our living room, and thus through our surround sound); and serving as a DVR unit for a home security system I am going to put together. My question is: what software options are there on Ubuntu for a DVR with frame-by-frame playback? It does not need to be fancy, but of course a variety of options is a nice touch.

    Read the article

  • Puppet: Making Windows Awesome Since 2011

    - by Robz / Fervent Coder
    Originally posted on: http://geekswithblogs.net/robz/archive/2014/08/07/puppet-making-windows-awesome-since-2011.aspx
    Puppet was one of the first configuration management (CM) tools to support Windows, way back in 2011. It has the heaviest investment in Windows infrastructure of the CM tools, with a third of the platform client development staff being Windows folks. It appears that Microsoft believed an end-state configuration tool like Puppet was the way forward, so much so that they cloned Puppet's DSL (domain-specific language) in many ways and are calling it PowerShell DSC. Puppet Labs is pushing the envelope on Windows. Here are several things to note:
      • Puppet x64 Ruby support for Windows coming in v3.7.0.
      • An awesome ACL module (with ordering, SIDs and very granular control of permissions; the best of any CM).
      • A wealth of modules that work with Windows on the Forge (and more on GitHub).
      • Documentation solely for Windows folks: https://docs.puppetlabs.com/windows.
      • Some of the common learning points with Puppet on Windows are noted in this recent blog post.
      • Microsoft OpenTech supports Puppet.
      • Azure has the ability to deploy a Puppet Master (http://puppetlabs.com/solutions/microsoft).
      • At Microsoft //Build 2014, in the Day 2 keynote, Puppet Labs CEO Luke Kanies co-presented with Mark Russinovich (http://channel9.msdn.com/Events/Build/2014/KEY02, fast forward to 19:30)!
      • Puppet has a Visual Studio plugin!
    It can be overwhelming learning a new tool like Puppet at first, but Puppet Labs has some resources to help you on that path. Take a look at the Learning VM, which has a quest-based learning tool. For real-time questions, feel free to drop onto #puppet on freenode.net (yes, some folks still use IRC) with questions, and #puppet-dev with thoughts/feedback on the language itself. You can subscribe to the puppet-users / puppet-dev mailing lists. There is also ask.puppetlabs.com for questions, and Server Fault if you want to go to a Stack Exchange site. There are books written on learning Puppet. There are even Puppet User Groups (PUGs) and other community resources! Puppet does take some time to learn, but with anything you need to learn, you have to weigh the benefits against the ramp-up time. I learned NHibernate once; it had a very high ramp-up time back then, but it was the only game in town. Puppet's ramp-up time is considerably less than that. The advantage is that you are learning a DSL, and it applies to multiple platforms (Linux, Windows, OS X, etc.) with the same Puppet resource constructs. As you learn Puppet you may wonder why it has a DSL instead of just leveraging Ruby itself (or maybe this is one of those things that keeps you up wondering at night). I like the DSL over a thin layer on top of Ruby: it allows the Puppet language to be portable and go more places, and it makes you think about the end state you want to achieve in a declarative sense instead of an imperative one. You may also find that right now Puppet doesn't run manifests (scripts) in the order resources are specified. This is the number one learning point for most folks, and a long-standing consternation for some; in fact, it might be why some other CMs exist! As of 3.3.0, Puppet can do manifest ordering, and it will be the default in Puppet 4. http://puppetlabs.com/blog/introducing-manifest-ordered-resources
    You may have caught earlier that I mentioned PowerShell DSC. But what about DSC? Shouldn't that be what Windows users choose? Other CMs are integrating with DSC; will Puppet follow suit and integrate with DSC? The biggest concern I have with DSC is its lack of visibility into fine-grained reporting of changes (which Puppet has). The other is that it is a very young Microsoft product (pre-version 3, you know what they say :) ). I tried getting it working in December and ran into some issues. I'm hoping newer releases actually work; it has some promising capabilities, it just doesn't yet come up to the standard of something that should be used in production. In contrast, Puppet is almost a ten-year-old language with an active community! It's very stable, and when trusting your business to configuration management, you want something that has been around a while and has been proven. Give DSC another couple of releases and you might see more folks integrating with it. That said, there may be a future with DSC integration. Portability and fine-grained reporting of configuration changes are reasons to take a closer look at Puppet on Windows. Yes, Puppet on Windows is here to stay, and it's continually getting better, folks.

    Read the article

  • Reflections on GiveCamp

    - by Reed
    I participated in the Seattle GiveCamp over the weekend, and am entirely impressed.  GiveCamp is a great event; I especially like how rewarding it is for everybody involved.  I strongly encourage any and all developers to watch for future GiveCamp events, and consider participating, for many reasons…

    GiveCamp provides real value to organizations that truly need help.  The Seattle event alone succeeded in helping sixteen non-profit organizations in many different ways.  The projects involved varied dramatically, including website redesigns, SEO, reworking data management workflows, and even game development.  Many non-profits have a strong need for good, quality technical help.  However, nearly every non-profit organization has an incredibly limited budget.  GiveCamp is a way to really give back, and provide incredibly valuable help to organizations that truly benefit.  My experience has shown many developers to be incredibly generous; this is a chance to dedicate your energy to helping others in a way that really takes advantage of your expertise.  Your time as a developer is incredibly valuable, and this puts something of incredible value directly into the hands of the places where it's needed.  First and foremost, GiveCamp is about providing technical help to non-profit organizations in need.

    GiveCamp can make you a better developer.  This is a fantastic opportunity for us, as developers, to work with new people, in a new setting.  The incredibly short time frame (one weekend for a deliverable project) and intense motivation to succeed provide a huge opportunity for learning from peers.  I'd personally like to thank all of the developers with whom I worked; I learned something from each and every one of you.  I hope to see and work with all of you again someday.

    GiveCamp provides an opportunity for you to work outside of your comfort zone.  While it's always nice to be an expert, it's also valuable to work on a project where you have little or no direct experience.  My team focused on a complete reworking of our organization's message and a new website redesign and deployment using WordPress.  While I'd used WordPress for my blog and had some experience with it, it is completely unrelated to my professional work.  In fact, nobody on our team normally worked directly with the technologies involved, yet together we managed to succeed in delivering our goals.  As developers, it's easy to want to stay abreast of new technology surrounding our expertise, but it's rare that we get a chance to sit down and work on something practical that is completely outside of our normal realm of work.  I'm a desktop developer by trade, and spent much of the weekend working with CSS and Photoshop.  Many of the projects organizations need don't match perfectly with the skill set in the room, yet all of the software professionals rose to the occasion and delivered practical, usable applications.

    GiveCamp is a short-term, known commitment.  While this seems obvious, I think it's an important aspect to remember.  This is a huge part of what makes it successful: you can work, completely focused, on a project, then walk away completely when you're done.  There is no expectation of continued involvement.  While many of the professionals I've talked to are willing to contribute some amount of their time beyond the camp, this is not expected.  The freedom this provides is immense.  In addition, the motivation this brings is incredibly valuable.  Every developer in the room was very focused on delivering in time; you have one shot to get it as good as possible, and leave it with the organization in a way that can be maintained by them.  This is a rare experience, and excellent practice at time management for everyone involved.

    GiveCamp provides a great way to meet and network with your peers.  Not only do you get to network with other software professionals in your area, you get to network with amazing people.  Every single person in the room is there to try to help people.  The balance of altruism, intelligence, and expertise in the room is something I've never before experienced.  During the presentations of what was accomplished, I felt blessed to participate.  I know many people in the room were incredibly touched by the level of dedication and accomplishment over the weekend.

    GiveCamp is fun.  At the end of the experience, I would have signed up again even if it had been a painful, tedious weekend, merely due to the amazing accomplishments achieved throughout the event.  However, the event is fun.  Everybody I talked to, the entire weekend, was having a good time.  While there were many faces focused into a near grimace at times (including mine, I'll admit), this was always in response to a particularly challenging problem or task.  The challenges just added to the overall enjoyment of the weekend; part of why I became a developer in the first place is my love for challenges and puzzles, and a short deadline using unfamiliar technology provided plenty of opportunity for puzzles.  As soon as people would stand up, it was another smile.

    If you're a developer, I'd recommend looking at GiveCamp more closely.  Watch for an event in your area.  If there isn't one, consider building a team and organizing an event.  The experience is worth the commitment.

    Read the article

  • Recursion VS memory allocation

    - by Vladimir Kishlaly
    Which approach is more popular in real-world code: recursion or iteration? For example, here is a simple tree preorder traversal with recursion:

        void preorderTraversal( Node root ) {
            if ( root == null ) return;
            root.printValue();
            preorderTraversal( root.getLeft() );
            preorderTraversal( root.getRight() );
        }

    and with iteration (using a stack):

        Push the root node on the stack
        While the stack is not empty
            Pop a node
            Print its value
            If the node's right child exists, push it
            If the node's left child exists, push it

    In the first example we have recursive method calls; in the second, a new ancillary data structure. Time complexity is similar in both cases, O(n). So the main question is: what is the memory footprint requirement of each?
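
    For reference, here is a minimal runnable sketch of the iterative version in Java. The Node class below is a stand-in for the one the question assumes (it only needs getLeft/getRight/printValue):

        import java.util.ArrayDeque;
        import java.util.Deque;

        class Node {
            private final int value;
            private Node left, right;

            Node(int value) { this.value = value; }
            Node getLeft()  { return left; }
            Node getRight() { return right; }
            void setChildren(Node l, Node r) { left = l; right = r; }
            void printValue() { System.out.println(value); }
        }

        public class PreorderIterative {
            // Iterative preorder traversal: an explicit heap-allocated stack
            // replaces the implicit call stack of the recursive version.
            static void preorderTraversal(Node root) {
                if (root == null) return;
                Deque<Node> stack = new ArrayDeque<>();
                stack.push(root);
                while (!stack.isEmpty()) {
                    Node node = stack.pop();
                    node.printValue();
                    // Push right before left so the left subtree is visited first (LIFO).
                    if (node.getRight() != null) stack.push(node.getRight());
                    if (node.getLeft() != null) stack.push(node.getLeft());
                }
            }

            public static void main(String[] args) {
                Node root = new Node(1);
                Node left = new Node(2), right = new Node(3);
                root.setChildren(left, right);
                left.setChildren(new Node(4), new Node(5));
                preorderTraversal(root); // prints 1 2 4 5 3
            }
        }

    On the memory question: both versions need space proportional to the height of the tree, but the recursive one consumes it on the thread's call stack (which is small and can overflow on deep, unbalanced trees), while the iterative one allocates its stack on the heap.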

    Read the article

  • Identity R2 - Experts Podcast Series

    - by Tanu Sood
    To follow up on the Identity Management R2 launch, a series of podcasts was recorded with subject matter experts from customer organizations, our partners and Oracle's PM team to discuss key trends, R2 capabilities, implementation best practices and more. Below is a roll-up of the podcast series, available on Fusion Middleware radio.

    R2 Podcasts:

      • Designing the Next-Generation Identity Platform (Vadim Lander, Oracle). Highlights: common architecture model, integration, interoperability and the driving factors behind R2 innovation. IT departments are shifting their Identity Management strategy to be able to support mobile, cloud and social applications. Oracle has anticipated this shift and has built a product roadmap to take advantage of this focus. Join Vadim as he discusses the design strategy behind the latest 11gR2 release and talks about how IDM services have to evolve to meet this new challenge.

      • BETA Customer Perspective on R2 (Ravi Meduri, Kaiser Permanente). Highlights: R2 scalability and high availability. In this podcast Ravi discusses the new features in 11gR2 that he is most interested in, including High Availability options for Access Management, multi-datacenter architecture, and what it was like working with the Oracle product team during the BETA program.

      • Partner Perspective on R2 (Rex Thexton, PricewaterhouseCoopers). Highlights: usability enhancements for users and administrators. A lot of new usability features went into the 11gR2 release, making this the most business-friendly IDM release to date. In this podcast Rex Thexton, Managing Director at PwC, talks about some of the new UI changes for both end users and administrators, and also about the new connector creation framework.

      • Access Request Updates in R2 (Marc Boroditsky, Oracle). Highlights: access request user interface innovations. A lot of changes have been made to the access request user interface in the latest version of Oracle Identity Manager 11gR2. A real focus has been put on making the request process more business-user friendly, and a lot of new customization capability has been added for IT administrators. Hear Marc discuss the updated UI and explain how administrators will be able to customize OIM to meet their company's requirements.

      • Oracle Optimized System for Oracle Unified Directory (OOS4OUD) (Nick Kloski, Oracle). Highlights: new Optimized System configuration for Unified Directory. One of the new features in 11gR2 is the availability of an Optimized System configuration for Oracle Unified Directory. Oracle engineers installed the OUD software onto off-the-shelf hardware and then created a performance-tuned configuration. Join us as we talk to Nick Kloski, Infrastructure Solutions Manager, all about the testing process and the resulting performance metrics.

      • Privileged Account Management (Mark Wilcox, Oracle). Highlights: Oracle Privileged Account Manager key capabilities and use cases. The new release of Oracle Identity Management 11gR2 includes the capability to manage privileged accounts. Privileged accounts, if compromised, create a risk of fraud in the enterprise, so controlling access to privileged accounts is critical. Hear what Mark Wilcox, Principal Product Manager of Oracle Privileged Account Manager, has to say about the capabilities of the offering in this podcast.

      • Browser-based User Interface (UI) Customization (Clayton Donley, Oracle). Highlights: benefits of the Durable UI Configuration framework. Business users need user interfaces that are not only friendly but also easily customizable. However, the downside of any customization project is the cost and complexity involved in developing, testing, deploying and managing custom code. In this podcast, we examine how a new capability in Oracle Identity Management around browser-based UI customization can reduce the cost and complexity of customization while simplifying self-service integration with corporate portal strategies.

      • Simplifying Mobile and Social Sign-On (Dan Killmer, Oracle). Highlights: secure mobile sign-on and consumption of social identities with Oracle Access Management. The proliferation of mobile devices has spurred a new trend where employees bring their own mobile devices to work and access corporate applications the same way they would from a desktop or laptop. In this podcast, we examine how Oracle's latest innovation in Identity Management around mobile and social sign-on can simplify the security and access management challenges posed by the widespread adoption of mobile devices in the enterprise.

      • Enabling Your Business with IDM R2 (Scott Bonnell, Oracle). Highlights: self service, mobile access, personalization. Gone are the days when Identity Management was just about stopping unauthorized users in their tracks. Identity Management, if done right, can also enable your business. Join Scott Bonnell as he discusses how the IDM 11gR2 release enables the enterprise by providing self service, personalization and mobile access to corporate resources.

    Read the article

  • Is it worth learning either GWT or Vaadin?

    - by NimChimpsky
    I consider myself a decent Java/web developer. In my career I have always used servlets and EJBs with a web front end, most recently incorporating jQuery and Ajax. I can see the theoretical benefit of using GWT or Vaadin: it is my understanding they convert Java code to the required JavaScript/HTML, so the developer gets the benefit of cross-browser compatibility and compile-time error checking (of web GUI elements). My question is threefold: 1) Are there any other benefits I am missing that would be gained by using Vaadin or GWT? 2) I am actually quite confident and productive using HTML and JavaScript, so will I actually see any benefit, or will it just make my knowledge of these areas redundant (as they are handled by GWT/Vaadin)? 3) Will the end result be that I can create enterprise-scale, data-driven websites in a reasonably short time? I can already do this, however, without having spent any time learning GWT/Vaadin.
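
    For a sense of what the GWT side of this looks like, here is a minimal sketch of a GWT entry point. The module and class names are illustrative, but the EntryPoint, Button and RootPanel types are GWT's standard widget API; GWT compiles this plain Java, click handler included, to cross-browser JavaScript:

        import com.google.gwt.core.client.EntryPoint;
        import com.google.gwt.event.dom.client.ClickEvent;
        import com.google.gwt.event.dom.client.ClickHandler;
        import com.google.gwt.user.client.Window;
        import com.google.gwt.user.client.ui.Button;
        import com.google.gwt.user.client.ui.RootPanel;

        // A hypothetical GWT module entry point: the onModuleLoad method is
        // the JavaScript "main" that runs when the compiled module loads.
        public class HelloModule implements EntryPoint {
            @Override
            public void onModuleLoad() {
                Button button = new Button("Click me");
                // The handler is written, and type-checked, in Java, not JavaScript.
                button.addClickHandler(new ClickHandler() {
                    @Override
                    public void onClick(ClickEvent event) {
                        Window.alert("Hello from compiled Java!");
                    }
                });
                RootPanel.get().add(button);
            }
        }

    The trade-off the question is really about: the compiler type-checks the UI code and papers over browser differences, but you give up direct control of the generated HTML and JavaScript.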

    Read the article

  • NDepend 4 – First Steps

    - by Ricardo Peres
    Introduction: Thanks to Patrick Smacchia I had the chance to test NDepend 4. I can only say: awesome! This will be the first of a series of posts on NDepend, where I will talk about my discoveries. Keep in mind that I am just starting to use it, so more experienced users may find these posts too basic; I just hope I don't say anything foolish! I must say that I am in no way affiliated with NDepend and I never actually met Patrick.

    Installation: No installation program (a curious decision; I'm not against it): just unzip the files to a folder and run the executable. It will optionally register itself with Visual Studio 2008, 2010 and 11, as well as Red Gate's Reflector; it also automatically looks for updates. NDepend can be used either as a stand-alone program (with or without a GUI) or from within Visual Studio or Reflector.

    Getting Started: One thing that really pleases me is the Getting Started section of the stand-alone program, with links to pages on NDepend's web site featuring detailed explanations, which usually include screenshots and small videos (under 5 minutes). There's also a How do I section with hierarchical navigation that guides us through the major features so that we can easily find what we want.

    Usage: There are two basic ways to use NDepend: analyze .NET solutions, projects or assemblies; or compare two versions of the same assembly. I have so far not used NDepend to compare assemblies, so I will first talk about the first option. After selecting a solution and some of its projects, NDepend generates a single HTML page with a highly detailed report of the analysis it produced. This includes metrics such as the number of lines of code, IL instructions, comments, types, methods and properties, the calculation of cyclomatic complexity, coupling and lots of other indicators, typically grouped by type, namespace and assembly. The HTML also includes some nice diagrams depicting assembly dependencies, type and method relative proportions (according to the number of IL instructions, I guess) and assembly analysis relating to abstractness and stability. Useful, I would say. Then there are the rules: NDepend tests the target assemblies against a set of more than 120 rules, grouped into the categories Code Quality, Object Oriented Design, Design, Architecture and Layering, Dead Code, Visibility, Naming Conventions, Source Files Organization and .NET Framework Usage. The full list can be configured in the application, and an explanation of each rule can be found on the web site. Rules can pass, be violated, or be violated in a critical manner, and the HTML will contain the violated rules, their queries (more on this later) and the results. The HTML uses some nice JavaScript effects, which allow paging and sorting of tables, so it's nice to use. Similar to the rules, there are queries that display results for a number of questions (about 200), grouped as Object Oriented Design, API Breaking Changes (for assembly version comparison), Code Diff Summary (also for version comparison) and Dead Code. The difference between queries and rules is that queries are not classified as passed, violated or critically violated; they just present results. The queries and rules are expressed through CQLinq, which is a very powerful LINQ derivative specific to code analysis. All of the included rules and queries can be enabled or disabled, and new ones can be added, with IntelliSense to help.
    Besides the HTML report file, the NDepend application can be used to explore all analysis results, compare different versions of analysis reports and run custom queries.

    Comparison to Other Analysis Tools: Unlike StyleCop, NDepend only works with assemblies, not source code, so you can't expect it to enforce bracket placement, for example. It is more similar to FxCop, but you don't have the option to analyze at the IL level; that is, it goes no deeper than the number of IL instructions and the complexity.

    What's Next: In the coming days I'll continue my exploration with a real-life test case.

    References: The NDepend web site is http://www.ndepend.com/. Patrick keeps an updated blog at http://codebetter.com/patricksmacchia/ and he regularly monitors Stack Overflow for questions tagged NDepend, which you can find at http://stackoverflow.com/questions/tagged/ndepend. The default list of CQLinq rules, queries and statistics can be found at http://www.ndepend.com/DefaultRules/webframe.html. The syntax itself is described at http://www.ndepend.com/Doc_CQLinq_Syntax.aspx and its features at http://www.ndepend.com/Doc_CQLinq_Features.aspx.

    Read the article

  • SOA Suite HealthCare Integration Architecture

    - by Nitesh Jain
    Oracle SOA Suite for healthcare integration is an integrated, best-of-breed suite that helps healthcare organizations rapidly design and assemble, deploy and manage highly agile and adaptable business applications. It helps the healthcare industry reduce operating costs and speeds time-to-market by delivering a consistent user interface, management console and monitoring environment, as well as healthcare libraries and templates for healthcare customer projects. Oracle SOA Suite for healthcare integration is fully configurable and extensible, providing a highly flexible platform for collaboration across all healthcare domains. Healthcare message standards supported:
      • Messaging standards: HL7, HIPAA, X12N, custom
      • Exchange standards: MLLP (v1.0, v2.0), TCP/IP, File, FTP, SFTP, JMS
    Simplified dashboards and customized reports give users advanced monitoring capabilities that support end-to-end healthcare message tracking. A toolkit for rapid HIPAA 5010 upgrade and compliance provides pre-defined healthcare integration mappings for HIPAA standards, fully customizable and extensible. MLLP-HA provides easy failover and disaster recovery, which keeps the system running long-term without issues. Auditing keeps track of all system changes, and alerts and notifications (SMS, email, etc.) help users act quickly, with tracking in real time.

    Read the article

  • LiveCD does not work on my desktop

    - by Boris
    I've installed Oneiric on my laptop without any issue using the LiveCD downloaded here (from the French Ubuntu community server). But on my desktop, weird things happen: during the first boot attempt with the LiveCD, my two-year-old child hit the keyboard, and after several error messages the desktop loaded and I was able to test Oneiric. But I wanted to redo the boot before installing Oneiric, to avoid mistakes. So on the second boot attempt with the LiveCD, I couldn't get to the point where I can choose to test or install. Before the third attempt, I "cleaned the system" from System Parameter System, but after that I'm still not able to get to the test-or-install screen. Now it stops on a black screen every time. I do not understand why several boot attempts with the same CD have different results. So I wonder: can the state of my current 11.04 installation affect re-booting with my 11.10 CD?

    Read the article

  • A Custom View Engine with Dynamic View Location

    - by imran_ku07
    Introduction: One of the nice features of the ASP.NET MVC framework is its pluggability. This means you can completely replace the default view engine(s) with a custom one. One reason for using a custom view engine is to change the default views location, and sometimes you need to change the views location at run-time. To do this, you can extend the default view engine(s) and then change the default views-location variables at run-time. But you cannot directly change the default views-location variables at run-time, because they are static and shared among all requests. In this article, I will show you how you can dynamically change the views location without changing the default views-location variables at run-time.

    Description: Let's say you need to synchronize the views location with the controller name and controller namespace. So, instead of searching the default views location (Views/ControllerName/ViewName) to locate views, these custom view engines will search in the Views/ControllerNamespace/ControllerName/ViewName folder. First of all, create a sample ASP.NET MVC 3 application and then add these custom view engines to your application:

        public class MyRazorViewEngine : RazorViewEngine
        {
            public MyRazorViewEngine() : base()
            {
                AreaViewLocationFormats = new[] {
                    "~/Areas/{2}/Views/%1/{1}/{0}.cshtml", "~/Areas/{2}/Views/%1/{1}/{0}.vbhtml",
                    "~/Areas/{2}/Views/%1/Shared/{0}.cshtml", "~/Areas/{2}/Views/%1/Shared/{0}.vbhtml"
                };
                AreaMasterLocationFormats = new[] {
                    "~/Areas/{2}/Views/%1/{1}/{0}.cshtml", "~/Areas/{2}/Views/%1/{1}/{0}.vbhtml",
                    "~/Areas/{2}/Views/%1/Shared/{0}.cshtml", "~/Areas/{2}/Views/%1/Shared/{0}.vbhtml"
                };
                AreaPartialViewLocationFormats = new[] {
                    "~/Areas/{2}/Views/%1/{1}/{0}.cshtml", "~/Areas/{2}/Views/%1/{1}/{0}.vbhtml",
                    "~/Areas/{2}/Views/%1/Shared/{0}.cshtml", "~/Areas/{2}/Views/%1/Shared/{0}.vbhtml"
                };
                ViewLocationFormats = new[] {
                    "~/Views/%1/{1}/{0}.cshtml", "~/Views/%1/{1}/{0}.vbhtml",
                    "~/Views/%1/Shared/{0}.cshtml", "~/Views/%1/Shared/{0}.vbhtml"
                };
                MasterLocationFormats = new[] {
                    "~/Views/%1/{1}/{0}.cshtml", "~/Views/%1/{1}/{0}.vbhtml",
                    "~/Views/%1/Shared/{0}.cshtml", "~/Views/%1/Shared/{0}.vbhtml"
                };
                PartialViewLocationFormats = new[] {
                    "~/Views/%1/{1}/{0}.cshtml", "~/Views/%1/{1}/{0}.vbhtml",
                    "~/Views/%1/Shared/{0}.cshtml", "~/Views/%1/Shared/{0}.vbhtml"
                };
            }

            protected override IView CreatePartialView(ControllerContext controllerContext, string partialPath)
            {
                var nameSpace = controllerContext.Controller.GetType().Namespace;
                return base.CreatePartialView(controllerContext, partialPath.Replace("%1", nameSpace));
            }

            protected override IView CreateView(ControllerContext controllerContext, string viewPath, string masterPath)
            {
                var nameSpace = controllerContext.Controller.GetType().Namespace;
                return base.CreateView(controllerContext, viewPath.Replace("%1", nameSpace), masterPath.Replace("%1", nameSpace));
            }

            protected override bool FileExists(ControllerContext controllerContext, string virtualPath)
            {
                var nameSpace = controllerContext.Controller.GetType().Namespace;
                return base.FileExists(controllerContext, virtualPath.Replace("%1", nameSpace));
            }
        }

        public class MyWebFormViewEngine : WebFormViewEngine
        {
            public MyWebFormViewEngine() : base()
            {
                MasterLocationFormats = new[] {
                    "~/Views/%1/{1}/{0}.master", "~/Views/%1/Shared/{0}.master"
                };
                AreaMasterLocationFormats = new[] {
                    "~/Areas/{2}/Views/%1/{1}/{0}.master", "~/Areas/{2}/Views/%1/Shared/{0}.master"
                };
                ViewLocationFormats = new[] {
                    "~/Views/%1/{1}/{0}.aspx", "~/Views/%1/{1}/{0}.ascx",
                    "~/Views/%1/Shared/{0}.aspx", "~/Views/%1/Shared/{0}.ascx"
                };
                AreaViewLocationFormats = new[] {
                    "~/Areas/{2}/Views/%1/{1}/{0}.aspx", "~/Areas/{2}/Views/%1/{1}/{0}.ascx",
                    "~/Areas/{2}/Views/%1/Shared/{0}.aspx", "~/Areas/{2}/Views/%1/Shared/{0}.ascx"
                };
                PartialViewLocationFormats = ViewLocationFormats;
                AreaPartialViewLocationFormats = AreaViewLocationFormats;
            }

            protected override IView CreatePartialView(ControllerContext controllerContext, string partialPath)
            {
                var nameSpace = controllerContext.Controller.GetType().Namespace;
                return base.CreatePartialView(controllerContext, partialPath.Replace("%1", nameSpace));
            }

            protected override IView CreateView(ControllerContext controllerContext, string viewPath, string masterPath)
            {
                var nameSpace = controllerContext.Controller.GetType().Namespace;
                return base.CreateView(controllerContext, viewPath.Replace("%1", nameSpace), masterPath.Replace("%1", nameSpace));
            }

            protected override bool FileExists(ControllerContext controllerContext, string virtualPath)
            {
                var nameSpace = controllerContext.Controller.GetType().Namespace;
                return base.FileExists(controllerContext, virtualPath.Replace("%1", nameSpace));
            }
        }

    Here, I am extending the RazorViewEngine and WebFormViewEngine classes and appending /%1 to each views-location variable so that we can replace %1 at run-time. I am also overriding the FileExists, CreateView and CreatePartialView methods; in each implementation, I replace %1 with the controller namespace. Now, just register these view engines in the Application_Start method in the Global.asax.cs file:

        protected void Application_Start()
        {
            ViewEngines.Engines.Clear();
            ViewEngines.Engines.Add(new MyRazorViewEngine());
            ViewEngines.Engines.Add(new MyWebFormViewEngine());
            ................................................
        }

    Now just create a controller, put its view inside the Views/ControllerNamespace/ControllerName folder and run the application. You will find that everything works just fine.

    Summary: ASP.NET MVC uses convention over configuration to locate views. For many applications this convention is acceptable, but sometimes you may need to locate views at run-time. In this article, I showed you how you can dynamically locate your views by using a custom view engine. I am also attaching a sample application. Hopefully you will enjoy this article too.

    Read the article

  • Application to display battery info

    - by Nik
    First of all, I am not sure the title of this question is the most appropriate, but here is what I mean. There are many ways to extend the life of a laptop battery. One way is not connecting it to the AC adapter all the time, which will overcharge it; I read that on this website. Is there an application which automatically prevents the charging of the battery once it has reached 80% charge? That would be such a cool feature. People sometimes forget to remove the AC adapter, and this can diminish the capacity and reduce the lifetime of the battery. Also, does the battery indicator in Ubuntu display info or a pop-up when the battery is almost dead (dead not in the sense of remaining usage time, but rather a degraded battery)?

    Read the article

  • Outdoor Programming Jobs...

    - by Rodrick Chapman
    Are there any kinds of jobs that require programming (or at least programming competency) but take place outdoors for a significant portion of the time? As long as I'm fantasizing, an ideal job would involve programming in a high-level language like Haskell, F#, or Scala* for, say, 50% of the time, and doing something like digging an irrigation trench the rest of the time. My background: I triple-majored in mathematics, philosophy, and history (BS/BA) and have been working as a web developer for the past six years. I love hacking but I'm feeling a bit burned out. *I only chose these languages as examples since, ideally, I'd want to work among high-caliber people... but it really doesn't matter.

    Read the article

  • IBM Keynote: (hardware,software)–>{IBM.java.patterns}

    - by Janice J. Heiss
    On Sunday evening, September 30, 2012, Jason McGee, IBM Distinguished Engineer and Chief Architect, Cloud Computing, along with John Duimovich, IBM Distinguished Engineer and Java CTO, gave an information- and idea-rich keynote that left Java developers with much to ponder. Their focus was on the challenges of making Java more efficient and productive given the hardware and software environments of 2012. "One idea that is very interesting is the idea of multi-tenancy," said McGee, "and how we can move up the spectrum. In traditional systems, we ran applications on dedicated middleware, operating systems and hardware. A lot of customers still run that way. Now people introduce hardware virtualization and share the hardware. That is good, but there is a lot more we can do. We can share middleware and the application itself." McGee challenged developers to better enable the Java language to function in these higher-density models. He spoke about the need to describe patterns that help us grasp the full environment an application needs, whether it's a web application or a full enterprise application. Developers need to understand the resources an application interacts with in a way that is simple and straightforward. The task is then to automate that deployment so that the complexity of the infrastructure can be bypassed and developers can live in a simpler world where the cloud automatically configures the needed environment. McGee argued that the key, something IBM has been working on, is to use a simpler pattern that allows a cloud-based architecture to embrace the entire infrastructure required for an application and make it highly available, scalable and able to recover from failure, with the cloud automating the complexity of setting up and managing the infrastructure. IBM has been trying to realize this vision so customers can describe their Java application environment simply and allow the cloud to automate the deployment and management of applications. "The point," explained McGee, "is to package the executable used to describe applications, to drop it into a shared system and let that system provide some intelligence about how to deploy and manage those applications."

    John Duimovich on Improvements in Java: McGee then brought onstage IBM's Distinguished Engineer and CTO for Java, John Duimovich, who showed the audience ways to deploy Java applications more efficiently. Duimovich explained: "When you run lots of copies of Java in the cloud or any hypervisor-virtualized system, there is a lot of duplication of code and jar files. IBM has a facility called 'shared classes' where we put shared code and read-only artifacts in a cache that is sharable across hypervisors." By putting JIT-compiled code into the cache ahead of time, he explained, the application server will use 20% less memory and operate 30% faster. He described another example of how the JVM allows for the maximum amount of sharing, managing tenants, file sockets and memory use through throttling and control. Duimovich touched on the "thin is in" model and IBM's Liberty Profile, a lightweight runtime for the cloud that allows for greater efficiency in interacting with the cloud. Duimovich also discussed the confusion Java developers experience when, for example, the hypervisor tells them that they have 8, then 4, then 16 cores. "Because hypervisors are virtualized, they can change based on resource needs across the hypervisor layer. You may have 10 instances of an operating system and you may need to reallocate memory," explained Duimovich.
    He showed how to resize LPARs, reallocate CPUs and migrate applications as needed, and explained how application servers can resize thread pools and make better use of resources based on information from the hypervisors.

    Java Challenges in Hardware and Software: McGee ended the keynote with a summary of upcoming hardware and software challenges for the Java platform. He noted that one reason developers love Java is that it allows them to ignore differences in hardware. He stated that the most important things happening in hardware were in network and storage: developments such as the speed of SSDs, the exploitation of high-speed, low-latency networking, and recent arrivals such as storage-class memory and non-volatile main memory. "So we are challenged to maintain the benefits of Java and the abstraction it provides from hardware while still exploiting the new innovations in hardware," said McGee. McGee discussed transactional messaging applications, where developers must transactionally persist each message to storage, something traditionally done by backing messages onto spinning disks, an approach that is now mostly outdated. "Now," he pointed out, "we would use SSD and store it in Flash and get 70,000 messages a second. If we stored it using a PCI Express-based flash memory device, it is still Flash, but put on a PCI Express bus on a card closer to the CPU. This way I get 300,000 messages a second and a 25% improvement in latency." McGee's central point was that hardware has a huge impact on the performance and scalability of applications. New technologies are enabling developers to build classes of Java applications previously unheard of. "We need to be able to balance these things in Java; we need to maintain the abstraction but also be able to exploit the evolution of hardware technology," said McGee. According to McGee, IBM's current focus is on systems wherein hardware and software are shipped together in what are called Expert Integrated Systems: systems that are pre-optimized and pre-integrated. McGee closed IBM's engaging and thought-provoking keynote by pointing out that the use of Java in complex applications is increasingly being augmented by a host of other languages with strong communities around them: JavaScript, JRuby, Scala, Python and so forth. Java developers now must understand the strengths and weaknesses of such newcomers, as applications increasingly involve a complex interconnection of languages.
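
    For the curious, the "shared classes" cache Duimovich described is enabled on IBM's J9 JVM from the command line, roughly as below. This is a sketch based on IBM's documented -Xshareclasses option; the exact sub-options vary by J9 version, so treat the invocations as illustrative:

        # Two JVMs attach to the same named, shared class cache (IBM J9).
        # -Xshareclasses selects or creates the cache; -Xscmx caps its size.
        java -Xshareclasses:name=appCache -Xscmx64m -jar app-instance-1.jar
        java -Xshareclasses:name=appCache -Xscmx64m -jar app-instance-2.jar

        # Inspect the caches present on the machine:
        java -Xshareclasses:listAllCaches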

    Read the article

  • Keep it Professional – Multiple Environments

    - by AjarnMark
    I have certainly been reading blogs a whole lot more than writing them the last several weeks, and it's about time I got back to writing.  I have been collecting several topics and references for blog posts, some of which will probably just never get written as the timeliness of the topics fades.  Nonetheless, I'm back, and I think it is time to revive my Doing Business Right series, this time coming from the slant of managing a development team rather than the previous angle of being self-employed.  First up: separating Dev, Test, and Prod.

    A few months ago, Colin Stasiuk (@BenchmarkIT) wrote a great post about separating your Dev, Test/UAT, and Prod environments.  This post covers all the important points, such as removing developer access from both PROD and UAT, and the importance of proper deployment (a.k.a. promotion) procedures.  I won't repeat it all here; go read the original!  But what I do want to address is what I believe to be the #1 excuse people use for not having separate environments: money.  I discussed this briefly in my comment on Colin's post at the time, but let me repeat it here and expand on it a bit.

    Don't let the size of your company or the size of its budget dictate whether you do things professionally or not.  I am convinced that most developers and development teams would agree that it is a best practice to have separate environments for development, testing, and production (a.k.a. Live).  So why don't they?  Because they think that it means separate servers, which means more money.  While having separate physical servers for the different environments would be ideal, it is not an absolute requirement in order to make this work.  Here are a few ideas:
      • Use multiple instances of SQL Server and multiple web sites with host headers or ports.  For no additional fees* you can install multiple instances of SQL Server on the same machine.  This gives you a nice separation, allowing you to even use the same database names as will appear in PROD, yet isolating the data and security access.  And in IIS, you can create multiple web sites on the same server just by using host headers or different port numbers to separate them.  This approach does still pose the risk of non-Prod environments impacting performance on Prod, but when your application is busy enough for that to be a concern, you can probably afford one of the other options.
      • Use desktop PCs instead of servers.  Instead of investing in full server-grade hardware, you can mimic the separate environments on old desktop PCs and at least get functional equivalency, if not matching performance.  The last time I checked, Microsoft did not require separate licensing for SQL Server if the installation was used exclusively for dev or test purposes*.  There may be some version or performance differences between this approach and what you have in Prod, but you have isolated test from impacting Prod resources this way.
      • Virtualization.  This is of course one of the hot topics of the day, and I would be remiss if I did not suggest it.  It is quite easy these days to set up virtual machines so that, again, your environments are fairly isolated from one another, and you retain all the security and procedural benefits of having separate environments.
    So the point is, keep your high professional standards intact.  You don't need to compromise on proper procedure just because you work in a small company with a small budget.  Keep doing things the right way!

    By the way, where I work, our DEV environment is not on a server.  All development is done on the developer's individual workstation, where it can be isolated from other developers' work for the duration of writing the code, but also where the developers have to reconcile (merge) differences in code under concurrent development.  This usually means that each change is executed multiple times (once per developer, to update their environment with the latest changes from others), giving us an extra, informal test deployment before even going to the Test/UAT server.  It also means that if the network goes down, the developers can continue to hum along because they are not dependent on networked resources.  In fact, they will likely be even more productive because they aren't being interrupted by email... but that's another post I need to write.

    * I am not a lawyer, nor a licensing specialist, but it appeared to be so the last time I checked.  When in doubt, consult an expert on the topic.

    Read the article

  • Making body (box2d) a sprite (andengine) in Android

    - by Kadir
    I can't make a body (Box2D) for a sprite (AndEngine) and at the same time apply a MoveModifier to that sprite. If I just make bodies, it works: the sprites can collide. If I just apply MoveModifiers to the sprites, they move where I want. But I want both at the same time: bodies (so they can collide) and MoveModifiers (so they move where I want). How can I do it? This is my code; the MoveModifier runs, but the sprite does not behave as a body at the same time:

        circles[i] = new Sprite(startX, startY, textRegCircle[i]);
        body[i] = PhysicsFactory.createCircleBody(physicsWorld, circles[i], BodyType.DynamicBody, FIXTURE_DEF);
        physicsWorld.registerPhysicsConnector(new PhysicsConnector(circles[i], body[i], true, true));
        circles[i].registerEntityModifier(
            (IEntityModifier) new SequenceEntityModifier(
                new MoveModifier(10.0f, circles[i].getX(), circles[i].getX(),
                                 circles[i].getY(), CAMERA_HEIGHT + 64.0f)));
        scene.getLastChild().attachChild(circles[i]);
        scene.registerUpdateHandler(physicsWorld);
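
    A common way out of this conflict, sketched here under the assumption that the AndEngine Box2D extension is in use: the PhysicsConnector overwrites the sprite's position from the body on every tick, so a MoveModifier on the sprite fights the physics engine. Instead, drive the body itself toward the MoveModifier's old target and let the connector move the sprite. The speed value and the use of circles[0]/body[0] below are illustrative only:

        // Sketch: move the body (not the sprite) toward a target point each update.
        // Vector2 is com.badlogic.gdx.math.Vector2, bundled with the extension.
        final float pixelToMeter = 32.0f; // AndEngine's default PIXEL_TO_METER_RATIO_DEFAULT
        final float targetY = CAMERA_HEIGHT + 64.0f; // the MoveModifier's old destination
        scene.registerUpdateHandler(new IUpdateHandler() {
            @Override
            public void onUpdate(final float pSecondsElapsed) {
                if (circles[0].getY() < targetY) {
                    // Box2D works in meters, so convert the desired pixel speed.
                    final float speed = 60.0f / pixelToMeter; // hypothetical 60 px/s
                    body[0].setLinearVelocity(new Vector2(0, speed)); // y grows downward on screen
                } else {
                    body[0].setLinearVelocity(new Vector2(0, 0)); // arrived: stop
                }
            }

            @Override
            public void reset() { }
        });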

    Read the article

  • Oracle announces Brand New Tuxedo 11g Release

    - by ruma.sanyal
    Today Oracle introduced two brand-new products within the Tuxedo product line of its application grid portfolio. Oracle Tuxedo Application Runtime for CICS and Batch and the Oracle Application Rehosting Workbench provide the ability to automate rehosting of mainframe online and batch applications to open systems running under Oracle Tuxedo. Oracle Application Rehosting Workbench automates adaptation of COBOL programs, JCL conversion for batch applications, and migration of VSAM files and DB2 data schemas. Migration cost, risk, and project length and complexity are dramatically reduced, with over 90% of application assets re-hosted on open systems "as-is". Impact on the organization is minimized: users are protected from change by support for 3270 green screens, and developers continue to use familiar CICS APIs, batch functions, and common utilities. Other major features of this release are as follows:
      • Hot-pluggability through introduction of the Oracle Tuxedo JCA Adapter
      • Metadata-driven application development using the SCA programming model
      • Support for the Python and Ruby languages to develop business services
      • Improved scalability and availability, TSAM enhancements
    Register for a live webinar with Oracle Fusion Middleware Senior VP Hasan Rizvi. Read the press release. Find more details on these exciting new products.

    Read the article

  • Project Euler 52: Ruby

    - by Ben Griswold
    In my attempt to learn Ruby out in the open, here's my solution for Project Euler Problem 52.  Compared to Problem 51, this problem was a snap. Brute force and pretty quick… As always, any feedback is welcome.

        # Euler 52
        # http://projecteuler.net/index.php?section=problems&id=52
        #
        # It can be seen that the number, 125874, and its double,
        # 251748, contain exactly the same digits, but in a
        # different order.
        #
        # Find the smallest positive integer, x, such that 2x, 3x,
        # 4x, 5x, and 6x, contain the same digits.

        timer_start = Time.now

        # Compare sorted digit strings; sorting without uniq preserves digit
        # counts, so numbers like 1223 and 2233 are not confused.
        def contains_same_digits?(n)
          value = (n*2).to_s.split(//).sort.join
          3.upto(6) do |i|
            return false if (n*i).to_s.split(//).sort.join != value
          end
          true
        end

        i = 100_000
        answer = 0
        while answer == 0
          answer = i if contains_same_digits?(i)
          i += 1
        end

        puts answer
        puts "Elapsed Time: #{(Time.now - timer_start)*1000} milliseconds"

    Read the article

  • DNS slowdowns on development environment

    - by Sequenzia
    I have a local development environment set up on my Mac. I am running an Ubuntu web server inside a VirtualBox VM. I set up a hosts-file entry on my Mac that points my dev site to the IP of the Ubuntu virtual server. Everything works well, except that a lot of the time (not all of it) it takes more than 5 seconds to load a page. I used Firebug to track down where the problem is, and when it's slow, the DNS part of my request is taking over 5 seconds. Like I said, it's not all the time; sometimes it resolves and loads the page within milliseconds. The same page will load super fast on one click and then take over 5 seconds the next time. It's really slowing me down and I am not sure what is causing it. Anyone have any ideas? Any help would be great. Thanks.

    Read the article

  • Join Our Call: Sun Storage 2500-M2 Announcement

    - by user797911
    Oracle's Sun Storage 2500-M2 array brings together the latest Fibre Channel (FC) and SAS2 technologies with Oracle's Sun Storage Common Array software to create a robust solution that's equally adept as an entry-level storage area network (SAN) for the mid-size business and at integrating into an existing storage network within the enterprise. The Sun Storage 2500-M2 replaces Sun's Storage 2500 array product line and is designed for quick qualification, enabling fast and easy deployment in traditional 2500 environments. Jun Jang, Oracle Principal Product Manager, will be hosting this one-hour live call (a recording will be available); please join us to find out more:
      • Event Date: 24-JUN-11
      • Event Time: 08:00 am PST/PDT (4 pm UK time)
      • Web Registration and Access: http://oukc.oracle.com/static09/opn/login/?t=livewebcast|c=1031672594
      • Access for Mobile Devices: http://my.oracle.com/content/web/cnt636926
      • Call Provider: Intercall
      • International Participant Dial-In Number: 706-634-8508
      • Additional International Dial-In Numbers Link: http://www.intercall.com/national/oracleuniversity/gdnam.html
      • Dial-In Passcode: 96395

    Read the article

  • Tuning Red Gate: #2 of Many

    - by Grant Fritchey
    In the last installment, I used SQL Monitor to get a snapshot view of the current state of the servers at Red Gate that are giving us trouble. That snapshot suggested some areas where I should focus some time, primarily which queries were being called most frequently or were running the longest. But you don't want to just run off and start tuning queries. Remember, the foundation for query tuning is the server itself, so I want to be sure I'm not looking at major hardware or configuration issues that I need to address first. Rather than look at the current status of the server, I'm going to look at historical data. Clicking on the Analysis tab of SQL Monitor, I get a whole list of counters that I can look at. More importantly, I can look at them over a period of time. Even more importantly, I can compare past periods with current periods to see whether we're looking at a progressive issue. There are counters here that will give me an indication of load, and there are counters here that will tell me specifics about that load. First, I want to just look at the load to understand where the pain points might be; trying to drill down before you have detailed information is just bad planning. The first thing I'm going to check is the CPU, just to see what's up there. I have two servers I'm interested in, so I'll show you both. Looking at the last 30 days for both servers: the second server is about what I would expect. It has an average baseline behavior with occasional, regular peaks. This looks like a system with a fairly steady and predictable load that probably has a nightly batch process that spikes the processor. In short, normal stuff. The points where the CPU drops radically might be worth investigating further, because something changed the processing on this system a lot. But the first server is all over the place. There's no steady CPU behavior at all. It spikes high for long periods of time. It's up, it's down. I'm really going to have to spend time looking at CPU issues on this server to try to figure out what's up. It might be other processes being shared on the server; it might be something else. Either way, I'm going to have to spend time evaluating this CPU, especially those peaks about a week ago. Looking at the Pages/sec, again just a measure of load, I see that there are some peaks on the rg-sql02 server, but overall it looks like a fairly standard load. Plus, the peaks only reach 550 pages/sec. Remember, this isn't a performance measure, just a load measurement, but from this I don't think we're looking at major memory issues, though I may want to correlate these counters with the CPU counters. Again, the other server looks like there's stuff going on. The load is not at all consistent; in fact, there was a point earlier in the year that looks pretty severe. Plus, the spikes here are twice the size of the other system's. We've got a lot more load going on here, and I will probably need to drill down on memory usage on this server. Taking a look at the disk transfers/sec, the load on both systems seems to roughly correspond to the other load indicators. Notice that drop right in the middle of the graph for rg-sql02; I wonder if the office was closed over that period or a system was down for maintenance.
    If I saw spikes in memory or disk that corresponded to the drop in CPU, you could assume something was using those other resources and causing the drop, but when everything goes down, it just means that the system isn't getting used. The disk on the rg-sql01 system isn't spiking exactly the same way as the memory and CPU, so there's a good chance (a chance, mind you) that any performance issues might not be disk related. However, notice that huge jump at the beginning of the month: several disks were used more than they were for the rest of the month. That's the load on the server. What about the load on SQL Server itself? Next time.

    Read the article

  • Does UX matter for enterprise software?

    - by Ryan
    I've come to notice that a lot of the software companies use for managing things like time, expenses, and setting up phone systems is very non-intuitive from a user experience point of view. I know I personally waste a lot of time just trying to figure out how to navigate these systems, especially if I don't have a co-worker close by whom I can bug for help. The help files are usually just as bad as the user interface itself. Are companies that complacent, or are there just no comparable enterprise products out there which do the job for these sorts of tasks? It seems that on the consumer side there is plenty of market opportunity for creating better user experiences, but what about for enterprise software? Obviously a certain level of slickness is not going to matter to a company, but when a better UX design translates to time saved, it's hard to argue against it. Edit: I'm not referring to in-house applications, but rather off-the-shelf systems from large software companies.

    Read the article

  • Oracle Database 12c

    - by jgelhaus
    What's your pain? The cost of IT and downtime? The complexity of IT? Poor database application performance? All of the above and more? These are real challenges caused by today's demands on data centers and their IT teams.  Oracle Database 12c provides a breakthrough architecture that makes it easy to deploy and manage databases in the cloud. Oracle partners will leverage Database 12c innovations to provide additional options for Oracle customer success and ROI. Download Oracle Database 12c and plug into the cloud! Join us for our July 10th webcast to learn about this database breakthrough.

    Read the article

  • Trigger Happy

    - by Tim Dexter
    It's been a while, I know; we'll say no more, OK? I'll just write… In the latest BIP 11.1.1.6 release, and if I'm really honest, the release before this (we'll call it dot 5 for brevity), the boys and gals in the engine room have been real busy enhancing BIP with some new functionality. Those of you who use the scheduling engine in OBIEE may already know and use the 'conditional scheduling' feature. This allows you to be more intelligent about what reports get run and sent to folks on a scheduled basis. You create a 'trigger' analysis (answer) that is executed at schedule time prior to the main report. When the schedule rolls around, the trigger is run; if it returns rows, then the main report is run and delivered. If there are no rows returned, then the main report is not run. Useful, right? Your users are not bombarded with 20 reports in their inbox every week that they need to wade through. They get a handful that they know they need to look at. If you ensure you use conditional formatting in the report, they can find the anomalous data in the reports very quickly and move on to the rest of their day more quickly. You could even think of OBIEE as a virtual team member, scouring the data on your behalf 24/7 and letting you know when it finds an issue.

    BI Publisher, wanting the team t-shirt and the khaki pants, has followed suit. You can now set up 'triggers' for it to execute before it runs the main report. Just like its big brother, if the scheduled report trigger returns rows of data, it then executes the main report; otherwise, the report is skipped until the next schedule time rolls around. Sound familiar?

    BIP differs a little in that you only need to construct a query to act as the trigger, rather than a complete report. Let's assume we have a monthly wage-by-department report on a schedule. We only want to send the report to managers if their departmental wages reach or exceed a certain amount. The toughest part about this is coming up with the SQL to test the business rule you want to implement. For my example, it's not that tough:

        select d.department_name,
               sum(e.salary) as wage_total
          from employees e, departments d
         where d.department_id = e.department_id
         group by d.department_name
        having sum(e.salary) > 230000

    We're looking for departments where the wage cost is greater than 230,000 Dexter Dollars! With a bit of messing around I found out you can parametrize the query, so users can set a value at schedule time if they need to. Creating the trigger is straightforward enough, and you can create multiple triggers for users to select at schedule time. Notice I also used a parameter in the query, :wamount; note the matching parameter in the tree on the left. You also don't need to return multiple columns; one is fine. The key is whether any rows are returned.

    You can build the rest of your report as usual. At scheduling time the Schedule tab has a bit more on it. If your users want to set the trigger, they check the Use Trigger box. The page will then pop up fields to pick the appropriate trigger they want to use, even a trigger on another data model if needed. Note it will also ask for the parameter value associated with the trigger. At this point you should note that the data model does not make a distinction between trigger and data model (extract) parameters, so users will see the parameters on both the General and Schedule tabs. If perchance you need parameters only for the trigger, you can hide them from the report using the Parameters popup in the report designer; just un-check the 'Show' box. I have tested the opposite case, where you do not want main report parameters seen in the trigger section: BIP handles that for you! Once the report hits its allotted schedule time, the trigger is executed. Based on the results, the report will either run or be 'skipped.' Now you have a smarter scheduler that will only deliver reports when folks need to see them and take action on the contents. More official info here for developers and here for users.

    Read the article

  • Notebook MSI GT640 - noisy fan - Ubuntu 11.10 x64

    - by pablo
    I have this issue all the time when using Ubuntu: the fan (fans?) runs most of the time at between 60%~100%, even at idle, which is weird. The GPU temperature is mostly around 80 degrees, and I don't think that's right :( I always use Jupiter with the power-saving preset and add this line to the GRUB defaults: quiet splash pcie_aspm=force i915.i915_enable_rc6=1. That helps a little (the fan just doesn't run at 100% all the time...). What else can I do? The notebook spec is: Core i7-720 (8 threads), NVIDIA GTS250M. So it's one of the first Core i7 notebooks (not exactly silent or cool ;)), without Optimus technology (the NVIDIA GPU is the only built-in graphics card), but anyway I don't have these strange hot periods in Windows 7.

    Read the article
