Search Results

Search found 28459 results on 1139 pages for 'task base programming'.

  • What are the best resources for learning about concurrency and multi-threaded applications?

    - by Zepee
    I realised I have a massive knowledge gap when it comes to multi-threaded applications and concurrent programming. I've covered some basics in the past, but most of it seems to be gone from my mind, and it is definitely a field that I want, and need, to be more knowledgeable about. What are the best resources for learning about building concurrent applications? I'm a very practically oriented person, so if said book contains concrete examples, so much the better, but I'm open to suggestions. I personally prefer to work in pseudocode or C++, and a slant toward game development would be ideal, but is not required.
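
    For what it's worth, here is a minimal C++11 sketch of the kind of concrete example most introductory concurrency material starts from (the worker/counter split is invented purely for illustration): several threads share one counter, and a mutex keeps the increments from racing.

        #include <iostream>
        #include <mutex>
        #include <thread>
        #include <vector>

        int main() {
            std::mutex m;      // guards counter
            long counter = 0;

            auto worker = [&](int iterations) {
                for (int i = 0; i < iterations; ++i) {
                    std::lock_guard<std::mutex> lock(m);  // locked per increment
                    ++counter;
                }
            };

            std::vector<std::thread> pool;
            for (int t = 0; t < 4; ++t)
                pool.emplace_back(worker, 100000);
            for (auto& th : pool)
                th.join();     // wait for every worker before reading counter

            std::cout << "counter = " << counter << '\n';  // 400000 when synchronized
        }

    Removing the lock_guard turns this into a classic data race, which is exactly the kind of failure mode good concurrency books spend most of their pages on.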

    Read the article

  • Compiler Dependencies [closed]

    - by asghar ashgari
    I'm a newbie researcher whose passion is programming languages (Web era). I'm wondering why all the Web frameworks and Web-based general-purpose languages have a huge number of dependencies when you want to install and then use (e.g., extend or alter) their compilers. For example, Ruby on Rails or Scala. Downloading their source code and trying to build it again feels, to me at least, like opening a can of worms. I have a Mac, so I need to install MacPorts, then update Xcode, then get the compiler source code, which has a bunch of other dependencies of its own; it's hard to set things up just to see that the installed open-source compiler works fine.

    Read the article

  • Are CK Metrics still considered useful? Is there an open source tool to help?

    - by DeveloperDon
    Chidamber & Kemerer proposed several metrics for object oriented code. Among them, depth of inheritance tree, weighted number of methods, number of member functions, number of children, and coupling between objects. Using a base of code, they tried to correlated these metrics to the defect density and maintenance effort using covariant analysis. Are these metrics actionable in projects? Perhaps they can guide refactoring. For example weighted number of methods might show which God classes needed to be broken into more cohesive classes that address a single concern. Is there approach superseded by a better method, and is there a tool that can identify problem code, particularly in moderately large project being handed off to a new developer or team?

    Read the article

  • How Do I Export Pages from Browser with Embedded Hyperlinks?

    - by Volomike
    Made a sad discovery today. I have Ubuntu 10.04 LTS. My client is in the ad business and she had a marketing competition task for me. She wanted me to visit the websites of her competitors and export their home pages as PDFs. However, she wanted me to do so with embedded hyperlinks. As it turns out, Firefox (and even the latest Chrome) on Ubuntu 10.04 LTS do not embed hyperlinks in PDF web page exports. Sure, there are several Chrome and FF plugins that let you export as PDF, but what these do is connect to the URL remotely, generate the PDF remotely, and then force a download in your browser from that remote location. That's not good for me, though, because some of these competitor pages require an initial login. That means all I get back from these FF or Chrome plugins is a PDF of the login page. Is there a way to get around this problem and fix the broken PDF printing on Ubuntu 10.04?

    Read the article

  • Gilda Garretón, a Java Developer and Parallelism Computing Researcher

    - by Yolande
    In a new interview titled “Gilda Garretón, a Java Developer and Parallelism Computing Researcher,” Garretón shares her first-hand experience developing with Java and Java 7 for very large-scale integration (VLSI) computer-aided design (CAD). Garretón gives an insightful overview of how Java is contributing to parallelism development and to the Electric VLSI Design System, an open source VLSI CAD application used as a research platform for new CAD algorithms as well as for the research flow for hardware test chips. Garretón considers parallel programming hard and complex, yet important developments are taking place: "With the addition of the concurrent package in Java SE 6 and the Fork/Join feature in Java SE 7, developers have a chance to rely more on existing frameworks and dedicate more time to the essence of their parallel algorithms." Read the full article here

    Read the article

  • Should I use C style in C++?

    - by c.hughes
    As I've been developing my position on how software should be developed at the company I work for, I've come to a conclusion that I'm not entirely sure of. It seems to me that if you are programming in C++, you should not use C-style anything if it can be helped and you don't absolutely need the performance. This way people are kept from doing things like pointer arithmetic or creating resources with new without any RAII, etc. If this idea were enforced, seeing a char* might become a thing of the past. I'm wondering whether this is a conclusion others have reached, or am I being too puritanical about this?
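
    To make the contrast concrete, here is a short sketch of the two styles side by side (the string-duplication task is invented purely for illustration):

        #include <cstdio>
        #include <cstdlib>
        #include <cstring>
        #include <memory>
        #include <string>
        #include <vector>

        // C style: manual allocation, manual cleanup, easy to leak on an early return.
        char* duplicate_c_style(const char* src) {
            char* copy = static_cast<char*>(std::malloc(std::strlen(src) + 1));
            if (copy != nullptr)
                std::strcpy(copy, src);
            return copy;                 // caller must remember to free()
        }

        // C++ style: RAII types own their resources; no explicit cleanup needed.
        std::string duplicate_cpp_style(const std::string& src) {
            return src;                  // std::string manages its own buffer
        }

        int main() {
            char* raw = duplicate_c_style("hello");
            std::puts(raw);
            std::free(raw);              // forget this and the memory leaks

            std::string managed = duplicate_cpp_style("hello");
            std::puts(managed.c_str());  // released automatically at end of scope

            // Same idea for heap objects: unique_ptr instead of bare new/delete.
            auto numbers = std::make_unique<std::vector<int>>(10, 0);
            numbers->at(3) = 42;
        }

    The question then becomes whether an outright ban is needed, or whether code review plus a preference for the RAII versions gets the same benefit without the dogma.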

    Read the article

  • Were you a good programmer when you first left university?

    - by dustyprogrammer
    I recently graduated from university. I have since joined a development team where I am by far the least experienced developer, with maybe a couple of work terms under my belt, while the rest of the team is rocking 5-10 years of experience. I am/was a very good student and a pretty good programmer when it came to bottled assignments and tests. I have worked on some projects with success. But now I'm working with a much bigger code base, and the learning curve is much higher. I was wondering how many other developers started out their careers on teams and felt like they sucked. When does this change? How can I speed up the process? My seniors are helping me, but I want to be great and show my value now. I don't want to start a flame war; this is just a question I have been having, and I was hoping to get some advice from other experienced developers, as well as other beginners like me.

    Read the article

  • How To Disable the Charms Bar and Switcher Hot Corners in Windows 8

    - by Chris Hoffman
    Several programs can prevent the app switcher and charms from appearing when you move your mouse to the corners of the screen in Windows 8, but you can do it yourself with this quick registry hack. You can also hide the charms bar and switcher by installing an application like Classic Shell, which will also add a Start menu and let you log directly into the desktop.

    Read the article

  • Should I avoid SharePoint Development in Visual Studio?

    - by SaphuA
    Hello. Not long ago I started an internship at a company that provides SharePoint consultancy, hosting and development. While their consultancy seems to be pretty good and solid, I feel their development department lacks direction. The reason for this, most likely, is that they stopped outsourcing not too long ago. One thing that I've frequently bumped my head against is the following: my supervisor strongly insists that everything that can be done natively in SharePoint (somehow this includes editing XSLT files in Designer) should be done in SharePoint, even if this results in longer development time (at least when they make me write XSLT) and reduced usability. Her main arguments for this are better maintainability, and that editing the functionality doesn't require programming knowledge. I feel the company is a little biased, and I am unable to get a decent discussion going. This is why I am looking for other places to get some responses on the subject (and not only on the arguments of my supervisor, but on the subject in general). Kind regards

    Read the article

  • Oracle Utilities Framework Batch Easy Steps

    - by ACShorten
    Oracle Support has compiled a list of common questions and answers for batch processing in the Oracle Utilities Application Framework. Customers and partners should take a look at these questions and answers before posting a question to support, to save time. The Knowledge Base article is available from My Oracle Support under FW - Oracle Utilities Framework Batch Easy Steps (Doc ID 1306282.1). This article answers the questions and also links to other documents, including Batch Best Practices for Oracle Utilities Application Framework based products (Doc ID 836362.1) and the Oracle Utilities CCB Batch Operations And Configuration Guide (Doc ID 753301.1), for more detailed information and explanation. Customers of Oracle Utilities Meter Data Management V2.0 and above, Oracle Utilities Mobile Workforce Management V2.0 and above, Oracle Enterprise Taxation and Policy Management V2.0 and above, and Oracle Utilities Smart Grid Gateway V2.0 (all editions) and above should refer to the Batch Server Administration Guide shipped with their products on eDelivery instead of using Doc ID 753301.1.

    Read the article

  • Much Ado About Nothing: Stub Objects

    - by user9154181
    The Solaris 11 link-editor (ld) contains support for a new type of object that we call a stub object. A stub object is a shared object, built entirely from mapfiles, that supplies the same linking interface as the real object while containing no code or data. Stub objects cannot be executed; the runtime linker will kill any process that attempts to load one. However, you can link to a stub object as a dependency, allowing the stub to act as a proxy for the real version of the object.

    You may well wonder if there is a point to producing an object that contains nothing but a linking interface. As it turns out, stub objects are very useful for building large bodies of code such as Solaris. In the last year, we've had considerable success in applying them to one of our oldest and thorniest build problems. In this discussion, I will describe how we came to invent these objects, and how we apply them to building Solaris.

    This posting explains where the idea for stub objects came from, and details our long and twisty journey from hallway idea to standard link-editor feature. I expect that these details are mainly of interest to those who work on Solaris and its makefiles, those who have done so in the past, and those who work with other similar bodies of code. A subsequent posting will omit the history and background details and instead discuss how to build and use stub objects. If you are mainly interested in what stub objects are, and don't care about the underlying software war stories, I encourage you to skip ahead.

    The Long Road To Stubs

    This all started for me with an email discussion in May of 2008, regarding a change request that was filed in 2002, entitled:

        4631488 lib/Makefile is too patient: .WAITs should be reduced

    This CR encapsulates a number of chronic issues with Solaris builds. We build Solaris with a parallel make (dmake) that tries to build as much of the code base in parallel as possible. There is a lot of code to build, and we've long made use of parallelized builds to get the job done quicker. This is even more important in today's world of massively multicore hardware. Solaris contains a large number of executables and shared objects. Executables depend on shared objects, and shared objects can depend on each other. Before you can build an object, you need to ensure that the objects it needs have been built. This implies a need for serialization, which is in direct opposition to the desire to build everything in parallel.

    To accurately build objects in the right order requires an accurate set of make rules defining the things that depend on each other. This sounds simple, but the reality is quite complex. In practice, having programmers explicitly specify these dependencies is a losing strategy: it's really hard to get right, and it's really easy to get it wrong and never know it because things build anyway. Even if you get it right, it won't stay that way, because dependencies between objects can change over time, and make cannot help you detect such drifting. You won't know that you got it wrong until the builds break. That can be a long time after the change that triggered the breakage happened, making it hard to connect the cause and the effect. Usually this happens just before a release, when the pressure is on, it's hard to think calmly, and there is no time for deep fixes.

    As a poor compromise, the libraries in core Solaris were built using a set of grossly incomplete hand-written rules, supplemented with a number of dmake .WAIT directives used to group the libraries into sets of non-interacting groups that can be built in parallel because we think they don't depend on each other.

    From time to time, someone will suggest that we could analyze the built objects themselves to determine their dependencies and then generate make rules based on those relationships. This is possible, but there are complications that limit the usefulness of that approach. To analyze an object, you have to build it first, which is a classic chicken-and-egg scenario. You could analyze the results of a previous build, but then you're not necessarily going to get accurate rules for the current code, and it should be possible to build the code without having a built workspace available. The analysis will take time, and remember that we're constantly trying to make builds faster, not slower. And by definition, such an approach will always be approximate, and therefore only incrementally more accurate than the hand-written rules described above. The hand-written rules are fast and cheap, while this idea is slow and complex, so we stayed with the hand-written approach.

    Solaris was built that way, essentially forever, because these are genuinely difficult problems that had no easy answer. The makefiles were full of build races in which the right outcomes happened reliably for years until a new machine or a change in build server workload upset the accidental balance of things. After figuring out what had happened, you'd mutter "How did that ever work?", add another incomplete and soon to be inaccurate make dependency rule to the system, and move on. This was not a satisfying solution, as we tend to be perfectionists in the Solaris group, but we didn't have a better answer. It worked well enough, approximately. And so it went for years. We needed a different approach: a new idea to cut the Gordian Knot.

    In that discussion from May 2008, my fellow linker-alien Rod Evans had the initial spark that led us to a game-changing series of realizations. The link-editor is used to link objects together, but it only uses the ELF metadata in the object, consisting of symbol tables, ELF versioning sections, and similar data. Notably, it does not look at, or understand, the machine code that makes an object useful at runtime. If you had an object that only contained the ELF metadata for a dependency, but not the code or data, the link-editor would find it equally useful for linking, and would never know the difference. Call it a stub object.

    In the core Solaris OS, we require all objects to be built with a link-editor mapfile that describes all of their publicly available functions and data. Could we build a stub object using the mapfile for the real object? It ought to be very fast to build stub objects, as there are no input objects to process. Unlike the real object, stub objects would not actually require any dependencies, and so all of the stubs for the entire system could be built in parallel. When building the real objects, one could link against the stub objects instead of the real dependencies. This means that all the real objects can be built in parallel too, without any serialization. We could replace a system that requires perfect makefile rules with a system that requires no ordering rules whatsoever. The results would be considerably more robust.

    We immediately realized that this idea had potential, but also that there were many details to sort out, lots of work to do, and that perhaps it wouldn't really pan out. As is often the case, it would be necessary to do the work and see how it turned out.

    Following that conversation, I set about trying to build a stub object. We determined that a faithful stub has to do the following: Present the same set of global symbols, with the same ELF versioning, as the real object. Functions are simple: it suffices to have a symbol of the right type, possibly, but not necessarily, referencing a null function in its text segment. Copy relocations make data more complicated to stub. The possibility of a copy relocation means that when you create a stub, the data symbols must have the actual size of the real data. Any error in this will go uncaught at link time, and will cause tragic failures at runtime that are very hard to diagnose. For reasons too obscure to go into here, involving tentative symbols, it is also important that the data reside in bss, or not, matching its placement in the real object. If the real object has more than one symbol pointing at the same data item, we call these aliased symbols. All data symbols in the stub object must exhibit the same aliasing as the real object.

    We imagined the stub library feature working as follows: A command line option to ld tells it to produce a stub rather than a real object. In this mode, only mapfiles are examined, and any objects or shared libraries on the command line are ignored. The extra information needed (function or data, size, and bss details) would be added to the mapfile. When building the real object instead of the stub, the extra information for building stubs would be validated against the resulting object to ensure that they match.

    In exploring these ideas, I immediately ran headfirst into the reality of the original mapfile syntax, a subject that I would later write about as The Problem(s) With Solaris SVR4 Link-Editor Mapfiles. The idea of extending that poor language was a non-starter. Until a better mapfile syntax became available, which seemed unlikely in 2008, the solution could not involve extensions to the mapfile syntax. Instead, we cooked up the idea (hack) of augmenting mapfiles with stylized comments that would carry the necessary information. A typical definition might look like:

        # DATA(i386) __iob 0x3c0
        # DATA(amd64,sparcv9) __iob 0xa00
        # DATA(sparc) __iob 0x140
        iob;

    A further problem then became clear: if we can't extend the mapfile syntax, then there's no good way to extend ld with an option to produce stub objects, and to validate them against the real objects. The idea of having ld read comments in a mapfile and parse them for content is an unacceptable hack. The entire point of comments is that they are strictly for the human reader, and explicitly ignored by the tool.

    Taking all of these speed bumps into account, I made a new plan: A perl script reads the mapfiles, generates some small C glue code to produce empty functions and data definitions, compiles and links the stub object from the generated glue code, and then deletes the generated glue code. Another perl script, used after both objects have been built, compares the real and stub objects using data from elfdump, and validates that they present the same linking interface.

    By June 2008, I had written the above, and generated a stub object for libc. It was a useful prototype process to go through, and it allowed me to explore the ideas at a deep level.

    Ultimately though, the result was unsatisfactory as a basis for a real product. There were so many issues. The use of stylized comments was fine for a prototype, but not close to professional enough for a shipping product, and the idea of having to document and support it was a large concern. The ideal solution for stub objects really does involve having the link-editor accept the same arguments used to build the real object, augmented with a single extra command line option; any other solution, such as our prototype script, will require makefiles to be modified in deeper ways to support building stubs, and so will raise barriers to converting existing code. A validation script that rederives what the linker knew when it built an object will always be at a disadvantage relative to the actual linker that did the work. And a stub object should be identifiable as such: in the prototype, there was no tag or other metadata that would let you know that they weren't real objects. Being able to identify a stub object in this way means that the file command can tell you what it is, and that the runtime linker can refuse to try and run a program that loads one.

    At that point, we needed to apply this prototype to building Solaris. As you might imagine, modifying all the makefiles in the core Solaris code base in order to do this is a massive task, and not something you'd enter into lightly. The quality of the prototype just wasn't good enough to justify that sort of time commitment, so I tabled the project, putting it on my list of long term things to think about, and moved on to other work. It would sit there for a couple of years.

    Semi-coincidentally, one of the projects I tackled after that was to create a new mapfile syntax for the Solaris link-editor. We had wanted to do something about the old mapfile syntax for many years. Others before me had done some paper designs, and a great deal of thought had already gone into the features it should, and should not, have, but for various reasons things had never moved beyond the idea stage. When I joined Sun in late 2005, I got involved in reviewing those things and thinking about the problem. Now in 2008, fresh from relearning for the Nth time why the old mapfile syntax was a huge impediment to linker progress, it seemed like the right time to tackle the mapfile issue. Paving the way for proper stub object support was not the driving force behind that effort, but I certainly had stub objects in mind as I moved forward.

    The new mapfile syntax, which we call version 2, integrated into Nevada build snv_135 in February 2010:

        6916788 ld version 2 mapfile syntax
        PSARC/2009/688 Human readable and extensible ld mapfile syntax

    In order to prove that the new mapfile syntax was adequate for general purpose use, I had also done an overhaul of the ON consolidation to convert all mapfiles to use the new syntax, and put checks in place that would ensure that no use of the old syntax would creep back in. That work went back into snv_144 in June 2010:

        6916796 OSnet mapfiles should use version 2 link-editor syntax

    That was a big putback, modifying 517 files, adding 18 new files, and removing 110 old ones. I would have done this putback anyway, as the work was already done, and the benefits of human readable syntax are obvious. However, among the justifications listed in CR 6916796 was this:

        We anticipate adding additional features to the new mapfile language that will be applicable to ON, and which will require all sharable object mapfiles to use the new syntax.

    I never explained what those additional features were, and no one asked. It was premature to say so, but this was a reference to stub objects.

    By that point, I had already put together a working prototype link-editor with the necessary support for stub objects. I was pleased to find that building stubs was indeed very fast. On my desktop system (Ultra 24), an amd64 stub for libc can be built in a fraction of a second:

        % ptime ld -64 -z stub -o stubs/libc.so.1 -G -hlibc.so.1 \
            -ztext -zdefs -Bdirect ...

        real        0.019708910
        user        0.010101680
        sys         0.008528431

    In order to go from prototype to integrated link-editor feature, I knew that I would need to prove that stub objects were valuable. And to do that, I knew that I'd have to switch the Solaris ON consolidation to use stub objects and evaluate the outcome. And in order to do that experiment, ON would first need to be converted to version 2 mapfiles. Sub-mission accomplished.

    Normally when you design a new feature, you can devise reasonably small tests to show it works, and then deploy it incrementally, letting it prove its value as it goes. The entire point of stub objects, however, was to demonstrate that they could be successfully applied to an extremely large and complex code base, and specifically to solve the Solaris build issues detailed above. There was no way to finesse the matter: in order to move ahead, I would have to successfully use stub objects to build the entire ON consolidation and demonstrate their value. In software, the need to boil the ocean can often be a warning sign that things are trending in the wrong direction. Conversely, sometimes progress demands that you build something large and new all at once. A big win, or a big loss: sometimes all you can do is try it and see what happens.

    And so, I spent some time staring at ON makefiles trying to get a handle on how things work, and how they'd have to change. It's a big and messy world, full of complex interactions, unspecified dependencies, special cases, and knowledge of arcane makefile features... and so, I backed away, put it down for a few months and did other work... until the fall, when I felt like it was time to stop thinking and pondering (some would say stalling) and get on with it.

    Without stubs, the following gives a simplified high level view of how Solaris is built: An initially empty directory, known as the proto and referenced via the ROOT makefile macro, is established to receive the files that make up the Solaris distribution. A top level setup rule creates the proto area and performs operations needed to initialize the workspace so that the main build operations can be launched, such as copying needed header files into the proto area. Parallel builds are launched to build the kernel (usr/src/uts), libraries (usr/src/lib), and commands. The install makefile target builds each item and delivers a copy to the proto area. All libraries and executables link against the objects previously installed in the proto, implying the need to synchronize the order in which things are built. Subsequent passes run lint and do packaging.

    Given this structure, the additions to use stub objects are as follows. A new second proto area is established, known as the stub proto and referenced via the STUBROOT makefile macro. The stub proto has the same structure as the real proto, but is used to hold stub objects. All files in the real proto are delivered as part of the Solaris product; in contrast, the stub proto is used to build the product, and then thrown away.

    A new target called stub is added to library Makefiles. This rule builds the stub objects. The ld command is designed so that you can build a stub object using the same ld command line you'd use to build the real object, with the addition of a single -z stub option. This means that the makefile rules for building the stub objects are very similar to those used to build the real objects, and many existing makefile definitions can be shared between them. A new target called stubinstall is added to the Makefiles, which delivers the stub objects built by the stub rule into the stub proto; these rules reuse much of the existing plumbing used by the install rule. The setup rule runs stubinstall over the entire lib subtree as part of its initialization. All libraries and executables link against the objects in the stub proto rather than the main proto, and can therefore be built in parallel without any synchronization.

    There was no small way to try this that would yield meaningful results. I would have to take a leap of faith and edit approximately 1850 makefiles and 300 mapfiles first, trusting that it would all work out. Once the editing was done, I'd type make and see what happened. This took about 6 weeks to do, and there were many dark days when I'd question the entire project, or struggle to understand some of the many twisted and complex situations I'd uncover in the makefiles. I even found a couple of new issues that required changes to the new stub-object-related code I'd added to ld.

    With a substantial amount of encouragement and help from some key people in the Solaris group, I eventually got the editing done and stub objects for the entire workspace built. I found that my desktop system could build all the stub objects in the workspace in roughly a minute. This was great news, as it meant that use of the feature is effectively free: no one was likely to notice or care about the cost of building them.

    After another week of typing make, fixing whatever failed, and doing it again, I succeeded in getting a complete build! The next step was to remove all of the make rules and .WAIT statements dedicated to controlling the order in which libraries under usr/src/lib are built. This came together pretty quickly, and after a few more speed bumps, I had a workspace that built cleanly and looked like something you might actually be able to integrate someday. This was a significant milestone, but there was still much left to do.

    I turned to doing full nightly builds. Every type of build (open, closed, OpenSolaris, export, domestic) had to be tried. Each type failed in a new and unique way, requiring some thinking and rework. As things came together, I became aware of things that could have been done better, simpler, or cleaner, and those things also required some rethinking, the seeking of wisdom from others, and some rework. After another couple of weeks, it was in close to final form.

    My focus turned towards the end game and integration. This was a huge workspace, and needed to go back soon, before changes in the gate would make merging increasingly difficult. At this point, I knew that the stub objects had greatly simplified the makefile logic and uncovered a number of race conditions, some of which had been there for years. I assumed that the builds were faster too, so I did some builds intended to quantify the speedup in build time that resulted from this approach. It had never occurred to me that there might not be one.

    And so, I was very surprised to find that the wall clock build times for a stock ON workspace were essentially identical to the times for my stub-library-enabled version! This is why it is important to always measure, and not just assume. One can tell from first principles, based on all those removed dependency rules in the library makefiles, that the stub object version of ON gives dmake considerably more opportunities to overlap library construction.

    Some hypotheses were proposed, and shot down. Could we have disabled dmake's parallel feature? No, a quick check showed things being built in parallel. It was suggested that we might be I/O bound, and so the threads would be mostly idle. That's a plausible explanation, but system stats didn't really support it; plus, the timings between the stub and non-stub cases were just too suspiciously identical. Are our machines already handling as much parallelism as they are capable of, and unable to exploit these additional opportunities? Once again, we didn't see the evidence to back this up.

    Eventually, a more plausible and obvious reason emerged: we build the libraries and commands (usr/src/lib, usr/src/cmd) in parallel with the kernel (usr/src/uts). The kernel is the long leg in that race, and so wall clock measurements of build time are essentially showing how long it takes to build uts. Although it would have been nice to post a huge speedup immediately, we can take solace in knowing that stub objects simplify the makefiles and reduce the possibility of race conditions. The next step in reducing build time should be to find ways to reduce or overlap the uts part of the builds. When that leg of the build becomes shorter, then the increased parallelism in the libs and commands will pay additional dividends. Until then, we'll just have to settle for simpler and more robust.

    And so, I integrated the link-editor support for creating stub objects into snv_153 (November 2010) with

        6993877 ld should produce stub objects
        PSARC/2010/397 ELF Stub Objects

    followed by the work to convert the ON consolidation in snv_161 (February 2011) with

        7009826 OSnet should use stub objects
        4631488 lib/Makefile is too patient: .WAITs should be reduced

    This was a huge putback, with 2108 modified files, 8 new files, and 2 removed files. Due to the size, I was allowed a window after snv_160 closed in which to do the putback. It went pretty smoothly for something this big; a few more preexisting race conditions would be discovered and addressed over the next few weeks, and things have been quiet since then.

    Conclusions and Looking Forward

    Solaris has been built with stub objects since February. The fact that developers no longer specify the order in which libraries are built has been a big success, and we've eliminated an entire class of build error. That's not to say that there are no build races left in the ON makefiles, but we've taken a substantial bite out of the problem while generally simplifying and improving things. The introduction of a stub proto area has also opened some interesting new possibilities for other build improvements. As this article has become quite long, and as those uses do not involve stub objects, I will defer that discussion to a future article.

    Read the article

  • MSBuild publishing vs Visual Studio IDE publishing

    - by reggie
    I am currently working with MSBuild to publish my WinForms application based on the environment selected (Dev or Prod). I am using the MSBuild Community Tasks and referencing this article to achieve this purpose. I have a few theoretical doubts about publishing applications. 1) Is there any difference between publishing through the Visual Studio IDE and through MSBuild? 2) What do most developers prefer to use, and why? 3) What are the advantages of using MSBuild to publish an application compared to publishing through the Visual Studio IDE? 4) Which is faster? I am using a .NET 3.5 WinForms application developed in C#, and my question pertains to ClickOnce Windows applications only. Please help me clear up these doubts.

    Read the article

  • 7 Reasons for Abandonment in eCommerce and the need for Contextual Support by JP Saunders

    - by Tuula Fai
    Shopper confidence, or more accurately the lack thereof, is the bane of the online retailer. There are a number of questions that influence whether a shopper completes a transaction, and all of those attributes revolve around knowledge. What products are available? What products are on offer? What would be the cost of the transaction? What are my options for delivery? In general, most online businesses do a good job of answering basic questions around the products as the shopper engages in the online journey, navigating the product catalog and working through the checkout process. The needs that are harder to address are those that are less concerned with product specifics and more concerned with deciding whether the transaction met the shopper's needs and delivered value. A recent study by the Baymard Institute [1] finds that more than 60% of ecommerce site visitors will abandon their shopping cart. The study also identifies seven reasons for abandonment out of the commerce process [2]. Most of those reasons come down to poor usability within the commerce experience.

    Distractions. Distractions in the shopper's environment (TV, children, pets, etc.) or distractions on the eCommerce page can drive shopper abandonment. Ideally, the selection and check-out process should be straightforward. One common distraction is to drive the shopper away from the task at hand through pop-ups or re-directs. A shopper engaging with support information in the checkout process should not be directed away from the page to consume support. Though confidence may improve, the distraction also means abandonment may increase.

    Poor Usability. When the experience gets more complicated, buyer's remorse can set in. While knowledge drives confidence, a lack of understanding erodes it. Therefore it is important that the commerce process is streamlined. In some cases, the number of clicks to complete a purchase is lengthy and unavoidable. In these situations, it is vital to ensure that the complexity of your experience can be explained with contextual support to avoid abandonment. If you can illustrate the solution to a complex action while the user is engaged in that action, and address customer frustrations with your checkout process before they arise, you can decrease abandonment.

    Fraud. The perception of potential fraud can be enough to deter a buyer. Does your site look credible? Can shoppers trust your brand? Providing answers on the security of your experience and the levels of protection applied to profile information may play as big a role in ensuring the sale as does the support you provide on the product offerings and purchasing process.

    Does it fit? For a clothing item or an oversized furniture item, another common form of abandonment is for the shopper to question whether the item will fit the intended user or space. Providing information on the sizing applied to clothing, physical dimensions, and limitations on delivery/returns of oversized items will also assist the sale. A photo of the item alone will help, as it answers some of those questions, but won't assuage all customer concerns about sizing and fit.

    Sometimes the customer doesn't want to buy. Prospective buyers might be browsing through your catalog to kill time, or just might not have the money to purchase the item! You are unlikely to provide any information in contextual support that increases the likelihood to buy if the shopper already has no intention of doing so; the customer will still likely abandon. Ensuring that any questions are proactively answered as they browse through your site can only increase their likelihood to return and buy at a future date.

    Can't Buy. Errors or complexity at checkout can be another major cause of abandonment. Good contextual support is unlikely to help with severe errors caused by technical issues on your site, but it will have a big impact on customers struggling with complexity in the checkout process and needing a question answered prior to completing the sale. Embedded support within the checkout process that patiently explains how to complete a task will help increase conversion rates.

    Additional Costs. Tax, shipping, and other costs or duties can dramatically increase the cost of the purchase and, when unexpected, can increase abandonment, particularly if they can't be adequately explained. Again, a lack of knowledge erodes confidence in the purchase, and cost concerns in particular erode the perception of your brand's trustworthiness. Providing information on what costs are additive and why they are being levied can decrease the likelihood that the customer will abandon out of the experience.

    Knowledge drives confidence, and confidence drives conversion. If you'd like to understand best practices in providing contextual customer support in eCommerce to give your shoppers confidence, download the Oracle Cloud Service and Oracle Commerce - Contextual Support in Commerce White Paper. It discusses the process of adding customer support, including a suggested process for finding where knowledge has the most influence on your shoppers and practical step-by-step illustrations of how contextual self-service can be added to your online commerce experience.

    Resources:
    [1] http://baymard.com/checkout-usability
    [2] http://baymard.com/blog/cart-abandonment

    Read the article

  • how to follow python polymorphism standards with math functions

    - by krishnab
    So I am reading up on Python in Mark Lutz's wonderful Learning Python book. Mark makes a big deal about how part of the Python development philosophy is polymorphism, and that functions and code should rely on polymorphism and not do much type checking. However, I do a lot of math-type programming, and so the idea of polymorphism does not really seem to apply; I don't want to try to run a regression on a string or something. So I was wondering if there is something I am missing here. What are the applications of polymorphism when I am writing functions for math, or is type checking philosophically okay in this case?

    Read the article

  • Best Practices and Etiquette for Setting up Email Notifications

    - by George Stocker
    If you were going to set up email alerts for the customers of your website to subscribe to, what rules of etiquette ought to be followed? I can think of a few off the top of my head: users can opt out; text only (or tasteful remote images); not sent out more than once a week; clients have fine-grained control over what they receive emails about (they only receive what they are interested in). What other points should I consider? From a programming standpoint, what is the best method for setting up and running email notifications? Should I use an ASP.NET service? A Windows service? What are the pitfalls of either? How should I log emails that are sent? I don't care if they're received, but I do need to be able to prove that I did or did not send an email.

    Read the article

  • Floppy Autoloader Automatically Archives Thousands of Floppies

    - by Jason Fitzpatrick
    The thought of hand-loading 5,000 floppy disks is more than enough to drive an inventive geek to create a better alternative, like this automated floppy disk archiver. DwellerTunes has several crates of floppy disks that contain old Amiga software and related material, personal programming projects, personal documents, and more. Realistically there's no way he could devote time to hand-loading and archiving thousands upon thousands of floppy disks, so he built an automatic loader that accepts stacks of several hundred floppy disks at a time. The loader not only loads and archives the floppy disks, but also photographs the label of each disk so that each archive includes a picture of the original label. Watch the video above to see it in action and then hit up the link below for more information. Converting All My Amiga Disks [DwellerTunes via Make]

    Read the article

  • Setting the Default Wiki Page in a SharePoint Wiki Library

    - by Damon Armstrong
    I’ve seen a number of blog posts about setting the default homepage in a wiki library, and most of them offer ways of accomplishing this task through PowerShell or through SharePoint Designer.  Although I have become an ever-increasing fan of PowerShell, I still prefer to stay away from it unless I’m trying to do something fairly complicated or I need a script that I can run over and over again.  If all you need to do is set the default homepage in a wiki library, there is an easier way! First, navigate to the wiki page you want to use as the default homepage.  Then click the Page tab in the ribbon.  In the Page Actions group there is a button called Make Homepage.  Click it.  A confirmation displays informing you that you are about to change the homepage.  Click OK and you will have a new homepage for your wiki library.  No PowerShell required.

    Read the article

  • Where should I draw the line between unit tests and integration tests? Should they be separate?

    - by Earlz
    I have a small MVC framework I've been working on. Its code base definitely isn't big, but it's no longer just a couple of classes. I finally decided to take the plunge and start writing tests for it (yes, I know I should've been doing that all along, but its API was super unstable up until now). Anyway, my plan is to make it extremely easy to test, including integration tests. An example integration test would go something along these lines: fake HTTP request object -> MVC framework -> HTTP response object -> check that the response is correct. Because this is all doable without any state or special tools (browser automation, etc.), I could actually do this with ease with regular unit test frameworks (I use NUnit). Now the big question. Where exactly should I draw the line between unit tests and integration tests? Should I only test one class at a time (as much as possible) with unit tests? Also, should integration tests be placed in the same testing project as my unit testing project?
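
    As a language-neutral illustration of where the line is often drawn (written here in C++ with plain asserts rather than NUnit, and with invented Router/Framework names), the unit test exercises one class in isolation, while the integration test pushes a fake request through the whole pipeline and only inspects the response:

        #include <cassert>
        #include <string>

        // Toy framework pieces; the names are invented for illustration.
        struct Request  { std::string path; };
        struct Response { int status; std::string body; };

        class Router {
        public:
            // Maps a path to a handler name; this is the unit under test.
            std::string resolve(const std::string& path) const {
                return path == "/hello" ? "HelloController" : "NotFound";
            }
        };

        class Framework {
        public:
            // Full pipeline: routing plus response generation.
            Response handle(const Request& req) const {
                if (router_.resolve(req.path) == "HelloController")
                    return {200, "hello world"};
                return {404, ""};
            }
        private:
            Router router_;
        };

        // Unit test: one class, no collaborators.
        void test_router_resolves_known_path() {
            Router r;
            assert(r.resolve("/hello") == "HelloController");
        }

        // Integration test: fake request in, whole framework, response out.
        void test_framework_end_to_end() {
            Framework app;
            Response resp = app.handle(Request{"/hello"});
            assert(resp.status == 200);
            assert(resp.body == "hello world");
        }

        int main() {
            test_router_resolves_known_path();
            test_framework_end_to_end();
            return 0;
        }

    Keeping both kinds in one test project is common for a code base this size; the split usually only starts to matter once the integration tests become slow enough to hurt the edit-compile-test loop.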

    Read the article

  • Can a loosely typed language be considered true object oriented?

    - by user61852
    Can a loosely typed programming language like PHP really be considered object oriented? I mean, the methods don't have return types, and method parameters have no declared types either. Doesn't class design require methods to have a return type? Don't method signatures have specifically typed parameters? How can OOP techniques help you code in PHP if you always have to check the types of the parameters received, because the language doesn't enforce types? Please, if I'm wrong, explain it to me. When you design things using UML, then code classes in PHP with no return types on methods and no types on parameters, is the code really compliant with the UML design? You spend time designing the architecture of your software, but then the compiler doesn't force the programmer to follow your design while coding, letting them assign any object variable to any other variable with no "type-mismatch" warning.

    Read the article

  • Hack Your Lights for Remote Control

    - by Jason Fitzpatrick
    This clever hack combines a modified wall switch with unused buttons on a universal remote to create one-touch wireless control of the lighting in a media room. Andrew, the tinkerer behind this home theater hack, writes: I really liked the idea of controlling my “Home Theater” lights with a remote (TV or other); this would save me the exhausting task of heaving myself off the couch to turn the lights on or off. I found one of my remotes has a spare power button. It’s one of those stupid “universal” remotes that come with DVD players or TVs but only work if you have all the same brand of equipment, which I don’t, so this made a good option for a light switch. Hit up the link below to check out more photos of his project and download the source code.

    Read the article

  • How to Change Your Default Applications on Ubuntu: 4 Ways

    - by Chris Hoffman
    There are several ways to change your default applications on Ubuntu. Whether you’re changing the default application for a particular task, file type, or a system-level application like your default text editor, there’s a different place to go. Unlike on Windows, applications won’t take over existing file extensions during the installation process — they’ll just appear as an option after you install them.

    Read the article

  • Python library for scripting (C++ integration)

    - by Edward83
    Please advise me on a good wrapper/library for Python. I need to implement simple scripting in a C++ app. By "good" I mean pretty understandable, well documented, not leaking memory, and fast. The goal is to create a base GameObject interface in Python and C++. Your own experience and useful links would be nice! I found a link about it, but I need something more specific within a gamedev context. What combinations of libraries have you used for Python integration into C++? For example, ogre-python is said to be built using Py++ and the Boost.Python library. And one more question: maybe someone knows how Python was integrated into the BigWorld engine (its own port or some library)? Thank you!
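
    As a minimal sketch of what embedding looks like with the stock CPython C API (Boost.Python and similar wrappers layer a friendlier C++ interface over this same mechanism), the following is illustrative only; the build command and the tiny GameObject script are assumptions, not from the question:

        // Build example (assumption, adjust for your setup):
        //   g++ embed.cpp $(python3-config --embed --cflags --ldflags)
        #include <Python.h>
        #include <iostream>

        int main() {
            Py_Initialize();                      // start the embedded interpreter

            // A tiny script defining a GameObject-like class; a real app would
            // load this from a file that designers can edit.
            const char* script =
                "class GameObject:\n"
                "    def __init__(self, name):\n"
                "        self.name = name\n"
                "    def update(self, dt):\n"
                "        print('updating', self.name, 'dt =', dt)\n"
                "\n"
                "obj = GameObject('player')\n"
                "obj.update(0.016)\n";

            if (PyRun_SimpleString(script) != 0)  // non-zero means a Python error
                std::cerr << "script failed\n";

            Py_Finalize();                        // shut the interpreter down
            return 0;
        }

    A wrapper library earns its keep once you need to expose C++ classes to Python and call back into them; evaluating Boost.Python (or a comparable binding generator) against plain C API code like the above is exactly the right comparison to make.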

    Read the article

  • Database vs Networking

    - by user16258
    I have completed my diploma in IT and am now pursuing a degree; I am in the last semester of my B.E. (I.T.). I want to specialize either in databases (Oracle) or in networking (Cisco). Which of the two will be in more demand in the near future? I know it's all about interest, but I would still like to know your opinion. Most people say that a network engineer is never paid as well as a programmer or a DBA, and a few say they do get paid well. What would be the scope if I clear my CCNA and CCNP exams, or the OCA and OCP exams? Which would be more rewarding? Also, I have read somewhere that most DBA tasks will be automated, so demand for DBAs will shrink in the future. I would also like to hear from network engineers what the scenario is out there in India. Thanks

    Read the article

  • Java EE/GlassFish Adoption Story by Kerry Wilson/Vanderbilt University

    - by reza_rahman
    Kerry Wilson is a Software Engineer at the Vanderbilt University Medical Center. He served in a consultant role to design a lightweight systems integration solution for the next generation Foundations Recovery Network using GlassFish, Java EE 6, JPA, @Scheduled EJBs, CDI, JAX-RS and JSF. He shared his story at the JavaOne 2013 Sunday GlassFish community event - check out the video below: Kerry outlined some of the details of the implementation and emphasized the fact that Java EE can be a great solution for applications that are considered small/lightweight. He mentioned the productivity gains through the modern Java EE programming model centered on annotations, POJOs and zero-configuration - comparing it with competing frameworks that aim towards similar productivity for lightweight applications. Kerry also stressed the quality of the excellent NetBeans integration with GlassFish and the need for community self-support in free, non-commercial open source projects like GlassFish. You can check out the details of his story on the GlassFish stories blog. Do you have a Java EE/GlassFish adoption story to share? Let us know and we will highlight it for the community.

    Read the article

  • Java EE/GlassFish Adoption Story by Kerry Wilson/Vanderbilt University

    - by reza_rahman
    Kerry Wilson is a Software Engineer at the Vanderbilt University Medical Center. He served in a consultant role to design a lightweight systems integration solution for the next generation Foundations Recovery Network using GlassFish, Java EE 6, JPA, @Scheduled EJBs, CDI, JAX-RS and JSF. He lives in Nashville, TN where he helps organize the Nashville Java User Group. Kerry shared his Java EE/GlassFish adoption story at the JavaOne 2013 Sunday GlassFish community event - check out the video below. Here is the slide deck for his talk: "GlassFish Story by Kerry Wilson/Vanderbilt University Medical Center" (from glassfish). Kerry outlined some of the details of the implementation and emphasized the fact that Java EE can be a great solution for applications that are considered small/lightweight. He mentioned the productivity gains through the modern Java EE programming model centered on annotations, POJOs and zero-configuration - comparing it with competing frameworks that aim towards similar productivity for lightweight applications. Kerry also stressed the quality of the excellent NetBeans integration with GlassFish and the need for community self-support in free, non-commercial open source projects like GlassFish.

    Read the article
