Search Results

Search found 29898 results on 1196 pages for 'go minimal'.

Page 343/1196 | < Previous Page | 339 340 341 342 343 344 345 346 347 348 349 350  | Next Page >

  • The clean coders videos [closed]

    - by Sebastian
    As many others, I have been reading Uncle Bob Martin's books. More specifically, Clean Code and then The Clean Coder. Now, over the last year he has been producing "code casts" that you can buy for ~20 USD a piece. I bought the first episode sometime in mid 2011 and wasn't that impressed, as I really learned nothing new after reading his books. Last night I bought the first episode of test driven development, with more or less the same result as last time. Now tonight I gave it one more go and bought TDD part 2, and this one was, IMO, really good. With this post I would like to tip others about his videos, and would also like to know what others think. BR Sebastian

    Read the article

  • Google I/O 2010 - Scripting Google Apps for business

    Google I/O 2010 - Scripting Google Apps for business process automation (Enterprise 201, Evin Levey). Learn how to use Google Apps for business process automation and custom workflow. We'll introduce the powerful scripting service along with several easy-to-use interfaces including Spreadsheets, Calendar, Sites and the Document List. We'll also demonstrate interoperability with third party web services and showcase exciting new developments in Google Apps Script. For all I/O 2010 sessions, please go to code.google.com. From: GoogleDevelopers | Views: 8 | 0 ratings | Time: 53:16 | More in Science & Technology

    Read the article

  • "Oracle Enterprise Manager Grid Control Advanced OEM Techniques for the Real World" Book - My Humble Review

    - by cristobal.soto(at)oracle.com
    After reviewing this book, I am really amazed with it. I really recommend it, especially if you work with these tools (BPEL, SOA Suite and/or OSB), if you are a SOA Architect, and/or if your work is focused on production environments. This book provides valuable and useful information for monitoring and automation tasks. The book explains very clearly, and with screenshots (which makes it even easier to read, understand and follow), how to perform several tasks that are necessary to keep correct performance in production environments, and the subtasks that must be executed for them. The test sections in chapters 3, 10 and 13 (SOAP tests for partner links and BPEL processes, service tests on web applications, and SOAP tests of OSB proxy and business service endpoints) look especially interesting to me, and I was pleased to see that there is special emphasis on the use of WebLogic Server as well. For further information and to order the book, please go to the Packt Publishing web site.

    Read the article

  • How do I make Empathy match my keyring with my password?

    - by lisalisa
    I changed my password a few months ago from the password I first used when I installed Ubuntu on my machine. I tried to add a Google Talk account to Empathy, but every time, Empathy gives me a message saying the following: "Enter password to unlock your login keyring. The password you use to log in to your computer no longer matches that of your login keyring." I do not remember my original password, and I'm not sure if I should go to Preferences > Passwords and Keys and delete my login keyring, or if there is a way to change the keyring so that it matches up with my current password.

    Read the article

  • What makes for a good JIRA workflow with a software development team?

    - by Hari Seldon
    I am migrating my team from a snarl of poorly managed Excel documents, individual checklists, and personal emails used to manage our application issues and development tasks, to a new JIRA project. My team and I are new to JIRA (and issue tracking software in general). My team is skeptical of the transition at best, so I am also trying not to scare them off by introducing something overly complex at the start. I understand one of JIRA's strengths to be the customized workflows that can be created for a project. I've looked over the JIRA documentation and a number of tutorials, and am comfortable with the how of creating workflows, but I need some contextual what to go along with it. What makes a particular workflow work well? What does a poorly designed workflow look like? What are the benefits/drawbacks of a strict workflow with very specific states and transitions, versus a looser workflow with fewer, more broadly defined states and transitions?

    Read the article

  • How to spawn a character at certain point and walk to a set point

    - by Robert H.
    I am making a game where I have a background image of a neighborhood. Each location has a different number of customers that are generated to walk on the sidewalks. They all walk to a specific location (like a stand or cart that sells stuff), and after they get to the location I want them to interact with the cart. However, if another customer is already in a sale interaction, then the others get in line in order of arrival. After the transaction the customers walk off screen. Any information on how I can do this, and what game engine would be needed? Anyone have any idea where I should go for this? I already have my game done up through Eclipse/Java without any game engine.
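
    A minimal sketch of one way to structure this in plain Java (no engine), with made-up class names: each customer is a small state machine, and the cart keeps a FIFO queue so waiting customers are served in order of arrival.

        import java.util.ArrayDeque;
        import java.util.Queue;

        // Hypothetical sketch: a cart that serves walking customers in arrival order.
        enum CustomerState { WALKING_TO_CART, IN_LINE, IN_TRANSACTION, LEAVING }

        class Customer {
            CustomerState state = CustomerState.WALKING_TO_CART;
            double x, y; // current position on the background image

            // Move toward (tx, ty); returns true once the customer has arrived.
            boolean moveToward(double tx, double ty, double speed) {
                double dx = tx - x, dy = ty - y;
                double dist = Math.hypot(dx, dy);
                if (dist <= speed) { x = tx; y = ty; return true; }
                x += speed * dx / dist;
                y += speed * dy / dist;
                return false;
            }
        }

        class Cart {
            final double x, y;
            private final Queue<Customer> line = new ArrayDeque<>(); // FIFO = order of arrival
            private Customer current; // customer in the sale interaction, if any
            private double transactionTimeLeft;

            Cart(double x, double y) { this.x = x; this.y = y; }

            void arrive(Customer c) { c.state = CustomerState.IN_LINE; line.add(c); }

            // Call once per game tick with the elapsed time in seconds.
            void update(double dt) {
                if (current == null && !line.isEmpty()) {
                    current = line.poll();
                    current.state = CustomerState.IN_TRANSACTION;
                    transactionTimeLeft = 2.0; // seconds per sale - an arbitrary assumption
                }
                if (current != null && (transactionTimeLeft -= dt) <= 0) {
                    current.state = CustomerState.LEAVING; // walks off screen from here
                    current = null;
                }
            }
        }

    Each tick you would move WALKING_TO_CART customers with moveToward(), call cart.arrive() when one reaches the cart, and call cart.update(dt) once; a plain Java game loop is enough for this, no engine strictly required.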

    Read the article

  • General Policies and Procedures for Maintaining the Value of Data Assets

    Here is a general list of policies and procedures for maintaining the value of data assets.

    Data Backup Policies and Procedures
    Backups are very important when dealing with data because there is always the chance of losing data due to faulty hardware or user activity. A strategic backup system should therefore be mandatory for all companies. That being said, in the real world some companies that I have worked for do not really have a good data backup plan, and when companies take this kind of approach to backups, the data is usually not really recoverable. Unfortunately, when companies do not regularly test their backup plans they get a false sense of security, because they think that they are covered. However, I can tell you from personal and professional experience that a backup plan/system is never fully implemented until it is regularly tested, prior to the time when it actually needs to be used.

    Disaster Recovery Plan
    Expanding on backup policies and procedures, a company also needs a disaster recovery plan in order to protect its data in case of a catastrophic disaster. Disaster recovery plans typically encompass how to restore all of a company's data and infrastructure back to an operational status, and most also include time estimates for how long each step of the plan should take to execute. It is important to note that disaster recovery plans, just like backup plans, are never fully implemented until they have been tested. They should be tested regularly so that the business can be confident of losing no data, or only minimal data, in a catastrophic disaster.

    Firewall Policies and Content Filters
    One way companies can protect their data is by using a firewall to separate their internal network from the outside. Firewalls allow or deny network access as data passes through them by applying various defined restrictions; they can also be used to restrict access from the internal network to the outside by the same factors.

    Common firewall restrictions:
    - Destination/sender IP address
    - Destination/sender host names
    - Domain names
    - Network ports

    Companies may also want to restrict what their network users view on the internet through content filters. Content filters allow a company to track which web pages a person has accessed, and can restrict users' access based on rules set up in the filter. The device and/or software can block access to domains or specific URLs based on a few factors.

    Common content filter criteria:
    - Known malicious sites
    - Specific page content
    - Page content theme

    Anti-Virus/Malware Policies
    Fortunately, most companies utilize antivirus programs on all computers and servers, and for good reason: viruses have been known to corrupt, invalidate, destroy, and steal data. Anti-virus applications are a great way to prevent malicious applications from gaining access to a company's data. However, anti-virus programs must be constantly updated, because new viruses are always being created and the anti-virus vendors need to distribute updates to their applications so that they can catch and remove them.

    Data Validation Policies and Procedures
    Data validation is very important to ensure that only accurate information is stored. The existence of invalid data can cause major problems when businesses attempt to use data for knowledge-based decisions and for performance reporting.

    Data Scrubbing Policies and Procedures
    Data scrubbing is valuable to companies in one of two ways. The first is cleaning data prior to analysis for report generation. The second is that it allows companies to remove things like personally identifiable information from data prior to transmitting it between environments, or when the information is sent to an external location. An example of this can be seen with medical records, where HIPAA rules prohibit the storage of specific personal and medical information. Additionally, I have professionally run into a scenario where the Canadian government does not allow any Canadian's personal information to be stored on a server not located in Canada.

    Encryption Practices
    The use of encryption is very valuable when a company needs to store any personal information. It allows users with the appropriate access levels to view or confirm the existence or accuracy of data within a system, by either decrypting the information or by encrypting a piece of data and comparing it to the stored version. Additionally, if for some unforeseen reason the data got into the wrong hands, they would have to decrypt it before they could even read it. Encryption simply adds an additional layer of protection around the data itself.

    Standard Normalization Practices
    The use of standard data normalization practices is very important when dealing with data, because it can prevent a lot of potential issues by eliminating unnecessary data duplication. Issues caused by data duplication include excess use of data storage, an increased chance of invalid data, and overuse of data processing.

    Network and Database Security/Access Policies
    Every company has some form of network/data access policy, even if that policy is to have none. These policies help secure data from being seen by inappropriate users, and prevent data from being updated or deleted by unauthorized users. Without a good security policy there is a large potential for data to be corrupted by unassuming users, or even stolen.

    Data Storage Policies
    Data storage policies are very important, depending on how they are implemented, especially when a company is trying to use them in conjunction with other policies like data backups. I have worked at companies where all network user folders are constantly backed up, so if a user wanted to ensure the existence of a piece of data in the form of a file, they had to store that file in their network folder. Conversely, I have also worked in places where a user's entire profile is backed up whenever they log on or off of the network.

    Training Policies
    One of the biggest ways to prevent data loss, and to ensure that data will remain a company asset, is training. Properly training employees on how to work within the systems that access data is crucial, and users need to be trained on how to manipulate a company's data in order to perform their tasks, reducing the chances of invalidating data.
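
    As a concrete illustration of the encryption practices paragraph above, here is a minimal Java sketch (class and method names are made up) of confirming the accuracy of a piece of data by comparing one-way SHA-256 digests against a stored version, rather than reading stored plaintext:

        import java.nio.charset.StandardCharsets;
        import java.security.MessageDigest;
        import java.security.NoSuchAlgorithmException;

        // Minimal sketch: verify a value against a stored digest without plaintext.
        public class DataDigest {
            static byte[] sha256(String data) throws NoSuchAlgorithmException {
                return MessageDigest.getInstance("SHA-256")
                                    .digest(data.getBytes(StandardCharsets.UTF_8));
            }

            static boolean matchesStored(String candidate, byte[] storedDigest)
                    throws NoSuchAlgorithmException {
                // isEqual performs a time-constant comparison of the two digests.
                return MessageDigest.isEqual(sha256(candidate), storedDigest);
            }

            public static void main(String[] args) throws Exception {
                byte[] stored = sha256("555-12-3456"); // digest kept on file
                System.out.println(matchesStored("555-12-3456", stored)); // true
                System.out.println(matchesStored("555-12-9999", stored)); // false
            }
        }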

    Read the article

  • Should extension scripts be run in a sandbox?

    - by Cubic
    In particular, this is about game extensions written in lua (luajit-2.0). I was contemplating whether I should restrict what these scripts can do, and arrived at the conclusion that I probably shouldn't: It's hard to get right. Sounds silly, but chances are my sandbox is gonna end up leaky anyways. The only benefit I could think of would be giving users some sense of security when running third party scripts. The disadvantages would be that it's just incredibly annoying for extension writers. That is, for now, myself (game content will be mostly scripted). The reason I'm asking this now before I actually have anything presentable is that adding a sandbox early on is easy, but would impose said annoying restrictions on myself too. However if I first go on with it and then later decide I do need a sandbox after all, I'm gonna run into problems (I'd either have to rewrite the scripts that are already there, or introduce some form of trust management system which seems to be more trouble than it's worth).

    Read the article

  • Google I/O 2010 - Connecting users w/ places

    Google I/O 2010 - Where you at? Connecting your users with the places around them (Geo 201, Marcelo Camelo, Chris Lambert, Dave Wang (Booyah)). With the proliferation of GPS-enabled mobile devices, the locations of your users are now readily accessible to applications. This session will illustrate how to manage this location data and exploit the rich local information that Google offers to place your users in the context of their surroundings. For all I/O 2010 sessions, please go to code.google.com. From: GoogleDevelopers | Views: 65 | 0 ratings | Time: 01:01:55 | More in Science & Technology

    Read the article

  • Master-slave vs. peer-to-peer architecture: benefits and problems

    - by Ashok_Ora
    Almost two decades ago, I was a member of a database development team that introduced adaptive locking. Locking, the most popular concurrency control technique in database systems, is pessimistic. Locking ensures that two or more conflicting operations on the same data item don't "trample" on each other's toes, resulting in data corruption.

    In a nutshell, here's the issue we were trying to address. In everyday life, traffic lights serve the same purpose. They ensure that traffic flows smoothly and, when everyone follows the rules, there are no accidents at intersections. As I mentioned earlier, the problem with typical locking protocols is that they are pessimistic. Regardless of whether there is another conflicting operation in the system or not, you have to hold a lock! Acquiring and releasing locks can be quite expensive, depending on how many objects the transaction touches, and every transaction has to pay this penalty. To use the earlier traffic light analogy: if you have ever waited at a red light in the middle of nowhere with no one on the road, wondering why you need to wait when there's clearly no danger of a collision, you know what I mean.

    The adaptive locking scheme that we invented was able to minimize the number of locks that a transaction held by detecting whether there were one or more transactions that needed conflicting access; if not, you could get by without holding any lock at all. In many "well-behaved" workloads there are few conflicts, so this optimization is a huge win. If, on the other hand, there are many concurrent, conflicting requests, the algorithm gracefully degrades to the "normal" behavior with minimal cost. We were able to reduce the number of lock requests per TPC-B transaction from 178 requests down to 2! Wow! This is a dramatic improvement in concurrency as well as transaction latency. The lesson from this exercise was that if you can identify the common scenario, and optimize for that case so that only the uncommon scenarios are more expensive, you can make dramatic improvements in performance without sacrificing correctness.

    So how does this relate to the architecture and design of some of the modern NoSQL systems? NoSQL systems can be broadly classified as master-slave sharded or peer-to-peer sharded systems. NoSQL systems with a peer-to-peer architecture have an interesting way of handling changes. Whenever an item is changed, the client (or an intermediary) propagates the changes, synchronously or asynchronously, to multiple copies (for availability) of the data. Since the change can be propagated asynchronously, during some interval in time it will be the case that some copies have received the update and others haven't. What happens if someone tries to read the item during this interval? The client in a peer-to-peer system will fetch the same item from multiple copies and compare them to each other. If they're all the same, then every copy that was queried has the same (and up-to-date) value of the data item, so all's good. If not, then the system provides a mechanism to reconcile the discrepancy and to update stale copies.

    So what's the problem with this? There are two major issues. First, IT'S HORRIBLY PESSIMISTIC, because in the common case it is unlikely that the same data item will be updated and read from different locations at around the same time! For every read operation, you have to read from multiple copies. That's pretty expensive, especially if the data are stored in multiple geographically separate locations and network latencies are high. Second, if the copies are not all the same, the application has to reconcile the differences and propagate the correct value to the out-of-date copies. This means that the application program has to handle discrepancies in the different versions of the data item and resolve the issue (which can further add to cost and operation latency).

    Resolving discrepancies is only one part of the problem. What if the same data item was updated independently on two different nodes (copies)? In that case, due to the asynchronous nature of change propagation, you might end up with different versions of the data item in different copies, and the application program also has to resolve conflicts and then propagate the correct value to the copies that are out of date or have incorrect versions. This can get really complicated. My hunch is that there are many peer-to-peer-based applications that don't handle this correctly, and worse, don't even know it. Imagine having 100s of millions of records in your database: how can you tell whether a particular data item is incorrect or out of date? And what price are you willing to pay for ensuring that the data can be trusted? Multiple network messages per read request? Discrepancy and conflict resolution logic in the application, and potentially, additional messages? All this overhead, when all you were trying to do was read a data item. Wouldn't it be simpler to avoid this problem in the first place?

    Master-slave architectures like the Oracle NoSQL Database handle this very elegantly. A change to a data item is always sent to the master copy. Consequently, the master copy always has the most current and authoritative version of the data item. The master is also responsible for propagating the change to the other copies (for availability and read scalability). Client drivers are aware of master copies and replicas, and they are also aware of the "currency" of a replica; in other words, each NoSQL Database client knows how stale a replica is. This vastly simplifies the job of the application developer. If the application needs the most current version of the data item, the client driver will automatically route the request to the master copy. If the application is willing to tolerate some staleness of data (e.g. a version that is no more than 1 second out of date), the client can easily determine which replica (or set of replicas) can satisfy the request, and route the request to the most efficient copy. This results in a dramatic simplification in application logic and also minimizes network requests (the driver will only send the request to exactly the right replica, not many).

    So, back to my original point. A well designed and well architected system minimizes or eliminates unnecessary overhead, and avoids pessimistic algorithms wherever possible, in order to deliver a highly efficient and high performance system. If you've ever programmed an Oracle NoSQL Database application, you'll know the difference!
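
    To make the peer-to-peer read path concrete, here is a toy Java sketch (every type and method name is hypothetical, not a real client API) of reading from every replica, picking the newest version, and repairing stale copies. Note the one round trip per replica on every read, and that independent-update conflict resolution is omitted entirely; these are exactly the costs described above.

        import java.util.List;

        // Toy model of a peer-to-peer read: fetch all copies, reconcile, read-repair.
        record Versioned(long version, String value) {}

        interface Replica {
            Versioned read(String key);
            void write(String key, Versioned v); // used here for read-repair
        }

        class PeerToPeerClient {
            private final List<Replica> replicas;
            PeerToPeerClient(List<Replica> replicas) { this.replicas = replicas; }

            String read(String key) {
                // One network request per replica, even when nothing conflicts.
                Versioned newest = null;
                for (Replica r : replicas) {
                    Versioned v = r.read(key);
                    if (newest == null || v.version() > newest.version()) newest = v;
                }
                // Read-repair: push the winning version to any stale copy.
                for (Replica r : replicas) {
                    if (r.read(key).version() < newest.version()) r.write(key, newest);
                }
                return newest.value();
            }
        }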

    Read the article

  • How can I create tiles that scale to multiple resolutions?

    - by Darestium
    I am trying to create a multiplayer version of the popular Flash game N in Java. However, I'm not sure how to create a tileset that will scale up. Are the tiles for N pre-drawn, or are they defined with mathematical formulas in code? I do see how they would scale up in Flash if they were pre-rendered. So if anyone has any ideas how I should go about creating the tileset, or how they are created in the game, please let me know. You can check out the game here.
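
    If the tiles are defined as geometry rather than pre-rendered bitmaps, scaling to any resolution is just a transform at draw time. Here is a hedged Java2D sketch; the ramp-tile shape is an assumption for illustration, not taken from N itself.

        import java.awt.Graphics2D;
        import java.awt.RenderingHints;
        import java.awt.geom.AffineTransform;
        import java.awt.geom.Path2D;

        // Define a tile's shape in unit coordinates (0..1), then scale at render time.
        public class TileRenderer {
            // A ramp tile: a right triangle filling the lower-right half of the unit square.
            static final Path2D.Double RAMP = new Path2D.Double();
            static {
                RAMP.moveTo(0, 1); // bottom-left
                RAMP.lineTo(1, 1); // bottom-right
                RAMP.lineTo(1, 0); // top-right
                RAMP.closePath();
            }

            // Draw the tile at grid cell (col, row); tileSize is chosen per resolution.
            static void drawTile(Graphics2D g, int col, int row, double tileSize) {
                AffineTransform t =
                    AffineTransform.getTranslateInstance(col * tileSize, row * tileSize);
                t.scale(tileSize, tileSize); // unit coordinates -> pixels
                g.setRenderingHint(RenderingHints.KEY_ANTIALIASING,
                                   RenderingHints.VALUE_ANTIALIAS_ON);
                g.fill(t.createTransformedShape(RAMP)); // stays crisp at any size
            }
        }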

    Read the article

  • Two Weeks As A Software Estimation Rule of Thumb?

    - by Todd Williamson
    I saw a blog posting that spoke to me: http://james-iry.blogspot.com/2010/10/how-to-estimate-software.html Oddly, this is the kind of estimate that I tend to do on smaller projects. Just about everything is "two weeks" as that is comfortably far enough out. I once had an instructor walk us through how to create a more detailed estimate, wherein we already had the requirements up front, etc. and even after all the careful tabulation and such the final instruction was "Now that you have all this documentation go ahead and double it." Agile practitioners seem to like two weeks also as a sprint length. Is there something magical about two weeks? Is it a hrair number for our psyches or some other kind of crutch? Do you have an immediate default fall-back schedule strategy when you are pressed for an initial delivery date?

    Read the article

  • Add entries to Nautilus' right-click menu (copy, move to arbitrary directories)

    - by qbi
    Assume I want to copy a file from /home/foo/bar/baz to /opt/quuz/dir1/option3. When I try it with Nautilus, first I have to open the correct directory, copy the file, go to the other directory and paste it there. I was thinking of a better way, and old KDE3 versions of Konqueror came to mind. There it was possible to right-click on a file, and the context menu had an option for copying or moving the file to some default directories. Furthermore, you could select any directory under /. So for the above action one would right-click on a file, select /opt first, a list of subdirectories would open, select /opt/quuz, and so on. Using GNOME there are only two possible values (home and desktop). Is there any way to insert more directories into this context menu in GNOME? Can I somehow copy the behaviour of Konqueror?

    Read the article

  • Can't change brightness on Toshiba l755 laptop

    - by albert
    I have a Toshiba l755 laptop, and I installed 12.04 64-bit on it. The default brightness of the laptop is set to the maximum value, and I can't change it. If I go to Configuration > Brightness, the brightness doesn't change, and neither does pressing the function keys. I tried changing some parameters of the grub file, setting GRUB_CMDLINE_LINUX_DEFAULT="quiet splash acpi_osi=Linux acpi_backlight=vendor". When changing the brightness from the configuration, the value of /sys/class/backlight/toshiba/brightness changes, but the screen remains the same. If I use the function keys, it changes the "actual_brightness" file. I don't know what else I can do; any suggestions?

    Read the article

  • BizTalk 2009 - Project Creation Failed

    - by StuartBrierley
    A couple of weeks ago I had some issues with my BizTalk Server 2009 development environment which resulted in a reinstall of Visual Studio 2008 and the Visual Studio 2008 Service Pack 1. Following this reinstall I began to have problems when trying to create BizTalk 2009 projects: Error Details: "Create BizTalk Project …. Project Creation Failed". It turns out that this is a known issue with BizTalk Server 2009 and Visual Studio 2008, whereby the installation of the Visual Studio Service Pack 1 can corrupt the BizTalk installation, preventing the creation of any new projects. To resolve this issue go to Control Panel > Add or Remove Programs > Microsoft BizTalk Server 2009 and select Change or Remove. When the window opens, choose "Repair". Upon completion you should once again be able to create BizTalk projects.

    Read the article

  • How do you cope with ugly code that you wrote?

    - by Ralph
    So your client asks you to write some code, so you do. He then changes the specs on you, as expected, and you diligently implement his new features like a good little lad. Except... the new features kind of conflict with the old features, so now your code is a mess. You really want to go back and fix it, but he keeps requesting new things and every time you finish cleaning something, it winds up a mess again. What do you do? Stop being an OCD maniac and just accept that your code is going to wind up a mess no matter what you do, and just keep tacking on features to this monstrosity? Save the cleaning for version 2?

    Read the article

  • Has anyone used RemObjects' Hydra to mix a large Delphi project with new C# additions?

    - by robsoft
    (Hopefully this is deemed suitable for Programmers, not StackOverflow - I could imagine it getting closed at SO because there's no obvious 'right' answer.) We have a large Delphi 2007 VCL project that uses things like DBXpress, Report Builder, DevExpress and TMS components (both visual and non-visual) etc. For reasons I won't bore you with, the company would like to start adding new modules to the program using .Net (via C# in particular). Rewriting from scratch isn't an option and given the heavy use of Report Builder and various other bits of Delphi-specific 3rd party code, I suspect that using something like TurnSharp to regenerate a C# project wouldn't work well either. Ideally we want to keep our Win32 VCL Delphi code but add new modules (plug-ins, sections of contained functionality like wizards etc) via C#. So we're considering RemObjects' Hydra, and in the next few weeks will probably have a go at evaluating it on a smaller-but-representative project first. I wondered if anyone had experience of doing this kind of thing with Hydra...?

    Read the article

  • How can I determine if a cube is adjacent to another cube, and optimize its buffers if so?

    - by Christian Frantz
    I'm trying to optimize the rendering of a collection of cubes (based on an answer I was given to another question I asked). I understand the logic behind occlusion culling, but I'm having trouble with the code. When I create a cube, I want to determine if that cube is touching another existing cube, and if so I don't want to generate the redundant data in my vertex or index buffers. I'm planning on making a method that I call from my cube constructor, so that every time I create a cube these checks are made and neither occluded face is ever drawn. How would I go about this?
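
    One simple approach, sketched here in Java with made-up names: track cubes by integer grid position in a hash set, and when building a cube's buffers, generate vertices and indices only for the faces whose neighboring cell is empty.

        import java.util.ArrayList;
        import java.util.HashSet;
        import java.util.List;
        import java.util.Set;

        // Sketch: a face is emitted only when no cube occupies the cell on that side.
        public class CubeGrid {
            record Pos(int x, int y, int z) {}

            // The six axis-aligned neighbor offsets, one per cube face.
            static final int[][] FACES = {
                {1, 0, 0}, {-1, 0, 0}, {0, 1, 0}, {0, -1, 0}, {0, 0, 1}, {0, 0, -1}
            };

            private final Set<Pos> occupied = new HashSet<>();

            void addCube(Pos p) { occupied.add(p); }

            // Returns the face directions of the cube at p that should be drawn.
            List<int[]> visibleFaces(Pos p) {
                List<int[]> visible = new ArrayList<>();
                for (int[] f : FACES) {
                    Pos neighbor = new Pos(p.x() + f[0], p.y() + f[1], p.z() + f[2]);
                    if (!occupied.contains(neighbor)) visible.add(f); // nothing there: draw
                }
                return visible;
            }
        }

    Note that adding a cube next to an existing one also hides a face of the existing cube, so its buffers would need to be rebuilt (or the now-hidden face skipped) as well.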

    Read the article

  • Cancelling Route Navigation in AngularJS Controllers

    - by dwahlin
    If you're new to AngularJS check out my AngularJS in 60-ish Minutes video tutorial or download the free eBook. Also check out The AngularJS Magazine for up-to-date information on using AngularJS to build Single Page Applications (SPAs).

    Routing provides a nice way to associate views with controllers in AngularJS using a minimal amount of code. While a user is normally able to navigate directly to a specific route, there may be times when a user triggers a route change before they've finalized an important action such as saving data. In these types of situations you may want to cancel the route navigation and ask the user if they'd like to finish what they were doing so that their data isn't lost. In this post I'll talk about a technique that can be used to accomplish this type of routing task.

    The $locationChangeStart Event

    When route navigation occurs in an AngularJS application a few events are raised. One is named $locationChangeStart and the other is named $routeChangeStart (there are other events as well). At the current time (version 1.2) $routeChangeStart doesn't provide a way to cancel route navigation; however, the $locationChangeStart event can be used to cancel navigation. If you dig into the AngularJS core script you'll find the following code that shows how the $locationChangeStart event is raised as the $browser object's onUrlChange() function is invoked:

        $browser.onUrlChange(function (newUrl) {
            if ($location.absUrl() != newUrl) {
                if ($rootScope.$broadcast('$locationChangeStart', newUrl,
                        $location.absUrl()).defaultPrevented) {
                    $browser.url($location.absUrl());
                    return;
                }
                $rootScope.$evalAsync(function () {
                    var oldUrl = $location.absUrl();
                    $location.$$parse(newUrl);
                    afterLocationChange(oldUrl);
                });
                if (!$rootScope.$$phase) $rootScope.$digest();
            }
        });

    The key part of the code is the call to $broadcast. This call broadcasts the $locationChangeStart event to all child scopes so that they can be notified before a location change is made. To handle the $locationChangeStart event you can use the $rootScope.$on() function. For this example I've added a call to $on() into a function that is called immediately after the controller is invoked:

        function init() {
            //initialize data here..

            //Make sure they're warned if they made a change but didn't save it
            //Call to $on returns a "deregistration" function that can be called to
            //remove the listener (see routeChange() for an example of using it)
            onRouteChangeOff = $rootScope.$on('$locationChangeStart', routeChange);
        }

    This code listens for the $locationChangeStart event and calls routeChange() when it occurs. The value returned from calling $on is a "deregistration" function that can be called to detach from the event. In this case the deregistration function is named onRouteChangeOff (it's accessible throughout the controller). You'll see how the onRouteChangeOff function is used in just a moment.

    Cancelling Route Navigation

    The routeChange() callback triggered by the $locationChangeStart event displays a modal dialog to prompt the user. Here's the code for routeChange():

        function routeChange(event, newUrl) {
            //Navigate to newUrl if the form isn't dirty
            if (!$scope.editForm.$dirty) return;

            var modalOptions = {
                closeButtonText: 'Cancel',
                actionButtonText: 'Ignore Changes',
                headerText: 'Unsaved Changes',
                bodyText: 'You have unsaved changes. Leave the page?'
            };

            modalService.showModal({}, modalOptions).then(function (result) {
                if (result === 'ok') {
                    onRouteChangeOff(); //Stop listening for location changes
                    $location.path(newUrl); //Go to page they're interested in
                }
            });

            //prevent navigation by default since we'll handle it
            //once the user selects a dialog option
            event.preventDefault();
            return;
        }

    Looking at the parameters of routeChange() you can see that it accepts an event object and the new route that the user is trying to navigate to. The event object is used to prevent navigation, since we need to prompt the user before leaving the current view; notice the call to event.preventDefault() at the end of the function. The modal dialog is shown by calling modalService.showModal() (see my previous post for more information about the custom modalService that acts as a wrapper around Angular UI Bootstrap's $modal service). If the user selects "Ignore Changes" then their changes will be discarded and the application will navigate to the route they intended to go to originally. This is done by first detaching from the $locationChangeStart event by calling onRouteChangeOff() (recall that this is the function returned from the call to $on()) so that we don't get stuck in a never ending cycle where the dialog continues to display when they click the "Ignore Changes" button. A call is then made to $location.path(newUrl) to handle navigating to the target view. If the user cancels the operation they'll stay on the current view.

    Conclusion

    The key to cancelling routes is understanding how to work with the $locationChangeStart event and cancelling it so that route navigation doesn't occur. I'm hoping that in the future the same type of task can be done using the $routeChangeStart event, but for now this code gets the job done. You can see this code in action in the Customer Manager application available on Github (specifically the customerEdit view). Learn more about the application here.

    Read the article

  • Can I release complementary Windows 8 and WP8 apps on their respective stores?

    - by Clay Shannon
    I am creating a pair of apps, one to run preferably on tablets, but also laptops and PCs, and the other for WP8. These apps are complementary - having one is of no use without the other. I know there is a Windows Store, and a Windows Phone store, so one would be released on one, and one on the other. My question is: as these apps are useless by themselves (although in most cases it won't be the same people running both apps), will there be a problem with offering these useless-when-used-alone apps? IOW: Person A will use the Windows 8 app to interact with some people that have the WP8 app installed; those with the WP8 app will interact with a person or people who have the Windows 8 app installed. What I'm worried about is if these apps go through a certification process where they must be useful "standalone" - is that the case?

    Read the article

  • USB-live does not save files between sessions

    - by Mads Skjern
    I created a USB stick with Ubuntu, using the recommended tool "Startup Disk Creator" and the image for Ubuntu 13.10. The interface is very simple; there can't be much to misunderstand in this GUI. I have chosen to create a USB stick with a live version of Ubuntu which will save files and settings from session to session on the USB drive, right? Well, it just doesn't save anything. I go in, create a file on the desktop, restart, and it's gone. I did the whole procedure three times, i.e. first creating the USB, then testing if I could save. Have I misunderstood something?

    Read the article

  • Google I/O 2010 - Customizing Google Apps

    Google I/O 2010 - Customizing Google Apps & integrating with customer environments (Enterprise 201, Mike O'Brien, Matt Pruden (Appirio), Adam Graff (Genentech), Don Dodge (moderator)). Learn from real life examples of customizing Google Apps to meet customer requirements. Hear from the customer (Genentech) and the System Integrator (Appirio). Explore integration issues and deployment best practices with people who have done it. Get your questions answered in this session. For all I/O 2010 sessions, please go to code.google.com. From: GoogleDevelopers | Views: 6 | 0 ratings | Time: 52:00 | More in Science & Technology

    Read the article

  • Can anyone help solve this complex algorithmic problem?

    - by Locaaaaa
    I got this question in an interview and I was not able to solve it. You have a circular road, with N gas stations. You know the amount of gas that each station has. You know the amount of gas you need to go from one station to the next one. Your car starts with 0 gas. The question is: create an algorithm to determine from which gas station you must start driving. As an exercise, I would then translate the algorithm to C#.
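
    One well-known O(N) answer, sketched here in Java (it translates to C# almost line for line): if the total gas is at least the total cost then a valid start exists, and whenever the running tank goes negative, no station in the failed stretch can be the start, so the candidate jumps past it.

        // gas[i]: fuel available at station i; cost[i]: fuel needed to reach station i+1.
        public class GasStations {
            // Returns the index of a valid starting station, or -1 if none exists.
            static int findStart(int[] gas, int[] cost) {
                int total = 0; // net fuel over the whole loop
                int tank = 0;  // fuel since the current candidate start
                int start = 0;
                for (int i = 0; i < gas.length; i++) {
                    int net = gas[i] - cost[i];
                    total += net;
                    tank += net;
                    if (tank < 0) {    // can't reach station i+1 from 'start'
                        start = i + 1; // no station in (start..i] works either
                        tank = 0;
                    }
                }
                return total >= 0 ? start : -1;
            }

            public static void main(String[] args) {
                int[] gas  = {1, 2, 3, 4, 5};
                int[] cost = {3, 4, 5, 1, 2};
                System.out.println(findStart(gas, cost)); // prints 3
            }
        }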

    Read the article

  • How do I make Nautilus windows stick for drag & drop?

    - by e-satis
    When you drag and drop a folder with Nautilus, you must carefully position both windows on non-overlapping areas of your screen, because otherwise selecting one folder will bring its window to the front, hiding the second one. On Windows, doing so will keep the explorer.exe window at the back and let you drag and drop the folder; I suppose it detects a long click to decide whether or not to bring the window to the front. Is that possible with Ubuntu? Now, I know that Nautilus has split panels by pressing F3, but that's not handy. Most of the time, you open a folder, THEN decide to copy. With the split panel, you must decide, THEN split the panel and go to the right folder.

    Read the article

  • What has Ubuntu contributed to the Linux Kernel?

    - by Luis Alvarado
    This question is similar to this one: What unique enhancements and features has Ubuntu brought to the Linux Community. But in this case it is directed towards what Ubuntu has contributed to the official Linux kernel. For example, many times I hear about Intel contributing patches to the Linux kernel, like the latest RC6 patches and others related to the recent Sandy/Ivy Bridge chips. In another group, Android did an upstream patch, and a lot of ARM patches have also come to the Linux kernel. I have seen only a small percentage of the companies and groups that have contributed to the Linux kernel (http://kernel.org), but I want to know: since the beginning of Ubuntu till now, what has Ubuntu contributed to the Linux kernel, in regards to any aspect of the kernel? For kernel information I typically go to http://kernelnewbies.org and http://kernel.org

    Read the article

< Previous Page | 339 340 341 342 343 344 345 346 347 348 349 350  | Next Page >