Search Results

Search found 59199 results on 2368 pages for 'time wasters'.


  • Printing takes forever and gives pixelated prints

    - by kelvinsong
    I have a Brother HL-3070CW printer, which has some serious issues. Printing takes an abnormally long time, and if the file contains anything more than text, the output is horribly aliased and pixelated. The printer also sometimes complains about running out of memory. This problem has appeared on and off throughout various Ubuntu releases, and can sometimes be worked around by saving the file as PostScript through Evince and then printing the PS file from Evince. I tried to install the drivers from the Brother website, but I get the error "dependency not satisfiable: hl3070cwlpr". Help, I'm wasting a huge amount of paper and ink (not to mention time) reprinting pages over and over again in trial and error.

    Read the article

  • Moving from one DNS provider to another

    - by Senthil Kumaran
    I registered a domain with a DNS provider, X, and I have been unhappy with their services, so when the time for renewal came I did not renew and let it expire. I am hoping that once it has expired with this provider, I will be able to register the same domain name with an alternative provider that I have tested and am satisfied with. What kind of precautions should I take? The domain name is not a critical one (it belongs to an NGO), but we would prefer to own it again without any change in the name. The expiry notice says: "Domains can be renewed between 90 days before and 14 days after the expiry date. If domains are not renewed they will be removed from the account and set for deletion." Should I wait until it gets deleted at their end so that I can register it with another provider?

    Read the article

  • Mixing Forms and Token Authentication in a single ASP.NET Application (the Details)

    - by Your DisplayName here!
    The scenario described in my last post works because of the design around HTTP modules in ASP.NET. Authentication-related modules (like Forms authentication and WIF WS-Fed/Sessions) typically subscribe to three events in the pipeline: AuthenticateRequest/PostAuthenticateRequest for pre-processing, and EndRequest for post-processing (like making redirects to a login page). In the pre-processing stage it is the modules' job to determine the identity of the client based on incoming HTTP details (like a header, cookie, or form post) and set HttpContext.User and Thread.CurrentPrincipal. The actual page (in the ExecuteHandler event) "sees" the identity that the last module has set. So in our case there are three modules in effect:

    - FormsAuthenticationModule (AuthenticateRequest, EndRequest)
    - WSFederationAuthenticationModule (AuthenticateRequest, PostAuthenticateRequest, EndRequest)
    - SessionAuthenticationModule (AuthenticateRequest, PostAuthenticateRequest)

    So let's have a look at the different scenarios we get when mixing Forms auth and WS-Federation.

    Anonymous request to an unprotected resource. This is the easiest case. Since there is no WIF session cookie or FormsAuth cookie, these modules do nothing. The WS-Fed module creates an anonymous ClaimsPrincipal and calls the registered ClaimsAuthenticationManager (if any) to transform it. The result (by default an anonymous ClaimsPrincipal) gets set.

    Anonymous request to a FormsAuth-protected resource. This is the scenario where an anonymous user tries to access a FormsAuth-protected resource for the first time. The principal is anonymous, and before the page gets rendered the Authorize attribute kicks in. The attribute determines that the user needs authentication and therefore sets a 401 status code and ends the request. Execution then jumps to the EndRequest event, where the FormsAuth module takes over and converts the 401 into a redirect (302) to the forms login page. If authentication is successful, the login page sets the FormsAuth cookie.

    FormsAuth-authenticated request to a FormsAuth-protected resource. Now a FormsAuth cookie is present, which gets validated by the FormsAuth module and turned into a GenericPrincipal/FormsIdentity combination. The WS-Fed module turns that principal into a ClaimsPrincipal and calls the registered ClaimsAuthenticationManager. The outcome of that gets set on the context.

    Anonymous request to an STS-protected resource. This time the anonymous user tries to access an STS-protected resource (a controller decorated with the RequireTokenAuthentication attribute). The attribute determines that the user needs STS authentication by checking the authentication type on the current principal. If this is not Federation, the redirect to the STS is made. After successful authentication at the STS, the STS posts the token back to the application (using WS-Federation syntax).

    Postback from STS authentication. After the postback, the WS-Fed module finds the token response and validates the contained token. If successful, the token gets transformed by the ClaimsAuthenticationManager, and the outcome is a) stored in a session cookie and b) set on the context.

    STS-authenticated request to an STS-protected resource. This time the WIF session authentication module kicks in because it finds the previously issued session cookie. The module re-hydrates the ClaimsPrincipal from the cookie and sets it.

    FormsAuth- and STS-authenticated request to a protected resource. This is kind of an odd case, e.g. the user first authenticated using Forms and after that using the STS. This time the FormsAuth module does its work, and afterwards the session module stomps over the context with the session principal. In other words, the STS identity wins.

    What about roles? A common way to set roles in ASP.NET is to use the role manager feature. There is a corresponding HTTP module for that (RoleManagerModule) that handles PostAuthenticateRequest. Does this collide with the above combinations? No, it doesn't. When the WS-Fed module turns existing principals into a ClaimsPrincipal (like it did with the FormsIdentity), it also checks for RolePrincipal (which is the principal type created by role manager) and turns the roles into role claims. Nice! But as you can see in the last scenario above, this might result in unnecessary work, so I would rather recommend consolidating all role work (and other claims transformations) into the ClaimsAuthenticationManager. In there you can check the authentication type of the incoming principal and act accordingly. HTH
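    To see why "the last module wins", here is a minimal toy simulation of the pre-processing stage. This is not actual ASP.NET or WIF code (the real stack here is C#), and all names in it are invented for illustration; it only models the event ordering described above.

    ```python
    # Toy model of the ASP.NET authentication pipeline ordering (illustrative only).

    class Context:
        """Stands in for HttpContext: whoever writes `user` last wins."""
        def __init__(self):
            self.user = "anonymous"

    def forms_authentication_module(ctx):
        # AuthenticateRequest: validates the FormsAuth cookie and sets a
        # GenericPrincipal/FormsIdentity combination.
        ctx.user = "GenericPrincipal(FormsIdentity)"

    def session_authentication_module(ctx):
        # PostAuthenticateRequest: re-hydrates the ClaimsPrincipal from the
        # WIF session cookie, stomping over whatever was set before.
        ctx.user = "ClaimsPrincipal(from WIF session cookie)"

    ctx = Context()
    # Pre-processing runs the modules in pipeline order; the page
    # (ExecuteHandler) only ever sees the final value.
    for module in (forms_authentication_module, session_authentication_module):
        module(ctx)

    print(ctx.user)  # ClaimsPrincipal(from WIF session cookie): the STS identity wins
    ```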

    Read the article

  • Xobni Plus for Outlook [Review]

    - by The Geek
    Overview. Xobni Plus is an add-in that brings a sidebar to Outlook which makes searching through your inbox and contacts a lot easier, and lets you keep track of your favorite social networks. Searching with Xobni is a lot more powerful than the default search feature in Outlook: it lets you drill your searches down to conversations, email, links, and attachments. It now supports both the 32-bit and 64-bit versions of Outlook 2010.

    Installation & Setup. Installation is easy following the wizard. After completing the wizard you can tell your friends on Facebook and Twitter that you are now using it, and you can decide whether to join their Product Improvement Program. After installation, when you open Outlook, Xobni appears as a sidebar on the right side. Don't worry about it always being in the way; you can hide it if you need more room for other Outlook functions. Once the free version is installed, you can upgrade to the Plus version at any time: a new window opens and you can pay with your credit card or PayPal, or redeem a code if you have one.

    Features & Use. Where to begin with the amount of features available in Xobni Plus? It really has an amazing number of cool features. Of course you get all of the features of the free version, which we previously covered, and a lot more. After Xobni is installed you'll notice a section for it on the Ribbon. From here you can search Xobni, show or hide the sidebar, and change other options. It allows you to easily keep up with various social networks like Facebook, Twitter, and LinkedIn. Check out email analytics and contact ranks. Click on the Files Exchanged tab to search for specific attachments, or quickly search links exchanged with your contacts; hover over a link to get a preview of what it contains. It also gives you the ability to index all of your Yahoo mail, without the need to purchase Yahoo Plus: your Yahoo messages then appear in the Xobni sidebar, and when you select a contact you can see related messages from your Yahoo account. There are several options you can set to change the way Xobni works, from setting up your Yahoo email to indexing options and much more.

    Additional features of Xobni Plus:

    - Advanced search capabilities: filter results, Boolean & phrase search, the ability to search appointments & tasks, and an advanced search builder
    - Search unlimited PST data files
    - Xobni contacts in the compose screen
    - Find links exchanged with your contacts
    - View calendar appointments
    - One year of premium tech support
    - No ads!

    Performance. We ran Xobni Plus on Outlook 2010 32-bit on a dual-core AMD Athlon system with 4GB of RAM and found it to run quite smoothly. However, we did notice it would sometimes slow down launching Outlook, especially if other apps are running at the same time.

    Product Support. When you buy a license for Xobni Plus you get a full year of premium tech support. They provide a Questions and Answers page on their site where you can run a search query and answers appear instantly. As a Plus member you can contact support directly through their web form; they advise a turnaround time of 2 business days, but when we tested it, we received a response within 24 hours. They also provide an FAQ and a community forum, and you can download the owner's manual in PDF format from the support page.

    Conclusion. Xobni Plus is a very powerful add-in for Outlook and includes a lot more features than we covered in this review. You can download the free edition of Xobni, which includes an 8-day free trial of the Plus version; this provides a good way to start getting familiar with it. Then upgrade to Xobni Plus at any time for $29.95. Once you get started, you'll find the sidebar is nicely laid out and intuitive to use. If you live out of Outlook during the day, Xobni Plus is a great addition for fast and powerful searches. It provides an easy way to keep all of your contacts and messages well organized and easy to find. Xobni Plus works with XP, Vista, and Windows 7 (32 & 64-bit editions), and with Outlook 2003, 2007, and both 32 & 64-bit editions of Outlook 2010.

    Download Xobni Plus
    Download Xobni Free Edition

    Rating: Installation 8, Ease of Use 8, Features 9, Performance 8, Product Support 8

    Read the article

  • Of transactions and Mongo

    - by Nuri Halperin
    Originally posted on: http://geekswithblogs.net/nuri/archive/2014/05/20/of-transactions-and-mongo-again.aspx

    What's the first thing you hear about NoSQL databases? That they lose your data? That there are no transactions? No joins? No hope for "real" applications? Well, you *should* be wondering whether a certain kind of database is the right one for your job. But if you do so, you should be wondering that about "traditional" databases as well!

    In the spirit of exploration, let's take a look at a common challenge: You are a bank. You have customers with accounts. Customer A wants to pay B. You want to allow that only if A can cover the amount being transferred. Let's look at the problem without the context of any database engine in mind. What would you do? How would you ensure that the transfer is done "properly"? Would you prevent a "transaction" from taking place unless A can cover the amount? There are several options:

    1. Prevent any change to A's account while the transfer is taking place. That boils down to locking.
    2. Apply the change, and allow A's balance to go below zero. Charge person A some interest on the negative balance. Not friendly, but certainly a choice.
    3. Don't do either.

    Options 1 and 2 are difficult to attain in the NoSQL world, and Mongo won't save you headaches here either. Option 3 looks a bit harsh. But here's where this can go: a ledger. See, an account doesn't need to be represented by a single row in a table of all accounts with only the current balance on it. More often than not, accounting systems use ledgers, and entries in ledgers, as it turns out, don't actually get updated. Once a ledger entry is written, it is not removed or altered. A transaction is represented by an entry in the ledger stating an amount withdrawn from A's account and an entry in the ledger stating an addition of said amount to B's account. For the sake of space-saving, this can happen as one entry. Think {Timestamp, FromAccountId, ToAccountId, Amount}.

    The implication of the original question, "how do you enforce the non-negative balance rule", then boils down to:

    1. Insert an entry in the ledger.
    2. Run validation of recent entries.
    3. Insert a reverse entry to roll back the transaction if validation failed.

    What is validation? Sum up the transactions that A's account has (all deposits and debits), and ensure the balance is positive. For the sake of efficiency, one can roll up transactions and "close the book" on them with a pseudo entry stating the balance as of midnight or something. This lets you avoid doing math on the fly over too many transactions; you simply run from the latest "approved balance" marker to date. But that's an optimization, and premature optimizations are the root of (some? most?) evil.

    Back to some nagging questions, though: "But Mongo is only eventually consistent!" Well, yes, kind of. It's not actually true that Mongo has no transactions. It would be more descriptive to say that Mongo's transaction scope is a single document in a single collection. A write to a Mongo document happens completely or not at all. So although it is true that you can't update more than one document "at the same time" under a "transaction" umbrella as an atomic update, it is NOT true that there is no isolation. A competition between two concurrent updates is completely coherent, and the writes will be serialized. They will not scribble on the same document at the same time.
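    To make the three steps concrete, here is a minimal sketch of the ledger approach using pymongo. The collection name, the field names, and the "validated" flag (which the artificial-delay discussion further down relies on) are illustrative assumptions, not something the post prescribes; the point is that each step touches one document at a time, and the rollback is an insert rather than a multi-document update.

    ```python
    # Sketch: non-negative balance via an append-only ledger (pymongo).
    from datetime import datetime, timezone
    from pymongo import MongoClient

    ledger = MongoClient()["bank"]["ledger"]  # assumed db/collection names

    def transfer(from_acct, to_acct, amount):
        # 1. Insert the ledger entry: one document, written atomically.
        entry = {
            "ts": datetime.now(timezone.utc),
            "from": from_acct,
            "to": to_acct,
            "amount": amount,
            "validated": False,
        }
        entry_id = ledger.insert_one(entry).inserted_id

        # 2. Validate: sum deposits minus withdrawals for the source account.
        deposits = sum(e["amount"] for e in ledger.find({"to": from_acct}))
        withdrawals = sum(e["amount"] for e in ledger.find({"from": from_acct}))
        if deposits - withdrawals < 0:
            # 3. Roll back by inserting a compensating (reverse) entry;
            #    ledger entries are never removed or altered.
            ledger.insert_one({
                "ts": datetime.now(timezone.utc),
                "from": to_acct,
                "to": from_acct,
                "amount": amount,
                "compensates": entry_id,
            })
            return False

        ledger.update_one({"_id": entry_id}, {"$set": {"validated": True}})
        return True
    ```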
    In our case, in choosing a ledger approach, we're not even trying to "update" a document; we're simply adding a document to a collection. So there goes the "no transactions" issue.

    Now let's turn our attention to consistency. What you should know about Mongo is that at any given moment, only one member of a replica set is writable. This means that the writable instance in a set of replicated instances always has "the truth". There could be a replication lag such that a reader going to one of the replicas still sees the "old" state of a collection or document. But in our ledger case, things fall nicely into place: run your validation against the writable instance. It is guaranteed to have a ledger either with (after) or without (before) the ledger entry written. No funky states. Again, the ledger writing *adds* a document, so there's no inconsistent document state to be had either way.

    Next, we might worry about data loss. Here, Mongo offers several write-concerns. A write-concern in Mongo is a mode that marshals how uptight you want the db engine to be about actually persisting a document write to disk before it reports to the application that it is "done". The most volatile is to say you don't care. In that case, Mongo would just accept your write command and say back "thanks", with no guarantee of persistence. If the server loses power at the wrong moment, it may have said "ok" but not actually written the data to disk. That's kind of bad. Don't do that with data you care about. It may be good for votes on a poll about how cute a furry animal is, but not so good for business. The other write-concerns vary from flushing the write to the disk of the writable instance, to flushing to disk on several members of the replica set, a majority of the replica set, or all members of the replica set. The first choice is the quickest, as no network coordination is required beyond the main writable instance; the others impose extra network and time cost. Depending on your tolerance for latency and read-lag, you will face a choice of what works for you. It's really important to understand that no data loss occurs once a document is flushed to an instance: the record is on disk at that point. From that point on, backup strategies and disaster recovery are your worry, not loss of power to the writable machine. This scenario is no different from a relational database at that point.

    Where does this leave us? Oh, yes. Eventual consistency. By now, we have ensured that the "source of truth" instance has the correct data, persisted and coherent. But because of lag, the app may have gone to the writable instance, performed the update, and then gone to a replica and looked at the ledger there before the transaction replicated. Here are two options to deal with this. Similar to write-concerns, Mongo supports read preferences. An app may choose to read only from the writable instance. This is not an awesome choice to make for every read, because it just burdens the one instance and doesn't make use of the other read-only servers. But this choice can be made on a query-by-query basis. So for the app that our person A is using, we can have person A issue the transfer command to B, and then if that same app is going to immediately ask "are we there yet?", we'll query that same writable instance. But B and anyone else in the world can just chill and read from a read-only instance. They have no basis to expect that the ledger has just been written to.
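    In pymongo terms, the write-concern and read-preference knobs discussed above look roughly like this (a sketch, using the same assumed names as before):

    ```python
    # Sketch: choosing durability and read routing per collection handle (pymongo).
    from pymongo import MongoClient, ReadPreference, WriteConcern

    db = MongoClient()["bank"]

    # Don't report "done" until a majority of replica-set members have the write:
    durable_ledger = db["ledger"].with_options(write_concern=WriteConcern(w="majority"))

    # Person A's immediate "are we there yet?" check goes to the writable primary...
    a_view = db["ledger"].with_options(read_preference=ReadPreference.PRIMARY)
    # ...while B and everyone else read from secondaries and tolerate the lag.
    others_view = db["ledger"].with_options(
        read_preference=ReadPreference.SECONDARY_PREFERRED)
    ```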
    So as far as they know, the transaction hasn't happened until they see it appear later. We can further relax the demand by creating an application UI that reacts to a write command with "thank you, we will post it shortly" instead of "thank you, we just did everything and here's the new balance". This is a very powerful thing. UI design for highly scalable systems can't insist that all databases be locked just to paint an "all done" on screen. People understand. They have already been trained by many online businesses that placing an order does not mean the product is already outside your door waiting (yes, I know, large retailers are working on it... but we're not there yet).

    The second thing we can do is add some artificial delay to a transaction's visibility on the ledger. The way that works is simply adding some logic such that a query against the ledger never returns a transaction newer than, say, 15 minutes old whose validation flag is not set. This buys us time in two ways: replication can catch up to all instances by then, and validation rules can run and determine whether this transaction should be "negated" with a compensating transaction. In case we do need to "roll back" the transaction, the backend system can place the timestamp of the compensating transaction at the exact same time as, or 1ms after, the original one. Effectively, once A or B visits their ledger, both transactions would be visible and the overall balance "as of now" would reflect no change. The two transactions (attempted/reverted) would both be visible, since we do actually account for the attempt.

    Hold on a second. There's a hole in the story: what if several transfers from A to some accounts are registered, and two independent validators attempt to compute the balance concurrently? Is there a chance that both would conclude non-sufficient funds, even though rolling back transaction 100 would free up enough for transaction 117 (some random later transaction)? Yes, there is that chance. But the integrity of the business rule is not compromised, since the prime rule is: don't dispense money you don't have. To minimize or eliminate this scenario, we can also assign a single validation process per origin account. This may seem non-scalable, but it can easily be done as a "sharded" distribution. Say we have 11 validation threads (or processing nodes, etc.). We divide the account number space such that each validator is exclusively responsible for a certain range of account numbers. Sounds cunningly similar to Mongo's sharding strategy, doesn't it? Each validator then works in isolation. More capacity needed? Chop the account space into more chunks.

    So where are we now with the nagging questions? "No joins": Huh? What are those for? "No transactions": You mean no cross-collection and no cross-document transactions? Granted, but you don't always need them either. "No hope for real applications": Well... There are more issues and edge cases to slog through, I'm sure. But hopefully this gives you some ideas of how to solve common problems without distributed locking and relational databases. But then again, you can choose relational databases if they suit your problem.
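    Going back to the "single validation process per origin account" idea a few paragraphs up: it is just deterministic routing. A tiny sketch (the modulo scheme and the count of 11 workers mirror the example above; any range partitioning of the account space works the same way):

    ```python
    NUM_VALIDATORS = 11  # the 11 validation threads/nodes from the example

    def validator_for(from_account_id: int) -> int:
        # All transfers out of a given account land on the same validator,
        # so two workers never compute one account's balance concurrently.
        # Need more capacity? Raise the count and re-chop the account space.
        return from_account_id % NUM_VALIDATORS
    ```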

    Read the article

  • How to make a great functional specification

    - by sfrj
    I am going to start a little side project very soon, but this time I want to do more than just the little UML domain model and use case diagrams I often do before programming: I am thinking about writing a full functional specification. Does anybody with experience writing functional specifications have recommendations for what I need to include? What would be the best way to start preparing it? Here are the topics that I think are most relevant:

    - Purpose
    - Functional Overview
    - Context Diagram
    - Critical Project Success Factors
    - Scope (In & Out)
    - Assumptions
    - Actors (Data Sources, System Actors)
    - Use Case Diagram
    - Process Flow Diagram
    - Activity Diagram
    - Security Requirements
    - Performance Requirements
    - Special Requirements
    - Business Rules
    - Domain Model (Data Model)
    - Flow Scenarios (Success, Alternate…)
    - Time Schedule (Task Management)
    - Goals
    - System Requirements
    - Expected Expenses

    What do you think about these topics? Should I add something else, or maybe remove something?

    Read the article

  • Volume setting isn't remembered after restart/shutdown

    - by Iulian
    This is my first time here, and I'm new to Linux and to Ubuntu. I first installed version 11.10, and there were some problems with the Unity dock, as well as this problem with the volume not being remembered after a restart or shutdown. I'm dual-booting with Windows 7; Ubuntu was installed after Windows. I have two sound cards: one onboard, on the motherboard, and an external E-MU 0404 USB 2.0 sound card. The latter is my primary sound card and I have chosen it as the default output device. I upgraded to 12.04 hoping that this was fixed, but even in this version the OS doesn't keep the volume where it was last time. The big problem is that sometimes I forget about this, start music, and it starts at full volume; one day I think I will die of a heart attack. Is there a way to make it remember the volume, or at least to tell it to start at a specific volume rather than 100%?

    Read the article

  • USB Drive Warm Boot Issue. Cold boot = Perfect. Warm boot = "No boot sector found on USB device"

    - by Dan
    I'm experiencing a boot/reboot quirk similar to, yet somewhat different from, others posted here. I have two Western Digital "Passport" external USB2 drives (160GB and 320GB) with Ubuntu 11.10 installed. The BIOS on my laptop is set to boot USB drives before the internal drive, and it works well: either drive will cold-boot Ubuntu 11.10 perfectly. However, at times the computer must be restarted (warm boot) for software updates or other changes to take effect. When I warm-boot the computer, it goes through its normal shutdown/restart, then the usual BIOS tests, at which time the USB drive is selected momentarily (as normal), and that's when the trouble starts. I get a "No boot sector found on USB device" message, and the computer proceeds to boot Windows from the internal drive. If I press F12 early during the restart sequence (which allows manual selection of the boot sequence), USB is still shown before the internal drive on the sequence list (as it should be). If I highlight "USB" and press ENTER, I still get the "No boot sector..." error message, but the external USB drive then proceeds to boot Ubuntu normally. I have a third external USB drive (a Western Digital 160GB Media Center) that cold-boots and warm-boots perfectly, so I know everything in the computer is set up properly. It also tells me Ubuntu is set up correctly on the drives, as all three drives were done identically, and at the same time. I've tried formatting the Passport drives NTFS and FAT32, but it didn't change anything. If I power down the computer, then start it up again, it will boot Ubuntu from either of the Passport drives perfectly, every time. It's as if during a warm boot the Passport drives are somehow looking for the boot loader in the wrong location on the drive, yet either drive works when booting from a cold start. Not a show-stopper, but frustrating. The computer is a Dell XPS M1530 with the A12 BIOS (latest), but I don't think the computer plays into the issue, because one of the three drives works at all times. It's only the two WD Passport models at issue here. Thanks!

    Read the article

  • How to write efficient code in spite of heavy deadlines

    - by gladysbixly
    Hi all, I am working in an environment where we have many projects with strict deadlines on deliverables. We even talk directly to the clients, so getting the job done, and fast, is a must. My issue is that I always write code for the first solution that comes to my mind, which of course I think is best at that moment. It always ends up ugly, though, and I later realize that there are better ways to do it, but I can't afford to change it due to time restrictions. Are there any tips by which I could make my code efficient and still deliver on time?

    Read the article

  • Clock is Ticking for OBI 10g Supported Customers to move to OBI 11g

    - by Mike.Hallett(at)Oracle-BI&EPM
    Now is a perfect time for partners to approach existing OBI 10g customers, to encourage and help them to upgrade to all the great new features in the current OBI 11g, including the ability to go mobile, to run in-memory on Exalytics, and to get tighter integration with Hyperion applications, Strategic Scorecards, and Essbase. The Oracle Lifetime Support Policy for Oracle BI Suite version 10gR3 ends normal support in July 2013. The final point release of Oracle Business Intelligence EE & Publisher 10gR3 was 10.1.3.4.x, which was generally available from July 2008 and will reach the end of "Premier Support" in July 2013. From that time, customers may purchase "Extended Support" until July 2015, and from then "Sustaining Support" indefinitely. For more information:

    - Upgrade to Oracle Business Intelligence Enterprise Edition 11g, article on Technet
    - Planning to Upgrade from Oracle BI 10g to BI 11g?, at docs.oracle.com
    - Oracle Business Intelligence Enterprise Edition 11g
    - Oracle Lifetime Support Policy

    Read the article

  • How do I improve my logic in general and programming in particular?

    - by Dinesh Venkata
    I'm good at understanding technology and implementing it, or at least that is what I feel. But it seems that when I come across experienced programmers, they point out that my logic is weak. I feel that I would need some time doing real programming to improve it, but nobody is ready to give me that time. I'm just about to start my career, and it often feels disheartening to hear this. I want to know how I can improve my logic, and also: does this sort of thing happen to others too?

    Read the article

  • update-apt-xapian-index uses 100% CPU, even when Update Manager is set to not check for updates

    - by Dave M G
    I have a slightly older laptop running Ubuntu 11.10. It runs fine, but frequently, when I start it up, the CPU monitor in my GNOME panel shows 100% usage for what can be up to five minutes or so. It seems that the offending process is update-apt-xapian-index, which, if I understand correctly, is the update manager checking for updates. I have gone into the update manager settings and selected to never check for updates; I'll do that manually when I feel I have the time to leave the laptop running for it. However, despite my selection, this still happens: roughly 50% of the time or more, when I start my laptop, it runs update-apt-xapian-index. How can I get the update manager to respect my settings, or at least get this process to stop eating my CPU cycles?

    Read the article

  • Google I/O 2010 - Cloud computing for geospatial apps

    Google I/O 2010 - Unleash your map data: Cloud computing for geospatial applications (Geo 101), presented by Tom Manshreck. The Google Maps API made geospatial development accessible to all, but hosting your data remains complex and time consuming. This session details the services Google offers for storing your geospatial data in the cloud, illustrates the ways in which that data can be accessed and visualized, and walks through the development of a retail store finder using these technologies. For all I/O 2010 sessions, please go to code.google.com. (Video: 40:22, from GoogleDevelopers)

    Read the article

  • Oracle IRM Desktop update

    - by martin.abrahams
    Just in time for Christmas, we have made a fresh IRM Desktop build available with a number of valuable enhancements:

    - Office 2010 support
    - Adobe Reader X support
    - Enhanced compatibility with SharePoint
    - The ability to enable the Sealed Email for Lotus Notes integration during IRM Desktop installation

    The kit is currently available as a patch that you can access by logging in to My Oracle Support and looking for patch 9165540. The patch enables you to download a package containing all 27 language variants of the IRM Desktop. We will be making the kit available from OTN as soon as possible, at which time you will be able to pick a particular language if preferred.

    Read the article

  • Iterative and Incremental Principle Series 2: Finding Focus

    - by llowitz
    Welcome back to the second blog in a five-part series where I recount my personal experience with applying the Iterative and Incremental principle to my daily life. As you may recall from part one of the series, a conversation with my son prompted me to think about practical applications of the Iterative and Incremental approach, and I realized I had incorporated this principle into my exercise regime.

    I have been a runner since college, but about a year ago I sustained an injury that prevented me from exercising. When I was sufficiently healed, I decided to pick it up again. Knowing it was unrealistic to pick up where I left off, I set a goal of running 3 miles, or approximately 30 minutes. I was excited to get back into running and determined to meet my goal. Unfortunately, after what felt like a lifetime, I looked at my watch and realized that I had 27 agonizing minutes to go! My determination waned and my positive "I can do it" attitude was overridden by thoughts of "this is impossible". My initial focus and excitement were not sustained, so I never met my goal.

    Understanding that the 30-minute run was simply too much for me mentally, I changed my approach and decided to try interval training. For each interval, I planned to walk for 3 minutes, then jog for 2 minutes, and finally sprint for 1 minute, and I planned to repeat this pattern 5 times. I found that each interval set was challenging yet achievable, leaving me excited and invigorated for my next interval. I easily completed five intervals, or 30 minutes! My sense of accomplishment soared.

    What does this have to do with OUM? Have you heard the saying "How do you eat an elephant? One bite at a time!"? This adage certainly applies in my example and in an OUM systems implementation. It is easier to manage, track progress, and maintain team focus for weeks at a time rather than for months at a time. With shorter milestones, the project team focuses on the iteration goal. Once the iteration goal is met, a sense of accomplishment is experienced, and the team can be refocused on a fresh yet achievable new challenge. Join me tomorrow as I expand the concept of Iterative and Incremental by taking a step back to explore the recommended approach for planning your iterations.

    Read the article

  • Is it worth it to learn some programming before college?

    - by Howlgram
    I'm in my last year of high school and I am very interested in programming. The thing is, I don't know if it would be worth it to start learning programming now; I'm afraid that when I get to college they might just teach everything I will have learned in a few classes, making the time spent learning it before college a waste. Right now I know absolutely nothing about programming. There are other similar questions where people answer saying it is good to learn beforehand, but I doubt their situation is like mine: maybe they had much more time to learn programming before college (as in, they were not in their last year of high school), so they could learn some serious skills, and they say they already know at least the basics of a language. I know nothing, and I have no idea how much I could learn this year. If it wasn't clear, I want to study computer science.

    Read the article

  • Software design methods for Java or any other programming language

    - by IkerB
    I'm a junior programmer and I would like to know how professionals write their code and which steps they follow when they are creating new software: which programming methodology they use, how they design the application's software architecture, etc. I would like to find a tutorial that explains, from the beginning, which steps I have to follow from the idea I have in my mind to the final version of the application, in any language. Or perhaps you could share the programming steps or rules that you follow. I ask because every time I want to create an application, I spend little time on design and a lot of time coding (I know, that's not good).

    Read the article

  • What camera to choose for using with JMF (Java Media Framework)

    - by Leron
    For the past 2-3 weeks I've been searching for different ways to implement custom video streaming and, more generally, video capturing and manipulation, going through DVR cards, video capture cards, and things like that. Somehow JMF stayed out of my sight all this time, but since I found out about it, I'm more and more sure that this is a comfortable level for me to start playing around with video. One major topic that kept coming up while searching for more info was the number of posts where people complain that some particular camera (most of the time, I think, they mean webcams) doesn't work with JMF. Even though there are a lot of different cameras (not necessarily webcams) that are not that expensive, I want to play it safe and buy one that is proven to work well with JMF. Also, due to lack of experience, maybe this is irrelevant, but since I'll buy the camera mostly for learning and experimenting, I want to have the maximum freedom possible to mess with different features, options, and so on.

    Read the article

  • Removing mdadm array and converting to regular disks while preserving data

    - by Jeffrey Kevin Pry
    I have a 6-disk (2TB each) mdadm RAID 5 volume created in Ubuntu 12.04 Server. However, I'm moving to a different solution and want to "unraid" my disks but keep the data; only 50% of the space is in use. From what I can surmise, I basically have to do this recursively for each physical disk:

    1. Fail the disk
    2. Format the failed disk
    3. Move a portion of the files to the new disk
    4. Reshape the array
    5. Shrink the logical volume md0

    This seems like a very time-consuming process. Is there an easier way to do this (automatically, perhaps) without buying new disks to temporarily hold the data? I am also aware that during this process my RAID volume will be degraded and vulnerable the entire time. I am not too concerned about this, and will be using battery backup and moving the most important files off first. Thank you for your help!
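    For concreteness, one pass of the cycle above might look like the sketch below, written in Python only so the steps read in order. Every device name, mount point, and size is a placeholder, and the exact mdadm/resize2fs invocations (especially the size arguments, whose units vary between versions) must be verified against your setup before running anything against real disks.

    ```python
    # Hedged sketch of one fail/format/move/shrink/reshape pass. Placeholders throughout.
    import subprocess

    def run(cmd):
        print("+", cmd)
        subprocess.run(cmd, shell=True, check=True)

    # 1. Fail and remove one member of the array.
    run("mdadm --manage /dev/md0 --fail /dev/sdf1")
    run("mdadm --manage /dev/md0 --remove /dev/sdf1")

    # 2. Format the freed disk as a standalone filesystem and mount it.
    run("mkfs.ext4 /dev/sdf1")
    run("mount /dev/sdf1 /mnt/freed")

    # 3. Move a portion of the files onto the freed disk.
    run("rsync -a --remove-source-files /mnt/raid/some-dir/ /mnt/freed/some-dir/")

    # 4./5. Shrink the filesystem first, then the array size, then reshape to
    #       one fewer member (this order matters; sizes are placeholders).
    run("resize2fs /dev/md0 6T")
    run("mdadm --grow /dev/md0 --array-size=6T")
    run("mdadm --grow /dev/md0 --raid-devices=5 --backup-file=/root/md0-reshape.backup")
    ```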

    Read the article

  • How do you handle increasingly long compile times when working with templates?

    - by Ghita
    I use Visual Studio 2012, and we have cases where we added template parameters to a class "just" in order to introduce a "seam point", so that in unit tests we can replace those parts with mock objects. How do you usually introduce seam points in C++: using interfaces, or also, based on some criteria, mixing in implicit interfaces by using template parameters? One reason to ask this is that compiling a single C++ file (one that includes template files, which may in turn include other templates) sometimes results in an object file that takes on the order of 5-10 seconds to generate on a developer machine. The VS compiler is also not particularly fast at compiling templates, as far as I understand, and because of the template inclusion model (you practically include the definition of the template in every file that uses it, even indirectly, and possibly re-instantiate that template every time you modify something that has nothing to do with it) you can get problems with compile times when compiling incrementally. How do you handle incremental (and not only incremental) compile times when working with templates (besides a better/faster compiler)?

    Read the article

  • Ubuntu on USB does not boot on MacBook

    - by Sean H
    Ubuntu is installed on a 32GB flash drive, and it successfully booted every time up until I partitioned my hard drive and installed Windows as a secondary boot (for programming reasons). Now every time I attempt to boot the Ubuntu flash drive, it boots into Windows XP. The same goes for partitions: I partitioned my hard drive, installed Ubuntu, and it only booted Windows XP. I am on a MacBook 6,1 with Mac OS X 10.6.8 and two partitions, and I am using rEFIt as my boot loader. EDIT: I had Ubuntu working fine from the flash drive, and at one point as a partition. I later uninstalled Ubuntu from my hard drive and installed Windows, and I then had to re-image my computer for certain reasons and installed Windows again. Now when I attempt to boot anything other than Windows or OS X, it boots into Windows. Ubuntu was never on my hard drive while Windows was on it. The flash drive has been its own thing: it has the boot loader installed to it and loads from rEFIt, but it boots into Windows.

    Read the article

  • GA and Unique visitors again

    - by DDEX
    I take care of a company intranet and measure the traffic with GA. I am absolutely sure that there are no more than 5000 URLs in our company, and it is impossible to reach the intranet from outside the company network. Yet when I check the number of Unique Visitors (UV) in the last year, GA says there were 36,500 of them... How is that possible? I thought UV was supposed to count each visitor only once in the given time period. Could anybody explain how this actually works? Can it be that the tracking cookies expire after some time and visitors are then counted more than once?

    Read the article

  • Wifi problem in Ubuntu on a MacBook Pro after restart

    - by Amro
    I read this thread: http://ubuntuforums.org/showthread.php?t=2011756 and followed it step by step on page 1. After that I was connected, but after I restarted my MacBook I lost the wifi connection again. I don't know why, or what exactly the problem is. Every time I run this command:

    dmesg | grep -e b43 -e bcma

    I get this output:

    [ 2012.769684] bcma-pci-bridge 0000:02:00.0: PCI INT A -> GSI 17 (level, low) -> IRQ 17
    [ 2012.769701] bcma-pci-bridge 0000:02:00.0: setting latency timer to 64
    [ 2012.769775] bcma: Core 0 found: ChipCommon (manuf 0x4BF, id 0x800, rev 0x25, class 0x0)
    [ 2012.769808] bcma: Core 1 found: IEEE 802.11 (manuf 0x4BF, id 0x812, rev 0x1D, class 0x0)
    [ 2012.769889] bcma: Core 2 found: PCIe (manuf 0x4BF, id 0x820, rev 0x13, class 0x0)
    [ 2012.770175] bcma: PMU resource config unknown for device 0x4331
    [ 2012.824527] bcma: Bus registered
    [ 2012.831744] b43-phy0: Broadcom 4331 WLAN found (core revision 29)
    [ 2013.371031] b43-phy0: Loading firmware version 666.2 (2011-02-23 01:15:07)

    To get the connection back, every time I must re-enter the commands from the reload-driver step. How can I make Ubuntu see my wifi/wireless device automatically when I reboot my computer?

    Read the article
