Search Results

Search found 18001 results on 721 pages for 'difficult people'.


  • Should I manage authentication on my own if the alternative is very low in usability and I am already managing roles?

    - by rumtscho
    As a small in-house dev department, we only have experience developing applications for our intranet. We use the existing Active Directory for user account management. It contains the accounts of all company employees and many (but not all) of the business partners we cooperate with.

    Now top management wants a technology exchange application, and I am the lead dev on the new project. Basically, it is a database containing our know-how, with a web frontend. Our employees, our cooperating business partners, and people who wish to become cooperating business partners should have access to it and see what technologies we have, so they can trade for them with the department that owns them. The technologies are not patented, but they are very valuable to competitors, so the department bosses are paranoid about anyone unauthorized gaining access to their technology descriptions. This constraint necessitates a nightmarishly complicated multi-dimensional RBAC-hybrid model. As the Active Directory doesn't even contain all the information needed to infer the roles I use, I will have to manage roles, plus per-technology, per-user granted access exceptions, within my system.

    The current plan is to use Active Directory for authentication. This will result in a multi-hour registration process for our business partners, where the database owner has to manually create logins in our Active Directory and send them credentials. If I manage the logins in my own system, we could improve usability a lot, for example by letting people have an active (but unprivileged) account as soon as they register. It seems to me that, since I will have a users table in the DB anyway (and will be managing ugly details like storing historical user IDs, so that recycled user IDs within the Active Directory don't unexpectedly gain rights to view someone's technologies), the additional complexity of implementing authentication will be minimal. Therefore, I am starting to lean towards doing my own user login management and forgetting the AD altogether.

    On the other hand, I see some reasons to stay with Active Directory. First, the conventional wisdom I have heard from experienced programmers is not to do your own user management if you can avoid it. Second, we have code I can reuse for connecting to the Active Directory, while I would have to write the authentication code myself if it were done in-system (and my boss has clearly stated that delivering the project on time has a much higher priority than delivering a system with high usability). Third, I am not a very experienced developer (this is my first lead position) and have never done user management before, so I am afraid I am overlooking important reasons to use the AD, or underestimating the amount of work needed to do my own authentication.

    I would like to know if there are more reasons to go with the AD authentication mechanism. Specifically, if I do my own authentication, what would I have to implement besides a secure connection for the login screen (which I would need anyway, even if I am only transporting the password to the AD), lookup of a password hash, and a mechanism for password recovery (which will probably include manual identity verification, so no need for complex mTAN-like solutions)? And if you have experience with such security-critical systems, which one would you use, and why?
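
    For the "lookup of a password hash" part, here is a minimal sketch of what storing and checking passwords yourself could look like, assuming .NET's built-in PBKDF2 implementation (Rfc2898DeriveBytes); the class name, iteration count and storage format are illustrative assumptions, not a vetted policy:

    using System;
    using System.Security.Cryptography;

    public static class PasswordHasher
    {
        private const int SaltSize = 16;       // bytes of random salt per user
        private const int HashSize = 32;       // bytes of derived key to store
        private const int Iterations = 10000;  // tune to your hardware

        // Returns "salt:hash", both Base64, for storage in the users table.
        public static string Hash(string password)
        {
            var salt = new byte[SaltSize];
            using (var rng = new RNGCryptoServiceProvider())
            {
                rng.GetBytes(salt);
            }
            using (var pbkdf2 = new Rfc2898DeriveBytes(password, salt, Iterations))
            {
                return Convert.ToBase64String(salt) + ":" +
                       Convert.ToBase64String(pbkdf2.GetBytes(HashSize));
            }
        }

        // Re-derives the hash from the supplied password and compares.
        public static bool Verify(string password, string stored)
        {
            var parts = stored.Split(':');
            var salt = Convert.FromBase64String(parts[0]);
            using (var pbkdf2 = new Rfc2898DeriveBytes(password, salt, Iterations))
            {
                // A constant-time comparison would be better in production.
                return Convert.ToBase64String(pbkdf2.GetBytes(HashSize)) == parts[1];
            }
        }
    }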


  • Why is purchasing Microsoft licences such a daunting task? [closed]

    - by John Nevermore
    I've spent two frustrating days jumping through hoops and browsing different local e-shops for VS (Visual Studio) 2010 Pro and WHS (Windows Home Server) 2011 FPP licenses. I found jack; or to be more precise, the closest I found in my country was WHS 2011 OEM licenses, after multiple emails sent to individuals found on the Microsoft partners page. The question being: why is it so difficult to get your hands on Microsoft licenses as an individual? Sure, you can get the latest end-user operating systems from most shops, but when it comes to development tools or server software you are left dry. And companies that do sell licenses most of the time don't even put up pricing or a self-service environment for buying them; you need to have a hawk's eye for that shiny little Microsoft partner logo and spam through a bunch of emails, not knowing if you can count on them to get the license or not. Sure, I could whip out my credit card and buy the VS 2010 license in the online Microsoft Shop. Well whippideegoddamndoo, they sell that, but they don't sell WHS 11 licenses. Why does a company make it so hard to buy its products? Let's not even talk about the licensing itself being a pain.


  • Using Clojure instead of Python for scalability (multi core) reasons, good idea?

    - by Vandell
    After reading http://clojure.org/rationale and other performance comparisons between Clojure and many languages, I started to think that, ease of use aside, I shouldn't be coding in Python anymore but in Clojure instead. Actually, I began to feel irresponsible for not learning Clojure, seeing its benefits. Does that make sense? Can't I make really efficient use of all cores using a more imperative language like Python rather than a Lisp dialect or another functional language? It seems that all of Clojure's benefits come from using immutable data; can't I do just that in Python and get all the benefits? I once started to learn some Common Lisp, and read and did almost all the exercises from a book I borrowed from my university library (I found it to be pretty good, despite its low popularity on Amazon). But after a while I found myself struggling too much to do some simple things. I think some things are more imperative in nature, which makes it difficult to model them in a functional way, I guess. The thing is: is Python as powerful as Clojure for building applications that take advantage of this new multi-core future? Note that I don't think that using semaphores, locks, or other similar concurrency mechanisms is a good alternative to Clojure's 'automatic' parallelization.


  • How is the impact of a requirement change on existing code determined?

    - by MainMa
    Hi. How do companies working on large projects evaluate the impact of a single modification on existing code? Since my question is probably not very clear, here's an example:

    Let's take a sample business application which deals with tasks. In the database, each task has a state, 0 being "Pending", ..., 5 being "Finished". A new requirement adds a new state between the 2nd and 3rd ones. It means that:

    1. A constraint on the values 1-5 in the database must be changed,
    2. The business layer and code contracts must be changed to add the new state,
    3. The data access layer must be changed to take into account that, for example, the state StateReady is now 6 instead of 5, etc.,
    4. The application must implement the new state visually: add new controls for it, new localized strings for tool-tips, etc.

    When an application was written recently by one developer, it's more or less easy to predict every change to make. On the other hand, when an application has been written over years by many people, no single person can anticipate every change immediately, without any investigation. Since this situation (such changes in requirements) is very frequent, I imagine there are already some clever techniques and ways to predict the impact. Are there any? Do you know any books which deal with this subject?

    Note: my question is not related to the "How do you deal with changing requirements?" question. In fact, I'm not interested in evaluating the cost of a change, but rather in the way to predict the parts of an application which will be affected by the change. What those changes are and how difficult they really are doesn't matter for my question.
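
    To make the renumbering in point 3 concrete, here is a hedged illustration (the member names other than Pending/Finished are hypothetical; only the 0-5 range comes from the example above):

    // With implicit values, inserting a new member shifts everything after it
    // (Finished: 5 -> 6), which is exactly the data-access breakage described.
    // Pinning values explicitly (Pending = 0, ..., Finished = 5, NewState = 6)
    // avoids the silent shift, at the cost of out-of-order numbering.
    public enum TaskState
    {
        Pending,     // 0
        Assigned,    // 1
        InProgress,  // 2
        // a state inserted here would claim value 3,
        Reviewed,    // 3 -> 4
        Approved,    // 4 -> 5
        Finished     // 5 -> 6
    }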


  • Offshoring: does it ever work?

    - by DanSingerman
    I know there has been a fair amount of discussion on here about outsourcing/offshoring, and the general opinion seems to be that at best it is difficult, and at worst it fails. I have direct experience of offshoring myself; a previous company where I was a dev manager wanted to send some development offshore, and we ran a pilot scheme to see how well it would work. Of course it was a complete failure, although it is not completely clear to me whether this was down to the offshore devs being less talented, the process, or other factors (no doubt it was really a combination). I can see how, as a business, offshoring looks attractive (much lower day rate), but as far as I can see, the only way it could possibly work is if you do exceptionally detailed design up front, with incredibly detailed specifications; and by the time you have invested in producing that, you have probably spent nearly as much as if you had written the actual code locally (which I think is an instance of No Silver Bullet). So, what I want to know is: does anyone here have any experience of offshoring actually working, ever? Especially if there are any success stories of it working in a semi-agile way? I know there are developers here from all over the world; has anyone worked on an offshore project they consider successful?


  • StyleCop Custom Rules

    - by Aligned
    There are several blogs on how to do this (http://scottwhite.blogspot.com/2008/11/creating-custom-stylecop-rules-in-c.html, etc.). I've found a few useful things to point out:

    Debugging is difficult, but here are the steps (thanks to Tintin's answer):

    1. Delete your custom rules.
    2. Open Visual Studio (for dev) and open your custom rule solution.
    3. Build and deploy the custom rules (a post-build action to copy the rules into the StyleCop folder is handy).
    4. Open Visual Studio (for test).
    5. In VS (dev), use Attach to Process on devenv.exe (the test VS instance) and set breakpoints in the rules you want to debug.
    6. In VS (test), right-click on the project and Run StyleCop.
    7. Debug.

    It worked once; now I'm having problems getting it to work again. I also get the message "Cannot evaluate expression because the code of the current method is optimized." when I try to look at properties.

    It also helps to look at the source code of the StyleCop.CSharp.Rules.dll that comes with the install; I used JustDecompile from Telerik.

    Create one XML file and name it the same as the one .cs file (e.g. CodingGuidelineRules.cs and CodingGuidelineRules.xml).

    To deploy:

    1. Build in Visual Studio.
    2. Close Visual Studio (StyleCop is running, so you can't overwrite your DLL without closing).
    3. Copy the DLL from the bin folder to C:\Program Files (x86)\StyleCop 4.7\.
    4. Open the settings file or re-open Visual Studio.
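
    For reference, the shape of a custom rule class looks roughly like this; a sketch following the pattern from the blog post above, where the class name, rule name and the long-class-name check are assumptions (the rule name must also appear in the matching embedded CodingGuidelineRules.xml):

    using StyleCop;
    using StyleCop.CSharp;

    [SourceAnalyzer(typeof(CsParser))]
    public class CodingGuidelineRules : SourceAnalyzer
    {
        public override void AnalyzeDocument(CodeDocument document)
        {
            var csDocument = (CsDocument)document;
            if (csDocument.RootElement != null && !csDocument.RootElement.Generated)
            {
                // Visit every element in the file; statement/expression
                // callbacks are not needed for this rule.
                csDocument.WalkDocument(
                    new CodeWalkerElementVisitor<object>(VisitElement), null, null);
            }
        }

        private bool VisitElement(CsElement element, CsElement parentElement, object context)
        {
            // Hypothetical rule: flag unusually long class names.
            if (element.ElementType == ElementType.Class &&
                element.Declaration.Name.Length > 50)
            {
                AddViolation(element, "ClassNamesMustNotBeTooLong", element.Declaration.Name);
            }
            return true; // keep walking
        }
    }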


  • Using Ogre with Android [closed]

    - by Rich
    I am trying to get Ogre 3D to work on Android. I have managed to download and run the Ogre sample browser, but I am really struggling to get a basic application working; I have been trying for days now to no avail. Does anyone have any pointers on where to start with this? Thanks to anyone who can help.

    EDIT: Very sorry for my rubbish question! I am a bit new to this and I am just trying to seek some guidance. OK, so I followed the instructions on the Ogre wiki to build Ogre for Android and the sample browser, here: http://www.ogre3d.org/tikiwiki/tiki-index.php?page=CMake+Quick+Start+Guide&tikiversion=Android - so it is definitely possible. The issue I am having is knowing what I need to do to get started with Ogre, e.g. just a simple hello-world-style app that might just show the Ogre head; so tutorials would actually be good, but I could not really find any simple ones, and I am very new to 3D development. I just found that the sample browser was massive; yes, it has everything in it, but it's very difficult to understand how it all works. What I am asking for is basically some help, as I have been trying to pull out parts of the sample browser to just create a view with a 3D model. Hope this is better?


  • Need ideas for an innovation week

    - by slandau
    So four times a year we have an innovation week (to even out the odd sprint releases). This whole week is dedicated to experimenting with new technology/ideas that could potentially help progress the software department or the company as a whole, and serves as a sort of starting point for new ideas and brainstorming. For example, the last one contained a lot of projects. One was the redesign of our web app into more of a Web 2.0 look and feel using jQuery and a lot of cool CSS tricks. Another was a proposal for new bug tracking software, as opposed to the clearly outdated package we use, and another was a very cool jQuery/JS design that could show the same page to multiple users on different computers and allow each of them to take "charge" of the page, disabling the others from doing anything, and vice versa, with all updates visible in real time; sort of like NetMeeting through JS. Well, this is my first one as a new employee, so I wanted to think of something cool. We get one week (anywhere from 40-60 hours or so), and we usually pair up or do this in groups of 3-4, depending on how many projects there are. Projects have to get approved, but usually that doesn't prove to be too difficult. We are in the financial analysis software industry, in case the domain leads you to think of anything helpful. I am primarily working on a web app in MVC 2 at the moment, using a lot of jQuery and a C# backend. Do you guys have any ideas of something that would be cool/beneficial/worth it?


  • Open Source Project all dressed up but nowhere to go...

    - by Calanus
    Over the past two years, a colleague and I have built an online statistical analysis application using a mixture of Silverlight, WCF and R. I (a C# programmer) wrote all the Silverlight and WCF stuff, whilst my colleague (a statistician) came up with the stats algorithms and wrote the R code. Now, we think this app is fairly unique: a rich-GUI online statistics application that is much more intuitive than all the other online stat apps I've seen. But despite this, we don't really know where to go with the project, mainly for the following reasons:

    1) It's fairly complicated stuff; without the mix of programming and stats skills, it would be difficult for anyone to "get into" the project and contribute.
    2) We are stalled by the lack of a proper place to host the site. Currently it sits on the family Windows 7 media centre, not exactly the best place to host it, as it could interfere with the missus trying to watch Corrie/Friends/Oprah etc.

    So, anyone got any ideas on how to move forward with this? I guess my strength is programming, not marketing, so despite working hard at this for the past couple of years, I feel that I've reached a dead end! Also, does anyone know of any free Windows hosting for open source projects? If I could find a proper place to put the app, I might feel re-energised about the whole thing. The source code is on CodePlex at http://silverstats.codeplex.com, whilst the app is currently hosted at http://silverstats.co.uk


  • VBScript and Xpath excluding duplicates [closed]

    - by Malachi
    I am trying to pull names from an XML document using VBScript. The XML document structure is:

    <Aliases>
      <Alias PartyType="DF" CaseID="000000" NameType="">Name Name</Alias>
      <Alias PartyType="DF" CaseID="000000" NameType="">Name Name</Alias>
      <Alias PartyType="DF" CaseID="000000" NameType="">Name Name</Alias>
      ...
    </Aliases>

    The XML file might have 100 rows with the same name coming from several different CaseIDs, because for this part of my VBScript I am trying to pull all the distinct names from all cases. But here is the issue: I don't want to return duplicates. Is there a way to do this with an XPath expression, or should I try to do this with VBScript?

    EDIT: I am pretty sure I am going to have to do this with VBScript. Would it be faster and more efficient to solve this issue in VBScript, in XPath, or when populating the XML I am retrieving the information from (this might prove more difficult than the other two options)? I am also asking a similar question on Stack Overflow.


  • Can too much abstraction be bad?

    - by m3th0dman
    As programmers, I feel our goal is to provide good abstractions on the given domain model and business logic. But where should this abstraction stop? How do we make the trade-off between abstraction and all its benefits (flexibility, ease of change, etc.) and ease of understanding the code and all its benefits?

    I believe I tend to write overly abstracted code, and I don't know how good that is; I often tend to write it like some kind of micro-framework, which consists of two parts:

    1. Micro-modules which are hooked up in the micro-framework: these modules are easy to understand, develop and maintain as single units. This code basically represents the code that actually does the functional stuff described in the requirements.
    2. Connecting code; now here, I believe, stands the problem. This code tends to be complicated because it is sometimes very abstracted and is hard to understand at first; this arises from the fact that it is pure abstraction, with the real work and business logic being performed in the code described at 1; for this reason, this code is not expected to change once tested.

    Is this a good approach to programming? That is, having the changing code very fragmented into many modules and very easy to understand, and the non-changing code very complex from the abstraction point of view? Should all the code be uniformly complex (that is, code 1 more complex and interlinked, and code 2 simpler), so that anybody looking through it can understand it in a reasonable amount of time, but change is expensive; or is the solution presented above good, where the "changing code" is very easy to understand, debug and change, and the "linking code" is kind of difficult?

    Note: this is not about code readability! Both the code at 1 and at 2 is readable, but the code at 2 comes with more complex abstractions, while the code at 1 comes with simple abstractions.


  • Visual Studio 2012 first impressions...no Macros!

    - by bconlon
    Yesterday I installed Microsoft Visual Studio 2012 for the first time (all 8.5 GB), and after 20 years of (mostly) happy times using VS, they have removed Macros, one of the most handy features.

    The first thing I wanted to do when I upgraded my VS2010 project was to add a #elseif block to each file. This would usually be a simple case of Find in Files for the previous #elseif, then Ctrl+Shift+R to record a macro, which would be: F8 (to select the next file from the find list), F3 (to find the correct position in the file), Ctrl+V to paste the new code. Then all I would need to do is keep Ctrl+Shift+P (Play Macro) pressed until all the files were processed. But alas, Ctrl+Shift+R does nothing! I won't say that I used Macros every day, but it was a very useful feature.

    To continue my moaning a little more, I also don't like the bland interface. This has been well documented by others, but now that I have used it myself, I find it difficult to tell one grey area of the screen from another, and the lack of colour makes the icons unclear. I also don't see why the menus now need to SHOUT in capital letters.

    On the plus side, they have now added the ability to see WPF properties in the debugger... a bit of an oversight in Visual Studio 2010. Oh, but you still can't edit and continue on files that contain templated code.

    Whilst Visual Studio 2012 is not a complete disaster like Windows 8 (why develop a desktop OS to be the same as a smart device OS?), it does not float my boat. Rant over.


  • Correct architecture for running and stopping complex tasks in the background

    - by Phonon
    I'm having trouble working out the correct architecture for the following task. I have a GUI in Windows Forms that contains a ListBox, listing certain architectural layouts. Once an item in this list is selected, a custom Control displays an interactive visualization of the selected layout. Drawing this interactive diagram is a CPU-intensive task and can take up to a second on my machine. The kind of functionality I'm trying to achieve is that if a user quickly scrolls through the layouts in the ListBox (say, holding down the down arrow key), I don't want my computer to sit there thinking about how to draw the layout before it allows the user to do anything else. The obvious answer is, of course, to run the layout calculations in a separate thread. But how do I make that thread return a whole control? How do I make sure I'm not running two layout calculations at once? I'm fairly new to this complex GUI business. So the real question is: what is the right architecture for implementing something like this? This seems like something people do all the time, but finding any suggestions on how to do it properly is really difficult.
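
    One common shape for this is sketched below: compute a plain data model on a worker thread (never build Controls off the UI thread), cancel the in-flight computation whenever the selection changes, and apply the finished model to the custom control in a UI-thread continuation. This assumes the .NET 4 Task Parallel Library; ArchitecturalLayout, LayoutModel, ComputeLayout, layoutList and diagramControl are hypothetical names standing in for the real types and designer fields.

    using System;
    using System.Threading;
    using System.Threading.Tasks;
    using System.Windows.Forms;

    public partial class LayoutBrowserForm : Form
    {
        private CancellationTokenSource cancellation;

        // Fires repeatedly as the user scrolls through the ListBox.
        private void layoutList_SelectedIndexChanged(object sender, EventArgs e)
        {
            if (cancellation != null)
                cancellation.Cancel();            // abandon the previous computation
            cancellation = new CancellationTokenSource();
            CancellationToken token = cancellation.Token;
            var layout = (ArchitecturalLayout)layoutList.SelectedItem;

            // Heavy work runs off the UI thread and produces only data.
            Task.Factory.StartNew(() => ComputeLayout(layout, token), token)
                .ContinueWith(
                    t => diagramControl.Display(t.Result),  // safe: UI thread
                    token,
                    TaskContinuationOptions.OnlyOnRanToCompletion,
                    TaskScheduler.FromCurrentSynchronizationContext());
        }

        // The expensive part; it should call token.ThrowIfCancellationRequested()
        // at convenient points so a stale computation stops quickly.
        private LayoutModel ComputeLayout(ArchitecturalLayout layout, CancellationToken token)
        {
            // ... heavy geometry calculations ...
            throw new NotImplementedException();
        }
    }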


  • How do I throttle a command in a terminal window?

    - by To Do
    I needed to run convert with a lot of images at the same time. The command took quite a while, but that doesn't bother me. The issue is that this command rendered my computer unusable while it was running (for about 15 minutes). So is it possible to throttle the command by limiting resources (processor and memory) to it, directly from the command line? This can only work if I add something to the same line before pressing Enter, because once I start the process the computer slows down so much that it is impossible, for example, to switch to System Monitor and reduce the priority.

    Edit: top and iotop results. I managed to run top and sudo iotop >iotop.txt while doing one of these convert operations. (The iotop.txt file produced is difficult to read.)

    Results of top:

    PID   USER     PR NI VIRT  RES  SHR  S %CPU %MEM TIME+   COMMAND
    14275 username 20 0  4043m 3.0g 1448 D  7.0 80.4 0:16.45 convert

    Results of iotop (cleaned of terminal escape codes):

    Total DISK READ: 1269.04 K/s | Total DISK WRITE: 0.00 B/s
    TID   PRIO USER     DISK READ   DISK WRITE SWAPIN  IO     COMMAND
    2516  be/4 username 350.08 K/s  0.00 B/s    0.00 % 0.00 % zeitgeist-datahub
    7394  be/4 username 568.88 K/s  0.00 B/s   77.41 % 0.00 % --rendere~.530483991
    14275 idle username 350.08 K/s  0.00 B/s   37.49 % 0.00 % convert S~f test.pdf
    2048  be/4 root       0.00 B/s  0.00 B/s    0.00 % 0.00 % [kworker/3:2]
    1     be/4 root       0.00 B/s  0.00 B/s    0.00 % 0.00 % init

    Furthermore, even after the process ends, the computer does not return to its previous performance. I found a way around this by running sudo swapoff -a followed by sudo swapon -a


  • Should Developers Perform All Tasks or Should They Specialize?

    - by Bob Horn
    Disclaimer: The intent of this question isn't to discern what is better for the individual developer, but for the system as a whole. I've worked in environments where small teams managed certain areas. For example, there would be a small team for each of these functions: UI, framework code, business/application logic, and database. I've also worked on teams where the developers were responsible for all of these areas and more (QA, analyst, etc.). My current environment promotes agile development (specifically Scrum), and everyone has their hands in every area mentioned above. While there are pros and cons to each approach, I'd be curious to know if there are more pros and cons than I list below, and also what the general feeling is about which approach is better.

    Devs do it all. Pros:
    1. Developers may be more well-rounded.
    2. Developers know more of the system.
    Cons:
    1. Everyone has their hands in all areas, increasing the probability of creating less-than-optimal results in that area.
    2. It can take longer to do something with which you are unfamiliar (jack of all trades, master of none).

    Devs specialize. Pros:
    1. Developers can create policies and procedures for their area of expertise and more easily enforce them.
    2. Developers have more of a chance to become deeply knowledgeable about their specific area and make it the best it can be.
    3. Other developers don't cross boundaries and degrade another area.
    Cons:
    1. As one colleague put it: "Why would you want to pigeon-hole yourself like that?" (Meaning some developers won't get a chance to work in certain areas.)

    It's easy to say how wonderful agile is, and that we should do it all, but I'm somewhat of a fan of having areas of expertise. Without that expertise, I've seen code degrade, database schemas become difficult to manage, UI code get hacked together, etc. Let's face it: some people make careers out of doing just UI work, or just database work. It's not that easy to just fill in and do as good a job as an expert in that area.


  • Questioning one of the arguments for dependency injection: Why is creating an object graph hard?

    - by oberlies
    Dependency injection frameworks like Google Guice give the following motivation for their usage (source):

    "To construct an object, you first build its dependencies. But to build each dependency, you need its dependencies, and so on. So when you build an object, you really need to build an object graph. Building object graphs by hand is labour intensive (...) and makes testing difficult."

    But I don't buy this argument: even without dependency injection, I can write classes which are both easy to instantiate and convenient to test. E.g. the example from the Guice motivation page could be rewritten in the following way:

    class BillingService {
        private final CreditCardProcessor processor;
        private final TransactionLog transactionLog;

        // constructor for tests, taking all collaborators as parameters
        BillingService(CreditCardProcessor processor, TransactionLog transactionLog) {
            this.processor = processor;
            this.transactionLog = transactionLog;
        }

        // constructor for production, calling the (productive) constructors
        // of the collaborators
        public BillingService() {
            this(new PaypalCreditCardProcessor(), new DatabaseTransactionLog());
        }

        public Receipt chargeOrder(PizzaOrder order, CreditCard creditCard) { ... }
    }

    So there may be other arguments for dependency injection (which are out of scope for this question!), but easy creation of testable object graphs is not one of them, is it?


  • How does datomic handle "corrections"?

    - by blueberryfields
    tl;dr: Rich Hickey describes Datomic as a system which implicitly deals with the timestamps associated with data storage. In my experience, data is often imperfectly stored in systems, and on many occasions needs to be corrected retroactively (i.e., often the question "was a True on Tuesday at 12:00pm?" will have an incorrect answer stored in the database). This seems like a spot where the abstractions behind Datomic might break; do they? If they don't, how does the system handle such corrections?

    Rich Hickey, in several of his talks, justifies the creation of Datomic and explains its benefits. His work, if I understand correctly, is motivated by the core insight that humans, when speaking about data and facts, implicitly associate some of the related context into their work (a date-time). By pushing the work required to manage the implicit date-time component of context into the database, he has created a system which is both much easier to understand and much easier to program. This turns out to be relevant to most database programmers in practice; his work saves everyone a lot of time managing complex, hard-to-produce/debug/fix time queries.

    However, especially in large databases, data is often damaged or incorrect (maybe it was not input correctly, maybe it eroded over time, etc.). While most database updates are insertions of new facts, and should indeed be treated that way, a non-trivial subset of the work required to manage time queries has to do with retroactive updates. I have yet to see any documentation which explains how such corrections, or retroactive updates, are handled by Datomic; in my experience, they are a non-trivial (and incredibly difficult to deal with) subset of the time-related data manipulation that database programmers are faced with. Does Datomic gracefully handle such updates? If so, how?


  • Make the Time

    - by WonderOfItAll
    Took the little one to the pool tonight for swim lessons. Okay, okay, they're not really lessons so much as "Hey, here's a few bucks, let me rent out a small section of your pool to swim around with my little one."

    Saw a dad at the pool. Bluetooth on, iPad in hand, and two-year-old somewhere around there. Saw a mom at the pool, arguing with her five-year-old to NOT take a shower after swimming. Bluetooth on, iPad in hand, work laptop open on the stadium seats. Her reasoning for not wanting the child to shower: "Look, I have to get this stuff to the office by 6:30, we don't have time for you to shower. Let's go." Wait, isn't the whole point of this little experience called Mommy and Me (or, as in my case, Daddy and Me), wherein Mommy/Daddy is supposed to spend time with the little one? Not with the Bluetooth. Not with the work laptop.

    Dad (yeah, the same dad from earlier), in the pool. Bluetooth off (it's not waterproof or I'm sure he would've had it on), two-year-old in hand and iPad somewhere put away. Getting frustrated with the kid because he won't 'perform' on command. Here's a little exchange:

    Kid: "I don't wanna get in the water"
    Dad: "Well, we're here for 30 minutes, get in the water"
    Kid: "No, don't wanna"
    Dad: "Fine, I'm getting in" and, true to his word, in he goes, off to swim.
    Kid: Crying
    Dad: "Well, c'mon"
    Kid: Walking to stands
    Dad: Ignoring kid
    Kid: At stands
    Dad: Out of pool, drying off. Frustrated. Grabs bag, grabs kid, leaves.

    How sad. It really seems like I am living in a generation of parents who view their children as one big scheduled distraction after another. It's almost like the dad was saying, "Look, little two-year-old boy, I have a busy schedule. Right now my Outlook calendar tells me that I have 30 minutes to spend with you, so let's go, kid: PERFORM, because I have the time."

    Really? Can someone please tell me when the hell this happened? When did spending time with your kid, spending time with your family, spending time with your spouse, etc. become a distraction? I've seen people at work all day tweeting throughout the day, checked in with Foursquare, IM up and running constantly so they can 'stay in touch', only to see these same folks come home and be irritated because their kids or their spouse want to connect with them. I've seen these very same people leave the house, go to the corner bar/store/you-name-the-place to be 'alone', only to find them there, plugged in, tweeting away, etc., etc., etc.

    I LOVE technology. I love working with technology. But I also know that I am a human being. A person who, by very definition, is a social being. I need social interactions and contact; and no, I'm not talking about the Social Graph kind of connections, I'm talking about those interactions which, *GASP*, involve eye-to-eye contact and human contact.

    A recent study found that the number one complaint of kids is that they feel they have to compete with technology for their parents' time and attention. The number one wish from high school kids? That their parents would turn off the computer/TV/cell phone at dinner. This, coming from high school kids. Shouldn't that tell you a whole helluva lot?

    So, do yourself a favor tomorrow. Plug into technology all day. Throw yourself into it. Be passionate about what you do. When you walk through the door to your family, turn it all off for 30 minutes and be there with your loved ones. If you can manage to play Angry Birds, I'm sure you can handle being disconnected for 30 minutes. Make the time.


  • How do you maintain focus when a particular aspect of programming takes 10+ seconds to complete?

    - by Jer
    I have a very difficult time focusing on what I'm doing (programming-wise) when something (compilation, startup time, etc.) takes more than just a few seconds. Anecdotally, it seems that threshold is about 10 seconds (and I recall reading a study that said the same thing, though I can't find it now). So what typically happens is: I make a change and then run the program to test it. That takes about 30 seconds, so I start reading something else, and before I know it 20 minutes have passed, and then it takes (if I'm lucky!) another 10+ minutes to deal with the context switch back into programming. It's not an exaggeration to say that some things that should take me minutes literally take hours to complete. I'm very curious about what other programmers do to combat this tendency (or whether I'm unique and they don't have this tendency?). Suggestions of any type at all are welcome: anything from "sit on your hands after hitting the compile button", to mental tricks, to "if it takes 30 seconds to start up something to test a change, then something's wrong with your development process!"


  • c# vocabulary

    - by foxjazz
    I have probably seen and used the word "encapsulation" four times in my 20 years of programming. I now know what it is again, after an interview for a C# job, even though I have used the public, private, and protected keywords in classes for as long as C# has existed. I can still remember coming across the string.IndexOf function and thinking: why didn't they call it IndexAt?

    Now, with all the new items like lambdas and Rx, LINQ, map and pmap, etc., I think the more choices there are to do one or two things 10 or 15 differing ways, the more programmers tend to stay with what works and leverage the new stuff only when it really becomes beneficial. For many, the new stuff is harder to read, because programmers aren't used to seeing declarative notation. I mean, I have probably used yield break twice in my project where it may have been possible to use it many more times. Or the using statement (not the declaration of namespace references, but the inline using): I never really saw a big advantage to it, other than confusion. It is another form of local encapsulation (oh, there's the fifth use in my programming career), but who's counting? THE COMPUTERS ARE COUNTING!

    In business logic, most programming is about displaying lists, selecting items in a list, and sending those choices to some other system or database to keep track of those selections. What makes this difficult is how these items relate to, one, each other, and two, externally listed items. Well, I probably need to go back to school and get C# certification so I can say I am an expert in C#. Apparently using all aspects of C# (even unsafe code) in my programming life doesn't make me certified, just certifiable.

    This is a good time to sign off: Fox-jazzy
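
    For what it's worth, the inline using statement is just deterministic cleanup: Dispose() is guaranteed when the block exits, even via an exception. A tiny sketch (the file name is made up):

    using System;
    using System.IO;

    class UsingDemo
    {
        static void Main()
        {
            // reader.Dispose() runs when the block exits, exception or not.
            using (var reader = new StreamReader("data.txt"))
            {
                Console.WriteLine(reader.ReadLine());
            }
        }
    }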


  • Virtualization in Ubuntu 11.10

    - by Mascarpone
    Since Ubuntu 11.10 uses a new kernel, it's very difficult to get decent virtualization support. VirtualBox doesn't support guest additions for Ubuntu 11.10, so I can't copy to and from my Ubuntu desktop and Windows, which I absolutely require; plus, FreeBSD seems unable to use DHCP without guest additions. Virt-manager instead gives an error on launch:

    Unable to open a connection to the libvirt management daemon.
    Libvirt URI is: qemu:///system
    Verify that:
    - The 'libvirt-bin' package is installed
    - The 'libvirtd' daemon has been started
    - You are member of the 'libvirtd' group
    unable to connect to '/var/run/libvirt/libvirt-sock', libvirtd may need to be started: Permission denied
    Traceback (most recent call last):
      File "/usr/share/virt-manager/virtManager/connection.py", line 1146, in _open_thread
        self.vmm = self._try_open()
      File "/usr/share/virt-manager/virtManager/connection.py", line 1130, in _try_open
        flags)
      File "/usr/lib/python2.7/dist-packages/libvirt.py", line 102, in openAuth
        if ret is None:raise libvirtError('virConnectOpenAuth() failed')
    libvirtError: unable to connect to '/var/run/libvirt/libvirt-sock', libvirtd may need to be started: Permission denied

    The problem is solved by running virt-manager as root, but I don't like that. How do I change permissions to run virt-manager as a normal user? And is there a way to install guest additions on Ubuntu 11.10?


  • Custom domain pointing to a Tumblr blog

    - by Julius
    My domain mydomain.com is registered with GoDaddy. I wish to host my Tumblr blog on this domain with NearlyFreeSpeech.net hosting. My active nameservers at GoDaddy already point to my authoritative ones at NFS.net, which is working. However, I'm baffled about the correct configuration to point to my Tumblr. Preferably, I'd like (A) http://mydomain.com to host the blog and have http://www.mydomain.com redirect to http://mydomain.com. If this is too difficult, my next preference is (B) to have http://www.mydomain.com host the blog whilst http://mydomain.com redirects to http://www.mydomain.com. My third preference is (C) to have a subdomain like http://tumblr.mydomain.com host the blog, and I guess have http://mydomain.com and http://www.mydomain.com both redirect to it.

    I've tried having two aliases, mydomain.com and www.mydomain.com, pointing to my permanent NFS IP at mydomain.nfshost.com, and then tried to add:

    (1) an A record pointing mydomain.com to the IP 66.6.44.4 as per Tumblr's instructions; it tells me I already have the bare domain as an alias, so I can't do that.
    (2) the A record on the www.mydomain.com alias. I can do this with either www.mydomain.com set as an alias or not. But when I tried this with mydomain.com set as the canonical name, the result when visiting either mydomain.com or www.mydomain.com was both continually redirecting to each other until an error was thrown.

    So I was wondering if there is a ninja who could save me some hair-pulling and tell me the correct way to configure A, or else B, or else C.


  • How to Reap Anticipated ROI in Large-Scale Capital Projects

    - by Sylvie MacKenzie, PMP
    Only a small fraction of companies in asset-intensive industries reliably achieve expected ROI for major capital projects 90 percent of the time, according to a new industry study. In addition, 12 percent of companies see expected ROIs in less than half of their capital projects. The problem: no matter how sophisticated and far-reaching the planning processes are, many organizations struggle to manage risks or reap the expected value from major capital investments. The data comes from a larger survey of companies in the oil and gas, mining and metals, chemicals, and utilities industries. The results appear in Prepare for the Unexpected: Investment Planning in Asset-Intensive Industries, a comprehensive new report sponsored by Oracle and developed by the Economist Intelligence Unit. Analysts say the shortcomings in large-scale, long-duration capital-investment projects often stem from immature capital-planning processes. The poor decisions that result can lead to significant financial losses and disappointing project benefits, which are particularly harmful to organizations during economic downturns.

    The report highlights three other important findings:

    - Teaming the right data and people doesn't guarantee that ROI goals will be achieved. Despite involving cross-functional teams and looking at all the pertinent data, executives are still failing to identify risks and deliver bottom-line results on capital projects.
    - Effective processes are the missing link. Project-planning processes are weakest when it comes to risk management and predicting costs and ROI. Organizations participating in the study said they fail to achieve expected ROI because they regularly experience unexpected events that derail schedules and inflate budgets. But executives believe that using more robust risk management and project planning strategies will help avoid delays, improve ROI, and more accurately predict the long-term cost of initiatives.
    - Planning for unexpected events is a key to success. External factors, such as changing market conditions and evolving government policies, are difficult to forecast precisely, so organizations need to build flexibility into project plans to make it easier to adapt to changes.

    The report outlines a series of steps executives can take to address these shortcomings and improve their capital-planning processes. Read the full report or take the benchmarking survey and find out how your organization compares.


  • Internet of Things Becoming Reality

    - by kristin.jellison
    The Internet of Things is not just on the radar—it’s becoming a reality. A globally connected continuum of devices and objects will unleash untold possibilities for businesses and the people they touch. But the “things” are only a small part of a much larger, integrated architecture.

    A great example of this comes from the healthcare industry. Imagine an expectant mother who needs to watch her blood pressure. She lives in a mountain village 100 miles away from medical attention. Luckily, she can use a small “wearable” device to monitor her status and wirelessly transmit the information to a healthcare hub in her village. Now, say the healthcare hub identifies that the expectant mother’s blood pressure is dangerously high. It sends a real-time alert to the patient’s wearable device, advising her to contact her doctor. It also pushes an alert with the patient’s historical data to the doctor’s tablet PC. He inserts a smart security card into the tablet to verify his identity. This ensures that only the right people have access to the patient’s data. Then, comparing the new data with the patient’s medical history, the doctor decides she needs urgent medical attention. GPS tracking devices on ambulances in the field identify and dispatch the closest one available. An alert also goes to the closest hospital with the necessary facilities. It sends real-time information on her condition directly from the ambulance. So when she arrives, they already have a treatment plan in place to ensure she gets the right care.

    The Internet of Things makes a huge difference for the patient. She receives personalized and responsive healthcare. But this technology also helps the businesses involved. The healthcare provider achieves a competitive advantage in its services. The hospital benefits from cost savings through more accurate treatment and better application of services. All of this, in turn, translates into savings on insurance claims.

    This is an ideal scenario for the Internet of Things—when all the devices integrate easily and when the relevant organizations have all the right systems in place. But in reality, that can be difficult to achieve. Core design principles are required to make the whole system work. Open standards allow these systems to talk to each other. Integrated security protects personal, financial, commercial and regulatory information. A reliable and highly available systems infrastructure is necessary to keep these systems running 24/7. If this system were just made up of separate components, it would be prohibitively complex and expensive for almost any organization. The solution is integration, and Oracle is leading the way. We’re developing converged solutions, not just from device to datacenter, but across devices, utilizing the Java platform, and through data acquisition and management, integration, analytics, security and decision-making.

    The Internet of Things (IoT) requires the predictable action and interaction of a potentially endless number of components. It’s in that convergence that the true value of the Internet of Things emerges. Partners who take the comprehensive view and choose to engage with the Internet of Things as a fully integrated platform stand to gain the most from the Internet of Things’ many opportunities.

    To discover what else Oracle is doing to connect the world, read about Oracle’s Internet of Things Platform. Learn how you can get involved as a partner by checking out the Oracle Java Knowledge Zone.

    Best regards, David Hicks


  • How to customize web-app (pages and UI) for different customers

    - by demoncodemonkey
    We have an ASP.NET web application which has become difficult to maintain, and I'm looking for ideas on how to redesign it. It's an employee administration system which can be highly customized for each of our customers. Let me explain how it works now:

    On the default page we have a menu where a user can select a task, such as Create Employee or View Timesheet. I'll use Create Employee as an example. When a user selects Create Employee from the menu, an ASPX page is loaded which contains a dynamically loaded user control for the selected menu item; for Create Employee this would be AddEmployee.ascx. If the user clicks Save on the control, it navigates to the default page. Some menu items involve multiple steps, so if the user clicks Next in a multi-step flow, it navigates to the next page in the flow, and so on until it reaches the final step, where clicking Save navigates to the default page. Some customers may require an extra step in the Create Employee flow (e.g. SecurityClearance.ascx) but others may not. Different customers may use the same ASCX user control, so in AddEmployee.OnInit we can customize the fields for that customer, i.e. making certain fields hidden, read-only or mandatory.

    The following things are customizable per customer:

    - Menu items
    - Steps in each flow (ascx control names)
    - Hidden fields in each ascx
    - Mandatory fields in each ascx
    - Rules relating to each ascx, which allow certain logic to be used in the code for that customer

    The customizations are held in a huge XML file per customer, which can be 7,500 lines long. Is there any framework or rules engine that we could use to customize our application in this way? How do other applications manage customizations per customer?
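
    One direction for taming the per-customer XML, sketched under assumptions (all type and property names here are hypothetical, not from the original system): deserialize each customer's file into a typed profile once, and have each ascx's OnInit ask the profile for its hidden/mandatory fields instead of walking raw XML.

    using System.Collections.Generic;
    using System.IO;
    using System.Xml.Serialization;

    public class CustomerProfile
    {
        public List<string> MenuItems { get; set; }
        public List<FlowStep> Flows { get; set; }
    }

    public class FlowStep
    {
        public string ControlName { get; set; }        // e.g. "AddEmployee.ascx"
        public List<string> HiddenFields { get; set; }
        public List<string> MandatoryFields { get; set; }
    }

    public static class ProfileLoader
    {
        // Load and cache one profile per customer at application start.
        public static CustomerProfile Load(string path)
        {
            var serializer = new XmlSerializer(typeof(CustomerProfile));
            using (var stream = File.OpenRead(path))
            {
                return (CustomerProfile)serializer.Deserialize(stream);
            }
        }
    }

    In OnInit, the control would look up its FlowStep by ControlName and apply HiddenFields/MandatoryFields generically, so the per-customer logic shrinks to data plus a small rules hook.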

