Search Results

Search found 12431 results on 498 pages for 'finance management'.


  • Good working habits to observe in project development?

    - by Will Marcouiller
    As my development experience grows, I see fit to stick to best practices from here and there, building my own working practices while observing the conventions, etc. I'm currently working on a project whose goal is to migrate the security access model from one environment's Active Directory to another environment's automatically. I don't know about any of you, but as far as I'm concerned, I have real difficulty sticking to only one approach and then developing. I mean, I learn something new every day while visiting SO, and recently wanted to get acquainted with generics. On the other hand, I know the Façade pattern better, and it has proved very practical in transactional programming in process systems. It seems less practical for a desktop application, since there are plenty of variables to consider in a desktop application that you don't have to care about in transactional programming, where you're playing only with information data.

    As for my current project, I have: Groups; Organizational Units; Users. These are all considered an entry in the Active Directory. This looks like a good candidate for generics, which is also the approach taken by Bart De Smet's LINQ to AD on CodePlex. He has a DirectorySource<T>, and to manage, say, groups, he instantiates a source with the proper type:

        var groups = new DirectorySource<Group>();

    This seems to be a very good way of doing it. Despite that, I seem to go from one pattern to another and can't strictly stick to one. While I'm aware that one must not stay with only one way of doing things, since each pattern offers certain advantages while also showing disadvantages under some usage conditions, I seem to want to develop with both patterns: a singleton Façade class with underlying factories representing the subsystems: GroupsFactory; UsersFactory; OrganizationalUnitsFactory. Each of the factories offers the possible operations for its respective entity (group, user, OU).

    To make a very long story short, I often have plenty of ideas while developing, and this causes me some trouble, as I go from one idea to another and feel completely lost after a while. Since I understand the advantages and disadvantages, I have no trouble choosing one pattern over another depending on the situation. Nevertheless, when it comes to the programming itself, if I'm not part of a team, I sometimes feel like I can't do anything good, because I can't stand not doing something "perfect" the first time. The role I play within the project is both project manager and programmer, and I am more comfortable in the project manager, architectural and analytical roles than in the developer's. Does any of you have some good habits to observe in project development? Thanks to you all! =)

    Read the article

  • Two objects with dependencies for each other. Is that bad?

    - by Kasper Grubbe
    Hi SO. I am learning a lot about design patterns these days, and I want to ask you about a design question that I can't find an answer to. Currently I am building a little chat server using sockets, with multiple clients. I have three classes:
    - Person class, which holds information like nick, age and a Room object.
    - Room class, which holds information like room name, topic and a list of Persons currently in that room.
    - Hotel class, which has a list of Persons and a list of Rooms on the server.
    I have made a diagram to illustrate it (sorry for the big size!): http://i.imgur.com/Kpq6V.png I keep a list of players on the server in the Hotel class because it would be nice to keep track of how many are online right now (without having to iterate through all of the rooms). The Persons live in the Hotel class because I would like to be able to search for a specific Person without searching the rooms. Is this bad design? Is there another way of achieving it? Thanks.
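
    A minimal sketch in C of the three classes and their mutual references, as described in the question. The names come from the question; the field types are illustrative assumptions only. The point is simply that Person and Room can refer to each other through a forward declaration, while Hotel keeps its own lists for quick lookup:

        /* forward declaration so Person can refer to Room and vice versa */
        struct room;

        struct person {
            char        *nick;
            int          age;
            struct room *room;        /* the room this person is currently in */
        };

        struct room {
            char           *name;
            char           *topic;
            struct person **persons;  /* NULL-terminated list of persons in this room */
        };

        struct hotel {
            struct person **persons;  /* everyone online, so counting/searching needs no room iteration */
            struct room   **rooms;    /* all rooms on the server */
        };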

    Read the article

  • How/when to hire new programmers, and how to integrate them?

    - by Shaul
    Hiring new programmers, especially in a small company, can often present a Catch-22 situation. We have too much work to do, so we need to hire new programmers. But we can't hire new programmers now, because they will need mentoring and several months of learning curve in your industry/product/environment before they're useful, and none of the programmers has time to be a mentor to a new programmer, because they're all completely swamped with the current work load. That may be a slightly frivolous way of describing the situation, but nevertheless, it's difficult for a small company on a tight budget to justify hiring someone who is not only going to be unproductive for a long time, but will also take away from the performance of the current programmers. How have you dealt with this kind of situation? When is the best time to hire someone? What are the best tasks to assign to a new team member so that they can learn their way around your code base and start getting their hands dirty as quickly as possible? How do you get the new guy useful without bogging your existing programmers down in too much mentoring? Any comments & suggestions you have are much appreciated!

    Read the article

  • Understanding addSubview: memory leak

    - by Leandros
    I don't really understand why this code leaks.

        ParentViewController *parentController = [[ParentViewController alloc] init];
        ChildViewController *childController = [[ChildViewController alloc] init];
        [parentController containerAddChildViewController:childController];
        [[self window] setRootViewController:parentController];

        - (void)containerAddChildViewController:(UIViewController *)childViewController
        {
            [self addChildViewController:childViewController];
            [self.view addSubview:childViewController.view]; // Instruments is telling me the leak occurs here!
            [childViewController didMoveToParentViewController:self];
        }

    According to Instruments, this line: [self.view addSubview:childViewController.view]; is leaking. The whole code is called once in application:didFinishLaunchingWithOptions:, but it is shown that this code is responsible for 30 leaks (approx. 1.12 kB).

    Read the article

  • What software development process do you use and how do you implement it?

    - by clyfe
    Post only what you actually use, not what you would like to use, so we can see what is most popular in real life. I am interested only in these issues:
    - Project model (waterfall, agile...)
    - How are requirements gathered (and stored)?
    - Revision control - what software, what workflow?
    - Build automation - what software, where does it fit?
    - How is the testing done?
    - How is the documentation done?
    - How is the quality assurance done?
    Please provide short, objective answers; don't speak from the books. EXAMPLE: In my company we are a small team of 5 people and we develop webapps using Ruby.
    - Project model: agile
    - Requirements: Cucumber
    - Revision control: git - Integration Manager workflow
    - CI: Integrity
    - Testing: automated tests with RSpec
    - Documentation: the project lead creates the documentation skeleton, then it is filled in by the developers
    - Quality assurance: peer code review and manual peer-testing

    Read the article

  • mprotect - how does aligning to a multiple of pagesize work?

    - by user299988
    Hi, I am not understanding the 'aligning allocated memory' part of the mprotect usage. I am referring to the code example given on http://linux.die.net/man/2/mprotect

        char *p;
        char c;

        /* Allocate a buffer; it will have the default
           protection of PROT_READ|PROT_WRITE. */
        p = malloc(1024 + PAGESIZE - 1);
        if (!p) {
            perror("Couldn't malloc(1024)");
            exit(errno);
        }

        /* Align to a multiple of PAGESIZE, assumed to be a power of two */
        p = (char *)(((int) p + PAGESIZE - 1) & ~(PAGESIZE - 1));

        c = p[666];         /* Read; ok */
        p[666] = 42;        /* Write; ok */

        /* Mark the buffer read-only. */
        if (mprotect(p, 1024, PROT_READ)) {
            perror("Couldn't mprotect");
            exit(errno);
        }

    For my understanding, I tried using a PAGESIZE of 16 and 0010 as the address of p. I ended up getting 0001 as the result of (((int) p + PAGESIZE-1) & ~(PAGESIZE-1)). Could you please clarify how this whole 'alignment' works? Thanks,
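
    A quick worked example of the rounding formula may help here. The snippet below is not from the man page; the PAGESIZE of 16 and the sample addresses are hypothetical values chosen only to show the arithmetic. Adding PAGESIZE-1 pushes any unaligned address past the next page boundary, and masking with ~(PAGESIZE-1) clears the low bits, so an already-aligned address is left unchanged and an unaligned one is rounded up:

        #include <stdio.h>
        #include <stdint.h>

        #define PAGESIZE 16   /* hypothetical page size; must be a power of two */

        int main(void)
        {
            uintptr_t addrs[] = { 0x0010, 0x0012, 0x001F };

            for (size_t i = 0; i < sizeof(addrs) / sizeof(addrs[0]); i++) {
                uintptr_t p = addrs[i];
                /* round p up to the next multiple of PAGESIZE */
                uintptr_t aligned = (p + PAGESIZE - 1) & ~(uintptr_t)(PAGESIZE - 1);
                printf("0x%04lx -> 0x%04lx\n", (unsigned long)p, (unsigned long)aligned);
            }
            return 0;   /* prints 0x0010 -> 0x0010, 0x0012 -> 0x0020, 0x001f -> 0x0020 */
        }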

    Read the article

  • Arguments for moving from LINQtoSQL to NHibernate?

    - by sah302
    Backstory: Hi all, I just spent a lot of time reading many of the LINQ vs NHibernate threads here and on other sites. I work in a small development team of 4 people, and we don't really have any super experienced developers. We work for a small company that has a lot of technical needs but not enough developers to implement them (and hiring more is out of the question right now). Typically our projects (which individually are fairly small) have been coded separately and weren't really layered in any way; code wasn't re-used, there were no class libraries, and we just use the LINQ to SQL .dbml files for our projects. We really don't even use objects but pass around values and such; the only time we use objects is when inserting into a database (not even when querying, since you don't need to assign the result to a type and can just bind it to a gridview). Despite all this, as I said, our company has a lot of technical needs; no one could come to us for a year and we would still have plenty of work implementing requested features. Well, I have decided to change that a bit, first by creating class libraries and actually adding layers to our applications. I am trying to meet these guys halfway by still using LINQ to SQL as the ORM and still using VB as the language. However, I am finding it a b***h of a time dealing with so many things in LINQ to SQL that I found easy in NHibernate (automatic handling of the session, criteria creation being easier than expression trees, generic and dynamic querying being easier, etc.).

    So... Question: How can I convince my lead developers and other senior programmers that switching to NHibernate is a good thing? That being in control of our domain objects is a good thing? That being able to implement interfaces is good? I've tried explaining the advantages of this before, but it isn't understood by them because they've never programmed in a truly OO and layered way. Also, one of the counter-arguments I can see is that SqlMetal generates those classes automatically and therefore saves a lot of time. I can't really counter that other than saying that spending more time on infrastructure to make it more scalable and flexible is good, but they can't see how. Again, I know the features and advantages of each (well enough, I believe), but I need arguments applicable to my context, hence why I provided the context. I just am not a very good arguer, I guess.

    (Caveat: For all the LINQ to SQL lovers, I may just not be super proficient at LINQ, but I find it very cumbersome that you are required to download an extra library for dynamic queries, which doesn't by default support guid comparisons, and I also find the way of updating entities to be cumbersome as well in terms of data context management, so it could just be that I suck, hehe.)

    Read the article

  • When do you tag your software project?

    - by WilhelmTell of Purple-Magenta
    I realize there are various kinds of software projects:
    - commercial (for John Doe)
    - industrial (for Mr. Montgomery Burns)
    - successful open-source (with an audience larger than, say, 10 people)
    - personal projects (with an audience size in the vicinity of 1)
    each of which releases a new version of its product under different conditions. I'm particularly interested in the case of personal projects and open-source projects. When, or under what conditions, do you make a new release of any kind? Do you subscribe to a fixed recurring deadline, such as every two weeks? Do you commit to a release once there are at least 10 minor fixes, or one major fix? Do you combine the two conditions, such that at least one must hold, or both must hold? I reckon this is a subjective question. I ask it in light of searching for tricks to keep my projects alive and kicking. Sometimes my projects are active but look as if they aren't, because I don't have the confidence to make a release or a tag of any sort for a long time - on the order of months.

    Read the article

  • Strange menu issue

    - by Ber53rker
    Just started out with Orchard and I'm trying to just make pages and menus. I'm running into an issue where on my home page my menu shows: Home, Fruit, Vegetable, Carrot. I want it to show just Home, Fruit, Vegetable, and when I click the respective item it should add the proper tabs. It DOES do this with Banana, i.e. when I click Fruit, it shows Banana in the menu. But it always shows Carrot for some reason! I have the following setup:
    - Navigation
      - Main Menu
        - Home
        - Fruit
        - Vegetable
      - Fruit Menu
        - Banana
      - Vegetable Menu
        - Carrot
    Widgets > Navigation > Main Menu, Fruit Menu, Vegetable Menu (the Fruit and Vegetable Menus are in their own respective layers). Path examples: /Fruit, /Fruit/Banana, /Vegetable, /Vegetable/Carrot. Thanks in advance.

    Read the article

  • What is the main purpose and sense of having the staging server the same as production?

    - by truthseeker
    Hi, In our company we have staging and production servers. I'm trying to keep them in a 1:1 state after the latest release. We've got a web application running on several hosts, with many instances of it. The issue is that I am an advocate of having the same architecture (structure) of web applications on the staging and production servers, to easily test new features and avoid creating new bugs with new releases. But not everyone agrees with me, and for them it's not such a big deal to have different connections between staging application instances - maybe even more applications, and more connections between applications, on the staging server than on the production server. I would like to ask about the pros and cons of such an approach - some good points to support my view, or some reasons why I might be wrong. Some examples of the consequences, and so forth.

    Read the article

  • Checking that all libs and dlls are from the same build?

    - by unknownthreat
    I am developing a program in VS C++ 2008. Right now I have a huge list of dll and lib dependencies, and I am adding more. I worry that when I need to update a dependency by building it from source (where I have to manually replace the built dlls and libs in the correct places), if I accidentally forget to replace something (or vice versa), I may run into a compile and/or runtime problem. And finding where things went wrong can be a bit difficult. So is there some sort of program or method out there suited to this task, to ease building a program with many frequently updated dependencies?

    Read the article

  • How do you protect code from leaking outside?

    - by cubex
    Besides open-sourcing your project and legislation, are there ways to prevent, or at least minimize, the damage of code leaking outside your company/group? We obviously can't block Internet access (to prevent emailing the code) because programmers need their references. We also can't block peripheral devices (USB, Firewire, etc.). The code matters most when it contains proprietary algorithms and in-house developed knowledge (as opposed to regular routine code to draw GUIs, connect to databases, etc.), but some applications (like accounting software and CRMs) are just that: complex collections of routine code that are simple to develop in principle, but would take years to write from scratch. This is where leaked code comes in handy to competitors. As far as I see it, preventing leakage relies almost entirely on human process. What do you think? What precautions and measures are you taking? And has code leakage affected you before?

    Read the article

  • Usage of Maven (and open source in general) in high governance and risk-averse large organizations (

    - by bart
    Does anyone have any good stories of these kinds of organizations being open to using open source (such as tools like Maven, etc.)? Many staff I've encountered have little or no exposure to open source/systems, and open source is treated with great suspicion. Some reasons given for this are lack of support and robustness, which is ironic given the number of end-of-life, unsupported vendor products that are in production. Bonus points for any success stories where you've seen open source go into orgs like this and deliver a real benefit!

    Read the article

  • Evidence-Based-Scheduling - are estimations only as accurate as the work-plan they're based on?

    - by Assaf Lavie
    I've been using FogBugz's Evidence Based Scheduling (for the uninitiated, Joel explains) for a while now, and there's an inherent problem I can't seem to work around. The system is good at telling me the probability that a given project will be delivered by some date, given the detailed list of tasks that comprise the project. However, it does not take into account the fact that during development additional tasks always pop up. Now, there's the garbage-can approach of creating a generic task/scheduled-item for "last minute hacks" or "integration tasks", or what have you, but that clearly goes against the idea of aggregating the estimates of many small cases.

    It's often the case that during the development stage of a project you realize there's a whole area your planning didn't cover, because, well, that's the nature of developing stuff that hasn't been developed before. So now your ~3 month project may very well turn into a 6 month project, but not because your estimations were off (you could be the best estimator in the world, for those tasks that comprised your initial work plan); rather because you ended up adding a whole bunch of new tasks that weren't there to begin with. EBS doesn't help you with that. It could, theoretically (I guess). It could, perhaps, measure the amount of work you add to a project over time and take that into consideration when estimating the time remaining on a given project. Just a thought. In other words, EBS works on a task basis, but not on a project/release basis - and the latter is what's important. It's what your boss typically cares about: the delivery date, not the time it takes to finish each task along the way, and not the time it would have taken if your planning had been perfect.

    So the question is (yes, there's a question here, don't close it): what's your methodology when it comes to using EBS in FogBugz, and how do you solve the problem above, which seems to be a main cause of schedule delays and mispredictions?

    Edit: Some more thoughts after reading a few answers. If it comes down to having to choose which delivery date you're comfortable presenting to your higher-ups by squinting at the delivery-probability graph and choosing 80%, or 95%, or 60% (based on what, exactly?), then we've resorted to plain old buffering/factoring of our estimates. In which case, couldn't we have skipped the meticulous case-by-case hour-sized estimation effort? By forcing ourselves to break down tasks that take more than a day into smaller chunks of work, haven't we just deluded ourselves into thinking our planning is as tight and thorough as it could be? People may be consistently bad estimators who don't even learn from their past mistakes. In that respect, having an EBS system is certainly better than not having one. But what can we do about the fact that we're not that good at planning either? I'm not sure it's a problem that can be solved by a similar system. Our estimates are wrong because of tendencies to be overly optimistic/pessimistic about certain tasks, and because of neglecting to account for systematic delays (e.g. sick days, major bug crises) - and usually not because we lack knowledge about the work that needs to be done. Our planning, on the other hand, is often incomplete because we simply don't have enough knowledge at this early stage, and I don't see how an EBS-like system could fill that gap. So we're back to methodology. We need to find a way to accommodate bad or incomplete work plans that's better than voodoo multiplication.
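
    For readers unfamiliar with how EBS arrives at those probabilities, below is a toy Monte Carlo sketch of the general idea. It is not FogBugz code; the task estimates and historical velocities are made-up numbers, and real EBS is considerably more involved. The relevant point is that the simulation only ever sums over tasks that are in the plan, which is exactly the gap discussed above:

        #include <stdio.h>
        #include <stdlib.h>

        static int cmp_double(const void *a, const void *b)
        {
            double x = *(const double *)a, y = *(const double *)b;
            return (x > y) - (x < y);
        }

        int main(void)
        {
            double estimates[]  = { 4, 8, 2, 16, 6 };          /* remaining tasks, in hours (hypothetical) */
            double velocities[] = { 1.0, 0.8, 1.3, 0.6, 1.1 }; /* estimate/actual ratios from past tasks */
            int ntasks = 5, nvel = 5, trials = 1000;
            double *totals = malloc(trials * sizeof *totals);

            srand(42);
            for (int t = 0; t < trials; t++) {
                double total = 0;
                for (int i = 0; i < ntasks; i++)
                    /* each task's estimate is divided by a randomly drawn past velocity */
                    total += estimates[i] / velocities[rand() % nvel];
                totals[t] = total;
            }

            qsort(totals, trials, sizeof *totals, cmp_double);
            printf("50%% confidence: %.1f hours\n", totals[trials / 2]);
            printf("95%% confidence: %.1f hours\n", totals[(int)(trials * 0.95)]);

            free(totals);
            return 0;
        }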

    Read the article

  • What is the difference between these two different lines of Objective-C, and why does one work and not the other?

    - by jrtc27
    If I try to release tempSeedsArray after seedsArray = tempSeedsArray, I get an EXC_BAD_ACCESS, and Instruments shows that tempSeedsArray has been released twice. Here is my viewWillAppear method:

        - (void)viewWillAppear:(BOOL)animated {
            NSString *arrayFilePath = [[NSBundle mainBundle] pathForResource:@"SeedsArray" ofType:@"plist"];
            NSLog(@"HIT!");
            NSMutableArray *tempSeedsArray = [[NSMutableArray alloc] initWithContentsOfFile:arrayFilePath];
            seedsArray = tempSeedsArray;
            NSLog(@"%u", [seedsArray retainCount]);
            [seedsArray sortUsingSelector:@selector(localizedCaseInsensitiveCompare:)];
            [super viewWillAppear:animated];
        }

    seedsArray is an NSMutableArray declared as a nonatomic, retain property, and is synthesised. However, if I change seedsArray = tempSeedsArray to self.seedsArray = tempSeedsArray (or [self setSeedsArray:tempSeedsArray], etc.), I can release tempSeedsArray. Could someone please explain simply to me why this is, as I am very confused! Thanks

    Read the article

  • Alternative to MS Project 2007 for production scheduling?

    - by john c
    OK... I'm coming to grips with the fact that MS Project 2007 may not be the correct tool for my production scheduling. We serve 120 to 150 projects a year, with durations from 6 weeks to 12 months. The tasks are simple (6 to 8 per project) and the resource pool is stable (15 to 20 people). It's really an assembly-line product, but with extremely varied durations. I need to be able to prioritize the projects for production and run projects concurrently to fully utilize my resources. What are the feelings of the Stack Overflow community - am I using the wrong program? I was really hoping to make this simple enough for non-programmer types to input project data into a form, and to have the scheduling software automated enough to make most of the decisions. Is there a better solution available commercially? I'd like to hold off on writing a custom spreadsheet as a last resort, but if that's the best route then so be it. Thank you so much for your input.

    Read the article

  • Need post-it notes that don't fall off the whiteboard after a week.

    - by jdv
    In my company, we plan our development work with Scrum. We track progress using post-it stickies on a big whiteboard, and it works great. It is my understanding that's kind of standard. We are just one location, so we don't need or want to do this electronically. But to our (and the QA rep's) annoyance, the sticky notes begin to fall off the whiteboard after a week or two, or even sooner if you stick them on top of each other. I've experimented with extra tape on the stickies. That helped, but it also ruins the whiteboard. So I am looking for a pragmatic and preferably low-cost alternative. Are some post-it brands better than others? Or do you have another solution for a scrum board that does not suffer from this?

    Read the article

  • Way to check files in and out with multiple developers.

    - by Roeland
    I am a web developer working with 2-3 people, including me. Our current setup is very simplistic: we try to let each other know when we are working on a specific file, and we use FTP to edit our files. Recently we have run into the problem of two people accidentally editing one file, or working on a local file and then uploading it when another person has just done the same. From what I have read, I need some sort of version control system. I have heard of Subversion and Mercurial. It seems that these systems may not be what I need, though, since they just give me different versions of the files. I don't know if they solve the issue of two people working on a file and overwriting each other's work. What are your suggestions for solving my problem?

    Read the article

  • Release objects before returning a value based on those objects?

    - by Moshe
    Consider the following method, where I build a string and return it. I would like to release the building blocks of the string, but then the string is based on values that no longer exist. Now what? Am I leaking memory, and if so, how can I correct it?

        - (NSString *)getMiddahInEnglish:(int)day {
            NSArray *middah = [[NSArray alloc] initWithObjects:@"Chesed", @"Gevurah", @"Tiferes", @"Netzach", @"Hod", @"Yesod", @"Malchus", nil];
            NSString *firstPartOfMiddah = [NSString stringWithFormat:@"%@", [middah objectAtIndex:((int)day % 7) - 1]];
            NSString *secondPartOfMiddah = [NSString stringWithFormat:@"%@", [middah objectAtIndex:((int)day / 7)]];
            NSString *middahStr = [NSString stringWithFormat:@"%@ She'bi%@", firstPartOfMiddah, secondPartOfMiddah];
            [middah release];
            [firstPartOfMiddah release];
            [secondPartOfMiddah release];
            return middahStr;
        }

    Read the article

  • What causes memory fragmentation in .NET

    - by Matt
    I am using Red Gate's ANTS memory profiler to debug a memory leak. It keeps warning me that: "Memory fragmentation may be causing .NET to reserve too much free memory" or "Memory fragmentation is affecting the size of the largest object that can be allocated". Because I have OCD, this problem must be resolved. What are some standard coding practices that help avoid memory fragmentation? Can you defragment it through some .NET methods? Would it even help?

    Read the article

  • How to maintain multiple components for multiple clients with multiple features?

    - by Dhana
    Basically my project is product-based. We developed a product, signed up multiple clients, and deploy the application based on their needs. We decided to implement new features and project-dependent modules as components. Now my application has a large number of customers, and every customer needs different features based on the components. But we have centralized components for all clients: we move a component's additional features into a client-specific folder and deploy. My problem is that I am unable to maintain the component features for multiple clients. My component feature code keeps growing and I am unable to track the client-specific features. Is there any solution for maintaining multiple component features for multiple clients?

    Read the article

  • How to copy an array of char pointers with a larger list of char pointers?

    - by Casey Link
    My function is being passed a struct containing, among other things, a NULL-terminated array of pointers to the words making up a command with arguments. I'm performing a glob match on the list of arguments to expand them into a full list of files, then I want to replace the passed argument array with the new expanded one. The globbing is working fine, that is, g.gl_pathv is populated with the list of expected files. However, I am having trouble copying this array into the struct I was given.

        #include <glob.h>
        #include <string.h>   /* for memset/memcpy */

        struct command {
            char **argv;
            // other fields...
        };

        void myFunction(struct command *cmd)
        {
            char **p = cmd->argv;
            char *program = *p++; // save the program name (e.g. 'ls') and advance to the first argument

            glob_t g;
            memset(&g, 0, sizeof(g));
            int res = glob(*p, 0, NULL, &g);
            *p++; // increment
            while (*p) {
                glob(*p++, GLOB_APPEND, NULL, &g); // append the matches
            }

            // here I want to replace cmd->argv with the expanded g.gl_pathv
            memcpy(cmd->argv, g.gl_pathv, g.gl_pathc); // this doesn't work
            globfree(&g);
        }
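
    One way to think about the last step: g.gl_pathv is owned by the glob_t and is destroyed by globfree(), so the new argv generally needs its own allocation and its own copies of the strings. Below is a minimal sketch of that idea, not the poster's code; the helper name is made up, and it assumes cmd->argv may simply be pointed at a freshly allocated array (freeing the old array is deliberately left out, since its ownership isn't shown in the question):

        #include <glob.h>
        #include <stdlib.h>
        #include <string.h>

        struct command {
            char **argv;
            // other fields...
        };

        /* hypothetical helper: build a new NULL-terminated argv from the glob results */
        static int replace_argv_with_glob(struct command *cmd, const char *program, const glob_t *g)
        {
            /* program name + one slot per match + terminating NULL */
            char **newargv = calloc(g->gl_pathc + 2, sizeof(char *));
            if (!newargv)
                return -1;

            newargv[0] = strdup(program);
            for (size_t i = 0; i < g->gl_pathc; i++)
                newargv[i + 1] = strdup(g->gl_pathv[i]); /* deep-copy each match */
            newargv[g->gl_pathc + 1] = NULL;             /* keep the array NULL-terminated */

            /* NOTE: whether the old cmd->argv and its strings should be freed here
               depends on who allocated them; that is omitted from this sketch */
            cmd->argv = newargv;
            return 0;
        }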

    Read the article

  • What was the most surprising failure of your 'Engineer's intuition'?

    - by Bubba88
    Hi! This may seem like an open-ended question, but I'll surely accept the most impressive and upvoted answer ;) Basically, I could describe my own case - I fail with my intuition about five times a day, because very frequently I'm just not up to speed with my requirements/manager/team/etc. and I have to write code quickly - that's why proper formalization in many cases stands aside. I want to gather some of your experience: what was the most epic failure when you relied on your implicit reasoning/intuitive knowledge/immediate perception, etc.? Of course, everything you describe should be related to programming/computers. It's mostly just to measure the danger of using those "it's obvious..." words. I've made it community wiki, to be properly transformed after it gathers enough views. Thank you!

    Read the article
