Search Results

Search found 13237 results on 530 pages for 'risk management'.


  • ECM (Niche Vs Mass Market)

    - by Luj Reyes
    Hi Everyone, I recently started a little company with a couple of guys. Ours is the typical startup: a lot of ideas, dreams, talent and work hours :P. Our initial business plan was to develop a DM (Document Manager) with several features found in DropBox and other tools, but with a big differentiator. Then this Business Guy joined the team (I must say that several of us could be called 'Business Guys', but we are mainly hackers; he is just another 'Networking Guy'), and along with him came a market analysis for a DM aimed at a very specific and narrow niche. We have many reasons to believe in his market study, and the idea is the classic "The market is X million, so if we grab 10%...", and the market really is there for the taking because all the big providers deemed it too small and left. Let's say the market is 5 million USD and demands very specific features. If we decide to go for this niche product we face a sales cycle of about 7 months, and the main goal of this revenue is to develop more ambitious projects. (Institutional VC is out of the question if you want to keep a marginal ownership of your company in my country.) The only overlap between the niche and the mass-market product features is the ability to store documents; everything else requires that we focus all of our efforts on one or the other. I've studied a lot about the differences between mass and niche markets, but I want to hear from people with actual experience. So everything comes down to this: if you have a really "saleable" idea, what is the right thing to do: go for the niche, or go for the big prize and target primarily the mass market? Thanks for your input.

    Read the article

  • Codeplex/Sourceforge for internal use

    - by Josh
    I'm looking for a free/open-source collaborative project manager that can be deployed internally in my workplace and would act similarly to Codeplex or Sourceforge. Does anyone know of something like this, and if so, do you have experience with it? Requirements:
    - Open Source or Free
    - Locally Deployable
    - Has the same types of features found in Sourceforge / Codeplex:
      - Issue/Feature Tracking
      - Community Interaction (i.e. Voting, Roles, etc.)
      - SCM Integration (Optional)
    - .NET/Windows Friendly (Optional)
    Every business ends up having internal utilities and domain-specific apps that developers create to make life easier. Given the input of the internal developer community they have the potential to become much better (can you say GMail...), and I would simply like to foster such an environment internally by providing an easy place for that interaction to take place. UPDATE: So I like what I am seeing in both Trac and GForge, but both are heavily geared towards UNIX/Subversion environments. I should have specified this, but we are an MS shop from top to bottom. How practical do you think it is going to be to try and use these in an MS .NET environment? Would that be like trying to shove a square peg through a round hole?

    Read the article

  • Solving iPhone/iPad out of memory issues

    - by Joonas Trussmann
    I have a strange issue where I'm scrolling through a paged UIScrollView which displays the pages of a PDF document (using Quartz 2D and CATiledLayer). When I page through, memory allocation looks fine: it climbs for the first few pages and then holds steady as the memory kept for earlier pages is obviously released. Upon hitting page x (not a specific PDF page or a certain number per se), memory usage goes from a couple of megs to 308 MB and the app crashes. So my question is: how do I best try to find what's causing this? The object alloc tool in Instruments shows the memory as simply going to malloc (in huge chunks).

    Read the article

  • SQL table relationship not showing up in visual studio

    - by PFranchise
    Hey, I made several tables and relationships using SQL Server Manager. I then imported them into Visual Studio and they all appeared in the correct form, except that one of the relationships did not appear. I have checked everything I could think of, and it is exactly the same as the other relationships. If you know anything else I can check, I would appreciate it. Thanks. I am using Entity Framework and Visual Studio 2010.

    Read the article

  • dev and prod systems in rails

    - by poseid
    What exactly is the difference in Rails between the dev and prod environments? When I develop an application in dev mode, will I run into performance problems, or other issues, if I clone my dev environment onto prod?

    Read the article

  • Putting a C++ Vector as a Member in a Class that Uses a Memory Pool

    - by Deep-B
    Hey, I've been writing a multi-threaded DLL for database access using ADO/ODBC, for use with a legacy application. I need to keep multiple database connections for each thread, so I've put the ADO objects for each connection in an object, and I'm thinking of keeping an array of them inside a custom threadInfo object. Obviously a vector would serve better here - I need to delete/rearrange objects on the go and a vector would simplify that. Problem is, I'm allocating a heap for each thread to avoid heap contention and stuff, and allocating all my memory from there. So my question is: how do I make the vector allocate from the thread-specific heap? (Or would it know internally to allocate memory from the same heap as its wrapper class? Sounds unlikely, but I'm not a C++ guy.) I've googled a bit and it looks like I might need to write an allocator or something - which looks like more work than I want to take on. Is there any other way? I've heard vector uses placement-new for all its stuff inside, so can overloading operator new be worked into it? My scant knowledge of the insides of C++ doesn't help, seeing as I'm mainly a C programmer (even that - relatively). It's very possible I'm missing something elementary somewhere. If nothing easier comes up I might just go and do the array thing, but hopefully it won't come to that. I'm using MS-VC++ 6.0 (hey, it's rude to laugh! :-P ). Any/all help will be much appreciated.
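
    For what it's worth, a minimal sketch of the allocator route, in case it turns out to be less work than feared. It assumes each thread already owns a Win32 private heap (HeapCreate); the HeapPoolAllocator name and the Connection element type are made up for illustration, and note that this is the trimmed-down C++11 allocator shape - the STL shipped with VC++ 6.0 predates it and would also need the full interface (rebind, construct, destroy, pointer typedefs):

        #include <windows.h>
        #include <cstddef>
        #include <new>
        #include <vector>

        // Allocator that draws every block from a caller-supplied Win32 heap,
        // so a vector using it never touches the process-wide CRT heap.
        template <class T>
        struct HeapPoolAllocator {
            typedef T value_type;

            explicit HeapPoolAllocator(HANDLE heap) : heap_(heap) {}
            template <class U>
            HeapPoolAllocator(const HeapPoolAllocator<U>& other) : heap_(other.heap_) {}

            T* allocate(std::size_t n) {
                void* p = HeapAlloc(heap_, 0, n * sizeof(T));
                if (!p) throw std::bad_alloc();
                return static_cast<T*>(p);
            }
            void deallocate(T* p, std::size_t) { HeapFree(heap_, 0, p); }

            HANDLE heap_;
        };

        template <class T, class U>
        bool operator==(const HeapPoolAllocator<T>& a, const HeapPoolAllocator<U>& b) {
            return a.heap_ == b.heap_;
        }
        template <class T, class U>
        bool operator!=(const HeapPoolAllocator<T>& a, const HeapPoolAllocator<U>& b) {
            return !(a == b);
        }

        // Usage sketch: the per-thread heap handle would live in your threadInfo.
        //   HANDLE threadHeap = HeapCreate(0, 0, 0);
        //   HeapPoolAllocator<Connection> alloc(threadHeap);   // Connection is hypothetical
        //   std::vector<Connection, HeapPoolAllocator<Connection> > conns(alloc);

    With that in place, everything the vector allocates for its elements comes from the thread's own heap, so there is no cross-thread contention on a shared allocator lock.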

    Read the article

  • OS memory allocation addresses

    - by user1777914
    Quick curious question: are memory allocation addresses chosen by the language compiler, or is it the OS that chooses the addresses for the memory asked for? This comes from a doubt about virtual memory, which could be quickly explained as "let the process think it owns all the memory" - but what happens on 64-bit architectures, where only 48 bits are used for memory addresses, if the process wants a higher address? Let's say you do int *a = malloc(sizeof(int)); and you have no memory left from the previous system call, so you need to ask the OS for more memory. Is the compiler the one who determines the memory address at which to allocate this variable, or does it just ask the OS for memory, which is then allocated at the address the OS returns?
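
    As an added illustration (not part of the original question), a tiny sketch that makes the division of labour visible: the compiler only emits the call to malloc; the C runtime's allocator hands back an address inside pages it has already obtained from the kernel, asks the kernel for more when it runs out, and all of these are virtual addresses.

        #include <cstdio>
        #include <cstdlib>

        int main() {
            // The compiler knows nothing about the values of these pointers; it only
            // emits calls to malloc. The runtime allocator picks the addresses out of
            // virtual pages the kernel has mapped into this process.
            int* a = static_cast<int*>(std::malloc(sizeof(int)));
            int* b = static_cast<int*>(std::malloc(sizeof(int)));
            std::printf("a = %p\nb = %p\n", static_cast<void*>(a), static_cast<void*>(b));

            // With address-space randomization these values typically differ from run
            // to run: they are virtual addresses chosen at runtime, not by the compiler.
            std::free(a);
            std::free(b);
            return 0;
        }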

    Read the article

  • Good working habits to observe in project development?

    - by Will Marcouiller
    As my development experience grows, I try to stick to best practices from here and there to build my own working practices while observing the usual conventions, etc. I'm currently working on a project whose goal is to migrate the security access model from one environment's Active Directory to another environment's automatically. I don't know about the rest of you, but as far as I'm concerned, I have real difficulty sticking to only one way of doing things and then developing. I mean, I learn something new every day while visiting SO, and recently I wanted to get acquainted with generics. On the other hand, I know the Façade pattern better, and it proved to be very practical in transactional programming in process systems. It seems less practical for a desktop application, as there are plenty of variables to consider in a desktop application that you don't have to care about in transactional programming, where you're playing only with information data.
    As for my current project, I have:
    - Groups
    - Organizational Units
    - Users
    all of which are considered entries in the Active Directory. This looks like a good candidate for generics, which is also the approach taken by Bart De Smet's LINQ to AD on CodePlex. He has a DirectorySource<T>, and to manage, let's say, groups, he instantiates a source with the proper type: var groups = new DirectorySource<Group>(); This seems to be a very good way of doing it.
    Still, I seem to go from one pattern to another and I don't seem to be able to strictly stick to one. While I'm aware that one must not stay with only one way of doing things, since each pattern offers certain advantages while also showing disadvantages under some usage conditions, I find myself wanting to develop with both patterns: a singleton Façade class with underlying factories that represent the subsystems:
    - GroupsFactory
    - UsersFactory
    - OrganizationalUnitsFactory
    Each of the factories offers the possible operations for its respective entity (group, user, OU).
    To make a very long story short, I often have plenty of ideas while developing and this causes me some trouble, as I go from one idea to another and feel completely lost after a while. Although I understand the advantages and disadvantages, and have no trouble choosing one pattern over another depending on the situation, when it comes to the programming itself, if I'm not part of a team, I sometimes feel like I can't do anything good. That is because I can't stand not doing something "perfect" the first time. The role I play within the project is both project manager and programmer, and I am more comfortable in the project manager, architectural and analytical roles than in the developer's. Do any of you have some good habits to observe in project development? Thanks to you all! =)

    Read the article

  • Two objects with dependencies for each other. Is that bad?

    - by Kasper Grubbe
    Hi SO. I am learning a lot about design patterns these days, and I want to ask you about a design question that I can't find an answer to. I am currently building a little chat server using sockets, with multiple clients. I have three classes: a Person class, which holds information like nick, age and a Room object; a Room class, which holds information like room name, topic and a list of Persons currently in that room; and a Hotel class, which has a list of Persons and a list of Rooms on the server. I have made a diagram to illustrate it (sorry for the big size!): http://i.imgur.com/Kpq6V.png I keep a list of Persons on the server in the Hotel class because it is nice to keep track of how many are online right now (without having to iterate through all of the rooms). The Persons live in the Hotel class because I would like to be able to search for a specific Person without searching the rooms. Is this bad design? Is there another way of achieving it? Thanks.
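
    One way to frame it: mutual references are usually fine as long as exactly one side owns the objects and the other side only points back. A small, hypothetical C++ sketch of the Hotel/Room/Person shape described above (the poster's language isn't stated, so this is only meant to illustrate the ownership idea, and the names are illustrative):

        #include <memory>
        #include <string>
        #include <vector>

        class Room;  // forward declaration breaks the header-level cycle

        class Person {
        public:
            std::string nick;
            int age = 0;
            Room* room = nullptr;  // non-owning back-pointer to the room the person is in
        };

        class Room {
        public:
            std::string name;
            std::string topic;
            std::vector<Person*> occupants;  // non-owning; the Hotel owns the Person objects
        };

        class Hotel {
        public:
            std::vector<std::unique_ptr<Person>> persons;  // single owner of all Persons
            std::vector<std::unique_ptr<Room>> rooms;      // single owner of all Rooms

            void enter(Person& p, Room& r) {
                // Keep both sides of the relationship in sync in one place.
                r.occupants.push_back(&p);
                p.room = &r;
            }
        };

    With that split, the Hotel can answer "how many are online" and "find this nick" directly, while each Room still knows its occupants; removing a Person then means taking it out of its Room's occupant list and out of the Hotel's owning list.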

    Read the article

  • Releasing Xmlparser and NSXMLParser objects

    - by erastusnjuki
    How can I release the variables xmlParser and parser safely in the function below?

        - (id)callRestService:(NSString *)methodName :(NSDictionary *)params {
            NSURL *url = [self getRestUrl:methodName :params];
            XmlParser *xmlParser = [[XmlParser alloc] init];
            NSXMLParser *parser = [[NSXMLParser alloc] initWithContentsOfURL:url];
            [parser setDelegate:xmlParser];
            [parser setShouldProcessNamespaces:NO];
            [parser setShouldReportNamespacePrefixes:NO];
            [parser setShouldResolveExternalEntities:NO];
            [parser parse];
            [parser setDelegate:nil];
            return xmlParser.dictionaryArray;
        }

    Read the article

  • What software development process do you use and how do you implement it?

    - by clyfe
    Post only what you actually use, not what you would like to use, so we can see what is most popular in real life. I am interested only in these issues:
    - Project model (waterfall, agile, ...)
    - How are requirements gathered (and stored)?
    - Revision control: what software, what workflow?
    - Build automation: what software, and where does it fit?
    - How is the testing done?
    - How is the documentation done?
    - How is the quality assurance done?
    Please provide short, objective answers; don't speak from the books.
    EXAMPLE: In my company we are a small team of 5 people and we develop webapps using Ruby.
    - agile PM
    - Cucumber requirements
    - git SCM, Integration Manager workflow
    - Integrity CI
    - RSpec automated tests
    - the project lead creates the documentation skeleton, then the developers fill it in
    - we ensure quality by peer reviewing code and by manual peer-testing

    Read the article

  • Understanding addSubview: memoryLeak

    - by Leandros
    I don't really understand why this code leaks.

        ParentViewController *parentController = [[ParentViewController alloc] init];
        ChildViewController *childController = [[ChildViewController alloc] init];
        [parentController containerAddChildViewController:childController];
        [[self window] setRootViewController:parentController];

        - (void)containerAddChildViewController:(UIViewController *)childViewController {
            [self addChildViewController:childViewController];
            [self.view addSubview:childViewController.view]; // Instruments is telling me the leak occurs here!
            [childViewController didMoveToParentViewController:self];
        }

    According to Instruments, this line is leaking:

        [self.view addSubview:childViewController.view];

    The whole thing is called once, in application:didFinishLaunchingWithOptions:, but this code is shown as responsible for 30 leaks (approx. 1.12 kB).

    Read the article

  • How/when to hire new programmers, and how to integrate them?

    - by Shaul
    Hiring new programmers, especially in a small company, can often present a Catch-22 situation. We have too much work to do, so we need to hire new programmers. But we can't hire new programmers now, because they will need mentoring and several months of learning curve in your industry/product/environment before they're useful, and none of the programmers has time to be a mentor to a new programmer, because they're all completely swamped with the current work load. That may be a slightly frivolous way of describing the situation, but nevertheless, it's difficult for a small company on a tight budget to justify hiring someone who is not only going to be unproductive for a long time, but will also take away from the performance of the current programmers. How have you dealt with this kind of situation? When is the best time to hire someone? What are the best tasks to assign to a new team member so that they can learn their way around your code base and start getting their hands dirty as quickly as possible? How do you get the new guy useful without bogging your existing programmers down in too much mentoring? Any comments & suggestions you have are much appreciated!

    Read the article

  • mprotect - how aligning to multiple of pagesize works?

    - by user299988
    Hi, I am not understanding the 'aligning allocated memory' part of the mprotect usage. I am referring to the code example given on http://linux.die.net/man/2/mprotect

        char *p;
        char c;

        /* Allocate a buffer; it will have the default
           protection of PROT_READ|PROT_WRITE. */
        p = malloc(1024+PAGESIZE-1);
        if (!p) {
            perror("Couldn't malloc(1024)");
            exit(errno);
        }

        /* Align to a multiple of PAGESIZE, assumed to be a power of two */
        p = (char *)(((int) p + PAGESIZE-1) & ~(PAGESIZE-1));

        c = p[666];      /* Read; ok */
        p[666] = 42;     /* Write; ok */

        /* Mark the buffer read-only. */
        if (mprotect(p, 1024, PROT_READ)) {
            perror("Couldn't mprotect");
            exit(errno);
        }

    To aid my understanding, I tried using a PAGESIZE of 16 and 0010 as the address of p. I ended up getting 0001 as the result of (((int) p + PAGESIZE-1) & ~(PAGESIZE-1)). Could you please clarify how this whole 'alignment' works? Thanks,
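
    As an added worked example of the arithmetic (not part of the original question): with PAGESIZE = 16, PAGESIZE-1 is 0xF and ~(PAGESIZE-1) is ...FFF0 in hex. Adding 0xF pushes the address past the next 16-byte boundary unless it is already on one, and the mask then clears the low 4 bits. So p = 0x0010 stays 0x0010 (already aligned), while p = 0x0011 becomes 0x0020. A small sketch that prints a few cases; it uses uintptr_t instead of the man page's int cast, which would truncate 64-bit pointers:

        #include <cstdint>
        #include <cstdio>

        // Round addr up to the next multiple of align (align must be a power of two):
        // add align-1, then clear the low bits with ~(align-1).
        static std::uintptr_t align_up(std::uintptr_t addr, std::uintptr_t align) {
            return (addr + align - 1) & ~(align - 1);
        }

        int main() {
            const std::uintptr_t PAGESIZE = 16;
            const std::uintptr_t samples[] = { 0x0010, 0x0011, 0x001F, 0x0020 };
            for (std::uintptr_t p : samples) {
                // Prints: 0x0010 -> 0x0010, 0x0011 -> 0x0020,
                //         0x001f -> 0x0020, 0x0020 -> 0x0020
                std::printf("%#06lx -> %#06lx\n",
                            (unsigned long)p, (unsigned long)align_up(p, PAGESIZE));
            }
            return 0;
        }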

    Read the article

  • When do you tag your software project?

    - by WilhelmTell of Purple-Magenta
    I realize there are various kinds of software projects:
    - commercial (for John Doe)
    - industrial (for Mr. Montgomery Burns)
    - successful open-source (with an audience larger than, say, 10 people)
    - personal projects (with an audience size in the vicinity of 1)
    each of which releases a new version of its product under different conditions. I'm particularly interested in the case of personal projects and open-source projects. When, or under what conditions, do you make a new release of any kind? Do you subscribe to a fixed recurring deadline, such as every two weeks? Do you commit to a release after at least 10 minor fixes, or one major fix? Do you combine the two conditions, such that at least one condition must hold, or both must hold? I reckon this is a subjective question. I ask it in light of searching for tricks to keep my projects alive and kicking. Sometimes my projects are active but look as if they aren't, because I don't have the confidence to make a release or a tag of any sort for a long time -- on the order of months.

    Read the article

  • What is the main purpose and sense to have staging server the same as production?

    - by truthseeker
    Hi, in our company we have staging and production servers. I'm trying to keep them in a 1:1 state after the latest release. We've got a web application running on several hosts, with many instances of it. The issue is that I am an advocate of having the same architecture (structure) of the web application on the staging and production servers, to easily test new features and avoid creating new bugs with new releases. But not everyone agrees with me; for them it is not such a big deal to have different connections between the application instances on staging, or even to have more applications, and more connections between applications, on staging than on the production server. I would like to ask about the pros and cons of such an approach: some good points to support my view, or some reasons why I might be wrong. Some examples of the consequences, and so forth.

    Read the article

  • Strange menu issue

    - by Ber53rker
    Just started out with Orchard and I'm trying to just make pages and menus. I'm running into an issue where on my home page my menu shows: Home, Fruit, Vegetable, Carrot. I want it to show just Home, Fruit, Vegetable, and when I click the respective item it should add the proper tabs. It DOES do this with Banana, i.e. when I click Fruit, it shows Banana in the menu. But it always shows Carrot for some reason! I have the following setup:

        Navigation
        -Main Menu
        --Home
        --Fruit
        --Vegetable
        -Fruit Menu
        --Banana
        -Vegetable Menu
        --Carrot

    Widgets > Navigation > Main Menu, Fruit Menu, Vegetable Menu (the Fruit and Vegetable Menus are in their own respective layers). Path examples: /Fruit, /Fruit/Banana, /Vegetable, /Vegetable/Carrot. Thanks in advance.

    Read the article

  • Arguments for moving from LINQtoSQL to Nhibernate?

    - by sah302
    Backstory: Hi all, I just spent a lot of time reading many of the LINQ vs NHibernate threads here and on other sites. I work in a small development team of 4 people, and we don't even have any really experienced developers. We work for a small company that has a lot of technical needs but not enough developers to implement them (and hiring more is out of the question right now). Typically our projects (which individually are fairly small) have been coded separately and weren't really layered in any way; code wasn't re-used, there were no class libraries, and we just use the LINQtoSQL .dbml files for our projects. We really don't even use objects but pass around values and stuff; the only time we use objects is when inserting into a database (heck, not even when querying, since you don't need to assign the result to a type and can just bind it to a gridview). Despite all this, as I said, our company has a lot of technical needs; no one could come to us for a year and we would still have plenty of work to implement requested features. Well, I have decided to change that a bit, first by creating class libraries and actually adding layers to our applications. I am trying to meet these guys halfway by still using LINQtoSQL as the ORM and still using VB as the language. However, I am finding it a b***h of a time dealing with so many things in LINQtoSQL that I found easy in NHibernate (automatic handling of the session, criteria creation easier than expression trees, generic and dynamic querying easier, etc.).

    So... Question: How can I convince my lead developers and other senior programmers that switching to NHibernate is a good thing? That being in control of our domain objects is a good thing? That being able to implement interfaces is good? I've tried explaining the advantages of this before, but it isn't understood by them because they've never programmed in a truly OO and layered way. Also, one counter-argument I can see is that sqlMetal generates those classes automatically and therefore saves a lot of time. I can't really counter that, other than saying that spending more time on infrastructure to make it more scalable and flexible is good, but they can't see how. Again, I know the features and advantages of each (well enough, I believe), but I need arguments applicable to my context, hence why I provided the context. I just am not a very good arguer, I guess.

    (Caveat: For all the LINQtoSQL lovers, I may just not be super proficient at LINQ, but I find it very cumbersome that you are required to download an extra library for dynamic queries, which doesn't by default support GUID comparisons, and I also find the way of updating entities to be cumbersome in terms of data context management, so it could just be that I suck, hehe.)

    Read the article

  • Checking that all libs and dlls are from the same build?

    - by unknownthreat
    I am developing a program in VS C++ 2008. Right now I have a huge list of DLL and lib dependencies, and I am adding more. I worry that when I need to update a dependency by building it from source (where I have to manually copy the built DLLs and libs into the correct places), if I accidentally forget to replace something, or vice versa, I may run into a compile-time and/or runtime problem, and finding where things went wrong can be a bit difficult. So is there some sort of program or method out there suited to this task, to ease building a program with many frequently updated dependencies?

    Read the article

  • How do you protect code from leaking outside?

    - by cubex
    Besides open-sourcing your project and legislation, are there ways to prevent, or at least minimize the damage of, code leaking outside your company/group? We obviously can't block Internet access (to prevent emailing the code) because programmers need their references. We also can't block peripheral devices (USB, FireWire, etc.). The code matters most when it contains proprietary algorithms and in-house developed knowledge (as opposed to routine code to draw GUIs, connect to databases, etc.), but some applications (like accounting software and CRMs) are just that: complex collections of routine code that are simple to develop in principle, but would take years to write from scratch. This is where leaked code would come in handy to competitors. As far as I see it, preventing leakage relies almost entirely on human processes. What do you think? What precautions and measures are you taking? And has code leakage affected you before?

    Read the article

  • Evidence-Based-Scheduling - are estimations only as accurate as the work-plan they're based on?

    - by Assaf Lavie
    I've been using FogBugz's Evidence Based Scheduling (for the uninitiated, Joel explains) for a while now, and there's an inherent problem I can't seem to work around. The system is good at telling me the probability that a given project will be delivered by some date, given the detailed list of tasks that comprise the project. However, it does not take into account the fact that during development additional tasks always pop up. Now, there's the garbage-can approach of creating a generic task/scheduled-item for "last minute hacks" or "integration tasks", or what have you, but that clearly goes against the idea of aggregating the estimates of many small cases. It's often the case that during the development stage of a project you realize there's a whole area your planning didn't cover, because, well, that's the nature of developing stuff that hasn't been developed before. So now your ~3 month project may very well turn into a 6 month project, but not because your estimations were off (you could be the best estimator in the world, for those tasks that comprised your initial work plan); rather, because you ended up adding a whole bunch of new tasks that weren't there to begin with. EBS doesn't help you with that. It could, theoretically (I guess). It could, perhaps, measure the amount of work you add to a project over time and take that into consideration when estimating the time remaining on a given project. Just a thought. In other words, EBS works on a task basis, but not on a project/release basis - and the latter is what's important. It's what your boss typically cares about - the delivery date, not the time it takes to finish each task along the way, and not the time it would have taken if your planning had been perfect. So the question is (yes, there's a question here, don't close it): What's your methodology when it comes to using EBS in FogBugz, and how do you solve the problem above, which seems to be a main cause of schedule delays and mispredictions?

    Edit: Some more thoughts after reading a few answers. If it comes down to choosing which delivery date you're comfortable presenting to your higher-ups by squinting at the delivery-probability graph and picking 80%, or 95%, or 60% (based on what, exactly?), then we've resorted to plain old buffering/factoring of our estimates. In which case, couldn't we have skipped the meticulous case-by-case, hour-sized estimation effort? By forcing ourselves to break down tasks that take more than a day into smaller chunks of work, haven't we just deluded ourselves into thinking our planning is as tight and thorough as it could be? People may be consistently bad estimators who do not even learn from their past mistakes. In that respect, having an EBS system is certainly better than not having one. But what can we do about the fact that we're not that good at planning either? I'm not sure it's a problem that can be solved by a similar system. Our estimates are wrong because of tendencies to be overly optimistic/pessimistic about certain tasks, and because of neglecting to account for systematic delays (e.g. sick days, a major bug crisis) - and usually not because we lack knowledge about the work that needs to be done. Our planning, on the other hand, is often incomplete because we simply don't have enough knowledge at this early stage, and I don't see how an EBS-like system could fill that gap. So we're back to methodology. We need to find a way to accommodate bad or incomplete work plans that's better than voodoo multiplication.

    Read the article
