Search Results

Search found 19778 results on 792 pages for 'task management'.

Page 194/792

  • How do you react when the client's response to a delivery is negative?

    - by ZiG
    I am a junior programmer. My supervisor told me to sit in with the client for the delivery, so I joined. Despite what was, from my programmer's perspective, a successful delivery of the project, the client looked unsatisfied! Client: You could have included this! Us: That wasn't in the specification! Client: Common sense! As a programmer, how do you respond in this situation?

    Read the article

  • Memory Profiling with DotTrace Questions

    - by cam
    I ran dotTrace on my application (which is having some issues). The two main functions that came up, taking about 94% of the application time, were:

      IntPtr System.Windows.Forms.UnsafeNativeMethods.CallWindowProc(IntPtr, IntPtr, Int32, IntPtr, IntPtr)
      Void System.Windows.Forms.UnsafeNativeMethods.WaitMessage()

    Since I didn't know what these two functions were, I went through my code line by line. It runs smoothly and efficiently until a point where it just hangs: newFrm.Show(). The newFrm only contains a textbox. The larger the file I load into the text box (it's a notepad program), the longer it takes. Normally that would make sense, but it takes about 30 seconds for a 167 KB file. Now I'm not sure what to do. It also runs incredibly slowly, or stops responding, when you load a text file and then try to resize the window containing it. Then I realized that it only struggles to open text files containing one long string of hex (i.e. "XX-XX-XX-" and so on). With other similarly sized files it struggles somewhat with resizing, but opens within a couple of seconds. Does this have something to do with the textbox properties? I've set it to multiline and set maximum characters to 0 (unlimited). How do I solve this issue? Is there some way I can see what is being called inside those functions?
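
    A hedged sketch of an experiment suggested by that description (this is not code from the post): if the cost is WinForms word-wrap layout of a single unbroken line, toggling TextBox.WordWrap should change the hang dramatically. The hex pattern and ~167 KB size below are invented to match the symptoms.

      // Minimal WinForms sketch (assumption: the hang comes from word-wrap
      // layout of one very long line, not from file I/O). Compare startup
      // time with WordWrap = true vs. false.
      using System;
      using System.Text;
      using System.Windows.Forms;

      static class WrapExperiment
      {
          [STAThread]
          static void Main()
          {
              // Build a ~167 KB single-line hex-style string ("XX-XX-XX-...").
              var sb = new StringBuilder();
              while (sb.Length < 167 * 1024) sb.Append("4A-");

              var frm = new Form { Text = "WordWrap experiment" };
              var box = new TextBox
              {
                  Multiline = true,
                  MaxLength = 0,          // 0 = unlimited, as in the question
                  WordWrap = false,       // flip to true and compare
                  ScrollBars = ScrollBars.Both,
                  Dock = DockStyle.Fill
              };
              frm.Controls.Add(box);
              frm.Shown += (s, e) => box.Text = sb.ToString();
              Application.Run(frm);
          }
      }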

    Read the article

  • High memory usage for dummies

    - by zaf
    I've just restarted my Firefox web browser again because it started stuttering and slowing down. This happens every other day, due to (as I understand it) excessive memory usage. I've noticed it takes 40 MB when it starts and then, by the time I notice the slowdown, it has grown to 1 GB and my machine has nothing more to offer unless I close other applications. I'm trying to understand the technical reasons why this is such a difficult problem to solve. Mozilla has a page about high memory usage: http://support.mozilla.com/en-US/kb/High+memory+usage But I'm looking for a slightly more in-depth and satisfying explanation. Not super technical, but enough to give the issue more respect and please the crowd here. Some questions I'm already pondering (they could be silly, so take it easy): When I close all tabs, why doesn't the memory usage go all the way down? Why are there no limits on extension/theme/plugin memory usage? Why does the memory usage increase if it's left open for long periods of time? Why are memory leaks so difficult to find and fix? App- and language-agnostic answers are also much appreciated.
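
    Not from the post, and not Firefox-specific, but a minimal C# sketch of the pattern that makes the last two questions hard: in a garbage-collected program a "leak" is usually a lingering reference (here, an event subscription), so nothing is ever unreachable and no tool can flag it as garbage. All names here are invented for illustration.

      // Sketch of why "closing the tab" may not return memory: a long-lived
      // publisher keeps every subscriber reachable.
      using System;

      class Publisher
      {
          public event EventHandler Tick;        // holds references to subscribers
          public void Raise() => Tick?.Invoke(this, EventArgs.Empty);
      }

      class Tab
      {
          private readonly byte[] _content = new byte[10_000_000]; // ~10 MB page data
          public void Attach(Publisher p) => p.Tick += OnTick;     // never detached!
          private void OnTick(object sender, EventArgs e) { /* redraw */ }
      }

      static class Program
      {
          static void Main()
          {
              var app = new Publisher();             // lives as long as the app
              for (int i = 0; i < 50; i++)
              {
                  var tab = new Tab();
                  tab.Attach(app);
                  // "close" the tab by dropping our reference...
              }
              GC.Collect();
              // ...but every Tab is still reachable through app.Tick, so
              // ~500 MB stays alive. From the GC's point of view the objects
              // are still in use, so no profiler reports them as leaked.
              Console.WriteLine($"Memory: {GC.GetTotalMemory(true) / 1_000_000} MB");
          }
      }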

    Read the article

  • String Constant Pool memory sector and garbage collection

    - by WickeD
    I read this question on the site: How is the java memory pool divided? I was wondering which of these sectors the "String Constant Pool" belongs to. And also, do the String literals in the pool ever get GCed? The intern() method returns the canonical reference to the String literal from the pool. If the pool does get GCed, wouldn't that be counter-productive to the idea of the string pool? New String literals would again be created, nullifying the GC. (This assumes that only a specific set of literals exists in the pool, that they never go obsolete, and that sooner or later they will be needed again.)
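
    The post is about Java, but the pooling behaviour it describes is easy to see in .NET as well (the platform several nearby excerpts use); a small C# sketch of the canonical-reference behaviour, with Java's String.intern() playing the role of string.Intern here:

      // C# sketch of string interning (analogous to Java's String.intern()).
      // Literals are interned automatically; runtime-built strings are not
      // until you ask, which is why the pool keeps one canonical copy alive.
      using System;

      static class InternDemo
      {
          static void Main()
          {
              string literal = "pool";
              string built = new string(new[] { 'p', 'o', 'o', 'l' });

              Console.WriteLine(ReferenceEquals(literal, built));                // False
              Console.WriteLine(ReferenceEquals(literal, string.Intern(built))); // True
          }
      }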

    Read the article

  • What arguments to use to explain why a SQL DB is far better than a flat file

    - by jamone
    The higher-ups in my company were told by good friends that flat files are the way to go, and that we should switch from MS SQL Server to them for everything we do. We have over 300 servers and hundreds of different databases. From just the few I'm involved with, we have 10 billion records in quite a few of them, with upwards of 100k new records a day, and who knows how many updates... A couple of others and I need to come up with a response saying why we shouldn't do this. Most of our stuff is ASP.NET, with some legacy ASP. We're thinking of making a simple console app that tests/times the same interactions against a flat file (stored on the network) and against SQL over the network: large inserts, searches, updates, etc., along with things like random network disconnects. This would show them how bad flat files can be, especially when you are dealing with millions of records. What things should I use in my response? What should I do with my demo code to illustrate this? My short list so far: Security; Concurrent access; Performance with large amounts of data; Amount of time to do such a massive rewrite/switch; Lack of transactions; PITA to map relational data to flat files. I fear that this will be a great post on the Daily WTF someday if I can't stop it now.
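
    A minimal sketch of the console demo the post proposes, with everything hypothetical: the table name, file path, connection string and pipe-delimited layout are invented, and the flat-file side is deliberately naive, because the lack of an index is the point being demonstrated.

      // Sketch: time a keyed lookup in SQL Server vs. a linear scan of a
      // network flat file. Names and connection string are placeholders.
      using System;
      using System.Data.SqlClient;   // legacy client, matching the ASP.NET-era stack
      using System.Diagnostics;
      using System.IO;

      static class FlatFileVsSql
      {
          static void Main()
          {
              var sw = Stopwatch.StartNew();
              using (var conn = new SqlConnection(@"Server=.;Database=Demo;Integrated Security=true"))
              using (var cmd = new SqlCommand("SELECT Name FROM Customers WHERE Id = @id", conn))
              {
                  cmd.Parameters.AddWithValue("@id", 9_999_999);
                  conn.Open();
                  Console.WriteLine($"SQL (index seek): {cmd.ExecuteScalar()} in {sw.ElapsedMilliseconds} ms");
              }

              sw.Restart();
              foreach (var line in File.ReadLines(@"\\server\share\customers.txt"))
              {
                  var parts = line.Split('|');
                  if (parts[0] == "9999999")   // no index: O(n) scan on every lookup
                  {
                      Console.WriteLine($"Flat file (scan): {parts[1]} in {sw.ElapsedMilliseconds} ms");
                      break;
                  }
              }
              // Concurrency is worse still: safe flat-file writes need
              // whole-file locking, while SQL Server takes row/page locks
              // inside a transaction.
          }
      }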

    Read the article

  • Local variable assignment versus direct assignment: properties and memory

    - by Typeoneerror
    In Objective-C I see a lot of sample code where the author creates a local variable, assigns it to a property, then releases the local variable. Is there a practical reason for doing this? I've been just assigning directly to the property for the most part. Would that cause a memory leak in any way? I guess I'd like to know if there's any difference between this:

      HomeScreenBtns *localHomeScreenBtns = [[HomeScreenBtns alloc] init];
      self.homeScreenBtns = localHomeScreenBtns;
      [localHomeScreenBtns release];

    and this:

      self.homeScreenBtns = [[HomeScreenBtns alloc] init];

    assuming that homeScreenBtns is a property like so:

      @property (nonatomic, retain) HomeScreenBtns *homeScreenBtns;

    I'm getting ready to submit my application to the App Store, so I'm in full optimize/QA mode.

    Read the article

  • Managing my TODO list - how to get organised

    - by sparkes
    What's the best way to organise my personal TODO list? And what tools are available for organising team TODO lists? Should I still be thinking in terms of TODOs, or are there better ways to manage my time and projects? See also this question on Organization, which is similar.

    Read the article

  • How to execute an Ant task only when source files have been modified?

    - by hughalexb
    There must be an easy way to do this. I build a Flex app using Ant that depends on a SWC library; the build works fine, except that it rebuilds the library whether it needs to or not. How do I tell Ant to run the task only if any of the source files of the library (*.as, *.mxml) are newer than the SWC? I've looked at <dependset>, but it only seems to delete files, not determine whether a task should be run. <depend> seems to expect a one-to-one relationship between source and target files rather than a one-to-many one -- I have many input files and one output file, but no intermediate object files. Thanks a lot, Alex
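
    One common pattern, sketched here with hypothetical paths and target names: let <uptodate> set a property when the SWC is newer than every source file, then skip the compile target via its unless attribute.

      <!-- Sketch only: lib/app.swc, src/ and the names below are placeholders. -->
      <target name="check-swc">
        <uptodate property="swc.uptodate" targetfile="lib/app.swc">
          <srcfiles dir="src" includes="**/*.as,**/*.mxml"/>
        </uptodate>
      </target>

      <target name="build-swc" depends="check-swc" unless="swc.uptodate">
        <!-- compc call that produces lib/app.swc goes here -->
      </target>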

    Read the article

  • G++ Multi-platform memory leak detection tool

    - by indyK1ng
    Does anyone know where I can find a memory leak detection tool for C++ which can be run either from a command line or as an Eclipse plug-in, on Windows and Linux? I would like it to be easy to use. Preferably one that doesn't override new(), delete(), malloc() or free(). Something like GDB if it's going to be a command-line tool, but I don't remember GDB being used for detecting memory leaks. If there is a unit testing framework which does this automatically, that would be great. This question is similar to other questions (such as http://stackoverflow.com/questions/283726/memory-leak-detection-under-windows-for-gnu-c-c ), however I feel it is different because those ask for Windows-specific solutions or have solutions which I would rather avoid. I feel I am looking for something a bit more specific here. Suggestions don't have to fulfill all requirements, but as many as possible would be nice. Thanks. EDIT: Since this has come up: by "override" I mean anything which requires me to #include a library or which otherwise changes how C++ compiles my code; if it does this at run time, so that running the code in a different environment won't affect anything, that would be great. Also, unfortunately, I don't have a Mac, so any suggestions for that are unhelpful, but thank you for trying. My desktop runs Windows (I have Linux installed, but my dual monitors don't work with it) and I'd rather not run Linux in a VM, although that is certainly an option. My laptop runs Linux, so I can use the tool on there, although I would definitely prefer sticking to my desktop, as the screen space is excellent for keeping all of the design documentation and requirements in view without having to move too much around on the desktop. NOTE: While I may try answers, I won't mark one as accepted until I have tried the suggestion and it is satisfactory. EDIT 2: I'm not worried about the cross-platform compatibility of my code; it's a command-line application using just the C++ libraries.
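
    For the Linux half of that requirement, one widely used option that fits the stated constraints (it instruments at run time, with no #include and no recompilation) is Valgrind; a typical invocation, with the binary name as a placeholder:

      valgrind --leak-check=full ./myapp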

    Read the article

  • Heap corruption detected error when attempting to free pointer

    - by AndyGeek
    Hi, I'm pretty new to C++ and have run into a problem which I have not been able to solve. I'm trying to convert a System::String to a wchar_t pointer that I can keep for longer than the scope of the function. Once I'm finished with it, I want to clean it up properly. Here is my code:

      static wchar_t* g_msg;

      int TestConvert()
      {
          pin_ptr<const wchar_t> wchptr = PtrToStringChars("Test");
          g_msg = (wchar_t*)realloc(g_msg, wcslen(wchptr) + 1);
          wcscpy(g_msg, wchptr);
          free(g_msg); // Will be called from a different method
      }

    When free is called, I'm getting "HEAP CORRUPTION DETECTED: after Normal block (#137) at 0x02198F90." Why would I be getting this error? Andrew L

    Read the article

  • What arguments to use to explain why SQL Server is far better than a flat file

    - by jamone
    The higher-ups in my company were told by good friends that flat files are the way to go, and that we should switch from SQL Server to them for everything we do. We have over 300 servers and hundreds of different databases. From just the few I'm involved with, we have 10 billion records in quite a few of them, with upwards of 100k new records a day, and who knows how many updates... A couple of others and I need to come up with a response saying why we shouldn't do this. Most of our stuff is ASP.NET, with some legacy ASP. We're thinking of making a simple console app that tests/times the same interactions against a flat file (stored on the network) and against SQL over the network: large inserts, searches, updates, etc., along with things like random network disconnects. This would show them how bad flat files can be, especially when you are dealing with millions of records. What things should I use in my response? What should I do with my demo code to illustrate this? My short list so far: Security; Concurrent access; Performance with large amounts of data; Amount of time to do such a massive rewrite/switch; Lack of transactions; PITA to map relational data to flat files; NTFS doesn't handle tons of files in a directory well. I fear that this will be a great post on the Daily WTF someday if I can't stop it now.

    Read the article

  • Memory issue: iPad 4.2 crashes

    - by Manoj Kumar
    I am developing an application which receives 600-700 KB of XML data from the server. I have to do some manipulation of that data, so once the data is received, memory usage rises from 600 KB to 2 MB. The views already occupy 4 MB of memory in the application. While processing the XML data I do some manipulation (pre-parsing); memory increases from 600 KB to 2 MB and finally decreases back to 600 KB. Due to the increase in memory, the application gives a memory warning. On receiving the memory warning I release all the views in the navigation controller, but that releases only 1 MB of memory. Even though I release all the views, the application crashes. Please help me out with this issue. It happens on iPad 4.2. Thanks in advance

    Read the article

  • How to determine the size of a project (lines of code, function points, other)

    - by sixtyfootersdude
    How would you evaluate project size? Part A: before you start a project. Part B: for a complete project. I am interested in comparing unrelated projects. Here are some options:

    1) Lines of code. I know that this is not a good metric of productivity, but is it a reasonable measure of project size? If I wanted to estimate how long it would take to recreate a project, would this be a reasonable way to do it? How many lines of code per day should I estimate?

    2) Function points. Function points are defined in terms of the number of: inputs, outputs, inquiries, internal files, external interfaces. Does anyone have a viewpoint on whether this is a good measure? Is there a way to actually do this?

    Does anyone have another solution? Hours taken seems like it could be a useful metric, but not on its own. If I asked you which of two given programs is the "bigger program", how would you approach the question? I have seen several discussions of this on Stack Overflow, but most discuss how to measure programmer productivity. I am more interested in project size.
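
    As a worked illustration of option 2 (the counts are invented, and the weights are the commonly cited IFPUG ones for simple/average/complex: inputs 3/4/6, outputs 4/5/7, inquiries 3/4/6, internal files 7/10/15, external interfaces 5/7/10): a small app with 5 average inputs, 3 average outputs, 2 simple inquiries, 2 average internal files and 1 simple external interface counts 5×4 + 3×5 + 2×3 + 2×10 + 1×5 = 66 unadjusted function points.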

    Read the article

  • Any sense in setting obj = null (Nothing) in Dispose()?

    - by serhio
    Is there any sense in setting a custom object to null (Nothing in VB.NET) in the Dispose() method? Could this prevent memory leaks, or is it useless? Let's consider two examples:

      public class Foo : IDisposable
      {
          private Bar bar; // standard custom .NET object

          public Foo(Bar bar)
          {
              this.bar = bar;
          }

          public void Dispose()
          {
              bar = null; // any sense?
          }
      }

      public class Foo : RichTextBox
      {
          // this could also be: GDI+, TCP socket, SQL connection, other "heavy" object
          private Bitmap backImage;

          public Foo(Bitmap backImage)
          {
              this.backImage = backImage;
          }

          protected override void Dispose(bool disposing)
          {
              if (disposing)
              {
                  backImage = null; // any sense?
              }
          }
      }

    Read the article

  • Function Point Analysis -- a seriously overestimating technique?

    - by kizzx2
    I know questions about FPA have been asked numerous times before, but this time I'm taking a more analytical angle at it, backed up with data.

    1. First, some data. This question is based on a tutorial, which has a "Sample Count" section demonstrating the process step by step. You can see some screenshots of the sample application here. In the end, the author calculates the unadjusted FP count to be 99. There is another article on InformIT with industry data on typical hours/FP. It ranges from 2 hours/FP to 27.4 hours/FP. Let's try to stick with 2 for the moment (since SO readers are probably the more efficient crowd :p).

    2. Reality check!? Now just check out the screenshots again and do a little math: 99 * 2 = 198 hours; 198 hours / 40 hours per week = 5 weeks. Seriously? That sample application is going to take 5 weeks to implement? Is it just my feeling that it wouldn't take any decent programmer longer than one week (I'm not even saying one weekend) to complete it? Now let's try estimating the cost of the project. We'll use New York's current minimum wage (Wikipedia), which is $7.25: 198 * 7.25 = $1435.50. From what I can see in the screenshots, this application is a small Excel-improvement app. I could have bought MS Office Pro for 200 bucks, which gives me greater interoperability (.xls files) and flexibility (spreadsheets). (For the record, that same Web site has another article discussing productivity. It seems they typically use 4.2 hours/FP, which gives us even more shocking stats: 99 * 4.2 = 415 hours = 10 weeks = almost 3 whopping months! 415 hours * $7.25 ≈ $3000. zomg. And that's even assuming all our poor coders get the minimum wage!)

    3. Am I missing something here? Right now, I can come up with several possible explanations: (a) FPA is really only suited to bigger projects (1000+ FPs), so it becomes extremely inaccurate at smaller scale. (b) The hours/FP metric fluctuates abruptly from team to team and project to project. For a small project like this, we could have used something like 0.5 hours/FP. (That kind of makes the whole estimation exercise pointless, unless my firm does the same type of project for several years with the same team, which is not really common.) From my experience with several software metrics, Function Points are really not a lightweight metric. If the hours/FP figure fluctuates so much, then what's the point? Maybe I should have gone with User Story Points, which are much faster to count and arguably almost as uncertain. What would the FP experts' answers to this be?

    Read the article

  • Wiki for requirements engineering

    - by Shanon
    Hi, I'm looking to build a wiki-based tool that helps/aids in the requirements engineering process. More specifically, I am hoping to end up with a tool that helps inexperienced users easily create and design requirements documents on a wiki platform. I was wondering whether any wikis/wiki platforms exist, or are easily extensible, that would be worth looking at for this purpose. For instance, one of the features I was hoping to add is structure for a document, so that information is filled out in a standardised manner. Another idea I was looking at was to somehow create relationships between different types of documents (for example, a goal diagram evolves into / helps in the development of the class diagram). So far I have come across Foswiki, which claims to be fully customisable... but I'm not sure what that means and what I can really do with it. Any input on Foswiki is also highly appreciated.

    Read the article

  • How can I free all allocated memory at once?

    - by Tommy
    Here is what I am working with:

      char* qdat[NUM_ROWS][NUMTBLCOLS];  /* NUM_ROWS assumed; the original omitted the row dimension */
      char** tdat[NUM_ROWS];
      char* ptr_web_data;

      /* Loop thru each table row of the query result set */
      for (row_index = 0; row_index < number_rows; row_index++)
      {
          /* Loop thru each column of the query result set and extract the data */
          for (col_index = 0; col_index < number_cols; col_index++)
          {
              ptr_web_data = (char*) malloc(strlen(column_text) + 1);
              memcpy(ptr_web_data, column_text, strlen(column_text) + 1);
              qdat[row_index][col_index] = ptr_web_data;
          }
          tdat[row_index] = qdat[row_index];
      }

    After the data is used, the memory allocated is released one block at a time using free():

      for (row_index = 0; row_index < number_rows; row_index++)
      {
          /* Loop thru all columns used */
          for (col_index = 0; col_index < SARWEBTBLCOLS; col_index++)
          {
              /* Free the memory block pointed to by the result set array */
              free(tdat[row_index][col_index]);
          }
      }

    Is there a way to release all the allocated memory at once, for this array? Thank You.

    Read the article

  • How have you implemented SCRUM for working alone?

    - by chipk
    I am working alone at the beginning of a sizable open source project and would like to leverage some of the core ideas/methods from Scrum to help manage my time and remain focused on development and deploying early, demonstrable functionality. I would like to hear from others who have used Scrum alone and what you have found particularly useful to these ends. Thanks.

    Read the article
