Search Results

Search found 14074 results on 563 pages for 'programmers'.


  • Is there an established convention for separating Windows file names in a string?

    - by Heinzi
    I have a function which needs to output a string containing a list of file paths. I can choose the separation character but I cannot change the data type (e.g. I cannot return a List<string> or something like that). Wanting to use some well-established convention, my first intuition was to use the semicolon, similar to what Windows's PATH and Java's CLASSPATH (on Windows) environment variables do: C:\somedir\somefile.txt;C:\someotherdir\someotherfile.txt However, I was surprised to notice that ; is a valid character in an NTFS file name. So, is the established best practice to just ignore this fact (i.e. "no sane person should use ; in a file name and if they do, it's their own fault") or is there some other established character for separating Windows paths or files? (The pipe (|) might be a good choice, but I have not seen it used anywhere yet for this purpose.)
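
    A minimal Java sketch of the pipe idea from the last sentence (illustration only; the method names and example paths are made up). The pipe works here because '|' is one of the characters Windows forbids in file names, unlike ';':

        import java.util.Arrays;
        import java.util.List;

        // Joining and splitting a list of Windows paths with '|', which cannot
        // occur inside an NTFS file name, so no escaping scheme is needed.
        public class PathListDemo {

            static String joinPaths(List<String> paths) {
                return String.join("|", paths);
            }

            static List<String> splitPaths(String joined) {
                // '|' is a regex metacharacter, so it has to be escaped for split().
                return Arrays.asList(joined.split("\\|"));
            }

            public static void main(String[] args) {
                String joined = joinPaths(Arrays.asList(
                        "C:\\somedir\\somefile.txt",
                        "C:\\someotherdir\\someotherfile.txt"));
                System.out.println(joined);
                System.out.println(splitPaths(joined));
            }
        }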

    Read the article

  • What are algorithmic paradigms?

    - by Vaibhav Agarwal
    We generally talk about programming paradigms (functional, procedural, object-oriented, imperative, etc.), but what should I reply when I am asked about the paradigms of algorithms? For example, are the Travelling Salesman Problem, Dijkstra's shortest-path algorithm, Euclid's GCD algorithm, binary search, Kruskal's minimum spanning tree, and the Tower of Hanoi paradigms of algorithms? Or should I answer with the data structures I would use to design these algorithms?

    Read the article

  • A talk about observer pattern

    - by Martin
    As part of a university course I'm taking, I have to give a 10-minute talk about the observer pattern. So far these are my points: what is it? (definition); polling as an alternative; polling vs. observer (when observer is better, when polling is better, taken from here); and a statement that the Mediator pattern is worth checking out (I won't have time to cover it in 10 minutes). I would be happy to hear any suggestions for additions, or about another alternative if one exists.
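
    Since the talk needs a concrete illustration, here is a minimal, hypothetical Java sketch of the pattern (the names Subject/Observer and the event strings are made up): instead of clients polling the subject for changes, the subject pushes a notification to everyone who registered.

        import java.util.ArrayList;
        import java.util.List;

        public class ObserverDemo {

            interface Observer {
                void update(String event);
            }

            static class Subject {
                private final List<Observer> observers = new ArrayList<>();

                void subscribe(Observer o) {
                    observers.add(o);
                }

                // The subject pushes the change to all registered observers,
                // so nobody has to poll for it.
                void notifyObservers(String event) {
                    for (Observer o : observers) {
                        o.update(event);
                    }
                }
            }

            public static void main(String[] args) {
                Subject subject = new Subject();
                subject.subscribe(e -> System.out.println("observer 1 got: " + e));
                subject.subscribe(e -> System.out.println("observer 2 got: " + e));
                subject.notifyObservers("state changed");
            }
        }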

    Read the article

  • Is eval the defmacro of javascript?

    - by Florian Margaine
    In Common Lisp, defmacro basically allows us to build our own DSL. I read this page today and it explains something cleverly done: But I wasn't about to write out all these boring predicates myself, so I defined a function that, given a list of words, builds up the text for such a predicate automatically, and then evals it to produce a function. Which just looks like defmacro to me. Is eval the defmacro of javascript? Could it be used as such?

    Read the article

  • What is the story behind Java Vulnerabilities?

    - by Maryam
    I have always appreciated the Java language. It is known as a very secure platform, and many banks use it in their web applications. I wanted to build a project for my school and I discussed the options with some developers. However, one of them said we should avoid Java because of vulnerabilities that appeared in it recently. For this reason I want to make sure: what is the story behind this, and does it mean that Java today is considered less secure than it used to be?

    Read the article

  • Are CK Metrics still considered useful? Is there an open source tool to help?

    - by DeveloperDon
    Chidamber & Kemerer proposed several metrics for object oriented code. Among them, depth of inheritance tree, weighted number of methods, number of member functions, number of children, and coupling between objects. Using a base of code, they tried to correlated these metrics to the defect density and maintenance effort using covariant analysis. Are these metrics actionable in projects? Perhaps they can guide refactoring. For example weighted number of methods might show which God classes needed to be broken into more cohesive classes that address a single concern. Is there approach superseded by a better method, and is there a tool that can identify problem code, particularly in moderately large project being handed off to a new developer or team?

    Read the article

  • scrapy cannot find div on this website [on hold]

    - by Jaspal Singh Rathour
    I am very new at this and have been trying to get my head around my first selector. Can somebody help? I am trying to extract data from the page http://groceries.asda.com/asda-webstore/landing/home.shtml?cmpid=ahc--ghs-d1--asdacom-dsk-_-hp#/shelf/1215337195041/1/so_false, specifically all the info under div class = "listing clearfix shelfListing", but I can't seem to figure out how to format response.xpath(). I have managed to launch the scrapy console, but no matter what I type into response.xpath() I can't seem to select the right node. I know it works because when I type response.xpath('//div[@class="container"]') I get a response, but I don't know how to navigate to the listing clearfix shelfListing. I am hoping that once I get this bit I can continue working my way through the spider. Thank you in advance! PS: I wonder if it is not possible to scan this site - is it possible for the owners to block spiders?

    Read the article

  • Is it reasonable to expect knowing the whole stack bottom up?

    - by Vaibhav Garg
    I am a senior developer/architect/product manager for embedded systems. The systems that I have had experience with have typically been small to medium size codebases - typically close to 25-30K LOC in C, using 8-, 16- and 32-bit low-end microcontrollers. The systems have been entirely bootstrapped by our team - meaning that everything from the start-up code to the end application code has either been written by the team or, at the very least, is thoroughly understood and maintained by us. Now, if we were to start developing more complex systems with complex peripherals, such as USB OTG et al. (think low-end cell phones), there are libraries and stacks available commercially and from chip vendors that reduce the task to just calling the right APIs and being able to use those peripherals. Now, from a habit point of view, this does not give me and the team a comfortable feeling, not being able to comprehend the entire code tree, with virtual black boxes at the lower layers. Is it reasonable to devote, and reserve, time for getting into the details of how the APIs are implemented, assuming that the same would also entail getting into the details of the relevant standards (again, USB as an example)? Or, alternatively, should a thorough understanding of the top-level usage of the APIs be sufficient? This of course assumes that the source code for all libraries is available, which it is, in almost all cases. Edit: In partial response to @Abhi Beckert, the documentation is refreshingly comprehensive and meticulously maintained, as far as I know and have been able to judge. I have not had long experience with it.

    Read the article

  • Question about casting a class in Java with generics

    - by Florian F
    In Java 6, Class<? extends ArrayList<?>> a = ArrayList.class; gives an error, but Class<? extends ArrayList<?>> b = (Class<? extends ArrayList<?>>) ArrayList.class; gives a warning. Why is (a) an error? What is it that Java needs to do in the assignment, if not the cast shown in (b)? And why isn't ArrayList compatible with ArrayList<?>? I know one is "raw" and the other is "generic", but what is it you can do with an ArrayList and not with an ArrayList<?>, or the other way around?
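
    For reference, the two lines from the question can be dropped into a small, self-contained sketch (mirroring what the question states: the plain assignment is rejected, the cast compiles with an "unchecked" warning):

        import java.util.ArrayList;

        public class GenericClassCast {

            public static void main(String[] args) {
                // Does not compile: ArrayList.class has type Class<ArrayList> (raw),
                // which is not a subtype of Class<? extends ArrayList<?>>.
                // Class<? extends ArrayList<?>> a = ArrayList.class;

                // Compiles with an "unchecked" warning (suppressed here): the cast
                // asks the compiler to take on faith what the raw Class<ArrayList>
                // does not guarantee about element types.
                @SuppressWarnings("unchecked")
                Class<? extends ArrayList<?>> b =
                        (Class<? extends ArrayList<?>>) ArrayList.class;

                System.out.println(b);
            }
        }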

    Read the article

  • Is Debug.Assert obsolete if you write unit tests?

    - by Justin Pihony
    Just like the question asks, is there a need to add Debug.Assert into your code if you are writing unit tests (which have their own assertions)? I can see that this might make the code more obvious without having to go into the tests; however, it just seems that you might end up with duplicated asserts. It seems to me that Debug.Assert was helpful before unit testing became more prevalent, but is now unnecessary. Or am I not thinking of some use case?

    Read the article

  • How do you keep code with continuations/callbacks readable?

    - by Heinzi
    Summary: Are there some well-established best-practice patterns that I can follow to keep my code readable in spite of using asynchronous code and callbacks? I'm using a JavaScript library that does a lot of stuff asynchronously and heavily relies on callbacks. It seems that writing a simple "load A, load B, ..." method becomes quite complicated and hard to follow using this pattern. Let me give a (contrived) example. Let's say I want to load a bunch of images (asynchronously) from a remote web server. In C#/async, I'd write something like this:

        disableStartButton();
        foreach (var myData in myRepository)
        {
            var result = await LoadImageAsync("http://my/server/GetImage?" + myData.Id);
            if (result.Success)
            {
                myData.Image = result.Data;
            }
            else
            {
                write("error loading image " + myData.Id);
                return;
            }
        }
        write("success");
        enableStartButton();

    The code layout follows the "flow of events": first, the start button is disabled, then the images are loaded (await ensures that the UI stays responsive) and then the start button is enabled again. In JavaScript, using callbacks, I came up with this:

        disableStartButton();
        var count = myRepository.length;
        function loadImage(i) {
            if (i >= count) {
                write("success");
                enableStartButton();
                return;
            }
            var myData = myRepository[i];
            LoadImageAsync("http://my/server/GetImage?" + myData.Id, function(success, data) {
                if (success) {
                    myData.Image = data;
                } else {
                    write("error loading image " + myData.Id);
                    return;
                }
                loadImage(i + 1);
            });
        }
        loadImage(0);

    I think the drawbacks are obvious: I had to rework the loop into a recursive call, the code that's supposed to be executed in the end is somewhere in the middle of the function, the code starting the download (loadImage(0)) is at the very bottom, and it's generally much harder to read and follow. It's ugly and I don't like it. I'm sure that I'm not the first one to encounter this problem, so my question is: Are there some well-established best-practice patterns that I can follow to keep my code readable in spite of using asynchronous code and callbacks?
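
    The question is about JavaScript, but the underlying idea behind most well-established answers (promises/futures, chaining one asynchronous step after the next instead of nesting callbacks) can be sketched in any language. Purely as an illustration of that shape, here is a hypothetical Java version using CompletableFuture; loadImageAsync() is a stand-in for the asynchronous call from the question:

        import java.util.List;
        import java.util.concurrent.CompletableFuture;

        public class SequentialLoad {

            // Stand-in for the asynchronous image download in the question.
            static CompletableFuture<String> loadImageAsync(String id) {
                return CompletableFuture.supplyAsync(() -> "image-" + id);
            }

            static CompletableFuture<Void> loadAll(List<String> ids) {
                CompletableFuture<Void> chain = CompletableFuture.completedFuture(null);
                for (String id : ids) {
                    // Each step is appended to the chain and starts only after the
                    // previous one finishes, so the code reads top to bottom in the
                    // same order the events happen.
                    chain = chain.thenCompose(v ->
                            loadImageAsync(id).thenAccept(img ->
                                    System.out.println("loaded " + img)));
                }
                return chain;
            }

            public static void main(String[] args) {
                loadAll(List.of("1", "2", "3")).join();
                System.out.println("success");
            }
        }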

    Read the article

  • Project Manager that wants to lock in time estimate with a signed contract

    - by sunpech
    At a previous employment, a project manager (PM) wasn't satisfied with the delivery time of the code on a project I was on. I was told by my project lead that the PM was considering having me sign a contract to lock in the time estimates I gave for tasks and the delivery dates. The situation on the project was that we were working with new technologies, a new codebase, new coding standards, and very prone-to-change requirements. I was learning new things and applying them the best I could on requirements that kept on changing. The requirements grew by 2-3 times throughout the iterations, with my estimate-to-complete growing by roughly 5-8 times. The only things that didn't change were the estimates and delivery dates. Yes, I did end up missing most deadlines. And I was working on some very new technologies that no one else on the entire development team could really help out with, because they weren't familiar with them - at least not easily. It seemed to me then that the PM wanted his numbers to add up, and thus wanted me to sign a contract to "ensure" that I would always deliver working code on time. I suppose with a signed contract the PM could use it against me if I couldn't deliver on time. I believe what happened next was that other project managers and/or project leads defended me, and didn't let this happen. My question is: should this raise a red flag about the manager? Is it common practice for a manager to lock in the time estimates of a software developer with a signed contract? Or, in this case, to try to? Please note, I was a full-time employee, not an independent consultant. Update: I want to add that I did give new estimates weekly, but it seems the original estimates and delivery dates were what the PM was fixated on.

    Read the article

  • Architecture advice for converting biz app from old school to new school?

    - by Aaron Anodide
    I've got a WinForms business application that has evolved over the past few years. It's forms over data, with a number of custom UI experiences tailored to the business, so I don't think it's a candidate to port to something like SharePoint or rewrite in LightSwitch (at least not without significant investment). When I started it in 2009 I was new to this type of development (coming from more low-level programming, and my RDBMS knowledge was just slightly greater than what I got from school). Thus, when I was confronted with a business model that operates on a strict monthly accounting cycle, I made the unfortunate decision to create a separate database for each accounting period. Also, when I started I knew DataSets, then I learned Linq2Sql, then I learned Entity Framework. The screens are a mix and match of those. Now, after a few years developing this thing by myself, I've finally got a small team. Ultimately, I want a web front end (for remote access to more straight-up screens with grids of data) and a thick client (for the highly customized interfaces). My question is: can you offer me some broad-strokes architecture advice that will help me formulate a battle plan to convert over to a single database and lay the foundations for my future goals at the same time? Here's a screenshot showing how an older screen uses DataSets and a newer screen uses EF (I'm thinking this might make it more real for someone reading the question - I'm willing to add any amount of detail if someone is willing to help).

    Read the article

  • Getting from a user-story to code while using TDD (scrum)

    - by Ittai
    I'm getting into Scrum and TDD and I think I have some confusion which I'd like your feedback about. Let's assume I have a user story in my backlog; in order for me to start developing it as part of TDD I need to have requirements, right so far? Is it true to say that the product manager and QA should be responsible for taking the user story and breaking it down into acceptance tests? I think the above is true, since the acceptance tests need to be formal, so they can be used as tests, but also human-readable, so that the product owner can approve that they are the requirements, right? Is it also true that I later take these acceptance tests and use them as my requirements, i.e. they are a set of use cases which I implement (through TDD)? I hope I'm not making too much of a mess, but that's the current flow I have in mind right now. Update: I think my initial intentions were unclear so I'll try to rephrase. I want to know more details about the Scrum flow of turning a user story into code while using TDD. The starting point is obvious: a user surfaces a need (or the user's representative, such as the product owner, does), which is a short 1-2 line description in the known format, and that is added to the product backlog. When there is a sprint planning meeting, user stories are taken from the backlog and assigned to developers. In order for a developer to write code they need requirements (especially in TDD, since the requirements are what the tests are derived from). When, by whom and in which format are the requirements compiled? What I had in mind was that the product owner and QA define the requirements via acceptance tests (I'm thinking of automating them using FitNesse or the like, but that's not the core issue), which serve two purposes at the same time: they define "done" properly, and they give the developer something to derive tests from (see the sketch below). I wasn't sure when these should be written (if before the sprint in which the story is picked, that might be a waste, since additional information will arrive or the story might not be picked; if during the iteration, the developer might get stuck waiting for them...)
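
    To make the "acceptance tests double as requirements" idea concrete, here is a hypothetical sketch in JUnit (the user story, the Cart/Item classes and the numbers are all made up): the test states the criterion in a way the product owner can read, defines "done", and gives the developer a starting point for TDD.

        import static org.junit.jupiter.api.Assertions.assertEquals;

        import org.junit.jupiter.api.Test;

        // Story: "As a shopper I can add an item to an empty cart and see the total update."
        class AddItemToCartAcceptanceTest {

            @Test
            void addingAnItemToAnEmptyCartUpdatesTheTotal() {
                Cart cart = new Cart();                   // given an empty cart
                cart.add(new Item("book", 12.50));        // when the shopper adds an item
                assertEquals(12.50, cart.total(), 0.001); // then the total reflects it
            }

            // Minimal stand-ins so the sketch is self-contained.
            static class Item {
                final String name;
                final double price;
                Item(String name, double price) { this.name = name; this.price = price; }
            }

            static class Cart {
                private double total = 0;
                void add(Item item) { total += item.price; }
                double total() { return total; }
            }
        }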

    Read the article

  • Great debugging skills, weak problem solving

    - by Mahmoud
    For the 5 years I worked for various companies, I worked on large software like computer vision kits, embedded systems, and games. I found myself very good at debugging; I've even found and fixed bugs in frameworks. The problem is that I'm very weak at problem solving. I had an interview with Qualcomm, and they said: you're fine at software, but you have limited problem solving. I also had the same results with Google. I'm very bad at solving puzzles and brain teasers. During the interviews I solved all of the software-related problems on the blackboard, but when I went to the GM and faced math problems and probabilities, I struggled. How can I improve my problem-solving skills? Edit: Some of the problems: A cake that can be cut from anywhere and needs just one cut to be halved equally. I told him cut it horizontally; he said no, consider it as a 2D problem! Consider 3 concentric circles; each one can get a color, but not matching the other circles - how many blobs can you make out of those circles? (this was with the GM, Augmented Reality SDK). Consider an infinite train, and you looked out the window, and there are two cars, one big and one small - what is the probability of seeing only a big car? I said 50%; he said, what if you don't know the lengths of the two cars and you want the probability of getting the biggest one? I struggled and didn't solve it... I was really exhausted after a long day of interviews. The probability of picking a number divisible by 5 from the numbers 1 to 100 - struggled!! All the coding questions I solved, like reverse a string, detect a cycle in a linked list, etc.

    Read the article

  • Is software development an engineering discipline?

    - by Vaibhav Garg
    Can software development be considered engineering? If not, what does it lack in order to qualify as an engineering discipline? Related to this is this question on Stack Overflow about the difference between a programmer and a software engineer. There is the Software Engineering Institute at Carnegie Mellon University, which prescribes and maintains the CMMI standards. Is this something that will turn development into engineering?

    Read the article

  • How can I be sure that professional programming is not for me?

    - by user17766
    Hello everybody. I love programming, developing projects for a hobby and learning new concepts. But I am struggling a lot in my current job, despite having learned many things well. I can hardly even understand the tasks assigned to me, and I am asking myself why I am having such a hard time. Maybe it is not my fault? Our architect doesn't spend enough time explaining the complicated sides of the project to us - or maybe I am just not smart enough to understand it quickly. Does our architect even know what kind of hell he is creating? Seeing 3 levels of generic types and 4-5 levels of generic inheritance in the domain model objects really makes me think so. It looks like abusing concepts more than reducing complexity. I think he hasn't experienced such a big project before and is getting confused by the problems of the project himself. Maybe I am not in the right company? Maybe I am not a good programmer? Maybe I am really stupid? Becoming good at programming concepts is not enough to deal with a big project's complications, so should someone tell me that I still have to put in a lot of effort, even as a good programmer, to adapt myself to any big project? I also had bad experiences at my previous job. My professional experience is only a few months, but I spent 2 years learning and coding for fun, and I can really say that I have good skills in OOP, design patterns, coding standards and deep knowledge of the language I currently use. Sometimes I think about leaving professional programming, working some other job, and programming just for a hobby. Waiting for your suggestions and insights.

    Read the article

  • Programming Test

    - by Travis Webb
    We are looking to hire some more Java developers onto our team, and plan to test their coding abilities with a test. We currently use a web-based Java test that automatically compiles and runs the code, but it is very flaky and we're having problems with our candidates losing their work on this site. Not only is this frustrating for everyone, it makes us look like we don't know what we're doing. Is there a popular testing suite out there? What do you use? I'm not interested in dogmatic arguments on whether or not I should be testing my candidates in this way, I'm looking for a tool that will help me do it.

    Read the article

  • A deque based on binary trees

    - by Greg Ros
    This is a simple immutable deque based on binary trees. What do you think about it? Does this kind of data structure, or possibly an improvement thereof, seem useful? How could I improve it, preferably without getting rid of its strengths? (Not in the sense of more operations, in the sense of different design) Does this sort of thing have a name? Red nodes are newly instantiated; blue ones are reused. Nodes aren't actually red or anything, it's just for emphasis.

    Read the article

  • Code generation and IDE vs. writing by hand

    - by sytycs
    I have been programming for about a year now. Pretty soon I realized that I needed a great tool for writing code, and I learned Vim. I was happy with C and Ruby and never liked the idea of an IDE, which was encouraged by a lot of reading about programming. [1] However, I started my first Java project. In a CS course we were using Visual Paradigm and were encouraged to let the program generate our code from a class diagram. I did not like that idea because: our class diagram was buggy; students more experienced in Java said they would write the code by hand; and I had never written any Java before and would not understand a lot of the generated code. So I took a different approach and wrote all methods by hand (getters and setters included). My team members have written their parts (partly generated by VP) in an IDE, and I was "forced" to use it too. I realized they had generated equal amounts of code in a shorter amount of time and did not spend a lot of time setting their CLASSPATH and writing scripts for compiling that son of a b***. Additionally, we had to implement a GUI, and I don't see how we could have done that in a sane manner in Vim. So here is my problem: I fell in love with Vim and the Unix way, but it looks like for getting this job done (on time) the IDE/code generation approach is superior. Do you have similar experiences? Is Java, by the nature of the language, just more suitable for an IDE/code generation approach? Or am I lacking the knowledge to produce equal amounts of code "by hand"? [1] http://heather.cs.ucdavis.edu/~matloff/eclipse.html

    Read the article

  • Merge sort versus quick sort performance

    - by Giorgio
    I have implemented merge sort and quick sort using C (GCC 4.4.3 on Ubuntu 10.04 running on a 4 GB RAM laptop with an Intel DUO CPU at 2GHz) and I wanted to compare the performance of the two algorithms. The prototypes of the sorting functions are: void merge_sort(const char **lines, int start, int end); void quick_sort(const char **lines, int start, int end); i.e. both take an array of pointers to strings and sort the elements with index i : start <= i <= end. I have produced some files containing random strings with length on average 4.5 characters. The test files range from 100 lines to 10000000 lines. I was a bit surprised by the results because, even though I know that merge sort has complexity O(n log(n)) while quick sort is O(n^2) in the worst case, I have often read that on average quick sort should be as fast as merge sort. However, my results are the following. Up to 10000 strings, both algorithms perform equally well. For 10000 strings, both require about 0.007 seconds. For 100000 strings, merge sort is slightly faster with 0.095 s against 0.121 s. For 1000000 strings merge sort takes 1.287 s against 5.233 s of quick sort. For 5000000 strings merge sort takes 7.582 s against 118.240 s of quick sort. For 10000000 strings merge sort takes 16.305 s against 1202.918 s of quick sort. So my question is: are my results as expected, meaning that quick sort is comparable in speed to merge sort for small inputs but, as the size of the input data grows, the fact that its worst-case complexity is quadratic becomes evident? Here is a sketch of what I did. In the merge sort implementation, the partitioning consists of calling merge sort recursively, i.e. merge_sort(lines, start, (start + end) / 2); merge_sort(lines, 1 + (start + end) / 2, end); Merging of the two sorted sub-arrays is performed by reading the data from the array lines and writing it to a global temporary array of pointers (this global array is allocated only once). After each merge the pointers are copied back to the original array. So the strings are stored once, but I need twice as much memory for the pointers. For quick sort, the partition function chooses the last element of the array to sort as the pivot and scans the previous elements in one loop. After it has produced a partition of the type start ... {elements <= pivot} ... pivotIndex ... {elements > pivot} ... end it calls itself recursively: quick_sort(lines, start, pivotIndex - 1); quick_sort(lines, pivotIndex + 1, end); Note that this quick sort implementation sorts the array in place and does not require additional memory, therefore it is more memory-efficient than the merge sort implementation. So my question is: is there a better way to implement quick sort that is worthwhile trying out? If I improve the quick sort implementation and perform more tests on different data sets (computing the average of the running times on different data sets), can I expect a better performance of quick sort with respect to merge sort? EDIT: Thank you for your answers. My implementation is in place and is based on the pseudo-code I have found on Wikipedia in the section "In-place version": function partition(array, 'left', 'right', 'pivotIndex') where I choose the last element in the range to be sorted as the pivot, i.e. pivotIndex := right. I have checked the code over and over again and it seems correct to me. In order to rule out the case that I am using the wrong implementation I have uploaded the source code to github (in case you would like to take a look at it).
Your answers seem to suggest that I am using the wrong test data. I will look into it and try out different test data sets. I will report as soon as I have some results.
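
    Since the question asks whether there is a better way to implement quick sort that is worth trying, one commonly suggested change is to stop always using the last element as the pivot (which degenerates to quadratic behaviour on already-sorted or otherwise skewed input) and pick a random pivot instead; for inputs with many equal keys a three-way ("fat pivot") partition helps further. The question's code is C, but purely as an illustration here is a minimal Java sketch of the random-pivot variant:

        import java.util.Random;

        public class RandomPivotQuickSort {

            private static final Random RND = new Random();

            public static void quickSort(String[] a, int start, int end) {
                if (start >= end) {
                    return;
                }
                int p = partition(a, start, end);
                quickSort(a, start, p - 1);
                quickSort(a, p + 1, end);
            }

            // Lomuto partition, but with a randomly chosen pivot swapped into
            // the last slot instead of always taking the last element.
            private static int partition(String[] a, int start, int end) {
                int r = start + RND.nextInt(end - start + 1);
                swap(a, r, end);
                String pivot = a[end];
                int i = start;
                for (int j = start; j < end; j++) {
                    if (a[j].compareTo(pivot) <= 0) {
                        swap(a, i, j);
                        i++;
                    }
                }
                swap(a, i, end);
                return i;
            }

            private static void swap(String[] a, int i, int j) {
                String t = a[i];
                a[i] = a[j];
                a[j] = t;
            }

            public static void main(String[] args) {
                String[] data = { "pear", "apple", "fig", "apple", "kiwi" };
                quickSort(data, 0, data.length - 1);
                System.out.println(String.join(", ", data));
            }
        }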

    Read the article

  • Messaging technologies between applications?

    - by Samuel
    Recently, I had to create a program to send messages between two WinForms executables. I used a tool with simple built-in functionality to avoid having to figure out all the ins and outs of the vast quantity of protocols that exist. But now, I'm ready to learn more about the internal differences between each of these protocols. I googled a couple of them, but it would be greatly appreciated to have a good reference book that gives me a clear idea of how each protocol works and what its pros and cons are in a couple of contexts. Here is a list of nice protocols and technologies that I found: shared memory, TCP, named pipes, file mapping, mailslots, MSMQ (Microsoft Message Queuing), and WCF. I know that all of these are not specific to a language; it would be nice if examples could be in .NET. Thank you very much.

    Read the article

  • Do you develop with security in mind?

    - by MattyD
    I was listening to a podcast on Security Now and they mentioned how a lot of the security problems found in Flash exist because, when Flash was first developed, it wasn't built with security in mind, since it didn't need to be; thus Flash has major security flaws in its design. I know best practices state that you should build securely from the start. Some people or companies don't always follow best practice... My question is: do you develop to be secure from the beginning, or do you build with all the desired functionality first and then alter the code to be secure (whatever the project may be)? (I realise that this question could be a possible duplicate of "Do you actively think about security when coding?" but it is different in that it is about the actual process of building and designing the software/application.)

    Read the article

  • Junior developer support

    - by lady_killer
    I am a junior developer in my first job after university. I joined the company as a PHP developer, but I ended up developing in C# and ASP.NET. Right from the start I did not receive any training in C#, and I was assigned ASP projects with quite tight deadlines scoped by senior developers. The few project hand-overs I had from other developers were brief, and it looked like I had to discover the system myself in a really short time. This is my first job as a web developer, and I wonder whether it is normal not to have some kind of mentor to show me how to do things, especially because I am completely new to the technology. Also, do you have any idea how to tackle this? As you can imagine, it gets really frustrating! Thank you!

    Read the article

  • Strategies for browser compatibility on web applications in a corporate environment

    - by TiagoBrenck
    With the new CSS 3 and HTML 5 technology, web applications have gained a lot of new tools for better UI (user interface) interaction, beautiful templates and even responsive layouts that fit tablets and smartphones. Within a corporate environment, those new technologies are required so the company can keep up with the evolution of IT and with its competitors, but it is also required that those new web applications support old browsers. How should I deal with this situation? On one side we are asked to follow the evolution of technology, create responsive layouts and use a lot of cool jQuery plugins; on the other side, we are asked to support old browsers that do not support those new responsive features, plugins or components. I would like advice and strategies on how to create "modern" web applications that are also supported on old browsers. How does your company deal with this situation? Is it possible to have the same web application run well and look beautiful on old browsers, and be responsive and interactive on newer browsers?

    Read the article
