Search Results

Search found 16752 results on 671 pages for 'multi language'.


  • Organizing development teams

    - by Patrick
    A long time ago, when my company was much smaller, dividing the development work over teams was quite easy: the 'application' team developed the application-specific logic (often requiring deep insight into specific industry problems), and the 'generic' team developed the parts that were common/generic for all applications (user-interface-related stuff, database access, low-level Windows stuff, ...). Over the years the boundaries between the teams have become fuzzy:
    - the 'application' teams often write application-specific functionality with a 'generic' part; instead of asking the 'generic' team to write that part for them, they write it themselves to speed up development, then donate it to the 'generic' team
    - the 'generic' team's focus seems to be more 'maintenance oriented': all of the 'very generic' code has already been written, so no new development is needed there, but instead they continuously have to support all the functionality donated by the application teams
    All this seems to indicate that it's no longer a good idea to have this split in teams. Maybe the 'generic' team should evolve into a 'software quality' team (defining and guarding the rules for writing good-quality software), or into a 'software deployment' team (defining how software should be deployed, installed, ...). How do you split up the work between teams if you have different applications?
    - everybody can write generic code and donates it to a central 'generic' team?
    - everybody can write generic code, but nobody 'manages' this generic code (everybody is the owner)?
    - generic code is written by a 'generic' team only, and the applications have to wait until the 'generic' team delivers the generic part (via a library, via a DLL)?
    - there is no overlap in code between the different applications?
    - some other way?
    Notice that the advantage of having the mix (allowing everybody to write anywhere in the code) is that code is written in a more flexible way, and it's easier to debug since you can easily step into the 'generic' code in the debugger. But the big (and maybe only) disadvantage is that this generic code may become nobody's responsibility if there is no clear team that manages it anymore. What is your vision?

    Read the article

  • What do you call the concept of dynamic data definition?

    - by DJTripleThreat
    Maybe this is simpler and more straightforward than what I'm thinking, but I can't seem to find this concept on Google anywhere. The concept is this: you have a table in a database and the table has a specified number of columns. However, previous clients have asked me that there also be a set of dynamic, user-defined columns that can be added on the fly. What is this concept called, and is it considered a design pattern?
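    This requirement is commonly implemented with the entity-attribute-value (EAV) model: instead of adding real columns, user-defined fields are stored as rows in a side table. A minimal sketch in Python with sqlite3 (the table and column names are illustrative, not from the question):

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT);
            -- the user-defined "columns" live as rows here
            CREATE TABLE customer_attr (
                customer_id INTEGER REFERENCES customer(id),
                attr_name   TEXT,
                attr_value  TEXT,
                PRIMARY KEY (customer_id, attr_name)
            );
        """)
        conn.execute("INSERT INTO customer VALUES (1, 'Acme')")
        # a client-defined field added on the fly -- no ALTER TABLE needed
        conn.execute("INSERT INTO customer_attr VALUES (1, 'fax_number', '555-0100')")
        for name, value in conn.execute(
                "SELECT attr_name, attr_value FROM customer_attr WHERE customer_id = 1"):
            print(name, "=", value)

    The trade-off is that querying and type-checking these pseudo-columns is weaker than with real columns, which is why EAV is often debated rather than treated as a standard design pattern.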

    Read the article

  • How would one go about adding (minor) syntactic sugars to Java?

    - by polygenelubricants
    Suppose I want to add minor syntactic sugars to Java. Just little things like adding regex pattern literals, or perhaps base-2 literals, or multiline strings, etc. Nothing major grammatically (at least for now). How would one go about doing this? Do I need to extend the bytecode compiler? (Is that possible?) Can I write Eclipse plugins to do simple source code transforms before feeding it to the standard Java compiler?

    Read the article

  • iPhone - Track three touches

    - by Striker
    Suppose you have three points of contact on the iPhone screen and one of those touches moves... The touchesMoved method will be invoked and the [[event touchesForView:self] count] will be equal to '3' because there are three touches for the event, but how can you distinguish between the touches? For example - find out whether it was the first, second, or third touch which moved? Thanks.

    Read the article

  • Optimality of Binary Search

    - by templatetypedef
    Hello all- This may be a silly question, but does anyone know of a proof that binary search is asymptotically optimal? That is, if we are given a sorted list of elements where the only permitted operation on those objects is a comparison, how do you prove that the search can't be done in o(lg n)? (That's little-o of lg n, by the way.) Note that I'm restricting this to elements where the only permitted operation is a comparison, since there are well-known algorithms that can beat O(lg n) in expectation if you're allowed to do more complex operations on the data (see, for example, interpolation search). Thanks so much! This has really been bugging me since it seems like it should be simple, but it has managed to resist all my best efforts. :-)
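    For reference, the standard decision-tree argument (textbook reasoning, not text from the question): any comparison-based search is a binary decision tree whose leaves are the possible outcomes. With n elements plus a "not found" outcome there are at least n + 1 leaves, and a binary tree with L leaves has height at least ceil(log2 L), so the worst-case number of comparisons satisfies

        \[
            h \;\ge\; \lceil \log_2 (n+1) \rceil \;=\; \Omega(\lg n),
        \]

    which rules out any comparison-based search running in o(lg n) comparisons in the worst case.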

    Read the article

  • Can I use part of MD5 hash for data identification?

    - by sharptooth
    I use an MD5 hash for identifying files of unknown origin. There's no attacker here, so I don't care that MD5 has been broken and that one can intentionally generate collisions. My problem is that I need to provide logging so that different problems are easier to diagnose. If I log every hash as a full hex string, that's too long, inconvenient, and ugly, so I'd like to shorten the hash string. Now, I know that just taking a small part of a GUID is a very bad idea - GUIDs are designed to be unique as a whole, but parts of them are not. Is the same true for MD5 - can I take, say, the first 4 bytes of the MD5 and assume that the collision probability only gets higher because of the reduced number of bytes compared to the original hash?
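    Unlike a GUID, every bit of an MD5 digest is (absent an attacker) uniformly distributed, so a prefix behaves like a shorter random hash; the cost is the birthday bound - with a 4-byte (32-bit) prefix you reach roughly a 50% chance of some collision once you have on the order of 2^16, about 65,000, distinct files. A minimal sketch in Python (the prefix length and file name are illustrative assumptions):

        import hashlib

        def short_id(path, nbytes=4):
            """First `nbytes` of the file's MD5, as a short hex tag for log lines."""
            h = hashlib.md5()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(65536), b""):
                    h.update(chunk)
            return h.hexdigest()[:nbytes * 2]   # 2 hex characters per byte

        # Hypothetical usage:
        # print(short_id("some_unknown_file.bin"))   # e.g. 'd41d8cd9'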

    Read the article

  • C++0x implementation guesstimates?

    - by dsimcha
    The C++0x standard is on its way to being complete. Until now, I've dabbled in C++, but avoided learning it thoroughly because it seems like it's missing a lot of modern features that I've been spoiled by in other languages. However, I'd be very interested in C++0x, which addresses a lot of my complaints. Any guesstimates, after the standard is ratified, as to how long it will take for major compiler vendors to provide reasonably complete, production-quality implementations? Will it happen soon enough to reverse the decline in C++'s popularity, or is it too little, too late? Do you believe that C++0x will become "the C++" within a few years, or do you believe that most people will stick to the earlier standard in practice and C++0x will be somewhat of a bastard stepchild, kind of like C99?

    Read the article

  • Why isn't the eigenclass equivalent to self.class, when it looks so similar?

    - by The Wicked Flea
    I've missed the memo somewhere, and I hope you'll explain this to me. Why is the eigenclass of an object different from self.class?

        class Foo
          def initialize(symbol)
            eigenclass = class << self
              self
            end
            eigenclass.class_eval do
              attr_accessor symbol
            end
          end
        end

    My train of logic that equates the eigenclass with self.class is rather simple: class << self is a way of declaring class methods rather than instance methods. It's a shortcut to def Foo.bar. So within the reference to the class object, returning self should be identical to self.class. This is because class << self would set self to Foo.class for the definition of class methods/attributes. Am I just confused? Or is this a sneaky trick of Ruby meta-programming?

    Read the article

  • Can a variable like 'int' be considered a primitive/fundamental data structure?

    - by Ravi Gupta
    A rough definition of a data structure is that it allows you to store data and apply a set of operations on that data while preserving its consistency before and after each operation. However, some people insist that a primitive variable like 'int' can also be considered a data structure. I get the part where it allows you to store data, but I guess the operations part is missing. Primitive variables don't have operations attached to them. So I feel that unless you have a set of operations defined and attached to it, you cannot call it a data structure: 'int' doesn't have any operations attached to it; it can only be operated on with a set of generic operators. Please advise if I've got something wrong here.

    Read the article

  • Is there anything wrong with taking immediate actions in constructors?

    - by pestaa
    I have classes like this one:

        class SomeObject {
            public function __construct($param1, $param2) {
                $this->process($param1, $param2);
            }
            ...
        }

    So I can instantly "call" it as some sort of global function, just like

        new SomeObject($arg1, $arg2);

    which has the benefits of staying concise and being easy to understand, but might break unwritten rules of semantics by not waiting until a method is called. Should I continue to feel bad because of a bad practice, or is there really nothing to worry about? Clarification: I do want an instance of the class. I use internal methods of the class only. I initialize the object in the constructor, but call the "important" action-taking methods too. I am selfish in the light of these sentences.
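    For comparison, a sketch of the two styles people usually weigh here (in Python rather than the question's PHP, with made-up names): doing the work directly in the constructor versus keeping the constructor cheap and triggering the work with an explicit method.

        class ReportEager:
            """Constructor does the work immediately (the pattern in the question)."""
            def __init__(self, source, fmt):
                self.result = self._process(source, fmt)

            def _process(self, source, fmt):
                return "processed %s as %s" % (source, fmt)


        class ReportLazy:
            """Constructor only stores state; the work happens when asked for."""
            def __init__(self, source, fmt):
                self.source, self.fmt = source, fmt
                self.result = None

            def run(self):
                self.result = "processed %s as %s" % (self.source, self.fmt)
                return self.result


        ReportEager("data.csv", "pdf")            # side effects at construction time
        ReportLazy("data.csv", "pdf").run()       # side effects only when run() is called

    The lazy variant costs one extra call but keeps construction side-effect free; that is essentially the trade-off the question is asking about.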

    Read the article

  • C# loop - break vs. continue

    - by Terrapin
    In a C# (feel free to answer for other languages) loop, what's the difference between break and continue as a means to leave the structure of the loop and go to the next iteration? Example:

        foreach (DataRow row in myTable.Rows)
        {
            if (someConditionEvalsToTrue)
            {
                break;       // what's the difference between this and continue?
                // continue;
            }
        }
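    A tiny runnable illustration of the difference (in Python rather than C#, purely as a demonstration): break abandons the loop entirely, while continue skips only the current iteration and moves on to the next one.

        for i in range(5):
            if i == 2:
                break                      # loop ends; 2, 3 and 4 are never printed
            print("break demo:", i)        # prints 0, 1

        for i in range(5):
            if i == 2:
                continue                   # skip 2 only, keep looping
            print("continue demo:", i)     # prints 0, 1, 3, 4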

    Read the article

  • How would you name...

    - by BeowulfOF
    Since naming is such an important thing in programming, I would like to start a thread for helping all those who sometimes have the same problems as I do. Rules: post a description of the form/control/class or whatever you need to find a good name for, and get name hints in the answers.

    Read the article

  • Extension methods and forward compatibility of source code

    - by TcKs
    Hi, I would like to solve a problem (hypothetical for now, but probably real in the future) with using extension methods when a class interface grows in future development. Example:

        /* the code written on 17 March 2010 */
        public class MySpecialList : IList<MySpecialClass> {
            // ... implementation
        }

        // ... somewhere elsewhere ...
        MySpecialList list = GetMySpecialList(); // returns a list of special classes
        var reversedList = list.Reverse().ToList(); // .Reverse() is an extension method
        /* now "list" is unchanged and "reversedList" has the same items in reversed order */

        /* --- in the future the interface of MySpecialList will be changed, for reason XYZ */

        /* the code written at some future date */
        public class MySpecialList : IList<MySpecialClass> {
            // ... implementation
            public MySpecialList Reverse() {
                // reverse the order of items in this collection
                return this;
            }
        }

        // ... somewhere elsewhere ...
        MySpecialList list = GetMySpecialList(); // returns a list of special classes
        var reversedList = list.Reverse().ToList(); // .Reverse() was an extension method but is now an instance method and does something else!
        /* now "list" has its items in reversed order and "reversedList" has the same items as "list" */

    My questions are: Is there some way to prevent this case (I didn't find one)? If there is no way to prevent it, is there some way to find possible issues like this? If there is no way to find possible issues, should I forbid the usage of extension methods? Thanks.

    Read the article

  • How would you calculate all possible permutations of 0 through N iteratively?

    - by Bob Aman
    I need to calculate permutations iteratively. The method signature looks like:

        int[][] permute(int n)

    For n = 3, for example, the return value would be:

        [[0,1,2], [0,2,1], [1,0,2], [1,2,0], [2,0,1], [2,1,0]]

    How would you go about doing this iteratively in the most efficient way possible? I can do this recursively, but I'm interested in seeing lots of alternative ways of doing it iteratively.
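    One iterative approach, sketched in Python rather than the question's Java-style signature: start from the identity permutation and repeatedly apply the classic "next permutation in lexicographic order" step until the sequence is fully descending.

        def permute(n):
            """All permutations of 0..n-1 in lexicographic order, built iteratively."""
            perm = list(range(n))
            result = [perm[:]]
            while True:
                # find the rightmost i with perm[i] < perm[i+1]
                i = n - 2
                while i >= 0 and perm[i] >= perm[i + 1]:
                    i -= 1
                if i < 0:
                    return result                  # perm is fully descending: done
                # find the rightmost j > i with perm[j] > perm[i], swap, reverse the tail
                j = n - 1
                while perm[j] <= perm[i]:
                    j -= 1
                perm[i], perm[j] = perm[j], perm[i]
                perm[i + 1:] = reversed(perm[i + 1:])
                result.append(perm[:])

        print(permute(3))
        # [[0, 1, 2], [0, 2, 1], [1, 0, 2], [1, 2, 0], [2, 0, 1], [2, 1, 0]]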

    Read the article

  • What is the smallest number of bits in which you can write a twin-prime calculation?

    - by HH
    A succinct example in Python, and its source. Explanation of the syntactic sugar is here.

        s=p=1;exec"if s%p*s%~-~p:print`p`+','+`p+2`\ns*=p*p;p+=2\n"*999

    The smallest number of bits is defined by the number of four-character groups you can see with hexdump; it is not a precise measure, but it's good enough until there is an ambiguity.

        $ echo 's=p=1;exec"if s%p*s%~-~p:print`p`+','+`p+2`\ns*=p*p;p+=2\n"*999' > .test
        $ hexdump .test | wc
        5 36 200
        $ hexdump .test
        0000000 3d73 3d70 3b31 7865 6365 6922 2066 2573
        0000010 2a70 2573 2d7e 707e 703a 6972 746e 7060
        0000020 2b60 2b2c 7060 322b 5c60 736e 3d2a 2a70
        0000030 3b70 2b70 323d 6e5c 2a22 3939 0a39
        000003e

    So in this case it is 31, because the initial parts are removed.
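    For readers who would rather see what the golfed one-liner enumerates, here is a plain (and far longer) Python 3 sketch that prints the same twin-prime pairs by straightforward trial division; the golfed original instead packs its primality test into a running product, which is what makes it so short.

        def is_prime(n):
            if n < 2:
                return False
            if n % 2 == 0:
                return n == 2
            d = 3
            while d * d <= n:
                if n % d == 0:
                    return False
                d += 2
            return True

        for p in range(3, 2000, 2):
            if is_prime(p) and is_prime(p + 2):
                print("%d,%d" % (p, p + 2))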

    Read the article

  • Why is there "data" and "newtype" in Haskell?

    - by martingw
    To me it seems that a newtype definition is just a data definition that obeys some restrictions (only one constructor and such), and that due to these restrictions the runtime system can handle newtypes more efficiently. OK, and the handling of pattern matching for undefined values is slightly different. But suppose Haskell knew only data definitions, and no newtypes: couldn't the compiler find out for itself whether a given data definition obeys these restrictions, and automatically treat it more efficiently? I'm sure I'm missing out on something; these Haskell designers are so clever, there must be some deeper reason for this...

    Read the article

  • Add custom method to string object [closed]

    - by cru3l
    Possible Duplicate: Can I add custom methods/attributes to built-in Python types?

    In Ruby you can reopen any built-in class and add a custom method, like this:

        class String
          def sayHello
            return self + " is saying hello!"
          end
        end

        puts 'JOHN'.downcase.sayHello
        # >>> 'john is saying hello!'

    How can I do that in Python? Is there a normal way, or just hacks?
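    A minimal sketch of the closest Python equivalent (assuming CPython): you cannot attach methods to built-in types like str directly, but subclassing gives a similar effect.

        # Monkey-patching the built-in type itself fails in CPython:
        #   str.say_hello = lambda self: self + " is saying hello!"
        #   -> TypeError (attributes of built-in/extension types can't be set)

        class MyStr(str):
            def say_hello(self):
                return self + " is saying hello!"

        print(MyStr("JOHN".lower()).say_hello())   # -> 'john is saying hello!'

        # Note: methods inherited from str (lower, upper, ...) still return plain
        # str, so chained calls need re-wrapping as above, or overriding them.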

    Read the article

  • Mandatory method documentation

    - by Sjoerd
    On my previous job, providing all methods with Javadoc was mandatory, which resulted in things like this:

        /**
         * Sets the Frobber.
         *
         * @param frobber The frobber
         */
        public void setFrobber(Frobber frobber) { ... }

    As you can see, the documentation adds little to the code, but takes up space and work. Should documenting all methods be mandatory or optional? Is there a rule for which methods to document? What are the pros and cons of requiring every method to be documented?

    Read the article

  • What is the cost of memory access?

    - by Jurily
    We like to think that a memory access is fast and constant, but on modern architectures/OSes, that's not necessarily true. Consider the following C code:

        int i = 34;
        int *p = &i;

        // do something that may or may not involve i and p
        { ... }

        // 3 days later:
        *p = 643;

    What is the estimated cost of this last assignment in CPU instructions, if
    - i is in L1 cache,
    - i is in L2 cache,
    - i is in L3 cache,
    - i is in RAM proper,
    - i is paged out to an SSD disk,
    - i is paged out to a traditional disk?
    Where else can i be? Of course the numbers are not absolute, but I'm only interested in orders of magnitude. I tried searching the web, but Google did not bless me this time.
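    For rough orientation, here are the commonly cited "latency numbers every programmer should know" figures, expressed as an illustrative Python table. These are ballpark orders of magnitude only, not measurements; they vary considerably by hardware generation, and a page fault adds kernel overhead on top of the raw device latency.

        approx_latency_ns = {
            "L1 cache hit":        1,            # a few CPU cycles
            "L2 cache hit":        5,            # on the order of 10 cycles
            "L3 cache hit":        30,           # tens of cycles
            "main memory (RAM)":   100,          # ~100 ns
            "page fault to SSD":   100_000,      # ~0.1 ms
            "page fault to HDD":   10_000_000,   # ~10 ms seek
        }

        for place, ns in approx_latency_ns.items():
            print("%-22s ~%d ns" % (place, ns))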

    Read the article

  • Highly efficient filesystem APIs for certain kinds of operations

    - by romkyns
    I occasionally find myself needing certain filesystem APIs which could be implemented very efficiently if supported by the filesystem, but I've never heard of them. For example:
    - Truncate a file from the beginning, on an allocation unit boundary
    - Split a file into two on an allocation unit boundary
    - Insert or remove a chunk from the middle of the file, again on an allocation unit boundary
    The only way I know of to do things like these is to rewrite the data into a new file. This has the benefit that the allocation unit is no longer relevant, but it is extremely slow compared to some low-level filesystem magic. I understand that the alignment requirements mean these methods aren't always applicable, but I think they can still be useful. For example, a file archiver may be able to trim down the archive very efficiently after the user deletes a file from it, even if that leaves a small amount of garbage on either side for alignment reasons. Is it really the case that such APIs don't exist, or am I simply not aware of them? I am mostly interested in NTFS, but hearing about other filesystems will be interesting too.
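    At least on Linux, APIs of exactly this kind do exist: fallocate(2) with FALLOC_FL_COLLAPSE_RANGE, FALLOC_FL_INSERT_RANGE and FALLOC_FL_PUNCH_HOLE performs block-aligned removal, insertion and hole punching on ext4 and XFS (I'm not aware of an NTFS equivalent beyond sparse-file hole punching). A rough Python/ctypes sketch, assuming Linux, glibc and a filesystem that supports the flag; the file name is hypothetical:

        import ctypes, ctypes.util, os

        libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)
        libc.fallocate.argtypes = [ctypes.c_int, ctypes.c_int,
                                   ctypes.c_longlong, ctypes.c_longlong]

        FALLOC_FL_COLLAPSE_RANGE = 0x08   # remove a block-aligned range, shrinking the file

        def remove_middle(path, offset, length):
            """Drop bytes [offset, offset+length) without rewriting the file.
            offset and length must be multiples of the filesystem block size."""
            fd = os.open(path, os.O_RDWR)
            try:
                if libc.fallocate(fd, FALLOC_FL_COLLAPSE_RANGE, offset, length) != 0:
                    err = ctypes.get_errno()
                    raise OSError(err, os.strerror(err))
            finally:
                os.close(fd)

        # remove_middle("archive.bin", 4096, 8192)   # hypothetical usage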

    Read the article
