Search Results


  • What should my "code sample" look like?

    - by thesunneversets
    I've just had quite a good phone interview (for a CakePHP-related position, not that it's especially important to the question). The interviewer seemed to be impressed with my resume and personality. At the end, though, he asked me to email him a code sample from my existing work project, "to check you're not secretly a terrible programmer, ha ha!" I'm not too worried that my code can't stand on its own two feet, but I'm very much an intermediate programmer rather than an expert. What obvious pitfalls should I make sure my code sample doesn't fall into, in case they rule me out on the spot? Secondly, and this is probably the harder part of the question to answer, what features in a code sample would be so impressive that they would instantly make you much more favourably inclined towards the programmer? All ideas or suggestions welcomed!

    Read the article

  • Oracle GoldenGate 12c - Leading Enterprise Replication

    - by Doug Reid
    Oracle GoldenGate 12c was released on October 17th and includes several cutting-edge features that firmly establish GoldenGate's leadership position in the data replication space. In fact, this release more than doubles the performance of data delivery, supports Oracle's new multitenant database feature, is more secure, has more options for high availability, and has made great strides in simplifying the configuration and deployment of the product. Read through the press release if you haven't already, and do not miss the quote from CERN's Eva Dafonte Perez regarding Oracle GoldenGate 12c: "…performs five times faster compared to previous GoldenGate versions and simplifies the management of a multi-tier environment."

    There are a variety of new and improved features in Oracle GoldenGate 12c. Here are the highlights:

    Optimized for Oracle Database 12c - GoldenGate 12c is custom-tailored to the unique capabilities of Oracle Database 12c, and out of the box it supports both multitenant (pluggable database (PDB)) and non-consolidated deployments of Oracle Database 12c. The naming convention used by Database 12c now has three parts (PDB name, schema name, and object name). We have changed the GoldenGate capture process to support the new naming convention and streamlined the whole process, so a single GoldenGate capture process is used at the container level rather than one per PDB. Having the capture process at the container level reduces resource usage and the number of processes. To view a conceptual architecture diagram, click here.

    Integrated Delivery for the Oracle Database - Leveraging a lightweight streaming API built exclusively for Oracle GoldenGate 12c, this process distributes load, auto-tunes the degree of parallelism, scales better, and delivers blinding rates of changed-data delivery to the Oracle database. One of the goals for Oracle GoldenGate 12c was to reduce IT costs by simplifying configuration and reducing the time needed to manage complex infrastructures. In previous versions of Oracle GoldenGate, customers would split transaction loads by grouping tables into multiple delivery processes (click here to view the previous method). Each delivery process executed independently, without any interaction with or knowledge of the other delivery processes. This setup was complicated to configure and time-consuming, as the developer needed in-depth knowledge of the source and target schemas and the transaction profile. With GoldenGate 12c and Integrated Delivery we have made it easier to configure and faster to deploy. To view a conceptual architecture diagram of Integrated Delivery, click here.

    Coordinated Delivery for Non-Oracle Databases - Coordinated Delivery orchestrates high-speed apply processes and simplifies the configuration of GoldenGate for non-Oracle targets. In Oracle GoldenGate 12c a single delivery process is used with multiple threads (click here), and key events, such as primary key updates, event markers, and DDL, are coordinated between the various threads to ensure that transactions are applied in the same sequence as they were captured, all while delivering improved performance (a minimal configuration sketch follows at the end of this post).

    Replication Between On-Premises and Cloud-Based Systems - The trend for businesses to utilize both on-premises and cloud-based systems is rising, and businesses need to replicate data back and forth. GoldenGate 12c can be configured in a variety of ways to provide real-time replication over both unrestricted and restricted (limited ports or HTTP tunneling) networks between on-premises and cloud-based systems.

    Expanded Heterogeneity - It wouldn't be a GoldenGate release without new and improved platform support. Release 1 includes support for MySQL 5.6 and Sybase 15.7. In the next release of GoldenGate, support will be expanded for MS SQL Server, DB2, and Teradata.

    Tighter Security - Oracle GoldenGate 12c is integrated with the Oracle Wallet to shield usernames and passwords using strong encryption and aliases. Customers accustomed to using the Oracle Wallet with other Oracle products will instantly be familiar with this great new feature.

    Expanded Oracle Application and Technology Support - GoldenGate can be used along with Oracle Coherence to enable real-time changed-data feeds to the Coherence cache using TopLink and the Oracle GoldenGate JMS adapter. Plus, Oracle Advanced Customer Services (ACS) now offers low-downtime E-Business Suite platform and database migrations using GoldenGate as the enabling technology.

    Stay tuned for more blogs on the new features and the upcoming launch webcast, where we will go into them in more detail. In the meantime, make sure to read through our white paper, "Oracle GoldenGate 12c Release 1 New Features Overview".
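
    Since Coordinated Delivery is new in this release, a minimal sketch of what it looks like in practice may help. The process, trail, schema, and thread numbers below are invented for illustration, and the exact options vary by release and target platform, so treat this as a conceptual sketch rather than a reference configuration:

        -- In GGSCI (illustrative only; names are hypothetical):
        -- ADD REPLICAT rcoord, COORDINATED MAXTHREADS 5, EXTTRAIL ./dirdat/ea

        -- Replicat parameter file (rcoord.prm), assuming a non-Oracle target:
        REPLICAT rcoord
        TARGETDB mytarget, USERID ggadmin, PASSWORD *****
        -- Routine DML for this table is spread across apply threads 1-3;
        -- barrier events (primary key updates, event markers, DDL) are
        -- coordinated across threads so capture order is preserved.
        MAP sales.orders, TARGET sales.orders, THREADRANGE (1-3);
        MAP sales.customers, TARGET sales.customers, THREAD (4);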

    Read the article

  • Why should main() be short?

    - by Stargazer712
    I've been programming for over 9 years, and according to the advice of my first programming teacher, I always keep my main() function extremely short. At first I had no idea why. I just obeyed without understanding, much to the delight of my professors. After gaining experience, I realized that if I designed my code correctly, having a short main() function just sort of happened. Writing modularized code and following the single responsibility principle allowed my code to be designed in "bunches", and main() served as nothing more than a catalyst to get the program running. Fast forward to a few weeks ago: I was looking at Python's source code, and I found the main() function:

        /* Minimal main program -- everything is loaded from the library */
        ...
        int main(int argc, char **argv)
        {
            ...
            return Py_Main(argc, argv);
        }

    Yay Python. Short main() function == good code. The programming teachers were right. Wanting to look deeper, I took a look at Py_Main. In its entirety, it is defined as follows:

        /* Main program */
        int Py_Main(int argc, char **argv)
        {
            int c;
            int sts;
            char *command = NULL;
            char *filename = NULL;
            char *module = NULL;
            FILE *fp = stdin;
            char *p;
            int unbuffered = 0;
            int skipfirstline = 0;
            int stdin_is_interactive = 0;
            int help = 0;
            int version = 0;
            int saw_unbuffered_flag = 0;
            PyCompilerFlags cf;

            cf.cf_flags = 0;

            orig_argc = argc;           /* For Py_GetArgcArgv() */
            orig_argv = argv;

        #ifdef RISCOS
            Py_RISCOSWimpFlag = 0;
        #endif

            PySys_ResetWarnOptions();

            while ((c = _PyOS_GetOpt(argc, argv, PROGRAM_OPTS)) != EOF) {
                if (c == 'c') {
                    /* -c is the last option; following arguments
                       that look like options are left for the
                       command to interpret. */
                    command = (char *)malloc(strlen(_PyOS_optarg) + 2);
                    if (command == NULL)
                        Py_FatalError(
                            "not enough memory to copy -c argument");
                    strcpy(command, _PyOS_optarg);
                    strcat(command, "\n");
                    break;
                }

                if (c == 'm') {
                    /* -m is the last option; following arguments
                       that look like options are left for the
                       module to interpret. */
                    module = (char *)malloc(strlen(_PyOS_optarg) + 2);
                    if (module == NULL)
                        Py_FatalError(
                            "not enough memory to copy -m argument");
                    strcpy(module, _PyOS_optarg);
                    break;
                }

                switch (c) {
                case 'b':
                    Py_BytesWarningFlag++;
                    break;
                case 'd':
                    Py_DebugFlag++;
                    break;
                case '3':
                    Py_Py3kWarningFlag++;
                    if (!Py_DivisionWarningFlag)
                        Py_DivisionWarningFlag = 1;
                    break;
                case 'Q':
                    if (strcmp(_PyOS_optarg, "old") == 0) {
                        Py_DivisionWarningFlag = 0;
                        break;
                    }
                    if (strcmp(_PyOS_optarg, "warn") == 0) {
                        Py_DivisionWarningFlag = 1;
                        break;
                    }
                    if (strcmp(_PyOS_optarg, "warnall") == 0) {
                        Py_DivisionWarningFlag = 2;
                        break;
                    }
                    if (strcmp(_PyOS_optarg, "new") == 0) {
                        /* This only affects __main__ */
                        cf.cf_flags |= CO_FUTURE_DIVISION;
                        /* And this tells the eval loop to treat
                           BINARY_DIVIDE as BINARY_TRUE_DIVIDE */
                        _Py_QnewFlag = 1;
                        break;
                    }
                    fprintf(stderr, "-Q option should be `-Qold', "
                            "`-Qwarn', `-Qwarnall', or `-Qnew' only\n");
                    return usage(2, argv[0]);
                    /* NOTREACHED */
                case 'i':
                    Py_InspectFlag++;
                    Py_InteractiveFlag++;
                    break;
                /* case 'J': reserved for Jython */
                case 'O':
                    Py_OptimizeFlag++;
                    break;
                case 'B':
                    Py_DontWriteBytecodeFlag++;
                    break;
                case 's':
                    Py_NoUserSiteDirectory++;
                    break;
                case 'S':
                    Py_NoSiteFlag++;
                    break;
                case 'E':
                    Py_IgnoreEnvironmentFlag++;
                    break;
                case 't':
                    Py_TabcheckFlag++;
                    break;
                case 'u':
                    unbuffered++;
                    saw_unbuffered_flag = 1;
                    break;
                case 'v':
                    Py_VerboseFlag++;
                    break;
        #ifdef RISCOS
                case 'w':
                    Py_RISCOSWimpFlag = 1;
                    break;
        #endif
                case 'x':
                    skipfirstline = 1;
                    break;
                /* case 'X': reserved for implementation-specific arguments */
                case 'U':
                    Py_UnicodeFlag++;
                    break;
                case 'h':
                case '?':
                    help++;
                    break;
                case 'V':
                    version++;
                    break;
                case 'W':
                    PySys_AddWarnOption(_PyOS_optarg);
                    break;
                /* This space reserved for other options */
                default:
                    return usage(2, argv[0]);
                    /* NOTREACHED */
                }
            }

            if (help)
                return usage(0, argv[0]);

            if (version) {
                fprintf(stderr, "Python %s\n", PY_VERSION);
                return 0;
            }

            if (Py_Py3kWarningFlag && !Py_TabcheckFlag)
                /* -3 implies -t (but not -tt) */
                Py_TabcheckFlag = 1;

            if (!Py_InspectFlag &&
                (p = Py_GETENV("PYTHONINSPECT")) && *p != '\0')
                Py_InspectFlag = 1;
            if (!saw_unbuffered_flag &&
                (p = Py_GETENV("PYTHONUNBUFFERED")) && *p != '\0')
                unbuffered = 1;

            if (!Py_NoUserSiteDirectory &&
                (p = Py_GETENV("PYTHONNOUSERSITE")) && *p != '\0')
                Py_NoUserSiteDirectory = 1;

            if ((p = Py_GETENV("PYTHONWARNINGS")) && *p != '\0') {
                char *buf, *warning;

                buf = (char *)malloc(strlen(p) + 1);
                if (buf == NULL)
                    Py_FatalError(
                        "not enough memory to copy PYTHONWARNINGS");
                strcpy(buf, p);
                for (warning = strtok(buf, ",");
                     warning != NULL;
                     warning = strtok(NULL, ","))
                    PySys_AddWarnOption(warning);
                free(buf);
            }

            if (command == NULL && module == NULL && _PyOS_optind < argc &&
                strcmp(argv[_PyOS_optind], "-") != 0)
            {
        #ifdef __VMS
                filename = decc$translate_vms(argv[_PyOS_optind]);
                if (filename == (char *)0 || filename == (char *)-1)
                    filename = argv[_PyOS_optind];
        #else
                filename = argv[_PyOS_optind];
        #endif
            }

            stdin_is_interactive = Py_FdIsInteractive(stdin, (char *)0);

            if (unbuffered) {
        #if defined(MS_WINDOWS) || defined(__CYGWIN__)
                _setmode(fileno(stdin), O_BINARY);
                _setmode(fileno(stdout), O_BINARY);
        #endif
        #ifdef HAVE_SETVBUF
                setvbuf(stdin,  (char *)NULL, _IONBF, BUFSIZ);
                setvbuf(stdout, (char *)NULL, _IONBF, BUFSIZ);
                setvbuf(stderr, (char *)NULL, _IONBF, BUFSIZ);
        #else /* !HAVE_SETVBUF */
                setbuf(stdin,  (char *)NULL);
                setbuf(stdout, (char *)NULL);
                setbuf(stderr, (char *)NULL);
        #endif /* !HAVE_SETVBUF */
            }
            else if (Py_InteractiveFlag) {
        #ifdef MS_WINDOWS
                /* Doesn't have to have line-buffered -- use unbuffered */
                /* Any set[v]buf(stdin, ...) screws up Tkinter :-( */
                setvbuf(stdout, (char *)NULL, _IONBF, BUFSIZ);
        #else /* !MS_WINDOWS */
        #ifdef HAVE_SETVBUF
                setvbuf(stdin,  (char *)NULL, _IOLBF, BUFSIZ);
                setvbuf(stdout, (char *)NULL, _IOLBF, BUFSIZ);
        #endif /* HAVE_SETVBUF */
        #endif /* !MS_WINDOWS */
                /* Leave stderr alone - it should be unbuffered anyway. */
            }
        #ifdef __VMS
            else {
                setvbuf (stdout, (char *)NULL, _IOLBF, BUFSIZ);
            }
        #endif /* __VMS */

        #ifdef __APPLE__
            /* On MacOS X, when the Python interpreter is embedded in an
               application bundle, it gets executed by a bootstrapping script
               that does os.execve() with an argv[0] that's different from the
               actual Python executable. This is needed to keep the Finder happy,
               or rather, to work around Apple's overly strict requirements of
               the process name. However, we still need a usable sys.executable,
               so the actual executable path is passed in an environment variable.
               See Lib/plat-mac/bundlebuiler.py for details about the bootstrap
               script. */
            if ((p = Py_GETENV("PYTHONEXECUTABLE")) && *p != '\0')
                Py_SetProgramName(p);
            else
                Py_SetProgramName(argv[0]);
        #else
            Py_SetProgramName(argv[0]);
        #endif
            Py_Initialize();

            if (Py_VerboseFlag ||
                (command == NULL && filename == NULL && module == NULL &&
                 stdin_is_interactive)) {
                fprintf(stderr, "Python %s on %s\n",
                        Py_GetVersion(), Py_GetPlatform());
                if (!Py_NoSiteFlag)
                    fprintf(stderr, "%s\n", COPYRIGHT);
            }

            if (command != NULL) {
                /* Backup _PyOS_optind and force sys.argv[0] = '-c' */
                _PyOS_optind--;
                argv[_PyOS_optind] = "-c";
            }

            if (module != NULL) {
                /* Backup _PyOS_optind and force sys.argv[0] = '-c'
                   so that PySys_SetArgv correctly sets sys.path[0] to ''
                   rather than looking for a file called "-m". See tracker
                   issue #8202 for details. */
                _PyOS_optind--;
                argv[_PyOS_optind] = "-c";
            }

            PySys_SetArgv(argc-_PyOS_optind, argv+_PyOS_optind);

            if ((Py_InspectFlag || (command == NULL &&
                                    filename == NULL &&
                                    module == NULL)) &&
                isatty(fileno(stdin))) {
                PyObject *v;
                v = PyImport_ImportModule("readline");
                if (v == NULL)
                    PyErr_Clear();
                else
                    Py_DECREF(v);
            }

            if (command) {
                sts = PyRun_SimpleStringFlags(command, &cf) != 0;
                free(command);
            } else if (module) {
                sts = RunModule(module, 1);
                free(module);
            }
            else {
                if (filename == NULL && stdin_is_interactive) {
                    Py_InspectFlag = 0; /* do exit on SystemExit */
                    RunStartupFile(&cf);
                }
                /* XXX */

                sts = -1;   /* keep track of whether we've already run __main__ */

                if (filename != NULL) {
                    sts = RunMainFromImporter(filename);
                }

                if (sts==-1 && filename!=NULL) {
                    if ((fp = fopen(filename, "r")) == NULL) {
                        fprintf(stderr, "%s: can't open file '%s': [Errno %d] %s\n",
                                argv[0], filename, errno, strerror(errno));
                        return 2;
                    }
                    else if (skipfirstline) {
                        int ch;
                        /* Push back first newline so line numbers
                           remain the same */
                        while ((ch = getc(fp)) != EOF) {
                            if (ch == '\n') {
                                (void)ungetc(ch, fp);
                                break;
                            }
                        }
                    }
                    {
                        /* XXX: does this work on Win/Win64? (see posix_fstat) */
                        struct stat sb;
                        if (fstat(fileno(fp), &sb) == 0 &&
                            S_ISDIR(sb.st_mode)) {
                            fprintf(stderr, "%s: '%s' is a directory, cannot continue\n",
                                    argv[0], filename);
                            fclose(fp);
                            return 1;
                        }
                    }
                }

                if (sts==-1) {
                    /* call pending calls like signal handlers (SIGINT) */
                    if (Py_MakePendingCalls() == -1) {
                        PyErr_Print();
                        sts = 1;
                    } else {
                        sts = PyRun_AnyFileExFlags(
                            fp,
                            filename == NULL ? "<stdin>" : filename,
                            filename != NULL, &cf) != 0;
                    }
                }
            }

            /* Check this environment variable at the end, to give programs the
             * opportunity to set it from Python.
             */
            if (!Py_InspectFlag &&
                (p = Py_GETENV("PYTHONINSPECT")) && *p != '\0')
            {
                Py_InspectFlag = 1;
            }

            if (Py_InspectFlag && stdin_is_interactive &&
                (filename != NULL || command != NULL || module != NULL)) {
                Py_InspectFlag = 0;
                /* XXX */
                sts = PyRun_AnyFileFlags(stdin, "<stdin>", &cf) != 0;
            }

            Py_Finalize();
        #ifdef RISCOS
            if (Py_RISCOSWimpFlag)
                fprintf(stderr, "\x0cq\x0c"); /* make frontend quit */
        #endif

        #ifdef __INSURE__
            /* Insure++ is a memory analysis tool that aids in discovering
             * memory leaks and other memory problems. On Python exit, the
             * interned string dictionary is flagged as being in use at exit
             * (which it is). Under normal circumstances, this is fine because
             * the memory will be automatically reclaimed by the system. Under
             * memory debugging, it's a huge source of useless noise, so we
             * trade off slower shutdown for less distraction in the memory
             * reports. -baw
             */
            _Py_ReleaseInternedStrings();
        #endif /* __INSURE__ */

            return sts;
        }

    Good God Almighty... it is big enough to sink the Titanic. It seems as though Python did the "Intro to Programming 101" trick and just moved all of main()'s code to a different function, calling it something very similar to "main". Here's my question: Is this code terribly written, or are there other reasons to have a short main function? As it stands right now, I see absolutely no difference between doing this and just moving the code in Py_Main() back into main(). Am I wrong in thinking this?

    Read the article

  • Adding complexity to remove duplicate code

    - by Phil
    I have several classes that all inherit from a generic base class. The base class contains a collection of several objects of type T. Each child class needs to be able to calculate interpolated values from the collection of objects, but since the child classes use different types, the calculation varies a tiny bit from class to class. So far I have copy/pasted my code from class to class and made minor modifications to each. But now I am trying to remove the duplicated code and replace it with one generic interpolation method in my base class. However, that is proving to be very difficult, and all the solutions I have thought of seem way too complex. I am starting to think the DRY principle does not apply as much in this kind of situation, but that sounds like blasphemy. How much complexity is too much when trying to remove code duplication?

    EDIT: The best solution I can come up with goes something like this:

    Base class:

        protected T GetInterpolated(int frame)
        {
            var index = SortedFrames.BinarySearch(frame);
            if (index >= 0)
                return Data[index];

            index = ~index;
            if (index == 0)
                return Data[index];
            if (index >= Data.Count)
                return Data[Data.Count - 1];

            return GetInterpolatedItem(frame, Data[index - 1], Data[index]);
        }

        protected abstract T GetInterpolatedItem(int frame, T lower, T upper);

    Child class A:

        public IGpsCoordinate GetInterpolatedCoord(int frame)
        {
            ReadData();
            return GetInterpolated(frame);
        }

        protected override IGpsCoordinate GetInterpolatedItem(int frame, IGpsCoordinate lower, IGpsCoordinate upper)
        {
            double ratio = GetInterpolationRatio(frame, lower.Frame, upper.Frame);
            var x = GetInterpolatedValue(lower.X, upper.X, ratio);
            var y = GetInterpolatedValue(lower.Y, upper.Y, ratio);
            var z = GetInterpolatedValue(lower.Z, upper.Z, ratio);
            return new GpsCoordinate(frame, x, y, z);
        }

    Child class B:

        public double GetMph(int frame)
        {
            ReadData();
            return GetInterpolated(frame).MilesPerHour;
        }

        protected override ISpeed GetInterpolatedItem(int frame, ISpeed lower, ISpeed upper)
        {
            var ratio = GetInterpolationRatio(frame, lower.Frame, upper.Frame);
            var mph = GetInterpolatedValue(lower.MilesPerHour, upper.MilesPerHour, ratio);
            return new Speed(frame, mph);
        }
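
    For context, GetInterpolationRatio and GetInterpolatedValue are referenced above but not shown. Here is a minimal sketch of what they might look like, assuming plain linear interpolation - my guess for illustration, not the poster's actual code:

        // Hypothetical helpers, assuming simple linear interpolation.
        protected static double GetInterpolationRatio(int frame, int lowerFrame, int upperFrame)
        {
            // How far frame sits between the two known frames, as a 0..1 fraction.
            return (double)(frame - lowerFrame) / (upperFrame - lowerFrame);
        }

        protected static double GetInterpolatedValue(double lower, double upper, double ratio)
        {
            // Standard linear blend of the two known values.
            return lower + (upper - lower) * ratio;
        }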

    Read the article

  • Information Driven Value Chains: Achieving Supply Chain Excellence in the 21st Century With Oracle

    World-class supply chains can help companies achieve top line and bottom line results in today's complex, global world. Tune into this conversation with Rick Jewell, SVP, Oracle Supply Chain Development, to hear about Oracle's vision for world-class SCM, and the latest and greatest on Oracle Supply Chain Management solutions. You will learn about Oracle's complete, best-in-class, open and integrated solutions, which are helping companies drive profitability, achieve operational excellence, streamline innovation, and manage risk and compliance in today's complex, global world.

    Read the article

  • Diagram to show code responsibility

    - by Mike Samuel
    Does anyone know how to visually diagram the ways in which the flow of control in code passes between code produced by different groups, and how that affects the amount of code that needs to be carefully written/reviewed/tested for system properties to hold? What I am trying to help people visualize are arguments of the form: For property P to hold, nd developers have to write application code, Ca, without certain kinds of errors, and nm maintainers have to make sure that the code continues to not have these kinds of errors over the project lifetime. We could reduce the error rate by educating the nd developers and nm maintainers. For us to be confident that the property holds, ns specialists still need to test or check |Ca| lines of code and continue to test/check the changes by the nm maintainers. Alternatively, we could be confident that P holds if all code paths that could violate P went through tool code, Ct, written by our specialists. In our case: test suites alone cannot give confidence that P holds; nd ≫ ns; nm ≫ ns; and |Ca| ≫ |Ct|; so writing and maintaining Ct is economical, frees up our developers to worry about other things, and reduces the ongoing education commitment by our specialists. Or those conditions do not hold, so focusing on education and testing is preferable.

    Example 1: As a concrete example, suppose we want to ensure that our web service only produces valid JSON output. Our web service provides several query and mutation operators that can be composed in interesting ways. We could try to educate everyone who maintains those operations about the JSON syntax, the importance of conformance, and the libraries available, so that when they write to an output buffer, every possible sequence of appends results in syntactically valid JSON. Alternatively, we don't expose an output stream handle to application code, and instead expose a JSON sink, so that every code path that writes a response is channeled through a sink that is written and maintained by a specialist who knows JSON syntax and can use well-written libraries to produce only valid output (see the sketch below).

    Example 2: We need to make sure that a service that receives a URL from an untrusted source and tries to fetch its content does not end up revealing sensitive files from the file system, like file:///etc/passwd. If there is a single standard way that any developer familiar with the application language's libraries would use to fetch URLs, which has file-system access turned off by default, then simply educating developers about the standard mechanism, and testing that file probing fails for some inputs, will probably be sufficient.
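
    To make the JSON-sink idea in Example 1 concrete, here is a minimal C# sketch of specialist-owned tool code (Ct); the class and method names are invented, and it assumes the System.Text.Json library:

        // Application code receives a JsonSink, never the raw output stream,
        // so every reachable output path emits syntactically valid JSON.
        using System.IO;
        using System.Text.Json;

        public sealed class JsonSink
        {
            private readonly Utf8JsonWriter _writer;

            public JsonSink(Stream output)
            {
                _writer = new Utf8JsonWriter(output);
                _writer.WriteStartObject();
            }

            // Typed appends: no way to emit an unescaped or dangling token.
            public void WriteField(string name, string value) => _writer.WriteString(name, value);
            public void WriteField(string name, long value) => _writer.WriteNumber(name, value);

            public void Close()
            {
                _writer.WriteEndObject();
                _writer.Flush();
            }
        }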

    Read the article

  • Which is more maintainable -- boolean assignment via if/else or boolean expression?

    - by Bret Walker
    Which would be considered more maintainable?

        if (a == b)
            c = true;
        else
            c = false;

    or

        c = (a == b);

    I've tried looking in Code Complete, but can't find an answer. I think the first is more readable (you can literally read it out loud), which I also think makes it more maintainable. The second one certainly makes more sense and reduces code, but I'm not sure it's as maintainable for C# developers (I'd expect to see this idiom more in, for example, Python).

    Read the article

  • What simple techniques do you use to improve performance?

    - by Cristian
    I'm talking about the way we write simple routines in order to improve performance without making the code harder to read... for instance, this is the typical for loop we learned:

        for (int i = 0; i < collection.length(); i++) {
            // stuff here
        }

    But I usually do this when a foreach is not applicable:

        for (int i = 0, j = collection.length(); i < j; i++) {
            // stuff here
        }

    I think this is a better approach, since it will call the length method only once... my girlfriend says it's cryptic, though. Is there any other simple trick you use in your own developments?

    Read the article

  • If your unit test code "smells" does it really matter?

    - by Buttons840
    Usually I just throw my unit tests together using copy and paste and all kinds of other bad practices. The unit tests usually end up looking quite ugly - they're full of "code smell" - but does this really matter? I always tell myself that as long as the "real" code is "good", that's all that matters. Plus, unit testing usually requires various "smelly hacks" like stubbing functions. How concerned should I be over poorly designed ("smelly") unit tests?
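
    For a concrete picture of the kind of duplication being described, here is a small hypothetical C# sketch (NUnit-style; the domain class and names are invented) showing the copy-paste habit next to a shared creation helper:

        using NUnit.Framework;

        public class Order
        {
            private decimal _subtotal;
            public decimal TaxRate { get; set; }
            public decimal Total => _subtotal * (1 + TaxRate);
            public void AddItem(string name, decimal price) => _subtotal += price;
        }

        [TestFixture]
        public class OrderTests
        {
            // Smelly: the same arrangement code is copy-pasted into every test.
            [Test]
            public void Total_IncludesTax_CopyPasteVersion()
            {
                var order = new Order();
                order.AddItem("book", 10.00m);
                order.TaxRate = 0.10m;
                Assert.AreEqual(11.00m, order.Total);
            }

            // Less smelly: shared setup lives in one helper; the test states intent.
            [Test]
            public void Total_IncludesTax()
            {
                var order = CreateOrderWithOneItem(price: 10.00m, taxRate: 0.10m);
                Assert.AreEqual(11.00m, order.Total);
            }

            private static Order CreateOrderWithOneItem(decimal price, decimal taxRate)
            {
                var order = new Order();
                order.AddItem("book", price);
                order.TaxRate = taxRate;
                return order;
            }
        }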

    Read the article

  • I am afraid that my University is not going to teach me enough information [closed]

    - by Muhklayne
    I attend a university and am a Computer Science major. I have barely entered the major, as I am a sophomore. However, the coursework I am doing is extremely easy already, and I feel as though this degree is going to lead me down a path of knowledge without knowing how to bring it all together. Therefore, I am coming to you to ask where I should begin learning on my own! I am willing to dedicate hours upon hours to learning to code outside of class, as it is truly my passion. I will begin by completing all the work on http://www.codecademy.com, however I feel this will not be enough either. I would love to learn frameworks for video games such as XNA and C#, combining them with C++ (as I understand video games can be created in this manner). I would also like to look into Lua and Python scripting. I am asking for advice as to where I should begin my personal studies of learning to program, as with my research it has become quite apparent that simply attaining a degree in Computer Science is quite frankly not enough. Thank you for your time!

    Read the article

  • Duplication of code (backend and javascript - knockout)

    - by Michal B.
    We have a new developer in our team. He seems like a smart guy (he just came in, so I cannot really judge). He started with implementing some small enhancements in the project (an MVC3 web application using JavaScript - jQuery and knockout). Let's say we have two values:

    A - a quite complex calculation
    C - a constant
    B = A + C

    On the screen there is value B, and the user can change it (a normal textbox). When B changes, A changes as well, because C is constant. So there is a linear dependency between A and B. Now, all the calculations are done in the backend, but we need to recalculate A as the user changes B (in JS, I would use knockout). I thought about storing the old A and B: when B changes by 10, we know that the new A will be the old A + 10. He says this is dirty, because it duplicates code (we make use of the fact that the values are dependent, and according to him that knowledge should live in only one place in our app). I understand it's not ideal, but making an AJAX request after every key press seems a bit too much. It's a really small thing, and I would not post if we hadn't had a long discussion about it. How do you deal with such problems? Also, I can imagine that using knockout implies lots of calculations on the client side, which very often leads to duplication of the same calculations from the backend. Does anyone have links to some articles/thoughts on this topic?

    Read the article

  • Are too many assertions code smell?

    - by Florents
    I've really fallen in love with unit testing and TDD - I am test infected. However, unit testing is normally used for public methods. Sometimes, though, I do have to test some assumptions/assertions in private methods too, because some of them are "dangerous" and refactoring can't help further. (I know, testing frameworks allow testing private methods.) So it became a habit of mine that (almost always) the first and the last line of a private method are both assertions. I guess this couldn't be bad (right?). However, I've noticed that I also tend to use assertions in public methods too (as in the private ones), just "to be sure". Could this be "testing duplication", since the public method assumptions are already exercised by the unit testing framework? Could someone think of too many assertions as a code smell?
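
    For reference, the habit being described - first and last line of a private method both asserting - might look like this minimal C# sketch; the method and its invariants are invented for illustration:

        using System.Diagnostics;

        public class PayrollCalculator
        {
            public decimal MonthlyPay(decimal baseSalary) => ApplyBonus(baseSalary, 0.05m) / 12;

            // Hypothetical private method guarded by entry and exit assertions.
            private decimal ApplyBonus(decimal baseSalary, decimal bonusRate)
            {
                Debug.Assert(baseSalary > 0 && bonusRate >= 0, "inputs must be sane");

                decimal result = baseSalary * (1 + bonusRate);

                Debug.Assert(result >= baseSalary, "a bonus must never reduce pay");
                return result;
            }
        }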

    Read the article

  • Getting solutions off the internet. Bad or Good? [closed]

    - by Prometheus87
    I was looking on the internet for common interview questions. I came upon one that was about finding the occurrences of certain characters in an array. The solution written right below it was, in my opinion, very elegant. But then another thought came to my mind: this solution is way better than what I came up with, so even though I now know the solution (most probably provided by someone with a better IQ), my IQ is still the same. That means that even though I may know the answer, it still isn't mine; hence, if that question were asked and I were hired on the strength of my answer, I couldn't reproduce that same elegance in my other ventures within the organization. My question is: what do you guys think about such "borrowed intelligence"? Is it good? Do you feel that finding solutions on the internet makes you think in that same, more elegant way?

    Read the article

  • One-week release cycle: how do I make this feasible?

    - by Arkaaito
    At my company (a 3-year-old web industry startup), we have frequent problems with the product team saying "aaaah, this is a crisis, patch it now!" (doesn't everybody?). This has an impact on the productivity (and morale) of the engineering staff, myself included. Management has spent some time thinking about how to reduce the frequency of these same-day requests and has come up with the solution that we are going to have a release every week. (Previously we'd been doing one every two weeks, which usually slipped by a couple of days or so.) There are 13 developers and 6 local / 9 offshore testers; the theory is that only 4 developers (and all testers) will work on even-numbered releases, unless a piece of work comes up that really requires some specific expertise from one of the other devs. Each cycle will contain two days of dev work and two days of QA work (plus 1 day of scoping / triage / ...). My questions are: (a) Does anyone have experience with this length of release cycle? (b) Has anyone heard of this length of release cycle even being attempted? (c) If (a) or (b), how on Earth do you make it work? (Any pitfalls to avoid, etc., are also appreciated.) (d) How can we minimize the damage if this effort fails?

    Read the article

  • How can I quantify the amount of technical debt that exists in a project?

    - by Erik Dietrich
    Does anyone know if there is some kind of tool to put a number on the technical debt of a code base, as a kind of code metric? If not, is anyone aware of an algorithm or set of heuristics for it? If neither of those things exists so far, I'd be interested in ideas for how to get started with such a thing. That is, how can I quantify the technical debt incurred by a method, a class, a namespace, an assembly, etc.? I'm most interested in analyzing and assessing a C# code base, but please feel free to chime in for other languages as well, particularly if the concepts are language-transcendent.

    Read the article

  • Information About Windows Reseller Web Hosting

    Windows reseller web hosting services are hosting services provided to the user completely dedicated to the Windows Operating System. The hosting can be managed effortlessly with navigation, control,... [Author: John Anthony - Web Design and Development - March 23, 2010]

    Read the article

  • How do you overcome your own coding biases when handed legacy code?

    - by Bryan M.
    As programmers, we often take incredible pride in our skills and hold very strong opinions about what is 'good' code and 'bad' code. At any given point in our careers, we've probably had some legacy system dropped in our laps and thought, 'My god, this code sucks!' because it didn't fit our notion of what good code should be, despite the fact that it may well have been perfectly functional, maintainable code. How do you prepare yourself mentally when trying to get your head around another programmer's work?

    Read the article

  • Failed to download repository information (Maverick)

    - by Rhiannon
    I have been through most of the duplicates for this question and still can't find an answer. I may have missed one, but hopefully this isn't a duplicate! I'm having a problem with updates. I get the "failed to download..." message followed by "Check your internet connection", which is clearly working fine, as I am on it now. I click details and get the following:

    W:Failed to fetch http://archive.ubuntu.com/ubuntu/dists/maverick-updates/multiverse/source/Sources 404 Not Found [IP: 91.189.92.202 80]
    W:Failed to fetch http://archive.ubuntu.com/ubuntu/dists/maverick-updates/universe/source/Sources 404 Not Found [IP: 91.189.92.202 80]
    W:Failed to fetch http://archive.ubuntu.com/ubuntu/dists/maverick-updates/multiverse/binary-i386/Packages 404 Not Found [IP: 91.189.92.202 80]
    W:Failed to fetch http://archive.ubuntu.com/ubuntu/dists/maverick-updates/universe/binary-i386/Packages 404 Not Found [IP: 91.189.92.202 80]
    W:Failed to fetch http://archive.ubuntu.com/ubuntu/dists/maverick-security/multiverse/source/Sources 404 Not Found [IP: 91.189.92.202 80]
    W:Failed to fetch http://archive.ubuntu.com/ubuntu/dists/maverick-security/universe/source/Sources 404 Not Found [IP: 91.189.92.202 80]
    W:Failed to fetch http://archive.ubuntu.com/ubuntu/dists/maverick-security/multiverse/binary-i386/Packages 404 Not Found [IP: 91.189.92.202 80]
    W:Failed to fetch http://archive.ubuntu.com/ubuntu/dists/maverick-security/universe/binary-i386/Packages 404 Not Found [IP: 91.189.92.202 80]
    E:Some index files failed to download. They have been ignored, or old ones used instead.

    All the failures have "maverick" somewhere in them, so I have gone to settings and unticked all the Maverick entries I can find, but this problem is still happening. Any ideas? Many thanks.

    Read the article

  • 14.04 LTS, 32-bit, Software Updater error "Failed to download repository information: Check your internet connection"

    - by Lucas W
    There isn't much to say about this one: when I run Software Updater, I get the above error message. That can't be good. Interestingly, when I click on "Settings..." and then close the settings dialogue that pops up, all of a sudden Software Updater successfully finds updates and installs them. I thought I should bring this to the attention of the Ubuntu community. sudo apt-get update returns the following:

    W: Failed to fetch http://ppa.launchpad.net/deluge-team/ppa/ubuntu/dists/trusty/main/binary-i386/Packages 404 Not Found
    E: Some index files failed to download. They have been ignored, or old ones used instead.

    I have screen captures, but I don't have enough reputation points to post them.

    Read the article

  • Is micro-optimisation important when coding?

    - by BozKay
    I recently asked a question on stackoverflow.com to find out why isset() was faster than strlen() in PHP. This raised questions about the importance of readable code, and whether performance improvements of microseconds were worth even considering. My father is a retired programmer, and I showed him the responses. He was absolutely certain that if a coder does not consider performance in their code, even at the micro level, they are not a good programmer. I'm not so sure - perhaps the increase in computing power means we no longer have to consider these kinds of micro-performance improvements? Perhaps that kind of consideration is up to the people who write the actual language implementation (PHP, in the above case). The environmental factors could be important too - the internet consumes 10% of the world's energy, and I wonder how wasteful a few microseconds of code is when replicated trillions of times on millions of websites. I'd like answers preferably based on facts about programming. Is micro-optimisation important when coding?

    EDIT: My personal summary of the 25 answers - thanks to all. Sometimes we need to really worry about micro-optimisations, but only in very rare circumstances. Reliability and readability are far more important in the majority of cases. However, considering micro-optimisation from time to time doesn't hurt. A basic understanding can help us avoid obviously bad choices when coding, such as

        if (expensiveFunction() && counter < X)

    which should be

        if (counter < X && expensiveFunction())

    (example from @zidarsk8). This could be an inexpensive function, in which case changing the code would be micro-optimisation. But, with a basic understanding, you would not have to change it, because you would write it correctly in the first place.

    Read the article

  • Getting out of my head

    - by BenCole
    (I put this on SO, but it got a couple of close votes saying it belonged here instead...) I've spent the last year as a single-person team developing a rich-client application (35,000+ LoC, for what it's worth). It's currently stable and in production. However, I know that my skills were rusty at the beginning of the project, so without a doubt there are major issues in the code. At this point, most of the remaining issues are in architecture, structure, or interactions - the easy problems, even the easy architecture/design problems, have already been weeded out. Unfortunately, I've spent so much time with this project that I'm having a hard time thinking outside of it - approaching it from a new perspective to see the flaws deeply buried in, or inherent to, the design. How do I step outside my head and outside my code so I can get a fresh look and make it better? Is this less of an issue than I think it is, or is it a problem for other people as well?

    Read the article

  • How can I sell a legacy program rewrite to the business?

    - by Wil
    We have a legacy classic ASP application that's been around since 2001. It badly needs to be rewritten, but it's working fine from an end-user perspective. The reason I feel a rewrite is necessary is that when we need to update it (which is admittedly not that often), it takes forever to go through all the spaghetti code and fix problems. Adding new features is also a pain, since it was architected and coded badly. I've run a cost analysis of maintenance for them, but they are willing to spend more on the small maintenance jobs than on a rewrite. Any suggestions on convincing them otherwise?

    Read the article
