Search Results

Search found 8219 results on 329 pages for 'less'.


  • Why the recent shift to removing/omitting semicolons from JavaScript?

    - by Jonathan
    It seems to be fashionable recently to omit semicolons from JavaScript. There was a blog post a few years ago emphasising that in JavaScript semicolons are optional, and the gist of the post seemed to be that you shouldn't bother with them because they're unnecessary. The post, widely cited, doesn't give any compelling reasons not to use them, just that leaving them out has few side-effects. Even GitHub has jumped on the no-semicolon bandwagon, requiring their omission in any internally-developed code, and a recent commit to the zepto.js project by its maintainer removed all semicolons from the codebase. His chief justifications were that it's a matter of preference for his team, and that it means less typing. Are there other good reasons to leave them out? Frankly, I can see no reason to omit them, and certainly no reason to go back over code to erase them. It also goes against (years of) recommended practice, and I don't really buy the "cargo cult" argument against that. So, why all the recent semicolon-hate? Is there a shortage looming? Or is this just the latest JavaScript fad?

    Read the article

  • Effective versus efficient code

    - by Todd Williamson
    TL;DR: Quick and dirty code, or "correct" (insert your definition of this term) code? There is often a tension between "efficient" and "effective" in software development. "Efficient" often means code that is "correct" from the point of view of adhering to standards, using widely-accepted patterns/approaches for structures, regardless of project size, budget, etc. "Effective" is not about being "right", but about getting things done. This often results in code that falls outside the bounds of commonly accepted "correct" standards, usage, etc. Usually the people paying for the development effort have dictated ahead of time which of the two they value more. An organization that lives in a technical space will tend towards the efficient end; others will tend towards the effective. Developers often refuse to compromise their favored approach for the other. In my own experience I have found that people with formal education in software development tend towards the Efficient camp, while those who picked up software development more or less as a tool to get things done tend towards the Effective camp. These camps don't get along very well. Managing a team of developers who are not all in one camp is challenging. In your own experience, which camp do you land in, and do you find yourself having to justify your approach to others? To management? To other developers?

    Read the article

  • Better ways to have valuable data indexed, which is ignored currently

    - by Sam
    <a title="">.../a> Hi folks. It seems that my title tag which holds extremely valuable and describes contents on my simple design page is currently compeltely denied by search engines and not indexed at all!! Those descriptions should however be indexed as the describe valuable portions to an otherwise empty page with clean glossary (thats neat and organised to the eye of the viewer. So putting all that descriptive data into visible space would ruin the designish less is more fundamental... So, which alternatives to the title tag do I have, in order to put important contents that are relevant for both user as well as search engines? A <a name="">......</> B <p name="">......</> C <a alt="">.......</> D <p alt="">.......</> From the above list, arose my question: Which of the above is advisable alternative in order to get the valuable actual content indexed? Should it be in a a tag or p tag? Or are there even better tags for this which still keep layout clean? You suggestions are Much appreciated!

    Read the article

  • 10 Tech Products Ahead of Their Time [Video]

    - by Jason Fitzpatrick
    Sometimes a product just can't help but be too far ahead of its time to be adopted. Check out these 10 products that had their moment of glory a moment (or a decade) too soon. At Mashable they've gathered up 10 products that hit the market too soon for people to really appreciate them. Among them, as seen in the video above, is a super simple internet-focused computer. At the time it hit the market, people simply didn't get the value of having a cheap, easy-to-use internet terminal. It probably didn't help much that the 1990s internet didn't have the plethora of powerful and useful web-based applications we have now. Nonetheless, we now have tons of lightweight and "underpowered" devices focused on the internet experience (like netbooks, iPads, smart phones, chromebooks, and more). Hit up the link below to see the 9 other gems from their collection of products ahead of their times. 10 Tech Products Ahead of Their Time [Mashable]

    Read the article

  • Why unhandled exceptions are useful

    - by Simon Cooper
    It's the bane of most programmers' lives - an unhandled exception causes your application or webapp to crash, an ugly dialog gets displayed to the user, and they come complaining to you. Then, somehow, you need to figure out what went wrong. Hopefully, you've got a log file, or some other way of reporting unhandled exceptions (obligatory employer plug: SmartAssembly reports an application's unhandled exceptions straight to you, along with the entire state of the stack and variables at that point). If not, you have to try and replicate it yourself, or do some psychic debugging to try and figure out what's wrong.

    However, it's good that the program crashed. Or, more precisely, it is correct behaviour. An unhandled exception in your application means that, somewhere in your code, there is an assumption that you made that is actually invalid.

    Coding assumptions

    Let me explain a bit more. Every method, every line of code you write, depends on implicit assumptions that you have made. Take the following simple method, which copies a collection to an array and includes an item if it isn't in the collection already, using a supplied IEqualityComparer:

        public static T[] ToArrayWithItem<T>(
            ICollection<T> coll, T obj, IEqualityComparer<T> comparer)
        {
            // check if the object is in the collection already,
            // using the supplied comparer
            foreach (var item in coll)
            {
                if (comparer.Equals(item, obj))
                {
                    // it's in the collection already;
                    // simply copy the collection to an array and return it
                    T[] array = new T[coll.Count];
                    coll.CopyTo(array, 0);
                    return array;
                }
            }

            // not in the collection:
            // copy coll to an array, add obj to it, then return it
            T[] result = new T[coll.Count + 1];
            coll.CopyTo(result, 0);
            result[result.Length - 1] = obj;
            return result;
        }

    What are all the assumptions made by this fairly simple bit of code?

    - coll is never null
    - comparer is never null
    - coll.CopyTo(array, 0) will copy all the items in the collection into the array, in the order defined for the collection, starting at the first item in the array
    - The enumerator for coll returns all the items in the collection, in the order defined for the collection
    - comparer.Equals returns true if the items are equal (for whatever definition of 'equal' the comparer uses), false otherwise
    - comparer.Equals, coll.CopyTo, and the coll enumerator will never throw an exception or hang for any possible input and any possible values of T
    - coll will have fewer than 4 billion items in it (this is a built-in limit of the CLR)
    - array won't be more than 2GB, on both 32 and 64-bit systems, for any possible values of T (again, a limit of the CLR)
    - There are no threads that will modify coll while this method is running

    and, more esoterically:

    - The C# compiler will compile this code to IL according to the C# specification
    - The CLR and JIT compiler will produce machine code to execute the IL on the user's computer
    - The computer will execute the machine code correctly

    That's a lot of assumptions. Now, it could be that all these assumptions are valid for the situations in which this method is called. But if this does crash out with an exception, or crash later on, then that shows one of the assumptions has been invalidated somehow. An unhandled exception shows that your code is running in a situation which you did not anticipate, and there is something about how your code runs that you do not understand. Debugging the problem is the process of learning more about the new situation and how your code interacts with it. When you understand the problem, the solution is (usually) obvious. The solution may be a one-line fix, the rewrite of a method or class, or a large-scale refactoring of the codebase, but whatever it is, the fix for the crash will incorporate the new information you've gained about your own code, along with the modified assumptions.

    When code is running with an assumption or invariant it depended on broken, the result is 'undefined behaviour'. Anything can happen, up to and including formatting the entire disk, or making the user's computer sentient and having it do a good impression of Skynet. You might think that those can't happen, but at Halting-problem levels of generality, as soon as an assumption the code depended on is broken, the program can do anything. That is why it's important to fail fast and stop the program as soon as an invariant is broken, to minimise the damage that is done.

    What does this mean in practice?

    To start with, document and check your assumptions. As with most things, there is a level of judgement required. How you check and document your assumptions depends on how the code is used (that's some more assumptions you've made), how likely it is a method will be passed invalid arguments or called in an invalid state, how likely it is the assumptions will be broken, how expensive it is to check the assumptions, and how bad things are likely to get if the assumptions are broken.

    Now, some assumptions you can take for granted unless proven otherwise. You can safely assume the C# compiler, CLR, and computer all run the method correctly, unless you have evidence of a compiler, CLR or processor bug. You can also assume that interface implementations work the way you expect them to; implementing an interface is more than simply declaring methods with certain signatures in your type. The behaviour of those methods, and how they work, is part of the interface contract as well.

    For members of a public API, it is very important to document your assumptions and check your state before running the bulk of the method, throwing ArgumentException, ArgumentNullException, InvalidOperationException, or another exception type as appropriate if the input or state is wrong. For internal and private methods, it is less important. If a private method expects collection items in a certain order, then you don't necessarily need to explicitly check it in code, but you can add comments or documentation specifying what state you expect the collection to be in at a certain point. That way, anyone debugging your code can immediately see what's wrong if this does ever become an issue. You can also use DEBUG preprocessor blocks and Debug.Assert to document and check your assumptions without incurring a performance hit in release builds.

    On my coding soapbox…

    A few pet peeves of mine around assumptions. Firstly, catch-all try blocks:

        try { ... }
        catch { }

    A catch-all hides exceptions generated by broken assumptions, and lets the program carry on in an unknown state. Later, an exception is likely to be generated due to further broken assumptions caused by the unknown state, making debugging difficult because the catch-all has hidden the original problem. It's much better to let the program crash straight away, so you know where the problem is. You should only use a catch-all if you are sure that any exception generated in the try block is safe to ignore. That's a pretty big ask!

    Secondly, using as when you should be casting. Doing this:

        (obj as IFoo).Method();

    or this:

        IFoo foo = obj as IFoo;
        ...
        foo.Method();

    when you should be doing this:

        ((IFoo)obj).Method();

    or this:

        IFoo foo = (IFoo)obj;
        ...
        foo.Method();

    There's an assumption here that obj will always implement IFoo. If it doesn't, then by using as instead of a cast you've turned an obvious InvalidCastException at the point of the cast, which will probably tell you what type obj actually is, into a non-obvious NullReferenceException at some later point that gives you no information at all. If you believe obj is always an IFoo, then say so in code! Let it fail fast if not; then it's far easier to figure out what's wrong.

    Thirdly, document your assumptions. If an algorithm depends on a non-trivial relationship between several objects or variables, then say so. A single-line comment will do. Don't leave it up to whoever's debugging your code after you to figure it out.

    Conclusion

    It's better to crash out and fail fast when an assumption is broken. Otherwise, there are likely to be further crashes along the way that hide the original problem - or, even worse, your program will be running in an undefined state, where anything can happen. Unhandled exceptions aren't good per se, but they give you some very useful information about your code that you didn't know before. And that can only be a good thing.
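
    The Debug.Assert advice above can be sketched concretely. A minimal illustration (my own, not code from the post), using the standard System.Diagnostics API on a simplified variant of the earlier method: guard clauses on the public entry point, assertions on the internal helper.

        using System;
        using System.Collections.Generic;
        using System.Diagnostics;

        static class GuardedCollections
        {
            // Public API: validate arguments eagerly, throwing descriptive exceptions.
            public static T[] ToArrayWithItem<T>(
                ICollection<T> coll, T obj, IEqualityComparer<T> comparer)
            {
                if (coll == null) throw new ArgumentNullException("coll");
                if (comparer == null) throw new ArgumentNullException("comparer");
                return Build(coll, obj);
            }

            // Internal helper: the same assumptions documented with Debug.Assert.
            static T[] Build<T>(ICollection<T> coll, T obj)
            {
                Debug.Assert(coll != null, "caller validates coll");
                T[] result = new T[coll.Count + 1];
                coll.CopyTo(result, 0);
                result[result.Length - 1] = obj;
                return result;
            }
        }

    Debug.Assert is marked [Conditional("DEBUG")], so the assertion calls are compiled out of release builds entirely, which is why they cost nothing in production.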

    Read the article

  • Stuck at enemy movement

    - by Syed
    I am making a TD game in Unity. Initially I made all of my enemy movement frame-rate dependent. Say I had a grid point at -22.65 and the next at -21.1; diagrammatically: (-22.65) _________ (-21.1) _______ (-21.1+1.55) ...... So the distance on the x axis between two points is 1.55, divided into 25 jumps, with each enemy moving 0.062 every frame. On reaching the next grid point, the enemy finds its path again. All went fine until I needed fast-forward and pause features. I used the timeScale property of Unity, but it won't work, as the movements are frame dependent. I also tried doubling the enemy's speed when the fast-forward button is clicked, but then the enemy's jumps no longer line up and it fails to land exactly on the next grid point. Could someone suggest a solution to my problem? Do I need to change the enemy movement code to make it frame-rate independent? I need the enemy to land on specific grid points, and I also need to later slow down a single enemy's speed when a tower fires on it. Thanks
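
    A sketch of the usual fix, for illustration (it assumes Unity's built-in Time and Vector3 APIs; the field names are made up): express speed in units per second and scale by Time.deltaTime, so Time.timeScale then gives you pause and fast-forward for free, and a per-enemy multiplier handles slow effects.

        using UnityEngine;

        public class EnemyMover : MonoBehaviour
        {
            public float unitsPerSecond = 1.55f; // speed in world units per second (illustrative)
            public float speedMultiplier = 1f;   // towers can lower this to slow one enemy
            public Vector3 target;               // the next grid point

            void Update()
            {
                // deltaTime already respects Time.timeScale, so pause (timeScale = 0)
                // and fast-forward (timeScale = 2) need no changes here.
                float step = unitsPerSecond * speedMultiplier * Time.deltaTime;
                transform.position = Vector3.MoveTowards(transform.position, target, step);

                // MoveTowards never overshoots, so the enemy lands exactly on the point.
                if (transform.position == target)
                {
                    // re-run pathfinding / pick the next grid point here
                }
            }
        }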

    Read the article

  • CodeGolf : Find the Unique Paths

    - by st0le
    Here's a pretty simple idea. In this pastebin I've posted some pairs of numbers. These represent nodes of a directed connected graph. The input to stdin will be of the form (they'll be numbers; I'm using letters in this example):

    c d
    q r
    a b
    d e
    p q

    So "x y" means x is connected to y (not vice versa). There are 2 paths in that example: a->b->c->d->e and p->q->r. You need to print all the unique paths from that graph. The output should be of the format:

    a->b->c->d->e
    p->q->r

    Notes: You can assume the numbers are chosen such that one path doesn't intersect another (one node belongs to one path). The pairs are in random order. There is more than 1 path, and the paths can be of different lengths. All numbers are less than 1000. If you need more details, please leave a comment. I'll amend as required. Shameless plug: for those who enjoy code golf, please commit at Area51 for its very own site :) (For those who don't enjoy it, please support it as well, so we'll stay out of your way...)
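
    For illustration (a readable sketch, not a golfed answer; the I/O handling is an assumption since the original pastebin isn't available): record each pair as an edge, find the nodes with no incoming edge, and walk each chain.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class UniquePaths
        {
            static void Main()
            {
                var next = new Dictionary<string, string>(); // edge: node -> successor
                var hasIncoming = new HashSet<string>();     // nodes appearing as a destination
                string line;
                while ((line = Console.ReadLine()) != null && line.Trim().Length > 0)
                {
                    var p = line.Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries);
                    next[p[0]] = p[1];
                    hasIncoming.Add(p[1]);
                }

                // A path starts at any node that never appears as a destination.
                foreach (var head in next.Keys.Where(k => !hasIncoming.Contains(k)))
                {
                    var path = new List<string> { head };
                    var cur = head;
                    while (next.ContainsKey(cur))
                    {
                        cur = next[cur];
                        path.Add(cur);
                    }
                    Console.WriteLine(string.Join("->", path));
                }
            }
        }

    This relies on the stated guarantee that paths never intersect; each chain is walked exactly once, so the whole thing is linear in the number of pairs.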

    Read the article

  • What functionality of an iPad can I use with Ubuntu?

    - by Andrew Ferrier
    What functionality of an iPad can I use with Ubuntu? I'm thinking about buying an iPad, but I only have Ubuntu PCs these days (no Windows, no Mac), and I'm nervous that it may be too reliant on the existence of iTunes. I'm less concerned about getting media onto and off it (and I have read that I can do this with libimobiledevice), but will I be able to activate it without iTunes? Does it need to be USB synced on a regular basis, or can I do most anything I want through the cloud (i.e. over wifi)? Edit: In answer to jrgifford's question, in theory I'd like to do everything I would want to do with it were I using Windows/Mac. But more specifically, I think I'd want to:

    - "Activate" it, if that's necessary
    - Copy music / video onto it
    - Install apps

    How much of this can I do without a Windows/Mac PC? Is it necessary to activate it? I don't think I'm that interested in backing it up (what would I back up?), but then again, maybe I'm confused as to how easy it is to get data on and off. Can I get a regular SFTP app for the iPad to copy data to my Ubuntu machine over wi-fi, for example?

    Read the article

  • Animating DOM elements vs refreshing a single Canvas

    - by mgibsonbr
    A few years ago, when the HTML Canvas element was still kinda fresh, I wrote a small game in a rather "unusual" way: each game element had its own canvas, and frequently animated elements even had multiple canvases, one for each animation sprite. This way, translation was done by manipulating the DOM position of the canvases, while sprite animation consisted of altering the visibility of the already-drawn canvases (z-indexes, of course, were the tricky part). It worked like a charm: even in IE6 with excanvas it showed decent performance, and everything was rather consistent between browsers, including some smartphones. Now I'm thinking of writing a larger game engine in the same fashion, so I'm wondering whether it would be a good idea to do so in the current context (with all the advances in browsers and so on). I know I'm trading memory for time, so this needs to be customizable (even at runtime) for each machine the game will be running on. But I believe using separate canvases would also help to avoid the game "freezing" on CPU spikes, since the translation would still happen even if the redraws lag for a while. Besides, the browsers' rendering engines are already optimized in many ways, so I'm guessing this scheme would also reduce the load on the CPU (in contrast to doing everything in JavaScript - especially in the less optimized engines). It looks good in my head, but I'd like to hear the opinion of more experienced people before proceeding further. Is there any known drawback to doing this? I'm particularly inexperienced in dealing with the GPU, so I wonder whether this "trick" would nullify any benefit of using a single, big canvas. Or maybe on modern devices it's overkill (though I'm skeptical about the claims that canvas+JS - especially WebGL - will ever be a good alternative to native code). Any thoughts?

    Read the article

  • Advantages and Disadvantages of the Waterfall Methodology

    In my personal opinion, I believe the waterfall method is one of the worst methodologies to use when developing larger systems, because it leaves no room for mistakes. As the name implies, the waterfall methodology does not allow projects to go back upstream to recover from design errors or missing and/or limited requirements. In addition, hidden bugs are not usually found until the testing phase. This can prove to be very costly and time-consuming for the developer and the client. According to NCycles.com, the waterfall methodology structures a project into separate stages with defined deliverables from each phase:

    - Define
    - Design
    - Code
    - Test
    - Implement
    - Document and Maintain

    The advantages found by NCycles.com for this methodology are:

    - Ease in analyzing potential changes
    - Ability to coordinate larger teams, even if geographically distributed
    - Can enable a precise dollar budget
    - Less total time required from Subject Matter Experts

    The disadvantages found by NCycles.com for this methodology are:

    - Lack of flexibility
    - Hard to predict all needs in advance
    - Intangible knowledge lost between hand-offs
    - Lack of team cohesion
    - Design flaws not discovered until the Testing phase

    References: NCycles.com (2002). Retrieved from http://www.ncycles.com/e_whi_Methodologies.htm on April 17, 2009

    Read the article

  • SQL SERVER – Tell me What You Want to Listen – My 2 TechED 2011 Sessions

    - by pinaldave
    I am going to present two sessions at TechEd India on March 25, 2011, and I would like to know what you want me to cover in these sessions. Watch the video taken by my wife while I was preparing for the sessions.

    Sessions date: March 25, 2011

    Understanding SQL Server Behavioral Pattern – SQL Server Extended Events, 12:00 PM to 01:00 PM
    SQL Server Waits and Queues – Your Gateway to Perf. Troubleshooting, 04:15 PM to 05:15 PM

    I promise the following for both of my sessions:

    - I will share the scripts demonstrated in the session right at the end of the session
    - The sessions will be 300-400 level, but I promise to make the concepts very simple
    - Fewer slides and lots of meaningful demos
    - Sessions close to real-life cases and scenarios
    - Surprise gifts to the best participants
    - I promise to answer all the questions, either in the session or right outside the hall afterwards
    - Lots of technical education and FUN!

    Please leave your comments with your expectations, and if you are going to attend the sessions, do let me know here. We will for sure meet at the event and have some interesting talk. You can read the abstract of the sessions over here. Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Read the article

  • Experience vs. versatility

    - by Florin Bombeanu
    Let's say a .NET programmer works at a company which provides software on demand, not as a product. The programmer works in WPF for a period of time and invests lots of time in it. He/she gets very good at WPF, Windows Forms and desktop development in general. But then the company has to provide a web application, so the developer has to learn MVC or Web Forms. He/she is not experienced in web development, so he/she starts investing time in this new technology and in time gets good at it. But this time the company has to provide a SharePoint solution, and so on. What is more important: being very, very good at a certain technology, or being as versatile as possible, knowing less about each technology but covering a greater area of expertise? Should the programmer keep studying and working in WPF until he/she reaches guru level, or is it a good thing that they had to learn other technologies as well? I agree with those of you who will say that when learning different technologies you also learn things which are useful no matter what technology you're programming in. But eventually, when the programmer wants to change jobs, will it matter more that he/she knows some WPF, MVC or SharePoint than the fact that he/she is insanely good at one of them? I would think the second one is more important, since most companies are looking for a developer for a certain technology. I don't think there are many companies looking for technical know-it-all people. What do you think?

    Read the article

  • JEE frameworks, a road map to learn? and should I learn them?

    - by vibhor
    Background information: I have been programming professionally for the past year; my day-to-day work includes writing BIRT reports and designing and validating forms using JEE (Struts/Spring, Hibernate). I don't have a 4-year Comp Sci degree (mine is in Electronics), so I have very limited formal experience in Comp Sci.

    Question: JEE frameworks (Struts 1/2, Spring, Hibernate etc.) are hot nowadays. However, the Java world has a tendency to build A4J, B4J... mayway4J kinds of stuff (and I am tired of it). AFAIK, frameworks are a bunch of XML config files and hundreds of classes to be crammed (by the developer), and sooner rather than later a new framework comes into the picture claiming to be the best of all. So my questions are:

    1. What would you do to learn a framework (or many frameworks), considering that it can be obsolete by the time you master it? (Learning a framework can take a significant amount of time.)
    2. Considering it's early in my career, should anyone care how well someone knows a particular framework (knowing a framework is important, but still...), and why/how should I learn a framework knowing I will have to (un)learn it in order to learn the next one (plenty of 4Js...)?

    I am just trying to get the big picture: if you were in my place, what would be your learning/cramming strategy (road map)? I do not intend to start a holy war between A and B (frameworks are more or less essential).

    Read the article

  • Why isn't Java used for modern web application development?

    - by Cliff
    As a professional Java programmer, I've been trying to understand the hate toward Java for modern web applications. I've noticed a trend: among modern-day web startups, a relatively small percentage appears to be using Java (compared to Java's overall popularity). When I've asked a few about this, I've typically received a response like "I hate Java with a passion", but no one really seems to be able to give a definitive answer. I've also heard this same web startup community refer negatively to Java developers - more or less implying that they are slow, not creative, old. As a result, I've spent time working to pick up Ruby/Rails, basically to find out what I'm missing. But I can't help thinking to myself, "I could do this much faster if I were using Java," primarily due to my relative experience levels, but also because I haven't seen anything critical "missing" from Java that would prevent me from building the same application. Which brings me to my question(s):

    - Why is Java not being used in modern web applications? Is it a weakness of the language?
    - Is it an unfair stereotype of Java because it's been around so long (it's been unfairly associated with its older technologies, and doesn't receive recognition for its "modern" capabilities)?
    - Is the negative stereotype of Java developers too strong? (Java is just no longer "cool".)
    - Are applications written in other languages really faster to build and easier to maintain, and do they perform better?
    - Is Java only used by big companies that are too slow to adapt to a new language?

    Read the article

  • Nvidia API mismatch

    - by Oli
    I had planned a day of relaxing with Portal 2, but on starting Steam (for the first time in a couple of weeks) I was greeted with the following message in the terminal: Error: API mismatch: the NVIDIA kernel module has version 270.41.19, but this NVIDIA driver component has version 270.41.06. Please make sure that the kernel module and all NVIDIA driver components have the same version. I'll confess I don't really know what it's talking about when it says driver. The version of nvidia-current is 270.41.19; I thought that was the driver and module, all in one. I use the X-SWAT PPA and I have noted that the nvidia-settings package has been boosted to 275.09.07. As this is just a settings application, I don't think this mismatch has anything to do with it; it's also not the same version as in the problem being described. I'd rather not purge back to the standard Nvidia driver, as it's less than stable on my GTX580. I would accept an answer that takes the manual setup and makes it recompile when the kernel recompiles (i.e., some DKMS wizardry), but it has to work. I don't want to drop back to text mode every time I restart after a kernel upgrade. Edit: Minecraft works without a single complaint about driver versions. Penumbra dies with roughly the same error when entering a game.

    Read the article

  • Should I pick up a functional programming language?

    - by Statement
    I have recently become more concerned about the way I write my code. After reading a few books on design patterns (and overzealous implementation of them, I'm sure), I have shifted my thinking greatly toward encapsulating that which changes. I tend to notice that I write fewer interfaces and more method-oriented code, where I love to spruce life into old classes with predicates, actions and other delegate tasks. I tend to think that it's often the actions that change, so I encapsulate those. I even often, although not always, break interfaces down to a single method, and then I prefer to use a delegate for the task instead of forcing client code to create a new class. So I guess it then hit me: should I be doing functional programming instead? Edit: I may have a misconception about functional programming. Currently my language of choice is C#, and I come from a C++ background. I work as a game developer, but I am currently unemployed. I have a great passion for architecture. My virtues are clean, flexible, reusable and maintainable code. I don't know if I have been poisoned by these ways or if it is for the better. Am I having a refactoring fever, or should I move on? I understand this might be a question about "use the right tool for the job", but I'd like to hear your thoughts. Should I pick up a functional language? One of my fear factors is leaving the comfort of Visual Studio.
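
    A small C# sketch of the style the poster describes (with illustrative names): the varying behaviour is passed as a Func delegate instead of a single-method interface, so client code doesn't need a new class per policy.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        // The single-method-interface version forces callers to write a class:
        interface IScorer { int Score(string item); }

        static class PredicateDemo
        {
            // The delegate version lets callers pass the varying "action" inline.
            static IEnumerable<string> TopItems(
                IEnumerable<string> items, Func<string, int> score, int count)
            {
                return items.OrderByDescending(score).Take(count);
            }

            static void Main()
            {
                var items = new[] { "a", "bbb", "cc" };
                // The behaviour that changes (how to score) travels as a lambda;
                // no new class per policy is needed.
                foreach (var s in TopItems(items, x => x.Length, 2))
                    Console.WriteLine(s);
            }
        }

    This higher-order style is indeed the core idea of functional programming, which is part of why the question arises at all.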

    Read the article

  • Rawr Code Clone Analysis&ndash;Part 0

    - by Dylan Smith
    Code Clone Analysis is a cool new feature in Visual Studio 11 (vNext). It analyzes all the code in your solution and attempts to identify blocks of code that are similar, and thus candidates for refactoring to eliminate the duplication. The power lies in the fact that the blocks of code don't need to be identical for Code Clone to identify them; it will report Exact, Strong, Medium and Weak matches indicating how similar the blocks of code in question are. People that know me know that I'm anal enthusiastic about both writing clean code and taking old crappy code and making it suck less. So the possibilities for this feature have me pretty excited, if it works well - and that's a big if that I'm hoping to explore over the next few blog posts. I'm going to grab the Rawr source code from CodePlex (a World of Warcraft gear-calculator program), run Code Clone Analysis against it, then go through the results one by one and refactor where appropriate, blogging along the way. My goals with this blog series are twofold:

    - Evaluate and demonstrate Code Clone Analysis
    - Provide some concrete examples of refactoring code to eliminate duplication and improve the code-base

    Here are the initial results. Code Clone Analysis has found:

    - 129 Exact Matches
    - 201 Strong Matches
    - 300 Medium Matches
    - 193 Weak Matches

    Also indicated is that there was a total of 45,181 potentially duplicated lines of code that could be eliminated through refactoring. Considering the entire solution only has 109,763 lines of code, if true, the number of duplicated lines is pretty significant. In the next post we'll start examining some of the individual results and determine if they really do indicate a potential refactoring.
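
    For a flavour of the kind of refactoring a Strong match typically suggests, here is an illustrative C# sketch (my own, not code from Rawr): two clones that differ only in which stat they read, collapsed into one method with the difference passed as a delegate.

        using System;
        using System.Collections.Generic;

        class Item { public float Strength; public float Agility; }

        static class StatTotals
        {
            // Before: two "strong match" clones that differ only in the stat read.
            static float TotalStrength(IEnumerable<Item> items)
            {
                float total = 0;
                foreach (var item in items) total += item.Strength;
                return total;
            }

            static float TotalAgility(IEnumerable<Item> items)
            {
                float total = 0;
                foreach (var item in items) total += item.Agility;
                return total;
            }

            // After: one method; the varying part is a delegate parameter.
            static float Total(IEnumerable<Item> items, Func<Item, float> stat)
            {
                float total = 0;
                foreach (var item in items) total += stat(item);
                return total;
            }
        }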

    Read the article

  • Advantages of Hudson and Sonar over manual process or homegrown scripts.

    - by Tom G
    My coworker and I recently got into a debate over a proposed plan at our workplace. We've more or less finished transitioning our Java codebase into one managed and built with Maven. Now, I'd like for us to integrate with Hudson and Sonar or something similar. My reasons are that this will give us a 'zero-click' build step for providing testers with new experimental builds, that it will let us deploy applications to a server more easily, and that tools such as Sonar will provide us with much-needed metrics on code coverage, Javadoc, package dependencies and the like. He thinks that the overhead of getting up to speed with two new frameworks is unacceptable, and that we should simply double down on documentation and create our own scripts for deployment. Since we plan on some aggressive rewrites to pay down the technical debt previous developers incurred (gratuitous use of Java's Serializable interface as a file storage mechanism that has predictably bitten us in the ass), he argues that we can document as we go, and that we'll end up changing a large swath of code in the process anyway. I contend that having the accurate metrics that Sonar (or fill in your favorite similar tool) provides gives us a good place to start for any refactoring efforts, not to mention general maintenance - after all, knowing which classes are the most poorly documented, even if it's just a starting point, is better than seat-of-the-pants guessing. Am I wrong, and trying to introduce more overhead than we really need? Some more background: an alumnus of our company is working at a Navy research lab now and suggested these two tools in particular as ones they've had great success with. My coworker and I have also had our share of friendly disagreements before - he's more of the "CLI for all, compiles Gentoo in his spare time and uses Git" type and I'm more of a "give me an intuitive GUI, plays with XNA and is fine with SVN" type, so there's definitely some element of culture clash here.

    Read the article

  • How should I deal with user agent parsing in logs?

    - by Mr. Jefferson
    My web app project includes logging functionality so we can see where visitors are coming from (referrer URL), what the popular user agents are, what pages are most popular, etc. The log is stored in SQL Server, and when I query the user agents I use a large (almost 100 lines) and growing CASE statement to separate the user agents using string matching (i.e. if the user agent contains the string "Firefox/9" then it's Firefox 9). Is there a better way to do this, so I don't have to continually add to that CASE statement to deal with new browser releases? Also, how should I deal with less common, weird/unknown user agents? I've seen the following in the logs and been unable to find good information online about what they are: WordPress/3.3.1; http://www.facecolony.org and Mozilla/4.0 ( http://www.hairirons.org redips; <a href=http://hairirons.org/>chi hair iron</a>). I'd guess they're bots/crawlers, but the sites they point to don't appear to reference web crawlers (or even be available sometimes). I've seen other user agents that aren't familiar to me, but I know they're bots because they include "bot" or "spider" or something similar in them.
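
    One common alternative to a growing CASE statement is table-driven matching, where a new browser release becomes a new data row rather than new code. A minimal C# sketch (the tokens and names are illustrative, not a complete browser table; production systems often lean on a maintained pattern library such as ua-parser instead):

        using System;

        static class UserAgentClassifier
        {
            // Ordered rules: the first matching token wins, so put specific ones first.
            static readonly (string Token, string Name)[] Rules =
            {
                ("Firefox/9", "Firefox 9"),
                ("Firefox/",  "Firefox (other)"),
                ("bot",       "Bot/crawler"),
                ("spider",    "Bot/crawler"),
            };

            public static string Classify(string userAgent)
            {
                foreach (var rule in Rules)
                    if (userAgent.IndexOf(rule.Token, StringComparison.OrdinalIgnoreCase) >= 0)
                        return rule.Name;
                return "Unknown";
            }
        }

    Keeping the rules in a database table instead of a hard-coded array would let non-developers add entries for new releases without touching the query at all.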

    Read the article

  • Can windows XP be better than any Ubuntu (and Linux) distro for an old PC?

    - by Robert Vila
    The old laptop is a Toshiba 1800-100: CPU: Intel Celeron 800 MHz; RAM: 128 MB (works OK); HDD: 15 GB (works OK); graphics adapter: integrated 64-bit AGP graphics accelerator, BitBLT, 3D graphic acceleration, 8 MB video RAM. Only Windows XP is installed, and it works OK: it can be used, but it is slow (and hateful). I thought that I could improve performance (and its look) easily, since it is an old PC (drivers and everything known for years...), by installing a light Linux distro. So I decided to install a light or customized Ubuntu distro, or an Ubuntu/Debian derivative, but I haven't been successful with any; I haven't even been able to boot the live CDs: not even antiX, not even Puppy. The Lubuntu wiki says it won't work because the last two releases need more RAM (and some blogs say much more CPU - even a Core Duo for the new Lubuntu! - ), let alone Xubuntu. The problems I have faced are:

    1. There are thousands of pages talking about the same 10/15 lightweight distros, and saying more or less the same things, but NONE talks about a simple thing like what the RAM/swap-partition proportion should be for this kind of installation. NONE!
    2. Loading the live CD I have tried several different boot options (I don't understand much about this, and there's ALWAYS a line of explanation missing) and never receive error messages. Booting just stops at different stages, but often seems to stop just when the X server is going to start. I am able to boot to the command line.
    3. I don't know whether the problem is RAM size or a problem with the graphics driver (which surprises me, because it is a well-known brand and line of computers). So I don't know if creating a partition layout with a swap partition would help the live CD boot.
    4. I would like to try the graphical interface with the live CD before installing, if creating the swap partition would help with that. How can I do the partitioning? I tried to use Boot Rescue CD, but it advises me against continuing forward.

    I would appreciate any ideas regarding these questions. Thank you

    Read the article

  • How to use the Nautilus search option

    - by Luis Alvarado
    In Nautilus, if I press CTRL+F I get a search box that helps me search the current directory and subdirectories for specific names. But what if I want to:

    - Find ALL files (including files without extensions)
    - Find a file without an extension (without the dot symbol, or without any other name/extension separator)
    - Find a file with/without a special character
    - Find all files that start/don't start with a character
    - Find all files that end/don't end with a character
    - Find all files that start/don't start with one character and end/don't end with another
    - Find only files/folders
    - Find files with specific text in them
    - Find files with size less than/more than/equal to X
    - Find files modified/created on date X

    All of these would be done in the Nautilus search box I mentioned before. I ask this since KDE's search is much better at this and gives pretty good freedom in searching for virtually anything, so I might not be learning how to use the Nautilus search option correctly. Note that I am talking about the first search done, since some of these options show up AFTER a search is done, so the user can narrow it down further by doing a more specific search inside the search results (of the first search). I am asking here how to do any of the search options I mentioned above in the first search.

    Read the article

  • In the days of modern computing, in 'typical business apps' - why does performance matter?

    - by Prog
    This may seem like an odd question to some of you. I'm a hobbyist Java programmer. I have developed several games, an AI program that creates music, another program for painting, and similar stuff. This is to tell you that I have experience in programming, but not in professional development of business applications. I see a lot of talk on this site about performance. People often debate what would be the most efficient algorithm in C# to perform a task, or why Python is slow and Java is faster, etc. What I'm trying to understand is: why does this matter? There are specific areas of computing where I see why performance matters: games, where tens of thousands of computations are happening every second in a constant update loop, or low-level systems which other programs rely on, such as OSes and VMs, etc. But for the normal, typical high-level business app, why does performance matter? I can understand why it used to matter, decades ago. Computers were much slower and had much less memory, so you had to think carefully about these things. But today, we have so much memory to spare and computers are so fast: does it actually matter if a particular Java algorithm is O(n^2)? Will it actually make a difference for the end users of this typical business app? When you press a GUI button in a typical business app, and behind the scenes it invokes an O(n^2) algorithm, in these days of modern computing - do you actually feel the inefficiency? My question is split into two:

    1. In practice, does performance matter today in a typical, normal business program?
    2. If it does, please give me real-world examples of places in such an application where performance and optimizations are important.
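
    As a small concrete illustration of when O(n^2) does become visible (my own example, with made-up sizes): duplicate detection over 50,000 records, done with nested loops versus a HashSet. The first needs on the order of a billion comparisons; the second makes one pass.

        using System;
        using System.Collections.Generic;
        using System.Diagnostics;

        class ComplexityDemo
        {
            static void Main()
            {
                const int n = 50000; // a modest dataset for a business app
                var rng = new Random(1);
                var ids = new int[n];
                for (int i = 0; i < n; i++) ids[i] = rng.Next();

                // O(n^2): roughly n*n/2 comparisons - over a billion at n = 50,000.
                var sw = Stopwatch.StartNew();
                bool dup = false;
                for (int i = 0; i < n && !dup; i++)
                    for (int j = i + 1; j < n; j++)
                        if (ids[i] == ids[j]) { dup = true; break; }
                Console.WriteLine("nested loops: " + sw.ElapsedMilliseconds + " ms");

                // O(n): a single pass with a HashSet.
                sw.Restart();
                var seen = new HashSet<int>();
                dup = false;
                foreach (var id in ids)
                    if (!seen.Add(id)) { dup = true; break; }
                Console.WriteLine("hash set:     " + sw.ElapsedMilliseconds + " ms");
            }
        }

    On typical hardware the gap between the two is seconds versus milliseconds, and it widens quadratically as the data grows - which is exactly when a "fast enough" business app stops being fast enough.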

    Read the article

  • Star Trek inspired home automation visualisation

    - by Zak McKracken
    I've always been a more or less active fan of Star Trek. During the construction phase of my house I started coding a GUI for controlling the house, which has an EIB. Just for fun I designed a version inspired by the LCARS design used in Star Trek TNG and showed this to my wife. I had shown her several designs before, but this was the only one she really liked, so I decided to go on with it. I started a C# WinForms application. The software runs on a wall-mounted Shuttle barebone PC. The first plan was an industrial panel PC, but its processor was too slow; the Atom now in use is OK. I started with the LCARS controls found on CodeProject. Since the classic LCARS design divides the screen into two parts, this tended to be impracticable, so I used my own design. For now the software is able to:

    - Switch lights/wall outlets
    - Show current temperatures for all room controllers
    - Show the outside temperature with a 24h trend chart
    - Show the status of the two heat pumps
    - Provide an alarm clock (e.g. for cooking)
    - Play internet radio streams
    - Control absence
    - Mute the door bell
    - Speak status messages via speech synthesis

    At the moment I'm working on an integration of my electric meter. The main heat pump and the electric meter are connected to my LAN. I also tried some speech recognition, but I have problems with the microphone. It's working when you are right in front of the PC, but not far away, let's say on the other side of the room. So this is the main view. The table displays raw values which are sent over the EIB - completely useless, but looks great. For each floor I have a different view. Here you can see the temperatures and check the status of the lights (the buttons blink when a light is switched on). This is the view for the heat pump. The next step would be to integrate a control for my Squeezebox server (I use different Squeezeboxes throughout the house as a multiroom audio solution).
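
    The post doesn't say which speech API it uses; a minimal sketch of the status-message feature with .NET's System.Speech (an assumption on my part, since the app is C# WinForms) might look like this:

        using System.Speech.Synthesis; // .NET Framework: add a reference to System.Speech

        class StatusAnnouncer
        {
            readonly SpeechSynthesizer synth = new SpeechSynthesizer();

            public StatusAnnouncer()
            {
                synth.SetOutputToDefaultAudioDevice();
            }

            // SpeakAsync keeps the WinForms UI thread responsive.
            public void Announce(string message)
            {
                synth.SpeakAsync(message);
            }
        }

    Usage from the GUI code would then be a single call, e.g. announcer.Announce("Heat pump two is running").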

    Read the article

  • The Oracle MDM Portfolio & Strategy Session - It All Comes Down to Master Data

    - by Mala Narasimharajan
    By Narayana Machiraju. We are now less than a week from the start of Oracle OpenWorld 2012, and I would like to introduce you all to one of the most awaited MDM strategy sessions this year, titled "What's there to Know about Oracle's Master Data Management Portfolio and Roadmap?". Manouj Tahiliani, Senior Director of MDM Product Strategy, provides a complete picture of the Oracle MDM portfolio: the product releases, the strategy and the roadmaps. Manouj will be discussing Oracle Fusion MDM applications, the first enterprise-grade SaaS MDM product suite. You'll hear strategies for leveraging MDM and data quality in the enterprise, and how you can derive business value by deploying an MDM foundation for strategic initiatives such as customer experience management, product innovation, and financial transformation. As a bonus, he is also going to discuss the confluence of MDM with emerging technologies such as big data, social, and mobile. The session is co-presented by GEHC and Westpac. Tony Craddock from Westpac is going to share insights from their MDM implementation along the lines of business drivers, data governance, ROI and other important implementation considerations. A representative from GEHC is going to talk about their MDM journey and the multi-domain MDM story. I strongly recommend you not miss this important session. The MDM track at Oracle OpenWorld covers a variety of topics related to MDM. In addition to the product management team presenting product updates and the roadmap, we have several customer panels, conference sessions and customer round-table sessions featuring a lot of marquee customers. You can see an overview of the MDM sessions here.

    Read the article

  • Why is it that software is still easily pirated today?

    - by mohabitar
    I've always been curious about this. Now, I wouldn't call myself a programmer yet, but I'm learning, so maybe the answer to this is obvious. It just seems a little hard to believe that with all of our technological advances and the billions of dollars spent on engineering the most unbelievable and mind-blowing software, we still have no other means of protecting against piracy than a "serial number/activation key". I'm sure a ton of money, maybe even billions, went into creating Windows 7 or Office and even Snow Leopard, yet I can get them for free in less than 20 minutes. Same for all of Adobe's products, which are probably the easiest. I guess my question is: can there exist a foolproof and hack-proof method of protecting your software against piracy? If not realistically, is it at least theoretically possible? Or no matter what mechanisms these companies deploy, can hackers always find a way around them? EDIT: So apparently, the answer is no - there's pretty much no way. And I'm sure these big companies have realized this as well. Should they adopt another sales model rather than charging a crapload for their software? (I know it's justified and they put a lot of hard work into their software, but it's still a lot of money.) Are there any alternative solutions that would benefit both the company and the user (e.g. if you purchase our product, we'll apply $X to your account toward future purchases from our company)?
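
    For context, a typical activation-key scheme boils down to a digital signature check, sketched below (an illustration only, not any vendor's actual scheme). The cryptography is sound, yet the scheme is still crackable, because the check runs on the attacker's machine.

        using System;
        using System.Security.Cryptography;
        using System.Text;

        static class LicenseCheck
        {
            // The vendor signs the license text with a private key it keeps secret;
            // the application ships only the public key and can verify offline.
            public static bool IsValid(string licenseText, byte[] signature, string publicKeyXml)
            {
                using (var rsa = new RSACryptoServiceProvider())
                {
                    rsa.FromXmlString(publicKeyXml);
                    byte[] data = Encoding.UTF8.GetBytes(licenseText);
                    return rsa.VerifyData(data, CryptoConfig.MapNameToOID("SHA256"), signature);
                }
            }
            // The weakness: a cracker patches the caller to ignore this result,
            // which is why even strong cryptography doesn't stop piracy.
        }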

    Read the article
