Search Results

Search found 53818 results on 2153 pages for 'system testing'.


  • How do you track bugs in your personal projects?

    - by bedwyr
    I'm trying to decide if I need to reassess my defect-tracking process for my home-grown projects. For the last several years, I have really just tracked defects using TODO tags in the code, keeping track of them in a specific view (I use Eclipse, which has a decent tagging system). Unfortunately, I'm starting to wonder if this system is unsustainable. The defects I find are typically associated with a snippet of code I'm working on; bugs which are not immediately understood tend to be forgotten or ignored. I wrote an application for my wife which has had a severe defect for almost 9 months, and I keep forgetting to fix it. What mechanism do you use to track defects in your personal projects? Do you have a specific system, or a process for prioritizing and managing them?

    Read the article

  • Cloud Computing Forces Better Design Practices

    - by Herve Roggero
    Is cloud computing simply different than on-premises development, or is cloud computing actually forcing you to create better applications than you normally would? In other words, is cloud computing merely imposing different design principles, or forcing better design principles? A little while back I got into a discussion with a developer in which I was arguing that cloud computing, and specifically Windows Azure in his case, was forcing developers to adopt better design principles. His opinion was that cloud computing was not yielding better systems; just different systems. In this blog, I will argue that cloud computing does force developers to use better design practices, and hence build better applications. So the first thing to define, of course, is the word “better” in the context of application development. Looking at a few definitions online, better means “superior quality”. As it relates to this discussion, then, I stipulate that cloud computing can yield higher-quality applications in terms of scalability, everything else being equal. Before going further I need to also outline the difference between performance and scalability. Performance and scalability are two related concepts, but they don’t mean the same thing. Scalability is the measure of system performance given various loads. So when developers design for performance, they usually give higher priority to a given load and tend to optimize for that load. When developers design for scalability, the actual performance at a given load is not as important; the ability to ensure reasonable performance regardless of the load becomes the objective. This can lead to very different design choices. For example, if your objective is to obtain the fastest response time possible for a service you are building, you may choose to implement a TCP connection that never closes until the client chooses to close the connection (in other words, a tightly coupled service from a connectivity standpoint), and on which a connection session is established for faster processing of the next request (like SQL Server or other database systems, for example). If your objective is to scale, you may implement a service that answers requests without keeping session state, so that server resources are released as quickly as possible, like a REST service for example. This alternate design would likely have a slower response time than the TCP service for any given load, but would continue to function at very large loads because of its inherently loosely coupled design. An example of a REST service is the NoSQL implementation in the Microsoft cloud called Azure Tables. Now, back to cloud computing… Cloud computing is designed to help you scale your applications, specifically when you use Platform as a Service (PaaS) offerings. However it’s not automatic. You can design a tightly coupled TCP service as discussed above, and as you can imagine, it probably won’t scale even if you place the service in the cloud, because it isn’t using a connection pattern that will allow it to scale [note: I am not implying that all TCP systems do not scale; I am just illustrating the scalability concepts with an imaginary TCP service that isn’t designed to scale for the purpose of this discussion]. 
The other service, using REST, will have a better chance to scale because, by design, it minimizes resource consumption for individual requests and doesn’t tie a client connection to a specific endpoint (which means you can easily deploy this service to hundreds of machines without much trouble, as long as your pockets are deep enough). The TCP and REST services discussed above are both valid designs; the TCP service is faster and the REST service scales better. So is it fair to say that one service is fundamentally better than the other? No; not unless you need to scale. And if you don’t need to scale, then you don’t need the cloud in the first place. However, it is interesting to note that if you do need to scale, then a loosely coupled system becomes a better design because it can almost always scale better than a tightly coupled system. And because most applications grow over time, with an increasing user base, new functional requirements, increased data and so forth, most applications eventually do need to scale. So in my humble opinion, I conclude that a loosely coupled system is not just different from a tightly coupled system; it is a better design, because it will stand the test of time. And in my book, if a system stands the test of time better than another, it is of superior quality. Because cloud computing demands loosely coupled systems so that its underlying service architecture can be leveraged, developers ultimately have no choice but to design loosely coupled systems for the cloud. And because loosely coupled systems are better… the cloud forces better design practices. My 2 cents.
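
    The contrast drawn in the article can be boiled down to a few lines of code. The sketch below is illustrative only (the names Request, Response and the two service classes are hypothetical, not from the article): one service keeps per-client session state on the server, so every client is pinned to the node that holds its session; the other takes everything it needs from the request itself, so any node behind a load balancer can answer and be released immediately.

        // Minimal sketch contrasting a stateful (tightly coupled) service with a
        // stateless (loosely coupled) one. Hypothetical types, not a real framework.
        #include <map>
        #include <string>

        struct Request  { std::string clientId; std::string payload; };
        struct Response { std::string body; };

        // Tightly coupled: session data lives on this node between calls.
        class StatefulService {
            std::map<std::string, std::string> sessions;  // clientId -> session data
        public:
            Response Handle(const Request& r) {
                std::string& session = sessions[r.clientId];  // state pinned to this node
                session += r.payload;                         // fast follow-ups, poor scale-out
                return Response{ "ack:" + session };
            }
        };

        // Loosely coupled: no server-side session, so the request can land anywhere.
        class StatelessService {
        public:
            Response Handle(const Request& r) const {
                return Response{ "ack:" + r.payload };        // resources released immediately
            }
        };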

    Read the article

  • "kde-config not found" error while installing kstars from source

    - by tachyons
    I am trying to install kstars from source, but I got the following error while configuring:

        ./configure
        checking build system type... i686-pc-linux-gnu
        checking host system type... i686-pc-linux-gnu
        checking target system type... i686-pc-linux-gnu
        checking for a BSD-compatible install... /usr/bin/install -c
        checking for -p flag to install... yes
        checking whether build environment is sane... yes
        checking for gawk... no
        checking for mawk... mawk
        checking whether make sets $(MAKE)... yes
        checking for kde-config... not found
        configure: error: The important program kde-config was not found!
        Please check whether you installed KDE correctly.

    What does it mean? I already installed KDE on my computer.

    Read the article

  • Distributing a very simple application

    - by vanna
    I have a very simple working console application written in C++ linked with a light static library. It is just for testing purposes. Now that the coding part is done, I would like to know the process of actually distributing the program. I wrote a very basic CMakeLists.txt that creates makefiles or VS projects to build the sources. I also have a program that calls the static library in order to run some Google tests. To me, the distribution of this application goes like this: to developers, the src directory with the CMakeLists.txt file (multi-platform distribution) along with a README.txt and an INSTALL.txt; to users, the executable and a README.txt; on my git repo, everything mentioned above plus the sources for testing and the gtest external lib. At this point: considering the complexity of my application, am I doing it right? Is there any reference that would formalize this distribution process so I can get better and go further? Say I would like to add dynamic libraries that can be updated, and external libraries like Boost: how should I package this to distribute it in a professional way?

    Read the article

  • XHTML fix solution republished

    - by TATWORTH
    As a post-VS2010 SP1 installation activity, I am recompiling all my open source projects. The first is XHTMLFIX at http://xhtmlfix.codeplex.com/ This LGPL project has simple fixes for ASP.NET 2.0/4.0 to achieve XHTML compliance as measured by the W3C tests at http://validator.w3.org/ The XHTML project shows as untrue the commonly held belief that MVP or MVC is necessary for producing XHTML-compliant web pages. Incidentally, the other supposed advantage of MVP and MVC over Web Forms, namely easier testing, is also very dubious, as Web Forms can be tested by systems such as Selenium or WatiN. I have used NUnitAsp (alas, sadly discontinued) with Web Forms and found it to be more effective than unit testing MVP. Now if you prefer the MVP and/or MVC approach over Web Forms, then fine, that is your preference. Now, if you can find an example where properly written ASP.NET 4.0 Web Forms do not produce XHTML-compliant markup, I would be glad to see your example and will look at ways of modifying the markup to be XHTML compliant.

    Read the article

  • Which code module should map physical keys to abstract keys?

    - by Paul Manta
    How do you bridge the gap between the library's low-level event system and your engine's high-level event system? (I'm not necessarily talking about key events, but also about quit events.) At the top level of my event system, I send out KeyPressedEvents, KeyReleasedEvents and others of this kind. These high-level events only contain the abstract values of the keys (they don't say that Space was pressed, but that the JumpKey was pressed, for example). Whose responsibility should it be to map the "JumpKey" to an actual key on the keyboard?
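
    A minimal sketch of one common arrangement, using hypothetical names (KeyCode, Action, InputMapper): a small mapping layer owned by the input/configuration module translates raw key codes into abstract actions before the engine's high-level events are dispatched, so gameplay code never sees physical keys.

        // Hypothetical input-mapping layer between raw key events and abstract actions.
        #include <optional>
        #include <unordered_map>

        enum class KeyCode { Space, W, A, S, D, Escape };
        enum class Action  { Jump, MoveForward, MoveLeft, MoveBack, MoveRight, Quit };

        class InputMapper {
            std::unordered_map<KeyCode, Action> bindings;
        public:
            void Bind(KeyCode key, Action action) { bindings[key] = action; }

            // Called from the low-level event loop; returns the abstract action, if any,
            // so the engine can emit a high-level event such as KeyPressedEvent{Action::Jump}.
            std::optional<Action> Map(KeyCode key) const {
                auto it = bindings.find(key);
                if (it == bindings.end()) return std::nullopt;
                return it->second;
            }
        };

        // Usage sketch: the platform/input layer owns the mapper and its bindings (often
        // loaded from a config file), so gameplay code only ever sees Action values.
        //   InputMapper mapper;
        //   mapper.Bind(KeyCode::Space, Action::Jump);
        //   if (auto a = mapper.Map(KeyCode::Space)) dispatchKeyPressed(*a);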

    Read the article

  • Acer Extensa 5620 - Graphic unknown!

    - by Nycxzon
    I installed Ubuntu 12.04 Beta 2 and found out that my graphics driver is "Unknown" and the experience is "Standard". Please help me figure out how to install my graphics driver. As per my laptop specs: Mobile Intel GM965 Express Chipset with integrated 3D graphics, featuring Intel Graphics Media Accelerator (GMA) X3100 with up to 358/3845 MB of Intel Dynamic Video Memory Technology 4.0 (8 MB of dedicated system memory, up to 350/3765 MB of shared system memory), supporting Microsoft DirectX 9. Hope you can help me! Thanks in advance! Noob @ Ubuntu :) Update: I found a solution! I found a site explaining how to check the graphics on my system using the terminal command "glxinfo"; the command asked me to install mesa-utils, and I updated. After that, my graphics driver is listed! Thanks for your continued support! :)

    Read the article

  • Component based design, but components rely on each other

    - by MintyAnt
    I've begun stabbing at a "Component Based" game system. Basically, each entity holds a list of components to update (and render). I inherit from the "Component" class and break each game system into one. Examples:

        RenderComponent   - Draws the entity
        MovementComponent - Moves the entity, deals with velocity and speed checks
        DamageComponent   - Deals with how/if the entity gets damaged...

    So. My system has these: MovementComponent and InputComponent. Now maybe my design is off, but the InputComponent should say things like:

        if (w key is down) add y speed to movement
        if (x key is down) trigger primary attack

    This means that the InputComponent sort of relies on these other components. I have to do something along the lines of:

        if (w key is down) {
            MovementComponent* entityMovement = mEntity->GetMovement();
            if (entityMovement != NULL)
                add y speed to movement
        }

    which seems kinda crappy every update. Other options? Better design? Is this the best way? Thanks!
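
    A minimal sketch (hypothetical names, not a definitive answer) of one commonly suggested alternative: the InputComponent publishes abstract intents onto the entity instead of reaching into sibling components, and each component reacts to the intents it cares about during its own update.

        // Hypothetical intent-based decoupling between InputComponent and the rest.
        #include <unordered_set>

        enum class Intent { MoveUp, MoveDown, PrimaryAttack };

        class Entity {
            std::unordered_set<Intent> intents;   // cleared at the end of every frame
        public:
            void PushIntent(Intent i)      { intents.insert(i); }
            bool HasIntent(Intent i) const { return intents.count(i) != 0; }
            void ClearIntents()            { intents.clear(); }
        };

        class InputComponent {
        public:
            void Update(Entity& e, bool wDown, bool xDown) {
                if (wDown) e.PushIntent(Intent::MoveUp);         // no MovementComponent lookup
                if (xDown) e.PushIntent(Intent::PrimaryAttack);  // no DamageComponent lookup
            }
        };

        class MovementComponent {
            float ySpeed = 0.0f;
        public:
            void Update(Entity& e) {
                if (e.HasIntent(Intent::MoveUp)) ySpeed += 1.0f; // reacts to intent, not input
            }
        };

    The trade-off is one extra level of indirection per frame in exchange for components that no longer need to know about each other's existence.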

    Read the article

  • Is it possible to get a patch included in the current release? If so, how?

    - by Oli
    So a while back I reported a bug in Compiz's Place Window plugin. It's a fairly major regression for people affected by it: mainly those using Gnome-Fallback, judging by the reports. A patch surfaced a short time later. I created a PPA for testing and everybody involved so far is reporting the issues are fixed. It even fixes another bug. I've done testing with a standard Unity desktop and can say (for my testing) no adverse effects were visible. I want to get this pushed to Ubuntu right now for two main reasons: I'm selfish. I don't want to need to update my PPA every time a new version of Compiz is pushed to 12.04. I don't want Ubuntu users seeing their windows flying around because of a silly little bug. I want this patch pushed to Ubuntu's version of Compiz as soon as possible, so we can mark these bugs fixed and move on with our lives. Whose leg do I have to hump to get this pulled into Ubuntu right now? I don't maintain this project and it's an upstream thing but it's fairly integral to Ubuntu. I could go to Compiz but I imagine that if they accept the patch, it'll be months (at least a release) before it's anywhere near Ubuntu. And when I do find the right person, how can I make the process as slick as possible for them? I want them to see my request, go "Yup, that all looks great, done" and that be it. I don't want seventeen rounds of emails addressing aspects of the patch. More importantly, I don't want to waste their time either. And what do I have to provide them? My packaging skills are... lamentable. This was my first attempt at patching a package for redistribution so I've probably made every single packaging error known to man. Will they be happy with the original patch (so they can apply it themselves) or should I repackage things so the diff/changelog is a little cleaner (it took me a few goes and the versioning is all over the place). Note: This question is about Compiz but I'd prefer if answers could address other styles of package too so we have an authoritative and comprehensive thread of how to get things fixed.

    Read the article

  • Cannot use keyboard on Unity

    - by ikhsan
    Dear Ubuntu Community, I am currently using Ubuntu 14.04, and a few hours ago an update notifier prompted me to install an update. After the update finished, it asked for a system restart; I think there was a kernel update or similar. The problem started after the restart: I can type my password when logging in, but after entering the Unity desktop my keyboard suddenly becomes unusable and the system doesn't respond to any key press. After a few minutes it locks the screen automatically, but I still can't type my password to unlock it. I tried logging out (the mouse is working properly) and logging in again, and tried starting the on-screen keyboard, but still had no luck; the system still doesn't respond to key presses. I tried logging in on the console, and the keyboard works well; I tried installing Xfce, and the keyboard also works properly there. The keyboard also works properly when logging in to Unity as guest; it only fails when I log in with my account. I also tried to reset the Unity config via unity-tweak-tool --reset-unity, but still no luck. Any suggestion to resolve this?

    Read the article

  • Passing functions into other functions as parameters, bad practice?

    - by BlueHat
    We've been changing how our AS3 application talks to our back end and are in the process of implementing a REST system to replace our old one. Sadly the developer who started the work is now on long-term sick leave and it's been handed over to me. I've been working with it for the past week or so now and I understand the system, but there's one thing that's been worrying me. There seems to be a lot of passing of functions into functions. For example, our class that makes the call to our servers takes in a function that it will then call and pass an object to when the process is complete and errors have been handled, etc. It's giving me that "bad feeling" where I feel like it's horrible practice, and I can think of some reasons why, but I want some confirmation before I propose a rework of the system. I was wondering if anyone has had any experience with this possible problem?
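
    The pattern being described is an ordinary completion-callback idiom. A minimal sketch, written here in C++ rather than AS3 and with hypothetical names, of a service call that takes the function it will invoke once the request has finished:

        // Hypothetical completion-callback pattern: the caller hands the service layer
        // a function to invoke when the request is complete and errors are handled.
        #include <functional>
        #include <iostream>
        #include <string>

        struct ServerResponse { bool ok; std::string data; };

        class RestClient {
        public:
            void Call(const std::string& endpoint,
                      std::function<void(const ServerResponse&)> onComplete) {
                // ... perform the request and error handling here ...
                onComplete(ServerResponse{ true, "result from " + endpoint });
            }
        };

        int main() {
            RestClient client;
            client.Call("/users/42", [](const ServerResponse& r) {
                std::cout << (r.ok ? r.data : "request failed") << "\n";
            });
        }

    Whether this is preferable to events/signals or promise-style APIs is largely a question of who should own the control flow; the callback itself is a standard continuation pattern rather than an inherent smell.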

    Read the article

  • Getting the front buffer into a gfx mem surface (Dx9)

    - by lapin
    I'm using DirectX 9 to acquire the front buffer. There are a couple of ways I know of to get at the front buffer: GetRenderTargetData() and GetFrontBufferData(). The MSDN pages for both of these API calls state that the data is copied from device memory to system memory. I'd like to copy the front buffer surface directly to another graphics-memory surface, as I have other manipulations to perform on the acquired surface before returning it to system memory. I'm creating a D3DUSAGE_DYNAMIC texture (a gfx mem texture) and calling GetFrontBufferData() to write the front buffer to my texture's surface 0. Is this valid? Will the operation remain in gfx memory, or will it need to move to system memory and then back to graphics memory? If that is the case, is what I'm trying to achieve possible?
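
    For reference, a sketch of the path the documentation appears to require (error handling abbreviated, assumes a valid device): GetFrontBufferData() only accepts a D3DPOOL_SYSTEMMEM surface in D3DFMT_A8R8G8B8, so the capture passes through system memory; UpdateSurface() can then push the copy back into a default-pool (video memory) surface for further GPU-side work.

        // Sketch only: front buffer -> system-memory surface -> video-memory surface.
        #include <d3d9.h>

        bool CaptureFrontBufferToGpu(IDirect3DDevice9* device, UINT width, UINT height,
                                     IDirect3DSurface9** gpuCopyOut)
        {
            IDirect3DSurface9* sysmem  = nullptr;
            IDirect3DSurface9* gpuCopy = nullptr;

            // Destination for GetFrontBufferData must live in system memory.
            if (FAILED(device->CreateOffscreenPlainSurface(width, height, D3DFMT_A8R8G8B8,
                                                           D3DPOOL_SYSTEMMEM, &sysmem, nullptr)))
                return false;

            // Default-pool surface that later GPU-side manipulation can use.
            if (FAILED(device->CreateOffscreenPlainSurface(width, height, D3DFMT_A8R8G8B8,
                                                           D3DPOOL_DEFAULT, &gpuCopy, nullptr))) {
                sysmem->Release();
                return false;
            }

            bool ok = SUCCEEDED(device->GetFrontBufferData(0, sysmem)) &&
                      SUCCEEDED(device->UpdateSurface(sysmem, nullptr, gpuCopy, nullptr));

            sysmem->Release();
            if (!ok) { gpuCopy->Release(); return false; }
            *gpuCopyOut = gpuCopy;  // caller releases
            return true;
        }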

    Read the article

  • Reboot only shuts down and doesn't actually boot again

    - by PherricOxide
    I'm running a fresh install of Ubuntu 12.04 Server on an ABMX rack-mount server. When I attempt to reboot with sudo shutdown -r now, the machine just shuts down and doesn't come back up without me manually pressing the power button. The output of last -x:

        runlevel (to lvl 2)   3.2.0-29-generic  Wed Oct 31 14:32 - 14:37  (00:05)
        reboot   system boot  3.2.0-29-generic  Wed Oct 31 14:32 - 14:37  (00:05)
        shutdown system down  3.2.0-29-generic  Wed Oct 31 14:30 - 14:32  (00:02)
        runlevel (to lvl 6)   3.2.0-29-generic  Wed Oct 31 14:30 - 14:30  (00:00)

    This appears to show that the system rebooted, but it went dark (no power lights, BIOS, etc.) and I had to go press the power button in order to make it boot back up. The machine does have some sort of Intel Boot Agent that usually appears before the BIOS; I'm wondering if it could be causing this. I'm not sure what information is useful for debugging this, but I put the output of lshw in http://pastebin.com/mBy72kTQ

    Read the article

  • No dual boot menu

    - by Christian Galo
    I formatted my entire disk and installed Ubuntu on my computer. I immediately partitioned my hard drive from an Ubuntu live CD, creating an NTFS partition for Windows. After successfully doing so, I went on to install Windows 8.1. After I installed Windows 8 in the new partition and turned my PC off and on again, the option to choose which operating system I wanted to use didn't appear; it loaded Windows as if Ubuntu didn't exist. How can I get the option to choose which operating system I want to run, or at least which partition to boot from, when I start my computer? EVERYTHING IS OKAY WITH MY OPERATING SYSTEM. The only thing I need is for Ubuntu to appear as an option in the boot menu.

    Read the article

  • What options should I consider for a modern Web/Mobile development stack? [on hold]

    - by jimmy_terra
    I'm a long-time server-side dev who has been tasked with building a bleeding-edge web UI (go figure), so apologies for the very broad nature of the question. What are the best modern libraries, tools, languages and patterns for building a dynamic web application that will run seamlessly on mobiles as well? My requirements are that it must be dynamic (push updates), support automated testing, and should allow 'componentization' (a team of devs will be working on this). What should I check out and why? I will start off with some of the things I'm looking at already:

        Front-end: HTML5, CSS3, JavaScript, AngularJS
        Testing: Karma, Testem, Jasmine
        Patterns: Single Page Applications

    Read the article

  • Announcement: Oracle SuperCluster T5-8

    - by uwes
    Oracle's Fastest Engineered System. On the 27th of June we are announcing Oracle SuperCluster T5-8, Oracle’s fastest engineered system. Combining powerful virtualization and unique Exadata and Exalogic optimizations, SuperCluster is optimized to run both database and enterprise applications, and is ideal for consolidation and private cloud. SuperCluster is a complete system integrating SPARC T5-8 servers, Exadata Storage Servers, the ZFS Storage Appliance, an InfiniBand network and software, delivering extreme performance, no single point of failure, and the highest efficiency while reducing risks and costs. Leverage Oracle SuperCluster T5-8 for IBM and HP competitive displacements, upgrading existing data centers, or new customer deployments. Please read the Product Bulletin on Oracle HW TRC for more details. (If you are not registered on Oracle HW TRC, click here ... and follow the instructions.) For More Information Go To: Oracle SuperCluster T5-8 oracle.com OTN

    Read the article

  • 12.04 Screen goes OFF and ON and hangs

    - by SKC
    I have installed Ubuntu 12.04. While I'm using it, the screen goes off and comes back on (during which time the system is not responsive), and this process repeats one more time in immediate succession, after which the computer is responsive again. Sometimes the system hangs indefinitely. Sometimes the windows go blank; only the title bars are visible. I have tried re-installing the OS three times with no improvement. I have tried both 32-bit as well as 64-bit. I have no idea what the problem is and have tried hard to pinpoint it. I never had this problem until 11.10. My system config is as follows: processor - 2nd gen Intel Core i7 (2660k); graphics - Intel onboard HD Graphics 3000 (no external graphics card); RAM - 4 GB. None of my friends seem to experience this problem. It really gets annoying as it happens really often. Please advise.

    Read the article

  • Can realtek 8192cu usb wireless card be used in ubuntu 12.04 with kernel 3.2.0?

    - by waterloo2005
    I followed the post "RTL8188CUS Wireless USB Dongle doesn't work unless I disable wireless security". But when I plug in my 8192cu USB wireless card, my computer screen goes off. At that time I can't even use Alt+SysRq+K or Alt+SysRq+REISUB. I compiled the latest 8192cu driver on Ubuntu 12.04 with kernel 3.2.0-34. From the Realtek site, I downloaded the 8192cu driver, which is for Linux kernel 2.6.18~2.6.38 and kernel 3.0.8. But now on Ubuntu 12.04 my kernel is 3.2.0-34. Every time I plug in the USB 8192cu wireless card my system halts. Now I have tried blacklisting both the system's rtl8192cu driver and the new 8192cu driver I compiled, but the system still halts when I plug in the USB device. What about you? Thanks!

    Read the article

  • vlc 1.1.9 not working properly

    - by jaggib
    I have installed VLC 1.1.9 on a Wubi-installed Ubuntu 11.04 using the Ubuntu Software Center. Now when I try playing videos in VLC (any format), full-screen mode doesn't show controls and usually doesn't get me back to windowed mode; it sometimes crashes to the login screen. I tried setting the video output to 'X11 output mode' and 'XVideo output (XCB)', but the above problem persists and another problem appears: full-screen mode doesn't always respond, and when it does, it shows the video over the desktop instead of inside the player. The only way to get Ubuntu to function normally is to restart the system. I tried not using 'Embed video in interface', but still the same problem. VLC runs perfectly well in Windows. How can I make VLC function properly on my system, or do I need to install another player? My system config is: Graphics: VIA/S3G UniChrome Pro IGP; Processor: AMD Sempron(tm) Processor 2800+; RAM: 1.5 GB; Motherboard Name: MS-7181 (MSI)

    Read the article

  • Best in-depth analytics or stats tools? (preferably server-side)

    - by Litso
    Hey all, I know there have been questions about this before, but mine is a little more specific. I work for a high-traffic website and we want to start tracking our visitors better. Unfortunately, Google Analytics is not an option at the moment, so what I'm looking for is some alternatives, preferably server-side (but not necessarily). We're currently running Urchin, but what I'm missing most there is the way you can set up conversions in Analytics and then track (for example) which keywords convert better or which landing pages convert better. Also, A/B testing is something I really miss. Which analytics tools can be compared to Google Analytics in terms of advanced segmentation, navigation summaries, A/B testing, etc.?

    Read the article

  • Is it easier to develop from scratch or not? [closed]

    - by Gnijuohz
    I am currently reading the book Computer Science: An Overview by J. Glenn Brookshear. In chapter 7, there is one passage as follows: "In fact, it is often within this phase (modification) that a piece of software is discarded under the pretense (too often true) that it is easier to develop a new system from scratch than to modify the existing package successfully." When I read this, an article by Joel came to mind which mentions how Mozilla shouldn't have written its browser from scratch. (The article is here.) So, is it mostly true that it's easier to develop a new system from scratch than to modify an existing one? Or is it closely related to the complexity of the system?

    Read the article
