Search Results

Search found 28627 results on 1146 pages for 'case statement'.


  • I have ESXi 5.0 installed and when installing Ubuntu Server 12.04 LTS it gives an error saying "grub installation failed"

    - by Rishee
    I have ESXi 5.0 installed and when installing Ubuntu Server 12.04 LTS (32-bit) it gives an error saying grub installation failed. Please check the screenshot of the error below. I have other Ubuntu servers running fine on this ESXi server, so I don't think the problem is with ESXi. I have 32 GB of RAM spare on this ESXi host and have given 2 GB of RAM and 2 processor cores to this 12.04 LTS VM. I have tried supplying different ISO images to this VM, as I thought the first image I downloaded had errors, but that's definitely not the case: all three different ISO images of Ubuntu Server 12.04 LTS (32-bit) that I downloaded can't all be corrupt. Just to make sure the image does not have a problem, I used it to install on a standalone system for testing, and it works fine there. This is a production ESXi server which I can't play with; however, I can play with the Ubuntu Server 12.04 LTS (32-bit) VM that we have created on it. I need help on this as soon as possible; the go-live date of this server is really close. (This question is already on Super User and Server Fault.)

    Read the article

  • What FOSS solutions are available to manage software requirements?

    - by boos
    In the company where I work, we are starting to plan to become compliant with the software development life cycle. We already have a wiki, a VCS, a bug tracking system and a continuous integration system. The next step is to start managing software requirements in a structured way. We don't want to use a wiki or shared documents because we have many sources of input (developers, managers, commercial people, security analysts and others) and we don't want to handle a proliferation of .doc files around the network share. We are searching for, and hope we can find, a FOSS tool to manage all of this. We have about 30 people and don't have a budget for commercial software, so we need a free solution for requirements management. What we want is software that can manage the following. Required features: software requirements divided in a structured, configurable way; versioning of the requirements (history, diff, etc., like source code); interdependency of requirements (child of, parent of, related to); rule-based access control for data handling; multi-user, multi-project; file upload (for graphs, related documents and so on); report and extraction features. Optional features: web based; test cases; time-based management (timeline, expected dates, result data); person allocation and so on; business-related stuff; hardware allocation handling. I have already played with TestLink and am now playing with RTH; the next one I'll try is Redmine.

    Read the article

  • Apache + SuExec + php-fpm - how to set them up?

    - by FractalizeR
    Hello. I wonder if there is a good guide on how to set up Apache + SuExec + php-fpm? I have a server on which I am going to run several separate websites, so I need PHP to run as the site-owner user. As I can see, php-fpm is a little different from php-fcgi. Is there a need for mod_fcgid on the Apache side in this case? How do I set this all up? For now my site is running Apache + mod_suphp + php-cgi, so... it's good, but a little slow. I want to preserve security and gain the ability to use APC.
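
    One common way to get per-site PHP users without suEXEC at all is to give each site its own php-fpm pool running as that site's owner. The snippet below is only a rough sketch under assumed names and paths (the pool file location, socket path and the "site1" user are illustrative, not taken from the question):

        ; hypothetical pool file, e.g. /etc/php-fpm.d/site1.conf (path varies by distro)
        ; php-fpm runs these workers as the site owner, so suEXEC is not needed for PHP itself
        [site1]
        user = site1                          ; the site-owner account
        group = site1
        listen = /var/run/php-fpm-site1.sock  ; one socket per site, Apache forwards to it
        pm = dynamic
        pm.max_children = 5
        pm.start_servers = 2
        pm.min_spare_servers = 1
        pm.max_spare_servers = 3

    Apache then only needs to hand .php requests to that socket (mod_fastcgi in older setups, mod_proxy_fcgi on Apache 2.4+), so mod_fcgid would not be required for this arrangement.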

    Read the article

  • Autocorrect for "fat fingers" - MS Word

    - by Jamie Bull
    I'm wondering if anyone knows of a plug-in for MS Word which can handle key-presses of surrounding keys when typing at speed (rather like iPhone or Android autocorrect)? My use case is in transcribing interviews where I need to type quickly (even with the playback at half speed) - but I don't do this often enough to become a proficient touch typist. I will also be paying close attention to the text produced in subsequent analysis so I have a reasonable expectation that I'll catch any "hilarious" autocorrect errors. Any pointers to plug-ins which work at either a system level or within MS Word would be great. Even in an open source word processor at a pinch, though I'd miss the MS Word environment and my macros. Thanks.

    Read the article

  • Rate-Limit affects All clients or single IP?

    - by Asad Moeen
    Up till now I've assumed that iptables rate-limit rules using the "recent" module work per IP address; for example, a 20k/s rule would trigger only if a single IP exceeds 20k/s, not if 4 different IPs each send 5k/s. Please correct me if I've got this wrong, as I've only used these rules for TCP/UDP. Today I tried similar rules for ICMP and applied a 4/s limit on input/output, but when I ran a ping test from just-ping.com I saw packet loss from almost all source IP addresses. How could that happen? If the limit applied per IP address it shouldn't have triggered, because I believe each IP from just-ping pings at a rate of about 1/s. I still think the per-IP interpretation is right, because otherwise my game server would block everyone once the combined rate (with more connected players) exceeded the threshold. That hasn't happened so far, so the ICMP behaviour really confused me. Thank you.
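
    For what it's worth, the usual distinction is in which match is used: a plain "-m limit" rule counts all matching packets in one global bucket, while per-source limiting needs "-m hashlimit" keyed on the source address (or the "recent" module with per-IP hit counts). A hedged sketch of a per-source ICMP limit, assuming an iptables build with the hashlimit match available:

        # Accept echo requests at up to 4/s per source IP (burst 4) and drop the excess;
        # hashlimit in srcip mode keeps a separate counter for every source address.
        iptables -A INPUT -p icmp --icmp-type echo-request \
            -m hashlimit --hashlimit-upto 4/sec --hashlimit-burst 4 \
            --hashlimit-mode srcip --hashlimit-name ping-limit -j ACCEPT
        iptables -A INPUT -p icmp --icmp-type echo-request -j DROP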

    Read the article

  • Troubleshooting a high SQL Server Compilation/Batch-Ratio

    - by Sleepless
    I have a SQL Server (quad core x86, 4GB RAM) that constantly has almost the same values for "SQLServer:SQL Statistics: SQL compilations/sec" and "SQLServer:SQL Statistics: SQL batches/sec". This could be interpreted as a server running 100% ad hoc queries, each one of which has to be recompiled, but this is not the case here. The sys.dm_exec_query_stats DMV lists hundreds of query plans with an execution_count much larger than 1. Does anybody have any idea how to interpret / troubleshoot this phenomenon? BTW, the server's general performance counters (CPU,I/O,RAM) all show very modest utilization.

    Read the article

  • How to get both PIPESTATUS and output in bash script

    - by Mustafa Serdar Sanli
    I'm trying to get the last modification date of a file with this command: TM_LOCAL=`ls -l --time-style=long-iso ~/.vimrc | awk '{ print $6" "$7 }'` TM_LOCAL has a value like "2012-05-16 23:18" after this line executes. I'd also like to check PIPESTATUS to see if there was an error. For example, if the file does not exist, ls returns 2, but $? has the value 0 because it holds the return value of awk. If I run the pipeline on its own, I can check the return value of ls by looking at ${PIPESTATUS[0]}: ls -l --time-style=long-iso ~/.vimrc | awk '{ print $6" "$7 }' But $PIPESTATUS does not work as I expected when I assign the output to a variable as in the first example; in that case the $PIPESTATUS array has only one element, which is the same as $?. So, the question is: how can I get both $PIPESTATUS and the output assigned to a variable at the same time?
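
    Two hedged workarounds (sketches, not the only options): the pipeline inside a command substitution runs in a subshell, so its PIPESTATUS is gone by the time the assignment returns, but "set -o pipefail" makes the assignment's own $? reflect a failure anywhere in the pipeline, and the status can also be smuggled out explicitly:

        # Option 1: with pipefail, $? after the assignment is non-zero if any stage failed.
        set -o pipefail
        TM_LOCAL=$(ls -l --time-style=long-iso ~/.vimrc | awk '{ print $6" "$7 }')
        echo "status: $?"                 # 2 if ls failed, even though awk succeeded
        set +o pipefail

        # Option 2: print PIPESTATUS[0] inside the substitution and split it off afterwards
        # (only trust TM_LOCAL when ls_status is 0).
        out=$(ls -l --time-style=long-iso ~/.vimrc | awk '{ print $6" "$7 }'; echo "${PIPESTATUS[0]}")
        ls_status=${out##*$'\n'}          # last line: exit status of ls
        TM_LOCAL=${out%$'\n'*}            # everything before it: the timestamp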

    Read the article

  • Kerberos: connection from win app running from IIS to SQL failed

    - by Mikhail Kislitsyn
    I have an IIS web application with Windows authentication and impersonation. This application connects to a SQL server, and in this case Kerberos works fine. But there is a problem: the web application runs a Windows application (not .NET), which also connects to the SQL server. The Windows application runs with the IIS app user's credentials and impersonates the current site user to connect to SQL Server. Scheme: http://i.stack.imgur.com/2cgv7.png When delegation for the IIS user is set to "Trust this computer for delegation to any service", everything works fine. But I can't use this type of delegation according to our security requirements. When I set delegation to "Specific services" and choose the MSSQLSvc SPN, the connection from the Windows application fails with an "ANONYMOUS" logon failure, and WireShark shows a "KRB5KDC_ERR_BADOPTION" packet. What am I doing wrong?

    Read the article

  • Dynamic/Adaptive RLE

    - by Lucius
    So, I'm developing a 2D, tile-based game and a map-maker thingy, all in Java. The problem is that recently I've been having some memory issues when about 4 maps are loaded. Each of these maps is composed of 128x128 tiles and has 4 layers (for details and stuff). I already spent a good amount of time searching for solutions and the best thing I found was run-length encoding (RLE). It seems easy enough to use with static data, but is there a way to use it with data that is constantly changing, without a big drop in performance? In my maps, supposing I'm compressing the columns, I would have 128 rows, each with some amount of data (hopefully less than it would be without RLE). Whenever I change a tile, that whole row would have to be checked, and I'm afraid that would slow down production too much (and I'm on a somewhat tight schedule). Well, worst case scenario I work on each map individually and save them using RLE, but it would be really nice if I could avoid that. EDIT: What I'm currently using to store the tile data is a 2D array of HashMaps that use the layer as key and store the id of the tile in that position, like this: private HashMap<Integer, Integer>[][]

    Read the article

  • Ubuntu 13.10 AMD/ATI proprietary driver slow boot time, black screen after logging in and lengthy login/logout delays

    - by NahsiN
    Ubuntu 13.10 is causing me major headaches with my AMD/ATI HD 5770 GPU. Below is a list of problems I am currently encountering. 1) The boot time is extended by at least 25s after installing catalyst 13.4. Using open source radeon drivers, my boot time till the login screen is ~10s. With catalyst 13.4 installed, the boot time increases to ~35s. This was not the case in Ubuntu 13.04, 12.10 or 12.04. I have done the driver installation manually (instructions from wiki.cchtml.com) and using software center and there is no difference. I have not tried the catalyst 13.8 beta driver. 2) After manual installation of catalyst 13.4, I get stuck at a black screen after logging in. I have to purge fglrx to resolve the problem. I tried sudo amdconfig --initial -f but it didn't help. 3) The delay between logging in and unity being displayed is ~10-15s for BOTH open source and proprietary drivers. During the delay, it's just a black screen. Whenever I logout, there is again a ~10-15s delay with the login screen appearing stuck before lightdm allows me to enter my password again. This is ridiculous! Yes, I could stick with open source radeon drivers but I would like to install Steam and play my Valve collection on the machine. Is anybody else encountering similar issues?

    Read the article

  • Simultaneous AI in turn based games

    - by Eduard Strehlau
    I want to hack together a roguelike. Now I've been thinking about entity and world representation and have run into a quite big problem. If you want all the AI to act simultaneously, you would normally (in cellular automata, for example) just copy the cell buffer and let the actions of individual cells depend on the copy. Actions which are no longer valid, because some cell processed before the one you are currently operating on changed the original environment (blocking the path), are just ignored or reapplied against the "current" (between-turns) environment. After all cells have acted, you copy the current map to the buffer again. Now, for an environment with complex AI and big (data-wise) entities, the copying would take too long. So I thought I could put every action an entity makes into a queue (making no changes to the environment) and execute the whole queue after everyone has taken their move. Every interaction on this queue is between really interacting entities, so if an entity tries to attack another entity it sends a message to it; the consequences of the attack would be visible next turn, either by just examining the entity or asking the entity for data. This would remove problems like what happens if an entity dies in the middle of the queue but still has queued actions or is messaged later on (all messages to it would go to null, and the messages from the entity would either just be sent or deleted; I haven't decided yet). But what would happen if a monster spawns a fireball which by itself tracks the player (in the same turn)? Should I add the fireball to the environment beforehand, i.e. make a change to the environment before executing the action list, or just add the ball to a "needs update" list as a special case, so that it doesn't exist in the environment yet but still operates on it, spawning after the action list is evaluated? Are there any solutions or papers on this subject which I can take a look at? EDIT: I don't need information on writing a roguelike; I need information on turn-based AI with respect to a complex environment.

    Read the article

  • How common is prototyping as the first stage of development?

    - by EpsilonVector
    I've been taking some software design courses in the past few semesters, and while I see the benefit in a lot of the formalism, I feel like it doesn't tell me anything about the program itself: You can't tell how the program is going to operate from the Use Case spec, even though it discusses what the program can do. You can't tell anything about the user experience from the requirements document, even though it can include quality requirements. Sequence diagrams are a good description of how the software works as the call stack, but are very limited, and give a highly partial view of the overall system. Class diagrams are great for describing how the system is built, but are utterly useless in helping you figure out what the software needs to be. Where in all this formalism is the bottom line: how the program looks, operates, and what experience it gives? Doesn't it make more sense to design off of that? Isn't it better to figure out how the program should work via a prototype and strive to implement it for real? I know that I'm probably suffering from being taught engineering by theoreticians, but I need to ask, do they do this in the industry? How do people figure out what the program actually is, not what it should conform to? Do people prototype a lot, or do they mostly use the formal tools like UML and I just didn't get the hang of using them yet?

    Read the article

  • Ubuntu and racadm

    - by lmqcn
    I recently purchased a used PowerEdge 1850 server and it came with a DRAC card. After wiping the HDD and installing Ubuntu Server 12.04.3 LTS amd64 on it, I am now trying to gain access to the DRAC, which I believe is version 4. I have properly configured the DRAC to use its own IP on my LAN, and when I point my browser to that IP address I am greeted with the DRAC login page (it has the Dell logo and everything). However, after trying the credentials root/calvin, I was denied access, so I think the previous owners had set their own password. After doing some reading, it appears that I can reset the credentials to the default using racadm config -g cfgUserAdmin -o cfgUserAdminPassword -i 1 newpassword but upon entering the command, I get this error: bash: /usr/sbin/racadm: No such file or directory This holds true even if I run sudo su prior to running the racadm command. If, however, I run sudo racadm config -g cfgUserAdmin -o cfgUserAdminPassword -i 1 newpassword there are no errors. Yet, when I try to log into the DRAC via the web interface using the credentials root/newpassword I am still not granted access. I installed the Dell utilities via the guide at https://wiki.ubuntu.com/HardwareSupportMachinesServersDellNotes. I first tried to install the 64-bit version that is in the Dell repositories, but after that was unsuccessful, I just followed the guide verbatim. No errors were produced in either case. I even followed the information at the bottom of the guide by executing sudo pppd /dev/ttyS1 1382400 crtscts noipdefault noauth lock persist connect 'chat -v "" CLIENT CLIENTSERVER "\\c"' but obviously replacing /dev/ttyS1 with the correct information for my system. ls -l /usr/sbin/ | grep racadm yields -rwxr-xr-x 1 root root 87930 Sep 16 04:03 racadm I have tried these credentials after each attempt at changing the password: root/calvin root/newpassword admin/calvin admin/newpassword All have been unsuccessful. What is the next course of action that I should take?
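
    One hedged check that may be worth doing before concluding that the reset failed (a sketch; the idea that "root" might live at a user index other than 1 is an assumption on my part, since index 2 is common on many DRACs):

        # List the user name stored at each DRAC user index to see which one actually holds "root".
        for i in 1 2 3 4; do
            echo -n "index $i: "
            sudo racadm getconfig -g cfgUserAdmin -o cfgUserAdminUserName -i $i
        done

        # Then reset the password at the index that really holds root, for example index 2:
        sudo racadm config -g cfgUserAdmin -o cfgUserAdminPassword -i 2 newpassword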

    Read the article

  • How do graphics programmers deal with rendering vertices that don't change the image?

    - by canisrufus
    So, the title is a little awkward. I'll give some background, and then ask my question. Background: I work as a web GIS application developer, but in my spare time I've been playing with map rendering and improving data interchange formats. I work only in 2D space. One interesting issue I've encountered is that when you're rendering a polygon at a small scale (zoomed way out), many of the vertices are redundant. An extreme case would be that you have a polygon with 500,000 vertices that only takes up a single pixel. If you're sending this data to the browser, it would make sense to omit ~499,999 of those vertices. One way we achieve that is by rendering an image on a server and sending it as a PNG: voila, it's a point. Sometimes, though, we want data sent to the browser where it can be rendered with SVG (or canvas, or webgl) so that it can be interactive. The problem: It turns out that, using modern geographic data sets, it's very easy to overload SVG's rendering abilities. In an effort to cope with those limitations, I'm trying to figure out how to visually losslessly reduce a data set for a given scale and map extent (and, if necessary, for a known map pixel width and height). I got a great reduction in data size just using the Douglas-Peucker algorithm, and I believe I was able to get it to keep the polygons true to within one pixel. Unfortunately, Douglas-Peucker doesn't preserve topology, so it changed how borders between polygons got rendered. I couldn't readily find other algorithms to try out and adapt to the purpose, but I don't have much CS/algorithm background and might not recognize them if I saw them.

    Read the article

  • Execute background program in bash without job control

    - by Wu Yongzheng
    I often run GUI programs, such as firefox and evince, from the shell. If I type "firefox &", firefox is treated as a bash job, so "fg" will bring it to the foreground and "hang" the shell. This becomes annoying when I already have some background jobs such as vim running. What I want is to launch firefox and dissociate it from bash. Consider the following ideal case with my imaginary runbg: $ vim foo.tex ctrl+z and vim is job 1 $ pdflatex foo $ runbg evince foo.pdf evince runs in the background and I get my bash prompt back $ fg vim goes to the foreground Is there any way to do this using an existing program? If not, I will write my own runbg.
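
    The usual ingredients for a helper like this are setsid/nohup plus disown; a minimal sketch of the imaginary runbg (the name is just the questioner's placeholder):

        # Run a command detached from the current shell and its job table.
        runbg() {
            nohup setsid "$@" >/dev/null 2>&1 &   # new session, immune to hangup, output discarded
            disown                                # drop it from bash's job list
        }

        runbg evince foo.pdf    # evince starts; fg still brings vim back, not evince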

    Read the article

  • Android-Libgdx-ProGuard: Usefulness without DexGuard? [on hold]

    - by Rico Pablo Mince
    So I'm developing a game for Android - using LibGDX - and noticed that the Android SDK (HDK, MDK, WhatTheHellEvarDK) has ProGuard built-in. Browsing the ProGuard page is like searching Google: you get that the idea is to sell some product (in this case, it's DexGuard). That leaves me wondering what features are left out of ProGuard that a game developer targeting Android should worry about. For instance, the ProGuard FAQs answer the question: "Does ProGuard encrypt string constants?" by saying: "No. String encryption in program code has to be perfectly reversible by definition, so it only improves the obfuscation level. It increases the footprint of the code. However, by popular demand, ProGuard's closed-source sibling for Android, DexGuard, does provide string encryption, along with more protection techniques against static and dynamic analysis." Alright. OK. But isn't "...improves the obfuscation level" EXACTLY what ProGuard is supposed to do? Are there better options that can be implemented at build-time in Eclipse using the Gradle options and Libgdx? In particular, the assets folder and res-specific folders will need some protection. The code itself doesn't cure cancer, but I'd prefer if nobody could copy/paste it with different game art and call it "IhAxEdUrGamE"....

    Read the article

  • Any language where every class instance is a class too?

    - by Dokkat
    Taking inspiration from Javascript prototypes, I had the idea of a language where every instance can be used as a class. Before I potentially reinvent the wheel, I would like to ask if there is a language already using this concept: //To declare a Class, extend the base class (in this case, Type) Type(Weapon,{price:0}); //Same syntax to inherit; simply extend the parent: Weapon(Sword,{price:3}); Weapon(Axe,{price:4}); Sword(Katana,{price:7}); Sword(Dagger,{price:3}); //And the same to create an instance: Katana(myKatana,{nickname:"Leon"}); myKatana.price; // 7 myKatana.nickname; // Leon // An operator to return children of a class; Sword_; // [Katana, Dagger] // An operator to return array of descendants; Sword__; // [Katana, Dagger, myKatana] // An operator to return array of parents; Sword^; // Weapon // Arrays can be used as elements Sword__.price += 1; //increases price of Sword's descendants by 1 mySword.price; //8 // And to access specific element (using its name instead of index) var name = "mySword" Katana_[name]; // [mySword] Katana_[name].nickname; // Leon Has this kind of approach been already studied/implemented?

    Read the article

  • Getting the Internal Names of SharePoint List Fields

    - by Gino Abraham
    Over the last 2 weeks I was developing a tool to migrate a Lotus Notes database to SharePoint. The mapping between the Lotus Notes schema and the SharePoint list schema was done manually in an XML file for our tool. To map the columns we wanted the internal name of each field. There are quite a few ways to achieve this; I have explained a few below. If you want the internal names for one or two columns, you can get them by navigating to the list settings and clicking on the column name. Once you are in the column's details, check the query string of the page: the last item in the query string is the field's internal name. Replacing every "%5f" with '_' gives you the internal name. In my case there were more than 80 columns, so I used PowerShell to get the list of columns with details. Open Windows PowerShell and paste the following script after modifying the URL and list name. [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SharePoint") $site = new-object Microsoft.SharePoint.SPSite("http://yousitecolurl") $web = $site.OpenWeb() $list = $web.Lists["yourlist name"] $list.Fields | Format-Table Title, InternalName, TypeAsString I also found a tool on CodePlex which can generate a wrapper class for a list. The wrapper class will give you the GUID and internal name of every field in the list. You can download the tool from http://imtech.codeplex.com/ Just enter the URL in the text box and hit Open. All the site content will be listed on the left-hand side; expand the list, right-click and select generate wrapper class.

    Read the article

  • EC2 instance store cloning or converting to EBS via the GUI management console

    - by devnull
    I have found similar questions here, but the answers are either outdated or use the command line. The case is this: I have an EC2 instance using instance store (this was the only AMI available for Debian 6 in Ireland). Through the AWS GUI I can take a snapshot of the instance volume and even create a volume from it, but an image made from the snapshot doesn't boot. What is the best way to either clone an EC2 instance that uses instance store, or launch a new EBS-backed instance (an identical clone) from the snapshot of the instance store, from the AWS management console GUI and not the command line? Before turning this down, consider that there is no similar question on how to do it via the AWS management console, and a hint that it "can't be done" is not an appropriate answer, since you can create a snapshot of an instance-store-backed instance and/or a volume and create an AMI from that snapshot.

    Read the article

  • Best choice for off-site backup: dd vs tar

    - by plok
    I have two 1TB single-partition hard disks configured as RAID1, of which I would like to make an off-site backup on a third disk, which I am still to buy. The idea is to store the backup at a relative's house, considerably far away from my place, in the hope that all the information will be safe in the case of a global thermonuclear apocalypse. Of course, this backup would be well encrypted. What I still have to decide is whether I am going to simply tar the entire partition or, instead, use dd to create an image of the disks. Is there any non-trivial difference between these two approaches that I could be overlooking? This off-site backup would be updated no more than two or three times a year, in the best of the cases, so performance should not be a factor to be pondered at all. What, and why, would you use if you were me? dd, tar, or a third option?
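
    For concreteness, the two approaches boil down to something like the following hedged sketch (device names, mount points and the choice of gpg for the encryption are assumptions, not details from the question):

        # dd: a raw image of the block device; it captures the filesystem, free space and all,
        # and can only be restored to a disk at least as large as the original.
        dd if=/dev/md0 bs=4M | gpg --symmetric --cipher-algo AES256 -o /mnt/offsite/raid.img.gpg

        # tar: a file-level archive of the mounted filesystem; usually smaller and restorable
        # anywhere, but it does not preserve the partition/RAID layout itself.
        tar -cpzf - -C /mnt/raid . | gpg --symmetric --cipher-algo AES256 -o /mnt/offsite/raid.tar.gz.gpg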

    Read the article

  • Agile Testing Days 2012 – Day 2 – Learn through disagreement

    - by Chris George
    I think I was in the right place! During Day 1 I kept reading tweets about the Lean Coffee that had happened earlier that morning. It intrigued me and I figured in for a penny, in for a pound, so I set my alarm for 6:45am. After the award night the night before it was _really_ hard getting up when it went off, but I did, and after a very early breakfast set off for the 10 min walk to the Dorint. With Lean Coffee due to start at 07:30, I arrived at the hotel and made my way to one of the hotel bars. I soon realised I was in the right place: although the bar was empty, there was a table with post-its and pens! This MUST be the place! The premise of Lean Coffee is to have several small timeboxed discussions. Everyone writes down what they would like to discuss on post-its, which are then briefly explained and submitted to the pile. Once everyone is done, the group dot-votes on the topics. The topics are then sorted by the dot-vote counts and the discussions begin. Each discussion had 8 mins to start with, which prevented the discussions getting too far off topic. After the time elapsed, the group voted on whether to extend the discussion by a further 4 mins or move on. Several discussions were had around training, soft skills etc. The conversations were really interesting and there were quite a few good ideas. Overall it was a very enjoyable experience, certainly worth the early start!

    Make Melly Happy. Following Lean Coffee was real coffee, and much needed that was! The first keynote of the day was "Let's help Melly (Changing Work into Life)" by Jurgen Appelo. This was a very interesting presentation and set the day up nicely. The theme of the keynote was that projects are about the people, more so than the actual tasks. He started by showing a photo of an employee, 'Melly', who looked happy enough, then stated that she looked happy but actually hated her job; in fact 50% of Americans hate their jobs, and the world over, 50% of people hate their jobs. Jurgen talked about many ways to reduce the feedback cycle, not only of the project, but of the people management: ideas such as happiness doors, happiness tracking (drawing lines on a wall indicating your happiness for that day) and kudo boxes (to compliment a colleague for good work). All of these (and more) ideas stimulate conversation amongst the team and lead to early detection of issues and investigation of solutions. I've massively simplified Jurgen's keynote and have certainly not done it justice, so I will post a link to the video once it's available.

    Following more coffee, the next talk was "How releasing faster changes testing" by Alexander Schwartz. This is a topic very close to our hearts at the moment, so I was eager to find out any juicy morsels that could help us achieve more frequent releases, and Alex did not disappoint. He started off by confirming something that I have been a firm believer in for a number of years now: adding more people can do more harm than good when trying to release. This is for a number of reasons, but just adding new people to a team at such a critical time can be more of a drain on resources than a help. The alternative is for the whole team to have shared responsibility for faster delivery, so the whole team is responsible for quality and testing. Obviously you will have the test engineers on the project who have the specialist skills, but there is no reason the entire team cannot do exploratory testing on the product. This links nicely with the Developer Exploratory testing presented by Sigge on Day 1, and is certainly something that my team are really striving towards. Focus on cycle time: what can be done to reduce the time between dev cycles and release cycles? What stops a release, what delays a release? All good solid questions that can be answered. Alex suggested that perhaps the product doesn't need to be fully tested; doing less testing will reduce the cycle time and therefore get the release out faster. He suggested a risk-based approach to planning what testing needs to happen. Reducing testing could have an impact on revenue if it causes harm to customers, so test the 'right stuff'! Determine a set of tests that are 'face saving' or 'smoke' tests; these cover the core functionality of the product and aim to prevent major embarrassment if those areas were to fail! Amongst many other very good points, Alex suggested that a good approach would be to release after every new feature is added: do a bit of work, release, do some more work, release. By releasing small increments of work, the impact on the customer of bugs being introduced is reduced.

    Red Pill, Blue Pill. The second keynote of the day was "Adaptation and improvisation – but your weakness is not your technique" by Markus Gartner and proved to be another very good presentation. It started off quoting lines from The Matrix which relate to adapting, improvising, realisation and mastery; it had a lot of nerds in the room smiling! Markus went on to explain how, through deliberate practice (and a lot of it!), you can achieve mastery, but then you never stop learning. Through methods such as code retreats, testing dojos and workshops you can continually improve and learn. The code retreat idea was one that interested me: it involved pairing to write an automated test for, say, 45 mins, then deleting all the code, finding a different partner and writing the same test again! This is another keynote where the video will speak louder than anything I can write here! Markus did elaborate on something that Lisa and Janet had touched on yesterday whilst busting the myth that "Testers Must Code". Whilst it is true that to be a tester you don't need to code, it is becoming more common that there is a crossover happening where more testers are coding and more programmers are testing. Markus made a special distinction between programmers and developers, as testers develop test code too, so this helped to make that clear.

    "Extending Continuous Integration and TDD with Continuous Testing" by Jason Ayers was my next talk after lunch. We already do CI and a bit of TDD on my project team, so I was interested to see what this continuous testing thing was all about and whether it would actually work for us. At the start of the presentation I was of the opinion that it just would not work for us because our tests are too slow, and that would be the case for many people. Jason started off by setting the scene, saying that those doing TDD spend between 10-15% of their time waiting for tests to run. This can be reduced by testing less often or reducing the test time, but that increases the risk of introduced bugs not being spotted quickly. Therefore, in comes Continuous Testing (CT). CT systems run your unit tests whenever you save some code and run them in the background so you can continue working. This is a really nice idea, but to do this your tests must be fast, independent and reliable. The latter two should be the case anyway, and the first is ideal, but hard! Jason made several suggestions to make tests fast. Firstly, keep the scope of the test small; secondly, spin off any expensive tests into a suite which is run, perhaps, overnight or outside of the CT system at any rate. So this started to change my mind: perhaps we could re-engineer our tests and continuously run the quick ones to give an element of coverage. This talk was very interesting and I've already tried a couple of the tools mentioned on our product (Mighty Moose and NCrunch). Sadly, due to the way our solution is built, it currently doesn't work, but we will look at whether we can make this work because this has the potential to be a mini game-changer for us.

    Using the wrong data. The final keynote of the day was "Reinventing software quality" by Gojko Adzic. He opened the talk with the statement "We've got quality wrong because we are using the wrong data"! Gojko then went on to explain that we should judge a bug by whether the customer cares about it, not by whether we think it's important. Why spend time fixing issues that the customer just wouldn't care about, and release months later because of this? Surely it's better to release now and get customer feedback? This was another reference to the idea that it's better to build the right thing wrong than the wrong thing right: get feedback early to make sure you're making the right thing. Gojko then showed a hierarchy of quality which was very analogous to Maslow's hierarchy of needs: Successful – does it contribute to the business? Useful – does it do what the user wants? Usable – does it do what it's supposed to without breaking? Performant/Secure – is it secure and is the performance acceptable? Deployable/Functionally ok – can it be deployed without breaking? He then explained that user stories should focus on change; in other words, they should focus on the user's needs, not the user's process. Describe what the change will be and how that change will happen, then measure it!

    Networking and Beer. Following the day's closing keynote there were drinks and nibbles for the 'networking' evening. This was a great opportunity to talk to people. I find approaching strangers very uncomfortable, but once again, when in Rome! Pete Walen and I had a long conversation about only fixing issues that the customer cares about versus fixing issues that make you proud of your software! Without saying much, and by asking the right questions, Pete made me re-evaluate my thoughts on the matter. Clever, very clever! Oh, and he 'bought' me a beer! My Takeaway Triple from Day 2: release small and release often to minimise issues creeping in and get faster feedback from 'the real world'; focus on issues that the customers care about, not what we think is important; and it's okay to disagree with someone, even if they are well-respected agile testing gurus, because that's how discussion and learning happens!

    Read the article

  • Which Message Queue should I choose (must run on Linux)

    - by MHS
    There are many open source Message queues for Linux, and I need some help deciding what I should go for. My problem is simple - I get sent a list of files that needs to be processed. Each job can't be split up, but they are self contained and can be spread to multiple computers. I'm thinking of solving this using a message queue. Multiple clients send a message to a central queue. Each queue has a number of subscribers that will take jobs from that queue when they have finished processing the current job. Ideally it should have the following qualities Message queue must be able to store unprocessed messages in case of a shutdown/reboot A job can only be processed by a single subscriber (don't want duplicate jobs) The subscribers should be able to send jobs of their own, that will be processed by a different set of subscribers. Can anyone suggest a simple to use message queue?

    Read the article

  • Home router: Access internal server using external IP

    - by user15863
    If I've got a typical home router -- say a NetGear -- which has certain ports forwarded to an internal server, is there a way to tweak the router to let me access that internal server using the external IP address from within the same network? Is there a non-enterprise-grade router that can handle this type of thing? In case that was strangely worded, let me re-phrase with an example. My external IP is 1.2.3.4. My internal server is 10.4.3.100. Port 1178 is being forwarded from the router to 10.4.3.100. I'd like to be able to hit 10.4.3.100 from an internal IP of 10.4.3.10 by using the external IP of 1.2.3.4. Possible?
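
    What's being described is usually called NAT loopback (or hairpin NAT); on many consumer routers it is simply present or absent with no setting to tweak. On a router you control (for example one running a Linux-based firmware), the idea looks roughly like this sketch, reusing the question's addresses and assuming a 10.4.3.0/24 LAN:

        # DNAT as usual: traffic to the external IP on port 1178 goes to the internal server.
        iptables -t nat -A PREROUTING -d 1.2.3.4 -p tcp --dport 1178 -j DNAT --to-destination 10.4.3.100
        # The hairpin part: when a LAN host reaches the server this way, masquerade the source
        # so the server's replies come back through the router instead of going direct.
        iptables -t nat -A POSTROUTING -s 10.4.3.0/24 -d 10.4.3.100 -p tcp --dport 1178 -j MASQUERADE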

    Read the article

  • How do I convince my team that a requirements specification is unnecessary if we adopt user-stories?

    - by Nupul
    We are planning to adopt user-stories to capture stakeholder 'intent' in a lightweight fashion rather than a heavy SRS (software requirements specifications). However, it seems that though they understand the value of stories, there is still a desire to 'convert' the stories into an SRS-like language with all the attributes, priorities, input, outputs, source, destination etc. User-stories 'eliminate' the need for a formal SRS like artifact to begin with so what's the point in having an SRS? How should I convince my team (who are all very qualified CS folks by the way - both by education and practice) that the SRS would be 'eliminated' if we adopted user-stories for capturing the functional requirements of the system? (NFRs etc can be captured too, but that's not the intent of the question). So here's my 'work-flow' argument: Capture initial requirements as user-stories and later elaborate them to use-cases (which are required to be documented at a low level i.e. describing interactions with the UI prototypes/mockups and are a deliverable post deployment). Thus going from user-stories to use-cases rather than user-stories to SRS to use-cases. How are you all currently capturing user-stories at your workplace (if at all) and how do you suggest I 'make a case' for absence of SRS in presence of user-stories?

    Read the article

  • Does Lapping a CPU / Heatsink actually drop the temp?

    - by Pure.Krome
    Hi folks, I've been watching some YouTube videos about lapping a CPU. I'd never heard of this modding technique before and, though it's extreme, I was wondering whether it actually works. Assuming you lap your CPU and/or heatsink correctly, will the temps drop? When I say drop, at least a 1 degree drop counts as success (for the purpose of this debate). To keep this topic clean, please refrain from commenting on the overkill of the labour just for a 1 degree (worst case) drop, etc. This is a discussion about the theory and concept, not personal opinion on whether to lap or not.

    Read the article
