Search Results

Search found 22900 results on 916 pages for 'pascal case'.

Page 384/916

  • Using an Apt Repository for Paid Software Updates

    - by Scott Warren
    I'm trying to determine a way to distribute software updates for a hosted/on-site web application that may have weekly and/or monthly updates. I don't want the customers who use the on-site product to have to worry about updating it manually; I just want it to download and install automatically, à la Google Chrome. I'm planning on providing an OVF file with Ubuntu and the software installed and configured. My first thought on how to distribute the software is to create six Apt repositories/channels (not sure which would be better at this point) that will be accessed through SSH using keys, so that if a customer doesn't renew their subscription we can disable their account:

    Beta - used internally on test data to check the package for major defects.
    Internal - used internally on live data to check the package for defects (dog-fooding stage).
    External 1 - deployed to 1% of our user base (randomly selected) to check for defects.
    External 9 - deployed to 9% of our user base (randomly selected) to check for defects.
    External 90 - deployed to the remaining 90% of users.
    Hosted - deployed to the hosted environment.

    It will take a sign-off at each stage to move into the next repository, in case problems are reported. My questions to the community are: Has anyone tried something like this before? Can anyone see a downside to this type of procedure? Is there a better way?
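
    Apt can fetch over SSH, so per-customer keys map naturally onto this scheme. As a minimal sketch, a customer's sources.list entry might look like the line below (the hostname, user and channel name are hypothetical, and this assumes apt's ssh/rsh transport is available on the appliance):

        deb ssh://customer-acme@apt.example.com/srv/apt external-90 main

    Revoking the customer-acme key on the repository server would then cut off that customer's updates without touching the repositories themselves.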

    Read the article

  • Getting Internal Names of SharePoint List Fields

    - by Gino Abraham
    Over the last two weeks I was developing a tool to migrate a Lotus Notes database to SharePoint. The mapping between the Lotus Notes schema and the SharePoint list schema was done manually in an XML file for our tool, and to map the columns we wanted the internal names of each field. There are quite a few ways to achieve this; I have explained a few below. If you only want the internal names of one or two columns, you can get them by navigating to the list settings and clicking on the column name. Once you are in the column's details, check the query string of the page: the last item in the query string is the field's internal name, and replacing every "%5f" with '_' gives you the internal name. In my case there were more than 80 columns, so I used PowerShell to get the list of columns with details. Open Windows PowerShell and paste the following script after modifying the URL and list name:

        # load the SharePoint object model, open the site and list,
        # then print every field's title, internal name and type
        [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SharePoint")
        $site = new-object Microsoft.SharePoint.SPSite("http://yoursitecollectionurl")
        $web = $site.OpenWeb()
        $list = $web.Lists["your list name"]
        $list.Fields | Format-Table Title, InternalName, TypeAsString

    I also found a tool on CodePlex which can generate a wrapper class for a list; the wrapper class gives you the GUID and internal name of every field in the list. You can download the tool from http://imtech.codeplex.com/ - just enter the URL in the text box and hit Open. All the site content will be listed on the left hand side; expand the list, right-click and select "generate wrapper class".

    Read the article

  • wi-fi connection drops periodically for a few seconds

    - by sergiom
    I've read the similar questions on wireless connections dropping, but no answer seems to apply to my case. I have configured the Wi-Fi LAN of my router to broadcast its SSID and use WPA-PSK. Every few minutes my Wi-Fi connection drops for a few seconds and then recovers. When I use two computers and run ping -n 50000 on both, I see that the connections drop at different times, but at almost the same rate. The router is a ZyXEL. One PC runs Windows Vista and uses a Belkin F6D4050 USB Wi-Fi adapter; the other is a Dell PC running Windows 7 with an Intel(R) WiFi Link 5100 AGN. There are no other Wi-Fi LANs around.

    Read the article

  • MongoDB ReplicaSet Elections when some nodes are down

    - by SecondThought
    I'm trying to get into the replica set concept, and I found something weird in the MongoDB documentation:

        "For a node to be elected primary, it must receive a majority of votes. This is a majority of all votes in the set: if you have a 5-member set and 4 members are down, a majority of the set is still 3 members (floor(5/2)+1). Each member of the set receives a single vote and knows the total number of available votes. If no node can reach a majority, then no primary can be elected and no data can be written to that replica set (although reads to secondaries are still possible)."

    (taken from here) So, if I got that right, in the 5-member case mentioned there, the one node that's still standing will NOT be chosen as primary and the whole set will not accept any writes, even if this single node was the last primary before the election? If that's true, there can be many less radical cases which end up with a degenerate set. How can we avoid this?
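
    The arithmetic the documentation quotes is easy to sanity-check; a tiny illustrative sketch (plain Python, not MongoDB code):

        # Majority rule quoted above: floor(n/2) + 1 votes, counted against
        # the configured set size, not just the members still running.
        def votes_needed(set_size):
            return set_size // 2 + 1

        for size in (3, 5, 7):
            print(size, "members -> primary needs", votes_needed(size), "votes")

    A 5-member set always needs 3 votes, so with 4 members down the lone survivor can never win an election, which is exactly the degenerate case the question describes.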

    Read the article

  • SQL SERVER – Validating Unique Column Name Across Whole Database

    - by pinaldave
    I sometimes come across very strange requirements, and often I do not receive a proper explanation of them. Here is one of those examples.

    Asker: "Our business requirement is that when we add a new column we want it unique across the current database."
    Pinal: "Why do you have such a requirement?"
    Asker: "Do you know the solution?"
    Pinal: "Sure, I can come up with the answer, but it will help me to come up with an optimal answer if I know the business need."
    Asker: "Thanks – what will be the answer in that case?"
    Pinal: "Honestly, I am just curious about the reason why you need your column names to be unique across the database."
    (Silence)
    Pinal: "Alright – here is the answer – I guess you do not want to tell me the reason."

    Option 1: Check if Column Exists in Current Database

        IF EXISTS (SELECT * FROM sys.columns WHERE Name = N'NameofColumn')
        BEGIN
            SELECT 'Column Exists'
            -- add other logic
        END
        ELSE
        BEGIN
            SELECT 'Column Does NOT Exist'
            -- add other logic
        END

    Option 2: Check if Column Exists in Current Database in Specific Table

        IF EXISTS (SELECT * FROM sys.columns WHERE Name = N'NameofColumn'
                   AND OBJECT_ID = OBJECT_ID(N'tableName'))
        BEGIN
            SELECT 'Column Exists'
            -- add other logic
        END
        ELSE
        BEGIN
            SELECT 'Column Does NOT Exist'
            -- add other logic
        END

    I guess the user did not want to share the reason why he had the requirement of column names being unique across the database. Here is my question back to you: have you ever faced a similar situation where you needed unique column names across a database? If not, can you guess what could be the reason for this kind of requirement?

    Additional Reference: SQL SERVER – Query to Find Column From All Tables of Database

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL System Table, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Is this kind of Design by Contract useless?

    - by Charlie Pigarelli
    I've just started at university (informatics) and I'm attending a programming course about C(++). The professor prefers to teach very few things (in 3 months we have just reached the functions topic) and connects every topic with a type of program design that is somehow similar to Design by Contract. Basically, what he asks us to do is to write every exercise with comments stating pre-conditions, post-conditions and invariants that should prove the correctness of each program we write. But this doesn't make any sense to me. I mean, OK: maybe writing down your thoughts prevents you from making some mistakes, but if this is all abstract, then if your intuition about the program is wrong you'll write the program wrong, and then you'll probably also write the pre- and post-conditions wrong, convincing yourself of its correctness. Most of the time, both I and other students have written programs that seemed OK and had correct pre- and post-conditions too, yet at the moment of testing they were just completely wrong. I had some programming experience before this course, had written a lot of lines of code, and found myself comfortable just writing a program and unit testing it. That takes less time and is less "abstract" than thinking about what every single piece of your program should do in every case (which is kind of like mentally testing it). Finally, all these pre- and post-conditions take me like 80% of the total time of the exercise; it's harder to get the pre- and post-conditions right than to write the program itself. Since ours is probably the only course at the only university in the entire world that does these things, could someone please tell me how I should handle this? Am I right in thinking that it isn't worth anything? Should I change university? (There are about double the people attending that course, and it seems that usually very few people pass the exam in the first year.) Should I convince myself that his method is right?
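
    One thing that can make such contracts feel less abstract is making them executable instead of leaving them as comments. A minimal sketch (in Python rather than the course's C++, purely to illustrate the idea):

        import math

        def isqrt(n):
            # pre-condition: n is a non-negative integer
            assert isinstance(n, int) and n >= 0, "pre: n must be a non-negative int"
            r = math.isqrt(n)
            # post-condition: r is the integer square root of n
            assert r * r <= n < (r + 1) * (r + 1), "post: r == floor(sqrt(n))"
            return r

    Written this way, a wrong contract fails loudly the first time a test exercises it, rather than silently "proving" a wrong program correct.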

    Read the article

  • Ubuntu 13.10 AMD/ATI proprietary driver slow boot time, black screen after logging in and lengthy login/logout delays

    - by NahsiN
    Ubuntu 13.10 is causing me major headaches with my AMD/ATI HD 5770 GPU. Below is a list of problems I am currently encountering.

    1) The boot time is extended by at least 25s after installing Catalyst 13.4. Using the open source radeon drivers, my boot time to the login screen is ~10s; with Catalyst 13.4 installed, it increases to ~35s. This was not the case in Ubuntu 13.04, 12.10 or 12.04. I have done the driver installation both manually (instructions from wiki.cchtml.com) and through the Software Center, and there is no difference. I have not tried the Catalyst 13.8 beta driver.

    2) After manual installation of Catalyst 13.4, I get stuck at a black screen after logging in. I have to purge fglrx to resolve the problem. I tried sudo amdconfig --initial -f but it didn't help.

    3) The delay between logging in and Unity being displayed is ~10-15s for BOTH the open source and proprietary drivers. During the delay, it's just a black screen. Whenever I log out, there is again a ~10-15s delay, with the login screen appearing stuck before LightDM allows me to enter my password again. This is ridiculous!

    Yes, I could stick with the open source radeon drivers, but I would like to install Steam and play my Valve collection on this machine. Is anybody else encountering similar issues?

    Read the article

  • nginx dynamic server_name with regular expression doesn't work for co.uk

    - by redn0x
    I'm trying to set up an nginx server which dynamically loads content from a folder for a domain. To do this I'm using regular expressions in the server name, like so:

        server_name ~((?<subdomain>.+)\.)?(?<domain>.+)\.(?<tld>.*);

    This creates three variables for nginx to use later on. For example, the URL test.foo.example.com evaluates to:

        $subdomain = test.foo
        $domain = example
        $tld = com

    The problem arises when the co.uk top-level domain is used. In this case the URL test.foo.example.co.uk evaluates to:

        $subdomain = test.foo.example
        $domain = co
        $tld = uk

    How can I edit the regular expression so that it will also work for co.uk?
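
    One workaround (a sketch, assuming the multi-part public suffixes you serve can be enumerated) is to list them ahead of the generic single-label match:

        server_name ~^((?<subdomain>.+)\.)?(?<domain>[^.]+)\.(?<tld>co\.uk|com|net|org)$;

    With co.uk listed explicitly, test.foo.example.co.uk evaluates to $subdomain = test.foo, $domain = example and $tld = co.uk, while plain .com names behave as before.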

    Read the article

  • Disable laptop's display on boot when used with external display

    - by Ryan
    I keep my laptop tucked away and solely use an external display with it via HDMI. In Windows 7 display settings, I have it set to "Show desktop only on 2" (my external display). This works fine in all cases except when I boot the laptop while the external display is already connected. In that case, the laptop's display stays on and sticks at the Windows 7 boot logo unless I manually shut the display off. (I should mention that while the laptop's display is stuck at the boot logo, the external monitor and the computer are running just fine.) The laptop is an Asus N56VZ with Nvidia 650m graphics and the latest drivers. I've checked Nvidia's control panel as well as the BIOS, and nothing looked very promising. Any ideas as to how I can get my laptop screen to shut itself off after booting into Windows?

    Read the article

  • Java EE 7 JSR update

    - by Heather VanCura
    Java EE 7 JSR update... in case you missed the last few entries with JSR updates, there are 8 Java EE 7 JSRs currently in JCP milestone review stages. Your input is requested and needed!

    JSR 342: Early Draft Review 2 – Java Platform, Enterprise Edition 7 (Java EE 7) Specification (review ends 30 November); Oracle
    JSR 107: Early Draft Review – JCACHE - Java Temporary Caching API (review ends 22 November); Greg Luck, Oracle
    JSR 236: Early Draft Review – Concurrency Utilities for Java EE (review ends 15 December); Oracle
    JSR 338: Early Draft Review 2 – Java Persistence 2 (review ends 30 November); Oracle
    JSR 346: Public Review – Contexts and Dependency Injection for Java EE 1.1 (EC ballot 4-17 December); RedHat
    JSR 352: Public Review – Batch Applications for the Java Platform (EC ballot 4-17 December); IBM
    JSR 349: Public Review – Bean Validation 1.1 (EC ballot 20-26 November); RedHat
    JSR 339: Public Review – JAX-RS 2.0: The Java API for RESTful Web Services (review period ended, EC ballot ends 26 November); Oracle

    Also, check out the Java EE wiki with a specification and schedule update, including most recently the addition of JSR 236.

    Read the article

  • Linux Logitech QuickCam Pro 9000 - microphone issues

    - by drahnr
    I have a Logitech QuickCam Pro 9000; the cam itself is working, as it honors the UVC spec. This fancy webcam has an integrated mic which worked some time ago, but now it does not any more. (Note: I use PulseAudio, as it is a USB mic and I am not really keen on the hassle of an ALSA setup.) Things I have checked already:

    Whether it gets detected at all:

        $ lsusb | grep Logi
        Bus 002 Device 002: ID 046d:0809 Logitech, Inc. Webcam Pro 9000

    Whether it is muted in alsamixer - not the case, volume at 100.

    pavucontrol shows it too, but no input level bar!

    On top of that, if I open the GNOME 3 (fallback mode) audio panel (from the desktop panel) and disable/re-enable the device in the hardware tab, it works "for some time"... Any hints? Any ideas? I am really out of options for now, and the fact that it worked perfectly like 6 months ago makes it no better.
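
    For anyone triaging the same thing: a quick way to confirm whether PulseAudio still lists the capture device at all is the standard source query:

        $ pactl list short sources

    If the webcam's source disappears from that output when the mic stops working, the problem sits below PulseAudio (at the USB/driver level) rather than in a muted stream.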

    Read the article

  • Using Exception Handler in an ADF Task Flow

    - by anmprs
    Problem Statement: An exception thrown in a task flow gets wrapped in an exception that gives an unintelligible error message to the user (Figure 1).

    Solution 1: Overwriting the error message with a user-friendly error message (Figure 2).

    Steps to code:
    1. Generating an exception: Write a method that throws an exception and drop it in the task flow.
    2. Adding an Exception Handler: Write a method to overwrite the error in the bean or data control and drop the method in the task flow (Figure 3). This method is marked as the Exception Handler via Right-Click on method > Mark Activity > Exception Handler, or by the button displayed in the screenshot (Figure 4).

    The final task flow should look like Figure 5. This will overwrite the exception with the error message in Figure 2. Note: there is no need for a control flow between the two method calls.

    Solution 2: Re-routing the task flow to display an error page (Figure 6).

    Steps to code:
    1. This is the same as step 1 of Solution 1.
    2. Adding an Exception Handler: The exception handler is not always a method; in this case it is implemented on a task flow return. The task flow looks like Figure 7. In Figure 8 you will notice that the task flow return points to a control flow 'error' in the calling task flow. This control flow in turn goes to a view 'error.jsff' which contains the error message one wishes to display, as seen in Figure 9 ('withErrorHandling' is a call to the task flow in Figure 7).

    Read the article

  • What FOSS solutions are available to manage software requirements?

    - by boos
    In the company where I work, we are starting to plan for compliance with the software development life cycle. We already have a wiki, a VCS, a bug tracking system, and a continuous integration system. The next step is to start managing software requirements in a structured way. We don't want to use a wiki or shared documentation, because we have many inputs (developer, manager, commercial, security analyst and others) and we don't want to handle a proliferation of .doc files around the network share. We are searching for, and hope we can find, FOSS software to manage all of this. We have about 30 people and no budget for commercial software, so we need a free solution for requirements management. What we want is software that can manage:

    Required features:
    - Software requirements divided in a structured, configurable way
    - Versioning of the requirements (history, diff, etc., like source code)
    - Interdependency of requirements (child of, parent of, related to)
    - Rule-based access control for data handling
    - Multi-user, multi-project
    - File upload (for graphs and documents related to requirements)
    - Report and extraction features

    Optional features:
    - Web based
    - Test cases
    - Time-based management (timeline, expected dates, result dates)
    - Person allocation and so on
    - Business-related stuff
    - Hardware allocation handling

    I have already played with TestLink and now I'm playing with RTH; the next one I'll try is Redmine.

    Read the article

  • Avoid unwanted path in Zip file

    - by jerwood
    I'm making a shell script to package some files. I'm zipping a directory like this:

        zip -r /Users/me/development/something/out.zip /Users/me/development/something/folder/

    The problem is that the resulting out.zip archive has the entire file path in it. That is, when unzipped, it will contain the whole "/Users/me/development/something/" path. Is it possible to avoid these deep paths when putting a directory into an archive? When I run zip from inside the target directory, I don't have this problem:

        zip -r out.zip ./folder/

    In this case, I don't get all the junk. However, the script in question will be called from wherever. FWIW, I'm using bash on Mac OS X 10.6.
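
    One common workaround (a sketch using the same paths as above) is to cd inside a subshell, so that zip stores paths relative to the target's parent:

        (cd /Users/me/development/something && zip -r out.zip folder)

    The parentheses run the cd in a subshell, so the script's own working directory is untouched, and the archive entries start at folder/ no matter where the script was invoked from.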

    Read the article

  • How to install SIP+PyQt with apt-get + pip + virtualenv?

    - by kjo
    [I originally posted this question, under a different title, on Stack Overflow (here), but later I realized that my problem is very specific to apt-get, hence I am re-posting it here. Sorry for the duplication.]

    I'm trying to install PyQt on Ubuntu (and within a virtualenv). The list of obstacles I'm dealing with is far too long to include here, but the one I'm currently trying to get past is this:

        % workon myvenv
        (myvenv)% cd ~/.virtualenvs/myvenv/build/pyqt
        (myvenv)% python ./configure.py
        Traceback (most recent call last):
          File "./configure.py", line 32, in <module>
            import sipconfig

    OK, so let's install sipconfig...

        (myvenv)% pip install SIP
        Downloading/unpacking SIP
          Downloading sip-4.14.8-snapshot-02bdf6cc32c1.zip (848Kb): 848Kb downloaded
          Running setup.py egg_info for package SIP
            Traceback (most recent call last):
              File "<string>", line 14, in <module>
            IOError: [Errno 2] No such file or directory: '/home/yt/.virtualenvs/myvenv/build/SIP/setup.py'
        Complete output from command python setup.py egg_info:
        Traceback (most recent call last):
          File "<string>", line 14, in <module>
        IOError: [Errno 2] No such file or directory: '/home/yt/.virtualenvs/myvenv/build/SIP/setup.py'
        ----------------------------------------
        Command python setup.py egg_info failed with error code 1 in /home/yt/.virtualenvs/myvenv/build/SIP
        Storing complete log in /home/yt/.pip/pip.log

    The only recipe I've found so far for installing SIP is this:

        % python configure.py
        % make
        % sudo make install

    ...but this recipe goes against my policy of doing all my Ubuntu installations through apt-get (or through pip in the case of Python modules). Is there some way that I can install SIP with apt-get (and possibly pip)?
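
    For the record, a sketch of the pure distro-package route (the package names are the Ubuntu archive names of that era and assume the Python 2 / PyQt4 stack):

        $ sudo apt-get install python-sip python-qt4
        $ virtualenv --system-site-packages ~/.virtualenvs/myvenv

    The trade-off is that the bindings are then owned by apt rather than pip; the --system-site-packages flag merely lets the virtualenv see them.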

    Read the article

  • Bluetooth adapter turned from working fine to unrecognized

    - by easoncxz
    I had been using Bluetooth fine, with devices working, but today when I turned on my computer Bluetooth strangely failed. There is a Bluetooth icon on the top bar showing "Bluetooth on", but if I click on the "Bluetooth settings" item, a System Settings window shows up with a Bluetooth on-off switch that is disabled (i.e. fixed to off). More information about my case:

    I am a new Linux user, coming from Windows, and do not know the supposedly obvious commands.

    I am using a laptop. It initially didn't have Bluetooth; I bought a built-in type (instead of USB type) Bluetooth module and added it inside the laptop. Hence, I do not have a specific Fn+* key for Bluetooth. In Windows, I needed to install an additional driver that was intended for other machines in my laptop's series which have factory built-in Bluetooth modules. The Fn+* key seemed to only affect Wi-Fi under Ubuntu.

    I have been successfully using a Magic Mouse with my later-added built-in Bluetooth module/adapter on both Windows and Ubuntu.

    I have been trying to tweak the Magic Mouse scrolling speed with commands like rmmod something, modprobe hid_magicmouse --scroll_speed=45 --scroll_acceleration=30 or something, and then added a file /etc/modprobe.d/magicmouse.conf. The mouse seemed to be working fine with these changes.

    Now if I run commands like hcitool dev, the shell tells me that I do not have any "Devices" or "adapters". I seem to have BlueZ installed, because when I type "blue" and then tab-autocomplete, a bunch of commands like bluez-test-device pop up.

    -- update -- some commands and their results:

        easoncxz@eason-Aspire-4741-ubuntu:/etc$ hcitool dev
        Devices:
        easoncxz@eason-Aspire-4741-ubuntu:/etc$ hcitool scan
        Device is not available: No such device
        easoncxz@eason-Aspire-4741-ubuntu:/etc$ rfkill list
        0: phy0: Wireless LAN
            Soft blocked: no
            Hard blocked: no
        1: acer-wireless: Wireless LAN
            Soft blocked: no
            Hard blocked: no
        2: acer-bluetooth: Bluetooth
            Soft blocked: no
            Hard blocked: no
        easoncxz@eason-Aspire-4741-ubuntu:/etc$ rfkill list
        0: phy0: Wireless LAN
            Soft blocked: yes
            Hard blocked: no
        1: acer-wireless: Wireless LAN
            Soft blocked: yes
            Hard blocked: no
        2: acer-bluetooth: Bluetooth
            Soft blocked: yes
            Hard blocked: no
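
    Worth noting for anyone triaging this: the second rfkill listing above shows everything soft-blocked, and the standard way to clear a soft block is:

        $ sudo rfkill unblock bluetooth

    That only addresses the soft block; if hcitool dev still reports no adapter afterwards, the module itself is not being detected.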

    Read the article

  • Screen Resolution stuck at 640x480 after installing Bumblebee

    - by Saurabh Agarwal
    I have a Dell XPS 15z laptop. As you can see here, there are some issues with NVIDIA drivers. The site recommends installation of Bumblebee (instructions given in the link). I am posting them again for ease:

        $ sudo add-apt-repository ppa:bumblebee/stable
        $ sudo apt-get update && sudo apt-get upgrade
        $ sudo apt-get install bumblebee bumblebee-nvidia
        $ sudo usermod -a -G bumblebee $USER

    After restarting the computer, however, the screen resolution was stuck at 640x480 and I got the following error message as soon as I logged in:

        Could not apply the stored configuration for monitors
        none of the selected modes were compatible with the possible modes:
        Trying modes for CRTC 63
        CRTC 63: trying mode 640x480@60Hz with output at 1366x768@60Hz (pass 0)
        CRTC 63: trying mode 640x480@60Hz with output at 1366x768@60Hz (pass 1)
        Trying modes for CRTC 64
        CRTC 64: trying mode 640x480@60Hz with output at 1366x768@60Hz (pass 0)
        CRTC 64: trying mode 640x480@60Hz with output at 1366x768@60Hz (pass 1)

    Prior to the update the display was absolutely normal, so there is no doubt about the cause (although there was no support for the graphics drivers). In case it helps: some features of the graphics drivers seem to be functional after Bumblebee, i.e. everything is in order except for the resolution. And if the resolution can't be fixed, please suggest a way to revert the changes so that at least the prior state can be restored. Any help in the matter would be highly appreciated.
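
    Until the driver problem itself is solved, the stock X tools can usually re-add a panel's native mode by hand; a sketch (the output name LVDS1 is a guess, so check what xrandr lists first):

        $ xrandr                          # note the connected output's name
        $ cvt 1366 768 60                 # prints a Modeline for 1366x768 @ 60Hz
        $ xrandr --newmode "1366x768_60.00" <timings copied from cvt's output>
        $ xrandr --addmode LVDS1 1366x768_60.00
        $ xrandr --output LVDS1 --mode 1366x768_60.00

    This only affects the running session and does not fix the underlying driver conflict.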

    Read the article

  • Demystifying "chunked level of detail"

    - by Caius Eugene
    I've just recently been trying to make sense of implementing a chunked level of detail system in Unity. I'm going to be generating four mesh planes, each with a height map, but I guess that isn't too important at the moment. I have a lot of questions after reading up about this technique; I hope this isn't too much to ask all in one go, but I would be extremely grateful for someone to help me make sense of it.

    1: I can't understand at which point down the chunked LOD pipeline the mesh gets split into chunks. Is this during the initial mesh generation, or is there a separate algorithm which does this?

    2: I understand that a quadtree data structure is used to store the chunked LOD data. I think I'm missing the point a bit, but is the quadtree storing vertex and triangle data for each subdivision level?

    3a: How is the camera distance usually calculated? When reading up about quadtrees, axis-aligned bounding boxes are mentioned a lot. In this case, would each chunk have a collision bounding box to detect that the camera or player is nearby? Or is there a better way of doing this? (a raycast maybe?)

    3b: Do the chunks calculate the camera distance themselves?

    4: Does each chunk have the same "resolution"? For example, at the top level the mesh will be 32x32; will each subdivided node also be 32x32?
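
    On 3a/3b, the usual pattern is a single top-down traversal that measures the camera's distance to each node's bounds and recurses while more detail is warranted; chunks don't watch for the camera themselves, and no physics colliders are needed. A small runnable sketch (plain Python rather than Unity C#; bounds are simplified to centre + radius, and all names here are illustrative):

        from dataclasses import dataclass, field

        @dataclass
        class Chunk:
            center: tuple          # bounding sphere centre (a simplification of an AABB)
            radius: float
            split_distance: float  # refine when the camera is closer than this
            children: list = field(default_factory=list)

        def distance(a, b):
            return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

        def select_chunks(node, camera, out):
            # distance from the camera to the chunk's bounds
            d = max(0.0, distance(node.center, camera) - node.radius)
            if node.children and d < node.split_distance:
                # too close to show this node's coarse mesh: descend
                for child in node.children:
                    select_chunks(child, camera, out)
            else:
                out.append(node)  # draw this chunk at its stored resolution

    On question 4: typically yes, every chunk is the same small vertex grid (e.g. 32x32) stretched over a smaller world extent at each deeper level, which is what makes deeper nodes higher detail.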

    Read the article

  • Can my PowerMac G3 B&W really take a hard drive larger than 128GB?

    - by Josh Calvetti
    So it's a well-known fact that PowerMacs manufactured before 2002 cannot take a hard drive larger than 128GB. I have an old B&W that was running 10.4, and upon putting a 250GB drive inside, it told me that I had inserted a 128GB drive. That was expected. However, I recently decided to turn that machine into a Debian home file server. I shoved the 250GB drive inside, did some formatting, and now it tells me that it is a 250GB drive. Is this safe to use? Will all my data go corrupt after I've added more than 128GB of stuff? In case the specs are helpful: it's a 400MHz B&W, 1GB RAM, Rev. B.

    Read the article

  • Sharing Windows Store apps between accounts

    - by Klas Mellbourn
    In Windows 8 it seems natural to me that each person in a family has their own Microsoft account with which they log in. If you pay for an app on the Windows Store, you can install that same app on several computers using the same Microsoft account. Good. However, if several people, in this case my children, each have their own account on the same computer, they do not get access to apps bought on a sibling's account, even if the app has been installed on that same computer. Bad. (Compare this to iOS, where you are allowed to have several iPhones with different iCloud-connected accounts all using the same iTunes App Store account, which is perfect for a family where everyone can then use the same app, bought just once.) Is there any way to share apps between Microsoft accounts (e.g. members of the same family)? Alternatively, is there a way to run apps that are installed on a computer when you are logged in with a Microsoft account different from the one used when installing the app?

    Read the article

  • Virtual DNS recommended setup...

    - by luison
    Hi. We are new to virtualization, which we are setting up with Proxmox VE (OpenVZ + KVM). I am a bit lost about the recommended DNS forwarder configuration, especially in the OpenVZ (Virtuozzo-type) environment. Our intention was to have a small dnsmasq running in one of the VMs, acting as a backup DHCP server and serving our in-office local addresses (and PCs) via an additional resolv.conf-style file, which dnsmasq supports; but I've read that all VMs should share DNS pointing to the host machine, in which case it would make more sense to have it there. My problem is that I would like to have as few apps as possible on the host, so that a reinstall of the environment (Proxmox VE) and a machine restore can be as quick as possible. Does anyone have a similar setup? Does it make sense to have the first virtual machine run the local DNS forwarder? Also... dnsmasq seems to want root permissions when running in an OpenVZ container... does anyone know of any workarounds for that?
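
    For reference, a minimal sketch of the dnsmasq pieces mentioned above (the option names are standard dnsmasq; the file paths and address range are hypothetical):

        # /etc/dnsmasq.conf
        resolv-file=/etc/resolv.dnsmasq               # upstream resolvers to forward to
        addn-hosts=/etc/hosts.office                  # extra hosts file with the office PCs
        dhcp-range=192.168.1.100,192.168.1.200,12h    # backup DHCP pool

    Keeping this configuration on the dnsmasq VM rather than the Proxmox host fits the stated goal of keeping the host minimal and quick to rebuild.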

    Read the article

  • Agile Testing Days 2012 – Day 2 – Learn through disagreement

    - by Chris George
    I think I was in the right place! During Day 1 I kept on reading tweets about the Lean Coffee that had happened earlier that morning. It intrigued me, and I figured in for a penny, in for a pound, so I set my alarm for 6:45am. Following the awards night the night before, it was _really_ hard getting up when it went off, but I did, and after a very early breakfast set off for the 10 min walk to the Dorint. With Lean Coffee due to start at 07:30, I arrived at the hotel and made my way to one of the hotel bars. I soon realised I was in the right place: although the bar was empty, there was a table with post-its and pens! This MUST be the place! The premise of Lean Coffee is to have several small timeboxed discussions. Everyone writes down what they would like to discuss on post-its, which are then briefly explained and submitted to the pile. Once everyone is done, the group dot-votes on the topics. The topics are then sorted by the dot-vote counts and the discussions begin. Each discussion had 8 mins to start with, which prevented the discussions getting too far off topic. After the time elapsed, the group voted on whether to extend the discussion by a further 4 mins or move on. Several discussions were had around training, soft skills, etc. The conversations were really interesting and there were quite a few good ideas. Overall it was a very enjoyable experience, certainly worth the early start!

    Make Melly Happy

    Following Lean Coffee was real coffee, and much needed it was! The first keynote of the day was "Let's help Melly (Changing Work into Life)" by Jurgen Appelo. This was a very interesting presentation, and it set the day nicely. The theme of the keynote was that projects are about the people, more so than the actual tasks. He started by showing a photo of an employee, 'Melly', who looked happy enough. He then stated that she looked happy but actually hated her job; in fact 50% of Americans hate their jobs, and he went on to say that, the world over, 50% of people hate their jobs. Jurgen talked about many ways to reduce the feedback cycle, not only of the project but of the people management: ideas such as happiness doors, happiness tracking (drawing lines on a wall indicating your happiness for that day), and kudo boxes (to compliment a colleague for good work). All of these (and more) ideas stimulate conversation amongst the team and lead to early detection of issues and investigation of solutions. I've massively simplified Jurgen's keynote and have certainly not done it justice, so I will post a link to the video once it's available.

    Following more coffee, the next talk was "How releasing faster changes testing" by Alexander Schwartz. This is a topic very close to our hearts at the moment, so I was eager to find out any juicy morsels that could help us achieve more frequent releases, and Alex did not disappoint. He started off by confirming something that I have been a firm believer in for a number of years now: adding more people can do more harm than good when trying to release. This is for a number of reasons, but just adding new people to a team at such a critical time can be more of a drain on resources than a help. The alternative is for the whole team to share responsibility for faster delivery, so the whole team is responsible for quality and testing. Obviously you will have the test engineers on the project who have the specialist skills, but there is no reason that the entire team cannot do exploratory testing on the product.
    This links nicely with the developer exploratory testing presented by Sigge on Day 1, and is certainly something that my team are really striving towards. Focus on cycle time: what can be done to reduce the time between dev cycles and release cycles? What stops a release? What delays a release? All good solid questions that can be answered. Alex suggested that perhaps the product doesn't need to be fully tested: doing less testing will reduce the cycle time and therefore get the release out faster. He suggested a risk-based approach to planning what testing needs to happen. Reducing testing could have an impact on revenue if it causes harm to customers, so test the 'right stuff'! Determine a set of tests that are 'face saving' or 'smoke' tests; these cover the core functionality of the product and aim to prevent major embarrassment if those areas were to fail! Amongst many other very good points, Alex suggested that a good approach would be to release after every new feature is added: do a bit of work -> release, do some more work -> release. By releasing small increments of work, the impact on the customer of bugs being introduced is reduced.

    Red Pill, Blue Pill

    The second keynote of the day was "Adaptation and improvisation – but your weakness is not your technique" by Markus Gartner, and it proved to be another very good presentation. It started off quoting lines from The Matrix which relate to adapting, improvising, realisation and mastery. It had a lot of nerds in the room smiling! Markus went on to explain how, through deliberate practice (and a lot of it!), you can achieve mastery, but then you never stop learning. Through methods such as code retreats, testing dojos and workshops you can continually improve and learn. The code retreat idea was one that interested me: it involves pairing to write an automated test for, say, 45 mins, then deleting all the code, finding a different partner and writing the same test again! This is another keynote where the video will speak louder than anything I can write here! Markus did elaborate on something that Lisa and Janet had touched on yesterday whilst busting the myth that "Testers Must Code". Whilst it is true that to be a tester you don't need to code, it is becoming more common that there is a crossover happening, where more testers are coding and more programmers are testing. Markus made a special distinction between programmers and developers, as testers develop test code too, which helped to make that clear.

    "Extending Continuous Integration and TDD with Continuous Testing" by Jason Ayers was my next talk after lunch. We already do CI and a bit of TDD on my project team, so I was interested to see what this continuous testing thing was all about and whether it would actually work for us. At the start of the presentation I was of the opinion that it just would not work for us, because our tests are too slow, and that would be the case for many people. Jason started off by setting the scene and saying that those doing TDD spend 10-15% of their time waiting for tests to run. This can be reduced by testing less often, or by reducing the test time, but that increases the risk of introduced bugs not being spotted quickly. Therefore, in comes Continuous Testing (CT). CT systems run your unit tests whenever you save some code, and run them in the background so you can continue working. This is a really nice idea, but to do this your tests must be fast, independent and reliable.
    The latter two should be the case anyway, and the first is ideal but hard! Jason made several suggestions for making tests fast. Firstly, keep the scope of the test small; secondly, spin off any expensive tests into a suite which is run, perhaps, overnight, or at any rate outside of the CT system. So this started to change my mind: perhaps we could re-engineer our tests and continuously run the quick ones to give an element of coverage. This talk was very interesting, and I've already tried a couple of the tools mentioned on our product (Mighty Moose and NCrunch). Sadly, due to the way our solution is built, it currently doesn't work, but we will look at whether we can make it work, because this has the potential to be a mini game-changer for us.

    Using the wrong data

    The final keynote of the day was "Reinventing software quality" by Gojko Adzic. He opened the talk with the statement "We've got quality wrong because we are using the wrong data"! Gojko then went on to explain that we should judge a bug by whether the customer cares about it, not by whether we think it's important. Why spend time fixing issues that the customer just wouldn't care about, and release months later because of this? Surely it's better to release now and get customer feedback? This was another reference to the idea that it's better to build the right thing wrong than the wrong thing right. Get feedback early to make sure you're making the right thing. Gojko then showed a hierarchy of quality which was very analogous to Maslow's hierarchy of needs:

    Successful – does it contribute to the business?
    Useful – does it do what the user wants?
    Usable – does it do what it's supposed to without breaking?
    Performant/Secure – is it secure, and is the performance acceptable?
    Deployable/Functionally OK – can it be deployed without breaking?

    He then explained that user stories should focus on change. In other words, they should focus on the user's needs, not the user's process. Describe what the change will be, how that change will happen, then measure it!

    Networking and Beer

    Following the day's closing keynote, there were drinks and nibbles for the 'networking' evening. This was a great opportunity to talk to people. I find approaching strangers very uncomfortable, but once again, when in Rome! Pete Walen and I had a long conversation about only fixing issues that the customer cares about versus fixing issues that make you proud of your software. Without saying much, and by asking the right questions, Pete made me re-evaluate my thoughts on the matter. Clever, very clever! Oh, and he 'bought' me a beer!

    My takeaway triple from Day 2:
    - Release small and release often, to minimise issues creeping in and to get faster feedback from 'the real world'.
    - Focus on issues that the customers care about, not what we think is important.
    - It's okay to disagree with someone, even if they are well-respected agile testing gurus; that's how discussion and learning happen!

    Read the article

  • How to edit a semi-plaintext file while maintaining character structure?

    - by Raul
    I am using software (Groupmail from Infacta) that saves some settings using exact/absolute %PATHS% in a specific semi-plaintext file. This is a really bad idea, because you can't move the USER folder, and, as in my case, it stops working after migrating to a new computer with a different language. For example, C:\Documents and Settings\USER\Local Settings\Application Data\Infacta is different from C:\Documents and Settings\USER\Configuración local\Datos de programa\Infacta. Obviously, the software does not work well. I tried to solve this by find/replacing the new path with Notepad++. While Groupmail loads and shows the settings correctly, it fails when trying to save data to that file. I guess this is because the length or number of replaced characters is different, which also corrupts the file. Please could you help me edit this file while maintaining its integrity and structure?

    Read the article

  • IP KVM switch, or serial console box for remote admin?

    - by grahzny
    We have a small server farm (11 now, and we may add more in the future) of HP ProLiant DL160 G6s. They all run either Linux (server only, no X11) or VMware ESX. We had intended to get models with iLO, in case BIOS-level remote admin became an issue, but that didn't happen. An IP KVM switch was recommended to me (along with some sort of remote-reboot hardware). I've since realized that none of our machines need GUI administration, so perhaps a serial console switch would be a cheaper and more appropriate option, something like this: http://www.kvm-switches-online.com/serimux-cs-32.html. Do you folks have opinions on which is the better choice? Should we go for the ease of setup (plug and go, instead of turning on the feature in the BIOS and making sure the serial settings are correct) and the flexibility of an IP KVM switch, even with the extra cost? Or is a serial console switch just fine?

    Read the article

  • Tuning B2B Server Engine Threads in SOA Suite 11g

    - by Shub Lahiri, A-Team
    Background: B2B 11g has a number of parameters that can be tweaked to tune the engine for handling high volumes of messages. These parameters are also known as B2B server properties and are managed via the EM console. This note highlights one aspect of the tuning exercise and describes the different threads that can be configured to tune the performance of a B2B server.

    Symptoms: The most common indicator of a B2B engine in need of tuning is a constant build-up of messages in an internal JMS queue within the B2B server. It is called B2B_EVENT_QUEUE and can be monitored via the WebLogic server console. Whenever such behaviour is seen, it usually results in a general degradation of performance.

    Remedy: There could be many contributing factors behind a B2B server's degradation of performance. However, one of the first things to tune away from the out-of-the-box default configuration is the number of internal engine threads allocated within the B2B server. The default configuration for the B2B server engine threads is usually not suitable for high-volume messaging loads, so it is necessary to increase the counts for three types of threads by specifying the appropriate B2B server properties via the EM console, namely:

    Inbound - b2b.inboundThreadCount
    Outbound - b2b.outboundThreadCount
    Default - b2b.defaultThreadCount

    The function of these threads is fairly self-explanatory: the inbound threads process the inbound messages coming into the B2B server from an external endpoint, and the outbound threads process the messages sent out from the B2B server. The default threads are responsible for certain B2B server-specific special tasks. If the inbound and outbound thread counts are not specified, the default thread count also dictates the total number of inbound and outbound threads. As in any tuning exercise, the optimisation of these threads is usually reached via an iterative process. The best working combination of thread counts is directly related to the system infrastructure, the traffic load and several other environmental factors.

    Read the article
