Search Results

Search found 3479 results on 140 pages for 'chris'.


  • Solution for installing the ADF 11.1.1.6.0 Runtimes onto a standalone WLS 10.3.6

    - by Chris Muir
    A few customers are hitting the following problem with regards to JDeveloper 11.1.1.6.0, so it's worth documenting a solution. As noted in my previous post, the ADF Runtimes for JDeveloper 11.1.1.6.0 will work against a 10.3.5 and 10.3.6 WLS server. In terms of the JDeveloper 11.1.1.6.0 download, Oracle has coupled the 10.3.5 server with that release, not a 10.3.6 server. This has caught some customers out, as they are attempting to use the JDeveloper installer to load the 11.1.1.6.0 ADF Runtimes into the middleware home of a standalone 10.3.6 WLS server. When doing so the installer complains as follows: "The product maintenance level of the current server (WebLogic Server: 10.3.5.0) is not compatible with the maintenance level of the product installed on your system (WebLogic Server: 10.3.6.0). Please obtain a compatible installer or perform maintenance on your current system to achieve the desired level." The solution is to install the runtimes using the standalone 11.1.1.6.0 ADF Runtime Libraries installer available from OTN (see the options under "Application Development Runtime").

    Read the article

  • Strange Recurrent Excessive I/O Wait

    - by Chris
    I know quite well that I/O wait has been discussed multiple times on this site, but all the other topics seem to cover constant I/O latency, while the I/O problem we need to solve on our server occurs at irregular (short) intervals, but is ever-present, with massive spikes of up to 20k ms await and service times of 2 seconds. The disk affected is /dev/sdb (Seagate Barracuda, for details see below). A typical iostat -x output would at times look like this, which is an extreme sample but by no means rare:

    iostat (Oct 6, 2013)
    tps     rd_sec/s  wr_sec/s  avgrq-sz  avgqu-sz  await     svctm    %util
    0.00    0.00      0.00      0.00      0.00      0.00      0.00     0.00
    0.00    0.00      0.00      0.00      0.00      0.00      0.00     0.00
    16.00   0.00      156.00    9.75      21.89     288.12    36.00    57.60
    5.50    0.00      44.00     8.00      48.79     2194.18   181.82   100.00
    2.00    0.00      16.00     8.00      46.49     3397.00   500.00   100.00
    4.50    0.00      40.00     8.89      43.73     5581.78   222.22   100.00
    14.50   0.00      148.00    10.21     13.76     5909.24   68.97    100.00
    1.50    0.00      12.00     8.00      8.57      7150.67   666.67   100.00
    0.50    0.00      4.00      8.00      6.31      10168.00  2000.00  100.00
    2.00    0.00      16.00     8.00      5.27      11001.00  500.00   100.00
    0.50    0.00      4.00      8.00      2.96      17080.00  2000.00  100.00
    34.00   0.00      1324.00   9.88      1.32      137.84    4.45     59.60
    0.00    0.00      0.00      0.00      0.00      0.00      0.00     0.00
    22.00   44.00     204.00    11.27     0.01      0.27      0.27     0.60

    Let me provide you with some more information regarding the hardware. It's a Dell 1950 III box with Debian as OS, where uname -a reports the following: Linux xx 2.6.32-5-amd64 #1 SMP Fri Feb 15 15:39:52 UTC 2013 x86_64 GNU/Linux. The machine is a dedicated server that hosts an online game without any databases or I/O-heavy applications running. The core application consumes about 0.8 of the 8 GBytes RAM, and the average CPU load is relatively low. The game itself, however, reacts rather sensitively to I/O latency, and thus our players experience massive ingame lag, which we would like to address as soon as possible.
    iostat:
    avg-cpu:  %user  %nice  %system  %iowait  %steal  %idle
              1.77   0.01   1.05     1.59     0.00    95.58

    Device:  tps    Blk_read/s  Blk_wrtn/s  Blk_read   Blk_wrtn
    sdb      13.16  25.42       135.12      504701011  2682640656
    sda      1.52   0.74        20.63       14644533   409684488

    Uptime is: 19:26:26 up 229 days, 17:26, 4 users, load average: 0.36, 0.37, 0.32

    Harddisk controller: 01:00.0 RAID bus controller: LSI Logic / Symbios Logic MegaRAID SAS 1078 (rev 04)

    Harddisks:
    Array 1, RAID-1, 2x Seagate Cheetah 15K.5 73 GB SAS
    Array 2, RAID-1, 2x Seagate ST3500620SS Barracuda ES.2 500GB 16MB 7200RPM SAS

    Partition information from df:
    Filesystem  1K-blocks  Used      Available  Use%  Mounted on
    /dev/sdb1   480191156  30715200  425083668  7%    /home
    /dev/sda2   7692908    437436    6864692    6%    /
    /dev/sda5   15377820   1398916   13197748   10%   /usr
    /dev/sda6   39159724   19158340  18012140   52%   /var

    Some more data samples generated with iostat -dx sdb 1 (Oct 11, 2013):
    Device:  rrqm/s  wrqm/s  r/s   w/s     rsec/s  wsec/s   avgrq-sz  avgqu-sz  await    svctm    %util
    sdb      0.00    15.00   0.00  70.00   0.00    656.00   9.37      4.50      1.83     4.80     33.60
    sdb      0.00    0.00    0.00  2.00    0.00    16.00    8.00      12.00     836.00   500.00   100.00
    sdb      0.00    0.00    0.00  3.00    0.00    32.00    10.67     9.96      1990.67  333.33   100.00
    sdb      0.00    0.00    0.00  4.00    0.00    40.00    10.00     6.96      3075.00  250.00   100.00
    sdb      0.00    0.00    0.00  0.00    0.00    0.00     0.00      4.00      0.00     0.00     100.00
    sdb      0.00    0.00    0.00  2.00    0.00    16.00    8.00      2.62      4648.00  500.00   100.00
    sdb      0.00    0.00    0.00  0.00    0.00    0.00     0.00      2.00      0.00     0.00     100.00
    sdb      0.00    0.00    0.00  1.00    0.00    16.00    16.00     1.69      7024.00  1000.00  100.00
    sdb      0.00    74.00   0.00  124.00  0.00    1584.00  12.77     1.09      67.94    6.94     86.00

    Characteristic charts generated with rrdtool can be found here:
    iostat plot 1, 24 min interval: http://imageshack.us/photo/my-images/600/yqm3.png/
    iostat plot 2, 120 min interval: http://imageshack.us/photo/my-images/407/griw.png/

    As we have a rather large cache of 5.5 GBytes, we thought it might be a good idea to test whether the I/O wait spikes were perhaps caused by cache miss events. Therefore, we did a sync and then this to flush the cache and buffers: echo 3 > /proc/sys/vm/drop_caches. Directly afterwards the I/O wait and service times virtually went through the roof, and everything on the machine felt like slow motion. During the next few hours the latency recovered and everything was as before - small to medium lags at short, unpredictable intervals. Now my question is: does anybody have any idea what might cause this annoying behaviour? Is it the first indication of the disk array or the RAID controller dying, or something that can be easily mended by rebooting? (At the moment we're very reluctant to do this, however, because we're afraid that the disks might not come back up again.) Any help is greatly appreciated. Thanks in advance, Chris. Edited to add: we do see one or two processes go to 'D' state in top, one of which, rather frequently, seems to be kjournald. If I'm not mistaken, however, this does not indicate the processes causing the latency, but rather those affected by it - correct me if I'm wrong. Does the information about uninterruptibly sleeping processes help us in any way to address the problem?
    @Andy Shinn requested smartctl data, here it is.

    smartctl -a -d megaraid,2 /dev/sdb yields:

    smartctl 5.40 2010-07-12 r3124 [x86_64-unknown-linux-gnu] (local build)
    Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net
    Device: SEAGATE ST3500620SS Version: MS05
    Serial number:
    Device type: disk
    Transport protocol: SAS
    Local Time is: Mon Oct 14 20:37:13 2013 CEST
    Device supports SMART and is Enabled
    Temperature Warning Disabled or Not Supported
    SMART Health Status: OK
    Current Drive Temperature: 20 C
    Drive Trip Temperature: 68 C
    Elements in grown defect list: 0
    Vendor (Seagate) cache information
      Blocks sent to initiator = 1236631092
      Blocks received from initiator = 1097862364
      Blocks read from cache and sent to initiator = 1383620256
      Number of read and write commands whose size <= segment size = 531295338
      Number of read and write commands whose size > segment size = 51986460
    Vendor (Seagate/Hitachi) factory information
      number of hours powered up = 36556.93
      number of minutes until next internal SMART test = 32

    Error counter log:
             Errors Corrected by ECC   rereads/  Total errors  Correction algorithm  Gigabytes processed  Total uncorrected
             fast        delayed       rewrites  corrected     invocations           [10^9 bytes]         errors
    read:    509271032   47            0         509271079     509271079             20981.423            0
    write:   0           0             0         0             0                     5022.039             0
    verify:  1870931090  196           0         1870931286    1870931286            100558.708           0

    Non-medium error count: 0

    SMART Self-test log
    Num  Test Description  Status     segment number  LifeTime (hours)  LBA_first_err  [SK ASC ASQ]
    # 1  Background short  Completed  16              36538             -              [- - -]
    # 2  Background short  Completed  16              36514             -              [- - -]
    # 3  Background short  Completed  16              36490             -              [- - -]
    # 4  Background short  Completed  16              36466             -              [- - -]
    # 5  Background short  Completed  16              36442             -              [- - -]
    # 6  Background long   Completed  16              36420             -              [- - -]
    # 7  Background short  Completed  16              36394             -              [- - -]
    # 8  Background short  Completed  16              36370             -              [- - -]
    # 9  Background long   Completed  16              36364             -              [- - -]
    #10  Background short  Completed  16              36361             -              [- - -]
    #11  Background long   Completed  16              2                 -              [- - -]
    #12  Background short  Completed  16              0                 -              [- - -]
    Long (extended) Self Test duration: 6798 seconds [113.3 minutes]

    smartctl -a -d megaraid,3 /dev/sdb yields:

    smartctl 5.40 2010-07-12 r3124 [x86_64-unknown-linux-gnu] (local build)
    Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net
    Device: SEAGATE ST3500620SS Version: MS05
    Serial number:
    Device type: disk
    Transport protocol: SAS
    Local Time is: Mon Oct 14 20:37:26 2013 CEST
    Device supports SMART and is Enabled
    Temperature Warning Disabled or Not Supported
    SMART Health Status: OK
    Current Drive Temperature: 19 C
    Drive Trip Temperature: 68 C
    Elements in grown defect list: 0
    Vendor (Seagate) cache information
      Blocks sent to initiator = 288745640
      Blocks received from initiator = 1097848399
      Blocks read from cache and sent to initiator = 1304149705
      Number of read and write commands whose size <= segment size = 527414694
      Number of read and write commands whose size > segment size = 51986460
    Vendor (Seagate/Hitachi) factory information
      number of hours powered up = 36596.83
      number of minutes until next internal SMART test = 28

    Error counter log:
             Errors Corrected by ECC   rereads/  Total errors  Correction algorithm  Gigabytes processed  Total uncorrected
             fast        delayed       rewrites  corrected     invocations           [10^9 bytes]         errors
    read:    610862490   44            0         610862534     610862534             20470.133            0
    write:   0           0             0         0             0                     5022.480             0
    verify:  2861227413  203           0         2861227616    2861227616            100872.443           0

    Non-medium error count: 1

    SMART Self-test log
    Num  Test Description  Status     segment number  LifeTime (hours)  LBA_first_err  [SK ASC ASQ]
    # 1  Background short  Completed  16              36580             -              [- - -]
    # 2  Background short  Completed  16              36556             -              [- - -]
    # 3  Background short  Completed  16              36532             -              [- - -]
    # 4  Background short  Completed  16              36508             -              [- - -]
    # 5  Background short  Completed  16              36484             -              [- - -]
    # 6  Background long   Completed  16              36462             -              [- - -]
    # 7  Background short  Completed  16              36436             -              [- - -]
    # 8  Background short  Completed  16              36412             -              [- - -]
    # 9  Background long   Completed  16              36404             -              [- - -]
    #10  Background short  Completed  16              36401             -              [- - -]
    #11  Background long   Completed  16              2                 -              [- - -]
    #12  Background short  Completed  16              0                 -              [- - -]
    Long (extended) Self Test duration: 6798 seconds [113.3 minutes]
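    (A small diagnostic sketch, not from the original question: logging D-state processes next to sdb's per-interval await can show whether kjournald is the victim or the trigger of the spikes. The log path and one-second interval are arbitrary choices; assumes sysstat is installed.)

        #!/bin/sh
        # Sample uninterruptible (D-state) processes alongside sdb's await once a second.
        DISK=sdb
        LOG=/tmp/dstate-trace.log
        while true; do
            date >> "$LOG"
            # Processes currently blocked in uninterruptible sleep (e.g. kjournald)
            ps -eo state,pid,comm | awk '$1 == "D"' >> "$LOG"
            # Second report of a 1-second extended sample = stats for the current interval
            iostat -dx "$DISK" 1 2 | tail -4 >> "$LOG"
            sleep 1
        done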

    Read the article

  • Embracing Community

    - by Chris Williams
    I just put the finishing touches on another article for my Code Magazine column: Embracing Community. You won't see this one until around July, but it focuses on a subject near and dear to my heart: Code Camps! At the end of the article, I mention that I'm interested in hearing some of your war stories about community and what you do to be a part of it. I'll be talking to people at Tech Ed 2010 and Codestock, but I would also like to hear from some of you that read this blog. If you have an interesting story to share, drop me a line (via this blog) and tell me about it. You never know, it just might end up in my column.

    Read the article

  • 5 Ways To Try Out and Install Ubuntu On Your Computer

    - by Chris Hoffman
    Want to try out Ubuntu, but not sure where to start? There are lots of ways to try out Ubuntu – you can even install it on Windows and uninstall it from your Control Panel if you don’t like it. Ubuntu can be booted from a USB or CD drive and used without installation, installed under Windows with no partitioning required, run in a window on your Windows desktop, or installed alongside Windows on your computer.

    Read the article

  • Why Solid-State Drives Slow Down As You Fill Them Up

    - by Chris Hoffman
    The benchmarks are clear: Solid-state drives slow down as you fill them up. Fill your solid-state drive to near-capacity and its write performance will decrease dramatically. The reason why lies in the way SSDs and NAND Flash storage work. Filling the drive to capacity is one of the things you should never do with a solid-state drive. A nearly full solid-state drive will have much slower write operations, slowing down your computer.    

    Read the article

  • How to Play PC Games on Your TV

    - by Chris Hoffman
    No need to wait for Valve’s Steam Machines — connect your Windows gaming PC to your TV and use powerful PC graphics in the living room today. It’s easy — you don’t need any unusual hardware or special software. This is ideal if you’re already a PC gamer who wants to play your games on a larger screen. It’s also convenient if you want to play multiplayer PC games with controllers in your living room.

    HDMI Cables and Controllers

    You’ll need an HDMI cable to connect your PC to your television. This requires a TV with HDMI-in, a PC with HDMI-out, and an HDMI cable. Modern TVs and PCs have had HDMI built in for years, so you should already be good to go. If you don’t have a spare HDMI cable lying around, you may have to buy one or repurpose one of your existing HDMI cables. Just don’t buy the expensive HDMI cables — even a cheap HDMI cable will work just as well as a more expensive one. Plug one end of the HDMI cable into the HDMI-out port on your PC and one end into the HDMI-in port on your TV. Switch your TV’s input to the appropriate HDMI port and you’ll see your PC’s desktop appear on your TV. Your TV becomes just another external monitor. If you have your TV and PC far away from each other in different rooms, this won’t work. If you have a reasonably powerful laptop, you can just plug that into your TV — or you can unplug your desktop PC and hook it up next to your TV. Now you’ll just need an input device. You probably don’t want to sit directly in front of your TV with a wired keyboard and mouse! A wireless keyboard and wireless mouse can be convenient and may be ideal for some games. However, you’ll probably want a game controller like console players use. Better yet, get multiple game controllers so you can play local-multiplayer PC games with other people. The Xbox 360 controller is the ideal controller for PC gaming. Windows supports these controllers natively, and many PC games are designed specifically for these controllers. Note that Xbox One controllers aren’t yet supported on Windows because Microsoft hasn’t released drivers for them. Yes, you could use a third-party controller or go through the process of pairing a PlayStation controller with your PC using unofficial tools, but it’s better to get an Xbox 360 controller. Just plug one or more Xbox controllers into your PC’s USB ports and they’ll work without any setup required. While many PC games support controllers, bear in mind that some games require a keyboard and mouse.

    A TV-Optimized Interface

    Use Steam’s Big Picture interface to more easily browse and launch games. This interface was designed for use on a television with controllers and even has an integrated web browser you can use with your controller. It will be used on Valve’s Steam Machine consoles as the default TV interface. You can use a mouse with it too, of course. There’s also nothing stopping you from just using your Windows desktop with a mouse and keyboard — aside from how inconvenient it will be. To launch Big Picture Mode, open Steam and click the Big Picture button at the top-right corner of your screen. You can also press the glowing Xbox logo button in the middle of an Xbox 360 Controller to launch the Big Picture interface if Steam is open.

    Another Option: In-Home Streaming

    If you want to leave your PC in one room of your home and play PC games on a TV in a different room, you can consider using local streaming to stream games over your home network from your gaming PC to your television.
    Bear in mind that the game won’t be as smooth and responsive as it would be if you were sitting in front of your PC. You’ll also need a modern router with fast wireless network speeds to keep up with the game streaming. Steam’s built-in In-Home Streaming feature is now available to everyone. You could plug a laptop with less-powerful graphics hardware into your TV and use it to stream games from your powerful desktop gaming rig. You could also use an older desktop PC you have lying around. To stream a game, log into Steam on your gaming PC and log into Steam with the same account on another computer on your home network. You’ll be able to view the library of installed games on your other PC and start streaming them. NVIDIA also has their own GameStream solution that allows you to stream games from a PC with powerful NVIDIA graphics hardware. However, you’ll need an NVIDIA Shield handheld gaming console to do this. At the moment, NVIDIA’s game streaming solution can only stream to the NVIDIA Shield. However, the NVIDIA Shield device can be connected to your TV so you can play that streaming game on your TV. Valve’s Steam Machines are supposed to bring PC gaming to the living room and they’ll do it using HDMI cables, a custom Steam controller, the Big Picture interface, and in-home streaming for compatibility with Windows games. You can do all of this yourself today — you’ll just need an Xbox 360 controller instead of the not-yet-released Steam controller. Image Credit: Marco Arment on Flickr, William Hook on Flickr, Lewis Dowling on Flickr

    Read the article

  • HTG Explains: How Windows 8's Secure Boot Feature Works & What It Means for Linux

    - by Chris Hoffman
    Whether you plan on using Windows 8 or not, everyone buying a PC in the future will end up with the Microsoft-driven Secure Boot feature enabled. Secure Boot prevents “unauthorized” operating systems and software from loading during the startup process. Secure Boot is a feature enabled by UEFI – which replaces the traditional PC BIOS – but Microsoft mandates specific implementations for x86 (Intel) and ARM PCs. Any computer with a Windows 8 logo sticker has Secure Boot enabled. Image Credit: Kiwi Flickr

    Read the article

  • How can I unit test a class which requires a web service call?

    - by Chris Cooper
    I'm trying to test a class which calls some Hadoop web services. The code is pretty much of the form: method() { ...use Jersey client to create WebResource... ...make request... ...do something with response... } e.g. there is a create directory method, a create folder method etc. Given that the code is dealing with an external web service that I don't have control over, how can I unit test this? I could try and mock the web service client/responses but that breaks the guideline I've seen a lot recently: "Don't mock objects you don't own". I could set up a dummy web service implementation - would that still constitute a "unit test" or would it then be an integration test? Is it just not possible to unit test at this low a level - how would a TDD practitioner go about this?
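    (One common answer, sketched below with entirely hypothetical names: put a thin adapter that you do own between your logic and the Jersey client, unit test against a fake of that adapter, and cover the real adapter separately with an integration test against a dummy service.)

        import static org.junit.Assert.assertTrue;
        import org.junit.Test;

        // Thin boundary interface that we own; the production implementation
        // would delegate to the Jersey client, while tests supply a fake.
        interface HadoopGateway {
            String createDirectory(String path); // returns the raw JSON response
        }

        class DirectoryService {
            private final HadoopGateway gateway;
            DirectoryService(HadoopGateway gateway) { this.gateway = gateway; }

            // The "...do something with response..." part, now testable in isolation
            boolean mkdir(String path) {
                return gateway.createDirectory(path).contains("\"boolean\":true");
            }
        }

        public class DirectoryServiceTest {
            @Test
            public void mkdirReportsSuccessFromResponse() {
                HadoopGateway fake = path -> "{\"boolean\":true}"; // Java 8 lambda fake
                assertTrue(new DirectoryService(fake).mkdir("/tmp/data"));
            }
        }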

    Read the article

  • Learning about C Bitshift Operators

    - by Chris Hammond
    So I was doing some reading tonight on my Nerdkit. I had planned to actually do some playing around with it, but decided just to read a bit. I’ve never coded in C; I did C++ in college (not very well) and do most of my development in C# these days (when I’m doing code, mostly for fun). While all similar, there are a few differences, so doing things in C is a learning experience. There were some practice questions for AND and OR using binary. Here are some examples. When comparing binary with AND ...(read more)
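    (Since the excerpt is cut off, here is a small sketch in the same spirit — my own examples, not the Nerdkit ones — showing what AND, OR and the shift operators do to the bits:)

        #include <stdio.h>

        int main(void) {
            unsigned char a = 0x2C;               /* binary 00101100 */
            unsigned char b = 0x0F;               /* binary 00001111 */

            printf("a & b  = 0x%02X\n", a & b);   /* 0x0C: only bits set in both   */
            printf("a | b  = 0x%02X\n", a | b);   /* 0x2F: bits set in either      */
            printf("a << 1 = 0x%02X\n", a << 1);  /* 0x58: shift left = times 2    */
            printf("a >> 2 = 0x%02X\n", a >> 2);  /* 0x0B: shift right = divide 4  */
            return 0;
        }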

    Read the article

  • Agile Testing Days 2012 – Day 2 – Learn through disagreement

    - by Chris George
    I think I was in the right place! During Day 1 I kept on reading tweets about the Lean Coffee that had happened earlier that morning. It intrigued me, and I figured in for a penny, in for a pound, so I set my alarm for 6:45am. Following the award night the night before, it was _really_ hard getting up when it went off, but I did, and after a very early breakfast set off for the 10 min walk to the Dorint. With Lean Coffee due to start at 07:30, I arrived at the hotel and made my way to one of the hotel bars. I soon realised I was in the right place, as although the bar was empty, there was a table with Post-its and pens! This MUST be the place! The premise of Lean Coffee is to have several small timeboxed discussions. Everyone writes down what they would like to discuss on Post-its that are then briefly explained and submitted to the pile. Once everyone is done, the group dot-votes on the topics. The topics are then sorted by the dot vote counts and the discussions begin. Each discussion had 8 mins to start with, which prevented the discussions getting too far off topic. After the time elapsed, the group voted on whether to extend the discussion by a further 4 mins or move on. Several discussions were had around training, soft skills etc. The conversations were really interesting and there were quite a few good ideas. Overall it was a very enjoyable experience, certainly worth the early start!

    Make Melly Happy

    Following Lean Coffee was real coffee, and much needed it was! The first keynote of the day was “Let’s help Melly (Changing Work into Life)” by Jurgen Appelo.

    Draw lines to track happiness

    This was a very interesting presentation, and set the day up nicely. The theme of the keynote was that projects are about the people, more so than the actual tasks. He started by showing a photo of an employee, ‘Melly’, who looked happy enough. He then stated that she looked happy but actually hated her job; in fact 50% of Americans hate their jobs. He went on to say that worldwide, over 50% of people hate their jobs. Jurgen talked about many ways to reduce the feedback cycle, not only of the project, but of the people management: ideas such as happiness doors, happiness tracking (drawing lines on a wall indicating your happiness for that day), and kudo boxes (to compliment a colleague for good work). All of these (and more) ideas stimulate conversation amongst the team and lead to early detection of issues and investigation of solutions. I’ve massively simplified Jurgen’s keynote and have certainly not done it justice, so I will post a link to the video once it’s available. Following more coffee, the next talk was “How releasing faster changes testing” by Alexander Schwartz. This is a topic very close to our hearts at the moment, so I was eager to find out any juicy morsels that could help us achieve more frequent releases, and Alex did not disappoint. He started off by confirming something that I have been a firm believer in for a number of years now: adding more people can do more harm than good when trying to release. This is for a number of reasons, but just adding new people to a team at such a critical time can drain more resources than it adds. The alternative is to have the whole team share responsibility for faster delivery, so the whole team is responsible for quality and testing. Obviously you will have the test engineers on the project who have the specialist skills, but there is no reason that the entire team cannot do exploratory testing on the product.
    This links nicely with the Developer Exploratory testing presented by Sigge on Day 1, and is certainly something that my team is really striving towards. Focus on cycle time: what can be done to reduce the time between dev cycles and release cycles? What stops a release? What delays a release? All good solid questions that can be answered. Alex suggested that perhaps the product doesn’t need to be fully tested. Doing less testing will reduce the cycle time and therefore get the release out faster. He suggested a risk-based approach to planning what testing needs to happen. Reducing testing could have an impact on revenue if it causes harm to customers, so test the ‘right stuff’! Determine a set of tests that are ‘face saving’ or ‘smoke’ tests. These tests cover the core functionality of the product and aim to prevent major embarrassment if these areas were to fail! Amongst many other very good points, Alex suggested that a good approach would be to release after every new feature is added: do a bit of work -> release, do some more work -> release. By releasing small increments of work, the impact on the customer of bugs being introduced is reduced.

    Red Pill, Blue Pill

    The second keynote of the day was “Adaptation and improvisation – but your weakness is not your technique” by Markus Gartner and proved to be another very good presentation. It started off quoting lines from The Matrix which relate to adapting, improvising, realisation and mastery. It had a lot of nerds in the room smiling! Markus went on to explain how through deliberate practice (and a lot of it!) you can achieve mastery, but then you never stop learning. Through methods such as code retreats, testing dojos and workshops you can continually improve and learn. The code retreat idea was one that interested me. It involved pairing to write an automated test for, say, 45 mins, then deleting all the code, finding a different partner and writing the same test again! This is another keynote where the video will speak louder than anything I can write here! Markus did elaborate on something that Lisa and Janet had touched on yesterday whilst busting the myth that “Testers Must Code”. Whilst it is true that to be a tester you don’t need to code, it is becoming more common that there is a crossover happening where more testers are coding and more programmers are testing. Markus made a special distinction between programmers and developers, as testers develop test code too, so this helped to make that clear. “Extending Continuous Integration and TDD with Continuous Testing” by Jason Ayers was my next talk after lunch. We already do CI and a bit of TDD on my project team, so I was interested to see what this continuous testing thing was all about and whether it would actually work for us. At the start of the presentation I was of the opinion that it just would not work for us because our tests are too slow, and that would be the case for many people. Jason started off by setting the scene and saying that those doing TDD spend between 10-15% of their time waiting for tests to run. This can be reduced by testing less often or reducing the test time, but this then increases the risk of introduced bugs not being spotted quickly. Therefore, in comes Continuous Testing (CT). CT systems run your unit tests whenever you save some code and run them in the background so you can continue working. This is a really nice idea, but to do this, your tests must be fast, independent and reliable.
    The latter two should be the case anyway, and the first is ideal, but hard! Jason made several suggestions for making tests fast. Firstly, keep the scope of the test small; secondly, spin off any expensive tests into a suite which is run, perhaps, overnight or outside of the CT system at any rate. So this started to change my mind: perhaps we could re-engineer our tests and continuously run the quick ones to give an element of coverage. This talk was very interesting and I’ve already tried a couple of the tools mentioned on our product (Mighty Moose and NCrunch). Sadly, due to the way our solution is built, it currently doesn’t work, but we will look at whether we can make this work because this has the potential to be a mini-game-changer for us.

    Using the wrong data

    Gojko’s Hierarchy of Quality

    The final keynote of the day was “Reinventing software quality” by Gojko Adzic. He opened the talk with the statement “We’ve got quality wrong because we are using the wrong data”! Gojko then went on to explain that we should judge a bug by whether the customer cares about it, not by whether we think it’s important. Why spend time fixing issues that the customer just wouldn’t care about, and release months later because of this? Surely it’s better to release now and get customer feedback? This was another reference to the idea that it’s better to build the right thing wrong than the wrong thing right. Get feedback early to make sure you’re making the right thing. Gojko then showed something very analogous to Maslow’s hierarchy of needs:

    Successful – does it contribute to the business?
    Useful – does it do what the user wants?
    Usable – does it do what it’s supposed to without breaking?
    Performant/Secure – is it secure / is the performance acceptable?
    Deployable/Functionally OK – can it be deployed without breaking?

    He then explained that User Stories should focus on change. In other words, they should focus on the user’s needs, not the user’s process. Describe what the change will be, how that change will happen, then measure it!

    Networking and Beer

    Following the day’s closing keynote, there were drinks and nibbles for the ‘Networking’ evening. This was a great opportunity to talk to people. I find approaching strangers very uncomfortable, but once again, when in Rome! Pete Walen and I had a long conversation about only fixing issues that the customer cares about versus fixing issues that make you proud of your software! Without saying much, and asking the right questions, Pete made me re-evaluate my thoughts on the matter. Clever, very clever! Oh, and he ‘bought’ me a beer!

    My Takeaway Triple from Day 2:
    - Release small and release often to minimize issues creeping in and get faster feedback from ‘the real world’.
    - Focus on issues that the customers care about, not what we think is important.
    - It’s okay to disagree with someone, even if they are well-respected agile testing gurus; that’s how discussion and learning happens!

    Read the article

  • Weblogs.asp.net has a problem, it is spam

    - by Chris Hammond
    Is anyone at Microsoft listening to the SPAM problem here on Weblogs.asp.net? My “Can anyone do anything about the spam here on weblogs.asp.net?” post from October got over 12 spam comments posted to it in the past 24 hours. I have comments all moderated, but that just means I have a crapload of work to do each time people comment. Also, when you click on a link from a comment notification email you are taken to an insecure-site warning due to an invalid SSL cert. We really just need some updates...(read more)

    Read the article

  • Root username is different to admin username

    - by Chris Poole
    I have somehow changed my root username, which seems to have caused my system to disallow me from mounting USB and CDROM drives. My normal username is jenchris; however, if I type su root (and enter the password) then it shows root@jenchris-H55M-UD2H:/home/jenchris# (PLEASE NOTE THE HASH AT THE END OF THE USERNAME!) I think I accidentally hit the hash key at some point whilst typing my username... This is causing huge problems as I have lost lots of permissions. Please can someone help?

    Read the article

  • How to Build Services from Legacy Applications

    - by Chris Falter
    The SOA consultants invaded the executive suite at your company or agency, preached the true religion, and converted the unbelievers. Now by divine imperative you must convert your legacy applications into a suite of reusable services.  But as usual, you lack the time and resources that you need in order to develop the services properly.  So you googled or bing’ed, found this blog post, and began crying in gratitude.  Yes, as the title implies, I am going to reveal my easy, 3-step, works-every-time process for converting silos of legacy applications into the inventory of services your CIO has been dreaming about.  So just close your eyes and count to 3 … now open them … and here it is…. Not. While wishful thinking is too often the coin of the IT realm, even the most naive practitioner knows that converting legacy applications into reusable services requires more than a magic wand.  The reason is simple: if your starting point is your legacy applications, then you will simply be bolting a web service technology layer on top of your legacy API.  And that legacy API is built in the image of the silo applications.  Enter the wide gate of the legacy API, follow the broad path of generating service interfaces from existing code, and you will arrive at the siloed enterprise destruction that you thought you were escaping.

    The Straight and Narrow Path

    This past week I had the opportunity to learn how the FBI Criminal Justice Information Systems department has been transitioning from silo applications to a service inventory.  Lafe Hutcheson, IT Specialist in the architecture group and fellow attendee at an SOA Architect Certification Workshop, was my guide.  Lafe has survived the chaos of an SOA initiative, so it is not surprising that he was able to return from a US Army deployment to Kabul, Afghanistan with nary a scratch.  According to Lafe, building their service inventory is a three-phase process:

    1. Model a business process.  This requires intense collaboration between the IT and business wings of the organization, of course.  The FBI uses IBM WebSphere tools to model the process with BPMN.
    2. Identify candidate services to facilitate the business process.
    3. Convert the BPMN to an executable BPEL orchestration, model and develop the services, and use a BPEL engine to run the process.  The FBI uses ActiveVOS for orchestration services.

    The 12 Step Program to End Your Legacy API Addiction

    Thomas Erl has documented a process for building a web service inventory that is quite similar to the FBI process. Erl’s process adds a technology architecture definition phase, which allows for the technology environment to influence the inventory blueprint.  For example, if you are using an enterprise service bus, you will probably not need to build your own utility services for logging or intermediate routing.  Erl also lists a service-oriented analysis phase that highlights the 12-step process of applying the principles of service orientation to modeling your services.  Erl depicts the modeling of a service inventory as an iterative process: model a business process, define the relevant technology architecture, define the service inventory blueprint, analyze the services, then model another business process, rinse and repeat.  (Astute readers will note that Erl’s diagram, restricted to the analysis and modeling process, does not include the implementation phase that concludes the FBI service development methodology.)
The service-oriented analysis phase is where you find the 12 steps that will free you from your legacy API addiction. In a nutshell, you identify the steps in the process that need services; identify the different types of services (agnostic entity services, service compositions, and utility services) that are required; apply service-orientation principles; and normalize the inventory into cohesive service models. Rather than discuss each of the 12 steps individually, I will close by simply referring my readers to Erl’s explanation.

    Read the article

  • 42+ Text-Editing Keyboard Shortcuts That Work Almost Everywhere

    - by Chris Hoffman
    Whether you’re typing an email in your browser or writing in a word processor, there are convenient keyboard shortcuts usable in almost every application. You can copy, select, or delete entire words or paragraphs with just a few key presses. Some applications may not support a few of these shortcuts, but most applications support the majority of them. Many are built into the standard text-editing fields on Windows and other operating systems. Image Credit: Kenny Louie on Flickr

    Read the article

  • Are Chromebooks the New Netbooks, and What Does That Mean?

    - by Chris Hoffman
    Netbooks — small, cheap, slow laptops — were once very popular. They fell out of favor — people bought them because they seemed cheap and portable, but the actual experience was lackluster. Most netbooks now sit unused. Windows netbooks have vanished from stores today, but there’s a new super-cheap laptop — the Chromebook. Chromebook sales numbers are impressive, but their usage statistics tell a different story. Are Chromebooks just the new netbook?

    The Problem With Netbooks

    Netbooks seemed appealing, especially in an age before tablets and lightweight ultrabooks. You could buy a netbook for $200 or so and have a portable device that let you get on the Internet. The name “netbook” spelled that out — it was a portable device for getting on the ‘net. They weren’t really that great. The original netbook was a lightweight Asus Eee PC that ran Linux alone and had a small amount of fast flash storage. Netbooks eventually ran heavier Windows XP operating systems — Windows Vista was out, but it was just too bloated to run on netbooks. Manufacturers added slow magnetic hard drives, bloatware, and even DVD drives! They couldn’t run most Windows software very well. The build quality was poor and their keyboards were tiny and cramped. People liked the idea of a lightweight device that let them get on the Internet and loved the cheap price, but the actual experience wasn’t great.

    Chromebook Sales

    Chromebook sales numbers seem surprisingly high. NPD reported that Chromebooks were 21% of all notebooks sold in the US in 2013. If you combine laptop and tablet sales into a single statistic, Chromebooks were 9.6% of all those devices sold. That’s 2/3 as many Chromebooks sold as iPads in the US! Of Amazon’s best-selling laptop computers, two of the top three are Chromebooks. These definitely look like successful products. Unlike netbooks, Chromebooks are taking off in a big way in the education market. Many schools are buying Chromebooks for their students instead of more expensive Windows laptops. They’re easier to manage and lock down than Windows laptops, but — more importantly for cash-strapped schools — they’re very cheap. Netbooks never had this sort of momentum in schools.

    Chromebook Usage Statistics

    Here’s where the rosy picture of Chromebooks starts to become more realistic. StatCounter’s browser usage statistics show how widely used different operating systems are. For example, Windows 7 has the highest share with 35.71% of web activity in April, 2014. The chart doesn’t even show Chrome OS at all, although there is an “Other” number near the bottom. Click the Download Data link to download a CSV file and we can view more detailed information. Chrome OS only accounted for 0.38% of web usage in April, 2014. Desktop Linux, which people often shrug at, accounted for 1.52% in the same month. To its credit, Chrome OS usage has increased. Chromebooks were widely mocked back in November, 2013 when the sales numbers came out. After all, they only accounted for 0.11% of web usage globally in November, 2013! But Chrome OS numbers have been improving:

    Nov, 2013: 0.11%
    Dec, 2013: 0.22%
    Jan, 2014: 0.31%
    Feb, 2014: 0.35%
    Mar, 2014: 0.36%
    Apr, 2014: 0.38%

    Chrome OS is climbing, but it’s definitely still in the “Other” category. It isn’t as high as we’d expect to see it with those types of sales numbers.

    Chromebooks vs. Netbooks

    Chromebooks are more limited devices than traditional PCs. You can do quite a few things, but you have to do it all using Chrome or Chrome apps.
Most people won’t be enabling developer mode and installing a Linux desktop. You don’t have access to the powerful desktop software available for Windows and even Mac OS X. On the other hand, these Chromebooks are less compromised than netbooks in many ways. They come with a lightweight operating system designed for portable, mobile devices. They don’t come packed with any bloatware, like the bloatware you’ll find on competing Windows PCs and the original netbooks. They’re cheaper because the manufacturer doesn’t have to pay for a Windows license. There’s no need for antivirus software weighing the operating system down. They’re larger than the original netbooks, with many of them being 11.6-inches instead of the original 8-inch bodies many older netbooks came with. They have larger, more comfortable keyboards and fast solid-state storage. Really, Chromebooks are what netbooks wanted to be. People didn’t buy netbooks to use typical Windows software — they just wanted a lightweight PC. Of course, for many people, the real successor to netbooks is tablets. If all you want is a portable device to throw in a bag so you can get online, maybe a tablet is better. Where Does This Leave Chromebooks? So, are Chromebooks the new netbooks? It’s a bit early to answer that question. Chromebooks are definitely not out of the competition — their sales look good and their usage share is increasing. On the other hand, Chrome OS is still pretty far behind. They’re not catching fire like tablets did. Maybe netbooks were just before their time and Chromebooks were what they were always meant to be. Just as Microsoft’s Windows XP tablets failed, Windows XP netbooks also failed. Tablets took off with a more refined operating system on better hardware years later. “Netbooks” — or Chromebooks — are now taking off with a more purpose-built operating system on better hardware, too. It’s hard to count Chromebooks out because they provide a much better experience than netbooks ever did. If you’re one of the people who wants to use old Windows desktop apps on your portable laptop, you may think netbooks were better — but most people don’t want that. But maybe people either want a full desktop PC experience or a full mobile tablet experience. Is there a place for a laptop with a keyboard that can only view websites? We’ll have to wait and see. Image Credit: Kevin Jarret on Flickr, Clive Darra on Flickr, Sean Freese on Flickr

    Read the article

  • Stored procedure Naming conventions?

    - by Chris
    One of our senior developers has stated that we should be using a naming convention for stored procedures with an "objectVerb" style of naming, such as "MemberGetById", instead of a "verbObject" style of naming, such as "GetMemberById". The reasoning for this standard is that all related stored procedures would be grouped together by object rather than by the action. While I see the logic for this way of naming things, this is the first time that I have seen stored procedures named this way. My opinion of the naming convention is that the name cannot be read naturally, and it takes some time to determine what the words are saying and what the procedure might do. What are your takes on this? Which is the more common way of naming a stored proc, and what types of stored proc naming conventions have you used or gone by?
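    (For illustration, the two conventions side by side — hypothetical procedures, T-SQL syntax assumed:)

        -- Object-first: procedures for the same table cluster together in listings
        -- (MemberDelete, MemberInsert, MemberUpdate would sort alongside it)
        CREATE PROCEDURE MemberGetById @Id INT AS
            SELECT * FROM Member WHERE MemberId = @Id;
        GO
        -- Verb-first: reads more naturally, but every object's Get* interleaves
        CREATE PROCEDURE GetMemberById @Id INT AS
            SELECT * FROM Member WHERE MemberId = @Id;
        GO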

    Read the article

  • Three apps going through apache. How to configure apache httpd? [migrated]

    - by Chris F.
    I have a quick question but I've been struggling to find the best solution: I have two Java webapps and WordPress (PHP) that I need to serve through my prod website: App #1 should be accessed when pointing to www.example.com/ (this would have other URLs too, such as "www.example.com/book"). App #2 should be accessed when pointing to www.example.com/manage. Finally, WordPress would be accessed at www.example.com/info. How can I configure Apache to serve all three of these instances at the same time? So far I have the following, and it's not quite working right. Any suggestions would be much appreciated!

    Listen 8081
    <VirtualHost *:8081>
        DocumentRoot /var/www/html
    </VirtualHost>
    ProxyPass /manage http://127.0.0.1:8080/manage
    ProxyPassReverse /manage http://127.0.0.1:8080/manage
    ProxyPass /info http://127.0.0.1:8081/info
    ProxyPassReverse /info http://127.0.0.1:8081/info
    ProxyPass / http://127.0.0.1:9000/
    ProxyPassReverse / http://127.0.0.1:9000/
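    (One plausible arrangement, sketched below assuming mod_proxy/mod_proxy_http are enabled, Apache 2.2-era access directives, and that WordPress lives on disk at /var/www/html/info — adjust to your layout: serve WordPress directly via an Alias, exclude that path from proxying, and keep the catch-all ProxyPass last, since proxy rules match in order.)

        <VirtualHost *:80>
            ServerName www.example.com

            # WordPress served straight from the filesystem; path is an assumption
            Alias /info /var/www/html/info
            <Directory /var/www/html/info>
                Order allow,deny
                Allow from all
            </Directory>

            # Exclusions and specific prefixes must precede the "/" catch-all
            ProxyPass        /info !
            ProxyPass        /manage http://127.0.0.1:8080/manage
            ProxyPassReverse /manage http://127.0.0.1:8080/manage
            ProxyPass        /       http://127.0.0.1:9000/
            ProxyPassReverse /       http://127.0.0.1:9000/
        </VirtualHost>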

    Read the article

  • How to create per-vertex normals when reusing vertex data?

    - by Chris Smith
    I am displaying a cube using a vertex buffer object (gl.ELEMENT_ARRAY_BUFFER). This allows me to specify vertex indices, rather than having duplicate vertices. In the case of displaying a simple cube, this means I only need to have eight vertices total, as opposed to needing three vertices per triangle, times two triangles per face, times six faces. Sound correct so far? My question is, how do I now deal with vertex attribute data such as color, texture coordinates, and normals when reusing vertices using the vertex buffer object? If I am reusing the same vertex data in my indexed vertex buffer, how can I differentiate when vertex X is used as part of the cube's front face versus the cube's left face? In both cases I would like the surface normal and texture coordinates to be different. I understand I could average the surface normal; however, I would like to render a cube. Also, this still doesn't work for texture coordinates. Is there a way to save memory using a vertex buffer object while being able to provide different vertex attribute data based on context? (Per-triangle would be ideal.) Or should I just duplicate each vertex for each context in which it gets rendered, so there is a one-to-one mapping between vertex, normal, color, etc.? Note: I'm using OpenGL ES.
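    (For context, the standard answer is that a GPU "vertex" is the whole attribute tuple, so positions shared by faces with different normals must be duplicated: a cube with per-face normals needs 24 vertices rather than 8, though indices still avoid per-triangle duplication. A sketch of such an interleaved layout, in C with hypothetical data:)

        /* One GPU vertex = position + normal + texcoord. A corner shared by
         * three faces appears three times, once with each face's normal. */
        typedef struct {
            float position[3];
            float normal[3];
            float texcoord[2];
        } Vertex;

        /* Front face only; the left-face block would repeat two of these
         * positions with a (-1, 0, 0) normal and its own texcoords.
         * 4 vertices x 6 faces = 24 vertices total for the cube. */
        static const Vertex frontFace[4] = {
            {{-1, -1,  1}, {0, 0, 1}, {0, 0}},
            {{ 1, -1,  1}, {0, 0, 1}, {1, 0}},
            {{ 1,  1,  1}, {0, 0, 1}, {1, 1}},
            {{-1,  1,  1}, {0, 0, 1}, {0, 1}},
        };

        /* Indices still pay off: 6 indices per face reference these 4 vertices. */
        static const unsigned short frontIndices[6] = {0, 1, 2, 0, 2, 3};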

    Read the article

  • My Tech Ed North America Preview - Certification Edition

    - by Chris Gardner
    In my previous TechEd North America Preview, I addressed all the content I wanted to see at the show. This time, we shall turn our attention to the certifications I might try to pick up. If you have never been to TechEd North America before, one of the greatest things about the event is an on-site certification center. If you have a couple hours to spare, you can walk up to a test. The first test on my agenda is 70-523 [1]. I took this update test once, but did not do well on the MVC portion [2]. A few practice tests later, and I think I'm ready to fake that section. After that, I need to complete my road to being a master. The good folks here at work have been having a real love / hate relationship with the idea of me becoming an MCM in SQL Server [3]. Of course, before I do that, I need to finally take the SQL Administration tests. Thus, we shall add 70-432 [4] and 70-450 [5] to the list. Speaking of MCM, TechEd North America will have a special on test 88-970 [6]. This test is normally $500, and you have to find a place to take it [7]. However, there is a special 50% off rate for people who take it on location. With those kinds of prices, I may just take it as a form of study guide. As a final push, I may take some Windows Phone exams. I mentioned in my previous post that I may attend the 70-599 [8] Exam Cram session. Unfortunately, I will be staffing the Hands-On-Lab at that time. As we know, this has never stopped me from taking a test. This may lead to fits of 70-506 [9], but after we've come this far... That should complete my list. Do I really think I'll find time to take 6 tests at TechEd North America? Probably not. I have done it at TechEd North America before, but that was before I was TechEd North America staff. I also had a co-worker pass 9 in one year, but he basically did nothing but travel to Orlando in 2007 to take tests. And what's the point of attending a HUGE conference if you don't network? Of course, networking will have to wait for Friday's post...

    [1] Upgrade: Transition Your MCPD .NET Framework 3.5 Web Developer Skills to MCPD .NET Framework 4 Web Developer
    [2] Because I never have used, nor do I really think I ever will use, MVC...
    [3] By that, I mean they love the idea, and they hate the price.
    [4] Microsoft SQL Server 2008, Implementation and Maintenance
    [5] PRO: Designing, Optimizing and Maintaining a Database Administrative Solution Using Microsoft SQL Server 2008
    [6] SQL Server 2008 Microsoft Certified Master: Knowledge Exam
    [7] Which isn't nearly as expensive as the Lab Exam, nor as difficult to find a location. However, it is not offered at every testing facility.
    [8] PRO: Designing and Developing Windows Phone Applications
    [9] TS: Silverlight 4, Development

    Read the article

  • The 20 Most Important Keyboard Shortcuts For Windows PCs

    - by Chris Hoffman
    Keyboard shortcuts are practically essential for using any type of PC. They’ll speed up almost everything you do. But long lists of keyboard shortcuts can quickly become overwhelming if you’re just getting started. This list will cover the most useful keyboard shortcuts that every Windows user should know. If you haven’t used keyboard shortcuts much, these will show you just how useful keyboard shortcuts can be.

    Windows Key + Search

    The Windows key is particularly important on Windows 8 — especially before Windows 8.1 — because it allows you to quickly return to the Start screen. On Windows 7, it opens the Start menu. Either way, you can start typing immediately after you press the Windows key to search for programs, settings, and files. For example, if you want to launch Firefox, you can press the Windows key, start typing the word Firefox, and press Enter when the Firefox shortcut appears. It’s a quick way to launch programs, open files, and locate Control Panel options without even touching your mouse and without digging through a cluttered Start menu. You can also use the arrow keys to select the shortcut you want to launch before pressing Enter.

    Copy, Cut, Paste

    Copy, Cut, and Paste are extremely important keyboard shortcuts for text-editing. If you do any typing on your computer, you probably use them. These options can be accessed using the mouse, either by right-clicking on selected text or opening the application’s Edit menu, but this is the slowest way to do it. After selecting some text, press Ctrl+C to copy it or Ctrl+X to cut it. Position the cursor where you want the text and use Ctrl+V to paste it. These shortcuts can save you a huge amount of time over using the mouse.

    Search the Current Page or File

    To quickly perform a search in the current application — whether you’re in a web browser, PDF viewer, document editor, or almost any other type of application — press Ctrl+F. The application’s search (or “Find”) feature will pop up, and you can instantly start typing a phrase you want to search for. You can generally press Enter to go to the next appearance of the word or phrase in the document, quickly searching through it for what you’re interested in.

    Switch Between Applications and Tabs

    Rather than clicking buttons on your taskbar, Alt+Tab is a very quick way to switch between running applications. Windows orders the list of open windows by the order you accessed them, so if you’re only using two different applications, you can just press Alt+Tab to quickly switch between them. If switching between more than two windows, you’ll have to hold the Alt key and press Tab repeatedly to toggle through the list of open windows. If you miss the window you want, you can always press Alt+Shift+Tab to move through the list in reverse. To move between tabs in an application — such as the browser tabs in your web browser — press Ctrl+Tab. Ctrl+Shift+Tab will move through tabs in reverse.

    Quickly Print

    If you’re the kind of person who still prints things, you can quickly open the print window by pressing Ctrl+P. This can be faster than hunting down the Print option in every program you want to print something from.

    Basic Browser Shortcuts

    Web browser shortcuts can save you tons of time, too. Ctrl+T is a very useful one, as it will open a new tab with the address bar focused, so you can quickly press Ctrl+T, type a search phrase or web address, and press Enter to go there. To go back or forward while browsing, hold the Alt key and press the left or right arrow keys.
    If you’d just like to focus your web browser’s address bar so you can type a new web address or search without opening a new tab, press Ctrl+L. You can then start typing something and press Enter.

    Close Tabs and Windows

    To quickly close the current application, press Alt+F4. This works on the desktop and even in new Windows 8-style applications. To quickly close the current browser tab or document, press Ctrl+W. This will often close the current window if there are no other tabs open.

    Lock Your Computer

    When you’re done using your computer and want to step away, you may want to lock it. People won’t be able to log in and access your desktop unless they know your password. You can do this from the Start menu or Start screen, but the fastest way to lock your screen is by quickly pressing Windows Key + L before you get up.

    Access the Task Manager

    Ctrl+Alt+Delete will take you to a screen that allows you to quickly launch the Task Manager or perform other operations, such as signing out. This is particularly useful because it can be used to recover from situations where your computer doesn’t appear responsive or isn’t accepting input. For example, if a full-screen game becomes unresponsive, Ctrl+Alt+Delete will often allow you to escape from it and end it via the Task Manager.

    Windows 8 Shortcuts

    On Windows 8 PCs, there are other very important keyboard shortcuts. Windows Key + C will open your Charms bar, while Windows Key + Tab will open the new App Switcher. These keyboard shortcuts will allow you to avoid the hot corners, which can be tedious to use with a mouse. On the desktop side, Windows Key + D will take you back to the desktop from anywhere. Windows Key + X will open a special “power user menu” that gives you quick access to options that are hidden in the new Windows 8 interface, including Shut Down, Restart, and Control Panel. If you’re interested in learning more keyboard shortcuts, be sure to check our longer lists of 47 keyboard shortcuts that work in all web browsers and 42+ keyboard shortcuts to speed up text-editing. Image Credit: Jeroen Bennink on Flickr

    Read the article

  • How to price code reviews to encourage good behavior?

    - by Chris Clark
    I work for a company that has a hosted .NET internet application with many clients. Those clients often want to write customizations for our application. We have APIs to hook into the app, but the customizations themselves are written in .NET. This is a shared, secure hosting environment and we have to code review these customizations before we can deploy them in our datacenter, to ensure that they don't degrade performance, crash our servers, or open any security vulnerabilities. We charge for these code reviews. The current pricing model is simply a function of the number of lines of code. I think this is a bad idea for a variety of reasons, but primarily because, if we are interested in verifying that the code works as expected, we should be incentivizing good, readable code, not compaction. I would like to propose a pricing model that incorporates some, or all, of the following as inputs:

    - Lines of code
    - Cyclomatic complexity
    - Avg function length
    - # of functions

    Are there any other metrics I should incorporate, or other ideas for how we can reasonably create pricing for code reviews that encourages safe and understandable code?
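    (To make the proposal concrete, here is one possible shape for such a formula, with entirely hypothetical weights and thresholds and using three of the four proposed inputs, shown only to illustrate pricing that penalizes complexity rather than rewarding compaction:)

        def review_price(loc, avg_complexity, avg_function_length,
                         base=200.0, per_loc=0.50):
            """Sketch: price grows with size, but multiplies up when the code
            is hard to review (complex or overlong functions)."""
            price = base + per_loc * loc
            # Penalize hard-to-follow code instead of rewarding fewer lines
            if avg_complexity > 10:        # McCabe threshold, a common rule of thumb
                price *= 1.0 + (avg_complexity - 10) * 0.05
            if avg_function_length > 50:   # long functions are slower to verify
                price *= 1.25
            return round(price, 2)

        # 1000 readable lines come out cheaper than 600 convoluted ones:
        print(review_price(1000, avg_complexity=4,  avg_function_length=20))  # 700.0
        print(review_price(600,  avg_complexity=18, avg_function_length=80))  # 875.0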

    Read the article

  • How To Delete, Move, or Rename Locked Files in Windows

    - by Chris Hoffman
    Windows won’t allow you to modify files that open programs have locked. If you try to delete a file and see a message that it’s open in a program, you’ll have to unlock the file (or close the program). In some cases, it may not be clear which program has locked a file – or a background process may have locked a file and not terminated correctly. You must unlock the stubborn file or folder to modify it. Note: Unlocking certain files and deleting them may cause problems with open programs. Don’t unlock and delete files that should remain locked, including Windows system files.

    Read the article

  • Immutable Method in Java

    - by Chris Okyen
    In Java, there is the final keyword in lieu of the const keyword in C and C++. In the latter languages there are mutable and immutable methods, as stated in the answer by Johannes Schaub - litb to the question "How many and which are the uses of 'const' in C++?": Use const to tell others that methods won't change the logical state of this object. struct SmartPtr { int getCopies() const { return mCopiesMade; } } ptr1; ... int var = ptr1.getCopies(); // returns mCopiesMade and is specified not to modify the object's state. How is this performed in Java?
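    (For comparison, Java has no counterpart to a const member function; final on a method only prevents overriding. The closest idiom is to make the object itself immutable, with final fields and no mutators, so every method is state-preserving by construction. A sketch mirroring the SmartPtr example above:)

        // No 'const' methods in Java: getters are "non-mutating" only by
        // convention, so enforce it structurally instead.
        public final class SmartPtr {
            private final int copiesMade; // final field: set once, never reassigned

            public SmartPtr(int copiesMade) {
                this.copiesMade = copiesMade;
            }

            // Safe to call anywhere: it cannot change the object's logical state.
            public int getCopies() {
                return copiesMade;
            }
        }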

    Read the article

  • I have neither grub.conf nor menu.lst

    - by chris
    Hi, I just compiled my own kernel for the first time. As it is written in a tutorial, I now want to check whether the kernel got written into the grub.conf file. Well, I did not find a grub.conf file. So I googled. Answer: Ubuntu does not have grub.conf, but rather has a menu.lst (also stored in /boot/grub). So I looked for that file. But I don't have that one either. So now the question: Where is my GRUB data stored? I'm using Ubuntu 10.10 with Kernel 2.6.35-27.

    Read the article

  • How to stop fan running always on Asus P8P76LE motherboard with ATI Radeon HD6900

    - by Chris Good
    I'm using Ubuntu 12.04 LTS. I'm not sure if it is the CPU (i7) fan or the video card fan. I've tried using lm-sensors & fancontrol.

    sudo sensors-detect
    Now follows a summary of the probes I have just done.
    Just press ENTER to continue:
    Driver `w83627ehf':
      * ISA bus, address 0x290
        Chip `Nuvoton NCT6776F Super IO Sensors' (confidence: 9)
    Driver `coretemp':
      * Chip `Intel digital thermal sensor' (confidence: 9)
    To load everything that is needed, add this to /etc/modules:
    # Chip drivers
    coretemp
    w83627ehf

    Like many people, I'm also getting the error:
    /usr/sbin/pwmconfig: There are no pwm-capable sensor modules installed

    Here is the output of sensors:
    # sensors
    radeon-pci-0100
    Adapter: PCI adapter
    temp1:         +71.0°C

    coretemp-isa-0000
    Adapter: ISA adapter
    Physical id 0: +44.0°C (high = +80.0°C, crit = +98.0°C)
    Core 0:        +44.0°C (high = +80.0°C, crit = +98.0°C)
    Core 1:        +40.0°C (high = +80.0°C, crit = +98.0°C)
    Core 2:        +43.0°C (high = +80.0°C, crit = +98.0°C)
    Core 3:        +42.0°C (high = +80.0°C, crit = +98.0°C)

    I'm hoping someone has already solved this for my configuration, because this seems to be a problem for many people and there are many different suggestions.
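    (For what it's worth, the step sensors-detect asks for can be tried without a reboot; a sketch of the usual procedure, where whether pwmconfig then finds a PWM-capable fan depends on the board exposing one through the w83627ehf/NCT6776F driver:)

        # Load the chip drivers sensors-detect recommended above
        sudo modprobe coretemp
        sudo modprobe w83627ehf
        # Make them persist across reboots
        printf "coretemp\nw83627ehf\n" | sudo tee -a /etc/modules
        # With the Super I/O driver loaded, re-check the readings and re-run
        # pwmconfig; without it, only the ACPI/GPU sensors are visible
        sensors
        sudo pwmconfig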

    Read the article
