Search Results

Search found 20442 results on 818 pages for 'software evaluation'.

Page 559/818 | < Previous Page | 555 556 557 558 559 560 561 562 563 564 565 566  | Next Page >

  • Drobo not mounting. Disk repair doesn't work either.

    - by kohei
    Hi, while I was transferring data to my 2nd-gen Drobo, the power went out. Now the Drobo won't mount on my OS X 10.6.3 machine. I tried Disk Utility's repair and this error message appears: "Verify and Repair volume “disk1s2”. Checking Journaled HFS Plus volume. Invalid key length. Invalid record count. Catalog file entry not found for extent. The volume could not be verified completely. Volume repair complete. Updating boot support partitions for the volume as required. Error: Disk Utility can’t repair this disk. Back up as many of your files as possible, reformat the disk, and restore your backed-up files." I tried DiskWarrior too, but it doesn't work either: it tells me it needs more memory to continue and then the software shuts down. Does anyone know a solution to this?
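
    (Not from the original post: a minimal command-line sketch of the same repair Disk Utility is attempting, assuming the volume really is disk1s2 as the error message says. Verify the device name with diskutil list first, and treat the data as already at risk.)

        # Sketch only: run the HFS+ checker directly from Terminal on OS X 10.6.
        # /dev/disk1s2 is taken from the error message above -- confirm it first.
        diskutil list                        # find which device is the Drobo volume
        diskutil unmountDisk /dev/disk1      # the volume must not be mounted
        sudo fsck_hfs -fy /dev/rdisk1s2      # force a full verify and repair
        sudo fsck_hfs -r /dev/rdisk1s2       # last resort: rebuild the catalog B-tree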

    Read the article

  • How to encrypt disk transparently without destroying data

    - by aseq
    On a Linux system, be it Debian, Redhat or any other distro, I would like to encrypt the disk on the fly. That is, encrypt the disk transparently to the OS while it is running, across reboots, shutdowns, etc. In addition, it should not destroy existing data. I know there is (commercial) software for Windows that does this, but I need a Linux solution. The solutions I know of (LUKS, TrueCrypt, ...) do not seem to support this as far as I can see. Maybe there is some hack or workaround available?
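
    (A hedged sketch, not from the original question: newer cryptsetup releases ship a cryptsetup-reencrypt tool that can LUKS-encrypt an existing filesystem in place without destroying the data. It runs offline rather than fully transparently while the system is up, the device name and sizes below are placeholders, and a full backup first is essential.)

        # Assumption: /dev/sdX2 stands in for the data partition to encrypt in place.
        umount /dev/sdX2
        e2fsck -f /dev/sdX2
        resize2fs -M /dev/sdX2                      # shrink the fs to make room for the LUKS header
        cryptsetup-reencrypt --new --reduce-device-size 8192S /dev/sdX2
        cryptsetup luksOpen /dev/sdX2 securedata
        resize2fs /dev/mapper/securedata            # grow the fs back to fill the mapping
        mount /dev/mapper/securedata /mnt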

    Read the article

  • Can't ssh to instance

    - by megas
    I have a Linode instance that I was successfully connecting to via ssh. But after I decided to rebuild the instance, I can no longer connect to it via ssh. The Linode itself works correctly, because I can get access via Lish (the Linode shell). I have tried clearing known_hosts with:

        ssh-keygen -R 212.71.xxx.xx

    but I am still getting this:

        ssh [email protected] -v
        OpenSSH_5.9p1 Debian-5ubuntu1.1, OpenSSL 1.0.1 14 Mar 2012
        debug1: Reading configuration data /etc/ssh/ssh_config
        debug1: /etc/ssh/ssh_config line 19: Applying options for *
        debug1: Connecting to 212.71.238.74 [212.71.238.74] port 22.
        debug1: Connection established.
        debug1: identity file /home/megas/.ssh/id_rsa type 1
        debug1: Checking blacklist file /usr/share/ssh/blacklist.RSA-2048
        debug1: Checking blacklist file /etc/ssh/blacklist.RSA-2048
        debug1: identity file /home/megas/.ssh/id_rsa-cert type -1
        debug1: identity file /home/megas/.ssh/id_dsa type -1
        debug1: identity file /home/megas/.ssh/id_dsa-cert type -1
        debug1: identity file /home/megas/.ssh/id_ecdsa type -1
        debug1: identity file /home/megas/.ssh/id_ecdsa-cert type -1
        debug1: Remote protocol version 2.0, remote software version OpenSSH_5.9p1 Debian-5ubuntu1.1
        debug1: match: OpenSSH_5.9p1 Debian-5ubuntu1.1 pat OpenSSH*
        debug1: Enabling compatibility mode for protocol 2.0
        debug1: Local version string SSH-2.0-OpenSSH_5.9p1 Debian-5ubuntu1.1
        debug1: SSH2_MSG_KEXINIT sent
        debug1: SSH2_MSG_KEXINIT received
        debug1: kex: server->client aes128-ctr hmac-md5 none
        debug1: kex: client->server aes128-ctr hmac-md5 none
        debug1: sending SSH2_MSG_KEX_ECDH_INIT
        debug1: expecting SSH2_MSG_KEX_ECDH_REPLY
        debug1: Server host key: ECDSA c5:c3:a7:c0:5a:25:a1:64:c4:04:0c:42:bb:46:f6:96
        debug1: Host '212.71.238.74' is known and matches the ECDSA host key.
        debug1: Found key in /home/megas/.ssh/known_hosts:1
        debug1: ssh_ecdsa_verify: signature correct
        debug1: SSH2_MSG_NEWKEYS sent
        debug1: expecting SSH2_MSG_NEWKEYS
        debug1: SSH2_MSG_NEWKEYS received
        debug1: Roaming not allowed by server
        debug1: SSH2_MSG_SERVICE_REQUEST sent
        debug1: SSH2_MSG_SERVICE_ACCEPT received
        debug1: Authentications that can continue: publickey,password
        debug1: Next authentication method: publickey
        debug1: Offering RSA public key: /home/megas/.ssh/id_rsa
        debug1: Authentications that can continue: publickey,password
        debug1: Trying private key: /home/megas/.ssh/id_dsa
        debug1: Trying private key: /home/megas/.ssh/id_ecdsa
        debug1: No more authentication methods to try.
        Permission denied (publickey,password).

    How do I resolve this problem? Thanks
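
    (Not an answer from the thread, just a hedged sketch of the usual fix after a rebuild: the fresh image no longer contains the workstation's public key, so it has to be re-installed over Lish with the right permissions. The key string and username below are placeholders.)

        # On the Linode, logged in through Lish -- a sketch, not taken from the post.
        mkdir -p ~/.ssh && chmod 700 ~/.ssh
        # paste the contents of the workstation's ~/.ssh/id_rsa.pub on the next line:
        echo 'ssh-rsa AAAA...placeholder... megas@workstation' >> ~/.ssh/authorized_keys
        chmod 600 ~/.ssh/authorized_keys
        # if password logins should work too, check PasswordAuthentication and
        # PermitRootLogin in /etc/ssh/sshd_config, then restart sshd:
        sudo service ssh restart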

    Read the article

  • Motherboard Question

    - by user33931
    First, I am a software guy; I don't do hardware, so I know that to you hardware geeks this is a dumb question. I just inherited a box with an ASUS P5GZ-MX motherboard and have attempted to install two nVidia PCI video cards. I put a 750 W power supply in the system to be sure I have enough power. With no extra video cards, the 3.3 V rail shows normal. When I put one card in, the 3.3 V rail goes to 3.5-3.6 V and flashes red (over voltage) about 30% of the time. When I put the second card in, it goes to 3.73 V and stays red all the time. Any ideas why the voltage goes up when I add cards instead of going down? More importantly, is this dangerous to the system?

    Read the article

  • Agile Testing Days 2012 – Day 2 – Learn through disagreement

    - by Chris George
    I think I was in the right place! During Day 1 I kept reading tweets about the Lean Coffee that had happened earlier that morning. It intrigued me, and I figured in for a penny, in for a pound, so I set my alarm for 6:45am. Following the awards night the night before, it was _really_ hard getting up when it went off, but I did, and after a very early breakfast I set off for the 10-minute walk to the Dorint. With Lean Coffee due to start at 07:30, I arrived at the hotel and made my way to one of the hotel bars. I soon realised I was in the right place: although the bar was empty, there was a table with post-its and pens! This MUST be the place! The premise of Lean Coffee is to have several small timeboxed discussions. Everyone writes down what they would like to discuss on post-its, which are then briefly explained and submitted to the pile. Once everyone is done, the group dot-votes on the topics. The topics are then sorted by dot-vote count and the discussions begin. Each discussion had 8 minutes to start with, which prevented the discussions from getting too far off topic. After the time elapsed, the group voted on whether to extend the discussion by a further 4 minutes or move on. Several discussions were had around training, soft skills and so on. The conversations were really interesting and there were quite a few good ideas. Overall it was a very enjoyable experience, certainly worth the early start!

    Make Melly Happy

    Following Lean Coffee was real coffee, and much needed it was! The first keynote of the day was “Let’s help Melly (Changing Work into Life)” by Jurgen Appelo. This was a very interesting presentation and set the day up nicely. The theme of the keynote was that projects are about the people, more so than the actual tasks. He started by showing a photo of an employee, ‘Melly’, who looked happy enough. He then stated that she looked happy but actually hated her job; in fact 50% of Americans hate their jobs, and the world over, 50% of people hate their jobs. Jurgen talked about many ways to shorten the feedback cycle, not only of the project but of the people management: ideas such as happiness doors, happiness tracking (drawing lines on a wall indicating your happiness for that day) and kudo boxes (to compliment a colleague for good work). All of these ideas (and more) stimulate conversation amongst the team and lead to early detection of issues and investigation of solutions. I’ve massively simplified Jurgen’s keynote and have certainly not done it justice, so I will post a link to the video once it’s available.

    Following more coffee, the next talk was “How releasing faster changes testing” by Alexander Schwartz. This is a topic very close to our hearts at the moment, so I was eager to find out any juicy morsels that could help us achieve more frequent releases, and Alex did not disappoint. He started off by confirming something that I have been a firm believer in for a number of years now: adding more people can do more harm than good when trying to release. This is for a number of reasons, but just adding new people to a team at such a critical time can be more of a drain on resources than a help. The alternative is for the whole team to share responsibility for faster delivery, so the whole team is responsible for quality and testing. Obviously you will have the test engineers on the project who have the specialist skills, but there is no reason the entire team cannot do exploratory testing on the product. This links nicely with the developer exploratory testing presented by Sigge on Day 1, and it is certainly something my team are really striving towards. Focus on cycle time: what can be done to reduce the time between dev cycles and release cycles? What stops a release? What delays a release? All good, solid questions that can be answered. Alex suggested that perhaps the product doesn’t need to be fully tested: doing less testing will reduce the cycle time and therefore get the release out faster. He suggested a risk-based approach to planning what testing needs to happen. Reducing testing could have an impact on revenue if it causes harm to customers, so test the ‘right stuff’! Determine a set of tests that are ‘face saving’ or ‘smoke’ tests; these cover the core functionality of the product and aim to prevent major embarrassment if those areas were to fail! Amongst many other very good points, Alex suggested that a good approach would be to release after every new feature is added: do a bit of work, release; do some more work, release. By releasing small increments of work, the impact on the customer of any bugs introduced is reduced.

    Red Pill, Blue Pill

    The second keynote of the day was “Adaptation and improvisation – but your weakness is not your technique” by Markus Gartner and proved to be another very good presentation. It started off quoting lines from The Matrix relating to adapting, improvising, realisation and mastery, which had a lot of nerds in the room smiling! Markus went on to explain how, through deliberate practice (and a lot of it!), you can achieve mastery, but then you never stop learning. Through methods such as code retreats, testing dojos and workshops you can continually improve and learn. The code retreat idea was one that interested me: it involves pairing to write an automated test for, say, 45 minutes, then deleting all the code, finding a different partner and writing the same test again! This is another keynote where the video will speak louder than anything I can write here. Markus did elaborate on something that Lisa and Janet had touched on the day before whilst busting the myth that “testers must code”: whilst it is true that to be a tester you don’t need to code, it is becoming more common for there to be a crossover, where more testers are coding and more programmers are testing. Markus made a special distinction between programmers and developers, since testers develop test code too, which helped to make that clear.

    “Extending Continuous Integration and TDD with Continuous Testing” by Jason Ayers was my next talk after lunch. We already do CI and a bit of TDD on my project team, so I was interested to see what this continuous testing thing was all about and whether it would actually work for us. At the start of the presentation I was of the opinion that it just would not work for us, because our tests are too slow, and that would be the case for many people. Jason started off by setting the scene and saying that those doing TDD spend between 10-15% of their time waiting for tests to run. This can be reduced by testing less often or by reducing the test time, but that increases the risk of introduced bugs not being spotted quickly. Therefore, in comes Continuous Testing (CT). CT systems run your unit tests whenever you save some code, and run them in the background so you can continue working. This is a really nice idea, but to do this your tests must be fast, independent and reliable. The latter two should be the case anyway, and the first is ideal, but hard! Jason made several suggestions for making tests fast: firstly, keep the scope of each test small; secondly, spin off any expensive tests into a suite which is run, perhaps, overnight or at any rate outside the CT system. So this started to change my mind: perhaps we could re-engineer our tests and continuously run the quick ones to give an element of coverage. This talk was very interesting and I’ve already tried a couple of the tools mentioned on our product (Mighty Moose and NCrunch). Sadly, due to the way our solution is built, it currently doesn’t work, but we will look at whether we can make it work, because this has the potential to be a mini game-changer for us.

    Using the wrong data

    The final keynote of the day was “Reinventing software quality” by Gojko Adzic. He opened the talk with the statement “We’ve got quality wrong because we are using the wrong data”! Gojko went on to explain that we should judge a bug by whether the customer cares about it, not by whether we think it’s important. Why spend time fixing issues that the customer just wouldn’t care about, and release months later because of it? Surely it’s better to release now and get customer feedback? This was another reference to the idea that it’s better to build the right thing wrong than the wrong thing right: get feedback early to make sure you’re making the right thing. Gojko then showed his hierarchy of quality, something very analogous to Maslow’s hierarchy of needs:

    - Successful – does it contribute to the business?
    - Useful – does it do what the user wants?
    - Usable – does it do what it’s supposed to without breaking?
    - Performant/Secure – is it secure, and is the performance acceptable?
    - Deployable/Functionally OK – can it be deployed without breaking?

    He then explained that user stories should focus on change. In other words, they should focus on the user’s needs, not the user’s process: describe what the change will be and how that change will happen, then measure it!

    Networking and Beer

    Following the day’s closing keynote, there were drinks and nibbles for the ‘networking’ evening. This was a great opportunity to talk to people. I find approaching strangers very uncomfortable, but once again, when in Rome! Pete Walen and I had a long conversation about only fixing issues that the customer cares about versus fixing issues that make you proud of your software. Without saying much, and by asking the right questions, Pete made me re-evaluate my thoughts on the matter. Clever, very clever! Oh, and he ‘bought’ me a beer!

    My Takeaway Triple from Day 2:

    - Release small and release often, to minimise issues creeping in and to get faster feedback from ‘the real world’.
    - Focus on issues that the customers care about, not what we think is important.
    - It’s okay to disagree with someone, even if they are well-respected agile testing gurus; that’s how discussion and learning happens!

    Read the article

  • links for 2010-06-01

    - by Bob Rhubart
    Venkatakrishnan J: Oracle BI EE 10.1.3.4.1 -- Do we need measures in a Fact Table? -- Troubleshooting from Rittman Mead's Venkatakrishnan J. (tags: oracle otn businessintelligence datawarehouse)
    Grid container support: JavaFX Composer -- An overview of how JavaFX Composer supports the grid container. (tags: oracle sun javafx)
    John Brunswick: Site Studio Mobile Example - WCM Reuse -- The example highlighted in John Brunswick's post takes advantage of dynamic conversion capabilities in Oracle UCM that allow site content to be created and updated via MS Office documents. (tags: oracle otn enterprise2.0)
    @glassfish: GlassFish 3 in the EC2 Cloud powering Dutch and Belgian community polls -- "The infrastructure is Amazon's Elastic Cloud Computing (EC2) environment because of the dynamic provisioning (elasticity) required by such an online service. Requests are handled directly by the grizzly layer of GlassFish with no extra front-end HTTP layer and shows great performance and scalability." -- The Aquarium (tags: oracle java sun glassfish cloud)
    James Morle: Flash Storage Will Be Cheap: The End of the World is Nigh -- "We now need technologies that look more like Oracle Exadata v2, with low-latency RDMA interfaces directly into the Operating System/Database. However, they need to easily and natively support other types of storage (unstructured data such as files, VMware datastores and so forth). The Exadata architecture lends itself well to changes in this area in both hardware trends and access protocols." -- James Morle (tags: oracle otn exadata database architecture virtualization)
    Java / Oracle SOA blog: HTTP binding in Soa Suite 11g PS2 (tags: ping.fm)
    Confessions of a Software Developer: Some Tips for Installing Oracle BPM 11g on Windows XP (tags: ping.fm)
    SOA and Java using Oracle technology: Book review: Oracle Coherence 3.5: Create internet scale applications using Oracle's high-performance data grid (tags: ping.fm)

    Read the article

  • remote desktop: the app on the remote machine is confusing the controller's hostname with its own hostname

    - by David Dai
    I have two machines, A and B, both running Windows. A is my work machine; B is a server on which I have already installed SQL Server. Now I want to install another piece of software on B that runs on top of SQL Server. I connect to B from A via remote desktop. Then, in the remote desktop session, I start the installer. During the installation process there is a step where I can configure which server to connect to, and normally B's hostname is entered into the hostname field automatically. The issue I'm having is that when I get to that step, A's hostname is entered automatically instead of B's, and even if I manually correct it to 'localhost' or '127.0.0.1' or B's hostname, the installer still cannot connect to B's service, as if it were still trying to connect to A. Theoretically, how does this happen? How is this possible?

    Read the article

  • The Oracle VM Hall of Fame

    - by Kristin Rose
    “Take me out to the ball game, take me out to the crowd. Buy me a new Oracle VM, I want my competition to be history!”... Yes, baseball is in full swing, and as we make our way to the close of the quarter, Oracle is ready to “knock it out of the park” with its newly updated release of Oracle VM 3.1. This home run of a server virtualization solution will let you deploy software faster, as it intelligently manages your entire infrastructure, from application to disk. As if that wasn’t enough, the competition can’t even get on base! Have a look at the final score below: partners will be hitting grand slams left and right, because management tools, application templates and single-source support have all teamed up to create one heck of a curve ball for the competition, but more importantly, an absolute first draft pick for our partners. With no license cost and an affordable enterprise support cost, crowds have gathered to see this ‘All Star’ play some hard ball. Watch as Jeff Doolan, Sr. Director of Linux and Virtualization Channel Sales at Oracle, goes into more depth on how Oracle VM is a real game changer and eliminates the competition.
    Adding to the line-up are two key components of Oracle VM 3.1:
    - Enhanced ease of use: the new GUI design is engineered for faster execution of workflows, to maximize ease of use and to reduce deployment time. Administrators have more time to spend at the ball park or to focus on the business.
    - New Oracle VM Templates: such as Oracle E-Business Suite 12.1.3, Oracle PeopleSoft FSCM 9.1, Oracle Enterprise Manager 12c, Oracle Linux 5.8, Oracle Linux 6.1 and Oracle Solaris 11, which add to the 100+ existing templates that are ready for download. Oracle VM Templates are pre-configured as an entire stack, including OS and application, fully tested, production ready and certified by Oracle.
    For more information on Oracle's newest player, Oracle VM 3.1, read this press release or visit our technology information page.
    Batter Up,
    The OPN Communications Team

    Read the article

  • How to View Netflix Watch Instantly in XBMC

    - by Justin Garrison
    Netflix streaming isn’t just a feature that is nice to have; for many people it is a must-have for any video streaming software. Unfortunately it has been missing from XBMC, for various reasons, until today. In order to get Netflix Watch Instantly working in XBMC you just need to have XBMC 10.0+ installed on Windows or OS X. Because of a lack of Silverlight support, this currently does not work on XBMC Live, Linux, or iOS devices (iPhone, iPad, AppleTV). You also need to live in a region that offers Netflix streaming (currently the US and Canada).

    Read the article

  • Is aspect oriented programming a misnomer?

    - by glenviewjeff
    From everything I have learned about "Aspect Oriented Programming" or "Aspect Oriented Software Development," labeling it as a programming paradigm or methodology appears to be inaccurate. From what I can tell, it is not a fundamental technique for programming. To nail down what is meant by "paradigm" and "methodology," please refer to the following definitions from the American Heritage Dictionary, and compare how well or poorly "Object-Oriented Programming" applies to each versus how well AOP fits. Paradigm: A set of assumptions, concepts, values, and practices that constitutes a way of viewing reality for the community that shares them, especially in an intellectual discipline. Methodology: A body of practices, procedures, and rules used by those who work in a discipline or engage in an inquiry; a set of working methods. "Evidence-based medicine" satisfies the definition of a paradigm, but "hysterectomy-based medicine" would be a misnomer because the problem space is too narrow. I am getting the impression that AOP may be misnamed, because based on the "oriented-programming" suffix, AOP is alleging to be both a paradigm and a methodology in the same way "Object-Oriented Programming" is. Both of these terms (paradigm and methodology) indicate a fundamental technique, whereas what I understand about aspects is that they are a technology for solving a narrow problem scope, maybe comparable in magnitude to Java's static variable feature. If it's true that aspects solve a narrow set of problems, and AOP isn't a misnomer, then why shouldn't all programming techniques be given the "oriented-programming" suffix, such as "inheritance-oriented programming," "dependency-oriented programming," or "scope-oriented programming"?

    Read the article

  • Non-Apple RAID card for Mac PRO (TOWER)

    - by Arthor
    I have a Mac Pro tower (model number A1186) with PCIe slots. At present I am using software RAID; however, I wish to move to hardware RAID for the following reasons:
    - Performance (4 x 300 GB SATA II in RAID 5)
    - Redundancy (with RAID 5, one drive can fail and the system stays online)
    I do not wish to use the Apple RAID card (very expensive); I would like to use a cheaper aftermarket card. Questions: Does anyone have a working aftermarket RAID card in their Mac Pro tower? (I have done some research and RocketRAID looks promising, but I need confirmation.) If so, does it work from boot? Thanks

    Read the article

  • How to access an encrypted INI file from C on an embedded system with little RAM

    - by Mawg
    I want to encrypt an INI file using a Delphi program on a Windows PC. Then I need to decrypt it and access it in C on an embedded system with little RAM. I will do that once and fetch all the info; I will not be continuously accessing the INI file whenever my program needs data from it. Any advice as to which encryption to use? Nothing too heavyweight, just good enough for "security through obscurity", and FOSS for both Delphi and C. And how can I decrypt and get all the info from the INI file using as little RAM as possible, and then free any allocated RAM? I hope that someone can help. [Update] I am currently using an Atmel UC3, although I am not sure if that will be the final choice. It has 512 kB flash and 128 kB RAM. For the INI file, I am talking of a maximum of 8 sections, with a total of at most 256 entries, each at most 8 characters. I chose INI (but am not married to it) because I have had major problems in the past when the format of a data file changes, no matter whether binary or text. For text, I prefer the free format of INI (on the PC), but I suppose I could switch to line_1=data_1, line_2=data_2 and accept that if I add new fields in future software releases they must come at the end, even if it is not pretty when read directly by humans. I suppose if I choose a fixed-format text file then I never need to get more than one line into RAM at a time ...

    Read the article

  • Free Team Foundation Service a Boon for Play Projects

    - by Ken Cox [MVP]
    My ‘jump in and sink or swim’ learning style leads me to create dozens of unbillable ‘play’ projects in Visual Studio 2012.  For example, I’m currently learning to customize the powerful auction/fixed price/classifieds software called AuctionWorx Enterprise.  I make stupid mistakes as I grasp a new API and configure projects. It’s frustrating to go down a rat hole and discover the VS IDE’s Undo doesn’t reach back to a working build from the previous weekend. Enter Visual Studio’s Free Team Foundation Service.  Put your play projects into the free source control and check in (or shelve) a snapshot before embarking on something risky. (I already use Discount ASP.NET’s Team Foundation Server hosting for client projects so there’s zero TFS/VS learning curve for me and first class GUI support.) TFS is free right now for teams of five or fewer. I’m not naive enough to expect ‘free’ to last forever. So, it’ll be interesting to see how much Microsoft intends to charge in 2013 for TFS. In the meantime, why not grab some free source code storage? BTW, it’s weird to realize that Microsoft is backing up my junky little playtime code on three physically-distinct servers every day - and taking incremental backups every hour.

    Read the article

  • How to recreate missing Team Foundation Server database?

    - by Amadiere
    I've been trying out TFS 2010 Beta 2 on my local machine, or at least I had it installed and was ready to do so. I had some issues with my MSSQL 2008 server, so I completely uninstalled and re-installed it, and that sorted it. However, I'm now in limbo with TFS: I have the software installed, but none of the SQL databases that go with it. I had no data, and I am not precious about how to go about fixing this. I figure completely uninstalling and re-installing might be an idea and will most likely fix it (repair didn't work). Is there a quicker way? Is there a command-line utility I can run, or a SQL script to recreate it all?

    Read the article

  • How do I choose which Ethernet Adapter to bridge in VMPlayer

    - by Catherine MacInnes
    I am running VMware Player 3.1.0 on Ubuntu. The host machine has four Ethernet adapters that are configured to run on four different subnets. I need to run four VMs, each with a single Ethernet adapter bridged onto a specific one of the physical Ethernet adapters. Does anyone know how to do this? Am I simply exceeding the capabilities of VMware Player and do I have to go to one of the other VMware products? If so, which one? Note that I have no need to create additional VMs; these are VMs that are being given to me by companies that want us to develop software for their products.

    Read the article

  • apt-get upgrade gives "403 forbidden" error

    - by 3l4ng
    I'm running Ubuntu 13.04, 64-bit. sudo apt-get update works fine, but when I run sudo apt-get upgrade I get these errors:

        Err http://archive.ubuntu.com/ubuntu/ raring-updates/main python3.3-minimal amd64 3.3.1-1ubuntu5.2  403 Forbidden
        Err http://ppa.launchpad.net/otto-kesselgulasch/gimp/ubuntu/ raring/main gimp amd64 2.8.6-0raring1~ppa  403 Forbidden
        Err http://archive.ubuntu.com/ubuntu/ raring-security/main python3.3-minimal amd64 3.3.1-1ubuntu5.2  403 Forbidden
        Err http://ppa.launchpad.net/otto-kesselgulasch/gimp/ubuntu/ raring/main gimp-help-en all 1:2.8-0raring16~ppa  403 Forbidden
        Failed to fetch http://archive.ubuntu.com/ubuntu/pool/main/p/python3.3/python3.3-minimal_3.3.1-1ubuntu5.2_amd64.deb  403 Forbidden
        Failed to fetch http://ppa.launchpad.net/otto-kesselgulasch/gimp/ubuntu/pool/main/g/gimp/gimp_2.8.6-0raring1~ppa_amd64.deb  403 Forbidden
        Failed to fetch http://ppa.launchpad.net/otto-kesselgulasch/gimp/ubuntu/pool/main/g/gimp-help/gimp-help-en_2.8-0raring16~ppa_all.deb  403 Forbidden
        E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?

    Running sudo apt-get upgrade --fix-missing installs some updates, but the above errors persist when I run apt-get upgrade again. The software update app shows the error: https://www.dropbox.com/s/2cr450557hmahzz/software_update.jpg and selecting continue shows: https://www.dropbox.com/s/l7u32sxyfbxxeeg/soft_upd2.jpg (sorry for the links, I don't have enough rep to post images). I am behind a proxy, but apt-get update and web browsing work without issues. I also do not believe a server being down is causing this, as the problem has persisted for over a month. Any ideas on how to fix this?
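
    (A hedged sketch, not from the original post: since the poster is behind a proxy, it is worth making sure apt itself uses it, as a proxy that only allows browser traffic is a common cause of 403s from the archives. The proxy host and port are placeholders.)

        # Assumption: proxy.example.com:8080 stands in for the real proxy address.
        echo 'Acquire::http::Proxy "http://proxy.example.com:8080/";' | \
            sudo tee /etc/apt/apt.conf.d/95proxy
        sudo apt-get update && sudo apt-get upgrade
        # To see whether the proxy or the mirror is refusing the file, fetch one
        # of the failing packages directly and inspect the response headers:
        wget -S http://archive.ubuntu.com/ubuntu/pool/main/p/python3.3/python3.3-minimal_3.3.1-1ubuntu5.2_amd64.deb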

    Read the article

  • Ubuntu 12.04.1 Update Manager and Synaptic Package Manager not working

    - by Ashoke
    I recently installed Ubuntu 12.04.1, 64-bit, on an Intel quad-core system. There is a red circle with a white 'minus' sign in the upper right hand corner, and at the end of a long message displayed below the red circle it says, in grey, that installed packages have unmet dependencies. None of the following is working:

    1. Ubuntu Software Center does not open.

    2. Update Manager fails with: "Could not initialize the package information. An unresolvable problem occurred while initializing the package information. Please report this bug against the 'update-manager' package and include the following error message: 'E:Encountered a section with no Package: header, E:Problem with MergeList /var/lib/apt/lists/in.archive.ubuntu.com_ubuntu_dists_precise-updates_multiverse_binary-i386_Packages, E:The package lists or status file could not be parsed or opened.'"

    3. Synaptic Package Manager reports "An error occurred" with the following details: "E: Encountered a section with no Package: header. E: Problem with MergeList /var/lib/apt/lists/in.archive.ubuntu.com_ubuntu_dists_precise-updates_multiverse_binary-i386_Packages. E: The package lists or status file could not be parsed or opened. E: _cache->open() failed, please report."

    Otherwise, the computer is working fine with broadband WiFi internet. Please help.
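
    (A hedged sketch of the fix most commonly suggested for this particular MergeList error, not taken from the post itself: remove the cached package lists named in the message and let apt rebuild them.)

        # The corrupt file lives under /var/lib/apt/lists; removing the cached
        # lists is safe, apt simply re-downloads them on the next update.
        sudo rm /var/lib/apt/lists/* -vf    # a warning about the 'partial' directory can be ignored
        sudo apt-get update
        sudo apt-get upgrade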

    Read the article

  • Two Tomcat SSL Providers & One FreeBSD

    - by mosg
    Hello everyone. Question: on FreeBSD 8 I need to have two different HTTPS ports open (443 and 444, for example). In other words, I need two providers working simultaneously:
    1. An ordinary SSL certificate (signed by Thawte) on port 443
    2. A special Russian security provider (DIGTProvider, based on the CryptoPro CSP software) on port 444
    I should also mention that the main provider is the second one. Here are some of the DIGTProvider setup steps:
    - add to ${JRE_HOME}/lib/security/java.security the lines security.provider.N=com.digt.trusted.jce.provider.DIGTProvider and ssl.SocketFactory.provider=com.digt.trusted.jsse.provider.DigtSocketFactory
    - uncomment and edit the HTTPS section in conf/server.xml: sslProtocol="GostTLS" (added)
    - edit bin/catalina.sh and add: export LD_LIBRARY_PATH="${LD_LIBRARY_PATH}:/opt/cprocsp/lib/ia32" and export JAVA_OPTS="${JAVA_OPTS} -Dcom.digt.trusted.jsse.server.certFile=/home//server-gost.cer -Dcom.digt.trusted.jsse.server.keyPasswd=11111111"
    As far as I know, if I just define two SSL connectors in Tomcat's server.xml configuration file, Tomcat will not start, because you can only use one JSSE provider in the JRE. Thanks for help.

    Read the article

  • SBS 08 Backup fail

    - by Bastien974
    Hi, I'm trying to back up my SBS 08 server (only C:) with Windows Server Backup. It fails a few minutes after it starts with: "Backup started at '08/12/2009 1:27:23 PM' failed as Volume Shadow copy operation failed for backup volumes with following error code '2155348022'. Please rerun backup once issue is resolved." In the Event Viewer I have lots of errors: VSS: 12289, SQLVDI: 1, MSSQL$MICROSOFT##SSEE: 18210, MSSQL$MICROSOFT##SSEE: 3041, SQLWRITER: 24583. All VSS and SQL services are started. I have WSUS 3.0 and Exchange 07. I don't have any third-party backup software running at the same time. Thanks for your help!

    Read the article

  • Internet connection issue

    - by Mr New
    For some reason this laptop doesn't want to stay connected to the Internet: I have to restart the DHCP Client service every time to fix the connection. Could someone tell me what's going on, because I didn't use to have to do this? And I'm not sure if these problems are connected, but the sound also disables itself and I have to enable it again, and everything that was using audio has to be restarted in order to hear it, even the browser. There are no external speakers and I didn't install any new software. My laptop is an XPS M1530 running Windows Vista.

    Read the article

  • How do I convince my team that a requirements specification is unnecessary if we adopt user-stories?

    - by Nupul
    We are planning to adopt user stories to capture stakeholder 'intent' in a lightweight fashion rather than a heavy SRS (software requirements specification). However, it seems that though they understand the value of stories, there is still a desire to 'convert' the stories into SRS-like language with all the attributes, priorities, inputs, outputs, sources, destinations, etc. User stories 'eliminate' the need for a formal SRS-like artifact to begin with, so what's the point in having an SRS? How should I convince my team (who are all very qualified CS folks, by the way, both by education and practice) that the SRS would be 'eliminated' if we adopted user stories for capturing the functional requirements of the system? (NFRs etc. can be captured too, but that's not the intent of the question.) So here's my 'workflow' argument: capture initial requirements as user stories and later elaborate them into use cases (which are required to be documented at a low level, i.e. describing interactions with the UI prototypes/mockups, and are a deliverable post-deployment). Thus going from user stories to use cases rather than from user stories to an SRS to use cases. How are you all currently capturing user stories at your workplace (if at all), and how do you suggest I make a case for the absence of an SRS in the presence of user stories?

    Read the article

  • How to share problem solving knowledge in a multiteam group?

    - by jonathan
    I've been working in multi-team groups for as long as I've been a web developer. For me a team can be a lone soldier or several people; generally a company will have multiple teams working on different projects, and once a project is out in the wild, any team can perform the maintenance. This is a small picture, since I'm not talking only about project-wise knowledge but also "craft-wise" knowledge, but it gives an idea of how I'm used to working. So: since we work in modularised teams, sometimes I feel the teams are too tightly enclosed in their own projects. I've seen cases where, after an hour of discussion, someone asked the question aloud and a totally unrelated person answered it in a much simpler fashion. The problem is not so simple to solve, as people tend not to be available all the time; also, sometimes people can't afford the time to go through a problem with the asker, but could do it alone. I've thought about software-based solutions, something along the lines of SE, but I'd like to know other programmers' opinions on the subject. EDIT: I don't know if this is a Wikipedia complex, but I feel that wikis don't encourage the user to actually ask questions, but rather to write articles, and sometimes we don't know what knowledge we need before needing it.

    Read the article

  • What's after the iPad? Check out the new augmented reality glasses

    - by Stephen Slade
    Everyone loves their new iPad! The rich features, portability, plethora of apps and ease of use make it the new clipboard on the factory floor or the electronic notebook for meetings. But how many business people walk into an hour-long meeting and really start typing notes on their PC? Yes, some do, but for the general business public there are new technologies coming down the road. The iPad is the latest hula hoop, and the next-gen device, I feel, is the augmented reality glasses. Your glasses will be your screen, have an earpiece and be wireless enabled; your smart watch will be your electronics, and maybe your belt can be an extended battery pack. I'm anxious to test one of these. The Telegraph writes: "Android software is believed to power the gadget, enabling similar features to its smartphone and tablets. A 3G/4G data connection, motion sensors and GPS navigation are believed to be included in the device's capabilities. The augmented-reality glasses are the culmination of a two-year initiative called Project Glass, developed in the clandestine Google X lab, ..in Mountain View, ...The New York Times suggested they could cost between $250-$500." http://www.telegraph.co.uk/technology/news/9187547/Google-unveils-augmented-reality-glasses.html

    Read the article

  • Slight network lag on Small Business Server 2008 R2

    - by Sir.Nathan Stassen
    I recently upgraded a network from an SBS 2003 server to an SBS 2008 R2 server. Both I and the users have noticed a slight delay in network applications and when browsing network drives on the new server. It is minor, maybe a second or two at most, but I am wondering if anyone knows of any way to optimize the networking so that requests are serviced sooner for the workstations. The network runs at 1 Gb with some 100 Mb devices (mainly network printers). All workstations are XP SP3. The network software runs out of a shared folder mapped as a network drive; there are no SQL databases. The server is a Dell PowerEdge T610 with plenty of RAM, CPU power and storage.

    Read the article

  • Why is LaTeX so big?

    - by putmatrix
    On Windows, MiKTeX comes on a DVD. It's several times bigger than a typical Linux distribution. This makes it impossible to carry LaTeX on a memory stick the way I do with many other useful programs. Why is it so big? I thought it was just a language or system, but I've never seen any programming language with gigabytes of libraries. It just feels wrong when your LaTeX distribution takes up four gigabytes of space and you expect it to be more like, say, 200 MB.

    Read the article
