Search Results

Search found 23021 results on 921 pages for 'process monitoring'.


  • How to install network drivers during installation?

    - by Matt
    I have a server that I'm attempting to install Windows onto. However, the disk is an iSCSI target provided by iPXE. Everything appears to go well until, about 3/4 of the way through the install process, I get an error about a missing critical driver and the installation is cancelled. My guess is that the critical driver is for the network card: it's an Intel NIC, and its drivers are not on the Windows installation CD. I tried slipstreaming them with RT Se7en Lite, but the CD it created doesn't seem to be bootable. I've also had no success making a bootable USB thumb drive or USB HDD, and I suspect a buggy BIOS even though I have the latest version. So, how do I install network drivers during installation? Windows used to offer an F6 step during setup for exactly this, but it seems to be missing in Windows Server 2008. Is there a way to do this, or another method?
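    Worth knowing: in Vista/Server 2008 setup, the old F6 step was replaced by a "Load Driver" button on the disk-selection screen, which accepts drivers from a USB stick or CD. If setup still fails after that point, another option is injecting the NIC driver into the install image beforehand. A rough sketch, assuming a Windows 7-era WAIK whose DISM can service the 2008 image (the paths are placeholders; older images may need the pkgmgr/peimg route instead):

        dism /Mount-Wim /WimFile:D:\sources\boot.wim /Index:2 /MountDir:C:\mount
        dism /Image:C:\mount /Add-Driver /Driver:C:\drivers\intel-nic /Recurse
        dism /Unmount-Wim /MountDir:C:\mount /Commit
        rem repeat for install.wim so the driver is still present after the first reboot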

    Read the article

  • Should one generally develop a client library for REST services to help prevent API breakages?

    - by BestPractices
    We have a project where the UI code will be developed by the same team, but in a different language (Python/Django) from the services layer (REST/Java). The code for each layer exists in a different repository, and each can follow a different release cycle. I'm trying to come up with a process that will prevent or reduce breaking changes in the services layer from the perspective of the UI layer. My first thought is to write integration tests at the UI level that run whenever we build either the UI or the services layer (we're using Jenkins as our CI tool to build the code, which is in two Git repos); if there are failures, then something in the services layer broke and the commit is not accepted. Would it also be a good idea (is it a best practice?) for the developer of the services layer to create and maintain a client library for the REST service, living in the UI layer, that they update whenever there is a breaking change in their service API? We would then have the advantage of a statically-typed API that the UI code builds against: if the client library API changes, the UI code won't compile, so we'll know sooner that there was a breaking change. I'd still run the integration tests when building the UI or services layer to further validate that the integration between the two still works.
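    If you go the integration-test route, even a thin smoke test catches most breakage cheaply. A minimal sketch of a contract check a Jenkins job could run; the URL and the payload field are hypothetical:

        #!/bin/bash
        # fail the build if the service contract drifts from what the UI expects
        BASE=http://services-staging.example.com/api
        status=$(curl -s -o /tmp/user.json -w '%{http_code}' "$BASE/users/42")
        [ "$status" = "200" ] || { echo "GET /users/42 returned $status"; exit 1; }
        grep -q '"email"' /tmp/user.json || { echo "users payload lost the email field"; exit 1; }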

    Read the article

  • Hylafax and "No response to MPS"

    - by Joril
    We have a HylaFAX 5.2.5 installation on CentOS 5, hosted inside a Xen virtual machine. It works quite well, but now I'm in the process of upgrading/migrating it to a KVM virtual machine running Ubuntu 10.04 and HylaFAX 5.5.1 (compiled from source using http://sourceforge.net/projects/hylafax/files/hylafax%20debian%20build%20files/ ). The problem I'm having is that, while receiving works fine, sending faxes is extremely unreliable: I get lots of "No response to MPS repeated 3 tries" and "Failure to transmit clean ECM image data." errors. The line, modem, and configuration files I'm using are the same as before, so I thought it could be a KVM scheduling issue, but even raising cpu_shares from 1024 to 10240 doesn't change a thing. What else could I try? Here's an example log file: http://pastebin.com/cN01cpEs
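    Since fax transmission is timing-sensitive, scheduling jitter on the KVM host is a plausible suspect. Beyond cpu_shares, pinning the guest's vCPU to a dedicated host core may be worth a try; a sketch, with "faxsrv" standing in for the real domain name:

        # pin vCPU 0 of the guest to host core 1, then confirm the share setting
        virsh vcpupin faxsrv 0 1
        virsh schedinfo faxsrv --set cpu_shares=10240
        virsh schedinfo faxsrv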

    Read the article

  • Get cryptic error when trying to create a snapshot of any of my VMs

    - by Zolt
    I'm using Oracle VM VirtualBox. I have 6 VMs that I've imported. When I click on an individual VM and then click the camera icon (or press Ctrl+Shift+S) to take a snapshot, the snapshot process fails and VirtualBox gives the following error: Failed to create a snapshot of the virtual machine vmName. Details: Result Code: VBOX_E_IPRT_ERROR (0x80BB0005) This happens not just for one of my VMs but for all of them, if I try to take a snapshot of each one separately, one at a time. My computer is a Windows 7 machine. I have 200 GB free on my hard drive, and I see no reason why the error should occur. I can import VMs, run them, and clone them without any problems. Can anyone tell me what to do to fix the issue?
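    The GUI error is generic; taking the same snapshot from the command line often surfaces the underlying cause (a locked disk image, an unwritable Snapshots folder, and so on). A quick diagnostic sketch, with "vmName" as the placeholder it already is above:

        VBoxManage snapshot "vmName" take "test-snap" --description "diagnostic"
        VBoxManage showvminfo "vmName"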

    Read the article

  • SharePoint Web Part Constructor Fires Twice When Adding it to the Page (and has a different security context)

    - by Damon
    We had some exciting times debugging an interesting issue with SharePoint 2007 web parts. We had some code in staging that had been running just fine for weeks and had not been touched or changed in about the same amount of time. However, when we tried to move the web part into a different staging environment, the part started throwing a security exception when we tried to add it to a page. After a bit of debugging, we determined that the web part was throwing the exception while trying to access the SPGroups property on the SharePoint site. This was pretty strange, because we were logged in as an admin and the code was working perfectly fine before. During the debugging process, however, we found out that the web part constructor was being fired twice. On one request, the security context did not seem to have everything it needed in order to run. On the other request, the security context was populated with the context of the user making the request (like it normally is). Moving the security code outside of the constructor seems to have fixed the issue. Why the discrepancy between the two staging environments? Turns out we deployed the part originally, then deployed an update with the security code. Since the part was never "added" to the page after the code updates were made (we just deployed a new assembly to make the updates), we never saw the problem. It seems as though the constructor fires twice when you are adding the web part to the page, and when you run the web part from the web part gallery. My only thought on why this would occur is that SharePoint is instantiating an instance to get some information from it, which is odd because you would think that could be done with reflection without requiring a new object. Anyway, the workaround is to just not put anything security-related inside the constructor, or to properly account for the possibility of the security context not being present when the item is being added to the page. Technorati Tags: SharePoint, .NET, Microsoft, ASP.NET

    Read the article

  • Unusually high memory usage on a CentOS VPS with 512 MB guaranteed RAM

    - by Andrei Bârsan
    I'm working on a medium-sized web application written in PHP that's running on a VPS with 512 MB of RAM. The webapp hasn't officially launched yet, so there isn't much traffic going on, just me and a few other people working on it. There is another, slightly smaller webapp also hosted on this machine, among 4-5 other small static sites. We are running CentOS 5 32-bit and cPanel/WHM. Below is the result of running ps aux and, as you can see, it's not using 100% of the RAM. However, the hypanel overview always shows around 500 MB of RAM in use, just for running Apache, MySQL, and the lowest-memory-footprint versions of the mail server, FTP server, etc.

      -bash-3.2# ps aux
      USER       PID %CPU %MEM    VSZ   RSS TTY    STAT START TIME COMMAND
      root         1  0.0  0.0   2156   664 ?      Ss   12:08 0:00 init [3]
      root      1123  0.0  0.0   2260   548 ?      S<s  12:08 0:00 /sbin/udevd -d
      root      1462  0.0  0.0   1812   568 ?      Ss   12:08 0:00 syslogd -m 0
      named     1496  0.0  0.0   3808   820 ?      Ss   12:08 0:00 nsd
      named     1497  0.0  0.0  10672   756 ?      S    12:08 0:00 nsd
      named     1499  0.0  0.0   3880   584 ?      S    12:08 0:00 nsd
      root      1514  0.0  0.1   7240  1064 ?      Ss   12:08 0:00 /usr/sbin/sshd
      root      1522  0.0  0.0   2832   832 ?      Ss   12:08 0:00 xinetd -stayalive -pidfile /var/run/xinetd.pid
      root      1534  0.0  0.1   3712  1328 ?      S    12:08 0:00 /bin/sh /usr/bin/mysqld_safe --datadir=/var/lib/mysql -
      mysql     1667  0.0  2.9 225680 30884 ?      Sl   12:08 0:00 /usr/sbin/mysqld --basedir=/ --datadir=/var/lib/mysql -
      mailnull  1766  0.0  0.1   9352  1100 ?      Ss   12:08 0:00 /usr/sbin/exim -bd -q60m
      root      1797  0.0  0.0   2156   708 ?      Ss   12:08 0:00 /usr/sbin/dovecot
      root      1798  0.0  0.0   2632  1012 ?      S    12:08 0:00 dovecot-auth
      root      1816  0.0  3.0  38580 32456 ?      Ss   12:08 0:01 /usr/local/bin/spamd -d --allowed-ips=127.0.0.1 --pidfi
      root      1839  0.0  1.6  63200 17496 ?      Ss   12:08 0:00 /usr/local/apache/bin/httpd -k start -DSSL
      root      1846  0.0  0.1   5416  1468 ?      Ss   12:08 0:00 pure-ftpd (SERVER)
      root      1848  0.0  0.1   6212  1244 ?      S    12:08 0:00 /usr/sbin/pure-authd -s /var/run/ftpd.sock -r /usr/sbin
      root      1856  0.0  0.1   4492  1112 ?      Ss   12:08 0:00 crond
      root      1864  0.0  0.0   2356   428 ?      Ss   12:08 0:00 /usr/sbin/atd
      dovecot   1927  0.0  0.1   5196  1952 ?      S    12:08 0:00 pop3-login
      dovecot   1928  0.0  0.1   5196  1948 ?      S    12:08 0:00 pop3-login
      dovecot   1929  0.0  0.1   5316  2012 ?      S    12:08 0:00 imap-login
      dovecot   1930  0.0  0.2   5416  2228 ?      S    12:08 0:00 imap-login
      root      1939  0.0  0.1   3936  1964 ?      S    12:08 0:00 cPhulkd - processor
      root      1963  0.0  0.8  15876  8564 ?      S    12:08 0:00 cpsrvd (SSL) - waiting for connections
      root      1966  0.0  0.7  15172  7748 ?      S    12:08 0:00 cpdavd - accepting connections on 2077 and 2078
      root      1990  0.0  0.2   5008  3136 ?      S    12:08 0:00 queueprocd - wait to process a task
      root      2017  0.0  2.9  38580 31020 ?      S    12:08 0:00 spamd child
      root      2018  0.0  0.5   8904  5636 ?      S    12:08 0:00 /usr/bin/perl /usr/local/cpanel/bin/leechprotect
      nobody    2021  0.0  3.2  66512 33724 ?      S    12:08 0:00 /usr/local/apache/bin/httpd -k start -DSSL
      nobody    2022  0.0  3.1  67812 33024 ?      S    12:08 0:00 /usr/local/apache/bin/httpd -k start -DSSL
      nobody    2024  0.0  1.9  64364 20680 ?      S    12:08 0:00 /usr/local/apache/bin/httpd -k start -DSSL
      root      2027  0.0  0.4   9000  4540 ?      S    12:08 0:00 tailwatchd
      root      2032  0.0  0.1   4176  1836 ?      SN   12:08 0:00 cpanellogd - sleeping for logs
      nobody    3096  0.0  1.9  64572 20264 ?      S    12:09 0:00 /usr/local/apache/bin/httpd -k start -DSSL
      nobody    3097  0.0  2.8  66008 30136 ?      S    12:09 0:00 /usr/local/apache/bin/httpd -k start -DSSL
      nobody    3098  0.0  2.8  65704 29752 ?      S    12:09 0:00 /usr/local/apache/bin/httpd -k start -DSSL
      nobody    3099  0.0  3.1  67260 32816 ?      S    12:09 0:00 /usr/local/apache/bin/httpd -k start -DSSL
      andrei    3448  0.0  0.1   3204  1632 ?      S    12:50 0:00 imap
      nobody    3537  0.0  1.9  64308 20108 ?      S    13:01 0:00 /usr/local/apache/bin/httpd -k start -DSSL
      nobody    3614  0.0  1.9  64576 20628 ?      S    13:10 0:00 /usr/local/apache/bin/httpd -k start -DSSL
      nobody    3615  0.0  1.3  63200 14672 ?      S    13:10 0:00 /usr/local/apache/bin/httpd -k start -DSSL
      root      3626  0.0  0.2  10232  2964 ?      Rs   13:14 0:00 sshd: root@pts/0
      root      3648  0.0  0.1   3844  1600 pts/0  Ss   13:14 0:00 -bash
      root      3826  0.0  0.0   2532   908 pts/0  R+   13:21 0:00 ps aux

    Lately, without any significant changes to the configuration, the memory usage started peaking above 512 MB, causing the virtual server to kill Apache, basically murdering our site in the process. Do you have any idea whether this is normal and more resources should be acquired? I don't think so, since there isn't much data or traffic online yet.
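    One thing to check: many VPS plans of this kind are OpenVZ/Virtuozzo containers, and there the panel reports the beancounter figures rather than the RSS numbers ps shows, and it is the beancounters that decide when the kernel starts killing processes. If that applies here (an assumption), a non-zero failcnt column points at the limit being hit:

        free -m                            # what the container itself reports
        sudo cat /proc/user_beancounters   # OpenVZ only: look for non-zero failcnt, e.g. on privvmpages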

    Read the article

  • General video performance affected on Mac OSX 10.5 (PowerBook G4)

    - by r0ca
    Hi all, I'm quite new to Macs and I just got a PowerBook G4 for free. I installed OS X 10.5 on it, and for the first two weeks everything went fairly smoothly, even though this machine is comparable to a P3. I'm not expecting awesome video performance, but I'd at least like to be able to watch some videos on YouTube. Yesterday night I installed Office 2008 for Mac, and this morning, even after a reboot, my computer is much slower than I've known it to be. I watched a YouTube video and the framerate was 1:1. I also notice it on Flash ads; they're way slower! Is there anything I can do to increase video performance, see the list of running processes and which are taking the most GPU or CPU, see what's taking the most RAM, and so on? What would you Mac pros do on an old laptop with OS X 10.5? Thanks!
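    For the process-list part of the question: Activity Monitor (in /Applications/Utilities) shows per-process CPU and memory, and the same data is available in Terminal:

        top -o cpu    # sort the process list by CPU usage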

    Read the article

  • Sync Google Contacts with QuickBooks

    - by dataintegration
    The RSSBus ADO.NET Providers offer an easy way to integrate with different data sources. In this article, we include a fully functional application that can be used to synchronize contacts between Google and QuickBooks. Like our QuickBooks ADO.NET Provider, the included application supports both the desktop versions of QuickBooks and QuickBooks Online Edition.

    Getting the Contacts. Step 1: Google accounts include a number of contacts. To obtain a list of a user's Google Contacts, issue a query to the Contacts table. For example: SELECT * FROM Contacts. Step 2: QuickBooks stores contact information in multiple tables. Depending on your use case, you may want to synchronize your Google Contacts with QuickBooks Customers, Employees, Vendors, or a combination of the three. To get data from a specific table, issue a SELECT query to that table. For example: SELECT * FROM Customers. Step 3: Retrieving all results from QuickBooks may take some time, depending on the size of your company file. To narrow your results, you may want to filter by including a WHERE clause in your query. For example: SELECT * FROM Customers WHERE (Name LIKE '%James%') AND IncludeJobs = 'FALSE'.

    Synchronizing the Contacts. Synchronizing the contacts is a simple process. Once the contacts from Google and the customers from QuickBooks are available, they can be compared and synchronized based on user preference. The sample application does this based on user input, but it is easy to create one that does the synchronization automatically. The INSERT, UPDATE, and DELETE statements available in both data providers make it easy to create, update, or delete contacts in either data source as needed.

    Pre-Built Demo Application. The executable for the demo application can be downloaded here. Note that this demo is built using BETA builds of the ADO.NET Provider for Google V2 and the ADO.NET Provider for QuickBooks V3, and will expire in 2013.

    Source Code. You can download the full source of the demo application here. You will need the Google ADO.NET Data Provider V2 and the QuickBooks ADO.NET Data Provider V3, which can be obtained here.

    Read the article

  • Moving Windows 7 profile to new user

    - by Kevin Grossnicklaus
    I have a laptop which I've been using as part of a corporate network with an AD login (and associated local profile). The laptop is loaded with Windows 7 Ultimate. I need to remove the laptop from this domain and, to start this process, I have already configured a local user on the box for me to use moving forward (granting this user the same local admin rights as the AD user). I'd like to migrate all the files, settings, etc from the local AD profile to the new non-AD profile. Is there a simple way to do this? Anything built into Win 7? As far as basic files I can probably just manually copy all the documents, pictures, music, desktop, favorites, etc... But is there a more streamlined way to move profile information? -Kevin
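    Windows Easy Transfer is built into Windows 7 and can sometimes be used for this by saving the old profile to a transfer file and restoring it under the new account, though it is aimed at machine-to-machine moves. For the manual route, robocopy preserves permissions and skips the junction points that trip up a plain copy. A sketch with placeholder user names, run from an elevated prompt while logged on as a third account (the NTUSER.DAT registry hive can't be copied while its profile is loaded):

        robocopy C:\Users\ad.user C:\Users\local.user /E /XJ /COPYALL /R:1 /W:1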

    Read the article

  • Tool to maintain/keep track of filesystem content integrity?

    - by Jesse
    I'm looking for a tool to maintain the integrity of a filesystem and its contents using checksums: effectively, storing a list of checksum/filename pairs somewhere on the filesystem in a way that can be verified later if files are somehow damaged or lost. Git does what I want, but because it stores the contents of every file in its object database, the disk usage will at least double. And the fact that it does not provide a progress bar when scanning files tells me it was not designed for the multi-terabyte filesystem I have in mind. I can do this crudely by storing the output of md5deep, but is there a tool specifically designed for this purpose, using whatever smarts possible to make the process efficient?
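    AIDE is one tool built specifically around this idea (a database of file checksums verified on demand), though it leans toward intrusion detection. Staying with md5deep, the create/verify cycle can at least be made systematic; a sketch, with the manifest kept outside the tree being checked:

        md5deep -r /data > /root/data-manifest.md5      # build the manifest
        md5deep -r -x /root/data-manifest.md5 /data     # print files whose hash is not in the manifest
        # note: this flags changed and new files, but not deletions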

    Read the article

  • How to establish real-time communication between a shopping cart running MySQL and an internal system running PostgreSQL [closed]

    - by Andrew
    I am thinking about ways of establishing some sort of real-time connection between a MySQL-powered shopping cart and an internal system that runs on PostgreSQL. Could you give me some insight on this topic? For example, I could write some sort of CSV export application, then enable remote MySQL connections over the internet and import the CSV into MySQL directly from a PC. Or upload the CSV and run a cron job on the server. But this way of importing and exporting causes delays, so I would like to link the databases (or something of that sort). I have never done this before and would like to hear some opinions. Another thought: I might implement triggers that would initiate the update process via CSV, but again, I would like to avoid CSV. Do you have any good advice? Maybe some specific examples?
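    If the goal is just to kill the intermediate CSV file, the two command-line clients can be piped into each other directly; a rough sketch with placeholder hosts, tables, and columns (true real-time would still need triggers feeding a queue, or a foreign data wrapper such as mysql_fdw):

        psql -h internal-db -d erp -At -F ',' \
             -c "SELECT sku, name, price FROM products" \
          | mysql --local-infile=1 -h cart-db shop \
              -e "LOAD DATA LOCAL INFILE '/dev/stdin' INTO TABLE products FIELDS TERMINATED BY ','"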

    Read the article

  • How to Tell If Your Computer is Overheating and What to Do About It

    - by Chris Hoffman
    Heat is a computer’s enemy. Computers are designed with heat dispersion and ventilation in mind so they don’t overheat. If too much heat builds up, your computer may become unstable or suddenly shut down. The CPU and graphics card produce much more heat when running demanding applications, and if there’s a problem with your computer’s cooling system, an excess of heat could even physically damage its components.

    Is Your Computer Overheating? When using a typical computer in a typical way, you shouldn’t have to worry about overheating at all. However, if you’re encountering system instability issues like abrupt shutdowns, blue screens, and freezes — especially while doing something demanding like playing PC games or encoding video — your computer may be overheating. This can happen for several reasons: your computer’s case may be full of dust, a fan may have failed, something may be blocking your computer’s vents, or you may have a compact laptop that was never designed to run at maximum performance for hours on end.

    Monitoring Your Computer’s Temperature. First, bear in mind that different CPUs and GPUs (graphics cards) have different optimal temperature ranges. Before getting too worried about a temperature, be sure to check your computer’s documentation — or its CPU or graphics card specifications — and ensure you know the temperature ranges your hardware can handle. You can monitor your computer’s temperatures in a variety of ways. First, you may have a way to monitor temperature that is already built into your system: you can often view temperature values in your computer’s BIOS or UEFI settings screen. This allows you to quickly see your computer’s temperature if Windows freezes or blue screens on you — just boot the computer, enter the BIOS or UEFI screen, and check the temperatures displayed there. Note that not all BIOS or UEFI screens will display this information, but it is very common. There are also programs that will display your computer’s temperature. Such programs just read the sensors inside your computer and show you the value they report, so there is a wide variety of tools you can use for this, from the simple Speccy system information utility to an advanced tool like SpeedFan. HWMonitor also offers this feature, displaying a wide variety of sensor information. Be sure to look at your CPU and graphics card temperatures. You can also find other temperatures, such as the temperature of your hard drive, but these components will generally only overheat if it becomes extremely hot inside the computer’s case; they shouldn’t generate too much heat on their own. If you think your computer may be overheating, don’t just glance at these sensors once and ignore them. Do something demanding with your computer, such as running a CPU burn-in test with Prime95, playing a PC game, or running a graphical benchmark. Monitor the computer’s temperature while you do this, even checking a few hours later — does any component overheat after you push it hard for a while?

    Preventing Your Computer From Overheating. If your computer is overheating, here are some things you can do about it. Dust out your computer’s case: dust accumulates in desktop PC cases and even laptops over time, clogging fans and blocking air flow. This dust can cause ventilation problems, trapping heat and preventing your PC from cooling itself properly. Be sure to clean your computer’s case occasionally to prevent dust build-up; unfortunately, it’s often more difficult to dust out overheating laptops. Ensure proper ventilation: put the computer in a location where it can properly ventilate itself. If it’s a desktop, don’t push the case up against a wall so that the computer’s vents become blocked, or leave it near a radiator or heating vent. If it’s a laptop, be careful not to block its air vents, particularly when doing something demanding. For example, putting a laptop down on a mattress, allowing it to sink in, and leaving it there can lead to overheating — especially if the laptop is doing something demanding and generating heat it can’t get rid of. Check if fans are running: if you’re not sure why your computer started overheating, open its case and check that all the fans are running. It’s possible that a CPU, graphics card, or case fan failed or became unplugged, reducing air flow. Tune up heat sinks: if your CPU is overheating, its heat sink may not be seated correctly or its thermal paste may be old. You may need to remove the heat sink, apply new thermal paste, and reseat the heat sink properly. This tip applies more to tweakers, overclockers, and people who build their own PCs, especially if they may have made a mistake when originally applying the thermal paste.

    This is often much more difficult when it comes to laptops, which generally aren’t designed to be user-serviceable. That can lead to trouble if the laptop becomes filled with dust and needs to be cleaned out, especially if the laptop was never designed to be opened by users at all. Consult our guide to diagnosing and fixing an overheating laptop for help with cooling down a hot laptop. Overheating is a definite danger when overclocking your CPU or graphics card: overclocking will cause your components to run hotter, and the additional heat will cause problems unless you can properly cool your components. If you’ve overclocked your hardware and it has started to overheat — well, throttle back the overclock!

    Image Credit: Vinni Malek on Flickr

    Read the article

  • Chrome pegs CPU playing YouTube videos

    - by AngryHacker
    I am not sure when this started, but the issue definitely exists in the latest version of Chrome (10.0.648.127). Playing a YouTube video at 320p essentially pegs the CPU. The process responsible is WindowServer, which takes up the lion's share of the CPU. When I close the tab, everything goes back to normal. If I try this in Safari or Firefox, CPU usage is within acceptable levels. I am on a 2-3 year old Mac Mini, which is not the latest, but runs most apps fine: a 1.83 GHz Intel Core 2 Duo. Is there a setting within Chrome I can change to fix this issue?

    Read the article

  • mdadm RAID5: recover from double disk failure - with a twist (drive order)

    - by Peter Bos
    Let me acknowledge first off that I have made mistakes, and that I have a backup for most but not all of the data on this RAID. I still have hope of recovering the rest of the data. I don't have the kind of money to take the drives to a recovery expert company. Mistake #0: not having a 100% backup. I know.

    I have an mdadm RAID5 array of 4x3TB: drives /dev/sd[b-e], each with one partition /dev/sd[b-e]1. I'm aware that RAID5 on very large drives is risky, yet I did it anyway.

    Recent events. The RAID became degraded after a two-drive failure. One drive [/dev/sdc] is really gone; the other [/dev/sde] came back up after a power cycle, but was not automatically re-added to the RAID. So I was left with a 4-device RAID with only 2 active drives [/dev/sdb and /dev/sdd]. Mistake #1: not using dd copies of the drives for restoring the RAID. I did not have the drives or the time. Mistake #2: not making a backup of the superblocks and the mdadm -E output of the remaining drives.

    Recovery attempt. I reassembled the RAID in degraded mode with mdadm --assemble --force /dev/md0, using /dev/sd[bde]1. I could then access my data. I replaced /dev/sdc with a spare, empty, identical drive. I removed the old /dev/sdc1 from the RAID with mdadm --fail /dev/md0 /dev/sdc1 (Mistake #3: not doing this before replacing the drive). I then partitioned the new /dev/sdc and added it to the RAID with mdadm --add /dev/md0 /dev/sdc1. It then began to restore the RAID, ETA 300 minutes. I followed the process via /proc/mdstat to 2% and then went to do other stuff.

    Checking the result. Several hours (but fewer than 300 minutes) later, I checked the process. It had stopped due to a read error on /dev/sde1.

    Here is where the trouble really starts. I then removed /dev/sde1 from the RAID and re-added it. I can't remember why I did this; it was late. mdadm --manage /dev/md0 --remove /dev/sde1 mdadm --manage /dev/md0 --add /dev/sde1 However, /dev/sde1 was now marked as a spare. So I decided to recreate the whole array using --assume-clean, using what I thought was the right order, and with /dev/sdc1 missing: mdadm --create /dev/md0 --assume-clean -l5 -n4 /dev/sdb1 missing /dev/sdd1 /dev/sde1 That worked, but the filesystem was not recognized while trying to mount. (It should have been EXT4.)

    Device order. I then checked a recent backup I had of /proc/mdstat, and found the drive order: md0 : active raid5 sdb1[0] sde1[4] sdd1[2] sdc1[1] 8790402048 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU] I then remembered this RAID had suffered a drive loss about a year ago, and recovered from it by replacing the faulty drive with a spare one. That may have scrambled the device order a bit... so there was no drive [3], but only [0], [1], [2], and [4]. I tried to find the drive order with the Permute_array script (https://raid.wiki.kernel.org/index.php/Permute_array.pl), but that did not find the right order.

    Questions. I now have two main questions: 1. I screwed up all the superblocks on the drives, but only gave mdadm --create --assume-clean commands (so I should not have overwritten the data itself on /dev/sd[bde]1). Am I right that in theory the RAID can be restored [assuming for a moment that /dev/sde1 is OK] if I just find the right device order? 2. Is it important that /dev/sde1 be given device number [4] in the RAID? When I create it with mdadm --create /dev/md0 --assume-clean -l5 -n4 /dev/sdb1 missing /dev/sdd1 /dev/sde1 it is assigned number [3]. I wonder if that is relevant to the calculation of the parity blocks. If it turns out to be important, how can I recreate the array with /dev/sdb1[0] missing[1] /dev/sdd1[2] /dev/sde1[4]? If I could get that to work, I could start it in degraded mode, add the new drive /dev/sdc1, and let it resync again.

    It's OK if you would like to point out that this may not have been the best course of action, but you'll find that I've realized this already. It would be great if anyone has any suggestions.
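    The order search itself can be scripted along the lines of the Permute_array idea, testing each candidate read-only. To be clear about the risk: every mdadm --create rewrites the superblocks, so this sketch assumes you accept that (it also answers mdadm's "Continue creating array?" prompt by hand; the chunk size and metadata version come from the mdstat backup above):

        #!/bin/bash
        # try each device order; a read-only fsck that succeeds marks a plausible order
        candidates=(
          "/dev/sdb1 missing /dev/sdd1 /dev/sde1"
          "/dev/sdb1 missing /dev/sde1 /dev/sdd1"
          "missing /dev/sdb1 /dev/sdd1 /dev/sde1"
          # ...the remaining permutations of the three drives plus 'missing'...
        )
        for order in "${candidates[@]}"; do
          mdadm --stop /dev/md0 2>/dev/null
          # $order is unquoted on purpose so it splits into four arguments
          mdadm --create /dev/md0 --assume-clean --metadata=1.2 --chunk=512 -l5 -n4 $order
          if fsck.ext4 -n /dev/md0 >/dev/null 2>&1; then
            echo "plausible order: $order"
            break
          fi
        done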

    Read the article

  • Cannot run update due to a dpkg error with burg-theme-minimal-sir

    - by boywithaxe
    I cannot run an update, or indeed run apt-get remove, due to a dpkg error with a package that's part of super-boot-manager. Running an update returns:

      dpkg: error processing burg-theme-minimal-sir (--configure):
       subprocess installed post-installation script returned error exit status 1

    I tried removing this package alone, with the same error; trying to remove super-boot-manager returns:

      (Reading database ... 225474 files and directories currently installed.)
      Removing burg-theme-minimal-sir ...
      Generating burg.cfg ...
      /usr/sbin/burg-probe: error: cannot stat `/boot/burg/locale'.
      No path or device is specified.
      Try `/usr/sbin/burg-probe --help' for more information.
      dpkg: error processing burg-theme-minimal-sir (--remove):
       subprocess installed post-removal script returned error exit status 1
      No apport report written because MaxReports is reached already
      Removing super-boot-manager ...
      Processing triggers for bamfdaemon ...
      Rebuilding /usr/share/applications/bamf.index...
      Processing triggers for desktop-file-utils ...
      Processing triggers for gnome-menus ...
      Processing triggers for hicolor-icon-theme ...
      Errors were encountered while processing:
       burg-theme-minimal-sir
      E: Sub-process /usr/bin/dpkg returned an error code (1)

    I'm sort of stuck now, and Google has failed me. Has anyone encountered this problem before, or does anyone know a way to fix it?
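    The removal fails because the package's post-removal script runs burg-probe, which stats /boot/burg/locale. Two workarounds suggest themselves: the first simply gives burg-probe the directory it wants, the second disables the broken maintainer script (a blunt instrument, but dpkg keeps those scripts in a known place):

        sudo mkdir -p /boot/burg/locale
        sudo apt-get purge burg-theme-minimal-sir
        # if that still fails, neutralize the script and force the removal:
        sudo mv /var/lib/dpkg/info/burg-theme-minimal-sir.postrm \
                /var/lib/dpkg/info/burg-theme-minimal-sir.postrm.disabled
        sudo dpkg --remove --force-remove-reinstreq burg-theme-minimal-sir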

    Read the article

  • Who is code wanderer?

    - by DigiMortal
    In every area of life there are people with bad habits or behaviors that affect the work process. Software development is not free of such people either. Today I will introduce you to the code wanderer. Who is the code wanderer? Code wandering is more a bad habit than a serious diagnosis. Code wanderers tend to review and “fix” source code in files written by others. When a code wanderer has a few free moments, he or she starts opening code files never seen before and making little fixes to them. Why is the code wanderer dangerous? These fixes seem correct and are usually the first things one would reach for when tidying up code. But since the changes are made by a coder who has no idea about the code he or she is “fixing”, the “fixing” usually ends up messing up working code written by others. Often these “fixes” are not found immediately, because they don't introduce errors detected by compilers, so they easily find their way into production environments; there is also a very good chance that “fixed” code goes through all tests without any problems. How do you stop the code wanderer? The first thing is to talk with the person and explain why those changes are dangerous. It is also good to establish rules that state clearly why, when, and how somebody may change code written by other people. If this does not work, it is possible to isolate the person, so that he or she posts changes to the code repository as patches, and somebody reviews those changes before applying them.

    Read the article

  • Xerox Workcentre 3119 and Linux

    - by Milan Babuškov
    I'm trying to get a Xerox WorkCentre 3119 printer to work on Linux. It's a multifunction device (printer and scanner). I ran the CUPS web interface at http://localhost:631/ and it recognizes the device on the USB port, and even suggests a Gutenprint driver from the list. When I try to print a test page, the printer goes through its "warming up" process (i.e. lights blink and sounds are heard) but does not print anything. There are no errors in /var/log/cups/error_log, and access_log looks as if everything is OK. The printer works fine in Windows XP. Does anyone have any experience with this printer on Linux?
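    Since error_log shows nothing at the default level, turning on debug logging is the usual next step (on very old CUPS versions without cupsctl, set "LogLevel debug" in /etc/cups/cupsd.conf instead):

        sudo cupsctl --debug-logging
        sudo tail -f /var/log/cups/error_log    # then print a test page and watch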

    Read the article

  • Packing jar files into library jar files

    - by Hillel
    Firstly, this question is not about packing a simple jar file (e.g. lwjgl) into a runnable jar file. I know how to do that using JarSplice. So if I have a game which uses JInput, I pack my game jar and jinput.jar using JarSplice and add the natives in the process. The problem arises when I want to create a custom library that uses JInput, and then pack that into my games. See, the whole idea of writing a game library is that I don't ever have to copy code like the wrapper I wrote for JInput's Controller, and I always have a definitive version inside a library jar. Basically, what I want to do is create a jar file of my library, pack jinput.jar into it using JarSplice, possibly with the natives as well, and then when I export a jar of my game, either export it automatically through Eclipse with the library jar or, if that doesn't work, use JarSplice. I've tried several solutions, and nothing works. When I try to pack the game jar and the library jar using JarSplice, I get an error saying that there's a duplicate .project or .classpath file. When I export my game through Eclipse with the library jar, it won't run (which is to be expected), but then, if I try to attach the natives with JarSplice, it doesn't give me any errors; the jar just doesn't run. I'm not expecting anyone to solve this, but if anyone has an idea, something that will let me never look at the gamepad code ever again, that would be awesome. I don't care if I have to package my library jar using JarSplice 5 times, and then do the same with the game jar, as long as it works. Otherwise I'll just have to copy the Gamepad class into every project alongside the library jar. :(
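    One alternative for the library step is to merge jinput.jar into the library jar by hand, which also lets the duplicate Eclipse files be dropped along the way; a sketch with placeholder jar names:

        mkdir merged && cd merged
        unzip -o ../mylibrary.jar -x .project .classpath
        unzip -o ../jinput.jar -x 'META-INF/*'
        jar cf ../mylibrary-with-deps.jar .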

    Read the article

  • postgres memory allocation tuning 2

    - by pstanton
    I've got an Ubuntu Linux system with 12 GB of memory, most of which (at least 10 GB) can be allocated solely to Postgres. The system also has a 6-disk 15k SCSI RAID 10 setup. The process I'm trying to optimise is twofold. Firstly, a single-threaded, single-connection process does many inserts into 2-4 tables linked by foreign keys. Secondly, many different complex queries are run against the resulting data, using GROUP BY extensively; this part especially needs to be optimised. I have four of these processes running at once in order to make use of the quad-core CPU, so there will generally be no more than 5 concurrent connections (1 spare for admin tasks). What configuration changes to the default Postgres config would you recommend? I'm looking for the optimum values for things like work_mem, shared_buffers, etc. I've read the relevant doco. Thanks!
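    As a starting point only (these are common rules of thumb, not values measured against this workload, so benchmark before trusting them), a postgresql.conf sketch for 12 GB of RAM and roughly 5 connections might look like:

        shared_buffers = 2GB            # commonly started near 1/4 of RAM
        work_mem = 256MB                # per sort/hash node; affordable with so few connections
        maintenance_work_mem = 1GB      # speeds index builds and vacuum
        effective_cache_size = 8GB      # tells the planner how much the OS page cache will hold
        wal_buffers = 16MB
        checkpoint_segments = 32        # spreads checkpoint I/O during the bulk inserts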

    Read the article

  • links for 2011-01-10

    - by Bob Rhubart
    Clusterware 11gR2: Setting up an Active/Passive failover configuration (Oracle Luxembourg XPS on Database) Some think that expensive third-party cluster systems are necessary when it comes to protecting a system with an Active/Passive architecture with failover capabilities. Not true, according to Gilles Haro. (tags: oracle otn database) Atul Kumar: Part IX : Install OAM Agent - 11g WebGate with OAM 11g Part 9 of Atul's step by step guide to the installation of Oracle Identity Management. (tags: oracle oam identitymanagement security otn) Michel Schildmeijer: Oracle Service Bus: enable / disable proxy service with WLST Amis Technology's Michel Schildmeijer shares a process he found for enabling / disabling a proxy service within Oracle Service Bus 11g with WLST (WebLogic Scripting tool). (tags: oracle soa servicebus weblogic) @andrejusb: SOA & E2.0 Partner Community Forum XIII - in Utrecht, The Netherlands Oracle ACE Director Andrejus Baranovskis shares a nice plug for the SOA & E2.0 Partner Community Forum XIII coming up in March in the Netherlands. (tags: oracle oracleace otn soa enterprise2.0) Oracle Magazine Architect Column: Enterprise Architecture in Interesting Times Oracle ACE Directors Lonneke Dikmans, Ronald van Luttikhuizen, Mike van Alst, and Floyd Teter and Oracle enterprise architect Mans Bhuller share their thoughts on the forces that are shaping enterprise architecture. (tags: oracle otn architect entarch oraclemag) InfoQ: Deriving Agility from SOA and BPM - Ten Things that Separate the Winners from the Losers In this presentation from SOA Symposium 2010, Manas Deb and Clemens Utschig-Utschig discuss how to derive business agility from SOA and BPM, motivations for agility, developing and nurturing agility, influencers and dependencies, how SOA and BPM enable agility, pitfalls and recommendations for organizational culture, and pitfalls and recommendations for business and technical architectures. (tags: ping.fm)

    Read the article

  • How to kill an alternate X session via the CLI

    - by L. D. James
    Can someone tell me how to remove dormant X sessions? This question is similar to "Logging out other users from the command line", but more specific to controlling X displays, which I find hard to kill. I used who -u to get the sessions on the other screens:

      $ who -u
      user1 :0      2014-08-18 12:08   ?      2891 (:0)
      user1 pts/26  2014-08-18 16:11  17:18   3984 (:0)
      user2 :1      2014-08-18 18:21   ?     25745 (:1)
      user1 pts/27  2014-08-18 23:10  00:27   3984 (:0)
      user1 pts/32  2014-08-18 23:10  10:42   3984 (:0)
      user1 pts/46  2014-08-18 23:14  00:04   3984 (:0)
      user1 pts/48  2014-08-19 04:10   .      3984 (:0)

    kill -9 25745 doesn't appear to do anything. I have a workshop where a number of users use the computer under their own logins. After the workshop is over, a number of logins are left open. I would prefer to kill the open sessions rather than try to log into each user's screen. Again, this question isn't just about logging users out; I'm also hoping to get clarity on killing/removing stuck processes that are hard to kill.

    New info: while still pondering how to kill the process, I wrote the following script, which did it:

      #!/bin/bash
      results=1
      while [[ $results -gt 0 ]]   # -gt, not >: inside [[ ]], > is a string comparison
      do
          sudo kill -9 25745
          results=$?
          echo -ne "Response:$results..."
          sleep 20
      done

    After a graceful waiting period, if there isn't a better answer, I'll mark this as answered with this resolution. This may also resolve problems I have had in the past with other stuck processes.
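    For the original question (rather than one stuck PID), killing everything owned by the stale login usually takes the whole X session down with it; this assumes the user has no other session worth keeping:

        sudo pkill -KILL -u user2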

    Read the article

  • Windows 7 backup network restore: "The network location cannot be reached, 0x800704CF"

    - by Znarkus
    When I try to restore from a backup image, I get this error. After I enter the network address (\\10.0.0.1\backup or \\z\backup), the wizard presents me with the network login dialog, which leads me to believe that it can connect to the network (yes, the share is password protected). I decided to install Windows 7, since I thought that I could restore the image from Windows. The restore process in Windows can locate the backups, but to do an image restore it needs to reboot to the wizard above. Which of course gives the very same error. This is what \\z\backup looks like. Please help, I'm getting desperate. Update: Forgot to mention that the NAS is running Ubuntu, if that's relevant.
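    0x800704CF maps to ERROR_NETWORK_UNREACHABLE, which in the restore environment often means WinRE has no working NIC driver or IP address, rather than a problem with the share itself. The environment has a hidden command prompt that can be used to test this; the driver path below is a placeholder:

        rem press Shift+F10 in the restore wizard, then:
        wpeutil InitializeNetwork
        drvload X:\drivers\nic.inf
        net use Z: \\10.0.0.1\backup /user:backupuser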

    Read the article

  • sudo apt-get install -f doesn't fix broken packages. What now?

    - by Du Oliveira
    $ sudo apt-get install -f
    [sudo] password for ...:

      Reading package lists... Done
      Building dependency tree
      Reading state information... Done
      Correcting dependencies... Done
      The following packages were automatically installed and are no longer required:
        python-pyasn1 libconfig++8 python-twisted-runner linux-headers-3.0.0-12
        libvamp-sdk2 python-twisted-mail libgnomecanvasmm-2.6-1c2a python-twisted-lore
        python-twisted-conch python-twisted-news python-twisted-words python-twisted
        libffado2 linux-headers-3.0.0-12-generic libaubio2
      Use 'apt-get autoremove' to remove them.
      The following extra packages will be installed:
        libmpeg3cine
      The following NEW packages will be installed:
        libmpeg3cine
      0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
      2 not fully installed or removed.
      Need to get 0 B/2,573 kB of archives.
      After this operation, 6,762 kB of additional disk space will be used.
      Do you want to continue [Y/n]? Y
      (Reading database ... 317732 files and directories currently installed.)
      Unpacking libmpeg3cine (from .../libmpeg3cine_1%3a2.2-0.3~ppa1~oneiric1_i386.deb) ...
      dpkg: error processing /var/cache/apt/archives/libmpeg3cine_1%3a2.2-0.3~ppa1~oneiric1_i386.deb (--unpack):
       trying to overwrite '/usr/bin/mpeg3cat', which is also in package mpeg3-utils 1.5.4-5ubuntu1
      Errors were encountered while processing:
       /var/cache/apt/archives/libmpeg3cine_1%3a2.2-0.3~ppa1~oneiric1_i386.deb
      E: Sub-process /usr/bin/dpkg returned an error code (1)

    $ apt-get autoremove

      E: Could not open lock file /var/lib/dpkg/lock - open (13: Permission denied)
      E: Unable to lock the administration directory (/var/lib/dpkg/), are you root?

    (Output translated from Portuguese.)
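    The first failure is a file conflict: libmpeg3cine ships /usr/bin/mpeg3cat, which mpeg3-utils already owns. The last two errors are unrelated; apt-get autoremove simply needs sudo. One common way past the conflict (check first that you actually want the file replaced):

        sudo dpkg -i --force-overwrite \
            /var/cache/apt/archives/libmpeg3cine_1%3a2.2-0.3~ppa1~oneiric1_i386.deb
        sudo apt-get -f install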

    Read the article

  • Dual booting Windows and Arch Linux (with GRUB2) - after using Windows, Windows Boot Manager made first in boot priority list

    - by louis058
    I am dual booting Windows 7 and Arch Linux (both 64bit), with GRUB2, using the 64-bit EFI version. I partitioned my drive into a GPT drive and installed Windows first according to this guide. I then installed Arch Linux using the Beginner's Guide, installing grub2-efi-x86_64 in the process. Everything is working fine now, but with one problem. I can set the boot priority in BIOS (or is it UEFI?) to have GRUB boot try and boot before Windows Boot Manager. Then I chainload Windows Boot Manager using GRUB. However, when I actually use Windows in this manner, upon shutting down and turning on again, or rebooting, Windows seems to set Windows Boot Manager first in the priority list again, with the result being I have to manually set GRUB again, or I can't boot into Linux. My motherboard is an Asrock H61M/USB3, if that helps. I want to know how to turn off this behaviour.
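    From the Arch side, the firmware boot order can be inspected and rewritten with efibootmgr; the entry numbers below are placeholders for whatever the listing shows. If Windows keeps promoting its own entry, re-running the second command (or scripting it at boot) is a common, if inelegant, fallback:

        sudo efibootmgr                 # list entries and the current BootOrder
        sudo efibootmgr -o 0003,0001    # put GRUB's entry first, Windows Boot Manager second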

    Read the article

  • Error with APE Server Installation

    - by sadmicrowave
    I was trying to install APE Server from the .deb file on the APE Server homepage (www.ape-project.org) and ran into an error, so I wanted to remove the installation and reinstall. I did a sudo apt-get remove ape-server, which ran successfully but left ape-server folders in /etc/ and /etc/init.d. Being a newcomer to Linux, I foolishly decided to delete those folders manually. Now when I reinstall the ape-server, those folders don't get recreated, and therefore I cannot run /etc/init.d/ape-server [option], because the file is not found. When I try to sudo apt-get purge (or remove) ape-server, I get the following:

      sudo apt-get purge ape-server
      Reading package lists... Done
      Building dependency tree
      Reading state information... Done
      The following packages will be REMOVED:
        ape-server*
      0 upgraded, 0 newly installed, 1 to remove and 92 not upgraded.
      1 not fully installed or removed.
      After this operation, 1,753kB disk space will be freed.
      Do you want to continue [Y/n]? y
      (Reading database ... 43924 files and directories currently installed.)
      Removing ape-server ...
      invoke-rc.d: unknown initscript, /etc/init.d/ape-server not found.
      dpkg: error processing ape-server (--purge):
       subprocess installed pre-removal script returned error exit status 100
      update-rc.d: /etc/init.d/ape-server: file does not exist
      dpkg: error while cleaning up:
       subprocess installed post-installation script returned error exit status 1
      Errors were encountered while processing:
       ape-server
      E: Sub-process /usr/bin/dpkg returned an error code (1)

    My question is: how do I remove all of the ape-server packages that were installed, so I can reinstall from scratch?
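    Since the pre-removal script only fails because /etc/init.d/ape-server is gone, recreating a harmless stub is usually enough to let the purge finish (a workaround rather than a clean fix):

        printf '#!/bin/sh\nexit 0\n' | sudo tee /etc/init.d/ape-server >/dev/null
        sudo chmod +x /etc/init.d/ape-server
        sudo apt-get purge ape-server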

    Read the article
