Search Results

Search found 59449 results on 2378 pages for 'disk error'.


  • Cryptswap boot error - can't mount?

    - by woody
    I believe I have my swap set up, but I am not sure, because at startup I get something along the lines of "could not mount /dev/mapper/cryptswap1, M for manual, S for skip". Yet it appears to be mounted. I have already tried this solution with no success. When I run free -m the output is:

        total used free shared buffers cached
        Mem: 3887 769 3117 0 54 348
        -/+ buffers/cache: 366 3520
        Swap: 4026 0 4026

    sudo blkid gives:

        /dev/sda1: UUID="9fb3ccd6-3732-4989-bfa4-e943a09f1153" TYPE="ext4"
        /dev/mapper/cryptswap1: UUID="bd9fe154-8621-48b3-95d2-ae5c91f373fd" TYPE="swap"

    cat /etc/crypttab shows:

        cryptswap1 /dev/sda5 /dev/urandom swap,cipher=aes-cbc-essiv:sha256

    My /etc/fstab is:

        # /etc/fstab: static file system information.
        #
        # Use 'blkid' to print the universally unique identifier for a
        # device; this may be used with UUID= as a more robust way to name devices
        # that works even if disks are added and removed. See fstab(5).
        #
        # proc /proc proc nodev,noexec,nosuid 0 0
        # / was on /dev/sda1 during installation
        UUID=9fb3ccd6-3732-4989-bfa4-e943a09f1153 / ext4 errors=remount-ro 0 1
        # swap was on /dev/sda5 during installation
        #UUID=bb0e378e-8742-435a-beda-ae7788a7c1b0 none swap sw 0 0
        /dev/mapper/cryptswap1 none swap sw 0 0

    The output of cat /proc/swaps is:

        Filename Type Size Used Priority
        /dev/dm-0 partition 4123644 0 -1

    Is my swap not set up correctly, and how can I fix the boot message?
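    For what it's worth, one direction commonly suggested for this boot message (not verified against this particular setup) is to give the random-key swap entry an explicit offset, so that re-encrypting the partition at each boot does not overwrite the header that identifies the underlying device; a minimal sketch, where the offset option and its value are assumptions:

        # /etc/crypttab -- sketch only; offset= and its value are assumptions
        cryptswap1 /dev/sda5 /dev/urandom swap,offset=1024,cipher=aes-cbc-essiv:sha256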

    Read the article

  • HTTP header 302 error

    - by Katherine Katie
    The response headers are:

        Status: HTTP/1.1 302 Found
        Connection: close
        Pragma: no-cache
        Cache-Control: no-cache
        Location: /
        Location: /NKiXN/

    I don't know how it got this way. I was using the W3 Total Cache plugin, but I have deactivated it for now. Please help me solve this: the site is dropping in the search engine rankings and Googlebot is unable to follow it. Urgent help required; if this is a configuration problem with the server, please let me know the solution. Site: http://onlinecheapestcarinsurance.co.uk/
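    As an aside, a quick way to look at the raw redirect without a browser or plugin in the way is a HEAD request from the command line; a diagnostic sketch using the site named above:

        # Show only the headers of the first response
        curl -I http://onlinecheapestcarinsurance.co.uk/
        # Follow the redirects and report the final status code and URL
        curl -sIL -o /dev/null -w '%{http_code} %{url_effective}\n' http://onlinecheapestcarinsurance.co.uk/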

    Read the article

  • Install Lightscribe on 64 bit AMD Error

    - by user170573
    I am trying to install LightScribe on 64-bit Ubuntu 12.04. I have installed the 32-bit libs, but I keep getting the following message:

        tedsch47@Ted-Laptop:~/Downloads/Programs$ sudo dpkg --install --force architecture lightscribe-1.18.27.10-linux-2.6-intel.deb
        (Reading database ... 574566 files and directories currently installed.)
        Preparing to replace lightscribe:i386 1.18.27.10 (using lightscribe-1.18.27.10-linux-2.6-intel.deb) ...
        Unpacking replacement lightscribe:i386 ...
        Setting up lightscribe:i386 (1.18.27.10) ...
        ln: failed to create symbolic link `/usr/lib/libstdc++.so.5': File exists

    How do I fix this?
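    One possible reading of the final line (an assumption, not a confirmed fix) is that a stale /usr/lib/libstdc++.so.5 link is blocking the package's own post-install step; checking what is already there before removing anything would be the cautious first move:

        # Sketch only: inspect the existing file/symlink before touching it
        ls -l /usr/lib/libstdc++.so.5
        # If it turns out to be a leftover symlink nothing else owns, removing it
        # lets the package recreate it on the next install attempt
        sudo rm /usr/lib/libstdc++.so.5
        sudo dpkg -i --force-architecture lightscribe-1.18.27.10-linux-2.6-intel.deb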

    Read the article

  • 504 Gateway Time-out after PHP fatal error

    - by tiagojsag
    I'm using nginx and php-fpm to develop a Symfony2-based website under Ubuntu 12.10 (yes, I know I'm using a beta OS). Everything was working fine until, due to an error in my code, I called a nonexistent function and got the following: Fatal error: Call to a member function (....) This isn't a problem in itself (it's a bug in my code, easily fixable), but after it happens, no other page loads. My browser just keeps trying to load the page from the web server until nginx times out (after roughly 30 s, which appears to be some default timeout) and returns: 504 Gateway Time-out. Restarting php-fpm solves the issue. The nginx logs show a timeout message, and nothing appears in the php-fpm logs, even when I set them to debug level. I tried switching from fpm to fastcgi, and the same thing happens. I've looked around, but all similar errors are related to big requests or file handling, which isn't the case here. All the pages on my website load in a few seconds, even under development conditions (no caching, etc.).
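    Not an answer to the underlying bug, but the kind of php-fpm pool setting often used so that a wedged worker is recycled instead of stalling every later request; the file path and the values below are assumptions, offered only as a sketch:

        ; Sketch for /etc/php5/fpm/pool.d/www.conf -- values are illustrative only
        pm = dynamic
        pm.max_children = 10
        ; Kill an individual request that runs longer than this instead of letting it hang
        request_terminate_timeout = 30s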

    Read the article

  • Hard Drive missing drive space

    - by Chance Robertson
    I have a 500 GB hard drive that I previously attached to my Mac. I detached the drive without going through the eject procedure; when I did this a message showed up, which of course I did not read. I could not use the drive until I reformatted it. Now, when I attach the drive it says it is formatted NTFS and has 280.39 of 500 GB free, but when I open it in Windows Explorer, Finder, or Linux, it only shows a handful of files totaling 54 MB. How can I find out what is taking up all the space?
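    A diagnostic sketch for the "where did the space go" part, assuming the drive is mounted at /media/usbdisk on the Linux box; hidden entries are listed explicitly because Explorer and Finder may not show them:

        # Per-directory totals, largest last
        du -sh /media/usbdisk/* /media/usbdisk/.[!.]* 2>/dev/null | sort -h
        # List everything, including hidden files and directories
        ls -la /media/usbdisk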

    Read the article

  • Time Machine link error

    - by robinjam
    When attempting to perform a Time Machine backup, the backup appears to proceed as normal until it finishes copying files, at which point it complains that an "error occurred while linking files for" one of my external hard disks. During the previous backup that particular disk was in fact empty, so I can't understand why Time Machine is attempting to link back to it. But alas. I've verified all my disks using Disk Utility and they all appear to be fine. Does anybody know what causes this error, and how I might go about fixing it? Failing that, is there a way to force Time Machine to create a brand new backup rather than an incremental one? Thanks in advance!
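    On the "start from scratch" part, one approach often mentioned is to delete the existing backup set so the next run has nothing to link against; a sketch, assuming OS X 10.7 or later, where the volume path and machine name are placeholders and the command permanently removes the old backup history:

        # List existing backups, then remove the machine's whole backup set
        tmutil listbackups
        sudo tmutil delete "/Volumes/TimeMachine/Backups.backupdb/<machine name>"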

    Read the article

  • How to Get the Folder Name of a USB Disk?

    - by Kate Moss' Open Space
    When a USB disk is plugged into a CE/Mobile-based device, how do you know the folder name of the mount point? Usually it should be "USB Disk", but it really depends on the OS image builder; they may change the folder name for whatever reason. FindFirstFlashCard looks simple and promising, but the drawback is that it is only available on Windows Mobile. Moreover, this family of find-flash-card APIs enumerates every mountable file system, including SD cards, CF cards, and so on, which we don't want. So I am going to introduce another way, via the Storage Manager. Here are the steps.

    Read the article

  • Files not accessible

    - by gokul
    My system is a PC whose C:\ drive is out of space, so I tried to delete some files and clean up to get more space. I found that the %Temp% folder (C:\Users\Username\AppData\Local\Temp) takes up a lot of space, so I tried to delete the files in it. But when I open it, it alerts me with the message "C:\Users\Username\AppData\Local\Temp is not accessible. The file or directory is corrupted and unreadable." What should I do? Is deleting files from Temp harmful to the computer?
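    For the "corrupted and unreadable" part, the standard first step is a filesystem check; a sketch, assuming the affected folder lives on C: (run from an elevated Command Prompt; a check of the system drive will be scheduled for the next reboot):

        chkdsk C: /f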

    Read the article

  • Caching/preloading files on Linux into RAM

    - by Andrioid
    I have a rather old server that has 4 GB of RAM and is pretty much serving the same files all day, but it is doing so from the hard drive while 3 GB of RAM are "free". Anyone who has ever tried running a RAM drive can witness that it's awesome in terms of speed. The memory usage of this system usually never goes above 1 GB of the 4 GB, so I want to know if there is a way to use that extra memory for something good. Is it possible to tell the filesystem to always serve certain files out of RAM? Are there any other methods I can use to improve file-reading performance by using RAM? More specifically, I am not looking for a 'hack' here. I want file system calls to serve the files from RAM without needing to create a RAM drive and copy the files there manually, or at least a script that does this for me. Possible applications here are: web servers with static files that get read a lot, application servers with large libraries, and desktop computers with too much RAM. Any ideas? Edit: I found this very informative: The Linux Page Cache and pdflush. As Zan pointed out, the memory isn't actually free. What I mean is that it's not being used by applications, and I want to control what should be cached in memory.
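    As one concrete illustration of the page-cache route (not a recommendation specific to this server), files can be pre-warmed into cache either by simply reading them once or with a dedicated tool; the paths and the availability of vmtouch are assumptions:

        # Read every static file once so the kernel's page cache holds it
        find /var/www/static -type f -exec cat {} + > /dev/null
        # Or, with the vmtouch utility installed, load (and optionally lock) files in memory
        vmtouch -t /var/www/static
        vmtouch -d -l /var/www/static   # daemonize and lock pages; needs enough RAM and privileges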

    Read the article

  • Configure htaccess to show index.php as the default page instead of permissions error

    - by Jan De Laet
    I am having a problem with my .htaccess. I have this to secure all my documents:

        Order Deny,Allow
        Deny from all
        Allow from 127.0.0.1
        <FilesMatch "\.(htm|html|css|js|php)$">
            Order Allow,Deny
            Allow from all
            Allow from 127.0.0.1
        </FilesMatch>

    Now everything works fine, except that the index page of www.mysite.com doesn't work and gives me the notification: "You don't have permission to access / on this server." How can I fix this? If I go to www.example.com/index.php it works, but if I surf to www.example.com I get this message.
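    One direction sometimes suggested (unverified here) is that the request for the bare directory "/" is what the outer Deny catches, because it is not a file and so never matches the FilesMatch block; a sketch of explicitly allowing that case, where the "^$" alternative covering the empty file name is an assumption:

        DirectoryIndex index.php
        Order Deny,Allow
        Deny from all
        Allow from 127.0.0.1
        # The added "^$" alternative is meant to cover the bare directory request
        <FilesMatch "(^$|\.(htm|html|css|js|php)$)">
            Order Allow,Deny
            Allow from all
            Allow from 127.0.0.1
        </FilesMatch>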

    Read the article

  • How can I create a bootable DOS USB stick?

    - by Grzenio
    I need to use this utility to change one of the parameters of my new WD hard drive: http://support.wdc.com/product/download.asp?groupid=609&sid=113&lang=en It has truly unreadable instructions:

        Extract wdidle3.exe onto a bootable medium (floppy, CD-RW, network drive, etc.).
        Boot the system with the hard drive to be updated to the medium where the update file was extracted to.
        Run the file by typing wdidle3.exe at the command prompt and press enter.

    I understand that this bootable medium should be some version of DOS? How can I make my USB stick a bootable medium compatible with this utility (I don't have a diskette drive)? I have Windows 7 and Debian Linux installed.
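    The usual answer for such firmware tools is a FreeDOS-based stick rather than real MS-DOS; a sketch from the Debian side using UNetbootin, whose availability and menu options are assumptions here:

        sudo apt-get install unetbootin
        unetbootin   # in the GUI, pick Distribution = FreeDOS and the USB stick as target
        # Afterwards, copy wdidle3.exe onto the stick, boot from it, and run wdidle3.exe
        # from the FreeDOS prompt as the WD instructions describe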

    Read the article

  • Trying to install SawMill and getting the following error:

    - by Itai Ganot
    When I run the installer I get:

        [root@sawmill sawmill]# ./sawmill
        ./sawmill: error while loading shared libraries: libldap-2.3.so.0: cannot open shared object file: No such file or directory

    Using "yum provides libldap_r-2.3.so.0" I found that the package which includes this file is compat-openldap-2.3.43-2.el6.i686. After installing it I still get the error. If I use locate, I can find the file in /usr/lib, so I tried to create a symbolic link to it from /usr/lib to /usr/lib64, but I still get the same error. I also tried setting LD_LIBRARY_PATH=/usr/lib/ and LD_LIBRARY_PATH=/usr/lib64, but I still cannot run the Sawmill installation script. Does anyone know how to solve this issue?
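    A diagnostic sketch that often narrows this down: check whether the binary is 32-bit or 64-bit, and which libraries the loader still cannot resolve; the right package (i686 vs x86_64) then follows from that:

        file ./sawmill
        ldd ./sawmill | grep "not found"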

    Read the article

  • Thunderbird can't open due to GLib error

    - by Elli
    I recently updated Lubuntu to 13.10, and now, a few days later, Thunderbird has stopped working. It just won't open. When I try to start it from the terminal I get the following text:

        user@user-rechner:~$ thunderbird
        (process:6231): GLib-CRITICAL **: g_slice_set_config: assertion 'sys_page_size == 0' failed
        (thunderbird:6231): GLib-GObject-WARNING **: Attempt to add property GnomeProgram::sm-connect after class was initialised
        (thunderbird:6231): GLib-GObject-WARNING **: Attempt to add property GnomeProgram::show-crash-dialog after class was initialised
        (thunderbird:6231): GLib-GObject-WARNING **: Attempt to add property GnomeProgram::display after class was initialised
        (thunderbird:6231): GLib-GObject-WARNING **: Attempt to add property GnomeProgram::default-icon after class was initialised
        GNOME-Tastaturkürzel-Verzeichnis »/home/user/.gnome2/accels« konnte nicht angelegt werden: Keine Berechtigung
        (Translation: the GNOME keyboard-shortcut directory /home/user/.gnome2/accels could not be created: permission denied)

    I have already reinstalled Thunderbird once and deleted the .thunderbird folder in my home directory, but it still won't work. I hope someone can help me. Thanks.
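    The last line of the output (permission denied while creating ~/.gnome2/accels) suggests an ownership or permission problem in the home directory, perhaps left over from an earlier run as root; a sketch of checking and repairing that, on the assumption that this is indeed the cause:

        ls -ld ~/.gnome2 ~/.gnome2/accels
        sudo chown -R "$USER":"$USER" ~/.gnome2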

    Read the article

  • Bad performance with Linux software RAID5 and LUKS encryption

    - by Philipp Wendler
    I have set up a Linux software RAID5 on three hard drives and want to encrypt it with cryptsetup/LUKS. My tests showed that the encryption leads to a massive performance decrease that I cannot explain. The RAID5 is able to write 187 MB/s [1] without encryption. With encryption on top of it, write speed is down to about 40 MB/s. The RAID has a chunk size of 512K and a write intent bitmap. I used -c aes-xts-plain -s 512 --align-payload=2048 as the parameters for cryptsetup luksFormat, so the payload should be aligned to 2048 blocks of 512 bytes (i.e., 1 MB). cryptsetup luksDump shows a payload offset of 4096, so I think the alignment is correct and fits the RAID chunk size. The CPU is not the bottleneck, as it has hardware support for AES (aesni_intel). If I write to another drive (an SSD with LVM) that is also encrypted, I get a write speed of 150 MB/s. top shows that the CPU usage is indeed very low; only the RAID5 xor takes 14%. I also tried putting a filesystem (ext4) directly on the unencrypted RAID to see if the layering is the problem. The filesystem decreases the performance a little bit as expected, but by far not that much (write speed varying, but around 100 MB/s). Summary:

        Disks + RAID5: good
        Disks + RAID5 + ext4: good
        Disks + RAID5 + encryption: bad
        SSD + encryption + LVM + ext4: good

    The read performance is not affected by the encryption; it is 207 MB/s without and 205 MB/s with encryption (also showing that CPU power is not the problem). What can I do to improve the write performance of the encrypted RAID? [1] All speed measurements were done with several runs of dd if=/dev/zero of=DEV bs=100M count=100 (i.e., writing 10 GB in blocks of 100 MB). Edit: If this helps, I'm using Ubuntu 11.04 64-bit with Linux 2.6.38. Edit 2: The performance stays approximately the same if I pass a block size of 4 KB, 1 MB or 10 MB to dd.
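    Two further measurements that can help isolate the dm-crypt layer, sketched with an assumed mapping name and the same dd pattern used above; cryptsetup benchmark needs a reasonably recent cryptsetup and was not part of the original tests:

        # Raw cipher throughput as seen by the kernel crypto API
        cryptsetup benchmark
        # Write through the dm-crypt mapping while bypassing the page cache
        # (destructive: this overwrites the mapping's contents, as the tests above already do)
        dd if=/dev/zero of=/dev/mapper/md0_crypt bs=100M count=100 oflag=direct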

    Read the article

  • Google Earth 7 (32-bit) on 12.10 runs without error but there is no image (globe)

    - by Dennis
    Everything seemed to install fine. I can start Google Earth and all layers are available; I can even zoom in and look at 3D buildings. But there is absolutely no image data displayed at all. If you look at the whole globe, the outlines are there on an invisible globe. As you zoom in, the base looks dark grey, almost black, but there is no imagery. I have tried:

        Tools > Options > Graphics > Safe Mode
        Tools > Options > Texture colors (all combinations)
        Tools > Options > Cache (tried several changes to the numbers)

    lspci shows: Display controller: Intel Corporation Mobile 915GM/GMS/910GML Express Graphics Controller (rev 03). I am running on a Dell Inspiron 6000 laptop (1.5 GB memory).
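    One low-risk thing to try (an assumption, not a documented fix for this chipset) is clearing Google Earth's local cache, which on Linux lives under the home directory, and then restarting the program:

        rm -rf ~/.googleearth/Cache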

    Read the article

  • What is the best drive cleaner?

    - by allindal
    What is the best drive "cleaner" application, i.e. an application that deletes roaming data, temp files, and various other useless caches? Something similar to CCleaner, but more powerful. I need it to delete more than the basic stuff, such as duplicated copies of complex files or other redundancies (for example, every game ships its own copy of the DirectX suite), without deleting program-essential files, obviously. I know most of this comes down to my own selection, but I haven't seen anything that lets me select types of files to delete, not just specific files.

    Read the article

  • Is it possible to FORMAT an external hard disk that has been encrypted using Storagecrypt?

    - by Pandian John
    Basically, the big problem is that about 680 GB of data on my Seagate 2 TB external HD is lost because I was experimenting with a piece of software called StorageCrypt. I used it a few months ago, and today I tried it again, but I didn't know that the old password was still set on the hard disk when I pressed the encrypt button. I have forgotten that password, which is disappointing, not to mention that the software uses 128-bit AES encryption, so there is no way I am going to recover that data. My question is: is it possible to format a hard disk that has been encrypted? What I mean is, is it possible to completely wipe the data so the drive is just like newly bought, so that I can use my external hard disk again? (I tried to format it via right-click > Format, but the size of the disk is shown as 1 MB.) Answers would be very much appreciated. Thanks.
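    Since Explorer only sees the 1 MB placeholder volume, formatting from there won't help; a sketch of wiping the drive completely with diskpart (run as administrator; this destroys everything on the selected disk, and the disk number below is a placeholder that must be checked against the "list disk" output):

        diskpart
        list disk
        select disk 2
        clean
        create partition primary
        format fs=ntfs quick
        assign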

    Read the article

  • A space-efficient guest filesystem for grow-as-needed virtual disks?

    - by Steve Schnepp
    A common practice is to use non-preallocated virtual disks. Since they only grow as needed, they are perfect for fast backup, overallocation, and creation speed. Since file systems are usually built on physical disks, they tend to use the whole available area[1] in order to increase speed[2] or reliability[3]. I'm searching for a filesystem that does the exact opposite: one that tries to touch the minimum number of blocks needed, through aggressive block reuse. I would happily trade some performance for space usage. There is already a similar question, but it is rather general; I have a very specific goal: space efficiency.

        [1] Like page caching uses all the free physical memory.
        [2] Canonical example: online defragmentation.
        [3] Canonical example: snapshotting.
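    Not an answer at the filesystem-allocation level, but a related practical angle: if the guest filesystem and virtualization stack support TRIM/discard, blocks freed inside the guest can be handed back to the sparse image, which attacks the same space-efficiency goal from the other side. A sketch, with the mount point and discard support both assumed:

        # Inside the guest: tell the block layer which blocks are unused
        sudo fstrim -v /
        # Or mount with continuous discard (can cost performance); /etc/fstab entry sketch:
        # /dev/vda1  /  ext4  defaults,discard  0 1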

    Read the article

  • Heimdal error: Decrypt integrity check failed for checksum type

    - by user880414
    When I try to authenticate against heimdal-kdc, I get this error in the KDC log:

        (enctype aes256-cts-hmac-sha1-96) error Decrypt integrity check failed for checksum type hmac-sha1-96-aes256, key type aes256-cts-hmac-sha1-96

    and authentication fails. Authentication with kinit works, however. My krb5.conf is:

        [logging]
            default = FILE:/var/log/krb5libs.log
            kdc = FILE:/var/log/krb5kdc.log
            krb5 = FILE:/var/log/krb5.log
        [libdefaults]
            default_realm = AUTH.LANGHUA
            clockskew = 300
        [realms]
            AUTH.LANGHUA = {
                kdc = AUTH.LANGHUA
            }
        [domain_realm]
            .langhua = AUTH.LANGHUA
        [kdc]

    When I add the line require-preauth = no to krb5.conf (in the [kdc] section), I get this error instead:

        krb5_get_init_creds: Client have no reply key
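    This class of error usually means the service's keytab no longer matches the key the KDC holds (for example after the principal was re-created); a diagnostic sketch using Heimdal's tools, where the keytab path and principal name are assumptions:

        # List what the keytab contains, including key version numbers
        ktutil -k /etc/krb5.keytab list
        # On the KDC, compare with the key version stored in the database
        kadmin -l get host/server.auth.langhua@AUTH.LANGHUA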

    Read the article

  • What's the best way to clone multiple PCs from one machine?

    - by Jason T.
    Where I work we have dozens and dozens of old ThinkPad laptops. A lot of them can be reused, but not for our needs; they have long since been replaced. The higher-ups have decided to donate them to charity, and for better or for worse I have been tasked with reimaging them. I took one laptop, installed the factory copy of Windows, updated it, and configured it appropriately. Now I'm trying to image it onto dozens of other laptops. What's some good software to do this? First I used Clonezilla to clone the HDD in the laptop to an internal drive in an external enclosure, and it worked. Then I tried taking the base image out, connecting it externally to a laptop that needed to be imaged, and I got it to work a few times. So far so good, right? Well, once I informed my boss of my findings and what I wanted to do, the images started to fail on new laptops. One of three things would happen: the ThinkPads would just blink at me and Windows wouldn't load; or Windows would load but freeze within two minutes; or, last but not least, the laptops would BSOD during the Windows XP boot. These laptops are not going to be used by the company; they're going to charity. So can anyone else recommend a way to reimage multiple laptops?
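    Whatever imaging tool ends up being used, the symptoms described (boot failure, freezes, BSODs on machines that differ from the reference laptop) are the classic sign of an image that was not generalised first; a sketch of the usual XP-era step, with the path being the conventional location after extracting DEPLOY.CAB from the XP CD (an assumption for this environment):

        REM Run on the reference laptop immediately before capturing the image
        C:\sysprep\sysprep.exe -reseal -mini -pnp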

    Read the article

  • Installing HTK Error

    - by Alex Madill
    I am having an issue when I try to make HTK. ./configure worked perfectly fine for me, but when I try to make I get:

        zodiac@Zodiac:~/Downloads/htk$ make all
        (cd HTKTools && make all) \
        || case "" in *k*) fail=yes;; *) exit 1;; esac;
        make[1]: Entering directory `/home/zodiac/Downloads/htk/HTKTools'
        make[1]: Nothing to be done for `all'.
        make[1]: Leaving directory `/home/zodiac/Downloads/htk/HTKTools'
        (cd HLMTools && make all) \
        || case "" in *k*) fail=yes;; *) exit 1;; esac;
        make[1]: Entering directory `/home/zodiac/Downloads/htk/HLMTools'
        make[1]: Nothing to be done for `all'.
        make[1]: Leaving directory `/home/zodiac/Downloads/htk/HLMTools'

    Thanks in advance.
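    For reference, "Nothing to be done for 'all'" is make reporting that it considers the targets up to date rather than a compile failure; a quick sanity check (assuming the tree really has not been built yet and the top-level Makefile provides a clean target) is to force a clean rebuild and watch where it stops:

        make clean
        make all
        sudo make install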

    Read the article

  • Is Joerg Schilling's "sdd" a full replacement for "dd"?

    - by fishtoprecords
    I'm trying to use 'sdd' on my Debian system and can't get one set of options to work. They do work in 'dd', so I am wondering whether I am specifying them incorrectly, whether sdd didn't implement them, or whether it is something else. What I want to do is:

        sdd if=/dev/hdh1 of=/bay5/imagebay1 bs=4096 conv=sync,noerror

    If I leave out the "conv=..." option, it works, or at least starts copying data:

        sdd if=/dev/hdh1 of=/bay5/imagebay1 bs=4096

    Can you shed a bit of light?
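    It may simply be that sdd does not accept dd's conv= keyword and expects its own flag spellings instead; checking its built-in help before relying on any particular option name is the safest route, since the names in that output, not any guessed here, are authoritative:

        # Print sdd's own option list
        sdd -help
        # Fallback that definitely understands these options
        dd if=/dev/hdh1 of=/bay5/imagebay1 bs=4096 conv=sync,noerror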

    Read the article

  • 11.10 7600GT display error

    - by Justin Ray
    I used to use Ubuntu back in version 8 and it never gave me any problems, but recently I did a fresh install of 11.10. Now, when I install the NVIDIA restricted drivers, I can't change my screen resolution from 640x480, and the display menu says "cannot detect display". There is no real way to navigate the screen because the windows don't fit, and I'm not very keen on the command prompt. Please help; I love Ubuntu!
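    One recovery route that does not depend on a usable desktop (offered as an assumption, not a verified fix for the 7600 GT on 11.10): switch to a text console with Ctrl+Alt+F1, regenerate the X configuration for the proprietary driver, and restart the display manager:

        sudo nvidia-xconfig
        sudo service lightdm restart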

    Read the article

  • Can't start Windows 7 after cloning HDD

    - by Paul
    Brief description: I cloned HDD1 to HDD2.

        HDD1 partition 1 boots
        HDD1 partition 2 boots
        HDD2 partition 1 boots
        HDD2 partition 2 does not boot Windows, but is bootable in general

    Now verbosely: in all cases the computer is the same. I have two Windows 7 installations on HDD1, and both boot fine; I choose between them using the standard Windows 7 boot loader menu. Technically there are 4 partitions: a 100 MB boot loader partition (active), Windows 7 copy 1 (25 GB), Windows 7 copy 2 (150 GB), and a working partition. All are primary. In the past few days I cloned the whole of HDD1 to HDD2, which is the same size (but 2.5-inch form factor), as-is, using MiniTool Partition Wizard. Everything was copied, all files are accessible, there are no faults in the file system structure, and even the boot loader wasn't damaged, so I didn't have to repair it. But I can boot only the first installation of Windows 7 (it boots without issues). When I choose the second installation, I immediately get a completely black screen with no text, cursor, or other data; the HDD isn't accessed after that. This black screen responds to Ctrl-Alt-Delete, which reboots the computer. I did some experimenting: I installed Windows 7 to that partition, and it booted fine. Then I renamed "Windows" to "Windows.old" and copied the Windows directory from HDD1 as it was, using Far Manager, and got the same trouble: a black screen. (Of course I performed the renaming and copying from the other copy of Windows.) So it seems the problem lies inside this installation of Windows, somewhere in its files.
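    Copying the Windows directory by hand cannot carry over the boot configuration, which may still point at the original disk; a sketch of rebuilding the boot entry for the cloned installation from a Windows 7 install/repair disc, where the drive letters are placeholders that depend on how the recovery environment maps the partitions:

        REM D: = the cloned Windows 7 partition, C: = the active boot loader partition (assumptions)
        bcdboot D:\Windows /s C:
        bootrec /rebuildbcd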

    Read the article
