Search Results

Search found 6881 results on 276 pages for 'storage spaces'.


  • How to create a readonly root linux: Can be mounted as writeable for persistent changes?

    - by Mr Anderson
    I'd like a read-only file system that runs almost entirely in RAM, but where the compact flash or hard drive can be mounted and made writeable to make persistent changes. How do I do this on Linux? I've looked at several tutorials but none really explain how to create such a system with the option of being able to mount the storage device and make persistent changes. I looked at this so far: http://chschneider.eu/linux/thin_client/ I also looked on the old Gentoo wiki but the article was very specific to Gentoo. I'll be using a Debian-based Linux, but it would be nice if someone could explain how to do this with fairly generic instructions that would work on any Linux distro. Thanks.
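
    A minimal sketch of one common approach, assuming a kernel with overlayfs support; the device name, mount points and tmpfs size below are illustrative, not taken from the post:

        # Keep the real root on CF/HDD mounted read-only and stack a tmpfs on top.
        mkdir -p /ro /rw /newroot
        mount -o ro /dev/sda1 /ro                 # persistent storage, normally read-only
        mount -t tmpfs -o size=256M tmpfs /rw     # runtime writes land here, lost on reboot
        mkdir -p /rw/upper /rw/work
        mount -t overlay overlay \
              -o lowerdir=/ro,upperdir=/rw/upper,workdir=/rw/work /newroot

        # For a persistent change, remount the underlying device read-write briefly:
        mount -o remount,rw /ro
        cp /newroot/etc/myconfig /ro/etc/myconfig # push the change down to real storage
        mount -o remount,ro /ro

    In practice the overlay setup goes into an initramfs hook so it runs before the real init starts; the commands above only show the moving parts.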

    Read the article

  • Estimate compressed file size in tar.gz

    - by liori
    I've got a set of .tar.gz files, which are duplicity backup files (either full backups or incremental ones). I'd like to compute which directories take the most space in the backups. This will most probably differ from which directories take the most space on a live filesystem, because I need to account for how often files change (and therefore take space on incremental backups) and how compressible the files are. I know that while many other archive formats store compressed files as separate entities inside the archive file, .tar.gz files do not, and therefore it is impossible to get the exact amount of storage taken in the archive by a single file after compression. Are there any tools to calculate at least some estimate?
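
    One rough way to estimate this from the shell, assuming GNU tar; it recompresses each member on its own and sums the result per top-level directory, so it is slow and ignores gzip's cross-file context, but it gives a usable ballpark figure (the archive name is illustrative):

        archive=backup.tar.gz
        tar -tzf "$archive" | grep -v '/$' | while IFS= read -r f; do
            size=$(tar -xzOf "$archive" "$f" | gzip -c | wc -c)   # compressed size of this file alone
            printf '%s\t%s\n' "$size" "${f%%/*}"
        done |
        awk -F'\t' '{sum[$2] += $1} END {for (d in sum) printf "%12d  %s\n", sum[d], d}' |
        sort -rn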

    Read the article

  • Are SANs unreliable?

    - by chaos
    So at the place where I wear one of my various hats, this one representing a development rather than admin role, there's been an initiative to move to SANs. So far, I have been spectacularly unimpressed. First it was this behavior where, when MySQL databases are on the SAN, the first few tables that anything tries to hit after the system boots come up as nonexistent and MySQL has to be restarted before it realizes they're actually there. Then today, on multiple systems (including the primary SVN repository, ever-so-wonderfully) we get SAN mounts spewing IO errors and the filesystems going into read-only, which is the kind of behavior I expect from directly mounted naked disks, not fault-tolerant managed storage. Right now, I'm at the point where if I were putting together a project and somebody said "hey we should use SANs", my response would be "GTFO". So basically I want to know whether my experience is typical or even common, or whether I'm having some kind of freakishly bad luck with SANs. The systems these SANs are attached to are all CentOS machines, if that's relevant.

    Read the article

  • I want to change hard drive. How to move system partition with Windows 7?

    - by Semyon Perepelitsa
    I've bought a new hard drive and want to move all my data to it. I had no problem moving all the files on the non-system partition, but I don't know how to move the system partition. Right now I have 3 partitions on the new disk: the first two were created by the Windows installation CD (I tried to move the system using the built-in tools, but it didn't work for me), and the third is filled with my successfully transferred data from the old disk. There are two partitions on the old disk: the first one is the system (Windows 7) and the second one is my old main storage, which I have already moved to the new hard drive, so it is now empty. How can I move Windows 7 over with minimal difficulties and losses, so I can work on the new hard drive just as I did on the old one?

    Read the article

  • Moving MODx Files to Other MODx Website

    - by Austin
    I have one website with MODx installed at www.website.com/modx/ (keep in mind there are other websites in progress on this storage server). My issue is that I'm moving all of this (templates, template variables, chunks, snippets, etc.) to another server that already has MODx installed on it. My first instinct was to go to phpMyAdmin, export the SQL file and import it on the new website's server. However, an error occurred when I attempted to do this: it found many duplicates in fields associated with a PK (due to it being the same website, just a redesign). I don't have to dump the old site's tables and then upload the new SQL file, do I? Please advise.
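
    A hedged sketch of the selective route, assuming a MODx Evolution-style schema with the default modx_ table prefix (adjust the names to whatever your installation actually uses): export only the element tables from the old database and load them into the new one, after clearing or merging whatever already exists there so the primary keys no longer collide.

        mysqldump -u olduser -p old_modx_db \
            modx_site_templates modx_site_tmplvars modx_site_tmplvar_templates \
            modx_site_htmlsnippets modx_site_snippets modx_site_plugins \
            > modx_elements.sql

        mysql -u newuser -p new_modx_db < modx_elements.sql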

    Read the article

  • Retrieving a specific value from “df -h” using shell

    - by diegodias
    When I use df -h, I get the following output: Filesystem Size Used Avail Use% Mounted on /dev/mapper/VolGroup00-LogVol00 59G 2.2G 54G 4% / /dev/sda1 122M 38M 78M 33% /boot tmpfs 1.1G 0 1.1G 0% /dev/shm 10.10.0.105:/somepath 11T 8.4T 2.1T 81% /storage4 10.11.0.101:/somepath 15T 8.9T 5.9T 61% /storage1 /dev/mapper/patha 5.0T 255G 4.8T 5% /storage5_vol0 /dev/mapper/pathb 5.0T 195G 4.9T 4% /storage5_vol1 /dev/mapper/pathc 5.0T 608G 4.5T 12% /storage5_vol2 I want to write a script that gets the value of the Avail column for a specific storage mount. I used to use df -k /storage_name | tail -1 | awk '{print $3}' but a long device name makes df wrap the Filesystem value onto its own line, so the Avail field on the last line is sometimes $3 and sometimes $4. How can I get Avail with a single command regardless of whether the previous columns are present on that line?
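
    Two portable one-liners that avoid the column-shift problem, since df -P (the POSIX output format) never wraps long device names onto a separate line; the mount point is the one from the post:

        df -P -k /storage4 | awk 'NR==2 {print $4}'      # with -P, $4 is always Avail

        # On GNU coreutils 8.21 or newer, awk is not needed at all:
        df -k --output=avail /storage4 | tail -1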

    Read the article

  • ash scripting: space-containing variable refuses to be grepped

    - by Luci Sandor
    I am trying to run the script listed at http://talk.maemo.org/showthread.php?t=70866&page=2 on its intended hardware, a Nokia Linux phone running BusyBox ash. The script receives the name of a WiFi network as a parameter and tries to connect the phone to it. I suspect the script works, but my SSID, BU (802.1x), has spaces and parentheses in it. So when I type at the command prompt autoconnect.sh BU\ \(802.1x\) I get various errors. First, LIST=`iwconfig wlan0 | awk -F":" '/ESSID/{print $2}'` if [ $LIST = "\"$1\"" ]; then ...fails, even though I am connected to the network. The error is not avoided by using single or double quotes instead of escaping characters at the command prompt. Second, if [ -z `iwlist wlan0 scan | grep -m 1 -o \"$1\"` ]; then echo SSID \"$1\" not found; shows that grep does not find the string, although the same grep, typed directly into the command prompt, does find 'BU (802.1x)'. How do I quote $1 in the two circumstances above so that it will work with my network SSID, which contains spaces and parentheses? Thank you.
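
    A hedged rewrite of the two tests, keeping the variable names from the post; the key is to quote every expansion so the SSID is never word-split, and to let grep -F match the parentheses literally:

        ESSID=$(iwconfig wlan0 | awk -F'"' '/ESSID/{print $2}')   # strip the quotes while extracting
        if [ "$ESSID" = "$1" ]; then
            echo "already connected to \"$1\""
        fi

        if iwlist wlan0 scan | grep -qF "ESSID:\"$1\""; then
            echo "SSID \"$1\" found"
        else
            echo "SSID \"$1\" not found"
        fi

    Invoked as: autoconnect.sh "BU (802.1x)" - the double quotes at the prompt are enough, no backslash escaping needed.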

    Read the article

  • hosting environment for delivering FLVs

    - by Gotys
    What would be the ideal hardware setup for pushing lots of bandwidth on a tube site? We have ever-expanding cloud storage where users upload the movies, and then we have these web-delivery machines which cache the FLV files on their local hard drives and deliver them to users. Each cache machine can deliver 1200 Mbit/s if it has 8 SAS hard drives. Such a cache machine costs us $550/month for 8x160GB, so each machine can cache only 160GB at any given time. If we want to cache more than 160GB, we need to add another machine, another $550/month, and so on. This is very uneconomical, so I am wondering if we have any experts here who can figure out a better setup. I've been looking into GlusterFS, but I am not sure if it can push a lot of bandwidth. Any ideas highly appreciated. Thank you!

    Read the article

  • Advanced file compression software for Mac OSX

    - by Steven Roose
    Back when I used Windows, I always used WinRAR for file compression and decompression. It had a fair number of options, like 'just storage' vs. 'hard compression', password protection and archive type. Now that I use Mac OS X, the only compression option I have is the default Finder's Compress to Zip. I downloaded the most popular decompression software, The Unarchiver, but this app can't compress to other archive types either. I went searching, but there seem to be hardly any good advanced compression tools that work nicely on OS X and have the options WinRAR has. (WinRAR works on OS X, but command line only; I'm looking for something with a GUI.) Any ideas? I strongly prefer freeware. I found Archiver and StuffIt, but they are both commercial.

    Read the article

  • Which is the most independent and secure email service? [closed]

    - by Rafal
    I'm looking for a provider with a secure transfer protocol (like HTTPS); secured (as much as possible) from being hacked or spied on; one that won't scan my email in order to display more accurate ads; one that won't sell my personal information; one that won't disclose my emails to some sort of government (it probably must be based outside of US or Chinese jurisdiction, I reckon); and encrypted if possible. It can be simple and without huge storage. If you know/use any similar service I would be really grateful if you could point me there. Cheerz

    Read the article

  • Innodb : cannot allocate the memory for the buffer pool

    - by mingyeow
    My InnoDB keeps crashing; the error message is below. Does anyone know why this keeps happening? InnoDB: by InnoDB 49201616 bytes. Operating system errno: 12 InnoDB: Check if you should increase the swap file or InnoDB: ulimits of your operating system. InnoDB: On FreeBSD check you have compiled the OS with InnoDB: a big enough maximum process size. InnoDB: Note that in most 32-bit computers the process InnoDB: memory space is limited to 2 GB or 4 GB. InnoDB: We keep retrying the allocation for 60 seconds... 0 processes alive and '/usr/bin/mysqladmin --defaults-file=/etc/mysql/debian.cnf ping' resulted in /usr/bin/mysqladmin: connect to server at 'localhost' failed error: 'Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)' Check that mysqld is running and that the socket: '/var/run/mysqld/mysqld.sock' exists! InnoDB: Fatal error: cannot allocate the memory for the buffer pool [ERROR] Default storage engine (InnoDB) is not available
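
    The error is errno 12 (ENOMEM): the OS refused InnoDB's buffer pool allocation, so the usual fixes are shrinking innodb_buffer_pool_size or giving the box more memory/swap. A hedged sketch, with illustrative values only:

        # /etc/mysql/my.cnf (under [mysqld]) - pick a pool size that fits in free RAM
        #   innodb_buffer_pool_size = 128M

        free -m           # how much RAM is actually free?
        swapon -s         # is there any swap at all?

        # Stopgap swap file if there is none (the 1 GB size is an assumption):
        dd if=/dev/zero of=/swapfile bs=1M count=1024
        chmod 600 /swapfile
        mkswap /swapfile && swapon /swapfile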

    Read the article

  • How to copy directories using debugfs?

    - by tjbp
    The debugfs manpage gives the impression that the command 'rdump . .' will recursively copy all files found on the specified filesystem from the debugfs cwd to the native filesystem's cwd. Instead I seem to receive a syntax error, and no copy is initiated. These are the commands I run: cd /path/to/transfer/destination debugfs /dev/sda1 -R rdump . . My task is to copy the entire contents of a clean yet unmountable USB storage device to its host machine's HD. The host machine does not support the inode size used by the USB device's filesystem (256) and its software is not upgradeable, so my intention was to use debugfs to transfer the files. If anyone has any other suggestions for this task I'd be grateful.
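
    A hedged correction of the invocation: -R takes the whole request as a single argument, so it needs quoting, and it comes before the device. Assuming a reasonably recent e2fsprogs:

        cd /path/to/transfer/destination
        debugfs -R 'rdump / .' /dev/sda1     # copy the whole filesystem into the current directory

        # Or interactively, which sidesteps the quoting question entirely:
        #   debugfs /dev/sda1
        #   debugfs:  rdump / .
        #   debugfs:  quit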

    Read the article

  • Sync Windows Live Mail between two computers

    - by Jesper
    Hi, I have a laptop where I have been using Windows Live Mail as my email application for the last year. Yesterday I got a Dell desktop as well, and I am desperately trying to set things up so the desktop and laptop sync email between each other. I'm using Super Flexible Synchronizer to sync the email storage folder to a NAS on my network, so when setting up the desktop I naturally set it to download everything from the NAS. But each time I open Windows Live Mail on the new machine, some emails suddenly show up in duplicate: one is read, the other is not. I have gone through the registry on the new machine and updated an ID I found in 3 places, one being: C:\Users\\AppData\Local\Microsoft\Windows Live Contacts{blah blah blah}\DBStore\contacts.edb Still doesn't seem to be enough. Does anyone have any tips or ideas on how to sync Windows Live Mail between two computers without duplicates and weird behaviour? Grateful for your help, Jesper

    Read the article

  • Is using Capistrano for user maintenance tasks on university lab feasible?

    - by danielkza
    I've been looking around for tools to replace some legacy scripts for creating and maintaining accounts in a university computer lab ecosystem consisting of things like: LDAP and Kerberos for authentication, user home storage and web pages, entries in an SQL database, printing quotas, mailing lists, etc. I'd also like to automate machine and VM membership for Kerberos and Puppet if possible. I've found Capistrano, and while the basic principle of running tasks on remote hosts through SSH seems to fit, and the DSL in Ruby looks quite nice, I've found most documentation relates to application deployment, not generic tasks. I'm also not aware of any good way to parameterize tasks so I can pass in the user information for account creation. Is there something about Capistrano I am missing, or is it not the correct tool for this job? Are there any more useful alternatives?

    Read the article

  • How to change default permission for uploaded files in apache with mounted webroot?

    - by faridv
    I have an Ubuntu Server 11.10 box with Apache 2.2.20, PHP 5.3.6 and an installation of the Joomla CMS. I have used an extra hard disk as my web server storage and mounted it at /data/www/ (I hope that's not where my problem is!). I've set permissions on all files and folders in my web root to 755, and the owner/group for them is set to [default ubuntu user (in my case radio)]:www-data. In the past days I've had serious problems with Joomla not showing newly uploaded images and other files, and I also can't install any extensions. After hours of searching I found out that uploaded files don't have appropriate permissions (they are -rw-------) and the Joomla application cannot read, copy or move them after upload. I'm wondering how I can set a default permission so that all files I upload use it. PS: I've tested umask but it did nothing; I think it has nothing to do with my problem.
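
    One hedged approach on Debian/Ubuntu is to set the umask for the Apache processes in /etc/apache2/envvars and then repair what is already on disk; the paths, owner and modes below are illustrative. Note that PHP creates its upload temp files with mode 0600 regardless of umask, so the application may still need to chmod() files after moving them.

        # /etc/apache2/envvars
        #   umask 022

        sudo service apache2 restart

        # Fix the files that were already uploaded with -rw------- :
        sudo find /data/www -type d -exec chmod 755 {} \;
        sudo find /data/www -type f -exec chmod 644 {} \;
        sudo chown -R radio:www-data /data/www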

    Read the article

  • NFS-Root not working when booting over PXE

    - by Randy
    I am desperately trying to get a diskless client running over PXE boot using an NFS share as the root file system. I did this some years ago, but for some reason I have been stuck on this for days. The TFTP server itself is running fine and booting a netinstaller also works fine. The kernel and initrd are loaded as well, but the boot process stops with this (screenshot) kernel panic. I'm using the standard Squeeze i386 kernel and I have prepared the initrd with this config: MODULES=most BUSYBOX=y KEYMAP=n COMPRESS=gzip BOOT=nfs DEVICE= NFSROOT=auto I also tried MODULES=netboot with the same outcome. My PXE configuration looks like this: LABEL linux KERNEL diskless/debian-default/vmlinuz-2.6.32-5-686 APPEND root=/dev/nfs initrd=diskless/debian-default/vmlinuz-2.6.32-5-686 nfsroot=192.168.140.2:/storage/nfs-boot-images/default-squeeze ip=dhcp rw Furthermore I have captured the network communication of the client via tcpdump and learned that the client isn't even trying to connect to the NFS share. Does anybody have an idea what is going wrong here?
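
    One thing worth double-checking in that entry (unless it is just a transcription slip): the initrd= parameter points at the kernel image rather than the initrd. For comparison, a typical pxelinux entry for an NFS root looks like this, with illustrative file names:

        LABEL linux
          KERNEL diskless/debian-default/vmlinuz-2.6.32-5-686
          APPEND root=/dev/nfs initrd=diskless/debian-default/initrd.img-2.6.32-5-686 nfsroot=192.168.140.2:/storage/nfs-boot-images/default-squeeze ip=dhcp rw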

    Read the article

  • Setting Sql server security rights for multiple situations

    - by DanDan
    We have an application which uses a local instance of SQL Server for its backend storage. The administrator Windows login has had its sysadmin right revoked, and instead two SQL logins have been created: one for the application with a secret password, and one read-only login we let users view the raw data with. This was working fine until we moved to FILESTREAM, which requires integrated Windows authentication, so now the SQL Server logins must be replaced. As a result, I am now reviewing all of our logins, but I am not sure how this is possible. It seems that the application needs full read/write access, yet I still need to lock down writing to the tables so a user cannot log into the database and delete data randomly. Does anyone have any tips for setting multiple levels of security using integrated Windows logins, or can you direct me to any further reading? Thanks.

    Read the article

  • HP ProLiant Smart Array "lock up" code 0x11

    - by ewwhite
    I've a ProLiant DL580 G7 server that experienced a storage subsystem failure during production. The system appeared available and responded to pings, but all I/O access stalled (the system load must have been 100+). The ASR did not trigger at the specified watchdog timeout, so I had to force a reboot from the iLO. During POST, I received the following error: A controller failure event occurred prior to this power-up. (Previous lock up code = 0x11) I haven't pulled the ADU report yet, but I'm curious as to what this error actually means. I was not responsible for the installation, but I can see that the firmware is very old. If there's anything else I should know about the error, I'd like to know for the post-mortem report. Edit: I should add that the server had 95 days of uptime prior to the lock-up.

    Read the article

  • rsync to windows (cygwin)

    - by abergmeier
    We have a Windows file storage server (don't ask) and now I want to rsync with the machine from Windows, Mac and Linux. So I installed freeSSHd (login shell is set to C:/cygwin64/bin/sh.exe) and set up certificates; testing from Linux, test.dat ends up with 0 bytes: ssh myuser@winmachinename "C:/cygwin64/bin/true.exe" > test.dat Even double-checking with actual output works fine: ssh myuser@winmachinename "C:/cygwin64/bin/ls.exe" > test.dat Now, when I call rsync: rsync --progress -avz -e ssh myuser@winmachinename:/c/Users ~/test it fails with: protocol version mismatch -- is your shell clean? (see the rsync man page for an explanation) rsync error: protocol incompatibility (code 2) at compat.c(174) [Receiver=3.1.0] As far as I read the docs, this should not happen when the first test is successful!? I am out of ideas by now; any recommendations on how to debug this? EDIT: rsync versions: Windows has rsync 3.0.9, protocol version 30; Linux has rsync 3.1.0, protocol version 31.
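
    Since the true.exe test already suggests the shell is clean, a hedged next suspect is the remote rsync binary not being found on the PATH that freeSSHd provides; naming it explicitly with --rsync-path is an easy way to test that (the path mirrors the one used in the post and is an assumption):

        ssh myuser@winmachinename "C:/cygwin64/bin/rsync.exe --version"

        rsync --progress -avz -e ssh \
              --rsync-path="C:/cygwin64/bin/rsync.exe" \
              myuser@winmachinename:/c/Users ~/test

    If the remote command fails to start, or the login shell prints anything at all before rsync does, that by itself produces the "protocol version mismatch -- is your shell clean?" error, even though 3.0.9 and 3.1.0 would otherwise interoperate fine.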

    Read the article

  • Skipping hardlinks when using TSM Backup

    - by Lars Haugseth
    We need to back up a filesystem with lots of hardlinks. Since there are several hardlinks for each "true" file, we would like to skip all the hardlinks when backing up the filesystem, to avoid n exact copies of each file. The backup is done using Tivoli Storage Manager Backup, and we've been unable to get it to treat hardlinks as anything other than separate files to be backed up alongside each other. In case it's relevant for possible solutions, I'd like to note that it's possible to tell a hardlink from a proper file by the filename: foobarbaz-123.ext # file foobarbaz-123-1.ext # hardlink foobarbaz-123-2.ext # hardlink barbazfoo-456.ext # file barbazfoo-456-1.ext # hardlink barbazfoo-456-2.ext # hardlink barbazfoo-456-3.ext # hardlink That is, all hardlinks have two hyphens in the filename, whereas proper files have just one. The server is running Ubuntu Linux, and the files are situated on a GFS volume on our SAN.
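
    Two hedged shell checks that may help in building an exclude list (the directory path and extension are illustrative, not from the post): one uses the naming convention described above, the other asks the filesystem directly for inodes with more than one link.

        # Names with two hyphens before the extension are the hardlinks:
        find /san/volume -type f -name '*-*-*.ext' > /tmp/hardlink-excludes.txt

        # Independent of naming: every regular file whose inode has more than one link
        find /san/volume -type f -links +1 -printf '%i %p\n' | sort -n

    How that list is then fed into the TSM client's include/exclude rules depends on the dsm.sys/dsm.opt setup, so the exact option syntax should come from the TSM documentation rather than from here.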

    Read the article

  • Which way should we choose to shorten backup time?

    - by facebook-100005613813158
    A company performs a full backup of its data on a daily basis for disaster recovery purposes. However, their backup process cannot be completed within the assigned backup time window. What would you recommend to this company about how to restructure its backup environment in order to minimize the backup time? We have 4 candidates: 1. Perform LAN-based backup; 2. Weekly full backup and daily incremental; 3. Weekly full backup and daily cumulative; 4. Add more ISLs to increase bandwidth. When comparing incremental backup with cumulative backup, incremental backup time is surely shorter than cumulative backup time. But I don't know whether adding more ISLs is allowed in an existing storage system, or whether that operation can really shorten the backup time.

    Read the article

  • Windows 2008 DHCP service fails - "...failed to see a directory server for authorization."

    - by ewwhite
    I have a small environment running Windows 2008 R2 where the DHCP service on the domain controller fails every two weeks. The most visible error is Event ID 1059, and the Event Viewer message is: "The DHCP service failed to see a directory server for authorization." The setup features two domain controllers and the usual services and roles (file, print, Exchange). Restarting the service fails for a variety of reasons; I've had the following messages at different times: "Not enough storage is available to complete this operation". "Unable to determine the DHCP Server version for the Server 192.168.x.x". "The DHCP service has detected that it is running on a DC and has no credentials configured for use with Dynamic DNS registrations initiated by the DHCP service." A reboot of the domain controller resolves the issue for ~2 weeks. The systems are virtualized and there are no network connectivity issues. Any ideas what's happening here?

    Read the article

  • Western Digital My Book World drops off network

    - by Macha
    Most of the storage in my house relies on a WD My Book World Edition 500GB network drive. I threw out the vendor crapware they give you to access it (a trial version of Mionet) after it started nagging me to upgrade, and set it up as a standard network drive using Windows' Map Network Drive. However, since then, it has been dropping off the network after 30 minutes of non-usage. The only way to get it back on is to switch it off and on again at the plug socket. Does anyone know what is causing this, and hopefully how to fix it? EDIT: it's the original "blue rings" version with the latest firmware.

    Read the article

  • Simple copy to pen-drive - 0x80070057

    - by yzraeu
    Hello guys, I have had this problem for a while and still haven't found the answer. I'm copying a specific 10MB file to my pen-drive, from any folder on the PC to any folder on the pen-drive, and all I get is this: 0x80070057 The parameter is incorrect I simply cannot copy the file at all! The pen-drive in this case is my Nokia 5800, in "Mass Storage" mode. Sometimes I cannot copy a single MP3 file of 5 or 7MB, so I have to disconnect and connect again. The source file is not corrupted and the destination works fine with other files; it's just with some files. If I change to another pen-drive, it works fine.

    Read the article

  • MS SQL dts to ssis migration error

    - by Manjot
    Hi, I have migrated some DTS packages to SSIS 2005 using the "Migration" wizard. When I try to run one, it fails saying I need a higher version of SSIS, even though the destination SSIS server is at level 9.0.4211. I then dug into the package using Business Intelligence Development Studio and saw that one of the package's subtasks is a "Transform Data Task" (the DTS version), and the package fails to run it. The storage location for this DTS task is set to "Embedded in Task"; I didn't touch it. Why didn't the wizard convert this task to an SSIS Data Flow task? Any help please? Thanks in advance.

    Read the article
