Search Results

Search found 41561 results on 1663 pages for 'linux command'.

  • LUKS-Encrypted Root Partition in Ubuntu 9.04

    - by Martindale
    I have a LUKS-encrypted root partition onto which I have installed Ubuntu 9.04. I have of course placed /boot on a separate ext2 partition, and my boot loader loads and functions correctly. However, I can't seem to get my initrd to open the LUKS-encrypted root using the appropriate /dev/mapper/ device. What hooks and scripts do I need to add to get this to function correctly, and what is the correct way to regenerate my initrd? I can chroot into this install and everything works fine; I just can't get it to actually boot. Help!
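
    A sketch of the standard Debian/Ubuntu route, assuming the encrypted partition is /dev/sda2 and the mapping is named cryptroot (both names are assumptions; adjust to your layout). Once cryptsetup is installed inside the chroot and /etc/crypttab describes the mapping, initramfs-tools pulls in the cryptroot hook automatically:

      # inside the chroot
      apt-get install cryptsetup
      echo "cryptroot /dev/sda2 none luks" >> /etc/crypttab
      # /etc/fstab should mount /dev/mapper/cryptroot as /
      update-initramfs -u -k all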

    Read the article

  • Preferred apache permissions for www files with several authors

    - by user1316464
    I can't for the life of me figure out how to design the permission scheme for my Apache files. My requirements seem pretty simple: Apache should have standard permissions of RX on directories and R on files; web authors should have RWX on directories and RW on files; I don't want to give any access to "other"; and I want new files/folders to inherit the proper permissions. Here are the schemes I've tried. First, 570 for directories and 460 for files, owner Apache, group Webdev: the problem here is that new files created by users in the Webdev group are owned by user:Webdev, and Apache can't read them. If Apache were in the group Webdev, it would also have the wrong permissions (i.e., it would have write permission on files). Second, 750 for directories and 640 for files, owner Webdev, group Apache (Webdev being a member of Apache): the problem here is that there is only one webdev account and I have multiple people who need access to contribute. In theory this would work with only one developer if Webdev were also a member of the Apache group. Any ideas?
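
    One scheme that meets all four requirements, sketched under the assumptions that the docroot is /var/www/site, the authors are in group webdev, and Apache runs as user apache: give the tree to the webdev group with setgid directories for inheritance, and grant Apache read-only access as a named user through POSIX ACLs (the filesystem must be mounted with ACL support):

      chgrp -R webdev /var/www/site
      find /var/www/site -type d -exec chmod 2770 {} +   # setgid: new files keep the group
      find /var/www/site -type f -exec chmod 660 {} +
      setfacl -R -m u:apache:rX /var/www/site            # Apache: read files, enter dirs
      find /var/www/site -type d -exec setfacl -d -m u:apache:rX,g:webdev:rwX {} +  # defaults for new files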

    Read the article

  • Lost Page Write I/O Errors on CentOS LVM setup

    - by Gregg Leventhal
    I have a CentOS 6 box with an LVM setup in which one of the PVs is a USB disk (I know). It is getting the error: Oct 30 10:57:07 alpha01 kernel: lost page write due to I/O error on dm-3 Oct 30 10:57:07 alpha01 kernel: Buffer I/O error on device dm-3, logical block 4 which is causing problems with all of the LVs on it. pvs shows the PV as unknown device. I can ls into the logical volumes and they show up in lvdisplay, but first I get a bunch of I/O errors. I made sure the cables between the USB drive and the machine are secure. What should I do to get this back up and running for the time being? Should I unmount each LV and run an fsck.ext4 on each one, like fsck.ext4 -y /dev/vg1/lv_logvolname?
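
    For getting it back up for the time being, a rough recovery sketch (vg1 and the LV name are taken from the question; run this only once the kernel can see the disk again):

      umount /dev/vg1/lv_logvolname          # unmount every LV on the affected PV
      pvscan                                 # the "unknown device" PV should reappear
      vgchange -ay vg1                       # reactivate the volume group
      fsck.ext4 -fy /dev/vg1/lv_logvolname   # then check each LV while unmounted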

    Read the article

  • How can I set deadline as the I/O scheduler for USB Flash devices by using udev rules?

    - by ????
    I have set CFQ as the default I/O scheduler. I often get bad performance when I write data to a Flash device; this is resolved if I use deadline as the I/O scheduler for USB Flash devices. I can't always change the scheduler manually, so I think writing udev rules is a good idea. Can someone please write rules for me? I want: when I plug in a USB device, detect the type of the device; if it is a portable USB hard disk, do nothing (I think that if a device has more than one partition, it is always a portable hard disk); if it is a USB Flash device, set deadline as its scheduler.
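
    A sketch of such a rule. Reliably telling a Flash stick from a portable hard disk is the fragile part, and the partition-count heuristic is not expressible in udev, so this version keys on the device being USB-attached and flagged removable, which catches most sticks (the file name and match details are assumptions):

      # /etc/udev/rules.d/60-usb-flash-scheduler.rules
      ACTION=="add|change", SUBSYSTEM=="block", KERNEL=="sd[a-z]", ENV{ID_BUS}=="usb", ATTR{removable}=="1", ATTR{queue/scheduler}="deadline"

    Reload with udevadm control --reload-rules, replug the stick, and verify with cat /sys/block/sdX/queue/scheduler.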

    Read the article

  • Unable to use cloned VM, OpenSUSE, VirtualBox

    - by Kremchik
    I've cloned a VM and now while booting it I see this message: Trying manual resume from /dev/sda1 Invoking userspace resume from /dev/sda1 resume: libgcrypt version: 1.5.0 Trying manual resume from /dev/sda1 Invoking in-kernel resume from /dev/sda1 Waiting for device /dev/disk/by-id/ata-VBOX_HARDDISK_.....-part2 to appear: ... Could not find /dev/disk/...-part2 Want me to fall back to /dev/disk/...-part2 (Y/n) If I press 'Y' it tries to boot again, fails, and exits to /bin/sh. If I press 'n' it exits to /bin/sh immediately. I've read a solution here: http://diggerpage.blogspot.com/2011/11/cannot-boot-opensuse-12-after-cloning.html but I don't understand how to access the files on the disk to edit /etc/fstab and /boot/grub/menu.lst.
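
    To reach those files, boot the VM from the openSUSE install ISO in rescue mode (or attach the virtual disk to a second, working VM) and mount the root partition; a sketch, with the partition number as an assumption:

      mount /dev/sda2 /mnt            # the root partition; adjust the number
      ls -l /dev/disk/by-id/          # the IDs the cloned disk actually carries
      vi /mnt/etc/fstab               # swap the stale by-id paths for the new ones
      vi /mnt/boot/grub/menu.lst      # fix root= and resume= the same way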

    Read the article

  • Unable to start Apache after changes to rc.conf and resolv.conf

    - by shupru
    I had a working configuration this morning with the following simple /etc/rc.conf: ifconfig_rl0="DHCP" ifconfig_xl="inet 192.168.1.11 netmask 255.255.255." defaultrouter="192.168.1.1" I added the following lines: firewall_enable="YES" firewall_type="SIMPLE" firewall_logging="YES" sshd_enable="YES" apache_enable="YES" mysql_enable="YES" My httpd.conf includes: NameVirtualHost 192.168.1.11 <VirtualHost 192.168.1.11> ... </VirtualHost> Now Apache and the SSH server are down. I changed rc.conf back to the last working configuration and still have no SSH or Apache. apachectl start #--> /usr/local/sbin/apachectl start: httpd could not be started apachectl status #--> Looking up localhost Making http connection to localhost Alert!: Unable to connect to remote host.
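
    Since this is FreeBSD's rc.conf, a plausible culprit is that the ipfw SIMPLE ruleset stayed loaded after the revert and is blocking ports 22 and 80; a sketch for testing from the local console (not over SSH), plus a config sanity check:

      apachectl configtest            # rule out an httpd.conf syntax error first
      ipfw list                       # what did the SIMPLE ruleset actually install?
      sysctl net.inet.ip.fw.enable=0  # temporarily disable ipfw to test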

    Read the article

  • Explanation of nodev and nosuid in fstab

    - by Ivan Kovacevic
    I see those two options constantly suggested on the web when someone describes how to mount a tmpfs or ramfs, often together with noexec, but I'm specifically interested in nodev and nosuid. I basically hate blindly repeating what somebody suggested without real understanding, and since I only see copy/paste instructions on the net regarding this, I ask here. This is from the documentation: nodev - Don't interpret block special devices on the filesystem. nosuid - Block the operation of suid, and sgid bits. But I would like a practical explanation of what could happen if I leave those two out. Let's say I have configured a tmpfs or ramfs (without these two options set) that is accessible (read+write) by a specific non-root user on the system. What can that user do to harm the system? (Excluding the case of consuming all available system memory, in the case of ramfs.)
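
    A concrete sketch of the two attacks those flags block, assuming a tmpfs at /mnt/scratch mounted without them. The non-root user cannot create either file directly, but they can arrive through a crafted filesystem image, a careless root script, or an archive unpacked as root; from then on the unprivileged user can exploit them:

      # planted by root (or baked into a mounted image):
      cp /bin/sh /mnt/scratch/sh && chmod u+s /mnt/scratch/sh   # setuid-root shell
      mknod /mnt/scratch/disk b 8 0                             # node for the raw /dev/sda

      # then, as the unprivileged user:
      /mnt/scratch/sh -p                        # root shell, unless mounted nosuid
      dd if=/mnt/scratch/disk of=/tmp/raw.img   # read the raw disk, unless nodev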

    Read the article

  • How could I portably split large backup files over multiple discs?

    - by sourcejedi
    Context: I make backups/archives, primarily of photos. I'm experimenting with Bup, which is designed for backup to hard disk; basically it creates Git repos which include packfiles of up to 1 GB. But I still need last-ditch backups to keep offline and move offsite (and keeping them on read-only media is good too!). What are the options for archiving and splitting large files over several discs like CDs (and reading them back!)? I'd prefer methods which will stay readable in the future; which are portable, e.g. to Windows; and which have known, simple implementations, so I could re-implement them myself if necessary. (Using Bup packs will stretch my robustness budget, so I want to be confident about how the other parts of the system would behave.) I heard split archives are possible with both ZIP and 7-Zip. Is that right?
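
    For the splitting step itself, POSIX split(1) already ticks every box: the pieces stay readable, reassembly works on Windows too (copy /b, 7-Zip, or a two-line script), and the format is trivial to re-implement; a sketch sized for CD-R:

      split -b 650m photos.tar photos.tar.part-   # ~650 MB pieces, one per disc
      sha256sum photos.tar.part-* > SHA256SUMS    # burn the checksums with each disc
      cat photos.tar.part-* > photos.tar          # reassembly is plain concatenation

    And yes, split archives exist in both ZIP and 7z (e.g. 7z a -v650m), but plain split keeps the pieces format-agnostic.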

    Read the article

  • How do I map a network drive in Ubuntu? I want to save my Firefox downloads directly in the mapped network drive

    - by NJTechie
    I work in an environment wherein files are exchanged over email and then processed into databases. In Windows, mapping a network drive and storing files from Firefox/Chrome downloads directly into a folder on the network drive is a breeze. How do I achieve the same in Ubuntu? I don't see the SFTP'ed drive/directory as an option in Firefox's Downloads settings. Thanks in advance!
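
    One way to give the browser a real filesystem path is to mount the remote directory with sshfs instead of GVFS; a sketch, with the host and remote path as hypothetical examples:

      sudo apt-get install sshfs
      mkdir -p ~/netdrive
      sshfs user@fileserver:/exchange ~/netdrive   # hypothetical host and directory
      # point Firefox's download folder at ~/netdrive like any local directory
      fusermount -u ~/netdrive                     # unmount when done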

    Read the article

  • Apache worker is crashing after 3,000 users

    - by user1618606
    I activated the Apache worker MPM on my VPS and I'm having problems: the website crashes when about 3,000 users are accessing it. I'm using http://whos.amung.us/stats/2jzwlvbhvpft/ as a counter. My Apache worker configuration: KeepAlive On MaxKeepAliveRequests 0 KeepAliveTimeout 1 <IfModule mpm_worker_module> ServerLimit 20000 StartServer 8000 MinSpareThreads 10400 MaxSpareThreads 14200 ThreadLimit 5 ThreadsPerChild 5 MaxClients 20000 MaxRequestsPerChild 0 </IfModule> The VPS runs 64-bit Debian with a LAMP stack, 14 GB of memory, and 24 GHz of CPU. What can I do to get better performance?
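
    As posted, the worker section cannot be doing what was intended: the directive is StartServers (Apache rejects StartServer), and 8,000 starting processes at 5 threads each would mean 40,000 threads at boot. A more conventional sketch for a few thousand concurrent clients, every number an assumption to tune against the 14 GB of RAM:

      <IfModule mpm_worker_module>
          StartServers          4
          ServerLimit         125
          ThreadLimit          64
          ThreadsPerChild      64
          MaxClients         8000      # ServerLimit * ThreadsPerChild
          MinSpareThreads     128
          MaxSpareThreads     512
          MaxRequestsPerChild 10000    # recycle children to bound memory leaks
      </IfModule>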

    Read the article

  • VLC Dynamic Range compression multiple songs

    - by Sion
    In my collection of music I have some songs which seem to be compressed nicely, but alongside those I have songs which are overly quiet compared to the louder, compressed ones. So maybe the problem isn't compression but average volume. Would the Dynamic Range Compressor in VLC work for this type of problem, or would I have better luck using external speakers and running it through a guitar compressor?
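
    If the real issue is average loudness rather than dynamic range, VLC's volume normalizer filter may fit better than its compressor; a command-line sketch (the same filter also appears under the audio filter preferences):

      vlc --audio-filter normvol --norm-buff-size 20 --norm-max-level 2.0 ~/Music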

    Read the article

  • Rails /tmp/cache/assets permissions issue using Debian virtual machine hosted on OS X Lion

    - by Jim
    I am running Parallels Desktop 7 on OS X Lion. I have a VM with Debian installed, and inside that VM I set up a Rails development environment. I am using Parallels Tools to share my OS X home directory with the VM; the goal is to run the Rails server on the VM but host the files on OS X (so they are automatically backed up, and so I can use tools like TextMate to develop with). Everything seems to work with the shared directory: my Debian user can read, write, and execute files. However, when I cloned a recent Rails project from Git, I got an error message when it tried to compile the CSS assets. My symptoms are exactly the same as in this question: http://stackoverflow.com/questions/7556774/rails-sprocket-error-compiling-css-assest-chown-issue I believe this is permissions-based, but it is really weird. My entire Rails project directory has permissions set to 777 and my Debian user owns it. If I navigate into /tmp/cache/assets, those permissions are the same. However, the three-character directories Rails is creating (DCE, DA1, D05, etc.) are being created without write permissions! If I refresh the Rails page a few times, about 4 or 5 (with Rails creating new three-character directories every time), eventually it will create one of the directories with the proper 777 permissions and everything will work. This persists until I make a change to the CSS files and it has to recompile. Does anyone have any idea what might be going on here? I can't fathom why it is creating temp directories with incorrect permissions, or why after a few refreshes the good permissions kick in and it works. It definitely seems to be an issue with the share, since if I move the project into a different directory on the VM, it works fine. On the OS X side, I've given the shared folder 777 permissions as well, but no dice. Any ideas? Update: I've found that the number of times I need to refresh before it works is not random; it depends on how many assets are being compiled. For example, if I edit one of my CSS files and there are four CSS files in the app/assets/stylesheets directory, I have to refresh four times before the app finally works without the operation-not-permitted error.
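
    Given that the same tree behaves once it is off the share, one workaround (a sketch; the paths are hypothetical) is to keep the code on the Parallels share but relocate tmp/, where the asset cache lives, onto the VM's native filesystem through a symlink:

      mv ~/shared/myapp/tmp ~/myapp-tmp      # native ext filesystem inside the VM
      ln -s ~/myapp-tmp ~/shared/myapp/tmp   # Rails follows the symlink for tmp/cache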

    Read the article

  • Setting up a web server so it is easy to migrate

    - by Nyxynyx
    Hi, I am about to move my site from a VPS to another host's dedicated server. One of my concerns is scaling the site in the future in a way that involves another change of server. Since I am starting the dedicated server from scratch with only the OS, I need to install the whole web stack: Apache and its mods, PHP, MySQL, PostgreSQL, Tomcat, Solr, and a few other pieces of software like ImageMagick and git. Question: is there a way for me to set up this new dedicated server such that I can easily migrate the entire site, both the technology stack and the code, to a newer server (an upgrade from this dedicated one) without reinstalling and reconfiguring everything? The code for the website is handled by git and GitHub, so that's not a problem; I'm more concerned about the rest of the required software. Side question: the current VPS uses CentOS with cPanel, and it seems that many packages on yum are outdated and cPanel interferes with the installation of many packages. Which OS should I go with for the new server? Ubuntu?
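
    Short of full configuration management (Puppet or Chef), a provisioning script versioned in the same Git repo gets most of the way there: the next migration becomes clone-and-run. A minimal sketch assuming Ubuntu, with the package list as an assumption:

      #!/bin/sh
      set -e                              # stop on the first failure
      apt-get update
      apt-get install -y apache2 php5 php5-mysql mysql-server postgresql \
          tomcat6 imagemagick git-core
      a2enmod rewrite headers expires     # whichever mods the site needs
      cp -r config/apache2/sites-available/* /etc/apache2/sites-available/
      service apache2 restart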

    Read the article

  • OpenVPN access to a private network

    - by Gior312
    There are many similar topics about my issue, but I cannot figure out a solution for myself. There are three hosts: A, without a routable address but with Internet access; server S, with a routable Internet address; and host B, behind NAT in a private network. What I've managed to do is set up an OpenVPN connection between A and B via S; everything works fine so far, following this manual: VPN Setup. What I want to do now is connect A to B's private network, 10.A.B.x. I tried this manual but had no luck. A has the VPN address 10.9.0.10, B's VPN address is 10.9.0.6, and B's private network is 10.20.20.0/24. When, on the server, I try to add a route to B's private network like this: sudo route add 10.20.20.0 netmask 255.255.255.0 gw 10.9.0.6 dev tun0 it says "route: netmask 000000ff doesn't make sense with host route", and I don't know how to tell the server to look for a private network in a different way. Do you know how I can make this work?
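
    The netmask complaint is only route(8) defaulting to a host route when -net is missing; beyond that, OpenVPN itself must be told the subnet lives behind B's client. A sketch:

      # on server S: a network route, not a host route
      route add -net 10.20.20.0 netmask 255.255.255.0 gw 10.9.0.6 dev tun0
      # iproute2 equivalent:
      ip route add 10.20.20.0/24 via 10.9.0.6 dev tun0

      # in server.conf:               route 10.20.20.0 255.255.255.0
      # in ccd/<B's common name>:     iroute 10.20.20.0 255.255.255.0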

    Read the article

  • How can I change the 'change' date of a file?

    - by Someone1234
    How can I change 'change' date? $ touch -t 9901010000 test;stat test File: `test' Size: 0 Blocks: 0 IO Block: 4096 regular empty file Device: fe01h/65025d Inode: 11279017 Links: 1 Access: (0644/-rw-r--r--) Uid: ( 1000/ x) Gid: ( 1000/ x) Access: 1999-01-01 00:00:00.000000000 +0100 Modify: 1999-01-01 00:00:00.000000000 +0100 Change: 2012-04-08 19:26:56.061614473 +0200 Birth: -
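
    ctime cannot be set through touch or utimes(2); the kernel stamps it on every inode change. Two workaround sketches, both with caveats: swing the system clock (disruptive to everything running), or edit the inode offline with debugfs (ext2/3/4 only, filesystem unmounted; the device and path are assumptions):

      # clock-swing approach
      sudo date -s "1999-01-01 00:00:00"
      chmod --reference=test test        # any metadata write refreshes ctime
      sudo ntpdate pool.ntp.org          # put the clock back

      # offline approach (the path is relative to the filesystem root)
      sudo debugfs -w -R 'set_inode_field /home/x/test ctime 199901010000' /dev/sdXN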

    Read the article

  • Web Server Users - Best Practice

    - by Toby
    I was wondering what is considered best practice when several developers/administrators require access to the same web server. Should there be one non-root user, with a secure username and password unique to the web server, which everyone logs in as, or should there be a username for each person? I am leaning towards a username for each person, to aid in logging and so on; but then, does the same user keep the same credentials across several servers, or should at least their password change depending on the server they are on? Should any non-root user of the system be added to the sudoers file, or is it best practice to leave everyone off it and only let root perform certain tasks? Any help would be greatly appreciated.
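
    Per-person accounts are the generally accepted practice (individual audit trails, per-person revocation), combined with a shared group and narrowly scoped sudo rules rather than blanket root; a sketch, with the group name and command list as assumptions:

      # /etc/sudoers.d/webadmins  (edit with: visudo -f /etc/sudoers.d/webadmins)
      %webadmins ALL=(root) /usr/sbin/apache2ctl, /usr/sbin/service apache2 *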

    Read the article

  • What process is resurrecting mysqld?

    - by ripper234
    I'm following this guide to reset my MySQL root password (I'm on Ubuntu). When I kill the mysqld process, it is immediately resurrected, and its parent process ID is 1. How can I find out what keeps resurrecting mysqld? $ ps -ef | grep mysql mysql 30136 1 0 07:16 ? 00:00:00 /usr/sbin/mysqld root 30295 30274 0 07:18 pts/0 00:00:00 grep --color=auto mysql $ kill -9 30136 $ ps -ef | grep mysql mysql 30302 1 2 07:18 ? 00:00:00 /usr/sbin/mysqld root 30404 30274 0 07:18 pts/0 00:00:00 grep --color=auto mysql $
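
    A parent PID of 1 on Ubuntu usually means upstart is supervising mysqld and respawning it after every kill; stop the job through upstart instead, then run mysqld by hand for the password reset:

      sudo stop mysql                         # upstart job (or: sudo service mysql stop)
      status mysql                            # should now report stop/waiting
      sudo mysqld_safe --skip-grant-tables &  # then follow the reset guide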

    Read the article

  • Changed array composition, mdadm --detail still shows the old array size

    - by Prody
    I have a machine with 8 disks. I installed it with my hoster's install automation (it's OVH; I don't have physical access to it). The machine installed correctly, but it created an array that I wanted to change: a raid5 array across 5 of the 8 disks, which I've changed to raid10 across all 8. I did this by first --stopping the old array and then --creating the new one. mdadm warned me that a previous array was there, but I chose to continue. So it created the array and spent about ten hours syncing it, and now that it's ready I get this strange behavior: when I print the partition table with fdisk, I see the correct size, but when I run mdadm --detail on it, I see the old array's size even though I get the new composition and level. When I try to pvcreate on it, I get the old size again for some reason. Did I have to do something else? Did I miss something?
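
    Stale metadata from the old raid5 and its PV is the usual suspect for this split picture; a sketch of checking what each layer believes, with device names as assumptions (the pvremove is destructive, only for when the old label is confirmed dead):

      cat /proc/mdstat                 # level and size of the running array
      blockdev --getsize64 /dev/md0    # what the kernel says the block device holds
      mdadm --examine /dev/sd[a-h]1    # leftover superblocks from the old raid5?
      pvremove -ff /dev/md0            # wipe the stale LVM label, then pvcreate again

    Also regenerate the mdadm config (mdadm --detail --scan) so the old definition is not reassembled at boot.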

    Read the article

  • How to diagnose RAM?

    - by x-man
    I have a Java process that aborts after a while with SIGSEGV. It started to happen after I upgraded the server with more RAM. Having tested on different JVMs, I suspect it might be a hardware problem, but no problem was detected by memtest86. So, what else can I do to find the source of the problem? Should I remove the RAM modules one by one to identify the faulty module? The server is running 64-bit OpenSUSE 11.3. The memory is not ECC, it seems. I have kits of this type (3 * 4 GB * 2 = 24 GB): http://www.kingston.com/datasheets/KHX1600C9S3K2_8GX.pdf
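
    Before pulling modules, it is worth trying to catch the fault from inside the running system, since memtest86 can miss load- and timing-dependent errors; a sketch (memtester is a separate package, and the size is an assumption that leaves room for the OS):

      dmesg | grep -iE 'edac|mce|machine check'   # any corrected-error reports?
      free -g                                     # are all 24 GB actually visible?
      memtester 20G 3                             # userland RAM test, 3 passes

    If that stays clean, swapping one 2x4 GB kit out at a time is the next step, and it is worth checking that the BIOS is not driving the modules beyond their rated speed and timings.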

    Read the article

  • Determine display or VNC session based on PID

    - by Daniel Kessler
    I frequently VNC into a server where we run many concurrent, computationally intensive MATLAB processes. Sometimes one of my processes misbehaves, which I can see from top, but I have a hard time figuring out which VNC session it's running in, or more specifically, which display. Suppose I see that PID 8536 looks like a resource hog and I want to investigate. Because it's a MATLAB session, I know there is likely an IDE open somewhere, and I want to check whether anything important is happening before I kill it. We've solved this somewhat awkwardly in the past by identifying which PTY 8536 was launched from, then looking at a process tree to figure out what was launched in that context, scrolling up, and finding the VNC initialization. It seems like there must be a better way to go from PID to X display (or VNC session).
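
    The display is already sitting in the process's environment, so no tree-walking is needed; a sketch using the PID from the example:

      tr '\0' '\n' < /proc/8536/environ | grep '^DISPLAY'   # e.g. DISPLAY=:2
      ps -ef | grep 'Xvnc.*:2'                              # the VNC server for that display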

    Read the article

  • Why doesn't NFS recognize a new UID?

    - by user76177
    I have two servers running RHEL 6, with root access to both. The main server, which I will refer to as server, is a database server. The application server, which I will refer to as client, mounts a directory from server via NFS. There is a user, appuser, on both client and server; however, appuser's UID on client is 502 while appuser's UID on server is 506. Both users need read and write access to the NFS share. To facilitate this, I made the share owned by appuser on server; running id appuser there yields uid=506(appuser). Of course, client does not recognize that ownership, since appuser has a different UID on client. So I did the following: changed the UID of the user in /etc/passwd on client to 506, and changed the ownership of appuser's $HOME on client back to appuser so that I could log in. Now, when I look at the NFS share from the client side, I see that it is owned by 502, the old UID for appuser on client. I can't change ownership of the NFS share from client, since that volume physically resides on server. I need the NFS share to show ownership by appuser from both server and client. What step have I missed after changing the appuser UID on client? NOTE: I have not rebooted client (or anything else).
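
    If the mount is NFSv4, the 502 is most likely a cached name-to-uid mapping on the client rather than anything on disk; a sketch of flushing it with RHEL 6 tooling (the mount point is a placeholder). If the mount is NFSv3, ownership travels as plain numbers, so anything created while the client user was 502 genuinely is uid 502 on disk and needs a chown on the server.

      umount /mnt/share && mount /mnt/share   # drop cached attributes
      service rpcidmapd restart               # restart the NFSv4 idmapping daemon
      nfsidmap -c                             # clear the kernel's idmap keyring
      ls -ln /mnt/share                       # -n shows the raw numeric uids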

    Read the article

  • winbind not working

    - by Yon
    I'm trying to set up winbind against an Active Directory running on Windows 2003. This works: net rpc user -S SOMEDOMAIN -U Administrator Password: Administrator ASPNET Demo Guest IUSR_SERVER20 IWAM_SERVER20 krbtgt RemoteUser SUPPORT_388945a0 This does not: wbinfo -u Error looking up domain users From the winbindd log: [2012/05/31 16:45:38, 1] nsswitch/winbindd_ads.c:ads_cached_connection(128) ads_connect for domain SOMEDOMAIN failed: Operations error [2012/05/31 16:46:38, 1] nsswitch/winbindd_util.c:trustdom_recv(230) Could not receive trustdoms ADS is not working with this domain. Why is winbind trying to use it instead of RPC? How can I force it to use only RPC so that all of this works?
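
    A sketch of the usual triage. wbinfo goes through winbindd, which honors the security setting in smb.conf; ADS needs working Kerberos, DNS, and time sync, while net rpc bypasses all of that, which matches the symptom:

      testparm -s | grep -iE 'security|realm|winbind'   # how the join is configured
      net ads testjoin                                  # is the machine account valid?
      wbinfo -t                                         # trust-secret check via winbindd
      klist                                             # Kerberos tickets; also check clock skew

    Setting security = domain (rather than ads) forces winbind onto the RPC-only path, at the cost of Kerberos features.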

    Read the article
