Search Results

Search found 45752 results on 1831 pages for 'ubuntu linux'.


  • How to change MySQL data directory?

    - by Jonathan Frank
    I want to place my databases in another directory so I can store them on an EBS volume (Elastic Block Store, just a fancy name for a virtualized hard disk) together with my web apps and other persistent data. I have tried to work through the tutorial at http://crashmag.net/change-the-default-mysql-data-directory-with-selinux-enabled. Everything seems fine until I type this command:

      # semanage fcontext -a -t mysqld_db_t "/srv/mysql(/.*)?"

    The command then fails and tells me that mysqld_db_t is an invalid SELinux context, even though the default MySQL data directory is labelled with this context. I am running Fedora 15 on VirtualBox (which behaves like an ordinary x86-compatible box) and on Amazon EC2 (based on Xen), so the tutorial should be applicable. It is also worth mentioning that turning off SELinux globally, or just for the MySQL process, is not an option, because doing so would weaken the security of the system if an attacker gained access via the MySQL server. I never saw this problem before I switched to the Red Hat/Fedora family, so it could be a distribution-specific issue. Any help is highly appreciated.
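
    A hedged first diagnostic, assuming the setools-console and policycoreutils-python packages are installed: confirm that the mysqld_db_t type actually exists in the loaded policy before relabelling. The paths come from the question; everything else here is an example.

      seinfo -t | grep mysqld        # list mysqld-related types in the loaded policy
      ls -Z /var/lib/mysql           # confirm the label on the default data directory
      # if the type is listed, retry the relabel and apply it recursively:
      semanage fcontext -a -t mysqld_db_t "/srv/mysql(/.*)?"
      restorecon -Rv /srv/mysql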

  • Nagios orphaned services warnings

    - by Gordon
    We have had Nagios running on one of our servers without any problems for a while, but lately certain old service warnings have been reappearing and then disappearing on the service detail page. From looking at the logs I found warnings like the following:

      Warning: The check of service 'Tomcat' on host 'virtual1' looks like it was orphaned (results never came back). I'm scheduling an immediate check of the service...

    Has anyone ever come across this before, or does anyone know a way to delete the old orphaned warnings? The Nagios version we are running is 3.0b7, so an update might be in order. Thanks.
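
    If the aim is to tune how Nagios handles orphan detection while you investigate, the relevant knobs live in nagios.cfg. A hedged sketch with Nagios 3 directive names; the values are examples, not recommendations:

      # nagios.cfg -- orphan handling
      check_for_orphaned_services=1   # keep rescheduling checks whose results never return
      service_check_timeout=60        # kill check plugins that hang longer than 60s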

  • page allocation failure - am I running out of memory?

    - by mfriedman
    Lately I've been noticing entries like this one in the kern.log of one of my servers:

      Feb 16 00:24:05 aramis kernel: swapper: page allocation failure. order:0, mode:0x20

    This is what I'd like to know: What exactly does that message mean? Is my server running out of memory? The swap usage is quite low (less than 10%), and so far I haven't noticed any processes being killed because of lack of memory. Additional information:

    - The server is a Xen instance (DomU) running Debian 6.0.
    - It has 512 MB of RAM and a 512 MB swap partition.
    - CPU load inside the virtual machine shows an average of 0.25.
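
    For what it's worth, mode:0x20 usually corresponds to GFP_ATOMIC, an allocation made in interrupt context that is not allowed to wait; such failures can happen in brief bursts without the box being out of memory overall. A hedged way to look at the free-page situation, and a commonly suggested mitigation:

      cat /proc/buddyinfo                # free pages per zone and per order
      cat /proc/sys/vm/min_free_kbytes   # reserve kept for atomic allocations
      # raising the reserve a little is a frequent mitigation, e.g.:
      # echo 8192 > /proc/sys/vm/min_free_kbytes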

  • alternative filesystems for SSD

    - by freedrull
    I am tired of watching fsck check my filesystem when my eeepc 901 shuts down abruptly due to a crash. I know that with a journaling filesystem I won't have to wait for a check. However, I am well aware of the poor I/O performance of the SSD, so I can imagine a journaling filesystem being even more frustrating, since there will be constant writes to the journal. I will buy a new laptop without such a crummy SSD someday, but is there anything I can do now, on the software side of things?
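
    One hedged middle ground, assuming a move to ext4: keep the journal (and the crash recovery it buys) but reduce how often it is flushed, and drop access-time writes. The device name and interval are placeholders:

      # /etc/fstab -- example line, adjust for your setup
      /dev/sda1  /  ext4  noatime,commit=60  0  1
      # noatime:   no metadata write on every file read
      # commit=60: flush the journal every 60s instead of the default 5s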

  • RAID FS detection at boot time

    - by alex
    An excerpt from dmesg:

      md: Autodetecting RAID arrays.
      md: Scanned 2 and added 2 devices.
      md: autorun ...
      md: considering sdb1 ...
      md:  adding sdb1 ...
      md:  adding sda1 ...
      md: created md1
      md: bind<sda1>
      md: bind<sdb1>
      md: running: <sdb1><sda1>
      raid1: raid set md1 active with 2 out of 2 mirrors
      md1: detected capacity change from 0 to 1500299198464
      md: ... autorun DONE.
      md1: unknown partition table
      EXT3-fs (md1): error: couldn't mount because of unsupported optional features (240)
      EXT2-fs (md1): error: couldn't mount because of unsupported optional features (240)
      EXT4-fs (md1): mounted filesystem with ordered data mode

    Is it OK that the kernel tries to mount an ext4 RAID as ext3 and ext2 first? Is there a way to tell it to skip those two steps? Just in case, the fstab entry is:

      /dev/md1 / ext4 noatime 0 1

    TIA.
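
    The probing is harmless, but since this is the root filesystem it happens inside the kernel before any config file is readable, so the usual way to skip it is to name the filesystem on the kernel command line. A hedged sketch assuming GRUB legacy; the kernel version and other options are placeholders:

      # /boot/grub/menu.lst -- rootfstype makes the kernel try ext4 directly
      kernel /vmlinuz-2.6.32 root=/dev/md1 rootfstype=ext4 ro quiet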

  • How do I automate OS installation on 500+ machines?

    - by Igor
    My company has to image a large number of machines by the end of the year. Each machine will have hardware RAID 1 and run CentOS 6. What options do I have for automating the OS installation on these systems? I have a little mini desktop I can set up as an install server, and we can get a switch to create an installation network, but I'm not sure how to go about actually performing the automated installs.
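
    The stock answer for CentOS at this scale is PXE boot plus a kickstart file served over HTTP from the install server. A hedged, minimal ks.cfg sketch; every value below is a placeholder to adapt:

      # ks.cfg -- minimal unattended CentOS 6 install
      install
      url --url=http://installserver/centos6/
      lang en_US.UTF-8
      keyboard us
      rootpw --iscrypted $6$changeme...
      timezone UTC
      clearpart --all --initlabel
      autopart
      reboot

      %packages
      @core
      %end

    Machines then PXE-boot the installer with ks=http://installserver/ks.cfg on the kernel line, and each install runs unattended.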

  • "sorry, the maximum allowed clients from your host (10) are already connected" FTP error

    - by Sejanus
    Hello, I keep getting the "sorry, the maximum allowed clients from your host (10) are already connected" error whenever I try to transfer a large number of files. At first I thought it was a FileZilla bug; however, I get the same error with basically every FTP client I've tried, including Total Commander under Wine. I do not get that error using Windows. I did try to limit the maximum allowed connections for FileZilla, both in the server settings and in the global settings; it didn't change anything. I did try to switch between passive and active modes (not sure if it's related at all, just a last desperate attempt), and it didn't change anything either. When I try to use the native FTP client (not sure what it's called, the one in Places - Connect to a server) I get an abstract "connection refused" error every time I transfer a large number of files. The connection is refused for particular files; if I click "Ignore" each time, the rest of the files are transferred perfectly well, so I assume it's the very same error. Anything I could do? This really drives me mad; transferring large numbers of files is part of my everyday job... P.S. This happens with many different FTP servers, and I don't get this error on Windows, so I assume it's not a server problem. P.P.S. I am aware of a similar question here; the answer provided just didn't solve it for me.
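
    A hedged client-side workaround while the per-host cap stands: push the whole transfer through a single connection so the limit can never be hit. lftp (assumed installed) queues the files itself; the host, user, and directories are placeholders:

      lftp -u user -e 'set net:connection-limit 1; mirror -R /local/dir /remote/dir; bye' ftp.example.com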

  • SmartSVN - Unable to create new repository profile

    - by Sandeepan Nath
    I have just installed SmartSVN on this Fedora system. The application starts (on running ./smartsvn.sh) with its usual UI, but many things are not working.

    Creating a new repository profile: trying to add one (Repositories - Repository Profiles - Add) fails with:

      An Error occurred while processing an SVN command - Cannot connect to 'svn+ssh://192.168.0.103': There was a problem while connecting to 192.168.0.103:22

    Quick Checkout: trying to do a Quick Checkout (less configuration) fails with:

      An Error occurred while processing an SVN command - Malformed XML.

    Some observations: when I run the smartsvn.sh file like this:

      ./smartsvn.sh

    it shows this in the console:

      Warning: /bin/java does not exist
      Could not lock /root/.smartsvn/_lock_
      Switched to running instance

    I was using SmartSVN on another system before this, where it was working. There too it showed the warning Warning: /bin/java does not exist, but this part was not showing:

      Could not lock /root/.smartsvn/_lock_
      Switched to running instance

    I have only the JRE installed on both systems, not the JDK. So, what could be the reason? Any pointers? Thanks, Sandeepan
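
    Two hedged things to try, based only on the console output above: clear the stale lock a previous (possibly crashed) instance left behind, and give the launcher a real java binary on its path. The JVM path is an example; point it at wherever your JRE actually lives:

      rm /root/.smartsvn/_lock_            # stale lock from an earlier run
      export JAVA_HOME=/usr/lib/jvm/jre    # example path to the installed JRE
      export PATH="$JAVA_HOME/bin:$PATH"
      ./smartsvn.sh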

  • Bash: Read lines in a file scenario with sed or awk

    - by user105566
    I have this scenario. File content:

      10.1.1.1
      10.1.1.2
      10.1.1.3
      10.1.1.4

    I want sed or awk magic so that each time I cat the file, the next line is returned. The first iteration (cat ip | some magic) returns 10.1.1.1, the second iteration returns 10.1.1.2, the third returns 10.1.1.3, the fourth returns 10.1.1.4, and after n iterations it wraps around to line 1, so the fifth iteration returns 10.1.1.1 again. Can we do it using sed or awk?
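
    sed and awk are stateless between runs, so something has to remember which line was served last. A hedged sketch using sed plus a state file; the data file name ip comes from the question, while .ip_index is an invented helper file:

      #!/bin/bash
      # next-ip.sh -- print one line of ./ip per run, wrapping back to line 1
      n=$(cat .ip_index 2>/dev/null || echo 0)   # last line served, 0 if none yet
      total=$(wc -l < ip)
      n=$(( n % total + 1 ))                     # advance, wrapping after the last line
      sed -n "${n}p" ip                          # print only line n
      echo "$n" > .ip_index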

  • Tomcat fails to start, no logs or error provided

    - by Alex Kuhl
    I have a CentOS 5 box running Tomcat 5 (a version before 5.5; there's no bin/version.sh script). When attempting to start Tomcat, whether through init.d or service, I get the FAILED message with no other information provided. The date on catalina.out changes, but it has no contents and is 0 bytes. logging.conf has not been edited and everything is marked at FINE detail. Has anyone experienced this and found a solution? Or, failing that, how can I get some log/error info out of Tomcat to try to pinpoint the issue?
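
    One hedged way to surface the error: bypass the init script and run Tomcat in the foreground as the same user, so anything fatal prints to the console instead of an empty catalina.out. The path and user name are examples for a packaged tomcat5 install:

      cd /usr/share/tomcat5                  # example CATALINA_HOME
      sudo -u tomcat ./bin/catalina.sh run   # runs in the foreground; Ctrl-C to stop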

  • How can I still see the 'man' text after I quit man?

    - by Sol
    I typically use tcsh or bash and often want to use man to review a command's options. Currently, when I quit man or hit Ctrl-C, the man text disappears and I see the scrollback buffer that was there before I ran the man command. I would like to still see the man text I was viewing, as a reference while I type the command at the prompt, without opening a second window. How can I do that?
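
    Two hedged options, assuming the pager is less: dump the page straight into the scrollback, or tell less not to restore the screen when it exits.

      man ls | cat    # render the whole page into the scrollback and return
      # or keep paging but skip the screen restore on quit
      # (bash syntax; in tcsh use: setenv LESS -X):
      export LESS=-X
      man ls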

  • Read-only file system RHEL

    - by gthm geeky
    I am using RHEL 5.5 on my PC. I was playing around with chmod and chown, and suddenly my home folder became read-only. All the folders in /home/goutham/ (where goutham is my username) became read-only. I can delete files for a few seconds after turning the system on; after that it says "Permission denied: read-only file system". I can't even create a folder with sudo mkdir. Please help me. My OS is on /dev/sda3.
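
    A filesystem that flips to read-only at runtime has usually been remounted by the kernel after it detected an error (chmod and chown alone cannot cause this). A hedged way to confirm and repair:

      dmesg | grep -iE 'remount|read-only|error'   # look for the remount event
      mount | grep sda3                            # an "ro" flag confirms it
      # then repair from rescue media or single-user mode,
      # never while the filesystem is mounted read-write:
      fsck -f /dev/sda3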

  • configure squid3 to set up a web proxy on Ubuntu 12.04

    - by Gnijuohz
    I am on a LAN and have to use a given proxy to access the web in a very limited way; I can't even use Google, github.com, or SE sites. However, I can use ssh to log into a server on which I have root access, so basically I can do anything I want with it. So I was thinking that maybe I could use that server as a proxy and visit sites through it. I tested it using ssh -vT [email protected], which gave a proper response, while from my own computer it can't connect. I also tried downloading something from gnu.org using wget, which can't be done from my computer either, and it succeeded on the server. I don't know if that's enough to say the server has full access to the Internet, but I assumed so and installed squid3 on it. After trying for a while, I failed to get it working. This is what I got after running squid3 -k parse:

      2012/07/06 21:45:18| Processing Configuration File: /etc/squid3/squid.conf (depth 0)
      2012/07/06 21:45:18| Processing: acl manager proto cache_object
      2012/07/06 21:45:18| Processing: acl localhost src 127.0.0.1/32 ::1
      2012/07/06 21:45:18| Processing: acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1
      2012/07/06 21:45:18| Processing: acl localnet src 10.1.0.0/16 # RFC1918 possible internal network
      2012/07/06 21:45:18| Processing: acl SSL_ports port 443
      2012/07/06 21:45:18| Processing: acl Safe_ports port 80 # http
      2012/07/06 21:45:18| Processing: acl Safe_ports port 21 # ftp
      2012/07/06 21:45:18| Processing: acl Safe_ports port 443 # https
      2012/07/06 21:45:18| Processing: acl Safe_ports port 70 # gopher
      2012/07/06 21:45:18| Processing: acl Safe_ports port 210 # wais
      2012/07/06 21:45:18| Processing: acl Safe_ports port 1025-65535 # unregistered ports
      2012/07/06 21:45:18| Processing: acl Safe_ports port 280 # http-mgmt
      2012/07/06 21:45:18| Processing: acl Safe_ports port 488 # gss-http
      2012/07/06 21:45:18| Processing: acl Safe_ports port 591 # filemaker
      2012/07/06 21:45:18| Processing: acl Safe_ports port 777 # multiling http
      2012/07/06 21:45:18| Processing: acl CONNECT method CONNECT
      2012/07/06 21:45:18| Processing: http_port 3128 transparent vhost vport
      2012/07/06 21:45:18| Starting Authentication on port [::]:3128
      2012/07/06 21:45:18| Disabling Authentication on port [::]:3128 (interception enabled)
      2012/07/06 21:45:18| Disabling IPv6 on port [::]:3128 (interception enabled)
      2012/07/06 21:45:18| Processing: cache_mem 1000 MB
      2012/07/06 21:45:18| Processing: cache_swap_low 90
      2012/07/06 21:45:18| Processing: coredump_dir /var/spool/squid3
      2012/07/06 21:45:18| Processing: refresh_pattern ^ftp: 1440 20% 10080
      2012/07/06 21:45:18| Processing: refresh_pattern ^gopher: 1440 0% 1440
      2012/07/06 21:45:18| Processing: refresh_pattern -i (/cgi-bin/|?) 0 0% 0
      2012/07/06 21:45:18| Processing: refresh_pattern (Release|Packages(.gz)*)$ 0 20% 2880
      2012/07/06 21:45:18| Processing: refresh_pattern . 0 20% 4320
      2012/07/06 21:45:18| Processing: ipcache_high 95
      2012/07/06 21:45:18| Processing: http_access allow all

    I deleted some allow and deny rules and added http_access allow all so that all requests would be allowed. After configuring my computer, I got this error:

      Access control configuration prevents your request from being allowed at this time. Please contact your service provider if you feel this is incorrect.

    And the log on the server showed that my TCP requests had all been denied. So, first of all, is what I am trying to do achievable? If so, how do I configure squid on the server so I can use it as a proxy to surf the Internet? My computer and the server both run Ubuntu 11.04. Thanks for any help~
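
    One hedged observation from the parse output: the port is declared as http_port 3128 transparent vhost vport, and in interception mode squid will reject ordinary forward-proxy requests, which would match the access-denied behavior even with http_access allow all. A minimal sketch of an explicit forward-proxy squid.conf, with the client range as a placeholder:

      http_port 3128                   # plain forward proxy -- no "transparent"
      acl mynet src 192.168.0.0/16     # placeholder for your client's address range
      http_access allow mynet
      http_access deny all

    An ssh-based alternative that needs no squid at all is a SOCKS proxy: ssh -D 1080 [email protected], then point the browser at localhost:1080.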

  • httpd dead but subsys locked

    - by McShark
    Hello, I modified max_execution_time in php.ini today. When I restarted the server, I got this error:

      Stopping httpd: [FAILED]
      Starting httpd: (98)Address already in use: make_sock: could not bind to address [::]:80
      (98)Address already in use: make_sock: could not bind to address 0.0.0.0:80
      no listening sockets available, shutting down
      Unable to open logs

    I killed the httpd processes (killall httpd) and started it fine, but then I couldn't open any web site on the server. service httpd status outputs:

      httpd dead but subsys locked

    I removed the httpd file from /var/lock/subsys/; same problem. Please help!
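
    A hedged recovery sequence: make sure nothing is still holding port 80, clear the stale lock and pid file, then start cleanly and read the error log rather than just the exit status. Paths are the usual RHEL/CentOS ones:

      netstat -tlnp | grep ':80 '    # what still owns port 80?
      killall httpd; sleep 2         # give the old workers time to exit
      rm -f /var/lock/subsys/httpd /var/run/httpd.pid
      service httpd start
      tail -n 50 /var/log/httpd/error_log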

  • o2cb thinks ocfs2 cluster is still online, and refuses to shut down

    - by Kendall
    I have a handful of OpenSuSE 11.2 servers that use OCFS2 volumes. I've noticed that o2cb can't figure out when the OCFS2 cluster is actually mounted: when I try to shut down o2cb after stopping OCFS2, o2cb refuses because it thinks OCFS2 is still up. After stopping OCFS2 I try to stop o2cb:

      hamguy:/dev/disk/by-label # /etc/init.d/o2cb stop
      Stopping O2CB cluster ocfs2: Failed
      Unable to stop cluster as heartbeat region still active

    So I check the status:

      hamguy:/dev/disk/by-label # /etc/init.d/o2cb status
      Driver for "configfs": Loaded
      Filesystem "configfs": Mounted
      Stack glue driver: Loaded
      Stack plugin "o2cb": Loaded
      Driver for "ocfs2_dlmfs": Loaded
      Filesystem "ocfs2_dlmfs": Mounted
      Checking O2CB cluster ocfs2: Online
        Heartbeat dead threshold = 31
        Network idle timeout: 30000
        Network keepalive delay: 2000
        Network reconnect delay: 2000
      Checking O2CB heartbeat: Active

    And double-check OCFS2:

      hamguy:/dev/disk/by-label # /etc/init.d/ocfs2 status
      Configured OCFS2 mountpoints: /u/conf /u/logs /u/backup /u/client /u/data /u/mdata

    OCFS2 is clearly down, while o2cb clearly thinks otherwise. The versions of OCFS2 and o2cb are:

      kendall@hamguy:~> rpm -qa | grep ocfs2
      ocfs2console-1.4.1-25.6.x86_64
      ocfs2-tools-o2cb-1.4.1-25.6.x86_64
      ocfs2-tools-1.4.1-25.6.x86_64
      kendall@hamguy:~> rpm -qa | grep o2cb
      ocfs2-tools-o2cb-1.4.1-25.6.x86_64

    What causes this, and is there a way around it? If I try to reboot the machine, it will just sit there forever until you physically power cycle it. That obviously is a bit of a problem. Any insight is appreciated, thank you. Kendall
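
    o2cb tracks heartbeat regions in configfs, so a hedged next step is to look at what it still believes is active and, if a region lingers after all volumes are unmounted, stop it by hand. The region UUID is whatever directory name you find there:

      ls /sys/kernel/config/cluster/ocfs2/heartbeat/   # one directory per active region
      # ocfs2-tools ships ocfs2_hb_ctl; -K kills heartbeat for a region by UUID:
      ocfs2_hb_ctl -K -u <region-uuid>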

  • nginx start failing, says error.log doesn't exist

    - by Blankman
    I structured my sites like /home/www/domain.com/public, private, log, backup. In the log folder, I created a blank error.log and access.log. My nginx file in sites-available for the domain looks like:

      server {
          access_log /home/www/domain1.com/log/access.log;
          error_log /home/www/domain1.com/log/error.log;
      }

    Trying to start nginx, it says:

      starting nginx: the config file /etc/nginx/nginx.conf syntax is ok
      [emerg] open() ".../access.log" failed (2: No such file or directory)

    Is this a permission issue?
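
    "No such file or directory" (errno 2) points at a missing path component rather than at permissions; note that the layout above says domain.com while the config references domain1.com, and a mismatch like that would produce exactly this error. A hedged way to check each component (the path is from the question):

      namei -l /home/www/domain1.com/log/access.log   # perms of every path component
      ls -ld /home/www/domain1.com/log                # does the directory itself exist?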


  • Sorting by date

    - by user62367
    Original:

      Jan 23 2011 10:42 SOMETHING 2007.12.20.avi
      Jun 26 2009 SOMETHING 2009.06.25.avi
      Feb 12 2010 SOMETHING 2010.02.11.avi
      Jan 29 2011 09:17 SOMETHING 2011.01.27.avi
      Feb 11 2011 20:06 SOMETHING 2011.02.10.avi
      Feb 27 2011 23:05 SOMETHING 2011.02.24.avi

    Output:

      Feb 27 2011 23:05 SOMETHING 2011.02.24.avi
      Feb 11 2011 20:06 SOMETHING 2011.02.10.avi
      Jan 29 2011 09:17 SOMETHING 2011.01.27.avi
      Jan 23 2011 10:42 SOMETHING 2007.12.20.avi
      Feb 12 2010 SOMETHING 2010.02.11.avi
      Jun 26 2009 SOMETHING 2009.06.25.avi

    How could I get the output where the newest file is at the top?
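
    If these lines describe real files on disk, a hedged shortcut is to let ls sort by modification time instead of parsing the dates as text:

      ls -lt                          # long listing, newest modification first
      ls -lt --time-style=long-iso    # GNU ls: unambiguous, sortable timestamps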

  • Best MTA setup for home or laptop computers - not server

    - by thomasrutter
    Hello, what is a good MTA (e.g. Postfix or something else) setup for a home computer behind a NAT, or a laptop that connects to various different wifi networks? I've read a lot of Postfix tutorials on how to set it up this way or that, but they are usually geared towards computers that are servers, i.e. they:

    - have a static IP
    - have a domain name
    - are always connected to the same network

    My requirements are, I guess:

    - Ability to forward mail for "root" to another server of my choosing.
    - No listening for incoming SMTP connections; outgoing only.
    - Ability to route outgoing mail via an external SMTP server with authentication (and perhaps encryption).

    If not Postfix, I need an MTA which can queue up mail in case it temporarily has no internet connection.
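
    All three requirements map onto standard Postfix settings, and Postfix already queues and retries when the relay is unreachable, which covers the offline-laptop case. A hedged main.cf fragment; the smarthost name and credentials file are placeholders:

      # /etc/postfix/main.cf -- send-only machine relaying via a smarthost
      inet_interfaces = loopback-only            # accept mail only from this machine
      relayhost = [smtp.example.com]:587         # placeholder smarthost
      smtp_sasl_auth_enable = yes
      smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
      smtp_sasl_security_options = noanonymous
      smtp_tls_security_level = encrypt          # require TLS to the smarthost

      # /etc/aliases -- forward root's mail, then run newaliases
      root: [email protected]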

  • Unix Permissions issue with users belonging to the same group accessing a folder

    - by TK Kocheran
    I have a folder I'd really like to allow another user on this machine to access. I'm using mt-daapd to serve music to the network, so I'd like to enable the mt-daapd user to access my Music directory, /home/rfkrocktk/Music. The master user is rfkrocktk, obviously. I've tried to set all of the permissions properly on the directory, but the mt-daapd user can't access the files. I created a group called media-users and added both rfkrocktk and mt-daapd to it, in order to give mt-daapd permission to simply read all of the files in that directory and its subdirectories. If I run id on each of my users, here's what's displayed:

      $ id rfkrocktk
      uid=1000(rfkrocktk) gid=1000(rfkrocktk) groups=1000(rfkrocktk),4(adm),20(dialout),24(cdrom),29(audio),46(plugdev),104(lpadmin),115(admin),120(sambashare),124(vboxusers),1001(jupiter),2002(media-users)

      $ id mt-daapd
      uid=123(mt-daapd) gid=65534(nogroup) groups=65534(nogroup),2002(media-users)

    It definitely seems that both users are part of the media-users group, so what could be going wrong? If I run ls -l on the actual Music directory to see its permissions, here's the output:

      drwxr-Sr-- 201 rfkrocktk media-users 12288 2011-01-13 12:26 Music

    If I run ls -l inside the Music directory to see its children, here's the output:

      drwxr-Sr-- 3 rfkrocktk media-users 4096 2010-12-20 15:31 2DBoy
      drwxr-Sr-- 3 rfkrocktk media-users 4096 2010-05-25 12:50 ABBA
      drwxr-Sr-- 3 rfkrocktk media-users 4096 2009-12-28 15:19 Access Denied
      drwxr-Sr-- 10 rfkrocktk media-users 4096 2009-12-28 15:19 AC-DC
      drwxr-Sr-- 3 rfkrocktk media-users 4096 2009-12-28 15:19 Aerosmith
      drwxr-Sr-- 3 rfkrocktk media-users 4096 2010-06-04 10:45 A Flock of Seagulls
      drwxr-Sr-- 4 rfkrocktk media-users 4096 2010-05-28 18:13 Alestorm
      drwxr-Sr-- 3 rfkrocktk media-users 4096 2010-06-22 23:29 Amon Amarth
      drwxr-Sr-- 5 rfkrocktk media-users 4096 2009-12-28 15:19 Anberlin
      ...

    From this, it would seem that I should be able to access the folders from mt-daapd, but I can't. Running sudo -i -u mt-daapd ls -l /home/rfkrocktk/Music displays nothing, indicating to me that for whatever reason, mt-daapd doesn't have access to read the folder. What am I doing wrong?
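
    Based only on the listings above, a hedged diagnosis: the mode drwxr-Sr-- gives the group read but not execute; the capital S means the setgid bit is set while group execute is not, and without execute permission a group member cannot traverse a directory, which fits the empty ls exactly. A sketch of the fix:

      # give the group traverse rights on every directory down to the files;
      # capital X applies execute to directories (and already-executable files) only:
      chmod -R g+X /home/rfkrocktk/Music
      ls -ld /home/rfkrocktk   # the home directory itself must also allow traversal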

  • Resize underlying partitions in mdadm RAID1

    - by kyork
    I have a home-built NAS, and I need to slightly reconfigure some of my drive usage. I have an mdadm RAID1 composed of two 3TB drives. Each drive has one ext3 partition that uses the entire drive. I need to shrink the ext3 partition on both drives, then add a second ext3 partition of 8GB or so to one, and a swap partition of equal size to the other. I think I have the steps figured out, but wanted some confirmation:

    1. Resize the filesystem: resize2fs /dev/md0 [size], where size is a little larger than the currently used space on the drive.
    2. Remove one of the drives from the RAID: mdadm /dev/md0 --fail /dev/sda1
    3. Resize the removed drive's partition with parted.
    4. Add the new partition to the drive with parted.
    5. Restore the drive to the RAID: mdadm -a /dev/md0 /dev/sda1
    6. Repeat 2-5 for the other device.
    7. Resize the RAID to use the full partition: mdadm --grow /dev/md0 -z max

    Is there anything I've missed, or haven't considered?
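
    One hedged gap in the plan: resize2fs shrinks only the filesystem, not the md device, so unless the array itself is shrunk between steps 1 and 2, step 3 repartitions blocks the array still claims. The size value below is a placeholder in KiB:

      # after step 1, shrink the array to match the smaller filesystem:
      mdadm --grow /dev/md0 --size=2900000000
      # step 7's "-z max" (-z is short for --size) then grows it back out
      # to fill the new, smaller partitions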

  • MySQL blocking new connections, and mysqladmin flush-hosts

    - by aidan
    I'm running MySQL on a remote server, and it suddenly started rejecting all connections:

      $ mysql -h 192.168.1.10 -u root -p
      ERROR 1129 (00000): Host 'web' is blocked because of many connection errors; unblock with 'mysqladmin flush-hosts'

    So I try this flush-hosts command...

      $ mysqladmin flush-hosts -h 192.168.1.10 -u root -p
      mysqladmin: connect to server at '192.168.1.10' failed
      error: 'Host 'web' is blocked because of many connection errors; unblock with 'mysqladmin flush-hosts''

    I.e. it's blocking the very unblocking tool it recommends. Am I doing it wrong, or will I have to resort to ssh/cpanel/physical access?
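
    The block is kept per client host, so a hedged way out is to run flush-hosts from any machine that is not the blocked one; over ssh on the database server itself is the usual choice. Raising the error limit afterwards makes a repeat less likely:

      ssh 192.168.1.10 'mysqladmin -u root -p flush-hosts'   # localhost is not blocked
      # and in my.cnf, under [mysqld], something like:
      # max_connect_errors = 10000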
