Search Results

Search found 24646 results on 986 pages for 'linux vserver'.


  • Tomcat: how to change location of manager and host-manager to a subdirectory

    - by rolandpish
    Hi there. I'm running Tomcat 6.0.28 on port 8080 on a Debian Squeeze box, and I'm a newbie with Tomcat. I would like to change the location of the manager and host-manager applications. That is, instead of going to http://myserver:8080/manager/html I would like to go to http://myserver:8080/somesubdirectory/manager/html. Is this possible? If so, how can I achieve it? I would really appreciate any help with this. I've been trying to change the context in /etc/tomcat6/Catalina/localhost/manager.xml from /manager to /somesubdirectory/manager, with no success. I also tried creating a symlink at /var/lib/tomcat6/webapps/ROOT/somesubdirectory/manager, again with no success. Thanks in advance. Cheers.
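
    One avenue worth noting (a hedged sketch, not a confirmed fix): in Tomcat 6 the context path of a descriptor under Catalina/localhost is taken from the file name rather than from a path attribute inside it, and a '#' in the file name maps to a '/' in the URL:

        cd /etc/tomcat6/Catalina/localhost
        sudo mv manager.xml somesubdirectory#manager.xml   # '#' stands for '/' in the context path
        sudo /etc/init.d/tomcat6 restart
        # if this works, the manager answers at /somesubdirectory/manager/html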

    Read the article

  • How do I enable SELinux when booting from a CD/DVD?

    - by JeffG
    I have a bootable DVD which boots the same kernel as the hard drive (which uses SELinux). I have copied /etc/selinux and all the kernel modules to my ramdisk, and have tried both selinux=1 and selinux 1 as kernel boot parameters. After the system boots, I check dmesg:

        % dmesg | grep -i selinux
        Kernel command line: initrd=idrd.img ramdisk_size=110476 selinux=1
        SELinux: Initializing.
        SELinux: Starting in permissive mode
        selinux_register_security: Registering secondary module capability
        SELinux: Registering netfilter hooks

    But SELinux isn't running:

        % /usr/sbin/getenforce
        Disabled
        % /usr/sbin/setenforce 1
        /usr/sbin/setenforce: SELinux is disabled

    /var/log/messages does not hold any clues, and /proc/kmsg has nothing either.
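
    A sketch of things worth checking (assumptions: the policy store was copied along with /etc/selinux, and init is a stock SysV init). SELinux ends up Disabled when init cannot find and load a policy, so verify the config and try a manual load to get a diagnostic:

        cat /etc/selinux/config              # expect SELINUX=enforcing and SELINUXTYPE=targeted
        ls /etc/selinux/targeted/policy/     # the binary policy init tries to load at boot
        /usr/sbin/load_policy                # attempt a manual load; the error, if any, is informative
        /usr/sbin/getenforce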

    Read the article

  • Disk full but du disagrees. How to investigate further?

    - by initall
    I have a SCSI disk in a server (hardware RAID 1), 32G, ext3 filesystem. df tells me that the disk is 100% full. If I delete 1G, this is correctly shown. However, if I run du -h -x /, then du tells me that only 12G are used (I use -x because of some Samba mounts). So my question is not about subtle differences between the du and df commands, but about how I can find out what causes this huge difference. I rebooted the machine for an fsck, which completed without errors. Should I run badblocks? lsof shows me no open deleted files, lost+found is empty, and there is no obvious warn/err/fail statement in the messages file. Feel free to ask for further details of the setup.
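
    One classic cause worth ruling out (a sketch, not a diagnosis): files written to a directory before something was mounted on top of it are counted by df but are invisible to du. A bind mount gives a view of / without any submounts:

        mkdir /mnt/rootonly
        mount --bind / /mnt/rootonly   # same filesystem, but with nothing mounted over it
        du -shx /mnt/rootonly          # compare this against df's 'used' figure
        umount /mnt/rootonly
        rmdir /mnt/rootonly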

    Read the article

  • rebuild yum index on aws s3

    - by Chucks
    I am trying to rebuild a yum repo on AWS S3 after adding new packages. Here are a few commands I am trying, but they are not helping:

        [root@chucks ~]$ createrepo --baseurl http://rpmcopy.xxxxx.com.s3-website-us-east-1.amazonaws.com /repodata/ --update
        Saving Primary metadata
        Saving file lists metadata
        Saving other metadata
        Generating sqlite DBs
        Sqlite DBs complete

    How do I give a path from S3? The /repodata/ path is not relevant, I believe. All my packages are under the bucket s3://rpmcopy.xxxxx.com/, and the repodata dir is under s3://rpmcopy.xxxxx.com/repodata.
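
    createrepo only operates on a local directory (its positional argument is the package tree, not the repodata dir), so the usual pattern is to regenerate the metadata against a local mirror and sync it back up. A sketch, assuming the aws CLI is available:

        aws s3 sync s3://rpmcopy.xxxxx.com/ ./rpmcopy/                     # pull the packages down
        createrepo --update ./rpmcopy/                                     # regenerates ./rpmcopy/repodata/
        aws s3 sync ./rpmcopy/repodata/ s3://rpmcopy.xxxxx.com/repodata/   # push the metadata back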

    Read the article

  • nginx - 403 Forbidden

    - by michell90
    I'm having trouble getting aliases to work correctly on nginx. When I try to access the aliases /pma and /mda (see the secure.example.com vhost below), I get a 403 Forbidden, but the base URL works correctly. I read a lot of posts but nothing helped, so here I am. Nginx and php-fpm are running as www-data:www-data, and the permissions for the directories are set to:

        drwxrwsr-x+  5 www-data www-data 4.0K Dec 5 22:48 ./
        drwxr-xr-x.  3 root     root     4.0K Dec 4 22:50 ../
        drwxrwsr-x+  2 www-data www-data 4.0K Dec 5 13:10 mda.example.com/
        drwxrwsr-x+ 11 www-data www-data 4.0K Dec 5 10:34 pma.example.com/
        drwxrwsr-x+  3 www-data www-data 4.0K Dec 5 11:49 www.example.com/
        lrwxrwxrwx.  1 www-data www-data   18 Dec 5 09:56 secure.example.com -> www.example.com/

    I'm sorry for the bulk, but I thought better too much than too little. Here are the configuration files:

    /etc/nginx/nginx.conf

        user www-data www-data;
        worker_processes 1;
        error_log /var/log/nginx/error.log;
        #error_log /var/log/nginx/error.log notice;
        #error_log /var/log/nginx/error.log info;
        pid /var/run/nginx.pid;
        events {
            worker_connections 1024;
        }
        http {
            include /etc/nginx/mime.types;
            default_type application/octet-stream;
            log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                            '$status $body_bytes_sent "$http_referer" '
                            '"$http_user_agent" "$http_x_forwarded_for"';
            access_log /var/log/nginx/access.log main;
            sendfile on;
            keepalive_timeout 65;
            include /etc/nginx/sites-enabled/*;
        }

    /etc/nginx/sites-enabled/secure.example.com

        server {
            listen 80;
            server_name secure.example.com;
            return 301 https://$host$request_uri;
        }
        server {
            listen 443;
            server_name secure.example.com;
            access_log /var/log/nginx/secure.example.com.access.log;
            error_log /var/log/nginx/secure.example.com.error.log;
            root /srv/http/secure.example.com;
            include /etc/nginx/ssl/secure.example.com.conf;
            include /etc/nginx/conf.d/index.conf;
            include /etc/nginx/conf.d/php-ssl.conf;
            autoindex off;
            location /pma/ {
                alias /srv/http/pma.example.com;
            }
            location /mda/ {
                alias /srv/http/mda.example.com;
            }
        }

    /etc/nginx/ssl/secure.example.com.conf

        ssl on;
        ssl_certificate /etc/nginx/ssl/secure.example.com.crt;
        ssl_certificate_key /etc/nginx/ssl/secure.example.com.key;
        ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;
        ssl_ciphers HIGH:!aNULL:!MD5;

    /etc/nginx/conf.d/index.conf

        index index.php index.html index.htm;

    /etc/nginx/conf.d/php-ssl.conf

        location ~ \.php$ {
            try_files $uri =404;
            fastcgi_pass unix:/var/run/php-fpm/php-fpm.sock;
            fastcgi_index index.php;
            fastcgi_param HTTPS on;
            fastcgi_param SCRIPT_FILENAME $request_filename;
            include fastcgi_params;
        }

    /var/log/nginx/secure.example.com.error.log

        2013/12/05 22:49:04 [error] 29291#0: *2 directory index of "/srv/http/pma.example.com" is forbidden, client: 176.199.78.88, server: secure.example.com, request: "GET /pma/ HTTP/1.1", host: "secure.example.com"

    EDIT: I forgot to mention that I'm running CentOS 6.4 x86_64 and nginx 1.0.15. Thanks in advance!
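
    A hedged guess at the cause: with a trailing slash on the location but none on the alias, nginx concatenates the rest of the URI straight onto the alias path, so GET /pma/ resolves to the bare directory "/srv/http/pma.example.com" (exactly the path in the error log) and, with autoindex off, that yields the 403. A sketch of the change:

        location /pma/ {
            alias /srv/http/pma.example.com/;   # trailing slash to match the location's
        }
        location /mda/ {
            alias /srv/http/mda.example.com/;
        }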

    Read the article

  • Configuring Apache for multiple clients

    - by Chris_K
    Last week I had a question here about suexec/suphp, but I tried to accomplish too much. I'm going to narrow the scope a bit and try again. I'd like to configure a LAMP server to host multiple clients, and I'd like it to seem (from the client's viewpoint) just like any other shared hosting environment: web sites in their home directory, no need to muck around with file ownerships to get pages served, etc. It would seem that a configuration involving suexec and suPHP is the way to go(?) I'm specifically looking for a current/modern guide on how to accomplish this (I'll be using CentOS, if it matters), and I'm afraid I need more than a link to the Apache docs. Are there any good how-tos out there? The few I've found have been pretty out of date, but it is quite possible my search was weak.
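
    For the shape of the thing, here is a minimal sketch of one per-client vhost under mod_suexec/mod_suphp (all names and paths are made up for illustration; suPHP_UserGroup requires suPHP built in paranoid mode):

        <VirtualHost *:80>
            ServerName client1.example.com
            DocumentRoot /home/client1/public_html
            SuexecUserGroup client1 client1    # CGI runs as the client
            suPHP_Engine on
            suPHP_UserGroup client1 client1    # PHP runs as the client, too
            <Directory /home/client1/public_html>
                AllowOverride All
            </Directory>
        </VirtualHost>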

    Read the article

  • What amount of physical RAM would a typical "commodity class" server have, as of late 2013?

    - by marathon
    I'm trying to spec out servers for my company's infrastructure group to build. They tell me anything more than 2GB is too much, which I find ridiculous considering that cheap DRAM is about 15 bucks a DIMM in bulk, and our particular software runs better with more memory. I tried to find out how much Google's servers use, and pinning down a number is hard. The best I could find, in a Google research paper, was that in 2008 their commodity servers were using 2GB and 4GB DIMMs, but the paper never said how many. I realize "commodity server" is a vague term, but I'm just looking for a rough range of RAM used. I suspect at least 16GB is going to be the norm.

    Read the article

  • "eject" command not working..

    - by shadyabhi
        shadyabhi@shadyabhi-desktop:~$ eject -v
        eject: using default device `cdrom'
        eject: device name is `cdrom'
        eject: expanded name is `/media/cdrom'
        eject: `/media/cdrom' is a link to `/media/cdrom0'
        eject: `/media/cdrom0' is not mounted
        eject: `/media/cdrom0' is not a mount point
        eject: tried to use `/media/cdrom0' as device name but it is no block device
        eject: unable to find or open device for: `cdrom'
        shadyabhi@shadyabhi-desktop:~$

    The tray doesn't open. How do I open the tray using the command line?
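
    A sketch of the usual workaround: point eject at the block device node directly instead of the /media/cdrom alias, which here is a symlink to a directory rather than a device:

        eject -v /dev/sr0    # or /dev/scd0 / /dev/cdrom0, whichever device node exists
        eject -t /dev/sr0    # -t closes the tray again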

    Read the article

  • Problem with PXE boot

    - by user70523
    I followed this guide for PXE boot: http://www.howtoforge.com/setting-up-a-pxe-install-server-on-ubuntu-9.10-p3. I was able to ping the client from the server, and when I booted the client it got its IP address from the server. But later I got this error:

        PXELinux 3.82 2009-06-09 . . . [other information]
        !PXE Entry point found (we hope) at 9D3B:0109 via plan A
        UNDI code segment at 9D3B len 16C2
        UNDI data segment at 933B len A000
        Getting cached packet 01 02 03 . . . [other information]
        TFTP prefix:
        Trying to load: pxelinux.cfg/ec5db4c0-74fe-d511-b9e7-3d9235afe5a1
        Trying to load: pxelinux.cfg/01-00-17-31-b6-5e-a8
        Trying to load: pxelinux.cfg/0A64491E
        Trying to load: pxelinux.cfg/0A64491
        Trying to load: pxelinux.cfg/0A6449
        Trying to load: pxelinux.cfg/0A644
        Trying to load: pxelinux.cfg/0A64
        Trying to load: pxelinux.cfg/0A6
        Trying to load: pxelinux.cfg/0A
        Trying to load: pxelinux.cfg/0
        Trying to load: pxelinux.cfg/default
        Unable to locate configuration file
        Boot failed: press a key to retry or wait for reset

    I have put all the files mentioned in the guide in tftpboot. Can anyone explain what the problem could be? Thanks in advance.
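
    PXELINUX walked its whole fallback list (UUID, MAC, hex IP prefixes, then default) and found nothing, so the missing piece is almost certainly pxelinux.cfg/default under the TFTP root. A sketch of checks, assuming the howto's TFTP root of /var/lib/tftpboot (the server address below is a placeholder; the tftp syntax is for the tftp-hpa client):

        ls -l /var/lib/tftpboot/pxelinux.cfg/default   # must exist and be world-readable
        # verify the tftp daemon can actually serve it, from another machine:
        tftp 192.168.0.1 -c get pxelinux.cfg/default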

    Read the article

  • repair partition table

    - by m.sr
    Hello. I've just overwritten the partition table of my system's hard disk: I ran cfdisk on the wrong device (/dev/sda instead of /dev/sdd), deleted all partitions, made one new primary spanning the whole device, set its type to 07 (NTFS) and hit write. So here I am with my system still running. Until I reboot, I hope/guess nothing will change, meaning all my data is accessible (I'm currently making a dd backup of the whole device and plan to make a .tar.gz backup of the most important data later). I also backed up /proc/partitions, /proc/diskstats (even though I guess this is more about throughput and stuff like that) and /sys/block/sda/sda?/{start,size}. Some further things I know:

        4 primary partitions
        1st partition: ~100MB, ext3, /boot
        2nd partition: ~100MB, "Win7 Boot Partition", ntfs(?)
        3rd partition: ~20...30GB, Win7, ntfs
        4th partition: ~20...30GB, luks-encrypted device

    The luks-decrypted device is an LVM PV, and the /, /home and swap partitions are all LVs on the (VG on the) above-noted PV. So my questions:

        1. What is the simplest way to just write the kernel's partition table back to the disk?
        2. What is the simplest way to take the above-mentioned data (and perhaps other data I don't know of) and regenerate the partition table?
        3. Are there any problems to take care of regarding luks and/or LVM?
        4. Is there any data I should back up before rebooting (meaning stuff from the kernel [/sys/..., /proc/...] and so on, which could help me regenerate the partition table)?

    Thanks a lot! P.S.: Debian sid, kernel 2.6.34-1-amd64 from debian-experimental, 80GB Intel SSD.
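
    A hedged sketch of the write-back idea: the start/size values saved from /sys are in 512-byte sectors, which is exactly the unit an sfdisk script uses. All numbers and type IDs below are placeholders to show the shape; they must be replaced with the real saved values before this is even considered:

        cat > sda.sfdisk <<'EOF'
        # start/size from /sys/block/sda/sdaN/{start,size}; Id: 83=Linux, 7=NTFS
        /dev/sda1 : start=       63, size=   204800, Id=83, bootable
        /dev/sda2 : start=   204863, size=   204800, Id= 7
        /dev/sda3 : start=   409663, size= 52428800, Id= 7
        /dev/sda4 : start= 52838463, size= 52428800, Id=83
        EOF
        sfdisk --no-reread /dev/sda < sda.sfdisk   # --no-reread: don't force a kernel re-read yet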

    Read the article

  • mdadm RAID5: recovering a double disk failure - with a twist (drive order)

    - by Peter Bos
    Let me acknowledge first off that I have made mistakes, and that I have a backup for most but not all of the data on this RAID. I still have hope of recovering the rest of the data. I don't have the kind of money to take the drives to a recovery expert company. Mistake #0: not having a 100% backup. I know. I have an mdadm RAID5 system of 4x3TB, drives /dev/sd[b-e], all with one partition /dev/sd[b-e]1. I'm aware that RAID5 on very large drives is risky, yet I did it anyway.

    Recent events: The RAID became degraded after a two-drive failure. One drive [/dev/sdc] is really gone; the other [/dev/sde] came back up after a power cycle, but was not automatically re-added to the RAID. So I was left with a 4-device RAID with only 2 active drives [/dev/sdb and /dev/sdd]. Mistake #1: not using dd copies of the drives for restoring the RAID. I did not have the drives or the time. Mistake #2: not making a backup of the superblock and mdadm -E of the remaining drives.

    Recovery attempt: I reassembled the RAID in degraded mode with mdadm --assemble --force /dev/md0, using /dev/sd[bde]1. I could then access my data. I replaced /dev/sdc with a spare, empty, identical drive. I removed the old /dev/sdc1 from the RAID:

        mdadm --fail /dev/md0 /dev/sdc1

    Mistake #3: not doing this before replacing the drive. I then partitioned the new /dev/sdc and added it to the RAID:

        mdadm --add /dev/md0 /dev/sdc1

    It then began to restore the RAID, ETA 300 mins. I followed the process via /proc/mdstat to 2% and then went to do other stuff.

    Checking the result: Several hours (but less than 300 mins) later, I checked the process. It had stopped due to a read error on /dev/sde1. Here is where the trouble really starts. I then removed /dev/sde1 from the RAID and re-added it. I can't remember why I did this; it was late.

        mdadm --manage /dev/md0 --remove /dev/sde1
        mdadm --manage /dev/md0 --add /dev/sde1

    However, /dev/sde1 was now marked as spare. So I decided to recreate the whole array using --assume-clean, using what I thought was the right order, and with /dev/sdc1 missing:

        mdadm --create /dev/md0 --assume-clean -l5 -n4 /dev/sdb1 missing /dev/sdd1 /dev/sde1

    That worked, but the filesystem was not recognized while trying to mount. (It should have been ext4.)

    Device order: I then checked a recent backup I had of /proc/mdstat, and found the drive order:

        md0 : active raid5 sdb1[0] sde1[4] sdd1[2] sdc1[1]
              8790402048 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]

    I then remembered that this RAID had suffered a drive loss about a year ago, and recovered from it by replacing the faulty drive with a spare one. That may have scrambled the device order a bit... so there was no drive [3], but only [0], [1], [2] and [4]. I tried to find the drive order with the Permute_array script (https://raid.wiki.kernel.org/index.php/Permute_array.pl), but that did not find the right order.

    Questions: I now have two main questions:

    1. I screwed up all the superblocks on the drives, but only gave mdadm --create --assume-clean commands (so I should not have overwritten the data itself on /dev/sd[bde]1). Am I right that in theory the RAID can be restored [assuming for a moment that /dev/sde1 is OK] if I just find the right device order?

    2. Is it important that /dev/sde1 be given the device number [4] in the RAID? When I create it with

        mdadm --create /dev/md0 --assume-clean -l5 -n4 \
              /dev/sdb1 missing /dev/sdd1 /dev/sde1

    it is assigned the number [3]. I wonder if that is relevant to the calculation of the parity blocks. If it turns out to be important, how can I recreate the array with /dev/sdb1[0] missing[1] /dev/sdd1[2] /dev/sde1[4]? If I could get that to work, I could start it in degraded mode, add the new drive /dev/sdc1, and let it resync again.

    It's OK if you would like to point out to me that this may not have been the best course of action, but you'll find that I realized this. It would be great if anyone has any suggestions.
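
    For question 1, a sketch of the brute-force check, under the assumption stated in the question that --create --assume-clean only rewrote superblocks: try each candidate order, then probe strictly read-only for a valid ext4 filesystem before trusting anything:

        mdadm --stop /dev/md0
        mdadm --create /dev/md0 --assume-clean -l5 -n4 \
              /dev/sdb1 missing /dev/sdd1 /dev/sde1    # one candidate order
        fsck.ext4 -n /dev/md0                          # -n: check only, never write
        # if fsck finds no superblock, stop the array and repeat with the
        # devices (and the 'missing' slot) permuted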

    Read the article

  • gitolite post-commit hook to update Redmine's repository

    - by eliocs
    Hello, I currently have an Ubuntu server machine which has gitolite and Redmine installed. Redmine accesses repository copies which are updated using a cron task. Having a cron task pull the updates seems like overkill; is there any way a gitolite post-commit script could execute a pull as the redmine user? My current update script looks like this:

        */15 * * * * redmine cd /home/redmine/repositories/support && git pull

    The post-commit script should be similar, I guess. How can I give the gitolite user the privileges to execute the pull as the redmine user? Thanks in advance. P.S.: I don't have enough reputation to create the gitolite tag.
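
    A sketch of one way to wire this up (paths are taken from the cron line; the sudoers rule and helper script are assumptions): allow the gitolite user to run one fixed command as redmine, then call it from a post-receive hook in the bare repository:

        # /etc/sudoers.d/gitolite -- let gitolite trigger the pull, nothing else:
        #   gitolite ALL=(redmine) NOPASSWD: /usr/local/bin/redmine-pull

        # /usr/local/bin/redmine-pull (owned by root, mode 755):
        #!/bin/sh
        cd /home/redmine/repositories/support && exec git pull

        # hooks/post-receive inside the gitolite-managed bare repository:
        #!/bin/sh
        sudo -u redmine /usr/local/bin/redmine-pull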

    Read the article

  • Apache strace to hunt down a memory leak

    - by Zipp
    We have a server with a memory issue: the server keeps allocating itself memory and doesn't release it. We're running Apache. I set MaxReqsPerClient to a really low value just so the threads don't hold a lot of memory, but has anyone seen calls like this? Am I wrong in thinking that it's probably Drupal pulling too much data back from the cache in the DB?

        read(52, "h_index\";a:2:{s:6:\"weight\";i:1;s"..., 6171) = 1368
        read(52, "\";a:2:{s:6:\"author\";a:3:{s:5:\"la"..., 4803) = 1368
        read(52, ":\"description\";s:19:\"Term name t"..., 3435) = 1368
        read(52, "abel\";s:4:\"Name\";s:11:\"descripti"..., 2067) = 1368
        read(52, "ions\";a:2:{s:4:\"form\";a:3:{s:4:\""..., 16384) = 708
        brk(0x2ab554396000) = 0x2ab5542f5000
        mmap(NULL, 1048576, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x2ab55f653000
        brk(0x2ab554356000) = 0x2ab5542f5000
        mmap(NULL, 1048576, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x2ab55f753000
        brk(0x2ab554356000) = 0x2ab5542f5000
        mmap(NULL, 1048576, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x2ab55f853000
        brk(0x2ab554356000) = 0x2ab5542f5000
        mmap(NULL, 1048576, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x2ab55f953000
        brk(0x2ab554356000) = 0x2ab5542f5000
        mmap(NULL, 1048576, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x2ab55fa53000
        brk(0x2ab554356000) = 0x2ab5542f5000
        mmap(NULL, 1048576, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x2ab55fb53000
        brk(0x2ab554356000) = 0x2ab5542f5000
        mmap(NULL, 1048576, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x2ab55fc53000
        poll([{fd=52, events=POLLIN|POLLPRI}], 1, 0) = 0 (Timeout)
        write(52, "d\0\0\0\3SELECT cid, data, created, "..., 104) = 104
        read(52, "\1\0\0\1\5E\0\0\2\3def\23drupal_database_nam"..., 16384) = 1368
        read(52, ";s:11:\"granularity\";a:5:{s:4:\"ye"..., 34783) = 1368
        read(52, ":4:\"date\";}s:9:\"datestamp\";a:9:{"..., 33415) = 1368
        read(52, "\";i:0;s:15:\"display_default\";i:0"..., 32047) = 1368
        read(52, "e as an integer value.\";s:8:\"set"..., 30679) = 1368
        read(52, "label' pairs, i.e. 'Fraction': 0"..., 29311) = 1368

    top (the procs just keep growing in memory):

        12845 apache 15 0 581m 246m 37m S 0.0 4.1 0:17.39 httpd
        12846 apache 15 0 571m 235m 37m S 0.0 4.0 0:12.13 httpd
        12833 apache 15 0 420m 117m 37m S 0.0 2.0 0:06.04 httpd
        12851 apache 15 0 412m 113m 37m S 0.0 1.9 0:05.32 httpd
        13871 apache 15 0 409m 109m 37m S 0.0 1.8 0:04.90 httpd
        12844 apache 15 0 407m 108m 37m S 0.0 1.8 0:04.50 httpd
        13870 apache 15 0 407m 108m 37m S 0.3 1.8 0:03.50 httpd
        14903 apache 15 0 402m 103m 37m S 0.3 1.7 0:01.29 httpd
        14850 apache 15 0 397m 100m 37m S 0.0 1.7 0:02.08 httpd
        14907 apache 15 0 390m  93m 36m S 0.0 1.6 0:01.32 httpd
        13872 apache 15 0 386m  91m 37m S 0.0 1.5 0:03.13 httpd
        12843 apache 15 0 373m  81m 37m S 0.0 1.4 0:02.51 httpd
        14901 apache 15 0 370m  75m 33m S 0.0 1.3 0:00.78 httpd
        14904 apache 15 0 335m  29m 15m S 0.0 0.5 0:00.26 httpd
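
    The trace itself looks like normal behavior: fd 52 is evidently the MySQL connection (note the SELECT write and the drupal_database_nam read), and the brk/mmap pairs are PHP's allocator growing the heap while unserializing large cache rows. A sketch for tying the growth to specific requests:

        pmap -x 12845 | tail -n 1                      # total RSS of one child; repeat over time
        strace -f -e trace=brk,mmap,munmap -p 12845 \
               -o /tmp/httpd-mem.trace                 # log only allocation syscalls
        # correlate timestamps in the trace with the access log to find the
        # requests after which the heap never shrinks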

    Read the article

  • server will not reply (SYN-ACK)

    - by Brent
    I like to use the following commands to manage TIME_WAIT, in the hope of freeing up resources:

        echo 20 > /proc/sys/net/ipv4/tcp_fin_timeout
        sysctl -w net.ipv4.tcp_tw_reuse=1
        sysctl -w net.ipv4.tcp_tw_recycle=1

    I found something interesting while doing a tcpdump: sometimes when a client makes a connection (SYN), the server does not reply (SYN-ACK). My question is, could this be because of the three commands above?
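
    A way to test the suspicion (a sketch; tcp_tw_recycle in particular is known to drop SYNs from clients behind NAT, because it enforces per-host TCP timestamp monotonicity): turn recycling back off and watch whether the unanswered SYNs disappear:

        sysctl -w net.ipv4.tcp_tw_recycle=0
        tcpdump -ni eth0 'tcp[tcpflags] & tcp-syn != 0'   # watch SYN / SYN-ACK pairs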

    Read the article

  • Reuse remote ssh connections and reduce command/session logging verbosity?

    - by ewwhite
    I have a number of systems that rely on application-level mirroring to a secondary server. The secondary server pulls data by means of a series of remote SSH commands executed on the primary. The application is a bit of a black box, and I may not be able to make modifications to the scripts that are used. My issue is that the logging in /var/log/secure is absolutely flooded with requests from the service user, admin. These commands occur many times per second and have a corresponding impact on the logs. They rely on passphrase-less key exchange. The OS involved is EL5 and EL6. Example below. Is there any way to reduce the amount of logging from these actions (by user? by source?)? Is there a cleaner way for the developers to perform these ssh executions without spawning so many sessions? It seems inefficient. Can I reuse the existing connections? Example log output:

        Jul 24 19:08:54 Cantaloupe sshd[46367]: pam_unix(sshd:session): session closed for user admin
        Jul 24 19:08:54 Cantaloupe sshd[46446]: Accepted publickey for admin from 172.30.27.32 port 33526 ssh2
        Jul 24 19:08:54 Cantaloupe sshd[46446]: pam_unix(sshd:session): session opened for user admin by (uid=0)
        Jul 24 19:08:54 Cantaloupe sshd[46446]: pam_unix(sshd:session): session closed for user admin
        Jul 24 19:08:54 Cantaloupe sshd[46475]: Accepted publickey for admin from 172.30.27.32 port 33527 ssh2
        Jul 24 19:08:54 Cantaloupe sshd[46475]: pam_unix(sshd:session): session opened for user admin by (uid=0)
        Jul 24 19:08:54 Cantaloupe sshd[46475]: pam_unix(sshd:session): session closed for user admin
        Jul 24 19:08:54 Cantaloupe sshd[46504]: Accepted publickey for admin from 172.30.27.32 port 33528 ssh2
        Jul 24 19:08:54 Cantaloupe sshd[46504]: pam_unix(sshd:session): session opened for user admin by (uid=0)
        Jul 24 19:08:54 Cantaloupe sshd[46504]: pam_unix(sshd:session): session closed for user admin
        Jul 24 19:08:54 Cantaloupe sshd[46583]: Accepted publickey for admin from 172.30.27.32 port 33529 ssh2
        Jul 24 19:08:54 Cantaloupe sshd[46583]: pam_unix(sshd:session): session opened for user admin by (uid=0)
        Jul 24 19:08:54 Cantaloupe sshd[46583]: pam_unix(sshd:session): session closed for user admin
        Jul 24 19:08:54 Cantaloupe sshd[46612]: Accepted publickey for admin from 172.30.27.32 port 33530 ssh2
        Jul 24 19:08:54 Cantaloupe sshd[46612]: pam_unix(sshd:session): session opened for user admin by (uid=0)
        Jul 24 19:08:54 Cantaloupe sshd[46612]: pam_unix(sshd:session): session closed for user admin
        Jul 24 19:08:55 Cantaloupe sshd[46641]: Accepted publickey for admin from 172.30.27.32 port 33531 ssh2
        Jul 24 19:08:55 Cantaloupe sshd[46641]: pam_unix(sshd:session): session opened for user admin by (uid=0)
        Jul 24 19:08:55 Cantaloupe sshd[46641]: pam_unix(sshd:session): session closed for user admin
        Jul 24 19:08:55 Cantaloupe sshd[46720]: Accepted publickey for admin from 172.30.27.32 port 33532 ssh2
        Jul 24 19:08:55 Cantaloupe sshd[46720]: pam_unix(sshd:session): session opened for user admin by (uid=0)
        Jul 24 19:08:55 Cantaloupe sshd[46720]: pam_unix(sshd:session): session closed for user admin
        Jul 24 19:08:55 Cantaloupe sshd[46749]: Accepted publickey for admin from 172.30.27.32 port 33533 ssh2
        Jul 24 19:08:55 Cantaloupe sshd[46749]: pam_unix(sshd:session): session opened for user admin by (uid=0)
        Jul 24 19:08:55 Cantaloupe sshd[46749]: pam_unix(sshd:session): session closed for user admin
        Jul 24 19:08:55 Cantaloupe sshd[46778]: Accepted publickey for admin from 172.30.27.32 port 33534 ssh2
        Jul 24 19:08:55 Cantaloupe sshd[46778]: pam_unix(sshd:session): session opened for user admin by (uid=0)
        Jul 24 19:08:55 Cantaloupe sshd[46778]: pam_unix(sshd:session): session closed for user admin
        Jul 24 19:08:55 Cantaloupe sshd[46857]: Accepted publickey for admin from 172.30.27.32 port 33535 ssh2
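
    On the connection-reuse question, OpenSSH multiplexing is the standard answer: one master TCP/auth connection, with subsequent ssh commands riding over it as channels, which also collapses the per-command sshd log chatter to a single open/close pair. A sketch for ~/.ssh/config on the pulling side (host alias and address are placeholders; ControlPersist needs a newer OpenSSH than EL5/EL6 ship, so without it the first connection must be kept open, e.g. with ssh -MNf):

        Host primary
            HostName 172.30.27.32
            User admin
            ControlMaster auto
            ControlPath ~/.ssh/cm-%r@%h:%p
            # ControlPersist 5m   # only on OpenSSH >= 5.6

    With the master up, ssh -O check primary verifies it, and each subsequent ssh primary '...' reuses the existing connection instead of opening a new sshd session.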

    Read the article

  • Bootable GRUB partition

    - by MA1
    I have a customized live Fedora 12 USB which is working fine. What I want to do is make a partition of my hard disk bootable, so that my customized Fedora can run from the hard disk. To accomplish this I did the following steps:

        1. Created a primary partition (/dev/sda2), formatted it as ext3 and set it as active.
        2. Copied all the files on the live USB to /dev/sda2. The live USB contents (all directories) are:
           a. boot  b. EFI  c. LiveOS  d. syslinux
        3. Installed GRUB in boot/grub.
        4. Created grub.conf in boot/grub.

    These are the contents of each directory on the USB:

        syslinux/
            boot.cat isolinux.bin splash.jpg vesamenu.c32 initrd0.img
            ldlinux.sys syslinux.cfg vmlinuz0
        LiveOS/
            livecd-iso-to-disk osmin.img squashfs.img
        EFI/boot/
            boot.conf grub.conf boot.efi bootia32.conf bootia32.efi
            splash.jpg splash.xpm.gz vesamenu.c32 initrd0.img isolinux.bin
            isolinux.cfg vmlinuz0
        boot/grub/
            core GRUB files, grub.conf, olpc.fth

    These are the contents of grub.conf:

        default=0
        splashimage=/EFI/boot/splash.xpm.gz
        timeout 2
        hiddenmenu
        title funLinux
        kernel /EFI/boot/vmlinuz0 root=live:LABEL=myFun rootfstype=auto ro liveimg quiet ssb.blacklist=1 selinux=0 vga=normal nomodeset rhgb
        initrd /EFI/boot/initrd0.img

    Now when I try to boot from the hard disk, it shows the GRUB menu and Fedora starts to load, but during loading it says:

        No root device found
        Boot has failed, sleeping forever

    So where is the problem? What am I doing wrong?
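
    One hedged guess: the kernel line asks the live initramfs for a filesystem labelled myFun (root=live:LABEL=myFun), so if the new ext3 partition was never given that label, no root device will be found. Checking and setting the label is cheap:

        blkid /dev/sda2            # show the current LABEL, if any
        e2label /dev/sda2 myFun    # label the ext3 partition to match the kernel line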

    Read the article

  • MySQL Memory usage

    - by Rob Stevenson-Leggett
    Our MySQL server seems to be using a lot of memory. I've tried looking for slow queries and queries with no index, and have halved the peak CPU usage and Apache memory usage, but the MySQL memory stays constant at 2.2GB (~51% of available memory on the server). Here's the graph from Plesk. Running top in the SSH window shows the same figures. Does anyone have any ideas on why the memory usage is constant like this, rather than peaking and troughing with usage of the app? Here's the output of the MySQL Tuning Primer script:

        -- MYSQL PERFORMANCE TUNING PRIMER --
        - By: Matthew Montgomery -
        MySQL Version 5.0.77-log x86_64
        Uptime = 1 days 14 hrs 4 min 21 sec
        Avg. qps = 22
        Total Questions = 3059456
        Threads Connected = 13
        Warning: Server has not been running for at least 48hrs.
        It may not be safe to use these recommendations

        To find out more information on how each of these runtime variables effects performance visit:
        http://dev.mysql.com/doc/refman/5.0/en/server-system-variables.html
        Visit http://www.mysql.com/products/enterprise/advisors.html for info about
        MySQL's Enterprise Monitoring and Advisory Service

        SLOW QUERIES
        The slow query log is enabled.
        Current long_query_time = 1 sec.
        You have 6 out of 3059477 that take longer than 1 sec. to complete
        Your long_query_time seems to be fine

        BINARY UPDATE LOG
        The binary update log is NOT enabled.
        You will not be able to do point in time recovery
        See http://dev.mysql.com/doc/refman/5.0/en/point-in-time-recovery.html

        WORKER THREADS
        Current thread_cache_size = 0
        Current threads_cached = 0
        Current threads_per_sec = 2
        Historic threads_per_sec = 0
        Threads created per/sec are overrunning threads cached
        You should raise thread_cache_size

        MAX CONNECTIONS
        Current max_connections = 100
        Current threads_connected = 14
        Historic max_used_connections = 20
        The number of used connections is 20% of the configured maximum.
        Your max_connections variable seems to be fine.

        INNODB STATUS
        Current InnoDB index space = 6 M
        Current InnoDB data space = 18 M
        Current InnoDB buffer pool free = 0 %
        Current innodb_buffer_pool_size = 8 M
        Depending on how much space your innodb indexes take up it may be safe
        to increase this value to up to 2 / 3 of total system memory

        MEMORY USAGE
        Max Memory Ever Allocated : 2.07 G
        Configured Max Per-thread Buffers : 274 M
        Configured Max Global Buffers : 2.01 G
        Configured Max Memory Limit : 2.28 G
        Physical Memory : 3.84 G
        Max memory limit seem to be within acceptable norms

        KEY BUFFER
        Current MyISAM index space = 4 M
        Current key_buffer_size = 7 M
        Key cache miss rate is 1 : 40
        Key buffer free ratio = 81 %
        Your key_buffer_size seems to be fine

        QUERY CACHE
        Query cache is supported but not enabled
        Perhaps you should set the query_cache_size

        SORT OPERATIONS
        Current sort_buffer_size = 2 M
        Current read_rnd_buffer_size = 256 K
        Sort buffer seems to be fine

        JOINS
        Current join_buffer_size = 132.00 K
        You have had 16 queries where a join could not use an index properly
        You should enable "log-queries-not-using-indexes"
        Then look for non indexed joins in the slow query log.
        If you are unable to optimize your queries you may want to increase your
        join_buffer_size to accommodate larger joins in one pass.
        Note! This script will still suggest raising the join_buffer_size when
        ANY joins not using indexes are found.

        OPEN FILES LIMIT
        Current open_files_limit = 1024 files
        The open_files_limit should typically be set to at least 2x-3x that of
        table_cache if you have heavy MyISAM usage.
        Your open_files_limit value seems to be fine

        TABLE CACHE
        Current table_cache value = 64 tables
        You have a total of 426 tables
        You have 64 open tables.
        Current table_cache hit rate is 1%, while 100% of your table cache is in use
        You should probably increase your table_cache

        TEMP TABLES
        Current max_heap_table_size = 16 M
        Current tmp_table_size = 32 M
        Of 15134 temp tables, 9% were created on disk
        Effective in-memory tmp_table_size is limited to max_heap_table_size.
        Created disk tmp tables ratio seems fine

        TABLE SCANS
        Current read_buffer_size = 128 K
        Current table scan ratio = 2915 : 1
        read_buffer_size seems to be fine

        TABLE LOCKING
        Current Lock Wait ratio = 1 : 142213
        Your table locking seems to be fine

    The app is a Facebook game with about 50-100 concurrent users. Thanks, Rob
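
    The primer's own numbers go some way toward explaining the flat line; as a rough worked sum (the per-thread split below is an inference from the report, not a measurement):

        # rough model the tuning primer applies:
        #   max_memory ~= global_buffers + per_thread_buffers * max_connections
        #   here:        2.01G          + ~2.74M per thread  * 100  ~= 2.28G
        # observed 2.2G sits just under that ceiling, which suggests the
        # constant usage is MySQL's configured buffers (which it holds on to
        # rather than returning to the OS), not a leak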

    Read the article

  • How to reset KDE System Monitor (KSysGuard)

    - by Deltik
    Something went wrong while I was attempting to restore a backup, and KDE System Monitor (KSysGuard) ceased to display properly. The correct display appears when running as root (kdesudo ksysguard); the incorrect display appears when running as my own user (ksysguard). In the incorrect display, the menu bar is missing and the "Process Table" tab is unclickable. I have already tried removing the directory ~/.kde/share/apps/ksysguard/, but to no avail. My question: how do I restore KSysGuard back to factory defaults/normal functionality?
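
    A sketch of a fuller reset, since KDE keeps an application's rc file outside the apps directory (the file names follow the usual KDE4 conventions, so treat them as assumptions):

        rm -f ~/.kde/share/config/ksysguardrc    # KSysGuard's main config
        rm -rf ~/.kde/share/apps/ksysguard       # worksheets / tabs
        kbuildsycoca4                            # rebuild KDE's config cache; harmless if unneeded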

    Read the article

  • Outlook 2003 under RHEL 5 server

    - by Kumar P
    I am using a RHEL 5 server as a proxy server on the local network. Behind the server I have a few Windows machines. Now I want to configure Outlook 2003 to send and receive mail on those Windows boxes, but when I configure it and test the connection, it shows the connection failed. In the browser, the internet works well. Without the proxy, Outlook 2003 configures fine on the Windows boxes and works well. What do you think about this, and how can I solve the problem? Please give clear steps.
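
    One hedged observation: a web proxy only carries HTTP(S), while Outlook speaks SMTP/POP3/IMAP directly, so the RHEL box has to route those ports separately. A sketch with placeholder addresses and interfaces:

        # on the RHEL 5 gateway, assuming LAN 192.168.1.0/24 NATed out via eth0:
        echo 1 > /proc/sys/net/ipv4/ip_forward
        iptables -t nat -A POSTROUTING -o eth0 -s 192.168.1.0/24 -j MASQUERADE
        iptables -A FORWARD -s 192.168.1.0/24 -p tcp -m multiport \
                 --dports 25,110,143,465,587,993,995 -j ACCEPT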

    Read the article

  • Debugging "clogged" TCP connections

    - by Nikratio
    I'm having trouble with an internet connection that seems to randomly "freeze" arbitrary TCP connections. The connections stay established, but no data is coming through. When this happens, netstat still shows the connection status as ESTABLISHED on both the local computer:

        Proto Recv-Q Send-Q Local Address        Foreign Address      State       PID/Program name  Timer
        tcp        0     53 192.168.0.10:41129   173.255.235.238:143  ESTABLISHED 8219/gnutls-cli    on (79.31/13/0)

    ...and the remote server:

        Proto Recv-Q Send-Q Local Address        Foreign Address      State       PID/Program name  Timer
        tcp        0      0 173.255.235.238:143  68.5.174.98:41129    ESTABLISHED 5303/imapd         off (0.00/0/0)

    However, it seems that no data at all is transferred. If I run strace on the local and remote process, both just show a repeating sequence of select calls (with different fds, of course), e.g.:

        select(6, [0 5], NULL, NULL, {0, 50000}) = 0 (Timeout)
        select(6, [0 5], NULL, NULL, {0, 50000}) = 0 (Timeout)
        select(6, [0 5], NULL, NULL, {0, 50000}) = 0 (Timeout)

    The internet connection overall does not seem affected; I can still establish new connections to the same service on the same server without any problems. However, the affected local applications seem to be unaware of the problem and just hang. When I look at a packet capture of this connection on the client side, the last thing that happens is that the client transmits some data, then nothing happens for about 1100 seconds, and then several TCP retransmission requests go out, with intervals increasing from 4 seconds to 130 seconds. No activity is captured after that. After about 10 minutes, the connection on the remote end disappears from netstat (I wasn't able to catch any intermediate state), but it still stays ESTABLISHED on the local end. Finally, after some more minutes, the local application aborts with a timeout and disappears from the local netstat output as well. Does anyone have a suggestion of how I could debug this further to find out where the problem lies and how to fix it? Additionally, and/or as a temporary workaround: is there some way to globally reduce the timeout on the client and/or server, to reduce the time before the local application aborts?
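
    On the workaround question, a sketch of the kernel knobs that bound how long a wedged connection lingers (values are illustrative; the keepalive ones only affect sockets that set SO_KEEPALIVE):

        sysctl -w net.ipv4.tcp_retries2=8           # give up retransmitting sooner (default 15)
        sysctl -w net.ipv4.tcp_keepalive_time=300   # first probe after 5 min idle
        sysctl -w net.ipv4.tcp_keepalive_intvl=30
        sysctl -w net.ipv4.tcp_keepalive_probes=4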

    Read the article

  • Can corosync support a unicast heartbeat mode?

    - by Emre He
    Can corosync support a unicast heartbeat mode? In another thread on Server Fault, someone posted the corosync conf below:

        totem {
            version: 2
            secauth: off
            interface {
                member {
                    memberaddr: 10.xxx.xxx.xxx
                }
                member {
                    memberaddr: 10.xxx.xxx.xxx
                }
                ringnumber: 0
                bindnetaddr: 10.xxx.xxx.xxx
                mcastport: 694
            }
            transport: udpu
        }

    Does this type of conf mean unicast mode? Thanks, Emre

    Read the article

  • Issues while installing NoMachine (NX) setup

    - by TopCoder
    I am trying to connect to an NX server from a Windows client, but it reports the following exception:

        NX> 203 NXSSH running with pid: 5404
        NX> 285 Enabling check on switch command
        NX> 285 Enabling skip of SSH config files
        NX> 285 Setting the preferred NX options
        NX> 200 Connected to address: 10.43.51.77 on port: 22
        NX> 202 Authenticating user: nx
        NX> 208 Using auth method: publickey
        NX> 204 Authentication failed.

    I have regenerated the default_dsa.key on the server and imported the same key on the client, but it is still not working. Any solutions?
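
    A sketch of the usual isolation step for FreeNX-style setups (assuming the standard nx system account with its restricted nxserver shell): try the key manually with plain ssh, which separates an SSH key mismatch from an NX session problem:

        ssh -i client.id_dsa.key nx@10.43.51.77
        # with a matching key you should reach the nxserver shell banner
        # rather than a password prompt or "Permission denied (publickey)"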

    Read the article

  • Failed to su after making a chroot jail

    - by arepo21
    On a 64-bit CentOS host I am using the script make_chroot_jail.sh to put a user in a jail, not permitting it to see anything except its home at /home/jail/home/user1. I did it by typing this:

        sudo ./make_chroot_jail.sh user1

    Afterwards, when trying to connect to user1, at first I was getting an error like:

        /bin/su: user guest does not exist

    I fixed this by copying some missing libraries:

        sudo cp /lib64/libnss_compat.so.2 /lib64/libnss_files.so.2 /lib64/libnss_dns.so.2 /lib64/libxcrypt.so.2 /home/jail/lib64/
        sudo cp -r /lib64/security/ /home/jail/lib64/

    But now, when trying to connect to user1 by typing su user1 and then typing its password, I am getting this error:

        could not open session

    So the question is: how do I connect to user1 in this situation? P.S.: Here are the permissions of some files; this might be helpful in order to provide a solution:

        -rwsr-xr-x 1 root root /home/jail/bin/su
        drwxr-xr-x 4 root root /home/jail/etc
        -rw-r--r-- 1 root root /home/jail/etc/pam.d/su
        -rw-r--r-- 1 root root /home/jail/etc/passwd
        -rw------- 1 root root /home/jail/etc/shadow

    UPDATE 1: After some modifications I managed to connect to user1, but the session closes immediately! I guess this is a PAM issue, but I can't find a way to fix it. Here is the log entry for the close action from /var/log/secure:

        Oct 6 15:19:42 localhost su: pam_unix(su:session): session closed for user user1

    What makes the session exit immediately after launching?
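
    A sketch of the pieces a PAM session inside a jail commonly still needs (the file list is a best guess for CentOS, not taken from the script): name-service config, the PAM stack that pam.d/su includes, the PAM libraries themselves, and a shell for the user that actually exists inside the jail:

        sudo cp /etc/nsswitch.conf /home/jail/etc/
        sudo cp /etc/pam.d/system-auth /home/jail/etc/pam.d/
        sudo cp /lib64/libpam.so.0 /lib64/libpam_misc.so.0 /home/jail/lib64/
        sudo cp /bin/bash /home/jail/bin/
        grep user1 /home/jail/etc/passwd   # the shell field must point at a binary present in the jail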

    Read the article
