Search Results

Search found 41561 results on 1663 pages for 'linux command'.


  • mongod fork vs nohup

    - by Daniel Kitachewsky
    I'm currently writing process management software. One package we use is mongo. Is there any difference between launching mongo with mongod --fork --logpath=/my/path/mongo.log and nohup mongod >> /my/path/mongo.log 2>&1 < /dev/null & ? My first thought was that --fork could spawn more processes and/or threads, and it was suggested to me that --fork could be useful for changing the effective user (dropping privileges). But we run everything under the same user (both the process manager and mongod), so is there any other difference? Thank you
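
    One concrete difference worth knowing: --fork daemonizes mongod (a fork plus a new session), while nohup leaves the process in the launching shell's session. A rough way to verify this on a running instance, as a sketch:

        ps -o pid,ppid,pgid,sid,tty,comm -C mongod
        # a --fork'ed mongod typically shows sid == pid and tty "?",
        # while a nohup'ed one keeps the login shell's session id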

    Read the article

  • MySQL wants a password but it's empty

    - by gAMBOOKa
        $ mysql -uroot
        ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
        $ mysql -uroot -p
        Enter password:   <-- leave blank, hit Enter without typing anything
        mysql>            <-- I am logged in

    NOTE: This is a new MySQL instance installation. So if the password is blank, why won't it log me in without the -p flag? For a little clarification: I am running into this issue when attempting to change the password using a bash script. mysqladmin -u root password abc wouldn't work (access denied); mysqladmin -u root -p password abc cannot be used because it prompts for a password and we need to automate this; and mysqladmin -u root -p'' password abc is not working either.
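
    If a script needs to set the password non-interactively, one sketch that sidesteps mysqladmin entirely (assuming the empty-password socket login from above works, and that 'abc' stands in for the real password) is:

        mysql -u root -e "SET PASSWORD FOR 'root'@'localhost' = PASSWORD('abc'); FLUSH PRIVILEGES;"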

    Read the article

  • central log server with audispd

    - by johan
    I want to set up a central log server. The log server is running Debian 6.0.6 with the audit daemon installed in version 1.7.13-1. The clients are running Red Hat 5.5 and connect to the log server via audispd. The connection works fine and I get all messages from each node. My question is: is it possible for the auditd daemon on the log server to write the messages from each node to a separate file? I tried transferring the messages via the syslog daemon; that works, but then I cannot use tools like ausearch to analyze the log files.
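
    For reference, a minimal remote-logging setup looks roughly like this (an illustrative sketch; logserver.example.com is a placeholder and exact option names can differ between audit versions):

        # on each Red Hat node: /etc/audisp/audisp-remote.conf
        remote_server = logserver.example.com
        port = 60

        # on the Debian log server: /etc/audit/auditd.conf
        tcp_listen_port = 60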

    Read the article

  • How Do I Use Multiple Versions of OpenSSL ... One for Apache and one for PHP

    - by Ken S.
    I have an Apache 2.2 (self-compiled) server that is getting dinged during a PCI scan because it does not support TLS 1.1 or 1.2 ciphers. After some digging I found that the installed version of OpenSSL (0.9.8e) does not contain the newest TLS ciphers, so I downloaded and compiled the latest version of OpenSSL (1.0.1c) and installed it in an alternate location under /opt so it wouldn't interfere with the installed version. What I would like to do is compile Apache against the 1.0.1 libraries and keep the system-installed libraries for use with PHP, cURL, openssh, etc. I'm hoping that doing it this way will allow Apache to use the newest TLS without breaking any other programs that require the old libraries. I thought I could do this by adding an entry to /etc/ld.so.conf pointing to the new libraries, but I think this would conflict with the existing ones, i.e. two references to libcrypto could cause everything to have issues. The main reason for doing it this way is that PHP's cURLing to external servers has issues with the latest OpenSSL libs, which would require edits to our PHP code. Would love some guidance on how best to accomplish this.
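
    One build sketch along these lines (paths are assumptions) links Apache's mod_ssl against the alternate OpenSSL via an rpath, so /etc/ld.so.conf never needs to change and everything else keeps the system libraries:

        cd httpd-2.2.x
        ./configure --enable-ssl \
            --with-ssl=/opt/openssl-1.0.1c \
            LDFLAGS="-Wl,-rpath,/opt/openssl-1.0.1c/lib"
        make && make install
        # verify which libcrypto/libssl the binary picked up:
        ldd /usr/local/apache2/bin/httpd | grep -E 'libssl|libcrypto'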

    Read the article

  • Untangle VPN setup, how to see internal addresses?

    - by NFS user
    So Untangle is set up as the default gateway at 192.168.100.1/24; it is the authoritative DHCP server issuing addresses from 192.168.100.100 to 192.168.100.200, and it is successfully connected to the Internet. Untangle uses OpenVPN for remote access. Connecting to the VPN gives me the address 192.168.40.5. However, I cannot ping any machines on the internal 192.168.100.x network remotely. Clearly, there is something basic that I am missing. What is it and how is it solved? Update: The VPN was not set up with the internal network exported. Since Untangle only allows editing the VPN setup once, the VPN had to be removed and reinstalled with the internal network exported. Now it works. The lesson is that the internal network must be set up before configuring the VPN.

    Read the article

  • Is it possible to open server ports on TUN devices?

    - by JosephH
    If I make a VPN connection to a server (say myvpn.com; assume this server is not behind any router/firewall) via a TUN device and open a port (say 5555), will someone else be able to connect to me via myvpn.com:5555? If not, is there tunneling software that does exactly this in a transparent manner, i.e. runs any TCP/UDP-based server instance behind a router without NAT, using another remote server?
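
    For the second part, plain SSH remote forwarding can already do this, as a sketch (the port and hostname are taken from the question; GatewayPorts must be enabled in the server's sshd_config for outside clients to reach the forwarded port):

        # on the machine running the server process:
        ssh -N -R 0.0.0.0:5555:localhost:5555 user@myvpn.com

        # on myvpn.com, in /etc/ssh/sshd_config:
        # GatewayPorts yes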

    Read the article

  • Cannot boot NixOS Install CD

    - by InFreefall
    I am trying to install NixOS on an Acer laptop. When I try to boot off of the install CD, the system starts up and shows the Acer logo. Then, the boot menu of the CD appears, but it only displays on the top left corner of the screen. The rest of the screen still shows the Acer logo. If I try to select "boot" from the menu, that area of the screen goes black, and nothing else happens. I tried adding "nomodeset" to the boot arguments, but that did not affect anything. Are there any other boot arguments or anything else that could fix this?

    Read the article

  • LUKS-Encrypted Root Partition in Ubuntu 9.04

    - by Martindale
    I have a LUKS-encrypted root partition that I have installed Ubuntu 9.04 to. I have, of course, placed /boot on a separate ext2 partition, and my boot loader loads and functions correctly. However, I can't seem to get my initrd to mount the LUKS-encrypted root using the appropriate /dev/mapper/ address. What hooks and scripts do I need to add to get this to function correctly, and what is the correct way to regenerate my initrd? I can chroot into this install and everything works fine, but I just can't seem to get it to actually boot. Help!
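
    On Ubuntu the usual recipe (a sketch; the device name and mapper name are assumptions) is to describe the LUKS device in /etc/crypttab and regenerate the initramfs from inside the chroot:

        # /etc/crypttab inside the chroot:
        # <target>   <source>    <keyfile>  <options>
        cryptroot    /dev/sda2   none       luks

        # then, still inside the chroot:
        update-initramfs -u -k all
        # and make sure the kernel line in the bootloader uses
        # root=/dev/mapper/cryptroot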

    Read the article

  • How to keep subtree removal (`rm -rf`) from starving other processes for Disk I/O?

    - by David Eyk
    We have a very large (multi-GB) Nginx cache directory for a busy site, which we occasionally need to clear all at once. I've solved this in the past by moving the cache folder to a new path, making a new cache folder at the old path, and then rm -rf'ing the old cache folder. Lately, however, when I need to clear the cache on a busy morning, the I/O from rm -rf is starving my server processes of disk access, as both Nginx and the server it fronts for are read-intensive. I can watch the load average climb while the CPUs sit idle and rm -rf takes 98-99% of disk I/O in iotop. I've tried ionice -c 3 when invoking rm, but it seems to have no appreciable effect on the observed behavior. Is there any way to tame rm -rf to share the disk more? Do I need to use a different technique that will take its cues from ionice? Update: The filesystem in question is an AWS EC2 instance store (the primary disk is EBS). The /etc/fstab entry looks like this:

        /dev/xvdb /mnt auto defaults,nobootwait,comment=cloudconfig 0 2
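
    One thing worth checking: ionice -c 3 (the idle class) is only honored by the CFQ I/O scheduler, so if the instance-store disk is using noop or deadline it will have no effect at all. A sketch (the cache path is a placeholder):

        cat /sys/block/xvdb/queue/scheduler
        # e.g. "noop anticipatory [deadline] cfq" -- ionice needs cfq
        echo cfq | sudo tee /sys/block/xvdb/queue/scheduler
        sudo ionice -c 3 rm -rf /mnt/old-cache-dir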

    Read the article

  • USB forwarding from dom0 to domU

    - by Karolis T.
    What are my options for forwarding two USB-connected phones to a Xen guest? I've read about PCI passthrough (http://www.wlug.org.nz/XenPciPassthrough), but I'm fairly sure the USB controller in the server isn't a PCI card. There's device-level forwarding, but this page doesn't say how to forward two devices: http://www.olivetalks.com/2008/02/03/usb-forwarding-on-xen-it-just-does-not-work/ Would something as simple as usbdevice = [ 'host:xxx', 'host:yyy', ] work? EDIT: I'm now starting a bounty. This is really important for me and for other people as well; I'm hoping someone who has resolved this will be able to help.

    Read the article

  • s3fs changing s3 permissions?

    - by magd1
    My developer believes that s3fs is changing my bucket's permissions. Is this possible? I want my bucket to be public, but it keeps reverting back to private. Here's my fstab entry:

        s3fs#production /mnt/production fuse use_cache=/tmp,use_rrs=1,allow_other,uid=1000,gid=1000 0 0

    My developer mentioned the -o default_acl option (default="private"). The documentation refers to "canned ACLs", but I don't understand what these are.
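
    Canned ACLs are just S3's predefined permission sets (private, public-read, public-read-write, etc.), and s3fs applies its default_acl to objects it writes. If that is the cause, adding the option to the fstab line might look like this (a sketch):

        s3fs#production /mnt/production fuse use_cache=/tmp,use_rrs=1,allow_other,uid=1000,gid=1000,default_acl=public-read 0 0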

    Read the article

  • How to disable or tune filesystem cache sharing for OpenVZ?

    - by gertvdijk
    For OpenVZ, an example of container-based virtualization, it seems that the host and all guests share the filesystem cache. This sounds paradoxical when talking about virtualization, but it is actually a feature of OpenZ, and it makes sense: because only one kernel is running, all containers can benefit from sharing the same pages of filesystem cache in memory. And while it sounds beneficial, I think my setup here actually suffers from it. Here's why: my machines aren't actually sharing any files on disk, so I can't benefit from this feature in OpenVZ. Several OpenVZ machines are running MySQL with MyISAM tables. MyISAM relies on the system's filesystem cache for caching of data files, unlike InnoDB's buffer pool. Also, some virtual machines are known to do heavy and large I/O operations on the same filesystem in the host. For example, when running cat *.MYD > /dev/null on a large database in one machine, I saw the filesystem cache shrinking in another, monitored by htop. This essentially flushes all the useful filesystem cache in the guests (FIFO), and so it flushes the MySQL caches in the guests. Now users are complaining that MySQL is very slow, and it is: some simple SELECT queries take several seconds at times when disk I/O is heavily used by other machines. So, simply put: is there a way to avoid the filesystem cache being wiped out by other virtual machines in container-based virtualization? Some thoughts:
    - Choosing the algorithm for flushing the filesystem cache in the kernel (possible? how?).
    - Reserving a certain amount of pages for a single VM (there seems to be no option for filesystem-cache pages, judging from reading man vzctl).
    - Will running MySQL on another filesystem get me anywhere?
    If not, I think my alternatives are:
    - Use KVM for the VMs running MySQL with MyISAM. KVM actually assigns memory to the VM and does not allow swapping out caches unless using a balloon driver.
    - Move to InnoDB and tune the buffer pools, dirty pages, etc. This is currently considered 'nice to have' for the long term, as not everyone responsible for administration of the system understands InnoDB.
    More suggestions welcome. System software: Proxmox (now 1.9, could be upgraded to 2.x). One big LV assigned for the VMs.
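
    As a small mitigation for the maintenance reads themselves, one-off scans like the cat *.MYD test can bypass the page cache with O_DIRECT so they don't evict the other containers' cached pages; a sketch (the path is a placeholder):

        dd if=/var/lib/mysql/somedb/big.MYD iflag=direct bs=1M of=/dev/null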

    Read the article

  • VirtualBox stretch-to-fit resolution

    - by Scarface
    Hey guys, I have a really annoying problem that I hope someone has figured out. I just installed Ubuntu on VirtualBox and installed the guest additions, so everything was great: I had a resolution that stretched across my screen from left to right, and the only VirtualBox components that were visible were the Windows Vista title bar (minimize/maximize/exit buttons) and the VirtualBox controls at the bottom. Now, all of a sudden, after installing Ubuntu's 170 MB of automatic updates, I see vertical and horizontal scroll bars that are part of VirtualBox, and the Ubuntu resolution will not stretch across my screen anymore. What I want is an Ubuntu resolution that stretches to fit the maximized VirtualBox window, removing the scroll bars. If anyone has any ideas, I would really appreciate it.

    Read the article

  • my.cnf in server directory, why

    - by Mellon
    On my Ubuntu machine, I have installed MySQL. I notice that there is an /etc/my.cnf file containing only two lines:

        innodb_buffer_pool_size = 1G
        max_allowed_packet = 512M

    while there is also an /etc/mysql/my.cnf with long content like:

        # The MySQL database server configuration file.
        ...

    Both look like configuration files for the MySQL server, but why are there two my.cnf files in different locations? Can't the content be merged into one my.cnf? What is the purpose of having separate my.cnf files for the MySQL server?
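
    Both files are read: MySQL merges every option file it finds along its search path, with later files overriding earlier ones. The server itself will print the order, roughly like this (output abbreviated):

        mysqld --help --verbose 2>/dev/null | grep -A 1 'Default options'
        # Default options are read from the following files in the given order:
        # /etc/my.cnf /etc/mysql/my.cnf ~/.my.cnf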

    Read the article

  • Feeding the kernel's entropy source from other machines and/or increasing its maximum size

    - by David Spillett
    We have had a little trouble with a small box that acts as a VPN endpoint and mail relay for our network, caused by the available entropy for /dev/random being too low (which causes TLS connection attempts by exim to fail). The machine doesn't do anything else, so the normal feed into the entropy pool (interrupt timings from things like disk access) is not enough. As a quick hack I've set up a looping script that reads from /dev/hda at a couple of Mbyte/sec, which keeps it topped up. Other than buying a hardware RNG, is there a clean way of piping in entropy from elsewhere, such as a copy of the data our file server uses for its entropy source? I've spotted several tips for using rng-tools to feed it from /dev/urandom on the same machine, but that "feels dirty". Also, is it possible to increase the maximum pool size? It currently seems to max out at 3585.
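
    For what it's worth, the pool's size and fill level are visible under /proc; on most 2.6 kernels the pool is fixed at 4096 bits (which would fit the ~3585 ceiling observed) and poolsize is read-only:

        cat /proc/sys/kernel/random/poolsize       # usually 4096 (bits)
        cat /proc/sys/kernel/random/entropy_avail  # current fill level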

    Read the article

  • Debian repository signing: a step-by-step guide

    - by jldupont
    I've got many Debian repositories for my projects (e.g. EPAPI, erlang-dbus, etc.). It seems that Synaptic now wants those to be signed for the packages to appear by default. For the Debian kung-fu masters out there, please provide me with a step-by-step guide to achieving this. I've googled a lot but I am still a bit confused on the subject. Update: I use a Launchpad PPA now... saves me from all this trouble.
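
    For anyone landing here, the classic flat-repository recipe goes roughly like this (a sketch; KEYID and paths are placeholders):

        # 1. create a signing key (once)
        gpg --gen-key

        # 2. in the repository directory, build the indices
        dpkg-scanpackages . /dev/null | gzip -9c > Packages.gz
        apt-ftparchive release . > Release

        # 3. detach-sign the Release file
        gpg -abs -o Release.gpg Release

        # 4. on each client, import the public key
        gpg --export --armor KEYID | sudo apt-key add -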

    Read the article

  • Very high memory usage, but not claimed by any process?

    - by SharkWipf
    While stress-testing LVM on one of our Debian servers, I came across an issue where memory fills up to the point where it runs the server out of memory, yet no process claims the memory. See http://i.imgur.com/cLn5ZHS.png, and see http://serverfault.com/a/449102/125894 for an explanation of the colors used in htop. Why is this happening? And is there any way to see what is using the memory? htop is configured not to hide any processes, so what is it that htop is missing? In this particular case, I can fairly confidently say that it is caused, directly or indirectly, by lvcreate, lvremove, or dmsetup, as that is what I was stress-testing. Do note that this question is not about solving the LVM problem, but about why the memory isn't claimed by any process. Stopping all LVM commands does bring the memory back down to <600MB.
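
    Memory used by the kernel itself (slab caches, device-mapper structures, page tables) never shows up in any process's RSS, which would fit the LVM/dmsetup suspicion. A sketch of where to look:

        grep -E 'Slab|SReclaimable|SUnreclaim|PageTables|VmallocUsed' /proc/meminfo
        sudo slabtop -o | head -n 20   # one-shot list of the biggest slab caches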

    Read the article

  • compressing dd backup on the fly

    - by Phil
    Maybe this will sound like a dumb question, but the way I'm trying to do it doesn't work. I'm on a live CD, the drive is unmounted, etc. When I do the backup this way:

        sudo dd if=/dev/sda2 of=/media/disk/sda2-backup-10august09.ext3 bs=64k

    ...it would normally work, but I don't have enough space on the external HD I'm copying to (it ALMOST fits). So I wanted to compress on the fly like this:

        sudo dd if=/dev/sda2 | gzip > /media/disk/sda2-backup-10august09.gz

    ...but I got permission denied. I don't understand.
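
    The likely culprit: sudo only elevates dd, while gzip and, more importantly, the > redirection are performed by the unprivileged shell, so the write to /media/disk fails. Two common workarounds, as a sketch:

        # run the whole pipeline as root:
        sudo sh -c 'dd if=/dev/sda2 bs=64k | gzip > /media/disk/sda2-backup-10august09.gz'

        # or keep gzip unprivileged and elevate only the write:
        sudo dd if=/dev/sda2 bs=64k | gzip | sudo tee /media/disk/sda2-backup-10august09.gz > /dev/null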

    Read the article

  • Daemons do not start automatically on Ubuntu 10.04

    - by Anton Prokofiev
    Hello, all! I'm seeing strange behavior on Ubuntu 10.04: a few daemons (apache2 and postgresql 8.4SS from EnterpriseDB) do not start automatically. The funny thing is that from time to time they do. (If I just restart my computer everything looks OK, but if I turn it off for the night, nothing works, so I have to start them manually.) I've googled this problem a little bit, but the only answer I found was to run:

        sudo update-rc.d apache2 defaults

    I called it, but the answer was:

        System start/stop links for /etc/init.d/apache2 already exist.

    Any ideas?
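
    Since the links already exist, one thing sometimes worth trying is removing and re-adding them so they are regenerated (a sketch; the init script name for the EnterpriseDB PostgreSQL build may differ):

        sudo update-rc.d -f apache2 remove
        sudo update-rc.d apache2 defaults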

    Read the article

  • Apache worker is crashing after 3,000 users

    - by user1618606
    I activated the Apache worker MPM on my VPS and I'm having problems: the website crashes when 3,000 users are accessing it. I'm using http://whos.amung.us/stats/2jzwlvbhvpft/ as a counter. My Apache worker configuration:

        KeepAlive On
        MaxKeepAliveRequests 0
        KeepAliveTimeout 1
        <IfModule mpm_worker_module>
            ServerLimit          20000
            StartServer           8000
            MinSpareThreads      10400
            MaxSpareThreads      14200
            ThreadLimit              5
            ThreadsPerChild          5
            MaxClients           20000
            MaxRequestsPerChild      0
        </IfModule>

    The VPS runs Debian 64-bit with LAMP and has 14 GB of memory and 24 GHz of CPU. What can I do to get better performance?
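
    Note that the directive is spelled StartServers (the spelling above would make Apache refuse to start), and starting 8000 processes of 5 threads each is an unusual shape: with worker, MaxClients must not exceed ServerLimit x ThreadsPerChild, and fewer, fatter processes are the norm. A more conventional sketch for ~20,000 clients, to be tuned against the available RAM:

        <IfModule mpm_worker_module>
            ServerLimit            800
            StartServers            20
            MinSpareThreads         75
            MaxSpareThreads        250
            ThreadsPerChild         25
            MaxClients           20000
            MaxRequestsPerChild      0
        </IfModule>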

    Read the article

  • How to increase acpiphp slots?

    - by Eil
    Hi, on RHEL 5.5 there are 31 ACPI PCI hotplug slots by default:

        acpiphp: Slot [1] registered
        ...
        acpiphp: Slot [31] registered

    Is there a way to increase this number? I haven't been able to find an argument to supply to modprobe, or a sysctl knob to tweak, but I know there must be ways to get more slots, based on some Google sleuthing. (For the curious, this is just preliminary experimentation to see how many virtual disks I can hot-add to a running KVM guest.)

    Read the article

  • Do background processes get a SIGHUP when logging off?

    - by Massimo
    This is a follow-up to this question. I've run some more tests; it looks like it really doesn't matter whether this is done at the physical console or via SSH, nor does it happen only with scp; I also tested it with cat /dev/zero > /dev/null. The behaviour is exactly the same:
    - Start a process in the background using & (or put it in the background after it's started using CTRL-Z and bg); this is done without nohup.
    - Log off.
    - Log on again.
    The process is still there, running happily, and is now a direct child of init. I can confirm both scp and cat quit immediately if sent a SIGHUP; I tested this using kill -HUP. So it really looks like SIGHUP is not sent upon logoff, at least to background processes (I can't test with a foreground one for obvious reasons). This happened to me initially with the service console of VMware ESX 3.5 (which is based on Red Hat), but I was able to replicate it exactly on CentOS 5.4. The question is, again: shouldn't a SIGHUP be sent to processes, even if they're running in the background, upon logging off? Why is this not happening? Edit: I checked with strace, as per Kyle's answer. As I was expecting, the process doesn't get any signal when logging off from the shell where it was launched. This happens both when using the server's console and via SSH.
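
    One piece of the puzzle: bash only sends SIGHUP to its jobs when an interactive login shell exits if the huponexit option is set, and it is off by default on most distributions. A quick sketch to test:

        shopt huponexit        # usually prints "huponexit  off"
        shopt -s huponexit     # enable it
        sleep 600 &            # background job
        exit                   # now the sleep should receive SIGHUP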

    Read the article

  • Convert shell logs (incl. escape characters) to HTML?

    - by dehmann
    Is there a tool or a regexp that can convert shell escape characters to HTML code? As an example, here is a logfile from GNU screen:

        ^MESC[K$ ^MESC[K$ exit
        Executing .bashrc
        ESC[00;31;31mserver.xyz.com: ESC[00;34;34m~

    which I would like to convert to something like this:

        $ exit
        Executing .bashrc
        <font color=red>server.xyz.com</font>: <font color=blue>~</font>

    and send as an HTML e-mail to an e-mail address, to archive my work. Here is a related question, which shows how to convert it to regular text, but it would be nice to convert to HTML instead of just throwing the escape characters away.
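
    If installing a small tool is acceptable, aha (the Ansi HTML Adapter) does exactly this conversion. A sketch, assuming the screen log is in screenlog.0 and a MIME-aware mailer such as mutt is available:

        aha < screenlog.0 > screenlog.html
        # then mail it as HTML:
        mutt -e 'set content_type=text/html' -s 'work log' me@example.com < screenlog.html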

    Read the article

  • SSH connection falling down

    - by kappa
    I've set up a connection with autossh that creates some tunnels at system startup, but when I try to connect, after a successful login (with an RSA key) the connection falls down. Here is a trace:

        debug1: Authentication succeeded (publickey).
        debug1: Remote connections from LOCALHOST:5006 forwarded to local address localhost:22
        debug1: Remote connections from LOCALHOST:6006 forwarded to local address localhost:80
        debug1: channel 0: new [client-session]
        debug1: Requesting no-more-sessions@openssh.com
        debug1: Entering interactive session.
        debug1: remote forward success for: listen 5006, connect localhost:22
        debug1: remote forward success for: listen 6006, connect localhost:80
        debug1: All remote forwarding requests processed
        debug1: Sending environment.
        debug1: Sending env LANG = it_IT.UTF-8
        debug1: Sending env LC_CTYPE = en_US.UTF-8
        debug1: client_input_channel_req: channel 0 rtype exit-status reply 0
        debug1: client_input_channel_req: channel 0 rtype eow@openssh.com reply 0
        debug1: channel 0: free: client-session, nchannels 1
        Transferred: sent 2400, received 2312 bytes, in 1.3 seconds
        Bytes per second: sent 1904.2, received 1834.4
        debug1: Exit status 1

    What can be the problem? All this stuff is managed by a script already running on another machine (creating reverse tunnels on the same machine but with different ports).
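
    That trace looks like the remote side simply finished: a session was opened, the remote command or shell exited with status 1, and ssh tore the connection down with it. For pure tunnels, asking ssh not to execute a remote command usually helps; a sketch (hostname is a placeholder):

        autossh -M 0 -f -N \
            -R 5006:localhost:22 \
            -R 6006:localhost:80 \
            user@logserver.example.com
        # -N: no remote command, so nothing can exit and close the session
        # -M 0: rely on ssh's own keepalives instead of a monitor port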

    Read the article
