Search Results

Search found 24755 results on 991 pages for 'linux mom'.


  • nfs mount fails in Ubuntu 10, but not with -v

    - by stuartreynolds
    (1) mount -t nfs remotehost:/remotedir localmountpoint -o owner,rw
    (2) mount -v -t nfs remotehost:/remotedir localmountpoint -o owner,rw
    Command (1) used to work with Ubuntu 9 and now fails with Ubuntu 10 (2.6.32-21-generic kernel) with the error:
        mount.nfs: an incorrect mount option was specified
    Strangely, adding -v (verbose), as in (2), makes the problem go away. This is currently a blocker for me, because the fstab line
        remotehost:/remotedir localmountpoint nfs owner,rw 0 0
    causes the same error (I don't believe I can specify verbose in fstab). Is this a bug in mount, or are my options really incorrect?
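
    A hedged aside for readers hitting the same thing: owner is a mount(8) option meant for local device mounts, and mount.nfs rejects options it does not recognize, so one plausible fix (an assumption, not a confirmed diagnosis) is to drop it or substitute user, which mount handles itself:

        # /etc/fstab - 'user,rw' in place of 'owner,rw'
        remotehost:/remotedir  localmountpoint  nfs  user,rw  0 0

        # then test without -v:
        sudo mount localmountpoint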

    Read the article

  • How to stop LDAP authentication in Ubuntu?

    - by Kery
    My OS is Ubuntu 12.04 and it uses LDAP authentication. The problem: another person wants to access my system, but he is in another domain, so he can't log in, and I have no rights to change the configuration on the LDAP server. So I need a workaround, for example disabling LDAP authentication and using local authentication (I have root rights on my system), or creating another account that is not registered on the LDAP server (I tried this, but I can't change the created account's password; the error is 'password reset by root is not supported'). Of course, any other suggestion is appreciated! Thank you in advance!
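
    A minimal sketch of the local-account route, assuming the usual libnss-ldap/PAM wiring and that 'files' can be ordered before 'ldap' in /etc/nsswitch.conf (the account name below is hypothetical); running passwd as root on a purely local account avoids the LDAP password-reset error:

        # /etc/nsswitch.conf - consult local files before LDAP
        passwd: files ldap
        shadow: files ldap
        group:  files ldap

        # create a local account and set its password as root
        sudo useradd -m guestuser
        sudo passwd guestuser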

    Read the article

  • Continue process after closing terminal?

    - by Jakobud
    Recently, I tried to unzip a 30 GB zip file on a remote system using Putty. As the long unzipping process continued, I closed Putty, assuming the process would just continue to run on the remote machine. When I came back later and logged into the machine again, I realized that the process must have stopped partway through when I closed Putty. I wasn't expecting that. My question is: how do I prevent this problem? Can I somehow fire off a process in the background? Or should I just set up a one-time cronjob to run the process for me?
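
    Two standard answers, sketched with a hypothetical archive name: detach the job from the terminal so the hangup signal sent at logout cannot kill it, or run it inside a reattachable session:

        # option 1: survive the hangup sent when the SSH session closes
        nohup unzip bigfile.zip > unzip.log 2>&1 &

        # option 2: a detachable session (screen must be installed)
        screen -S unzipjob       # start a named session, run unzip inside
        # detach with Ctrl-A d, close Putty, later reattach with:
        screen -r unzipjob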

    Read the article

  • Applications getting killed automatically

    - by nebi
    I am running an httperf client on my machine, and after a few seconds it gets killed. The command is:
        httperf --hog --client=0/1 --server=39.0.0.2 --port=80 --uri=/50kb --rate=20000 --send-buffer=4096 --recv-buffer=16384 --num-conns=6000000 --num-calls=1
    I have run this test many times and never hit this error before; I have only been seeing it for the last two days. My Ubuntu version is 10.04, and my httperf version is 0.9.0. dmesg shows:
        [ 2997.180620] Out of memory: kill process 7977 (apache2) score 70532 or a child
        [ 2997.180632] Killed process 7977 (apache2)
        [ 2997.184837] Out of memory: kill process 7971 (rsyslogd) score 8702 or a child
        [ 2997.184844] Killed process 7971 (rsyslogd)
        [ 2997.188823] Out of memory: kill process 7978 (apache2) score 1354 or a child
        [ 2997.188829] Killed process 7978 (apache2)
        [ 2997.192817] Out of memory: kill process 7973 (atd) score 561 or a child
        [ 2997.192822] Killed process 7973 (atd)
        [ 2997.196805] Out of memory: kill process 8102 (httperf) score 471 or a child
        [ 2997.196811] Killed process 8102 (httperf)
    Output of the free command:
                     total       used       free     shared    buffers     cached
        Mem:       3862768     163000    3699768          0       2384      13068
        -/+ buffers/cache:     147548    3715220
        Swap:      3905528          0    3905528
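
    Those dmesg lines are the kernel's OOM killer, so a hedged first check is to watch memory while the test ramps up; if free memory collapses just before the kills, the httperf parameters (a 20,000/s rate over 6,000,000 connections) are simply exhausting RAM:

        # sample memory once per second during the run
        vmstat 1
        # or, in another terminal:
        watch -n1 free -m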

    Read the article

  • Getting Thunderbird to rescan an IMAP folder

    - by asdmin
    Since I set up an external program (imapfilter) to modify my IMAP folders, Thunderbird keeps losing track of new messages. Messages are moved into sub-folders upon arrival, and Thunderbird fails to keep track of them, so I have no clue which folders to check for new messages; also, newly created folders (even if I subscribe to them after creation) do not show up until I restart the mail client. Is there any extension or setting for Thunderbird that I could use to trigger a re-scan of my folder tree? Please don't waste time on advice like restarting Thunderbird (it takes a great amount of time), "use Evolution (or any other mail client)", using internal mail filters (they are not sophisticated enough), or procmail/fetchmail (I'm set up on a remote IMAP server for good). Edit 1: folders can even be created in the background without Thunderbird knowing they exist.
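
    One commonly suggested setting, offered as an assumption rather than a verified fix: Thunderbird's Config Editor has a preference that is said to make the client poll every folder for new mail instead of only the inbox:

        // about:config / prefs.js
        user_pref("mail.server.default.check_all_folders_for_new", true);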

    Read the article

  • HAProxy overload protection

    - by user2050516
    Using HAProxy, would it be possible to configure overload protection that limits the number of requests sent to the backing HTTP server(s) to a given rate (e.g. 100 requests per second)? If the threshold is exceeded, requests should be answered with a default response. I am interested in requests per second, not connections per second, as one connection can carry many requests. And yes, improving the servers is not an option here. If this is possible, a configuration example would be excellent. Thank you in advance.
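
    A minimal sketch of a request-rate (not connection-rate) limit, assuming a reasonably recent HAProxy (stick-tables storing http_req_rate, and deny_status for a custom status); the names are placeholders and 429 stands in for the "default response":

        frontend fe_web
            bind *:80
            # a single shared key: every request increments the same
            # counter, so the measured rate is global, not per-client
            stick-table type string size 1 expire 10s store http_req_rate(1s)
            http-request track-sc0 str(all)
            # refuse traffic above ~100 requests per second
            http-request deny deny_status 429 if { sc_http_req_rate(0) gt 100 }
            default_backend be_app

    For a per-client limit instead, track src in a stick-table of type ip.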

    Read the article

  • Is there a BSD equivalent to "!!"?

    - by CT
    I often find myself issuing a command without the elevated privileges it needs. On Ubuntu I could use sudo !!, which reissues the previous command with sudo privileges. Is there an equivalent on OpenBSD? Edit: I should have been more specific about the version of OpenBSD. I am using OpenBSD 4.8, where sudo seems to be installed by default. I have already created a user besides root and edited my sudoers file to allow that user to use sudo. My question is: is there already a built-in shortcut like "!!" for reusing the previous command?
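
    For what it's worth: !! is a feature of csh-style shell history, not of the OS, and OpenBSD's default ksh lacks it. A sketch of the closest equivalents (the fc builtin is POSIX, so this should work in ksh):

        # re-run the previous command under sudo
        eval "sudo $(fc -ln -1)"

        # or install a shell that supports !! outright
        pkg_add bash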

    Read the article

  • Apache: list of all directives for each context?

    - by ajsie
    In the Apache online documentation, each directive belongs to one or more contexts, e.g. server config, virtual host, directory, .htaccess, and so on. I wonder: is there a list of all directives belonging to each context, e.g. a list of all directives valid in a virtual host, so I know exactly which ones I can use? And also, where can I find the directives for Apache modules: on the main directive index, or does each module have its own documentation page (e.g. mod_rewrite)?
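
    Beyond the documentation, the server itself can answer this: the -L flag dumps every directive the running build understands (core plus loaded modules) together with the contexts in which each is allowed:

        # directive list with allowed contexts, straight from the binary
        httpd -L
        # on Debian/Ubuntu the wrapper is:
        apache2ctl -L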

    Read the article

  • How to make my Ubuntu an internet gateway for my Android phone

    - by yacine
    I want to use my school's internet connection on my Android phone. The problem is that they have a Squid proxy, and many applications on my phone don't use the proxy at all. The obvious solution is to install a transparent proxy on the Android to force all applications to connect through it. The problem is that I would need to root the phone to make that work, and I don't want to, because it's not really my phone and rooting is a little risky. Another solution, which is safer, is to make my computer act as a gateway, so I put my Ubuntu machine's IP in the gateway setting of the phone. I'm running a small proxy on my Ubuntu box (cntlm), so I redirect the Android traffic to it. I did this with iptables as follows:
        iptables -t nat -A PREROUTING -s 10.0.1.118 -p tcp -j REDIRECT --to-ports 8888
        iptables -t nat -A PREROUTING -s 10.0.1.118 -p udp -j REDIRECT --to-ports 8888
    10.0.1.118 is the IP of the phone; 8888 is the port of cntlm (the proxy on my PC). Now, on the phone: when I enter www.google.com in the browser I get nothing (website not found, Firefox's error message). But when I enter http://74.125.143.101 (an IP of Google) I get an error message from the school proxy (so it worked in some way – my PC redirected the phone's traffic to the Squid proxy). The error message is:
        The requested URL could not be retrieved, while trying to process the request:
        GET / HTTP/1.1
        Host: 74.125.143.101
        User-Agent: ...
        ...
    I think the problem is in the request line: it should be GET http://74.125.143.101/ HTTP/1.1. But I don't understand exactly what's happening, and I'm a certified CCNA.
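
    A hedged reading of that error: iptables REDIRECT hands cntlm an intercepted request whose first line is origin-form (GET / HTTP/1.1), while an ordinary HTTP proxy expects absolute-form (GET http://host/ HTTP/1.1), so what reaches Squid is malformed. Interception needs a transparent-capable proxy (e.g. redsocks, or Squid in transparent mode) in front of cntlm; without a mandatory upstream proxy, plain NAT would be the whole recipe:

        # ordinary gateway setup (works only where no upstream proxy is forced)
        sudo sysctl -w net.ipv4.ip_forward=1
        sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE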

    Read the article

  • udev rule not being executed

    - by jyavenard
    I have the following device, which udevadm lists as:
        looking at device '/devices/pci0000:00/0000:00:1c.7/0000:09:00.0/usb6/6-2/6-2:1.0/ttyUSB0/tty/ttyUSB0':
            KERNEL=="ttyUSB0"
            SUBSYSTEM=="tty"
            DRIVER==""
        looking at parent device '/devices/pci0000:00/0000:00:1c.7/0000:09:00.0/usb6/6-2/6-2:1.0/ttyUSB0':
            KERNELS=="ttyUSB0"
            SUBSYSTEMS=="usb-serial"
            DRIVERS=="pl2303"
            ATTRS{port_number}=="0"
        looking at parent device '/devices/pci0000:00/0000:00:1c.7/0000:09:00.0/usb6/6-2/6-2:1.0':
            KERNELS=="6-2:1.0"
            SUBSYSTEMS=="usb"
            DRIVERS=="pl2303"
            ATTRS{bInterfaceNumber}=="00"
            ATTRS{bAlternateSetting}==" 0"
            ATTRS{bNumEndpoints}=="03"
            ATTRS{bInterfaceClass}=="ff"
            ATTRS{bInterfaceSubClass}=="00"
            ATTRS{bInterfaceProtocol}=="00"
            ATTRS{supports_autosuspend}=="1"
    So I created this rule:
        KERNEL=="ttyUSB0", SUBSYSTEM=="tty", SUBSYSTEMS=="usb-serial", DRIVERS=="pl2303", KERNELS=="6-2:1.0", SYMLINK+="cc128serial"
    This doesn't work. However, if I use:
        KERNEL=="ttyUSB0", SUBSYSTEM=="tty", SUBSYSTEMS=="usb-serial", DRIVERS=="pl2303", SYMLINK+="cc128serial"
    then it works. I tried KERNELS=="6*" etc. to no avail. Any ideas? Thanks.
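
    A likely explanation, based on how udev matches parents: all of SUBSYSTEMS, KERNELS, DRIVERS and ATTRS in a single rule must be satisfied by one parent device. KERNELS=="6-2:1.0" belongs to the usb parent, while SUBSYSTEMS=="usb-serial" belongs to a different parent, so that combination can never match. Keeping every parent key on the same parent should work (a sketch):

        # match the tty child, then the usb interface parent only
        KERNEL=="ttyUSB0", SUBSYSTEM=="tty", KERNELS=="6-2:1.0", SUBSYSTEMS=="usb", DRIVERS=="pl2303", SYMLINK+="cc128serial"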

    Read the article

  • Crontab -- scheduling my backups

    - by Garfonzo
    I want to do a backup every Friday night (no, this is not the whole backup routine, just part of it). Each Friday night's backup will not be overwritten until 4 weeks later. So, essentially, I have four revolving backups: week1, week2, week3, and week4. Now, I need the week1 backup script to run every 4 weeks, but I also want week2's script to run every four weeks, and so on. I know I can tell crontab to execute something every X weeks/days/hours/whatever. However, how do I set it up so that each of these four scripts runs on a different week? How do I avoid all 4 scripts running on the same night and then dutifully waiting for weeks, only to all run again together? Thanks.
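
    One sketch that avoids four separate schedules entirely: run a single wrapper every Friday night and let it derive the slot from the ISO week number (script names and paths are hypothetical):

        # crontab: one entry, every Friday at 23:00
        0 23 * * 5 /usr/local/bin/rotate-backup.sh

        # /usr/local/bin/rotate-backup.sh
        #!/bin/bash
        # 10# forces base ten so weeks "08" and "09" don't parse as octal
        slot=$(( 10#$(date +%V) % 4 + 1 ))
        exec /usr/local/bin/do-backup.sh "week$slot"

    The week counter resets at year end, so the rotation can hiccup once a year.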

    Read the article

  • Allow PHP to write file without 777

    - by camerongray
    I am setting up a simple website on webspace provided by my university. I do not have database access, so I am storing all the data in a flat file. The issue I am experiencing is with file permissions. I need PHP to be able to read and write the data file, but I don't really want to set the file to 777, as anybody else on the system could then modify it, and they already have read access to everyone's web directories. Does anyone have any ideas on how to accomplish this? Thanks in advance.
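
    A sketch of the group-ownership route, assuming PHP runs as the web server user (often www-data) and that the host lets you change the file's group; if PHP instead runs under your own account (suPHP/suexec setups, common on shared hosting), a plain 600 already suffices:

        chgrp www-data data.txt   # hypothetical data file
        chmod 660 data.txt        # owner and group may write, others get nothing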

    Read the article

  • How to auto detect text file encoding?

    - by ???
    There are many plain text files that were encoded in various charsets. I want to convert them all to UTF-8, but before running iconv I need to know each file's original encoding. Most browsers have an Auto Detect option for encodings; however, I can't check those text files one by one because there are too many. Only once I know the original encoding can I convert the text with iconv -f DETECTED_CHARSET -t utf-8. Is there any utility to detect the encoding of plain text files? It doesn't have to be 100% correct, but it should recognize most of them.
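
    Several detectors exist, packaged separately on most distros; a sketch of checking one file and of wiring a detector into the iconv step (file names are hypothetical, and the loop trusts the detector's guess):

        file -bi sample.txt       # rough: text/plain; charset=iso-8859-1
        uchardet sample.txt       # Mozilla's universal charset detector
        enca -L none sample.txt

        # batch conversion using the detected charset
        for f in *.txt; do
            enc=$(uchardet "$f")
            iconv -f "$enc" -t UTF-8 "$f" > "${f%.txt}.utf8.txt"
        done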

    Read the article

  • Choose source interface for PPTP VPN on Ubuntu

    - by Emyl
    I have an Ubuntu VirtualBox guest with two network interfaces, eth0 (NAT) and eth1 (bridged). I want to connect to a PPTP VPN using eth1, but I don't know how to specify which interface to use. If I just try:
        sudo pon myvpn nodetach
    it fails with:
        Using interface ppp0
        Connect: ppp0 <--> /dev/pts/1
        Modem hangup
        Connection terminated.
    Looking at the routing table with route seems to indicate that eth0 is being used:
        x.x.x.x.no      10.0.2.2        255.255.255.255 UGH   0      0        0 eth0
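
    A hedged workaround: the PPTP control connection just follows the routing table, so adding a host route to the VPN server via eth1's gateway before dialing should force the tunnel onto the bridged interface (the server name and gateway below are placeholders):

        # send traffic for the VPN server out eth1, then dial
        sudo route add -host vpn.example.com gw 192.168.1.1 dev eth1
        sudo pon myvpn nodetach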

    Read the article

  • Creating a full, global Clang+LLVM environment

    - by Griwes
    What is the easiest way to set up the full Clang, libc++ and LLVM stack as the default global toolchain? All of my attempts to build it, in every configuration I could think of, resulted in a working Clang that nevertheless used GCC's default libstdc++ headers instead of libc++'s, leading to numerous failures in incompatible pieces of library code. I would like it to work out of the box, without having to do magic in .bashrc or pass all those -stdlib=libc++ and -lc++ flags to the compiler and linker.
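
    A sketch of baking the defaults in at build time instead of via shell magic. CLANG_DEFAULT_CXX_STDLIB is a CMake option in modern LLVM trees (newer than the setup described here, so treat its availability as an assumption about your checkout):

        # configure clang to use libc++ by default; no -stdlib flag needed
        cmake -DCMAKE_BUILD_TYPE=Release \
              -DLLVM_ENABLE_PROJECTS="clang" \
              -DLLVM_ENABLE_RUNTIMES="libcxx;libcxxabi" \
              -DCLANG_DEFAULT_CXX_STDLIB=libc++ ../llvm

        # then point cc/c++ at clang system-wide (Debian-style alternatives)
        sudo update-alternatives --install /usr/bin/cc  cc  /usr/local/bin/clang   100
        sudo update-alternatives --install /usr/bin/c++ c++ /usr/local/bin/clang++ 100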

    Read the article

  • How to configure OpenVPN server to use custom default gateway?

    - by Arenim
    I have a VPN server at address 10.1.0.2, and the server has another IP in its subnet, 10.0.0.2 (a tun2socks router). The server's default gateway is NOT 10.0.0.2 (and that's fine) but another, external IP. I want all client traffic to be forwarded through 10.0.0.2. Here is part of my server config:
        dev tap0
        server-bridge 10.1.0.1 255.255.255.0 10.1.0.50 10.1.0.100
        push "route 10.0.0.0 255.255.255.0"    ; now clients can ping 10.0.0.2
        push "redirect-gateway def1 bypass-dhcp"
        push "dhcp-option DNS 10.1.0.1"
        push "dhcp-option WINS 10.1.0.1"
    In fact, I want something like: push "redirect-gateway 10.0.0.2". How can I achieve this?
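
    OpenVPN has a pushable route-gateway directive for exactly this in tap/bridged setups; a sketch of the intended combination (an untested assumption against this particular config):

        push "route-gateway 10.0.0.2"
        push "redirect-gateway def1 bypass-dhcp"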

    Read the article

  • How can I disable safe mode for PHP on a web server?

    - by user1767434
    I am using wkhtmltopdf to make a PDF of a page. My code shells out to run a command using the wkhtmltopdf binary. Everything works fine on my WAMP server, but when the code runs on my web server it fails with the following error:
        Warning: shell_exec() has been disabled for security reasons in /home/pssptech/public_html/.../cert.php on line 272
    I think PHP is running in safe mode on the server, which is why shell execution is disabled. But the main problem is that I am unable to find the php.ini file on my remote web server. Can you tell me where I can find the config file so that I can disable safe mode? Thanks in advance.
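
    Two quick checks, sketched below. Note that this particular warning normally comes from the disable_functions directive rather than from safe mode, and disable_functions cannot be overridden from a script, so on shared hosting this may end up as a support ticket:

        php --ini          # prints "Loaded Configuration File: ..."

        # or from a web-served script (the web SAPI may load a different
        # php.ini than the CLI): look for "Loaded Configuration File"
        <?php phpinfo(); ?>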

    Read the article

  • Apache2 WebServer not allowing me to view website/files in /var/www

    - by CitadelCSAlum
    I used to be able to access websites/files stored in the directory /var/www. I have not used this for a while, but now I need to store media in /var/www or /var/www/images. I noticed that my Apache web server wasn't running correctly, so I did a complete package removal and reinstalled, but I am still unable to access a test page index.html in /var/www by going to http://myipaddresshere/index.html. Is there some initial configuration I need to do to allow me to store HTML and media files in this directory and access them from the browser? I don't remember having to do anything before.
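
    A few hedged first checks, using the standard Ubuntu Apache2 locations:

        sudo service apache2 status          # is it actually running?
        sudo netstat -tlnp | grep ':80'      # is anything listening on 80?
        grep -R "DocumentRoot" /etc/apache2/sites-enabled/
        ls -l /var/www/index.html            # readable by www-data?
        tail /var/log/apache2/error.log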

    Read the article

  • How to find the reason for a weekly downtime on an Ubuntu web server hosted by AWS?

    - by IceSheep
    We started monitoring our web server using Pingdom and found out that we have a downtime of a few minutes every Sunday at 0:00 UTC. The test runs every minute and checks whether a successful HTTP response (code 200) is returned on port 80. The test fails due to a timeout (no response after 30 seconds). Here's what we've already checked – without success:
    - Since we run our webserver behind a load balancer, I've pointed the Pingdom test at both the load balancer's public DNS and the webserver's public DNS, to find out whether there's a problem with the AWS load balancer – both tests return the same result.
    - We set up Munin on our webserver. Everything looked fine even after the failure. Since the last failure lasted only 2 minutes, I suppose Munin couldn't capture a potential problem (it only samples every 5 minutes).
    - I have checked /var/log/apache2/error.log and /var/log/syslog for suspicious entries.
    - I have checked /etc/cron.weekly and /etc/crontab for suspicious entries.
    - I have searched for files created or last modified between 0:00 and 0:15 using this method (nothing found):
        touch -t 201209020000 start
        touch -t 201209020015 end
        find / -newer start -and ! -newer end
    Has anybody experienced a similar problem? Any proposals on how to find the reason for this behavior? It's Ubuntu 10.04 LTS running on an AWS m1.large instance. Thanks!
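
    Two more places worth sweeping, sketched below: cron entries living outside /etc/crontab, and logrotate, whose weekly cycle restarting Apache is a classic cause of brief Sunday-midnight blips (an assumption to verify, not a diagnosis):

        # per-package and per-user cron jobs
        ls /etc/cron.d/
        sudo ls /var/spool/cron/crontabs/

        # does a weekly logrotate cycle reload or restart apache?
        grep -r apache /etc/logrotate.d/ /etc/logrotate.conf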

    Read the article

  • Server load increases with many httpd processes sharing the same PPID

    - by user3740955
    I can see that my server load increases to the 200-300 range; a week ago the maximum load was around 20-25. In top and ps -ef I can see a lot of httpd processes, and the PPID of most of them is the same PID, whose owner is root. Please let me know how I can reduce the server load. I have searched a lot but have not been able to find a proper solution. Please see below part of the ps -ef output:
        apache   29698  2062  1 16:54 ?        00:00:00 /usr/sbin/httpd
        apache   29700  2062  3 16:54 ?        00:00:00 /usr/sbin/httpd
        apache   29701  2062 10 16:54 ?        00:00:02 /usr/sbin/httpd
        apache   29702  2062  0 16:54 ?        00:00:00 /usr/sbin/httpd
        apache   29703  2062  1 16:54 ?        00:00:00 /usr/sbin/httpd
        apache   29705  2062  0 16:54 ?        00:00:00 /usr/sbin/httpd
        apache   29706  2062  3 16:54 ?        00:00:00 /usr/sbin/httpd
        apache   29707  2062  0 16:54 ?        00:00:00 /usr/sbin/httpd
        apache   29708  2062  1 16:54 ?        00:00:00 /usr/sbin/httpd
        apache   29709  2062  0 16:54 ?        00:00:00 /usr/sbin/httpd
        apache   29710  2062  0 16:54 ?        00:00:00 /usr/sbin/httpd
        apache   29711  2062  0 16:54 ?        00:00:00 /usr/sbin/httpd
        apache   29712  2062  0 16:54 ?        00:00:00 /usr/sbin/httpd
    Server version: Apache/2.2.3
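
    Two hedged observations: worker processes sharing one root-owned parent PID is normal Apache prefork behaviour, so the PPID itself is not the problem; the load spike is more likely incoming traffic. A sketch for seeing who generates it:

        # connections per client IP on port 80
        netstat -ntp 2>/dev/null | awk '$4 ~ /:80$/ {print $5}' \
            | cut -d: -f1 | sort | uniq -c | sort -rn | head

        # what each worker is doing (needs mod_status and ExtendedStatus On)
        apachectl fullstatus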

    Read the article

  • ZFS & Deduplicating FLAC Data

    - by jasongullickson
    I'm experimenting with using ZFS to deduplicate a large library of FLAC files. The purpose of this is twofold: Reduce storage utilization Reduce bandwidth needed to sync the library with cloud storage Many of these files are of the same music tracks but from different physical media. This means that for the most part they are the same and usually close to the same size, which makes me think that they should benefit from block-level deduplication. However in my testing I'm not seeing good results. When I create a pool and add three of these tracks (identical songs from different source media) zpool list reports 1.00 dedupe. If I copy all of the files (make exact duplicates of the three) dedupe climbs, so I know that it is enabled and functioning, but it's not finding any duplication in the original collection of files. My first thought was that perhaps some of the variable header data (metadata tags, etc.) might be mis-aligning the bulk of the data in these files (the audio frames) but even making the header data consistent across the three files doesn't seem to have any impact on deduplication. I'm considering taking alternate routes (testing other dedupe filesystems as well as some custom code) but since we're already using ZFS and I like the ZFS replication options, I'd prefer to use ZFS dedupe for this project; but perhaps it's simply not capable of working well with this sort of data. Any feedback regarding tuning that might improve dedupe performance for this sort of dataset, or confirmation that ZFS dedupe is not the right tool for this job are appreciated.

    Read the article
