Search Results

Search found 45752 results on 1831 pages for 'ubuntu linux'.


  • Log viewer server and client

    - by Scott Crooks
    I'm looking for a log viewing solution for (mostly) Linux and (preferably) Windows too. I want to be able to centralize the log information from a lot of servers so that people in the company can see what's going on across the different machines. I would guess this would involve a central server that accepts information from the various computers / virtual machines, with (perhaps) a daemon running on each of them. Does such software exist?
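
    One building block worth knowing about, regardless of which viewer ends up on top, is plain rsyslog forwarding. The sketch below is a minimal example, not a product recommendation; logs.example.com is a placeholder hostname and the on-disk layout is arbitrary:

        # Central server: accept syslog over TCP 514 and file messages per host.
        cat <<'EOF' | sudo tee /etc/rsyslog.d/10-central.conf
        $ModLoad imtcp
        $InputTCPServerRun 514
        $template PerHost,"/var/log/remote/%HOSTNAME%/syslog.log"
        *.* ?PerHost
        EOF
        sudo service rsyslog restart
        # (This also files the central box's own messages; adjust filters as needed.)

        # Each client: forward everything to the central box (@@ = TCP, @ = UDP).
        echo '*.* @@logs.example.com:514' | sudo tee /etc/rsyslog.d/90-forward.conf
        sudo service rsyslog restart

    A web front end (or even just grep over /var/log/remote) can then sit on the central server.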

    Read the article

  • Trying to set up virtual hosts with PHP on a unix socket on nginx

    - by user1634653
    I have installed php5-fpm and Nginx on an Ubuntu machine, but I've hit a problem. With only one virtual host, and php5-fpm listening on a unix socket, everything is fine; as soon as I add another virtual host, Nginx serves the default "Welcome to nginx!" page instead. When I point the sites at a TCP port (for example port 9000) it works fine with multiple sites. It is a fresh install of Ubuntu 11.10 and Nginx 1.2.3 with php5-fpm, plus some extra PHP packages such as php-apc. I can only give links to the virtual host files because I am posting from a mobile phone: http://ic0nic.co.uk/ic0nic.txt, http://ic0nic.co.uk/sourproxy.txt. I want to keep using the unix socket because I find it a whole lot faster.

    Edit: here are the nginx configs:

        server {
            server_name ic0nic.co.uk www.ic0nic.co.uk;
            root /var/www/ic0nic.co.uk;
            listen 8080;
            index index.html index.htm index.php;
            include conf.d/drop;

            location / {
                try_files $uri $uri/ /index.php?q=$uri&$args;
            }

            location ~ \.php$ {
                fastcgi_buffers 8 256k;
                fastcgi_buffer_size 128k;
                fastcgi_intercept_errors on;
                include fastcgi_params;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_pass unix:/dev/shm/php-fpm-www.sock;
                root /var/www/ic0nic.co.uk;
            }
        }

        server {
            server_name sourproxy.co.uk www.sourproxy.co.uk;
            root /var/www/sourproxy.co.uk/;
            listen 8080;
            index index.html index.htm index.php;
            include conf.d/drop;

            location / {
                try_files $uri $uri/ /index.php?q=$uri&$args;
            }

            location ~ \.php$ {
                fastcgi_buffers 8 256k;
                fastcgi_buffer_size 128k;
                fastcgi_intercept_errors on;
                include fastcgi_params;
                fastcgi_param SCRIPT_FILENAME /var/www/sourproxy.co.uk$fastcgi_script_name;
                fastcgi_pass unix:/dev/shm/php-fpm-www.sock;
            }
        }
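
    One hedged thing to try is giving each vhost its own php-fpm pool and socket instead of sharing /dev/shm/php-fpm-www.sock. The pool name, socket path and Debian/Ubuntu pool.d location below are assumptions for illustration:

        # Hypothetical extra pool for the second site.
        cat <<'EOF' | sudo tee /etc/php5/fpm/pool.d/sourproxy.conf
        [sourproxy]
        user = www-data
        group = www-data
        listen = /dev/shm/php-fpm-sourproxy.sock
        listen.owner = www-data
        listen.group = www-data
        pm = dynamic
        pm.max_children = 5
        pm.start_servers = 2
        pm.min_spare_servers = 1
        pm.max_spare_servers = 3
        EOF
        sudo service php5-fpm restart
        # ...then point the sourproxy server block at the new socket:
        #     fastcgi_pass unix:/dev/shm/php-fpm-sourproxy.sock;

    If the "Welcome to nginx!" page still appears, it is also worth checking whether another enabled site (for example the packaged default) is being picked as the default server for that listen address.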

    Read the article

  • UDISKS instead of HAL

    - by MeJ
    Does anybody have some experience with udisks? HAL will no longer be supported on most Linux distributions, so I am thinking of switching to udisks. My current HAL-based script looks like this:

        for UDI in $(hal-find-by-property --key storage.bus --string usb)
        do
            HAL_TMP=$(hal-get-property --udi $UDI --key storage.removable.media_available)
            if [ "$HAL_TMP" = "true" ]; then
                HAL_DEV=$(hal-get-property --udi $UDI --key block.device)
                HAL_SIZE=$(hal-get-property --udi $UDI --key storage.removable.media_size)
                HAL_TYPE=$(hal-get-property --udi $UDI --key storage.drive_type)
            fi
        done

    How do I adapt the commands above to use udisks instead of HAL? Thanks!
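
    For comparison, a rough sketch of the same loop against the udisks 1.x command-line tool. The field names parsed out of --show-info ("interface", "has media", "size", "media") are assumptions from memory and vary between versions, so check them against the actual output before relying on this:

        for DEV in $(udisks --enumerate-device-files | grep '^/dev/sd[a-z]$')
        do
            # Keep only whole /dev/sdX disks (the tool also lists partitions and symlinks).
            INFO=$(udisks --show-info "$DEV")
            # Keep USB drives that currently have media inserted.
            if echo "$INFO" | grep -q 'interface: *usb' &&
               echo "$INFO" | grep -q 'has media: *1'; then
                UD_DEV=$DEV
                UD_SIZE=$(echo "$INFO" | awk '/^ *size:/ {print $2; exit}')
                UD_TYPE=$(echo "$INFO" | awk '/^ *media:/ {print $2; exit}')
                echo "$UD_DEV $UD_SIZE $UD_TYPE"
            fi
        done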

    Read the article

  • Simple email server with a web interface [on hold]

    - by user196989
    I have purchased a domain name for my blog, and I'd like to use [email protected] as my email address. I have a Linux (Ubuntu 13.10) VPS running the LAMP stack. I would like to install some software that includes spam filtering, email delivery, etc., but is simpler to maintain than something like this (possibly hundreds of steps, and a lot of maintenance headaches too, I suppose). I would also require a web interface at mail.mydomain.com, though I suppose Roundcube is an option for that.

    Read the article

  • Write, wall, who and mesg

    - by miniBill
    I want to set up a server with a lot of users so that (in order of importance):

    1. Users cannot obtain IP addresses of other users with who or last.
    2. Users cannot wall.
    3. Users can write to each other.
    4. Users are able to selectively mesg n other users, as opposed to simply blocking everyone.

    Point 1 is easily solved by a 660 on wtmp and utmp, but I don't know how to achieve the other points. The server runs Gentoo Linux.
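
    For points 1-3, a hedged sketch of the usual approach (paths are typical util-linux locations; verify them on Gentoo). Both wall and write are normally setgid tty; stripping the setgid bit from wall alone stops ordinary users broadcasting while leaving one-to-one write working:

        # 1. who/last read these files; 660 hides them from ordinary users.
        chmod 660 /var/log/wtmp /var/run/utmp

        # 2. Without setgid tty, wall cannot open other users' terminals,
        #    so only root can broadcast.
        chmod g-s /usr/bin/wall

        # 3. Leave write alone so user-to-user messages still work.
        ls -l /usr/bin/write    # expect something like -rwxr-sr-x root tty

    Point 4 is the hard one: mesg only toggles group write access on your own tty, so it is all-or-nothing; selective blocking would need a wrapper around write rather than a stock tool.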

    Read the article

  • Demonstrating that raising net.core.somaxconn can make a difference

    - by petermolnar
    I got into an argument about the net.core.somaxconn parameter: I was told that changing the default of 128 will not make any difference. I thought this might be enough proof:

        "If the backlog argument is greater than the value in /proc/sys/net/core/somaxconn, then it is silently truncated to that value" -- http://linux.die.net/man/2/listen

    but apparently it is not. Does anyone know a method to demonstrate the effect with two machines sitting on a Gbit network? Ideally the test would be against MySQL, LVS, Apache 2.2 or memcached.
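
    At the very least the truncation itself can be made visible, which is half the argument. A hedged sketch, assuming memcached (whose -b flag requests a listen backlog) and a reasonably recent iproute2, where ss prints the effective backlog limit of a listening socket in the Send-Q column:

        # Ask for a backlog of 1024 while somaxconn is still 128...
        sysctl -w net.core.somaxconn=128
        memcached -d -u nobody -p 11211 -b 1024
        ss -ltn 'sport = :11211'     # Send-Q shows 128: silently truncated

        # ...then raise somaxconn, restart memcached, and look again.
        sysctl -w net.core.somaxconn=1024
        pkill memcached && memcached -d -u nobody -p 11211 -b 1024
        ss -ltn 'sport = :11211'     # Send-Q now shows 1024

    To show an actual behavioural difference you would then flood the listener from the second machine faster than it accepts and compare the listen-queue overflow counters that netstat -s reports under both settings.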

    Read the article

  • Is it a good practice to run identd in 2010?

    - by Alex R
    I know in the "old days" it was good practice to shut this off. But nowadays I have heard that it improves deliverability of email. In the old days people were not worried about spam (or having their outbound email rejected), so that made sense. Of course, the question is only relevant to servers that send email. What is the current, common practice among discerning Linux admins? Run identd or leave it off? Thanks

    Read the article

  • Mobile Internet Problem

    - by alskndalsnd
    I am using mobile dial-up on Ubuntu. However, sometimes even though I am connected to the ISP, I do not have any entries in /etc/resolv.conf. I often restart network-manager or networking hoping it will change, but normally it doesn't do any good. (By "connected" I mean the network notification icon has switched to a few bars indicating connectivity.) Does anyone know a good way around this?
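
    Not a root-cause fix, but a stop-gap some people use while debugging is to drop in public resolvers whenever the file comes up empty. The nameserver addresses are just examples, and NetworkManager may rewrite the file on the next reconnect:

        # Add fallback resolvers if /etc/resolv.conf currently lists none.
        if ! grep -q '^nameserver' /etc/resolv.conf; then
            printf 'nameserver 8.8.8.8\nnameserver 8.8.4.4\n' | sudo tee -a /etc/resolv.conf
        fi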

    Read the article

  • restore -A usage

    - by Martin v. Löwis
    I have created a number of dump files using Linux dump(8), with the -A option to write a table of contents to disk (the backups themselves are on tape). Now I'm trying to look into these archives using restore -i -A <archive>. However, restore insists on asking which tape to use, and complains if I say none. What am I doing incorrectly? I was hoping I could browse the archive index files without having to insert the tape.

    Read the article

  • Why does pinging a local router return "Destination Host Unreachable"?

    - by Matt H
    I have two Tomato routers, one bridged wirelessly with the other, and a new server on the network running Ubuntu Server 11.04. The hosts are:

    A  - Linux PC
    B  - new server
    C  - Mac Mini
    D  - MacBook
    T1 - Tomato 1
    T2 - Tomato 2

    They are connected like so:

        A -----+-T1 ==== wireless bridge ==== T2 ----- ADSL modem
               |
               |        C & D connected wirelessly to T2
        B -----+

    A, C and D do not experience any issues, and I have an active SSH session from A to B that shows no loss. B, the new server, occasionally cannot ping T2 and therefore cannot reach the internet. However, A can always contact B, and B can always ping A. When the connection is lost, B can still ping T1 but not T2, yet at the same time A can still ping T2. Any ideas on what this could be? Nothing in the logs on either router or on the Linux server gives any clues. One interesting thing: I left a ping running from B to T2 (T2 has the IP address 192.168.1.1) and this is what I see:

        From 192.168.1.1 icmp_seq=26 Destination Host Unreachable
        From 192.168.1.1 icmp_seq=27 Destination Host Unreachable
        From 192.168.1.1 icmp_seq=28 Destination Host Unreachable
        From 192.168.1.1 icmp_seq=29 Destination Host Unreachable
        From 192.168.1.1 icmp_seq=30 Destination Host Unreachable
        From 192.168.1.1 icmp_seq=31 Destination Host Unreachable
        From 192.168.1.1 icmp_seq=33 Destination Host Unreachable
        From 192.168.1.1 icmp_seq=34 Destination Host Unreachable
        From 192.168.1.1 icmp_seq=35 Destination Host Unreachable
        64 bytes from 192.168.1.1: icmp_req=36 ttl=63 time=3.40 ms
        64 bytes from 192.168.1.1: icmp_req=37 ttl=63 time=5.70 ms
        64 bytes from 192.168.1.1: icmp_req=38 ttl=63 time=2.25 ms
        64 bytes from 192.168.1.1: icmp_req=39 ttl=63 time=2.18 ms
        64 bytes from 192.168.1.1: icmp_req=40 ttl=63 time=3.12 ms
        64 bytes from 192.168.1.1: icmp_req=41 ttl=63 time=2.15 ms
        64 bytes from 192.168.1.1: icmp_req=42 ttl=63 time=1.97 ms
        64 bytes from 192.168.1.1: icmp_req=43 ttl=63 time=

    It cycles between unreachable and reachable like this. So I guess you could say the question is: why is the router responding that it cannot be reached?
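
    On a directly attached subnet, "Destination Host Unreachable" usually means ARP resolution for the target failed, so a hedged next step is to watch B's neighbour table while the problem is occurring (the interface name eth0 is an assumption, and the arping flags shown are the iputils ones):

        # Watch the ARP entry for T2 flip between REACHABLE, STALE and FAILED.
        watch -n 1 'ip neigh show 192.168.1.1'

        # Probe at the ARP layer directly across the wireless bridge.
        arping -I eth0 -c 4 192.168.1.1

    If the ARP entry goes FAILED exactly when the pings fail, the problem is more likely the bridge dropping or mis-learning B's MAC address than anything on B itself.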

    Read the article

  • How to monitor streaming servers

    - by pcdinh
    Hi all, I have a bunch of Linux-based streaming servers that use the lighttpd web server to serve video over port 80. Recently our service has been very slow. Therefore, I would like to ask if there is a good software package that helps us monitor and record our bandwidth usage, lighttpd established connections, TCP SYN connections, disk I/O and so on over time. Any suggestions? Regards, Dinh
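
    While choosing a full monitoring stack, the specific numbers above can already be sampled from cron with a small script. A rough sketch, assuming sysstat and iproute2 are installed and the log path is arbitrary:

        #!/bin/sh
        # /usr/local/bin/streamstats.sh -- e.g. run every 5 minutes from cron.
        LOG=/var/log/streamstats.log
        {
            date '+%F %T'
            EST=$(ss -nt state established '( sport = :80 )' | tail -n +2 | wc -l)
            SYN=$(ss -nt state syn-recv    '( sport = :80 )' | tail -n +2 | wc -l)
            echo "lighttpd established: $EST   half-open (SYN_RECV): $SYN"
            sar -n DEV 1 1      # per-interface rx/tx throughput
            iostat -dx 1 2      # disk utilisation and await
            echo "----"
        } >> "$LOG"

    The same numbers can also be fed into whatever graphing tool is already around (munin, collectd, cacti and the like) once a choice is made.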

    Read the article

  • How to build a cheap and fanless server

    - by dag729
    Any advice about how to build a cheap and fanless server? Its main uses would be web and file serving, but there may come a day when I'd like to add some streaming and mailing capabilities as well. OS of choice: GNU/Linux. Thanks in advance.

    Read the article

  • Cross-platform file system

    - by Console
    I would like my external drives to be readable and writable from Linux, Mac OS X and Windows. FAT32 works, but the 4 GB file size limit is a showstopper these days. Are there any alternatives?

    Read the article

  • How can I direct rsync output / log to the remote server?

    - by Guest
    I am able to write the rsync log on the client machine using --log-file=FILE, but I want the log to end up on the server instead. The client is a Windows 7 machine (cygwin) and the server is a Linux NAS. This is the command I use, which successfully writes the log on the client:

        rsync -PavOs --delete --log-file=/somepath/rsynclog.txt -e "ssh -i /somepath/keyfile -p 1000" "/somepath/User/" [email protected]:/somepath/User/

    How can I have the log written on the server instead? Thanks
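
    If both ends run rsync 3.0 or newer, the --remote-option / -M switch passes an option to the remote side, including --log-file. A hedged sketch based on the command above, with user@nas standing in for the redacted destination and the server-side path chosen arbitrarily:

        rsync -PavOs --delete \
              --remote-option=--log-file=/somepath/rsynclog.txt \
              -e "ssh -i /somepath/keyfile -p 1000" \
              "/somepath/User/" user@nas:/somepath/User/

    The log file path is interpreted on the NAS, so the directory has to exist and be writable there.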

    Read the article

  • Configure Postfix to allow incoming mail only from one (defined) address

    - by Saurabh
    I have set up Postfix with SpamAssassin on Ubuntu 12.04.5. The main use of Postfix here is to pipe incoming mail into a PHP script, and I have got that far successfully. Now, to avoid unnecessary load on the server and to stop unwanted mail triggering my PHP script, I want to configure Postfix to accept mail only from [email protected] and reject everything else. How can I achieve this absolute lock-down so that nothing gets through unless the mail comes from [email protected]?
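
    A hedged sketch of one common way to do this with check_sender_access (the address is a placeholder for the redacted one above; note that envelope senders are trivially forged, so this reduces noise and load rather than providing real authentication):

        # Map of allowed envelope senders -- hypothetical address.
        cat <<'EOF' | sudo tee /etc/postfix/sender_access
        allowed@example.com   OK
        EOF
        sudo postmap /etc/postfix/sender_access

        # Accept the listed sender, reject everyone else.
        sudo postconf -e 'smtpd_sender_restrictions = check_sender_access hash:/etc/postfix/sender_access, reject'
        sudo service postfix restart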

    Read the article

  • Root SSH/SFTP Always 777

    - by Fluidbyte
    I have an Ubuntu server that I'm connecting to via SFTP (and also through an SSHFS mount locally). When I move a file to the server via the mount, I need it to end up with permissions set to 777. I've added umask 000 to the .bashrc file on the advice of a friend, and it doesn't appear to be working. Basically I'm working entirely inside one restricted folder and need the permissions in it to always stay wide open, whether I'm SSH'ed in or moving files to the server.
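
    A umask in .bashrc only affects interactive shells; SFTP (and therefore SSHFS, which speaks the same protocol) goes through the sftp subsystem and never reads it. A hedged sketch of the usual server-side fix, which needs OpenSSH 5.4 or newer for the -u flag:

        # On the server, give the SFTP subsystem an explicit wide-open umask.
        # Edit /etc/ssh/sshd_config so the subsystem line reads:
        #     Subsystem sftp internal-sftp -u 000
        sudo sed -i 's|^Subsystem\s\+sftp.*|Subsystem sftp internal-sftp -u 000|' /etc/ssh/sshd_config
        sudo service ssh restart

    This stops the server masking anything off; the final mode still depends on what the client asks for (typically 666 for plain files), so a periodic chmod -R 777 on the folder may still be needed if the execute bits matter.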

    Read the article

  • Shell script for daily disk usage report

    - by Master
    I am doing backups to my local drives, which are mounted under /media. Now I want to run a daily cron job that reports, in table format, how much disk space is used by each folder and how much free space is left on each drive. It would also be good if I could insert that information into a database and view it on a web page on localhost. I am running Ubuntu 10.
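
    A minimal sketch of the reporting part; the paths, the cron schedule and the MySQL credentials and table are made-up examples:

        #!/bin/sh
        # /usr/local/bin/diskreport.sh
        # /etc/cron.d entry: 0 6 * * * root /usr/local/bin/diskreport.sh
        REPORT=/var/log/diskreport-$(date +%F).txt
        {
            echo "== Usage per backup folder under /media =="
            du -sh /media/*/* 2>/dev/null
            echo
            echo "== Free space per drive =="
            df -h /media/*
        } > "$REPORT"

        # Optional: push per-folder usage into a hypothetical MySQL table
        # diskusage(day DATE, folder VARCHAR(255), kbytes BIGINT) so a local
        # PHP page can chart it.
        du -sk /media/*/* 2>/dev/null | while read KB DIR; do
            mysql -u backupuser -pbackuppass backups \
                  -e "INSERT INTO diskusage VALUES (CURDATE(), '$DIR', $KB);"
        done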

    Read the article
