Search Results

Search found 5638 results on 226 pages for 'debian sys maint'.

Page 62/226 | < Previous Page | 58 59 60 61 62 63 64 65 66 67 68 69  | Next Page >

  • How to find what is written to filesystem under linux

    - by bardiir
    How can I find out which processes write to a specific disk over time? In my particular case I have a small home server running 24/7, and I added a script to the crontab that spins down any drive that has not been used (no change in /proc/diskstats for 15 minutes). But my system disk never spins down. I suspect the logs, but it is probably not only the logs writing to the filesystem on the system disk, and I don't want to go to the trouble of moving the logfiles elsewhere only to find that the disk still doesn't spin down and there is nothing I can do about it.
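
    One way to see exactly which processes issue the writes is the kernel's block_dump facility (present on kernels of that era); a minimal sketch, where the syslog init script name is an assumption and syslog has to be stopped first or its own logging of the dump messages keeps the disk busy:

        /etc/init.d/rsyslog stop          # or sysklogd/syslog-ng, whatever the box runs
        echo 1 > /proc/sys/vm/block_dump  # kernel now logs every block write with the process name
        sleep 900                         # collect activity for a while
        dmesg | grep 'WRITE block' | less # each line names the process and the partition it wrote to
        echo 0 > /proc/sys/vm/block_dump
        /etc/init.d/rsyslog start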

    Read the article

  • Why is XSP/mono giving me file/write errors with Lucene?

    - by acidzombie24
    I am using mono 2.6.7, xsp 2.8.? and I am not sure what else. I'll try downgrading xsp and whatever else. Today I noticed my server randomly gets HTTP Error 500, internal server error. I looked in my log files http://www.pastie.org/1426236 and it appears that the problem is Lucene's locking, which it does by creating a file called write.lock. This used to work with no problem but now it gives me pain. It will fix itself if I refresh the page several times, but it will break just as easily. My other site will not work every time I restart Apache, and I need to delete the file by hand (at least it doesn't get random 500 errors). However now it just says Lock obtain timed out: NativeFSLock@/var/www/SITE_2/App_Data/LuceneIndex_a/write.lock no matter what. I even chmod -R 777 the directory and still no luck. write.lock doesn't even exist. I have no clue what's going on. Any ideas?
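
    Not a fix for the underlying locking bug, but the usual cleanup when a stale NativeFSLock is left behind is to stop the site, remove the lock file and make sure the whole index directory is owned by the user mod_mono/xsp runs as (www-data here is an assumption):

        /etc/init.d/apache2 stop
        rm -f /var/www/SITE_2/App_Data/LuceneIndex_a/write.lock
        # 777 on files does not help if ownership is wrong or a parent directory blocks traversal
        chown -R www-data:www-data /var/www/SITE_2/App_Data
        /etc/init.d/apache2 start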

    Read the article

  • 404 not found error for virtual host

    - by qubit
    Hello. In my /etc/apache2/sites-enabled, I have a file site2.com.conf, which defines a virtual host as follows: <VirtualHost *:80> ServerAdmin hostmaster@wharfage ServerName site2.com ServerAlias www.site2.com site2.com DirectoryIndex index.html index.htm index.php DocumentRoot /var/www LogLevel debug ErrorLog /var/log/apache2/site2_error.log CustomLog /var/log/apache2/site2_access.log combined ServerSignature Off <Location /> Options -Indexes </Location> Alias /favicon.ico /srv/site2/static/favicon.ico Alias /static /srv/site2/static # Alias /media /usr/local/lib/python2.5/site-packages/django/contrib/admin/media Alias /admin/media /var/lib/python-support/python2.5/django/contrib/admin/media WSGIScriptAlias / /srv/site2/wsgi/django.wsgi WSGIDaemonProcess site2 user=samj group=samj processes=1 threads=10 WSGIProcessGroup site2 </VirtualHost> I do the following to enable the site: 1) In /etc/apache2/sites-enabled, I run the command a2ensite site2.com.conf 2) I then get a message that the site was successfully enabled, and then I run the command /etc/init.d/apache2 reload. But if I navigate to www.site2.com, I get 404 Not Found. I do have an index.html in /var/www (permissions: 777, ownership www-data:www-data), and I have also verified that a symlink was created for site2.com.conf in /etc/apache2/sites-enabled. Any way to fix this? Thank you.
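
    Two things worth checking. WSGIScriptAlias / hands every request, including /, to the Django application, so the index.html in /var/www is never consulted and the 404 may well be coming from Django's URLconf rather than from Apache. It is also worth confirming that this vhost is the one actually answering; a quick diagnostic sketch:

        apache2ctl -S                     # list the vhosts Apache knows about and which one is the default
        tail -f /var/log/apache2/site2_error.log /var/log/apache2/site2_access.log &
        curl -sI http://www.site2.com/    # if nothing appears in these logs, another vhost answered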

    Read the article

  • Sending mail from PHP with exim4

    - by jfoucher
    Hello. A web server I manage is having problems sending mail from PHP. This server uses exim4 as its MTA, and it is configured correctly. I can send emails from PHP's CLI, but not from the web. I.e. if I do "php mailtest.php" on the command line, the email gets sent correctly, but if I browse to server.com/mailtest.php, mail() returns false and the email never gets sent. Nothing appears in the exim mainlog. Any advice, or things I should look for? Thanks!
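
    When the CLI works but the web SAPI does not, the usual suspects are different php.ini settings between the two (sendmail_path, safe_mode, disable_functions) or the web server user not being allowed to execute exim. A hedged test page (the file name and recipient are made up) that surfaces both in one place:

        <?php
        // mailtest-web.php - compare these values with what the CLI reports
        var_dump(ini_get('sendmail_path'), ini_get('safe_mode'), ini_get('disable_functions'));
        $ok = mail('you@example.com', 'test from web SAPI', 'test body');
        var_dump($ok, error_get_last());   // error_get_last() often names the real failure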

    Read the article

  • Assigning IPs to OpenVZ containers

    - by Vojtech
    I have recently bought myself a physical server and I am trying to create containers which would have their own IPs. The physical machine has both IPv4 and IPv6 addresses. I have another IPv4 address and some additional IPv6 addresses available which I would like to assign to the container. I managed to assign the addresses as follows: # vzctl set 101 --ipadd 144.76.195.252 --save I can ping the container from the physical machine, but not from the outside world. This also applies to the IPv6 address I assigned. This is ifconfig of the physical machine: eth0 Link encap:Ethernet HWaddr d4:3d:7e:ec:e0:04 inet addr:144.76.195.232 Bcast:144.76.195.255 Mask:255.255.255.224 inet6 addr: 2a01:4f8:200:71e7::2/64 Scope:Global inet6 addr: fe80::d63d:7eff:feec:e004/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:217895 errors:0 dropped:0 overruns:0 frame:0 TX packets:16779 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:322481419 (307.5 MiB) TX bytes:1672628 (1.5 MiB) venet0 Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00 inet6 addr: fe80::1/128 Scope:Link UP BROADCAST POINTOPOINT RUNNING NOARP MTU:1500 Metric:1 RX packets:12 errors:0 dropped:0 overruns:0 frame:0 TX packets:12 errors:0 dropped:3 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:1108 (1.0 KiB) TX bytes:1108 (1.0 KiB) This is ifconfig of the OpenVZ container: # ifconfig venet0 Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00 inet addr:127.0.0.2 P-t-P:127.0.0.2 Bcast:0.0.0.0 Mask:255.255.255.255 inet6 addr: 2a01:4f8:200:71e7::3/64 Scope:Global UP BROADCAST POINTOPOINT RUNNING NOARP MTU:1500 Metric:1 RX packets:12 errors:0 dropped:0 overruns:0 frame:0 TX packets:12 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:1108 (1.0 KiB) TX bytes:1108 (1.0 KiB) venet0:0 Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00 inet addr:144.76.195.252 P-t-P:144.76.195.252 Bcast:144.76.195.252 Mask:255.255.255.255 UP BROADCAST POINTOPOINT RUNNING NOARP MTU:1500 Metric:1 What do I need to do to have the container accessible from the outside world? What could I have forgotten? Thanks.
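
    With venet the hardware node answers for the container's addresses, so it has to forward traffic for them (and, when the extra IP sits inside the host's own subnet, proxy-ARP for it). A sketch of the sysctls usually needed on the node; eth0 and the exact set are assumptions, and how the provider routes the extra IP matters:

        sysctl -w net.ipv4.ip_forward=1
        sysctl -w net.ipv6.conf.all.forwarding=1
        sysctl -w net.ipv4.conf.eth0.proxy_arp=1
        # add the same three settings to /etc/sysctl.conf so they survive a reboot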

    Read the article

  • Upgrading only certain packages via the getdeb repo

    - by intuited
    I'm a bit confused about how getdeb.net works now. The last time I got a package from there was a while ago; at that point the procedure was that you would just download a .deb for each package that you wanted to install/upgrade and then install it using dpkg -i. However the inexorable march of progress has lent its trumpets to this system as well, and getdeb installs are now done via their repo, which is registered with apt in /etc/apt/sources.list.d, after you install a single package that makes the changes to the apt database. I've installed that package, and I've discovered that aptitude dist-upgrade now wants to upgrade a lot of packages on my system that weren't ready for upgrades prior to the installation of the getdeb package. If I rename the file /etc/apt/sources.list.d/getdeb.list to something with a different extension, then do aptitude update && aptitude dist-upgrade, it stops wanting to upgrade packages. So I gather that the default behaviour is now to upgrade all packages to the version available at getdeb. This is not particularly appropriate, since these packages are not as well tested as the officially released versions. Is there a config setting somewhere that will prevent upgrading packages to versions from the getdeb repo unless this action is specifically selected? I'd like to be able to pick and choose what packages are upgraded via getdeb.
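
    Apt pinning does exactly this: keep the getdeb origin at a low priority so dist-upgrade ignores it, and pull individual packages from it only when their version is asked for explicitly. A sketch, assuming the origin hostname matches what apt-cache policy reports for the getdeb entries in /etc/apt/sources.list.d/getdeb.list:

        # /etc/apt/preferences.d/getdeb
        Package: *
        Pin: origin archive.getdeb.net
        Pin-Priority: 100

    With that in place, aptitude update && aptitude dist-upgrade stays on the official versions; to upgrade one package from getdeb, check apt-cache policy <package> for the getdeb version string and install it explicitly with apt-get install <package>=<version>.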

    Read the article

  • Creating a fallback error page for nginx when root directory does not exist

    - by Ruirize
    I have set up an any-domain config on my nginx server - to reduce the amount of work needed when I open a new site/domain. This config allows me to simply create a folder in /usr/share/nginx/sites/ with the name of the domain/subdomain and then it just works.™ server { # Catch all domains starting with only "www." and boot them to non "www." domain. listen 80; server_name ~^www\.(.*)$; return 301 $scheme://$1$request_uri; } server { # Catch all domains that do not start with "www." listen 80; server_name ~^(?!www\.).+; client_max_body_size 20M; # Send all requests to the appropriate host root /usr/share/nginx/sites/$host; index index.html index.htm index.php; location / { try_files $uri $uri/ =404; } recursive_error_pages on; error_page 400 /errorpages/error.php?e=400&u=$uri&h=$host&s=$scheme; error_page 401 /errorpages/error.php?e=401&u=$uri&h=$host&s=$scheme; error_page 403 /errorpages/error.php?e=403&u=$uri&h=$host&s=$scheme; error_page 404 /errorpages/error.php?e=404&u=$uri&h=$host&s=$scheme; error_page 418 /errorpages/error.php?e=418&u=$uri&h=$host&s=$scheme; error_page 500 /errorpages/error.php?e=500&u=$uri&h=$host&s=$scheme; error_page 501 /errorpages/error.php?e=501&u=$uri&h=$host&s=$scheme; error_page 503 /errorpages/error.php?e=503&u=$uri&h=$host&s=$scheme; error_page 504 /errorpages/error.php?e=504&u=$uri&h=$host&s=$scheme; location ~ \.(php|html) { include /etc/nginx/fastcgi_params; fastcgi_pass 127.0.0.1:9000; fastcgi_intercept_errors on; } } However there is one issue that I'd like to resolve: when a request comes in for a domain that doesn't have a folder in the sites directory, nginx throws an internal 500 error, because the internal redirect to /errorpages/error.php also fails - that file doesn't exist under the (missing) document root. How can I create a fallback error page that will catch these failed requests?
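
    One way out is to give the shared error pages their own location with a fixed root that exists no matter what $host was requested, so the error_page internal redirect never depends on the per-domain folder. A sketch for the catch-all server block; the fallback path is an assumption:

        # inside the second server { } block, before the generic PHP location
        location ^~ /errorpages/ {
            root /usr/share/nginx/fallback;        # always present, holds errorpages/error.php
            include /etc/nginx/fastcgi_params;
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_intercept_errors on;
        }

    The ^~ modifier stops the regex \.(php|html) location from taking over, so error.php is executed against the fallback root even when /usr/share/nginx/sites/$host does not exist.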

    Read the article

  • Is it a good idea to put "screen -r" in my .bashrc?

    - by marcusw
    I'd like to use screen to keep ssh sessions alive on my server. It would be nice if I could automatically resume any running session for my user when I log in. The straightforward way to do this would be adding "screen -r" to my .bashrc, and this seems to work fine. I'm just wondering if this will break anything under conditions which I haven't tested yet. Anyone with experience here who can tell me whether this is what I should do?
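
    The main things that bite are non-interactive shells (scp, rsync and git-over-ssh can also end up sourcing .bashrc and will hang or garble their protocol if screen grabs the terminal) and nesting screen inside screen. A guarded sketch for ~/.bashrc:

        # only attach for interactive SSH logins, and never when already inside screen
        if [[ $- == *i* ]] && [[ -n $SSH_CONNECTION ]] && [[ -z $STY ]]; then
            screen -RR    # reattach a detached session, or start a new one
        fi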

    Read the article

  • How can I include a line until # but without the # when parsing 'sources.list' with regex?

    - by stwissel
    I want to parse my sources.list to extract the list of repositories. I have: ## Some comment deb http://some.vendor.com/ubuntu precise stable deb-src http://some.vendor.com/ubuntu precise stable deb http://some.othervendor.com/ubuntu precise experimental # my current favorite I want: http://some.vendor.com/ubuntu precise stable http://some.othervendor.com/ubuntu precise experimental So I need only the lines that start with deb, up to the end of the line or a # character, but excluding the #. So far I have: grep -o "^deb .*" But how do I match up to a # or the end of the line while excluding the #?
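
    One way, assuming GNU grep and sed: select the deb lines (deb-src has no space after "deb", so it is skipped), then strip the leading keyword and any trailing comment:

        grep '^deb ' /etc/apt/sources.list | sed -e 's/^deb  *//' -e 's/ *#.*$//'

    The same thing as a single sed call: sed -n 's/ *#.*//; s/^deb  *//p' /etc/apt/sources.list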

    Read the article

  • Securing Debian with fail2ban or iptables

    - by Jimmy
    I'm looking to secure my server. Initially my first thought was to use iptables, but then I also learnt about Fail2ban. I understand that Fail2ban is built on top of iptables, but it has the advantage of being able to ban IPs after a number of failed attempts. Let's say I want to block FTP completely: should I write a separate iptables rule to block FTP and use Fail2ban just for SSH, or instead put all the rules, even the FTP blocking rule, in the Fail2ban config? Any help on this would be appreciated. James
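
    The two tools complement each other rather than compete: static policy ("nobody ever gets FTP") belongs in plain iptables, while Fail2ban is for dynamic bans driven by log entries, such as repeated SSH login failures. A sketch of the static half:

        # drop FTP outright (better still, don't run an FTP daemon at all)
        iptables -A INPUT -p tcp --dport 21 -j DROP

    and of the dynamic half, a minimal jail in /etc/fail2ban/jail.local with example values based on Debian's stock sshd filter:

        [ssh]
        enabled  = true
        port     = ssh
        filter   = sshd
        logpath  = /var/log/auth.log
        maxretry = 5
        bantime  = 600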

    Read the article

  • Allow outgoing connections for DNS

    - by Jimmy
    I'm new to IPtables, but I am trying to setup a secure server to host a website and allow SSH. This is what I have so far: #!/bin/sh i=/sbin/iptables # Flush all rules $i -F $i -X # Setup default filter policy $i -P INPUT DROP $i -P OUTPUT DROP $i -P FORWARD DROP # Respond to ping requests $i -A INPUT -p icmp --icmp-type any -j ACCEPT # Force SYN checks $i -A INPUT -p tcp ! --syn -m state --state NEW -j DROP # Drop all fragments $i -A INPUT -f -j DROP # Drop XMAS packets $i -A INPUT -p tcp --tcp-flags ALL ALL -j DROP # Drop NULL packets $i -A INPUT -p tcp --tcp-flags ALL NONE -j DROP # Stateful inspection $i -A INPUT -m state --state NEW -p tcp --dport 22 -j ACCEPT # Allow established connections $i -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT # Allow unlimited traffic on loopback $i -A INPUT -i lo -j ACCEPT $i -A OUTPUT -o lo -j ACCEPT # Open nginx $i -A INPUT -p tcp --dport 443 -j ACCEPT $i -A INPUT -p tcp --dport 80 -j ACCEPT # Open SSH $i -A INPUT -p tcp --dport 22 -j ACCEPT However I've locked down my outgoing connections and it means I can't resolve any DNS. How do I allow that? Also, any other feedback is appreciated. James
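
    With the OUTPUT policy set to DROP and only loopback allowed out, nothing can leave the box: neither DNS queries nor the replies to inbound SSH/HTTP connections. A sketch of the missing rules, in the same style as the script:

        # let replies to inbound connections (SSH, nginx) back out
        $i -A OUTPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
        # allow outgoing DNS lookups (UDP for normal queries, TCP for large answers)
        $i -A OUTPUT -p udp --dport 53 -m state --state NEW -j ACCEPT
        $i -A OUTPUT -p tcp --dport 53 -m state --state NEW -j ACCEPT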

    Read the article

  • Can't set up Usermin correctly to allow users to login outside of local network, what am I missing?

    - by thecraic
    I'm fairly new at running a server, and the biggest problem I'm having at the moment is getting Usermin set up to be accessible from outside the LAN. People who use it told me that all I need to do is append :20000 to the URL to reach the login screen, but that doesn't work. I have also tried the IP with :20000 and that doesn't lead anywhere. Instead I get the error message: Error - Bad Request This web server is running in SSL mode. Try the URL https://hostname:10000/ instead. (where hostname is my server's hostname) I know it must be a configuration issue, but I have checked all my settings and as far as I can tell the ports aren't blocked anywhere. I have the correct ports forwarded on my router and my server firewall doesn't block the port either. Is there anything I am missing? Any help would be appreciated and I will add more information upon request. Thank you.
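
    Two details stand out: Usermin serves SSL by default, so the URL has to be https://host:20000 (plain http gives exactly that "running in SSL mode" error), and the error quotes port 10000, which is Webmin, so the router may be forwarding external 20000 to the wrong internal port. A couple of checks on the server, using the stock Usermin paths:

        netstat -ltnp | grep -E ':(10000|20000)'                         # what is listening, and on which address
        grep -E '^(port|listen|bind|ssl)=' /etc/usermin/miniserv.conf    # Usermin's own listener settings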

    Read the article

  • Generating a list of installed packages in Ubuntu

    - by Johan
    I want to backup my list of manually selected packages in Ubuntu, without listing packages installed as dependencies. For example, dpkg --get-selections returns a complete list of all installed packages, manually selected as well as dependencies. How can I filter dependencies?
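
    aptitude can make the distinction itself: ~i matches installed packages and ~M those installed only as dependencies, so a sketch of the backup would be:

        aptitude search '~i !~M' -F '%p' > manually-installed.txt

    On newer releases apt-mark showmanual gives a similar list, and the saved file can later be replayed with xargs against apt-get install.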

    Read the article

  • push commits to git (gitolite) repository messes up file permissions (no more trac access)

    - by klemens
    (Already posted here, so feel free to answer there.) Every time I commit/push something to the git server, the file permissions change: all added/edited files in the repository end up without read and execute access for the group, so Trac can't access the repository. Do I need to change the permissions of the folder differently? chmod u=rwx,g=rx,o= -R /home/git/repositories Or do I need to set up gitolite somehow so that it writes files with different permissions? Regards, klemens
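
    Re-running chmod only repairs existing files; what gitolite writes on the next push is governed by the umask setting in its rc file on the server (in gitolite v2 the variable in ~git/.gitolite.rc is $REPO_UMASK, in v3 it is UMASK in the %RC hash; 0027 keeps group read access). After changing that, a one-off fixup plus group membership for the Trac user along these lines, where www-data is an assumption for the user Trac runs as:

        chmod -R g+rX /home/git/repositories      # repair what previous pushes created
        usermod -a -G git www-data                # let the Trac/web user read via the git group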

    Read the article

  • NSD reply from unexpected source

    - by Ximik
    I have a server running NSD. It has two addresses, MAIN_IP and ADD_IP. When I query my site's IP from the server itself I get the right output: dig @localhost my_site.com But when I try the same from my PC, I get: dig @my_ns_server.com my_site.com ;; reply from unexpected source: MAIN_IP#53, expected ADD_IP#53 (ADD_IP is the IP of my_ns_server.com.) What should I do? UPD: My interfaces conf: auto eth2 allow-hotplug eth2 iface eth2 inet static address xxx.xxx.xxx.234 netmask 255.255.255.252 network xxx.xxx.xxx.232 broadcast xxx.xxx.xxx.235 gateway xxx.xxx.xxx.233 dns-nameservers MY_ISP_IP dns-search MY_ISP_DOMAIN auto eth2:0 iface eth2:0 inet static address xxx.xxx.xxx.124 netmask 255.255.255.0 xxx.xxx.xxx is the same for all IPs
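
    "Reply from unexpected source" usually means NSD is bound to the wildcard address, so answers to queries that arrived on ADD_IP leave the box with MAIN_IP as the source. Binding the listener to the address the NS record points at makes the source match; a sketch for nsd.conf (the file lives under /etc/nsd/ or /etc/nsd3/ depending on the version):

        server:
            ip-address: ADD_IP      # listen (and answer) only on the nameserver's published address

    Restart NSD afterwards (/etc/init.d/nsd3 restart or the equivalent) and re-test with dig from outside.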

    Read the article

  • Run preseed commands as specific user / switching users

    - by pduersteler
    Besides the usual setup where I create a normal user foo, I want to run a few d-i preseed/late_command commands as that foo user. My initial thought was simply to call those commands with sudo, e.g.: d-i preseed/late_command in-target echo "<pwd>" | sudo -Si <command>. This works for some commands. However, the problem is that some of the commands run shell scripts which must not be run with sudo. Issuing su -c "<command>" would be an alternative, but su does not offer a way to read the password from stdin. Is it safe to jump between users with su (and if so, how do I provide the stdin, or does it just end in "su: must be run from a terminal"?), or would this cause issues?
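
    late_command runs as root, and root can su to any user without giving a password, so the stdin problem disappears entirely. A sketch, with placeholder script paths:

        d-i preseed/late_command string \
            in-target su -l foo -c '/usr/local/bin/first-task.sh' ; \
            in-target su -l foo -c '/usr/local/bin/second-task.sh'

    su only needs a terminal when it has to prompt for a password; invoked by root it never prompts, so the "must be run from a terminal" error does not arise here.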

    Read the article

  • Postfix filter messages and pass to PHP script

    - by John Magnolia
    Each time a user signs up to our website through an external provider, we get a basic email whose body contains the user details. I want to send a personalised automatic reply to this user. I have already written the actual parsing of the email body and the reply in PHP, but how do I go about hooking this into Postfix? At the moment the message is handled by a Roundcube Sieve plugin which moves it into a "Subscribe" folder. Is it possible to create a custom action here? Debian Squeeze, Postfix and Dovecot.
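
    The simplest hook is an alias that pipes the signup address straight into the PHP script; the address name and script path below are placeholders:

        # /etc/aliases
        signup: "|/usr/bin/php -q /usr/local/bin/subscribe-reply.php"

    Run newaliases after editing. The script receives the full message on stdin, so the existing parsing code only needs a small wrapper; a pipe transport in master.cf is the heavier-weight alternative, and Sieve can keep filing a copy into the Subscribe folder if the mail is also delivered to a mailbox.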

    Read the article

  • Workaround broken sudo?

    - by perreal
    I managed to break sudo by deleting the libc.so.6 symlink in /lib. I copied the actual file and created a symbolic link with the same name under my home directory, and I am working around the breakage with LD_PRELOAD=/lib/libc-2.11.3.so. At this point, all binaries linked against libc work through the preload, except sudo. For sudo, I need to write (and don't know why): $ /lib/ld-linux-x86-64.so.2 --library-path . /usr/bin/sudo but this gives me: $ sudo: must be setuid root Checking the permissions: $ ls -l /usr/bin/sudo $ -rwsr-xr-x 2 root root 166120 So the setuid bit is actually set. Question: I need to create a symbolic link named /lib/libc.so.6 through my active SSH connection without using sudo, or make sudo work somehow. I don't have the root password and I can't open a new SSH connection any more. Is there any other way I can get authorization?
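
    As to why sudo complains even though the setuid bit is set: the kernel only honours setuid when the binary itself is executed, and here the dynamic loader is the program being executed, with sudo merely passed as an argument, so sudo runs with the ordinary uid. A quick way to see it:

        /lib/ld-linux-x86-64.so.2 --library-path "$HOME" /usr/bin/id -u   # prints your uid, not 0

    That also means the missing symlink cannot be recreated from this unprivileged session; without the root password the realistic options are console or rescue-mode access (and then ln -sf libc-2.11.3.so /lib/libc.so.6) or another administrator.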

    Read the article

  • how to use rsync over ftp

    - by bumperbox
    Debian 4 Linux. I have the following command line, which works fine: rsync -avr -e ssh /home/dir [email protected]:/home/ But now I need to set it up to rsync to a remote server that only has FTP on it. How do I go about that? I looked at the rsync help but quickly got lost (I don't do this stuff very often). Thanks, alex
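
    rsync needs rsync (or at least a shell) on the far end, so it cannot speak plain FTP. The usual substitutes are lftp's mirror mode or mounting the FTP site with curlftpfs and rsyncing onto the mount. A hedged lftp sketch, with placeholder host and credentials:

        lftp -u USER,PASSWORD -e 'mirror --reverse --only-newer --verbose /home/dir /home/dir; quit' ftp.example.com

    --reverse uploads (local to remote) and --only-newer skips files that have not changed, which gives a rough equivalent of rsync's incremental behaviour, minus delta transfers.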

    Read the article

  • How can I use sudo when I logged in with a SSH key in PuTTY?

    - by Alex
    I know the title probably doesn't even make sense, but anyway. I downloaded PuTTY and set it up, and followed this tutorial to set up SSH keys so I don't have to input a user or password when logging in with SSH. I noticed that when I made a new user I used the --disabled-password parameter, since I wouldn't be needing it... but now when I give the user sudo powers I can't proceed as it asks me for the user's password, and I don't have one. What do I do?
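
    --disabled-password only means the account has no usable password; nothing stops root (or any existing sudoer) from giving it one, or from granting passwordless sudo. A sketch, run from a root shell or an account that can already sudo:

        passwd alex                                                # option 1: give the user a password for sudo to ask for
        # option 2: allow sudo without a password (recent Debian/Ubuntu read /etc/sudoers.d)
        echo 'alex ALL=(ALL) NOPASSWD:ALL' > /etc/sudoers.d/alex
        chmod 0440 /etc/sudoers.d/alex
        visudo -c                                                  # syntax-check before logging out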

    Read the article

  • Why doesn't the value in /proc/meminfo seem to map exactly to the system RAM?

    - by Eric Asberry
    The values in /proc/meminfo for MemTotal don't make sense. As a human eyeballing it, it seems to roughly correspond to the installed RAM, but for an automated utility that reports the installed RAM it appears to be inexact and inconsistent. For a system with 1G of RAM, I would expect the MemTotal line to have a value of 1048576 (1024*1024 kB). But instead I'm seeing 1029392. On another 4G box I'm seeing 3870172, which is not a multiple of 1024, and it's not even close to 1029392*4. On an 8G box I get 8128204, which again seems to have no correlation to the other values, nor is it a multiple of 1024. I'm trying to use this information to report the RAM on a status web page. My work-around is to just "round" it to the nearest 1G multiple, but I'd like to understand why these values seem inconsistent and don't match my expectations. Can somebody fill me in on what I'm missing here? EDIT: To expand on the accepted answer below.... The reference can be found here. Also of interest to me from that page, which explains the inconsistency, is this bit: meminfo: Provides information about distribution and utilization of memory. This varies by architecture and compile options. ...
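
    MemTotal is the RAM the kernel has left after setting aside its own code and data, firmware-reserved regions, and things like the crash kernel, so it is always a bit short of the installed amount and the shortfall differs between machines and kernels. For a status page, rounding is the normal approach, and the firmware's own inventory is available if the exact installed size is wanted:

        awk '/^MemTotal:/ { printf "%.0f GiB\n", $2 / 1048576 }' /proc/meminfo   # round to the nearest GiB
        dmidecode -t memory | grep -i 'Size:'                                    # installed DIMM sizes (needs root)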

    Read the article

  • Unable to view users and groups

    - by Ewr Xcerq
    I am using CentOS 5 running on VMware, but whenever I open the User Manager from System > Administration, an error message appears: "The user database cannot be read. This problem is most likely caused by a mismatch between /etc/passwd and /etc/shadow or /etc/group and /etc/gshadow. The program will now exit." I am a Linux novice and have no idea how to fix this. Any help is appreciated. Thank you.
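
    The shadow suite ships checkers that report (and can repair) exactly this kind of mismatch; a sketch, run as root from a terminal:

        pwck      # cross-check /etc/passwd against /etc/shadow
        grpck     # cross-check /etc/group against /etc/gshadow
        pwconv    # rebuild missing shadow entries from passwd, if pwck found any
        grpconv   # likewise for gshadow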

    Read the article

  • How to avoid duplicates when copying files that have been renamed at the destination

    - by Benoitt
    I have to collect pictures (by extension) from a folder with subfolders, which are updated automatically. These files have to be copied into a folder where a PHP-based website will process them (renaming them and creating an XML file) so they can be downloaded and integrated into an XML feed. Because the script renames the files, when I perform the copy again everything gets duplicated: the originals are copied once more even though the script has already renamed the previous copies. I've tried a few things with rsync, but I'm looking for something more robust, because rsync has no way of knowing about files renamed at the destination without some external "history". #!/bin/bash find '/home/name/picture' -name '*.jpg' | while read FILE ; do rsync --backup --backup-dir=incremental --suffix=.old "$FILE" /var/www/media ; done wget --spider 'http://myscript.php' ; #exit 0 PS: As a small addition, I'd like to replace '.' with a space just after the *.jpg copy. My PHP script has problems working out the extension when the file name contains extra dots. I'm thinking about a find command, like the one above, combined with sed? Is that a good idea?
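
    One way to keep an external "history" without a database is a marker file: each run copies only the pictures that appeared since the previous run, so files the PHP script has since renamed are never considered again. A sketch; the marker location is an assumption:

        #!/bin/bash
        MARKER=/var/www/media/.last_sync
        [ -f "$MARKER" ] || touch -d '1970-01-01' "$MARKER"   # first run: copy everything
        touch /tmp/sync_stamp                                  # timestamp taken before the copy starts
        find /home/name/picture -name '*.jpg' -newer "$MARKER" -exec cp -t /var/www/media {} +
        mv /tmp/sync_stamp "$MARKER"                           # next run only looks at newer files
        wget -q --spider 'http://myscript.php'

    For the PS, a rename along the lines of mv "$f" "$(dirname "$f")/$(basename "${f%.jpg}" | tr '.' ' ').jpg" would swap the remaining dots for spaces while keeping the extension, though spaces in filenames bring their own quoting headaches.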

    Read the article

  • How can I start new window in the same screen session automatically?

    - by Mato
    I read How can I start multiple screen sessions automatically?, but I don't understand the first accepted reply: screen -dmS "$SESSION_NAME" "$COMMAND" "$ARGUMENTS" In my case I need to automatically create one screen session for one script, and afterwards I need to create a new window in the same session for another script. Manually, I would: run screen, enter the command, press CTRL+A CTRL+C, enter the second command, then press CTRL+A CTRL+D. How can I do this automatically in a script? A simple example would help me a lot. Thank you for any replies.
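
    screen's -X flag sends a command to an already-running session, and its screen command opens a new window in it, so the whole sequence can be scripted; session name and script paths below are placeholders:

        screen -dmS mysession /path/to/first-script.sh             # detached session, window 0 runs the first script
        screen -S mysession -X screen /path/to/second-script.sh    # new window in the same session for the second

    Reattach later with screen -r mysession and switch windows with CTRL+A n / CTRL+A p.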

    Read the article
