Search Results

Search found 16544 results on 662 pages for 'sys path'.

Page 276/662 | < Previous Page | 272 273 274 275 276 277 278 279 280 281 282 283  | Next Page >

  • Bounce backs from web-generated e-mails are missing

    - by JerSchneid
    We use Google Apps to host my company's mail. On our website, we send some e-mails on behalf of our users, and in those e-mails we include headers like this:

        Return-Path: <[email protected]>
        Sender: <[email protected]>

    Sending the messages works great (they pass SPF tests), but when a message is sent TO an invalid e-mail address, we expect a bounce-back message sent to "[email protected]". That message never arrives. (If we send an e-mail manually from within the Gmail interface to the same bad address, the bounce does arrive.) We used to receive the bounce-back messages as expected, but now they seem to be quietly blocked (not in spam or anywhere else). Is there a new policy that blocks bounce-backs when the "From" does not match the "Return-Path", or something like that? We would really like to get these bounce-backs to verify the delivery of our messages. Is there any way to prevent them from being blocked? Thank you!
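
    For reference, bounces are addressed to the envelope sender (the SMTP MAIL FROM), which the Return-Path header merely records; the From header plays no part in routing them. A minimal sketch of setting the envelope sender explicitly when handing a message to a local MTA — the addresses and file name here are placeholders, not taken from the question:

        # -f sets the envelope sender, i.e. the address bounces will be delivered to.
        /usr/sbin/sendmail -f bounces@example.com someone@example.org < message.txt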

    Read the article

  • Secure Login add-on stopped working after installing BitDefender

    - by ldigas
    I'm using FF 3.5.4 with the Secure Login 0.9.3 add-on (lovely little thing). After a lot of persuading, my sys admin finally got to me, and I let him install BitDefender on my machine as well ... and naturally, like all antivirus programs do, it had to screw something up, and it was that add-on. The add-ons menu now says it isn't compatible with FF 3.5.4 (which is possible, I don't know, but it did work until one hour ago). What can I do to make it work again? All ideas welcome. I really hate typing all those logins/passwords by hand.

    Read the article

  • sg_map & lsscsi showing old storage version

    - by PratapSingh
    I am using SUN storage and recently upgraded/refreshed my iSCSI LUN storage. We replicated the old storage to the new storage and attached it to our servers. On the SUN storage side I can see that the storage is attached to the server, and on the server the session shows up when I run the command below:

        iscsiadm -m session
        tcp: [1] 10.1.1.10:3260,2 iqn.86-03.com.sun:02:afsfsf58-c56a-6ba8-a944-addd258687cd

    The storage above is a SUN STORAGE 7420, but when I run sg_map or lsscsi a different model is printed:

        lsscsi
        disk SUN Sun Storage 7410 1.0 /dev/sda
        disk SUN Sun Storage 7410 1.0 /dev/sdb
        disk SUN Sun Storage 7410 1.0 /dev/sdc
        disk SUN Sun Storage 7410 1.0 /dev/sdd

    Output of ls on /dev/disk/by-path/:

        ls -1 /dev/disk/by-path/
        ip-10.1.1.10:3260-iscsi-iqn.86-03.com.sun:02:afsfsf58-c56a-6ba8-a944-addd258687cd-lun-0
        ip-10.1.1.10:3260-iscsi-iqn.86-03.com.sun:02:afsfsf58-c56a-6ba8-a944-addd258687cd-lun-0-part1
        ip-10.1.1.10:3260-iscsi-iqn.86-03.com.sun:02:afsfsf58-c56a-6ba8-a944-addd258687cd-lun-18
        ip-10.1.1.10:3260-iscsi-iqn.86-03.com.sun:02:afsfsf58-c56a-6ba8-a944-addd258687cd-lun-18-part1
        ip-10.1.1.10:3260-iscsi-iqn.86-03.com.sun:02:afsfsf58-c56a-6ba8-a944-addd258687cd-lun-2
        ip-10.1.1.10:3260-iscsi-iqn.86-03.com.sun:02:afsfsf58-c56a-6ba8-a944-addd258687cd-lun-2-part1
        ip-10.1.1.10:3260-iscsi-iqn.86-03.com.sun:02:afsfsf58-c56a-6ba8-a944-addd258687cd-lun-4
        ip-10.1.1.10:3260-iscsi-iqn.86-03.com.sun:02:afsfsf58-c56a-6ba8-a944-addd258687cd-lun-4-part1
        ip-10.1.1.10:3260-iscsi-iqn.86-03.com.sun:02:afsfsf58-c56a-6ba8-a944-addd258687cd-lun-6
        ip-10.1.1.10:3260-iscsi-iqn.86-03.com.sun:02:afsfsf58-c56a-6ba8-a944-addd258687cd-lun-6-part1

    I have rebooted the server twice but I still get the same output as above.
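
    For what it's worth, lsscsi and sg_map print the SCSI INQUIRY strings the kernel cached when it first probed each device, so stale model names can persist until the devices are re-probed. A minimal sketch, assuming /dev/sda is one of the iSCSI disks:

        # Re-read one disk's SCSI INQUIRY data (the device name is an assumption):
        echo 1 > /sys/block/sda/device/rescan
        # Rescan every disk behind the open iSCSI sessions:
        iscsiadm -m session -R
        # If the model strings still don't refresh, drop and fully re-probe the device:
        echo 1 > /sys/block/sda/device/delete
        echo '- - -' > /sys/class/scsi_host/host0/scan   # host0 is an assumption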

    Read the article

  • Can't configure PAM + LDAP on Debian Lenny - Getting error=49 on server logs

    - by Jorge Suárez de Lis
    I've been migrating some servers and desktops using Ubuntu 10.04 from getting their users from an old OpenLDAP implementation to a newer CentOS Active Directory. I haven't had any problems so far, until I reached a Debian Lenny server. I set up the server like the others, configuring /etc/ldap.conf and /etc/ldap/ldap.conf. However, when I issue "getent passwd", I get nothing from the LDAP server. Reading the pam_ldap manpage, I realized that /etc/ldap.conf was not a file accepted by pam_ldap (it worked with Ubuntu, though), so I renamed it to /etc/pam_ldap.conf. Same result. However, once I had changed the name of this file, logging in via SSH produces this on the LDAP server logs:

        [20/Jul/2012:11:19:40 +0200] conn=16501 fd=155 slot=155 connection from x.x.x.50 to 10.1.176.237
        [20/Jul/2012:11:19:40 +0200] conn=16501 op=0 BIND dn="uid=ubuntu,ou=Applications,ou=CITIUS,dc=inv,dc=usc,dc=es" method=128 version=3
        [20/Jul/2012:11:19:40 +0200] conn=16501 op=0 RESULT err=0 tag=97 nentries=0 etime=0 dn="uid=ubuntu,ou=applications,ou=citius,dc=inv,dc=usc,dc=es"
        [20/Jul/2012:11:19:40 +0200] conn=16501 op=1 SRCH base="ou=People,ou=CITIUS,dc=inv,dc=usc,dc=es" scope=2 filter="(uid=jorge.suarez)" attrs=ALL
        [20/Jul/2012:11:19:40 +0200] conn=16501 op=1 RESULT err=0 tag=101 nentries=1 etime=0 notes=U
        [20/Jul/2012:11:19:40 +0200] conn=16501 op=2 BIND dn="uid=jorge.suarez,ou=People,ou=CITIUS,dc=inv,dc=usc,dc=es" method=128 version=3
        [20/Jul/2012:11:19:40 +0200] conn=16501 op=2 RESULT err=49 tag=97 nentries=0 etime=0

    The password isn't working. I don't know what could be wrong; everything else seems to be OK. The same user/password works from other clients:

        [20/Jul/2012:11:29:39 +0200] conn=16528 fd=188 slot=188 connection from x.x.x.224 to 10.1.176.237
        [20/Jul/2012:11:29:39 +0200] conn=16528 op=0 BIND dn="uid=ubuntu,ou=Applications,ou=CITIUS,dc=inv,dc=usc,dc=es" method=128 version=3
        [20/Jul/2012:11:29:39 +0200] conn=16528 op=0 RESULT err=0 tag=97 nentries=0 etime=0 dn="uid=ubuntu,ou=applications,ou=citius,dc=inv,dc=usc,dc=es"
        [20/Jul/2012:11:29:39 +0200] conn=16528 op=1 SRCH base="ou=People,ou=CITIUS,dc=inv,dc=usc,dc=es" scope=2 filter="(uid=jorge.suarez)" attrs=ALL
        [20/Jul/2012:11:29:39 +0200] conn=16528 op=1 RESULT err=0 tag=101 nentries=1 etime=0 notes=U
        [20/Jul/2012:11:29:39 +0200] conn=16528 op=2 BIND dn="uid=jorge.suarez,ou=People,ou=CITIUS,dc=inv,dc=usc,dc=es" method=128 version=3
        [20/Jul/2012:11:29:39 +0200] conn=16528 op=2 RESULT err=0 tag=97 nentries=0 etime=0 dn="uid=jorge.suarez,ou=people,ou=citius,dc=inv,dc=usc,dc=es"

    I'm using SSHA for storing passwords on the LDAP server. Maybe this is not supported by Debian Lenny? In pam_ldap.conf I've set up this, as on all the other servers:

        # Do not hash the password at all; presume
        # the directory server will do it, if
        # necessary. This is the default.
        pam_password md5

    I also tried clear, but it didn't work. Anyway, it's weird that issuing getent passwd still gets me no users. However, if I use pamtest from the package libpam-dotfile to test the login, it works:

        # pamtest ssh jorge.suarez
        Trying to authenticate <jorge.suarez> for service <ssh>.
        Password:
        Authentication successful.
        # pamtest foo jorge.suarez
        Trying to authenticate <jorge.suarez> for service <foo>.
        Password:
        Authentication successful.

    But "su" won't work either:

        # su jorge.suarez
        Id. descoñecido: jorge.suarez    (i.e. "unknown id")

    Here is the output from getent passwd:

        # getent passwd
        root:x:0:0:root:/root:/bin/bash
        daemon:x:1:1:daemon:/usr/sbin:/bin/sh
        bin:x:2:2:bin:/bin:/bin/sh
        sys:x:3:3:sys:/dev:/bin/sh
        sync:x:4:65534:sync:/bin:/bin/sync
        games:x:5:60:games:/usr/games:/bin/sh
        man:x:6:12:man:/var/cache/man:/bin/sh
        lp:x:7:7:lp:/var/spool/lpd:/bin/sh
        mail:x:8:8:mail:/var/mail:/bin/sh
        news:x:9:9:news:/var/spool/news:/bin/sh
        uucp:x:10:10:uucp:/var/spool/uucp:/bin/sh
        proxy:x:13:13:proxy:/bin:/bin/sh
        www-data:x:33:33:www-data:/var/www:/bin/sh
        backup:x:34:34:backup:/var/backups:/bin/sh
        list:x:38:38:Mailing List Manager:/var/list:/bin/sh
        irc:x:39:39:ircd:/var/run/ircd:/bin/sh
        gnats:x:41:41:Gnats Bug-Reporting System (admin):/var/lib/gnats:/bin/sh
        nobody:x:65534:65534:nobody:/nonexistent:/bin/sh
        libuuid:x:100:101::/var/lib/libuuid:/bin/sh
        Debian-exim:x:101:103::/var/spool/exim4:/bin/false
        statd:x:102:65534::/var/lib/nfs:/bin/false
        sshd:x:104:65534::/var/run/sshd:/usr/sbin/nologin
        luser:x:1000:1000:Usuario local de Burdeos,,,:/home/luser:/bin/bash
        messagebus:x:105:107::/var/run/dbus:/bin/false
        sge-admin:x:1001:1001:Administrador do SGE,,,:/home/cluster/sge-admin:/bin/bash
        ntp:x:107:110::/home/ntp:/bin/false
        haldaemon:x:108:111:Hardware abstraction layer,,,:/var/run/hald:/bin/false
        vde2-net:x:109:114::/var/run/vde2:/bin/false
        uml-net:x:110:115::/home/uml-net:/bin/false
        polkituser:x:111:116:PolicyKit,,,:/var/run/PolicyKit:/bin/false
        Debian-pxe:x:113:65534:Dummy user for Debian pxe package,,,:/home/Debian-pxe:/bin/false

    nscd was stopped from the beginning.
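
    Since err=49 is the LDAP result code for invalid credentials, the failing BIND can be reproduced outside PAM entirely. A quick check with OpenLDAP's client tools — the host and DNs below are taken from the logs, the rest is illustrative:

        # Attempt the same simple BIND that PAM performs (-x = simple bind, -W prompts for the password).
        ldapsearch -x -H ldap://10.1.176.237 \
          -D "uid=jorge.suarez,ou=People,ou=CITIUS,dc=inv,dc=usc,dc=es" -W \
          -b "ou=People,ou=CITIUS,dc=inv,dc=usc,dc=es" "(uid=jorge.suarez)"

    If this BIND also fails from the Lenny box but succeeds from an Ubuntu client, the problem lies in how this client transmits the password (e.g. hashing it before sending) rather than in the stored SSHA hash.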

    Read the article

  • What's up with stat on Mac OS X/Darwin? Or filesystems without names...

    - by Charles Stewart
    In response to a question I asked on SO, "Give the mount point of a path", one respondent suggested using stat to get the device name associated with the volume of a given path. This works nicely on Linux, but gives crazy results on Mac OS X 10.4. For my system, df and mount give:

        cas cas$ df
        Filesystem               512-blocks      Used     Avail Capacity  Mounted on
        /dev/disk0s3               58342896  49924456   7906440    86%    /
        devfs                           194       194         0   100%    /dev
        fdesc                             2         2         0   100%    /dev
        <volfs>                        1024      1024         0   100%    /.vol
        automount -nsl [166]              0         0         0   100%    /Network
        automount -fstab [170]            0         0         0   100%    /automount/Servers
        automount -static [170]           0         0         0   100%    /automount/static
        /dev/disk2s1              163577856  23225520 140352336    14%    /Volumes/Snapshot
        /dev/disk2s2              409404102   5745938 383187960     1%    /Volumes/Sparse

        cas cas$ mount
        /dev/disk0s3 on / (local, journaled)
        devfs on /dev (local)
        fdesc on /dev (union)
        <volfs> on /.vol
        automount -nsl [166] on /Network (automounted)
        automount -fstab [170] on /automount/Servers (automounted)
        automount -static [170] on /automount/static (automounted)
        /dev/disk2s1 on /Volumes/Snapshot (local, nodev, nosuid, journaled)
        /dev/disk2s2 on /Volumes/Sparse (asynchronous, local, nodev, nosuid)

    Trying to get the devices from the mount points, though:

        cas cas$ df | grep -e/ | awk '{print $NF}' | while read line; do echo $line $(stat -f"%Sdr" $line); done
        / disk0s3r
        /dev ???r
        /dev ???r
        /.vol ???r
        /Network ???r
        /automount/Servers ???r
        /automount/static ???r
        /Volumes/Snapshot disk2s1r
        /Volumes/Sparse disk2s2r

    Here, I'm feeding each of the mount points scraped from df to stat, outputting the result of the "%Sdr" format string, which is supposed to be the device name. Cf. the stat(1) man page:

        The special output specifier S may be used to indicate that the output,
        if applicable, should be in string format.  May be used in combination with:
        ...
        dr      Display actual device name.

    What's going on? Is it a bug in stat, or some Darwin VFS weirdness? Postscript: per Andrew McGregor, try passing "%Sd" to stat for more weirdness. It lists some apparently arbitrary subset of files from the CWD...
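
    As a portable cross-check that sidesteps stat's %Sdr quirk entirely, df itself reports the filesystem source for an arbitrary path. A minimal sketch (the path is an example):

        # Print the device (or pseudo-filesystem name) backing a given path.
        df -P /Volumes/Snapshot | awk 'NR==2 {print $1}'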

    Read the article

  • Hylafax: Encounter "No font metric information" when trying to send a fax

    - by Chau Chee Yang
    I am using HylaFAX 6.0.5 on Fedora 13 x86_64. As there is no RPM package available for Fedora 13, I installed HylaFAX myself from the source tarball. Everything seemed fine during compilation and installation. When I try to send a fax with sendfax, I encounter an error:

        # sendfax -n -d <fax-number> /etc/passwd
        /usr/local/sbin/textfmt: No font metric information found for "Courier-Bold".
        Usage: /usr/local/sbin/textfmt [-1] [-2] [-B] [-c] [-D] [-f fontname] [-F fontdir(s)] [-m N] [-o #] [-p #] [-r] [-U] [-Ml=#,r=#,t=#,b=#] [-V #] files... >out.ps
        Default options: -f Courier -1 -p 11bp -o 0
        Error converting document; command was "/usr/local/sbin/textfmt -B -f Courier-Bold -Ml=0.4in -p 11 -s default >'/tmp//sndfaxp5GdJ9' <'/etc/passwd'"

    It seems there is a problem with fonts. I have ghostscript-fonts installed, too. I can't find hyla.conf in /etc/hylafax; in fact, there is no /etc/hylafax path in my file system. All configuration files seem to be located in /var/spool/hylafax/etc. Please advise. Thank you.
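
    textfmt's own usage message above shows a -F flag for pointing it at font directories, so one test is to re-run the failing conversion by hand with -F aimed at the Ghostscript fonts. The directory below is an assumption and varies by distribution:

        # Locate the installed font files first (ghostscript-fonts package):
        rpm -ql ghostscript-fonts | head
        # Then retry the conversion with an explicit font directory:
        /usr/local/sbin/textfmt -B -f Courier-Bold -F /usr/share/fonts/default/Type1 \
          -Ml=0.4in -p 11 -s default < /etc/passwd > out.ps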

    Read the article

  • LiteSpeed issue with Content-Type

    - by sandeep.s85
    I am running Magento with LiteSpeed. The problem I am facing is that an AJAX call is made whose response header is set to application/x-json, but LiteSpeed sets a different Content-Type header of text/html. I've checked the same page with Apache and everything works fine. I compared the response headers under Apache and LiteSpeed, and here they are.

    With Apache:

        HTTP/1.1 200 OK
        Date: Fri, 07 Sep 2012 05:58:47 GMT
        Server: Apache
        Expires: Thu, 19 Nov 1981 08:52:00 GMT
        Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
        Pragma: no-cache
        Set-Cookie: frontend=164b21c64808a05e806027bdbd4d745d; expires=Fri, 07-Sep-2012 06:58:48 GMT; path=/; domain=mydomain.com; httponly
        Connection: close
        Transfer-Encoding: chunked
        Content-Type: application/x-json

    With LiteSpeed:

        HTTP/1.1 200 OK
        Date: Fri, 07 Sep 2012 06:10:55 GMT
        Server: LiteSpeed
        Connection: close
        Expires: Thu, 19 Nov 1981 08:52:00 GMT
        Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
        Pragma: no-cache
        Set-Cookie: frontend=164b21c64808a05e806027bdbd4d745d; expires=Fri, 07-Sep-2012 07:10:55 GMT; path=/; domain=mydomain.com; httponly
        Content-Type: text/html; charset=UTF-8
        Content-Length: 474
        Vary: User-Agent

    I've also added application/json to LiteSpeed's mime.properties and restarted it, but that did not work.
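
    A browser-independent way to compare the two backends is to fetch the endpoint with curl and dump only the headers; the URL below is a placeholder for the actual AJAX endpoint:

        # -D - writes the response headers to stdout; the body is discarded.
        curl -s -o /dev/null -D - http://mydomain.com/ajax/endpoint | grep -i '^Content-Type'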

    Read the article

  • VNC server fails to start on CentOS

    - by Shaun
    I followed a tutorial on how to install and run VNC server on CentOS 6 (since FreeNX isn't supported yet) and I keep getting:

        Starting VNC server: 1:user                                [FAILED]

    How do I figure out what's going on here? I'm new to Linux/CentOS and I'm trying to get RDP going so I can step away from SSH as much as possible (you know us Windows users love our pretty GUIs). So, where is the error log and how do I find it? Or maybe someone else has experienced this and knows the solution based on the simple error given? After running in debug mode, here is my error:

        + . /etc/init.d/functions
        ++ TEXTDOMAIN=initscripts
        ++ umask 022
        ++ PATH=/sbin:/usr/sbin:/bin:/usr/bin
        ++ export PATH
        ++ '[' -z '' ']'
        ++ COLUMNS=80
        ++ '[' -z '' ']'
        +++ /sbin/consoletype
        ++ CONSOLETYPE=pty
        ++ '[' -f /etc/sysconfig/i18n -a -z '' -a -z '' ']'
        ++ . /etc/profile.d/lang.sh
        ++ unset LANGSH_SOURCED
        ++ '[' -z '' ']'
        ++ '[' -f /etc/sysconfig/init ']'
        ++ . /etc/sysconfig/init
        +++ BOOTUP=color
        +++ RES_COL=60
        +++ MOVE_TO_COL='echo -en \033[60G'
        +++ SETCOLOR_SUCCESS='echo -en \033[0;32m'
        +++ SETCOLOR_FAILURE='echo -en \033[0;31m'
        +++ SETCOLOR_WARNING='echo -en \033[0;33m'
        +++ SETCOLOR_NORMAL='echo -en \033[0;39m'
        +++ PROMPT=yes
        +++ AUTOSWAP=no
        +++ ACTIVE_CONSOLES='/dev/tty[1-6]'
        +++ SINGLE=/sbin/sushell
        ++ '[' pty = serial ']'
        ++ __sed_discard_ignored_files='/\(~\|\.bak\|\.orig\|\.rpmnew\|\.rpmorig\|\.rpmsave\)$/d'
        + '[' -r /etc/sysconfig/vncservers ']'
        + . /etc/sysconfig/vncservers
        ++ VNCSERVERS='1:larry 2:moe 3:curly'
        ++ VNCSERVERARGS[1]='-geometry 800x600'
        ++ VNCSERVERARGS[2]='-geometry 640x480'
        ++ VNCSERVERARGS[3]='-geometry 640x480'
        + prog='VNC server'
        + RETVAL=0
        + case "$1" in
        + start
        + '[' 0 '!=' 0 ']'
        + . /etc/sysconfig/network
        ++ NETWORKING=yes
        ++ HOSTNAME=vps.binaryvisionaries.com
        ++ DOMAINNAME=server.name
        ++ GATEWAYDEV=venet0
        ++ NETWORKING_IPV6=yes
        ++ IPV6_DEFAULTDEV=venet0
        + '[' yes = no ']'
        + '[' -x /usr/bin/vncserver ']'
        + '[' -x /usr/bin/Xvnc ']'
        + echo -n 'Starting VNC server: '
        Starting VNC server: + RETVAL=0
        + '[' '!' -d /tmp/.X11-unix ']'
        + for display in '${VNCSERVERS}'
        + SERVS=1
        + echo -n '1:larry '
        1:larry + DISP=1
        + USER=larry
        + VNCUSERARGS='-geometry 800x600'
        + runuser -l larry -c 'cd ~larry && [ -r .vnc/passwd ] && vncserver :1 -geometry 800x600'
        + RETVAL=1
        + '[' 1 -eq 0 ']'
        + break
        + '[' -z 1 ']'
        + '[' 1 -eq 0 ']'
        + failure 'vncserver start'
        + local rc=1
        + '[' color '!=' verbose -a -z '' ']'
        + echo_failure
        + '[' color = color ']'
        + echo -en '\033[60G'
        + echo -n '['
        [+ '[' color = color ']'
        + echo -en '\033[0;31m'
        + echo -n FAILED
        FAILED+ '[' color = color ']'
        + echo -en '\033[0;39m'
        + echo -n ']'
        ]+ echo -ne '\r'
        + return 1
        + '[' -x /usr/bin/plymouth ']'
        + /usr/bin/plymouth --details
        + return 1
        + echo
        + '[' 1 -eq 98 ']'
        + return 1
        + exit 1
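
    The telling line in the trace is the runuser invocation: vncserver :1 is only launched if ~larry/.vnc/passwd is readable, and RETVAL flips to 1 immediately afterwards. A plausible first step, assuming the VNC password was simply never set for that user:

        # Create ~/.vnc/passwd for the account the init script tries to start:
        su - larry -c vncpasswd
        # then try again:
        service vncserver start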

    Read the article

  • How to enable error log in lighttpd properly?

    - by Tomaszs
    I have a CentOS 5 system with lighttpd and FastCGI enabled. It logs access but does not log errors: I get Internal Server Error 500 with no info in the log, and when I try to open a non-existing file there is also nothing in the error log. How do I enable it properly? Below is the list of modules that I've enabled:

        server.modules = (
          "mod_rewrite",
          "mod_redirect",
          "mod_alias",
        # "mod_access",
        # "mod_cml",
        # "mod_trigger_b4_dl",
        # "mod_auth",
          "mod_status",
          "mod_setenv",
          "mod_fastcgi",
        # "mod_webdav",
        # "mod_proxy_core",
        # "mod_proxy_backend_fastcgi",
        # "mod_proxy_backend_scgi",
        # "mod_proxy_backend_ajp13",
        # "mod_simple_vhost",
        # "mod_evhost",
        # "mod_userdir",
        # "mod_cgi",
        # "mod_compress",
        # "mod_ssi",
        # "mod_usertrack",
        # "mod_expire",
        # "mod_secdownload",
        # "mod_rrdtool",
          "mod_accesslog" )

    Here are the debugging settings:

        ## enable debugging
        #debug.log-request-header = "enable"
        #debug.log-response-header = "enable"
        #debug.log-request-handling = "enable"
        debug.log-file-not-found = "enable"
        #debug.log-condition-handling = "enable"

    The paths to the error and access logs:

        ## where to send error-messages to
        server.errorlog = "/home/lxadmin/httpd/lighttpd/error.log"
        #### accesslog module
        accesslog.filename = "/home/lxadmin/httpd/lighttpd/ligh.log"

    The FastCGI settings:

        fastcgi.debug = 1
        fastcgi.server = ( ".php" =>
          (( "bin-path" => "/usr/bin/php-cgi",
             "socket" => "/tmp/php.socket",
             "max-procs" => 12,
             "bin-environment" => (
               "PHP_FCGI_CHILDREN" => "2",
               "PHP_FCGI_MAX_REQUESTS" => "500" )
          )))

    And in an included config file I have:

        server.errorlog = "/home/httpd/mywebsite.com/stats/mywebsite.com-error_log"

    As for the log files themselves:

        /home/httpd/mywebsite.com/stats/
        -rw-r--r-- 1 apache apache 5173239 May 16 11:34 mywebsite.com-custom_log
        -rwxrwxrwx 1 root   root         0 Mar 27  2009 mywebsite.com-error_log
        /home/lxadmin/httpd/lighttpd/
        -rwxrwxrwx 1 apache apache    2184 Apr 22 22:59 error.log
        -rwxrwxrwx 1 apache apache 6088621 May 16 11:26 ligh.log

    I gave the error logs chmod 777 to check whether permissions were the issue, but apparently they're not. So my question is: what do I do to get the error log enabled?
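
    One quick sanity check for a config like the above: ask the running process which log files it actually holds open. If the second server.errorlog assignment (in the included vhost file) won, only that path will show up:

        # List the log files the running lighttpd process has open.
        lsof -p "$(pgrep -o lighttpd)" | grep '\.log'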

    Read the article

  • Safe place to put an executable file on Windows 7 (and Windows XP)

    - by Ricket
    I'm working on a tweak to our logon script which will copy an executable file to the local hard drive and then, using the schtasks command, schedule a task to run that executable daily. It's a standalone executable file, and when run it creates a folder in the working directory (which would be the same directory as the executable in this case). In Windows XP, of course, it can be put anywhere - I'd probably just throw it in C:\SomeRandomFolder and let it be. But this logon script also runs on Windows 7 64-bit machines, and those are trickier with UAC and all that. The user is a local administrator but UAC is enabled, so I'm pretty sure that the executable would be blocked from copying to a location like C:\ or C:\Program Files (since those seem to be at least mildly protected by UAC). The scheduled task needs to run under the user's profile, so I can't just run it with SYSTEM and ignore the UAC boundaries; I need to find a path which the user can copy into. Where can I copy this standalone executable file, so that the copy operation succeeds without a UAC prompt on Windows 7, the path is either common to both WinXP and Win7 or uses environment variables, and the scheduled task running with user permissions is able to launch the executable?

    Read the article

  • List symlinks in specific relative directories

    - by Clinton Blackmore
    I have a server that shares out user home folders over the network. Each user has a Cache folder. Sometimes a symlink is used to redirect this folder to the hard drive of whichever machine they are using (and sometimes that doesn't work and they end up with a broken symlink [which is a matter for another day]). I'm trying to find out which users have symlinks and which don't. Within the shared folder, the path to a user's Cache folder looks like this:

        $GRADE/$USERNAME/Library/Caches

    So far I've come up with:

        cd /path/to/shared/home/folders
        sudo find . -name "Caches" -exec ls -ld {} \;

    and get results like this:

        lrwxr-xr-x@  1 name0 ES_Students  27 Jan 18 11:05 ./CES_Grade_03/name0/Library/Caches -> /tmp/name0/Library/Caches
        drwx------  11 name1 ES_Students 374 Dec  8 15:44 ./CES_Grade_03/name1/Library/Caches
        lrwxr-xr-x@  1 name2 ES_Students  27 Feb 23 14:27 ./CES_Grade_03/name2/Library/Caches -> /tmp/name2/Library/Caches
        drwx------  17 name3 ES_Students 578 Jan 25 11:13 ./CES_Grade_03/name3/Library/Caches
        drwx------  12 name4 ES_Students 408 Mar 22 13:09 ./CES_Grade_03/name4/Library/Caches

    It nags at me that there must be a better way, though. Yes, it is good enough, and a one-off task, but I want to know how to do it right! Surely, I should be able to do something like:

        cd /path/to/shared/home/folders
        sudo ls -ld **/**/Library/Caches

    I'm afraid I don't know the proper syntax, or whether there is a recursive folder-matching wildcard in bash, and my google-fu failed me. So, how do I properly formulate the search?
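
    One tighter sketch (the depth matches the $GRADE/$USERNAME/Library/Caches layout above): find can test for the symlink property directly, so each run lists only one kind of result.

        # Users whose Caches is a symlink:
        sudo find . -mindepth 4 -maxdepth 4 -path '*/Library/Caches' -type l -exec ls -ld {} +
        # Users whose Caches is a real directory: replace "-type l" with "! -type l".

    As for the ** wildcard itself: it does exist in bash 4 and later as the globstar option (shopt -s globstar), but it was not available in the stock bash shipped with Mac OS X at the time.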

    Read the article

  • Help with NAT for a virtual server

    - by Thanh Tran
    I have a dedicated server running Linux CentOS 5.3 with 2 IP addresses. I've installed a virtual machine using VMware Server. The host and the guest have a host-only network. Now I want to map the 2nd IP address to the virtual machine so that it can act as a second dedicated server for me. Here is what I do:

        modprobe iptable_nat
        echo "1" > /proc/sys/net/ipv4/ip_forward
        iptables -t filter -A FORWARD -s 192.168.78.128 -d 64.85.164.184 -j ACCEPT
        iptables -t nat -A PREROUTING -d 64.85.164.184 -i eth0 -j DNAT --to-destination 192.168.78.128
        iptables -t nat -A POSTROUTING -s 192.168.78.128 -o eth0 -j SNAT --to-source 64.85.164.184

    But it is not working as intended. What is wrong?
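
    One gap worth checking in a setup like this: the single FORWARD rule above matches traffic from the guest toward the public address, but DNATed packets arrive with the guest as the destination, and reply traffic needs a path too. A hedged sketch of the usual companion rules (assuming the FORWARD chain otherwise drops):

        # Let inbound DNATed traffic reach the guest, and let reply traffic back out.
        iptables -A FORWARD -d 192.168.78.128 -j ACCEPT
        iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT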

    Read the article

  • Debugging Samba/CUPS printer sharing with Windows

    - by mrdrbob
    I've got an HP Deskjet hooked up to a Slackware 12.2 box. I've got CUPS set up and can print a test page from the box just fine. I've also got Samba set up and have a couple of file shares that work fine. I'm trying to share that HP Deskjet out via Samba, but I can't get it to show up on any Windows system. I see the server and its file shares in Windows networking, but when I open Printers, no printer shows up. Running net view \\servername from the command line lists the file shares, but no printers. Here's the pertinent part of my smb.conf, if that helps:

        [global]
        workgroup = HOMENET
        security = share
        hosts allow = 192.168.1. 192.168.2. 127.
        load printers = yes
        printcap name = cups
        printing = cups
        log file = /var/log/samba.%m
        max log size = 50

        [printers]
        comment = All Printers
        path = /var/spool/samba
        browseable = no
        public = yes
        writable = no
        printable = yes
        guest only = yes

    Can anyone give me some pointers as to where to start looking for potential causes?

    Update: Running testparm shows no errors. Here's the output (minus the file shares):

        [global]
        workgroup = HOMENET
        security = SHARE
        log file = /var/log/samba.%m
        max log size = 50
        printcap name = cups
        hosts allow = 192.168.1., 192.168.2., 127.

        [printers]
        comment = All Printers
        path = /var/spool/samba
        guest only = Yes
        guest ok = Yes
        printable = Yes
        browseable = No
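
    A useful next probe is to ask the server for its share list the same way Windows does; if the [printers] section is being exported, the queue should appear with type Printer:

        # List shares, including printer shares, without authenticating.
        smbclient -L //servername -N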

    Read the article

  • Set up Glassfish connection pool to talk to a database on a Ubuntu VPS

    - by Harry Pham
    On my Ubuntu VPS I have a MySQL server and a GlassFish 3.0.1 application server running, and I am having a hard time getting GF to successfully ping the database. Here is my GF setup (assume x.y.z.t is the IP of my VPS):

        Resource Type: javax.sql.ConnectionPoolDataSource
        User: root
        DatabaseName: scholar
        Url: jdbc:mysql://x.y.z.t:3306/scholar
        URL: jdbc:mysql://x.y.z.t:3306/scholar
        Password: xxxx
        PortNumber: 3306
        ServerName: x.y.z.t

    Inside glassfish3/glassfish/lib I have mysql-connector-java-5.1.13-bin.jar. Inside the mysql database, here is the result of the query select User, Host from user:

        +------------------+-----------+
        | User             | Host      |
        +------------------+-----------+
        | root             | 127.0.0.1 |
        | debian-sys-maint | localhost |
        | root             | localhost |
        | root             | yunaeyes  |
        +------------------+-----------+

    Now, from my own machine, if I try to connect to this DB via a MySQL client GUI — well, I can't. From the table above it appears that only localhost connections to this DB are allowed. Keep in mind that both my DB and my GF are on the same VPS. Please help.
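
    Since the grants above only cover 127.0.0.1, localhost, and yunaeyes, a connection to the public address x.y.z.t will be rejected even from the VPS itself. A quick way to confirm, run on the VPS (both commands prompt for the password):

        # Matches the 127.0.0.1 grant, so this should connect:
        mysql -h 127.0.0.1 -u root -p scholar
        # Reproduces what the GlassFish pool attempts, and will likely be refused:
        mysql -h x.y.z.t -u root -p scholar

    If the first works and the second fails, pointing the pool's ServerName and URL at 127.0.0.1 is a minimal fix since both services share the VPS; MySQL's bind-address setting in my.cnf can impose the same restriction and is worth checking too.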

    Read the article

  • Security and Windows Login

    - by Mimisbrunnr
    I'm not entirely sure this is the right place for this question, but I cannot think of another, so here goes. In order to log in to the Windows machines at my office, one must first press the almighty CTRL-ALT-DELETE combo. Finding this very frustrating, I decided to look into why, and found claims from both my sysadmin and Microsoft stating that it's a security feature: because only Windows can read CTRL-ALT-DELETE, it helps ensure that an automated program cannot log in. Now, I'm not a master of the Windows operating system (as I generally use *nix), but I cannot believe that "only Windows can read that signal" bull. It just doesn't sit right. Is there a good reason for the CTRL-ALT-DELETE-to-login thing? Is it something I'm missing? Or is it another example of an antiquated legacy security measure?

    Read the article

  • The meaning of thermal throttle counters and package power limit notifications in Linux

    - by Trustin Lee
    Whenever I do some performance testing on my Linux-installed MacBook Pro, I often see the following messages in dmesg:

        Aug  8 09:29:31 infinity kernel: [79791.789404] CPU1: Package power limit notification (total events = 40365)
        Aug  8 09:29:31 infinity kernel: [79791.789408] CPU3: Package power limit notification (total events = 40367)
        Aug  8 09:29:31 infinity kernel: [79791.789411] CPU2: Package power limit notification (total events = 40453)
        Aug  8 09:29:31 infinity kernel: [79791.789414] CPU0: Package power limit notification (total events = 40453)

    I also see the throttle counters in sysfs increase over time:

        trustin@infinity:/sys/devices/system/cpu/cpu0/thermal_throttle
        $ ls
        core_power_limit_count  core_throttle_count  package_power_limit_count  package_throttle_count
        $ cat core_power_limit_count
        0
        $ cat core_throttle_count
        41912
        $ cat package_power_limit_count
        67945
        $ cat package_throttle_count
        67565

    What do these counters mean? Do they affect the performance of the CPU or the system? Do they increase the variance of my performance numbers (i.e., do they prevent me from getting reliable measurements)? If so, how do I avoid these messages and the increasing counters? Would running the performance tests on a well-cooled desktop system help?
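
    To see whether the counters actually move during a benchmark (the sysfs paths are the ones shown above), printing every counter with its file name before and after a run makes the comparison easy:

        # Print each throttle/power-limit counter prefixed by its path.
        grep . /sys/devices/system/cpu/cpu*/thermal_throttle/*_count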

    Read the article

  • Difference in performance: local machine vs Amazon medium instance

    - by user644745
    I see a drastic difference in the performance metrics when I run Apache Bench (ab) against my local machine vs production hosted on an Amazon medium instance. The same concurrency (5) and the same total number of requests (111) were run against both. Amazon has more memory than my local machine, but my local machine has 2 CPUs vs 1 CPU in m1.medium. My internet speed is very low at the moment; I am seeing a transfer rate of 25.29 KB/s. How can I improve the performance? I also do not know how to interpret Connect, Processing, Waiting, and Total in the ab output.

    Here is localhost:

        Server Hostname:        localhost
        Server Port:            9999
        Document Path:          /
        Document Length:        7631 bytes
        Concurrency Level:      5
        Time taken for tests:   1.424 seconds
        Complete requests:      111
        Failed requests:        102
           (Connect: 0, Receive: 0, Length: 102, Exceptions: 0)
        Write errors:           0
        Total transferred:      860808 bytes
        HTML transferred:       847155 bytes
        Requests per second:    77.95 [#/sec] (mean)
        Time per request:       64.148 [ms] (mean)
        Time per request:       12.830 [ms] (mean, across all concurrent requests)
        Transfer rate:          590.30 [Kbytes/sec] received

        Connection Times (ms)
                      min  mean[+/-sd] median   max
        Connect:        0    0   0.5      0       1
        Processing:    14   63  99.9     43     562
        Waiting:       14   60  96.7     39     560
        Total:         14   63  99.9     43     563

    And this is production:

        Document Path:          /
        Document Length:        7783 bytes
        Concurrency Level:      5
        Time taken for tests:   33.883 seconds
        Complete requests:      111
        Failed requests:        0
        Write errors:           0
        Total transferred:      877566 bytes
        HTML transferred:       863913 bytes
        Requests per second:    3.28 [#/sec] (mean)
        Time per request:       1526.258 [ms] (mean)
        Time per request:       305.252 [ms] (mean, across all concurrent requests)
        Transfer rate:          25.29 [Kbytes/sec] received

        Connection Times (ms)
                      min  mean[+/-sd] median   max
        Connect:      290  297  14.0    293     413
        Processing:   897 1178  63.4   1176    1391
        Waiting:      296  606 135.6    588    1171
        Total:       1191 1475  66.0   1471    1684

    Read the article

  • rm on a directory with millions of files

    - by BMDan
    Background: physical server, about two years old, 7200-RPM SATA drives connected to a 3Ware RAID card, ext3 FS mounted noatime and data=ordered, not under crazy load, kernel 2.6.18-92.1.22.el5, uptime 545 days. The directory doesn't contain any subdirectories, just millions of small (~100 byte) files, with some larger (a few KB) ones.

    We have a server that has gone a bit cuckoo over the course of the last few months, but we only noticed it the other day when it started being unable to write to a directory due to it containing too many files. Specifically, it started throwing this error in /var/log/messages:

        ext3_dx_add_entry: Directory index full!

    The disk in question has plenty of inodes remaining:

        Filesystem            Inodes   IUsed    IFree IUse% Mounted on
        /dev/sda3           60719104 3465660 57253444    6% /

    So I'm guessing that means we hit the limit of how many entries can be in the directory file itself. No idea how many files that would be, but it can't be more, as you can see, than three million or so. Not that that's good, mind you! But that's part one of my question: exactly what is that upper limit? Is it tunable? Before I get yelled at — I want to tune it down; this enormous directory caused all sorts of issues.

    Anyway, we tracked down the issue in the code that was generating all of those files, and we've corrected it. Now I'm stuck with deleting the directory. A few options here:

    1. rm -rf (dir). I tried this first. I gave up and killed it after it had run for a day and a half without any discernible impact.

    2. unlink(2) on the directory: definitely worth consideration, but the question is whether it'd be faster to let fsck clean up the orphaned inodes than to delete the files individually via unlink(2). That is, one way or another, I've got to mark those inodes as unused. This assumes, of course, that I can tell fsck not to drop the entries to the files in /lost+found; otherwise, I've just moved my problem. In addition to all the other concerns, after reading about this a bit more, it turns out I'd probably have to call some internal FS functions, as none of the unlink(2) variants I can find would allow me to just blithely delete a directory with entries in it. Pooh.

    3. while [ true ]; do ls -Uf | head -n 10000 | xargs rm -f 2>/dev/null; done — this is actually the shortened version; the real one I'm running, which just adds some progress reporting and a clean stop when we run out of files to delete, is:

        export i=0;
        time ( while [ true ]; do
          ls -Uf | head -n 3 | grep -qF '.png' || break;
          ls -Uf | head -n 10000 | xargs rm -f 2>/dev/null;
          export i=$(($i+10000));
          echo "$i...";
        done )

    This seems to be working rather well. As I write this, it has deleted 260,000 files in the past thirty minutes or so.

    Now, for the questions:

    1. As mentioned above, is the per-directory entry limit tunable?

    2. Why did it take "real 7m9.561s / user 0m0.001s / sys 0m0.001s" to delete a single file which was the first one in the list returned by ls -U, and it took perhaps ten minutes to delete the first 10,000 entries with the command in #3, but now it's hauling along quite happily? For that matter, it deleted 260,000 in about thirty minutes, but it has now taken another fifteen minutes to delete 60,000 more. Why the huge swings in speed?

    3. Is there a better way to do this sort of thing? Not "store millions of files in a directory"; I know that's silly, and it wouldn't have happened on my watch. Googling the problem and looking through SF and SO offers a lot of variations on find that obviously have the wrong idea; it's not going to be faster than my approach, for several self-evident reasons. But does the delete-via-fsck idea have any legs? Or something else entirely? I'm eager to hear out-of-the-box (or inside-the-not-well-known-box) thinking.

    Thanks for reading the small novel; feel free to ask questions and I'll be sure to respond. I'll also update the question with the final number of files and how long the delete script ran once I have that.

    Final script output!:

        2970000...
        2980000...
        2990000...
        3000000...
        3010000...

        real    253m59.331s
        user    0m6.061s
        sys     5m4.019s

    So, three million files deleted in a bit over four hours.
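
    On the "better way" question, one widely used approach not tried above avoids both per-file exec overhead and giant argument lists: syncing an empty directory over the full one. A sketch with placeholder paths:

        # rsync deletes everything in bigdir that is absent from the empty directory.
        mkdir /tmp/empty
        rsync -a --delete /tmp/empty/ /path/to/bigdir/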

    Read the article

  • Apache virtualhost - only apply script if file does not exist in document root

    - by Brett Thomas
    Sorry for the newbie apache question. I'm wondering if it's possible to set up the following non-conventional apache virtualhost (for a Django app): -- If a file exists in the DocumentRoot (/var/www) it will be shown. So if /var/www/foo.html exists, then it can be seen at www.example.com/foo.html. -- If file does not exist, it is served via a virtualhost. I'm using mod_wsgi with a WSGIScriptAlias directive that points to a Django app. So if there is no /var/www/bar.html, www.example.com/bar.html will be passed to the Django app, which may or may not be a 404 error. One option is to create an Alias for each individual file/directory, but people want to be able to post a file without adding an alias, and we want to keep the above URL structure for legacy reasons. Simplified Virtualhost is: <VirtualHost *:80> ServerName www.example.com DocumentRoot /var/www WSGIScriptAlias / /path/to/django.wsgi <Directory /path/to/app> Order allow,deny Allow from all </Directory> Alias /hi.html /var/www/hi.html </VirtualHost> The goal is to have www.example.com/hi.html work as above, without the Alias line

    Read the article

  • Wrong Outlook Anywhere settings

    - by Ken Guru
    Hey all, I wanted to enable NTLM authentication on Outlook Anywhere, and after running the command Set-OutlookAnywhere -IISAuthenticationMethods Basic,NTLM, my settings got changed. This is a dump before I ran the command:

        [PS] C:\Windows\system32>Get-OutlookAnywhere
        ServerName                 : EXCAS01
        SSLOffloading              : False
        ExternalHostname           :
        ClientAuthenticationMethod : Basic
        IISAuthenticationMethods   : {Basic}
        MetabasePath               : IIS:///W3SVC/1/ROOT/Rpc
        Path                       : C:\Windows\System32\RpcProxy
        Server                     : EXCAS01
        AdminDisplayName           :
        ExchangeVersion            : 0.1 (8.0.535.0)
        Name                       : Rpc (Default Web Site)
        DistinguishedName          : CN=Rpc (Default Web Site),CN=HTTP,CN=Protocols,CN=EXCAS01,CN=Servers,CN=Exchange Administrative Group (FYDIBOHF23SPDLT),CN=Administrative Groups,CN=First Organization,CN=Microsoft Exchange,CN=Services,CN=Configuration,DC=asp,DC=ssc,DC=no
        Identity                   : EXCAS01\Rpc (Default Web Site)
        Guid                       : 289b4865-caf1-4412-95ee-6fb0dff55e8b
        ObjectCategory             : asp.ssc.no/Configuration/Schema/ms-Exch-Rpc-Http-Virtual-Directory
        ObjectClass                : {top, msExchVirtualDirectory, msExchRpcHttpVirtualDirectory}
        WhenChanged                : 05.01.2011 16:59:55
        WhenCreated                : 27.11.2009 11:20:12
        OriginatingServer          :
        IsValid                    : True

    Notice the values of "Name", "DistinguishedName", and "Identity". After I ran the command, I ended up with this:

        [PS] C:\Windows\system32>Get-OutlookAnywhere
        ServerName                 : EXCAS01
        SSLOffloading              : False
        ExternalHostname           :
        ClientAuthenticationMethod : Basic
        IISAuthenticationMethods   : {Basic, Ntlm}
        MetabasePath               : IIS:///W3SVC/1/ROOT/Rpc
        Path                       : C:\Windows\System32\RpcProxy
        Server                     : EXCAS01
        AdminDisplayName           :
        ExchangeVersion            : 0.1 (8.0.535.0)
        Name                       : EXCAS01
        DistinguishedName          : CN=EXCAS01,CN=HTTP,CN=Protocols,CN=EXCAS01,CN=Servers,CN=Exchange Administrative Group (FYDIBOHF23SPDLT),CN=Administrative Groups,CN=First Organization,CN=Microsoft Exchange,CN=Services,CN=Configuration,DC=asp,DC=ssc,DC=no
        Identity                   : EXCAS01\EXCAS01
        Guid                       : 289b4865-caf1-4412-95ee-6fb0dff55e8b
        ObjectCategory             : asp.ssc.no/Configuration/Schema/ms-Exch-Rpc-Http-Virtual-Directory
        ObjectClass                : {top, msExchVirtualDirectory, msExchRpcHttpVirtualDirectory}
        WhenChanged                : 06.01.2011 09:43:50
        WhenCreated                : 27.11.2009 11:20:12
        OriginatingServer          : ASP-DC-2.
        IsValid                    : True

    Now "Name", "DistinguishedName", and "Identity" have changed, and when I try to change them back by running Set-OutlookAnywhere -Identity "EXCAS01\Rpc (Default Web Site)", I get the following error:

        [PS] C:\Windows\system32>Set-OutlookAnywhere -Identity "EXCAS01\Rpc (Default Web Site)"
        Set-OutlookAnywhere : The operation could not be performed because object 'EXCAS01\Rpc (Default Web Site)' could not be found on domain controller 'ASP-DC-2.'.

    Remember, RPC over HTTP works fine with Basic authentication (even with the wrong settings), but NTLM still doesn't work. How do I change the settings back?

    Read the article

  • Random flickering on 2.6.32 kernels after suspend

    - by whitequark
    I have Xubuntu 9.10 installed on a Toshiba NB200 netbook with an Intel video card handled by the i915 driver. With the stable, recommended 2.6.31 kernel, everything but WiFi works fine: the Atheros ath9k WiFi shows too little signal power and sometimes loses packets in 'bursts'. With 2.6.32-* (I tested -9 to -11 from Ubuntu's kernel unstable PPA), everything works fine right up until the first suspend: echo mem >/sys/power/state. After that, random unidentified fullscreen 'one-frame' flickering begins in Xorg, and after a couple of minutes everything eventually hangs, showing a solid grey screen (not white; it's like the default button colour). No X keys work: Ctrl+Alt+Fn does nothing, and neither does blind typing in the console. Magic SysRq still works and I was able to reboot. Also, there is one out-of-tree kernel module called omnibook that is required to turn on WiFi and Bluetooth. Any advice?

    Read the article

  • Incorrect deployment of WSGI app to AWS using Elastic Beanstalk

    - by Dzmitry Zhaleznichenka
    (cross-link to AWS forums) I have developed a simple Python web service using WSGI and would like to deploy it to the AWS cloud using Elastic Beanstalk. My problem is that I cannot get all the options I specify in the Elastic Beanstalk configuration to be correctly applied in the cloud.

    For deployment, I use the Elastic Beanstalk CLI utility. I ran the eb init command and set the required parameters. After this, a directory named .elasticbeanstalk was created in my source tree. It has two config files that are used for deployment, namely config and optionsettings. The latter, among other options, contains the WSGI configuration that has to update /etc/httpd/conf.d/wsgi.conf on the instances. After some of my adjustments the file has the following settings:

        [aws:elasticbeanstalk:application:environment]
        DJANGO_SETTINGS_MODULE =
        PARAM1 =
        PARAM2 =
        PARAM4 =
        PARAM3 =
        PARAM5 =

        [aws:elasticbeanstalk:container:python]
        WSGIPath = handler.py
        NumProcesses = 2
        StaticFiles = /static=
        NumThreads = 10

        [aws:elasticbeanstalk:container:python:staticfiles]
        /static = static/

        [aws:elasticbeanstalk:hostmanager]
        LogPublicationControl = false

        [aws:autoscaling:launchconfiguration]
        InstanceType = t1.micro
        EC2KeyName = zmicier-aws

        [aws:elasticbeanstalk:application]
        Application Healthcheck URL =

        [aws:autoscaling:asg]
        MaxSize = 10
        MinSize = 1
        Custom Availability Zones =

        [aws:elasticbeanstalk:monitoring]
        Automatically Terminate Unhealthy Instances = true

        [aws:elasticbeanstalk:sns:topics]
        Notification Endpoint =
        Notification Protocol = email

    It turns out that not all of these options are considered when I start or update the environment. When I update NumThreads or NumProcesses, the respective parameters change in wsgi.conf as expected. But whatever I write to the WSGIPath and StaticFiles parameters, I am not able to automatically change the respective values in wsgi.conf; they remain

        Alias /static /opt/python/current/app/
        WSGIScriptAlias / /opt/python/current/app/application.py

    which drives me nuts. Moreover, when I deploy my application using git aws.push with the following contents in the .ebextensions/python.config file, none of the options I specify in it affect the deployment:

        option_settings:
          - namespace: aws:elasticbeanstalk:container:python
            option_name: WSGIPath
            value: mysite/wsgi.py
          - namespace: aws:elasticbeanstalk:container:python
            option_name: NumProcesses
            value: 5
          - namespace: aws:elasticbeanstalk:container:python
            option_name: NumThreads
            value: 25
          - namespace: aws:elasticbeanstalk:container:python:staticfiles
            option_name: /static/
            value: app/static/

    I wonder what I should do to force AWS to use all the parameters I specify in the configuration, namely the WSGIPath and the path to my static data.

    Read the article

  • Choose identity from ssh-agent by file name

    - by leoluk
    Problem: I have some 20-30 ssh-agent identities. Most servers refuse authentication with "Too many failed authentications", as SSH usually won't let me try 20 different keys to log in. At the moment, I am specifying the identity file for every host manually, using the IdentityFile and IdentitiesOnly directives, so that SSH will only try one key file, which works. Unfortunately, this stops working as soon as the original key files aren't available anymore. ssh-add -l shows me the correct paths for every key file, and they match the paths in .ssh/config, but it doesn't work. Apparently, SSH selects the identity by public key signature and not by file name, which means the original files have to be available so that SSH can extract the public key. There are two problems with this: it stops working as soon as I unplug the flash drive holding the keys, and it renders agent forwarding useless, as the key files aren't available on the remote host. Of course, I could extract the public keys from my identity files and store them on my computer, and on every remote computer I usually log into; that doesn't look like a desirable solution, though. What I need is a way to select an identity from ssh-agent by file name, so that I can easily select the right key using .ssh/config or by passing -i /path/to/original/key, even on a remote host I've SSH'd into. It would be even better if I could "nickname" the keys so that I don't even have to specify the full path.
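
    For reference, the per-host pinning described above looks like this in ~/.ssh/config (the host and key path are placeholders):

        Host myserver
            HostName myserver.example.com
            IdentityFile ~/.ssh/keys/myserver
            IdentitiesOnly yes

    One hedged note on the "extract the public keys" idea: OpenSSH documents that IdentityFile may point to a file containing only a public key, in which case the private half is looked up in the agent — so keeping just the .pub files on a remote host can make the above work with agent forwarding, at the cost of exactly the copying step dismissed above.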

    Read the article

  • Win2k8R2 / IIS 7.5 - users getting 503 response, no 503 error reported in logs

    - by merk
    I've got 2 web servers with mirrored content and a load balancer sitting in front of them. Starting yesterday we've been getting people complaining about 503 errors, but I can't find any 503 errors in the IIS log file. However, the server host is saying these errors are due to .NET errors in our website which are causing the app pool to recycle. They pointed out several errors in the Windows application event log which look like this:

        Log Name:      Application
        Source:        ASP.NET 4.0.30319.0
        Date:          3/31/2012 8:35:37 PM
        Event ID:      1309
        Task Category: Web Event
        Level:         Warning
        Keywords:      Classic
        User:          N/A
        Computer:      6251.local
        Description:
        Event code: 3005
        Event message: An unhandled exception has occurred.
        Event time: 3/31/2012 8:35:37 PM
        Event time (UTC): 4/1/2012 1:35:37 AM
        Event ID: e7a580c7b38545cca3416a8595408f24
        Event sequence: 97
        Event occurrence: 1
        Event detail code: 0
        Application information:
            Application domain: /LM/W3SVC/2/ROOT-1-129777167518960645
            Trust level: Full
            Application Virtual Path: /
            Application Path: C:\inetpub\wwwroot\mywebsite\
            Machine name: 6252
        Process information:
            Process ID: 20000
            Process name: w3wp.exe
            Account name: IIS APPPOOL\MyAppPool

    In particular, they are saying that the account name under Process information indicates that the app pool is recycling; if the app pool were not recycling, the account name would instead be the folder where the website files are located. I checked the app pool settings: it's set to recycle every 29 hours, and rapid-fail protection is set to the default of 5 failures in 5 minutes. But I have not seen 5 failures in the event log within that short a time span. Can anyone help me confirm whether the 503 responses are indeed being generated by the app pool recycling? Or are these errors coming from somewhere else? My guess at the time was that their load balancer was the one actually returning the 503 error, but that was just a guess.

    Read the article

  • Apache authentication

    - by veilig
    I'm trying to set up a local webserver on my network. I want to be able to access the webserver from any machine inside my network without authenticating, and two extra domains also need access to it without authenticating. I would like everyone else to have to authenticate. So far, I can get to it from inside my network, and the two extra domains can access my webserver, but everyone else just hangs: they don't get an authentication prompt or anything. Can anyone tell me what I'm doing wrong here? This is part of my Apache sites-available file so far:

        <Directory /path/to/server/>
            Options Indexes FollowSymLinks -Multiviews
            Order Deny,Allow
            Deny from All
            Allow from 192.168
            Allow from localhost
            Allow from domain1
            Allow from domain2
            AuthType Basic
            AuthName "my authentication"
            AuthUserFile /path/to/file
            Require valid-user
            Satisfy Any
            AllowOverride All
            <Files .htaccess>
                Order Allow,Deny
                Allow from All
            </Files>
        </Directory>

    Read the article
