Search Results

Search found 41497 results on 1660 pages for 'fault'.


  • PHP 5.3.2 + Fcgid 2.3.5 + Apache 2.2.14 + SuExec => Connection reset by peer: mod_fcgid: error reading data from FastCGI server

    - by Zigzag
    Hi, I'm trying to use PHP 5.3.2 + Fcgid 2.3.5 + Apache 2.2.14, but I always get the error "Connection reset by peer: mod_fcgid: error reading data from FastCGI server", and Apache returns a 500 error each time I try to execute a PHP page. I compiled Apache with these options:

        ./configure --with-mpm=worker --enable-userdir=shared --enable-actions=shared \
          --enable-alias=shared --enable-auth=shared --enable-so --enable-deflate \
          --enable-cache=shared --enable-disk-cache=shared --enable-info=shared \
          --enable-rewrite=shared --enable-suexec=shared --with-suexec-caller=www-data \
          --with-suexec-userdir=site --with-suexec-logfile=/usr/local/apache2/logs/suexec.log \
          --with-suexec-docroot=/home

    Then PHP:

        ./configure --with-config-file-path=/usr/local/apache2/php \
          --with-apxs2=/usr/local/apache2/bin/apxs --with-mysql --with-zlib \
          --enable-exif --with-gd --enable-cgi

    Then Fcgid:

        APXS=/usr/local/apache2/bin/apxs ./configure.apxs

    The vhost is:

        <Directory /home/website_panel/site/>
            FCGIWrapper /home/website_panel/cgi/php .php
            ...
            ErrorLog /home/website_panel/logs/error.log
        </Directory>

    The vhost error log:

        cat /home/website_panel/logs/error.log
        [Sun Mar 07 22:19:41 2010] [warn] [client xx.xx.xx.xx] (104)Connection reset by peer: mod_fcgid: error reading data from FastCGI server
        [Sun Mar 07 22:19:41 2010] [error] [client xx.xx.xx.xx] Premature end of script headers: test.php
        [Sun Mar 07 22:19:41 2010] [warn] [client xx.xx.xx.xx] (104)Connection reset by peer: mod_fcgid: error reading data from FastCGI server
        [Sun Mar 07 22:19:41 2010] [error] [client xx.xx.xx.xx] Premature end of script headers: test.php
        [Sun Mar 07 22:19:42 2010] [warn] [client xx.xx.xx.xx] (104)Connection reset by peer: mod_fcgid: error reading data from FastCGI server
        [Sun Mar 07 22:19:42 2010] [error] [client xx.xx.xx.xx] Premature end of script headers: test.php
        [Sun Mar 07 22:19:43 2010] [warn] [client xx.xx.xx.xx] (104)Connection reset by peer: mod_fcgid: error reading data from FastCGI server
        [Sun Mar 07 22:19:43 2010] [error] [client xx.xx.xx.xx] Premature end of script headers: test.php

    The suexec log:

        root:/usr/local/apache2# cat /var/log/apache2/suexec.log
        [2010-03-07 22:11:05]: uid: (1001/website_panel) gid: (1001/website_panel) cmd: php
        [2010-03-07 22:11:15]: uid: (1001/website_panel) gid: (1001/website_panel) cmd: php
        [2010-03-07 22:11:23]: uid: (1001/website_panel) gid: (1001/website_panel) cmd: php
        [2010-03-07 22:19:41]: uid: (1001/website_panel) gid: (1001/website_panel) cmd: php
        [2010-03-07 22:19:41]: uid: (1001/website_panel) gid: (1001/website_panel) cmd: php
        [2010-03-07 22:19:42]: uid: (1001/website_panel) gid: (1001/website_panel) cmd: php
        [2010-03-07 22:19:43]: uid: (1001/website_panel) gid: (1001/website_panel) cmd: php

    Apache's own error_log and the PHP version:

        root:/usr/local/apache2# cat logs/error_log
        [Sun Mar 07 22:18:47 2010] [notice] suEXEC mechanism enabled (wrapper: /usr/local/apache2/bin/suexec)
        [Sun Mar 07 22:18:47 2010] [notice] mod_bw : Memory Allocated 0 bytes (each conf takes 32 bytes)
        [Sun Mar 07 22:18:47 2010] [notice] mod_bw : Version 0.7 - Initialized [0 Confs]
        [Sun Mar 07 22:18:47 2010] [notice] Apache/2.2.14 (Unix) mod_fcgid/2.3.5 configured -- resuming normal operations

        root:/usr/local/apache2# /home/website_panel/cgi/php -v
        PHP 5.3.2 (cli) (built: Mar 7 2010 16:01:49)
        Copyright (c) 1997-2010 The PHP Group
        Zend Engine v2.3.0, Copyright (c) 1998-2010 Zend Technologies

    If someone has got an idea, I want to hear it ^^ Thanks!
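    One detail in the output above stands out: the wrapper at /home/website_panel/cgi/php reports itself as a CLI build ("PHP 5.3.2 (cli)"), while mod_fcgid needs the CGI/FastCGI binary (php-cgi, the one --enable-cgi produces) to speak the FastCGI protocol. A minimal sketch of a shell wrapper that execs the CGI binary instead; the install path and request limit are illustrative:

        #!/bin/sh
        # Hypothetical FastCGI wrapper for /home/website_panel/cgi/php.
        # Keep PHP_FCGI_MAX_REQUESTS >= mod_fcgid's FcgidMaxRequestsPerProcess,
        # or PHP exits while mod_fcgid still expects it to serve requests.
        PHP_FCGI_MAX_REQUESTS=10000
        export PHP_FCGI_MAX_REQUESTS
        exec /usr/local/php/bin/php-cgi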


  • NFS inherit permissions from shared directory - Mac OS client

    - by devius
    Short question: Is there a way to have files on an NFS share on the client inherit the permissions of the shared directory?

    Scenario:

        Ubuntu 12.04 server
        Mac 10.7.4 client
        shared directory has 775 permissions
        created files on client have 644 permissions

    I tried setting ACLs with the setfacl command, as explained here, and it appears they are set on the server. getfacl returns this:

        # file: Documents/
        # owner: someguy
        # group: somegroup
        # flags: -s-
        user::rwx
        group::rwx
        other::r-x
        default:user::rwx
        default:group::rwx
        default:group:somegroup:rwx
        default:mask::rwx
        default:other::r-x

    However, when I create a new file on the Mac OS client it still has 644 permissions and not the 664 I would expect. Files created on the server have the expected permissions. Files created with another Ubuntu client also have the expected permissions.
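    For reference, a default ACL like the one shown would typically be installed server-side along these lines; a sketch only, with an illustrative export path:

        # -d installs a *default* ACL that new files inside inherit;
        # -m adds/modifies the named entry (group and path are illustrative).
        sudo setfacl -d -m group:somegroup:rwx /export/Documents
        getfacl /export/Documents   # confirm the default: entries are present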


  • How do I make Linux recognize a new SATA /dev/sda drive I hot swapped in without rebooting?

    - by Philip Durbin
    Hot swapping out a failed SATA /dev/sda drive worked fine, but when I went to swap in a new drive, it wasn't recognized:

        [root@fs-2 ~]# tail -18 /var/log/messages
        May 5 16:54:35 fs-2 kernel: ata1: exception Emask 0x10 SAct 0x0 SErr 0x50000 action 0xe frozen
        May 5 16:54:35 fs-2 kernel: ata1: SError: { PHYRdyChg CommWake }
        May 5 16:54:40 fs-2 kernel: ata1: link is slow to respond, please be patient (ready=0)
        May 5 16:54:45 fs-2 kernel: ata1: device not ready (errno=-16), forcing hardreset
        May 5 16:54:45 fs-2 kernel: ata1: soft resetting link
        May 5 16:54:50 fs-2 kernel: ata1: link is slow to respond, please be patient (ready=0)
        May 5 16:54:55 fs-2 kernel: ata1: SRST failed (errno=-16)
        May 5 16:54:55 fs-2 kernel: ata1: soft resetting link
        May 5 16:55:00 fs-2 kernel: ata1: link is slow to respond, please be patient (ready=0)
        May 5 16:55:05 fs-2 kernel: ata1: SRST failed (errno=-16)
        May 5 16:55:05 fs-2 kernel: ata1: soft resetting link
        May 5 16:55:10 fs-2 kernel: ata1: link is slow to respond, please be patient (ready=0)
        May 5 16:55:40 fs-2 kernel: ata1: SRST failed (errno=-16)
        May 5 16:55:40 fs-2 kernel: ata1: limiting SATA link speed to 1.5 Gbps
        May 5 16:55:40 fs-2 kernel: ata1: soft resetting link
        May 5 16:55:45 fs-2 kernel: ata1: SRST failed (errno=-16)
        May 5 16:55:45 fs-2 kernel: ata1: reset failed, giving up
        May 5 16:55:45 fs-2 kernel: ata1: EH complete

    I tried a couple of things to make the server find the new /dev/sda, such as rescan-scsi-bus.sh, but they didn't work:

        [root@fs-2 ~]# echo "---" > /sys/class/scsi_host/host0/scan
        -bash: echo: write error: Invalid argument

        [root@fs-2 ~]# /root/rescan-scsi-bus.sh -l
        [snip]
        0 new device(s) found.
        0 device(s) removed.

        [root@fs-2 ~]# ls /dev/sda
        ls: /dev/sda: No such file or directory

    I ended up rebooting the server. /dev/sda was recognized, I fixed the software RAID, and everything is fine now. But for next time, how can I make Linux recognize a new SATA drive I have hot swapped in without rebooting? The operating system in question is RHEL 5.3:

        [root@fs-2 ~]# cat /etc/redhat-release
        Red Hat Enterprise Linux Server release 5.3 (Tikanga)

    The hard drive is a Seagate Barracuda ES.2 SATA 3.0-Gb/s 500-GB, model ST3500320NS. Here is the lspci output:

        [root@fs-2 ~]# lspci
        00:00.0 RAM memory: nVidia Corporation MCP55 Memory Controller (rev a2)
        00:01.0 ISA bridge: nVidia Corporation MCP55 LPC Bridge (rev a3)
        00:01.1 SMBus: nVidia Corporation MCP55 SMBus (rev a3)
        00:02.0 USB Controller: nVidia Corporation MCP55 USB Controller (rev a1)
        00:02.1 USB Controller: nVidia Corporation MCP55 USB Controller (rev a2)
        00:04.0 IDE interface: nVidia Corporation MCP55 IDE (rev a1)
        00:05.0 IDE interface: nVidia Corporation MCP55 SATA Controller (rev a3)
        00:05.1 IDE interface: nVidia Corporation MCP55 SATA Controller (rev a3)
        00:05.2 IDE interface: nVidia Corporation MCP55 SATA Controller (rev a3)
        00:06.0 PCI bridge: nVidia Corporation MCP55 PCI bridge (rev a2)
        00:08.0 Bridge: nVidia Corporation MCP55 Ethernet (rev a3)
        00:09.0 Bridge: nVidia Corporation MCP55 Ethernet (rev a3)
        00:0a.0 PCI bridge: nVidia Corporation MCP55 PCI Express bridge (rev a3)
        00:0b.0 PCI bridge: nVidia Corporation MCP55 PCI Express bridge (rev a3)
        00:0c.0 PCI bridge: nVidia Corporation MCP55 PCI Express bridge (rev a3)
        00:0d.0 PCI bridge: nVidia Corporation MCP55 PCI Express bridge (rev a3)
        00:0e.0 PCI bridge: nVidia Corporation MCP55 PCI Express bridge (rev a3)
        00:0f.0 PCI bridge: nVidia Corporation MCP55 PCI Express bridge (rev a3)
        00:18.0 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] HyperTransport Technology Configuration
        00:18.1 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] Address Map
        00:18.2 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] DRAM Controller
        00:18.3 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] Miscellaneous Control
        00:19.0 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] HyperTransport Technology Configuration
        00:19.1 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] Address Map
        00:19.2 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] DRAM Controller
        00:19.3 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] Miscellaneous Control
        03:00.0 VGA compatible controller: Matrox Graphics, Inc. MGA G200e [Pilot] ServerEngines (SEP1) (rev 02)
        04:00.0 PCI bridge: NEC Corporation uPD720400 PCI Express - PCI/PCI-X Bridge (rev 06)
        04:00.1 PCI bridge: NEC Corporation uPD720400 PCI Express - PCI/PCI-X Bridge (rev 06)

    Update: In perhaps a dozen cases, we've been forced to reboot servers because hot swap hasn't "just worked." Thanks for the answers suggesting a closer look at the SATA controller. I've included the lspci output for the problematic system above (hostname: fs-2). I could still use some help understanding what exactly isn't supported hardware-wise in terms of hot swap for that system. Please let me know what other output besides lspci might be useful. The good news is that hot swap "just worked" today on one of our servers (hostname: www-1), which is very rare for us. Here is the lspci output:

        [root@www-1 ~]# lspci
        00:00.0 RAM memory: nVidia Corporation MCP55 Memory Controller (rev a2)
        00:01.0 ISA bridge: nVidia Corporation MCP55 LPC Bridge (rev a3)
        00:01.1 SMBus: nVidia Corporation MCP55 SMBus (rev a3)
        00:02.0 USB Controller: nVidia Corporation MCP55 USB Controller (rev a1)
        00:02.1 USB Controller: nVidia Corporation MCP55 USB Controller (rev a2)
        00:04.0 IDE interface: nVidia Corporation MCP55 IDE (rev a1)
        00:05.0 IDE interface: nVidia Corporation MCP55 SATA Controller (rev a3)
        00:05.1 IDE interface: nVidia Corporation MCP55 SATA Controller (rev a3)
        00:05.2 IDE interface: nVidia Corporation MCP55 SATA Controller (rev a3)
        00:06.0 PCI bridge: nVidia Corporation MCP55 PCI bridge (rev a2)
        00:08.0 Bridge: nVidia Corporation MCP55 Ethernet (rev a3)
        00:09.0 Bridge: nVidia Corporation MCP55 Ethernet (rev a3)
        00:0b.0 PCI bridge: nVidia Corporation MCP55 PCI Express bridge (rev a3)
        00:0c.0 PCI bridge: nVidia Corporation MCP55 PCI Express bridge (rev a3)
        00:0f.0 PCI bridge: nVidia Corporation MCP55 PCI Express bridge (rev a3)
        00:18.0 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] HyperTransport Configuration
        00:18.1 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] Address Map
        00:18.2 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] DRAM Controller
        00:18.3 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] Miscellaneous Control
        00:18.4 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] Link Control
        00:19.0 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] HyperTransport Configuration
        00:19.1 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] Address Map
        00:19.2 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] DRAM Controller
        00:19.3 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] Miscellaneous Control
        00:19.4 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] Link Control
        03:00.0 VGA compatible controller: Matrox Graphics, Inc. MGA G200e [Pilot] ServerEngines (SEP1) (rev 02)
        04:00.0 PCI bridge: NEC Corporation uPD720400 PCI Express - PCI/PCI-X Bridge (rev 06)
        04:00.1 PCI bridge: NEC Corporation uPD720400 PCI Express - PCI/PCI-X Bridge (rev 06)
        09:00.0 SCSI storage controller: LSI Logic / Symbios Logic SAS1064ET PCI-Express Fusion-MPT SAS (rev 04)
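    One small but important detail in the failed attempt above: the sysfs scan node expects three space-separated wildcard fields, which is why echo "---" returned "Invalid argument". A sketch of the usual libata rescan sequence; the host number is illustrative and should correspond to the ataN port seen in dmesg:

        # Three wildcards (channel, target, LUN), separated by spaces:
        echo "- - -" > /sys/class/scsi_host/host0/scan
        # Watch the kernel ring buffer to see whether the link comes back up:
        dmesg | tail

    Whether a rescan can succeed at all still depends on the controller and driver advertising hotplug support, which is the open question in the post.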


  • If a RAID controller changes, are the drives still usable without reformatting?

    - by Jeremy
    I've been wanting to do a RAID 1 setup in my home with a pair of SATA drives. Someone told me that if the controller fails, you can't just get a new controller, because you'll have to reformat the drives. Or is that true only in some implementations? I was originally just looking at an onboard RAID controller, or an entry-level NAS device like the Intel SS4200-E, but if the hardware (controller) ever fails, will I be out of luck accessing the data if I can't get the exact same hardware to replace it?
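    For context, the controller-portability concern is exactly what Linux software RAID sidesteps: the array metadata lives on the disks themselves, so any machine that can see the drives can assemble the mirror. A sketch, with illustrative device names:

        # Create a two-disk mirror; metadata is written to the member disks:
        mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
        # On any replacement machine, scan the disks and reassemble:
        mdadm --assemble --scan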


  • How to set up nginx and a subdomain

    - by Evolutio
    I have gitlab installed on my server and it answers on all domains, e.g. git.lars-dev.de, lars-dev.de, and *.lars-dev.de. How can I run gitlab only on git.lars-dev.de and another subdomain on files.lars-dev.de? My lars-dev conf:

        server {
            listen *:80; ## listen for ipv4; this line is default and implied
            #listen [::]:80 default_server ipv6only=on; ## listen for ipv6

            root /var/www/webdata/lars-dev.de/htdocs;
            index index.html index.htm;

            server_name lars-dev.de;

            location / {
                try_files $uri $uri/ /index.html;
            }

            #error_page 500 502 503 504 /50x.html;
            #location = /50x.html {
            #    root /usr/share/nginx/www;
            #}

            # deny access to .htaccess files, if Apache's document root
            # concurs with nginx's one
            #
            #location ~ /\.ht {
            #    deny all;
            #}
        }

    And the gitlab configuration:

        upstream gitlab {
            server unix:/home/git/gitlab/tmp/sockets/gitlab.socket;
        }

        server {
            listen *:80;                  # e.g., listen 192.168.1.1:80; in most cases *:80 is a good idea
            server_name git.lars-dev.de;  # e.g., server_name source.example.com;
            server_tokens off;            # don't show the version number, a security best practice
            root /home/git/gitlab/public;

            # individual nginx logs for this gitlab vhost
            access_log /var/log/nginx/gitlab_access.log;
            error_log /var/log/nginx/gitlab_error.log;

            location / {
                # serve static files from defined root folder;
                # @gitlab is a named location for the upstream fallback, see below
                try_files $uri $uri/index.html $uri.html @gitlab;
            }

            # if a file which is not found in the root folder is requested,
            # then the proxy passes the request to the upstream (gitlab unicorn)
            location @gitlab {
                proxy_read_timeout 300;     # https://github.com/gitlabhq/gitlabhq/issues/694
                proxy_connect_timeout 300;  # https://github.com/gitlabhq/gitlabhq/issues/694
                proxy_redirect off;

                proxy_set_header X-Forwarded-Proto $scheme;
                proxy_set_header Host $http_host;
                proxy_set_header X-Real-IP $remote_addr;

                proxy_pass http://gitlab;
            }
        }
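    For what it's worth, when no server block matches the Host header, nginx routes the request to the default server for that port (the first block defined, unless one is marked explicitly), which is the usual reason a vhost "answers" on every domain. A sketch of the missing pieces, the files vhost plus an explicit catch-all; the root path is illustrative:

        # Catch-all: requests for unmatched hostnames land here instead of
        # falling through to whichever vhost nginx loaded first.
        server {
            listen *:80 default_server;
            server_name _;
            return 444;   # close the connection without a response
        }

        server {
            listen *:80;
            server_name files.lars-dev.de;
            root /var/www/webdata/files.lars-dev.de/htdocs;
            index index.html index.htm;
        }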


  • How to combine try_files and sendfile on Nginx?

    - by hcalves
    I need Nginx to serve a file relative to the document root if it exists, then fall back to an upstream server if it doesn't. This can be accomplished with something like:

        server {
            listen 80;
            server_name localhost;

            location / {
                root /var/www/nginx/;
                try_files $uri @my_upstream;
            }

            location @my_upstream {
                internal;
                proxy_pass http://127.0.0.1:8000;
            }
        }

    Fair enough. The problem is, my upstream is not serving the contents of the URI directly, but instead returning X-Accel-Redirect with a location relative to the document root (it generates this file on-the-fly):

        % curl -I http://127.0.0.1:8000/animals/kitten.jpg__100x100__crop.jpg
        HTTP/1.0 200 OK
        Date: Mon, 26 Nov 2012 20:58:25 GMT
        Server: WSGIServer/0.1 Python/2.7.2
        X-Accel-Redirect: animals/kitten.jpg__100x100__crop.jpg
        Content-Type: text/html; charset=utf-8

    Apparently, this should work. The problem though is that Nginx tries to serve this file from some internal default document root instead of using the one specified in the location block:

        2012/11/26 18:44:55 [error] 824#0: *54 open() "/usr/local/Cellar/nginx/1.2.4/htmlanimals/kitten.jpg__100x100__crop.jpg" failed (2: No such file or directory), client: 127.0.0.1, server: localhost, request: "GET /animals/kitten.jpg__100x100__crop.jpg HTTP/1.1", upstream: "http://127.0.0.1:8000/animals/kitten.jpg__100x100__crop.jpg", host: "127.0.0.1:80"

    How do I force Nginx to serve the file relative to the right document root? According to the XSendfile documentation the returned path should be relative, so my upstream is doing the right thing.
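    A hint is visible in the error itself: "htmlanimals/..." is nginx's compiled-in default root ("html") concatenated with the header value, because the X-Accel-Redirect value is processed as a fresh internal URI, and without a leading slash it never matches the location that sets the intended root. A sketch of one common arrangement, with an illustrative internal prefix:

        # Internal-only location that maps redirected URIs onto the real root:
        location /protected/ {
            internal;
            alias /var/www/nginx/;
        }

        # The upstream would then send:
        #   X-Accel-Redirect: /protected/animals/kitten.jpg__100x100__crop.jpg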


  • HTTP/1.1 Status Codes 400 and 417, cannot choose which

    - by TheDeadLike
    I have been referred here in the hope of better help. I've got a processing file which handles the user-sent data; before that, however, it compares the input from the client to the expected values, to ensure no client-side data change. I can say I don't know a lot about HTTP status codes, but I have done some research to choose the best one for handling unexpected input. So I came up with:

        400 Bad Request: The request cannot be fulfilled due to bad syntax
        417 Expectation Failed: The server cannot meet the requirements of the Expect request-header field

    Now, I cannot really be sure which one to use. I have seen 400 Bad Request used a lot; however, what I get from its description is that the error is about a malformed request rather than an illegal input value. On the other side, 417 Expectation Failed seems to just fit my use, but I have never seen or experimented with this status before. I need your experience and opinions, thanks a lot! For full details, with form/process page drafts and my experiments, follow this link.
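    For reference, 417 is tied specifically to the Expect request header (e.g. "Expect: 100-continue"), so 400 is the conventional choice for tampered or out-of-range form input. A minimal PHP sketch, with illustrative field names:

        // Reject the request when submitted data fails server-side validation.
        if ($_POST['quantity'] !== $expectedQuantity) {
            header('HTTP/1.1 400 Bad Request');
            exit('Invalid form data.');
        }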


  • Getting Classic ASP to work in .js files under IIS 7

    - by Abdullah Ahmed
    I am moving a client's classic ASP web app to a new IIS 7 based server. The site contains some .js files which hold JavaScript but also classic ASP in <% %> tags, containing a bunch of conditional statements designed to spit out pieces of JavaScript based on session state variables. Here's a brief example of what such a file could look like:

        var arrHOFFSET = -1;
        var arrLeft = "<";
        var arrRight = ">";

        <% If ((Session("dashInv") = "True") And ((Session("systemLevelStaff") = "4") Or (Session("systemLevelCompany") = "4"))) Then %>
            addMainItem("/MgmtTools/WelcomeInventory.asp?wherefrom=salesMan","",81,"center","","",0,0,"","","","","");
        <% Else %>
            <% If (Session("dashInv") = "False") And ((Session("systemLevelStaff") = "4") Or (Session("systemLevelCompany") = "4")) Then %>
            <% Else %>
                addMainItem("/calendar/welcome.asp","",81,"center","","",0,0,"","","","","");
            <% End If %>
        <% End If %>

        defineSubmenuProperties(135,"center","center",-3,0,"","","","","","","");

    Currently this file (named custom.js, for example) starts throwing JS errors, because the server doesn't recognize the ASP code in it and therefore does not parse it. I know I need to somehow specify that a .js file should also be treated like an .asp file and run through the ASP parser; however, I am not sure how to go about doing this. Here is what I've tried so far. Under the server node in IIS, under Handler Mappings, I created a new script map with the following settings:

        Request Path: *.js
        Executable:   C:\Windows\System32\inetsrv\asp.dll
        Name:         ASPClassicInJSFiles
        Mapping:      Invoke handler only if request is mapped to: File
        Verbs:        All verbs
        Access:       Script

    I also created a similar handler under the site node itself. Under MIME Types, .js is defined as application/x-javascript. None of these work. If I simply rename the file to have a .asp extension then things work; however, this app is poorly coded and has literally hundreds of files with the .js files included in them under various names and locations, so rename, search and replace is the last option I have.
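    For comparison, here is the same script map expressed directly in the site's web.config; a sketch that assumes the classic ASP role service is installed, with attribute values mirroring the GUI settings above:

        <!-- Maps *.js through asp.dll via the ISAPI module, file requests only. -->
        <configuration>
          <system.webServer>
            <handlers>
              <add name="ASPClassicInJSFiles" path="*.js" verb="*"
                   modules="IsapiModule"
                   scriptProcessor="%windir%\system32\inetsrv\asp.dll"
                   resourceType="File" />
            </handlers>
          </system.webServer>
        </configuration>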


  • What compatibility trade-offs do we need to make in order to use a hardened SSL config for Nginx?

    - by nathan.f77
    I found some hardened SSL settings in github.com/ioerror/duraconf. Here is the header from the config:

        This is an example of a high security, somewhat compatible SSLv3 and
        TLSv1 enabled HTTPS proxy server. The server only allows modes that
        provide perfect forward secrecy; no other modes are offered. Anonymous
        cipher modes are disabled. This configuration does not include the
        HSTS header to ensure that users do not accidentally connect to an
        insecure HTTP service after their first visit.

    It only supports strong ciphers in PFS mode:

        ssl_prefer_server_ciphers on;
        ssl_session_cache shared:SSL:10m;
        ssl_session_timeout 10m;

        # Only strong ciphers in PFS mode
        ssl_ciphers ECDHE-RSA-AES256-SHA:DHE-RSA-AES256-SHA:DHE-DSS-AES256-SHA:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA;
        ssl_protocols SSLv3 TLSv1;

    If we were to use these settings on our website, what does "somewhat compatible" mean? For example, would IE6 still be able to connect?
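    One way to answer the compatibility question empirically is to probe the server with specific protocol/cipher combinations; a sketch with an illustrative hostname:

        # Does the server accept a given PFS cipher over TLSv1?
        openssl s_client -connect example.com:443 -tls1 -cipher DHE-RSA-AES128-SHA
        # Clients that can only offer non-PFS suites (e.g. 3DES) would be
        # refused by the ssl_ciphers list above:
        openssl s_client -connect example.com:443 -cipher DES-CBC3-SHA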


  • Emails from web site sometimes blank or gibberish

    - by John Gardeniers
    Our company has one web site with an online store based on osCommerce. The system sends emails for various reasons, such as password changes, order confirmations, etc., using PHP's mail() function. We occasionally have customers report that the email they received is either blank (when the email is in plain text format) or gibberish (when the email is in HTML format). In the latter case it's really just HTML being displayed as raw text, but of course the customers can't read it; the first opening tag's <, and sometimes a few more characters, have gone missing. In an attempt to determine whether this was happening only for certain customers or email systems, I configured the web site to send a CC of each message to a service account at my end. Those CC'd messages always arrive intact and display correctly in Outlook. For what it's worth, it seems to happen a little more frequently to Hotmail users but is certainly not limited to them. As the web site is on a shared (Debian) host there's precious little I can do about debugging things from that end, although if I made the right request I feel the hosting company staff would help me, even though they have limited resources to spend on such matters. Any suggestions on what else I might do to try to determine just why those emails are not being received correctly by some customers, yet a CC copy arrives just fine?
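    One variable worth pinning down is whether the messages declare their format explicitly; bare 8-bit HTML bodies leave receiving systems to guess at the content type and encoding, and intermediate scanners are likelier to mangle them. A sketch of fully declared headers with mail(); the variables are illustrative:

        // Declare MIME version, content type and a 7-bit-safe transfer
        // encoding so nothing downstream has to re-encode the body.
        // (quoted_printable_encode() requires PHP 5.3+.)
        $headers  = "MIME-Version: 1.0\r\n";
        $headers .= "Content-Type: text/html; charset=UTF-8\r\n";
        $headers .= "Content-Transfer-Encoding: quoted-printable\r\n";
        mail($to, $subject, quoted_printable_encode($body), $headers);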


  • How to use WebDAV and UserDir at the same time in the same section?

    - by Louis
    Dear community, I would like to mount https://myserver/~user_account through WebDAV, but not https://myserver/. What I am doing now is:

        <IfModule mod_userdir.c>
            UserDir public_html
            UserDir disabled root

            <Directory /banonymous/data/home/*/public_html>
                DAV On
                AllowOverride FileInfo Limit AuthConfig
                Options MultiViews SymLinksIfOwnerMatch IncludesNoExec

                <Limit GET POST OPTIONS>
                    Order allow,deny
                    Allow from all
                </Limit>

                <LimitExcept GET POST OPTIONS>
                    Order deny,allow
                    Deny from all
                </LimitExcept>
            </Directory>
        </IfModule>

    And I am setting the authentication in the .htaccess of each user:

        AuthType Basic
        AuthName "Password Required"
        AuthUserFile /etc/apache2/users/htpasswd
        Require user geeky

    It does not work. Can someone tell me if this is possible, and if so, how to do it? My dream would have been to put the Dav On in the .htaccess.
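    On the last point: the Dav directive is only valid in the server config, virtual hosts, and <Directory> sections, not in .htaccess, so the authentication generally has to move up rather than DAV moving down. A sketch of keeping everything in one block; the path is illustrative:

        <Directory /banonymous/data/home/*/public_html>
            DAV On
            AuthType Basic
            AuthName "Password Required"
            AuthUserFile /etc/apache2/users/htpasswd

            # Leave plain reads open; require a login for DAV methods:
            <LimitExcept GET POST OPTIONS>
                Require valid-user
            </LimitExcept>
        </Directory>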


  • MySQL not loading correctly

    - by mcondiff
    PHP 5.3.2
    Apache 2.2.15
    MySQL 5.1.x
    Windows XP SP3

    I have now configured everything correctly, but get a timeout when trying to connect to MySQL via PHP. So frustrated. I don't get an error message; the script just times out. I have made sure I have the correct paths:

        Fatal error: Maximum execution time of 60 seconds exceeded

    Any idea why this might be happening? I do a php -v from the command line and everything is normal, no errors. I upgraded PHP from 5.2.6 to 5.3.2; are there known problems or bugs with this? I am essentially using my previous php.ini while editing paths. I am lost. Help! If you need anything from phpinfo() or httpd.conf or php.ini, let me know.
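    A quick thing to rule out on this stack: with PHP 5.3 on Windows, "localhost" can resolve to the IPv6 loopback while MySQL listens only on IPv4, and the connect attempt then hangs until the script's execution limit is hit. Connecting to 127.0.0.1 directly is a cheap test; a sketch with illustrative credentials:

        // If this returns promptly while 'localhost' hangs, it's a name
        // resolution issue rather than a MySQL problem.
        $link = mysql_connect('127.0.0.1', 'dbuser', 'dbpass');
        if (!$link) {
            die('Connect failed: ' . mysql_error());
        }
        echo 'Connected.';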


  • maillog "No route to host" error

    - by Sherwood Hu
    I have a CentOS server. It has sendmail installed, but it is not used as a mail server. I forwarded the root email to another email address. However, I keep getting errors in maillog:

        Dec 6 08:49:16 server1 sm-msp-queue[16191]: qB6601et005433: to=root, ctladdr=root (0/0), delay=08:49:15, xdelay=00:00:00, mailer=relay, pri=883224, relay=[127.0.0.1], dsn=4.0.0, stat=Deferred: [127.0.0.1]: No route to host
        Dec 6 08:49:16 server1 sendmail[16190]: qB39nDfQ014062: to=<[email protected]>, delay=3+05:00:02, xdelay=00:00:00, mailer=esmtp, pri=6965048, relay=subdomain.example.com., dsn=4.0.0, stat=Deferred: subdomain.example.com.: No route to host
        Dec 6 08:49:16 server1 sendmail[16190]: qB39nDfR014062: to=<[email protected]>, delay=3+05:00:02, xdelay=00:00:00, mailer=esmtp, pri=7004959, relay=subdomain.example.com., dsn=4.0.0, stat=Deferred: subdomain.example.com.: No route to host

    At the forwarded email address, I received the notification "it can't deliver email to [email protected]". subdomain.example.com does not have an MX record, and I do not want to add one. Is there any configuration I can change to prevent this error? I want all emails to the root to be forwarded to the forwarding address.
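    Once the forwarding side is fixed, the already-deferred messages keep retrying from the queue, so it helps to inspect and flush it; a sketch of the usual sequence on a sendmail host:

        sendmail -bp                      # show what's still queued (same as mailq)
        ls /var/spool/mqueue /var/spool/clientmqueue
        # After removing stale qf*/df* pairs for dead recipients:
        sendmail -q -v                    # force a verbose delivery attempt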


  • How to Remove a VM From Hyper-V Without Deleting the Configuration File?

    - by Steven Murawski
    I'm in the process of moving a number of virtual machines that are homed on shared storage (a file share, though shared cluster disk would work as well) to a new VM host with access to the same shared storage. The new host is a different build version (moving from Windows Server 2012 Beta to Windows Server 2012 RC - though this same process could be used with migrations of Windows Server 2008/2008 R2 to Windows Server 2012 as well), so I cannot migrate the machine with inbox tooling. I need to remove the VM from management of the source Hyper-V host in order to import the VM to the new Hyper-V host. I want to retain the configuration file, so I can import the VM as it stands and not need to reconfigure it. The VHD files are rather large and they are staying on the same file share, so I'd rather not duplicate them during the move process.
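    A sketch of the sequence with the Server 2012 Hyper-V PowerShell cmdlets, with illustrative paths; the key point is that Remove-VM deletes the VM's configuration file (but not the virtual hard disks), so the configuration gets copied out first:

        # On the source host: preserve the configuration, then unregister.
        Copy-Item '\\fileshare\VMs\MyVM\Virtual Machines' '\\fileshare\VMs\MyVM-config' -Recurse
        Remove-VM -Name 'MyVM' -Force      # VHD/VHDX files are left in place

        # On the destination host: register the preserved configuration in place
        # (the <GUID>.xml filename is whatever the copied config is actually named):
        Import-VM -Path '\\fileshare\VMs\MyVM-config\<GUID>.xml' -Register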


  • Apache subdomain not working

    - by tandu
    I'm running Apache on my local machine and I'm trying to create a subdomain, but it's not working. Here is what I have (stripped down):

        <VirtualHost *:80>
            DocumentRoot /var/www/one
            ServerName one.localhost
        </VirtualHost>

        <VirtualHost *:80>
            DocumentRoot /var/www/two
            ServerName two.localhost
        </VirtualHost>

    I recently added one. The two entry has been around for a while, and it still works fine (displays the webpage when I go to two.localhost). In fact, I copied the entire two.localhost entry and simply changed two to one, but it's not working. I have tried each of the following:

        apachectl -k graceful
        apachectl -k restart
        /etc/init.d/apache2 restart
        /etc/init.d/apache2 stop && !#:0 start

    Apache will complain if /var/www/one does not exist, so I know it's doing something, but when I visit one.localhost in my browser, the browser complains that nothing is there. I put an index.html file there and also tried going to one.localhost/index.html directly, and the browser still won't find it. This is very perplexing, since the entry I copied from two.localhost is exactly the same. Not only that, but if something were wrong I would expect to get a 500 rather than the browser not being able to find anything. The error_log also has nothing extra.
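    "The browser complains that nothing is there" usually points at name resolution rather than Apache: one.localhost has to resolve on the client just like two.localhost does. A sketch of the hosts-file check, with an illustrative entry:

        # /etc/hosts -- each *.localhost name needs an explicit mapping:
        127.0.0.1   localhost two.localhost one.localhost

        # Verify resolution and the vhost response independently of the browser:
        ping -c1 one.localhost
        curl -H 'Host: one.localhost' http://127.0.0.1/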


  • Disabling the Squid Error pages

    - by Nicholas Smith
    I've just started looking at using Squid for a project and can't seem to see an easy way of disabling the Squid error pages (e.g. "Name Error: The domain name does not exist"). We use a custom browser which handles that scenario in our own way, so the Squid error pages override our custom logic. Is it possible to set them to 'off'? I've been through the .conf and I've found where the error pages are stored, but no real option to disable them.
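    As far as standard squid.conf goes, there is no single "off" switch for these pages, but the templates themselves are pluggable, which gives one workable route: point Squid at a directory of stripped-down templates the custom browser can recognize. A sketch, with an illustrative path:

        # squid.conf -- serve custom (e.g. near-empty, machine-detectable)
        # error templates instead of the stock HTML pages:
        error_directory /etc/squid/errors/custom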


  • How can I explain to dspam that the user "brandon" is the same as "brandon@mydomain"

    - by Brandon Craig Rhodes
    I am using dspam for spam filtering by running the "dspamd" daemon under Ubuntu 9.10 and then setting up a Postfix rule that says:

        smtpd_recipient_restrictions = ...
            check_client_access pcre:/etc/postfix/dspam_everything ...

    where that PCRE map looks like this:

        /./   FILTER lmtp:[127.0.0.1]:11124

    This works well, and means that all users on my system get all of their email, whether "dspam" thinks it is innocent or not, and have the option of filtering on its decisions or ignoring them. The problem comes when I want to train dspam using my email archives. After reading about the "dspam" command, I tried this on the files in my Inbox and spam boxes (which date from when I was using another filtering solution):

        for file in Mail/Inbox/*; do cat $file | dspam --class=innocent --source=corpus; done
        for file in Mail/spam/*;  do cat $file | dspam --class=spam --source=corpus; done

    The symptom I noticed after doing all of this was that dspam was horrible at classifying spam — it couldn't find any! The problem, when I tracked it down, was that I was training the user "brandon" with the above commands, but the incoming email was instead compared against the username "brandon@mydomain", so it was running against a completely empty training database! So, what can I do to make the above commands actually train my fully-qualified email address rather than my bare username? I would like to avoid having to run "dspam" as root with a "--user" option. I would have expected that the "dspam" configuration files would have had an "append_domain" attribute or something with which to decorate local usernames with an appropriate email domain, but I can't find any such thing. When I used to use the Berkeley DB backend to "dspam", I solved this problem by creating a symlink from one of the databases to the other. :-) But that solution eventually died because the BDB backend is not thread-safe, so now I have moved to the PostgreSQL back-end and need a way to solve the problem there. And, no, the table where it keeps usernames has a UNIQUE constraint that prevents me from listing both usernames as mapping to the same ID. :-)
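    For what it's worth, dspam.conf does have a trust mechanism that may sidestep the root requirement: accounts listed with a "Trust" line may pass --user themselves. A sketch under that assumption, training against the qualified name that incoming mail is actually classified under:

        # dspam.conf (assumes the training account is trusted):
        #   Trust brandon
        for file in Mail/spam/*; do
            dspam --user brandon@mydomain --class=spam --source=corpus < "$file"
        done
        for file in Mail/Inbox/*; do
            dspam --user brandon@mydomain --class=innocent --source=corpus < "$file"
        done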


  • Linux TC / Policy Routing tools

    - by Zoredache
    In addition to a really good firewall, Linux has built-in advanced routing and traffic shaping (lartc). There are many applications (firehol, firestarter, etc.) that make the creation of iptables firewalls easier; what similar tools exist to make working with policy routing and traffic control easy?
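    For a sense of what such tools would wrap, here is a sketch of the raw commands from the lartc toolbox; interface, rates and addresses are illustrative:

        # Traffic shaping: cap egress on eth0 with a token bucket filter.
        tc qdisc add dev eth0 root tbf rate 1mbit burst 32kbit latency 400ms

        # Policy routing: send traffic from one source range via table 100.
        ip rule add from 10.0.0.0/24 table 100
        ip route add default via 192.168.1.254 table 100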


  • Windows Server 2003 DHCP not handing out IPs

    - by SnOrfus
    I'm trying to set up a home server (to tinker with) as a domain controller. I've set up the domain, and I've installed DHCP and set up a scope without any exclusions (with the default range of 192.168.0.1-254). My client machine is a Windows 7 (RC) machine and it has a connection, but it can't get an IP address. Even if I set the IP to a static 192.168.0.2 there is still no connectivity. I can ping it from the server, but pinging the server from the client just times out. The only thing between the server and the client is a 24-port switch (D-Link DES-1024D).

    Edit: OK, it turned out that the interfaces were set up backwards in the NAT settings (the internal NIC connection was set to public and the external NIC connection was set to private). I changed this and all was OK... sort of. The problem now: if I set a static IP on the client (where I am typing this from) all is fine, BUT when I set it to get an IP from DHCP, I get a correct IP from the server (192.168.0.2) but there is no internet on the client. I can still ping the server fine from the client, which makes sense, because I was able to get an IP from it.

    Edit: I ended up just removing the Routing and DHCP server roles and going with ICS for the time being, until I get my hands on some better learning tools.
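    The "static works, DHCP doesn't" symptom in the first edit is classically a scope that hands out addresses but no gateway or DNS (a static config carries those manually). A sketch of setting the standard scope options from the server's command line; the values are illustrative:

        rem Option 003 = default gateway, option 006 = DNS server:
        netsh dhcp server scope 192.168.0.0 set optionvalue 003 IPADDRESS 192.168.0.1
        netsh dhcp server scope 192.168.0.0 set optionvalue 006 IPADDRESS 192.168.0.1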


  • Where are ethernet errors logged?

    - by Matt
    Munin is showing me a graph with a spike in eth0 errors. During that spike, I was unable to access my server through the eth0 port (I could access it through my IPMI port). I'm trying to figure out what happened, but I can't seem to locate any log files for eth0. I don't see anything out of the ordinary in /var/log/(kern|syslog|messages), and I don't see a log file specifically for eth0. Are there logs for eth0, and if so, where can I find them? I am running Ubuntu 10.04 LTS.
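    Interface-level error counters generally live in the driver rather than in a log file, so they are queried rather than tailed. A sketch of the usual places to look; the interface name is illustrative:

        # Cumulative error counters since the interface came up:
        ethtool -S eth0 | grep -i -E 'err|drop'
        cat /proc/net/dev            # per-interface errs/drop columns
        # Driver and link-state messages do land in the kernel ring buffer:
        dmesg | grep -i eth0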


  • MTU, DSL router and stalling TCP

    - by user38843
    I am seeing a stalling TCP connection problem. It arises when I try to scp stuff from a remote system to my home network. My home network is connected to the internet via PPPoE (ADSL+), and everything works perfectly when working from within my home network. The ADSL router has its MTU set to 1492, but with that setting the scp from the remote system does not work: it stalls. When I change the MTU on my router to 1500, that scp works perfectly, but internet access from my home network is very slow to most www sites, even local ones. Just wondering where the problem lies. Is my ISP blocking ICMP, etc.? Thanks!
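    The symptoms described (large inbound transfers stall at MTU 1492, everything connects at 1500 but browsing breaks) are the classic signature of a path-MTU-discovery black hole, where ICMP "fragmentation needed" messages are dropped somewhere on the path. The standard workaround is clamping the TCP MSS at the PPPoE gateway; a sketch for a Linux-based router:

        # Rewrite the MSS on forwarded SYNs to fit the PPPoE path MTU:
        iptables -t mangle -A FORWARD -p tcp --tcp-flags SYN,RST SYN \
                 -j TCPMSS --clamp-mss-to-pmtu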


  • Can't access a remote server due to a mistaken firewall rule

    - by LMIT
    I need help with a silly mistake of mine! For a long time I have had a dedicated server hosted by register.it. I usually access this server (Windows Server 2008) remotely via Terminal Server. Today I wanted to block one site that continually sends requests to my server, so I was adding a new rule to the firewall (the native firewall on Windows Server 2008), as I have done many times. But this time, probably because my brain was asleep, I added a general rule that blocks everything! So I can't access the server anymore, and no users can browse the sites; nothing is working, because this rule blocks everything. I know it is a silly mistake, no need to tell me :) so please, what can I do? The only thing my provider lets me do is reboot the server from their control panel, but this doesn't help me in any way, because the firewall blocks me again. I have an administrator username and password, so what can I really do? Is there some trick, some technique, some expert guru that can help me in this very bad situation?

    Update: I followed Tony's suggestion and ran nmap to check if any ports are open, but it looks like they are all closed:

        Starting Nmap 6.00 ( http://nmap.org ) at 2012-05-29 22:32 W. Europe Daylight Time
        NSE: Loaded 93 scripts for scanning.
        NSE: Script Pre-scanning.
        Initiating Parallel DNS resolution of 1 host. at 22:32
        Completed Parallel DNS resolution of 1 host. at 22:33, 13.00s elapsed
        Initiating SYN Stealth Scan at 22:33
        Scanning xxx.xxx.xxx.xxx [1000 ports]
        SYN Stealth Scan Timing: About 29.00% done; ETC: 22:34 (0:01:16 remaining)
        SYN Stealth Scan Timing: About 58.00% done; ETC: 22:34 (0:00:44 remaining)
        Completed SYN Stealth Scan at 22:34, 104.39s elapsed (1000 total ports)
        Initiating Service scan at 22:34
        Initiating OS detection (try #1) against xxx.xxx.xxx.xxx
        Retrying OS detection (try #2) against xxx.xxx.xxx.xxx
        Initiating Traceroute at 22:34
        Completed Traceroute at 22:35, 6.27s elapsed
        Initiating Parallel DNS resolution of 11 hosts. at 22:35
        Completed Parallel DNS resolution of 11 hosts. at 22:35, 13.00s elapsed
        NSE: Script scanning xxx.xxx.xxx.xxx.
        Initiating NSE at 22:35
        Completed NSE at 22:35, 0.00s elapsed
        Nmap scan report for xxx.xxx.xxx.xxx
        Host is up.
        All 1000 scanned ports on xxx.xxx.xxx.xxx are filtered
        Too many fingerprints match this host to give specific OS details

        TRACEROUTE (using proto 1/icmp)
        HOP RTT ADDRESS
        1 ... ... ... 13 ... 30

        NSE: Script Post-scanning.
        Read data files from: D:\Program Files\Nmap
        OS and Service detection performed. Please report any incorrect results at http://nmap.org/submit/ .
        Nmap done: 1 IP address (1 host up) scanned in 145.08 seconds
        Raw packets sent: 2116 (96.576KB) | Rcvd: 61 (4.082KB)

    Question: can the provider access the server locally with the username and password?
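    If the provider can get any local or console session on the box (KVM, recovery console, or logging in at the machine itself), undoing this is a one-liner; a sketch with an illustrative rule name:

        rem Delete the offending rule by the name it was created with:
        netsh advfirewall firewall delete rule name="Block site X"
        rem Or, as a blunt last resort, turn the firewall off on all profiles:
        netsh advfirewall set allprofiles state off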


  • Failover Issuer CAs without Clustering

    - by James Santiago
    I am attempting to set up a Certificate Authority with some failover capabilities for the issuing CAs. I have an offline root CA and am attempting to set up two subordinate CAs on our domain which will handle issuing certificates. I'm trying to determine the architecture needed for these two CAs to allow one to go down and the other to take over, without the use of failover clustering, as the two are in different geographic locales. Are there documents regarding this setup?


  • Remove Applications page on MDT 2012 Deployment Summary

    - by Ben M.
    I am using SCCM 2012 and MDT 2012 for OSD and app deployment. Because of the nature of how some of the apps need to be detected (which changes as apps get updated), they show up with a warning sign on the Applications tab of the Deployment Summary. There is pretty much no way to get them all green-checked (not realistically in our setup, anyway), so I would like to get rid of that page so that I don't have to hear about it from my teams as they deploy. Any help would be appreciated.

    MDT Summary screen: Applications Installed (the tab I want to remove)
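    One avenue worth testing (depending on whether the summary here comes from the standard MDT wizard pages in this SCCM-integrated setup) is suppressing the summary via the usual MDT properties; a sketch for CustomSettings.ini:

        [Default]
        ; Skip the end-of-deployment summary wizard page entirely:
        SkipFinalSummary=YES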

