Search Results

Search found 26912 results on 1077 pages for 'default programs'.


  • Local DNS and Apache Server Configuration Interfering - example.com / www.example.com

    - by nicorellius
    I have a domain for my site: example.com. I am also running local DNS with these lines:

        www IN CNAME server.<host_provider>.com.
        dev IN CNAME server.<host_provider>.com.

    So www.example.com and dev.example.com go to the production and development sites, respectively, which are hosted by a hosting company. In my Apache configuration for the main site, I'm running a rewrite rule like this:

        RewriteEngine ON
        RewriteCond %{HTTP_HOST} ^example\.com$|!dev\.example\.com$ [NC]
        RewriteRule ^(.*)$ http://www\.%{HTTP_HOST}/$1 [R=302,L,NE]

    This rule seems to work: when you are off the network and go to example.com in the browser, you get redirected to www.example.com. The problem is when I'm on the network and I go to example.com, I get an error page saying the page can't be found. No server errors; just a page-can't-be-found, as if the local DNS is causing it to stop looking at that point. I'm also using Nettica for DNS service and have this A record in place:

        example.com    Host (A)    Default    xxx.xx.xxx.xx

    This handles the external DNS, but my problem seems to be related to my internal DNS. For example, inside my network, I can go to servers on the network with addresses like this: server.example.com, server1.example.com, server2.example.com. These are configured in my local DNS. I'm just not sure how to get past the "empty" subdomain and go to example.com. Adding to this since it might not be clear: if I'm outside the example.com network, on another network like example123.com, then when I go to example.com I'm redirected to www.example.com as expected, i.e. the Apache rewrite rule is working. Thanks in advance for any information.
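
    A likely fix, sketched as an assumption since the internal zone file isn't shown: the internal zone has records for www, dev and the individual servers, but nothing at the zone apex, so plain example.com never resolves inside the network. Adding an apex record pointing at the same address the Nettica A record uses should let the rewrite take over:

        ; hypothetical internal zone snippet for example.com
        @    IN A     xxx.xx.xxx.xx              ; apex record so plain example.com resolves
        www  IN CNAME server.<host_provider>.com.
        dev  IN CNAME server.<host_provider>.com.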

  • Web Interfaces not opening even after Port Forwarding is said to be working!

    - by Ahmad
    I'm encountering this strange problem which has baffled me to the ground, and which I haven't encountered even after years of doing port forwarding! I am hoping somebody here can help me solve this mystery. :) My network configuration is as follows: I have a DSL modem (custom made and branded by my ISP) which is receiving a DSL stream. It has an external IP which is visible to the world, say, 11.22.33.44. This modem has DHCP enabled and an internal IP for itself, which is 192.168.1.1. It is connected to 2 laptops via Ethernet cables: Laptop 1 has IP 192.168.1.2, and Laptop 2 has IP 192.168.1.3.

    On Laptop 1, two applications are running, jDownloader and Media Player Classic, which have their web interfaces on ports 8765 and 13579, respectively. I can access both of these web interfaces from Laptop 2 by opening these addresses: 192.168.1.2:8765 and 192.168.1.2:13579. Both of their web interfaces open up, meaning the web interfaces are working fine.

    Moving on, I now want to access these web interfaces from outside my network as well, and so I've configured port forwarding in my PTCL modem to forward all traffic on ports between 8000 and 14000 (both TCP and UDP) to IP 192.168.1.2. I have verified that port forwarding is working by testing it using PortForward.com's port checker tool, and this website too: http://www.yougetsignal.com/tools/open-ports/ - when I use the website, if I'm running the applications on Laptop 1, the website reports that the port is open; if I then close the application, the website reports the port is closed. This makes sense, as nothing is listening on my machine in the latter case. Also, if I disable port forwarding in my modem, again, the website reports the port is closed. So the website's results seem to be okay. The same applies when I used PortForward.com's port checker tool. So again, everything is okay so far.

    Now, here comes the problem! Despite the above tools reporting that port forwarding is working, I am unable to open the web interfaces from outside my network. So for example, if I try to browse 11.22.33.44:8765 or 11.22.33.44:13579, nothing opens in my browser. But if I access these web servers locally from Laptop 2, by typing in 192.168.1.2:8765 or 192.168.1.2:13579, they open. So where is the problem here? The tools report unanimously that port forwarding is working, and yet I am unable to open the web interfaces from outside the network. Also note that I have disabled the firewall on my computer, and have also made sure that any option in the above programs (whose web interfaces I am trying to open) that says only local connections are to be accepted is disabled. So what's the problem? Any ideas?
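
    One diagnostic worth running, sketched under the assumption that the failing "outside" tests may actually have been run from inside the LAN: many consumer modems do not support NAT loopback (hairpinning), so the external IP only works from a genuinely external host even when forwarding itself is fine.

        # from a host outside the LAN (a VPS, or a phone on mobile data):
        curl -v --max-time 10 http://11.22.33.44:8765/
        # from inside the LAN this only succeeds if the modem supports NAT loopback:
        curl -v --max-time 10 http://11.22.33.44:13579/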

  • Apache + PHP via FastCGI

    - by Wilco
    I'm running into some problems while attempting to run PHP via FastCGI in Apache. I have the FastCGI module loaded, but get the following error when attempting to load a page:

        The requested URL /fastcgi/php54.fcgi/index.php was not found on this server.

    Somewhere, it seems that the script to be executed is appended to the executable without any spaces. Is this where the problem likely is? Below I've included snippets from my Apache configuration (hopefully this is enough):

        LoadModule fastcgi_module libexec/apache2/mod_fastcgi.so

        FastCgiIpcDir /var/run/fastcgi
        AddHandler fastcgi-script .fcgi
        FastCgiConfig -autoUpdate -singleThreshold 100 -killInterval 300
        AddType application/x-httpd-php .php

        ScriptAlias /fastcgi/ /Library/WebServer/FCGI-Executables/
        <Directory "/Library/WebServer/FCGI-Executables">
            Options +ExecCGI
            SetHandler fastcgi-script
            Order allow,deny
            Allow from all

        <VirtualHost *:80>
            ServerName www.somedomain.com
            ServerAdmin [email protected]
            DocumentRoot "/Web/www.somedomain.com"
            DirectoryIndex index.html index.php default.html
            CustomLog /var/log/apache2/access_log combinedvhost
            ErrorLog /var/log/apache2/error_log
            Action application/x-httpd-php /fastcgi/php54.fcgi
            <IfModule mod_ssl.c>
                SSLEngine Off
                SSLCipherSuite "ALL:!aNULL:!ADH:!eNULL:!LOW:!EXP:RC4+RSA:+HIGH:+MEDIUM"
                SSLProtocol -ALL +SSLv3 +TLSv1
                SSLProxyEngine On
                SSLProxyProtocol -ALL +SSLv3 +TLSv1
            </IfModule>
            <Directory "/Web/www.somedomain.com">
                Options All -Indexes +ExecCGI +Includes +MultiViews
                AllowOverride All
                <IfModule mod_dav.c>
                    DAV Off
                </IfModule>
                <IfDefine !WEBSERVICE_ON>
                    Deny from all
                    ErrorDocument 403 /customerror/websitesoff403.html
                </IfDefine>
            </Directory>
        </VirtualHost>

    ... and this is the executable:

        #!/bin/sh
        PHP_FCGI_CHILDREN=1
        PHP_FCGI_MAX_REQUESTS=5000
        export PHP_FCGI_CHILDREN
        export PHP_FCGI_MAX_REQUESTS
        exec /opt/local/bin/php-cgi54
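
    A first check, assuming the wrapper path above is accurate: the trailing-path form /fastcgi/php54.fcgi/index.php is exactly how mod_actions invokes the handler, so the 404 usually means the wrapper itself can't be found or executed, not that a space is missing.

        # verify the wrapper exists and is executable by the Apache user
        ls -l /Library/WebServer/FCGI-Executables/php54.fcgi
        chmod 755 /Library/WebServer/FCGI-Executables/php54.fcgi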

  • Exchange 2010 Hub cannot deliver to Exchange 2007 Hub - "451 5.7.3 Cannot achieve Exchange Server authentication"

    - by Graeme Donaldson
    We have an existing Exchange 2007 server in Site A (exch07). I've installed an Exchange 2010 server in Site B (exch10). Both servers have the CAS, Mailbox and Hub roles. Messages sent via SMTP on exch10 which are destined for mailboxes on exch07 are queued with the "Last Error" reported in Queue Viewer as:

        451 4.4.0 Primary target IP address responded with: "451 5.7.3 Cannot achieve Exchange Server authentication." Attempted failover to alternate host, but that did not succeed. Either there are no alternate hosts, or delivery failed to all alternate hosts.

    I've found that some people have resolved this by creating new Receive Connectors which are scoped specifically to apply to connections from the remote hub/s, but I have had no luck doing this. Specifically, I created new receive connectors on both servers with the following settings:

        Remote IP = IP/s of remote server
        Authentication = "Transport Layer Security (TLS)" and "Exchange Server authentication"
        Permission Groups = "Exchange servers" and "Legacy Exchange Servers"

    This made no difference; I see the same error message. What am I missing?

    Update: We noticed that the Application log had this error message from MSExchangeTransportService:

        Microsoft Exchange could not find a certificate that contains the domain name exch07.domain.local in the personal store on the local computer. Therefore, it is unable to support the STARTTLS SMTP verb for the connector exch10 with a FQDN parameter of exch07.domain.local. If the connector's FQDN is not specified, the computer's FQDN is used. Verify the connector configuration and the installed certificates to make sure that there is a certificate with a domain name for that FQDN. If this certificate exists, run Enable-ExchangeCertificate -Services SMTP to make sure that the Microsoft Exchange Transport service has access to the certificate key.

    It turns out that the default self-signed certificate was no longer enabled for the SMTP service for some reason. After enabling the self-signed certificate for SMTP, we no longer get the error in the event logs, but delivery is still failing with the same error message.

    Update 2: I put a mailbox on exch10 and attempted to deliver a message via SMTP on exch07, and I get the same error.
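
    One more thing worth checking, offered as a suggestion rather than a confirmed fix: Exchange Server authentication needs a certificate whose domains cover the FQDN each connector advertises, on both hubs, not just the one that logged the event. In the Exchange Management Shell:

        # list certificates and which services they are enabled for, on both servers
        Get-ExchangeCertificate | fl Thumbprint,Services,CertificateDomains,Subject
        # re-enable the matching certificate for SMTP (<thumbprint> is a placeholder)
        Enable-ExchangeCertificate -Thumbprint <thumbprint> -Services SMTP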

  • weird routes automatically being added to windows routing table

    - by simon
    On our Windows 2003 domain, with XP clients, we have started seeing routes appearing in the routing tables on both the servers and the clients. The route is a /32 for another computer on the domain. The route gets added when one Windows computer connects to another computer and needs to authenticate. For example, if computer A with IP 10.0.1.5/24 browses the C: drive of computer B with IP 10.0.2.5/24, a static route will get added on computer B like so:

        dest       netmask          gateway    interface
        10.0.1.5   255.255.255.255  10.0.2.1   10.0.2.5

    This also happens on Windows-authenticated SQL Server connections. It does not happen when computers A and B are on the same subnet. None of the servers have RIP or any other routing protocols enabled, and there are no batch files etc. setting routes automatically. There is another Windows domain that we manage with a near identical configuration that is not exhibiting this behaviour. The only difference with this domain is that it is not up to date with its patches. Is this meant to be happening? Has anyone else seen this? Why is it needed when I have perfectly good default gateways set on all the computers on the domain?!
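
    A hedged way to narrow this down: /32 host routes like these are often added by the TCP/IP stack itself in response to ICMP redirects from the gateway, and the timing merely coincides with the first (authenticated) connection. Checking that setting on one affected machine would confirm or rule it out:

        route print 10.0.1.5
        route delete 10.0.1.5
        rem if this value is 1 (the default), ICMP redirects create /32 host routes
        reg query HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters /v EnableICMPRedirect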

  • Upgrade an Ubuntu 8.04 installation with VMware Server 1.0.8 and lots of guest OSes to Something Else

    - by Glyph
    I have an Ubuntu 8.04 (Hardy Heron) host machine which is running a whole slew of virtual machines in VMware Server 1.0.8. Among other guest OSes, there is every release version of Ubuntu since 6.06, OpenSolaris 2009.06, and Windows XP. Right now I access these VMs from a variety of client OSes as well: Linux and Windows via the VMware Server console, and MacOS via X-forwarding the host machine's server console.

    I'd like to upgrade the host to Ubuntu 10.04 (Lucid Lynx), but from what I can tell, getting VMware Server 1.x to work on a more recent version of Linux is a real pain. While VMware Server 2.x is a bit easier, it's still not packaged as Debian packages, so installing security updates is a big chore. As long as I'm upgrading anyway, I'd like to move to a virtualization solution that will allow me to automate applying updates. The options that I'm aware of right now are KVM (managed via virt-manager) and VirtualBox (as managed by its own tools or via its own libvirt bindings), but I'm open to other suggestions. For each option, I'd like to know:

    - how do I convert my guest images to the new format (see the sketch below)?
    - am I going to have to re-activate my Windows guests (alternatively, "If the virtual hardware is different by default, can I avoid re-activation by changing some virtualization configuration to provide me with more similar virtual hardware")?
    - what are the management options like for each client OS (Mac, Linux, Windows)?

    Thanks.
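
    On the conversion point, a minimal sketch, assuming single-file (not split or snapshotted) VMDK disks; guest.vmdk is a placeholder name:

        # KVM/libvirt route: convert the VMware disk to qcow2
        qemu-img convert -f vmdk guest.vmdk -O qcow2 guest.qcow2
        # VirtualBox route: either attach the VMDK directly, or clone it to VDI
        VBoxManage clonehd guest.vmdk guest.vdi --format VDI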

  • Apache web logs "GET http/1.1" 200 error?

    - by songdogtech
    I've got some puzzling (at least to me) errors that show up in my webserver logs. What's puzzling is that they show my error page being referenced, but that error page doesn't show up as being displayed in hits on statcounter.com. Can someone tell me the difference in these two kinds of errors? Or is only one a "real" error? This is in WordPress, and I have my 404.php file in my theme redirecting to a static WordPress page at mydomain.com/error/

    These first two log entries show /error/, but a "hit" on my /error/ page doesn't show in my webstats. The page referenced - /2009/08/delete-preference-files/ - exists, and shows a "hit" at statcounter.

        92.17.232.242 - - [13/Mar/2010:01:45:43 -0800] "GET /error/ HTTP/1.1" 200 4012 "http://markratledge.com/2009/08/delete-preference-files/" "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.1.8) Gecko/20100202 Firefox/3.5.8 (.NET CLR 3.5.30729)" 732 4432
        92.17.232.242 - - [13/Mar/2010:01:47:11 -0800] "GET /error/ HTTP/1.1" 200 4012 "http://markratledge.com/wp-content/themes/default/style.css" "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.1.8) Gecko/20100202 Firefox/3.5.8 (.NET CLR 3.5.30729)" 861 4432

    These are two error entries that display my /error/ page, because /error/ gets a "hit" in statcounter. The page /cookies/ doesn't exist.

        72.174.16.202 - - [13/Mar/2010:10:53:37 -0800] "GET /cookies HTTP/1.1" 302 21 "-" "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.2) Gecko/20100115 Firefox/3.6 (.NET CLR 3.5.30729)" 394 642
        72.174.16.202 - - [13/Mar/2010:10:53:38 -0800] "GET /error/ HTTP/1.1" 200 4012 "-" "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.2) Gecko/20100115 Firefox/3.6 (.NET CLR 3.5.30729)" 393 4432
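
    A reading of the two pairs, offered as a hedge rather than a diagnosis: none of these are server errors; all four requests returned 200 or 302. StatCounter only records visitors whose browsers execute its JavaScript, so a client or bot that fetches /error/ without running scripts logs a 200 in Apache but no "hit". The two cases can be separated in the raw log:

        # requests that were redirected (302) somewhere, e.g. /cookies -> /error/
        awk '$9 == 302 {print $7}' access_log | sort | uniq -c | sort -rn
        # direct 200 responses for the error page
        grep -c '"GET /error/ HTTP/1.1" 200' access_log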

  • Update Grub on Squeeze - Kernel downgrade due to VMware Server

    - by vodoo_boot
    Hi! I happen to run into various problems regarding grub and kernels. I don't really care about the kernel internals. All I want is VMware Server on that dedicated root server.

    1.) What is a bzImage vs. vmlinuz?

        kaze:~# ls /boot/
        System.map-2.6.32-5-amd64  bzImage-2.6.33.2       config-2.6.33.2  initrd.img-2.6.32-5-amd64
        System.map-2.6.33.2        bzImage-2.6.35.6       config-2.6.35.6  vmlinuz-2.6.32-5-amd64
        System.map-2.6.35.6        config-2.6.32-5-amd64  grub

    I updated my menu.lst (grub2):

        timeout 5
        default 0
        fallback 1

        title 2.6.32.5
        kernel (hd0,1)/boot/vmlinuz-2.6.32-5-amd64 root=/dev/sda2 panic=60 noapic acpi=off

        title 2.6.35.6
        kernel (hd0,1)/boot//bzImage-2.6.35.6 root=/dev/sda2 panic=60 noapic acpi=off

        title 2.6.33.2
        kernel (hd0,1)/boot//bzImage-2.6.33.2 root=/dev/sda2 panic=60 noapic acpi=off

    That doesn't do well... I think the vmlinuz entry is missing its initrd or so. Dunno. In fact I don't care too much about kernel boot voodoo as long as it works. update-grub(2) does not work. Does anybody know what magical trick there is to get the 2.6.32-5 kernel booting?

    2.) I thought to follow the Debian wiki... I cannot get header files for the installed 2.6.35.6 or 2.6.33.2 kernels in the repositories. I cannot build foreign headers because they will not match the running kernel. So how does one deal with that situation? I'd prefer not to have to downgrade the kernel. Thanks for any answers!
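
    On question 1, a sketch of the likely fix: the packaged Debian kernel (vmlinuz-2.6.32-5-amd64) is built to boot with the initrd that sits next to it in /boot, and the entry above never loads it. In legacy menu.lst syntax that would be:

        title 2.6.32-5-amd64
        kernel (hd0,1)/boot/vmlinuz-2.6.32-5-amd64 root=/dev/sda2 panic=60 noapic acpi=off
        initrd (hd0,1)/boot/initrd.img-2.6.32-5-amd64

    The hand-built bzImage kernels presumably have their disk drivers compiled in, which would be why they boot without one.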

  • Xen Windows guest doesn't spawn a VNC display

    - by Henrik P. Hessel
    I'm using this HVM file to create a new guest:

        kernel = "/usr/lib/xen-3.2-1/boot/hvmloader"
        builder='hvm'
        memory = 4096
        # Should be at least 2KB per MB of domain memory, plus a few MB per vcpu.
        shadow_memory = 64
        name = "hessel-windows2008"
        vif = [ 'ip=188.40.xx.xx,mac=00:16:3E:C1:8F:CE' ]
        acpi = 1
        apic = 1
        disk = [ 'file:/home/xen/disks/hessel/win2008/win2008.img,hda,w',
                 'file:/home/xen/isopool/win2008_32.iso,hdc:cdrom,r' ]
        device_model = '/usr/lib/xen/bin/qemu-dm'
        #-----------------------------------------------------------------------------
        # boot on floppy (a), hard disk (c) or CD-ROM (d)
        # default: hard disk, cd-rom, floppy
        boot="dc"
        sdl=0
        vnc=1
        vncdisplay=1
        vnclisten="0.0.0.0"
        vncconsole=1
        vncpasswd='howtoforge'
        stdvga=0
        serial='pty'
        usbdevice='tablet'

    The guest is created without an error, but no VNC display is created. Any ideas how to fix that?

        prometheus:~# netstat -ant
        Active Internet connections (servers and established)
        Proto Recv-Q Send-Q Local Address           Foreign Address         State
        tcp        0      0 127.0.0.1:615           0.0.0.0:*               LISTEN
        tcp        0      0 0.0.0.0:111             0.0.0.0:*               LISTEN
        tcp        0      0 0.0.0.0:8080            0.0.0.0:*               LISTEN
        tcp        0      0 0.0.0.0:53              0.0.0.0:*               LISTEN
        tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN
        tcp        0      0 127.0.0.1:6010          0.0.0.0:*               LISTEN
        tcp        0    232 188.40.xx.xx:8080       195.36.75.26:54032      ESTABLISHED
        tcp        0      0 188.40.xx.xx:8080       195.36.75.26:53085      ESTABLISHED
        tcp6       0      0 :::8080                 :::*                    LISTEN
        tcp6       0      0 :::53                   :::*                    LISTEN
        tcp6       0      0 :::22                   :::*                    LISTEN
        tcp6       0      0 ::1:6010                :::*                    LISTEN
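
    Two hedged checks, assuming Xen 3.2's qemu-dm: with vncdisplay=1 something should be listening on TCP 5901 (VNC base port 5900 plus the display number), and qemu-dm writes a per-domain log that usually names the reason when it doesn't come up.

        netstat -lnt | grep 590
        tail -n 50 /var/log/xen/qemu-dm-hessel-windows2008.log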

  • Postfix: errors using Google Apps for SMTP

    - by Zed Said
    I am using Postfix and need to send the mail using the Google Apps SMTP server. I am getting errors after I thought I had set everything up correctly:

        May 11 09:50:57 zedsaid postfix/error[22214]: 00E009693FB: to=<[email protected]>, relay=none, delay=2466, delays=2462/3.4/0/0.06, dsn=4.7.0, status=deferred (delivery temporarily suspended: SASL authentication failed; cannot authenticate to server smtp.gmail.com[74.125.155.109]: no mechanism available)
        May 11 09:50:57 zedsaid postfix/error[22213]: 0ACB36D1B94: to=<[email protected]>, relay=none, delay=2486, delays=2482/3.4/0/0.06, dsn=4.7.0, status=deferred (delivery temporarily suspended: SASL authentication failed; cannot authenticate to server smtp.gmail.com[74.125.155.109]: no mechanism available)
        May 11 09:50:57 zedsaid postfix/error[22232]: 067379693D3: to=<[email protected]>, relay=none, delay=2421, delays=2417/3.4/0/0.06, dsn=4.7.0, status=deferred (delivery temporarily suspended: SASL authentication failed; cannot authenticate to server smtp.gmail.com[74.125.155.109]: no mechanism available)

    main.cf:

        # Debian specific: Specifying a file name will cause the first
        # line of that file to be used as the name. The Debian default
        # is /etc/mailname.
        #myorigin = /etc/mailname

        smtpd_banner = $myhostname ESMTP $mail_name (Debian/GNU)
        biff = no

        # appending .domain is the MUA's job.
        append_dot_mydomain = no

        # Uncomment the next line to generate "delayed mail" warnings
        #delay_warning_time = 4h

        readme_directory = no

        # TLS parameters
        #smtpd_tls_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem
        #smtpd_tls_key_file=/etc/ssl/private/ssl-cert-snakeoil.key
        smtpd_use_tls=yes
        smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
        smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache

        # See /usr/share/doc/postfix/TLS_README.gz in the postfix-doc package for
        # information on enabling SSL in the smtp client.
        myhostname = zedsaid.com
        alias_maps = hash:/etc/aliases
        alias_database = hash:/etc/aliases
        myorigin = /etc/mailname
        mydestination =
        #relayhost =
        mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
        mailbox_command = procmail -a "$EXTENSION"
        mailbox_size_limit = 0
        recipient_delimiter = +
        inet_interfaces = all
        delay_warning_time = 4h
        smtpd_recipient_limit = 16
        # how many error before back off.
        smtpd_soft_error_limit = 3
        # how many max errors before blocking it.
        smtpd_hard_error_limit = 12

        ## Gmail Relay
        relayhost = [smtp.gmail.com]:587
        smtp_use_tls = yes
        smtp_sasl_auth_enable = yes
        smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
        smtp_sasl_security_options = noanonymous
        smtp_sasl_tls_security_options = noanonymous
        smtp_sasl_mechanism_filter = login
        smtp_tls_eccert_file =
        smtp_tls_eckey_file =
        smtp_use_tls = yes
        smtp_enforce_tls = no
        smtp_tls_CAfile = /etc/postfix/cacert.pem
        smtpd_tls_received_header = yes
        tls_random_source = dev:/dev/urandom
        transport_maps = hash:/etc/postfix/transport

        debug_peer_list = smtp.gmail.com
        debug_peer_level = 3

    What am I doing wrong?
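
    A hedged pointer: "no mechanism available" from the Postfix SMTP client usually means the Cyrus SASL client plugins for LOGIN/PLAIN are not installed, so the smtp_sasl_mechanism_filter = login line leaves nothing to negotiate. On Debian/Ubuntu the sketch would be:

        sudo apt-get install libsasl2-modules
        sudo postfix reload
        # then watch whether authentication proceeds
        tail -f /var/log/mail.log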

  • Why apache throws 403 on index file after install?

    - by den-javamaniac
    Hi. I've just installed Apache and PHP from sources using the following commands:

        ./configure --prefix="/mnt/workspace/servers/web/apache-2.2.17" \
            --enable-info --enable-rewrite --enable-usertrack --enable-mime-magic

    for Apache, and

        ./configure --with-apxs2=/mnt/workspace/servers/web/apache-2.2.17/bin/apxs \
            --prefix=/mnt/workspace/servers/web/apache-2.2.17/php \
            --with-config-file-path=/mnt/workspace/servers/web/apache-2.2.17/php \
            --with-mysql=mysqlnd

    for PHP. After adjusting the configuration (httpd.conf) and starting Apache, it gives a 403 response to a http://localhost:8060/index.html request (presuming that 8060 is used). There are the following directory settings in httpd.conf:

        <Directory "/mnt/workspace/servers/web/apache-2.2.17/htdocs">
            ...
            Order allow,deny
            Allow from all
            ...
        </Directory>

        <IfModule dir_module>
            DirectoryIndex index.html index.php
        </IfModule>

    It should be noted that I've got Apache on a mounted partition (default auto mount configured while installing Ubuntu).

    Log files. Access log:

        ::1 - - [12/Feb/2011:17:48:30 +0200] "GET / HTTP/1.1" 403 202
        ::1 - - [12/Feb/2011:17:48:31 +0200] "GET /favicon.ico HTTP/1.1" 403 213
        ::1 - - [12/Feb/2011:17:48:48 +0200] "GET /index.html HTTP/1.1" 403 212
        ::1 - - [12/Feb/2011:17:48:48 +0200] "GET /favicon.ico HTTP/1.1" 403 213
        ::1 - - [12/Feb/2011:17:49:03 +0200] "GET /index.html HTTP/1.1" 403 212
        ::1 - - [12/Feb/2011:17:49:03 +0200] "GET /favicon.ico HTTP/1.1" 403 213

    Error log:

        [Sat Feb 12 18:59:13 2011] [notice] Apache/2.2.17 (Unix) PHP/5.3.5 configured -- resuming normal operations
        [Sat Feb 12 18:59:22 2011] [error] [client ::1] (13)Permission denied: access to / denied
        [Sat Feb 12 18:59:22 2011] [error] [client ::1] (13)Permission denied: access to /favicon.ico denied
        [Sat Feb 12 18:59:36 2011] [error] [client ::1] (13)Permission denied: access to /index.html denied
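
    "(13)Permission denied" comes from the operating system, not from the Order/Allow rules, so a hedged first step is confirming that the user Apache runs as can traverse every directory on the path down to htdocs, especially across the mount point:

        # show the permission bits of every path component
        namei -m /mnt/workspace/servers/web/apache-2.2.17/htdocs/index.html
        # look for restrictive mount options on the workspace partition
        mount | grep /mnt/workspace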

  • Can't connect to research.microsoft.com on home Qwest DSL connection

    - by rakingleaves
    I have a puzzling issue regarding accessing research.microsoft.com from my home Qwest DSL connection. By default, I frequently get timeouts when accessing research.microsoft.com from Firefox, Safari, or Chrome on my Mac. I also cannot access the site from Internet Explorer in a Windows VM. However, I am able to access the site through proxify.com, so I know the site is not down. Furthermore, I haven't noticed problems accessing other sites (in particular, www.microsoft.com works fine). Also, I can access research.microsoft.com when I'm connected to networks other than my home Qwest DSL connection.

    Together, the above make me suspect a problem with either my router (Airport Express) or, more likely, my ISP. Anyone have any thoughts on how I can narrow down the problem further? I could call my ISP and tell them the above, but my feeling is that probably won't get me very far. I can get by browsing research.microsoft.com through a proxy, but it would be nice to figure out what's going on here and fix the problem. Oh, the only relevant discussion I found via Google was here: http://forums.whirlpool.net.au/forum-replies-archive.cfm/1311734.html

    Update: Thanks to those who have tried to help! I found one other thing while Googling that may be vaguely relevant: http://thedaneshproject.com/posts/supportmicrosoftcom-not-working-behind-squid/ - disabling the Accept-Encoding headers in Firefox actually didn't make a difference for me. I just thought the above might spark some other ideas about how mishandling of HTTP headers somewhere might be causing this problem. Thanks again!

    Another update: In case anyone is still thinking about this, I've found that I can't surf research.microsoft.com using the links text-based browser, but I can reliably download individual files with wget. Maybe that helps?
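
    One avenue to rule out, sketched as a guess: a path-MTU blackhole between the DSL line and that one site would produce exactly this "one site hangs, everything else works" pattern. On Mac OS X the don't-fragment bit can be set from ping to probe it:

        # 1472 bytes of payload + 28 bytes of headers = a full 1500-byte packet
        ping -D -s 1472 research.microsoft.com
        # if large probes die but small ones (-s 1400) survive, try a lower interface MTU:
        sudo ifconfig en0 mtu 1400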

  • PostgreSQL user authentication against PAM

    - by elmuerte
    I am trying to set up authentication via PAM for PostgreSQL 9.3. I already managed to get this working on an Ubuntu 12.04 server, but I am unable to get this working on a CentOS 6 install. The relevant pg_hba.conf line:

        host    all    all    0.0.0.0/0    pam    pamservice=postgresql93

    The pam.d/postgresql93 file is the default config shipped with the official PostgreSQL 9.3 package:

        #%PAM-1.0
        auth        include     password-auth
        account     include     password-auth

    When a user tries to authenticate, the following is reported in the secure log:

        hostname unix_chkpwd[31807]: check pass; user unknown
        hostname unix_chkpwd[31808]: check pass; user unknown
        hostname unix_chkpwd[31808]: password check failed for user (myuser)
        hostname postgres 10.1.0.1(61459) authentication: pam_unix(postgresql93:auth): authentication failure; logname= uid=26 euid=26 tty= ruser= rhost= user=myuser

    The relevant content of the password-auth config is:

        auth        required      pam_env.so
        auth        sufficient    pam_unix.so nullok try_first_pass
        auth        requisite     pam_succeed_if.so uid >= 500 quiet
        auth        required      pam_deny.so

        account     required      pam_unix.so
        account     sufficient    pam_localuser.so
        account     sufficient    pam_succeed_if.so uid < 500 quiet
        account     required      pam_permit.so

    The problem is with pam_unix.so. It is unable to validate the password, and unable to retrieve the user info (when I remove the auth entry of pam_unix.so). The CentOS 6 install is only 5 days old, so it does not have a lot of baggage. unix_chkpwd is suid and has execute rights for everybody, so it should be able to check the shadow file (which has no privileges at all?).
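
    A hedged explanation to test: unix_chkpwd only verifies the password of the user who invokes it; when the postgres daemon (uid 26) asks about a different account, the helper refuses, and pam_unix then tries to read /etc/shadow directly, which it cannot. One workaround sketch, with obvious security caveats (sssd or a dedicated auth service is cleaner):

        # hypothetical workaround: grant the postgres user read access to shadow
        sudo setfacl -m u:postgres:r /etc/shadow
        # and first confirm SELinux is not the real blocker:
        getenforce
        sudo ausearch -m avc -ts recent | grep -i chkpwd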

  • SSL Returning Blank Page, No Catalina Errors

    - by Mr.Peabody
    This is my second, maybe third, time configuring SSL with Tomcat. Earlier I had created a self-signed certificate, which worked, and now using my signed one is proving fruitless. I am using Tomcat, operating from the Amazon Linux AMI. When using the signed cert/keystore, my server starts normally without errors. However, when trying to navigate to the domain, it gives me an "ERR_SSL_VERSION_OR_CIPHER_MISMATCH" error. My server.xml file looks as follows:

        <Connector port="8443" maxHttpHeaderSize="8192"
                   maxThreads="150" minSpareThreads="25" maxSpareThreads="75"
                   enableLookups="false" disableUploadTimeout="true"
                   acceptCount="100" scheme="https" secure="true" SSLEnabled="true"
                   clientAuth="false" sslProtocol="TLS"
                   keystoreFile="/home/ec2-user/.keystore/starchild.jks"
                   keystorePass="d6b5385812252f180b961aa3630df504" />

    It couldn't hurt to also mention that I'm using a wildcard certificate. Please let me know if anything looks amiss!

    EDIT: After looking more into this, I've determined there may be nothing wrong with the server.xml or the listening ports. This is becoming more of an actual certificate error, as the curl request is giving me this error:

        curl: (35) Unknown SSL protocol error in connection to jira.mywebsite.com:-9824

    Though, I can't seem to figure out what the "-9824" is. When comparing this curl to another similar setup (using the same wildcard certificate) it's turning up the full handshake, which is to be expected. I believe this is now down to the protocol/cipher set default on JIRA servers.
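
    A hedged check worth running before anything else: this symptom often appears when the keystore contains only the signed certificate (a trustedCertEntry) and not the private key it belongs to, so the connector has nothing to complete a handshake with.

        keytool -list -v -keystore /home/ec2-user/.keystore/starchild.jks | grep -i "entry type"
        # a working keystore should show PrivateKeyEntry, not just trustedCertEntry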

  • File permissions question

    - by Matthew Robert Keable
    I just switched my site's server from Windows to Linux, and am finally able to control file permissions from my FTP. So, seeing that all permissions were 705 by default (and not wanting just anyone to have permission to execute), I went and changed everything to 744. Now, gif and jpg links don't work, pdf download links don't work, php links don't load, and mov files don't play. Conversely, all html files work perfectly. Setting things back doesn't seem to help. Even setting to 777 gets me nowhere.

    Any ideas on what might be going wrong? I've been googling file permissions all day (solved that problem with the Windows-Linux switch, which has bred a new problem), and I don't think anything I can find has escaped my attention. The site: absis-minas.com

    Go easy on a n00b. I took up learning php out of interest, and wound up delving into server management issues due to a very simple line of code not working the way it was supposed to. Thanks!
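
    A common explanation, hedged since the directory layout isn't shown: on Linux the execute bit on a directory is what allows it to be traversed, so applying 744 recursively locks Apache out of everything inside subdirectories while top-level html files keep working. The usual repair is different modes for directories and files:

        # 755 for directories (traversable), 644 for files (readable, not executable)
        find . -type d -exec chmod 755 {} \;
        find . -type f -exec chmod 644 {} \;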

  • Advanced Linux file permission question (ownership change during write operation)

    - by Kent
    By default the umask is 0022:

        usera@cmp$ touch somefile; ls -l
        total 0
        -rw-r--r-- 1 usera usera 0 2009-09-22 22:30 somefile

    The directory /home/shared/ is meant for shared files and should be owned by root and the shared group. Files created here by usern (any user) are automatically owned by the shared group. There is a cron job taking care of changing the owning user and owning group (of any moved files) once per day:

        usera@cmp$ cat /etc/cron.daily/sharedscript
        #!/bin/bash
        chown -R root:shared /home/shared/
        chmod -R 770 /home/shared/

    I was writing a really large file to the shared directory. It had me (usera) as owning user and the shared group as group owner. During the write operation the cron job was run, and I still had no problem completing the write process. You see, I thought this would happen:

    1. I am writing the file. The file permissions and ownership data for the file look like this: -rw-r--r-- usera shared
    2. The cron job kicks in! The chown line is processed and now the file is owned by the root user and the shared group.
    3. As the owning group only has read access to the file, I get a file write error! Boom! End of story.

    Why did the operation succeed? A link to some kind of reference documentation to back up the reason would be very welcome (as I could use it to study more details).
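
    The short answer, demonstrable in a few lines of shell: permissions are checked only at open() time; once a process holds an open file descriptor, later chown/chmod calls do not revoke it. (POSIX specifies the access check in open(); the open file description carries the access mode from then on.)

        exec 3>/tmp/permdemo       # open a file for writing on fd 3
        chmod 000 /tmp/permdemo    # revoke every permission bit
        echo "still writable" >&3  # the write still succeeds via the open descriptor
        exec 3>&-                  # close the descriptor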

  • httpd 2.2.15 + suPHP + suExec + php5 = permissions and information?

    - by Prix
    Hi, I am currently playing around with suexec, suPHP and PHP 5 on my Apache on Slackware 13.1. Everything is installed and working properly, but now I'd like to go further into the directory permissions and the suPHP settings and options available. Initially I was planning to leave suPHP disabled unless a virtual host specifies it to be enabled, but that does not seem to work. See this sample mod_php.conf, which is included in my httpd.conf:

        #
        # mod_php & mod_suPHP - PHP Hypertext Preprocessor module
        #

        # Load the PHP module:
        LoadModule php5_module lib/httpd/modules/libphp5.so

        # Load the suPHP module:
        LoadModule suphp_module lib/httpd/modules/mod_suphp.so

        <IfModule mod_php5.c>
            # Tell Apache to feed all *.php files through PHP. If you'd like to
            # parse PHP embedded in files with different extensions, comment out
            # these lines and see the example below.
            <FilesMatch \.php$>
                SetHandler application/x-httpd-php
            </FilesMatch>
        </IfModule>

        <IfModule mod_suphp.c>
            # This option tells mod_suphp if a PHP-script requested on this server (or
            # VirtualHost) should be run with the PHP-interpreter or returned to the
            # browser "as it is".
            suPHP_Engine off
        </IfModule>

    With the above sample, suPHP and PHP do not work together; if I comment out the php5 stuff, the suPHP module runs just fine. So my first question is: how could I make this setup work? Leave suPHP disabled and use php5 by default, and if a virtual host has suPHP enabled, disable php5 there and use suPHP. If any information is lacking here, please let me know and I will update with any additional details you may need. Thanks in advance.
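
    A per-vhost sketch of the usual approach, hedged since directive support varies between suPHP builds: keep mod_php as the global default, and inside the suPHP vhost switch the .php handler over and turn the mod_php engine off. The host name is a placeholder.

        <VirtualHost *:80>
            ServerName suphp-site.example
            suPHP_Engine on
            suPHP_AddHandler application/x-httpd-suphp
            AddHandler application/x-httpd-suphp .php
            php_admin_flag engine off    # stop mod_php handling this vhost
        </VirtualHost>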

  • Keep Windows Installer from using largest drive for temporary files

    - by stefan.at.wpf
    By default Windows Installer uses the largest drive for temporary storage, no matter whether that's needed (meaning there would also be enough space on the system drive). Taken from http://msdn.microsoft.com/en-us/library/aa371372%28VS.85%29.aspx:

        During an administrative installation the installer sets ROOTDRIVE to the first connected network drive it finds that can be written to. If it is not an administrative installation, or if the installer can find no network drives, the installer sets ROOTDRIVE to the local drive that can be written to having the most free space.

    Now my system drive is an SSD, and my largest drive is a RAID that spins down when it's not used. Remember the SSD as system drive? Everything is silent now! Until I install something and Windows Installer wakes up my RAID again just to put a small .tmp file on it...

    How can I prevent Windows Installer from using the largest drive as temporary storage? Can I maybe set some access rights to disallow Windows Installer from writing to my RAID drive? Any other ideas? Thank you!
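
    Since ROOTDRIVE is a public Windows Installer property, one workaround (sketched; it has to be repeated per package rather than set globally) is pinning it on the command line:

        rem package.msi is a placeholder; ROOTDRIVE overrides the free-space heuristic
        msiexec /i package.msi ROOTDRIVE=C:\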

  • apache2 + mod_fastcgi + suexec + php5.2 = unstable on high load

    I am hosting several (~30) different sites on one server with apache2 + fastcgi + suexec + php5. The sites have different loads and different execution times for their scripts (some of them process a request for 5-7 seconds, some in under 1 second). Sometimes when a single site receives very high load (all PHP instances of this site are created and used), the whole Apache server hangs. Apache (worker MPM) creates new processes up to the upper limit. It looks like it starts to queue ALL new requests for EVERY site, not only the one that has high load and quickly reaches its process limits... a restart of Apache solves the problem. Config:

        FastCgiConfig -singleThreshold 1 -multiThreshold 10 -listen-queue-depth 30 \
            -maxProcesses 80 -maxClassProcesses 12 -idle-timeout 30 \
            -pass-header HTTP_AUTHORIZATION -pass-header If-Modified-Since \
            -pass-header If-None-Match

    (Earlier I had the default -listen-queue-depth = 100, but it didn't change anything...)

    Any suggestions? Another question: how is this listen queue implemented? Is it one queue for the whole of Apache, or a unique queue for every defined PHP application (suexec site)? I would like to achieve something like this: when one site receives high load and its queue is full, the server bounces the next request, but only for this one site. Other sites should work properly...
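
    A hedged reading of the failure mode: each dynamic FastCGI application class gets its own socket and queue, but the pool of Apache worker threads is global, so once slow PHP requests occupy every worker, all sites stall regardless of per-class queues. Two knobs that tend to help, sketched with placeholder numbers:

        # cap any single site's appetite well below the global process budget,
        # and shorten the per-class queue so overload fails fast instead of piling up
        FastCgiConfig -singleThreshold 1 -multiThreshold 10 \
            -maxProcesses 80 -maxClassProcesses 6 \
            -listen-queue-depth 10 -idle-timeout 30
        # and size the worker MPM (MaxClients / ThreadsPerChild) so one saturated
        # site cannot consume every thread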

  • Intel NIC X540-T1 non-functional in Ubuntu Server 12.04

    - by Jeff Carr
    I have installed three Intel X540-T1s in servers running Ubuntu Server 12.04, but all are non-functional: no link lights, no packets sent or received, and no connection on IPv4 or IPv6 whether set up as DHCP or static. Also, dmesg doesn't detect cable connection or disconnection. I updated the default ixgbe driver to Intel's latest version (3.11.33) with no change. The Ethernet controller is being reported as X540-AT2 (which might be a problem that I can't figure out how to fix), but the subsystem is X540-T1, so I believe that might be intended. Does anyone have any experience with this that could assist?

        ifconfig eth2
        eth2      Link encap:Ethernet  HWaddr a0:36:9f:14:5f:ea
                  inet addr:192.168.101.1  Bcast:192.168.101.255  Mask:255.255.255.0
                  UP BROADCAST MULTICAST  MTU:1500  Metric:1
                  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

        ethtool -i eth2
        driver: ixgbe
        version: 3.11.33
        firmware-version: 0x8000037c
        bus-info: 0000:08:00.0
        supports-statistics: yes
        supports-test: yes
        supports-eeprom-access: yes
        supports-register-dump: yes

        lspci -vvnns 08:00.0
        08:00.0 Ethernet controller [0200]: Intel Corporation Ethernet Controller 10 Gigabit X540-AT2 [8086:1528] (rev 01)
                Subsystem: Intel Corporation Ethernet Converged Network Adapter X540-T1 [8086:0002]
                Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr+ Stepping- SERR+ FastB2B- DisINTx+
                Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
                Latency: 0, Cache Line Size: 32 bytes
                Interrupt: pin A routed to IRQ 16
                Region 0: Memory at e8000000 (64-bit, prefetchable) [size=2M]
                Region 4: Memory at e8200000 (64-bit, prefetchable) [size=16K]
                [virtual] Expansion ROM at e8280000 [disabled] [size=512K]
                Capabilities: <access denied>
                Kernel driver in use: ixgbe
                Kernel modules: ixgbe
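
    A few hedged checks that often narrow "no link at all" down; the ethtool -i output above says the card supports self-test, so that is worth exercising:

        ethtool eth2              # "Link detected", supported/advertised link modes
        ethtool -t eth2 offline   # run the NIC's built-in self-test
        dmesg | grep -i ixgbe     # the driver logs probe/PHY complaints here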

  • Redmine plug-in fails at rake db:migrate_plugins

    - by Drew
    Hey, first post, so I hope I'm in the right place. While trying to install the Redmine plug-in 'Wiki Extensions', I keep getting stuck when I try to run the "rake db:migrate_plugins RAILS_ENV=production" command. I am moving server and I'm a bit over my head. I haven't found anything on Google that has helped me much, though I might have missed something. I have pasted in the output with --trace:

        (in /srv/www/vastpark.org/redmine)
        ** Invoke db:migrate_plugins (first_time)
        ** Invoke environment (first_time)
        ** Execute environment
        ** Execute db:migrate_plugins
        Migrating engines...
        Migrating acts_as_activity_provider...
        Migrating acts_as_attachable...
        Migrating acts_as_customizable...
        Migrating acts_as_event...
        Migrating acts_as_list...
        Migrating acts_as_searchable...
        Migrating acts_as_tree...
        Migrating acts_as_versioned...
        Migrating acts_as_watchable...
        Migrating awesome_nested_set...
        Migrating classic_pagination...
        Migrating coderay-0.9.2...
        Migrating gravatar...
        Migrating open_id_authentication...
        Migrating prepend_engine_views...
        Migrating redmine_wiki_extensions...
        == CreateWikiExtensionsComments: migrating ===================================
        -- create_table(:wiki_extensions_comments)
        rake aborted!
        An error has occurred, all later migrations canceled:
        Mysql::Error: Table 'wiki_extensions_comments' already exists: CREATE TABLE 'wiki_extensions_comments' ('id' int(11) DEFAULT NULL auto_increment PRIMARY KEY, 'wiki_page_id' int(11), 'key_word' varchar(255), 'user_id' int(11), 'comment' text, 'created_at' datetime, 'updated_at' datetime) ENGINE=InnoDB
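
    Since the failure is just "table already exists" (probably left over from an earlier, half-finished migration run), one hedged way forward, assuming the plugin's table holds no data yet and the database/user names are placeholders:

        mysql -u root -p redmine -e "DROP TABLE wiki_extensions_comments;"
        rake db:migrate_plugins RAILS_ENV=production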

  • Setting kernel memory for installing postgresql

    - by Matthieu Taymans
    My question is about setting the kernel shared memory for installing PostgreSQL on Mac OS X 10.6.8. In the README file of PostgreSQL it is said:

        Shared Memory

        PostgreSQL uses shared memory extensively for caching and inter-process communication. Unfortunately, the default configuration of Mac OS X does not allow suitable amounts of shared memory to be created to run the database server. Before running the installation, please ensure that your system is configured to allow the use of larger amounts of shared memory. Note that this does not 'reserve' any memory so it is safe to configure much higher values than you might initially need. You can do this by editing the file /etc/sysctl.conf - e.g.

            % sudo vi /etc/sysctl.conf

        On a MacBook Pro with 2GB of RAM, the author's sysctl.conf contains:

            kern.sysv.shmmax=1610612736
            kern.sysv.shmall=393216
            kern.sysv.shmmin=1
            kern.sysv.shmmni=32
            kern.sysv.shmseg=8
            kern.maxprocperuid=512
            kern.maxproc=2048

        Note that (kern.sysv.shmall * 4096) should be greater than or equal to kern.sysv.shmmax. kern.sysv.shmmax must also be a multiple of 4096. Once you have edited (or created) the file, reboot before continuing with the installation. If you wish to check the settings currently being used by the kernel, you can use the sysctl utility:

            % sysctl -a

        The database server can now be installed.

    I'm a real beginner with all this but need to install PostgreSQL for academic purposes. Do you know how I can set this kernel shared memory? Won't that be harmful to my system? Thank you in advance. Matthieu
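
    Following the README's own numbers, the whole step is two actions, sketched here (the values suit a 2GB machine, so scale shmmax/shmall if needed). It is not harmful: these are upper limits, not reservations.

        sudo sh -c 'cat >> /etc/sysctl.conf <<EOF
        kern.sysv.shmmax=1610612736
        kern.sysv.shmall=393216
        kern.sysv.shmmin=1
        kern.sysv.shmmni=32
        kern.sysv.shmseg=8
        EOF'
        # reboot, then verify:
        sysctl kern.sysv.shmmax kern.sysv.shmall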

  • Cannot Resolve Host Or Access Website Through Router

    - by Boris_yo
    This is weird. I am on Windows XP with an Edimax BR-6204Wg. I have 3 devices: 2 laptops and 1 smartphone. The 1st laptop and the smartphone are connected through WiFi to the router, and the 2nd laptop is connected through LAN to the router. Before the firmware upgrade I had not tried to access the website, but after upgrading the firmware to the latest version (http://www.edimax.eu/en/support_detail.php?pd_id=11&pl1_id=3#02) I had problems resolving the host, pinging, tracerting and accessing the website. Sometimes ping and tracert work but I cannot access the website, and sometimes I can access the website but ping and tracert do not work. Weird?

    I downgraded to the previous version, and no change. When I can no longer access that website through Internet Explorer, I can access it in Firefox. I tried deleting cookies and clearing the cache, and that seemed to make no difference. Switching the LAN port did not make a difference. When I disconnect the router and connect the laptop through LAN to the internet modem, everything is normal. I tried resetting the router and resetting to factory default settings, and none of it helped. At the moment I can access the website on the laptop connected through LAN from both Firefox and Internet Explorer, but on my smartphone I can access the website only with Opera and not with the built-in browser or Skyfire.

    UPDATE: I just could only access with Internet Explorer but not with other browsers on my PC. Minutes later I could access with all browsers. But on the smartphone I could only access with Opera and not with other browsers. I am confused. I also determined that sometimes I can access and sometimes I can't. What is also weird is that when ping and tracert cannot resolve the host, I am still able to access the website.
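
    The on-and-off resolution pattern points at the router's DNS proxy rather than the site, so a hedged experiment is to take it out of the loop: compare lookups through the router against a public resolver (the site name below is a placeholder), and if the direct lookups are reliable, have the router's DHCP hand out external DNS servers instead of itself.

        nslookup www.example-site.com              # uses the router's DNS proxy
        nslookup www.example-site.com 8.8.8.8      # bypasses it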

  • Easy_install installs the wrong version of python modules (Mac OS)

    - by user73250
    I installed Python 2.7 on my Mac. When typing "python" in the terminal, it shows:

        $ python
        Python 2.7 (r27:82508, Jul  3 2010, 20:17:05)
        [GCC 4.0.1 (Apple Inc. build 5493)] on darwin
        Type "help", "copyright", "credits" or "license" for more information.

    The Python version is correct here. But when I try to easy_install some modules, the system will install the modules with Python version 2.6, and they are not able to be imported into Python 2.7. And of course I can not do the functions I need in my code. Here's an example of easy_install graphy:

        $ easy_install graphy
        Searching for graphy
        Reading pypi.python.org/simple/graphy/
        Reading http://code.Google.com/p/graphy/
        Best match: Graphy 1.0.0
        Downloading http://pypi.python.org/packages/source/G/Graphy/Graphy-1.0.0.tar.gz#md5=390b4f9194d81d0590abac90c8b717e0
        Processing Graphy-1.0.0.tar.gz
        Running Graphy-1.0.0/setup.py -q bdist_egg --dist-dir /var/folders/fH/fHwdy4WtHZOBytkg1nOv9E+++TI/-Tmp-/easy_install-cFL53r/Graphy-1.0.0/egg-dist-tmp-YtDCZU
        warning: no files found matching '*.tmpl' under directory 'graphy'
        warning: no files found matching '*.txt' under directory 'graphy'
        warning: no files found matching '*.h' under directory 'graphy'
        warning: no previously-included files matching '*.pyc' found under directory '.'
        warning: no previously-included files matching '*~' found under directory '.'
        warning: no previously-included files matching '*.aux' found under directory '.'
        zip_safe flag not set; analyzing archive contents...
        graphy.all_tests: module references __file__
        Adding Graphy 1.0.0 to easy-install.pth file
        Installed /Library/Python/2.6/site-packages/Graphy-1.0.0-py2.6.egg
        Processing dependencies for graphy
        Finished processing dependencies for graphy

    So it installs graphy for Python 2.6. Can someone help me with it? I just want to set my default easy_install Python version to 2.7.
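
    The easy_install found first on PATH still belongs to Apple's system Python 2.6; setuptools installs a versioned copy alongside it. A quick fix, assuming setuptools has been installed for the 2.7 interpreter:

        easy_install-2.7 graphy
        # or, equivalently, run setuptools through the interpreter you want:
        python2.7 -m easy_install graphy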

  • Netgear GS724Tv3 and link aggregation Mac OS X Server 10.6.8

    - by Manca Weeks
    I need to link aggregate 2 sets of ports on the Netgear GS724T with my Apple server tower (latest generation). I have 2 built-in ports and 2 ports on a PCIe Ethernet card. It is not obvious to me how to properly configure the Netgear end. I have access to the Netgear box through its web interface, I just don't know how to properly set the settings. I tried going to Netgear for help, but they said my software support has expired. I bought this unit on their recommendation - they say it is compatible with the 802.3ad protocol. I cannot locate any references to this protocol in the manual, and I noticed some people in forums say that this device is actually not compatible with 802.3ad and that Netgear is misleading potential customers by saying it is. Any help will be appreciated. Thanks, M

    My own answer - posted as an edit because of restrictions on my user: OK folks, turns out one must use a Windows machine on this one or nothing makes sense. I was unable to get much farther than viewing the default inactive LAGs, because in Firefox and Safari on Mac things don't make much sense - i.e. the Apply buttons (supposedly JavaScript) don't work. You can view the configurations, but none of the modifications you make stick. Then, in Switching - LAGs, choose the ports to include and make sure you switch the LAG type from Static to LACP, and all is well. Haven't tested the performance of the config yet, but both sides appear to be happy with the configuration. The Apple server says the link is active and so does the Netgear. Will report if any other discoveries. Thanks to all who read, and to user84104 for responding. M
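
    For the Mac side, a matching-configuration sketch (hedged: the device names en0/en1 and the bond name are assumptions; the Network pane in System Preferences can do the same thing):

        sudo ifconfig bond0 create
        sudo ifconfig bond0 bonddev en0
        sudo ifconfig bond0 bonddev en1
        ifconfig bond0    # should list en0/en1 as members once LACP negotiates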
