Search Results

Search found 17054 results on 683 pages for 'jms request reply'.


  • How to test an HTTPS URL with a given IP address

    - by GreatFire
    Let's say a website is load-balanced across several servers. I want to run a command to test whether it's working, such as curl DOMAIN.TLD. To isolate each IP address, I specify the IP manually. But many websites may be hosted on the server, so I still provide a Host header, like this: curl IP_ADDRESS -H 'Host: DOMAIN.TLD'. In my understanding, these two commands create the exact same HTTP request; the only difference is that in the latter I take the DNS lookup out of cURL's hands and do it manually (please correct me if I'm wrong). All well so far. But now I want to do the same for an HTTPS URL. Again, I could test it like this: curl https://DOMAIN.TLD. But I want to specify the IP manually, so I run curl https://IP_ADDRESS -H 'Host: DOMAIN.TLD'. Now I get a cURL error: curl: (51) SSL: certificate subject name 'DOMAIN.TLD' does not match target host name 'IP_ADDRESS'. I can of course get around this by telling cURL not to care about the certificate (the "-k" option), but that's not ideal. Is there a way to decouple the IP address being connected to from the host name being verified by SSL?
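
    One approach worth noting (a sketch; it needs a reasonably recent curl, roughly 7.21.3 or later): the --resolve option pins a host/port pair to a given IP, so the TLS handshake still sees DOMAIN.TLD and certificate verification stays intact:

        # Pin DOMAIN.TLD:443 to one backend's IP while keeping normal
        # certificate and SNI handling:
        curl --resolve DOMAIN.TLD:443:IP_ADDRESS https://DOMAIN.TLD/

    Repeating the command with each server's IP tests every member of the pool without touching DNS or disabling verification.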

  • Apache2 VirtualHosts 403 Oddity

    - by Carson C.
    I'm sure this is something I should already understand, but I'm finding myself confused. The configs in play add up to this:

        NameVirtualHost *:80
        Listen 80
        <VirtualHost *:80>
            <Directory />
                Options FollowSymLinks
                AllowOverride None
                Order deny,allow
                Deny from all
            </Directory>
        </VirtualHost>
        <VirtualHost *:80>
            ServerAdmin [email protected]
            ServerName domain.tld
            ServerAlias *.domain.tld
            DocumentRoot /var/www/domain.tld
            <Directory /var/www/domain.tld>
                Options -Indexes FollowSymLinks MultiViews
                AllowOverride None
                Order allow,deny
                Allow from all
            </Directory>
            ErrorLog ${APACHE_LOG_DIR}/error.log
            LogLevel warn
            CustomLog ${APACHE_LOG_DIR}/access.log combined
        </VirtualHost>

    DNS is working correctly. The issue is that every variant of http://*.domain.tld/ (including http://domain.tld/) works correctly, except http://www.domain.tld/, which throws a 403. The logs state:

        client denied by server configuration: /etc/apache2/htdocs

    If I remove the first VirtualHost block from play, everything works as expected, including http://www.domain.tld. This leads me to believe that for some reason Apache is not considering www.domain.tld a match for the second VirtualHost block, and is thereby falling back to deny-all. This seems wrong; shouldn't the second block match www.domain.tld? I've been able to resolve this, but I still don't understand why: in my original configs I was using the real IP address of the server instead of *, and switching all instances to * as shown above made everything work as expected. Does this have something to do with the way browsers request resources?
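
    The behaviour described at the end has a likely mechanism (offered as a probable explanation, not gospel): name-based matching only happens within the set of VirtualHosts whose address exactly matches the NameVirtualHost directive. With NameVirtualHost *:80 but <VirtualHost REAL_IP:80> blocks, or a mix of the two, Apache treats the mismatched blocks as a separate set, and a request can land on the first block defined for that address, here the catch-all deny. A quick way to see the mapping Apache actually built:

        # Dumps the parsed vhost table: the default server for each
        # address:port pair and the order name-based hosts are tried in.
        apachectl -S

    (On Debian/Ubuntu the wrapper is apache2ctl -S.)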

  • Add single sign-on into existing web app

    - by EvilDr
    Apologies if this isn't the best site; I've searched for an answer but can't find anything quite right. I don't actually know the correct terminology I should be using here, so any pointers will be appreciated. I have a web application that is accessed by many different users across different organisations. Access is provided by each user having a unique username/password, which is stored in SQL (database fields are customerID, userID, username). Some organisations are now asking if we can change this to allow "Active Directory single sign-on", so that users don't need to remember yet another set of login details. From research I can see how this is achieved using OpenAuth and Google (etc.), but I know hardly anything about AD and can't find much information on this (again, I'm sure it helps when you know the terminology). Is this request even possible to achieve, given that most users will be from different (and unrelated) organisations? I saw on a Microsoft Build video not long ago that there is some kind of replication service for AD to allow cloud authentication. Is this what I should be aiming for?

  • Should I expect ICMP transit traffic to show up when using debug ip packet with a mask on a Cisco IOS router?

    - by David Bullock
    So I am trying to trace an ICMP conversation between 192.168.100.230/32 on an EZVPN interface (Virtual-Access3) and 192.168.100.20 on BVI4.

        # sh ip access-lists 199
        10 permit icmp 192.168.100.0 0.0.0.255 host 192.168.100.20
        20 permit icmp host 192.168.100.20 192.168.100.0 0.0.0.255
        # sh debug
        Generic IP:
          IP packet debugging is on for access list 199
        # sh ip route | incl 192.168.100
        192.168.100.0/24 is variably subnetted, 2 subnets, 2 masks
        C       192.168.100.0/24 is directly connected, BVI4
        S       192.168.100.230/32 [1/0] via x.x.x.x, Virtual-Access3
        # sh log | inc Buff
        Buffer logging: level debugging, 2145 messages logged, xml disabled,
        Log Buffer (16384 bytes):

    OK, so from my EZVPN client with IP address 192.168.100.230, I ping 192.168.100.20. I know the packet reaches the router across the VPN tunnel, because:

        policy exists on zp vpn-to-in
          Zone-pair: vpn-to-in
          Service-policy inspect : acl-based-policy
            Class-map: desired-traffic (match-all)
              Match: access-group name my-acl
              Inspect
              Number of Half-open Sessions = 1
              Half-open Sessions
                Session 84DB9D60 (192.168.100.230:8)=>(192.168.100.20:0) icmp SIS_OPENING
                  Created 00:00:05, Last heard 00:00:00
                  ECHO request
                  Bytes sent (initiator:responder) [64:0]
            Class-map: class-default (match-any)
              Match: any
              Drop
                176 packets, 12961 bytes

    But I get no debug log, and the debugging ACL hasn't matched:

        # sh log | inc IP:
        #
        # sh ip access-lists 198
        Extended IP access list 198
        10 permit icmp 192.168.100.0 0.0.0.255 host 192.168.100.20
        20 permit icmp host 192.168.100.20 192.168.100.0 0.0.0.255

    Am I going crazy, or should I not expect to see this debug log? Thanks!
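
    One thing to rule out (hedged, since it depends on the switching path): debug ip packet only logs packets that are process-switched, and transit traffic that is CEF- or fast-switched never reaches the process level, so it never hits the debug hook at all. A common lab-only check is to force process switching on the interfaces in the path; this is CPU-heavy and not for production:

        ! Applied to the virtual-template the Virtual-Access interface is
        ! cloned from (the template name here is an assumption):
        interface Virtual-Template1
         no ip route-cache

    If debug entries appear after that, the original silence was just the CEF path bypassing the debug, not a problem with the ACL.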

  • OpenLDAP ACLs are not working

    - by Dr I
    First things first, I'm currently working with OpenLDAP slapd 2.4.36 on Fedora release 19 (Schrödinger’s Cat). I've just installed OpenLDAP with yum, and my configuration is the following:

        ##### OpenLDAP Default configuration #####
        #
        ##### OpenLDAP CORE CONFIGURATION #####
        include /etc/openldap/schema/core.schema
        include /etc/openldap/schema/cosine.schema
        include /etc/openldap/schema/inetorgperson.schema
        include /etc/openldap/schema/nis.schema
        pidfile /var/lib/ldap/slapd.pid
        loglevel trace
        ##### Default Schema #####
        database mdb
        directory /var/lib/ldap/
        maxsize 1073741824
        suffix "dc=domain,dc=tld"
        rootdn "cn=root,dc=domain,dc=tld"
        rootpw {SSHA}SECRETP@SSWORD
        ##### Default ACL #####
        access to attrs=userpassword
                by self write
                by group.exact="cn=administrators,ou=builtin,ou=groups,dc=domain,dc=tld" write
                by anonymous auth
                by * none

    I launch my OpenLDAP service using:

        /usr/sbin/slapd -u ldap -h ldapi:/// ldap:/// -f /etc/openldap/slapd.conf

    As you can see, it's a pretty simple ACL, which aims to give the owner write access to the userPassword attribute, the administrators group write access, anonymous auth-only access, and everyone else no access at all. The problem is: even using a valid user with the correct password, my ldapsearch ends with zero entries retrieved from the directory, plus I get a strange response on the result line:

        # search result
        search: 2
        result: 32 No such object
        # numResponses: 1

    Here is the ldapsearch request:

        ldapsearch -H ldap.domain.tld -W -b dc=domain,dc=tld -s sub -D cn=user,ou=service,ou=employees,ou=users,dc=domain,dc=tld

    I did not specify any filter, as I want to check that ldapsearch correctly prints only the allowed attributes.
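
    Two observations, offered as assumptions to verify rather than a definitive diagnosis. First, -H expects a full URI (e.g. -H ldap://ldap.domain.tld), not a bare hostname. Second, once a slapd.conf contains any access directive, everything not covered by one falls back to an implicit deny, so an ACL touching only userpassword leaves the rest of the tree unreadable, which would match the "32 No such object" result against the base DN. A follow-up rule restores read access to everything else:

        # Evaluated after the userpassword rule; the intended policy here
        # (authenticated users may read the tree) is an assumption.
        access to *
                by users read
                by anonymous auth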

  • Ubuntu 12.04 open port 80 inside WLAN

    - by Eduard
    I have an nginx server running on Ubuntu 12.04 that serves HTTP on port 80 and HTTPS on port 443. Everything works fine if I access it from the same computer via localhost, 127.0.0.1 or the local IP 192.168.0.11. If I try to access the server from another computer in the same VLAN, it does not work for HTTP; it works for HTTPS. I have changed my nginx configuration to also listen on port 8000 for HTTP; I can then access HTTP from the other computer in the same VLAN via "http://192.168.0.11:8000". I also have a web server running on port 80 on a Windows machine and can access it from another device in the same VLAN, so the router is not blocking incoming HTTP traffic. The nginx process is run by root. I have used tcpdump and I see that packets are arriving at the Ubuntu box (192.168.0.16.49735 > 192.168.0.11.80) and that some response is being given (192.168.0.11.80 > 192.168.0.16.49735), though I do not know what the response is. No request arrives at the nginx web server (I have checked the access log). My iptables rules are empty. I have unsuccessfully tried for a long time to find a solution to this; it has now become a matter of happiness or bitterness :).
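
    A first check worth making (a guess consistent with "response sent, but nothing in the access log"): confirm which address nginx's port-80 listener is actually bound to.

        # Shows the bind address for each listening socket:
        sudo netstat -tlnp | grep ':80'
        # 127.0.0.1:80 -> loopback only; LAN clients get a TCP reset,
        #                 which would be the "response" seen in tcpdump.
        # 0.0.0.0:80   -> reachable from the LAN.

    If it shows 127.0.0.1:80, changing the server block to a plain "listen 80;" (no address) and reloading nginx should make the HTTP port behave like the 8000 one.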

  • Apache Server Redirect Subdomain to Port

    - by Matt Clark
    I am trying to set up my server with a Minecraft server on a non-standard port, behind a subdomain that sends Minecraft clients to the correct port but shows a web page when visited in a browser, i.e.:

        Minecraft:    minecraft.example.com:25565 -> example.com:25465
        Web browser:  minecraft.example.com:80    -> displays an HTML page

    I am attempting to do this with the following VirtualHosts in Apache:

        Listen 25565
        <VirtualHost *:80>
            ServerAdmin [email protected]
            ServerName minecraft.example.com
            DocumentRoot /var/www/example.com/minecraft
            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory /var/www/example.com/minecraft/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride None
                Order allow,deny
                allow from all
            </Directory>
        </VirtualHost>
        <VirtualHost *:25565>
            ServerAdmin [email protected]
            ServerName minecraft.example.com
            ProxyPass / http://localhost:25465 retry=1 acquire=3000 timeout=6$
            ProxyPassReverse / http://localhost:25465
        </VirtualHost>

    Running this configuration, when I browse to minecraft.example.com I am able to see the files in the /var/www/example.com/minecraft/ folder, but if I try to connect in Minecraft I get an exception, and in the browser I get a page with the following information:

        minecraft.example.com:25565 -> Proxy Error
        The proxy server received an invalid response from an upstream server.
        The proxy server could not handle the request GET /.
        Reason: Error reading from remote server

    Could anybody share some insight on what I may be doing wrong and what the best possible solution would be to fix this? Thanks.
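
    One likely culprit, stated as an assumption: ProxyPass (mod_proxy_http) speaks HTTP to the backend, but the Minecraft client/server protocol is raw TCP, not HTTP, so Apache's "invalid response from an upstream server" is the expected outcome. A plain port forward outside Apache avoids the protocol mismatch:

        # Forward the public Minecraft port to the server's real port at
        # the firewall level (ports taken from the question):
        iptables -t nat -A PREROUTING -p tcp --dport 25565 -j REDIRECT --to-port 25465

    Port 80 can then stay a normal Apache vhost for the HTML page. Note that name-based routing cannot work for the Minecraft traffic anyway, since only HTTP carries a Host header.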

  • How to automatically restart Apache service after HTTP 503 error?

    - by Gnanam
    Our production server is running Apache v2.2.4 on CentOS 5.2, with Mono v1.2.4 integrated within Apache. Recently we faced a problem on this server: in Apache's access_log I found an HTTP 500 internal server error for one HTTP request, and all subsequent HTTP requests failed with an HTTP 503 service unavailable error. From then on, none of the requests succeeded. We only realized some time later that our application was down because of this, and then we restarted the Apache service. My questions: (1) in this kind of situation, how do I automatically restart the Apache service when an HTTP 503 error is encountered? Is there any Apache directive available for this? (2) In general, what would cause an HTTP 503 error in Apache? NOTE: Mono helps run applications developed in .NET on a Linux-based OS. EDIT: I agree on finding the root cause of this problem; in fact, we've been analyzing that too. Until we resolve it, I'm looking for a way to restart Apache immediately and automatically, without downtime or service disruption for application users.
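
    There is no Apache directive that restarts the server on a given status code, so the usual stopgap is an external watchdog. A minimal sketch, run from cron every minute (the probe URL and the CentOS 5 init-script name are assumptions):

        #!/bin/sh
        # Probe the app; restart Apache if it answers 503.
        code=$(curl -s -o /dev/null -w '%{http_code}' http://localhost/)
        if [ "$code" = "503" ]; then
            /sbin/service httpd restart
        fi

    This treats only the symptom: a 503 from a Mono/mod_mono stack typically means the backend application process has died or stopped accepting work, so the error_log around the first 500 is still the thing to chase.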

  • Is this "cache administrator" error my server's problem?

    - by Eoin
    Hey, I have a CentOS VPS running Apache with a phpBB installation. One specific user has received errors when posting a message or logging in to the forum. The issue has arisen in parallel with installing nginx, which serves only the static files of my site; not sure if this is just coincidence. Furthermore, my setup uses redirects (in some cases, double redirects) to point the user to a different virtual folder: the forum is seen to be at /translation/ but the actual files are found in /phpbb/. I'm at a loss as to what the underlying issue may be. My server? The person's ISP? She has tested both at home and at work, with similar issues.

        While trying to process the request:
        GET /phpbb/index.php?sid=f62c927e7eb8f1d60a92dcc6fd918112 HTTP/1.1
        Host: www.irishgaelictranslator.com
        User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-GB; rv:1.9.2.13) Gecko/20101203 Firefox/3.6.13
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language: en-za
        Accept-Encoding: gzip,deflate
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
        Keep-Alive: 115
        Connection: keep-alive
        Referer: http://www.irishgaelictranslator.com/phpbb/ucp.php?mode=login
        Cookie: phpbb3_cipi4_u=96645; phpbb3_cipi4_k=; phpbb3_cipi4_sid=f62c927e7eb8f1d60a92dcc6fd918112; __utma=153470688.1232378553.1294664234.1294664234.1294664234.1; __utmb=153470688.9.10.1294664234; __utmc=153470688; __utmz=153470688.1294664235.1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none); style_cookie=null

        The following error was encountered:
        Invalid Response
        The HTTP Response message received from the contacted server could not be understood or was otherwise malformed. Please contact the site operator. Your cache administrator may be able to provide you with more details about the exact nature of the problem if needed.
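
    Since that error page is the kind an intermediary cache/proxy emits when the origin's reply is malformed, one way to narrow it down (a sketch; it assumes you can reach the Apache backend's port directly on the VPS) is to compare the raw response headers served by Apache with what the front end hands out:

        # Raw headers from the backend and from the public front:
        curl -sD - -o /dev/null http://127.0.0.1/phpbb/index.php
        curl -sD - -o /dev/null http://www.irishgaelictranslator.com/phpbb/index.php

    A missing or garbled status line or Content-Length in one of the two would point at which hop is producing the malformed response.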

  • DNS problems on CentOS fresh install

    - by Rick Koshi
    I'm having some DNS issues on a new box I'm installing with CentOS 6.2. I am able to look up names using nslookup, dig, or host, and I am able to ping machines by name or by IP address. However, other tools, such as ssh, wget, or yum, are unable to resolve names. For example:

        # wget http://www.google.com
        --2012-03-08 14:48:06-- http://www.google.com/
        Resolving www.google.com... failed: Name or service not known.
        wget: unable to resolve host address `www.google.com'
        # ssh www.google.com
        ssh: Could not resolve hostname www.google.com: Name or service not known
        # ping -c 1 www.google.com
        PING www.l.google.com (74.125.113.106) 56(84) bytes of data.
        64 bytes from vw-in-f106.1e100.net (74.125.113.106): icmp_seq=1 ttl=46 time=43.6 ms
        --- www.l.google.com ping statistics ---
        1 packets transmitted, 1 received, 0% packet loss, time 59ms
        rtt min/avg/max/mdev = 43.665/43.665/43.665/0.000 ms
        # host www.google.com
        www.google.com is an alias for www.l.google.com.
        www.l.google.com has address 74.125.113.99
        www.l.google.com has address 74.125.113.103
        www.l.google.com has address 74.125.113.104
        www.l.google.com has address 74.125.113.105
        www.l.google.com has address 74.125.113.106
        www.l.google.com has address 74.125.113.147

    My /etc/nsswitch.conf file is the default, including this (standard) line:

        hosts: files dns

    /etc/resolv.conf is as set up by DHCP:

        ; generated by /sbin/dhclient-script
        nameserver 192.168.1.254

    192.168.1.254 is a working DNS server (my DSL modem, which has worked for years with other machines). Anyone know why ping would work but ssh/wget would fail? Per NcA's suggestion, I tried changing /etc/resolv.conf to point to 8.8.8.8. Oddly enough, this does make it work. Obviously, my DSL modem is responding to DNS requests in some way that parts of Linux's resolution system don't like. Looking at the tcpdump, I am unable to see what the difference is; certainly, both servers are sending the same addresses. Here's the output from tcpdump -nn -X with the server set to the DNS server on the DSL modem. It's clearly replying with the correct addresses, but ssh/wget don't seem happy with it for some reason:

        15:53:52.133580 IP 192.168.1.254.53 > 192.168.1.2.54836: 33157 7/0/0 CNAME www.l.google.com., A 74.125.115.105, A 74.125.115.106, A 74.125.115.147, A 74.125.115.99, A 74.125.115.103, A 74.125.115.104 (148)
        0x0000: 4500 00b0 e33a 0000 ff11 53b1 c0a8 01fe E....:....S.....
        0x0010: c0a8 0102 0035 d634 009c 7528 8185 8180 .....5.4..u(....
        0x0020: 0001 0007 0000 0000 0377 7777 0667 6f6f .........www.goo
        0x0030: 676c 6503 636f 6d00 0001 0001 c00c 0005 gle.com.........
        0x0040: 0001 0007 acd0 0008 0377 7777 016c c010 .........www.l..
        0x0050: c02c 0001 0001 0000 0001 0004 4a7d 7369 .,..........J}si
        0x0060: c02c 0001 0001 0000 0001 0004 4a7d 736a .,..........J}sj
        0x0070: c02c 0001 0001 0000 0001 0004 4a7d 7393 .,..........J}s.
        0x0080: c02c 0001 0001 0000 0001 0004 4a7d 7363 .,..........J}sc
        0x0090: c02c 0001 0001 0000 0001 0004 4a7d 7367 .,..........J}sg
        0x00a0: c02c 0001 0001 0000 0001 0004 4a7d 7368 .,..........J}sh
        15:53:52.135669 IP 192.168.1.254.53 > 192.168.1.2.54836: 65062- 0/0/0 (32)
        0x0000: 4500 003c e33b 0000 ff11 5424 c0a8 01fe E..<.;....T$....
        0x0010: c0a8 0102 0035 d634 0028 98f9 fe26 8000 .....5.4.(...&..
        0x0020: 0001 0000 0000 0000 0377 7777 0667 6f6f .........www.goo
        0x0030: 676c 6503 636f 6d00 001c 0001 gle.com.....

    I'm not enough of an expert to know if this is malformed in some way, but ping seems to do the right thing with it. For comparison, here's the same thing when querying 8.8.8.8:

        15:57:27.990270 IP 8.8.8.8.53 > 192.168.1.2.49028: 59114 7/0/0 CNAME www.l.google.com., A 74.125.113.105, A 74.125.113.103, A 74.125.113.106, A 74.125.113.147, A 74.125.113.104, A 74.125.113.99 (148)
        0x0000: 4500 00b0 5530 0000 2f11 6453 0808 0808 E...U0../.dS....
        0x0010: c0a8 0102 0035 bf84 009c 39f8 e6ea 8180 .....5....9.....
        0x0020: 0001 0007 0000 0000 0377 7777 0667 6f6f .........www.goo
        0x0030: 676c 6503 636f 6d00 0001 0001 c00c 0005 gle.com.........
        0x0040: 0001 0001 516a 0008 0377 7777 016c c010 ....Qj...www.l..
        0x0050: c02c 0001 0001 0000 0116 0004 4a7d 7169 .,..........J}qi
        0x0060: c02c 0001 0001 0000 0116 0004 4a7d 7167 .,..........J}qg
        0x0070: c02c 0001 0001 0000 0116 0004 4a7d 716a .,..........J}qj
        0x0080: c02c 0001 0001 0000 0116 0004 4a7d 7193 .,..........J}q.
        0x0090: c02c 0001 0001 0000 0116 0004 4a7d 7168 .,..........J}qh
        0x00a0: c02c 0001 0001 0000 0116 0004 4a7d 7163 .,..........J}qc
        15:57:28.018909 IP 8.8.8.8.53 > 192.168.1.2.49028: 31984 1/1/0 CNAME www.l.google.com. (102)
        0x0000: 4500 0082 7b1b 0000 2f11 3e96 0808 0808 E...{.../.>.....
        0x0010: c0a8 0102 0035 bf84 006e c67e 7cf0 8180 .....5...n.~|...
        0x0020: 0001 0001 0001 0000 0377 7777 0667 6f6f .........www.goo
        0x0030: 676c 6503 636f 6d00 001c 0001 c00c 0005 gle.com.........
        0x0040: 0001 0001 517f 0008 0377 7777 016c c010 ....Q....www.l..
        0x0050: c030 0006 0001 0000 0258 0026 036e 7334 .0.......X.&.ns4
        0x0060: c010 0964 6e73 2d61 646d 696e c010 0016 ...dns-admin....
        0x0070: 91f3 0000 0384 0000 0384 0000 0708 0000 ................
        0x0080: 003c .<

    I still don't know why the server's reply is adequate for ping but not for ssh/wget. If anyone has ideas, I'd be happy to hear them. For now, though, I can either refer to an outside DNS server or set up my own server on the new box. It's a workaround that seems like it should be unnecessary, but it will allow me to proceed.
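
    One difference between the two captures does stand out: the modem's second reply (to what is presumably the AAAA/IPv6 query that glibc's getaddrinfo() sends alongside the A query; the hex shows question type 001c, i.e. AAAA) carries zero answer records and unusual flags, while 8.8.8.8 answers the same AAAA query with a proper CNAME plus SOA. Broken AAAA handling in consumer-router resolvers is a known way to get exactly this split, where bare A lookups (ping, dig, host) work but getaddrinfo-based tools fail. A direct test of the theory:

        # Compare how each server handles the IPv6 lookup:
        dig @192.168.1.254 www.google.com AAAA
        dig @8.8.8.8 www.google.com AAAA

    If the modem's answer is malformed or empty where Google's is not, pointing resolv.conf elsewhere (as you did) or running a local caching resolver is the practical fix.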

  • What are the requirements for Windows Remote Assistance over Teredo?

    - by Jens
    I'm trying to get the Windows 7 (or Vista) Remote Assistance feature to work without using UPnP on the novice's computer. After enabling Teredo on the expert's computer (which is in a corporate network and therefore has Teredo disabled by default), I tried to connect to the novice both via Easy Connect and via the invitation file, with no success. My troubleshooting so far included the following:

        - A connection to the novice from my home PC was successful, hinting at a misconfiguration on the expert's side.
        - Both computers have a "qualified" connection to the Teredo server.
        - Both computers have a valid Teredo IP, access to the Global_ PNRP cloud, and can resolve names registered with PNRP on the other computer.
        - The expert can resolve the PNRP ID automatically generated with an Easy Connect help request.
        - Both computers can ping the other's PNRP name.
        - Both computers can ping the other's Teredo IP address using ping -6.

    Now I am a little stumped. I expected Remote Assistance to work at this point, since my corporate firewall has no Teredo filtering. What could cause RA not to work in this setting? Thanks in advance!
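
    One more data point worth collecting on the corporate machine (a diagnostic, not a fix): Teredo distinguishes between "client" and "enterprise client" mode, and on domain-joined or managed networks the plain client type can refuse to activate even when the server connection qualifies.

        netsh interface teredo show state

    If the type shows as "client" on a machine behind a corporate NAT, switching it with "netsh interface teredo set state enterpriseclient" is a commonly suggested step; conversely, a state stuck at dormant or offline despite the qualified connection would point back at the firewall.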

  • Preventing DDOS/SYN attacks (as far as possible)

    - by Godius
    Recently my CentOS machine has come under repeated attack. I run MRTG, and the TCP connections graph shoots up like crazy when an attack is going on; it results in the machine becoming inaccessible. This is my current /etc/sysctl.conf:

        # Kernel sysctl configuration file for Red Hat Linux
        #
        # For binary values, 0 is disabled, 1 is enabled. See sysctl(8) and
        # sysctl.conf(5) for more details.

        # Controls IP packet forwarding
        net.ipv4.ip_forward = 0
        # Controls source route verification
        net.ipv4.conf.default.rp_filter = 1
        # Do not accept source routing
        net.ipv4.conf.default.accept_source_route = 0
        # Controls the System Request debugging functionality of the kernel
        kernel.sysrq = 1
        # Controls whether core dumps will append the PID to the core filename
        # Useful for debugging multi-threaded applications
        kernel.core_uses_pid = 1
        # Controls the use of TCP syncookies
        net.ipv4.tcp_syncookies = 1
        # Controls the maximum size of a message, in bytes
        kernel.msgmnb = 65536
        # Controls the default maximum size of a message queue
        kernel.msgmax = 65536
        # Controls the maximum shared segment size, in bytes
        kernel.shmmax = 68719476736
        # Controls the maximum number of shared memory segments, in pages
        kernel.shmall = 4294967296
        net.ipv4.conf.all.rp_filter = 1
        net.ipv4.tcp_syncookies = 1
        net.ipv4.icmp_echo_ignore_broadcasts = 1
        net.ipv4.conf.all.accept_redirects = 0
        net.ipv6.conf.all.accept_redirects = 0
        net.ipv4.conf.all.send_redirects = 0
        net.ipv4.conf.all.accept_source_route = 0
        net.ipv4.conf.all.rp_filter = 1
        net.ipv4.tcp_max_syn_backlog = 1280

    Furthermore, my iptables file (/etc/sysconfig/iptables) contains only this:

        # Generated by iptables-save v1.3.5 on Mon Feb 14 07:07:31 2011
        *filter
        :INPUT ACCEPT [1139630:287215872]
        :FORWARD ACCEPT [0:0]
        :OUTPUT ACCEPT [1222418:555508541]

    On top of the settings above, about 800 IPs are blocked via the iptables file with lines like:

        -A INPUT -s 82.77.119.47 -j DROP

    These have all been added by my hoster when I've emailed them about attacks in the past. I'm no expert, but I'm not sure this is ideal. My question is: what are some good things to add to the iptables file (and possibly other files) that would make it harder for attackers to take down my machine, without locking out legitimate users? Thanks in advance!
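
    Beyond per-IP drops, rate-based rules catch new offenders automatically. A sketch using the connlimit and recent matches (the thresholds are assumptions to tune against your real traffic, and both modules must be present in the kernel):

        # Cap concurrent connections per source IP:
        iptables -A INPUT -p tcp --syn --dport 80 -m connlimit --connlimit-above 30 -j DROP
        # Track new connections and drop sources opening more than
        # 20 per second:
        iptables -A INPUT -p tcp --dport 80 -m state --state NEW -m recent --set
        iptables -A INPUT -p tcp --dport 80 -m state --state NEW -m recent --update --seconds 1 --hitcount 20 -j DROP

    One honest caveat: rules like these (and the syncookies already enabled above) help against SYN floods from a limited set of sources, but a distributed flood that saturates the uplink has to be filtered upstream by the hoster.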

  • Domain workstation acting up and I can't track it down.

    - by DevNULL
    I have a developer with a Windows XP (SP2) 64-bit machine. If the machine is left on overnight (or for any period longer than 5-6 hours), it takes 2-3 minutes to open any local drive, and his network drives are no longer accessible. Here's what the system logs report; any help? BTW: the problem just started a week ago, and nothing has changed on the domain controller / AD or on his machine.

        --- ERROR 1 ---
        Event Type: Error
        Event Source: NETLOGON
        Event Category: None
        Event ID: 5719
        Date: 6/8/2010   Time: 9:17:26 AM
        User: N/A   Computer: BFC1
        Description: This computer was not able to set up a secure session with a domain controller in domain UR due to the following: There are currently no logon servers available to service the logon request. This may lead to authentication problems. Make sure that this computer is connected to the network. If the problem persists, please contact your domain administrator.
        ADDITIONAL INFO: If this computer is a domain controller for the specified domain, it sets up the secure session to the primary domain controller emulator in the specified domain. Otherwise, this computer sets up the secure session to any domain controller in the specified domain.
        Data: 0000: 5e 00 00 c0 ^..A

        --- ERROR 2 ---
        The machine-default permission settings do not grant Local Activation permission for the COM Server application with CLSID {555F3418-D99E-4E51-800A-6E89CFD8B1D7} to the user NT AUTHORITY\LOCAL SERVICE SID (S-1-5-19). This security permission can be modified using the Component Services administrative tool.

        --- ERROR 3 ---
        Event Type: Error
        Event Source: RemoteAccess
        Event Category: None
        Event ID: 20106
        Date: 6/8/2010   Time: 10:12:18 AM
        User: N/A   Computer: BFC1
        Description: Unable to add the interface {E76F0A78-7A0B-4EBB-A081-BA3BD452FC4C} with the Router Manager for the IP protocol. The following error occurred: Cannot complete this function.
        Data: 0000: eb 03 00 00 e...
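
    The NETLOGON 5719 ("no logon servers available") suggests the workstation's secure channel to the domain is dropping overnight. A quick check from the affected machine once the symptom appears (nltest ships in the Windows Support Tools on XP; the domain name is taken from the event text):

        nltest /sc_query:UR
        nltest /sc_verify:UR

    A query/verify that fails after the idle period but clears up on reboot would point at name resolution or NIC power management (the adapter sleeping) rather than at AD itself.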

  • Configure IIS site to work with host header & hosts file entry

    - by HarveySaayman
    I'm a bit of an IIS / Web noob (I'm a C# backend service / WinForms dev), so please bear with me :-) I've set up a site in IIS on my local dev machine. In the bindings section of the site I've added 4 bindings, all for HTTP:

        Host Name                    Port    IP Address
        blog.sourcecube.co.za        26581   *
        www.blog.sourcecube.co.za    26581   *
        blog.sourcecube.co.za        26581   127.0.0.1
        www.blog.sourcecube.co.za    26581   127.0.0.1

    In my hosts file (drivers\etc\hosts), I've added the following entries:

        127.0.0.1 blog.sourcecube.co.za
        127.0.0.1 www.blog.sourcecube.co.za

    When I ping my domain name from the command line, it does in fact resolve to the loopback address, 127.0.0.1. So what I'm expecting when I navigate to blog.sourcecube.co.za in my browser is for it to resolve to 127.0.0.1, and when the request hits IIS, it should know which site to serve because of the host header. But when I navigate to blog.sourcecube.co.za, I get an "Unable to connect, Firefox can't establish a connection to the server at blog.sourcecube.co.za" error. What am I doing wrong? UPDATE: Navigating to blog.sourcecube.co.za:26581 from my browser works... I'd like to get it working without specifying the port number, though.
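
    That update is the telltale: a bare http://blog.sourcecube.co.za goes to port 80, and nothing is bound there, hence "unable to connect". The hosts file only fixes name resolution; it cannot rewrite the port. Adding a port-80 binding alongside the existing ones should do it, e.g. (the site name "Blog" is an assumption; run from an elevated prompt):

        %windir%\system32\inetsrv\appcmd.exe set site "Blog" /+bindings.[protocol='http',bindingInformation='*:80:blog.sourcecube.co.za']

    The Add Binding dialog in IIS Manager does the same thing, as long as no other site already owns *:80 with that host name.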

  • Migrating to Windows Server 2008 R2 Domain Controllers - a few Questions/Issues

    - by Chris
    OK, so here's our setup: we have two Windows 2003 domain controllers, DC01 and DC02, and I am trying to replace them with Windows 2008 R2 machines named DC1 and DC2. I prepared the Windows Server 2003 forest schema for a domain controller that runs Windows Server 2008 or Windows Server 2008 R2. Then, with both of the new servers up as member servers, I ran dcpromo on DC1 using the advanced option and added it successfully to my existing domain. Its roles are GC, DNS and Active Directory Domain Services. I transferred the PDC Emulator, RID Pool Manager, and Infrastructure Master roles to DC1; the Schema Master and Domain Naming Master are still on DC01. The first issue I'm encountering: when I dcpromo DC2 and select "Replicate data over the network from an existing domain controller", choosing DC1 as the source, I get the following error:

        Failed to identify the requested replica partner (dc1.xxx.org) as a valid domain controller with a machine account for (DC2$). This is likely due to either the machine account not being replicated to this domain controller because of replication latency or the domain controller not advertising the Active Directory Domain Services. Please consider retrying the operation with \dc01.xxx.org as the replica partner. "The server is unwilling to process the request."

    Is this because the Schema Master and Domain Naming Master roles are still on the old DC01? And if so, if I transfer those roles to DC1, what is the risk of breaking my AD? I'm a little paranoid, because this process HAS to be transparent: ANY downtime or interruption will result in me getting a verbal ass-kicking from my I.T. Director. By the way, both of the new servers' DNS settings point to the old DNS servers (DC01 and DC02), not to themselves.
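
    On the role-transfer question: moving the Schema Master and Domain Naming Master between healthy DCs in the same forest is an online, near-instant metadata change rather than a risky migration, and it does not interrupt logons. The classic way, should you decide to do it (server name taken from the question):

        ntdsutil
          roles
          connections
          connect to server DC1
          quit
          transfer schema master
          transfer naming master
          quit
          quit

    That said, the dcpromo error reads more like a replication/DNS-latency issue than an FSMO one; retrying against DC01 as the wizard suggests, or simply waiting for DC2's machine account to replicate to DC1, may be enough.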

  • OpenVZ: Choosing right MySQL-Server depending on host

    - by Scheintod
    What I have: two servers running Wheezy/OpenVZ, with

        - one MySQL container on each host, master/master replicated (mysql1/mysql2)
        - replicated DNS on each host (dns1/dns2)
        - different web containers on each host, regularly backed up to the other

    What I want:

        - Each container should use the "local" MySQL server (the one running on the same hardware node).
        - I'd like to be able to move the web containers between the two hosts.
        - Each container should choose its MySQL server (semi-)automatically.
        - The scheme should keep working if one host is down.

    What I tried: currently I keep track of which container should run on which host via DNS entries, which are queried by scripts, e.g. for questions like "which container should be backed up on/to which host". For choosing the right MySQL server I have one extra entry like "mysql.container_abc", which resolves to either mysql1 or mysql2. So in the applications inside the container I can use "mysql.container_abc" for e.g. mysql_connect, and if I want to move the container around I just need to change the DNS. Now I've noticed one problem with this approach: every mysql_connect generates a DNS query, because the result is not cached, and this slows requests down unnecessarily. What I would like better: some way of passing the information about which host we are running on into the container and using it directly instead of going through DNS, e.g. some way of setting a custom /etc/hosts entry in the container. Or any other great idea; it doesn't have to involve DNS, but it shouldn't require too much special "magic" inside the container.
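
    A minimal sketch of the /etc/hosts idea, run on each hardware node (the alias name and the per-node MySQL IPs are assumptions): each node stamps its own local MySQL address into every container it hosts, so the same hostname works wherever the container lands.

        #!/bin/sh
        # On node A: MYSQL_IP=10.0.0.1 ; on node B: MYSQL_IP=10.0.0.2
        MYSQL_IP=10.0.0.1
        for ct in $(vzlist -H -o ctid); do
            vzctl exec "$ct" "sed -i '/ mysql.local$/d' /etc/hosts; \
                echo '$MYSQL_IP mysql.local' >> /etc/hosts"
        done

    Hooking this into the container start path (or cron) means a migrated container picks up the new node's address on its next boot, and mysql_connect('mysql.local', ...) resolves locally with no DNS round trip.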

  • How does Tunlr work?

    - by gravyface
    For those of you not in the US: Tunlr uses DNS witchcraft to allow you to access US-only (and UK-only stuff like BBC radio online) services and websites like Hulu.com, etc., without using traditional methods like a VPN or web proxy. From their FAQ:

        Tunlr does not provide a virtual private network (VPN). Tunlr is a DNS (domain name system) unblocking service. We’re using sophisticated technologies (a.k.a. the Tunlr Secret Sauce ©) to re-adress certain data envelopes, tricking the receiver into thinking the envelope originated from within the U.S. For these data envelopes, Tunlr is transparently creating a network tunnel from your location to our U.S.-based servers. Any data that’s not directly related to the video or music content providers which Tunlr supports is not only left untouched, it’s also not even routed through Tunlr. In order to use Tunlr, you will have to change the DNS address. See Get started for more information.

    I can't really wrap my head around how this works; I have always assumed that these services performed a geolocation lookup via your client IP. Just really curious as to how this works. EDIT 2: I believe they're only proxying the initial geo check and then modifying the data stream request to include your real IP address, so that the streaming is direct, not proxied.
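
    The mechanics can be observed from the resolver side: a DNS unblocker returns its own proxy's address for the handful of hostnames involved in the geolocation check, and normal answers for everything else. A hedged illustration (the unblocker's resolver IP and the exact intercepted names are placeholders, not published facts):

        # Normal resolver: a CDN edge near you
        dig +short www.hulu.com
        # Unblocker's resolver: the unblocker's US proxy instead
        dig @UNBLOCKER_DNS_IP +short www.hulu.com

    Only those few intercepted hostnames ever touch the proxy; the bulk video traffic flows direct, which matches the FAQ's claim that most data is "not even routed through Tunlr".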

  • 'Bug in Mailman version 2.1.12'

    - by davorg
    I'm working on setting up a server running Plesk 10.4.4 Update #13 on CentOS 6.2. I've configured Mailman and now I want to set up some mailing lists. I've created a list in the Plesk control panel, but when I try to administer the new list (by visiting http://lists.[domain].com/mailman/admin/[listname]) I see the following error:

        Bug in Mailman version 2.1.12
        We're sorry, we hit a bug!
        Please inform the webmaster for this site of this problem. Printing of traceback and other system information has been explicitly inhibited, but the webmaster can find this information in the Mailman error logs.

    I see exactly the same error if I try to go to the list info page at http://lists.[domain].com/mailman/listinfo/[listname]. I would follow the instructions and look in the error logs, but I can't find them; I would expect a file at /var/log/mailman/error, but there's nothing there. My test list seems to work correctly and sends all the expected email. It's just the web pages for the list that seem to be broken. Has anyone else seen this? Any suggestions for tracking down and fixing the problem? P.S. I think I've chosen the correct Stack Exchange site, but if this question would be better asked elsewhere, please let me know. UPDATE: I got to the bottom of this, so I'm documenting the answer in case anyone else has the same problem. The fact that I couldn't find the error log was the clue: the Mailman process didn't have permission to create an error log, and it seems that if Mailman can't create one, it responds to any web request with this error page. Creating an error log file (in /var/log/mailman/error) and giving it the correct permissions fixed the problem.
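
    The fix described in the update, spelled out as commands (the mailman user/group name is an assumption; Plesk builds sometimes run Mailman under a different account, so check with ps which user the Mailman CGI actually runs as):

        touch /var/log/mailman/error
        chown mailman:mailman /var/log/mailman/error
        chmod 664 /var/log/mailman/error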

  • Windows: disable remote access of local drive, even by domain admin

    - by Matt
    We have a network of Windows 7 PCs that are managed as part of a domain. What we want is for the domain admin to be unable to view a PC's local drive (C:) unless he is physically at that PC: no Remote Desktop, and no ability to use UNC paths. That is, the domain admin should not be able to put \\user_pc\c$ into Windows Explorer and see all the files on that computer unless he is physically present at the PC itself. Edit: to clarify some of the questions/comments that have come up: yes, I am an admin, but a complete Windows novice. And yes, for the sake of this and my similar questions, it is fair to assume that I am working for someone who is paranoid. I understand the arguments about this being a "social problem versus a technical problem", that "you should be able to trust your admins", etc. But this is the situation in which I find myself. I'm basically new to Windows system administration, but am tasked with creating an environment that is secure by the company owner's definition, and this definition is clearly very different from what most people expect. In short, I understand that this is an unusual request, but I'm hoping there is enough expertise in the ServerFault community to point me in the right direction.
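
    For the UNC half specifically, the usual lever is disabling the administrative shares (C$, ADMIN$) that make \\user_pc\c$ work in the first place. A sketch, with an honest caveat: a domain admin can push this value back or re-enable it via Group Policy, so it raises the bar rather than guaranteeing anything.

        reg add HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters /v AutoShareWks /t REG_DWORD /d 0 /f

    Remote Desktop is a separate control: the "Deny log on through Remote Desktop Services" user-right assignment in local or group policy can be applied to the admins group, again with the caveat that whoever controls Group Policy controls the policy.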

  • Courier-imap login problem after upgrading / enabling verbose logging

    - by halka
    I updated my mail server last night, from Debian etch to lenny. So far I've encountered a problem with my Postfix installation, mainly that I've somehow broken IMAP access. When trying to connect to the IMAP server with Thunderbird, all I get in mail.log is:

        Feb 12 11:57:16 mail imapd-ssl: Connection, ip=[::ffff:10.100.200.65]
        Feb 12 11:57:16 mail imapd-ssl: LOGIN: ip=[::ffff:10.100.200.65], command=AUTHENTICATE
        Feb 12 11:57:16 mail authdaemond: received auth request, service=imap, authtype=login
        Feb 12 11:57:16 mail authdaemond: authmysql: trying this module
        Feb 12 11:57:16 mail authdaemond: SQL query: SELECT username, password, "", '105', '105', '/var/virtual', maildir, "", name, "" FROM mailbox WHERE username = '[email protected]' AND (active=1)
        Feb 12 11:57:16 mail authdaemond: password matches successfully
        Feb 12 11:57:16 mail authdaemond: authmysql: sysusername=<null>, sysuserid=105, sysgroupid=105, homedir=/var/virtual, [email protected], fullname=<null>, maildir=xoxo.sk/[email protected]/, quota=<null>, options=<null>
        Feb 12 11:57:16 mail authdaemond: Authenticated: sysusername=<null>, sysuserid=105, sysgroupid=105, homedir=/var/virtual, [email protected], fullname=<null>, maildir=xoxo.sk/[email protected]/, quota=<null>, options=<null>

    ...and then Thunderbird proceeds to complain that it can't log in / lost the connection. Thunderbird is definitely not configured to connect through SSL/TLS. POP3 (also provided by Courier) is working fine. I've mainly been looking for a way to make the courier-imap logging more verbose. Edit: sorry about the mess; I found that I had been funneling the log through grep imap, which naturally didn't display entries for authdaemond. The verbose logging setting is found in /etc/courier/imapd under DEBUG_LOGIN (set to 1 to enable verbose logging, set to 2 to also dump plaintext passwords to the logfile. Careful.)

  • nginx with ssl: I get a 403 and log "directory index of '...dir...' is forbidden" log message. works fine with unencrypted connection

    - by user72464
    As mentioned in the title, I had nginx working fine with my Rails app until I tried to add the SSL server. The unencrypted connection still works, but SSL always returns a 403 page with the following line in the error log:

        directory index of "/home/user/rails/" is forbidden, client: [my ip], server: _, request: "GET / HTTP/1.1", host: "[server ip]"

    Below is my nginx.conf server block:

        server {
            listen 80;
            listen 443 ssl;
            ssl_certificate /etc/ssl/server.crt;
            ssl_certificate_key /etc/ssl/server.key;
            client_max_body_size 4G;
            keepalive_timeout 5;
            root /home/user/rails;
            try_files $uri/index.html $uri.html $uri @app;
            location @app {
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header X-Forwarded-Proto $scheme;
                proxy_set_header Host $http_host;
                proxy_redirect off;
                proxy_pass http://0.0.0.0:8080;
            }
            error_page 500 502 503 504 /500.html;
            location = /500.html {
                root /home/user/rails;
            }
        }

    The /home/user/rails directory and its parent are readable by everyone and belong to the user nginx. The certificate and key file have the following permissions:

        -rw-r--r-- 1 nginx root 830 Nov 8 09:09 server.crt
        -rw--w---- 1 nginx root 887 Nov 8 09:09 server.key

    Any clue?
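
    Two hedged observations rather than a definitive answer: with a Rails app, the nginx root conventionally points at the app's public/ directory rather than the project root, and try_files usually tries the bare $uri before the directory-index variants, e.g. (the directory layout is an assumption):

        root /home/user/rails/public;
        try_files $uri $uri/index.html $uri.html @app;

    Also note the log says server: _, which can indicate the HTTPS request was answered by a default or catch-all server block rather than this one; checking for another listen 443 block elsewhere in the config may be worthwhile, as may the key file's odd permissions (-rw--w----).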

  • Apache inflate application/ with mod_filter

    - by BGT
    I need to prevent PDF objects from being gzipped. Really, this only needs to happen when the request comes from the Mozilla browser (but since I can't get something as seemingly simple as no-gzip for application/pdf working, I figure it's wiser to start there). From the Apache documentation on mod_filter, I've got the following:

        <Location />
            FilterDeclare gzipDeflate CONTENT_SET
            FilterDeclare gzipInflate CONTENT_SET
            FilterProvider gzipDeflate deflate req=User-Agent $Mozilla/
            FilterProvider gzipInflate inflate resp=Content-Type $application/
            FilterChain +gzipDeflate +gzipInflate
        </Location>

    From my testing, the gzipDeflate filter is doing its job and all the pages whose Content-Type does not start with application are being gzipped. But gzipInflate doesn't seem to be working at all. I've inspected the response in Firebug and verified that the Content-Type being sent down is application/pdf. I'll go ahead and ask a potentially stupid question, though: the response's Content-Type header in its entirety reads "application/pdf; charset=Windows-1252". Does that make any difference, or is $application/ enough to catch it? Any help is greatly appreciated. One other point: the URL that returns the PDF object does not have a .pdf extension. The PDF itself is stored in an Oracle database as a blob and appended to the page when appropriate (all the URLs in the system use the same baseline). This was part of an original inquiry by a helpful member at Stack Overflow, who pointed me towards mod_filter and suggested I post the question here.
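
    If the end goal is simply "never gzip PDFs", a simpler construction than deflate-then-inflate is to make compression itself conditional on the response type, using mod_deflate alone (a sketch; list only the types you do want compressed):

        # Compress the usual text types; anything not listed, including
        # application/pdf, passes through uncompressed.
        AddOutputFilterByType DEFLATE text/html text/plain text/css application/javascript

    Whether the "; charset=Windows-1252" parameter interferes with type matching in your mod_filter rule is worth testing explicitly; if it does, a rule keyed on the request path (setting the no-gzip environment variable via SetEnvIf) is the usual fallback.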

  • Re-installing Windows on an old laptop

    - by Khaled
    I have an old laptop and I want to reinstall Windows XP on it. The problem is that this laptop does not have an optical drive. I checked the boot sequence in the BIOS; it does not offer an option to boot from USB, only these two:

        1. Boot from HD.
        2. Boot using Realtek agent (network boot).

    I tried copying the Windows CD to the second drive, D:\, and running the installer from there, but I could not format the C:\ drive; Windows complains that setup files would be removed, or something like that. I tried to boot the laptop using PXE, but I could not; it seems the DHCP request did not get answered. I thought I could use a USB CD-ROM drive (I don't have one to try), but it might not work, as there is no option to boot from USB. Do you think it would work? Do I have other options to try? Any recommendations?

  • CDN Rerouting on 404 (file not yet in synch with original storage)

    - by Alan Ristic
    Here is the problem: I've set up my app (on EC2) to store uploaded images directly on Amazon S3. I'd like to be able to serve static files (CDN-style) from my 'home' server, so I wrote a script that syncs from S3, but there is a window of (at least) one minute before things are in sync. I see two solutions to the problem of pics not yet being available on the 'home' server:

        1. I write a script on EC2 (where the app resides) to fetch from the DB the pics that have a status of "not-yet-synch", which is the default state when a user uploads a picture. The script then pings each picture and, if it gets an OK response, updates the DB from "not-yet-synch" to "synch".
        2. The preferred solution would be to let Apache (in this case) redirect a request for an image to S3 when it sees a 404 (i.e. it doesn't find the requested image). This way I wouldn't need the script from solution 1.

    Which approach do you suggest I take to solve this redundancy problem? What is the practice in production environments? To further clarify: I'd like to serve images from the 'home' server first, and if that fails, serve them from S3. Tnx, Alan
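
    Option 2 as an Apache sketch (mod_rewrite; the bucket URL and path prefix are placeholders): serve the local copy when it exists, otherwise bounce the client to S3 with a temporary redirect so the local copy takes over once the sync lands.

        RewriteEngine On
        # If the requested file does not exist on disk yet...
        RewriteCond %{REQUEST_FILENAME} !-f
        # ...redirect (302, deliberately not 301) to the same path on S3.
        RewriteRule ^/images/(.*)$ http://BUCKET.s3.amazonaws.com/images/$1 [R=302,L]

    The 302 matters: a permanent redirect would let browsers and caches pin the S3 URL even after the file arrives locally.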

  • Calling Excel from PHP 5 through COM fails on Windows 7 when Apache started through Task Planner

    - by Stefan Pantke
    I am currently writing an application which controls Excel through COM: the app creates a COM-based Excel instance, opens some XLS files and reads their contents.

        Scenario I: On Windows 7, I start Apache and MySQL using xampp-control with system administrator rights. All works as expected; the PHP-based controller script interacts with Excel as expected.

        Scenario II: A problem appears if I start Apache and MySQL as 'background jobs'. Here is how: I created two jobs using the Windows 7 Task Planner, one running apache_start.bat, the other running mysql_start.bat. Both tasks run as SYSTEM with elevated privileges when Windows 7 boots. Apache and MySQL work as expected: Apache serves HTTP requests from clients and PHP is able to talk to MySQL. But when I call the PHP controller, which calls and interacts with Excel using COM, I receive an error. The error seems to come from Excel [not COM itself] and reads like:

            Excel can't read the XLS file
            Excel failed to save the file due to an ill-named worksheet

    Interestingly, during the first run of the PHP-based controller script it takes a few seconds to render the error message; each subsequent run renders it immediately. The Windows system logs didn't show a single problem report entry. Note that neither the PHP program nor the Apache instance changed; only the way Apache was started did. The PHP controller script is certainly able to read the file system, since it obtains the paths to the XLS files via scandir() of a certain directory. Concurrency issues can't be the cause of the problem, as a single instance of the PHP controller interacts with Excel. Question: could someone provide details on why this happens?
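
    A widely reported gotcha that fits this symptom (offered as an assumption to test, not a confirmed diagnosis): Office COM automation from a non-interactive SYSTEM session fails in odd, Excel-flavoured ways unless a Desktop folder exists under the system profile, because Excel tries to resolve a desktop for its hidden window.

        :: Create both folders, then re-run the scheduled tasks:
        mkdir "C:\Windows\System32\config\systemprofile\Desktop"
        mkdir "C:\Windows\SysWOW64\config\systemprofile\Desktop"

    If that doesn't help, the usual fallback is to run the tasks under a real user account ("Run whether user is logged on or not") rather than SYSTEM, since Microsoft explicitly does not support server-side Office automation from service contexts.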
