Search Results

Search found 73679 results on 2948 pages for 'get http client info'.


  • Windows 2008 RemoteAPP client disconnects within a matter of minutes

    - by Jeroen Wilke
    I'm having an odd problem with Windows 2008 TS, and remote applications specifically. The situation is as follows:

    - TS idle timeout is disabled via GPO
    - TS terminates disconnected sessions after 1 hr (via GPO)

    My users can log on to the Terminal Server and get a full desktop, OR use .rdp files that give access to a few remote applications. When a user connects to a full desktop, everything is fine and dandy: they will remain logged on indefinitely, and when they disconnect, the session is terminated after an hour. However, when a user connects using a remote application link, the client seems to disconnect after only a few minutes of inactivity; when you click the window, the session reconnects.

    Event IDs on the TS server:

    - 4779: This event is generated when a user disconnects from an existing Terminal Services session, or when a user switches away from an existing desktop using Fast User Switching.
    - 4778: This event is generated when a user reconnects to an existing Terminal Services session, or when a user switches to an existing desktop using Fast User Switching.

    Users are connecting directly to 3389, not using a TS Gateway at the moment. This behavior is consistent across the different clients we have: full desktop is fine, RemoteApp constantly disconnects. The .rdp file used doesn't list any interesting parameters, aside from what application to launch and where to find it.

    Can someone explain to me how there can be a difference in behaviour between full desktop and RemoteApp, since essentially they use the exact same client?

    Regards, Jeroen
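
    For reference, the limits those GPOs write can be read straight from the registry on the TS box, which makes it easy to confirm both connection types are seeing the same policy. A quick check; the value names are the standard Session Time Limits ones, and the path below is the usual policy key:

        reg query "HKLM\SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services" /v MaxIdleTime
        reg query "HKLM\SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services" /v MaxDisconnectionTime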

    Read the article

  • NFS v4, HA Migration, and stale handles on clients

    - by Karl Katzke
    I'm managing a server running NFS v4 with Pacemaker/OpenAIS. NFS is configured to use TCP. When I migrate the NFS server to another node in the Pacemaker cluster, even though the metadata is persisted, connections from the clients 'hang' and eventually time out after 90 seconds. After that, the old mountpoint becomes 'stale' and the mounted files can no longer be accessed. The 90-second grace period seems to be part of the server configuration and not the client configuration. I see this message on the server:

        kernel: NFSD: starting 90-second grace period

    If I restart the NFS client on the client nodes after I migrate (unmounting and then remounting the share), then I don't experience the problem, but connections and file transfers are still interrupted.

    Three questions:

    - What is the 90-second grace period? What's it there for?
    - How can I keep the files from going stale on the clients without restarting them after I migrate the NFS server to another node?
    - Is it actually possible to migrate the NFS server without having large file uploads drop?
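
    As a client-side stopgap for the second question, lazily unmounting and remounting the share after a migration at least avoids restarting anything else; a rough sketch, with a placeholder cluster address and export path:

        # lazy unmount: detach now, clean up once busy handles close
        umount -l /mnt/share
        mount -t nfs4 -o proto=tcp,hard,timeo=600,retrans=5 nfs-vip.example.com:/export /mnt/share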

    Read the article

  • PXE bootable image for terminal server?

    - by HeavenCore
    We have 300 Windows XP machines on cruddy old hardware across the company. With extended support for XP ending April next year, we're looking into our options. A couple of options:

    1. Replace the 300 PCs with full Windows 7 PCs (£100k+?) - no use of terminal server (our current model)
    2. Replace the 300 PCs with off-the-shelf thin clients and make use of our terminal server - cheaper clients, but Terminal Server CALs required?
    3. Keep the 300 PCs, replace Windows XP with a Linux thin client capable of connecting to our terminal server - no hardware costs, just Terminal Server CALs required?
    4. Keep the 300 PCs - remove the hard drives and make use of a PXE-bootable "thin client" to connect to our terminal server (see the sketch below)

    If we were to choose option 4, what are the options out there? Are there any official PXE-bootable thin clients for terminal server? If so, what are the licence requirements? Are there options we haven't considered? There must be lots of companies out there in this situation - curious what the current trend is for this problem?

    Edit: Option 5 - Create a bootable Windows PE image with RDP auto-start and use that as a "thin client" for our terminal server - is Windows PE licence-free in such a model?
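
    As a sketch of what option 4 can look like: Linux thin-client distributions such as Thinstation are built to PXE-boot and auto-launch an RDP session. The pxelinux.cfg/default entry is roughly the following; the kernel and initrd file names depend on the image you build:

        DEFAULT thinclient
        LABEL thinclient
          KERNEL vmlinuz
          APPEND initrd=initrd.img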

    Read the article

  • How do I fix libdispatch problem crashing Mac OS X apps?

    - by david-ocallaghan
    In the last day I have started having a lot of brokenness on my Mac (MacBook Air running Mac OS X 10.6.2 with all software updates). Most noticeably, iTunes no longer syncs with my iPhone. It fails with a crash dialog reporting "AppleMobileDeviceHelper quit unexpectedly" and an error dialog "iTunes was unable to load dataclass information from SyncServices. Reconnect or try again later." I've attempted the fix at support.apple.com/kb/HT1747 but it failed.

    I've also been having problems (at first seemingly unrelated) with the horrible Cisco VPN client, which started giving me this error:

        Error 51: Unable to communicate with the VPN subsystem

    I followed the steps at www.anders.com/cms/192/CiscoVPN/Error.51:.Unable.to.communicate.with.the.VPN.subsystem which don't seem to work for me, although I can connect if I use the command line with sudo:

        sudo vpnclient connect MyProfile

    I had a look in the Console app at the diagnostic messages and I noticed a pattern: a number of apps were reporting "BUG IN CLIENT OF LIBDISPATCH". The affected programs are:

    - AppleMobileBackup
    - AppleMobileDeviceHelper
    - Safari Webpage Preview Fetcher
    - cvpnd (the Cisco VPN daemon)

    Of these, only the last is non-Apple software! The common text in the diagnostic messages is:

        Exception Type:  EXC_BAD_INSTRUCTION (SIGILL)
        Exception Codes: 0x0000000000000001, 0x0000000000000000
        Crashed Thread:  1  Dispatch queue: com.apple.libdispatch-manager
        Application Specific Information:
        BUG IN CLIENT OF LIBDISPATCH: Do not close random Unix descriptors

    I'm beginning to wonder if there's a permissions problem, or corruption of an important library... I should note that I've rebooted several times and verified the disk permissions and the disk. Any help would be great!

    Read the article

  • DD-WRT PPTP VPN problem

    - by Tobias Tromm
    I'm trying to configure a DD-WRT router as a PPTP client. The VPN server is Windows Server 2003. This is my scenario:

    The Windows 2003 server is set to give the VPN client the fixed IP 10.0.0.81 and to add a network route to the remote home. At the remote home I have changed the PPTP options on the DD-WRT to make the connection. The VPN connection is successfully established, and Windows successfully adds the route to the remote home network 192.168.2.x.

    From the remote home I can successfully access any computer on the VPN server side. The problem is when I try to access the remote home from the server side: from there I can only access/ping the DD-WRT itself, by its VPN client IP (10.0.0.81).

    What's wrong? What do I need to do to make this a site-to-site VPN? This is what happens when I try to tracert the remote home from the local home.
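
    If the server side is only adding a host route for the client's VPN IP, that would explain exactly this behaviour: the missing piece for site-to-site is a route for the whole remote LAN pointing at the tunnel endpoint. As a sketch, on the 2003 server, with the addresses from the description above:

        route add 192.168.2.0 mask 255.255.255.0 10.0.0.81 -p

    The -p makes it persistent across reboots; RRAS can alternatively apply an equivalent static route via the dial-in tab of the account the DD-WRT connects with.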

    Read the article

  • PHP FastCGI HTTP Error 500 on Windows 7

    - by CJM
    I've just installed PHP (5.3.1) and MySQL (5.1.44) on my development machine, then used the Web Platform Installer to install a copy of Joomla and Drupal. However, when I try to browse either application, I get an HTTP Error 500:

        Module: FastCgiModule
        Notification: ExecuteRequestHandler
        Handler: PHP_via_FastCGI
        Error Code: 0x00000000
        Requested URL: http://localhost:808/drupal/index.php
        Physical Path: D:\Projects\drupal\index.php
        Logon Method: Anonymous
        Logon User: Anonymous

    PHPInfo.php reports that FastCGI is configured (not sure if that is significant). Surely the fact that PHPInfo.php reports anything at all is an indication that PHP itself is working...?

    I'm struggling to know where to look for a solution... Each application appears to be configured similarly to my other [ASP/ASP.NET] applications.
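
    For what it's worth, these are the php.ini settings the PHP-on-IIS FastCGI documentation calls out, and a mismatch here is a common source of generic 500s; it's only a guess that one of them is the culprit in this case:

        fastcgi.impersonate = 1
        cgi.fix_pathinfo = 1
        cgi.force_redirect = 0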

    Read the article

  • Can't find standalone Chrome Gmail client that I know exists

    - by Carson
    I'm on Windows. A couple of years ago, when I switched from Outlook to Gmail (Google Apps), Google provided this awesome little standalone Gmail client that was just a single-purpose Chrome install. It launched like a normal application and stayed updated when I updated Chrome. It was Chrome in a separate application that launched only Gmail, stayed logged in really well, and "felt" like a Gmail mail client, with the Gmail interface. It had its own little red envelope icon; it was a Windows app. (I remember there was no Mac equivalent.) I found it while looking through the "this is how you get your company to switch to Gmail" documentation that Google provided.

    I just repaved my box and now I'm looking for this thing again, and I had no idea it would be impossible to find. I've spent literally 2 hours looking, searching, googling, etc. I'm losing my mind. Anyone know how I can get my hands on this? I used it all day every day for 2 years, so I know it exists :), but I cannot find it. Any assistance would be gratefully received.
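
    In case it helps anyone with the same itch: much the same effect can be had with Chrome's app mode, either via "Create application shortcuts" in the Chrome menu or with a shortcut target like the following (default install path assumed):

        "C:\Program Files (x86)\Google\Chrome\Application\chrome.exe" --app=https://mail.google.com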

    Read the article

  • client flips between internal and external IP addresses??

    - by jmiller-miramontes
    I have what seems like a not-particularly-complicated home network, all things considered: a DSL line comes in to a modem/router, which goes off to a switch, which supports a bunch of machines. My machines live in a 192.168.0.x address space; however, I'm running some public servers on the network, so I have a block of 8 (5, really) static IP addresses that are mapped to the servers by the router. The non-servers get 192.168.0.x addresses via NAT; some machines have static addresses and some get addresses from DHCP. Locally, I'm running a DNS server (named) to map between the domain names and the 192.168 address space. Somewhat messy, but everything basically works.

    Except: One of my local non-server clients occasionally switches from its internal address to its external address. That is, if I check the logs of a website I'm running internally, the hits coming from this client sometimes show up with the internal 192.168 address, and sometimes with the external (216.103...) address. It will flip back and forth for no apparent reason, without my doing anything. This can be a problem in terms of how the clients interact with the way I have some of the clients' SSH systems configured (e.g., allowing access from the internal network but not the external network), but it also Just Seems Wrong.

    I will confess that I'm kinda skating on the very edge of my networking competence here, but I can't for the life of me figure out what's going on. If it helps, the client in question is running Mac OS X / 10.6; its address is statically assigned, is not one of the five externally-accessible addresses, and gets its DNS from (first) the internal DNS server and (second) my ISP's DNS servers. I can't swear that none of the other NAT clients are also showing this problem; the one I'm dealing with is my everyday machine, so this is where I run into it.

    Does anybody out there have any advice? This is driving me crazy...
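
    One way to see whether this is split-DNS related - the client occasionally resolving the site's public A record and reaching it hairpinned through the router - is to compare answers from the two resolvers the client is configured with; a sketch with placeholder names and an assumed internal server address:

        dig @192.168.0.1 www.example.com +short     # the internal named
        dig @8.8.8.8 www.example.com +short         # an external resolver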

    Read the article

  • Apache HTTP Server - Other local network pc can't access web application [closed]

    - by Manellen
    I have a problem accessing my web application from other PCs via the LAN. Here is the scenario: I have 3 computers directly connected to the LAN, and one of them has Apache HTTP Server installed along with a web application. The two other computers try to access the web application on the computer where Apache HTTP Server is installed, but they can't reach it; they just get an error page. Does anyone know how to fix this (i.e., how to make those two computers able to access the web application over the LAN)? Thanks in advance.
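
    Two usual suspects worth checking, sketched with placeholder paths, and assuming the Apache machine runs Windows, where the built-in firewall blocks inbound port 80 by default:

        netsh advfirewall firewall add rule name="Apache HTTP" dir=in action=allow protocol=TCP localport=80

    and in httpd.conf (2.2 syntax), a Directory block for the docroot that isn't restricted to localhost:

        <Directory "C:/Apache/htdocs">
            Order allow,deny
            Allow from all
        </Directory>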

    Read the article

  • 401 Using Multiple Authentication methods IE 10 only

    - by jon3laze
    I am not sure if this is more of a coding issue or a server setup issue, so I've posted it on Stack Overflow and here...

    On our production site we've run into an issue that is specific to Internet Explorer 10. I am using jQuery to do an ajax POST to a web service on the same domain, and in IE10 I am getting a 401 response; IE9 works perfectly fine. I should mention that we have mirrored code in another area of our site and it works perfectly fine in IE10. The only difference between the two areas is that one is under a subdomain and the other is at the root level:

        www.my1stdomain.com vs. portal.my2nddomain.com

    The directory structure on the server for these is:

        \my1stdomain\webservice\name\service.aspx
        \portal\webservice\name\service.aspx

    Inside of the \portal\ and \my1stdomain\ folders I have a page that does an ajax call; both pages are identical:

        $.ajax({
            type: 'POST',
            url: '/webservice/name/service.aspx/function',
            cache: false,
            contentType: 'application/json; charset=utf-8',
            dataType: 'json',
            data: '{ "json": "data" }',
            success: function() { },
            error: function() { }
        });

    I've verified that permissions are the same on both folders on the server side. I've applied a workaround of placing <meta http-equiv="X-UA-Compatible" value="IE=9"> to force compatibility view (putting IE into compatibility mode fixes the issue). This seems to be working in IE10 on Windows 7; however, IE10 on Windows 8 still sees the same issue.

    These pages are classic ASP with the headers that are being included, and there are no other meta tags being used. The doctype is specified as <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//" "http://www.w3.org/TR/html4/loose.dtd"> on the portal page and <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> on the main domain.

    UPDATE 1

    I used Microsoft Network Monitor 3.4 on the server to capture the request. I used the following filter to capture the 401:

        Property.HttpStatusCode.StringToNumber == 401

    This was the response:

        - Http: Response, HTTP/1.1, Status: Unauthorized, URL: /webservice/name/service.aspx/function Using Multiple Authentication Methods, see frame details
            ProtocolVersion: HTTP/1.1
            StatusCode: 401, Unauthorized
            Reason: Unauthorized
          - ContentType: application/json; charset=utf-8
            - MediaType: application/json; charset=utf-8
                MainType: application/json
                charset: utf-8
            Server: Microsoft-IIS/7.0
            jsonerror: true
          - WWWAuthenticate: Negotiate
            - Authenticate: Negotiate
                WhiteSpace:
                AuthenticateData: Negotiate
          - WWWAuthenticate: NTLM
            - Authenticate: NTLM
                WhiteSpace:
                AuthenticateData: NTLM
            XPoweredBy: ASP.NET
            Date: Mon, 04 Mar 2013 21:13:39 GMT
            ContentLength: 105
            HeaderEnd: CRLF
          - payload: HttpContentType = application/json; charset=utf-8
            HTTPPayloadLine: {"Message":"Authentication failed.","StackTrace":null,"ExceptionType":"System.InvalidOperationException"}

    The thing here that really stands out is:

        Unauthorized, URL: /webservice/name/service.aspx/function Using Multiple Authentication Methods

    With this I'm still confused as to why this only happens in IE10, if it's a permission/authentication issue. What was added in 10, and where should I be looking for the root cause of this?

    UPDATE 2

    Here are the headers from the client machine from Fiddler (server information removed):

    Main

        SESSION STATE: Done.
        Request Entity Size: 64 bytes.
        Response Entity Size: 9 bytes.

        == FLAGS ==================
        BitFlags: [ServerPipeReused] 0x10
        X-EGRESSPORT: 44537
        X-RESPONSEBODYTRANSFERLENGTH: 9
        X-CLIENTPORT: 44770
        UI-COLOR: Green
        X-CLIENTIP: 127.0.0.1
        UI-OLDCOLOR: WindowText
        UI-BOLD: user-marked
        X-SERVERSOCKET: REUSE ServerPipe#46
        X-HOSTIP: ***.***.***.***
        X-PROCESSINFO: iexplore:2644

        == TIMING INFO ============
        ClientConnected: 14:43:08.488
        ClientBeginRequest: 14:43:08.488
        GotRequestHeaders: 14:43:08.488
        ClientDoneRequest: 14:43:08.488
        Determine Gateway: 0ms
        DNS Lookup: 0ms
        TCP/IP Connect: 0ms
        HTTPS Handshake: 0ms
        ServerConnected: 14:40:28.943
        FiddlerBeginRequest: 14:43:08.488
        ServerGotRequest: 14:43:08.488
        ServerBeginResponse: 14:43:08.592
        GotResponseHeaders: 14:43:08.592
        ServerDoneResponse: 14:43:08.592
        ClientBeginResponse: 14:43:08.592
        ClientDoneResponse: 14:43:08.592
        Overall Elapsed: 0:00:00.104

        The response was buffered before delivery to the client.

        == WININET CACHE INFO ============
        This URL is not present in the WinINET cache. [Code: 2]

    Portal

        SESSION STATE: Done.
        Request Entity Size: 64 bytes.
        Response Entity Size: 105 bytes.

        == FLAGS ==================
        BitFlags: [ClientPipeReused, ServerPipeReused] 0x18
        X-EGRESSPORT: 44444
        X-RESPONSEBODYTRANSFERLENGTH: 105
        X-CLIENTPORT: 44439
        X-CLIENTIP: 127.0.0.1
        X-SERVERSOCKET: REUSE ServerPipe#7
        X-HOSTIP: ***.***.***.***
        X-PROCESSINFO: iexplore:7132

        == TIMING INFO ============
        ClientConnected: 14:37:59.651
        ClientBeginRequest: 14:38:01.397
        GotRequestHeaders: 14:38:01.397
        ClientDoneRequest: 14:38:01.397
        Determine Gateway: 0ms
        DNS Lookup: 0ms
        TCP/IP Connect: 0ms
        HTTPS Handshake: 0ms
        ServerConnected: 14:37:57.880
        FiddlerBeginRequest: 14:38:01.397
        ServerGotRequest: 14:38:01.397
        ServerBeginResponse: 14:38:01.464
        GotResponseHeaders: 14:38:01.464
        ServerDoneResponse: 14:38:01.464
        ClientBeginResponse: 14:38:01.464
        ClientDoneResponse: 14:38:01.464
        Overall Elapsed: 0:00:00.067

        The response was buffered before delivery to the client.

        == WININET CACHE INFO ============
        This URL is not present in the WinINET cache. [Code: 2]
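
    One thing worth ruling out is the authentication configuration on the /webservice folder itself, since the server is clearly falling back to offering Negotiate/NTLM there. A hedged web.config sketch that forces anonymous-only on just that path, assuming the service is meant to be anonymous (as the working root-level copy suggests) and that these sections are unlocked at the server level:

        <location path="webservice">
          <system.webServer>
            <security>
              <authentication>
                <anonymousAuthentication enabled="true" />
                <windowsAuthentication enabled="false" />
              </authentication>
            </security>
          </system.webServer>
        </location>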

    Read the article

  • Mac OS X Client With Static DHCP Assignment Requests Wrong IP via Option 50

    - by Starchy
    I have a number of Mac (and a few Linux) laptops getting DHCP from a Force10 layer 3 switch, the only DHCP server on the subnet. There's a global dynamic pool, and for each full-time employee's laptop I have a single-IP static pool set by MAC address. One and only one of the clients, running OS X 10.7.5, consistently fails to get a static assignment. The MAC address in the static pool definition has been carefully re-checked.

    Running tcpdump on a mirrored port when the laptop connects, I see that it is specifically requesting 10.100.0.252 (a dynamic address):

        11:32:10.108280 IP (tos 0x0, ttl 255, id 28293, offset 0, flags [none], proto UDP (17), length 328)
            0.0.0.0.bootpc > broadcasthost.bootps: [udp sum ok] BOOTP/DHCP, Request from 3c:07:54:xx:xx:xx (oui Unknown), length 300, xid 0x1399da89, Flags [none] (0x0000)
            Client-Ethernet-Address 3c:07:54:xx:xx:xx (oui Unknown)
            Vendor-rfc1048 Extensions
              Magic Cookie 0x63825363
              DHCP-Message Option 53, length 1: Request
              Parameter-Request Option 55, length 9:
                Subnet-Mask, Default-Gateway, Domain-Name-Server, Domain-Name
                Option 119, LDAP, Option 252, Netbios-Name-Server, Netbios-Node
              MSZ Option 57, length 2: 1500
              Client-ID Option 61, length 7: ether 3c:07:54:xx:xx:xx
              Requested-IP Option 50, length 4: 10.100.0.252
              Lease-Time Option 51, length 4: 7776000
              Hostname Option 12, length 10: "host-name"
              END Option 255, length 0
              PAD Option 0, length 0, occurs 8

    I haven't been able to find any extra system prefs or unusual software on the laptop. Disabling the interface and rebooting, or temporarily setting the IP manually, both fail to make any difference. Any suggestions appreciated.
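
    If the client is simply re-requesting a stale cached lease (which is what Option 50 suggests), clearing the stored lease and forcing a renew should make it DISCOVER from scratch; on 10.7 the leases live under /var/db/dhcpclient/leases, interface name en0 assumed:

        sudo rm /var/db/dhcpclient/leases/en0*      # remove the cached lease file(s) for en0
        sudo ipconfig set en0 DHCP                  # kick off a fresh DHCP negotiation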

    Read the article

  • tomcat5 HTTP 400 Bad Request

    - by Oneiroi
    OS is CentOS 5.5 x64; rpms are as follows:

        tomcat5-jsp-2.0-api-5.5.23-0jpp.9.el5_5
        tomcat5-common-lib-5.5.23-0jpp.9.el5_5
        tomcat5-servlet-2.4-api-5.5.23-0jpp.9.el5_5
        tomcat5-server-lib-5.5.23-0jpp.9.el5_5
        tomcat5-5.5.23-0jpp.9.el5_5
        tomcat5-jasper-5.5.23-0jpp.9.el5_5

        telnet localhost 8080
        Trying 127.0.0.1...
        Connected to localhost.localdomain (127.0.0.1).
        Escape character is '^]'.
        GET / HTTP/1.0
        Host: localhost

        HTTP/1.1 400 Bad Request
        Server: Apache-Coyote/1.1
        Date: Thu, 16 Sep 2010 15:06:21 GMT
        Connection: close

    alternatives --display java output:

        alternatives --display java
        java - status is manual.
         link currently points to /usr/lib/jvm/jre1.6.0_21/bin/java
        /usr/lib/jvm/jre-1.6.0-openjdk.x86_64/bin/java - priority 16000
         slave keytool: /usr/lib/jvm/jre-1.6.0-openjdk.x86_64/bin/keytool
         slave orbd: /usr/lib/jvm/jre-1.6.0-openjdk.x86_64/bin/orbd
         slave pack200: /usr/lib/jvm/jre-1.6.0-openjdk.x86_64/bin/pack200
         slave rmid: /usr/lib/jvm/jre-1.6.0-openjdk.x86_64/bin/rmid
         slave rmiregistry: /usr/lib/jvm/jre-1.6.0-openjdk.x86_64/bin/rmiregistry
         slave servertool: /usr/lib/jvm/jre-1.6.0-openjdk.x86_64/bin/servertool
         slave tnameserv: /usr/lib/jvm/jre-1.6.0-openjdk.x86_64/bin/tnameserv
         slave unpack200: /usr/lib/jvm/jre-1.6.0-openjdk.x86_64/bin/unpack200
         slave jre_exports: /usr/lib/jvm-exports/jre-1.6.0-openjdk.x86_64
         slave jre: /usr/lib/jvm/jre-1.6.0-openjdk.x86_64
         slave java.1.gz: /usr/share/man/man1/java-java-1.6.0-openjdk.1.gz
         slave keytool.1.gz: /usr/share/man/man1/keytool-java-1.6.0-openjdk.1.gz
         slave orbd.1.gz: /usr/share/man/man1/orbd-java-1.6.0-openjdk.1.gz
         slave pack200.1.gz: /usr/share/man/man1/pack200-java-1.6.0-openjdk.1.gz
         slave rmid.1.gz: /usr/share/man/man1/rmid-java-1.6.0-openjdk.1.gz
         slave rmiregistry.1.gz: /usr/share/man/man1/rmiregistry-java-1.6.0-openjdk.1.gz
         slave servertool.1.gz: /usr/share/man/man1/servertool-java-1.6.0-openjdk.1.gz
         slave tnameserv.1.gz: /usr/share/man/man1/tnameserv-java-1.6.0-openjdk.1.gz
         slave unpack200.1.gz: /usr/share/man/man1/unpack200-java-1.6.0-openjdk.1.gz
        /usr/lib/jvm/jre-1.4.2-gcj/bin/java - priority 1420
         slave keytool: /usr/lib/jvm/jre-1.4.2-gcj/bin/keytool
         slave orbd: (null)
         slave pack200: (null)
         slave rmid: (null)
         slave rmiregistry: /usr/lib/jvm/jre-1.4.2-gcj/bin/rmiregistry
         slave servertool: (null)
         slave tnameserv: (null)
         slave unpack200: (null)
         slave jre_exports: /usr/lib/jvm-exports/jre-1.4.2-gcj
         slave jre: /usr/lib/jvm/jre-1.4.2-gcj
         slave java.1.gz: (null)
         slave keytool.1.gz: (null)
         slave orbd.1.gz: (null)
         slave pack200.1.gz: (null)
         slave rmid.1.gz: (null)
         slave rmiregistry.1.gz: (null)
         slave servertool.1.gz: (null)
         slave tnameserv.1.gz: (null)
         slave unpack200.1.gz: (null)
        /usr/lib/jvm/jre1.6.0_21/bin/java - priority 2
         slave keytool: (null)
         slave orbd: (null)
         slave pack200: (null)
         slave rmid: (null)
         slave rmiregistry: (null)
         slave servertool: (null)
         slave tnameserv: (null)
         slave unpack200: (null)
         slave jre_exports: (null)
         slave jre: (null)
         slave java.1.gz: (null)
         slave keytool.1.gz: (null)
         slave orbd.1.gz: (null)
         slave pack200.1.gz: (null)
         slave rmid.1.gz: (null)
         slave rmiregistry.1.gz: (null)
         slave servertool.1.gz: (null)
         slave tnameserv.1.gz: (null)
         slave unpack200.1.gz: (null)
        Current `best' version is /usr/lib/jvm/jre-1.6.0-openjdk.x86_64/bin/java.

    The same occurs trying HTTP/1.1, and I am at a complete loss as to why.
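
    For a sanity check that rules out hand-typed request framing over telnet (a stray character or missing blank line also produces a 400), the same two requests via curl:

        curl -v -0 http://localhost:8080/    # forces HTTP/1.0
        curl -v http://localhost:8080/       # HTTP/1.1 with a proper Host header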

    Read the article

  • Overriding routes on Openvpn client, iproute, iptables2

    - by sarvavijJana
    I am looking for a way to route packets based on their destination ports, switching between the regular internet connection and an established OpenVPN tunnel. This is my configuration:

    - OpenVPN server (I have no control over it)
    - OpenVPN client running Ubuntu
    - wlan0 (192.168.1.111) - the internet-connected interface

    Several routes are applied on connection to OpenVPN from the server:

        /sbin/route add -net 207.126.92.3 netmask 255.255.255.255 gw 192.168.1.1
        /sbin/route add -net 0.0.0.0 netmask 128.0.0.0 gw 5.5.0.1
        /sbin/route add -net 128.0.0.0 netmask 128.0.0.0 gw 5.5.0.1

    And I need to route packets according to their destination ports, e.g. 80 and 443 into the VPN and everything else directly to the ISP connection via 192.168.1.1.

    What I have used during my attempts:

        iptables -A OUTPUT -t mangle -p tcp -m multiport ! --dports 80,443 -j MARK --set-xmark 0x1/0xffffffff
        ip rule add fwmark 0x1 table 100
        ip route add default via 192.168.1.1 table 100

    I was trying to apply these settings using the up/down options of the OpenVPN client configuration. All my attempts resulted in successful packet delivery and response only via the VPN tunnel. For packets routed bypassing the VPN I used some SNAT to gain a proper source address:

        iptables -A POSTROUTING -t nat -o $IF -p tcp -m multiport --dports 80,443 -j SNAT --to $IF_IP

    These failed at the SYN-ACK stage, like so:

        "70","192.168.1.111","X.X.X.X","TCP","34314 > 81 [SYN] Seq=0 Win=5840 Len=0 MSS=1460 TSV=18664016 TSER=0 WS=7"
        "71","X.X.X.X","192.168.1.111","TCP","81 > 34314 [SYN, ACK] Seq=0 Ack=1 Win=5792 Len=0 MSS=1428 TSV=531584430 TSER=18654692 WS=5"
        "72","X.X.X.X","192.168.1.111","TCP","81 > 34314 [SYN, ACK] Seq=0 Ack=1 Win=5792 Len=0 MSS=1428 TSV=531584779 TSER=18654692 WS=5"
        "73","192.168.1.111","X.X.X.X","TCP","34343 > 81 [SYN] Seq=0 Win=5840 Len=0 MSS=1460 TSV=18673732 TSER=0 WS=7"

    I hope someone has already overcome such a situation or knows a better approach to fulfill these requirements. Please kindly give me good advice or a working solution.
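
    One detail that commonly bites this exact setup is reverse-path filtering silently dropping replies that arrive outside the kernel's preferred (VPN) path. A sketch of the full rule set with rp_filter relaxed; interface, mark and addresses as above, and the SNAT now matching the marked bypass traffic rather than ports 80/443:

        iptables -t mangle -A OUTPUT -p tcp -m multiport ! --dports 80,443 -j MARK --set-mark 0x1
        ip rule add fwmark 0x1 table 100
        ip route add default via 192.168.1.1 dev wlan0 table 100
        iptables -t nat -A POSTROUTING -o wlan0 -m mark --mark 0x1 -j SNAT --to-source 192.168.1.111
        sysctl -w net.ipv4.conf.all.rp_filter=0
        sysctl -w net.ipv4.conf.wlan0.rp_filter=0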

    Read the article

  • Opscode Chef nginx compile from source issue reports successful run but does nothing

    - by v_abhi_v
    I am trying to install nginx from source with Opscode Chef, and it's a bit weird: the run completes without complaint, but nginx never actually gets installed. This is what my role attributes look like:

        "nginx": {
          "default_site_enabled": false,
          "version": "1.2.6",
          "init_style": "init",
          "install_method": "source",
          "configure_flags": [
            "--without-http_access_module",
            "--without-http_auth_basic_module",
            "--without-http_autoindex_module",
            "--without-http_browser_module",
            "--without-http_charset_module",
            "--without-http_fastcgi_module",
            "--without-http_memcached_module",
            "--without-http_referer_module",
            "--without-http_scgi_module",
            "--without-http_split_clients_module"
          ],
          "log_dir": "/var/log/nginx",
          "binary": "/opt/nginx/sbin/nginx",
          "source": {
            "prefix": "/opt/nginx/dist",
            "modules": ["http_ssl_module", "http_gzip_static_module"]
          }
        }

    The Chef log shows:

        [2012-12-19T02:37:44+00:00] INFO: Processing bash[compile_nginx_source] action run (nginx::source line 82)
        [2012-12-19T02:37:45+00:00] INFO: bash[compile_nginx_source] ran successfully

    I am clueless as to what's going on :(
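
    For reference, attributes set in a role only reach the node if they sit under default_attributes (or override_attributes), and the source build only happens if nginx::source is actually in the run list; a minimal role sketch, with names guessed at:

        {
          "name": "web",
          "default_attributes": {
            "nginx": {
              "install_method": "source",
              "version": "1.2.6",
              "source": {
                "prefix": "/opt/nginx/dist",
                "modules": ["http_ssl_module", "http_gzip_static_module"]
              }
            }
          },
          "run_list": ["recipe[nginx::source]"]
        }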

    Read the article

  • How to disable Apache http compression (mod_deflate) when SSL stream is compressed

    - by Mohammad Ali
    I found that Google Chrome supports SSL compression, and Firefox should support it soon. I'm trying to configure Apache to disable HTTP compression (mod_deflate) when SSL compression is already in use, to avoid the CPU overhead of compressing twice, with the configuration option:

        SetEnvIf SSL_COMPRESS_METHOD DEFLATE no-gzip

    While the custom log (using %{SSL_COMPRESS_METHOD}x) shows that the SSL-layer compression method is DEFLATE, the above option did not work and the HTTP response content is still being compressed by Apache. I had to use the option:

        BrowserMatchNoCase ".Chrome." no-gzip

    I would prefer a more general method, in case other browsers add SSL compression or someone has a version of Chrome that does not do SSL compression.
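
    A likely reason the SetEnvIf never fires is that SetEnvIf only examines request attributes, not mod_ssl's connection variables. mod_rewrite can read those via its %{SSL:...} lookup, so a hedged alternative (requires mod_rewrite and mod_ssl):

        RewriteEngine On
        RewriteCond %{SSL:SSL_COMPRESS_METHOD} DEFLATE
        RewriteRule .* - [E=no-gzip:1]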

    Read the article

  • haproxy not passing X_FORWARD_FOR on HTTP POST

    - by Mark L
    Hello, I've set up HAProxy with the forwardfor option so it'll pass on the user's IP to PHP via $_SERVER["HTTP_X_FORWARDED_FOR"]. If the page request isn't a POST, the header is populated fine, but if it is a POST then it won't be populated. Any ideas where I've gone wrong? Thanks everyone!

    My whole HAProxy conf file for reference:

        global
            log 127.0.0.1 local0
            log 127.0.0.1 local1 notice
            #log loghost local0 info
            maxconn 4096
            #chroot /usr/share/haproxy
            user haproxy
            group haproxy
            daemon
            #debug
            #quiet

        defaults
            log global
            mode http
            option httplog
            option dontlognull
            retries 3
            option redispatch
            maxconn 4096
            contimeout 5000
            clitimeout 50000
            srvtimeout 50000

        listen webfarm :80
            mode http
            balance roundrobin
            option forwardfor
            server webA 192.168.240.4 weight 1 maxconn 2048 check
            server webB 192.168.240.3 weight 1 maxconn 2048 check

        listen smtp :25
            mode tcp
            option tcplog
            balance roundrobin
            server smtp 192.168.240.4:25 check
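
    If this turns out to be keep-alive related - in 1.3/1.4 tunnel mode, haproxy only inspects the first request on a connection, so later requests on a reused connection (often the POSTs) never get the header - the usual fix is forcing one request per connection, e.g.:

        listen webfarm :80
            mode http
            balance roundrobin
            option forwardfor
            option httpclose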

    Read the article

  • Big IP F5 outbound HTTP issues

    - by mbuk2k
    We've tried upgrading from 9.x to 10.2 on our F5 Big IP 3400 and everything went over fine apart from one thing. We're unable to establish any outbound HTTP (80) connections from any servers that are assigned to a virtual server. This is something that worked before and is required for certain calls our servers need to make. Interestingly HTTPS (443) connections work fine, it's literally just anything outbound over port 80 seems to fail. Does anyone know if anything has changed between 9.4 and 10.2 that would mean additional config would need to be made to allow for external HTTP connections? Any advice would be appreciated, thank you

    Read the article

  • Seeking web-based FTP client for very large file upload

    - by Paul M. Nguyen
    I have looked around for these for some time... the limits imposed by the web server and/or the dynamic programming environment (e.g. PHP) are far too restrictive for the application I'm working on. We need to be able to move large graphics and video files to and from clients (ranging from tens of MB to a few GB in a single file). Plain FTP with a proper desktop client will do the trick, and we're hosting this in Amazon EC2 with EBS. User management will be done from the office via webmin. Users are chroot-jailed into their home dir by proftpd. net2ftp will work for many clients, but we often need to move single files that approach 1GB or exceed 2-3GB which is way out of the range of any http-based uploader. So we turn to Java or Flash - can they do it? From within the web browser establish an FTP connection and grab a huge file? There are licensed applets and such out there, but none seem convincing. Again, I'm looking for some code that can speak FTP and read (& write?) the local disk, that is delivered in a web browser, and can move single files of 2GB+. The reason for having a web-based interface to FTP is to skip the software installation step for our clients. I will consider proper desktop client software as long as it's "portable" and at least Win+Mac and can be easily configured by lay-man users in a hurry.

    Read the article

  • XP Client for NFS failure dialog on startup, but drive mapping works

    - by Matt Bennett
    I'm mounting an NFS share on some Windows machines using the tools that come in the Services for UNIX Administration toolkit. I've set up the User Name Mapping service to use local passwd and group files. I had to manually start the User Name Mapping service, and then created an 'advanced map' from the XP machine's user to a UID that exists on my NFS server, like so:

        Windows User: Matt Bennett
        UNIX Domain: PCNFS
        UNIX User: mattbennett
        UID: 10250
        Primary: *

    I can map a network drive without any issues, and it correctly identifies the UID and GID to use, but when I reboot I get this message:

        "An error occurred while connecting to the NFS server. Make sure that the Client for NFS service has started. If the problem persists make sure Client for NFS service can communicate with User Name Mapping or PCNFS server."

    After dismissing the dialog, the machine finishes booting and the network drive is there in My Computer with the title "Disconnected Network Drive", but when I open it I can see the network share without a problem, and then it drops the 'disconnected' from its title. It seems like the services are starting in the wrong order or something, so the first attempt to connect fails but subsequent ones work as expected. There don't seem to be any symptoms apart from the dialog box, but obviously something's not quite right. What have I done wrong?

    Thanks, Matt.
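
    If it really is start order, one candidate fix is making the NFS client service explicitly depend on User Name Mapping so it waits for it at boot. The service names below (NfsClnt, MapSvc) are what SFU typically registers, so verify them first; a sketch:

        sc query | findstr /i "Nfs Map"
        sc config NfsClnt depend= MapSvc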

    Read the article

  • Vista WHS Client stopped resolving local names

    - by andrewcr
    I’m running Windows Home Server PP2 in my home, with 3 client computers: two XP and one Vista. I have a router that provides my local DHCP and the server has a static IP address. The other day the Vista machine hung, and on reboot stopped resolving local names. It will show the green home server client icon in the system tray, but if I attempt to log in to the console, I get a “This computer cannot connect to your home server” message. If I ping the server name from the command line, it does not resolve, and gives a “could not find host” message. Oddly enough, if I browse the network, I can see the server, but double clicking on it fails. The other machines on the local network have no problems seeing the server, and the Vista machine has no problems resolving names from the internet, it just can’t see any local machines. I’m aware that I can work around this by adding entries to my HOSTS file (it does work), but I’d like this to work the way it’s “supposed” to. I’m an experienced computer user and developer, but not a networking whiz. Can anyone tell me how local name resolution is supposed to work in my environment and/or suggest ways to troubleshoot this? Thanks, Andy
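
    Since local names on a workgroup network like this usually resolve via NetBIOS rather than DNS, the name cache and node type are the first things to compare between the Vista box and a working XP box; a quick sweep from the Vista machine (SERVER stands in for the real server name):

        ipconfig /flushdns
        rem purge and reload the NetBIOS name cache:
        nbtstat -R
        rem query the server's name table directly:
        nbtstat -a SERVER
        rem check the NetBIOS node type:
        ipconfig /all | findstr "Node"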

    Read the article

  • Very slow browsing shared folder XP client/host

    - by Ickster
    I have a pretty straightforward setup where I'm storing media files on an XP pro machine, and sharing the folder to be accessed by other XP pro machines around the house. (Typically, there's only one client accessing the share at a time, although there may be several with the share mounted.) It's been working just fine for years, but I've recently started having some problems. A couple of days ago, the host PC had power disconnected while it was running. It was restarted and everything seemed fine initially, but since then browsing the shared folder from client machines has been extremely slow and actually reading data is all but impossible. The problem exists in every access method I've tried: Windows Explorer, VLC dialogs, command line, etc. My first thought was that the disk was experiencing problems, but there are no problems viewing the files locally on the host machine. My second thought was that there was a network problem on the host machine, so I removed and reinstalled drivers for the NIC with no change. My third thought was that there might've been a problem elsewhere on the network, so I swapped out hardware to no avail. I'm regrouping and trying to come up with a methodical approach to figuring out what might be wrong. I would of course be thrilled if you can suggest specific problems (Microsoft KB articles, etc.) that I might check, but I'm not expecting a silver bullet. If you can help me outline an approach to identify the problem (including recommended tools, e.g., disk checkers, network analyzers, etc.) I'd greatly appreciate it.

    Read the article

  • HTTP downloads slow - FTP of same file very fast - Windows 2003

    - by Paul Hinett
    I am having some issues with download speeds on my site via HTTP: I am averaging around 70 kbps downloading a file that is around 70 MB. But if I connect to my server via FTP and download the same file on the same computer/connection, I average about 300+ kbps.

    I know my server has a lot of connections at any one time, probably around 400. My server has a 1 Gbps connection to the internet, so there is plenty of bandwidth available, as the FTP transfer proves. I have no throttling of any kind enabled in IIS.

    If interested, there is a test file here you can download to check the speed: http://filesd.house-mixes.com/test.zip

    I am based in the UK and the server is in Washington, USA, if that makes any difference.

    Paul
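
    On Windows 2003 / IIS 6 the global and per-site bandwidth throttles live in the metabase, so it's worth dumping them even with throttling apparently off; adsutil.vbs ships in the AdminScripts folder (site ID 1 assumed):

        cscript %SystemDrive%\Inetpub\AdminScripts\adsutil.vbs GET W3SVC/MaxBandwidth
        cscript %SystemDrive%\Inetpub\AdminScripts\adsutil.vbs GET W3SVC/1/MaxBandwidth

    An unset property or 0xFFFFFFFF should mean no throttle; anything else would cap HTTP (but not FTP) transfers, which would match the symptom here.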

    Read the article

  • Shared firewall or multiple client specific firewalls?

    - by Tauren
    I'm trying to determine if I can use a single firewall for my entire network, including customer servers, or if each customer should have their own firewall. I've found that many hosting companies require each client with a cluster of servers to have their own firewall. If you need a web node and a database node, you also have to get a firewall, and pay another monthly fee for it.

    I have colo space with several KVM virtualization servers hosting VPS services to many different customers. Each KVM host is running a software iptables firewall that only allows specific ports to be accessed on each VPS. I can control which ports any given VPS has open, allowing a web VPS to be accessed from anywhere on ports 80 and 443, but blocking a database VPS completely to the outside and only allowing a certain other VPS to access it. The configuration works well for my current needs.

    Note that there is not a hardware firewall protecting the virtualization hosts in place at this time. However, the KVM hosts only have port 22 open, are running nothing except KVM and SSH, and even port 22 cannot be accessed except from inside the netblock.

    I'm looking at possibly rethinking my network now that I have a client who needs to transition from a single VPS onto two dedicated servers (one web and one DB). A different customer already has a single dedicated server that is not behind any firewall except iptables running on the system.

    Should I require that each dedicated server customer have their own dedicated firewall? Or can I utilize a single network-wide firewall for multiple customer clusters?

    I'm familiar with iptables, and am currently thinking I'll use it for any firewalls/routers that I need. But I don't necessarily want to use up 1U of space in my rack for each firewall, nor the power consumption each firewall server will take. So I'm considering a hardware firewall. Any suggestions on what is a good approach?

    Read the article

  • Remote Desktop to Server 2008 fails from one particular Win7 client

    - by Jesse McGrew
    I have a VPS running Windows Web Server 2008 R2. I'm able to connect using Remote Desktop from my home PC (Windows 7), personal laptop (Windows 7), and work laptop (Windows XP). However, I cannot connect from my work PC (Windows 7). I receive the error "The logon attempt failed" in the RDP client, and the server event log shows "An account failed to log on" with this explanation:

        Subject:
            Security ID: NULL SID
            Account Name: -
            Account Domain: -
            Logon ID: 0x0
        Logon Type: 3
        Account For Which Logon Failed:
            Security ID: NULL SID
            Account Name: username
            Account Domain: hostname
        Failure Information:
            Failure Reason: Unknown user name or bad password.
            Status: 0xc000006d
            Sub Status: 0xc0000064
        Process Information:
            Caller Process ID: 0x0
            Caller Process Name: -
        Network Information:
            Workstation Name: JESSE-PC
            Source Network Address: -
            Source Port: -
        Detailed Authentication Information:
            Logon Process: NtLmSsp
            Authentication Package: NTLM
            Transited Services: -
            Package Name (NTLM only): -
            Key Length: 0

    I can connect from the offending work PC if I start up Windows XP Mode and use the RDP client inside that. The server is part of a domain but my account is local, so I'm logging in using a username of the form hostname\username. None of the clients are part of a domain. The server uses a self-signed certificate; connecting from home I get a warning about that, but connecting from work I just get the logon error.
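
    The NTLM-only failure from exactly one Win7 box has the smell of an NTLM version mismatch; comparing this value on the failing client, a working client, and the server would confirm it (3 or higher means the client sends NTLMv2):

        reg query "HKLM\SYSTEM\CurrentControlSet\Control\Lsa" /v LmCompatibilityLevel

    The same setting is exposed in secpol.msc as "Network security: LAN Manager authentication level", which a corporate security baseline on the work PC could easily have pinned to an unusual value.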

    Read the article

  • Reverse and Forward DNS set up correctly but sometimes MapReduce job fails

    - by phodamentals
    Ever since we switched our cluster over to communicating via private interfaces and created a DNS server with correct forward and reverse lookup zones, we get this message before the M/R job runs:

        ERROR org.apache.hadoop.hbase.mapreduce.TableInputFormatBase - Cannot resolve the host name for /192.168.3.9 because of javax.naming.NameNotFoundException: DNS name not found [response code 3]; remaining name '9.3.168.192.in-addr.arpa'

    A dig and an nslookup both show that the reverse and forward lookups get good responses with no errors from within the cluster. Shortly after these messages the job runs, but every once in a while we get an NPE:

        Exception in thread "main" java.lang.NullPointerException
        INFO app.insights.search.SearchIndexUpdater - at org.apache.hadoop.net.DNS.reverseDns(DNS.java:93)
        INFO app.insights.search.SearchIndexUpdater - at org.apache.hadoop.hbase.mapreduce.TableInputFormatBase.reverseDNS(TableInputFormatBase.java:219)
        INFO app.insights.search.SearchIndexUpdater - at org.apache.hadoop.hbase.mapreduce.TableInputFormatBase.getSplits(TableInputFormatBase.java:184)
        INFO app.insights.search.SearchIndexUpdater - at org.apache.hadoop.mapred.JobClient.writeNewSplits(JobClient.java:1063)
        INFO app.insights.search.SearchIndexUpdater - at org.apache.hadoop.mapred.JobClient.writeSplits(JobClient.java:1080)
        INFO app.insights.search.SearchIndexUpdater - at org.apache.hadoop.mapred.JobClient.access$600(JobClient.java:174)
        INFO app.insights.search.SearchIndexUpdater - at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:992)
        INFO app.insights.search.SearchIndexUpdater - at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:945)
        INFO app.insights.search.SearchIndexUpdater - at java.security.AccessController.doPrivileged(Native Method)
        INFO app.insights.search.SearchIndexUpdater - at javax.security.auth.Subject.doAs(Subject.java:415)
        INFO app.insights.search.SearchIndexUpdater - at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
        INFO app.insights.search.SearchIndexUpdater - at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:945)
        INFO app.insights.search.SearchIndexUpdater - at org.apache.hadoop.mapreduce.Job.submit(Job.java:566)
        INFO app.insights.search.SearchIndexUpdater - at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:596)
        INFO app.insights.search.SearchIndexUpdater - at app.insights.search.correlator.comments.CommentCorrelator.main(CommentCorrelator.java:72)

    Does anyone else who has set up a CDH Hadoop cluster on a private network with a DNS server get this? CDH 4.3.1 with MR1 2.0.0 and HBase 0.94.6.
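
    TableInputFormatBase does its reverse lookups against whatever hbase.nameserver.address points at, falling back to the system resolver when it's unset, so pinning it to the DNS server that owns the 192.168.3.x reverse zone is worth a try; an hbase-site.xml sketch with a placeholder address:

        <property>
          <name>hbase.nameserver.address</name>
          <value>192.168.3.1</value>
        </property>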

    Read the article
