Search Results

Search found 73679 results on 2948 pages for 'get http client info'.

Page 187/2948 | < Previous Page | 183 184 185 186 187 188 189 190 191 192 193 194  | Next Page >

  • Squid throws error, The requested URL could not be retrieved

    - by Supratik
    Sometimes I am getting the following error:

        The requested URL could not be retrieved
        While trying to retrieve the URL: http://groups.google.com/
        The following error was encountered:
        Unable to determine IP address from host name for groups.google.com
        The dnsserver returned:
        Refused: The name server refuses to perform the specified operation.
        This means that: The cache was not able to resolve the hostname presented in the URL. Check if the address is correct.
        Your cache administrator is root.

    What could be the reason for the above error? Regards, Supratik
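    For reference, a quick way to check whether the Squid host itself can resolve the name (a sketch; the 8.8.8.8 resolver address is only an example, not from the post):

        # Run on the proxy box: does its default resolver answer for the host in the URL?
        dig +short groups.google.com

        # Query a specific nameserver directly (for example the one in /etc/resolv.conf):
        dig @8.8.8.8 +short groups.google.com

    If the query against the configured nameserver is refused as well, the problem lies with the nameserver rather than with Squid.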

    Read the article

  • Debugging an IP Camera

    - by Kevin Boyd
    Further to my previous question on ServerFault here, I can finally view the stream over RTSP; however, I still cannot view the camera stream in a web browser. The IP camera uses an ActiveX control in Internet Explorer, and although I can configure the camera settings from IE, I cannot view the stream: it shows "connecting" for a few seconds and then "disconnecting". I have forwarded the HTTP, RTSP and stream ports of the IP camera; the public port is 7071 and the private port is 7070. When I look at the connections in TCPView, it shows the ActiveX control in IE trying to connect to port 7070, which is quite unusual since it should connect to 7071. Also, the state shows SYN_SENT for some time and then it disconnects. I really have no clue what's going on or why.

    Read the article

  • Backup files from Linux client to Windows Server

    - by Andrew
    I'm trying to back up my files from my Linux box to my Windows Server 2008 as a push, so that when I delete them from my Linux box they remain on my Windows Server. I've found lots of sources that are similar, but most results cover Windows to Linux. I managed to find slightly more similar cases, like "Using rsync and cygwin to Sync Files from a Linux Server to a Windows Notebook PC" and "rsync from Windows PC to remote Linux server", with the most similar being a backup from Linux to Windows Server, but done as a pull from the Windows Server.

    Initially I used Unison, because I thought the 2-way capability would come in handy and I would just have to set some configuration to make it 1-way. Unfortunately, I couldn't find the right configuration and only managed to synchronize using the command unison "profile" -ui text -auto -silent. When I deleted the files on my Linux box, the files on the server got deleted too, which of course isn't what I want. When I looked for Unison options, I only discovered the -force option, which didn't help, since what I want is an incremental update to the server.

    I found out I could achieve this with rsync and the -a (archive) option, which keeps adding files even if I delete them from my Linux box. I installed Cygwin on my Windows Server and configured an SSH daemon, but I can't seem to get it working. I've also already configured Windows Firewall to open port 22 (both inbound and outbound). I used the following command from my Linux box:

        rsync -avrzn /folder/to/be/backed/up/ [email protected]:/cygdrive/c/place/to/store/backed/up/files

    (a - archive, v - verbose, r - recurse into subdirectories, z - compress, n - dry run), but it just won't work. Can anyone help me out? I don't mind using either Unison or rsync, as long as it achieves what I want.
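    For comparison, a minimal sketch of an incremental push over SSH (user, host and paths are placeholders; leaving out --delete is what keeps server-side copies after files are removed locally):

        # Dry run first (-n) to see what would be copied; drop -n to actually transfer.
        # -a already implies -r, and without --delete the server keeps files that were
        # deleted on the Linux box.
        rsync -avzn -e ssh /folder/to/be/backed/up/ \
            user@windows-server:/cygdrive/c/place/to/store/backed/up/files/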

    Read the article

  • Using wildcard domains to serve images without http blocking

    - by iopener
    I read that browsers sometimes block waiting for multiple images from the same host, and I'm trying to do everything I can to speed up page load times. One caveat: I need to serve files over HTTPS. Any opinions about whether this plan is feasible?

    - Set up a wildcard cert for *.domain.com.
    - Whenever I need an image, generate a number based on a hash mod 5 of the filename and append it to an 'img' subdomain (e.g. img1.domain.com, img4.domain.com, img3.domain.com, etc.); the hash makes any given filename always use the same subdomain, so the browser should still be able to cache the images (a rough sketch of this follows below).
    - Configure a dynamic virtualhost record to point all img#. subdomains to /var/www/img.

    I am looking for feedback about this plan. My concerns are:

    - Will I get warnings when my page has https:// links to multiple subdomains?
    - Is the dynamic virtualhost record I'm talking about even possible?
    - Considering the amount of processing this would require, is it likely to produce any kind of overall benefit? I'm probably averaging a half-dozen images per page, with only half changing on each page refresh.

    Thanks in advance for your feedback.
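    A rough sketch of the filename-to-subdomain mapping as a shell one-off; the md5sum/arithmetic details are just one possible way to do it and are an assumption, not part of the original plan:

        # Map a filename to one of img1..img5.domain.com deterministically, so the
        # same file always comes from the same subdomain and stays cacheable.
        f="logo.png"
        n=$(( 0x$(printf '%s' "$f" | md5sum | cut -c1-8) % 5 + 1 ))
        echo "https://img${n}.domain.com/${f}"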

    Read the article

  • Animated HTTP request visualisation on Apache

    - by Simon Bennett
    This is more a question to appease my memory, trying to remember something I saw a while ago. I remember being introduced to a realtime server visualisation tool that showed the current requests Apache was handling as a kind of fireworks effect on screen. Each request or group of requests would be shot across the screen in varying colours. I can't for the life of me remember what it was called, and hunting around here and on Google has left me empty-handed. Just wondering if anybody else is able to pluck this gem from their memory and ease my pain! Thanks

    Read the article

  • OpenVPN client on Amazon EC2

    - by Matt Culbreth
    I have an account with an OpenVPN service, and I'd like to get that running on my EC2 instance running Ubuntu 12.04. I have my config file in /etc/openvpn, and it connects fine when I run sudo openvpn --config matt.ovpn. However, I then lose connectivity to the EC2 machine, and I can't SSH back to it until I reboot. Previously I have done things like sudo ip rule add from IP_ADDRESS table 10 and then sudo ip route add default via GATEWAY_IP table 10, but that's not working on EC2. Any ideas? My private IP address right now is 10.209.29.XXX and my gateway is 10.209.29.1.
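    For reference, a hedged sketch of the policy-routing idea on the instance (the interface name and table number are assumptions; the addresses are the masked ones from the post):

        # Keep traffic sourced from the instance's private address going out via the
        # normal EC2 gateway, so the SSH session survives the VPN's new default route.
        PRIVATE_IP=10.209.29.XXX    # masked in the post; substitute the real private address
        GATEWAY=10.209.29.1
        sudo ip rule add from "$PRIVATE_IP"/32 table 10
        sudo ip route add default via "$GATEWAY" dev eth0 table 10
        sudo ip route flush cache
        sudo openvpn --config matt.ovpn

        # An alternative is to start OpenVPN with --route-nopull and add only the
        # routes you actually need, leaving the default route untouched.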

    Read the article

  • Protect all XML-RPC calls with HTTP basic auth but one

    - by bodom_lx
    I set up a Django project for smartphones that serves XML-RPC methods over HTTPS using basic auth, so all XML-RPC methods require a username and password. I would like to implement an XML-RPC method that provides registration to the system; obviously, this method should not require a username and password. The following is the Apache conf section responsible for basic auth:

        <Location /RPC2>
            AuthType Basic
            AuthName "Login Required"
            Require valid-user
            AuthBasicProvider wsgi
            WSGIAuthUserScript /path/to/auth.wsgi
        </Location>

    This is my auth.wsgi:

        import os
        import sys

        sys.stdout = sys.stderr
        sys.path.append('/path/to/project')
        os.environ['DJANGO_SETTINGS_MODULE'] = 'project.settings'

        from django.contrib.auth.models import User
        from django import db

        def check_password(environ, user, password):
            """
            Authenticates apache/mod_wsgi against Django's auth database.
            """
            db.reset_queries()
            kwargs = {'username': user, 'is_active': True}
            try:
                # checks that the username is valid
                try:
                    user = User.objects.get(**kwargs)
                except User.DoesNotExist:
                    return None
                # verifies that the password is valid for the user
                if user.check_password(password):
                    return True
                else:
                    return False
            finally:
                db.connection.close()

    There are two dirty ways to achieve my aim with the current setup:

    - have a dummy username/password to be used when trying to register to the system;
    - have a separate Django/XML-RPC application on another URL (i.e. /register) that is not protected by basic auth.

    Both of them are very ugly, as I would also like to define a standard protocol to be used for services like mine (it's an open Dynamic Ridesharing Architecture). Is there a way to unprotect a single XML-RPC call (i.e. one defined POST request) even if all other XML-RPC calls over /RPC2 are protected?

    Read the article

  • BIG IP - HTTPS Health Monitor setup

    - by djo
    We have a web site with health-monitoring pages set up so we can take our servers in and out of the BIG-IP as we see fit. We have just moved onto BIG-IP, and the issue I have hit is that you set up health monitors for ports 80 and 443: the port 80 check works fine, but when I try to get the 443 check to look at our file it fails. I am aware that hitting this page on the IP address over HTTPS is going to cause a cert error, but I would have guessed that the BIG-IP would be set up to simply accept the cert and carry on with the check. Is what I want to do possible? Also, is there a way of just using an HTTP monitor for HTTPS? Because if port 80 has stopped serving traffic and I use the same monitor for 443, it will stop traffic there as well. Any help would be great! Thanks

    Read the article

  • Windows 7 MCE client server HTPC

    - by Dan Hook
    My HTPC is downstairs connected to my large screen. I would like to use my desktop upstairs to record over the air HDTV and stream it to the HTPC. I have Windows 7 Professional installed on the desktop. I currently have XP on the HTPC, but I'm going to upgrade it to Windows 7. Is there a particular flavor of Win 7 I should use? Is it possible to record on the desktop and use Windows MCE on the HTPC to watch the recorded content? What about live TV? The consensus I've seen is that Windows 7 cannot be configured as a Windows Media Center Extender. Is that the case? If so, what's the cheapest solution for an extender?

    Read the article

  • Nginx won't send POST to fastcgi backend, but GET works fine?

    - by xyld
    Not sure why, but nginx is happy sending a GET to the fastcgi backend (Mercurial hgwebdir in this case), but simply resorts to the filesystem if the request is a POST. Relevant parts of nginx.conf:

        location / {
            root /var/www/htdocs/;
            index index.html;
            autoindex on;
        }

        location /hg {
            fastcgi_pass unix:/var/run/hg-fastcgi.socket;
            include fastcgi_params;

            if ($request_uri ~ ^/hg([^?#]*)) {
                set $rewritten_uri $1;
            }

            limit_except GET {
                allow all;
                deny all;
                auth_basic "hg secured repos";
                auth_basic_user_file /var/trac.htpasswd;
            }

            fastcgi_param SCRIPT_NAME "/hg";
            fastcgi_param PATH_INFO $rewritten_uri;
            # for authentication
            fastcgi_param AUTH_USER $remote_user;
            fastcgi_param REMOTE_USER $remote_user;
            #fastcgi_pass_header Authorization;
            #fastcgi_intercept_errors on;
        }

    GETs work fine, but a POST produces this error in the error_log:

        2010/05/17 14:12:27 [error] 18736#0: *1601 open() "/usr/html/hg/test" failed (2: No such file or directory), client: XX.XX.XX.XX, server: domain.com, request: "POST /hg/test HTTP/1.1", host: "domain.com"

    What could possibly be the issue? I'm trying to allow read-only access via GETs to the page, but require authorization when using hg push to the same URL, which sends a POST request.
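    For reference, a quick way to reproduce the difference from the command line (hostname and credentials are placeholders taken from the post):

        # GET reaches hgwebdir through the fastcgi socket:
        curl -i http://domain.com/hg/test

        # POST (what `hg push` sends) triggers the static-file lookup and the 404 above:
        curl -i -u user:pass -X POST http://domain.com/hg/test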

    Read the article

  • WSUS Showing Incorrect Version & Client Update Failure but they can check-in

    - by user132199
    One of the issues we are having is that the clients will not download updates from our WSUS server. They check in as they are supposed to and find applicable updates, but they are unable to actually download and install them. The GPO is set correctly. We decided to install the patch KB2720211 to see if it would help alleviate the issue, but it did not. In fact, even stranger, if I check the version installed on WSUS it reads 3.2.7600.226, but as far as I know it should read 3.2.7600.251. If I check Add/Remove Programs to see which Windows updates have been installed, it even lists KB2720211 for WSUS as installed at version 3.2.7600.251. To install this update I followed the published directions. Question: has anyone seen this issue where the patch is installed yet not showing the correct version? What can I try to get my clients to update?

    Read the article

  • How to defend against botnet HTTP requests

    - by Killercode
    I have a server with WHM + cPanel, and 5 of my customers got infected with Zbot. This means the domains they host are constantly receiving requests to certain destinations. I tried to use mod_security, but it seems it can't filter every request; I don't really know why. I still see the connections coming in in the access log, and they are consuming a LOT of bandwidth and server load. Those accounts have already been cleaned, so all of those requests now hit a 404 error (for the ones caught by mod_security I drop the connection). Are there any more ways to defend against these requests?
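    One hedged option alongside mod_security is rate-limiting new connections per source IP at the firewall; a sketch (the thresholds are examples only and need tuning against real traffic):

        # Track new connections to port 80 per source address, and drop a source that
        # opens more than 20 new connections within 10 seconds.
        iptables -A INPUT -p tcp --dport 80 -m state --state NEW \
            -m recent --set --name HTTP
        iptables -A INPUT -p tcp --dport 80 -m state --state NEW \
            -m recent --update --seconds 10 --hitcount 20 --name HTTP -j DROP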

    Read the article

  • How to have internet connection over VPN while "Microsoft Firewall Client for ISA server" is running

    - by blocked
    I have the software mentioned in the title running on my machine. When I connect over VPN to my company's network, my internet connection gets borked, because somehow the ISA firewall blocks it. This is completely idiotic, because my work involves extensive use of the internet, so having to disconnect and reconnect continuously seriously cripples my productivity. (Meaning: I'm tearing my hair out here.) Can I have my VPN connection and somehow still have my internet connection too? I'm open to any solution.

    Read the article

  • Nginx - basic http authentication on PHP-script

    - by half_bit
    I added a PHP script that serves as a "cgi-bin". Configuration:

        location ~^/cgi-bin/.*\.(cgi|pl|py|rb) {
            gzip off;
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_index cgi-bin.php;
            fastcgi_param SCRIPT_FILENAME /etc/nginx/cgi-bin.php;
            fastcgi_param SCRIPT_NAME /cgi-bin/cgi-bin.php;
            fastcgi_param X_SCRIPT_FILENAME /usr/lib/$fastcgi_script_name;
            fastcgi_param X_SCRIPT_NAME $fastcgi_script_name;
            fastcgi_param QUERY_STRING $query_string;
            fastcgi_param REQUEST_METHOD $request_method;
            fastcgi_param CONTENT_TYPE $content_type;
            fastcgi_param CONTENT_LENGTH $content_length;
            fastcgi_param GATEWAY_INTERFACE CGI/1.1;
            fastcgi_param SERVER_SOFTWARE nginx;
            fastcgi_param REQUEST_URI $request_uri;
            fastcgi_param DOCUMENT_URI $document_uri;
            fastcgi_param DOCUMENT_ROOT $document_root;
            fastcgi_param SERVER_PROTOCOL $server_protocol;
            fastcgi_param REMOTE_ADDR $remote_addr;
            fastcgi_param REMOTE_PORT $remote_port;
            fastcgi_param SERVER_ADDR $server_addr;
            fastcgi_param SERVER_PORT $server_port;
            fastcgi_param SERVER_NAME $server_name;
            fastcgi_param REMOTE_USER $remote_user;
        }

    PHP script:

        <?php
        $descriptorspec = array(
            0 => array("pipe", "r"),  // stdin is a pipe that the child will read from
            1 => array("pipe", "w"),  // stdout is a pipe that the child will write to
            2 => array("pipe", "w")   // stderr is a file to write to
        );

        $newenv = $_SERVER;
        $newenv["SCRIPT_FILENAME"] = $_SERVER["X_SCRIPT_FILENAME"];
        $newenv["SCRIPT_NAME"] = $_SERVER["X_SCRIPT_NAME"];

        if (is_executable($_SERVER["X_SCRIPT_FILENAME"])) {
            $process = proc_open($_SERVER["X_SCRIPT_FILENAME"], $descriptorspec, $pipes, NULL, $newenv);
            if (is_resource($process)) {
                fclose($pipes[0]);
                $head = fgets($pipes[1]);
                while (strcmp($head, "\n")) {
                    header($head);
                    $head = fgets($pipes[1]);
                }
                fpassthru($pipes[1]);
                fclose($pipes[1]);
                fclose($pipes[2]);
                $return_value = proc_close($process);
            } else {
                header("Status: 500 Internal Server Error");
                echo("Internal Server Error");
            }
        } else {
            header("Status: 404 Page Not Found");
            echo("Page Not Found");
        }
        ?>

    The problem, though, is that I cannot add basic authentication. As soon as I enable it for location ~/cgi-bin, it gives me a 404 error when I try to open it. How can I solve this? I thought about restricting access to only my second server and then adding basic authentication over a proxy, but there must be a simpler solution. Sorry for the bad title, I couldn't think of a better one.
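    For completeness, one way to create the password file that nginx's auth_basic_user_file would point at (the path and username are examples, not from the post):

        # Prompts for a password and appends an APR1-hashed entry for user "alice".
        printf 'alice:%s\n' "$(openssl passwd -apr1)" | sudo tee -a /etc/nginx/.htpasswd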

    Read the article

  • Getting started with the vCenter Web Client Administration tool

    - by Saariko
    I am trying to access a newly installed vCenter. The documentation clearly says to access the web admin through https://localhost:9443/admin-app, but since I don't have a Windows OS under the vCenter (I use the vCenter Appliance), there is no localhost to use. If I try with the host IP I get an error. This PDF explains how to install the IIS component, but it's for ESX 4 and doesn't talk about the appliance. So, a simple question: how can I access the web-app admin tool? I also found a similar question on the VMware forum, but I can't understand the solution, if any.

    Read the article

  • Cache Control Headers with IIS 7.5

    - by Brad
    I'm trying to wrap my head around client-side (web browser) caching and how it works in relation to IIS 7.5 cache-control headers. In particular: if we want to force clients to reload cached resources, how must IIS be configured? Do we need to set "expire web content immediately" if the resources on the server have a more recent modified date (or ETag value)? Right now we're not setting any cache headers. So if I set a cache header of no-cache (which I think is the equivalent of "expire web content immediately"), will that force the web browser to obtain a new version of a particular file? Or will the browser only request a new version after it deems its current copy stale, and from that point forward not cache it? Would a best practice be to set a cache-control lifetime of 1 week, then 8 days before I know I am going to make a change, set the cache control down to, for instance, 30 minutes? But if I do that and then need to immediately expire an item from users' caches because there was an issue with it, how do I do that?
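    For reference, a quick way to see what cache headers IIS is actually sending for a given resource (the URL is a placeholder):

        # Show only the response headers and pick out the caching-related ones.
        curl -sI http://yourserver/styles/site.css \
            | grep -iE 'cache-control|expires|etag|last-modified'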

    Read the article

  • HTTP Proxypass of subdomain

    - by enedebe
    I'm trying to set up a proxy on my gateway so that everything that comes in for a subdomain, for example sub.mydomain.com, goes to an internal server on port 3000. I'm installing a Redmine server inside my network that has to be reachable from outside. Any idea how to do that? I'm thinking of httpd with ProxyPass, but I don't know how to match just that subdomain and proxy it. My gateway is currently a ClearOS machine. Thanks

    Read the article

  • DD-WRT client bridge access point lost

    - by llazzaro
    OK, I have an AP with DD-WRT firmware (I know it's not the best, but continue reading!). The AP is configured to work as a "transparent" wireless bridge, and it also had a virtual wireless interface to expand the radius of the wifi signal from that same AP. The bridge is working, and computers behind the AP get IPs from the main router, which shares the internet... BUT I can't access the web GUI of the bridge AP. Main problem: the AP is lost, but it's working as a bridge. I can't find it on the network (it doesn't have any IP!), so I can't change any configuration. First solution: reset the AP, but it cannot be done; the reset button doesn't work due to a bug in the DD-WRT micro firmware my Linksys WAP54G has installed (I really hate this firmware, I much prefer the OpenWrt my main router runs). Second solution: arp -a from the main router and from computers behind the AP; it doesn't appear in the list. Any more ideas? The AP must be there at some level, since the bridge is working. I know it's possible the AP has an IP like 192.168.100.2, while my subnet is actually 172.16.X.X. :) Thanks!
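    One hedged way to hunt for it: temporarily give a wired machine behind the AP an address on the suspected old subnet and ping-sweep it (the interface name and ranges are assumptions based on the guess in the post):

        # Add a secondary address on the subnet the AP might still be using,
        # scan it, then remove the address again.
        sudo ip addr add 192.168.100.10/24 dev eth0
        nmap -sn 192.168.100.0/24
        sudo ip addr del 192.168.100.10/24 dev eth0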

    Read the article

  • Log viewer server and client

    - by Scott Crooks
    I'm looking for a log viewing solution for (mostly) Linux and (preferably) Windows too. I want to be able to centralize the log information for a lot of servers so that people in the company can see what's going on on the different servers. I would guess this would involve having a central server which accepts information from the various computers / virtual machines, with (perhaps) a daemon running on each of the servers. Does such software exist?

    Read the article

  • Proxy service like Apache httpd

    - by Aptos
    Currently I am trying to simulate my app as distributed servers, so I run them on localhost:9000 and localhost:9001. I tried using the Apache load balancer, but it is really hard to configure on a Mac. My idea is that the second server, localhost:9001, is kept idle, and requests are redirected to it only when the first server is down. Is there any good free program that can do that (other than Apache httpd)? Extra functions: my application is written in Java and maintains an in-memory object; is there any service that can synchronize that object between the two servers, so each keeps an up-to-date view of the other's state (the second one takes the state of the first one)? Is there any app that can support that? Thank you very much.

    Read the article

  • Domain redirection to port on Windows Server 2008

    - by Rauffle
    I have a Windows server running IIS. I wish to run a piece of software that hosts a web interface on a non-standard HTTP port (let's say, port 9999). I have static DNS entries on my router for two FQDNs, both of which point to the Windows server. I want requests for 'website1' to continue to go to the IIS website on port 80, but requests for 'website2' to go to port 9999 instead, to be handled by the other application. How can I accomplish this? Right now I can get to the application by going to 'website1:9999' or 'website2:9999'.

    Read the article

  • Using Outlook 2007 as POP client with Gmail account

    - by goldenmean
    Hello, I recently started using Outlook 2007 with my Gmail account, using POP settings in Outlook 2007 to access Gmail. In my Gmail settings I have set the option "Enable POP for mail that arrives from now on". 1] How can I download messages already received in my Gmail inbox in the past to my Outlook inbox? 2] How can I selectively download messages from Gmail to Outlook? Thank you. -AD

    Read the article
