Search Results

Search found 11316 results on 453 pages for 'ip geolocation'.


  • Routing and authenticating all access through squid

    - by Knight Samar
    Hi, I want to route all Internet access in my network through a Squid proxy server and authenticate and log all users. I want this to be a client-independent setting so that no one needs to do anything on their browsers or machines. I have set my network gateway as the proxy server so that all traffic will be sent to it. I have done this using options in the DHCP server. Now I tried using Squid as a transparent proxy, but then it won't authenticate in that mode. I tried using iptables to route all traffic to port 3128, but it won't pop up the authentication dialog box from Squid. I tried telling DHCP to give WPAD to all clients by placing a WPAD file on a webserver containing the following for automatic proxy configuration on clients: Changes in dhcpd.conf: option wpad code 252 =test; option wpad "\n\000"; option wpad "http://192.168.1.5/wpad.dat\n"; The WPAD file: function FindProxyForURL(url,host) { return "PROXY squid-server-ip-address:3128 ; DIRECT "; } But the browsers (different versions of Firefox and IE) seem to ignore it. :( What should I do?
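
    A transparent (intercepting) proxy can never show the authentication dialog, because the browser does not know a proxy is involved and so never answers Squid's 407 challenge - so WPAD is the right direction. As a rough sketch of the usual WPAD pieces (ISC dhcpd syntax, hypothetical paths, with 192.168.1.5 standing in for the proxy/web server):

        # dhcpd.conf -- declare option 252 as type text, then hand out the PAC URL
        option wpad code 252 = text;
        option wpad "http://192.168.1.5/wpad.dat\n";

        # wpad.dat (also served as proxy.pac) -- no space before the port number
        function FindProxyForURL(url, host) {
            return "PROXY 192.168.1.5:3128; DIRECT";
        }

    Two caveats: the web server should serve the file with the MIME type application/x-ns-proxy-autoconfig, and Firefox ignores DHCP option 252 entirely - it only does DNS-based WPAD (a host literally named wpad.yourdomain serving /wpad.dat) - while IE only uses either method when "Automatically detect settings" is ticked.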

    Read the article

  • SSH to remote host (edgemarc 4200 or 4500 series routers) and pull arp data

    - by MaQleod
    I've been trying to think of a method to do this for days, but have not come up with anything yet. Ideally, this is what I'm looking to do: From a windows XP machine, I need to open an SSH connection to a remote host, send the arp command, and pull the text results of the command back for use on the client. I will need to parse this data and preferably produce a 2D array of IPs and MAC addresses. There will be no shared keys, this is all done with a username and password that will always be different, they will need to be fed into the command via variables that will be pulled from a database using an autoit script based on the WAN ip of the remote host. Now the actual parsing of the data and creation of the array will be easy if I can just get the text of the arp table. Is there any way to ssh to a remote host, run a command and return the data from that command to the client in a batch script or perl script (it is ok if it writes the text to a file, I can read it out of the file later, I just need it to get to the client)?
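
    For the Windows XP side, a minimal sketch of the fetch step using plink.exe (the command-line SSH client from the PuTTY suite); the user, password and WAN IP placeholders are the values the AutoIt script would substitute in:

        plink.exe -ssh -batch -l <user> -pw <password> <wan-ip> "arp -a" > arp.txt

    -batch suppresses interactive prompts, so the remote host key needs to have been accepted once beforehand (a first manual run, or the common "echo y |" trick); after that, arp.txt on the client is plain text ready to be parsed into the IP/MAC array.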

    Read the article

  • Correct use of SMTP "Sender" header?

    - by Eric Rath
    Our web application sends email messages to people when someone posts new content. Both sender and recipient have opted into receiving email messages from our application. When preparing such a message, we set the following SMTP headers: FROM: [email protected] TO: [email protected] SENDER: [email protected] We chose to use the author's email address in the FROM header in an attempt to provide the best experience for the recipient; when they see the message in their mail client, the author is clear. To avoid the appearance of spoofing, we added the SENDER header (with our own company email address) to make it clear that we sent the message on the author's behalf. After reading RFCs 822 and 2822, this seems to be an intended use of the sender header. Most receiving mail servers seem to handle this well; the email message is delivered normally (assuming the recipient mailbox exists, is not over quota, etc). However, when sending a message FROM an address in a domain TO an address in the same domain, some receiving domains reject the messages with a response like: 571 incorrect IP - psmtp (in reply to RCPT TO command) I think this means the receiving server only saw that the FROM header address was in its own domain, and that the message originated from a server it didn't consider authorized to send messages for that domain. In other words, the receiving server ignored the SENDER header. We have a workaround in place: the webapp keeps a list of such domains that seem to ignore the SENDER header, and when the FROM and TO headers are both in such a domain, it sets the FROM header to our own email address instead. But this list requires maintenance. Is there a better way to achieve the desired experience? We'd like to be a "good citizen" of the net, and all parties involved -- senders and recipients -- want to participate and receive these messages. One alternative is to always use our company email address in the FROM header, and prepend the author's name/address to the subject, but this seems a little clumsy.
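
    If maintaining that exception list gets painful, the usual compromise is to keep your own domain in From (so same-domain and SPF-style checks at the receiver always see an authorized source) and point replies back at the author with Reply-To. A sketch with hypothetical addresses:

        From: Author Name <notifications@ourapp.example>
        Reply-To: author@customer.example
        Sender: notifications@ourapp.example

    Receiving servers that enforce these checks generally key on the envelope sender and the From domain and pay little attention to Sender, which matches the 571 behaviour described above; publishing an SPF record for your sending IPs also helps with the domains that check the envelope.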

    Read the article

  • Route web traffic through a separate interface

    - by tkane
    I'd like to route web traffic through the wlan0 interface and the rest through eth1. Can you please help me with the iptables commands to achieve this. Below is my configuration. Thank you :) Edit: This is about desktop configuration not a web server set up. Basically I want to use one of my connections to browse the web and the other one for everything else. ifconfig: eth1 Link encap:Ethernet HWaddr 00:1d:09:59:80:70 inet addr:192.168.2.164 Bcast:192.168.2.255 Mask:255.255.255.0 inet6 addr: fe80::21d:9ff:fe59:8070/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:33 errors:0 dropped:0 overruns:0 frame:0 TX packets:41 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:4771 (4.7 KB) TX bytes:7081 (7.0 KB) Interrupt:17 wlan0 Link encap:Ethernet HWaddr 00:1c:bf:90:8a:6d inet addr:192.168.1.70 Bcast:192.168.1.255 Mask:255.255.255.0 inet6 addr: fe80::21c:bfff:fe90:8a6d/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:77 errors:0 dropped:0 overruns:0 frame:0 TX packets:102 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:14256 (14.2 KB) TX bytes:14764 (14.7 KB) route: Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface 192.168.2.0 * 255.255.255.0 U 1 0 0 eth1 192.168.1.0 * 255.255.255.0 U 2 0 0 wlan0 link-local * 255.255.0.0 U 1000 0 0 wlan0 default adsl 0.0.0.0 UG 0 0 0 eth1
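
    iptables on its own only filters or marks packets; splitting traffic by destination port needs a packet mark plus a second routing table. A rough sketch, assuming 192.168.1.1 is the gateway on the wlan0 network:

        # mark web traffic generated on this host
        iptables -t mangle -A OUTPUT -p tcp -m multiport --dports 80,443 -j MARK --set-mark 1

        # send marked packets out via wlan0 using their own routing table
        ip route add default via 192.168.1.1 dev wlan0 table 100
        ip rule add fwmark 1 table 100
        ip route flush cache

        # make packets leaving wlan0 use its source address
        iptables -t nat -A POSTROUTING -o wlan0 -j MASQUERADE

    Everything unmarked keeps following the main table's default route over eth1; if replies behave oddly, relaxing reverse-path filtering on wlan0 (net.ipv4.conf.wlan0.rp_filter=0) is the usual next step.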

    Read the article

  • How do I access the webserver on my desktop PC from my laptop?

    - by Steven
    I'm running Apache on my desktop PC and I would like to access a website on it from my laptop. This is some of the Apache config: NameVirtualHost 127.0.0.1:80 <VirtualHost 127.0.0.1:80> ServerName mysite.com DocumentRoot I:/wamp/www/mysite/ </VirtualHost> ServerName localhost:80 <Directory /> Options FollowSymLinks AllowOverride all Order deny,allow Deny from all </Directory> On my laptop I've added the following to the HOSTS file: 10.0.0.3 mysite.com But accessing the page through mysite.com is not very successful. If I enter the IP address directly, I only get a Forbidden message. What do I need to do in order to get this to work? Update I'm running WAMPSERVER 2.1 (Apache 2.2.17) Apache is up and running I can ping 10.0.0.3 from the laptop I'm not able to ping http://mysite.com from the laptop IE gives me a 403 Forbidden - The website declined to show this webpage The only log that gets entries when trying to access the website from my laptop is access.log. access.log 10.0.0.4 - - [13/Jun/2011:10:14:04 +0200] "GET / HTTP/1.1" 403 202 apache_error.log [Mon Jun 13 10:08:16 2011] [error] VirtualHost localhost:0 -- mixing * ports and non-* ports with a NameVirtualHost address is not supported, proceeding with undefined results UPDATE 2 My Apache config has the following entry: AllowOverride all Order Deny,Allow Deny from all Allow from 127.0.0.1 Could it be that this Allow from is stopping other computers from accessing the page?
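
    Yes - with Deny from all plus Allow from 127.0.0.1 only loopback requests get through, and binding NameVirtualHost and the vhost to 127.0.0.1:80 has the same effect at the socket level. A minimal sketch of the relevant pieces (Apache 2.2 syntax, assuming the LAN is 10.0.0.0/24):

        NameVirtualHost *:80
        <VirtualHost *:80>
            ServerName mysite.com
            DocumentRoot "I:/wamp/www/mysite/"
            <Directory "I:/wamp/www/mysite/">
                Options FollowSymLinks
                AllowOverride All
                Order Deny,Allow
                Deny from all
                Allow from 127.0.0.1 10.0.0.0/255.255.255.0
            </Directory>
        </VirtualHost>

    After the change, restart Apache and check that nothing else (the Windows firewall, or WampServer's "Put Online" setting) is still restricting access to localhost.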

    Read the article

  • nginx giving a 404 when using set in an if-block

    - by ba
    I've just started using nginx and I'm now trying to make it play nice with the Wordpress plugin WP-SuperCache which adds static files of my blog posts. To serve the static file I need to make sure that some cookies aren't set, that it's not a POST request, and that the cached/static file exists. I found this guide and it seems like a good fit. But I've noticed that as soon as I try to set something inside an if, my site starts giving 404s on a URL that isn't rewritten. The location block of the configuration: location /blog { index index.php; set $supercache_file ''; set $supercache_ok 1; if ($request_method = POST) { set $supercache_ok 0; } if ($http_cookie ~* "(comment_author_|wordpress|wp-postpass_)") { set $supercache_ok '0'; } if ($supercache_ok = '1') { set $supercache_file '$document_root/blog/wp-content/cache/supercache/$http_host/$1/index.html.gz'; } if (-f $supercache_file) { rewrite ^(.*)$ $supercache_file break; } try_files $uri $uri/ @wordpress; } The above doesn't work, and if I remove all the ifs above and add if ($http_host = 'mydomain.tld') { set $supercache_ok = 1; } then I get the exact same message in the errors.log. Namely: 2010/05/12 19:53:39 [error] 15977#0: *84 "/home/ba/www/domain.tld/blog/2010/05/blogpost/index.php" is not found (2: No such file or directory), client: <ip>, server: domain.tld, request: "GET /blog/2010/05/blogpost/ HTTP/1.1", host: "domain.tld", referrer: "http://domain.tld/blog/" Remove the if and everything works as it should. I'm stymied, no idea at all where I should start searching. =/ ba@cell: ~> nginx -v nginx version: nginx/0.7.65
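
    One thing that stands out: $1 is only defined after a regex capture, and location /blog is a prefix location, so $supercache_file is being built from an empty capture. Most working WP Super Cache configurations sidestep the if(-f)/rewrite dance and let try_files probe the cache path instead; a rough sketch of that pattern (the set/if lines sit at server level, and the cache path is an assumption that needs adjusting to the /blog prefix and permalink layout):

        set $cache_uri $request_uri;
        if ($request_method = POST) { set $cache_uri 'nocache'; }
        if ($http_cookie ~* "(comment_author_|wordpress_logged_in|wp-postpass_)") { set $cache_uri 'nocache'; }

        location /blog {
            index index.php;
            try_files /blog/wp-content/cache/supercache/$http_host/$cache_uri/index.html $uri $uri/ @wordpress;
        }

    Serving the pre-gzipped index.html.gz variant directly additionally needs the gzip_static module, which is another reason the plain .html file is the easier target.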

    Read the article

  • exim4 seems to have stopped listening

    - by trakos
    Hey, I have a strange problem with my exim4 configuration. I have a dedicated server running debian for quite a long time now, but I'm not really using it actively recently, so everything just worked due to lack of changes ;) However, recently, my exim4 smtp stopped answering on port 25. It does not respond through localhost, as well - even though it's set to listen on any interface available. Some things I've checked: ks:/home/trakos/Maildir/new# netstat -ap | grep exim tcp 0 0 *:smtp : LISTEN 12521/exim4 ks:/home/trakos/Maildir/new# exiwhat 12521 daemon: -q30s, listening for SMTP on port 25 (IPv4) ks:/home/trakos/Maildir/new# cat /var/log/exim4/rejectlog ks:/home/trakos/Maildir/new# cat /var/log/exim4/paniclog The queue is set for 30s only because I was running it in a non-daemon mode to see any output. Strangely enough, no suspicious output is given, netstat even shows it is listening on port 25, but still trying to telnet to it times out. The only things that may have changed recently are: I've got second IP for my server I remember that few days ago my spamassasin crashed, and I've started it up again So yeah, I'm really clueless about this one now :P I mean, I don't even know what could be failing here. Could someone give me some ideas what should I check next? PS: it has uptime of 442 days, so I haven't really tried rebooting it yet ^^
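
    Since both netstat and exiwhat say the daemon is listening, the next suspects sit between the socket and the client rather than inside exim; a few quick checks (Debian's default log locations assumed):

        # anything filtering port 25 on the box?
        iptables -L -n -v | grep -w 25

        # does the TCP handshake complete at all?
        tcpdump -n -i any port 25 &
        telnet 127.0.0.1 25

        # exim's main log (an empty rejectlog/paniclog says little by itself)
        tail -f /var/log/exim4/mainlog

    If the SYN shows up in tcpdump but nothing answers, it is almost certainly a firewall rule or the provider filtering port 25; if the connection opens but no banner ever appears, slow reverse-DNS or ident lookups that exim performs at connect time are worth a look.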

    Read the article

  • Troubles with apache and virtual hosts

    - by xZero
    I have a BIG problem. I have a VPS with Debian and LAMP installed - a fresh install. For the control panel I'm using Webmin. Now I'm trying to set up multiple sub-domains on my server using Webmin, for example: downloads.my-domain.com, cpanel.my-domains.com, forum.my-domains.com. The problem is this: while I'm using no virtual hosts, everything works perfectly when I access it using my-domain.com, but when I add a virtual host, I can access it but my-domain.com becomes unavailable because it redirects to the virtual host I added. When I add more than 2 virtual hosts, the problem is still there. Also, when I try to access a virtual server, for example downloads.my-domain.com, it redirects again to cpanel.my-domains.com. When I delete the virtual hosts, access to my-domain.com is successful... What I know: - It is not a problem with my domain provider. I correctly added the subdomains and pointed a host record at my VPS IP. - I added a unique name to every single virtual host. - There are no two identical virtual hosts. - Every virtual host has its own directory: for example, downloads.my-domain.com has its own WWW dir: /var/downloads. Can somebody help me? Thanks.
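
    These symptoms (the first or an apparently random vhost answering for every name) usually mean name-based virtual hosting is not actually enabled, or the ServerName values don't match what the browser sends. A minimal sketch for Apache 2.2 on Debian, with the bare domain kept as its own vhost:

        NameVirtualHost *:80

        <VirtualHost *:80>
            ServerName my-domain.com
            DocumentRoot /var/www
        </VirtualHost>

        <VirtualHost *:80>
            ServerName downloads.my-domain.com
            DocumentRoot /var/downloads
        </VirtualHost>

    apache2ctl -S prints the vhost table Apache actually built, which makes it easy to see which block a given hostname falls into; any name that matches no ServerName/ServerAlias is served by the first vhost defined.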

    Read the article

  • Hyper-V and SQL Server connections for web apps

    - by Rick Ratayczak
    I have a physical machine running Windows 8, and two VMs in Client Hyper-V: 1 web server, 1 SQL server. The web server works fantastic. The SQL VM is the one that is giving me the problem. I can connect to it with Server Explorer in Visual Studio or Management Studio just fine, and it's blazing fast. The problem happens when I use the same connection string I am using in Visual Studio's Server Explorer in the web.config for an app. data source=VMSQL1;initial catalog=OtherShell;persist security info=True;user id=OtherShell;password=****;network library=dbmssocn;MultipleActiveResultSets=True;App=EntityFramework I made sure it was also using TCP/IP, but it doesn't connect with or without the network library part of the connection string. A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: SQL Network Interfaces, error: 26 - Error Locating Server/Instance Specified) This has been driving me batty for the last two days, any ideas? It fails from the web VM too, but works in Management Studio with the same connection string.
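
    Error 26 specifically means the client could not even locate the instance, which points at name resolution, the SQL Browser service, or the firewall inside the SQL VM rather than at the connection string itself. A few hedged things to check on VMSQL1 (commands for an elevated prompt; the port assumes a default instance on 1433):

        REM allow the SQL engine and the browser service through the Windows firewall
        netsh advfirewall firewall add rule name="SQL Server" dir=in action=allow protocol=TCP localport=1433
        netsh advfirewall firewall add rule name="SQL Browser" dir=in action=allow protocol=UDP localport=1434

        REM confirm the web VM can resolve and reach the SQL VM at all
        ping VMSQL1
        telnet VMSQL1 1433

    In SQL Server Configuration Manager, TCP/IP must be enabled for the instance (and the service restarted afterwards), and for a named instance the SQL Server Browser service has to be running; alternatively, hard-coding the port as VMSQL1,1433 in the data source bypasses the browser lookup entirely.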

    Read the article

  • Can't reliably ping 6224 router from directly-attached system

    - by David Mackintosh
    OK, here's my situation. This is on the internet. The 6224 is the router in this picture and physically resides in Kanata. Both VLAN 1697 and 3994 are provided by an internet service provider. These VLANs are provided through a single 1Gb ethernet wire. The Kanata hosts are directly attached to the 6224; the other two sites are remote. VLAN 3994 is a single IP address space, so theoretically it shouldn't matter physically where the hosts on that subnet are. Here's the problem. I have a monitoring system which is connected further into the internet, so probes from the monitor would come in to this diagram on the 1697 VLAN. When I ping hosts at Albert or Bells Corners from the internet, there is 0 loss. The connection looks perfect. When I ping hosts at Kanata, I lose anywhere from 10 to 40% of the pings. The loss is not predictable, but: when I do lose them, I always lose at least 3, usually 4, rarely more, pings in a bunch. I have attached a monitor directly to the 6224 in Kanata on 3994.. When the monitor pings the 6224 routing interface, I see exactly the same loss pattern -- but NOT at the same time as the loss from the remote system. Ping time is around 1ms. When the monitor pings another system directly attached to the 6224, there is 0 loss. Ping time is about 0.1ms, one-tenth of the time to ping the router. Anyone know what is going on here?

    Read the article

  • Can I regenerate the rsa key for SSH access to a Cisco router? Or should I completely erase the SSH config?

    - by Josh
    I have a production 2691 that I administer via telnet. I'd like to change that to SSH. Looking at the config, it looks like there have been keys generated in the past. I think the history here is SSH was set up, they had issues connecting, and fell back to telnet. There are a number of crypto entries, including the following: crypto pki trustpoint Gateway-2691.xxx.com enrollment selfsigned subject-name cn=IOS-Gateway-2691.xxx.com revocation-check none rsakeypair Gateway-2691.xxx.com I've also got this going... Gateway-2691#sh ip ssh SSH Disabled - version 1.99 %Please create RSA keys (of atleast 768 bits size) to enable SSH v2. Authentication timeout: 120 secs; Authentication retries: 3 Gateway-2691# My question is simply, can I run crypto key generate rsa again to set it up again? Is there a way to negate or no all of the previous ssh config so that I can start fresh there? I may be asking the wrong questions, as I'm learning here. As for the SSH how-to, I'm sure I can find information in many places. I'm just basically wondering if I need to start fresh, or if I can pick up where the last attempt at SSH config left off.
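
    Yes - generating a new key pair is safe and simply replaces whatever was there; the existing PKI trustpoint is most likely the router's self-signed certificate (for HTTPS) and does not have to be torn down first. A hedged sketch of the usual sequence (IOS commands from memory, so verify against your 2691's feature set):

        conf t
         ip domain-name xxx.com
         ! optional: wipe the old key pairs first
         crypto key zeroize rsa
         crypto key generate rsa modulus 2048
         ip ssh version 2
         line vty 0 4
          ! assumes a local username/secret already exists; keep telnet until SSH is verified
          transport input telnet ssh
          login local
         end

    Once an SSH login has been tested, changing the vty lines to transport input ssh drops telnet and locks it down.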

    Read the article

  • Nginx Server Block Not Working? - Already running other vhosts just this one not working

    - by daveaspinall
    I'm running a Debian 6 LEMP server with multiple virtual hosts and everything has been fine for 5 or so sites. I've just tried adding another, but for some reason it's just not working. By not working I mean in Chrome I get the "Oops! Google Chrome could not connect to subdomain.domain.net" error. I've changed the domain for security to subdomain.example.com and the IP is masked. Hosts file (I have multiple sub domains): xxx.xxx.xx.xxx *.example.com *.example Server Block: server { listen 80; server_name subdomain.example.com; access_log /srv/www/subdomain.example.com/logs/access.log; error_log /srv/www/subdomain.example.com/logs/error.log; root /srv/www/subdomain.example.com/public_html; location / { index index.html index.htm index.php; } location ~ \.php$ { include fastcgi_params; fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; } } I've created the symlink to the file in the /etc/nginx/sites-enabled/ directory and restarted/reloaded nginx. DNS seems fine: # ping -c 2 subdomain PING subdomain.example.com (xxx.xxx.xx.xxx) 56(84) bytes of data. 64 bytes from www.example.com (xxx.xxx.xx.xxx): icmp_req=1 ttl=64 time=0.035 ms 64 bytes from www.example.com (xxx.xxx.xx.xxx): icmp_req=2 ttl=64 time=0.048 ms Checking the file with cURL works: # curl http://subdomain.example.com HTML - OK Emptied browser cache but still no dice. Anything I'm missing? Like I mentioned, I have a few sites running fine on the server currently so php-fpm etc. are working. Any help would be much appreciated! Cheers, Dave
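
    One detail worth a second look: hosts files do not support wildcards, so a line like *.example.com is silently useless - which would explain why ping and curl work on the server itself but the browser machine cannot connect at all. A sketch of the two usual fixes (IPs left as the same placeholders):

        # hosts file on the client: one explicit line per name
        xxx.xxx.xx.xxx   subdomain.example.com
        xxx.xxx.xx.xxx   www.example.com

        # or run a small local resolver that does handle wildcards, e.g. dnsmasq
        address=/example.com/xxx.xxx.xx.xxx

    If the name does resolve from the client, the next suspects are nginx listening only on certain addresses and a firewall in front of port 80.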

    Read the article

  • Single Sign On 802.1x Wireless - saying “Connecting to <SSID>”, hangs for 10 seconds, fails with “Unable to connect to <SSID>, Logging on…”.

    - by Phaedrus
    We are implementing WiFi on Windows 7 machines in our corporate environment. Machines should be able to log into the domain by WiFi as the Machine (Pre-Logon), and as the User (Post-Logon). We have everything working correctly except for 2 things: 1) Sometimes the login scripts don't run 2) The user VLAN is sometimes different than the machine VLAN, and no DHCP renew occurs after user logon. I am clear that both these problems should be fixable by using the "Single Sign On" option under the 802.1x Wireless Vista GPO, setting the wireless to connect immediately before user logon, and also enabling "This network uses different VLAN for authentication with machine and user credentials". If I enable these GPO settings in a lab, the computer does authenticate & gets WiFi before the user logs on, so when the login box is displayed, it says “Windows will try to connect to <SSID>”, even though it is already connected (which should be ok?). Enter the user credentials and it goes to a screen saying “Connecting to <SSID>”, hangs for 10 seconds, fails with “Unable to connect to <SSID>, Logging on…”. The desktop fires up and then the user re-authenticates with no problem as himself instead of the machine, but by that point we defeat the point of the WiFi SSO “before user logon”. Also by that point, no DHCP renew seems to occur, and the user is still stuck with the wrong IP address for the new VLAN. When the “Connecting to <SSID>” screen comes up, there’s no indication on the AP or the RADIUS server that anything whatsoever is happening after credentials are entered until after the domain logon. Also with this policy enabled, sometimes Windows hangs on a black screen indefinitely until I disable the wireless NIC, so something is knackered for sure. What have I missed? Suggestions are much appreciated... /P

    Read the article

  • Nginx load balancing and maintaining URLs

    - by Steve Klabnik
    I'm trying to use nginx as a load balancer, and it's working great. One problem, though. The load balancing box is at 123.123.123.123, and the backend box is 456.456.456.456. So I have this config: upstream backend { server 456.456.456.456; } server { listen 80; server_name 123.123.123.123; access_log off; error_log off; location / { proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_pass http://backend; } } This works great. I hit 123.123.123.123 in my browser, and the page comes up. But now the URL in the browser says http://456.456.456.456. Do I need to use a rewrite rule or something to keep the url correct? I don't want it to be different when going to different backed servers. None of the tutorials I've read have mentioned anything about this.
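
    nginx only rewrites the URL the browser sees if the backend responds with an HTTP redirect pointing at itself (a Location: http://456.456.456.456/... header); proxying on its own never changes the address bar. If that is what is happening, proxy_redirect can rewrite the Location header on the way back - a sketch:

        location / {
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_pass http://backend;
            proxy_redirect http://456.456.456.456/ http://123.123.123.123/;
        }

    The cleaner long-term fix is to teach the backend application its public hostname so it stops generating absolute URLs containing its own IP.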

    Read the article

  • Best method to redirect internal DNS to external website?

    - by ProfessionalAmateur
    We host several web-based applications outside of our intranet. The URLs to these applications are long, complex and overall not user friendly. Ex: http://hostingsite:port/approot/folder/folder/login.aspx <-- (production) http://hostingsite:port22/approot/folder/folder/login.aspx <-- (dev) http://hostingsite:port33/approot/folder/folder/login.aspx <-- (test) I'd like to create an internal DNS entry to allow users to access these sites with ease. Ex: http://prod --> http://hostingsite:port/approot/folder/folder/login.aspx http://dev --> http://hostingsite:port22/approot/folder/folder/login.aspx I'm not familiar with the DNS process and setup; as far as I know a DNS entry can only point to an IP, not to a port or directory path as described above. Is this a correct assumption? I am thinking of throwing up an internal webserver that will listen for the internal DNS entries and redirect to the external sites. http://prod --> [internal webserver] --> redirect --> http://hostingsite:port/approot/folder/folder/login.aspx Is there a better way to do this?
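
    Your assumption is right: DNS only maps names to addresses, so it cannot carry a port or a path, and a small internal redirect host is the standard answer. A sketch of what that box would run, assuming nginx (Apache or IIS redirect rules work just as well); the short names still need internal A/CNAME records pointing at this host:

        server {
            listen 80;
            server_name prod;
            return 301 http://hostingsite:port/approot/folder/folder/login.aspx;
        }

        server {
            listen 80;
            server_name dev;
            return 301 http://hostingsite:port22/approot/folder/folder/login.aspx;
        }

    The only caveat with single-label names like prod is the clients' DNS suffix search list; a name inside your internal zone (prod.yourdomain) avoids surprises.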

    Read the article

  • Cannot ssh into server

    - by revolver
    I am trying to SSH into a Linux machine running Ubuntu, but the interactive shell gets stuck somewhere and I can't key in anything. I am on Mac OS X Lion. This only happens when I am trying to access via an external IP. Local LAN SSH is working perfectly. macbook:~ user$ ssh -v -v user@serverip // i skipped the rest of the log, but I can paste it here again if needed. Authenticated to serverip debug1: channel 0: new [client-session] debug2: channel 0: send open debug1: Requesting [email protected] debug1: Entering interactive session. debug2: callback start debug2: client_session2_setup: id 0 debug2: channel 0: request pty-req confirm 1 debug1: Sending environment. debug1: Sending env LC_CTYPE = UTF-8 debug2: channel 0: request env confirm 0 debug2: channel 0: request shell confirm 1 debug2: fd 3 setting TCP_NODELAY debug2: callback done debug2: channel 0: open confirm rwindow 0 rmax 32768 My terminal shell just hangs after this, and I can't key in anything. I checked /var/log/auth on the server and saw that a session is being created and I had already logged in, but I don't see any responses on my client machine. I googled around and a lot of the solutions had to do with the Broadcom wireless driver, but I am not even using one, so I am pretty clueless here. To give you more information, the Linux machine is also running a web server, and I have no problem accessing the web server. Thanks. Any help is appreciated.
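
    Authenticating fine and then freezing exactly when the first full-size packets start flowing (shell output, environment) is the classic signature of a path-MTU problem on the external route: the small handshake packets get through, the larger ones silently vanish. A couple of hedged checks (interface names are assumptions):

        # from the Mac: probe what payload size survives with the don't-fragment bit set
        ping -D -s 1400 serverip
        ping -D -s 1472 serverip

        # if the larger probe dies, try clamping the MTU on the server's WAN-facing interface
        sudo ip link set dev eth0 mtu 1400

    If the MTU probes are clean, comparing a working LAN session and a hanging external one with ssh -vvv (or tcpdump on port 22 at the server) usually shows where the conversation stops.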

    Read the article

  • Nginx + SSI doesn't work

    - by boopidoopi
    I have some problems. Nginx doesn't work with SSI. Nginx listens on port 80 (frontend), apache2 listens on port 81 (backend). This is my nginx configuration: server { listen 80; server_name test.dev www.test.dev; error_log /var/log/nginx/error.log debug; log_subrequest on; location / { ssi on; proxy_pass http://localhost:81; proxy_redirect off; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; client_max_body_size 15m; client_body_buffer_size 128k; } } SSI include in test.dev index.php: <!--# include virtual="http:test.dev/test.html" -- When I open test.dev/index.php I see a clean page. In the page source: <!--# include virtual="http:test.dev/test.html" -- So how do I enable SSI in nginx? Can you help me?
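
    The include directive itself looks like the first problem: the usual gotchas with SSI includes are no space after <!--#, a local URI in virtual rather than a full http: URL, and a proper closing -->. A corrected sketch:

        <!--#include virtual="/test.html" -->

    One nginx-side thing also commonly bites here: if Apache returns the page gzip-compressed, nginx cannot parse it for SSI, so adding proxy_set_header Accept-Encoding ""; to the location is a sensible precaution. Seeing the directive untouched in "view source" confirms nginx never recognised it as an SSI command at all.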

    Read the article

  • Is it possible to trace someone using Google during an online exam?

    - by George
    I happen to be a professor at a reputed college. I want to design an online exam for over 1000 students via around 50 computers right after the vacation ends. Now the problem is that I have heard that many students use Google in a different tab to find answers when no invigilator is around. I want to know if there is a way to backtrace it after the exams via some kind of history or any other possible way. In our university there is a standard system. I am not good with computers but I will try to explain. Each computer uses Mozilla to connect to a centrally located server via an IP. The students open it and enter a unique ID and password to start the exams. Many questions are jumbled and different groups of students take the exam in different time slots. Is there any way to trace it? I want to set an example for students so they won't cheat and will take exams honestly. Additional details: Since the number of computers is less than the number of students, more than 10 students are going to use a single computer on a single day over a period of 10 hours. After this, if I check the history (and let's say someone even forgot to delete the history and I see it), will I be able to figure out who among the 10 has done it? Moreover, is it even practical and feasible?

    Read the article

  • Any ideas why Ettercap filters aren't seeing packet data?

    - by Bryan
    I'm using an Ettercap filter to detect a query response coming back from a particular service on a remote machine. When I see a response from the service, I'm searching through the data in the packet to see if an offset is a specific value, and if so I'm changing the value at another offset. Trouble is, when I try this on a new virtual machine I built my Ettercap filter's no longer getting any data in the DATA.data variable available to it. if(ip.proto == TCP && tcp.src == 17867) { msg("Response seen!\n"); if(DATA.data + 2 == "\0x01") { msg("Flag detected!\n"); DATA.data + 5 = 0x09; } } The filter's getting applied to the traffic because "Response seen!" messages get printed out by Ettercap. However, "Flag detected!" messages do not. I think DATA.data is indeed empty because if I change my second "if" statement to check for DATA.data == "" then the "Flag detected!" message gets printed. Any ideas why this may be happening?! Also, if this is the wrong site to be asking questions like this, please let me know. I wasn't sure if it fit better here or somewhere like superuser or serverfault. By the way, this is a cross-post from StackOverflow... I should have posted on this forum instead I think. :)

    Read the article

  • Blocking of certain file downloads

    - by Philip Fourie
    I have a problem where I cannot completely download a certain file from a server. The file is 1.9MB in size but only 68% is downloaded and then it hangs. I tried these cases, which failed: Downloaded the file with HTTP Downloaded the file with FTP Moved the file to different FTP and web servers behind the ISA firewall Tried with IIS 6.0 & IIS 7.0 Multiple download clients, which included Firefox, FileZilla (on Windows) and wget (on Linux) This worked: Downloading other files from the same location on the server, both bigger and smaller in size than the original. FTP and HTTP worked. An earlier version of this file (.DLL) works. It is as if the content of this file has an influence on the file being served. Network architecture: Client Machine - Internet (ISP) - ISA Server - IIS 7.0 The only constants are the ISP, the Cisco router and the ISA server. Is it possible that something is rejecting the download because of the contents of the file? I am hoping ISA is the culprit... I am not an ISA expert; is there somewhere I can look to establish if it is indeed ISA causing this? Update: Splitting the file into two parts with a hex editor results in one half of the file being served correctly and the other part not. Zipping the file results in the file being downloaded successfully. However this is not an option for this particular scenario. Renaming the file and its extension also doesn't work. Update 2009/10/22: It does NOT seem to be ISA that is causing this problem. We connected a laptop (running IIS) on an available public IP and still the file downloaded to 68% before it hung. The two remaining components are the ISP and the Cisco 800 series router. Does anyone know about an issue on the router, perhaps?

    Read the article

  • Setup ejabberd with SQL Server 2008

    - by wonster
    Here's what I have got so far. Windows 2008 Server 64-bit. Installed the latest version of ejabberd, ejabberd-2.1.8-windows-installer.exe. The Windows service starts up fine but seems ineffective. However, using the start & stop scripts works. I am able to log in to the admin page, which so far doesn't seem that versatile. Opened up ports 5222, 5226 and 5280 for my workstation to talk to the server. I've got Spark and Jabbear Windows clients to register, log in and instant message with multiple accounts using the server. After confirming that I've got the very basics working, I've decided to make use of SQL Server 2008 as the database. Reason? Mainly, I am very comfortable with SQL Server. I can deal with redundancy, failover and data analysis easily. Not sure if ejabberd's built-in DB provides all that. Following the instructions from ejabberd's documentation, I set up a system DSN that points to another physical database. The DSN checks out fine. (Tried both Named Pipes and TCP/IP.) Modified ejabberd.cfg: commented out the line {auth_method, internal} with %% and uncommented the line {auth_method, odbc}. Uncommented and modified {odbc_server, "DSN=ejabberd;UID=somelogin;PWD=somepassword"}. After making these changes, I restarted. No errors are found in the log files. The Jabber clients are no longer able to register new accounts. I'm not sure where to look for errors besides the /logs/ folder as I'm new to all this. I am basically stuck here on step 5. Has anyone got this setup to work recently? Some of the posts I've found around are years old and of no help. I can't be the only one setting up ejabberd with MS SQL. Any help would be appreciated!

    Read the article

  • What is wrong with my DNS entries?

    - by matheus
    I have some problems with a domain not working as expected. My registrar's control panel shows these records for mydomain.eu: www A 111.222.333.444 * A 111.222.333.444 I use the nameservers of my registrar. I get a correct answer if I do dig www.mydomain.eu dig whatever.mydomain.eu I can also ping/visit the website etc. with those addresses. But dig mydomain.eu won't resolve to anything. I just get this: ; <<>> DiG 9.6-ESV-R1 <<>> mydomain.eu ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 46837 ;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 0 ;; QUESTION SECTION: ;mydomain.eu. IN A ;; AUTHORITY SECTION: mydomain.eu. 1799 IN SOA ns1.binero.se. registry.binero.se. 1281647822 3600 240 1209600 3600 ;; Query time: 77 msec ;; SERVER: 8.8.8.8#53(8.8.8.8) ;; WHEN: Thu Jan 6 01:36:31 2011 ;; MSG SIZE rcvd: 83 The same A-record setup works for another domain/server IP, but that domain has other nameservers. What am I missing here?
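
    The likely gap is that neither record covers the bare name: www matches only www.mydomain.eu, and a wildcard matches names under the zone but never the zone apex itself, so mydomain.eu has no A record - which is exactly what the answer-less, SOA-only response above says. The sketch below shows the missing record (most panels accept @ or an empty host field for the apex) and a quick way to verify it against the authoritative server once it is added:

        @    A    111.222.333.444

        dig +short mydomain.eu @ns1.binero.se

    The other domain that works almost certainly has an explicit apex record, not a different wildcard behaviour.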

    Read the article

  • Backtrack, Wi-Fi not working

    - by hradecek
    I've installed Backtrack 5R3 KDE, and I realized that my wireless is not working, but wired is working fine. Here's the lshw output: *-network description: Ethernet interface product: RTL8101E/RTL8102E PCI Express Fast Ethernet controller vendor: Realtek Semiconductor Co., Ltd. physical id: 0 bus info: pci@0000:02:00.0 logical name: eth0 version: 05 serial: 04:7d:7b:b7:46:f8 size: 100MB/s capacity: 100MB/s width: 64 bits clock: 33MHz capabilities: pm msi pciexpress msix vpd bus_master cap_list ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd autonegotiation configuration: autonegotiation=on broadcast=yes driver=r8169 driverversion=2.3LK-NAPI duplex=full firmware=rtl_nic/rtl8105e-1.fw ip=192.168.2.2 latency=0 link=yes multicast=yes port=MII speed=100MB/s resources: irq:42 ioport:2000(size=256) memory:f0404000-f0404fff memory:f0400000-f0403fff lspci output: 00:00.0 Host bridge: Intel Corporation 2nd Generation Core Processor Family DRAM Controller (rev 09) 00:02.0 VGA compatible controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09) 00:14.0 USB Controller: Intel Corporation Panther Point USB xHCI Host Controller (rev 04) 00:16.0 Communication controller: Intel Corporation Panther Point MEI Controller #1 (rev 04) 00:1a.0 USB Controller: Intel Corporation Panther Point USB Enhanced Host Controller #2 (rev 04) 00:1b.0 Audio device: Intel Corporation Panther Point High Definition Audio Controller (rev 04) 00:1c.0 PCI bridge: Intel Corporation Panther Point PCI Express Root Port 1 (rev c4) 00:1c.1 PCI bridge: Intel Corporation Panther Point PCI Express Root Port 2 (rev c4) 00:1d.0 USB Controller: Intel Corporation Panther Point USB Enhanced Host Controller #1 (rev 04) 00:1f.0 ISA bridge: Intel Corporation Panther Point LPC Controller (rev 04) 00:1f.2 SATA controller: Intel Corporation Panther Point 6 port SATA AHCI Controller (rev 04) 00:1f.3 SMBus: Intel Corporation Panther Point SMBus Controller (rev 04) 02:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8101E/RTL8102E PCI Express Fast Ethernet controller (rev 05)

    Read the article

  • MySQL stopped asking for passwords

    - by BlaM
    I'm currently experiencing a weird problem with one of my MySQL database servers: It stopped asking for passwords when I try to access the database from local with the mysql command line tool. I need a valid admin username. I also still need a password for remote access (i.e. from another IP). And I need a password when I - for example - access the database from a PHP script. But when I try to access the database from local host/commandline it will let me straight in to the data with my administrative users. They (admin users) have passwords set - and as I mentioned - I still need to specify those when I try to access the data via PHP. Changing the password didn't help. Non-Administrative users need to specify their passwort, but that doesn't really help if they can get anywhere with "mysql -u root" (or another admin user account name). (System Debian Linux Lenny, MySQL 5.0.51a) Any ideas? Anything that explains this behaviour? I don't understand how this can happen.

    Read the article
