Search Results



  • Installing a .deb file manually?

    - by stef
    apt-get install gitosis --fix-missing on my Linode still leads to a 404:

        Failed to fetch http://ftp.debian.org/debian/pool/main/g/gitosis/gitosis_0.2+20080825-2_all.deb 404 Not Found [IP: 130.89.148.12 80]

    The correct file location seems to be http://ftp.debian.org/debian/pool/main/g/gitosis/gitosis_0.2+20090917-11_all.deb. Is there any way I can install this without apt-get, or point apt-get in the right direction somehow? Several other packages on my Debian Linode also point to 404s, both from the command line and from Virtualmin.

    EDIT: Machine details: Debian 5.0 64bit (Latest 2.6 (2.6.39.1-x86_64-linode19))

    EDIT2: My sources list:

        # main repo
        deb http://ftp.debian.org/debian/ lenny main contrib non-free
        deb-src http://ftp.debian.org/debian/ lenny main contrib non-free
        deb http://security.debian.org/ lenny/updates main contrib non-free
        deb-src http://security.debian.org/ lenny/updates main contrib non-free
        deb http://volatile.debian.org/debian-volatile lenny/volatile main contrib non-free
        deb-src http://volatile.debian.org/debian-volatile lenny/volatile main contrib non-free
        # contrib & non-free repos
        #deb http://ftp.debian.org/debian/ lenny contrib non-free
        #deb-src http://ftp.debian.org/debian/ lenny contrib non-free
        #deb http://security.debian.org/debian/ lenny/updates contrib non-free
        #deb-src http://security.debian.org/debian/ lenny/updates contrib non-free
        deb http://software.virtualmin.com/gpl/debian/ virtualmin-lenny main
        deb http://software.virtualmin.com/gpl/debian/ virtualmin-universal main
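    A minimal sketch of a manual install, assuming the corrected package URL quoted above is still valid; refreshing the stale package lists first may let apt-get find the new filename on its own:

        sudo apt-get update
        # otherwise, fetch and install the .deb by hand
        wget http://ftp.debian.org/debian/pool/main/g/gitosis/gitosis_0.2+20090917-11_all.deb
        sudo dpkg -i gitosis_0.2+20090917-11_all.deb
        sudo apt-get -f install   # pull in any dependencies dpkg could not resolve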

    Read the article

  • After a few days of the server running fine with nginx, it starts throwing 499 and 502

    - by Abhay Kumar
    Nginx starts throwing 499 and 502 after running fine for a few days. The website is a Rails app using Thin as the app server. Restarting Nginx does not seem to help. Below is the Nginx config under sites-enabled:

        upstream domain1 {
            least_conn;
            server 127.0.0.1:3009;
            server 127.0.0.1:3010;
            server 127.0.0.1:3011;
        }

        server {
            listen 80; # default_server;
            server_name xyz.com *.xyz.com;
            client_max_body_size 5M;
            access_log /home/ubuntu/www/xyz/current/log/access.log;
            root /home/ubuntu/www/xyz/current/public/;
            index index.html;

            location / {
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header Host $http_host;
                proxy_redirect off;
                proxy_read_timeout 150;
                if (!-f $request_filename) {
                    proxy_pass http://domain1;
                    break;
                }
            }
        }
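    A sketch of upstream health/timeout settings that often helps when backends (the Thin workers here) stall; the values are assumptions to tune, not the poster's config:

        upstream domain1 {
            least_conn;
            server 127.0.0.1:3009 max_fails=2 fail_timeout=10s;
            server 127.0.0.1:3010 max_fails=2 fail_timeout=10s;
            server 127.0.0.1:3011 max_fails=2 fail_timeout=10s;
        }

        # inside location /, next to the existing proxy_* lines
        proxy_connect_timeout 10s;
        proxy_next_upstream   error timeout http_502;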

    Read the article

  • squid bypass for a domain

    - by krisdigitx
    I am using Squid with adzap. Is it possible to make Squid/adzap not cache a particular domain, e.g. cnn.com? This is my squid.conf file:

        #
        # Recommended minimum configuration:
        #
        acl manager proto cache_object
        acl localhost src 127.0.0.1/32
        #acl localhost src ::1/128
        acl to_localhost dst 127.0.0.0/8 0.0.0.0/32
        #acl to_localhost dst ::1/128
        # Example rule allowing access from your local networks.
        # Adapt to list your (internal) IP networks from where browsing
        # should be allowed
        acl localnet src 192.168.1.0/24
        acl localnet src 192.168.2.0/24
        acl SSL_ports port 443
        acl Safe_ports port 80          # http
        acl Safe_ports port 21          # ftp
        acl Safe_ports port 443         # https
        acl Safe_ports port 70          # gopher
        acl Safe_ports port 210         # wais
        acl Safe_ports port 1025-65535  # unregistered ports
        acl Safe_ports port 280         # http-mgmt
        acl Safe_ports port 488         # gss-http
        acl Safe_ports port 591         # filemaker
        acl Safe_ports port 777         # multiling http
        acl CONNECT method CONNECT
        #
        # Recommended minimum Access Permission configuration:
        #
        # Only allow cachemgr access from localhost
        http_access allow manager localhost
        http_access deny manager
        # Deny requests to certain unsafe ports
        http_access deny !Safe_ports
        # Deny CONNECT to other than secure SSL ports
        http_access deny CONNECT !SSL_ports
        # We strongly recommend the following be uncommented to protect innocent
        # web applications running on the proxy server who think the only
        # one who can access services on "localhost" is a local user
        #http_access deny to_localhost
        #
        # INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS
        #
        # Example rule allowing access from your local networks.
        # Adapt localnet in the ACL section to list your (internal) IP networks
        # from where browsing should be allowed
        http_access allow localnet
        http_access allow localhost
        # And finally deny all other access to this proxy
        http_access deny all
        # Squid normally listens to port 3128
        http_port xxx.xxx.xxx.yyy:3128 transparent
        visible_hostname proxyserver.local
        # We recommend you to use at least the following line.
        hierarchy_stoplist cgi-bin ?
        # Uncomment and adjust the following to add a disk cache directory.
        cache_dir ufs /var/spool/squid 1024 16 256
        # Leave coredumps in the first cache dir
        coredump_dir /var/spool/squid
        # Add any of your own refresh_pattern entries above these.
        refresh_pattern ^ftp: 1440 20% 10080
        refresh_pattern ^gopher: 1440 0% 1440
        refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
        refresh_pattern . 0 20% 4320
        access_log /var/log/squid/squid.log squid
        access_log syslog squid
        redirect_program /usr/local/adzap/scripts/wrapzap

    Fixed using:

        acl allow_domains dstdomain www.cnn.com
        always_direct allow allow_domains
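    A sketch that goes a step further than the fix above: besides sending the domain direct, also keep Squid from caching it and from passing it through the adzap redirector. The ACL name is an assumption, and on older Squid releases the equivalent directives are no_cache deny and redirector_access:

        acl nocache_domains dstdomain .cnn.com
        cache deny nocache_domains
        always_direct allow nocache_domains
        url_rewrite_access deny nocache_domains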

    Read the article

  • Accessing a shared folder in Windows Server 2008 R2.

    - by Triztian
    Hello all. My involvement with computers has grown and I've found myself needing to access a shared folder on a server. I've read some documentation and managed to set the folder up as a share: I created a local group and, for now, a single local user that has access to the share. The folder is inside the public user folder and its permissions should be (and I believe are) read/write.

    The problem is that I can't connect from a remote machine; I don't know how the share is supposed to be accessed. The server has a public IP and we also use it to host our website (I don't know whether that matters). The folder will be used as the "keeper" for the QuickBooks company files and has the Database Server Manager installed. I've tried setting up a VPN connection to the server, but with no success. The server has a domain name, "http://www.example.com", that redirects to our website; I am unsure whether the share could be accessed that way. The share also has a location displayed when I right-click it and choose Properties.

    Here's what I've tried:

    - Setting up a VPN connection (Windows Vista and 7): I got to the point where I was asked for credentials and entered the user I created (which is not an admin), but got a "Connection fail error 800". I suppose this is because I entered the server's workgroup in the domain field.
    - Right-click, add a network connection (Windows 7): I went through the wizard until I reached the point of entering the location and tried many things: the name in the share's properties (\\SOMETHING\Share), http://www.example.com, and the IP address.

    I'm quite unfamiliar with this, so here are my guesses: since the group and user are local, they do not have access to the folder; or the firewall on the server is blocking my connection. Anyways, any help and guidance is truly appreciated.
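    A minimal sketch of mapping the share directly over SMB once connectivity is sorted out; SERVERNAME, ShareName and the user name are placeholders, and TCP 445 must be reachable (many ISPs and the Windows firewall block it over the public internet, which is why a VPN is usually put in front):

        net use Q: \\SERVERNAME\ShareName /user:SERVERNAME\shareuser *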

    Read the article

  • AWS own email domain and some generic questions

    - by John Brunner
    I'm getting started with Amazon Web Services and have a few questions I'm not sure about. Like every company webpage, I want to use an "office@companyname.com" email address, but how is that done? I looked at godaddy.com (for domain registration); they offer an email address like the one I want, but for 3 dollars per month. Is this possible with AWS? Because at AWS you just get a complex domain name which is not very user-friendly or professional-looking.

    I also want to host my dynamic webpage in the Amazon cloud, but I'm not sure I'm going about it the right way. I've read many guides, and all I know is that I have to use Elastic Compute Cloud (EC2) and Simple Storage Service (S3)... and every guide works with the basic Linux package. Why not Windows? Is it more expensive? I just want to host a MySQL server for the dynamic webpage, reached over a normal domain.

    One last question: if I sign up for an AWS account it asks me for an email account, but I find it a little unprofessional to enter my free-webmailer address there. How is this normally done? Thanks in advance! Best regards, john.

    Read the article

  • How to enable hotlink protection without hardcoding my domain in the Apache config file?

    - by Jeff
    I've been surfing around for a solution for a couple of days now. How do I enable Apache hotlink protection without hardcoding my domain in the config file, so I can port the code to my other domains without having to update the config file every time? This is what I have so far:

        RewriteCond %{HTTP_REFERER} !^$
        RewriteCond %{HTTP_REFERER} !^http://www\.example\.com [NC]
        RewriteRule \.(gif|ico|jpe|jpeg|jpg|png)$ - [NC,F,L]

    ... and this is what Apache suggests:

        SetEnvIf Referer example\.com localreferer
        <FilesMatch \.(jpg|png|gif)$>
            Order deny,allow
            Deny from all
            Allow from env=localreferer
        </FilesMatch>

    ... both of which hardcode the domain in their rules. The closest I came to finding any info that covers this is right here on ServerFault, but the conclusion was that it cannot be done. Based on my research, that appears to be true, but I didn't find any questions or commentary dedicated solely to this question. If anyone's curious, here is the link to the Apache 2 docs that cover this topic. Note that Apache variables (e.g. %{HTTP_REFERER}) can only be used in the RewriteCond test string and the RewriteRule substitution arguments.
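    A commonly suggested workaround is to concatenate the Referer and Host headers into a single test string and compare them with an in-pattern backreference. This is a sketch, assuming the Referer host matches the Host header exactly (add www-stripping or port handling if your sites mix forms):

        RewriteEngine On
        RewriteCond %{HTTP_REFERER} !^$
        RewriteCond %{HTTP_REFERER}@@%{HTTP_HOST} !^https?://([^/]+)(/.*)?@@\1$ [NC]
        RewriteRule \.(gif|ico|jpe?g|png)$ - [NC,F]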

    Read the article

  • sporadic routing to another website when opening a common url

    - by user226098
    I have a strange problem in our office: sometimes when opening a URL from one of our projects, any browser shows not the right website but some other website. In most cases it redirects to google.com with some parameters, like https://www.google.de/?gfe_rd=cr&ei=krOOU8_kGcSKswadyYDQBw&gws_rd=ssl, or just the ugly Google 404 page. But today it stayed on the original URL while showing the content of http://debug.netdna-cdn.com/. This happens about once a week and for no apparent reason. Even stranger, it used to occur on only a single PC in the network; it now happens on two different computers in the network. Both run Windows 8. The problem cannot be fixed by clearing the browser cache, but it can be fixed by rebooting the PC or running ipconfig /flushdns. So I think it has something to do with the DNS cache of the machine, but I have no idea what the reason is or how to figure out how to solve it. Any ideas?
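    A sketch of what to capture the next time it happens, before flushing anything; the project hostname is a placeholder. Comparing the cached answer against a public resolver shows whether the record in the local cache is stale or poisoned:

        ipconfig /displaydns | findstr /i /c:"yourproject.example.com"
        nslookup yourproject.example.com
        nslookup yourproject.example.com 8.8.8.8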

    Read the article

  • Which AMI to use for Java/Tomcat/MySQL in Amazon EC2?

    - by Justin
    I originally posted this on stackoverflow.com and it was suggested serverfault.com might be a better place to ask this question. So here goes: I'm trying to determine which Amazon Machine Image (AMI) to use as my Virtual Server in Amazon's EC2. For now, I'll need to choose an AMI that complies with the AWS Free Usage Tier. I want to deploy a Java app that I've been developing using Eclipse on Windows XP, Tomcat 7 and MySQL 5.5. I'm aware that I can choose the Basic 32-bit Amazon Linux AMI. Then I'd manually install Tomcat and MySQL (does MySQL get installed on the image or separately on an Elastic Block Store (EBS)?). Here's the rub, I'm a bit of a Linux noob. I can start Tomcat and tail the logs and such on Linux but I'm not familiar with the install process for Tomcat and MySQL on Linux and commands like sudo and chmod. I'm happy to get more hands on with Linux but I'm short on time right now. Are there AMI's that already have Tomcat and MySQL bundled? The Request Instance Wizard shows 805 Community AMI's that are Free Tier Eligible. 51 of the Free Tier Eligible AMI's have "Tomcat" in their name. I'm willing to consider using Elastic Beanstalk but my research thus far hasn't found any discussion of using MySQL with Beanstalk. The discussions all seem to use Amazon's SimpleDB. Any advice is greatly appreciated.

    Read the article

  • After upgrading to 2008 R2 Enterprise and installing more RAM, Windows can only see 4.00 GB

    - by Tom Crane
    (I have also posted this on TechNet, but I'm running out of ideas.) I've upgraded from Windows Server 2008 R2 Standard to Enterprise in order to make use of more RAM. The server previously had 32GB of RAM. The upgrade from Standard to Enterprise, using DISM, seemed to go OK, so I powered down and installed the RAM. This is a Dell PowerEdge T710; I was taking it from 32GB to 72GB. The BIOS recognised the RAM, although I needed to change from "Advanced ECC" to "Optimizer" mode for it to use all of it. After rebooting, Windows can see the RAM but the System panel displays:

        Installed memory (RAM): 72.0 GB (4.00 GB usable)

    In the Resource Monitor, the remainder of the RAM shows as reserved for hardware. I've tried various RAM configurations, including reverting to the same chips and same configuration as before the upgrade, but always just 4.00 GB shows up as usable. Following some threads on these forums I've gone into msconfig and set the maximum memory "by hand", but that doesn't fix the problem. The BIOS doesn't seem to have anything that looks like memory remapping, which is another suggestion that has come up. How do I make this RAM available to Windows? It was available before the upgrade, because I could use the full 32GB RAM the server had to start with. A screenshot (this is after reverting to the original RAM configuration): http://screencast.com/t/5FuzevdNb

    I don't know if it's related, but my Remote Desktop configuration has also disappeared: screencast.com/t/mYedomeQWS (the bottom half of this dialog should allow me to configure Remote Desktop; it was working before the upgrade but now it isn't).
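    One hedged thing worth checking from an elevated command prompt is whether a stale memory cap is set in the boot configuration (msconfig's "Maximum memory" box writes these values); this is a diagnostic sketch, not a confirmed cause:

        bcdedit /enum {current} | findstr /i "truncatememory removememory"
        rem if either value shows up, delete it and reboot
        bcdedit /deletevalue truncatememory
        bcdedit /deletevalue removememory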

    Read the article

  • Ubuntu networking issue: two specific machines cannot browse the web while connected to the network at the same time

    - by jensendarren
    I have set up a secure wireless network which works very well, except for two laptops running Ubuntu 10.10 that can't access the Internet via a browser at the same time. They can both ping sites, wget sites, and use Skype, but when using a browser the page never loads (in Firefox the status bar just sits there saying "Connecting" until it times out). Here is what we have tried so far (nothing has fixed this issue):

    - OpenDNS
    - Restarting networking services
    - Using a wired connection rather than wireless
    - Removing all other nodes from the network except the two machines that have this issue
    - Swapping out the router
    - Factory resetting the router
    - Reformatting one of the machines and re-installing Ubuntu 10.10

    Other things that we have checked:

    - The two machines can connect simultaneously without any issues to other wireless networks in different locations (say in an Internet cafe or another office)
    - The two machines have unique IP addresses
    - The two machines have unique MAC addresses
    - The two machines can communicate on the network using Skype, wget, ping etc.
    - We are not using a proxy on either machine

    FYI: I have attached output from Wireshark. For the test we turned both machines on and pointed them both to the same website. The content loaded on one and not the other. Here is the output from Wireshark (speedyshare.com/files/26228631/machine_output_1 and speedyshare.com/files/26228649/machine2). As you can see, the first one worked and the second one didn't. I don't fully understand the output and would appreciate it if someone could shed some light on what might be causing this and how we can fix it! Many thanks! Darren
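    One hedged avenue to rule out (not confirmed by the question) is an MTU/path-MTU problem, since browser connections stalling while ping works is a common symptom of it. A quick test sketch, with wlan0 as an assumed interface name:

        ping -M do -s 1472 www.google.com    # 1472 bytes of data + 28 bytes of headers = a full 1500-byte frame
        sudo ip link set dev wlan0 mtu 1400  # temporarily lower the MTU, then retry the browser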

    Read the article

  • IIS7 default document for urlMapped url throws 403 error

    - by MorningZ
    Hopefully this all makes sense: I have a Web Application project on an IIS7 server that is "theme-able" using different master pages. As a result of what I am trying to do, the root of the project has no .aspx files, so I am using the web.config's ability to rewrite "~/default.aspx" to "~/themes/a/default.aspx". This works great as long as I type in "http://www.mysite.com/default.aspx", but typing just "http://www.mysite.com" results in a "403 - Forbidden: Access is denied" error. I was hoping that the combination of urlMapping and default document would be smart enough to handle this, but it's not:

        <system.webServer>
            <defaultDocument enabled="true">
                <files>
                    <clear />
                    <add value="default.aspx"/>
                </files>
            </defaultDocument>
        </system.webServer>

    I also tried:

        <system.webServer>
            <defaultDocument enabled="true">
                <files>
                    <clear />
                    <add value="~/themes/a/default.aspx"/>
                </files>
            </defaultDocument>
        </system.webServer>

    ... to no avail. I was hoping a browser would come in without a document defined, IIS7 would assume it was default.aspx, and then the urlMapping would map it accordingly, but nope. Any pointers? I've read a ton of posts here with similar issues, but not the exact issue.
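    One low-tech workaround to sketch (not necessarily the cleanest fix): drop a tiny physical default.aspx into the site root so the Default Document module finds a real file, and have it hand the request to the themed page:

        <%@ Page Language="C#" %>
        <script runat="server">
            void Page_Load(object sender, System.EventArgs e)
            {
                // hand the request over to the themed landing page
                Server.Transfer("~/themes/a/default.aspx");
            }
        </script>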

    Read the article

  • Outlook 2010 not resolving SMTP address to Display Name

    - by Ben
    I have a weird problem where a user (a director, naturally) has an odd issue with the way the To field shows in his Outlook. We are using Outlook 2010 (with RPC/HTTP) and Exchange 2003 at the back end. Most of his mail shows in his mailbox with the To field as normal (e.g. "Fred Bloggs" in the To field). However, some mails come in showing the To field as fred.bloggs@mycorp.com. (Apparently this is an issue!) For example, most show the display name, but a few come in as the bare SMTP address. There doesn't appear to be a pattern to this. I have tried to replicate it by:

    - Sending from any specific senders to see if it recurs (it doesn't)
    - Typing his full name in my Outlook and sending (it resolves as normal)
    - Sending programmatically (e.g. from a script) - it still resolves OK
    - Forcing a "fred.bloggs@mycorp.com" entry in my Outlook - it resolves as soon as I hit enter
    - Sending in cached mode
    - Sending disconnected

    Anyone got any ideas how I can either replicate the problem or fix it? I can't tell at the moment whether it is a problem at his end or the sender's.

    EDIT: Following some more digging, this seems to be a global issue. Most people seem to have a few emails in their inbox addressed to their SMTP address rather than their display name.

    Read the article

  • Office 365 domain federation conversion failed

    - by Matt Bear
    We're doing things backwards: we have an established Office 365 domain with 400+ users, and are just now deploying local AD and ADFS for SSO. Last night, after configuring my servers, I ran the PowerShell command Convert-MsolDomainToFederated to convert the xxx.com vanity domain to federated. It errored out with an unspecified error (Microsoft ADFS support said the error has to do with the default password settings being changed). When I run Convert-MsolDomainToStandard, it comes back saying the domain is already standard. The Office 365 portal also shows the domain as standard; however, it is trying to process login attempts as if it were a federated domain. I've spent 5 hours total on the phone with Microsoft, and it has been escalated to their engineering department for resolution, sometime within the next few days... I need it yesterday. From what we can gather, the conversion process started, errored out, changed some of the internal configuration to federated, but left the description as standard (if that makes sense). So it's in a weird limbo where it's in both modes but neither at the same time. Currently, the only way to fix it is to remove the vanity domain and re-add it. I need a way to dissociate the user accounts from the xxx.com domain to allow its removal. Removing the users themselves is not an option.
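    A hedged sketch of two commonly suggested steps, worth confirming with support given the half-converted state; the tenant name is a placeholder:

        # MSOnline module, after Connect-MsolService
        # force the authentication type back to managed (standard)
        Set-MsolDomainAuthentication -DomainName xxx.com -Authentication Managed

        # if the domain still cannot be removed, move the users' UPNs to the
        # tenant's onmicrosoft.com routing domain first
        Get-MsolUser -DomainName xxx.com -All | ForEach-Object {
            $newUpn = $_.UserPrincipalName.Split("@")[0] + "@tenantname.onmicrosoft.com"
            Set-MsolUserPrincipalName -UserPrincipalName $_.UserPrincipalName -NewUserPrincipalName $newUpn
        }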

    Read the article

  • Certificate enrollment request chain not trusted

    - by makerofthings7
    I am working on a Microsoft lab for DirectAccess and need to create a web certificate. The instructions ask me to do the following:

    1. On EDGE1, click Start, type mmc, and then press ENTER. Click Yes at the User Account Control prompt.
    2. Click File, and then click Add/Remove Snap-ins.
    3. Click Certificates, click Add, click Computer account, click Next, select Local computer, click Finish, and then click OK.
    4. In the console tree of the Certificates snap-in, open Certificates (Local Computer)\Personal\Certificates.
    5. Right-click Certificates, point to All Tasks, and then click Request New Certificate.
    6. Click Next twice.
    7. On the Request Certificates page, click Web Server, and then click More information is required to enroll for this certificate.
    8. On the Subject tab of the Certificate Properties dialog box, in Subject name, for Type, select Common Name.
    9. In Value, type edge1.contoso.com, and then click Add.
    10. Click OK, click Enroll, and then click Finish.
    11. In the details pane of the Certificates snap-in, verify that a new certificate with the name edge1.contoso.com was enrolled with Intended Purposes of Server Authentication.
    12. Right-click the certificate, and then click Properties.
    13. In Friendly Name, type IP-HTTPS Certificate, and then click OK.
    14. Close the console window. If you are prompted to save settings, click No.

    In production, our company has overridden the Web Server template, and it doesn't seem to be issuing certificates with the full CA chain. When I look at the issued certificate's properties, both tiers of the two-tier CA hierarchy are missing. How can I fix this? I'm not sure where to look outside the GUI.
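    Outside the GUI, a hedged starting point is certutil: dump the issued certificate to see how the chain builds, and make sure the root and intermediate CA certificates are present in the machine stores (the file names here are placeholders):

        certutil -verify -urlfetch edge1-webserver.cer
        certutil -addstore Root rootca.cer
        certutil -addstore CA issuingca.cer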

    Read the article

  • Nginx proxy to IIS Connection Timeout

    - by MitMaro
    I am having an issue with random timeouts on an Nginx proxy connecting to an IIS machine. I have been watching a packet capture between the two servers, and it seems that the IIS machine receives a SYN packet but does not respond with what I think should be an ACK. Before the timeout occurs there seems to be a slower response from the IIS server. There is no unusual memory or processor usage on the IIS or Nginx machine. Some information on the servers and setup:

    Nginx machine: Ubuntu 10.04 64bit, Nginx 0.7.65, Amazon EC2
    Windows machine: Windows Server 2008, IIS 7, ASP.NET application in Integrated mode

    Nginx error:

        2011/01/10 17:57:40 [error] 8297#0: *30 connect() failed (110: Connection timed out) while connecting to upstream, client: 209.***.***.***, server: secure.example.com, request: "GET /a/path/deliver.aspx HTTP/1.1", upstream: "http://***.***.***.****:****//another/path/deliver.aspx", host: "secure.example.com"

    Wireshark packets:

        6521.449528 10.***.***.*** -> 174.***.***.*** TCP 38695 > us-cli [SYN] Seq=0 Win=5840 Len=0 MSS=1460 TSV=477422103 TSER=0 WS=7
        6524.443239 10.***.***.*** -> 174.***.***.*** TCP 38695 > us-cli [SYN] Seq=0 Win=5840 Len=0 MSS=1460 TSV=477422403 TSER=0 WS=7
        6530.443241 10.***.***.*** -> 174.***.***.*** TCP 38695 > us-cli [SYN] Seq=0 Win=5840 Len=0 MSS=1460 TSV=477423003 TSER=0 WS=7
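    On the nginx side, a sketch of settings that fail fast instead of waiting out the full OS connect timeout; the values are assumptions to tune, and this does not address whatever makes IIS drop the SYN in the first place:

        proxy_connect_timeout 5s;
        proxy_read_timeout    60s;
        proxy_next_upstream   error timeout;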

    Read the article

  • using gmail as email relay for sendmail

    - by Nikita
    I used to be able to send emails using a Gmail account and sendmail, configured using one of the guides on the Internet, for example: http://appgirl.net/blog/configuring-sendmail-to-relay-through-gmail-smtp/. This is a small server and I've recently moved it to a different house; since then sendmail has stopped working. The only thing different in the network setup is a new router. What is happening: in the log files I see the following error:

        ...stat=Deferred: smtp.gmail.com: No route to host

    When I run from the command line:

        strace sendmail -f A -t B -u "Subject" -m "Message" -tls=yes ssl=yes -s smtp.gmail.com:587 -xu A -xp XYZ

    it hangs on this call:

        recvfrom(3, "m0\201\203\0\1\0\0\0\0\0\0\4ares\3lan\0\0\34\0\1", 8192, 0, {sa_family=AF_INET, sin_port=htons(53), sin_addr=inet_addr("192.168.1.254")}, [16]) = 26
        close(3) = 0
        time(NULL) = 1339997943
        open("/etc/localtime", O_RDONLY) = 3
        fstat64(3, {st_mode=S_IFREG|0644, st_size=3477, ...}) = 0
        fstat64(3, {st_mode=S_IFREG|0644, st_size=3477, ...}) = 0
        mmap2(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xb76ff000
        read(3, "TZif2\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\4\0\0\0\4\0\0\0\0"..., 4096) = 3477
        _llseek(3, -24, [3453], SEEK_CUR) = 0
        read(3, "\nEST5EDT,M3.2.0,M11.1.0\n", 4096) = 24
        close(3) = 0
        munmap(0xb76ff000, 4096) = 0
        socket(PF_FILE, SOCK_DGRAM|SOCK_CLOEXEC, 0) = 3
        connect(3, {sa_family=AF_FILE, path="/dev/log"}, 110) = 0
        send(3, "<18>Jun 18 01:39:03 sendmail[268"..., 96, MSG_NOSIGNAL) = 96
        nanosleep({60, 0},

    So it looks like at some point it tries to resolve the DNS name, but I don't have anything running on port 53, so it dies out and then just hangs. The other interesting thing is that msmtp works just fine on the same server.

    Update: "ares" in the strace output is actually the name of my server, but the .254 IP address is the address of the router. Could anyone tell me why this is happening, or what further steps I can take to investigate the issue? Thanks!
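    A hedged set of checks, since the strace shows the resolver asking the new router (192.168.1.254) on port 53: confirm what /etc/resolv.conf points at after the move and whether that resolver can actually answer for smtp.gmail.com:

        cat /etc/resolv.conf
        dig smtp.gmail.com @192.168.1.254 +short
        dig smtp.gmail.com @8.8.8.8 +short   # compare against a public resolver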

    Read the article

  • nginx phpmyadmin 404

    - by borannb
    I am trying to install phpMyAdmin on my Nginx web server. I installed phpMyAdmin without a problem and created a subdomain for it; for security reasons I didn't call the subdomain "phpmyadmin" but used a different name. Then I used this config for the subdomain:

        server {
            listen 80;
            server_name myphpmyadminsubdomain.domain.com;
            access_log off;
            error_log /srv/www/myphpmyadminsubdomain/error.log;

            location / {
                root /usr/share/phpmyadmin;
                index index.php;
            }

            location ~ \.php$ {
                try_files $uri =404;
                fastcgi_split_path_info ^(.+\.php)(/.+)$;
                include fastcgi_params;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                # fastcgi_intercept_errors on;
                fastcgi_pass php;
            }

            location = /favicon.ico { log_not_found off; access_log off; }
            location = /robots.txt  { allow all; log_not_found off; access_log off; }
            location ~ /\.          { deny all; access_log off; log_not_found off; }
        }

    Then I enabled it like this:

        /etc/nginx/sites-available/myphpmyadminsubdomain /etc/nginx/sites-enabled/myphpmyadminsubdomain

    I have restarted Nginx, and going to myphpmyadminsubdomain.domain.com gives me the Nginx 404 Not Found error. What am I doing wrong?
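    A sketch of one likely culprit (an assumption, not confirmed): root is only set inside location /, so in the location ~ \.php$ block $document_root falls back to the compiled-in default and try_files $uri =404 never finds the script. Declaring the docroot at server level lets both locations inherit it:

        server {
            listen 80;
            server_name myphpmyadminsubdomain.domain.com;
            root  /usr/share/phpmyadmin;
            index index.php;
            # ... keep the existing location blocks, minus the root/index inside location / ...
        }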

    Read the article

  • IP tables blocking access to most hosts but some accesses being logged

    - by epo
    What am I getting wrong? A while back I locked down my web hosting service while hardening it, or at least trying to. Apache listens on port 80 only, and I set up iptables using the following:

        IPS="list of IPs"
        iptables --new-chain webtest
        # Accept all established connections
        iptables -A INPUT --protocol tcp --dport 80 --jump webtest
        iptables -A INPUT --match state --state ESTABLISHED,RELATED --jump ACCEPT
        iptables -A webtest --match state --state ESTABLISHED,RELATED --jump ACCEPT
        for ip in $IPS; do
            iptables -A webtest --match state --state NEW --source $ip --jump ACCEPT
        done
        iptables -A webtest --jump DROP

    However, looking at my Apache logs I notice various entries in access_log, e.g.:

        221.192.199.35 - - [16/May/2010:13:04:31 +0100] "GET http://www.wantsfly.com/prx2.php?hash=926DE27C156B40E55E4CFC8F005053E2D81E6D688AF0 HTTP/1.0" 404 206 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0)"
        201.228.144.124 - - [16/May/2010:11:54:16 +0100] "GET /w00tw00t.at.ISC.SANS.DFind:) HTTP/1.1" 400 226 "-" "-"
        207.46.195.224 - - [16/May/2010:04:06:48 +0100] "GET /robots.txt HTTP/1.1" 200 311 "-" "msnbot/2.0b (+http://search.msn.com/msnbot.htm)"

    How are these slipping through? I don't mind the indexing bots (though I am a little surprised to see them get through). I suppose they must be getting through using the ESTABLISHED,RELATED rules. And no, I can't for the life of me remember why the first match state rule is there. So, two questions: is there a better way to set up iptables to restrict access to specified hosts? And how exactly are these three examples slipping through?
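    A hedged way to see which rule the stray requests actually match is to watch the per-rule packet counters while the traffic arrives; --line-numbers also makes the rule ordering (often the real culprit) explicit:

        iptables -L INPUT   -v -n --line-numbers
        iptables -L webtest -v -n --line-numbers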

    Read the article

  • Get Safari to use different autocompletion on different URLs on same hostname

    - by Luke404
    I have a webserver publishing different services over the same SSL VirtualHost, the two most commonly used being phpMyAdmin and Cacti. These (and others) use 'cookie'-style authentication, asking for user and password in an HTML form (thus not using HTTP Authentication). Being on the same hostname, the Safari browser didn't manage stored passwords too well: if I logged in to one app with user foo and then went to app two, it would propose user foo and its password in the login form. Changing just the username to bar used to be sufficient to let Safari autocomplete the correct password in its form field. Annoying, but I could live with it - usernames are short and easy to remember compared to the passwords we use. After the update to Safari 5 this seems to be no longer true: if I store in Safari (actually the user keychain on OS X) credentials for https://www.foobarbaz.com/app1 AND credentials for https://www.foobarbaz.com/app2, there seems to be no way for it to autocomplete both based on the URL. Even editing the keychain to add the path (it will store only the hostname by default) does not help. Is there anything I can do to make it work the way I want while still keeping everything on one hostname? Modifying anything server-side is of course possible, but I can't switch the apps to HTTP Auth (and not every one will support it anyway) to use different 'realms'.

    Read the article

  • Redirect 301 fails with a path as destination

    - by Martijn Heemels
    I'm using a large number of Redirect 301s which are suddenly failing on a new webserver. We're in pre-production tests on the new webserver, prior to migrating the sites, but some sites are failing with 500 Internal Server Error. The content, both databases and files, is mirrored from the old to the new server, so we can test whether all sites work properly. I traced this problem to mod_alias' Redirect statement, which is used from .htaccess to redirect visitors and search engines from old content to new pages. Apparently the Apache server requires the destination to be a full URL, including protocol and hostname:

        Redirect 301 /directory/ /target/                           # Not valid
        Redirect 301 /main.html /                                   # Not valid
        Redirect 301 /directory/ http://www.example.com/target/     # Valid
        Redirect 301 /main.html http://www.example.com/              # Valid

    This contradicts the Apache documentation for Apache 2.2, which states: "The new URL should be an absolute URL beginning with a scheme and hostname, but a URL-path beginning with a slash may also be used, in which case the scheme and hostname of the current server will be added." Of course I verified that we're using Apache 2.2 on both the old and the new server; the old server is a Gentoo box with Apache 2.2.11, while the new one is a RHEL 5 box with Apache 2.2.3. The workaround would be to change all paths to full URLs, or to convert the statements to mod_rewrite rules, but I'd prefer the documented behaviour. What are your experiences?
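    If converting to mod_rewrite turns out to be the pragmatic route, here is a sketch of equivalent .htaccess rules that accept a bare path as the target; the patterns are examples to adapt per site:

        RewriteEngine On
        RewriteRule ^directory/(.*)$ /target/$1 [R=301,L]
        RewriteRule ^main\.html$     /          [R=301,L]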

    Read the article

  • DNS request times out then succeeds on my local network. Why?

    - by Dan
    I have a W2K3 server that is the domain controller and also the DNS server. I wanted to create another DNS zone on my network called "something.local" and then make A records to point requests like 'admin.something.local' and 'www.something.local' to machines on my network. I keep getting DNS timeouts, but after two tries it succeeds. Why would this happen? How can I troubleshoot it? From my desktop I run:

        nslookup admin.something.local

    and get:

        Server: server.domain.com.au.local
        Address: 192.168.0.10
        DNS request timed out.
            timeout was 2 seconds.
        DNS request timed out.
            timeout was 2 seconds.
        Name: admin.something.local
        Address: 192.168.0.191

    If I go back the other way:

        nslookup 192.168.0.191

    I get:

        Server: server.domain.com.au.local
        Address: 192.168.0.10
        Name: admin.something.local
        Address: 192.168.0.191

    My DNS server address is 192.168.0.10. The new DNS zone is not hooked up to Active Directory. I do not have much experience with DNS; yesterday it was working fine. I have tried doing an 'ipconfig /flushdns' on both my desktop and the DNS server.
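    A hedged set of diagnostics for narrowing down where the delay comes from: the client's suffix search list and the exact queries it sends, plus the zone as the server sees it (the dnscmd line runs on the DNS server itself):

        ipconfig /all | findstr /i "Suffix"
        nslookup -debug admin.something.local 192.168.0.10
        dnscmd . /zoneinfo something.local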

    Read the article

  • Configure Apache + Passenger to serve static files from different directory

    - by Rory Fitzpatrick
    I'm trying to set up Apache and Passenger to serve a Rails app. However, I also need Apache to serve static files from a directory other than /public and to give precedence to these static files over anything in the Rails app. The Rails app is in /home/user/apps/testapp and the static files are in /home/user/public_html. For various reasons the static files cannot simply be moved to the Rails public folder. Also note that the root http://domain.com/ should be served by the index.html file in the public_html folder. Here is the config I'm using:

        <VirtualHost *:80>
            ServerName domain.com
            DocumentRoot /home/user/apps/testapp/public
            RewriteEngine On
            RewriteCond /home/user/public_html/%{REQUEST_FILENAME} -f
            RewriteCond /home/user/public_html/%{REQUEST_FILENAME} -d
            RewriteRule ^/(.*)$ /home/user/public_html/$1 [L]
        </VirtualHost>

    This serves the Rails application fine but gives a 404 for any static content from public_html. I have also tried a configuration that uses DocumentRoot /home/user/public_html, but that doesn't serve the Rails app at all, presumably because Passenger doesn't know to process the request. Interestingly, if I change the conditions to !-f and !-d and the rewrite rule to redirect to another domain, it works as expected (e.g. http://domain.com/doesnt_exist gets redirected to http://otherdomain.com/doesnt_exist). How can I configure Apache to serve static files like this, but allow all other requests to continue to Passenger?
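    A sketch of one possible adjustment (an assumption about the cause, not a verified fix): as written, the two RewriteCond lines are ANDed, so a path would have to be both a regular file and a directory for the rule to ever fire. ORing them, and testing against the request URI, looks like:

        RewriteEngine On
        RewriteCond /home/user/public_html%{REQUEST_URI} -f [OR]
        RewriteCond /home/user/public_html%{REQUEST_URI} -d
        RewriteRule ^/(.*)$ /home/user/public_html/$1 [L]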

    Read the article

  • lighttpd: why does using a port >= 9000 not work properly?

    - by yejinxin
    I have a lighttpd server which works normally; I can access its website from outside (non-localhost) via http://vm.aaa.com:8080. Let's just assume that it's a simple static website, without PHP or MySQL. Now I want to copy this website as a test instance (using another port) on the same machine, and I do not want to use a virtual host. So I just copied the whole file tree of the original server, including lighttpd's bin/, conf/, htdocs/ and lib/ folders, and made the required changes, including editing lighttpd.conf. Now what confuses me is this: if I change the port to a number less than 9000, it works perfectly. But if the port is changed to a number equal to or greater than 9000, lighttpd starts, but I cannot access the new website from OUTSIDE, while I can access it from INSIDE (I mean on the same LAN or localhost). The access log from inside looks like this:

        vm.aaa.com:9876 10.46.175.117 - - [08/Oct/2012:13:18:47 +0800] "GET / HTTP/1.1" 200 15 "-" "curl/7.12.1 (x86_64-redhat-linux-gnu) libcurl/7.12.1 OpenSSL/0.9.7a zlib/1.2.1.2 libidn/0.5.6"

    The command I use to start lighttpd is:

        bin/lighttpd -f conf/lighttpd.conf -m lib/ -D

    My lighttpd.conf is like:

        server.modules = (
            "mod_access",
            "mod_accesslog",
        )
        var.rundir           = "/home/work/lighttpd_9876"
        server.port          = 9876
        server.bind          = "0.0.0.0"
        server.pid-file      = var.rundir + "/log/lighttpd.pid"
        server.document-root = var.rundir + "/htdocs/"
        var.cronolog_path    = "/home/work/lighttpd_9876/cronolog/sbin/cronolog"
        server.errorlog      = ...
        accesslog.filename   = ...
        ...

    So why is this happening? I've tried several different ports, still the same. Aren't all ports between 8000 and 65535 the same?
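    Since lighttpd itself binds and answers locally, a hedged next step is to look for filtering between the outside and the box rather than at the web server config; for example, from the server and then from an outside host:

        netstat -tlnp | grep 9876     # confirm the listener is on 0.0.0.0:9876
        iptables -L -n -v | less      # look for rules that only open specific ports
        # from an outside host:
        # telnet vm.aaa.com 9876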

    Read the article

  • EPP Protocol create multiple domains in one command

    - by yannis hristofakis
    I've seen that the <domain:check> command can check multiple domains in one command. Is it possible to do the same for <domain:create>?

        <?xml version="1.0" encoding="UTF-8" standalone="no"?>
        <epp xmlns="urn:ietf:params:xml:ns:epp-1.0">
          <command>
            <create>
              <domain:create xmlns:domain="urn:ietf:params:xml:ns:domain-1.0">
                <domain:name>example.com</domain:name>
                <domain:period unit="y">2</domain:period>
                <domain:ns>
                  <domain:hostObj>ns1.example.com</domain:hostObj>
                  <domain:hostObj>ns1.example.net</domain:hostObj>
                </domain:ns>
                <domain:registrant>jd1234</domain:registrant>
                <domain:contact type="admin">sh8013</domain:contact>
                <domain:contact type="tech">sh8013</domain:contact>
                <domain:authInfo>
                  <domain:pw>2fooBAR</domain:pw>
                </domain:authInfo>
              </domain:create>
            </create>
            <clTRID>ABC-12345</clTRID>
          </command>
        </epp>

    Read the article

  • VCL - configuration for Magento and Varnish 3.0.2

    - by Tomas
    I would like to kindly ask if there's someone who can help me configure Varnish for Magento to reach a far higher hit rate. My current ratio from varnishstat is: cache_hit=271, cache_miss=926. I'm asking because I've googled almost every site related to this topic, but 99.9% of the configurations don't work because of outdated code. Details of my set-up: I use Varnish on port 80, Apache on port 81, PageCache as the Magento Varnish module, APC for PHP speed, and Memcached for dynamic caching. Load time is about 1.5s on the home page (Pingdom.com average results, USA ping) and 2.5s from Europe. The servers are located in Toronto, Canada.

    EDIT: This is my full VCL configuration: http://pastebin.com/885BzHCs (I just use xxx.xxx.xxx.xxx for my IPs). This is the info from the command (varnishtop -i TxHeader -I Cookie):

        TxHeader Cookie: frontend=965b5...(*lots of numbers); adminhtml=3ae65...(*lots of numbers); EXTERNAL_NO_CACHE=1

    "(*lots of numbers)" is just my addition to the info. Any idea how to avoid Varnish hitting these cookies? (If I got the idea right about avoiding Varnish hitting the cookie and not caching the home page.) Thank you for any help!
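    A hedged VCL 3.0 sketch of the usual first step: strip the Magento frontend/adminhtml cookies on static assets so those requests can be cached at all, while leaving requests carrying EXTERNAL_NO_CACHE alone. The extension list and the pass rule are assumptions to adapt to the pastebin config:

        sub vcl_recv {
            if (req.http.Cookie ~ "EXTERNAL_NO_CACHE") {
                return (pass);
            }
            if (req.url ~ "\.(css|js|png|gif|jpe?g|ico|swf|woff)(\?.*)?$") {
                unset req.http.Cookie;
            }
        }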

    Read the article
