Search Results

Search found 24376 results on 976 pages for 'site crawler'.

Page 4/976 | < Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >

  • Packaging a web application for deploying at customer site

    - by chitti
    I want to develop a Django webapp that would get deployed at the customer site. The web app would run in a private cloud environment (an ESX server here) of the customer. My web app would use a MySQL database. The problem is I would not have direct access to or control of the webapp. My question is: how do I package such a web application with its database and other entities so that it's easier to upgrade/update the app and its database in the future? Right now the idea I have is that I would provide a VM with the Django app and database set up. The customer can just start the VM and he would have the webapp running. What are the other options I should consider?
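
    One way to make in-place upgrades less painful is to ship an upgrade script inside the VM that installs a new release and updates the database in one step. A minimal sketch, assuming a pip-installable package; the paths, package name, service name and migration command below are placeholders for illustration, not part of the question:

        # upgrade.py -- hypothetical in-VM upgrade helper
        import subprocess
        import sys

        APP_DIR = "/opt/mywebapp"   # assumed install location inside the VM
        PACKAGE = "mywebapp"        # assumed pip-installable package name

        def run(cmd):
            print("running:", " ".join(cmd))
            subprocess.check_call(cmd, cwd=APP_DIR)

        def upgrade(version):
            # install the new application release
            run([sys.executable, "-m", "pip", "install", "--upgrade",
                 "%s==%s" % (PACKAGE, version)])
            # apply database changes (South, manage.py migrate or syncdb,
            # depending on the Django version in use)
            run([sys.executable, "manage.py", "migrate", "--noinput"])
            # restart the app server; the service name is an assumption
            subprocess.check_call(["service", "mywebapp", "restart"])

        if __name__ == "__main__":
            upgrade(sys.argv[1])

    The customer (or a cron job) only ever runs this one script, which keeps the VM image itself stable between releases.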

    Read the article

  • Web Site Search Engines - Sending Your Site to Search Engine Sites

    Search engines are the number one cost-effective way to market your business and web site. Studies indicate that the vast majority of viewers find web sites via the leading search engines and directories. A quality listing on a leading search engine or directory can drive targeted traffic to your website and improve your business in a short period of time.

    Read the article

  • IPSec tunnelling with ISA Server 2000...

    - by Izhido
    Believe it or not, our corporate network still uses ISA Server 2000 (on a Windows Server 2003 machine) to enable / control Internet access to / from it. I was asked recently to configure that ISA Server to create a site-to-site VPN for a new branch in an office about 25 km away. The idea is basically to enable not only computers, but also Palm devices (WiFi-enabled, of course), to see other computers at both sites. I was also told that a simple VPN-enabled wireless AP/router (in this case, a Cisco WRV210 unit) should be enough to establish communications with the main office. To be fair, the router looks easy to configure; it was confusing at first, but further understanding of how site-to-site VPNs work cleared all doubts about it. Now I need to make modifications to our ISA Server in order to recognize the newly installed & configured "remote" VPN site. Thing is, either my Googling skills are pathetically horrible, or there doesn't seem to be much (or any) information about how to configure an ISA Server 2000 for this purpose. Lots of stuff on 2004, of course; also, I think I saw something for 2006. But nothing I could find about 2000. Reading about 2004, it seems that the only way I can do site-to-site with a Cisco router (read: a non-ISA-Server machine) is through something they call an "IPSec tunnel". Fair enough. However, I can't figure out for the life of me how I could even start to find, let alone configure, such a thing. Do you people happen to know how to do IPSec tunnelling on an ISA Server 2000, so I can connect to a Cisco WRV210 VPN-enabled router and build a site-to-site VPN for both networks? Or is this not possible at all? (Meaning I should change something in this configuration to make it work...)

    Read the article

  • Grapeshot crawler ignoring robots.txt

    - by QF_Developer
    Has anyone come across a crawler called Grapeshot? They are hammering the same page repeatedly on our website. I believe they are looking for ad-related keywords, based on previous content ad campaigns. The odd thing is we never ran any such campaigns on the page they are so interested in. We only have a few pages running AdSense; is this what has attracted Grapeshot? I've added the following declaration to my robots.txt, but they don't seem to be honouring it:

        User-agent: grapeshot
        Disallow: /

    Any ideas on how to block this nuisance crawler? I'm starting to think the best way is to set up IP rules in IIS.
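
    As a first step, it may be worth confirming that the rule reads the way a well-behaved crawler would interpret it. A minimal sketch using Python's standard robots.txt parser; the domain, path and user-agent variants are placeholders:

        # Check which user-agent tokens the current robots.txt actually blocks.
        from urllib.robotparser import RobotFileParser

        rp = RobotFileParser("https://www.example.com/robots.txt")
        rp.read()

        for agent in ("grapeshot", "GrapeshotCrawler", "*"):
            print(agent, "may fetch /some-page/:", rp.can_fetch(agent, "/some-page/"))

    If the rule checks out but the requests keep coming, blocking by IP or user-agent at the IIS level is the usual fallback, since robots.txt is purely advisory.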

    Read the article

  • Google crawler "not found" (404) error for a link inside the <head> tag

    - by inckka
    I've found a crawler error on my site, and it is listed as a page not found (404) link. Here's the broken link: http://mydomain.com/blog/comments/feed/ I'm using Google Webmaster Tools and found that the broken link comes from my web site pages' head tag. Here's the actual code where that link is situated:

        <head>
        <link rel="alternate" type="application/rss+xml" title="My Domain Blog &raquo; Feed" href="http://www.my-domain.com/blog/feed/" />
        </head>

    So Google reports this link as not found. This link target is not an exact page or location, but it is essential for the blog feeds. Anyway, I have to fix this and remove it from the Google crawler errors list, but I haven't got any idea how, because I cannot redirect or send a 404 header for this link target. Has anyone got an idea of how to fix this?
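
    Since the <head> references /blog/feed/ while the crawler error reports /blog/comments/feed/, a quick check of what each URL actually returns may narrow things down. A small diagnostic sketch; the domain is a placeholder:

        # Print the HTTP status of both feed URLs.
        import urllib.request
        import urllib.error

        for url in ("http://www.example.com/blog/feed/",
                    "http://www.example.com/blog/comments/feed/"):
            try:
                with urllib.request.urlopen(url) as resp:
                    print(url, "->", resp.status)
            except urllib.error.HTTPError as e:
                print(url, "->", e.code)

    If the comments feed is the one returning 404, the fix usually lies on the blog side (enable or redirect the comments feed) rather than in the <head> markup quoted above.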

    Read the article

  • Submitting a sitemap to take care of inherited Google crawler errors

    - by leeand00
    I have an awful lot of Google crawler errors (1,000 or so) after inheriting a site that the previous owner migrated without moving much of their content. Would generating a sitemap of the current site and submitting it to Google help fix this? Is there any quicker, automated way to eliminate the errors other than clicking through each and every site error? Note: I have already tried automating this on my own.
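
    If a sitemap is the route you take, it only needs to list the URLs that actually exist on the current site; it helps Google find the live pages, while the inherited 404 reports typically age out on their own. A minimal sitemap-generation sketch; the URL list is a placeholder:

        # Write a sitemap.xml from a list of known-good URLs.
        from xml.etree import ElementTree as ET

        urls = [
            "http://www.example.com/",
            "http://www.example.com/about/",
        ]

        urlset = ET.Element("urlset", xmlns="http://www.sitemaps.org/schemas/sitemap/0.9")
        for u in urls:
            ET.SubElement(ET.SubElement(urlset, "url"), "loc").text = u

        ET.ElementTree(urlset).write("sitemap.xml", encoding="utf-8", xml_declaration=True)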

    Read the article

  • Will the new site keep Google traffic from the old site when moving content over? [closed]

    - by user1324762
    Possible Duplicate: new domain, old links are 301'd from old domain to new, how will this affect my rankings? I have a site about bikers. Now I have created a dating site for bikers. I don't need the old site any more; I want to move all the articles to this new dating site. So basically, this is not only moving content to a new domain, but to an entirely new site. What I am planning to do is set up 301 redirects for all 200 articles. For pages that are not articles, I will just put up a message that the site will be down soon. Do you think I will get all the Google traffic those articles brought to the old site? Is there anything I should be aware of or careful about?
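
    With around 200 articles, the redirect map is easy to generate rather than write by hand. A small sketch that turns an old-path to new-URL mapping into Apache mod_alias rules; the file name and column layout are assumptions:

        # Emit one "Redirect 301" line per article from a two-column CSV.
        import csv

        with open("article_map.csv", newline="") as f, open("redirects.conf", "w") as out:
            for old_path, new_url in csv.reader(f):
                out.write("Redirect 301 %s %s\n" % (old_path, new_url))

    The generated lines can be dropped into the old site's .htaccess; anything not in the map can fall through to the "site will be down soon" page mentioned above.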

    Read the article

  • Strategy for hosting 700+ domains, each with static HTML site

    - by jonschlinkert
    I have a portfolio of more than 700 domain names, and ideally I'd like to put up a single-page HTML/CSS/JavaScript webpage for each domain. Is there a system/strategy/workflow that will allow me to: Automate the deployment of new websites, quickly and easily, without having to manually initiate each new website in an admin panel. For instance, I've seen Dropbox-based solutions that claim to make it simple to set up new websites on your Dropbox account, but you still have to set each one up in an admin interface first. It would be so much easier to have a folder naming convention that allowed the user to easily clone/copy/duplicate sites inside their Dropbox App folder (https://www.dropbox.com/developers/blog/23) to create new ones. Sounds interesting, however... It's easy to manage CNAMEs on the registrar side, but is there a way to quickly associate CNAMEs with new websites, maybe gh-pages-style (https://help.github.com/articles/setting-up-a-custom-domain-with-pages)? With GitHub's gh-pages, all you have to do is drop a file called CNAME into your repo, with the domain name you want associated with the repo inside the file. Unfortunately, gh-pages isn't a good solution for what I'm doing. I'm also a front-end developer specializing in rapid web development and "front-end build systems", so building and maintaining static assets for hundreds of sites is no problem. It's the hosting side that I really struggle with. Any suggestions?
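
    The folder-naming-convention part of the idea is easy to script regardless of which host ends up serving the files. A minimal scaffolding sketch that creates one folder per domain with an index.html and a gh-pages-style CNAME file; the domain list, paths and template are placeholders:

        # Scaffold one static-site folder per domain.
        import os

        domains = ["example-one.com", "example-two.com"]
        template = "<!doctype html><title>{d}</title><h1>{d}</h1>"

        for d in domains:
            site_dir = os.path.join("sites", d)
            os.makedirs(site_dir, exist_ok=True)
            with open(os.path.join(site_dir, "index.html"), "w") as f:
                f.write(template.format(d=d))
            with open(os.path.join(site_dir, "CNAME"), "w") as f:
                f.write(d + "\n")

    Whether the CNAME file is actually honoured depends entirely on the host; on plain static hosting you would still need the web server (for example a wildcard virtual host) to map each hostname onto its folder.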

    Read the article

  • Want to tap into a niche market. Do I create a new site or bolt it onto my existing site?

    - by nitbuntu
    Hi, after a lot of hard work and a few years of perseverance, I'm seeing regular sales on my website, which have been steadily growing over the past year. However, the entrepreneur in me wants to tap into a niche market which I've become very interested in. It's possible to bolt this niche onto my existing site as an additional category without it looking too out of place; my new category of products would also benefit from the ranking my current site gets. The kind of people who would purchase these new niche products, however, are very particular and obsessive about detail. So, for example, many vegetarians would not eat in KFC even if it were to introduce a new range of veggie burgers. So I thought it best to create a new website, and since my existing site was created using an 'old-school' shopping cart and there are many more up-to-date, feature-rich ones available now, I wanted to use a different shopping cart system. My dilemma is that I already have 2 websites (1 B2C and another B2B site), and maintaining a 2nd B2C site would vastly increase my workload; I fear that I would not be able to pay adequate attention to all the sites. Moreover, the additional customer service work (e.g. answering emails from many separate email accounts) could end up being too confusing and difficult to maintain. The easy answer would be to take on an employee, but I'm just not earning enough to justify this yet. If anyone has any tips or experience they'd like to share which could help me answer this question, I'd be highly grateful.

    Read the article

  • Will adding q&a help my site's rankings, and if so, what are the implications of a sub-domain for q&a rather than a path on the site? [closed]

    - by ElHaix
    Possible Duplicate: Subdomain versus subdirectory. One of our web properties is doing quite well without any additional links being created on the site, and our link inventory is tightly managed (no user-generated links). To introduce a community aspect to the site, we want to implement a Q&A forum. Once it is in place, new links will populate our link inventory with keywords that are not necessarily targeted to the site. If the Q&A lives on a sub-domain, would that keep it from affecting the main site's rankings? What's the best approach for this?

    Read the article

  • How to best develop web crawlers

    - by Fernando Barrocal
    Hey all, I often create crawlers to compile information, and when I come across a website with the info I need, I start a new crawler specific to that site, using shell scripts most of the time and sometimes PHP. The way I do it is with a simple for loop to iterate over the page list, wget to download each page, and sed, tr, awk or other utilities to clean the page up and grab the specific info I need. The whole process takes some time, depending on the site, and more to download all the pages. And I often run into an AJAX site that complicates everything. I was wondering if there are better or faster ways to do this, or even applications or languages that help with such work.
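
    For comparison, the same fetch-and-scrape loop fits in a few lines of Python, which also makes it easier to add headers, retries, or parsing of the JSON endpoints that AJAX-heavy sites usually load their data from. A rough sketch; the URL pattern and regex are placeholders:

        # Fetch a list of pages and pull out matching fragments.
        import re
        import time
        import urllib.request

        PAGES = ["http://www.example.com/list?page=%d" % i for i in range(1, 6)]
        PATTERN = re.compile(r'<h2 class="title">(.*?)</h2>', re.S)

        for url in PAGES:
            html = urllib.request.urlopen(url).read().decode("utf-8", "replace")
            for title in PATTERN.findall(html):
                print(title.strip())
            time.sleep(1)  # be polite between requests

    For AJAX sites, it is often simpler to watch the browser's network tab, find the XHR endpoint, and fetch that JSON directly instead of scraping the rendered page.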

    Read the article

  • Logic behind crawling webpages like Screaming Frog does? [on hold]

    - by sree
    I would like to know what parameters should be considered while developing a crawler like Screaming Frog. I'm looking for information on the do's and don'ts of webpage crawling. What problems can the crawler inflict on the webpages it crawls, such as load time (maybe?) or anything else that affects a webpage during crawling? What rules does the crawler need to follow, etc.? Basically, any info that makes the crawler well-behaved and accurate. Just point me in the right direction to achieve it. Hope my requirement is clear this time. :)
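
    The usual baseline is: honour robots.txt, send an identifiable User-Agent, and rate-limit requests per host so the crawl doesn't affect the site's load time. A minimal sketch of those three habits; the site, user-agent string and delay are placeholders:

        # Fetch a URL only if robots.txt allows it, with a UA and a delay.
        import time
        import urllib.request
        from urllib.robotparser import RobotFileParser

        UA = "MyCrawler/0.1 (+http://example.com/bot)"
        rp = RobotFileParser("http://www.example.com/robots.txt")
        rp.read()

        def fetch(url, delay=2.0):
            if not rp.can_fetch(UA, url):
                return None                    # respect Disallow rules
            time.sleep(delay)                  # limit request rate per host
            req = urllib.request.Request(url, headers={"User-Agent": UA})
            with urllib.request.urlopen(req) as resp:
                return resp.read()

        page = fetch("http://www.example.com/")
        print(len(page or b""))

    Other common rules: respect Crawl-delay and sitemap hints where present, deduplicate URLs before fetching, and back off on 429/5xx responses.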

    Read the article

  • How to hide pages from Google crawler? [closed]

    - by NoobDev4iPhone
    Possible Duplicate: What are the most important things I need to do to encourage Google Sitelinks? I'm currently working on a website and need to keep certain pages hidden from the Google crawler. How can I make it so that search engines see only what I want them to see in a directory? Also, you know how Google results sometimes give you shortcut links, like 'Login', 'About', etc.? How can I get those links into my search result?
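
    For the hiding part, the two standard tools are a robots.txt Disallow for whole directories and a noindex signal for individual pages; the shortcut links under a result are Google Sitelinks, which Google generates automatically rather than letting you configure them. A Django-flavoured sketch of sending the noindex signal as a header (the view and template names are placeholders; the same X-Robots-Tag header can be set by any web server or framework):

        # Serve a page that search engines are asked not to index.
        from django.shortcuts import render

        def private_page(request):
            response = render(request, "private.html")
            response["X-Robots-Tag"] = "noindex, nofollow"
            return response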

    Read the article

  • Google "not selecting" many of the links on my site

    - by Loki
    A few days ago I noticed that Google wasn't indexing any of the pages on my site anymore. When I checked the Index Status page, I saw that the "not selected" line is going through the roof! The information on Google's help pages is very limited: https://support.google.com/webmasters/bin/answer.py?hl=nl&answer=139066 I run a free downloads site which is automatically updated. I don't have duplicate content (except that sometimes the description of a piece of software is similar to that of an older version of the same software), and my URLs are all forwarded to the www. variant from within Wordpress. So the canonical issue that Google mentions in their help file isn't the problem. Any ideas what could be causing this and how to solve it?

    Read the article

  • Cisco: Site-to-site VPN with cisco 878 and ASA weirdness

    - by cpf
    I currently have 2 sites, connected to each other through 2 firewalls/routers in a site-to-site VPN. Pinging from server to server (using the 2Mb/2Mb SDSL) through that VPN obviously works. However, at one site we have another internet connection (7Mb/400Kb ADSL), and only the link between the two sites should stay on the SDSL connection: all PCs should go over the ADSL for internet, and only communication between the servers, and between PCs and the server at the other site, should go over the SDSL. What is configured at the moment: the server uses the SDSL directly as its default gateway (since it isn't meant to surf anything, that is a safe config), and the PCs have the ADSL as their default gateway. Now I want everything destined for the range used at the other site to be sent from the ADSL router to the SDSL router, which has the VPN connection. I figured I could use OSPF to do so; however, OSPF doesn't seem to "detect" the range of the remote site. Also (due to bad IP subnetting, thanks to the other administrator), the IP used internally for the server at the other site also exists on the internet (causing a lot of confusion), so RDP-ing from our server to the server at the other site works (somehow), but a traceroute from the SDSL router (which, in my opinion, should go over the VPN) actually goes all over the internet. My question(s): Why doesn't the SDSL router ping the external IP through the VPN, while the server does? Why can't I route from the ADSL router to the SDSL router over the VPN? I would seriously appreciate some help, since I can't figure out why it behaves like this.

    Read the article

  • Moving from single-site to multi-site Active Directory has broken OWA proxying

    - by messick
    Originally we had the following setup: OfficeExch01 has the Mailbox role and CAS role; OfficeExch01 is in the office. CoLoExch01 had just the CAS role; CoLoExch01 is internet facing and in a CoLo. Three AD domain controllers are in the default site. Users could go to https://webmail.whatever.com/owa, get proxied to OfficeExch01, and everything was great. Well, we recently set up a separate AD site and put a domain controller and the ColoExch01 server in the new site. I also made that remote DC a Global Catalog. Now, users get the following error: Outlook Web Access is not available. If the problem continues, contact technical support for your organization and tell them the following: There is no Microsoft Exchange Client Access server that has the necessary configuration in the Active Directory site where the mailbox is stored. I also see event 41 errors in the logs: The Client Access server "https://webmail.xxxxxxx.com/owa" attempted to proxy Outlook Web Access traffic for mailbox "/o=XXXXX/ou=Exchange Administrative Group (FYDIBOHF23SPDLT)/cn=Recipients/cn=xxxxxxk". This failed because no Client Access server with an Outlook Web Access virtual directory configured for Kerberos authentication could be found in the Active Directory site of the mailbox. The simplest way to configure an Outlook Web Access virtual directory for Kerberos authentication is to set it to use Integrated Windows authentication by using the Set-OwaVirtualDirectory cmdlet in the Exchange Management Shell, or by using the Exchange Management Console. If you already have a Client Access server deployed in the target Active Directory site with an Outlook Web Access virtual directory configured for Kerberos authentication, the proxying Client Access server may not be finding that target Client Access server because it does not have an internalUrl parameter configured. You can configure the internalUrl parameter for the Outlook Web Access virtual directory on the Client Access server in the target Active Directory site by using the Set-OwaVirtualDirectory cmdlet. Looking this up, I see a lot of talk about ExternalURL and InternalURL settings. However, everything worked great until we made the new AD site. I also made sure the internal CAS server's /owa virtual directory is set to use Integrated Authentication. Is there something I need to do to allow Exchange to see that I've made these AD changes?

    Read the article

  • SEO effect of “You are leaving this site” page for outbound links?

    - by Timo Huovinen
    The problem: I am working on an aggregation website that collects reviews about specific products from various websites. The site has many thousands of outbound links (with "nofollow" attributes) to the source websites the reviews were collected from. The site has far more outbound links than inbound links, and I have read that this is bad for SEO. The question: Would adding an intermediate «You are leaving this site» disclaimer/warning page like this hurt search engine rankings? And can you provide any links about this topic? p.s. The exit page would be a POST form instead of a script; it notifies the user that he/she is leaving the site and provides a button to continue to the other website. p.p.s. This kind of idea is implemented on many forums and aggregation websites, with the purpose of warning the user that he/she is leaving the site and of keeping search engine bots from following those links, because search bots do not submit forms.

    Read the article

  • Weird 301 redirection seen by the Google crawler

    - by Ace
    I have some pages on my website www.acethem.com which show up as 301 redirects, but they are not actually 301 redirects. E.g. www.acethem.com/pastpapers/by-year/2007/ is seen by Google as a 301 redirect to www.acethem.com/pastpapers/by-year (I am using "Fetch as Google" in Webmaster Tools). Now it gets weirder: my paginated pages with page = 10 are all redirected to the homepage: http://www.acethem.com/pastpapers/o-level/chemistry/page/10/, while http://www.acethem.com/pastpapers/o-level/chemistry/page/9/ works properly in the Google crawler. Note that all these pages work fine with no redirect in browsers. Sidenote: on www.acethem.com/pastpapers/by-year/2007/, the Facebook share button also points to www.acethem.com/pastpapers/by-year/.
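
    One way to narrow this down is to request the affected URLs without following redirects, once with a browser User-Agent and once with Googlebot's, to see whether the 301 is only served to crawlers. A hedged diagnostic sketch:

        # Report the raw status code and Location header for each user agent.
        import http.client
        from urllib.parse import urlsplit

        URL = "http://www.acethem.com/pastpapers/o-level/chemistry/page/10/"
        AGENTS = {
            "browser": "Mozilla/5.0",
            "googlebot": "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)",
        }

        def status_and_location(url, ua):
            parts = urlsplit(url)
            conn = http.client.HTTPConnection(parts.netloc)
            conn.request("GET", parts.path, headers={"User-Agent": ua})
            resp = conn.getresponse()
            return resp.status, resp.getheader("Location")

        for name, ua in AGENTS.items():
            print(name, status_and_location(URL, ua))

    If the browser UA gets a 200 while the Googlebot UA gets a 301, the culprit is usually a user-agent-dependent rewrite rule, cache layer or SEO plugin rather than the pages themselves.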

    Read the article

  • Can AdSense crawler view pages that require cookies?

    - by moomoochoo
    Details: I require users to agree to terms and conditions before they can view several pages on my site. Once they have agreed, a cookie is set and they can proceed to the webpage. If a user somehow ends up on the webpage without the cookie, they will not be able to access the page's content. My question(s): Is the AdSense crawler able to set the cookie and visit these pages? If yes, how will it know to agree to the TOS? Is there some way to allow it access to the pages even if it can't use cookies?
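
    A common workaround is to let requests from the AdSense crawler's User-Agent (Mediapartners-Google) skip the consent wall, so it can see the content the ads will run against. A Django-flavoured sketch; the cookie name, template and view are placeholders, and serving crawlers different gating than users has cloaking implications worth checking against AdSense policy:

        # Let known ad-crawler user agents bypass the TOS cookie check.
        from django.shortcuts import redirect, render

        CRAWLER_TOKENS = ("Mediapartners-Google", "AdsBot-Google")

        def gated_page(request):
            ua = request.META.get("HTTP_USER_AGENT", "")
            agreed = request.COOKIES.get("tos_agreed") == "1"
            if agreed or any(token in ua for token in CRAWLER_TOKENS):
                return render(request, "gated_content.html")
            return redirect("/terms/")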

    Read the article

  • Redirect Google crawler to different robots.txt via .htaccess

    - by user3474818
    I have googled for the answer all day and still couldn't find one. I have a virtual subdomain, www.static.example.com, which is a mirror site of www.example.com. That means I have just one root folder for the subdomain and the domain as well. I want to redirect crawlers to a different robots.txt file - robots_static.txt - whenever they see .static in the URL; in that file I will forbid indexing with a Disallow: / rule. I want to do this because I have duplicated content in Google search results: the subdomain shows exactly the same content as the main domain. Does anyone know how I could make crawlers see robots_static.txt instead of robots.txt? What I have managed to find so far is this:

        RewriteCond %{HTTP_HOST} ^www.static.*$ [NC]
        RewriteCond %{THE_REQUEST} ^[A-Z]{3,9}\ /.*robots\.txt.*\ HTTP/ [NC]
        RewriteRule ^robots\.txt /robots_static.txt [NC,L]

    but when I check in Webmaster Tools, it still sees robots.txt as my robots file instead of robots_static.txt, so it crawls and indexes everything twice. What did I do wrong? Thanks.

    EDIT: This is my .htaccess file:

        ##
        # @package Joomla
        # @copyright Copyright (C) 2005 - 2013 Open Source Matters. All rights reserved.
        # @license GNU General Public License version 2 or later; see LICENSE.txt
        ##

        ##
        # READ THIS COMPLETELY IF YOU CHOOSE TO USE THIS FILE!
        #
        # The line just below this section: 'Options +FollowSymLinks' may cause problems
        # with some server configurations. It is required for use of mod_rewrite, but may already
        # be set by your server administrator in a way that dissallows changing it in
        # your .htaccess file. If using it causes your server to error out, comment it out (add # to
        # beginning of line), reload your site in your browser and test your sef url's. If they work,
        # it has been set by your server administrator and you do not need it set here.
        ##

        ## Can be commented out if causes errors, see notes above.
        Options +FollowSymLinks

        ## Mod_rewrite in use.
        RewriteEngine On
        RewriteEngine On
        RewriteCond %{HTTP_HOST} !^www\.
        RewriteRule ^(.*)$ http://www.%{HTTP_HOST}/$1 [R=301,L]
        RewriteCond %{HTTP_HOST} ^www.static.*$ [NC]
        RewriteCond %{THE_REQUEST} ^[A-Z]{3,9}\ /.*robots\.txt.*\ HTTP/ [NC]
        RewriteRule ^robots\.txt /robots_static.txt [NC,L]

        ## Begin - Rewrite rules to block out some common exploits.
        # If you experience problems on your site block out the operations listed below
        # This attempts to block the most common type of exploit `attempts` to Joomla!
        #
        # Block out any script trying to base64_encode data within the URL.
        RewriteCond %{QUERY_STRING} base64_encode[^(]*\([^)]*\) [OR]
        # Block out any script that includes a <script> tag in URL.
        RewriteCond %{QUERY_STRING} (<|%3C)([^s]*s)+cript.*(>|%3E) [NC,OR]
        # Block out any script trying to set a PHP GLOBALS variable via URL.
        RewriteCond %{QUERY_STRING} GLOBALS(=|\[|\%[0-9A-Z]{0,2}) [OR]
        # Block out any script trying to modify a _REQUEST variable via URL.
        RewriteCond %{QUERY_STRING} _REQUEST(=|\[|\%[0-9A-Z]{0,2})
        # Return 403 Forbidden header and show the content of the root homepage
        RewriteRule .* index.php [F]
        #
        ## End - Rewrite rules to block out some common exploits.

        ## Begin - Custom redirects
        #
        # If you need to redirect some pages, or set a canonical non-www to
        # www redirect (or vice versa), place that code here. Ensure those
        # redirects use the correct RewriteRule syntax and the [R=301,L] flags.
        #
        ## End - Custom redirects

        ##
        # Uncomment following line if your webserver's URL
        # is not directly related to physical file paths.
        # Update Your Joomla! Directory (just / for root).
        ##
        # RewriteBase /

        RewriteCond %{THE_REQUEST} ^GET.*index\.php [NC]
        RewriteCond %{THE_REQUEST} !/system/.*
        RewriteRule (.*?)index\.php/*(.*) /$1$2 [R=301,L]
        RewriteCond %{THE_REQUEST} ^GET

        ## Begin - Joomla! core SEF Section.
        #
        RewriteRule .* - [E=HTTP_AUTHORIZATION:%{HTTP:Authorization}]
        #
        # If the requested path and file is not /index.php and the request
        # has not already been internally rewritten to the index.php script
        RewriteCond %{REQUEST_URI} !^/index\.php
        # and the request is for something within the component folder,
        # or for the site root, or for an extensionless URL, or the
        # requested URL ends with one of the listed extensions
        RewriteCond %{REQUEST_URI} /component/|(/[^.]*|\.(php|html?|feed|pdf|vcf|raw))$ [NC]
        # and the requested path and file doesn't directly match a physical file
        RewriteCond %{REQUEST_FILENAME} !-f
        # and the requested path and file doesn't directly match a physical folder
        RewriteCond %{REQUEST_FILENAME} !-d
        # internally rewrite the request to the index.php script
        RewriteRule .* index.php [L]
        #
        ## End - Joomla! core SEF Section.

        <FilesMatch "\.(ico|pdf|flv|jpg|ttf|jpg|jpeg|png|gif|js|css|swf)$">
        Header set Expires "Wed, 15 Apr 2020 20:00:00 GMT"
        Header set Cache-Control "public"
        </FilesMatch>

        <ifModule mod_headers.c>
        Header set Connection keep-alive
        </ifModule>

        ########## Begin - Remove Etags
        #
        FileETag none
        #
        ########## End - Remove Etags

    Read the article

  • RuntimeError: maximum recursion depth exceeded while calling a Python object

    - by Bilal Basharat
    This error arises when I try to run the following test case, which is written in models.py of my Django app named 'administration':

        from django.test import Client, TestCase
        from django.core import mail

        class ClientTest( TestCase ):
            fixtures = [ 'testdata.json' ]

            def test_get_register( self ):
                response = self.client.get( '/accounts/register/', {} )
                self.assertEqual( response.status_code, 200 )

    The error arises at this line specifically:

        response = self.client.get( '/accounts/register/', {} )

    My Django version is 1.2.1, Python is 2.6 and the Satchmo version is 0.9.2-pre hg-unknown. I code on Windows (XP SP2). The command to run the test case is:

        python manage.py test administration

    The complete error log is as follows; the two threaded_multihost frames alternate over and over until the recursion limit is hit:

        File "build\bdist.win32\egg\threaded_multihost\sites.py", line 124, in by_host
          site = by_host(host = 'www.%s' % host, id_only=id_only)
        File "build\bdist.win32\egg\threaded_multihost\sites.py", line 121, in by_host
          site = by_host(host=host[4:], id_only=id_only)
        [... the two frames above repeat many times ...]
        File "build\bdist.win32\egg\threaded_multihost\sites.py", line 101, in by_host
          site = Site.objects.get(domain=host)
        File "C:\django\django\db\models\manager.py", line 132, in get
          return self.get_query_set().get(*args, **kwargs)
        File "C:\django\django\db\models\query.py", line 336, in get
          num = len(clone)
        File "C:\django\django\db\models\query.py", line 81, in __len__
          self._result_cache = list(self.iterator())
        File "C:\django\django\db\models\query.py", line 269, in iterator
          for row in compiler.results_iter():
        File "C:\django\django\db\models\sql\compiler.py", line 672, in results_iter
          for rows in self.execute_sql(MULTI):
        File "C:\django\django\db\models\sql\compiler.py", line 717, in execute_sql
          sql, params = self.as_sql()
        File "C:\django\django\db\models\sql\compiler.py", line 65, in as_sql
          where, w_params = self.query.where.as_sql(qn=qn, connection=self.connection)
        File "C:\django\django\db\models\sql\where.py", line 91, in as_sql
          sql, params = child.as_sql(qn=qn, connection=connection)
        File "C:\django\django\db\models\sql\where.py", line 94, in as_sql
          sql, params = self.make_atom(child, qn, connection)
        File "C:\django\django\db\models\sql\where.py", line 141, in make_atom
          lvalue, params = lvalue.process(lookup_type, params_or_value, connection)
        File "C:\django\django\db\models\sql\where.py", line 312, in process
          connection=connection, prepared=True)
        File "C:\django\django\db\models\fields\subclassing.py", line 53, in inner
          return func(*args, **kwargs)
        File "C:\django\django\db\models\fields\subclassing.py", line 53, in inner
          return func(*args, **kwargs)
        File "C:\django\django\db\models\fields\__init__.py", line 323, in get_db_prep_lookup
          return [self.get_db_prep_value(value, connection=connection, prepared=prepared)]
        File "C:\django\django\db\models\fields\subclassing.py", line 53, in inner
          return func(*args, **kwargs)
        File "C:\django\django\db\models\fields\subclassing.py", line 53, in inner
          return func(*args, **kwargs)
        RuntimeError: maximum recursion depth exceeded while calling a Python object
        ----------------------------------------------------------------------
        Ran 7 tests in 48.453s

        FAILED (errors=1)
        Destroying test database 'default'...

    Read the article

  • Site to Site VPN with Fault Tolerance

    - by Nordberg
    Hello, I have a situation where I require an IPSec tunnel between two sites. Site 2 is a small branch office with basic (ADSL) connectivity, and Site 1 is the "main" office with SDSL, plus ADSL for redundancy should the SDSL fail. From Site 1, all traffic bound for the 172.0.0.0 network will then be sent down another IPSec tunnel to a supplier's remote server. See this page for the basic premise (this is a rough idea and things can be moved about, etc.). I am considering specifying Cisco ASA devices as the firewalls for both sites for all connections. Would it be possible to employ something like HSRP to provide a backup at Site 1 should the SDSL go down? I suppose the key aim here is that Site 2 can somehow fail over to initiating a VPN to the ASA behind the ADSL at Site 1. I will have a /21 subnet mask on all internet connections, so I can play with Class C routing if need be... If I'm barking up the wrong tree with HSRP, is there another way I can achieve this without massive expenditure on Barracuda routers et al? Many thanks.

    Read the article
