Search Results

The search found 37,688 results on 1,508 pages for 'site search'.

  • Hiding a Website from Search Engine Bots and Viewers by Disabling Default VirtualHost

    - by Basel Shishani
    When staging a website on a remote VPS, we would like it to be accessible to team members only, and we would also like to keep the search engine bots off until the site is finalized. Access control by host, whether in iptables or Apache, is not desirable, as the accessing hosts can vary. After some reading of the Apache config and other SF postings, I settled on the following design, which relies on restricting access to specific domain names only. The default virtual host is disabled in the Apache config as follows, relying on Apache's behavior of using the first virtual host as the site default:

        <VirtualHost *:80>
            # Anything matching this should be silently ignored.
        </VirtualHost>

        <VirtualHost *:80>
            ServerName secretsiteone.com
            DocumentRoot /var/www/secretsiteone.com
        </VirtualHost>

        <VirtualHost *:80>
            ServerName secretsitetwo.com
            ...
        </VirtualHost>

    Each team member can then add the domain names to their local /etc/hosts:

        xx.xx.xx.xx secretsiteone.com

    My question is: is the above technique good enough to achieve the stated goals, especially restricting search engine bots, or could bots work around it? Note: I understand that mod_rewrite rules can be used to achieve a similar effect, as discussed in "How to disable default VirtualHost in apache2?", so the same question applies to that technique too. Also please note that the content is not highly secretive - the idea is not to devise something that is hack-proof, so we are not concerned about traffic interception or the like. The idea is to keep competitors and casual surfers from viewing the content before it is released, and to prevent search engine bots from indexing it.
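
    If silently ignoring is not considered strict enough, a minimal sketch of a catch-all default vhost that refuses unmatched Host headers outright might look like this (assuming Apache 2.2 with mod_headers available; the X-Robots-Tag header is only a hint to well-behaved crawlers):

        <VirtualHost *:80>
            # Must be the first vhost so it becomes the default for unmatched Hosts.
            ServerName catchall.invalid
            # Refuse the request instead of serving anything.
            <Location />
                Order allow,deny
                Deny from all
            </Location>
            # Assumes mod_headers; asks compliant crawlers not to index even the 403 page.
            Header always set X-Robots-Tag "noindex, nofollow"
        </VirtualHost>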

  • Setting Up Multiple Domains (plus wildcard subdomains) to Point to the Same Site/VirtualHost

    - by Derek Reynolds
    I have my primary domain with wildcard subdomains set up already: username.maindomain.com and maindomain.com. I want to provide my users with additional domains that they can select: additional1.com, additional2.com, additional3.com... These additional domains would also need to support wildcard subdomains (as the subdomain routes to a username). Does anyone know how to properly configure this in the DNS and VirtualHost config? Currently I have the additional domains as A records pointing to the same IP as my main domain (with a wildcard-subdomain A record for each as well). In my VirtualHost config I am placing the additional domain names in the ServerAlias directive. Let me know if any more detail is needed.
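
    For what it's worth, a minimal sketch of the shape such a setup usually takes (the IP is a placeholder and the zone syntax is BIND-style; this mirrors the A-record-plus-ServerAlias approach described above):

        ; DNS: each additional domain and its wildcard point at the same address.
        additional1.com.     IN  A  203.0.113.10
        *.additional1.com.   IN  A  203.0.113.10

        # Apache: one vhost answers for every domain and every subdomain.
        <VirtualHost *:80>
            ServerName maindomain.com
            ServerAlias *.maindomain.com
            ServerAlias additional1.com *.additional1.com
            ServerAlias additional2.com *.additional2.com
            DocumentRoot /var/www/mainsite
        </VirtualHost>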

  • Encrypted off-site data storage

    - by Dan
    My business has a rather unique problem. We work in China and we want to implement a file server paradigm which does not store any files locally, but rather on a server overseas. Applications would be saved onto our local machines, but data would be loaded directly into memory from the cloud, e.g. I load a .docx into Word at the beginning of the day, save periodically to the cloud as I work on it, and turn off my computer at night, with nothing saved locally. Considering recent events, we worry about being raided by the Chinese authorities, and although all our data is encrypted, it would not be hard for the authorities to force us to give up the keys. So the goal is not to have anything compromising physically in China.

    We have about 20 computers, and we need an authenticated, encrypted connection with this overseas file server. A system with Active-Directory-like permissions would be best, so that only management can read or write certain files, workers can only access files that relate to their projects, and all access can be cut off should the need arise. The file server itself would also need to be encrypted. And for convenience, it would be nice if this system were integrated with each computer's file explorer (like SkyDrive or Dropbox do, but, again, without saving a copy locally), rather than accessed through a browser.

    I can't find any solution online. Does anyone know of a service that does this? Otherwise I'll have to do it myself (which kind of sounds fun, but I don't really have the time), and I'm not sure where to start. Amazon, maybe. But the protocols that offices typically use on their intranet aren't encrypted; we need all traffic securely tunneled out of the country. Each computer already has a VPN to a server in California, but I'm unsure whether it would be efficient to pipe file transfers through it. Let me know if anyone has any ideas. And this is my first post; feel free to say whether this question is inappropriate/needs to be posted elsewhere.

  • Nginx / uWSGI / Django site can handle more traffic with a rewritten URL

    - by Ludo
    Hi there. I'm running a Django app, using uWSGI behind Nginx. I've been doing some performance tuning and load testing using ApacheBench and have discovered something unexpected which I wonder if someone could explain for me.

    In my Nginx config I have a rewrite directive which catches lots of different URL permutations and then forwards them to the canonical URL I wish to use, e.g. it traps www.mysite.com/whatever and www.mysite.co.uk/whatever and forwards them all to http://mysite.com/whatever. If I load test against any of the URLs listed with a redirect (i.e. NOT the canonical URL which they are eventually forwarded to), it can serve 15000 concurrent connections without breaking a sweat. If I load test against the canonical URL, which the above test would have been forwarded to anyway, it can't handle nearly as much. It will drop about 4000 of the 15000 requests, and can only handle about 9000 reliably. This is the command line I'm using to test:

        ab -c15000 -n15000 http://www.mysite.com/somepath/

    and

        ab -c15000 -n15000 http://mysite.com/somepath/

    I've tried several runs - it makes no difference which order I do them in. This doesn't make sense to me - I can understand why the requests involving a redirect might not handle quite so many concurrent connections, but it's happening the other way round. Can anyone explain? I'd really prefer it if the canonical URL were the one which could handle more traffic. I'll post my Nginx config below. Thanks loads for any help!

        server {
            server_name www.somesite.com somesite.net www.somesite.net somesite.co.uk www.somesite.co.uk;
            rewrite ^(.*) http://somesite.com$1 permanent;
        }

        server {
            root /home/django/domains/somesite.com/live/somesite/;
            server_name somesite.com somesite-live.myserver.somesite.com;
            access_log /home/django/domains/somesite.com/live/log/nginx.log;

            location / {
                uwsgi_pass unix:////tmp/somesite-live.sock;
                include uwsgi_params;
            }
            location /media {
                try_files $uri $uri/ /index.html;
            }
            location /site_media {
                try_files $uri $uri/ /index.html;
            }
            location = /favicon.ico {
                empty_gif;
            }
        }
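
    One detail worth checking, as a hedged aside: ab does not follow redirects, so a run against the www host only ever measures Nginx answering 301s by itself, while the canonical host exercises the full uWSGI/Django stack. A quick way to confirm what each test actually hits (assuming curl is installed):

        # Should come back as a bare 301 served directly by Nginx.
        curl -I http://www.mysite.com/somepath/
        # Should come back 200, generated by uWSGI/Django.
        curl -I http://mysite.com/somepath/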

  • Error 310 (too many redirects) after moving Drupal site to FastCGI

    - by Jaels
    Here is the trouble: when I follow this link - http://znak.net.ua - it rewrites to http://znak.net.ua/ru/ru/ru/ru/ru/ and I get Error 310 (net::ERR_TOO_MANY_REDIRECTS). This happened when I started using FastCGI instead of mod_php. Here is my .htaccess:

        ErrorDocument 404 "The requested file favicon.ico was not found.
        DirectoryIndex index.php

        <IfModule mod_php4.c>
        </IfModule>
        <IfModule sapi_apache2.c>
        </IfModule>
        <IfModule mod_php5.c>
        </IfModule>

        <IfModule mod_expires.c>
          ExpiresActive On
          ExpiresDefault A1209600
          ExpiresByType text/html A1
        </IfModule>

        <IfModule mod_rewrite.c>
          RewriteEngine on
          RewriteRule ^(.*)$ http://znak.net.ua/ru/$1 [L,R=301]
          RewriteCond %{REQUEST_FILENAME} !-f
          RewriteCond %{REQUEST_FILENAME} !-d
          RewriteCond %{REQUEST_URI} !=/favicon.ico
          RewriteRule ^(.*)$ ru/index.php?q=$1 [L,QSA]
        </IfModule>
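
    An observation on the config as posted: the first RewriteRule has no RewriteCond guarding it, so every request - including one already under /ru/ - gets another /ru/ prepended, which would produce exactly the ru/ru/ru chain above. A sketch of one possible guard (assuming that rule is meant only for URLs not yet under /ru/):

        <IfModule mod_rewrite.c>
          RewriteEngine on
          # Only add the /ru/ prefix when it is not already present.
          RewriteCond %{REQUEST_URI} !^/ru/
          RewriteRule ^(.*)$ http://znak.net.ua/ru/$1 [L,R=301]
          # ... rest of the rules unchanged ...
        </IfModule>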

  • Amazon EBS for web site files

    - by MattB
    I'm new to the Amazon EC2/EBS system. I'm trying to figure out the "best practice" for hosting a web application (PHP, ASP.NET, etc.). The way I see it, I have two options:

    1. Have the instance hold my web files - don't need to worry about attaching volumes, etc.?
    2. Have an EBS volume hold my web files - easily update with new code without needing to recreate the AMI for new updates?

    How do others handle this?

  • Experiences with hosted (off-site) Microsoft Dynamics CRM?

    - by Beau
    We're currently testing Microsoft Dynamics CRM hosted by Fpweb. I've been asked by the lead on the project how we can increase the speed at which CRM pages in IE load. The delay seems reasonable to me for a virtual server located across the country. Has anyone succeeded in speeding things up with aggressive caching (i.e. a WAN accelerator) or by some other means? Do your employees complain about the speed of hosted Dynamics CRM?

  • 403 forbidden for cgi-bin/ and cannot protect site with password

    - by gasgdasdgasdg
    The first problem I have is that I am getting a 403 Forbidden error for cgi-bin/. I have created a new /var/www2/; I can access it fine and PHP runs fine. The second problem is that I cannot password-protect it. I first tried htpasswd; it asks for a login, but every time I log in it keeps asking for a new one. It's getting frustrating - I have tried all the tricks and nothing seems to work. This is the virtual host config inside sites-available. httpd.conf is empty, but I have apache2.conf. Code:

        NameVirtualHost 12.12.12.12.
        <VirtualHost 12.12.12.12>
            ServerAdmin webmaster@localhost
            DocumentRoot /var/www2/

            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>

            <Directory /var/www2/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride None
                Order allow,deny
                allow from all
            </Directory>

            ScriptAlias /cgi-bin/ /var/www2/cgi-bin/
            <Directory "/var/www2/cgi-bin/">
                AllowOverride Options
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                AddHandler cgi-script cgi pl
                Order allow,deny
                Allow from all
            </Directory>

            ErrorLog /var/log/apache2/error.log

            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn

            CustomLog /var/log/apache2/access.log combined
            ServerSignature On

            Alias /doc/ "/usr/share/doc/"
            <Directory "/usr/share/doc/">
                Options Indexes MultiViews FollowSymLinks
                AllowOverride None
                Order deny,allow
                Deny from all
                Allow from 127.0.0.0/255.0.0.0 ::1/128
            </Directory>
        </VirtualHost>
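
    One thing that stands out in the config as posted: with AllowOverride None on /var/www2/, any .htaccess file there (including its Auth* directives) is silently ignored, so auth configured that way would never take effect. Whether or not that is the cause of the login loop, a minimal sketch of putting the auth directly in the vhost instead (the AuthUserFile path is a placeholder):

        <Directory /var/www2/>
            AuthType Basic
            AuthName "Restricted"
            # Placeholder path; create the file with: htpasswd -c /etc/apache2/.htpasswd someuser
            AuthUserFile /etc/apache2/.htpasswd
            Require valid-user
        </Directory>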

  • Remove all HTTP bindings from an IIS 6 site while leaving SSL bindings

    - by MikeBaz
    We have a (remote, via a reseller) customer who configured their IIS 6 server to not have any port 80 HTTP bindings, only port 443 SSL bindings. We would like to reproduce this without going through the three layers (!) to get to the customer to test some error scenarios. However, whenever I try to get IIS to not listen on HTTP at all, I can't do it. If I do it in the UI - whether by clearing the field on the main properties page or in the advanced bindings page - the UI does not let me proceed. If I remove the HTTP ServerBindings from metabase.xml directly, IIS makes it port 80 on all unassigned addresses anyway. Is there a way to get to the "SSL only" state naturally? Please note I am NOT talking about the "require SSL" checkbox or the underlying metabase setting, as that still listens on port 80 (or whatever) to give the "SSL required" error message. I'm talking about not having any bindings listed at all for HTTP.
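
    A hedged sketch of poking the property directly with the stock IIS 6 admin script - untested for the empty case, and adsutil may well re-apply the inherited default, matching the metabase.xml behavior described above (site ID 1 is a placeholder):

        REM Inspect the current HTTP bindings for site 1.
        cscript %SystemDrive%\Inetpub\AdminScripts\adsutil.vbs GET W3SVC/1/ServerBindings
        REM Attempt to clear them, leaving only SecureBindings in place.
        cscript %SystemDrive%\Inetpub\AdminScripts\adsutil.vbs SET W3SVC/1/ServerBindings ""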

  • Web site DNS configuration without using hosting nameservers

    - by user39110
    Hi, I am publishing my website (www.muratturan.com) using GoDaddy's "Total DNS Control". My configuration is like this:

    1. I configured my domain for Total DNS Control by setting the nameservers to Total DNS Control's.
    2. In the Total DNS Control panel I set an A host record pointing directly to my VPS's IP address.
    3. I also configured the MX records for Google Apps.

    Everything looks good, but I am wondering: does this technique have any negative effects? Thanks
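
    For reference, a sketch of what such a setup typically looks like in zone-file form (the IP is a placeholder; the MX targets are the well-known Google Apps ones):

        muratturan.com.      IN  A   203.0.113.20   ; the VPS
        www.muratturan.com.  IN  A   203.0.113.20
        muratturan.com.      IN  MX  1  ASPMX.L.GOOGLE.COM.
        muratturan.com.      IN  MX  5  ALT1.ASPMX.L.GOOGLE.COM.
        muratturan.com.      IN  MX  5  ALT2.ASPMX.L.GOOGLE.COM.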

  • Reverse and Forward DNS set up correctly but sometimes MapReduce job fails

    - by phodamentals
    Ever since we switched our cluster over to communicating via private interfaces and created a DNS server with correct forward and reverse lookup zones, we get this message before the M/R job runs:

        ERROR org.apache.hadoop.hbase.mapreduce.TableInputFormatBase - Cannot resolve the host name for /192.168.3.9 because of javax.naming.NameNotFoundException: DNS name not found [response code 3]; remaining name '9.3.168.192.in-addr.arpa'

    A dig and nslookup both show that the reverse and forward lookups get good responses, with no errors, from within the cluster. Shortly after these messages the job runs... but every once in a while we get an NPE:

        Exception in thread "main" java.lang.NullPointerException
        INFO app.insights.search.SearchIndexUpdater - at org.apache.hadoop.net.DNS.reverseDns(DNS.java:93)
        INFO app.insights.search.SearchIndexUpdater - at org.apache.hadoop.hbase.mapreduce.TableInputFormatBase.reverseDNS(TableInputFormatBase.java:219)
        INFO app.insights.search.SearchIndexUpdater - at org.apache.hadoop.hbase.mapreduce.TableInputFormatBase.getSplits(TableInputFormatBase.java:184)
        INFO app.insights.search.SearchIndexUpdater - at org.apache.hadoop.mapred.JobClient.writeNewSplits(JobClient.java:1063)
        INFO app.insights.search.SearchIndexUpdater - at org.apache.hadoop.mapred.JobClient.writeSplits(JobClient.java:1080)
        INFO app.insights.search.SearchIndexUpdater - at org.apache.hadoop.mapred.JobClient.access$600(JobClient.java:174)
        INFO app.insights.search.SearchIndexUpdater - at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:992)
        INFO app.insights.search.SearchIndexUpdater - at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:945)
        INFO app.insights.search.SearchIndexUpdater - at java.security.AccessController.doPrivileged(Native Method)
        INFO app.insights.search.SearchIndexUpdater - at javax.security.auth.Subject.doAs(Subject.java:415)
        INFO app.insights.search.SearchIndexUpdater - at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
        INFO app.insights.search.SearchIndexUpdater - at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:945)
        INFO app.insights.search.SearchIndexUpdater - at org.apache.hadoop.mapreduce.Job.submit(Job.java:566)
        INFO app.insights.search.SearchIndexUpdater - at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:596)
        INFO app.insights.search.SearchIndexUpdater - at app.insights.search.correlator.comments.CommentCorrelator.main(CommentCorrelator.java:72)

    Does anyone else who has set up a CDH Hadoop cluster on a private network with a DNS server get this? CDH 4.3.1 with MR1 2.0.0 and HBase 0.94.6.
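
    One wrinkle that might be relevant: org.apache.hadoop.net.DNS.reverseDns does its PTR lookup through JNDI rather than through the resolver path that dig and nslookup exercise, so the two can disagree. A small sketch that reproduces the same lookup path from a cluster node (the address is the one from the error above):

        import java.util.Hashtable;
        import javax.naming.directory.InitialDirContext;

        public class PtrCheck {
            public static void main(String[] args) throws Exception {
                Hashtable<String, String> env = new Hashtable<String, String>();
                // The same JNDI DNS provider that Hadoop's DNS.reverseDns uses.
                env.put("java.naming.factory.initial", "com.sun.jndi.dns.DnsContextFactory");
                InitialDirContext ctx = new InitialDirContext(env);
                // Reverse lookup for 192.168.3.9, phrased exactly as in the error message.
                System.out.println(ctx.getAttributes("9.3.168.192.in-addr.arpa", new String[] { "PTR" }));
            }
        }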

  • My DNS cannot resolve a web site address?

    - by ipkiss
    Hello all, recently I have not been able to access the webpage bbc.co.uk anymore, while I can access other websites smoothly. At first, I thought there might be some problem with my laptop. However, if I use my laptop through my company network, I can load the page bbc.co.uk normally. Then I thought maybe my home ADSL blocks that web address. However, I tried another laptop on my home ADSL and it can load the page bbc.co.uk very fast. Now I do not know what the problem could be. Can anyone tell me, please? Thank you.
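
    A couple of quick checks that might narrow down where resolution diverges (assuming a Windows laptop; 8.8.8.8 is just a well-known public resolver used for comparison):

        REM What does the home router/ISP resolver return?
        nslookup bbc.co.uk
        REM Does a public resolver agree?
        nslookup bbc.co.uk 8.8.8.8
        REM Clear anything cached locally, then retry the browser.
        ipconfig /flushdns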

  • How is my password sent across when I check Gmail/access a bank site [closed]

    - by learnerforever
    What encryption is used when my password is sent across in Gmail/when I do online banking? RSA? DSA? Public/private-key encryption? In key encryption, which entity is assigned a public/private key? Does each unique machine with a unique MAC address have a unique public/private key? Does each instance of the browser have a unique key? Does each user have a unique private/public key? How does the session key come into the picture? How do machines receive their keys?
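
    As a side note for poking at this yourself: the long-lived key pair belongs to the server (carried in its certificate), while session keys are negotiated fresh per connection. A sketch for inspecting the server side, assuming the openssl command-line tool is installed:

        # Show Gmail's certificate chain and the negotiated cipher.
        openssl s_client -connect mail.google.com:443 < /dev/null
        # Print just the server certificate's public-key details.
        openssl s_client -connect mail.google.com:443 < /dev/null 2>/dev/null | openssl x509 -noout -text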

  • WSS Search fills 10 GB limit on SBS Server 2011

    - by Kactus
    I've got an SBS Server 2011 Standard SP1 that isn't very busy: two local users and two remote. We have SharePoint with maybe a dozen small documents at most. I've just started getting the following two errors:

        Could not allocate space for object 'dbo.MSSBatchHistory'.'IX_MSSBatchHistory' in database 'WSS_Search_SERVER' because the 'PRIMARY' filegroup is full. Create disk space by deleting unneeded files, dropping objects in the filegroup, adding additional files to the filegroup, or setting autogrowth on for existing files in the filegroup.

    and

        CREATE DATABASE or ALTER DATABASE failed because the resulting cumulative database size would exceed your licensed limit of 10240 MB per database.

    Digging around in SQL manager I see that the WSS Search DB file size is 10241 MB; the log file is only 147 MB. Firstly, why is WSS Search taking up so much space, and how can I stop it from doing so? What can I do now to get things running OK? I know about log file truncating, but that isn't the case here since the log is tiny. There is plenty of free space on the disk (791 GB free). Any help is appreciated. Thanks, Kactus
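
    To see where the 10 GB actually went before deciding what to reset, a quick sketch against the search database (sp_spaceused is a stock SQL Server procedure; sp_MSforeachtable is undocumented but ships with it):

        USE [WSS_Search_SERVER];
        -- Overall data/log usage for the database.
        EXEC sp_spaceused;
        -- Per-table usage; crawl-history tables like MSSBatchHistory are the usual suspects.
        EXEC sp_MSforeachtable 'EXEC sp_spaceused ''?''';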

  • Website crawler/spider to get site map

    - by ack__
    I need to retrieve a whole website map, in a format like:

        http://example.org/
        http://example.org/product/
        http://example.org/service/
        http://example.org/about/
        http://example.org/product/viewproduct/

    I need it to be link-based (no file or directory brute force), like: parse the homepage, retrieve all links, explore them, retrieve their links, ... And I also need the ability to detect whether a page is a "template", so as not to retrieve all of the "child pages". For example, if the following links are found:

        http://example.org/product/viewproduct?id=1
        http://example.org/product/viewproduct?id=2
        http://example.org/product/viewproduct?id=3

    I need to get http://example.org/product/viewproduct only once. I've looked into HTTrack and wget (with the spider option), but nothing conclusive so far. The software/tool should be downloadable, and I'd prefer it to run on Linux. It can be written in any language. Thanks
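
    For the link-following part on its own, a sketch of the wget spider approach mentioned above (the template detection still needs post-processing; here it is crudely approximated by stripping query strings):

        # Crawl by following links only, fetching nothing to disk, logging every URL visited.
        wget --spider -r -l inf --no-verbose -o crawl.log http://example.org/
        # Pull the URLs out of the log and collapse ?id=... variants into one entry.
        grep -o 'http://[^ ]*' crawl.log | sed 's/?.*//' | sort -u > sitemap.txt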

  • Serve WordPress from root of site on Bitnami EC2 Ubuntu image

    - by user57087
    I'm running Bitnami's Ubuntu WordPress image on Amazon EC2. By default the WordPress install is at /wordpress and there is a static index.html file in the root. How do I configure Apache and WordPress to serve from the root instead of /wordpress? I have tried the instructions on this page: http://digitivity.org/10/how-to-serve-your-wordpress-blog-from-the-root-directory-if-its-installed-in-a-subdirectory, without success. I have tried changing the document root to the folder serving the WordPress content, but that just breaks WordPress. I know I am missing something simple, but I'm not sure what. Any help would be appreciated.
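
    For comparison, the WordPress codex approach ("Giving WordPress Its Own Directory") boils down to: change the Site address (not the WordPress address) under Settings > General to the bare domain, remove the static index.html, and copy /wordpress/index.php to the docroot, edited as below. A sketch, assuming the Bitnami layout keeps the app in a wordpress/ subdirectory of the docroot:

        <?php
        // Root index.php: loads WordPress from the subdirectory
        // while the site is served from /.
        define('WP_USE_THEMES', true);
        require('./wordpress/wp-blog-header.php');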

  • IIS site hacked with ww.robint.us malware

    - by sucuri
    A bunch of IIS sites got hacked with JavaScript malware pointing to ww.robint.us/u.js. The Google cache says more than 1,000,000 different pages were affected:

        http://www.google.com/#hl=en&source=hp&q=http%3A%2F%2Fww.robint.us%2Fu.js
        http://blog.sucuri.net/2010/06/mass-infection-of-iisasp-sites-robint-us.html

    My question is: did anyone here get hacked with that and still have any logs (or a network dump) available for analysis? If you do, have you spotted anything interesting in there? Sites as big as wsj.com got hacked, and some people are saying that maybe a zero-day in IIS/ASP.NET is in the wild...

  • Wrong DNS for just one site on development machine

    - by Patrick
    www.superyoink.de is my client's website. I can access it from any machine except my development one. If I ping it on my development machine, I get 80.67.28.107 - this is wrong. My laptop, next to me, is able to resolve it correctly. I have tried putting the correct address into hosts like so:

        93.187.232.191 www.superyoink.de

    It still resolves to the wrong address. I can enter bogus DNS server addresses so that nothing works, but www.superyoink.de still resolves to 80.67.28.107. I rebooted and did ipconfig /flushdns; nothing seems to work. I run 32-bit Vista. My impression is that it has stored the wrong DNS resolution somewhere and is not even trying the DNS servers. But where? Help please!
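
    A small sketch for separating the resolver layers on the Vista box (nslookup queries the DNS server directly and ignores the hosts file, while ping resolves the way applications do):

        REM Direct DNS query, hosts file not consulted.
        nslookup www.superyoink.de
        REM Application-style resolution, hosts file included.
        ping -n 1 www.superyoink.de
        REM What the local resolver cache currently holds for the name.
        ipconfig /displaydns | findstr /i superyoink

    If ping disagrees with the hosts file too, the override is happening above the resolver (a proxy setting, for example).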

  • Nginx Forward SSL for single site

    - by Will.brown
    I have an nginx server set up and it works fine for HTTP; however, I would like to bypass the proxy for HTTPS connections. I want it so that when someone goes to my IP, https://ip1 (the nginx server), it bypasses nginx and forwards all traffic to https://ip2 (the web server). I do not need nginx to do this for every SSL website, just one particular website. So: client PC to https://ip1, to https://ip2, back to https://ip1, back to the client PC. I just want nginx to not intercept the connection but forward it on, and on the return trip forward it back to the client. I'm guessing I do this by NAT masquerade, but I'm not exactly sure how, and I'm not sure whether I will need to tell nginx to ignore SSL as well. Can someone help me please? This has got me stuck.
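
    A hedged sketch of the NAT route (nginx of this era cannot pass through raw TLS, so the bypass has to happen below it; 10.0.0.2 stands in for ip2, and note this diverts all of port 443, not just one site - plain NAT cannot tell sites on the same port apart):

        # Send incoming HTTPS straight to the backend web server, skipping nginx.
        iptables -t nat -A PREROUTING -p tcp --dport 443 -j DNAT --to-destination 10.0.0.2:443
        # Rewrite the source so replies flow back through this box.
        iptables -t nat -A POSTROUTING -p tcp -d 10.0.0.2 --dport 443 -j MASQUERADE
        # The kernel must be allowed to forward packets.
        sysctl -w net.ipv4.ip_forward=1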

  • Hosting a site on Amazon EC2

    - by Khalid Mushtaq
    I have recently bought an Amazon EC2 instance and now I want to host a website. I have googled and found some useful info, but there is some confusion in my mind. Suppose the domain name is http://www.example.com. That's what I have done so far:

    1. I configured my domain locally on the EC2 instance, and it works fine when I open that URL in the instance's browser. I used www.example.com in /etc/hosts, pointing it to 127.0.0.1, to open it locally on the instance.
    2. I got one Elastic IP address and associated it with the instance.
    3. I changed www.example.com's A record to the Elastic IP that I got in the above step.

    Now what should I do? When a user opens my website anywhere in the world, will they get pointed to my instance's IP address? Have I done the proper configuration for a website on the instance?
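
    A sketch of the end state in zone-file form, with 203.0.113.25 standing in for the Elastic IP. One detail worth checking: once the A record is live, the 127.0.0.1 staging entry in the instance's /etc/hosts should be removed, or the instance itself will keep resolving the name locally:

        ; After propagation, everyone resolves the name to the Elastic IP.
        example.com.      IN  A  203.0.113.25
        www.example.com.  IN  A  203.0.113.25

        # /etc/hosts on the instance - delete the staging line:
        # 127.0.0.1  www.example.com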
