Search Results

Search found 24735 results on 990 pages for 'site ranking'.


  • Possible DNS Injection and/or SSL hijack?

    - by Anthony
    So if I go to my site without indicating the protocol, I'm taken to: http://example.org/test.php But if I go directly to: https://example.org/test.php I get a 404 back. If I go to just: https://example.org I get a totally different site (a page about martial arts). I went to the site via https not very long ago (maybe a week?) and it was fine. This is a shared server, as I understand it, and I do not have shell access, so I'm limited to the site's cPanel to do any further investigation. But when I go to: example.org:2083 I'm taken to https://example.org:2083, which, if someone has taken over the SSL port, could mean they have taken over the 2083 part as well (at least in my paranoid mind). I'm made more nervous by the fact that the cPanel login page at the above address looks very new (better, really) compared to the last time I went to it over the weekend. It's possible that wires got crossed somewhere after a system update, but I don't want to put in my username and password in case it's a phishing attempt. Is there any way, without shell access, to know for sure whether someone has taken over? If I look up the IP address for the host name, the IP address matches what I have on a phpinfo page I can get to over http. If I go to the IP address directly on port 2083, I get the same login page mentioned above (new and suspiciously nice). But the SSL cert shows as good when I go this route. So if that's the case (I know the IP is right, the cert checks out, and there isn't any DNS involved), is that enough to feel safe at that point of entry? Finally, if I can safely log in via the IP, does anyone have any advice on where to check first in cPanel for why the SSL port is forwarding to a site on karate? Thanks.
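
    (Not part of the original question.) One way to run that certificate check from any outside machine is sketched below in Python; example.org stands in for the real domain, as in the question. If the chain validates for the expected host name and the resolved IP matches the phpinfo page, tampering between you and the box becomes much less likely:

        import socket
        import ssl

        HOST = "example.org"  # placeholder for the real domain
        PORT = 443

        # Resolve the name ourselves so it can be compared with the IP
        # shown on the phpinfo page (a mismatch would point at DNS games).
        print("DNS says", HOST, "->", socket.gethostbyname(HOST))

        # Fetch the certificate and verify the chain and host name against
        # the system CA store; a hijacked port would typically fail here.
        context = ssl.create_default_context()
        with socket.create_connection((HOST, PORT), timeout=10) as raw:
            with context.wrap_socket(raw, server_hostname=HOST) as tls:
                cert = tls.getpeercert()

        print("subject:", dict(x[0] for x in cert["subject"]))
        print("issuer:", dict(x[0] for x in cert["issuer"]))
        print("valid until:", cert["notAfter"])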

    Read the article

  • What sites track your open source contributions?

    - by Daniel
    I know there's at least one site where you can register the open source projects you are involved with, indicate your committer name, and it will display the lines contributed and the languages, along a timeline. Unfortunately, I don't recall which site that was. Anyway, can anyone point me to a site that does this?

    Read the article

  • Making an ad server to serve ads from different ad networks

    - by John
    In India there are many ad networks (other than AdSense) which pay per acquisition or per lead, so JavaScript ad code is not required (fraud clicks don't matter, as long as one converts). An ad network will have many companies, and each company will have many banner sizes for its ads. Also, any ad may suddenly be stopped just because the company's target has been met. This is a common nuisance, since if we don't remove those URLs, that company will get conversions for free. I have a dozen sites, and removing the ads every now and then is difficult. Also, CPA-based ads may not convert at all, which means I'll need to remove non-performing ads regularly. I've gone through "How can I show multiple ad networks on my site?". I've also looked at the DFP solution, but without AdSense they wouldn't let me open an account. I want to make an ad server wherein I'll feed new ads (banner image + link for the click). I want to maintain categories there, like shoes, phones, books, etc. So if an ad is paused, I'll simply remove/pause the ad there while other ads in the category keep running. Also, changing ad code within the sites will no longer be required. For example, let me have an ad category "clothing" where I can add ads from different companies. If one of my sites requests an ad from there, it'll randomly select an ad in this category and return it to the site for display. Removing/adding ads within this category will not affect the sites requesting those ads. Any idea how to implement it?
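
    (Not part of the original question.) A minimal sketch of the category-based rotation described above, in Python; company names and URLs are invented for illustration. The point is that pausing an ad is a single flag flip in the ad server, with no edits to the dozen sites:

        import random
        from dataclasses import dataclass

        @dataclass
        class Ad:
            company: str
            banner_url: str   # banner image to display
            click_url: str    # landing page for the lead/acquisition
            paused: bool = False

        # Categories like "clothing", "shoes", "phones" map to pools of ads.
        inventory = {
            "clothing": [
                Ad("CompanyA", "https://cdn.example/a.png", "https://a.example/lp"),
                Ad("CompanyB", "https://cdn.example/b.png", "https://b.example/lp"),
            ],
        }

        def serve_ad(category):
            """Pick a random, non-paused ad from a category. Pausing an ad
            takes it out of rotation on every site at once, with no
            per-site code changes."""
            candidates = [ad for ad in inventory.get(category, []) if not ad.paused]
            return random.choice(candidates) if candidates else None

        # When a company's target is met, flip one flag instead of editing sites:
        inventory["clothing"][0].paused = True
        print(serve_ad("clothing"))  # only CompanyB can be returned now

    In practice the inventory would live in a database and serve_ad would sit behind a small HTTP endpoint that each site calls, but the selection logic stays this simple.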

    Read the article

  • How do I prevent spawning of zombie-like apache2 processes on Dreamhost VPS?

    - by Jonathan Hayward
    I have had a website for months or longer on a DreamHost VPS, and I have vague memories of having to turn off some customized Apache under /dh during initial setup to get a standard Apache 2.x to work. Things had been going along on an even keel, when I started making some changes lately and found that when I tried to bounce Apache (/usr/sbin/apachectl restart), it couldn't bind to port 80, and my site had been converted from a big literature site to a small parking site. I tried to see what was listening on 80, and it was a DreamHost-customized Apache that had spawned. I killed all of them, restarted Apache, and changed the parent directory under /dh to mode 000. That was a day or two ago. I was bouncing Apache again in trying to get a new site to load under HTTPS, and I found that once again DreamHost's Apache had spawned, from the directory I set to mode 000, and once again converted my site to a parking page. I renamed the directory, but I am very skeptical of whether I have permanently killed the DreamHost-customized Apache. Besides duct-tape options like a crontab job to kill and delete it each minute, how can I permanently turn off the Apache processes that are spawning from a location under /dh and interfering with standard Apache? What should I be doing that I am not? Can DreamHost's technical support stop the interference? Thanks,
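
    (Not part of the original question.) The crontab "duct tape" the poster mentions, sketched in Python with the third-party psutil library; /dh/ is the path from the question, and the script needs to run as root so other users' sockets are visible. This only clears the symptom, not the respawning itself:

        import psutil  # third-party: pip install psutil

        # Find whatever is listening on port 80 and, if it was spawned
        # from under /dh, kill it to clear the port for standard Apache.
        for conn in psutil.net_connections(kind="inet"):
            if conn.status != psutil.CONN_LISTEN or not conn.laddr or conn.laddr.port != 80:
                continue
            if conn.pid is None:
                continue  # not enough privileges to see the owner
            proc = psutil.Process(conn.pid)
            exe = proc.exe()
            print("pid", conn.pid, "listening on :80 ->", exe)
            if exe.startswith("/dh/"):
                proc.kill()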

    Read the article

  • http to https upgrade -- SEO troubles

    - by SLIM
    I upgraded my site so that all pages have gone from using http to https. I didn't consider that Google treats https pages differently than http. I re-created my sitemap so that all links now reflect the new https and let it be for a few days. (Whoops!) Google is now re-indexing all the https pages. I have about 19k pages on the site, and Google has already indexed about 8k of the new https ones. The problem is that Google sees all of these as brand new pages, when many of them have a long http history. Of course most of you will recognize the problem: I didn't set up a 301 from the old http to the new https. Is it too late to do this? Should I switch my sitemap back to http and then 301 to the new https? Or should I leave the sitemap as is and set up 301 redirects anyway? I'm not even sure if Google is trying to reach the http site anymore. Currently the site is doing 303 redirects (from http to https), although I haven't figured out why yet. Thanks for any suggestions you can offer.
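
    (Not part of the original question.) A small checker, sketched in Python with placeholder URLs, for auditing what the old http pages actually return; once the redirects are in place, each should answer 301 with a Location header pointing at its https twin, rather than the 303 mentioned above:

        import urllib.error
        import urllib.request

        URLS = [
            "http://example.com/",            # placeholders: substitute real
            "http://example.com/some-page",   # pages from the sitemap
        ]

        class NoRedirect(urllib.request.HTTPRedirectHandler):
            # Surface the 3xx response instead of silently following it.
            def redirect_request(self, req, fp, code, msg, headers, newurl):
                return None

        opener = urllib.request.build_opener(NoRedirect)
        for url in URLS:
            try:
                opener.open(url, timeout=10)
                print(url, "-> no redirect")
            except urllib.error.HTTPError as e:
                print(url, "->", e.code, e.headers.get("Location"))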

    Read the article

  • Seizing naming master from child domain server

    - by meera
    When I try to seize the naming master role from my child domain server, I get the following error:

        fsmo maintenance: seize naming master
        Attempting safe transfer of domain naming FSMO before seizure.
        ldap_modify_sW error 0x34(52 (Unavailable).
        Ldap extended error message is 000020AF: SvcErr: DSID-03210380, problem 5002 (UNAVAILABLE), data 8438
        Win32 error returned is 0x20af(The requested FSMO operation failed. The current FSMO holder could not be contacted.)
        Depending on the error code this may indicate a connection, ldap, or role transfer error.
        Transfer of domain naming FSMO failed, proceeding with seizure ...
        Server "win-fb20ixk90mu" knows about 5 roles
        Schema - CN=NTDS Settings,CN=WIN-3918XHC5STU,CN=Servers,CN=Default-First-Site-Name,CN=Sites,CN=Configuration,DC=HCL,DC=com
        Naming Master - CN=NTDS Settings,CN=WIN-FB20IXK90MU,CN=Servers,CN=Default-First-Site-Name,CN=Sites,CN=Configuration,DC=HCL,DC=com
        PDC - CN=NTDS Settings,CN=WIN-FB20IXK90MU,CN=Servers,CN=Default-First-Site-Name,CN=Sites,CN=Configuration,DC=HCL,DC=com
        RID - CN=NTDS Settings,CN=WIN-FB20IXK90MU,CN=Servers,CN=Default-First-Site-Name,CN=Sites,CN=Configuration,DC=HCL,DC=com
        Infrastructure - CN=NTDS Settings,CN=WIN-FB20IXK90MU,CN=Servers,CN=Default-First-Site-Name,CN=Sites,CN=Configuration,DC=HCL,DC=com

    Read the article

  • Google results show .info domain instead of .com

    - by user481913
    I am on shared hosting currently, and I registered this account with a .info domain as the main domain, say MyDomain.info. However, the site runs from MyDomain.com. This is a cPanel-based shared hosting account. MyDomain.info has nothing hosted at all, i.e. no content files. MyDomain.com is set up as an add-on domain and runs from /public_html/MyDomain under MyDomain.info. The problem is that when I type MyDomain as the keyword for a search in Google, it shows result(s) for MyDomain.info, although this is not the intended site and has no content hosted on it. I tried to solve the issue by issuing a 301 permanent redirect from MyDomain.info to MyDomain.com; however, Google keeps on displaying MyDomain.info as the main site in the results even after 1 month of the redirect. I want Google to index MyDomain.com as the main site and remove MyDomain.info from the results. Also, is this harmful from the SEO point of view? How can I improve the SEO if it is?

    Read the article

  • Apache ProxyPass/ProxyPassReverse to IIS

    - by Dana
    We have an ASP.NET web application which is mapped to a folder on an Apache-hosted PHP site using ProxyPass/ProxyPassReverse. A couple of problems are being encountered. Cookies are being lost, which breaks site navigation; this can be overcome by setting the ASP app as cookieless. Forms authentication is used on the ASP site; this is also broken with the ProxyPass in place, and I suspect this is cookie-related also. The ASP site works OK when run from a domain/IP address. Use of a separate domain / sub-domain is not an option due to client requirements.
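
    (Not part of the original question.) A quick diagnostic, sketched in Python with hypothetical URLs, that prints the Set-Cookie headers from the backend hit directly and through the proxy; if the domain/path attributes still point at the backend on the proxied response, the browser will drop the cookies, which would explain both the navigation and the forms-auth breakage. Apache's mod_proxy does provide ProxyPassReverseCookieDomain and ProxyPassReverseCookiePath for rewriting exactly those attributes.

        import urllib.request

        # Hypothetical URLs: the ASP.NET app hit directly, and the same
        # page through the Apache proxy.
        DIRECT = "http://iis-backend.internal/app/login"
        PROXIED = "http://www.example.com/app/login"

        for label, url in (("direct ", DIRECT), ("proxied", PROXIED)):
            resp = urllib.request.urlopen(url, timeout=10)
            for value in resp.headers.get_all("Set-Cookie") or []:
                print(label, value)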

    Read the article

  • Biggest mistake you've ever made

    - by Rogue Coder
    Similar to a question I read on Server Fault: what is the biggest mistake you've ever made in an IT-related position? Some examples from friends: I needed to do some work on a production site, so I decided to copy over the live database to the beta site. Pretty standard, but when I went to the beta site it was still pulling out-of-date info. OOPS! I had copied the beta database over to the live site! Thank god for backups. And for me: I created a form for an event that was to be held during a specific time range. Participants would fill out the form for a chance to win, and we would send the event organizers a CSV from the database. I went into the database and found ONLY 1 ENTRY, MINE. Upon investigating, it appears as though I forgot an auto-increment key, and because of the server setup there was no way to recover the lost data. I am aware this question is similar to ones on Stack Overflow, but the ones I found seemed to receive generic answers instead of actual stories :) What is the biggest coding error/mistake ever…

    Read the article

  • VMware vSphere 4.1: host performance graphs show "No data available", except the realtime view, which works fine

    - by Graeme Donaldson
    Here's our scenario: Site 1 has 3 hosts, and our vCenter server is here. Site 2 has 3 hosts. All hosts are ESXi 4.1 update 1. If I view the Performance tab for any host in Site 1, I can view realtime, 1 Day, etc., i.e. all the views give me graph data. For the hosts in Site 2, I can view the realtime graphs; 1 Day and 1 Week both say "No data available". 1 Month had mostly nothing; 1 Year shows that it was working fine for a long time and then started breaking. (Screenshots of the 1 Month and 1 Year views omitted.) What would cause this loss of performance data?

    Read the article

  • A-2-Z web hosting on Amazon AWS

    - by JDelage
    All, I am studying web development, and one of my classes is project-based. We have to build a functional site that demonstrates our understanding of: HTML, CSS, JavaScript, PHP, MySQL, and potentially Ajax or some other web component. For the project, we can use a local server using WampServer and basically build the site entirely on our laptops. If I have time, I would like to create a real site, and I thought it would be a good way to familiarize myself with Amazon's AWS services. So if I purchase a domain name, can I rely on AWS to host the site from A to Z? I understand I can use AWS to host content, the database, and do the background computations, if needed. What else do I need, and what are the parts that AWS cannot help me with? Second, is there good documentation for a beginner to navigate AWS and learn how to use it (either on Amazon, or some 3rd-party sites, or even a good book, as long as it is up to date)? The ideal documentation would be a tutorial on creating a website from A to Z on AWS, as detailed as possible. As you can guess, I have a limited understanding of the IT issues. I have 0 Linux or sysadmin experience, but this is a good opportunity to change that. I hope you can help me. Thank you, JDelage PS: Please keep the answers AWS-specific. At this point, I am only interested in alternative services to the extent that they plug a hole in Amazon's offering.

    Read the article

  • Unextending a SharePoint 2007 web application from a zone

    - by dunxd
    When our SharePoint was migrated from SharePoint 2003 to SharePoint 2007 (both fully paid versions), the consultants who carried it out extended each web app into two IIS sites/zones (e.g. the original web app was http://intranet, then http://newintranet and http://intranet would be created for SharePoint 2007, each with its own IIS site). The idea was that during the migration period we would set up DNS to point the old URL to the SP2003 servers and the new one to SP2007, then once the migration was complete, do a DNS change so SP2007 would receive the requests to the http://intranet type URLs. Unfortunately the contractors did not tidy up the application extensions and IIS sites after the migration, and for some time both URLs were in use, resulting in many document links pointing to the http://newintranet type URLs. This means I need to maintain these URLs. Due to a rejig of the organisation structure we now need to relocate some SharePoint sites, and I'd like to use the RDA Collaboration SharePoint URL Redirector feature. However, a limitation of this is that it doesn't work for web applications which have been extended into multiple zones. So I have a need to tidy up the situation that our consultants left behind. I think the right thing to do is use the "Remove SharePoint from IIS Web Site" page in Central Admin to remove the zone for the newintranet type sites, and select the option to also delete the IIS site. That should result in having no IIS sites listening for http://newintranet type URLs. Is this the right procedure? Once I have done that, I need to set up SharePoint to receive requests sent to the http://newintranet type URLs so they will continue to work. I am not sure which of these I should do: use Alternate Access Mappings; add a host header to the IIS site; or create a non-SharePoint IIS site for each http://newintranet type URL and use IIS redirection to forward the requests to the new URL, using variables to pass the path to the SharePoint site. Does anyone have any thoughts on these options, or any other way of achieving this? SharePoint 2007 is running on Windows 2003 with IIS 6. We don't currently have plans/budget to upgrade to SharePoint 2010.

    Read the article

  • How can I decrease relevancy of Creative Commons footer text? (In Google Webmaster Tools)

    - by anonymous coward
    I know that I may just have to link the image to make this happen, but I figured it was worth asking, just in case there's some other semantic markup or tips I could use... I have a site that uses the textual Creative Commons blurb in the footer. The markup is like so:

        <div class="footer">
          <!-- snip -->
          <!-- Creative Commons License -->
          <a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/3.0/us/"><img alt="Creative Commons License" style="border-width:0" src="http://i.creativecommons.org/l/by-nc-sa/3.0/us/80x15.png" /></a><br />This work by <a xmlns:cc="http://creativecommons.org/ns#" href="http://www.xmemphisx.com/" property="cc:attributionName" rel="cc:attributionURL">xMEMPHISx.com</a> is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/3.0/us/">Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States License</a>.
          <!-- /Creative Commons License -->
        </div>

    Within Google Webmaster Tools, the list of relevant keywords is heavily saturated with the text from that blurb. For instance, 50% of my top-ten most relevant keywords (including the site name) are: [site name], license, [keyword], commons, creative, [keyword], alike, [keyword], attribution, [keyword]. I have not done any extensive testing to find out whether or not this list even matters, and so far this doesn't impact performance in any way. The site is well designed for humans, and it is as findable as it needs to be at the moment. But, mostly out of curiosity: do you have any tips for decreasing the relevancy of the text from the Creative Commons footer blurb?

    Read the article

  • How does one set up an API key on a locally hosted server...

    - by L33tCh
    I am setting up a personal WordPress site and want to be able to post to it from other sites... the common request being for my "API key". When creating a site on Wordpress.com, for example, an API key is sent to you by mail, but surely it should be relatively simple (if it's not just an address on the local site to point to) to have one for a personal server (Ubuntu server)?

    Read the article

  • IIS 7.5 returning 404 for unknown host names

    - by WaldenL
    This just doesn't seem correct to me, so I'm looking for someone to tell me how I've misconfigured IIS... The configuration is IIS 7.5 (2008 R2), without SP1. I have IIS 7.5 configured with several sites. All sites have host names defined in their bindings; there is no site without a host name. However, if I request an unknown host name from the server, IIS (technically Microsoft-HTTPAPI/2.0) returns a 404 error, not a 400 error. I would expect a 400 (or some other major error) rather than a lowly 404. This causes a problem when I have nginx in front of multiple IIS servers and want to stop a site so nginx takes it out of rotation. Since IIS still returns a 404 for the request even when there is no active site for that name, nginx doesn't know the server is dead. NB: IIS returns the 404 both when the site exists but is stopped and when there is no site at all. Thoughts? Solutions? -- Additional info: OK, I added a site on a port other than 80 (5000) and then, on a connection to that port, asked for a site that doesn't exist, and I get the expected error 400 (Invalid Hostname). So, while IIS isn't listening for generic (no host name) connections on port 80, it would seem that something is. Any ideas how to get HTTP.sys to dump the list of what it's listening for?
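
    (Not part of the original question.) A small probe of the behavior described, sketched in Python; SERVER_IP is a placeholder for the IIS box. This is roughly what an nginx health check would see when it asks for a host name with no binding:

        import http.client

        SERVER_IP = "192.0.2.10"  # placeholder for the IIS box

        # Ask for a host name no site is bound to, and see whether the
        # answer is the 404 described above or the 400 "Invalid Hostname"
        # that would let nginx mark the backend as dead.
        conn = http.client.HTTPConnection(SERVER_IP, 80, timeout=10)
        conn.request("GET", "/", headers={"Host": "no-such-site.invalid"})
        resp = conn.getresponse()
        print(resp.status, resp.reason)
        conn.close()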

    Read the article

  • Weird IIS with Windows Authentication + IE problem

    - by Paulius Maruška
    I have a website running on IIS and using Windows Authentication. All users that are configured to get access to the site are from an AD domain (not local users). In the properties of the website, I have set it to use the AD domain as the realm. Now, when using Firefox, Safari or Chrome, everything is fine. When the user tries to open the site, he gets the login box. He enters simply "username" and "password" (let's pretend that it's an actual login and password :P) and he gets into the site. When using IE, however, things get nasty. When the user tries to open the site, he gets the login box. The user enters the "username" and "password" again, but those get rejected! And when the login box pops up the second time, it has the username filled in as "web-server-domain-name\username", which is wrong, because web-server-domain-name is not the domain where the users reside (it's "ad-domain"). I've spent days trying to figure out what's going on... Note that if I manually enter "ad-domain\username", I get accepted into the site without problems. So, my guess is that IE sends the wrong username if the domain is not specified. Anyway, IE is the only browser that triggers this behavior! Is it possible to do a server-side fix? Maybe it's possible to somehow auto-map the users to AD users? If it's not solvable server-side, is there a client-side fix for this? Thank you. PS: I'm more of a programmer than a sysadmin, so configuring servers isn't the strong side of mine... :P UPDATE: @Evan: Yes, "Digest authentication for Windows domain servers" is also enabled. @Eric: The IIS version is 6.0. The authentication methods enabled are Integrated and Digest; all other methods are disabled. As for the security log: I looked at it when doing a "username" and "password" login in Chrome/Firefox and when doing an "ad-domain\username" and "password" login from IE, and the generated log messages are the same (I see no difference, anyway). When entering "username" and "password" I don't see any errors in the security (or any other) log, so I can't tell what method it's trying to use. UPDATE 2: As suggested by Eric in the comments, I played around with Fiddler... While playing with it, I noticed that when "username" and "password" are entered in FF and IE, the "Authorization" header value (encrypted) sent by IE is longer (almost two times) than the one sent by FF. I tried to disable Windows Integrated authentication and only leave Digest enabled; that fixed the problem (meaning IE used the right realm just like the other browsers), but it caused a bazillion other problems with my site, because with Digest, user impersonation on the server doesn't work (which causes problems when connecting to the database, etc.). Any ideas?

    Read the article

  • I need a relatively cheap host which will be able to handle sudden peaks in traffic

    - by Morten K
    Hello, we're launching a product in a few months, which will obviously have a website. Judging from our current traffic, we believe that overall traffic will probably not be that much, but we are aiming at promoting the site heavily using social media. This has the typical problem that IF we suddenly get picked up by a large tech blog, we will see a sudden burst: a very heavy increase in traffic all of a sudden. If we use a cheap-charlie host like our current one (www.unoeuro.com) or something similar like GoDaddy, I'm afraid that the site will go down under the load. If that happens, then we might as well have thrown our social media marketing dollars out of the window. Our site will be relatively lightweight, all videos hosted at YouTube or Vimeo, and other than that mainly just a standard webpage (i.e. nothing too heavy). I am hoping for recommendations for a good hosting company which has some form of scalable hosting, so if/when a traffic surge hits, the site will not go down.

    Read the article

  • Moving a Drupal site between Linux servers, best practice to avoid file-ownership problems

    - by zero
    I want to port a Drupal Commons 6x24 site from a local LAMP stack to a production webserver. Both systems run openSUSE Linux. How do I do this, and what are the most important steps? How should I handle file ownership? It's important for me to have full control of the file ownership. If I use the wwwrun account, I frequently run into problems due to a very strict webserver admin. See, for example, the long history of looking for fixes and solutions in this thread, and, even more interesting, this very long and impressive thread here. All the troubles I run into have to do with file ownership and permissions. My interest is in the general options I have when porting a Drupal site from Linux to Linux. This is my current setup (note: this was just a quick, hacked-together installation, quick and dirty):

        linux-vi17:/srv/www/htdocs/com624 # ls -l
        insgesamt 224
        -rwxrwxrwx  1 root www 45285 19. Jan 00:54 CHANGELOG.txt
        -rwxrwxrwx  1 root www   925 19. Jan 00:54 COPYRIGHT.txt
        -rwxrwxrwx  1 root www   206 19. Jan 00:54 cron.php
        drwxrwxrwx  2 root www  4096 19. Jan 00:54 includes
        -rwxrwxrwx  1 root www   923 19. Jan 00:54 index.php
        -rwxrwxrwx  1 root www  1244 19. Jan 00:54 INSTALL.mysql.txt
        -rwxrwxrwx  1 root www  1011 19. Jan 00:54 INSTALL.pgsql.txt
        -rwxrwxrwx  1 root www 47073 19. Jan 00:54 install.php
        -rwxrwxrwx  1 root www 15572 19. Jan 00:54 INSTALL.txt
        -rwxrwxrwx  1 root www 14940 19. Jan 00:54 LICENSE.txt
        -rwxrwxrwx  1 root www  1858 19. Jan 00:54 MAINTAINERS.txt
        drwxrwxrwx  3 root www  4096 19. Jan 00:54 misc
        drwxrwxrwx 35 root www  4096 19. Jan 00:54 modules
        drwxrwxrwx  4 root www  4096 19. Jan 00:54 profiles
        -rwxrwxrwx  1 root www  1470 19. Jan 00:54 robots.txt
        drwxrwxrwx  2 root www  4096 19. Jan 00:54 scripts
        drwxrwxrwx  4 root www  4096 19. Jan 00:54 sites
        drwxrwxrwx  7 root www  4096 19. Jan 00:54 themes
        -rwxrwxrwx  1 root www 26250 19. Jan 00:54 update.php
        -rwxrwxrwx  1 root www  4864 19. Jan 00:54 UPGRADE.txt
        -rwxrwxrwx  1 root www   294 19. Jan 00:54 xmlrpc.php
        linux-vi17:/srv/www/htdocs/com624 #

    Thanks to BetaRide's answer, here is a quick overview of the drush rsync functionality (http://drush.ws/):

        core-rsync    Rsync the Drupal tree to/from another server using ssh.

        Examples:
          drush rsync @dev @stage            Rsync Drupal root from dev to stage (one of which must be local).
          drush rsync ./ @stage:%files/img   Rsync all files in the current directory to the 'img' directory in the file storage folder on stage.

        Arguments:
          source        May be rsync path or site alias. See rsync documentation and example.aliases.drushrc.php.
          destination   May be rsync path or site alias. See rsync documentation and example.aliases.drushrc.php.

        Options:
          --mode                  The unary flags to pass to rsync; --mode=rultz implies rsync -rultz. Default is -az.
          --RSYNC-FLAG            Most rsync flags passed to drush sync will be passed on to rsync. See rsync documentation.
          --exclude-conf          Excludes settings.php from being rsynced. Default.
          --include-conf          Allow settings.php to be rsynced.
          --exclude-files         Exclude the files directory.
          --exclude-sites         Exclude all directories in "sites/" except for "sites/all".
          --exclude-other-sites   Exclude all directories in "sites/" except for "sites/all" and the site directory for the site being synced. Note: if the site directory is different between the source and destination, use --exclude-sites followed by "drush rsync @from:%site @to:%site".
          --exclude-paths         List of paths to exclude, separated by : (Unix-based systems) or ; (Windows).
          --include-paths         List of paths to include, separated by : (Unix-based systems) or ; (Windows).

        Topics:
          docs-aliases   Site aliases overview with examples

        Aliases: rsync

    Read the article

  • FTP restrict user access to a specific folder

    - by Mahdi Ghiasi
    I have created an FTP site inside the IIS 7.5 panel. Now I have access to the whole site using the administrator username and password. I want to let my friend access a specific folder of that FTP site (for example, this path: \some\folder\accessible\). I can't create a whole new FTP site for this purpose, since it says the port is being used by another website. How do I create an account for my friend that has access to just a specific folder? P.S.: I have read about the User Isolation feature of IIS 7.5, but I couldn't find how to create a user just for FTP and restrict it to a custom path.

    Read the article

  • Pros and cons of separate hosting accounts versus using an addon domain

    - by hen3ry
    Folks: For historical reasons, I have "Site A" on "Hosting Account A", and "Site B" on "Account B", totally independent accounts with the same vendor, Bluehost. Both are primary domains. Now that Hosting Account B is just about to expire, I'm considering letting it disappear and moving Site B to an Addon domain on "Account A". Both sites are non-commercial, narrow-interest, very-low-traffic, hundreds of page views per month. The file weights for the sites are non-trivial, especially as I like to install specialized CMSs in subdomains. Since Bluehost allows unlimited hosting space there should be no issue with the file load, except I've seen hints of an issue with total file count, maybe 50k files -- which I'm not currently close to hitting, but might eventually. My question: what are the pros and cons of using separate accounts versus hosting Site B as an addon domain? Obviously, using a single account is cheaper by half, and I know that my authoring environment (DreamWeaver CS5) complains when it detects nested source trees, telling me "Synchronization" might fail in such cases, but I don't depend on this feature. What other factors should I consider? TIA

    Read the article

  • Can a website have too many bindings?

    - by justSteve
    IIS 7.x on a Win08 Web edition on a dedicated server. I have a site that's serving a few dozen affiliates, many of which are hitting me via a subdomain from their own root domain, all of which have a subdomain specific to their account. E.g. my affiliate named 'Acme' hits my site via: myApp.Acme.com (his root, my app) and Acme.MyDomain.com (his account within my root domain). Currently I'm adding each of these as a binding entry in IIS (targeting a discrete IP, not '*'). As I ramp this up to include more affiliates, I'm wondering if I should be concerned about how many bindings this site handles. Probably, in Acme's case, I can do without Acme.MyDomain.com because, in reality, all traffic takes place via myApp.Acme.com. Mine is a niche site, very low volume compared to most. At what point do I worry about all those bindings? thx

    Read the article

  • Securing PHP on a shared Apache

    - by Jack
    I'm going to install Apache+PHP on a server where two users, A and B, will deploy their websites. I'm trying to achieve isolation of the users' space for security reasons: that is, no scripts from site A should be able to read files in site B. To achieve this I installed suPHP. Website files of user A are owned by A:A with perm=700, and those of user B are owned by B:B with perm=700. suPHP works great, but Apache complains about permissions when reading .htaccess. How can I let Apache read .htaccess in every directory of A and B while keeping the isolation between site A and site B? I played with ownership (group = www-data) and permissions (750) but found no way to keep the isolation guaranteed. Any idea? Maybe by running Apache as root, but in that case are there any drawbacks?

    Read the article

  • Title of the page in search results and title of google's cached version are different. Why?

    - by Azmorf
    Check this: http://www.google.com/search?q=site:gunlawsbystate.com+kansas+gun+laws The title of the first result is "Kansas Gun Laws - Gun Laws By State". However, on the page Google has cached, the title is different: <title>Kansas Gun Laws - Kansas Gun Law - Reciprocity Guide</title> Google shows the title that was on the site 2-3 months ago. Googlebot has visited the website a lot of times since then, and as you see it has even cached it (the latest version is from 15th Sept); however, for some reason it doesn't change the title to the new one in the search results. We use a hash-bang URL structure on this website. It completely meets Google's requirements for AJAX websites (the _escaped_fragment_ stuff). The issue I explained is happening with almost all hash-bang pages that got indexed. Questions: Why does it keep the old page title in the search results? Can it be connected to the fact that I'm using hash-bang URLs? There are lots of pages on the site that have the same issue; all of them have hash-bang URLs. Another thing I noticed is that Google's "Preview" feature doesn't work for any hash-bang URLs on the site. Did I do anything wrong? It has got cached versions of the pages, so why wouldn't it generate a preview? Thanks (and sorry for my English) PS. Here's a weird thing I also noticed: this search query https://www.google.com/search?q=Kansas+Gun+Laws+-+Reciprocity+Guide shows the correct title for the same page as in the example above. Why does Google show different titles for the same page when you run different queries?
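
    (Not part of the original question.) For reference, the AJAX crawling scheme the poster is relying on maps each #! URL onto an _escaped_fragment_ URL that Googlebot actually fetches; whatever the server returns there is what gets indexed and cached, so a stale <title> in that snapshot means a stale title in the results. A sketch of the mapping (the example path is made up):

        from urllib.parse import quote

        def crawler_url(pretty_url):
            """Map a #! URL to the _escaped_fragment_ URL Googlebot fetches."""
            if "#!" not in pretty_url:
                return pretty_url
            base, fragment = pretty_url.split("#!", 1)
            sep = "&" if "?" in base else "?"
            return base + sep + "_escaped_fragment_=" + quote(fragment, safe="")

        # Hypothetical path, for illustration only:
        print(crawler_url("http://gunlawsbystate.com/#!/kansas-gun-laws"))
        # -> http://gunlawsbystate.com/?_escaped_fragment_=%2Fkansas-gun-laws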

    Read the article
