Search Results

Search found 25675 results on 1027 pages for 'digital site'.


  • wildcard host name bindings for multiple subdomains in multiple sites on IIS7 with a single IP address

    - by orca
    Situation: I have a single Windows 2008 server with a single public IP address. I have multiple domains with wildcard A records pointing to that single IP address. I need each domain to be hosted by a different web site (i.e. www.domain1.com by site domain1site). I need domain1.com to act like www.domain1.com. I need each site to be able to have multiple subdomains (i.e. www.domain1.com, abc.domain1.com, xyz.domain1.com). Not strictly relevant yet, but here it goes: I plan to handle each subdomain with a different application hosted in the same site (i.e. application /xyz in domain1site). However, I found out that IIS7 does not support creating web sites with a wildcard host name binding, and setting a binding without any subdomain (i.e. domain1.com) does not work, even for www.domain1.com. Is there a simple solution? Does any IIS extension, such as Application Request Routing, provide this capability?
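
    A common workaround, short of true wildcard host headers, is to add one explicit binding per subdomain you expect to serve. A minimal sketch using appcmd, reusing the site name and subdomains from the question (adjust names and paths as needed):

      rem add explicit host-header bindings for each known subdomain of domain1.com
      %windir%\system32\inetsrv\appcmd set site /site.name:"domain1site" /+bindings.[protocol='http',bindingInformation='*:80:domain1.com']
      %windir%\system32\inetsrv\appcmd set site /site.name:"domain1site" /+bindings.[protocol='http',bindingInformation='*:80:www.domain1.com']
      %windir%\system32\inetsrv\appcmd set site /site.name:"domain1site" /+bindings.[protocol='http',bindingInformation='*:80:abc.domain1.com']
      %windir%\system32\inetsrv\appcmd set site /site.name:"domain1site" /+bindings.[protocol='http',bindingInformation='*:80:xyz.domain1.com']

    For genuinely arbitrary subdomains this doesn't scale, and a reverse proxy in front of IIS (for example ARR with a URL Rewrite rule keyed on the Host header) is the more usual approach.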

    Read the article

  • <authentication mode="Windows"/>

    - by kareemsaad
    I get this error when I browse a new web site that is a sub-site of a sub-site: http://sharp.elarabygroup.com/ha/deault.aspx ("ha" is my new web site). Configuration Error. Description: An error occurred during the processing of a configuration file required to service this request. Please review the specific error details below and modify your configuration file appropriately. Parser Error Message: It is an error to use a section registered as allowDefinition='MachineToApplication' beyond application level. This error can be caused by a virtual directory not being configured as an application in IIS. Source Error (web.config excerpt, with the XML comment markup stripped by the forum):
      Line 61: ASP.NET to identify an incoming user.
      Line 62: --
      Line 63:
      Line 64: section enables configuration
      Line 65: of what to do if/when an unhandled error occurs
    I note that another web site that is also a sub of a sub doesn't have a web.config of its own. Is this related to the web.config? Do all the subs and the domain share one web.config?
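
    This parser error usually means the folder is being served as a plain virtual directory rather than as an IIS application, so its web.config is not allowed to contain application-level sections such as <authentication>. A sketch of marking the folder as an application, assuming IIS 7 or later; the site name and physical path below are taken from the URL in the question and are assumptions, so check them in IIS Manager first (on IIS 6 the equivalent is right-clicking the virtual directory and choosing Create Application):

      rem mark /ha as its own application so its web.config may define application-level sections
      %windir%\system32\inetsrv\appcmd add app /site.name:"sharp.elarabygroup.com" /path:/ha /physicalPath:"C:\inetpub\wwwroot\ha"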

    Read the article

  • How can I change a specific website's style?

    - by Darthfett
    I have a specific website I often use (specifically, http://www.pygame.org/), which has an awful color scheme. I would like to change the color scheme of the site, but I haven't been able to find a good tool for the job. Some basic requirements: it should not be universal to all websites, or affect other websites; it should be semi-automatic, so I don't have to re-define the theme for each page of the site; I want to continue to access the site online (I don't want a local copy of the entire site); and it should not be OS-specific (browser-specific is okay). I am currently using Firefox, but I am also happy with Chrome. There may be some limitations on what can be done automatically, as the CSS seems to be embedded in the HTML (and some of it is in the HTML tags). I would like to remove as much of the green as possible. Is there an existing extension/add-on that does this?
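
    User-style tools such as the Stylish extension (Firefox/Chrome) or Firefox's userContent.css apply per-site CSS overrides, which fits these requirements. A minimal sketch of the idea; the selectors and colors below are assumptions rather than pygame.org's actual markup, and inline styles set in HTML attributes may still need !important to be overridden:

      @-moz-document domain(pygame.org) {
          /* override the site-wide background and text colors */
          body, table, td, th { background-color: #ffffff !important; color: #222222 !important; }
          /* tone down link colors */
          a, a:visited { color: #0b5394 !important; }
      }

    The @-moz-document wrapper is specific to userContent.css and Stylish on Firefox; in Chrome you would scope the style to the domain through the extension's UI instead.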

    Read the article

  • How to make read-only file system writable?

    - by Tim
    I am not sure when the filesystem on my digital audio player became read-only. I cannot copy files onto it or remove files from it. What are some possible reasons for the player's filesystem to have become read-only? I tried chmod:
      $ sudo chmod a+rw SGTL\ MSCN/
      chmod: changing permissions of `SGTL MSCN/': Read-only file system
    where "SGTL MSCN" is the mount point of the digital audio player. I was wondering how to make it writable? Thanks and regards!
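
    Removable players are often remounted read-only automatically when the kernel detects filesystem errors, in which case chmod cannot help because the mount itself is read-only. A hedged sketch of the usual recovery steps, assuming a FAT-formatted player mounted under /media; the device node /dev/sdX1 is a placeholder you would confirm with dmesg or lsblk first:

      # try remounting read-write; this fails if the kernel flagged the filesystem as dirty
      sudo mount -o remount,rw "/media/SGTL MSCN"
      # if that fails, unmount and repair the FAT filesystem, then unplug and replug the player
      sudo umount "/media/SGTL MSCN"
      sudo fsck.vfat -a /dev/sdX1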

    Read the article

  • Domain only responding to certain locations?

    - by CuriosityHosting
    I have a client who's been having problems with his site. The server doesn't seem to want to load his site in certain countries, though other sites on it are fine. This site [link removed] only seems to load in the US and Canada. In Europe, the UK, Asia etc., the site seems to be blocked (it's been like this for a week now). I've looked over the server and it seems fine. Other sites work fine, and the NS records are set up properly, pointing to my main server, as shown at http://puu.sh/MIGF Any ideas?
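
    When a site resolves in some regions but not others, a useful first step is to compare what different resolvers return and where the connection actually dies. A few hedged diagnostic commands, using example.net as a stand-in for the client's domain (which was removed above):

      # what the authoritative chain says, step by step
      dig +trace example.net A
      # compare answers from public resolvers
      dig @8.8.8.8 example.net A +short
      dig @1.1.1.1 example.net A +short
      # see where packets stop when run from a host in an affected region (Europe/Asia)
      traceroute example.net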

    Read the article

  • Could crosslinking using very general anchor texts be a reason for a drop in rankings?

    - by webmasters
    I have crosslinked 20 sites, and I thought I had been penalized for this; I asked this question and some experienced members told me that the crosslinking may not necessarily be the reason. The sites are on the same host, on different C-class IPs, and every site is linked to every other. Each site targets long-tail keywords. Site 1 - BMW Used Cars - and my area. Site 2 - WW Used Cars - and my area. And so on... When I crosslinked them (in the sidebar), I did it for the users; instead of repeating the terms "used cars" and my location over and over (since my users are targeted), I just crosslinked them using the brand: BMW, WW. Targeting locally, my niches are not overly competitive, so I did not need too many external links to rank in various positions on the 1st page. I'm thinking that when I chose to link using only the brand, Google might have thought I wanted to actually rank for BMW and WW, hence the drop in my targeted local traffic. Could this be? I have now no-followed the links and I am noticing a slight recovery, but if it's not an interlinking penalty it would be a shame not to benefit from my links.

    Read the article

  • Setup IIS 7.5 with multiple website bindings and SSL?

    - by JK01
    On IIS 7.5 I am trying to achieve this with two websites.
    Default Web Site is bound to: (blank host header, port 80, http); (blank host header, port 443, https); go.example.com; www71.example.com; and the IP address of go.example.com.
    The 2nd web site, "Beta", is bound to: beta.example.com; and (blank host header, port 443, https) * using blank only because it doesn't seem to be possible to bind https to a named host header.
    Both need to work with SSL, but I have these problems: when I type in beta.example.com, I see the go.example.com site instead; I cannot seem to add the SSL binding to both websites at once (I have a single *.example.com wildcard certificate); and the Beta site will not even start if I add the https binding to it. This is how I have set it up. What is the correct way to set it up?
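
    The IIS 7.5 UI generally won't let you type a host header on an https binding, but the binding itself is legal and can be added from the command line (or by editing applicationHost.config), which is the usual way to share one wildcard certificate across several sites on a single IP. A sketch, assuming the site names above; the wildcard certificate stays bound to *:443 in HTTP.sys:

      rem give each site its own https binding with a host header, instead of a blank one
      %windir%\system32\inetsrv\appcmd set site /site.name:"Default Web Site" /+bindings.[protocol='https',bindingInformation='*:443:go.example.com']
      %windir%\system32\inetsrv\appcmd set site /site.name:"Beta" /+bindings.[protocol='https',bindingInformation='*:443:beta.example.com']

    With the blank https binding removed from Default Web Site, the two sites no longer fight over *:443 and both should be able to start. This relies on the single wildcard certificate; distinct certificates per host name would need SNI, which only arrived in IIS 8.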

    Read the article

  • Request Removal of naked domain from Google Index

    - by Pedr
    I have a site which was temporarily available at both example.com and www.example.com. All traffic to example.com is now redirected to www.example.com; however, during the brief period that the site was available at the naked domain, Google indexed it. So Google now has two versions of every page indexed: www.example.com, www.example.com/about_us, www.example.com/products/something ... and example.com, example.com/about_us, example.com/products/something ... For obvious reasons, this is a bad situation, so how can I best resolve it? Should I request removal of these pages from the index? There is still content at these URLs, but they now redirect to the www subdomain equivalent. The site has many hundreds of pages, but the only way I can see to request removal is via the Remove outdated content screen in Webmaster Tools, one URL at a time. How can I request removal of an entire domain (i.e. the naked domain) without it affecting the true site located at the www subdomain? Is this the correct strategy, given that all the naked-domain URLs now redirect to their www equivalents?

    Read the article

  • Blatant copyright theft

    - by Tom Gullen
    I found a user on the forum trying to solicit business for his website; a good user reported it and I checked the website out. Firstly, and most dangerously, it's attempting to sell our original software, which is open source. Our open source software is around 15 MB, and he's serving a 50 MB download and trying to sell it for $20. He's also stolen our CSS/images/site design in general, which is all custom built. I attempted to open a reasonable discussion with him, and he responded promptly saying he would remove the offending materials if he could just have 3 days to sort it out, which I accepted. I'm not sure what his plan was, because everything on that site is offending material. Anyway, he messaged back saying the site was offline, and it was, but it went back online shortly afterwards. It's pretty sickening that someone is selling open source work as their own (the site's About Us page references him as the sole developer, etc.; it's unbelievable to read). I want to shut it down; what are my options? I'm going to contact his domain registrar, web host, and PayPal (that's how he's selling the program). Any other ideas?

    Read the article

  • PHP Page Stopped outputting content After Running "yum install php-devel" Command

    - by stwhite
    This error is bizarre, but after running the "yum install php-devel" command (after a long day of trying to install Facedetect and OpenCV for face detection) my site stopped functioning. The site uses MySQL and PHP. When you hit the URL, the page executes the MySQL and the PHP, but it appears to randomly stop outputting the content of the page. None of the code was changed, and the site was working flawlessly prior to running the mentioned SSH command. I do use output buffering in the site, but removing the calls to "ob_flush", "ob_end_flush" and "ob_start" didn't appear to help; I'm still having issues with the site. Any ideas what this could be? Here is the output from the terminal:
      [myserver ~]# cd Facedetect-4b1dfe1
      [myserver Facedetect-4b1dfe1]# phpize
      Configuring for:
      PHP Api Version:         20090626
      Zend Module Api No:      20090626
      Zend Extension Api No:   220090626
      [myserver Facedetect-4b1dfe1]# configure
      bash: configure: command not found
      [myserver Facedetect-4b1dfe1]# phpize && configure && make && make install
      Configuring for:
      PHP Api Version:         20090626
      Zend Module Api No:      20090626
      Zend Extension Api No:   220090626
      bash: configure: command not found
      bash: Read: command not found
      [myserver Facedetect-4b1dfe1]# make
      make: *** No targets specified and no makefile found.  Stop.
      [myserver Facedetect-4b1dfe1]# yum install php5-devel
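
    A side note on the transcript above: "configure: command not found" just means the script was invoked without a path; phpize generates a configure script inside the extension's directory, which has to be run as ./configure. A minimal sketch of the usual build sequence for a PHP extension source tree, using the directory from the question (sudo may or may not be needed depending on the account):

      cd Facedetect-4b1dfe1
      phpize
      ./configure          # note the leading ./ -- configure is not on $PATH
      make
      sudo make install    # then enable the extension in php.ini and restart the web server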

    Read the article

  • Apache rewrite - optional parameters?

    - by Mayhem
    I'm creating SEO-friendly URLs for my news page. My links look like this: www.site.com/1234/the-pretty-url-string/
      RewriteRule ^([^/]*)/([^/]*)/$ /news.php?sid=$1&url=$2 [L]
    This works great, but I'd like to have more flexibility. I want to be able to accept URLs like www.site.com/1234 and www.site.com/1234/ so that I can do some PHP $_GET checks, figure out if anything is missing, and 301 to the proper URL of my choice. I would like the &url=$2 part to be optional.
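
    One hedged way to make the second segment optional is simply to add a shorter rule for the id-only form (ordering matters, and QSA preserves any existing query string). A sketch along the lines of the rule quoted above:

      # /1234 or /1234/  ->  news.php decides where to 301
      RewriteRule ^([0-9]+)/?$ /news.php?sid=$1 [L,QSA]
      # /1234/the-pretty-url-string/
      RewriteRule ^([0-9]+)/([^/]+)/?$ /news.php?sid=$1&url=$2 [L,QSA]

    Inside news.php, an empty or missing $_GET['url'] is then the signal to look up the canonical slug and issue the 301.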

    Read the article

  • OpenWeb(String) method

    - by ybbest
    I guess this is a SharePoint beginner problem; however, it took me a while to figure out what the problem was, so I will blog it to help me remember. Basically, I wrote the following code to grab some list items from my SharePoint subsite http://win-oirj50igics/RestAPI; however, I got an error stating that: "<nativehr>0x80070002</nativehr><nativestack></nativestack>There is no Web named /http://win-oirj50igics/RestAPI". The problem is that the OpenWeb(String) method returns the web site that is located at the specified server-relative or site-relative URL. It expects a relative URL, so after I changed "http://win-oirj50igics/RestAPI" to "RestAPI", everything worked fine.
      using (SPSite site = new SPSite("http://win-oirj50igics/"))
      {
          SPWeb web = site.OpenWeb("http://win-oirj50igics/RestAPI");
          SPQuery query = new SPQuery();
          query.Query = camlDocument.InnerXml;
          SPListItemCollection items = web.Lists["Songs"].GetItems(query);
          IEnumerable<Song> sortedItems = from item in items.OfType<SPListItem>()
                                          orderby item.Title
                                          select new Song { SongName = item.Title, SongID = item.ID };
          songs.AddRange(sortedItems);
      }

    Read the article

  • How can I write a mod_rewrite rule to determine if the domain is not the main domain and then change https:// to http://?

    - by Oudin
    I've set up a WordPress multi-site with a wildcard SSL certificate for example.com so that the admin area can be accessed securely. However, I'm also using domain mapping to map other domains to other sites, e.g. alldogs.com to alldogs.example.com. The problem is that when I try to access the front end of a site from the admin area of a mapped domain (e.g. alldogs.com) by clicking "Visit Site", the link goes to https://alldogs.com because of the forced SSL applied to the admin area. This produces a certificate warning, since the certificate is for example.com and not alldogs.com. How can I write a mod_rewrite rule that determines whether the URL/link clicked is not on the main domain (e.g. example.com) and, if so, changes https:// to http:// so the site is accessed over port 80 and doesn't generate a certificate warning for the mapped domains?
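
    A hedged sketch of such a rule, placed before WordPress's own rules in the .htaccess (or in the vhost): redirect any HTTPS request whose Host header is not example.com or one of its subdomains back to plain HTTP. Whether this plays nicely with wp-admin on mapped domains depends on the domain-mapping plugin, so treat it as a starting point rather than a drop-in fix:

      RewriteEngine On
      # only act on HTTPS requests...
      RewriteCond %{HTTPS} on
      # ...whose host is not the main domain or one of its subdomains
      RewriteCond %{HTTP_HOST} !(^|\.)example\.com$ [NC]
      # send them back over plain HTTP
      RewriteRule ^ http://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]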

    Read the article

  • Subdomain is preventing my search results from rising as expected in page rank

    - by culov
    My problem is that I have a site which requires a dedicated page for every city I choose to support. Early on, I decided to use subdomains rather than a directory after my domain (i.e. I used la.truxmap.com rather than truxmap.com/la). I realize now that this was a major mistake, because Google seems to treat la.truxmap.com as a completely different site from ny.truxmap.com. So, for instance, if I search "la food truck map" my site will be near the top; however, if I search "nyc food truck map" I'm nowhere in sight, because ny.truxmap.com wouldn't rank very highly by itself, and it doesn't get the boost that it ought to be getting from the better-known la.truxmap.com. So a mistake I made a year ago is now haunting my page rank. I'd like to know the most painless way of resolving my dilemma. I have received so much press at la.truxmap.com that I can't just kill the site, but could I redirect all requests at la.truxmap.com to truxmap.com/la, and do the same for all supported cities, without trashing the current, satisfactory page rank results I'm getting from la.truxmap.com? EDIT: I left out some critical information. I am using Google Apps to manage my domain (that is, to add the subdomains) and Google App Engine to host my site. Google Apps provides a simple mechanism to mask truxmap.appspot.com (the App Engine domain) as la.truxmap.com, but I don't see how I can mask it as truxmap.com/la. If I can get this done, then I can just 301 redirect la.truxmap.com to truxmap.com/la as suggested below. Thanks so much!
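
    Since the application itself still sees the original Host header, the 301 can be issued in application code rather than through the domain mapping. A minimal sketch for a Python App Engine app using webapp2; the handler name and route are hypothetical, and it assumes requests for truxmap.com/la already reach the same app:

      import webapp2

      class CitySubdomainRedirect(webapp2.RequestHandler):
          """301 la.truxmap.com/anything -> truxmap.com/la/anything."""
          def get(self):
              host = self.request.host          # e.g. "la.truxmap.com"
              city = host.split('.')[0]         # "la"
              target = 'http://truxmap.com/%s%s' % (city, self.request.path)
              self.redirect(target, permanent=True)

      app = webapp2.WSGIApplication([(r'/.*', CitySubdomainRedirect)])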

    Read the article

  • Visits, Pageviews, Bounce Rate, New Visitors, Visit Duration (Google Analytics): which one is the top priority for SEO?

    - by HOY
    This is the case: my site is getting a lot of traffic from an image (a company logo) because this image is ranked 1st in Google's search results for that company's name. (I have no idea how that happened.) The image is a must for my website, but it is not relevant to the site content, so people searching for the image find my site, and I get interesting statistics: http://postimage.org/image/3oyvrjoz9/ Pros: Total Visits & Avg. New Visits. Cons: Avg. Pages/Visit, Avg. Visit Duration, Bounce Rate. In summary, I am confused about whether this image is helpful to my website, because I don't know how to weigh those 5 statistics against each other. P.S.: My website is 2 months old, and we are working on SEO at the moment. Another P.S.: I kindly ask you not to offer assumptions; I also have assumptions, I need real knowledge. Edit: Search keyword: arcelik logo. Search site: google.com.tr. Search URL: https://www.google.com.tr/search?hl=en&q=arcelik+logo&bav=on.2,or.r_gc.r_pw.r_qf.&bvm=bv.41524429,d.Yms&biw=1366&bih=667&um=1&ie=UTF-8&tbm=isch&source=og&sa=N&tab=wi&ei=oZIDUfutAseVswa9zYHwCw

    Read the article

  • Under what circumstances might an IIS6 website be automatically deleted?

    - by E. Anderson
    Late last week my colleagues did some hardware maintenance on one of our VMware ESXi servers. One of the guests is a Windows Server 2003 Web Edition system that runs our low-traffic web sites. We discovered this morning that one of those websites was no longer working, with what appeared to be an SSL error. After logging in, I found that the web site in question had been deleted from IIS! Is it possible for this to happen without a user actually going in and deleting that single web site? All of the other sites were fine. The files for the site in question had not been touched. I just re-created the web site, assigned the SSL cert, and everything was working again. When I logged in, I did see the 'Unexpected Shutdown' dialog.

    Read the article

  • Massive 404 attack with non existent URLs. How to prevent this?

    - by tattvamasi
    The problem is a whole load of 404 errors, as reported by Google Webmaster Tools, for pages and queries that have never existed. One of them is viewtopic.php, and I've also noticed a scary number of attempts to check whether the site is a WordPress site (wp_admin), and for the cPanel login. I block TRACE already, and the server is equipped with some defense against scanning/hacking. However, this doesn't seem to stop it. The referrer is, according to Google Webmaster Tools, totally.me. I have looked for a solution to stop this, because it certainly isn't good for the poor real users, let alone the SEO concerns. I am using the Perishable Press mini blacklist (found here), a standard referrer blocker (for porn, herbal and casino sites), and even some software to protect the site (XSS blocking, SQL injection, etc.). The server is using other measures as well, so one would assume that the site is safe (hopefully), but it isn't ending. Does anybody else have the same problem, or am I the only one seeing this? Is it what I think, i.e., some sort of attack? Is there a way to fix it, or better, prevent this useless waste of resources? EDIT: I've never used the question itself to thank people for the answers, but I hope this can be done. Thank you all for your insightful replies, which helped me find my way out of this. I have followed everyone's suggestions and implemented the following: a honeypot; a script that listens for suspect URLs in the 404 page and sends me an email with the user agent/IP, while returning a standard 404 header; and a script that rewards legitimate users, in the same custom 404 page, in case they end up clicking one of those URLs. In less than 24 hours I have been able to isolate some suspect IPs, all listed in Spamhaus. All the IPs logged so far belong to spam VPS hosting companies. Thank you all again; I would have accepted all the answers if I could.
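
    For reference, a minimal sketch of the kind of 404 handler described in the edit, assuming PHP and an Apache ErrorDocument pointing at it; the probe patterns and the log path are placeholders, not the actual script used:

      <?php
      // custom 404 page: always return a real 404, but quietly log obvious probes
      header('HTTP/1.1 404 Not Found');

      $uri = isset($_SERVER['REQUEST_URI']) ? $_SERVER['REQUEST_URI'] : '';
      $ip  = isset($_SERVER['REMOTE_ADDR']) ? $_SERVER['REMOTE_ADDR'] : '';
      $ua  = isset($_SERVER['HTTP_USER_AGENT']) ? $_SERVER['HTTP_USER_AGENT'] : '';

      $probes = array('viewtopic.php', 'wp-admin', 'wp-login', 'cpanel');
      foreach ($probes as $needle) {
          if (stripos($uri, $needle) !== false) {
              // append to a log file (or mail() it) for later blacklisting
              error_log(date('c') . " probe $ip \"$ua\" $uri\n", 3, '/var/log/404-probes.log');
              break;
          }
      }
      ?>
      <html><body><h1>Page not found</h1></body></html>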

    Read the article

  • What guidelines should be followed when implementing third-party tracking pixels?

    - by Strozykowski
    Background: I work on a website that gets a fair amount of traffic, and as such, we have implemented different tracking pixels and techniques across the site for various specific reasons. Because there are many agencies sending traffic our way through email campaigns, print ads and SEM, we have agreements with a variety of outside agencies for tracking these page hits. Consequently, we have tracking pixels that span the entire site, as well as some that are on specific pages only. We have worked to reduce the total number of pixels on any one page, but occasionally the site is rendered close to unusable when one of these third-party tracking pixels fails to load. This is a huge difficulty on parts of the site where JavaScript is needed for functionality built into the page but cannot initialize until a 404 is returned for the external tracking pixel (sometimes up to 30 seconds later). I have spent some time researching how other firms deal with this sort of instability in third-party components, but have come up a bit short. The plan currently is to implement our own stop-gap method for dealing with these external outages, but rather than reinvent the wheel, we wanted to find out how this is handled on other sites. Question: Is there a good set of guidelines that should be followed when implementing third-party tracking pixels? I would love to see some white papers or other written documents about how other people have dealt with this issue.
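
    One common pattern (a sketch, not an official guideline) is to fire each pixel asynchronously with a hard timeout, so page initialization never waits on a third-party host. The pixel URL and the initPage() hook below are hypothetical:

      (function () {
          var done = false;
          function proceed() {
              if (!done) { done = true; initPage(); }   // initPage(): the page's own JS setup
          }
          // never wait more than 2 seconds for the tracker
          var timer = setTimeout(proceed, 2000);
          var img = new Image();
          img.onload = img.onerror = function () {
              clearTimeout(timer);
              proceed();
          };
          img.src = 'https://tracker.example.com/pixel.gif?page=' + encodeURIComponent(location.pathname);
      })();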

    Read the article

  • Exchange 2010 remote access

    - by evesirim
    Does anyone know how to configure more addresses for remote access to Outlook on our SBS 2008? Currently you can go to either 'https://remote.site.co.uk/remote' to access the remote web workspace or 'https://remote.site.co.uk/owa' to go straight to remote Exchange access. I would like to set it up so that going to 'https://remote.site.co.uk/exchange' takes you to the same place that '/owa' does. Does anybody know if this is possible, and if so, how? Many thanks
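
    One way this is commonly done (a sketch, assuming IIS 7 on the SBS box and that the site hosting /owa is named "SBS Web Applications"; both names are assumptions, so check them in IIS Manager first) is to create an /exchange virtual directory and give it an HTTP redirect to /owa. If Exchange has already created a legacy /exchange virtual directory, skip the first command and only set the redirect:

      rem create the virtual directory (the physical path just needs to exist; nothing is served from it)
      %windir%\system32\inetsrv\appcmd add vdir /app.name:"SBS Web Applications/" /path:/exchange /physicalPath:"C:\inetpub\wwwroot"
      rem redirect everything under /exchange to /owa
      %windir%\system32\inetsrv\appcmd set config "SBS Web Applications/exchange" /section:httpRedirect /enabled:true /destination:"/owa" /httpResponseStatus:Found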

    Read the article

  • Adventures in Drupal multisite config with mod_rewrite and clean urls

    - by moexu
    The university where I work is planning to offer Drupal hosting to staff/faculty who want a Drupal site. We've set up Drupal multisite with clean URLs, and it's mostly working, except for some weird redirects. If you have two sites where one is a substring of the other, then you'll randomly be redirected to the other site. I tracked the problem to how mod_rewrite does path matching, so with a config file like this:
      RewriteCond %{REQUEST_URI} ^/drupal
      RewriteCond %{REQUEST_FILENAME} !-f
      RewriteCond %{REQUEST_FILENAME} !-d
      RewriteRule ^(.*)$ /drupal/index.php?q=$1 [last,qsappend]
      RewriteCond %{REQUEST_URI} ^/drupaltest
      RewriteCond %{REQUEST_FILENAME} !-f
      RewriteCond %{REQUEST_FILENAME} !-d
      RewriteRule ^(.*)$ /drupaltest/index.php?q=$1 [last,qsappend]
    /drupaltest will match the /drupal line, and all of the links on the /drupaltest page will be rewritten to point to /drupal. If you put the end-of-string character ($) at the end of each rewrite condition, then it will always match the correct site and the links will always be rewritten correctly. That breaks down as soon as a user logs in, though, because the query string is appended to the URL, so the base URL alone will no longer match. You can also fix the problem by ordering the sites in the config file so that the smallest substring always comes last. I suggested storing all of the sites in a table and then querying, sorting, and rewriting the config file every time a Drupal site is requested, so that we could guarantee the order. The system administrator thought that was kludgy and didn't address the root problem. Disabling clean URLs should also fix the problem, but the users really want them, so I'd prefer to keep them if possible. I think we could also fix it by using an .htaccess file in each site to handle the clean URL rewriting, but that also seems suboptimal, since it will generate a higher load on the server, and the server is intended to host the majority of the university's external-facing web content. Is there some magic I can do with mod_rewrite to get it to work? Would another solution be better? Am I doing something the wrong way to begin with?
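
    One piece of mod_rewrite "magic" that avoids both the ordering requirement and the trailing-$ problem is to anchor each prefix on a path-segment boundary, so /drupaltest can never match the /drupal block. A hedged sketch of the change (query strings are unaffected, since %{REQUEST_URI} never includes them):

      RewriteCond %{REQUEST_URI} ^/drupal(/|$)
      RewriteCond %{REQUEST_FILENAME} !-f
      RewriteCond %{REQUEST_FILENAME} !-d
      RewriteRule ^(.*)$ /drupal/index.php?q=$1 [last,qsappend]

      RewriteCond %{REQUEST_URI} ^/drupaltest(/|$)
      RewriteCond %{REQUEST_FILENAME} !-f
      RewriteCond %{REQUEST_FILENAME} !-d
      RewriteRule ^(.*)$ /drupaltest/index.php?q=$1 [last,qsappend]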

    Read the article

  • online reminder services

    - by Remus Rigo
    Hi all. I used an online birthday reminder site a few years ago, and now I just can't find it. Does anyone know a good site that provides reminders (one that sends alert emails)? (If this is the wrong place to post this question, please suggest another site.) Thanks

    Read the article

  • Google Analytics not tracking data correctly: IP address issue?

    - by PaperThick
    I have developed a small site for a client, and the site has been placed inside an <iframe> on the client's site. The GA script I'm using looks like this:
      <script type="text/javascript">
          var _gaq = _gaq || [];
          _gaq.push(
              ['_setAccount', 'UA-XXXXXXXX-2'],   // My company's GA account
              ['_trackPageview'],
              ['b._setAccount', 'UA-XXXXXXXX-1'], // Test GA account
              ['b._trackPageview'],
              ['th._setAccount', 'UA-XXXXXXX-3'],
              ['th._setDomainName', '.clientdomain.se'], // Client GA account
              ['th._trackPageview']
          );
          (function () {
              var ga = document.createElement('script');
              ga.type = 'text/javascript';
              ga.async = true;
              ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
              var s = document.getElementsByTagName('script')[0];
              s.parentNode.insertBefore(ga, s);
          })();
      </script>
      </head>
    As you can see, I report the GA pageviews to the client as well. The GA script is tracking visitors and pageviews at both ends, but the problem is that on my client's side the visitor count is more than double what it is on my end (20,000 vs 5,000). At first I thought the data was being duplicated at some point, but when I checked my Crazy Egg account I saw that it had tracked over 10,000 visits and then stopped tracking because that was the limit on my account. The page my site is on is served from an IP address (http://XXX.XXX.XX.X/campaign/) and not from a "valid" URL. Could that be a reason why some of the visitors aren't being tracked? Thanks in advance

    Read the article

  • Issue updating domain name servers from BlueHost to AWS

    - by cowls
    I am trying to migrate my site hosting from Bluehost to an AWS cloud-based service. I have the site up and running on AWS with an Elastic IP configured; it loads fine when I specify the IP address in the browser. I have gone into Route 53 in the AWS console and created a hosted zone for the domain. I then created a new record set of type "A" using the IP address as the value. I have a domain name registered with Bluehost. I've logged into the Bluehost account and updated the domain name servers to point to those specified in Route 53 in the AWS console. When I hit the IP address directly the site loads; however, it doesn't load when using the domain name (I get a Google Chrome "Oops" error page saying the page was not found). I've tried using this site, http://dns.squish.net/, to debug, but it seems to be giving me the correct results:
      fizaclegems.com 300 IN A 107.20.209.78
    where 107.20.209.78 matches the Elastic IP configured in the AWS console. This is the result it gives for all 4 name servers. Am I missing a step here? Does anyone know what else I should be doing or looking for?
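
    A couple of hedged checks from a terminal can narrow down whether the delegation has actually propagated, or whether only Route 53 itself knows about the record (the awsdns server name below is a placeholder; use one of the four NS values shown in your hosted zone):

      # which name servers does the public DNS tree currently report for the domain?
      dig +short NS fizaclegems.com
      # does a Route 53 name server answer for the A record?
      dig +short A fizaclegems.com @ns-123.awsdns-45.com
      # what does your local resolver return right now?
      dig +short A fizaclegems.com

    If the first command still returns the Bluehost name servers, the registrar-level change hasn't taken effect or hasn't propagated yet; if everything resolves but Chrome still shows an error, the A record may be missing for the www host, or caches may simply need time to expire (the record shown above has a 300-second TTL).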

    Read the article
