Search Results

Search found 28303 results on 1133 pages for 'multi site'.


  • Is it possible to use WebMatrix with pure IIS?

    - by Mike Christensen
    I'd like to check out WebMatrix for publishing our site to IIS automatically (right now, I have to zip it up, copy it out, Remote Desktop into the server, unzip it, etc.). However, every example I can find on how to set up WebMatrix involves Azure, or a .publishsettings file that you'd get from your hosting provider. I'm curious whether I can publish to a normal, everyday IIS server running on Windows Server 2008. So far, all I've done to the IIS server is install Web Deploy, which I believe is the protocol that WebMatrix uses to publish. On the Remote Site Settings screen, I select Enter settings, choose Web Deploy as the protocol, and type in my NT domain credentials (I'm an admin on that server). I put in the site URL for the Site Name and Destination URL. When I click Validate Connection, I get an error. Am I doing something wrong, or is this just not possible to do?
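    For reference, a rough command-line equivalent of what WebMatrix drives through Web Deploy might look like the sketch below; the paths, site name, and credentials are placeholders, and it assumes the Web Management Service is listening on its default port (8172):

      msdeploy.exe -verb:sync ^
        -source:contentPath="C:\Projects\MySite" ^
        -dest:contentPath="Default Web Site",computerName="https://myserver:8172/msdeploy.axd",userName="MYDOMAIN\admin",password="secret",authType=Basic ^
        -allowUntrusted

    If that command works from a client machine, the same endpoint and credentials should also validate from WebMatrix.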

    Read the article

  • Disaster Recovery Example

    I used to work for a small internet company that sells dental plans online. Our primary focus concerning disaster prevention and recovery was on our corporate website and private intranet site. We had a multiphase disaster recovery plan that included data redundancy, load balancing, and off-site monitoring. Data redundancy is a key aspect of our disaster recovery plan. The first phase of this is to replicate our data to multiple database servers and schedule daily backups of the databases that are stored off site. The next phase is the file replication of data amongst our web servers, which are also backed up daily by our colocation provider. In addition to the files located on the servers, files are also stored locally on development machines, and again backed up using version control software. Load balancing is another key aspect of our disaster recovery plan. Load balancing offers many benefits for our system: better performance, load distribution, and increased availability. With our servers behind a load balancer, our system can accept multiple requests simultaneously because the load is split between multiple servers. Plus, if one server is slow or experiencing a failure, the traffic is diverted amongst the other servers connected to the load balancer, allowing the failed server time to get back online. The final key to our disaster recovery plan is off-site monitoring that notifies all IT staff of any outages or errors on the main website encountered by the monitor. Messages are sent by email, voicemail, and SMS. According to Disasterrecovery.org, disaster recovery planning is the way companies successfully manage crises with minimal cost and effort and maximum speed, compared to others that are forced to make decisions out of desperation when disasters occur. In addition, SunGard stated in 2009 that the first step in disaster recovery planning is to analyze company risks and factor in fixed costs for things like hardware, software, staffing, and utilities, as well as indirect costs, such as floor space, power protection, physical and information security, and management. Availability requirements also need to be determined per application and system, as well as the strategies for recovery.

    Read the article

  • What is the best way for an experienced developer to work on a WordPress blog

    - by nanothief
    I'm beginning to work on my first WordPress blog, but I've noticed most tutorials just have you make modifications (such as theme changes or installing plugins) on the production site. This worries me for a few reasons: no backups; no version control; if you make a mistake, your production site is affected; and developing remotely is slower than local development, especially when tweaking CSS files. I understand why WordPress works like this - it allows people with no development experience to manage their WordPress installation (or the one provided by their service provider). It also allows you to work on the WordPress installation without having SSH access to the server. However, as I am comfortable working with tools like git and SSH, and am using a virtual server for the blog, this isn't very important to me. So I was wondering what techniques experienced developers use when working on a WordPress blog. For example: Do you develop locally, then push the changes to the live site? How do you do this? How do you manage database changes and backups? What do you store under version control (if anything)? If a plugin changes the database, do you somehow track the changes it makes in version control, so you can roll back the changes done by the plugin if you need to? Or maybe I'm just overcomplicating everything, if working on the production site isn't as risky as I am thinking it would be. I would appreciate any answers either way.
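    By way of illustration only (this is one possible workflow, not an official WordPress process), some developers keep the theme and custom plugins in git, develop against a local copy, and deploy over SSH, leaving the production database alone; the host name and paths below are made up, and it assumes WP-CLI is available on both ends:

      # local: commit theme work, then pull a copy of the live database down for testing
      git commit -am "Tweak stylesheet"
      ssh user@blog.example.com "wp db export - --path=/var/www/blog" > live-copy.sql
      wp db import live-copy.sql

      # deploy: push only the versioned code up; wp-config.php and uploads stay on the server
      rsync -avz wp-content/themes/mytheme/ user@blog.example.com:/var/www/blog/wp-content/themes/mytheme/

    Plugin-driven database changes are usually treated as data (covered by the scheduled database backups) rather than something tracked in version control.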

    Read the article

  • SEO with duplicate content

    - by user16831
    I have a nature photography site with multiple types of photo galleries. Each photo and associated caption on my site appears in several galleries. For instance, a photo of a goldfinch that was taken on a trip to New Mexico in 2008 will appear in the "goldfinch.php" gallery, in the "finches.php" gallery, and in the "New_Mexico_2008.php" gallery. This duplication is useful for my site visitors - User A may want to see goldfinch photos, whereas User B wants to see photos from New Mexico - but I am concerned about the SEO implications. The typical suggestions for dealing with duplicate content, such as 301 redirects and canonical tags, probably won't work in this case, because the page content is substantially different (ranging from ~1% to ~90% duplication, depending on the specific example chosen). The obvious solution to me would be to edit robots.txt to only allow search engines to crawl one type of gallery - for instance, if they crawled only the galleries organized by species (e.g. goldfinch.php), all the photos on my site would be found exactly once. However, the Google content guidelines recommend against blocking crawler access to duplicate information. Should I go ahead and use robots.txt anyway? Or is there a better solution?
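    As one alternative to robots.txt (which blocks crawling but doesn't necessarily keep already-linked URLs out of the index), a robots meta tag on the gallery types you don't want indexed tells search engines to skip those pages while still following their links; a minimal sketch, assuming the trip and family galleries are templates you can edit:

      <meta name="robots" content="noindex, follow">

    Placed in the <head> of pages like New_Mexico_2008.php and finches.php, this would leave the per-species galleries as the only indexed copies.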

    Read the article

  • Looking for some IIS redirect help/ideas

    - by CoreyT
    Right now we have a site with a LOT of static ASP pages, such as www.site.com/123.asp. This is due to how our current site's CMS builds its pages by default. I don't have an exact count, but we have roughly 6000 ASP files in the site right now. We are in the middle of a redesign and restructuring of the site, and are looking to migrate to SEO-friendly URLs. The problem we're having right now is: what do we do to redirect the old pages to the new friendly URLs? I know how to do redirects; that is not the issue here. The problems I am coming up with right now are listed below. 1 - Is there a limit to the number of redirects in IIS? 2 - Would having even a few thousand redirects affect IIS performance? 3 - My understanding is that we would not be passing along page rank to the new URLs, is that true? (Not a major question; I can ask on SEO forums if nobody here is sure.) 4 - Would using something like the IIS URL Rewrite 2 module for IIS 7 help us out? Or would I still need to define several thousand unique redirects in it? Our server right now is running Server 2003, however in the redesign I would be open to migrating to Server 2008 R2 if there is a good case for it (i.e. the URL Rewrite module). Thanks for any guidance or help. I have been looking for a good way to do this for a while now and keep coming up with things that sound problematic and bad (such as having 6000 redirects).
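    On question 4: the URL Rewrite 2 module (IIS 7+) supports rewrite maps, so you would not need thousands of separate rules - one generic rule can look each request up in a key/value table of old-to-new paths. A sketch, with invented target URLs, of what that might look like in web.config:

      <system.webServer>
        <rewrite>
          <rewriteMaps>
            <rewriteMap name="OldAspRedirects">
              <add key="/123.asp" value="/products/widgets" />
              <add key="/124.asp" value="/about-us" />
            </rewriteMap>
          </rewriteMaps>
          <rules>
            <rule name="Redirect old ASP pages" stopProcessing="true">
              <match url=".*" />
              <conditions>
                <add input="{OldAspRedirects:{REQUEST_URI}}" pattern="(.+)" />
              </conditions>
              <action type="Redirect" url="{C:1}" redirectType="Permanent" />
            </rule>
          </rules>
        </rewrite>
      </system.webServer>

    The map itself can be generated from the CMS database, so maintaining 6000 entries becomes a data problem rather than a configuration one.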

    Read the article

  • Why does my webpage look different when I connect using different routers? Do routers cache files?

    - by Ayyash
    Here is the case: I am working on a site from the office and from home. I recently updated the stylesheets and logged in to the live site from the office (using the same laptop I use all the time), and everything looks okay. I come home, use my home internet connection to connect to the site using the SAME laptop, and the styles are not updated! The thing is: this happens on ALL browsers, after emptying the cache many times, even after one month of work, and even if I have never opened the site before on that browser (as if my router has a cache of its own). Another thing: only one particular styles.css file seems to be hanging. Extra info: I use the same IP for my home wireless router as that defined in the office, the usual 192.168.0.1.
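    If some upstream cache or proxy really is holding onto the old file, one common workaround (it sidesteps the cache rather than explaining it) is to version the stylesheet URL so every intermediary treats it as a new resource; a sketch:

      <link rel="stylesheet" type="text/css" href="styles.css?v=2">

    Bumping the query string on each deployment forces a fresh fetch of that one file.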

    Read the article

  • How to make Google Chrome omnibar search sites permanent?

    - by tim
    In Google Chrome, when you have visited a site, say wikipedia.com, you can thereafter search the site directly from the omnibar by typing in a few letters and then hitting Tab. However, I noticed that after I clear my cache, Chrome does not remember the search autofills, and I once again have to visit the site manually for it to take effect. Is there a way to make the omnibar searches permanent, even after doing a full cache clear in Google Chrome? Thanks. Update: I tried the suggestion below, but bookmarking the site just allowed me to autocomplete the address. It did not give me the option to hit Tab to do the search directly in the omnibar. Any suggestions?

    Read the article

  • Set up IIS 7.5 with multiple website bindings and SSL?

    - by JK01
    On IIS 7.5 I am trying to achieve this with two websites. The Default Web Site is bound to: a blank host header on port 80 (HTTP); a blank host header on port 443 (HTTPS); go.example.com; www71.example.com; and the IP address of go.example.com. The second web site, "Beta", is bound to: beta.example.com; and a blank host header on port 443 (HTTPS) - using a blank host header only because it doesn't seem to be possible to bind HTTPS to a named host header. Both sites need to work with SSL, but I have these problems: when I type in beta.example.com, I see the go.example.com site instead; and I cannot seem to add the SSL binding to both websites at once (I have a single *.example.com wildcard certificate) - the Beta site will not even start if I add the HTTPS binding to it. What is the correct way to set this up?
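    For what it's worth, the IIS 7.5 UI typically greys out the host name field for HTTPS bindings, but appcmd can add one anyway, which is the usual workaround when a single wildcard certificate has to cover several sites on one IP; a sketch using the site name from the question (run elevated, with the certificate already assigned to port 443):

      %windir%\system32\inetsrv\appcmd set site /site.name:"Beta" /+bindings.[protocol='https',bindingInformation='*:443:beta.example.com']

    With a host name on the Beta binding, requests for beta.example.com should no longer fall through to the Default Web Site's catch-all HTTPS binding.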

    Read the article

  • TFS 2010 Build Definition and "Access to the path is denied" error?

    - by Daniel DiVita
    I am new to TFS with regard to build definitions. I have a build folder set up where I have set the permissions so EVERYONE has full control. Here is the exact error I am getting: E:\Builds\PIMSite\PIM.Site\PIM_Site.metaproj: Unable to copy file "C:\Builds\1\PIM System\PIM Site Build\Binaries\HtmlAgilityPack.xml" to "..\PIM.Site\Bin\HtmlAgilityPack.xml". Access to the path '..\PIM.Site\Bin\HtmlAgilityPack.xml' is denied. I have tried everything. I have removed everything from that folder and can delete it just fine, so it is not being used by another process. Any thoughts?

    Read the article

  • Google is re-indexing pages after redirecting URLs from HTTP to HTTPS incorrectly

    - by SLIM
    I upgraded my site so that all pages have gone from HTTP to HTTPS. I didn't consider that Google treats HTTPS pages differently than HTTP. I recreated my sitemap so that all links now reflect the new HTTPS URLs and let it be for a few days. (Whoops!) Google is now re-indexing all the HTTPS pages. I have about 19k pages on the site, and Google has already indexed about 8k of the new HTTPS pages. The problem is that Google sees all of these as brand new pages, when many of them have a long HTTP history. Of course most of you will recognize the problem: I didn't set up a 301 from the old HTTP URLs to the new HTTPS URLs. Is it too late to do this? Should I switch my sitemap back to HTTP URLs and then 301 redirect to the new HTTPS URLs? Or should I leave the sitemap as is and set up 301 redirects anyway? I'm not even sure Google is still trying to reach the HTTP site. Currently the site is doing 303 redirects (from HTTP to HTTPS), although I haven't figured out why yet.
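    Setting up the 301s themselves is usually a one-rule job; for instance, if the site happens to run on Apache (the question doesn't say which server it is), a blanket HTTP-to-HTTPS rule in the vhost or .htaccess would look roughly like this:

      RewriteEngine On
      RewriteCond %{HTTPS} off
      RewriteRule ^(.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]

    The same rule would also replace whatever is currently issuing the 303s, since a 301 is what signals a permanent move.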

    Read the article

  • Sneak Preview - New CodePlex UI

    We have been busy the last several months working to improve the overall experience for the CodePlex community. We have been working through some of the top requested items, such as our big announcement last week enabling Git. Something that is not explicitly on the feature request list is the request to update the web site's look and user experience. As Brian Harry mentioned, the Future of CodePlex is Bright, so it is time to start brightening up the place. Goals: As with any sizeable change, you need to decide the scope of changes you want to tackle. We decided that we would optimize for incremental improvements versus taking months to get a completely new experience released. Our goals with this user experience work are to refresh the look and feel of the site, introduce new visual elements, and set up the site for future structural changes. So this is not the end, just the beginning. Early views: I want to set a few expectations first; these screenshots are not final, and we are still working through the content and final element placement. Feedback is always welcome, just keep that in mind as you review the images. New CodePlex home: the navigation changed a good bit on the home page, and we have moved the search to a more consistent location across the site. User profile and user's home page: the goal was to make it easier to find and take action on common tasks such as creating projects. Project home and issue tracker: this should give you a taste of where we are going with the new user experience. As always we love the feedback; either comment below, find us on Twitter @codeplex or @mgroves84, or create or vote up suggestions.

    Read the article

  • Using web proxies - safe to enter passwords?

    - by bergin
    Hi. I wanted to check something on a local site and see how the outside world sees it. However, using a web proxy, I'm not sure that when I enter my credentials the proxy won't record them and give the proxy owner access to my site. Is there another way to see my own site as though I were on the other side?

    Read the article

  • Loading the WordPress admin page blocks further requests

    - by Auxiliary
    I've installed a WordPress site on a host (actually I moved it from localhost). The site loads and works perfectly; however, if I log in as admin, the admin page doesn't completely load, and all further requests to the server are blocked for about 10 or 15 minutes. I can't even ping the site. Where could this problem be coming from? Is it firewall related, or something else? Any help or idea is greatly appreciated.

    Read the article

  • Google Analytics unexplained spike

    - by Dianne
    My client's Google Analytics has shown a spike every day since May 6th (from 0 to 100). It is in a city that he is not optimized for and does very little business in. The hits are coming in direct to the website. My client is concerned that it has something to do with competition using his site as a price-shopping device. I can't view the IP to see where the visits are coming from, and his site is not built in PHP, so the usual workaround doesn't apply here. Any thoughts? Could it be a "referring site" situation, and if so, is there a way for me to find out what the referring site is?

    Read the article

  • Getting rank for keywords that I don't want to appear on my website [duplicate]

    - by Rober
    This question already has an answer here: Which keyword should I use: colors, colours, or a combination of both? One of my products has two names. One of them is what I consider correct and thus it is what I want to appear on my website. The other name is incorrect to me, so I would like to avoid it. But I know that many people will search for my product using the "bad" name. How could I get the "bad" name indexed for my site on search engines, even if nobody can read it there? Of course, I want to do it "legally", so that no engine will ban my site for cloaking, black hat SEO, etc. EDIT: Having the "bad" name in my backlinks is not an option. For example, I would perceive user reviews connecting my site to that word as a negative point. Maybe having my site as a search result for that word could be negative as well, but I think it is worth it.

    Read the article

  • Is there a standard method for assigning nameservers to servers in a Windows domain?

    - by HopelessN00b
    As the title asks, I'm wondering if there's any standard or "best practice" for how to actually assign nameservers (DNS) and manage the nameserver configuration for client servers on a Windows domain. I'm talking about the preferred and alternate DNS server entries in each server's TCP/IP settings (the setting circled in the screenshot, in case the language of the question is not clear enough). This is for a large, multi-site environment, where ideally/hopefully all servers point at their site's Domain Controller as the primary DNS server, and a DNS server at a different site as the secondary DNS server. For simplicity's sake, we can say that the secondary server would be the Domain Controller at the home site for everyone, and there are no tertiary DNS servers (even though that's not actually the case). Try as I might, I can't seem to find a GPO setting for this (at least at FL 2003 R2, the Computer Configuration - Administrative Templates - Network - DNS Client - DNS Servers GPO is "Supported on: Windows XP Professional only"), and I find it rather hard to believe that the "best"/"standard" solution would therefore be either scripting up something to apply the DNS settings per site, or using a DHCP server to push those configurations out to the other servers via the DHCP Scope Options. So, is there a standard way of managing this configuration? (That's hopefully not "a script" or "DHCP Scope Options".)
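    If it does come down to scripting, the per-site assignment can at least be reduced to a couple of lines per server; a rough sketch using netsh on Server 2008-era machines (the interface name and addresses are placeholders, and Server 2003 uses the older "netsh interface ip set dns" form):

      netsh interface ipv4 set dnsservers name="Local Area Connection" source=static address=10.1.0.10 register=primary
      netsh interface ipv4 add dnsservers name="Local Area Connection" address=10.2.0.10 index=2

    The site-specific primary address could be chosen from a lookup keyed on the server's AD site, which keeps the script itself identical everywhere.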

    Read the article

  • <authentication mode="Windows"/>

    - by kareemsaad
    I get this error when I browse a new web site that is a sub-site of a sub-site: http://sharp.elarabygroup.com/ha/deault.aspx ("ha" is my new web site). Configuration Error. Description: An error occurred during the processing of a configuration file required to service this request. Please review the specific error details below and modify your configuration file appropriately. Parser Error Message: It is an error to use a section registered as allowDefinition='MachineToApplication' beyond application level. This error can be caused by a virtual directory not being configured as an application in IIS. Source Error: Line 61: ASP.NET to identify an incoming user. Line 62: --> Line 63: <authentication mode="Windows" /> Line 64: <!-- The <customErrors> section enables configuration Line 65: of what to do if/when an unhandled error occurs. I noticed that another web site that is also a sub-site of a sub-site, but which doesn't have its own web.config, works fine. Is this related to web.config? Do the domain and all of its sub-sites share one web.config?
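    The parser error's own hint usually points at the fix: the /ha folder needs to be configured as an application, not just a virtual directory, because the <authentication> section can only live in a web.config at application level. On IIS 7 or later that is "Convert to Application" in IIS Manager, or, as a sketch with guessed site name and path, via appcmd:

      %windir%\system32\inetsrv\appcmd add app /site.name:"Default Web Site" /path:/ha /physicalPath:"C:\inetpub\wwwroot\ha"

    The alternative is to remove the <authentication> section from the sub-folder's web.config and let it inherit from the parent, which would explain why the sub-site without its own web.config works.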

    Read the article

  • wildcard host name bindings for multiple subdomains in multiple sites on IIS7 with a single IP address

    - by orca
    Situation: I have a single Windows 2008 server with a single public IP address. I have multiple domains with wildcard A records pointing to that single IP address. I need each domain to be hosted by a different web site (i.e. www.domain1.com by the site domain1site). I need domain1.com to act like www.domain1.com. I need each site to be able to have multiple subdomains (i.e. www.domain1.com, abc.domain1.com, xyz.domain1.com). Not directly relevant, but here it goes: I plan to handle each subdomain with a different application hosted in the same site (i.e. application /xyz in domain1site). However, I found out that IIS7 does not support creating web sites with a wildcard host name binding, and setting the binding without any subdomain (i.e. domain1.com) does not work, even for www.domain1.com. Is there a simple solution? Does any IIS extension, like Application Request Routing, provide such a capability?
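    For what it's worth, true wildcard host-name bindings (*.domain1.com) only arrived in much later IIS versions, so on IIS7 the usual workarounds are either to add each expected host name as an explicit binding on the right site, or to funnel everything into one catch-all site and route on the Host header. A sketch of the first option, using the names from the question:

      appcmd set site /site.name:"domain1site" /+bindings.[protocol='http',bindingInformation='*:80:domain1.com']
      appcmd set site /site.name:"domain1site" /+bindings.[protocol='http',bindingInformation='*:80:www.domain1.com']
      appcmd set site /site.name:"domain1site" /+bindings.[protocol='http',bindingInformation='*:80:xyz.domain1.com']

    For genuinely arbitrary subdomains on one IP, a single catch-all site plus URL Rewrite or Application Request Routing rules keyed on the Host header is the more common approach.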

    Read the article

  • How can I change a specific website's style?

    - by Darthfett
    I have a specific website I often use (specifically, http://www.pygame.org/), which has an awful color scheme. I would like to change the color scheme of the site, but I haven't been able to find a good tool for the job. Some basic requirements: it should not be universal to all websites, or affect other websites; it should be semi-automatic, so I don't have to re-define the theme for each page of the site; I want to continue to access the site online (I don't want a local copy of the entire site); and it should not be OS-specific (browser-specific is okay). I am currently using Firefox, but I am also happy with Chrome. There may be some limitations on what can be done automatically, as the CSS seems to be embedded in the HTML (and some of it inline in the HTML tags). I would like to remove as much of the green as possible. Is there an existing extension/add-on that does this?
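    One browser-specific option that fits these constraints is a per-site user style, for example via the Stylish extension or Firefox's userContent.css; the selectors and colors below are only an illustration, since the real overrides depend on pygame.org's markup:

      @-moz-document domain("pygame.org") {
        body, table, td { background-color: #f4f4f4 !important; color: #222222 !important; }
        a { color: #0645ad !important; }
      }

    The @-moz-document block is what scopes the rules to that one domain, leaving every other site untouched.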

    Read the article

  • Concurrent Affairs

    - by Tony Davis
    I once wrote an editorial, Multi-Core Mania, on the conundrum of ever-increasing numbers of processor cores, but without the concurrent programming techniques to get anywhere near exploiting their performance potential. I came to the controversial conclusion that, while the problem loomed for all procedural languages, it was not a big issue for the vast majority of programmers. Two years later, I still think most programmers don't concern themselves overly with this issue, but I do think that's a bigger problem than I originally implied. Firstly, is the performance boost from writing code that can fully exploit all available cores worth the cost of the additional programming complexity? Right now, with quad-core processors that, at best, can make our programs four times faster, the answer is still no for many applications. But what happens in a few years, as the number of cores grows to 100 or even 1000? At this point, it becomes very hard to ignore the potential gains from exploiting concurrency. Possibly, I was optimistic to assume that, by the time we have 100-core processors, and most applications really needed to exploit them, some technology would be around to allow us to do so with relative ease. The ideal solution would be one that allows programmers to forget about the problem, in much the same way that garbage collection removed the need to worry too much about memory allocation. From all I can find on the topic, though, there is only a remote likelihood that we'll ever have a compiler that takes a program written in a single-threaded style and "auto-magically" converts it into an efficient, correct, multi-threaded program. At the same time, it seems clear that what is currently the most common solution, multi-threaded programming with shared memory, is unsustainable. As soon as a piece of state can be changed by a different thread of execution, the potential number of execution paths through your program grows exponentially with the number of threads. If you have two threads, each executing n instructions, the number of possible "interleavings" of those instructions grows exponentially with n. Of course, many of those interleavings will have identical behavior, but several won't. Not only does this make understanding how a program works an order of magnitude harder, but it will also result in irreproducible, non-deterministic bugs. And of course, the problem will be many times worse when you have a hundred or a thousand threads. So what is the answer? All of the possible alternatives require a change in the way we write programs and, currently, seem to be plagued by performance issues. Software transactional memory (STM) applies the ideas of database transactions, and optimistic concurrency control, to memory. However, working out how to break down your program into sufficiently small transactions, so as to avoid contention issues, isn't easy. Another approach is concurrency with actors, where instead of having threads share memory, each thread runs in complete isolation, and communicates with others by passing messages. It simplifies concurrent programs but still has performance issues, if the threads need to operate on the same large piece of data. There are doubtless other possible solutions that I haven't mentioned, and I would love to know to what extent you, as a developer, are considering the problem of multi-core concurrency, what solution you currently favor, and why. Cheers, Tony.
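    To make the shared-memory point concrete, here is a small C# illustration (my own sketch, not from the editorial) in which two threads interleave unsynchronized increments of a single counter; the final total is rarely the expected 2,000,000, and it changes from run to run:

      using System;
      using System.Threading;

      class RaceDemo
      {
          static int counter = 0;

          static void Main()
          {
              // Each thread performs a million unsynchronized read-modify-write
              // cycles on the same field, so increments from the two threads can
              // interleave and be lost.
              ThreadStart work = () => { for (int i = 0; i < 1000000; i++) counter++; };
              var t1 = new Thread(work);
              var t2 = new Thread(work);
              t1.Start(); t2.Start();
              t1.Join(); t2.Join();
              Console.WriteLine(counter); // non-deterministic, usually well below 2000000
          }
      }

    Replacing counter++ with Interlocked.Increment(ref counter) makes the result deterministic again, which is exactly the kind of per-access care that becomes unmanageable as the thread count grows.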

    Read the article

  • How can I write a mod_rewrite rule to change https:// to http:// when the domain is not the main domain?

    - by Oudin
    I've set up a WordPress multi-site with a wildcard SSL certificate for example.com so I can access the admin area securely. However, I'm also using domain mapping to map other domains to other sites, e.g. alldogs.com to alldogs.example.com. The problem is that when I try to access the front end of a site from the admin of a mapped domain, e.g. alldogs.com, by clicking "Visit Site", the link goes to https://alldogs.com because of the forced SSL applied to the admin area. That produces a certificate warning, since the certificate is for example.com and not alldogs.com. How can I write a mod_rewrite rule that checks whether the requested host is not the main domain, e.g. example.com, and if so changes https:// to http://, so the site is accessed over port 80 and doesn't generate a certificate warning for the mapped domains?
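    A sketch of the kind of rule the question describes, for the .htaccess at the root of the network (the host-matching regex is an assumption and would need to cover example.com and all of its subdomains):

      RewriteEngine On
      RewriteCond %{HTTPS} on
      RewriteCond %{HTTP_HOST} !(^|\.)example\.com$ [NC]
      RewriteRule ^(.*)$ http://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]

    One caveat on the design: the browser completes the TLS handshake against the wildcard certificate before it ever sees the redirect, so the warning can still appear on that first HTTPS request; filtering the "Visit Site" URL inside WordPress so the link is generated as http:// in the first place avoids the warning entirely.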

    Read the article

  • Domain only responding to certain locations?

    - by CuriosityHosting
    I have a client who's been having problems with his site. The server doesn't seem to want to load his site in certain countries, though other sites on it are fine. This site [link removed] only seems to load in the US and Canada. In Europe, the UK, Asia, etc., the site seems to be blocked (it's been like this for a week now). I've looked over the server and it seems fine. Other sites work fine, and the nameservers are set up properly, pointing to my main server, at http://puu.sh/MIGF Any ideas?

    Read the article

  • Blatant copyright theft

    - by Tom Gullen
    I found a user on the forum trying to solicit business for his website; a good user reported it and I checked the website out. Firstly, and most dangerously, it's attempting to sell our original software, which is open source. Our open source software is around 15 MB, and he's serving a 50 MB download and trying to sell it for $20. He's also stolen our CSS/images/site design in general, which is all custom built. I attempted to open a reasonable discussion with him, and he responded promptly saying he would remove the offending materials if he could just have 3 days to sort it out, which I accepted. I'm not sure what his plan was, because everything on that site is offending material. Anyway, he messaged back saying the site was offline, and it was, but it went back online shortly afterwards. It's pretty sickening that someone is selling open source work as their own (the site's About Us page references him as the sole developer, etc.; it's unbelievable to read). I want to shut it down; what are my options? I'm going to contact his domain registrar, web host, and PayPal (that's how he's selling the program). Any other ideas?

    Read the article

  • PHP page stopped outputting content after running the "yum install php-devel" command

    - by stwhite
    This error is bizarre, but after running the "yum install php-devel" command (after a long day of trying to install Facedetect and OpenCV for face detection) my site stopped functioning. The site uses MySQL and PHP. When you hit the URL, the page executes the MySQL and the PHP, but it appears to randomly stop outputting the content of the page. None of the code was changed, and the site was working flawlessly prior to running the mentioned SSH command. I do use output buffering in the site, but removing the calls to "ob_flush", "ob_end_flush" and "ob_start" didn't appear to help; I'm still having issues with the site. Any ideas what this could be? Here is the output from the terminal:
      [myserver ~]# cd Facedetect-4b1dfe1
      [myserver Facedetect-4b1dfe1]# phpize
      Configuring for:
      PHP Api Version: 20090626
      Zend Module Api No: 20090626
      Zend Extension Api No: 220090626
      [myserver Facedetect-4b1dfe1]# configure
      bash: configure: command not found
      [myserver Facedetect-4b1dfe1]# phpize && configure && make && make install
      Configuring for:
      PHP Api Version: 20090626
      Zend Module Api No: 20090626
      Zend Extension Api No: 220090626
      bash: configure: command not found
      bash: Read: command not found
      [myserver Facedetect-4b1dfe1]# make
      make: *** No targets specified and no makefile found. Stop.
      [myserver Facedetect-4b1dfe1]# yum install php5-devel
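    Separately from the output-buffering question, the transcript shows why the extension build never got anywhere: a bare "configure" isn't on the PATH, so the script has to be invoked as "./configure". The usual PECL-style sequence (assuming the Facedetect source ships a configure script) is:

      cd Facedetect-4b1dfe1
      phpize
      ./configure
      make
      make install
      # then enable the extension in php.ini and restart the web server

    Note also that on this distribution the development package appears to be php-devel (the package named in the title), not php5-devel.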

    Read the article

  • OpenWeb(String) method

    - by ybbest
    I guess this is a SharePoint beginner problem; however, it took me a while to figure out what the problem was, so I will blog it to help me remember. Basically, I wrote the following code to grab some list items from my SharePoint subsite http://win-oirj50igics/RestAPI, however I got an error stating that: "<nativehr>0x80070002</nativehr><nativestack></nativestack>There is no Web named / http://win-oirj50igics/RestAPI". The problem is that the OpenWeb(String) method returns the web site that is located at the specified server-relative or site-relative URL. It expects a relative URL, so after I changed "http://win-oirj50igics/RestAPI" to "RestAPI", everything worked fine.
      using (SPSite site = new SPSite("http://win-oirj50igics/"))
      {
          SPWeb web = site.OpenWeb("http://win-oirj50igics/RestAPI");
          SPQuery query = new SPQuery();
          query.Query = camlDocument.InnerXml;
          SPListItemCollection items = web.Lists["Songs"].GetItems(query);
          IEnumerable<Song> sortedItems = from item in items.OfType<SPListItem>()
                                          orderby item.Title
                                          select new Song { SongName = item.Title, SongID = item.ID };
          songs.AddRange(sortedItems);
      }
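    For completeness, the fix described above just means passing the site-relative name rather than the full URL, along these lines:

      using (SPSite site = new SPSite("http://win-oirj50igics/"))
      using (SPWeb web = site.OpenWeb("RestAPI"))   // site-relative URL, per the fix above
      {
          // ... same query code as above
      }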

    Read the article
