Search Results

Search found 37739 results on 1510 pages for 'static site'.


  • Process for Securing Web Sites and Applications

    - by Aamir Hasan
    The following quick-start guide provides a detailed overview of how to configure security for IIS 6.0.

    Reduce the Attack Surface of the Web Server
    1. Enable only essential Windows Server 2003 components and services.
    2. Enable only essential IIS 6.0 components and services.
    3. Enable only essential Web service extensions.
    4. Enable only essential Multipurpose Internet Mail Extensions (MIME) types.
    5. Configure Windows Server 2003 security settings.

    Prevent Unauthorized Access to Web Sites and Applications
    1. Store content on a dedicated disk volume.
    2. Set IIS Web site permissions.
    3. Set IP address and domain name restrictions.
    4. Set the NTFS file system permissions.

    Isolate Web Sites and Applications
    1. Evaluate the effects of impersonation on application compatibility:
       - Identify the impersonation behavior for ASP applications.
       - Select the impersonation behavior for ASP.NET applications.
    2. Configure Web sites and applications for isolation.

    Configure User Authentication
    1. Configure Web site authentication:
       - Select the Web site authentication method.
       - Configure the Web site authentication method.
    2. Configure File Transfer Protocol (FTP) site authentication.

    Encrypt Confidential Data Exchanged with Clients
    1. Use Secure Sockets Layer (SSL) to encrypt confidential data.
    2. Use Internet Protocol security (IPSec) or virtual private network (VPN) with remote administration.

    Maintain Web Site and Application Security
    1. Obtain and apply current security patches.
    2. Enable Windows Server 2003 security logs.
    3. Enable file access auditing for Web site content.
    4. Configure IIS logs.
    5. Review security policies, processes, and procedures.

    Note: To secure the Web sites and applications in a Web farm, use the process described in this chapter to configure security for each server in the Web farm.
    Link: http://www.studentacad.com/post/2010/04/28/Process-for-Securing-Web-Sites-and-Applications.aspx

    Read the article

  • In OpenGL ES 2, should I allocate multiple transformation matrices?

    - by thm4ter
    In OpenGL ES 2, should I declare just one transformation matrix and share it across all objects, or should I declare a transformation matrix in each object that needs it? For clarification, something like this:

        public class someclass {
            public static float[] transMatrix = new float[16];
            ...
            public static void translate(int x, int y) {
                // do translation here
            }
        }

        public class someotherclass {
            ...
            void draw(GL10 unused) {
                someclass.translate(10, 10);
                // draw
            }
        }

    versus something like this:

        public class obj1 {
            public static float[] transMatrix = new float[16];
            ...
            void draw(GL10 unused) {
                // translate
                // draw
            }
        }

        public class obj2 {
            public static float[] transMatrix = new float[16];
            ...
            void draw(GL10 unused) {
                // translate
                // draw
            }
        }

    Read the article

  • Should I have link rel=next & prev on URLs which have query variables?

    - by user21100
    For example, I have link rel prev & next set up on these pages of products: site.com?page=2 site.com?page=3 (this is my preferred structure by the way and I'm trying to get all the ugly URLs which are littered with query variables deindexed as they are causing duplicate content). So the above URLs are fine but once a filter to narrow product results is selected, like "price", the URL shows like this: site.com?price[1000-1499]=on site.com?page=2&price[1000-1499]=on As of right now, I am having the link rel prev & next dynamically added to the header of these pages but since I am working on getting these query variable URLs pages deindexed, I am wondering if I should get rid of it on these pages? Any thoughts?
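    As a rough illustration of the dynamic header logic described above, here is a minimal sketch assuming a PHP-style template; is_filtered_request() and the $lastPage value are hypothetical stand-ins for however the site detects filters and pagination. The idea is to emit rel prev/next only on the clean ?page=N URLs and a robots noindex on the filtered variants that are being deindexed:

        <?php
        // Emit pagination hints only for clean ?page=N URLs.
        function print_pagination_links($page, $lastPage) {
            $base = 'http://site.com/?page=';
            if ($page > 1) {
                echo '<link rel="prev" href="' . $base . ($page - 1) . '">' . "\n";
            }
            if ($page < $lastPage) {
                echo '<link rel="next" href="' . $base . ($page + 1) . '">' . "\n";
            }
        }

        // Hypothetical check: treat any request carrying a filter parameter as filtered.
        function is_filtered_request() {
            return isset($_GET['price']);
        }

        // In the <head> of the listing template:
        $page = isset($_GET['page']) ? (int) $_GET['page'] : 1;
        $lastPage = 10; // hypothetical total page count

        if (is_filtered_request()) {
            // Filtered URLs are being deindexed, so skip prev/next and keep robots away.
            echo '<meta name="robots" content="noindex, follow">' . "\n";
        } else {
            print_pagination_links($page, $lastPage);
        }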

    Read the article

  • Architecture of a "website generator" web application

    - by Resorath
    What is the most maintainable and efficient way to architect a web application whose purpose is to host and generate websites that can be customized to a certain degree? There are a lot of applications of this style in the wild, generating all kinds of sites, from sites that host World of Warcraft guilds, like guildlaunch, to other sites like my wedding for wedding site hosting. My question is, what is the basic architecture that these sites operate on? I imagine there are two ways of thinking about this. One: a central set of code that all sites on the host run against, which acts differently based on which site was visited. In this manner, when the base code is updated, all sites are updated simultaneously. Or two: the code for an individual site exists in a silo, and is simply replicated to a new directory each time a site is created. When an update needs to be applied, the code is pushed out to each site silo. In my case, I am working in PHP with the CodeIgniter framework; however, the answer need not be limited to this case. Which method (if any) creates a more maintainable and efficient architecture for managing this style of web application?
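    A minimal sketch of the first approach (one shared codebase, behavior keyed off the requesting hostname). The table and column names here are illustrative assumptions, and in CodeIgniter this logic would more likely live in a hook or base controller than in a standalone script:

        <?php
        // Resolve the tenant from the Host header, e.g. someguild.example-host.com.
        $host = strtolower($_SERVER['HTTP_HOST']);
        $host = preg_replace('/^www\./', '', $host);

        // One row per hosted site; every site runs the same controllers.
        $db = new PDO('mysql:host=localhost;dbname=platform', 'user', 'pass');
        $stmt = $db->prepare('SELECT id, theme, settings FROM sites WHERE hostname = ?');
        $stmt->execute(array($host));
        $site = $stmt->fetch(PDO::FETCH_ASSOC);

        if ($site === false) {
            header('HTTP/1.1 404 Not Found');
            exit('Unknown site');
        }

        // Only per-tenant data differs from here on; updating the shared code
        // updates every hosted site at once.
        define('SITE_ID', (int) $site['id']);
        define('SITE_THEME', $site['theme']);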

    Read the article

  • Where have I been for the last month?

    - by MarkPearl
    So, I have been pretty quiet for the last month or so. True, it has been holiday time and I went to Cape Town for a stunning week of sunshine and blue skies, but the second I got back home I spent the remainder of my holiday on my PC viewing tutorials on www.tekpub.com. Craig Shoemaker, whom I got in contact with because of his podcast, sent me a one-month free subscription to the site, and it has been really appreciated. I have done a lot of WPF programming in the past, but not any ASP.NET stuff, so I used the time to get a peek at ASP.NET MVC 2 as well as a bunch of other technologies. I just wished I had more spare time to do the rest of the videos. While I didn't understand all of what was being shown in the ASP.NET material (it required previous ASP.NET expertise), the site was a really good jump start for someone wanting to learn a new technology and broaden their horizons, and I would highly recommend it. My only gripe is that in South Africa we have limited bandwidth and bandwidth speeds, so I spent a lot of my monthly bandwidth on the site and had to top up with my ISP several times because of the high-quality video captures that the site did. I would have preferred to download the videos, but apparently that is only available to people who have the yearly subscription. Other than that, great site and thanks a ton Craig!

    Read the article

  • Sharing password-protected videos on social media

    - by PaulJ
    We are developing a site where users will be able to watch and download videos that they've recorded of themselves at a public event. The videos will be password protected, and will be available only to users who have paid for them at the event... but on the other hand, we also want users to share those videos on social media, since they will be attractive publicity for our events. Having people log into our site with their password, download the video, and then re-upload it to YouTube/Facebook would be too cumbersome, and I suspect that few users would be willing to do that. So the obvious alternative is to have one of those convenient "share" buttons, but the problems with that approach are that: (1) the video will be physically hosted on (and linked to from) our site, so what happens if those videos go viral and our bandwidth cost explodes? and (2) the video is password protected. The solution I've thought of for this is: upload the user's video to our password-protected site and to YouTube at the same time, as an unlisted video. The user can access our site with his password and download his video (to watch on his TV or whatever). If the user hits the "share" button, we show him the YouTube link... and we turn the video into a listed one. This seems in line with the ideas in Using YouTube as a CDN, and there didn't seem to be any objections in that question. I'm posting this just to confirm that my idea doesn't violate any YouTube TOS, and also to see if it is a good one or whether there might be better alternatives.

    Read the article

  • How to handle new domain names?

    - by michael
    I have a new product which I'll call a pen ink reloader. I have a website using my product's name, for example www.inkywink.com, which I want to have found by searches for keywords such as "pen ink", "pen out of ink", "ink for pens", etc., since nobody knows that a pen ink reloader exists. I see that it's quite difficult to get on the front page for these keywords since they have lots of competition. However, I notice that the exact phrases I want to rank highly for are available as domains. I purchase "www.penink.com" and "penoutofink.com", which for argument's sake are highly searched and the perfect keywords to get eyes on my money site, www.inkywink.com. Two questions:
    1. What is my best option to leverage those names so that they appear near the top of searches and I can get traffic to my money site? Do I just have them 301 redirect to inkywink.com, or should I create a small amount of original content on each with links to my main site?
    2. If I just have them redirected to inkywink.com, am I able to use keywords in meta tags and headers for each site separately, or do they all automatically obtain the same headers and tags as the site to which they're redirected?
    Thanks to anyone who can help, as I'm a real newbie to all this.
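    As a minimal sketch of the "just 301 them" option from question 1, assuming the keyword domains can each serve a small PHP index page (purely illustrative; the same thing is often done with a web server rewrite or a registrar-level redirect):

        <?php
        // Permanently redirect the keyword domain (e.g. penink.com) to the money site.
        // A 301 tells search engines the move is permanent.
        header('HTTP/1.1 301 Moved Permanently');
        header('Location: http://www.inkywink.com/');
        exit;

    With a plain redirect like this, the keyword domain serves no page of its own, so there are no separate meta tags or headers on it for search engines to read, which speaks to question 2.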

    Read the article

  • What books/references are recommended on the subject of planning and developing efficient web sites [closed]

    - by Shakil
    I once visited a site containing videos in which a well-known web developer created a site from scratch, covering planning (on paper and in software), management, design, and then development. I bookmarked the site but am unable to find it now. My question is: how do I do web development effectively? What books or videos are recommended? (I tried Google but was unable to find useful books or videos.) I want to learn how people do it. Can you share resources (books, videos, links) about this? Thanks in advance. Note: I created a job site for my university project. It gave me huge pain. That's why I want to learn an efficient way. I know HTML, CSS, JavaScript, jQuery, PHP (still learning; MVC and frameworks not yet completed), and phpMyAdmin.

    Read the article

  • Redirecting 2 or more domains to same hosting server

    - by mtk
    I have the domains A.com, A.co.in and A.in, purchased from site X. I have a hosting space/account purchased from site Y, which has provided me with 2 DNS entries that are to be replaced in the account at the site from where I purchased the domains. I have successfully changed the DNS entries of A.com to these 2 DNS entries and I am able to see my index.html page when I hit A.com. Problem: Along the same lines, I have changed the DNS entries to the same entries for A.co.in and A.in, but hitting those sites in a browser gives me no response, and the browser's 'Site not found' page is shown instead. Please let me know how to set this up so that when I hit any of the domains, the website is rendered from the hosting server. What am I doing wrong here? Note: It has been more than 3 days since changing the DNS entries, so I don't think this is a problem of DNS propagation, which I heard about from some people. Please provide some detailed explanation, as I am very, very new to this. This is my first hosting ;) -Thanks

    Read the article

  • Where is the best place to find stock website templates?

    - by Billy ONeal
    I think I'm in the majority of programmers in saying I can't do visual design for s***. But I do write programs occasionally, and I'd like to have a nice website to tell people about said programs. I used to use a site called "OSWD" to find templates, but it's been forever since it was last looked after, and most of the designs seem tailored too specifically to a single kind of site: for example, a site featuring a large picture of an ice cube wouldn't make much sense for a site offering software for people to use. I know there are plenty of template sites out there with freely available designs, but I'm not sure which ones are good and which ones are garbage. Where is the best place to find website templates?

    Read the article

  • Are there any free hit counters that don't track users?

    - by David Englund
    Are there any free services that increment a simple hit counter without tracking the users of the site? I would like to know how many visitors there are to my site, excluding bots. I don't need detailed information like unique visitors or where the user is from (in fact, that's exactly what I don't want). I have been researching free hit counters, and it seems that most (all?) of them display advertisements and their terms of service indicate that they can use the data they collect from the client site however they want. Google Analytics also does this and tracks users across sites. The site is static HTML, so an external link or iframe of some sort is easiest for me to implement. I could switch to a Ruby or Node.js back-end, in which case lots of other options open up (like Ruby impressionist and more low-level implementations), but my hosting service is pretty limited. If the answer to my question is simply "no," what are my other options?

    Read the article

  • Cheap server stress testing

    - by acrosman
    The IT department of the nonprofit organization I work for recently got a new virtual server running CentOS (with Apache and PHP 5), which is supposed to host our website. During the process of setting up the server I discovered that the slightest use of the new machine caused major performance problems (I couldn't extract tarballs without bringing it to a halt). After several weeks of casting about in the dark by tech support, it now appears to be working fine, but I'm still nervous about moving the main site there. I have no budget to work with (so no software or services that require money), although due to recent cutbacks I have several older desktops that I could use if that helps. The site doesn't need to withstand massive amounts of traffic (it's a Drupal site with just a few thousand visitors a day), but I would like to put it through a bit of its paces before moving the main site over. What are cheap tools that I can use to get a sense of whether the server can withstand even low levels of traffic? I'm not looking to test the site itself yet, just the fundamental operation of the server.
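    Free command-line tools such as ApacheBench (ab, which ships with Apache) are the usual answer here. As a zero-cost alternative in line with the constraints above, a small PHP concurrency probe run from one of the spare desktops can give a rough feel for the server. This is only a sketch, not a substitute for a proper load tool; the URL and request counts are placeholders:

        <?php
        // Fire $concurrency simultaneous requests at the server and report
        // total time and response codes. Run from a separate machine, not the
        // server under test.
        $url = 'http://test-server.example/';   // placeholder target
        $concurrency = 10;

        $mh = curl_multi_init();
        $handles = array();
        for ($i = 0; $i < $concurrency; $i++) {
            $ch = curl_init($url);
            curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
            curl_setopt($ch, CURLOPT_TIMEOUT, 30);
            curl_multi_add_handle($mh, $ch);
            $handles[] = $ch;
        }

        $start = microtime(true);
        do {
            $status = curl_multi_exec($mh, $active);
            if ($active) {
                curl_multi_select($mh);   // wait for activity instead of busy-looping
            }
        } while ($active && $status == CURLM_OK);
        $elapsed = microtime(true) - $start;

        foreach ($handles as $ch) {
            echo 'HTTP ' . curl_getinfo($ch, CURLINFO_HTTP_CODE) . "\n";
            curl_multi_remove_handle($mh, $ch);
            curl_close($ch);
        }
        curl_multi_close($mh);

        printf("%d concurrent requests completed in %.2f seconds\n", $concurrency, $elapsed);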

    Read the article

  • drupal 6 in ubercart [closed]

    - by Rohit developer
    I am working with Ubercart on Drupal 6. A product is added to the cart with recurring billing of one month, and the customer adds a different site to the order for each copy of the recurring product. The product is $30, so if they add 4 websites the payment is 30 * 4 = $120. If in the next month the user deletes one site from the product order, it becomes 30 * 3 = $90. Can I reduce the PayPal payment during the next month so they pay $90? Is that possible with PayPal? Please reply soon.

    Read the article

  • duplicate pages

    - by Mert
    I made a small coding mistake and Google indexed my site wrongly. This is the correct form: https://www.foo.com/urunler/171/TENGA-CUP-DOUBLE-HOLE, but Google indexed my site like this: https://www.foo.com/urunler/171/cart.aspx. First I fixed the problem and made a sitemap with only the correct links in it. Now I checked Webmaster Tools and I see this:
    Total indexed: 513
    Not selected: 544
    Blocked by robots: 0
    So I think this is caused by duplicate indexing, and the duplicates are what show up as "not selected". I want to know how to fix the "https://www.foo.com/urunler/171/cart.aspx" links. Should I fix this in code, or should I contact Google to reindex my site? If I should redirect the wrong/duplicate links to the correct ones, what should that look like? Thanks for your time in advance.

    Read the article

  • The White Screen of Death

    - by TATWORTH
    A few days ago I was browsing a particular commercial web site when the site crashed and I encountered the "White Screen of Death". The detailed dump showed me what the site was using: Zend, MySQL, PHP, Mage (Magento). Besides all this detailed information being of use to a hacker, the copyright on Magento cited a date of 2009. Does this mean that out-of-date software was in use? There is a more basic point: in your site, please ensure that fatal errors are trapped and redirected to a page that gives away no information useful to a hacker. I suggest also that you provide a means for an administrator to simulate an error to check the error handling.
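    For a PHP site like the one described, a minimal sketch of that advice might look like the following (the error page path and log location are assumptions, and the closures require PHP 5.3+):

        <?php
        // Hide error detail from visitors, log it for administrators, and show a
        // generic page instead of a stack dump. error.php is a hypothetical
        // static page with no technical detail on it.
        ini_set('display_errors', '0');
        ini_set('log_errors', '1');
        ini_set('error_log', '/var/log/app/php_errors.log');

        set_exception_handler(function ($e) {
            error_log('Unhandled exception: ' . $e->getMessage());
            header('HTTP/1.1 500 Internal Server Error');
            include __DIR__ . '/error.php';
            exit;
        });

        // Fatal errors bypass the exception handler, so catch them at shutdown.
        register_shutdown_function(function () {
            $err = error_get_last();
            $fatal = array(E_ERROR, E_PARSE, E_CORE_ERROR, E_COMPILE_ERROR);
            if ($err !== null && in_array($err['type'], $fatal)) {
                if (!headers_sent()) {
                    header('HTTP/1.1 500 Internal Server Error');
                }
                include __DIR__ . '/error.php';
            }
        });

    An administrator-only "simulate an error" link can then simply throw an exception to confirm the handler fires.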

    Read the article

  • Preventing - Large Number of Failed Login Attempts from IP

    - by Silver89
    I'm running a CentOS 6.3 server and currently receive emails entitled "Large Number of Failed Login Attempts from IP" from my server every 15 minutes or so. Surely with the below configured, only the person using my static IP should even be able to try to log in? If that's the case, what are these remote unknown users trying to log into that is generating these emails?
    Current security steps:
    - root login is only allowed without-password
    - StrictModes yes
    - SSH password login is disabled - PasswordAuthentication no
    - SSH public keys are used
    - SSH port has been changed to a number greater than 40k
    - cPHulk is configured and running
    - Logins limited to a specific IP address
    - cPanel and WHM limited to my static IP only
    hosts.allow:
    - sshd: (my static ip)
    - vsftpd: (my static ip)
    - whostmgrd: (my static ip)
    hosts.deny:
    - ALL : ALL

    Read the article

  • How do sites avoid SEO issues / legalities with subdomain unique ids?

    - by JM4
    I was looking through a few websites recently and noticed a trend I'm not sure I understand. Sites are creating unique referral URLs for customers in the form of http://customname.site.com (if somebody were to use http://www.site.com/customname it would function the same way). Using Google Chrome I can see the sites are doing 302 redirects at some point, apparently via some sort of .htaccess redirect that takes the subdomain name (customname), applies it as a referral parameter, and then keeps it in session during the entire process. However, there must be thousands of these custom URLs that people are typing in. How is each of these "subdomains" not treated as a separate URL which in turn redirects to the same page (in short, generating tons of links all pointing to the same page, which Google would normally frown upon)? Additionally, the links also appear on the sites themselves as clickable links, so I'm not sure how these are not tracked. Similarly, the "unique" URL is not indexed or cached in any Google search results. How is this capability handled? It does NOT highlight the referral aspect, but a true example of this is visiting http://sfgiants.com, which does a 302 redirect to the much longer proper San Francisco Giants MLB homepage. I am wondering how SFgiants.com is not indexed (assuming that direct shortened link appears on several MLB pages)?
    1. I know these are 302 redirects; I can see this in the site's network flow.
    2. These links do in fact appear on the page itself, because in some areas (for example, at the bottom of the page) it may say: "send this page to a friend! http://name.site.com/", which in turn would again redirect to something like http://www.site.com?id=name so the id value could be stored in session.
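    A rough sketch of the redirect mechanics being described (the hostnames and session key are illustrative assumptions, not taken from any specific site):

        <?php
        // customname.site.com -> remember "customname" as the referral id,
        // then 302 to the single canonical page on www.site.com.
        session_start();

        $host = strtolower($_SERVER['HTTP_HOST']);   // e.g. customname.site.com
        if (preg_match('/^([a-z0-9-]+)\.site\.com$/', $host, $m) && $m[1] !== 'www') {
            $_SESSION['referral'] = $m[1];           // kept for the rest of the visit
            header('HTTP/1.1 302 Found');
            header('Location: http://www.site.com/?id=' . urlencode($m[1]));
            exit;
        }

    Because each custom hostname immediately redirects and serves no content of its own, there is only the one destination page for search engines to crawl.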

    Read the article

  • Why did my Google links disappear after a redesign?

    - by Bill
    I recently did a complete redesign of my site. As soon as Google picked up the changes (I could tell because the excerpt in the search results was brought up to date), I noticed that my traffic slowed by about 30%. I started to investigate, ran a "link:" query on my site and saw only two links there. I know there are many more links to my site, mostly from reputable sources like magazines and large blogs. Why aren't these links showing up anymore? There's nothing even remotely spammy about my site, so I don't see why there would be weirdness going on.

    Read the article

  • Can I use nofollow for offsite links without it affecting my page rank?

    - by Jack
    What I have is a page with almost all offsite links. Each clicked link is forwarded on to the destination. What I would like the search engines to do is to index the text between the anchor tags and not follow the link itself. <a href="somelink">Index This Text Only</a> I've read several articles and they all seem to contradict one another as to when to use nofollow. What's been happening over the past 2 months that the site has been live is that both Google and Bing are crawling the site as well as all the links on the site that are being forwarded to. The search engines are now generating a lot of 404s for images and files that never existed on my site but rather seem to correlate to the sites the links are forwarded to. The search engines don't seem to honor the 302 header on the forwards. I would like to get a definitive answer on the nofollow attribute as it relates to my situation. Can I use nofollow to stop the 404s, and if so, will it affect my page ranking negatively?
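    rel="nofollow" on the anchors is the on-page half of this. As a complementary measure (an assumption on my part, not part of the setup described), the forwarding script itself can tell crawlers not to index or follow it via the X-Robots-Tag response header. A sketch, with go.php and the link table as hypothetical names:

        <?php
        // go.php - forwards an outbound link while asking crawlers to stay away.
        $targets = array(
            'example-listing' => 'http://www.example.com/some/page',
            // ... one entry per offsite link
        );

        $key = isset($_GET['to']) ? $_GET['to'] : '';
        if (!isset($targets[$key])) {
            header('HTTP/1.1 404 Not Found');
            exit;
        }

        // X-Robots-Tag applies noindex/nofollow to this response itself.
        header('X-Robots-Tag: noindex, nofollow');
        header('HTTP/1.1 302 Found');
        header('Location: ' . $targets[$key]);
        exit;

    The anchors on the page would then point at the forwarder, e.g. <a href="/go.php?to=example-listing" rel="nofollow">Index This Text Only</a>, so the anchor text stays on the page for indexing while the outbound hop is discouraged from being crawled.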

    Read the article

  • How can the shared hosting server provide unlimited physical subdomains as opposed to unlimited virtual subdomains?

    - by xport
    Some hosting companies offer unlimited subdomains. There are two kinds of subdomains: physical subdomains and virtual subdomains. A physical subdomain has its own site directory rather than being nested inside the site directory of its parent domain. A virtual subdomain's site directory, on the other hand, is nested inside the site directory of its parent domain. I wonder how a shared hosting company can provide (theoretically) unlimited physical subdomains? In my understanding, each physical subdomain represents a new site (rather than a new application or virtual directory) in IIS. Please correct me if my mental model is wrong.

    Read the article

  • Using a CDN for CMS software (multiple sites)

    - by SmokeyPHP
    I'm currently researching ideas for the media management side of a CMS I'm writing. I was looking at having images served from a CDN, which is fine on a single site, but I want all sites that run the CMS to make use of a CDN (which will most likely be a custom-developed one, rather than a third-party service like S3). My main question is: is a multi-site CDN a good idea? I can't think of a downside, but have probably missed something. Obviously the sites won't share the same folder, as I envisage the requests to be css.cdnsite.com/example.com/style.css or something along those lines. Having multiple sites in the same place will obviously make it easier for us to manage, as well as being cheaper, but then I wonder if it'll be worth it... Long story short: how should the CMS handle user-uploaded media across separate installations?
    1. Just keep a local copy of all assets and serve them from the same site, like in days of yore?
    2. Keep a local copy, force each site to use www., and have CDN subdomains per site?
    3. Or use a single separate CDN for all sites?
    Apologies for the length of this question; I'm not sure if this should be multiple questions or not, as all parts are kind of related and could affect each other.

    Read the article

  • Has anyone had issues with Google Analyticator authenticating?

    - by Marc Benzakein
    I'm using Google Analyticator on a site and am having an issue. I get an error (see below) when I go to authenticate from the settings panel in Analyticator. The structure of this setup is a bit different, and I think that's what is causing it. The website is on a subdomain which is hosted on a different server than the top-level domain, and the Google Analytics account only has the subdomain listed. Is it possible that the reason for the error is that the primary domain either: A. doesn't have an Analytics account, or B. does have an Analytics account but it is not linked to the Analytics account of the subdomain? The error reads:
    "The page you have requested cannot be displayed. Another site was requesting access to your Google Account, but sent a malformed request. Please contact the site that you were trying to use when you received this message to inform them of the error. A detailed error message follows: The site "http://xxxxx.com" has not been registered."

    Read the article

  • 404 code/header for search engines, on removed user content?

    - by mowgli
    I just got an email from a former user of my website. He was complaining that Google still shows the contact page he created on my site, even though he deleted it a month ago. This is the first time in many years that anyone has requested this. I told him that it's almost entirely up to Google what content it wants to keep/show and for how long. If it's deleted on the site, I can't do much other than request a re-visit from the Googlebot. The user page already says something like "Not found. The user has removed the content". TL;DR: But the question is: should I generally send a 404 header (or some other status) for dynamic user content that has been removed from the site? Or could this hurt the site (SEO)?
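    A minimal sketch of what that would look like, assuming a PHP page handler (user_exists() is a hypothetical lookup; the same idea applies to any stack): serve the "removed" message with a real 404, or a 410 Gone for content that was deliberately deleted, rather than with a 200 status that crawlers read as a live page (a "soft 404"):

        <?php
        // Hypothetical handler for a user-created contact page.
        if (!user_exists($_GET['user'])) {
            // 410 signals the content was removed on purpose; a 404 also works.
            header('HTTP/1.1 410 Gone');
            echo 'Not found. The user has removed the content.';
            exit;
        }
        // ...otherwise render the page as usual.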

    Read the article

  • Is multiple domain names and links from same IP causing poor search engine rankings?

    - by John
    I have an ecommerce website which is not doing so well in Google. I am trying to improve this of course, and am looking at some possibilities for why it isn't doing well. The website has four domain names, all of which have been indexed by Google. A few months ago I applied 301 redirects to any requests for two of the domain names so now it is down to two domain names (one is a .net, the other is a .com.au, the others were .net.au and .com). I prefer to use my main domain name (the .com.au), but one of the names has been around for a long time and has more inbound links. According to a PageRank tool, both are PR2. It is a Classic ASP site and up until recently had a lot of querystring parameters. In the last week or so I added URL rewriting so there is now no parameters for most pages. I don't do 301 redirects from the old URLs but instead I add the META canonical tag indicating the preferred new URL. At the same time I redesigned the site and improved title tags, META descriptions, and H tags but it hasn't been long enough yet for Google to index many of these yet. I also looked at what pages Google has indexed and strangely it has some strange pages in the index, there are a lot of pages which are actual keyword searches (more a bunch of random letters than an actual word). What I mean is that it is as if they had typed in something to search for in my search box - there are no links to pages like this and the only way of getting this is to type something in to the search box). So I added a META robots tag with noindex,nofollow anytime that I render pages like this. Years ago I set up a fake price comparison site which lists all my products and links back to my site. It has a different keyword rich domain name but is on the same server and same IP address. It's a completely different layout but does have the same product categories and product descriptions (although I have stripped formatting out of them so they are not identical except in text). I also have a few blog sites which again are on the same server/IP and all have advertising for the website. My questions are: What should I do with the multiple domains, just use one, or continue with two or more? Should I add 301 redirects, not just the META canonical tag? Any idea about Google indexing my search results page, and did I do the right thing with the META robots tag? Is the fake price comparison site likely to be causing problems? Are all the links to the site from other domain names but the same IP address likely to be causing problems? Thanks for any help. Sorry for so many questions in one.

    Read the article

  • How do I impress employers with my resume?

    - by acidzombie24
    I built an entire website from scratch in 10 days, and it looks and feels professional, with the site being unique. The site has features like logging in, sending activation emails, tag/content search (Lucene.Net), syntax highlighting (prettify), a diff (one of the JS diffs), markup for comments, and autocomplete in a textbox (remember, 10 days). I wrote that I have 5+ years of C# experience (I could lie and say more, but smart employers will know it's only 8 years old and 1.1 is very different from what we use now). I had employers REPEATEDLY say they are looking for someone who has more C# experience... wtf. Maybe they don't read my CV, maybe they don't believe it, or maybe they ignore me because I am not yet a graduate. I laughed when I first read Steve Yegge's The Five Essential Phone Screen Questions, as I knew all of that (although I have still never used a graph data structure, nor do I know much about it). I'm pretty sure that, competency-wise, I can do the job. I am also positive no one noticed I have markup, a diff, autocomplete, or email activation/forgot password (I offer a test user account). So maybe my site/example work isn't impressive because they don't realize what is in it. In short, I don't think they read my CV or notice my site. How do I impress employers? PS: The problem is I don't get to the interview. I had one and ruined it by speaking too technically to the PM because I was nervous. The other 25+ jobs either didn't contact me or were kind enough to send a rejection email.

    Read the article
