Search Results

Search found 24403 results on 977 pages for 'matt case'.


  • One-to-many problem when implementing 301 redirects after changing URLs

    - by user16136
    I have a problem. I had an old dynamic URL scheme which I have now split into multiple static URLs, e.g. www.mydomain.com/product.php?type=1&id=2, www.mydomain.com/product.php?type=2&id=3, www.mydomain.com/product.php?type=2&id=4, etc., which I have changed to URLs like www.mydomain.com/electronics/radio, www.mydomain.com/electronics/television, www.mydomain.com/mobile/smartphone, etc. Google has already indexed the dynamic URLs and search results still show the old URLs, but I want searches to point to the new ones. I have kept the old URLs active, so both versions work. How can I set up a 301 redirect in this case? I run IIS, and it only allows a page to be redirected to one URL. Should I deactivate the old dynamic URLs? In that case I would lose all the previous SEO rankings.
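
    One approach, not from the original post but a common pattern: IIS's URL Rewrite module can key redirects off the query string, so one rule per old type/id pair (or a single rewrite map) fans the one dynamic page out to many static URLs while keeping the 301s. A minimal web.config sketch, assuming the URL Rewrite module is installed and that the type/id pairs shown are the real ones:

        <configuration>
          <system.webServer>
            <rewrite>
              <rules>
                <!-- 301 product.php?type=1&id=2 to its new static URL -->
                <rule name="Redirect radio" stopProcessing="true">
                  <match url="^product\.php$" />
                  <conditions>
                    <add input="{QUERY_STRING}" pattern="^type=1&amp;id=2$" />
                  </conditions>
                  <action type="Redirect" url="/electronics/radio"
                          redirectType="Permanent" appendQueryString="false" />
                </rule>
                <!-- ...one rule (or rewriteMap entry) per old type/id pair... -->
              </rules>
            </rewrite>
          </system.webServer>
        </configuration>

    With the old URLs 301-redirecting like this there is no need to deactivate them, and the ranking signals from existing links are passed on rather than lost.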

    Read the article

  • What is your preferred font for working with code and data?

    - by Gary
    The features I would look for in a 'programmers' font are:

    - Monospaced (maybe less important for code, but more important for data)
    - Distinguishable characters. Often I (upper-case i), 1 (one) and l (lower-case L) can be confusing, as can O (upper-case o) and 0 (zero). I'd be interested in other character issues, especially in accented or extended character sets.
    - Free
    - Windows / Linux / OSX
    - Legible on screen and in printouts at smaller sizes

    I've community wiki'd this. I'm really looking for a list of fonts that qualify; from that list, people can pick what looks good to them.

    Read the article

  • How to improve the quality of software

    - by hariharan
    Last week in our organization, we discussed different ways of improving the quality of software (both technical and functional topics). Since I am a technical person, I suggested the following ideas:

    - Use-case-based detailed design documents – both the technical and the functional specification should be organized according to the use-case requirements.
    - Design patterns – help developers adopt a common approach irrespective of technology.
    - Analyzing and adopting new technologies – helps improve the performance as well as the security of the application.

    As I am not a very experienced technical candidate, I am unable to come up with other solutions. If you have any suggestions or topics related to this (including testing and functional requirements), please post your valuable comments.

    Read the article

  • How is byte loading/storing implemented by the CPU?

    - by AlexDan
    I know that on a 32-bit machine the CPU reads from memory 32 bits at a time. Since the registers in this case are 32 bits in size too, I can understand how that works. What I don't understand is how the CPU implements 1-byte load instructions. Does it load the whole word containing the byte into the register and then perform some kind of byte shifting, or can the CPU load a single byte directly? In the latter case, when does the byte masking happen: after the byte is loaded into the register, or while the byte is being sent over the data bus? P.S. The CPU I'm using is MIPS, and the instructions I'm talking about are lb and lbu.
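
    For illustration (my addition, not from the original question): whatever the hardware does on the bus, the architectural difference between lb and lbu is the extension step, and it is visible even from C, where a load through a signed byte pointer corresponds to lb (sign-extend) and a load through an unsigned byte pointer to lbu (zero-extend). A small sketch:

        #include <stdio.h>
        #include <stdint.h>

        int main(void) {
            uint32_t word = 0x11FF2233;       /* one 32-bit word in memory */
            int8_t  *sp = (int8_t  *)&word;   /* signed view   -> MIPS lb  */
            uint8_t *up = (uint8_t *)&word;   /* unsigned view -> MIPS lbu */

            /* On a little-endian MIPS, byte 2 of this word is 0xFF.
               lb sign-extends it to 0xFFFFFFFF (-1) in the register;
               lbu zero-extends it to 0x000000FF (255).                */
            int32_t  s = sp[2];
            uint32_t u = up[2];
            printf("lb-style load:  %d\n", s);   /* prints -1  */
            printf("lbu-style load: %u\n", u);   /* prints 255 */
            return 0;
        }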

    Read the article

  • Small hiccup with VMware Player after upgrading to Ubuntu 12.04

    The upgrade process

    Finally, it was time to upgrade to a new LTS version of Ubuntu - 12.04, aka Precise Pangolin. I scheduled the weekend for this task and, despite the nickname of Mauritius (Cyber Island), it took roughly 6 hours to download nearly 2,400 packages. No problem in general, as I have spare machines to work on, and it was the weekend anyway. All went very smoothly, and only a few packages required manual attention due to local modifications in their configuration. With the new kernel 3.2.0-24 it was necessary to reboot the system, and compared to the last upgrade, I got my graphical login as expected.

    Compilation of VMware Player 4.x fails

    A quick test of the installed applications - Firefox, Thunderbird, Chromium, Skype, CrossOver, etc. - reveals that everything is fine in general. Firing up VMware Player displays the familiar kernel module dialog that asks to compile the modules for the newly booted kernel. Usually this isn't a big issue, but this time I was confronted with the situation that vmnet didn't compile as expected ("Failed to compile module vmnet"). Luckily, this issue is already well known (even though usually with "Failed to compile module vmmon" as the reported reason), and it was very easy and quick to find the solution. In the VMware Communities there are several forum threads on this topic, and VMware provides the necessary patch file for Workstation 8.0.2 and Player 4.0.2. In case you are still on Workstation 7.x or Player 3.x, there is another patch file available. After the download, extract the file like so:

        tar -xzvf vmware802fixlinux320.tar.gz

    and run the patch script as super-user:

        sudo ./patch-modules_3.2.0.sh

    This will alter the existing installation and source files of VMware Player on your machine. As a last step, which isn't described in many other resources, you have to restart the vmware service - or, for the faint-hearted, just reboot your system:

        sudo service vmware restart

    This will load the newly created kernel modules, and after that VMware Player will start as usual.

    Summary

    Upgrading any derivative of Ubuntu, in my case Xubuntu, is quickly and easily done, but it might hold some surprises from time to time. Nonetheless, it is absolutely worth going for. Currently, this patch for VMware is the only obstacle I have had to face so far, and my system feels and looks better than before. Happy upgrade!

    Resources

    I used the following links, based on Google search results:

    http://communities.vmware.com/message/1902218#1902218
    http://weltall.heliohost.org/wordpress/2012/01/26/vmware-workstation-8-0-2-player-4-0-2-fix-for-linux-kernel-3-2-and-3-3/

    Update on VMware Player 4.0.3

    Please continue reading my follow-up article in case you have upgraded to either VMware Workstation 8.0.3 or VMware Player 4.0.3.

    Read the article

  • WordPress: very important security update, described as critical by the creator of the CMS, who urges applying it immediately

    WordPress: very important security update. Described as critical by the creator of the CMS, who urges applying it immediately. Update of 30/12/10. WordPress, the PHP blog engine that is increasingly turning into a full-featured CMS, needs to be updated quickly. That, at least, is the message from its development team, which has just released, in the middle of the holiday season, a security update described as "important". The period - very quiet, with many web administrators on vacation - is hardly favourable for this kind of patch. Matt Mullenweg, the creator of WordPress, is well aware of...

    Read the article

  • Obtaining the correct client IP address when a physical load balancer and a web server configured with the proxy plug-in are between the client and WebLogic

    - by adejuanc
    Some load balancers, like BIG-IP, have built-in interoperability with WebLogic clusters: they know that WebLogic understands a header named WL-Proxy-Client-IP to identify the original client IP. The problem arises when a web server configured with the WebLogic proxy plug-in sits between the load balancer and the back-end WebLogic servers: WL-Proxy-Client-IP is not designed to pass through the web server proxy plug-in. To prevent IP spoofing, the plug-in will not use a WL-Proxy-Client-IP header that came in from the previous hop (in this case the physical load balancer, but it could be anything), so it won't pass on what the load balancer set. Under this architecture the header is therefore useless.

    To get the client IP from WebLogic, configure the extended log format and create a custom field that reads the header containing the client's IP. On WLS versions prior to 10.3.3, use these instructions: you can create user-defined fields for inclusion in an HTTP access log file that uses the extended log format. To create a custom field, you identify the field in the ELF log file using the Fields directive and then create a matching Java class that generates the desired output. You can create a separate Java class for each field, or one Java class can output multiple fields. The following example outputs the X-Forwarded-For header into a custom field called MyOriginalClientIPField:

        import weblogic.servlet.logging.CustomELFLogger;
        import weblogic.servlet.logging.FormatStringBuffer;
        import weblogic.servlet.logging.HttpAccountingInfo;

        /* Outputs the X-Forwarded-For header into a custom field
           called MyOriginalClientIPField. */
        public class MyOriginalClientIPField implements CustomELFLogger {
            public void logField(HttpAccountingInfo metrics, FormatStringBuffer buff) {
                buff.appendValueOrDash(metrics.getHeader("X-Forwarded-For"));
            }
        }

    In this case we are using X-Forwarded-For, but it can be changed to whatever header contains the data you need. Compile the class, jar it, and prepend it to the classpath:

    1. Navigate to <WLS_HOME>/user_projects/domains/<SOME_DOMAIN>/bin
    2. Set up the environment by executing: $ . ./setDomainEnv.sh (this puts weblogic.jar on the classpath, so the libraries under the weblogic.* packages can be used)
    3. Copy the code above into a file named MyOriginalClientIPField.java
    4. Compile the class: $ javac MyOriginalClientIPField.java
    5. Package the compiled class into a jar by executing: $ jar cvf0 MyOriginalClientIPField.jar MyOriginalClientIPField.class The expected output is: added manifest adding: MyOriginalClientIPField.class(in = 711) (out= 711)(stored 0%)
    6. This produces a file called MyOriginalClientIPField.jar

    This way you can get the real client IP when the request passes through a load balancer and a web server before reaching WLS. Since 10.3.3 it is possible to configure a specific header that WLS will check when getRemoteAddr is called; that is set on the WebServer MBean. In this case, set it to the X-Forwarded-For header coming from the load balancer.

    Read the article

  • Restoring a hard drive

    - by Indian
    I had a laptop with an AMD X2 display card. Suddenly this laptop went kaput, but incidentally the hard drive was safe - I had checked it using another machine. I then put the hard drive into an external USB HDD case. One day this case fell down, and since then I have not been able to restore the contents of the hard drive (rather, I could not find tools to recover them). The drive had three partitions: (a) NTFS, (b) Linux (either ext4 or ReiserFS; I do not remember which one I formatted it with), and (c) swap. How do I recover my contents?

    Read the article

  • Creating Rectangle-based buttons with OnClick events

    - by Djentleman
    As the title implies, I want a Button class with an OnClick event handler that fires the attached handlers when the button is clicked. This is as far as I've made it:

        public class Button
        {
            public event EventHandler OnClick;

            public Rectangle Rec { get; set; }
            public string Text { get; set; }

            public Button(Rectangle rec, string text)
            {
                this.Rec = rec;
                this.Text = text;
            }
        }

    I have no clue what I'm doing with regard to events. I know how to use them, but creating them myself is another matter entirely. I've also made buttons without using events that work on a case-by-case basis. So basically, I want to be able to attach methods to the OnClick EventHandler that will fire when the Button is clicked (i.e., when the mouse intersects Rec and the left mouse button is clicked).
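
    A possible completion, as a sketch only (mine, not from the original post), assuming the XNA-style Rectangle and MouseState types: give the class a method that checks the mouse against Rec each frame and invokes the delegate if anyone has subscribed:

        // Sketch; assumes Microsoft.Xna.Framework input types.
        public void Update(MouseState current, MouseState previous)
        {
            bool clicked = current.LeftButton == ButtonState.Pressed
                        && previous.LeftButton == ButtonState.Released
                        && Rec.Contains(current.X, current.Y);

            if (clicked && OnClick != null)
                OnClick(this, EventArgs.Empty);   // fires every attached handler
        }

    Subscribing then works as usual - button.OnClick += (sender, e) => StartGame(); - with Update called once per frame from the game loop.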

    Read the article

  • Event-driven behavior tree: deterministic traversal order with parallel nodes

    - by Heisenbug
    I've studied several articles and listened to some talks about behavior trees (mostly the resources available on AIGameDev by Alex J. Champandard). I'm particularly interested in event-driven behavior trees, but I still have some doubts about how to implement them correctly using a scheduler. Just a quick recap:

    Standard behavior tree
    - Each execution tick the tree is traversed from the root in depth-first order.
    - The execution order is implicitly expressed by the tree structure. So in the case of behaviors parented to a parallel node, even if both children are executed during the same traversal, the first leaf is always evaluated first.

    Event-driven behavior tree
    - During the first traversal the nodes (tasks) are enqueued using a scheduler, which is responsible for updating only the running ones every update.
    - The first traversal implicitly produces a depth-first-ordered queue in the scheduler.
    - Non-leaf nodes stay suspended most of the time. When a leaf node terminates (either with success or fail status), its parent (observer) is woken up, allowing the tree traversal to continue, and new tasks are enqueued in the scheduler.
    - Without parallel nodes in the tree there will be at most 1 task running in the scheduler.
    - Without parallel nodes, the tasks in the queue (excluding dynamic-priority implementations) will always be ordered depth-first (is this right?).

    Now, some requirements I think need to be guaranteed by a correct implementation are:
    - The result of the traversal should be independent of which implementation strategy is used.
    - The traversal result must be deterministic.

    I'm struggling to guarantee both in the case of parallel nodes. Here's an example:

    Parallel_1
    -->Sequence_1
    ---->leaf_A
    ---->leaf_B
    -->leaf_C

    Assuming a FIFO policy in the scheduler, before leaf_A terminates the tasks in the scheduler are: P1 (suspended), S1 (suspended), leaf_A (running), leaf_C (running). When leaf_A terminates, leaf_B is scheduled (at the end of the queue), so the queue becomes: P1 (suspended), S1 (suspended), leaf_C (running), leaf_B (running). In this case leaf_B will be executed after leaf_C at every update, whereas with a non-event-driven traversal from the root node, leaf_B would always be evaluated before leaf_C. So I have a couple of questions: have I understood correctly how event-driven BTs work? How can I guarantee that depth-first order is respected with such an implementation? Is this a common issue, or am I missing something?

    Read the article

  • Programmer logbook application?

    - by jsoldi
    I've just released my application to the public, and I'm working on an updated version, but I really think I should keep track of ALL the code changes. In case some functionality suddenly starts failing, a history of all the changes I made would make it a lot easier to figure out where I messed up - or whether the problem was already there. The ideal would be a super-fast computer with a huge hard drive and an application that automatically saves a backup of the whole project every time I change a line of code, with a file-comparison tool that would show me every difference between any two backed-up projects; but that's not really possible for now. So, do you know any application that makes it easy for a programmer to keep track of the changes made to the source code?
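
    What the post describes is essentially what a version control system does; as one concrete illustration (my addition, not from the original question), a minimal Git session gives exactly this snapshot-and-compare workflow without storing full backups of the project:

        git init                      # turn the project folder into a repository
        git add .
        git commit -m "Version 1.0 as released"
        # ...edit some code...
        git commit -am "Fix crash in report export"
        git diff HEAD~1 HEAD          # every difference between two snapshots
        git log --oneline             # the full history of changes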

    Read the article

  • What do you think about gems and eggs? Alternatives?

    - by Juanlu001
    I've recently read some criticism (see 1, 2, 3) of the packaging and distribution systems of two popular programming languages: Ruby gems and Python eggs. The most important argument against them is that they replace the system package manager (in case there is one, as in every Linux distribution), which makes eggs and gems difficult to track, their code difficult to patch, and so on. Are eggs and gems actually the right approach? If not, are there any alternatives for distributing Python or Ruby modules? Should developers focus on taking advantage of package manager (apt-get, pacman, ...) capabilities?

    Read the article

  • Default save directory for gnome-screenshot?

    - by trent
    Are there any configuration options for specifying the default save location for gnome-screenshot, or is this hard-coded into the source? It used to be ~/Desktop, which seems to have changed to ~/Pictures (in 12.04). The only possible solution I've seen is about setting the default name (as it now includes time-stamp information instead of simply Screenshot#), but that solution doesn't really seem ideal to me. Also, this post suggested that the last save location is remembered the next time you take a screenshot, but in my experience this doesn't seem to be the case. In any case, following on from that, the entry in gconf-editor doesn't even seem to accurately reflect the last location, so it is more than likely an entry related to an older version of gnome-screenshot.
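
    One thing worth checking (my suggestion, not from the original post, and the key name is an assumption about the 12.04 build): gnome-screenshot 3.x reads its settings from GSettings rather than GConf, so if the schema on your system exposes an auto-save directory key, it can be changed from a terminal:

        # Verify which keys your build actually exposes first:
        gsettings list-keys org.gnome.gnome-screenshot
        # If auto-save-directory is listed, point it elsewhere:
        gsettings set org.gnome.gnome-screenshot auto-save-directory "file:///home/$USER/Desktop"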

    Read the article

  • Recycle Bottles for DIY Projects [Video]

    - by Jason Fitzpatrick
    Rather than tossing bottles in the recycle bin, with this simple hack and a little elbow grease you can turn them into new containers like drinking cups and vases. In the above video, Matt Richardson from Make magazine shows us how to use an inexpensive bottle-cutting jig to turn bottles into new things. With a little polishing, you can drink more than beer out of your favorite beer bottles. Watch the video above to see how, and hit up the link below for more information.

    How-To: Bottle Cutting [Make]

    Read the article

  • BDD in a .NET environment

    - by Nick
    I've seen BDD in action (in this case using SpecFlow and Selenium in a .NET environment) on a small test project. I was very impressed - mainly because the language used to specify the acceptance tests meant the team engaged with the product owner much more easily. I'm now keen to bring this into my current organisation. However, I'm being asked 'who else uses this?' and 'show me some case studies'. Unfortunately I cannot find any 'big names' (or even 'small names', for that matter!) of companies that are actively using BDD. I have two questions really: Is BDD adopted by companies out there? Who are they? How can BDD be implemented in an agile .NET environment, and are there any significant drawbacks to doing it?
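
    For readers who haven't seen it, the specification language in question looks like this - a hypothetical SpecFlow feature file (my illustration, not from the original post), written in Gherkin so the product owner can read and amend it, with each step bound to a .NET method:

        Feature: Account withdrawal
          Scenario: Withdrawing within the available balance
            Given an account with a balance of 100 euros
            When the user withdraws 40 euros
            Then the account balance should be 60 euros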

    Read the article

  • Error installing HTK

    - by Alex Madill
    I am having an issue when I try to build HTK. ./configure worked perfectly fine for me, but when I run make:

        zodiac@Zodiac:~/Downloads/htk$ make all
        (cd HTKTools && make all) \
        || case "" in *k*) fail=yes;; *) exit 1;; esac;
        make[1]: Entering directory `/home/zodiac/Downloads/htk/HTKTools'
        make[1]: Nothing to be done for `all'.
        make[1]: Leaving directory `/home/zodiac/Downloads/htk/HTKTools'
        (cd HLMTools && make all) \
        || case "" in *k*) fail=yes;; *) exit 1;; esac;
        make[1]: Entering directory `/home/zodiac/Downloads/htk/HLMTools'
        make[1]: Nothing to be done for `all'.
        make[1]: Leaving directory `/home/zodiac/Downloads/htk/HLMTools'

    Thanks in advance

    Read the article

  • Language design: are languages like Python and CoffeeScript really more comprehensible?

    - by kittensatplay
    the "Verbally Readable !== Quicker Comprehension" arguement on http://ryanflorence.com/2011/case-against-coffeescript/ is really potent and interesting. i and im sure other would be very interested in evidence arguing against this. there's clear evidence for this and i believe it. ppl naturally think in images, not words, so we should be designing languages dissimilar to human language like english, french, whatever. being "readable" is quicker comprehension. most articles on wikipedia are not readable as they are long, boring, dry, sluggish, very very wordy, and because wikipedia documents a ton of info, is not especially helpful when compared to much more helpful sites with more practical, useful, and relevant info. but languages like phyton and coffescript are "verbally readable" in that they are closer to the english language syntax, and programming firstly and mainly in python, im not so sure this is really a good thing. the second interesting argument is that coffeescript is an intermediator so thereby another step between to ends, which may increase chances of bugs. while coffeescript has other practical benefits, this question is focused specifically on evidence showing support for the counter-case of language "readability"

    Read the article

  • Implementing a Custom Coherence PartitionAssignmentStrategy

    - by jpurdy
    A recent A-Team engagement required the development of a custom PartitionAssignmentStrategy (PAS). By way of background, a PAS is an implementation of a Java interface that controls how a Coherence partitioned cache service assigns partitions (primary and backup copies) across the available set of storage-enabled members. While seemingly straightforward, this is actually a very difficult problem to solve. Traditionally, Coherence used a distributed algorithm spread across the cache servers (and as of Coherence 3.7, this is still the default implementation). With the introduction of the PAS interface, the model of operation was changed so that the logic runs solely in the cache service senior member. Obviously, this makes the development of a custom PAS vastly less complex, and in practice does not introduce a significant single point of failure/bottleneck. Note that Coherence ships with a default PAS implementation, but it is not used by default. Further, custom PAS implementations are uncommon (this engagement was the first custom implementation that we know of). The particular implementation mentioned above also faced challenges related to managing multiple backup copies, but that won't be discussed here. A few challenges arose during design and implementation:

    - Naive algorithms had an unreasonable upper bound of computational cost.
    - There was significant complexity associated with configurations where the member count varied significantly between physical machines.
    - Most of the complexity of a PAS is related to rebalancing, not initial assignment (which is usually fairly simple).

    A custom PAS may need to solve several problems simultaneously, such as:

    - Ensuring that each member has a similar number of primary and backup partitions (e.g. each member has the same number of primary and backup partitions)
    - Ensuring that each member carries similar responsibility (e.g. the most heavily loaded member has no more than one partition more than the least loaded)
    - Ensuring that each partition is on the same member as a corresponding local resource (e.g. for applications that use partitioning across message queues, to ensure that each partition is collocated with its corresponding message queue)
    - Ensuring that a given member holds no more than a given number of partitions (e.g. no member has more than 10 partitions)
    - Ensuring that backups are placed far enough away from the primaries (e.g. on a different physical machine or a different blade enclosure)
    - Achieving the above goals while ensuring that partition movement is minimized.

    These objectives can be even more complicated when the topology of the cluster is irregular. For example, if multiple cluster members may exist on each physical machine, then clearly the possibility exists that at certain points (e.g. following a member failure), the number of members on each machine may vary, in certain cases significantly so. Consider the case where there are three physical machines, with 3, 3 and 9 members respectively. This introduces complexity since the backups for the 9 members on the largest machine must be spread across the other 6 members (to ensure placement on different physical machines), preventing an even distribution. For any given problem like this, there are usually reasonable compromises available, but the key point is that objectives may conflict under extreme (but not at all unlikely) circumstances.
    The most obvious general-purpose partition assignment algorithm (possibly the only general-purpose one) is to define a scoring function for a given mapping of partitions to members, and then apply that function to each possible permutation, selecting the most optimal one. This would result in N! (factorial) evaluations of the scoring function. This is clearly impractical for all but the smallest values of N (e.g. a partition count in the single digits). It's difficult to prove that more efficient general-purpose algorithms don't exist, but the key takeaway is that algorithms will tend either to have exorbitant worst-case performance or to fail to find optimal solutions (or both) - it is very important to be able to show that worst-case performance is acceptable. This quickly leads to the conclusion that the problem must be further constrained, perhaps by limiting functionality or by using domain-specific optimizations. Unfortunately, it can be very difficult to design these more focused algorithms. In the specific case mentioned, we constrained the solution space to very small clusters (in terms of machine count) with small partition counts, supported exactly two backup copies, and accepted the fact that partition movement could potentially be significant (preferring to solve that issue through brute force). We then used the out-of-the-box PAS implementation as a fallback, delegating to it for configurations that were not supported by our algorithm. Our experience was that the PAS interface is quite usable, but there are intrinsic challenges to designing PAS implementations that should be very carefully evaluated before committing to that approach.
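
    To make the cost argument concrete, here is a toy sketch (mine, not from the engagement, and deliberately independent of the actual Coherence PAS interface): exhaustively score every candidate partition-to-member mapping and keep the best one. Even this stripped-down variant (members^partitions candidates, where the permutation-based framing above is N!) is intractable beyond toy sizes:

        import java.util.*;

        // Toy illustration of the naive "score everything" approach.
        // Real PAS implementations must avoid exactly this blowup.
        public class BruteForceAssignment {

            // Hypothetical scoring function: lower is better. Here we just
            // penalize imbalance in partition count per member.
            static int score(int[] assignment, int memberCount) {
                int[] load = new int[memberCount];
                for (int member : assignment) load[member]++;
                int min = Integer.MAX_VALUE, max = Integer.MIN_VALUE;
                for (int l : load) { min = Math.min(min, l); max = Math.max(max, l); }
                return max - min;
            }

            static int[] best;
            static int bestScore = Integer.MAX_VALUE;

            // Enumerate every assignment of partitions onto members.
            static void search(int[] assignment, int index, int memberCount) {
                if (index == assignment.length) {
                    int s = score(assignment, memberCount);
                    if (s < bestScore) { bestScore = s; best = assignment.clone(); }
                    return;
                }
                for (int m = 0; m < memberCount; m++) {
                    assignment[index] = m;
                    search(assignment, index + 1, memberCount);
                }
            }

            public static void main(String[] args) {
                int partitions = 8, members = 3;   // tiny on purpose
                search(new int[partitions], 0, members);
                System.out.println("best score " + bestScore + ": " + Arrays.toString(best));
            }
        }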

    Read the article

  • Google penalizes the ranking of sites stuffed with ads: by making the content hard to reach, they are treated as spam

    Google penalizes the ranking of sites stuffed with ads. By making the content hard to reach, they are treated as spam. Matt Cutts, head of Google's anti-spam team, came to the PubCon conference with good news and bad news. The search engine now penalizes the ranking of sites so stuffed with ads that reaching the relevant content of their pages becomes difficult. What counts, Cutts explained in his keynote, is "how much content is above the fold [...] If you have ads obscuring your content, you might want to reconsider", hinting that sites that complicate...

    Read the article

  • Is premature optimization really the root of all evil?

    - by Craig Day
    A colleague of mine today committed a class called ThreadLocalFormat, which basically moved instances of Java Format classes into a thread local, since they are not thread-safe and "relatively expensive" to create. I wrote a quick test and calculated that I could create 200,000 instances a second, then asked him whether he was creating that many, to which he answered "nowhere near that many". He's a great programmer, and everyone on the team is highly skilled, so we have no problem understanding the resulting code, but it was clearly a case of optimizing where there is no real need. He backed the code out at my request. What do you think? Is this a case of "premature optimization", and how bad is it really?
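
    For context, the pattern in question looks roughly like this (a minimal sketch of the idea, not the colleague's actual class): SimpleDateFormat is not thread-safe, so each thread caches and reuses its own instance instead of creating one per call:

        import java.text.DateFormat;
        import java.text.SimpleDateFormat;

        // Sketch of the ThreadLocalFormat idea: one cached SimpleDateFormat
        // per thread, since the class is not thread-safe.
        public final class ThreadLocalFormat {
            private static final ThreadLocal<DateFormat> FORMAT =
                new ThreadLocal<DateFormat>() {
                    @Override
                    protected DateFormat initialValue() {
                        return new SimpleDateFormat("yyyy-MM-dd");
                    }
                };

            public static DateFormat get() {
                return FORMAT.get();
            }
        }

    Callers then write ThreadLocalFormat.get().format(date), trading a little per-thread memory for skipping the construction cost on every call.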

    Read the article

  • What standard superseded 830-1998?

    - by user1564158
    I have been looking into how to document software projects more formally, and I have learned about IEEE 830-1998: Recommended Practice for Software Requirements Specifications. However, as you can see from that link, it has been superseded. I know that 830-1998, and probably even 830-1993, are likely just fine to use. Even so, I would like to know which standard superseded it. In this case it may not matter much, but if standards covering more technical matters get superseded, it would be good if the superseding standard were linked somewhere (when it is not another one in the same line - 830, in this case). It is worth mentioning that:

    - The most recent standard when searching for "Software Requirements Specifications" or "Software Requirements" on the IEEE Standards Association website is 830-1993;
    - The 2004 (current) version of SWEBOK references 830-1993 (paragraph 2.5);
    - The document's Wikipedia article doesn't mention that the standard was superseded.

    TL;DR: How do you find which standard superseded another, and which one took 830-1998's place?

    Read the article

  • Is adding the license type in the header enough to say "my code is licensed"? (open source)

    - by silverfox
    I do not know if this is the correct Stack Exchange site to ask this on. (Note: if a moderator can move it to the appropriate SE site, please do.) I have read about licenses on various sites. I just put the license type in the header of the file (in my case a JavaScript file, open source):

        /*
         * "codeName" "version"
         * http://officialsite.com/
         *
         * Copyright 2012 "codeName"
         * Released under the "LICENSE NAME" license
         * http://officialsite.com/LICENSE NAME
         */
        javascript code ...

    In the same folder I leave a copy of the license, so the listing of the folder looks like this:

    * codeName.js
    * LICENSE

    The LICENSE file contains the license my code uses. What nobody says is whether this is enough to say that my code is licensed (in the case of open source), or whether something more is required. Sorry for the bad English. Thanks.

    Read the article

  • Sun Java and GPL v2 licence issue for linked applications

    - by user255607
    I have recently noticed that the Sun Java code has been released as GPL v2 code (see the Google Code repo). Does that mean that all applications written in Java and linked to Sun APIs (beans, and so on) are also GPL? If we strictly follow the GPL then this should be the case (libraries should be LGPL, not GPL, if we want to build proprietary software on top of them). Is there another commercial licence which can avoid such an issue? I cannot believe this is the case; there must be an explanation for this. Regards, Apple92

    Read the article

  • Ecommerce item deleted by user: 301 redirect to home page or 404 Not Found?

    - by Marco Demaio
    I know this question is somewhat similar to this one, where they recommend using 404, but after reading this other one, where they suggest using 301 when changing site URLs (in that specific case due to a redesign/refactoring), I am a bit confused, and I hope someone can clarify this specific example:

    Let's say I have an ecommerce site, and the final user inserted some interesting items into the site; the ecommerce web app created the item pages at the URLs http://...?id=20, http://...?id=30, etc. Now let's say some of these items attracted many external links from other sites, because people found them very interesting and linked to them. After some years the final user deletes those items, so obviously the pages/URLs http://...?id=20, http://...?id=30, etc. no longer exist, but many pages on the web still link to them. What should the ecommerce site do now - just show a 404 page for those items? But wouldn't this lose all the Google PR passed by the external links to the item pages? So isn't it better to use a 301 redirect to the HOME PAGE, which at least passes the PR to the HOME PAGE? Thanks.

    EDIT: Well, according to the answers, the best thing to do so far is a 404/410. In order to make this question more complete, I would like to talk about a special case, just to make sure I understood properly. Let's say the user creates those items again (the ones he previously deleted at point 4); maybe he changes their names and descriptions a bit, but they are basically the same items. The web app has no way of knowing these newly added items are the old items, so it obviously creates them as new items with new URLs http://...?id=100, http://...?id=101. Does it make sense at this point to 301-redirect the old URLs to the new ones?

    MORE EDIT (it would be VERY IMPORTANT TO UNDERSTAND): Well, according to the clever answers received so far, it seems that in the special case explained in my last EDIT I could use 301, since it is not deceptive - the new page is basically a replacement for the old page in terms of content. This is done to keep the PR passed from external links and also for a better user experience. But besides the user experience, which is debatable (*1), in order to preserve PR from external broken links, why not just always use 301? In my understanding Google dislikes duplicated content, but are we sure that a 301 redirect to the HOME PAGE is seen as duplicated content by Google? Google itself suggests 301-redirecting index.html to the document root, so if they considered 301 to be duplicated content, wouldn't that be duplicated content too? Why do they suggest it? Let me provoke you: why not just add a 301 to the HOME PAGE for every not-found page?

    (*1) As a user, when I follow a broken URL from some external link to a website's page, I would stick around more on that website if I were redirected to the HOME PAGE rather than seeing a 404 page, where I would think the website does not even exist anymore and might not even try to visit its HOME PAGE.

    Read the article

  • Consistency vs. Usability?

    - by dsimcha
    When designing an API, consistency often aids usability. Sometimes, however, the two conflict: an extra API feature can be added to streamline a common case. There seems to be somewhat of a divide over what to do here. Some designs (the Java standard library comes to mind) favor consistency, even if it makes common cases more verbose. Others (the Python standard library comes to mind) favor usability, even if it means treating the common case as "special" to make it easier. What is your opinion on how consistency and usability should be balanced?

    Read the article
