Search Results

Search found 3987 results on 160 pages for 'captain obvious'.

Page 59/160 | < Previous Page | 55 56 57 58 59 60 61 62 63 64 65 66  | Next Page >

  • Deferred rendering order?

    - by Nick Wiggill
    There are some effects for which I must do multi-pass rendering. I've got the basics set up (FBO rendering etc.), but I'm trying to get my head around the most suitable setup. Here's what I'm thinking... The framebuffer objects: FBO 1 has a color attachment and a depth attachment. FBO 2 has a color attachment. The render passes: Render g-buffer: normals and depth (used by outline & DoF blur shaders); output to FBO no. 1. Render solid geometry, bold outlines (as in toon shader), and fog; output to FBO no. 2. (can all render via a single fragment shader -- I think.) (optional) DoF blur the scene; output to the default frame buffer OR ELSE render FBO2 directly to default frame buffer. (optional) Mesh wireframes; composite over what's already in the default framebuffer. Does this order seem viable? Any obvious mistakes?
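
    For reference, a minimal sketch of the "FBO 1" setup described above (one colour attachment plus one depth attachment), written here with OpenTK's desktop GL bindings purely for illustration; the helper name, texture formats, and sizes are assumptions rather than anything from the question (a real normals g-buffer would likely want a higher-precision colour format such as RGBA16F):

      using System;
      using OpenTK.Graphics.OpenGL;

      static class GBufferSetup
      {
          // Builds "FBO 1" from the question: one colour texture plus one depth texture.
          public static int CreateFbo(int width, int height, out int colorTex, out int depthTex)
          {
              GL.GenTextures(1, out colorTex);
              GL.BindTexture(TextureTarget.Texture2D, colorTex);
              GL.TexImage2D(TextureTarget.Texture2D, 0, PixelInternalFormat.Rgba8,
                            width, height, 0, PixelFormat.Rgba, PixelType.UnsignedByte, IntPtr.Zero);
              GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMinFilter, (int)TextureMinFilter.Nearest);
              GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMagFilter, (int)TextureMagFilter.Nearest);

              GL.GenTextures(1, out depthTex);
              GL.BindTexture(TextureTarget.Texture2D, depthTex);
              GL.TexImage2D(TextureTarget.Texture2D, 0, PixelInternalFormat.DepthComponent24,
                            width, height, 0, PixelFormat.DepthComponent, PixelType.Float, IntPtr.Zero);
              GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMinFilter, (int)TextureMinFilter.Nearest);
              GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMagFilter, (int)TextureMagFilter.Nearest);

              GL.GenFramebuffers(1, out int fbo);
              GL.BindFramebuffer(FramebufferTarget.Framebuffer, fbo);
              GL.FramebufferTexture2D(FramebufferTarget.Framebuffer, FramebufferAttachment.ColorAttachment0,
                                      TextureTarget.Texture2D, colorTex, 0);
              GL.FramebufferTexture2D(FramebufferTarget.Framebuffer, FramebufferAttachment.DepthAttachment,
                                      TextureTarget.Texture2D, depthTex, 0);

              if (GL.CheckFramebufferStatus(FramebufferTarget.Framebuffer) != FramebufferErrorCode.FramebufferComplete)
                  throw new InvalidOperationException("FBO 1 is incomplete");

              GL.BindFramebuffer(FramebufferTarget.Framebuffer, 0);   // back to the default framebuffer
              return fbo;
          }
      }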

    Read the article

  • Should we test all our methods?

    - by Zenzen
    So today I had a talk with my teammate about unit testing. The whole thing started when he asked me "hey, where are the tests for that class, I see only one?". The whole class was a manager (or a service, if you prefer to call it that) and almost all the methods were simply delegating stuff to a DAO, so it was similar to: SomeClass getSomething(parameters) { return myDao.findSomethingBySomething(parameters); } A kind of boilerplate with no logic (or at least I do not consider such simple delegation as logic), but a useful boilerplate in most cases (layer separation etc.). And we had a rather lengthy discussion about whether or not I should unit test it (I think it is worth mentioning that I did fully unit test the DAO). His main arguments were that it was not TDD (obviously) and that someone might want to see the test to check what this method does (I do not know how it could be more obvious), or that in the future someone might want to change the implementation and add new (or more like "any") logic to it (in which case I guess someone should simply test that logic). This made me think, though. Should we strive for the highest test coverage %? Or is it simply art for art's sake? I simply do not see any reason behind testing things like: getters and setters (unless they actually have some logic in them) and "boilerplate" code. Obviously a test for such a method (with mocks) would take me less than a minute, but I guess that is still time wasted, and a millisecond longer for every CI run. Are there any rational (not "flammable") reasons why one should test every single line of code, or as many as one can?
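
    For what it's worth, a test for that kind of delegation is tiny. Here is a hypothetical sketch using NUnit and Moq (the interface, class, and parameter names below are invented to mirror the snippet, not taken from any real code); whether a test that merely restates the delegation earns its keep is exactly the judgement call being debated:

      using Moq;
      using NUnit.Framework;

      [TestFixture]
      public class SomeManagerTests
      {
          [Test]
          public void GetSomething_DelegatesToDao()
          {
              var expected = new SomeClass();
              var dao = new Mock<IMyDao>();
              dao.Setup(d => d.FindSomethingBySomething("params")).Returns(expected);

              var manager = new SomeManager(dao.Object);
              var actual = manager.GetSomething("params");

              Assert.AreSame(expected, actual);                                  // the value passes straight through
              dao.Verify(d => d.FindSomethingBySomething("params"), Times.Once); // and the DAO was called exactly once
          }
      }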

    Read the article

  • Scripting language for filling out web form

    - by ityler22
    I have a job as an intern at a technology company, and I was given the unfortunate task of performing some data entry into our web management system. The information entered into the web form is stored in a MySQL DB. Upon receiving the data I realized I would have to submit this online form about 1000 different times, each consisting of about 10 different text fields / check boxes per form. (In other words, it would be completely mind-numbing and a ridiculous waste of time and resources, or so I thought...) Having used databases a good bit prior to this, my immediate reaction was to just write a short MySQL script to bulk import all of the data, especially since it was already presented to me in an Excel spreadsheet, ready to go. I thought it may have been some sort of test, since it seemed too obvious. I wrote the script, which consisted of about 10 lines of code, but was then informed I couldn't be trusted with MySQL admin privileges to run said script. So my next thought was to write a script to just enter the information through the web form (which will take ten times longer, but it's what I have to do). Being unfamiliar with scripting of this nature (it seems like I would need something similar to a bot, but the good kind), I was unsure how to proceed. Is there a preferred language to use to enter the data I have into the web form I do have access to? I'm not particularly looking for this to be done for me by any means, just a nice point in the right direction as far as what scripting language to use and how to pair that with the data I have that needs to be entered. Thanks for the help / valuable input! EDIT: Is there a way to perform this using Perl without having access to place any files on the server? Would I be able to run some JavaScript loops to pull the data out of .csv or just a .txt format with line delimiters and insert it into the web form?
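
    Almost any language with an HTTP client will do; the mechanical part is POSTing the same fields the form posts (the form's action URL and input names can be read from the page source or the browser's network tab). A rough sketch in C# follows; the URL, field names, and CSV columns are placeholders rather than the real form, and if the form sits behind a login or an anti-forgery token you would first GET the page with a cookie-aware handler and echo that token back:

      using System;
      using System.Collections.Generic;
      using System.IO;
      using System.Net.Http;
      using System.Threading.Tasks;

      class FormFiller
      {
          static async Task Main()
          {
              using (var client = new HttpClient())
              {
                  // Each CSV line: name,email,active  (placeholder columns)
                  foreach (var line in File.ReadLines("records.csv"))
                  {
                      var cols = line.Split(',');
                      var fields = new Dictionary<string, string>
                      {
                          ["name"]   = cols[0],
                          ["email"]  = cols[1],
                          ["active"] = cols[2]   // checkboxes usually post "on" or a value when ticked
                      };

                      var response = await client.PostAsync(
                          "http://example.com/management/form-handler",   // the form's action URL
                          new FormUrlEncodedContent(fields));

                      Console.WriteLine($"{cols[0]}: {(int)response.StatusCode}");
                  }
              }
          }
      }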

    Read the article

  • Testing of visualization projects

    - by paxRoman
    We develop small to large visualization projects for different tasks and industries, and sometimes, after rewriting them a couple of times in the process, we hit walls because we discover that we need to add a lot of code to support new requirements. Now we have established a design process that seems to work well (at least we reduced the development time for each new project quite a bit), but we're still left scratching our heads around this question: what exactly should we test when testing visualizations? Whether everything that we want to explore is on the screen (bounded visualizations)? Whether the data is valid (that's one of the nice things about visualizations: you can spot errors in your datasets)? Usability? User interaction? Code quality? I can tell you for sure that a simple check of the code quality is certainly not enough! Is there a classic paper / book about how to test visualizations? Also, do you happen to know of classic design patterns for visualizations (apart from the obvious ones like Pub-Sub)?
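
    As one concrete example of the "bounded visualizations" check mentioned above, a layout-level test can assert that every data point is projected inside the viewport. This is a hypothetical sketch (Viewport, ScatterLayout, and SampleData are invented names, and NUnit is assumed), not a recipe from any paper:

      using NUnit.Framework;

      [TestFixture]
      public class ScatterLayoutTests
      {
          [Test]
          public void EveryDataPointIsProjectedInsideTheViewport()
          {
              // Hypothetical layout engine: maps raw data to screen coordinates.
              var viewport = new Viewport(width: 800, height: 600);
              var layout = ScatterLayout.Compute(SampleData.Load("dataset.csv"), viewport);

              foreach (var point in layout.Points)
              {
                  Assert.That(point.X, Is.InRange(0, viewport.Width));
                  Assert.That(point.Y, Is.InRange(0, viewport.Height));
              }
          }
      }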

    Read the article

  • Password protect an alias virtual directory

    - by Jason
    I have a main domain being hosted through CPanel. I also have a sub-domain that I would like to appear as a path under the main domain instead of as a sub-domain. So I have: http://example.com/ pointing to the main hosted files. http://example.com/mydir pointing to the subdomain files. This is achieved by an httpd.conf include from the main domain section to set an alias: alias /mydir /path/to/subdomain/files/ Now, that works fine so far. The problem is that if a .htaccess file under /path/to/subdomain/files/ contains an error, the alias is completely skipped, and /mydir goes instead to the main host files. That is kind of surprising to me - I would expect an error to return an error instead. Now the killer: if I try to password protect /path/to/subdomain/files/, then trying to access http://example.com/mydir will again attempt to deliver from under the main hosted files and not from /path/to/subdomain/files/ I am not seeing any errors reported on the .htaccess file in the apache error log, so I am assuming the .htaccess is valid: AuthUserFile /path/to/valid/readable/.htpasswd AuthName "Secure Access" AuthType Basic Require valid-user This kind of behaviour does not seem right to me. Is there something obvious that could be causing it? Or is this just the way it works? Perhaps using an alias is the wrong way to go?
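
    One thing worth checking, purely as an assumption since the question doesn't show the rest of the include: the aliased directory normally needs its own Directory block that both grants access and allows .htaccess overrides, otherwise the auth directives can be ignored. A sketch in Apache 2.2 syntax, using the same paths as above (on Apache 2.4 the last two lines become "Require all granted"); alternatively, the Auth directives could be moved out of .htaccess and into this block entirely:

      Alias /mydir /path/to/subdomain/files/
      <Directory "/path/to/subdomain/files/">
          AllowOverride AuthConfig
          Order allow,deny
          Allow from all
      </Directory>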

    Read the article

  • Is individual code ownership important?

    - by Jim Puls
    I'm in the midst of an argument with some coworkers over whether team ownership of the entire codebase is better than individual ownership of components of it. I'm a huge proponent of assigning every member of the team a roughly equal share of the codebase. It lets people take pride in their creation, gives the bug screeners an obvious first place to assign incoming tickets, and helps to alleviate "broken window syndrome". It also concentrates knowledge of specific functionality with one (or two) team members making bug fixes much easier. Most of all, it puts the final say on major decisions with one person who has a lot of input instead of with a committee. I'm not advocating for requiring permission if somebody else wants to change your code; maybe have the code review always be to the owner, sure. Nor am I suggesting building knowledge silos: there should be nothing exclusive about this ownership. But when suggesting this to my coworkers, I got a ton of pushback, certainly much more than I expected. So I ask the community: what are your opinions on working with a team on a large codebase? Is there something I'm missing about vigilantly maintaining collective ownership?

    Read the article

  • An experiment: unlimited free trial

    - by Alex.Davies
    The .NET Demon team have just implemented an experiment that is quite a break from Red Gate's normal business model. Instead of the tool expiring after the trial period, it now continues to work, but with a new message that appears after the tool has saved you a certain amount of time. The rationale is that a user that stops using .NET Demon because the trial expired isn't doing anyone any good. We'd much rather people continue using it forever, as long as everyone that finds it useful and can afford it still pays for it. Hopefully the message appearing is annoying enough to achieve that, but not for people to uninstall it. It's true that many companies have tried it before with mixed results, but we have a secret weapon. The perfect nag message? The neat thing for .NET Demon is that we can easily measure exactly how much time .NET Demon has saved you, in terms of unnecessary project builds that Visual Studio would have done. When you press F5, the message shows you the time saved, and then makes you wait a shorter time before starting your application. Confronted with the truth about how amazing .NET Demon is, who can do anything but buy it? The real secret though, is that while you wait, .NET Demon gives you entertainment, in the form of a picture of a cute kitten. I've only had time to embed one kitten so far, but the eventual aim is for a random different kitten to appear each time. The psychological health benefits of a dose of kittens in the daily life of the developer are obvious. My only concern is that people will complain after paying for .NET Demon that the kittens are gone.

    Read the article

  • Count function on tree structure (non-binary)

    - by Spevy
    I am implementing a tree data structure in C#, based largely on Dan Vanderboom's generic implementation. I am now considering my approach to handling a Count property, which Dan does not implement. The obvious and easy way would be to use a recursive call which traverses the tree, happily adding up nodes (or iteratively traversing the tree with a Queue and counting nodes, if you prefer). It just seems expensive. (I also may want to lazy load some of my nodes down the road.) I could maintain a count at the root node. All children would traverse up to and/or hold a reference to the root, and update an internally settable count property on changes. This would push the iteration problem to whenever I want to break off a branch or clear all children below a given node. Generally less expensive, and it puts the heavy lifting in what I think will be less frequently called functions. Seems a little brute force, and that usually means exception cases I haven't thought of yet (or bugs, if you prefer). Does anyone have an example of an implementation which keeps a count for an unbalanced and/or non-binary tree structure, rather than counting on the fly? Don't worry about the lazy load, or language. I am sure I can adjust the example to fit my specific needs. EDIT: I am curious about an example, rather than instructions or discussion. I know this is not technically difficult...
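
    Since an example was asked for, here is a bare-bones sketch (not based on Dan Vanderboom's code) in which every node carries the size of its own subtree; adds and removals walk the ancestor chain rather than the whole branch, so neither operation needs a full traversal:

      using System.Collections.Generic;

      public class TreeNode<T>
      {
          private readonly List<TreeNode<T>> _children = new List<TreeNode<T>>();

          public T Value { get; set; }
          public TreeNode<T> Parent { get; private set; }
          public int Count { get; private set; }          // nodes in this subtree, including this one

          public TreeNode(T value)
          {
              Value = value;
              Count = 1;
          }

          public TreeNode<T> AddChild(T value)
          {
              var child = new TreeNode<T>(value) { Parent = this };
              _children.Add(child);
              for (var n = this; n != null; n = n.Parent)
                  n.Count += child.Count;                  // bubble the new size up to the root
              return child;
          }

          public bool RemoveChild(TreeNode<T> child)
          {
              if (!_children.Remove(child))
                  return false;
              for (var n = this; n != null; n = n.Parent)
                  n.Count -= child.Count;                  // the detached branch keeps its own counts
              child.Parent = null;
              return true;
          }
      }

    The same bookkeeping survives lazy loading as long as a child reports its size when it materialises; the cost is one int per node and an O(depth) walk on every structural change.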

    Read the article

  • Is it bad to be the only person supporting software you have developed?

    - by trpt4him
    My employer has a need for a web-based application to manage and share data within the department, with approximately 50-75 possible users. I feel I have the ability to write it for them. I would likely use Python/Django with a MySQL database, so it would be open source. However, I'm the only IT person in my department (our larger organization has a separate IT support staff with which I often work, but not for web development). I want to develop this application, but if I leave in 1-2 years, and someone else has to come in after me and support it, will this be seen as a bad decision? This is assuming all the obvious points -- I will write documentation, I will comment my code, and I will strive to follow good application design principles. But will that be enough? In principle, is it acceptable for one person to develop and support an entire web application? Is this a "do first, then show and ask" kind of situation, or should I be certain it will be adopted by everyone involved first?

    Read the article

  • Etiquette when asking questions in an IRC channel

    - by Zarkonnen
    Many larger OSS projects maintain IRC channels to discuss their usage or development. When I get stuck on using a project, having tried and failed to find information on the web, one of the ways I try to figure out what to do is to go into the IRC channel and ask. But my questions are invariably completely ignored by the people in the channel. If there was silence when I entered, there will still be silence. If there is an ongoing conversation, it carries on unperturbed. I leave the channel open for a few hours, hoping that maybe someone will eventually engage me, but nothing happens. So I worry that I'm being rude in some way I don't understand, or breaking some unspoken rule and being ignored for it. I try to make my questions polite, to the point, and grammatical, and try to indicate that I've tried the obvious solutions and why they didn't work. I understand that I'm obviously a complete stranger to the people on the channel, but I'm not sure how to fix this. Should I just lurk in the channel, saying nothing, for a week? That seems absurd too. A typical message I send might be "Hello all - I've been trying to get Foo to work, but I keep on getting a BarException. I tried resetting the Quux, but this doesn't seem to do anything. Does anyone have a suggestion on what I could try?"

    Read the article

  • How to target just one search engine and optimise for that

    - by mickburkejnr
    I've been dabbling with SEO a lot in the last 6 months, and one thing that has surprised me is the disparity between Google and Bing in the way they deliver results. A website ranked 3rd on the first page for a specific keyword/phrase on Google may show up ranked 15th on Bing for the exact same keyword/phrase. I came up with the idea to increase traffic to my website by targeting Bing instead of Google, for several reasons. The biggest one is that while it's not the biggest search provider, people still use it, and I feel that if other websites have been "neglected" and not optimised for Bing, my website would stand a better chance of getting near the top of their search rankings. The question, though, is how would I do this? A lot of the SEO advice on the internet is generic, but I can't help feeling it's Google-orientated for obvious reasons. How could I optimise my website to be Bing friendly, rather than Google friendly? I know it sounds like suicide as I'm taking myself out of the Google mindset, but I feel it could work wonders for traffic to the site.

    Read the article

  • How to deal with a CEO making all technical decisions but with little technical knowledge?

    - by anonymous
    Hi, question posted anonymously for obvious reasons. I am working in a company with a dev group of 5-6 developers, and I am in a situation I have a hard time dealing with. Every technical choice (language, framework, database, database schema, configuration scheme, etc...) is decided by the CEO, often without much rationale. It is very hard to modify those choices, and his main argument consists of "I don't like this", even though we propose several alternatives with detailed pros/cons. He will also decide to rewrite our core product from scratch without giving a reason why, and he never participates in dev meetings because he considers that it makes things slower... I am already looking at alternative job opportunities, but I was wondering if there is anything we (the developers) could do to improve the situation. Two examples which shocked me: he will ask us to implement something akin to configuration management, but he rejects any existing framework because it is not written in the language he likes (even though the implementation language is irrelevant). He also expects us to be able to write those systems in a couple of days, "because it is very simple". He keeps rewriting our core product from scratch on his own because the current codebase is too bad (a codebase whose design was his). We are at our third rewrite in one year, each rewrite worse than the previous one. Things I have tried so far include doing elaborate benchmarks on our product (he keeps complaining that our software is too slow, and uses that to justify rewrites to make it faster), implementing solutions with existing products as working proof instead of just making pros/cons charts, etc... But still, 90% of those efforts go in the trash (never with any kind of rationale beyond that he does not like it, again), and I often get reprimanded because I don't do exactly as he wants (without him realizing that what he wants is impossible).

    Read the article

  • Pidgin XML

    - by David Totzke
    I'm looking at an XML document that gets passed to a COM object (yes, I said the "C" word) to save a new record.  You can tell by the "new|" at the top of the file before the XML declaration.  If we were editing an existing record, there would be "edit|" at the top.  Couldn't you just have a root element with something like: <myRootElement mode="new"> Ah, here's why that won't work... There's no single root element, but that's OK because next we find that this document is actually several documents.  <?xml version="1.0"?> appears several times.  The final document opens with <myElementStart> and closes with <myElementEnd>, so it's not even well-formed. This isn't a style thing.  This is broken.  I mean, basic well-formed XML only has two rules (three if you count the XML declaration, but it works as a document for DTO purposes without it): One root element. Close all elements with a matching tag. As a result, both ends of this conversation need to speak the same dialect of broken XML in order to communicate.  To join the conversation, you must also learn pidgin XML. How can you start out so right - XML being the obvious choice in this instance - and then go so horribly wrong? Dave Just because I can…

    Read the article

  • How To Export/Import a Website in IIS 7.x

    - by Tray Harrison
    IIS 6 had a great feature called 'Save Configuration to a File' which would allow you to easily export a website's configuration, to be later imported either on the same server or another box. This came in handy anytime you wanted to duplicate a site in order to do some testing without impacting the existing application. So naturally, Microsoft decided to do away with this feature in IIS 7. The process to export/import a site is still fairly simple, though not as obvious as it was in previous versions. Here are the steps: First, open a command prompt, navigate to C:\Windows\System32\inetsrv, and run the following command: appcmd list site /name:<sitename> /config /xml > C:\output.xml So if you wanted to export a website named EAC, you would run: appcmd list site /name:EAC /config /xml > C:\output.xml If you'll be setting up another copy of the site on the same server, you'll now need to edit the output.xml file before importing it. This is necessary in order to avoid conflicts such as bindings, Site ID, etc. To do this, edit the XML and change the values. Go ahead and make a copy of the home directory, and rename it to whatever folder name you specified in the output – /EAC2 in this example. If you decide to change the app pool, make sure you go ahead and create the new app pool as well. Once these edits have been made, we are now ready to import the site. To do that, run: appcmd add site /in < c:\output.xml That's it. You should now see your site listed when opening up Inet Manager. If for some reason the site fails to start, that's probably because you forgot to create the new app pool or there is a problem with one of the other parameters you changed. Look at the System log to identify any issues like this.

    Read the article

  • Hello PCI Council, are you listening?

    - by David Dorf
    Mention "PCI" to any retailer and you'll instantly see them take a deep breath and start looking for the nearest exit.  Nobody wants to be insecure, but few actually believe that PCI does anything more than focus blame directly on retailers.  I applaud PCI for making retailers more aware of the importance of security, but did you have to make them PAINFULLY aware?  POS vendors aren't immune to this pain either as we have to undergo lengthy third-party audits in addition to the internal secure programming programs.  There's got to be a better way. There's a timely article over at StorefrontBacktalk that discusses the inequity of PCI's rules, and also mentions that the PCI Council is accepting comments until April 15th. As a vendor, my biggest issue with PCI is that they require vendors to disclose the details of any breaches, in effect "ratting out" customers.  I don't think its a vendor's place to do this.  I'd rather have the trust of my customers so we can jointly solve the problem. Mary Ann Davidson, Oracle's Chief Security Officer, has an interesting blog posting on this very topic.  Its a bit of a long read, but I found it very entertaining and thought-provoking.  Here's an excerpt: ...heading up the list of “you must be joking” regulations are recent disturbing developments in the Payment Card Industry (PCI) world. I’d like to give [the] PCI kahunas the benefit of the doubt about their intentions, except that efforts by Oracle among others to make them aware of “unfortunate side effects of your requirements” – which is as tactful I can be for reasons that I believe will become obvious below - have gone, to-date, unanswered and more importantly, unchanged. I encourage you to read the entire posting, Pain Comes Instantly, and then provide feedback to the PCI Council.

    Read the article

  • Is learning how to use C (or C++) a requirement in order to be a good (excellent) programmer?

    - by blueberryfields
    When I first started to learn how to program, real programmers could write assembly in their sleep. Any serious schooling in computer science would include a hefty bit of training and practice in programming using assembly. That has since changed, to the point where I see Computer Science degrees where assembly, if included at all, is relegated to one assignment and one chapter, for a total of two weeks' work out of 4 years' schooling. C/C++ programming seems to have followed a similar path. I'm no longer surprised to interview university graduates who have not spent more than two weeks programming in C++, and have only read of C in a book somewhere. While the most serious CS degrees still seem to include significant time learning and using one or both of the languages, the trend is clearly towards less enforced C/C++ in school. It's clearly possible to make a career producing good work without ever reading or writing a single line of C or C++ code. Given all of that, is learning the two languages worth the effort? Are they at all required to excel? (beyond the obvious, non-language-specific advice, such as "a good selection of languages is probably important for a comprehensive education", and "it's probably a good idea to keep trying out and learning new languages throughout a programmer's career, just to stretch the gray cells")

    Read the article

  • Impact of Server Failure on Coherence Request Processing

    - by jpurdy
    Requests against a given cache server may be temporarily blocked for several seconds following the failure of other cluster members. This may cause issues for applications that can not tolerate multi-second response times even during failover processing (ignoring for the moment that in practice there are a variety of issues that make such absolute guarantees challenging even when there are no server failures). In general, Coherence is designed around the principle that failures in one member should not affect the rest of the cluster if at all possible. However, it's obvious that if that failed member was managing a piece of state that another member depends on, the second member will need to wait until a new member assumes responsibility for managing that state. This transfer of responsibility is (as of Coherence 3.7) performed by the primary service thread for each cache service. The finest possible granularity for transferring responsibility is a single partition. So the question becomes how to minimize the time spent processing each partition. Here are some optimizations that may reduce this period: Reduce the size of each partition (by increasing the partition count) Increase the number of JVMs across the cluster (increasing the total number of primary service threads) Increase the number of CPUs across the cluster (making sure that each JVM has a CPU core when needed) Re-evaluate the set of configured indexes (as these will need to be rebuilt when a partition moves) Make sure that the backing map is as fast as possible (in most cases this means running on-heap) Make sure that the cluster is running on hardware with fast CPU cores (since the partition processing is single-threaded) As always, proper testing is required to make sure that configuration changes have the desired effect (and also to quantify that effect).

    Read the article

  • Webservice Return Generic Result Type or Purposed Result Type

    - by hanzolo
    I'm building a webservice which returns JSON / XML / SOAP at the moment, and I'm not entirely sure which approach for returning results is best. Which would be a better return value? A generic "transfer" type structure, which carries generic properties, or a purposed type with distinct properties: class GenericTransferObject{ public string returnVal; public string returnType; } VS class PurposedTransferObject_1{ public string Property1; } //and then building additional "types" for additional values class PurposedTransferObject_2 { public string PropertyA; public string PropertyB; } Now, this would be serialized and returned from a web service call via some client technology, jQuery in this example. So if I called /GetDaysInWeek/, I would either get back: {"returnType": "DaysInWeek", "returnVal": "7" } OR {"DaysInWeek": "7"} And then it would go from there. On the one hand there's flexibility with the 1st example. I can add "returnTypes" without needing to adjust the client, other than referencing an additional "index"... but if I had to add a property, now I'm changing a structure definition. Is there an obvious choice in this situation?

    Read the article

  • How to market R at your institute?

    - by ran2
    Okay, I admit there are lots of "R vs. something" threads. The strengths of R are obvious to most people here. Still, advertising R in an environment that has preferred various other kinds of software for quite some time is not easy. Moreover, even in the limited time I've been dealing with R, it improved so dramatically that I would mention things among its strengths that I would not have listed when I started my personal R-evolution. So, what I am trying to do here is collect the most recent and striking arguments that can be put in a nutshell and presented easily. What I have on my list so far: the Springer useR series ggplot2 and its documentation open source CRAN Rapache rcpp rsocket What can you add to this list? SO threads are also very welcome as answers. EDIT: so far, though indeed very helpful, most answers are arguments (pros) for why one would want to use R. Do you have some specific hints that I could include in some kind of overview presentation? EDIT2: I wanted to add this link about R's future to the list...

    Read the article

  • Correct architecture for running and stopping complex tasks in the background

    - by Phonon
    I'm having trouble working out the correct architecture for the following task. I have a GUI in Windows Forms that contains a ListBox, listing certain architectural layouts. Once an item in this list is selected, a custom Control displays an interactive visualization of the selected layout. Drawing of this interactive diagram is a CPU-intensive task, and can take up to a second on my machine. The kind of functionality I'm trying to achieve is that if a user wants to quickly scroll through the layouts in the ListBox (say, holding down the down arrow key), I don't want my computer to sit there thinking about how to draw the layout before it allows the user to do anything else. The obvious answer is, of course, to run the layout calculations in a separate thread. But how do I make that thread return a whole control? How do I make sure I'm not running two layout calculations at once? I'm fairly new to this complex GUI business. So the real question is what is the right architecture to implement something like this? This seems like something people do all the time, but finding any suggestions on how to do it properly is really difficult.
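
    One common shape for this, sketched below with invented LayoutInfo/LayoutEngine/diagramControl names: compute a plain layout model on a worker thread via Task.Run, keep the Control itself on the UI thread, and cancel the previous computation whenever the selection changes so rapid scrolling never queues up stale work:

      using System;
      using System.Threading;
      using System.Threading.Tasks;
      using System.Windows.Forms;

      public partial class LayoutBrowserForm : Form
      {
          private CancellationTokenSource _cts;

          private async void layoutListBox_SelectedIndexChanged(object sender, EventArgs e)
          {
              _cts?.Cancel();                                // abandon the previous layout, if any
              _cts = new CancellationTokenSource();
              var token = _cts.Token;
              var selected = (LayoutInfo)layoutListBox.SelectedItem;

              try
              {
                  // Heavy geometry runs off the UI thread; it builds data, not a Control.
                  var model = await Task.Run(() => LayoutEngine.Compute(selected, token), token);

                  if (!token.IsCancellationRequested)
                  {
                      diagramControl.Model = model;          // back on the UI thread here
                      diagramControl.Invalidate();
                  }
              }
              catch (OperationCanceledException)
              {
                  // Superseded by a newer selection; nothing to do.
              }
          }
      }

    The key design point is that the background task produces data rather than a control: WinForms controls must be created and touched only on the UI thread, so the expensive calculation is separated from the cheap final assignment and redraw.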

    Read the article

  • WCF REST Error Handler

    - by Elton Stoneman
    I’ve put up on GitHub a sample WCF error handler for REST services, which returns proper HTTP status codes in response to service errors. The code is very simple – a ServiceBehavior implementation which can be specified in config to attach the RestErrorHandler to a service. Any uncaught exceptions will be routed to the error handler, which sets the HTTP status code and description in the response, based on the type of exception. The sample defines a ClientException which can be thrown in code to indicate a problem with the client’s request, and the response will be a status 400 with a friendly error message: throw new ClientException("Invalid userId. Must be provided as a positive integer"); - responds: Request URL http://localhost/Sixeyed.WcfRestErrorHandler.Sample/ErrorProneService.svc/lastLogin?userId=xyz Error Status Code: 400, Description: Invalid userId. Must be provided as a positive integer Any other uncaught exceptions are hidden from the client. The full details are logged with a GUID to identify the error, and the response to the client is a status 500 with a generic message giving them the GUID to follow up on: var iUserId = 0; var dbz = 1 / iUserId; - logs the divide-by-zero error and responds: Request URL http://localhost/Sixeyed.WcfRestErrorHandler.Sample/ErrorProneService.svc/dbz Error Status Code: 500, Description: Something has gone wrong. Please contact our support team with helpdesk ID: C9C5A968-4AEA-48C7-B90A-DEC986F80DA5 The sample demonstrates two techniques for building the response. For client exceptions, a friendly HTML response is sent in the body as well as the status code and description. Personally I prefer not to do that – it doesn’t make sense to get a 400 error and find text/html when you’re expecting application/json, but it’s easy to do if that’s the functionality you want. The other option is to send an empty response, which the sample does with server exceptions. The obvious extension is to have multiple exceptions representing all the status codes you want to provide, then your code is as simple as throwing the relevant exception – UnauthorizedException, ForbiddenException, NotImplementedException etc – anywhere in the stack, and it will be handled nicely.

    Read the article

  • SEO with duplicate content

    - by user16831
    I have a nature photography site with multiple types of photo galleries. Each photo and associated caption on my site appears in several galleries. For instance, a photo of a goldfinch that was taken on a trip to New Mexico in 2008 will appear in the "goldfinch.php" gallery, in the "finches.php" gallery, and in the "New_Mexico_2008.php" gallery. This duplication is useful for my site visitors - User A may want to see goldfinch photos, whereas User B wants to see photos from New Mexico - but I am concerned about the SEO implications. The typical suggestions to deal with duplicate content, such as 301 redirects and canonical tags, probably won't work in this case, because the page content is substantially different (ranging from ~1% to ~90% duplication, depending on the specific example chosen). The obvious solution to me would be to edit robots.txt to only allow search engines to crawl one type of gallery - for instance, if they crawled only the galleries organized by species (e.g. goldfinch.php), all the photos on my site would be found exactly once. However, the Google content guidelines recommend against blocking crawler access to duplicate information. Should I go ahead and use robots.txt anyway? Or is there a better solution?

    Read the article

  • Microsoft WPC 12 – Predictions

    - by D'Arcy Lussier
    Let me start by saying I have absolutely no inside knowledge, neither through the MVP program nor any other means, that is fuelling what I’m about to write. This is entirely conjecture fuelled by speculation and too much Sapporo beer at a fantastic Japanese restaurant tonight. Still, I present to you… D’Arcy’s Worldwide Partner Conference 2012 Predictions!!! So what can we expect to be announced at this year’s WPC? Much more than last year, I’m hoping! Last year was sort of encouraging the troops to carry on with the Windows 7 messaging even with Windows 8 looming in the distance. It also showed Microsoft’s slant towards Private Cloud in addition to Azure. This year, we’re going to see a shift to a battle cry – Windows 8 is Coming, Windows 8 is Coming! I expect we’re going to hear an RTM date for Windows 8 from Steve Ballmer tomorrow, in addition to dates surrounding Windows Server 2012. We’ll also hear some announcement around Windows Phone 8, but I’m not really sure what – that whole piece is still quite muddy; are we going to actually *see* Windows Phone 8 devices this week? That would be great, but I imagine those types of announcements might be left for Build. Speaking of Build, I’m expecting an announcement on a date for a Build conference this Fall, probably late October. If any announcements are going to be made around Office 15, the schedule isn’t hinting at it. In fact, other than Office 365 there’s not much mention of Office in the conference sessions – either a red herring, or telling that Microsoft has another announcement coming later. The tagline of the conference is “A New Era. Together.” It’s obvious Microsoft wants to leverage WPC to rally their partners to carry the Windows 8 banner into the field of battle this fall when it ships. D

    Read the article

  • WiFi slow sometimes, reboot helps, how do I debug it?

    - by January
    Ubuntu 12.04.1 with all updates installed. Laptop Lenovo Thinkpad X230 with Intel Corporation Centrino Advanced-N 6205. WiFi sometimes becomes extremely slow. Often this occurs when I wake the system from suspend and connect to a different network. I find no obvious clues in system logs. /etc/init.d/network-manager restart doesn't help, but a reboot does. How can I go about debugging this issue? Specifically, which parts of the system should I try to restart (without a complete reboot)? I know of problems with Intel WiFi (see for example this question and the instructions here), but if that was the problem, I would expect the WiFi to be slow at all times, and not just sometimes. Also, I have a gut feeling that it might be a DNS issue (for example, getting a page from a known server is faster than accessing a new server), but I don't know how to tackle it. Update: despite numerous updates in the meanwhile, I still observe this behavior. It happens always when I access my WiFi router at home after returning from work; when I reboot my laptop, the connection speed is good again.

    Read the article
