Search Results

Search found 7338 results on 294 pages for 'useful'.

Page 154/294

  • Content-light website and Google - Tell Google it's a listings site (as opposed to a shop, reviews, or restaurants)

    - by Doug Firr
    I have a listings-style website. Due to the nature of this (listings), the site is content-light. Each page is typically less than 50 words, but there are many pages. The site in question has had a ton of media coverage and so has some great inbound links from places like Wired, Fast Company, the Canadian Broadcasting Corporation and many, many other bloggers, media websites and recycling-related niche authors (it's a recycling site). But Google really ignores it. Traffic from search is very, very low - less than 5% of all traffic. I know that using markup you can tell Google whether your site is a restaurant, article, review, shop, local business or a few other categories (https://www.google.com/webmasters/markup-helper/u/0/). Is there a way to tell Google that my site is a listings site? I suspect, but do not know for sure, that part of the problem is that Google simply does not know what my site is. It's a crowdmap where people post curb alerts. The information is useful to people, but it is presented in a short, concise way - a pin on a map, a picture and a short description. Adding anything further is not necessary for the site's intended purpose. First question: how best to tell the search engines what my site is - a listings site and not some spammy website? Any recommendations for improving our site's search presence? You can take a look here if interested: http://tinyurl.com/lxg4hn7
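
    There is no dedicated "listings site" category in Google's markup helper, but one hedged option is to describe each page's curb alerts with schema.org structured data, for example an ItemList, so crawlers at least see machine-readable detail about what each pin is. This is a rough, illustrative sketch only - the URLs and names are made up, and whether Google rewards it is not guaranteed:

        <script type="application/ld+json">
        {
          "@context": "https://schema.org",
          "@type": "ItemList",
          "name": "Curb alerts posted this week",
          "itemListElement": [
            { "@type": "ListItem", "position": 1,
              "url": "https://example.com/alerts/1234",
              "name": "Free wooden bookshelf at the curb" },
            { "@type": "ListItem", "position": 2,
              "url": "https://example.com/alerts/1235",
              "name": "Old bicycle, needs a new tire" }
          ]
        }
        </script>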

    Read the article

  • Understanding interfaces [closed]

    - by user985482
    Possible Duplicate: When to use abstract classes instead of interfaces and extension methods in C#? Why are interfaces useful? What is the point of an interface? What other reasons are there to write interfaces rather than abstract classes? What is the point of having every service class have an interface? Is it a bad habit not to use interfaces? I am reading Microsoft Visual C# 2010 Step by Step, which I feel is a very good book for introducing you to the C# language. I have just finished reading a chapter on interfaces, and although I understood the syntax of creating and using interfaces, I have trouble understanding why I should use them. Correct me if I am wrong, but in an interface you can only declare method names and parameters. The body of the method has to be defined in the class that implements the interface. So in this case, why should I declare an interface if I am going to define the entire method in the class that implements that interface? What is the point? Does this have something to do with the fact that a class can implement multiple interfaces?
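
    A small sketch of the usual answer: the value of an interface is not in its (empty) method bodies but in letting calling code depend on a contract rather than on a concrete class, and in letting one class satisfy several contracts at once. The question is about C#, but the idea is identical in Java, which is what this hedged illustration uses; all type names are invented:

        // The contract says nothing about how the work is done.
        interface Shape     { double area(); }
        interface Printable { String label(); }

        // A class may implement several interfaces at once.
        class Circle implements Shape, Printable {
            private final double radius;
            Circle(double radius) { this.radius = radius; }
            public double area()  { return Math.PI * radius * radius; }
            public String label() { return "circle r=" + radius; }
        }

        class Square implements Shape {
            private final double side;
            Square(double side) { this.side = side; }
            public double area() { return side * side; }
        }

        public class Demo {
            // Works for any current or future Shape implementation without being changed.
            static double totalArea(Shape... shapes) {
                double total = 0;
                for (Shape s : shapes) total += s.area();
                return total;
            }

            public static void main(String[] args) {
                System.out.println(totalArea(new Circle(1.0), new Square(2.0)));
            }
        }

    The caller never needs to know which concrete class it is holding; that substitutability is what declaring the interface buys you.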

    Read the article

  • Terrible App Review of the Week – October 2nd

    - by David Paquette
    As some people know, I have a few apps in the Windows Phone Store.  One of these apps was intended to be a gimmicky app that did NOT really do anything useful.  It was just a funny little app that you would probably try once, then almost immediately uninstall.  To my surprise, this app ended up in some of the Top App lists and actually got a large number of downloads (for the Windows Phone Store).  Along with these downloads came a large number of really terrible and offensive reviews.  People are insulting me and saying awful things that they would never say to someone in person (I hope).  I am OK with this.  I can take the bad reviews and they don't really bother me, but I still think that people are incredibly disrespectful with their app reviews.  So... I am going to start sharing the best of the worst reviews.  If by chance this is your review, please contact me.  I would love to have a quick chat… "Literally THE crappiest app I could of downloaded. You might as well rub dog *** in your eyes..... You'd see more!!!" - Stan8976   P.S. I am not particularly proud of this app, so I am not going to reveal the name. However, as you see more of these amazing reviews, I think you might be able to guess which app it is.

    Read the article

  • Ubuntu 12.04 (dual boot with Windows 7), doesn't boot after I deleted some files from Windows. What can I do?

    - by sacha
    The Ubuntu 12.04 I installed (dual-booting with Windows 7) using Wubi worked perfectly for over a month. Then it informed me that I had run out of space on the hard drive, and I assumed it was because my hard drive on Windows was full. I logged into Windows and deleted the whole New Volume D. But now the problem is that I can no longer boot into Ubuntu, although Windows still works. I was careful not to delete important files in Windows. When I try to boot into Ubuntu, either it does not get far and I have to restart the computer, or it reaches the loading screen and a message says something like "[...] Graphics could not be detected [...]" and asks me to choose between four options, including "Start with poor Graphics", "Reconfigure Graphics", "Troubleshoot" and "Restart the computer". But none of the options run, and I also have to restart the computer manually from that point. I have plenty of useful files in Ubuntu, so I want to find another way to solve the problem instead of uninstalling and reinstalling Ubuntu. I want to know what happened, and how to make it work again.

    Read the article

  • Is embedded programming closer to electrical engineering or software development?

    - by Jeremy Heiler
    I am being approached about a job writing embedded C on microcontrollers. At first I thought that embedded programming was too low on the software stack for me, but maybe I am thinking about it wrong. Normally I would have shrugged off an opportunity to write embedded code, as I don't consider myself an electrical engineer. Is this a bad assumption? Am I able to write interesting and useful software for embedded systems, or will I kick myself for dropping too low on the software stack? I went to school for computer science and really enjoyed writing a compiler, managing concurrent algorithms, designing data structures, and developing frameworks. However, I am currently employed as a Flex developer, which doesn't exactly scream the interesting things I just described. (I currently deal with issues like: "this check box needs to be 4 pixels to the left" and "this date is formatted wrong".) I appreciate everyone's input. I know I have to make the decision for myself; I just would like some clarification on what it means to be an embedded programmer, and whether it fits what I find interesting.

    Read the article

  • Welcome to the Java Training Beat!

    - by tmcginn
    We are a group of dedicated training developers for Java, located in the US, India, and now Mexico. In this blog we will announce new training content and events that might be of interest to our readers. In this first installment of the Java Training Beat, I would like to introduce three new Oracle By Example (OBE) modules I recently released and posted to the Oracle Online Learning Library. Creating a Simple Java Message Service (JMS) Producer with NetBeans and GlassFish - covers how to create a simple text message producer with NetBeans 7 and GlassFish. Creating Java Message Service (JMS) Resources in WebLogic Server 12c - covers how to create JMS resources using the console and WebLogic Server 12c. With this tutorial, you can replicate the results of the first tutorial in WebLogic. Creating a Publish/Subscribe Model with Message-Driven Beans and GlassFish Server - covers how to create a publish/subscribe application using JMS. This tutorial includes a short case study that includes a JSF front-end application that sends a hotel reservation request object to the server as a MapMessage. Hope you find these useful!  And do check out the Online Learning Library - we have a wide range of additional content posted and more being added every month!
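
    As a taste of what the first tutorial covers, a bare-bones JMS text-message producer looks roughly like the sketch below. The JNDI names ("jms/MyConnectionFactory", "jms/MyQueue") are placeholders for whatever resources you create in GlassFish or WebLogic, and error handling is omitted:

        import javax.jms.*;
        import javax.naming.InitialContext;

        public class SimpleProducer {
            public static void main(String[] args) throws Exception {
                InitialContext ctx = new InitialContext();
                ConnectionFactory factory = (ConnectionFactory) ctx.lookup("jms/MyConnectionFactory");
                Queue queue = (Queue) ctx.lookup("jms/MyQueue");

                Connection connection = factory.createConnection();
                try {
                    Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                    MessageProducer producer = session.createProducer(queue);
                    TextMessage message = session.createTextMessage("Hello from the JMS producer");
                    producer.send(message);   // a consumer or message-driven bean picks this up from the queue
                } finally {
                    connection.close();
                }
            }
        }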

    Read the article

  • Resurrecting a 5,000 line test plan that is a decade old

    - by ale
    I am currently building a test plan for the system I am working on. The plan is 5,000 lines long and about 10 years old. The structure is like this: 1. test title; precondition: some W needs to be set up, X needs to be completed; action: do some Y; postcondition: message saying Z is displayed. 2. ... What is this type of testing called? Is it useful? It isn't automated: the tests would have to be handed to some unlucky person to run through, and then the results would have to be given to development. It doesn't seem efficient. Is it worth modernising this method of testing (removing tests for removed features, updating tests where different postconditions now happen, ...), or would a whole different approach be more appropriate? We plan to start unit tests, but the software requires so much work to actually get 'units' to test - there are no units at present! Thank you.
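
    For what it's worth, each precondition/action/postcondition step in a plan like this maps fairly directly onto an automated test once some kind of unit exists to call. The sketch below is purely illustrative - the little Workflow class stands in for whatever eventually gets carved out of the real system - and uses JUnit 4:

        import org.junit.Before;
        import org.junit.Test;
        import static org.junit.Assert.assertEquals;

        public class WorkflowTest {

            // Stand-in for a real unit under test.
            static class Workflow {
                private boolean stepXDone;
                private String lastMessage = "";
                void completeStepX()          { stepXDone = true; }
                void doSomeY()                { lastMessage = stepXDone ? "Z" : "error"; }
                String lastDisplayedMessage() { return lastMessage; }
            }

            private Workflow workflow;

            @Before
            public void setUpPrecondition() {
                workflow = new Workflow();   // "some W needs to be set up"
                workflow.completeStepX();    // "X needs to be completed"
            }

            @Test
            public void actionYDisplaysMessageZ() {
                workflow.doSomeY();                                    // "do some Y"
                assertEquals("Z", workflow.lastDisplayedMessage());    // "message saying Z is displayed"
            }
        }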

    Read the article

  • Cloning from a given point in the snapshot tree

    - by Fat Bloke
    Although we have just released VirtualBox 4.3, this quick blog entry is about a longer-standing ability of VirtualBox when it comes to Snapshots and Cloning, and was prompted by a question posed internally here in Oracle: "Is there a way I can create a new VM from a point in my snapshot tree?". Here's the scenario: let's say you have your favourite work VM, which is Oracle Linux based, and as you installed different packages, such as the database, middleware, and the apps, you took snapshots at each point. But you then need to create a new VM for some other testing, or to share with a colleague who will be using the same Linux and Database layers but may want to reconfigure the Middleware tier, and may want to install his own Apps. All you have to do is right-click on the snapshot that you're happy with and clone it. Give the VM that you are about to create a name, and if you plan to use it on the same host machine as the original VM, it's a good idea to "Reinitialize the MAC address" so there's no clash on the same network. Now choose the Clone type. If you plan to use this new VM on the same host as the original, you can use Linked Cloning; otherwise choose Full. At this point you have a choice about what to do with your snapshot tree. In our example, we're happy with the Linux and Database layers, but we may want to allow our colleague to change the upper tiers, with the option of reverting back to our known-good state, so we'll retain the snapshot data in the new VM from this point on. The cloning process then chugs along and may take a while if you chose a Full Clone. Finally, the newly cloned VM is ready with the subset of the snapshot tree that we wanted to retain. Pretty powerful, and very useful.  Cheers, -FB 
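
    For anyone who prefers scripting this instead of using the GUI, the same clone-from-a-snapshot operation should also be reachable with VBoxManage clonevm; a hedged example, where the VM and snapshot names are made up and "--options link" would be dropped for a full clone:

        # Clone from the "Database" snapshot of "WorkVM" as a linked clone and register it.
        VBoxManage clonevm "WorkVM" --snapshot "Database" --name "WorkVM-middleware-test" --options link --register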

    Read the article

  • HTML Manifest for Content Folios

    - by Kyle Hatlestad
    I recently worked on a project to create a custom content folio renderer in WebCenter Content. It needed to output the native files in the folio along with a manifest file in HTML format which would list the contents of the folio along with any designated metadata and a relative link to the file within the download.  This way a person could hand someone the folio download and it would be a self-contained package with all of the content and a single file to display the information on the contents.  The default Zip rendition of the folio will output the web-viewable version of the file with an HDA formatted file for each one. And unless you are fluent in HDA or have a tool to read them, they are difficult to consume. I thought this might be useful for others, so I'm posting a copy of the component here. Beyond the standard instructions for installing a component, there is an environment configuration file (folionativezipwithmanifestrenderer_environment.cfg) which has a couple of options. FolioMetadataManifestList - This is a comma separated list of metadata fields (system or custom) that should be included in the manifest file. FolioMetadataManifestUseOriginalFilename - (True or False) If set to True, the filenames in the zip file will be based on the original filename as it was checked into WebCenter Content.  If False, it will use the 'Name' of the item as defined within the Folio.  This is usually the Title of the item. The component also includes the source code, so feel free to use this as a reference for creating other interesting folios. 
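
    For illustration, those two settings sit in folionativezipwithmanifestrenderer_environment.cfg as plain name/value entries; the metadata fields listed here are just common system fields and should be swapped for whatever your manifest actually needs:

        FolioMetadataManifestList=dDocName,dDocTitle,dDocAuthor,dInDate
        FolioMetadataManifestUseOriginalFilename=True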

    Read the article

  • Do you have to recreate workspaces after upgrading a TFS 2008 server to TFS 2010?

    - by Clara Oscura
    I am just reposting this thread from an MSDN forum since it seems to be unavailable. It was very useful when I was having trouble with my folder mappings after migrating to TFS 2010. Question: I opened VS2008 and connected it to the upgraded 2010 TFS server.  Upon clicking any of our Team Projects in Source Control Explorer I get "Team Foundation Error - The workspace MYWORKSPACE;DOMAIN\MYUsername already exists on computer MYPCNAME." Answer: The same local paths on your machine are mapped to two different workspaces, one on the pre-upgrade server and one on the post-upgrade server.  It's not safe to have multiple workspaces on different servers mapped to the same local paths, because you could pend some changes while connected to one server, and the other server would have no idea what you did.  You should either delete your conflicting workspaces from one of the servers (if you don't need them on both), or test the new TFS instance from a new workspace (on a different machine). If you want to test an existing production workspace on both servers, then yes, you will have to mess around with the workspace cache. You don't have to delete the entire cache; you just need to run "tf workspaces /remove:* /server:<serverurl>" to clear the cached workspaces from a server (the command won't delete the workspaces), and possibly "tf workspaces /server:<server>" to refresh the workspace cache for a given server.  You will also have to back up and restore the workspace before switching servers, or your local files could be inconsistent. From the "Microsoft Visual Studio Team Foundation Server 2010 Beta 1" forum (not available anymore?) Technorati Tags: TFS 2010, TFS Workspaces, Team System, Team Foundation Server 2010
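
    Concretely, the two cache commands from the answer look something like this from a Visual Studio command prompt (both server URLs are placeholders for your own old and new TFS instances); the first clears the cached workspaces for the old server without deleting the workspaces themselves, and the second refreshes the cache for the upgraded server:

        tf workspaces /remove:* /server:http://oldtfs2008:8080
        tf workspaces /server:http://newtfs2010:8080/tfs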

    Read the article

  • What does the ".align" x86 Assembler directive do exactly? [migrated]

    - by Sinister Clock
    I will list exactly what I do not understand, and show you the parts I cannot understand as well. First off, the .align directive: ".align integer, pad. The .align directive causes the next data generated to be aligned modulo integer bytes." 1. What is implied by "causes the next data generated to be aligned modulo integer bytes"? I can surmise that the next data generated is a memory-to-register transfer, no? Modulo would imply the remainder of a division. I do not understand "to be aligned modulo integer bytes". What would be the remainder of a simple data declaration, and how would the next data generated being aligned by a remainder be useful? If the next data is aligned modulo, is that saying the next generated data, whatever that means exactly, is the remainder of an integer? That makes absolutely no sense. What specifically would the .align directive - say, .align 8 - issued in x86 for a data byte compiled from a C char, i.e., char CHARACTER = 0;, be for? Or specifically coded directly with that directive, not as preliminary assembly code after compiling C? I have debugged in assembly and noticed that any C/C++ data declarations, like chars, ints, floats, etc., will have the directive .align 8 inserted for each of them, along with other directives like .bss, .zero, .globl, .text, .Letext0, .Ltext0. What are all of these directives for, or at least the main ones I am asking about? I have learned a lot of the main x86 assembly instructions, but I was never introduced or pointed to all of these strange directives. How do they affect the opcodes, and are all of them necessary?
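
    For what it's worth, "aligned modulo integer bytes" just means the address of the next item is padded out until it is a multiple of that integer; nothing is divided at run time. A hedged GNU-assembler sketch (note that on some targets .align takes a power-of-two exponent rather than a byte count, so .balign is the unambiguous spelling):

                .data
                .byte  0x41          # location counter is now at an odd address
                .align 8             # the assembler emits padding bytes here
        value:  .quad  0x1234        # 'value' now starts at an address that is a multiple of 8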

    Read the article

  • Trying to set up multiple primary partitions on Ubuntu Linux

    - by JohnMerlino
    I currently have Ubuntu Desktop installed on a hard drive. I want to partition the hard drive so that I can reserve 30 GB for Ubuntu Server and 30 GB for Ubuntu Desktop. The drive has 300 GB available. Right now I am booting from the DVD drive and installing Ubuntu Server. I selected "Guided partitioning" and created a 30 GB primary partition with the Ext4 journaling filesystem, set "yes, format it" for the format option and set the bootable flag to on. I intend to use this 30 GB partition to hold Ubuntu Server and allow me to boot from it. Now I have two other partitions. They are both set to "logical"; one is currently using 285.8 GB and is using ext4 (when I try to set its bootable flag to true, it gives a warning: "You are trying to set the bootable flag on a logical partition. The bootable flag is only useful on the primary partitions"). More alarmingly, it says "No existing file system was detected in this partition". Actually, I'm thinking that this is the partition that is supposed to be holding my current Ubuntu Desktop. And of course I want this to be bootable and be a primary partition, so I could dual boot from this and the server partition. The third partition is also set to logical, and it is being used as a swap area. My question is regarding that second partition. It's supposed to be a primary partition that holds my existing Ubuntu Desktop edition. How do I switch it to primary and make sure that it's pointing to my existing desktop installation?

    Read the article

  • How do you keep down your urge to learn many things [closed]

    - by devsundar
    One of the difficulties I have is lowering my urge to learn new things (languages, tools, frameworks, etc.). I know it's good to stay on the bleeding edge, but at the same time I want to learn things properly. I really see that I need to strike a balance between staying on the bleeding edge and knowing things properly. For example: before settling on Arch (desktop), Ubuntu (server) and Knoppix (portable) as my favourite distributions, depending on the situation, I tried virtually all popular Linux distributions. Name any popular Linux distribution (Red Hat, Ubuntu, Arch, SUSE, Knoppix, Slax, Slackware) and I have tried it for some time. In fact, I have spent a few years experimenting with operating systems. The same goes for choosing Python and JavaScript (Node.js): I have tried all the languages I came across - Scala, Haskell, Erlang, Ruby, Python, Perl, Scheme. The same applies to databases: all the popular RDBMSs (Oracle, MySQL, Postgres, SQLite [favourite], etc.) and NoSQL stores (Mongo, Couch, Neo4j, etc.). Advantages I see: we get an overall picture of the technologies, tools and languages; it's useful for selecting the right tool for the job; we develop a taste and choose the one we like. Disadvantages: I feel that I spend too much time on this and see a need to strike a balance. In summary, for example, if I see a blog post on Hacker News about CoffeeScript I will try it out irrespective of what I am currently learning (say, Haskell). I switch back to learning Haskell, then I see Dart and check it out. And this continues... Effectively I take more time to learn Haskell, but I learn about other new stuff on the way. The question I have is: how do you strike a balance between staying on the bleeding edge and learning properly?

    Read the article

  • Users can benefit from Session Tracking

    A few years ago I worked for a large dental plan marketing company that ran a large customer-driven website selling dental plans to consumers. Their website started tracking users as soon as they hit the web servers, and then logged everything it could about the user. There are a lot of benefits to session tracking, for both the user and the website. Users benefit from session tracking because a website can retain pertinent information for them so that they do not have to re-enter the same information repeatedly. In addition, websites can hold specific items in a cart for each user so that they can pay for all of their items at once when they are ready to complete their purchases. Websites also benefit from session tracking because they can determine where a specific user came from and which advertising partner produced a sale. This information is very useful when deciding where to spend an advertising budget. There is only one real disadvantage when it comes to session tracking: users cannot really control what is actually tracked by a website. Yes, they can disable cookies and this will help, but that means that no tracking can be done at all. Most sites require users to have cookies enabled in order to make purchases or log in to their accounts.
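
    As a minimal illustration of how this kind of tracking is usually wired up (a hedged Java servlet sketch; the attribute names and referrer-crediting logic are made up), the container issues a session cookie on the first request, and later requests carrying that cookie map back to the same HttpSession:

        import java.io.IOException;
        import java.util.ArrayList;
        import java.util.List;
        import javax.servlet.ServletException;
        import javax.servlet.http.*;

        public class CartServlet extends HttpServlet {
            @Override
            protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                    throws ServletException, IOException {
                HttpSession session = req.getSession(true);   // create the session if it does not exist yet

                // Remember where the visitor came from, e.g. to credit an advertising partner.
                if (session.getAttribute("referrer") == null) {
                    session.setAttribute("referrer", req.getHeader("Referer"));
                }

                // Keep the shopping cart in the session so items survive across page views.
                @SuppressWarnings("unchecked")
                List<String> cart = (List<String>) session.getAttribute("cart");
                if (cart == null) {
                    cart = new ArrayList<>();
                    session.setAttribute("cart", cart);
                }
                String item = req.getParameter("add");
                if (item != null) {
                    cart.add(item);
                }
                resp.getWriter().println("Items in cart: " + cart.size());
            }
        }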

    Read the article

  • CLR Profiler Allocated Bytes and XNA ContentManager

    - by Vackup
    I've been fighting with the XNA ContentManager and memory allocations for some weeks because I'm trying to port my game from XNA (Windows) to ExEn / MonoTouch (iPhone). The problem is that after playing a few levels, my game exits unexpectedly on a real iPhone device (not the simulator). Profiling memory usage on Windows with CLRProfiler, I found some useful stuff, but I also found something I don't understand. If I use two ContentManagers (one for shared assets and one for level assets), when profiling, "Allocated Bytes" grows and grows level after level, but memory consumption measured by Windows Task Manager stays constant (down when I unload the content manager and up again when I load content). Obviously, I call contentManager.Unload() when a level ends. After a few levels my game exits unexpectedly on an iPhone device. If I use one content manager, "CLRProfiler Allocated Bytes" stays constant on Windows and on the iPhone; I can play the game normally and it doesn't exit unexpectedly. I use the same assets level after level. It seems like on iOS (iPhone), when loading and unloading the same assets, the game allocates memory and consumes all of the device's memory, so iOS kills it. Can anybody explain to me how this really works? I've read quite a bit, but I still don't understand what's going on.

    Read the article

  • Bug: unable to handle kernel NULL pointer dereference at

    - by maria
    I have recently installed a new system on my disk, Ubuntu 12.04. The installation proceeded without problems; I started installing additional software and copying data from other disks. I have already had this bug report twice. It was quite long, and I have no idea how to access the log file (which is probably saved somewhere), and since I had to switch off the computer using the button, nothing else was possible. Here is just a small part of it (what I've noted on paper): could not write bytes: Broken pipe; speech dispatcher disabled: edit /etc/default/speech-dispatcher; saned disabled: edit ...; and then: BUG: unable to handle kernel NULL pointer dereference at 0000009c. I've run the memory test in GRUB, and everything is fine. The first time it occurred I was using rsync; the second time I was trying to install texlive. Should I install the whole system once again? Or can it be a hardware problem? Or something else? If there are any hardware details which may be relevant, please ask, since I have no idea what is happening and I don't know what kind of information could be useful. Thanks P.S. dmesg output:

    Read the article

  • USB Keyboard works occasionally

    - by palimmo
    A few days ago I bought a Hama SL640 USB keyboard to use on my laptop with Ubuntu 12.04. But I'm having problems, as it works one time out of ten! On my girlfriend's laptop, with Windows Vista, it always works. On my laptop I also have Windows 7 in dual boot, and I have to say that it always works there too. Here is some info: ~$ lsusb Bus 006 Device 003: ID 04d9:1503 Holtek Semiconductor, Inc. Shortboard Lefty As you can see, the OS recognizes it, but the keyboard doesn't react... even the Caps Lock and Num Lock keys don't blink. Regarding legacy support (useful for GRUB), I have found no entry in the BIOS. But I'm not interested in that; I just want to use it in Ubuntu. However, in GRUB it sometimes works. Surprisingly, when I booted my laptop just now, the USB keyboard didn't work in GRUB but has worked since the Ubuntu login! And now I'm typing with it. Well, it means that Ubuntu has the right drivers and they work. But how do I get them to "load" correctly every time? Thanks in advance!

    Read the article

  • Reverse-Engineer Driver for Backlit Keyboard

    - by user87847
    Here's my situation: I recently purchased a Sager NP9170 (same as the Clevo P170EM) and it has a multi-colored, backlit keyboard. Under Windows 7, you can launch an app that allows you to change the color of the backlighting to any of a handful of colors (blue, green, red, etc.). I want that same functionality under Linux. I haven't been able to find any software that does this, so I guess I'm going to have to write it myself. I'm a programmer by trade, but I haven't done much low-level programming, and I've certainly never written a device driver, so I was wondering if anyone could answer these two questions: 1) Is there any software already out there that does this sort of thing? I've looked fairly thoroughly but haven't found anything applicable. 2) Where would I start in trying to reverse engineer this sort of thing? Any useful articles, tutorials, or books that might help? And just to clarify: the backlighting already works; that's not the problem. I just want to be able to change the color of the backlighting. This functionality is supported by the hardware. The laptop came with Windows software that does this, and I want the same functionality in Linux. I am willing to write this software myself; I just want to know the best way to go about it. Thanks!

    Read the article

  • Classes as a compilation unit

    - by Yannbane
    If "compilation unit" is unclear, please refer to this. However, what I mean by it will be clear from the context. Edit: my language allows for multiple inheritance, unlike Java. I've started designing+developing my own programming language for educational, recreational, and potentially useful purposes. At first, I've decided to base it off Java. This implied that I would have all the code be written inside classes, and that code compiles to classes, which are loaded by the VM. However, I've excluded features such as interfaces and abstract classes, because I found no need for them. They seemed to be enforcing a paradigm, and I'd like my language not to do that. I wanted to keep the classes as the compilation unit though, because it seemed convenient to implement, familiar, and I just liked the idea. Then I noticed that I'm basically left with a glorified module system, where classes could be used either as "namespaces", providing constants and functions using the static directive, or as templates for objects that need to be instantiated ("actual" purpose of classes in other languages). Now I'm left wondering: what are the benefits of having classes as compilation units? (Also, any general commentary on my design would be much appreciated.)

    Read the article

  • i18n and L10n (1)

    - by Aaron Li
    Internationalization (i18n) is a way of designing and developing a software product to function in multiple locales. This process involves identifying the locales that must be supported, designing features which support those locales, and writing code that functions equally well in any of the supported locales. Localization (L10n) is the process of modifying or adapting a software product to fit the requirements of a particular locale. This process includes (but may not be limited to) translating the user interface, documentation and packaging, changing dialog box geometries, customizing features (if necessary), and testing the translated product to ensure that it still works (at least as well as the original). i18n is a prerequisite for L10n. A resource is (1) any part of a program which can appear to the user or be changed or configured by the user, or (2) any piece of the program's data, as opposed to its code. The core product is the language-independent portion of a software product (as distinct from any particular localized version of that product, including the English-language version). Sometimes, however, this term is used to refer to the English product as opposed to other localizations.   Useful links: http://www.mozilla.org/docs/refList/i18n/ http://www.w3.org/International/ http://hub.opensolaris.org/bin/view/Community+Group+int_localization/
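
    As a small, hedged Java illustration of the resource idea (the bundle name and key are invented): keeping every user-visible string in a ResourceBundle rather than in code is the i18n step that makes the L10n step - translating Messages_fr.properties, Messages_ja.properties, and so on - possible without touching the program itself.

        import java.util.Locale;
        import java.util.ResourceBundle;

        public class Greeter {
            public static void main(String[] args) {
                // Strings live in Messages.properties, Messages_fr.properties, etc., not in the code.
                ResourceBundle messages = ResourceBundle.getBundle("Messages", Locale.getDefault());
                System.out.println(messages.getString("greeting"));
            }
        }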

    Read the article

  • SPARC SuperCluster Papers

    - by user12616590
    Oracle has been publishing white papers that describe uses and characteristics of the SPARC SuperCluster product. Here are just a few: A Technical Overview of the Oracle SPARC SuperCluster T4-4 - SPARC SuperCluster T4-4 is a high performance, multi-purpose engineered system that has been designed, tested and integrated to run a wide array of enterprise applications. It is well suited for multi-tier enterprise applications with Web, database and application components. This 20-page paper discusses the components and technical characteristics of this product. SPARC SuperCluster T4-4 Platform Security Principles and Capabilities - The security capabilities designed into the SPARC SuperCluster, and architectural, deployment, and operational best practices for taking advantage of them. Consolidating Oracle E-Business Suite on Oracle’s SPARC SuperCluster - This Oracle Optimized Solution describes the implementation and use of SPARC SuperCluster as a consolidation platform for E-Business Suite in 30 pages. Oracle Optimized Solution for Oracle PeopleSoft Human Capital Management on SPARC SuperCluster - The Oracle Optimized Solution for PeopleSoft Human Capital Management on SPARC SuperCluster is the industry's only proven, tested, applications-to-disk solution that maintains excellence managing absences, optimizing collaborative activities, streamlining knowledge and honing processes; 31 pages. I hope you find some of those papers useful.

    Read the article

  • Is it a good practice to create a list of definitions for all symbols and words in a programming language?

    - by MrDaniel
    After arriving at this point in Learning Python The Hard Way, I am wondering whether it is a good practice to create a list of symbols and define what they do, as described below, for every programming language. This seems reasonable, and might be very useful to have when jumping between programming languages. Is this something that programmers do, or is it just a waste of effort? Exercise 22: What Do You Know So Far? There won't be any code in this exercise or the next one, so there's no WYSS or Extra Credit either. In fact, this exercise is like one giant Extra Credit. I'm going to have you do a form of review of what you have learned so far. First, go back through every exercise you have done so far and write down every word and symbol (another name for 'character') that you have used. Make sure your list of symbols is complete. Next to each word or symbol, write its name and what it does. If you can't find a name for a symbol in this book, then look for it online. If you do not know what a word or symbol does, then go read about it again and try using it in some code. You may run into a few things you just can't find out or know, so just keep those on the list and be ready to look them up when you find them. Once you have your list, spend a few days rewriting the list and double checking that it's correct. This may get boring but push through and really nail it down. Once you have memorized the list and what they do, then you should step it up by writing out tables of symbols, their names, and what they do from memory. When you hit some you can't recall from memory, go back and memorize them again.

    Read the article

  • Why does there seem to be a lot of fear in choosing the "wrong" language to learn?

    - by Shewbox
    Perhaps it's just me, but as a current CS student I have already come across many questions on this site and elsewhere, not just "Which language should I use for X?" but also "Does anyone still use language Y?" My first CS class was taught in Scheme, which, if I'm not mistaken, isn't used widely (at least in comparison to languages like Java, PHP, Python, etc.). Many of my classmates balked at the idea of having to learn a language they would never have to use again, but I don't quite understand where so much of this fear of learning less popular languages comes from. No, I may not use Scheme in any job I get, but I certainly don't regret having learned to use it (albeit in a very beginner-level, not very in-depth manner in that one semester). I am taking a search engines class this semester, which is done in Perl, and again I am seeing classmates complaining about the language choice. I can understand having a favorite language and disliking others, but why do some get so worked up over learning a language in the first place? Can you really learn the "wrong" language? Isn't learning something like Scheme or Haskell good mental exercise if nothing else, and useful at least for exposure to different ways of solving problems?

    Read the article

  • Browser Item Caching and URLs

    - by Damon Armstrong
    Ultimately you want the browser to cache things like Flash components, Silverlight XAP files, and images to avoid users having to download them each time they hit a page.  But during development it's very useful to NOT have things cached, so you are always looking at the most up-to-date file.  You can always turn off caching in your browser, but if you use your browser for daily browsing then that's not the greatest option.  To avoid caching, we would always just slap a randomly generated GUID onto the back of the URL of any items we didn't want cached (e.g. http://someserver.com/images/image.png?15f073f5-45fc-47b2-993b-fbaa781b926d).  It worked well, but you had to remember to remove the random GUID when it went to production. However, on a GimmalSoft project we recently implemented, someone showed me a better way that didn't need to be removed from production code: just slap the last-modified date of the file on the end of the URL (or something generated from the modification date).  This approach was kind of genius because it gives you the best of both worlds.  If you modify the file, the browser goes out and gets the newest version.  If you don't modify the file, it uses the cached copy.  Very helpful!  The only downside is that you do have to read the modification date from the file, which does technically take some time.
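
    The trick is language-agnostic; a hedged sketch of it in Java (the original context was presumably ASP.NET, and the helper name and paths here are invented) is simply to read the file's last-modified time and append it as a query-string value whenever the URL is rendered:

        import java.io.File;

        public class AssetUrl {
            // webRoot is wherever the static files live on disk; path is the site-relative URL.
            static String versioned(String webRoot, String path) {
                long lastModified = new File(webRoot, path).lastModified();  // 0 if the file is missing
                return path + "?v=" + lastModified;   // the URL only changes when the file changes
            }

            public static void main(String[] args) {
                // Prints something like "/images/image.png?v=1338326412000"
                System.out.println(versioned("/var/www", "/images/image.png"));
            }
        }

    If reading the timestamp on every request ever turns out to matter, the computed value can be cached in memory and refreshed periodically.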

    Read the article
