Search Results

Search found 16097 results on 644 pages for 'dynamic pages'.


  • Matching my skills with Java and Web Programming

    - by John R
    Here is my main question: What is the most common way that Java is used in web development? The reason I ask: I am currently in the process of finding my first internship. Every employer has a separate set of languages, technologies, and acronyms they want their candidates to know. In school I did well with Java. As a hobby and interest I have developed a handful of web pages, widgets, scripts, etc. My university emphasized Java, C, and theory. My hobbies emphasize HTML, PHP, JavaScript, CSS, and a little jQuery, etc. I can't learn a dozen different technologies to satisfy most prospective employers (in what is left of the summer). I think my best bet is to combine my skills with Java and my interests in web development. That brings me back to my original question: What is the most common way that Java is used in web development?

    Read the article

  • Is it a bad practice to include all the enums in one file and use it in multiple classes?

    - by Bugster
    I'm an aspiring game developer; I work on occasional indie games, and for a while I've been doing something that seemed like a bad practice at first, but I really want an answer from some experienced programmers here. Let's say I have a file called enumList.h where I declare all the enums I want to use in my game:

        // enumList.h
        enum materials_t { WOOD, STONE, ETC };
        enum entity_t { PLAYER, MONSTER };
        enum map_t { MAP_2D, MAP_3D }; // enumerator names cannot begin with a digit
        // and so on.

        // Tile.h
        #include "enumList.h"
        #include <vector>

        class tile {
            // stuff
        };

    The main idea is that I declare every enum in the game in one file, and then include that file wherever I need a certain enum, rather than declaring it in the file where it is used. I do this because it keeps things clean: I can access every enum in one place rather than having files open solely to access one enum. Is this a bad practice, and can it affect performance in any way?
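    For contrast, a minimal sketch of the alternative layout, assuming C++11 scoped enums (all names here are hypothetical): each domain gets its own small header, so touching one enum no longer forces a rebuild of everything that includes enumList.h.

        // materials.h - only code that deals with materials includes this
        #pragma once
        enum class Material { Wood, Stone, Metal };

        // map.h - likewise scoped to map-related code
        #pragma once
        enum class MapType { TwoD, ThreeD };

        // Tile.h
        #pragma once
        #include "materials.h" // pull in only what this class actually uses

        class Tile {
            Material material; // enum class keeps enumerators out of the enclosing scope
        };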

    Read the article

  • How do I deal with content scrapers? [closed]

    - by aem
    Possible Duplicate: How to protect SHTML pages from crawlers/spiders/scrapers? My Heroku (Bamboo) app has been getting a bunch of hits from a scraper identifying itself as GSLFBot. Googling for that name produces various results from people who've concluded that it doesn't respect robots.txt (e.g., http://www.0sw.com/archives/96). I'm considering updating my app to keep a list of banned user-agents, adding GSLFBot to that list, and answering all requests from those user-agents with a 400 or similar. Is that an effective technique, and if not, what should I do instead? (As a side note, it seems weird for an abusive scraper to use a distinctive user-agent.)
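    On a stack fronted by Apache, the user-agent blacklist could be a short mod_rewrite rule like the sketch below (BadBot is a placeholder; on Heroku there is no .htaccess layer, so the equivalent check would live in application middleware instead):

        RewriteEngine On
        # Refuse any client whose User-Agent matches the banned list (403 Forbidden)
        RewriteCond %{HTTP_USER_AGENT} (GSLFBot|BadBot) [NC]
        RewriteRule .* - [F,L]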

    Read the article

  • What is the impact of a CMS on page load time versus a static site?

    - by PleaseStand
    I am creating a 20-page site that will go on shared hosting. Each page will be about 20 KB (including HTML, CSS, and images common to all pages). To avoid manually adding navigation elements to each page, I am considering using a CMS. However, I am concerned that on a busy server, using a CMS would make the site load more slowly. In a shared hosting environment where PHP is run as a CGI binary, how much does a CMS (WordPress, Drupal, etc.) generally affect page load time, compared to both "plain HTML" static sites and those using PHP as merely a templating language?
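    For a sense of the baseline being compared against, "PHP as merely a templating language" might look like this sketch (header.php and footer.php are hypothetical shared fragments); each request costs a couple of file includes, versus the many includes and database queries a full CMS typically performs:

        <?php include 'header.php'; // shared <head>, CSS links, and navigation ?>
        <h1>About Us</h1>
        <p>Static page content goes here.</p>
        <?php include 'footer.php'; // shared footer markup ?>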

    Read the article

  • Tab navigation and duplicate content

    - by Guisasso
    I have a website in which I use tabs to navigate between pages. For example, page A displays A as the active tab with B and C as background tabs. If the visitor reaches the website via page B, I would also like to display a tab for page D, but not for A and C. Question: I know I can just create an index2 version of B, for example, so that I display tabs A, B, and C when the visitor reaches B from A, and an index1 version when the visitor reaches B from D. Is that a bad practice? I know duplicate content isn't good, but how else can or should I approach this problem? The tab navigation I designed uses <li> elements and an id attribute to mark the active tab, defined in the <body> tag.
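    If both versions of page B do get published, one common way to keep the duplicate URLs from competing in search results is to declare a single canonical URL in the <head> of each variant (a sketch; the URL is a placeholder):

        <link rel="canonical" href="http://www.example.com/page-b.html" />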

    Read the article

  • Should the English website use hreflang="x-default" when it doesn't auto-redirect to the user's language or country?

    - by Noam
    For each URL on my site, I'm auto-redirecting according to the Accept-Language header. The site architecture is: English version: http://mydomain.com/page; Spanish version: http://es.mydomain.com/page; etc. The English version is displayed unless the header specifies a language other than en that I support, in which case a redirect occurs. Google says this: "For language/country selectors or auto-redirecting homepages, you should add an annotation for the hreflang value "x-default" as well." My pages aren't language selectors, nor are they the homepage, but I am auto-redirecting. My question is: should my English version be hreflang="x-default", hreflang="en", or both?
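    Google's guidance would translate into annotations along these lines on every page, using the question's own domains (a sketch; note that en and x-default can legitimately point at the same English URL, so the answer can be "both"):

        <link rel="alternate" hreflang="en" href="http://mydomain.com/page" />
        <link rel="alternate" hreflang="es" href="http://es.mydomain.com/page" />
        <link rel="alternate" hreflang="x-default" href="http://mydomain.com/page" />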

    Read the article

  • Getting the masked URL values in MediaWiki

    - by Kalai
    I have successfully masked the URL in MediaWiki, using the following in the .htaccess and LocalSettings.php files:

        # .htaccess
        Options +FollowSymLinks
        RewriteEngine on
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)/(.*)$ /mediawiki/index.php?title=$1&actions=$2 [L]

        # LocalSettings.php
        $wgScriptPath = "/lib/mediawiki";
        $wgArticlePath = "/lib/mediawiki/$1/$2";

    It is working fine with the required URL, but my problem is that I want to treat the second parameter as a query string for my pages, and I cannot get it in my file. I tried the $wgRequest function, but it only gives the first parameter, as title. I also tried $_REQUEST; it sometimes gives the value of $_REQUEST['actions'], but often it does not. I can't understand what the problem is.
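    Two things may help here. First, stock MediaWiki only substitutes $1 in $wgArticlePath, so the $2 in that setting is most likely ignored; the rewrite rule alone has to carry the second segment. Second, since the rule maps that segment to the actions query parameter, reading it through MediaWiki's request wrapper should be more dependable than raw $_REQUEST (a sketch, assuming it runs inside a hook or extension function):

        <?php
        function readActionsParam() {
            global $wgRequest;
            // getVal() returns the query parameter, or the given default if it is absent
            return $wgRequest->getVal( 'actions', 'view' );
        }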

    Read the article

  • Can I include a robots meta tag outside of the head in HTML snippets intended to be SSIed?

    - by Dan
    I have a number of files in my site which are not intended for independent viewing, but rather to be AJAXed into content within the site. They obviously don't meet HTML standards (no body, head, etc.) as independent entities. I would like to prevent search engines from indexing these pages, but do not have access to /robots.txt (which would be much more ideal). My question is, could I include the following at the top of these partial HTML files and get the desired results? <meta name="robots" content="noindex, noarchive"> I guess there are two parts to this question. Will this cause any rendering issues in any browsers? Will search engines (at least Google & Bing) interpret this as intended?
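    If the host allows per-directory Apache configuration, an alternative that sidesteps the invalid-markup question entirely is to send the directive as an HTTP header with mod_headers (a sketch; the .inc extension is a placeholder for however the partial files are actually named):

        <FilesMatch "\.inc$">
            Header set X-Robots-Tag "noindex, noarchive"
        </FilesMatch>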

    Read the article

  • Single responsibility principle - am I overusing it?

    - by Tarun
    For reference - http://en.wikipedia.org/wiki/Single_responsibility_principle. I have a test scenario in which one module of an application is responsible for creating ledger entries. There are three basic tasks that can be carried out: view existing ledger entries in table format; create a new ledger entry using the create button; and click on a ledger entry in the table (mentioned in the first point) to view its details on the next page, where you can also nullify the entry. (There are a couple more operations/validations on each page, but for the sake of brevity I will limit it to these.) So I decided to create three different classes - LedgerLandingPage, CreateNewLedgerEntryPage, ViewLedgerEntryPage. These classes offer the services that can be carried out on those pages, and the Selenium tests use them to bring the application to a state where I can make certain assertions. When I had the design reviewed with one of my colleagues, he was overwhelmed and asked me to make one single class for all of it. Though I still feel my design is cleaner, I am left doubting whether I am overusing the Single Responsibility Principle.
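    For what it's worth, a minimal sketch of how the first of those classes might look as a Selenium page object in Java (the locators and method names are hypothetical):

        import org.openqa.selenium.By;
        import org.openqa.selenium.WebDriver;

        // One class per page; each method returns the page object the action leads to.
        public class LedgerLandingPage {
            private final WebDriver driver;

            public LedgerLandingPage(WebDriver driver) {
                this.driver = driver;
            }

            public CreateNewLedgerEntryPage clickCreate() {
                driver.findElement(By.id("create-ledger-entry")).click();
                return new CreateNewLedgerEntryPage(driver);
            }

            public ViewLedgerEntryPage openEntry(String entryId) {
                driver.findElement(By.linkText(entryId)).click();
                return new ViewLedgerEntryPage(driver);
            }
        }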

    Read the article

  • Why does Google ignore my links page?

    - by Yaniv
    I have a website where I'm loading all the data via AJAX. Since Google doesn't work with AJAX, and the ways to make a site AJAX-friendly are a bit odd, I thought that creating a links page - linking, from the server side, to all the URLs that I load via AJAX - would solve the problem. Unfortunately, that doesn't seem to work. Google Webmaster Tools shows that even though my links page was discovered, its content - the links - is totally ignored. I can only assume that Google tends to ignore links on such pages. My question is: why? And furthermore, how can I overcome this? Thanks.
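    One thing worth trying alongside the links page is submitting the AJAX-loaded URLs directly in an XML sitemap, so discovery no longer depends on Google crawling the links page at all (a sketch; the URLs are placeholders):

        <?xml version="1.0" encoding="UTF-8"?>
        <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
          <url><loc>http://www.example.com/items/1</loc></url>
          <url><loc>http://www.example.com/items/2</loc></url>
        </urlset>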

    Read the article

  • Common reasons for the 'Sys is undefined' error in ASP.NET Ajax applications

      In this blog I will try to summarize the most common reasons for getting the famous 'Sys is undefined' error when running an Ajax-enabled web site or application (there are almost one million results on Google for that phrase). Where does it come from? In every Ajax web page's source you will see code like this:

          <script type="text/javascript">
          //<![CDATA[
          Sys.WebForms.PageRequestManager._initialize('ScriptManager1', document.getElementById('form1'));
          Sys.WebForms.PageRequestManager.getInstance()._updateControls([], [], [], 90);
          //]]>
          </script>

      This is the initialization script of the ScriptManager. So, if for some reason the Sys namespace is not available when the code executes, you get the 'Sys is undefined' error. Here are the most common reasons and solutions for that problem:

      1. The error occurs when you have added a control from RadControls for ASP.NET AJAX, but your application is not configured to use ASP.NET AJAX. For example, in VS 2005 you created a new Blank Site instead of a new Ajax-Enabled Web Site, and the 'Sys is undefined' message pops up. To fix it, follow the steps described in the Configuring ASP.NET Ajax article (check the topic called Adding ASP.NET AJAX Configuration Elements to an Existing Web Site) or simply create an Ajax-Enabled Web Site. You can also check my other blog post on the matter: Visual Studio 2008: Where is the new ASP.NET Ajax-Enabled Web Site template?

      2. Authentication - as the website denies access to all pages to unauthorized users, access to the Telerik.Web.UI.WebResource.axd handler is unauthorized (this is the default handler of RadScriptManager). This causes the handler to serve the content of the login page instead of the combined scripts, hence the error. To solve it, add a <location> section to the application configuration file to allow access to Telerik.Web.UI.WebResource.axd for all users, like:

          <configuration>
            ...
            <location path="Telerik.Web.UI.WebResource.axd">
              <system.web>
                <authorization>
                  <allow users="*"/>
                </authorization>
              </system.web>
            </location>
            ...
          </configuration>

      Note that access to the standard ScriptResource.axd and WebResource.axd is automatically allowed for all users (authenticated and unauthenticated), so if you use the ScriptManager instead of RadScriptManager you will not face this problem. The authentication problem does not manifest when you disable script combining or use the CDN. Adding the above configuration section will make it work with RadScriptManager's combined script.

      3. The IE6 browser fails to load the compressed script. The problem does not appear in any other browser. There is a well-known bug in older versions of IE6 which causes them to lose the first 2,048 bytes of data sent back from a web server that uses HTTP compression. The latest versions of RadScriptManager do not compress the output at all if the client is IE6, but in previous versions you need to manually disable the output compression to prevent the error. So, if you get the 'Sys is undefined' error in IE6, update to the latest version of RadControls or simply disable the output compression.

      4. Requests to the *.axd files return Error Code 404 - Not Found. This can be fixed easily: check in the IIS management console that the .axd extension (the default HTTP handler extension) is allowed, and that its "Verify that file exists" checkbox is unchecked. More information can be found in our troubleshooting article and in the ASP.NET QA team's blog post.

      5. The virtual directory in IIS is not marked as a Web Application. Converting it to a Web Application should fix the problem.

      6. Check for the <xhtmlConformance mode="Legacy"/> option in your web.config and remove it. It would be rather rare to become a victim of this exact case, but still keep it in mind. Scott Guthrie describes it in more detail.

      In the points above I mentioned the terms web resources, JavaScript output, and compressed script several times. If you want to find out more about these, please see the Web Resources Demystified series by my friend and colleague Atanas Korchev. I hope that one of the above solutions will help you get rid of the 'Sys is undefined' error.

    Read the article

  • How can I clone or mirror a site without SEO penalties for duplicate content?

    - by Amanda
    I am a web developer and I want to create clones of the sites I've developed for clients, so that I have an "original copy" on a subdomain of my own website to showcase my work to new clients. What is the best way to avoid getting my clients' original websites penalised for duplicate content? I am planning to have a robots.txt file that disallows all robots, as well as using <link href="http://www.client-canonical-site.com/" rel="canonical" /> in the <head> of the pages. Is that sufficient? Should I use rel=nofollow on all the links as well?
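    The robots.txt described here, served from the root of each showcase subdomain, would be just:

        User-agent: *
        Disallow: /

    One caveat worth noting: if crawling is blocked this way, engines will never fetch the cloned pages and so will never see the canonical tags inside them, which means the robots.txt is doing essentially all of the work on its own.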

    Read the article

  • How mature is FreeBASIC?

    - by David
    A friend of mine is considering using FreeBASIC in a critical production environment. They currently use GW-BASIC, and they want to make a soft transition towards more modern languages. I am just worried that there might be undetected bugs in the software. I see that the version number is 0.22.0, which indicates that it is not quite mature yet. I also read this discussion, without being able to draw a conclusion. Also, on their SourceForge pages there is no indication of whether it is Alpha or Beta (which anyway is not a very good indicator). Does anyone have first-hand experience of its maturity, ideas on how to judge the maturity, or knowledge of companies using FreeBASIC in a critical production environment?

    Read the article

  • The Gates Books – Finished

    - by MarkPearl
    Today I can finally say that I finished both of my Bill Gates books that had been lying on the shelf for several years. The books were… The Road Ahead (1995) Business @ the Speed of Thought (1999) I enjoyed “The Road Ahead”, purely because it was fun to read about someone looking into the future at technology, while I could read it looking at the past. In fact I was quite impressed with how much he got right and it was also nice to remember “The good old days”. Business @ the Speed of Thought was a harder read for me. The book still had some good insights, but was tough going at times (possibly because it was several hundred more pages than the first). All that being said, I can now finally place them back on the shelf knowing that they have been read.

    Read the article

  • Wireless very slow. Changed mode from Infrastructure to Ad-hoc

    - by Dan
    This is my first time using Linux, so go easy on me :) I downloaded and installed Mint 11 a couple of days ago and I was having trouble connecting to the internet over wifi. It says it connects, but it does not load any web pages. I tried to get help from other owners of this laptop model (Asus G73SW), who mostly use Ubuntu, but they said they never had any problem right from install. So I decided to try Ubuntu 11.04. Same thing. BUT now I get internet after going into the Network Connection editor and changing the Mode from Infrastructure to Ad-hoc. Pages load, but very slowly. And if I'm not mistaken, as I was writing this paragraph I might have been disconnected, even though the wifi bar is full. Please help this newb, because I really want to keep using Ubuntu, but I might just have to go back to W7 if nothing can be done. Thank you!

    Read the article

  • What's the Quickest and Cheapest Solution to Set Up an Affiliate Program for an Online Product?

    - by szahn
    I have a simple HTML landing page set up for an online product I want to sell. This product is a hardcover book. I want to be able to allow other people to set up their own landing pages and make a percentage of the sale from their site. What are some good payment processors or payment gateways that make setting up an affiliate system easy and fast? Clarification: when someone purchases an item, I want the payment processor (whatever it is) to automatically route a percentage of that payment to the affiliate and the rest to the original author. Are there any payment frameworks that already do this? I've found a few sites that let you do this, but they seem to restrict you to digital purchases only. However, my site is selling a shippable product, and the affiliate system needs to support this.

    Read the article

  • Do search engines rank internal redirects negatively?

    - by siverd
    A client is in the late stages (code complete) of a website redesign and unfortunately hasn't implemented 301 redirects to point high-traffic pages to the new URLs. As I understand it, our only option at this point is to create redirects within the CMS. Our CMS allows us to do this: www.mysite.com/category/current-page.html will redirect to www.mysite.com/new-category-name/new-page.html. The site now uses custom logic on our 404 page to check this list of redirects, and if one exists, forwards the user to new-page.html. I understand that using 301 redirects would be the correct way to maintain our page rank, but I think that would require a code change, which isn't possible. Question: How will search engines respond to this? Will they wait until the redirect happens and allow us to keep our page rank (authority, trust, etc.), or will they see the 404 page and down-rank us? Worst case... will they make our new-page.html start from a rank of "0"? Thanks for your help.
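    Search engines key off the status code the custom page sends, so the crucial detail is whether it answers 404 or 301. If the CMS's 404 logic runs before any output is sent, it can turn the internal lookup into a true 301 (a PHP-flavoured sketch; $redirects is a hypothetical map maintained in the CMS):

        <?php
        // Hypothetical lookup table of old paths to new paths.
        $redirects = array(
            '/category/current-page.html' => '/new-category-name/new-page.html',
        );

        $path = parse_url($_SERVER['REQUEST_URI'], PHP_URL_PATH);
        if (isset($redirects[$path])) {
            // Third argument sets the HTTP status, making this a real 301
            header('Location: ' . $redirects[$path], true, 301);
            exit;
        }
        header('HTTP/1.0 404 Not Found'); // genuinely missing page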

    Read the article

  • Plug-in or framework recommendation for showing content preview fly-over in CMS

    - by Michael Huang
    The requirement is to have either a front-end plug-in or a back-end processor that generates previews of content (such as images, videos, PDF files, and HTML pages) in a popup when the user's mouse hovers over the content. I did some research on this; it seems there are assorted jQuery plugins for each type of file, but what we are looking for is a framework that handles all types of file previews. Ideally, we want to generate preview images on the back end, considering the cost of retrieving content on the front end. I did find some open-source and proprietary CMSes that provide this feature, but usually they ship as one suite and the API for file previews may not be open. Is there any Java or jQuery framework that handles file previews? Thanks a lot.
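    On the front end, the wiring for backend-generated thumbnails could be as small as this jQuery sketch (the data-preview attribute and the preview-popup class are hypothetical markup conventions):

        // Show a server-generated preview image on hover (hypothetical markup).
        $('a.has-preview').hover(
            function () {
                var url = $(this).data('preview'); // e.g. /previews/doc42.png
                $('<img class="preview-popup">').attr('src', url)
                    .appendTo('body')
                    .css({position: 'absolute',
                          left: $(this).offset().left,
                          top: $(this).offset().top + 20});
            },
            function () { $('.preview-popup').remove(); }
        );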

    Read the article

  • Optimization of a Hybrid Pagination Scheme

    - by Kaustubh Karkare
    I'm working on a web application using node.js in which I'm building a partial copy of the database on the client side to decrease the load on my server. Right now, I have a function like this (expressed as Python-style pseudocode, but implemented in JavaScript):

        get(table_name, primary_key):
            if primary_key in cache[table_name]:
                return cache[table_name][primary_key]
            else:
                x = get_data_from_server(table_name, primary_key)  # socket.io
                return cache[table_name][primary_key] = x

    While this scheme works perfectly well for caching individual rows, I'd like to extend it to support the creation of paginated tables ordered according to the primary_key, loading additional data via the above function only for the current and possibly the adjacent pages. Now, I don't want to keep the list of primary keys on the server, to be retrieved every time I need to change the page (which, for reasons beyond the scope here, will be very frequent), and keeping it on the client side, subject to real-time create/delete events from the server, doesn't seem that good an idea, even after compression (using ranges instead of individual values). What is the best way to calculate which items are to be displayed on a random page, minimizing the space requirements and the need for communication with the server?

    Read the article

  • Where have the Direct3D 11 tutorials on MSDN gone?

    - by Cam Jackson
    I've had this tutorial bookmarked for ages. I've just decided to give DX11 a real go, so I've gone through that tutorial, but I can't find the next one in the series! There are no links from that page either to the next in the series or back up to the table of contents that lists all of the tutorials. These are just companion tutorials to the samples that come with the SDK, but I find them very helpful. Searching MSDN from Google and from the MSDN Bing search box has turned up nothing; it's as if they've removed all links to these tutorials, but the pages are still there if you have the URLs. Unfortunately, MSDN URLs are akin to YouTube URLs, so I can't just guess the URL of the next tutorial. Does anyone have any idea what happened to these tutorials, or how I can find the others?

    Read the article

  • Testing HTML5 and javascript code for iPhone and Android devices

    - by Pankaj Upadhyay
    I have developed a simple HTML5 web page that uses a JavaScript file. This is a fun learning page, so I wanted to know how it will show up on mobile devices like iPhone and Android smartphones. The pages are hosted on a server and I have tested them on my desktop. But how can I test the same for these mobile devices, i.e., how the page will look on mobile? I don't have an iPhone or an Android phone. There is no serious development going on here, so I was wondering whether there is a free website or tool that acts as an iPhone or Android browser. The main aim is just to see how the web page will show up on an Android phone.

    Read the article

  • XHTML fix solution republished

    - by TATWORTH
    As a post-VS2010 SP1 installation activity, I am recompiling all my open-source projects. The first is XHTMLFIX at http://xhtmlfix.codeplex.com/. This LGPL project has simple fixes for ASP.NET 2.0/4.0 to achieve XHTML compliance as measured by the W3C tests at http://validator.w3.org/. The XHTML project shows to be untrue the commonly held belief that MVP or MVC is necessary for producing XHTML-compliant web pages. Incidentally, the other supposed advantage of MVP and MVC over web forms - easier testing - is also very dubious, as web forms can be tested by systems such as Selenium or WaTiN. I have used NUnitASP (sadly discontinued) with web forms and found it to be more effective than unit testing MVP. Now if you prefer the MVP and/or MVC approach over web forms, then fine; that is your preference. But if you can find an example where properly written ASP.NET 4.0 web forms do not produce XHTML-compliant markup, I would be glad to see it and will look at ways of modifying the markup to be XHTML compliant.

    Read the article

  • Unable to connect to internet

    - by Peter Schriver
    I just upgraded to 12.10. Everything appeared to go well, but I am unable to connect to the internet. The LAN is fine and I can view my shared files, but no e-mail or access to web pages. My other computer, which is running 12.04 or older, seems to connect fine. I only upgraded one to make sure things would be OK. I am currently connecting fine with a wireless Windows laptop. Output from cat /etc/resolv.conf:

        # Generated by NetworkManager
        nameserver 127.0.0.1

    Read the article

  • 301 redirects for a mirrored domain

    - by Dave
    I'm redesigning a site for a friend on my localhost. His old site is .asp-based and we're replacing it with a WordPress site on LAMP hosting. The old site sits on domain A, and another domain, domain B, is parked on top of it, mirroring it. Google has picked up domain B for most of his search engine results, while Yahoo and Bing have picked up domain A. The plan is to 301 redirect the old pages of his site on domain A to the new WordPress versions and park domain B on top of it like before. My question is: will this work, and if not, what would be a better way to approach it? We'd prefer not to lose any of the search engine listings in the redesign, and the search engines don't appear to have penalized him for duplicate content. Thanks very much in advance!
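    The per-page redirects could then be plain mod_rewrite rules in the new site's .htaccess, placed before WordPress's own rules (a sketch with hypothetical paths):

        RewriteEngine On
        # Map old .asp URLs to their WordPress equivalents with a permanent redirect
        RewriteRule ^about\.asp$ /about/ [R=301,L]
        RewriteRule ^products\.asp$ /products/ [R=301,L]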

    Read the article

  • Google penalizes the ranking of sites stuffed with ads; by obstructing access to content, they are treated as spam

    Google penalizes the ranking of sites stuffed with ads. By obstructing access to content, they are treated as spam. Matt Cutts, head of Google's anti-spam team, came to the PubCon conference with good news and bad news. The search engine now penalizes the ranking of sites so stuffed with ads that reaching the pages' relevant content becomes difficult. What counts, Cutts explained in his keynote, is "how much content is above the fold [...] If you have ads obscuring your content, you had better rethink it", suggesting that sites compli...

    Read the article
