Search Results

Search found 52299 results on 2092 pages for 'web reference'.

Page 267/2092 | < Previous Page | 263 264 265 266 267 268 269 270 271 272 273 274  | Next Page >

  • Best practice for SEO "special characters" in products pages

    - by rhodesit
    What's the best practice for creating websites given that I need to enter "ö" within the content/title/meta? Should I spell it without the special character and just use a "normal" one, do I put in this code everywhere, or do I spell it half the time with and half the time without? What's the best practice for SEO? Google takes into account user intent, which makes things complicated (in my mind). The user will be searching without the "special characters", but because of the whole "user intent" thing, I don't know what the best practice for this situation is. Should I use a mix of both spellings? Should I use the special characters in anchor text/headers/title/meta description?

    Read the article

  • Using Dynamic LINQ to get a filter for my Web API

    - by Espo
    We are considering using the Dynamic.CS linq-sample included in the "Samples" directory of visual studio 2008 for our WebAPI project to allow clients to query our data. The interface would be something like this (In addition to the normal GET-methods): public HttpResponseMessage List(string filter = null); The plan is to use the dynamic library to parse the "filter"-variable and then execute the query agains the DB. Any thoughts if this is a good idea? Is it a security problem?
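
    For reference, a minimal sketch of how the Dynamic.cs sample is typically wired into such an action (the Product entity and the data-access stub are placeholders, not from the question):

        using System.Linq;
        using System.Linq.Dynamic;   // the Dynamic.cs sample file, which defines these extensions
        using System.Net;
        using System.Net.Http;
        using System.Web.Http;

        public class Product   // placeholder entity
        {
            public int Id { get; set; }
            public string Name { get; set; }
            public decimal Price { get; set; }
        }

        public class ProductsController : ApiController
        {
            public HttpResponseMessage List(string filter = null)
            {
                IQueryable<Product> query = GetProducts();

                if (!string.IsNullOrWhiteSpace(filter))
                {
                    try
                    {
                        // Dynamic.cs parses e.g. "Price > 100 and Name.Contains(\"x\")" into an
                        // expression tree; the string is never concatenated into SQL
                        query = query.Where(filter);
                    }
                    catch (ParseException)
                    {
                        return Request.CreateResponse(HttpStatusCode.BadRequest, "Invalid filter");
                    }
                }

                return Request.CreateResponse(HttpStatusCode.OK, query.ToList());
            }

            // stand-in for the real ORM/repository the API would use
            private IQueryable<Product> GetProducts()
            {
                return new Product[0].AsQueryable();
            }
        }

    Because the filter is parsed into a LINQ expression tree rather than passed to the database as text, classic SQL injection is not the main exposure; the concerns usually raised with this approach are malformed or deliberately expensive filters and clients querying properties you did not mean to expose, which argues for the try/catch, paging and a whitelist of filterable fields.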

    Read the article

  • Is programming in layers real?

    - by Aura
    I am fairly new to product development and I am trying to work on a product. The problem I have realized is that people draw diagrams and charts showing different modules and layers. But as I am working alone (I am my own team), I got a bit confused about the interactions I am facing during development within the programs, and I am wondering whether developing a product in modules is real or not. Maybe I am not a great programmer, but I see no boundaries when data starts to travel from frontend to backend.

    Read the article

  • Can I host a high traffic website at home?

    - by eric01
    I've been searching this on Google but I can't formulate my search with the right terms to find an accurate answer to my question. Is it possible to have a super-fast connection at home to host a high-traffic website? What is the generic term for that kind of connection? What's the major drawback of hosting at home? (I have no idea of the price range, but it's probably quite expensive.) Do you have to be a company to have the right to own such a connection?

    Read the article

  • What to use for "localhost" that includes PHP/SQL functionality?

    - by Jack
    I really do not want to install Linux at the moment; I would have to borrow a USB key, move files, format it, flash it, format it again, move files back, give it back... What would you recommend that is lightweight, easily and cleanly uninstallable afterwards (I will install Linux when I get a new DVD-ROM, which will be in ~2 weeks), and that also supports PHP and SQL? To be precise, I want to install a WordPress blog, a few plugins, etc., and develop a theme. If there is no such thing for Windows (7, x64 if that matters), let me know too; I will borrow the USB key then (even though it's a pain).

    Read the article

  • worth learning c# before Visual Web Developer 2010 [closed]

    - by Jamie Knott
    I've been trying to learn ASP.NET from reading "Beginning ASP.NET 4 with C#" and have been finding it hard to get a solid grasp on the code involved. I plan to go to TAFE sometime next year to get my diploma, but I want to start by myself. Instead of learning ASP.NET as a whole and all the languages involved, such as C#, HTML, CSS and JavaScript, I'm starting to think a solid understanding of at least one of these might be beneficial. I have "Beginning C# Object-Oriented Programming" (Clark, Apress). Is it worth learning about the languages before I go head first into an IDE?

    Read the article

  • Issue in Webscrapping in C# : Downloading and parsing zipped text files

    - by user64094
    I am writing a web scraper to download content from a website. Navigating to the website/URL triggers the creation of a temporary URL. This new URL points to a zipped text file. This zipped file is to be downloaded and parsed. I have written a scraper in C# using WebClient and its DownloadFileAsync() method. The zipped file is read from the designated location in a trapped DownloadFileCompleted event. My issue: the Windows Open/Save dialog is triggered. This requires user input and the automation is disrupted. Can you suggest a way to bypass the issue? I am cool with rewriting the code using any alternate libraries. :) Thanks for reading,
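
    The question leaves open what exactly triggers the Open/Save dialog, but WebClient itself never shows one - that dialog comes from driving an actual browser (or letting Windows open the URL). A minimal sketch of fetching the temporary URL directly and parsing the zipped text entirely in code (the URL is a placeholder, and ZipArchive assumes .NET 4.5+ and a real .zip container):

        using System;
        using System.IO;
        using System.IO.Compression;   // requires a reference to System.IO.Compression.dll
        using System.Net;

        class ZipScraper
        {
            static void Main()
            {
                // placeholder - in the real scraper this is the temporary URL the site generates
                string tempUrl = "http://example.com/generated/export.zip";

                using (var client = new WebClient())
                {
                    // DownloadData (or DownloadFileAsync, as in the question) speaks HTTP
                    // directly, so no browser Open/Save dialog is ever involved
                    byte[] zipBytes = client.DownloadData(tempUrl);

                    using (var archive = new ZipArchive(new MemoryStream(zipBytes)))
                    {
                        foreach (ZipArchiveEntry entry in archive.Entries)
                        {
                            using (var reader = new StreamReader(entry.Open()))
                            {
                                string text = reader.ReadToEnd();
                                // parse the text file contents here
                                Console.WriteLine("{0}: {1} characters", entry.FullName, text.Length);
                            }
                        }
                    }
                }
            }
        }

    If the temporary URL only works with the session cookies from the initial navigation, an HttpWebRequest with a CookieContainer (or a WebClient subclass that attaches one) is the usual workaround.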

    Read the article

  • Dedicated server: managed hosting or manage it myself?

    - by ddawber
    We're currently hosting a number of sites on a self-managed dedicated server. Some companies, however, offer a managed dedicated server hosting service. They offer:
    - Roughly the same server spec
    - Ticketing system support
    - Managed daily backups
    - Virtual firewall (but with a limit of 10 IP addresses allowed through at any one time)
    Now, this managed hosting comes at extra expense - somewhere in the region of $500 per month - and the limit on the number of IP addresses they'll manage on the firewall is also a real pain. My thinking is it would be better and cheaper to:
    - Stay with the same host, since the dedicated box is fine
    - Get an Amazon AWS account and use their servers to manage backups; there are a number of good tools that can be used to automate the process
    - Configure iptables so that I have complete control of the firewall
    I want to know:
    - Is a managed virtual firewall likely to be more secure than me configuring iptables?
    - Whether, in your opinion, it's best to let someone else take care of backups?
    - If, from your experience, there's anything else I'm missing that warrants using managed hosting over a DIY service?
    I think there is some reluctance to give up managed hosting, since a managed host in effect takes responsibility for your server, whereas any hardware or security issues with a server that we manage ourselves would mean we are forced to hold our hands up when a client site goes down. That said, I personally don't think a managed host does that much in the day-to-day running of your server (backups are automatic, OS updates are carried out with ease, etc.).

    Read the article

  • Redirect Google crawler to different robots.txt via .htaccess

    - by user3474818
    I have googled for the answer all day and still couldn't find one. I have a virtual subdomain www.static.example.com which is a mirror site of www.example.com. It means I have just one root folder for the subdomain and the domain as well. I want to redirect crawlers to a different robots.txt file - robots_static.txt - whenever they see .static in the URL, and in that file I will forbid indexing via a Disallow rule. I want to do this because I have duplicated content in Google search results: the subdomain is showing the exact same content as the main domain. Does anyone know how I could make crawlers see robots_static.txt instead of robots.txt? What I have managed to find so far is this:

        RewriteCond %{HTTP_HOST} ^www.static.*$ [NC]
        RewriteCond %{THE_REQUEST} ^[A-Z]{3,9}\ /.*robots\.txt.*\ HTTP/ [NC]
        RewriteRule ^robots\.txt /robots_static.txt [NC,L]

    but when I check in Webmaster Tools, it still sees robots.txt as my robots file instead of robots_static.txt, so it crawls and indexes everything twice. What did I do wrong? Thanks

    EDIT: This is my .htaccess file:

        ##
        # @package Joomla
        # @copyright Copyright (C) 2005 - 2013 Open Source Matters. All rights reserved.
        # @license GNU General Public License version 2 or later; see LICENSE.txt
        ##

        ##
        # READ THIS COMPLETELY IF YOU CHOOSE TO USE THIS FILE!
        #
        # The line just below this section: 'Options +FollowSymLinks' may cause problems
        # with some server configurations. It is required for use of mod_rewrite, but may already
        # be set by your server administrator in a way that dissallows changing it in
        # your .htaccess file. If using it causes your server to error out, comment it out (add # to
        # beginning of line), reload your site in your browser and test your sef url's. If they work,
        # it has been set by your server administrator and you do not need it set here.
        ##

        ## Can be commented out if causes errors, see notes above.
        Options +FollowSymLinks

        ## Mod_rewrite in use.
        RewriteEngine On
        RewriteEngine On

        RewriteCond %{HTTP_HOST} !^www\.
        RewriteRule ^(.*)$ http://www.%{HTTP_HOST}/$1 [R=301,L]

        RewriteCond %{HTTP_HOST} ^www.static.*$ [NC]
        RewriteCond %{THE_REQUEST} ^[A-Z]{3,9}\ /.*robots\.txt.*\ HTTP/ [NC]
        RewriteRule ^robots\.txt /robots_static.txt [NC,L]

        ## Begin - Rewrite rules to block out some common exploits.
        # If you experience problems on your site block out the operations listed below
        # This attempts to block the most common type of exploit `attempts` to Joomla!
        #
        # Block out any script trying to base64_encode data within the URL.
        RewriteCond %{QUERY_STRING} base64_encode[^(]*\([^)]*\) [OR]
        # Block out any script that includes a <script> tag in URL.
        RewriteCond %{QUERY_STRING} (<|%3C)([^s]*s)+cript.*(>|%3E) [NC,OR]
        # Block out any script trying to set a PHP GLOBALS variable via URL.
        RewriteCond %{QUERY_STRING} GLOBALS(=|\[|\%[0-9A-Z]{0,2}) [OR]
        # Block out any script trying to modify a _REQUEST variable via URL.
        RewriteCond %{QUERY_STRING} _REQUEST(=|\[|\%[0-9A-Z]{0,2})
        # Return 403 Forbidden header and show the content of the root homepage
        RewriteRule .* index.php [F]
        #
        ## End - Rewrite rules to block out some common exploits.

        ## Begin - Custom redirects
        #
        # If you need to redirect some pages, or set a canonical non-www to
        # www redirect (or vice versa), place that code here. Ensure those
        # redirects use the correct RewriteRule syntax and the [R=301,L] flags.
        #
        ## End - Custom redirects

        ##
        # Uncomment following line if your webserver's URL
        # is not directly related to physical file paths.
        # Update Your Joomla! Directory (just / for root).
        ##
        # RewriteBase /

        RewriteCond %{THE_REQUEST} ^GET.*index\.php [NC]
        RewriteCond %{THE_REQUEST} !/system/.*
        RewriteRule (.*?)index\.php/*(.*) /$1$2 [R=301,L]
        RewriteCond %{THE_REQUEST} ^GET

        ## Begin - Joomla! core SEF Section.
        #
        RewriteRule .* - [E=HTTP_AUTHORIZATION:%{HTTP:Authorization}]
        #
        # If the requested path and file is not /index.php and the request
        # has not already been internally rewritten to the index.php script
        RewriteCond %{REQUEST_URI} !^/index\.php
        # and the request is for something within the component folder,
        # or for the site root, or for an extensionless URL, or the
        # requested URL ends with one of the listed extensions
        RewriteCond %{REQUEST_URI} /component/|(/[^.]*|\.(php|html?|feed|pdf|vcf|raw))$ [NC]
        # and the requested path and file doesn't directly match a physical file
        RewriteCond %{REQUEST_FILENAME} !-f
        # and the requested path and file doesn't directly match a physical folder
        RewriteCond %{REQUEST_FILENAME} !-d
        # internally rewrite the request to the index.php script
        RewriteRule .* index.php [L]
        #
        ## End - Joomla! core SEF Section.

        <FilesMatch "\.(ico|pdf|flv|jpg|ttf|jpg|jpeg|png|gif|js|css|swf)$">
        Header set Expires "Wed, 15 Apr 2020 20:00:00 GMT"
        Header set Cache-Control "public"
        </FilesMatch>

        <ifModule mod_headers.c>
        Header set Connection keep-alive
        </ifModule>

        ########## Begin - Remove Etags
        #
        FileETag none
        #
        ########## End - Remove Etags

    Read the article

  • Box2dWeb positioning relative to HTML5 Canvas

    - by Joe
    I'm new to the HTML5 canvas and Box2DWeb and I'm trying to make an Asteroids game. So far I think I'm doing okay, but one thing I'm struggling to comprehend is how positioning works in relation to the canvas. I understand that Box2DWeb is only made to deal with the physics simulation, but I don't know how to deal with positioning on the canvas. The canvas is 100% of the viewport and thus can vary in size. I want to fill the screen with some asteroids, but if I hardcode certain values such as bodyDef.position.x = Math.random() * 50; the asteroid may appear off canvas for someone with a smaller screen. Can anybody help me understand how I can deal with relative positioning on the canvas?

    Read the article

  • Schema.org vs microformats

    - by Tordek
    They both serve the same purpose: providing a vocabulary for semantic markup. Schema is recognized and standardized... but microformats are open. Schema uses microdata, while microformats rely on classes. (Of note: microdata means that an element must be of a single itemtype, while microformats allow several classes to apply to the same element. I can mark up xFolk+hAtom with classes, but not with microdata.) Is this a black-and-white situation? Google says I can't use both "because it may confuse the parser". What's the consensus on these?

    Read the article

  • Access Control Service v2: Registering Web Identities in your Applications [code]

    - by Your DisplayName here!
    You can download the full solution here. The relevant parts in the sample are:

    Configuration
    I use the standard WIF configuration with passive redirect. This kicks in automatically whenever authorization fails in the application (e.g. when the user tries to get to an area that requires authentication or needs registration).

    Checking and transforming incoming claims
    In the claims authentication manager we have to deal with two situations: users that are authenticated but not registered, and registered (and authenticated) users. Registered users will have claims that come from the application domain; the claims of unregistered users come directly from ACS and get passed through. In both cases a claim for the unique user identifier will be generated. The high-level logic is as follows:

        public override IClaimsPrincipal Authenticate(
            string resourceName, IClaimsPrincipal incomingPrincipal)
        {
            // do nothing if anonymous request
            if (!incomingPrincipal.Identity.IsAuthenticated)
            {
                return base.Authenticate(resourceName, incomingPrincipal);
            }

            string uniqueId = GetUniqueId(incomingPrincipal);

            // check if user is registered
            RegisterModel data;
            if (Repository.TryGetRegisteredUser(uniqueId, out data))
            {
                return CreateRegisteredUserPrincipal(uniqueId, data);
            }

            // authenticated by ACS, but not registered
            // create unique id claim
            incomingPrincipal.Identities[0].Claims.Add(
                new Claim(Constants.ClaimTypes.Id, uniqueId));

            return incomingPrincipal;
        }

    User Registration
    The registration page is handled by a controller with the [Authorize] attribute. That means you need to authenticate before you can register (crazy eh? ;). The controller then fetches some claims from the identity provider (if available) to pre-fill form fields. After successful registration, the user is stored in the local data store and a new session token gets issued. This effectively replaces the ACS claims with application-defined claims without requiring the user to re-sign in.

    Authorization
    All pages that should be reachable only by registered users check for a special application-defined claim that only registered users have. You can nicely wrap that in a custom attribute in MVC:

        [RegisteredUsersOnly]
        public ActionResult Registered()
        {
            return View();
        }

    HTH
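
    The [RegisteredUsersOnly] attribute itself is only referenced in the excerpt (its real implementation ships with the downloadable solution). As a rough sketch of what such an attribute typically looks like, with a made-up claim type URI standing in for the application-defined "registered" claim:

        using System.Linq;
        using System.Web;
        using System.Web.Mvc;
        using Microsoft.IdentityModel.Claims;   // WIF 1.0-era namespace used by the post

        public class RegisteredUsersOnlyAttribute : AuthorizeAttribute
        {
            protected override bool AuthorizeCore(HttpContextBase httpContext)
            {
                var principal = httpContext.User as IClaimsPrincipal;
                if (principal == null || !principal.Identity.IsAuthenticated)
                {
                    return false;
                }

                // registered users carry an application-defined claim that
                // unregistered (ACS-only) users never receive
                const string registeredClaimType = "http://myapp/claims/registered";   // placeholder URI
                return principal.Identities[0].Claims
                                .Any(c => c.ClaimType == registeredClaimType);
            }
        }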

    Read the article

  • PrestaShop install SQL error

    - by Steve
    I am trying to install PrestaShop 1.4.0.17, and reach Step 3. I enter database information, which tests okay, and I choose the second option: Full mode: includes 100+ additional modules and demo products (FREE too!). I choose Next, and receive the error: Error while inserting data in the database: ‘CREATE TABLE `shop_county_zip_code` ( `id_county` INT NOT NULL , `from_zip_code` INT NOT NULL , `to_zip_code` INT NOT NULL , PRIMARY KEY ( `id_county` , `from_zip_code` , `to_zip_code` ) ) ENGINE=’ You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near \’\’ at line 6(Error: : 1064) This happens if I use either MyISAM, or InnoDB. Why is this happening? This also happens if I drop all database tables, and try again in simple mode. Is there a manual installation method?

    Read the article

  • Would form keys reduce the amount of spam we receive?

    - by David Wilkins
    I work for a company that has an online store, and we constantly have to deal with a lot of spam product reviews and bogus customer accounts. These are all created by automated systems and are more of a nuisance than anything. What I am thinking of (in lieu of captcha, which can be broken) is adding a sort of form key solution to all relevant forms. I know for certain some of the spammers are using XRumer, and I know they seldom request a page before sending us the form data (is this the definition of CSRF?), so I would think that tying a key to each requested form would at least stem the tide. I also know the spammers are lazy and don't check their work, or they would see that we have never posted a spam review and they have never gained any revenue from our site. Would this succeed in significantly reducing the volume of spam product reviews and customer account creations we are seeing?

    EDIT: To clarify what I mean by "form keys": I am referring to creating a unique identifier (or "key") that will be used as an invisible, static form field. This key will also be stored either in the database (relative to the user session) or in a cookie variable. When the form's target gets a request, the key must be validated for the form's data to be processed. Those pesky bots won't have the key because they don't load the JavaScript that generates the form (they just send a blind request to the target), and even if they did load the JavaScript once, they'd only have one valid key, and I'm not sure they even use cookies.
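
    The platform behind the store isn't stated, so the following is only a sketch of the issue/validate cycle being described, written here in C#/ASP.NET terms. Note that most web frameworks already ship this mechanism as their anti-forgery/CSRF token support (for example ASP.NET MVC's Html.AntiForgeryToken() with [ValidateAntiForgeryToken]), which is usually preferable to rolling your own.

        using System;
        using System.Web;

        public static class FormKey
        {
            // called while rendering the form: emit the returned value as a hidden field
            public static string Issue(HttpSessionStateBase session)
            {
                string key = Guid.NewGuid().ToString("N");
                session["review-form-key"] = key;
                return key;
            }

            // called when the form is posted: process the submission only if the key
            // matches the one issued for this session; clear it so it cannot be replayed
            public static bool Validate(HttpSessionStateBase session, string submittedKey)
            {
                var expected = session["review-form-key"] as string;
                session["review-form-key"] = null;
                return expected != null && expected == submittedKey;
            }
        }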

    Read the article

  • Marketing Burst Web and Landing Pages

    Marketing Burst was not created by a teenage techno geek without real-world or real-life marketing experience, but by a seasoned professional out of her own need to find simple solutions to the marketing challenges she faced herself. Pam Bennett shares a similar story to many of us who have searched for and spent money on experts who were thought to have the answers.

    Read the article

  • What is the architectural name for the set of data that enables UI choices?

    - by Richard Collette
    I have separate service methods that fetch business object data and the data for UI selection input such as radio buttons, check-boxes, combo-boxes, etc. I want to name my service methods that fetch the selection data appropriately. I am assuming that Model and ViewModel would not be part of the name because the selection data is but a portion of the Model or ViewModel. What might this set of data be named such that I can name my service method?
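
    Purely to make the question concrete, here is an entirely hypothetical shape of the split being described - the interface and method names are invented for illustration, not a naming recommendation:

        using System.Collections.Generic;

        public class SelectionOption   // one entry behind a radio button, check-box or combo-box
        {
            public string Value { get; set; }
            public string Label { get; set; }
        }

        public class Order             // stand-in business object
        {
            public int Id { get; set; }
        }

        public interface IOrderService
        {
            Order GetOrder(int id);                               // business-object data
        }

        public interface IOrderSelectionDataService
        {
            IList<SelectionOption> GetShippingMethodOptions();    // feeds a combo-box
            IList<SelectionOption> GetPaymentTypeOptions();       // feeds radio buttons
        }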

    Read the article

  • Recommendations for a quick and easy discussion forum installation

    - by BeachRunnerJoe
    Hello. I did a couple of quick searches on this topic and both Google and this site yielded poor results, but I was wondering what the quickest way is to set up a discussion forum on my website - preferably one that has a decent admin dashboard. My website is very static (e.g. it has one index.html, a stylesheet, and a JavaScript file) and it will most likely be hosted on GoDaddy.com. The last time I installed a discussion forum was six years ago and it was phpBB. I'm sure that's still an option, but I'm hoping there are better, free, and equally easy alternatives to phpBB. Thanks for your wisdom!

    Read the article

  • Blocking path scanning

    - by clinisbut
    I'm seeing a number of very suspicious requests in my access log:
    /i /im /imaa /imag /image /images /images/d /images/di /images/dis
    They build up towards a known resource (in the above example /images/disrupt.jpg), all coming from the same IP. Request rates vary from 1/sec to 10/sec and seem somewhat random. They are obviously trying to find something, and it seems they are using a script. How do I block this kind of behaviour? I thought of blocking the IP's requests, at least for a given time, keeping in mind that: request intervals seem legitimate (at least I think so), and I don't want to end up blocking a search engine bot, which may hit 404 URLs too (and that's a different problem, I know). Do they always use the same IP?

    Read the article

  • Significant number of non-HTTP requests hitting my site

    - by Mark Westling
    I'm seeing a significant number of non-HTTP requests hitting a site I just launched. They show up in the server (nginx) logs as non-ASCII and get rejected (correctly) with a 400 status. Here are some lines from the log: 95.132.198.189 - - [09/Jan/2011:13:53:30 -0500] "œ$A\x10õœ²É9J" 400 173 "-" "-" 79.100.145.126 - - [09/Jan/2011:13:57:42 -0500] "#§i²¸oYi á¹„\x13VJ—x·—œ\x04N \x1DÔvbÛè½\x10§¬\x1E0œ_^¼+\x09ÜÅ\x08DÌÃiJeT€¿æ]œr\x1EëîyIÐ/ßýúê5Ǹ" 400 173 "-" "-" 79.100.145.126 - - [09/Jan/2011:13:58:33 -0500] "¯Ú%ø=Œ›D@\x12¼\x1C†ÄÀe\x015mˆàd˜Û%pÛÿ" 400 173 "-" "-" What should I make of this? Is this some sort of scripted attack? Or could these be correct requests that have somehow been garbled? They're not affecting the performance of the site and I'm not seeing any other signs of attacks (e.g., no strange POSTs) so at this point I'm more curious than afraid.

    Read the article

  • Creating an online community - use templates or self-develop?

    - by ican ican
    PHPMotion, Joomla or develop my own? I'm thinking of developing a common-interest online community. It will have UGC, stats, etc. functionality, and perhaps an online store. Though cost is an issue at this time, I want to be professional and effective. Should I use an existing free platform, like PHPMotion or Joomla, or should I develop my own? What are the advantages/disadvantages of either option? As a rough estimate, how much will it cost me to develop and manage my own, and how long will it take? In general, what should I be careful about on this journey?

    Read the article

  • ecommerce options for 5-6 products [closed]

    - by user5252
    Possible Duplicate: Which Ecommerce Script Should I Use? We're looking to develop a simple e-commerce solution to sell 5-6 products. We'd rather not have to use PayPal's buttons (Buy It Now!) if there's an existing alternative, but because of budget/time constraints we also don't want to roll our own. Are there any small, basic ecommerce solutions available that would allow this? I did look at FoxyCart but the monthly fee was a bit of a turn-off. (I must sound extremely fussy, I'm aware!) Something like Zen Cart would just be overkill for our needs. Thanks for any suggestions.

    Read the article

  • Chromium web-app creation doesn't work in 12.10?

    - by speter
    If you create an application shortcut for a website in Chromium and choose "Desktop", a .desktop file is created. In 12.04, you were able to move it to ~/.local/share/applications/ and then start it from the Dash or the launcher. In my fresh 12.10 installation, this doesn't work anymore. I think the line that is misinterpreted is Exec=/usr/bin/chromium-browser --app=http://buymeapie.com/. If you try to run it from a terminal, you get the following result: 20:32 ~ speter Exec=/usr/bin/chromium-browser --app=http://buymeapie.com/ bash: --app=http://buymeapie.com/: Datei oder Verzeichnis nicht gefunden (English: "File or directory not found"). Can anyone explain this or suggest a workaround?

    Read the article

  • Hover alternatve for touch devices [migrated]

    - by Joshua Frank
    I'm building a standard infographic where you mouse over a region and the image changes as you move. For instance, imagine a map of the world, and when you mouse over a country, that country glows and a panel shows statistics about that country. The implementation is to have a separate image for the glowing country, and a div element with the statistics, and the code shows these additional elements on a hover over the country. The question is: what should this do on a tablet, where there's no hover event? What's a good alternative navigation metaphor for this kind of situation on touch-only devices?

    Read the article
