Search Results

Search found 9935 results on 398 pages for 'pages'.


  • Oversizing images to produce better-looking pages?

    - by Joannes Vermorel
    In the past, improper image resizing used to be a big no-no of web design (not to mention improper compression formats). Hence, for years I have been sticking to the policy where images (PNG or JPG) are resized on the server to match, pixel for pixel, the resolution they will have on the rendered page. Recently, though, I hastily designed an HTML draft with oversized images, using inline CSS styles such as width:123px and height:123px to resize them. To my (slight) surprise, the page turned out to look much better that way. Indeed, with better screen resolutions, some people (like me) tend to browse with some level of zoom (125% or even 150%), otherwise fonts are just too small on screen. If the image is strictly sized, the enlarged image appears blurry (a pixel-interpolation effect), but if the image is oversized the result is much better. Obviously, oversizing images is not an acceptable pattern if your website is intended for mobile browsing, but are there cases where it would be considered acceptable? Especially if the extra page weight is small anyway.
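
    A minimal sketch of the technique described above (the file name and dimensions are hypothetical; the source image is assumed to be 800x800 pixels, displayed at half size):

        <!-- The browser downscales the oversized image to 400px, so it
             still has spare pixels to stay sharp at 125-150% page zoom. -->
        <img src="photo-800.jpg" alt="Product photo"
             style="width: 400px; height: 400px;" />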

    Read the article

  • Okular (on Ubuntu 9.10) prints multiple pages per sheet (n-up) very small

    - by dgleich
    I'm trying to print a set of beamer slides with multiple slides per page (4-up or 6-up). When I select 4 or 6 pages per sheet in the Okular print dialog, the pages print quite small (tiny, even: about 1.75" by 1.25") and leave significant white space on the page. I can get around this behavior by using the pdfnup utility (in the pdfjam package), which correctly generates a 4- or 6-up PDF file, but it's annoying to generate a second PDF file when I should be able to accomplish this task from the print dialog. Details: Ubuntu 9.10 (Karmic), 64-bit, color PostScript printer.
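
    For reference, the pdfnup workaround looks roughly like this (file names are placeholders; --nup takes columns x rows):

        # 4-up: two columns by two rows per sheet
        pdfnup --nup 2x2 slides.pdf --outfile slides-4up.pdf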

    Read the article

  • Google Webmaster Tools showing 6 pages submitted, 0 indexed, yet I can see them all when I search in Google?

    - by sam
    I have a small 'brochure'-type site with 6 pages; I can see all of the pages showing up in Google when I search for my site. But in Webmaster Tools, under the Sitemaps section, it says 6 submitted (the blue bar of the graph), while the red bar is showing 0 indexed pages, even though they seem to be indexed. Any idea why this is? I don't really think it's that important, as the pages are still indexed, but it just seems odd. UPDATE 9/3/12: having just looked in Google Webmaster Tools, it is showing 11 pages indexed under the Health > Index Status tab, but under the Optimization > Sitemaps tab it shows 6 URLs submitted and only 1 indexed? Please see the images below. Index status: Sitemap status:

    Read the article

  • Java and web pages

    - by Filippo
    Hello everyone, and thank you in advance for the answers. I have a question concerning not how to do something, but which tools to do it with. Let's say I want to write a simple application in Java that connects to a news website (e.g. CNN), parses the HTML document, and prints the news on screen. Another example: my application retrieves and prints soccer results from Eurosports. What do I need for that? External libraries? Or is what I'm looking for already included in Java EE? Could this be helpful? http://jsoup.org/ Thank you everyone again and have a nice day.
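
    A minimal sketch of the kind of scraping jsoup makes possible (the URL and CSS selector below are placeholders; real sites need site-specific selectors):

        import org.jsoup.Jsoup;
        import org.jsoup.nodes.Document;
        import org.jsoup.nodes.Element;

        public class HeadlineScraper {
            public static void main(String[] args) throws Exception {
                // Fetch the page over HTTP and parse it into a DOM
                Document doc = Jsoup.connect("https://example.com/news").get();
                // Print the text of every element matching the selector
                for (Element headline : doc.select("h2.headline")) {
                    System.out.println(headline.text());
                }
            }
        }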

    Read the article

  • Deny access to a folder on hosting server but serve the pages

    - by Sourav
    My hosting server allows me to host multiple websites. The directory structure is like this:

        root
        |_ www.a.com
        |_ www.b.com
        |_ www.c.com
        |_ www.d.com

    I want to put some PHP files in the www.d.com folder so that anyone browsing the site from a web browser can get them, but no one can get their source code (even by logging in to the root folder). Is there any way of doing this? There is a feature called "password protect folder" or so; can it help in this case?
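
    A sketch of one common approach, assuming PHP on this host runs as the site's own user (suEXEC/suPHP or a per-site FPM pool; that assumption needs checking with the host):

        # Remove group/other read access so neighbouring accounts on the
        # shared host cannot open the source; the web server, running as
        # the same user, can still read and execute the scripts.
        chmod 700 ~/www.d.com
        chmod 600 ~/www.d.com/*.php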

    Read the article

  • Value of links on negative review pages

    - by Sam Healey
    A general assumption with SEO is that more links = higher rankings. What I would like to know is whether Google knows what those links are referring to. I.e., if somebody gives a product a good review on their personal blog and links the review to another company's website (which sells the product), would Google take the review/description link into consideration? Essentially, would Google know that this link refers to a product, so that if somebody is looking to buy a product, Google would know to include the linked page because the review said it sells products rather than just having information on products? Then, to take this further, does Google know whether a link is positive or negative? For example, if somebody creates a post saying "do not visit example.com, example.com is bad because of blah blah blah", would Google know that the link is giving bad feedback and therefore have it count negatively towards rankings, or would Google just treat it as another link and rank the target better?

    Read the article

  • CMS for single user-editable pages?

    - by GRardB
    Does anybody know of a CMS where users can edit their own page, and their page only (something similar to about.me, except with more customization/options)? I'm not talking about profiles, but more like an individual web page for people's businesses. I want to be able to give local businesses the opportunity to make a single web page for their business with ease. I have looked at many CMSs, but I can't find anything that offers this type of functionality. I've checked out the following: Unify, Concrete5, Drupal, Simple CMS, CMS Made Simple (and more). If anybody knows a CMS with the functionality I'm looking for, or even a regular CMS with modules/plugins that I could use, that would be awesome. Also: the cheaper, the better :D Thanks, Gerard

    Read the article

  • Redirecting requests for .html pages in subdirectories to the same page in root with .htaccess

    - by Asherion
    I am porting a site from an old version of a CMS to a newer version that uses a different page-addressing scheme. Unfortunately, I'm not very good with .htaccess at all. URL/blog/subblog/article.html is now simply URL/article.html. This will destroy any linking programs they have going, and break all the old links. I need a way to use .htaccess to say: if request = /(any subdirectory)/(string).html, then redirect to /(string).html. If that makes any sense.
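
    A sketch of a mod_rewrite rule for this, placed in the site root's .htaccess (assuming mod_rewrite is enabled; the 301 tells search engines the move is permanent):

        RewriteEngine On
        # /any/number/of/subdirs/page.html  ->  /page.html
        RewriteRule ^.+/([^/]+\.html)$ /$1 [R=301,L]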

    Read the article

  • Marketing Burst Web and Landing Pages

    Marketing Burst was not created by a teenage techno geek with no real-world or real-life marketing experience, but by a seasoned professional out of her own need to find simple solutions to the marketing challenges she faced herself. Pam Bennett shares a story similar to that of many of us who have searched for, and spent money on, experts who were thought to have the answers.

    Read the article

  • Is it a good idea to add robots "noindex" meta tags to deep, low-content pages, e.g. product model data?

    - by Cognize
    I'm considering adding robots "noindex, follow" tags to the very numerous product data pages that are linked from the product style pages in our online store. For example, each product style has a page with full text content on the product: http://www.shop.example/Product/Category/Style/SOME-STYLE-CODE Then many pages with technical data for each model code are linked from the product style page: http://www.shop.example/Product/Category/Style/SOME-STYLE-CODE-1 http://www.shop.example/Product/Category/Style/SOME-STYLE-CODE-2 http://www.shop.example/Product/Category/Style/SOME-STYLE-CODE-3 It is these technical data pages that I intend to add the noindex code to, as I imagine this might stop them from cannibalizing keyword authority from more important, content-rich pages on the site. Any advice appreciated.
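
    For reference, the tag in question goes in the <head> of each technical data page; "follow" lets crawlers still follow the page's links even though the page itself stays out of the index:

        <meta name="robots" content="noindex, follow">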

    Read the article

  • Static pages for large photo album

    - by Phil P
    I'm looking for advice on software for managing a largish photo album for a website. 2000+ pictures, one-time drop (probably). I normally use MarginalHack's album, which does what I want: pre-generate thumbnails and HTML for the pictures, so I can serve them without needing a dynamic runtime, leaving less attack surface to worry about. However, it doesn't handle pagination or the like, so it's unwieldy for this case. This is a one-time drop of pictures from a wedding, with a shared usercode/password for distribution to the guests; I don't wish to put the pictures in a third-party hosting environment. I don't wish to use PHP, simply because that's another runtime to worry about; I might relent and use something dynamic if it's Python- or Perl-based (as I can maintain things written in those). I currently have: Apache serving static files, Album-generated, with some subdirectories to divide up the content and make it a little more manageable. Something like Album but with pagination already handled would be great, but I'm willing to use something a little more dynamic if it lets people comment or caption and stores the extra data in something like an SQLite DB. I'd want something lightweight, not a full-blown CMS with security updates every three months. I don't want to upload pictures of other people's children to a third-party free service where I don't know what the revenue model is. (For my site: revenue is none, costs out of pocket.) Existing server hosting is *nix, Apache, some WSGI. Client-side I have MacOS. Any advice?

    Read the article

  • Best practice for SEO "special characters" in product pages

    - by rhodesit
    What's the best practice for creating websites given that I need to enter "ö" within the content/title/meta? Should I spell words without it and just use a "normal" character, or do I put in the entity code everywhere? Or do I spell it half the time with and half the time without? What's the best practice for SEO? Google takes into account user intent, which makes things complicated (in my mind). The user will be searching without the special characters, but because of the whole "user intent" thing, I don't know what the best practice for this situation is. Should I use a mix of both spellings? Should I use the special characters in anchor text/headers/title/meta description?
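
    To make the question concrete, the two spellings in markup (the page title is a hypothetical example; &ouml; is the HTML entity for "ö" and renders identically to the literal character on a UTF-8 page):

        <title>M&ouml;bel Shop</title>
        <title>Möbel Shop</title>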

    Read the article

  • Word for Mac 2011: TOC on too many pages

    - by Mark
    I have a Word for Mac 2011 document where the bottom of the first 40 pages or so says "TOC: Page x". This notation appears to be in the footer, as it is gray until I click on it (then the rest of the text goes gray instead). There is no TOC that I can see in the document, so I'm presuming someone tried to create one and messed things up. After the first 40 pages or so, all the other bottom-of-the-page notations appear to be correct (i.e. Chapter One, Chapter Two, etc.). How can I get those first 40 pages to be part of Chapter One rather than TOC?

    Read the article

  • How to customize web-app (pages and UI) for different customers

    - by demoncodemonkey
    We have an ASP.NET web application which has become difficult to maintain, and I'm looking for ideas on how to redesign it. It's an employee administration system which can be highly customized for each of our customers. Let me explain how it works now: On the default page we have a menu where a user can select a task, such as Create Employee or View Timesheet. I'll use Create Employee as an example. When a user selects Create Employee from the menu, an ASPX page is loaded which contains a dynamically loaded usercontrol for the selected menu item; e.g. for Create Employee this would be AddEmployee.ascx. If the user clicks Save on the control, it navigates to the default page. Some menu items involve multiple steps, so if the user clicks Next on a multi-step flow it navigates to the next page in the flow, and so on until it reaches the final step, where clicking Save navigates to the default page. Some customers may require an extra step in the Create Employee flow (e.g. SecurityClearance.ascx) but others may not. Different customers may use the same ASCX usercontrol, so in AddEmployee.OnInit we can customize the fields for that customer, i.e. making certain fields hidden, readonly, or mandatory. The following things are customizable per customer: menu items; the steps in each flow (ASCX control names); hidden fields in each ASCX; mandatory fields in each ASCX; and rules relating to each ASCX, which allow certain logic to be used in the code for that customer. The customizations are held in a huge XML file per customer, which can be 7,500 lines long. Is there any framework or rules engine that we could use to customize our application in this way? How do other applications manage per-customer customizations?
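
    Purely to illustrate the shape of the problem (this XML structure is invented for the question, not the actual file format), the per-customer configuration might look something like:

        <!-- Hypothetical structure; element and attribute names are invented -->
        <customer id="ExampleCorp">
          <menuItem name="CreateEmployee">
            <step control="AddEmployee.ascx">
              <field id="MiddleName" hidden="true" />
              <field id="Department" mandatory="true" />
            </step>
            <step control="SecurityClearance.ascx" />
          </menuItem>
        </customer>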

    Read the article

  • Free tool to automatically deskew and crop PDF made up of scanned pages [closed]

    - by Pietro M.
    I have several PDFs made up of scans of book pages. The scans were made two pages at a time, and some of them are skewed, making the text appear slightly tilted. I'm looking for a tool that would let me deskew the scans automatically without losing readability. I've found the GPL software briss, which can crop the scans to get a 1:1 page ratio instead of 2:1, but I don't have any tool to deskew the pages. I stumbled upon unpaper, another open-source tool that seems perfect for what I want to do, but it is Linux-only and doesn't work on PDF files directly. Any hint is appreciated. Thank you.
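
    For anyone trying the unpaper route, a rough sketch of the usual rasterize-deskew-reassemble workaround (resolution and file names are placeholders; this trades the PDF's original text layer, if any, for bitmap pages):

        # 1. Rasterize the PDF to grayscale images at 300 dpi
        pdftoppm -r 300 -gray book.pdf page
        # 2. Deskew each page image
        unpaper page-%02d.pgm fixed-%02d.pgm
        # 3. Reassemble the cleaned images into a PDF
        img2pdf fixed-*.pgm -o book-fixed.pdf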

    Read the article

  • Duplicate pages

    - by Mert
    I made a small coding mistake and Google indexed my site wrongly. This is the correct form: https://www.foo.com/urunler/171/TENGA-CUP-DOUBLE-HOLE but Google indexed my site like this: https://www.foo.com/urunler/171/cart.aspx First I fixed the problem and made a sitemap containing only the correct links. Now I've checked Webmaster Tools and I see this: Total indexed 513, Not selected 544, Blocked by robots 0. So I think this is caused by the duplicate indexing, and the duplicates are what show up as "Not selected". I want to know how to fix the "https://www.foo.com/urunler/171/cart.aspx" links. Should I fix this in code, or should I contact Google to reindex my site? If I should redirect the wrong/duplicate links to the correct ones, what should the approach be? Thanks for your time in advance.
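
    One standard fix for this kind of duplication (a sketch using the URLs from the question): either 301-redirect the wrong URL to the right one, or serve a canonical tag in the <head> of every page reachable under the wrong URL, so Google consolidates the duplicates onto the correct address:

        <link rel="canonical"
              href="https://www.foo.com/urunler/171/TENGA-CUP-DOUBLE-HOLE" />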

    Read the article

  • Will Google "forget" unlinked pages?

    - by Mystere Man
    If I remove all links to a page, but do not delete the page from the site (nor block it from being requested), will Google eventually "forget" about it when it reindexes the site? Assume, of course, that there are no other links to the page anywhere else externally. Or will Google continue to request the page, verify it in the index, and keep it around as long as it returns a valid page? Is it similar for Bing et al.?

    Read the article

  • schema.org specification for generic pages or posts on a CMS

    - by NateWr
    I'm trying to determine the best possible schema.org type to declare for the content section in the template of a content management system, which will handle regular news posts for small, local hospitality businesses. The type should represent the content of that page, which is likely to be a wide range of things. The description for Article pretty strongly encourages limiting its use to the articles of a publication. For purely semantic reasons, I'm not sure Blog is appropriate in this case: businesses won't be creating typical "blog" content, but are more likely to be writing about upcoming events, special deals, awards, etc. Would WebPage be appropriate in this instance? Although I'm a fan of the schema.org concept, I frequently find myself unsure how broadly or narrowly I'm meant to interpret the meaning of a type. In such cases, is it safe to use a high-level type, such as CreativeWork, or does this blunt the usefulness of the markup?
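
    For illustration, a generic declaration using the broad WebPage type in JSON-LD (all values are placeholders; headline and datePublished are inherited from CreativeWork, so they remain valid on the broader type):

        <script type="application/ld+json">
        {
          "@context": "https://schema.org",
          "@type": "WebPage",
          "headline": "Placeholder post title",
          "datePublished": "2013-06-01"
        }
        </script>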

    Read the article

  • The Increasing Importance of SEO Content Pages

    It has long been a well-known fact that SEO content is extremely important for the popularity and search engine rankings of a site; however, a lot of people are not too keen on emphasizing its importance. These days, search engine professionals are trying to implement the content of a site in such a way that the overall effectiveness of the page is enhanced right from the back-end coding.

    Read the article

  • SQL SERVER - Data Pages in Buffer Pool - Data Stored in Memory Cache

    Have you ever wondered what types of data are there in your cache? During SQL Server trainings, I am usually asked if there is any [...] This will drop all the clean buffers, so we will be able to start again from there. Now, run the following script and check the execution plan of the query.
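
    The command behind "drop all the clean buffers" is presumably the standard one below; this is an assumption, since the excerpt does not include the author's actual script:

        -- Flush dirty pages to disk, then drop clean pages from the
        -- buffer pool so the next query starts from a cold cache.
        -- Do not run this on a production server.
        CHECKPOINT;
        DBCC DROPCLEANBUFFERS;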

    Read the article

  • Own MediaWiki/Wikipedia naming convention for pages

    - by Andy M
    I recently installed MediaWiki at home and I'm looking for a way to name pages. Let's say I have the following structure:

        Main - Dev - C# - Tips
        Main - Cooking - Mexican Cooking - Tips
        Main - Annoying my girlfriend - Tips

    Each final page is a different Tips page. Naming them all just "Tips" won't work, because I need three different pages. Now, I could name each Tips page with its "path" (e.g. main_cooking_mexican_cooking_tips), but that looks cumbersome, and the problem is that whenever I change the structure of my MediaWiki, some pages will need to be renamed in order to stay correct. Is there some convention to follow for this? Thanks for your help!
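
    One built-in option, assuming subpages are enabled for the main namespace (the $wgNamespacesWithSubpages setting in LocalSettings.php), is MediaWiki's subpage syntax, which keeps the hierarchy in the title and generates breadcrumb links back to the parent pages automatically. Note that "#" is not allowed in page titles, so "C#" needs an alternative spelling:

        [[Dev/C Sharp/Tips]]
        [[Cooking/Mexican Cooking/Tips]]
        [[Annoying my girlfriend/Tips]]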

    Read the article
