Search Results

Search found 24735 results on 990 pages for 'site ranking'.

  • Is PhotoBucket a viable solution to host a website's photo galleries

    - by Evan Plaice
    I'm currently working with a lot of photographers and will probably be picking up development on a professional photography site soon. With that in mind, I can't stop thinking about a way to implement a user-friendly photo gallery hosting solution where the site owner can upload images themselves without any webmaster intervention. Kind of like a CMS for image hosting. The idea is: the user logs in to PhotoBucket, uploads their gallery, visits an admin section of the site, and enters the new gallery name into the listing. And... voila, the gallery automagically gets displayed on the website in a clean lightbox-style presentation format (i.e., no iframe nonsense). I took a brief look at the API and it looks promising. Is this a viable solution? Bonus points if you have implemented something like this with PhotoBucket and/or another third-party image hosting site. Note: purchasing a premium account is expected if necessary. The limitations on free accounts at most image hosting sites are just too restrictive to be useful.

  • Weird referral traffic [closed]

    - by Noam
    Possible duplicate: Strange incoming links appearing on site statistics. I'm getting weird traffic from Japan, from a site called ime.nu. Why weird? Because I'm not able to identify the link, and when I go to their homepage it just shows an Apache test page, yet analysis sites show it is a pretty big site (Alexa rank 121 in JP). Can someone help me understand the mystery?

  • How to keep groups when pulling with git

    - by mimrock
    I have a staging site that is a working directory of a git repository. How do I set up git to let a developer pull a branch or release without changing the group of the modified files? An example: let's say I have two developers, robin and david. They are both in the git-users group, so initially they both have write permissions on site.php.

        -rw-rw-r-- 1 robin git-users 46068 Nov 16 12:12 site.php
        drwxrwxr-x 8 robin git-users  4096 Nov 16 14:11 .git

    After robin-server1$ git pull origin master:

        -rw-rw-r-- 1 robin robin     46068 Nov 16 12:35 site.php
        drwxrwxr-x 8 robin git-users  4096 Nov 16 14:11 .git

    Now david does not have write permission on site.php, because the group changed from 'git-users' to 'robin'. From now on, david will get a permission denied error when he tries to pull into this repository.
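
    A commonly suggested setup (a sketch, not part of the original question; the /var/www/staging path is a placeholder) is to mark the repository as shared and set the setgid bit on the working-tree directories, so files created during a pull inherit the git-users group rather than the pulling user's primary group:

        # run once in the staging working copy (adjust the path)
        git config core.sharedRepository group               # keep files under .git group-writable
        chgrp -R git-users /var/www/staging                  # reset the owning group everywhere
        find /var/www/staging -type d -exec chmod g+s {} +   # new files inherit the directory's group

        # each developer also needs a group-friendly umask, e.g. in their shell profile:
        umask 002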

  • fully encrypt website using SSL

    - by eddywebs
    I had been trying to use SSL for the following site: http://bit.ly/e8Lj32. Although the SSL certificate is signed properly by Network Solutions, each time the pages are loaded the browser still displays an SSL warning: "Some parts of the site are not using SSL". In IE it's even worse: if you choose "No, I don't want to view the unsecured parts of the page", the site does not display properly (it blocks some of the widgets). Screenshot uploaded at http://i.imgur.com/fm5GO.png
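
    For reference (an illustration, not from the original question; the widget URL is a placeholder): this warning is normally caused by mixed content, i.e. resources such as scripts, images or third-party widgets referenced over plain http on an https page. Referencing them over https removes the warning:

        <!-- loads over http even on an https page - triggers the mixed-content warning -->
        <script src="http://widgets.example.com/widget.js"></script>

        <!-- https reference keeps the page fully encrypted -->
        <script src="https://widgets.example.com/widget.js"></script>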

  • Integrating eBay and PayPal inventory

    - by JW01
    Say I have an item for sale on eBay, and the same item for sale on another site via PayPal. Is it possible to have sales on one site reflected in the inventory for the other site, and vice-versa? In other words, if I have ten items for sale, and I buy one on either site, it should show that there are nine items left on both sites. I know that PayPal has an API for setting the inventory level of an item associated with a button. eBay also has an API for controlling an item's inventory. I'm wondering if anyone has tried to integrate them.

  • Hidden web standards behind Google "custom searchEngines"?

    - by Hoàng Long
    Today while playing with the Google Chrome omnibox, I noticed a strange behavior. I guess there's some "hidden" web standard behind it, but I can't figure it out. Here's how to reproduce it: Go to http://edition.cnn.com/. Use the search function in the upper right corner and search for a random keyword, for example "abc". Close the tabs. Open a new tab, type until Chrome reminds you about http://edition.cnn.com/, then press Tab. The omnibox now shows "Search CNN.com"! And when you type "abc" and press Enter, it uses the CNN search function to do the job, not Google! I also tried it on several different sites. For some it won't work, but for some sites, like CNN or vnexpress.net, it works after I use the search function of that site once. I also learnt about chrome://settings/searchEngines (type it in your Chrome box and you will see), and learnt that you can add a custom search engine in Chrome. But the question is, why can Chrome work out the search URL automatically for some pages and not others? It's not because some sites subscribe to a Google service, because I can try the same method on my own site (http://ledohoanglong.wordpress.com), and I'm sure that there's no subscription. So I guess there's a method to "expose" the search function of a site, so that Google Chrome can catch it (after I call the search function of that site once, of course). Does anyone know how this works behind the scenes?
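
    For reference (not from the original question): Chrome's tab-to-search behaviour is generally driven either by search URLs it has seen you use or by an OpenSearch description document that the site advertises. A minimal sketch of such a document, with placeholder names and URLs:

        <!-- in the page <head> -->
        <link rel="search" type="application/opensearchdescription+xml"
              title="Example Search" href="/opensearch.xml">

        <!-- /opensearch.xml -->
        <OpenSearchDescription xmlns="http://a9.com/-/spec/opensearch/1.1/">
          <ShortName>Example Search</ShortName>
          <Url type="text/html" template="http://www.example.com/search?q={searchTerms}"/>
        </OpenSearchDescription>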

  • Best way to prevent Google from indexing a directory [duplicate]

    - by Gkhan14
    This question already has an answer here: Stopping Google index some web pages (5 answers). I've researched many methods of preventing Google and other search engines from crawling a specific directory. The two most popular ones I've seen are: adding it to the robots.txt file (Disallow: /directory/) or adding a meta tag (<meta name="robots" content="noindex, nofollow">). Which method works best? I want this directory to remain "invisible" to search engines so it does not affect any of my site's ranking. In other words, I want this directory to be neutral/invisible and "just there." I don't want it to affect any ranking. Which method would be the best way to achieve this?
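
    For reference (a sketch, not from the original question; it assumes Apache with mod_headers enabled): a noindex directive only works if the page can actually be crawled, so it shouldn't be combined with a robots.txt Disallow for the same path. A third option is to send the directive as an HTTP header for everything in the directory, which also covers non-HTML files:

        # /directory/.htaccess
        Header set X-Robots-Tag "noindex, nofollow"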

  • What should every programmer know about web development?

    - by Joel Coehoorn
    What should a programmer implementing the technical details of a web application consider before making the site public? If Jeff Atwood can forget about HttpOnly cookies, sitemaps, and cross-site request forgeries all in the same site, what important thing could I be forgetting as well? I'm thinking about this from a web developer's perspective, such that someone else is creating the actual design and content for the site. So while usability and content may be more important than the platform, you the programmer have little say in that. What you do need to worry about is that your implementation of the platform is stable, performs well, is secure, and meets any other business goals (like not costing too much, not taking too long to build, and ranking as well with Google as the content supports). Think of this from the perspective of a developer who's done some work for intranet-type applications in a fairly trusted environment, and is about to have his first shot at putting out a potentially popular site for the entire big bad world wide web. Also, I'm looking for something more specific than just a vague "web standards" response. I mean, HTML, JavaScript, and CSS over HTTP are pretty much a given, especially when I've already specified that you're a professional web developer. So, going beyond that, which standards? In what circumstances, and why? Provide a link to the standard's specification.
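
    As one concrete example of the kind of detail the question is after (an illustration only; the cookie name and value are placeholders): session cookies should be flagged so that scripts cannot read them and they are never sent over plain HTTP:

        Set-Cookie: session=abc123; Path=/; Secure; HttpOnly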

  • Why do my Google sitelinks show gibberish for a PDF link?

    - by Tom
    I have a website which Google lists nicely along with sitelinks. One of the sitelinks - to a PDF file - shows un-human gibberish, e.g. 67,8;45:: 56 83 @7<1. (7/0;,*;: /59( (7/0;,;<7, <7)(60:4 (9<7 /+ +2, VU I thought it might be due to the PDF's title property, so I changed it, but there hasn't been an improvement to the sitelink. Other PDF sitelinks are fine and display the title property as desired. Does anyone know how I might rectify this problem, or what might be the cause? My uninformed guess is that it's some transliteration problem between code and display text, which, I suppose, means I ought to recondition the PDF file in some way. Not sure how.

  • Batch file to Delete Old Virtual Directories.

    - by Michael Freidgeim
    On some servers we have many old virtual directories created for previous versions of our application. The IIS user interface only allows you to delete one at a time. Fortunately we can use IIS scripts, as described in "How to manage Web sites and Web virtual directories by using command-line scripts in IIS 6.0". I've created a batch file, DeleteOldVDirs.cmd:

        rem http://support.microsoft.com/kb/816568
        rem syntax: iisvdir /delete WebSite [/Virtual Path]Name [/s Computer [/u [Domain\]User /p Password]]

        REM list all directories and create batch of deletes
        iisvdir /query "Default Web Site"
        echo "Enter Ctrl-C if you want to stop deleting"
        Pause
        iisvdir /delete "Default Web Site/VDirName1"
        iisvdir /delete "Default Web Site/VDirName2"
        ...

    If the name of the web site or virtual directory contains spaces (e.g. "Default Web Site"), don't forget to use double quotes. Note that the batch file doesn't delete the physical directories from the file system. You need to delete those using Windows Explorer, but it does support multiple selection!

  • Windows Phone 7 UserExtendedProperties opinion...

    - by webdad3
    I was thinking of a way to somehow connect my phone user base to my site user base. Right now, if an item gets added to the site via the phone, the userId is generic and the site displays it as SmartPhoneUser. I was thinking it might be cool to display the unique phone id by using UserExtendedProperties; however, after reading Nick Harris's blog about it, I'm thinking it may not be a good idea, as I don't want users to think I'm up to anything nefarious. So I'm wondering if there are any suggestions out there on how to accomplish this task. Right now my site uses the JanRain module that allows multiple logins from other sites (Facebook, Yahoo, Google etc.). Any thoughts on how I can accomplish what I want to do without using the ExtendedProperties?
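
    One approach sometimes suggested (a sketch only, not from the original post; it assumes the Microsoft.Phone.Info.UserExtendedProperties API and requires the ID_CAP_IDENTITY_USER capability in the app manifest) is to read the anonymous ID and hash it before sending it to the site, so the raw identifier never leaves the phone:

        using System;
        using System.Text;
        using System.Security.Cryptography;
        using Microsoft.Phone.Info;

        public static class UserIdHelper
        {
            // Returns a hashed form of the anonymous Live ID (ANID), or null if unavailable.
            public static string GetHashedUserId()
            {
                object anid;
                if (!UserExtendedProperties.TryGetValue("ANID", out anid) || anid == null)
                    return null; // user may have opted out, or the value is not present

                byte[] bytes = Encoding.UTF8.GetBytes(anid.ToString());
                byte[] hash = new SHA256Managed().ComputeHash(bytes);
                return Convert.ToBase64String(hash);
            }
        }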

  • URL redirect to a virtual server on a VLAN

    - by zeroFiG
    I have a production site running off 10 servers. I've been given another virtual server on the same network as these 10 servers, to use for testing purposes. This server doesn't have its own DNS entry. Therefore I need to do a redirect to the site hosted on this virtual server for a sub-domain of the site running on the 10 other servers. So basically I was wondering how I would configure a sub-domain of my production server to point at the virtual server for testing. I'm guessing I need to modify my site file in /etc/apache2/sites-available and add another virtual host like the following, and modify the redirect match:

        <VirtualHost *>
            ServerName SUBDOMAIN.DOMAIN.com
            RedirectMatch 301 (.*) **IP ADDRESS**
            CustomLog /var/log/apache2/SUBDOMAIN.DOMAIN.com.access.log combined
        </VirtualHost>

    Do I set the redirect match to just the IP of the virtual server, and then configure another site file in the sites-available directory, which will receive this redirect and point the browser towards the HTML root? Thanks, I hope I made myself clear.
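
    For reference (an alternative sketch, not from the original question; the 10.0.0.42 address and test.example.com name are placeholders, and mod_proxy/mod_proxy_http must be enabled): instead of a visible redirect, the sub-domain can reverse-proxy to the test server, so the browser never sees the raw IP. The sub-domain still needs a DNS or hosts-file entry pointing at the server doing the proxying:

        <VirtualHost *:80>
            ServerName test.example.com
            ProxyPreserveHost On
            ProxyPass        / http://10.0.0.42/
            ProxyPassReverse / http://10.0.0.42/
            CustomLog /var/log/apache2/test.example.com.access.log combined
        </VirtualHost>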

  • My Flex 3 Website Doesn't Have Any Keywords Listed in Google's Webmaster Tools

    - by Laxmidi
    Hi, I've got a Flex 3 website. When I look in Google's Webmaster Tools under Your site on the web - Keywords, none are listed. Does anyone have an all-Flex site that has keywords listed there? The site's been up for about a month and it's been indexed by Google. I have keyword meta tags in the site, which, from what I've read, Google ignores. Where does Google come up with the keywords for your site? Any suggestions on what I need to do? Thank you. -Laxmidi www.brainpinata.com

  • Lost Traffic from Google Because of Meta-tag Adding

    - by Marian
    I have a site, aroundnails.com. It has an English version on the subdomain en.aroundnails.com. Reading about language-related meta tags for Google, I placed such a meta tag on the main page of the main site: <link rel="alternate" hreflang="en" href="http://en.aroundnails.com/" /> In this way I tried to tell Google that my site at en.aroundnails.com is the English version of the main site, not a duplicate. After a fortnight I lost a huge part of my traffic from Google, more than half. At the beginning of September I moved this meta tag, but traffic has remained at the same level. I hope somebody can help me to solve this issue.
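
    For reference (a sketch, not from the original question; "xx" is a placeholder for the main site's actual language code): Google's hreflang annotations are expected to be reciprocal and self-referencing, i.e. each language version lists itself and all of its alternates, so a one-sided tag on the main page alone may not be interpreted as intended:

        <!-- on http://aroundnails.com/ and also on http://en.aroundnails.com/ -->
        <link rel="alternate" hreflang="xx" href="http://aroundnails.com/" />
        <link rel="alternate" hreflang="en" href="http://en.aroundnails.com/" />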

  • What icon would you use to denote that an XML (not RSS) feed is available [closed]

    - by mplungjan
    Given two sites - one aimed at regular users and one for automated access. The first site is the better known, so many people are (still) screen scraping it for data. It would be preferable to have them move to the other site, where the same data is available in XML format. What icon (plus text/title) on a page you are about to screen scrape would make you pay attention and decide to see what it was about? Examples from a Google Image search for "xml icon".

  • How to Create Features for Windows SharePoint Services 3.0

    To customise a SharePoint (WSS 3.0) site, you'll need to understand 'Features'. The 'Feature' framework has become the most important method of customising a SharePoint site, because it is now defined by a list of Features, a layout page and a master page. One templated site can be turned into another by toggling Features and maybe switching the layout page or master page. Charles Lee explains.

  • Duplicate content appearing for multi lingual sites

    - by Rocky Singh
    I have a site which has a default URL, say "http://www.blahblah.com/" (which defaults to English). My site has support for multiple languages. I have a few links on my home page, say "English", "French", "Spanish", etc., and on clicking these links the user is redirected to: http://www.blahblah.com/en-us/ (English), http://www.blahblah.com/fr-ca/ (French), http://www.blahblah.com/spanish-culture/ (Spanish), and based on the culture in the URL I show the content to end users in their desired language. Now, that is how my site is set up. The issue I am having is with SEO. I noticed Google is considering my site's pages as duplicates (I checked via Google Webmaster Tools), for example: 1. http://www.blahblah.com/documents/ and http://www.blahblah.com/en-us/documents/ 2. http://www.blahblah.com/news/ and http://www.blahblah.com/en-us/news and similarly all the pages are considered duplicate content in Google Webmaster Tools. I am worried that my site is being penalized in the rankings because of this. Could you offer some ideas on how to overcome this situation?
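
    One commonly suggested remedy (a sketch, not from the original question) is to point each un-prefixed URL at its culture-specific equivalent with a canonical link, so Google folds the two addresses into a single page:

        <!-- on http://www.blahblah.com/documents/ -->
        <link rel="canonical" href="http://www.blahblah.com/en-us/documents/" />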

  • Creating a Document Library with Content Type in code

    - by David Jacobus
    Originally posted on: http://geekswithblogs.net/djacobus/archive/2013/10/15/154360.aspx

    In the past, I have shown how to create a list content type and add the content type to a list in code. As developers, many of the artifacts which we create are widgets which have a list or document library as the back end. We need to be able to create our applications (web part, etc.) without having the user involved except to enter the list item data. Today, I will show you how to do the same with a document library. A summary of what we will do is as follows:

    1. Create an empty SharePoint project in Visual Studio
    2. Add a code folder in the solution and drag and drop the Utilities and Extensions libraries into the solution
    3. Create a new Feature and add an event receiver; all the code will be in the event receiver
    4. Add the fields which will extend the built-in Document content type
    5. If the content type does not exist, create it
    6. If the document library does not exist, create it with the new content type inherited from the Document content type
    7. Delete the Document content type from the library (as we have a new one which inherited from it)
    8. Add the fields which we want to be visible from the fields added to the new content type

    Here we go:

    Create an empty SharePoint project in Visual Studio.

    Add a code folder in the solution and drag and drop the Utilities and Extensions libraries into the solution. The Utilities and Extensions library will be part of this project, for which I will provide a download link at the end of this post. Drag and drop them into your project. If dragged and dropped from Windows Explorer, you will need to show all files and then include them in your project. Change the namespace to agree with your project.

    Create a new Feature and add an event receiver; all the code will be in the event receiver. Here we added a new Feature called "CreateDocLib" and then right-clicked to add an event receiver. All of our code will be in this event receiver. For this demo I will only be using the Feature Activated event.

    From this point on we will be looking at code! We are adding two constants for use: columnGroup (how we want SharePoint to group them, usually the company name) and ctName (the content type name).

        using System;
        using System.Runtime.InteropServices;
        using System.Security.Permissions;
        using Microsoft.SharePoint;

        namespace CreateDocLib.Features.CreateDocLib
        {
            /// <summary>
            /// This class handles events raised during feature activation, deactivation, installation, uninstallation, and upgrade.
            /// </summary>
            /// <remarks>
            /// The GUID attached to this class may be used during packaging and should not be modified.
            /// </remarks>
            [Guid("56e6897c-97c4-41ac-bc5b-5cd2c04f2dd1")]
            public class CreateDocLibEventReceiver : SPFeatureReceiver
            {
                const string columnGroup = "DJ";
                const string ctName = "DJDocLib";
            }
        }

    Here we create the Feature Activated event: adding the new fields (site columns), testing if the content type exists and, if not, adding it; and testing if the document library exists and, if not, adding it.
        #region DocLib
        public override void FeatureActivated(SPFeatureReceiverProperties properties)
        {
            using (SPWeb spWeb = properties.GetWeb() as SPWeb)
            {
                //add the fields
                addFields(spWeb);

                //add content type
                SPContentType testCT = spWeb.ContentTypes[ctName];
                // we will not create the content type if it exists
                if (testCT == null)
                {
                    //the content type does not exist, add it
                    addContentType(spWeb, ctName);
                }

                if (spWeb.Lists.TryGetList("MyDocuments") == null)
                {
                    //create the list if it doesn't exist
                    CreateDocLib(spWeb);
                }
            }
        }
        #endregion

    The addFields method uses the utilities library to add site columns to the site. We can add as many fields within this method as we like. Here we are adding one for demonstration purposes: Icon, as a URL type.

        public void addFields(SPWeb spWeb)
        {
            Utilities.addField(spWeb, "Icon", SPFieldType.URL, false, columnGroup);
        }

    The addContentType method adds the new content type to the site content types. We have already checked to see that it does not exist. In addition, here is where we add the linkages from our previously created site columns to our new content type.

        private static void addContentType(SPWeb spWeb, string name)
        {
            SPContentType myContentType = new SPContentType(spWeb.ContentTypes["Document"], spWeb.ContentTypes, name)
            {
                Group = columnGroup
            };
            spWeb.ContentTypes.Add(myContentType);
            addContentTypeLinkages(spWeb, myContentType);
            myContentType.Update();
        }

    Here we are adding just one linkage, as we only have one additional field in our content type:

        public static void addContentTypeLinkages(SPWeb spWeb, SPContentType ct)
        {
            Utilities.addContentTypeLink(spWeb, "Icon", ct);
        }

    Next we add the logic to create our new document library, which we have already checked does not exist. We create the document library and turn on content types, add the new content type, and then delete the old "Document" content type.

        private void CreateDocLib(SPWeb web)
        {
            using (var site = new SPSite(web.Url))
            {
                var web1 = site.RootWeb;
                var listId = web1.Lists.Add("MyDocuments", string.Empty, SPListTemplateType.DocumentLibrary);
                var lib = web1.Lists[listId] as SPDocumentLibrary;
                lib.ContentTypesEnabled = true;
                var docType = web.ContentTypes[ctName];
                lib.ContentTypes.Add(docType);
                lib.ContentTypes.Delete(lib.ContentTypes["Document"].Id);
                lib.Update();
                AddLibrarySettings(web1, lib);
            }
        }

    Finally, we set some document library settings on our new document library with the AddLibrarySettings method. We then ensure that the new site column is visible when viewed in the browser.

        private void AddLibrarySettings(SPWeb web, SPDocumentLibrary lib)
        {
            lib.OnQuickLaunch = true;
            lib.ForceCheckout = true;
            lib.EnableVersioning = true;
            lib.MajorVersionLimit = 5;
            lib.EnableMinorVersions = true;
            lib.MajorWithMinorVersionsLimit = 5;
            lib.Update();

            var view = lib.DefaultView;
            view.ViewFields.Add("Icon");
            view.Update();
        }

    Okay, what's cool here: in a few lines of code, we have created site columns, a content type, and a document library. As a developer, I use this functionality all the time. For instance, I could now just add a web part to this same solution which uses this document library. I love SharePoint! Here is the complete solution: Create Document Library Code

  • How can I allow robots access to my sitemap, but prevent casual users from accessing it?

    - by morpheous
    I am storing my sitemaps in my web folder. I want web crawlers (Googlebot etc.) to be able to access the files, but I don't necessarily want all and sundry to have access to them. For example, this site (superuser.com) has a site index, as specified by its robots.txt file (http://superuser.com/robots.txt). However, when you type http://superuser.com/sitemap.xml, you are directed to a 404 page. How can I implement the same thing on my website? I am running a LAMP website, and I am using a sitemap index file (so I have multiple sitemaps for the site). I would like to use the same mechanism to make them unavailable via a browser, as described above.
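
    One approach sometimes used on Apache (a sketch, not from the original question; note that the user-agent string can be spoofed, so this only hides the files from casual visitors, not from a determined one):

        # .htaccess - return 404 for sitemap files unless the client claims to be a known crawler
        RewriteEngine On
        RewriteCond %{HTTP_USER_AGENT} !(Googlebot|bingbot|Slurp) [NC]
        RewriteRule ^sitemap.*\.xml$ - [R=404,L]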

  • Are there specific legal issues for web developers working on sex dating sites?

    - by YumYumYum
    Say I have created many ordinary websites which are not related to any dating/sexual content. Are the rules and regulations for a developer the same when making a sex-related dating site? I'm talking about a site where people meet together and get to know each other, with the intent of having a sexual relationship (you know what I mean), also featuring webcam sex, but not explicitly a porno site. Do such sites have any special legal issues for developers compared with non-sexual/dating sites?

  • Do I get SEO rankings for redirects? [closed]

    - by Gavin Morrice
    Possible duplicate: Could I buy a domain name to increase traffic to my site like this? URLs add SEO weight to any site. If I have a site that (for example) sells chickens and the URL is http://cluckorama.com, and I own www.chickensforsale.com, will search engines list it for "chickens for sale" if I set up a permanent redirect to cluckorama.com (provided the content of cluckorama.com is relevant to chickens for sale)?
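
    For reference (a sketch of the redirect itself, not from the original question; it assumes Apache): a domain-level permanent redirect looks roughly like this:

        <VirtualHost *:80>
            ServerName www.chickensforsale.com
            ServerAlias chickensforsale.com
            Redirect permanent / http://cluckorama.com/
        </VirtualHost>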

  • Can inbound links through template-based layouts result in penalties?

    - by Liam Sorsby
    So obviously link building is encouraged as long as it is natural, organic, and involves meaningful links with content relevant to your site. Obviously, with the constant release of new algorithm updates, Google is flagging sites for unnatural links pointing to them. My question is: can this be caused by templating systems? With WordPress, for example, you can add a link in the footer and it is repeated throughout the entire website, generating thousands of links. If we don't add any links, good content will be re-posted and linked to; but surely if your content is constantly linked to, this will flag your site for "unnatural" links, as it's difficult to tell whether someone has been paid to write an article about your content. Or does Google just simply want us to audit some of the links to show we are making an effort? As you can tell, we have had a manual action for "Unnatural links to your site—impacts links". However, this seems to impact our website as well. Edit: To clarify the question: can you get penalised for paying for advertising on a site that uses a templated sidebar? When they create a new blog/page etc., your link is also added to the page, resulting in thousands of links to one page on our site. I know that one effect may be that a zero-PageRank web page linking to your page dilutes the PR of our page. However, the links are only inbound, not reciprocal.
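
    For reference (an illustration, not from the original question; the URL is a placeholder): Google's published guidance for paid or site-wide template links is to mark them so they pass no ranking credit, which is usually the first step in addressing this kind of manual action:

        <!-- paid / sidebar template link marked so it passes no PageRank -->
        <a href="http://example.com/" rel="nofollow">Example sponsor</a>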

  • Making profit from a social network

    - by James P.
    This follows similar questions, but I'd like to see if anything particular comes out of it due to the nature of the site. In short, I've taken up the role of webmaster for a small social network site and wish to make it profitable enough to at least cover the running costs. The site is linked to a commerce and presents are offered to members according to the number of points they've accumulated through various actions. The site is running on shared hosting, so that is probably dirt cheap, but the presents can be expensive as a whole and some money has already been invested into the project. One idea I have is to seek sponsors that would be willing to offer presents or special offers in return for publicity. I don't know if this will be easy or not. I'm also looking into adapting the hosting, perhaps moving static files to a cheaper online storage medium (see "Ideas for reducing storage needs and/or costs (lots of images)"). Other suggestions are welcome.

  • Applying Quotas Across all My Sites

    - by Bil Simser
    Just a quick snippet this morning. If you need to apply a new quota template to all users' My Sites, here's a quick script to do it. Changing an existing quota is fine, but if you're migrating users from another system or you just want to up everyone's storage a bit, here's what you do.

    Create a new quota template. This is found in Central Admin under Application Management | Site Collections | Specify quota templates. There's already a default "Individual Quota" created; you might want to create your own or have a special one for your users.

    Open up the PowerShell management console and enter "Get-SPWebApplication". This will list all your web applications on the farm. To apply it to all My Sites (each site is a site collection of its own) run the script below.

        $webapps = Get-SPWebApplication;

        $webapp = $webapps[4];

        foreach ($site in $webapp.Sites) {
            Set-SPSite -Identity $site.url -QuotaTemplate "Your Quota Template"
        }

    The first line gets all the web applications on the server. In our case, the fourth one is the mysite web app (yours will probably be a different number). Just run Get-SPWebApplication from the console to figure out which one to use. You could get fancy and pipe the name to find it, but I'm too lazy for that. Then we loop through all the sites in the list using the $site.url property and pass it to the Set-SPSite cmdlet, specifying the name of our custom quota template. Easy. Now all users are updated with the new quota template.

  • Why do people crawl sites without downloading pictures?

    - by Michael
    Let me show you what I mean:

        IP               Pages   Hits   Bandwidth
        85.xx.xx.xxx       236    236   735.00 KB
        195.xx.xxx.xx      164    164   533.74 KB
        95.xxx.xxx.xxx      90     90   293.47 KB

    It's very clear that these people are crawling my site with bots. There's no way you could visit my site that much and use under 1 MB of bandwidth. You might say there's the possibility that they could be browsing the site using some browser or plug-in that does not download images, js/css files, etc., but the simple fact of the matter is that there are not 90-236 pages linked from the home page (outside of WP files), even if you visited every page twice. I could understand if these people were crawling the site for pictures, but once again, the bandwidth indicates that this isn't what is happening. Why, then, would they crawl the site simply to view the HTML/txt/js/etc. files? The only thing I can come up with is that they are scanning for outdated versions of WordPress, SQL injection vulnerabilities, etc., which makes me inclined to outright ban the IPs. But I'm curious: is it possible that these are legitimate users, or at the very least, not intending to be harmful?
