Search Results

Search found 11896 results on 476 pages for 'smart pro'.


  • Photo Gallery Software - reads from a local directory - watches folder - user and group permissions

    - by Darkflare
    Use Case:
    - Photos are organised in a folder structure by date (by software such as Lightroom/Picasa).
    - I want to run a local webserver to host a web gallery from them (I already know how to run LAMP, etc.).
    - I want to be able to attach metadata to the photos (probably through a database not residing in the photos folder) so that they can be tagged/categorised/put into albums without affecting the original photos.
    - I want to be able to assign permissions on different albums to set users.
    - I want the software to watch the photo source folder for changes, so that new photos are indexed ready for applying metadata and albums.
    - I'd like the software to handle the rendering of numerous file types (photo formats) as well as video formats.
    I am language agnostic (PHP/Python, heck even C#); I just want software that fulfils the requirements. The main reason I am asking this question here is that I am unsure what this software would even be called, so Google searching is quite difficult! Thanks for reading.

    Read the article

  • Any board-like platform but only for files instead of text posts? [on hold]

    - by Janwillhaus
    I am looking for a CMS platform (ideally open-source or free, but commercially available is also fine) that is built similarly to bulletin boards (for example phpBB), but where, instead of text posts, registered users can upload files that can be rated, commented on, etc. I am aware of the number of board CMSes that have file-database plugins, but that is not what I want. I want to have a system that focuses on the files rather than on the postings. Or do you have any alternative ideas on solutions to the problem? I need the following functions:
    - CMS focused on file management
    - Files that can be categorized (in a tree-like view, for example)
    - Comments and ratings can be added to files
    - Users that can be granted various rights around the platform (moderation, commenting, exclusion of non-registered users, etc.)
    - Users can upload files themselves (for further moderation)

    Read the article

  • What is the advantage to hosting static resources on a separate domain?

    - by Michael Ekstrand
    I notice a lot of sites host their resources on a separate domain from the main site, e.g. StackExchange using sstatic.net, Barnes & Noble using imagesbn.com, etc. I understand that there are benefits to putting your static resources on a separate host, possibly with an efficient static-file web server like nginx, freeing up the main server to focus on serving dynamic content. Similarly, outsourcing to a shared CDN like CloudFront or Akamai is logical. What is the benefit of using a separate domain otherwise, though? Why sstatic.net instead of static.stackexchange.com? Update: Several answers miss the core question. I understand that there is benefit to splitting between multiple hosts - parallel downloads, slimmer web server, etc. What is more elusive is why multiple domains. Why sstatic.net rather than static.stackexchange.com as the host for shared resources? So far, only one answer has addressed that.

    Read the article

  • Image SEO - always repeat main keyword in alt text?

    - by Marcus Edensky
    I'm working on an Easter Island website and I'm currently redesigning my image system. Virtually all my photos are of Easter Island. My question is: should I always include the keywords "Easter Island" to make it easier for Google to understand that my photos are from Easter Island, or is it sufficient that the "Easter Island" keywords are in the domain, as well as in all other pages of the site? For example:
    Alt text 1: "Moai statues at volcano Rano Raraku at Easter Island (Rapa Nui)"
    Alt text 2: "Moai statues at volcano Rano Raraku"
    Would example 1 be considered keyword stuffing by Google?

    Read the article

  • What are the requirements to test a website using jquery.get()? [migrated]

    - by Frankie
    I am working on a simple website. It has to search quite a few text files in different sub-folders. The rest of the page uses jQuery, so I would like to use it for this also. The function I am looking at is .get() for downloading the files. So my main question is: can I test this on my local computer (Ubuntu Linux) or do I have to have it uploaded to a server? Also, if there's a better way to go about this, that would be nice to know. However, I'm more worried about getting it working. Thanks, Frankie
    PS: Here's the JS/jQuery code for downloading the files to an array.
    g_lists = new Array();
    $(":checkbox").each(function(i){
        if ($(this).attr("name") != "0") {
            var path = "../" + $(this).attr("name") + ".txt";
            $("#bot").append("<br />" + path); // debug
            $.get(path, function(data){
                g_lists[i] = data;
                $("#bot").html(data);
            });
        } else {
            g_lists[i] = "";
        }
    });
    Edit: Just a note about the path variable. I think it's correct, but I'm not 100% sure. I'm new to web development. Here are some examples it produces and the directory tree of the site. Maybe it will help; can't hurt.
    .
    +-- include
    |   +-- jquery.js
    |   +-- load.js
    +-- index.xhtml
    +-- style.css
    +-- txt
        +-- Scripting_Tools
            +-- Editors.txt
            +-- Other.txt
    Examples of path:
    ../txt/Scripting_Tools/Editors.txt
    ../txt/Scripting_Tools/Other.txt
    Well, I'm a new user, so I can't "answer" my own question, so I'll just post it here: After asking for help on an IRC chat channel specific to jQuery, I was told I could use this on a local host. To do this I installed the Apache web server and copied my site into its directory. More information on setting it up can be found here: http://www.howtoforge.com/ubuntu_debian_lamp_server Then to run the site I navigated my browser to "localhost" and everything works.
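
    One side note on the snippet above (separate from the local-testing question): each $.get call is asynchronous, so g_lists is only fully populated after every callback has fired. Below is a minimal sketch of waiting for all of the requests before using the results, assuming jQuery 1.5+ deferreds; the hard-coded paths are just the examples from above.
    // Collect the jqXHR promises returned by $.get and act once they have all resolved.
    var paths = ["../txt/Scripting_Tools/Editors.txt", "../txt/Scripting_Tools/Other.txt"];
    var requests = $.map(paths, function (path) {
        return $.get(path);
    });
    $.when.apply($, requests).done(function () {
        // With two or more requests, each argument here is [data, statusText, jqXHR].
        var lists = [];
        for (var i = 0; i < arguments.length; i++) {
            lists[i] = arguments[i][0]; // the file contents
        }
        console.log(lists.length + " files loaded");
    });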

    Read the article

  • Tracking URL Goals to an external site from a landing page

    - by Arel
    I have a landing page promoting an iOS app. The page is at vitogo.com. I've set up a goal for when a user clicks the link to go to iTunes and download the app. I set up a URL destination goal in the property for the site, and I can see the goal set up in the reports section. The problem is that it isn't tracking any clicks. I've had the goal set up for a while now, and it hasn't tracked anything. Thanks for the help!
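
    A destination goal typically only matches pages tracked in your own property, so a click out to iTunes usually has to be captured as an event (or a virtual pageview) that a goal can then match. Here is a minimal sketch, assuming Universal Analytics (analytics.js) is already loaded on the landing page; the "itunes-link" id and the event names are placeholders, not taken from the actual site:
    // Send an event when the (hypothetical) iTunes link is clicked.
    document.getElementById('itunes-link').addEventListener('click', function () {
        ga('send', 'event', 'Outbound', 'Click', 'iTunes App Store');
    });
    The goal can then be defined as an event goal instead of a destination goal. The older ga.js tracker has an equivalent _gaq.push(['_trackEvent', ...]) call.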

    Read the article

  • How much time does Google Webmaster Tools need to generate content keywords if URL masking is enabled? [closed]

    - by user1439968
    Possible Duplicate: What is domain "masking" or "cloaking"? Why should it be avoided for a new web site?
    My real domain is domain.in, but URL masking has been enabled and the masked URL is domain2.in. I added the URL bputdoubts.21backlogs.in to Google Webmaster Tools a week ago, but content keywords haven't been generated. In this case, when can I expect the content keywords to be generated? And is there a problem with getting visitors from Google search if URL masking is enabled?

    Read the article

  • Track sales and commission with third-party tool

    - by Andrew
    I have a clothing website where I link to various clothing retailers. I have reached an agreement with one of the retailers whereby they will pay a commission to us for every sale they make from traffic that was referred by our site. I need a mechanism for tracking how much commission should be paid to us, that involves as little work as possible to implement from their side. We both have Google Analytics.
    Option 1: They record a goal in their GA account whenever someone makes a purchase on their site. They see how many completed goals are marked as referral traffic from our site and calculate commission accordingly. The problem with this is that the whole process of calculating and paying commission will be manual. They will need to frequently check how many sales were generated by referral traffic from our site, and probably we will have to chase them for commission payments. Also - since we won't have access to their GA data - we will need to trust that they report all sales accurately.
    Option 2: Sign them up to an affiliate network like Commission Junction or Google's Affiliate Network, and connect to them through this network. The problem with this solution is that it seems too heavyweight; ideally we don't want to ask a retailer to go through the whole sign-up process just to deal with us and pay us commission.
    I am assuming that there must be some lightweight service that tracks the number of sales by one site and pays commission accordingly to the other site, where the sign-up and installation procedure is simple and fast.
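
    For reference, a minimal sketch of what Option 1 could look like on the retailer's side, assuming they run Universal Analytics and are willing to add an e-commerce hit on their order-confirmation page; the order values below are placeholders, not anything agreed with the retailer:
    // Hypothetical order-confirmation snippet on the retailer's site (Option 1).
    // Assumes the standard analytics.js tracker has already been loaded on the page.
    ga('require', 'ecommerce');      // load the e-commerce plugin
    ga('ecommerce:addTransaction', {
        id: 'ORDER-1234',            // placeholder order ID
        revenue: '59.99'             // order total the commission would be based on
    });
    ga('ecommerce:send');            // send the transaction hit
    The transactions attributed to referral traffic from our site could then be read from their referral reports, though as noted above the commission calculation itself would still be manual.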

    Read the article

  • How can I set up friendly URLs with Nginx?

    - by MKK
    I'm trying to use DokuWiki with its friendly URLs on Nginx. The problem I'm facing is that it doesn't show the correct path for any link (even stylesheets and images) on every page. It looks like the paths are missing the wiki/ part. If I click on an image and show its destination, it shows this URL:
    http://foo-sample.com/lib/tpl/dokuwiki/images/logo.png
    But it has to be this below:
    http://foo-sample.com/wiki/lib/tpl/dokuwiki/images/logo.png
    The login URL is not working either. If I click on the login link, it takes me to http://foo-sample.com/wiki/start?do=login&sectok=ff7d4a68936033ed398a8b82ac9 and it says 404 Not Found. I took a look at https://www.dokuwiki.org/rewrite#nginx and tried as much as possible, however it still doesn't work. Here are my conf files. How can I fix this problem? DokuWiki is set up in /usr/share/wiki.
    /etc/nginx/conf.d/rails.conf:
    upstream sample {
        ip_hash;
        server unix:/var/run/unicorn/unicorn_foo-sample.sock fail_timeout=0;
    }
    server {
        listen 80;
        server_name foo-sample.com;
        root /var/www/html/foo-sample/public;
        location /wiki {
            alias /usr/share/wiki;
            index doku.php;
        }
        location ~ ^/wiki.+\.php$ {
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_index doku.php;
            fastcgi_split_path_info ^/wiki(.+\.php)(.*)$;
            fastcgi_param SCRIPT_FILENAME /usr/share/wiki$fastcgi_script_name;
            include /etc/nginx/fastcgi_params;
        }
    }
    /usr/share/wiki/.htaccess:
    ## Enable this to restrict editing to logged in users only
    ## You should disable Indexes and MultiViews either here or in the
    ## global config. Symlinks maybe needed for URL rewriting.
    #Options -Indexes -MultiViews +FollowSymLinks
    ## make sure nobody gets the htaccess files
    <Files ~ "^[\._]ht">
        Order allow,deny
        Deny from all
        Satisfy All
    </Files>
    # Uncomment these rules if you want to have nice URLs using
    # $conf['userewrite'] = 1 - not needed for rewrite mode 2
    # Not all installations will require the following line. If you do,
    # change "/dokuwiki" to the path to your dokuwiki directory relative
    # to your document root.
    # If you enable DokuWikis XML-RPC interface, you should consider to
    # restrict access to it over HTTPS only! Uncomment the following two
    # rules if your server setup allows HTTPS.
    RewriteCond %{HTTPS} !=on
    RewriteRule ^lib/exe/xmlrpc.php$ https://%{SERVER_NAME}%{REQUEST_URI} [L,R=301]
    <IfModule mod_geoip.c>
        GeoIPEnable On
        Order deny,allow
        deny from all
        SetEnvIf GEOIP_COUNTRY_CODE JP AllowCountry
        Allow from .googlebot.com
        Allow from .yahoo.net
        Allow from .msn.com
        Allow from env=AllowCountry
    </IfModule>

    Read the article

  • Domain Transfer Protection - need advice

    - by Jack
    Hey, I am about to purchase a domain name for a bit of money. I do not personally know the person I am purchasing the domain name from; we have only chatted via email. The proposed process for the transfer is:
    1. The owner of the domain lowers the domain name security and emails me the domain password, and I request the transfer.
    2. After the request, I transfer the money via PayPal.
    3. When the money has cleared, the current domain name owner confirms the transfer via the link that he receives in that email.
    4. I wait for it to be transferred.
    The domain is currently registered with DirectNIC - http://www.directnic.com/. Is this best practice? Seeing as I am paying a bit of money for this domain name, I am worried that after the money has cleared I won't see the domain name or hear from the current domain name owner again. Is there a 'domain governing body' which I can report to if this is the case? Is the proposed transfer process the best solution? Any advice would be awesome. Thanks! Jack

    Read the article

  • Blogger Dynamic template sometimes broken

    - by ber4444
    In all browsers (IE, Chrome, etc.) in my office and in all Blogger blogs using the dynamic template, I'm having this strange problem: no menu on the right side (except for the Subscribe button - the other buttons are missing) plus no layout selection bar (top row). This problem is occasional, not happening reliably; any ideas? Update: whenever I get the menu problem with dynamic templates, I also seem to get a broken page (in Chrome, IE, etc.) with simple templates - see second screenshot. Is anyone else having this issue?

    Read the article

  • What are the hard and fast rules for Cache Control?

    - by Metalshark
    Confession: sites I maintain have different rules for Cache-Control, mostly based on the default configuration of the server, followed up with recommendations from the Page Speed and YSlow Firefox plug-ins and the Network Resources view in Google's Speed Tracer. Cache-Control is set to private/public depending on what they say to do, ETag/Last-Modified headers are only tinkered with if YSlow suggests there is something wrong, and Vary: Accept-Encoding seems necessary when manually gzipping files for Amazon CloudFront. When reading through the material on the different options and what they do, there seems to be conflicting information, rules for broken proxies, and cargo-cult configurations. Any of the official information provided by the analysis tools mentioned above is quite inaccessible, as it deals with each topic individually instead of as a unified strategy (so there is no cross-referencing of techniques). For example, it seems to make no sense that the speed analysis tools rate a site with ETags the same as a site without them if they are meant to help with caching. What are the hard and fast rules for a platform-agnostic Cache-Control strategy?
    EDIT: A link through to Jeff Atwood's article explains caching in superb depth. For the record, though, here are the hard and fast rules:
    - If the file is compressed using gzip, etc., use "Cache-Control: private", as a proxy may return the compressed version to a client that does not support it (the browser cache will hold files marked this way, though). Also remember to include "Vary: Accept-Encoding" to say that it is compressible.
    - Use Last-Modified in conjunction with ETag - belt-and-braces usage provides both validators, and since ETag is based on file contents instead of modification time alone, using both covers all bases. NOTE: AOL's PageTest has a carte blanche approach against ETags for some reason.
    - If you are using Apache on more than one server to host the same content, remove the implicitly declared inode from ETags by excluding it from the FileETag directive (i.e. "FileETag MTime Size"), unless you are genuinely using the same live filesystem.
    - Use "Cache-Control: public" wherever you can - this means that proxy servers (and the browser cache) will return your content even if the rest of the page needs HTTP authentication, etc.
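
    Purely as an illustrative sketch (not part of the rules above), these rules for a static asset might look roughly like this in an Apache .htaccess file, assuming mod_headers is enabled; the one-year max-age is just an example value:
    <IfModule mod_headers.c>
        # Publicly cacheable for a year (example value), and vary on encoding for gzipped files
        Header set Cache-Control "public, max-age=31536000"
        Header append Vary "Accept-Encoding"
    </IfModule>
    # Drop the inode component so ETags match across load-balanced servers
    FileETag MTime Size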

    Read the article

  • Why are the custom campaign parameters in Google Analytics so long?

    - by Baumr
    Adding several Google Analytics custom campaign parameters can make URLs very long. For example, in Google's own examples:
    http://www.example.com/?utm_campaign=spring&utm_medium=referral&utm_source=exampleblog
    http://www.example.com/?utm_campaign=spring&utm_medium=email&utm_source=newsletter1
    http://www.example.com/?utm_campaign=spring&utm_medium=email&utm_source=newsletter1&utm_content=toplink
    Are there shorter alternatives that GA will pick up?

    Read the article

  • Too many access denied errors showing in Google Webmaster Tools every day

    - by user2255733
    I get 18,000 access denied errors showing in Google Webmaster Tools every day! Strangely, they show for URLs with www and not non-www. Fetch as Google works perfectly for pages that got that error. Google has started to downgrade my website - impressions have dropped from 35,000 to 18,000. I am using the CloudFlare CDN and .htaccess mod_rewrite. Any help will be extremely appreciated, as I am really losing control.

    Read the article

  • How can I stop a bot attack on my site?

    - by tnorthcutt
    I have a site (built with WordPress) that is currently under a bot attack (as best I can tell). A file is being requested over and over, and the referrer is (almost every time) turkyoutube.org/player/player.swf. The file being requested is deep within my theme files, and is always followed by "?v=" and a long string (i.e. r.php?v=Wby02FlVyms&title=izlesen.tk_Wby02FlVyms&toke). I've tried setting an .htaccess rule for that referrer, which seems to work, except that now my 404 page is being loaded over and over, which is still using lots of bandwidth. Is there a way to create an .htaccess rule that requires no bandwidth usage on my part? I also tried creating a robots.txt file, but the attack seems to be ignoring that.
    # This is the relevant part of the .htaccess file:
    RewriteCond %{HTTP_REFERER} turkyoutube\.org [NC]
    RewriteRule .* - [F]

    Read the article

  • Site failing randomly - could it be Cloudflare or something weird in the JS?

    - by James
    I've been working on a simple site that uses javascript to fade through some fullscreen background images as well as some other simple animations. I've tested the site on Chrome, Safari, FF and Opera on OSX, IE8+ on Win7 and Chrome & FF on Ubuntu and everything looks as I'd expect it to. However, I've had reports of the site failing to load (stops at the stage where the background fades up) on Safari and Chrome on OSX and Win. I can't replicate this on any setup so I'm finding it impossible to troubleshoot. Google's instant preview shows the site fine as does most of the options at browsershots.org so I'm really scratching my head. I'm running the site's traffic through Cloudflare and I'm wondering whether anyone can see (or knows from other sites) why Cloudflare might be mangling the JS or causing a problem somehow (I don't get any errors in the JS error console). Of course, if you can replicate the problem on your machine and can suggest an area to look at that would be amazing but I'm hoping that, like me, you don't see any problem with the site! Here's the site: http://www.bighornrevelstoke.com Thanks, James

    Read the article

  • Why do some video posts from the same blog appear in Google with thumbs, while others do not?

    - by jayarjo
    We own a media blog - basically a big collection of various videos streamed through our branded player. The interesting thing is that some of our posts show up in Google search results with a thumbnail denoting that the post in question is in fact a video, but more often they do not. We basically wonder why. What affects it, and can we control it somehow? All posts (their single pages) have Facebook OG meta tags in place.

    Read the article

  • Displaying thumbnails in google search results for flash games?

    - by serg
    Some sites are somehow displaying video preview thumbnails in Google search results for pages that contain Flash games. For example, search for: site:www.thorgaming.com game
    I see this only on small sites, which makes me believe Google is not very happy about it. How do they do it, and is it OK with Google? I assume they are submitting thumbnails through video sitemaps, but I can't find any information about using them with Flash games. I also ran a few such sites through the rich snippet testing tool and it didn't detect any microdata tags on the pages.

    Read the article

  • Services - Separate Sites or One Site - Impact on SEO

    - by Lynda
    I have a client who is a lawyer specializing in criminal defense and DUI; however, he does not show up well in Google. In researching, I found that the sites that rank better have much more content for those specialties than his site does, and my thought is that he needs to add more quality content to rank better for those searches. On his site he mentions his specialties, but he also has various personal things on his site that reflect his interests. These are clearly separated from the business portion. My questions are: 1) should he separate his personal information into a new domain, and 2) should he have a separate URL for each of his specialties? Or would one URL work as long as everything is clearly separated? I read once that for legal services to rank well you should make a separate site for each specialty and have that site focus solely on that service.

    Read the article

  • How to find out if my hosting's speed is good enough?

    - by Mert Nuhoglu
    There are lots of different online performance tests:
    - Google PageSpeed Insights
    - iWebTool Speed Test
    - AlertFox Page Load Time
    - WebPageTest
    There are also several desktop/client tools, such as:
    - the ping tool
    - YSlow
    - Firebug's Net console
    - Fiddler
    - HttpWatch
    I just want to decide whether my hosting provider's performance is good enough or whether I need to switch my hosting to another provider. So, which tool should I use to compare my hosting provider with other hosting providers?

    Read the article

  • Does Submit to Index on a page with new content update Content Keywords for the site?

    - by Dan Kanze
    Using Google Webmaster Tools I'm trying to update the Content Keywords of my site, but I'm confused about the relationship between Submit to Index and Content Keywords.
    - Does Fetch as Google -- Submit to Index on a previously indexed page containing new content expedite updating the Content Keywords crawled by the real Googlebot?
    - Does Submit to Index only submit new URLs, so that previously indexed URLs still point to the older cached version until Google crawls the new content on its own?
    - Does Submit to Index have anything to do with Content Keywords or with crawling new content, whether on a previously indexed page or a never-indexed page?

    Read the article

  • How can I secretly ban someone (hellban) in phpbb?

    - by Dan Fabulich
    The trolls are driving me crazy; I'd like to experiment with a secret ban.
    http://www.codinghorror.com/blog/2011/06/suspension-ban-or-hellban.html
    "A hellbanned user is invisible to all other users, but crucially, not himself. From their perspective, they are participating normally in the community but nobody ever responds to them. They can no longer disrupt the community because they are effectively a ghost. It's a clever way of enforcing the 'don't feed the troll' rule in the community. When nothing they post ever gets a response, a hellbanned user is likely to get bored or frustrated and leave."
    It looks like there's no way to do this out of the box with phpBB. Is there a way to hack it in?

    Read the article

  • How can one keep an ecommerce site active?

    - by Mantorok
    So, you build an e-commerce site, all your products are on there, but then very little changes, which obviously causes your site to become less active and ultimately not rank as highly in search engines. Is there anything that can be done to keep it active? I'm aware that inbound links are important, and I guess these come over time; are there any other recommended means of keeping the site active?

    Read the article

  • Does text size and placement on the page have an effect on SEO?

    - by sam
    I was wondering, seeing as Google and others keep trying to get more and more 'human' in terms of rating what's good and what's spam, whether it is known if they take into account the size of a heading - i.e. a heading whose font size is 40px is going to speak a lot more to the user than one whose font size is 14px. Similarly, does placement factor in? I.e. a 300-word article at the bottom of a landing page (not in the footer, but below the useful content) would just be there for SEO purposes. I know they look at whether you're doing things like text-indent:-9999px; and white text on a white background, but what about these more borderline practices that both have legitimate uses and also the possibility to be spammy?

    Read the article

  • Why am I getting domainpark.cgi being called from my website?

    - by Sean
    I used to test my site on www.exampleone.com; I have now moved to the real domain www.realdomain.com, and www.exampleone.com is now parked by 1and1 (the default). Now when I check which requests are made by www.realdomain.com, I see domainpark.cgi and park.js from Sedo Parking being requested, as well as the JS that serves the ads by adclicks. How do I get rid of this? It's not on the index page at all, and it's causing a lot of strain and slowing my site down.

    Read the article
