Search Results

Search found 1237 results on 50 pages for 'sam'.

Page 10 of 50

  • Tumblr blog under subdomain blog.mysite.com - do I still get the benefit?

    - by sam
    I've got a Tumblr blog for my main site; I use it more as a proper blog with articles and images, rather than just a tumblelog. The blog is mapped to blog.mydomain.com. Because of the ease of 'social' reposting in Tumblr, people often repost my articles (which is great: backlinks!). But all these links go back to my blog and not my main site, which is at mydomain.com. Does Google see blog.mydomain.com and mydomain.com as the same / linked entity, and is my main site getting the benefit of these links?

    Read the article

  • What are the problems with a relatively large common library?

    - by Sam Pearson
    As long as the code in the base library is as loosely coupled as it would be if split into separate libraries, what's the problem? In general, having a lot of assemblies composing a .NET solution is painful. Plus, when code needs to be shared, it can just be added to the common library, rather than having to decide which common library it should be added to or creating yet another library. Edit: the question came to me after using Smalltalk for a bit, where all the code is available to use, all the time.

    Read the article

  • Tried everything, yet highest bounce rate?

    - by Sam
    I have read a lot of blog posts and articles with tips on how to decrease bounce rate. I feel I write very good content (my niche is science), I set up a good design with attractive features (like download as PDF, etc.), and I improved site loading times (my Google Page Speed score is 80+), but even then my bounce rate is always above 90%, sometimes 100% :(. I get 42% of my traffic from the US, and Google Analytics reports no visitor staying for more than 10-12 seconds. Please guide me.

    Read the article

  • Real-time stock market application

    - by Sam
    I'm an amateur programmer. I'd like to develop a software application (like TradeStation) to analyse real-time market data. Please tell me whether the following approach is correct, i.e. the procedures, knowledge, or software needed:

    1. Use a DB to capture the real-time feed from the data provider: what would be the right DB to use? I know it should be a time-series one. Can I use SQL Server, MySQL, or others? What kind of database can receive a real-time data feed, and do I need to configure the DB to do this?
    2. If the real-time data arrives in ASCII form, how can it be converted into something the DB and my application can read? Do I have to write code, or can I just use some add-ins? What kind of add-ins are needed?
    3. How should I code the program to retrieve the changing data from the DB so that the analysis software's on-screen data can also change asynchronously (like RTD in Excel)?
    4. Which aspects of programming do I need to learn to develop the above? Are there web resources / books I can refer to for more information?
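
    For point 3, a minimal sketch of one common approach: poll the database for rows newer than the last one processed and hand each new row to the display layer. The sketch uses Python's built-in sqlite3 so it is self-contained; the ticks table, its columns, and the on_new_tick handler are illustrative assumptions, not a real feed schema:

      # Polling sketch: fetch only the rows that arrived since the last check.
      import sqlite3
      import time

      def on_new_tick(row):
          print("tick:", row)  # stand-in for updating charts/analysis

      def poll_ticks(db_path="ticks.db", interval=1.0):
          conn = sqlite3.connect(db_path)
          last_id = 0  # high-water mark of rows already processed
          while True:
              rows = conn.execute(
                  "SELECT id, symbol, price, ts FROM ticks WHERE id > ? ORDER BY id",
                  (last_id,),
              ).fetchall()
              for row in rows:
                  last_id = row[0]
                  on_new_tick(row)
              time.sleep(interval)  # a push-based feed would avoid polling lag

    A real-time feed handler would normally push updates to the UI (as Excel's RTD does) rather than poll, but polling a table like this is the simplest way to see the moving parts.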

    Read the article

  • Adding tagged / dynamic pages to the sitemap

    - by sam
    I've got a blog that's been running for about a year. I've made about 200 posts, and there should be about 220 pages to index (additional pages for about / contact etc.). When I crawl the site I get 1,900 pages, because of all the pages related to the tags I've used in my posts; 70% of these pages contain only one blog post. When submitting my sitemap to Google, should I exclude all pages with /tagged/ in the URL, so I'll only be submitting unique pages, or should I submit the full sitemap?
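
    If you do decide to exclude them, a minimal sketch of filtering the tag pages out of an existing sitemap before submission. It assumes a standard sitemaps.org-format sitemap.xml; the /tagged/ path segment is taken from the question, and the file names are assumptions:

      # Strip /tagged/ URLs from sitemap.xml so only unique pages are submitted.
      import xml.etree.ElementTree as ET

      NS = "http://www.sitemaps.org/schemas/sitemap/0.9"
      ET.register_namespace("", NS)  # keep output free of ns0: prefixes

      tree = ET.parse("sitemap.xml")
      urlset = tree.getroot()
      for url in list(urlset.findall("{%s}url" % NS)):
          loc = url.find("{%s}loc" % NS)
          if loc is not None and "/tagged/" in loc.text:
              urlset.remove(url)  # drop thin tag-archive pages
      tree.write("sitemap-filtered.xml", encoding="utf-8", xml_declaration=True)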

    Read the article

  • Reading 'Index Status' graph in Google Webmaster tools

    - by sam
    I recently found a bunch of old files that had been FTP'ed to a live production server by mistake on a static (HTML / CSS / JS) site. I manually deleted these files, but today when checking in Google Webmaster Tools I found the graph below. The 'update' marker is from 3/9/14. What I can't work out is what Google is trying to tell me. Are they saying that: there was a ranking update like Penguin or Panda, and they penalized my site and un-indexed a load of pages which they thought were junk, OR is this showing that I updated the site by deleting the files on the server on 3/9/14, OR is this something else?

    Read the article

  • How do I add changes to resolv.conf without getting them overwritten?

    - by Sam
    I have finally migrated from 7.10 to 12.04. I have one last part to complete, but I am stumped. I am using Puppet on each server, and in the past have used resolv.conf to point my search at the puppetmaster:

      search puppetmaster.com
      nameserver 192.168.1.XXX

    When trying to use that file on 12.04, resolv.conf gets overwritten on reboot. I cannot use a static IP for these machines, so working around it in /etc/network/interfaces is a moot point.

      # Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
      # DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
      nameserver 127.0.0.1

    Is there a way to get resolvconf to handle this, either in the head, tail, or base files? If there is, are there any examples I can use to tweak on my server? Any help is much appreciated.
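
    For reference, on systems where /etc/resolv.conf is generated by resolvconf(8), as above, lines placed in /etc/resolvconf/resolv.conf.d/head are prepended to the generated file on every rebuild (tail is appended instead), so they survive reboots. A sketch, reusing the question's own placeholder values:

      # /etc/resolvconf/resolv.conf.d/head
      # Prepended to the generated /etc/resolv.conf on every rebuild;
      # apply immediately with: sudo resolvconf -u
      search puppetmaster.com
      nameserver 192.168.1.XXX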

    Read the article

  • Does Google use Chrome to check if a link is used by humans or is just there for the bots?

    - by sam
    Does clicking a link in Chrome tell Google the link is used by humans and is therefore not just automated backlink spam? It sounds weird, but I read it today on a slightly obscure SEO blog: they mentioned clicking the backlinks they make in a version of Chrome where they have the "send data anonymously to Google" feature turned on. It sounded a bit far-fetched, but then I thought there could be some truth to it, as with Google now looking harder at "spammy" links it would mean at least some humans are using them. Has anyone else heard anything about this?

    Read the article

  • How many developers do I need to build a website like Freelancer.com in about 3-5 months? [closed]

    - by Sam
    I have been asked to make a list of the people I would need to build something similar to freelancer.com. Not exactly the same; it has a few more features too, but I can't really get my head around the whole freelancer.com site. I have built a social networking site from scratch that is 70% of Facebook and 20% of Google+, in about 5 months, with raw PHP, JS, CSS and Ajax. I don't think it would take me more than a month or so to build the whole of freelancer.com from scratch. Please suggest anything I should pay attention to. I am thinking about:

    2 PHP developers
    1 MySQL engineer
    1 network/server engineer
    1 graphics artist
    1 UI developer
    Time frame: 20 days

    Is this a good estimation?

    Read the article

  • Does text size and placement on the page have an effect on SEO?

    - by sam
    I was wondering, seeing as Google and others keep trying to get more and more 'human' in terms of rating what's good and what's spam: is it known whether they take into account the size of a heading? I.e., a heading whose font size is 40px is going to speak a lot more to the user than one whose font size is 14px. Similarly, does placement factor in? I.e., a 300-word article at the bottom of a landing page (not in the footer, but below the useful content) would just be there for SEO purposes. I know they look at whether you're doing things like text-indent:-9999px; and white text on a white background, but what about these more borderline practices that have both legitimate uses and the possibility to be spammy?

    Read the article

  • Google Places SEO?

    - by sam
    I'm familiar with SEO and getting higher Google listings, but for a lot of services Google has recently been making its search results (where applicable) much more location-orientated. For instance, searching for an "accountant in london" or "accountancy firm london" will return the first half of page 1 as Google Places listings; then, under about 6 of these, you get the normal search results, so someone who used to rank #1 on page 1 will now rank effectively #7. What I was wondering is that I can't see any reason why the companies that rank high in the Places results get there; often they are not high up in the search results. Is there a way to optimise, on-site or off-site, to rise up the Google Places listings in your city?

    Read the article

  • Do you still get a bounce in Google Analytics if all the linked pages/content are loaded dynamically?

    - by sam
    Google Analytics describes a bounce as a user that visits and leaves after their first page. But if your site is a one-page site, with content loaded dynamically using JavaScript, you could have a user on your site go through loads of info, text, and images; would that still count as a bounce? Or once they click on an a-tag, even if it is <a href="#">, can Google Analytics see that? (I'm aware of click tracking in Analytics.) But I was wondering if Google picks up these clicks by default.

    Read the article

  • Which of these URL scenarios is best for big link menus? [SEO / user-friendly URLs]

    - by Sam
    Hi folks, a question about URLs: a good friend and I are exploring the possibilities of three scenarios for a website where each page has a menu system with about 130 links:

    SCENARIO 1: the page's menu system has SHORT non-descriptive hyperlinks and a SHORT canonical URL: <a href="design">dutch design</a>, with the page's canonical URL pointing to e.g. "design"

    SCENARIO 2: the page's menu system has SHORT non-descriptive hyperlinks with LONG canonical URLs: <a href="design">dutch design</a>, with the page's canonical URL pointing to "dutch-design-crazy-yes-but-always-honest"

    SCENARIO 3: the page's menu system has LONG descriptive hyperlinks with LONG canonical URLs: <a href="dutch-design-crazy-yes-but-always-honest">dutch design</a>, with the page's canonical URL pointing to "dutch-design-crazy-yes-but-always-honest"

    Currently we have scenario 2; should we progress to scenario 3? All three work fine and point via mod_rewrite to the same page, which is fetched behind the scenes. Now, my question is which of these is better in terms of:

    user-friendliness (page loading times, full URL visible in the URL bar or not)
    SEO friendliness (proper indexing thanks to URLs containing descriptive, relevant terms)
    other concerns we forgot, like possible penalties for so many words in link hrefs?

    Thanks very much for your suggestions; much appreciated!

    Read the article

  • www.yoursite.com or http://yoursite.com: which one is future-proof?

    - by Sam
    http://yoursite.com
    www.yoursite.com
    http://www.yoursite.com
    yoursite.com

    Which of these would you choose as your favourite to work with? If you were to make a site for 2011 and beyond, which domain name would you provide to clients, websites linking to you, your letterhead, and contact cards? Why one or the other? Which should be avoided? Thinking of the following aspects:

    validity, correctly loading URL
    audience: most geeks know http://, most seniors/clients don't
    easiest to remember / URL as a brand
    misspellings by user input (on a mobile phone or in a desktop browser)
    browsers not understanding protocol-less links
    total length of characters for easy user input
    form preferred by major search engines / social media sites
    consistency, so that links don't fragment but all point to the same place

    Read the article

  • Is There a Real Advantage to Generic Repository?

    - by Sam
    Was reading through some articles on the advantages of creating generic repositories for a new app (example). The idea seems nice because it lets me use the same repository to do several things for several different entity types at once:

      IRepository repo = new EfRepository(); // Would normally pass through IoC into constructor
      var c1 = new Country() { Name = "United States", CountryCode = "US" };
      var c2 = new Country() { Name = "Canada", CountryCode = "CA" };
      var c3 = new Country() { Name = "Mexico", CountryCode = "MX" };
      var p1 = new Province() { Country = c1, Name = "Alabama", Abbreviation = "AL" };
      var p2 = new Province() { Country = c1, Name = "Alaska", Abbreviation = "AK" };
      var p3 = new Province() { Country = c2, Name = "Alberta", Abbreviation = "AB" };
      repo.Add<Country>(c1);
      repo.Add<Country>(c2);
      repo.Add<Country>(c3);
      repo.Add<Province>(p1);
      repo.Add<Province>(p2);
      repo.Add<Province>(p3);
      repo.Save();

    However, the rest of the implementation of the repository has a heavy reliance on LINQ:

      IQueryable<T> Query();
      IList<T> Find(Expression<Func<T,bool>> predicate);
      T Get(Expression<Func<T,bool>> predicate);
      T First(Expression<Func<T,bool>> predicate);
      //... and so on

    This repository pattern worked fantastically for Entity Framework, and pretty much offered a 1-to-1 mapping of the methods available on DbContext/DbSet. But given the slow uptake of LINQ on data access technologies outside of Entity Framework, what advantage does this provide over working directly with the DbContext? I attempted to write a PetaPoco version of the repository, but PetaPoco doesn't support LINQ expressions, which makes creating a generic IRepository interface pretty much useless unless you only use it for the basic GetAll, GetById, Add, Update, Delete, and Save methods and utilize it as a base class. Then you have to create specific repositories with specialized methods to handle all the "where" clauses that I could previously pass in as a predicate. Is the generic repository pattern useful for anything outside of Entity Framework? If not, why would someone use it at all instead of working directly with Entity Framework? Edit: the original link doesn't reflect the pattern I was using in my sample code. Here is an (updated link).

    Read the article

  • Install a Mirror without downloading all the packages in the official repository

    - by Sam
    I'm first going to explain the situation (both PCs are running Ubuntu 12.04): I have a laptop which is connected to a wifi connection, and a desktop which cannot be connected to the Internet (the modem is too far from it), and I want to install some software on the latter. (The two PCs are connected with an Ethernet cable.) I've already searched for a solution, but all I found was the use of some software that should have been already installed on the "Internet-less PC" (Keryx, APTonCD ...). What I want to do is create a mirror on my laptop which contains the packages I have on this one (situated in /var/cache/apt/archives), and I don't want to download all the packages from the official repository; I don't need them. Can someone tell me if this is possible? Thank you.
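
    One commonly used approach, sketched here with assumed paths: copy the cached .deb files from the laptop's /var/cache/apt/archives to a folder the desktop can reach, build a package index over them with dpkg-scanpackages (from the dpkg-dev package), and point the offline machine's apt at that folder:

      # Build the index on the laptop, inside the folder of copied .deb files:
      #   dpkg-scanpackages . /dev/null | gzip -9c > Packages.gz
      # Then add this line to /etc/apt/sources.list on the offline PC
      # (/media/usb/debs is an assumed path for the copied files):
      deb file:/media/usb/debs ./

    After an apt-get update on the offline machine, the cached packages become installable (apt may warn that they are unauthenticated, since the local index is unsigned).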

    Read the article

  • Will Unity have a keyboard shortcut for accessing the "Session Menu" that appears on the panel?

    - by Sam
    I noticed in screenshots of Unity the presence of the "Session Menu" indicator in the right corner of the top panel. This menu drops down to offer Log Out, Hibernate, Restart, Shut Down, etc. I know the keyboard shortcuts are not complete yet, but are there plans to implement a shortcut for accessing this Session Menu (i.e., so users can log out, restart, and shut down without having to use the mouse)? Further, will the shortcut allow navigation through the menu by just typing the first letter of the listed word (e.g., R for Restart and S for Shut Down)?

    Read the article

  • HP ENVY 4-Sleekbook or Samsung Series 5 NP530U3B Ultrabook?

    - by Sam
    I am a high school student and I need a laptop within a budget of $650. I usually have a browser, Microsoft Office, music, and possibly a movie or something open at once. Will the HP ENVY 4's Intel Core i3 processor be enough to handle this, or would I have to get the Samsung Series 5 13-inch ultrabook to get the job done? I really like the look of the HP ENVY 4, but I also want a laptop that will be quick enough to handle my needs. Please help!

    Read the article

  • Book Review: Professional ASP.NET MVC 4

    - by Sam Abraham
    The past few weeks have been particularly busy as I continue to dedicate a bigger portion of my free time to refreshing my memory and enhancing my knowledge of best practices pertaining to technologies we plan on using for a major upcoming project. In this blog post, I will be providing a brief overview of my latest reading, "Professional ASP.NET MVC 4" by Jon Galloway, Phil Haack, Brad Wilson and K. Scott Allen. This book is a must-read for web developers looking to enhance their MVC expertise with best practices and tips shared by recognized industry experts. The book takes the reader on a 16-chapter journey towards being a better ASP.NET MVC developer, with chapter 16 putting all the information covered into practical context by dissecting the implementation of NuGet.org, a real-life open-source ASP.NET MVC project. All code samples referenced in this book are conveniently accessible via NuGet, a free, open-source library package manager that installs as a Visual Studio extension. Chapters 2, 3 and 4 thoroughly cover MVC's various components: controllers ("C"), views ("V") and models ("M") respectively. Chapter 5 covers the additional extension methods (helpers) provided to speed and ease the use of common HTML elements such as forms, textboxes and grids, to name a few. Chapter 6 tackles built-in validation while providing examples and use cases for implementing custom validation that plugs into the MVC framework. Chapters 7 through 11 discuss the latest on membership, Ajax, routing, NuGet and the ASP.NET Web API. Chapters 12 (Dependency Injection) and 13 (Unit Testing) demonstrate a big competitive advantage of MVC with its ease of testability and pluggability. Chapters 14 and 15 target the advanced developer, showcasing how to extend MVC to customize and replace every piece in the framework. In conclusion, I strongly recommend Professional ASP.NET MVC 4 as an excellent read for both developers already using MVC and those getting started with the framework. Many thanks to the Wiley/Wrox User Group Program for their support of our West Palm Beach Developers' Group. You can access my reviews of books I recently read:
    Professional ASP.NET Design Patterns
    Professional WCF 4.0
    Inside Windows Communication Foundation
    Inside Microsoft SQL Server 2008 series

    Read the article

  • Install a Mirror without downloading all packages from official repository

    - by Sam
    I'm first going to explain the situation: I have a laptop which is connected to a wifi connection, and a desktop which cannot be connected to the Internet (the modem is too far from it), and I want to install some software on the latter. (The two PCs are running Ubuntu 12.04 and are connected with an Ethernet cable.) I've already searched for a solution, but all I found was the use of some software that should have been already installed on the "Internet-less PC" (Keryx, APTonCD ...). What I want to do is create a mirror on my laptop which contains the packages I have on this one (situated in /var/cache/apt/archives), and I don't want to download all the packages from the official repository; I actually don't need them. Can someone tell me if it is possible to do that? Thanks,

    Read the article

  • Google Webmaster Tools showing 6 pages submitted, 0 indexed, yet I can see them all when I search in Google?

    - by sam
    I have a small 'brochure' type site with 6 pages. I can see all the pages showing up in Google when I search for my site, but in Webmaster Tools, under the Sitemaps section, it says 6 submitted (the blue bar of the graph) while the indexed pages (the red bar) show 0 indexed, even though they seem to be indexed. Any idea why this is? I don't really think it's that important, as the pages are still indexed, but it just seems odd.

    UPDATE 9/3/12: having just looked in Google Webmaster Tools, it shows 11 pages indexed under the Health > Index Status tab, but under the Optimization > Sitemaps tab it shows 6 URLs submitted and only 1 indexed. Index Status and Sitemap status screenshots were attached below.

    Read the article

  • Java Developers: Is Ant still "mainstream" for builds? Do we push new developers to learn it?

    - by Sam Goldberg
    We have been slowly replacing batch command files (Windows .bat), which simply jarred up the classes compiled in the developer's IDE, with more comprehensive Ant builds (i.e., get from CVS, clean compile, jar, archive, email, etc.). I've spent a lot of time learning (and debugging issues) with Ant, so I'm most comfortable using it for these tasks. But I wonder if Ant is still in as wide usage as it was when I first started learning it, or whether "the world has moved on" to something newer (and maybe slicker). (I've started to see more Maven build stuff distributed, which I've never used, for example.) The practical import of this question is whether I push new developers to learn Ant, or whether they should be learning something else for builds. I'm never too on top of the trends, so it would be great to hear from other Java developers what they think is the best build tool, and what they think new developers should be learning.
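
    For readers who haven't seen one, a minimal sketch of the kind of Ant build file in question, with clean / compile / jar targets; the directory layout and project name are assumptions:

      <!-- build.xml: minimal clean / compile / jar pipeline -->
      <project name="app" default="jar" basedir=".">
          <property name="src.dir" value="src"/>
          <property name="build.dir" value="build"/>

          <target name="clean">
              <delete dir="${build.dir}"/>
          </target>

          <target name="compile" depends="clean">
              <mkdir dir="${build.dir}/classes"/>
              <javac srcdir="${src.dir}" destdir="${build.dir}/classes"
                     includeantruntime="false"/>
          </target>

          <target name="jar" depends="compile">
              <jar destfile="${build.dir}/app.jar" basedir="${build.dir}/classes"/>
          </target>
      </project>

    A fuller build of the sort described above would add targets for CVS checkout, archiving, and email notification on top of this skeleton.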

    Read the article

  • Data recovery on Ubuntu 11.10?! (after crashing with Seagate 320GB)

    - by Sam
    I just installed 11.10 last week and decided to transfer iTunes music (from my Windows dual boot) to my Seagate 320GB drive. I left it in, restarted, clicked Ubuntu at the boot screen, and then it froze after a few lines of boot messages! I think I got to 3.7086 or something before I pressed Ctrl+Alt+Del, and the system restarted after another few lines. I am completely new to Ubuntu, so after Googling I made a live CD with 10.04, which I've heard is the most stable release, and I'm typing this from there now. However, when I go to mount my partition, only the Windows Vista partition (308GB) is there! It has all my Windows files, but my Ubuntu 11.10 ones are nowhere to be found. I need to restore the pictures I transferred from my camera using Shotwell the other day; any help is appreciated! P.S. 11.10 never crashed on me in my trial week, so I'm guessing it's the Seagate hard drive's fault. However, now I'm running 10.04 and it works fine.

    Read the article

  • Ubuntu 12.04 shutdown freezes

    - by sam
    I have been using Ubuntu 12.04 for the last month without any problem. It was upgraded from Ubuntu 10.10. Recently I needed to replace my hard disk. After the replacement I installed Windows 7 and Ubuntu 12.04 on the system. For some days it went OK, but now shutting down from Ubuntu causes problems. It freezes on the text screen with the messages "acpid: exiting" and "checking for running unattended upgrades". After that I waited for a long time, but it didn't power off, and I had to hit Ctrl+Alt+Del to restart the system, boot into Windows, and shut down from there. This happens every time I shut down my system from Ubuntu. I have seen many posts regarding the same kind of issue, but none of the solutions reported there helped me out.

    Read the article

  • Value of links on negative review pages

    - by Sam Healey
    A general assumption with SEO is more links = higher rankings. What I would like to know is: does Google know what those links are referring to? I.e., if somebody gives a product a good review on their personal blog and links the review to another company's website (which is selling the product), would Google take the review/description link into consideration? Essentially, would Google know that this link refers to a product, so that if somebody is looking to buy the product, Google would know to include this page because the link indicated that it sells the product rather than just having information on it? Then, to take this further, does Google know if a link is positive or negative? For example, if somebody creates a post saying "do not visit example.com, example.com is bad because of blah blah blah", would Google know that the link is conveying bad feedback, and would it therefore have a negative effect on rankings, or would Google just treat it as another link and give better rankings?

    Read the article
