Search Results

Search found 103627 results on 4146 pages for 'google code'.


  • Sharing code in LGPL and proprietary software

    - by Martin
    Hi, I'm working on a piece of software that'll be released as a DLL under the LGPL. The software interfaces with hardware from a small company that has provided me with the needed libraries and some code to use them correctly (not only headers, it's all in a separate file). As far as I know, the same code is used in their proprietary software, which they don't intend to open source, but they'd be fine releasing the piece of code they've given me. Now here's the question: what license could be used on the code I got from the company? I guess using the GPL or LGPL would make them violate the GPL when using the same code in their other software. Is MIT a good idea? Is it OK to just include a file with an MIT license on it in my otherwise LGPL-licensed project? Since I'm not the copyright holder, I'd obviously have to ask the company to apply the license, but that shouldn't be a problem. Thanks, /Martin


  • Multiple 301 redirects and massive loss of ranking

    - by DoesNotCompute
    I just remade a website from scratch for a client, and the client asked me to preserve their rankings by setting up 301 redirects from the original URLs to the new URLs. For instance, http://plumber-directory.my-website.com/john-smith-city-1.php became http://directory.my-website.com/plumber/city/john-smith.html. So I put the website online for a few days, until the 301s had partially kicked in on the Google results. Then the client called me back to tell me that his boss wanted to switch back to the old URLs. So I put a new 301 redirect in place: http://directory.my-website.com/plumber/city/john-smith.html now reverts to http://plumber-directory.my-website.com/john-smith-city-1.php. Because Google only had a few days to assimilate the new URLs, it now has both kinds of URLs in its result pages. Also, the ranking of the website keeps falling day after day; I suspect Google of mistaking those redirects for duplicate content. Is there something I can do to avoid a total loss of rankings?
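
    For reference, what I have in mind for the second round of redirects is a one-to-one 301 map in the .htaccess of directory.my-website.com, something along these lines (assuming Apache with mod_alias; only the example URL from above is shown, the real file would carry one line per listing):

        # .htaccess on directory.my-website.com
        # Permanently send each new-style URL back to its original .php address
        Redirect 301 /plumber/city/john-smith.html http://plumber-directory.my-website.com/john-smith-city-1.php

    The old-to-new rules from the first launch would have to be removed at the same time, otherwise the two directions chain into a loop and Google keeps seeing both sets of URLs.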


  • Gallio and VS2010 code coverage

    - by andrewstopford
    Scott mentioned on Twitter a great post on using VS2010 code coverage with ASP.NET unit tests, with the following comment. So I figured I would work up a quick post on using Gallio with the code coverage features (and thus MbUnit, NUnit etc.). Using Gallio with the VS2010 code coverage features is exactly the same as it would be with MSTest:
      1. Enable the code coverage collector
      2. Select the assembly you want to profile (double-click the collector to do this)
      3. Run your test
      4. Right-click and select Code Coverage


  • Are keywords in URLs good SEO or needlessly redundant?

    - by Blazemonger
    A coworker and I are locked in a debate over the value of SEO keywords in the URL of a page. She wants to change all the filenames of the HTML pages of a fencing company so they look like residential-home-chicago.html, contact-chicago-contractor.html, and so on. She is convinced that because Google highlights keywords in URLs in search results, putting keywords there is more valuable. My position is that these do not improve SEO, since Google doesn't seem to give keywords in the URL any more weight than keywords in the body of the page, and might even give them less weight. In the meantime, they make it harder for me to find the pages I want when it's time to edit them, and the site as a whole looks cheap and spammy. Google's own SEO guide suggests to me that yes, keywords in URLs are useful, but not superior, and that they are more useful for human readability than search engine rankings. I'm looking for authoritative sources that support either position, not blog articles from SEO optimization companies trying to promote themselves.


  • Upgrading SSIS Custom Components for SQL Server 2012

    Having finally got around to upgrading my custom components to SQL Server 2012, I thought I'd share some notes on the process. One of the goals was minimal duplication, so the same code files are used to build the 2008 and 2012 components; I just have a separate project file. The high-level steps are listed below, followed by some more details.

      1. Create a 2012 copy of the project file
      2. Upgrade the project (just open the new project file in VS2010)
      3. Change target framework to .NET 4.0
      4. Set conditional compilation symbol for DENALI
      5. Change any conditional code, including assembly version and UI type name
      6. Edit project file to change referenced assemblies for 2012

    Change target framework to .NET 4.0
    Open the project properties. On the Application page, change the Target framework to .NET Framework 4.

    Set conditional compilation symbol for DENALI
    Re-open the project properties. On the Build tab, first change the Configuration to All Configurations, then set a Conditional compilation symbol of DENALI.

    Change any conditional code, including assembly version and UI type name
    The value doesn't have to be DENALI, it can actually be anything you like; that is just what I use. It is how I control sections of code that vary between versions. There were several API changes between 2005 and 2008, as well as interface name changes. Whilst we don't have the same issues between 2008 and 2012, I still have some sections of code that do change, such as the assembly attributes.

        #if DENALI
        [assembly: AssemblyDescription("Data Generator Source for SQL Server Integration Services 2012")]
        [assembly: AssemblyCopyright("Copyright © 2012 Konesans Ltd")]
        [assembly: AssemblyVersion("3.0.0.0")]
        #else
        [assembly: AssemblyDescription("Data Generator Source for SQL Server Integration Services 2008")]
        [assembly: AssemblyCopyright("Copyright © 2008 Konesans Ltd")]
        [assembly: AssemblyVersion("2.0.0.0")]
        #endif

    The Visual Studio editor automatically formats the code based on the current compilation symbols, hence in this case the 2008 code is greyed out to indicate it is disabled. As you can see in the previous example I have distinct assembly version attributes, ensuring I can run the 2008 and 2012 versions of my component side by side.

    For custom components with a user interface, be sure to update the UITypeName property of the DtsTask or DtsPipelineComponent attributes. As above, I use the conditional compilation symbol to control the code.

        #if DENALI
        [DtsTask
        (
            DisplayName = "File Watcher Task",
            Description = "File Watcher Task",
            IconResource = "Konesans.Dts.Tasks.FileWatcherTask.FileWatcherTask.ico",
            UITypeName = "Konesans.Dts.Tasks.FileWatcherTask.FileWatcherTaskUI,Konesans.Dts.Tasks.FileWatcherTask,Version=3.0.0.0,Culture=Neutral,PublicKeyToken=b2ab4a111192992b",
            TaskContact = "File Watcher Task; Konesans Ltd; Copyright © 2012 Konesans Ltd; http://www.konesans.com"
        )]
        #else
        [DtsTask
        (
            DisplayName = "File Watcher Task",
            Description = "File Watcher Task",
            IconResource = "Konesans.Dts.Tasks.FileWatcherTask.FileWatcherTask.ico",
            UITypeName = "Konesans.Dts.Tasks.FileWatcherTask.FileWatcherTaskUI,Konesans.Dts.Tasks.FileWatcherTask,Version=2.0.0.0,Culture=Neutral,PublicKeyToken=b2ab4a111192992b",
            TaskContact = "File Watcher Task; Konesans Ltd; Copyright © 2004-2008 Konesans Ltd; http://www.konesans.com"
        )]
        #endif
        public sealed class FileWatcherTask : Task, IDTSComponentPersist, IDTSBreakpointSite, IDTSSuspend
        {
            // .. code goes on...
        }

    Shown below is another example I found that needed changing. I borrow one of the MS editors and use it against a custom property, but need to ensure I reference the correct version of the MS controls assembly. This section of code is actually shared between the 2005, 2008 and 2012 versions of my component, hence it has tests for both the DENALI and KATMAI symbols.

        #if DENALI
        const string multiLineUI = "Microsoft.DataTransformationServices.Controls.ModalMultilineStringEditor, Microsoft.DataTransformationServices.Controls, Version=11.0.00.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91";
        #elif KATMAI
        const string multiLineUI = "Microsoft.DataTransformationServices.Controls.ModalMultilineStringEditor, Microsoft.DataTransformationServices.Controls, Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91";
        #else
        const string multiLineUI = "Microsoft.DataTransformationServices.Controls.ModalMultilineStringEditor, Microsoft.DataTransformationServices.Controls, Version=9.0.242.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91";
        #endif

        // Create Match Expression parameter
        IDTSCustomPropertyCollection100 propertyCollection = outputColumn.CustomPropertyCollection;
        IDTSCustomProperty100 property = propertyCollection.New();
        property.Name = MatchParams.Name;
        property.Description = MatchParams.Description;
        property.TypeConverter = typeof(MultilineStringConverter).AssemblyQualifiedName;
        property.UITypeEditor = multiLineUI;
        property.Value = MatchParams.DefaultValue;

    Edit project file to change referenced assemblies for 2012
    We now need to edit the project file itself. Open MyComponent2012.csproj in your favourite text editor, and then perform a couple of find and replaces as listed below:

        Find:    Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91
        Replace: Version=11.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91
        Comment: Change the assembly reference versions from SQL Server 2008 to SQL Server 2012.

        Find:    Microsoft SQL Server\100\
        Replace: Microsoft SQL Server\110\
        Comment: Change any assembly reference hint path locations from SQL Server 2008 to SQL Server 2012.

    If you use any Build Events during development, such as copying the component assembly to the DTS folder, or calling GACUTIL to install it into the GAC, you can also change these now. An example of my new post-build event for a pipeline component is shown below, which uses the .NET 4.0 path for GACUTIL. It also uses the 110 folder location, instead of 100 for SQL Server 2008, but that was covered in the previous find and replace.

        "C:\Program Files (x86)\Microsoft SDKs\Windows\v7.0A\Bin\NETFX 4.0 Tools\gacutil.exe" /if "$(TargetPath)"
        copy "$(TargetPath)" "%ProgramFiles%\Microsoft SQL Server\110\DTS\PipelineComponents" /Y
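
    Finally, while you have the project file open in the text editor, you can also check that the conditional compilation symbol made it in; it lives in the standard MSBuild DefineConstants property, and a simplified Debug configuration with the DENALI symbol added looks something like this (the property group shown here is trimmed down to the relevant element):

        <PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Debug|AnyCPU' ">
          <!-- DEBUG and TRACE are the usual defaults; DENALI is the extra symbol used for the 2012-only code -->
          <DefineConstants>DEBUG;TRACE;DENALI</DefineConstants>
        </PropertyGroup>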


  • Traffic fall after a server problem

    - by Sébastien
    I have a website whose traffic I analyse with Google Analytics. Day after day the traffic (mainly from Google SE) increased, until I got a problem with my server. The server was offline for one day, and since then I no longer have as many users as I had before. Now it's as if the site is no longer referenced in Google's index (although when I type "site:mysite.com", I still get all the results). Do you know if this is normal behaviour, and whether the traffic will come back as before (the server had its problems two days ago)?


  • Stop bots from crawling old links with extensions

    - by Jared
    I've recently switched to MVC3, which is extension-less for the URLs, but Google and Bing have a wealth of links that they are crawling which no longer exist. So I'm trying to find out if there is a way to format robots.txt (or use some other method) to tell Google/Bing that any link that ends in an extension isn't a valid link... Is this possible? On pages that I'm concerned a user may have saved as a favourite, I'm displaying a 404 page that lists the links to take once they are redirected to the new page (I decided not to just redirect them as I don't want to maintain these forever). For Google/Bing's sake I do have the canonical tag in the header.

        User-agent: *
        Allow: /
        Disallow: /*.*

    EDIT: I just added the third line (in the text above) and it APPEARS to do what I want: allow a path, but disallow a file. Can anyone confirm this?
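
    One thing I'm unsure about with /*.* is that it would also block any URL that merely contains a dot (a /sitemap.xml for instance), and as far as I know the * and $ wildcard syntax is a Google/Bing extension rather than part of the original robots.txt standard. A narrower version I'm considering, assuming the old site only ever used .aspx and .html extensions (adjust the list to whatever actually existed), would be:

        User-agent: *
        Allow: /
        # $ anchors the match to the end of the URL, so only requests ending in these extensions are blocked
        Disallow: /*.aspx$
        Disallow: /*.html$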


  • How does bing-bot (is that the right spider name?) and googlebot interpret 301 redirects?

    - by jbcurtin
    I've been looking for documentation on how the Microsoft and Google bots interpret 301 redirects. It seems that googlebot stores documents in a URL-based index system, but I haven't been able to figure out how Bing works. Should I assume that they are still working towards copying everyone else, and assume they use an algorithm close to Google's? Is it best to just forward a page to a new location via JavaScript? I think this might be a blackhat trick, but how would I tell the bots that it's not? Is a 301 redirect my best option, and do I just have to bite the bullet because said pages are no longer in existence? What other options do I have that I might not be aware of?


  • GA UA codes for testing site - set up

    - by Drew
    Does anyone know the process for using a live GA UA code to test a site in development? I.e. I have a live site with a GA UA code attached, tracking live traffic data etc., e.g. UA-123456. I've been told that there is a way to produce another code, associated with the primary code, to use on the testing version of my live site, e.g. the test code could be UA-123457. Can anyone shed some light on this? If it's not possible, should I just set up a completely separate GA account for my testing site?


  • Working on someone else's code

    - by Xavi Valero
    I have barely a year's experience in coding. Since I started working, most of the time I have been working on someone else's code, either adding new features over the existing ones or modifying the existing features. The guy who wrote the actual code doesn't work at my company any more. I am having a hard time understanding his code and doing my tasks. Whenever I have tried modifying the code, I have in some way messed with the working features. What should I keep in mind while working on someone else's code?


  • Best S.E.O. practice for backlinking etc

    - by Aaron Lee
    I'm currently working on a website that I am really looking to optimise in terms of search engines. I've been submitting between 5-20 directory submissions daily, I've validated and optimised my code, and I've joined a lot of forums etc. to speak of the website in question; however, I don't seem to be making much of an impact in terms of Google. I know that S.E.O. takes a while to start making an impact, and that Google prefers sites that are regularly updated and aged, but are there any more practices that can really help with organic results in search engines? I have looked on Google itself, and a few other SEs, but nobody is willing to talk about extensive S.E.O. practices, as they normally don't want people knowing their formulas for S.E.O. Also, does anyone know of a decent piece of software that really looks into the ins and outs of your page and provides feedback? I usually use http://www.woorank.com, but using only one program doesn't show whether it's exactly correct in what it's saying. If anyone could help it would be much appreciated, thank you very much.


  • Is it possible to know impressions of other websites?

    - by Saeed Neamati
    The Google Webmaster Tools dashboard gives you a big number called impressions, and by the definition I've seen in Google Analytics, it means the total number of times your site has become eligible for SERPs. I just don't have an idea of how to make use of this number, or how much its increase or decrease means to me, because I can't compare it with other websites. I mean, if the impressions of, say, site a.com are 150,000, and mine are 50,000, then maybe I can infer that I need to triple my efforts to reach a.com. But seeing 50,000 alone, I have no clue at all how to interpret it. Is there any service or other way to know about the impressions Google gives to other sites?


  • Which programming language do you think is the most beautiful and which the ugliest? [closed]

    - by user1598390
    I would like to hear opinions about which programming language you consider to produce the most legible, self-documenting, intention-transparent, beautiful-looking code, and which produces the most messy-looking, unintentionally obfuscated, ugly code, regardless of it being good code. Let me clarify: I'm talking about the syntax, the "noise vs signal", and the structure of the language. Assignment operators. Dereferencing. Whether it's dot syntax or "->" syntax. Which languages do you think are inherently harder to read than others, all other things being equal, like, say, code quality, absence of code smells, etc.?


  • Online accounts advanced settings with Empathy (13.10)

    - by uruloke
    The new Online Accounts doesn't have the advanced settings that the Empathy accounts dialog had. How do I change the Google server to connect to? I read this on https://wiki.gnome.org/Empathy/FAQ: "I can't connect to my Google Talk account. Your router is probably blocking DNS SRV requests. If possible you should try to fix it. If you can't, the easiest workaround is to set 'talk.google.com' in the 'Server' field of the advanced section of the account." So I think this might fix my problem, or maybe just an option to shift the port it connects to would. Also, does anyone know how to join IRC channels with Empathy? I have installed the plugin, but I don't know how to join a channel.


  • Migrating PR / rankings from one site to another

    - by sam
    I've got a client's company site with decent PR, backlinks and search engine rankings. The client wants to change their company name and therefore URL. I will set up a redirect between the old site and the new site, but I was wondering: is there a way to tell Google that they are moving while retaining all the rankings? It is the same people, services and office building, same everything, just rebranded under a different name and URL. Additionally, if there is a way to do this, how does Google stop you buying expired domains and just pointing them at your site? For instance, I could buy several PR3 domains all relating to the same sector and point them at my site, or would Google catch on to this?
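
    For the redirect itself I was planning a host-wide 301 in the old site's .htaccess, something like the sketch below (assuming Apache with mod_rewrite, that the paths stay the same on the new domain, and with placeholder domain names):

        RewriteEngine On
        # Permanently send every request on the old domain to the same path on the new one
        RewriteCond %{HTTP_HOST} ^(www\.)?oldcompany\.com$ [NC]
        RewriteRule ^(.*)$ http://www.newcompany.com/$1 [R=301,L]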


  • Does my JavaScript look big in this?

    - by benhowdle89
    As programmers, you have certain curtains to hide behind with your code. With PHP, all of your code is preprocessed server-side, so it never sees the light of day as far as the user is concerned. If you have rushed through some code for a deadline, then as long as it functions correctly the user never needs to know how many expletives you've inserted into the comments. However, with more and more applications being written for the web, with a desktop feel implemented by AJAX and popular frameworks like jQuery being bandied around to every Tom, Dick and Harry, how can a programmer maintain some dignity and hide his/her JavaScript code without it being flaunted like dirty laundry when the user hits Right Click > View Source or Inspect Element? Are there any ways to hide JavaScript application logic/code?


  • Redirecting non-existing posts to the homepage; is that good for SEO?

    - by BlackEagle
    I am checking my website in Google Webmaster Tools and I am seeing an astonishing 5000 links that could not be found by Google's crawlers. That's normal, because my website is built in a manner that lets users drop their own things, which also leads to 404 pages. Not a problem at all if I can find a workaround, of course... So my question is: what if I made a function or a mod_rewrite rule that checks whether the link exists (a post, for example) and, if not, redirects it to the home page? Is this good for SEO? Will Google see this as 'link found'? How do I have to look at this problem?


  • Why is my site not ranking for a particular keyword?

    - by user543087
    My site is only 3 days short of being 6 months old. This website is unique, in that there is no competitor to this type of site in India: it provides a comparison of payment gateways in India, besides the payment gateway companies themselves. I've optimized it for the keyword "payment gateway". I've changed the URLs twice, the latest being 3 months back, in which case Google Webmaster Tools gave plenty of 404s. I corrected the useful 404s and left the meaningless ones as they are. What is the reason it's not ranking well for "payment gateways"? Even sites with a single page about payment gateways seem to be ranking better than this one. Is it due to: 1) lots of outbound links to in-context companies and information, or 2) the 404s as reported in Google Webmaster Tools? My other site is successfully getting 1500 unique visitors daily and is up in the Google rankings; I don't know why this one is not!


  • Is there an easier way to implement 301 redirects when converting a site to WordPress

    - by Amanda
    I have just converted a website to WordPress. The old site has hundreds of hard-coded HTML files, and the new site does not match the old site's directory structure or file naming system (bad SEO in the original site), so I can't place any "blanket" 301 redirects. It's been at least 2 months, and the old links are still appearing in Google searches, despite a Google-friendly sitemap.xml. Do I need to hardcode a 301 for every individual page in my htaccess file, or am I just misunderstanding 301s and Apache? Is there some other way I can update Google about the fact that my entire site structure has changed?
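
    If hard-coding really is the answer, I assume it would look something like the sketch below in .htaccess, one line per legacy file (the file names here are invented for illustration):

        # One permanent redirect per legacy HTML file (mod_alias)
        Redirect 301 /services.html /what-we-do/
        Redirect 301 /about-us.html /about/
        # Where a folder does map cleanly, RedirectMatch can cover it with a single pattern
        RedirectMatch 301 ^/news/.*\.html$ /blog/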


  • Does a longer registration length/period for a domain name improve its SEO and search ranking?

    - by Cupcake
    While I was renewing a domain of mine with a well-known domain registrar, the support person who was on call with me said that I'd improve the SEO ranking of my domain if I increased the registration length from 1 year to 5 years instead. The explanation that he gave me was something along the lines that a search engine like Google doesn't like to send users to domains and businesses that may no longer exist, and that by registering my domain for 5 years instead of just 1, Google would have higher confidence that I'm serious about keeping my business around for the long-term. Needless to say, I was quite skeptical. Does the registration/renewal length of a domain name affect its SEO and search result ranking for search engines such as Google?


  • SEO blog indexing: wordpress.com subdomain vs. a registered domain?

    - by rumspringa00
    I've used WordPress for a few of my clients' sites, mostly small businesses and eCommerce sites. I have found through Google Analytics, as well as the All in One Webmaster plugin, that when it comes to social media, using WordPress is a surefire way of getting your site indexed by Google and occasionally Bing and Yahoo. Since I am a heavy WP user, I'd like to contribute by registering a dot-WordPress domain for my portfolio. When using a WP installation concurrently with a WP domain, e.g. myportfolio.wordpress.com, will the site be more or less likely to be indexed than a generic myportfolio.com domain? I've seen mixed opinions, where some people seem to favor a WP domain for URL output, while others say that it's a moot point and that Google will not favor a WP domain over a dot-com domain as long as your meta tags are updated and content is keyword optimized. I tend to disagree and believe a WP domain would be more likely to be indexed and output more URLs than an individual, laconic domain like myportfolio.com. Am I wrong?


  • How to copyright and license individual code files

    - by Hand-E-Food
    I have a habit of writing small, reusable components in my spare time. I reuse these components within my clients' code bases. It occurs to me there's a potential issue that using identical code for multiple clients may bite me in the future. I don't care who uses the code; I just don't want a situation where one company tries to sue another for copyright infringement due to my actions. I'm not familiar with common licensing schemes. What do I need to specify in my code files to indicate that these portions are copyrighted to myself and usable by anyone, while differentiating them from the code specifically written for the client? Where can I find more information on this scenario?
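
    For what it's worth, the kind of thing I had in mind is a permissive notice at the top of each reusable file, so it travels with the code wherever it's copied; a sketch for a C# file under the MIT licence (the names, dates and helper are placeholders, and this is not legal advice) would be:

        // StringExtensions.cs -- part of my personal reusable component library
        // Copyright (c) 2012 Jane Developer (placeholder name)
        // Released under the MIT License; see https://opensource.org/licenses/MIT for the full text.
        // Client-specific files in this solution do NOT carry this notice and remain under the client's terms.
        namespace Reusable.Components
        {
            public static class StringExtensions
            {
                // Example reusable helper; the licence header above travels with the file when it is copied.
                public static bool IsBlank(this string value)
                {
                    return string.IsNullOrEmpty(value) || value.Trim().Length == 0;
                }
            }
        }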


  • Static HTML to Wordpress Migration SEO Implications?

    - by Kayle
    Recently, I migrated a client's site to a new server and a new home within WordPress so they could more easily edit their website and start a blog section. The static site was 10 years old and, according to my client, was consistently showing up at place #3 for its primary keyword; it has dropped to rank #6-8 following the migration. At launch, we made sure the URLs were identical (save the removal of ".htm", which we used 301 redirects to compensate for), and we generated a new sitemap.xml and pinged Google with the new site. We keep a 404 log to make sure we're not losing any incoming links. We also have Google Webmaster Tools on this site and have zero errors/suggestions; everything seems OK. I was told by numerous sources that Google would not penalize us for the use of 301s, but it's the only thing I can think of right now that is different about the site, other than the platform. Any ideas about what we could be getting docked for?
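
    For reference, the ".htm" removal is handled by a single rewrite rule along these lines (assuming Apache, and that every old /page.htm now lives at /page/ under the WordPress permalinks; adjust the target if yours differ):

        RewriteEngine On
        # Send any legacy *.htm request to the extension-less WordPress permalink with a 301
        RewriteRule ^(.+)\.htm$ /$1/ [R=301,L]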


  • Best way to redirect users back to the pretty URL who land on the _escaped_fragment_ one?

    - by Ryan
    I am working on an AJAX site and have successfully implemented Google's AJAX crawling recommendation by creating _escaped_fragment_ versions of each page for it to index. Thus each page has 2 URLs:

        pretty: example.com#!blog
        ugly:   example.com?_escaped_fragment_=blog

    However, I have noticed in my analytics that some users are arriving on the site via the "ugly" URL, and I am looking for a clean way to redirect them to the pretty URL without impacting Google's ability to index the site. I have considered using a 301 redirect in the header, but fear that Googlebot might try to follow it and end up in an endless loop. I have also considered using a JavaScript redirect that Googlebot wouldn't execute, but fear that Google may interpret this as cloaking and penalize the website. Is there a good, clean, acceptable way to redirect real users away from the ugly URL if for some reason or another they end up arriving at the site that way?


  • Self-documenting code vs Javadocs?

    - by Andiaz
    Recently I've been working on refactoring parts of the code base I'm currently dealing with, not only to understand it better myself, but also to make it easier for others who are working on the code. I tend to lean towards thinking that self-documenting code is nice. I just think it's cleaner, and if the code speaks for itself, well... that's great. On the other hand we have documentation such as Javadocs. I like this as well, but there's a certain risk that comments there get outdated (as comments in general do, of course). However, if they are up to date they can be extremely useful for, say, understanding a complex algorithm. What are the best practices for this? Where do you draw the line between self-documenting code and Javadocs?

