Search Results

Search found 15471 results on 619 pages for 'sharepoint tools'.

Page 101/619

  • What does SVN do better than git?

    - by doug
    No question that the majority of debates over programmer tools distill to either personal choice (by the user) or design emphasis, i.e., optimizing design according to particular use cases (by the tool builder). Text editors are probably the most prominent example--a coder who works on Windows at work and codes in Haskell on the Mac at home values cross-platform support and compiler integration, and so chooses Emacs over TextMate, etc. It's less common that a newly introduced technology is genuinely, demonstrably superior to the extant options. I wonder if this is in fact the case with version-control systems, in particular, centralized VCS (CVS, SVN) versus distributed VCS (git, hg). I used SVN for about five years, and SVN is currently used where I work. A little less than three years ago, I switched to git (and GitHub) for all of my personal projects. I can think of a number of advantages of git over Subversion (which for the most part abstract to advantages of distributed over centralized VCS), but I cannot think of one counter-example--some task (that's relevant and arises in a programmer's usual workflow) that Subversion does better than git. The only conclusion I have drawn from this is that I don't have any data--not that git is better, etc. My guess is that such counter-examples exist, hence this question.
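
    One commonly cited counter-example (my addition, not something from the original question) is sparse or partial checkouts: Subversion can check out and work on a single subtree of a very large repository, which git of that era had no native equivalent for. A minimal sketch, assuming a hypothetical repository URL:

        # Check out just the skeleton of a large repository (immediate children only)
        svn checkout --depth immediates http://svn.example.com/repo/trunk
        cd trunk
        # ...then fully populate only the folder you actually work on
        svn update --set-depth infinity some-subdir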

    Read the article

  • Poor backlink profile - search rankings not updated for 2+ months

    - by fistameeny
    I am carrying out some work on a website that is a PR2 with a few good quality, relevant backlinks (PR4-6). It has a presence on Twitter that is updated regularly, a Google Places listing, and listings on some decent directories (Qype etc.). The site was rebuilt in Drupal 7 two months ago, with all the basics done - URL rewriting, an XML sitemap submitted to Google, and most importantly, good quality, structured content. I've noticed that Google is still showing "old" URLs from the previous version of the site that was ditched 8 weeks ago. I think the site may be penalised under the Penguin update, as a previous SEO company created many low quality links from link farms/directories. My question is: what is the correct way to deal with this? Bing Webmaster Tools can "disavow" links, and I guess I can attempt to contact the link farms to have them removed. I've already submitted a request to Google asking to have the penalty removed, as we're trying to tidy up a bad history. We submit updated sitemaps to Google and Bing daily, and have built some further decent quality, relevant links. Is there anything further I can do?
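
    For reference, a disavow submission is just a plain-text file with one entry per line (this is the format Google's disavow tool accepts; whether the same applies in a given tool is an assumption here, and the domains below are made up for illustration):

        # Individual URLs we tried and failed to get removed
        http://spam-directory.example/listing?id=123
        # Entire link farms, disavowed at the domain level
        domain:linkfarm-one.example
        domain:linkfarm-two.example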

    Read the article

  • getting started as a web developer [closed]

    - by kmote
    I have over 10 years of programming experience building (Windows-based) desktop applications and utilities (VC++, C#, Python). My goal over the next year is to start transitioning to web application development. I want to teach myself the fundamental tools and technologies that would be considered essential for building professional, online, interactive, visually-stunning, data-driven web apps -- the kind described in Google's recently released "Field Guide: Building Great Web Applications". So my question is: what are the primary, most commonly-used technologies that seasoned professionals will need in their tool belt in the coming years? My plan was to come up to speed on JavaScript, HTML5, and CSS, and then to do a deep dive into ASP.NET and Ajax, along with SQL databases. (I was surprised not to be able to find a single book at Amazon with a broad, general scope like this, which caused me to start second-guessing this approach.) So, seasoned professionals: am I on the right track? Are there some glaring omissions in my list? Or some unnecessary inclusions? I would welcome any book suggestions along these lines as well.

    Read the article

  • Project Management Software / 1 maybe 2 developers

    - by Ominus
    I am looking for software that I can use to "manage" multiple projects (5 - 10). Here are the features I would like, though any recommendation is welcome:

    - Bug/feature tracking on a per-project basis.
    - Some way to keep all documents, diagrams, specs, and requirements in one place with the project - better yet, a tool where all these things, or most of them, could be authored.
    - Task management during the development phase, with milestones and estimates/actuals.
    - Git integration.

    I have been doing contract work and I have been doing really well for myself as far as getting projects, but it's becoming VERY hard to manage everything in an efficient manner. I am trying to learn about best practices when it comes to software programming methodologies, and the more I read the more I realize that I am just managing these projects poorly. I am getting things done, but the more I take on the less "solid" everything is. I am afraid that if I don't get some good solid tools/practices in place I am going to do my customers and myself a disservice. The problem is that there are SO many options that it's hard to weed through them all. I was at a point today where I had decided that I would just code my own (there is some irony here)! Obviously everyone has their likes and dislikes, so I would love to hear from some of you lone programmers how you manage everything, since our needs aren't exactly the same as what a large team might need. I also want a solution that can scale to 2 or maybe 3 developers if I end up hiring some people to help with my workload. Thanks again for your usual insights!

    Read the article

  • GUI based backup utility [closed]

    - by Chethan S.
    Possible Duplicate: Comparison of backup tools

    I have read favorable reviews of 'Back In Time' for the purpose stated above. Still, I am posting this question as I have some demands in mind. A few years back I was using ThinkVantage Rescue and Recovery by IBM on my Lenovo PC under Windows. That provided me with nice features like compressed backups and boot-time options - OS repair, restore entire OS, restore entire system to an older date, restore individual files, etc. Of these, the feature I liked the most was compressed backups. Similar features are available in software like Norton Ghost too. In Back In Time I was surprised to see that a snapshot takes up the same amount of space as the original contents - no compression at all. Furthermore, I was not able to find options to change the compression ratio etc. under settings. To me, compression of backups is a must-have feature. Can anyone therefore suggest another utility which can serve the purpose? I insist on a GUI-based tool since I don't want to mess up my backups!

    Read the article

  • Request Removal of naked domain from Google Index

    - by Pedr
    I have a site which was temporarily available at both example.com and www.example.com. All traffic to example.com is now redirected to www.example.com; however, during the brief period that the site was available at the naked domain, Google indexed it. So Google now has two versions of every page indexed:

    www.example.com
    www.example.com/about_us
    www.example.com/products/something
    ...

    and

    example.com
    example.com/about_us
    example.com/products/something
    ...

    For obvious reasons, this is a bad situation, so how can I best resolve it? Should I request removal of these pages from the index? There is still content at these URLs, but they now redirect to the www subdomain equivalent. The site has many hundreds of pages, but the only way I can see to request removal is via the Remove outdated content screen in Webmaster Tools, one URL at a time. How can I request removal of an entire domain (i.e. the naked domain) without it affecting the true site located at the www subdomain? Is this the correct strategy, given that all the naked-domain URLs now redirect to their www equivalents?
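
    For context, a minimal sketch of the redirect described above, assuming Apache and mod_rewrite (the actual server and rules were not given in the question):

        RewriteEngine On
        # Send every naked-domain request to the same path on the www host
        RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
        RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]

    With a site-wide 301 like this in place, Google generally consolidates the two indexed versions on its own as the old URLs are recrawled, so mass removal requests may not be necessary.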

    Read the article

  • Is it worth replacing the mouse with a standalone trackpad for heavy code editing? [on hold]

    - by heltonbiker
    I recently got more interested in improving my tools, workspace and workflow. The first sting came with a sore finger due to a crappy keyboard, and then after some research I fell in love with the "mechanical keyboard is what you need" doctrine, bought one (Cherry MX Brown if you're curious), and am very happy with the results. Currently I am replacing my previous text editor (Geany) with Sublime Text 3, and am also very happy and feeling much more powerful and professional :) Well, but while I re-read all the ancient debates about VIM vs whatever-else, the following excerpt from a blog post got me thinking again about mouse vs keyboard, and the "moving around from the very home row" (in VIM) versus gesturing away with the tiny and unstable mouse cursor: "Reaching for a mouse may indeed slow you down, but developers are commonly on machines where the trackpad is a micro-hand movement away. Most novice programmers can click on a character on screen faster than an expert Vimmer can type 20jFp; or LkEEE or /word or any other nasty way Vimmers have to use. The point of a mouse is to make arbitrary on screen jumps efficient, and it's very good at doing that. Don't you ever think you can beat a mouse." Well, although there is some bitterness in this statement, it makes a lot of sense, and EVEN MORE if you consider your direct input to be a TRACKPAD conveniently placed in front of your spacebar (which oddly is where I like to put my mouse, rotated 90° ccw, due to a serious tendonitis in my right shoulder, already healed, but you know...). So, the question is: has anyone replaced the mouse with a standalone trackpad, to work in code editing on a desktop machine (that is, with a standalone keyboard)? Was it worth the change?

    Read the article

  • 301 redirect from HTTP to HTTPS - how to be sure Google is fetching the correct information?

    - by user33692
    I'm hoping somebody might be able to provide a bit of advice on an issue I am having. I have one site where we implemented a 301 redirect on the homepage from HTTP to HTTPS. We have links on the homepage to other parts of the site that are not under SSL (in fact there is only one other page under SSL). When I go to our Webmaster Tools account I notice that we are not being provided with any webmaster information (e.g., search queries, backlinks, etc...) related to our homepage under SSL. I performed a Fetch as Google on the homepage and the information it returned is:

        HTTP/1.1 301 Moved Permanently
        Date: Fri, 08 Nov 2013 17:26:24 GMT
        Server: Apache/2.2.16 (Debian)
        Location: https://mysite.com/
        Vary: Accept-Encoding
        Content-Encoding: gzip
        Content-Length: 242
        Keep-Alive: timeout=15, max=100
        Connection: Keep-Alive
        Content-Type: text/html; charset=iso-8859-1

        <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
        <html><head>
        <title>301 Moved Permanently</title>
        </head><body>
        <h1>Moved Permanently</h1>
        <p>The document has moved <a href="https://mysite.com/">here</a>.</p>
        <hr>
        <address>Apache/2.2.16 (Debian) Server at mysite.com</address>
        </body></html>

    I am worried by the fact that Google fetch is not getting the correct Title tags and Meta information from our homepage and that this is hurting our search results. Additionally, I am worried that we need to do something specific with the sitemap to ensure that Google is correctly indexing all our pages and being able to flow from the HTTPS to the HTTP without issues. Does anybody have any advice on how we can correctly set this up or be sure that Google is fetching the correct information?
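
    A note on the fetch output above: fetching the HTTP URL will always return the 301 response itself rather than the destination page's title and meta tags; Googlebot picks those up in a separate fetch of the HTTPS URL. Also, Webmaster Tools treats http:// and https:// as separate properties, so the HTTPS homepage needs to be added as its own site before its query and backlink data will appear. For reference, a homepage-only redirect of this kind might look like the following, assuming Apache and mod_rewrite (the actual configuration was not shown):

        RewriteEngine On
        # Redirect only the site root from HTTP to HTTPS
        RewriteCond %{HTTPS} off
        RewriteRule ^/?$ https://mysite.com/ [R=301,L]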

    Read the article

  • Massive 404 attack with non-existent URLs. How to prevent this?

    - by tattvamasi
    The problem is a whole load of 404 errors, as reported by Google Webmaster Tools, for pages and queries that have never been there. One of them is viewtopic.php, and I've also noticed a scary number of attempts to check if the site is a WordPress site (wp_admin) and for the cPanel login. I block TRACE already, and the server is equipped with some defense against scanning/hacking. However, this doesn't seem to stop it. The referrer is, according to Google Webmaster, totally.me. I have looked for a solution to stop this, because it certainly isn't good for the poor real actual users, let alone the SEO concerns. I am using the Perishable Press mini blacklist (found here), a standard referrer blocker (for porn, herbal, casino sites), and even some software to protect the site (XSS blocking, SQL injection, etc.). The server is using other measures as well, so one would assume that the site is safe (hopefully), but it isn't ending. Does anybody else have the same problem, or am I the only one seeing this? Is it what I think, i.e., some sort of attack? Is there a way to fix it, or better, prevent this useless resource waste?

    EDIT: I've never used the question to say thanks for the answers, and hope this can be done. Thank you all for your insightful replies, which helped me to find my way out of this. I have followed everyone's suggestions and implemented the following:

    - a honeypot
    - a script that listens for suspect URLs in the 404 page and sends me an email with the user agent/IP, while returning a standard 404 header
    - a script that rewards legitimate users, in the same custom 404 page, in case they end up clicking on one of those URLs

    In less than 24 hours I have been able to isolate some suspect IPs, all listed in Spamhaus. All the IPs logged so far belong to spam VPS hosting companies. Thank you all again; I would have accepted all the answers if I could.
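
    As a small supplement (my sketch, not something from the thread): probes for software the site doesn't run can be refused outright at the web server, which costs far less than serving a full 404 page. Assuming Apache with mod_alias:

        # Return a bare 403 for common scanner probes on a site that runs
        # neither WordPress, phpBB, nor cPanel (adjust the list to taste)
        RedirectMatch 403 (?i)(viewtopic\.php|wp-admin|wp-login\.php|cpanel)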

    Read the article

  • Somehow Google considers a properly 301'd URL as 200 and is still indexing the new content under the old page?

    - by user2178914
    We redirected all the old URLs to new ones properly using htaccess. The problem is that Google, somehow, is still finding content at the old page (which it shouldn't) and stores it in the cache rather than under the new URL. For example:

    Old page - http://www.natures-energies.com/iching.htm
    New page - http://www.natures-energies.com/index.php?option=com_content&view=article&id=760

    - If you type the old URL into the browser, it redirects.
    - If you fetch the old URL as Googlebot in the webmaster tools, the header says 301/permanently redirected.
    - If I try to crawl as any other bot, it still says 301 redirected.
    - Even if you click the old link in Google, it redirects to the new URL.

    Only in its cache does it show the old URL - and moreover it shows the new content in it! I am stumped on how Google manages to grab the new content and put it in the old URL instead of the new one! One more interesting thing is that if I try a cache lookup for the new page, it shows the cache of the new content with the old URL! Any help would be appreciated; I am at the end of my wits. I think I have tried almost everything. Is there anything that I'm missing? You can use this search to find the old URLs. Maybe you'll see some patterns that I missed. site:www.natures-energies.com inurl:htm -inurl:https|index
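
    For reference, the per-URL redirect described above would look something like this in .htaccess, assuming Apache's mod_alias (the actual rules weren't posted):

        # Permanently map the legacy static page to its new dynamic equivalent
        Redirect 301 /iching.htm http://www.natures-energies.com/index.php?option=com_content&view=article&id=760

    Cache entries like this usually lag the index; as long as the 301 stays in place, the old URL's cache is typically replaced once the page is recrawled.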

    Read the article

  • What is the most effective approach to learn an unfamiliar complex program? [closed]

    - by bdroc
    Possible Duplicate: How do you dive into large code bases?

    I have quite a bit of experience with different programming languages and writing small, functional programs for a variety of purposes. My coding skills aren't what I have a problem with. In fact, I've written a decent web application from scratch for my startup. However, I have trouble jumping into unfamiliar applications. What's the most effective way to approach learning a new program's structure and/or architecture so that I can start attacking the code effectively? Are there useful tools for their respective languages (Python and Java are my two primary languages)? Should I be starting with just looking at function names or documentation? How do you veterans approach this problem? I find this has to be done with minimal help from coworkers or contributors who are already familiar with the application and have better things to do than help me. I'd love to practice this skill on an open source project, so any suggestions for starting points (maybe mildly complex) would be great too!

    Read the article

  • SharePoint 2010 User Profile Synchronization

    - by manemawanna
    Hello, I'm completely new to working with SharePoint and Windows Server, but last week I was given a small brief to play with SharePoint 2010 to see how I got along with it. Anyway, I've set up a SharePoint server and had a mess around getting some new sites and pages created etc., but I'm now looking to have a try at importing some AD groups. As part of this I've looked at these tutorials, here and here. So far I've got through the process of starting the User Profile Service, which works fine, but when I start the User Profile Synchronization service it sits on "Starting". When I refresh the page or go to the monitoring section, it shows as aborted. Now, I'm new to administering servers, like I say, and when I start the User Profile Synchronization service it tries to run as NT AUTHORITY\NETWORK SERVICE and asks for a password, so I've been providing it with the admin password. I'm not sure if this is part of the issue or not, as I've checked the log files and they seem to say that it doesn't have permissions, which is fair enough, but I can't see how you can change the account even if I wanted to. So if anyone could help it would be appreciated; if you need any further information to help with an answer, just let me know.
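
    One commonly reported cause of exactly this symptom (an observation I'm adding, not from the post): in SharePoint 2010 the User Profile Synchronization service is expected to be provisioned under the farm account, and that password prompt is asking for the farm account's credentials, not a local admin's. A hedged sketch for checking and starting the service from the SharePoint 2010 Management Shell:

        # Assumes you are running the SharePoint 2010 Management Shell as a farm administrator
        Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

        # Inspect the current status of the synchronization service instance
        $sync = Get-SPServiceInstance | Where-Object { $_.TypeName -like "*User Profile Synchronization*" }
        $sync | Format-Table TypeName, Status

        # Attempt to start it (Central Administration is still needed to supply
        # the farm account password when provisioning)
        $sync | Start-SPServiceInstance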

    Read the article

  • One National Team One Event – SharePoint Saturday Kansas City

    - by MOSSLover
    I wasn't expecting to run an event from 1,000 miles away, but some stuff happened, you know, like it does, and I opted in. It was really weird, because people asked, why are you living in NJ and running Kansas City? I did move, but it was like my baby, and Karthik didn't have the ability to do it this year. I found it really challenging, because I could not physically be in Kansas City. At first I was freaking out, and Lee Brandt, Brian Laird, and Chris Geier offered to help. Somehow I couldn't come the day of the event. Time-wise it just didn't work out. I could do all the leg work prior to the event, but weekends just were not good. I was going to be in DC until March or April on the weekdays, so leaving that weekend was too tough. As it worked out, Lee was my eyes and ears for the venue. Brian was the sponsor and prize box coordinator if anyone needed to send items. Lee also helped Brian the day of the event move all the boxes. I did everything we could do electronically, such as getting the sponsors, coordinating with Michael Lotter on invoicing, getting the speakers, posting the submissions, budgeting the money, setting up a speaker dinner by phone, plus all that other stuff you do behind the scenes. Chris was there to help Lee and Brian the day of the event and help us out with the speaker dinner. Karthik finally got back from India, and he was there the night before getting the folders together and the signs, and stuffing it all. Jason Gallicchio (my cohort for SPS NYC) also helped me out, as he did the schedule and helped with posting the speakers' abstracts, and so did Chris Geier by posting the bios. The lot of them enlisted a few other monkeys to help out. It was the weirdest thing I've ever seen, but it worked. Around 100+ attendees ended up showing, and I hear it was a great event. Jason, Michael, Chris, Karthik, Brian, and Lee are not all from the same area, but they helped me out in bringing this event together. It was a national SharePoint Saturday team that brought together a specific local event for Kansas City. It's like a metaphor for the entire SharePoint Community. We help our own kind out; they didn't let me fail. I know Lee and Brian aren't technically SharePoint people, but they are honorary SharePoint Community members. Thanks everyone for the support and help in bringing this event together. Technorati Tags: SharePoint Saturday,SPS KC,SharePoint,SharePoint Saturday Kansas City,Kansas City

    Read the article

  • How to perform a feature upgrade in SharePoint 2010, part 1

    - by ybbest
    Once your custom SharePoint solution goes into production, any changes made to it require you to perform a feature upgrade. Today, I'd like to show you how to perform a feature upgrade. For the first version of my solution, I deploy a document library with a custom document set content type. You can download the solution here. Once you extract the solution, the first version is in the original folder. In order to deploy the original solution, you need to run sitecreation.ps1 in the script folder. Next, I will modify the solution so that I index the application number in the document library I just created in the original solution.

    1. Modify the ApplicationLibrary.Template.xml as highlighted below (the original screenshot is missing; a sketch of the upgrade wiring follows at the end of this post).

    2. Add the following code to the feature event receiver:

        public override void FeatureUpgrading(SPFeatureReceiverProperties properties, string upgradeActionName, System.Collections.Generic.IDictionary<string, string> parameters)
        {
            base.FeatureUpgrading(properties, upgradeActionName, parameters);
            SPWeb web = GetFeatureWeb(properties);
            switch (upgradeActionName)
            {
                case "IndexApplicationNumber":
                    SPList applicationLibrary = web.Lists.TryGetList(ApplicationLibraryNamesConstant.ApplicationLibraryName);
                    if (applicationLibrary != null)
                    {
                        SPField queueField = applicationLibrary.Fields["ApplicationNumber"];
                        queueField.Indexed = true;
                        queueField.Update();
                    }
                    break;
            }
        }

    3. Package your solution and run the feature upgrade PowerShell script:

        $wspFolder = "v1.1"
        $scriptPath = Split-Path $myInvocation.MyCommand.Path
        $siteUrl = "http://ybbest"
        $featureToCheckGuid = "1b9d84cd-227d-45f1-92d4-a43008aa8fe7"
        $requiredFeatureVersion = "0.0.0.0"
        $siteUrlOfFeatureToBeChecked = "http://ybbest"
        AppendLog "Starting Solution UpgradeSolutionAndFeatures.ps1" Magenta
        & "$scriptPath\UpgradeSolutionAndFeatures.ps1" $siteUrl $wspFolder $featureToCheckGuid $requiredFeatureVersion $siteUrlOfFeatureToBeChecked
        Write-Host
        AppendLog "All features updated" "Green"

    Note: If you have not versioned your feature explicitly, your feature version will be 0.0.0.0.

    References: Feature upgrade.
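
    Since the screenshot for step 1 did not survive, here is a rough sketch, assuming the standard SharePoint 2010 feature-upgrade schema, of the UpgradeActions element that routes the "IndexApplicationNumber" action name to the receiver above:

        <UpgradeActions>
          <VersionRange BeginVersion="0.0.0.0" EndVersion="1.0.0.0">
            <!-- The Name here must match the upgradeActionName handled in FeatureUpgrading -->
            <CustomUpgradeAction Name="IndexApplicationNumber" />
          </VersionRange>
        </UpgradeActions>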

    Read the article

  • Using SharePoint label to display document version in Word 2007 doesn't work when moved to another library

    - by ITManagerWhoCodes
    I am surfacing the document library version of a Word 2007 document by creating a Label ({version}) within the content type of the document library and adding it as a Quick Part label in the Word 2007 document. This works great: the latest version always shows up when I open the Word document. I also added this Version quick-part field to the footer of the Word document and then added this document as a document template to my content type, "ContentTypeMain". Now I can go to my document library and create a new instance of "ContentTypeMain" with the Version field automatically there. This works great as well. However, if I create another document library and add the same content type, "ContentTypeMain", to it, the value of the Version quick part doesn't update or refresh. The only way is to add another copy of the Label quick part. It seems like the Quick Part label that maps to the document library version is unique to the document library. My application dynamically creates subsites using site definitions and list templates; thus the document libraries in each of the subsites are all created from the same list template. I inspected the XML files under the hood of the Word document, and it does look like there is a GUID attached to the Quick Part Version field.

    Read the article

  • How to set permissions on SharePoint to hide an aspx page from authenticated users and to make it visible to anonymous users

    - by Flo
    I have a portal based on a publishing portal. The portal (SPSite) contains two websites (SPWebs); one is anonymously accessible and the other one isn't. This works as expected. Now I want to set the permissions for some aspx pages of the anonymously accessible website so that they are not visible to authenticated users. So it's actually the opposite of anonymous access: users that are not logged in should see the aspx pages, and logged-in users shouldn't. The aspx pages are normal publishing pages of the publishing portal. How could I achieve this? Is this possible at all?

    Read the article

  • TFS 2008 to TFS 2010 upgrade to exclude SharePoint

    - by Chen
    Hi, I'm currently planning to upgrade our TFS 2008 server to TFS 2010 under one of the conditions below:

    1. Upgrade everything except for SharePoint.
    2. Upgrade everything including SharePoint, but SharePoint will be enabled only at a later stage.

    Will this stop us from using TFS for our development? Thanks, Chen

    Read the article

  • SharePoint isn't accepting new credentials initially when switching users.

    - by Tiziani
    Hi all, I have a standard website (one web application and one site collection) with some custom pages and web parts. The issue I'm having is that when I try to switch users, using "Sign In as a Different User" and entering new credentials (even for another site collection admin account), IE tries the account three times, and then it presents a 401 Access Denied screen. After that, if I erase all the access-denied page stuff from the browser's URL, I'm logged in as the new account I had just entered and which was not accepted. After researching for a while on Google, I found a KB article ( http://support.microsoft.com/kb/970814 ) that might be related, but I just tested it here and it didn't work at all. The modified method suggested by the KB is the following:

        function LoginAsAnother(url, bUseSource)
        {
            document.cookie = "loginAsDifferentAttemptCount=0";
            if (bUseSource == "1")
            {
                GoToPage(url);
            }
            else
            {
                //var ch = url.indexOf("?") >= 0 ? "&" : "?";
                //url += ch + "Source=" + escapeProperly(window.location.href);
                //STSNavigate(url);
                document.execCommand("ClearAuthenticationCache");
            }
        }

    But after making this change, it no longer asks for new credentials. Any ideas?

    Read the article

  • SharePoint Licensing

    - by Adam
    Hi - we are thinking of using SharePoint to host a web app through which we will allow internal staff and licensed external customers to access a website built on SharePoint. I am thinking that for each authenticated (logged-in) user we would need an OS CAL and a SharePoint CAL, and we would need a processor license for SQL Server - is this correct, and what about non-authenticated website "browsers"? Any advice much appreciated.

    Read the article

  • Why is this CHOICE element not getting assigned in my SharePoint Field definition schema?

    - by ccornet
    I defined a new field of the type "Choice" for my web application. It will serve basically as a pseudo-lookup, as its contents are defined by the value of a Text field in a list. It is initialized with a dummy choice to begin with (I'm under the impression a choice field needs at least one choice when defined), which is replaced with a real choice later on. But for some reason, this dummy choice is never actually added to the choices! Below is the XML schema for the field in question.

        <Field ID="{ALICEH-ASFA-KEGU-IDLISTED}"
               Name="ddlSystems"
               Group="Lookup Columns"
               DisplayName="ddlSystems"
               Type="Choice"
               Sealed="FALSE"
               ReadOnly="FALSE"
               Hidden="FALSE"
               FillInChoice="TRUE"
               DisplaceOnUpgrade="TRUE">
          <CHOICES>
            <CHOICE>BLANULL</CHOICE>
          </CHOICES>
          <Default>BLANULL</Default>
        </Field>

    Initially, I used a default choice of " " (a single space), but I changed it to BLANULL so that I can parse an actual word instead of a veritably empty string. Now, even after having uninstalled and reinstalled the feature with this field, I have a choice field that has " " (still a single space) as the only choice. Even more perplexing, BLANULL is actually listed as the default value in both the UI and the object model! What is causing this problem, and how can I circumvent it so that I don't have to manually set this dummy value each time?
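
    For what it's worth, a minimal sketch of replacing the dummy choice through the object model instead (my illustration, not from the question; "Real value" is a hypothetical choice, and web is assumed to be the SPWeb the field lives on):

        // Fetch the choice field by its internal name
        SPFieldChoice field = (SPFieldChoice)web.Fields.GetFieldByInternalName("ddlSystems");

        // Swap the placeholder for the real choice and push the change down
        field.Choices.Clear();
        field.Choices.Add("Real value"); // hypothetical value
        field.DefaultValue = "Real value";
        field.Update(true); // true pushes the change to lists using this field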

    Read the article

  • How do I stall until a SharePoint List Item is Deleted with SPLongOperation?

    - by ccomet
    I have a workflow which creates a task and deletes it after the task is edited and its useful information acquired. I created a custom edit form for the task, so I have an SPLongOperation that I can use to stall the page. This is necessary because, if I don't stall the page in some fashion, the person will see the task in the task list for the brief moment before the workflow gets to delete it, and that is bad. So some code to stall the page until the task is fully deleted is necessary. I have currently implemented a solution for this, but I am unsatisfied with the approach. It basically amounts to a while loop that calls SPList.GetItemById until it throws an error. Deliberately attempting to cause an error doesn't sit well with me, but I cannot think of a faster method for checking this. I'm looking for alternatives that would preferably work as fast, if not faster, and preferably without relying on catching exceptions. Thank you in advance!
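
    One exception-free alternative (a sketch under assumptions: taskList is the SPList holding the task and taskId is its integer ID; both names are hypothetical): query for the item by ID with an SPQuery, which simply returns an empty collection once the item is gone, instead of throwing the way GetItemById does.

        // Poll inside the SPLongOperation until the workflow has deleted the task
        SPQuery query = new SPQuery();
        query.Query = "<Where><Eq><FieldRef Name='ID'/>" +
                      "<Value Type='Counter'>" + taskId + "</Value></Eq></Where>";
        query.RowLimit = 1;

        while (taskList.GetItems(query).Count > 0)
        {
            System.Threading.Thread.Sleep(500); // brief back-off between checks
        }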

    Read the article
