Search Results

Search found 32130 results on 1286 pages for 'local search'.


  • Google Cache showing wrong URL

    - by Sathiya Kumar
    I checked the cache details for the URL http://property.sulekha.com/pune-properties, but the Google cache shows details for property.sulekha.com instead. I don't know why it behaves like this. It is not only http://property.sulekha.com/pune-properties but all of the Indian city-related URLs, such as http://property.sulekha.com/chennai-properties , http://property.sulekha.com/mumbai-properties , http://property.sulekha.com/kolkata-properties etc. I also don't find these URLs in the Google search results. If I search for Chennai properties in Google, I find property.sulekha.com and not http://property.sulekha.com/chennai-properties . Why is this happening? Please let me know.

    Read the article

  • BizTalk 2009 - Service Instances: Last 100

    - by StuartBrierley
    Having previously talked about the lack of the traditional HAT in BizTalk 2009, the question then becomes: how do you replicate some of the functionality that was previously relied on? I have already covered the Last 100 Messages Received, the Last 100 Messages Sent, and the Last 50 Suspended Messages queries, so what about service instances? The BizTalk 2009 Group Hub allows you to search for suspended service instances and also for running service instances, but not the two together. In BizTalk 2004 we had a query in HAT to return the last 100 service instances.  Let's create a direct replacement in the HAT-less BizTalk 2009 environment. Basically we are creating a query to search for the last one hundred tracked service instances, as outlined below:
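    A rough outline of the Group Hub query settings (from memory of the BizTalk 2009 query builder, so treat the exact field labels as approximate rather than definitive):

        Search For:       Tracked Service Instances
        Maximum Matches:  100
        Filters:          none - sort by the start time column to bring the most recent instances to the top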

    Read the article

  • Picking the Right Keywords For SEO Success

    It is important to realize that picking the right keywords is crucial to your SEO success. Always remember that for search engine optimization, your end goal is to rank highly in the search engines for the keywords most relevant and valuable to your web site. For example, if you run a pet dog business, you naturally want to rank highly for keywords such as 'pet dogs', 'dogs for sale', and 'pet dogs for sale'. Better yet, you can narrow down the keywords to target very specific niches, such as 'chihuahua pet dogs' or 'pet dogs for sale in Brooklyn'.

    Read the article

  • Zenoss Setup for Windows Servers

    - by Jay Fox
    Recently I was saddled with standing up Zenoss for our enterprise. We're running about 1200 servers, so manually touching each box was not an option. We use LANDesk for a lot of automated installs and patching - more about that later. The steps below may not necessarily have to be completed in this order - it's just the way I did it.

    STEP ONE: Set up a standard AD user. We want to do this so there's minimal security exposure. Call the account whatever you want; "domain/zenoss" is used in our examples.

    STEP TWO: Make the following local groups accessible by your zenoss account:

        Distributed COM Users
        Performance Monitor Users
        Event Log Readers (which doesn't exist on pre-2008 machines)

    Here's the PowerShell script I used to set up access to these local groups:

        # Created to add an Active Directory account to local groups.
        # Must be run from an elevated prompt, with permissions on the remote machine(s).
        # The txt file should contain the names of the machines that need the account added, one per line.
        # The script will process the machines line by line.
        foreach($i in (gc c:\tmp\computers.txt)){
            # Add the user to the first group
            $objUser=[ADSI]("WinNT://domain/zenoss")
            $objGroup=[ADSI]("WinNT://$i/Distributed COM Users")
            $objGroup.PSBase.Invoke("Add",$objUser.PSBase.Path)
            # Add the user to the second group
            $objUser=[ADSI]("WinNT://domain/zenoss")
            $objGroup=[ADSI]("WinNT://$i/Performance Monitor Users")
            $objGroup.PSBase.Invoke("Add",$objUser.PSBase.Path)
            # Add the user to the third group - group doesn't exist on < Server 2008
            #$objUser=[ADSI]("WinNT://domain/zenoss")
            #$objGroup=[ADSI]("WinNT://$i/Event Log Readers")
            #$objGroup.PSBase.Invoke("Add",$objUser.PSBase.Path)
        }

    STEP THREE: Set up security on the machine's WMI namespace so our domain/zenoss account can access it. The default namespace for Zenoss is root/cimv2. Here's the PowerShell script:

        # Grant the account defined below access to the WMI namespace.
        # Has to be run as an account with permissions on the remote machine.
        function get-sid
        {
            Param ($DSIdentity)
            $ID = new-object System.Security.Principal.NTAccount($DSIdentity)
            return $ID.Translate( [System.Security.Principal.SecurityIdentifier] ).toString()
        }
        $sid = get-sid "domain\zenoss"
        $SDDL = "A;;CCWP;;;$sid"
        $DCOMSDDL = "A;;CCDCRP;;;$sid"
        $computers = Get-Content "c:\tmp\computers.txt"
        foreach ($strcomputer in $computers)
        {
            $Reg = [WMIClass]"\\$strcomputer\root\default:StdRegProv"
            $DCOM = $Reg.GetBinaryValue(2147483650,"software\microsoft\ole","MachineLaunchRestriction").uValue
            $security = Get-WmiObject -ComputerName $strcomputer -Namespace root/cimv2 -Class __SystemSecurity
            $converter = new-object system.management.ManagementClass Win32_SecurityDescriptorHelper
            $binarySD = @($null)
            $result = $security.PsBase.InvokeMethod("GetSD",$binarySD)
            $outsddl = $converter.BinarySDToSDDL($binarySD[0])
            $outDCOMSDDL = $converter.BinarySDToSDDL($DCOM)
            $newSDDL = $outsddl.SDDL += "(" + $SDDL + ")"
            $newDCOMSDDL = $outDCOMSDDL.SDDL += "(" + $DCOMSDDL + ")"
            $WMIbinarySD = $converter.SDDLToBinarySD($newSDDL)
            $WMIconvertedPermissions = ,$WMIbinarySD.BinarySD
            $DCOMbinarySD = $converter.SDDLToBinarySD($newDCOMSDDL)
            $DCOMconvertedPermissions = ,$DCOMbinarySD.BinarySD
            $result = $security.PsBase.InvokeMethod("SetSD",$WMIconvertedPermissions)
            $result = $Reg.SetBinaryValue(2147483650,"software\microsoft\ole","MachineLaunchRestriction",$DCOMbinarySD.binarySD)
        }

    STEP FOUR: Get the SID for our zenoss account. PowerShell:

        # Provide the AD user, get the SID.
        $objUser = New-Object System.Security.Principal.NTAccount("domain", "zenoss")
        $strSID = $objUser.Translate([System.Security.Principal.SecurityIdentifier])
        $strSID.Value

    STEP FIVE: Modify the Service Control Manager to allow access for the zenoss AD account. This command can be run from an elevated command line, or through PowerShell:

        sc sdset scmanager "D:(A;;CC;;;AU)(A;;CCLCRPRC;;;IU)(A;;CCLCRPRC;;;SU)(A;;CCLCRPWPRC;;;SY)(A;;KA;;;BA)(A;;CCLCRPRC;;;PUT_YOUR_SID_HERE_FROM STEP_FOUR)S:(AU;FA;KA;;;WD)(AU;OIIOFA;GA;;;WD)"

    In step two, the script plows through a txt file and processes each computer listed on each line. The other scripts I ran on each machine using LANDesk; you can probably edit those scripts to process a text file as well. That's what got me off the ground monitoring the machines using Zenoss. Hopefully this is helpful for you. Watch the line breaks when copying the scripts.

    Read the article

  • JavaOne Content Catalog Live!

    - by programmarketingOTN
    The JavaOne Content Catalog—the central repository for information on sessions, demos, labs, user groups, exhibitors, and more for San Francisco 2012—is live! In the Content Catalog you can search on tracks, session types, session categories, keywords, and tags. Or, you can search for your favorite speakers to see what they're presenting this year. And, directly from the catalog, you can share sessions you're interested in with friends and colleagues through a broad array of social media channels. Start checking out JavaOne content now to plan your week at the conference. Then you'll be ready to sign up for all of your sessions in mid-July when the scheduling tool goes live. Happy browsing!

    Read the article

  • Is this Anti-Scraping technique viable with Crawl-Delay?

    - by skibulk
    I want to prevent web scrapers from abusing the 1,000,000 pages on my website. I'd like to do this by returning a "503 Service Unavailable" error code for users who access an abnormal number of pages per minute. I don't want search engine spiders to ever receive the error. My inclination is to set a robots.txt crawl-delay which will ensure spiders access a number of pages per minute under my 503 threshold. Is this an appropriate solution? Do all major search engines support the directive? Could it negatively affect SEO? Are there any other solutions or recommendations?
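    For reference, a minimal robots.txt sketch of the crawl-delay idea (the 10-second value is purely illustrative; note that Crawl-delay is honoured by Bing and Yahoo but ignored by Googlebot, whose crawl rate is instead set in Webmaster Tools):

        User-agent: *
        Crawl-delay: 10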

    Read the article

  • My First robots.txt

    - by Whitechapel
    I'm creating my first robots.txt and wanted to get a second opinion on it. Basically I have an FTP set up on my board for some special users to transfer files between each other, and I do NOT want that included in the search by the bots. I also want to point to my sitemap, which gets auto-generated by a PHP page. So here is what I have - what else should I include, and do I need to fix anything? Also, it links to xmlsitemap.php because that generates the sitemap when called. My goal is to allow any search bot to crawl the forums to grab meta data.

        User-agent: *
        Disallow: /admin/
        Disallow: /ali/
        Disallow: /benny/
        Disallow: /cgi-bin/
        Disallow: /ders/
        Disallow: /empire/
        Disallow: /komodo_117/
        Disallow: /xanxan/
        Disallow: /zeroordie/
        Disallow: /tmp/
        Sitemap: http://www.vivalanation.com/forums/xmlsitemap.php

    Edit: I'm not sure how to handle all the users' folders under /public_html/, since the robots.txt will be going in /public_html.

    Read the article

  • Installing Cairo to get FastRWeb working for R gWidgetsWWW2 -pkg

    - by hhh
    I want to install FastRWeb for R, but it requires Cairo. How can I install Cairo? The installation fails with:

        compilation terminated.
        make: *** [xlib-backend.o] Error 1
        ERROR: compilation failed for package ‘Cairo’
        * removing ‘/home/xfz/R/i686-pc-linux-gnu-library/2.13/Cairo’
        ERROR: dependency ‘Cairo’ is not available for package ‘FastRWeb’
        * removing ‘/home/xfz/R/i686-pc-linux-gnu-library/2.13/FastRWeb’
        The downloaded packages are in ‘/tmp/Rtmpno8hhF/downloaded_packages’
        Warning messages:
        1: In install.packages("FastRWeb", , "http://rforge.net/", type = "source") :
          installation of package 'Cairo' had non-zero exit status
        2: In install.packages("FastRWeb", , "http://rforge.net/", type = "source") :
          installation of package 'FastRWeb' had non-zero exit status

    I cannot find what this Cairo is; 16 entries come up for the search term below, so it is apparently some library:

        $ apt-cache search libcairo|wc
        16 132 996

    Perhaps related: http://stackoverflow.com/questions/9826128/r-making-r-rook-program-into-rscript-program-r and http://stackoverflow.com/questions/9812547/r-gui-vizualiser-with-command-line-access-browser-based-letting-users-to-s . Some related packages: FastRWeb and RServe for the gWidgetsWWW2 package.
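    One common way to resolve this on a Debian/Ubuntu system (an assumption about the poster's setup; the xlib-backend.o failure suggests the X11 development headers are missing as well as the cairo ones) is to install the development packages and then retry the source install:

        # system packages providing the headers the Cairo R package compiles against
        sudo apt-get install libcairo2-dev libxt-dev

        # then, inside R, retry the installs
        install.packages("Cairo")
        install.packages("FastRWeb", , "http://rforge.net/", type = "source")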

    Read the article

  • C# - How to detect all IP addresses from a LAN?

    - by SAMIR BHOGAYTA
    // Requires "using System.Net;" for Dns, IPHostEntry and IPAddress.
    // Note: Dns.GetHostByName is obsolete in newer .NET versions; Dns.GetHostEntry is the replacement.
    string strHostName = string.Empty;
    cmbIPAddress.Items.Clear();
    // Getting the IP addresses of the local machine...
    // First get the host name of the local machine.
    strHostName = Dns.GetHostName();
    // Then, using the host name, get the IP address list.
    IPHostEntry ipEntry = Dns.GetHostByName(strHostName);
    IPAddress[] iparrAddr = ipEntry.AddressList;
    if (iparrAddr.Length > 0)
    {
        for (int intLoop = 0; intLoop < iparrAddr.Length; intLoop++)
        {
            cmbIPAddress.Items.Add(iparrAddr[intLoop].ToString());
        }
    }

    Read the article

  • Trying to configure samba share with office server

    - by tomphelps
    Hi, I'm trying to set up fstab to automatically connect to my office shared server. I'm undoubtedly doing something silly here, as the username, password and server name work fine in the first snippet below, just not the second - any help would be appreciated! The following command works as expected:

        tom@tom-desktop: sudo /usr/bin/smbclient -L Server.local -Uguest
        Enter guest's password:
        Domain=[WORKGROUP] OS=[Unix] Server=[Samba 3.0.10]

            Sharename       Type      Comment
            ---------       ----      -------
            Lacie Disk      Disk      macosx
            Server          Disk      macosx
            IPC$            IPC       IPC Service (Server)
            ADMIN$          IPC       IPC Service (Server)
        Domain=[WORKGROUP] OS=[Unix] Server=[Samba 3.0.10]

            Server               Comment
            ---------            -------
            ACER-9D60040D10      SERVER
            Server

            Workgroup            Master
            ---------            -------
            WORKGROUP            ACER-9D60040D10

    But when I add the following line to /etc/fstab, I get the error "cifs_mount failed w/return code = -22":

        //Server.local/Server /media/maguires cifs username=guest,password=password 0 0
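    For comparison, a commonly suggested shape for a cifs line in /etc/fstab (the IP address, mount point and credentials file below are placeholders, and this is only a sketch of the usual options rather than a confirmed fix for the -22 error):

        //192.168.0.10/Server  /media/maguires  cifs  credentials=/home/tom/.smbcredentials,iocharset=utf8  0  0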

    Read the article

  • SEO and Spelling mistakes in keyword

    - by Sushil
    I am about to register a domain name, say someone.com (with the proper spelling), targeting the keyword "SOMEONE". But then I discovered in the Google keyword research tool that a typo, "SOME1", seems to be more popular: people search for it significantly more often than the properly spelled keyword. And luckily someone.com and some1.com are both available. I understand that I can register both domains, but I don't know which one I should keep my website on, redirecting the other one to it. Should I make the typo "some1.com" my base site? But that's a typo. P.S. My site has totally relevant content, not just a keyword-targeted worthless site. What do you guys suggest? I am confused. How would this affect my SEO ranking? EDIT: Because the competition for the keyword I am targeting is fairly low, I think that whichever domain I choose, it will appear on the search engine's first page regardless.

    Read the article

  • "X-Robots-Tag: noindex" on an HTTP 301 response

    - by Peter O.
    I understand that a resource with X-Robots-Tag: noindex forces some search engines, including Google, not to index the resource further. I also understand that an HTTP 301 response causes search engines to use the redirected URL instead of the original URL to refer to the resource. But what happens if both "X-Robots-Tag: noindex" and status code 301 occur on the same response? It's likely that the original URL will no longer be indexed, but will that cause the redirected URL to no longer be indexed too? This possibility is not mentioned in the X-Robots-Tag specification.
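    To make the scenario concrete, the response in question would look something like this (URLs are placeholders):

        HTTP/1.1 301 Moved Permanently
        Location: http://example.com/new-path
        X-Robots-Tag: noindex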

    Read the article

  • "Popular searches for this page" links with links to the same page, SEO difference?

    - by Rory McCann
    I've seen a few pages that have a "Popular searches for this page" section, with each search term linking back to the same page (e.g. http://theenglishchillicompany.co.uk/the-complete-chilli-pepper-book-a-gardeners-guide-to-choosing-growing-preserving-and-cooking/). I assume they are doing it for SEO purposes (more links to the page containing the desired search terms). Does this make a difference? It seems strange that a link on page A pointing to page A would be counted! Am I wrong?

    Read the article

  • How to prevent Google from finding my admin index page?

    - by krish
    I am running a website, but for some days I have stopped it and put up an under-construction page, because the index of the admin page is visible to the outside world through a Google search. One of my friends told me that the website's index is visible and that it is one step away from accessing the password file, and he showed me this very simply using a Google search. How can I prevent this? I am hosting my site with a hosting company and I reported this to them, but they simply replied that it is still secure, so I need not worry. Do I really not need to worry, and can I continue running my site with the admin page's index visible?
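    As an illustration of the usual first step (assuming an Apache host, which is an assumption about this particular hosting setup), automatic directory listings can be switched off so there is no generated index page for Google to find:

        # .htaccess in the affected directory: disable automatic directory listings
        Options -Indexes

    A robots.txt Disallow for the admin path keeps compliant crawlers away as well, but neither step replaces proper access control (authentication) on that directory.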

    Read the article

  • Restricting crawler activity to certain directories with robots.txt

    - by neimad
    I would like to use robots.txt to prevent indexing of some parts of my website. I want search engines to index only the / directory and not to search inside my controllers. In my robots.txt, I have this:

        User-Agent: *
        Disallow: /compagnies/
        Disallow: /floors/
        Disallow: /spaces/
        Disallow: /buildings/
        Disallow: /users/
        Disallow: /

    I put this file in /mysite/public. I tested the file with a robots.txt validator and got no errors. However, Google always returns the result of my site. For testing, I added Disallow: /, but again, Google indexed all pages. floors, spaces, buildings, etc. are not physical directories. Is this a bug? How can I work around it?

    Read the article

  • Android application Database Framework

    - by Marek Sebera
    When creating mobile (especially Android) applications, I usually run into a similar pattern of working with data. Usually I need to fetch some remote data (behind an authorization process) into a local cache, and on each subsequent request:

        Check networking
        Check the presence of the cache file
        Check the version of the cache file (if networking is available)
        Get the new version and save it to the cache (if networking is available and the file is not in the cache, or is outdated)

    The data store is no-SQL, JSON document-based (and yes, I know about the CouchDB Android version, but it doesn't fit my needs yet). The process of authorizing against the data source and the code for checking the version of the local cache are adapted per application, but the other code (handling the network, saving the cache, handling exceptions, ...) is always the same. Is there any data store helper I can use which provides the functions I described above?
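    For what it's worth, a minimal sketch of the recurring fetch-or-fall-back-to-cache flow described above, in plain Java (the class and method names are invented for illustration; a real Android implementation would also cover authorization and exception handling):

        import java.io.IOException;
        import java.io.InputStream;
        import java.net.URL;
        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.nio.file.StandardCopyOption;

        public class JsonCache {
            private final Path cacheFile;

            public JsonCache(Path cacheFile) {
                this.cacheFile = cacheFile;
            }

            // Returns the cached JSON document, refreshing it from the remote source
            // when the network is available and the cache is missing or outdated.
            public String load(URL remote, boolean networkAvailable, boolean cacheOutdated) throws IOException {
                boolean haveCache = Files.exists(cacheFile);
                if (networkAvailable && (!haveCache || cacheOutdated)) {
                    try (InputStream in = remote.openStream()) {
                        Files.copy(in, cacheFile, StandardCopyOption.REPLACE_EXISTING);
                    }
                }
                // Fall back to whatever is cached; this throws if nothing was ever fetched.
                return Files.readString(cacheFile);
            }
        }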

    Read the article

  • Is my robots.txt working as it should?

    - by TigerBlood
    I want crawlers to have access to http://www.example.com but not http://www.example.com/ My robots.txt is as follows:

        User-agent: *
        Allow: /$
        Disallow: /

    My site is in Google search results, but I am not coming up in Bing, Yahoo, etc. I have had the same robots.txt since last year. I initially requested inclusion about a year ago, and have also resubmitted the URL to those other search engines several times since. Is my robots.txt blocking those other crawlers? And if so, why not Google as well? Thanks in advance!

    Read the article

  • Why are we being twitter spammed?

    - by Tom Gullen
    This is a search relating to us: https://twitter.com/#!/search/realtime/scirra We're getting a lot of new accounts tweeting: "The Layers Bar - Scirra.com". Firstly, this is not us doing it, as we're quite proud of doing everything completely whitehat. Also, this tweet doesn't make any sense; "The Layers Bar" seems to be referring to a manual entry of ours. They all seem to be new accounts with no followers and no prior tweets, coming in like clockwork every hour. Does anyone know why this could be happening? Could this harm us? Is it possible to find out the source of this? I should mention I'm hesitant to report them all as spam because it could look like we are the culprits.

    Read the article

  • Google ranking - Modal views - google analytics events [duplicate]

    - by minchiya
    This question already has an answer here: "How to diagnose a search engine ranking drop?" I modified a site recently: I added many Google Analytics events to better understand user behaviour, and I also added two buttons on almost all pages of the site. Those buttons show modal views (I am using Bootstrap) with questions about the user's opinion; these modal views are on almost all pages of the site. After this modification the ranking of the site decreased in Google search, from second place to the second page :( Is it the events collected or the modal views that were added? If the modal views are the reason, how can I run similar surveys in a better way? Have you had a similar experience, or an explanation for this? Perhaps it is the effect of the Panda 4 update. In that case, what can I look for to improve the site? How can I debug the problem and its causes?

    Read the article

  • Cleaning a dataset of song data - what sort of problem is this?

    - by Rob Lourens
    I have a set of data about songs. Each entry is a line of text which includes the artist name, song title, and some extra text. Some entries are only "extra text". My goal is to resolve as many of these as possible to songs on Spotify using their web API. My strategy so far has been to search for the entry via the API - if there are no results, apply a transformation such as "remove all text between ( )" and search again. I have a list of heuristics and I've had reasonable success with this but as the code gets more and more convoluted I keep thinking there must be a more generic and consistent way. I don't know where to look - any suggestions for what to try, topics to study, buzzwords to google?
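    A bare-bones sketch of the transformation-pipeline approach described above, in Java (searchSpotify is a placeholder for the real web API call, and the individual clean-up rules are only examples of the kind of heuristics mentioned):

        import java.util.List;
        import java.util.Optional;
        import java.util.function.UnaryOperator;

        public class TrackResolver {
            // Transformations tried in order until a search returns a match.
            private static final List<UnaryOperator<String>> CLEANUPS = List.of(
                    s -> s,                                                   // raw entry first
                    s -> s.replaceAll("\\(.*?\\)", " "),                      // remove all text between ( )
                    s -> s.replaceAll("(?i)\\b(feat\\.?|ft\\.?)\\b.*", " "),  // drop featured-artist suffixes
                    s -> s.replaceAll("[^\\p{L}\\p{N} ]", " ")                // strip punctuation
            );

            public Optional<String> resolve(String entry) {
                for (UnaryOperator<String> cleanup : CLEANUPS) {
                    String query = cleanup.apply(entry).replaceAll("\\s+", " ").trim();
                    Optional<String> match = searchSpotify(query);
                    if (match.isPresent()) {
                        return match;
                    }
                }
                return Optional.empty();
            }

            // Placeholder for the real Spotify web API lookup.
            private Optional<String> searchSpotify(String query) {
                return Optional.empty();
            }
        }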

    Read the article

  • Blogging & SEO - They Go Hand in Hand

    You write a blog loyally every day or so. You provide informative, fascinating content for your faithful readers. You've even got a number of member links in there, too. But is this enough to get great search engine results for your hard work? In all probability not. Sure, you'll get listed with the search engines effortlessly. But without a top-twenty listing at one of the majors (Google, Yahoo! or MSN), you will not have traffic literally banging down your door....

    Read the article

  • Google locking on Ubuntu

    - by user170534
    The problem I'm facing is that Google doesn't respond in a timely way to connection requests sent from any browser available on Linux. As far as I can tell, this also existed in Mint, which is Ubuntu based. I have no debugging information or guess about the cause, but I'm sure there are people with the same problem. Pinging from the terminal is unaffected, but in any browser the pages stay unloaded. For example: Google loads fine and I search for something; then I decide to search for something else and, ta-da, you have to wait 30 seconds for the Google server to respond. I tried using Google's Public DNS without success. Fire away with suggestions and ideas!

    Read the article

  • How can I set parameters in Google webmaster tools so that my dynamic content is indexed?

    - by Werewolf
    I have read questions about URL parameters in Google Webmaster Tools on this site and in the Google Webmaster Help Center, but I have a problem. My site searches the database and shows some information. These two URLs display some data:

        http://mydomain.com/index.aspx?category=business
        http://mydomain.com/index.aspx?category=graphic&City=Paris

    In the URL parameters section, I can only define the parameter "category", so how can Google detect the proper values (business, graphics, real estate, ...)? Not every word is a valid search. If my page name is default.aspx or anything else, where should I define it? If I use URL rewriting like http://mydomain.com/search/category/business, must my settings change?

    Read the article

  • Flowchart for solving programming problems

    - by nurne
    I noticed that every developer implements a somewhat different flowchart for solving programming problems. By flowchart I mean a defined system of techniques that the developer goes through in a certain sequence, trying to solve the problem at hand. Some examples of techniques:

        Google "how to ..." or "... tutorial".
        Search the Java/MSDN/Apple/etc. API docs for the specific class or method.
        Search Stack Overflow for the exact problem, with tags like [iphone]/[java] etc.
        Take a nap and let the subconscious work.
        Debug.
        Draw the algorithm or system.
        Google the logged error message.
        Ask a colleague or manager.
        Ask a new question on Stack Overflow.

    From your experience, what is the best flowchart for solving a programming problem?

    Read the article
