Search Results

Search found 76226 results on 3050 pages for 'google api java client'.


  • Java: Reading images and displaying as an ImageIcon

    - by 11helen
    I'm writing an application which reads and displays images as ImageIcons (within a JLabel). The application needs to support JPEGs and bitmaps. For JPEGs, I find that passing the filename directly to the ImageIcon constructor works fine (even for displaying two large JPEGs); however, if I use ImageIO.read to get the image and then pass the image to the ImageIcon constructor, I get an OutOfMemoryError (Java heap space) when the second image is read (using the same images as before). For bitmaps, if I try to read by passing the filename to ImageIcon, nothing is displayed; however, reading the image with ImageIO.read and then using this image in the ImageIcon constructor works fine. I understand from other forum posts that the two methods don't behave the same for the different formats because of Java's compatibility issues with bitmaps. However, is there a way around my problem so that I can use the same method for both bitmaps and JPEGs without an OutOfMemoryError? (I would like to avoid having to increase the heap size if possible!) The OutOfMemoryError is triggered by this line:

        img = getFileContentsAsImage(file);

    and the method definition is:

        public static BufferedImage getFileContentsAsImage(File file) throws FileNotFoundException {
            BufferedImage img = null;
            try {
                ImageIO.setUseCache(false);
                img = ImageIO.read(file);
                img.flush();
            } catch (IOException ex) {
                //log error
            }
            return img;
        }

    The stack trace is:

        Exception in thread "AWT-EventQueue-0" java.lang.OutOfMemoryError: Java heap space
            at java.awt.image.DataBufferByte.<init>(DataBufferByte.java:58)
            at java.awt.image.ComponentSampleModel.createDataBuffer(ComponentSampleModel.java:397)
            at java.awt.image.Raster.createWritableRaster(Raster.java:938)
            at javax.imageio.ImageTypeSpecifier.createBufferedImage(ImageTypeSpecifier.java:1056)
            at javax.imageio.ImageReader.getDestination(ImageReader.java:2879)
            at com.sun.imageio.plugins.jpeg.JPEGImageReader.readInternal(JPEGImageReader.java:925)
            at com.sun.imageio.plugins.jpeg.JPEGImageReader.read(JPEGImageReader.java:897)
            at javax.imageio.ImageIO.read(ImageIO.java:1422)
            at javax.imageio.ImageIO.read(ImageIO.java:1282)
            at framework.FileUtils.getFileContentsAsImage(FileUtils.java:33)
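    One possible way around the heap pressure (not from the original thread; a sketch assuming a rough pixel budget is acceptable for display purposes) is to let ImageIO handle both formats but subsample large images at decode time, so the decoded BufferedImage never holds the full-resolution pixels:

        import java.awt.image.BufferedImage;
        import java.io.File;
        import java.io.IOException;
        import java.util.Iterator;
        import javax.imageio.ImageIO;
        import javax.imageio.ImageReadParam;
        import javax.imageio.ImageReader;
        import javax.imageio.stream.ImageInputStream;

        public class ScaledImageLoader {
            // Decodes JPEGs and bitmaps alike, skipping pixels so the result
            // stays under maxPixels (e.g. 4000000 for a screen-sized image).
            public static BufferedImage read(File file, long maxPixels) throws IOException {
                ImageInputStream in = ImageIO.createImageInputStream(file);
                if (in == null) {
                    throw new IOException("Cannot open: " + file);
                }
                try {
                    Iterator<ImageReader> readers = ImageIO.getImageReaders(in);
                    if (!readers.hasNext()) {
                        throw new IOException("Unsupported format: " + file);
                    }
                    ImageReader reader = readers.next();
                    reader.setInput(in);
                    long pixels = (long) reader.getWidth(0) * reader.getHeight(0);
                    int step = 1;
                    while (pixels / ((long) step * step) > maxPixels) {
                        step++; // decode only every step-th pixel in x and y
                    }
                    ImageReadParam param = reader.getDefaultReadParam();
                    param.setSourceSubsampling(step, step, 0, 0);
                    BufferedImage img = reader.read(0, param);
                    reader.dispose();
                    return img;
                } finally {
                    in.close();
                }
            }
        }

    The ImageIcon is then built from the returned image as before; ScaledImageLoader and maxPixels are illustrative names, not part of any library.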


  • sync korganizer with google calendar

    - by Joshua Robison
    You know that calendar that pops up when you click on the time in the panel? I like it, but I tend to keep all my events in Google Calendar. My KOrganizer version is 4.4.11. Is there a way that I can get my Google Calendar to sync with this panel-clock calendar? Thanks.

    EDIT 1: It looks like what I need to do is sync Google Calendar with KOrganizer, and that should do the trick, from what I understand.

    EDIT 2: Some answers here suggest I need this plug-in:

        sudo apt-get install akonadi-kde-resource-googledata

    but they give no explanation of how to configure it once installed.


  • Rewrite Generic URLs into real URLs on Google Analytics

    - by valdroni
    I have an iPhone app for a forum which also has limited Google Analytics reporting. This app reports page views in the following generic form:

        /forum/67
        /thread/29036

    The numbers above represent forum and thread IDs. I am trying to set an Advanced filter which will rewrite/report the page views in Google Analytics in the following form:

        http://www.mysite.com/forum-67.html
        http://www.mysite.com/thread-29036.html

    Can someone please assist me in creating an Advanced Google Analytics filter which will show the URLs as live links pointing to the correct pages? Is there another method to achieve what I'm looking for? Obviously there will be a need for some RegExp matching, but I cannot get around it.
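    A hedged sketch of what such a filter could look like, using the Advanced filter's extract/constructor syntax (field names follow the classic Google Analytics filter UI; the exact pattern is an assumption based on the URLs in the question):

        Filter Type:              Custom filter > Advanced
        Field A -> Extract A:     Request URI    ^/(forum|thread)/([0-9]+)$
        Output To -> Constructor: Request URI    /$A1-$A2.html

    Note that filters only change how new hits are reported; to make the rewritten paths clickable against www.mysite.com, the profile's website URL setting can be pointed at that domain.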


  • Link tracking: Amazon or Google way

    - by Howard
    When doing a shopping site, the best way is to reference some successful stores, like Amazon. In the area of link tracking, for example, to see which section of your front page yields better conversion:

    Amazon way: generate a unique URL for each link on the front page, such as

        http://www.amazon.com/gp/product/B0083Q04IQ/ref=s9_pop_gw_g424_ir04/175-6575053-9292830?pf_rd_m=ATVPDKIKX0DER&pf_rd_s=center-2&pf_rd_r=0AMJCKBBQA63EP0XHB86&pf_rd_t=101&pf_rd_p=1263340922&pf_rd_i=507846

    Google way: use Google Analytics

        <a href="/products/abc" onClick="javascript: pageTracker._trackPageview('/from-main-menu/products/abc');">

    What are the pros and cons of the above two approaches (besides Google requiring JS support)?


  • How to show other characters in online 2D rpg

    - by Loligans
    I have Player 1 and Player 2. I am using JSON to send and retrieve player data between the client and the server, but when another player logs in and is in the same map, how would I send that data to both players so the graphics engine can be updated to show 2 players on the map?

    About my game: it is a 2D RPG tile-based game; it is 24x15 tiles; it is real-time action; it should handle anywhere between 10-150 ping; players interact with each other when in the same map and can see each other moving around; the game world is persistent, and is saved when the server shuts down.

    Right now the server sends each player only their own information, which is inside a JSON object. Here is an example of what I am talking about: if you notice, there are 2 separate characters in 2 separate clients, but they are running on the same server. I am trying to get them to show up on both clients, but I don't know how I should accomplish this. Should I send it as an added value in the JSON object? Also, what is the name of this process, so I can look it up and find more info on it?
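    The usual name for this is state replication, often done via per-map snapshot broadcasting. A minimal sketch, not from the original thread and with illustrative class and field names: the server keeps the players grouped by map and, whenever something changes, sends every client in that map one JSON snapshot listing all players there, so each client draws every entry rather than only its own character.

        import java.util.ArrayList;
        import java.util.List;

        class Player {
            final String id;
            int x, y;
            Player(String id, int x, int y) { this.id = id; this.x = x; this.y = y; }
            String toJson() {
                return String.format("{\"id\":\"%s\",\"x\":%d,\"y\":%d}", id, x, y);
            }
        }

        class MapRoom {
            final List<Player> players = new ArrayList<Player>();

            // One message describing everyone in this map; broadcast it to every
            // connection in the room instead of sending each player only itself.
            String snapshotJson() {
                StringBuilder sb = new StringBuilder("{\"players\":[");
                for (int i = 0; i < players.size(); i++) {
                    if (i > 0) sb.append(',');
                    sb.append(players.get(i).toJson());
                }
                return sb.append("]}").toString();
            }
        }

    On the client side, the renderer iterates over the players array and draws each character, treating the entry whose id matches its own as the local player.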


  • Google Webmaster Tools verification failed

    - by KMC
    I have a site created with Ruby on Rails. I had verified it in Google Webmaster Tools some months ago, which was successful. Then one day Webmaster Tools started giving me re-verification failures. I tried again to verify my site using the meta tag and the HTML file, but I kept getting "Verification failed. The connection to your server timed out." Since then, Google has stopped crawling my site's content, though somehow Google still crawls the PDF content on my site.
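    For reference, a minimal sketch of the meta tag method mentioned (the token is a placeholder; Webmaster Tools generates the real value, and the file method instead serves a Google-named HTML file from the site root):

        <!-- meta tag method: placed inside <head> on the homepage -->
        <meta name="google-site-verification" content="YOUR-TOKEN-HERE" />

    Since the reported error is a timeout rather than a missing tag, it may also be worth checking whether the Rails server or a firewall in front of it is slow to answer or blocking Googlebot's requests entirely.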


  • Google's Opinion on Javascript Page Refresh

    - by user35306
    I was wondering if anyone knows Google's view on this. My company has a homepage that features a lot of 3rd parties on it, and it needs to inform customers which ones are currently online, which aren't, and which are currently busy. Because this constantly changes, we have the homepage refresh to show the most relevant and up-to-date content to our users. I'm not using a meta refresh element in the http-equiv parameter to do this. Instead I have this JS element to refresh the page:

        window.setTimeout("refreshPage()", 120000);

    I just want to know whether people think Google might consider this a violation of the content guidelines or not. Or, if it's not an outright violation, then at least whether Google frowns on this or not. It doesn't redirect the user to a different page or anything, just refreshes the page so that they can see the most relevant content.


  • Google Cache showing wrong URL

    - by Sathiya Kumar
    I searched for the cache details of the URL http://property.sulekha.com/pune-properties, but the Google cache shows details for property.sulekha.com. I don't know why it's showing like this. Not only for http://property.sulekha.com/pune-properties, but also for all the Indian city related URLs like http://property.sulekha.com/chennai-properties, http://property.sulekha.com/mumbai-properties, http://property.sulekha.com/kolkata-properties, etc. I don't even find these URLs in the Google search results. If I search "Chennai properties" in Google, I find property.sulekha.com and not http://property.sulekha.com/chennai-properties. Why is it happening like this? Please let me know.


  • Google crawler reports a "not found" error for a link inside the <head> tag

    - by inckka
    I've found a crawl error on my site, and it is listed as a page not found (404) link. Here's the broken link:

        http://mydomain.com/blog/comments/feed/

    I'm using Google Webmaster Tools and found that this broken link comes from my web site pages' head tag. Here's the actual code where that link is situated:

        <head>
        <link rel="alternate" type="application/rss+xml" title="My Domain Blog &raquo; Feed" href="http://www.my-domain.com/blog/feed/" />
        </head>

    So Google reports this link as not found. Actually, this link's target is not an exact page or location, but it is essential for the blog feeds. Anyway, I have to fix this and remove it from the Google crawl errors list, but I haven't got any idea how, because I cannot redirect or send a 404 header for this link target. Has anyone got an idea of how to fix this?


  • pluto or jetspeed on google app engine?

    - by Patrick Cornelissen
    I am trying to build something "portlet server"-ish on the Google App Engine (as open source). I'd like to use the JSR 168/286 standards, but I think that the restrictions of the App Engine will make it somewhere between tricky and impossible. Has anyone tried to run Jetspeed, or an application that uses Pluto internally, on the Google App Engine? Based on my current knowledge of portlets and the App Engine, I'm anticipating these problems:

    A war file with portlets is, from the deployment standpoint, more or less a complete webapp (yes, I know that it doesn't really work without a portal server). The war file may contain its own web.xml etc. This makes deployment on the App Engine rather difficult, because the apps are not visible to each other, so all portlet-containing archives need to be included in the war file of the deployed "App Engine based portal server".

    The "portlets" are (at least in Liferay) started as permanent servlet processes, based on their portlet.xmls and web.xmls, which are located in the same spot for every portlet archive that is loaded. I think this may be problematic on the App Engine, because everything is in one big "web app", so it may be tricky to access the portlet.xmls from each archive. This prevents 100% compatibility, in my opinion.

    Is there anyone who has any experience with the combination of portlets and the App Engine? Do you think it's feasible to modify Jetspeed, Pluto or any other portlet container to be able to run it on the App Engine?


  • Correctly indexing multiple domains with same content in Google and others

    - by AJweb
    I have a client with a dozen territorial domains, like mydomain.co.uk, mydomain.fr, mydomain.de, etc. Most of these domains hold a different language of the same dynamic content (shop), but some, like .co.uk and .com, have the same language and content, except for some content customized to each country/domain on the front page, contact and other pages. I am aware that we should use the canonical meta tag to mark that duplicated content, but we want the .co.uk to be present in the UK (indexed in google.co.uk) and the .com to be present in the US and other countries, for example, or at least that is the goal. Is there anything we can do to "help" Google determine the geographical meaning of each domain? If we mark the .com and .co.uk sites with the canonical tag, do you know how Google will decide which one to show for a given search?
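    One approach commonly recommended for exactly this setup (a sketch, using the question's placeholder domains) is rel="alternate" hreflang annotations, which tell Google the pages are regional variants rather than plain duplicates, so it can pick the right domain per country without a canonical tag hiding one of them:

        <link rel="alternate" hreflang="en-GB" href="http://mydomain.co.uk/" />
        <link rel="alternate" hreflang="en-US" href="http://mydomain.com/" />
        <link rel="alternate" hreflang="fr"    href="http://mydomain.fr/" />

    In addition, Webmaster Tools offers a per-site geographic target setting, which is mainly useful for the generic .com; country-code domains like .co.uk are geotargeted automatically.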


  • Google search does not show sub-pages from my website

    - by Chang
    My website appears in Google search, but only the first page. Of course I have sub-pages linked from the first page, but the sub-pages do not show in Google search. Not in Yahoo, not in Bing. What should I do? The sub-pages have not shown up for three years. (I tried searching site:mydomain.com and pressed the 'repeat the search with the omitted results included' link.) What would you suspect the reason is? My website's addresses were like xxx.php?yy=zzz, etc., so I changed them to /yy/zzz using mod_rewrite. I thought it might be (X)HTML standard violations, so I have now fixed that as well. I hope Google will soon have my entire website, but I am a little bit pessimistic. Do you have any thoughts?
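    One step often suggested in this situation (not from the original thread; mydomain.com and /yy/zzz are the question's placeholders) is to submit an XML sitemap in Webmaster Tools listing the rewritten sub-page URLs, so discovery does not depend solely on crawling the front page's links:

        <?xml version="1.0" encoding="UTF-8"?>
        <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
          <url><loc>http://mydomain.com/</loc></url>
          <url><loc>http://mydomain.com/yy/zzz</loc></url>
        </urlset>

    If the old xxx.php?yy=zzz addresses are what the index knows about, 301 redirects from them to the new paths also help consolidate the pages.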


  • Google Authorship image for my blogspot has disappeared in Google SERP. Why?

    - by Sathiya Kumar
    I have a blogspot, and I used Google Authorship markup to have my image appear on the Google SERP for my keywords. My image was shown for the last 2 months, but while checking the SERP for my blog, I found that my authorship markup is not working. My image, name and G+ followers count are not appearing near my blogspot URL in the SERP. I didn't make any changes in my Google+ profile or in my blogspot header tag where I had put the authorship code. I tried to find the reason, but I didn't find any valuable answer. Can anyone answer this question? Please let me know if you have already experienced something like this.


  • Portable way of finding total disk size in Java (pre java 6)

    - by Wouter Lievens
    I need to find the total size of a drive in Java 5 (or 1.5, whatever). I know that Java 6 has a new method in java.io.File, but I need it to work in Java 5. Apache Commons IO has org.apache.commons.io.FileSystemUtils to provide the free disk space, but not the total disk space. I realize this is OS-dependent and will need to depend on messy command line invocation. I'm fine with it working on "most" systems, i.e. Windows/Linux/Mac OS X. Preferably I'd like to use an existing library rather than write my own variants. Any thoughts? Thanks.
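    If no library turns up, a minimal sketch of the command-line route on Unix-like systems (an assumption-laden fallback, not a portable solution; Windows would need a separate branch, e.g. parsing `dir` or `fsutil` output):

        import java.io.BufferedReader;
        import java.io.IOException;
        import java.io.InputStreamReader;

        public class DiskSize {
            /** Total size, in kilobytes, of the filesystem holding path; -1 if unknown. */
            public static long totalKilobytes(String path) throws IOException {
                Process p = Runtime.getRuntime().exec(new String[] {"df", "-k", path});
                BufferedReader r = new BufferedReader(new InputStreamReader(p.getInputStream()));
                try {
                    r.readLine();                 // header: Filesystem 1K-blocks Used Available ...
                    String line = r.readLine();   // data row for the filesystem holding path
                    if (line == null) return -1;
                    String[] cols = line.trim().split("\\s+");
                    return Long.parseLong(cols[1]);  // the 1K-blocks (total size) column
                } finally {
                    r.close();
                }
            }
        }

    Caveat: some df implementations wrap long device names onto a second line, so real code should merge wrapped rows before splitting.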


  • Is Google good or bad for programmers? [closed]

    - by Vikas
    Recently I was interviewed by a company and faced one question. The interviewer asked me a question, and at that time I didn't know the answer, but if I had been asked just 4 months earlier, I could have answered it. The question was about a new language that I had learned just 4 months ago. But I only got an overview of the language and just got started working with it. Whenever I face difficulty, I google it. That means we do not have to memorize the whole programming language book! So in that situation I felt that Google screwed my job! Not talking subjectively: is it good to google all the time?


  • Google Analytics - TOS section pertaining to privacy

    - by Eike Pierstorff
    The Google Analytics terms of service do not allow one to track "data that personally identifies an individual (such as a name, email address or billing information), or other data which can be reasonably linked to such information by Google". Does anybody have first-hand knowledge of whether this includes user IDs which cannot be resolved by Google but can be linked to actual persons via the Analytics user's CRM system (e.g. a CRM linked to Analytics via API access)? I used to think so, but if that were the case, many ecommerce implementations would be illegal (since they store transaction IDs which can be linked to clients' purchases). If anybody has insights about the intended meaning of the paragraph (preferably with a reliable source), it would be great if he/she could share :-)


  • Java Reflection, java.lang.IllegalAccessException Error

    - by rubby
    Hi all, my goal is: the third class will read the name of a class as a String from the console. Upon reading the name of the class, it will automatically and dynamically (!) instantiate that class and call its writeout method. If that class is not read from input, it will not be initialized. I am getting this error:

        java.lang.IllegalAccessException: Class deneme.class3 can not access a member of class java.lang.Object with modifiers "protected"

    and I don't know how I can fix it. Can anyone help me?

        import java.io.*;
        import java.lang.reflect.*;

        class class3 {
            public void run() {
                try {
                    BufferedReader reader = new BufferedReader(new InputStreamReader(System.in));
                    String line = reader.readLine();
                    Class c1 = Class.forName("java.lang.String");
                    Object obj = new Object();
                    Class c2 = obj.getClass();
                    Method writeout = null;
                    for (Method mth : c2.getDeclaredMethods()) {
                        writeout = mth;
                        break;
                    }
                    Object o = c2.newInstance();
                    writeout.invoke(o);
                } catch (Exception ee) {
                    System.out.println("ERROR! " + ee);
                }
            }

            public void writeout3() {
                System.out.println("class3");
            }
        }

        class class4 {
            public void writeout4() {
                System.out.println("class4");
            }
        }

        class ornek {
            public static void main(String[] args) {
                System.out.println("Please Enter Name of Class : ");
                class3 example = new class3();
                example.run();
            }
        }
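    For what it's worth, the exception happens because the code reflects on java.lang.Object (obj.getClass()) rather than on the class named on the console, and then invokes whatever declared method getDeclaredMethods() returns first, which on Object can be a protected method such as finalize or clone. A hedged sketch of the intended flow, reusing the question's own naming (the input must be the fully qualified name, e.g. deneme.class4, and the target class needs a no-arg constructor):

        import java.io.BufferedReader;
        import java.io.InputStreamReader;
        import java.lang.reflect.Method;

        class dispatcher {
            public void run() throws Exception {
                BufferedReader reader = new BufferedReader(new InputStreamReader(System.in));
                String name = reader.readLine().trim();   // e.g. "deneme.class4"
                Class<?> c = Class.forName(name);         // load the named class, not Object
                Object instance = c.newInstance();
                // invoke the writeoutN method declared by that class
                for (Method m : c.getDeclaredMethods()) {
                    if (m.getName().startsWith("writeout")) {
                        m.invoke(instance);
                        break;
                    }
                }
            }
        }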


  • JODA time in Java Appengine

    - by aloo
    Has anyone gotten Joda-Time classes to work on Google App Engine? I'm using 1.3.4 of the Java SDK, and I get the following error when trying:

        java.lang.NoClassDefFoundError: com/google/appengine/repackaged/org/joda/time/DateTimeZone

    I've imported it as well:

        import com.google.appengine.repackaged.org.joda.time.DateTime;
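    Not from the original thread, but the error message itself suggests the cause: the com.google.appengine.repackaged.* namespace is an internal, repackaged copy used by the SDK and isn't guaranteed to be present at runtime. The usual approach is to bundle the official Joda-Time jar in WEB-INF/lib and import it under its real package name; a sketch assuming joda-time is on the classpath:

        import org.joda.time.DateTime;
        import org.joda.time.DateTimeZone;

        DateTime now = new DateTime(DateTimeZone.UTC);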


  • Submitting new site to directories - will Google penalize?

    - by Programmer Joe
    I just started a new site with a forum to discuss stocks. I've already submitted my site to DMOZ. To help promote my site, and to help people who are looking for stock discussion forums find it, I'm thinking of submitting my site to a few more directories, but I'm hesitant because I know Google will penalize a site if it believes the backlinks to the site are spammy and/or low quality. So, I have a few questions: 1) If I submit my site to directories with a PR between 4 and 5, will those backlinks be considered spammy/low quality? I noticed most free directories have a PR between 4 and 5, but I don't know if backlinks from those directories would be considered spammy by Google. 2) I'm thinking of submitting it to Best of the Web and JoeAnt, but these are paid. Does anybody have any experience with these two paid directories? Are they considered higher quality by Google?


  • HTML5 valid Google+ Button - Bad value publisher for attribute rel

    - by mrtsherman
    I recently migrated my website from XHTML Transitional to HTML5, specifically so that I could make use of valid block-level anchor tags: <a><div /></a>. When running validation I encountered the following error:

        Bad value publisher for attribute rel on element link: Keyword publisher is not registered.

    But according to this page, that is exactly what I am supposed to do: https://developers.google.com/+/plugins/badge/#connect

    My code:

        <link href="https://plus.google.com/xxxxxxxxxxxxxxxx" rel="publisher" />
        <a href="https://plus.google.com/xxxxxxxxxxxxxxx?prsrc=3" style="text-decoration:none;">
        <img src="https://ssl.gstatic.com/images/icons/gplus-16.png" alt="" style="border:0;width:16px;height:16px;"/>
        </a>

    I can't figure out how to implement this in an HTML5-compliant way. Can anyone help?
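    One workaround that was commonly suggested for this validator complaint (an assumption, not an official Google recipe, and validator behavior depends on the rel keyword registry at the time of checking) is to drop the <link> element and put rel="publisher" on the visible anchor instead, since the registry of rel values for <a> is more permissive than the one for <link>:

        <a href="https://plus.google.com/xxxxxxxxxxxxxxx?prsrc=3" rel="publisher" style="text-decoration:none;">
            <img src="https://ssl.gstatic.com/images/icons/gplus-16.png" alt="" style="border:0;width:16px;height:16px;"/>
        </a>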


  • Faster, Simpler access to Azure Tables with Enzo Azure API

    - by Herve Roggero
    After developing the latest version of Enzo Cloud Backup I took the time to create an API that would simplify access to Azure Tables (the Enzo Azure API). At first, my goal was to make the code simpler compared to the Microsoft Azure SDK. But as it turns out it is also a little faster; and when using the specialized methods (the fetch strategies) it is much faster out of the box than the Microsoft SDK, unless you start creating complex parallel and resilient routines yourself. Last but not least, I decided to add a few extension methods that I think you will find attractive, such as the ability to transform a list of entities into a DataTable. So let's review each area in more detail.

    Simpler Code

    My first objective was to make the API much easier to use than the Azure SDK. I wanted to reduce the amount of code necessary to fetch entities, remove the code needed to add automatic retries and handle transient conditions, and give additional control, such as a way to cancel operations, obtain basic statistics on the calls, and control the maximum number of REST calls the API generates in an attempt to avoid throttling conditions in the first place (something you cannot do with the Azure SDK at this time).

    Strongly Typed

    Before diving into the code, note that the following examples rely on a strongly typed class called MyData. The way MyData is defined for the Azure SDK is similar to the Enzo Azure API, with the exception that they inherit from different classes. With the Azure SDK, classes that represent entities must inherit from TableServiceEntity, while classes with the Enzo Azure API must inherit from BaseAzureTable or implement a specific interface.

        // With the SDK
        public class MyData1 : TableServiceEntity
        {
            public string Message { get; set; }
            public string Level { get; set; }
            public string Severity { get; set; }
        }

        // With the Enzo Azure API
        public class MyData2 : BaseAzureTable
        {
            public string Message { get; set; }
            public string Level { get; set; }
            public string Severity { get; set; }
        }

    Now that the classes representing an Azure Table entity are defined, let's review what the Azure SDK code looks like when fetching all the entities from an Azure Table (note the use of a few variables: the _tableName variable stores the name of the Azure Table, and the ConnectionString property returns the connection string for the Storage Account containing the table):

        // With the Azure SDK
        public List<MyData1> FetchAllEntities()
        {
            CloudStorageAccount storageAccount = CloudStorageAccount.Parse(ConnectionString);
            CloudTableClient tableClient = storageAccount.CreateCloudTableClient();
            TableServiceContext serviceContext = tableClient.GetDataServiceContext();
            CloudTableQuery<MyData1> partitionQuery =
                (from e in serviceContext.CreateQuery<MyData1>(_tableName)
                 select new MyData1()
                 {
                     PartitionKey = e.PartitionKey,
                     RowKey = e.RowKey,
                     Timestamp = e.Timestamp,
                     Message = e.Message,
                     Level = e.Level,
                     Severity = e.Severity
                 }).AsTableServiceQuery<MyData1>();
            return partitionQuery.ToList();
        }

    This code gives you automatic retries because AsTableServiceQuery does that for you. Also, note that this method is strongly typed because it is using LINQ. Although this doesn't look like too much code at first glance, you are actually mapping the strongly-typed object manually. So for larger entities, with dozens of properties, your code will grow. And from a maintenance standpoint, when a new property is added, you may need to change the mapping code. You will also note that the mapping being performed is optional; it is desired when you want to retrieve specific properties of the entities (not all) to reduce the network traffic. If you do not specify the properties you want, all the properties will be returned; in this example we are returning the Message, Level and Severity properties (in addition to the required PartitionKey, RowKey and Timestamp). The Enzo Azure API does the mapping automatically and also handles automatic retries when fetching entities. The equivalent code to fetch all the entities (with the same three properties) from the same Azure Table looks like this:

        // With the Enzo Azure API
        public List<MyData2> FetchAllEntities()
        {
            AzureTable at = new AzureTable(_accountName, _accountKey, _ssl, _tableName);
            List<MyData2> res = at.Fetch<MyData2>("", "Message,Level,Severity");
            return res;
        }

    As you can see, the Enzo Azure API returns the entities already strongly typed, so there is no need to map the output. Also, the Enzo Azure API makes it easy to specify the list of properties to return, and to specify a filter as well (no filter was provided in this example; the filter is passed as the first parameter).

    Fetch Strategies

    Both approaches discussed above fetch the data sequentially. In addition to the linear/sequential fetch methods, the Enzo Azure API provides specific fetch strategies. Fetch strategies are designed to prepare a set of REST calls, executed in parallel, in a way that performs faster than if you were to fetch the data sequentially. For example, if the PartitionKey is a GUID string, you could prepare multiple calls, providing appropriate filters (['a', 'b'[, ['b', 'c'[, ['c', 'd'[, ...), and send those calls in parallel. As you can imagine, the code necessary to create these requests would be fairly large. With the Enzo Azure API, two strategies are provided out of the box: the GUID and List strategies. If you are interested in how these strategies work, see the Enzo Azure API Online Help. Here is example code that performs parallel requests using the GUID strategy (which executes 2 to 3 times faster than the sequential methods discussed previously):

        public List<MyData2> FetchAllEntitiesGUID()
        {
            AzureTable at = new AzureTable(_accountName, _accountKey, _ssl, _tableName);
            List<MyData2> res = at.FetchWithGuid<MyData2>("", "Message,Level,Severity");
            return res;
        }

    Faster Results

    With Sequential Fetch Methods

    Developing a faster API wasn't a primary objective; but it appears that the performance tests performed with the Enzo Azure API deliver the data a little faster out of the box (5%-10% on average, and sometimes up to 50% faster) with the sequential fetch methods. Although the amount of data is the same regardless of the approach (and the REST calls are almost exactly identical), the object mapping approach is different. So it is likely that the slight performance increase is due to a lighter API. Using LINQ offers many advantages and tremendous flexibility; nevertheless when fetching data it seems that the Enzo Azure API delivers the data faster. For example, the same code previously discussed delivered the following results when fetching 3,000 entities (about 1KB each). The average elapsed time shows that the Azure SDK returned the 3,000 entities in about 5.9 seconds on average, while the Enzo Azure API took 4.2 seconds on average (a 39% improvement).

    With Fetch Strategies

    When using the fetch strategies we are no longer comparing apples to apples; the Azure SDK is not designed to implement fetch strategies out of the box, so you would need to code the strategies yourself. Nevertheless I wanted to provide out-of-the-box capabilities, and as a result you see a test that returned about 10,000 entities (1KB each entity), with an average execution time over 5 runs. The Azure SDK implemented a sequential fetch while the Enzo Azure API implemented the List fetch strategy. The fetch strategy was 2.3 times faster. Note that this test quickly hit a limit on my network bandwidth (3.56Mbps), so the results of the fetch strategy are significantly below what they could be with a higher bandwidth.

    Additional Methods

    The API wouldn't be complete without support for a few important methods other than the fetch methods discussed previously. The Enzo Azure API offers these additional capabilities:

    - Support for batch updates, deletes and inserts
    - Conversion of entities to DataRow, and List<> to a DataTable
    - Extension methods for Delete, Merge, Update, Insert
    - Support for asynchronous calls and cancellation
    - Support for fetch statistics (total bytes, total REST calls, retries…)

    For more information, visit http://www.bluesyntax.net or go directly to the Enzo Azure API page (http://www.bluesyntax.net/EnzoAzureAPI.aspx).

    About Herve Roggero

    Herve Roggero, Windows Azure MVP, is the founder of Blue Syntax Consulting, a company specialized in cloud computing products and services. Herve's experience includes software development, architecture, database administration and senior management with both global corporations and startup companies. Herve holds multiple certifications, including an MCDBA, MCSE, MCSD. He also holds a Master's degree in Business Administration from Indiana University. Herve is the co-author of "PRO SQL Azure" from Apress and runs the Azure Florida Association (on LinkedIn: http://www.linkedin.com/groups?gid=4177626). For more information on Blue Syntax Consulting, visit www.bluesyntax.net.


  • Google Launches Hurricane Sandy Crisis Map

    - by Jason Fitzpatrick
    Whether you’re in the path of Hurricane Sandy or just want to keep an eye on what’s going on, Google’s new Hurricane Sandy crisis map will keep you abreast of any new storm-related developments. The main map tracks the current location of the storm, the forecasted track, storm surge probabilities, storm radar information, and active emergency shelters. In addition to the national-size map, Google also has a New York City specific map with evacuation routes and additional emergency information. Google Crisis Map: Hurricane Sandy [via Mashable]


  • Google Chrome and Theora

    - by Michas
    I have a problem with Google Chrome and Theora video. I have a video that plays nicely in Opera and Firefox; however, it doesn't play in Google's browser, and I don't know why. I made this video with ogg2theora. Test sample: http://wwsi.edu.pl/video/enigma.html Does anyone know how I should encode to Theora so that it works in Google Chrome? P.S. I am not interested in encoding in WebM or H.264 this time. I am not asking what the best way to publish video on a website is. I am only doing experiments with Ogg Theora. The test site has a fallback to H.264 in the video tag and WMV in a WMP plugin.
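    Not from the original thread, but a common culprit worth ruling out: Chrome is pickier than Firefox about the Content-Type header on Ogg files, so a server that sends .ogv as text/plain or application/octet-stream can break playback in Chrome alone. With Apache, a hedged sketch of the fix:

        # .htaccess or httpd.conf (mod_mime): serve Ogg containers with proper MIME types
        AddType video/ogg .ogv
        AddType audio/ogg .ogg .oga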


  • Business not showing up on right hand side of google search

    - by Chris
    The business I work for currently has a verified Google+ business page, and in the past this page has shown up as a thumbnail during Google searches. The thumbnail brings up basic information such as our picture, operating hours, etc. However, since verifying the business page, the thumbnail overview of our business does not seem to show up anymore. I have tried Google-searching our business on several computers, and it still just brings up the normal search results. Is there a setting I need to activate in order for the thumbnail to appear? Thanks


  • Google Search not displaying results from sub-pages

    - by nlovric
    I published a new site with some delicate content on September 26, 2012 (UTC), and no results from sub-pages appear in Google Search, only results from the main page. Of course I have sub-pages linked from the main page. Entering "neven lovric" "cat out of the bag" into Google Search finds the main page. Is this type of behavior normal? I ask this because the first site was seized (my account was locked) by the NameCheap, Inc. Risk Assessment Team, allegedly due to PayPal, Inc. reversing my payment for the extension of the domain's registration before I was able to publish any content on it. In 2011, Google, Inc. blocked all results for certain keywords from being displayed to its users in the Arab Republic of Egypt during the demonstrations there. So, considering previous events, this is not an unlikely scenario in this case either.

