Search Results

Search found 48264 results on 1931 pages for 'text search'.


  • Blogger homepage won't update!

    - by Sims Siniron
    I am new to blogging and Webmaster Tools. When I add a new post to my blog, my homepage normally gets updated automatically as well. But since the 14th of January my homepage has not been updated in the Google SERPs, and as a result I am losing visibility there. Previously, 70-80% of my new articles would reach the first page of results; since the problem started, none of them has reached even the top 15 pages of the Google SERPs :( On 1/12/12, Google Webmaster Tools sent me a "Notice of DMCA removal from Google Search" message indicating that one of my URLs contained infringing content, which I deleted after receiving the notice. I also checked all of my other posts for any additional infringing content. After removing it, I filled out Google's Content Removed Notification form to notify them, and Google sent me feedback that they had received it, suggesting: "In the future, if you have removed the allegedly infringing content from your site (and won't put it back), please use the correct form" - which I also filled out. Now my question is: is everything I did so far correct? Although my new posts are indexed in the Google SERPs with "..", why won't Google update my homepage, which previously was updated automatically whenever a new article was published?


  • Collaborative Whiteboard using WebSocket in GlassFish 4 - Text/JSON and Binary/ArrayBuffer Data Transfer (TOTD #189)

    - by arungupta
    This blog has published a few posts on using the JSR 356 Reference Implementation (Tyrus) as it is integrated in the GlassFish 4 promoted builds: TOTD #183: Getting Started with WebSocket in GlassFish; TOTD #184: Logging WebSocket Frames using Chrome Developer Tools, Net-internals and Wireshark; TOTD #185: Processing Text and Binary (Blob, ArrayBuffer, ArrayBufferView) Payload in WebSocket; TOTD #186: Custom Text and Binary Payloads using WebSocket. One of the typical use cases for WebSocket is online collaborative games. This Tip Of The Day (TOTD) explains a sample that can be used to build such games easily. The application is a collaborative whiteboard where different shapes can be drawn in multiple colors. The shapes drawn on one browser are automatically drawn on all other peer browsers connected to the same endpoint. The shape, color, and coordinates of the image are transferred using a JSON structure. A browser may opt out of sharing the figures. Alternatively, any browser can send a snapshot of its existing whiteboard to all other browsers. Take a look at this video to understand how the application works and the underlying code. The complete sample code can be downloaded here. The code behind the application is also explained below.

    The web page (index.jsp) has an HTML5 Canvas as shown:

    <canvas id="myCanvas" width="150" height="150" style="border:1px solid #000000;"></canvas>

    and some radio buttons to choose the color and shape. By default, the shape, color, and coordinates of any figure drawn on the canvas are put in a JSON structure and sent as a message to the WebSocket endpoint. The JSON structure looks like:

    { "shape": "square", "color": "#FF0000", "coords": { "x": 31.59999942779541, "y": 49.91999053955078 } }

    The endpoint definition looks like:

    @WebSocketEndpoint(value = "websocket",
                       encoders = {FigureDecoderEncoder.class},
                       decoders = {FigureDecoderEncoder.class})
    public class Whiteboard {

    As you can see, the endpoint has a decoder and an encoder registered that convert JSON to a Figure (a POJO class) and vice versa. The decode method looks like:

    public Figure decode(String string) throws DecodeException {
        try {
            JSONObject jsonObject = new JSONObject(string);
            return new Figure(jsonObject);
        } catch (JSONException ex) {
            throw new DecodeException("Error parsing JSON", ex.getMessage(), ex.fillInStackTrace());
        }
    }

    And the encode method looks like:

    public String encode(Figure figure) throws EncodeException {
        return figure.getJson().toString();
    }

    FigureDecoderEncoder implements both decoder and encoder functionality, but that is purely for convenience; the recommended design pattern is to keep them in separate classes. In certain cases you may even need only one of them.

    On the client side, the Canvas is initialized as:

    var canvas = document.getElementById("myCanvas");
    var context = canvas.getContext("2d");
    canvas.addEventListener("click", defineImage, false);

    The defineImage method constructs the JSON structure shown above and sends it to the endpoint using websocket.send(). An instant snapshot of the canvas is sent using binary transfer with WebSocket. The WebSocket is initialized as:

    var wsUri = "ws://localhost:8080/whiteboard/websocket";
    var websocket = new WebSocket(wsUri);
    websocket.binaryType = "arraybuffer";

    The important part is to set the binaryType property of the WebSocket to arraybuffer; this ensures that any binary transfers over WebSocket use ArrayBuffer, as the default type seems to be Blob. The actual binary data transfer is done using the following:

    var image = context.getImageData(0, 0, canvas.width, canvas.height);
    var buffer = new ArrayBuffer(image.data.length);
    var bytes = new Uint8Array(buffer);
    for (var i = 0; i < bytes.length; i++) {
        bytes[i] = image.data[i];
    }
    websocket.send(bytes);

    This comprehensive sample shows the following features of the JSR 356 API: annotation-driven endpoints; sending and receiving text and binary payloads over WebSocket; encoders/decoders for custom text payloads. In addition, it shows how images can be captured and drawn using HTML5 Canvas in a JSP. How could this be turned into an online game? Imagine drawing a Tic-tac-toe board on the canvas with two players playing and others watching. You could then build access rights and controls within the application itself. Instead of sending a snapshot of the canvas on demand, a new peer joining the game could automatically be sent the current state as well. Do you want to build this game? I built a similar game a few years ago. Does somebody want to rewrite it using the WebSocket API? :-) Many thanks to Jitu and Akshay for helping with the WebSocket internals! Here are some references for you: JSR 356: Java API for WebSocket - Specification (Early Draft) and Implementation (already integrated in GlassFish 4 promoted builds). Subsequent blogs will discuss the following topics (not necessarily in that order): error handling, interface-driven WebSocket endpoints, the Java client API, client and server configuration, security, subprotocols, extensions, and other topics from the API.
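
    One piece the post does not show above is how a Figure received from one browser is relayed to all the other connected browsers. The following is a minimal sketch of that broadcast step, written against the final JSR 356 API (javax.websocket, using @ServerEndpoint rather than the early-draft @WebSocketEndpoint shown above); the Figure POJO and FigureDecoderEncoder are assumed to be the classes described in the post, so this is an illustration rather than the sample's actual source.

    import java.io.IOException;
    import java.util.Collections;
    import java.util.HashSet;
    import java.util.Set;

    import javax.websocket.EncodeException;
    import javax.websocket.OnClose;
    import javax.websocket.OnMessage;
    import javax.websocket.OnOpen;
    import javax.websocket.Session;
    import javax.websocket.server.ServerEndpoint;

    // Sketch of a broadcast endpoint; Figure and FigureDecoderEncoder are the
    // classes described in the post above.
    @ServerEndpoint(value = "/websocket",
                    encoders = {FigureDecoderEncoder.class},
                    decoders = {FigureDecoderEncoder.class})
    public class Whiteboard {

        // All browser sessions currently connected to this endpoint.
        private static final Set<Session> peers =
                Collections.synchronizedSet(new HashSet<Session>());

        @OnOpen
        public void onOpen(Session session) {
            peers.add(session);
        }

        @OnClose
        public void onClose(Session session) {
            peers.remove(session);
        }

        // A Figure decoded from the incoming JSON is re-encoded by the registered
        // encoder and sent to every other connected browser.
        @OnMessage
        public void onFigure(Figure figure, Session sender) throws IOException, EncodeException {
            synchronized (peers) {
                for (Session peer : peers) {
                    if (!peer.equals(sender)) {
                        peer.getBasicRemote().sendObject(figure);
                    }
                }
            }
        }
    }

    The same loop is also the natural place to hook in the game ideas mentioned above, such as access control per player or replaying the current board state to a peer that has just joined.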


  • Sharepoint .PDF contents displaying as 'searchtext.xml' in searches

    - by Green Muffins
    Hi experts, I recently installed iFilter in my SharePoint farm to enable searching the contents of .pdf documents. All went well, except that if I search for the contents of any .pdf file, they appear in the search results with the document title "searchtext.xml", and the link to the document gives a giant page of the .pdf contents in an .xml-looking browser page. :s I have added the .pdf file type to the search, so I am unsure why it is reading them incorrectly. If I search for a .pdf document title such as 'document.pdf' it will display the result as an html page, though the link does lead to a readable .pdf file. Any help?


  • Formatting Google Search Result [closed]

    - by user5775
    Possible Duplicate: What are the most important things I need to do to encourage Google Sitelinks? Hello, I am new to search engine optimization. I am working on customizing how my results appear in Google as well as possible. I have learned about the meta tags that customize the text summary. However, my website has some hierarchical sections. When a result appears for a "tip-of-the-iceberg" page, I would like to show links to its "child" pages. For instance, if you Google "Walmart" you will see the following links listed with the result: Electronics, TV & Video, Departments, Furniture, Toys, Girls, Living Room, Computers. Is there any way I can help Google determine which links to show and the text to display for these child links on my site? Or is this something that Google generates automatically? Thanks!


  • Lotus Notes: Searching email by fields

    - by themel
    I'm using Lotus Notes 8.5.2 in a large corporate deployment. I'm trying to figure out how to search my email in a structured manner, e.g. by specifying criteria on fields. The help seems to suggest that I can use field names in square brackets and a list of operators; e.g. to find all mail where the From field contains John, I'd search for [From] CONTAINS John. However, I can't get this to work - any operator-style query I've tried returns zero documents. "Web-style" queries (e.g. typing John into the search dialog) work, but I'd really prefer a way that would let me search more precisely. Potential issues: I'm assuming that the field names can be taken from the list of things I see when I open a mail and look at its Document Properties. Full-text indexing is turned off for my mailbox, and all my attempts to create my own index have failed. Does anyone have better information on searching by From/date/subject conditions in Notes?


  • Want to know how a particular page can be searched by Google?

    - by Champ
    I want to know what keywords a particular page can be found with on Google. Is there any tool on the web that can give me the keywords for the page I want to find? E.g. if we search for "test" on Google we will find this. Now what do I have to search for (which keywords) to find a particular page, say abc.com/test.php? Is there any tool that can give me those keywords? Sorry if I am not clear with the question.


  • Is having a single `IndexWriter` instance in Lucene a good idea?

    - by Dragos
    I am trying to understand how Lucene should be used. From what I have read, creating an IndexReader is costly, so using a SearcherManager should be the right choice. However, a SearcherManager should be produced by an NRTManager (which, by the way, should replace the IndexWriter for every add or delete operation performed). But in order to have an NRTManager, I should first have an IndexWriter, and here comes my problem. The documentation says: an IndexWriter is thread-safe; the constructor of this class takes a Directory object, so it seems creating an instance should be costly (as in the case of an IndexReader); all changes are buffered and flushed periodically (so they seem to encourage using a single instance). But: the changes, although flushed, will only be visible after commit or close; after finishing making updates (add/delete), the instance should be closed. I also found this: http://stackoverflow.com/questions/5374419/forgot-to-close-the-lucene-indexwriter-after-adding-documents-to-the-index where it is said that not closing a writer might ruin everything. So what am I really supposed to do? Is having a single IndexWriter instance a good idea (make only commits and never close it)? EDIT: What is more, if I use NRTManager, how can I make a commit? Is it even possible?
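
    The constraints above usually point to a single pattern: keep one long-lived IndexWriter, obtain near-real-time searchers from a SearcherManager built on top of it, commit periodically for durability, and close the writer only at application shutdown. The following is a minimal sketch of that arrangement, assuming a Lucene 4.x-era API (constructors and class names have shifted between releases), not a drop-in recipe for any particular version.

    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.document.Document;
    import org.apache.lucene.document.Field;
    import org.apache.lucene.document.TextField;
    import org.apache.lucene.index.IndexWriter;
    import org.apache.lucene.index.IndexWriterConfig;
    import org.apache.lucene.index.Term;
    import org.apache.lucene.search.IndexSearcher;
    import org.apache.lucene.search.SearcherManager;
    import org.apache.lucene.search.TermQuery;
    import org.apache.lucene.search.TopDocs;
    import org.apache.lucene.store.Directory;
    import org.apache.lucene.store.RAMDirectory;
    import org.apache.lucene.util.Version;

    public class SingleWriterSketch {
        public static void main(String[] args) throws Exception {
            Directory dir = new RAMDirectory();
            IndexWriterConfig config =
                    new IndexWriterConfig(Version.LUCENE_40, new StandardAnalyzer(Version.LUCENE_40));

            // One writer for the lifetime of the application.
            IndexWriter writer = new IndexWriter(dir, config);

            // SearcherManager hands out near-real-time searchers on top of the writer.
            SearcherManager searcherManager = new SearcherManager(writer, true, null);

            // Index a document; the change is buffered by the writer.
            Document doc = new Document();
            doc.add(new TextField("title", "hello world", Field.Store.YES));
            writer.addDocument(doc);

            // Make buffered changes visible to new searchers; no commit is needed for this.
            searcherManager.maybeRefresh();

            IndexSearcher searcher = searcherManager.acquire();
            try {
                TopDocs hits = searcher.search(new TermQuery(new Term("title", "hello")), 10);
                System.out.println("hits: " + hits.totalHits);
            } finally {
                searcherManager.release(searcher);
            }

            // Commit periodically (e.g. from a background task) so changes survive a crash;
            // close the writer only when the application shuts down.
            writer.commit();

            searcherManager.close();
            writer.close();
        }
    }

    The key point is that maybeRefresh makes buffered changes searchable without a commit; commit only controls what is durable on disk, which is why keeping the writer open and committing periodically is generally fine.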


  • Issue with text on an Intel 82945G Express Chipset Family

    - by user34681
    I am on Ubuntu 11.04 and the text in all applications often shows visual artifacts. I have installed xserver-xorg-video-intel. I attached an image so you can see it. lshw -c video output:

    *-display
        description: VGA compatible controller
        product: 82945G/GZ Integrated Graphics Controller
        vendor: Intel Corporation
        physical id: 2
        bus info: pci@0000:00:02.0
        version: 02
        width: 32 bits
        clock: 33MHz
        capabilities: msi pm vga_controller bus_master cap_list rom
        configuration: driver=i915 latency=0
        resources: irq:16 memory:dfe00000-dfe7ffff ioport:8800(size=8) memory:e0000000-efffffff memory:dfe80000-dfebffff

    In the text selected in green you can see the words 'Online', 'File' and 'hassle' cut off.


  • Focusing and Selecting the Text in ASP.NET TextBox Controls

    When a browser displays the HTML sent from a web server it parses the received markup into a Document Object Model, or DOM, which models the markup as a hierarchical structure. Each element in the markup - the <form> element, <div> elements, <p> elements, <input> elements, and so on - is represented as a node in the DOM and can be programmatically accessed from client-side script. What's more, the nodes that make up the DOM have functions that can be called to perform certain behaviors; which functions are available depends on what type of element the node represents. One function common to almost all node types is focus, which gives keyboard focus to the corresponding element. The focus function is commonly used in data entry forms, search pages, and login screens to put the user's keyboard cursor in a particular textbox when the web page loads, so that the user can start typing his search query or username without having to first click the textbox with his mouse. Another useful function is select, which is available for <input> and <textarea> elements and selects the contents of the textbox. This article shows how to call an HTML element's focus and select functions. We'll look at calling these functions directly from client-side script as well as how to call them from server-side code. Read on to learn more!


  • Distorted text in programs

    - by Teneff
    I've installed Ubuntu 11 with GNOME, and at some point the text in programs becomes unreadable, like this. It's not only the text; even the desktop background looks awful. I've tried adding a section to xorg.conf, but it didn't help.

    Section "Device"
        Identifier "g33/X3000"
        Driver "intel"
        BusID "PCI:0:2:0"
        Option "ModeDebug" "on"
        Option "MonitorLayout" "LCD,VGA"
        Option "DevicePresence" "true"
    EndSection

    And this is what lshw returns about the VGA:

    *-display
        description: VGA compatible controller
        product: 82945G/GZ Integrated Graphics Controller
        vendor: Intel Corporation
        physical id: 2
        bus info: pci@0000:00:02.0
        version: 02
        width: 32 bits
        clock: 33MHz
        capabilities: msi pm vga_controller bus_master cap_list rom
        configuration: driver=i915 latency=0
        resources: irq:16 memory:dfe00000-dfe7ffff ioport:8800(size=8) memory:e0000000-efffffff memory:dfe80000-d$


  • How to remove text from file icons?

    - by Jesse
    I am fairly new to Ubuntu (Linux). I noticed that creating a .java or .py file gives me an icon; however, once I start writing code the thumbnail is replaced by a preview of the text. Is there a way to disable this silly feature? When I installed Linux Mint in a VM I was able to accomplish this by unchecking "Show text in icon". I know they use Nemo as their file manager. I read this thread and it did not work: How to stop Nautilus from creating thumbnails of specific file types?


  • Link two or more text boxes in Visio

    - by Dan
    I am working on creating a template in Visio 2007 (Professional). Each page should show a document number and a revision number (two text boxes). I would like to set up the template so that entering or changing text in one of these boxes on one page automatically updates the equivalent text boxes on all other pages. Is there an easy way to link two (or more) text boxes so they show the same data (mirror each other)? I've looked into creating a ShapeData set and then using the ShapeData field in place of each box, but this would require training others to access and adjust the ShapeData field. In short, I want what the question "Changing Text in Visio Org Chart Shape Changes Multiple Shapes' Text" was attempting to solve.


  • Simplifying data search using .NET

    - by Peter
    An example on the asp.net site shows how to use LINQ to create a search feature for a music album site using MVC. The code looks like this:

    public ActionResult Index(string movieGenre, string searchString)
    {
        var GenreLst = new List<string>();
        var GenreQry = from d in db.Movies
                       orderby d.Genre
                       select d.Genre;
        GenreLst.AddRange(GenreQry.Distinct());
        ViewBag.movieGenre = new SelectList(GenreLst);

        var movies = from m in db.Movies select m;

        if (!String.IsNullOrEmpty(searchString))
        {
            movies = movies.Where(s => s.Title.Contains(searchString));
        }
        if (!string.IsNullOrEmpty(movieGenre))
        {
            movies = movies.Where(x => x.Genre == movieGenre);
        }

        return View(movies);
    }

    I have seen similar examples in other tutorials and I have tried them in a real-world business app that I develop and maintain. In practice this pattern doesn't seem to scale well, because as the search criteria expand I keep adding more and more conditions, which looks and feels unpleasant and repetitive. How can I refactor this pattern? One idea I have is to create a "searchable" column in every table, which could be a computed column that concatenates all the data from the different columns (SQL Server 2008). So instead of filtering on movie genre and title it would be something like:

    if (!String.IsNullOrEmpty(searchString))
    {
        movies = movies.Where(s => s.SearchColumn.Contains(searchString));
    }

    What are the performance/design/architecture implications of doing this? I have also tried using stored procedures with dynamic queries, but then I have just moved the ugliness to the database. E.g.:

    CREATE PROCEDURE [dbo].[search_music]
        @title as varchar(50),
        @genre as varchar(50)
    AS
        -- set the variables to null if they are empty
        IF @title = '' SET @title = null
        IF @genre = '' SET @genre = null

        SELECT m.*
        FROM view_Music as m
        WHERE (title = @title OR @title IS NULL)
          AND (genre LIKE '%' + @genre + '%' OR @genre IS NULL)
        ORDER BY Id desc
        OPTION (RECOMPILE)

    Any suggestions? Tips?
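
    The repetitive part above is the per-parameter empty check in front of each Where call. One way to factor that out, independently of LINQ or SQL, is to keep the optional criteria in a small map of predicate factories and apply only the ones whose inputs were actually supplied, so adding a criterion means adding one map entry rather than another if-block. Below is a small sketch of that composition idea, written in Java with a hypothetical Movie record purely for illustration; the LINQ version would chain Where calls over the IQueryable in the same way.

    import java.util.ArrayList;
    import java.util.LinkedHashMap;
    import java.util.List;
    import java.util.Map;
    import java.util.function.Function;
    import java.util.function.Predicate;

    public class MovieSearch {

        // Hypothetical stand-in for the Movies table used in the question.
        record Movie(String title, String genre) {}

        // One predicate factory per searchable field; adding a new criterion means
        // adding a single entry here instead of another if-block in the action method.
        private static final Map<String, Function<String, Predicate<Movie>>> CRITERIA =
                new LinkedHashMap<>();
        static {
            CRITERIA.put("title", value -> m -> m.title().toLowerCase().contains(value.toLowerCase()));
            CRITERIA.put("genre", value -> m -> m.genre().equalsIgnoreCase(value));
        }

        // Combines the criteria whose parameters were supplied (null/empty values are
        // skipped) into one predicate and filters the list once.
        static List<Movie> search(List<Movie> movies, Map<String, String> params) {
            Predicate<Movie> combined = m -> true;
            for (Map.Entry<String, Function<String, Predicate<Movie>>> criterion : CRITERIA.entrySet()) {
                String value = params.get(criterion.getKey());
                if (value != null && !value.isEmpty()) {
                    combined = combined.and(criterion.getValue().apply(value));
                }
            }
            List<Movie> result = new ArrayList<>();
            for (Movie movie : movies) {
                if (combined.test(movie)) {
                    result.add(movie);
                }
            }
            return result;
        }

        public static void main(String[] args) {
            List<Movie> movies = List.of(
                    new Movie("Ghostbusters", "Comedy"),
                    new Movie("Gone with the Wind", "Drama"),
                    new Movie("Ghost", "Drama"));

            // Only the genre criterion is applied; the missing title adds no filter.
            System.out.println(search(movies, Map.of("genre", "Drama")));
        }
    }

    Whether the computed "searchable" column is worth it is a separate trade-off: it simplifies the query, but it ties search semantics to the table definition and makes per-field matching (exact genre versus substring title) harder to express.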


  • Email Discovery from Fairly Large Mailbox (15gig) Exchange 2003.

    - by nysingh
    I have a request from our legal team to search a user's mailbox. The mailbox is 15 GB and it is on Exchange 2003. I am trying to run Windows Desktop Search and Google Desktop. I have gotten them to index the mailbox, but getting the results into a folder to back up to CD is proving a bit difficult: Windows Desktop Search and Google Desktop do not allow you to copy results to another folder. Can anyone point me in the right direction? What is the best way to index and copy the results of a PST, mailbox, or EDB file? What are the best discovery methods? Thanks


  • Sharepoint managed Properties

    - by paulie
    Originally posted on StackOverflow, and edited for clarity. I have a custom Content Type inside a list that has over 30 items (which were uploaded via DockIt), and I have added several managed properties mapped to crawled properties in the SSP. All of them work except one. The column "Synopsis" is a multiline field with no limit on its length. It appears as a crawled property "Synopsis" and is mapped to a managed property 'asynop'. On the Advanced Search page it is added as a property and is searchable; however, it only returns some matching records (if any). I manually created an entry, ran the crawl, and was able to search for it. I edited an existing entry, ran the crawl (full and incremental), and it still only returned the manually entered entry. If I enter the search term in the search box directly as "asynop:fatigue", then all the correct results appear. Why is this happening? And could it please stop?


  • Writing long line support for text editor

    - by Mathematician82
    I know that some text editors have problems showing long lines (https://bugzilla.gnome.org/show_bug.cgi?id=172099). What is the best way to fix the bug, or are the options equally good? (1) Modify the GTK+ source code and add support for long lines. (2) Modify the text editor's source code so that it does not use GTK+ when it meets a long line. (3) Split the long lines into parts (maybe with cut in Bash). I'm just a junior programmer, so I don't know what people do when they meet a bug that lies in a library they use.


  • Somehow Google considers a properly 301'd URL as 200 and is still indexing the new content under the old page?

    - by user2178914
    We redirected all the old URLs to the new ones properly using htaccess. The problem is that Google somehow is still finding content on the old page (which it shouldn't) and stores it in the cache under the old URL rather than the new one. For example:

    Old page: http://www.natures-energies.com/iching.htm
    New page: http://www.natures-energies.com/index.php?option=com_content&view=article&id=760

    If you type the old URL into the browser it redirects. If you fetch the old URL as Googlebot in Webmaster Tools the header says 301/permanently redirected. If I try to crawl as any other bot it still says 301 redirected. Even if you click the old link in Google it redirects to the new URL. Only in its cache does it show the old URL, and moreover it shows the new content in it! I am stumped as to how Google manages to grab the new content and put it under the old URL instead of the new one! One more interesting thing is that if I try a cache lookup for the new page, it shows a cache of the new content with the old URL! Any help would be appreciated. I am at the end of my wits; I think I have tried almost everything. Is there anything that I'm missing? You can use this search to find the old URLs; maybe you'll spot some patterns that I missed: site:www.natures-energies.com inurl:htm -inurl:https|index


  • Transferring users and search engines to a new domain

    - by eftpotrm
    I've been asked to take over the maintenance of an existing site that's being reworked. At present it serves localised content for several languages, but via a fairly unhelpful mechanism which means that, essentially, search engines only have it indexed in English and any deep links will de facto appear in English as well. So, new localised sites are being built under separate domains - not just for this; there are other benefits. What we're then looking to do is redirect users correctly to the new site, where appropriate. For humans this isn't a problem: we can send them through a gateway page on their first site visit, grab their language preference and put it in a cookie, then redirect them to the new localised content as soon as it's available. For search engines this isn't so good... In principle I'm happy to simply bypass the gateway page and redirect known spiders to the new site, but this means we're serving radically different content (a different URL, even!) to human and robot users. Won't this therefore be regarded as cloaking and cause us grief? Does anyone know a better way to handle this?

