Search Results

Search found 1124 results on 45 pages for 'indexing'.

Page 3 of 45

  • Programmatically prevent Vista desktop search (WDS) from indexing PST files placed on mapped network drives

    - by Jao
    Hi! After several days and multiple attempts I haven't found a complete solution to this problem. What I have searched and investigated so far: direct access to the registry (HKLM\SOFTWARE\Microsoft\Windows Search\CrawlScopeManager\Windows\SystemIndex\WorkingSetRules, HKCU\Software\Microsoft\Windows Search\Gather\Windows\SystemIndex\Protocols\Mapi, HKLM\SOFTWARE\Microsoft\Windows Search\Gather\Windows\SystemIndex\Sites\ and other keys); the Windows Search 3.x interfaces such as ISearchManager via Microsoft.Search.Interop; the Microsoft.Office.Interop.Outlook classes NameSpace and Store; and AD policies (useless, no effect :( ). Preferred technologies: VB.NET, C#. The solution must be deployed within a large organization (about 5,000 workstations). Any ideas? Thanks in advance.

    Read the article
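
    For reference, the crawl-scope route mentioned above (ISearchCrawlScopeManager via Microsoft.Search.Interop) looks roughly like the sketch below. This is a hedged illustration only: the file:///P:\mail\*.pst pattern is a made-up example path, the marshaled signature of AddUserScopeRule can differ slightly between interop assembly versions, and PST files crawled through the MAPI protocol handler rather than the file system may ignore a file-scope exclusion entirely.

        // Hedged sketch: exclude a mapped-drive PST pattern from the Windows Search
        // crawl scope. Requires a COM reference to the Search interop assembly
        // (Microsoft.Search.Interop) and sufficient rights on the workstation.
        using Microsoft.Search.Interop;

        class CrawlScopeExclusion
        {
            static void Main()
            {
                CSearchManager manager = new CSearchManager();
                ISearchCatalogManager catalog = manager.GetCatalog("SystemIndex");
                ISearchCrawlScopeManager scope = catalog.GetCrawlScopeManager();

                // Hypothetical path: P: stands in for the mapped network drive holding the PSTs.
                // Arguments: URL pattern, fInclude = 0 (exclude), fOverrideChildren = 1, fFollowFlags = 0.
                scope.AddUserScopeRule(@"file:///P:\mail\*.pst", 0, 1, 0);
                scope.SaveAll();
            }
        }

    For a rollout on the scale described, something like this would typically be wrapped in a small deployable executable or pushed as the equivalent registry rules through Group Policy Preferences.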

  • Text indexing algorithm

    - by Majd
    I am writing a C# WinForms application for an archiving system. The system has a huge database where some tables will have more than 1.5 million records. What I need is an algorithm that indexes the content of these records; mainly, the files are Microsoft Office, PDF and TXT documents. Can anyone help, whether with ideas, links, books or code? I'd appreciate it :) For example: if I search for the word "international" in a certain folder in the database, I should get all the files that contain that word, ordered by some criterion such as relevance, modification date, etc.

    Read the article
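
    One common way to meet this kind of requirement is a dedicated full-text engine rather than hand-rolled database indexes; in the .NET world that usually means Lucene.Net. The sketch below is only an illustration, assuming the Lucene.Net 3.x package and that the text has already been extracted from the Office/PDF/TXT files by some other component (IFilter, a PDF library, etc.); member casing can vary slightly between Lucene.Net releases.

        // Hedged sketch: index extracted document text with Lucene.Net, then run a
        // relevance-ranked search for a single term.
        using Lucene.Net.Analysis.Standard;
        using Lucene.Net.Documents;
        using Lucene.Net.Index;
        using Lucene.Net.QueryParsers;
        using Lucene.Net.Search;
        using Lucene.Net.Store;
        using Version = Lucene.Net.Util.Version;

        class ArchiveIndexer
        {
            static void Main()
            {
                var dir = FSDirectory.Open(@"C:\archive-index");   // hypothetical index location
                var analyzer = new StandardAnalyzer(Version.LUCENE_30);

                using (var writer = new IndexWriter(dir, analyzer, true, IndexWriter.MaxFieldLength.UNLIMITED))
                {
                    var doc = new Document();
                    doc.Add(new Field("path", @"\\archive\folder1\report.pdf",
                                      Field.Store.YES, Field.Index.NOT_ANALYZED));
                    doc.Add(new Field("content", "...extracted text of the document...",
                                      Field.Store.NO, Field.Index.ANALYZED));
                    writer.AddDocument(doc);   // repeat for each record
                    writer.Commit();
                }

                // Query: results come back ordered by relevance score.
                using (var searcher = new IndexSearcher(dir, true))
                {
                    var parser = new QueryParser(Version.LUCENE_30, "content", analyzer);
                    TopDocs hits = searcher.Search(parser.Parse("international"), 10);
                    foreach (ScoreDoc hit in hits.ScoreDocs)
                        System.Console.WriteLine(searcher.Doc(hit.Doc).Get("path"));
                }
            }
        }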

  • SQL Server Indexing

    - by durilai
    I am trying to understand what is going on with CREATE INDEX internally. When I create a NONCLUSTERED index it shows up as an INSERT in the execution plan, and also when I pull the query text:

        DECLARE @sqltext VARBINARY(128)
        SELECT @sqltext = sql_handle FROM sys.sysprocesses s WHERE spid = 73 -- 73 is the process creating the index
        SELECT TEXT FROM sys.dm_exec_sql_text(@sqltext)
        GO

    This shows: insert [dbo].[tbl] select * from [dbo].[tbl] option (maxdop 1). The execution plan is consistent with this. Any info is appreciated.

    Read the article

  • Python: Indexing list for element in nested list

    - by aquateenfan
    I know what I'm looking for; I want Python to tell me which sub-list it's in. Here's some pseudocode:

        item = "a"
        nested_list = [["a", "b"], ["c", "d"]]
        list.index(item)  # obviously this doesn't work here

    I would want Python to return 0, because "a" is an element of the first sub-list in the bigger list. I don't care which sub-element it is, and I don't care if there are duplicates, e.g. a sub-list like ["a", "b", "a"] should return the same thing as the example above. Sorry if this is a dumb question; I'm new to programming.

    Read the article

  • Google indexing a staging server

    - by Eric
    A site that I was working on is resolving to a staging server through Google. I've removed all the information. How long does it take for Google to update its index so that it no longer shows up? Is there anyone I can contact to move this along?

    Read the article

  • Indexing community images for Google Image Search

    - by Vittorio Vittori
    Hi, I'm trying to understand how to make my site reachable by the Google Image Search crawler. I like last.fm's solution, and I thought of using a technique like theirs to let Google find the artist images on their pages. When I look for an artist and search on Google Image Search, as often as not I find an image from a last.fm artist page. For example, if I search for the band Pure Reason Revolution, it brings me to the artist's image page, http://www.last.fm/music/Pure+Reason+Revolution/+images/4284073. If I take a look at the image file, I can see it is served as http://userserve-ak.last.fm/serve/500/4284073/Pure+Reason+Revolution+4.jpg, so trying to understand how the service works, the URL seems to break down as: http://userserve-ak.last.fm/serve/ (the server that serves the images), 500/ (the selected image size), 4284073/ (the image id in the database), and Pure+Reason+Revolution+4.jpg (the image name). I find it hard to believe the real filename for the image is Pure+Reason+Revolution+4.jpg, because of overwrite problems when a user uploads it; in fact, if I type http://userserve-ak.last.fm/serve/500/4284073.jpg I probably get the real image location and filename. With this technique the image is highly reachable by search engines and easily archived. My question is: does a guide or tutorial exist for this kind of technique, or something similar?

    Read the article

  • Best indexing strategy for several varchar columns in Postgres

    - by Corey
    I have a table with 10 columns that need to be searchable (the table itself has about 20 columns). The user will enter query criteria for at least one of the columns, but possibly all ten, and all non-empty criteria are then combined into an AND condition. Suppose the user provided non-empty criteria for column1, column4 and column8; the query would be:

        select * from the_table
        where column1 like '%column1_query%'
          and column4 like '%column4_query%'
          and column8 like '%column8_query%'

    So my question is: am I better off creating one index with 10 columns, or 10 indexes with one column each? Or do I need to find out which sets of columns are queried together frequently and create indexes for them (an index on columns 1, 4 and 8 in the case above)? If my understanding is correct, a single 10-column index would only work effectively if all 10 columns are in the condition. I'm open to any suggestions; additionally, the row count of the table is only expected to be around 20-30K, but I want to make sure any and all searches on the table are fast. Thanks!

    Read the article

  • Indexing datetime in MySQL

    - by User1
    What is the best way to index a datetime in MySQL? Which method is faster: storing the datetime as a double (via a Unix timestamp), or storing it as a DATETIME? The application generating the timestamp data can output either format. Unfortunately, the datetime will be a key for this particular data structure, so speed matters. Also, is it possible to create an index on an expression? For example, an index on UNIX_TIMESTAMP(mydate), where mydate is a field in a table and UNIX_TIMESTAMP is a MySQL function. I know that Postgres can do it; I'm thinking there must be a way in MySQL as well.

    Read the article

  • Delphi SetLength Custom Indexing

    - by Andreas Rejbrand
    In Delphi, it is possible to create an array of the type var Arr: array[2..N] of MyType; which is an array of N - 1 elements indexed from 2 to N. If we instead declare a dynamic array var Arr: array of MyType and later allocate N - 1 elements by means of SetLength(Arr, N - 1) then the elements will be indexed from 0 to N - 2. Is it possible to make them indexed from 2 to N (say) instead?

    Read the article

  • Indexing Bilingual Sites

    - by Affar
    Hi all, I have a site that supports two languages. The user changes the language by clicking a language link that changes his session language from A to B and returns him to the same page. The problem I am facing is that Google doesn't index the other language, since it sees that the language link is the same as the page link with only a change in the parameters. So is there a way to inform Google of the language, or should I put the language in the URL, i.e. www.example.com/A/home?

    Read the article

  • How to optimize indexing of a large number of DB records using Zend_Lucene and Zend_Paginator

    - by jdichev
    I have a script, deployed and run via cron on a host, that indexes all the records in a database table; the index is later used both for the front end of the site and for backend operations. After the operation, the index is about 3-4 MB. The problem is that it takes a lot of resources (CPU at 30+ and a good chunk of memory) and slows the machine down. My question is how to optimize the operation described below. First, a select query is built using the Zend Framework API; this query is then passed to a paginator factory that returns a paginator, which I use to limit the number of items being indexed at once rather than iterating over too many items. The script iterates over the current items in the paginator object using a foreach loop until it reaches the end, then fetches the items for the next page and starts again. I suspect this overhead is caused by Zend_Lucene, but I have no idea how it could be improved.

    Read the article

  • Does indexing affect only the WHERE clause?

    - by andre matos
    If I have something like:

        CREATE INDEX idx_myTable_field_x ON myTable USING btree (field_x);

        SELECT COUNT(field_x), field_x
        FROM myTable
        GROUP BY field_x
        ORDER BY field_x;

    Imagine myTable has around 500,000 rows and most field_x values are unique. Since I don't use any WHERE clause, will the created index have any effect on my query at all? Edit: I'm asking because I don't see any relevant difference in query times before and after creating the index; they always take about 8 seconds (which, of course, is too much!). Is this behaviour expected?

    Read the article

  • Reading huge free-text docs from one file for Lucene indexing

    - by Jun
    I have heaps of free-text news docs in one big file. The structure of each news doc is like:

        (Header line) Category, Doc1, Date (day, month, year)
        (body text)
        ...
        (Header line) Category, Doc2, Date (day, month, year)
        (body text)
        ...

    Extracting each doc from the big file first costs too much time and is not efficient, so I decided to read the file line by line and feed the information to Lucene at the same time. I'm writing C# code to index each doc in Lucene like this:

        StreamReader sr = new StreamReader(file);
        string line = "";
        while ((line = sr.ReadLine()) != null)
        {
            // How can I tell a doc header line apart from a text line, and collect the
            // metadata and all the text lines of a doc for Lucene to index, iterating
            // this process till the end of the file?
        }

    Also, the text comes from OCR, which does not give correct line separation, and captions are mixed in with the content text. With thanks.

    Read the article
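
    A rough sketch of the line-classification part follows, on the assumption that a header line can be recognised by its Category, DocName, Date shape (two commas and a trailing four-digit year) and that no body line looks like that; with OCR noise the pattern will certainly need tuning, and indexDoc stands in for whatever call adds the finished document to the Lucene index.

        // Hedged sketch: stream the big file once, cut it into documents at header
        // lines, and hand each completed document to an indexing callback.
        using System;
        using System.IO;
        using System.Text;
        using System.Text.RegularExpressions;

        class NewsFileSplitter
        {
            // Assumed header shape: "Category, DocName, 14 March 2010"-ish.
            // Tune this for the real OCR output.
            static readonly Regex Header = new Regex(
                @"^\s*(?<cat>[^,]+),\s*(?<doc>[^,]+),\s*(?<date>.*\d{4})\s*$",
                RegexOptions.Compiled);

            static void Split(string file, Action<string, string, string, string> indexDoc)
            {
                string cat = null, doc = null, date = null;
                var body = new StringBuilder();

                using (var sr = new StreamReader(file))
                {
                    string line;
                    while ((line = sr.ReadLine()) != null)
                    {
                        Match m = Header.Match(line);
                        if (m.Success)
                        {
                            // A new header closes the previous document.
                            if (doc != null)
                                indexDoc(cat, doc, date, body.ToString());
                            cat  = m.Groups["cat"].Value.Trim();
                            doc  = m.Groups["doc"].Value.Trim();
                            date = m.Groups["date"].Value.Trim();
                            body.Length = 0;
                        }
                        else
                        {
                            body.AppendLine(line);
                        }
                    }
                }

                if (doc != null)   // flush the last document in the file
                    indexDoc(cat, doc, date, body.ToString());
            }
        }

    The callback would typically build a Lucene Document with category, name, date and body fields and add it to a single IndexWriter that is committed once at the end.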

  • MySQL: optimization of table (indexing, foreign key) with no primary keys

    - by Haradzieniec
    Each member has 0 or more orders. Each order contains at least 1 item. memberid is a varchar, not an integer; that's how it is (please do not mention that's not very good, I can't change it). So, there are 3 tables: members, orders and order_items. Orders and order_items are below:

        CREATE TABLE `orders` (
          `orderid` INT(11) UNSIGNED NOT NULL AUTO_INCREMENT,
          `memberid` VARCHAR( 20 ),
          `Time` TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP ,
          `info` VARCHAR( 3200 ) NULL ,
          PRIMARY KEY (orderid) ,
          FOREIGN KEY (memberid) REFERENCES members(memberid)
        ) ENGINE = InnoDB;

        CREATE TABLE `order_items` (
          `orderid` INT(11) UNSIGNED NOT NULL,
          `item_number_in_cart` tinyint(1) NOT NULL , -- 5 items in cart = 5 rows
          `price` DECIMAL (6,2) NOT NULL,
          FOREIGN KEY (orderid) REFERENCES orders(orderid)
        ) ENGINE = InnoDB;

    So the order_items table looks like this (orderid - item_number_in_cart - price):

        ...
        1000456 - 1 - 24.99
        1000456 - 2 - 39.99
        1000456 - 3 - 4.99
        1000456 - 4 - 17.97
        1000457 - 1 - 20.00
        1000458 - 1 - 99.99
        1000459 - 1 - 2.99
        1000459 - 2 - 69.99
        1000460 - 1 - 4.99
        ...

    As you can see, the order_items table has no primary key (and I think there is no sense in creating an auto_increment id for this table, because when we extract data we always extract it as a whole block with WHERE orderid='1000456' ORDER BY item_number_in_cart ASC; an id wouldn't be helpful in queries). Once data is inserted into order_items, it is never UPDATEd, just SELECTed. The questions are: I think it's a good idea to put an index on item_number_in_cart; could anybody please confirm that? And is there anything else I should do with order_items to increase performance, or does it look good as it is? I could be missing something because I'm a newbie. Thank you in advance.

    Read the article

  • How to navigate to a UITableView position by the cell's title using indexing

    - by BarryK88
    I'm using a UITableView filled with around 200 cells containing a title, subtitle and thumbnail image. I want to have a selection method just like in Apple's Contacts app, where you can select a character from the alphabet. I'm at the point where I've drawn the selection interface (A, B, C etc.), and via its delegate I'm retrieving the respective index and title (i.e. A = 1, B = 2, C = 3 etc.). Now I want to navigate to the first cell whose title starts with the selected index character, just like the Contacts app. Can someone give me a direction for how to implement such functionality? The method in question is:

        - (NSInteger)tableView:(UITableView *)tableView sectionForSectionIndexTitle:(NSString *)title atIndex:(NSInteger)index

    I filled my section index by means of:

        - (NSArray *)sectionIndexTitlesForTableView:(UITableView *)tableView {
            if (searching)
                return nil;
            NSMutableArray *tempArray = [[NSMutableArray alloc] init];
            [tempArray addObject:@"A"];
            [tempArray addObject:@"B"];
            [tempArray addObject:@"C"];
            [tempArray addObject:@"D"];
            [tempArray addObject:@"E"];
            [tempArray addObject:@"F"];
            [tempArray addObject:@"G"];
            [tempArray addObject:@"H"];
            [tempArray addObject:@"I"];
            [tempArray addObject:@"J"];
            [tempArray addObject:@"K"];
            [tempArray addObject:@"L"];
            [tempArray addObject:@"M"];
            [tempArray addObject:@"N"];
            [tempArray addObject:@"O"];
            [tempArray addObject:@"P"];
            [tempArray addObject:@"Q"];
            [tempArray addObject:@"R"];
            [tempArray addObject:@"S"];
            [tempArray addObject:@"T"];
            [tempArray addObject:@"U"];
            [tempArray addObject:@"V"];
            [tempArray addObject:@"W"];
            [tempArray addObject:@"X"];
            [tempArray addObject:@"Y"];
            [tempArray addObject:@"Z"];
            return tempArray;
        }

    Read the article

  • Utility for indexing a directory?

    - by achacha
    Here is what I am trying to do: I have a directory (with sub-directories) of source files, and I need to index them so I can find files fast (find as I type) and open them for compare/analysis. I don't want it to scan the content, just build a filename index for quick lookup. I do this when trying to determine whether a class exists in a given tree (we maintain directory trees for each release, and each has a lot of files), and sometimes I want to quickly check files to see how something was implemented, etc. Most of these directories are on remote servers (sometimes on the other side of the world) or on a VM (which is on a server far away), so I only want to read the directory trees once, which is why running find every time is way too slow, and doing 'find . foo.txt' and then searching that is a bit tedious. It's kind of like how "Find Resource" works in Eclipse after it indexes all files, but it's a bit of a chore to import/remove directories into Eclipse every time, and Eclipse is also very slow when dealing with remote volumes. Any suggestions are appreciated :)

    Read the article
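
    Since no platform is specified, here is a minimal sketch of the roll-your-own option in C# (the class name and paths are hypothetical): walk the tree once, cache the file paths in memory, and filter the cache on each keystroke. A filename-only index for even a large source tree is small enough to keep in RAM.

        // Hedged sketch: one-pass filename index with find-as-you-type filtering.
        using System;
        using System.Collections.Generic;
        using System.IO;
        using System.Linq;

        class FileNameIndex
        {
            private readonly List<string> _paths = new List<string>();

            public FileNameIndex(string root)
            {
                // Single pass over the (possibly remote) tree. Note that an unreadable
                // subdirectory will throw here; a production version would recurse
                // manually and swallow UnauthorizedAccessException.
                foreach (string path in Directory.EnumerateFiles(root, "*", SearchOption.AllDirectories))
                    _paths.Add(path);
            }

            // Case-insensitive substring match on the file name as the user types.
            public IEnumerable<string> Find(string fragment)
            {
                return _paths.Where(p =>
                    Path.GetFileName(p).IndexOf(fragment, StringComparison.OrdinalIgnoreCase) >= 0);
            }
        }

        // Usage (hypothetical path):
        //   var idx = new FileNameIndex(@"\\server\release-1.2\src");
        //   foreach (var hit in idx.Find("foo")) Console.WriteLine(hit);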

  • Help on MySQL table indexing when GROUP BY is used in a query

    - by Silver Light
    Thank you for your attention. There are two InnoDB tables:

        Table authors:
          id INT
          nickname VARCHAR(50)
          status ENUM('active', 'blocked')
          about TEXT

        Table books:
          author_id INT
          title VARCHAR(150)

    I'm running a query against these tables to get each author and a count of the books he has:

        SELECT a.*, COUNT( b.id ) AS book_count
        FROM authors AS a, books AS b
        WHERE a.status != 'blocked'
          AND b.author_id = a.id
        GROUP BY a.id
        ORDER BY a.nickname

    This query is very slow (it takes about 6 seconds to execute). I have an index on books.author_id and it works perfectly, but I do not know how to create an index on the authors table so that this query could use it. Here is what the current EXPLAIN looks like:

        id  select_type  table  type  possible_keys               key            key_len  ref   rows  Extra
        1   SIMPLE       a      ALL   PRIMARY,id_status_nickname  NULL           NULL     NULL  3305  Using where; Using temporary; Using filesort
        1   SIMPLE       b      ref   key_author_id               key_author_id  5        a.id  2     Using where; Using index

    I've looked at the MySQL manual on optimizing queries with GROUP BY, but could not figure out how to apply it to my query. I'd appreciate any help and hints on this: what should the index structure be so that MySQL can use it?

    Read the article

  • Can someone help with indexing issues

    - by user631249
    Hi all, I am on the first page of Google for keywords concerned with MOVING; however, I can't seem to break into the carpet cleaning rankings. I have made changes and additions which haven't been indexed yet. Should I wait for the next crawl, or can someone please give me pointers on the carpet cleaning indexing? Also, I have 53 pages submitted and only 38 indexed; where could the problem be? Is there software to check for indexing hiccups? Thanks.

    Read the article

  • WordPress feeds not being indexed in Webmaster Tools

    - by jogesh_p
    I don't have much experience with Webmaster Tools, I only know the basics, and I am not from an SEO background, but I just want to know: why are my blog's RSS feeds not being indexed according to Webmaster Tools? I also want to know about the crawl stats: are they good or bad? Is submitting the RSS feed to Webmaster Tools good for getting the pages indexed or not? I have also submitted the sitemap. The link to the website is Webtech Eleven.

    Read the article

  • Google indexing and ranking a custom domain served by Google App Engine

    - by Hugues
    I have a website at http://www.plugimmo.com, which is a custom domain served by Google App Engine at http://plugimmo.appspot.com. For a while I have been trying to optimise Google indexing and ranking with no success: searching Google for the keywords in the title of my home page does not retrieve my website at all, not even in the first 1,000 results. When checking Google's cached version (cache:www.plugimmo.com), it says the cached version is the one of 20-Aug-12 for "plugimmo.appspot.com". There seem to be several issues: (1) the cached version is really old: I have made a lot of changes since 20-Aug-12 and I have seen the Googlebot crawling my site several times; (2) the cached version is for "plugimmo.appspot.com"; (3) Google Webmaster Tools shows 0 pages indexed for www.plugimmo.com, which can't be right given the number of changes I have made since then. My questions are therefore: Why is the cached version so old, although I have seen the Googlebot crawl the site many times since 20-Aug-12? Is there a problem with indexing a custom domain served by Google App Engine? Why does Webmaster Tools show 0 pages indexed although new pages have been crawled and no indexing errors have been reported? Also, the site has been developed with Google Web Toolkit, and I have followed the guidelines on crawling Ajax sites; the home page, when crawled by a robot, is redirected to http://www.plugimmo.com/HomeSnapshot.html. Thanks a lot for your help! Hugues

    Read the article

  • Odd results when searching for numbers using IXSSO.Query

    - by Vic
    Hi, from classic ASP on Windows 2008, using IXSSO.Query: when I search for a string of numbers, for example 10000000001, I receive results that also include variations of it, like 10000000002, 10000000003 and so on. If I change the first digit so the search string is 20000000001, I don't get anything. If I keep moving the last digit of my first example to the left, I keep getting hits until I reach the halfway point, where I get no results! In other words, 10000020000 returns results like the first example, but 10000200000 does not. It sounds to me like it's matching on the first six characters and ignoring the rest... Here are the relevant parts of the code:

        Set oQuery = Server.CreateObject("IXSSO.Query")
        oQuery.Catalog = SEARCH_CATALOG
        oQuery.Query = "@all " & searchstr
        Set oRS = oQuery.CreateRecordset("nonsequential")

    Anyone got any ideas and/or suggestions? Thanks, Vic.

    Read the article
