Search Results

Search found 4711 results on 189 pages for 'documents'.

  • Google Translation API not working for even one page long documents

    - by Saubhagya
    I'm using the Google Translation API to translate text from Simplified Chinese to English in my C# program. The problem is that if the text is small (around one line) the API is able to translate it, but if the text is larger (more than 3 lines) it gives an exception saying "The remote server returned an unexpected response: (414) Request-URI Too Large.". However, if I use translate.google.com in my browser it works fine. Please tell me how I can process large documents with the Google Translate API in my desktop application written in C#.
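
    A minimal sketch of the usual workaround, written in Python for illustration: split the text into pieces small enough to stay under the URI length limit and translate each piece separately. The translate_chunk() helper and the 1500-character limit are assumptions, standing in for whatever call and limit the actual client uses.

        def translate_chunk(text):
            # Hypothetical wrapper around the single-request translation call.
            raise NotImplementedError

        def translate_large(text, max_chars=1500):
            # Split on line breaks so each request stays well under the URI limit.
            chunks, current = [], ""
            for line in text.splitlines(keepends=True):
                if current and len(current) + len(line) > max_chars:
                    chunks.append(current)
                    current = ""
                current += line
            if current:
                chunks.append(current)
            # Translate piece by piece and stitch the results back together.
            return "".join(translate_chunk(chunk) for chunk in chunks)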

  • "Error opening associated documents" message when loading VS

    - by kumar
    When loading up a solution in VS2008 I get this message: "An error was encountered while opening associated documents the last time this solution was loaded. Document load is being skipped during this solution load in order to avoid that error." It shut down immediately the first time I opened it. The next time I opened it, VS popped up a message box but did not shut down at first; however, it did shut down when I clicked a usercontrol or ASPX page. How can I find which document is causing the problem? Thanks...

  • JSON documents and SQL database tables

    - by Sharmi
    Do JSON documents in RavenDB cost more than SQL Server tables in terms of storage and query costs? And for centralized access, which one is better? What are the disadvantages of NoSQL databases like RavenDB, CouchDB, MongoDB, etc.? I can see that some of these are open source and support more datatypes like enums, objects, etc., but otherwise I don't see any big advantage. Currently there is a problem of storing a huge amount of logs from various locations. I am planning to suggest these to my manager, so I just need a clear idea.

  • Design Documents for Python/Django?

    - by british_trader
    After working on a Django project for a while, I now have to do some design documents for it (UML-type stuff). However, the code doesn't have classes, but instead uses views.py with modules in it... What would be the best way to show the design of my application from the initial __init__.py, to the urls.py where the HTML requests are then filtered to the specific urls.py in each of the packages and then handled by the views.py? i.e.

        django-app
            urls.py
            views.py
            settings.py
            manager.py
            __init__.py
        django-package
            urls.py
            views.py

  • Error handling in Rails Controller for adding embedded Mongoid documents to Model

    - by Dragonfly
    I have an Item model that has embedded documents. Currently, the following comments_controller code will add a comment to the item successfully. However, if pushing the comment document onto the comments array on the item fails, I will not know this.

        # this does work, but I do not know if the push fails
        def create
          comment = Comment.new(:text => params[:text])
          @item.comments << comment
          render :text => comment
        end

    I would like to have something like this, but @item.comments << comment does not return true or false:

        # this does not work
        def create
          comment = Comment.new(:text => params[:text])
          if @item.comments << comment
            render :text => comment
          else
            render :text => 'oh no'
          end
        end

    Nor does it throw an exception when the document push fails:

        # this does not work
        def create
          begin
            comment = Comment.new(:text => params[:text])
            @item.comments << comment
            render :text => comment
          rescue Exception => e
            render :text => 'oh no'
          end
        end

    Thanks!

  • How does lucene index documents?

    - by Mehdi Amrollahi
    Hello, I have read some documentation about Lucene, including the talk at this link (http://lucene.sourceforge.net/talks/pisa), but I don't really understand how Lucene indexes documents or which algorithms it uses for indexing. The talk above says Lucene uses this incremental algorithm for indexing:

        maintain a stack of segment indices
        create index for each incoming document
        push new indexes onto the stack
        let b=10 be the merge factor; M=8

        for (size = 1; size < M; size *= b) {
          if (there are b indexes with size docs on top of the stack) {
            pop them off the stack;
            merge them into a single index;
            push the merged index onto the stack;
          } else {
            break;
          }
        }

    How does this algorithm provide optimized indexing? Does Lucene use a B-tree algorithm or anything like it for indexing, or does it have its own particular algorithm? Thank you for reading my post.
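
    To make the stack behaviour concrete, here is a minimal sketch (plain Python, not Lucene's actual code) of the merge policy described above: each new document becomes a tiny segment, and whenever b equally sized segments pile up on top of the stack they are merged into one segment b times larger, so the number of live segments stays roughly logarithmic in the number of documents.

        def add_document(stack, doc, b=10):
            stack.append([doc])  # a new one-document segment goes on top of the stack
            # Cascade: merge whenever b equal-sized segments sit on top of the stack.
            while len(stack) >= b and all(len(seg) == len(stack[-1]) for seg in stack[-b:]):
                merged = []
                for _ in range(b):
                    merged.extend(stack.pop())   # pop b segments of equal size...
                stack.append(merged)             # ...and push the merged segment back

        stack = []
        for i in range(1000):
            add_document(stack, "doc%d" % i)
        print([len(seg) for seg in stack])       # segment sizes grow in powers of b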

  • Plist won't copy to documents directory

    - by John
    I have a plist file in my resources group in Xcode. I am trying to copy this into my documents directory on app launch. I am using the following code (taken from a SQLite tutorial):

        BOOL success;
        NSError *error;
        NSFileManager *fileManager = [NSFileManager defaultManager];
        NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
        NSString *documentsDirectory = [paths objectAtIndex:0];
        NSString *filePath = [documentsDirectory stringByAppendingString:@"ActiveFeedsPlist.plist"];
        success = [fileManager fileExistsAtPath:filePath];
        if (success) return;
        NSString *path = [[[NSBundle mainBundle] resourcePath] stringByAppendingFormat:@"ActiveFeedsPlist.plist"];
        success = [fileManager copyItemAtPath:path toPath:filePath error:&error];
        if (!success) {
            NSAssert1(0, @"Failed to copy Plist. Error %@", [error localizedDescription]);
        }

    However, in the console I am given the error "* Terminating app due to uncaught exception 'NSInternalInconsistencyException', reason: 'Failed to copy Plist. Error Operation could not be completed. No such file or directory'". Any idea what is wrong? Thanks

  • Read Text File in Document Folder - Iphone SDK

    - by Kevin
    Hello everyone, I have this code below:

        NSString *fileName = [[NSUserDefaults standardUserDefaults] objectForKey:@"recentDownload"];
        NSString *fullPath = [NSBundle pathForResource:fileName ofType:@"txt" inDirectory:[NSHomeDirectory() stringByAppendingString:@"/Documents/"]];
        NSError *error = nil;
        [textViewerDownload setText:[NSString stringWithContentsOfFile:fullPath encoding:NSUTF8StringEncoding error:&error]];

    textViewerDownload is the text view displaying the text from the file. The actual file name is stored in an NSUserDefaults key called recentDownload. When I build this and click the button this code is under, my application crashes. Is there anything wrong with the syntax, or is it just a simple error?

  • Exception when indexing text documents with Lucene, using SnowballAnalyzer for cleaning up

    - by Julia
    Hello! I am indexing documents with Lucene and am trying to apply the SnowballAnalyzer for punctuation and stopword removal from the text, but I keep getting the following error:

        IllegalAccessError: tried to access method org.apache.lucene.analysis.Tokenizer.<init>(Ljava/io/Reader;)V from class org.apache.lucene.analysis.snowball.SnowballAnalyzer

    Here is the code; I would very much appreciate help! I am new to this.

        public class Indexer {
            private Indexer() {};
            private String[] stopWords = {....};
            private String indexName;
            private IndexWriter iWriter;
            private static String FILES_TO_INDEX = "/Users/ssi/forindexing";

            public static void main(String[] args) throws Exception {
                Indexer m = new Indexer();
                m.index("./newindex");
            }

            public void index(String indexName) throws Exception {
                this.indexName = indexName;
                final File docDir = new File(FILES_TO_INDEX);
                if (!docDir.exists() || !docDir.canRead()) {
                    System.err.println("Something wrong... " + docDir.getPath());
                    System.exit(1);
                }
                Date start = new Date();
                PerFieldAnalyzerWrapper analyzers = new PerFieldAnalyzerWrapper(new SimpleAnalyzer());
                analyzers.addAnalyzer("text", new SnowballAnalyzer("English", stopWords));
                Directory directory = FSDirectory.open(new File(this.indexName));
                IndexWriter.MaxFieldLength maxLength = IndexWriter.MaxFieldLength.UNLIMITED;
                iWriter = new IndexWriter(directory, analyzers, true, maxLength);
                System.out.println("Indexing to dir..........." + indexName);
                if (docDir.isDirectory()) {
                    File[] files = docDir.listFiles();
                    if (files != null) {
                        for (int i = 0; i < files.length; i++) {
                            try {
                                indexDocument(files[i]);
                            } catch (FileNotFoundException fnfe) {
                                fnfe.printStackTrace();
                            }
                        }
                    }
                }
                System.out.println("Optimizing...... ");
                iWriter.optimize();
                iWriter.close();
                Date end = new Date();
                System.out.println("Time to index was" + (end.getTime() - start.getTime()) + "miliseconds");
            }

            private void indexDocument(File someDoc) throws IOException {
                Document doc = new Document();
                Field name = new Field("name", someDoc.getName(), Field.Store.YES, Field.Index.ANALYZED);
                Field text = new Field("text", new FileReader(someDoc), Field.TermVector.WITH_POSITIONS_OFFSETS);
                doc.add(name);
                doc.add(text);
                iWriter.addDocument(doc);
            }
        }

  • Dynamic SQL Server stored procedure

    - by Pinu
        ALTER PROCEDURE [dbo].[GetDocumentsAdvancedSearch]
            @SDI CHAR(10) = NULL
            ,@Client CHAR(4) = NULL
            ,@AccountNumber VARCHAR(20) = NULL
            ,@Address VARCHAR(300) = NULL
            ,@StartDate DATETIME = NULL
            ,@EndDate DATETIME = NULL
            ,@ReferenceID CHAR(14) = NULL
        AS
        BEGIN
            -- SET NOCOUNT ON added to prevent extra result sets from
            -- interfering with SELECT statements.
            SET NOCOUNT ON;

            -- DECLARE
            DECLARE @Sql NVARCHAR(4000)
            DECLARE @ParamList NVARCHAR(4000)

            SELECT @Sql = 'SELECT DISTINCT ISNULL(Documents.DocumentID, '')
                ,Person.Name1
                ,Person.Name2
                ,Person.Street1
                ,Person.Street2
                ,Person.CityStateZip
                ,ISNULL(Person.ReferenceID,'')
                ,ISNULL(Person.AccountNumber,'')
                ,ISNULL(Person.HasSetPreferences,0)
                ,Documents.Job
                ,Documents.SDI
                ,Documents.Invoice
                ,ISNULL(Documents.ShippedDate,'')
                ,ISNULL(Documents.DocumentPages,'')
                ,Documents.DocumentType
                ,Documents.Description
            FROM Person
                LEFT OUTER JOIN Documents ON Person.PersonID = Documents.PersonID
                LEFT OUTER JOIN DocumentType ON Documents.DocumentType = DocumentType.DocumentType
                LEFT OUTER JOIN Addressess ON Person.PersonID = Addressess.PersonID'

            SELECT @Sql = @Sql + ' WHERE Documents.SDI IN ( ' + QUOTENAME(@sdi) + ') OR (Person.AssociationID = ' + ''' 000000 + ''' + 'AND Person.Client = ' + QUOTENAME(@Client)

            IF NOT (@AccountNumber IS NULL)
                SELECT @Sql = @Sql + 'AND Person.AccountNumber LIKE' + QUOTENAME(@AccountNumber)

            IF NOT (@Address IS NULL)
                SELECT @Sql = @Sql + 'AND Person.Name1 LIKE' + QUOTENAME(@Address)
                    + 'AND Person.Name2 LIKE' + QUOTENAME(@Address)
                    + 'AND Person.Street1 LIKE' + QUOTENAME(@Address)
                    + 'AND Person.Street2 LIKE' + QUOTENAME(@Address)
                    + 'AND Person.CityStateZip LIKE' + QUOTENAME(@Address)

            IF NOT (@StartDate IS NULL)
                SELECT @Sql = @Sql + 'AND Documents.ShippedDate >=' + @StartDate

            IF NOT (@EndDate IS NULL)
                SELECT @Sql = @Sql + 'AND Documents.ShippedDate <=' + @EndDate

            IF NOT (@ReferenceID IS NULL)
                SELECT @Sql = @Sql + 'AND Documents.ReferenceID =' + QUOTENAME(@ReferenceID)

            -- Insert statements for procedure here
            -- PRINT @Sql
            SELECT @ParamList = '@Psdi CHAR(10),@PClient CHAR(4),@PAccountNumber VARCHAR(20),@PAddress VARCHAR(300),@PStartDate DATETIME ,@PEndDate DATETIME,@PReferenceID CHAR(14)'

            EXEC SP_EXECUTESQL @Sql,@ParamList,@Sdi,@Client,@AccountNumber,@Address,@StartDate,@EndDate,@ReferenceID
            --PRINT @Sql
        END

    ERROR:

        Msg 102, Level 15, State 1, Line 23
        Incorrect syntax near '000000'.
        Msg 105, Level 15, State 1, Line 23
        Unclosed quotation mark after the character string 'AND Person.Client = [1 ]AND Person.AccountNumber LIKE[1]'.

  • .net printing multiple reports in one document (architecture question)

    - by LawsonM
    I understand how to print a single document via the PrintDocument class. However, I want to print multiple reports in one document. Each "report" will consist of charts, tables, etc. I want to have two reports per page. I've studied the few examples I can find on how to combine multiple documents into one; however, they always seem to work by creating a collection of objects (e.g. customer or order) that are then iterated over and drawn in the OnPrintPage method. My problem, and hence the "architecture" question, is that I don't want to cache the objects required to produce the report since they are very large and memory intensive. I'd simply like the resulting "report". One thought I had was to print the report to a metafile, cache that instead in a "MultiplePrintDocument" class and then position those images appropriately, two to a page, in the OnPrintPage method. I think this would be a lot more efficient and scalable in my case. But I'm not a professional programmer and can't figure out if I'm barking up the wrong tree here. I think the Graphics.BeginContainer and Graphics.Save methods might be relevant, but I can't figure out how to implement this or whether there is a better way. Any pointers would be greatly appreciated.

  • Cfsearch in combination of documents and indexed query data?

    - by Bart B
    Hi! I have an application which stores all kinds of data about people. The current cfsearch functionality (in Verity) includes searching documents that are attached to these people. If I have 2 documents attached to 1 person, 1 with say ABC in it and the other with XYZ in it, my ideal search result for "ABC AND XYZ" would return that 1 person. But as both 'words' are indexed in different documents, the standard behaviour is not to return any result from the cfsearch, because the combination doesn't exist in either of the 2 documents. Is there any way to combine indexed documents and/or query data so that the search is executed across the combination of relevant docs and data? In my application that would mean that I could index all documents and data regarding people and have an intelligent 'global' search to find the right person. Any pointers and help very much appreciated! (Should Solr offer new possibilities in comparison to Verity, no problem!) Thanks! Bart

  • DSOFramer.ocx Hosting word documents in 64 bit Windows Forms

    - by Santhosh
    It looks like the DSOFramer.ocx component is no longer available for download from MSDN as described here. Also, the DSOFramer component is a 32-bit component. Given this, I have 2 questions: Is there any alternative for hosting a Word document in a Windows Form apart from using the DSOFramer.ocx component? If I move to a 64-bit Windows operating system and run the Windows Forms application as a native 64-bit process, then how do I host the Word document in a 64-bit process?

  • Filtering documents against a dictionary key in MongoDB

    - by Thomas
    I have a collection of articles in MongoDB that has the following structure:

        {
            'category': 'Legislature',
            'updated': datetime.datetime(2010, 3, 19, 15, 32, 22, 107000),
            'byline': None,
            'tags': {
                'party': ['Peter Hoekstra', 'Virg Bernero', 'Alma Smith', 'Mike Bouchard', 'Tom George', 'Rick Snyder'],
                'geography': ['Michigan', 'United States', 'North America']
            },
            'headline': '2 Mich. gubernatorial candidates speak to students',
            'text': [
                'BEVERLY HILLS, Mich. (AP) \u2014 Two Democratic and Republican gubernatorial candidates found common ground while speaking to private school students in suburban Detroit',
                "Democratic House Speaker state Rep. Andy Dillon and Republican U.S. Rep. Pete Hoekstra said Friday a more business-friendly government can help reduce Michigan's nation-leading unemployment rate.",
                "The candidates were invited to Detroit Country Day Upper School in Beverly Hills to offer ideas for Michigan's future.",
                'Besides Dillon, the Democratic field includes Lansing Mayor Virg Bernero and state Rep. Alma Wheeler Smith. Other Republicans running are Oakland County Sheriff Mike Bouchard, Attorney General Mike Cox, state Sen. Tom George and Ann Arbor business leader Rick Snyder.',
                'Former Republican U.S. Rep. Joe Schwarz is considering running as an independent.'
            ],
            'dateline': 'BEVERLY HILLS, Mich.',
            'published': datetime.datetime(2010, 3, 19, 8, 0, 31),
            'keywords': "Governor's Race",
            '_id': ObjectId('4ba39721e0e16cb25fadbb40'),
            'article_id': 'urn:publicid:ap.org:0611e36fb084458aa620c0187999db7e',
            'slug': "BC-MI--Governor's Race,2nd Ld-Writethr"
        }

    If I wanted to write a query that looked for all articles that had at least 1 geography tag, how would I do that? I have tried writing db.articles.find( {'tags': 'geography'} ), but that doesn't appear to work. I've also thought about changing the search parameter to 'tags.geography', but am having a devil of a time figuring out what the search predicate would be.
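
    A minimal sketch (Python/pymongo; the connection, database name, and collection name are assumptions) of a predicate that matches articles with at least one geography tag, using dot notation into the tags sub-document:

        from pymongo import MongoClient

        client = MongoClient()                     # assumes a local mongod
        articles = client["news"]["articles"]      # database/collection names are assumptions

        # 'tags.geography' must exist and must not be an empty list.
        query = {"tags.geography": {"$exists": True, "$ne": []}}
        for article in articles.find(query):
            print(article["headline"], article["tags"]["geography"])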

  • What good alternatives to CHM are there for context sensitive help documents in desktop applications

    - by ninesided
    We currently have a number of desktop applications (PowerBuilder, WinForms, WPF) that make use of a single CHM for context sensitive help. We'd like to move away from CHM as it's difficult to maintain, but we've not found a suitable alternative. Ideally we'd like our developers to keep the help files up to date (perhaps in a wiki) as they add functionality and simply export this to PDF or something like that, but is it possible to use a PDF for context sensitive help, or are there any other promising alternatives to CHM?

  • Rotate Windows.Documents.Table

    - by Neverrav
    I need to rotate a table clockwise up to 90 degrees. It's one of the blocks of a FlowDocument. Is there a way to apply some kind of rotation to a Table? The possible solution of creating a TextEffect like this:

        var table = new Table();
        ... // fill the table here
        var effect = new TextEffect
        {
            Transform = new RotationTransform(90),
            PositionStart = 0,
            PositionCount = int.MaxValue
        };
        table.TextEffects = new TextEffectCollection();
        table.TextEffects.Add(effect);

    doesn't work.

  • Modifying documents in memory in yaml-cpp

    - by Mike Mueller
    I want to read a YML document, filter it by modifying some nodes in memory, and then spit it back out with an emitter. The problem is that YAML::Node appears to be designed to be read-only. Is there a way to replace a node's value (with a scalar in this case) that I'm missing?

  • How to skip callbacks on Mongoid Documents?

    - by jpemberthy
    My question is similar to this one: http://stackoverflow.com/questions/1342761/how-to-skip-activerecord-callbacks, but instead of AR I'm using Mongoid. It seems that this isn't implemented yet in the current version of Mongoid, so I'd like to know what an elegant way to implement it would be (if necessary).

  • searching XML documents using php

    - by dbomb101
    I am trying to make a search function using a combination of DOM, PHP and XML. I have got something up and running, but the problem is that my search function will only accept exact terms. On top of this, I am wondering if the method I picked is the most efficient.

        $searchTerm = "Lupe";
        $doc = new DOMDocument();
        foreach (file('musicInformation.xml') as $node) {
            $xmlString .= trim($node);
        }
        $doc->loadXML($xmlString);
        $records = $doc->documentElement->childNodes;
        $records = $doc->getElementsByTagName("musicdetails");
        foreach ($records as $record) {
            $artistnames = $record->getElementsByTagName("artistname");
            $artistname = $artistnames->item(0)->nodeValue;
            $recordnames = $record->getElementsByTagName("recordname");
            $recordname = $recordnames->item(0)->nodeValue;
            $recordtypes = $record->getElementsByTagName("recrodtype");
            $recordtype = $recordtypes->item(0)->nodeValue;
            $formats = $record->getElementsByTagName("format");
            $format = $formats->item(0)->nodeValue;
            $prices = $record->getElementsByTagName("price");
            $price = $prices->item(0)->nodeValue;
            if ($searchTerm == $artistname || $searchTerm == $recordname || $searchTerm == $recordtype || $searchTerm == $format || $searchTerm == $price) {
                echo "$artistname - $recordname - $recordtype - $format - $price\n";
            }
        }
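
    For the exact-term limitation, a minimal sketch of the usual fix, written in Python for illustration (the file name and tag names, including "recrodtype", are taken from the code above): compare lowercased substrings instead of testing whole values for equality.

        import xml.etree.ElementTree as ET

        search_term = "lupe"
        root = ET.parse("musicInformation.xml").getroot()

        for record in root.iter("musicdetails"):
            fields = [record.findtext(tag, default="")
                      for tag in ("artistname", "recordname", "recrodtype", "format", "price")]
            # Case-insensitive substring match instead of exact equality.
            if any(search_term in value.lower() for value in fields):
                print(" - ".join(fields))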

  • Avoiding Redundancies in XML documents

    - by MarceloRamires
    I was working with a certain XML file where there were no redundancies:

        <person>
          <eye>
            <eye_info>
              <eye_color> blue </eye_color>
            </eye_info>
          </eye>
          <hair>
            <hair_info>
              <hair_color> blue </hair_color>
            </hair_info>
          </hair>
        </person>

    As you can see, the sub-tag eye_color makes reference to eye in its name, so there was no need to avoid redundancies; I could get the eye color in a single line after loading the XML into a dataset:

        dataset.ReadXml(path);
        value = dataset.Tables("eye_info").Rows(0)("eye_color");

    I do realise it's not the smartest way of doing so, and this situation I'm having now wasn't unforeseen. Now, let's say I have to read XMLs that are in this format:

        <person>
          <eye>
            <info>
              <color> blue </color>
            </info>
          </eye>
          <hair>
            <info>
              <color> blue </color>
            </info>
          </hair>
        </person>

    So if I try to call it like this:

        dataset.ReadXml(path);
        value = dataset.Tables("info").Rows(0)("color");

    there will be a redundancy, because I could only go one level up to identify a single field in an XML with my previous method, and the 'disambiguator' is three levels above. Is there a practical way to reach a single field without mistakes, given all the above (or at least a few of the) fields?
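
    A minimal sketch of the path-based alternative, written in Python/ElementTree for illustration rather than the DataSet API above (the sample values are assumptions): addressing each field by its full path from the root disambiguates the identically named nodes.

        import xml.etree.ElementTree as ET

        xml = """<person>
          <eye><info><color>blue</color></info></eye>
          <hair><info><color>brown</color></info></hair>
        </person>"""

        person = ET.fromstring(xml)
        # The path from the root tells the two <color> elements apart.
        eye_color = person.findtext("eye/info/color")
        hair_color = person.findtext("hair/info/color")
        print(eye_color, hair_color)   # blue brown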

  • Office documents prompt for login in anonymous SharePoint site

    - by xmt15
    I have a MOSS 07 site that is configured for anonymous access. There is a document library within this site that also has anonymous access enabled. When an anonymous user clicks on a PDF file in this library, he or she can read or download it with no problem. When a user clicks on an Office document, he or she is prompted with a login box. The user can cancel out of this box without entering a login, and will be taken to the document. This happens in IE but not Firefox. I see some references to this question on the web but no clear solutions:

        http://www.microsoft.com/communities/newsgroups/en-us/default.aspx?dg=microsoft.public.sharepoint.windowsservices.development&tid=5452e093-a0d7-45c5-8ed0-96551e854cec&cat=en_US_CC8402B4-DC5E-652D-7DB2-0119AFB7C906&lang=en&cr=US&sloc=&p=1
        http://www.sharepointu.com/forums/t/5779.aspx
        http://www.eggheadcafe.com/software/aspnet/30817418/anonymous-users-getting-p.aspx

  • Author in wiki, generate PDF documents, CHM files or embedded help

    - by Dilum Ranatunga
    Does anyone know of a wiki or wiki plugin that generates a PDF file or CHM file spanning the entire wiki? I would like to have control of the table of contents, and I would like the internal and external links to work. Ideally it would allow tweaking the output template, but that is not a deal-breaker. I want to author content with a wiki syntax and mindset (lots of cross-links etc.), but ship the content as PDF, CHM or an embedded application, i.e. something friendlier than installing the wiki software on the end-user machine...

  • XPath - Quering two XML documents

    - by Arnej65
    I have two XML docs:

    XML1:

        <Books>
          <Book id="11">
            .......
            <AuthorName/>
          </Book>
          ......
        </Books>

    XML2:

        <Authors>
          <Author>
            <BookId>11</BookId>
            <AuthorName>Smith</AuthorName>
          </Author>
        </Authors>

    I'm trying to do the following: get the value of XML2/Author/AuthorName where XML1/Book/@id equals XML2/Author/BookId.

        XML2/Author/AuthorName[../BookId = XML1/Book/@id]
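
    A hedged sketch of one way to combine the two documents, using Python and lxml for illustration (the file names are assumptions): read each Book's @id from the first document and use it as an XPath variable against the second.

        from lxml import etree

        books = etree.parse("books.xml")      # XML1, file name assumed
        authors = etree.parse("authors.xml")  # XML2, file name assumed

        for book_id in books.xpath("/Books/Book/@id"):
            # Select the AuthorName whose sibling BookId matches this Book's id attribute.
            names = authors.xpath("/Authors/Author[BookId=$id]/AuthorName/text()", id=book_id)
            if names:
                print(book_id, names[0])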
