Search Results

Search found 26517 results on 1061 pages for 'large directory'.

Page 201/1061 | < Previous Page | 197 198 199 200 201 202 203 204 205 206 207 208  | Next Page >

  • Does Lucene's search function work on large documents?

    - by shaon-fan
    Hi there, I have a problem searching with Lucene. Indexing works well even on huge documents such as .pst files (Outlook mail storage); the index contains all the information from the .pst. The only problem is that the document is sometimes very large and contains a great many words. When I search with Lucene, it only seems to process the front part of the document: if a word appears only in the back part, it isn't found and there are no hits in the result. But when I split the document into several parts (in a crude way, just for debugging) and search each part, it works fine. So I want to know how to split the index, and what size limit searching runs into. Cheers, and waiting for a reply. ++++++++++++++++++++++++++++++++++++++++++++++++++ Update: following Coady's suggestion, I set the length to the maximum of 2^31-1, but the search results still don't include what I want. To check, I convert the document to a string array for analysis; the document has 79,680 words including spaces and symbols. When I search for a certain word it returns a count of 300, although there are actually more than 300 occurrences, and for the same reason a word in the back part of the document still can't be found. //////////////set the length indexwriter.SetMaxFieldLength(2147483647); ////////////////////search IndexSearcher searcher = new IndexSearcher(Program.Parameters["INDEX_LOCATION"].ToString()); Hits hits = searcher.Search(query); This is my code, the same as everyone else's. I found this problem when I needed to count the hits for every word in a document, and that is also how I found that words in the back part can't be searched. Please help me figure this out: is there a length setting on the searcher somewhere? Have you run into this problem?
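
    A hedged sketch of the usual cause, assuming Lucene.NET 2.x: the truncation happens at indexing time, because IndexWriter keeps only the first maxFieldLength terms of each field (10,000 by default); there is no length setting on IndexSearcher to change. The names below follow the Lucene.NET API as I recall it; treat the path and version constants as assumptions, and note that the index must be rebuilt after raising the limit.

    ```csharp
    using System.IO;
    using Lucene.Net.Analysis.Standard;
    using Lucene.Net.Index;
    using Lucene.Net.Store;

    // Sketch only: raise the per-field term limit *before* adding documents,
    // then rebuild the index. Changing the limit has no effect on documents
    // that were already indexed with the default 10,000-term cap.
    var dir = FSDirectory.Open(new DirectoryInfo(@"C:\index"));          // hypothetical path
    var analyzer = new StandardAnalyzer(Lucene.Net.Util.Version.LUCENE_29);
    var writer = new IndexWriter(dir, analyzer, true,
                                 IndexWriter.MaxFieldLength.UNLIMITED);  // no truncation
    // writer.SetMaxFieldLength(int.MaxValue) is the older equivalent call.
    // ... AddDocument(...) for every document, then:
    writer.Optimize();
    writer.Close();
    ```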

    Read the article

  • Suggested (simple) approach for drawing large numbers of visual elements in WPF?

    - by Ender
    I'm writing an interface that features a large (~50000px wide) "canvas"-type area that is used to display a lot of data in a fairly novel way. This involves lots of lines, rectangles, and text. The user can scroll around to explore the entire canvas. At the moment I'm just using a standard Canvas panel with various Shapes placed on it. This is nice and easy to do: construct a shape, assign some coordinates, and attach it to the Canvas. Unfortunately, it's pretty slow (to construct the children, not to do the actual rendering). I've looked into some alternatives, but it's a bit intimidating. I don't need anything fancy - just the ability to efficiently construct and place objects in a coordinate plane. If all I get are lines, colored rectangles, and text, I'll be happy. Do I need Geometry instances inside GeometryGroups inside GeometryDrawings inside some Panel container? Note: I'd like to include text and graphics (i.e. colored rectangles) in the same space, if possible.
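
    For what it's worth, a minimal sketch of the lighter-weight route that is usually suggested before going all the way to Geometry/GeometryDrawing trees: host DrawingVisual objects in a custom FrameworkElement, which avoids creating thousands of full Shape UIElements. The class below is illustrative only, not code from the question.

    ```csharp
    using System.Globalization;
    using System.Windows;
    using System.Windows.Media;

    // Sketch: a canvas-like element that renders many rectangles and labels as
    // lightweight DrawingVisuals instead of individual Shape elements.
    public class FastCanvas : FrameworkElement
    {
        private readonly VisualCollection visuals;

        public FastCanvas()
        {
            visuals = new VisualCollection(this);
        }

        public void AddItem(Rect bounds, Brush fill, string label)
        {
            var visual = new DrawingVisual();
            using (DrawingContext dc = visual.RenderOpen())
            {
                dc.DrawRectangle(fill, null, bounds);
                dc.DrawText(new FormattedText(label, CultureInfo.CurrentCulture,
                                FlowDirection.LeftToRight, new Typeface("Segoe UI"),
                                12, Brushes.Black),
                            bounds.TopLeft);
            }
            visuals.Add(visual);
        }

        // Plumbing WPF needs in order to render and hit-test the visuals.
        protected override int VisualChildrenCount { get { return visuals.Count; } }
        protected override Visual GetVisualChild(int index) { return visuals[index]; }
    }
    ```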

    Read the article

  • Is there a faster way to parse through a large file with regex?

    - by Ray Eatmon
    Problem: a very, very large file that I need to parse line by line, getting 3 values from each line. Everything works, but it takes a long time to parse the whole file. Is it possible to do this within seconds? Typical time it's taking is between 1 minute and 2 minutes. Example file size is 148,208 KB. I am using a regex to parse every line. Here is my C# code: private static void ReadTheLines(int max, Responder rp, string inputFile) { List<int> rate = new List<int>(); double counter = 1; try { using (var sr = new StreamReader(inputFile, Encoding.UTF8, true, 1024)) { string line; Console.WriteLine("Reading...."); while ((line = sr.ReadLine()) != null) { if (counter <= max) { counter++; rate = rp.GetRateLine(line); } else if(max == 0) { counter++; rate = rp.GetRateLine(line); } } rp.GetRate(rate); Console.ReadLine(); } } catch (Exception e) { Console.WriteLine("The file could not be read:"); Console.WriteLine(e.Message); } } Here is my regex: public List<int> GetRateLine(string justALine) { const string reg = @"^\d{1,}.+\[(.*)\s[\-]\d{1,}].+GET.*HTTP.*\d{3}[\s](\d{1,})[\s](\d{1,})$"; Match match = Regex.Match(justALine, reg, RegexOptions.IgnoreCase); // Here we check the Match instance. if (match.Success) { // Finally, we get the Group value and display it. string theRate = match.Groups[3].Value; Ratestorage.Add(Convert.ToInt32(theRate)); } else { Ratestorage.Add(0); } return Ratestorage; } Here is an example line to parse, usually around 200,000 lines: 10.10.10.10 - - [27/Nov/2002:16:46:20 -0500] "GET /solr/ HTTP/1.1" 200 4926 789
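
    Two cheap things I would try before anything structural, sketched below: hold one statically compiled Regex rather than calling static Regex.Match with a pattern string on every line, and give the StreamReader a much larger buffer than 1024 bytes. The pattern shown is a simplified stand-in for the one in the question (assuming a standard access-log layout), so treat the details as an assumption rather than a drop-in replacement.

    ```csharp
    using System.Collections.Generic;
    using System.IO;
    using System.Text;
    using System.Text.RegularExpressions;

    // Sketch only: one compiled, reusable regex and a fresh list per call
    // instead of pushing into a shared field.
    static class RateParser
    {
        private static readonly Regex LineRegex = new Regex(
            @"\[[^\]]+\] ""GET [^""]*HTTP[^""]*"" \d{3} (\d+) (\d+)$",
            RegexOptions.Compiled);

        public static List<int> ReadRates(string inputFile)
        {
            var rates = new List<int>(250000);   // rough capacity guess to avoid regrowth
            using (var sr = new StreamReader(inputFile, Encoding.UTF8, true, 1 << 16))
            {
                string line;
                while ((line = sr.ReadLine()) != null)
                {
                    Match m = LineRegex.Match(line);
                    rates.Add(m.Success ? int.Parse(m.Groups[2].Value) : 0);
                }
            }
            return rates;
        }
    }
    ```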

    Read the article

  • jQuery AJAX: How to pass large HTML tags as parameters?

    - by marknt15
    Hello, how can I pass a large chunk of HTML to my PHP script using jQuery AJAX? The result I receive back is wrong. Thanks in advance. Cheers, Mark. jQuery AJAX code: $('#saveButton').click(function() { // do AJAX and store tree structure to a PHP array (to be saved later in database) var treeInnerHTML = $("#demo_1").html(); alert(treeInnerHTML); var ajax_url = 'ajax_process.php'; var params = 'tree_contents=' + treeInnerHTML; $.ajax({ type: 'POST', url: ajax_url, data: params, success: function(data) { $("#show_tree").html(data); }, error: function(req, status, error) { } }); }); treeInnerHTML actual value: <ul class="ltr"> <li id="phtml_1" class="open"><a href="#"><ins>&nbsp;</ins>Root node 1</a> <ul> <li class="leaf" id="phtml_2"><a href="#"><ins>&nbsp;</ins>Child node 1</a></li> <li class="last leaf" id="phtml_3"><a href="#"><ins>&nbsp;</ins>Child node 2</a></li> </ul> </li> <li id="phtml_5" class="file last leaf"><a href="#"><ins>&nbsp;</ins>Root node 2</a></li> </ul> Returned result from my show_tree div: <ul class="\&quot;ltr\&quot;"> <li id="\&quot;phtml_1\&quot;" class="\&quot;open\&quot;"><a href="%5C%22#%5C%22"><ins></ins></a></li></ul>

    Read the article

  • WCF code generation for large/complex schema (HR-XML/OAGIS) - is there an alternative?

    - by Sasha Borodin
    Hello, and thank you for reading. I am implementing a WCF Service based on a predefined specification (HR-XML 3.0). As such, I am starting with the schema and working my way back to code. There are a number of large schema documents (which import yet more schema documents) related to my implementation, provided by this specification. I am able to generate code using xsd.exe by supplying the "main" and "supporting" xsd files as arguments. But there are several issues, and I am wondering if this is the right approach: there are literally hundreds of classes (the code file is half a meg in size); there are duplicate classes (e.g. Type and Type1, which both represent the same type); and there are classes declared as inheriting from a base class that is never generated/defined. I understand that there are limitations to the kinds of schema supported by svcutil.exe/xsd.exe when targeting the DataContractSerializer and even the XmlSerializer. My question is two-fold: Are code-generation "issues" fairly common when dealing with larger, modular xsd files? Has anyone had success generating data contracts from OAGIS or HR-XML schema? Given the above issues, are there better approaches to this task that avoid generating code and working with concrete objects? Does it make better sense to read and compose a SOAP message directly, while still taking advantage of the rest of the WCF framework? I understand that I am losing the convenience of working with .NET objects and the framework-provided (de)serialization; given these losses, would it still be advantageous to base my service on WCF? Is there some "middle ground" between working with .NET types and pure XML? Thank you very much! -Sasha Borodin DFWHC.org

    Read the article

  • Best practice for inserting large chunks of HTML into elements with JavaScript?

    - by hamstar
    Hey guys. I'm building a web application (using Prototype) at the moment that requires the addition of large chunks of HTML into the DOM. Most of these are rows that contain elements with all manner of attributes. Currently I keep a blank row of HTML in a variable: var blankRow = '<tr><td>' +'<a href="{LINK}" onclick="someFunc(\'{STRING}\');">{WORD}</a>' +'</td></tr>'; function insertRow(o) { newRow = blankRow .sub('{LINK}',o.link) .sub('{STRING}',o.string) .sub('{WORD}',o.word); $('tbodyElem').insert( newRow ); } Now that works all well and dandy, but is it the best practice? I have to update the code in blankRow whenever I update code on the page, so that the new elements being inserted stay the same. It gets sucky when I have like 40 lines of HTML to go in a blankRow and then I have to escape it too. Is there an easier way? I was thinking of URL-encoding and then decoding it before insertion, but that would still mean a blankRow and lots of escaping. What would be nice would be a heredoc (EOF) syntax a la PHP et al.: $blankRow = <<<EOF text text EOF; That would mean no escaping but it would still need a blankRow. What do you do in this situation?

    Read the article

  • Which is faster for large "for" loop: function call or inline coding?

    - by zaplec
    Hi, I have written some embedded software (in C, of course) and now I'm considering ways to improve the running time of the system. The most important single module in my system is one very large nested for-loop module. That module consists of two nested for loops that iterate at most 122,500 times. That's not very much yet, but the problem is that inside that nested loop I have a call to a function that lives in another source file. That function consists mostly of another pair of nested for loops, which always iterate 22,500 times. So I end up making that function call 122,500 times. I have made the called function a lot lighter and shorter (yet it still works as it should), and now I'm wondering: would it be faster to remove the function call and write that code directly inside the first two for loops? The processor in the system is an ARM7TDMI running at 55 MHz. The system itself isn't very time-critical, so it doesn't have to be real-time capable; however, the faster it can process its duties the better. Would it also be faster to use while loops instead of for loops? Any advice on improving the running time is appreciated. -zaplec

    Read the article

  • What hash algorithms are parallelizable? Optimizing the hashing of large files on multi-core CPUs

    - by DanO
    I'm interested in optimizing the hashing of some large files (optimizing wall clock time). The I/O has been optimized well enough already and the I/O device (local SSD) is only tapped at about 25% of capacity, while one of the CPU cores is completely maxed-out. I have more cores available, and in the future will likely have even more cores. So far I've only been able to tap into more cores if I happen to need multiple hashes of the same file, say an MD5 AND a SHA256 at the same time. I can use the same I/O stream to feed two or more hash algorithms, and I get the faster algorithms done for free (as far as wall clock time). As I understand most hash algorithms, each new bit changes the entire result, and it is inherently challenging/impossible to do in parallel. Are any of the mainstream hash algorithms parallelizable? Are there any non-mainstream hashes that are parallelizable (and that have at least a sample implementation available)? As future CPUs will trend toward more cores and a leveling off in clock speed, is there any way to improve the performance of file hashing? (other than liquid nitrogen cooled overclocking?) or is it inherently non-parallelizable?
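
    For what it's worth, a rough C# sketch of the chunked/tree-style approach, which is about the only way to spread a single file across cores: hash fixed-size chunks in parallel, then hash the concatenated chunk digests. This is not any standardized format (and the result is not a plain SHA-256 of the file), so it only helps when you control both the writer and the verifier; the chunk size and the unbounded buffering are simplifying assumptions.

    ```csharp
    using System;
    using System.IO;
    using System.Linq;
    using System.Security.Cryptography;
    using System.Threading.Tasks;

    // Sketch of a chunked ("tree-ish") hash. A real version would bound how many
    // chunks are buffered in memory at once instead of reading the whole file.
    static class ChunkedHasher
    {
        public static byte[] Hash(string path, int chunkSize = 8 * 1024 * 1024)
        {
            byte[][] chunks;
            using (var fs = File.OpenRead(path))
            {
                int count = (int)((fs.Length + chunkSize - 1) / chunkSize);
                chunks = new byte[count][];
                for (int i = 0; i < count; i++)
                {
                    int size = (int)Math.Min(chunkSize, fs.Length - fs.Position);
                    chunks[i] = new byte[size];
                    int read = 0;                  // single sequential reader keeps the SSD happy
                    while (read < size)
                        read += fs.Read(chunks[i], read, size - read);
                }
            }

            // Hash each chunk on its own core.
            var digests = new byte[chunks.Length][];
            Parallel.For(0, chunks.Length, i =>
            {
                using (var sha = SHA256.Create())
                    digests[i] = sha.ComputeHash(chunks[i]);
            });

            // Root digest = hash of all chunk digests, concatenated in order.
            using (var sha = SHA256.Create())
                return sha.ComputeHash(digests.SelectMany(d => d).ToArray());
        }
    }
    ```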

    Read the article

  • What can cause a large discrepancy between minor GC time and total pause time?

    - by cxcg
    We have a latency-sensitive application, and are experiencing some GC-related pauses we don't fully understand. We occasionally have a minor GC that results in application pause times that are much longer than the reported GC time itself. Here is an example log snippet: 485377.257: [GC 485378.857: [ParNew: 105845K->621K(118016K), 0.0028070 secs] 136492K->31374K(1035520K), 0.0028720 secs] [Times: user=0.01 sys=0.00, real=1.61 secs] Total time for which application threads were stopped: 1.6032830 seconds The total pause time here is orders of magnitude longer than the reported GC time. These are isolated and occasional events: the immediately preceding and succeeding minor GC events do not show this large discrepancy. The process is running on a dedicated machine, with lots of free memory, 8 cores, running Red Hat Enterprise Linux ES Release 4 Update 8 with kernel 2.6.9-89.0.1EL-smp. We have observed this with (32 bit) JVM versions 1.6.0_13 and 1.6.0_18. We are running with these flags: -server -ea -Xms512m -Xmx512m -XX:+UseConcMarkSweepGC -XX:NewSize=128m -XX:MaxNewSize=128m -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCApplicationStoppedTime -XX:-TraceClassUnloading Can anybody offer some explanation as to what might be going on here, and/or some avenues for further investigation?

    Read the article

  • JavaScript (using jQuery) in a large project... organization, passing data, private methods, etc.

    - by gaoshan88
    I'm working on a large project that is organized like so: multiple JavaScript files are included as needed, and most code is wrapped in anonymous functions... // wutang.js //Included Files Go Here // Public stuff var MethodMan; // Private stuff (function() { var someVar1; MethodMan = function(){...}; var APrivateMethod = function(){...}; $(function(){ //jquery page load stuff here $('#meh').click(APrivateMethod); }); })(); I'm wondering about a few things here. Assuming that a number of these files are included on a page, what has access to what, and what is the best way to pass data between the files? For example, I assume that MethodMan() can be accessed by anything on any included page. It is public, yes? var someVar1 is only accessible to the methods inside that particular anonymous function and nowhere else, yes? Also true of var APrivateMethod(), yes? What if I want APrivateMethod() to make something available to a method inside a different anonymous wrapper in a different included file? What's the best way to pass data between these private, anonymous functions on different included pages? Do I simply have to make whatever I want to share between them public? How about if I want to minimize global variables? What about the following: var PootyTang = function(){ someVar1 = $('#someid').text(); //some stuff }; and in another included file used by that same page I have: var TangyPoot = function(){ someVar1 = $('#someid').text(); //some completely different stuff }; What's the best way to share the value of someVar1 across these anonymous functions (wrapped as in the first example)?

    Read the article

  • SQL database schema design for a large database with 3 billion relationships

    - by K-Bell
    Get your geek on. Can you solve this? I am designing a products database for SQL Server 2008 R2 Ed. (not Enterprise Ed.) that will be used to store custom product configurations for over 30,000 distinct products. The database will have up to 500 users at a time. Here is the design problem… Each Product has a collection of Parts (up to 50 parts per product). So if I have 30,000 Products and each of them can have up to 50 Parts, that's 1.5 million distinct Product-to-Part relationships …or as an equation… 30,000 (Products) X 50 (Parts) = 1.5 million Product-to-Part records. …and if… each Part can have up to 2,000 finish options (a finish is a paint color). NOTE: Only one finish will be selected by a user at run-time. The 2,000 finish options I need to store are the allowed options for a specific part on a specific product. So if I have 1.5 million distinct product-to-part relationships/records and each of those parts can have up to 2,000 finishes, that is 3 billion allowable product-to-part-to-finish relationships/records …or as an equation… 1.5 million (Parts) x 2,000 (Finishes) = 3 billion Product-to-Part-to-Finish records. How can I design this database so that I can execute fast and efficient queries for a specific product and return its list of Parts and all the allowable Finishes for each part, without 3 billion Product-to-Part-to-Finish records? Read time is more important than write time. Please post your thoughts/suggestions if you have experience with large databases. Thanks!

    Read the article

  • Web page for iPhone - large fonts instead of scrolling?

    - by chris_l
    I'm planning the layout of my web page, which should also be usable on the iPhone. I don't really have much experience with the iPhone yet - I just installed the iPhone Simulator on my Mac. The page's contents are flexible, so I think it would be better to use this flexibility to avoid making the user scroll around the entire page. In particular I have: a header and a sidebar that will be used all the time to perform several actions, and a main content area with a number of elements (e.g. images). The UI would stay quite usable if the number of elements shown at one time were reduced for a small screen (e.g. by JavaScript). It would also be okay to make the main content area scrollable (as opposed to the entire page). The problem: if I simply display the page on the iPhone, it uses an extremely small font size, so users must zoom in first and then scroll around - which means they can't see the header and sidebar all the time. What's the best way to deal with this situation? Just leave it this way (very small fonts), because users expect that behaviour on the iPhone? Increase the font size (by specifying it in em or px or with xx-large, or what would be the best way?), if I detect - somehow - that it's being displayed on the iPhone? Or is there some way to restrict the viewport size to the screen size, and make it zoom in automatically? I think that would be the easiest solution in my case. Or ...?

    Read the article

  • Exemplars of large document-centric applications with COM/XPCOM/.NET interfaces.

    - by Warren P
    I am looking for exemplars (design examples) showing the use of interfaces (aka 'protocols' for you Smalltalkers) to design a document management architecture in a large word processor, spreadsheet, vector graphics or publishing package, or office-productivity (non-database) application, with support for as many of the following as possible: any open source project would be ideal, and the language of implementation is unimportant since I am looking for design examples; however, an object oriented language with support for "interfaces" is a must. I know at least a dozen languages, and I'm willing to study any application's source. "Interface" could loosely be applied to either XPCOM or COM interfaces, or .NET interfaces, or even the use of pure-virtual (virtual+abstract) base classes for OOP languages that lack the ability to declare an interface distinct from a class. I am mostly looking for a robust, thorough and flexible implementation for a document (IDocument), various document views (IDocumentView), and whatever operations make sense in that case. I am particularly interested in cases where the product in question is a real-world product. For example, it would help if anybody familiar with OpenOffice could tell me whether its code contains a good sample design. I am looking for design documentation that outlines the design of the interfaces for such an application. So for example, if the OpenOffice spreadsheet has such an interface design, that might be the best case, because it is a widely used real-world design with millions of users, rather than a minimal, contrived textbook example. I know that the Mozilla platform uses XPCOM, and its design is heavily "interface" oriented, but I am looking more for a "word processor" or "spreadsheet" type of document design rather than a web browser. I am particularly interested in the interfaces used to access data and meta-data, such as markup (attributes like bold, italics, and font size), and the ability to search and look up named entities within a document.
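
    To make the shape of the question concrete, a small illustrative sketch (in C#; the names are invented for illustration, not taken from OpenOffice or Mozilla) of the kind of document/view interface split being asked about:

    ```csharp
    // Illustrative only: the rough interface split a document-centric
    // application might expose for documents, views, and attributed text.
    public interface ITextRange
    {
        int Start { get; }
        int Length { get; }
        string GetText();
    }

    public interface IDocument
    {
        string Title { get; }
        ITextRange Find(string text);                                     // locate text / named entities
        void SetAttribute(ITextRange range, string name, object value);  // e.g. "bold", "font-size"
    }

    public interface IDocumentView
    {
        IDocument Document { get; }
        void ScrollTo(ITextRange range);
        void Refresh();
    }
    ```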

    Read the article

  • How to avoid geometric slowdown with large LINQ transactions?

    - by Shaul
    I've written some really nice, funky libraries for use in LinqToSql. (Some day when I have time to think about it I might make it open source... :) ) Anyway, I'm not sure if this is related to my libraries or not, but I've discovered that when I have a large number of changed objects in one transaction, and then call DataContext.GetChangeSet(), things start getting reaalllly slooowwwww. When I break into the code, I find that my program is spinning its wheels doing an awful lot of Equals() comparisons between the objects in the change set. I can't guarantee this is true, but I suspect that if there are n objects in the change set, then the call to GetChangeSet() is causing every object to be compared to every other object for equivalence, i.e. at best (n^2-n)/2 calls to Equals()... Yes, of course I could commit each object separately, but that kinda defeats the purpose of transactions. And in the program I'm writing, I could have a batch job containing 100,000 separate items, that all need to be committed together. Around 5 billion comparisons there. So the question is: (1) is my assessment of the situation correct? Do you get this behavior in pure, textbook LinqToSql, or is this something my libraries are doing? And (2) is there a standard/reasonable workaround so that I can create my batch without making the program geometrically slower with every extra object in the change set?
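
    If it does turn out to be quadratic work inside the change tracker, the usual workaround (sketched below; MyDataContext, PendingItems and PendingItem are placeholders, not the asker's types) is to submit in fixed-size batches on a fresh DataContext each time, so n stays small, and wrap the batches in a TransactionScope when all-or-nothing behaviour is required:

    ```csharp
    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Transactions;

    // Sketch: commit 100,000 pending items in chunks so the change set that
    // GetChangeSet()/SubmitChanges() has to reconcile never grows large.
    static void CommitInBatches(IEnumerable<PendingItem> items, int batchSize = 500)
    {
        using (var scope = new TransactionScope(TransactionScopeOption.Required,
                                                TimeSpan.FromMinutes(30)))
        {
            foreach (var batch in items
                         .Select((item, index) => new { item, index })
                         .GroupBy(x => x.index / batchSize, x => x.item))
            {
                using (var db = new MyDataContext())        // fresh, empty change tracker
                {
                    db.PendingItems.InsertAllOnSubmit(batch);
                    db.SubmitChanges();
                }
            }
            scope.Complete();   // nothing is visible until every batch has succeeded
        }
        // Note: spanning many connections inside one TransactionScope can escalate
        // to a distributed (MSDTC) transaction, which may or may not be acceptable.
    }
    ```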

    Read the article

  • Large product catalog with statistics - alternatives to SQL Server?

    - by Eric P
    I am building the UI for a large product catalog (millions of products). I am using SQL Server, FreeText search and ASP.NET MVC. Tables are normalized and indexed. Most queries take less than a second to return. The issue is this: let's say the user does a search by keyword. On the search results page I need to display/query for: the first 20 matching products (paged, sorted); the total count of matching products for paging; the list of stores of matching products only; the list of brands of matching products only; and the list of colors of matching products only. Each query takes about 0.5 to 1 second, so altogether it is about 5 seconds. I would like to get the whole page to load in under 1 second. There are several approaches: Optimize queries even more. I already spent a lot of time on this one, so I'm not sure it can be pushed further. Load products first, then load the rest of the information using AJAX. More like a workaround; it will need a revised UI. Re-organize data to be more report-friendly. I already aggregated a lot of fields. I checked out several similar sites, for example zappos.com. Not only do they display the same information as I would like in under 1 second, but they also include statistics (number of results in each category). The following is the search for keyword "white": http://www.zappos.com/white How do sites like Zappos and Amazon make their results, filters and stats appear almost instantly?
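
    One cheap experiment before restructuring the data, sketched below as an assumption about how the page is built rather than a description of the actual code: since the five result sets are independent, issue them concurrently so the page cost approaches the slowest query instead of the sum. SearchPageModel and the Search*/List* methods are hypothetical stand-ins for the real data-access layer.

    ```csharp
    using System.Threading.Tasks;

    // Sketch only: run the independent catalog queries in parallel, each on its
    // own connection/context.
    public async Task<SearchPageModel> BuildSearchPageAsync(string keyword)
    {
        var products = Task.Run(() => SearchProducts(keyword, page: 0, pageSize: 20));
        var total    = Task.Run(() => CountMatches(keyword));
        var stores   = Task.Run(() => ListStores(keyword));
        var brands   = Task.Run(() => ListBrands(keyword));
        var colors   = Task.Run(() => ListColors(keyword));

        await Task.WhenAll(products, total, stores, brands, colors);

        return new SearchPageModel
        {
            Products = products.Result,
            Total    = total.Result,
            Stores   = stores.Result,
            Brands   = brands.Result,
            Colors   = colors.Result,
        };
    }
    ```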

    Read the article

  • How can you exclude a large number of records in a cross-db query using LINQ2SQL?

    - by tap
    So here is my situation: I have a vendor-supplied DB we cannot modify and a custom DB that imports data from the vendor app and acts on it. Once records are imported from the vendor app, they cannot appear on the list of records to be imported. Also, we only want to display the 250 most recent records that have not been imported. What I originally started with was selecting the list of IDs that have been imported from the custom DB, and then querying the vendor DB using that list of IDs in a .Where(x => !idList.Contains(x.Id)) clause on the remote query. This worked up until we broke 2,100 records imported into the custom DB, as 2,100 is the limit on the number of parameters that can be passed in to SQL Server. After finding out this was the actual problem, and not the 'invalid buffer'/'severe error' ADO.NET reported, my solution was to remove the first 2,000 IDs in the remote query and then remove the remaining records in the local query. Having to pull back a large number of irrelevant records just to exclude them, so I can get the correct 250 records, seems very inelegant. Is there a better way to do this, short of doing a cross-db stored procedure? Thanks in advance.
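
    A sketch of one way to stay under the 2,100-parameter ceiling without dragging thousands of irrelevant rows back, assuming the exclusion can happen on the client: pull only the candidate IDs from the vendor DB, subtract the imported IDs in memory with a HashSet, and then fetch just the 250 rows actually needed. The context and table names below are placeholders for the real ones, and the 5,000-row headroom is an arbitrary guess.

    ```csharp
    using System.Collections.Generic;
    using System.Linq;

    // Sketch only: never send the full imported-ID list to SQL Server at all.
    static List<VendorRecord> GetNextRecordsToImport()
    {
        using (var custom = new CustomDataContext())
        using (var vendor = new VendorDataContext())
        {
            // Every ID already imported (only the int column comes over the wire).
            var imported = new HashSet<int>(custom.ImportedRecords.Select(r => r.VendorId));

            // Most recent candidate IDs from the vendor side.
            var candidateIds = vendor.Records
                .OrderByDescending(r => r.CreatedOn)
                .Select(r => r.Id)
                .Take(5000)                          // headroom to find 250 unimported rows
                .ToList();

            var wantedIds = candidateIds
                .Where(id => !imported.Contains(id))
                .Take(250)
                .ToList();

            // 250 parameters is comfortably below the 2,100 limit.
            return vendor.Records.Where(r => wantedIds.Contains(r.Id)).ToList();
        }
    }
    ```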

    Read the article

  • Good C++ array class for dealing with large arrays of data in a fast and memory-efficient way?

    - by Shane MacLaughlin
    Following on from a previous question relating to heap usage restrictions, I'm looking for a good standard C++ class for dealing with big arrays of data in a way that is both memory-efficient and speed-efficient. I had been allocating the array using a single malloc/HeapAlloc, but after multiple tries using various calls, I keep falling foul of heap fragmentation. So the conclusion I've come to, other than porting to 64-bit, is to use a mechanism that allows me to have a large array spanning multiple smaller memory fragments. I don't want an alloc per element as that is very memory-inefficient, so the plan is to write a class that overrides the [] operator and selects an appropriate element based on the index. Is there already a decent class out there to do this, or am I better off rolling my own? From my understanding, and some googling, a 32-bit Windows process should theoretically be able to address up to 2GB. Now assuming I've got 2GB installed, and various other processes and services are hogging about 400MB, how much usable memory do you think my program can reasonably expect to get from the heap? I'm currently using various flavours of Visual C++.

    Read the article

  • How to process a large POST array in PHP where item names are all different and not known in advance

    - by Salnajjar
    I have a PHP page that queries a DB to populate a form for the user to modify the data and submit. The query returns a number of rows, each of which contains 3 items: ImageID ImageName ImageDescription The PHP page titles each box in the form with a generic name and appends the ImageID to it, i.e.: ImageID_03 ImageName_34 ImageDescription_22 Since it's unknown which images are going to have been retrieved from the DB, I can't know in advance what the names of the form entries will be. The form deals with a large number of entries at the same time. My backend PHP form processor that gets the data just sees it as one big array: [imageid_2] => 2 [imagename_2] => _MG_0214 [imageid_10] => 10 [imagename_10] => _MG_0419 [imageid_39] => 39 [imagename_39] => _MG_0420 [imageid_22] => 22 [imagename_22] => Curly Fern [imagedescription_2] => Wibble [imagedescription_10] => Wobble [imagedescription_39] => Fred [imagedescription_22] => Sally I've tried to do an array walk on it to split it into 3 separate arrays, but am stuck: // define empty arrays $imageidarray = array(); $imagenamearray = array(); $imagedescriptionarray = array(); // our function to call when we walk through the posted items array function assignvars($entry, $key) { if (preg_match("/imageid/i", $key)) { array_push($imageidarray, $entry); } elseif (preg_match("/imagename/i", $key)) { // echo " ImageName: $entry"; } elseif (preg_match("/imagedescription/i", $key)) { // echo " ImageDescription: $entry"; } } array_walk($_POST, 'assignvars'); This fails with the error: array_push(): First argument should be an array in... Am I approaching this wrong?

    Read the article

  • 12.10 update breaks NFS mount

    - by TarekEldeeb
    I've just upgraded to the latest 12.10 beta. Rebooted twice. The problem is with the NFS folders not mounting, here's a verbose log. # mount -v myserver:/nfs_shared/tools /tools/ mount: no type was given - I'll assume nfs because of the colon mount.nfs: timeout set for Mon Oct 1 11:42:28 2012 mount.nfs: trying text-based options 'vers=4,addr=192.168.0.22,clientaddr=192.168.0.109' mount.nfs: mount(2): No such file or directory mount.nfs: trying text-based options 'addr=192.168.0.22' mount.nfs: prog 100003, trying vers=3, prot=6 mount.nfs: portmap query retrying: RPC: Timed out mount.nfs: prog 100003, trying vers=3, prot=17 mount.nfs: portmap query failed: RPC: Timed out mount.nfs: trying text-based options 'vers=4,addr=192.168.0.22,clientaddr=192.168.0.109' mount.nfs: mount(2): No such file or directory mount.nfs: trying text-based options 'addr=192.168.0.22' mount.nfs: prog 100003, trying vers=3, prot=6 mount.nfs: portmap query retrying: RPC: Timed out mount.nfs: prog 100003, trying vers=3, prot=17 mount.nfs: portmap query failed: RPC: Timed out mount.nfs: trying text-based options 'vers=4,addr=192.168.0.22,clientaddr=192.168.0.109' mount.nfs: mount(2): No such file or directory mount.nfs: trying text-based options 'addr=192.168.0.22' mount.nfs: prog 100003, trying vers=3, prot=6 mount.nfs: portmap query retrying: RPC: Timed out mount.nfs: prog 100003, trying vers=3, prot=17 mount.nfs: portmap query failed: RPC: Timed out mount.nfs: trying text-based options 'vers=4,addr=192.168.0.22,clientaddr=192.168.0.109' mount.nfs: mount(2): No such file or directory mount.nfs: trying text-based options 'addr=192.168.0.22' mount.nfs: prog 100003, trying vers=3, prot=6 mount.nfs: portmap query retrying: RPC: Timed out mount.nfs: prog 100003, trying vers=3, prot=17 mount.nfs: portmap query failed: RPC: Timed out mount.nfs: trying text-based options 'vers=4,addr=192.168.0.22,clientaddr=192.168.0.109' mount.nfs: mount(2): No such file or directory mount.nfs: trying text-based options 'addr=192.168.0.22' mount.nfs: prog 100003, trying vers=3, prot=6 mount.nfs: portmap query retrying: RPC: Timed out mount.nfs: prog 100003, trying vers=3, prot=17 mount.nfs: portmap query failed: RPC: Timed out mount.nfs: trying text-based options 'vers=4,addr=192.168.0.22,clientaddr=192.168.0.109' mount.nfs: mount(2): No such file or directory mount.nfs: trying text-based options 'addr=192.168.0.22' mount.nfs: prog 100003, trying vers=3, prot=6 mount.nfs: portmap query retrying: RPC: Timed out mount.nfs: prog 100003, trying vers=3, prot=17 mount.nfs: portmap query failed: RPC: Timed out mount.nfs: trying text-based options 'vers=4,addr=192.168.0.22,clientaddr=192.168.0.109' mount.nfs: mount(2): No such file or directory mount.nfs: trying text-based options 'addr=192.168.0.22' mount.nfs: prog 100003, trying vers=3, prot=6 mount.nfs: portmap query retrying: RPC: Timed out mount.nfs: prog 100003, trying vers=3, prot=17 mount.nfs: portmap query failed: RPC: Timed out mount.nfs: trying text-based options 'vers=4,addr=192.168.0.22,clientaddr=192.168.0.109' mount.nfs: mount(2): No such file or directory mount.nfs: trying text-based options 'addr=192.168.0.22' mount.nfs: prog 100003, trying vers=3, prot=6 mount.nfs: portmap query retrying: RPC: Timed out mount.nfs: prog 100003, trying vers=3, prot=17 mount.nfs: portmap query failed: RPC: Timed out mount.nfs: trying text-based options 'vers=4,addr=192.168.0.22,clientaddr=192.168.0.109' mount.nfs: mount(2): No such file or directory mount.nfs: trying text-based options 
'addr=192.168.0.22' mount.nfs: prog 100003, trying vers=3, prot=6 mount.nfs: portmap query retrying: RPC: Timed out mount.nfs: prog 100003, trying vers=3, prot=17 mount.nfs: portmap query failed: RPC: Timed out mount.nfs: trying text-based options 'vers=4,addr=192.168.0.22,clientaddr=192.168.0.109' mount.nfs: mount(2): No such file or directory mount.nfs: trying text-based options 'addr=192.168.0.22' mount.nfs: prog 100003, trying vers=3, prot=6 mount.nfs: portmap query retrying: RPC: Timed out mount.nfs: prog 100003, trying vers=3, prot=17 mount.nfs: portmap query failed: RPC: Timed out mount.nfs: Connection timed out My IP is 109, the server is 22. Just before the last upgrade, I was able to mount normally. Other PCs on my network are able to mount too, it's not a server issue. How can I analyse the problem? Certain log files? Any help?

    Read the article

  • Problem with script substitution when running script

    - by tucaz
    Hi! I'm new to Linux so this probably should be an easy fix, but I cannot see it. I have a script downloaded from official sources that is used to install additional tools for fsharp but it gives me a syntax error when running it. I tried to replace ( and ) by { and } but eventually it lead me to another error so I think this is not the problem since the script works for everybody. I read some articles that say that my bash version maybe is not the right one. I'm using Ubuntu 10.10 and here is the error: install-bonus.sh: 28: Syntax error: "(" unexpected (expecting "}") And this is line 27, 28 and 29: { declare -a DIRS=("${!3}") FILE=$2 And the full script: #! /bin/sh -e PREFIX=/usr BIN=$PREFIX/bin MAN=$PREFIX/share/man/man1/ die() { echo "$1" &2 echo "Installation aborted." &2 exit 1 } echo "This script will install additional material for F# including" echo "man pages, fsharpc and fsharpi scripts and Gtk# support for F#" echo "Interactive (root access needed)" echo "" # ------------------------------------------------------------------------------ # Utility function that searches specified directories for a specified file # and if the file is not found, it asks user to provide a directory RESULT="" searchpaths() { declare -a DIRS=("${!3}") FILE=$2 DIR=${DIRS[0]} for TRYDIR in ${DIRS[@]} do if [ -f $TRYDIR/$FILE ] then DIR=$TRYDIR fi done while [ ! -f $DIR/$FILE ] do echo "File '$FILE' was not found in any of ${DIRS[@]}. Please enter $1 installation directory:" read DIR done RESULT=$DIR } # ------------------------------------------------------------------------------ # Locate F# installation directory - this is needed, because we want to # add environment variable with it, generate 'fsharpc' and 'fsharpi' and also # copy load-gtk.fsx to that directory # ------------------------------------------------------------------------------ PATHS=( $1 /usr/lib/fsharp /usr/lib/shared/fsharp ) searchpaths "F# installation" FSharp.Core.dll PATHS[@] FSHARPDIR=$RESULT echo "Successfully found F# installation directory." # ------------------------------------------------------------------------------ # Check that we have everything we need # ------------------------------------------------------------------------------ [ $(id -u) -eq 0 ] || die "Please run the script as root." which mono /dev/null || die "mono not found in PATH." # ------------------------------------------------------------------------------ # Make sure that all additional assemblies are in GAC # ------------------------------------------------------------------------------ echo "Installing additional F# assemblies to the GAC" gacutil -i $FSHARPDIR/FSharp.Build.dll gacutil -i $FSHARPDIR/FSharp.Compiler.dll gacutil -i $FSHARPDIR/FSharp.Compiler.Interactive.Settings.dll gacutil -i $FSHARPDIR/FSharp.Compiler.Server.Shared.dll # ------------------------------------------------------------------------------ # Install additional files # ------------------------------------------------------------------------------ # Install man pages echo "Installing additional F# commands, scripts and man pages" mkdir -p $MAN cp *.1 $MAN # Export the FSHARP_COMPILER_BIN environment variable if [[ ! 
"$OSTYPE" =~ "darwin" ]]; then echo "export FSHARP_COMPILER_BIN=$FSHARPDIR" fsharp.sh mv fsharp.sh /etc/profile.d/ fi # Generate 'load-gtk.fsx' script for F# Interactive (ask user if we cannot find binaries) PATHS=( /usr/lib/mono/gtk-sharp-2.0 /usr/lib/cli/gtk-sharp-2.0 /Library/Frameworks/Mono.framework/Versions/2.8/lib/mono/gtk-sharp-2.0 ) searchpaths "Gtk#" gtk-sharp.dll PATHS[@] GTKDIR=$RESULT echo "Successfully found Gtk# root directory." PATHS=( /usr/lib/mono/gtk-sharp-2.0 /usr/lib/cli/glib-sharp-2.0 /Library/Frameworks/Mono.framework/Versions/2.8/lib/mono/gtk-sharp-2.0 ) searchpaths "Glib" glib-sharp.dll PATHS[@] GLIBDIR=$RESULT echo "Successfully found Glib# root directory." PATHS=( /usr/lib/mono/gtk-sharp-2.0 /usr/lib/cli/atk-sharp-2.0 /Library/Frameworks/Mono.framework/Versions/2.8/lib/mono/gtk-sharp-2.0 ) searchpaths "Atk#" atk-sharp.dll PATHS[@] ATKDIR=$RESULT echo "Successfully found Atk# root directory." PATHS=( /usr/lib/mono/gtk-sharp-2.0 /usr/lib/cli/gdk-sharp-2.0 /Library/Frameworks/Mono.framework/Versions/2.8/lib/mono/gtk-sharp-2.0 ) searchpaths "Gdk#" gdk-sharp.dll PATHS[@] GDKDIR=$RESULT echo "Successfully found Gdk# root directory." cp bonus/load-gtk.fsx load-gtk1.fsx sed "s,INSERTGTKPATH,$GTKDIR,g" load-gtk1.fsx load-gtk2.fsx sed "s,INSERTGDKPATH,$GDKDIR,g" load-gtk2.fsx load-gtk3.fsx sed "s,INSERTATKPATH,$ATKDIR,g" load-gtk3.fsx load-gtk4.fsx sed "s,INSERTGLIBPATH,$GLIBDIR,g" load-gtk4.fsx load-gtk.fsx rm load-gtk1.fsx rm load-gtk2.fsx rm load-gtk3.fsx rm load-gtk4.fsx mv load-gtk.fsx $FSHARPDIR/load-gtk.fsx # Generate 'fsharpc' and 'fsharpi' scripts (using the F# path) # 'fsharpi' automatically searches F# root directory (e.g. load-gtk) echo "#!/bin/sh" fsharpc echo "exec mono $FSHARPDIR/fsc.exe --resident \"\$@\"" fsharpc chmod 755 fsharpc echo "#!/bin/sh" fsharpi echo "exec mono $FSHARPDIR/fsi.exe -I:\"$FSHARPDIR\" \"\$@\"" fsharpi chmod 755 fsharpi mv fsharpc $BIN/fsharpc mv fsharpi $BIN/fsharpi Thanks a lot!

    Read the article

  • Problems compiling scangearmp

    - by maat
    I have already installed some missing packages, but i still get that error. Can anyone help me identify what is missing ? Thanks a lot psr@psr-EasyNote-TM85:~/Downloads/scangearmp-source-2.10-1/scangearmp$ make make all-recursive make[1]: Entering directory `/home/psr/Downloads/scangearmp-source-2.10-1/scangearmp' Making all in po make[2]: Entering directory `/home/psr/Downloads/scangearmp-source-2.10-1/scangearmp/po' make[2]: Nothing to be done for `all'. make[2]: Leaving directory `/home/psr/Downloads/scangearmp-source-2.10-1/scangearmp/po' Making all in backend make[2]: Entering directory `/home/psr/Downloads/scangearmp-source-2.10-1/scangearmp/backend' /bin/bash ../libtool --tag=CC --mode=compile gcc -DHAVE_CONFIG_H -I. -I.. -I. -I./include -DV_MAJOR=2 -DV_MINOR=1 -O2 -D__GIMP_PLUGIN_ENABLE__ -D_FILE_OFFSET_BITS=64 -MT libsane_canon_mfp_la-canon_mfp_tools.lo -MD -MP -MF .deps/libsane_canon_mfp_la-canon_mfp_tools.Tpo -c -o libsane_canon_mfp_la-canon_mfp_tools.lo `test -f 'canon_mfp_tools.c' || echo './'`canon_mfp_tools.c libtool: compile: gcc -DHAVE_CONFIG_H -I. -I.. -I. -I./include -DV_MAJOR=2 -DV_MINOR=1 -O2 -D__GIMP_PLUGIN_ENABLE__ -D_FILE_OFFSET_BITS=64 -MT libsane_canon_mfp_la-canon_mfp_tools.lo -MD -MP -MF .deps/libsane_canon_mfp_la-canon_mfp_tools.Tpo -c canon_mfp_tools.c -fPIC -DPIC -o .libs/libsane_canon_mfp_la-canon_mfp_tools.o canon_mfp_tools.c:40:17: fatal error: usb.h: No such file or directory #include ^ compilation terminated. make[2]: *** [libsane_canon_mfp_la-canon_mfp_tools.lo] Error 1 make[2]: Leaving directory `/home/psr/Downloads/scangearmp-source-2.10-1/scangearmp/backend' make[1]: *** [all-recursive] Error 1 make[1]: Leaving directory `/home/psr/Downloads/scangearmp-source-2.10-1/scangearmp' make: *** [all] Error 2 psr@psr-EasyNote-TM85:~/Downloads/scangearmp-source-2.10-1/scangearmp$

    Read the article

  • GCC 4.5 installation problem under Ubuntu

    - by Claire Huang
    I tried to install gcc 4.5 on ubuntu 10.04 but failed. Here is a compile error that I don't know how to solve. Is there anyone successfully install the latest gcc on ubuntu? Following is my steps and the error message, I'd like to know where is the problem.... Step1: download these files: gcc-core-4.5.0.tar.gz gcc-g++-4.5.0.tar.gz gmp-4.3.2.tar.bz2 mpc-0.8.1.tar.gz mpfr-2.4.2.tar.gz Step2: Unzip above files Step3: move gmp, mpc, mpfr to the gcc-4.5.0/ directory. mv gmp-4.3.2 gcc-4.5.0/gmp mv mpc-0.8.1 gcc-4.5.0/mpc mv mpfr-2.4.2 gcc-4.5.0/mpfr Step4: go to gcc-4.5.0 directory and do configuration: sudo ./configure Step5: compile and install sudo make sudo make install The first 4 steps is OK, I can configure it successfully. However, when I try to compile it, following error message comes out, I cannot figure out what the problem is. Should I change the name from "gcc 4.5" to "gcc"?? It's a little strange that we need to do this by ourself. Is there anything I missed during the installation? xxx@xxx-laptop:/media/Data/Tool/linux/gcc 4.5/gcc-4.5.0$ sudo make [sudo] password for xxx: [ -f stage_final ] || echo stage3 > stage_final /bin/bash: line 2: test: /media/Data/Tool/linux/gcc: binary operator expected /bin/bash: /media/Data/Tool/linux/gcc: No such file or directory make[1]: Entering directory `/media/Data/Tool/linux/gcc 4.5/gcc-4.5.0' make[2]: Entering directory `/media/Data/Tool/linux/gcc 4.5/gcc-4.5.0' make[3]: Entering directory `/media/Data/Tool/linux/gcc 4.5/gcc-4.5.0' rm -f stage_current make[3]: Leaving directory `/media/Data/Tool/linux/gcc 4.5/gcc-4.5.0' make[2]: Leaving directory `/media/Data/Tool/linux/gcc 4.5/gcc-4.5.0' make[2]: Entering directory `/media/Data/Tool/linux/gcc 4.5/gcc-4.5.0' Configuring stage 1 in host-x86_64-unknown-linux-gnu/intl /bin/bash: /media/Data/Tool/linux/gcc: No such file or directory make[2]: *** [configure-stage1-intl] Error 127 make[2]: Leaving directory `/media/Data/Tool/linux/gcc 4.5/gcc-4.5.0' make[1]: *** [stage1-bubble] Error 2 make[1]: Leaving directory `/media/Data/Tool/linux/gcc 4.5/gcc-4.5.0' make: *** [all] Error 2

    Read the article

  • QtOpenCL make errors. Please help.

    - by Skkard
    So I downloaded the ATI Stream SDK. I don't have a gpu now so I use the '-device cpu' and got the programs/examples in the OpenCl directory working by adding the directory to LD_LIBRARY_PATH etc. Now the problem is when installing QtOpenCl. configure script gives me: skkard@skkard-desktop:~/Applications/qt-labs-opencl$ ./configure This is the QtOpenCL configuration utility. Qt version ............. 4.6.2 qmake .................. /usr/bin/qmake OpenCL ................. yes OpenCL/OpenGL interop .. yes Extra QMAKE_CXXFLAGS ... Extra INCLUDEPATH ...... Extra LIBS ............. -lOpenCL QtOpenCL has been configured. Run '/usr/bin/make' to build. Make gives me: skkard@skkard-desktop:~/Applications/qt-labs-opencl$ make cd src/ && make -f Makefile make[1]: Entering directory `/home/skkard/Applications/qt-labs-opencl/src' cd opencl/ && make -f Makefile make[2]: Entering directory `/home/skkard/Applications/qt-labs-opencl/src/opencl' make[2]: Nothing to be done for `first'. make[2]: Leaving directory `/home/skkard/Applications/qt-labs-opencl/src/opencl' cd openclgl/ && make -f Makefile make[2]: Entering directory `/home/skkard/Applications/qt-labs-opencl/src/openclgl' make[2]: Nothing to be done for `first'. make[2]: Leaving directory `/home/skkard/Applications/qt-labs-opencl/src/openclgl' make[1]: Leaving directory `/home/skkard/Applications/qt-labs-opencl/src' cd examples/ && make -f Makefile make[1]: Entering directory `/home/skkard/Applications/qt-labs-opencl/examples' cd opencl/ && make -f Makefile make[2]: Entering directory `/home/skkard/Applications/qt-labs-opencl/examples/opencl' cd vectoradd/ && make -f Makefile make[3]: Entering directory `/home/skkard/Applications/qt-labs-opencl/examples/opencl/vectoradd' g++ -o vectoradd vectoradd.o qrc_vectoradd.o -L/usr/lib -L../../../lib -L../../../bin -lQtOpenCL -lQtGui -lQtCore -lpthread ../../../lib/libQtOpenCL.so: undefined reference to `clBuildProgram' ../../../lib/libQtOpenCL.so: undefined reference to `clSetCommandQueueProperty' ../../../lib/libQtOpenCL.so: undefined reference to `clEnqueueNDRangeKernel' ../../../lib/libQtOpenCL.so: undefined reference to `clSetKernelArg' ../../../lib/libQtOpenCL.so: undefined reference to `clEnqueueCopyBufferToImage' ../../../lib/libQtOpenCL.so: undefined reference to `clReleaseMemObject' ../../../lib/libQtOpenCL.so: undefined reference to `clFinish' ../../../lib/libQtOpenCL.so: undefined reference to `clEnqueueUnmapMemObject' ../../../lib/libQtOpenCL.so: undefined reference to `clGetMemObjectInfo' ../../../lib/libQtOpenCL.so: undefined reference to `clEnqueueReadImage' ../../../lib/libQtOpenCL.so: undefined reference to `clEnqueueMarker' ../../../lib/libQtOpenCL.so: undefined reference to `clRetainCommandQueue' ../../../lib/libQtOpenCL.so: undefined reference to `clGetCommandQueueInfo' ../../../lib/libQtOpenCL.so: undefined reference to `clEnqueueCopyImage' ../../../lib/libQtOpenCL.so: undefined reference to `clReleaseContext' ../../../lib/libQtOpenCL.so: undefined reference to `clRetainMemObject' ../../../lib/libQtOpenCL.so: undefined reference to `clReleaseEvent' ../../../lib/libQtOpenCL.so: undefined reference to `clEnqueueWriteBuffer' ../../../lib/libQtOpenCL.so: undefined reference to `clEnqueueCopyBuffer' ../../../lib/libQtOpenCL.so: undefined reference to `clEnqueueMapImage' ../../../lib/libQtOpenCL.so: undefined reference to `clEnqueueReadBuffer' ../../../lib/libQtOpenCL.so: undefined reference to `clUnloadCompiler' ../../../lib/libQtOpenCL.so: undefined reference to `clEnqueueBarrier' 
../../../lib/libQtOpenCL.so: undefined reference to `clGetProgramBuildInfo' ../../../lib/libQtOpenCL.so: undefined reference to `clEnqueueWaitForEvents' ../../../lib/libQtOpenCL.so: undefined reference to `clRetainProgram' ../../../lib/libQtOpenCL.so: undefined reference to `clCreateContext' ../../../lib/libQtOpenCL.so: undefined reference to `clCreateImage3D' ../../../lib/libQtOpenCL.so: undefined reference to `clEnqueueMapBuffer' ../../../lib/libQtOpenCL.so: undefined reference to `clGetDeviceIDs' ../../../lib/libQtOpenCL.so: undefined reference to `clGetContextInfo' ../../../lib/libQtOpenCL.so: undefined reference to `clGetDeviceInfo' ../../../lib/libQtOpenCL.so: undefined reference to `clReleaseCommandQueue' ../../../lib/libQtOpenCL.so: undefined reference to `clGetSamplerInfo' ../../../lib/libQtOpenCL.so: undefined reference to `clGetPlatformIDs' ../../../lib/libQtOpenCL.so: undefined reference to `clGetSupportedImageFormats' ../../../lib/libQtOpenCL.so: undefined reference to `clGetPlatformInfo' ../../../lib/libQtOpenCL.so: undefined reference to `clWaitForEvents' ../../../lib/libQtOpenCL.so: undefined reference to `clGetEventInfo' ../../../lib/libQtOpenCL.so: undefined reference to `clGetEventProfilingInfo' ../../../lib/libQtOpenCL.so: undefined reference to `clGetImageInfo' ../../../lib/libQtOpenCL.so: undefined reference to `clCreateProgramWithBinary' ../../../lib/libQtOpenCL.so: undefined reference to `clReleaseSampler' ../../../lib/libQtOpenCL.so: undefined reference to `clCreateCommandQueue' ../../../lib/libQtOpenCL.so: undefined reference to `clGetKernelWorkGroupInfo' ../../../lib/libQtOpenCL.so: undefined reference to `clRetainEvent' ../../../lib/libQtOpenCL.so: undefined reference to `clRetainContext' ../../../lib/libQtOpenCL.so: undefined reference to `clCreateSampler' ../../../lib/libQtOpenCL.so: undefined reference to `clReleaseProgram' ../../../lib/libQtOpenCL.so: undefined reference to `clFlush' ../../../lib/libQtOpenCL.so: undefined reference to `clGetProgramInfo' ../../../lib/libQtOpenCL.so: undefined reference to `clCreateKernel' ../../../lib/libQtOpenCL.so: undefined reference to `clRetainKernel' ../../../lib/libQtOpenCL.so: undefined reference to `clEnqueueWriteImage' ../../../lib/libQtOpenCL.so: undefined reference to `clCreateBuffer' ../../../lib/libQtOpenCL.so: undefined reference to `clCreateKernelsInProgram' ../../../lib/libQtOpenCL.so: undefined reference to `clCreateProgramWithSource' ../../../lib/libQtOpenCL.so: undefined reference to `clReleaseKernel' ../../../lib/libQtOpenCL.so: undefined reference to `clRetainSampler' ../../../lib/libQtOpenCL.so: undefined reference to `clCreateImage2D' ../../../lib/libQtOpenCL.so: undefined reference to `clEnqueueCopyImageToBuffer' ../../../lib/libQtOpenCL.so: undefined reference to `clGetKernelInfo' collect2: ld returned 1 exit status make[3]: *** [vectoradd] Error 1 make[3]: Leaving directory `/home/skkard/Applications/qt-labs-opencl/examples/opencl/vectoradd' make[2]: *** [sub-vectoradd-make_default] Error 2 make[2]: Leaving directory `/home/skkard/Applications/qt-labs-opencl/examples/opencl' make[1]: *** [sub-opencl-make_default] Error 2 make[1]: Leaving directory `/home/skkard/Applications/qt-labs-opencl/examples' make: *** [sub-examples-make_default-ordered] Error 2 Tried it using the '-no-openclgl', but none of the examples etc are compiled. I'm using ubuntu 10.04 using the Qt which is installed from synaptic.

    Read the article

  • MongoDB performance on Windows

    - by Chris
    I've been researching nosql options available for .NET lately and MongoDB is emerging as a clear winner in terms of availability and support, so tonight I decided to give it a go. I downloaded version 1.2.4 (Windows x64 binary) from the mongodb site and ran it with the following options: C:\mongodb\bin>mkdir data C:\mongodb\bin>mongod -dbpath ./data --cpu --quiet I then loaded up the latest mongodb-csharp driver from http://github.com/samus/mongodb-csharp and immediately ran the benchmark program. Having heard about how "amazingly fast" MongoDB is, I was rather shocked at the poor benchmark performance. Starting Tests encode (small).........................................320000 00:00:00.0156250 encode (medium)........................................80000 00:00:00.0625000 encode (large).........................................1818 00:00:02.7500000 decode (small).........................................320000 00:00:00.0156250 decode (medium)........................................160000 00:00:00.0312500 decode (large).........................................2370 00:00:02.1093750 insert (small, no index)...............................2176 00:00:02.2968750 insert (medium, no index)..............................2269 00:00:02.2031250 insert (large, no index)...............................778 00:00:06.4218750 insert (small, indexed)................................2051 00:00:02.4375000 insert (medium, indexed)...............................2133 00:00:02.3437500 insert (large, indexed)................................835 00:00:05.9843750 batch insert (small, no index).........................53333 00:00:00.0937500 batch insert (medium, no index)........................26666 00:00:00.1875000 batch insert (large, no index).........................1114 00:00:04.4843750 find_one (small, no index).............................350 00:00:14.2812500 find_one (medium, no index)............................204 00:00:24.4687500 find_one (large, no index).............................135 00:00:37.0156250 find_one (small, indexed)..............................352 00:00:14.1718750 find_one (medium, indexed).............................184 00:00:27.0937500 find_one (large, indexed)..............................128 00:00:38.9062500 find (small, no index).................................516 00:00:09.6718750 find (medium, no index)................................316 00:00:15.7812500 find (large, no index).................................216 00:00:23.0468750 find (small, indexed)..................................532 00:00:09.3906250 find (medium, indexed).................................346 00:00:14.4375000 find (large, indexed)..................................212 00:00:23.5468750 find range (small, indexed)............................440 00:00:11.3593750 find range (medium, indexed)...........................294 00:00:16.9531250 find range (large, indexed)............................199 00:00:25.0625000 Press any key to continue... For starters, I can get better non-batch insert performance from SQL Server Express. What really struck me, however, was the slow performance of the find_nnnn queries. Why is retrieving data from MongoDB so slow? What am I missing? Edit: This was all on the local machine, no network latency or anything. MongoDB's CPU usage ran at about 75% the entire time the test was running. Edit 2: Also, I ran a trace on the benchmark program and confirmed that 50% of the CPU time spent was waiting for MongoDB to return data, so it's not a performance issue with the C# driver.

    Read the article

< Previous Page | 197 198 199 200 201 202 203 204 205 206 207 208  | Next Page >