Search Results

Search found 36925 results on 1477 pages for 'large xml document'.

Page 597/1477 | < Previous Page | 593 594 595 596 597 598 599 600 601 602 603 604  | Next Page >

  • Colorbox iFrame content not appearing in IE 8/9

    - by Rocketpig
    I'm using ColorBox to call up a few informational modals on-screen, and given the client's requirements the best way to do this is via iframes (not my first choice, but whatever). Everything is working peachy in Chrome, FF, etc., but the iframe content is not working in any version of IE: the modal wrapper appears, but nothing is inside it.

    This is what I've done so far: changed the doctype to transitional and then strict for IE (no dice); removed the "iframe: true" option and replaced the content with the HTML string "Hello", which worked fine and "Hello" appeared in the ColorBox modal; removed all stylesheets from the header (no luck, so it's not a CSS issue); and, just to be sure, rolled my jQuery library back from 1.8.2 to 1.6.2 (nothing there, either). Any help would be appreciated. This is aggravating.

    Some code:

        $(function () {
            $(".modal-large").colorbox({iframe: true, innerWidth: 580, innerHeight: 500});
        });

    HTML:

        <div class="top-droptext">
            <a class="modal-large" href="modal/serviceproviderinfo.html">Update Password</a>
        </div>

    Read the article

  • How can I make hundreds of simultaneously running processes communicate with a database through one or few permanent sessions?

    - by Olfan
    Long story short: how can I make hundreds of simultaneously running processes communicate with a database through one or a few permanent sessions?

    The whole story: I once built a number-crunching engine that handles vast amounts of large data files by forking off one child after another, giving each a small number of files to work on. File locking, progress monitoring and result propagation happen in an Oracle database which all (sub)processes access at various times using an application-specific module that encapsulates DBI. This worked well at first, but now, with higher volumes of input data, the number of database sessions (one per child, and they can be very short-lived) constantly being opened and closed is becoming an issue.

    I now want to centralise database access so that only one or a few fixed database sessions handle all database access for all the (sub)processes. The presence of the database abstraction module should make the changes easy, because the function calls in the worker instances can stay the same. My problem is that I cannot think of a suitable way to enhance said module in order to establish communication between all the processes and the database connector(s).

    I thought of message queueing, but couldn't come up with a way of connecting a large herd of requestors to one or a few database connectors so that bidirectional communication is possible (for collecting the query results). An asynchronous approach could help, in that all requests are written to the same queue and the database connector servicing a request "calls back" to submit the result. But I can't picture it clearly enough to turn it into code. Threading instead of forking might have given me an easier start, but that would now require massive changes to a code base that I'm not prepared to make on a live system.

    The more I think of it, the more the base idea looks like a pre-forked web server to me, only one that serves database queries instead of web pages. Any ideas on what to dig into, and where? Sample (pseudo) code to inspire me, links to possibly related articles, ready solutions on CPAN, maybe?
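
    Below is a minimal sketch of that "same queue plus call-back" idea, written in Python with multiprocessing for brevity (the original code is Perl, so everything here, from the names to the queue layout to the stubbed-out query, is an illustrative assumption rather than the actual module):

        import multiprocessing as mp

        def db_connector(request_q):
            # One long-lived session serves every worker. A real version
            # would open its single Oracle/DBI connection here (assumed).
            while True:
                item = request_q.get()
                if item is None:                       # shutdown sentinel
                    return
                sql, params, reply_q = item
                result = ("rows for", sql, params)     # assume: cursor.execute(...)
                reply_q.put(result)                    # "call back" to the requestor

        def worker(wid, request_q, reply_q):
            # Forked children own no session; they only know the queues.
            request_q.put(("SELECT 1 FROM dual WHERE :1 = :1", (wid,), reply_q))
            print(wid, "got:", reply_q.get())

        if __name__ == "__main__":
            mgr = mp.Manager()              # manager queues are picklable proxies,
            request_q = mgr.Queue()         # so a reply queue can travel inside a request
            connector = mp.Process(target=db_connector, args=(request_q,))
            connector.start()
            workers = [mp.Process(target=worker, args=(i, request_q, mgr.Queue()))
                       for i in range(4)]
            for w in workers: w.start()
            for w in workers: w.join()
            request_q.put(None)             # stop the connector
            connector.join()

    The per-worker reply queue is what makes the communication bidirectional: each request carries its own return address, so one connector (or a small pool) can service any number of short-lived children.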

    Read the article

  • Scientific data processing (graph comparison and interpretation)

    - by pinkynobrain
    Hi stackoverflow friends,

    I'm trying to write a program to automate one of my more boring and repetitive work tasks. I have some programming experience, but none with processing or interpreting large volumes of data, so I am seeking your advice (both suggestions of techniques to try and things to read to learn more about doing this stuff).

    I have a piece of equipment that monitors an experiment by taking repeated samples and displays the readings on its screen as a graph. The input of the experiment can be altered, and one of these changes should produce a change in a section of the graph, which I currently identify by eye and which is what I'm looking for in the experiment. I want to automate this so that a computer looks at a set of results and spots the experiment input that causes the change.

    I can already extract the results from the machine. Currently the results for a run are in the form of an integer array, with the index being the sample number and the corresponding value being the measurement. The overall shape of the graph will be similar for each experiment run, and the change I'm looking for will be roughly the same and will occur in approximately the same place every time for the correct experiment input.

    Unfortunately there are a few gotchas that make this problem more difficult. There is some noise in the measuring process, which means there is some random variation in the measured values between different runs, although the overall shape of the graph remains the same. Also, the time the experiment takes varies slightly from run to run, causing two effects: first, a whole graph may be shifted slightly on the x-axis relative to another run's graph; second, individual features may appear slightly wider or narrower in different runs. In both cases the variation isn't particularly large, and you can assume that the only non-random variation is caused by the correct input being found.

    Thank you for your time,
    Pinky
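
    For the x-axis shift in particular, one common starting point (my suggestion, not something from the question) is cross-correlation of the two sample arrays; the noise largely averages out because every sample contributes. A small NumPy sketch:

        import numpy as np

        def estimate_shift(reference, run):
            # Normalise both runs, then find the lag that maximises their
            # cross-correlation; that lag is the x-axis offset between them.
            a = (reference - reference.mean()) / reference.std()
            b = (run - run.mean()) / run.std()
            corr = np.correlate(a, b, mode="full")
            return int(np.argmax(corr)) - (len(b) - 1)

        ref = np.array([0, 1, 4, 9, 4, 1, 0, 0, 0], dtype=float)
        run = np.array([0, 0, 0, 1, 4, 9, 4, 1, 0], dtype=float)
        print(estimate_shift(ref, run))   # -> -2: the feature arrives 2 samples later

    Once runs are aligned, subtracting a reference run (or an average of known-good runs) and looking for a residual above the noise floor in the expected region is a reasonable first detector; dynamic time warping is the usual next step if the feature-width variation turns out to matter.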

    Read the article

  • Determining if an unordered vector<T> has all unique elements

    - by Hooked
    Profiling my CPU-bound code has suggested that I spend a long time checking whether a container contains completely unique elements. Assuming that I have some large container of unsorted elements (with < and == defined), I have two ideas on how this might be done.

    The first uses a set:

        template <class T>
        bool is_unique(vector<T> X) {
            set<T> Y(X.begin(), X.end());
            return X.size() == Y.size();
        }

    The second loops over the elements:

        template <class T>
        bool is_unique2(vector<T> X) {
            typename vector<T>::iterator i, j;
            for (i = X.begin(); i != X.end(); ++i) {
                for (j = i + 1; j != X.end(); ++j) {
                    if (*i == *j) return 0;
                }
            }
            return 1;
        }

    I've tested them as best I can, and from what I can gather from reading the documentation about the STL, the answer is (as usual) that it depends. I think that in the first case, if all the elements are unique, it is very quick, but if there is a large degeneracy the operation seems to take O(N^2) time. For the nested-iterator approach the opposite seems to be true: it is lightning fast if X[0] == X[1], but takes (understandably) O(N^2) time if all the elements are unique.

    Is there a better way to do this, perhaps an STL algorithm built for this very purpose? If not, are there any suggestions to eke out a bit more efficiency?
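
    One idea worth timing (my suggestion, not from the question): insert into the set one element at a time and bail out on the first duplicate. That combines the early exit of the nested loop with the set version's complexity; in C++, set::insert returns a pair<iterator, bool> whose bool is false when the element was already present. A Python sketch of the control flow:

        def is_unique(xs):
            # One pass, early exit: stops at the first duplicate (like the
            # nested loop when X[0] == X[1]) while staying O(n) expected
            # with a hash set, or O(n log n) with a tree set.
            seen = set()
            for x in xs:
                if x in seen:
                    return False
                seen.add(x)
            return True

        print(is_unique([3, 1, 4, 1, 5]))   # False, found at the fourth element
        print(is_unique([3, 1, 4, 5]))      # True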

    Read the article

  • Quickest way to compare a bunch of arrays or lists of values

    - by zapping
    Can you please let me know the quickest and most efficient way to compare a large set of values?

    There is a list of parent codes (strings), and each code has a series of child values (strings). The child lists have to be compared with each other to find duplicates and count how many times they repeat:

        code1(code1_value1, code1_value2, code3_value3, ..., code1_valueN);
        code2(code2_value1, code1_value2, code2_value3, ..., code2_valueN);
        code3(code2_value1, code3_value2, code3_value3, ..., code3_valueN);
        ...
        codeN(codeN_value1, codeN_value2, codeN_value3, ..., codeN_valueN);

    The lists are huge: say there are 100 parent codes, and each has about 250 values. There will be no duplicates within a single code list.

    I'm doing it in Java, and the solution I could figure out is this: store the values of the first code list as codeMap.put(codeValue, duplicateCount), with the count initialised to 0, then compare the rest of the values against this; if a value is in the map, increment the count, otherwise add it to the map. The downside of this is getting the duplicates out at the end: another iteration has to be performed over a very large list. An alternative is to maintain a second hashmap for duplicates, duplicateCodeMap.put(codeValue, duplicateCount), and change the initial hashmap to codeMap.put(codeValue, codeValue).

    Speed is the requirement. Hope one of you can help me with it.
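
    For scale, 100 codes x 250 values is only 25,000 strings, so a single hash-map pass is plenty. A sketch of that one-pass idea in Python (illustrative only; in Java, a HashMap<String, Integer> plays the role of the Counter):

        from collections import Counter
        from itertools import chain

        # Hypothetical stand-ins for the parent code lists described above.
        code1 = ["a", "b", "c"]
        code2 = ["b", "c", "d"]
        code3 = ["c", "d", "e"]

        # One pass over all child values. Since no value repeats *within*
        # a list, any count > 1 means the value repeats across parent codes.
        counts = Counter(chain(code1, code2, code3))
        duplicates = {value: n for value, n in counts.items() if n > 1}
        print(duplicates)   # {'b': 2, 'c': 3, 'd': 2}

    This also sidesteps the second full iteration the question worries about: the "which are duplicates" filter runs over the distinct values in the map, not over the raw lists again.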

    Read the article

  • How to load/save C++ class instance (using STL containers) to disk

    - by supert
    I have a C++ class representing a hierarchically organised data tree which is very large (~GB; basically as large as I can get away with in memory). It uses an STL list to store information at each node, plus iterators to other nodes. Each node has only one parent, but 0-10 children. Abstracted, it looks something like this:

        struct node {
        public:
            node_list_iterator parent;              // iterator to the single parent node
            double node_data_array[X];
            map<int, node_list_iterator> children;  // iterators to child nodes
        };

        class strategy {
        private:
            list<node> tree;   // hierarchically linked list of nodes
            struct some_other_data;
        public:
            void build();      // build the tree
            void save();       // save the tree to disk
            void load();       // load the tree from disk
            void use();        // use the tree
        };

    I would like to implement load() and save() to disk, and it should be fairly fast. However, the obvious problems are: I don't know the size in advance; the data contains iterators, which are volatile; and my ignorance of C++ is prodigious. Could anyone suggest a pure C++ solution please?
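
    The usual trick for the iterator problem is to translate iterators into list positions on the way out and back into iterators on the way in. A sketch of that translation in Python, just to show the shape of it (a C++ save() would walk the list once to build a map from iterator to position, then stream each node with parent/child indices instead of iterators; load() reverses the mapping):

        import pickle

        class Node:
            def __init__(self, parent, data, children):
                self.parent = parent        # object reference; an iterator in C++
                self.data = data
                self.children = children    # {key: node reference}

        def save(tree, path):
            # First pass: assign every node its position in the list.
            index = {id(n): i for i, n in enumerate(tree)}
            # Second pass: store positions instead of references.
            flat = [(index.get(id(n.parent), -1), n.data,
                     {k: index[id(c)] for k, c in n.children.items()})
                    for n in tree]
            with open(path, "wb") as f:
                pickle.dump(flat, f)

        def load(path):
            with open(path, "rb") as f:
                flat = pickle.load(f)
            nodes = [Node(None, data, {}) for _, data, _ in flat]
            for node, (p, _, kids) in zip(nodes, flat):
                node.parent = nodes[p] if p >= 0 else None   # index -> reference
                node.children = {k: nodes[c] for k, c in kids.items()}
            return nodes

    Because each record is written independently, nothing needs to know the total size in advance: the node count can be read back until EOF, or stored in a small header.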

    Read the article

  • Optimized 2D Tile Scrolling in OpenGL

    - by silicus
    Hello, I'm developing a 2D side-scrolling game and I need to optimize my tiling code to get a better frame rate. As of right now I'm using a texture atlas and 16x16 tiles at a 480x320 screen resolution. The level scrolls in both directions and is significantly larger than one screen (thousands of pixels). I use glTranslate for the actual scrolling.

    So far I've tried:

    - Drawing only the on-screen tiles using GL_TRIANGLES, two per square tile (too much overhead).
    - Drawing the entire map as a display list (great on a small level, way too slow on a large one).
    - Partitioning the map into display lists half the size of the screen, then culling the display lists (still slows down for two-directional scrolling; the overdraw is not efficient).

    Any advice is appreciated, but in particular I'm wondering:

    - I've seen vertex arrays/VBOs suggested for this because they're dynamic. What's the best way to take advantage of that? If I simply keep one screen of vertices plus a bit of overdraw, I'd have to recopy the array every few frames to account for the change in relative coordinates (shift everything over and add the new rows/columns). If I use more overdraw, this doesn't seem like a big win; it's like the half-screen display list idea.
    - Does glScissor give any gain if used on a bunch of small tiles like this, be it a display list or a vertex array/VBO?
    - Would it be better just to build the level out of large textures and then use glScissor? Would losing the memory savings of tiling be an issue for mobile development if I do this (just curious, I'm currently on a PC)? This approach was mentioned here.

    Thanks :)

    Read the article

  • How to catch this low-level MySQL (?) error in PHP/Magento

    - by andnil
    When I execute the following statement in Magento with a really large $sku, execution terminates without any error being thrown whatsoever. There are no errors in Magento's, Apache's or PHP's error logs.

        Mage::getModel('catalog/product')->loadByAttribute('sku', $sku);

    Question: how do I catch the error? I've tried to set custom error handlers, and for testing purposes I've also managed to trigger error situations where each of the error-handler functions is invoked. But when running the Magento code above with a large $sku, none of the error-handling functions are executed:

        error_reporting( -1 );
        set_error_handler( array( 'Error', 'captureNormal' ) );
        set_exception_handler( array( 'Error', 'captureException' ) );
        register_shutdown_function( array( 'Error', 'captureShutdown' ) );

    For completeness, this is the $sku I'm passing to loadByAttribute() (the sku is invalid, but that is not the issue):

        1- 9685 0102046|1- 9685 1212100|1- 9685 1212092|1- 9685 1212096|1- 9685 1102100|1- 9685 1102108|1- 9685 1102112|1- 9685 1102092|1- 9685 0102048|1- 9685 0102054|1- 9685 0102056|1- 9685 0102058|1- 9685 1212104|1- 9685 1212108|1- 9685 0212058|1- 9685 0104050|1- 9685 0212050|1- 9685 0212056|1- 9685 0212044|1- 9685 0212048|1- 9685 0212052|1- 9685 0212054|1- 9685 1102104|1- 9685 1102124

    Any insight into this matter is much appreciated!

    Update: upon further investigation, this is the exact point in the code where execution terminates. When the foreach is executed, I guess Magento goes off into MySQL world and starts loading data from the database. From \Mage\Catalog\Model\Abstract.php:

        public function loadByAttribute($attribute, $value, $additionalAttributes = '*')
        {
            $collection = $this->getResourceCollection()
                ->addAttributeToSelect($additionalAttributes)
                ->addAttributeToFilter($attribute, $value)
                ->setPage(1, 1);

            foreach ($collection as $object) {   // <--------------- HERE
                return $object;
            }

            return false;
        }

    Note: I'm ONLY interested in finding out how to properly CATCH these kinds of errors, not "fix" the logic, so that I can present a proper error message to the user. The example above with the malformed sku is contrived, and I have no desire to make my Magento app work with those erroneous skus.

    Read the article

  • What techniques do you use for emitting data from the server that will be used solely in client-side scripts?

    - by chuck
    Hi all, I never found an optimal solution for this problem, so I am hoping that some of you out there have a few solutions. Let's say I need to render a list of checkboxes, and each checkbox has a set of additional data that goes with it. This data will be used purely in the context of JavaScript and jQuery.

    My usual strategy is to render this data into hidden fields that are grouped in the same container as the checkbox. My rendered HTML looks something like this:

        <div>
            <input type="checkbox" />
            <input type="hidden" class="genreId" />
            <input type="hidden" class="titleId" />
        </div>

    My only problem with this is that the data in the hidden fields gets posted to the server when the form is submitted. For small amounts of data this is fine. However, I frequently work with large datasets, and a large amount of data is needlessly transferred.

    UPDATE: Before submitting this post, I saw that I can add a "disabled" attribute to my input elements to suppress the submission of their data. Is this pretty much the best approach that I can take? Thanks

    Read the article

  • Java giving incorrect year values

    - by whistler
    Something very, very strange is occurring in my program, and I'm wondering if anyone out there has seen this happen before and, if so, how to fix it.

    Basically, I am parsing a CSV file; no problem there. One column contains a date, which I read in as a String and convert to a Date object. Again, no problem there. The code is as follows:

        SimpleDateFormat dateFormat = new SimpleDateFormat("MM/dd/yy hh:mm");
        Date initialDate = new Date();
        try {
            initialDate = dateFormat.parse(rows.get(0)[8]);
            System.out.println(initialDate);
        } catch (ParseException e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        }

    Of course, I'm parsing other columns as well (and those are working fine). When I run my program on a small CSV file (2.8 MB), the dates come out (i.e. are parsed) perfectly. However, when I run the program on a large CSV file (25 MB), the dates are a hot mess. For example, take a look at the year values I am getting (the following is just a tiny portion of the println output from the code above):

        1000264 at Sun Nov 05 15:30:00 EST 2186
        1000320 at Sat Mar 04 17:30:00 EST 2169
        1000347 at Sat Apr 01 09:45:00 EDT 2169
        1000413 at Tue Jul 09 13:00:00 EDT 2182
        1000638 at Fri Dec 11 13:45:00 EST 2167
        1000667 at Wed Dec 10 10:00:00 EST 2188
        1000690 at Mon Jan 02 13:00:00 EST 2169
        1000843 at Thu Feb 11 13:30:00 EST 2196

    In actuality, the years are in the realm of 1990-2006 or so. Again, this does not happen with the small CSV file. Does anyone know what's going on here and how I can fix it? I need to process the large CSV file (the small one was just for testing purposes).

    By request, here are some actual dates from the CSV file, followed by the values given by the code above:

        5/20/03 15:30
        5/20/03 15:30
        8/30/04 9:00
        8/30/04 9:00
        12/20/04 10:30
        12/20/04 10:30

        Sun Nov 05 15:30:00 EST 2186
        Sun Nov 05 15:30:00 EST 2186
        Sun Nov 05 15:30:00 EST 2186
        Thu Dec 08 09:00:00 EST 2196
        Tue Dec 12 10:30:00 EST 2186
        Tue Dec 12 10:30:00 EST 2186

    Read the article

  • finding long repeated substrings in a massive string

    - by Will
    I naively imagined that I could build a suffix trie where I keep a visit count for each node, and then the deepest nodes with counts greater than one would be the result set I'm looking for.

    I have a really, really long string (hundreds of megabytes) and about 1 GB of RAM. This is why building a suffix trie with counting data is too space-inefficient to work for me. To quote Wikipedia's entry on suffix trees:

        storing a string's suffix tree typically requires significantly more space
        than storing the string itself. The large amount of information in each edge
        and node makes the suffix tree very expensive, consuming about ten to twenty
        times the memory size of the source text in good implementations. The suffix
        array reduces this requirement to a factor of four, and researchers have
        continued to find smaller indexing structures.

    And those were Wikipedia's comments on the tree, not the trie.

    How can I find long repeated sequences in such a large amount of data, and in a reasonable amount of time (e.g. less than an hour on a modern desktop machine)?

    (Some Wikipedia links to avoid people posting them as the 'answer': Algorithms on strings and especially the Longest repeated substring problem ;-) )
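
    To make the suffix-array route concrete, here is a small Python sketch of the underlying idea (illustrative only: the naive sort below is far too slow and memory-hungry for hundreds of megabytes, where a linear-time construction such as SA-IS would be needed):

        def longest_repeated_substring(s):
            # The longest repeated substring is the longest common prefix
            # of two suffixes that are adjacent in sorted order.
            n = len(s)
            sa = sorted(range(n), key=lambda i: s[i:])   # naive: O(n^2 log n)
            best, best_at = 0, 0
            for a, b in zip(sa, sa[1:]):
                k = 0
                while a + k < n and b + k < n and s[a + k] == s[b + k]:
                    k += 1
                if k > best:
                    best, best_at = k, a
            return s[best_at:best_at + best]

        print(longest_repeated_substring("banana"))   # -> "ana"

    The point of the structure is the memory bound: the index is one integer per character, so with 32-bit indices a 200 MB string plus its suffix array is about 1 GB, which a ten-to-twenty-times suffix tree cannot approach.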

    Read the article

  • Mgmt wants to re-title my position: Any help...? [closed]

    - by JohnFlyTN
    Management here wants to re-title my position, since I'm doing quite a bit of work different from what was originally planned, and they want my input. After a quick glance over my skill set and job duties, what would we need to describe this position as? I'll list only the things I'm at least proficient in; I will not list things I have just a passing knowledge of.

    About me: ~10 years of software development.

    Languages: C, C++, Perl, PHP, C#, Tcl, Unix shell scripting, SQL (T-SQL, PL/SQL).

    Systems: MS-DOS, Windows 3.1 through 7 on the client side, NT 4 through 2008 for servers, OS/2, IBM MVS & z/OS, Linux (multiple distros), AIX.

    Current position: I write all sorts of in-house software, ranging from single-user apps to large systems spanning multiple OSs. One of the larger projects I've designed and coded is about 100k lines of C#, plus a database where I have been the sole designer and maintainer. I have near-total freedom to design as I see fit; the restraints are usually budgetary.

    Skills required to replace me in my current role: Windows and Unix administration, database design, .NET up to 3.5 (C#, ASP.NET), C++, Perl, and good skills in designing large and efficient data-processing systems.

    Given this small amount of information, what would you see this position being titled? (Is more information required to render a decision?)

    Read the article

  • Use `require()` with `node --eval`

    - by rentzsch
    When utilizing node.js's newish support for --eval, I get an error (ReferenceError: require is not defined) when I attempt to use require(). Here's an example of the failure:

        $ node --eval 'require("http");'
        undefined:1
        ^
        ReferenceError: require is not defined
            at eval at <anonymous> (node.js:762:36)
            at eval (native)
            at node.js:762:36
        $

    Here's a working example of using require() typed into the REPL:

        $ node
        > require("http");
        { STATUS_CODES: { '100': 'Continue', '101': 'Switching Protocols', '102': 'Processing'
        , '200': 'OK', '201': 'Created', '202': 'Accepted', '203': 'Non-Authoritative Information'
        , '204': 'No Content', '205': 'Reset Content', '206': 'Partial Content', '207': 'Multi-Status'
        , '300': 'Multiple Choices', '301': 'Moved Permanently', '302': 'Moved Temporarily'
        , '303': 'See Other', '304': 'Not Modified', '305': 'Use Proxy', '307': 'Temporary Redirect'
        , '400': 'Bad Request', '401': 'Unauthorized', '402': 'Payment Required', '403': 'Forbidden'
        , '404': 'Not Found', '405': 'Method Not Allowed', '406': 'Not Acceptable'
        , '407': 'Proxy Authentication Required', '408': 'Request Time-out', '409': 'Conflict'
        , '410': 'Gone', '411': 'Length Required', '412': 'Precondition Failed'
        , '413': 'Request Entity Too Large', '414': 'Request-URI Too Large'
        , '415': 'Unsupported Media Type', '416': 'Requested Range Not Satisfiable'
        , '417': 'Expectation Failed', '418': 'I\'m a teapot', '422': 'Unprocessable Entity'
        , '423': 'Locked', '424': 'Failed Dependency', '425': 'Unordered Collection'
        , '426': 'Upgrade Required', '500': 'Internal Server Error', '501': 'Not Implemented'
        , '502': 'Bad Gateway', '503': 'Service Unavailable', '504': 'Gateway Time-out'
        , '505': 'HTTP Version not supported', '506': 'Variant Also Negotiates'
        , '507': 'Insufficient Storage', '509': 'Bandwidth Limit Exceeded', '510': 'Not Extended' }
        , IncomingMessage: { [Function: IncomingMessage] super_: [Function: EventEmitter] }
        , OutgoingMessage: { [Function: OutgoingMessage] super_: [Function: EventEmitter] }
        , ServerResponse: { [Function: ServerResponse] super_: [Circular] }
        , ClientRequest: { [Function: ClientRequest] super_: [Circular] }
        , Server: { [Function: Server] super_: { [Function: Server] super_: [Function: EventEmitter] } }
        , createServer: [Function]
        , Client: { [Function: Client] super_: { [Function: Stream] super_: [Function: EventEmitter] } }
        , createClient: [Function]
        , cat: [Function] }
        >

    Is there a way to use require() with node's --eval? I'm on node 0.2.6 on Mac OS X 10.6.5.

    Read the article

  • Why isn't obliterate an essential feature of Subversion?

    - by Dimitri C.
    For some years now, I've been waiting for Subversion to get a "delete permanently" (obliterate) function. I hesitate to make the transition to Subversion (coming from Visual SourceSafe :p) because I think this is an essential feature; otherwise I'd expect the repository to grow unstoppably. However, for one reason or another, the feature gets postponed over and over again.

    So I'm beginning to wonder whether there is some other feature or workaround that makes the obliterate function dispensable. What do you do when you want to shrink the SVN central repository?

    Example 1: I check in a large third-party library, and after a few weeks I realize it is not suited to my needs. I don't want to store and back up that large amount of data forever.

    Example 2: I have 10 versions of 10 big third-party libraries in the repository, but I only use the latest versions.

    Example 3: I accidentally checked in sensitive information (as suggested by John).

    Example 4: I accidentally checked in some big files that were never meant to be put in the repository.

    Read the article

  • recursive wget with hotlinked requisites

    - by dongle
    I often use wget to mirror very large websites. Sites that contain hotlinked content (be it images, video, CSS, or JS) pose a problem, as I seem unable to tell wget to grab page requisites that are on other hosts without also having the crawl follow hyperlinks to other hosts.

    For example, let's look at this page: https://dl.dropbox.com/u/11471672/wget-all-the-things.html. Let's pretend that it is a large site that I would like to completely mirror, including all page requisites, even those that are hotlinked.

        wget -e robots=off -r -l inf -pk
        ^^ gets everything but the hotlinked image

        wget -e robots=off -r -l inf -pk -H
        ^^ gets everything, including the hotlinked image, but goes wildly out of
           control, proceeding to download the entire web

        wget -e robots=off -r -l inf -pk -H --ignore-tags=a
        ^^ gets the first page, including both the hotlinked and the local image,
           and does not follow the hyperlink to the out-of-scope site, but obviously
           also does not follow the hyperlink to the next page of the site

    I know that there are various other tools and methods of accomplishing this (HTTrack and Heritrix allow the user to distinguish between hotlinked content on other hosts and hyperlinks to other hosts), but I'd like to see if this is possible with wget. Ideally it would not be done in post-processing, as I would like the external content, requests, and headers to be included in the WARC file I'm outputting.

    Read the article

  • What are proven, scalable data-persistence solutions for consumer profiles?

    - by Hubbard
    Consumer profiles with analytical scores [ConsumerID, 1..n demographic variables, 1..n analytical scores, e.g. "likely to churn", "likely to buy an item worth $100", etc.] have to be fast to query if they are to be used for customizing websites, consumer communications, and so on.

    But if you have a large number of consumers and large profiles with a huge set of variables (as profiles describing human behaviour are likely to be), you are in trouble. If you really have a physical relational database at which you aim a query, after which a physical disk starts to rotate someplace to give you an individual profile or a set of profiles, the profile user (a website customizing a page, a recommendation engine making a recommendation) has died of boredom before getting any observable result.

    Holding the profiles in memory would of course increase performance hugely. What are the most proven solutions for fast-response, scalable consumer-profile storage? Is there a shootout of these someplace?

    Read the article

  • Automated generation of the "What's new in this version" document

    - by lunohodov
    We all know it: the document that lists the changes brought by each new version of our favorite software. Whether it comes bundled as a file (Changes.txt, CHANGES, WhatsNew.txt, etc.) or is presented within an installer, it is usually the first thing we read before installing or updating.

    On a current project we have a Changelog.txt file that gets updated by hand every time a notable change occurs. However, this often leads to "I forgot to update the changelog", so I am looking for a way to automate it. I am thinking of a script that extracts the changes from our commit messages (using a convention) and generates the file. For example, a commit message like

        Updated json-glib to 0.7.6

        [changes]
        - Fix crash on Windows
        - Fix issues with facebook contacts with very large UIDs.

    would produce the following Changes.txt:

        Version 1.9.18 (03/10/2010)
        Fix crash on Windows
        Fix issues with facebook contacts with very large UIDs.

    Does anyone know a better solution/tool for this, or am I going to write my own? Thanks!
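
    A minimal sketch of the extraction step in Python, assuming the [changes] convention shown above and git as the VCS (both are assumptions; any log source with delimited messages works the same way):

        import subprocess

        def collect_changes(rev_range):
            # Read NUL-delimited commit messages and pull out the bullet
            # lines of every "[changes]" block following the convention.
            log = subprocess.run(
                ["git", "log", "--format=%B%x00", rev_range],
                capture_output=True, text=True, check=True,
            ).stdout
            entries = []
            for message in log.split("\x00"):
                lines = iter(message.splitlines())
                for line in lines:
                    if line.strip().lower() == "[changes]":
                        for bullet in lines:
                            if not bullet.strip().startswith("- "):
                                break
                            entries.append(bullet.strip()[2:])
            return entries

        # e.g. everything since the last release tag (hypothetical tag name):
        for change in collect_changes("v1.9.17..HEAD"):
            print(change)

    Hooking something like this into the release script (or a tag-time hook) removes the "I forgot" step entirely, since the changelog is regenerated rather than maintained by hand.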

    Read the article

  • Top n items in a List (including duplicates)

    - by Krishnan
    I'm trying to find an efficient way to obtain the top N items in a very large list, possibly containing duplicates.

    I first tried sorting and slicing, which works. But this seems unnecessary: you shouldn't need to sort a very large list if you just want the top 20 members. So I wrote a recursive routine which builds the top-n list. This also works, but is very much slower than the non-recursive one!

    Question: why is my second routine (elite2) so much slower than elite, and how do I make it faster? My code is attached below. Thanks.

        import scala.collection.SeqView
        import scala.math.min

        object X {
          def elite(s: SeqView[Int, List[Int]], k: Int): List[Int] = {
            s.sorted.reverse.force.slice(0, min(k, s.size))
          }

          def elite2(s: SeqView[Int, List[Int]], k: Int, s2: List[Int] = Nil): List[Int] = {
            if (k == 0 || s.size == 0) s2.reverse
            else {
              val m = s.max
              val parts = s.force.partition(_ == m)
              val whole = if (parts._1.size > 1) parts._1.tail ::: parts._2 else parts._2
              elite2(whole.view, k - 1, m :: s2)
            }
          }

          def main(args: Array[String]) = {
            val N = 1000000 / 3
            val x = List(N to 1 by -1).flatten.map(x => List(x, x, x)).flatten.view
            println(elite2(x, 20))
            println(elite(x, 20))
          }
        }
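
    For contrast, the standard way to beat the full sort is a bounded min-heap, which is O(n log k) rather than O(n log n) and makes only one pass. A Python sketch of the idea (my suggestion, not from the question; in Scala a priority queue capped at k elements plays the same role):

        import heapq

        def elite(xs, k):
            # Keep the k largest seen so far in a min-heap; heap[0] is the
            # smallest of the current top k, so most elements are rejected
            # with a single comparison. Duplicates are kept naturally.
            heap = []
            for x in xs:
                if len(heap) < k:
                    heapq.heappush(heap, x)
                elif x > heap[0]:
                    heapq.heapreplace(heap, x)
            return sorted(heap, reverse=True)

        data = [x for n in range(333333, 0, -1) for x in (n, n, n)]
        print(elite(data, 20))        # same answer as heapq.nlargest(20, data)

    As for why elite2 is slower, a likely culprit is that each recursion step traverses the whole remaining sequence several times: on a list-backed view, s.size, s.max and the force/partition are each O(n), so the k = 20 steps cost roughly 20 full passes plus list rebuilding, versus the single O(n log n) sort in elite.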

    Read the article

  • Boost shared_ptr use_count function

    - by photo_tom
    My application problem is the following: I have a large structure foo. Because these are large, and for memory-management reasons, we do not wish to delete them when processing on the data is complete; we store them in a std::vector<boost::shared_ptr<foo>>.

    My question is related to knowing when all processing is complete. The first decision is that we do not want any of the other application code to mark a "complete" flag in the structure, because there are multiple execution paths in the program and we cannot predict which one is the last. So in our implementation, once processing is complete, we delete all copies of the boost::shared_ptr<foo> except for the one in the vector. This drops the reference count in the shared_ptr to 1.

    Is it practical to use shared_ptr's use_count(), checking whether it equals 1, to know when all other parts of my app are done with the data? One additional reason I'm asking is that the Boost documentation on shared_ptr recommends against using use_count in production code.

    Read the article

  • Best way to convert log files (*.txt) to web-friendly files (*.html, *.jsp, etc)?

    - by prometheus
    I have a bunch of log files which are pure text. Here is an example of one:

        Overall Failures Log
        SW Failures - 03.09.2010 - /logs/swfailures.txt - 23 errors - 24 warnings
        HW Failures - 03.09.2010 - /logs/hwfailures.txt - 42 errors - 25 warnings
        SW Failures - 03.10.2010 - /logs/swfailures.txt - 32 errors - 27 warnings
        HW Failures - 03.10.2010 - /logs/hwfailures.txt - 11 errors - 31 warnings

    These files can get quite large and contain a lot of other information. I'd like to produce an HTML file from this log that adds links to key portions and lets the user open the other log files as a result:

        SW Failures - 03.09.2010 - <a href="/logs/swfailures.txt">/logs/swfailures.txt</a> - 23 errors - 24 warnings

    This is greatly simplified, as I would like to add many more links and other HTML elements.

    My question is: what is the best way to do this? If the files are large, should I generate the HTML before serving it to the user, or will JSP do? Should I use Perl or other scripting languages for this? What are your thoughts and experiences?
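
    As a data point for the "generate it ahead of time" option, the line-to-link transformation itself is only a few lines of Python (a sketch that assumes every summary line follows the five-field " - " layout above; the input file name is hypothetical):

        import html

        def logline_to_html(line):
            # 'NAME - DATE - PATH - N errors - M warnings' -> linked row
            parts = [p.strip() for p in line.split(" - ")]
            if len(parts) != 5:
                return html.escape(line)        # pass other lines through untouched
            name, date, path, errors, warnings = parts
            link = f'<a href="{html.escape(path, quote=True)}">{html.escape(path)}</a>'
            return f"{html.escape(name)} - {date} - {link} - {errors} - {warnings}"

        with open("failures.log") as src:
            rows = [logline_to_html(line.rstrip("\n")) for line in src]
        print("<html><body><pre>")
        print("\n".join(rows))
        print("</pre></body></html>")

    Since logs only change when new entries are appended, regenerating static HTML on a schedule (or on file change) is usually simpler, and faster to serve, than rendering per request in JSP.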

    Read the article

  • Could git not store history of specific folders when working with git-svn?

    - by Timofey Basanov
    In short: is there a way to disable storing full history for specific folders in a git-svn repo?

    We have a pretty large SVN repository with a big checkout. I would like to migrate it to Git for my local development, because Git speeds up the update and status commands by orders of magnitude. But when I simply do git svn clone, it creates a very big repo, big enough to be bigger than my whole HDD.

    The problem lies in binary directories whose history is too large. The latest binaries are required for a proper local build, but their history is not required at all for my development process; I will never change them myself. I would like to store only the latest versions of specific folders, or maybe a history of no more than a week.

    I could only find a filter for git svn fetch that excludes specific folders entirely, which is not exactly what I need. It would be OK to have a cron task that deletes history from specific folders, but I do not know how to write one, and cron does not solve the problem of the initial git svn clone.

    P.S. The SVN repository structure cannot be changed by any means.

    Read the article

  • Why do you program? Why do you do what you do? [closed]

    - by Pirate for Profit
    To me, writing a new program is like a puzzle. Before you write any code for a large system, you have to carefully craft each piece in your mind and imagine how all the pieces will fit together. If you don't, your solution may end up being undefined. What I mean is, I often don't know what I'm doing, so I'll come to this site and beg for a code snippet, and then somehow try to hack it into my projects.

    I started writing GW-BASIC when I was around 8 years old. It progressed from there; I went to a California university and did some Python and C++, but really didn't learn anything (college = highsk00l++). I've mostly been self-taught, and it took a while to break bad habits. I'd say only in recent years would I consider myself understanding of design patterns and all that stuff (no, but honestly, procedural dudes, I would not want to design and maintain a large system procedurally, yous crazy).

    And despite my username, money has NOT been a big motivator. I've gone from job to job; I can usually get the work done perfectly and very quickly, and any delays on my part are understandable (well, about as understandable as it gets in the industry). But I ain't gonna work for peanuts, because I got mouths to feed.

    Why do you program? Why do you do what you do?

    Read the article

  • In synchronous query calls, one query causes another query to run slower. Why?

    - by Irchi
    Sorry for the long question, but I think this is an interesting situation and I couldn't find any explanation for it.

    I was involved in optimizing an application that performed a large number of sequential SELECT and INSERT statements against a single dedicated SQL Server database. The process needs to INSERT a large number of records into a table, but for each of them some value mappings are required, which are performed using SELECT statements on another table in the same database. For a specific execution, it took 90 minutes to run.

    I used a profiler (JProfiler; the application is Java-based) to determine how much time each part of the application takes. It showed that 60% of the time was spent on INSERT method calls and almost 20% on SELECT calls, with the rest distributed among other parts.

    After some trials, I came to this situation: I commented out the INSERT query that took 60% of the time. I was expecting the total run time to be around 35 minutes, as I had removed 60% of the 90 minutes. But the whole process still took the same 90 minutes (doing only SELECTs and nothing else), and each SELECT took longer this time!

    Everything was running synchronously; there were no async calls, and there was only one single thread of execution. The SELECT and INSERT queries are very simple and don't have anything special about them. They are on different tables, but in the same database. I tested with the database both on the application machine and on a remote network machine.

    I can't think of any explanation for this, as the profiler (an application profiler, not the SQL Profiler) reported the changes in method call times, and by removing the INSERT statements the SELECT statements took longer to run. Can anyone give me some kind of explanation of what could have happened? (It can't be cache or query-optimization effects, because the queries ran synchronously in a single thread, far from affecting the cache this much.) I should note that the speed bottleneck was in SQL Server, which was using most of the CPU time.

    Read the article

  • How to create a datastore.Text object out of an array of dynamically created Strings?

    - by Adrogans
    I am creating a Google App Engine server for a project in which I receive a large quantity of data via an HTTP POST request. The data is separated into lines, with 200 characters per line. The number of lines can go into the hundreds, so tens of thousands of characters in total.

    What I want to do is concatenate all of those lines into a single Text object, since datastore Strings have a maximum length of 500 characters but a Text object can be as large as 1 MB. Here is what I have thought of so far:

        public void doPost(HttpServletRequest req, HttpServletResponse resp) {
            ...
            String[] audioSampleData = new String[numberOfLines];
            for (int i = 0; i < numberOfLines; i++) {
                audioSampleData[i] = req.getReader().readLine();
            }
            com.google.appengine.api.datastore.Text textAudioSampleData =
                new Text(audioSampleData[0] + audioSampleData[1] + ...);
            ...
        }

    But as you can see, I don't know how to do this without knowing the number of lines beforehand. Is there a way for me to iterate through the String indexes within the Text constructor? I can't seem to find anything on that. Note that the Text object can't be modified after being created, and it must take a String as its constructor parameter (see the documentation). Is there any way to do this? I need all of the data in the String array to end up in one Text object. Many thanks!
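
    The underlying task, accumulating an unknown number of lines into one string, has the same shape in any language. A Python sketch of the loop (in Java, the equivalent is a StringBuilder appended from readLine() until it returns null, with the builder's toString() passed to the Text constructor):

        import io

        def read_all_lines(reader):
            # `reader` stands in for the request's BufferedReader: its
            # readline() returns '' at end of input (null in Java).
            pieces = []
            while True:
                line = reader.readline()
                if not line:
                    break
                pieces.append(line.rstrip("\n"))
            return "".join(pieces)    # one O(total-length) join at the end

        print(read_all_lines(io.StringIO("abc\ndef\nghi\n")))   # -> abcdefghi

    The point is to defer the concatenation: appending to a list (or StringBuilder) is amortized O(1) per line, whereas audioSampleData[0] + audioSampleData[1] + ... recopies the accumulated string on every +.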

    Read the article

  • dm_exec_query_stats returning stale data?

    - by VoiceOfUnreason
    I've been testing my app against a SQL Server 2005 database, and am trying to establish a preliminary picture of query performance using sys.dm_exec_query_stats.

    Problem: there's a particular query I'm interested in, because total_elapsed_time and last_elapsed_time are both large numbers. When I tickle my app to invoke that query (which runs successfully) and then refresh my view of the stats, I find that:

    1) execution_count has incremented (expected);
    2) last_execution_time has updated to now (expected);
    3) last_elapsed_time is still a large value (not expected; I anticipated a new value);
    4) total_elapsed_time is unchanged (a contradiction?).

    If last_elapsed_time refers to the execution that happened at last_execution_time, then total_elapsed_time should have increased. The documentation at http://msdn.microsoft.com/en-us/library/ms189741(SQL.90).aspx tells me that last_execution_time is the last time the plan was executed, and that last_elapsed_time comes from the "most recently executed plan", but doesn't tell me why those might be different.

    The query itself is uncomplicated (SELECT/WHERE/ORDER BY, with parameters appearing in the WHERE clause but no clever operations), and the table has maybe 25 rows in it right now.

    Questions:
    1) What is the real relationship between execution_count, last_execution_time, and last_elapsed_time?
    2) Where is the documentation of this relationship (manual, third-party book, blog, bug ticket, stone tablets...)?

    Read the article
