Search Results

Search found 21702 results on 869 pages for 'large objects'.


  • How to handle large dataset with JPA (or at least with Hibernate)?

    - by Roman
    I need to make my web app work with really huge datasets. At the moment I get either an OutOfMemoryException or output that takes 1-2 minutes to generate. Let's keep it simple and suppose that we have 2 tables in the DB: Worker and WorkLog, with about 1000 rows in the first one and 10 000 000 rows in the second one. The latter table has several fields, including 'workerId' and 'hoursWorked', among others. What we need is: count total hours worked by each user; list of work periods for each user. The most straightforward approach (IMO) for each task in plain SQL is: 1) select Worker.name, sum(hoursWorked) from Worker, WorkLog where Worker.id = WorkLog.workerId group by Worker.name; //results of this query should be transformed to Multimap<Worker, Long> 2) select Worker.name, WorkLog.start, WorkLog.hoursWorked from Worker, WorkLog where Worker.id = WorkLog.workerId; //results of this query should be transformed to Multimap<Worker, Period> //if this were JDBC it would be vital //to set resultSet.setFetchSize(someSmallNumber), ~100 So, I have two questions: how to implement each of my approaches with JPA (or at least with Hibernate); how would you handle this problem (with JPA or Hibernate, of course)?
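
    One possible shape for this with classic Hibernate, sketched for illustration only (the entity names, the worker association on WorkLog, and the fetch size of 100 are assumptions, not anything confirmed by the question): let the database do the aggregation for the first task, and stream the second task through a forward-only scrollable cursor so the 10 000 000 rows are never held in memory at once.

```java
import java.util.List;
import org.hibernate.ScrollMode;
import org.hibernate.ScrollableResults;
import org.hibernate.Session;

public class WorkLogReports {
    // Assumes Worker and WorkLog are mapped entities and WorkLog has a 'worker' association.
    public void run(Session session) {
        // Task 1: aggregate in the database; only ~1000 (name, total) rows come back.
        List<Object[]> totals = session.createQuery(
                "select w.name, sum(l.hoursWorked) from WorkLog l join l.worker w group by w.name")
                .list();

        // Task 2: stream the detail rows with a forward-only, server-side cursor.
        ScrollableResults rows = session.createQuery(
                "select w.name, l.start, l.hoursWorked from WorkLog l join l.worker w")
                .setFetchSize(100)               // fetch-size hint passed down to the JDBC driver
                .scroll(ScrollMode.FORWARD_ONLY);
        while (rows.next()) {
            Object[] row = rows.get();           // [name, start, hoursWorked]
            // ... build the Multimap<Worker, Period> incrementally here ...
        }
        rows.close();
    }
}
```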

    Read the article

  • How to troubleshoot a Highcharts script that's not rendering data when date is added and hanging the JS engine with large datasets?

    - by ylluminate
    I have a Highcharts JS graph that I'm building in Rails (although I don't think Ruby has real bearing on this problem unless it's the Date output format) to which I'm adding the timestamp of each datapoint. Presently the array of floats is rendering fine without timestamps; however, when I add the timestamp to the series, it fails to render. What's worse is that when the series has hundreds of entries all sorts of problems arise, not the least of which is the browser entirely hanging and requiring a force quit / kill. I'm using the following to build the array of arrays data series: series1 = readings.map{|row| [(row.date.to_i * 1000), (row.data1.to_f if BigDecimal(row.data1) != BigDecimal("-1000.0"))] } This yields a result like this: series: [{"name":"Data 1","data":[[1326262980000,1.79e-09],[1326262920000,1.29e-09],[1326262860000,1.22e-09],[1326262800000,1.42e-09],[1326262740000,1.29e-09],[1326262680000,1.34e-09],[1326262620000,1.31e-09],[1326262560000,1.51e-09],[1326262500000,1.24e-09],[1326262440000,1.7e-09],[1326262380000,1.24e-09],[1326262320000,1.29e-09],[1326262260000,1.53e-09],[1326262200000,1.23e-09],[1326262140000,1.21e-09]],"color":"blue"}] Yet nothing appears on the graph as noted. Nevertheless, when I compare the data series in one of their very similar examples here: http://www.highcharts.com/demo/spline-irregular-time it appears that the data series really are formatted identically (except in mine I use the timestamp vs date method). This leads me to think I've got a problem with the timestamp output, but I'm just not able to figure out where / how, as I'm converting the date output to an integer multiplied by 1000 to convert it to milliseconds, as explained in a similar Railscasts tutorial. I would very much appreciate it if someone could point me in the right direction here as to what I may be doing wrong. What could cause no data to appear on the graph with smaller-sized sets (<100 points), and what causes an apparent hang in the JavaScript engine when the set gets into the hundreds? Perhaps ultimately the key lies here, as this is the entire JS that's being generated and not rendering: jQuery(function() { // 1. Define JSON options var options = { chart: {"defaultSeriesType":"spline","renderTo":"chart_name"}, title: {"text":"Title"}, legend: {"layout":"vertical","style":{}}, xAxis: {"title":{"text":"UTC Time"},"type":"datetime"}, yAxis: [{"title":{"text":"Left Title","margin":10}},{"title":{"text":"Right Groups Title"},"opposite":true}], tooltip: {"enabled":true}, credits: {"enabled":false}, plotOptions: {"areaspline":{}}, series: [{"name":"Data 1","data":[[1326262980000,1.79e-08],[1326262920000,1.69e-08],[1326262860000,1.62e-08],[1326262800000,1.42e-08],[1326262740000,1.29e-08],[1326262680000,1.34e-08],[1326262620000,1.31e-08],[1326262560000,1.51e-08],[1326262500000,1.64e-08],[1326262440000,1.7e-08],[1326262380000,1.64e-08],[1326262320000,1.69e-08],[1326262260000,1.53e-08],[1326262200000,1.23e-08],[1326262140000,1.21e-08]],"color":"blue"},{"name":"Data 2","data":[[1326262980000,9.79e-09],[1326262920000,9.78e-09],[1326262860000,9.8e-09],[1326262800000,9.82e-09],[1326262740000,9.88e-09],[1326262680000,9.89e-09],[1326262620000,1.3e-06],[1326262560000,1.32e-06],[1326262500000,1.33e-06],[1326262440000,1.33e-06],[1326262380000,1.34e-06],[1326262320000,1.33e-06],[1326262260000,1.32e-06],[1326262200000,1.32e-06],[1326262140000,1.32e-06]],"color":"red"}], subtitle: {} }; // 2. Add callbacks (non-JSON compliant) // 3. Build the chart var chart = new Highcharts.StockChart(options); });

    Read the article

  • Keeping a large volume of data in Session - Suggestions / alternatives?

    - by Fishcake
    I'm developing a web app for which the client wants us to query their data as little as possible. The data will be coming from a Microsoft CRM instance. So we've agreed that data will only be queried as and when it is needed, therefore if a web user wants to see a list of contacts (for example) that list is fetched into a local DataTable. Then if a new contact is created on the website the new contact is sent to CRM and added to the local DataTable at the same time. Likewise for edits. If the user then looks at their contacts again the data will just come from the local DataTable. At the moment local data is being kept in Session but my concern is that too much memory will start being used up. However traffic is expected to be pretty small, perhaps no more than 20 concurrent users so am I worrying about nothing or is there a better way you can suggest to handle this?

    Read the article

  • How to generate one large dependency map for the whole project that builds with makefiles?

    - by Stan
    I have a gigantic project that is built using makefiles. Running make at the root of the project takes over 20 minutes when no files have changed (i.e. just traversing the project and checking for updated files). I'd like to create a dependency map that will tell me which directories I need to run 'make' in based on the file(s) changed. I already have a list of updated files that I can get from my version control system, and I'd like to skip the 20 minutes of traversing and get straight to the locations that do need to be recompiled. The project has a mix of several languages and custom tools, so this would ideally be language-independent (i.e. it would only process all makefiles to generate dependencies). I'll settle for a C/C++-specific solution, too, as the majority of the project is in C++. The project is built on Linux.
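
    As a rough sketch of the "skip the traversal" idea, under some assumptions (the changed-file list holds one path per line relative to the project root, and a directory is a build location exactly when it contains a file named Makefile; cross-directory dependencies declared inside the makefiles are deliberately ignored here):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.LinkedHashSet;
import java.util.Set;

public class DirtyMakeDirs {
    public static void main(String[] args) throws IOException {
        Path root = Paths.get(args[0]);          // project root
        Path changedList = Paths.get(args[1]);   // file with one changed path per line
        Set<Path> makeDirs = new LinkedHashSet<>();
        for (String line : Files.readAllLines(changedList)) {
            if (line.isBlank()) continue;
            // Walk up from the changed file to the nearest enclosing directory with a Makefile.
            Path dir = root.resolve(line.trim()).getParent();
            while (dir != null && dir.startsWith(root)) {
                if (Files.exists(dir.resolve("Makefile"))) {
                    makeDirs.add(dir);
                    break;
                }
                dir = dir.getParent();
            }
        }
        makeDirs.forEach(System.out::println);   // then run 'make -C <dir>' for each of these
    }
}
```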

    Read the article

  • How to structure an index for type ahead for extremely large dataset using Lucene or similar?

    - by Pete
    I have a dataset of 200 million+ records and am looking to build a dedicated backend to power a type-ahead solution. Lucene is of interest given its popularity and license type, but I'm open to other open source suggestions as well. I am looking for advice, tales from the trenches, or even better, direct instruction on what I will need as far as amount of hardware and structure of software. Requirements: Must have: the ability to do starts-with substring matching (I type in 'st' and it should match 'Stephen'); the ability to return results very quickly, I'd say 500ms is an upper bound. Nice to have: the ability to feed relevance information into the indexing process, so that, for example, more popular terms would be returned ahead of others and not just alphabetically, aka Google style; in-word substring matching, so that, for example, 'st' would match 'bestseller'. Note: this index will purely be used for type-ahead, and does not need to serve standard search queries. I am not worried about getting advice on how to set up the front end or AJAX, as long as the index can be queried as a service or directly via Java code. Up votes for any useful information that allows me to get closer to an enterprise-level type-ahead solution.
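
    Leaving Lucene aside for a moment (there the usual building blocks for this are prefix queries or edge n-gram analysis), the must-have starts-with requirement is easy to picture with a sorted structure; the sketch below is only an in-memory illustration of that idea, not a 200-million-record solution:

```java
import java.util.NavigableSet;
import java.util.TreeSet;

public class PrefixLookup {
    private final NavigableSet<String> terms = new TreeSet<>();

    public void add(String term) {
        terms.add(term.toLowerCase());
    }

    // All terms starting with the prefix, in sorted order.
    public NavigableSet<String> startingWith(String prefix) {
        String lo = prefix.toLowerCase();
        // '\uffff' sorts after any ordinary character, so this bounds the prefix range.
        return terms.subSet(lo, true, lo + '\uffff', false);
    }

    public static void main(String[] args) {
        PrefixLookup lookup = new PrefixLookup();
        lookup.add("Stephen");
        lookup.add("Steve");
        lookup.add("Bestseller");
        System.out.println(lookup.startingWith("st")); // [stephen, steve]
    }
}
```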

    Read the article

  • Creating many new instances vs reusing them?

    - by Hugo Riley
    I have multiple business entities in a VB.NET Windows Forms application. Right now they are instantiated on application startup and used when needed. They hold descriptions of business entities and methods for storing and retrieving data. To cut a long story short, they are somewhat heavy objects to construct (they have some internal dictionaries and references to other objects), created and held in one big global variable called "BLogic". Should I refactor this so that each object is created when needed and released when out of scope? Then every event in the UI will probably create a few of these objects. Should I strive to minimize the creation of new objects or to minimize the number of static and global objects? Generally I am trying to minimize the scope of every variable, but should I treat these business logic objects specially?

    Read the article

  • [Python] How can I speed up unpickling large objects if I have plenty of RAM?

    - by conradlee
    It's taking me up to an hour to read a 1-gigabyte NetworkX graph data structure using cPickle (it's 1 GB when stored on disk as a binary pickle file). Note that the file quickly loads into memory. In other words, if I run:
        import cPickle as pickle
        f = open("bigNetworkXGraph.pickle","rb")
        binary_data = f.read() # This part doesn't take long
        graph = pickle.loads(binary_data) # This takes ages
    How can I speed this last operation up? Note that I have tried pickling the data using both binary protocols (1 and 2), and it doesn't seem to make much difference which protocol I use. Also note that although I am using the "loads" (meaning "load string") function above, it is loading binary data, not ASCII data. I have 128 GB of RAM on the system I'm using, so I'm hoping that somebody will tell me how to increase some read buffer buried in the pickle implementation.

    Read the article

  • How can I optimize retrieving lowest edit distance from a large table in SQL?

    - by Matt
    Hey, I'm having trouble optimizing this Levenshtein distance calculation I'm doing. I need to do the following: (1) get the record with the minimum distance for the source string as well as for a trimmed version of the source string; (2) pick the record with the minimum distance; (3) if the min distances are equal (original vs trimmed), choose the trimmed one with the lowest distance; (4) if there are still multiple records that fall under the above two categories, pick the one with the highest frequency. Here's my working version:
        DECLARE @Results TABLE ( ID int, [Name] nvarchar(200), Distance int, Frequency int, Trimmed bit )
        INSERT INTO @Results SELECT ID, [Name], (dbo.Levenshtein(@Source, [Name])) As Distance, Frequency, 'False' As Trimmed FROM MyTable
        INSERT INTO @Results SELECT ID, [Name], (dbo.Levenshtein(@SourceTrimmed, [Name])) As Distance, Frequency, 'True' As Trimmed FROM MyTable
        SET @ResultID = (SELECT TOP 1 ID FROM @Results ORDER BY Distance, Trimmed, Frequency)
        SET @Result = (SELECT TOP 1 [Name] FROM @Results ORDER BY Distance, Trimmed, Frequency)
        SET @ResultDist = (SELECT TOP 1 Distance FROM @Results ORDER BY Distance, Trimmed, Frequency)
        SET @ResultTrimmed = (SELECT TOP 1 Trimmed FROM @Results ORDER BY Distance, Trimmed, Frequency)
    I believe what I need to do here is to: not dump the results into a temporary table; do only 1 select from `MyTable`; set the results right in that initial select statement (since a select will set variables and you can set multiple variables in one select statement). I know there has to be a good implementation for this but I can't figure it out... this is as far as I got:
        SELECT top 1 @ResultID = ID, @Result = [Name], (dbo.Levenshtein(@Source, [Name])) As distOrig, (dbo.Levenshtein(@SourceTrimmed, [Name])) As distTrimmed, Frequency FROM MyTable WHERE /* ... yeah I'm lost */ ORDER BY distOrig, distTrimmed, Frequency
    Any ideas?

    Read the article

  • Safe to KILL a MySQL process REPLACEing records in a large MyISAM table?

    - by threecheeseopera
    I have a REPLACE query running for a few days now on a few MyISAM tables, the largest having 20+million records. I need it to stop. It is, basically: REPLACE INTO really_large_table (a,b,c,d) SELECT e,f,g,h FROM big_table INNER JOIN huge_table ON big_table.x LIKE CONCAT('%', huge_table.y, '%'); I need to KILL it, and I am worried that I may corrupt really_large_table. Because the sub-query itself takes a significant amount of time, the REPLACEing probably occurs (relatively) infrequently; if this is true, does this make it less likely for the data to become corrupted? For the curious, here is the SO question asked about the query I am trying to kill.

    Read the article

  • Best way to keep a large number of hobby projects alive; open sourcing?

    - by Daan van Yperen
    Because my time is limited I can usually only focus on one or two of my hobby projects, while the others sit there wasting away. I am looking for a solution that would allow me to divide my time better. Is open sourcing, where I take the role of guiding the project, realistic, or are there better solutions? In my case, one project has a reasonably sized community of users going for it but is currently closed source. There have been requests to open source it.

    Read the article

  • Using XAML + designer to edit Plain Old CLR Objects?

    - by Joe White
    I want to write a POCO in XAML, and use a DataTemplate to display that object in the GUI at runtime. So far, so good; I know how to do all that. Since I'll already have a DataTemplate that can transform my POCO into a WPF visual tree, is there any way to get the Visual Studio designer to play along, and have the Design View show me the POCO+DataTemplate's resulting GUI, as I edit the POCO's XAML? (Obviously the designer wouldn't know how to edit the "design view"; I wouldn't expect the Toolbox or click-and-drag to work on the design surface. That's fine -- I just want to see a preview as I edit.) If you're curious, the POCOs in question would be level maps for a game. (At this point, I'm not planning to ship an end-user map editor, so I'll be doing all the editing myself in Visual Studio.) So the XAML isn't WPF GUI objects like Window and UserControl, but it's still not something where I would want to blindly bang out some XAML and hope for the best. I want to see what I'm doing (the GUI map) as I'm doing it. If I try to make a XAML file whose root is my map object, the designer shows "Intentionally Left Blank - The document root element is not supported by the visual designer." It does this even if I've defined a DataTemplate in App.xaml's <Application.Resources>. But I know the designer can show my POCO, when it's inside a WPF object. One possible way of accomplishing what I want would be to have a ScratchUserControl that just contains a ContentPresenter, and write my POCO XAML inside that ContentPresenter's Content property, e.g.: <UserControl ...> <ContentPresenter> <ContentPresenter.Content> <Maps:Map .../> </ContentPresenter.Content> </ContentPresenter> </UserControl> But then I would have to be sure to copy the content back out into its own file when I was done editing, which seems tedious and error-prone, and I don't like tedious and error-prone. And since I can preview my XAML this way, isn't there some way to do it without the UserControl?

    Read the article

  • Beginner Question: For extracting a large subset of a table from MySQL, how does indexing, order of tab

    - by chongman
    Sorry if this is too simple, but thanks in advance for helping. This is for MySQL but might be relevant for other RDBMSs. tblA has the columns colA, colB, colC, mydata, A_id. It has about 10^9 records, with 10^3 distinct values for colA, colB, colC. tblB has the columns colA, colB, B_id. It has about 10^4 records. I want all the records from tblA (except the A_id) that have a match in tblB. In other words, I want to use tblB to describe the subset that I want to extract and then extract those records from tblA. Namely: SELECT a.colA, a.colB, a.colC, a.mydata FROM tblA as a INNER JOIN tblB as b ON a.colA=b.colA AND a.colB=b.colB; It's taking a really long time (more than an hour) on a newish computer (4GB, Core2Quad, Ubuntu), and I just want to check my understanding of the following optimization steps. ** Suppose this is the only query I will ever run on these tables, so ignore the need to run other queries. Now my questions: 1) What indexes should I create to optimize this query? I think I just need a multi-column index on (colA, colB) for both tables. I don't think I need separate indexes for colA and colB. Another Stack Overflow article (that I can't find) mentioned that adding new indexes is slower when there are existing indexes, so that might be a reason to use the multi-column index. 2) Is INNER JOIN correct? I just want results where a match is found. 3) Is it faster if I join (tblA to tblB) or the other way around (tblB to tblA)? This previous answer says that the optimizer should take care of that. 4) Does the order of the part after ON matter? This previous answer says that the optimizer also takes care of the execution order.

    Read the article

  • How to gzip-compress large Ajax responses (HTML only) in ColdFusion?

    - by frequent
    I'm running ColdFusion 8 and jQuery/jQuery Mobile on the front end. I'm playing around with an Ajax-powered search engine, trying to find the best tradeoff between data volume and client-side processing time. Currently my AJAX search returns 40k of JQM-enhanced markup, which avoids any client-side enhancement. This way I'm getting by without the page stalling for about 2-3 seconds while JQM enhances all elements in the search results. What I'm curious about is whether I can gzip Ajax responses sent from ColdFusion. If I check the headers of my search right now, I'm getting this: RESPONSE-header Connection Keep-Alive Content-Type text/html; charset=UTF-8 Date Sat, 01 Sep 2012 08:47:07 GMT Keep-Alive timeout=5, max=95 Server Apache/2.2.21 (Win32) mod_ssl/2.2.21 ... Transfer-Encoding chunked REQUEST-header Accept */* Accept-Encoding gzip, deflate Accept-Language de-de,de;q=0.8,en-us;q=0.5,en;q=0.3 Connection keep-alive Cookie CFID= ; CFTOKEN= ; resolution=1143 Host www.host.com Referer http://www.host.com/dev/users/index.cfm So, my request would accept gzip, deflate, but I'm getting back chunked. I'm generating the AJAX response in a cfsavecontent (called compressedHTML) and run this to eliminate whitespace: <cfscript> compressedHTML = reReplace(renderedResults, "\>\s+\<", "> <", "ALL"); compressedHTML = reReplace(compressedHTML, "\s{2,}", chr(13), "ALL"); compressedHTML = reReplace(compressedHTML, "\s{2,}", chr(09), "ALL"); </cfscript> before sending the compressedHTML in a response object like this: {"SUCCESS":true,"DATA": compressedHTML } Question: If I know I'm sending back HTML in my data object via Ajax, is there a way to gzip the response server-side before returning it vs sending it chunked, if this is at all possible? If so, can I do this inside my response object or would I have to send back "pure" HTML? Thanks! EDIT: Found this on setting a 'web.config' for dynamic compression - doesn't seem to work. EDIT2: Found this snippet and am playing with it, although I'm not sure this will work. <cfscript> compressedHTML = reReplace(renderedResults, "\>\s+\<", "> <", "ALL"); compressedHTML = reReplace(compressedHTML, "\s{2,}", chr(13), "ALL"); compressedHTML = reReplace(compressedHTML, "\s{2,}", chr(09), "ALL"); if ( cgi.HTTP_ACCEPT_ENCODING contains "gzip" AND not showRaw ){ cfheader name="Content-Encoding" value="gzip"; bos = createObject("java","java.io.ByteArrayOutputStream").init(); gzipStream = createObject("java","java.util.zip.GZIPOutputStream"); gzipStream.init(bos); gzipStream.write(compressedHTML.getBytes("utf-8")); gzipStream.close(); bos.flush(); bos.close(); encoder = createObject("java","sun.misc. outStr= encoder.encode(bos.toByteArray()); compressedHTML = toString(bos.toByteArray()); } </cfscript> Probably need to try this on the response object and not the compressedHTML variable.
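
    Since ColdFusion runs on the JVM, the snippet above is reaching for the same java.util.zip classes a plain Java solution would use (and when Apache's mod_deflate is available, letting the web server compress responses is often the simplest route). A hedged sketch of the byte-level steps in Java, with a servlet assumed purely for illustration:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPOutputStream;
import javax.servlet.http.HttpServletResponse;

public class GzipResponseSketch {
    // Compress a string to gzip bytes.
    static byte[] gzip(String html) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
            gz.write(html.getBytes(StandardCharsets.UTF_8));
        }                                   // closing the stream finishes the gzip trailer
        return bos.toByteArray();
    }

    // Send compressed output only when the client advertised gzip support.
    static void send(HttpServletResponse resp, String html, String acceptEncoding) throws IOException {
        resp.setContentType("text/html; charset=UTF-8");
        if (acceptEncoding != null && acceptEncoding.contains("gzip")) {
            byte[] body = gzip(html);
            resp.setHeader("Content-Encoding", "gzip");
            resp.setContentLength(body.length);
            resp.getOutputStream().write(body);
        } else {
            resp.getWriter().write(html);   // fall back to the uncompressed markup
        }
    }
}
```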

    Read the article

  • Why use hashing to create pathnames for large collections of files?

    - by Stephen
    Hi, I noticed a number of cases where an application or database stored collections of files/blobs using a hash to determine the path and filename. I believe the intended outcome is a situation where the path never gets too deep and the folders never get too full - too many files (or folders) in a folder making for slower access. EDIT: Examples are often digital libraries or repositories, though the simplest example I can think of (that can be installed in about 30s) is the Zotero document/citation database. Why do this? EDIT: Thanks Mat for the answer - does this technique of using a hash to create a file path have a name? Is it a pattern? I'd like to read more, but have failed to find anything in the ACM Digital Library.
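
    The technique is often described as a hashed directory layout or directory fan-out/sharding; content-addressable stores such as Git's objects directory rely on the same idea. A small sketch, with MD5 and a two-level fan-out chosen arbitrarily for illustration: the hash spreads files evenly, keeps any single directory from growing huge, and gives a fixed, shallow path depth no matter how many files there are.

```java
import java.nio.charset.StandardCharsets;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class HashedPath {
    // e.g. "report.pdf" -> blobs/<2 hex chars>/<2 hex chars>/<full digest>.bin
    static Path pathFor(String key) throws NoSuchAlgorithmException {
        MessageDigest md5 = MessageDigest.getInstance("MD5");
        byte[] digest = md5.digest(key.getBytes(StandardCharsets.UTF_8));
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) {
            hex.append(String.format("%02x", b));
        }
        String h = hex.toString();
        // Two 2-character levels => at most 256 entries per directory level.
        return Paths.get("blobs", h.substring(0, 2), h.substring(2, 4), h + ".bin");
    }

    public static void main(String[] args) throws Exception {
        System.out.println(pathFor("report.pdf"));
    }
}
```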

    Read the article

  • Loading a datagrid with large amounts of data in Silverlight?

    - by JD
    Hi, I am breaking up my project into small sections, and one of the sections involves loading a grid with possibly lots of records (could be up to 1000s of records in the database). Ideally I would like some sort of mechanism where, as the user scrolls the grid, more data is retrieved. I have read that certain controls (DataPager with RIA) do this, but I would like to know how I could implement this myself or do something similar. I was thinking about loading 50 records at a time, and when the user scrolls near the 50th record, getting another 50, and so on. I'm not sure how to do this, and it doesn't feel quite right. Alternatively, should I load the IDs of the records in the grid and then have each row load itself via an async thread? But then I am hitting my database for each record. Thanks JD.

    Read the article

  • How can I store large amount of data from a database to XML (memory problem)?

    - by Andrija
    First, I had a problem with getting the data from the database; it took too much memory and failed. I've set -Xmx1500M and I'm using a scrolling ResultSet, so that was taken care of. Now I need to make XML from the data, but I can't put it in one file. At the moment, I'm doing it like this:
        while(rs.next()){
            i++;
            xmlStringBuilder.append("\n\t<row>");
            xmlStringBuilder.append("\n\t\t<ID>" + Util.transformToHTML(rs.getInt("id")) + "</ID>");
            xmlStringBuilder.append("\n\t\t<JED_ID>" + Util.transformToHTML(rs.getInt("jed_id")) + "</JED_ID>");
            xmlStringBuilder.append("\n\t\t<IME_PJ>" + Util.transformToHTML(rs.getString("ime_pj")) + "</IME_PJ>");
            //etc.
            xmlStringBuilder.append("\n\t</row>");
            if (i%100000 == 0){
                //stores the data to a file with the name i.xml
                storeKBR(xmlStringBuilder.toString(), i);
                xmlStringBuilder = null;
                xmlStringBuilder = new StringBuilder();
            }
        }
    and it works; I get 12 files of about 100 MB each. Now, what I'd like to do is have all that data in one file (which I then compress), but if I just remove the if part, I run out of memory. I thought about writing to a file, closing it, then reopening it, but that wouldn't get me much since I'd have to load the file into memory when I open it. P.S. If there's a better way to release the Builder, do let me know :)
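
    One way to avoid the growing StringBuilder altogether, sketched under the assumption that the goal is a single compressed file: write each row straight to a buffered (and here gzipped) stream inside the ResultSet loop, so memory use stays constant regardless of row count. The root element name and the gzip wrapper are illustrative choices, and the escape helper stands in for the Util.transformToHTML call in the question.

```java
import java.io.BufferedWriter;
import java.io.FileOutputStream;
import java.io.OutputStreamWriter;
import java.io.Writer;
import java.nio.charset.StandardCharsets;
import java.sql.ResultSet;
import java.util.zip.GZIPOutputStream;

public class StreamingXmlExport {
    static void export(ResultSet rs, String fileName) throws Exception {
        try (Writer out = new BufferedWriter(new OutputStreamWriter(
                new GZIPOutputStream(new FileOutputStream(fileName)), StandardCharsets.UTF_8))) {
            out.write("<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<rows>\n");
            while (rs.next()) {
                out.write("\t<row>\n");
                out.write("\t\t<ID>" + rs.getInt("id") + "</ID>\n");
                out.write("\t\t<JED_ID>" + rs.getInt("jed_id") + "</JED_ID>\n");
                out.write("\t\t<IME_PJ>" + escape(rs.getString("ime_pj")) + "</IME_PJ>\n");
                out.write("\t</row>\n");
            }
            out.write("</rows>\n");
        }   // nothing accumulates in memory beyond the stream's buffer
    }

    // Minimal XML escaping for text content.
    static String escape(String s) {
        return s == null ? "" : s.replace("&", "&amp;").replace("<", "&lt;").replace(">", "&gt;");
    }
}
```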

    Read the article

  • How to map a (large) integer to a (small in size) alphanumeric string with PHP? (Cantor?)

    - by Glooh
    Dear all, I can't figure out how to optimally do the following in PHP: In a database, I have messages with a unique ID, like 19041985. Now, I want to refer to these messages in a short-URL service, not by using generated hashes but by simply 'calculating' the original ID. In other words, for example: http://short.url/sYsn7 should let me calculate the message ID the visitor would like to request. To make it more obvious, I wrote the following in PHP to generate these 'alphanumeric ID versions', and of course, the other way around will let me calculate the original message ID. The question is: is this the optimal way of doing this? I hardly think so, but can't think of anything else.
        $alphanumString = '0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ-_';
        for($i=0;$i < strlen($alphanumString);$i++) {
            $alphanumArray[$i] = substr($alphanumString,$i,1);
        }
        $id = 19041985;
        $out = '';
        for($i=0;$i < strlen($id);$i++) {
            if(isset($alphanumString["".substr($id,$i,2).""]) && strlen($alphanumString["".substr($id,$i,2).""]) > 0) {
                $out.=$alphanumString["".substr($id,$i,2).""];
            } else {
                $out.=$alphanumString["".substr($id,$i,1).""];
                $out.=$alphanumString["".substr($id,($i+1),1).""];
            }
            $i++;
        }
        print $out;
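
    The reversible mapping being described is essentially treating the ID as a number in base 62 (or base 64, if the '-' and '_' characters above are kept). A sketch of the arithmetic, written in Java only to keep this listing's examples in one language; it ports to PHP almost line for line:

```java
public class Base62 {
    private static final String ALPHABET =
            "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ";

    // e.g. 19041985 encodes to a 5-character string; the mapping is fully reversible.
    static String encode(long id) {
        if (id == 0) return "0";
        StringBuilder sb = new StringBuilder();
        while (id > 0) {
            sb.append(ALPHABET.charAt((int) (id % 62)));
            id /= 62;
        }
        return sb.reverse().toString();
    }

    static long decode(String s) {
        long id = 0;
        for (char c : s.toCharArray()) {
            id = id * 62 + ALPHABET.indexOf(c);
        }
        return id;
    }

    public static void main(String[] args) {
        long id = 19041985L;
        String slug = encode(id);
        System.out.println(slug + " -> " + decode(slug));   // round-trips back to 19041985
    }
}
```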

    Read the article

  • What is the impact/limitation of an Oracle select with a large number of bind variables?

    - by Igal Serban
    We had our Oracle server choking while processing a select statement with close to 3500(!!) bind variables. This select is, obviously, built dynamically by code that we can't change. During the execution of this select the DB server went to 100% CPU usage and our system almost halted. We know how to reproduce this problem, so we can prevent this specific condition. But I am wondering if there is a way to protect the DB (by configuration) from this type of problem.

    Read the article

  • Sessions not persisting between requests

    - by klonq
    My session objects are only stored within the request scope on Google App Engine and I can't figure out how to persist objects between requests. The docs are next to useless on this matter and I can't find anyone who's experienced a similar problem. Please help. When I store session objects in the servlet and forward the request to a JSP using:
        getServletContext().getRequestDispatcher("/example.jsp").forward(request,response);
    everything works like it should. But when I store objects in the session and redirect the request using:
        response.sendRedirect("/example/url");
    the session objects are lost to the ether. In fact, when I dump session key/value pairs on new requests there is absolutely nothing; session objects only appear within the request scope of the servlets that create them. It appears to me that the objects are not being written to Memcache or the Datastore. In terms of configuring sessions for my application, I have set <sessions-enabled>true</sessions-enabled> in appengine-web.xml. Is there anything else I am missing? The single paragraph of documentation on sessions also notes that only objects which implement Serializable can be stored in the session between requests. I have included an example of the code which is not working below. The obvious solution is to not use redirects, and this might be OK for the example given below, but some application data does need to be stored in the session between requests, so I need to find a solution to this problem. EXAMPLE: The class FlashMessage gives feedback to the user from server-side operations.
        if (email.send()) {
            FlashMessage flash = new FlashMessage(FlashMessage.SUCCESS, "Your message has been sent.");
            session.setAttribute(FlashMessage.SESSION_KEY, flash);
            // The flash message will not be available in the session object in the next request
            response.sendRedirect(URL.HOME);
        } else {
            FlashMessage flash = new FlashMessage(FlashMessage.ERROR, FlashMessage.INVALID_FORM_DATA);
            session.setAttribute(FlashMessage.SESSION_KEY, flash);
            // The flash message is displayed without problem
            getServletContext().getRequestDispatcher(Templates.CONTACT_FORM).forward(request,response);
        }
    FlashMessage.java:
        import java.io.Serializable;
        public class FlashMessage implements Serializable {
            private static final long serialVersionUID = 8109520737272565760L; // I have tried using different, default and no serialVersionUID
            public static final String SESSION_KEY = "flashMessage";
            public static final String ERROR = "error";
            public static final String SUCCESS = "success";
            public static final String INVALID_FORM_DATA = "Your request failed to validate.";
            private String message;
            private String type;
            public FlashMessage (String type, String message) {
                this.type = type;
                this.message = message;
            }
            public String display(){
                return "<div id='flash' class='" + type + "'>" + message + "</div>";
            }
        }
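
    For reference, a sketch of the receiving half of this flash-message pattern in plain servlet terms (the servlet class name and JSP path are hypothetical): the redirect target reads the attribute, renders it once, and removes it. FlashMessage must remain Serializable, as above, for any container that persists sessions between requests.

```java
import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpSession;

// Hypothetical servlet mapped to the redirect target URL.
public class HomeServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        HttpSession session = request.getSession(false);
        if (session != null) {
            FlashMessage flash = (FlashMessage) session.getAttribute(FlashMessage.SESSION_KEY);
            if (flash != null) {
                request.setAttribute("flashHtml", flash.display());
                session.removeAttribute(FlashMessage.SESSION_KEY);  // show it once, then clear it
            }
        }
        getServletContext().getRequestDispatcher("/home.jsp").forward(request, response);
    }
}
```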

    Read the article

  • Parse a large XML file w/ script or use the BioPython API?

    - by jeremy04
    Hey guys, this is my first question on here. I'm trying to make a local copy of the UniProtKB in SQL. The UniProtKB is 2.1 GB, and it comes in XML and a special text format used by SwissProt. Here are my options: 1) Use a SAX parser (XML) - I chose Ruby and Nokogiri. I started writing the parser, but my initial reaction was: how would I map the XML schema to the SAX parser? 2) BioPython - I already have BioSQL/Biopython installed, which literally created my SQL schema for me, and I was able to successfully insert one SwissProt/UniProt txt file into the database. I'm running it right now (crosses fingers) on the entire 2.1 GB. Here is the code I'm running:
        from Bio import SeqIO
        from BioSQL import BioSeqDatabase
        from Bio import SwissProt

        server = BioSeqDatabase.open_database(driver = "MySQLdb", user = "root", passwd = "", host="localhost", db = "bioseqdb")
        db = server["uniprot"]
        iterator = SeqIO.parse(open("/path/to/uniprot_sprot.dat", "r"), "swiss")
        db.load(iterator)
        server.commit()
    Edit: it's now crashing because the transactions are getting locked (since the tables are InnoDB): Error Number: 1205 Lock wait timeout exceeded; try restarting transaction. I'm using MySQL version 5.1.43. Should I switch my database to PostgreSQL?

    Read the article

  • .NET Extension Objects with XSLT -- how to iterate over a collection?

    - by Pandincus
    Help me, Stack Overflow! I have a simple .NET 3.5 console app that reads some data and sends emails. I'm representing the email format in an XSLT stylesheet so that we can easily change the wording of the email without needing to recompile the app. We're using Extension Objects to pass data to the XSLT when we apply the transformation: <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:msxsl="urn:schemas-microsoft-com:xslt" exclude-result-prefixes="msxsl" xmlns:EmailNotification="ext:EmailNotification"> -- this way, we can have statements like: <p> Dear <xsl:value-of select="EmailNotification:get_FullName()" />: </p> The above works fine. I pass the object via code like this (some irrelevant code omitted for brevity): // purely an example structure public struct EmailNotification { public string FullName { get; set; } } // Somewhere in some method ... var notification = new EmailNotification { FullName = "John Smith" }; // ... XsltArgumentList xslArgs = new XsltArgumentList(); xslArgs.AddExtensionObject("ext:EmailNotification", notification); // ... // The part where it breaks! (This is where we do the transformation) xslt.Transform(fakeXMLDocument.CreateNavigator(), xslArgs, XmlWriter.Create(transformedXMLString)); So, all of the above code works. However, I wanted to get a little fancy (always my downfall) and pass a collection, so that I could do something like this: <p>The following accounts need to be verified:</p> <xsl:for-each select="EmailNotification:get_SomeCollection()"> <ul> <li> <xsl:value-of select="@SomeAttribute" /> </li> </ul> </xsl:for-each> When I pass the collection in the extension object and attempt to transform, I get the following error: "Extension function parameters or return values which have Clr type 'String[]' are not supported." or List, or IEnumerable, or whatever I try to pass in. So, my questions are: How can I pass a collection in to my XSLT? What do I put for the xsl:value-of select="" inside the xsl:for-each? Is what I am trying to do impossible?

    Read the article
