Search Results

Search found 36925 results on 1477 pages for 'large xml document'.

Page 217/1477

  • Ruby and RSS2 Feed not displaying image

    - by pcasa
    Trying to create a simple RSS 2.0 feed that I could later pass on to FeedBurner, but I can't get the feed to display images at all. Also, from what I have read, having xml.instruct! at the top might cause IE to complain that it's not a valid feed. Is this true? My code looks like:

        xml.instruct!
        xml.rss "version" => "2.0", "xmlns:dc" => "http://purl.org/dc/elements/1.1/" do
          xml.channel do
            xml.title "Store"
            xml.link url_for :only_path => false, :controller => 'products'
            xml.description "Store"
            xml.pubDate @products.first.updated_at.rfc822 if @products.any?
            @products.each do |product|
              xml.item do
                xml.title product.name
                xml.pubDate(product.updated_at.rfc822)
                xml.image do
                  xml.url domain_host + product.product_image.url(:small)
                  xml.title "Store"
                  xml.link url_for :only_path => false, :controller => 'products'
                end
                xml.link url_for :only_path => false, :controller => 'products', :action => 'show', :id => product.permalink
                xml.description product.fine_print
                xml.guid url_for :only_path => false, :controller => 'products', :action => 'show', :id => product.permalink
              end
            end
          end
        end
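
    One thing worth checking (my note, not from the post): in RSS 2.0 the image element belongs to the channel, not to items, so many readers ignore it where it sits here. A common workaround is an item-level enclosure; a minimal sketch, where the length and MIME type are assumptions:

        xml.item do
          xml.title product.name
          # Item-level images are usually delivered as enclosures:
          xml.enclosure :url => domain_host + product.product_image.url(:small),
                        :length => 0, :type => "image/jpeg"
        end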

    Read the article

  • inserting large number of dates

    - by Radhe
    How can I insert all dates in a year (or more) into a table using SQL? My dates table has the following structure: dates(date1 date). Suppose I want to insert the dates between "2009-01-01" and "2010-12-31" inclusive. Is there an SQL query for this?
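
    A minimal sketch of one approach, assuming an engine with recursive CTE support (MySQL 8 syntax shown; PostgreSQL and SQL Server differ slightly):

        INSERT INTO dates (date1)
        WITH RECURSIVE d AS (
            SELECT DATE '2009-01-01' AS dt
            UNION ALL
            SELECT dt + INTERVAL 1 DAY FROM d WHERE dt < DATE '2010-12-31'
        )
        SELECT dt FROM d;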

    Read the article

  • Finding cause of memory leaks in large PHP stacks

    - by Mike B
    I have a CLI script that runs over several thousand iterations between runs, and it appears to have a memory leak. I'm using a tweaked version of Zend Framework with Smarty for view templating, and each iteration uses several MB worth of code. The first run immediately uses nearly 8MB of memory (which is fine), but every following run adds about 80kb. My main loop looks like this (very simplified):

        $users = UsersModel::getUsers();
        foreach ($users as $user) {
            $obj = new doSomethingAwesome();
            $obj->run($user);
            $obj = null;
            unset($obj);
        }

    The point is that everything in scope should be unset and the memory freed. My understanding is that PHP runs through its garbage collection process at its own discretion, but it does so at the end of functions/methods/scripts. So something must be leaking memory inside doSomethingAwesome(), but as I said it is a huge stack of code. Ideally, I would love to find some sort of tool that displays all my variables, no matter the scope, at some point during execution: some sort of symbol-table viewer for PHP. Does anything like that, or any other tool that could help nail down memory leaks in PHP, exist?
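
    A minimal diagnostic sketch, assuming PHP 5.3+ for gc_collect_cycles() (circular references are a common culprit that older collectors never reclaim):

        $users = UsersModel::getUsers();
        foreach ($users as $i => $user) {
            $obj = new doSomethingAwesome();
            $obj->run($user);
            unset($obj);
            $cycles = gc_collect_cycles(); // force collection of reference cycles
            printf("iter %d: %0.1f KB in use, %d cycles freed\n",
                   $i, memory_get_usage() / 1024, $cycles);
        }

    If the reported usage keeps climbing even with cycles collected, the leak is more likely a growing static or global structure (registries, caches, Smarty's template cache) than unfreed local objects.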

    Read the article

  • Efficient file buffering & scanning methods for large files in python

    - by eblume
    The description of the problem I am having is a bit complicated, and I will err on the side of providing more complete information. For the impatient, here is the briefest way I can summarize it: What is the fastest (least execution time) way to split a text file into ALL (overlapping) substrings of size N (bound N, e.g. 36) while throwing out newline characters?

    I am writing a module which parses files in the FASTA ascii-based genome format. These files comprise what is known as the 'hg18' human reference genome, which you can download from the UCSC genome browser (go slugs!) if you like. As you will notice, the genome files are composed of chr[1..22].fa and chr[XY].fa, as well as a set of other small files which are not used in this module. Several modules already exist for parsing FASTA files, such as BioPython's SeqIO. (Sorry, I'd post a link, but I don't have the points to do so yet.) Unfortunately, every module I've been able to find doesn't do the specific operation I am trying to do. My module needs to split the genome data ('CAGTACGTCAGACTATACGGAGCTA' could be a line, for instance) into every single overlapping N-length substring. Let me give an example using a very small file (the actual chromosome files are between 355 and 20 million characters long) and N=8:

        >>> import cStringIO
        >>> example_file = cStringIO.StringIO("""\
        ... >header
        ... CAGTcag
        ... TFgcACF
        ... """)
        >>> for read in parse(example_file):
        ...     print read
        ...
        CAGTCAGTF
        AGTCAGTFG
        GTCAGTFGC
        TCAGTFGCA
        CAGTFGCAC
        AGTFGCACF

    The function that I found had the absolute best performance from the methods I could think of is this:

        def parse(file):
            size = 8  # of course in my code this is a function argument
            file.readline()  # skip past the header
            buffer = ''
            for line in file:
                buffer += line.rstrip().upper()
                while len(buffer) >= size:
                    yield buffer[:size]
                    buffer = buffer[1:]

    This works, but unfortunately it still takes about 1.5 hours (see note below) to parse the human genome this way. Perhaps this is the very best I am going to see with this method (a complete code refactor might be in order, but I'd like to avoid it, as this approach has some very specific advantages in other areas of the code), but I thought I would turn this over to the community. Thanks!

    Note: this time includes a lot of extra calculation, such as computing the opposing-strand read and doing hashtable lookups on a hash of approximately 5G in size.

    Post-answer conclusion: It turns out that using fileobj.read() and then manipulating the resulting string (string.replace(), etc.) took relatively little time and memory compared to the remainder of the program, and so I used that approach. Thanks everyone!
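
    For reference, a minimal sketch of the whole-file approach described in the conclusion (the helper name and default are my assumptions, not the author's code):

        def reads(path, size=36):
            with open(path) as f:
                f.readline()  # skip the FASTA header line
                data = f.read().replace('\n', '').upper()
            for i in xrange(len(data) - size + 1):
                yield data[i:i + size]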

    Read the article

  • Non standard interaction among two tables to avoid very large merge

    - by riko
    Suppose I have two tables A and B. Table A has a multi-level index (a, b) and one column (ts). b uniquely determines ts.

        A = pd.DataFrame(
            [('a', 'x', 4), ('a', 'y', 6), ('a', 'z', 5),
             ('b', 'x', 4), ('b', 'z', 5), ('c', 'y', 6)],
            columns=['a', 'b', 'ts']).set_index(['a', 'b'])
        AA = A.reset_index()

    Table B is another one-column (ts) table with a non-unique index (a). The ts's are sorted "inside" each group, i.e., B.ix[x] is sorted for each x. Moreover, there is always a value in B.ix[x] that is greater than or equal to the values in A.

        B = pd.DataFrame(
            dict(a=list('aaaaabbcccccc'),
                 ts=[1, 2, 4, 5, 7, 7, 8, 1, 2, 4, 5, 8, 9])).set_index('a')

    The semantics here are that B contains observations of occurrences of an event of the type indicated by the index. I would like to find from B the timestamp of the first occurrence of each event type after the timestamp specified in A, for each value of b. In other words, I would like to get a table with the same shape as A that, instead of ts, contains the "minimum value occurring after ts" as specified by table B. So my goal would be:

        C:
        ('a', 'x')    4
        ('a', 'y')    7
        ('a', 'z')    5
        ('b', 'x')    7
        ('b', 'z')    7
        ('c', 'y')    8

    I have some working code, but it is terribly slow:

        C = AA.apply(lambda row: (
            row[0], row[1],
            B.ix[row[0]].irow(np.searchsorted(B.ts[row[0]], row[2]))),
            axis=1).set_index(['a', 'b'])

    Profiling shows the culprit is obviously B.ix[row[0]].irow(np.searchsorted(B.ts[row[0]], row[2])). However, standard solutions using merge/join would take too much RAM in the long run. Consider that now I have 1000 a's; assume the average number of b's per a to be constant (probably 100-200), and consider that the number of observations per a is probably on the order of 300. In production I will have 1000 times more a's. 1,000,000 x 200 x 300 = 60,000,000,000 rows may be a bit too much to keep in RAM, especially considering that the data I need is perfectly described by a C like the one I discussed above. How would I improve the performance?
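
    A hedged sketch of one way to speed this up (my assumption, not the poster's code): do one searchsorted call per group of a instead of one per row, since B.ts is already sorted within each group.

        import numpy as np
        import pandas as pd

        def first_at_or_after(AA, B):
            pieces = []
            for a, grp in AA.groupby('a'):
                ts = np.asarray(B.ts[a])                     # sorted within the group
                idx = np.searchsorted(ts, grp['ts'].values)  # first position with ts >= value
                pieces.append(pd.DataFrame({'a': grp['a'], 'b': grp['b'],
                                            'ts': ts[idx]}))
            return pd.concat(pieces).set_index(['a', 'b'])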

    Read the article

  • Load large images into Bitmap?

    - by GuyNoir
    I'm trying to make a basic application that displays an image from the camera, but when I try to load the .jpg in from the sdcard with BitmapFactory.decodeFile, it returns null. It doesn't give an out-of-memory error, which I find strange, but the exact same code works fine on smaller images. How does the stock Gallery app display huge pictures from the camera with so little memory?
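
    A hedged sketch of the usual technique (subsampled decoding via BitmapFactory.Options; path, maxWidth, and maxHeight are assumed variables):

        BitmapFactory.Options opts = new BitmapFactory.Options();
        opts.inJustDecodeBounds = true;             // decode dimensions only, no pixels
        BitmapFactory.decodeFile(path, opts);

        int sample = 1;
        while (opts.outWidth / sample > maxWidth || opts.outHeight / sample > maxHeight) {
            sample *= 2;                            // powers of two decode fastest
        }

        opts = new BitmapFactory.Options();
        opts.inSampleSize = sample;                 // subsample while decoding
        Bitmap bmp = BitmapFactory.decodeFile(path, opts);
        // bmp can still be null if the file is corrupt, so check before use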

    Read the article

  • designing an ASP.NET MVC partial view - showing user choices within a large set of choices

    - by p.campbell
    Consider a partial view whose job is to render markup for a pizza order. The desire is to reuse this partial view in the Create, Details, and Update views. It will always be passed an IEnumerable<Topping>, and output a multitude of checkboxes. There are lots... maybe 40 in all (yes, that might smell). A-OK so far.

    Problem: The question is around how to include the user's choices on the Details and Update views. From the datastore, we've got a List<ChosenTopping>. The goal is to have each checkbox set to true for each chosen topping. What's the easiest to read, or most maintainable, way to achieve this?

    Potential solutions:

    1. Create a ViewModel with the List<Topping> and List<ChosenTopping>. Write out the checkboxes as per normal. While writing each, check whether the ToppingID exists in the list of ChosenTopping.
    2. Create a new ViewModel that's a hybrid of both. Perhaps call it DisplayTopping or similar. It would have properties ID, Name, and IsUserChosen. The respective controller methods for Create, Update, and Details would have to create this new collection with respect to the user's choices as they see fit. The Create controller method would basically set all to false so that it appears to be a blank slate (a sketch of this follows below).

    The real application isn't pizza, and the organization is a bit different from this fake example, but the concept is the same. Is it wise to reuse the control for the 3 different scenarios? How better can you display the list of options plus the user's current choices? Would you use jQuery instead to show the user selections? Any other thoughts on the potential smell of splashing up a whole bunch of checkboxes?
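
    A hedged sketch of option 2 (the class shape is mine; allToppings and chosen are assumed inputs, names follow the post):

        public class DisplayTopping
        {
            public int ID { get; set; }
            public string Name { get; set; }
            public bool IsUserChosen { get; set; }
        }

        // In each controller action, project the catalogue against the choices:
        List<DisplayTopping> model = allToppings
            .Select(t => new DisplayTopping
            {
                ID = t.ToppingID,
                Name = t.Name,
                IsUserChosen = chosen.Any(c => c.ToppingID == t.ToppingID)
            })
            .ToList();
        // For Create, pass an empty "chosen" list and every box renders unchecked.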

    Read the article

  • Assigning large UInt32 constants in VB.Net

    - by Kumba
    I inquired about VB's erratic behavior of treating all numerics as signed types back in this question, and from the accepted answer there, was able to get by. Per that answer (Visual Basic Literals): "Also keep in mind you can add literals to your code in VB.net and explicitly state constants as unsigned." So I tried this:

        Friend Const POW_1_32 As UInt32 = 4294967296UI

    And VB.NET throws an Overflow error in the IDE. Pulling out the integer overflow checks doesn't seem to help; this appears to be a flaw in the IDE itself. This, however, doesn't generate an error:

        Friend Const POW_1_32 As UInt64 = 4294967296UL

    So this suggests to me that the IDE isn't properly parsing the code and understanding the difference between Int32 and UInt32. Any suggested workarounds and/or possible clues on when MS will make unsigned data types intrinsic to the framework instead of the hacks they currently are?
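
    One hedged observation (mine, not from the post): 4294967296 is 2^32, which is one past UInt32.MaxValue (4294967295), so the overflow diagnostic is arguably correct here; the boundary looks like:

        Friend Const MAX_UINT32 As UInt32 = 4294967295UI  ' 2^32 - 1 fits in 32 bits
        Friend Const POW_1_32 As UInt64 = 4294967296UL    ' 2^32 itself needs 64 bits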

    Read the article

  • filesize of large files in c

    - by endeavormac
    How can I get the size of a file in C when the file is larger than 4GB? ftell returns a 4-byte signed long, limiting it to 2GB. stat has a field of type off_t, which is also 4 bytes (not sure of the sign), so at most it can tell me the size of a 4GB file. What if the file is larger than 4GB?
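
    A hedged sketch, assuming a POSIX system (the usual fix is compiling with 64-bit file offsets so off_t widens to 8 bytes):

        /* must come before any #include */
        #define _FILE_OFFSET_BITS 64
        #include <sys/stat.h>

        long long file_size(const char *path)
        {
            struct stat st;
            if (stat(path, &st) != 0)
                return -1;                /* caller can inspect errno */
            return (long long)st.st_size; /* off_t is 64-bit under this build */
        }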

    Read the article

  • Document -> Flash viewer, not hosted

    - by Dane
    I've got a content management solution where we present scanned images (TIFF), PDFs, and Word docs for viewing. While we can simply embed a PDF, depending on user preferences it can be a bit fiddly and not user-intuitive. I'd like a solution like Scribd, EmbedIt, etc., but not hosted: I want to run the application on our own servers and manage it that way (for legal reasons, and our clients won't buy the service if it's hosted somewhere else). SWFTools looks a little basic for my needs, plus it doesn't do doc, docx, or ppt. Any options? It doesn't have to be free, but free would be ideal.

    Read the article

  • ActionBar SpinnerAdapter Large Branding followed by selection (spinner)

    - by SatanEnglish
    I'm trying to implement a spinner in the action bar that has a brand name above it, using the ActionBar setListNavigationCallbacks method if possible:

        actionBar.setNavigationMode(ActionBar.NAVIGATION_MODE_LIST);
        actionBar.setListNavigationCallbacks(mSpinnerAdapter, null);

    Can anyone give me an idea of how to do this? I would put some code here, but I have no idea where to begin, as I have not managed to find relevant information yet. Edit: Using v4.0.
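
    A hedged starting point (R.array.sections and the listener body are my assumptions; whether the brand title renders next to the list varies by device):

        SpinnerAdapter adapter = ArrayAdapter.createFromResource(
                this, R.array.sections, android.R.layout.simple_spinner_dropdown_item);

        ActionBar actionBar = getActionBar();
        actionBar.setNavigationMode(ActionBar.NAVIGATION_MODE_LIST);
        actionBar.setDisplayShowTitleEnabled(true);  // try to keep the brand title visible
        actionBar.setListNavigationCallbacks(adapter, new ActionBar.OnNavigationListener() {
            @Override
            public boolean onNavigationItemSelected(int position, long id) {
                // react to the selection here
                return true;
            }
        });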

    Read the article

  • Problem processing large data using Applet-Servlet communication

    - by Marquinio
    Hi everyone. I have an Applet that makes a request to a Servlet. On the servlet it's using the PrintWriter to write the response back to the Applet:

        out.println("Field1|Field2|Field3|Field4|Field5......|Field10");

    There are about 15000 records, so the out.println() gets executed about 15000 times. The problem is that when the Applet gets the response from the Servlet, it takes about 15 minutes to process the records. I placed System.out.println's, and processing is paused at around 5000; then after 15 minutes it continues processing and then it's done. Has anyone faced a similar problem? The servlet takes about 2 seconds to execute, so it seems that the browser/Applet is too slow to process the records. Any ideas appreciated. Thanks.
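
    A hedged sketch of a buffered reader for the applet side (servletUrl is a placeholder; my guess, not confirmed by the post, is that unbuffered reads or quadratic string concatenation cause the stall):

        // Inside the applet class:
        static List<String[]> fetchRecords(URL servletUrl) throws IOException {
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(servletUrl.openConnection().getInputStream()),
                    64 * 1024);                     // large buffer: fewer network stalls
            List<String[]> records = new ArrayList<String[]>();
            String line;
            while ((line = in.readLine()) != null) {
                records.add(line.split("\\|"));     // one split per record
            }
            in.close();
            return records;
        }

    If the applet accumulates the 15000 records into a single String with +=, that alone is quadratic and would explain the long pause partway through.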

    Read the article

  • Timeout on Large mySQL Query

    - by Bob Stewart
    I have this query:

        $theQuery = mysql_query("SELECT phrase, date from wordList WHERE group='nouns'");
        while ($getWords = mysql_fetch_array($theQuery)) {
            echo "$getWords[phrase] created on $getWords[date]<br>";
        }

    The data table "wordList" contains 75,000 records in the group "nouns", and every time I load the code I am returned an error. Help!
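
    Two hedged notes (mine, not from the post): group is a reserved word in MySQL, so the column needs backticks, and paging the rows keeps PHP's memory and execution-time limits out of the picture. A minimal sketch:

        $offset = 0;
        do {
            $result = mysql_query(sprintf(
                "SELECT phrase, date FROM wordList WHERE `group`='nouns' LIMIT %d, 1000",
                $offset));
            $count = 0;
            while ($row = mysql_fetch_array($result)) {
                echo "$row[phrase] created on $row[date]<br>";
                $count++;
            }
            $offset += 1000;
        } while ($count == 1000); // error handling omitted for brevity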

    Read the article

  • Hang during databinding of large amount of data to WPF DataGrid

    - by nihi_l_ist
    I'm using the WPFToolkit DataGrid control and do the binding this way:

        <WpfToolkit:DataGrid x:Name="dgGeneral"
                             SelectionMode="Single"
                             SelectionUnit="FullRow"
                             AutoGenerateColumns="False"
                             CanUserAddRows="False"
                             CanUserDeleteRows="False"
                             Grid.Row="1"
                             ItemsSource="{Binding Path=Conversations}" >

        public List<CONVERSATION> Conversations
        {
            get { return conversations; }
            set
            {
                if (conversations != value)
                {
                    conversations = value;
                    NotifyPropertyChanged("Conversations");
                }
            }
        }

        public event PropertyChangedEventHandler PropertyChanged;

        public void NotifyPropertyChanged(string propertyName)
        {
            if (PropertyChanged != null)
            {
                PropertyChanged(this, new PropertyChangedEventArgs(propertyName));
            }
        }

        public void GenerateData()
        {
            BackgroundWorker bw = new BackgroundWorker();
            bw.WorkerSupportsCancellation = bw.WorkerReportsProgress = true;
            List<CONVERSATION> list = new List<CONVERSATION>();
            bw.DoWork += delegate { list = RefreshGeneralData(); };
            bw.RunWorkerCompleted += delegate
            {
                try
                {
                    Conversations = list;
                }
                catch (Exception ex)
                {
                    CustomException.ExceptionLogCustomMessage(ex);
                }
            };
            bw.RunWorkerAsync();
        }

    Then in the main window I call GenerateData() after setting the DataContext of the window to an instance of the class containing GenerateData(). RefreshGeneralData() returns the data I want, and it returns it fast (I'm not posting the code I used during the grid's initialization, because I don't think it can be the reason). Overall there are near 2000 records and 6 columns, and the grid hangs for almost 10 seconds!
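
    A hedged first thing to check (my suggestion, not from the post): make sure UI virtualization is still active, since grouping or certain container styles can silently disable it and force all 2000 rows to be materialized at once.

        <WpfToolkit:DataGrid x:Name="dgGeneral"
                             ItemsSource="{Binding Path=Conversations}"
                             VirtualizingStackPanel.IsVirtualizing="True"
                             VirtualizingStackPanel.VirtualizationMode="Recycling" >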

    Read the article

  • jQuery and Ajax

    - by Banderdash
    Can anyone help with a jQuery snippet that would use Ajax to pull an XML file in on page load? I have a really clunky way of doing it without jQuery here:

        <script type="text/javascript">
        function loadXMLDoc()
        {
            if (window.XMLHttpRequest)
            { // code for IE7+, Firefox, Chrome, Opera, Safari
                xmlhttp = new XMLHttpRequest();
            }
            else
            { // code for IE6, IE5
                xmlhttp = new ActiveXObject("Microsoft.XMLHTTP");
            }
            xmlhttp.onreadystatechange = function()
            {
                if (xmlhttp.readyState == 4 && xmlhttp.status == 200)
                {
                    xmlDoc = xmlhttp.responseXML;
                    var txt = "";
                    x = xmlDoc.getElementsByTagName("title");
                    for (i = 0; i < x.length; i++)
                    {
                        txt = txt + x[i].childNodes[0].nodeValue + "<br />";
                    }
                    document.getElementById("checkedIn").innerHTML = txt;
                }
            }
            xmlhttp.open("GET", "ajax-response-data.xml", true);
            xmlhttp.send();
        }
        </script>

    Ideally, rather than having a click generate the list, it would do so on page load, showing the fields from the XML (title, author, and whether it is checked in or not, i.e. <book id="#" checked-in="0">). Would hug you for a solution.
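
    A hedged jQuery sketch (element and attribute names follow the post; how the checked-in flag is rendered is an assumption):

        $(function () {                      // runs on page load
            $.ajax({
                url: 'ajax-response-data.xml',
                dataType: 'xml',
                success: function (xml) {
                    var txt = '';
                    $(xml).find('book').each(function () {
                        var $book = $(this);
                        txt += $book.find('title').text() + ' by ' +
                               $book.find('author').text() +
                               ($book.attr('checked-in') === '1'
                                   ? ' (checked in)' : ' (checked out)') + '<br />';
                    });
                    $('#checkedIn').html(txt);
                }
            });
        });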

    Read the article

  • how can i strip formatting from word document using php

    - by shazia
    I want to display a Word file and then extract the content and display it in a separate textarea. I want to do away with the formatting as well. This is what I get when I read the file using PHP:

        ??????ÿÿÿÿ????????????????????????????????????????????????????????...Running Head: INTERNATIONAL BUSINESS International Business [Name of the writer] [Name of the institution] International Business Question 1 Increasing returns to scale (or economies of scale) in production is an indisputable phenomenon characterizing real world production, and, as such, they have long been recognized as a principal source of economic prosperity. Nonetheless, they have never played a major role in

    This is the code:

        $fh = fopen($newname, 'r');
        $contents = fread($fh, filesize($newname));
        fclose($fh);
        unlink($newname);
        echo "<br/>";
        echo $contents;

    How can I get rid of all these characters? Thanks
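
    A hedged sketch of one route (assumes the antiword command-line tool is installed; .doc is a binary format, so fread() alone can't yield clean text):

        // antiword extracts plain text from legacy .doc files
        $text = shell_exec('antiword ' . escapeshellarg($newname));
        if ($text === null) {
            $text = 'Could not convert the document.';
        }
        echo '<textarea rows="20" cols="80">' . htmlspecialchars($text) . '</textarea>';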

    Read the article

  • How to handle large table in MySQL ?

    - by Frantz Miccoli
    I've a database used to store items and properties about these items. The number of properties is extensible, thus there is a join table to store each property value associated with an item.

        CREATE TABLE `item_property` (
            `property_id` int(11) NOT NULL,
            `item_id` int(11) NOT NULL,
            `value` double NOT NULL,
            PRIMARY KEY (`property_id`, `item_id`),
            KEY `item_id` (`item_id`)
        ) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;

    This database has two goals: storing (which has first priority and has to be very quick; I would like to perform many inserts, hundreds, in a few seconds), and retrieving data (selects using item_id and property_id), which is a second priority (it can be slower, but not too much, because this would ruin my usage of the DB). Currently this table hosts 1.6 billion entries and a simple count can take up to 2 minutes... Inserting isn't fast enough to be usable. I'm using Zend_Db to access my data and would really be happy if you don't suggest that I develop anything on the PHP side. Thanks for your advice!
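
    A hedged sketch of the most common quick win (my suggestion, not from the post): batch rows into multi-row INSERTs inside one transaction, so InnoDB pays for index maintenance and the fsync once per batch instead of once per row.

        START TRANSACTION;
        INSERT INTO item_property (property_id, item_id, value) VALUES
            (1, 101, 0.5),
            (1, 102, 0.7),
            (2, 101, 1.25);  -- hundreds of rows per statement
        COMMIT;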

    Read the article

  • inserting unique date into txt document

    - by durian
    I'm trying this script to insert only a unique date into a text file, but it isn't working properly:

        $log_file_name = "logfile.txt";
        $log_file_path = "log_files/$id/$log_file_name";
        if (file_exists($log_file_path)) {
            $not = "not";
            $todaydate = date('d,m,Y');
            $today = "$todaydate;";
            $strlength = strlen($today);
            $file_contents = file_get_contents($log_file_path);
            $file_contents_arry = explode(";", $file_contents);
            if (!in_array($todaytodaydate, $file_contents_arry)) {
                $append = fopen($log_file_path, 'a');
                $write = fwrite($append, $today); // writes our string to our file
                $close = fclose($append);         // closes our file
            } else {
                $append = fopen($log_file_path, 'a');
                $write = fwrite($append, $not);   // writes our string to our file
                $close = fclose($append);         // closes our file
            }
        } else {
            mkdir("log_files/$id", 0700);
            $todaydate = date('d,m,Y');
            $today = "$todaydate;";
            $strlength = strlen($today);
            $create = fopen($log_file_path, "w");
            $write = fwrite($create, $today, $strlength); // writes our string to our file
            $close = fclose($create);                     // closes our file
        }

    The problem is with the if/else statement, where "not" should be written if the date is already in the array.
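
    A hedged sketch of a tighter version (my rewrite, not the poster's; note that the original tests $todaytodaydate, an undefined variable, where it presumably means $todaydate):

        $today = date('d,m,Y');
        $dir = "log_files/$id";
        if (!is_dir($dir)) {
            mkdir($dir, 0700);
        }
        $path = "$dir/logfile.txt";
        $entries = file_exists($path)
            ? explode(';', file_get_contents($path))
            : array();
        if (!in_array($today, $entries)) {
            file_put_contents($path, "$today;", FILE_APPEND); // appends at most once per day
        }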

    Read the article

  • How can I write an XML on my hard drive to GetRequestStream

    - by swolff1978
    I need to post raw XML to a site and read the response. With the following code I keep getting an "Unknown File Format" error, and I'm not sure why.

        XmlDocument sampleRequest = new XmlDocument();
        sampleRequest.Load(@"C:\SampleRequest.xml");
        byte[] bytes = Encoding.UTF8.GetBytes(sampleRequest.ToString());

        string uri = "https://www.sample-gateway.com/gw.aspx";
        req = WebRequest.Create(uri);
        req.Method = "POST";
        req.ContentLength = bytes.Length;
        req.ContentType = "text/xml";

        using (var requestStream = req.GetRequestStream())
        {
            requestStream.Write(bytes, 0, bytes.Length);
        }

        // Send the data to the webserver
        rsp = req.GetResponse();
        XmlDocument responseXML = new XmlDocument();
        using (var responseStream = rsp.GetResponseStream())
        {
            responseXML.Load(responseStream);
        }

    I am fairly certain my issue is what/how I am writing to the requestStream, so: how can I modify that code so that I may write an XML file located on the hard drive to the request stream?
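
    A hedged sketch of the likely culprit (my reading, not confirmed by the post): XmlDocument.ToString() does not return the markup, it returns the type name, so the bytes sent are literally "System.Xml.XmlDocument". Two ways around it:

        // Either serialize the markup explicitly...
        byte[] bytes = Encoding.UTF8.GetBytes(sampleRequest.OuterXml);

        // ...or skip the intermediate string and stream the document directly:
        using (var requestStream = req.GetRequestStream())
        {
            sampleRequest.Save(requestStream);
        }

    With the second form, don't set ContentLength from the stale byte array.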

    Read the article

  • Extract anything that looks like links from large amount of data in python

    - by Riz
    Hi, I have around 5 GB of HTML data which I want to process to find links to a set of websites and perform some additional filtering. Right now I use a simple regexp for each site and iterate over them, searching for matches. In my case links can be outside of "a" tags and be not well formed in many ways (like "\n" in the middle of a link), so I try to grab as many "links" as I can and check them later in other scripts (so no BeautifulSoup/lxml/etc). The problem is that my script is pretty slow, so I am thinking about any ways to speed it up. I am writing a set of tests to check different approaches, but hope to get some advice :) Right now I am thinking about getting all links without filtering first (maybe using a C module or a standalone app which doesn't use regexps, but a simple search to get the start and end of every link) and then using regexps to match the ones I need.
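
    A hedged sketch of one speed-up (my assumption, not the poster's code): fold every site into a single compiled, alternation-based pattern, so the 5 GB is scanned once instead of once per site.

        import re

        SITES = ['example.com', 'example.org']  # placeholder site list

        pattern = re.compile(
            r'https?://[^\s"\'<>]*(?:' + '|'.join(map(re.escape, SITES)) + r')[^\s"\'<>]*',
            re.IGNORECASE)

        def candidate_links(fileobj, chunk_size=1 << 20):
            while True:
                chunk = fileobj.read(chunk_size)
                if not chunk:
                    break
                # strip newlines first so "\n" inside a link doesn't break the match
                for m in pattern.finditer(chunk.replace('\n', '')):
                    yield m.group(0)

    Caveat: links spanning a chunk boundary are missed; keep a small overlap between chunks if that matters.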

    Read the article

  • Anchors within the document and their position.

    - by Jose Vega
    On the following website, www.josecvega.com, I have a navigation bar with years that link to sections on the same page. Unfortunately, it is not working the way I hoped: when the user selects a year, the page jumps to that section but puts it at the very top of the page, and I have a fixed div at the top of the page that covers the section and prevents it from displaying properly. What can I do to make this work? It's hard to explain my situation, but it can be seen by going to www.josecvega.com and clicking one of the years.
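
    A hedged sketch of the usual CSS fix (the 120px bar height is an assumption; measure the real fixed div):

        /* Modern browsers: offset every anchor target below the fixed bar */
        :target { scroll-margin-top: 120px; }

        /* Older fallback: shift each anchor's effective landing point upward */
        .year-anchor { padding-top: 120px; margin-top: -120px; }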

    Read the article

  • Stack Overflow Accessing Large Vector

    - by cam
    I'm getting a stack overflow on the first iteration of this for loop:

        for (int q = 0; q < SIZEN; q++) {
            cout << nList[q] << " ";
        }

    nList is a vector of type int with 376 items. The size of nList depends on a constant defined in the program. The program works for every value up to 376, then after 376 it stops working. Any thoughts?
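
    A hedged diagnostic sketch (my suggestion, not the poster's code): vector::at() throws std::out_of_range instead of silently reading past the end, which quickly distinguishes a bad index from a genuine stack problem.

        #include <iostream>
        #include <vector>

        void dump(const std::vector<int>& nList) {
            for (std::size_t q = 0; q < nList.size(); ++q) {
                std::cout << nList.at(q) << " ";  // throws on out-of-range access
            }
            std::cout << "\n";
        }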

    Read the article

  • Trace large C++ code base?

    - by anon
    Problem: I have just inherited this 200K LOC source code base. There's only a small part of it I need (and I want to rip all else out). What I would like to do is to be able to:

    1) run the program a few times
    2) have something (here's where you come in) record which lines of code get executed
    3) then rip out all the irrelevant lines of code

    I realize this has "problems" in the form of "different args will take different paths", but my needs are very specific, and I just want something to get me started on the right path for ripping stuff out (I'll fine-tune those special cases later). Thanks!
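
    A hedged sketch of one standard way to get this (gcc's line-coverage tooling; flags assume a g++ toolchain):

        # build with line-coverage instrumentation
        g++ --coverage -O0 -g -o app *.cpp
        # run the program a few times with representative arguments
        ./app first-args
        ./app second-args
        # produce annotated sources; lines never executed are marked with #####
        gcov *.cpp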

    Read the article
