Search Results

Search found 36925 results on 1477 pages for 'large xml document'.

Page 213/1477 | < Previous Page | 209 210 211 212 213 214 215 216 217 218 219 220  | Next Page >

  • Functional Specifications vs. Requirements Document

    - by KP
    Currently at my company there are some changes going on regarding project documentation. A LOT of time and effort is spent discussing functional specs vs. requirements docs, but I don't think anyone here understands why you would use one over the other, so I don't understand the difference myself. Can someone shed some light on this? Links to articles, blog posts, etc. would be helpful too. Thanks.

    Read the article

  • Make process crash on large memory allocation

    - by Pieter
    I'm trying to find a significant memory leak (15 MB at a time, with allocations like this in multiple places). I checked the most obvious places, and then used AQTime, but I still can't pinpoint it. Now I see two options left:

    1) Use SetProcessWorkingSetSize. I've tried this, but my process happily keeps on running when using up more than 150 MB:

        DWORD MemorySize = 150*1024*1024;
        SetProcessWorkingSetSize( GetCurrentProcess(), MemorySize/2, MemorySize*2 );

    2) Put a breakpoint on any allocation of more than 1 MB at a time. How should I do this? Overload operator new with an 'if > 1 MB' check inside?
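
    A minimal sketch of option 2, assuming Windows (DebugBreak comes from <windows.h>) and that the leaking allocations go through the global operator new; the replacement traps any request over 1 MB under the debugger and forwards everything to malloc as usual:

        #include <cstdlib>
        #include <new>
        #include <windows.h>

        // Replacement global operator new: stop in the debugger on any
        // allocation larger than 1 MB, then allocate normally.
        void* operator new(std::size_t size)
        {
            if (size > 1024 * 1024)
                DebugBreak();               // inspect the call stack here

            if (void* p = std::malloc(size))
                return p;
            throw std::bad_alloc();
        }

        void operator delete(void* p) noexcept
        {
            std::free(p);
        }

    When the break fires, the debugger's call stack points straight at the code requesting the suspicious 15 MB blocks.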

    Read the article

  • C++ a class with an array of structs, without knowing how large an array I need

    - by Dominic Bou-Samra
    I'm new to C++ and, for that matter, OO programming. I have a class with fields like firstname, age, school, etc. I also need to store other information, for instance where people have travelled and in what year. I can't really declare another full class just to hold travelDestination and year, so I think a struct might be best. This is just an example:

        struct travel {
            string travelDest;
            string year;
        };

    The issue is that people are likely to have travelled different amounts, so I was thinking of having an array of travel structs to hold the data. But how do I create a fixed-size array to hold them without knowing how big it needs to be? Perhaps I am going about this completely the wrong way, so any suggestions for a better approach would be appreciated.
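
    A minimal sketch of the usual answer (the Person class here is a hypothetical stand-in for the class described): std::vector grows on demand, so no fixed array size is needed up front.

        #include <string>
        #include <vector>

        struct Travel {
            std::string travelDest;
            std::string year;
        };

        class Person {
        public:
            // Append one trip; the vector resizes itself as needed.
            void addTravel(const std::string& dest, const std::string& year) {
                travels_.push_back(Travel{dest, year});
            }

        private:
            std::string firstName_;
            int age_ = 0;
            std::vector<Travel> travels_;   // holds any number of trips
        };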

    Read the article

  • Counting elements and reading attributes with .NET 2.0?

    - by Prix
    I have an application on .NET 2.0 and I am having some difficulty with it, as I am more used to LINQ. The XML file looks like this:

        <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
        <updates>
          <files>
            <file url="files/filename.ext" checksum="06B9EEA618EEFF53D0E9B97C33C4D3DE3492E086" folder="bin" system="0" size="40448" />
            <file url="files/filename.ext" checksum="CA8078D1FDCBD589D3769D293014154B8854D6A9" folder="" system="0" size="216" />
            <file url="files/filename.ext" checksum="CA8078D1FDCBD589D3769D293014154B8854D6A9" folder="" system="0" size="216" />
          </files>
        </updates>

    The file is downloaded and read on the fly:

        XmlDocument readXML = new XmlDocument();
        readXML.LoadXml(xmlData);

    Initially I was thinking it would go something like this:

        XmlElement root = readXML.DocumentElement;
        XmlNodeList nodes = root.SelectNodes("//files");
        foreach (XmlNode node in nodes)
        {
            // ... read it ...
        }

    But before reading them I need to know how many there are, to use for my progress bar, and I am also clueless about how to grab the attributes of the file elements. How can I count how many file ELEMENTS I have (before entering the foreach, of course) and read their attributes? Overall it is not reading my XML very well.
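
    A minimal sketch, assuming xmlData holds the document above: selecting the <file> elements themselves (rather than their <files> parent) gives a node list whose Count can set the progress bar maximum before the loop starts.

        using System.Xml;

        class UpdateReader
        {
            static void ReadFiles(string xmlData)
            {
                XmlDocument doc = new XmlDocument();
                doc.LoadXml(xmlData);

                XmlNodeList files = doc.SelectNodes("/updates/files/file");
                int total = files.Count;            // progress bar maximum

                foreach (XmlNode file in files)
                {
                    string url      = file.Attributes["url"].Value;
                    string checksum = file.Attributes["checksum"].Value;
                    int size        = int.Parse(file.Attributes["size"].Value);
                    // ... process the file, then advance the progress bar by one
                }
            }
        }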

    Read the article

  • MalformedURLException with file URI

    - by Paul Reiners
    While executing the following code:

        doc = builder.parse(file);

    where doc is an instance of org.w3c.dom.Document and builder is an instance of javax.xml.parsers.DocumentBuilder, I'm getting the following exception:

        Exception in thread "main" java.net.MalformedURLException: unknown protocol: c
            at java.net.URL.<init>(Unknown Source)
            at java.net.URL.<init>(Unknown Source)
            at java.net.URL.<init>(Unknown Source)
            at com.sun.org.apache.xerces.internal.impl.XMLEntityManager.setupCurrentEntity(Unknown Source)
            at com.sun.org.apache.xerces.internal.impl.XMLEntityManager.startEntity(Unknown Source)
            at com.sun.org.apache.xerces.internal.impl.XMLEntityManager.startDTDEntity(Unknown Source)
            at com.sun.org.apache.xerces.internal.impl.XMLDTDScannerImpl.setInputSource(Unknown Source)
            at com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl$DTDDriver.dispatch(Unknown Source)
            at com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl$DTDDriver.next(Unknown Source)
            at com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl$PrologDriver.next(Unknown Source)
            at com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl.next(Unknown Source)
            at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanDocument(Unknown Source)
            at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(Unknown Source)
            at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(Unknown Source)
            at com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(Unknown Source)
            at com.sun.org.apache.xerces.internal.parsers.DOMParser.parse(Unknown Source)
            at com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderImpl.parse(Unknown Source)
            at javax.xml.parsers.DocumentBuilder.parse(Unknown Source)
            at com.acme.ItemToThetaValues.createFiles(ItemToThetaValues.java:47)

    It's choking on this line of the file:

        <!DOCTYPE questestinterop SYSTEM "C:\Program Files\Acme\parsers\acme_full.dtd">

    I am not getting this error on my machine, but a user is getting it on his. We are both using version 6 of the Sun JRE. The error also occurs when he uses double backslashes in the path instead of single backslashes, and when he uses forward slashes instead of backslashes. First of all, is the XML correct? Is the path expressed correctly? Second, why does this error occur on one computer but not on another?
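
    A sketch of the usual explanation: a SYSTEM identifier must be a URI, so in a bare "C:\..." path the parser reads "c" as a URL scheme; whether that surfaces can depend on how each machine establishes the document's base URI. Converting the path with File.toURI() produces the form the DOCTYPE should use:

        import java.io.File;

        public class DtdUriDemo {
            public static void main(String[] args) {
                File dtd = new File("C:\\Program Files\\Acme\\parsers\\acme_full.dtd");
                // Prints file:/C:/Program%20Files/Acme/parsers/acme_full.dtd
                System.out.println(dtd.toURI());
                // so the DOCTYPE line becomes:
                // <!DOCTYPE questestinterop SYSTEM
                //     "file:/C:/Program%20Files/Acme/parsers/acme_full.dtd">
            }
        }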

    Read the article

  • Creating a Large Matrix in ff

    - by Ryan Rosario
    I am trying to create a huge matrix in ff, and I know that ff is good for this sort of thing. But there is a major problem: the dimensions of the matrix exceed .Machine$integer.max! I am running on a 64-bit machine, using 64-bit R and 64-bit ff. Is there any way to get around this problem? It's been suggested that R is using the MAXINT value from stdint.h. Is there any way to fix this without changing that file and possibly breaking the build?

        > ffMatrix <- ff(vmode="boolean", dim=c(1e10,1e10))
        Error in if (length < 0 || length > .Machine$integer.max) stop("length must be between 1 and .Machine$integer.max") :
          missing value where TRUE/FALSE needed
        In addition: Warning message:
        In ff(vmode = "boolean", dim = c(1e+10, 1e+10)) : NAs introduced by coercion
        > 1e+10 > .Machine$integer.max
        [1] TRUE

    Read the article

  • What would you recommend for a large-scale Java data grid technology: Terracotta, GigaSpaces, Coherence?

    - by cliff.meyers
    I've been reading up on so-called "data grid" solutions for the Java platform, including Terracotta, GigaSpaces and Coherence. I was wondering if anyone has real-world experience working with any of these tools and could share it. I'm also really curious to know what scale of deployment people have worked with: are we talking 2-4 node clusters, or have you worked with anything significantly larger than that? I'm attracted to Terracotta because of its "drop-in" support for Hibernate and Spring, both of which we use heavily. I also like how it decorates bytecode based on configuration and doesn't require you to program against a "grid API." I'm not aware of any advantages to tools that take the explicit-API approach, but would love to hear about them if they do in fact exist. :) I've also spent time reading about memcached, but am more interested in feedback on these three specific solutions. I would be curious to hear how they measure up against memcached in the event someone has used both.

    Read the article

  • Excel document incorrect format

    - by Jim
    I have a macro-enabled workbook. I rename the .xlsm file to [FileName].xlsm.zip and unzip it, which gives me some folders. I then put these extracted folders into another folder, zip that back up, and change the extension back to .xlsm. When I now try to open it, I get an unreadable-content error. I am not changing any content here, just extracting and zipping it back. What could be the problem?
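
    One common cause, sketched here with the Info-ZIP command-line tool (file and folder names are hypothetical): the package parts must sit at the root of the archive, so repack from inside the extracted folder. Zipping the containing folder from outside adds an extra top-level directory to every entry, and Excel then reports the workbook as unreadable.

        cd extracted_folder            # contains [Content_Types].xml, xl/, docProps/, _rels/
        zip -r ../Book1.xlsm.zip .     # entries stay relative to the package root
        # then rename Book1.xlsm.zip back to Book1.xlsm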

    Read the article

  • ZipArchive memory problems on iPhone for large archive

    - by Mithin
    Hi, I am trying to compress multiple files into a single zip archive, and I am running into low-memory warnings. Since the complete zip file is loaded into memory, I guess that's the problem. Is there a way I can manage the compression/decompression better using ZipArchive, so that not all the data is in memory at once? Thanks!

    Read the article

  • Recommendations on Trimming Large Amounts of Text from a DOM Object

    - by aronchick
    I'm doing some in-browser editing, and I have content on the order of 20k characters in a <pre>, so it looks something like:

        <pre>
        Text 1
        Text 2
        Text 3
        Text 4
        [...]
        Text 20,000
        </pre>

    I'd like to use jQuery to trim it down when someone hits a button to chop, but I'm having trouble doing it without overloading the browser. Assume I know the characters to cut are at positions 16,510 - 17,888. I was using:

        jQuery('#textsection').html(jQuery('textarea').html().substr(range.start));

    But browsers seem to enjoy crashing when I do this. Alternatives?
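
    A sketch of one alternative, assuming the <pre> holds plain text: reading and writing with .text() and joining two slices avoids re-parsing the whole 20k-character string as HTML, which is the expensive part of .html().

        // Read the raw text once, drop characters 16,510 through 17,888,
        // and write the result back as text rather than HTML.
        var t = jQuery('#textsection').text();
        jQuery('#textsection').text(t.slice(0, 16510) + t.slice(17888));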

    Read the article

  • Using Large Lists

    - by cam
    In an Outlook AddIn I'm working on, I use a list to grab all the messages in the current folder, then process them, then save them. First I create a list of all messages, then I create another list from that one, and finally I create a third list of the messages that need to be moved. Essentially they are all copies of each other, and I made it this way to keep things organized. Would it increase performance if I used only one list? I thought lists just held references to the actual items.
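
    A minimal sketch confirming that intuition (MailItem here is a hypothetical stand-in for the Outlook message type): copying a list of reference types copies only the references, so the extra lists cost one reference per entry, not a copy of each message.

        using System.Collections.Generic;

        class MailItem { public string Subject; }

        class Demo
        {
            static void Main()
            {
                List<MailItem> all = new List<MailItem> { new MailItem { Subject = "a" } };
                List<MailItem> toMove = new List<MailItem>(all);   // new list, same objects

                toMove[0].Subject = "changed";
                System.Console.WriteLine(all[0].Subject);          // prints "changed"
            }
        }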

    Read the article

  • Efficiently finding the shortest path in large graphs

    - by Björn Lindqvist
    I'm looking for a way to find, in real time, the shortest path between nodes in a huge graph: it has hundreds of thousands of vertices and millions of edges. I know this question has been asked before, and I guess the answer is to use breadth-first search, but I'm more interested in knowing what software you can use to implement it. For example, it would be totally perfect if a library (with Python bindings!) already existed for performing BFS on undirected graphs.
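
    A minimal sketch with networkx, one library that fits the request; for hundreds of thousands of vertices, igraph and graph-tool (C/C++ cores with Python bindings) are often suggested as faster alternatives.

        # BFS-based shortest path on a small toy graph; the same two calls
        # work unchanged on a large undirected graph.
        import networkx as nx

        G = nx.Graph()
        G.add_edges_from([(1, 2), (2, 3), (3, 4), (1, 4), (4, 5)])

        print(nx.shortest_path(G, source=1, target=5))   # e.g. [1, 4, 5]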

    Read the article

  • Removing related elements using XSLT 1.0

    - by pmdarrow
    I'm attempting to remove Component elements from the XML below that have File children with the extension "config". I've managed to do this part, but I also need to remove the matching ComponentRef elements that have the same "Id" values as these Components.

        <Fragment>
          <DirectoryRef Id="MyWebsite">
            <Component Id="Comp1">
              <File Source="Web.config" />
            </Component>
            <Component Id="Comp2">
              <File Source="Default.aspx" />
            </Component>
          </DirectoryRef>
        </Fragment>
        <Fragment>
          <ComponentGroup Id="MyWebsite">
            <ComponentRef Id="Comp1" />
            <ComponentRef Id="Comp2" />
          </ComponentGroup>
        </Fragment>

    Based on other SO answers, I've come up with the following XSLT to remove these Component elements:

        <?xml version="1.0" encoding="utf-8"?>
        <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
          <xsl:output method="xml" indent="yes" />
          <xsl:template match="Component[File[substring(@Source, string-length(@Source) - string-length('config') + 1) = 'config']]" />
          <xsl:template match="@*|node()">
            <xsl:copy>
              <xsl:apply-templates select="@*|node()"/>
            </xsl:copy>
          </xsl:template>
        </xsl:stylesheet>

    Unfortunately, this doesn't remove the matching ComponentRef elements (i.e. those that have the same "Id" values): the XSLT will remove the Component with Id "Comp1" but not the ComponentRef with Id "Comp1". How do I achieve this using XSLT 1.0?
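
    One possible addition, sketched in the same style as the stylesheet above: a second empty template that suppresses any ComponentRef whose Id equals the Id of a Component being removed. In XPath 1.0, comparing an attribute against a node-set is true if any node in the set matches.

        <xsl:template match="ComponentRef[@Id =
            //Component[File[substring(@Source, string-length(@Source)
                - string-length('config') + 1) = 'config']]/@Id]" />

    The identity template then copies everything else through unchanged.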

    Read the article

  • inheritance in document database?

    - by nils petersohn
    I am wondering because I searched the PDFs "xxx: The Definitive Guide" and "Beginning xxx" for the word "inheritance" but didn't find anything. Am I missing something? I am currently doing table-per-hierarchy inheritance with Hibernate and MySQL; does that concept become deprecated for some reason in xxx? (Replace xxx with the "not only SQL" database of your choice.)

    Read the article

  • how to get entire document in scrapy using hxs.select

    - by Chris Smith
    I've been at this for 12 hours and I'm hoping someone can give me a leg up. Here is my code; all I want is to get the anchor text and URL of every link on a page as it crawls along.

        from scrapy.contrib.spiders import CrawlSpider, Rule
        from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
        from scrapy.selector import HtmlXPathSelector
        from scrapy.utils.url import urljoin_rfc
        from scrapy.utils.response import get_base_url
        from urlparse import urljoin
        #from scrapy.item import Item
        from tutorial.items import DmozItem

        class HopitaloneSpider(CrawlSpider):
            name = 'dmoz'
            allowed_domains = ['domain.co.uk']
            start_urls = [
                'http://www.domain.co.uk'
            ]
            rules = (
                #Rule(SgmlLinkExtractor(allow='>example\.org', )),
                Rule(SgmlLinkExtractor(allow=('\w+$', )), callback='parse_item', follow=True),
            )
            user_agent = 'Mozilla/5.0 (Windows; U; MSIE 9.0; WIndows NT 9.0; en-US))'

            def parse_item(self, response):
                #self.log('Hi, this is an item page! %s' % response.url)
                hxs = HtmlXPathSelector(response)
                #print response.url
                sites = hxs.select('//html')
                #item = DmozItem()
                items = []
                for site in sites:
                    item = DmozItem()
                    item['title'] = site.select('a/text()').extract()
                    item['link'] = site.select('a/@href').extract()
                    items.append(item)
                return items

    What am I doing wrong... my eyes hurt now.
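
    A sketch of the likely fix, using only the APIs already in the spider: //html matches the single root element, so the loop runs once and a/text() only sees direct children. Selecting the anchors themselves yields one item per link.

        def parse_item(self, response):
            hxs = HtmlXPathSelector(response)
            items = []
            for link in hxs.select('//a'):               # every anchor on the page
                item = DmozItem()
                item['title'] = link.select('text()').extract()
                item['link'] = link.select('@href').extract()
                items.append(item)
            return items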

    Read the article

  • Large Scale VHDL techniques

    - by oxinabox.ucc.asn.au
    I'm thinking about implementing a 16-bit CPU in VHDL. A simplish CPU: ADD, MULS, NEG, bit shift, JUMP, relative jump, BREQ, relative BREQ, something along those lines, probably all working only with 16-bit operands. I might even cut it down to a single operand and an accumulator, with some status registers: Carry, Zero, Neg (unless I use an accumulator). I know how to design all the parts from logic gates and plan to build them up from first principles. So for my ALU I'll need to 'build' an adder, probably a carry-lookahead group adder; this adder is itself made up of a couple of parts, which are themselves made up of a couple of parts. Anyway, my problem is not the CPU design, or the VHDL (I know the language, more or less). It's how I should keep things organised. How should I use packages? How should I name my processes and port maps? (I've never seen the benefit of naming port maps or processes.)
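
    A minimal sketch of one common convention (all names here are hypothetical): a single package holds the CPU-wide types and opcode constants, and every entity imports it, so the word width is defined in exactly one place.

        library ieee;
        use ieee.std_logic_1164.all;

        package cpu_types is
            -- one place for the word width; change here to retarget the design
            subtype word_t is std_logic_vector(15 downto 0);

            -- named opcodes instead of magic bit strings in the decoder
            constant OP_ADD  : std_logic_vector(3 downto 0) := "0000";
            constant OP_MULS : std_logic_vector(3 downto 0) := "0001";
            constant OP_NEG  : std_logic_vector(3 downto 0) := "0010";
        end package cpu_types;

        -- every other unit then begins with:
        --   use work.cpu_types.all;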

    Read the article

  • Find/parse server-side <?abc?>-like tags in html document

    - by Iggyhopper
    I guess I need some regex help. I want to find all tags like <?abc?> so that I can replace each one with the result of running the code inside. I just need help with the regex for the tag/code string, not with parsing the code inside :p. For example:

        <b><?abc print 'test' ?></b>

    would result in:

        <b>test</b>

    Edit: Not that tag specifically, but in general: matching (<?[chars] (code group) ?>).
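
    A sketch of one possible pattern, shown in Python with the abc tag from the example; the non-greedy group keeps several tags on one line from being merged into a single match.

        import re

        html = "<b><?abc print 'test' ?></b>"
        pattern = re.compile(r"<\?abc\s+(.*?)\s*\?>", re.DOTALL)

        for match in pattern.finditer(html):
            print(match.group(1))   # the code inside the tag: print 'test'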

    Read the article

  • How to extract block of XML from a log file on Linux

    - by dragonmantank
    I have a log file that looks like the following:

        2010-05-12 12:23:45 Some sort of log entry
        2010-05-12 01:45:12 Request XML:
        <RootTag>
            <Element>Value</Element>
            <Element>Another Value</Element>
        </RootTag>
        2010-05-12 01:45:32 Response XML:
        <ResponseRoot>
            <Element>Value</Element>
        </ResponseRoot>
        2010-05-12 01:45:49 Another log entry

    What I want to do is extract the request and response XML (and ultimately dump each into its own file). I had a similar parser that used egrep, but there the XML was all on one line, not spread over multiple lines like above. The log files are also somewhat large, hitting 500-600 MB per log. Smaller logs I would read in via a PHP script and use regex matching, but the amount of memory required for such a large file would more than likely kill the script. Is there an easy way, using the built-in tools on a Linux box (CentOS in this case), to extract multiple lines, or am I going to have to bite the bullet and use Perl or PHP to read in the entire file to extract it?
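
    A sketch of one stream-based approach with sed, using the tag names from the sample (repeat with ResponseRoot for the responses): sed reads the file a line at a time, so the 500-600 MB never has to fit in memory.

        # print only the lines between each opening and closing RootTag, inclusive
        sed -n '/<RootTag>/,/<\/RootTag>/p' app.log > requests.xml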

    Read the article

  • how to send binary data within an xml string

    - by daemonkid
    I want to send a binary file to a .NET C# component in the following XML format:

        <BinaryFileString fileType='pdf'>
            <!-- binary file data string here -->
        </BinaryFileString>

    In the component that is called, I will take the above XML string and convert the binary string received within the BinaryFileString tag into a file of the type given by the fileType='' attribute. The file type could be doc/pdf/xls/rtf. I have code in the calling application to get the bytes out of the file to be sent. How do I prepare them to be sent with XML tags wrapped around them? I want the application to send the component a string, not a byte stream, because there is no way to tell the file type [pdf/doc/xls] just by looking at the byte stream; hence the XML string with the fileType attribute. Any ideas? My method for extracting the bytes is below:

        using (FileStream input = new FileStream(_filePath, FileMode.Open, FileAccess.Read))
        using (MemoryStream output = new MemoryStream())
        {
            // accumulate each 8 KB chunk so the whole file is returned
            byte[] buffer = new byte[8192];
            int bytesRead;
            while ((bytesRead = input.Read(buffer, 0, buffer.Length)) > 0)
            {
                output.Write(buffer, 0, bytesRead);
            }
            return output.ToArray();
        }

    Thanks.
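
    A sketch of the standard technique, assuming the element layout above (class and method names here are hypothetical): Base64-encode the bytes so the binary data survives as plain XML text, and decode the element's inner text on the receiving side.

        using System;
        using System.IO;
        using System.Xml;

        class BinaryXmlHelper
        {
            // Sender: wrap the file's bytes as Base64 text inside the agreed element.
            static string WrapFile(string filePath, string fileType)
            {
                byte[] fileBytes = File.ReadAllBytes(filePath);
                return "<BinaryFileString fileType='" + fileType + "'>"
                     + Convert.ToBase64String(fileBytes)
                     + "</BinaryFileString>";
            }

            // Receiver: recover the bytes and write them out with the right extension.
            static void UnwrapToFile(string xml, string outputPathWithoutExt)
            {
                XmlDocument doc = new XmlDocument();
                doc.LoadXml(xml);
                string fileType = doc.DocumentElement.GetAttribute("fileType");
                byte[] decoded = Convert.FromBase64String(doc.DocumentElement.InnerText);
                File.WriteAllBytes(outputPathWithoutExt + "." + fileType, decoded);
            }
        }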

    Read the article

  • How to write a large number of nested records in JSON with Python

    - by jamesmcm
    I want to produce a JSON file containing some initial parameters and then records of data, like this:

        {
            "measurement" : 15000,
            "imi" : 0.5,
            "times" : 30,
            "recalibrate" : false,
            {
                "colorlist" : [234, 431, 134],
                "speclist" : [0.34, 0.42, 0.45, 0.34, 0.78]
            },
            {
                "colorlist" : [214, 451, 114],
                "speclist" : [0.44, 0.32, 0.45, 0.37, 0.53]
            }
            ...
        }

    How can this be achieved using the Python json module? The data records cannot be added by hand, as there are very many.
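
    A sketch of one way to get legal JSON with the json module: a JSON object cannot contain bare nested objects without keys, so the records go in a list under a key (the name "records" and the measurements iterable are assumptions).

        import json

        data = {
            "measurement": 15000,
            "imi": 0.5,
            "times": 30,
            "recalibrate": False,
            "records": [],
        }

        # measurements: your iterable of (colorlist, speclist) results
        for colorlist, speclist in measurements:
            data["records"].append({"colorlist": colorlist, "speclist": speclist})

        with open("output.json", "w") as f:
            json.dump(data, f, indent=4)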

    Read the article

  • What is the largest file size we can transfer through an AIR application?

    - by Naveen kumar
    Hi all, I'm trying to transfer large files (1 GB+) using UDP (in packets) through an AIR application. I'm transferring a ByteArray by taking chunks of packets from a FileStream, but it gives 'Error #1000: The system is out of memory' on the sender side after a certain number of packets are sent; by that time the downloaded file size on the server side is 256 MB. I tried other files, but after downloading 256 MB the sender gives the same error. Is it because of the file stream size? How can I solve this problem so that I can transfer files of GB size?

    Read the article

  • handling large arrays with array_diff

    - by bigmac
    I have been trying to compare two arrays. Using array_intersect presents no problems, and array_diff works with arrays of ~5,000 values. But when I get to ~10,000 values, the script dies when it reaches array_diff. Turning on error_reporting did not produce anything. I tried creating my own array_diff function:

        function manual_array_diff($arraya, $arrayb) {
            foreach ($arraya as $keya => $valuea) {
                if (in_array($valuea, $arrayb)) {
                    unset($arraya[$keya]);
                }
            }
            return $arraya;
        }

    (source: http://stackoverflow.com/questions/2479963/how-does-array-diff-work)

    I would expect it to be less efficient than the official array_diff, but it can handle arrays of ~10,000. Unfortunately, both array_diffs fail when I get to ~15,000. I tried the same code on a different machine and it runs fine, so it's not an issue with the code or PHP; there must be some limit set somewhere on that particular server. Any idea how I can get around that limit, alter it, or just find out what it is?
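
    A sketch of the first thing to check, assuming the per-server difference is PHP's memory_limit setting (a common cause when identical code dies on one server and runs fine on another):

        <?php
        echo ini_get('memory_limit');        // what the failing server allows
        ini_set('memory_limit', '256M');     // raise it for this script only
        echo memory_get_peak_usage(true);    // watch real usage around array_diff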

    Read the article
