Search Results

Search found 36925 results on 1477 pages for 'large xml document'.

Page 216/1477 | < Previous Page | 212 213 214 215 216 217 218 219 220 221 222 223  | Next Page >

  • Flex - weird display behavior with a large number of Canvases

    - by itarato
    Hi, I have a Flex app (SDK 3.5 - FP10) that draws mindmap trees. Every node is a Canvas (I'm using Canvas-specific properties, so I need it). Each node has a shadow effect, a background color and some small UI elements on it (icons, text...). It works perfectly until it goes over ~700 nodes (Canvases). Over that number it shows grey rectangles: http://yfrog.com/bhw2pj . If I turn off the DropShadowFilter effect for the Canvases, the grey rectangles are gone too, so I assume it's a DropShadowFilter problem: http://yfrog.com/2d9y8j . The effect is simple: private static var _nodeDropShadow:DropShadowFilter = new DropShadowFilter(1, 45, 0x888888, 1, 1, 1); _backgroundComp.filters = [_nodeDropShadow]; Is it possible that Flex can't handle that many? Thanks in advance

    Read the article

  • Indexing large DB's with Lucene/PHP

    - by thebluefox
    Afternoon chaps, I'm trying to index a 1.7 million row table with the Zend port of Lucene. On small tests of a few thousand rows it worked perfectly, but as soon as I try to up the rows to a few tens of thousands, it times out. Obviously I could increase the time PHP allows the script to run, but seeing as 360 seconds gets me ~10,000 rows, I'd hate to think how many seconds it'd take to do 1.7 million. I've also tried making the script run a few thousand, refresh, and then run the next few thousand, but doing this clears the index each time. Any ideas guys? Thanks :)
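
    A minimal sketch of the batched approach (paths, table and column names are assumptions): the index is wiped each time because Zend_Search_Lucene::create() builds a fresh index, so later batches should re-open the existing one with Zend_Search_Lucene::open() instead.

        <?php
        $indexPath = '/data/search-index';                    // assumed location
        $index = is_dir($indexPath)
            ? Zend_Search_Lucene::open($indexPath)            // append to the existing index
            : Zend_Search_Lucene::create($indexPath);         // only for the very first batch

        $offset = isset($_GET['offset']) ? (int) $_GET['offset'] : 0;
        $rows = $db->fetchAll(                                // $db: your Zend_Db adapter
            "SELECT id, title, body FROM articles LIMIT 5000 OFFSET $offset");

        foreach ($rows as $row) {
            $doc = new Zend_Search_Lucene_Document();
            $doc->addField(Zend_Search_Lucene_Field::Keyword('id', $row['id']));
            $doc->addField(Zend_Search_Lucene_Field::UnStored('body', $row['body']));
            $index->addDocument($doc);
        }
        $index->commit();                                     // flush this batch to disk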

    Read the article

  • How to change XmlSchemaElement.SchemaType (or: difference between SchemaType and ElementSchemaType)

    - by Gregor
    Hey, I'm working on an XML editor which gets all of its information from the corresponding XSD file. To work with the XSD files I use the System.Xml.Schema namespace (XmlSchema*). Because of an 'xsi:type' attribute in the XML I have to change the XmlSchemaType of an XmlSchemaElement. Until now my code has used the 'ElementSchemaType' property of 'XmlSchemaElement'. The nice thing about it: it's read-only. 'XmlSchemaElement' also has a 'SchemaType' property which is not read-only, but it is always null (yes, the XmlSchema and XmlSchemaSet are compiled). So how can I change the type of the 'XmlSchemaElement'? Or, which is really the same question: what is the difference between these two properties? Some technical data: C#, .NET 3.5. The MSDN documentation is nearly the same for both: SchemaType documentation: Gets or sets the type of the element. This can either be a complex type or a simple type. ElementSchemaType documentation: Gets an XmlSchemaType object representing the type of the element based on the SchemaType or SchemaTypeName values of the element.
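
    A sketch of one way to read and work with the two properties (the type and namespace names below are invented for illustration): SchemaType only holds an anonymous type defined inline under the <xs:element> in the source XSD, which is why it reads back as null when the element refers to a named type, while ElementSchemaType is the resolved, post-compilation type. To point the element at a different named type, set SchemaTypeName and recompile:

        // element is the XmlSchemaElement to retarget; schema is the XmlSchema that owns it
        element.SchemaTypeName = new XmlQualifiedName("MyDerivedType", "urn:example");

        schemaSet.Reprocess(schema);   // tell the XmlSchemaSet this schema has changed
        schemaSet.Compile();           // afterwards element.ElementSchemaType resolves to MyDerivedType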

    Read the article

  • Cache for large read only database recommendation

    - by paddydub
    I am building a site with Spring, Hibernate and MySQL. The MySQL database contains information on coordinates, locations etc.; it is never updated, only queried. The database contains 15000 rows of coordinates and 48000 rows of coordinate connections. Every time a request is processed, the application needs to read all these coordinates, which is taking approx 3-4 seconds. I would like to set up a cache to allow quick access to the data. I'm researching memcached at the moment - can you please advise if this would be my best option?
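
    Since the data never changes, one option worth weighing against memcached is Hibernate's own second-level cache with a read-only strategy, which keeps the rows in the application's memory with no extra network hop. A sketch only, assuming an EhCache provider is configured (hibernate.cache.use_second_level_cache plus a cache provider/region factory) and with the entity and field names invented for illustration:

        import org.hibernate.annotations.Cache;
        import org.hibernate.annotations.CacheConcurrencyStrategy;
        import javax.persistence.Entity;
        import javax.persistence.Id;

        @Entity
        @Cache(usage = CacheConcurrencyStrategy.READ_ONLY)   // rows are never updated
        public class Coordinate {
            @Id
            private Long id;
            private double latitude;
            private double longitude;
            // getters/setters omitted
        }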

    Read the article

  • myXmlDataDoc.DataSet.ReadXml problem

    - by alex
    Hi: I am using myXmlDataDoc.DataSet.ReadXml(xml_file_name, XmlReadMode.InferSchema); to populate the tables within the dataset, which was created by reading in an XML schema using: myStreamReader = new StreamReader(xsd_file_name); myXmlDataDoc.DataSet.ReadXmlSchema(myStreamReader); The problem I am facing is when it comes to reading the repeated "parameters" element: the ReadXml call puts it in a different table and everything in that table is 0s. Here is the print out of the data tables: TableName: test_data 100 2 1 TableName: parameters 1 0 2 0 3 0 4 0 5 0 6 0 7 0 8 0 9 0 10 0 11 0 12 0 In my xsd file, I represent an array using this: <xs:element name="test_data"> <xs:complexType> <xs:complexContent> <xs:extension base="test_base"> <xs:sequence> <xs:element name="a" type="xs:unsignedShort"/> <xs:element name="b" type="xs:unsignedShort"/> <xs:element name="c" type="xs:unsignedInt"/> <xs:element name="parameters" minOccurs="0" maxOccurs="12" type="xs:unsignedInt"/> </xs:sequence> </xs:extension> </xs:complexContent> </xs:complexType> And here is my xml file (a test_data element whose a, b and c values are 100, 2 and 1, followed by twelve parameters elements holding 1 through 12). I am expecting that after the ReadXml call the "parameters" values are part of the test_data table: TableName: test_data 100 2 1 1 2 3 4 5 6 7 8 9 10 11 12 Does anyone know what's wrong? Thanks

    Read the article

  • jquery, safari & jqzoom plugin - issue with document.ready

    - by mmuller
    Hi, I have a small issue with jQuery on Safari (Mac OS X 10.6) - the page loads fine under Firefox (Mac) and Internet Explorer (Win) but has to be refreshed to work properly in Safari... http://7souls.co.uk/store/index.php?dispatch=products.view&product_id=29788 If you hover over the image it is meant to show a magnified version to the right-hand side - which works on the first page load in all browsers except Safari on the Mac. You have to refresh the page to get it to work under Safari. Any ideas? MM
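
    One common cause that fits the "works after refresh" symptom: jqzoom needs the image's final dimensions, and on Safari's first, uncached visit those are not known yet when $(document).ready fires. A sketch of deferring the plugin until the images have loaded - the selector and options are placeholders, not taken from the page above:

        // wait for images (not just the DOM) before wiring up the zoom
        $(window).load(function () {
            $('.jqzoom').jqzoom({ zoomWidth: 300, zoomHeight: 300 });
        });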

    Read the article

  • How to detect if a doc document is password protected

    - by StuffHappens
    Hello. I have a folder with lots of doc documents and I need to upload their content into a database. The problem is that some of them are password protected and I don't know the passwords. I would like to skip them, but I don't know how to detect whether password protection is present. If someone can help I'll appreciate it greatly. P.S. The programming language is C#.
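
    One widely used approach is to try opening each file through Word automation with a deliberately wrong password: unprotected files open normally, protected ones make Word throw instead of prompting. A sketch only - it assumes the Office interop assemblies and C# 4's optional-argument support for COM calls, and Interop is slow over a large folder:

        using Word = Microsoft.Office.Interop.Word;

        static bool IsPasswordProtected(Word.Application app, string path)
        {
            try
            {
                var doc = app.Documents.Open(path, ReadOnly: true,
                                             PasswordDocument: "not-the-real-password");
                doc.Close(false);          // opened fine, so no password
                return false;
            }
            catch (System.Runtime.InteropServices.COMException)
            {
                return true;               // "The password is incorrect" => protected
            }
        }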

    Read the article

  • How to implement a SharePoint Lists web service

    - by 1800 INFORMATION
    I want to implement a web service that uses the same interface as the Lists web service in SharePoint. I do not want to run this through SharePoint. What is a good way to get started on this? I have tried using the wsdl.exe tool to generate some wrapper classes, but the generated wrappers seem to have punted on the structured parameters and just specified them as XML. For example, below is the generated wrapper for GetList - it should return a structure which has the information in the list, but instead it returns XML. What is going on? ... /// <remarks/> [System.CodeDom.Compiler.GeneratedCodeAttribute("wsdl", "2.0.50727.1432")] [System.Web.Services.WebServiceBindingAttribute(Name="ListsSoap", Namespace="http://schemas.microsoft.com/sharepoint/soap/")] public interface IListsSoap { /// <remarks/> [System.Web.Services.WebMethodAttribute()] [System.Web.Services.Protocols.SoapDocumentMethodAttribute("http://schemas.microsoft.com/sharepoint/soap/GetList", RequestNamespace="http://schemas.microsoft.com/sharepoint/soap/", ResponseNamespace="http://schemas.microsoft.com/sharepoint/soap/", Use=System.Web.Services.Description.SoapBindingUse.Literal, ParameterStyle=System.Web.Services.Protocols.SoapParameterStyle.Wrapped)] System.Xml.XmlNode GetList(string listName); }
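
    This is expected rather than a wsdl.exe failure: the Lists.asmx WSDL describes GetListResult as an open any-element (CAML XML) rather than a concrete schema type, so the generated proxy can only surface it as System.Xml.XmlNode. An implementation therefore just returns XML it builds itself - a minimal sketch with an assumed, heavily abbreviated payload:

        public System.Xml.XmlNode GetList(string listName)
        {
            var doc = new System.Xml.XmlDocument();
            doc.LoadXml(
                "<List Title=\"" + listName + "\" " +
                "xmlns=\"http://schemas.microsoft.com/sharepoint/soap/\" />");
            return doc.DocumentElement;   // the real CAML response is much richer than this
        }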

    Read the article

  • Tomcat JSP (2.0) Documents: how to stop automatically closing empty-body tags with /> instead of </tagname>

    - by JOKe
    The question is: if I use JSP Documents (JSP 2.0) and I put a tag without a body, it is automatically closed, and I don't want that. So if I have <div id=....> </div> it is automatically converted to <div id=.../>. How can I stop this? I am using Tomcat - is there any configuration for that? P.S. The reason I want to stop it is that it simply breaks the jQuery code that the design company is using.
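
    I am not aware of a Tomcat/Jasper switch for this, so treat the following only as a sketch of the usual workaround: a JSP Document is emitted as well-formed XML, and an element collapses to <div/> only when its body is truly empty, so giving it a harmless body (e.g. a jsp:text element containing a single space) keeps the separate closing tag:

        <div id="content"><jsp:text><![CDATA[ ]]></jsp:text></div>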

    Read the article

  • Database for managing large volumes of (system) metrics

    - by symcbean
    Hi, I'm looking at building a system for managing and reporting stats on web page performance. I'll be collecting a lot more stats than are available in the standard log formats (approx 20 metrics), but compared to most types of database applications, the base data structure will be very simple. My problem is that I'll be accumulating a lot of data - in the region of 100,000 records (i.e. sets of metrics) per hour. Of course, resources are very limited! So that it's possible to sensibly interact with the data, I'd need to consolidate each metric into one-minute bins, broken down by URL; then, for anything more than 1 day old, consolidate into 10-minute bins; then at 1 week, hourly bins. At the front end, I want to provide a view (preferably as plots) of the last hour of data, with the facility for users to drill up/down through defined hierarchies of URLs (which do not always map directly to the hierarchy expressed in the path of the URL) and to view different time frames. Rather than coding all this myself and using a relational database, I was wondering if there were tools available which would facilitate both the management of the data and the reporting. I had a look at Mondrian, however I can't see from the documentation I've looked at whether it's possible to drop the more granular information while maintaining the consolidated views of the data. RRDTool looks promising in terms of managing the data consolidation, but seems to be rather limited in terms of querying the dataset as a multi-dimensional/relational database. What else should I be looking at?
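
    For reference, the ageing scheme described above maps almost directly onto RRDtool's round-robin archives - a sketch (one RRD per URL/metric pair; names and retention lengths are assumptions), with the caveat already noted that RRDtool provides time-series consolidation rather than multi-dimensional queries:

        # --step 60: 1-minute primary bins
        # RRAs: 1-minute rows for a day, 10-minute rows for a week, hourly rows after that
        rrdtool create pageload.rrd --step 60 \
            DS:response_ms:GAUGE:120:0:U \
            RRA:AVERAGE:0.5:1:1440 \
            RRA:AVERAGE:0.5:10:1008 \
            RRA:AVERAGE:0.5:60:2160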

    Read the article

  • Non-standard interaction between two tables to avoid a very large merge

    - by riko
    Suppose I have two tables A and B. Table A has a multi-level index (a, b) and one column (ts). b uniquely determines ts. A = pd.DataFrame( [('a', 'x', 4), ('a', 'y', 6), ('a', 'z', 5), ('b', 'x', 4), ('b', 'z', 5), ('c', 'y', 6)], columns=['a', 'b', 'ts']).set_index(['a', 'b']) AA = A.reset_index() Table B is another one-column (ts) table with a non-unique index (a). The ts's are sorted "inside" each group, i.e., B.ix[x] is sorted for each x. Moreover, there is always a value in B.ix[x] that is greater than or equal to the values in A. B = pd.DataFrame( dict(a=list('aaaaabbcccccc'), ts=[1, 2, 4, 5, 7, 7, 8, 1, 2, 4, 5, 8, 9])).set_index('a') The semantics of this is that B contains observations of occurrences of an event of the type indicated by the index. I would like to find from B the timestamp of the first occurrence of each event type after the timestamp specified in A for each value of b. In other words, I would like to get a table with the same shape as A that, instead of ts, contains the "minimum value occurring after ts" as specified by table B. So, my goal would be: C: ('a', 'x') 4 ('a', 'y') 7 ('a', 'z') 5 ('b', 'x') 7 ('b', 'z') 7 ('c', 'y') 8 I have some working code, but it is terribly slow. C = AA.apply(lambda row: ( row[0], row[1], B.ix[row[0]].irow(np.searchsorted(B.ts[row[0]], row[2]))), axis=1).set_index(['a', 'b']) Profiling shows the culprit is obviously B.ix[row[0]].irow(np.searchsorted(B.ts[row[0]], row[2])). However, standard solutions using merge/join would take too much RAM in the long run. Consider that now I have 1000 a's; assume the average number of b's per a is constant (probably 100-200), and consider that the number of observations per a is probably in the order of 300. In production I will have 1,000 times as many a's. 1,000,000 x 200 x 300 = 60,000,000,000 rows may be a bit too much to keep in RAM, especially considering that the data I need is perfectly described by a C like the one I discussed above. How would I improve the performance?
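
    One way to keep searchsorted but drop the per-row .ix lookups is to do the binary search once per group of a, against the whole block of A's timestamps for that a. A sketch in newer pandas idiom (groupby/loc rather than ix/irow), relying on the stated guarantees that each B group is sorted and always contains a value >= every ts in A:

        import numpy as np
        import pandas as pd

        def first_at_or_after(A, B):
            out = []
            for a, group in A.groupby(level='a'):
                ts_b = B.loc[a, 'ts'].values                       # sorted observations for this a
                pos = np.searchsorted(ts_b, group['ts'].values)    # one vectorised search per group
                out.append(pd.Series(ts_b[pos], index=group.index, name='ts'))
            return pd.concat(out).sort_index()

        C = first_at_or_after(A, B)   # matches the goal table above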

    Read the article

  • Large tables of static data with DBGhost

    - by Paulo Manuel Santos
    We are thinking of restructuring our database development and deployment processes by using DBGhost; we want to move away from the central development database and bring the database under source control. One of the problems we have is a big table of static data (containing translated language strings); it has close to 200K rows. I know that our best solution is to move these strings into resource files, but until we implement that, will DBGhost be able to maintain all this static data and generate our development and deployment databases in a short time? And if not, is there a good alternative for populating this table whenever we need to?

    Read the article

  • Reverse massive text file in Java

    - by DanJanson
    What would be the best approach to reverse a large text file that is uploaded asynchronously to a servlet, in a scalable and efficient way? The text file can be massive (gigabytes long). You can assume a multiple-server/clustered environment to do this in a distributed manner. Open source libraries are encouraged. I was thinking of using Java NIO to treat the file as an array on disk (so that I don't have to treat the file as a string buffer in memory). Also, I am thinking of using MapReduce to break up the file and process it in separate machines. Any input is appreciated. Thanks. Daniel
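
    A single-machine sketch of the NIO idea, before reaching for MapReduce: walk the file from the end in fixed-size windows, map each window, reverse it in memory and append it to the output. Note this reverses bytes, so it is only character-correct for single-byte encodings; the window size and class/method names are illustrative:

        import java.io.RandomAccessFile;
        import java.nio.MappedByteBuffer;
        import java.nio.channels.FileChannel;

        public class FileReverser {
            static final int WINDOW = 64 * 1024 * 1024;   // 64 MB per mapping

            public static void reverse(String inPath, String outPath) throws Exception {
                try (RandomAccessFile in = new RandomAccessFile(inPath, "r");
                     RandomAccessFile out = new RandomAccessFile(outPath, "rw")) {
                    long remaining = in.length();
                    out.setLength(remaining);
                    long writePos = 0;
                    while (remaining > 0) {
                        int size = (int) Math.min(WINDOW, remaining);
                        MappedByteBuffer src = in.getChannel()
                                .map(FileChannel.MapMode.READ_ONLY, remaining - size, size);
                        byte[] chunk = new byte[size];
                        src.get(chunk);
                        for (int i = 0, j = size - 1; i < j; i++, j--) {   // reverse this window
                            byte t = chunk[i]; chunk[i] = chunk[j]; chunk[j] = t;
                        }
                        out.seek(writePos);
                        out.write(chunk);
                        writePos += size;
                        remaining -= size;
                    }
                }
            }
        }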

    Read the article

  • Undefined control sequence at first line of a document

    - by Jayomat
    \documentclass{book} \usepackage{amsmath} \usepackage[german]{babel} \usepackage{amssymb} \usepackage{amsxtra} \usepackage[dvips]{epsfig,psfrag} \usepackage{listings} .... this is how my file starts. I didn't even edit it; I received it like this. However, when I try to make a PDF, it gives me an "Undefined control sequence" error at the first line... What is wrong?

    Read the article

  • Finding cause of memory leaks in large PHP stacks

    - by Mike B
    I have a CLI script that runs through several thousand iterations per run, and it appears to have a memory leak. I'm using a tweaked version of Zend Framework with Smarty for view templating, and each iteration uses several MB worth of code. The first iteration immediately uses nearly 8MB of memory (which is fine) but every following iteration adds about 80kb. My main loop looks like this (very simplified) $users = UsersModel::getUsers(); foreach($users as $user) { $obj = new doSomethingAwesome(); $obj->run($user); $obj = null; unset($obj); } The point is that everything in scope should be unset and the memory freed. My understanding is that PHP runs through its garbage collection process at its own discretion, but it does so at the end of functions/methods/scripts. So something must be leaking memory inside doSomethingAwesome(), but as I said it is a huge stack of code. Ideally, I would love to find some sort of tool that displayed all my variables, no matter the scope, at some point during execution - some sort of symbol-table viewer for PHP. Does anything like that, or any other tool that could help nail down memory leaks in PHP, exist?
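
    Short of a full profiler (Xdebug's function traces can include per-call memory deltas), a cheap first step is to instrument the loop itself and watch where the growth happens. A sketch of the simplified loop above with per-iteration accounting added; gc_collect_cycles() needs PHP 5.3+ and only helps if the leak is circular references:

        $users = UsersModel::getUsers();
        $last = memory_get_usage(true);

        foreach ($users as $i => $user) {
            $obj = new doSomethingAwesome();
            $obj->run($user);
            unset($obj);

            gc_collect_cycles();                     // break reference cycles, if any
            $now = memory_get_usage(true);
            printf("iteration %d: +%d bytes (total %d)\n", $i, $now - $last, $now);
            $last = $now;
        }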

    Read the article

  • Migrating from Physical SQL (SQL2000) To VMWare machine (SQL2008) - Transferring Large DB

    - by alex
    We're in the middle of migrating from a Windows & SQL 2000 box to a virtualised Windows & SQL 2008 box. The VMware box is on a different site, with better hardware, connectivity etc... The old (current) physical machine is still in constant use - I've taken a backup of the DB on this machine, which is 21GB. Transferring this to our virtual machine took around 7+ hours - which isn't ideal when we do the "actual" switchover. My question is - how should I handle the migration better? Could I set up our current machine to do log shipping to the VM to keep it up to date, then schedule downtime out of hours to do the switchover? Is there a better way?
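
    Log shipping - or simply shipping and restoring log backups by hand, which avoids any edition/wizard constraints - is a reasonable fit here: seed the VM once from the 21GB full backup well ahead of time, then only the much smaller log backups cross the WAN until cut-over. A sketch of the manual form, with database and file names assumed:

        -- on the SQL 2000 box, repeatedly:
        BACKUP LOG LiveDB TO DISK = 'D:\ship\LiveDB_0001.trn'

        -- on the SQL 2008 VM, once for the full backup, then per shipped log:
        RESTORE DATABASE LiveDB FROM DISK = 'E:\ship\LiveDB_full.bak' WITH NORECOVERY
        RESTORE LOG LiveDB FROM DISK = 'E:\ship\LiveDB_0001.trn' WITH NORECOVERY

        -- at switchover, apply the final log backup, then bring the database online:
        RESTORE DATABASE LiveDB WITH RECOVERY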

    Read the article

  • Efficient file buffering & scanning methods for large files in python

    - by eblume
    The description of the problem I am having is a bit complicated, and I will err on the side of providing more complete information. For the impatient, here is the briefest way I can summarize it: What is the fastest (least execution time) way to split a text file into ALL (overlapping) substrings of size N (bound N, e.g. 36) while throwing out newline characters? I am writing a module which parses files in the FASTA ascii-based genome format. These files comprise what is known as the 'hg18' human reference genome, which you can download from the UCSC genome browser (go slugs!) if you like. As you will notice, the genome files are composed of chr[1..22].fa and chr[XY].fa, as well as a set of other small files which are not used in this module. Several modules already exist for parsing FASTA files, such as BioPython's SeqIO. (Sorry, I'd post a link, but I don't have the points to do so yet.) Unfortunately, every module I've been able to find doesn't do the specific operation I am trying to do. My module needs to split the genome data ('CAGTACGTCAGACTATACGGAGCTA' could be a line, for instance) into every single overlapping N-length substring. Let me give an example using a very small file (the actual chromosome files are between 355 and 20 million characters long) and N=8 import cStringIO example_file = cStringIO.StringIO("""\ >header CAGTcag TFgcACF """) for read in parse(example_file): ... print read ... CAGTCAGTF AGTCAGTFG GTCAGTFGC TCAGTFGCA CAGTFGCAC AGTFGCACF The function that I found had the absolute best performance from the methods I could think of is this: def parse(file): size = 8 # of course in my code this is a function argument file.readline() # skip past the header buffer = '' for line in file: buffer += line.rstrip().upper() while len(buffer) >= size: yield buffer[:size] buffer = buffer[1:] This works, but unfortunately it still takes about 1.5 hours (see note below) to parse the human genome this way. Perhaps this is the very best I am going to see with this method (a complete code refactor might be in order, but I'd like to avoid it as this approach has some very specific advantages in other areas of the code), but I thought I would turn this over to the community. Thanks! Note, this time includes a lot of extra calculation, such as computing the opposing strand read and doing hashtable lookups on a hash of approximately 5G in size. Post-answer conclusion: It turns out that using fileobj.read() and then manipulating the resulting string (string.replace(), etc.) took relatively little time and memory compared to the remainder of the program, and so I used that approach. Thanks everyone!
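
    For completeness, a sketch of the whole-file approach described in the conclusion above (read once, strip newlines, then slice), with the function and argument names assumed; it trades memory (one chromosome held as a single string) for much less per-line work:

        def parse_whole_file(path, size=8):
            with open(path) as fh:
                fh.readline()                                 # skip the FASTA '>' header line
                data = fh.read().replace('\n', '').upper()    # one big string, no newlines
            for i in xrange(len(data) - size + 1):            # Python 2, as in the question
                yield data[i:i + size]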

    Read the article

  • Download office document without the web server trying to render it

    - by Dan Revell
    I'm trying to download an InfoPath template that's hosted on SharePoint. If I hit the URL in Internet Explorer it asks me where to save it, and I get the correct file on my disk. If I try to do this programmatically with WebClient or HttpWebRequest then I get HTML back instead. How can I make my request so that the web server returns the actual xsn file and doesn't try to render it as HTML? If Internet Explorer can do this then it's logical to think that I can too. I've tried setting the Accept property of the request to application/x-microsoft-InfoPathFormTemplate but that hasn't helped. It was a shot in the dark.
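
    One thing worth trying (a sketch, not verified against this particular SharePoint site): send the WebDAV-style "Translate: f" header, which asks IIS/SharePoint to hand back the file's source bytes instead of routing the request through the page-rendering pipeline:

        using System.Net;

        var request = (HttpWebRequest)WebRequest.Create(templateUrl);   // templateUrl assumed
        request.Credentials = CredentialCache.DefaultCredentials;
        request.Headers.Add("Translate", "f");

        using (var response = request.GetResponse())
        using (var remote = response.GetResponseStream())
        using (var local = System.IO.File.Create("template.xsn"))
        {
            remote.CopyTo(local);   // Stream.CopyTo needs .NET 4; copy via a buffer loop on older versions
        }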

    Read the article

  • Document management, SCM ?

    - by tsunade
    Hello, this might not be a hard-core programming question, but it's related to some of the tools used by programmers, I suspect. So we're a bunch of people, each with a bunch of documents and a bunch of different computers on a bunch of operating systems (well, only two: Linux and Windows). The best way these documents can be stored/managed is if they are available offline (the laptop might not always be online) but also synchronized between all the machines. Having a server with extra-reliable storage be a "base repository" seems like a good idea to me. Using an SCM comes to mind and I've tried Subversion, and it seems to be a good thing that it uses a centralized repository - but: when checking out, the total size of the checkout is roughly double the original size. Big files or big repositories seem to slow it down. Also I've tried rsync, which might work - but it's a bit rough when it comes to potential conflicts. Finally I've tried Unison (which is a wrapper around rsync, I think) and while it works it becomes horribly slow for the big directories we have here, since it has to scan everything. So the question is - is there an SCM tool out there that is actually practical to use for a big bunch of both small and big files? If that's a no - does anyone know of other tools that do this job? Thanks for reading :)

    Read the article

  • removing phone numbers from a document

    - by Grant Collins
    Hi, I've got a challenge that I am hoping the SO community is able to help me with. I'm trying to parse a lot of HTML documents in my PHP application to remove personal details, such as names, addresses and phone numbers. I can remove most of these details without too much trouble, however the phone numbers are a real problem for me. My idea is to take the text from these documents and then use a regex to identify the phone numbers and replace them with another value such as 'xxxx'. I've got two regexes that I am using, one for UK landline numbers and one for UK cell/mobile numbers. However, when I try and run them against the text it just returns an empty string. I am using the following preg_replace code: $pattens = array( '/^(((\+44\s?\d{4}|\(?0\d{4}\)?)\s?\d{3}\s?\d{3})|((\+44\s?\d{3}|\(?0\d{3}\)?)\s?\d{3}\s?\d{4})|((\+44\s?\d{2}|\(?0\d{2}\)?)\s?\d{4}\s?\d{4}))(\s?\#(\d{4}|\d{3}))?$/', '/^(\+44\s?7\d{3}|\(?07\d{3}\)?)\s?\d{3}\s?\d{3}$/' ); $replace = array('xxxxx', 'xxxxx'); //do the search for the numbers. $updatedContents = preg_replace($pattens, $replace, $htmlContents); This is causing me a lot of head scratching as I thought I had this nailed, but I can't see what's wrong. I am sure that it is something really simple. Thanks, Grant
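
    A likely culprit worth checking (a sketch, not a tested drop-in): the ^ and $ anchors make each pattern match only when the entire subject string is a phone number, so nothing matches inside a full document and the text comes back unreplaced. Dropping the anchors (shown below with the patterns slightly condensed) lets the same expressions match numbers embedded in the text:

        $patterns = array(
            // UK landlines - condensed form of the pattern above, anchors removed
            '/(\+44\s?\d{2,4}|\(?0\d{2,4}\)?)\s?\d{3,4}\s?\d{3,4}/',
            // UK mobiles
            '/(\+44\s?7\d{3}|\(?07\d{3}\)?)\s?\d{3}\s?\d{3}/',
        );
        $updatedContents = preg_replace($patterns, 'xxxxx', $htmlContents);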

    Read the article

  • Objective C code to handle large amount of data processing in iPhone

    - by user167662
    I had the following code that takes in 14 MB or more of image data encoded as base64 strings and converts them to JPEG before writing to a file on the iPhone. It crashes my program with the following error: Program received signal: "0". warning: check_safe_call: could not restore current frame I tweaked my program and it can process a few more images before the error appears again. My code is as follows: // parameters is an array; the element at index 5 contains a list of images as base64-encoded strings NSMutableArray *imageStrList = (NSMutableArray*) [parameters objectAtIndex:5]; while (imageStrList.count != 0) { NSString *imgString = [imageStrList objectAtIndex:0]; // Create a file name using my own Utility class NSString *fileName = [Utility generateFileNName]; NSData *restoredImg = [NSData decodeWebSafeBase64ForString:imgString]; UIImage *img = [UIImage imageWithData: restoredImg]; NSData *imgJPEG = UIImageJPEGRepresentation(img, 0.4f); [imgJPEG writeToFile:fileName atomically:YES]; [imageStrList removeObjectAtIndex:0]; } I tried playing around with UIImageJPEGRepresentation and found that the lower the quality value, the more images it can process, but this should not be the way. I am wondering if there is any way to free up the memory used for each image immediately after processing it, so that it can be used by the next one in line.
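
    The usual fix for this pattern (a sketch of the same loop, written pre-ARC since the code above is manual-memory-management era): imageWithData: and UIImageJPEGRepresentation return autoreleased objects that normally live until the run loop drains its pool, so dozens of decoded images pile up. Draining a local autorelease pool each iteration releases them immediately:

        while (imageStrList.count != 0) {
            NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];

            NSString *imgString = [imageStrList objectAtIndex:0];
            NSString *fileName = [Utility generateFileNName];
            NSData *restoredImg = [NSData decodeWebSafeBase64ForString:imgString];
            UIImage *img = [UIImage imageWithData:restoredImg];
            NSData *imgJPEG = UIImageJPEGRepresentation(img, 0.4f);
            [imgJPEG writeToFile:fileName atomically:YES];
            [imageStrList removeObjectAtIndex:0];

            [pool drain];   // releases the autoreleased UIImage/NSData right away
        }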

    Read the article

  • Managing Large Database Entity Models

    - by ChiliYago
    I would like to hear how others are effectively (or not) working with the Visual Studio Entity Designer when many database tables exist. It seems to me that navigating the Designer to find what you are looking for is tough enough with just a few tables, but how about a database with, say, 100 to 200 tables? When a table change is made at the database level, how is the model updated? Does it overwrite any manual changes you have made to the model? How would you quickly find an entity in the designer to make or inspect a change? It seems unrealistic to be scrolling around looking for a specific entity. Thanks for your feedback!

    Read the article

  • bitshift large strings for encoding QR Codes

    - by icekreaman
    As an example, suppose a QR Code data stream contains 55 data words (each one byte in length) and 15 error correction words (again one byte). The data stream begins with a 12-bit header and ends with four 0 bits. So, 12 + 4 bits of header/footer and 15 bytes of error correction leaves me 53 bytes to hold 53 alphanumeric characters. The 53 bytes of data and 15 bytes of ec are supplied in a string of length 68 (str68). The problem seems simple enough - concatenate 2 bytes of (right-shifted) header data with str68 and then left-shift the entire 70 bytes by 4 bits. This is the first time in many years of programming that I have ever needed to do something like this; I am a C and bit-shifting noob, so please be gentle... I have done a little investigation and so far have not been able to figure out how to bit-shift 70 bytes of data; any help would be greatly appreciated. Larger QR codes can hold 2000 bytes of data...
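
    A sketch of the core operation in C (buffer handling and names are assumptions): shifting an arbitrary-length byte array left by a few bits only needs one pass, because each output byte takes its high bits from itself and its low bits from the next byte:

        #include <stddef.h>
        #include <stdint.h>

        /* Shift buf[0..len-1] left by `bits` bits (0 < bits < 8); zeros shift in at the end. */
        static void shift_left(uint8_t *buf, size_t len, unsigned bits)
        {
            for (size_t i = 0; i < len; i++) {
                uint8_t next = (i + 1 < len) ? buf[i + 1] : 0;
                buf[i] = (uint8_t)((buf[i] << bits) | (next >> (8 - bits)));
            }
        }

        /* e.g. copy the 2 header bytes and the 68 data/ec bytes into a 70-byte buffer,
           then: shift_left(buffer, 70, 4); */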

    Read the article
