Search Results

Search found 36925 results on 1477 pages for 'large xml document'.


  • How do you create large, growable, shared filesystems on Linux at AWS?

    - by Reece
    What are acceptable/reasonable/best ways to provide large, growable, shared storage at AWS, exposed as a single filesystem? We're currently making 1TB EBS volumes ~biweekly and NFS exporting with no_subtree_check and nohide. In this setup, distinct exports appear under a single mount on the client. This arrangement does not scale well. The options we've considered:
    - LVM2 with ext4: resize2fs is too slow.
    - Btrfs on Linux: not obviously ready for prime time yet.
    - ZFS on Linux: not obviously ready for prime time yet (although LLNL uses it).
    - ZFS on Solaris: future of this combo is uncertain (to me), and a new OS in the mix.
    - glusterfs: heard mostly good, but two scary (and maybe old?) stories.
    The ideal solution would provide sharing, a single fs view, easy expandability, snapshots, and replication. Thanks for sharing ideas and experience.
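    For reference, a minimal sketch of the export arrangement described above, with hypothetical paths and host names (the option flags are the ones named in the question):

        # /etc/exports on the NFS server -- each EBS-backed volume is mounted under /srv
        /srv        client.example.com(rw,no_subtree_check)
        /srv/vol01  client.example.com(rw,no_subtree_check,nohide)
        /srv/vol02  client.example.com(rw,no_subtree_check,nohide)

        # On the client, a single mount of the parent exposes every volume,
        # which is the "distinct exports under a single mount" behavior noted above.
        mount -t nfs server.example.com:/srv /mnt/data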

    Read the article

  • Help with debugging COM errors? (.mdi to .pdf file conversions using Microsoft Office Document Imaging)

    - by RyanW
    I thought I had a working solution for converting .mdi files to PDF using the Microsoft Office Document Imaging object model. The solution is in a Windows Service, but now I'm running into some errors that I'm having trouble tracking down info on. The exception I get is: The server threw an exception. (Exception from HRESULT: 0x80010105 (RPC_E_SERVERFAULT)) System.Runtime.InteropServices.COMException (0x80010105): The server threw an exception. (Exception from HRESULT: 0x80010105 (RPC_E_SERVERFAULT)) at MODI.DocumentClass.Create(String FileOpen) at DocumentStore.Mdi2PDF(String path, String newPath) Then, in the Event Viewer there is the following Application error: Faulting application MyWindowsServiceName.exe, version 1.0.0.0, time stamp 0x4b97f185, faulting module mso.dll, version 12.0.6425.1000, time stamp 0x49d65443, exception code 0xc0000005, fault offset 0x0000bd8e, process id 0xa5c, application start time 0x01cac08cf032914b. Here's the method that is doing the conversion:

        private int? Mdi2PDF(String path, String newPath)
        {
            int? pageCount = null;
            string tmpTif = Path.GetTempFileName();
            MODI.Document mdiDoc = new MODI.Document();
            mdiDoc.Create(path);
            mdiDoc.SaveAs(tmpTif, MODI.MiFILE_FORMAT.miFILE_FORMAT_TIFF_LOSSLESS, MODI.MiCOMP_LEVEL.miCOMP_LEVEL_HIGH);
            mdiDoc.Close(false);
            pageCount = Tiff2PDF(tmpTif, newPath);
            if (File.Exists(tmpTif)) File.Delete(tmpTif);
            return pageCount;
        }

    I removed all threading from the service invoking this, so that only the primary thread was initializing the MODI object, but I still got the error, so it doesn't appear to be threading related. I also built a console app that converted hundreds of documents and DID NOT get the exception. So it seems to be caused by creating too many instances of the MODI object, but only when instantiated within a Service? Doesn't quite make sense. Anybody have any clues about these errors and how to debug them further?
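    As a debugging sketch only (it assumes the instance-accumulation theory is right, and is not a confirmed fix): one variable worth eliminating is relying on the garbage collector to release each MODI COM wrapper. A variant of the same method that releases it explicitly in a finally block:

        // Hypothetical variant: same conversion, but the COM wrapper is released
        // deterministically instead of waiting for the GC/finalizer.
        private int? Mdi2PDFWithRelease(String path, String newPath)
        {
            string tmpTif = Path.GetTempFileName();
            MODI.Document mdiDoc = new MODI.Document();
            try
            {
                mdiDoc.Create(path);
                mdiDoc.SaveAs(tmpTif, MODI.MiFILE_FORMAT.miFILE_FORMAT_TIFF_LOSSLESS, MODI.MiCOMP_LEVEL.miCOMP_LEVEL_HIGH);
                mdiDoc.Close(false);
                return Tiff2PDF(tmpTif, newPath);
            }
            finally
            {
                // Release the COM object immediately; whether this helps with
                // the RPC_E_SERVERFAULT above is unverified.
                System.Runtime.InteropServices.Marshal.FinalReleaseComObject(mdiDoc);
                if (File.Exists(tmpTif)) File.Delete(tmpTif);
            }
        }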

    Read the article

  • Setting dynamic HTML into an iframe with JavaScript on Windows Mobile 6.1 / IE Mobile 6

    - by swaroop
    Hi Experts (excuse me if this is not the right forum to post - I couldn't find anything related to non-native programming and to this topic). I am trying to set dynamic HTML into an iframe on the webpage. I have tried a couple of things but none of them seem to work: I'm able to read the innerHTML but can't seem to update it.

        // Able to read using
        document.getElementById('iFrameIdentifier').innerHTML;

        // On Desktop IE, this code works
        document.getElementById('iFrameId').contentWindow.document.open();
        document.getElementById('iFrameId').contentWindow.document.write(dynamicHTML);
        document.getElementById('iFrameId').contentWindow.document.close();

    Ideally the same approach should work as it does for divs, but it says "Object doesn't support this method or property". I have also tried document.getElementById('iFrameId').document.body.innerHTML, but this apparently replaces the whole HTML of the page and not just the innerHTML. I have tried a couple of other things and they didn't work either:

        document.getElementById('iFrameId').body.innerHTML
        document.frames[0].document.body.innerHTML

    My purpose is to have a container element which can hold dynamic HTML that's set into it. I've been using it well till now, when I observed that setting innerHTML on a div takes an increasing amount of time because of the onClicks or other JS methods that are attached to the anchors and images in the dynamic HTML. It appears the JS methods or the HTML are somehow not getting cleaned up properly (memory leak?). Also being discussed - http://www.experts-exchange.com/Programming/Languages/Scripting/JavaScript/Q_26185526.html#a32779090

    Read the article

  • Large File Uploads? SWFUpload?

    - by Ethabelle
    So, we offer video services and have run into an issue with people uploading large source files. I realized that our developer was using plain PHP HTTP uploads to handle this, and that was causing the slow times and breakdowns. Now they keep coming at me wanting to use SWFUpload, citing that it is used by YouTube, but I'm adamantly against it because -- well, Flash. However, I don't really know of a -better- solution that works across all browsers. So I was wondering: is SWFUpload, which hasn't been updated in a year, really a viable solution?
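    For context on why plain PHP HTTP uploads tend to break down at video sizes, these are the php.ini directives that bound a single-request upload (the values are hypothetical examples, not recommendations):

        ; Each of these must accommodate the largest file a user may post,
        ; and the whole upload must complete within the request time limits.
        upload_max_filesize = 2G
        post_max_size       = 2G
        max_execution_time  = 3600
        max_input_time      = 3600
        memory_limit        = 256M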

    Read the article

  • Database server: Small quick RAM or large slow RAM?

    - by Josh Smeaton
    We are currently designing our new database servers, and have come up with a trade-off I'm not entirely sure how to answer. These are our options: 48GB at 1333MHz, or 96GB at 1066MHz. My thinking is that RAM should be plentiful for a database server (we have lots and lots of data, and some very large queries) rather than as quick as it could be. Apparently we can't get 16GB chips at 1333MHz, hence the choices above. So, should we get lots of slower RAM, or less faster RAM? Extra info: DIMM slots available: 6; servers: Dell blades; CPU: 6-core (only single socket due to Oracle licensing).

    Read the article

  • Hibernate: can I override an identifier generator using XML with a custom generator?

    - by Ken Liu
    I want to use a custom sequence generator in my application, but the entity is located in a domain model jar that is shared with other applications. Apparently entity annotations can be overridden in orm.xml, but I can't figure out the proper XML incantation to get this to work. I can modify the annotation in the entity like this:

        @GenericGenerator(name = "MYGEN", strategy = "MyCustomGenerator")
        @GeneratedValue(generator = "MYGEN")

    But I need to somehow map this to orm.xml in order to override the original annotation. Looking at the orm.xml schema here, it appears that I can't even specify a generation type besides "sequence" and "table".
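    For reference, the kind of declaration the standard JPA orm.xml schema does allow - a built-in sequence generator rather than a custom strategy (entity class and names are hypothetical) - which is what the last sentence above is pointing at:

        <entity-mappings xmlns="http://java.sun.com/xml/ns/persistence/orm" version="1.0">
          <entity class="com.example.domain.Widget">
            <attributes>
              <id name="id">
                <generated-value strategy="SEQUENCE" generator="MYGEN"/>
                <sequence-generator name="MYGEN" sequence-name="WIDGET_SEQ" allocation-size="1"/>
              </id>
            </attributes>
          </entity>
        </entity-mappings>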

    Read the article

  • Is it naughty to have a large utility file?

    - by banister
    In my C project I have quite a large utils.c file. It is really full of many utilities of different sorts. I feel a bit naughty just stuffing different miscellaneous functions in there. For example it has some utilities related to low-level stuff, such as a lowercase() function, and it also has some quite sophisticated utilities, such as converting to/from different colour formats. My question is: is it very naughty to have such a large utils.c with many different types of utilities in it? Should I break it up into many different kinds of utility files, such as graphics_utils.c and so on? What do you think?

    Read the article

  • Interface Builder error: IBXMLDecoder: The value for key is too large to fit into a 32 bit integer

    - by stdout
    I'm working with Robert Payne's fork of PSMTabBarControl that works with IB 3.2 (thanks BTW Robert!): http://codaset.com/robertjpayne/psmtabbarcontrol/. The demo application works fine on 64-bit systems, but when I try to open the XIB file in Interface Builder on a 32-bit system I get: IBXMLDecoder: The value (4654500848) for key (myTrackingRectTag) is too large to fit into a 32 bit integer Building the app as 32 bit works, but then running it gives: PSMTabBarControlDemo[9073:80f] * -[NSKeyedUnarchiver decodeInt32ForKey:]: value (4654500848) for key (myTrackingRectTag) too large to fit in 32-bit integer Not sure if this is a generic IB issue that can occur when moving between 64 and 32 bit systems, or if this is a more specific issue with this code. Has anyone else run into this?

    Read the article

  • When is it safe to remove import entries from feature.xml?

    - by Robert Munteanu
    I've recently learned that the import section from feature.xml is legacy, and the actual dependency work is delegated to the p2 engine, which uses the information from the plugin manifest. I am not sure though if p2 is available for all recent versions of Eclipse, or in all Eclipse-based products, so I'm not sure if it is safe to remove the import section from feature.xml. Under what circumstances is it safe to remove the import section from feature.xml? Assume that we are taking into consideration Eclipse 3.4 or newer.
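    For reference, the import section under discussion is the <requires> block of feature.xml; a minimal sketch with hypothetical feature and plugin names:

        <feature id="com.example.myfeature" version="1.0.0">
          <requires>
            <import plugin="org.eclipse.core.runtime" version="3.4.0" match="greaterOrEqual"/>
            <import plugin="org.eclipse.ui" version="3.4.0" match="greaterOrEqual"/>
          </requires>
          <plugin id="com.example.myplugin" version="0.0.0"/>
        </feature>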

    Read the article

  • How to configure Dovecot to not serve large emails to high-latency clients?

    - by Daniel Quinn
    I have a Dovecot mail server running at home on a flaky cable connection. For the most part, the IMAP functionality works beautifully, but I'd like to add one feature if I can: I want Dovecot not to serve large messages to high-latency clients. That is to say, if someone decides that it's a good idea to send me a 9.3MB email, I don't want to get it unless I'm on my LAN at home. This can't be an uncommon request, but I'm having trouble finding the configuration option in their documentation. Any ideas and/or good keywords to use in Googling would be awesome.

    Read the article

  • A simple Volume Replication Tool for large data set?

    - by Jin
    I'm looking for a solution to the following. Server A (Site A): Win 2008 R2, approx 10TB (15TB max) of data, well over 8 million files. Server B (Site B): Win 2008 R2. I want to asynchronously replicate Server A's volume to a volume on Server B for data redundancy - something that lets me say to my users, "go here for data" when/if Server A goes belly up due to machine problems, disaster, etc. Windows 2008 R2 does have DFS, but Microsoft does not apparently support this large a dataset (or more accurately, more than 8 million files, according to the docs I could find). I also looked at Veritas Volume Replication, but this seems almost too much, as I would also require Veritas Volume Manager. There are numerous backup tools that make a 1:1 copy, which would be OK, but since it will be transferring over the Internet, I'd like something that has compression during transfer, like DFS has. Does anyone have any suggestions regarding this?

    Read the article

  • Load XML file on a separate thread, overwrite the old one, and save

    - by Luis Tovar
    Hey everyone... so I'm trying to figure out how to do this. I have bounced around a lot of forums to try and find my answer, but no success. Either the process is too complicated for me to understand, or it's just overkill. What I am trying to do is this: I have an XML file within my app. It's a 500k XML file that I don't want the user to have to wait on when loading, so I put it in my app, which kills the load time and makes the app available offline. What I want to do is, when the app loads, download the SAME XML file in the background (on a separate thread), since it MIGHT have been updated with new data. Once the download is complete, I want to REPLACE the XML file that was used for the initial load. Any suggestions or code hints would be greatly appreciated. Thanks in advance!

    Read the article

  • Copying a large directory tree locally? cp or rsync?

    - by Rory
    I have to copy a large directory tree, about 1.8 TB. It's all local. Out of habit I'd use rsync, however I wonder if there's much point, and whether I should rather use cp. I'm worried about permissions and uid/gid, since they have to be preserved in the copy (I know rsync does this), as well as things like symlinks. The destination is empty, so I don't have to worry about conditionally updating some files. It's all local disk access, so I don't have to worry about ssh or the network. The reason I'd be tempted away from rsync is that rsync might do more than I need: rsync checksums files. I don't need that, and am concerned that it might take longer than cp. So what do you reckon, rsync or cp?
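    For reference, a sketch of the two invocations being compared, in forms that preserve ownership, permissions, timestamps, and symlinks (paths hypothetical):

        # cp in archive mode: recursive, preserves mode, ownership, timestamps, symlinks
        cp -a /src/tree/. /dst/tree/

        # rsync in archive mode, plus hard links and numeric uid/gid preservation
        rsync -aH --numeric-ids /src/tree/ /dst/tree/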

    Read the article

  • How do I protect large file downloads through PHP and/or Apache?

    - by Eric
    We have some large files (1-8GB) that are not publicly accessible. Currently we're serving them up through a PHP script that buffers the files in 1MB chunks and writes it to the output. It's incredibly CPU intensive and slows the server down when only a few downloads are active. We want to move the file transfer work to Apache or a more efficient method. We are using cookie authentication. FTP downloads are out unless there's some way to authenticate FTP sessions through the existing PHP session cookie. Ideally we'd like something where we can use PHP to hide the link to the file while it passes off the file transfer work to Apache, which is no doubt far more efficient at HTTP file transfers than PHP. We want to be able to resume downloads as well. Any help is appreciated.
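    One commonly cited way to get the "PHP checks the cookie, Apache moves the bytes" split described above is mod_xsendfile (named here as an assumption - it is not mentioned in the question); a minimal sketch:

        <?php
        // download.php - hypothetical gatekeeper; the real check would use the
        // existing cookie-based session mentioned in the question.
        session_start();
        if (empty($_SESSION['authenticated'])) {
            header('HTTP/1.1 403 Forbidden');
            exit;
        }

        // Hand the actual transfer (including Range/resume handling) to Apache.
        // Requires mod_xsendfile with "XSendFile On" for this vhost/directory.
        $file = '/data/private/big-file.bin';   // never exposed in a public URL
        header('Content-Type: application/octet-stream');
        header('Content-Disposition: attachment; filename="big-file.bin"');
        header('X-Sendfile: ' . $file);
        exit;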

    Read the article

  • What is the reason large sites don't use MySQL with ASP.NET?

    - by Luke101
    I have read this article from highscalability about Stack Overflow and other large websites. Many large, high-traffic .NET sites such as plentyoffish.com, MySpace and SO all use .NET technologies and use SQL Server for their database. In the article, SO is quoted as saying: "As you add more and more database servers the SQL Server license costs can be outrageous. So by starting scale up and gradually going scale out with non-open source software you can be in a world of financial hurt." I don't understand why high-traffic .NET sites don't convert their databases to MySQL, as it is way cheaper than SQL Server.

    Read the article

  • Why is Internet access and Wi-Fi always so terrible at large tech conferences?

    - by Joel Spolsky
    Every tech conference I've ever been to, and I've been to a lot, has had absolutely abysmal Wi-Fi and Internet access. Sometimes it's the DHCP server running out of addresses. Sometimes the backhaul is clearly inadequate. Sometimes there's one router for a ballroom with 3000 people. But it's always SOMETHING. It never works. What are some of the best practices for conference organizers? What questions should they ask the conference venue or ISP to know, in advance, if the Wi-Fi is going to work? What are the most common causes of crappy Wi-Fi at conferences? Are they avoidable, or is Wi-Fi simply not an adequate technology for large conferences?

    Read the article

  • Is my large Windows folder slowing down my machine?

    - by Moses
    I have a problem with my Windows installation running very slow and my Windows folder being too large. I thought that the problems are related. My Windows folder is 17.4 GB. I have 1807 folders totalling 2.4 GB that are prefaced with a $. My System32 folder is 1.55 GB. My Microsoft.NET folder is 654 MB - I don't know what, if any, programs I have that are using it. My Service Pack folder is 568 MB. The Software Distribution folder is 536 MB. The ie8updates folder is 380 MB. How can I reduce the size of these folders, and could their size be why I am running so slow?

    Read the article

  • Which filesystem format that supports large files should I use for a USB flash drive compatible with Ubuntu/Mac/Windows?

    - by wajiw
    I've had this problem for a long time and can't find a solution. I switch between the 3 OSes all the time and use a 1TB USB drive to do so. I can't seem to find a format that is compatible across all systems and handles large files (at least 8-9 GB). Does anyone have a solution for this? Recently I've tried exFAT, but that messes up the filesystem when trying to read on Windows after adding files from Ubuntu (using the FUSE driver). The OSes I'm currently using are Windows Vista/7, Mac OS X (10.6.5) and Ubuntu 10.10.

    Read the article

  • Why does cpio say "WARNING! These file names were not selected" when copying a large number of files?

    - by mmm bacon
    For over 10 years, I've been using this strategy to copy a large number of files between UNIX filesystems:

        cd source_directory
        find . -depth -print | cpio -pdm /path/to/destination_directory

    It works like a champ. However, I'm now getting this error from cpio: cpio: WARNING! These file names were not selected: (long list of files here...) The source directory is on OSX 10.5, and the destination directory is an NFS filesystem from an OpenSolaris server. Copying over NFS has never been a problem in the past. There's nothing strange about the filenames, meaning there aren't special characters or anything like that. Any ideas?

    Read the article

  • Where can I find a large body of *Python3* source code?

    - by Ira Baxter
    I'm testing a Python parser. I have Python 2.6/2.7 firmly under control, and some good (large) code samples on which I've tested it. I'm interested in testing my Python3 variant. I've been to various Python open source web sites (e.g., http://pythonsource.com/), which list lots of packages, but they are pretty unclear as to whether these are Python 2.x or 3.x source files. The several samples that I downloaded all turned out to be Python 2.x. Where can I find a number of large Python 3 source code bases? I don't really want 1000 little separate Python3 files; I prefer big applications.

    Read the article

  • What constitutes a development environment, and how do you document it?

    - by Joel Coehoorn
    What items go into a software shop's development environment, how do you document it, and what processes do you follow to make changes? I'm thinking about this from the standpoint where I want to make it easier to bring new hires up to speed quickly by having all this on a checklist we follow when setting them up, and then, while I'm at it, making it easier for new hires or existing team members to bring new powerful toolkits and ideas into the environment without disrupting things. I want to keep this platform agnostic, so even though I'm currently at a Microsoft shop where Visual Studio would be assumed, I'll go ahead and list compiler/IDE as one of the items. Here are some ideas for part 1 ([edit]: I'm keeping this updated based on the better suggestions):
    - Source control access
    - Issue/bug/project tracker
    - System documentation, or references to find the system documentation in source control or in a wiki, including:
      - the build document/environment covered by this question
      - design documents / technical notes
      - coding style guidelines
      - deploy procedures for review/testing/QA/staging/production
      - licensing details for your tools and your product
    - Team calendar, including the project schedule(s), deadlines, vacation time, and support/on-call schedule (if required)
    - Compiler/IDE
    - Compiler/IDE extensions (things like source control plugins or Visual Studio add-ins)
    - 3rd party SDKs/toolkits
    - Database connection and tools
    - Testing frameworks
    - Internal libraries
    - Communication tools (chat, wiki, etc.)
    - Static analysis tools (FxCop, FlawFinder, etc.)
    - Virtual machines (holding the dev environment or for testing)
    - Specialized editors (modeling, XML, etc.)
    - Other tools
    What else goes in this list, and how do you document it and vet changes?

    Read the article

  • What's the proper and reliable way to store an XML string as a JSON property?

    - by Edwin
    Hi folks, I want to store a string which is itself an XML string as a property of a JSON object; what's the reliable and proper way of doing this? Should I encode the XML data into Base64 first, prior to saving it to the JSON object, given that JSON does not support binary data? Example of the data I want to store: { "string1" : "...moderately complex XML..." } Thank you in advance for any hints and suggestions!
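    For reference, a minimal sketch (in JavaScript, as an assumption about the environment) of what a standard JSON encoder does with an XML string - it escapes quotes, backslashes, and control characters, so Base64 is only needed when the payload is genuinely binary:

        // Hypothetical example: embed an XML snippet as a JSON string property.
        var xml = '<order id="42"><note>5 &lt; 7 &amp; "quoted"</note></order>';
        var json = JSON.stringify({ string1: xml });
        // json is now a valid JSON document; parsing it round-trips the XML exactly.
        var back = JSON.parse(json).string1;   // identical to xml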

    Read the article

  • How well does Solr scale over a large number of facet values?

    - by Continuation
    I'm using Solr and I want to facet over a field "group". Since "group" is created by users, potentially there can be a huge number of values for "group". Would Solr be able to handle a use case like this? Or is Solr not really appropriate for facet fields with a large number of values? I understand that I can set facet.limit to restrict the number of values returned for a facet field. Would this help in my case? Say there are 100,000 matching values for "group" in a search, if I set facet.limit to 50, would that speed up the query, or would the query still be slow because Solr still needs to process and sort through all the facet values and return the top 50 ones? Any tips on how to tune Solr for a large number of facet values? Thanks.
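    For reference, the kind of request being described - faceting on the group field with a cap on returned values (host, port, and handler path are hypothetical; the parameters are standard Solr faceting parameters):

        curl 'http://localhost:8983/solr/select?q=*:*&rows=0&facet=true&facet.field=group&facet.limit=50&facet.mincount=1'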

    Read the article

  • Xalan redirect:write: use either of two element values to name the new .xml file, depending on which is present

    - by Bilzac
    So I have the following code:

        <redirect:write select="concat('..\\folder\\', string(filename), '.xml')">

    Where "filename" is a tag in the xml source. My problem occurs when filename is null or blank. And this is the case for several of the xml filename tags. So what I am trying to implement is a checking method. This is what I have done:

        <xsl:if test="filename != ''">
          <xsl:variable name="tempName" select="filename"/>
        </xsl:if>
        <xsl:if test="filename = ''">
          <xsl:variable name="tempName" select="filenameB"/>
        </xsl:if>
        <redirect:write select="concat('..\\folder\\', string($tempName), '.xml')">

    I seem to be getting NPEs when I compile my Java code, saying "Variable not resolvable: tempName".
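    One thing worth noting (as a hedge, not a verified diagnosis for this particular Xalan setup): a variable declared inside an xsl:if only exists until that xsl:if ends, so tempName is out of scope by the time redirect:write runs. The usual XSLT 1.0 idiom is a single variable whose value is chosen inside it, roughly:

        <xsl:variable name="tempName">
          <xsl:choose>
            <xsl:when test="filename != ''">
              <xsl:value-of select="filename"/>
            </xsl:when>
            <xsl:otherwise>
              <xsl:value-of select="filenameB"/>
            </xsl:otherwise>
          </xsl:choose>
        </xsl:variable>
        <redirect:write select="concat('..\\folder\\', string($tempName), '.xml')">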

    Read the article
