Search Results

Search found 36925 results on 1477 pages for 'large xml document'.

Page 594/1477 | < Previous Page | 590 591 592 593 594 595 596 597 598 599 600 601  | Next Page >

  • HTTP POST parameters order / REST urls

    - by pq
    Let's say that I'm uploading a large file via a POST HTTP request. Let's also say that I have another parameter (other than the file) that names the resource which the file is updating. The resource cannot be part of the URL the way you can do it with REST (e.g. foo.com/bar/123). Let's say this is due to a combination of technical and political reasons. The server needs to ignore the file if the resource name is invalid or, say, the IP address and/or the logged in user are not authorized to update the resource. It looks like, if this POST came from an HTML form that contains the resource name first and the file field second, for most (all?) browsers, this order is preserved in the POST request. But it would be naive to fully rely on that, no? In other words, the order of HTTP parameters is insignificant and a client is free to construct the POST in any order. Isn't that true? Which means that, at least in theory, the server may end up storing the whole large file before it can deny the request. It seems to me that this is a clear case where RESTful URLs have an advantage, since you don't have to look at the POST content to perform certain authorization/error checking on the request. Do you agree? What are your thoughts, experiences?
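    The ordering worry is well founded: multipart/form-data imposes no ordering guarantee, and a hand-rolled client can put the file part first. What a server can do is parse the body as a stream and abort the moment it has enough information to deny the request. Below is a minimal sketch of that idea using the Apache Commons FileUpload streaming API; the field name "resource" and the validateResource helper are assumptions for illustration, not from the post.

        import java.io.InputStream;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;
        import org.apache.commons.fileupload.FileItemIterator;
        import org.apache.commons.fileupload.FileItemStream;
        import org.apache.commons.fileupload.servlet.ServletFileUpload;
        import org.apache.commons.fileupload.util.Streams;

        public class UploadHandler {

            // Hypothetical stand-in for the poster's resource/authorization checks.
            private boolean validateResource(String name, HttpServletRequest req) {
                return name != null && !name.isEmpty();
            }

            public void handle(HttpServletRequest req, HttpServletResponse resp) throws Exception {
                ServletFileUpload upload = new ServletFileUpload();
                // Parts are delivered in wire order, one at a time, unbuffered.
                FileItemIterator iter = upload.getItemIterator(req);
                String resource = null;
                while (iter.hasNext()) {
                    FileItemStream item = iter.next();
                    if (item.isFormField()) {
                        if ("resource".equals(item.getFieldName())) {
                            resource = Streams.asString(item.openStream());
                            if (!validateResource(resource, req)) {
                                resp.sendError(403); // deny before ever touching the file part
                                return;
                            }
                        }
                    } else {
                        if (resource == null) {
                            // File part arrived first: reject without buffering it anywhere.
                            resp.sendError(400, "resource field must precede the file");
                            return;
                        }
                        try (InputStream fileStream = item.openStream()) {
                            // stream the validated upload to storage here
                        }
                    }
                }
            }
        }

    Note what this does and doesn't buy you: the server never stores an unauthorized file, but if a client sends the file part first, those bytes still cross the wire before the rejection. Checking a RESTful URL happens before the body is read at all, which is exactly the advantage the post describes.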

    Read the article

  • New NSData with range of old NSData maintaining bytes.

    - by umop
    I have a fairly large NSData (or NSMutableData if necessary) object which I want to take a small chunk out of and leave the rest. Since I'm working with large amounts of NSData bytes, I don't want to make a big copy, but instead just truncate the existing bytes. Basically:

        NSData *source:      [a few bytes I want to discard][big chunk of bytes I want to keep]
        NSData *destination: [big chunk of bytes I want to keep]

    There are truncation methods in NSMutableData, but they only truncate the end of it, whereas I want to truncate the beginning. My thoughts are to do this with the methods -getBytes:range: and -initWithBytesNoCopy:length:freeWhenDone:. However, I'm trying to figure out how to manage memory with these. I'm guessing the process will be like this (I've placed ????s where I don't know what to do):

        void *buffer;
        // Get range of bytes
        [source getBytes:buffer range:NSMakeRange(myStart, myLength)];
        // Somehow (m)alloc the memory which will be freed up in the following step
        ?????
        // Release the source, now that I've allocated the bytes
        [source release];
        // Create a new data, recycling the bytes so they don't have to be copied
        NSData *destination = [[NSData alloc] initWithBytesNoCopy:buffer length:myLength freeWhenDone:YES];

    Thanks for the help!

    Read the article

  • Uncompress a TIFF file without going through BufferedImage

    - by Gert
    I am receiving large size CCITT Group 4 compressed TIFF files that need to be written elsewhere as uncompressed TIFF files. I am using the jai_imageio TIFF reader and writer to do that and it works well as long as the product width * height of the image fits in an integer. Here is the code I am using:

        TIFFImageReaderSpi readerSpi = new TIFFImageReaderSpi();
        ImageReader imageReader = readerSpi.createReaderInstance();
        byte[] data = blobManager.getObjectForIdAndVersion(id, version);
        ImageInputStream imageInputStream = ImageIO.createImageInputStream(data);
        imageReader.setInput(imageInputStream);

        TIFFImageWriterSpi writerSpi = new TIFFImageWriterSpi();
        ImageWriter imageWriter = writerSpi.createWriterInstance();
        ImageWriteParam imageWriteParam = imageWriter.getDefaultWriteParam();
        imageWriteParam.setCompressionMode(ImageWriteParam.MODE_DISABLED);
        // bufferFile is created in the constructor
        ImageOutputStream imageOutputStream = ImageIO.createImageOutputStream(bufferFile);
        imageWriter.setOutput(imageOutputStream);

        // Now read the bitmap
        BufferedImage bufferedImage = imageReader.read(0);
        IIOImage iIOImage = new IIOImage(bufferedImage, null, null);
        // and write it
        imageWriter.write(null, iIOImage, imageWriteParam);

    Unfortunately, the files that I receive are often very large and the BufferedImage cannot be created. I have been trying to find a way to stream from the ImageReader directly to the ImageWriter but I cannot find out how to do that. Anybody with a suggestion?
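    One way to avoid materializing the whole image is to copy it in horizontal strips: read a band of rows with ImageReadParam.setSourceRegion and write each band into a pre-declared empty output image via the ImageWriter replace-pixels methods. Those methods are optional for a writer, so treat this as a sketch to verify against the jai_imageio writer; the strip height is an arbitrary assumption, and the reader/writer are assumed to already have their streams set as in the post.

        import java.awt.Point;
        import java.awt.Rectangle;
        import java.awt.image.BufferedImage;
        import javax.imageio.ImageReadParam;
        import javax.imageio.ImageReader;
        import javax.imageio.ImageTypeSpecifier;
        import javax.imageio.ImageWriteParam;
        import javax.imageio.ImageWriter;

        public class TiffStripCopy {

            // Copies image 0 from reader to writer one strip at a time, so only
            // stripHeight rows are ever held in memory instead of the full bitmap.
            static void copyInStrips(ImageReader reader, ImageWriter writer,
                                     ImageWriteParam writeParam) throws Exception {
                int width = reader.getWidth(0);
                int height = reader.getHeight(0);
                int stripHeight = 256; // tune to taste

                if (!writer.canWriteEmpty()) {
                    throw new UnsupportedOperationException("writer cannot write empty images");
                }
                // Declare the full-size output image without providing pixels yet.
                ImageTypeSpecifier type = reader.getImageTypes(0).next();
                writer.prepareWriteEmpty(null, type, width, height, null, null, writeParam);
                writer.endWriteEmpty();

                if (!writer.canReplacePixels(0)) {
                    throw new UnsupportedOperationException("writer cannot replace pixels");
                }
                writer.prepareReplacePixels(0, new Rectangle(0, 0, width, height));
                ImageReadParam readParam = reader.getDefaultReadParam();
                ImageWriteParam replaceParam = writer.getDefaultWriteParam();
                for (int y = 0; y < height; y += stripHeight) {
                    int h = Math.min(stripHeight, height - y);
                    readParam.setSourceRegion(new Rectangle(0, y, width, h));
                    BufferedImage strip = reader.read(0, readParam); // only this strip in memory
                    replaceParam.setDestinationOffset(new Point(0, y));
                    writer.replacePixels(strip, replaceParam);
                }
                writer.endReplacePixels();
            }
        }

    If canWriteEmpty or canReplacePixels returns false for this writer, the usual fallback is to drop down to the raster level and assemble the output TIFF's strips by hand.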

    Read the article

  • iPhone OS: Strategies for high density image work

    - by Jasconius
    I have a project that is coming around the bend this summer that is going to involve, potentially, an extremely high volume of image data for display. We are talking hundreds of 640x480-ish images in a given application session (scaled to a smaller resolution when displayed), and handfuls of very large (1280x1024 or higher) images at a time. I've already done some preliminary work and I've found that the typical 640x480-ish image is just a shade under 1 MB in memory when placed into a UIImageView and displayed... but the very large images can be a whopping 5+ MB in some cases. This project is actually targeted at the iPad, which, in my Instruments tests, seems to cap out at about 80-100 MB of addressable physical memory. Details aside, I need to start thinking of how to move huge volumes of image data between virtual and physical memory while preserving the fluidity and responsiveness of the application, which will be high visibility. I'm probably on the higher end of intermediate at Objective-C... so I am looking for some solid articles and advice on the following:

        1) Responsible management of UIImage and UIImageView in the name of conserving physical RAM
        2) Merits of using CGImage over UIImage, particularly for the huge images, and if there will be any performance gain
        3) Anything dealing with memory paging, particularly as it pertains to images

    I will epilogue by saying that the numbers I have above may be off by about 10 or 15%. Images may or may not end up being bundled into the actual app itself as opposed to being loaded in from an external server.

    Read the article

  • TFS Solution build cascading to several other builds even when common components were not modified

    - by Bob Palmer
    Hey all, here is the issue I am currently trying to work through. We are using Team Foundation Server 2008, and utilizing the automated build support out of the box. We have one very large project that encompasses a number of interrelated components and web sites, each of which is set up as a Visual Studio solution file. Many of these solutions are highly interrelated since they may contain applications, or contain common libraries or shared components. We have roughly 20 or so applications, three large web sites, and about 20 components. Each solution may include projects from other solutions. For example, a solution for a console app would also include the project files for all of the components it utilizes, since we need to ensure that when someone changes a component and rebuilds it, it is reflected in all of the projects that consume that component, and we can make sure nothing was broken. We have build projects for each solution, whether that's an application, component, or web site. For this example, we will call them solutions 01, 02, and 03. These reference multiple projects (both their own core project and test projects, plus the projects relating to various components). Solution 01 has projects A, B, and C. Solution 02 has projects C, D, and E. Solution 03 has projects E, F, and G. Now, for the problem. If I modify project A, the system will end up rebuilding all three solutions. Worse, all of our solutions reference common projects used for data access (let's call it project H). Because they all share one project in common, if I modify any solution in my stack, even if it does not touch project H, I still end up kicking off every single build script. Any thoughts on how to address this? Ideally I would only want to kick off builds whose constituent projects were directly modified - i.e. in the example above, if I modified project C, I would only rebuild solutions 01 and 02. Thanks!

    Read the article

  • Recursive MySQL Query

    - by Rachel
    How can I implement recursive MySQL queries? I am trying to look for it but resources are not very helpful. I am trying to implement similar logic:

        public function initiateInserts()
        {
            // Open large CSV file (min 100K rows) for parsing.
            $this->fin = fopen($file,'r') or die('Cannot open file');

            // Parse the large CSV file to get data and initiate insertion into schema.
            $query = "";
            while (($data = fgetcsv($this->fin, 5000, ";")) !== FALSE) {
                $query = $query + "INSERT INTO dt_table (id, code, connectid, connectcode) VALUES ("
                       + $data[0] + ", " + $data[1] + ", " + $data[2] + ", " + $data[3] + ")";
            }
            $stmt = $this->prepare($query);
            // Execute the statement
            $stmt->execute();
            $this->checkForErrors($stmt);
        }

    @Author: Numenor
    Error Message: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '0' at line 1
    This approach inspired me to look for a MySQL recursive query approach.
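    Before any recursion question, two things stand out in the quoted code: PHP concatenates strings with . rather than + (with +, the strings are coerced to numbers, which is the likely origin of the error near '0'), and even with correct concatenation the loop builds one giant string of INSERT statements with no separator between them. Whatever the language, the conventional shape for this job is a single parameterized INSERT executed in batches. Here is a hedged sketch of that shape in Java/JDBC, since the batching idea is language-independent; the table and column names come from the post, while the file name and connection details are placeholders.

        import java.io.BufferedReader;
        import java.io.FileReader;
        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;

        public class CsvLoader {
            public static void main(String[] args) throws Exception {
                try (Connection conn = DriverManager.getConnection(
                         "jdbc:mysql://localhost/mydb", "user", "pass"); // placeholder DSN
                     BufferedReader in = new BufferedReader(new FileReader("data.csv"));
                     PreparedStatement stmt = conn.prepareStatement(
                         "INSERT INTO dt_table (id, code, connectid, connectcode) VALUES (?, ?, ?, ?)")) {
                    conn.setAutoCommit(false);
                    String line;
                    int pending = 0;
                    while ((line = in.readLine()) != null) {
                        String[] f = line.split(";"); // naive split; same delimiter as the post
                        for (int i = 0; i < 4; i++) {
                            stmt.setString(i + 1, f[i]);
                        }
                        stmt.addBatch();
                        if (++pending % 1000 == 0) {
                            stmt.executeBatch(); // flush every 1000 rows
                        }
                    }
                    stmt.executeBatch(); // flush the remainder
                    conn.commit();
                }
            }
        }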

    Read the article

  • How to get the actual address of a pointer in C?

    - by Airjoe
    BACKGROUND: I'm writing a single level cache simulator in C for a homework assignment, and I've been given code that I must work from. In our discussions of the cache, we were told that the way a small cache can hold large addresses is by splitting the large address into the position in the cache and an identifying tag. That is, if you had an 8 slot cache but wanted to store something with an address larger than 8, you take the 3 (because 2^3=8) rightmost bits and put the data in that position; so if you had address 22 for example, binary 10110, you would take those 3 rightmost bits 110, which is decimal 5, and put it in slot 5 of the cache. You would also store in this position the tag, which is the remaining bits 10. One function, cache_load, takes a single argument, an integer pointer. So effectively, I'm being given this int* addr which is an actual address and points to some value. In order to store this value in the cache, I need to split the addr. However, the compiler doesn't like it when I try to work with the pointer directly. So, for example, I try to get the position by doing:

        npos = addr % num_slots;

    The compiler gets angry and gives me errors. I tried casting to an int, but this actually got me the value that the pointer was pointing to, not the numerical address itself. Any help is appreciated, thanks!

    Read the article

  • BizTalk vs API for data broker layer

    - by jdt199
    My company is about to undergo a large project in which our client wants a large customer portal with a CMS and CRM. This will require interaction with data from multiple sources across our customer's business; these sources include XML office backend systems, SQL databases, web services, etc. Our proposed solution would be to write an API in C# to provide a common interface with all these systems. This would be scalable for future and concurrent projects within the company. Our client expressed an interest in using BizTalk rather than a custom API for this integration, as they feel it is an enterprise solution that any of their suppliers could pick up and use, and it will be better supported. We feel that the configuration work using BizTalk would be rather heavy for all their custom business rules which are required, and an interface for the new application to get data to and from BizTalk would still need to be written. Are we right to prefer a custom API solution above BizTalk? Would BizTalk be suitable as a data broker layer to provide an interface for the new customer portal we are writing? We have no experience with using BizTalk before, so any input would be appreciated.

    Read the article

  • Orphan IBM JVM process

    - by Nicholas Key
    Hi people, I have this issue about an orphan IBM JVM process being created in the process tree. For example:

        C:\Program Files\IBM\WebSphere\AppServer\bin>wsadmin -lang jython -f "C:\Hello.py"

    Hello.py has the simple implementation:

        import time
        i = 0
        while (1):
            i = i + 1
            print "Hello World " + str(i)
            time.sleep(3.0)

    My machine has such JVM information:

        C:\Program Files\WebSphere\java\bin>java -verbose:sizes -version
          -Xmca32K        RAM class segment increment
          -Xmco128K       ROM class segment increment
          -Xmns0K         initial new space size
          -Xmnx0K         maximum new space size
          -Xms4M          initial memory size
          -Xmos4M         initial old space size
          -Xmox1624995K   maximum old space size
          -Xmx1624995K    memory maximum
          -Xmr16K         remembered set size
          -Xlp4K          large page size
                          available large page sizes: 4K 4M
          -Xmso256K       operating system thread stack size
          -Xiss2K         java thread stack initial size
          -Xssi16K        java thread stack increment
          -Xss256K        java thread stack maximum size
        java version "1.6.0"
        Java(TM) SE Runtime Environment (build pwi3260sr6ifix-20091015_01(SR6+152211+155930+156106))
        IBM J9 VM (build 2.4, JRE 1.6.0 IBM J9 2.4 Windows Server 2003 x86-32 jvmwi3260sr6-20091001_43491 (JIT enabled, AOT enabled)
        J9VM - 20091001_043491
        JIT  - r9_20090902_1330ifx1
        GC   - 20090817_AA)
        JCL  - 20091006_01

    While the program is running, I tried to kill it and subsequently I found an orphan IBM JVM process in the process tree. Is there a way to fix this issue? Why is there an orphan process in the first place? Is there something wrong with my code? I really don't believe that my simplistic code is wrongly implemented. Any suggestions?

    Read the article

  • Creating nested arrays on the fly

    - by adardesign
    What I am trying to do is loop over this HTML and get a nested array of the HTML values I want to grab. It might look complex at first but it is a simple question... This script is just part of an object containing methods.

    html

        <div class="configureData">
            <div title="Large">
                <a href="yellow" title="true" rel="$55.00" name="sku22828"></a>
                <a href="green" title="true" rel="$55.00" name="sku224438"></a>
                <a href="Blue" title="true" rel="$55.00" name="sku22222"></a>
            </div>
            <div title="Medium">
                <a href="yellow" title="true" rel="$55.00" name="sku22828"></a>
                <a href="green" title="true" rel="$55.00" name="sku224438"></a>
                <a href="Blue" title="true" rel="$55.00" name="sku22222"></a>
            </div>
            <div title="Small">
                <a href="yellow" title="true" rel="$55.00" name="sku22828"></a>
                <a href="green" title="true" rel="$55.00" name="sku224438"></a>
                <a href="Blue" title="true" rel="$55.00" name="sku22222"></a>
            </div>
        </div>

    javascript

        // this is part of a script.....
        parseData: function(dH) {
            dH.find(".configureData div").each(function(indA, eleA) {
                colorNSize.tempSizeArray[indA] = [eleA.title, [], [], [], []];
                $(eleA).find("a").each(function(indB, eleB) {   // was $(eleZ); eleZ is undefined
                    colorNSize.tempSizeArray[indA][indB + 1] = eleB.title;  // was eleC.title; eleC is undefined
                });
            });
        },

    I expect the end array should look like this:

        [
            ["large",
                ["yellow", "green", "blue"],
                ["true", "true", "true"],
                ["$55", "$55", "$55"]
            ],
            ["Medium",
                ["yellow", "green", "blue"],
                ["true", "true", "true"],
                ["$55", "$55", "$55"]
            ]
        ]
        // and so on....

    Read the article

  • Google Web Toolkit or Microsoft Technology (Silverlight, ASP.NET)

    - by NativeByte
    We have a large code base in MFC and VB. A few applications are in .NET. All these applications interoperate with each other on the user's machine and also connect with Unix servers via sockets. Recently we have started discussing a rewrite of our applications and the possibility of moving a lot of these desktop applications to the web (they would run on an intranet). A straightforward way is rewriting them in one of the .NET technologies. But a suggestion about using Google Web Toolkit has popped up, and the argument is that it would help create applications that would run in a browser on both desktop and mobile devices. One of the key problems that I see is that GWT is a large abstraction over JavaScript. This will require the team to learn GWT, JavaScript, IDEs, etc., as their experience has been primarily Microsoft technologies and not Java. It would be easier for them to learn .NET technologies instead of GWT. I do not have deep knowledge of GWT and its drawbacks and pitfalls, and do not know about a parallel Microsoft technology that I should investigate. So I would appreciate it if people here can share their views or experiences using GWT or an equivalent Microsoft technology.

    Read the article

  • Cannot write the output to a text file in a C++ program

    - by swapedoc
    I have this input file: "https://code.google.com/codejam/contest/351101/dashboard/do/A-large-practice.in?cmd=GetInputFile&problem=374101&input_id=1&filename=A-large-practice.in&redownload_last=1&agent=website&csrfmiddlewaretoken=OWMxNTVmMTUyODBiYjhhN2Q2OTM3ZGJiMTNhNDkwMDF8fDEzNzIxNzI1NTE3ODAzMjA%3D". I tried to read this file using freopen("filename.txt", "r", stdin); and then I wanted the output to be written to another text file, which I can upload in this Code Jam practice question for the judge.

        #include <iostream>
        #include <cstdio>
        using namespace std;

        int main()
        {
            int t, k = 0, a[2000];
            freopen("ab.txt", "r", stdin);
            scanf("%d", &t);
            while (t--)
            {
                // Note: this reopens cb.txt in "w" mode on every iteration,
                // truncating whatever the previous iteration wrote.
                freopen("cb.txt", "w", stdout);
                int c;
                scanf("%d", &c);
                int n;
                scanf("%d", &n);
                for (int i = 0; i < n; i++)
                    scanf("%d", &a[i]);
                printf("Case #%d: ", ++k);
                for (int i = 0; i < n - 1; i++)
                {
                    for (int j = i + 1; j < n; j++)
                        if ((a[i] + a[j]) == c)
                        {
                            printf("%d %d\n", i + 1, j + 1);
                            i = n;
                        }
                }
            }
            return 0;
        }

    This is my code. Now the problem is that the output file cb.txt contains only the last line of the output. I want the whole of the output to be written to cb.txt, so what should I do?

    Read the article

  • Ajax-heavy JS apps using excessive amounts of memory over time

    - by Shane Reustle
    I seem to have some pretty large memory leaks in an app that I am working on. The app itself is not very complex. Every 15 seconds, the page requests approx 40 KB of JSON from the server and draws a table on the page using it. It is cheaper to redraw the table each time because the data is almost always new. I am attaching a few events to the table, approx 5 per line, 30 lines in the table. I use jQuery's .html() method to put the new HTML into the container and overwrite the existing content. I do this specifically so that jQuery's special cleanup functions go in and attempt to detach all events on the elements in the element that it is overwriting. I then also delete the large HTML variables once they are sent to the DOM, using delete my_var. I have checked for circular references and attached events that are never cleared a few times, but never REALLY dug into it. I was wondering if someone could give me a few pointers on how to optimize a very heavy app like this. I just picked up "High Performance JavaScript" by Nicholas Zakas, but haven't had much time to get into it yet. To give an idea of how much memory this is using: after ~4 hours, it is using about 420,000K in Chrome, and much more in Firefox or IE. Thanks!

    Read the article

  • Best ways to copyright-protect your work, for example Myows

    - by Saif Bechan
    Recently I read about Myows. They say it's "The universal copyright management and protection app for smart creatives". It is used to protect your application from copyright infringement and more. Do you think this would be a good idea for a large application, or are there better ways to achieve such a thing? url: Myows

    Update
    Referencing: http://stackoverflow.com/questions/2618015/has-anyone-tried-myows-to-copyright-protect-your-work/2618628#2618628
    Wow, there is no better person to have answered this question than the creator himself. Currently I am working on a large web application which is in its late testing phase. Because of the complexity of the app there are not many versions of it online, so copying will be a huge issue for me, as much of the code is in JavaScript and is easily copyable. I was glad to see that there is some company out there that provides such services, and naturally I wanted to know if there were people using it. I did not know that this type of concept was so 'new'. There were some nice points made in the answer, and I think it will be a good service for people like me. In the next couple of weeks I will be looking further into the subject and will start uploading my code and see how it works out. I will leave this question up because I do want some more suggestions on this topic.

    Read the article

  • PostgreSQL, Foreign Keys, Insert speed & Django

    - by Miles
    A few days ago, I ran into an unexpected performance problem with a pretty standard Django setup. For an upcoming feature, we have to regenerate a table hourly, containing about 100k rows of data, 9 MB on the disk, 10 MB of indexes according to pgAdmin. The problem is that inserting them by whatever method literally takes ages, up to 3 minutes of 100% disk busy time. That's not something you want on a production site. It doesn't matter if the inserts were in a transaction, issued via plain insert, multi-row insert, COPY FROM or even INSERT INTO t1 SELECT * FROM t2. After noticing this isn't Django's fault, I followed a trial and error route, and hey, the problem disappeared after dropping all foreign keys! Instead of 3 minutes, the INSERT INTO SELECT FROM took less than a second to execute, which isn't too surprising for a table <= 20 MB on the disk. What is weird is that PostgreSQL manages to slow down inserts by 180x just by using 3 foreign keys. Oh, disk activity was pure writing, as everything is cached in RAM; only writes go to the disks. It looks like PostgreSQL is working very hard to touch every row in the referred tables, as 3 MB/sec * 180s is way more data than the 20 MB this new table takes on disk. No WAL for the 180s case, I was testing in psql directly; in Django, add ~50% overhead for WAL logging. Tried @commit_on_success, same slowness; I had even implemented multi-row insert and COPY FROM with psycopg2. That's another weird thing: how can 10 MB worth of inserts generate 10x 16 MB log segments? Table layout:

        id serial primary key, a bunch of int32, 3 foreign keys to:
          - small table: 198 rows, 16 KB on disk
          - large table: 1.2M rows, 59 MB data + 89 MB index on disk
          - large table: 2.2M rows, 198 MB + 210 MB

    So, am I doomed to either drop the foreign keys manually or use the table in a very un-Django way by defining saving bla_id x3 and skip using models.ForeignKey? I'd love to hear about some magical antidote / pg setting to fix this.

    Read the article

  • How to shrink the ASPX page

    - by salvationishere
    I am developing a C#/ASP.NET web application in VS 2008. Currently this page is too tall: the buttons appear on top and then there is a large gap between these buttons and the resultLabel text. The following code is from my ASPX file. I have tried switching to the Design tab of this file and manually moving this label, but there is still a large gap. I'm sure this is simple. How do I correct this?

        Text="Now select from the dropdownlists which table columns from my database you want to map these fields to"
        <table align="center"><tr>
            <td style="text-align: center; width: 300px;">
                <asp:Label ID="resultLabel" runat="server"
                    style="position:absolute; text-align:center; top:148px; left: 155px;"
                    Visible="False"></asp:Label>
            </td>
        </tr></table>
        <p>

    Read the article

  • Interface for reading variable-length files with header and footer

    - by John S
    I could use some hints or tips for a decent interface for reading files with special characteristics. The files in question have a header (~120 bytes), a body (1 byte - 3 GB) and a footer (4 bytes). The header contains information about the body, and the footer is only a simple CRC32 value of the body. I use Java, so my idea was to extend the InputStream class and add a constructor such as public MyInStream(InputStream in), where I immediately read the header and then direct the overridden read()s to the body. The problem is, I can't give the user of the class the CRC32 value until the whole body has been read. Because the file can be 3 GB large, putting it all in memory is a bad idea. Reading it all into a temporary file is going to be a performance hit if there are many small files. I don't know how large the file is, because the InputStream doesn't have to be a file; it could be a socket. Looking at it again, maybe extending InputStream is a bad idea. Thank you for reading the confused thoughts of a tired programmer. :)
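    Extending the stream class can work for the header half of this: read the ~120 header bytes eagerly in the constructor, then checksum the body bytes as they flow past. The footer is the awkward part - with no length available (the socket case), you only know the last four bytes were the CRC once you hit EOF - so the sketch below (a FilterInputStream variant of the MyInStream idea; the header size is taken from the post, everything else is assumed) leaves footer handling to the caller, e.g. by buffering the final four bytes one level up.

        import java.io.FilterInputStream;
        import java.io.IOException;
        import java.io.InputStream;
        import java.util.zip.CRC32;

        public class MyInStream extends FilterInputStream {
            private static final int HEADER_SIZE = 120; // from the post
            private final byte[] header = new byte[HEADER_SIZE];
            private final CRC32 crc = new CRC32();

            public MyInStream(InputStream in) throws IOException {
                super(in);
                // Read the header eagerly, before the caller sees any body bytes.
                int off = 0;
                while (off < HEADER_SIZE) {
                    int n = in.read(header, off, HEADER_SIZE - off);
                    if (n < 0) throw new IOException("EOF inside header");
                    off += n;
                }
            }

            public byte[] getHeader() {
                return header.clone();
            }

            @Override
            public int read() throws IOException {
                int b = super.read();
                if (b >= 0) crc.update(b);
                return b;
            }

            @Override
            public int read(byte[] buf, int off, int len) throws IOException {
                int n = super.read(buf, off, len);
                if (n > 0) crc.update(buf, off, n);
                return n;
            }

            // Running CRC32 of everything read so far; it equals the body CRC
            // only once the body (and not the 4-byte footer) has been consumed.
            public long getBodyCrc() {
                return crc.getValue();
            }
        }

    If the header happens to encode the body length, the caller can read exactly that many bytes, call getBodyCrc(), then read the 4-byte footer and compare the two.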

    Read the article

  • iPhone: Leak with UIWebView loading Office documents. Any ideas how to avoid it?

    - by Thomas Tempelmann
    While there are already quite a few posts about leaks around UIWebView, mine is a bit more special, I believe, and thus deserves its own post here. I see a reproducible large leak every time I load an Office document such as a Word or Excel file. For instance, every time I display a 180 KB .doc file, I get a 100 KB leak. And that happens with both the simulator and an actual device, running OS 3.1.3. The leak is not visible with the Leaks instrument but only by looking at the malloc instances via the ObjectAlloc instrument. Here's a picture from the Instruments trace. I've also made a demo project, UIWebView-Leak.zip, so you can verify this yourself. To see the leak, use the ObjectAlloc instrument, switch to the view where you see individual allocation objects, and sort by size so that you see the large ones in a group, just like in my picture above. Then view an Office document a few times and find the Malloc objects that keep staying "Live" even after the actual UIWebView has been freed. Is this a known bug? Or is there any way I can avoid these leaks? I.e., have you successfully shown Office documents on an iPhone without getting such leaks? Note: I've reported this as a bug to Apple now, too (ID 7950594). I am still waiting for someone (including Apple) to confirm this as a true leak or show why it isn't (i.e. that I'm doing something wrong or making wrong assumptions).

    Read the article

  • What arguments to use to explain why a SQL DB is far better than a flat file

    - by jamone
    The higher-ups in my company were told by good friends that flat files are the way to go, and we should switch from MS SQL Server to them for everything we do. We have over 300 servers and hundreds of different databases. From just the few I'm involved with, we have 10 billion records in quite a few of them, with upwards of 100k new records a day and who knows how many updates... Me and a couple of others need to come up with a response saying why we shouldn't do this. Most of our stuff is ASP.NET with some legacy ASP. We thought that making a simple console app that tests/times the same interactions between a flat file (stored on the network) and SQL over the network - doing large inserts, searches, updates etc., along with things like random network disconnects - would show them how bad flat files can be, especially when you are dealing with millions of records. What things should I use in my response? What should I do with my demo code to illustrate this? My short list so far:

        - Security
        - Concurrent access
        - Performance with large amounts of data
        - Amount of time to do such a massive rewrite/switch
        - Lack of transactions
        - PITA to map relational data to flat files

    I fear that this will be a great post on The Daily WTF someday if I can't stop it now.

    Read the article

  • What arguments to use to explain why SQL Server is far better than a flat file

    - by jamone
    The higher ups in my company were told by good friends that flat files are the way to go, and we should switch from SQL Server to them for everything we do. We have over 300 servers and hundreds of different databases. From just the few I'm involved with we have 10 billion records in quite a few of them with upwards of 100k new records a day and who knows how many updates... Me and a couple others need to come up with a response saying why we shouldn't do this. Most of our stuff is ASP.NET with some legacy ASP. We thought that making a simple console app that tests/times the same interactions between a flat file (stored on the network) and SQL over the network - doing large inserts, searches, updates etc., along with things like random network disconnects - would show them how bad flat files can be, especially when you are dealing with millions of records. What things should I use in my response? What should I do with my demo code to illustrate this? My short list so far:

        - Security
        - Concurrent access
        - Performance with large amounts of data
        - Amount of time to do such a massive rewrite/switch
        - Lack of transactions
        - PITA to map relational data to flat files
        - NTFS doesn't support tons of files in a directory well

    I fear that this will be a great post on the Daily WTF someday if I can't stop it now.
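    For the demo itself, the most persuasive numbers usually come from the update path: changing one record in a flat file means rewriting everything after it (in practice, the whole file), so the cost grows with file size, while an indexed SQL UPDATE ... WHERE id = ? touches a handful of pages no matter how big the table gets. A self-contained sketch of the flat-file half of such a demo (record layout and row count are made up for illustration):

        import java.io.IOException;
        import java.nio.charset.StandardCharsets;
        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.nio.file.Paths;
        import java.util.ArrayList;
        import java.util.List;

        public class FlatFileUpdateDemo {
            public static void main(String[] args) throws IOException {
                Path file = Paths.get("records.txt");
                int rows = 1_000_000;

                // Build a million-line "table".
                List<String> lines = new ArrayList<>(rows);
                for (int i = 0; i < rows; i++) {
                    lines.add(i + ",customer-" + i + ",ACTIVE");
                }
                Files.write(file, lines, StandardCharsets.UTF_8);

                // Update a single record the only way a flat file allows:
                // read everything, change one line, write everything back.
                long start = System.nanoTime();
                List<String> all = Files.readAllLines(file, StandardCharsets.UTF_8);
                int target = rows / 2;
                all.set(target, target + ",customer-" + target + ",SUSPENDED");
                Files.write(file, all, StandardCharsets.UTF_8);
                long elapsedMs = (System.nanoTime() - start) / 1_000_000;

                System.out.println("One-record update via full rewrite: " + elapsedMs + " ms");
            }
        }

    Run the same single-record update against the SQL side and graph both against growing row counts; concurrency is even easier to demonstrate, since two copies of this rewrite loop running at once will happily clobber each other's changes without the locking a database provides.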

    Read the article

  • Concurrent WCF calls via shared channel

    - by Kent Boogaart
    I have a web tier that forwards calls onto an application tier. The web tier uses a shared, cached channel to do so. The application tier services in question are stateless and have concurrency enabled. But they are not being called concurrently. If I alter the web tier to create a new channel on every call, then I do get concurrent calls onto the application tier. But I wanted to avoid that cost since it is functionally unnecessary for my scenario. I have no session state, nor do I need to re-authenticate the caller each time. I understand that the creation of the channel factory is far more expensive than the creation of the channels, but it is still a cost I'd like to avoid if possible. I found this article on MSDN that states: "While channels and clients created by the channels are thread-safe, they might not support writing more than one message to the wire concurrently. If you are sending large messages, particularly if streaming, the send operation might block waiting for another send to complete." Firstly, I'm not sending large messages (just a lot of small ones, since I'm doing load testing) but am still seeing the blocking behavior. Secondly, this is rather open-ended and unhelpful documentation. It says they "might not" support writing more than one message but doesn't explain the scenarios under which they would support concurrent messages. Can anyone shed some light on this?

    Read the article

  • Thoughts on GoGrid vs EC2

    - by Jason
    I am currently hosting my SaaS application at GoGrid (Microsoft stack). Here's what I have:

        Database server - physical box, 12 GB RAM, 2 x quad-core CPU (2.13 GHz Xeon E5506)
        2 web/app servers - cloud servers, 2 GB RAM, 2 VCPUs
        300 GB monthly bandwidth

    I am paying around $900/month for this. My web/app servers are bursting at the seams and need to be upgraded to 4 GB of RAM. I also need a firewall, and GoGrid just added this service for an additional $200. After the upgrade, I will be paying around $1,400. I started looking at Amazon EC2, specifically this config:

        Database server - "High Memory Double Extra Large Instance" - 34 GB RAM, 13 EC2 compute units
        2 web/app servers - "Large Instance" - 7.5 GB RAM, 4 EC2 compute units

    If I go with 1-year reserved instances, my upfront cost would be $4,500 and my monthly cost would be $700. This comes to $1,075/month when amortized. Amazon also includes a firewall for free. Here are my questions: Do any of you have experience running a database (especially SQL Server) on an EC2 instance? How did it perform compared to a dedicated machine? One of my major concerns is disk I/O. Amazon's description of a compute unit is fairly vague. Any ideas on how the CPU performance on the database servers would compare? I am hoping that the Amazon solution will provide significantly better performance than my current or even improved GoGrid setup. Having a virtual database server would also be nice in terms of availability. Right now I would be in serious trouble if I had any hardware issues. Thanks for any insight...

    Read the article

  • How do you get the viewport scale after pinch/zoom on an iPhone web app?

    - by Loktar
    Does anyone know how to get the size in pixels or scale value of the viewport after a user has pinched or double tapped to zoom in/out on a page in JavaScript? I've tried using window.innerWidth but I've had mixed results. Sometimes it seems to accurately give the number of pixels the viewport is showing; however, if I zoom way in on a page and then do a large pinch to zoom back out, window.innerWidth will be around 600-700 even though it is only showing ~200px of the page. The page is only 400px wide and it didn't show the checkered "you've gone too far" background you see when you zoom out beyond the page size. If I do small pinches to zoom in and out, window.innerWidth appears to work just fine. Unfortunately I can't rely on a user only making small pinch gestures :) I've also tried to use the scale property on the gesture event object, but I've found that unreliable because you don't always know the initial scale when you reload the page or use back/forward buttons to navigate to it, even when using the meta tag to specify it. Ultimately, I'm trying to make an app that is aware of when a user is trying to zoom out beyond the maximum zoom level, so if there is another way to do this I'm interested in hearing about it :) Here's the code I'm using to get the innerWidth:

        document.body.addEventListener('gestureend', function (evt) {
            console.log(window.innerWidth); // inaccurate when doing large pinch gestures
        }, false);

    Thanks!

    Read the article

  • Append JSON data to HTML class name

    - by user2898514
    I have a problem with my JSON code. I want each JSON value to be appended to the HTML element whose class name matches that value's key. Here's my live demo. If you look at the result in the live demo, it's only appending the last record. Is it possible to make it show all records in order?

    json

        var json = '[{"castle":"big","commercial":"large","common":"sergio","cultural":"2009"},' +
                   '{"castle":"big2","commercial":"large2","common":"sergio2","cultural":"20092"}]';

    html

        <div class="castle"></div>
        <div class="commercial"></div>
        <div class="common"></div>
        <div class="cultural"></div>

    javascript

        var data = $.parseJSON(json);
        $.each(data, function(l, v) {
            $.each(v, function(k, o) {
                $('.' + k).attr('id', k + o);
                console.log($('#' + k + o).attr('id'));
                $('#' + k + o).text(o);
            });
        });

    For more illustration... I want the result in the live demo to look like this: big large sergio 2009, big2 large2 sergio2 20092

    Read the article

  • Reducing load time, or making the user think the load time is less

    - by Malfist
    I've been working on a website, and we've managed to reduce the total content for a page load from 13.7 MiB to 2.4 MiB, but the page still takes forever to load. It's a Joomla site (ick), and it has a lot of redundant DOM elements (2000+ for the home page) and makes 60+ HTTP requests per page load, counting all the CSS, JS, and image requests. Unlike Drupal, Joomla won't merge them all on the fly, and they have to be kept separate or else the Joomla components will go nuts. What can I do to improve load time? Things I've done:

        - Added colors to DOM elements that have large images as their background, so the color is loaded first, then the image
        - Reduced excessively large images to much smaller file sizes
        - Reduced DOM elements to ~2000, from ~5000
        - Loading CSS at the start of the page and JavaScript at the end (not totally possible; Joomla injects its own JavaScript and CSS, and it does it in the header, always)
        - Minified most JavaScript
        - Set up caching and gzipping on the server

    Uncached size is 2.4 MB, cached is ~300 KB, but even with so many DOM elements, the page takes a good bit of time to render. What more can I do to improve the load time?

    Read the article

< Previous Page | 590 591 592 593 594 595 596 597 598 599 600 601  | Next Page >