Search Results

Search found 602 results on 25 pages for 'chunks'.

Page 6 of 25

  • Slick2D Rendering Lots of Polygons

    - by Hazzard
    I'm writing a little isometric game using Slick. The world terrain is made up of lots of quadrilaterals. In a small world that is 128 by 128 squares, over 16,000 quadrilaterals need to be rendered. This puts my pretty powerful computer down to 30 fps. I've thought about caching "chunks" of the world so only single chunks would ever need updating at a time, but I don't know how to do this, and I am sure there are other ways to optimize it besides that. Maybe I'm doing the whole thing wrong; surely fancy 3D games that run fine on my machine are more intensive than this. My question is: how can I improve the FPS, and am I doing something wrong? Or does it actually take that much power to render those polygons? -- Here is the source code for the render method in my game state. It iterates through a 2D array of heights and draws polygons based on the height.

      public void render(GameContainer container, StateBasedGame game, Graphics gfx) throws SlickException {
          gfx.translate(offsetX * d + container.getWidth() / 2, offsetY * d + container.getHeight() / 2);
          gfx.scale(d, d);
          for (int y = 0; y < placeholder.length; y++) { // x & y are isometric diag
              for (int x = 0; x < placeholder[0].length; x++) {
                  Polygon poly;
                  int hor = TestState.TILE_WIDTH * (x - y); // hor and ver are orthogonal
                  int W = TestState.TILE_HEIGHT * (x + y) - 1 * heights[y + 1][x]; // points to go off of
                  int S = TestState.TILE_HEIGHT * (x + y) - 1 * heights[y + 1][x + 1];
                  int E = TestState.TILE_HEIGHT * (x + y) - 1 * heights[y][x + 1];
                  int N = TestState.TILE_HEIGHT * (x + y) - 1 * heights[y][x];
                  if (placeholder[y][x] == null) {
                      poly = new Polygon(); // Create actual surface polygon
                      poly.addPoint(-TestState.TILE_WIDTH + hor, W);
                      poly.addPoint(hor, S + TestState.TILE_HEIGHT);
                      poly.addPoint(TestState.TILE_WIDTH + hor, E);
                      poly.addPoint(hor, N - TestState.TILE_HEIGHT);
                      float z = ((float) heights[y][x + 1] - heights[y + 1][x]) / 32 + 0.5f;
                      placeholder[y][x] = new Tile(poly, new Color(z, z, z));
                      //ShapeRenderer.fill(placeholder[y][x]);
                  }
                  if (true) { // ONLY draw tile if it's on screen
                      gfx.setColor(placeholder[y][x].getColor());
                      ShapeRenderer.fill(placeholder[y][x]);
                      //gfx.fill(placeholder[y][x]);
                      // DRAW EDGES
                      if (y + 1 == placeholder.length) { // draw South foundation edges
                          gfx.setColor(Color.gray);
                          Polygon found = new Polygon();
                          found.addPoint(-TestState.TILE_WIDTH + hor, W);
                          found.addPoint(hor, S + TestState.TILE_HEIGHT);
                          found.addPoint(hor, TestState.TILE_HEIGHT * (x + y + 1));
                          found.addPoint(-TestState.TILE_WIDTH + hor, TestState.TILE_HEIGHT * (x + y));
                          gfx.fill(found);
                      }
                      if (x + 1 == placeholder[0].length) { // north
                          gfx.setColor(Color.darkGray);
                          Polygon found = new Polygon();
                          found.addPoint(TestState.TILE_WIDTH + hor, E);
                          found.addPoint(hor, S + TestState.TILE_HEIGHT);
                          found.addPoint(hor, TestState.TILE_HEIGHT * (x + y + 1));
                          found.addPoint(TestState.TILE_WIDTH + hor, TestState.TILE_HEIGHT * (x + y));
                          gfx.fill(found);
                      }
                  }
              }
          }
      }
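    For reference, the chunk-caching idea mentioned above can be sketched with Slick's render-to-texture support. This is only a sketch under assumptions: TerrainChunk, CHUNK_TILES, the 512-pixel cache size and the coordinate mapping are made up here, and Image.getGraphics() requires FBO/pbuffer support on the hardware.

      // Rough sketch of "render each chunk once into a cached texture": not
      // working code for this project; names and sizes are hypothetical.
      import org.newdawn.slick.Graphics;
      import org.newdawn.slick.Image;
      import org.newdawn.slick.SlickException;

      class TerrainChunk {
          static final int CHUNK_TILES = 16;   // e.g. 16x16 tiles per cached image
          private Image cache;                 // pre-rendered pixels for this chunk
          private boolean dirty = true;        // set true when a height changes

          void render(Graphics gfx, float screenX, float screenY) throws SlickException {
              if (cache == null) {
                  cache = new Image(512, 512); // size depends on tile dimensions
              }
              if (dirty) {
                  Graphics g = cache.getGraphics();
                  g.clear();
                  // ... draw this chunk's polygons into g, as in render() above,
                  // but with coordinates relative to the chunk's origin ...
                  g.flush();                   // push the drawing into the texture
                  dirty = false;
              }
              // One textured quad per chunk instead of hundreds of fill() calls.
              cache.draw(screenX, screenY);
          }
      }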

    Read the article

  • Faster Memory Allocation Using vmtasks

    - by Steve Sistare
    You may have noticed a new system process called "vmtasks" on Solaris 11 systems:

      % pgrep vmtasks
      8
      % prstat -p 8
         PID USERNAME  SIZE   RSS STATE  PRI NICE      TIME  CPU PROCESS/NLWP
           8 root        0K    0K sleep   99  -20   9:10:59 0.0% vmtasks/32

    What is vmtasks, and why should you care? In a nutshell, vmtasks accelerates creation, locking, and destruction of pages in shared memory segments. This is particularly helpful for locked memory, as creating a page of physical memory is much more expensive than creating a page of virtual memory. For example, an ISM segment (shmflag & SHM_SHARE_MMU) is locked in memory on the first shmat() call, and a DISM segment (shmflg & SHM_PAGEABLE) is locked using mlock() or memcntl(). Segment operations such as creation and locking are typically single threaded, performed by the thread making the system call. In many applications, the size of a shared memory segment is a large fraction of total physical memory, and the single-threaded initialization is a scalability bottleneck which increases application startup time. To break the bottleneck, we apply parallel processing, harnessing the power of the additional CPUs that are always present on modern platforms. For sufficiently large segments, as many as 16 threads of vmtasks are employed to assist an application thread during creation, locking, and destruction operations. The segment is implicitly divided at page boundaries, and each thread is given a chunk of pages to process. The per-page processing time can vary, so for dynamic load balancing, the number of chunks is greater than the number of threads, and threads grab chunks dynamically as they finish their work. Because the threads modify a single application address space in a compressed time interval, contention on the locks protecting VM data structures was a problem, and we had to re-scale a number of VM locks to get good parallel efficiency. The vmtasks process has 1 thread per CPU and may accelerate multiple segment operations simultaneously, but each operation gets at most 16 helper threads to avoid monopolizing CPU resources. We may reconsider this limit in the future. Acceleration using vmtasks is enabled out of the box, with no tuning required, and works for all Solaris platform architectures (SPARC sun4u, SPARC sun4v, x86). The following tables show the time to create + lock + destroy a large segment, normalized as milliseconds per gigabyte, before and after the introduction of vmtasks:

      ISM
        system   ncpu   before   after   speedup
        ------   ----   ------   -----   -------
        x4600      32     1386     245      6X
        X7560      64     1016     153      7X
        M9000     512     1196     206      6X
        T5240     128     2506     234     11X
        T4-2      128     1197     107     11x

      DISM
        system   ncpu   before   after   speedup
        ------   ----   ------   -----   -------
        x4600      32     1582     265      6X
        X7560      64     1116     158      7X
        M9000     512     1165     152      8X
        T5240     128     2796     198     14X

    (I am missing the data for T4 DISM, for no good reason; it works fine.) The following table separates the creation and destruction times:

      ISM, T4-2
                  before   after
                  ------   -----
        create       702      64
        destroy      495      43

    To put this in perspective, consider creating a 512 GB ISM segment on T4-2. Creating the segment would take 6 minutes with the old code, and only 33 seconds with the new. If this is your Oracle SGA, you save over 5 minutes when starting the database, and you also save when shutting it down prior to a restart. Those minutes go directly to your bottom line for service availability.
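    The dynamic load-balancing scheme described above (more chunks than workers, with an idle worker grabbing the next chunk as soon as it finishes) is easy to picture with a small user-level sketch. This is purely illustrative Java, not the Solaris kernel code; the chunk and worker counts are made up.

      import java.util.concurrent.atomic.AtomicInteger;

      // Illustrative only: workers pull chunk indices off a shared counter, so a
      // worker that finishes a cheap chunk immediately moves on to the next one.
      class ChunkedInit {
          static final int CHUNKS = 128;      // more chunks than workers
          static final int WORKERS = 16;
          final AtomicInteger next = new AtomicInteger(0);

          void run() throws InterruptedException {
              Thread[] workers = new Thread[WORKERS];
              for (int w = 0; w < WORKERS; w++) {
                  workers[w] = new Thread(new Runnable() {
                      public void run() {
                          int chunk;
                          while ((chunk = next.getAndIncrement()) < CHUNKS) {
                              initChunk(chunk); // per-chunk work of varying cost
                          }
                      }
                  });
                  workers[w].start();
              }
              for (Thread t : workers) t.join();
          }

          void initChunk(int chunk) { /* e.g. touch/lock this chunk's pages */ }
      }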

    Read the article

  • XNA - 3D AABB collision detection and response

    - by fastinvsqrt
    I've been fiddling around with 3D AABB collision in my voxel engine for the last couple of days, and every method I've come up with thus far has been almost correct, but each one never quite worked exactly the way I hoped it would. Currently what I do is get two bounding boxes for my entity, one modified by the X translation component and the other by the Z component, and check if each collides with any of the surrounding chunks (chunks have their own octrees that are populated only with blocks that support collision). If there is a collision, then I cast out rays into that chunk to get the shortest collision distance, and set the translation component to that distance if the component is greater than the distance. The problem is that sometimes collisions aren't even registered. Here's a video on YouTube that I created showing what I mean. I suspect the problem may be with the rays that I cast to get the collision distance not being where I think they are, but I'm not entirely sure what would be wrong with them if they are indeed the problem. Here is my code for collision detection and response in the X direction (the Z direction is basically the same):

      // create the XZ offset vector
      Vector3 offsXZ = new Vector3(
          ( _translation.X > 0.0f ) ? SizeX / 2.0f : ( _translation.X < 0.0f ) ? -SizeX / 2.0f : 0.0f,
          0.0f,
          ( _translation.Z > 0.0f ) ? SizeZ / 2.0f : ( _translation.Z < 0.0f ) ? -SizeZ / 2.0f : 0.0f
      );

      // X physics
      BoundingBox boxx = GetBounds( _translation.X, 0.0f, 0.0f );
      if ( _translation.X > 0.0f ) {
          foreach ( Chunk chunk in surrounding ) {
              if ( chunk.Collides( boxx ) ) {
                  float dist = GetShortestCollisionDistance( chunk, Vector3.Right, offsXZ ) - 0.0001f;
                  if ( dist < _translation.X ) {
                      _translation.X = dist;
                  }
              }
          }
      }
      else if ( _translation.X < 0.0f ) {
          foreach ( Chunk chunk in surrounding ) {
              if ( chunk.Collides( boxx ) ) {
                  float dist = GetShortestCollisionDistance( chunk, Vector3.Left, offsXZ ) - 0.0001f;
                  if ( dist < -_translation.X ) {
                      _translation.X = -dist;
                  }
              }
          }
      }

    And here is my implementation for GetShortestCollisionDistance:

      private float GetShortestCollisionDistance( Chunk chunk, Vector3 rayDir, Vector3 offs ) {
          int startY = (int)( -SizeY / 2.0f );
          int endY = (int)( SizeY / 2.0f );
          int incY = (int)Cube.Size;
          float dist = Chunk.Size;
          for ( int y = startY; y <= endY; y += incY ) {
              // Position is the center of the entity's bounding box
              Ray ray = new Ray( new Vector3( Position.X + offs.X, Position.Y + offs.Y + y, Position.Z + offs.Z ), rayDir );
              // Chunk.GetIntersections(Ray) returns Dictionary<Block, float?>
              foreach ( var pair in chunk.GetIntersections( ray ) ) {
                  if ( pair.Value.HasValue && pair.Value.Value < dist ) {
                      dist = pair.Value.Value;
                  }
              }
          }
          return dist;
      }

    I realize some of this code can be consolidated to help with speed, but my main concern right now is to get this bit of physics programming to actually work.

    Read the article

  • Streaming to the Android MediaPlayer

    - by Rob Szumlakowski
    Hi. I'm trying to write a light-weight HTTP server in my app to feed dynamically generated MP3 data to the built-in Android MediaPlayer. I am not permitted to store my content on the SD card. My input data is essentially of an infinite length. I tell MediaPlayer that its data source should basically be something like "http://localhost/myfile.mp3". I've a simple server set up that waits for MediaPlayer to make this request. However, MediaPlayer isn't very cooperative. At first, it makes an HTTP GET and tries to grab the whole file. It times out if we try and simply dump data into the socket so we tried using the HTTP Range header to write data in chunks. MediaPlayer doesn't like this and doesn't keep requesting the subsequent chunks. Has anyone had any success streaming data directly into MediaPlayer? Do I need to implement an RTSP or Shoutcast server instead? Am I simply missing a critical HTTP header? What strategy should I use here? Rob Szumlakowski
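    For reference, here is the bare-bones shape of the "serve localhost HTTP to MediaPlayer" approach, sketched in Java. This is a sketch only: the port, the header set and nextMp3Chunk() are made up, and which headers a given Android release's MediaPlayer will actually accept varies, so treat it as a skeleton for experimenting rather than a known-good recipe.

      import java.io.OutputStream;
      import java.net.ServerSocket;
      import java.net.Socket;

      // Sketch of a minimal one-shot responder for "http://localhost:8080/myfile.mp3".
      // A real implementation should read and discard the GET request headers first.
      public class LocalMp3Server implements Runnable {
          public void run() {
              try {
                  ServerSocket server = new ServerSocket(8080);
                  Socket client = server.accept();            // MediaPlayer's GET
                  OutputStream out = client.getOutputStream();
                  out.write(("HTTP/1.1 200 OK\r\n"
                          + "Content-Type: audio/mpeg\r\n"
                          + "Connection: close\r\n"
                          + "\r\n").getBytes("ISO-8859-1"));
                  byte[] frame;
                  while ((frame = nextMp3Chunk()) != null) {  // hypothetical generator
                      out.write(frame);
                      out.flush();
                  }
                  client.close();
                  server.close();
              } catch (Exception e) {
                  // logging omitted in this sketch
              }
          }

          private byte[] nextMp3Chunk() { return null; /* produce encoded MP3 data */ }
      }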

    Read the article

  • split video (avi/h264) on keyframe

    - by m.sr
    Hello. I have a big video file. ffmpeg, tcprobe and other tools say it is an h264 stream in an AVI container. Now I'd like to cut small chunks out of the video. Problem: the index of the video seems corrupted/destroyed. I kind of fixed this via mplayer -forceidx -saveidx <IndexFile> <BigVideoFile>. The problem here is that I'm now stuck with mplayer/mencoder, which can use this index file via -loadidx <IndexFile>. I have tried correcting the index as described in man aviindex (mplayer -frames 0 -saveidx mpidx broken.avi ; aviindex -i mpidx -o tcindex ; avimerge -x tcindex -i broken.avi -o fixed.avi), but this didn't fix my video - meaning that most tools I've tested still couldn't seek in the video file. Problem: I cut out parts of the video via the following command: mencoder -loadidx in.idx -ss 8578 -endpos 20 -oac faac -ovc x264 -sws 9 -lavfopts format=mp4 -x264encopts <LotsOfOpts> -of lavf -vf scale=800:-10,harddup in.avi -o out.mp4. The problem here is that some videos are corrupted at the beginning. I think this is due to the fact that I do not necessarily cut at a keyframe. Questions:
    - What is the best way to fix the index of an AVI "inline", so that every tool can again work with it as expected?
    - How can I split at keyframes? Is there an mencoder option for this?
    - Do keyframes come at a regular interval? How can I find out that interval? (With a bit of math it should then be possible to calculate the next keyframe and cut there.)
    - Is there perhaps some completely different way to split this movie? Doing it by hand is no option; I have to cut out 1000+ chunks... Thanks a lot!

    Read the article

  • C# FileStream position is off after calling ReadLine()

    - by Cristi Diaconescu
    I'm trying to read a (small-ish) file in chunks of a few lines at a time, and I need to return to the beginning of particular chunks. The problem is, after the very first call to streamReader.ReadLine(), the streamReader.BaseStream.Position property is set to the end of the file! Now I assume some caching is done behind the scenes, but I was expecting this property to reflect the number of bytes that I used from that file. For instance, calling ReadLine() again will (naturally) return the next line in the file, which does not start at the position previously reported by streamReader.BaseStream.Position. My question is, how can I find the actual position where the 1st line ends, so I can return there later? I can only think of manually doing the bookkeeping, by adding the lengths of the strings returned by ReadLine(), but even here there are a couple of caveats:
    - ReadLine() strips the new-line character(s), which may have a variable length (is it '\n'? is it "\r\n"? etc.)
    - I'm not sure if this would work OK with variable-length characters
    ...so right now it seems like my only option is to rethink how I parse the file, so I don't have to rewind. If it helps, I open my file like this: using (var reader = new StreamReader( new FileStream(m_path, FileMode.Open, FileAccess.Read, FileShare.ReadWrite))) {...} Any suggestions?

    Read the article

  • Force full garbage collection when memory occupation goes beyond a certain threshold

    - by Silvio Donnini
    I have a server application that, on rare occasions, can allocate large chunks of memory. It's not a memory leak, as these chunks can be claimed back by the garbage collector by executing a full garbage collection. Normal garbage collection frees amounts of memory that are too small: it is not adequate in this context. The garbage collector executes these full GCs when it deems appropriate, namely when the memory footprint of the application nears the allotted maximum specified with -Xmx. That would be OK, if it weren't for the fact that these problematic memory allocations come in bursts and can cause OutOfMemoryErrors, because the JVM is not able to perform a GC quickly enough to free the required memory. If I manually call System.gc() beforehand, I can prevent this situation. Anyway, I'd prefer not having to monitor my JVM's memory allocation myself (or insert memory management into my application's logic); it would be nice if there were a way to run the virtual machine with a memory threshold, over which full GCs would be executed automatically, in order to release the memory I'm going to need well in advance. Long story short: I need a way (a command-line option?) to configure the JVM so that it releases a good amount of memory early (i.e. performs a full GC) when memory occupation reaches a certain threshold; I don't care if this slows my application down every once in a while. All I've found till now are ways to modify the sizes of the generations, but that's not what I need (at least not directly). I'd appreciate your suggestions, Silvio. P.S. I'm working on a way to avoid large allocations, but it could take a long time, and meanwhile my app needs a little stability.
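    For what it's worth, the JVM does expose a user-level hook in this direction: java.lang.management lets you register for a notification when a heap pool crosses a usage threshold, and the listener can then call System.gc(). The sketch below shows the shape of that; the 70% figure is an arbitrary example, and this is monitoring glued to an explicit GC call rather than a single -XX flag. On the command-line side, the concurrent collector's -XX:CMSInitiatingOccupancyFraction=N (with -XX:+UseConcMarkSweepGC) is the closest built-in knob for starting collections at a fixed occupancy.

      import java.lang.management.ManagementFactory;
      import java.lang.management.MemoryNotificationInfo;
      import java.lang.management.MemoryPoolMXBean;
      import java.lang.management.MemoryType;
      import javax.management.Notification;
      import javax.management.NotificationEmitter;
      import javax.management.NotificationListener;

      // Sketch: ask to be notified when any heap pool passes 70% of its max,
      // then request a full collection from the listener.
      public class GcOnThreshold {
          public static void install() {
              for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
                  if (pool.getType() == MemoryType.HEAP && pool.isUsageThresholdSupported()) {
                      long max = pool.getUsage().getMax();
                      if (max > 0) {
                          pool.setUsageThreshold((long) (max * 0.7)); // arbitrary 70%
                      }
                  }
              }
              NotificationEmitter emitter =
                      (NotificationEmitter) ManagementFactory.getMemoryMXBean();
              emitter.addNotificationListener(new NotificationListener() {
                  public void handleNotification(Notification n, Object handback) {
                      if (MemoryNotificationInfo.MEMORY_THRESHOLD_EXCEEDED.equals(n.getType())) {
                          System.gc(); // release memory well before -Xmx is reached
                      }
                  }
              }, null, null);
          }
      }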

    Read the article

  • Multimedia files written over WAN are getting truncated

    - by Dean
    I use the Windows Multimedia API to create .wav files:
    1. Open the file with mmioOpen
    2. Create the WAVE, fmt and data chunks using mmioCreateChunk
    3. Write audio data using mmioWrite
    4. Ascend out of the chunks using mmioAscend
    5. Close the file using mmioClose
    The file is written to a temporary location, so after it has been closed it gets copied to another location using CopyFile. This program is written in C++ and works great until the file it is writing resides over a WAN in a different city or country. The end result is that a wav file that should be 20-30 seconds long ends up being 4 seconds long. It is always the last bit that is missing, so when you play it back it just stops before the end of the recording. I initially thought that maybe I was copying the file too soon, so as a test I put in a pause of 30 seconds after closing the file using Sleep(30000), but this made no difference to either whether it was truncated or by how much. I have modified the program to write to a file in parallel using CreateFile and WriteFile, and the result is the same, so it is not an issue specifically with the mmio APIs. Does anyone have any ideas why this is happening and whether there is a workaround? I suspect that I may end up putting the temporary location on the local drive, but this is quite a big change to the application as well as to existing deployments. Thanks for everyone's time, Dean

    Read the article

  • Use $_FILES on a page called by .ajax

    - by RachelD
    I have two .php pages that I'm working with. Index.php has a file upload form that posts back to index.php. I can access $_FILES with no problem on index.php after submitting the form. My issue is that I want (after the form submit and the page load) to use .ajax (jQuery) to call another .php file, so that file can open and process some of the rows and return the results to ajax. The ajax then displays the results and recursively calls itself to process the next batch of rows. Basically I want to process the CSV (put it in the DB, etc.) in chunks and display the progress for the user in between chunks. I'm doing it this way because the files are 400,000+ rows and the user doesn't want to wait the 10+ minutes for them all to be processed. I don't want to move this file (save it) because I just need to process it and throw it away, and if a user closes the page while it's processing, the file won't be thrown away. I could cron script it, but I don't want to. What I would really like to do is pass the (single) $_FILES through .ajax, OR save it in $_POST or $_SESSION to use on the second page. Is there any hope for my cause? Here's the ajax code if that helps:

      function processCSV(startIndex, length) {
          $.ajax({
              url: "ajax-targets/process-csv.php",
              dataType: "json",
              type: "POST",
              data: { startIndex: startIndex, length: length },
              timeout: 60000, // 1000 = 1 sec
              success: function(data) {
                  // JQuery to display the rows from the CSV
                  var newStart = startIndex + length;
                  if (newStart <= data['csvNumRows']) {
                      processCSV(newStart, length);
                  }
              }
          });
      }
      processCSV(1, 2);
      });

    P.S. I did try this: Passing $_FILES or $_POST to a new page with PHP, but it's not working for me :( SOS.

    Read the article

  • Open source file upload with no timeout on IIS6 with ASP, ASP.NET 2.0 or PHP5

    - by Christopher Done
    I'm after a cross-platform cross-browser way of uploading files such that there is no timeout. Uploads aren't necessarily huge -- some just take a long time to upload because of the uploader's slow connection -- but the server times out anyway. I hear that there are methods to upload files in chunks so that somehow the server decides not to timeout the upload. After searching around all I can see is proprietary upload helpers and Java and Flash (SWFUpload) widgets that aren't cross-platform, don't upload in chunks, or aren't free. I'd like a way to do it in any of these platforms (ASP, ASP.NET 2.0 or PHP5), though I am not very clued up on all this .NET class/controller/project/module/visual studio/compile/etc stuff, so some kind of runnable complete project that runs on .NET 2.0 would be helpful. PHP and ASP I can assume will be more straight-forward. Unless I am completely missing something, which I suspect/hope I am, reasonable web uploads are bloody hard work in any language or platform. So my question is: how can I perform web browser uploads, cross-platform, so that they don't timeout, using free software? Is it possible at all?

    Read the article

  • Are lambda expressions/delegates in C# "pure", or can they be?

    - by Bob
    I recently asked about functional programs having no side effects, and learned what this means for making parallelized tasks trivial. Specifically, that "pure" functions make this trivial as they have no side effects. I've also recently been looking into LINQ and lambda expressions as I've run across examples many times here on StackOverflow involving enumeration. That got me to wondering if parallelizing an enumeration or loop can be "easier" in C# now. Are lambda expressions "pure" enough to pull off trivial parallelizing? Maybe it depends on what you're doing with the expression, but can they be pure enough? Would something like this be theoretically possible/trivial in C#?:
    - Break the loop into chunks
    - Run a thread to loop through each chunk
    - Run a function that does something with the value from the current loop position of each thread
    For instance, say I had a bunch of objects in a game loop (as I am developing a game and was thinking about the possibility of multiple threads) and had to do something with each of them every frame, would the above be trivial to pull off? Looking at IEnumerable it seems it only keeps track of the current position, so I'm not sure I could use the normal generic collections to break the enumeration into "chunks". Sorry about this question. I used bullets above instead of pseudo-code because I don't even know enough to write pseudo-code off the top of my head. My .NET knowledge has been purely simple business stuff and I'm new to delegates and threads, etc. I mainly want to know if the above approach is good for pursuing, and if delegates/lambdas don't have to be worried about when it comes to their parallelization.

    Read the article

  • Swingworker producing duplicate output/output out of order?

    - by Stefan Kendall
    What is the proper way to guarantee delivery when using a SwingWorker? I'm trying to route data from an InputStream to a JTextArea, and I'm running my SwingWorker with the execute method. I think I'm following the example here, but I'm getting out of order results, duplicates, and general nonsense. Here is my non-working SwingWorker:

      class InputStreamOutputWorker extends SwingWorker<List<String>, String> {
          private InputStream is;
          private JTextArea output;

          public InputStreamOutputWorker(InputStream is, JTextArea output) {
              this.is = is;
              this.output = output;
          }

          @Override
          protected List<String> doInBackground() throws Exception {
              byte[] data = new byte[4 * 1024];
              int len = 0;
              while ((len = is.read(data)) > 0) {
                  String line = new String(data).trim();
                  publish(line);
              }
              return null;
          }

          @Override
          protected void process( List<String> chunks ) {
              for( String s : chunks ) {
                  output.append(s + "\n");
              }
          }
      }
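    For what it's worth, the duplicated/garbled output is consistent with `new String(data)` decoding the whole 4 KB buffer on every pass, ignoring `len` and any stale bytes left over from the previous read, and with reads that don't end on line boundaries. A minimal sketch of one way around that (not the original code; it wraps the same stream in a BufferedReader so each publish() carries exactly one complete line):

      // Sketch: replace doInBackground() so only whole lines are published.
      @Override
      protected List<String> doInBackground() throws Exception {
          java.io.BufferedReader reader = new java.io.BufferedReader(
                  new java.io.InputStreamReader(is)); // default charset assumed
          String line;
          while ((line = reader.readLine()) != null) {
              publish(line); // process() appends it on the EDT
          }
          return null;
      }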

    Read the article

  • Avoiding GC thrashing with WSE 3.0 MTOM service

    - by Leon Breedt
    For historical reasons, I have some WSE 3.0 web services that I cannot upgrade to WCF on the server side yet (it is also a substantial amount of work to do so). These web services are being used for file transfers from client to server, using MTOM encoding. This can also not be changed in the short term, for reasons of compatibility. Secondly, they are being called from both Java and .NET, and therefore need to be cross-platform, hence MTOM. How it works is that an "upload" WebMethod is called by the client, sending up a chunk of data at a time, since files being transferred could potentially be gigabytes in size. However, due to not being able to control parts of the stack before the WebMethod is invoked, I cannot control the memory usage patterns of the web service. The problem I am running into is for file sizes from 50MB or so onwards, performance is absolutely killed because of GC, since it appears that WSE 3.0 buffers each chunk received from the client in a new byte[] array, and by the time we've done 50MB we're spending 20-30% of time doing GC. I've played with various chunk sizes, from 16k to 2MB, with no real great difference in results. Smaller chunks are killed by the latency involved with round-tripping, and larger chunks just postpone the slowdown until GC kicks in. Any bright ideas on cutting down on the garbage created by WSE? Can I plug into the pipeline somehow and jury-rig something that has access to the client's request stream and streams it to the WebMethod? I'm aware that it is possible to "stream" responses to the client using WSE (albeit very ugly), but this problem is with requests from the client.

    Read the article

  • Java InputReader. Detect if file being read is binary?

    - by Trizicus
    I had posted a question regarding this code. I found that JTextArea does not support the binary type data that is loaded. So my new question is: how can I go about detecting the 'bad' file, canceling the file I/O, and telling the user that they need to select a new file?

      class Open extends SwingWorker<Void, String> {
          File file;
          JTextArea jta;

          Open(File file, JTextArea jta) {
              this.file = file;
              this.jta = jta;
          }

          @Override
          protected Void doInBackground() throws Exception {
              BufferedReader br = null;
              try {
                  br = new BufferedReader(new FileReader(file));
                  String line = br.readLine();
                  while(line != null) {
                      publish(line);
                      line = br.readLine();
                  }
              } finally {
                  try {
                      br.close();
                  } catch (IOException e) {
                  }
              }
              return null;
          }

          @Override
          protected void process(List<String> chunks) {
              for(String s : chunks)
                  jta.append(s + "\n");
          }
      }
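    A common heuristic for this - a sketch, not a definitive test - is to read the first few kilobytes and treat the file as binary if it contains NUL bytes or a high proportion of non-text control characters. The 8000-byte sample and the 30% threshold below are arbitrary choices.

      // Sketch of a "looks binary?" check to run before starting the Open worker.
      static boolean looksBinary(File file) throws IOException {
          byte[] sample = new byte[8000];
          int len;
          FileInputStream in = new FileInputStream(file);
          try {
              len = in.read(sample);
          } finally {
              in.close();
          }
          if (len <= 0) return false;          // empty file: treat as text
          int suspicious = 0;
          for (int i = 0; i < len; i++) {
              int b = sample[i] & 0xFF;
              if (b == 0) return true;         // NUL almost never appears in text
              // Count control characters other than tab, LF, CR.
              if (b < 0x20 && b != '\t' && b != '\n' && b != '\r') suspicious++;
          }
          return suspicious > len * 0.30;      // mostly control chars: call it binary
      }

    If looksBinary(file) returns true, skip creating the worker and show a dialog (e.g. JOptionPane) asking the user to pick a different file.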

    Read the article

  • Parsing HTTP - Bytes.length != String.length

    - by hotzen
    Hello, I consume HTTP via nio.SocketChannel, so I get chunks of data as Array[Byte]. I want to put these chunks into a parser and continue parsing after each chunk has been put. HTTP itself seems to use an ISO8859 charset, but the payload/body may be arbitrarily encoded: if the HTTP Content-Length specifies X bytes, the UTF-8-decoded body may have far fewer characters (one character may be represented in UTF-8 by 2 bytes, etc.). So what is a good parsing strategy to honor an explicitly specified Content-Length and/or a Transfer-Encoding: chunked, which specifies a chunk length to be honored?
    - Append each data chunk to a mutable.ArrayBuffer[Byte], search for CRLF in the bytes, decode everything from 0 until CRLF to String and match with regular expressions like StatusRegex, HeaderRegex, etc.?
    - Decode each data chunk with the proper charset (e.g. ISO8859, UTF-8, etc.) and add it to a StringBuilder. With this solution I am not able to honor any Content-Length or chunk size, but... do I have to care?
    - Any other solution...?
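    One way to keep the byte counting separate from the character decoding is to cut the header block on raw bytes, count Content-Length in bytes, and only decode the body once it is complete. A rough sketch of that idea follows (written in Java, though the same shape translates to Scala; it is not a full HTTP parser - chunked transfer coding would need an extra layer that strips the chunk-size lines first, and header parsing here is deliberately naive):

      import java.io.ByteArrayOutputStream;
      import java.nio.charset.Charset;

      // Sketch: accumulate raw bytes, cut the header block at the first CRLFCRLF,
      // then treat the body purely as Content-Length bytes and decode it only once
      // all of them have arrived.
      class LengthAwareAccumulator {
          private final ByteArrayOutputStream buf = new ByteArrayOutputStream();
          private int headerEnd = -1;     // index just past CRLFCRLF, -1 while unknown
          private int contentLength = -1;

          void feed(byte[] chunk, int len) {
              buf.write(chunk, 0, len);
              byte[] b = buf.toByteArray();
              if (headerEnd < 0) {
                  for (int i = 3; i < b.length; i++) {
                      if (b[i - 3] == '\r' && b[i - 2] == '\n' && b[i - 1] == '\r' && b[i] == '\n') {
                          headerEnd = i + 1;
                          String headers = new String(b, 0, headerEnd, Charset.forName("ISO-8859-1"));
                          for (String line : headers.split("\r\n")) {
                              if (line.toLowerCase().startsWith("content-length:")) {
                                  contentLength = Integer.parseInt(line.substring(15).trim());
                              }
                          }
                          break;
                      }
                  }
              }
          }

          /** True once Content-Length raw bytes of body have been buffered. */
          boolean bodyComplete() {
              return headerEnd >= 0 && contentLength >= 0
                      && buf.size() - headerEnd >= contentLength;
          }

          /** Decode the body with the charset announced in Content-Type (not shown). */
          String body(Charset charset) {
              byte[] b = buf.toByteArray();
              return new String(b, headerEnd, contentLength, charset);
          }
      }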

    Read the article

  • Packet fragmentation when sending data via SSLStream

    - by Ive
    When using an SSLStream to send a 'large' chunk of data (1 meg) to an (already authenticated) client, the packet fragmentation / disassembly I'm seeing is FAR greater than when using a normal NetworkStream. Using an async read on the client (i.e. BeginRead()), the ReadCallback is repeatedly called with exactly the same size chunk of data up until the final packet (the remainder of the data). With the data I'm sending (it's a zip file), the segments happen to be 16363 bytes long. Note: my receive buffer is much bigger than this, and changing its size has no effect. I understand that SSL encrypts data in chunks no bigger than 18Kb, but since SSL sits on top of TCP, I wouldn't think that the number of SSL chunks would have any relevance to the TCP packet fragmentation? Essentially, the data is taking about 20 times longer to be fully read by the client than with a standard NetworkStream (both on localhost!). What am I missing? EDIT: I'm beginning to suspect that the receive (or send) buffer size of an SSLStream is limited. Even if I use synchronous reads (i.e. SSLStream.Read()), no more data ever becomes available, regardless of how long I wait before attempting to read. This would be the same behavior as if I were to limit the receive buffer to 16363 bytes. Setting the underlying NetworkStream's SendBufferSize (on the server) and ReceiveBufferSize (on the client) has no effect.

    Read the article

  • Is there a max recommended size on bundling js/css files due to chunking or packet loss?

    - by George Mauer
    So we all have heard that it's good to bundle your javascript. Of course it is, but it seems to me that the story is too simple. See if my logic makes sense here. Obviously fewer HTTP requests means fewer round trips and hence better. However - and I don't know much about bare HTTP - aren't HTTP responses sent in chunks? And if a file is larger than one of those chunks, doesn't it have to be downloaded as multiple (possibly synchronous?) round trips? As opposed to this, several requests for files just under the chunking size would arrive much quicker, since modern web browsers download resources like javascripts in parallel. Even if chunking is not an issue, it seems like there would be some max recommended size just due to the likelihood of packet loss alone, since a bundled file must wait till it is entirely downloaded to execute, versus the more lenient native rule that scripts must merely execute in order. Obviously there are also matters of browser caching and code volatility to consider, but can someone confirm this or explain why I'm off base? Does anyone have any numbers to put to it?

    Read the article

  • Save Xml in an Excel cell value causes ComException

    - by mas_oz2k1
    I am trying to save an object (Class1) as a string in a cell value. My issue is that from time to time I get a ComException: HRESULT: 0x8007000E (E_OUTOFMEMORY) (it is kind of random, but I have not identified any particular pattern yet) when I write the value into a cell. Any ideas will be welcome. For illustration purposes, let Class1 be the class to be converted to an XML string. (Notice that I removed the XML declaration at the start of the string to avoid having the preamble - a non-printable character - present.)

      <Class1 xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema"><ElementID>HL690375</ElementID></Class1>

    Class1 myClass = new Class1(); This class is converted to a string s: s = ConvertObjectToXmlString(myClass); then s is assigned to a cell: Range r = Application.ActiveCell; r.Value2 = s; Notes: (1) If the string is too big, I limit it to 32000 chars and split the string into chunks of 32000 chars, saving the chunks in multiple cells. (2) I do not quote the string before adding it to a cell. Do I need to? If so, how can it be done? (3) All object contents are English. (4) A C# code sample would be great, but VB.net code is OK.

    Read the article

  • What if a large number of objects are passed to my SwingWorker.process() method?

    - by Trejkaz
    I just found an interesting situation. Suppose you have some SwingWorker (I've made this one vaguely reminiscent of my own):

      public class AddressTreeBuildingWorker extends SwingWorker<Void, NodePair> {
          private DefaultTreeModel model;

          public AddressTreeBuildingWorker(DefaultTreeModel model) {
          }

          @Override
          protected Void doInBackground() {
              // Omitted; performs variable processing to build a tree of address nodes.
          }

          @Override
          protected void process(List<NodePair> chunks) {
              for (NodePair pair : chunks) {
                  // Actually the real thing inserts in order.
                  model.insertNodeInto(parent, child, parent.getChildCount());
              }
          }

          private static class NodePair {
              private final DefaultMutableTreeNode parent;
              private final DefaultMutableTreeNode child;

              private NodePair(DefaultMutableTreeNode parent, DefaultMutableTreeNode child) {
                  this.parent = parent;
                  this.child = child;
              }
          }
      }

    If the work done in the background is significant then things work well - process() is called with relatively small lists of objects and everything is happy. Problem is, if the work done in the background is suddenly insignificant for whatever reason, process() receives a huge list of objects (I have seen 1,000,000, for instance) and by the time you process each object, you have spent 20 seconds on the Event Dispatch Thread, exactly what SwingWorker was designed to avoid. In case it isn't clear, both of these occur on the same SwingWorker class for me - it depends on the input data, and the type of processing the caller wanted. Is there a proper way to handle this? Obviously I can intentionally delay or yield the background processing thread so that a smaller number might arrive each time, but this doesn't feel like the right solution to me.
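    One pattern that keeps the EDT responsive no matter how large a batch process() receives is to insert only a bounded slice per event-queue pass and hand the remainder back via invokeLater. The sketch below assumes the field and class names from the question; SLICE is an arbitrary number, and insertNodeInto is called with its standard child, parent, index argument order.

      // Sketch: meant to live inside AddressTreeBuildingWorker.
      private static final int SLICE = 500; // arbitrary; tune for your model

      @Override
      protected void process(List<NodePair> chunks) {
          // Copy defensively in case the framework reuses the list after process() returns.
          insertSlice(new java.util.ArrayList<NodePair>(chunks), 0);
      }

      // Insert at most SLICE nodes now, then yield to the event queue so paint
      // and input events can run between slices.
      private void insertSlice(final List<NodePair> pairs, final int from) {
          final int to = Math.min(from + SLICE, pairs.size());
          for (int i = from; i < to; i++) {
              NodePair pair = pairs.get(i);
              model.insertNodeInto(pair.child, pair.parent, pair.parent.getChildCount());
          }
          if (to < pairs.size()) {
              javax.swing.SwingUtilities.invokeLater(new Runnable() {
                  public void run() {
                      insertSlice(pairs, to);
                  }
              });
          }
      }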

    Read the article

  • Postfix mailq - send every x minutes

    - by Mike
    I have about 2,000 clients on my website who have subscribed to our mailing list. I've used Swift Mailer in the past, but it didn't work the way it was supposed to. I'm wondering if there is a way that Postfix could keep emails in the mail queue (if lots of emails are sent at the same time) and send chunks of 20-30 emails every 10-20 minutes, so that our server is not blacklisted. Any suggestions will be appreciated.

    Read the article

  • Disable scroll acceleration (windows/lenovo)

    - by dhardy
    Exactly the opposite of http://superuser.com/questions/101482/global-scroll-acceleration-in-windows: is there a way to disable scroll acceleration on Windows (touchpad -- maybe from the Lenovo drivers)? It's really annoying -- using no acceleration but high sensitivity, like on Linux, is so much more precise. (With this silly acceleration, I either have incredibly slow scrolling, or scrolling so fast it skips whole chunks of text at once. There is speed control but no acceleration control. I could say the same for the mouse cursor, though that's less annoying.)

    Read the article

  • flashcache with mdadm and LVM

    - by Backtogeek
    I am having trouble setting up flashcache on a system with LVM and mdadm. I suspect I am either just missing an obvious step or getting some mapping wrong, and hoped someone could point me in the right direction. System info: CentOS 6.4 64 bit. mdadm config:

      md0 : active raid1 sdd3[2] sde3[3] sdf3[4] sdg3[5] sdh3[1] sda3[0]
            204736 blocks super 1.0 [6/6] [UUUUUU]
      md2 : active raid6 sdd5[2] sde5[3] sdf5[4] sdg5[5] sdh5[1] sda5[0]
            3794905088 blocks super 1.1 level 6, 512k chunk, algorithm 2 [6/6] [UUUUUU]
      md3 : active raid0 sdc1[1] sdb1[0]
            250065920 blocks super 1.1 512k chunks
      md1 : active raid10 sdh1[1] sda1[0] sdd1[2] sdf1[4] sdg1[5] sde1[3]
            76749312 blocks super 1.1 512K chunks 2 near-copies [6/6] [UUUUUU]

    pvscan:

      PV /dev/mapper/ssdcache   VG Xenvol   lvm2 [3.53 TiB / 3.53 TiB free]
      Total: 1 [3.53 TiB] / in use: 1 [3.53 TiB] / in no VG: 0 [0 ]

    flashcache create command used: flashcache_create -p back ssdcache /dev/md3 /dev/md2

    pvdisplay:

      --- Physical volume ---
      PV Name               /dev/mapper/ssdcache
      VG Name               Xenvol
      PV Size               3.53 TiB / not usable 106.00 MiB
      Allocatable           yes
      PE Size               128.00 MiB
      Total PE              28952
      Free PE               28912
      Allocated PE          40
      PV UUID               w0ENVR-EjvO-gAZ8-TQA1-5wYu-ISOk-pJv7LV

    vgdisplay:

      --- Volume group ---
      VG Name               Xenvol
      System ID
      Format                lvm2
      Metadata Areas        1
      Metadata Sequence No  2
      VG Access             read/write
      VG Status             resizable
      MAX LV                0
      Cur LV                1
      Open LV               1
      Max PV                0
      Cur PV                1
      Act PV                1
      VG Size               3.53 TiB
      PE Size               128.00 MiB
      Total PE              28952
      Alloc PE / Size       40 / 5.00 GiB
      Free PE / Size        28912 / 3.53 TiB
      VG UUID               7vfKWh-ENPb-P8dV-jVlb-kP0o-1dDd-N8zzYj

    So that is where I am at. I thought that was the job done; however, when creating a logical volume called test and mounting it as /mnt/test, the sequential write is pathetic, 60-ish MB/s. /dev/md3 has 2 x SSDs in RAID 0, which alone performs at around 800 MB/s sequential write, and I am trying to cache /dev/md2, which is 6 x 1TB drives in RAID 6. I have read a number of pages through the day, some of them here, and it is obvious from the results that the cache is not functioning, but I am unsure why. I have added the filter line in lvm.conf: filter = [ "r|/dev/sdb|", "r|/dev/sdc|", "r|/dev/md3|" ]. It is probably something silly, but the cache is clearly performing no writes, so I suspect I am not mapping it or have not mounted the cache correctly. dmsetup status:

      ssdcache: 0 7589810176 flashcache stats:
          reads(142), writes(0)
          read hits(133), read hit percent(93)
          write hits(0) write hit percent(0)
          dirty write hits(0) dirty write hit percent(0)
          replacement(0), write replacement(0)
          write invalidates(0), read invalidates(0)
          pending enqueues(0), pending inval(0)
          metadata dirties(0), metadata cleans(0)
          metadata batch(0) metadata ssd writes(0)
          cleanings(0) fallow cleanings(0)
          no room(0) front merge(0) back merge(0)
          force_clean_block(0)
          disk reads(9), disk writes(0) ssd reads(133) ssd writes(9)
          uncached reads(0), uncached writes(0), uncached IO requeue(0)
          disk read errors(0), disk write errors(0) ssd read errors(0) ssd write errors(0)
          uncached sequential reads(0), uncached sequential writes(0)
          pid_adds(0), pid_dels(0), pid_drops(0) pid_expiry(0)
          lru hot blocks(31136000), lru warm blocks(31136000)
          lru promotions(0), lru demotions(0)
      Xenvol-test: 0 10485760 linear

    I have included as much info as I can think of; I look forward to any replies.

    Read the article

  • Synchronize large objects to S3 efficiently

    - by emk
    I need to synchronize about 30GB of git repositories to S3. These repos may contain some very large pack files, on the rough order of 2GB. I know that S3 has recently added support for large objects, and has new APIs that allow the objects to be uploaded as several parallel chunks. Is there a good command-line tool for Linux that allows me to efficiently synchronize large objects with S3 in a fashion similar to s3sync?

    Read the article

  • Can you recommend a good replacement for Windows Sound Recorder?

    - by andygrunt
    Can anyone recommend a good, free replacement for Windows 'Sound Recorder'? Requirements:
    - Very quick to load and run
    - Allows saving to a compact sound format (e.g. MP3)
    - Allows long recordings (1+ hours)
    - Free
    - Runs on Windows XP
    Would be nice but probably not essential:
    - Saves along the way (in case of crashes)
    - Allows simple edits (trim start and end, and maybe remove chunks in the middle)
    Before anyone suggests it, I'm aware of (and have used) Audacity, but I'm really after something as simple and lightweight as possible.

    Read the article
