Search Results

Search found 9467 results on 379 pages for 'objective c blocks'.

Page 238/379 | < Previous Page | 234 235 236 237 238 239 240 241 242 243 244 245  | Next Page >

  • How do you use selected vertical blocks in Visual Studio 2008 and 2010?

    - by jamesmoorecode
    I get that you can select a vertical block in Visual Studio 2008 (Alt-drag), but I don't understand how to use it once it's selected. How do you move the insertion point inside the block and have the text you're typing inserted on every line simultaneously? How do you move the block one space to the right or left? Or can you only copy/delete the selection? When I saw the ability to select, I assumed it was something like TextMate's vertical blocks, but maybe it's just not as advanced as that. New behavior (as of the VS2010 RC - added 13 Feb 2010): you can now type in the selection and have the same text show up on every line, and Tab moves the selected block.

    Read the article

  • Possible memory leak problem

    - by MaiTiano
    I wrote two C functions like the following; during a memcheck run under Valgrind, a lot of memory-leak information is reported.

        int GetMemory(int framewidth, int frameheight, int SR/*, int blocksize*//*,int ALL_REF_NUM*/)
        {
            //int i,j;
            int memory_size = 0;
            //int refnum = ALL_REF_NUM;
            int input_search_range = SR;

            memory_size += get_mem2D(&curFrameY, frameheight, framewidth);
            memory_size += get_mem2D(&curFrameU, frameheight>>1, framewidth>>1);
            memory_size += get_mem2D(&curFrameV, frameheight>>1, framewidth>>1);

            // to allocate the reference frame buffers according to the reference frame number
            memory_size += get_mem3D(&prevFrameY, refnum, frameheight, framewidth);
            memory_size += get_mem3D(&prevFrameU, refnum, frameheight>>1, framewidth>>1);
            memory_size += get_mem3D(&prevFrameV, refnum, frameheight>>1, framewidth>>1);

            memory_size += get_mem2D(&mpFrameY, frameheight, framewidth);
            memory_size += get_mem2D(&mpFrameU, frameheight>>1, framewidth>>1);
            memory_size += get_mem2D(&mpFrameV, frameheight>>1, framewidth>>1);

            // to allocate the search window according to the search range
            memory_size += get_mem2D(&searchwindow, input_search_range*2 + blocksize, input_search_range*2 + blocksize);

            /*memory_size +=*/ get_mem1D(/*&SAD_cost, height, width*/);

            // memory_size += get_mem2D(&searchwindow, 80, 80); // if searchrange is 32, then only 32+1+32+15 pixels
            // are needed in each row and col, therefore a search window range of 80 is enough!

            memory_size += get_mem2Dint(&all_mv, height/blocksize, width/blocksize);

            return 0;
        }

        void FreeMemory(int refno)
        {
            free_mem2D(curFrameY);
            free_mem2D(curFrameU);
            free_mem2D(curFrameV);

            free_mem3D(prevFrameY, refno);
            free_mem3D(prevFrameU, refno);
            free_mem3D(prevFrameV, refno);

            free_mem2D(mpFrameY);
            free_mem2D(mpFrameU);
            free_mem2D(mpFrameV);

            free_mem2D(searchwindow);
            free_mem1D();
            free_mem2Dint(all_mv);
        }

        void free_mem1D()
        {
            free(SAD_cost);
        }

    Now I want to pin down where the possible problems in my program are. Here is some of the Valgrind output showing the actual errors:
        ==29105==    by 0x804A906: main (me_search.c:1480)
        ==29105==
        ==29105==
        ==29105== HEAP SUMMARY:
        ==29105==     in use at exit: 124,088 bytes in 18 blocks
        ==29105==   total heap usage: 37 allocs, 21 frees, 749,276 bytes allocated
        ==29105==
        ==29105== 272 bytes in 1 blocks are still reachable in loss record 1 of 18
        ==29105==    at 0x402425F: calloc (vg_replace_malloc.c:467)
        ==29105==    by 0x804A296: get_mem2D (me_search.c:1315)
        ==29105==    by 0x804885E: GetMemory (me_search.c:117)
        ==29105==    by 0x804A757: main (me_search.c:1456)
        ==29105==
        ==29105== 352 bytes in 1 blocks are still reachable in loss record 2 of 18
        ==29105==    at 0x4024F20: malloc (vg_replace_malloc.c:236)
        ==29105==    by 0x409537E: __fopen_internal (iofopen.c:76)
        ==29105==    by 0x409544B: fopen@@GLIBC_2.1 (iofopen.c:107)
        ==29105==    by 0x804A660: main (me_search.c:1439)
        ==29105==
        ==29105== 584 bytes in 1 blocks are still reachable in loss record 3 of 18
        ==29105==    at 0x402425F: calloc (vg_replace_malloc.c:467)
        ==29105==    by 0x804A296: get_mem2D (me_search.c:1315)
        ==29105==    by 0x8048724: GetMemory (me_search.c:106)
        ==29105==    by 0x804A757: main (me_search.c:1456)
        ==29105==
        ==29105== 584 bytes in 1 blocks are still reachable in loss record 4 of 18
        ==29105==    at 0x402425F: calloc (vg_replace_malloc.c:467)
        ==29105==    by 0x804A296: get_mem2D (me_search.c:1315)
        ==29105==    by 0x8048747: GetMemory (me_search.c:107)
        ==29105==    by 0x804A757: main (me_search.c:1456)
        ==29105==
        ==29105== 584 bytes in 1 blocks are still reachable in loss record 5 of 18
        ==29105==    at 0x402425F: calloc (vg_replace_malloc.c:467)
        ==29105==    by 0x804A296: get_mem2D (me_search.c:1315)
        ==29105==    by 0x8048809: GetMemory (me_search.c:114)
        ==29105==    by 0x804A757: main (me_search.c:1456)
        ==29105==
        ==29105== 584 bytes in 1 blocks are still reachable in loss record 6 of 18
        ==29105==    at 0x402425F: calloc (vg_replace_malloc.c:467)
        ==29105==    by 0x804A296: get_mem2D (me_search.c:1315)
        ==29105==    by 0x804882C: GetMemory (me_search.c:115)
        ==29105==    by 0x804A757: main (me_search.c:1456)
        ==29105==
        ==29105== 584 bytes in 1 blocks are definitely lost in loss record 7 of 18
        ==29105==    at 0x402425F: calloc (vg_replace_malloc.c:467)
        ==29105==    by 0x804A296: get_mem2D (me_search.c:1315)
        ==29105==    by 0x804A4F8: get_mem3D (me_search.c:1393)
        ==29105==    by 0x804879B: GetMemory (me_search.c:110)
        ==29105==    by 0x804A757: main (me_search.c:1456)
        ==29105==
        ==29105== 584 bytes in 1 blocks are definitely lost in loss record 8 of 18
        ==29105==    at 0x402425F: calloc (vg_replace_malloc.c:467)
        ==29105==    by 0x804A296: get_mem2D (me_search.c:1315)
        ==29105==    by 0x804A4F8: get_mem3D (me_search.c:1393)
        ==29105==    by 0x80487C9: GetMemory (me_search.c:111)
        ==29105==    by 0x804A757: main (me_search.c:1456)
        ==29105==
        ==29105== 1,168 bytes in 1 blocks are still reachable in loss record 9 of 18
        ==29105==    at 0x402425F: calloc (vg_replace_malloc.c:467)
        ==29105==    by 0x804A296: get_mem2D (me_search.c:1315)
        ==29105==    by 0x8048701: GetMemory (me_search.c:105)
        ==29105==    by 0x804A757: main (me_search.c:1456)
        ==29105==
        ==29105== 1,168 bytes in 1 blocks are still reachable in loss record 10 of 18
        ==29105==    at 0x402425F: calloc (vg_replace_malloc.c:467)
        ==29105==    by 0x804A296: get_mem2D (me_search.c:1315)
        ==29105==    by 0x80487E6: GetMemory (me_search.c:113)
        ==29105==    by 0x804A757: main (me_search.c:1456)
        ==29105==
        ==29105== 1,168 bytes in 1 blocks are definitely lost in loss record 11 of 18
        ==29105==    at 0x402425F: calloc (vg_replace_malloc.c:467)
        ==29105==    by 0x804A296: get_mem2D (me_search.c:1315)
        ==29105==    by 0x804A4F8: get_mem3D (me_search.c:1393)
        ==29105==    by 0x804876D: GetMemory (me_search.c:109)
        ==29105==    by 0x804A757: main (me_search.c:1456)
        ==29105==
        ==29105== 6,336 bytes in 1 blocks are definitely lost in loss record 12 of 18
        ==29105==    at 0x4024F20: malloc (vg_replace_malloc.c:236)
        ==29105==    by 0x804A25C: get_mem1D (me_search.c:1295)
        ==29105==    by 0x8048866: GetMemory (me_search.c:119)
        ==29105==    by 0x804A757: main (me_search.c:1456)
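
    All of the "definitely lost" records above trace through get_mem3D (me_search.c:1393) or get_mem1D, so the two things to check first are whether FreeMemory() is reached at all, and whether free_mem3D() walks the same reference-frame count that get_mem3D() allocated. Below is a minimal sketch of the symmetric allocate/free pattern; get_mem3D_sketch()/free_mem3D_sketch() are modeled on the code in the question, not taken from the real me_search.c:

        #include <cstdlib>

        // One row-pointer array per reference frame, each frame holding a
        // contiguous rows-by-cols plane (modeled on the question's get_mem2D/3D).
        static int get_mem3D_sketch(unsigned char ****arr, int frames, int rows, int cols)
        {
            *arr = (unsigned char ***)std::malloc(frames * sizeof(unsigned char **));
            for (int f = 0; f < frames; f++) {
                (*arr)[f] = (unsigned char **)std::malloc(rows * sizeof(unsigned char *));
                (*arr)[f][0] = (unsigned char *)std::calloc((size_t)rows * cols, 1);
                for (int r = 1; r < rows; r++)
                    (*arr)[f][r] = (*arr)[f][0] + (size_t)r * cols;
            }
            return frames * rows * cols;
        }

        // The free side must walk the SAME frame count used at allocation;
        // if frames here is smaller than it was in get_mem3D_sketch(), every
        // extra plane leaks, matching the "definitely lost" records above.
        static void free_mem3D_sketch(unsigned char ***arr, int frames)
        {
            for (int f = 0; f < frames; f++) {
                std::free(arr[f][0]); // the contiguous plane
                std::free(arr[f]);    // the row-pointer array
            }
            std::free(arr);           // the top-level frame array
        }

    If the refno passed to FreeMemory() differs from the refnum used in GetMemory(), or if FreeMemory() is never called on the error path, leaks like records 7, 8, 11, and 12 are the expected result.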

    Read the article

  • What CSS should I use to create a series of horizontal, non-wrapping blocks?

    - by JOhnC
    I have a set of dynamically generated content - anywhere between 1 and about 25 blocks, each of which I want to be about 250px wide. Clearly, this can run off-screen, but that's fine since my design allows for horizontal scrolling (using jQuery - I don't want the browser to do it with its own scroll bars). So what CSS - cross-browser - is the best approach? Floats seem to wrap unreliably, and the dynamic nature of the content - which changes frequently through Ajax calls - means that recalculating the container width is not very practical. Is there another CSS-based option?

    Read the article

  • What difference does it make to use several script blocks on a web page?

    - by Jan Aagaard
    What difference does it make to use more than one script block on a web page? I have pasted in the standard code for including Google Analytics as an example, and I have seen the same pattern used in other places. Why is this code separated into two script blocks instead of just using a single one?

        <script type="text/javascript">
        var gaJsHost = (("https:" == document.location.protocol) ? "https://ssl." : "http://www.");
        document.write(unescape("%3Cscript src='" + gaJsHost + "google-analytics.com/ga.js' type='text/javascript'%3E%3C/script%3E"));
        </script>

        <script type="text/javascript">
        try {
            var pageTracker = _gat._getTracker("UA-xxxxxx-x");
            pageTracker._trackPageview();
        } catch(err) {}
        </script>

    Read the article

  • Why Do try ... catch Blocks Require Braces?

    - by Bidou
    Hello. While in other statements like if ... else you can avoid braces if there is only one instruction in a block, you cannot do that with try ... catch blocks: the compiler doesn't buy it. For instance:

        try
            do_something_risky();
        catch (...)
            std::cerr << "Blast!" << std::endl;

    With the code above, g++ simply says it expects a '{' before do_something_risky(). Why this difference in behavior between try ... catch and, say, if ... else? Thanks!
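
    For what it's worth, the braces are required by the C++ grammar itself: a try-block is defined as try followed by a compound statement and a sequence of handlers, and each handler likewise takes a compound statement, so there is no single-statement form to fall back on. A minimal sketch of what g++ will accept, with do_something_risky() defined here only to make the snippet self-contained:

        #include <iostream>

        void do_something_risky()
        {
            throw 42; // stand-in for whatever might actually go wrong
        }

        int main()
        {
            try {                // the grammar demands a compound statement here...
                do_something_risky();
            } catch (...) {      // ...and one after every handler as well
                std::cerr << "Blast!" << std::endl;
            }
            return 0;
        }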

    Read the article

  • The Controls collection cannot be modified because the control contains code blocks (i.e. <% ... %>)

    - by user339160
    Hi, I am trying to add a CSS file reference to the page header from code. My website uses a master page. I am getting the error: "The Controls collection cannot be modified because the control contains code blocks (i.e. <% ... %>)." My code snippet:

        string CssClass = string.Format("{0}/{1}?$BUILD$", BaseImageUrl, CssFileName);
        HtmlLink css = new HtmlLink();
        css.Href = CssClass;
        css.Attributes["rel"] = "stylesheet";
        css.Attributes["type"] = "text/css";
        Header.Controls.Add(css);

    Any suggestions?

    Read the article

  • How much C/C++ knowledge is needed for Objective-C/iPhone development?

    - by BFree
    First, a little background: I'm a .NET developer (C#) and have over 5 years' experience in both web development and desktop applications. I've been wanting to look into iPhone development for some time now, but for one reason or another always got sidetracked. I finally have a potential project on the horizon, and I'm now going full steam ahead learning this stuff. My question is this: I haven't done any C/C++ programming since my schooling days; I've been living in managed land ever since. How much knowledge, if any, is needed to be successful as an iOS developer? Obviously memory management is something I'll have to be conscious of (although with iOS 5 there seems to be something called ARC which should make my life easier), but what else? I'm not just talking about the C API (for example, to get the sine of a number, I call the sin() function) - that's what Google is for. I'm talking about fundamental C/C++ idioms that the average C# developer is unaware of.

    Read the article

  • Triangles in a C++ STL Vector as an Objective-C member sometimes draw incorrectly in OpenGL ES

    - by Rahil627
    The polygons draw correctly 80% of the time. When the drawing fails, a vertex is dislocated, and the polygon is consistently drawn with the same wrong vertex. I checked that the vector is correct during initialization, even when the polygon is wrongly drawn. I'm using Cocos2d. The class member:

        @interface Polygon : CCSprite {
            std::vector<float> triangleVertices;
        }

    The draw function called in [Polygon draw]:

        + (void)drawTrianglesWithVertices:(const std::vector<float> &)v
        {
            //glEnableClientState(GL_VERTEX_ARRAY);
            glDisable(GL_TEXTURE_2D);
            glDisableClientState(GL_TEXTURE_COORD_ARRAY);
            glDisableClientState(GL_COLOR_ARRAY);

            glVertexPointer(2, GL_FLOAT, 0, &v[0]);
            glDrawArrays(GL_TRIANGLES, 0, v.size());

            //glDisableClientState(GL_VERTEX_ARRAY);
            glEnableClientState(GL_COLOR_ARRAY);
            glEnableClientState(GL_TEXTURE_COORD_ARRAY);
            glEnable(GL_TEXTURE_2D);
        }

    Any ideas?
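
    One thing worth double-checking in the snippet above, offered as a guess rather than a confirmed diagnosis: glDrawArrays() takes a count of vertices, not of floats. With two floats per 2-D vertex, passing v.size() asks GL to read twice as many vertices as the vector holds, so the extras come from whatever memory happens to follow it, which could plausibly produce a dislocated vertex. A hedged sketch of the adjusted call:

        #include <vector>
        #include <OpenGLES/ES1/gl.h> // the OpenGL ES 1.1 header used on iOS/Cocos2d

        static void drawTriangles(const std::vector<float> &v)
        {
            // Two floats (x, y) per vertex, matching glVertexPointer(2, GL_FLOAT, ...),
            // so the vertex count is half the number of floats in the vector.
            glVertexPointer(2, GL_FLOAT, 0, &v[0]);
            glDrawArrays(GL_TRIANGLES, 0, (GLsizei)(v.size() / 2));
        }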

    Read the article

  • Resizing a LUKS encrypted volume

    - by mgorven
    I have a 500GiB ext4 filesystem on top of LUKS on top of an LVM LV. I want to resize the LV to 100GiB. I know how to resize ext4 on top of an LVM LV, but how do I deal with the LUKS volume?

        mgorven@moab:~% sudo lvdisplay /dev/moab/backup
          --- Logical volume ---
          LV Name                /dev/moab/backup
          VG Name                moab
          LV UUID                nQ3z1J-Pemd-uTEB-fazN-yEux-nOxP-QQair5
          LV Write Access        read/write
          LV Status              available
          # open                 1
          LV Size                500.00 GiB
          Current LE             128000
          Segments               1
          Allocation             inherit
          Read ahead sectors     auto
          - currently set to     2048
          Block device           252:3

        mgorven@moab:~% sudo cryptsetup status backup
        /dev/mapper/backup is active and is in use.
          type:    LUKS1
          cipher:  aes-cbc-essiv:sha256
          keysize: 256 bits
          device:  /dev/mapper/moab-backup
          offset:  3072 sectors
          size:    1048572928 sectors
          mode:    read/write

        mgorven@moab:~% sudo tune2fs -l /dev/mapper/backup
        tune2fs 1.42 (29-Nov-2011)
        Filesystem volume name:   backup
        Last mounted on:          /srv/backup
        Filesystem UUID:          63877e0e-0549-4c73-8535-b7a81eb363ed
        Filesystem magic number:  0xEF53
        Filesystem revision #:    1 (dynamic)
        Filesystem features:      has_journal ext_attr resize_inode dir_index filetype extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
        Filesystem flags:         signed_directory_hash
        Default mount options:    (none)
        Filesystem state:         clean with errors
        Errors behavior:          Continue
        Filesystem OS type:       Linux
        Inode count:              32768000
        Block count:              131071616
        Reserved block count:     0
        Free blocks:              112894078
        Free inodes:              32044830
        First block:              0
        Block size:               4096
        Fragment size:            4096
        Reserved GDT blocks:      992
        Blocks per group:         32768
        Fragments per group:      32768
        Inodes per group:         8192
        Inode blocks per group:   512
        RAID stride:              128
        RAID stripe width:        128
        Flex block group size:    16
        Filesystem created:       Sun Mar 11 19:24:53 2012
        Last mount time:          Sat May 19 13:29:27 2012
        Last write time:          Fri Jun  1 11:07:22 2012
        Mount count:              0
        Maximum mount count:      100
        Last checked:             Fri Jun  1 11:03:50 2012
        Check interval:           31104000 (12 months)
        Next check after:         Mon May 27 11:03:50 2013
        Lifetime writes:          118 GB
        Reserved blocks uid:      0 (user root)
        Reserved blocks gid:      0 (group root)
        First inode:              11
        Inode size:               256
        Required extra isize:     28
        Desired extra isize:      28
        Journal inode:            8
        Default directory hash:   half_md4
        Directory Hash Seed:      383bcbc5-fde9-4720-b98e-2d6224713ecf
        Journal backup:           inode blocks

    Read the article

  • Ext3 fs: Block bitmap for group 1 not in group (block 0). Is the fs dead?

    - by ip
    My company has a server with one big partition holding a MySQL database and PHP files. Now this partition seems to be corrupted, as reported by kernel messages when I tried to mount it manually:

        [329862.817837] EXT3-fs error (device loop1): ext3_check_descriptors: Block bitmap for group 1 not in group (block 0)!
        [329862.817846] EXT3-fs: group descriptors corrupted!

    I've tried to recover it by running tools from a PLD live CD. These are the tools I have tested, without any success: e2retrieve, testdisk, photorec, dd_rescue/dd_rhelp, ddrescue, fsck.ext2, and e2salvage.

        dumpe2fs 1.41.3 (12-Oct-2008)
        Filesystem volume name:   /dev/sda3
        Last mounted on:          <not available>
        Filesystem UUID:          dd51610b-6de0-4392-a6f3-67160dbc0343
        Filesystem magic number:  0xEF53
        Filesystem revision #:    1 (dynamic)
        Filesystem features:      has_journal filetype sparse_super
        Default mount options:    (none)
        Filesystem state:         not clean with errors
        Errors behavior:          Continue
        Filesystem OS type:       Linux
        Inode count:              9502720
        Block count:              18987570
        Reserved block count:     949378
        Free blocks:              11555345
        Free inodes:              11858398
        First block:              0
        Block size:               4096
        Fragment size:            4096
        Blocks per group:         32768
        Fragments per group:      32768
        Inodes per group:         16384
        Inode blocks per group:   512
        Last mount time:          Wed Mar 24 09:31:03 2010
        Last write time:          Mon Apr 12 11:46:32 2010
        Mount count:              10
        Maximum mount count:      30
        Last checked:             Thu Jan  1 01:00:00 1970
        Check interval:           0 (<none>)
        Reserved blocks uid:      0 (user root)
        Reserved blocks gid:      0 (group root)
        First inode:              11
        Inode size:               128
        Journal inode:            8
        Journal backup:           inode blocks
        dumpe2fs: A block group is missing an inode table while reading journal inode

        e2fsck 1.41.3 (12-Oct-2008)
        fsck.ext3: Group descriptors look bad... trying backup blocks...
        fsck.ext3: A block group is missing an inode table while checking ext3 journal for /dev/sda3

    I also tried the backup superblocks, with the same error as a result. Are there any other tools I should try before considering this disk definitely unrecoverable? Many thanks, ip

    Read the article

  • How can I move an ext3 partition to the beginning of the drive without losing data?

    - by Felipe Alvarez
    I have a 500GB external drive. It had two partitions, each around 250GB. I removed the first partition. I'd like to move the 2nd to the left, so it consumes 100% of the drive. How can this be accomplished without any GUI tools (CLI only)?

        fdisk

        Disk /dev/sdd: 500.1 GB, 500107862016 bytes
        255 heads, 63 sectors/track, 60801 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Disk identifier: 0xc80b1f3d

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdd2           29374       60801   252445410   83  Linux

        parted

        Model: ST350032 0AS (scsi)
        Disk /dev/sdd: 500GB
        Sector size (logical/physical): 512B/512B
        Partition Table: msdos

        Number  Start  End    Size   Type     File system  Flags
         2      242GB  500GB  259GB  primary  ext3         type=83

        dumpe2fs

        Filesystem volume name:   extstar
        Last mounted on:          <not available>
        Filesystem UUID:          f0b1d2bc-08b8-4f6e-b1c6-c529024a777d
        Filesystem magic number:  0xEF53
        Filesystem revision #:    1 (dynamic)
        Filesystem features:      has_journal dir_index filetype needs_recovery sparse_super large_file
        Filesystem flags:         signed_directory_hash
        Default mount options:    (none)
        Filesystem state:         clean
        Errors behavior:          Continue
        Filesystem OS type:       Linux
        Inode count:              15808608
        Block count:              63111168
        Reserved block count:     0
        Free blocks:              2449985
        Free inodes:              15799302
        First block:              0
        Block size:               4096
        Fragment size:            4096
        Blocks per group:         32768
        Fragments per group:      32768
        Inodes per group:         8208
        Inode blocks per group:   513
        Filesystem created:       Mon Feb 15 08:07:01 2010
        Last mount time:          Fri May 21 19:31:30 2010
        Last write time:          Fri May 21 19:31:30 2010
        Mount count:              5
        Maximum mount count:      29
        Last checked:             Mon May 17 14:52:47 2010
        Check interval:           15552000 (6 months)
        Next check after:         Sat Nov 13 14:52:47 2010
        Reserved blocks uid:      0 (user root)
        Reserved blocks gid:      0 (group root)
        First inode:              11
        Inode size:               256
        Required extra isize:     28
        Desired extra isize:      28
        Journal inode:            8
        Default directory hash:   half_md4
        Directory Hash Seed:      d0363517-c095-4f53-baa7-7428c02fbfc6
        Journal backup:           inode blocks
        Journal size:             128M

    Read the article

  • External File Upload Optimizations for Windows Azure

    - by rgillen
    [Cross posted from here: http://rob.gillenfamily.net/post/External-File-Upload-Optimizations-for-Windows-Azure.aspx]

    I'm wrapping up a bit of the work we've been doing on data movement optimizations for cloud computing, and the latest set of data yielded some interesting points I thought I'd share. The work done here is not really rocket science but may, in some ways, be slightly counter-intuitive and therefore seemed worthy of posting.

    Summary: for those who don't like to read detailed posts or don't have time, the synopsis is that if you are uploading data to Azure, block your data (even down to 1MB) and upload in parallel. Set your block size based on your source file size, but if you must choose a fixed value, use 1MB. Following the above will result in significant performance gains... upwards of 10x-24x, and a reduction in overall file transfer time of upwards of 90% (e.g., uploading a 1GB file averaged 46.37 minutes prior to optimizations and 1.86 minutes afterwards).

    Detail: for those of you who want more detail, or think that the claims at the end of the preceding paragraph are over-reaching, what follows is information and code supporting those claims. As the title would indicate, these tests were run from our research facility pointing to the Azure cloud (specifically US North Central, as it is physically closest to us) and do not represent intra-cloud results... we have performed intra-cloud tests and the overall results are similar in notion, but the data rates are significantly different, as are the tipping points for the various block sizes... this will be detailed separately.

    We started by building a very simple console application that would loop through a directory and upload each file to Azure storage. This application used the shipping storage client library from the 1.1 version of the Azure tools. The only real variation from the client library is that we added code to collect and record the duration (in ms) and size (in bytes) for each file transferred. The code is available here.

    We then created a directory that had a collection of files of the following sizes: 2KB, 32KB, 64KB, 128KB, 512KB, 1MB, 5MB, 10MB, 25MB, 50MB, 100MB, 250MB, 500MB, 750MB, and 1GB (50 files for each size listed). These files contained randomly-generated binary data and do not benefit from compression (a separate discussion topic). Our file generation tool is available here.

    The baseline was established by running the application described above against the directory containing all of the data files. This application uploads the files in a random order so as to avoid transferring all of the files of a given size sequentially, thereby spreading the effects of periodic Internet delays across the collection of results. We then ran some scripts to split the resulting data and generate some reports. The raw data collected for our non-optimized tests is available via the links in the Related Resources section at the bottom of this post.

    For each file size, we calculated the average upload time (and standard deviation) and the average transfer rate (and standard deviation). As you likely are aware, transferring data across the Internet is susceptible to many transient delays which can cause anomalies in the resulting data. It is for this reason that we randomized the order of source file processing as well as executed the tests 50x for each file size. We expect that these steps yield a sufficiently balanced set of results.

    Once the baseline was collected and analyzed, we updated the test harness application with some methods to split the source file into user-defined block sizes and then upload those blocks in parallel (using the PutBlock() method of Azure storage). The parallelization was handled by simply relying on the Parallel Extensions to .NET to provide a Parallel.For loop (see the linked source for specific implementation details in Program.cs, line 173 and following... less than 100 lines total). Once all of the blocks were uploaded, we called PutBlockList() to assemble/commit the file in Azure storage. For each block transferred, the MD5 was calculated and sent, ensuring that the bits that arrived matched what was intended. The timer for the blocked/parallelized transfer method wraps the entire process (source file splitting, block transfer, MD5 validation, file committal). A diagram of the process is as follows:

    We then tested the effects of blocking and parallelizing the transfers by running the updated application against the same source set, and did a parameter sweep on the block size including 256KB, 512KB, 1MB, 2MB, and 4MB (our assumption was that anything lower than 256KB wasn't worth the trouble, and 4MB is the maximum size of a block supported by Azure). The raw data for the parallel tests is available via the links in the Related Resources section at the bottom of this post. This data was processed and then compared against the single-threaded / non-optimized transfer numbers, and the results were encouraging. The Excel version of the results is available here.

    Two semi-obvious points need to be made prior to reviewing the data. The first is that if the block size is larger than the source file size, you will end up with a "negative optimization" due to the overhead of attempting to block and parallelize. The second is that as the files get smaller, the clock-time cost of blocking and parallelizing (overhead) is more apparent and can tend towards negative optimizations. For this reason (and as supported by the raw data provided in the linked worksheet), the charts and discussion below ignore source file sizes less than 1MB.

    (click chart for full size image)

    The chart above illustrates some interesting points about the results:

    - When the block size is smaller than the source file, performance increases, but as the block size approaches and then passes the source file size, you see decreasing benefit to the point of negative gains (see the values for the 1MB file size).
    - For some of the moderately-sized source files, small blocks (256KB) are best.
    - As the size of the source file gets larger (see values for 50MB and up), the smallest block size is not the most efficient (presumably due, at least in part, to the increased number of blocks, the increased number of individual transfer requests, and reassembly/committal costs).
    - Once you pass the 250MB source file size, the difference in rate between 1MB and 4MB blocks is more-or-less constant.
    - The 1MB block size gives the best average improvement (~16x), but the optimal approach would be to vary the block size based on the size of the source file.

    (click chart for full size image)

    The above is another view of the same data as the prior chart, just with the axes changed (the x-axis represents file size and the plotted data shows improvement by block size). It again highlights the fact that the 1MB block size is probably the best overall size, but also highlights the benefits of some of the other block sizes at different source file sizes.
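
    For readers who would rather see the blocking step as code than prose, below is a minimal sketch (in C++ rather than the C# of the actual test harness) of the split-into-blocks-and-upload-in-parallel pattern described above. upload_block() is a purely hypothetical stand-in for the PutBlock() call, so this illustrates only the chunking arithmetic and parallel dispatch, not the real client library API:

        #include <algorithm>
        #include <cstdint>
        #include <future>
        #include <vector>

        // Hypothetical stand-in for the PutBlock() upload described in the post;
        // a real implementation would transfer bytes [offset, offset + length).
        static void upload_block(uint64_t offset, uint64_t length)
        {
            (void)offset;
            (void)length;
        }

        // Split a file of file_size bytes into block_size chunks and dispatch
        // the uploads in parallel, analogous to the post's Parallel.For loop.
        static void upload_in_blocks(uint64_t file_size, uint64_t block_size)
        {
            // Round up so a final partial block is still uploaded.
            const uint64_t blocks = (file_size + block_size - 1) / block_size;

            std::vector<std::future<void>> pending;
            for (uint64_t i = 0; i < blocks; ++i) {
                const uint64_t offset = i * block_size;
                const uint64_t length = std::min(block_size, file_size - offset);
                pending.push_back(std::async(std::launch::async, upload_block, offset, length));
            }
            for (auto &f : pending)
                f.get(); // wait for every block before a PutBlockList()-style commit
        }

    Note how a block size larger than the file collapses this to a single block plus pure overhead, which is exactly the "negative optimization" described above.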
    This last chart shows the change in total duration of the file uploads based on different block sizes for the source file sizes. Nothing really new here, other than that this view of the data highlights the negative effects of poorly choosing a block size for smaller files.

    Summary

    What we have found so far is that blocking your file uploads and uploading them in parallel results in significant performance improvements. Further, utilizing extension methods and the Task Parallel Library (.NET 4.0) makes short work of altering the shipping client library to provide this functionality while minimizing the amount of change to existing applications that might be using the client library for other interactions.

    Related Resources

    - Source code for the upload test application
    - Source code for the random file generator
    - OData feed of raw data from the non-optimized transfer tests: Experiment Metadata; Experiment Datasets (2KB, 32KB, 64KB, 128KB, 256KB, 512KB, 1MB, 5MB, 10MB, 25MB, 50MB, 100MB, 250MB, 500MB, 750MB, and 1GB Uploads); Raw Data
    - OData feeds of raw data from the blocked/parallelized transfer tests: Experiment Metadata; Experiment Datasets; Raw Data (256KB, 512KB, 1MB, 2MB, and 4MB Blocks)
    - Excel worksheet showing summarizations and comparisons

    Read the article

  • Creating a mirrored software RAID with a bad-blocks HDD: how to check data integrity?

    - by rumburak
    There is an error in the System event log like this one: "The device, \Device\Harddisk1\DR1, has a bad block." Because of the above, I created a RAID 1 with this disk and another one, using Windows Server 2008 R2 software RAID volumes. The volume in Disk Manager is marked as "Failed Redundancy" and "At Risk". I can run "Reactivate Disk" and it starts to re-sync, but after a while it stops and returns to the previous state. It stops the re-sync on a bad block on the old disk and logs the same error in the System event log. The old disk's status is Errors; the new disk's status is Online. How can I check that there is an exact copy of the old disk on the new one? It is a server machine, so I would prefer to keep it running during this check.

    Read the article

  • How to set up Gmail in Outlook when the ISP blocks SMTP?

    - by Revolter
    In Outlook, I'm setting up a Gmail account and I'm not able to send mail because my ISP is blocking SMTP forwarding. Any ways to bypass this?

    EDIT: I've tried different settings and followed the Gmail support instructions, and it's still not working.

        telnet smtp.gmail.com 465
        telnet smtp.gmail.com 587
        telnet smtp.gmail.com 25

    All of them reply:

        Connecting To smtp.gmail.com...Could not open connection to the host, on port xxx : Connect failed

    And I don't have an email account from my ISP.

    Read the article

  • Verilog / SystemVerilog -- What is the behavior of blocking statements across two always blocks?

    - by miles.sherman
    I am wondering about the behavior of the code below. There are two always blocks: one is combinational, to calculate the next_state signal; the other is sequential and performs some logic to determine whether or not to shut down the system. It does this by setting the shutdown_now signal high and then calling state <= next_state. My question is: if the condition that sets shutdown_now becomes true (during clock cycle n) in a blocking manner before the state <= next_state line, will the state during clock cycle n+1 be SHUTDOWN or RUNNING? In other words, does the shutdown_now = 1'b1 line block across both always blocks, since the state signal depends on it through the next_state determination?

        enum {IDLE, RUNNING, SHUTDOWN} state, next_state;
        logic shutdown_now;

        // State machine (combinational)
        always_comb begin
            case (state)
                IDLE:     next_state <= RUNNING;
                RUNNING:  next_state <= shutdown ? SHUTDOWN : RUNNING;
                SHUTDOWN: next_state <= SHUTDOWN;
                default:  next_state <= SHUTDOWN;
            endcase
        end

        // Sequential behavior
        always_ff @ (posedge clk) begin
            // Some code here
            if (/* some condition */) begin
                shutdown_now = 1'b0;
            end else begin
                shutdown_now = 1'b1;
            end
            state <= next_state;
        end

    Read the article

  • Should Java try blocks be scoped as tightly as possible?

    - by isme
    I've been told that there is some overhead in using the Java try-catch mechanism. So, while it is necessary to put methods that throw checked exceptions within a try block to handle the possible exception, it is good practice performance-wise to limit the size of the try block to contain only those operations that could throw exceptions. I'm not so sure that this is a sensible conclusion.

    Consider the two implementations below of a function that processes a specified text file. Even if it is true that the first one incurs some unnecessary overhead, I find it much easier to follow. It is less clear where exactly the exceptions come from just from looking at the statements, but the comments clearly show which statements are responsible. The second one is much longer and more complicated than the first. In particular, the nice line-reading idiom of the first has to be mangled to fit the readLine call into a try block.

    What is the best practice for handling exceptions in a function where multiple exceptions could be thrown in its definition?

    This one contains all the processing code within the try block:

        void processFile(File f) {
            try {
                // construction of FileReader can throw FileNotFoundException
                BufferedReader in = new BufferedReader(new FileReader(f));
                // call of readLine can throw IOException
                String line;
                while ((line = in.readLine()) != null) {
                    process(line);
                }
            } catch (FileNotFoundException ex) {
                handle(ex);
            } catch (IOException ex) {
                handle(ex);
            }
        }

    This one contains only the methods that throw exceptions within try blocks:

        void processFile(File f) {
            FileReader reader;
            try {
                reader = new FileReader(f);
            } catch (FileNotFoundException ex) {
                handle(ex);
                return;
            }
            BufferedReader in = new BufferedReader(reader);
            String line;
            while (true) {
                try {
                    line = in.readLine();
                } catch (IOException ex) {
                    handle(ex);
                    break;
                }
                if (line == null) {
                    break;
                }
                process(line);
            }
        }

    Read the article

  • Why might ASP.NET be putting JavaScript in HTML Comment blocks, not CDATA?

    - by d4nt
    We have an ASP.NET 2.0 WebForms app that uses MS Ajax 1.0. It's working fine in all our environments (dev, test, IE6 VMs, etc.). However, at the customer site the client-side validation is not happening. We're currently trying to eliminate all the various factors, and along the way we asked them to get their page source and send it to us, and we found something interesting. In our environment, our page has ASP.NET JavaScript in CDATA blocks:

        <script type="text/javascript">
        //<![CDATA[
        . . .
        //]]>
        </script>

    In their environment, the same code looks like this:

        <script type="text/javascript">
        <!--
        . . .
        //-->
        </script>

    This may be a red herring, but I'd like to eliminate it as the cause of the validation issues. Does anyone know whether specific configurations/patches/versions of ASP.NET will make it do this?

    Read the article

  • LUKS with LVM, mount is not persistent after reboot

    - by linxsaga
    I have created a logical volume and used LUKS to encrypt it. But while rebooting the server, I get an error message (below), so I have to enter the root password and disable the /etc/fstab entry. So the mount of the LUKS partition is not persistent across reboots. I have this setup on RHEL6 and am wondering what I could be missing. I want the LV to get mounted on reboot. Later I would want to replace the device name with the UUID.

    Error message on reboot:

        Give root password for maintenance (or type Control-D to continue):

    Here are the steps from the beginning:

        [root@rhel6 ~]# pvcreate /dev/sdb
          Physical volume "/dev/sdb" successfully created
        [root@rhel6 ~]# vgcreate vg01 /dev/sdb
          Volume group "vg01" successfully created
        [root@rhel6 ~]# lvcreate --size 500M -n lvol1 vg01
          Logical volume "lvol1" created
        [root@rhel6 ~]# lvdisplay
          --- Logical volume ---
          LV Name                /dev/vg01/lvol1
          VG Name                vg01
          LV UUID                nX9DDe-ctqG-XCgO-2wcx-ddy4-i91Y-rZ5u91
          LV Write Access        read/write
          LV Status              available
          # open                 0
          LV Size                500.00 MiB
          Current LE             125
          Segments               1
          Allocation             inherit
          Read ahead sectors     auto
          - currently set to     256
          Block device           253:0
        [root@rhel6 ~]# cryptsetup luksFormat /dev/vg01/lvol1

        WARNING!
        ========
        This will overwrite data on /dev/vg01/lvol1 irrevocably.

        Are you sure? (Type uppercase yes): YES
        Enter LUKS passphrase:
        Verify passphrase:
        [root@rhel6 ~]# mkdir /house
        [root@rhel6 ~]# cryptsetup luksOpen /dev/vg01/lvol1 house
        Enter passphrase for /dev/vg01/lvol1:
        [root@rhel6 ~]# mkfs.ext4 /dev/mapper/house
        mke2fs 1.41.12 (17-May-2010)
        Filesystem label=
        OS type: Linux
        Block size=1024 (log=0)
        Fragment size=1024 (log=0)
        Stride=0 blocks, Stripe width=0 blocks
        127512 inodes, 509952 blocks
        25497 blocks (5.00%) reserved for the super user
        First data block=1
        Maximum filesystem blocks=67633152
        63 block groups
        8192 blocks per group, 8192 fragments per group
        2024 inodes per group
        Superblock backups stored on blocks:
                8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409

        Writing inode tables: done
        Creating journal (8192 blocks): done
        Writing superblocks and filesystem accounting information: done

        This filesystem will be automatically checked every 21 mounts or
        180 days, whichever comes first.  Use tune2fs -c or -i to override.
        [root@rhel6 ~]# mount -t ext4 /dev/mapper/house /house

    PS: Here I have successfully mounted:

        [root@rhel6 ~]# ls /house/
        lost+found
        [root@rhel6 ~]# vim /etc/fstab     -> entry as follows:
        /dev/mapper/house  /house  ext4  defaults  1 2
        [root@rhel6 ~]# vim /etc/crypttab  -> entry as follows:
        house  /dev/vg01/lvol1  password
        [root@rhel6 ~]# mount -o remount /house
        [root@rhel6 ~]# ls /house/
        lost+found
        [root@rhel6 ~]# umount /house/
        [root@rhel6 ~]# mount -a           -> SUCCESSFUL AGAIN
        [root@rhel6 ~]# ls /house/
        lost+found

    Please let me know if I am missing anything here. Thanks in advance.

    Read the article

  • Best XML Parser for RSS Feeds in Objective-C?

    - by Ansari
    Hi all, I am going to develop an application which will parse RSS feeds and display the items in my custom cell (a cell containing an image, label, description, etc.). The most popular way of parsing is using NSXMLParser, but this is a bit of a lengthy way. So is there any other way to do this? Or, my real question is: which is the best XML parser for Objective-C?

    Read the article
