Search Results

Search found 70567 results on 2823 pages for 'binary file'.


  • EXC_BAD_ACCESS and KERN_INVALID_ADDRESS after initiating a print sequence.

    - by Edward M. Bergmann
    Mac G4/1.5GHz/2GB/1TB, OS X 10.4.11. The startup volume has been erased and completely reinstalled with updated software. The current problem occurs only when printing to an Epson Artisan 800 (USB as well as Ethernet connected) from Macromedia FreeHand 10.0.1.67. All other apps and printers work fine. Memory has been removed, swapped, and reinstalled several times, and the CPU was changed from 1.5 GHz to 1.3 GHz. The page(s) will print, but the application quits within a second or two of selecting "print." Apple has never replied, Epson hasn't a clue, and I am befuddled! Perhaps there is a guru out there who can see a bigger, better picture and understands how to interpret all of this. If so, it would be a terrific pleasure to get a handle on how to cure this problem, or to get some ammunition to fire in the right direction. I thank you in advance. FreeHand 10 on Mac OS X 10.4.11 unexpectedly quits after invoking a print command; the resulting crash report:

    Date/Time: 2010-04-20 14:23:18.371 -0700
    OS Version: 10.4.11 (Build 8S165)
    Report Version: 4
    Command: FreeHand 10
    Path: /Applications/Macromedia FreeHand 10.0.1.67/FreeHand 10
    Parent: WindowServer [1060]
    Version: 10.0.1.67 (10.0.1.67, Copyright © 1988-2002 Macromedia Inc. and its licensors. All rights reserved.)
    PID: 1217
    Thread: 0

    Exception: EXC_BAD_ACCESS (0x0001)
    Codes: KERN_INVALID_ADDRESS (0x0001) at 0x07d7e000

    Thread 0 Crashed:
    0 <<00000000>> 0xffff8a60 __memcpy + 704 (cpu_capabilities.h:189)
    1 FreeHand X 0x011d2994 0x1008000 + 1878420
    2 FreeHand X 0x01081da4 0x1008000 + 499108
    3 FreeHand X 0x010f5474 0x1008000 + 971892
    4 FreeHand X 0x010d0278 0x1008000 + 819832
    5 FreeHand X 0x010fa808 0x1008000 + 993288
    6 FreeHand X 0x01113608 0x1008000 + 1095176
    7 FreeHand X 0x01113748 0x1008000 + 1095496
    8 FreeHand X 0x01099ebc 0x1008000 + 597692
    9 FreeHand X 0x010fa358 0x1008000 + 992088
    10 FreeHand X 0x010fa170 0x1008000 + 991600
    11 FreeHand X 0x010f9830 0x1008000 + 989232
    12 FreeHand X 0x01098678 0x1008000 + 591480
    13 FreeHand X 0x010f7a5c 0x1008000 + 981596

    Thread 1:
    0 libSystem.B.dylib 0x90005dec syscall + 12
    1 com.apple.OpenTransport 0x9ad015a0 BSD_waitevent + 44
    2 com.apple.OpenTransport 0x9ad06360 CarbonSelectThreadFunc + 176
    3 libSystem.B.dylib 0x9002b908 _pthread_body + 96

    Thread 2:
    0 libSystem.B.dylib 0x9002bfc8 semaphore_wait_signal_trap + 8
    1 libSystem.B.dylib 0x90030aac pthread_cond_wait + 480
    2 com.apple.OpenTransport 0x9ad01e94 CarbonOperationThreadFunc + 80
    3 libSystem.B.dylib 0x9002b908 _pthread_body + 96

    Thread 3:
    0 libSystem.B.dylib 0x9002bfc8 semaphore_wait_signal_trap + 8
    1 libSystem.B.dylib 0x90030aac pthread_cond_wait + 480
    2 com.apple.OpenTransport 0x9ad11df0 CarbonInetOperThreadFunc + 80
    3 libSystem.B.dylib 0x9002b908 _pthread_body + 96

    Thread 4:
    0 libSystem.B.dylib 0x90053f88 semaphore_timedwait_signal_trap + 8
    1 libSystem.B.dylib 0x900707e8 pthread_cond_timedwait_relative_np + 556
    2 ...ple.CoreServices.CarbonCore 0x90bf9330 TSWaitOnSemaphoreCommon + 176
    3 ...ple.CoreServices.CarbonCore 0x90c012d0 TimerThread + 60
    4 libSystem.B.dylib 0x9002b908 _pthread_body + 96

    Thread 5:
    0 libSystem.B.dylib 0x9001f48c select + 12
    1 com.apple.CoreFoundation 0x907f1240 __CFSocketManager + 472
    2 libSystem.B.dylib 0x9002b908 _pthread_body + 96

    Thread 6:
    0 libSystem.B.dylib 0x9002188c access + 12
    1 ...e.print.framework.PrintCore 0x9169a620 CreateProxyURL(__CFURL const*) + 192
    2 ...e.print.framework.PrintCore 0x9169a4f8 CreateOriginalPrinterProxyURL() + 80
    3 ...e.print.framework.PrintCore 0x9169a034 CheckPrinterProxyVersion(OpaquePMPrinter*, __CFURL const*) + 192
    4
...e.print.framework.PrintCore 0x91699d94 PJCPrinterProxyCreateURL + 932 5 ...e.print.framework.PrintCore 0x916997bc PJCLaunchPrinterProxy(OpaquePMPrinter*, PMLaunchPCReason) + 32 6 ...e.print.framework.PrintCore 0x91699730 PJCLaunchPrinterProxyThread(void*) + 136 7 libSystem.B.dylib 0x9002b908 _pthread_body + 96 Thread 0 crashed with PPC Thread State 64: srr0: 0x00000000ffff8a60 srr1: 0x000000000200f030 vrsave: 0x00000000ff000000 cr: 0x24002244 xer: 0x0000000020000002 lr: 0x00000000011d2994 ctr: 0x00000000000003f6 r0: 0x0000000000000000 r1: 0x00000000bfffea60 r2: 0x0000000000000000 r3: 0x00000000083bb000 r4: 0x00000000083c0040 r5: 0x0000000000014d84 r6: 0x0000000000000010 r7: 0x0000000000000020 r8: 0x0000000000000030 r9: 0x0000000000000000 r10: 0x0000000000000060 r11: 0x0000000000000080 r12: 0x0000000007d7e000 r13: 0x0000000000000000 r14: 0x00000000005cbd26 r15: 0x0000000000000001 r16: 0x00000000017b03a0 r17: 0x0000000000000000 r18: 0x000000000068fa80 r19: 0x0000000000000001 r20: 0x0000000006c639c4 r21: 0x00000000006900f8 r22: 0x0000000006e09480 r23: 0x0000000006e0a250 r24: 0x0000000000000002 r25: 0x0000000000000000 r26: 0x00000000bfffed2c r27: 0x0000000006e05ce0 r28: 0x0000000000014d84 r29: 0x0000000000000000 r30: 0x0000000000014d84 r31: 0x00000000083bb000 Binary Images Description: 0x1000 - 0x2fff LaunchCFMApp /System/Library/Frameworks/Carbon.framework/Versions/A/Support/LaunchCFMApp 0x27f000 - 0x2ce3c7 CarbonLibpwpc PEF binary: CarbonLibpwpc 0x2ce3d0 - 0x2e66bd Apple;Carbon;Multimedia PEF binary: Apple;Carbon;Multimedia 0x2e7c00 - 0x2e998b Apple;Carbon;Networking PEF binary: Apple;Carbon;Networking 0x31ab10 - 0x31abb3 CFMPriv_QuickTime PEF binary: CFMPriv_QuickTime 0x31ac20 - 0x31ac97 CFMPriv_System PEF binary: CFMPriv_System 0x31af30 - 0x31b000 CFMPriv_CarbonSound PEF binary: CFMPriv_CarbonSound 0x31b080 - 0x31b153 CFMPriv_CommonPanels PEF binary: CFMPriv_CommonPanels 0x31b230 - 0x31b2eb CFMPriv_Help PEF binary: CFMPriv_Help 0x31b2f0 - 0x31b3ba CFMPriv_HIToolbox PEF binary: CFMPriv_HIToolbox 0x31b440 - 0x31b516 CFMPriv_HTMLRendering PEF binary: CFMPriv_HTMLRendering 0x31b550 - 0x31b602 CFMPriv_CoreFoundation PEF binary: CFMPriv_CoreFoundation 0x31b7f0 - 0x31b8a5 CFMPriv_DVComponentGlue PEF binary: CFMPriv_DVComponentGlue 0x31f760 - 0x31f833 CFMPriv_ImageCapture PEF binary: CFMPriv_ImageCapture 0x31f8c0 - 0x31f9a5 CFMPriv_NavigationServices PEF binary: CFMPriv_NavigationServices 0x31fa20 - 0x31faf6 CFMPriv_OpenScripting?MacBLib PEF binary: CFMPriv_OpenScripting?MacBLib 0x31fbd0 - 0x31fc8e CFMPriv_Print PEF binary: CFMPriv_Print 0x31fcb0 - 0x31fd7d CFMPriv_SecurityHI PEF binary: CFMPriv_SecurityHI 0x31fe00 - 0x31fee2 CFMPriv_SpeechRecognition PEF binary: CFMPriv_SpeechRecognition 0x31ff60 - 0x320033 CFMPriv_CarbonCore PEF binary: CFMPriv_CarbonCore 0x3200b0 - 0x320183 CFMPriv_OSServices PEF binary: CFMPriv_OSServices 0x320260 - 0x320322 CFMPriv_AE PEF binary: CFMPriv_AE 0x320330 - 0x3203f5 CFMPriv_ATS PEF binary: CFMPriv_ATS 0x320470 - 0x320547 CFMPriv_ColorSync PEF binary: CFMPriv_ColorSync 0x3205d0 - 0x3206b3 CFMPriv_FindByContent PEF binary: CFMPriv_FindByContent 0x320730 - 0x32080a CFMPriv_HIServices PEF binary: CFMPriv_HIServices 0x320880 - 0x320960 CFMPriv_LangAnalysis PEF binary: CFMPriv_LangAnalysis 0x3209f0 - 0x320ad6 CFMPriv_LaunchServices PEF binary: CFMPriv_LaunchServices 0x320bb0 - 0x320c87 CFMPriv_PrintCore PEF binary: CFMPriv_PrintCore 0x320c90 - 0x320d52 CFMPriv_QD PEF binary: CFMPriv_QD 0x320e50 - 0x320f39 CFMPriv_SpeechSynthesis PEF binary: CFMPriv_SpeechSynthesis 
0x405000 - 0x497f7a PowerPlant Shared Library PEF binary: PowerPlant Shared Library 0x498000 - 0x4f0012 Player PEF binary: Player 0x4f1000 - 0x54a4c0 KodakCMSC PEF binary: KodakCMSC 0x6bd000 - 0x6eca10 <Unknown disk fragment> PEF binary: <Unknown disk fragment> 0x7fc000 - 0x7fdb8b xRes Palette Importc PEF binary: xRes Palette Importc 0x1008000 - 0x17770a5 FreeHand X PEF binary: FreeHand X 0x17770b0 - 0x17a6fe7 MW_MSL.Carbon.Shlb PEF binary: MW_MSL.Carbon.Shlb 0x17fb000 - 0x17fff03 Smudge PEF binary: Smudge 0x5ce6000 - 0x5cecebe PICT Import Export68 PEF binary: PICT Import Export68 0x5ced000 - 0x5d24267 PNG Import Exportr68 PEF binary: PNG Import Exportr68 0x5d25000 - 0x5d30dde Release To Layerswpc PEF binary: Release To Layerswpc 0x5d31000 - 0x5d37fd2 Roughen PEF binary: Roughen 0x5d38000 - 0x5d459ae Shadow PEF binary: Shadow 0x5d46000 - 0x5d4b0de Spiral PEF binary: Spiral 0x5d4c000 - 0x5d57f07 Targa Import Export8 PEF binary: Targa Import Export8 0x5d58000 - 0x5d8d959 TIFF Import Export68 PEF binary: TIFF Import Export68 0x5d93000 - 0x5da0f65 Color Utilities PEF binary: Color Utilities 0x5f62000 - 0x5f6e795 Mirror PEF binary: Mirror 0x5f6f000 - 0x5fbd656 HTML Export PEF binary: HTML Export 0x5fc8000 - 0x5fd442f Graphic Hose PEF binary: Graphic Hose 0x5fd5000 - 0x5fe4b5a BMP Import Exportr68 PEF binary: BMP Import Exportr68 0x5fe5000 - 0x60342d6 PDF Export PEF binary: PDF Export 0x6041000 - 0x6042f44 Fractalizej@ PEF binary: Fractalizej@ 0x6043000 - 0x6075214 Chart Tool™ PEF binary: Chart Tool™ 0x6076000 - 0x607d46d Bend PEF binary: Bend 0x607e000 - 0x60cda7b PDF Import PEF binary: PDF Import 0x60dc000 - 0x60e38f2 Photoshop ImportChartCursor PEF binary: Photoshop ImportChartCursor 0x60e4000 - 0x60eb9b1 3D Rotationp PEF binary: 3D Rotationp 0x60ec000 - 0x611b458 JPEG Import ExportANEL PEF binary: JPEG Import ExportANEL 0x611c000 - 0x613d89f GIF Import Export PEF binary: GIF Import Export 0x613e000 - 0x616d7f7 Flash Export PEF binary: Flash Export 0x616e000 - 0x6175d75 Fisheye Lens PEF binary: Fisheye Lens 0x6176000 - 0x6182343 IPTC File Info PEF binary: IPTC File Info 0x6184000 - 0x6193790 PEF binary: 0x6194000 - 0x61965e5 Photoshop Palette Import PEF binary: Photoshop Palette Import 0x6197000 - 0x619c5a4 Add PointsZ PEF binary: Add PointsZ 0x619d000 - 0x61ad92b Emboss PEF binary: Emboss 0x61ae000 - 0x61be6e1 AppleScript™ Xtrawpc PEF binary: AppleScript™ Xtrawpc 0x61bf000 - 0x61d16de Navigation PEF binary: Navigation 0x61d2000 - 0x61ff94e CorelDRAW 7-8 Import PEF binary: CorelDRAW 7-8 Import 0x620a000 - 0x620d7f1 Trap PEF binary: Trap 0x620e000 - 0x62149d4 Import RGB Color Table PEF binary: Import RGB Color Table 0x6215000 - 0x6217dfe Arc PEF binary: Arc 0x6218000 - 0x62211e3 Delete Empty Text Blocks PEF binary: Delete Empty Text Blocks 0x6222000 - 0x624c8da MIX Services PEF binary: MIX Services 0x7d0b000 - 0x7d37fff com.apple.print.framework.Print.Private 4.6 (163.10) /System/Library/Frameworks/Carbon.framework/Versions/A/Frameworks/Print.framework/Versions/Current/Plugins/PrintCocoaUI.bundle/Contents/MacOS/PrintCocoaUI 0x7dbf000 - 0x7ddffff com.apple.print.PrintingCocoaPDEs 4.6 (163.10) /System/Library/Frameworks/Carbon.framework/Versions/A/Frameworks/Print.framework/Versions/A/Plugins/PrintingCocoaPDEs.bundle/Contents/MacOS/PrintingCocoaPDEs 0x7f05000 - 0x7f39fff com.epson.ijprinter.pde.PrintSetting.EP0827MSA 6.36 /Library/Printers/EPSON/InkjetPrinter/PrintingModule/EP0827MSA_Core.plugin/Contents/PDEs/PrintSetting.plugin/Contents/MacOS/PrintSetting 0x7f49000 - 0x8044fff 
com.epson.ijprinter.IJPFoundation 6.54 /Library/Printers/EPSON/InkjetPrinter/Libraries/IJPFoundation.framework/Versions/A/IJPFoundation 0x809a000 - 0x80cdfff com.epson.ijprinter.pde.ColorManagement.EP0827MSA 6.36 /Library/Printers/EPSON/InkjetPrinter/PrintingModule/EP0827MSA_Core.plugin/Contents/PDEs/ColorManagement.plugin/Contents/MacOS/ColorManagement 0x80dd000 - 0x8110fff com.epson.ijprinter.pde.ExpandMargin.EP0827MSA 6.36 /Library/Printers/EPSON/InkjetPrinter/PrintingModule/EP0827MSA_Core.plugin/Contents/PDEs/ExpandMargin.plugin/Contents/MacOS/ExpandMargin 0x8120000 - 0x8153fff com.epson.ijprinter.pde.ExtensionSetting.EP0827MSA 6.36 /Library/Printers/EPSON/InkjetPrinter/PrintingModule/EP0827MSA_Core.plugin/Contents/PDEs/ExtensionSetting.plugin/Contents/MacOS/ExtensionSetting 0x8163000 - 0x8196fff com.epson.ijprinter.pde.DoubleSidePrint.EP0827MSA 6.36 /Library/Printers/EPSON/InkjetPrinter/PrintingModule/EP0827MSA_Core.plugin/Contents/PDEs/DoubleSidePrint.plugin/Contents/MacOS/DoubleSidePrint 0x81a6000 - 0x81bffff com.apple.print.PrintingTiogaPDEs 4.6 (163.10) /System/Library/Frameworks/Carbon.framework/Frameworks/Print.framework/Versions/A/Plugins/PrintingTiogaPDEs.bundle/Contents/MacOS/PrintingTiogaPDEs 0x838f000 - 0x8397fff com.apple.print.converter.plugin 4.5 (163.8) /System/Library/Printers/CVs/Converter.plugin/Contents/MacOS/Converter 0x78e00000 - 0x78e07fff libLW8Utils.dylib /System/Library/Printers/Libraries/libLW8Utils.dylib 0x79200000 - 0x7923ffff libLW8Converter.dylib /System/Library/Printers/Libraries/libLW8Converter.dylib 0x8fe00000 - 0x8fe52fff dyld 46.16 /usr/lib/dyld 0x90000000 - 0x901bcfff libSystem.B.dylib /usr/lib/libSystem.B.dylib 0x90214000 - 0x90219fff libmathCommon.A.dylib /usr/lib/system/libmathCommon.A.dylib 0x9021b000 - 0x90268fff com.apple.CoreText 1.0.4 (???) /System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/CoreText.framework/Versions/A/CoreText 0x90293000 - 0x90344fff ATS /System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/ATS.framework/Versions/A/ATS 0x90373000 - 0x9072efff com.apple.CoreGraphics 1.258.85 (???) /System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/CoreGraphics.framework/Versions/A/CoreGraphics 0x907bb000 - 0x90895fff com.apple.CoreFoundation 6.4.11 (368.35) /System/Library/Frameworks/CoreFoundation.framework/Versions/A/CoreFoundation 0x908de000 - 0x908defff com.apple.CoreServices 10.4 (???) 
/System/Library/Frameworks/CoreServices.framework/Versions/A/CoreServices 0x908e0000 - 0x909e2fff libicucore.A.dylib /usr/lib/libicucore.A.dylib 0x90a3c000 - 0x90ac0fff libobjc.A.dylib /usr/lib/libobjc.A.dylib 0x90aea000 - 0x90b5cfff IOKit /System/Library/Frameworks/IOKit.framework/Versions/A/IOKit 0x90b72000 - 0x90b84fff libauto.dylib /usr/lib/libauto.dylib 0x90b8b000 - 0x90e62fff com.apple.CoreServices.CarbonCore 681.19 (681.21) /System/Library/Frameworks/CoreServices.framework/Versions/A/Frameworks/CarbonCore.framework/Versions/A/CarbonCore 0x90ec8000 - 0x90f48fff com.apple.CoreServices.OSServices 4.1 /System/Library/Frameworks/CoreServices.framework/Versions/A/Frameworks/OSServices.framework/Versions/A/OSServices 0x90f92000 - 0x90fd4fff com.apple.CFNetwork 129.24 /System/Library/Frameworks/CoreServices.framework/Versions/A/Frameworks/CFNetwork.framework/Versions/A/CFNetwork 0x90fe9000 - 0x91001fff com.apple.WebServices 1.1.2 (1.1.0) /System/Library/Frameworks/CoreServices.framework/Versions/A/Frameworks/WebServicesCore.framework/Versions/A/WebServicesCore 0x91011000 - 0x91092fff com.apple.SearchKit 1.0.8 /System/Library/Frameworks/CoreServices.framework/Versions/A/Frameworks/SearchKit.framework/Versions/A/SearchKit 0x910d8000 - 0x91101fff com.apple.Metadata 10.4.4 (121.36) /System/Library/Frameworks/CoreServices.framework/Versions/A/Frameworks/Metadata.framework/Versions/A/Metadata 0x91112000 - 0x91120fff libz.1.dylib /usr/lib/libz.1.dylib 0x91123000 - 0x912defff com.apple.security 4.6 (29770) /System/Library/Frameworks/Security.framework/Versions/A/Security 0x913dd000 - 0x913e6fff com.apple.DiskArbitration 2.1.2 /System/Library/Frameworks/DiskArbitration.framework/Versions/A/DiskArbitration 0x913ed000 - 0x913f5fff libbsm.dylib /usr/lib/libbsm.dylib 0x913f9000 - 0x91421fff com.apple.SystemConfiguration 1.8.3 /System/Library/Frameworks/SystemConfiguration.framework/Versions/A/SystemConfiguration 0x91434000 - 0x9143ffff libgcc_s.1.dylib /usr/lib/libgcc_s.1.dylib 0x91444000 - 0x914bffff com.apple.audio.CoreAudio 3.0.5 /System/Library/Frameworks/CoreAudio.framework/Versions/A/CoreAudio 0x914fc000 - 0x914fcfff com.apple.ApplicationServices 10.4 (???) /System/Library/Frameworks/ApplicationServices.framework/Versions/A/ApplicationServices 0x914fe000 - 0x91536fff com.apple.AE 312.2 /System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/AE.framework/Versions/A/AE 0x91551000 - 0x91623fff com.apple.ColorSync 4.4.13 /System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/ColorSync.framework/Versions/A/ColorSync 0x91676000 - 0x91707fff com.apple.print.framework.PrintCore 4.6 (177.13) /System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/PrintCore.framework/Versions/A/PrintCore 0x9174e000 - 0x91805fff com.apple.QD 3.10.28 (???) /System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/QD.framework/Versions/A/QD 0x91842000 - 0x918a0fff com.apple.HIServices 1.5.3 (???) 
/System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/HIServices.framework/Versions/A/HIServices 0x918cf000 - 0x918f0fff com.apple.LangAnalysis 1.6.1 /System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/LangAnalysis.framework/Versions/A/LangAnalysis 0x91904000 - 0x91929fff com.apple.FindByContent 1.5 /System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/FindByContent.framework/Versions/A/FindByContent 0x9193c000 - 0x9197efff com.apple.LaunchServices 183.1 /System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/LaunchServices.framework/Versions/A/LaunchServices 0x9199a000 - 0x919aefff com.apple.speech.synthesis.framework 3.3 /System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/SpeechSynthesis.framework/Versions/A/SpeechSynthesis 0x919bc000 - 0x91a02fff com.apple.ImageIO.framework 1.5.9 /System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/ImageIO.framework/Versions/A/ImageIO 0x91a19000 - 0x91ae0fff libcrypto.0.9.7.dylib /usr/lib/libcrypto.0.9.7.dylib 0x91b2e000 - 0x91b43fff libcups.2.dylib /usr/lib/libcups.2.dylib 0x91b48000 - 0x91b66fff libJPEG.dylib /System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/ImageIO.framework/Versions/A/Resources/libJPEG.dylib 0x91b6c000 - 0x91c23fff libJP2.dylib /System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/ImageIO.framework/Versions/A/Resources/libJP2.dylib 0x91c72000 - 0x91c76fff libGIF.dylib /System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/ImageIO.framework/Versions/A/Resources/libGIF.dylib 0x91c78000 - 0x91ce2fff libRaw.dylib /System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/ImageIO.framework/Versions/A/Resources/libRaw.dylib 0x91ce7000 - 0x91d02fff libPng.dylib /System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/ImageIO.framework/Versions/A/Resources/libPng.dylib 0x91d07000 - 0x91d0afff libRadiance.dylib /System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/ImageIO.framework/Versions/A/Resources/libRadiance.dylib 0x91d0c000 - 0x91deafff libxml2.2.dylib /usr/lib/libxml2.2.dylib 0x91e0a000 - 0x91e48fff libTIFF.dylib /System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/ImageIO.framework/Versions/A/Resources/libTIFF.dylib 0x91e4f000 - 0x91e4ffff com.apple.Accelerate 1.2.2 (Accelerate 1.2.2) /System/Library/Frameworks/Accelerate.framework/Versions/A/Accelerate 0x91e51000 - 0x91f36fff com.apple.vImage 2.4 /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vImage.framework/Versions/A/vImage 0x91f3e000 - 0x91f5dfff com.apple.Accelerate.vecLib 3.2.2 (vecLib 3.2.2) /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/vecLib 0x91fc9000 - 0x92037fff libvMisc.dylib /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/libvMisc.dylib 0x92042000 - 0x920d7fff libvDSP.dylib /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/libvDSP.dylib 0x920f1000 - 0x92679fff libBLAS.dylib /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/libBLAS.dylib 0x926ac000 - 0x929d7fff libLAPACK.dylib /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/libLAPACK.dylib 0x92a07000 - 0x92af5fff libiconv.2.dylib 
/usr/lib/libiconv.2.dylib 0x92af8000 - 0x92b80fff com.apple.DesktopServices 1.3.7 /System/Library/PrivateFrameworks/DesktopServicesPriv.framework/Versions/A/DesktopServicesPriv 0x92bc1000 - 0x92df4fff com.apple.Foundation 6.4.12 (567.42) /System/Library/Frameworks/Foundation.framework/Versions/C/Foundation 0x92f27000 - 0x92f45fff libGL.dylib /System/Library/Frameworks/OpenGL.framework/Versions/A/Libraries/libGL.dylib 0x92f50000 - 0x92faafff libGLU.dylib /System/Library/Frameworks/OpenGL.framework/Versions/A/Libraries/libGLU.dylib 0x92fc8000 - 0x92fc8fff com.apple.Carbon 10.4 (???) /System/Library/Frameworks/Carbon.framework/Versions/A/Carbon 0x92fca000 - 0x92fdefff com.apple.ImageCapture 3.0 /System/Library/Frameworks/Carbon.framework/Versions/A/Frameworks/ImageCapture.framework/Versions/A/ImageCapture 0x92ff6000 - 0x93006fff com.apple.speech.recognition.framework 3.4 /System/Library/Frameworks/Carbon.framework/Versions/A/Frameworks/SpeechRecognition.framework/Versions/A/SpeechRecognition 0x93012000 - 0x93027fff com.apple.securityhi 2.0 (203) /System/Library/Frameworks/Carbon.framework/Versions/A/Frameworks/SecurityHI.framework/Versions/A/SecurityHI 0x93039000 - 0x930c0fff com.apple.ink.framework 101.2 (69) /System/Library/Frameworks/Carbon.framework/Versions/A/Frameworks/Ink.framework/Versions/A/Ink 0x930d4000 - 0x930dffff com.apple.help 1.0.3 (32) /System/Library/Frameworks/Carbon.framework/Versions/A/Frameworks/Help.framework/Versions/A/Help 0x930e9000 - 0x93117fff com.apple.openscripting 1.2.7 (???) /System/Library/Frameworks/Carbon.framework/Versions/A/Frameworks/OpenScripting.framework/Versions/A/OpenScripting 0x93131000 - 0x93140fff com.apple.print.framework.Print 5.2 (192.4) /System/Library/Frameworks/Carbon.framework/Versions/A/Frameworks/Print.framework/Versions/A/Print 0x9314c000 - 0x931b2fff com.apple.htmlrendering 1.1.2 /System/Library/Frameworks/Carbon.framework/Versions/A/Frameworks/HTMLRendering.framework/Versions/A/HTMLRendering 0x931e3000 - 0x93232fff com.apple.NavigationServices 3.4.4 (3.4.3) /System/Library/Frameworks/Carbon.framework/Versions/A/Frameworks/NavigationServices.framework/Versions/A/NavigationServices 0x93260000 - 0x9327dfff com.apple.audio.SoundManager 3.9 /System/Library/Frameworks/Carbon.framework/Versions/A/Frameworks/CarbonSound.framework/Versions/A/CarbonSound 0x9328f000 - 0x9329cfff com.apple.CommonPanels 1.2.2 (73) /System/Library/Frameworks/Carbon.framework/Versions/A/Frameworks/CommonPanels.framework/Versions/A/CommonPanels 0x932a5000 - 0x935b3fff com.apple.HIToolbox 1.4.10 (???) /System/Library/Frameworks/Carbon.framework/Versions/A/Frameworks/HIToolbox.framework/Versions/A/HIToolbox 0x93703000 - 0x9370ffff com.apple.opengl 1.4.7 /System/Library/Frameworks/OpenGL.framework/Versions/A/OpenGL 0x93714000 - 0x93734fff com.apple.DirectoryService.Framework 3.3 /System/Library/Frameworks/DirectoryService.framework/Versions/A/DirectoryService 0x93787000 - 0x93787fff com.apple.Cocoa 6.4 (???) 
/System/Library/Frameworks/Cocoa.framework/Versions/A/Cocoa 0x93789000 - 0x93dbcfff com.apple.AppKit 6.4.10 (824.48) /System/Library/Frameworks/AppKit.framework/Versions/C/AppKit 0x94149000 - 0x941bbfff com.apple.CoreData 91 (92.1) /System/Library/Frameworks/CoreData.framework/Versions/A/CoreData 0x941f4000 - 0x942b9fff com.apple.audio.toolbox.AudioToolbox 1.4.7 /System/Library/Frameworks/AudioToolbox.framework/Versions/A/AudioToolbox 0x9430c000 - 0x9430cfff com.apple.audio.units.AudioUnit 1.4 /System/Library/Frameworks/AudioUnit.framework/Versions/A/AudioUnit 0x9430e000 - 0x944cefff com.apple.QuartzCore 1.4.12 /System/Library/Frameworks/QuartzCore.framework/Versions/A/QuartzCore 0x94518000 - 0x94555fff libsqlite3.0.dylib /usr/lib/libsqlite3.0.dylib 0x9455d000 - 0x945adfff libGLImage.dylib /System/Library/Frameworks/OpenGL.framework/Versions/A/Libraries/libGLImage.dylib 0x945b6000 - 0x945cffff com.apple.CoreVideo 1.4.2 /System/Library/Frameworks/CoreVideo.framework/Versions/A/CoreVideo 0x9477d000 - 0x9478cfff libCGATS.A.dylib /System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/CoreGraphics.framework/Versions/A/Resources/libCGATS.A.dylib 0x94794000 - 0x947a1fff libCSync.A.dylib /System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/CoreGraphics.framework/Versions/A/Resources/libCSync.A.dylib 0x947a7000 - 0x947c6fff libPDFRIP.A.dylib /System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/CoreGraphics.framework/Versions/A/Resources/libPDFRIP.A.dylib 0x947e7000 - 0x94800fff libRIP.A.dylib /System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/CoreGraphics.framework/Versions/A/Resources/libRIP.A.dylib 0x94807000 - 0x94b3afff com.apple.QuickTime 7.6.4 (1327.73) /System/Library/Frameworks/QuickTime.framework/Versions/A/QuickTime 0x94c22000 - 0x94c93fff libstdc++.6.dylib /usr/lib/libstdc++.6.dylib 0x94e09000 - 0x94f39fff com.apple.AddressBook.framework 4.0.6 (490) /System/Library/Frameworks/AddressBook.framework/Versions/A/AddressBook 0x94fcc000 - 0x94fdbfff com.apple.DSObjCWrappers.Framework 1.1 /System/Library/PrivateFrameworks/DSObjCWrappers.framework/Versions/A/DSObjCWrappers 0x94fe3000 - 0x95010fff com.apple.LDAPFramework 1.4.1 (69.0.1) /System/Library/Frameworks/LDAP.framework/Versions/A/LDAP 0x95017000 - 0x95027fff libsasl2.2.dylib /usr/lib/libsasl2.2.dylib 0x9502b000 - 0x9505afff libssl.0.9.7.dylib /usr/lib/libssl.0.9.7.dylib 0x9506a000 - 0x95087fff libresolv.9.dylib /usr/lib/libresolv.9.dylib 0x9acff000 - 0x9ad1dfff com.apple.OpenTransport 2.0 /System/Library/PrivateFrameworks/OpenTransport.framework/OpenTransport 0x9ad98000 - 0x9ad99fff com.apple.iokit.dvcomponentglue 1.7.9 /System/Library/Frameworks/DVComponentGlue.framework/Versions/A/DVComponentGlue 0x9b1db000 - 0x9b1f2fff libCFilter.A.dylib /System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/CoreGraphics.framework/Versions/A/Resources/libCFilter.A.dylib 0x9c69b000 - 0x9c6bdfff libmx.A.dylib /usr/lib/libmx.A.dylib 0xeab00000 - 0xeab25fff libConverter.dylib /System/Library/Printers/Libraries/libConverter.dylib Model: PowerMac3,1, BootROM 4.2.8f1, 1 processors, PowerPC G4 (3.3), 1.3 GHz, 2 GB Graphics: ATI Radeon 7500, ATY,RV200, AGP, 32 MB Memory Module: DIMM0/J21, 512 MB, SDRAM, PC133-333 Memory Module: DIMM1/J22, 512 MB, SDRAM, PC133-333 Memory Module: DIMM2/J23, 512 MB, SDRAM, PC133-333 Memory Module: DIMM3/J24, 512 MB, SDRAM, PC133-333 Modem: Spring, UCJ, V.90, 3.0F, APPLE VERSION 0001, 4/7/1999 Network 
Service: Built-in Ethernet, Ethernet, en0 PCI Card: SeriTek/1V2E2 v.5.1.3,11/22/05, 23:47:18, ata, SLOT-B PCI Card: pci-bridge, pci, SLOT-C PCI Card: firewire, ieee1394, 2x8 PCI Card: usb, usb, 2x9 PCI Card: usb, usb, 2x9 PCI Card: pcie55,2928, 2x9 PCI Card: ATTO,ExpressPCIPro, scsi, SLOT-D Parallel ATA Device: MATSHITADVD-ROM SR-8585 Parallel ATA Device: IOMEGA ZIP 100 ATAPI USB Device: Hub, Up to 12 Mb/sec, 500 mA USB Device: Hub, Up to 12 Mb/sec, 500 mA USB Device: USB2.0 Hub, Up to 12 Mb/sec, 500 mA USB Device: iMic USB audio system, Griffin Technology, Inc, Up to 12 Mb/sec, 500 mA USB Device: USB Storage Device, Generic, Up to 12 Mb/sec, 500 mA USB Device: USB2.0 MFP, EPSON, Up to 12 Mb/sec, 500 mA USB Device: DYMO LabelWriter Twin Turbo, DYMO, Up to 12 Mb/sec, 500 mA USB Device: USB 2.0 CD + HDD, DMI, Up to 12 Mb/sec, 500 mA USB Device: USB2.0 Hub, Up to 12 Mb/sec, 500 mA USB Device: USB2.0 Hub, Up to 12 Mb/sec, 500 mA USB Device: iMate, USB To ADB Adaptor, Griffin Technology, Inc., Up to 1.5 Mb/sec, 500 mA USB Device: Hub in Apple Pro Keyboard, Alps Electric, Up to 12 Mb/sec, 500 mA USB Device: Griffin PowerMate, Griffin Technology, Inc., Up to 1.5 Mb/sec, 100 mA


  • Check to see if file transfer is complete

    - by Cymon
    We have a daily job that processes files delivered from an external source. The process usually runs fine without any issues but every once in a while we have an issue of attempting to process a file that is not completely transferred. The external source SCPs these files from a UNIX server to our Windows server. From there we try to process the files. Is there a way to check to see if a file is still being transferred? Does UNIX put a lock on a file while SCPing it that we could check on the Windows side?
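    SCP itself does not take any lock that Windows-side code can reliably check for; depending on the SSH server, the receiving process may or may not hold the file open exclusively while writing. Two portable approaches are to have the sender upload to a temporary name and rename when finished (a rename is effectively atomic), or to treat a file as complete only once its size has stopped changing. A minimal polling sketch of the second idea, assuming a steadily writing sender (the path, poll count and interval are illustrative):

    import os
    import time

    def transfer_looks_complete(path, stable_checks=3, interval_secs=5):
        """Return once the file size has been unchanged for several
        consecutive polls; assumes the sender writes steadily."""
        last_size = -1
        stable = 0
        while stable < stable_checks:
            size = os.path.getsize(path)
            if size == last_size:
                stable += 1
            else:
                stable = 0
                last_size = size
            time.sleep(interval_secs)
        return True

    if transfer_looks_complete(r"C:\inbound\daily_feed.dat"):
        print("size stable - safe to process")

    One caveat: a stalled transfer also looks "stable", so the rename-on-completion convention is the more robust fix if you can change the sender.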


  • Error importing large MySQL dump file which includes binary BLOBs in Windows

    - by Daniel Magliola
    I'm trying to import a MySQL dump file, which I got from my hosting company, into my Windows dev machine, and I'm running into problems. I'm importing this from the command line, and I'm getting a very weird error:

    ERROR 2005 (HY000) at line 3118: Unknown MySQL server host '+?*á±dÆ-N+Æ·h^ye"p-i+ Z+-$?P+Y.8+|?+l8/l¦¦î7æ¦X¦XE.ºG[ ;-ï?éµ?º+¦¦].?+f9d릦'+ÿG?-0à¡úè?-?ù??¥'+NÑ' (11004)

    I'm attaching the screenshot because I'm assuming the binary data will get lost... I'm not exactly sure what the problem is, but two potential issues are the size of the file (2 GB), which is not insanely large but not trivially small either, and the fact that many of these tables have JPG images in them (which is, for the most part, why the file is 2 GB). Also, the dump was taken on a Linux machine and I'm importing it into Windows; I'm not sure if that could add to the problems (I understand it shouldn't). Now, that binary garbage is why I think the images in the file might be a problem, but I've been able to import similar dumps from the same hosting company in the past, so I'm not sure what the issue might be. Also, trying to look into this file (and line 3118 in particular) is kind of impossible given its size (I'm not really handy with Linux command-line tools like grep, sed, etc.). The file might be corrupted, but I'm not exactly sure how to check it. What I downloaded was a .gz file, which I "tested" with WinRAR and it says it looks OK (I'm assuming gz has some kind of CRC). If you can think of a better way to test it, I'd love to try that. Any ideas what could be going on or how to get past this error? I'm not very attached to the data in particular, since I just want this as a copy for dev, so if I have to lose a few records, I'm fine with that, as long as the schema remains perfectly sound. Thanks! Daniel
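    Since gzip stores a CRC-32 of the uncompressed data that is verified at end-of-stream, streaming the archive through a decompressor is a reasonable integrity test before suspecting the dump itself; if it reads to the end without an exception, the download was intact. A minimal sketch (filenames are illustrative); if the archive checks out, a common next step is to rule out client-side limits such as max_allowed_packet when importing a dump full of BLOBs:

    import gzip
    import shutil

    # Stream-decompress the dump; the gzip module verifies the stored
    # CRC-32 at end-of-stream and raises an exception on corruption.
    with gzip.open("dump.sql.gz", "rb") as src, open("dump.sql", "wb") as dst:
        shutil.copyfileobj(src, dst, length=1024 * 1024)
    print("gzip stream OK - the archive itself is not corrupted")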


  • PHP File Downloading Questions

    - by nsearle
    Hey all! I am currently running into some problems with users downloading a file stored on my server. I have code set up to auto-download a file once the user hits the download button. It works for all files, but when the size gets larger than 30 MB it has issues. Is there a limit on user downloads? Also, I have supplied my example code and am wondering if there is a better practice than using the PHP function 'file_get_contents'. Thank you all for the help!

    $path = $_SERVER['DOCUMENT_ROOT'] . '../path/to/file/';
    $filename = 'filename.zip';
    $filesize = filesize($path . $filename);
    @header("Content-type: application/zip");
    @header("Content-Disposition: attachment; filename=$filename");
    @header("Content-Length: $filesize");
    echo file_get_contents($path . $filename);


  • Using Multiple File Handles for Single File

    - by Ryan Rosario
    I have an O(n^2) operation that requires me to read line i from a file, and then compare line i to every line in the file. This repeats for all i. I wrote the following code to do this with 2 file handles, but it does not yield the result I am looking for. I imagine this is a simple error on my part.

    IN1 = open("myfile.dat","r")
    IN2 = open("myfile.dat","r")
    for line1 in IN1:
        for line2 in IN2:
            print line1.strip(), line2.strip()
    IN1.close()
    IN2.close()

    The result:

    Hello Hello
    Hello World
    Hello This
    Hello is
    Hello an
    Hello Example
    Hello of
    Hello Using
    Hello Two
    Hello File
    Hello Pointers
    Hello to
    Hello Read
    Hello One
    Hello File

    The output should contain 15^2 lines.
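    The pairing collapses because the inner handle reaches end-of-file during the first pass of the outer loop and is never rewound, so every later inner loop iterates over nothing. Rewinding the second handle before each inner pass (or reopening the file each time) yields the full 15^2 lines. A minimal corrected sketch, shown with Python 3's print function; the same seek(0) works in the Python 2 code above:

    with open("myfile.dat") as in1, open("myfile.dat") as in2:
        for line1 in in1:
            in2.seek(0)  # rewind the inner handle so it can be read again
            for line2 in in2:
                print(line1.strip(), line2.strip())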


  • Upload File to Windows Azure Blob in Chunks through ASP.NET MVC, JavaScript and HTML5

    - by Shaun
    Originally posted on: http://geekswithblogs.net/shaunxu/archive/2013/07/01/upload-file-to-windows-azure-blob-in-chunks-through-asp.net.aspx

    Many people are using Windows Azure Blob Storage to store their data in the cloud. Blob storage provides 99.9% availability with an easy-to-use API through the .NET SDK and HTTP REST. For example, we can store JavaScript files, images, and documents in blob storage when we are building an ASP.NET web application on a Web Role in Windows Azure. Or we can store our VHD files in blobs and mount them as hard drives in our cloud service. If you are familiar with Windows Azure, you should know that there are two kinds of blob: page blobs and block blobs. The page blob is optimized for random read and write, which is very useful when you need to store VHD files. The block blob is optimized for sequential/chunked read and write, which has more common usage. Since we can upload a block blob in blocks through BlockBlob.PutBlock and then commit them as a whole blob by invoking BlockBlob.PutBlockList, it is very powerful for uploading large files: we can upload blocks in parallel and provide a pause-resume feature. There are many documents, articles and blog posts that describe how to upload a block blob. Most of them focus on the server side, that is, once you have received a big file, stream or binaries, how to upload them into blob storage in blocks through the .NET SDK. But the problem is, how can we upload these large files from the client side, for example, a browser? This question came to me when I was working with a Chinese customer to help them build a network disk product on top of Azure. The end users upload their files from the web portal, and then the files are stored in blob storage from the Web Role. My goal was to find the best way to transfer the file from the client (the end user's machine) to the server (the Web Role) through the browser. In this post I will demonstrate and describe what I did to upload a large file in chunks at high speed, and save it as blocks into Windows Azure Blob Storage.

    Traditional Upload, Works with Limitations

    The simplest way to implement this requirement is to create a web page with a form that contains a file input element and a submit button.

    @using (Html.BeginForm("About", "Index", FormMethod.Post, new { enctype = "multipart/form-data" }))
    {
        <input type="file" name="file" />
        <input type="submit" value="upload" />
    }

    And then in the backend controller, we retrieve the whole content of this file and upload it into blob storage through the .NET SDK. We can split the file into blocks, upload them in parallel and commit. The code has been well blogged in the community.
1: [HttpPost] 2: public ActionResult About(HttpPostedFileBase file) 3: { 4: var container = _client.GetContainerReference("test"); 5: container.CreateIfNotExists(); 6: var blob = container.GetBlockBlobReference(file.FileName); 7: var blockDataList = new Dictionary<string, byte[]>(); 8: using (var stream = file.InputStream) 9: { 10: var blockSizeInKB = 1024; 11: var offset = 0; 12: var index = 0; 13: while (offset < stream.Length) 14: { 15: var readLength = Math.Min(1024 * blockSizeInKB, (int)stream.Length - offset); 16: var blockData = new byte[readLength]; 17: offset += stream.Read(blockData, 0, readLength); 18: blockDataList.Add(Convert.ToBase64String(BitConverter.GetBytes(index)), blockData); 19:  20: index++; 21: } 22: } 23:  24: Parallel.ForEach(blockDataList, (bi) => 25: { 26: blob.PutBlock(bi.Key, new MemoryStream(bi.Value), null); 27: }); 28: blob.PutBlockList(blockDataList.Select(b => b.Key).ToArray()); 29:  30: return RedirectToAction("About"); 31: } This works perfect if we selected an image, a music or a small video to upload. But if I selected a large file, let’s say a 6GB HD-movie, after upload for about few minutes the page will be shown as below and the upload will be terminated. In ASP.NET there is a limitation of request length and the maximized request length is defined in the web.config file. It’s a number which less than about 4GB. So if we want to upload a really big file, we cannot simply implement in this way. Also, in Windows Azure, a cloud service network load balancer will terminate the connection if exceed the timeout period. From my test the timeout looks like 2 - 3 minutes. Hence, when we need to upload a large file we cannot just use the basic HTML elements. Besides the limitation mentioned above, the simple HTML file upload cannot provide rich upload experience such as chunk upload, pause and pause-resume. So we need to find a better way to upload large file from the client to the server.   Upload in Chunks through HTML5 and JavaScript In order to break those limitation mentioned above we will try to upload the large file in chunks. This takes some benefit to us such as - No request size limitation: Since we upload in chunks, we can define the request size for each chunks regardless how big the entire file is. - No timeout problem: The size of chunks are controlled by us, which means we should be able to make sure request for each chunk upload will not exceed the timeout period of both ASP.NET and Windows Azure load balancer. It was a big challenge to upload big file in chunks until we have HTML5. There are some new features and improvements introduced in HTML5 and we will use them to implement our solution.   In HTML5, the File interface had been improved with a new method called “slice”. It can be used to read part of the file by specifying the start byte index and the end byte index. For example if the entire file was 1024 bytes, file.slice(512, 768) will read the part of this file from the 512nd byte to 768th byte, and return a new object of interface called "Blob”, which you can treat as an array of bytes. In fact,  a Blob object represents a file-like object of immutable, raw data. The File interface is based on Blob, inheriting blob functionality and expanding it to support files on the user's system. For more information about the Blob please refer here. File and Blob is very useful to implement the chunk upload. 
We will use File interface to represent the file the user selected from the browser and then use File.slice to read the file in chunks in the size we wanted. For example, if we wanted to upload a 10MB file with 512KB chunks, then we can read it in 512KB blobs by using File.slice in a loop.   Assuming we have a web page as below. User can select a file, an input box to specify the block size in KB and a button to start upload. 1: <div> 2: <input type="file" id="upload_files" name="files[]" /><br /> 3: Block Size: <input type="number" id="block_size" value="512" name="block_size" />KB<br /> 4: <input type="button" id="upload_button_blob" name="upload" value="upload (blob)" /> 5: </div> Then we can have the JavaScript function to upload the file in chunks when user clicked the button. 1: <script type="text/javascript"> 1: 2: $(function () { 3: $("#upload_button_blob").click(function () { 4: }); 5: });</script> Firstly we need to ensure the client browser supports the interfaces we are going to use. Just try to invoke the File, Blob and FormData from the “window” object. If any of them is “undefined” the condition result will be “false” which means your browser doesn’t support these premium feature and it’s time for you to get your browser updated. FormData is another new feature we are going to use in the future. It could generate a temporary form for us. We will use this interface to create a form with chunk and associated metadata when invoked the service through ajax. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: if (window.File && window.Blob && window.FormData) { 4: alert("Your brwoser is awesome, let's rock!"); 5: } 6: else { 7: alert("Oh man plz update to a modern browser before try is cool stuff out."); 8: return; 9: } 10: }); Each browser supports these interfaces by their own implementation and currently the Blob, File and File.slice are supported by Chrome 21, FireFox 13, IE 10, Opera 12 and Safari 5.1 or higher. After that we worked on the files the user selected one by one since in HTML5, user can select multiple files in one file input box. 1: var files = $("#upload_files")[0].files; 2: for (var i = 0; i < files.length; i++) { 3: var file = files[i]; 4: var fileSize = file.size; 5: var fileName = file.name; 6: } Next, we calculated the start index and end index for each chunks based on the size the user specified from the browser. We put them into an array with the file name and the index, which will be used when we upload chunks into Windows Azure Blob Storage as blocks since we need to specify the target blob name and the block index. At the same time we will store the list of all indexes into another variant which will be used to commit blocks into blob in Azure Storage once all chunks had been uploaded successfully. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 
4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10:  11: // calculate the start and end byte index for each blocks(chunks) 12: // with the index, file name and index list for future using 13: var blockSizeInKB = $("#block_size").val(); 14: var blockSize = blockSizeInKB * 1024; 15: var blocks = []; 16: var offset = 0; 17: var index = 0; 18: var list = ""; 19: while (offset < fileSize) { 20: var start = offset; 21: var end = Math.min(offset + blockSize, fileSize); 22:  23: blocks.push({ 24: name: fileName, 25: index: index, 26: start: start, 27: end: end 28: }); 29: list += index + ","; 30:  31: offset = end; 32: index++; 33: } 34: } 35: }); Now we have all chunks’ information ready. The next step should be upload them one by one to the server side, and at the server side when received a chunk it will upload as a block into Blob Storage, and finally commit them with the index list through BlockBlobClient.PutBlockList. But since all these invokes are ajax calling, which means not synchronized call. So we need to introduce a new JavaScript library to help us coordinate the asynchronize operation, which named “async.js”. You can download this JavaScript library here, and you can find the document here. I will not explain this library too much in this post. We will put all procedures we want to execute as a function array, and pass into the proper function defined in async.js to let it help us to control the execution sequence, in series or in parallel. Hence we will define an array and put the function for chunk upload into this array. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4:  5: // start to upload each files in chunks 6: var files = $("#upload_files")[0].files; 7: for (var i = 0; i < files.length; i++) { 8: var file = files[i]; 9: var fileSize = file.size; 10: var fileName = file.name; 11: // calculate the start and end byte index for each blocks(chunks) 12: // with the index, file name and index list for future using 13: ... ... 14:  15: // define the function array and push all chunk upload operation into this array 16: blocks.forEach(function (block) { 17: putBlocks.push(function (callback) { 18: }); 19: }); 20: } 21: }); 22: }); As you can see, I used File.slice method to read each chunks based on the start and end byte index we calculated previously, and constructed a temporary HTML form with the file name, chunk index and chunk data through another new feature in HTML5 named FormData. Then post this form to the backend server through jQuery.ajax. This is the key part of our solution. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 
13: // define the function array and push all chunk upload operation into this array 14: blocks.forEach(function (block) { 15: putBlocks.push(function (callback) { 16: // load blob based on the start and end index for each chunks 17: var blob = file.slice(block.start, block.end); 18: // put the file name, index and blob into a temporary from 19: var fd = new FormData(); 20: fd.append("name", block.name); 21: fd.append("index", block.index); 22: fd.append("file", blob); 23: // post the form to backend service (asp.net mvc controller action) 24: $.ajax({ 25: url: "/Home/UploadInFormData", 26: data: fd, 27: processData: false, 28: contentType: "multipart/form-data", 29: type: "POST", 30: success: function (result) { 31: if (!result.success) { 32: alert(result.error); 33: } 34: callback(null, block.index); 35: } 36: }); 37: }); 38: }); 39: } 40: }); Then we will invoke these functions one by one by using the async.js. And once all functions had been executed successfully I invoked another ajax call to the backend service to commit all these chunks (blocks) as the blob in Windows Azure Storage. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 13: // define the function array and push all chunk upload operation into this array 14: ... ... 15: // invoke the functions one by one 16: // then invoke the commit ajax call to put blocks into blob in azure storage 17: async.series(putBlocks, function (error, result) { 18: var data = { 19: name: fileName, 20: list: list 21: }; 22: $.post("/Home/Commit", data, function (result) { 23: if (!result.success) { 24: alert(result.error); 25: } 26: else { 27: alert("done!"); 28: } 29: }); 30: }); 31: } 32: }); That’s all in the client side. The outline of our logic would be - Calculate the start and end byte index for each chunks based on the block size. - Defined the functions of reading the chunk form file and upload the content to the backend service through ajax. - Execute the functions defined in previous step with “async.js”. - Commit the chunks by invoking the backend service in Windows Azure Storage finally.   Save Chunks as Blocks into Blob Storage In above we finished the client size JavaScript code. It uploaded the file in chunks to the backend service which we are going to implement in this step. We will use ASP.NET MVC as our backend service, and it will receive the chunks, upload into Windows Azure Bob Storage in blocks, then finally commit as one blob. As in the client side we uploaded chunks by invoking the ajax call to the URL "/Home/UploadInFormData", I created a new action under the Index controller and it only accepts HTTP POST request. 1: [HttpPost] 2: public JsonResult UploadInFormData() 3: { 4: var error = string.Empty; 5: try 6: { 7: } 8: catch (Exception e) 9: { 10: error = e.ToString(); 11: } 12:  13: return new JsonResult() 14: { 15: Data = new 16: { 17: success = string.IsNullOrWhiteSpace(error), 18: error = error 19: } 20: }; 21: } Then I retrieved the file name, index and the chunk content from the Request.Form object, which was passed from our client side. 
And then, used the Windows Azure SDK to create a blob container (in this case we will use the container named “test”.) and create a blob reference with the blob name (same as the file name). Then uploaded the chunk as a block of this blob with the index, since in Blob Storage each block must have an index (ID) associated with so that finally we can put all blocks as one blob by specifying their block ID list. 1: [HttpPost] 2: public JsonResult UploadInFormData() 3: { 4: var error = string.Empty; 5: try 6: { 7: var name = Request.Form["name"]; 8: var index = int.Parse(Request.Form["index"]); 9: var file = Request.Files[0]; 10: var id = Convert.ToBase64String(BitConverter.GetBytes(index)); 11:  12: var container = _client.GetContainerReference("test"); 13: container.CreateIfNotExists(); 14: var blob = container.GetBlockBlobReference(name); 15: blob.PutBlock(id, file.InputStream, null); 16: } 17: catch (Exception e) 18: { 19: error = e.ToString(); 20: } 21:  22: return new JsonResult() 23: { 24: Data = new 25: { 26: success = string.IsNullOrWhiteSpace(error), 27: error = error 28: } 29: }; 30: } Next, I created another action to commit the blocks into blob once all chunks had been uploaded. Similarly, I retrieved the blob name from the Request.Form. I also retrieved the chunks ID list, which is the block ID list from the Request.Form in a string format, split them as a list, then invoked the BlockBlob.PutBlockList method. After that our blob will be shown in the container and ready to be download. 1: [HttpPost] 2: public JsonResult Commit() 3: { 4: var error = string.Empty; 5: try 6: { 7: var name = Request.Form["name"]; 8: var list = Request.Form["list"]; 9: var ids = list 10: .Split(',') 11: .Where(id => !string.IsNullOrWhiteSpace(id)) 12: .Select(id => Convert.ToBase64String(BitConverter.GetBytes(int.Parse(id)))) 13: .ToArray(); 14:  15: var container = _client.GetContainerReference("test"); 16: container.CreateIfNotExists(); 17: var blob = container.GetBlockBlobReference(name); 18: blob.PutBlockList(ids); 19: } 20: catch (Exception e) 21: { 22: error = e.ToString(); 23: } 24:  25: return new JsonResult() 26: { 27: Data = new 28: { 29: success = string.IsNullOrWhiteSpace(error), 30: error = error 31: } 32: }; 33: } Now we finished all code we need. The whole process of uploading would be like this below. Below is the full client side JavaScript code. 
1: <script type="text/javascript" src="~/Scripts/async.js"></script> 2: <script type="text/javascript"> 3: $(function () { 4: $("#upload_button_blob").click(function () { 5: // assert the browser support html5 6: if (window.File && window.Blob && window.FormData) { 7: alert("Your brwoser is awesome, let's rock!"); 8: } 9: else { 10: alert("Oh man plz update to a modern browser before try is cool stuff out."); 11: return; 12: } 13:  14: // start to upload each files in chunks 15: var files = $("#upload_files")[0].files; 16: for (var i = 0; i < files.length; i++) { 17: var file = files[i]; 18: var fileSize = file.size; 19: var fileName = file.name; 20:  21: // calculate the start and end byte index for each blocks(chunks) 22: // with the index, file name and index list for future using 23: var blockSizeInKB = $("#block_size").val(); 24: var blockSize = blockSizeInKB * 1024; 25: var blocks = []; 26: var offset = 0; 27: var index = 0; 28: var list = ""; 29: while (offset < fileSize) { 30: var start = offset; 31: var end = Math.min(offset + blockSize, fileSize); 32:  33: blocks.push({ 34: name: fileName, 35: index: index, 36: start: start, 37: end: end 38: }); 39: list += index + ","; 40:  41: offset = end; 42: index++; 43: } 44:  45: // define the function array and push all chunk upload operation into this array 46: var putBlocks = []; 47: blocks.forEach(function (block) { 48: putBlocks.push(function (callback) { 49: // load blob based on the start and end index for each chunks 50: var blob = file.slice(block.start, block.end); 51: // put the file name, index and blob into a temporary from 52: var fd = new FormData(); 53: fd.append("name", block.name); 54: fd.append("index", block.index); 55: fd.append("file", blob); 56: // post the form to backend service (asp.net mvc controller action) 57: $.ajax({ 58: url: "/Home/UploadInFormData", 59: data: fd, 60: processData: false, 61: contentType: "multipart/form-data", 62: type: "POST", 63: success: function (result) { 64: if (!result.success) { 65: alert(result.error); 66: } 67: callback(null, block.index); 68: } 69: }); 70: }); 71: }); 72:  73: // invoke the functions one by one 74: // then invoke the commit ajax call to put blocks into blob in azure storage 75: async.series(putBlocks, function (error, result) { 76: var data = { 77: name: fileName, 78: list: list 79: }; 80: $.post("/Home/Commit", data, function (result) { 81: if (!result.success) { 82: alert(result.error); 83: } 84: else { 85: alert("done!"); 86: } 87: }); 88: }); 89: } 90: }); 91: }); 92: </script> And below is the full ASP.NET MVC controller code. 
1: public class HomeController : Controller 2: { 3: private CloudStorageAccount _account; 4: private CloudBlobClient _client; 5:  6: public HomeController() 7: : base() 8: { 9: _account = CloudStorageAccount.Parse(CloudConfigurationManager.GetSetting("DataConnectionString")); 10: _client = _account.CreateCloudBlobClient(); 11: } 12:  13: public ActionResult Index() 14: { 15: ViewBag.Message = "Modify this template to jump-start your ASP.NET MVC application."; 16:  17: return View(); 18: } 19:  20: [HttpPost] 21: public JsonResult UploadInFormData() 22: { 23: var error = string.Empty; 24: try 25: { 26: var name = Request.Form["name"]; 27: var index = int.Parse(Request.Form["index"]); 28: var file = Request.Files[0]; 29: var id = Convert.ToBase64String(BitConverter.GetBytes(index)); 30:  31: var container = _client.GetContainerReference("test"); 32: container.CreateIfNotExists(); 33: var blob = container.GetBlockBlobReference(name); 34: blob.PutBlock(id, file.InputStream, null); 35: } 36: catch (Exception e) 37: { 38: error = e.ToString(); 39: } 40:  41: return new JsonResult() 42: { 43: Data = new 44: { 45: success = string.IsNullOrWhiteSpace(error), 46: error = error 47: } 48: }; 49: } 50:  51: [HttpPost] 52: public JsonResult Commit() 53: { 54: var error = string.Empty; 55: try 56: { 57: var name = Request.Form["name"]; 58: var list = Request.Form["list"]; 59: var ids = list 60: .Split(',') 61: .Where(id => !string.IsNullOrWhiteSpace(id)) 62: .Select(id => Convert.ToBase64String(BitConverter.GetBytes(int.Parse(id)))) 63: .ToArray(); 64:  65: var container = _client.GetContainerReference("test"); 66: container.CreateIfNotExists(); 67: var blob = container.GetBlockBlobReference(name); 68: blob.PutBlockList(ids); 69: } 70: catch (Exception e) 71: { 72: error = e.ToString(); 73: } 74:  75: return new JsonResult() 76: { 77: Data = new 78: { 79: success = string.IsNullOrWhiteSpace(error), 80: error = error 81: } 82: }; 83: } 84: } And if we selected a file from the browser we will see our application will upload chunks in the size we specified to the server through ajax call in background, and then commit all chunks in one blob. Then we can find the blob in our Windows Azure Blob Storage.   Optimized by Parallel Upload In previous example we just uploaded our file in chunks. This solved the problem that ASP.NET MVC request content size limitation as well as the Windows Azure load balancer timeout. But it might introduce the performance problem since we uploaded chunks in sequence. In order to improve the upload performance we could modify our client side code a bit to make the upload operation invoked in parallel. The good news is that, “async.js” library provides the parallel execution function. If you remembered the code we invoke the service to upload chunks, it utilized “async.series” which means all functions will be executed in sequence. Now we will change this code to “async.parallel”. This will invoke all functions in parallel. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 13: // define the function array and push all chunk upload operation into this array 14: ... ... 
15: // invoke the functions one by one 16: // then invoke the commit ajax call to put blocks into blob in azure storage 17: async.parallel(putBlocks, function (error, result) { 18: var data = { 19: name: fileName, 20: list: list 21: }; 22: $.post("/Home/Commit", data, function (result) { 23: if (!result.success) { 24: alert(result.error); 25: } 26: else { 27: alert("done!"); 28: } 29: }); 30: }); 31: } 32: }); In this way all chunks will be uploaded to the server side at the same time to maximize the bandwidth usage. This should work if the file was not very large and the chunk size was not very small. But for large file this might introduce another problem that too many ajax calls are sent to the server at the same time. So the best solution should be, upload the chunks in parallel with maximum concurrency limitation. The code below specified the concurrency limitation to 4, which means at the most only 4 ajax calls could be invoked at the same time. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 13: // define the function array and push all chunk upload operation into this array 14: ... ... 15: // invoke the functions one by one 16: // then invoke the commit ajax call to put blocks into blob in azure storage 17: async.parallelLimit(putBlocks, 4, function (error, result) { 18: var data = { 19: name: fileName, 20: list: list 21: }; 22: $.post("/Home/Commit", data, function (result) { 23: if (!result.success) { 24: alert(result.error); 25: } 26: else { 27: alert("done!"); 28: } 29: }); 30: }); 31: } 32: });   Summary In this post we discussed how to upload files in chunks to the backend service and then upload them into Windows Azure Blob Storage in blocks. We focused on the frontend side and leverage three new feature introduced in HTML 5 which are - File.slice: Read part of the file by specifying the start and end byte index. - Blob: File-like interface which contains the part of the file content. - FormData: Temporary form element that we can pass the chunk alone with some metadata to the backend service. Then we discussed the performance consideration of chunk uploading. Sequence upload cannot provide maximized upload speed, but the unlimited parallel upload might crash the browser and server if too many chunks. So we finally came up with the solution to upload chunks in parallel with the concurrency limitation. We also demonstrated how to utilize “async.js” JavaScript library to help us control the asynchronize call and the parallel limitation.   Regarding the chunk size and the parallel limitation value there is no “best” value. You need to test vary composition and find out the best one for your particular scenario. It depends on the local bandwidth, client machine cores and the server side (Windows Azure Cloud Service Virtual Machine) cores, memory and bandwidth. Below is one of my performance test result. The client machine was Windows 8 IE 10 with 4 cores. I was using Microsoft Cooperation Network. The web site was hosted on Windows Azure China North data center (in Beijing) with one small web role (1.7GB 1 core CPU, 1.75GB memory with 100Mbps bandwidth). 
The test cases were:
- Chunk size: 512KB, 1MB, 2MB, 4MB.
- Upload mode: sequential, parallel (unlimited), parallel with limit (4 threads, 8 threads).
- Chunk format: base64 string, binary.
- Target file: 100MB.
- Each case was tested 3 times.

Below is the test result chart. Some thoughts, but not guidance or best practice:
- Parallel gets better performance than sequential.
- There is no significant performance difference between parallel with 4 threads and with 8 threads.
- Transferring chunks as binary provides better performance than base64.
- In all cases, a chunk size of 1MB - 2MB gives better performance.

Hope this helps,
Shaun

All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.
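As an editorial aside to the excerpt above: the client-side pieces it elides with "... ..." (computing block boundaries with File.slice and wrapping each chunk upload in a function for async.js) might look roughly like the sketch below. The names putBlocks, list, fileName, fileSize and the /Home/UploadInFormData endpoint are carried over from the excerpt; everything else is an assumption, not the author's original code.

var maxBlockSize = 1024 * 1024; // 1MB chunks; the author's tests favored 1MB - 2MB
var putBlocks = [];
var list = [];
var index = 0;
for (var start = 0; start < fileSize; start += maxBlockSize) {
    var end = Math.min(start + maxBlockSize, fileSize);
    (function (blockIndex, chunk) {
        list.push(blockIndex); // indices are later posted to /Home/Commit
        putBlocks.push(function (callback) {
            var fd = new FormData(); // HTML5 FormData carries the chunk plus metadata
            fd.append("name", fileName);
            fd.append("index", blockIndex);
            fd.append("file", chunk); // chunk is a Blob produced by HTML5 File.slice
            $.ajax({
                url: "/Home/UploadInFormData",
                type: "POST",
                data: fd,
                processData: false, // hand the FormData to XHR untouched
                contentType: false,
                success: function () { callback(null); },
                error: function (xhr) { callback(xhr.statusText); }
            });
        });
    })(index++, file.slice(start, end));
}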


  • How do I compare binary files in Linux?

    - by frustratedCmpNoLongerUser
I need to compare two binary files and get the output in the form <fileoffset-hex> <file1-byte-hex> <file2-byte-hex> for every different byte. So if file1.bin is

    00 90 00 11

in binary form and file2.bin is

    00 91 00 10

I want to get something like

    00000001 90 91
    00000003 11 10

What is the easiest way to accomplish the goal? Standard tool? Some third-party tool? (Note: cmp -l should be killed with fire, it uses a decimal system for offsets and octal for bytes.)
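If no standard tool turns up, a few lines of Python 3 produce exactly the requested format. This is an editor's sketch, not a known utility; it assumes both files fit in memory and only compares their common length:

import sys

def diff_bytes(path1, path2):
    a = open(path1, "rb").read()
    b = open(path2, "rb").read()
    for offset in range(min(len(a), len(b))):
        if a[offset] != b[offset]:
            # zero-based hex offset, then the differing byte from each file
            print("%08X %02X %02X" % (offset, a[offset], b[offset]))

if __name__ == "__main__":
    diff_bytes(sys.argv[1], sys.argv[2])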


  • Designing a database file format

    - by RoliSoft
I would like to design my own database engine for educational purposes, for the time being. Designing a binary file format is not hard, nor is it the question; I've done it in the past. But while designing a database file format, I have come across a very important question: how to handle the deletion of an item? So far, I've thought of the following options:
- Each item has a "deleted" bit which is set to 1 upon deletion. Pro: relatively fast. Con: potentially sensitive data will remain in the file.
- 0x00 out the whole item upon deletion. Pro: potentially sensitive data will be removed from the file. Con: relatively slow.
- Recreate the whole database. Pro: no empty blocks, which makes the follow-up question void. Con: it's a really good idea to overwrite the whole 4 GB database file because a user corrected a typo. I will sell this method to Twitter ASAP!

Now let's say you already have a few empty blocks in your database (deleted items). The follow-up question is how to handle the insertion of a new item:
- Append the item to the end of the file. Pro: fastest possible. Con: the file will get huge, because all the empty blocks remain; deleted items aren't actually deleted.
- Search for an empty block exactly the size of the one you're inserting. Pro: may get rid of some blocks. Con: you may end up scanning the whole file at each insert, only to find out it's very unlikely you'll come across a perfectly fitting empty block.
- Find the first empty block which is equal to or larger than the item you're inserting. Pro: you probably won't end up scanning the whole file, as you will find an empty block somewhere midway; this keeps the file size relatively low. Con: there will still be lots of leftover 0x00 bytes at the end of items that were inserted into bigger empty blocks than themselves.

Right now, I think the first deletion method and the last insertion method are probably the "best" mix, but they would still have their own small issues. Alternatively, the first insertion method plus scheduled full database recreation (probably not a good idea when working with really large databases; also, each small update in that method will clone the whole item to the end of the file, accelerating file growth at a potentially insane rate). Unless there is a way of deleting/inserting blocks from/to the middle of the file in a file-system-approved way, what's the best way to do this? More importantly, how do databases currently used in production usually handle this?
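To make the first deletion option concrete, here is a toy Python sketch of a record layout with a one-byte tombstone flag. The format is hypothetical, not from the question, and only illustrates the trade-off: deletes are a single byte write, but the stale payload bytes stay behind in the file.

import struct

# toy record: 1-byte deleted flag, 4-byte payload length, then the payload itself
HEADER = struct.Struct("<BI")

def append_record(f, payload):
    f.seek(0, 2)                            # append at end of file
    offset = f.tell()
    f.write(HEADER.pack(0, len(payload)))   # 0 = live record
    f.write(payload)
    return offset

def delete_record(f, offset):
    f.seek(offset)
    f.write(b"\x01")                        # flip the tombstone; payload bytes stay behind

def read_record(f, offset):
    f.seek(offset)
    deleted, length = HEADER.unpack(f.read(HEADER.size))
    payload = f.read(length)
    return None if deleted else payload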


  • Detecting if a file is binary or plain text?

    - by dr. evil
How can I detect if a file is binary or plain text? Basically, my .NET app is processing batch files and extracting data; however, I don't want to process binary files. As a solution I'm thinking of analysing the first X bytes of the file: if there are more unprintable characters than printable characters, it should be binary. Is this the right way to do it? Is there any better implementation for this task?
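The heuristic described in the question could look like the following .NET sketch. The 4KB sample size and the 10% threshold are arbitrary assumptions, and a NUL byte is treated as an immediate giveaway since it almost never occurs in text:

using System.IO;

static class FileKind
{
    // Sample the first bytes of the file and guess whether it is binary.
    public static bool LooksBinary(string path, int sampleSize = 4096)
    {
        byte[] buffer = new byte[sampleSize];
        int read;
        using (var stream = File.OpenRead(path))
            read = stream.Read(buffer, 0, buffer.Length);

        int suspicious = 0;
        for (int i = 0; i < read; i++)
        {
            byte b = buffer[i];
            if (b == 0) return true;  // NUL byte: almost certainly binary
            bool printable = b >= 0x20 || b == (byte)'\t' || b == (byte)'\n' || b == (byte)'\r';
            if (!printable) suspicious++;
        }
        return read > 0 && suspicious * 10 > read;  // more than ~10% control bytes
    }
}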


  • Embedding binary blobs using gcc mingw

    - by myforwik
I am trying to embed binary blobs into an exe file. I am using mingw gcc. I make the object file like this:

    ld -r -b binary -o binary.o input.txt

I then look at the objdump output to get the symbols:

    objdump -x binary.o

And it gives symbols named:

    _binary_input_txt_start
    _binary_input_txt_end
    _binary_input_txt_size

I then try to access them in my C program:

#include <stdlib.h>
#include <stdio.h>

extern char _binary_input_txt_start[];

int main (int argc, char *argv[])
{
    char *p;
    p = _binary_input_txt_start;
    return 0;
}

Then I compile like this:

    gcc -o test.exe test.c binary.o

But I always get:

    undefined reference to _binary_input_txt_start

Does anyone know what I am doing wrong?
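A likely explanation, offered as an editor's guess rather than a confirmed answer: on 32-bit MinGW the C compiler prepends its own underscore to every external symbol, so extern char _binary_input_txt_start[] makes the linker look for __binary_input_txt_start. Declaring the name without the leading underscore usually resolves it:

#include <stdio.h>

/* symbol names as ld -r -b binary emits them, minus the compiler's own underscore */
extern char binary_input_txt_start[];
extern char binary_input_txt_end[];

int main(void)
{
    /* the blob is not NUL-terminated, so compute its length from the two symbols */
    size_t len = (size_t)(binary_input_txt_end - binary_input_txt_start);
    fwrite(binary_input_txt_start, 1, len, stdout);
    return 0;
}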


  • How to tell binary from text files in linux

    - by gabor
The linux file command does a very good job of recognising file types and gives very fine-grained results. The diff tool is able to tell binary files from text files, producing a different output. Is there a way to tell binary files from text files? All I want is a yes/no answer for whether a given file is binary. Because it's difficult to define binary, let's say I want to know whether diff will attempt a text-based comparison. To clarify the question: I do not care if it's ASCII text or XML, as long as it's text. Also, I do not want to differentiate between MP3 and JPEG files, since they're all binary.


  • How to redistribute binary programs built on modern Ubuntu so that they can be executed on any older Linux system ?

    - by psihodelia
I found that if I build any binary on Ubuntu 10.10, it doesn't execute on some older Linuxes. This is because Ubuntu uses a very new C library, called EGLIBC, while most desktop Linux systems use GLIBC. I would like to know whether there is any standard method for redistributing binary programs built on a modern Ubuntu so that they can be executed on any older Linux system. How do I find all the required dependencies (glibc version, dynamic libraries)?


  • Why use binary files to stack up different versions on DMSs?

    - by edgarator
I've used both Liferay and Alfresco, trying to use them as the document management system for an intranet. I noticed the following:
- They use the file system and the database to store files.
- They use a GUID to name the file on the filesystem, and that GUID is used as an id in the database.
- The GUID-named file is a binary file.
- The GUID-named binary file stores all versions of a given file.
- The path for the file in the DMS doesn't match the one in the file system.
- The URL makes reference to the GUID when a certain file is requested.

What I want to know is why this is, and what would be the best way of doing it: how would you create the binary file (zip?), and which parts would you keep in the binary file versus store in the database (metadata, path?). I'm assuming some of the benefits of doing it like this, such as having the same URL for a file regardless of its current document path, and having only one file even if the file has changed names over time.


  • What is the most efficient way to convert to binary and back in C#?

    - by Saad Imran.
I'm trying to write a general-purpose socket server for a game I'm working on. I know I could very well use already-built servers like SmartFox and Photon, but I want to go through the pain of creating one myself for learning purposes. I've come up with a BSON-inspired protocol to convert the basic data types, their arrays, and a special GSObject to binary, and arrange them in a way so that they can be put back together into object form on the client end. At the core, the conversion methods utilize the .Net BitConverter class to convert the basic data types to binary. Anyway, the problem is performance: if I loop 50,000 times and convert my GSObject to binary each time, it takes about 5500ms (the resulting byte[] is just 192 bytes per conversion). I think this would be way too slow for an MMO that sends 5-10 position updates per second with 1000 concurrent users. Yes, I know it's unlikely that a game will have 1000 users on at the same time, but like I said earlier this is supposed to be a learning process for me; I want to go out of my way and build something that scales well and can handle at least a few thousand users. So yeah, if anyone's aware of other conversion techniques or sees where I'm losing performance, I would appreciate the help.

GSBitConverter.cs

This is the main conversion class; it adds extension methods to the main data types to convert them to the binary format. It uses the BitConverter class to convert the base types. I've shown only the code to convert integers and integer arrays, but the rest of the methods are pretty much replicas of those two, they just overload the type.

public static class GSBitConverter
{
    public static byte[] ToGSBinary(this short value)
    {
        return BitConverter.GetBytes(value);
    }

    public static byte[] ToGSBinary(this IEnumerable<short> value)
    {
        List<byte> bytes = new List<byte>();
        short length = (short)value.Count();
        bytes.AddRange(length.ToGSBinary());
        for (int i = 0; i < length; i++)
            bytes.AddRange(value.ElementAt(i).ToGSBinary());
        return bytes.ToArray();
    }

    public static byte[] ToGSBinary(this bool value);
    public static byte[] ToGSBinary(this IEnumerable<bool> value);
    public static byte[] ToGSBinary(this IEnumerable<byte> value);
    public static byte[] ToGSBinary(this int value);
    public static byte[] ToGSBinary(this IEnumerable<int> value);
    public static byte[] ToGSBinary(this long value);
    public static byte[] ToGSBinary(this IEnumerable<long> value);
    public static byte[] ToGSBinary(this float value);
    public static byte[] ToGSBinary(this IEnumerable<float> value);
    public static byte[] ToGSBinary(this double value);
    public static byte[] ToGSBinary(this IEnumerable<double> value);
    public static byte[] ToGSBinary(this string value);
    public static byte[] ToGSBinary(this IEnumerable<string> value);
    public static string GetHexDump(this IEnumerable<byte> value);
}

Program.cs

Here's the object that I'm converting to binary in a loop.

class Program
{
    static void Main(string[] args)
    {
        GSObject obj = new GSObject();
        obj.AttachShort("smallInt", 15);
        obj.AttachInt("medInt", 120700);
        obj.AttachLong("bigInt", 10900800700);
        obj.AttachDouble("doubleVal", Math.PI);
        obj.AttachStringArray("muppetNames",
            new string[] { "Kermit", "Fozzy", "Piggy", "Animal", "Gonzo" });

        GSObject apple = new GSObject();
        apple.AttachString("name", "Apple");
        apple.AttachString("color", "red");
        apple.AttachBool("inStock", true);
        apple.AttachFloat("price", (float)1.5);

        GSObject lemon = new GSObject();
        apple.AttachString("name", "Lemon");
        apple.AttachString("color", "yellow");
        apple.AttachBool("inStock", false);
        apple.AttachFloat("price", (float)0.8);

        GSObject apricoat = new GSObject();
        apple.AttachString("name", "Apricoat");
        apple.AttachString("color", "orange");
        apple.AttachBool("inStock", true);
        apple.AttachFloat("price", (float)1.9);

        GSObject kiwi = new GSObject();
        apple.AttachString("name", "Kiwi");
        apple.AttachString("color", "green");
        apple.AttachBool("inStock", true);
        apple.AttachFloat("price", (float)2.3);

        GSArray fruits = new GSArray();
        fruits.AddGSObject(apple);
        fruits.AddGSObject(lemon);
        fruits.AddGSObject(apricoat);
        fruits.AddGSObject(kiwi);

        obj.AttachGSArray("fruits", fruits);

        Stopwatch w1 = Stopwatch.StartNew();
        for (int i = 0; i < 50000; i++)
        {
            byte[] b = obj.ToGSBinary();
        }
        w1.Stop();

        Console.WriteLine(BitConverter.IsLittleEndian ? "Little Endian" : "Big Endian");
        Console.WriteLine(w1.ElapsedMilliseconds + "ms");
    }
}

Here's the code for some of my other classes that are used in the code above. Most of it is repetitive: GSObject, GSArray, GSWrappedObject.
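One plausible direction, as an editorial sketch rather than a measured fix: IEnumerable plus ElementAt() makes each array conversion quadratic, and every ToGSBinary call allocates a fresh byte[]; writing everything into a single MemoryStream through a BinaryWriter avoids both costs. The names below are invented for illustration:

using System.IO;

public static class GSBinaryWriter
{
    // Write a short-prefixed int array into one reusable stream
    // instead of allocating a byte[] per element.
    public static void WriteGSBinary(this BinaryWriter writer, int[] values)
    {
        writer.Write((short)values.Length);  // length prefix, as in the original format
        for (int i = 0; i < values.Length; i++)
            writer.Write(values[i]);         // buffered write, no temporary arrays
    }
}

// usage sketch:
// using (var stream = new MemoryStream())
// using (var writer = new BinaryWriter(stream))
// {
//     writer.WriteGSBinary(new[] { 1, 2, 3 });
//     byte[] result = stream.ToArray();
// }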


  • Compile a binary file for linking OSX

    - by Satpal
I'm trying to compile a binary file into a Mach-O object file so that it can be linked into a dylib. The dylib is written in C/C++. On Linux the following command is used:

    ld -r -b binary -o foo.o foo.bin

I have tried various options on OSX but to no avail:

    ld -r foo.bin -o foo.o

gives:

    ld: warning: -arch not specified
    ld: warning: ignoring file foo.bin, file was built for unsupported file format which is not the architecture being linked (x86_64)

An empty .o file is created.

    ld -arch x86_64 -r foo.bin -o foo.o
    ld: warning: ignoring file foo.bin, file was built for unsupported file format which is not the architecture being linked (x86_64)

Again an empty .o file is created. Checking the files with nm gives:

    nm foo.o
    nm: no name list

The binary file is actually firmware that will be downloaded to an external device. Thanks for looking.
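One common workaround, suggested here as an editorial note rather than a confirmed answer: since Apple's ld has no -b binary mode, convert the blob to a C source file, for example with xxd -i foo.bin > foo.c, and compile that into the dylib like any other object. The generated file has this shape (byte values illustrative):

/* foo.c as emitted by xxd -i */
unsigned char foo_bin[] = {
    0x12, 0x34, 0x56, 0x78
};
unsigned int foo_bin_len = 4;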


  • Free-as-in-beer binary file format inspector

    - by fbrereto
I am looking for a utility that gives me the ability to specify a binary file format and then interpret a file of bytes according to that format (something along the lines of the 010 Editor, but infinitely more cost-effective). Something that runs on Mac OS X would be preferred, but I'm interested to see what all is out there in general (while more of a hassle, I'd be willing to run a tool on Windows if it were superior). What's your preference?


  • Processing Binary Data in SOA Suite 11g

    - by Ramkumar Menon
SOA Suite 11g provides a variety of ways to exchange binary data amongst applications and endpoints. The illustration below is a bird's-eye view of all the features in SOA Suite that facilitate such exchanges.

Handling Binary data in SOA Suite 11g Composites

Samples and Step-by-Step Tutorials

A few step-by-step tutorials have been uploaded to java.net that illustrate key concepts related to binary content handling within SOA composites. Each sample consists of a fully built composite project that can be deployed and tested, together with a Readme doc with screenshots to build the project from scratch.
- Binary content handling within File Adapter samples [Opaque, Streaming, Attachments]
- SOAP with Attachments [SwA] sample
- MTOM sample
- Mediator pass-through for attachments sample

For detailed information on binary content and large document handling within SOA Suite, refer to Chapter 42 of the SOA Suite Developer's Guide.

Handling Binary data in Oracle B2B

The following diagram illustrates how Oracle B2B facilitates the exchange of binary documents between SOA Suite and trading partners.


  • Apache return binary data on the headers

    - by Camvoya
Sometimes when I request my page, Apache returns a file named 4r4fq34sd.part (the name appears to be random). The content looks like:

    i»{h»¿ox..(a lot of binary)¿ox....
    Date: Wed, 12 May 2010 13:40:10 GMT
    Server: Apache
    X-Powered-By: PHP/5.2.12-pl0-gentoo

And I can't find a solution on Google. Thanks


  • Binary diff/patch for large files on linux?

    - by thejh
I've got two partition images (A and B) and want to use them to create a patch that I can apply to A on another computer, in order to get the new B image without flooding the network. I have the following requirements:
- works on linux
- can create diffs
- can use diffs to patch files
- can handle binary files
- can handle large files (a few hundred GB should work)
- no user interaction required (just a console application)
- ideally, should be able to read from/write to pipes (so that I can pipe into it from a gzip-compressed file and write to one)

Does something like that exist?


  • Binary file email attachment problem

    - by Alan Harris-Reid
Hi there,

Using Python 3.1.2 I am having a problem sending binary attachment files (jpeg, pdf, etc.) - MIMEText attachments work fine. The code in question is as follows...

for file in self.attachments:
    part = MIMEBase('application', "octet-stream")
    part.set_payload(open(file, "rb").read())
    encoders.encode_base64(part)
    part.add_header('Content-Disposition', 'attachment; filename="%s"' % file)
    msg.attach(part)  # msg is an instance of MIMEMultipart()

server = smtplib.SMTP(host, port)
server.login(username, password)
server.sendmail(from_addr, all_recipients, msg.as_string())

However, way down in the calling stack (see traceback below), it looks as though msg.as_string() has received an attachment which creates a payload of 'bytes' type instead of string. Has anyone any idea what might be causing the problem? Any help would be appreciated.

Alan

builtins.TypeError: string payload expected:
  File "c:\Dev\CommonPY\Scripts\email_send.py", line 147, in send
    server.sendmail(self.from_addr, all_recipients, msg.as_string())
  File "c:\Program Files\Python31\Lib\email\message.py", line 136, in as_string
    g.flatten(self, unixfrom=unixfrom)
  File "c:\Program Files\Python31\Lib\email\generator.py", line 76, in flatten
    self._write(msg)
  File "c:\Program Files\Python31\Lib\email\generator.py", line 101, in _write
    self._dispatch(msg)
  File "c:\Program Files\Python31\Lib\email\generator.py", line 127, in _dispatch
    meth(msg)
  File "c:\Program Files\Python31\Lib\email\generator.py", line 181, in _handle_multipart
    g.flatten(part, unixfrom=False)
  File "c:\Program Files\Python31\Lib\email\generator.py", line 76, in flatten
    self._write(msg)
  File "c:\Program Files\Python31\Lib\email\generator.py", line 101, in _write
    self._dispatch(msg)
  File "c:\Program Files\Python31\Lib\email\generator.py", line 127, in _dispatch
    meth(msg)
  File "c:\Program Files\Python31\Lib\email\generator.py", line 155, in _handle_text
    raise TypeError('string payload expected: %s' % type(payload))
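A workaround that often helped on early Python 3.x, offered as an editor's suggestion: the email package's handling of binary payloads had known problems before Python 3.2, so base64-encode the payload yourself and attach it as a plain string. The names file and msg are carried over from the code above:

import base64
from email.mime.base import MIMEBase

part = MIMEBase('application', 'octet-stream')
with open(file, 'rb') as f:
    payload = base64.b64encode(f.read()).decode('ascii')
part.set_payload(payload)  # payload is now str, not bytes
part.add_header('Content-Transfer-Encoding', 'base64')
part.add_header('Content-Disposition', 'attachment; filename="%s"' % file)
msg.attach(part)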


  • C++ how to store integer into a binary file??

    - by blaxc
I've got a struct with 2 integers; I want to store them in a binary file and read them again. Here is my code:

struct pw
{
    int a;
    int b;
};

void main()
{
    pw* p = new pw();
    pw* q = new pw();
    std::ofstream fout(ADMIN_FILE, ios_base::out | ios_base::binary | ios_base::trunc);
    std::ifstream fin(ADMIN_FILE, ios_base::in | ios_base::binary);
    p->a = 123;
    p->b = 321;
    fout.write((const char*)p, sizeof(pw));
    fin.write((char*)q, sizeof(pw));
    fin.close();
    cout << q->a << endl;
}

My output is 0. Can anyone tell me what the problem is?
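For reference, a corrected sketch (an editorial note, not part of the original question): the visible bugs are that fin.write is called where fin.read is meant, the ifstream is opened before the ofstream has flushed anything, and main should return int. ADMIN_FILE is replaced by a literal name here:

#include <fstream>
#include <iostream>

struct pw
{
    int a;
    int b;
};

int main()
{
    pw p;
    p.a = 123;
    p.b = 321;

    // write first and close, so the data is flushed before reading it back
    std::ofstream fout("admin.bin", std::ios::binary | std::ios::trunc);
    fout.write(reinterpret_cast<const char*>(&p), sizeof(pw));
    fout.close();

    pw q;
    std::ifstream fin("admin.bin", std::ios::binary);
    fin.read(reinterpret_cast<char*>(&q), sizeof(pw));  // read, not write

    std::cout << q.a << std::endl;  // prints 123
    return 0;
}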


  • PHP File Upload second file does not upload, first file does without error

    - by Curtis
So I have a script I have been using and it generally works well with multiple files... When I upload a very large file in a multiple-file upload, only the first file is uploaded, and I am not seeing any errors as to why. I figure this is related to a timeout setting but cannot figure it out. Any ideas? I have the following set in my htaccess file:

    php_value post_max_size 1024M
    php_value upload_max_filesize 1024M
    php_value memory_limit 600M
    php_value output_buffering on
    php_value max_execution_time 259200
    php_value max_input_time 259200
    php_value session.cookie_lifetime 0
    php_value session.gc_maxlifetime 259200
    php_value default_socket_timeout 259200
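One way to find out why the second file silently disappears, sketched here as a suggestion: PHP records a per-file error code in $_FILES, so inspect it rather than relying on warnings. This assumes one input field per file; for <input name="files[]"> the codes live one level deeper, in $info['error'][$i]:

<?php
// surface the per-file upload error code for each submitted file
foreach ($_FILES as $field => $info) {
    if ($info['error'] !== UPLOAD_ERR_OK) {
        // e.g. UPLOAD_ERR_INI_SIZE, UPLOAD_ERR_PARTIAL, UPLOAD_ERR_NO_FILE
        error_log($field . " failed with code " . $info['error']);
    }
}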

