Search Results

Search found 5946 results on 238 pages for 'heavy bytes'.

Page 119/238 | < Previous Page | 115 116 117 118 119 120 121 122 123 124 125 126  | Next Page >

  • How to configure the framesize using AudioUnit.framework on iOS

    - by Piperoman
    I have an audio app in which I need to capture mic samples and encode them to MP3 with ffmpeg. First I configure the audio:

        /**
         * We need to specify the format we want to work with.
         * We use linear PCM because it is uncompressed and we work on raw data.
         *
         * We want 16 bits, 2 bytes (short) per packet/frame, at 8 kHz.
         */
        AudioStreamBasicDescription audioFormat;
        audioFormat.mSampleRate       = SAMPLE_RATE;
        audioFormat.mFormatID         = kAudioFormatLinearPCM;
        audioFormat.mFormatFlags      = kAudioFormatFlagIsPacked | kAudioFormatFlagIsSignedInteger;
        audioFormat.mFramesPerPacket  = 1;
        audioFormat.mChannelsPerFrame = 1;
        audioFormat.mBitsPerChannel   = audioFormat.mChannelsPerFrame * sizeof(SInt16) * 8;
        audioFormat.mBytesPerPacket   = audioFormat.mChannelsPerFrame * sizeof(SInt16);
        audioFormat.mBytesPerFrame    = audioFormat.mChannelsPerFrame * sizeof(SInt16);

    The recording callback is:

        static OSStatus recordingCallback(void *inRefCon,
                                          AudioUnitRenderActionFlags *ioActionFlags,
                                          const AudioTimeStamp *inTimeStamp,
                                          UInt32 inBusNumber,
                                          UInt32 inNumberFrames,
                                          AudioBufferList *ioData)
        {
            NSLog(@"Log record: %lu", inBusNumber);
            NSLog(@"Log record: %lu", inNumberFrames);
            NSLog(@"Log record: %lu", (UInt32)inTimeStamp);

            // the data gets rendered here
            AudioBuffer buffer;

            // a variable where we check the status
            OSStatus status;

            /** This is the reference to the object that owns the callback. */
            AudioProcessor *audioProcessor = (__bridge AudioProcessor*) inRefCon;

            /** At this point we define the number of channels, which is mono for the iPhone.
                The number of frames is usually 512 or 1024. */
            buffer.mDataByteSize = inNumberFrames * sizeof(SInt16);  // sample size
            buffer.mNumberChannels = 1;                              // one channel
            buffer.mData = malloc(inNumberFrames * sizeof(SInt16));  // buffer size

            // we put our buffer into a bufferlist array for rendering
            AudioBufferList bufferList;
            bufferList.mNumberBuffers = 1;
            bufferList.mBuffers[0] = buffer;

            // render input and check for error
            status = AudioUnitRender([audioProcessor audioUnit], ioActionFlags, inTimeStamp,
                                     inBusNumber, inNumberFrames, &bufferList);
            [audioProcessor hasError:status:__FILE__:__LINE__];

            // process the bufferlist in the audio processor
            [audioProcessor processBuffer:&bufferList];

            // clean up the buffer
            free(bufferList.mBuffers[0].mData);

            //NSLog(@"RECORD");
            return noErr;
        }

    With this data:

        inBusNumber    = 1
        inNumberFrames = 1024
        inTimeStamp    = 80444304  // always the same inTimeStamp, which is strange

    However, the frame size I need to encode MP3 is 1152. How can I configure that? If I buffer, that implies a delay, which I would like to avoid because this is a real-time app. If I use this configuration, each buffer ends with garbage trailing samples: 1152 - 1024 = 128 bad samples. All samples are SInt16.
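
    The I/O unit's slice size and the MP3 frame size do not have to match; a common approach is to keep a small sample FIFO between the callback and the encoder and only hand the encoder full 1152-sample frames. A minimal C++-style sketch of that idea (the class and the encode callback are illustrative, not from the question); the added delay is at most one frame's worth of samples, i.e. under 1152 samples ≈ 144 ms at 8 kHz:

        #include <vector>
        #include <cstdint>
        #include <cstddef>

        // Illustrative accumulator: collect whatever the AudioUnit delivers (e.g. 1024 samples)
        // and hand the encoder fixed 1152-sample frames; leftovers wait for the next callback.
        class FrameAccumulator {
        public:
            explicit FrameAccumulator(std::size_t frameSize) : frameSize_(frameSize) {}

            template <typename Encode>
            void push(const std::int16_t *samples, std::size_t count, Encode encode) {
                fifo_.insert(fifo_.end(), samples, samples + count);
                while (fifo_.size() >= frameSize_) {
                    encode(fifo_.data(), frameSize_);                        // e.g. feed the ffmpeg/LAME encoder
                    fifo_.erase(fifo_.begin(), fifo_.begin() + frameSize_);  // keep the remainder
                }
            }

        private:
            std::size_t frameSize_;
            std::vector<std::int16_t> fifo_;
        };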

    Read the article

  • How to solve a Python memory leak when using urllib2?

    - by b_m
    Hi, I'm trying to write a simple Python script for my mobile phone that periodically loads a web page using urllib2. In fact I don't really care about the server response; I'd only like to pass some values in the URL to the PHP. The problem is that Python for S60 uses the old 2.5.4 Python core, which seems to have a memory leak in the urllib2 module. As I read, there seem to be similar problems in every kind of network communication as well. This bug was reported here a couple of years ago, and some workarounds were posted as well. I've tried everything I could find on that page, and with the help of Google, but my phone still runs out of memory after ~70 page loads. Strangely, the garbage collector does not seem to make any difference either, except making my script much slower. It is said that the newer (3.1) core solves this issue, but unfortunately I can't wait a year (or more) for the S60 port to come. Here's how my script looks after adding every little trick I've found:

        import urllib2, httplib, gc

        while True:
            url = "http://something.com/foo.php?parameter=" + value
            f = urllib2.urlopen(url)
            f.read(1)
            f.fp._sock.recv = None  # hacky avoidance
            f.close()
            del f
            gc.collect()

    Any suggestions on how to make it work forever without getting the "cannot allocate memory" error? Thanks in advance, cheers, b_m

    Update: I've managed to connect 92 times before it ran out of memory, but it's still not good enough.

    Update 2: I tried the socket method as suggested earlier; this is the second best (wrong) solution so far:

        class UpdateSocketThread(threading.Thread):
            def run(self):
                global data
                while 1:
                    url = "/foo.php?parameter=%d" % data
                    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
                    s.connect(('something.com', 80))
                    s.send('GET ' + url + ' HTTP/1.0\r\n\r\n')
                    s.close()
                    sleep(1)

    I tried the little tricks from above too. The thread closes after ~50 uploads (the phone has 50 MB of memory left; obviously the Python shell does not).

    UPDATE: I think I'm getting closer to the solution! I tried sending multiple requests without closing and reopening the socket. This may be the key, since this method will only leave one open file descriptor. The problem is:

        import socket
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.connect(("something.com", 80))
        s.send("test")  # returns 4 (sent bytes, which is cool)
        s.send("test")  # 4
        s.send("test")  # 4
        s.send("GET /foo.php?parameter=bar HTTP/1.0\r\n\r\n")  # returns the number of sent bytes, ok
        s.send("GET /foo.php?parameter=bar HTTP/1.0\r\n\r\n")  # returns 0 on the phone, error on Windows 7*
        s.send("GET /foo.php?parameter=bar HTTP/1.0\r\n\r\n")  # returns 0 on the phone, error on Windows 7*
        s.send("test")  # returns 0, strange...

    *: error message: 10053, software caused connection abort

    Why can't I send multiple messages??

    Read the article

  • Add and Subtract 128 Bit Integers in C(++)

    - by Billy ONeal
    Hello :) I'm writing a compressor for a long stream of 128-bit numbers. I would like to store the numbers as differences -- storing only the difference between consecutive numbers rather than the numbers themselves -- because the differences are smaller and can be packed into fewer bytes. However, for compression I need to subtract these 128-bit values, and for decompression I need to add them. The maximum integer size for my compiler is 64 bits wide. Does anyone have ideas for doing this efficiently? Billy3
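
    For reference, the usual approach with a 64-bit compiler is to represent the 128-bit value as two 64-bit limbs and propagate the carry or borrow by hand. A minimal sketch (the U128 struct and helper names are illustrative, not from the question):

        #include <cstdint>

        struct U128 { std::uint64_t hi, lo; };  // two-limb representation: hi:lo

        // Add: add the low limbs, detect the carry via unsigned wrap-around, fold it into the high limbs.
        U128 add128(U128 a, U128 b) {
            U128 r;
            r.lo = a.lo + b.lo;
            std::uint64_t carry = (r.lo < a.lo) ? 1 : 0;
            r.hi = a.hi + b.hi + carry;
            return r;
        }

        // Subtract: subtract the low limbs, detect the borrow, fold it into the high limbs.
        U128 sub128(U128 a, U128 b) {
            U128 r;
            r.lo = a.lo - b.lo;
            std::uint64_t borrow = (a.lo < b.lo) ? 1 : 0;
            r.hi = a.hi - b.hi - borrow;
            return r;
        }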

    Read the article

  • Monitor Network Traffic Mac

    - by Tom Irving
    I'm wondering how to go about monitoring network traffic on my Mac -- like the way Activity Monitor does it, showing the bytes / packets in and out. I know it's a bit vague, but I'm unsure of the best place to start.

    Read the article

  • Saving a bitmap to a Memorystream produces an inverted colors image

    - by Raphael
    I've created an image with GDI+ in my application and now I must convert this image to an array of bytes. My first thought was this simple code:

        public byte[] ToByte()
        {
            MemoryStream ms = new MemoryStream();
            bitmap.Save(ms, ImageFormat.Bmp);
            return ms.GetBuffer();
        }

    The problem with this approach is that when I finally save this image into a file, the colors are inverted. What am I doing wrong?

    Read the article

  • How to read a file byte by byte in Python?

    - by zaplec
    Hi, I'm trying to read a file byte by byte, but I'm not sure how to do that. I'm trying to do it like this:

        file = open(filename, 'rb')
        while 1:
            byte = file.read(8)
            # Do something...

    So does that make the variable byte contain the next 8 bits at the beginning of every loop iteration? It doesn't matter what those bytes really are. The only thing that matters is that I need to read the file in 8-bit chunks.

    Read the article

  • Monitor file disk activity programmatically (Windows)

    - by iulianchira
    In Windows 2008 R2, the Disk Activity section of Resource Monitor shows the number of bytes read from and written to files. How can I get this information programmatically, preferably using C# (or the Win32 API)? I have looked into WMI and various performance counters, but I cannot figure out whether any of them suits my needs.
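
    One possibility, sketched here against the Win32 API (also callable from C# via P/Invoke): GetProcessIoCounters reports the total bytes a given process has read and written. Note this is a per-process total, not the per-file breakdown Resource Monitor shows, so treat it as a starting point rather than the full answer:

        #include <windows.h>
        #include <iostream>

        int main() {
            IO_COUNTERS io = {};
            // Query I/O statistics for the current process; any process handle opened
            // with query-information access works here as well.
            if (GetProcessIoCounters(GetCurrentProcess(), &io)) {
                std::cout << "bytes read:    " << io.ReadTransferCount  << "\n"
                          << "bytes written: " << io.WriteTransferCount << "\n";
            }
            return 0;
        }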

    Read the article

  • In SQL Server, changing a column from varchar(255) to nvarchar

    - by JD
    Hi, I am using SQL Server 2008 Express and some of our columns are defined as varchar(255). Should I convert these columns to nvarchar(255) or nvarchar(max)? The reason I ask is that I read that nvarchar(255) would effectively store half as many characters for Unicode text (since Unicode characters take 2 bytes), whereas varchar(255) would let me store 255 characters (or is it 255 - 2 for the offset)? Would there be any performance hit from using nvarchar(max)? JD

    Read the article

  • Please help with iPhone Memory & Images, memory usage crashing app

    - by Andrew Gray
    I have an issue with memory usage relating to images, and I've searched the docs and watched the videos from CS193P and the iPhone dev site on memory management and performance. I've searched online and posted on forums, but I still can't figure it out. The app uses Core Data and simply lets the user associate text with a picture, storing the list of items in a table view that lets you add and delete items. Clicking on a row shows the image and related text. That's it. Everything runs fine in the simulator and on the device as well. I ran the Analyzer and it looked good, so I started looking at performance. I ran Leaks and everything looked good. My issue is with Object Allocations: every time I select a row and the view with the image is shown, the live bytes jump up a few MB and never go down, and my app eventually crashes due to memory usage. Sorting the live bytes column, I see two 2.72 MB mallocs (5.45 MB total), 14 CFDatas (3.58 MB total), one 2.74 MB malloc, and everything else is really small. The problem is that all the related info in Instruments is very technical, and all the problem-solving examples I've seen are just a missing release and nothing complicated. Instruments shows Core Data as the responsible library for all but one allocation (libsqlite3.dylib for the other), with [NSSQLCore _prepareResultsFromResultSet:usingFetchPlan:withMatchingRows:] as the caller for all but one (fetchResultSetReallocCurrentRow for the other), and I'm just not sure how to track down the problem. I've looked at the stack traces and opened the last instance of my code and found two culprits (below). I haven't been able to get any responses at all on this, so if anyone has any tips or pointers, I'd really appreciate it!

        // this is from the view controller that shows the title and image
        - (void)viewWillAppear:(BOOL)animated
        {
            [super viewWillAppear:animated];
            self.title = item.title;
            self.itemTitleTextField.text = item.title;
            if ([item.notes length] == 0) {
                self.itemNotesTextView.hidden = YES;
            } else {
                self.itemNotesTextView.text = item.notes;
            }
            // this is the line Instruments points to
            UIImage *image = item.photo.image;
            itemPhoto.image = image;
        }

        - (void)tableView:(UITableView *)tableView commitEditingStyle:(UITableViewCellEditingStyle)editingStyle forRowAtIndexPath:(NSIndexPath *)indexPath
        {
            if (editingStyle == UITableViewCellEditingStyleDelete) {
                // Delete the managed object for the given index path
                NSManagedObjectContext *context = [fetchedResultsController managedObjectContext];
                [context deleteObject:[fetchedResultsController objectAtIndexPath:indexPath]];

                // Save the context.
                NSError *error = nil;
                if (![context save:&error])  // this is the line Instruments points to
                {
                    NSLog(@"Unresolved error %@, %@", error, [error userInfo]);
                    exit(-1);
                }
            }
        }

    Read the article

  • access elements of a void *?

    - by user146780
    I have a void pointer and want to access elements from it. How can I convert a void * into an unsigned byte pointer so I can access its elements (which I know are actually unsigned bytes)? Thanks. (Using C++.)
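
    A minimal sketch of the cast in C++ (the dump helper is purely illustrative):

        #include <cstddef>

        void dump(const void *p, std::size_t n) {
            // Cast once to a byte pointer, then ordinary indexing works element by element.
            const unsigned char *bytes = static_cast<const unsigned char *>(p);
            for (std::size_t i = 0; i < n; ++i) {
                unsigned char b = bytes[i];  // i-th byte of the buffer
                (void)b;                     // ... do something with it
            }
        }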

    Read the article

  • How to simulate a dial-up connection for testing purposes?

    - by mawg
    I have to code a server app where clients open a TCP/IP socket, send some data and close the connection. The data packets are small -- under 100 bytes -- however there is talk of having the clients batch their transactions and send multiple packets. How can I best simulate a dial-up connection (using Delphi & Indy components, just FYI)? Is it as simple as: open connection, wait a while (what is the definition of "a while"?), close connection?
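
    A dial-up link differs mainly in bandwidth (a V.90 modem tops out around 56 kbit/s, roughly 7 KB/s) and latency, so one rough way to simulate the bandwidth side is to send in small chunks and sleep between them. A language-agnostic sketch in C++ (the sendChunk callback stands in for whatever actually writes to the socket; latency would need an extra delay before the first byte):

        #include <chrono>
        #include <thread>
        #include <cstddef>

        // Crude throttle: keep the average rate near bytesPerSec by sleeping after each chunk.
        template <typename SendChunk>
        void throttledSend(const char *data, std::size_t len, std::size_t bytesPerSec, SendChunk sendChunk) {
            const std::size_t chunk = 512;
            for (std::size_t off = 0; off < len; off += chunk) {
                std::size_t n = (len - off < chunk) ? (len - off) : chunk;
                sendChunk(data + off, n);  // actual socket write happens here
                std::this_thread::sleep_for(std::chrono::milliseconds(n * 1000 / bytesPerSec));
            }
        }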

    Read the article

  • Sending an array of arbitrary length through a socket. Endianness.

    - by Negai
    Hi everyone, I'm fighting with socket programming now and I've encountered a problem which I don't know how to solve in a portable way. The task is simple: I need to send an array of 16 bytes over the network, receive it in a client application and parse it. I know there are functions like htonl, htons and so on to use with uint16 and uint32, but what should I do with chunks of data bigger than that? Thank you.
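
    The answer depends on what the 16 bytes represent. If they are an opaque byte string (a hash, a UUID, raw bytes), individual bytes have no endianness and the array can be sent as-is; if they are really four 32-bit integers, each word is converted separately. A minimal sketch under that latter assumption (POSIX headers shown; Winsock provides the same functions):

        #include <arpa/inet.h>  // htonl / ntohl
        #include <cstdint>

        // Convert four host-order 32-bit words to network byte order in place before sending.
        void to_network_order(std::uint32_t words[4]) {
            for (int i = 0; i < 4; ++i)
                words[i] = htonl(words[i]);
        }
        // The receiver applies ntohl() to each word after reading all 16 bytes.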

    Read the article

  • Using memcpy to change a jnz to a jmp.

    - by Phil
    I haven't used memcpy much, but here's my code, which doesn't work:

        memcpy((PVOID)(enginebase + 0x74C9D), (void *)0xEB, 2);

    (enginebase + 0x74C9D) is a pointer to the address of the bytes that I want to patch. (void *)0xEB is the opcode for the kind of jmp that I want. The only problem is that this crashes the instant the line runs. I don't know what I'm doing wrong -- any insight?
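
    For comparison, a hedged sketch of how such a patch is usually written on Windows: the second argument of memcpy must point at a buffer that contains the opcode byte (passing 0xEB cast to a pointer makes memcpy read from address 0xEB), and the target page normally has to be made writable first. This sketch assumes the instruction being patched is the 2-byte short jnz form, so only the opcode byte changes and the existing displacement byte is kept:

        #include <windows.h>
        #include <cstdint>
        #include <cstring>

        void patchJnzToJmp(std::uintptr_t enginebase) {
            unsigned char opcode = 0xEB;  // short JMP; the displacement byte that follows is left untouched
            void *target = reinterpret_cast<void *>(enginebase + 0x74C9D);

            DWORD oldProtect = 0;
            if (VirtualProtect(target, 1, PAGE_EXECUTE_READWRITE, &oldProtect)) {
                std::memcpy(target, &opcode, 1);  // source is a real buffer holding the byte
                VirtualProtect(target, 1, oldProtect, &oldProtect);
                FlushInstructionCache(GetCurrentProcess(), target, 1);
            }
        }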

    Read the article

  • Exact textual representation of an IEEE "double"

    - by CyberShadow
    I need to represent an IEEE 754-1985 double (64-bit) floating point number in a human-readable textual form, with the condition that the textual form can be parsed back into exactly the same (bit-wise) number. Is this possible/practical to do without just printing the raw bytes? If yes, code to do this would be much appreciated.
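
    Yes -- assuming an IEEE 754 double and a correctly rounded strtod, either the C99/C++11 hexadecimal float format ("%a") or 17 significant decimal digits ("%.17g") round-trips the value exactly (NaN needs separate handling, since NaN != NaN). A small sketch:

        #include <cstdio>
        #include <cstdlib>

        int main() {
            double x = 0.1;
            char buf[64];

            // Hexadecimal floating point is exact by construction and parses back with strtod.
            std::snprintf(buf, sizeof(buf), "%a", x);
            double back = std::strtod(buf, nullptr);
            std::printf("%s  round-trips: %d\n", buf, x == back);

            // Alternatively, 17 significant decimal digits are enough for any finite double.
            std::snprintf(buf, sizeof(buf), "%.17g", x);
            back = std::strtod(buf, nullptr);
            std::printf("%s  round-trips: %d\n", buf, x == back);
            return 0;
        }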

    Read the article

  • Access violation when running native C++ application that uses a /clr built DLL

    - by doobop
    I'm reorganizing a legacy mixed (managed and unmanaged DLLs) application so that the main application segment is unmanaged MFC, which will call a C++ DLL compiled with the /clr flag that bridges the communication between the managed (C# DLLs) and unmanaged code. Unfortunately, my changes have resulted in an access violation that occurs before the application's InitInstance() is called. This makes it very difficult to debug. The only information I get is the following stack trace:

        > 64006108()
          ntdll.dll!_ZwCreateMutant@16() + 0xc bytes
          kernel32.dll!_CreateMutexW@12() + 0x7a bytes

    So, here are some scenarios I've tried.

    - Turned on Exceptions - Win32 Exceptions - c0000005 Access Violation to break when thrown. Still, the most detail I get is the above stack trace. I've tried stepping through the application with F10, but it fails before any breakpoints are hit and fails with the above stack trace.

    - I've stubbed out the bridge DLL so that it only has one method, which returns a bool and is coded to just return false (no C# code called):

        bool DllPassthrough::IsFailed()
        {
            return false;
        }

    If the stubbed-out DLL is compiled with the /clr flag, the application fails. If it is compiled without the /clr flag, the application runs.

    - I've created a stub MFC application using the Visual Studio wizard for multi-document applications and called DllPassthrough::IsFailed(). This succeeds even with the /clr flag used to compile the DLL.

    - I've tried doing a manual LoadLibrary on winmm.lib as outlined in the following note: Access violation when using c++/cli. The application still fails.

    So, my questions are: how do I solve the problem? Any hints, strategies, or previous incidents would help. And, failing that, how can I get more information on which code segment or library is causing the access violation? If I try more involved workarounds like doing LoadLibrary calls, I'd like to narrow it down to the failing libraries. Thanks. BTW, we are using Visual Studio 2008 and the project is being built against the .NET 2.0 framework for the managed sections.

    Read the article

  • HTTP caching confusion

    - by Keith
    I'm not sure whether this is a server issue, or whether I'm failing to understand how HTTP caching really works. I have an ASP MVC application running on IIS7. There's a lot of static content as part of the site, including lots of CSS, Javascript and image files. For these files I want the browser to cache them for at least a day -- our .css, .js, .gif and .png files rarely change. My web.config goes like this:

        <system.webServer>
          <staticContent>
            <clientCache cacheControlMode="UseMaxAge" cacheControlMaxAge="1.00:00:00" />
          </staticContent>
        </system.webServer>

    The problem I'm getting is that the browser (tested in Chrome, IE8 and Firefox) doesn't seem to be caching the files as I'd expect. I've got the default settings (check for newer pages automatically in IE). On the first visit the content downloads as expected:

        HTTP/1.1 200 OK
        Cache-Control: max-age=86400
        Content-Type: image/gif
        Last-Modified: Fri, 07 Aug 2009 09:55:15 GMT
        Accept-Ranges: bytes
        ETag: "3efeb2294517ca1:0"
        Server: Microsoft-IIS/7.0
        X-Powered-By: ASP.NET
        Date: Mon, 07 Jun 2010 14:29:16 GMT
        Content-Length: 918

        <content>

    I think that the Cache-Control: max-age=86400 should tell the browser not to request the page again for a day. OK, so now the page is reloaded and the browser requests the image again. This time it gets an empty response with these headers:

        HTTP/1.1 304 Not Modified
        Cache-Control: max-age=86400
        Last-Modified: Fri, 07 Aug 2009 09:55:15 GMT
        Accept-Ranges: bytes
        ETag: "3efeb2294517ca1:0"
        Server: Microsoft-IIS/7.0
        X-Powered-By: ASP.NET
        Date: Mon, 07 Jun 2010 14:30:32 GMT

    So it looks like the browser has sent the ETag back (as a unique id for the resource), and the server has come back with a 304 Not Modified -- telling the browser that it can use the previously downloaded file. That seems correct for many caching situations, but here I don't want the extra round trip. I don't care if the image gets out of date when the file on the server changes. There are a lot of these files (even with sprite maps and the like) and many of our clients have very slow networks. Each round trip to ping for that 304 status takes about a 10th to a 5th of a second. Many clients also have IE6, which only allows 2 HTTP connections at a time. The net result is that our application appears very slow for these clients, with every page taking an extra couple of seconds to check that the static content hasn't changed. What response header am I missing that would cause the browser to aggressively cache the files? How would I set this in a .Net web.config for IIS7? Am I misunderstanding how HTTP caching works in the first place?

    Read the article

  • Assembly Load and loading the "sub-modules" dependencies - "cannot find the file specified"

    - by Ted
    There are several questions out there that ask the same thing; however, I couldn't understand the answers they received, so here goes. Similar questions:

        http://stackoverflow.com/questions/1874277/dynamically-load-assembly-and-manually-force-path-to-get-referenced-assemblies
        http://stackoverflow.com/questions/22012/loading-assemblies-and-its-dependencies-closed

    The question in short: I need to figure out how dependencies, i.e. references in my modules, can be loaded dynamically. Right now I am getting "The system cannot find the file specified" on assemblies referenced in my so-called modules. I cannot really work out how to use the AssemblyResolve event...

    The longer version: I have one application, MODULECONTROLLER, that loads separate modules. These "separate modules" are located in well-known subdirectories, like

        appBinDir\Modules\Module1
        appBinDir\Modules\Module2

    Each directory contains all the DLLs that exist in the bin directory of those projects after a build. The MODULECONTROLLER loads all the DLLs contained in those folders using this code:

        byte[] bytes = File.ReadAllBytes(dllFileFullPath);
        Assembly assembly = null;
        assembly = Assembly.Load(bytes);

    I am, as you can see, loading the byte[] array (so I don't lock the DLL files). Now, in for example MODULE1, I have a static reference called MyGreatXmlProtocol. MyGreatXmlProtocol.dll therefore also exists in the directory appBinDir\Modules\Module1 and is loaded using the above code. When code in MODULE1 tries to use this MyGreatXmlProtocol, I get:

        Could not load file or assembly 'MyGreatXmlProtocol, Version=1.0.3797.26527, Culture=neutral, PublicKeyToken=null' or one of its dependencies. The system cannot find the file specified.

    In a post (like this one) they say:

        To my understanding reflection will load the main assembly and then search the GAC for the referenced assemblies; if it cannot find it there, you can then incorporate an AssemblyResolve event.

    First: is it really necessary to use the AssemblyResolve event to make this work? Shouldn't my different MODULEs themselves load their DLLs, as they are statically referenced? Second: if AssemblyResolve is the way to go -- how do I use it? I have attached a handler to the event, but I never get anything for MyGreatXmlProtocol...

    EDIT: code for the AssemblyResolve event handler:

        public GUI()
        {
            InitializeComponent();
            AppDomain.CurrentDomain.AssemblyResolve += new ResolveEventHandler(CurrentDomain_AssemblyResolve);
            ...
        }

        Assembly CurrentDomain_AssemblyResolve(object sender, ResolveEventArgs args)
        {
            Console.WriteLine(args.Name);
            return null;
        }

    Hope I wasn't too fuzzy =) Thanks

    Read the article

  • Passing Activity A's data into Activity B

    - by user1058153
    What I am trying to do here is pass the data from Activity A to Activity B. In Activity A there are mainly three text boxes for me to key in something, then a button to go to Activity B (a confirmation page), and in Activity B I should be able to show what I keyed in in Activity A. I am new to Android, so can someone guide me through this?

    In Activity A:

        @Override
        public void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            setContentView(R.layout.activitya);

            Textview01 = (EditText) this.findViewById(R.id.txtView1);
            Textview02 = (EditText) this.findViewById(R.id.txtView2);
            Textview03 = (EditText) this.findViewById(R.id.txtView3);

            mButton = (Button) findViewById(R.id.button);
            mButton.setOnClickListener(new View.OnClickListener() {
                public void onClick(View v) {
                    Intent i = new Intent(ActivityA.this, ActivityB.class);
                    i.putExtra("Textview01", txtView1.getText().toString());
                    i.putExtra("Textview02", txtView2.getText().toString());
                    i.putExtra("Textview03", txtView3.getText().toString());
                    startActivity(i);

    In Activity B:

        @Override
        public void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            setContentView(R.layout.confirmbooking);

            TextView txtPickup = (TextView) this.findViewById(R.id.txtPickup);
            TextView txtLocation = (TextView) this.findViewById(R.id.txtLocation);
            TextView txtDestination = (TextView) this.findViewById(R.id.txtDestination);

            txtLocation.setText(getIntent().getStringExtra("Location"));
            txtPickup.setText(getIntent().getStringExtra("Pick Up Point"));
            txtDestination.setText(getIntent().getStringExtra("Destination"));

    In my Activity B XML:

        <?xml version="1.0" encoding="utf-8"?>
        <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
            android:orientation="vertical"
            android:layout_width="fill_parent"
            android:layout_height="fill_parent">
            <TextView android:layout_width="fill_parent" android:layout_height="wrap_content"
                android:text="txtView01:" />
            <TextView android:layout_width="wrap_content" android:layout_height="wrap_content"
                android:id="@+id/txtView01"></TextView>
            <TextView android:id="@+id/TextView02" android:layout_width="wrap_content"
                android:layout_height="wrap_content" android:text="txtView02:"></TextView>
            <TextView android:layout_width="wrap_content" android:layout_height="wrap_content"
                android:id="@+id/txtView02"></TextView>
            <TextView android:id="@+id/TextView02" android:layout_width="wrap_content"
                android:layout_height="wrap_content" android:text="txtView03:"></TextView>
            <TextView android:layout_width="wrap_content" android:layout_height="wrap_content"
                android:id="@+id/txtView03"></TextView>
            <Button android:id="@+id/btnButton" android:layout_height="wrap_content"
                android:layout_width="fill_parent" android:text="Book now" />
        </LinearLayout>

    Can someone tell me if this is correct? I'm getting some error, like a popup saying Instrumental.class. LogCat shows:

        11-26 17:27:40.895: INFO/ActivityManager(52): Starting activity: Intent { cmp=ActivityA/.ActivityB (has extras) }
        11-26 17:27:42.956: DEBUG/dalvikvm(252): GC_EXPLICIT freed 156 objects / 11384 bytes in 346ms
        11-26 17:27:47.815: DEBUG/dalvikvm(288): GC_EXPLICIT freed 31 objects / 1496 bytes in 161ms

    Read the article

  • Moseycode Install Failure

    - by scout
    I am trying to install moseycode-0.2.1.apk on the emulator. I get the following error with both moseycode-0.2.0 and moseycode-0.2.1:

        733 KB/s (410936 bytes in 0.546s)
            pkg: /data/local/tmp/moseycode-0.2.1.apk
        Failure [INSTALL_PARSE_FAILED_MANIFEST_MALFORMED]

    I tried emulators (AVDs) with Google API 7, Android 2.1 and Google API 6. I have the latest version of the Android SDK. Please let me know what's wrong.

    Read the article

  • AudioConverterConvertBuffer problem with insz error

    - by Samuel
    Hi codegurus, I have a problem with the function AudioConverterConvertBuffer. Basically I want to convert from this format:

        _streamFormat.mFormatFlags = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked | 0;
        _streamFormat.mBitsPerChannel = 16;
        _streamFormat.mChannelsPerFrame = 2;
        _streamFormat.mBytesPerPacket = 4;
        _streamFormat.mBytesPerFrame = 4;
        _streamFormat.mFramesPerPacket = 1;
        _streamFormat.mSampleRate = 44100;
        _streamFormat.mReserved = 0;

    to this format:

        _streamFormatOutput.mFormatFlags = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked | 0; //| kAudioFormatFlagIsNonInterleaved | 0;
        _streamFormatOutput.mBitsPerChannel = 16;
        _streamFormatOutput.mChannelsPerFrame = 1;
        _streamFormatOutput.mBytesPerPacket = 2;
        _streamFormatOutput.mBytesPerFrame = 2;
        _streamFormatOutput.mFramesPerPacket = 1;
        _streamFormatOutput.mSampleRate = 44100;
        _streamFormatOutput.mReserved = 0;

    What I want to do is extract one audio channel (left or right) from an LPCM buffer based on the input format, to make the output mono. Some logic code to convert is as follows. This sets the channel map for the PCM output file:

        SInt32 channelMap[1] = {0};
        status = AudioConverterSetProperty(converter, kAudioConverterChannelMap,
                                           sizeof(channelMap), channelMap);

    and this converts the buffer in a while loop:

        AudioBufferList audioBufferList;
        CMBlockBufferRef blockBuffer;
        CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(sampBuffer, NULL, &audioBufferList,
                                                                sizeof(audioBufferList), NULL, NULL, 0, &blockBuffer);

        for (int y = 0; y < audioBufferList.mNumberBuffers; y++) {
            AudioBuffer audioBuffer = audioBufferList.mBuffers[y];
            //frames = audioBuffer.mData;
            NSLog(@"the number of channel for buffer number %d is %d", y, audioBuffer.mNumberChannels);
            NSLog(@"The buffer size is %d", audioBuffer.mDataByteSize);

            numBytesIO = audioBuffer.mDataByteSize;
            convertedBuf = malloc(sizeof(char) * numBytesIO);
            status = AudioConverterConvertBuffer(converter, audioBuffer.mDataByteSize, audioBuffer.mData,
                                                 &numBytesIO, convertedBuf);
            char errchar[10];
            NSLog(@"status audio converter convert %d", status);
            if (status != 0) {
                NSLog(@"Fail conversion");
                assert(0);
            }
            NSLog(@"Bytes converted %d", numBytesIO);

            status = AudioFileWriteBytes(mRecordFile, YES, countByteBuf, &numBytesIO, convertedBuf);
            NSLog(@"status for writebyte %d, bytes written %d", status, numBytesIO);
            free(convertedBuf);
            if (numBytesIO != audioBuffer.mDataByteSize) {
                NSLog(@"Something wrong in writing");
                assert(0);
            }
            countByteBuf = countByteBuf + numBytesIO;
        }

    But the insz problem is there... so it can't convert. I would appreciate any input. Thanks in advance.
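
    For this specific case (interleaved 16-bit stereo in, 16-bit mono out at the same sample rate) the conversion can also be done by hand, which sidesteps the converter's size bookkeeping entirely: the output is exactly half the size of the input, since each 4-byte stereo frame becomes one 2-byte mono sample. A minimal C-style sketch (function name illustrative, channel 0 assumed to be the left channel):

        #include <cstdint>
        #include <cstddef>

        // Copy only the left sample of each interleaved stereo frame into the mono buffer.
        // Returns the number of output bytes, which is always inBytes / 2.
        std::size_t extractLeftChannel(const std::int16_t *stereo, std::size_t inBytes, std::int16_t *mono) {
            std::size_t frames = inBytes / (2 * sizeof(std::int16_t));  // 4 input bytes per stereo frame
            for (std::size_t i = 0; i < frames; ++i)
                mono[i] = stereo[2 * i];  // 2*i is left, 2*i + 1 is right
            return frames * sizeof(std::int16_t);
        }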

    Read the article

  • .NET Object Dump

    - by Thomas
    Hi all, I have a question about the dump of an object:

        0:000> !do 0x012817b8
        Name: blabla.Union2
        MethodTable: 009231ac
        EEClass: 00921548
        Size: 16(0x10) bytes
        Fields:
              MT    Field   Offset                 Type VT     Attr    Value Name
        790fd0f0  4000003        4        System.Object  0 instance 00000000 o
        7912d7c0  4000004        8       System.Int32[]  0 instance 00000000 arr

    What are the meanings of Field, Offset and VT?

    Read the article

  • Interoperability between two AES algorithms

    - by lpfavreau
    Hello, I'm new to cryptography and I'm building some test applications to try to understand the basics of it. I'm not trying to build the algorithms from scratch, but I'm trying to make two different AES-256 implementations talk to each other. I've got a database that was populated with this Javascript implementation, stored in Base64. Now I'm trying to get an Objective-C method to decrypt its content, but I'm a little lost as to where the differences between the implementations are. I'm able to encrypt/decrypt in Javascript and I'm able to encrypt/decrypt in Cocoa, but I cannot make a string encrypted in Javascript decrypt in Cocoa or vice versa. I'm guessing it's related to the initialization vector, nonce, counter mode of operation, or all of these, which quite frankly don't speak to me at the moment. Here's what I'm using in Objective-C, adapted mainly from this and this:

        @implementation NSString (Crypto)

        - (NSString *)encryptAES256:(NSString *)key {
            NSData *input = [self dataUsingEncoding:NSUTF8StringEncoding];
            NSData *output = [NSString cryptoAES256:input key:key doEncrypt:TRUE];
            return [Base64 encode:output];
        }

        - (NSString *)decryptAES256:(NSString *)key {
            NSData *input = [Base64 decode:self];
            NSData *output = [NSString cryptoAES256:input key:key doEncrypt:FALSE];
            return [[[NSString alloc] initWithData:output encoding:NSUTF8StringEncoding] autorelease];
        }

        + (NSData *)cryptoAES256:(NSData *)input key:(NSString *)key doEncrypt:(BOOL)doEncrypt {
            // 'key' should be 32 bytes for AES256, will be null-padded otherwise
            char keyPtr[kCCKeySizeAES256 + 1]; // room for terminator (unused)
            bzero(keyPtr, sizeof(keyPtr));     // fill with zeroes (for padding)

            // fetch key data
            [key getCString:keyPtr maxLength:sizeof(keyPtr) encoding:NSUTF8StringEncoding];

            NSUInteger dataLength = [input length];

            // See the doc: For block ciphers, the output size will always be less than or
            // equal to the input size plus the size of one block.
            // That's why we need to add the size of one block here
            size_t bufferSize = dataLength + kCCBlockSizeAES128;
            void *buffer = malloc(bufferSize);

            size_t numBytesCrypted = 0;
            CCCryptorStatus cryptStatus = CCCrypt(doEncrypt ? kCCEncrypt : kCCDecrypt,
                                                  kCCAlgorithmAES128,
                                                  kCCOptionECBMode | kCCOptionPKCS7Padding,
                                                  keyPtr,
                                                  kCCKeySizeAES256,
                                                  nil,            // initialization vector (optional)
                                                  [input bytes],
                                                  dataLength,     // input
                                                  buffer,
                                                  bufferSize,     // output
                                                  &numBytesCrypted);

            if (cryptStatus == kCCSuccess) {
                // the returned NSData takes ownership of the buffer and will free it on deallocation
                return [NSData dataWithBytesNoCopy:buffer length:numBytesCrypted];
            }

            free(buffer); // free the buffer
            return nil;
        }

        @end

    Of course, the input is Base64-decoded beforehand. I see that each encryption with the same key and same content in Javascript gives a different encrypted string, which is not the case with the Objective-C implementation, which always gives the same encrypted string. I've read the answers to this post and it makes me believe I'm right about something along the lines of vector initialization, but I'd need your help to pinpoint what's going on exactly. Thank you!
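
    For what it's worth, a hedged sketch of the CommonCrypto side using CBC with an explicit IV (CBC is what CCCrypt uses when kCCOptionECBMode is not passed). Getting the two sides to interoperate also requires agreeing on the mode itself (the nonce/counter wording suggests the Javascript library uses a counter mode), on how the passphrase is turned into a 32-byte key, and on how the IV/nonce is stored next to the ciphertext -- that last point is also why the Javascript output differs on every run with the same key and plaintext:

        #include <CommonCrypto/CommonCryptor.h>
        #include <cstddef>

        // Same CCCrypt call as in the category above, but CBC with a caller-supplied 16-byte IV.
        // Both sides must use the same IV (typically random per message and shipped with the ciphertext).
        CCCryptorStatus encryptCBC(const void *key,             // 32 bytes for AES-256
                                   const unsigned char iv[16],  // shared/transmitted IV
                                   const void *in, size_t inLen,
                                   void *out, size_t outCap, size_t *outMoved) {
            return CCCrypt(kCCEncrypt, kCCAlgorithmAES128, kCCOptionPKCS7Padding,
                           key, kCCKeySizeAES256, iv,
                           in, inLen, out, outCap, outMoved);
        }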

    Read the article
