Search Results

Search found 13619 results on 545 pages for 'memory mapped'.


  • UIViewController not loading a UIView

    - by Cosizzle
    Hey, I'm playing around with a script my teacher provided for a table-based application. However I can't seem to get my own view to load.

    Files:
      - SubViewOneController (a sub view, also has a nib)
      - TapViewController (custom view controller I created and want to add to a cell)
      - RootViewController (main controller which loads in the views)
      - SimpleNavAppDelegate

    How it works: within the RootViewController there's an NSArray that holds NSDictionary objects, which is set up in the -(void)awakeFromNib method:

        - (void)awakeFromNib
        {
            // we'll keep track of our view controllers in this array
            views = [[NSMutableArray alloc] init]; // when using alloc you are responsible for it, and you will have to release it

            // LOADING IN CUSTOM VIEW HERE:
            // allocate a controller and add it to our views array as a dictionary item
            TapViewController *tapBoardView = [[TapViewController alloc] init];
            // push onto array
            [views addObject:[NSDictionary dictionaryWithObjectsAndKeys:
                                  @"Tab Gestures", @"title",
                                  tapBoardView, @"controller", nil]];
            [tapBoardView release]; // release the memory

            SubViewOneController *subViewOneController = [[SubViewOneController alloc] init];
            // This will set the 2nd level title
            subViewOneController.title = @"Swipe Gestures";
            // push it onto the array
            [views addObject:[NSDictionary dictionaryWithObjectsAndKeys:
                                  @"Swipe Gestures", @"title",
                                  subViewOneController, @"controller", nil]];
            [subViewOneController release]; // release the memory
        }

    Later on I handle the table selection:

        - (void)tableView:(UITableView *)tableView didSelectRowAtIndexPath:(NSIndexPath *)indexPath
        {
            // OUTPUT -- see console
            NSLog(@"indexPath %i", indexPath.row);

            NSLog(@"view object: %@", [views objectAtIndex:indexPath.row]);
            // OUTPUT: view object: controller = <TapViewController: 0x3b0e290>; title = "Tab Gestures";

            // ----- Hard-coding the controller and nib file in does work, so it's not a linkage issue -----
            // UNCOMMENT TO SEE WORKING -- and comment out the section below.
            //TapViewController *tapContoller = [[TapViewController alloc] initWithNibName:@"TapBoardView" bundle:nil];
            //NSLog(@"tapController: %@", tapContoller);
            // OUTPUT: tapController: <TapViewController: 0x3b2b360>
            //[self.navigationController pushViewController:tapContoller animated:YES];

            // ----- Random tests -----
            //UIViewController *targetViewController = [[views objectAtIndex: 0] objectForKey:@"controller"]; // DOES NOT WORK

            // LOADS THE SECOND CELL (SubViewOneController), however it will not load TapViewController
            UIViewController *targetViewController = [[views objectAtIndex:indexPath.row] objectForKey:@"controller"];
            NSLog(@"target: %@", targetViewController);
            // OUTPUT: target: <TapViewController: 0x3b0e290>
            [self.navigationController pushViewController:targetViewController animated:YES];
        }

    Reading the comments you should be able to see that hard-coding the view in works, but loading it from the views NSArray does not, even though the array clearly holds the object (the NSLog output proves that). Everything is linked up and working within the TapViewController nib file. So I'm kind of stuck on this one; any help would be great!
Thanks guys
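
    A guess at what is going on, sketched below: a plain init only looks for a nib named after the class, so TapViewController comes up with an empty view, while the hard-coded test works precisely because it uses initWithNibName:. The nib name here is taken from that working line; adjust it if yours differs.

        // In awakeFromNib, build the controller the same way the working
        // hard-coded test does, so its view actually comes from the nib.
        TapViewController *tapBoardView =
            [[TapViewController alloc] initWithNibName:@"TapBoardView" bundle:nil];

        [views addObject:[NSDictionary dictionaryWithObjectsAndKeys:
                              @"Tab Gestures", @"title",
                              tapBoardView, @"controller", nil]];
        [tapBoardView release];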

    Read the article

  • OpenGL FrameBuffer Objects weird behavior

    - by Ben Jones
    My algorithm is this:

      1. Render the scene to an FBO with shadow mapping from multiple locations
      2. Render the scene to the screen with shadow mapping
      3. ...black magic that I still have to implement...
      4. Combine the samples from step 1 with the image from step 2

    I'm trying to debug steps 1 and 2 and am coming across strange behavior. My algorithm for each shadow-mapped pass is: render the scene to an FBO connected to a depth array texture from the POV of each light, then render the scene from the viewpoint and use vertex/fragment shaders to compare the depths. When I run my algorithm this way:

        render from point to FBO
        render from point to screen
        glutSwapBuffers()

    the normal vectors in the screen pass appear to be incorrect (inverted possibly). I'm pretty sure that's the issue because my diffuse lighting calculation is incorrect, but the material colors are correct, and the shadows appear in the correct places. So it seems like the only thing that could be the culprit is the normals. However, if I do

        render from point to FBO
        render from point to screen
        glutSwapBuffers()   // wrong here
        render from point to screen
        glutSwapBuffers()

    the second pass is correct. I assume there's a problem with my framebuffer calls. Can anyone see what the problem is from the log below? It's from a bugle trace grepped for 'buffer', with a few edits to make it a little more clear. Thanks!

        [INFO] trace.call: glGenFramebuffersEXT(1, 0xdfeb90 - { 1 })
        [INFO] trace.call: glGenFramebuffersEXT(1, 0xdfebac - { 2 })
        [INFO] trace.call: glBindFramebufferEXT(GL_FRAMEBUFFER, 1)
        [INFO] trace.call: glDrawBuffer(GL_NONE)
        [INFO] trace.call: glReadBuffer(GL_NONE)
        [INFO] trace.call: glBindFramebufferEXT(GL_FRAMEBUFFER, 0)
        // start render to FBO
        [INFO] trace.call: glBindFramebufferEXT(GL_FRAMEBUFFER, 2)
        [INFO] trace.call: glReadBuffer(GL_NONE)
        [INFO] trace.call: glFramebufferTexture2DEXT(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, 2, 0)
        [INFO] trace.call: glFramebufferTexture2DEXT(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, 3, 0)
        [INFO] trace.call: glDrawBuffer(GL_COLOR_ATTACHMENT0)
        // bind to the FBO attached to a depth tex array for shadows
        [INFO] trace.call: glBindFramebufferEXT(GL_FRAMEBUFFER, 1)
        [INFO] trace.call: glFramebufferTextureLayerARB(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, 1, 0, 0)
        [INFO] trace.call: glClear(GL_DEPTH_BUFFER_BIT)
        // draw geometry
        // bind to the FBO I want the shadow-mapped image rendered to
        [INFO] trace.call: glBindFramebufferEXT(GL_FRAMEBUFFER, 2)
        [INFO] trace.call: glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
        // draw geometry
        // draw-to-screen pass
        // again the shadow mapping FBO
        [INFO] trace.call: glBindFramebufferEXT(GL_FRAMEBUFFER, 1)
        [INFO] trace.call: glFramebufferTextureLayerARB(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, 1, 0, 0)
        [INFO] trace.call: glClear(GL_DEPTH_BUFFER_BIT)
        // draw geometry
        // bind to the screen
        [INFO] trace.call: glBindFramebufferEXT(GL_FRAMEBUFFER, 0)
        [INFO] trace.call: glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
        // finished, swap buffers
        [INFO] trace.call: glXSwapBuffers(0xd5fc10, 0x05800002)
        // INCORRECT OUTPUT

        // second try at render to screen:
        [INFO] trace.call: glBindFramebufferEXT(GL_FRAMEBUFFER, 1)
        [INFO] trace.call: glFramebufferTextureLayerARB(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, 1, 0, 0)
        [INFO] trace.call: glClear(GL_DEPTH_BUFFER_BIT)
        // draw geometry
        [INFO] trace.call: glBindFramebufferEXT(GL_FRAMEBUFFER, 0)
        [INFO] trace.call: glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
        // draw geometry
        [INFO] trace.call: glXSwapBuffers(0xd5fc10, 0x05800002)
        // correct output
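
    Not a diagnosis, but a cheap sanity check that fits the EXT entry points in the trace: verify each FBO is complete every time its attachments change, and confirm glDrawBuffer/glReadBuffer are set the way the following pass expects.

        /* after (re)attaching textures to the currently bound FBO */
        GLenum status = glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT);
        if (status != GL_FRAMEBUFFER_COMPLETE_EXT)
            fprintf(stderr, "FBO incomplete: 0x%04x\n", status);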

    Read the article

  • MIPS assembly: how to declare integer values in the .data section?

    - by Barney
    I'm trying to get my feet wet with MIPS assembly language using the MARS simulator. My main problem now is: how do I initialize a set of memory locations so that I can access them later via assembly language instructions? For example, I want to initialize addresses 0x10010000 - 0x10010003 with the values 0x99, 0x87, 0x23, 0x45. I think this can be done in the data declaration (.data) section of my assembly program, but I'm not sure of the syntax. Is this possible? Alternatively, in the .data section, how do I specify storing integer values in some memory location (I don't care where, I just want to be able to reference them)? So I'm looking for the equivalent of the C declaration "int x = 20, y = 30, z = 90;". I know how to do that using MIPS instructions, but is it possible to declare something like that in the .data section of a MIPS assembly program?
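
    A sketch of what the .data section can look like in MARS (the labels are made up; .byte and .word lay values out at the next free data addresses, which by default start at 0x10010000):

        # data segment: four raw bytes followed by three word-sized integers
                .data
        bytes:  .byte  0x99, 0x87, 0x23, 0x45     # four consecutive bytes
        x:      .word  20                         # int x = 20
        y:      .word  30                         # int y = 30
        z:      .word  90                         # int z = 90

                .text
        main:   la    $t0, bytes                  # address of the byte block
                lb    $t1, 0($t0)                 # $t1 = 0xFFFFFF99 (lb sign-extends; use lbu to get 0x99)
                lw    $t2, x                      # $t2 = 20
                lw    $t3, y                      # $t3 = 30
                lw    $t4, z                      # $t4 = 90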

    Read the article

  • DllImport of a C++ DLL in VB.net

    - by JFLow
    Hi experts out there, I'm stuck on DLL imports with a C++ DLL and I really need help to get past this. Here is the function in the C++ DLL that I want to call from my VB.net code:

        bool LoadNewTestPlan(const char* szPlanFileName = " ");

    I've tried many ways in my VB.net but I always get the error: "Attempted to read or write protected memory. This is often an indication that other memory is corrupt." I have tried passing in Byte(), marshalling with LPStr, SafeArray, and nothing works. Here is an example of my code within the module:

        <DllImport("HPVKIfc.dll", EntryPoint:="?LoadNewTestPlan@HPVKIfc@@QAE_NPBD@Z", CharSet:=CharSet.Ansi)> _
        Public Function LoadNewTestPlan(<MarshalAs(UnmanagedType.LPStr)> ByVal pln As String) As Boolean
        End Function

    Do you see anything wrong? Thanks in advance.
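
    A hedged sketch of one workaround: the QAE in the mangled entry point marks LoadNewTestPlan as a __thiscall member function of the HPVKIfc class, so calling it through DllImport without a valid this pointer tends to produce exactly this access violation. If the DLL source (or a small helper DLL) can be touched, a flat extern "C" wrapper is one way out; the wrapper name and the way the HPVKIfc instance is obtained below are assumptions.

        // Hypothetical flat wrapper built alongside (or into) HPVKIfc.dll.
        // It owns an HPVKIfc object and forwards the call, so managed code
        // only ever sees a plain C function with no hidden this pointer.
        #include "HPVKIfc.h"   // assumed header declaring the HPVKIfc class

        extern "C" __declspec(dllexport)
        bool LoadNewTestPlanFlat(const char* szPlanFileName)
        {
            static HPVKIfc instance;                        // assumption: default-constructible
            return instance.LoadNewTestPlan(szPlanFileName);
        }

    The VB.net side would then import LoadNewTestPlanFlat with CallingConvention:=CallingConvention.Cdecl and the same LPStr marshalling, and no this pointer is involved.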

    Read the article

  • Using NServiceBus with multiple applications that each act as Publisher and Subscriber

    - by Yoann. B
    Hi, I'm trying to use NServiceBus to make 4 applications communicate with each other. All of these applications have to act as both publisher and subscriber. The only way I've found to get it working is to create a "master" queue named Server, which the MessageEndpointMappings in every application's configuration are mapped to, but I don't think that's the right way to do it... So how should I configure NServiceBus on all these applications to get this working? Thanks.
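
    For what it's worth, a sketch of the usual layout (NServiceBus 2.x app.config; the queue and assembly names are placeholders): each application owns its input queue, and every subscriber maps the message types it wants to the queue of the application that publishes them, so no shared "Server" queue is needed.

        <!-- in the subscribing application's app.config -->
        <UnicastBusConfig>
          <MessageEndpointMappings>
            <!-- messages published by application A live in A's input queue -->
            <add Messages="MyCompany.Messages.AppA" Endpoint="AppAInputQueue" />
            <!-- messages published by application B live in B's input queue -->
            <add Messages="MyCompany.Messages.AppB" Endpoint="AppBInputQueue" />
          </MessageEndpointMappings>
        </UnicastBusConfig>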

    Read the article

  • Deleting a folder in TFS

    - by Mark Kadlec
    I created a folder in a TFS Project under workspace "CPortalWS". I deleted the workspace, but now I would like to delete the folder in the project and the delete option is not available. I've tried to create a new workspace mapped to the project but I still don't get the option to delete. Is this a bug in TFS? How can I delete the folder? Any help would be appreciated.

    Read the article

  • Asynchronous readback from opengl front buffer using multiple PBO's

    - by KillianDS
    I am developing an application that needs to read back the whole frame from the front buffer of an OpenGL application. I can hijack the application's OpenGL library and insert my code on swapbuffers. At the moment I am successfully using a simple but excruciatingly slow glReadPixels command without PBOs. Now I've read about using multiple PBOs to speed things up. While I think I've found enough resources to actually program that (it isn't that hard), I have some operational questions left. I would do something like this:

      1. Create a series (e.g. 3) of PBOs.
      2. Use glReadPixels in my swapBuffers override to read data from the front buffer into a PBO (should be fast and non-blocking, right?).
      3. Create a separate thread to call glMapBufferARB, once per PBO after a glReadPixels, because this will block until the pixels are in client memory.
      4. Process the data from step 3.

    Now my main concern is of course in steps 2 and 3. I read that glReadPixels used on PBOs is non-blocking: will this be an issue if I issue new OpenGL commands right after it? Will those OpenGL commands block? Or will they continue (my guess), and if so, I guess only swapbuffers can be a problem. Will it stall, or will glReadPixels from the front buffer be many times faster than swapping (roughly every 15-30 ms), or, worst-case scenario, will swapbuffers be executed while glReadPixels is still reading data into the PBO? My current guess is the logic will do something like this: copy FRONT_BUFFER to a generic place in VRAM, then copy VRAM to RAM. But I have no idea which of those two is the real bottleneck and, more importantly, what the influence on the normal OpenGL command stream is.

    Then in step 3: is it wise to do this asynchronously in a thread separated from the normal OpenGL logic? At the moment I think not. It seems you have to restore buffer operations to normal after doing this, and I can't install synchronization objects in the original code to temporarily block those. So I think my best option is to define a certain swapbuffer delay before reading them out, e.g. calling glReadPixels on PBO i%3 and glMapBufferARB on PBO (i+2)%3 in the same thread, resulting in a delay of 2 frames. Also, when I call glMapBufferARB to use the data in client memory, will that be the bottleneck, or will glReadPixels (asynchronously) be the bottleneck?

    And finally, if you have better ideas to speed up frame readback from the GPU in OpenGL, please tell me, because this is a painful bottleneck in my current system. I hope my question is clear enough. I know the answer is probably also somewhere on the internet, but I mostly came up with results that use PBOs to keep buffers in video memory and do processing there. I really need to read the front buffer back to RAM, and I do not find any clear explanations about performance in that case (which I need; I cannot rely on "it's faster", I need to explain why it's faster). Thank you
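
    A minimal sketch of the 2-frame-delay scheme described above, assuming a GL 2.1 / ARB_pixel_buffer_object context; the buffer count, pixel format and the width/height values are illustrative.

        #include <GL/glew.h>   /* assumption: GLEW (or any loader) provides the PBO entry points */

        #define NUM_PBOS 3
        static GLuint pbo[NUM_PBOS];
        static int width = 1280, height = 720;     /* illustrative frame size */

        void init_pbos(void)
        {
            glGenBuffers(NUM_PBOS, pbo);
            for (int i = 0; i < NUM_PBOS; ++i) {
                glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo[i]);
                glBufferData(GL_PIXEL_PACK_BUFFER, width * height * 4, NULL, GL_STREAM_READ);
            }
            glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
        }

        /* called from the swapbuffers hook; f is a running frame counter */
        void readback_frame(long f)
        {
            /* kick off this frame's transfer: returns as soon as the command is queued */
            glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo[f % NUM_PBOS]);
            glReadBuffer(GL_FRONT);
            glReadPixels(0, 0, width, height, GL_BGRA, GL_UNSIGNED_BYTE, 0);

            /* map the PBO filled two frames ago; its transfer should have finished by now
               (the first two frames map buffers that are still empty) */
            glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo[(f + 2) % NUM_PBOS]);
            void *pixels = glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
            if (pixels) {
                /* ...process width*height*4 bytes in client memory... */
                glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
            }
            glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
        }

    Mapping the buffer only after two more frames have been issued is what keeps glMapBuffer from stalling on a transfer that is still in flight.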

    Read the article

  • Does the CLR store small values in 'natural' sized locations?

    - by izb
    In Java, a byte or short is stored in the JVM's 'natural' word length, i.e. for the most part, 32 bits. An exception would be an array of bytes, where each byte occupies a byte of memory. Does the CLR do the same thing? If it does do this, in what situations are there exceptions to this? E.g. how much memory does this occupy?

        struct MyStruct
        {
            short s1;
            short s2;
        }
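
    A quick way to probe the layout of this particular struct (a sketch; Marshal.SizeOf reports the sequential-layout size, and the managed in-memory layout of a two-short struct comes out the same here):

        using System;
        using System.Runtime.InteropServices;

        [StructLayout(LayoutKind.Sequential)]   // explicit, though this is already the default for structs
        struct MyStruct
        {
            public short s1;
            public short s2;
        }

        class SizeProbe
        {
            static void Main()
            {
                // Two 2-byte fields packed back to back: prints 4, not 8,
                // so small fields are not widened inside a struct's layout.
                Console.WriteLine(Marshal.SizeOf(typeof(MyStruct)));
            }
        }

    Locals and evaluation-stack slots are a different story: the JIT is free to widen a lone short to register/word size, much as the JVM does.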

    Read the article

  • Using the Valgrind tool, how can I detect which object is trying to access address 0x0?

    - by Davit Siradeghyan
    I have this output when trying to debug:

        Program received signal SIGSEGV, Segmentation fault
        0x43989029 in std::string::compare (this=0x88fd430, __str=@0xbfff9060)
            at /home/devsw/tmp/objdir/i686-pc-linux-gnu/libstdc++-v3/include/bits/char_traits.h:253
        253     { return memcmp(__s1, __s2, __n); }
        Current language: auto; currently c++

    Using Valgrind I get this output:

        ==12485== Process terminating with default action of signal 11 (SIGSEGV)
        ==12485==  Bad permissions for mapped region at address 0x0
        ==12485==    at 0x1: (within path_to_my_executable_file/executable_file)
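
    The bare addresses in that trace usually mean the executable's own frames carry no debug info (or were optimized away); rebuilding with symbols gives both Valgrind and gdb a stack that names the caller that owns the bad string. A sketch of the usual steps (program name is a placeholder):

        # rebuild with debug info and no optimization so frames and locals are visible
        g++ -g -O0 -o myprog myprog.cpp

        # Valgrind then prints a symbolic stack at the faulting access
        valgrind --tool=memcheck --num-callers=30 ./myprog

        # or reproduce under gdb and walk the stack to see whose object is involved
        gdb ./myprog
        (gdb) run
        (gdb) bt full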

    Read the article

  • SqlDataReader / DbDataReader implementation question

    - by Jose
    Does anyone know how DbDataReaders actually work? We can use SqlDataReader as an example. When you do the following:

        cmd.CommandText = "SELECT * FROM Customers";
        var rdr = cmd.ExecuteReader();
        while (rdr.Read())
        {
            // Do something
        }

    does the data reader have all of the rows in memory, or does it just grab one, and then when Read is called, does it go to the db and grab the next one? It seems bringing just one into memory would be bad for performance, but bringing all of them would make the call to ExecuteReader take a while. I know I'm the consumer of the object and it doesn't really matter how they implement it, but I'm just curious, and I figure I would probably spend a couple of hours in Reflector to get an idea of what it's doing, so I thought I'd ask someone who might know.

    Read the article

  • OutOfMemoryError: what to increase and how?

    - by Pentium10
    I have a really long collection with 10k items, and when running toString() on the object it crashes. I need to use this output somehow.

        05-21 12:59:44.586: ERROR/dalvikvm-heap(6415): Out of memory on a 847610-byte allocation.
        05-21 12:59:44.636: ERROR/dalvikvm(6415): Out of memory: Heap Size=15559KB, Allocated=12932KB, Bitmap Size=613KB
        05-21 12:59:44.636: ERROR/AndroidRuntime(6415): Uncaught handler: thread main exiting due to uncaught exception
        05-21 12:59:44.636: ERROR/AndroidRuntime(6415): java.lang.OutOfMemoryError
        05-21 12:59:44.636: ERROR/AndroidRuntime(6415):     at java.lang.AbstractStringBuilder.enlargeBuffer(AbstractStringBuilder.java:97)
        05-21 12:59:44.636: ERROR/AndroidRuntime(6415):     at java.lang.AbstractStringBuilder.append0(AbstractStringBuilder.java:155)
        05-21 12:59:44.636: ERROR/AndroidRuntime(6415):     at java.lang.StringBuilder.append(StringBuilder.java:202)
        05-21 12:59:44.636: ERROR/AndroidRuntime(6415):     at java.util.AbstractCollection.toString(AbstractCollection.java:384)

    I need a step-by-step guide on how to increase the heap for an Android application. I don't run the command line.
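
    There is no supported switch for growing the Dalvik heap of a normal app on devices of this era (android:largeHeap only arrived with API level 11), so the usual way out is to avoid building one giant String at all. A sketch, assuming the 10k items just need to end up as text that can be consumed later:

        import java.io.BufferedWriter;
        import java.io.File;
        import java.io.FileWriter;
        import java.io.IOException;
        import java.util.Collection;

        public final class CollectionDumper {
            // stream the items out one by one instead of materializing one giant String
            public static void dumpToFile(Collection<?> items, File out) throws IOException {
                BufferedWriter writer = new BufferedWriter(new FileWriter(out), 8 * 1024);
                try {
                    for (Object item : items) {
                        writer.write(String.valueOf(item));
                        writer.newLine();
                    }
                } finally {
                    writer.close();
                }
            }
        }

    The peak allocation is then one line's worth of text plus the 8 KB buffer, instead of the whole 800 KB string that AbstractCollection.toString() was trying to build.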

    Read the article

  • Why won't the VisualVM Profiler profile my application?

    - by Luke
    I've created a simple one-file Java application that iterates through a loop, calls some functions, allocates some memory, adds some numbers, etc. I run that application via Eclipse's Run As -> Java Application. The running application shows up in Java VisualVM under Local. I double-click on that application and go to the Profiler tab. The default settings are:

        Start profiling from classes:  my.main.package.**
        Do not profile classes:        java.*, javax.*, sun.*, sunw.*, com.sun.*

    I click on CPU. The CPU and Memory buttons gray out. Nothing happens. The Status says "profiling inactive". When my application terminates, the Status says "application terminated". What am I doing wrong here? Are there some settings I need to tweak? Do I need to set a VM flag when I launch my application?

    Read the article

  • In LINQ to SQL, wrapping the DataContext in a using statement - pros and cons

    - by hIpPy
    Can someone pitch in their opinion on the pros/cons of wrapping the DataContext in a using statement (or not) in LINQ to SQL, in terms of factors such as performance, memory usage, ease of coding, the right thing to do, etc.?

    Update: In one particular application I found that, without wrapping the DataContext in a using block, the memory usage kept increasing because the live objects were not released for GC. As in the example below: if I hold the reference to the List of q objects and access entities of q, I create an object graph that is not released for GC.

    DataContext with using:

        using (DBDataContext db = new DBDataContext())
        {
            var q = from x in db.Tables
                    where x.Id == someId
                    select x;
            return q.ToList();
        }

    DataContext without using, kept alive:

        DBDataContext db = new DBDataContext();
        var q = from x in db.Tables
                where x.Id == someId
                select x;
        return q.ToList();

    Thanks.

    Read the article

  • pyinstaller: 2 instances of my cherrypy app exe get executed.

    - by d.c
    I have a CherryPy app that I've made into an exe with pyinstaller. Now when I run the exe it loads itself twice into memory. Watching the task manager shows the first instance load at about 1k, then a second later a second instance of the exe loads at about 3k RAM. If I close the bigger one, both processes die. If I close the smaller one, only that one dies. Loading the exe with subprocess, if I try proc.kill(), it only kills the small one, leaving the other running in memory. Is this a side effect of using CherryPy and pyinstaller together?
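
    A guess worth ruling out: CherryPy's autoreload engine watches source files and relaunches the process, which in a frozen exe can show up as a second copy supervised by the first. A minimal sketch of turning it off (CherryPy 3.x config; the Root class, host and port are placeholders):

        import cherrypy

        class Root(object):                      # placeholder application class
            @cherrypy.expose
            def index(self):
                return "hello"

        cherrypy.config.update({
            'engine.autoreload.on': False,       # don't spawn/restart a second process
            'server.socket_host': '0.0.0.0',
            'server.socket_port': 8080,
        })
        cherrypy.quickstart(Root())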

    Read the article

  • JBoss Seam - Jetty - Virtualhosting

    - by Walter White
    Hi all, I am trying to cut back on the memory usage of my server and would like to optimize the architecture. I currently deploy 2 separate web applications to Jetty 6.1.22 that correspond to different virtual hosts. They have pretty much the same application stack, except one has fewer components and they are styled differently (content, images, css, etc.). If I change my design over to EJB / EAR + 2 embedded WARs, will that lower the memory consumption? Will that give me a single instance of JBoss Seam, Quartz, and all of my components? They must use different datasources. Thanks, Walter

    Read the article

  • How to read Hibernate mapping

    - by Lluis Martinez
    Hi, I need to know which physical column is associated with a persistent class's attribute. For example, class LDocLine has this attribute:

        private Integer lineNumber;

    which is mapped in Hibernate. The method I need is something like:

        getColumn("LDocLine", "lineNumber") = "LINENUMBER"

    I assume something like this exists internally, but I'm not sure if it's in the public API. Thanks in advance
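
    Assuming the org.hibernate.cfg.Configuration used to build the SessionFactory is still reachable, its mapping metadata exposes roughly this; a sketch using the Hibernate 3 API (entity and property names as in the question; entityName is normally the fully qualified class name):

        import org.hibernate.cfg.Configuration;
        import org.hibernate.mapping.Column;
        import org.hibernate.mapping.PersistentClass;
        import org.hibernate.mapping.Property;

        public final class MappingInspector {
            // getColumn(cfg, "my.package.LDocLine", "lineNumber") -> "LINENUMBER" (or whatever the mapping says)
            public static String getColumn(Configuration cfg, String entityName, String propertyName) {
                PersistentClass pc = cfg.getClassMapping(entityName);
                Property prop = pc.getProperty(propertyName);
                Column col = (Column) prop.getColumnIterator().next();  // first (usually only) mapped column
                return col.getName();
            }
        }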

    Read the article

  • Silverlight SOS (Son of Strike) documentation

    - by Kris Erickson
    Is there any Microsoft, or even non-official, documentation for SOS for Silverlight? Other than a few web posts I have seen zero documentation for it on MSDN. Even official documentation for the CLR version of SOS seems hard to find; one ancient article mentions a sos.htm file included in the Windows SDK, but it doesn't appear to be there any more. Any pointers to debugging Silverlight with SOS? I have found the following blog posts but am looking for more information:

        http://davybrion.com/blog/2009/08/finding-memory-leaks-in-silverlight-with-windbg/
        http://www.ningzhang.org/2008/12/19/silverlight-debugging-with-windbg-and-sos/
        http://debuggingblog.com/wp/2009/07/07/windbg-extension-sos-in-clr-40net-framework-40-ctp-net-runtime-dll-renamed-and-sos-commands-just-got-richer/
        http://www.netfxharmonics.com/label/debugging
        http://blogs.msdn.com/b/tess/archive/2008/08/21/debugging-silverlight-applications-with-windbg-and-sos-dll.aspx
        http://blogs.msdn.com/b/delay/archive/2009/03/11/where-s-your-leak-at-using-windbg-sos-and-gcroot-to-diagnose-a-net-memory-leak.aspx
        http://blogs.msdn.com/b/delay/archive/2009/03/09/controls-are-like-diapers-you-don-t-want-a-leaky-one-implementing-the-weakevent-pattern-on-silverlight-with-the-weakeventlistener-class.aspx

    Read the article

  • Convert a byte array to a class containing a byte array in C#

    - by Mathijs
    I've got a C# function that converts a byte array to a class, given its type:

        IntPtr buffer = Marshal.AllocHGlobal(rawsize);
        Marshal.Copy(data, 0, buffer, rawsize);
        object result = Marshal.PtrToStructure(buffer, type);
        Marshal.FreeHGlobal(buffer);

    I use sequential structs:

        [StructLayout(LayoutKind.Sequential)]
        public new class PacketFormat : Packet.PacketFormat { }

    This worked fine, until I tried to convert to a struct/class containing a byte array:

        [StructLayout(LayoutKind.Sequential)]
        public new class PacketFormat : Packet.PacketFormat
        {
            public byte header;
            public byte[] data = new byte[256];
        }

    Marshal.SizeOf(type) returns 16, which is too low (it should be 257) and causes Marshal.PtrToStructure to fail with the following error: "Attempted to read or write protected memory. This is often an indication that other memory is corrupt." I'm guessing that using a fixed array would be a solution, but can it also be done without having to resort to unsafe code?
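
    One managed-only way that matches what the marshaler needs here is to mark the array as an inline, fixed-length block; a sketch (no unsafe code, SizeConst matching the 256 bytes above, class simplified from the one in the question):

        using System.Runtime.InteropServices;

        [StructLayout(LayoutKind.Sequential)]
        public class PacketFormat            // same shape as the class in the question
        {
            public byte header;

            // marshaled inline as 256 bytes laid out right after 'header':
            // Marshal.SizeOf then reports 257 and PtrToStructure fills the array in place
            [MarshalAs(UnmanagedType.ByValArray, SizeConst = 256)]
            public byte[] data = new byte[256];
        }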

    Read the article

  • openmp in mex : stackoverflow error

    - by Edwin
    I have the following fragment of code that is giving me a stack overflow error:

        #pragma omp parallel shared(Mo1, Mo2, sum_normalized_p_gn, Data, Mean_Out, Covar_Out, Prior_Out, det) private(i) num_threads( number_threads )
        {
            // every thread gets a new copy
            double* normalized_p_gn = (double*)malloc(NMIX*sizeof(double));

            #pragma omp critical
            {
                int id = omp_get_thread_num();
                int threads = omp_get_num_threads();
                mexEvalString("drawnow");
            }

            #pragma omp for
            // some parallel processing.....
        }

    The variables declared as shared are created by malloc and consume a large amount of memory. There are 2 questions regarding the code above:

      1. Why would this generate the stack overflow error (i.e. segmentation fault) before it even gets into the parallel for loop? It works fine when it runs in sequential mode...
      2. Am I right to dynamically allocate memory for each thread, like "normalized_p_gn" above?

    Regards, Edwin
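
    One thing worth ruling out (an assumption, not a confirmed diagnosis): the MEX API is not thread-safe, so mexEvalString("drawnow") called from worker threads inside the parallel region can crash the process even when wrapped in a critical section. A sketch that keeps MATLAB calls on the master thread only and frees the per-thread scratch buffer, reusing the names from the fragment above:

        /* inside mexFunction, same shared variables as in the fragment above */
        #pragma omp parallel shared(Mo1, Mo2, sum_normalized_p_gn, Data, Mean_Out, Covar_Out, Prior_Out, det) private(i) num_threads(number_threads)
        {
            /* per-thread scratch space: this part is fine */
            double *normalized_p_gn = (double*)malloc(NMIX * sizeof(double));

            #pragma omp master
            {
                /* only the master (main) thread ever touches the MEX API */
                mexPrintf("running with %d threads\n", omp_get_num_threads());
                mexEvalString("drawnow");
            }
            #pragma omp barrier

            #pragma omp for
            for (i = 0; i < N; ++i) {
                /* ...parallel work using normalized_p_gn... */
            }

            free(normalized_p_gn);   /* don't leak the per-thread buffer */
        }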

    Read the article

  • How do I implement google maps yelp.com style?

    - by Craig
    I have multiple locations (20+ per page) that need to be mapped on a single map. I would like to click on a link for a location - a link that is not dynamically generated (for SEO purposes) - and have it open the info window for the respective marker on the map. The behavior should mimic http://maptheburg.com/ - but that map has its sidebar links dynamically generated. Yelp.com is the only site I have seen so far that manages to implement the Google Maps API with unobtrusive JavaScript.
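
    A sketch of the usual pattern (Maps API v3 names; the map variable, element ids and content are placeholders): register each marker under the same id the static sidebar link carries, and have the link's click handler fire the marker's own click event.

        var markers = {};
        var infoWindow = new google.maps.InfoWindow();

        function addLocation(id, latLng, html) {
            var marker = new google.maps.Marker({ position: latLng, map: map });
            google.maps.event.addListener(marker, 'click', function () {
                infoWindow.setContent(html);
                infoWindow.open(map, marker);
            });
            markers[id] = marker;
        }

        // static, crawlable link in the page, e.g. <a href="#" id="loc-12">Joe's Diner</a>
        document.getElementById('loc-12').onclick = function () {
            google.maps.event.trigger(markers['12'], 'click');
            return false;   // keep the href from navigating
        };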

    Read the article

  • ungetc in Python

    - by Dragos Toader
    Some file read functions in Python (e.g. readlines()) copy the file contents to memory (as a list). I need to process a file that's too large to be copied into memory, and as such I need to use a file pointer to access the file one byte at a time - as with C's getc(). The additional requirement I have is that I'd like to rewind the file pointer to previous bytes, like C's ungetc(). Is there a way to do this in Python? Also, in Python I can read one line at a time with readline(). Is there a way to read the previous line, going backward?
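
    A sketch of both ideas using nothing but seek()/tell() on a file opened in binary mode (the file name is just an example):

        import os

        with open("big.bin", "rb") as f:
            # getc(): read exactly one byte (b"" means EOF)
            b = f.read(1)

            # ungetc(): step the file pointer back one byte so the next read sees it again
            if b:
                f.seek(-1, os.SEEK_CUR)

            # "un-read" a whole line by remembering where it started
            pos = f.tell()
            line = f.readline()
            f.seek(pos)          # the next readline() returns the same line again

            # going to the *previous* line has no built-in; one approach is to keep a
            # list of line-start offsets as you read forward and seek back to offsets[-2]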

    Read the article
