Search Results

Search found 13860 results on 555 pages for 'core graphics'.

Page 123 of 555

  • Literature and Tutorials for Writing a Ray Tracer

    - by grrussel
    I am interested in finding recommendations on books on writing a raytracer, simple and clear implementations of ray tracing that can be seen on the web, and online resources on introductory raytracing. Ideally, the approach would be incremental and tutorial in style, and explain both the programming techniques and underlying mathematics, starting from the basics.
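    As a point of reference, ray-sphere intersection is the first building block in most introductory material. A minimal sketch in plain C (all names mine, not from any particular book): solve |o + t·d − c|² = r², a quadratic in t, and take the smallest positive root.

        #include <math.h>
        #include <stdbool.h>

        typedef struct { double x, y, z; } Vec3;

        static double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
        static Vec3 sub(Vec3 a, Vec3 b) { Vec3 r = { a.x-b.x, a.y-b.y, a.z-b.z }; return r; }

        /* Ray: origin o, unit direction d. Sphere: center c, radius r. */
        static bool ray_sphere(Vec3 o, Vec3 d, Vec3 c, double r, double *t_hit)
        {
            Vec3 oc = sub(o, c);
            double b = 2.0 * dot(oc, d);
            double cc = dot(oc, oc) - r * r;
            double disc = b * b - 4.0 * cc;      /* quadratic coefficient a == 1 for a unit direction */
            if (disc < 0.0) return false;        /* ray misses the sphere */
            double t = (-b - sqrt(disc)) / 2.0;  /* nearer of the two roots */
            if (t < 0.0) t = (-b + sqrt(disc)) / 2.0;
            if (t < 0.0) return false;           /* sphere is entirely behind the ray */
            *t_hit = t;
            return true;
        }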


  • Retrieve a CAKeyframeAnimation's key in animationDidStop

    - by JacktheSparrow
    I have multiple CAKeyframeAnimation objects, each added with a unique key, like the following:

        .....
        [myAnimation setValues:images];
        [myAnimation setDuration:1];
        ....
        [myLayer addAnimation:myAnimation forKey:@"unique key"];

    My question is: if I have multiple animations like this, each with a unique key, how do I retrieve their keys in the delegate method animationDidStop:finished:? I want to be able to do something like this:

        - (void)animationDidStop:(CAAnimation *)animation finished:(BOOL)flag {
            if (..... == @"uniquekey1") {
                // code to handle this specific animation here
            } else if (.... == @"uniquekey2") {
                // code to handle this specific animation here
            }
        }
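    One commonly suggested approach (a sketch, not from the original post): CAAnimation supports key-value coding for arbitrary keys, so you can tag each animation with your own identifier when you create it and read the tag back in the delegate. The key name animationID below is my own choice:

        // When building the animation, tag it (the key string is arbitrary):
        [myAnimation setValue:@"uniquekey1" forKey:@"animationID"];
        [myLayer addAnimation:myAnimation forKey:@"uniquekey1"];

        // In the delegate, read the tag back:
        - (void)animationDidStop:(CAAnimation *)animation finished:(BOOL)flag {
            NSString *animationID = [animation valueForKey:@"animationID"];
            if ([animationID isEqualToString:@"uniquekey1"]) {
                // handle this specific animation
            }
        }

    The tag survives even though Core Animation hands the delegate a copy of the animation you added, which is why comparing tags is more reliable than comparing object pointers.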


  • Open Source sound engine

    - by Steph Thirion
    When I started using SoundEngine (from CrashLanding and TouchFighter), I had read about a few people recommending not to use it, because according to them it was not stable enough. Still, it was the only solution I knew of to play sounds with pitch and position control without learning C++ and OpenAL, so I ignored the warnings and went on with it. But now I'm starting to worry. The 2.2 SDK introduced AVFoundation. Using both SoundEngine from CrashLanding (for sounds) and AVAudioPlayer (for music), I found out SoundEngine behaves strangely when the only existing AVAudioPlayer is released (all sounds stop until a new AVAudioPlayer is initialized). Around the same time the 2.2 SDK came out, the CrashLanding sample code was mysteriously removed from the ADC site. I'm worried there are more bad surprises to come. My question is: is anyone aware of an open source alternative to SoundEngine? Maybe even a C++ library that uses OpenAL?


  • How to deal with many to many relationships with NSFetchedResultsController?

    - by Phil Yates
    OK, so I have two entities in my data model (let's say entityA and entityB), and each has a to-many relationship to the other. I have set up an NSFetchedResultsController to fetch a bunch of entityA objects. Now I'm trying to have the section names for the table view be the titles of entityB, using:

        sectionNameKeyPath:@"entityB.title"

    This causes a problem: the section name returned from that relationship appears as ({title1}) or ({title1, title2, ..., titleN}), obviously depending on how many different entityB objects are involved. This doesn't look great in a table view and doesn't group the objects as I would like. What I would like is a section per entityB title, with entityA objects appearing under each section, under multiple sections if necessary. I'm at a loss as to how I am supposed to achieve this: whether I need to update the predicate to get an entity to appear multiple times, or whether I need to update the section and header functions to do some processing as the controller loops through the objects. Any help is appreciated :) Thanks
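    For reference, a minimal sketch of the setup being described (variable names assumed, not from the post). The catch is that sectionNameKeyPath must resolve to a single value per fetched object, and a keypath through a to-many relationship like entityB yields a whole set, whose description is exactly the "({title1, title2})" string seen above:

        NSFetchRequest *request = [[NSFetchRequest alloc] init];
        [request setEntity:[NSEntityDescription entityForName:@"EntityA"
                                       inManagedObjectContext:context]];
        // The first sort key conventionally matches the section keypath -- but
        // this keypath crosses a to-many relationship, which is exactly the
        // configuration that cannot group the way the question wants.
        [request setSortDescriptors:[NSArray arrayWithObject:
            [[[NSSortDescriptor alloc] initWithKey:@"entityB.title"
                                         ascending:YES] autorelease]]];

        NSFetchedResultsController *frc = [[NSFetchedResultsController alloc]
            initWithFetchRequest:request
            managedObjectContext:context
              sectionNameKeyPath:@"entityB.title"
                       cacheName:nil];

    Showing the same entityA under several sections is outside what a single NSFetchedResultsController provides, since it partitions its fetched objects; the usual suggestions involve fetching entityB objects instead and building the table's sections by hand.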


  • Unexplained crashes with Core Graphics

    - by Ziggy
    Hello there, I've been on this bug for a week now, and I can't solve it. I have crashes in Core Graphics calls; they happen randomly (sometimes after two minutes, sometimes right at the start), but often at the same places in the code. I have a class that just wraps a CGContext; it has a CGContextRef as a member. This object is re-created each time drawRect: is called, so the CGContextRef is always up to date. The draw calls come from the main thread only. After looking into this kind of error, it appears to be object-release related. Here is an example of a crash:

        #0 0x90d8a7a7 in ___forwarding___
        #1 0x90d8a8b2 in __forwarding_prep_0___
        #2 0x90d0d0b6 in CFRetain
        #3 0x95e54a5d in CGColorRetain
        #4 0x95e5491d in CGGStateCreateCopy
        #5 0x95e5486d in CGGStackSave
        #6 0x95e54846 in CGContextSaveGState
        #7 0x00073500 in CAutoContextState::CAutoContextState at Context.cpp:47

    The CAutoContextState class looks like this:

        class CAutoContextState
        {
        private:
            CGContextRef m_Hdc;
        public:
            CAutoContextState(const CGContextRef& Hdc)
            {
                m_Hdc = Hdc;
                CGContextSaveGState(m_Hdc);
            }

            virtual ~CAutoContextState()
            {
                CGContextRestoreGState(m_Hdc);
            }
        };

    It crashes at CGContextSaveGState(m_Hdc). Here is what I see in GDB:

        -[Not A Type retain]: message sent to deallocated instance 0x16a148b0

    When I run malloc_history on the address, I get this:

        0: 0x954cf10c in malloc_zone_malloc
        1: 0x90d0d201 in _CFRuntimeCreateInstance
        2: 0x95e3fe88 in CGTypeCreateInstanceWithAllocator
        3: 0x95e44297 in CGTypeCreateInstance
        4: 0x95e58f57 in CGColorCreate
        5: 0x71fdd in _ZN4Flux4Draw8CContext10DrawStringERKNS_7CStringEPKNS0_5CFontEPKNS0_6CBrushERKNS_5CRectENS0_12tagAlignmentESE_NS0_17tagStringTrimmingEfiPKf at /Volumes/Sources Mac/Flux/Sources/DotFlux/Projects/../Draw/CoreGraphic/Context.cpp:1029

    which points me at this line of code:

        f32 components[] = { pSolidBrush->GetColor().GetfRed(),
                             pSolidBrush->GetColor().GetfGreen(),
                             pSolidBrush->GetColor().GetfBlue(),
                             pSolidBrush->GetColor().GetfAlpha() }; // { 1.0, 0.0, 0.0, 0.8 }
        CGColorRef TextColor = CGColorCreate(rgbColorSpace, components);

    i.e. at this function: CGColorCreate(). Any help would be appreciated; I need to finish this task very soon, but I don't know how to resolve this :( Thanks.
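    A general note rather than a certain diagnosis: a crash inside CGColorRetain on a deallocated instance usually means a CGColorRef (or its color space) was over-released while the context's saved graphics state still referenced it. Under the Core Foundation Create rule, balanced ownership looks like this sketch:

        CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
        CGFloat components[] = { 1.0, 0.0, 0.0, 0.8 };

        CGColorRef textColor = CGColorCreate(rgbColorSpace, components);
        CGContextSetFillColorWithColor(context, textColor);
        // ... draw ...

        // Release exactly once each; releasing twice (or releasing while a
        // saved graphics state still needs the color) corrupts the retain count
        // and produces exactly this kind of delayed crash.
        CGColorRelease(textColor);
        CGColorSpaceRelease(rgbColorSpace);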


  • Automatic tracking algorithm

    - by nico
    Hi everyone, I'm trying to write a simple tracking routine to track some points in a movie. Essentially I have a series of 100-frame-long movies showing some bright spots on a dark background. I have ~100-150 spots per frame, and they move over the course of the movie. I would like to track them, so I'm looking for an efficient (but hopefully not overkill-to-implement) routine to do that. A few more details:

    - The spots are a few (e.g. 5x5) pixels in size.
    - The movements are not big. A spot generally does not move more than 5-10 pixels from its original position, and the movements are generally smooth.
    - The "shape" of these spots is generally fixed; they don't grow or shrink, BUT they become less bright as the movie progresses.
    - The spots don't move in a particular direction. They can move right, then left, then right again.
    - The user will select a region around each spot and then this region will be tracked, so I do not need to find the points automatically.

    As the videos are b/w, I thought I should rely on brightness. For instance, I thought I could move the region around and calculate the correlation of the region's area in the previous frame with that at the various positions in the next frame. I understand this is a quite naïve solution, but do you think it may work? Does anyone know of specific algorithms that do this? It doesn't need to be super fast; as long as it is accurate, I'm happy. Thank you, nico
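    The correlation idea described above is essentially template matching, and given the small search radius it is quite workable. A rough sketch in plain C (all names mine) of normalized cross-correlation of one template against one candidate position; slide (ox, oy) over a +/-10 pixel window and keep the maximum:

        #include <math.h>

        /* Normalized cross-correlation of a w*h template against the image
           patch whose top-left corner is (ox, oy). Returns a value in [-1, 1]. */
        double ncc(const unsigned char *img, int stride,
                   const unsigned char *tpl, int w, int h, int ox, int oy)
        {
            double meanI = 0.0, meanT = 0.0;
            int n = w * h;
            for (int y = 0; y < h; y++)
                for (int x = 0; x < w; x++) {
                    meanI += img[(oy + y) * stride + (ox + x)];
                    meanT += tpl[y * w + x];
                }
            meanI /= n; meanT /= n;

            double num = 0.0, varI = 0.0, varT = 0.0;
            for (int y = 0; y < h; y++)
                for (int x = 0; x < w; x++) {
                    double di = img[(oy + y) * stride + (ox + x)] - meanI;
                    double dt = tpl[y * w + x] - meanT;
                    num  += di * dt;
                    varI += di * di;
                    varT += dt * dt;
                }
            if (varI == 0.0 || varT == 0.0) return 0.0; /* flat patch: no signal */
            return num / sqrt(varI * varT);
        }

    Because the means are subtracted and the result is normalized, this measure is fairly insensitive to the gradual dimming of the spots mentioned above, which plain (unnormalized) correlation is not.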


  • Pitch-bend (varispeed) audio with the iPhone SDK's AudioUnit

    - by fetzig
    Hi, I'm trying to manipulate the speed (and pitch) of a sound while it plays, so I played around with the iPhone SDK's AudioUnit API. I downloaded iPhoneMultichannelMixerTest and tried to add an AUComponent to the graph (in this case a format converter), but I get the following error pretty soon when building:

        #import <AudioToolbox/AudioToolbox.h>
        #import <AudioUnit/AudioUnit.h>
        ...
        AUComponentDescription varispeed_desc(kAudioUnitType_FormatConverter,
                                              kAudioUnitSubType_Varispeed,
                                              kAudioUnitManufacturer_Apple);

        error: 'kAudioUnitSubType_Varispeed' was not declared in this scope.

    Any ideas why? The documentation on this topic doesn't help me at all (plain API docs aren't very helpful when you have no clue about the concept behind them), and there are no examples of how to wire these effects together and manipulate their properties... so maybe I'm totally wrong. Anyway, any hint is great. Thanks for any help.


  • Problem when trying to use simple Shaders + VBOs

    - by Mr.Gando
    Hello, I'm trying to convert the following function to a VBO-based function for learning purposes; it displays a static texture on screen. I'm using OpenGL ES 2.0 with shaders on the iPhone (should be almost the same as regular OpenGL in this case). This is what I have working:

        // Works!
        - (void)drawAtPoint:(CGPoint)point depth:(CGFloat)depth
        {
            GLfloat coordinates[] = {
                0, 1,
                1, 1,
                0, 0,
                1, 0
            };
            GLfloat width  = (GLfloat)_width  * _maxS,
                    height = (GLfloat)_height * _maxT;
            GLfloat vertices[] = {
                -width / 2 + point.x, -height / 2 + point.y,
                 width / 2 + point.x, -height / 2 + point.y,
                -width / 2 + point.x,  height / 2 + point.y,
                 width / 2 + point.x,  height / 2 + point.y,
            };

            glBindTexture(GL_TEXTURE_2D, _name);

            // ATTRIB_POSITION and ATTRIB_TEXCOORD are handles for the shader attributes
            glVertexAttribPointer(ATTRIB_POSITION, 2, GL_FLOAT, GL_FALSE, 0, vertices);
            glEnableVertexAttribArray(ATTRIB_POSITION);
            glVertexAttribPointer(ATTRIB_TEXCOORD, 2, GL_FLOAT, GL_FALSE, 0, coordinates);
            glEnableVertexAttribArray(ATTRIB_TEXCOORD);

            glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
        }

    I tried this to convert it to a VBO, however I don't see anything displayed on screen with this version:

        // Doesn't display anything
        - (void)drawAtPoint:(CGPoint)point depth:(CGFloat)depth
        {
            GLfloat width  = (GLfloat)_width  * _maxS,
                    height = (GLfloat)_height * _maxT;
            // Texture on-screen position (each vertex is x,y in on-screen coords)
            GLfloat position[] = {
                -width / 2 + point.x, -height / 2 + point.y,
                 width / 2 + point.x, -height / 2 + point.y,
                -width / 2 + point.x,  height / 2 + point.y,
                 width / 2 + point.x,  height / 2 + point.y,
            };
            // Texture coords from 0 to 1
            GLfloat coordinates[] = {
                0, 1,
                1, 1,
                0, 0,
                1, 0
            };

            glBindVertexArrayOES(vao);
            glGenVertexArraysOES(1, &vao);
            glGenBuffers(2, vbo);

            // Buffer 1
            glBindBuffer(GL_ARRAY_BUFFER, vbo[0]);
            glBufferData(GL_ARRAY_BUFFER, 8 * sizeof(GLfloat), position, GL_STATIC_DRAW);
            glEnableVertexAttribArray(ATTRIB_POSITION);
            glVertexAttribPointer(ATTRIB_POSITION, 2, GL_FLOAT, GL_FALSE, 0, position);

            // Buffer 2
            glBindBuffer(GL_ARRAY_BUFFER, vbo[1]);
            glBufferData(GL_ARRAY_BUFFER, 8 * sizeof(GLfloat), coordinates, GL_DYNAMIC_DRAW);
            glEnableVertexAttribArray(ATTRIB_TEXCOORD);
            glVertexAttribPointer(ATTRIB_TEXCOORD, 2, GL_FLOAT, GL_FALSE, 0, coordinates);

            // Draw
            glBindVertexArrayOES(vao);
            glBindTexture(GL_TEXTURE_2D, _name);
            glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
        }

    In both cases I'm using this simple vertex shader:

        // Vertex shader
        attribute vec2 position;  // bound to ATTRIB_POSITION
        attribute vec4 color;
        attribute vec2 texcoord;  // bound to ATTRIB_TEXCOORD
        varying vec2 texcoordVarying;
        uniform mat4 mvp;

        void main()
        {
            // You CAN'T use transpose in glUniformMatrix4fv so... here it goes.
            gl_Position = mvp * vec4(position.x, position.y, 0.0, 1.0);
            texcoordVarying = texcoord;
        }

    (gl_Position is the product mvp * vec4 because I'm simulating glOrthof in 2D with that mvp.) And this fragment shader:

        // Fragment shader
        uniform sampler2D sampler;
        varying mediump vec2 texcoordVarying;

        void main()
        {
            gl_FragColor = texture2D(sampler, texcoordVarying);
        }

    I really need help with this. Maybe my shaders are wrong for the second case? Thanks in advance.
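    A hedged observation rather than a verified fix: in the VBO version the VAO is bound before it is generated, and glVertexAttribPointer is passed the client-memory arrays (position, coordinates) instead of a byte offset into the currently bound buffer. A sketch of the usual setup order, under those assumptions:

        // One-time setup (not every frame): generate first, then bind.
        glGenVertexArraysOES(1, &vao);
        glBindVertexArrayOES(vao);
        glGenBuffers(2, vbo);

        glBindBuffer(GL_ARRAY_BUFFER, vbo[0]);
        glBufferData(GL_ARRAY_BUFFER, 8 * sizeof(GLfloat), position, GL_STATIC_DRAW);
        glEnableVertexAttribArray(ATTRIB_POSITION);
        // With a VBO bound, the last argument is an offset, not a pointer.
        glVertexAttribPointer(ATTRIB_POSITION, 2, GL_FLOAT, GL_FALSE, 0, (const GLvoid *)0);

        glBindBuffer(GL_ARRAY_BUFFER, vbo[1]);
        glBufferData(GL_ARRAY_BUFFER, 8 * sizeof(GLfloat), coordinates, GL_DYNAMIC_DRAW);
        glEnableVertexAttribArray(ATTRIB_TEXCOORD);
        glVertexAttribPointer(ATTRIB_TEXCOORD, 2, GL_FLOAT, GL_FALSE, 0, (const GLvoid *)0);

        // Per frame:
        glBindVertexArrayOES(vao);
        glBindTexture(GL_TEXTURE_2D, _name);
        glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);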


  • Display shift when adding rows to UITableView

    - by Kamchatka
    Hello, I have a UITableView displaying an underlying NSFetchedResultsController. When the fetchedResultsController is updated,

        - (void)controller:(NSFetchedResultsController *)controller
           didChangeObject:(id)anObject
               atIndexPath:(NSIndexPath *)indexPath
             forChangeType:(NSFetchedResultsChangeType)type
              newIndexPath:(NSIndexPath *)newIndexPath

    is called, and the following is executed:

        case NSFetchedResultsChangeInsert:
            [tableView insertRowsAtIndexPaths:[NSArray arrayWithObject:shiftedIndexPath]
                             withRowAnimation:UITableViewRowAnimationFade];
            break;

    cellForRowAtIndexPath: is only called for the new row (which is logical). The problem, however, is that all the values displayed in the rows of the table view are shifted down: the first row displays its own title, the second row displays the first row's title, the third row the second row's title, etc. If I repeat that, the third row will display the second row's title, the fourth the third row's title, and so on. I don't understand what can be happening, especially because cellForRowAtIndexPath: is only called for the new row (which is logical) and not for all these other rows. Additionally, if I tap one of the rows with the wrong title, it opens the right document (didSelectRowAtIndexPath: works correctly with the indexPath). Any clue about what could be happening? Thanks!
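    Not from the post, but for comparison: Apple's documented NSFetchedResultsControllerDelegate pattern brackets the row operations with beginUpdates/endUpdates and uses newIndexPath unmodified for inserts. If shiftedIndexPath adjusts the controller's index, that is one place an off-by-one display shift like this can creep in. A sketch of the canonical form:

        - (void)controllerWillChangeContent:(NSFetchedResultsController *)controller {
            [self.tableView beginUpdates];
        }

        - (void)controller:(NSFetchedResultsController *)controller
           didChangeObject:(id)anObject
               atIndexPath:(NSIndexPath *)indexPath
             forChangeType:(NSFetchedResultsChangeType)type
              newIndexPath:(NSIndexPath *)newIndexPath
        {
            switch (type) {
                case NSFetchedResultsChangeInsert:
                    // Use newIndexPath exactly as given by the controller.
                    [self.tableView insertRowsAtIndexPaths:[NSArray arrayWithObject:newIndexPath]
                                          withRowAnimation:UITableViewRowAnimationFade];
                    break;
                case NSFetchedResultsChangeDelete:
                    [self.tableView deleteRowsAtIndexPaths:[NSArray arrayWithObject:indexPath]
                                          withRowAnimation:UITableViewRowAnimationFade];
                    break;
                default:
                    break;
            }
        }

        - (void)controllerDidChangeContent:(NSFetchedResultsController *)controller {
            [self.tableView endUpdates];
        }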


  • How to configure the framesize using AudioUnit.framework on iOS

    - by Piperoman
    I have an audio app where I need to capture mic samples to encode into MP3 with ffmpeg. First I configure the audio:

        /*
         * We need to specify the format we want to work with.
         * We use linear PCM because it's uncompressed and we work on raw data.
         *
         * We want 16 bits, 2 bytes (short) per packet/frame, at 8 kHz.
         */
        AudioStreamBasicDescription audioFormat;
        audioFormat.mSampleRate       = SAMPLE_RATE;
        audioFormat.mFormatID         = kAudioFormatLinearPCM;
        audioFormat.mFormatFlags      = kAudioFormatFlagIsPacked | kAudioFormatFlagIsSignedInteger;
        audioFormat.mFramesPerPacket  = 1;
        audioFormat.mChannelsPerFrame = 1;
        audioFormat.mBitsPerChannel   = audioFormat.mChannelsPerFrame * sizeof(SInt16) * 8;
        audioFormat.mBytesPerPacket   = audioFormat.mChannelsPerFrame * sizeof(SInt16);
        audioFormat.mBytesPerFrame    = audioFormat.mChannelsPerFrame * sizeof(SInt16);

    The recording callback is:

        static OSStatus recordingCallback(void *inRefCon,
                                          AudioUnitRenderActionFlags *ioActionFlags,
                                          const AudioTimeStamp *inTimeStamp,
                                          UInt32 inBusNumber,
                                          UInt32 inNumberFrames,
                                          AudioBufferList *ioData)
        {
            NSLog(@"Log record: %lu", inBusNumber);
            NSLog(@"Log record: %lu", inNumberFrames);
            NSLog(@"Log record: %lu", (UInt32)inTimeStamp);

            // The data gets rendered here
            AudioBuffer buffer;

            // A variable where we check the status
            OSStatus status;

            // This is the reference to the object that owns the callback.
            AudioProcessor *audioProcessor = (__bridge AudioProcessor *)inRefCon;

            // At this point we define the number of channels, which is mono for the iPhone.
            // The number of frames is usually 512 or 1024.
            buffer.mDataByteSize   = inNumberFrames * sizeof(SInt16);         // sample size
            buffer.mNumberChannels = 1;                                       // one channel
            buffer.mData           = malloc(inNumberFrames * sizeof(SInt16)); // buffer size

            // We put our buffer into a bufferlist array for rendering
            AudioBufferList bufferList;
            bufferList.mNumberBuffers = 1;
            bufferList.mBuffers[0]    = buffer;

            // Render input and check for error
            status = AudioUnitRender([audioProcessor audioUnit], ioActionFlags,
                                     inTimeStamp, inBusNumber, inNumberFrames, &bufferList);
            [audioProcessor hasError:status:__FILE__:__LINE__];

            // Process the bufferlist in the audio processor
            [audioProcessor processBuffer:&bufferList];

            // Clean up the buffer
            free(bufferList.mBuffers[0].mData);
            //NSLog(@"RECORD");
            return noErr;
        }

    With this data:

        inBusNumber    = 1
        inNumberFrames = 1024
        inTimeStamp    = 80444304 // always the same inTimeStamp -- this is strange

    However, the frame size I need to encode MP3 is 1152. How can I configure that? If I buffer, that implies a delay, but I would like to avoid that because it is a real-time app. If I use this configuration as-is, each encoded buffer ends with garbage: 1152 - 1024 = 128 bad samples. All samples are SInt16.
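    Not from the post: the hardware callback size generally cannot be forced to an arbitrary value like 1152, so the usual answer is a small FIFO that decouples the two sizes. The added latency is bounded by one MP3 frame (1152 samples at 8 kHz is 144 ms). A minimal sketch in C, where encodeMp3Frame is a hypothetical hook for the ffmpeg encoder:

        #include <string.h>

        #define MP3_FRAME 1152

        static SInt16 fifo[8 * MP3_FRAME]; // ample headroom for 1024-sample callbacks
        static int    fifoCount = 0;

        // Call from the recording callback with each rendered buffer.
        void pushSamples(const SInt16 *samples, int count)
        {
            memcpy(fifo + fifoCount, samples, count * sizeof(SInt16));
            fifoCount += count;

            // Hand complete 1152-sample frames to the encoder as they fill up;
            // no partial frame is ever encoded, so no trailing garbage.
            while (fifoCount >= MP3_FRAME) {
                encodeMp3Frame(fifo, MP3_FRAME); // hypothetical encoder hook
                fifoCount -= MP3_FRAME;
                memmove(fifo, fifo + MP3_FRAME, fifoCount * sizeof(SInt16));
            }
        }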


  • Finding the intersection of two vector equations.

    - by Matthew Mitchell
    I've been trying to solve this and I found an equation that gives the possibility of zero-division errors. Not the best thing:

        v1 = (a, b)
        v2 = (c, d)
        d1 = (e, f)
        d2 = (h, i)

        l1: v1 + λ·d1
        l2: v2 + μ·d2

    Equation to find the vector intersection of l1 and l2 programmatically by rearranging for lambda:

        (a, b) + λ(e, f) = (c, d) + μ(h, i)

        a + λe = c + μh
        b + λf = d + μi

        μh = a + λe - c
        μi = b + λf - d

        μ = (a + λe - c)/h
        μ = (b + λf - d)/i

        (a + λe - c)/h = (b + λf - d)/i
        a/h + λe/h - c/h = b/i + λf/i - d/i
        λe/h - λf/i = (b/i - d/i) - (a/h - c/h)
        λ(e/h - f/i) = (b - d)/i - (a - c)/h
        λ = ((b - d)/i - (a - c)/h) / (e/h - f/i)

        Intersection vector = (a + λe, b + λf)

    Not sure if it would even work in some cases; I haven't tested it. I need to know how to do this for values as in that example, a-i. Thank you.
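    Multiplying that last equation through by h·i gives an equivalent form with a single denominator, e·i − f·h (the cross product of the two direction vectors), which is zero exactly when the lines are parallel; no per-component division by h or i is needed. A sketch in C under that rearrangement (i_ avoids shadowing anything named i):

        #include <math.h>
        #include <stdbool.h>

        // Line 1: (a,b) + lambda*(e,f); Line 2: (c,d) + mu*(h,i_).
        // Returns false when the direction vectors are parallel (no unique intersection).
        bool intersect(double a, double b, double e, double f,
                       double c, double d, double h, double i_,
                       double *x, double *y)
        {
            double denom = e * i_ - f * h;      // cross product d1 x d2
            if (fabs(denom) < 1e-12) return false;
            double lambda = ((b - d) * h - (a - c) * i_) / denom;
            *x = a + lambda * e;
            *y = b + lambda * f;
            return true;
        }

    Quick check: l1 through (0,0) with direction (1,0) and l2 through (1,-1) with direction (0,1) gives denom = 1, lambda = 1, and the intersection (1, 0), as expected.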


  • Zoom image to pixel level

    - by zaf
    For an art project, one of the things I'll be doing is zooming in on an image to a particular pixel. I've been rubbing my chin and would love some advice on how to proceed. Here are the input parameters:

        Screen:
            sw - screen width
            sh - screen height
        Image:
            iw - image width
            ih - image height
        Pixel:
            px - x position of the pixel in the image
            py - y position of the pixel in the image
        Zoom:
            zf - zoom factor (0.0 to 1.0)
        Background colour:
            bc - background colour to use when screen and image aspect ratios differ

    Outputs:

        The zoomed image (no anti-aliasing).
        The screen position/dimensions of the pixel we are zooming to.

    When zf is 0, the image must fit the screen with the correct aspect ratio. When zf is 1, the selected pixel fits the screen with the correct aspect ratio. One idea I had was to use something like POV-Ray and move the camera towards a big image texture, or some library (e.g. pygame) to do the zooming. Can anyone think of something more clever, with simple pseudocode? To keep it simpler, you can make the image and screen have the same aspect ratio. I can live with that. I'll update with more info as it's required.
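    One possible framing, purely a sketch of my own: treat the two endpoints as scale factors and interpolate geometrically between them (linear interpolation of scale looks uneven to the eye; exponential zoom is the usual choice):

        #include <math.h>

        // Assumes screen and image share an aspect ratio (as allowed above).
        // Outputs a uniform scale plus the image origin on screen.
        void computeZoom(double sw, double sh, double iw, double ih,
                         double px, double py, double zf,
                         double *scale, double *ox, double *oy)
        {
            double s0 = fmin(sw / iw, sh / ih);  // zf == 0: whole image fits
            double s1 = fmin(sw, sh);            // zf == 1: one pixel fits the screen
            *scale = s0 * pow(s1 / s0, zf);      // geometric interpolation

            // Keep the target pixel's center fixed at the screen center.
            // (At zf == 0 you may prefer to center the whole image instead;
            // blending the two origins by zf is one way to reconcile them.)
            *ox = sw / 2.0 - (px + 0.5) * (*scale);
            *oy = sh / 2.0 - (py + 0.5) * (*scale);
        }

    The on-screen rectangle of the chosen pixel is then (ox + px·scale, oy + py·scale) with side length scale, which covers the second required output.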


  • Unit Testing Model Classes that inherit from NSManagedObject

    - by Matt Baker
    So... I'm trying to get unit tests set up in my iPhone app, but I'm having some issues. I'm trying to test my model classes, but they inherit directly from NSManagedObject. I'm sure this is the problem, but I don't know how to get around it. Everything builds and runs as expected, but I get this error when calling any method on the class I'm testing:

        Unknown.m:0:0 unrecognized selector sent to instance 0xc2b120

    If I follow this structure (http://chanson.livejournal.com/115621.html) to create my object in my tests, I end up with another error entirely, but it still doesn't help me. Basically my question is this: how can I test a class that inherits from NSManagedObject?
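    A common approach (a sketch, not from the post; `context` is assumed to be an ivar of the test case): NSManagedObject subclasses only behave properly when created through a managed object context, so build a throwaway in-memory Core Data stack in setUp and insert test objects through it:

        - (void)setUp {
            NSManagedObjectModel *model =
                [NSManagedObjectModel mergedModelFromBundles:
                    [NSArray arrayWithObject:[NSBundle bundleForClass:[self class]]]];
            NSPersistentStoreCoordinator *psc =
                [[NSPersistentStoreCoordinator alloc] initWithManagedObjectModel:model];
            // An in-memory store keeps tests fast and leaves no files behind.
            [psc addPersistentStoreWithType:NSInMemoryStoreType
                              configuration:nil URL:nil options:nil error:NULL];
            context = [[NSManagedObjectContext alloc] init];
            [context setPersistentStoreCoordinator:psc];
        }

        - (void)testMyModelClass {
            // "MyModelClass" stands in for the entity under test.
            MyModelClass *object =
                [NSEntityDescription insertNewObjectForEntityForName:@"MyModelClass"
                                              inManagedObjectContext:context];
            // ... call the methods under test on `object` ...
        }

    The "unrecognized selector" symptom typically appears when the model (and hence the entity-to-class mapping) never gets loaded in the test bundle, so instances are plain NSManagedObject rather than the subclass.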


  • UIImage Rotation

    - by Kamchatka
    I display an image in a UIImageView (within a UIScrollView); the image is also stored in Core Data. In the interface, I want the user to be able to rotate the picture by 90 degrees, and I also want the rotation to be saved in Core Data. What should I rotate for the display: the scroll view, the UIImageView, or the image itself? (If possible, I would like the rotation to be animated.) But then I also have to save the picture to Core Data. I thought about changing the image orientation, but this property is read-only.
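    One way to split the problem (a sketch of my own, not a confirmed recipe): animate the view's transform for the on-screen rotation, and separately redraw the bitmap into a rotated context to get a new UIImage you can persist:

        // On-screen: animate the view (cheap and reversible).
        [UIView beginAnimations:nil context:NULL];
        imageView.transform = CGAffineTransformRotate(imageView.transform, M_PI_2);
        [UIView commitAnimations];

        // For persistence: render a genuinely rotated bitmap.
        CGSize newSize = CGSizeMake(image.size.height, image.size.width);
        UIGraphicsBeginImageContext(newSize);
        CGContextRef ctx = UIGraphicsGetCurrentContext();
        CGContextTranslateCTM(ctx, newSize.width / 2, newSize.height / 2);
        CGContextRotateCTM(ctx, M_PI_2); // flip the sign for the other direction
        [image drawInRect:CGRectMake(-image.size.width / 2, -image.size.height / 2,
                                     image.size.width, image.size.height)];
        UIImage *rotated = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        // Encode `rotated` (e.g. UIImagePNGRepresentation) and store that in Core Data.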


  • What's a good matrix manipulation library available for C?

    - by banister
    Hi, I am doing a lot of image processing in C and I need a good, reasonably lightweight, and above all FAST matrix manipulation library. I am mostly focusing on affine transformations and matrix inversions, so I do not need anything too sophisticated or bloated. Primarily I would like something that is very fast (using SSE perhaps?), with a clean API, and (hopefully) prepackaged by many of the Unix package management systems. Note this is for C, not for C++. Thanks :)


  • So does Apple recommend to not use predicates and sort descriptors in an NSFetchRequest?

    - by dontWatchMyProfile
    From the docs:

        To summarize, though, if you execute a fetch directly, you should typically not add Objective-C-based predicates or sort descriptors to the fetch request. Instead you should apply these to the results of the fetch. If you use an array controller, you may need to subclass NSArrayController so you can have it not pass the sort descriptors to the persistent store and instead do the sorting after your data has been fetched.

    I don't get it. What's wrong with using them on fetch requests? Isn't it stupid to get back a whole big bunch of managed objects just to pick out 1% of them in memory, leaving 99% garbage floating around? Isn't it much better to fetch from the persistent store only what you really need, in the order you need it? Probably I got that wrong...


  • A way to enable a LaunchDaemon to output sound?

    - by Varun Mehta
    I have a small Foundation application that checks a website and plays a sound if it sees a certain value. This application successfully plays a sound when I run it as my user from the Terminal. I've configured this app to run as a LaunchDaemon, with the following plist:

        <?xml version="1.0" encoding="UTF-8"?>
        <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
        <plist version="1.0">
        <dict>
            <key>Label</key>
            <string>org.myorg.appidentifier</string>
            <key>ProgramArguments</key>
            <array>
                <string>/Users/varunm/path/to/cli/application</string>
            </array>
            <key>KeepAlive</key>
            <true/>
            <key>RunAtLoad</key>
            <true/>
        </dict>
        </plist>

    When I have this service launched I can see it successfully read in and log values from the website, but it never generates any sound. The sound files are located in the same directory as the binary, and I use the following code:

        NSSound *soundToPlay = [[NSSound alloc] initWithContentsOfFile:@"sound.wav" byReference:NO];
        [soundToPlay setDelegate:stopper];
        [soundToPlay play];
        while (g_keepRunning) {
            [[NSRunLoop currentRunLoop] runUntilDate:[NSDate dateWithTimeIntervalSinceNow:1.0]];
        }
        [soundToPlay setCurrentTime:0.0];

    Is there any way to get my LaunchDaemon application to play sound? This machine gets run by different people, and sometimes has no one logged in, which is why I have to configure it as a LaunchDaemon.


  • How to draw a drop shadow AND gradient with quartz2d?

    - by Luke
    Hello! I have a custom shape drawn using Core Graphics, and I want to add both a drop shadow and a gradient to it. I've been trying and searching for a lot of information on how to combine the two, but I can't get it to work; I'm only able to draw one or the other. Is anyone doing this already, or does anyone know how to do it? Thank you.
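    One technique commonly suggested for this (a sketch under assumed variable names — ctx, shapePath, rect are not from the original code): set the shadow, then wrap the clipped gradient fill in a transparency layer, so the shadow is applied to the composited result rather than to the individual drawing operations:

        CGContextSaveGState(ctx);
        CGContextSetShadowWithColor(ctx, CGSizeMake(0, 3), 4.0,
                                    [[UIColor colorWithWhite:0 alpha:0.5] CGColor]);

        // Everything drawn inside the layer casts one combined shadow.
        CGContextBeginTransparencyLayer(ctx, NULL);
        CGContextAddPath(ctx, shapePath);   // the custom shape
        CGContextClip(ctx);

        CGColorSpaceRef rgb = CGColorSpaceCreateDeviceRGB();
        CGFloat components[] = { 1, 1, 1, 1,   0.6, 0.6, 0.6, 1 }; // white -> grey
        CGGradientRef gradient = CGGradientCreateWithColorComponents(rgb, components, NULL, 2);
        CGContextDrawLinearGradient(ctx, gradient,
                                    CGPointMake(0, CGRectGetMinY(rect)),
                                    CGPointMake(0, CGRectGetMaxY(rect)), 0);
        CGGradientRelease(gradient);
        CGColorSpaceRelease(rgb);
        CGContextEndTransparencyLayer(ctx);

        CGContextRestoreGState(ctx);

    Without the transparency layer, the clip-and-fill steps draw separately and the shadow never sees the shape as a whole, which is one plausible reason the two effects seem mutually exclusive.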


  • Rendering a Long Document on iPad

    - by benjismith
    I'm implementing a document viewer with highlighting/annotation capabilities for a custom document format on iPad. The documents are kind of long (100 to 200 pages, if printed on paper) and I've had a hard time finding the right approach. Here are the requirements:

    1) Basic rich-text styling: control of left/right margins; control of font name, size, foreground/background color, and line spacing; bold, italics, underline, etc.
    2) Selection and highlighting of arbitrary text regions (not limited to paragraph boundaries, like in Safari/UIWebView).
    3) Customization of the Cut/Copy/Paste popup (what is that thing called anyhow? UIActionBar?). This is one of the essential requirements of the app.

    My first implementation was based on UIWebView. I just rendered the document as HTML with CSS for text styling. But I couldn't get the kind of text-selection behavior I wanted (across paragraph boundaries), and the UIActionBar can't be customized from within UIWebView. So I started working on a JavaScript approach, faking the device text-selection behavior using jQuery to trap touch events and dynamically modifying the DOM to change the background color of selected regions of text. I built a fake UIActionBar control as a hidden DIV, positioning it and unhiding it whenever there was an active selection region. Not too shabby. The main problem is that it's SLOOOOOOOW. Scrolling through the document is nice and quick, but dynamically modifying the DOM is not very snappy. Plus, I couldn't figure out how to recreate the magnifier loupe, so my fake text-selection GUI doesn't look quite the same as the native implementation. Also, I haven't yet implemented the communication bridge between the JavaScript layer and the Objective-C layer (where the rest of the app lives), but it was shaping up to be a huge hassle.

    So I've been looking at CoreText, but there are precious few examples on the web. I spent a little time with this simple little demo: http://github.com/jonasschnelli/I7CoreTextExample/ It shows how to use CoreText to draw an NSAttributedString into a UIView. But it has its own problems: it doesn't implement text-selection behavior, and it doesn't present a UIActionBar, so I don't have any idea how to make that happen. And, more importantly, it tries to draw the entire document all at once, with significant performance degradation for long documents. My documents can have thousands of paragraphs, and less than 1% of the document is ever on screen at a time. On the plus side, these documents already contain precise formatting information: I know the exact page position of every line of text, so I don't need a layout engine.

    Does anyone know how to implement this sort of view using CoreText? I understand that a full-fledged implementation is overkill for a question like this, but I'm looking for a good CoreText example with a few basic requirements:

    1) Precise layout & formatting control (using the formatting metrics and text styles I've already calculated).
    2) Arbitrary selection of text.
    3) Customization of the UIActionBar.
    4) Efficient recycling of resources for off-screen objects. I'd be happy to implement my own recycling when text elements scroll off-screen, but wouldn't that require re-implementing UIScrollView?

    I'm brand-new to iPhone development, and still getting used to Objective-C, but I've been working in other languages (Java, C#, Flex/ActionScript, etc.) for more than ten years, so I feel confident in my ability to get the work done, if only I had a better feel for the iPhone SDK and the common coding patterns for stuff like this. Is it just me, or does the SDK documentation really suck? Anyhow, thanks for your help!
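    For the drawing half of the question only, a minimal Core Text sketch (standard API usage, not tailored to the custom format; `pageText` is an assumed NSAttributedString ivar holding just one page's text): lay the text out with a CTFramesetter and draw it in drawRect:, flipping the context because Core Text uses a bottom-left origin:

        #import <CoreText/CoreText.h>

        - (void)drawRect:(CGRect)rect
        {
            CGContextRef ctx = UIGraphicsGetCurrentContext();

            // Core Text draws in a flipped coordinate system relative to UIKit.
            CGContextSetTextMatrix(ctx, CGAffineTransformIdentity);
            CGContextTranslateCTM(ctx, 0, self.bounds.size.height);
            CGContextScaleCTM(ctx, 1.0, -1.0);

            CTFramesetterRef framesetter =
                CTFramesetterCreateWithAttributedString((CFAttributedStringRef)pageText);
            CGMutablePathRef path = CGPathCreateMutable();
            CGPathAddRect(path, NULL, self.bounds);
            CTFrameRef frame = CTFramesetterCreateFrame(framesetter,
                                                        CFRangeMake(0, 0), path, NULL);
            CTFrameDraw(frame, ctx);

            CFRelease(frame);
            CGPathRelease(path);
            CFRelease(framesetter);
        }

    Selection and the edit menu would still have to be built by hand on top of this (hit-testing via CTLineGetStringIndexForPosition, with UIMenuController being the name of the Cut/Copy/Paste popup), and recycling one such page view per visible page inside a UIScrollView is the usual answer to the less-than-1%-on-screen problem.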


  • Rotate a plane around a diagonal

    - by compie
    I would like to rotate a plane, not around a single (X or Y) axis, but around the diagonal (45 degrees between X and Y). How do I calculate the Rx and Ry given the Rdiagonal? (Rdiagonal is the amount of rotation I would like to achieve around the diagonal axis).
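    One standard tool here, sketched in general terms (not from the post): a rotation by angle θ about the unit diagonal axis u = (1/√2, 1/√2, 0) is given by Rodrigues' formula,

        R = I + \sin\theta \, K + (1 - \cos\theta) \, K^2,
        \qquad
        K = \begin{pmatrix}
              0 & 0 & \tfrac{1}{\sqrt{2}} \\
              0 & 0 & -\tfrac{1}{\sqrt{2}} \\
              -\tfrac{1}{\sqrt{2}} & \tfrac{1}{\sqrt{2}} & 0
            \end{pmatrix}

    where K is the cross-product matrix of u (Kv = u × v). One caveat: a rotation about the diagonal is in general not exactly a pure Rx followed by Ry — those two angles give only two degrees of freedom — so extracting Euler angles from R will typically also yield a nonzero Z rotation, except for small angles where Rx ≈ Ry ≈ Rdiagonal/√2 to first order.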


  • Convert from line points to shape points

    - by VOX
    I have an array of points that make up a line. However, I need to draw the line with a width of n pixels. How can I transform the line's points into points for a polygon (or shape), so I can draw it directly on a canvas? I'm developing two programs at the same time: one is J2ME and the other is .NET CF, and J2ME doesn't support drawing lines with a width. Please take a look at the picture. (picture link missing)
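    A common geometric approach, sketched in plain C (a simplification that ignores proper joint mitering and assumes no zero-length segments): offset each point by ±n/2 along the adjacent segment's unit normal, then walk the left side forward and the right side backward to close the polygon:

        #include <math.h>

        typedef struct { float x, y; } Pt;

        /* line: npts points; out must hold 2*npts points.
           Polygon order: left offsets forward, then right offsets backward. */
        void lineToPolygon(const Pt *line, int npts, float width, Pt *out)
        {
            float hw = width / 2.0f;
            for (int k = 0; k < npts; k++) {
                /* Direction of the adjacent segment (last point reuses the previous one). */
                int a = (k < npts - 1) ? k : k - 1;
                float dx = line[a + 1].x - line[a].x;
                float dy = line[a + 1].y - line[a].y;
                float len = sqrtf(dx * dx + dy * dy);
                float nx = -dy / len, ny = dx / len;   /* unit normal */

                out[k].x = line[k].x + nx * hw;                 /* left side, forward   */
                out[k].y = line[k].y + ny * hw;
                out[2 * npts - 1 - k].x = line[k].x - nx * hw;  /* right side, backward */
                out[2 * npts - 1 - k].y = line[k].y - ny * hw;
            }
        }

    For smoother corners, average the normals of the two segments meeting at each interior point before offsetting; the simple version above is usually acceptable for thin lines.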


  • What is faster with PictureBox? Many small redraws or complete redraw.

    - by kornelijepetak
    I have a PictureBox (WinMobile 6 WinForms) on which I draw some images. There is a background image that goes in the background and does not change. However, objects drawn on the PictureBox move during the application, so I need to refresh the background. Since the items that are redrawn cover from 50% to 80% of the surface, the question is which of the two is faster:

    1) Redraw only the parts of the background image that have changed (previous + next location of the moving object).
    2) Redraw the complete background and then draw all the objects in their current positions.

    Now, the reason for asking is that I am not sure how much processor power is needed for a single drawImage operation and what the time-consuming factors are. I am aware that if there is almost complete coverage of the background, it would be pointless to redraw portions of it, because by drawing the portions I will have drawn the complete picture. But since sometimes only half of the image has changed (some objects remained in their old positions), it may (perhaps) be beneficial to redraw only those regions. But I need your insight on this... Thanks.


  • Tinting iPhone application screen red

    - by btschumy
    I'm trying to place a red tint on all the screens of my iPhone application. I've experimented on a bitmap and found I get the effect I want by compositing a dark red color onto the screen image using Multiply (kCGBlendModeMultiply). So the question is how to do this efficiently, in real time, on the iPhone. One dumb way might be to grab a bitmap of the current screen, composite into the bitmap, and then write the composited bitmap back to the screen. This seems like it would almost certainly be too slow. In addition, I need some way of knowing when part of the screen has been redrawn so I can update the tinting. I can almost get the effect I want by putting a red, translucent, fullscreen UIView above everything. That tints everything red without further intervention on my part, but the effect is much "muddier" than the result from the Multiply composite. So do any wizards out there know of some mechanism I can use to automatically composite red over the app, in similar fashion to what the translucent red UIView does?
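    For what it's worth, the "dumb way" described above looks roughly like this as a sketch (snapshot the key window's layer, then multiply red over it); whether it can be made fast enough per frame is exactly the open question:

        UIWindow *window = [[UIApplication sharedApplication] keyWindow];

        UIGraphicsBeginImageContext(window.bounds.size);
        CGContextRef ctx = UIGraphicsGetCurrentContext();

        // Snapshot the current screen contents into the bitmap context.
        [window.layer renderInContext:ctx];

        // Composite dark red over it with the Multiply blend mode.
        CGContextSetBlendMode(ctx, kCGBlendModeMultiply);
        CGContextSetRGBFillColor(ctx, 0.8, 0.0, 0.0, 1.0);
        CGContextFillRect(ctx, window.bounds);

        UIImage *tinted = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        // `tinted` would then have to be displayed above the UI -- the slow part,
        // and it still needs a trigger for when the screen content changes.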

