Search Results

Search found 13860 results on 555 pages for 'core graphics'.


  • iPhone: How to Maintain Original Image Size Throughout Image Edits

    - by maddy
    Hi, I am developing an iPhone app that resizes and merges images. I want to select two photos of size 1600x1200 from the photo library, merge them into a single image, and save that new image back to the photo library. However, I can't get the right size for the merged image. I take two image views with frames of 320x480 and set each view's image to one of the imported images. After manipulating the images (zooming, cropping, rotating), I save the result to the album. When I check the image size, it shows 600x800. How do I get the original size of 1600x1200? I've been stuck on this problem for two weeks! Thanks in advance.
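
    A minimal sketch of one way to keep full resolution (untested; firstImage, secondImage and overlayRect are hypothetical names): render into an offscreen context sized to the original pixels rather than to the 320x480 views, replaying the zoom/crop/rotate as geometry in that full-size space.

        // Sketch: composite at native resolution instead of view resolution.
        CGSize fullSize = CGSizeMake(1600, 1200);
        UIGraphicsBeginImageContextWithOptions(fullSize, NO, 1.0); // scale 1.0 = exact pixels
        [firstImage drawInRect:CGRectMake(0, 0, fullSize.width, fullSize.height)];
        [secondImage drawInRect:overlayRect]; // overlayRect: the edits mapped into full-size coordinates
        UIImage *merged = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        UIImageWriteToSavedPhotosAlbum(merged, nil, nil, NULL);

    The key point is that UIGraphicsGetImageFromCurrentImageContext returns an image the size of the context, so anything rendered through the on-screen views comes out at view size, not at 1600x1200.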


  • ImageMagick Reflection

    - by dbruns
    Brief:

        convert ( -size 585x128 gradient: ) NewImage.png

    How do I change the above ImageMagick command so it takes the width and height from an existing image? I need it to remain a one-line command.

    Details: I'm trying to programmatically create an image reflection using ImageMagick. The effect I am looking for is similar to what you would see when looking at an object on the edge of a pool of water. There is a pretty good thread on what I am trying to do here, but the solution isn't exactly what I am looking for. Since I will be calling ImageMagick from a C#.NET application, I want to use one call without any temp files and return the image through stdout. So far I have this:

        convert OriginalImage.png ( OriginalImage.png -flip -blur 3x5 \
          -crop 100%%x30%%+0+0 -negate -evaluate multiply 0.3 \
          -negate ( -size 585x128 gradient: ) +matte -compose copy_opacity -composite ) \
          -append NewImage.png

    This works OK but doesn't give me the exact fade I am looking for: instead of a nice solid fade from top to bottom, it gives me a fade from top left to bottom right. I added the -negate -evaluate multiply 0.3 -negate section to lighten it up a bit more, since I wasn't getting the fade I wanted. I also don't want to have to hard-code the size of the image when creating the gradient ( -size 585x128 gradient: ). I'm also going to want to keep the original image's transparency if possible. To go to stdout I plan on replacing "NewImage.png" with "-".
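
    One hedged possibility for the size problem (an untested sketch): instead of -size WxH gradient:, clone the already-cropped reflection and synthesize the gradient with -fx, whose built-in variables j (current row) and h (height) make the result inherit the clone's dimensions:

        convert OriginalImage.png \
          ( +clone -flip -blur 3x5 -crop 100%x30%+0+0 \
            ( +clone -colorspace gray -fx "1-j/h" ) \
            -compose copy_opacity -composite ) \
          -append png:-

    (From a C# string the % signs would again need doubling, and png:- sends the result to stdout.)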


  • Ray Generation Inconsistency

    - by Myx
    I have written code that generates a ray from the "eye" of the camera to the viewing plane some distance away from the camera's eye:

        R3Ray ConstructRayThroughPixel(...)
        {
            R3Point p;
            double increments_x = (lr.X() - ul.X()) / (double)width;
            double increments_y = (ul.Y() - lr.Y()) / (double)height;
            p.SetX(ul.X() + ((double)i_pos + 0.5) * increments_x);
            p.SetY(lr.Y() + ((double)j_pos + 0.5) * increments_y);
            p.SetZ(lr.Z());
            R3Vector v = p - camera_pos;
            R3Ray new_ray(camera_pos, v);
            return new_ray;
        }

    ul is the upper-left corner of the viewing plane and lr is the lower-right corner of the viewing plane. They are defined as follows:

        R3Point org = scene->camera.eye + scene->camera.towards * radius;
        R3Vector dx = scene->camera.right * radius * tan(scene->camera.xfov);
        R3Vector dy = scene->camera.up * radius * tan(scene->camera.yfov);
        R3Point lr = org + dx - dy;
        R3Point ul = org - dx + dy;

    Here, org is the center of the viewing plane, radius is the distance between the viewing plane and the camera eye, and dx and dy are the displacements in the x and y directions from the center of the viewing plane. The ConstructRayThroughPixel(...) function works perfectly for a camera whose eye is at (0,0,0). However, when the camera is at some different position, not all needed rays are produced for the image. Any suggestions as to what could be going wrong? Maybe something wrong with my equations? Thanks for the help.
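
    For what it's worth, the per-axis SetX/SetY/SetZ arithmetic assumes the viewing plane is axis-aligned, which only holds for the default camera at the origin. A hedged sketch (same hypothetical R3 types, assuming the usual overloaded operators) that interpolates along the camera's own right/up axes instead, so it is independent of eye position and orientation:

        // Sketch: parameterize the viewing plane by the camera basis, not world axes.
        R3Ray ConstructRayThroughPixel(int i_pos, int j_pos)
        {
            double u = ((double)i_pos + 0.5) / (double)width;   // 0..1 left-to-right
            double v = ((double)j_pos + 0.5) / (double)height;  // 0..1 top-to-bottom
            // ul = org - dx + dy is the upper-left corner; the plane spans 2*dx across and -2*dy down.
            R3Point p = ul + dx * (2.0 * u) - dy * (2.0 * v);
            R3Vector dir = p - camera_pos;
            return R3Ray(camera_pos, dir);
        }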


  • Can't compile code when working with CALayer

    - by Sheehan Alam
    For some reason I get linker errors when I try to use CALayer:

        "_OBJC_CLASS_$_CALayer", referenced from:

    I have imported the following headers:

        #import <Foundation/Foundation.h>
        #import <UIKit/UIKit.h>
        #import <QuartzCore/QuartzCore.h>

    Code:

        arrowImage = [[CALayer alloc] init];


  • Display shift when adding rows to UITableView

    - by Kamchatka
    Hello, I have a UITableView displaying an underlying NSFetchedResultsController. When the fetchedResultsController is updated,

        - (void)controller:(NSFetchedResultsController *)controller
           didChangeObject:(id)anObject
               atIndexPath:(NSIndexPath *)indexPath
             forChangeType:(NSFetchedResultsChangeType)type
              newIndexPath:(NSIndexPath *)newIndexPath

    is called, and the following is executed:

        case NSFetchedResultsChangeInsert:
            [tableView insertRowsAtIndexPaths:[NSArray arrayWithObject:shiftedIndexPath]
                             withRowAnimation:UITableViewRowAnimationFade];
            break;

    cellForRowAtIndexPath: is only called for the new line (which is logical). The problem, however, is that all the values displayed in the rows of the table view are shifted down: the first row displays its own title, the second row displays the first row's title, the third row the second row's title, and so on. If I repeat the insert, the shift grows by one row each time. I don't understand how this can happen, especially because cellForRowAtIndexPath: is only called for the new line and not for all the others. Additionally, if I click on one of the lines with the wrong title, it opens the right document (didSelectRowAtIndexPath: works correctly with the indexPath). Any clue as to what could be happening? Thanks!
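
    For comparison, the delegate pattern Apple documents uses newIndexPath directly for insertions and brackets the changes with begin/endUpdates; a sketch (assuming a self.tableView property):

        // Standard NSFetchedResultsControllerDelegate pattern (sketch).
        - (void)controllerWillChangeContent:(NSFetchedResultsController *)controller {
            [self.tableView beginUpdates];
        }

        - (void)controller:(NSFetchedResultsController *)controller
           didChangeObject:(id)anObject
               atIndexPath:(NSIndexPath *)indexPath
             forChangeType:(NSFetchedResultsChangeType)type
              newIndexPath:(NSIndexPath *)newIndexPath {
            switch (type) {
                case NSFetchedResultsChangeInsert:
                    [self.tableView insertRowsAtIndexPaths:[NSArray arrayWithObject:newIndexPath]
                                          withRowAnimation:UITableViewRowAnimationFade];
                    break;
                case NSFetchedResultsChangeDelete:
                    [self.tableView deleteRowsAtIndexPaths:[NSArray arrayWithObject:indexPath]
                                          withRowAnimation:UITableViewRowAnimationFade];
                    break;
                default:
                    break;
            }
        }

        - (void)controllerDidChangeContent:(NSFetchedResultsController *)controller {
            [self.tableView endUpdates];
        }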


  • cut audio file with iPhone SDK

    - by Dmitry
    Hi! Is it possible to cut an audio file with the iPhone SDK? (The file has a .caf extension.) I just need to cut off the silence at the beginning. (Also, maybe it's possible to write a new file from the existing one with a specified start and end time.) Thanks in advance!
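
    A hedged starting point, assuming AVFoundation (iPhone SDK 4) is available: AVAssetExportSession can export just a sub-range of an asset via its timeRange property. Note this sketch writes an .m4a rather than a .caf, and inputURL, outputURL and the times are placeholder values:

        // Sketch: trim an audio file by exporting only [start, start + duration).
        #import <AVFoundation/AVFoundation.h>

        AVURLAsset *asset = [AVURLAsset URLAssetWithURL:inputURL options:nil];
        AVAssetExportSession *session =
            [[AVAssetExportSession alloc] initWithAsset:asset
                                             presetName:AVAssetExportPresetAppleM4A];
        session.outputURL = outputURL;            // e.g. .../trimmed.m4a
        session.outputFileType = AVFileTypeAppleM4A;
        session.timeRange = CMTimeRangeMake(CMTimeMakeWithSeconds(1.5, 600),   // skip leading silence
                                            CMTimeMakeWithSeconds(10.0, 600)); // keep 10 s
        [session exportAsynchronouslyWithCompletionHandler:^{
            if (session.status == AVAssetExportSessionStatusCompleted) {
                // use the trimmed file at outputURL
            }
        }];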


  • What is the equivalent to Java's canvas object in C#?

    - by Winston
    I'm working on creating a basic application that will let a user draw (using a series of points) and I plan to do something with these points. If this were Java, I think I would probably use a canvas object and some Java2D calls to draw what I want. All the tutorials I've read on C#/Drawing involve writing your own paint method and adding it to the paint event for the form. However, I'm interested in having some traditional Form controls as well and I don't want to be drawing over them. So, is there a "Canvas" object where I can constrain what I'm drawing on? Also, is WinForms a poor choice given this use case? Would WPF have more features that would help enable me to do what I want? Or Silverlight?
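
    For the WinForms route, a hedged sketch: a double-buffered Panel subclass acts as the "canvas", so drawing is confined to that control and the other form controls are untouched (names hypothetical):

        // Sketch: a WinForms "canvas" control that collects and draws points.
        using System.Collections.Generic;
        using System.Drawing;
        using System.Windows.Forms;

        class CanvasPanel : Panel
        {
            private readonly List<Point> points = new List<Point>();

            public CanvasPanel()
            {
                DoubleBuffered = true;                 // avoid flicker while drawing
                MouseMove += (s, e) =>
                {
                    if (e.Button == MouseButtons.Left)
                    {
                        points.Add(e.Location);
                        Invalidate();                  // schedule a repaint of this panel only
                    }
                };
            }

            protected override void OnPaint(PaintEventArgs e)
            {
                base.OnPaint(e);
                if (points.Count > 1)
                    e.Graphics.DrawLines(Pens.Black, points.ToArray());
            }
        }

    Dock or anchor the panel next to the regular controls; painting in OnPaint means the panel survives overlaps and window resizes without disturbing the rest of the form.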


  • OpenGL Video RAM Limits

    - by Tamir
    I have been trying to make a cross-platform 2D online game, and my maps are made of tiles. My tileset, which I render the tiles from, is quite huge. I wanted to know how I can disable hardware rendering, or at least make it cope better. Hence, I would like to know what the basic video-RAM limits are; as far as I know, Direct3D has texture size limits (and by that I don't mean the power-of-two texture size restrictions).
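
    On the OpenGL side, the per-texture dimension limit can at least be queried at runtime; a minimal sketch (requires a current GL context):

        /* Sketch: query the driver's maximum texture dimension. */
        GLint maxTextureSize = 0;
        glGetIntegerv(GL_MAX_TEXTURE_SIZE, &maxTextureSize);
        /* A tileset larger than maxTextureSize in either dimension must be
           split across several textures (or packed into multiple atlases). */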


  • Coordinates in distorted grid

    - by Carsten
    I have a grid in a 2D system like the one in the before image, where all points A, B, C, D, A', B', C', D' are given (meaning I know the respective x- and y-coordinates). I need to calculate the x- and y-coordinates of A(new), B(new), C(new) and D(new) when the grid is distorted (so that A' is moved to A'(new), B' is moved to B'(new), C' is moved to C'(new) and D' is moved to D'(new)). The distortion happens in such a way that the lines of the grid are each divided into sub-lines of equal length (meaning, for example, that AB is divided into 5 parts of equal length |AB|/5, and A(new)B(new) is divided into 5 parts of equal length |A(new)B(new)|/5). The distortion is done with the DistortImage class of the Sandy 3D Flash engine. (My practical task is to distort an image using this class, where the handles are not positioned at the corners of the image, as in this demo, but somewhere within it.)
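
    Assuming the distortion is the standard four-corner bilinear mapping (an assumption on my part, based on how DistortImage-style classes usually work), a point with normalized coordinates (u, v) in the handle quad maps as:

        P(u,v) = (1-u)(1-v)\,A'_{new} + u(1-v)\,B'_{new} + u\,v\,C'_{new} + (1-u)\,v\,D'_{new}

    so each outer corner can be handled by first solving P(u,v) = A with the original handles A', B', C', D' (inverse bilinear interpolation, quadratic in u and v) and then evaluating the formula above with the moved handles to obtain A(new), and likewise for B, C and D.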


  • Quartz2d vector images vs OpenGL vector description?

    - by tbarbe
    How big is the difference between the description language of Quartz2D and that of OpenGL ES? They seem similar in descriptive power, except that Quartz is mostly 2D and OpenGL is 3D out of the box (but can be made 2D-focused). Are the mappings from 2D Quartz to 2D OpenGL ES that different? I'm sure there must be differences in specific features that might be handled differently on one versus the other, but enough to rule out a translator? Does anyone with experience in both OpenGL and Quartz2D have some insights?


  • renderInContext creating memory that is not promptly released

    - by iworkinprogress
    While debugging in Instruments using ObjectAlloc, I'm noticing 7 MB of memory being allocated for the renderInContext call that is never released. When I comment out the renderInContext call this doesn't happen, and future renderInContext calls do not continue to increase the memory allotment.

        UIGraphicsBeginImageContext(contentHolder.bounds.size);
        [contentHolder.layer renderInContext:UIGraphicsGetCurrentContext()];
        UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();

    Is there a way to force this memory to be released?
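
    One hedged guess: the returned UIImage (and the bitmap behind it) is autoreleased, so it lives until the enclosing autorelease pool drains. A sketch that scopes the snapshot with its own pool (pre-ARC idiom, matching the era of this code):

        // Sketch: drain the autoreleased snapshot promptly.
        NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
        UIGraphicsBeginImageContext(contentHolder.bounds.size);
        [contentHolder.layer renderInContext:UIGraphicsGetCurrentContext()];
        UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        [viewImage retain];   // keep the image past the pool if it is still needed
        [pool drain];         // releases the pooled bitmap memory now
        // ... use viewImage ...
        [viewImage release];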


  • How do I use texture-mapping in a simple ray tracer?

    - by fastrack20
    I am attempting to add features to a ray tracer in C++. Namely, I am trying to add texture mapping to the spheres. For simplicity, I am using an array to store the texture data, which I obtained by using a hex editor and copying the correct byte values into an array in my code; this was just for testing purposes. When the values of this array correspond to an image that is simply red, it appears to work close to what is expected, except there is no shading. (The bottom right of the image shows what a correct sphere should look like; that sphere's colour was set directly rather than from a texture map.) Another problem is that when the texture map is of something other than one solid colour, it turns white: my test image is a picture of water, and when it maps, it shows only one ring of bluish pixels surrounding the white colour. Here are a few code snippets:

        Color getColor(const Object *object, const Ray *ray, float *t)
        {
            if (object->materialType == TEXTDIF || object->materialType == TEXTMATTE) {
                float distance = *t;
                Point pnt = ray->origin + ray->direction * distance;
                Point oc = object->center;
                Vector ve = Point(oc.x, oc.y, oc.z + 1) - oc;
                Normalize(&ve);
                Vector vn = Point(oc.x, oc.y + 1, oc.z) - oc;
                Normalize(&vn);
                Vector vp = pnt - oc;
                Normalize(&vp);
                double phi = acos(-vn.dot(vp));
                float v = phi / M_PI;
                float u;
                float num1 = (float)acos(vp.dot(ve));
                float num = (num1 / (float)sin(phi));
                float theta = num / (float)(2 * M_PI);
                if (theta < 0 || isnan(theta)) { theta = 0; } // was 'theta == NAN', which is always false
                if (vn.cross(ve).dot(vp) > 0) {
                    u = theta;
                } else {
                    u = 1 - theta;
                }
                int x = (u * IMAGE_WIDTH) - 1;
                int y = (v * IMAGE_WIDTH) - 1;
                int p = (y * IMAGE_WIDTH + x) * 3;
                return Color(TEXT_DATA[p+2], TEXT_DATA[p+1], TEXT_DATA[p]);
            } else {
                return object->color;
            }
        };

    I call the colour code here in Trace:

        if (object->materialType == MATTE)
            return getColor(object, ray, &t);

        Ray shadowRay;
        int isInShadow = 0;
        shadowRay.origin.x = pHit.x + nHit.x * bias;
        shadowRay.origin.y = pHit.y + nHit.y * bias;
        shadowRay.origin.z = pHit.z + nHit.z * bias;
        shadowRay.direction = light->object->center - pHit;
        float len = shadowRay.direction.length();
        Normalize(&shadowRay.direction);
        float LdotN = shadowRay.direction.dot(nHit);
        if (LdotN < 0)
            return 0;
        Color lightColor = light->object->color;
        for (int k = 0; k < numObjects; k++) {
            if (Intersect(objects[k], &shadowRay, &t) && !objects[k]->isLight) {
                if (objects[k]->materialType == GLASS)
                    lightColor *= getColor(objects[k], &shadowRay, &t); // attenuate light color by glass color
                else
                    isInShadow = 1;
                break;
            }
        }
        lightColor *= 1.f / (len * len);
        return (isInShadow) ? 0 : getColor(object, &shadowRay, &t) * lightColor * LdotN;

    I left out the rest of the code so as not to bog down the post, but it can be seen here. The only portion not included is where I define the texture data, which, as I said, is taken straight from a bitmap file of the above image. Any help is greatly appreciated. Thanks.
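
    For reference, a common spherical-mapping formulation (a hedged sketch, assuming d is the unit vector from the sphere's center to the hit point, and TEX_WIDTH/TEX_HEIGHT are hypothetical stand-ins for the texture dimensions) avoids the acos/sin(phi) division, which is numerically fragile near the poles:

        // Sketch: standard sphere UV from the unit center-to-hit vector d.
        Vector d = pnt - object->center;
        Normalize(&d);
        float u = 0.5f + atan2(d.z, d.x) / (2.0f * (float)M_PI); // longitude -> [0,1]
        float v = 0.5f - asin(d.y) / (float)M_PI;                // latitude  -> [0,1]
        int x = (int)(u * (TEX_WIDTH  - 1));
        int y = (int)(v * (TEX_HEIGHT - 1));
        int p = (y * TEX_WIDTH + x) * 3; // 3 bytes per pixel, BGR order as above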


  • Detecting Singularities in a Graph

    - by nasufara
    I am creating a graphing calculator in Java as a project for my programming class. There are two main components to this calculator: the graph itself, which draws the line(s), and the equation evaluator, which takes in an equation as a String and, well, evaluates it.

    To create the line, I create a Path2D.Double instance and loop through the points on the line. I calculate as many points as the graph is wide (e.g. if the graph itself is 500px wide, I calculate 500 points) and then scale them to the window of the graph. This works perfectly for almost any line. However, it does not when dealing with singularities. If, while calculating points, the graph encounters a domain error (such as 1/0), it closes the shape in the Path2D.Double instance and starts a new line, so that the line looks mathematically correct.

    However, because of the way it scales, sometimes the result is rendered correctly and sometimes it isn't. When it isn't, the actual asymptotic line is shown, because within those 500 points the calculation skipped over x = 2.0 in the equation 1 / (x-2) and only evaluated x = 1.98 and x = 2.04, which are perfectly valid in that equation. (In that case, I had increased the window on the left and right by one unit each.)

    My question is: is there a way to deal with singularities using this method of scaling so that the resulting line looks mathematically correct? I have thought of implementing a binary-search-esque method where, if one calculated point is wildly far away from the previous one, the code searches between those points for a domain error; I had trouble figuring out how to make it work in practice, however. Thank you for any help you may give!
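
    A hedged sketch of one heuristic (names such as plotWidth, screenToMathX, evaluate, mathToScreenY and JUMP_THRESHOLD are hypothetical): break the Path2D whenever a sample is non-finite or the screen-space jump between neighbours exceeds a threshold, then start a new subpath after the gap:

        // Sketch: split the plotted path at suspected singularities.
        Path2D.Double path = new Path2D.Double();
        boolean penDown = false;
        double prevY = Double.NaN;

        for (int px = 0; px < plotWidth; px++) {
            double x = screenToMathX(px);          // hypothetical scaling helper
            double y = evaluate(x);                // hypothetical equation evaluator
            double sy = mathToScreenY(y);

            boolean bad = Double.isNaN(y) || Double.isInfinite(y)
                    || (penDown && Math.abs(sy - prevY) > JUMP_THRESHOLD);
            if (bad) {
                penDown = false;                   // leave a gap; resume with a new subpath
            } else if (!penDown) {
                path.moveTo(px, sy);
                penDown = true;
            } else {
                path.lineTo(px, sy);
            }
            prevY = sy;
        }

    To distinguish a steep-but-continuous stretch from a true asymptote, one can additionally bisect the interval between the two suspect samples a few times, checking for a domain error or a sign change with diverging magnitude.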


  • JSON to Persistent Data Store (CoreData, etc.)

    - by Bryan Veloso
    All of the data in my application is pulled through an API via JSON. The nature of a good percentage of this data is that it doesn't change very often, so making JSON requests for lists of data that rarely change didn't seem all that appealing. I'm looking for the most sensible option for saving this JSON onto the iPhone in some sort of persistent data store. Obviously, one plus of persisting the data would be to provide it when the phone can't access the API. I've looked at a few examples of having JSON and Core Data interact, but they seem to only describe transforming NSManagedObjects into JSON. If I can transform JSON into Core Data, my only problem would be being able to change that data when the data from the API does change. (Or maybe this is all just silly.)
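
    One common pattern (a hedged sketch; it assumes the JSON keys happen to match the entity's attribute names, and jsonDict, context and "Item" are placeholders): insert a managed object per JSON record via key-value coding, and on refresh fetch by a unique identifier to update the existing object instead of re-inserting:

        // Sketch: JSON dictionary -> NSManagedObject via KVC.
        NSManagedObject *item =
            [NSEntityDescription insertNewObjectForEntityForName:@"Item"
                                          inManagedObjectContext:context];
        [item setValuesForKeysWithDictionary:jsonDict]; // keys must match attribute names exactly
        NSError *error = nil;
        if (![context save:&error]) {
            NSLog(@"Save failed: %@", error);
        }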


  • masking a UIImage

    - by iworkinprogress
    I'm working on an app that can change the borders of a rectangular UIImage. The borders will vary, but will look like the UIImage was cut out with scissors, or something to that effect. What is the best way to do this? My first thought is to prep a bunch of transparent PNGs with the correct border effect I'm looking for and then somehow use them as a mask for my UIImage. Is this the right path? Or is there a more flexible, programmatic way to do this?
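
    Masking is indeed a standard route. A widely used Core Graphics sketch (assuming maskImage is a grayscale image without an alpha channel, where black marks the area to keep; photo and maskImage are hypothetical names):

        // Sketch: apply a grayscale mask image to a UIImage.
        CGImageRef maskRef = maskImage.CGImage;
        CGImageRef mask = CGImageMaskCreate(CGImageGetWidth(maskRef),
                                            CGImageGetHeight(maskRef),
                                            CGImageGetBitsPerComponent(maskRef),
                                            CGImageGetBitsPerPixel(maskRef),
                                            CGImageGetBytesPerRow(maskRef),
                                            CGImageGetDataProvider(maskRef),
                                            NULL, false);
        CGImageRef maskedRef = CGImageCreateWithMask(photo.CGImage, mask);
        UIImage *maskedImage = [UIImage imageWithCGImage:maskedRef];
        CGImageRelease(mask);
        CGImageRelease(maskedRef);

    Each "scissor" border then becomes one reusable grayscale mask, applied to any photo at runtime.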


  • Custom UILabel does not show text.

    - by Oscar
    Hi! I've made a custom UILabel subclass in which I draw a red line at the bottom of the frame. The red line shows, but I can't get the text to show.

        #import <UIKit/UIKit.h>

        @interface LetterLabel : UILabel {
        }
        @end

        #import "LetterLabel.h"

        @implementation LetterLabel

        - (id)initWithFrame:(CGRect)frame {
            if (self = [super initWithFrame:frame]) {
                // Initialization code
            }
            return self;
        }

        - (void)drawRect:(CGRect)rect {
            [super drawRect:rect]; // without this call, UILabel never draws its text
            CGContextRef ctx = UIGraphicsGetCurrentContext();
            CGContextSetLineWidth(ctx, 2.0);
            CGContextSetRGBStrokeColor(ctx, 1.0, 0.0, 0.0, 1.0);
            CGContextBeginPath(ctx);
            CGContextMoveToPoint(ctx, 0.0, self.frame.size.height);
            CGContextAddLineToPoint(ctx, self.frame.size.width, self.frame.size.height);
            CGContextStrokePath(ctx);
        }

        - (void)dealloc {
            [super dealloc];
        }

        @end

        #import <UIKit/UIKit.h>
        #import "Word.h"
        #import "LetterLabel.h" // needed for the LetterLabel class used below

        @interface WordView : UIView {
            Word *gameWord;
        }
        @property (nonatomic, retain) Word *gameWord;
        @end

        @implementation WordView

        @synthesize gameWord;

        - (id)initWithFrame:(CGRect)frame {
            if (self = [super initWithFrame:frame]) {
                self.backgroundColor = [UIColor whiteColor];
                LetterLabel *label = [[LetterLabel alloc] initWithFrame:CGRectMake(0, 0, 20, 30)];
                label.backgroundColor = [UIColor cyanColor];
                label.textAlignment = UITextAlignmentCenter;
                label.textColor = [UIColor blackColor];
                [label setText:@"t"];
                [self addSubview:label];
                [label release]; // the view retains it; avoid leaking
            }
            return self;
        }

        - (void)drawRect:(CGRect)rect {
            // Drawing code
        }

        - (void)dealloc {
            [gameWord release];
            [super dealloc];
        }

        @end


  • Should an NSLock instance be "global"?

    - by Alex Reynolds
    Should I make a single NSLock instance in the application delegate, to be used by all classes? Or is it advisable to have each class instantiate its own NSLock instance as needed? Would the locking work in the second case, if I, for example, had access to a managed object context that is spread across two view controllers?


  • line is being erased in java

    - by Gogoo
    I made an application in Java and drew a rectangle. When I search from the combobox, the rectangle is under the combobox, and when I close the combobox, the parts of the rectangle that were under it are erased. How can I make the rectangle visible again after the combobox has closed?
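
    This usually happens when the rectangle is drawn directly on a Graphics object instead of from the paint callback, so Swing cannot restore it when the area is damaged. A hedged sketch of the usual fix: do all drawing in paintComponent, which Swing re-invokes automatically whenever the combobox popup closes over the panel:

        // Sketch: paint in paintComponent so Swing can repaint damaged areas.
        import java.awt.Graphics;
        import javax.swing.JPanel;

        class RectanglePanel extends JPanel {
            @Override
            protected void paintComponent(Graphics g) {
                super.paintComponent(g);       // clear the background first
                g.drawRect(50, 50, 200, 100);  // redrawn automatically after the popup closes
            }
        }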


  • iPhone+Quartz+OpenGL. What is the correct way for Quartz and OpenGL to play nice together regarding premultiplied alpha?

    - by dugla
    So we know the CoreGraphics/Quartz imaging model is based on premultiplied alpha. We also know that OpenGL blending is based on un-premultiplied alpha. What is the best practice to avoid head explosion when blending with textures that are derived from premultiplied-alpha imagery (PNG files generated in Photoshop with premultiplied alpha)? Given the apples/oranges mishmash of Quartz and OpenGL, what is the correct glBlendFunc for doing the fundamental Porter/Duff "over" operation?

    Typical example: a simple paint program. Brush shapes are texture-map patterns created from premultiplied-alpha RGBA images. Paint color is specified via glColor4(...), with the alpha channel used to control paint transparency. GL_MODULATE is used so the brush texture multiplies the (translucent) paint color to blend the color into the canvas. Problem: the texture is premultiplied, the color is not. What is the correct way to handle this fundamental inconsistency? Thanks, Doug
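
    For the record, the Porter/Duff "over" operator for premultiplied sources is the well-known blend configuration below; the usual trick for the modulating paint colour is to premultiply it yourself before submitting it, so both factors of the GL_MODULATE product live in the same space (a sketch):

        /* "Over" for premultiplied-alpha sources. */
        glEnable(GL_BLEND);
        glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);

        /* Premultiply the paint colour so GL_MODULATE keeps everything premultiplied:
           (aT*rgbT) * (aC*rgbC) with alpha aT*aC is still a premultiplied pixel. */
        glColor4f(r * paintAlpha, g * paintAlpha, b * paintAlpha, paintAlpha);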


  • java/swing: Shape questions: serializing and combining

    - by Jason S
    I have two questions about java.awt.Shape. Suppose I have two Shapes, shape1 and shape2.

    1. How can I serialize them in some way, so I can save the information to a file and later recreate it on another computer? (Shape is not Serializable, but it does have the getPathIterator() method, which seems like you could get the information out; it would be kind of a drag, though, and I'm not sure how to reconstruct a Shape object afterwards.)

    2. How can I combine them into a new Shape so that they form a joint boundary? (E.g., if shape1 is a large square and shape2 is a small circle inside the square, I want the combined shape to be a large square with a small circular hole.)
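
    Two hedged sketches. For question 2, java.awt.geom.Area supports constructive-area geometry, so the square-with-hole is a subtract. For question 1, one workable scheme is to flatten the shape into its PathIterator segments (plain doubles, easy to write out and read back) and rebuild a Path2D afterwards; note this minimal version does not preserve the winding rule:

        import java.awt.Shape;
        import java.awt.geom.Area;
        import java.awt.geom.Path2D;
        import java.awt.geom.PathIterator;
        import java.util.ArrayList;
        import java.util.List;

        final class ShapeUtil {

            // (2) Constructive-area geometry: e.g. a square with a circular hole.
            static Shape subtract(Shape outer, Shape hole) {
                Area a = new Area(outer);
                a.subtract(new Area(hole));
                return a;
            }

            // (1) Flatten a Shape into (segment type + 6 coords) records.
            static List<double[]> toSegments(Shape s) {
                List<double[]> segs = new ArrayList<double[]>();
                double[] c = new double[6];
                for (PathIterator it = s.getPathIterator(null); !it.isDone(); it.next()) {
                    int type = it.currentSegment(c);
                    segs.add(new double[] { type, c[0], c[1], c[2], c[3], c[4], c[5] });
                }
                return segs;
            }

            static Path2D fromSegments(List<double[]> segs) {
                Path2D p = new Path2D.Double();
                for (double[] s : segs) {
                    switch ((int) s[0]) {
                        case PathIterator.SEG_MOVETO:  p.moveTo(s[1], s[2]); break;
                        case PathIterator.SEG_LINETO:  p.lineTo(s[1], s[2]); break;
                        case PathIterator.SEG_QUADTO:  p.quadTo(s[1], s[2], s[3], s[4]); break;
                        case PathIterator.SEG_CUBICTO: p.curveTo(s[1], s[2], s[3], s[4], s[5], s[6]); break;
                        case PathIterator.SEG_CLOSE:   p.closePath(); break;
                    }
                }
                return p;
            }
        }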


  • Matlab: Adding symbols to figure

    - by niko
    Hi, below is the user interface I have created to simulate LDPC coding and decoding. The code sequence is decoded iteratively by passing values between the left and right nodes through the connections. The first thing it would be good to add, in order to improve visualization, is arrows on the connections in the direction of passing values; the alternative is to draw a bigger arrow at the top of the connection showing the direction. Another thing I would like to do is display the current mathematical operation below the connection (in this example, c * H'). What I don't know how to do is display special characters, mathematical symbols, and other kinds of text such as subscripts and superscripts in the figure (for example, a sum sign, or a superscript "T" instead of the "'" sign to indicate a transposed matrix). I would be very thankful if anyone could point to any useful resources for the questions above or show the solution. Thank you.
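
    MATLAB's text objects interpret TeX markup by default, and annotation can draw arrows in normalized figure coordinates; a small hedged sketch (positions are placeholder values):

        % Sketch: TeX markup for symbols, sub/superscripts, and an arrow.
        text(0.5, 0.1, 'c \cdot H^T', 'Interpreter', 'tex');  % superscript T for transpose
        text(0.5, 0.2, '\Sigma_i x_i');                       % sum sign with subscripts
        annotation('arrow', [0.3 0.7], [0.5 0.5]);            % arrow in figure units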


  • How do I find the hash value of a 3D vector?

    - by brainydexter
    I am trying to perform broad-phase collision detection with a fixed-grid-size approach. Thus, for each entity's position (x, y, z) (each of type float), I need to find which cell the entity lies in. I then intend to store all the cells in a hash table and iterate through it to report any collisions.

    So, here is what I am doing. The grid cell's position (int type) is:

        (Gx, Gy, Gz) = (x / M, y / M, z / M)

    where M is the size of the grid. Once I have a cell, I'd like to add it to a hash table, with its key being a unique hash based on (Gx, Gy, Gz) and the value being the cell itself. Now, I cannot think of a good hash function and I need some help with that. Can someone please suggest a good hash function? Thanks
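
    A commonly used spatial hash for exactly this case comes from Teschner et al., "Optimized Spatial Hashing for Collision Detection of Deformable Objects" (2003): multiply each integer cell coordinate by a large constant and XOR the results. A sketch:

        // Sketch: spatial hash for integer grid coordinates (Teschner et al. 2003).
        #include <cstddef>

        inline std::size_t HashCell(int gx, int gy, int gz, std::size_t tableSize)
        {
            const std::size_t h = (static_cast<std::size_t>(gx) * 73856093u) ^
                                  (static_cast<std::size_t>(gy) * 19349663u) ^
                                  (static_cast<std::size_t>(gz) * 83492791u);
            return h % tableSize; // tableSize ideally prime
        }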


  • NSPredicate by NSManagedObject for many-to-one lookups

    - by niklassaers
    Hi guys, I've got a scenario with two NSManagedObjects, Arm and Person. Between them is a many-to-one relationship, Person.arms, with inverse Arm.owner. I'd like to write a simple NSPredicate where, given the NSManagedObject *arm, I fetch the NSManagedObject *person that this arm belongs to. I could make a textual representation and look for that, but is there a better way, where I can look it up by identity? Something like this, perhaps?

        NSEntityDescription *person = [NSEntityDescription entityForName:@"Person"
                                                  inManagedObjectContext:MOC];
        NSPredicate *personPredicate = [NSPredicate predicateWithFormat:@"%@ IN arms", arm];

    Cheers, Nik
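
    Given that the inverse relationship Arm.owner already points at the person, a fetch may be unnecessary; a hedged sketch of both routes:

        // Route 1: just traverse the inverse relationship.
        NSManagedObject *person = [arm valueForKey:@"owner"];

        // Route 2: fetch with an identity-based predicate (no string matching).
        NSFetchRequest *request = [[NSFetchRequest alloc] init];
        [request setEntity:[NSEntityDescription entityForName:@"Person"
                                       inManagedObjectContext:MOC]];
        [request setPredicate:[NSPredicate predicateWithFormat:@"ANY arms == %@", arm]];
        NSError *error = nil;
        NSArray *results = [MOC executeFetchRequest:request error:&error];
        [request release];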

