Search Results

Search found 8172 results on 327 pages for 'vector graphics'.

Page 103 of 327

  • Is there a table of OpenGL extensions, versions, and hardware support somewhere?

    - by Thomas
    I'm looking for some resource that can help me decide what OpenGL version my game needs at minimum, and what features to support through extensions. Ideally, a table of the following format:

                        1.0   1.1   1.2   1.2.1   1.3   ...
        multitexture    -     ARB   ARB   core    core
        texture_float   -     EXT   EXT   ARB     ARB
        ...

    (Not sure about the values I put in, but you get the idea.) The extension specs themselves, at opengl.org, list the minimum OpenGL version they need, so that part is easy. However, many extensions were later accepted into the core standard in subsequent OpenGL versions, and it is very hard to find out when that happened. The only way I could find is to compare the full OpenGL standards documents version by version.

    On a related note, I would also very much like to know which extensions/features are supported by which hardware, to help me decide which features I can safely use in my game and which ones I need to make optional. For example, a big honkin' table like this:

                        MAX_TEXTURE_IMAGE_UNITS   MAX_VERTEX_TEXTURE_IMAGE_UNITS   ...
        GeForce 6xxx    8                         4
        GeForce 7xxx    16                        8
        ATi x300        8                         4
        ...

    (Again, I'm making the values up.) The table could list hardware limitations from glGet, but also support for particular extensions and the limitations of that extension support (e.g. which floating-point texture formats are supported in hardware). Any pointers to these or similar resources would be hugely appreciated!

    Read the article

  • When not to do maximum compression in png?

    - by user1444680
    Intro: When saving PNG images through GIMP, I've always used level 9 (maximum) compression, since PNG compression is lossless. Now I have to specify a compression level when saving a PNG image through the GD extension of PHP. Question: Is there any case when I shouldn't compress a PNG to the maximum level, for example a compatibility issue? If there's no problem, why ask the user at all; why not automatically compress to the maximum?

    Read the article

  • Merging photo textures - (from calibrated cameras) - projected onto geometry

    - by freakTheMighty
    I am looking for papers/algorithms for merging projected textures onto geometry. To be more specific: given a set of fully calibrated cameras/photographs and geometry, how can we define a metric for choosing which photograph should be used to texture a given patch of the geometry? I can think of a few attributes one may seek to minimize, including the angle between the surface normal and the camera, the distance of the camera from the surface, as well as some parameterization of sharpness. The question is how these things get combined, and whether there are well-established existing solutions.
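
    There may not be a single established combination, but a simple starting point is a weighted sum of the attributes listed above, evaluated per camera per patch, texturing each patch from (or blending toward) the best-scoring view. A sketch of that idea follows; the weights, the 1/(1+d) distance falloff, and the names are my own illustration, not taken from a particular paper:

        public class ViewScore {
            static double dot(double[] a, double[] b) {
                return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
            }

            static double[] normalize(double[] v) {
                double len = Math.sqrt(dot(v, v));
                return new double[]{v[0] / len, v[1] / len, v[2] / len};
            }

            /**
             * Score one camera for one surface patch; higher is better.
             * normal: patch normal; toCamera: vector from the patch to the camera;
             * sharpness: some per-photo focus measure in [0, 1].
             */
            static double score(double[] normal, double[] toCamera, double sharpness,
                                double wAngle, double wDistance, double wSharpness) {
                double distance = Math.sqrt(dot(toCamera, toCamera));
                double cosAngle = Math.max(0, dot(normalize(normal), normalize(toCamera))); // prefer head-on views
                double proximity = 1.0 / (1.0 + distance);                                  // prefer closer cameras
                return wAngle * cosAngle + wDistance * proximity + wSharpness * sharpness;
            }
        }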

    Read the article

  • Border in DrawRectangle

    - by undsoft
    Well, I'm coding the OnPaint event for my own control, and it is essential for me to make it pixel-accurate. I've got a little problem with the borders of rectangles. See the picture: these two rectangles were drawn with the same location and size parameters, but with different pen widths. See what happened? When the border became larger, it ate into the free space to the left of the rectangle. I wonder if there is some kind of property that makes the border be drawn inside the rectangle, so that the distance to the rectangle always stays the same. Thanks.

    Read the article

  • Level of Detail for 3D terrains/models in Mobile Devices (Android / XNA )

    - by afriza
    I am planning to develop for WP7 and Android. What is the better way to display (and traverse) 3D scenes/models in terms of LoD? The data is planned to be island-wide (Singapore). 1) Real-time dynamic level-of-detail terrain rendering, 2) discrete LoD, 3) others? Please also advise on considerations/algorithms/resources/source code; something like the LoD book is also okay. Side note: I am a beginner in this area but pretty well-versed in C/C++, and I haven't read the LoD book. Related post: Distant 3D object rendering [games]

    Read the article

  • Simple question about the lunarlander example.

    - by Smills
    I am basing my game off the lunarlander example. This is the run loop I am using (very similar to what is used in lunarlander). I am getting considerable performance issues associated with my drawing, even if I draw almost nothing. I noticed the method below. Why is the canvas being created and set to null each cycle?

        @Override
        public void run() {
            while (mRun) {
                Canvas c = null;
                try {
                    c = mSurfaceHolder.lockCanvas(); //null
                    synchronized (mSurfaceHolder) {
                        updatePhysics();
                        doDraw(c);
                    }
                } finally {
                    // do this in a finally so that if an exception is thrown
                    // during the above, we don't leave the Surface in an
                    // inconsistent state
                    if (c != null) {
                        mSurfaceHolder.unlockCanvasAndPost(c);
                    }
                }
            }
        }

    Most of the time when I have read anything about canvases, it is more along the lines of:

        mField = new Bitmap(...dimensions...);
        Canvas c = new Canvas(mField);

    My question is: why is Google's example done that way (null canvas), what are the benefits of this, and is there a faster way to do it?

    Read the article

  • Simulating brush strokes for painting application

    - by DrRobot
    I'm trying to write an application that can be used to create pictures that look like paintings using simulated brush strokes. Are there any good sources for simple ways of simulating brush strokes? For example, given a list of mouse positions that the user has dragged the mouse through, a brush width and a brush texture, how do I determine what to draw to the canvas? I've tried angling the brush texture in the direction of the mouse movement and dabbing several brush texture images along the path, but it doesn't look great. I think I'm missing something where the brush texture should shrink and grow on corners. Any simple to follow links would be appreciated. I've found complex academic papers on simulating e.g. oil paints but I just want a basic algorithm to use that produces OK results if possible.
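
    For what it's worth, here is a minimal Java2D sketch of the dabbing approach described above (fixed spacing, each dab rotated to the direction of travel). Everything here is an illustration under my own assumptions, not a canonical algorithm; the usual next refinements are exactly the ones mentioned, scaling or fading the dab with speed, pressure, or curvature so corners don't look stamped:

        import java.awt.Graphics2D;
        import java.awt.geom.AffineTransform;
        import java.awt.geom.Point2D;
        import java.awt.image.BufferedImage;
        import java.util.List;

        public class BrushStroke {
            /** Stamp the brush texture along the recorded mouse path at a fixed spacing. */
            static void stampStroke(Graphics2D g, BufferedImage brush, List<Point2D> path, double spacing) {
                double sinceLastDab = spacing;                      // forces a dab at the very first point
                for (int i = 1; i < path.size(); i++) {
                    Point2D a = path.get(i - 1), b = path.get(i);
                    double dx = b.getX() - a.getX(), dy = b.getY() - a.getY();
                    double len = Math.hypot(dx, dy);
                    if (len == 0) continue;
                    double angle = Math.atan2(dy, dx);              // stroke direction for this segment
                    for (double d = 0; d < len; d += 1.0) {         // walk the segment in ~1px steps
                        sinceLastDab += 1.0;
                        if (sinceLastDab < spacing) continue;
                        sinceLastDab = 0;
                        AffineTransform t = new AffineTransform();
                        t.translate(a.getX() + dx * d / len, a.getY() + dy * d / len);
                        t.rotate(angle);                            // align the dab with the direction of travel
                        t.translate(-brush.getWidth() / 2.0, -brush.getHeight() / 2.0);
                        g.drawImage(brush, t, null);
                    }
                }
            }
        }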

    Read the article

  • Why are all my masked views unmasked in my view snapshot?

    - by mystify
    I'm taking a snapshot of a view. This view has some subviews which have layer masks applied to them. For some reason, those masks take no effect in the snapshot and the masked parts are completely visible.

        UIGraphicsBeginImageContext(theView.frame.size);
        [theView.layer renderInContext:UIGraphicsGetCurrentContext()];
        UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();

    I assume this is a bug in the framework. But maybe it's not? Did I do anything wrong here?

    Read the article

  • Displaying bitmaps in relative positions

    - by JonF
    I'd like to put a couple images on a surfaceview. I understand that the screen sizes of android devices can vary, so I don't think I can just use an x y position or I might end up placing it off different screens. Say I want to put two boxes in the center of the screen, a blue one and a red one. The blue one is to the left of the red one. How can I accomplish that while accounting for different screen sizes?
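
    One simple way is to compute the draw positions from the Canvas size at draw time (as fractions of it) rather than from hard-coded pixel coordinates. A minimal sketch assuming an Android Canvas and two Bitmaps; the 2%-of-width gap is an arbitrary choice for illustration:

        import android.graphics.Bitmap;
        import android.graphics.Canvas;

        public class BoxDrawer {
            /** Draw a blue box just left of center and a red box just right of it, on any screen size. */
            static void drawBoxes(Canvas canvas, Bitmap blueBox, Bitmap redBox) {
                float cx = canvas.getWidth() / 2f;
                float cy = canvas.getHeight() / 2f;
                float gap = canvas.getWidth() * 0.02f;   // 2% of the screen width between the boxes
                canvas.drawBitmap(blueBox, cx - gap / 2f - blueBox.getWidth(),
                                  cy - blueBox.getHeight() / 2f, null);
                canvas.drawBitmap(redBox, cx + gap / 2f,
                                  cy - redBox.getHeight() / 2f, null);
            }
        }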

    Read the article

  • iPhone animated banner : which framework to use

    - by Julien
    Hi folks, I want to create a little frame to display animated ads in my app. It could be simple little animations, "3D" transitions between ads, or a combination of both. I'm not familiar with graphics frameworks; I have only used CoreGraphics, which I think is not optimized for this. I thought of OpenGL, but maybe that's overkill and takes too many resources just for this little thing. What do you think?

    Read the article

  • How do I integrate a BSDF into a ray caster?

    - by pelb
    I'm trying to implement subsurface scattering at isosurfaces and have looked up how a BSDF works mathematically. Implementing the reflective and diffuse parts seems to be quite easy, as I just have to evaluate Phong at the isosurface intersection, but how do I apply the transmissive part of the BSDF? In what way do I have to modify the ray direction? Any pointers to a practical implementation are welcome. Thanks and so long!
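
    For the transmissive part, the usual change to the ray direction is refraction via Snell's law (often choosing between the reflected and refracted ray using the Fresnel term). A small sketch of the standard refraction formula, not tied to any particular BSDF library; eta is the ratio of indices of refraction (incident side over transmitted side), d the incoming unit direction, n the unit normal facing the incoming ray:

        public class Refraction {
            /** Returns the refracted unit direction, or null on total internal reflection. */
            static double[] refract(double[] d, double[] n, double eta) {
                double cosI = -(d[0] * n[0] + d[1] * n[1] + d[2] * n[2]);
                double k = 1.0 - eta * eta * (1.0 - cosI * cosI);
                if (k < 0) return null;                 // total internal reflection: only the reflected ray applies
                double f = eta * cosI - Math.sqrt(k);
                return new double[]{
                        eta * d[0] + f * n[0],
                        eta * d[1] + f * n[1],
                        eta * d[2] + f * n[2]};
            }
        }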

    Read the article

  • Optimally place a pie slice in a rectangle.

    - by Lisa
    Given a rectangle (w, h) and a pie slice with a start angle and an end angle, how can I place the slice optimally in the rectangle so that it fills the space best (from an optical point of view, not mathematically speaking)? I'm currently placing the pie slice's center in the center of the rectangle and using half of the smaller rectangle side as the radius. This leaves plenty of empty room for certain configurations. Examples to make clear what I'm after, based on the precondition that the slice is drawn like a unit circle: a start angle of 0 and an end angle of PI would lead to a filled lower half of the rectangle and an empty upper half; a good solution here would be to move the center up by 1/4*h. A start angle of 0 and an end angle of PI/2 would lead to a filled bottom-right quarter of the rectangle; a good solution here would be to move the center point to the top left of the rectangle and to set the radius to the smaller of the two rectangle sides. This is fairly easy for the cases I've sketched, but it becomes complicated when the start and end angles are arbitrary. I am searching for an algorithm which determines the center of the slice and the radius in a way that fills the rectangle best. Pseudo code would be great since I'm not a big mathematician.
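
    One way to handle arbitrary angles: compute the bounding box of the unit slice (its tip at the origin, the two arc endpoints, and any axis-extreme points of the circle that the arc passes through), then pick the radius and center so that this box fills the rectangle. A rough sketch under the assumption that 0 <= startAngle <= endAngle <= 2*PI; the names and structure are mine, not from any library:

        import java.util.ArrayList;
        import java.util.List;

        public class SliceFit {
            /** Returns {centerX, centerY, radius} placing the slice's bounding box in the middle of a w x h rectangle. */
            static double[] fit(double w, double h, double startAngle, double endAngle) {
                List<double[]> pts = new ArrayList<>();
                pts.add(new double[]{0, 0});                                    // the slice tip
                pts.add(new double[]{Math.cos(startAngle), Math.sin(startAngle)});
                pts.add(new double[]{Math.cos(endAngle), Math.sin(endAngle)});
                for (double a = 0; a < 2 * Math.PI; a += Math.PI / 2) {         // axis extremes lying on the arc
                    if (a >= startAngle && a <= endAngle) {
                        pts.add(new double[]{Math.cos(a), Math.sin(a)});
                    }
                }
                double minX = Double.POSITIVE_INFINITY, minY = Double.POSITIVE_INFINITY;
                double maxX = Double.NEGATIVE_INFINITY, maxY = Double.NEGATIVE_INFINITY;
                for (double[] p : pts) {
                    minX = Math.min(minX, p[0]); maxX = Math.max(maxX, p[0]);
                    minY = Math.min(minY, p[1]); maxY = Math.max(maxY, p[1]);
                }
                double radius = Math.min(w / (maxX - minX), h / (maxY - minY)); // largest radius that still fits
                double cx = w / 2 - radius * (minX + maxX) / 2;                 // shift so the box is centered
                double cy = h / 2 - radius * (minY + maxY) / 2;
                return new double[]{cx, cy, radius};
            }
        }

    For the half-circle example above (0 to PI in a square h x h), this gives a radius of h/2 and moves the center up by h/4, matching the hand-picked solution.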

    Read the article

  • [CA_COLOR_OPAQUE] things that make a layer non-opaque. scaled CAGradientLayer?

    - by mahal tertin
    I spent some time with the environment variable CA_COLOR_OPAQUE = 1 and have my findings to share. Things that make a CALayer non-opaque (slow, more memory, ...):
      - contents with alpha (like an NSImage with an icon)
      - an NSImage/CGImage from a PDF as contents (even when the PDF does not contain any alpha and opaque = YES)
      - backgroundColor = nil
      - a CATextLayer with text (because it is contents with alpha)
      - rounded corners? maybe/sometimes
      - masksToBounds? not necessarily, as we scale most of the tree with CATransform3DScale on sublayerTransform
    I also found these rather irritating non-opaque cases:
      - a CAGradientLayer that is somewhere down in this scaled tree (even when all the gradient colors are set without alpha)
      - edgeAntialiasingMask != 0 of a layer that is somewhere down in this scaled tree
    The last two do not make sense to me. Why should they be non-opaque? What am I seeing? If anyone has any thoughts on these findings, I'm happy to learn, as I couldn't find such a list yet.

    Read the article

  • How can I draw a shadow beyond a UIView's bounds?

    - by Christian
    I'm using the method described at http://stackoverflow.com/questions/805872/how-do-i-draw-a-shadow-under-a-uiview to draw a shadow behind a view's content. The shadow is clipped to the view's bounds, although I disabled "Clip Subviews" in Interface Builder for the view. Is it possible to draw a shadow around a view and not only inside it? I don't want to draw the shadow inside the view, because the view would then receive touch events for the shadow area, which really belongs to the background.

    Read the article

  • How does Photoshop (or drawing programs) blit?

    - by user146780
    I'm getting ready to make a drawing application in Windows. I'm just wondering: do drawing programs keep a memory bitmap which they lock, then set each pixel, then blit? I don't understand how Photoshop can move entire layers without lag or flicker without using hardware acceleration. Also, in a program like Expression Design, I could have 200 shapes and move them around all at once with no lag. I'm really wondering how this can be done without GPU help; I don't think super-efficient algorithms alone could explain it. Thanks
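
    A rough sketch of the "offscreen bitmap then blit" pattern the question describes, in Java2D purely for illustration (class and method names are mine): each layer is kept as its own bitmap, the visible composite is rebuilt into a back buffer only for the dirty region, and the screen gets a single blit. Moving a layer then costs one composite pass over the affected area rather than per-pixel work on the whole canvas:

        import java.awt.Graphics;
        import java.awt.Graphics2D;
        import java.awt.Rectangle;
        import java.awt.image.BufferedImage;
        import java.util.List;

        public class Compositor {
            private final BufferedImage backBuffer;

            public Compositor(int width, int height) {
                backBuffer = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
            }

            /** Re-composite only the dirty region; assumes the bottom layer is opaque and covers the canvas. */
            public void composite(List<BufferedImage> layers, List<int[]> offsets, Rectangle dirty) {
                Graphics2D g = backBuffer.createGraphics();
                g.setClip(dirty);                       // pixels outside the dirty rect stay untouched
                for (int i = 0; i < layers.size(); i++) {
                    int[] off = offsets.get(i);
                    g.drawImage(layers.get(i), off[0], off[1], null);
                }
                g.dispose();
            }

            /** Single blit of the composite to the screen (or window) graphics. */
            public void present(Graphics screen) {
                screen.drawImage(backBuffer, 0, 0, null);
            }
        }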

    Read the article

  • How to change the coordinate of a point that is inside a GraphicsPath?

    - by Ben
    Is there any way to change the coordinates of some of the points within a GraphicsPath object while leaving the other points where they are? The GraphicsPath object that gets passed into my method will contain a mixture of polygons and lines. My method would want to look something like:

        void UpdateGraphicsPath(GraphicsPath gPath, RectangleF regionToBeChanged, PointF delta)
        {
            // Find the points in gPath that are inside regionToBeChanged
            // and move them by delta.
            // gPath.PathPoints[i].X += delta.X; // Compiles but doesn't work
        }

    GraphicsPath.PathPoints seems to be read-only, and so does GraphicsPath.PathData.Points. So I am wondering if this is even possible. Perhaps generating a new GraphicsPath object with an updated set of points? How can I know if a point is part of a line or a polygon? If anyone has any suggestions I would be grateful.

    Read the article

  • What OpenGL functions are not GPU accelerated?

    - by Xavier Ho
    I was shocked when I read this (from the OpenGL wiki):

        glTranslate, glRotate, glScale
        Are these hardware accelerated?
        No, there are no known GPUs that execute this. The driver computes the matrix on the CPU and uploads it to the GPU. All the other matrix operations are done on the CPU as well: glPushMatrix, glPopMatrix, glLoadIdentity, glFrustum, glOrtho. This is the reason why these functions are considered deprecated in GL 3.0. You should have your own math library, build your own matrix, upload your matrix to the shader.

    For a very, very long time I thought most of the OpenGL functions used the GPU to do computation. I'm not sure if this is a common misconception, but after a while of thinking, it makes sense: old OpenGL functions (2.x and older) are really not suitable for real-world applications, due to too many state switches. This made me realise that, possibly, many OpenGL functions do not use the GPU at all. So, the question is: which OpenGL function calls don't use the GPU? I believe knowing the answer to the above question would help me become a better programmer with OpenGL. Please do share some of your insights.
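
    The "build your own matrix, upload your matrix to the shader" advice can be illustrated with a short sketch. This uses Android's android.opengl helpers purely as an example math library (the shader program handle and the uniform name "u_MVP" are assumptions, not from the quoted wiki); on desktop GL the idea is the same with any CPU-side matrix library and glUniformMatrix4fv:

        import android.opengl.GLES20;
        import android.opengl.Matrix;

        public class MatrixUpload {
            // Build the transform on the CPU, then hand it to the shader; only the final
            // glUniformMatrix4fv call goes through the GL driver.
            static void uploadTransform(int program) {
                float[] mvp = new float[16];
                Matrix.setIdentityM(mvp, 0);
                Matrix.translateM(mvp, 0, 0f, 0f, -5f);   // roughly what glTranslate used to do
                Matrix.rotateM(mvp, 0, 45f, 0f, 1f, 0f);  // roughly what glRotate used to do
                int location = GLES20.glGetUniformLocation(program, "u_MVP");
                GLES20.glUniformMatrix4fv(location, 1, false, mvp, 0);
            }
        }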

    Read the article

  • .GIF re-edit! Can't figure it out!

    - by Adam C
    http://img227.imageshack.us/img227/1892/hatersgonna.gif That is the image. I am trying to crop it so it's a little smaller and make him walk in the opposite direction. The reason I am doing this is for a vBulletin forum signature, since it marquees from left to right. I have tried editing the animation in Photoshop and I flipped the canvas horizontally, but I can't figure this out; I've been at it for hours. Also, if anyone could make it just a little darker, that would be amazing. No, I'm not asking for free help, but any help would be great. Thank you so much.

    Read the article

  • Code Interaction with Quartz Composition

    - by Alberto MQO
    Hi, I have a Quartz Composition with a cube, and the X/Y/Z rotation inputs are published. In Interface Builder I made a QCView and a QCPatchController with the previous Quartz Composition loaded. The QCView is bound to the patch controller, and the published rotation ports are bound to three NSSliders, so when I change the value of the NSSliders the cube rotates. All this works fine, but I want to change the rotation values of the cube from the app delegate in Xcode. I tried to change the value of the NSSliders through IBOutlets pointing to them, but this change doesn't apply to the cube, like it does when I change the sliders directly with my mouse. What should I instantiate, and/or how do I access and change this Input_Ports.value through the QCPatchController? Thank you very much for reading; I really need help!

    Read the article
