Search Results

Search found 76 results on 4 pages for 'distortion'.

Page 2/4 | < Previous Page | 1 2 3 4  | Next Page >

  • my Search method is coming up with all nulls

    - by Epic.Distortion
    Let me give a quick explanation. I took a 5-week course on Java through a company in July. They covered the basics: console apps, CRUD operations, MySQL, and n-tier architecture. Since the course ended I haven't used it much, because I went back to work and other medical reasons surfaced. I was told by the company to make a simple program to reflect what I learned, and it turns out I retained very little. I decided to make a video game storage program: it would be used to catalog your video games so you wouldn't have to search your bookcase (or however you store your games). It is a basic console app using CRUD operations against MySQL. I can't get my search function to actually work. I have two layers, a presentation layer and a logic layer. The search method lets the user search for a game by its title. When I run the program and use Search, it only displays the title and the rest is null. Here is my presentation layer:

        private static Games SearchForGame() {
            Logic aref = new Logic();
            Games g = new Games();
            Scanner scanline = new Scanner(System.in);
            System.out.println("Please enter the name of the game you wish to find:");
            g.setTitle(scanline.nextLine());
            aref.SearchGame();
            System.out.println();
            System.out.println("Game Id: " + g.getGameId());
            System.out.println("Title: " + g.getTitle());
            System.out.println("Rating: " + g.getRating());
            System.out.println("Platform: " + g.getPlatform());
            System.out.println("Developer: " + g.getDeveloper());
            return g;
        }

    and here is my logic layer:

        public Games SearchGame() {
            Games g = new Games();
            try {
                Class.forName(driver).newInstance();
                Connection conn = DriverManager.getConnection(url+dbName,userName,password);
                java.sql.PreparedStatement statement = conn.prepareStatement("SELECT GameId,Title,Rating,Platform,Developer FROM games WHERE Title=?");
                statement.setString(1, g.getTitle());
                ResultSet rs = statement.executeQuery();
                while(rs.next()){
                    g.setGameId(rs.getInt("GameId"));
                    g.setTitle(rs.getString("Title"));
                    g.setRating(rs.getString("Rating"));
                    g.setPlatform(rs.getString("Platform"));
                    g.setDeveloper(rs.getString("Developer"));
                    statement.executeUpdate();
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
            return g;
        }

    Here is also my last result:

        Please enter the name of the game you wish to find:
        Skyrim

        Game Id: 0
        Title: Skyrim
        Rating: null
        Platform: null
        Developer: null

    Any help would be greatly appreciated; thanks in advance.

    EDIT: here is the code for my Games class:

        public class Games {
            public int GameId;
            public String Title;
            public String Rating;
            public String Platform;
            public String Developer;

            public int getGameId() { return GameId; }
            public int setGameId(int gameId) { return GameId = gameId; }
            public String getTitle() { return Title; }
            public String setTitle(String title) { return Title = title; }
            public String getRating() { return Rating; }
            public void setRating(String rating) { Rating = rating; }
            public String getPlatform() { return Platform; }
            public void setPlatform(String platform) { Platform = platform; }
            public String getDeveloper() { return Developer; }
            public void setDeveloper(String developer) { Developer = developer; }
        }
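    For what it's worth, here is a minimal, hedged sketch of one way the two layers could be wired so the query actually receives the title the user typed and the presentation layer prints the object that comes back from the database. Method and field names follow the code above, the SQL is unchanged, and this is only one possible arrangement, not necessarily how the course intended the layers to communicate.

        // Logic layer: accept the title as a parameter and return the populated object.
        public Games SearchGame(String title) {
            Games g = new Games();
            try {
                Class.forName(driver).newInstance();
                Connection conn = DriverManager.getConnection(url + dbName, userName, password);
                java.sql.PreparedStatement statement = conn.prepareStatement(
                        "SELECT GameId,Title,Rating,Platform,Developer FROM games WHERE Title=?");
                statement.setString(1, title);      // use the title passed in, not a freshly created Games object
                ResultSet rs = statement.executeQuery();
                if (rs.next()) {                    // a SELECT needs no executeUpdate()
                    g.setGameId(rs.getInt("GameId"));
                    g.setTitle(rs.getString("Title"));
                    g.setRating(rs.getString("Rating"));
                    g.setPlatform(rs.getString("Platform"));
                    g.setDeveloper(rs.getString("Developer"));
                }
                rs.close();
                statement.close();
                conn.close();
            } catch (Exception e) {
                e.printStackTrace();
            }
            return g;
        }

        // Presentation layer: keep the object the logic layer returns instead of printing the local one.
        Games g = aref.SearchGame(scanline.nextLine());
        System.out.println("Game Id: " + g.getGameId());
        System.out.println("Title: " + g.getTitle());
        // ... and so on for the remaining fields.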

    Read the article

  • OpenCV haar training for static image

    - by Evl-ntnt
    I am trying to train a Haar cascade classifier for card-suit detection (the suit has no rotation and no distortion in the image). For example, I have a file Clubs.png containing a clubs image on a white background, 20x20 pixels. This tutorial is rather tangled: http://note.sonots.com/SciSoftware/haartraining.html My image varies only in size, with no distortion or angling. Which commands must I enter to end up with a Clubs.xml file?
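    A rough sketch of the usual pipeline with OpenCV's legacy training tools, assuming you also prepare a bg.txt listing negative (background) images; every file name, count and size below is a placeholder to adjust:

        # generate positive samples by pasting Clubs.png onto the backgrounds
        opencv_createsamples -img Clubs.png -bg bg.txt -vec clubs.vec \
            -num 1000 -maxxangle 0 -maxyangle 0 -maxzangle 0 -w 20 -h 20

        # train the cascade (clubs_cascade is an output directory you create)
        opencv_haartraining -data clubs_cascade -vec clubs.vec -bg bg.txt \
            -npos 1000 -nneg 2000 -nstages 15 -w 20 -h 20

    The trained stages under clubs_cascade can then be converted into a single XML cascade (newer OpenCV versions ship opencv_traincascade instead, which writes the XML directly). If the mark really never varies in anything but size, plain template matching over an image pyramid may be simpler than training a cascade at all.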

    Read the article

  • How to blend multiple normal maps?

    - by János Turánszki
    I want to achieve a distortion effect which distorts the full screen. For that I spawn a couple of images with normal maps. I render their normal-map part on some camera-facing quads onto a render target which is cleared to the color (127,127,255,255); this color means there is no distortion at all. Then I want to render some images like this one onto it. If I draw one somewhere on the screen, it looks correct because it blends in seamlessly with the background (which is the same color that appears on the edges of the image). If I draw another one on top of it, the transition is no longer seamless. For this I created a blend state in DirectX 11 that keeps the maximum of the two colors, so the transition is seamless again, but this way the channels lower than 127 (0.5f normalized) do not contribute. I am not making a simulation and the effect looks quite convincing and nice for a game, but in my spare time I am wondering how I could achieve a nicer or more correct effect with a blend state, maybe by averaging the colors somehow? If I did it with a shader, I would add the colors and then normalize them, but I need to combine an arbitrary number of images onto one render target. This is my blend state now, which blends them seamlessly but not correctly:

        D3D11_BLEND_DESC bd;
        bd.RenderTarget[0].BlendEnable = true;
        bd.RenderTarget[0].SrcBlend = D3D11_BLEND_SRC_ALPHA;
        bd.RenderTarget[0].DestBlend = D3D11_BLEND_INV_SRC_ALPHA;
        bd.RenderTarget[0].BlendOp = D3D11_BLEND_OP_MAX;
        bd.RenderTarget[0].SrcBlendAlpha = D3D11_BLEND_ONE;
        bd.RenderTarget[0].DestBlendAlpha = D3D11_BLEND_ZERO;
        bd.RenderTarget[0].BlendOpAlpha = D3D11_BLEND_OP_MAX;
        bd.RenderTarget[0].RenderTargetWriteMask = 0x0f;

    Is there any way of improving upon this? (P.S. I considered rendering each one with a separate shader incrementally on top of each other, but that would consume a lot of render targets, which is unacceptable.)
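    One hedged alternative, assuming the offset buffer can be switched to a signed or floating-point format (e.g. R16G16_FLOAT) cleared to zero: have the pixel shader output the offset relative to neutral, i.e. (normal.xy - 0.5) scaled by its alpha, and accumulate contributions with plain additive blending. Summing signed offsets lets values below 0.5 subtract as well as add, and the final distortion pass reads the accumulated offset directly (adding back 0.5 only if it needs the old encoding). A sketch of such a blend state:

        D3D11_BLEND_DESC bd = {};
        bd.RenderTarget[0].BlendEnable = TRUE;
        bd.RenderTarget[0].SrcBlend = D3D11_BLEND_ONE;          // accumulate signed offsets
        bd.RenderTarget[0].DestBlend = D3D11_BLEND_ONE;
        bd.RenderTarget[0].BlendOp = D3D11_BLEND_OP_ADD;
        bd.RenderTarget[0].SrcBlendAlpha = D3D11_BLEND_ONE;
        bd.RenderTarget[0].DestBlendAlpha = D3D11_BLEND_ONE;
        bd.RenderTarget[0].BlendOpAlpha = D3D11_BLEND_OP_ADD;
        bd.RenderTarget[0].RenderTargetWriteMask = D3D11_COLOR_WRITE_ENABLE_ALL;
        ID3D11BlendState* additiveBlend = nullptr;
        device->CreateBlendState(&bd, &additiveBlend);          // 'device' is assumed to be your ID3D11Device

    True averaging (rather than summing) would additionally require tracking how many quads touched each pixel, for example by accumulating a count in a spare channel and dividing in the final pass.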

    Read the article

  • Why are there 3 conflicting OpenCV camera calibration formulas?

    - by John
    I'm having a problem with OpenCV's various parameterizations of the coordinates used for camera calibration. The problem is that three different sources of information on the image distortion formulae apparently give three non-equivalent descriptions of the parameters and equations involved:

    (1) In their book "Learning OpenCV…" Bradski and Kaehler write regarding lens distortion (page 376):

        xcorrected = x * ( 1 + k1 * r^2 + k2 * r^4 + k3 * r^6 ) + [ 2 * p1 * x * y + p2 * ( r^2 + 2 * x^2 ) ],
        ycorrected = y * ( 1 + k1 * r^2 + k2 * r^4 + k3 * r^6 ) + [ p1 * ( r^2 + 2 * y^2 ) + 2 * p2 * x * y ],

    where r = sqrt( x^2 + y^2 ). Presumably, (x, y) are the coordinates of pixels in the uncorrected captured image corresponding to world-point objects with coordinates (X, Y, Z), referenced to the camera frame, for which

        xcorrected = fx * ( X / Z ) + cx  and  ycorrected = fy * ( Y / Z ) + cy,

    where fx, fy, cx, and cy are the camera's intrinsic parameters. So, having (x, y) from a captured image, we can obtain the desired coordinates ( xcorrected, ycorrected ) to produce an undistorted image of the captured world scene by applying the first two correction expressions above. However...

    (2) The complication arises when we look at the OpenCV 2.0 C Reference entry under the Camera Calibration and 3D Reconstruction section. For ease of comparison we start with all world-point (X, Y, Z) coordinates being expressed with respect to the camera's reference frame, just as in #1. Consequently, the transformation matrix [ R | t ] is of no concern. In the C reference, it is stated that:

        x' = X / Z,
        y' = Y / Z,
        x'' = x' * ( 1 + k1 * r'^2 + k2 * r'^4 + k3 * r'^6 ) + [ 2 * p1 * x' * y' + p2 * ( r'^2 + 2 * x'^2 ) ],
        y'' = y' * ( 1 + k1 * r'^2 + k2 * r'^4 + k3 * r'^6 ) + [ p1 * ( r'^2 + 2 * y'^2 ) + 2 * p2 * x' * y' ],

    where r' = sqrt( x'^2 + y'^2 ), and finally that

        u = fx * x'' + cx,
        v = fy * y'' + cy.

    As one can see, these expressions are not equivalent to those presented in #1, with the result that the two sets of corrected coordinates ( xcorrected, ycorrected ) and ( u, v ) are not the same. Why the contradiction? It seems to me the first set makes more sense, as I can attach physical meaning to each and every x and y in there, while I find no physical meaning in x' = X / Z and y' = Y / Z when the camera focal length is not exactly 1. Furthermore, one cannot compute x' and y', for we don't know (X, Y, Z).

    (3) Unfortunately, things get even murkier when we refer to the writings in Intel's Open Source Computer Vision Library Reference Manual's section Lens Distortion (page 6-4), which states in part: "Let ( u, v ) be true pixel image coordinates, that is, coordinates with ideal projection, and ( u~, v~ ) be corresponding real observed (distorted) image coordinates. Similarly, ( x, y ) are ideal (distortion-free) and ( x~, y~ ) are real (distorted) image physical coordinates. Taking into account two expansion terms gives the following:

        x~ = x * ( 1 + k1 * r^2 + k2 * r^4 ) + [ 2 * p1 * x * y + p2 * ( r^2 + 2 * x^2 ) ]
        y~ = y * ( 1 + k1 * r^2 + k2 * r^4 ) + [ 2 * p2 * x * y + p1 * ( r^2 + 2 * y^2 ) ],

    where r = sqrt( x^2 + y^2 ). ... Because u~ = cx + fx * u and v~ = cy + fy * v, … the resultant system can be rewritten as follows:

        u~ = u + ( u – cx ) * [ k1 * r^2 + k2 * r^4 + 2 * p1 * y + p2 * ( r^2 / x + 2 * x ) ]
        v~ = v + ( v – cy ) * [ k1 * r^2 + k2 * r^4 + 2 * p2 * x + p1 * ( r^2 / y + 2 * y ) ]

    The latter relations are used to undistort images from the camera."

    Well, it would appear that the expressions involving x~ and y~ coincide with the two expressions given at the top of this writing involving xcorrected and ycorrected. However, x~ and y~ do not refer to corrected coordinates, according to the given description. I don't understand the distinction between the meaning of the coordinates ( x~, y~ ) and ( u~, v~ ), or, for that matter, between the pairs ( x, y ) and ( u, v ). From their descriptions it appears their only distinction is that ( x~, y~ ) and ( x, y ) refer to 'physical' coordinates while ( u~, v~ ) and ( u, v ) do not. What is this distinction all about? Aren't they all physical coordinates? I'm lost! Thanks for any input!
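    For reference, here is formulation (2) written out in one piece, with the rotation and translation taken as identity and zero as above. This is only a restatement of the OpenCV reference model in consistent notation, not a resolution of the discrepancy:

        \begin{aligned}
        x' &= X/Z, \qquad y' = Y/Z, \qquad r'^2 = x'^2 + y'^2,\\
        x'' &= x'\,(1 + k_1 r'^2 + k_2 r'^4 + k_3 r'^6) + 2 p_1 x' y' + p_2 (r'^2 + 2 x'^2),\\
        y'' &= y'\,(1 + k_1 r'^2 + k_2 r'^4 + k_3 r'^6) + p_1 (r'^2 + 2 y'^2) + 2 p_2 x' y',\\
        u &= f_x\, x'' + c_x, \qquad v = f_y\, y'' + c_y.
        \end{aligned}

    In this reading, (x', y') are normalized (focal-length-independent) image coordinates, and the distortion is applied before the intrinsics map them to pixels.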

    Read the article

  • Coordinates in distorted grid

    - by Carsten
    I have a grid in a 2D system like the one in the "before" image, where all points A, B, C, D, A', B', C', D' are given (meaning I know their x- and y-coordinates). I need to calculate the x- and y-coordinates of A(new), B(new), C(new) and D(new) when the grid is distorted (so that A' moves to A'(new), B' moves to B'(new), C' moves to C'(new) and D' moves to D'(new)). The distortion works in such a way that each line of the grid is divided into sub-lines of equal length (meaning, for example, that AB is divided into 5 parts of equal length |AB|/5 and A(new)B(new) is divided into 5 parts of equal length |A(new)B(new)|/5). The distortion is done with the DistortImage class of the Sandy 3D Flash engine. (My practical task is to distort an image using this class where the handles are not positioned at the corners of the image, like in this demo, but somewhere within it.)
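    A minimal sketch (plain JavaScript rather than ActionScript, and assuming the distorted quad, like the original, subdivides its edges uniformly) of how an interior grid point can be recomputed from the four new corners by bilinear interpolation; (s, t) is the point's fractional position in the undistorted grid, e.g. s = 2/5 for the second of five subdivisions along AB:

        function lerp(p, q, t) {
          return { x: p.x + (q.x - p.x) * t, y: p.y + (q.y - p.y) * t };
        }

        // corners assumed ordered A-B along the top edge and D-C along the bottom edge
        function mapPoint(Anew, Bnew, Cnew, Dnew, s, t) {
          var top = lerp(Anew, Bnew, s);       // point s of the way along the new top edge
          var bottom = lerp(Dnew, Cnew, s);    // point s of the way along the new bottom edge
          return lerp(top, bottom, t);         // blend between the two edges
        }

    Whether this matches what Sandy's DistortImage does internally is an assumption; if DistortImage applies a projective rather than bilinear mapping, the interior points would instead need a perspective-correct (homography) interpolation.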

    Read the article

  • D3 fisheye on width on bar chart

    - by Dexter Tan
    I have been trying to create a vertical bar chart with a D3 fisheye Cartesian distortion in which only the x-axis is distorted. I have succeeded in distorting the x position of the vertical bars on mouseover with the following code:

        var maxMag = d3.max(dataset, function(d) { return d.value[10]; });
        var minDate = d3.min(dataset, function(d) { return new Date(d.value[1], d.value[2]-1, d.value[3]).getTime(); });
        var maxDate = d3.max(dataset, function(d) { return new Date(d.value[1], d.value[2]-1, d.value[3]).getTime(); });

        var yScale = d3.scale.linear().domain([0, maxMag]).range([0, h]);
        var xScale = d3.fisheye.scale(d3.scale.linear).domain([minDate, maxDate]).range([0, w]).focus(w/2);

        var bar = svg.append("g")
            .attr("class", "bars")
            .selectAll(".bar")
            .data(dataset)
            .enter().append("rect")
            .attr("class", "bar")
            .attr("y", function(d) { return h - yScale(d.value[10]); })
            .attr("width", w/dataset.length)
            .attr("height", function(d) { return yScale(d.value[10]); })
            .attr("fill", function(d) { return (d.value[10] <= 6 ? "yellow" : "orange"); })
            .call(position);

        // Positions the bars based on data.
        function position(bar) {
            bar.attr("x", function(d) {
                var date = null;
                if (d.date != null) {
                    date = d.date;
                } else {
                    date = new Date(d.value[1],0,1);
                    if (d.value[2] != null) date.setMonth(d.value[2]-1);
                    if (d.value[3] != null) date.setMonth(d.value[3]);
                    d.date = date;
                }
                return xScale(date.getTime());
            });
        }

        svg.on("mousemove", function() {
            var mouse = d3.mouse(this);
            xScale.distortion(2.5).focus(mouse[0]);
            bar.call(position);
        });

    However, at this point applying the fisheye to the width remains a mystery to me. I have tried several approaches, such as using a separate fisheye scale for the width, but they do not work as expected. What I want is for the width of a bar to expand on mouseover, the same way a single vertical bar is singled out on mouseover by the Cartesian distortion. Any clues or help would be much appreciated! Edit: see http://dexter.xumx.me for the visualisation I am talking about.
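    One hedged way to get the width to follow the distortion is not to give the width its own scale at all, but to derive it from the same fisheye x-scale: a bar's width is the distance between the distorted positions of its start and end. A sketch, assuming d.date has already been computed as in the position() function above and that bars sit one uniform "slot" apart in the original domain:

        var slot = (maxDate - minDate) / dataset.length;   // assumed per-bar span in domain (millisecond) units

        function position(bar) {
            bar.attr("x", function(d) { return xScale(d.date.getTime()); })
               .attr("width", function(d) {
                   var t0 = d.date.getTime();
                   // width = distorted right edge minus distorted left edge
                   return Math.max(1, xScale(t0 + slot) - xScale(t0));
               });
        }

    Because both edges pass through the same distorted scale, bars near the focus widen and bars far from it shrink, matching the x displacement automatically.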

    Read the article

  • Automatic adaptive gain normalization

    - by Eduardo
    How can I normalize a voice recording in MP3 or AAC format without loss, raising the gain as much as possible with little distortion, so that over a long conversation people who speak softly get more gain and people who speak loudly get less?

    Read the article

  • Pixel Shader - apply a mask (XNA)

    - by Michal Bozydar Pawlowski
    I'd like to apply a few simple masks to a few images. The first mask I'd like to implement looks like:

        XXXOOO

    I mean that on the right everything is masked (to black), and on the left everything is left unchanged. The second mask I'd like to implement is a glow mask, something like this:

           O
          O***O
         O**X**O
          O***O
           O

    That is, a circular mask: in the center everything is kept unchanged, and moving outward from the circle everything fades to black. The last mask is irregular, for example:

        OOO*
        O**X**O
        OO**OO**O
        OO*X*O
        O*O
        O

    where O means "to black", * means "to gray", and X means "unchanged". I've read how to apply a distortion pixel shader in XNA: msdn Could you explain how to apply such a mask to an image? (The mask will be grayscale.)
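    A minimal sketch (standard XNA-era HLSL, not taken from the linked article; the sampler registers and the grayscale convention — mask value 0 = black, 1 = unchanged, values in between = darkened — are assumptions) of a pixel shader that applies a grayscale mask texture to an image:

        sampler2D SceneSampler : register(s0);   // the image being masked
        sampler2D MaskSampler  : register(s1);   // grayscale mask, same UV space

        float4 MaskPS(float2 uv : TEXCOORD0) : COLOR0
        {
            float4 scene = tex2D(SceneSampler, uv);
            float  m     = tex2D(MaskSampler, uv).r;   // 0 = black, 1 = keep, 0.5 = halfway darkened
            return float4(scene.rgb * m, scene.a);
        }

        technique Mask
        {
            pass P0
            {
                PixelShader = compile ps_2_0 MaskPS();
            }
        }

    In XNA you would draw with SpriteBatch using this Effect and bind the mask as a second texture; each of the three masks then becomes just a different grayscale texture, with no shader changes needed.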

    Read the article

  • Avatar (spoiler alert!)

    - by Dave Yasko
    This past weekend we finally saw “Avatar,” or as I like to call it “Dances with Smurfs.”  It was rather light on the story, heavy on the message, and incredibly well done.  The eye for detail is what blew me away, especially the visual distortion (presumably due to density) when the two atmospheres mixed.  The only thing I thought they might have missed was why so many (presumably) mammals had 6 appendages (4 arms + 2 legs) and breathed through passages near their clavicles, but the Na’vi (sp?) didn’t have/do either.  Also, James Cameron just loves to telegraph upcoming events: Riding the big red bird thing has only happened 5 times before – Sully is on the job.  The tree is going to download dying Ripley to her avatar body – Sully is going to do that too.  I’ve seen worse foreshadowing, but not in a long time. I give it 4 Papa Smurfs out of 5.

    Read the article

  • How can I create animated card graphics like in Hearthstone?

    - by Appeltaart
    In the game Hearthstone, there are cards with animated images on them. A few examples:

        http://www.hearthhead.com/card=281/argent-commander
        http://www.hearthhead.com/card=469/blood-imp

    The animations seem to be composed of multiple effects:

        - Particle systems
        - Fading sprites in and out / rotating them
        - Simple scrolling textures
        - A distortion effect, very evident in the cape and hair of example 1
        - Swirling smoke effects, like the light in example 1 and the green/purple glow in example 2

    The first three elements are trivial; what I'd like to know is how the last two could be done. Can this even be done in real time in a game, or are they pre-rendered animations?
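    On the distortion effect: a common real-time approach (a guess at how Hearthstone might do it, not anything confirmed by the developers) is to scroll a tiling noise or normal texture over time and use it to offset the UVs of the card art, confined by a grayscale mask to the regions that should move (cape, hair, smoke). A small GLSL sketch, where the uniform names and the 0.02 strength are made up for illustration:

        #version 120
        uniform sampler2D artTex;     // the card art
        uniform sampler2D noiseTex;   // tiling noise / normal map
        uniform sampler2D maskTex;    // grayscale: where the art is allowed to wobble
        uniform float time;
        varying vec2 UV;

        void main() {
            vec2 scrolled = UV + vec2(time * 0.05, time * 0.03);              // drift the noise
            vec2 offset = (texture2D(noiseTex, scrolled).rg - 0.5) * 0.02;    // signed UV offset
            float m = texture2D(maskTex, UV).r;
            gl_FragColor = texture2D(artTex, UV + offset * m);
        }

    The swirling glow is typically the same trick applied to an additive "glow" layer (a soft bright texture scrolled or rotated and blended on top), so effects like these are perfectly feasible at runtime rather than pre-rendered.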

    Read the article

  • Rendering output to arbitrary quadrilateral

    - by Trainee4Life
    I want to render output on a device to an arbitrary quadrilateral, i.e. project a texture onto a quad. What are the possible ways I could implement this? So far I have investigated:

        - Drawing a textured quadrilateral: the quad looks odd because it is composed of two triangles, and the distortion across them looks wrong. The issue I'm facing has been discussed here and here as well.
        - Setting a transformation on the device: I need help getting this implemented.
        - Pixel shaders: I was not able to implement the desired effect.

    Any help would be much appreciated.
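    One hedged direction, since the two-triangle seam is the classic symptom of affine interpolation across a projectively mapped quad: use homogeneous (projective) texture coordinates. Each corner i gets (u_i * q_i, v_i * q_i, q_i) instead of (u_i, v_i), and the pixel shader divides per pixel; the q_i weights come from the quad's geometry (a common recipe derives them from the distances of each corner to the intersection of the diagonals). A shader-side sketch in HLSL, with the register and semantic choices assumed:

        sampler2D tex : register(s0);

        // per-vertex data: the CPU sets uvq = float3(u * q, v * q, q) for each corner
        float4 ProjectiveQuadPS(float3 uvq : TEXCOORD0) : COLOR0
        {
            float2 uv = uvq.xy / uvq.z;   // per-pixel divide restores the projective mapping
            return tex2D(tex, uv);
        }

    The low-tech alternative is to tessellate the quad into many small triangles so the affine error becomes invisible, which needs no shader changes at all.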

    Read the article

  • Oculus Rift with Antichamber

    - by Scott Hainline
    Antichamber runs great on Linux (Steam version), but it is not playable with the Oculus Rift at this point. The issues are:

        1) no head tracking
        2) the graphics are not being split and distorted by the Oculus SDK

    My current plan is to use LD_PRELOAD to add the functionality; this seems to be the Linux equivalent of DLL injection. Antichamber appears to be using SDL, and I'm hoping this can be configured to feed the head-tracking data in as a joystick and to apply the graphics distortion, but I am not sure which functions I should be looking for. Is there a simpler way of resolving these issues? Is SDL the right choice here? I would appreciate any information on how Unreal Engine 3 works under Linux, and on library injection too.
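    For the LD_PRELOAD part, here is a minimal C skeleton of the general technique: export a function with the same name as one the game calls, look up the real one with dlsym(RTLD_NEXT, ...), do your own work around it, and forward the call. Which symbol to hook is an assumption — SDL_GL_SwapWindow is SDL2's buffer-swap call and a natural frame boundary for a distortion pass, but the game may link SDL 1.2 (SDL_GL_SwapBuffers) or bypass SDL for rendering entirely, so check with ltrace or nm first.

        #define _GNU_SOURCE
        #include <dlfcn.h>
        #include <stdio.h>

        typedef void (*swap_fn)(void *window);

        void SDL_GL_SwapWindow(void *window)
        {
            static swap_fn real_swap = NULL;
            if (!real_swap)
                real_swap = (swap_fn)dlsym(RTLD_NEXT, "SDL_GL_SwapWindow");

            /* Hypothetical hook point: apply the Rift barrel-distortion
               post-process and inject head-tracking input here. */
            fprintf(stderr, "frame boundary hooked\n");

            real_swap(window);   /* hand control back to the real SDL */
        }

    Build it as a shared object and preload it, roughly: gcc -shared -fPIC -o librift.so shim.c -ldl, then LD_PRELOAD=./librift.so %command% in the game's Steam launch options.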

    Read the article

  • Bad texture on model with different GPU

    - by Pacha
    I have some kind of distortion on the texture of my 3D model. It works perfectly well on an AMD GPU, but when testing on an integrated Intel HD graphics card it has a weird issue. I don't have a problem with the rest of my entities, as they are not scaled. The models with the problem are scaled, since my engine supports different sizes for the platforms. I am using Ogre3D as the rendering engine and GLSL as the shader language.

    Vertex shader:

        #version 120
        varying vec2 UV;

        void main() {
            UV = gl_MultiTexCoord0;
            gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
        }

    Fragment shader:

        #version 120
        varying vec2 UV;
        uniform sampler2D diffuseMap;

        void main(void) {
            gl_FragColor = texture(diffuseMap, UV);
        }

    Screenshot (the error is on the right and left sides; the top and bottom are rendered perfectly well):
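    Two things in these shaders are, strictly speaking, not valid GLSL 1.20, and lenient drivers tend to accept them while stricter ones (often the Intel one) do not: gl_MultiTexCoord0 is a vec4, so assigning it to a vec2 needs an explicit .xy, and #version 120 only provides texture2D(), not texture(). A corrected sketch follows; whether this is actually what produces the artifact on the scaled models is a guess, but it is cheap to rule out, and the Ogre log on the Intel machine should show any compile warnings.

        // vertex shader
        #version 120
        varying vec2 UV;

        void main() {
            UV = gl_MultiTexCoord0.xy;                              // take only the st components
            gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
        }

        // fragment shader
        #version 120
        varying vec2 UV;
        uniform sampler2D diffuseMap;

        void main(void) {
            gl_FragColor = texture2D(diffuseMap, UV);               // texture2D() in GLSL 1.20
        }

    If the shaders compile cleanly and the artifact persists only on scaled models, texture addressing (wrap vs. clamp at the edges) or mipmap/filtering differences between the two drivers would be the next things to check.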

    Read the article

  • What animation technique is used in 'Don't Starve'?

    - by Bugster
    While playing a few games in my personal time off from development, I stumbled across a 2D/3D survival game. The game (Don't Starve) was apparently made with SDL and GLUT, but what really amazed me were the animations. They are extremely smooth and fluent. There is no distortion while animating, whereas in hand-drawn frame-by-frame animation pixels typically get removed, the motion is jaggy, and it simply isn't as smooth. That got me thinking about how they managed to achieve such animation quality. Were the frames really hand-drawn (if so, it must have taken a very talented artist), is it bone animation, or are they using another technique?

    Read the article

  • Flash audio problem; redirecting to Pulse?

    - by Makaze
    I am having a problem where the sound from Flash objects other than YouTube videos has a strange distortion: a high-pitched pulsing sound every second or less that resembles skips on a record. I have no idea what the problem is or how to fix it. Does anyone have an idea what I could do? I am running Xubuntu 11.10. I think I might have redirected everything to PulseAudio using a config file, but I cannot seem to find it anywhere; it had the line 'type pulse' in it. If anyone knows what I am talking about, how to find that file, or whether it is even relevant, I would greatly appreciate it.
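    The file being described sounds like an ALSA configuration that routes everything through PulseAudio; it normally lives at ~/.asoundrc (per user) or /etc/asound.conf (system-wide). The usual contents look roughly like this, so if either file exists on this system and holds these lines, that is the redirect in question:

        # route ALSA clients (including Flash) to PulseAudio
        pcm.!default {
            type pulse
        }
        ctl.!default {
            type pulse
        }

    Renaming the file temporarily and restarting the browser is a low-risk way to test whether the redirect has anything to do with the crackling.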

    Read the article

  • Are there any 3rd Party libraries for image processing in C#?

    - by Gaax
    Just wondering if there's anything out there to help make my project a lot easier to deal with. I'm working on a program that uses the Wiimote and an infrared pen (mapped to the mouse) to manipulate images in real time, and I'd much rather not spend all my time figuring out how to make the program resize and rotate images efficiently with the least amount of distortion... I haven't really found anything when searching. Does anybody know of any libraries that can help me?
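    As a hedged suggestion rather than a full answer to the library question: the built-in GDI+ classes in System.Drawing can already do reasonably high-quality resizing and rotation; whether they are fast enough for real-time Wiimote manipulation is something to measure. A small sketch:

        using System.Drawing;
        using System.Drawing.Drawing2D;

        static class ImageOps
        {
            public static Bitmap ResizeHighQuality(Image src, int width, int height)
            {
                var dst = new Bitmap(width, height);
                using (var g = Graphics.FromImage(dst))
                {
                    // bicubic resampling keeps scaling artifacts low
                    g.InterpolationMode = InterpolationMode.HighQualityBicubic;
                    g.SmoothingMode = SmoothingMode.HighQuality;
                    g.PixelOffsetMode = PixelOffsetMode.HighQuality;
                    g.DrawImage(src, 0, 0, width, height);
                }
                return dst;
            }
        }

    Rotation works the same way by applying g.RotateTransform(angle) before DrawImage. For heavier real-time processing, libraries in the AForge.NET image-processing space are a common next step, but treat that as a pointer to verify rather than a definitive recommendation.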

    Read the article

  • How to distort the desktop screen

    - by HaifengWang
    Hi friends, I want to change the shape of the desktop screen, so that what is displayed on the desktop is distorted at the same time, and the user can still operate the PC with the mouse on the distorted desktop (run applications, open "My Computer" and so on). I think I must first get the projection matrix of the screen coordinates, then transform that matrix and map the desktop buffer image onto the distorted mesh. Are there any interfaces in OpenGL or DirectX that can modify the shape of the desktop screen? Would you please give me some tips on this? Thank you very much in advance. Please refer to the picture at http://oi53.tinypic.com/bhewdx.jpg BR, Haifeng

    Addition 1: I'm sorry, maybe I didn't express clearly what I want to implement. What I want is to modify the shape of the screen itself, so that the shapes of all applications running on Windows are distorted at the same time. For example, the window of "My Computer" would be distorted along with the distortion of the desktop screen, and we could still operate the PC with the mouse on the distorted desktop (click a shortcut to run a program).

    Addition 2: The projection matrix is just my assumption; there isn't really a desktop projection matrix by which the desktop surface is projected to the screen. What I want to implement is to change the shape of the desktop, much like mapping the desktop onto a 3D mesh, while the user can still operate the OS on the distorted desktop (click a shortcut to run a program, open IE to surf the internet).

    Addition 3: The shapes of all programs running on the OS change with the distortion of the screen, in real time, and the user can still operate the OS on the distorted screen as usual. Maybe we can intercept or override the GPU itself to implement the effect. I'm investigating GDI; I think I can find some clues there. The first step is to find out how the desktop is shown on the screen.

    Read the article

  • What decent audio recording interfaces are well supported in Windows 7 64bit?

    - by labradort
    I currently have an Audigy 2 ZS Platinum. It lets me plug in a 1/4" jack line from a bass guitar and play along with pre-recorded piano music. This worked fine under Windows XP. I am moving to Windows 7 64-bit (dual boot for now), and Creative may not develop fully working drivers for this card. Looking around, I don't see Windows 7 support mentioned on the product websites of E-MU, Roland, M-Audio, etc. Even at Creative, the list of available Windows 7 drivers is deceptive, as they do not adequately support recording (latency, distortion). My local music store shrugs and says to stay with Windows XP. In some cases the Vista drivers will work in Windows 7, so I need real-world feedback on this. I should also mention I'm not impressed with the available USB interfaces; they have too low a signal-to-noise ratio for my purposes. That leaves PCI, or possibly FireWire devices (I've never tried one).

    Read the article

  • sound clipping in windows 7 64bit

    - by Sonic Soul
    Something is up with the 64-bit version of Windows 7. I have two (good prosumer) sound cards which I've used with other versions of Windows 7 without problems, but now I'm getting severe clipping (noise, sound with little gaps surrounded by static). The cards I am using are an Echo Mia and a Native Instruments Audio Kontrol 1. The Audio Kontrol 1 is an external card; it would work OK for a few hours after rebooting and then go back into clipping mode, to the point where YouTube videos would not play because the sound processing kept stopping for extended periods of time. The Echo Mia is performing better, but there is still some clipping and distortion. The machine is newly built, with an i7 920 64-bit CPU, 6 GB of RAM, and an outdated Nvidia video card (GeForce 7950 GX2).

    Read the article

  • Audio problems with Asus notebook with Bluetooth and USB devices in Windows 7

    - by QuickSilver
    My notebook is an Asus P53E (Core i5) with Windows 7 installed. Audio from the PC speakers and headphones is distorted when I turn on Bluetooth or plug in certain USB devices. I believe this is a software issue. I tried updating my audio drivers, but nothing helps. Any help will be appreciated.

    Update: After a few days of digging I found that this problem is caused by the Asus sound-enhancement application SonicFocus. The distortion stops when SonicFocus is turned off. Can anyone help me with a solution other than turning off SonicFocus?

    Read the article

  • Improving sound quality with remote ESD server

    - by cuu508
    Hi, I'm investigating low-budget ways to get audio from my PC (Ubuntu) to my hi-fi without wires. I'm currently testing a setup where an Asus WL-500gP wireless router runs the ESD daemon and has a USB sound card attached, which is then plugged into the hi-fi. I'm testing playback on the PC with mpg123-esd and Spotify under Wine. The sound is there and the latency is unexpectedly low, but I also hear occasional clicks and some distortion from time to time. I suppose that's because of the low latency and the wireless streaming of uncompressed audio: any packet drop, the CPU temporarily being busy, etc. will cause clicks in the output. Is there a way around this problem, perhaps by increasing the latency / buffer size somehow? Streaming over the Shoutcast protocol seems to be a way out, but I have a feeling that would be a complex and brittle setup.

    Read the article

  • How can I find the right UV coordinates for interpolating a bezier curve?

    - by ssb
    I'll let this picture do the talking. I'm trying to create a mesh from a bezier curve and then add a texture to it. The problem here is that the interpolation points along the curve do not increase linearly, so points farther from the control point (near the endpoints) stretch and those in the bend contract, causing the texture to be uneven across the curve, which can be problematic when using a pattern like stripes on a road. How can I determine how far along the curve the vertices actually are so I can give a proper UV coordinate? EDIT: Allow me to clarify that I'm not talking about the trapezoidal distortion of the roads. That I know is normal and I'm not concerned about. I've updated the image to show more clearly where my concerns are. Interpolating over the curve I get 10 segments, but each of these 10 segments is not spaced at an equal point along the curve, so I have to account for this in assigning UV data to vertices or else the road texture will stretch/shrink depending on how far apart vertices are at that particular part of the curve.
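    One standard way to get "how far along the curve" each vertex is, without solving the Bezier arc-length integral, is cumulative chord length: sample the curve into the segments you already have, sum the distances between consecutive samples, and use each running total divided by the total length as that sample's U coordinate. A minimal sketch, using System.Numerics.Vector3 purely so it is self-contained; with XNA the same code works with Microsoft.Xna.Framework.Vector3:

        using System.Collections.Generic;
        using System.Numerics;

        static class CurveUV
        {
            // Returns a U coordinate in [0, 1] for each sampled curve point,
            // proportional to the distance travelled along the polyline so far.
            public static float[] ArcLengthU(IReadOnlyList<Vector3> samples)
            {
                var u = new float[samples.Count];
                float total = 0f;
                for (int i = 1; i < samples.Count; i++)
                {
                    total += Vector3.Distance(samples[i - 1], samples[i]);
                    u[i] = total;
                }
                if (total > 0f)
                    for (int i = 0; i < u.Length; i++)
                        u[i] /= total;
                return u;
            }
        }

    With 10 segments this keeps the stripes evenly spaced regardless of how the parameter t bunches up in the bend; if you instead want the vertices themselves evenly spaced, pick t values whose cumulative chord lengths are equally spaced (sample the curve finely and invert the length table).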

    Read the article

< Previous Page | 1 2 3 4  | Next Page >