Search Results

Search found 6772 results on 271 pages for 'rob effect'.

Page 4/271 | < Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >

  • how to add water effect to an image

    - by brainydexter
    This is what I am trying to achieve: a given image would occupy, say, 3/4 of the screen height. The remaining 1/4 would be a reflection of it with some waves (a water effect) on it. I'm not sure how to do this, but here's my approach:
    - render the given texture to another texture, the mirror texture (maybe FBOs can help me?)
    - invert the mirror texture (scale it by -1 along Y)
    - render the mirror texture at height = 3/4 of the screen
    - add some sense of noise to it, OR, using a pixel shader and time, set pixel.z = sin(time) to make it wavy
    (Tech: C++/OpenGL/GLSL) Is my approach correct? Is there a better way to do this? Also, can someone please tell me whether using Framebuffer Objects would be the right thing here? Thanks

    Read the article

  • "Adding Esperanto circumflexes (supersigno)" option has no effect

    - by Gardel
    I'm used to using the keyboard layout option to type the accented characters in Esperanto. The option is at System Settings > Keyboard Layout > Options > Adding Esperanto circumflexes (supersigno) > To the corresponding key in a QWERTY keyboard. It worked great, but since I upgraded to Ubuntu 12.04 this option has no effect. I'm using Ubuntu in French with the French (variant) keyboard layout and GNOME Shell. I tested on another computer with Unity and it has the same issue. Is it a known issue? I did not find anything about it…

    Read the article

  • Chameleon effect not working after update

    - by Roshnal
    I've been using Ubuntu 12.04 since it came out and so far haven't had any problems with it. But since I updated it (via Update Manager) yesterday, after every reboot the launcher color defaults to blue for the Dash, workspace and trash icons, and the rest is black. If I set the wallpaper again manually, it changes color as it's supposed to (chameleon). Any idea why this is? It's really annoying to have to change your wallpaper every time you log on just to get the "normal" launcher. EDIT: The chameleon effect works perfectly for notifications. Only the Launcher and Dash are not changing color. Thanks.

    Read the article

  • Can Layer Masks Achieve This Effect

    - by Julian
    If you look at the image below you will see the player surrounded by a dotted yellow box. The dotted yellow box is also part of the player and represents a portion of the player that is masked from both rendering and physics. My question is whether layer masks in Unity can achieve the following effect:
    - In Area 1, the red box/animations of the player are visible and the rigidbody of this shape is affected by all physics.
    - Any portion of the player that enters Area 2 makes the larger yellow box within that area become visible (and affected by physics), and vice versa for any portion of the smaller red box that enters.
    - This persists when entering and leaving either area from any direction.
    Thank you for any help!

    Read the article

  • Managing the Domino Effect (with Tutor Publisher Reports)

    - by [email protected]
    When an organization upgrades its business application or improves a process, it triggers changes that reverberate throughout the organization, like a row of dominoes toppling one after another. A tangible and repeatable way to communicate change is with updated process documentation. But how do organizations get their arms around all the documents that are impacted by an application upgrade or process improvement? A small change in one place will trigger subsequent changes in other areas. A simple domino chain of questions can go like this: What screens have changed? Do the new screens change the process in place? In what procedural documents are the screens referenced? Who uses the screens and must be notified of the changes? What other documents are affected? Will the change affect current company policy? Tutor Publisher compiles focused, easy-to-read impact analysis reports of your process documentation library that answer these tough questions. Tutor reports make it easy to quickly target the information and documents that require updating. In turn, the updated documents are used to communicate the change. The Tutor writing methodology and Publisher reports provide organizations the means to confidently keep documentation in sync with the way the business runs. Start managing the domino effect in your organization. Get a grip on it here!

    Read the article

  • how to make a continuous machine gun sound-effect

    - by Jan
    I am trying to make an entity fire one or more machine guns. For each gun I store the time between shots (1.0 / firing rate) and the time since the last shot. Also, I've loaded ~10 different gun-shot sound effects. Now, for each gun I do the following:

        function update(deltatime):
            timeSinceLastShot += deltatime
            if timeSinceLastShot >= timeBetweenShots + verySmallRandomValue():
                timeSinceLastShot -= timeBetweenShots
                if gunIsFiring:
                    displayMuzzleFlash()
                    spawnBullet()
                    selectRandomSound().play()

    But now I often get a crackling noise (which I assume happens when two or more guns fire at the same time and confuse the sound device). My question is whether A) this is a common problem and there is a well-known solution, maybe to do with the channels or something, or B) I am using a completely wrong approach to the task. I had a look at some sound assets for other games and they used complete bursts with multiple shots. I suppose I could try that, but I would like to have organic little hiccups in the gun-fire (that's what the random value is for) to make the game more gritty and dirty. I am using Panda3D, but I had the exact same problem in PyGame and SDL. [edit] Thanks a lot for the answers so far! One more problem with faking it, though: how do I stop the sound? Let's say I have an effect with 5 bangs... *bang* *bang* *bang* *bang* *bang* ...and I magically manage to loop it so that there's no gap or overlap if the player fires more than 5 shots. Now, what do I do if the player stops firing halfway through the third bang? How do I know how long to keep playing the sample so that the third bang is completed and I can start playing the rumbling echo of the last shot? Of course I can look up the shot/pause timing of that sound sample and code accordingly, but it feels extremely hacky.

    Read the article

  • Effect of using dedicated NVidia card instead of Intel HD4000

    - by Sman789
    Short version: Can someone please advise me of the effect of adding a dedicated NVIDIA GeForce GT 630M card to an Ubuntu laptop, in terms of power consumption and performance gains/losses when doing general productivity tasks and booting up? Also, how good are the closed-source, open-source, and Bumblebee drivers for these newer cards compared to support for the Intel HD4000? Long version/background, if any info here is helpful: I'm thinking of ordering a laptop from PC Specialist (a UK company who actually sell machines without Windows pre-installed) with the following specifications:
    - Genesis IV: 15.6" AUO Matte 95% Gamut LED Widescreen (1920x1080)
    - Intel® Core™ i5 Dual Core Mobile Processor i5-3210M (2.50GHz) 3MB
    - 4GB SAMSUNG 1600MHz SODIMM DDR3 MEMORY (1 x 4GB)
    - 120GB INTEL® 520 SERIES SSD, SATA 6 Gb/s (upto 550MB/sR | 520MB/sW)
    - Intel 2 Channel High Definition Audio + MIC/Headphone Jack
    - GIGABIT LAN & WIRELESS INTEL® N135 802.11N (150Mbps) + BLUETOOTH
    Now, as I want this laptop mainly for work and not for games, I would be more than content with the HD4000 integrated chip which comes with the processor. However, for compatibility reasons, I am not able to get the specs I want unless I choose an NVIDIA GeForce GT 630M 1GB graphics card, which I don't have a great deal of use for. I'm willing to buy it, however, as it's still cheaper than any other laptop with the specs I want. However, I know that Linux power management isn't fantastic with open-source graphics drivers, and I don't know much about Bumblebee. Basically, whilst I'm happy to 'tolerate' the card being there, I don't want to experience any negative effects on the rest of my system (battery, performance etc.), and if there are likely to be any, I might reconsider my purchase. So if anyone can advise me on the effects, I would be very grateful, since I doubt I can just turn the card off. Thank you for any assistance :)

    Read the article

  • How can I achieve a 3D-like effect with spritebatch's rotation and scale parameters

    - by Alic44
    I'm working on a 2d game with a top-down perspective similar to Secret of Mana and the 2D Final Fantasy games, with one big difference being that it's an action rpg using a 3-dimensional physics engine. I'm trying to draw an aimer graphic (basically an arrow) at my characters' feet when they're aiming a ranged weapon. At first I just converted the character's aim vector to radians and passed that into spritebatch, but there was a problem. The position of every object in my world is scaled for perspective when it's drawn to the screen. So if the physics engine coordinates are (1, 0, 1), the screen coords are actually (1, .707) -- the Y and Z axis are scaled by a perspective factor of .707 and then added together to get the screen coordinates. This meant that the direction the aimer graphic pointed (thanks to its rotation value passed into spritebatch) didn't match up with the direction the projectile actually traveled over time. Things looked fine when the characters fired left, right, up, or down, but if you fired on a diagonal the perspective of the physics engine didn't match with the simplistic way I was converting the character's aim direction to a screen rotation. Ok, fast forward to now: I've got the aimer's rotation matched up with the path the projectile will actually take, which I'm doing by decomposing a transform matrix which I build from two rotation matrices (one to represent the aimer's rotation, and one to represent the camera's 45 degree rotation on the x axis). My question is, is there a way to get not just rotation from a series of matrix transformations, but to also get a Vector2 scale which would give the aimer the appearance of being a 3d object, being warped by perspective? Orthographic perspective is what I'm going for, I think. So, the aimer arrow would get longer when facing sideways, and shorter when facing north and south because of the perspective. At the same time, it would get wider when facing north and south, and less wide when facing right or left. I'd like to avoid actually drawing the aimer texture in 3d because I'm still using spritebatch's layerdepth parameter at this point in my project, and I don't want to have to figure out how to draw a 3d object within the depth sorting system I already have. I can provide code and more details if this is too vague as a question... This is my first post on stack exchange. Thanks a lot for reading! Note: (I think) I realize it can't be a technically correct 3D perspective, because the spritebatch's vector2 scaling argument doesn't allow for an object to be skewed the way it actually should be. What I'm really interested in is, is there a good way to fake the effect, or should I just drop it and not scale at all? Edit to clarify without the help of a picture (apparently I can't post them yet): I want the aimer arrow to look like it has been painted on the ground at the character's feet, so it should appear to be drawn on the ground plane (in my case the XZ plane) which should be tilted at a 45 degree angle (around the X axis) from the viewing perspective. Alex
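
    A minimal sketch of one way to fake the foreshortening described above, assuming an XNA/MonoGame SpriteBatch and the 0.707 perspective factor mentioned in the question; the names (PerspectiveFactor, GetAimerRotationAndScale) are illustrative, not from the original post. The idea is to project the aim direction and its perpendicular onto the screen, then use their lengths as the sprite's length and width scale:

        // A sketch, not the poster's actual code: derive a SpriteBatch rotation and a
        // Vector2 scale for an aimer lying on the XZ ground plane, viewed with the
        // Y/Z axes compressed by a perspective factor (0.707 in the question).
        using System;
        using Microsoft.Xna.Framework;

        static class AimerProjection
        {
            const float PerspectiveFactor = 0.707f; // assumed from the question's screen mapping

            // aimAngle is the aim direction on the ground plane, in radians.
            public static void GetAimerRotationAndScale(
                float aimAngle, out float rotation, out Vector2 scale)
            {
                // World-space aim direction on the XZ plane and its perpendicular.
                Vector2 dir  = new Vector2((float)Math.Cos(aimAngle), (float)Math.Sin(aimAngle));
                Vector2 perp = new Vector2(-dir.Y, dir.X);

                // Project to screen: X passes through unchanged, the depth component is squashed.
                Vector2 dirScreen  = new Vector2(dir.X,  dir.Y  * PerspectiveFactor);
                Vector2 perpScreen = new Vector2(perp.X, perp.Y * PerspectiveFactor);

                rotation = (float)Math.Atan2(dirScreen.Y, dirScreen.X); // matches the projectile's screen path
                scale    = new Vector2(dirScreen.Length(),   // sprite length (along the aim)
                                       perpScreen.Length()); // sprite width (across the aim)
                // Facing east/west: length ~1.0, width ~0.707; facing north/south: length ~0.707, width ~1.0.
            }
        }

    The resulting rotation and scale could then be passed to the SpriteBatch.Draw overload that takes a float rotation and a Vector2 scale. As noted in the question, this is not a true 3D skew, only a plausible approximation of the foreshortening.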

    Read the article

  • Asp.Net MVC - Rob Conery's LazyList - Count() or Count

    - by Adam
    I'm trying to create an HTML table of order logs for customers. A customer is defined as (I've left out a lot of stuff):

        public class Customer
        {
            public LazyList<Order> Orders { get; set; }
        }

    The LazyList is set when fetching a Customer:

        public Customer GetCustomer(int custID)
        {
            Customer c = ...
            c.Orders = new LazyList<Order>(_repository.GetOrders().ByOrderID(custID));
            return c;
        }

    The order log model:

        public class OrderLogTableModel
        {
            public OrderLogTableModel(LazyList<Order> orders)
            {
                Orders = orders;
                Page = 0;
                PageSize = 25;
            }

            public LazyList<Order> Orders { get; set; }
            public int Page { get; set; }
            public int PageSize { get; set; }
        }

    and I pass in customer.Orders after loading a customer. Now the log I'm trying to make looks something like:

        <table>
          <tbody>
            <%
              int rowCount = ViewData.Model.Orders.Count();
              int innerRows = rowCount - (ViewData.Model.Page * ViewData.Model.PageSize);
              foreach (Order order in ViewData.Model.Orders.OrderByDescending(x => x.StartDateTime)
                                              .Take(innerRows).OrderBy(x => x.StartDateTime)
                                              .Take(ViewData.Model.PageSize))
              {
            %>
            <tr>
              <td>
                <%= order.ID %>
              </td>
            </tr>
            <% } %>
          </tbody>
        </table>

    Which works fine. But the problem is that evaluating ViewData.Model.Orders.Count() literally takes about 10 minutes. I've tried the ViewData.Model.Orders.Count property instead, and the results are the same - it takes forever. I've also tried calling _repository.GetOrders().ByCustomerID(custID).Count() directly from the view and that executes perfectly within a few ms. Can anybody see any reason why using the LazyList to get a simple count would take so long? It seems like it's trying to iterate through the list when getting a simple count.
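
    A possible workaround, sketched under the assumption that the repository's GetOrders() chain returns an IQueryable<Order>: counting on the IQueryable lets the database run a COUNT query, whereas calling Count() on the materialized LazyList pulls every row. The extra constructor argument on OrderLogTableModel below is a hypothetical addition for illustration, not part of the original model:

        // Sketch only: count against the IQueryable source so the database does the
        // counting, instead of letting LazyList enumerate every order in memory.
        IQueryable<Order> orderQuery = _repository.GetOrders().ByCustomerID(custID);

        int rowCount = orderQuery.Count();   // runs as a COUNT query, returns in milliseconds

        // Hypothetical overload that carries the precomputed count to the view.
        var model = new OrderLogTableModel(new LazyList<Order>(orderQuery), rowCount);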

    Read the article

  • Where do we put "asking the world" code when we separate computation from side effects?

    - by Alexey
    According to the Command-Query Separation principle, as well as the Thinking in Data and DDD with Clojure presentations, one should separate side effects (modifying the world) from computations and decisions, so that both parts are easier to understand and test. This leaves an unanswered question: where, relative to that boundary, should we put "asking the world"? On the one hand, requesting data from external systems (like databases, external services' APIs, etc.) is not referentially transparent and thus should not sit together with pure computational and decision-making code. On the other hand, it's problematic, or maybe impossible, to tease those requests apart from the computational part and pass the data in as an argument, because we may not know in advance which data we will need to request.
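
    One common arrangement (sometimes called "functional core, imperative shell") keeps the asking in a thin impure shell and lets the pure core tell the shell what else it needs. A minimal sketch, in C# purely for illustration; all the types and names here are invented for the example and are not taken from the talks cited above:

        // Sketch: the pure core never talks to the outside world. When it cannot decide
        // yet, it returns a request describing the data it still needs; the impure shell
        // fetches that data and calls the core again.
        using System;
        using System.Collections.Generic;

        // What the core can answer with: a final decision, or a request for more data.
        abstract class StepResult { }
        sealed class Decision : StepResult { public string Action; }
        sealed class NeedData : StepResult { public string Query; }

        static class PureCore
        {
            // Pure and easily testable: the same inputs always give the same result.
            public static StepResult Decide(IReadOnlyDictionary<string, string> known)
            {
                if (!known.ContainsKey("customer"))
                    return new NeedData { Query = "customer" };
                if (!known.ContainsKey("orders"))
                    return new NeedData { Query = "orders" };
                return new Decision { Action = "ship" };
            }
        }

        static class ImperativeShell
        {
            // All side effects live here: database calls, external APIs, logging.
            public static void Run(Func<string, string> fetchFromWorld)
            {
                var known = new Dictionary<string, string>();
                while (true)
                {
                    switch (PureCore.Decide(known))
                    {
                        case Decision d:
                            Console.WriteLine($"Executing: {d.Action}");    // side effect
                            return;
                        case NeedData need:
                            known[need.Query] = fetchFromWorld(need.Query); // side effect
                            break;
                    }
                }
            }
        }

    The point of the shape is that the core does not need to know in advance which data will be required: it asks for it one request at a time, and only the loop in the shell touches the world.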

    Read the article

  • Hardware wireless switch has no effect after suspend and 13.10 upgrade

    - by blaineh
    This seems to be a fairly chronic problem, as shown by the following questions:
    - How do I fix a "Wireless is disabled by hardware switch" error?
    - Wireless disabled by hardware switch
    - "Wireless disabled by hardware switch" after suspend and other hardware buttons ineffective - how can I solve this?
    but no good solutions have been found! Wireless works fine after a reboot, but after a suspend the hardware switch (for my laptop this is f12) has no effect on the wireless; it is just permanently off, and a red LED shows that it is. My rfkill list all reads:

        0: phy0: Wireless LAN
                Soft blocked: no
                Hard blocked: yes
        1: hp-wifi: Wireless LAN
                Soft blocked: no
                Hard blocked: yes

    Any combination of rfkill <un>block wifi doesn't work, although one time first blocking then unblocking actually turned it on again. sudo lshw -C network reads:

        *-network DISABLED
             description: Wireless interface
             product: AR9285 Wireless Network Adapter (PCI-Express)
             vendor: Qualcomm Atheros
             physical id: 0
             bus info: pci@0000:02:00.0
             logical name: wlan0
             version: 01
             serial: 78:e4:00:65:2e:3f
             width: 64 bits
             clock: 33MHz
             capabilities: pm msi pciexpress bus_master cap_list ethernet physical wireless
             configuration: broadcast=yes driver=ath9k driverversion=3.11.0-12-generic firmware=N/A ip=155.99.215.79 latency=0 link=yes multicast=yes wireless=IEEE 802.11bgn
             resources: irq:17 memory:90100000-9010ffff
        *-network DISABLED
             description: Ethernet interface
             product: RTL8101E/RTL8102E PCI Express Fast Ethernet controller
             vendor: Realtek Semiconductor Co., Ltd.
             physical id: 0
             bus info: pci@0000:03:00.0
             logical name: eth0
             version: 02
             serial: c8:0a:a9:89:b4:30
             size: 10Mbit/s
             capacity: 100Mbit/s
             width: 64 bits
             clock: 33MHz
             capabilities: pm msi pciexpress msix vpd bus_master cap_list rom ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd autonegotiation
             configuration: autonegotiation=on broadcast=yes driver=r8169 driverversion=2.3LK-NAPI duplex=half latency=0 link=no multicast=yes port=MII speed=10Mbit/s
             resources: irq:42 ioport:2000(size=256) memory:90010000-90010fff memory:90000000-9000ffff memory:90020000-9002ffff

    Also, adding a /etc/pm/sleep.d/brcm.sh file as recommended here simply prevents the laptop from suspending at all, which of course is no good. This question has an answer urging me to install the original driver, but it wasn't an "accepted answer" so I'd rather not take a chance on it. Also, I'll admit I'm a bit lost on that and would like help doing so with the specific information I've given. xev shows that no internal event is triggered for my wireless switch (f12), but other function keys also acting as hardware switches work fine. I would be happy to provide more information, so long as you're willing to help me find it for you! This is a very annoying bug. I have a Compaq Presario CQ62.
    Edit: I just tried to reload the BIOS defaults (or something) as shown by this video. It didn't work.
    Edit: I tried the contents of this answer, and it didn't work.
    Edit: I made a pastebin of dmesg. I couldn't even begin to understand the contents.
    Edit: Output of lspci | grep Network:

        02:00.0 Network controller: Qualcomm Atheros AR9285 Wireless Network Adapter (PCI-Express) (rev 01)

    Read the article

  • Jquery - effect + autohide

    - by lidermin
    Hello, I'm using jQuery to animate my web site a bit, but I'm having a little issue with some behaviour. I have a div which suddenly appears from the top of the page and shakes:

        $(minipopup).animate({ marginTop: '+=' + (240) + 'px' }, 1000);
        $(minipopup).effect("shake");

    This mini popup has an X for closing it; otherwise, it will auto-close after a few seconds:

        setTimeout(function() { $('#minipopup').effect("explode"); }, 10000);

        $('#closePopup').click(function() {
            $('#minipopup').effect("explode");
        });

    Everything works, except that if the user clicks the CLOSE button, he sees the explode effect and the popup effectively disappears, but after the 10 seconds pass (the delay I defined in the setTimeout), the user sees the popup explosion again (just the effect, because the popup is no longer there visually). How can I avoid that "ghost" explosion if the user has already closed the popup manually? Thanks in advance.

    Read the article

  • How to create projection/view matrix for hole in the monitor effect

    - by Mr Bell
    Let's say I have my XNA app window sized at 640 x 480 pixels. Now let's say I have a cube model with its polys facing inward to make a room. This cube is sized 640 units wide by 480 units high by 480 units deep. Let's say the camera is somewhere in front of the box, looking at it. How can I set up the view and projection matrices such that the front edge of the box lines up exactly with the edges of the application window? It seems like this should probably involve the Matrix.CreatePerspectiveOffCenter method, but I don't fully understand how the parameters translate onto the screen. For reference, the end result will be something like Johnny Lee's Wii head tracking demo: http://www.youtube.com/watch?v=Jd3-eiid-Uw&feature=player_embedded P.S. I realize that his source code is available, but I am afraid I haven't been able to make heads or tails of it.
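
    A minimal sketch of the usual off-axis ("hole in the monitor") setup, assuming XNA's right-handed convention, the front face of the room lying in the plane z = 0 and spanning x in [-320, 320], y in [-240, 240], and the camera at cameraPos with cameraPos.Z > 0. The class and method names, and the near-plane distance, are illustrative assumptions, not taken from the demo's source:

        // Sketch: build view/projection so the near "window" is exactly the front
        // face of the 640 x 480 x 480 room, wherever the eye happens to be.
        using Microsoft.Xna.Framework;

        static class HoleInMonitorCamera
        {
            public static void Build(Vector3 cameraPos, float farPlane,
                                     out Matrix view, out Matrix projection)
            {
                const float halfW = 320f, halfH = 240f;  // half extents of the window / front face
                const float near = 0.05f;                // near-plane distance (assumed)

                // Look straight ahead, perpendicular to the screen plane; never "at" the box
                // centre, or the frustum tilts and the edges no longer line up.
                view = Matrix.CreateLookAt(cameraPos, cameraPos + Vector3.Forward, Vector3.Up);

                // Distance from the eye to the screen plane (z = 0).
                float d = cameraPos.Z;

                // Window edges measured relative to the eye, scaled back to the near plane.
                float left   = (-halfW - cameraPos.X) * near / d;
                float right  = ( halfW - cameraPos.X) * near / d;
                float bottom = (-halfH - cameraPos.Y) * near / d;
                float top    = ( halfH - cameraPos.Y) * near / d;

                projection = Matrix.CreatePerspectiveOffCenter(left, right, bottom, top, near, farPlane);
            }
        }

    The key design point is that the projection window is pinned to the box's front face rather than centred on the camera, which is what CreatePerspectiveOffCenter's asymmetric left/right/bottom/top parameters are for.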

    Read the article

  • Setting user's group and umask has no effect

    - by Andrew Vit
    I'm trying to allow my "deploy" user to have access to files created by www-data: I added "deploy" to the www-data group. I set umask to 002. When I run the following commands, I'm not seeing the result I expect:

        deploy@ubuntu-lucid-32-generic:/var/www$ groups
        www-data adm dialout cdrom plugdev lpadmin sambashare admin deploy sysadmin
        deploy@ubuntu-lucid-32-generic:/var/www$ newgrp www-data
        deploy@ubuntu-lucid-32-generic:/var/www$ umask
        0002
        deploy@ubuntu-lucid-32-generic:/var/www$ mkdir test
        deploy@ubuntu-lucid-32-generic:/var/www$ ls -la test
        total 0
        drwxr-xr-x 1 deploy deploy  68 Nov 7 20:37 .
        drwxr-xr-x 1 deploy deploy 476 Nov 7 20:37 ..

    I see that:
    - The folder doesn't belong to the www-data group.
    - The folder permissions don't have group-write (775).
    Note that the /var/www directory is owned by the deploy user:

        drwxr-xr-x 1 deploy deploy 510 Nov 7 20:45 .

    How can I give www-data selective access to directories? Or, how can I share the /var/www directory with my deploy user? I don't care who owns it, as long as I can write to it, and so can www-data. (Ideally I would set up a directory with SGID access for www-data.)

    Read the article

  • Collision detection - Smooth wall sliding, no bounce effect

    - by Joey
    I'm working on a basic collision detection system that provides point - OBB collision detection. I have around 200 cubes in my environment and I check (for now) each of them in turn and see if it collides. If it does I return the colliding face's normal, save the old player position and do some trigonometry to return a new player position for my wall sliding. edit I'll define my meaning of wall sliding: if a player walks into a vertical wall with a slight horizontal rotation to the left or the right and keeps walking forward into the wall, the player should slide a little to the right/left while continually walking towards the wall, until he leaves the wall. Thus, sliding along the wall. Everything works fine, with multiple objects as well, but I still have one problem I can't seem to figure out: smooth wall sliding. In my current implementation, sliding along the walls makes my player bounce like a madman (especially noticeable with gravity on and moving forward). I have a velocity/direction vector, a normal vector from the collided plane, and an old and new player position. First I negate the normal vector and get my new velocity vector by subtracting the inverted normal from my direction vector (which is the vector to slide along the wall), then I add this vector to my new player position and recalculate the direction vector (in case I have multiple collisions). I know I am missing some step but I can't seem to figure it out. Here is my code for the collision detection (run every frame):

        Vector direction;
        Vector newPos(camera.GetOriginX(), camera.GetOriginY(), camera.GetOriginZ());
        direction = newPos - oldPos; // Direction vector

        // Check for collision with new position
        for(int i = 0; i < NUM_OBJECTS; i++)
        {
            Vector normal = objects[i].CheckCollision(newPos.x, newPos.y, newPos.z,
                                                      direction.x, direction.y, direction.z);
            if(normal != Vector::NullVector())
            {
                // Get inverse normal (direction STRAIGHT INTO wall)
                Vector invNormal = normal.Negative();
                // We know INTO wall, and DIRECTION to wall. Subtract these and you get the slide WALL direction
                Vector wallDir = direction - invNormal;
                newPos = oldPos + wallDir;
                direction = newPos - oldPos;
            }
        }

    Any help would be greatly appreciated! FIX I eventually got things up and running how they should, thanks to Krazy. I'll post the updated code listing in case someone else comes upon this problem!

        for(int i = 0; i < NUM_OBJECTS; i++)
        {
            Vector normal = objects[i].CheckCollision(newPos.x, newPos.y, newPos.z,
                                                      direction.x, direction.y, direction.z);
            if(normal != Vector::NullVector())
            {
                Vector invNormal = normal.Negative();
                // Change normal to direction's length and normal's axis
                invNormal = invNormal * (direction * normal).Length();
                Vector wallDir = direction - invNormal;
                newPos = oldPos + wallDir;
                direction = newPos - oldPos;
            }
        }

    Read the article

  • Disconnect have no effect using rdesktop

    - by Hongxu Chen
    So I'm using rdesktop on my laptop to remote into my PC in the lab, which runs Windows 7. Everything went well until I recently upgraded the Lubuntu install on the laptop (or maybe it has nothing to do with the upgrade at all; I don't know). rdesktop fails to disconnect when I disconnect from the Windows Start menu. This does not mean that I cannot return to Linux; I actually get back to Lubuntu successfully and the terminal reports that I have disconnected. However, when I re-login to Windows on the PC in the lab (via rdesktop) after rebooting my laptop, it fails. Then when I go over to the PC in the lab, the on-screen message tells me that it is still connected to my Lubuntu machine. So what's the problem? Has anyone had a similar experience? PC: Windows 7, in the lab; laptop: Linux (Lubuntu 12.04).

    Read the article

  • Ogre3d particle effect causing error in iPhone

    - by anu
    1) First I added the Particle folder from the OgreSDK (contains Smoke.particle).
    2) Added Smoke.material, smoke.png and smokecolors.png.
    3) After this I added Plugin=Plugin_ParticleFX to plugins.cfg. Here is the file:

        # Defines plugins to load
        # Define plugin folder
        PluginFolder=./
        # Define plugins
        Plugin=RenderSystem_GL
        Plugin=Plugin_ParticleFX

    4) I added the particle path to resources.cfg (adding the particle file here causes the crash):

        # Resource locations to be added to the 'bootstrap' path
        # This also contains the minimum you need to use the Ogre example framework
        [Bootstrap]
        Zip=media/packs/SdkTrays.zip
        # Resource locations to be added to the default path
        [General]
        FileSystem=media/models
        FileSystem=media/particle
        FileSystem=media/materials/scripts
        FileSystem=media/materials/textures
        FileSystem=media/RTShaderLib
        FileSystem=media/RTShaderLib/materials
        Zip=media/packs/cubemap.zip
        Zip=media/packs/cubemapsJS.zip
        Zip=media/packs/skybox.zip

    6) Finally, with all the settings done, my code is here:

        // create a pivot node
        mPivotNode = OgreFramework::getSingletonPtr()->m_pSceneMgr->getRootSceneNode()->createChildSceneNode();
        // create a child node and attach an ogre head and some smoke to it
        Ogre::SceneNode* headNode = mPivotNode->createChildSceneNode(Ogre::Vector3(100, 0, 0));
        headNode->attachObject(OgreFramework::getSingletonPtr()->m_pSceneMgr->createEntity("Head", "ogrehead.mesh"));
        headNode->attachObject(OgreFramework::getSingletonPtr()->m_pSceneMgr->createParticleSystem("Smoke", "Examples/Smoke"));

    7) When I run this, I get the error below:

        An exception has occurred: OGRE EXCEPTION(2:InvalidParametersException): Cannot find requested emitter type.
        in ParticleSystemManager::_createEmitter at /Users/davidrogers/Documents/Ogre/ogre-v1-7/OgreMain/src/OgreParticleSystemManager.cpp (line 353)

    8) It crashes at:

        - (void)renderOneFrame:(id)sender
        {
            if(!OgreFramework::getSingletonPtr()->isOgreToBeShutDown() &&
               Ogre::Root::getSingletonPtr() && Ogre::Root::getSingleton().isInitialised())
            {
                if(OgreFramework::getSingletonPtr()->m_pRenderWnd->isActive())
                {
                    mStartTime = OgreFramework::getSingletonPtr()->m_pTimer->getMillisecondsCPU(); // getting crash here

    Does anyone know what could be causing this?

    Read the article

  • Tag Cloud with jQuery Effect

    Tag Cloud WebControl built on the Star Field jQuery plug-in...

    Read the article

  • How is this lighting effect done?

    - by Mike
    This is the most beautiful 2d lighting I have ever seen. Does anyone know how he went about doing it? http://www.youtube.com/watch?v=BIQRhOFkvQY http://www.youtube.com/watch?v=tnTYXPuecMs http://www.youtube.com/watch?v=rhC_jVM8IYU http://www.youtube.com/watch?v=_Aw5BdjWqqU Or download it here: http://grantkot.com/PollutedPlanet/publish.htm edit: I am not asking how the particles are simulated; I don't care about the physics.

    Read the article

  • effect and model vertex declaration compatibility

    - by Vodácek
    I have normal model drawing code. When I try to draw a model without UV coordinates I get this exception:

        System.InvalidOperationException: The current vertex declaration does not include all the elements required by the current vertex shader. TextureCoordinate0 is missing.
           at Microsoft.Xna.Framework.Graphics.GraphicsDevice.VerifyCanDraw(Boolean bUserPrimitives, Boolean bIndexedPrimitives)
           at Microsoft.Xna.Framework.Graphics.GraphicsDevice.DrawIndexedPrimitives(PrimitiveType primitiveType, Int32 baseVertex, Int32 minVertexIndex, Int32 numVertices, Int32 startIndex, Int32 primitiveCount)
           at Microsoft.Xna.Framework.Graphics.ModelMeshPart.Draw()
           at Microsoft.Xna.Framework.Graphics.ModelMesh.Draw()
           ...

    I know what causes the exception, but is it possible to avoid it? Is it possible to check a model against the current shader for vertex declaration compatibility before drawing it?
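
    A minimal sketch of one such pre-draw check, assuming XNA 4.0 and a shader that only needs the first texture coordinate channel; the class and method names are illustrative. It inspects each ModelMeshPart's vertex declaration and looks for a TextureCoordinate element before any Draw call is made:

        // Sketch: returns true if every mesh part in the model declares at least one
        // TextureCoordinate element with usage index 0, so a shader that reads
        // TextureCoordinate0 should be safe to use with it.
        using System.Linq;
        using Microsoft.Xna.Framework.Graphics;

        static class ModelCompatibility
        {
            public static bool HasTextureCoordinates(Model model)
            {
                foreach (ModelMesh mesh in model.Meshes)
                {
                    foreach (ModelMeshPart part in mesh.MeshParts)
                    {
                        VertexElement[] elements =
                            part.VertexBuffer.VertexDeclaration.GetVertexElements();

                        bool hasUv = elements.Any(e =>
                            e.VertexElementUsage == VertexElementUsage.TextureCoordinate &&
                            e.UsageIndex == 0);

                        if (!hasUv)
                            return false;   // drawing this part with a texturing shader would throw
                    }
                }
                return true;
            }
        }

    The result could be used to pick a non-textured fallback effect for models that fail the check, rather than letting GraphicsDevice throw at draw time.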

    Read the article

  • How can I implement an Iris Wipe effect?

    - by Vandell
    For those who don't know: an iris wipe is a wipe that takes the shape of a growing or shrinking circle. It has been frequently used in animated short subjects, such as those in the Looney Tunes and Merrie Melodies cartoon series, to signify the end of a story. When used in this manner, the iris wipe may be centered around a certain focal point and may be used as a device for a "parting shot" joke, a fourth-wall-breaching wink by a character, or other purposes. Example from flasheff.com. Your answer may or may not include a coding sample; a language-agnostic explanation is considered enough.
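
    A minimal sketch of one way to do it in 2D, written here against XNA/MonoGame's SpriteBatch purely for illustration (the question is engine-agnostic, and every name below is invented): animate a radius from full-screen coverage down to zero, draw a pre-made square texture that is opaque black except for a transparent circle touching its edges, and fill everything outside that square with black rectangles.

        // Sketch: iris wipe by shrinking a "black square with a transparent circular
        // hole" texture over a focal point and blacking out the rest of the screen.
        using Microsoft.Xna.Framework;
        using Microsoft.Xna.Framework.Graphics;

        class IrisWipe
        {
            readonly Texture2D irisMask; // opaque black square with a transparent inscribed circle
            readonly Texture2D pixel;    // 1x1 white texture, tinted black when drawn
            readonly float duration;     // wipe length in seconds
            float elapsed;

            public IrisWipe(Texture2D irisMask, Texture2D pixel, float duration)
            {
                this.irisMask = irisMask;
                this.pixel = pixel;
                this.duration = duration;
            }

            public bool Finished => elapsed >= duration;

            public void Update(GameTime gameTime)
            {
                elapsed += (float)gameTime.ElapsedGameTime.TotalSeconds;
            }

            // Call after the scene has been drawn; focalPoint is where the iris closes.
            public void Draw(SpriteBatch spriteBatch, Rectangle screen, Vector2 focalPoint, float maxRadius)
            {
                float t = MathHelper.Clamp(elapsed / duration, 0f, 1f);
                int radius = (int)(maxRadius * (1f - t));   // circle shrinks over time

                // Square destination for the mask, centred on the focal point.
                var hole = new Rectangle((int)focalPoint.X - radius, (int)focalPoint.Y - radius,
                                         radius * 2, radius * 2);

                // Clamp the band sizes so nothing goes negative when the hole leaves the screen.
                int topH    = System.Math.Max(0, hole.Top - screen.Top);
                int bottomH = System.Math.Max(0, screen.Bottom - hole.Bottom);
                int leftW   = System.Math.Max(0, hole.Left - screen.Left);
                int rightW  = System.Math.Max(0, screen.Right - hole.Right);

                spriteBatch.Begin();
                // Black bands covering everything outside the mask's square.
                spriteBatch.Draw(pixel, new Rectangle(screen.Left, screen.Top, screen.Width, topH), Color.Black);
                spriteBatch.Draw(pixel, new Rectangle(screen.Left, screen.Bottom - bottomH, screen.Width, bottomH), Color.Black);
                spriteBatch.Draw(pixel, new Rectangle(screen.Left, hole.Top, leftW, hole.Height), Color.Black);
                spriteBatch.Draw(pixel, new Rectangle(screen.Right - rightW, hole.Top, rightW, hole.Height), Color.Black);
                // The shrinking circle itself.
                spriteBatch.Draw(irisMask, hole, Color.White);
                spriteBatch.End();
            }
        }

    Pick maxRadius at least as large as the distance from the focal point to the farthest screen corner so the wipe starts fully open; to open an iris instead of closing it, use t rather than (1 - t) for the radius. The same idea also works with a stencil buffer or a circle-cutoff pixel shader instead of a mask texture.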

    Read the article
