Search Results

Search found 751 results on 31 pages for 'quad'.

Page 16/31 | < Previous Page | 12 13 14 15 16 17 18 19 20 21 22 23  | Next Page >

  • OpenGL sprites and point size limitation

    - by Srdan
    I'm developing a simple particle system that should perform well on mobile devices (iOS, Android). My plan was to use the GL_POINT_SPRITE/GL_PROGRAM_POINT_SIZE method because of its efficiency (GL_POINTS are enough), but after some experimenting I ran into trouble: sprite size is limited (usually to 64 pixels). I'm calculating size with the formula gl_PointSize = in_point_size * some_factor / distance_to_camera to make particle sizes proportional to the distance to the camera. But when the camera gets close enough, the size limit kicks in and the whole system starts looking unrealistic. Is there a way to avoid this problem? If not, what's the alternative? I was thinking of manually generating a billboard quad for each particle. Now, I have some questions about that approach.

    I guess the minimum geometry data would be four vertices per particle and an index array to make quads from those vertices (with GL_TRIANGLE_STRIP). Additionally, each vertex needs a color and a texture coordinate. I would put all of that in an interleaved vertex array. But as you can see, there is much redundancy: all vertices of the same particle share the same color value, and the four texture coordinates are identical for every particle. Because of how glDrawArrays/Elements works, I see no way to optimise this. Do you know of a better way to organise per-particle data? Should I use buffers or vertex arrays, or is there no difference, given that I have to update all particles' data every frame anyway?

    About the particle simulation: where should it run, on the CPU or on the vertex processor? Something tells me a mobile CPU would do it faster than its vertex unit (at least today, in 2012 :). So, any advice on how to make a simple and efficient particle system without a particle size limitation, for mobile devices, would be appreciated. (The animation of the camera passing through the particles should look realistic.)
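
    As a sketch of the interleaved layout being discussed (illustrative only; the names are not from the question): indexed GL_TRIANGLES, two triangles per quad, lets every particle share one draw call without the degenerate vertices a single batched strip would need.

        // Illustrative sketch: interleaved per-particle quad vertices plus a
        // static index buffer; names and sizes are assumptions.
        #include <cstdint>
        #include <vector>

        struct ParticleVertex {
            float x, y, z;            // billboarded corner position
            float u, v;               // texture coordinate
            std::uint8_t r, g, b, a;  // color, duplicated across the 4 corners
        };

        // The indices never change, so build them once and keep them in a
        // static element buffer; only the vertex data is re-uploaded per frame.
        std::vector<std::uint16_t> buildIndices(int maxParticles) {
            std::vector<std::uint16_t> idx;
            idx.reserve(maxParticles * 6);
            for (int i = 0; i < maxParticles; ++i) {
                std::uint16_t base = static_cast<std::uint16_t>(i * 4);
                idx.push_back(base + 0);  // first triangle: 0-1-2
                idx.push_back(base + 1);
                idx.push_back(base + 2);
                idx.push_back(base + 2);  // second triangle: 2-1-3
                idx.push_back(base + 1);
                idx.push_back(base + 3);
            }
            return idx;
        }

    The duplication the question worries about (color stored four times per particle, identical texture coordinates) is normally accepted as the price of a single draw call; the redundant bytes are cheaper than extra draw calls or state changes on mobile GPUs.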

    Read the article

  • Mandelbrot set not displaying properly

    - by brainydexter
    I am trying to render the Mandelbrot set using GLSL, and I'm not sure why it's not rendering the correct shape. Does the Mandelbrot calculation require the (x, y) [or (real, imag)] values to be within a certain range? Here is a screenshot. I render a quad as follows:

        float w2 = 6;
        float h2 = 5;
        glBegin(GL_QUADS);
        glVertex3f(-w2, h2, 0.0);
        glVertex3f(-w2, -h2, 0.0);
        glVertex3f(w2, -h2, 0.0);
        glVertex3f(w2, h2, 0.0);
        glEnd();

    My vertex shader:

        varying vec3 Position;

        void main(void) {
            Position = gl_Vertex.xyz;
            gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
        }

    My fragment shader (where all the meat is):

        uniform float MAXITERATIONS;
        varying vec3 Position;

        void main(void) {
            float zoom = 1.0;
            float centerX = 0.0;
            float centerY = 0.0;
            float real = Position.x * zoom + centerX;
            float imag = Position.y * zoom + centerY;
            float r2 = 0.0;
            float iter;
            for (iter = 0.0; iter < MAXITERATIONS && r2 < 4.0; ++iter) {
                float tempreal = real;
                real = (tempreal * tempreal) + (imag * imag);
                imag = 2.0 * real * imag;
                r2 = (real * real) + (imag * imag);
            }
            vec3 color;
            if (r2 < 4.0)
                color = vec3(1.0);
            else
                color = vec3(iter / MAXITERATIONS);
            gl_FragColor = vec4(color, 1.0);
        }
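
    For reference (this is the standard definition, not the asker's code): the Mandelbrot iteration is z ← z² + c, where c comes from the pixel position and z starts at 0. The loop above overwrites real before computing imag, adds imag² instead of subtracting it, and never adds c back in. And yes, the range matters: the whole set lies within |c| ≤ 2, so a ±6 × ±5 quad at zoom 1.0 mostly samples points that escape immediately. A corrected sketch of the fragment shader, where creal/cimag are new names introduced here:

        uniform float MAXITERATIONS;
        varying vec3 Position;

        void main(void) {
            float zoom = 1.0;
            float centerX = 0.0;
            float centerY = 0.0;
            // c comes from the pixel position; z starts at the origin
            float creal = Position.x * zoom + centerX;
            float cimag = Position.y * zoom + centerY;
            float real = 0.0;
            float imag = 0.0;
            float r2 = 0.0;
            float iter;
            for (iter = 0.0; iter < MAXITERATIONS && r2 < 4.0; ++iter) {
                float tempreal = real;
                real = (tempreal * tempreal) - (imag * imag) + creal;  // Re(z^2 + c)
                imag = 2.0 * tempreal * imag + cimag;                  // Im(z^2 + c)
                r2 = (real * real) + (imag * imag);
            }
            vec3 color;
            if (r2 < 4.0)
                color = vec3(1.0);  // never escaped: inside the set
            else
                color = vec3(iter / MAXITERATIONS);
            gl_FragColor = vec4(color, 1.0);
        }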

    Read the article

  • Active Directory Problem

    - by Ankur Dholakiya
    Hello all, I have one Server 2008 machine installed with AD, SQL and IIS. Now I am trying to attach a different HDD to this same server. I am able to install Windows Server 2008 R2 64-bit on it, but when I try to install Active Directory, the setup never completes and keeps processing at the following step: "Configuring Active directory and local host domains ......." If I attach the same HDD to any other PC, the Active Directory setup completes successfully. My server is a Xeon quad core with 8 GB of RAM. Can anyone suggest an appropriate solution for this?

    Read the article

  • AMD Processors and the Windows Phone 8 Emulator

    - by Aj Patel
    I would madly appreciate it if anyone in this community would help me with my question. The background is that I want to develop Windows Phone 8 applications, but neither of my current computers' processors has the Hardware Virtualization and Second Level Address Translation support needed to run the emulator. I have my eyes on an AMD computer, the g7-2243us (I like it because it has a 1600x900 screen resolution). I looked up this link, which shows that this computer's AMD processor (Next Gen AMD Quad-Core A8-4500M Accelerated, 1.9 GHz up to 2.8 GHz, 4 MB L2 cache) supports AMD-V hardware virtualization. So, will this computer be able to run the emulator? Thank you so much for your answers. I'm pretty sure it will run the emulator, but I just want to make sure before spending $400. Thank you all so much.

    Read the article

  • 12.04 LTS boot hangs at "SP5100 TCO timer: mmio address 0xfec000f0 already in use", didn't yesterday

    - by DarkIron112
    Dual-booting Windows 7 and Ubuntu 12.04 LTS. I went to reboot from Windows to Ubuntu and found a few interesting things:

    - My POST screen is covered in blocks of epileptic colors until I hit GRUB. These color blocks don't appear when I use my onboard VGA, so I'll attribute them to the card.
    - GRUB's dimensions are swapped (card vs. onboard, probably), but when using the onboard VGA the GRUB timeout counter works, and when using my card it does not (see "[!!!]" below for more information).
    - Booting into Ubuntu directly causes the error: SP5100 TCO timer: mmio address 0xfec000f0 already in use
    - Booting into recovery mode and then "resuming normal boot" gets me to the desktop, but without the native 1440x900 resolution, and the graphics drivers can't tell what monitor they're looking at (I assume this is because it's not a full graphical boot and, as it says, some drivers won't run?).
    - [!!!] When I reboot after going into recovery mode, the countdown timer works ONCE, puts me back into the default Ubuntu boot, and then does not work again until after another recovery-mode boot.

    Windows 7 boots perfectly, with no epileptic color blocks or driver-detection issues whatsoever. This makes me wonder why the POST screen can't handle my video card anymore. Amidst all the diagnostics, I opened my case and re-seated the video card securely, ensuring it wasn't a loose connection, but this did nothing to help me.

    Hardware: I am running an NVidia GeForce GTX 8800 video card in a PCI slot. I have 4.8 GiB memory and an AMD Athlon II quad-core 640 processor, on an MSI K9N6GM series mobo. Onboard video is an NVidia GeForce MCP61(V/S/P) card.

    Note: I did not have any of these problems yesterday, and I have been using Ubuntu intensively for a week, though it's been working flawlessly for months. I've recently been using it to mod my Android phone; perhaps I messed something up in the file system?

    Read the article

  • Can't double-click files to open them in InDesign (CS5)

    - by Matt
    I cannot open a file unless I open InDesign (the program) first and then do File > Open. If I double-click a file, it starts to open, then just hangs forever. AFTER I close it and look in the directory where the files are saved, I see a (temporary?) "lock" file. Now I can double-click the original file and it opens just fine. However, when I then close InDesign it deletes the lock file and the whole process starts again... I have tried updating the software, uninstalled COMPLETELY and reinstalled, and tried a brand-new Win7 install. These files are all saved on a network drive; the computer is a new quad-core Dell with 12 GB of RAM and a fresh x64 Win7 install on the SSD. This does not happen with other programs.

    Read the article

  • How can I judge the suitability of modern processors for systems with specific CPU requirements?

    - by Iszi
    Inspired by this question: How do I calculate clock speed in multi-core processors? The answers there do a fair job of explaining why a lower-speed multi-core processor won't necessarily perform at the same level as a higher-speed single-core processor. Example: 4 × 2 = 8, but a quad-core 2 GHz processor isn't necessarily as fast as a single-core 8 GHz processor. However, I'm having a hard time putting that information to practical use. In particular, I want to know how to judge whether a given CPU is appropriate for an application with specific requirements. Example scenarios: one application has a minimum CPU requirement of a 2.4 GHz dual-core; another has a minimum requirement of a 1.8 GHz single-core. For either scenario: would a higher-speed processor with fewer cores, or a lower-speed processor with more cores, be equally sufficient? If so, how can we determine the appropriate processor speed required for a given number of cores?
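
    A rough first-order model for this trade-off is Amdahl's law: if a fraction p of an application's work can run in parallel on n cores, the best-case speedup over a single core is

        S(n) = \frac{1}{(1 - p) + p/n}

    Worked example: with p = 0.9, a 2 GHz quad-core behaves at best like 2 GHz × 1/(0.1 + 0.9/4) ≈ 6.2 GHz of single-core speed, not the naive 8 GHz. Since published minimum requirements rarely state p, treating a stated "2.4 GHz dual-core" as a floor on per-core speed is the safer reading.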

    Read the article

  • Accessing multiple local HDDs or RAID with ESXi 4.0

    - by Shawn Anderson
    How do I get additional HDDs recognized and used by ESXi 4.0? When I purchased my system I had two 2 TB HDDs, but when I installed ESXi it only recognized one of them. I'm happy to get whatever number of drives I need (I have a four-bay SATA in my Dell T310). What are some options? RAID? If so, is it supported? I guess I would need hardware RAID instead of software, since ESXi is so small. The VMware forums (where I've lived for the last two days) are a charlie foxtrot of outdated and conflicting info. I want to use my T310, with 32 GB RAM and a 2.8 GHz quad core, to run many lab Windows VMs. I don't need production-level availability, but I do want decent performance, even though it's a lab environment. A huge thanks to Jim B., Zypher, Helvick, and Jeff Hengesbach, who posted answers to my earlier question on why ESXi was so sluggish.

    Read the article

  • Issues connecting to WPA2 with User Authentication on Mavericks?

    - by heinst
    I was on all the builds of the Mavericks beta, and connecting to my university's network was fine. Then I upgraded to the public release and now I can't seem to connect to the Internet. I can connect to other networks, but not my school's. It's a WPA2 network with user authentication. My MacBook is a 2011(?) 2.2 GHz first-gen i7 quad core with 8 GB of RAM. Does anyone else have the same issue? Any tips on how to fix it? Thanks! heinst

    Read the article

  • FreeNAS running on ESXi sometimes gets very sluggish

    - by Luma
    Hello everyone, I have an ESXi server (dual quad core, 8 GB of DDR3 RAM, 6x 1 TB WD Blacks running in RAID 5 on the PERC 6/i controller). I have a 64-bit FreeNAS VM running; on this VM I keep about 200 GB of files that my Windows machines access. Every now and then the throughput of this VM just dies. For example, right now it can't even handle streaming a song, and when I tried to transfer a folder the speed dropped to 10-400 KB/s. Might I add at this point that the ESXi box has dual gigabit network cards plugged into a good solid gigabit switch, and the other Linux and Windows VMs are just fine; I have frequently seen speeds over 90 MB/s. The server still has RAM left over (plenty, actually) and CPU usage is very low (500-1000 MHz). Any ideas what could cause this? Thanks. Luc

    Read the article

  • Computer won't start unless power is removed for ~5 minutes

    - by Paul Tarjan
    I have a fairly standard two-year-old desktop computer (quad-core Intel, single hard drive, decent video card, 300 W power supply) which recently started acting up. I'm not sure what the cause is, so hopefully you can help. Sometimes (once a week-ish) I press the power button and nothing happens. No blinking, no sounds, no nothing. If I remove the power cord (or flip the switch on the power supply), I hear a capacitor discharge. If I leave it in that "no power at all" state for about 5 minutes, I can then put the plug back in and the computer works perfectly. What is the issue? What do you think I have to replace?

    Read the article

  • How do you get past analysis paralysis when working on a new project?

    - by Cape Cod Gunny
    I've been struggling with how to get my project going. I've got an old software package that is in desperate need of a rewrite. I haven't compiled the source code since 2004. It still sells, and it's stable, but it does require "Run this program in compatibility mode for:" on a lot of the newer Windows systems. It's also one of those hard-coded 640 x 480 screen-resolution programs. Yuck! I can't seem to get started with this rewrite. I'm constantly fiddling around with different things. I'll play around with different fluid layouts for a while. Then I start looking at how the main menu should work and look. I quickly find out that there's this thing called "cool bars" and I'll spend hours playing with that. Then I start thinking about stuff like "Oh, I need to make sure that the screen sizes are preserved, so when the application gets relaunched it remembers how the screens were positioned." Which leads to: what happens if they have two monitors? Which leads to: what happens if they have a quad screen? Yikes, it's got to stop. I have always been a slow starter. I think about stuff long and hard up front. This has always plagued me. Once I get my mind made up, then bam... I'm off and running. I'm looking for advice from other one-person software companies that can help someone like me get off to a quicker start.

    Read the article

  • Wireless drops on HP ENVY dv6 with RT3290 wireless, worked without problem prior to upgrading to Ubuntu 13.10, can it be fixed?

    - by Tim
    I have an HP ENVY dv6 Notebook PC with an AMD A10 quad core and RT3290 wireless. Since I upgraded from Ubuntu 13.04 to 13.10, the wireless connects but then drops after a few minutes or longer, whether or not I am running openconnect to get through a VPN. If I attempt to run a remote X client (e.g. a remote xterm), it drops. If I don't run an X client, it disconnects after a while, requiring a reload of the driver and a reconnect. Wireless info:

        sudo lshw -c network
        *-network
           description: Wireless interface
           product: RT3290 Wireless 802.11n 1T/1R PCIe
           vendor: Ralink corp.
           physical id: 0
           bus info: pci@0000:02:00.0
           logical name: wlan0
           version: 00
           serial: 68:94:23:a7:09:cb
           width: 32 bits
           clock: 33MHz
           capabilities: pm msi pciexpress bus_master cap_list ethernet physical wireless
           configuration: broadcast=yes driver=rt2800pci driverversion=3.11.0-12-generic firmware=0.37 ip=192.168.1.115 latency=0 link=yes multicast=yes wireless=IEEE 802.11bgn
           resources: irq:55 memory:f0210000-f021ffff

    I have successfully built and installed the MediaTek driver, with no luck connecting; the system then hangs on reboot and I have to recover/undo the changes to boot successfully.

    Read the article

  • 284 GiB of data, 217.4 GiB of space

    - by Malfist
    I want to reinstall my OS, but I no longer have the hard drive space to make a backup (I have a RAID 1 array, so I haven't done one in a while). In my /home I have 284.8 GiB of data, and I have a spare 250 GB (217.4 GiB) hard drive that I've been using for backups. What type of compression algorithm (if any) is capable of this compression ratio? I don't care about the time; I have a quad core, though, so something that utilizes all 4 cores would be great. I have tried 7zip with no success: it ran on one core for two days and failed for lack of space. Any ideas?
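
    For the multi-core part, the usual answer is a parallel compressor in front of tar; a minimal sketch, assuming the pigz (parallel gzip) package is installed and that the target path is a placeholder:

        # Stream /home through gzip-format compression on 4 cores.
        tar cf - /home | pigz -p 4 > /mnt/backup/home.tar.gz

    Whether any algorithm reaches the required ~24% reduction depends entirely on the data: text and source shrink easily, while photos, music, and video are already compressed and will barely shrink at any setting.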

    Read the article

  • 11.10 liveCD black screen

    - by Shaun Killingbeck
    Attempting to install/try Ubuntu 11.10 on my new laptop, using a live CD (I also tried USB). I get the purple screen (with the man/keyboard at the bottom) and after that the screen flashes bright white before going black. Ubuntu continues to load in the background, with the login sound etc., but the screen is off. I have tried as many different solutions as I could find, including:

    - Using nomodeset, xforcevesa, or i915.modeset=0 in the boot options (separately): varying consequences, but I end up either at a blinking cursor with no prompt, at a command line (startx fails: no screen found), or at the original blank screen again.
    - Booting from VirtualBox: it crashes at the same place the screen goes blank when using a CD/USB.
    - Trying 11.04: I don't have this problem, BUT when trying to install I get a ubi-partman error 141 (possibly down to the three partitions that came on my laptop... not sure why HP needed its own separate partition for HP Tools...).

    Model: HP Pavilion DV6 6B08SA. Processor: AMD quad-core A6-3410MX APU with Radeon HD 6545G2 dual graphics (1.6 GHz, 4 MB L2 cache). Chipset: AMD RS880M.

    Any help would be greatly appreciated. I just want to be able to partition the drive and install Ubuntu. I'm assuming the issue is graphics-card related, although I have no confirmation of that. I have caught a glimpse of some output to do with pulseaudio and [fail], but I can't imagine why that would cause a screen problem, and the sound definitely works anyway.

    Read the article

  • Central renderer for a given scene

    - by Loggie
    When creating a central rendering system for all game objects in a given scene, I am trying to work out the best way to pass the scene to the render system to be rendered. Suppose the scene is managed by an arbitrary structure, i.e., an octree, BSP tree, quad-tree, kd-tree, etc. What is the best way to pass this to the render system? The obvious problem is that if it were simply given the root node of the structure, the render system would require intrinsic knowledge of that structure in order to traverse it. My solution is to clip all objects outside the frustum in the scene manager, then create a list of the objects that are left and pass this simple list to the render system, be it an array, a vector, a linked list, etc. (this would be a structure required by the render system as a means of knowing which objects should be rendered). The list would of course attempt to minimise OpenGL state changes by grouping objects that require the same rendering operations. I have been thinking about this a lot and searching various terms on here and following any additional information/links, but I have not really found a definitive answer. There may be no definitive answer, but I would appreciate some advice and tips. My question is: is this a reasonable solution to the problem? Are there any improvements I could make? Are there any caveats I should know about? Side question: am I right in assuming that octrees, BSP trees, etc. are all forms of BVH?
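
    As a sketch of that hand-off (illustrative only; none of these names come from the question), the culled list can carry a packed sort key so the renderer groups equal-state objects without knowing anything about the spatial structure that produced them:

        // Illustrative sketch: the renderer consumes a flat, state-sorted list
        // and never sees the octree/BSP/kd-tree that produced it.
        #include <algorithm>
        #include <cstdint>
        #include <vector>

        struct RenderItem {
            std::uint64_t sortKey;  // packs shader id, texture id, etc.
            const void*   mesh;    // opaque handle to the geometry to draw
        };

        void render(std::vector<RenderItem>& items) {
            std::sort(items.begin(), items.end(),
                      [](const RenderItem& a, const RenderItem& b) {
                          return a.sortKey < b.sortKey;
                      });
            for (const RenderItem& item : items) {
                (void)item;  // placeholder: bind state when the key's bits change, then draw
            }
        }

    Sorting by a single integer key is a common way to get the state grouping the question describes for free, since items sharing a shader and texture end up adjacent.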

    Read the article

  • Ubuntu 12.04 64 bit "unable to find medium with live filesystem" AFTER normal install

    - by user88710
    So, I got a new computer (64-bit quad core, yada yada), pulled my Ubuntu SSD drive from the old machine, and installed it in the new machine. (My intention here is to have Ubuntu installed on the 120 GB SSD and Win7 on the main drive.) I downloaded 64-bit Ubuntu and burned it to a disk, rebooted with the live CD, and installed Ubuntu to the SSD drive with no problems. I rebooted again, got the GRUB menu, and selected Ubuntu; after a minute I got this: "unable to find medium with live filesystem". Booting into Windows, Explorer doesn't even see the SSD. Device Manager sees it, though. I assume this is because it's formatted with ext4. So: the live CD saw the SSD just fine and installed fine, but when I try to boot Ubuntu, I get the error above. Heeellllpppp! UPDATE: small update. Windows did a software update that apparently wiped out my GRUB, so I guess GRUB was installed on the main drive. I reinstalled Ubuntu (again) on the SSD drive, but still no joy with booting from it; same error message as above.

    Read the article

  • USB Ports In Wrong Mode, How To Use usb_modeswitch?

    - by user86872
    I haven't had access to my USB ports as media devices for a couple of days now. I've been reading and researching everything I can find, but I can't find a good guide for usb_modeswitch (or usbms) that I can decipher. The USB ports are fine for power, but they no longer support my Android phone as a media device, which is killing me because I use adb every day, and they no longer support my plug-and-play mouse either. I'm not sure what caused the switch, though I think it may be related to the suspend issue I've read about; the solutions in the threads I read also didn't work. Below is my system information and details.

    System: Ubuntu 12.04, 64-bit, dedicated machine. Machine: HP Pavilion g6 notebook, AMD A6 quad-core processor. USBs used for: cooling dock, Android Debug Bridge, wireless mouse. Attempted: modprobe and a udev restart; unable to attempt lsusb diagnosis due to my own lack of knowledge. :)

    Last attempt readout:

        ncandiano@ncandiano-HP-Pavilion-g6-Notebook-PC:~$ sudo modprobe -r usbhid && sleep 5 && sudo modprobe usbhid
        ncandiano@ncandiano-HP-Pavilion-g6-Notebook-PC:~$ sudo modprobe -r usb-storage
        ncandiano@ncandiano-HP-Pavilion-g6-Notebook-PC:~$ sudo modprobe usb-storage
        ncandiano@ncandiano-HP-Pavilion-g6-Notebook-PC:~$ sudo restart udev
        udev start/running, process 2624
        ncandiano@ncandiano-HP-Pavilion-g6-Notebook-PC:~$ lsusb
        Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 002 Device 002: ID 0461:4de7 Primax Electronics, Ltd webcam

    Any help would be greatly appreciated!

    Read the article

  • Dell Studio XPS 16 Runs Hot

    - by dtbarne
    Specs: 1920x1080, i7 1.6 GHz quad core, 6 GB RAM, 1 GB ATI Radeon HD 6570M/5700 series, 500 GB 7200 rpm hard drive. I love this laptop for many reasons, but it constantly runs hot (the CPU sits in the low 70s C with basic tasks, and 80+ is not uncommon) and I'm finding it too much to deal with. The laptop feels very hot (almost too hot for a lap) and often gets so hot that the OS slows down or freezes altogether. I've tried cleaning it out and even replacing the thermal paste. I often use an external cooler, but it only helps 3-5 degrees and it's a pain to have to use. I've come to the conclusion that it just runs hot. I have two questions: What is to blame, the i7 processor, the graphics card, or just poor cooling in this laptop? And does the Dell XPS 15 run cooler? I'm looking at replacing my current laptop, but I don't want to run into the same problem.

    Read the article

  • Javascript Canvas Drawing Efficiency

    - by jujumbura
    I have just recently started experimenting with game development in JavaScript/HTML5, and so far it has been going pretty well. I have a simple test scene running with some basic input handling and a hundred-ish drawImage() calls with a few transforms. This all runs great on Chrome, but unfortunately it already chugs on Firefox. I am using a very large canvas (1920 x 1080), but it doesn't seem like I should be hitting my limit already. So on that note, I was hoping to ask a few questions:

    1) What exactly is done on the CPU vs. the GPU in terms of canvas and drawImage()? I'm afraid the answer is probably "it depends on the browser", but can anybody give me some rules of thumb? I naively imagined that each drawImage call results in a textured quad on the GPU, with the canvas effectively being a render target, but I'm wondering if I'm pretty far off base there...

    2) I have seen posts here and there with people saying not to use the translate(), rotate(), and scale() functions when drawing on the canvas. Am I adding a lot of overhead just by adding a translate() call, as opposed to passing the x, y to drawImage()? Some people suggest using "translate3d", etc., which are CSS properties, but I'm not sure how to use them within a scene. Can they be used for animated sprites within a single canvas?

    3) I have also seen a lot of posts with people mentioning that pre-building canvases and then re-using them is a lot faster than issuing all the individual draw calls again. I am guessing that my background should definitely be pre-built into a canvas, but how far should I take this? Should I maintain an individual canvas for each sprite, to cache all static image data when not animating?

    Thank you much for your advice!
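
    On question 3, a minimal sketch of the pre-building idea (drawBackground and drawSprites are assumed helper functions, not part of any API):

        // Render the static background once into an offscreen canvas,
        // then blit the whole thing each frame with a single drawImage().
        var bgCache = document.createElement('canvas');
        bgCache.width = 1920;
        bgCache.height = 1080;
        drawBackground(bgCache.getContext('2d'));  // many static draw calls, paid once

        function render(ctx) {
            ctx.drawImage(bgCache, 0, 0);  // one blit replaces them every frame
            drawSprites(ctx);              // only the dynamic sprites redraw
        }

    Per-sprite caching is usually only worth it for sprites that are themselves composed of many draw calls or filters; a sprite that is already a single image gains nothing from another canvas layer.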

    Read the article

  • How to improve Minecraft-esque voxel world performance?

    - by SomeXnaChump
    After playing Minecraft I marveled a bit at its large worlds, but at the same time I found them extremely slow to navigate, even with a quad core and a meaty graphics card. Now I assume Minecraft is fairly slow because: A) It's written in Java, and as most of the spatial partitioning and memory management activities happen there, it would naturally be slower than a native C++ version. B) It doesn't partition its world very well. I could be wrong on both assumptions; however, it got me thinking about the best way to manage large voxel worlds. As it is a true 3D world, where a block can exist in any part of the world, it is basically a big 3D array [x][y][z], where each block in the world has a type (i.e. BlockType.Empty = 0, BlockType.Dirt = 1, etc.). Now, I am assuming that to make this sort of world perform well you would need to: A) Use a tree of some variety (oct/kd/bsp) to split out all the cubes; an octree or kd-tree seems like the better option, as you can partition at the cube level rather than the triangle level. B) Use some algorithm to work out which blocks can currently be seen, as blocks closer to the user can occlude the blocks behind them, making it pointless to render those. C) Keep the block objects themselves lightweight, so it is quick to add and remove them from the trees. I guess there is no right answer to this, but I would be interested to see people's opinions on the subject. How would you improve performance in a large voxel-based world?
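
    As a sketch of point B at its simplest (sizes and names assumed, not from any particular engine): before any tree or occlusion query, a face of a solid block is worth emitting only when the neighboring cell in that direction is empty, which on typical terrain discards the vast majority of faces.

        // Illustrative sketch: neighbor test for hidden-face removal.
        #include <array>

        constexpr int SX = 64, SY = 64, SZ = 64;  // assumed chunk dimensions
        enum BlockType : unsigned char { Empty = 0, Dirt = 1 };

        using World = std::array<std::array<std::array<unsigned char, SZ>, SY>, SX>;

        bool faceVisible(const World& world, int x, int y, int z,
                         int dx, int dy, int dz) {
            int nx = x + dx, ny = y + dy, nz = z + dz;
            if (nx < 0 || nx >= SX || ny < 0 || ny >= SY || nz < 0 || nz >= SZ)
                return true;                    // chunk boundary: treat as exposed
            return world[nx][ny][nz] == Empty;  // hidden when the neighbor is solid
        }

    A block surrounded on all six sides then emits no geometry at all.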

    Read the article

  • Freescale One Box Unboxing (then installing Java SE Embedded technology)

    - by hinkmond
    So, I get a FedEx delivery the other day... "What cool device could be inside this FedEx Overnight Express Large Box?" I was wondering... Could it be a new Linux/ARM target device board, faster than a Raspberry Pi and better than a BeagleBone Black??? Why, yes! Yes, it was a Linux/ARM target device board, faster than anything around! It was a Freescale i.MX6 Sabre Smart Device Board (SDB)! Cool... Quad-core ARM Cortex-A9 at 1 GHz with 1 GB of RAM. So, cool... I installed the Freescale One Box OpenWRT Linux image onto its SD card and booted it up into Linux. But, wait! One thing was missing... What was it? What could be missing? Why, it had no Java SE Embedded installed on it yet, of course! So, I went to the JDK 7u45 download link, clicked on "Accept License Agreement", clicked on "jdk-7u45-linux-arm-vfp-sflt.tar.gz", installed the bad boy, and all was good. Java SE Embedded 7u45 on a Freescale One Box. Nice... Hinkmond
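
    The install itself is just an unpack; a sketch of the steps (the install prefix and unpacked directory name are assumptions, not from the post):

        # Unpack the ARM JDK and sanity-check it on the target board.
        tar xzf jdk-7u45-linux-arm-vfp-sflt.tar.gz -C /usr/local
        export PATH=/usr/local/jdk1.7.0_45/bin:$PATH
        java -version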

    Read the article

  • Do I really need Microsoft Updates?

    - by Tony Wong
    When I install a fresh copy of Windows XP Home (I bought it from the store... not a copy), my PC rocks at lightning speed. But when I start installing all the updates and patches, except the .NET 4.0 Client (as the .NET 4.0 Client seems to bring the machine to a slow crawl), the PC starts to slow down... as though more resources are being watched or something is happening in the background. So could I not get away with an awesome virus protector and an awesome firewall setup, and avoid all the patches? The machine I have is a quad core with 4 GB RAM and a 2.3 GHz processor. Tons of room, and the machine can run several applications at one time... but when the updates happen... it's s-l-o-w!

    Read the article

  • OpenGL + Allegro. Moving from software drawing X Y to OpenGL is confusing

    - by Aaron
    Having a fair bit of trouble. I'm used to Allegro and drawing sprites on a bitmap buffer at X, Y coordinates. Now I've started a test project with OpenGL, and it's weird. Basically, as far as I know, there are many ways to draw stuff in OpenGL. At the moment, I think I'm creating a quad? Whatever that is. And I think I've given it a texture made from a bitmap, and then I'm drawing that:

        GLuint gl_image;
        bitmap = load_bitmap("cat.bmp", NULL);
        gl_image = allegro_gl_make_texture_ex(AGL_TEXTURE_MASKED, bitmap, GL_RGBA);
        glBindTexture(GL_TEXTURE_2D, gl_image);
        glBegin(GL_QUADS);
        glColor4ub(255, 255, 255, 255);
        glTexCoord2f(0, 0); glVertex3f(-0.5, 0.5, 0);
        glTexCoord2f(1, 0); glVertex3f(0.5, 0.5, 0);
        glTexCoord2f(1, 1); glVertex3f(0.5, -0.5, 0);
        glTexCoord2f(0, 1); glVertex3f(-0.5, -0.5, 0);
        glEnd();

    So yeah. I have a few questions: Is this the best way of drawing a sprite? Is it suitable? The big question: can anyone help, or does anyone know any tutorials on this weird coordinate thing? If it even is that. It's vastly different from X and Y, but I want to learn it. I was thinking maybe I could learn how this weird positioning stuff works and then write a function to translate it to X and Y coordinates. That's about it. I'm still trying to figure it all out on my own, but any contributions you guys can make would be greatly appreciated =D Thanks!
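
    For what it's worth, the coordinates in that snippet are OpenGL's default clip-space coordinates, where the visible area runs from -1 to 1 on each axis regardless of window size. The usual way to get Allegro-style pixel coordinates back is an orthographic projection; a minimal sketch, assuming a 640x480 display:

        /* Map GL coordinates 1:1 to screen pixels with a top-left origin,
           the way Allegro's blits work. */
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        glOrtho(0, 640, 480, 0, -1, 1);  /* left, right, bottom, top, near, far */
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
        /* after this, glVertex2f(x, y) takes plain pixel coordinates */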

    Read the article
