Search Results

Search found 8172 results on 327 pages for 'vector graphics'.


  • Cheese won't start

    - by Anthony Hohenheim
    I can't start Cheese Webcam Booth. It starts loading, and there is a brief moment when the window shows up, but then it disappears, as if it shuts itself down, and it's not in the system monitor. My webcam works perfectly in a Skype video call. I installed and ran Camorama, and it gave me an error: "Could not connect to video device (/dev/video0). Please check connection." When I run lsusb I get this line for my webcam:

        Bus 002 Device 002: ID 04f2:b210 Chicony Electronics Co., Ltd

    And for my graphics card, running lspci:

        VGA compatible controller: Intel Corporation Mobile 4 Series Chipset Integrated Graphics Controller (rev 07)

    It's not a pressing matter, but it bugs my nerves: if the camera works in Skype, why do Cheese and other programs refuse to run? As I said, it's not a big deal, but any help would be appreciated. Running Cheese in a terminal gives:

        (cheese:11454): Gtk-WARNING **: Attempting to add a widget with type GtkImage to a GtkToggleButton, but as a GtkBin subclass a GtkToggleButton can only contain one widget at a time; it already contains a widget of type GtkLabel
        (cheese:11454): Gtk-WARNING **: Attempting to add a widget with type GtkImage to a GtkToggleButton, but as a GtkBin subclass a GtkToggleButton can only contain one widget at a time; it already contains a widget of type GtkLabel
        (cheese:11454): Gtk-WARNING **: Attempting to add a widget with type GtkImage to a GtkToggleButton, but as a GtkBin subclass a GtkToggleButton can only contain one widget at a time; it already contains a widget of type GtkLabel
        (cheese:11454): Gtk-WARNING **: Attempting to add a widget with type GtkHBox to a GtkButton, but as a GtkBin subclass a GtkButton can only contain one widget at a time; it already contains a widget of type GtkLabel
        (cheese:11454): Gtk-WARNING **: Attempting to add a widget with type GtkImage to a GtkButton, but as a GtkBin subclass a GtkButton can only contain one widget at a time; it already contains a widget of type GtkLabel
        (cheese:11454): Gtk-WARNING **: Attempting to add a widget with type GtkHBox to a GtkToggleButton, but as a GtkBin subclass a GtkToggleButton can only contain one widget at a time; it already contains a widget of type GtkLabel
        (cheese:11454): Gtk-WARNING **: Attempting to add a widget with type GtkImage to a GtkButton, but as a GtkBin subclass a GtkButton can only contain one widget at a time; it already contains a widget of type GtkLabel
        (cheese:11454): Gdk-WARNING **: The program 'cheese' received an X Window System error. This probably reflects a bug in the program. The error was 'BadDrawable (invalid Pixmap or Window parameter)'. (Details: serial 932 error_code 9 request_code 137 minor_code 9) (Note to programmers: normally, X errors are reported asynchronously; that is, you will receive the error a while after causing it. To debug your program, run it with the --sync command line option to change this behavior. You can then get a meaningful backtrace from your debugger if you break on the gdk_x_error() function.)

    Read the article

  • Is it possible to fulfill the social needs of a human being through a 3D social game like IMVU?

    - by Totty
    (I'm not advertising or promoting this game; it's just an example from my experience, and I would like your opinion on the matter if possible.) I've started researching "things" about games, and I decided to begin playing IMVU because a friend of mine said it's cool. At first it seemed like just another 3D social game, not so cool. But I "tried to like it", and after one day I can say I'm addicted to it! Yes; I will explain.

    About the game: you can go into chat rooms and move into positions. Some positions are like sitting on a sofa or the floor, dancing alone or with a partner, kissing, and more in this vein. In the free version of the game there is no nudity. You can even listen to music or watch YouTube. The 3D graphics are quite low-end, so it's not as real as today's paid PC games.

    About my experience: at first I was going into chat rooms with my friend, and they seemed very nice. There were people talking about general stuff, much as in real life. Well, I began to know some girls (virtual girls commanded by a real girl, I hope!). Things happened: some girls are just crazy, not like in real life; they make out before even talking. With other girls you can talk a little, and then they add you to their friend list. Sometimes they invite you to their virtual places. Some girls have IMVU-only boyfriends (not in reality), and most of them don't even make out in the game, so there is a real level of commitment involved here! Though from what my friend told me, his lasted about three days at most. Some others have real and IMVU boyfriends who are the same person. So far I haven't found a girl whose IMVU boyfriend is different from her real one, nor one with multiple boyfriends. There are rooms where the same people find each other every day and talk about general stuff, relationships and so on. They are nice to you, they "feel" you and show that they care. This is what amazes me: they treat you like a real human being, as if you were their friend in the real world (of course it's not always like this). There are jealous girls too, and competitiveness between females, lol, I know you loled! This is kind of social. So today I closed the door to my room and played it all day long, and guess what: I didn't feel the need to be with a real person at all. Normally, if I stayed a full day alone, I would go quite crazy. So the question is: is it just me who seems able to fulfill my social needs this way, or is there something more? Thanks for your precious time reading my full question.

    Read the article

  • Lenovo Thinkpad X1 Carbon support

    - by Robottinosino
    I am considering selling my Mac to put the money towards a Lenovo Thinkpad X1, because what I really want is to run an Ubuntu system all the time. Is this machine completely supported in Ubuntu, with no tiny little feature missing just because I am "going Linux"? Optional user story section; skip to the question below if you don't have time: I have a friend who bought a "works on Ubuntu" system a year ago and has hated the fact ever since. The battery lasts less than when he boots into Windows (which he despises), which he ascribes to poor OS/hardware integration and missing support for advanced chipset power-management features. Suspend/resume/hibernate behave oddly (he says that something which works 90% of the time, and the other 10% makes you lose your work, is as good as broken: 90% is the same as 0%). There are occasional graphics-card glitches he can perfectly well live with and has almost grown affectionate to. And finally, the thing that would make him undo his choice if he could: bad input-device drivers. He says the trackpoint and trackpad just "feel different", "so much better" on Windows, and that was impossible to know from the website brochure. That story makes me very doubtful... but I want to abandon the walled garden that is my Mac and go Ubuntu all the way, no doubt about that! My dilemma at this time is just: I don't want to live with those eternal frustrations, for sure! Here's a directly answerable phrasing of my question: Is the Lenovo Thinkpad X1 supported on Ubuntu? Yes/no, and on which version? Which hardware features are not supported? Provide a list. Optionally, sort the list in descending order of frustration from your experience. Optionally, mention whether there are acceptable workarounds to the out-of-the-box condition described in the earlier points, and whether they bring the frustration down to at least tolerable levels. Comment: the Ubuntu hardware certification page is so not-for-end-users it's unreal. Whoa. What would make it end-user friendly: a link saying "buy here and you'll be just fine; this is the right configuration for you; it'll work as long as you press BUY on that page and don't browse further", and removing mentions of "may" and "might not work". Just tell it straight: press buy here and you will get a working system, with the exception of A, B, C, so that I can decide whether the philosophical "freedom pleasure" I get from escaping the Apple world is enough to offset the loss of, for instance, Bluetooth capabilities (something that I of course use on my Mac) in exchange for free (as in freedom) software. The certification page fails to dispel my doubts as an end-user. I don't feel "eased into Ubuntu"; I feel "partially informed".

    Read the article

  • Pantech Link II, Ubuntu and Virtual XP

    - by user85041
    Okay, this is my problem. I have a Pantech Link II; dmesg states:

        [  896.072037] usb 2-3: new high-speed USB device number 3 using ehci_hcd
        [  896.258562] cdc_acm 2-3:1.0: ttyACM0: USB ACM device
        [  896.260039] usbcore: registered new interface driver cdc_acm
        [  896.260042] cdc_acm: USB Abstract Control Model driver for USB modems and ISDN adapters

    I have it installed through Wine (PC Suite and the driver), but it doesn't see the phone. Virtual XP through VMware Player sees my device and knows it needs a driver; the removable-devices list shows "Curitel Pantech USB Device (Maybe Driver)". I have PC Suite installed in XP; when I install the driver through the executable, it reports a problem installing the hardware, and then the device disappears. Ubuntu sees it again after a restart, but if I start XP with that driver installed, it disappears from both, and I get these errors in dmesg:

        [ 1047.760555] /dev/vmmon[2882]: PTSC: initialized at 3093322000 Hz using TSC, TSCs are synchronized.
        [ 1048.174033] /dev/vmmon[2882]: Monitor IPI vector: 0
        [ 1055.293060] /dev/vmnet: open called by PID 3163 (vmx-vcpu-0)
        [ 1055.293074] /dev/vmnet: port on hub 8 successfully opened
        [ 1055.293088] /dev/vmnet: open called by PID 3163 (vmx-vcpu-0)
        [ 1055.293094] /dev/vmnet: port on hub 8 successfully opened
        [ 1072.446305] /dev/vmnet: open called by PID 3163 (vmx-vcpu-0)
        [ 1072.446316] /dev/vmnet: port on hub 8 successfully opened
        [ 1072.446328] /dev/vmnet: open called by PID 3163 (vmx-vcpu-0)
        [ 1072.446334] /dev/vmnet: port on hub 8 successfully opened
        [ 1072.856024] usb 1-1: reset high-speed USB device number 2 using ehci_hcd
        [ 1079.292024] usb 1-1: reset high-speed USB device number 2 using ehci_hcd
        [ 1079.732024] usb 1-1: reset high-speed USB device number 2 using ehci_hcd
        [ 1127.743034] NET: Registered protocol family 39
        [ 1127.749320] [3163]: VMCI: IOCTL_VMCI_QUEUEPAIR_ALLOC (cid=1522210225,result=4).
        [ 1144.104031] usb 2-3: reset high-speed USB device number 3 using ehci_hcd
        [ 1144.412031] usb 2-3: reset high-speed USB device number 3 using ehci_hcd
        [ 1155.889976] ehci_hcd 0000:00:13.2: force halt; handshake ffffc90000642024 00004000 00000000 -> -110
        [ 1155.889980] ehci_hcd 0000:00:13.2: HC died; cleaning up
        [ 1155.890008] usb 2-3: USB disconnect, device number 3
        [ 1155.890013] usb 2-3: usbfs: usb_submit_urb returned -110
        [ 1658.310777] [3163]: VMCI: IOCTL_VMCI_QUEUEPAIR_DETACH (cid=1522210225,result=3).
        [ 1658.392018] NET: Unregistered protocol family 39
        [ 1666.546438] /dev/vmnet: open called by PID 3163 (vmx-vcpu-0)
        [ 1666.546450] /dev/vmnet: port on hub 8 successfully opened
        [ 1666.546462] /dev/vmnet: open called by PID 3163 (vmx-vcpu-0)
        [ 1666.546467] /dev/vmnet: port on hub 8 successfully opened
        [ 1671.431383] uvcvideo: Found UVC 1.00 device USB2.0 Camera (1871:0101)
        [ 1671.432533] input: USB2.0 Camera as /devices/pci0000:00/0000:00:12.2/usb1/1-1/1-1:1.0/input/input13

        lessa@X:~$ dmesg|tail
        [ 1155.890008] usb 2-3: USB disconnect, device number 3
        [ 1155.890013] usb 2-3: usbfs: usb_submit_urb returned -110
        [ 1658.310777] [3163]: VMCI: IOCTL_VMCI_QUEUEPAIR_DETACH (cid=1522210225,result=3).
        [ 1658.392018] NET: Unregistered protocol family 39
        [ 1666.546438] /dev/vmnet: open called by PID 3163 (vmx-vcpu-0)
        [ 1666.546450] /dev/vmnet: port on hub 8 successfully opened
        [ 1666.546462] /dev/vmnet: open called by PID 3163 (vmx-vcpu-0)
        [ 1666.546467] /dev/vmnet: port on hub 8 successfully opened
        [ 1671.431383] uvcvideo: Found UVC 1.00 device USB2.0 Camera (1871:0101)
        [ 1671.432533] input: USB2.0 Camera as /devices/pci0000:00/0000:00:12.2/usb1/1-1/1-1:1.0/input/input13

    I have tried uninstalling, and installing manually from the device manager ("update driver" while it still has the warning sign), but it doesn't see the drivers as valid. I have no idea how to fix this and would prefer not to have to go to another computer. I'm not trying to do anything but get the pictures off of it. I have to restart Ubuntu and plug in the device for Ubuntu to see it correctly again. I've only been using Linux for about a month and a half, so I have no idea which commands could help here, and I don't have a memory card in the phone to mount.

    Read the article

  • Ubuntu Linux won't display netbook's native resolution

    - by Daniel
    FYI: my netbook model is the HP Mini 210-1004sa, which comes with the Intel Graphics Media Accelerator 3150 and a 10.1" active-matrix colour TFT display, 1024 x 600. I recently removed Windows 7 Starter from my netbook and replaced it with Ubuntu 12.10. The problem is that the OS doesn't seem to recognise the native display resolution of 1024x600, i.e. the bottom bits of Ubuntu are hidden beneath the screen, and the only two available resolutions are the default 1024x768 and 800x600. I've also thought about replacing Ubuntu with Lubuntu or Puppy Linux, as the system does run a bit slow, but I can't, because then I wouldn't be able to access the taskbar and application menu, which would be hidden beneath the screen. Only Ubuntu with Unity is currently usable, as the Unity launcher is visible enough. I was able to define a custom 1024x600 resolution using the Q&A "How set my monitor resolution?", but when I set that resolution a black band appears at the top of the screen and the desktop area is lowered, with bits of it hidden beneath the screen. I tried leaving it at this new resolution and restarting the system to see whether the black band would disappear and the display would fit correctly, but it gets reset to 1024x768 at startup and displays the following error:

        Could not apply the stored configuration for monitors
        none of the selected modes were compatible with the possible modes:
        Trying modes for CRTC 63
        CRTC 63: trying mode 800x600@60Hz with output at 1024x600@60Hz (pass 0)
        CRTC 63: trying mode 800x600@56Hz with output at 1024x600@60Hz (pass 0)
        CRTC 63: trying mode 640x480@60Hz with output at 1024x600@60Hz (pass 0)
        CRTC 63: trying mode 1024x768@60Hz with output at 1024x600@60Hz (pass 1)
        CRTC 63: trying mode 800x600@60Hz with output at 1024x600@60Hz (pass 1)
        CRTC 63: trying mode 800x600@56Hz with output at 1024x600@60Hz (pass 1)
        CRTC 63: trying mode 640x480@60Hz with output at 1024x600@60Hz (pass 1)
        Trying modes for CRTC 64
        CRTC 64: trying mode 1024x768@60Hz with output at 1024x600@60Hz (pass 0)
        CRTC 64: trying mode 800x600@60Hz with output at 1024x600@60Hz (pass 0)
        CRTC 64: trying mode 800x600@56Hz with output at 1024x600@60Hz (pass 0)
        CRTC 64: trying mode 640x480@60Hz with output at 1024x600@60Hz (pass 0)
        CRTC 64: trying mode 1024x768@60Hz with output at 1024x600@60Hz (pass 1)
        CRTC 64: trying mode 800x600@60Hz with output at 1024x600@60Hz (pass 1)
        CRTC 64: trying mode 800x600@56Hz with output at 1024x600@60Hz (pass 1)
        CRTC 64: trying mode 640x480@60Hz with output at 1024x600@60Hz (pass 1)

    Read the article

  • Oracle R Distribution 3.1.1 Released

    - by Sherry LaMonica-Oracle
    Oracle R Distribution version 3.1.1 has been released to Oracle's public yum today. R-3.1.1 (code name "Sock it to Me") is an update to R-3.1.0 that consists mainly of bug fixes. It also includes enhancements related to accessing package help files, improved accuracy when importing data with large integers, and better integration with RStudio graphics. The full list of new features and bug fixes is listed in the NEWS file. To install Oracle R Distribution using yum, follow the instructions in the Oracle R Enterprise Installation and Administration Guide. Installing using yum will resolve any operating system dependencies automatically; as such, we recommend using yum to install Oracle R Distribution. However, if yum is not available, you can install the Oracle R Distribution RPMs directly using RPM commands. For Oracle Linux 5, the Oracle R Distribution RPMs are available in the Enterprise Linux Add-Ons repository:

        R-3.1.1-1.el5.x86_64.rpm
        R-core-3.1.1-1.el5.x86_64.rpm
        R-devel-3.1.1-1.el5.x86_64.rpm
        libRmath-3.1.1-1.el5.x86_64.rpm
        libRmath-devel-3.1.1-1.el5.x86_64.rpm
        libRmath-static-3.1.1-1.el5.x86_64.rpm

    For Oracle Linux 6, the Oracle R Distribution RPMs are available in the Oracle Linux Add-Ons repository:

        R-3.1.1-1.el6.x86_64.rpm
        R-core-3.1.1-1.el6.x86_64.rpm
        R-devel-3.1.1-1.el6.x86_64.rpm
        libRmath-3.1.1-1.el6.x86_64.rpm
        libRmath-devel-3.1.1-1.el6.x86_64.rpm
        libRmath-static-3.1.1-1.el6.x86_64.rpm

    For example, this command installs the R 3.1.1 RPM on Oracle Linux x86-64 version 6:

        rpm -i R-3.1.1-1.el6.x86_64.rpm

    To complete the Oracle R Distribution 3.1.1 installation, repeat this command for each of the six RPMs, resolving dependencies as required. Oracle R Distribution 3.1.1 is not yet officially certified with Oracle R Enterprise. Refer to Table 1-2 in the Oracle R Enterprise Installation Guide for supported configurations of Oracle R Enterprise components, or check this blog for updates. The Oracle R Distribution 3.1.1 binaries for Windows, AIX, Solaris SPARC and Solaris x86 will be available on OSS, Oracle's Open Source Software portal, in the coming weeks.

    Read the article

  • Improving the efficiency of frustum culling

    - by DeadMG
    I've got some code which performs frustum culling. However, it defines the "frustum" way too broadly: when I have ~10 objects on screen, the code returns 42 objects to be rendered. I've tried taking "slices" through the frustum to attempt to increase the accuracy of the technique, but it doesn't seem to have made much impact. I also significantly reduced the far plane, so that the objects are barely at the edge. Here's my code (where size is the size in screen space, i.e. the resolution of the client area of the window I'm rendering into). Any suggestions?

        auto&& size = GetDimensions();
        D3DVIEWPORT9 vp = { 0, 0, size.x, size.y, 0, 1 };
        D3DCALL(device->SetViewport(&vp));
        static const int slices = 10;
        std::vector<Object*> result;
        for (int i = 0; i < slices; i++) {
            D3DXVECTOR3 WorldSpaceFrustrumPoints[8] = {
                D3DXVECTOR3(0,      size.y, static_cast<float>(i) / slices),
                D3DXVECTOR3(size.x, 0,      static_cast<float>(i) / slices),
                D3DXVECTOR3(size.x, size.y, static_cast<float>(i) / slices),
                D3DXVECTOR3(0,      0,      static_cast<float>(i) / slices),
                D3DXVECTOR3(0,      0,      static_cast<float>(i + 1) / slices),
                D3DXVECTOR3(size.x, 0,      static_cast<float>(i + 1) / slices),
                D3DXVECTOR3(size.x, size.y, static_cast<float>(i + 1) / slices),
                D3DXVECTOR3(0,      size.y, static_cast<float>(i + 1) / slices)
            };
            D3DXMATRIXA16 Identity;
            D3DXMatrixIdentity(&Identity);
            D3DXVec3UnprojectArray(
                WorldSpaceFrustrumPoints, sizeof(D3DXVECTOR3),
                WorldSpaceFrustrumPoints, sizeof(D3DXVECTOR3),
                &vp, &Projection, &View, &Identity, 8);
            Math::AABB Frustrum;
            auto world_begin = std::begin(WorldSpaceFrustrumPoints);
            auto world_end = std::end(WorldSpaceFrustrumPoints);
            auto world_initial = WorldSpaceFrustrumPoints[0];
            Frustrum.BottomLeftClosest.x = std::accumulate(world_begin, world_end, world_initial,
                [](D3DXVECTOR3 lhs, D3DXVECTOR3 rhs) { return lhs.x < rhs.x ? lhs : rhs; }).x;
            Frustrum.BottomLeftClosest.y = std::accumulate(world_begin, world_end, world_initial,
                [](D3DXVECTOR3 lhs, D3DXVECTOR3 rhs) { return lhs.y < rhs.y ? lhs : rhs; }).y;
            Frustrum.BottomLeftClosest.z = std::accumulate(world_begin, world_end, world_initial,
                [](D3DXVECTOR3 lhs, D3DXVECTOR3 rhs) { return lhs.z < rhs.z ? lhs : rhs; }).z;
            Frustrum.TopRightFurthest.x = std::accumulate(world_begin, world_end, world_initial,
                [](D3DXVECTOR3 lhs, D3DXVECTOR3 rhs) { return lhs.x > rhs.x ? lhs : rhs; }).x;
            Frustrum.TopRightFurthest.y = std::accumulate(world_begin, world_end, world_initial,
                [](D3DXVECTOR3 lhs, D3DXVECTOR3 rhs) { return lhs.y > rhs.y ? lhs : rhs; }).y;
            Frustrum.TopRightFurthest.z = std::accumulate(world_begin, world_end, world_initial,
                [](D3DXVECTOR3 lhs, D3DXVECTOR3 rhs) { return lhs.z > rhs.z ? lhs : rhs; }).z;
            auto slices_result = ObjectTree.collision(Frustrum);
            result.insert(result.end(), slices_result.begin(), slices_result.end());
        }
        return result;
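    For comparison, a plane-based test is usually tighter and cheaper than wrapping the unprojected corner points in an axis-aligned box, since that box can be far larger than the frustum volume it encloses whenever the camera is rotated. Below is a minimal sketch of the standard Gribb/Hartmann plane extraction against the post's D3DX types; the Math::AABB members are taken from the code above, while the function names and the inward-facing plane convention are assumptions for illustration, not the poster's code.

        #include <d3dx9.h>

        struct FrustumPlanes { D3DXPLANE p[6]; };

        // Extract the six planes (left, right, bottom, top, near, far) from
        // the combined View * Projection matrix, using D3DX's row-vector
        // conventions. Each plane faces inward after normalization.
        FrustumPlanes ExtractPlanes(const D3DXMATRIX& view, const D3DXMATRIX& proj)
        {
            D3DXMATRIX m = view * proj;
            FrustumPlanes f;
            f.p[0] = D3DXPLANE(m._14 + m._11, m._24 + m._21, m._34 + m._31, m._44 + m._41); // left
            f.p[1] = D3DXPLANE(m._14 - m._11, m._24 - m._21, m._34 - m._31, m._44 - m._41); // right
            f.p[2] = D3DXPLANE(m._14 + m._12, m._24 + m._22, m._34 + m._32, m._44 + m._42); // bottom
            f.p[3] = D3DXPLANE(m._14 - m._12, m._24 - m._22, m._34 - m._32, m._44 - m._42); // top
            f.p[4] = D3DXPLANE(m._13,         m._23,         m._33,         m._43);         // near (z in [0,1])
            f.p[5] = D3DXPLANE(m._14 - m._13, m._24 - m._23, m._34 - m._33, m._44 - m._43); // far
            for (int i = 0; i < 6; ++i)
                D3DXPlaneNormalize(&f.p[i], &f.p[i]);
            return f;
        }

        // An AABB is culled only if it lies entirely behind one of the planes.
        // For each plane, test the box corner furthest along the plane normal.
        bool BoxInFrustum(const FrustumPlanes& f, const Math::AABB& box)
        {
            for (int i = 0; i < 6; ++i)
            {
                D3DXVECTOR3 v(
                    f.p[i].a >= 0 ? box.TopRightFurthest.x : box.BottomLeftClosest.x,
                    f.p[i].b >= 0 ? box.TopRightFurthest.y : box.BottomLeftClosest.y,
                    f.p[i].c >= 0 ? box.TopRightFurthest.z : box.BottomLeftClosest.z);
                if (D3DXPlaneDotCoord(&f.p[i], &v) < 0)
                    return false; // fully outside this plane
            }
            return true; // intersecting or inside
        }

    If the octree only supports box queries, this test can still be applied as a per-object filter on the (deliberately loose) box query result, which alone should cut the 42 objects down considerably.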

    Read the article

  • MarteEngine Tile Collision

    - by opiop65
    I need to add collision to my tile map using MarteEngine. MarteEngine is built on top of Slick2D. Here's my tile generation code:

        public void render(GameContainer gc, StateBasedGame game, Graphics g) throws SlickException {
            for (int x = 0; x < 16; x++) {
                for (int y = 0; y < 16; y++) {
                    map[x][y] = AIR;
                    air.draw(x * GameWorld.tilesize, y * GameWorld.tilesize);
                }
            }
            for (int x = 0; x < 16; x++) {
                for (int y = 7; y < 8; y++) {
                    map[x][y] = GRASS;
                    grass.draw(x * tilesize, y * tilesize);
                }
            }
            for (int x = 0; x < 16; x++) {
                for (int y = 8; y < 10; y++) {
                    map[x][y] = DIRT;
                    dirt.draw(x * tilesize, y * tilesize);
                }
            }
            for (int x = 0; x < 16; x++) {
                for (int y = 10; y < 16; y++) {
                    map[x][y] = STONE;
                    stone.draw(x * tilesize, y * tilesize);
                }
            }
            super.render(gc, game, g);
        }

    And one of my tile classes (they're all the same; only the image names differ):

        package MarteEngine;

        import org.newdawn.slick.Image;
        import org.newdawn.slick.SlickException;
        import it.randomtower.engine.entity.Entity;

        public class Grass extends Entity {
            public static Image grass = null;

            public Grass(float x, float y) throws SlickException {
                super(x, y);
                grass = new Image("res/grass.png");
                setHitBox(0, 0, 50, 50);
                addType(SOLID);
            }
        }

    I tried to do it like this:

        for (int x = 0; x < 16; x++) {
            for (int y = 7; y < 8; y++) {
                map[x][y] = GRASS;
                Grass.grass.draw(x * tilesize, y * tilesize);
            }
        }

    But it gave me a NullPointerException. No idea why; everything looks initialized, right? I would be very grateful for some help!

    Read the article

  • No drivers listed?/How to install all drivers? 12.10

    - by madmike59
    So I go to System Settings in Ubuntu 12.10 and want to install my drivers, but nothing is listed under Additional Drivers. My LAN doesn't work; it doesn't even pick up that I'm plugged in through an Ethernet cord. I have a GTX 670M with 3 GB GDDR5 for a video card and would like to use that. I just need help; I'm pretty new to Ubuntu. OK, when I looked at that other question I ran sudo lspci -nn, and this is what I got:

        madmike@Mike-GT70:~$ sudo lspci -nn
        00:00.0 Host bridge [0600]: Intel Corporation 3rd Gen Core processor DRAM Controller [8086:0154] (rev 09)
        00:01.0 PCI bridge [0604]: Intel Corporation Xeon E3-1200 v2/3rd Gen Core processor PCI Express Root Port [8086:0151] (rev 09)
        00:02.0 VGA compatible controller [0300]: Intel Corporation 3rd Gen Core processor Graphics Controller [8086:0166] (rev 09)
        00:14.0 USB controller [0c03]: Intel Corporation 7 Series/C210 Series Chipset Family USB xHCI Host Controller [8086:1e31] (rev 04)
        00:16.0 Communication controller [0780]: Intel Corporation 7 Series/C210 Series Chipset Family MEI Controller #1 [8086:1e3a] (rev 04)
        00:1a.0 USB controller [0c03]: Intel Corporation 7 Series/C210 Series Chipset Family USB Enhanced Host Controller #2 [8086:1e2d] (rev 04)
        00:1b.0 Audio device [0403]: Intel Corporation 7 Series/C210 Series Chipset Family High Definition Audio Controller [8086:1e20] (rev 04)
        00:1c.0 PCI bridge [0604]: Intel Corporation 7 Series/C210 Series Chipset Family PCI Express Root Port 1 [8086:1e10] (rev c4)
        00:1c.2 PCI bridge [0604]: Intel Corporation 7 Series/C210 Series Chipset Family PCI Express Root Port 3 [8086:1e14] (rev c4)
        00:1c.4 PCI bridge [0604]: Intel Corporation 7 Series/C210 Series Chipset Family PCI Express Root Port 5 [8086:1e18] (rev c4)
        00:1d.0 USB controller [0c03]: Intel Corporation 7 Series/C210 Series Chipset Family USB Enhanced Host Controller #1 [8086:1e26] (rev 04)
        00:1f.0 ISA bridge [0601]: Intel Corporation HM77 Express Chipset LPC Controller [8086:1e57] (rev 04)
        00:1f.2 SATA controller [0106]: Intel Corporation 7 Series Chipset Family 6-port SATA Controller [AHCI mode] [8086:1e03] (rev 04)
        00:1f.3 SMBus [0c05]: Intel Corporation 7 Series/C210 Series Chipset Family SMBus Controller [8086:1e22] (rev 04)
        01:00.0 VGA compatible controller [0300]: NVIDIA Corporation Device [10de:1213] (rev ff)
        02:00.0 Ethernet controller [0200]: Atheros Communications Inc. Device [1969:e091] (rev 13)
        03:00.0 Network controller [0280]: Intel Corporation Centrino Wireless-N 2230 [8086:0887] (rev c4)
        04:00.0 Unassigned class [ff00]: Realtek Semiconductor Co., Ltd. RTS5209 PCI Express Card Reader [10ec:5209] (rev 01)
        04:00.1 SD Host controller [0805]: Realtek Semiconductor Co., Ltd. RTS5209 PCI Express Card Reader [10ec:5209] (rev 01)

    Let me know if you need any more info. Sorry it took so long.

    Read the article

  • Nvidia driver overscan issue second monitor via dvi-d cable

    - by benmichael
    OK, I know that I have a bit of a bizarre setup, but here goes. I have an old laptop, an HP Pavilion 6000. The graphics card in there is a GeForce 7150M. The monitor connection is an old 18-pin connector. The external monitor I use is a Samsung SyncMaster 2333. Don't ask me why, but this monitor only has a DVI-D connection (yes, I have searched it). So I have the monitor plugged into the laptop. If I use any of the Nvidia proprietary drivers and try to set the resolution up to 1920x1080 (the monitor's native resolution), I get a massive overscan issue. Over the years I have tried to get this to work, tinkering with my xorg.conf to death. I have also tried this on every Ubuntu since 10.04, on all the corresponding Lubuntus, and on all the Linux Mints since Lisa: exactly the same issue. I have even tried it in Windows and it works perfectly there (although I did get the error once, but was unable to reproduce it). Using the open-source drivers it works perfectly if and only if I switch off the laptop monitor (this makes no difference with the Nvidia drivers). I would have happily gone on using the open-source drivers, except that since upgrading to Lubuntu 12.10, the open-source drivers make my monitor completely hazy and show the same overscan issue until I (through the haze, and only because I know where things are) go to the monitor settings, activate the laptop's monitor, then deactivate it, and suddenly it comes right. I have to do this every time. So I have to find a way to fix one of them, so I may as well tackle the proprietary drivers, hence this overlong question. Amidst other things, I have tried nvidia-settings, but because the monitor is connected via the 18-pin connector it is detected as a VGA monitor, and I am not given overscan-correction options. I have tried custom modelines (although there are always more of those to try), I have tried using xrandr, and I have tried all the FlatPanelOptions. What I have not tried is a Gentoo build, as I no longer have time for that installation, but up to about three years ago, when I ran Gentoo exclusively, I did not have this issue. Below is a link to an image with a red highlight around the portion of the screen visible to me; the numbers around it are the numbers of pixels which are cut off. This does seem to drift a few pixels every now and then. Thanks in advance. Nvidia driver issue image

    Read the article

  • concurrency::extent<N> from amp.h

    - by Daniel Moth
    Overview: We saw in a previous post how index<N> represents a point in N-dimensional space, and in this post we'll see how to define the N-dimensional space itself. With C++ AMP, an N-dimensional space can be specified with the template class extent<N>, where you define the size of each dimension. From a look-and-feel perspective, you'd expect the programmatic interface of a point type and a size type to be similar (even though the concepts are different). Indeed, exactly like index<N>, extent<N> is essentially a coordinate vector of N integers ordered from most- to least-significant, BUT each integer represents the size for that dimension (and hence cannot be negative). So, if you read the description of index, you won't be surprised by the description of extent<N> below. There is the rank field returning the value of N you passed as the template parameter. You can construct one extent from another (via the copy constructor or the assignment operator), you can construct it by passing an integer array, or via convenience constructor overloads for 1-, 2- and 3-dimension extents. Note that the parameterless constructor creates an extent of the specified rank with all bounds initialized to 0. You can access the components of the extent through the subscript operator (passing it an integer). You can perform some arithmetic operations between extent objects through operator overloading, i.e. ==, !=, +=, -=, +, -. There are operator overloads so that you can perform operations between an extent and an integer: -- (pre- and post-decrement), ++ (pre- and post-increment), %=, *=, /=, +=, -=; and, finally, there are additional overloads for plus and minus (+, -) between extent<N> and index<N> objects, returning a new extent object as the result. In addition to the usual suspects, extent offers a contains function that tests if an index is within the bounds of the extent (assuming an origin of zero). It also has a size function that returns the total linear size of this extent<N> in units of elements.

    Example code:

        extent<2> e(3, 4);
        _ASSERT(e.rank == 2);
        _ASSERT(e.size() == 3 * 4);
        e += 3;
        e[1] += 6;
        e = e + index<2>(3, -4);
        _ASSERT(e == extent<2>(9, 9));
        _ASSERT( e.contains(index<2>(8, 8)));
        _ASSERT(!e.contains(index<2>(8, 9)));

    grid<N>: Our upcoming pre-release bits also have a type similar to extent, grid<N>. The way you create a grid is by passing it an extent, e.g.

        extent<3> e(4, 2, 6);
        grid<3> g(e);

    I am not going to dive deeper into grid; suffice for now to think of grid<N> simply as an alias for the extent<N> object, which you create when you encounter a function that expects a grid object instead of an extent object.

    Usage: The extent class on its own simply defines the size of the N-dimensional space. We'll see in future posts that when you create containers (arrays) and wrappers (array_views) for your data, it is an extent<N> object that you'll need to use to create those (and an index<N> object to index into them). We'll also see that it is a grid<N> object that you pass to the new parallel_for_each function that I'll cover in the next post. Comments about this post by Daniel Moth are welcome at the original blog.

    Read the article

  • RTS Game Style Application [closed]

    - by Daniel Wynand van Wyk
    My question may seem somewhat odd, but I hope that my specifications will clarify EXACTLY what it is that I am after. I need some help choosing the right tooling for a particular endeavour. My background is in desktop application development and large back-end systems; I have worked primarily on the Microsoft stack using C# and the .NET Framework. My goal is to develop a 2D, RTS-style, interactive office simulation. The simulation will model various office spaces, office equipment, employees and their interactions with one another. The idea is to abstract the concept of an office completely. Under the hood the application will do many things that are nothing like a game: P2P networking, VPN tunnelling, streaming video, instant messaging, document collaboration, remote screen sharing, file sharing, virus scanning, VOIP, document scanning, faxing, emailing, distributed computing, content management and much more! A somewhat similar thing has been attempted by IBM, which created a virtual office in Second Life; if their attempt were a game, its gameplay would be notably horrible, to say the least! The users/players will drive and control my application through the various objects modelled in the simulation. A single application capable of performing all of these various tasks would be a nightmare to navigate for even the most expert user. Using the concept of a game, I can easily separate functionality by assigning it to objects that relate one-to-one with their real-world counterparts. This can greatly simplify computing for novice users, with many added benefits in terms of visibility, transparency of process and centralized configuration. My hope is to make complex computing tasks accessible to all kinds of users and to greatly reduce the cognitive load associated with using the many different utilities and applications inside office settings; the complexity is therefore limited to the complexity of the space in which you find yourself. I want the application to target as many platforms as possible and run on computers that have no accelerated graphics capabilities. The simulation won't contain any of the fancy eye candy you find in modern games; to the contrary, my "game" will purposefully be clean and simple. The closest thing I could imagine would be an old game like "Theme Hospital" or the first instalment of "The Sims". All the content will be pre-created, not user-generated like Second Life, and new functionality will be added via a plugin system. Given my background and the nature of my "game", I would like to spend most of my time writing code that does not have to do with the simulated office, as the "game" is really just a glorified application menu. I have done much reading about existing engines, frameworks and tools, and I need the help of an experienced game developer who has tried and tested various products over the years and can guide me in the right direction given my very particular needs. I would appreciate any help I can get!

    Read the article

  • Should I continue to pursue programming based on my experience?

    - by El Be
    The reason I ask this question is that I am not sure whether my troubles come from a lack of confidence or from something much deeper, like a lack of passion. I'm hoping experienced programmers and developers can help identify the cause of my troubles. To be brief: my undergraduate major was in computer science at a small school, and I had the highest GPA among my year's computer science students. The first times I ever programmed were once in the 5th grade (using Logo) and then as a freshman in college. I enjoyed programming when I was in school. Then I did an internship where I was expected to produce image-processing software and program microchips. I was unsuccessful, produced few results, and hated the job, because I had to figure out everything for myself, did not have any help, and there was a lot of pressure to produce results. Although I tried, I could not figure out what to do and was stuck all the time, which made me dislike the job. When the internship ended I went to a PhD program in computer science at a prestigious computer science school. I had a very hard time with the coursework and met people who had been programming since they were 6 and had written plenty of applications in their spare time (which I never did, although I tried). I even met many sophomores who understood more than I did. The combination of this and other things has made me feel that programming is not for me, but sometimes I still consider a career in programming, because of the career potential (and not just the money). Based on my experience, do you believe my confidence has merely been shaken and I should continue to prepare for a programming career, or do you see a lack of passion that would make it tough to continue? Thank you for reading and for your advice. Update: thank you for everyone's advice so far! Also: I dropped out of the PhD program for computer science and switched to a master's in computer graphics. It's more applied, but I still find it hard to stay motivated (due to either lack of confidence or passion); since programming is such a big field, I am looking for that niche area where I feel good programming.

    Read the article

  • Movement prediction for non-shooters

    - by ShadowChaser
    I'm working on an isometric (2D) game with moderate-scale multiplayer: 20-30 players. I've had some difficulty getting a good movement-prediction implementation in place. Right now, clients are authoritative for their own position. The server performs validation and broad-scale cheat detection, and I fully realize that the system will never be fully robust against cheating; however, the performance and implementation tradeoffs work well for me right now. Given that I'm dealing with sprite graphics, the game has 8 defined directions rather than free movement. Whenever the player changes direction or speed (walk, run, stop), a "true" 3D velocity is set on the entity and a packet is sent to the server with the new movement state. In addition, every 250ms additional packets are transmitted with the player's current position, both for state updates on the server and for client prediction. After the server validates the packet, it gets automatically distributed to all of the other "nearby" players. Client-side, all entities with non-zero velocity (i.e. moving entities) are tracked and updated by a rudimentary "physics" system: basically nothing more than changing the position by the velocity according to the elapsed time slice (40ms or so). What I'm struggling with is how to implement clean movement prediction. I have the nagging suspicion that I've made a design mistake somewhere. I've been over the Unreal, Half-Life, and all the other movement prediction/lag compensation articles I could find, but they all seem geared toward shooters: "Don't send each control change, send updates every 120ms, server is authoritative, client predicts, etc." Unfortunately, that style of design won't work well for me: there's no 3D environment, so each individual state change is important. 1) Most of the samples I saw tightly couple movement prediction right into the entities themselves, for example by storing the previous state along with the current state. I'd like to avoid that and keep entities with their "current state" only. Is there a better way to handle this? 2) What should happen when the player stops? I can't interpolate to the correct position, since they might need to walk backwards or in another strange direction if their position is too far ahead. 3) What should happen when entities collide? If the current player collides with something, the answer is simple: just stop the player from moving. But what happens if two entities take up the same space on the server? What if the local prediction causes a remote entity to collide with the player or another entity; do I stop them as well? If the prediction had the misfortune of sticking them in front of a wall that the player has gone around, the prediction will never be able to compensate, and once the error gets too high the entity will snap to the new position.
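    As a point of reference for question 1, the "rudimentary physics" step described above, plus one common way to fold in a server correction without a hard snap, can stay almost stateless: keep only the last-received authoritative position per entity. The sketch below is illustrative only; all names and the 0.1 blend factor are assumptions, not the poster's code:

        struct Vec3 { float x = 0, y = 0, z = 0; };

        struct Entity {
            Vec3 position;
            Vec3 velocity;     // updated when a direction/speed packet arrives
            Vec3 serverTarget; // last authoritative position (every ~250ms)
            bool hasTarget = false;
        };

        // Dead reckoning plus smoothed correction, run once per ~40ms slice.
        void tick(Entity& e, float dt)
        {
            e.position.x += e.velocity.x * dt;
            e.position.y += e.velocity.y * dt;
            e.position.z += e.velocity.z * dt;

            // Bleed prediction error off gradually instead of snapping, so a
            // stopped or redirected entity glides to the authoritative spot.
            if (e.hasTarget) {
                const float k = 0.1f; // correction strength per tick (assumed)
                e.position.x += (e.serverTarget.x - e.position.x) * k;
                e.position.y += (e.serverTarget.y - e.position.y) * k;
                e.position.z += (e.serverTarget.z - e.position.z) * k;
            }
        }

        void onServerUpdate(Entity& e, const Vec3& authoritative)
        {
            e.serverTarget = authoritative;
            e.hasTarget = true;
        }

    A gradual correction like this also softens case 2 (stopping): rather than walking the entity backwards, the residual error simply drains away over a few ticks.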

    Read the article

  • Understanding the levels of computing

    - by RParadox
    Sorry for my confused question; I'm looking for some pointers. Up to now I have been working mostly with Java and Python on the application layer, and I have only a vague understanding of operating systems and hardware. I want to understand much more about the lower levels of computing, but it gets really overwhelming somehow. At university I took a class about microprogramming, i.e. how processors are hard-wired to implement the ASM codes. Up to now I always thought I wouldn't get more done if I learned more about the "low level". One question I have is: how is it even possible that hardware gets hidden almost completely from the developer? Is it accurate to say that the operating system is a software layer over the hardware? One small example: in programming I have never come across the need to understand what the L2 or L3 cache is. For the typical business application environment one almost never needs to understand assembler and the lower levels of computing, because nowadays there is a technology stack for almost anything. I guess the whole point of these lower levels is to provide an interface to higher levels. On the other hand, I wonder how much influence the lower levels can have; for example, this whole graphics computing thing. Then, on the other side, there is the theoretical computer science branch, which works on abstract computing models. However, I have also rarely encountered situations where I found it helpful to think in the categories of complexity models, proof verification, etc. I sort of know that there is a complexity class called NP, and that NP problems are kind of impossible to solve for a big number of N. What I'm missing is a reference framework for thinking about these things. It seems to me that there are all kinds of different camps that rarely interact. The last few weeks I have been reading about security issues. Here, somehow, many of the different layers come together. Attacks and exploits almost always occur on the lower levels, so in this case it is necessary to learn about the details of the OSI layers, the inner workings of an OS, etc.

    Read the article

  • Complete Beginner to Game Programming and Unreal Engine 4, Looking For Advice [on hold]

    - by onemic
    I am currently a 2nd-year programming student (I've just finished my first year, so I will be starting my second year in September) and have mainly learned C and C++ in my classes. In terms of what I know of C++: general inheritance, polymorphism, operator overloading, iterators, a little bit about templates (only class and function templates), etc., but not the more advanced topics like linked lists and other sequential containers (containers in general, I guess), enumerations, most of the standard library (other than strings and vectors), and probably a bunch of other stuff I don't even know about yet. I subscribed to Unreal Engine 4 because I was very intrigued by the Unreal Tournament announcement earlier this month, especially after hearing that UE4 is going completely C++. Of course, my end goal in doing this programming program is eventually to go into game/graphics programming. Since it's my summer off, I thought: what better way to apply some of my skills than a personal project, so that I actually have a firmer understanding of C++ beyond what my professors tell me. My questions are these: What would be the best way to start making a small personal game in UE4 as a project for the summer? What should I be aiming for, especially as someone who is still learning C++? Should I focus on making a simple 2D game rather than a 3D one to get started? Seeing the Flappy Chicken showcase intrigued me, because before that I thought the UE engine was pretty much pigeonholed into FPS games. What should my expectations be going into UE4 and a game engine for the first time? (UE4 will be my first foray into making a game.) What can I expect to gain from making things in UE4, in terms of making games and in terms of further fleshing out my knowledge of C++? Would you recommend I start off 100% using C++ for scripting, or using the visual blueprints? Since I'm not a designer, how will I be able to add objects and designs to my game? For someone at my level, is retaining the UE4 subscription worth it, or is it better to cancel and resubscribe when I learn enough about UE4 and C++? Lastly, is there anything to be gained, in terms of knowledge or insight, from looking at the source code for UE4? I opened it in VS2013 but noticed that most of the files were C# files and not .cpp files. Thanks in advance for taking the time to answer.

    Read the article

  • OpenGL Fast-Object Instancing Error

    - by HJ Media Studios
    I have some code that loops through a set of objects and renders instances of those objects. The list of objects that needs to be rendered is stored as a std::map<MeshResource*, std::vector<MeshRenderer*>>, where an object of class MeshResource contains the vertices and indices with the actual data, and an object of class MeshRenderer defines the point in space at which the mesh is to be rendered. My rendering code is as follows:

        glDisable(GL_BLEND);
        glEnable(GL_CULL_FACE);
        glDepthMask(GL_TRUE);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        glEnable(GL_DEPTH_TEST);
        for (std::map<MeshResource*, std::vector<MeshRenderer*> >::iterator it = renderables.begin(); it != renderables.end(); it++)
        {
            it->first->setupBeforeRendering();
            cout << "<";
            for (unsigned long i = 0; i < it->second.size(); i++)
            {
                // Pass in an identity matrix to the vertex shader - used here only
                // for debugging purposes; the real code correctly inputs any matrix.
                uniformizeModelMatrix(Matrix4::IDENTITY);
                /**
                 * StartHere fix rendering problem.
                 * Ruled out:
                 * Vertex buffers correctly.
                 * Index buffers correctly.
                 * Matrices correct?
                 */
                it->first->render();
            }
            it->first->cleanupAfterRendering();
        }
        geometryPassShader->disable();
        glDepthMask(GL_FALSE);
        glDisable(GL_CULL_FACE);
        glDisable(GL_DEPTH_TEST);

    The function in MeshResource that handles setting up the uniforms is as follows:

        void MeshResource::setupBeforeRendering()
        {
            glEnableVertexAttribArray(0);
            glEnableVertexAttribArray(1);
            glEnableVertexAttribArray(2);
            glEnableVertexAttribArray(3);
            glEnableVertexAttribArray(4);
            glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, iboID);
            glBindBuffer(GL_ARRAY_BUFFER, vboID);
            glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), 0);                           // vertex position
            glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (const GLvoid*) 12);          // vertex normal
            glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, sizeof(Vertex), (const GLvoid*) 24);          // UV layer 0
            glVertexAttribPointer(3, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (const GLvoid*) 32);          // vertex color
            glVertexAttribPointer(4, 1, GL_UNSIGNED_SHORT, GL_FALSE, sizeof(Vertex), (const GLvoid*) 44); // material index
        }

    The code that renders the object is this:

        void MeshResource::render()
        {
            glDrawElements(GL_TRIANGLES, geometry->numIndices, GL_UNSIGNED_SHORT, 0);
        }

    And the code that cleans up is this:

        void MeshResource::cleanupAfterRendering()
        {
            glDisableVertexAttribArray(0);
            glDisableVertexAttribArray(1);
            glDisableVertexAttribArray(2);
            glDisableVertexAttribArray(3);
            glDisableVertexAttribArray(4);
        }

    The end result of this is that I get a black screen, although the end of my rendering pipeline after the rendering code (essentially just drawing axes and lines on the screen) works properly, so I'm fairly sure it's not an issue with the passing of uniforms. If, however, I change the code slightly so that the rendering code calls the setup immediately before rendering, like so:

        void MeshResource::render()
        {
            setupBeforeRendering();
            glDrawElements(GL_TRIANGLES, geometry->numIndices, GL_UNSIGNED_SHORT, 0);
        }

    the program works as desired. I don't want to have to do this, though, as my aim is to set up vertex, material, etc. data once per object type and then render each instance updating only the transformation information. uniformizeModelMatrix works as follows:

        void RenderManager::uniformizeModelMatrix(Matrix4 matrix)
        {
            glBindBuffer(GL_UNIFORM_BUFFER, globalMatrixUBOID);
            glBufferSubData(GL_UNIFORM_BUFFER, 0, sizeof(Matrix4), matrix.ptr());
            glBindBuffer(GL_UNIFORM_BUFFER, 0);
        }
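    One pattern worth trying, offered as a guess rather than a confirmed diagnosis: in a GL 3.0+ context the attribute and element-buffer state set in setupBeforeRendering can be captured once in a vertex array object at load time, which removes the per-frame setup and makes render() immune to any binding churn that happens between setup and draw. A minimal sketch, assuming a new GLuint vaoID member on MeshResource (everything else mirrors the post's code):

        // Build once, e.g. right after creating vboID and iboID.
        void MeshResource::createVertexArray()
        {
            glGenVertexArrays(1, &vaoID);
            glBindVertexArray(vaoID);
            glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, iboID); // stored in the VAO
            glBindBuffer(GL_ARRAY_BUFFER, vboID);
            for (GLuint i = 0; i < 5; ++i)
                glEnableVertexAttribArray(i);
            glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), 0);                           // position
            glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (const GLvoid*) 12);          // normal
            glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, sizeof(Vertex), (const GLvoid*) 24);          // UV 0
            glVertexAttribPointer(3, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (const GLvoid*) 32);          // color
            glVertexAttribPointer(4, 1, GL_UNSIGNED_SHORT, GL_FALSE, sizeof(Vertex), (const GLvoid*) 44); // material index
            glBindVertexArray(0); // unbind so later calls can't disturb the captured state
        }

        // Rendering then reduces to bind + draw per instance.
        void MeshResource::render()
        {
            glBindVertexArray(vaoID);
            glDrawElements(GL_TRIANGLES, geometry->numIndices, GL_UNSIGNED_SHORT, 0);
            glBindVertexArray(0);
        }

    If the VAO version only renders correctly when rebuilt per draw as well, that would point the investigation back at the uniform path instead.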

    Read the article

  • Why is my primitive xna square not drawn/shown?

    - by Mech0z
    I have made this class to draw a rectangle, but I can't get it to be drawn. I have no issues displaying a 3D model created in 3ds Max, but showing these primitives seems much harder. I use this to create it:

        board = new Board(Vector3.Zero, 1000, 1000, Color.Yellow);

    And here is the implementation:

        using System;
        using System.Net;
        using System.Windows;
        using System.Windows.Controls;
        using System.Windows.Documents;
        using System.Windows.Ink;
        using System.Windows.Input;
        using System.Windows.Shapes;
        using Quadro.Models;
        using Microsoft.Xna.Framework;
        using Microsoft.Xna.Framework.Graphics;

        namespace Quadro
        {
            public class Board : IGraphicObject
            {
                // Private fields
                private Vector3 modelPosition;
                private BasicEffect effect;
                private VertexPositionColor[] vertices;
                private Matrix rotationMatrix;
                private GraphicsDevice graphicsDevice;
                private Matrix cameraProjection;

                // Constructor
                public Board(Vector3 position, float length, float width, Color color)
                {
                    var _color = color;
                    vertices = new VertexPositionColor[6];
                    vertices[0].Position = new Vector3(position.X, position.Y, position.Z);
                    vertices[1].Position = new Vector3(position.X, position.Y + width, position.Z);
                    vertices[2].Position = new Vector3(position.X + length, position.Y, position.Z);
                    vertices[3].Position = new Vector3(position.X + length, position.Y, position.Z);
                    vertices[4].Position = new Vector3(position.X, position.Y + width, position.Z);
                    vertices[5].Position = new Vector3(position.X + length, position.Y + width, position.Z);
                    for (int i = 0; i < vertices.Length; i++)
                    {
                        vertices[i].Color = color;
                    }
                    initFields();
                }

                private void initFields()
                {
                    graphicsDevice = SharedGraphicsDeviceManager.Current.GraphicsDevice;
                    effect = new BasicEffect(graphicsDevice);
                    modelPosition = Vector3.Zero;
                    float screenWidth = (float)graphicsDevice.Viewport.Width;
                    float screenHeight = (float)graphicsDevice.Viewport.Height;
                    float aspectRatio = screenWidth / screenHeight;
                    this.cameraProjection = Matrix.CreatePerspectiveFieldOfView(MathHelper.ToRadians(45.0f), aspectRatio, 1.0f, 10000.0f);
                    this.rotationMatrix = Matrix.Identity;
                }

                // Public methods
                public void Update(GameTimerEventArgs e)
                {
                }

                public void Draw(Vector3 cameraPosition, GameTimerEventArgs e)
                {
                    Matrix cameraView = Matrix.CreateLookAt(cameraPosition, Vector3.Zero, Vector3.Up);
                    foreach (EffectPass pass in effect.CurrentTechnique.Passes)
                    {
                        pass.Apply();
                        effect.World = rotationMatrix * Matrix.CreateTranslation(modelPosition);
                        effect.View = cameraView;
                        effect.Projection = cameraProjection;
                        graphicsDevice.DrawUserPrimitives(PrimitiveType.TriangleList, vertices, 0, 2, VertexPositionColor.VertexDeclaration);
                    }
                }

                public void Rotate(Matrix rotationMatrix)
                {
                    this.rotationMatrix = rotationMatrix;
                }

                public void Move(Vector3 moveVector)
                {
                    this.modelPosition += moveVector;
                }
            }
        }

    Read the article

  • Entity System with C++ templates

    - by tommaisey
    I've been getting interested in the Entity/Component style of game programming, and I've come up with a design in C++ which I'd like a critique of. I decided to go with a fairly pure Entity system, where entities are simply an ID number. Components are stored in a series of vectors, one for each Component type. However, I didn't want to have to add boilerplate code for every new Component type I added to the game, nor did I want to use macros to do this, which frankly scare me. So I've come up with a system based on templates and type hinting. But there are some potential issues I'd like to check before I spend ages writing this (I'm a slow coder!). All Components derive from a Component base class. This base class has a protected constructor that takes a string parameter. When you write a new derived Component class, you must initialise the base with the name of your new class in a string. When you first instantiate a new DerivedComponent, it adds the string to a static hashmap inside Component, mapped to a unique integer id. When you subsequently instantiate more Components of the same type, no action is taken. The result (I think) should be a static hashmap with the name of each class derived from Component that you instantiate at least once, mapped to a unique id, which can be obtained with the static method Component::getTypeId("DerivedComponent"). Phew. The next important part is TypedComponentList<typename PropertyType>. This is basically just a wrapper around an std::vector<typename PropertyType> with some useful methods. It also contains a hashmap of entity ID numbers to slots in the array, so we can find Components by their entity owner. Crucially, TypedComponentList<> is derived from the non-template class ComponentList. This allows me to maintain a list of pointers to ComponentList in my main ComponentManager, which actually point to TypedComponentLists with different template parameters (sneaky). The ComponentManager has template functions such as:

        template <typename ComponentType>
        void addProperty(ComponentType& component, int componentTypeId, int entityId);

    and:

        template <typename ComponentType>
        TypedComponentList<ComponentType>* getComponentList(int componentTypeId);

    which deal with casting from ComponentList to the correct TypedComponentList for you. So to get a list of a particular type of Component you call:

        TypedComponentList<MyComponent>* list =
            componentManager.getComponentList<MyComponent>(Component::getTypeId("MyComponent"));

    which I'll admit looks pretty ugly. Bad points of the design: if a user of the code writes a new Component class but supplies the wrong string to the base constructor, the whole system will fail; each time a new Component is instantiated, we must check a hashed string to see whether that Component type has been instantiated before; the extensive use of templates will probably generate a lot of assembly, and I don't know how well the compiler will be able to minimise this; and you could consider the whole system a bit complex - perhaps premature optimisation? But I want to use this code again and again, so I want it to be performant. Good points of the design: Components are stored in typed vectors, but they can also be found by using their entity owner id as a hash, which means we can iterate them fast and minimise cache misses, but also skip straight to the Component we need if necessary; and we can freely add Components of different types to the system without having to add and manage new Component vectors by hand. What do you think? Do the good points outweigh the bad?
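    One way to sidestep the fragile string registration (the first bad point): derive the id from the type itself with a function-local static counter, so there is no name to mistype and no hashed-string lookup on instantiation. A minimal sketch; all names here are invented for illustration and are not part of the post's design:

        #include <cstddef>
        #include <cstdio>

        inline std::size_t nextComponentTypeId()
        {
            static std::size_t next = 0;
            return next++;
        }

        // Each distinct T is assigned the next integer the first time
        // componentTypeId<T>() is called; later calls return the same value.
        // Note the ids depend on first-call order, so they are stable within
        // a run but not across runs (i.e. not suitable for serialization).
        template <typename T>
        std::size_t componentTypeId()
        {
            static const std::size_t id = nextComponentTypeId();
            return id;
        }

        struct PositionComponent {};
        struct VelocityComponent {};

        int main()
        {
            // Prints something like "0 1 0": distinct per type, repeatable.
            std::printf("%zu %zu %zu\n",
                componentTypeId<PositionComponent>(),
                componentTypeId<VelocityComponent>(),
                componentTypeId<PositionComponent>());
        }

    The manager call from the post would then read getComponentList<MyComponent>(componentTypeId<MyComponent>()), with no string to keep in sync.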

    Read the article

  • XNA - 3D AABB collision detection and response

    - by fastinvsqrt
    I've been fiddling around with 3D AABB collision in my voxel engine for the last couple of days, and every method I've come up with thus far has been almost correct, but none has quite worked exactly the way I hoped it would. Currently what I do is get two bounding boxes for my entity, one modified by the X translation component and the other by the Z component, and check whether each collides with any of the surrounding chunks (chunks have their own octrees that are populated only with blocks that support collision). If there is a collision, then I cast out rays into that chunk to get the shortest collision distance, and set the translation component to that distance if the component is greater than the distance. The problem is that sometimes collisions aren't even registered. Here's a video on YouTube that I created showing what I mean. I suspect the problem may be that the rays I cast to get the collision distance are not where I think they are, but I'm not entirely sure what would be wrong with them if they are indeed the problem. Here is my code for collision detection and response in the X direction (the Z direction is basically the same):

        // create the XZ offset vector
        Vector3 offsXZ = new Vector3(
            ( _translation.X > 0.0f ) ? SizeX / 2.0f : ( _translation.X < 0.0f ) ? -SizeX / 2.0f : 0.0f,
            0.0f,
            ( _translation.Z > 0.0f ) ? SizeZ / 2.0f : ( _translation.Z < 0.0f ) ? -SizeZ / 2.0f : 0.0f
        );

        // X physics
        BoundingBox boxx = GetBounds( _translation.X, 0.0f, 0.0f );
        if ( _translation.X > 0.0f )
        {
            foreach ( Chunk chunk in surrounding )
            {
                if ( chunk.Collides( boxx ) )
                {
                    float dist = GetShortestCollisionDistance( chunk, Vector3.Right, offsXZ ) - 0.0001f;
                    if ( dist < _translation.X )
                    {
                        _translation.X = dist;
                    }
                }
            }
        }
        else if ( _translation.X < 0.0f )
        {
            foreach ( Chunk chunk in surrounding )
            {
                if ( chunk.Collides( boxx ) )
                {
                    float dist = GetShortestCollisionDistance( chunk, Vector3.Left, offsXZ ) - 0.0001f;
                    if ( dist < -_translation.X )
                    {
                        _translation.X = -dist;
                    }
                }
            }
        }

    And here is my implementation of GetShortestCollisionDistance:

        private float GetShortestCollisionDistance( Chunk chunk, Vector3 rayDir, Vector3 offs )
        {
            int startY = (int)( -SizeY / 2.0f );
            int endY = (int)( SizeY / 2.0f );
            int incY = (int)Cube.Size;
            float dist = Chunk.Size;
            for ( int y = startY; y <= endY; y += incY )
            {
                // Position is the center of the entity's bounding box
                Ray ray = new Ray( new Vector3( Position.X + offs.X, Position.Y + offs.Y + y, Position.Z + offs.Z ), rayDir );

                // Chunk.GetIntersections(Ray) returns Dictionary<Block, float?>
                foreach ( var pair in chunk.GetIntersections( ray ) )
                {
                    if ( pair.Value.HasValue && pair.Value.Value < dist )
                    {
                        dist = pair.Value.Value;
                    }
                }
            }
            return dist;
        }

    I realize some of this code can be consolidated to help with speed, but my main concern right now is to get this bit of physics programming to actually work.

    Read the article

  • Drawing a line using openGL does not work

    - by vikasm
    I am a beginner in OpenGL and tried to write my first program to draw some points and a line. I can see that the window opens with a white background, but no line is drawn. I was expecting to see red colored dots (pixels) and a red line (because of glColor3f(1.0, 0.0, 0.0);), but nothing is seen. Here is my code:

        void init2D(float r, float g, float b)
        {
            glClearColor(r, g, b, 0.0);
            glMatrixMode(GL_PROJECTION);
            gluOrtho2D(0.0, 200.0, 0.0, 150.0);
        }

        void display()
        {
            glClear(GL_COLOR_BUFFER_BIT);
            glColor3f(1.0, 0.0, 0.0);

            glBegin(GL_POINTS);
            for (int i = 0; i < 10; i++)
            {
                glVertex2i(10 + 5 * i, 110);
            }
            glEnd();

            // draw a line
            glBegin(GL_LINES);
            glVertex2i(10, 10);
            glVertex2i(100, 100);
            glEnd();

            glFlush();
        }

        int main(int argc, char** argv)
        {
            // Initialize GLUT
            glutInit(&argc, argv);
            // setup some memory buffers for our display
            glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB);
            // set the window size
            glutInitWindowSize(500, 500);
            // create the window with the title 'Points and Lines'
            glutCreateWindow("Points and Lines");
            init2D(0.0, 0.0, 0.0);
            glutDisplayFunc(display);
            glutMainLoop();
        }

    I wanted to verify that the GL context was opening properly, so I used this code:

        int main(int argc, char **argv)
        {
            glutInit(&argc, argv);
            glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB);
            glutInitWindowSize(500, 500);
            glutCreateWindow("Points and Lines");

            char *GL_version  = (char *)glGetString(GL_VERSION);
            puts(GL_version);
            char *GL_vendor   = (char *)glGetString(GL_VENDOR);
            puts(GL_vendor);
            char *GL_renderer = (char *)glGetString(GL_RENDERER);
            puts(GL_renderer);

            getchar();
            return 0;
        }

    And the output I got was:

        3.1.0 - Build 8.15.10.2345
        Intel
        Intel(R) HD Graphics Family

    Can someone point out what I am doing wrong? Thanks.
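
    One detail that may matter here: the display mode requests double buffering (GLUT_DOUBLE), but display() ends with glFlush(), which never presents the back buffer. A hedged sketch of the variant to try first - either switch to GLUT_SINGLE, or keep GLUT_DOUBLE and end the frame with a swap:

        void display()
        {
            glClear(GL_COLOR_BUFFER_BIT);
            glColor3f(1.0, 0.0, 0.0);

            glBegin(GL_LINES);
            glVertex2i(10, 10);
            glVertex2i(100, 100);
            glEnd();

            // With GLUT_DOUBLE, drawing goes to the back buffer; it only becomes
            // visible after a swap. glFlush() alone suffices for GLUT_SINGLE.
            glutSwapBuffers();
        }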

    Read the article

  • Collision disturbing the jumping mechanic in Java 2D game [on hold]

    - by user50931
    So I have been working on a 2D Java game recently and everything was going smoothly, until I reached a problem with the player's jumping mechanic. So far I've got the player to jump at a fixed rate and fall due to gravity. Here's the code for my Player class:

        public class Player extends GameObject {

            public Player(int x, int y, int width, int height, ObjectId id) {
                super(x, y, width, height, id);
            }

            @Override
            public void tick(ArrayList<GameObject> object) {
                if (go) {
                    x += vx;
                    y += vy;
                }
                if (vx < 0) {
                    facing = -1;
                } else if (vx > 0)
                    facing = 1;
                checkCollision(object);
                checkStance();
            }

            private void checkStance() {
                if (falling) { // gravity
                    jumping = false;
                    vy = speed / 2;
                }
                if (jumping) { // calculates how high the jump should be
                    vy = -speed * 2;
                    if (jumpY - y >= maxJumpHeight)
                        falling = true;
                }
            }

            private void checkCollision(ArrayList<GameObject> object) {
                for (int i = 0; i < object.size(); i++) {
                    GameObject tempObject = object.get(i);
                    if (tempObject.getId() == ObjectId.Ledge) {
                        if (getBoundsTop().intersects(tempObject.getBoundsAll())) { // top
                            y = tempObject.getY() + tempObject.getBoundsAll().height;
                            falling = true;
                        }
                        if (getBoundsRight().intersects(tempObject.getBoundsAll())) { // right
                            x = tempObject.getX() - width;
                        }
                        if (getBoundsLeft().intersects(tempObject.getBoundsAll())) { // left
                            x = tempObject.getX() + tempObject.getWidth();
                        }
                        if (getBoundsBottom().intersects(tempObject.getBoundsAll())) { // bottom
                            y = tempObject.getY() - height;
                            falling = false;
                            vy = 0;
                        } else {
                            falling = true;
                        }
                    }
                }
            }

            @Override
            public void render(Graphics g) {
                g.setColor(Color.BLACK);
                g.fillRect((int) x, (int) y, width, height);
            }

            @Override
            public Rectangle getBoundsAll() {
                return new Rectangle((int) x, (int) y, width, height);
            }

            public Rectangle getBoundsTop() {
                return new Rectangle((int) x, (int) y, width, height / 15);
            }

            public Rectangle getBoundsBottom() {
                return new Rectangle((int) x, (int) y + height - (height / 15), width, height / 15);
            }

            public Rectangle getBoundsLeft() {
                return new Rectangle((int) x, (int) y + height / 10, width / 8, height - (height / 5));
            }

            public Rectangle getBoundsRight() {
                return new Rectangle((int) x + width - (width / 8), (int) y + height / 10, width / 8, height - height / 5);
            }
        }

    My problem is that when I add the

        } else {
            falling = true;
        }

    branch to the bottom-collision check inside the loop over the ArrayList, it stops the player from jumping and keeps him on the ground. I've tried to find a way around this but haven't had any luck. Any suggestions?
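
    For context, the pattern usually suggested for this - accumulate a single on-ground verdict across the whole loop and write falling only once afterwards, so a later non-colliding ledge can't overwrite an earlier bottom hit - looks roughly like this (sketched in C++ rather than Java; the types are placeholders):

        #include <vector>

        // Illustrative stand-in for the post's GameObject; only the control
        // flow matters here, not the geometry.
        struct GameObject {
            bool touchesPlayerBottom;   // would be a rectangle-intersection test
        };

        // Collect the "standing on something" verdict across the whole loop,
        // then write 'falling' exactly once afterwards. In the original code,
        // the else-branch of a later object could overwrite a bottom hit
        // registered by an earlier one.
        void checkCollision(const std::vector<GameObject>& objects,
                            bool& falling, float& vy)
        {
            bool onGround = false;
            for (const GameObject& obj : objects) {
                // ... top/left/right responses would go here, as in the post ...
                if (obj.touchesPlayerBottom) {
                    onGround = true;
                    vy = 0.0f;
                }
            }
            falling = !onGround;   // one ledge in contact is enough to stand on
        }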

    Read the article

  • WiFi on Ubuntu 12.04 custom: downloading unbearably slow

    - by Mark
    iwconfig reports 11 Mbps, yet I've seen speeds as low as <1 KBps. This is the latest in my laundry list of Ubuntu problems on a dual-boot machine (CyberPowerPC custom, Intel i7-3820, NVIDIA GTX 570). I received it two days ago; Windows 7 is running fine, but I'm still having problems with Ubuntu. The browsing is intermittent but unacceptable, e.g. I could get to this site last night but I couldn't post this question. The downloading is unbearably slow; I can't download anything or install any packages because the speed is so slow. e.g. I am trying to install vim, which is inexplicably missing from my 12.04 install (add another one to the problems list), and the download speed reported in the terminal was 241 B/s. Yes, bytes. iwconfig reports 11 Mbps, which further adds to the confusion:

        User@ubuntu:~$ iwconfig
        lo        no wireless extensions.

        wlan0     IEEE 802.11bgn  ESSID:"linksys"
                  Mode:Managed  Frequency:2.437 GHz  Access Point: 00:18:39:76:2C:A1
                  Bit Rate=11 Mb/s   Tx-Power=20 dBm
                  Retry long limit:7   RTS thr:off   Fragment thr:off
                  Power Management:off
                  Link Quality=36/70  Signal level=-74 dBm
                  Rx invalid nwid:0  Rx invalid crypt:0  Rx invalid frag:0
                  Tx excessive retries:54  Invalid misc:18   Missed beacon:0

        eth0      no wireless extensions.

    Any ideas? I see this is a problem for a lot of people, but none of the online solutions have worked for me so far. e.g. one site recommends editing the ath9k.conf file in /etc/modprobe.d, yet this file isn't even in the folder:

        User@ubuntu:/$ cd etc/modprobe.d
        User@ubuntu:/etc/modprobe.d$ ls
        alsa-base.conf               blacklist-oss.conf
        blacklist-ath_pci.conf       blacklist-rare-network.conf
        blacklist.conf               blacklist-watchdog.conf
        blacklist-firewire.conf      dkms.conf
        blacklist-framebuffer.conf   nvidia-current_hybrid.conf
        blacklist-modem.conf         nvidia-graphics-drivers.conf

    I think the NVIDIA GPU might be mucking things up. I had the "blinking cursor" problem when installing in the first place, and then the monitor-out-of-range problem as well. I have my faithful Asus laptop, which is running Ubuntu 12.04 just fine. The only difference is that executing host -t SOA local in the terminal gives

        User@ubuntu:~$ host -t SOA local
        local has SOA record local. nobody.localhost. 42 86400 43200 604800 10800

    on my new machine, while on the laptop the command reports Host local. not found.

    Help would be most welcome, as I am in danger of reverting back to Windows. I'm seriously considering it. Sorry for the length - trying to show my effort in resolving the issue and include terminal snippets that might be helpful.
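
    In case it helps others: the advice I kept running into is that the file's absence is normal - you create /etc/modprobe.d/ath9k.conf yourself. A minimal sketch, assuming the adapter actually uses the ath9k driver (which lspci -k would confirm; this post doesn't establish that):

        # /etc/modprobe.d/ath9k.conf - created by hand; only applies if the
        # wireless adapter is driven by ath9k (check with: lspci -k)
        options ath9k nohwcrypt=1
        # then reboot, or reload the module:
        #   sudo modprobe -r ath9k && sudo modprobe ath9k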

    Read the article

  • No Sound in 12.04 on Acer TimelineX 3830TG

    - by Fawkes5
    When I first installed Ubuntu 12.04, everything worked fine. All of a sudden my sound has stopped working!! This has happened before [like 2 days ago], but the next day it started working again. That may happen again... [I've already rebooted 5 times.]

    My sound card is an HDA Intel PCH; my sound chip is Intel CougarPoint HDMI. My machine is an Acer TimelineX 3830TG. Here's my lspci output:

        00:00.0 Host bridge: Intel Corporation 2nd Generation Core Processor Family DRAM Controller (rev 09)
        00:01.0 PCI bridge: Intel Corporation Xeon E3-1200/2nd Generation Core Processor Family PCI Express Root Port (rev 09)
        00:02.0 VGA compatible controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09)
        00:16.0 Communication controller: Intel Corporation 6 Series/C200 Series Chipset Family MEI Controller #1 (rev 04)
        00:1a.0 USB controller: Intel Corporation 6 Series/C200 Series Chipset Family USB Enhanced Host Controller #2 (rev 04)
        00:1b.0 Audio device: Intel Corporation 6 Series/C200 Series Chipset Family High Definition Audio Controller (rev 04)
        00:1c.0 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 1 (rev b4)
        00:1c.1 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 2 (rev b4)
        00:1c.2 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 3 (rev b4)
        00:1c.3 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 4 (rev b4)
        00:1d.0 USB controller: Intel Corporation 6 Series/C200 Series Chipset Family USB Enhanced Host Controller #1 (rev 04)
        00:1f.0 ISA bridge: Intel Corporation HM65 Express Chipset Family LPC Controller (rev 04)
        00:1f.2 SATA controller: Intel Corporation 6 Series/C200 Series Chipset Family 6 port SATA AHCI Controller (rev 04)
        00:1f.3 SMBus: Intel Corporation 6 Series/C200 Series Chipset Family SMBus Controller (rev 04)
        01:00.0 VGA compatible controller: NVIDIA Corporation GF108 [GeForce GT 540M] (rev ff)
        02:00.0 Ethernet controller: Atheros Communications Inc. AR8151 v2.0 Gigabit Ethernet (rev c0)
        03:00.0 Network controller: Atheros Communications Inc. AR9287 Wireless Network Adapter (PCI-Express) (rev 01)
        04:00.0 Unassigned class [ff00]: Realtek Semiconductor Co., Ltd. RTS5116 PCI Express Card Reader (rev 01)
        05:00.0 USB controller: NEC Corporation uPD720200 USB 3.0 Host Controller (rev 04)

    I'm running a dual boot with Windows 7; sound always works there. I've tried loading earlier kernels [3.2.0], but that hasn't helped. I've tried playing with alsamixer, unmuting and raising the volume - doesn't work. I've tried playing with pavucontrol - doesn't work. Although, strangely, when I play a song I can see a bar moving under output, implying that something is working; the sound just isn't getting through my speakers. Thus it might be a hardware problem. I've tried uninstalling and reinstalling pulseaudio and alsamixer - doesn't work [maybe made it worse...]. I've heard reverting back to kernel 3.0... works, but I'm reluctant to try that [I don't know much about kernels]. Thank you. Please tell me what other information you need.

    EDIT: My audio has started working again after another reboot. However, this time I'm not connected to my external audio on startup. I will see if this is the issue next time I'm at home.

    Edit: Connecting to external audio is not the problem.
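
    For reference, a workaround that comes up often for intermittent HDA Intel sound on this era of hardware is pinning a codec model for the snd-hda-intel module; a hedged sketch (the exact model string is hardware-specific, and 'auto' is only a placeholder, not a known-good value for this laptop):

        # appended to /etc/modprobe.d/alsa-base.conf - remove the line again
        # if it makes things worse; 'auto' falls back to the BIOS defaults
        options snd-hda-intel model=auto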

    Read the article

  • A Windows Update Prevented Live Discs From Working? [migrated]

    - by user88311
    First off, I'll state my system specs:

        Acer Aspire M1100
        Windows Vista Home Basic 32-bit OEM
        2 GB of DDR2 RAM
        160 GB hard drive
        2.7 GHz AMD Athlon 64 processor
        ATI Radeon X1250 graphics card

    A few days ago my computer ran automatic updates and updated Windows Defender to KB915597 (Definition 1.135.415.0). After that, when shutting down and starting up, I would receive a BSOD reporting BUGCODE_USB_DRIVER and 0x000000FE (0x00000008, 0x00000006, 0x00000006, 0x877330000), after which my computer would not start up with any USB devices plugged in, and it always required me to run Startup Repair before it started.

    When I first got fully into Windows, I had no use of the mouse, so I was unable to install the fix that the Windows Solutions Center brought up on my screen. I restarted and installed the fix, hoping it would stop the problem. It did not. After installing the fix and restarting, I was confronted with the BSOD 0x000000FE (0x00000008, 0x00000006, 0x00000006, 0x83291000), at which point I found Startup Repair could not fix the problem, and I restored - as I most likely should have done in the first place.

    After going through that, I read that simply installing the latest Defender version from the Microsoft site had fixed this problem for others, so I did that, only to find I still received the BSODs. In an attempt to find a fix, I went to the Microsoft Answers site, where I was told to simply disable Defender and reboot to see if that fixed the problem. After doing this, my computer would no longer even start up: when I boot normally it gets to just after the loading screen finishes and then restarts, and when I run Startup Repair it runs for about 15 seconds and then the computer restarts as well.

    I have tried running Ubuntu live discs in order to simply access the drive and copy over the two-month-old physical backup I have of my C drive, but whenever I run a live OS, the computer again restarts just as the load screen finishes and it is about to boot. If I put in a GParted disc, I am able to boot it fully, although it does not give me access to the file system - just partition managing - and when I attempt to access the internet through it, the computer once again restarts.

    So my question is: how could the update, and what has happened since, prevent me from running live OSes properly?

    Read the article

< Previous Page | 278 279 280 281 282 283 284 285 286 287 288 289  | Next Page >