Search Results

Search found 8172 results on 327 pages for 'vector graphics'.


  • Fan not detected by lm-sensors

    - by OrangeTux
    My fan is blowing hard even though my CPU temperature is 32 degrees. I have tried a lot of things to control the fan. I changed the GRUB file, trying both of these:

        GRUB_CMDLINE_LINUX_DEFAULT="quiet splash acpi_osi= pci=noacpi"
        GRUB_CMDLINE_LINUX_DEFAULT="quiet splash acpi_osi=\"Linux\""

    I ran sensors-detect:

        To load everything that is needed, add this to /etc/modules:
        #----cut here----
        # Chip drivers
        coretemp
        #----cut here----
        If you have some drivers built into your kernel, the list above will contain too many modules. Skip the appropriate ones!
        Do you want to add these lines automatically to /etc/modules? (yes/NO)
        Unloading i2c-dev... OK
        Unloading i2c-i801... OK
        Unloading cpuid... OK

    I ran sensors:

        acpitz-virtual-0
        Adapter: Virtual device
        temp1:        +34.0°C  (crit = +90.0°C)

        coretemp-isa-0000
        Adapter: ISA adapter
        Core 0:       +34.0°C  (high = +80.0°C, crit = +90.0°C)
        Core 2:       +34.0°C  (high = +80.0°C, crit = +90.0°C)

    I also ran sudo start module-init-tools (twice), which only reported:

        module-init-tools stop/waiting

    As you can see, my fan isn't detected. Running fancontrol gives me this:

        Loading configuration from /etc/fancontrol ...
        Error: Can't read configuration file

    Can you help me, please? I cannot use my laptop in class right now. Thanks in advance. My system (lspci):

        00:00.0 Host bridge: Intel Corporation Core Processor DRAM Controller (rev 02)
        00:01.0 PCI bridge: Intel Corporation Core Processor PCI Express x16 Root Port (rev 02)
        00:02.0 VGA compatible controller: Intel Corporation Core Processor Integrated Graphics Controller (rev 02)
        00:16.0 Communication controller: Intel Corporation 5 Series/3400 Series Chipset HECI Controller (rev 06)
        00:1a.0 USB Controller: Intel Corporation 5 Series/3400 Series Chipset USB2 Enhanced Host Controller (rev 05)
        00:1b.0 Audio device: Intel Corporation 5 Series/3400 Series Chipset High Definition Audio (rev 05)
        00:1c.0 PCI bridge: Intel Corporation 5 Series/3400 Series Chipset PCI Express Root Port 1 (rev 05)
        00:1c.2 PCI bridge: Intel Corporation 5 Series/3400 Series Chipset PCI Express Root Port 3 (rev 05)
        00:1d.0 USB Controller: Intel Corporation 5 Series/3400 Series Chipset USB2 Enhanced Host Controller (rev 05)
        00:1e.0 PCI bridge: Intel Corporation 82801 Mobile PCI Bridge (rev a5)
        00:1f.0 ISA bridge: Intel Corporation Mobile 5 Series Chipset LPC Interface Controller (rev 05)
        00:1f.2 SATA controller: Intel Corporation 5 Series/3400 Series Chipset 4 port SATA AHCI Controller (rev 05)
        00:1f.3 SMBus: Intel Corporation 5 Series/3400 Series Chipset SMBus Controller (rev 05)
        01:00.0 VGA compatible controller: ATI Technologies Inc Manhattan [Mobility Radeon HD 5400 Series]
        01:00.1 Audio device: ATI Technologies Inc Manhattan HDMI Audio [Mobility Radeon HD 5000 Series]
        02:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8101E/RTL8102E PCI Express Fast Ethernet controller (rev 02)
        03:00.0 Network controller: Broadcom Corporation BCM4313 802.11b/g/n Wireless LAN Controller (rev 01)
        7f:00.0 Host bridge: Intel Corporation Core Processor QuickPath Architecture Generic Non-core Registers (rev 05)
        7f:00.1 Host bridge: Intel Corporation Core Processor QuickPath Architecture System Address Decoder (rev 05)
        7f:02.0 Host bridge: Intel Corporation Core Processor QPI Link 0 (rev 05)
        7f:02.1 Host bridge: Intel Corporation Core Processor QPI Physical 0 (rev 05)
        7f:02.2 Host bridge: Intel Corporation Core Processor Reserved (rev 05)
        7f:02.3 Host bridge: Intel Corporation Core Processor Reserved (rev 05)
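
    A note on that last error, as an illustrative aside rather than part of the original post: fancontrol reads /etc/fancontrol, which does not exist until it is generated by running pwmconfig, and pwmconfig will only succeed if a PWM-capable fan sensor is detected, which is exactly what is missing here. For reference, a minimal generated file looks roughly like this sketch (the hwmon paths are hypothetical):

        INTERVAL=10
        FCTEMPS=hwmon0/pwm1=hwmon0/temp1_input
        FCFANS=hwmon0/pwm1=hwmon0/fan1_input
        MINTEMP=hwmon0/pwm1=40
        MAXTEMP=hwmon0/pwm1=80
        MINSTART=hwmon0/pwm1=150
        MINSTOP=hwmon0/pwm1=100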

    Read the article

  • Vostro 3560 Wireless on Ubuntu 12.10

    - by ngille
    I have been using Ubuntu 12.04.1 LTS on a Dell Vostro 3560 for some time now. At first I had issues with the wireless, but I was pointed to a driver from Ubuntu 11.xx for a Broadcom BCM43142 and that did work! However, with the upgrade to 12.10 I find myself at a loss, because the driver will not work anymore. When trying to reinstall it I get "Driver of bad quality". I can still install it, but it just won't work. I can't seem to find anything on this issue for 12.10 and didn't know if anyone had any advice, or if someone could point me in the right direction on resources to write my own driver for it, as I know C/C++ but have never written a driver for a Linux machine before. Thanks for any help!

    To be more clear: I installed Ubuntu 12.10 Desktop on my Dell Vostro 3560 laptop. When I log in, my wireless card isn't visible in the Network Manager popup menu, although the wired network shows up there. I installed the driver mentioned below and that did help me on 12.04, but it is now broken due to "Poor quality". A sudo lspci -nn command brings up:

        00:00.0 Host bridge [0600]: Intel Corporation 3rd Gen Core processor DRAM Controller [8086:0154] (rev 09)
        00:02.0 VGA compatible controller [0300]: Intel Corporation 3rd Gen Core processor Graphics Controller [8086:0166] (rev 09)
        00:14.0 USB controller [0c03]: Intel Corporation 7 Series/C210 Series Chipset Family USB xHCI Host Controller [8086:1e31] (rev 04)
        00:16.0 Communication controller [0780]: Intel Corporation 7 Series/C210 Series Chipset Family MEI Controller #1 [8086:1e3a] (rev 04)
        00:1a.0 USB controller [0c03]: Intel Corporation 7 Series/C210 Series Chipset Family USB Enhanced Host Controller #2 [8086:1e2d] (rev 04)
        00:1b.0 Audio device [0403]: Intel Corporation 7 Series/C210 Series Chipset Family High Definition Audio Controller [8086:1e20] (rev 04)
        00:1c.0 PCI bridge [0604]: Intel Corporation 7 Series/C210 Series Chipset Family PCI Express Root Port 1 [8086:1e10] (rev c4)
        00:1c.1 PCI bridge [0604]: Intel Corporation 7 Series/C210 Series Chipset Family PCI Express Root Port 2 [8086:1e12] (rev c4)
        00:1c.2 PCI bridge [0604]: Intel Corporation 7 Series/C210 Series Chipset Family PCI Express Root Port 3 [8086:1e14] (rev c4)
        00:1d.0 USB controller [0c03]: Intel Corporation 7 Series/C210 Series Chipset Family USB Enhanced Host Controller #1 [8086:1e26] (rev 04)
        00:1f.0 ISA bridge [0601]: Intel Corporation HM77 Express Chipset LPC Controller [8086:1e57] (rev 04)
        00:1f.2 SATA controller [0106]: Intel Corporation 7 Series Chipset Family 6-port SATA Controller [AHCI mode] [8086:1e03] (rev 04)
        00:1f.3 SMBus [0c05]: Intel Corporation 7 Series/C210 Series Chipset Family SMBus Controller [8086:1e22] (rev 04)
        01:00.0 Ethernet controller [0200]: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller [10ec:8168] (rev 07)
        02:00.0 Network controller [0280]: Broadcom Corporation BCM43142 802.11b/g/n [14e4:4365] (rev 01)

    The relevant device is the last one, the Broadcom BCM43142 wireless card.

    Read the article

  • Why is my system freezing when I switch users

    - by ZeroDivide
    Hello. I've recently upgraded from 13.04 to 13.10 64-bit. I'm running AMD graphics with the proprietary drivers. I have two user accounts: mine (administrator) and my girlfriend's (standard).

    My girlfriend clicks "switch user" from my lock screen and logs in fine. I then try to click "switch user" from her lock screen and everything goes black. Then the monitor blinks on and off with just a single cursor. I have no way to access the terminal; the system is unresponsive and I have to hit the power button. Even Ctrl + Alt + F4 or Ctrl + Alt + T doesn't get me a terminal. When I press the power button, the system does start printing out the shutdown sequence on the monitor.

    Here is my .xsession-errors:

        Script for ibus started at run_im.
        Script for auto started at run_im.
        Script for default started at run_im.

    Here is hers:

        init: at-spi2-registryd main process ended, respawning
        (the line above repeats ten times)
        init: at-spi2-registryd respawning too fast, stopped
        init: logrotate main process (4726) killed by TERM signal
        init: upstart-dbus-session-bridge main process (4865) terminated with status 1
        init: gnome-settings-daemon main process (4843) terminated with status 1
        init: gnome-session main process (4852) terminated with status 1
        init: unity-panel-service main process (4863) killed by KILL signal

    I found some advice in a forum to look for at-spi2-registryd in my system logs. Perhaps it will be useful. Executing this:

        sudo grep -r at-spi2-registryd /var/log/*

    produces this:

        /var/log/lightdm/x-1-greeter.log:** (at-spi2-registryd:4384): WARNING **: Failed to register client: GDBus.Error:org.freedesktop.DBus.Error.ServiceUnknown: The name org.gnome.SessionManager was not provided by any .service files
        /var/log/lightdm/x-1-greeter.log:** (at-spi2-registryd:4384): WARNING **: Unable to register client with session manager
        /var/log/lightdm/x-2-greeter.log.old:** (at-spi2-registryd:7447): WARNING **: Failed to register client: GDBus.Error:org.freedesktop.DBus.Error.ServiceUnknown: The name org.gnome.SessionManager was not provided by any .service files
        /var/log/lightdm/x-2-greeter.log.old:** (at-spi2-registryd:7447): WARNING **: Unable to register client with session manager
        /var/log/lightdm/x-0-greeter.log:** (at-spi2-registryd:1378): WARNING **: Failed to register client: GDBus.Error:org.freedesktop.DBus.Error.ServiceUnknown: The name org.gnome.SessionManager was not provided by any .service files
        /var/log/lightdm/x-0-greeter.log:** (at-spi2-registryd:1378): WARNING **: Unable to register client with session manager
        /var/log/lightdm/x-0-greeter.log.old:** (at-spi2-registryd:1357): WARNING **: Failed to register client: GDBus.Error:org.freedesktop.DBus.Error.ServiceUnknown: The name org.gnome.SessionManager was not provided by any .service files
        /var/log/lightdm/x-0-greeter.log.old:** (at-spi2-registryd:1357): WARNING **: Unable to register client with session manager

    Any ideas what is going on?

    Read the article

  • Ubuntu 13.04 boot into black screen, even after installing nvidia drivers, fail at: "Starting Reload cups, upon starting avahi-daemon..."

    - by Elad92
    I got a new machine with an i7 4770K and a GeForce GTX 660. I installed Windows and then installed Ubuntu 13.04. During the installation everything went well. After the installation I booted from the USB again, chose "Try Ubuntu", and installed boot-repair, because Windows was booting automatically with no option for Ubuntu. After I repaired the boot, I restarted and got into GRUB. When I chose Ubuntu I got a message saying I'm running in low-graphics mode, and when I pressed OK the screen turned off.

    I couldn't do anything, so I restarted, and after reading this post: "My computer boots to a black screen, what options do I have to fix it?" I got into recovery, selected dpkg and repaired packages. Then I rebooted and tried to press 'e' and change quiet splash to no splash and nomodeset, but unfortunately neither of them worked; this time the screen didn't turn off, but I just saw a blank screen.

    So I rebooted again, entered recovery, and this time went to the root shell and tried to install the NVIDIA drivers following this guide: http://www.howopensource.com/2012/10/install-nvidia-geforce-driver-in-ubuntu-12-10-12-04-using-ppa/ I rebooted and got the same black screen again (the screen didn't go off, I just saw a blank screen). In GRUB I see that the kernel is 3.8.0-25. I also checked that the USB files are not corrupted using the "Check CD" option from the Ubuntu installation screen.

    I'm really frustrated; I don't know what else I can do. (If someone also knows how I can connect to Wi-Fi using the root shell, it will be very appreciated, because when I choose 'enable networking' it again boots me to a black screen.) Thanks

    Edit: After digging more, I again booted with nomodeset and saw where it is failing; these are the lines I saw:

        * Starting Reload cups, upon starting avahi-daemon to make sure remote queues are populated [OK]
        * Starting Reload cups, upon starting avahi-daemon to make sure remote queues are populated [fail]

    I searched for it in Google and this is the closest result I got: http://ubuntuforums.org/showthread.php?t=2144261 I didn't find any way to solve it on the internet; from what I understand this is a problem with 13.04. If someone knows how to fix it I will be very grateful. Thanks

    Edit 2 (23.6.13): As Mitch suggested, I disabled the avahi-daemon, and when I boot in nomodeset I now get an error (shown as a screenshot in the original post). What else could be the problem?

    Read the article

  • HDMI sound gone, can't figure out how to turn it back on

    - by Oli
    I have had an Acer Revo box as a media centre for a while. I recently installed Ubuntu Server (10.10) on it, polished it up with nodm (one of the most simple ways to launch an X session) and installed Boxee. It's been working fine for over a month. It's just running ALSA; I've had problems with PulseAudio/Boxee/HDMI before, so I wanted to keep it simple. And that worked: it pushed both PCM and digital (AAC and various Dolby codecs) over HDMI perfectly.

    But I restarted it the other day after mucking around with some NFS configuration, and now there isn't any sound. The hardware is an ION chipset: Nvidia 9400M graphics with Nvidia MCP79/7A audio.

    One thing I have noticed is there doesn't appear to be any sign of an IEC958 device. A traditional fix in the past for fresh installs has been to load alsamixer, find the IEC device and toggle its mute, but I can't. I'm certain this used to represent the HDMI output. It just doesn't seem to exist any more, unless I run sudo alsa-utils restart while Boxee is running, when I see it in an error message:

        * Shutting down ALSA...    [ OK ]
        * Setting up ALSA...
        * warning: 'alsactl restore' failed with error message 'alsactl: set_control:1388: Cannot write control '2:0:0:IEC958 Playback Default:0' : Operation not permitted'...
        ...done.

    When nodm (and thus Boxee) aren't running, I don't see this error, but alsamixer still doesn't show the IEC channel. aplay -l gives:

        card 0: NVidia [HDA NVidia], device 0: ALC662 rev1 Analog [ALC662 rev1 Analog]
          Subdevices: 1/1
          Subdevice #0: subdevice #0
        card 0: NVidia [HDA NVidia], device 3: HDMI 0 [HDMI 0]
          Subdevices: 0/1
          Subdevice #0: subdevice #0

    Its section in lshw reads:

        *-multimedia
             description: Audio device
             product: MCP79 High Definition Audio
             vendor: nVidia Corporation
             physical id: 8
             bus info: pci@0000:00:08.0
             version: b1
             width: 32 bits
             clock: 66MHz
             capabilities: pm bus_master cap_list
             configuration: driver=HDA Intel latency=0 maxlatency=5 mingnt=2
             resources: irq:22 memory:fae78000-fae7bfff

    I was running on the stock PAE kernel, but now it's running on 2.6.37.1; I upgraded to see if that fixed things. It didn't.

    I'm considering a reinstall, but I hate doing that because (a) there's a bit of custom configuration in getting X and Boxee to start on boot, and (b) I don't know what the problem is. If I reinstall this time, I'll end up doing that every time the sound breaks, and I love Ubuntu but I don't want to install it once a month. Is there any way to forcibly reset all ALSA settings and restart from scratch (without doing a reinstall)? Any other tips? If you need more information, just ask.

    Read the article

  • Efficiently separating Read/Compute/Write steps for concurrent processing of entities in Entity/Component systems

    - by TravisG
    Setup: I have an entity-component architecture where Entities can have a set of attributes (which are pure data with no behavior), and there exist systems that run the entity logic which act on that data. Essentially, in somewhat pseudo-code:

        Entity
        {
            id;
            map<id_type, Attribute> attributes;
        }

        System
        {
            update();
            vector<Entity> entities;
        }

    A system that just moves along all entities at a constant rate might be:

        MovementSystem extends System
        {
            update()
            {
                for each entity in entities
                    position = entity.attributes["position"];
                    position += vec3(1,1,1);
            }
        }

    Essentially, I'm trying to parallelize update() as efficiently as possible. This can be done by running entire systems in parallel, or by giving each update() of one system a couple of components so different threads can execute the update of the same system, but for a different subset of entities registered with that system.

    Problem: In reality, these systems sometimes require that entities interact with (read/write data from/to) each other, sometimes within the same system (e.g. an AI system that reads state from other entities surrounding the currently processed entity), but sometimes between different systems that depend on each other (e.g. a movement system that requires data from a system that processes user input).

    Now, when trying to parallelize the update phases of entity/component systems, the phases in which data (components/attributes) from entities is read and used to compute something, and the phase where the modified data is written back to entities, need to be separated in order to avoid data races. Otherwise the only way (not taking into account just "critical section"ing everything) to avoid them is to serialize the parts of the update process that depend on other parts. This seems ugly.

    To me it would seem more elegant to (ideally) have all processing running in parallel, where a system may read data from all entities as it wishes, but doesn't write modifications to that data back until some later point. The fact that this is even possible is based on the assumption that modification write-backs are usually very small in complexity and don't require much performance, whereas computations are (relatively) very expensive. So the overhead added by a delayed-write phase might be evened out by more efficient updating of entities (by having threads work a greater percentage of the time instead of waiting).

    A concrete example of this might be a system that updates physics. The system needs to both read and write a lot of data to and from entities. Optimally, there would be a system in place where all available threads update a subset of all entities registered with the physics system. In the case of the physics system this isn't trivially possible because of race conditions. So without a workaround, we would have to find other systems to run in parallel (which don't modify the same data as the physics system), otherwise the remaining threads are waiting and wasting time. However, that has disadvantages:

        - Practically, the L3 cache is pretty much always better utilized when updating a large system with multiple threads, as opposed to multiple systems at once which all act on different sets of data.
        - Finding and assembling other systems to run in parallel can be extremely time consuming to design well enough to optimize performance.
        - Sometimes it might not even be possible at all, because a system just depends on data that is touched by all other systems.

    Solution? In my thinking, a possible solution would be a system where reading/updating and writing of data are separated, so that in one expensive phase systems only read data and compute what they need to compute, and then in a separate, performance-wise cheap write phase, attributes of entities that needed to be modified are finally written back to the entities.

    The Question: How might such a system be implemented to achieve optimal performance, as well as making programmer life easier? What are the implementation details of such a system, and what might have to be changed in the existing EC-architecture to accommodate this solution?
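
    A hedged illustration of the deferred-write idea described above, in plain C# (ComponentBuffer and all other names are invented for this sketch, not taken from any engine): during the parallel compute phase every system reads from a stable snapshot and queues its writes, and a cheap serial commit phase applies the queue and swaps buffers.

        using System.Collections.Concurrent;

        // Sketch: double-buffered component storage with deferred writes.
        class ComponentBuffer<T>
        {
            private T[] read;    // stable snapshot; safe for any thread to read all frame
            private T[] write;   // receives committed values for the next frame
            private readonly ConcurrentQueue<(int Index, T Value)> pending =
                new ConcurrentQueue<(int, T)>();

            public ComponentBuffer(int capacity)
            {
                read = new T[capacity];
                write = new T[capacity];
            }

            // Read phase: systems may call this freely from any thread.
            public T Get(int index) { return read[index]; }

            // Compute phase: writes are queued instead of applied.
            public void Set(int index, T value) { pending.Enqueue((index, value)); }

            // Write phase: single-threaded, cheap, runs after all systems finish.
            public void Commit()
            {
                read.CopyTo(write, 0);                      // carry over untouched values
                while (pending.TryDequeue(out var p)) write[p.Index] = p.Value;
                var tmp = read; read = write; write = tmp;  // publish the new snapshot
            }
        }

    A frame then becomes: run every system's update in parallel (e.g. Parallel.ForEach over entity chunks) using Get/Set, then call Commit once per component type. Whether a concurrent queue beats per-thread write lists depends on write volume; the assumption above that write-backs are small is exactly what makes either variant cheap.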

    Read the article

  • Will HTML5 make Silverlight redundant?

    - by Laila
    One of the great features of Adobe AIR v2, which launched this month, is its support for some of the 2008 draft of HTML5. The HTML5 specification was started in 2004, but the full spec will probably not be approved by W3C until around 2022. One might have thought that it would take years yet to reach the point where any browsers were remotely HTML5-compliant, but enough of HTML5 is published and agreed to make a lot of it possible, and Safari and Adobe have got there thanks to Apple's open-source WebKit.

    The race for HTML5 has been fuelled by the demand from Apple and Google for advanced graphics, typography, animations and transitions without having to rely on third-party browser plug-ins such as Adobe Flash or Silverlight. There is good reason for this haste: Flash doesn't support touch devices and has been slow in supporting hardware video decoders such as H.264. There is a strong requirement to do all that Flash can do in an open-standards way.

    Those with proprietary solutions remain sniffy. In AIR 2, Adobe pointedly disables the HTML5 video and audio tags that allow basic playing of media content, saying that the specification is not final and there is still no standard for the supported formats, and adding that Safari implements a 'disjoint set' of codecs. Microsoft also has little interest in HTML5, as it has so much invested in Silverlight. Google stands to gain by Adobe AIR for Android, as it will allow a lot of applications to be migrated easily to the platform, so it sees Apple's war on Flash as a way of gaining market share.

    Why do we care? Because HTML5/CSS3 provides facilities far beyond HTML4, bringing the reality of browser-based applications a lot closer. Probably most generally useful is the advanced typography: Safari and AIR both already support a way of reflowing text in a container across an arbitrary number of columns, and page-specific fonts can also be specified. Then there is 2D drawing, video, transitions, local storage, AJAX navigation and mutable DOM prototypes.

    HTML5 is likely to provide the base functionality that is required, but it is too early to be certain that it will render Flash, Silverlight or JavaFX obsolete. In the meantime, Adobe AIR provides the best vehicle for developing HTML5/CSS3 applications without a twinge of worry about browser incompatibilities.

    Cheers,
    Laila

    Read the article

  • Ubuntu hangs on booting up after a update

    - by alFReD NSH
    I made a clean install yesterday. The first time I restarted, everything went well; then, after I updated packages and copied my old home directory over the new one, it hung while booting when I restarted. I tried reinstalling and doing the same thing, but again the same thing happened.

    Here's what I see before the Ubuntu logo with the five dots is shown (screenshot in the original post). After that, 3 or 4 of the dots will load and it hangs there. If I press arrow-up before that, this is shown (screenshot in the original post).

    I started my laptop again today (the pictures are from the day before), booted up with the live CD and got the logs:

        dmesg:  http://pastebin.com/aVxV7BQF
        syslog: http://pastebin.com/4E2BrRUK

    And some info:

        alfred@alFitop:~$ uname -a
        Linux alFitop 3.2.0-24-generic #39-Ubuntu SMP Mon May 21 16:52:17 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux

        lshw: http://pastebin.com/AZbKJmsT
        sources.list: http://pastebin.com/2HazmuyV

    My problem is a bit similar to this one: http://ubuntuforums.org/showthread.php?t=1918271 Though I didn't change my x.org config; I only changed the home directory and updated packages. I've tried memtest and fsck, and both passed.

    In the recovery-mode boot options, I've also realized that the same thing happens in failsafe graphical mode. But when I go into network mode, I can boot up my system, though of course the graphics are just basic. Adding blacklist intel_ips to /etc/modprobe.d/blacklist.conf solves the first message, but I still get the broken pipe and CPU stack traces. The current kernel version is 3.2.0-25; I've tried booting into 3.2.0-23 (the one the installer came with), but got the same results. I also uninstalled AppArmor; that didn't help. I've installed Ubuntu again, this time without copying the home directory: same result.

    --- UPDATE --- This problem was solved before by removing backports, but it's back again! I updated my laptop last night and the problem came back. It's definitely one of these packages. My /var/log/apt/term.log and /var/log/apt/history.log. I'm almost in the same situation.

    --- UPDATE --- I realized this has also happened at times when I had updated (without restarting afterwards) and my computer's power was cut off, shutting it down due to lack of power. And I realized that if I just do as I answered, but somewhere without the GUI, it wouldn't work (network mode has the GUI).

    Read the article

  • Particle System in XNA - cannot draw particle

    - by Dave Voyles
    I'm trying to implement a simple particle system in my XNA project, following RB Whitaker's tutorial, and it seems simple enough. I'm trying to draw particles within my menu screen. Below I've included the code which I think is applicable. I'm getting one error in my build, stating that I need to create a new instance of the EmitterLocation from the particleEngine. When I hover over

        particleEngine.EmitterLocation = new Vector2(Mouse.GetState().X, Mouse.GetState().Y);

    it states that particleEngine is returning a null value. What could be causing this?

        /// <summary>
        /// Base class for screens that contain a menu of options. The user can
        /// move up and down to select an entry, or cancel to back out of the screen.
        /// </summary>
        abstract class MenuScreen : GameScreen
        {
            ParticleEngine particleEngine;

            public void LoadContent(ContentManager content)
            {
                if (content == null)
                {
                    content = new ContentManager(ScreenManager.Game.Services, "Content");
                }

                base.LoadContent();

                List<Texture2D> textures = new List<Texture2D>();
                textures.Add(content.Load<Texture2D>(@"gfx/circle"));
                textures.Add(content.Load<Texture2D>(@"gfx/star"));
                textures.Add(content.Load<Texture2D>(@"gfx/diamond"));
                particleEngine = new ParticleEngine(textures, new Vector2(400, 240));
            }

            public override void Update(GameTime gameTime, bool otherScreenHasFocus,
                                        bool coveredByOtherScreen)
            {
                base.Update(gameTime, otherScreenHasFocus, coveredByOtherScreen);

                // Update each nested MenuEntry object.
                for (int i = 0; i < menuEntries.Count; i++)
                {
                    bool isSelected = IsActive && (i == selectedEntry);
                    menuEntries[i].Update(this, isSelected, gameTime);
                }

                particleEngine.EmitterLocation = new Vector2(Mouse.GetState().X, Mouse.GetState().Y);
                particleEngine.Update();
            }

            public override void Draw(GameTime gameTime)
            {
                // make sure our entries are in the right place before we draw them
                UpdateMenuEntryLocations();

                GraphicsDevice graphics = ScreenManager.GraphicsDevice;
                SpriteBatch spriteBatch = ScreenManager.SpriteBatch;
                SpriteFont font = ScreenManager.Font;

                spriteBatch.Begin();
                // Draw stuff logic
                spriteBatch.End();

                particleEngine.Draw(spriteBatch);
            }
        }
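
    A hedged guess at why particleEngine is null, based only on the code above: LoadContent(ContentManager) takes a parameter, so if the base GameScreen declares a parameterless virtual LoadContent() (as the common GameStateManagement sample does, which this code resembles), this method hides it rather than overrides it, and the screen manager never calls it before Update runs. A minimal sketch of that theory, meant as a drop-in inside MenuScreen (the override signature is an assumption about the base class):

        // Assumption: GameScreen exposes "public virtual void LoadContent()".
        public override void LoadContent()
        {
            base.LoadContent();

            // Build the content manager here instead of receiving it as a parameter.
            ContentManager content = new ContentManager(ScreenManager.Game.Services, "Content");

            List<Texture2D> textures = new List<Texture2D>();
            textures.Add(content.Load<Texture2D>(@"gfx/circle"));
            textures.Add(content.Load<Texture2D>(@"gfx/star"));
            textures.Add(content.Load<Texture2D>(@"gfx/diamond"));
            particleEngine = new ParticleEngine(textures, new Vector2(400, 240));
        }

    A quick way to test the theory: put a temporary "if (particleEngine == null) return;" at the top of Update. If the crash stops but no particles ever appear, LoadContent was indeed never invoked.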

    Read the article

  • No suspend on lid closing on a Samsung Series 5 14" NP530U4BI

    - by dmeu
    OK, I realize I am not the only one, but I will try to provide all the info possible, to make this as exemplary as possible and narrow down the error sources.

    I have a fresh install of Ubuntu 12.04, and suspend worked fine when it was freshly installed, but now it does not anymore. The suspend option from the system power button on the top right works fine. Things I did do, which I don't know whether they are related:

        - Installed and removed again the FGLRX drivers (Radeon graphics card)
        - Installed the Jupiter power management tool (shutting it down does not change anything)
        - Plugged in and out an external display

    The configuration, as far as I know, is well set:

        - In System Settings/Power everything is set to suspend when closing the lid
        - Double-checked with dconf-editor; everything is set to suspend

    So, from here on I don't know how to proceed. What are common problems that cause this error?

    EDIT: My computer model is a Samsung Series 5 14" NP530U4BI. sudo lspci -nn gives:

        00:00.0 Host bridge [0600]: Intel Corporation 2nd Generation Core Processor Family DRAM Controller
        00:01.0 PCI bridge [0604]: Intel Corporation Xeon E3-1200/2nd Generation Core Processor Family PCI Express Root Port
        00:02.0 VGA compatible controller [0300]: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller
        00:16.0 Communication controller [0780]: Intel Corporation 6 Series/C200 Series Chipset Family MEI Controller #1
        00:1a.0 USB controller [0c03]: Intel Corporation 6 Series/C200 Series Chipset Family USB Enhanced Host Controller #2
        00:1b.0 Audio device [0403]: Intel Corporation 6 Series/C200 Series Chipset Family High Definition Audio Controller
        00:1c.0 PCI bridge [0604]: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 1
        00:1c.3 PCI bridge [0604]: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 4
        00:1c.4 PCI bridge [0604]: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 5
        00:1d.0 USB controller [0c03]: Intel Corporation 6 Series/C200 Series Chipset Family USB Enhanced Host Controller #1
        00:1f.0 ISA bridge [0601]: Intel Corporation HM65 Express Chipset Family LPC Controller [8086:1c49] (rev 04)
        00:1f.2 SATA controller [0106]: Intel Corporation 6 Series/C200 Series Chipset Family 6 port SATA AHCI Controller
        00:1f.3 SMBus [0c05]: Intel Corporation 6 Series/C200 Series Chipset Family SMBus Controller [8086:1c22] (rev 04)
        01:00.0 VGA compatible controller [0300]: Advanced Micro Devices [AMD] nee ATI Thames [Radeon 7500M/7600M Series]
        02:00.0 Network controller [0280]: Intel Corporation Centrino Advanced-N 6230
        03:00.0 Ethernet controller [0200]: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller
        04:00.0 USB controller [0c03]: ASMedia Technology Inc. ASM1042 SuperSpeed USB Host Controller

    Read the article

  • AI to move custom-shaped spaceships (shape affecting movement behaviour)

    - by kaoD
    I'm designing a networked, turn-based, 3D-6DOF space fleet combat strategy game which relies heavily on ship customization. Let me explain the game a bit, since you need to know a bit about it to frame the question.

    What I aim for is the ability to create your own fleet of ships with custom shapes and attached modules (propellers, tractor beams...) which give advantages and disadvantages to each ship, so you have lots of different fleet distributions. E.g., a long ship with two propellers at the side would let the ship spin around that plane easily; bigger ships would move slowly unless you place lots of propellers at the back (therefore spending more "construction" points and energy when moving, and it will only move fast towards that direction). I plan to balance the whole game around this feature.

    The game revolves around two phases: an orders phase and a combat phase. During the orders phase, you command the different ships. When all players finish the orders phase, the combat phase begins and the ship orders get resolved in real time for some time; then the action pauses and there's a new orders phase.

    The problem comes when I think about player input. To move a ship, you need to turn different propellers on or off if you want to steer, travel forward, brake, rotate in place... These propellers don't have to work at full power, so you can achieve more movement combinations with fewer propellers. I think this approach is a bit boring: the player doesn't want to fiddle with motors or anything, you just want to MOVE and KILL. The way I intend the player to give orders to these ships is by a destination and a rotation, and then the AI calculates the correct propeller power to achieve that movement and rotation. Propulsion doesn't have to be the same throughout the entire turn calculation (after the orders have been given), so it would be cool if the ships reacted as they move, adjusting the power of the propellers for their needs dynamically, but that may be too hard to implement and it's not really needed for the game to work.

    In both cases, how would that AI decide which propellers to activate for the best (or at least not worst) trajectory to be achieved? I thought about some approaches:

        - Learning AI: The ship types would learn about their movement by trial and error, adjusting their behaviour with more use, and finally becoming "smart". I don't want to get involved THAT far in AI coding, and I think it could be frustrating for the player (even if you can let it learn without playing).
        - Pre-calculated timestep movement: Upon ship creation, ALL possible movements are calculated for each propeller configuration and power for a given delta-time. Memory intensive, ugly, bad.
        - Pre-calculated trajectories: The same as above, but not for each delta-time but for the whole trajectory, which would then be fitted as closely as possible. Requires a fixed propeller configuration for the whole combat phase, and is still memory intensive, ugly and bad.
        - Continuous brute forcing: The AI continuously checks ALL possible propeller configurations throughout the entire combat phase, pre-calculates a few time steps and decides which is the best one based on that. Con: what's good now might not be that good later, and it's too CPU intensive, ugly, and bad too.
        - Single brute forcing: Same as above, but only brute forcing at the beginning of the simulation, so it needs a constant propeller configuration throughout the entire combat phase.
        - Continuous angle check: This is not a full movement method, but maybe a way to discard "stupid" propeller configurations. Given the current propeller's normal vector and the final one, you can approximate the power needed for the propeller based on the angle. You must do this continuously throughout the whole combat phase. I figured this one out recently, so I haven't put much thought into it. A priori, it has the "what's good now might not be that good later" drawback too, and it doesn't care about the other propellers, which may act together to make a better propelling configuration.

    I'm really stuck here. Any ideas?
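
    As a hedged sketch of how the "angle check" idea in the last bullet can be made concrete (plain C# with System.Numerics; Propeller and ThrusterSolver are invented names): treat each propeller as contributing a force along its thrust direction plus a torque r x F about the center of mass, then throttle it by how well that contribution aligns with the ordered force and torque. A least-squares solve over all propellers would be the exact version; this greedy projection is the cheap one, and re-running it every few physics steps against the remaining error gives the "reacting as they move" behaviour without brute force.

        using System;
        using System.Numerics;

        struct Propeller
        {
            public Vector3 Offset;     // mount point relative to the center of mass
            public Vector3 ThrustDir;  // unit direction of the force it produces
            public float MaxForce;
        }

        static class ThrusterSolver
        {
            // Greedy heuristic: power each propeller by the alignment of its
            // (force, torque) contribution with the desired (force, torque).
            public static float[] Solve(Propeller[] props,
                                        Vector3 desiredForce, Vector3 desiredTorque)
            {
                var power = new float[props.Length];
                for (int i = 0; i < props.Length; i++)
                {
                    Vector3 f = props[i].ThrustDir * props[i].MaxForce;
                    Vector3 t = Vector3.Cross(props[i].Offset, f); // torque = r x F

                    float score = 0f;
                    if (desiredForce.LengthSquared() > 0 && f.LengthSquared() > 0)
                        score += Vector3.Dot(Vector3.Normalize(f), Vector3.Normalize(desiredForce));
                    if (desiredTorque.LengthSquared() > 0 && t.LengthSquared() > 0)
                        score += Vector3.Dot(Vector3.Normalize(t), Vector3.Normalize(desiredTorque));

                    power[i] = Math.Clamp(score, 0f, 1f); // fire only propellers that help
                }
                return power;
            }
        }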

    Read the article

  • Antenna Aligner Part 7: Connecting the dots

    - by Chris George
    The app is basically ready, so I eagerly started to sort out creating the application entry in iTunes Connect. It's mostly intuitive actually, although I did have to create yet another icon for iTunes, sized 512x512 pixels; damn lucky I did the original graphics as vector! It took me longer to write the application description than anything else. I'm so not a tech author!

    I didn't like the way you have to 'make up' an SKU (Stock Keeping Unit) number. I had to do some googling to find out that it really doesn't matter what it is! It should be more obvious what to do from the actual website itself. That aside, the rest of it was actually fairly straightforward.

    As well as the details of the application, iPhone and iPad screenshots were also required. This posed somewhat of a problem. The iPhone ones were easy (as I have one!), but I do not (yet) own an iPad, so I thought I'd leave the iPad screenshots out for now.

    Once the application details were sorted, I moved on to the rights and pricing. At the start of the project I had made the decision that I wouldn't charge any more than the lowest amount, £0.59. I believe there is a market for this, but as my first foray into app development I didn't want to take the mick.

    I did realise, however, that I had built my app with a developer certificate and provisioning profile. This was fairly quickly corrected, and again Nomad made it very easy to switch over to the distribution certificate and provisioning profile. With a sense of excitement I cracked open iTunes Connect and clicked the upload button...

    ...slight snag... When the Nomad project was started, Apple allowed uploads of these binaries via iTunes Connect. But this is no longer possible; the only upload path is via the Application Loader available from the Apple Developer program. This itself has one limitation: it only runs on a Mac! D'OH!!! Actually, my language was somewhat more colourful when this fact came to light.

    After picking my laptop up off the floor and putting it back together... OK, only joking, but I did nearly throw it out of frustration!... I started to consider the options:

        - Buy a cheap Mac from eBay... no, that defeats the whole object of what I'm doing, plus my wife wouldn't be impressed.
        - There are some guys out there on the interweb who will upload your app for a small fee... but I don't really like the idea of giving some faceless email address my Apple developer login details, as well as my app binary!
        - Find some willing friend with a Mac who would kindly let me use it... obviously this is the only sensible option.

    In the meantime, I informed the Nomad team about this slight 'issue' and they are currently investigating possible solutions...

    Read the article

  • Software Tuned to Humanity

    - by Phil Factor
    I learned a great deal from a cynical old programmer who once told me that the ideal length of time for a compiler to do its work was the same time it took to roll a cigarette. For development work, this is oh so true. After intently looking at the editing window for an hour or so, it was a relief to look up, stretch, focus the eyes on something else, and roll the possibly-metaphorical cigarette. This was software tuned to humanity.

    Likewise, a user's perception of the "ideal" time that an application will take to move from frame to frame, to retrieve information, or to process their input has remained remarkably static for about thirty years, at around 200 ms. Anything else appears, and always has, to be either fast or slow. This could explain why commercial applications, unlike games, simulations and communications, aren't noticeably faster now than they were when I started programming in the Seventies. Sure, they do a great deal more, but the SLAs that I negotiated in the 1980s for application performance are very similar to what they are nowadays.

    To prove to myself that this wasn't just some rose-tinted misperception on my part, I cranked up a Z80-based Jonos CP/M machine (1985) in the roof-space. Within 20 seconds from cold, it had loaded Wordstar and I was ready to write. OK, I got it wrong: some things were faster 30 years ago. Sure, I'd now have all sorts of animations, wizzy graphics, and other comforting features, but it seems a pity that we have used all that extra CPU and memory to increase the scope of what we develop, and the graphical prettiness, but not to speed the processes needed to complete a business procedure.

    Never mind the weight, the response time's great! To achieve 200 ms response times on a Z80, or similar, performance considerations influenced everything one did as a developer. If it meant writing an entire application in assembly code, applying every smart algorithm and shortcut imaginable to get the application to perform to spec, then so be it. As a result, I'm a dyed-in-the-wool performance freak and find it difficult to change my habits. Conversely, many developers now seem to feel quite differently. While all will acknowledge that performance is important, it's no longer the virtue it once was, and other factors such as user experience now take precedence. Am I wrong?

    If not, then perhaps we need a new school of development technique to rival Agile, dedicated once again to producing applications that smoke the rear wheels rather than pootle elegantly to the shops; that forgo skeuomorphism, cute animation, or architectural elegance in favor of the smell of hot rubber. I struggle to name an application I use that is truly notable for its blistering performance, and would dearly love one to do my everyday work – just as long as it doesn't go faster than my brain.

    Read the article

  • Unable to configure/setup 5.1 audio with 12.04

    - by Vipin Vinayan
    I am kinda new to Ubuntu as well. I have been having this issue with audio for quite some time now. Initially, when I installed version 11.10 (I guess), I was able to use my 5.1 speakers without any issues. If my memory serves me right, it was after an update that the 5.1 audio stopped working and the video resolution would not get saved. I temporarily fixed the resolution issue by creating a start-up shell script that sets the resolution on load, but the issue with audio has been going on for quite some time now. Even though I have the option for 5.1, only two speakers seem to be working.

    I thought an upgrade should fix the issue, so I upgraded the OS to version 12.04. I also tried uninstalling ALSA and PulseAudio, reinstalling them, and changing the channels in /etc/pulse/daemon.conf from 2 to 6. I have also tried installing pavucontrol, but nothing seems to have worked and the issue still persists. Is there anything else you could suggest?

    The lspci log on my computer is as follows:

        00:00.0 Host bridge: Intel Corporation 82G33/G31/P35/P31 Express DRAM Controller (rev 10)
        00:01.0 PCI bridge: Intel Corporation 82G33/G31/P35/P31 Express PCI Express Root Port (rev 10)
        00:02.0 VGA compatible controller: Intel Corporation 82G33/G31 Express Integrated Graphics Controller (rev 10)
        00:1b.0 Audio device: Intel Corporation N10/ICH 7 Family High Definition Audio Controller (rev 01)
        00:1c.0 PCI bridge: Intel Corporation N10/ICH 7 Family PCI Express Port 1 (rev 01)
        00:1c.1 PCI bridge: Intel Corporation N10/ICH 7 Family PCI Express Port 2 (rev 01)
        00:1d.0 USB controller: Intel Corporation N10/ICH 7 Family USB UHCI Controller #1 (rev 01)
        00:1d.1 USB controller: Intel Corporation N10/ICH 7 Family USB UHCI Controller #2 (rev 01)
        00:1d.2 USB controller: Intel Corporation N10/ICH 7 Family USB UHCI Controller #3 (rev 01)
        00:1d.3 USB controller: Intel Corporation N10/ICH 7 Family USB UHCI Controller #4 (rev 01)
        00:1d.7 USB controller: Intel Corporation N10/ICH 7 Family USB2 EHCI Controller (rev 01)
        00:1e.0 PCI bridge: Intel Corporation 82801 PCI Bridge (rev e1)
        00:1f.0 ISA bridge: Intel Corporation 82801GB/GR (ICH7 Family) LPC Interface Bridge (rev 01)
        00:1f.1 IDE interface: Intel Corporation 82801G (ICH7 Family) IDE Controller (rev 01)
        00:1f.2 IDE interface: Intel Corporation N10/ICH7 Family SATA Controller [IDE mode] (rev 01)
        00:1f.3 SMBus: Intel Corporation N10/ICH 7 Family SMBus Controller (rev 01)
        03:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller (rev 01)

    Would really appreciate a response that will assist me in resolving my issue. Thanks in advance, Vipin

    Read the article

  • Support ARMv7 instruction set in Windows Embedded Compact applications

    - by Valter Minute
    One of the most interesting new features of Windows Embedded Compact 7 is support for the ARMv5, ARMv6 and ARMv7 instruction sets, instead of the "generic" ARMv4 support provided by the previous releases. This means that code built for Windows Embedded Compact 7 can leverage features (like the FPU unit on ARMv6 and v7) and instructions of recent ARM cores and improve its performance. Those improvements are noticeable in graphics, floating-point calculation and data processing.

    The ARMv7 instruction set is supported by the latest Cortex-A8, A9 and A15 processor families. Those processors are currently used in tablets, smartphones and in-car navigation systems; they provide a great amount of processing power at low electrical power, making them very interesting not only for portable devices but for any kind of device that requires a rich user interface, processing power and connectivity while keeping power consumption low.

    The bad news is that the compiler provided with Visual Studio 2008 does not support ARMv7, building native applications using just the ARMv4 instruction set. Porting a Visual Studio "Smart Device" native C/C++ project to Platform Builder is not easy, and you'll lack many of the features that the VS2008 application development environment provides. You'll also need access to the BSP and OSDesign configuration for your device to be able to build and debug your application inside Platform Builder, and this may prevent independent software vendors from using the new compiler to improve their applications' performance.

    Adeneo Embedded now provides a whitepaper and a Visual Studio plug-in that allow usage of the new ARMv7-enabled compiler to build applications inside Visual Studio 2008. I worked on the whitepaper and the tools, with the help of my colleagues, and now the results can be downloaded from Adeneo Embedded's website: http://www.adeneo-embedded.com/OS-Technologies/Windows-Embedded (Click on the "WEC7 ARMv7 Whitepaper" tab to access the download links; free registration required.)

    A very basic benchmark showed a very good performance improvement in integer and floating-point operations. Obviously your mileage may vary, and we can't promise the same amount of improvement in every application, but with a small effort on your side (even smaller if you use the plug-in) you can try it on your own application. ARMv7 support is provided using Platform Builder's compiler, and the VS2008 application debugger is not able to debug ARMv7 code, so you may need to put some workarounds in place, like keeping ARMv4 code for debugging, etc.

    Read the article

  • Parenting Opengl with Groups in LibGDX

    - by Rudy_TM
    I am trying to make an object a child of a Group, but this object has a draw method that calls OpenGL to draw on the screen. Its class is this:

        public class OpenGLSquare extends Actor {
            private static final ImmediateModeRenderer renderer = new ImmediateModeRenderer10();
            private static Matrix4 matrix = null;
            private static Vector2 temp = new Vector2();

            public static void setMatrix4(Matrix4 mat) {
                matrix = mat;
            }

            @Override
            public void draw(SpriteBatch batch, float arg1) {
                // TODO Auto-generated method stub
                renderer.begin(matrix, GL10.GL_TRIANGLES);
                renderer.color(color.r, color.g, color.b, color.a);
                renderer.vertex(x0, y0, 0f);
                renderer.color(color.r, color.g, color.b, color.a);
                renderer.vertex(x0, y1, 0f);
                renderer.color(color.r, color.g, color.b, color.a);
                renderer.vertex(x1, y1, 0f);
                renderer.color(color.r, color.g, color.b, color.a);
                renderer.vertex(x1, y1, 0f);
                renderer.color(color.r, color.g, color.b, color.a);
                renderer.vertex(x1, y0, 0f);
                renderer.color(color.r, color.g, color.b, color.a);
                renderer.vertex(x0, y0, 0f);
                renderer.end();
            }
        }

    In my screen class I have this (I call it in the constructor):

        MyGroupClass spriteLab = new MyGroupClass(spriteSheetLab);
        OpenGLSquare square = new OpenGLSquare();
        square.setX0(100);
        square.setY0(200);
        square.setX1(400);
        square.setY1(280);
        square.color.set(Color.BLUE);
        square.setSize();
        //spriteLab.addActorAt(0, clock);
        spriteLab.addActor(square);
        stage.addActor(spriteLab);

    And in the screen's render method I have:

        @Override
        public void render(float arg0) {
            this.gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
            stage.draw();
            stage.act(Gdx.graphics.getDeltaTime());
        }

    The problem is that when I use OpenGL with a parent, it resets all the other children to position 0,0, and the OpenGL renderer paints the square at the exact position on the screen rather than relative to the parent. I tried using batch.enableBlending() and batch.disableBlending(); that fixes the position problem of the other children, but not the relative position of the OpenGL drawing, and it also applies alpha to the GL drawing. What am I doing wrong? :/
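
    For what it's worth, the general shape of the issue, sketched in C# with System.Numerics because the idea is engine-agnostic (none of these names are LibGDX APIs): an immediate-mode renderer bypasses the batch's transform stack, so the parent Group's accumulated transform has to be folded into the matrix handed to the renderer by hand.

        using System.Numerics;

        // Illustrative only: the same local -> world -> screen chain a
        // sprite batch applies internally, done manually for raw drawing.
        static class ImmediateDrawHelper
        {
            public static Vector2 ToScreen(Vector2 localPoint,
                                           Matrix3x2 parentWorld,      // accumulated Group transform
                                           Matrix3x2 cameraCombined)   // projection * view
            {
                Vector2 world = Vector2.Transform(localPoint, parentWorld);
                return Vector2.Transform(world, cameraCombined);
            }
        }

    In LibGDX terms that would mean, roughly, multiplying the group's transform into the Matrix4 passed to renderer.begin, and flushing the batch (end/begin) around the raw GL calls, which is also a plausible reason the other children snap to 0,0: the batch's pending state gets clobbered mid-frame.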

    Read the article

  • Unity Dash and top toolbar won't open after updating to 12.10

    - by pgrytdal
    Today, I updated to Ubuntu 12.10. After restarting, like the updater suggested, the toolbar at the top of the screen and the Dash won't load. I seem to be missing other features as well, like Alt+Tab to switch windows, etc. I am able to access the terminal by typing Ctrl+Alt+T, which is how I was able to access Firefox. How do I fix this problem?

    Edit (2:10 PM on 10/19/12): As Chris Carter suggested, I'm including the results of the terminal command lspci (sorry, I don't know how to format between back-ticks):

        00:00.0 Host bridge: Advanced Micro Devices [AMD] RS780 Host Bridge
        00:01.0 PCI bridge: Acer Incorporated [ALI] AMD RS780/RS880 PCI to PCI bridge (int gfx)
        00:04.0 PCI bridge: Advanced Micro Devices [AMD] RS780/RS880 PCI to PCI bridge (PCIE port 0)
        00:06.0 PCI bridge: Advanced Micro Devices [AMD] RS780 PCI to PCI bridge (PCIE port 2)
        00:07.0 PCI bridge: Advanced Micro Devices [AMD] RS780 PCI to PCI bridge (PCIE port 3)
        00:11.0 SATA controller: Advanced Micro Devices [AMD] nee ATI SB7x0/SB8x0/SB9x0 SATA Controller [AHCI mode]
        00:12.0 USB controller: Advanced Micro Devices [AMD] nee ATI SB7x0/SB8x0/SB9x0 USB OHCI0 Controller
        00:12.1 USB controller: Advanced Micro Devices [AMD] nee ATI SB7x0 USB OHCI1 Controller
        00:12.2 USB controller: Advanced Micro Devices [AMD] nee ATI SB7x0/SB8x0/SB9x0 USB EHCI Controller
        00:13.0 USB controller: Advanced Micro Devices [AMD] nee ATI SB7x0/SB8x0/SB9x0 USB OHCI0 Controller
        00:13.2 USB controller: Advanced Micro Devices [AMD] nee ATI SB7x0/SB8x0/SB9x0 USB EHCI Controller
        00:14.0 SMBus: Advanced Micro Devices [AMD] nee ATI SBx00 SMBus Controller (rev 3a)
        00:14.2 Audio device: Advanced Micro Devices [AMD] nee ATI SBx00 Azalia (Intel HDA)
        00:14.3 ISA bridge: Advanced Micro Devices [AMD] nee ATI SB7x0/SB8x0/SB9x0 LPC host controller
        00:14.4 PCI bridge: Advanced Micro Devices [AMD] nee ATI SBx00 PCI to PCI Bridge
        00:18.0 Host bridge: Advanced Micro Devices [AMD] Family 11h Processor HyperTransport Configuration (rev 40)
        00:18.1 Host bridge: Advanced Micro Devices [AMD] Family 11h Processor Address Map
        00:18.2 Host bridge: Advanced Micro Devices [AMD] Family 11h Processor DRAM Controller
        00:18.3 Host bridge: Advanced Micro Devices [AMD] Family 11h Processor Miscellaneous Control
        00:18.4 Host bridge: Advanced Micro Devices [AMD] Family 11h Processor Link Control
        01:05.0 VGA compatible controller: Advanced Micro Devices [AMD] nee ATI RS780M/RS780MN [Mobility Radeon HD 3200 Graphics]
        01:05.1 Audio device: Advanced Micro Devices [AMD] nee ATI RS780 HDMI Audio [Radeon HD 3000-3300 Series]
        03:00.0 Ethernet controller: Broadcom Corporation NetLink BCM5784M Gigabit Ethernet PCIe (rev 10)
        09:00.0 Network controller: Atheros Communications Inc. AR928X Wireless Network Adapter (PCI-Express) (rev 01)

    Read the article

  • With AMD style modules in JavaScript is there any benefit to namespaces?

    - by gman
    Coming from C++ originally, and seeing lots of Java programmers doing the same, we brought namespaces to JavaScript. See Google's Closure library as an example, where they have a main namespace, goog, and under that many more namespaces like goog.async and goog.graphics.

    But now, having learned the AMD style of requiring modules, it seems like namespaces are kind of pointless in JavaScript. Not only pointless, but even arguably an anti-pattern.

    What is AMD? It's a way of defining and including modules that removes all direct dependencies. Effectively you do this:

        // some/module.js
        define([
            'name/of/needed/module',
            'name/of/someother/needed/module',
        ], function(RefToNeededModule, RefToSomeOtherNeededModule) {
            ...code...
            return object or function
        });

    This format lets the AMD support code know that this module needs name/of/needed/module.js and name/of/someother/needed/module.js loaded. The AMD code can load all the modules and then, assuming no circular dependencies, call the define function on each module in the correct order, record the object/function returned by each module as it calls them, and then call any other modules' define function with references to those modules.

    This seems to remove any need for namespaces. In your own code you can call the reference to any other module anything you want. For example, if you had 2 string libraries, even if they define similar functions, as long as they follow the AMD pattern you can easily use both in the same module. No need for namespaces to solve that.

    It also means there are no hard-coded dependencies. For example, in Google's Closure any module could directly reference another module with something like var value = goog.math.someMathFunc(otherValue), and if you're unlucky it will magically work, whereas with AMD style you'd have to explicitly include the math library, since there are no globals with AMD and the module would otherwise have no reference to it.

    On top of that, dependency injection for testing becomes easy. None of the code in an AMD module references things by namespace, so there are no hard-coded namespace paths, and you can easily mock classes at testing time.

    Is there any other point to namespaces, or is that something that C++/Java programmers are bringing to JavaScript that arguably doesn't really belong?

    Read the article

  • 12.04 Unity 3D 80% CPU load with Compiz

    - by user39288
    EDIT: I have been able to determine that the problem is not Compiz but is actually Xorg. I don't know why, but by quickly maximizing terminal and taking a screenshot with top running before the problem went away, I was able to see that Xorg takes up 72% of CPU, with bamfdaemon taking up 18% and compiz taking up 14%. It seems the NVIDIA drivers are to blame; I will play more with the settings and perhaps do a clean nvidia-current install to try to fix the problem.

    I'm having a very annoying problem with high CPU usage. I'm running 12.04 with the latest drivers and nvidia-current installed. I had not had any issues for days; now I have a strange problem. Unity 3D runs great most of the time, 1-2% CPU usage with only Transmission running in the background, and windows open and close smoothly. However, no matter what programs are open, if I minimize all open programs to the Unity bar on the left, my CPU jumps to about 80% and all maximize and minimize effects slow down. Mouse movement stays smooth the whole time, but Unity becomes unresponsive for up to 30 seconds at times.

    Hitting Alt + Tab to bring up even a single window fixes the problem; the window I bring back up doesn't even have to be maximized to solve it. Hitting the Super key to bring up the Dash makes the CPU drop back to idle until I close it; then the high CPU usage resumes.

    I believe the problem is Compiz, but even with just terminal running top, I have to minimize it to the tray for the problem to show, so I can't see the problem process. I can only tell about the high CPU usage using indicator-sysmonitor. I even tried quitting the indicator, but I can still tell performance is very poor in all applications when everything is minimized.

    I reset Compiz back to defaults, tried going to the post-release-update NVIDIA drivers, and played with the vsync settings in both the NVIDIA settings and Compiz. I even forced the refresh rate, but cannot solve the problem. The problem does NOT occur in Unity 2D.

    Specs: Core 2 Duo 2.0 GHz, 4 GB DDR2 RAM, 2x 320 GB HDDs in RAID 0, and an NVIDIA GTX 260M graphics card.

    Read the article

  • Best Frameworks/libraries/engines for 2D multiplayer C# Web-based RPG

    - by Thirlan
    The title is a mouthful but important, because I'm looking to meet specific criteria, and it's complex enough that I need a lot of help in finding what I'm looking for. I really want people's suggestions, because I trust them a lot more than anything else, so I just need to clearly define what it is I want:

        1. The game is a 2D RPG. Think of Secret of Mana.
        2. The game is online multiplayer, but not MMO-sized.
        3. The game must be web-based! I'm looking to the future and want to hit as many platforms as possible. I'm leaning towards WebGL because of this, but still looking around.
        4. Since the users see the game through the web browser, the front end should be mainly responsible for drawing, taking input and some basic checking, such as preliminary collision detection. This is important because it means the game engine is NOT on the client's machine.
        5. The server should be responsible for the game engine and all the calculations. This means the server is doing all the work and the client is mostly a dumb terminal.
        6. The server language is C#.
        7. I'm looking for fast project execution, so I want to use as many pre-existing tools as possible. This makes sense because I'm making a game here, not an engine. I'm not creating some new revolutionary graphics or pushing the physics engines to the next level.
        8. I have a preference for commercially supported tools.
        9. For game-mechanics reasons, and because of reasons 4 and 5, I don't think I can use existing 2D RPG engines. I've seen them out there, and I fear that if I try to use them they will have too many restrictions, but I'll be happy to hear suggestions.

    So all this means I need a game engine on the server, or maybe just a physics engine, and then I need another engine/library to draw everything the server is sending to the client in the web browser (a rough sketch of that client/server split follows below). Maybe this is how 50% of games on the web work and there are plenty of frameworks that support this! I wouldn't actually know :( but my gut is telling me that most web games are single-player and 90% of the game runs on the client. So... any suggestions?
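
    Since the server language is C#, here is a hedged sketch of the dumb-client/authoritative-server split described in points 4 and 5 (all names are illustrative, and no particular networking library is assumed): clients send raw input, the server runs the entire simulation at a fixed tick, and each tick it broadcasts a snapshot that the browser merely draws.

        using System.Collections.Concurrent;
        using System.Collections.Generic;

        // Illustrative authoritative loop: the browser only draws what the
        // server sends and forwards key presses back to it.
        class GameServer
        {
            private readonly ConcurrentQueue<(int PlayerId, string Input)> inbox =
                new ConcurrentQueue<(int, string)>();
            private readonly Dictionary<int, (float X, float Y)> positions =
                new Dictionary<int, (float X, float Y)>();

            public void ReceiveInput(int playerId, string input)
            {
                inbox.Enqueue((playerId, input));
            }

            // Called at a fixed tick (e.g. 20 Hz) by whatever host runs the loop.
            public IReadOnlyDictionary<int, (float X, float Y)> Tick(float dt)
            {
                while (inbox.TryDequeue(out var msg))
                {
                    (float X, float Y) p;
                    if (!positions.TryGetValue(msg.PlayerId, out p)) p = (0f, 0f);

                    // All game rules and collision live here, server-side.
                    if (msg.Input == "right") p.X += 100f * dt;
                    if (msg.Input == "left")  p.X -= 100f * dt;

                    positions[msg.PlayerId] = p;
                }
                return positions; // serialize and push to clients, e.g. over WebSockets
            }
        }

    The snapshot-per-tick shape also matches the WebGL leaning: the client keeps a local scene graph purely for rendering and reconciles it against each snapshot, which is what keeps the client-side collision checks merely preliminary.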

    Read the article

  • Why does my ID3DXSprite appear to be incorrectly scaled?

    - by Bjoern
    I am using D3D9 to render some simple things: a movie as the backmost layer, then some text messages on top of that (ID3DXFont), and now I wanted to add some buttons. Before adding the buttons everything seemed to work fine, and I was already using an ID3DXSprite for the text. Now I am loading some graphics for the buttons, but they seem to be scaled to roughly 1.2 times their original size. In my test window I centered the graphic, but being too big it just doesn't fit well: the client area is 640x360 and the graphic is 440 pixels wide, so I expect 100 pixels on the left and right. The left side is fine (I took a screenshot and "counted" the pixels in Photoshop), but on the right there are only about 20 pixels.

    My rendering code is very simple (I am omitting error checks, et cetera, for brevity):

        // initially viewport was set to width/height of client area

        // clear device
        m_d3dDevice->Clear( 0, NULL, D3DCLEAR_TARGET|D3DCLEAR_STENCIL|D3DCLEAR_ZBUFFER, D3DCOLOR_ARGB(0,0,0,0), 1.0f, 0 );

        // begin scene
        m_d3dDevice->BeginScene();

        // render movie surface (just two triangles to which the movie is rendered)
        m_d3dDevice->SetRenderState(D3DRS_ALPHABLENDENABLE, false);
        m_d3dDevice->SetSamplerState( 0, D3DSAMP_MAGFILTER, D3DTEXF_LINEAR ); // bilinear filtering
        m_d3dDevice->SetSamplerState( 0, D3DSAMP_MINFILTER, D3DTEXF_LINEAR ); // bilinear filtering
        m_d3dDevice->SetTextureStageState( 0, D3DTSS_COLORARG1, D3DTA_TEXTURE );
        m_d3dDevice->SetTextureStageState( 0, D3DTSS_COLORARG2, D3DTA_DIFFUSE ); // ignored
        m_d3dDevice->SetTextureStageState( 0, D3DTSS_COLOROP, D3DTOP_SELECTARG1 );
        m_d3dDevice->SetTexture( 0, m_movieTexture );
        m_d3dDevice->SetStreamSource(0, m_displayPlaneVertexBuffer, 0, sizeof(Vertex));
        m_d3dDevice->SetFVF(Vertex::FVF_Flags);
        m_d3dDevice->DrawPrimitive(D3DPT_TRIANGLELIST, 0, 2);

        // render sprites
        m_sprite->Begin(D3DXSPRITE_ALPHABLEND | D3DXSPRITE_SORT_TEXTURE | D3DXSPRITE_DO_NOT_ADDREF_TEXTURE);

        // text drop shadow
        m_font->DrawText( m_playerSprite, m_currentMessage.c_str(), m_currentMessage.size(),
                          &m_playerFontRectDropShadow, DT_RIGHT|DT_TOP|DT_NOCLIP, m_playerFontColorDropShadow );

        // text
        m_font->DrawText( m_playerSprite, m_currentMessage.c_str(), m_currentMessage.size(),
                          &m_playerFontRect, DT_RIGHT|DT_TOP|DT_NOCLIP, m_playerFontColorMessage );

        // control object
        m_sprite->Draw( m_texture, 0, 0, &m_vecPos, 0xFFFFFFFF ); // draws a few objects like this
        m_sprite->End();

        // end scene
        m_d3dDevice->EndScene();

    What did I forget to do here? Except for the control objects (the play button, pause button, etc., placed on a "panel" about 440 pixels wide), everything seems fine; the objects are positioned where I expect them, just too big. By the way, I loaded the images using D3DXCreateTextureFromFileEx (resizing the window, reacting to a lost device, etc., works fine too). For experimenting, I added some code to take an identity matrix and scale it down to 0.75f on the x/y axes, which gave me the expected result for the controls (but also made the text smaller and out of position), but I don't know why I would need to scale anything. My rendering code is so simple; I just wanted to draw my 2D objects 1:1 at the size they came from the file. I am really very inexperienced in D3D, so the answer might be very simple...
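    One thing worth checking, offered as a guess rather than a confirmed diagnosis: ID3DXSprite draws in backbuffer pixel coordinates, so if the backbuffer size differs from the window's client area, Present() stretches the whole frame and sprites come out scaled even though the drawing code is correct. A minimal sketch of creating the device with a backbuffer that matches the client area (the HWND hWnd, the IDirect3D9* d3d, and the function name are illustrative assumptions):

        #include <windows.h>
        #include <d3d9.h>

        // Sketch: size the backbuffer to the window's client area so that
        // Present() copies 1:1 instead of stretching the frame.
        IDirect3DDevice9* CreateMatchingDevice(IDirect3D9* d3d, HWND hWnd)
        {
            RECT rc;
            GetClientRect(hWnd, &rc);

            D3DPRESENT_PARAMETERS d3dpp = {};
            d3dpp.Windowed         = TRUE;
            d3dpp.SwapEffect       = D3DSWAPEFFECT_DISCARD;
            d3dpp.BackBufferFormat = D3DFMT_UNKNOWN;      // desktop format in windowed mode
            d3dpp.BackBufferWidth  = rc.right - rc.left;  // match the client area exactly,
            d3dpp.BackBufferHeight = rc.bottom - rc.top;  // otherwise Present() scales output
            d3dpp.hDeviceWindow    = hWnd;

            IDirect3DDevice9* device = NULL;
            d3d->CreateDevice(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, hWnd,
                              D3DCREATE_HARDWARE_VERTEXPROCESSING,
                              &d3dpp, &device);
            return device; // NULL on failure; error handling omitted for brevity
        }

    On a window resize, the same parameters would go through device->Reset() (after calling OnLostDevice() on D3DPOOL_DEFAULT resources such as the sprite and font). If the backbuffer already matches, comparing the viewport size against the backbuffer is the next cheap test, since a mismatched viewport scales output the same way.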

    Read the article

  • Why is my arrow texture being drawn in odd places?

    - by tyjkenn
    This is a script I wrote that places an arrow on the screen pointing to an enemy off-screen, or, if the enemy is on-screen, places an arrow hovering above the enemy. Everything seems to work, except that for some odd reason I see random arrows floating around, often skewed and resized (which I really don't understand, because I only rotate and place in this script). Even when I have only one enemy in the scene, I still see these random arrows; it should only be drawing one per enemy. Note: when all enemies are removed, no arrows appear.

        var arrow : Texture;
        var cam : Camera;
        var dim : int = 30;

        function OnGUI() {
            var objects = GameObject.FindGameObjectsWithTag("Enemy");
            for(var ob : GameObject in objects) {
                var pos = cam.WorldToViewportPoint(ob.transform.position);
                if(gameObject.GetComponent(FollowCamera).target != null){
                    var tar = gameObject.GetComponent(FollowCamera).target.parent;
                }
                if(pos.z > 1 && ob.transform != tar){
                    var xDiff = (pos.x*cam.pixelWidth)-(cam.pixelWidth/2);
                    var yDiff = (pos.y*cam.pixelHeight)-(cam.pixelHeight/2);
                    var angle = Mathf.Rad2Deg*Mathf.Atan(yDiff/xDiff)+180;
                    if(xDiff>0) angle += 180;
                    var dist = Mathf.Sqrt(xDiff*xDiff + yDiff*yDiff);
                    var slope = yDiff/xDiff;
                    var camSlope = cam.pixelHeight/cam.pixelWidth;
                    var theX = -1000.0;
                    var theY = -1000.0;
                    var mult = 0;
                    var temp;
                    if(Mathf.Abs(xDiff)>(cam.pixelWidth/2) || Mathf.Abs(yDiff)>(cam.pixelHeight/2)){
                        // touching right
                        if(slope<camSlope && slope>-camSlope) {
                            if(xDiff>(cam.pixelWidth/2)) {
                                theX = cam.pixelWidth - (dim/2);
                                mult = -1;
                            } else if(xDiff<-(cam.pixelWidth/2)) {
                                theX = (dim/2);
                                mult = 1;
                            }
                            temp = ((cam.pixelWidth/2)*yDiff)/xDiff;
                            theY = (cam.pixelHeight/2)+(mult*temp);
                        } else {
                            if(yDiff>(cam.pixelHeight/2)) {
                                theY = (dim/2);
                                mult = 1;
                            } else if(yDiff<-(cam.pixelHeight/2)) {
                                theY = cam.pixelHeight - (dim/2);
                                mult = -1;
                            }
                            temp = ((cam.pixelHeight/2)*xDiff)/yDiff;
                            theX = (cam.pixelWidth/2)+(mult*temp);
                        }
                    } else {
                        angle = -90;
                        theX = (cam.pixelWidth/2)+xDiff;
                        theY = (cam.pixelHeight/2)-yDiff-dim;
                    }
                    GUIUtility.RotateAroundPivot(-angle, Vector2(theX, theY));
                    Graphics.DrawTexture(Rect(theX-(dim/2), theY-(dim/2), dim, dim), arrow, null);
                    GUIUtility.RotateAroundPivot(angle, Vector2(theX, theY));
                }
            }
        }
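    One plausible cause, offered as a guess rather than a confirmed diagnosis: OnGUI is invoked several times per frame for different events (Layout, Repaint, input), and Graphics.DrawTexture draws immediately during every one of those invocations instead of only during repaint, while the GUI transform state is not guaranteed to be set up consistently outside the Repaint event — which could leave stray, oddly transformed copies on screen. A minimal guard in the same UnityScript as the question would be:

        function OnGUI() {
            // Draw only during the Repaint event; Graphics.DrawTexture is an
            // immediate call and otherwise runs once per OnGUI event per frame.
            if (Event.current.type != EventType.Repaint)
                return;
            // ... arrow placement code from the question goes here ...
        }

    Switching to GUI.DrawTexture, which respects GUI.matrix and only draws on repaint, may also be a safer fit here than Graphics.DrawTexture.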

    Read the article

  • Setting up Ubuntu on my mother's computer

    - by idealmachine
    Intended use
    My mother had an old Compaq desktop computer running Windows 98, which she used for occasional Web browsing and playing cards. The name of her card game is Hoyle Card Games 3. Although I had to repair it several times over the last 10 years, it worked fine until it finally died at the end of last year.

    Hardware specifications
    A relative brought up a newer computer soon afterward:
    - Operating system: Windows XP
    - Asus K8N motherboard (with broken on-board sound; getting a sound card)
    - Athlon 64? processor (don't remember the clock speed)
    - 512 MB RAM
    - Hope the graphics card works...
    The replacement sound card will be one of:
    - Ensoniq ES1370 AudioPCI
    - Diamond Monster Sound MX300 (Aureal chipset)
    - Sound Blaster Audigy 2 SE

    Peripherals
    - HP Scanjet 3400c scanner (USB connected)
    - HP LaserJet multi-function printer (parallel port connected; printing works with a PCL driver)
    - Same serial mouse as the old computer

    Question
    I had set up an SSH/VNC connection to allow for remotely working out problems. Or so I thought. A month later, the computer would not boot, rendering the SSH connection useless and an OS reinstall necessary. Unfortunately, I have neither the original Windows disc nor the product key. Unless I were to pay $200 for a full Windows 7 Home Premium license for my computer, I would not be able to re-install Windows XP on hers. I consider myself an advanced Linux user, having used Debian for years. So here are my questions; I have only one day to decide whether to use Ubuntu or buy Windows:
    1. A quick search leads me to believe all the hardware listed above is supposed to work with Linux, but am I mistaken?
    2. Would Ubuntu/Xubuntu suffice (specify which one if it matters), or would I be better off paying the $200 necessary for Windows XP?
    3. Is the card game likely to run on Wine? I believe the minimum system requirement is Windows 95.
    4. Failing Wine compatibility, will VirtualBox run fast enough on such a computer (Windows 98 as the guest OS)?
    5. Are there any free card games just as good? She plays mainly Bridge, Poker, and Solitaire.
    6. Is there any "Large Fonts" option for those with poor vision? The lack of one would be a big disadvantage.
    BONUS: Although I would probably replace the old mouse upon a move to Ubuntu, is it even possible to get a serial mouse working?

    Read the article

  • Second Monitor stays black/in power save mode

    - by Rob
    I'm using two monitors: a Belinea o.display 1 (recognized as a Rogen Tech Distribution Inc 20" by Ubuntu, but working fine) on the DVI output (connected via a DVI-to-VGA adapter) as my primary monitor, and a Dell 19" (recognized correctly) on the HDMI output (via an HDMI-to-DVI adapter) as the secondary monitor. The graphics controller is a GeForce 9500 GS. I'm running a fully updated Ubuntu 13.04 with nouveau 1:1.0.7-0ubuntu1.

    The problem is that the second monitor (the Dell) never seems to come out of standby during boot: the screen stays black and the status LED on the monitor stays orange (it's green when the monitor is on). It is correctly recognized and the size of the desktop is set accordingly; it just stays black. Changing any setting via xrandr/arandr/etc. does nothing. The on-screen menu of the monitor reports it to be in power-save mode. When using the proprietary NVIDIA drivers, the second monitor works just fine, but these drivers cause a lot of other problems on my system, so I would really like to avoid them.

    On Ubuntu 12.10 I had found a workaround: when moving the relative position of the second monitor slightly down and then up again, it would turn on and function normally:

        xrandr --output DVI-I-1 --mode 1680x1050 --pos 1280x0 --rotate normal --output HDMI-1 --mode 1280x1024 --pos 0x88 --rotate normal
        sleep 2
        xrandr --output DVI-I-1 --mode 1680x1050 --pos 1280x0 --rotate normal --output HDMI-1 --mode 1280x1024 --pos 0x0 --rotate normal

    This workaround stopped working after the update to 13.04, and now I'm looking for a new solution. Has anyone experienced something similar?

    xrandr output:

        Screen 0: minimum 320 x 200, current 2960 x 1050, maximum 8192 x 8192
        DVI-I-1 connected 1680x1050+0+0 (normal left inverted right x axis y axis) 433mm x 270mm
           1680x1050      60.0*+
           1280x1024      75.0     60.0
           1280x960       60.0
           1152x864       75.0
           1024x768       75.1     72.0     70.1     60.0
           832x624        74.6
           800x600        72.2     75.0     60.3     56.2
           640x480        72.8     75.0     66.7     60.0
           720x400        70.1
        HDMI-1 connected 1280x1024+1680+0 (normal left inverted right x axis y axis) 376mm x 301mm
           1280x1024      60.0*+   75.0
           1152x864       75.0
           1024x768       75.1     60.0
           800x600        75.0     60.3
           640x480        75.0     60.0
           720x400        70.1

    lshw -c video (field labels translated from German):

        *-display
             description: VGA compatible controller
             product: G96 [GeForce 9500 GS]
             vendor: NVIDIA Corporation
             physical id: 0
             bus info: pci@0000:01:00.0
             version: a1
             width: 64 bits
             clock: 33MHz
             capabilities: pm msi pciexpress vga_controller bus_master cap_list rom
             configuration: driver=nouveau latency=0
             resources: irq:16 memory:fa000000-faffffff memory:d0000000-dfffffff memory:f8000000-f9ffffff ioport:df00(size=128) memory:fb000000-fb07ffff

    Thanks for your help!

    Read the article

  • How can a developer realize the full value of his work [closed]

    - by Jubbat
    I honestly don't want to work as a developer in a company anymore after all I have seen. I want to continue developing software, yes, but not in the way I see it all around me. And I'm in London, a city that attracts many great developers from the whole world, so it shouldn't be a problem of location. So, what are my concerns?

    First of all, best case scenario: you are paying the manager's salary out of yours. You are consistently underpaid, making up for the average manager's negative net return plus his whole salary.

    Typical scenario: I am a reasonably good developer with common sense who cares about readable code and attention to basic principles. I have found, way too often, overconfident and arrogant developers with a severe lack of common sense. Personally, I don't want to follow TDD or Agile practices like all the cool kids nowadays. I would read about them, form my own opinion, and take what I feel is useful, but not follow them sheepishly. I want to work with people who understand that you have to design good interfaces, that you absolutely have to document your code, and that readability is at the top of your priorities — and people who don't have a cargo-cult mentality either. For instance, the same person who asked me about design patterns in a job interview later told me that something like a List of Map of Vector of Map of Set (in Java) is very readable. Why would someone ask me about design patterns if they can't even grasp encapsulation? These kinds of things are the norm; I've seen many examples, and worse than that too, from very well paid senior devs, by the way. Every second you spend working with people with such a lack of common sense and clear thinking, you are effectively losing money by being terribly inefficient with your time. Yet, with all these inefficiencies, the average developer earns a high salary.

    So I tried working on my own, although I don't like the idea; I prefer a healthy exchange of opinions and ideas, and division of tasks. I did a bit of online freelancing for a while, but I think working in a sweatshop might be more enjoyable. Also, I studied computer engineering, yet you are in an environment in which your client will presume you don't have any formal education, because there is no way to prove it. Again, you are undervalued. You could try building a product, yes, but of course luck is a big factor.

    I wonder if there is a way to work in something you can do well — software development — be valued for the quality of your work and paid accordingly, and where you and only you get fairly paid for the value you generate. I know that what I have written seems somewhat unlikely, but I strongly feel this way. Hopefully someone will understand me and has already figured this out. I don't think I'm alone in this kind of feeling.

    Read the article
