Search Results

Search found 39156 results on 1567 pages for 'device driver development'.

  • Video lags/freezes in SMPlayer and VLC

    - by RanRag
    When I try to play my video files in SMPlayer they play fine, but as soon as I switch to fullscreen mode (16:9) the following happens:
    1) The video starts lagging.
    2) Audio and video go out of sync.
    3) CPU usage rises to ~50%.
    4) SMPlayer starts to hang.
    My current SMPlayer configuration:
    1) Video output driver = x11 (slow)
    2) Audio output driver = alsa (0.0-HDA Intel)
    3) Cache = 8192 KB
    4) Threads for decoding (MPEG-1/2 and H.264 only) = 2
    Things I tried to solve this problem:
    1) Changing the video output driver to xv and gl.
    2) Changing the audio output driver to pulse.
    3) Increasing the cache size, and also using nocache.
    Everything works fine on Windows, but I don't want to switch to Windows just to play video files. My system configuration: Acer Aspire One D270, Atom N2600 (Cedar Trail) 1.6 GHz, 2 GB memory, Intel GMA 3600 graphics, Ubuntu 12.04, kernel release 3.2.0-23-generic-pae. Everything else is working fine: I have no resolution issues, and Bluetooth and wireless also work. Just ask and I will be happy to post any other log file; I have already gathered the SMPlayer log, the MPlayer terminal output, and the codec information for the currently playing file.
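    A quick way to isolate whether the video output driver is the culprit is to play the same file with mplayer directly from a terminal and switch drivers per run. Just a sketch; the file name is a placeholder, and -vo, -framedrop and -lavdopts threads=2 are standard mplayer options:

        # try each output driver in turn, then toggle fullscreen with "f" and watch CPU usage
        mplayer -vo x11 -framedrop video.mkv
        mplayer -vo xv  -framedrop video.mkv
        mplayer -vo gl  -framedrop -lavdopts threads=2 video.mkv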

  • Switching DVI socket

    - by lurscher
    I have Ubuntu 10.10 x86_64 with an Nvidia 9800 GT and Nvidia driver version 270.41.06. My video card has two DVI sockets, but I only use the single-monitor configuration. Now, I think the main DVI socket might be busted, so I want to try to enable the other one as the main socket; however, I don't know how to achieve that. I tried just plugging the monitor into that socket but it won't auto-detect it (it would have been way too easy to just work). This is my xorg.conf:
        Section "ServerLayout"
            Identifier "Layout0"
            Screen 0 "Screen0" 0 0
            InputDevice "Keyboard0" "CoreKeyboard"
            InputDevice "Mouse0" "CorePointer"
            Option "Xinerama" "0"
        EndSection
        Section "Files"
        EndSection
        Section "InputDevice"
            # generated from default
            Identifier "Mouse0"
            Driver "mouse"
            Option "Protocol" "auto"
            Option "Device" "/dev/psaux"
            Option "Emulate3Buttons" "no"
            Option "ZAxisMapping" "4 5"
        EndSection
        Section "InputDevice"
            # generated from default
            Identifier "Keyboard0"
            Driver "kbd"
        EndSection
        Section "Monitor"
            # HorizSync source: builtin, VertRefresh source: builtin
            Identifier "Monitor0"
            VendorName "Unknown"
            ModelName "AOC"
            HorizSync 31.5 - 84.7
            VertRefresh 60.0 - 78.0
            ModeLine "1080p" 172.8 1920 2040 2248 2576 1080 1081 1084 1118 -hsync +vsync
            Option "DPMS"
        EndSection
        Section "Device"
            Identifier "Device0"
            Driver "nvidia"
            VendorName "NVIDIA Corporation"
            BoardName "GeForce 9800 GT"
        EndSection
        Section "Screen"
            # Removed Option "metamodes" "1024x768 +0+0"
            Identifier "Screen0"
            Device "Device0"
            Monitor "Monitor0"
            DefaultDepth 24
            Option "CustomEDID" "CRT-0:/home/charlesq/lg.bin"
            Option "TVStandard" "HD1080p"
            Option "TwinView" "0"
            Option "TwinViewXineramaInfoOrder" "CRT-0"
            Option "metamodes" "1080p +0+0"
            SubSection "Display"
                Depth 24
            EndSubSection
        EndSection

  • Why would video stutter on HDMI but not on DVI?

    - by CorvT
    I've got a system running Ubuntu 12.04 with an i3 2120T CPU/GPU. When I play video through mplayer, I notice that when I'm hooked up to a screen via HDMI there is a small stutter (1-2 frames) every few seconds. I don't see this happening when I connect via DVI on the same screen. Resolution and refresh rate are the same for both HDMI and DVI, so I'm not sure where else the problem could be coming from. I've also tried two different screens, and different cables. I see the stutter with either HDMI-HDMI cables, or a DVI-HDMI cable with DVI from the PC and HDMI into the screen. I don't see the stutter with DVI-DVI cables, or when I use HDMI-DVI cables with HDMI from the PC to DVI into the screen. I've also tried using an AMD 5XXX series card with the open source radeon driver, and saw the same problem. I then tried an nVidia GeForce 210 card with the closed source driver, and the stutter went away. To me this smells like a driver/mesa/glx issue (since the problem went away with the nvidia card/driver), but I have no idea how to track it down.

  • I can't enable extra effects in Ubuntu 10.10. Please help?

    - by jasoncruz98
    I installed Ubuntu 10.10 alongside Ubuntu 11.10 to use an older version of Compiz. On Ubuntu 11.10, Compiz was enabled by default and I didn't need any graphics driver to enjoy the effects. All I had to do was install CompizConfig Settings Manager and enable the extra effects. That was Compiz 0.9.6. Now, after installing Ubuntu 10.10, when I first logged in the graphics were slow. When I dragged a window from one end of the screen to the other, the whole screen would blur and pixelate and it was very laggy. I tried going to System > Preferences > Appearance and selecting Extra effects on the Visual Effects tab, but all I got was "Desktop effects could not be enabled". I don't know whether I should install the additional (proprietary) drivers, because my Internet is slow and it would take a long time. Furthermore, in Ubuntu 11.10, after I installed the proprietary graphics driver I immediately went into fallback mode and wasn't even offered the option to set my desktop session to Ubuntu 3D. I didn't need the driver to run Compiz in Ubuntu 11.10; it just ran smoothly. But in Ubuntu 10.10, everything is laggy. Should I install the ATI/AMD proprietary FGLRX graphics driver on Ubuntu 10.10 to enable extra effects? Or is there something else wrong with my system? Here is the output of lspci -nn | grep VGA:
        00:02.0 VGA compatible controller [0300]: Intel Corporation Sandy Bridge Integrated Graphics Controller [8086:0116] (rev 09)
        01:00.0 VGA compatible controller [0300]: ATI Technologies Inc Device [1002:6760]
    Here is the output of the same command in Ubuntu 11.10 (in this case the output that is correct, because I don't have the Sandy Bridge integrated graphics controller):
        00:02.0 VGA compatible controller [0300]: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller [8086:0116] (rev 09)
        01:00.0 VGA compatible controller [0300]: ATI Technologies Inc NI Seymour [AMD Radeon HD 6470M] [1002:6760]

  • Ubuntu 12.10 won't display properly after kernel upgrade

    - by Daniel
    After updating the system today, Ubuntu doesn't display correctly. The desktop now looks like this. It was working properly before. I had to use the terminal to run the Synaptic package manager so I could view the update history, which is as follows:
        Commit Log for Wed Nov 7 11:50:36 2012
        Upgraded the following packages:
        linux-image-generic (3.5.0.17.19) to 3.5.0.18.21
        Installed the following packages:
        linux-image-3.5.0-18-generic (3.5.0-18.29)
        linux-image-extra-3.5.0-18-generic (3.5.0-18.29)
    Prior to this issue, the last active driver was nvidia-current-updates, version 304.51. I tried using the nvidia-current driver, version 304.51.really.304.43, instead, but the problem persists. I tried running nvidia-settings from a terminal so I could try configuring something, but the application informs me that the Nvidia driver is not being used. As the x-swat repository has nothing for Quantal, I desperately used the unstable x-edgers repository and upgraded, but to no avail, so I purged it. The display should normally be full HD, but the only available resolutions now are 1024x768 (4:3) and 800x600 (4:3). The system is a Dell XPS L702X with an NVIDIA GeForce GT 555M and a 17" screen. How can I fix this problem? Update: I tried using the Nouveau third-party driver and this fixes the issue. However, if you have any idea how to get the Nvidia drivers working properly with the latest kernel, please share, as I've noticed some videos playing very slowly on the system, though I'm not sure exactly why.
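    One thing worth checking (a sketch of the usual recovery steps, not a guaranteed fix): installing a new kernel image means the proprietary Nvidia kernel module has to be rebuilt for that kernel, which DKMS normally does automatically. You can see whether that happened and force it if not:

        # list the modules DKMS has built and for which kernel versions
        dkms status
        # rebuild any modules missing for the currently running kernel
        sudo dkms autoinstall
        # as a fallback, reinstalling the driver package triggers a rebuild
        sudo apt-get install --reinstall nvidia-current-updates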

  • How to get Nvidia graphics working on Sony Z laptop?

    - by projectshave
    I have an older Sony VAIO Z 590 laptop with switchable graphics between Intel and Nvidia GeForce 9300M. It is NOT Optimus. I did a clean install of Ubuntu 12.04. Everything works, but it's using Unity 2D with the Intel drivers. I've tried loading the Nvidia drivers from "Additional Drivers", but it says "this driver is activated but not currently in use". When I run "nvidia-settings", an error window pops up to say "You do not appear to be using the NVIDIA X drivers." "lspci" shows both graphics cards. Let me know if I should add more info. How do I get the Nvidia graphics and Unity 3D working? More info:
        $ lshw -short -class display
        H/W path       Device   Class     Description
        ==============================================
        /0/100/1/0              display   G98 [GeForce 9300M GS]
        /0/100/2                display   Mobile 4 Series Chipset Integrated Graphics C
        $ glxinfo
        name of display: :0
        Xlib: extension "GLX" missing on display ":0".   (repeated several times)
        Error: couldn't find RGB GLX visual or fbconfig
    Excerpts from Xorg.0.log:
        [ 16.373] (II) LoadModule: "glx"
        [ 16.373] (II) Loading /usr/lib/x86_64-linux-gnu/xorg/extra-modules/libglx.so
        [ 16.386] (II) Module glx: vendor="NVIDIA Corporation"
        [ 16.386]    compiled for 4.0.2, module version = 1.0.0
        [ 16.386]    Module class: X.Org Server Extension
        [ 16.386] (II) NVIDIA GLX Module 295.49 Tue May 1 00:09:10 PDT 2012
        [ 16.608] (II) NVIDIA dlloader X Driver 295.49 Mon Apr 30 23:48:24 PDT 2012
        [ 16.608] (II) NVIDIA Unified Driver for all Supported NVIDIA GPUs
        [ 17.693] (EE) Failed to initialize GLX extension (Compatible NVIDIA X driver not found)

  • Network print to brother MFC-7420

    - by trampster
    I am trying to print to a Brother MFC-7420 from my Ubuntu 10.04 machine. The Brother is attached to a Windows XP machine and is shared. This is what I have tried: System > Administration > Printing, Add, expand Network Printer, Windows Printer via SAMBA, Browse (I can find the printer with no problems here), Forward, Choose Driver dialog, Brother; my printer is not in this list. So the next thing I tried was to download the printer driver from here: http://welcome.solutions.brother.com/bsc/public_s/id/linux/en/download_prn.html The driver installed fine but my printer still does not appear in the list. I also tried installing the CUPS wrapper, but that gave the following error:
        Restarting Common Unix Printing System: cupsd [ OK ]
        cp: cannot stat `/usr/share/cups/model/MFC7420.ppd': No such file or directory
        dpkg: error processing cupswrappermfc7420 (--install):
        subprocess installed post-installation script returned error exit status 1
        Errors were encountered while processing:
        cupswrappermfc7420
    I also tried connecting the printer directly, but even though I have installed the driver, when I go to Printers and click on the printer (it shows up fine as a USB printer) it says "searching for drivers" and then gives me a list, the same list as before, which doesn't have my printer. It really shouldn't be this hard. On Windows you don't have to install anything, it just works, and the same is true for my brother's Mac. How do I print to my printer?

  • nVidia 9800 GTX+ X11 fails to initialize. no unity or lightdm

    - by rlemon
    I have just upgraded my work PC to 12.04 (not an in-place upgrade, a fresh install), installing updates during the install, and after everything has loaded (with no errors) and I restart, I am brought to a console 1 tty login. Console 7 looks like this: IIRC I did not have to finagle with my drivers on 11.10 to get this card working. If this is in fact a driver bug I will remove this post and submit the bug, but I'm not 100% confident that it is. I attempted to run unity --reset and got this: Lastly I tried
        $ sudo apt-get install nvidia-current
    which tells me nvidia-current is already the newest version. So I ran
        $ sudo dpkg-reconfigure nvidia-current
    which says
        /usr/sbin/dpkg-reconfigure: nvidia-current is broken or not fully installed.
    Anything I can try from here would be awesome. Currently the only way to get the system up and running was to shut down, plug one of my monitors into the onboard video, enable the onboard video card in the BIOS, then boot back up (and on my single monitor everything is fine). Update: I have been able to boot fresh with the external card plugged in as long as I don't take the updates during the install. Beyond this, if I install only the Nvidia drivers (nvidia-current or nvidia-current-updates) from the main server (or the Canadian one) I then get the problems. My proposal, though I don't know where to start with it: can I try installing the previous version of this driver? In the past, on another machine, I had issues with my NIC driver being funky; I downgraded to the previous driver and bam, everything was merry and well.

  • How do I install Intel ethernet drivers?

    - by Pat
    I just bought a Lenovo T420. I booted the live CD and couldn't connect to my wireless router, so I figured I needed the drivers for it. I have the latest version of Ubuntu (12.04). It's an Intel® 82579 Gigabit Ethernet Controller. So I went to Intel's website, downloaded the drivers for Linux and compiled them with make install. Here's the tutorial I followed: Linux Driver Install. I have a few questions, though. What do these commands do?
        modprobe e1000e
        ifconfig eth0 up
        dhclient eth0
        ping intel.com
    From what I've read, modprobe seems to add the driver to the list of drivers for the kernel, but is that operation only good for one session? I don't understand what the two other lines are doing either. What does the author mean at the end when he mentions: "Note: Whenever the kernel version is upgraded you will need to rebuild this driver."? And are these steps permanent, or do I have to add them to some boot.conf file for every time I boot? Anyway, the steps worked for me and I have established a connection with my router; I just need to know whether I need to do additional steps to keep the driver permanently, etc.
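    For reference, a rough sketch of what each step does and how to make the module load survive a reboot; the file path is the standard Ubuntu one, not something specific to the linked tutorial:

        # modprobe e1000e   - loads the freshly built driver module into the running kernel (lasts until reboot)
        # ifconfig eth0 up  - brings the network interface up
        # dhclient eth0     - requests an IP address from the router via DHCP
        # ping intel.com    - verifies that the connection works
        # To load the module automatically at every boot, list it in /etc/modules:
        echo e1000e | sudo tee -a /etc/modules

    The interface itself is normally brought up by NetworkManager, so the ifconfig/dhclient steps only matter when testing by hand. The note about kernel upgrades is there because a hand-compiled module is tied to the kernel version it was built against and has to be recompiled after each kernel update.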

  • Wine causes twin view to break

    - by deanvz
    I have endlessly been playing around with the Nvidia X Server Settings and changing my xorg.conf file to get it to work for me, and on most days it's fine. In each instance I get it working for a while, and then this morning the most bizarre thing happened: the moment I open any type of Wine program (which never used to be a problem) my TwinView setup disappears and I am left with mirrored displays. I try to change the settings in the Nvidia driver, but it's not interested and the screens remain mirrored. I have a workaround: restart my PC... Below are the contents of my current xorg.conf file.
        # nvidia-settings: X configuration file generated by nvidia-settings
        # nvidia-settings: version 295.33 (buildd@zirconium) Fri Mar 30 13:43:34 UTC 2012
        Section "ServerLayout"
            Identifier "Layout0"
            Screen 0 "Screen0" 0 0
            InputDevice "Keyboard0" "CoreKeyboard"
            InputDevice "Mouse0" "CorePointer"
            Option "Xinerama" "0"
        EndSection
        Section "Files"
        EndSection
        Section "InputDevice"
            # generated from default
            Identifier "Mouse0"
            Driver "mouse"
            Option "Protocol" "auto"
            Option "Device" "/dev/psaux"
            Option "Emulate3Buttons" "no"
            Option "ZAxisMapping" "4 5"
        EndSection
        Section "InputDevice"
            # generated from default
            Identifier "Keyboard0"
            Driver "kbd"
        EndSection
        Section "Monitor"
            # HorizSync source: edid, VertRefresh source: edid
            Identifier "Monitor0"
            VendorName "Unknown"
            ModelName "LG Electronics W1934"
            HorizSync 30.0 - 83.0
            VertRefresh 56.0 - 75.0
            Option "DPMS"
        EndSection
        Section "Device"
            Identifier "Device0"
            Driver "nvidia"
            VendorName "NVIDIA Corporation"
            BoardName "GeForce 9400 GT"
        EndSection
        Section "Screen"
            Identifier "Screen0"
            Device "Device0"
            Monitor "Monitor0"
            DefaultDepth 24
            Option "TwinView" "1"
            Option "metamodes" "CRT-0: nvidia-auto-select +0+0, CRT-1: nvidia-auto-select +1440+0"
            SubSection "Display"
                Depth 24
            EndSubSection
        EndSection

  • Xubuntu Nvidia Settings not remembered

    - by hozza
    I have an Nvidia card and I'm using NVIDIA driver version 304.51 and the NVIDIA X Server Settings GUI. Everything works fine, but when I reboot and log in again both of my screens are set to +0 +0, so they mirror each other. I change the settings so that screen 2 (NEC LED) is to the left of screen 1 (DELL), click Apply, and then Save to X Configuration File. It all works, but when I log in again the settings are not remembered. This is my xorg config file; can anyone help out?
        # nvidia-settings: X configuration file generated by nvidia-settings
        # nvidia-settings: version 304.51 (buildd@komainu) Fri Oct 12 12:53:49 UTC 2012
        Section "ServerLayout"
            Identifier "Layout0"
            Screen 0 "Screen0" 0 0
            InputDevice "Keyboard0" "CoreKeyboard"
            InputDevice "Mouse0" "CorePointer"
            Option "Xinerama" "0"
        EndSection
        Section "Files"
        EndSection
        Section "InputDevice"
            # generated from default
            Identifier "Mouse0"
            Driver "mouse"
            Option "Protocol" "auto"
            Option "Device" "/dev/psaux"
            Option "Emulate3Buttons" "no"
            Option "ZAxisMapping" "4 5"
        EndSection
        Section "InputDevice"
            # generated from default
            Identifier "Keyboard0"
            Driver "kbd"
        EndSection
        Section "Monitor"
            # HorizSync source: edid, VertRefresh source: edid
            Identifier "Monitor0"
            VendorName "Unknown"
            ModelName "DELL 2005FPW"
            HorizSync 30.0 - 83.0
            VertRefresh 56.0 - 75.0
            Option "DPMS"
        EndSection
        Section "Device"
            Identifier "Device0"
            Driver "nvidia"
            VendorName "NVIDIA Corporation"
            BoardName "GeForce GT 640"
        EndSection
        Section "Screen"
            Identifier "Screen0"
            Device "Device0"
            Monitor "Monitor0"
            DefaultDepth 24
            Option "Stereo" "0"
            Option "nvidiaXineramaInfoOrder" "DFP-0"
            Option "metamodes" "DFP-0: nvidia-auto-select +1280+0, DFP-2: nvidia-auto-select +0+0"
            SubSection "Display"
                Depth 24
            EndSubSection
        EndSection
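    One workaround that often comes up for this (a sketch only, not a confirmed fix): nvidia-settings writes the saved settings to ~/.nvidia-settings-rc, but that file is only applied when nvidia-settings runs again. Adding the following command to the desktop session's startup applications re-applies it at every login:

        # load ~/.nvidia-settings-rc and exit without opening the GUI
        nvidia-settings --load-config-only

    If the layout is instead meant to come from xorg.conf, check that the file produced by "Save to X Configuration File" actually ends up at /etc/X11/xorg.conf, since saving there requires root.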

  • How to get wireless to work on my HP Compaq nx6325 Notebook on 12.04

    - by user211487
    When my laptop ran XP, the wireless worked perfectly. Now the only way I get internet is having it wired in my living room, which is MASSIVELY inconvenient for me seeing as my room is outside. I have downloaded the driver from 'Additional Drivers', restarted my computer (with difficulty; it also doesn't seem to like turning off, so I have to 'kill' it) and still nothing! I have looked all over the internet trying to find solutions but I just can't seem to get it to work. I have a USB wireless stick but I can't load the driver from the CD. I got this comment on a previous question:
    "Using either Synaptic Package Manager or Ubuntu's own Software Centre, remove "bcmwl-kernel-source" if it's already installed. Next, search for "firmware-b43-installer" and "b43-fwcutter" and install both. Finally, running Software Sources and checking the Additional Drivers tab you should find Broadcom STA Wireless Driver that you can select, install and on reboot hopefully have wireless working ok..."
    To which I replied:
    "Right, so I tried to find/remove "bcmwl-kernel-source" but couldn't find anything. I managed to install both "firmware-b43-installer" and "b43-fwcutter" using the terminal, after finding out how it worked. And lastly, I couldn't figure out how to find 'Broadcom STA Wireless Driver'; there are still no more additional drivers. And my wireless is still not working. My laptop has a button that turns wifi on and off, and since I put Ubuntu on it, it hasn't worked. And also, my computer still doesn't seem to restart properly; it just gets stuck on the Ubuntu loading screen."
    I hope that helps somewhat. I am VERY new to Linux, not very technical and in need of help! Running Ubuntu 12.04 32-bit on an HP Compaq nx6325 Notebook. Thank you to anyone who replies, it's much appreciated.
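    For reference, a sketch of the steps the quoted comment describes, plus a check of the radio kill switch; on many laptops the wifi button leaves the radio blocked, which would explain it not responding since the install. The package names are the standard Ubuntu 12.04 ones:

        # remove the Broadcom STA driver if it is installed
        sudo apt-get remove bcmwl-kernel-source
        # install the open b43 firmware instead
        sudo apt-get install firmware-b43-installer b43-fwcutter
        # check whether the radio is hard- or soft-blocked, and clear soft blocks
        rfkill list all
        sudo rfkill unblock all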

  • Decouple software components via naming convention

    - by csteinmueller
    I'm currently evaluating alternatives for refactoring a driver-management component. In my multitier architecture I have:
    Base class: DAL.Device (my entity)
    Interfaces: BL.IDriver (handles the data processing between application and device), BL.IDriverCreator (creates an IDriver from a Device), BL.IDriverFactory (handles the driver creation requests)
    Every specialization of Device has a corresponding IDriver implementation and a corresponding IDriverCreator implementation. At the moment the mapping is fixed via a type check within the business layer / DriverFactory. That means every new driver needs a) a code change within the DriverFactory and b) a reference to the new IDriver implementation/assembly. From a customer's point of view that means every new driver, used or not, requires a complex revalidation of their hardware environment, because it's a critical process. My first inspiration was to use a Caliburn.Micro-like naming convention (see Caliburn.Micro: Xaml Made Easy): BL.RestDriver, BL.RestDriverCreator, DAL.RestDevice. After receiving the RestDevice within the IDriverFactory, I can load all driver DLLs via reflection and do a name split/compare (extracting the xx from xxDriverCreator and xxDevice). Another idea would be a custom attribute (which also boils down to comparing strings). My question: is that a good approach across layer borders? If not, what would be a good approach?
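    A minimal sketch of what the convention-based lookup could look like; the DriversByConvention name and the Create method signature are invented for illustration and are not part of the existing interfaces:

        using System;
        using System.IO;
        using System.Linq;
        using System.Reflection;

        public static class DriversByConvention
        {
            // Pairs a device with its creator purely by name, e.g. RestDevice -> RestDriverCreator,
            // so the factory never needs a compile-time reference to the new driver assembly.
            public static object CreateDriverFor(object device, string driverFolder)
            {
                string prefix = device.GetType().Name.Replace("Device", ""); // "RestDevice" -> "Rest"

                foreach (string dll in Directory.GetFiles(driverFolder, "*.dll"))
                {
                    Type creatorType = Assembly.LoadFrom(dll).GetTypes()
                        .FirstOrDefault(t => t.Name == prefix + "DriverCreator" && !t.IsAbstract);

                    if (creatorType != null)
                    {
                        object creator = Activator.CreateInstance(creatorType);   // assumes a parameterless constructor
                        MethodInfo create = creatorType.GetMethod("Create");      // assumes a Create(Device) style method
                        return create.Invoke(creator, new object[] { device });
                    }
                }
                throw new InvalidOperationException("No driver creator found for " + prefix);
            }
        }

    The trade-off versus the current type check is that a naming mistake only shows up at runtime as "no driver creator found", so the convention needs to be backed by a test that scans the driver folder.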

  • How to make the internal subwoofer work on an Asus G73JW?

    - by CodyLoco
    I have an Asus G73JW laptop which has an internal subwoofer built in. Currently, the system detects the internal speakers as a 2.0 system (the only other option I can choose is 4.0). I found a bug report here: https://bugs.launchpad.net/ubuntu/+source/alsa-driver/+bug/673051 which discusses the bug, and according to it a fix was sent upstream back at the end of 2010. I would have thought this would have made it into 12.04, but I guess not? I tried following the link given at the very bottom to install the latest ALSA drivers, here: https://wiki.ubuntu.com/Audio/InstallingLinuxAlsaDriverModules However, I keep running into an error when trying to install:
        sudo apt-get install linux-alsa-driver-modules-$(uname -r)
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        E: Unable to locate package linux-alsa-driver-modules-3.2.0-24-generic
        E: Couldn't find any package by regex 'linux-alsa-driver-modules-3.2.0-24-generic'
    I believe I have added the repository correctly:
        sudo add-apt-repository ppa:ubuntu-audio-dev/ppa
        [sudo] password for codyloco:
        You are about to add the following PPA to your system:
        This PPA will be used to provide testing versions of packages for supported Ubuntu releases.
        More info: https://launchpad.net/~ubuntu-audio-dev/+archive/ppa
        Press [ENTER] to continue or ctrl-c to cancel adding it
        Executing: gpg --ignore-time-conflict --no-options --no-default-keyring --secret-keyring /tmp/tmp.7apgZoNrqK --trustdb-name /etc/apt/trustdb.gpg --keyring /etc/apt/trusted.gpg --primary-keyring /etc/apt/trusted.gpg --keyserver hkp://keyserver.ubuntu.com:80/ --recv 4E9F485BF943EF0EABA10B5BD225991A72B194E5
        gpg: requesting key 72B194E5 from hkp server keyserver.ubuntu.com
        gpg: key 72B194E5: public key "Launchpad Ubuntu Audio Dev team PPA" imported
        gpg: Total number processed: 1
        gpg: imported: 1 (RSA: 1)
    I also ran an update (following the instructions in the fix above). Any ideas?

  • Nvidia drivers with GeForce 680M problem [closed]

    - by biofermin
    Possible Duplicate: There's an issue with an Alpha/Beta Release of Ubuntu, what should I do? I have a new Alienware M17x on which I have installed 64-bit Ubuntu 12.10 (dual boot). I have tried various ways to install the Nvidia drivers for the GeForce GTX 680M (which they claim the drivers are compatible with):
    - from the Ubuntu Software Centre (though Additional Drivers fails to detect the card)
    - using sudo apt-get install nvidia-current
    - downloading the drivers from Nvidia and manually installing them (Ctrl+Alt+F6, sudo service lightdm stop, sudo ./NVIDIA*, sudo reboot)
    (each with a clean install of Ubuntu). In all cases, when I reboot after installing, nvidia-settings claims that I am not using the driver and that I should run sudo nvidia-xconfig from the terminal, which I do, but this has no effect and I am stuck in a crappy resolution until I rm /etc/X11/xorg.conf and reboot. The xorg.conf that nvidia-xconfig generates is:
        Section "ServerLayout"
            Identifier "Layout0"
            Screen 0 "Screen0"
            InputDevice "Keyboard0" "CoreKeyboard"
            InputDevice "Mouse0" "CorePointer"
        EndSection
        Section "Files"
        EndSection
        Section "InputDevice"
            # generated from default
            Identifier "Mouse0"
            Driver "mouse"
            Option "Protocol" "auto"
            Option "Device" "/dev/psaux"
            Option "Emulate3Buttons" "no"
            Option "ZAxisMapping" "4 5"
        EndSection
        Section "InputDevice"
            # generated from default
            Identifier "Keyboard0"
            Driver "kbd"
        EndSection
        Section "Monitor"
            Identifier "Monitor0"
            VendorName "Unknown"
            ModelName "Unknown"
            HorizSync 28.0 - 33.0
            VertRefresh 43.0 - 72.0
            Option "DPMS"
        EndSection
        Section "Device"
            Identifier "Device0"
            Driver "nvidia"
            VendorName "NVIDIA Corporation"
        EndSection
        Section "Screen"
            Identifier "Screen0"
            Device "Device0"
            Monitor "Monitor0"
            DefaultDepth 24
            SubSection "Display"
                Depth 24
            EndSubSection
        EndSection

  • Ubuntu Wireless not working on Lenovo t400

    - by VmaxBoss
    This problem started after upgrading to 12.04, and my system is up to date. I have tried most of the solutions proposed on the net.
        lspci -nnk | grep -iA2 net
        00:19.0 Ethernet controller [0200]: Intel Corporation 82567LF Gigabit Network Connection [8086:10bf] (rev 03)
            Subsystem: Lenovo Device [17aa:20ee]
            Kernel driver in use: e1000e
        03:00.0 Network controller [0280]: Intel Corporation PRO/Wireless 5100 AGN [Shiloh] Network Connection [8086:4237]
            Subsystem: Intel Corporation WiFi Link 5100 AGN [8086:1211]
            Kernel driver in use: iwlagn

        iwconfig
        lo      no wireless extensions.
        eth0    no wireless extensions.
        wlan0   IEEE 802.11abgn  ESSID:off/any
                Mode:Managed  Access Point: Not-Associated  Tx-Power=15 dBm
                Retry long limit:7  RTS thr:off  Fragment thr:off
                Encryption key:off
                Power Management:off

        sudo lshw -C network
        *-network
            description: Ethernet interface
            product: 82567LF Gigabit Network Connection
            vendor: Intel Corporation
            physical id: 19
            bus info: pci@0000:00:19.0
            logical name: eth0
            version: 03
            serial: 00:22:68:1a:c4:75
            size: 100Mbit/s
            capacity: 1Gbit/s
            width: 32 bits
            clock: 33MHz
            capabilities: pm msi bus_master cap_list ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt-fd autonegotiation
            configuration: autonegotiation=on broadcast=yes driver=e1000e driverversion=1.0.2-k2 duplex=full firmware=1.8-3 ip=192.168.2.154 latency=0 link=yes multicast=yes port=twisted pair speed=100Mbit/s
            resources: irq:29 memory:fc000000-fc01ffff memory:fc024000-fc024fff ioport:1820(size=32)
        *-network DISABLED
            description: Wireless interface
            product: PRO/Wireless 5100 AGN [Shiloh] Network Connection
            vendor: Intel Corporation
            physical id: 0
            bus info: pci@0000:03:00.0
            logical name: wlan0
            version: 00
            serial: 00:26:c6:6c:2d:24
            width: 64 bits
            clock: 33MHz
            capabilities: pm msi pciexpress bus_master cap_list ethernet physical wireless
            configuration: broadcast=yes driver=iwlagn latency=0 multicast=yes wireless=IEEE 802.11abgn
            resources: irq:30 memory:f4300000-f4301fff
    Please help. Br/VB
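    The lshw output marks the wireless interface as DISABLED, which on ThinkPads is frequently just the radio kill switch (the hardware slider or Fn key) or a soft block left over from the upgrade. A quick check, as a sketch:

        # show hard/soft blocks on every radio
        rfkill list all
        # clear any soft block
        sudo rfkill unblock all
        # then see whether the card can scan
        sudo iwlist wlan0 scan | head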

  • New mainboard with 890GX chipset disables lightdm, even when using old graphics card instead of onboard graphics, startx works (Xubuntu 12.04)

    - by user99250
    I am trying to migrate an installed Xubuntu 12.04 to a new mainboard with an 890GX chipset. The chipset has built-in Radeon HD 4290 graphics. The system boots, but X won't start. The most suspicious message in Xorg.0.log is "ddxSigGiveUp: Closing log". When searching for this message, I found some answers like "remove your xorg.conf" (but there is none on my system), or bug reports for fglrx (but that's not installed on my system), or NVidia-related questions ... Interestingly, "startx" succeeds in opening a basic XFCE session. Then I tried disabling the onboard graphics in the BIOS setup and using the old PCIe graphics card (Radeon HD 5450) instead. No change. I don't think I can just blacklist a module, because the graphics card and the onboard graphics are covered by the same module. At the moment I use the free radeon driver, not the restricted fglrx driver. If possible, I would like to stay on the free driver, for two reasons: the fglrx driver from the Ubuntu packages fails to build the kernel module, and in the past I had bad experiences with the fglrx driver when changing screen configuration with RandR. When I connect the hard disk and the graphics card to my old mainboard, everything works again. This means I have not screwed up my system configuration while installing and removing the fglrx drivers. When I ordered this mainboard, I thought the 890GX was old enough to be supported, and if not, I could still use my graphics card as a backup solution. But without even the backup solution, I'm screwed ... Any ideas? Thanks and regards, Kubuntu-Man (now switched to Xubuntu)

  • VS 2010 SP1 (Beta) and IIS Express

    - by ScottGu
    Last month we released the VS 2010 Service Pack 1 (SP1) Beta.  You can learn more about the VS 2010 SP1 Beta from Jason Zander’s two blog posts about it, and from Scott Hanselman’s blog post that covers some of the new capabilities enabled with it.  You can download and install the VS 2010 SP1 Beta here. IIS Express Earlier this summer I blogged about IIS Express.  IIS Express is a free version of IIS 7.5 that is optimized for developer scenarios.  We think it combines the ease of use of the ASP.NET Web Server (aka Cassini) currently built-into VS today with the full power of IIS.  Specifically: It’s lightweight and easy to install (less than 5Mb download and a quick install) It does not require an administrator account to run/debug applications from Visual Studio It enables a full web-server feature set – including SSL, URL Rewrite, and other IIS 7.x modules It supports and enables the same extensibility model and web.config file settings that IIS 7.x support It can be installed side-by-side with the full IIS web server as well as the ASP.NET Development Server (they do not conflict at all) It works on Windows XP and higher operating systems – giving you a full IIS 7.x developer feature-set on all Windows OS platforms IIS Express (like the ASP.NET Development Server) can be quickly launched to run a site from a directory on disk.  It does not require any registration/configuration steps. This makes it really easy to launch and run for development scenarios. Visual Studio 2010 SP1 adds support for IIS Express – and you can start to take advantage of this starting with last month’s VS 2010 SP1 Beta release. Downloading and Installing IIS Express IIS Express isn’t included as part of the VS 2010 SP1 Beta.  Instead it is a separate ~4MB download which you can download and install using this link (it uses WebPI to install it).  Once IIS Express is installed, VS 2010 SP1 will enable some additional IIS Express commands and dialog options that allow you to easily use it. Enabling IIS Express for Existing Projects Visual Studio today defaults to using the built-in ASP.NET Development Server (aka Cassini) when running ASP.NET Projects: Converting your existing projects to use IIS Express is really easy.  You can do this by opening up the project properties dialog of an existing project, and then by clicking the “web” tab within it and selecting the “Use IIS Express” checkbox. Or even simpler, just right-click on your existing project, and select the “Use IIS Express…” menu command: And now when you run or debug your project you’ll see that IIS Express now starts up and runs automatically as your web-server: You can optionally right-click on the IIS Express icon within your system tray to see/browse all of sites and applications running on it: Note that if you ever want to revert back to using the ASP.NET Development Server you can do this by right-clicking the project again and then select the “Use Visual Studio Development Server” option (or go into the project properties, click the web tab, and uncheck IIS Express).  This will revert back to the ASP.NET Development Server the next time you run the project. IIS Express Properties Visual Studio 2010 SP1 exposes several new IIS Express configuration options that you couldn’t previously set with the ASP.NET Development Server.  
Some of these are exposed via the property grid of your project (select the project node in the solution explorer and then change them via the property window): For example, enabling something like SSL support (which is not possible with the ASP.NET Development Server) can now be done simply by changing the “SSL Enabled” property to “True”: Once this is done IIS Express will expose both an HTTP and HTTPS endpoint for the project that we can use: SSL Self Signed Certs IIS Express ships with a self-signed SSL cert that it installs as part of setup – which removes the need for you to install your own certificate to use SSL during development.  Once you change the above drop-down to enable SSL, you’ll be able to browse to your site with the appropriate https:// URL prefix and it will connect via SSL. One caveat with self-signed certificates, though, is that browsers (like IE) will go out of their way to warn you that they aren’t to be trusted: You can mark the certificate as trusted to avoid seeing dialogs like this – or just keep the certificate un-trusted and press the “continue” button when the browser warns you not to trust your local web server. Additional IIS Settings IIS Express uses its own per-user ApplicationHost.config file to configure default server behavior.  Because it is per-user, it can be configured by developers who do not have admin credentials – unlike the full IIS.  You can customize all IIS features and settings via it if you want ultimate server customization (for example: to use your own certificates for SSL instead of self-signed ones). We recommend storing all app specific settings for IIS and ASP.NET within the web.config file which is part of your project – since that makes deploying apps easier (since the settings can be copied with the application content).  IIS (since IIS 7) no longer uses the metabase, and instead uses the same web.config configuration files that ASP.NET has always supported – which makes xcopy/ftp based deployment much easier. Making IIS Express your Default Web Server Above we looked at how we can convert existing sites that use the ASP.NET Developer Web Server to instead use IIS Express.  You can configure Visual Studio to use IIS Express as the default web server for all new projects by clicking the Tools->Options menu  command and opening up the Projects and Solutions->Web Projects node with the Options dialog: Clicking the “Use IIS Express for new file-based web site and projects” checkbox will cause Visual Studio to use it for all new web site and projects. Summary We think IIS Express makes it even easier to build, run and test web applications.  It works with all versions of ASP.NET and supports all ASP.NET application types (including obviously both ASP.NET Web Forms and ASP.NET MVC applications).  Because IIS Express is based on the IIS 7.5 codebase, you have a full web-server feature-set that you can use.  This means you can build and run your applications just like they’ll work on a real production web-server.  In addition to supporting ASP.NET, IIS Express also supports Classic ASP and other file-types and extensions supported by IIS – which also makes it ideal for sites that combine a variety of different technologies. Best of all – you do not need to change any code to take advantage of it.  As you can see above, updating existing Visual Studio web projects to use it is trivial.  You can begin to take advantage of IIS Express today using the VS 2010 SP1 Beta. Hope this helps, Scott

  • HTML5 or Javascript game engine to develop a browser game

    - by Jack Duluoz
    I would like to start developing an MMO browser game, like Travian or Ogame, probably also involving some more sophisticated graphical features such as players interacting in real time with a 2D map, or something like that. My main doubt is what kind of development tools I should use: I have good experience with PHP and MySQL on the server side and with JavaScript (and jQuery) on the client side. Coding everything from scratch would of course be really painful, so I was wondering whether I should use a JavaScript game engine or not. Are there (possibly free) game engines you would recommend? Are they good enough to develop a big game? Also, I've seen a lot of HTML5 games popping up lately, but I'm not sure whether using HTML5 is a good idea or not. Would you recommend it? What are the pros and cons of using HTML5? If you'd recommend it, do you have any good links regarding game development with HTML5? (PS: I know that HTML5 and a JavaScript engine are not mutually exclusive; I just didn't know how to formulate a proper title since English is not my main language. So, please, address the HTML5 and game-engine pros and cons separately.)

  • Click Once Deployment Process and Issue Resolution

    - by Geordie
    Introduction We are adopting Click Once as a deployment standard for Thick .Net application clients.  The latest version of this tool has matured it to a point where it can be used in an enterprise environment.  This guide will identify how to use Click Once deployment and promote code trough the dev, test and production environments. Why Use Click Once over SCCM If we already use SCCM why add Click Once to the deployment options.  The advantages of Click Once are their ability to update the code in a single location and have the update flow automatically down to the user community.  There have been challenges in the past with getting configuration updates to download but these can now be achieved.  With SCCM you can do the same thing but it then needs to be packages and pushed out to users.  Each time a new user is added to an application, time needs to be spent by an administrator, to push out any required application packages.  With Click Once the user would go to a web link and the application and pre requisites will automatically get installed. New Deployment Steps Overview The deployment in an enterprise environment includes several steps as the solution moves through the development life cycle before being released into production.  To make mitigate risk during the release phase, it is important to ensure the solution is not deployed directly into production from the development tools.  Although this is the easiest path, it can introduce untested code into production and result in unexpected results. 1. Deploy the client application to a development web server using Visual Studio 2008 Click Once deployment tools.  Once potential production versions of the solution are being generated, ensure the production install URL is specified when deploying code from Visual Studio.  (For details see ‘Deploying Click Once Code from Visual Studio’) 2. xCopy the code to the test server.  Run the MageUI tool to update the URLs, signing and version numbers to match the test server. (For details see ‘Moving Click Once Code to a new Server without using Visual Studio’) 3. xCopy the code to the production server.  Run the MageUI tool to update the URLs, signing and version numbers to match the production server. The certificate used to sign the code should be provided by a certificate authority that will be trusted by the client machines.  Finally make sure the setup.exe contains the production install URL.  If not redeploy the solution from Visual Studio to the dev environment specifying the production install URL.  Then xcopy the install.exe file from dev to production.  (For details see ‘Moving Click Once Code to a new Server without using Visual Studio’) Detailed Deployment Steps Deploying Click Once Code From Visual Studio Open Visual Studio and create a new WinForms or WPF project.   In the solution explorer right click on the project and select ‘Publish’ in the context menu.   The ‘Publish Wizard’ will start.  Enter the development deployment path.  This could be a local directory or web site.  When first publishing the solution set this to a development web site and Visual basic will create a site with an install.htm page.  Click Next.  Select weather the application will be available both online and offline. Then click Finish. Once the initial deployment is completed, republish the solution this time mapping to the directory that holds the code that was just published.  This time the Publish Wizard contains and additional option.   
The setup.exe file that is created has the install URL hardcoded in it.  It is this screen that allows you to specify the URL to use.  At some point a setup.exe file must be generated for production.  Enter the production URL and deploy the solution to the dev folder.  This file can then be saved for latter use in deployment to production.  During development this URL should be pointing to development site to avoid accidently installing the production application. Visual studio will publish the application to the desired location in the process it will create an anonymous ‘pfx’ certificate to sign the deployment configuration files.  A production certificate should be acquired in preparation for deployment to production.   Directory structure created by Visual Studio     Application files created by Visual Studio   Development web site (install.htm) created by Visual Studio Migrating Click Once Code to a new Server without using Visual Studio To migrate the Click Once application code to a new server, a tool called MageUI is needed to modify the .application and .manifest files.  The MageUI tool is usually located – ‘C:\Program Files\Microsoft SDKs\Windows\v6.0A\Bin’ folder or can be downloaded from the web. When deploying to a new environment copy all files in the project folder to the new server.  In this case the ‘ClickOnceSample’ folder and contents.  The old application versions can be deleted, in this case ‘ClickOnceSample_1_0_0_0’ and ‘ClickOnceSample_1_0_0_1’.  Open IIS Manager and create a virtual directory that points to the project folder.  Also make the publish.htm the default web page.   Run the ManeUI tool and then open the .application file in the root project folder (in this case in the ‘ClickOnceSample’ folder). Click on the Deployment Options in the left hand list and update the URL to the new server URL and save the changes.   When MageUI tries to save the file it will prompt for the file to be signed.   This step cannot be bypassed if you want the Click Once deployment to work from a web site.  The easiest solution to this for test is to use the auto generated certificate that Visual Studio created for the project.  This certificate can be found with the project source code.   To save time go to File>Preferences and configure the ‘Use default signing certificate’ fields.   Future deployments will only require application files to be transferred to the new server.  The only difference is then updating the .application file the ‘Version’ must be updated to match the new version and the ‘Application Reference’ has to be update to point to the new .manifest file.     Updating the Configuration File of a Click Once Deployment Package without using Visual Studio When an update to the configuration file is required, modifying the ClickOnceSample.exe.config.deploy file will not result in current users getting the new configurations.  We do not want to go back to Visual Studio and generate a new version as this might introduce unexpected code changes.  A new version of the application can be created by copying the folder (in this case ClickOnceSample_1_0_0_2) and pasting it into the application Files directory.  Rename the directory ‘ClickOnceSample_1_0_0_3’.  In the new folder open the configuration file in notepad and make the configuration changes. Run MageUI and open the manifest file in the newly copied directory (ClickOnceSample_1_0_0_3).   Edit the manifest version to reflect the newly copied files (in this case 1.0.0.3).  Then save the file.  
Open the .application file in the root folder.  Again update the version to 1.0.0.3.  Since the file has not changed the Deployment Options/Start Location URL should still be correct.  The application Reference needs to be updated to point to the new versions .manifest file.  Save the file. Next time a user runs the application the new version of the configuration file will be down loaded.  It is worth noting that there are 2 different types of configuration parameter; application and user.  With Click Once deployment the difference is significant.  When an application is downloaded the configuration file is also brought down to the client machine.  The developer may have written code to update the user parameters in the application.  As a result each time a new version of the application is down loaded the user parameters are at risk of being overwritten.  With Click Once deployment the system knows if the user parameters are still the default values.  If they are they will be overwritten with the new default values in the configuration file.  If they have been updated by the user, they will not be overwritten. Settings configuration view in Visual Studio Production Deployment When deploying the code to production it is prudent to disable the development and test deployment sites.  This will allow errors such as incorrect URL to be quickly identified in the initial testing after deployment.  If the sites are active there is no way to know if the application was downloaded from the production deployment and not redirected to test or dev.   Troubleshooting Clicking the install button on the install.htm page fails. Error: URLDownloadToCacheFile failed with HRESULT '-2146697210' Error: An error occurred trying to download <file>   This is due to the setup.exe file pointing to the wrong location. ‘The setup.exe file that is created has the install URL hardcoded in it.  It is this screen that allows you to specify the URL to use.  At some point a setup.exe file must be generated for production.  Enter the production URL and deploy the solution to the dev folder.  This file can then be saved for latter use in deployment to production.  During development this URL should be pointing to development site to avoid accidently installing the production application.’
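    For completeness, the same .application/.manifest edits that MageUI performs can be scripted with the command-line mage.exe tool, which helps once promotion to test and production is automated. This is only a sketch; the file names and certificate are placeholders, and the flags shown (-Update, -AppManifest, -Version, -CertFile) should be checked against the Mage documentation for your SDK version:

        rem bump the application manifest to the new version and re-sign it
        mage -Update ClickOnceSample_1_0_0_3\ClickOnceSample.exe.manifest -Version 1.0.0.3 -CertFile cert.pfx
        rem point the deployment manifest at that application manifest and re-sign it
        mage -Update ClickOnceSample.application -AppManifest ClickOnceSample_1_0_0_3\ClickOnceSample.exe.manifest -Version 1.0.0.3 -CertFile cert.pfx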

  • JavaScript fixed-timestep game loop with requestAnimationFrame

    - by coffeecup
    Hello, I just started to read through several articles, including http://gafferongames.com/game-physics/fix-your-timestep/ ...://gamedev.stackexchange.com/questions/1589/fixed-time-step-vs-variable-time-step/ ...//dewitters.koonsolo.com/gameloop.html ...://nokarma.org/2011/02/02/javascript-game-development-the-game-loop/index.html My understanding of this is that I need the current time and the timestep size and integrate all states to the next state; the time which is left over is then passed into the render function to do interpolation. I tried to implement Glenn Fiedler's "the final touch", and what's troubling me is that each frameTime is about 15 (ms) and the update loop runs at about 1500 fps, which seems a little bit off? Here's my code:
        this.t = 0
        this.dt = 0.01
        this.currTime = new Date().getTime()
        this.accumulator = 0.0
        this.animate()

        animate: function(){
            var newTime = new Date().getTime()
              , frameTime = newTime - this.currTime
              , alpha
            if ( frameTime > 0.25 ) frameTime = 0.25
            this.currTime = newTime
            this.accumulator += frameTime
            while (this.accumulator >= this.dt ) {
                this.prev_state = this.curr_state
                this.update(this.t, this.dt)
                this.t += this.dt
                this.accumulator -= this.dt
            }
            alpha = this.accumulator / this.dt
            this.render( this.t, this.dt, alpha)
            requestAnimationFrame( this.animate )
        }
    Also, I would like to know: are there differences between Glenn Fiedler's implementation and the last solution presented here? gameloop1 gameloop2 [ sorry couldnt post more than 2 links.. ] Edit: I looked into it again and adjusted the values:
        this.currTime = new Date().getTime()
        this.accumulator = 0
        this.p_t = 0
        this.p_step = 1000/100
        this.animate()

        animate: function(){
            var newTime = new Date().getTime()
              , frameTime = newTime - this.currTime
              , alpha
            if(frameTime > 25) frameTime = 25
            this.currTime = newTime
            this.accumulator += frameTime
            while(this.accumulator >= this.p_step){
                // prevstate = currState
                this.update()
                this.p_t += this.p_step
                this.accumulator -= this.p_step
            }
            alpha = this.accumulator / this.p_step
            this.render(alpha)
            requestAnimationFrame( this.animate )
        }
    Now I can set the physics update rate: render runs at 60 fps and physics updates at 100 fps. Maybe someone could confirm this, because it's the first time I'm playing around with game development :-)
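    For what it's worth, the 1500 fps figure in the first version follows directly from mixed units: frameTime is in milliseconds (about 15 per frame) while dt is 0.01, so the accumulator drains in steps of 0.01 and the while loop runs roughly 1500 times a second. A sketch of the first loop with consistent units (seconds everywhere), keeping the same structure as the original:

        // inside the same object as before; all times in seconds, dt is a 10 ms physics step
        this.t = 0
        this.dt = 0.01
        this.currTime = Date.now() / 1000
        this.accumulator = 0

        animate: function () {
            var newTime = Date.now() / 1000
            var frameTime = newTime - this.currTime      // ~0.016 s at 60 fps
            if (frameTime > 0.25) frameTime = 0.25       // clamp to avoid the spiral of death
            this.currTime = newTime
            this.accumulator += frameTime
            while (this.accumulator >= this.dt) {
                this.prev_state = this.curr_state
                this.update(this.t, this.dt)             // fixed-step physics
                this.t += this.dt
                this.accumulator -= this.dt
            }
            var alpha = this.accumulator / this.dt       // interpolation factor for rendering
            this.render(alpha)
            requestAnimationFrame(this.animate)
        }

    The edited version already does this implicitly by keeping everything in milliseconds (p_step = 10, frameTime in ms), which is why its update rate comes out as the intended 100 fps.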

  • Curing the Database-Application mismatch

    - by Phil Factor
    If an application requires access to a database, then you have to be able to deploy it so as to be version-compatible with the database, in phase. If you can deploy both together, then the application and database must normally be deployed at the same version in which they, together, passed integration and functional testing.  When a single database supports more than one application, then the problem gets more interesting. I’ll need to be more precise here. It is actually the application-interface definition of the database that needs to be in a compatible ‘version’.  Most databases that get into production have no separate application-interface; in other words they are ‘close-coupled’.  For this vast majority, the whole database is the application-interface, and applications are free to wander through the bowels of the database scot-free.  If you’ve spurned the perceived wisdom of application architects to have a defined application-interface within the database that is based on views and stored procedures, any version-mismatch will be as sensitive as a kitten.  A team that creates an application that makes direct access to base tables in a database will have to put a lot of energy into keeping Database and Application in sync, to say nothing of having to tackle issues such as security and audit. It is not the obvious route to development nirvana. I’ve been in countless tense meetings with application developers who initially bridle instinctively at the apparent restrictions of being ‘banned’ from the base tables or routines of a database.  There is no good technical reason for needing that sort of access that I’ve ever come across.  Everything that the application wants can be delivered via a set of views and procedures, and with far less pain for all concerned: This is the application-interface.  If more than zero developers are creating a database-driven application, then the project will benefit from the loose-coupling that an application interface brings. What is important here is that the database development role is separated from the application development role, even if it is the same developer performing both roles. The idea of an application-interface with a database is as old as I can remember. The big corporate or government databases generally supported several applications, and there was little option. When a new application wanted access to an existing corporate database, the developers, and myself as technical architect, would have to meet with hatchet-faced DBAs and production staff to work out an interface. Sure, they would talk up the effort involved for budgetary reasons, but it was routine work, because it decoupled the database from its supporting applications. We’d be given our own stored procedures. One of them, I still remember, had ninety-two parameters. All database access was encapsulated in one application-module. If you have a stable defined application-interface with the database (Yes, one for each application usually) you need to keep the external definitions of the components of this interface in version control, linked with the application source,  and carefully track and negotiate any changes between database developers and application developers.  Essentially, the application development team owns the interface definition, and the onus is on the Database developers to implement it and maintain it, in conformance.  Internally, the database can then make all sorts of changes and refactoring, as long as source control is maintained.  
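    As a concrete illustration of such an application interface, here is a minimal sketch; the schema, view and procedure names are invented for the example and are not taken from any particular system:

        -- the application's login is granted access to this schema only, never to the base tables
        CREATE SCHEMA CustomerApp;
        GO
        CREATE VIEW CustomerApp.OpenOrders
        AS
        SELECT o.OrderID, o.CustomerID, o.OrderDate, o.TotalValue
        FROM dbo.Orders AS o
        WHERE o.Status = 'Open';
        GO
        CREATE PROCEDURE CustomerApp.AddOrder
            @CustomerID int,
            @OrderDate datetime,
            @TotalValue money
        AS
        SET NOCOUNT ON;
        INSERT dbo.Orders (CustomerID, OrderDate, TotalValue, Status)
        VALUES (@CustomerID, @OrderDate, @TotalValue, 'Open');
        GO
        -- the database team can refactor dbo.Orders freely, as long as the view and procedure keep passing their tests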
If the application interface passes all the comprehensive integration and functional tests for the particular version they were designed for, nothing is broken. Your performance-testing can ‘hang’ on the same interface, since databases are judged on the performance of the application, not an ‘internal’ database process. The database developers have responsibility for maintaining the application-interface, but not its definition,  as they refactor the database. This is easily tested on a daily basis since the tests are normally automated. In this setting, the deployment can proceed if the more stable application-interface, rather than the continuously-changing database, passes all tests for the version of the application. Normally, if all goes well, a database with a well-designed application interface can evolve gracefully without changing the external appearance of the interface, and this is confirmed by integration tests that check the interface, and which hopefully don’t need to be altered at all often.  If the application is rapidly changing its ‘domain model’  in the light of an increased understanding of the application domain, then it can change the interface definitions and the database developers need only implement the interface rather than refactor the underlying database.  The test team will also have to redo the functional and integration tests which are, of course ‘written to’ the definition.  The Database developers will find it easier if these tests are done before their re-wiring  job to implement the new interface. If, at the other extreme, an application receives no further development work but survives unchanged, the database can continue to change and develop to keep pace with the requirements of the other applications it supports, and needs only to take care that the application interface is never broken. Testing is easy since your automated scripts to test the interface do not need to change. The database developers will, of course, maintain their own source control for the database, and will be likely to maintain versions for all major releases. However, this will not need to be shared with the applications that the database servers. On the other hand, the definition of the application interfaces should be within the application source. Changes in it have to be subject to change-control procedures, as they will require a chain of tests. Once you allow, instead of an application-interface, an intimate relationship between application and database, we are in the realms of impedance mismatch, over and above the obvious security problems.  Part of this impedance problem is a difference in development practices. Whereas the application has to be regularly built and integrated, this isn’t necessarily the case with the database.  An RDBMS is inherently multi-user and self-integrating. If the developers work together on the database, then a subsequent integration of the database on a staging server doesn’t often bring nasty surprises. A separate database-integration process is only needed if the database is deliberately built in a way that mimics the application development process, but which hampers the normal database-development techniques.  This process is like demanding a official walking with a red flag in front of a motor car.  In order to closely coordinate databases with applications, entire databases have to be ‘versioned’, so that an application version can be matched with a database version to produce a working build without errors.  
There is no natural process to ‘version’ databases.  Each development project will have to define a system for maintaining the version level. A curious paradox occurs in development when there is no formal application-interface. When the strains and cracks happen, the extra meetings, bureaucracy, and activity required to maintain accurate deployments looks to IT management like work. They see activity, and it looks good. Work means progress.  Management then smile on the design choices made. In IT, good design work doesn’t necessarily look good, and vice versa.

  • Game software design

    - by L. De Leo
    I have been working on a simple implementation of a card game in object-oriented Python/HTML/JavaScript, building on top of Django. At this point the game is in its final stage of development, but while spotting a big issue with how I was keeping the application state (basically using a global variable), I reached the point where I'm stuck. The thing is that, ignoring the design flaw, in a single-threaded environment such as under the Django development server the game works perfectly. While I tried to design classes cleanly and keep methods short, I now have in front of me an issue that has kept me busy for the last 2 days and that countless print statements and visual debugging haven't helped me spot. The reason, I think, has to do with some side effects of functions, and to solve it I've been wondering whether refactoring the code entirely around static classes that keep no state, just passing the state around, might be a good option for keeping side effects under control. Or maybe trying to program it in a functional style (although I'm not sure Python allows for a purely functional style). I feel that there are already too many layers and that the software (which I plan to make incredibly more complex by adding non-trivial features) has already become unmanageable. How would you suggest I retake control of my code base, which (despite still being only < 1000 LOC) seems to have taken on a life of its own?
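    One way to make the side effects visible without going fully functional is to hold the whole game state in a single immutable object that every rule function receives and returns, instead of a module-level global. A minimal sketch; the class and field names are invented for illustration, not taken from the project:

        from collections import namedtuple

        # Immutable game state: every "move" returns a new state instead of mutating a global,
        # which keeps the Django views free of hidden shared state.
        GameState = namedtuple('GameState', ['hands', 'table', 'turn'])

        def play_card(state, player, card):
            """Return the state after `player` plays `card`, without mutating anything."""
            if player != state.turn:
                raise ValueError("not this player's turn")
            new_hand = tuple(c for c in state.hands[player] if c != card)
            new_hands = state.hands[:player] + (new_hand,) + state.hands[player + 1:]
            return state._replace(hands=new_hands,
                                  table=state.table + (card,),
                                  turn=(state.turn + 1) % len(state.hands))

        # each request loads the state, applies the move, and persists the result
        state = GameState(hands=(('7H', 'KS'), ('2D',)), table=(), turn=0)
        state = play_card(state, 0, 'KS')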

  • Alternatives to a leveling system

    - by Bane
    I'm currently designing a rough prototype of a mecha fighting game. These are the basics I came up with: Multiplayer (matchmaking for up to 10 people, for now) Browser based (HTML5) 2D (<canvas>) Persistent (as in, players have accounts and don't have to use a new mech each time they start a match) Players earn money upon destroying another mech, which is used to buy parts (guns, armor, boosters, etc) Simplicity (both of the game itself, and of the development of said game) No "leveling" (as in, players don't get awarded with XP) The last part is bothering me. At first, I wanted to have players gain experience points (XP) when destroying other mechs, but gaining two things at once (money and XP) seemed to be in conflict with my last point, which is simplicity. If I were to have a leveling system, that would require additional development. But, the biggest problem is that I simply couldn't fit it anywhere! Adding levels would require adding meaning to these levels, and most of the things that I hoped to achieve could already be achieved with the money mechanic I introduced. So I decided to drop leveling off completely. That, in turn, removed a fairly popular and robust mean of progression in games from my game (not that I would use it well anyway). Is there another way of progression in games, aside from leveling and XP points, that wouldn't get rendered redundant by my money mechanic, would be somehow meaningful (even on a symbolic level), and wouldn't be in conflict with my last point, which is simplicity?

  • Per-vertex position/normal and per-index texture coordinate

    - by Boreal
    In my game, I have a mesh with a vertex buffer and index buffer up and running. The vertex buffer stores a Vector3 for the position and a Vector2 for the UV coordinate for each vertex. The index buffer is a list of ushorts. It works well, but I want to be able to use 3 discrete texture coordinates per triangle. I assume I have to create another vertex buffer, but how do I even use it? Here is my vertex/index buffer creation code: // vertices is a Vertex[] // indices is a ushort[] // VertexDefs stores the vertex size (sizeof(float) * 5) // vertex data numVertices = vertices.Length; DataStream data = new DataStream(VertexDefs.size * numVertices, true, true); data.WriteRange<Vertex>(vertices); data.Position = 0; // vertex buffer parameters BufferDescription vbDesc = new BufferDescription() { BindFlags = BindFlags.VertexBuffer, CpuAccessFlags = CpuAccessFlags.None, OptionFlags = ResourceOptionFlags.None, SizeInBytes = VertexDefs.size * numVertices, StructureByteStride = VertexDefs.size, Usage = ResourceUsage.Default }; // create vertex buffer vertexBuffer = new Buffer(Graphics.device, data, vbDesc); vertexBufferBinding = new VertexBufferBinding(vertexBuffer, VertexDefs.size, 0); data.Dispose(); // index data numIndices = indices.Length; data = new DataStream(sizeof(ushort) * numIndices, true, true); data.WriteRange<ushort>(indices); data.Position = 0; // index buffer parameters BufferDescription ibDesc = new BufferDescription() { BindFlags = BindFlags.IndexBuffer, CpuAccessFlags = CpuAccessFlags.None, OptionFlags = ResourceOptionFlags.None, SizeInBytes = sizeof(ushort) * numIndices, StructureByteStride = sizeof(ushort), Usage = ResourceUsage.Default }; // create index buffer indexBuffer = new Buffer(Graphics.device, data, ibDesc); data.Dispose(); Engine.Log(MessageType.Success, string.Format("Mesh created with {0} vertices and {1} indices", numVertices, numIndices)); And my drawing code: // ShaderEffect, ShaderTechnique, and ShaderPass all store effect data // e is of type ShaderEffect // get the technique ShaderTechnique t; if(!e.techniques.TryGetValue(techniqueName, out t)) return; // effect variables e.SetMatrix("worldView", worldView); e.SetMatrix("projection", projection); e.SetResource("diffuseMap", texture); e.SetSampler("textureSampler", sampler); // set per-mesh/technique settings Graphics.context.InputAssembler.SetVertexBuffers(0, vertexBufferBinding); Graphics.context.InputAssembler.SetIndexBuffer(indexBuffer, SlimDX.DXGI.Format.R16_UInt, 0); Graphics.context.PixelShader.SetSampler(sampler, 0); // render for each pass foreach(ShaderPass p in t.passes) { Graphics.context.InputAssembler.InputLayout = p.layout; p.pass.Apply(Graphics.context); Graphics.context.DrawIndexed(numIndices, 0, 0); } How can I do this?
