Search Results

Search found 27946 results on 1118 pages for 'output buffer empty'.

  • how to do partial updates in OpenGL?

    - by Will
    It is general wisdom that you redraw the entire viewport on each frame. I would like to use partial updates instead; what are the various ways I can do that, and what are their pros, cons and relative performance? (Using textures, FBOs, the accumulation buffer, any kind of scissoring that can affect swapbuffers, etc.?) A scenario: a scene with a fair few thousand visible trees; although the textures are mipmapped and they are drawn via VBOs roughly front-to-back and so on, it's still a lot of polys. Would streaming a single screen-sized texture be better than throwing them at the screen every frame? You'd have to redraw and recapture them only on camera movement, or as often as your wind model updates, which need not be every frame.
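    A minimal sketch of one of the options mentioned above: scissored partial redraws. This assumes the back buffer still holds last frame's image after a swap, which is not guaranteed by default (it needs a buffer-preserving swap such as EGL_BUFFER_PRESERVED); draw_scene() is a hypothetical stand-in for the tree rendering.

        /* Redraw only a dirty rectangle; fragments outside the scissor
         * rect are untouched, so the rest of the frame survives. */
        void redraw_dirty(int x, int y, int w, int h)
        {
            glEnable(GL_SCISSOR_TEST);
            glScissor(x, y, w, h);   /* clips both the clear and the draw */
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
            draw_scene();            /* only fragments inside the rect land */
            glDisable(GL_SCISSOR_TEST);
        }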

  • external SCSI tape drive DAT 72 problem Solaris 10

    - by Hassan
    Hi all, I have Solaris 10 SPARC running and working very well, but I have a problem with an external SCSI DAT 72 tape drive; the drive appears to be manufactured by Sun Microsystems. When I run mt -f /dev/rmt/0 status it gives the following output:

        bash-3.00# mt -f /dev/rmt/0 status
        /dev/rmt/0: No such file or directory

    Running ls -l reveals the following:

        ls -l /dev/rmt/0
        lrwxrwxrwx 1 root root 43 Sep 20 2006 /dev/rmt/0 -> ../../devices/pci@8,600000/scsi@1,1/st@3,0:

    As far as I can tell, everything is physically okay: the SCSI cable is connected properly to the tape device and to the server, and the drive's SCSI termination dongle is attached properly as well. Any ideas would be a great assist. Thanks in advance.
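    Not an answer from the original page, just a hedged sketch of what one might try on Solaris 10: the symlink exists but may point at a stale device node, and devfsadm can prune dead links and rebuild the tape device class.

        devfsadm -C               # clean up dangling /dev links
        devfsadm -c tape          # recreate tape-class device nodes
        cfgadm -al                # check the controller actually sees the drive
        mt -f /dev/rmt/0 status   # retry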

  • Unit Tests as a learning tool - a good idea?

    - by Ekkehard.Horner
    I'm interested in ways and means for learning (a) programming language(s) efficiently. I believe that using Unit Test concepts and infrastructure early in that process is a good thing, even better than starting with "Hello world".

    Why: To write a decent program even for a toy/restricted problem in a new language, you'll have to master many heterogeneous concepts (control flow & variables & IO ...), and you are tempted to glance over details just to get your program 'to work'. Putting (your understanding of) the facts about the new language in assertions with good descriptions (= success messages) enforces thinking through / clarity / precision. Grouping topics and adding assertions to such groups is much easier than incorporating features from the second chapter of your "Learning X" book into your chapter 1 program.

    Why not: 'Real' Unit Tests are meant to output "1234 tests ok; 1 failure: saveWorld() chokes on negative input"; 'didactic' Unit Tests should output relevant facts about the new language, like:

        perl6 10-string.t
        # ### p5chop ...
        ok 13 - p5chop( "cbä" ) returns "ä"
        ok 14 - after that, victim is changed to "cb"
        # ### (p6) chop ...
        ok 27 - (p6) chop( "cbä" ) returns chopped copy: "cb"
        ok 18 - after that, victim is unchanged: "cbä"
        # ### chomp ...

    So (mis?)using Unit Tests may be counterproductive - practicing actions while learning that you wouldn't use professionally.

    How: Writing 'didactic' Unit Tests in languages with lightweight testing systems (Perl 5/6) is easy; (mis?)using more elaborate systems (JUnit, CppUnit) may not be worth the effort, or may not be suitable for a person just starting with a new language.

    So: Is using Unit Tests as a learning tool a bad idea? Can the Unit Test tool(s) of your favourite language(s) be used didactically? Should implementation details (eventually) be discussed here or over at stackoverflow.com?
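    For illustration, a minimal 'didactic' test in the spirit of the output above, using Perl 5's lightweight Test::More (a plain-ASCII string is substituted here to sidestep encoding concerns):

        use strict;
        use warnings;
        use Test::More tests => 2;

        # p5 chop() removes and returns the last character, in place.
        my $victim = "cba";
        is chop($victim), "a",  'p5chop( "cba" ) returns "a"';
        is $victim,       "cb", 'after that, victim is changed to "cb"';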

  • Why doesn't lsof show localhost TCP connections in 14.04?

    - by sfussenegger
    I have two servers running 12.04.4 and 14.04.1 respectively. Both have nginx (port 80) and a Java process (port 8080). As expected, the lsof output for the Java process on the 12.04 machine shows a couple of established connections for port 8080 (e.g. TCP 127.0.0.1:8080->127.0.0.1:58067 (ESTABLISHED)). The 14.04 machine however does not. It only shows the listening port (TCP *:8080 (LISTEN)). I'm sure there are active connections though (confirmed by access logs, Java process status output, etc.). What has changed from 12.04 to result in this behavior? Can this change be the cause of the "Too many open files" errors I'm getting since moving from 12.04 to 14.04?

    12.04:

        $ dpkg -l lsof linux-image-virtual openjdk-7-jre nginx
        ||/ Name                 Version
        +++-====================-==============================
        ii  linux-image-virtual  3.2.0.59.70
        ii  lsof                 4.81.dfsg.1-1build1
        ii  nginx                1.6.1-1~precise
        ii  openjdk-7-jre        7u65-2.5.1-4ubuntu1~0.12.04.1

    14.04:

        $ dpkg -l lsof linux-image-virtual openjdk-7-jre nginx-full
        ||/ Name                 Version                  Architecture
        +++-====================-========================-============
        ii  linux-image-virtual  3.13.0.32.38             amd64
        ii  lsof                 4.86+dfsg-1ubuntu2       amd64
        ii  nginx-full           1.4.6-1ubuntu3           amd64
        ii  openjdk-7-jre:amd64  7u65-2.5.1-4ubuntu1~0.1  amd64
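    Not part of the original question, but a hedged way to cross-check what lsof reports: ss queries the kernel's socket tables directly, so it should list the established connections either way (the port is the question's 8080):

        ss -tn state established '( sport = :8080 )'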

  • Why does the MaxReceiveMessageSize in WCF matter in case of Streaming

    The default value of MaxReceiveMessageSize in WCF is 65,536. When you choose streaming as the TransferMode, the WCF runtime creates a buffer of 8,192 bytes. What happens is that the WCF channel stack reads the first 8,192 bytes and decodes the first couple of bytes as the size of the entire envelope. It then performs a size check, and sends back a fault if the actual size exceeds the limit. According to the MSDN documentation, MaxReceiveMessageSize is something that prevents a DoS attack,...
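    For context, a sketch of the binding configuration being discussed; note that in the actual WCF configuration schema the attribute is spelled maxReceivedMessageSize, and the 2 GB value here is just an illustrative ceiling:

        <basicHttpBinding>
          <binding name="streamedBinding"
                   transferMode="Streamed"
                   maxReceivedMessageSize="2147483647" />
        </basicHttpBinding>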

  • MiniMax function throws null pointer exception

    - by Sven
    I'm working on a school project; I have to build a tic-tac-toe game with the AI based on the minimax algorithm. The two-player mode works like it should. I followed the code example on http://ethangunderson.com/blog/minimax-algorithm-in-c/. The only thing is that I get a NullPointerException when I run the code, and I can't put my finger on it. I placed a comment in the code where the exception is thrown. The recursive call is returning a null pointer, which is very strange because it can't.. When I place a breakpoint on the null return with the help of an if statement, I see that there ARE still 2 to 3 empty places.. I'm probably overlooking something. Hope someone can tell me what I'm doing wrong. Here is the MiniMax code (the tic-tac-toe code is not important):

        /*
         * To change this template, choose Tools | Templates
         * and open the template in the editor.
         */
        package MiniMax;

        import Game.Block;
        import Game.Board;
        import java.util.ArrayList;

        public class MiniMax {

            public static Place getBestMove(Board gameBoard, Block.TYPE player) {
                Place bestPlace = null;
                ArrayList<Place> emptyPlaces = gameBoard.getEmptyPlaces();
                Board newBoard;

                // loop through all the empty places
                for (Place emptyPlace : emptyPlaces) {
                    newBoard = gameBoard.clone();
                    newBoard.setBlock(emptyPlace.getRow(), emptyPlace.getCell(), player);

                    // no game won and still room to move
                    if (newBoard.getWinner() == Block.TYPE.NONE && newBoard.getEmptyPlaces().size() > 0) {
                        // is a node (has children)
                        Place tempPlace = getBestMove(newBoard, invertPlayer(player)); // ERROR is thrown here! tempPlace is null.
                        emptyPlace.setScore(tempPlace.getScore());
                    } else {
                        // is a leaf
                        if (newBoard.getWinner() == Block.TYPE.NONE) {
                            emptyPlace.setScore(0);
                        } else if (newBoard.getWinner() == Block.TYPE.X) {
                            emptyPlace.setScore(-1);
                        } else if (newBoard.getWinner() == Block.TYPE.O) {
                            emptyPlace.setScore(1);
                        }

                        // if this move is better than our prev move, take it!
                        if ((bestPlace == null)
                                || (player == Block.TYPE.X && emptyPlace.getScore() < bestPlace.getScore())
                                || (player == Block.TYPE.O && emptyPlace.getScore() > bestPlace.getScore())) {
                            bestPlace = emptyPlace;
                        }
                    }
                }

                // This should never be null, but it is..
                return bestPlace;
            }

            private static Block.TYPE invertPlayer(Block.TYPE player) {
                if (player == Block.TYPE.X) {
                    return Block.TYPE.O;
                }
                return Block.TYPE.X;
            }
        }
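    A hedged reading of the code above: bestPlace is only ever assigned inside the leaf branch, so when every empty place leads to an interior node (as it does early in the game), the loop finishes without assigning it, the function returns null, and the recursive caller dereferences it. Moving the comparison out of the else block, to the end of the loop body, would cover both cases:

        // runs for node moves and leaf moves alike, so bestPlace gets
        // assigned whenever at least one empty place exists
        if (bestPlace == null
                || (player == Block.TYPE.X && emptyPlace.getScore() < bestPlace.getScore())
                || (player == Block.TYPE.O && emptyPlace.getScore() > bestPlace.getScore())) {
            bestPlace = emptyPlace;
        }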

  • Organizing Git repositories with common nested sub-modules

    - by André Caron
    I'm a big fan of Git sub-modules. I like to be able to track a dependency along with its version, so that you can roll back to a previous version of your project and have the corresponding version of the dependency to build safely and cleanly. Moreover, it's easier to release our libraries as open source projects, as the history for libraries is separate from that of the applications that depend on them (and which are not going to be open sourced). I'm setting up a workflow for multiple projects at work, and I was wondering how it would be if we took this approach to a bit of an extreme, instead of having a single monolithic project. I quickly realized there is a potential can of worms in really using sub-modules. Suppose a pair of applications, studio and player, and dependent libraries core, graph and network, where the dependencies are as follows:

        core is standalone
        graph depends on core (sub-module at ./libs/core)
        network depends on core (sub-module at ./libs/core)
        studio depends on graph and network (sub-modules at ./libs/graph and ./libs/network)
        player depends on graph and network (sub-modules at ./libs/graph and ./libs/network)

    Suppose that we're using CMake and that each of these projects has unit tests and all the works. Each project (including studio and player) must be able to be compiled standalone to perform code metrics, unit testing, etc. The thing is, after a recursive git sub-module fetch, you get the following directory structure:

        studio/
        studio/libs/                    (sub-module depth: 1)
        studio/libs/graph/
        studio/libs/graph/libs/         (sub-module depth: 2)
        studio/libs/graph/libs/core/
        studio/libs/network/
        studio/libs/network/libs/       (sub-module depth: 2)
        studio/libs/network/libs/core/

    Notice that core is cloned twice in the studio project. Aside from wasting disk space, this gives me a build system problem because I'm building core twice and I potentially get two different versions of core.

    Question: How do I organize sub-modules so that I get the versioned dependency and standalone build without getting multiple copies of common nested sub-modules?

    Possible solution: If the library dependency is somewhat of a suggestion (i.e. in a "known to work with version X" or "only version X is officially supported" fashion) and potential dependent applications or libraries are responsible for building with whatever version they like, then I could imagine the following scenario:

    1. Have the build system for graph and network tell them where to find core (e.g. via a compiler include path).
    2. Define two build targets, "standalone" and "dependency", where "standalone" is based on "dependency" and adds the include path to point to the local core sub-module.
    3. Introduce an extra dependency: studio on core. Then, studio builds core, sets the include path to its own copy of the core sub-module, then builds graph and network in "dependency" mode.

    The resulting folder structure looks like:

        studio/
        studio/libs/                    (sub-module depth: 1)
        studio/libs/core/
        studio/libs/graph/
        studio/libs/graph/libs/         (empty folder, sub-modules not fetched)
        studio/libs/network/
        studio/libs/network/libs/       (empty folder, sub-modules not fetched)

    However, this requires some build system magic (I'm pretty confident this can be done with CMake) and a bit of manual work on the part of version updates (updating graph might also require updating core and network to get a compatible version of core in all projects). Any thoughts on this?
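    A rough CMake sketch of the "standalone"/"dependency" idea, purely illustrative (paths and target names are made up): the parent project may pre-set CORE_DIR, and only a standalone build falls back to the local sub-module.

        cmake_minimum_required(VERSION 2.8)
        project(graph)

        if(NOT DEFINED CORE_DIR)
            # standalone build: no parent told us where core is
            set(CORE_DIR ${CMAKE_CURRENT_SOURCE_DIR}/libs/core)
            add_subdirectory(${CORE_DIR})
        endif()

        include_directories(${CORE_DIR}/include)
        add_library(graph src/graph.cpp)
        target_link_libraries(graph core)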

  • Resources for 2D rendering using OpenGL?

    - by nightcracker
    I noticed that there is quite some difference between 3D and 2D rendering using OpenGL; the techniques are different - pixel-perfect placement is a lot more desirable, among other things. Are there any good (complete) references on using OpenGL for rendering 2D graphics? There are quite a few "tutorials" around on the net that help you open a window, set up a half-decent environment and draw a sprite, but no really good information on rotation, blending, lighting, drawing order, using the z-buffer, particles, "complex" primitives (circles, stars, cross symbols), ensuring pixel-perfect rendering, instancing and many other staple 2D effects/techniques. Any books, great blogs, anything? Any particularly awesome libraries to read?

  • Server not sending a SYN/ACK packet in response to a SYN packet

    - by jeff
    Using iptraf, tcpdump and wireshark I can see a SYN packet coming in, but only the ACK flag is set in the reply packet. I'm running Debian 5 with kernel 2.6.36. I've turned off window scaling, TCP timestamps, tcp_tw_recycle and tcp_tw_reuse:

        cat /etc/sysctl.conf
        net.ipv4.tcp_tw_recycle = 0
        net.ipv4.tcp_tw_reuse = 0
        net.ipv4.tcp_window_scaling = 0
        net.ipv4.tcp_timestamps = 0

    I've attached an image of the wireshark output: http://imgur.com/pECG0.png

    Output of netstat:

        netstat -natu | grep '72.23.130.104'
        tcp        0      0 97.107.134.212:18000    72.23.130.104:42905     SYN_RECV

    I've been doing everything possible to find a solution and have yet to figure out the problem, so any help/suggestions are much appreciated.

    UPDATE 1: I've set tcp_syncookies = 0 and noticed I am now replying with one SYN+ACK for every 50 SYN requests. The host trying to connect is sending a SYN request about once every second.
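    One hedged check that fits the syncookies observation above: see whether the listen queue is overflowing, since a full backlog combined with syncookies changes how SYNs are answered.

        sysctl net.ipv4.tcp_syncookies         # current setting
        netstat -s | grep -iE 'listen|syn'     # look for listen-queue overflow counters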

  • How to revert back to older xorg?

    - by wouter205
    Since the last update of Ubuntu 12.04, the system won't boot into the GUI anymore. It states that it was unable to load the graphics drivers and gives me 4 options:

    1. run in low graphics mode for 1 session
    2. reconfigure graphics
    3. troubleshoot the error
    4. exit to console login

    Whichever option I choose, it doesn't solve anything. For instance, when I choose to reconfigure graphics and then switch to VESA drivers, the screen goes back to the option list. So I configured xorg.conf to vesa myself, and I see in the update history that Ubuntu updated xserver-xorg-core and xserver-common, even though I blocked these updates in Synaptic, since I'm aware that updating these files caused trouble with my particular video card (Radeon HD 6800). So my solution is probably to revert these files to the older (working) versions. How can I do this, please (in particular for xserver-xorg-core, since I think this is the main cause of my problem)? Thanks!

    Output of ls /etc/X11:

        app-defaults             xorg.conf                      Xreset
        cursors                  xorg.conf-backup-120529144709  Xreset.d
        default-display-manager  xorg.conf.fglrx-0              Xresources
        fonts                    xorg.conf.fglrx-1              Xsession
        rgb.txt                  xorg.conf.fglrx-2              Xsession.d
        X                        xorg.conf.original-0           Xsession.options
        xinit                    xorg.conf.original-1           Xwrapper.config
        xkb                      xorg.conf.vesa

    Output of sudo aptitude show xserver-xorg-core | grep Versie (read: Version in Flemish):

        Versie: 2:1.11.4-0ubuntu10.2
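    A hedged sketch of pinning back, assuming the older build is still in the archive (the exact old version string below is a guess - pick whatever apt-cache lists):

        apt-cache policy xserver-xorg-core              # list available versions
        sudo apt-get install xserver-xorg-core=2:1.11.4-0ubuntu10.1 \
                             xserver-common=2:1.11.4-0ubuntu10.1
        echo "xserver-xorg-core hold" | sudo dpkg --set-selections
        echo "xserver-common hold"    | sudo dpkg --set-selections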

  • What is the benefit of triple buffering?

    - by user782220
    I read everything written in a previous question. From what I understand, in double buffering the program must wait until the finished drawing is copied or swapped before starting the next drawing. In triple buffering the program has two back buffers and can immediately start drawing in the one that is not involved in such copying. But with triple buffering, if you're in a situation where you can take advantage of the third buffer, doesn't that suggest that you are drawing frames faster than the monitor can refresh? So then you don't actually get a higher frame rate. What is the benefit of triple buffering, then?

  • Stairway to PowerPivot and DAX - Level 4: The DAX BLANK() Function

    Business Intelligence architect and author Bill Pearson exposes the DAX BLANK() function, and then provides some hands-on exposure to its use in managing empty values underlying our PowerPivot model designs.
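    A small illustrative measure in the spirit of the article (table and column names are invented): substituting zero when an aggregation returns BLANK().

        SalesOrZero := IF(ISBLANK(SUM(Sales[SalesAmount])),
                          0,
                          SUM(Sales[SalesAmount]))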

  • Is there a global "low resolution" filter for OpenGL?

    - by Ian Henry
    I'm trying to learn a little about OpenGL, so I'm making a simple 2D game (with OpenTK), and so far it's coming along well. I thought it would be fun to give it that, for lack of a better word, retropixelated look of games from the early nineties. I figured it would be an easy thing to do -- simply draw everything at half its normal size and scale up with no anti-aliasing. But I can't find any resources on how to do this. I can set the min/mag filters of my textures to nearest and that works fine for my sprites, but I'm using lots of primitives and I'd like the effect to apply to them as well. The one idea I had was to draw everything at half size, then somehow copy the render buffer to a texture, then render that texture full-size, but I don't know how to do that, and it seems like there must be a better way. Can anyone help me out?
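    A sketch of the render-to-texture route described above, assuming GL 3.0+ (or GL_EXT_framebuffer_blit) and an FBO with a half-size color texture created elsewhere; the nearest filter on the blit is what produces the chunky pixels.

        /* 1. render the whole scene at half resolution into the FBO */
        glBindFramebuffer(GL_FRAMEBUFFER, fbo_id);      /* fbo_id: hypothetical */
        glViewport(0, 0, win_w / 2, win_h / 2);
        draw_scene();                                   /* sprites and primitives alike */

        /* 2. upscale 2x to the window with no smoothing */
        glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo_id);
        glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
        glBlitFramebuffer(0, 0, win_w / 2, win_h / 2,   /* source rect */
                          0, 0, win_w, win_h,           /* dest rect   */
                          GL_COLOR_BUFFER_BIT, GL_NEAREST);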

  • nVidia Control Panel Won't Recognize SLI

    - by Nakaan
    So yesterday I bought a brand new 1,000W PSU to replace my 650W one and to allow for SLI. I installed it along with my second card (both are GTX 560 Ti cards). I reinstalled my drivers and my system recognizes both cards. The control panel says "GTX 560 Ti x2"; I can select between them for PhysX, sound output, picture output (although I only have one hooked up to my monitor), etc. But the kicker is I can't turn on SLI. Everything I've read says I'll get a balloon notification saying I can use SLI and the control panel will have a new section, etc., but none of those things happens. Anyone have any ideas?

    System:

        Win7 Professional x64
        2x GTX 560 Ti
        3.1 GHz Intel Core i5
        1,000W Xion PSU
        8GB RAM
        (Can't think of my motherboard right now)

    Note: My PSU and motherboard both say they're SLI certified and I'm using an SLI bridge.

  • VMware - I/O error - how to fix?

    - by Maya-G
    I'm running XP Pro on VMware - it mounts and runs just fine. However, if I power down the VM and try to copy it, or even if I try to do a simple Mac backup (using Carbon Copy Cloner), I get an I/O error at one very specific VMDK file. Here's a sample of the error - this from CCC:

        12/20 22:49:30 Detected input/output error
        12/20 22:49:33 rsync: read errors mapping "/Users/blahblah/Documents/Virtual Machines.localized/WindowsXP-Professional150G.vmwarevm/WindowsXP-Professional150G-000001-s065.vmdk": Input/output error (5)

    How can I regain my ability to do backups of my Mac without this I/O error?
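    Two hedged things to try from the Mac side: confirm which extent file is actually unreadable, then point VMware's own repair tool at the disk's descriptor file (the Fusion path below is its usual install location - verify on your system):

        dd if="WindowsXP-Professional150G-000001-s065.vmdk" of=/dev/null bs=1m
        "/Applications/VMware Fusion.app/Contents/Library/vmware-vdiskmanager" \
            -R "WindowsXP-Professional150G-000001.vmdk"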

  • Why can't I use Unity 3D on a ATI Mobility Radeon HD 4200 Series with FGLRX drivers?

    - by user88257
    With a brand new install of 12.04 on my Dell Inspiron M5030, Unity 3D appears to load everything but the icons in the top left, and I am unable to click on anything. Unity 2D seems to work fine, however. I have done nothing except install Synaptic Package Manager and follow this guide to install the FGLRX drivers. Under Settings > Additional Drivers, the driver shows as installed and functioning. Also, after running /usr/lib/nux/unity_support_test -p, I get this:

        OpenGL vendor string:   ATI Technologies Inc.
        OpenGL renderer string: ATI Mobility Radeon HD 4200 Series
        OpenGL version string:  3.3.11653 Compatibility Profile Context

        Not software rendered:    yes
        Not blacklisted:          yes
        GLX fbconfig:             yes
        GLX texture from pixmap:  yes
        GL npot or rect textures: yes
        GL vertex program:        yes
        GL fragment program:      yes
        GL vertex buffer object:  yes
        GL framebuffer object:    yes
        GL version is 1.4+:       yes

        Unity 3D supported:       yes

    I am unsure as to what the problem is. I have tried the non-proprietary drivers; I could not get them to work either, but I would be willing to try again.
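    Not from the original post, but a commonly suggested remedy for a half-loaded Unity session on 12.04 (hedged - it wipes your Compiz settings; the dconf command comes from the dconf-tools package):

        dconf reset -f /org/compiz/   # reset Compiz/Unity settings to defaults
        setsid unity                  # restart the shell inside the current session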

  • Ubuntu 12.04 problem with Huawei E160 - can't detect the device and system freezes

    - by Matt
    I just installed 12.04, plugged in my E160 and nothing happened - the modem doesn't mount. I found this solution:

    Ubuntu does not mount some Huawei devices due to bugs, problems etc. See if these work:

    1st option: Connect the USB modem. After 10 seconds, type this in a terminal window:

        lsusb

    The output will be like this:

        Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 004 Device 004: ID 12d1:140b Huawei Technologies Co., Ltd.
        Bus 004 Device 002: ID 413c:3016 Dell Computer Corp. Optical 5-Button Wheel Mouse
        Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 002 Device 005: ID 0b97:7762 O2 Micro, Inc. Oz776 SmartCard Reader
        Bus 002 Device 004: ID 413c:8103 Dell Computer Corp. Wireless 350 Bluetooth
        Bus 002 Device 003: ID 0b97:7761 O2 Micro, Inc. Oz776 1.1 Hub
        Bus 002 Device 002: ID 413c:a005 Dell Computer Corp. Internal 2.0 Hub
        Bus 002 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

    The device is a Huawei modem, so let's look at the output. The relevant entry is:

        Bus 004 Device 004: ID 12d1:140b Huawei Technologies Co., Ltd

    Hence, you must type:

        sudo modprobe usbserial vendor=0x12d1 product=0x140b

    2nd option: Download the usb-modeswitch and usb-modeswitch-data packages from packages.ubuntu.com. Install them through the command:

        sudo dpkg -i usb-modeswitch*.deb

    3rd option: Try a combination of both.

    ...but with no result. The modem still is not detected. I've tried to add a new connection but the system can't see my device in the setup dialogue. I also noticed that when I open e.g. a terminal and try to type something, the system freezes for a while. Thanks for help!

  • External routing for local interfaces in a virtualized network

    - by Arkaitz Jimenez
    Current setup:

        br0
         |-- tun10 -pipe- tun0 (192.240.240.1)
         |-- tun11 -pipe- tun1 (192.240.240.2)
         |-- tun12 -pipe- tun2 (192.240.240.3)

    The pipe program is a custom program that forwards data back-to-back between two tun interfaces. The idea is putting two programs on .2 and .3 while keeping .1 as the local interface on the current machine. The main problem is that I want to route packets to .2 and .3 through .1 and br0, but as they are local interfaces, the kernel ignores any routing instruction and just delivers the packet to the proper interface. I tried iptables, but the nat table doesn't even see ping packets to those interfaces. A ping 192.240.240.2 delivers an ICMP packet with source and destination .2 to tun1; ideally it should deliver a packet with source .1 and destination .2 at tun1, through tun0-br0-tun1. Any hint?
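    A heavily hedged workaround sometimes cited for exactly this "the kernel short-circuits local addresses" problem: move the kernel's local route for the address out of the local table, so the packet actually leaves via tun0. It is fragile (the kernel re-creates local routes whenever addresses change), so treat it as an experiment:

        ip route del table local local 192.240.240.2 dev tun1
        ip route add 192.240.240.2 dev tun0 src 192.240.240.1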

  • add height to accordion

    - by Hursh Prasad
    I'd like to know the best way to add overall height to the accordion example in the link below. I would like to make the ul.sub-menu element taller; I would want the extra space to show as just empty, with no list elements. http://vtimbuc.net/gallery/pure-css3-accordion-menu-tutorial/ I think it is possible by adding another tag, like a div around the ul, but I am wondering: is there an easier CSS way?
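    For what it's worth, a minimal CSS-only sketch (the selector matches the tutorial's sub-menu class; the pixel value is arbitrary):

        ul.sub-menu {
            /* extra space renders as an empty area below the items */
            min-height: 240px;
        }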

  • Comparing the Performance of Visual Studio's Web Reference to a Custom Class

    As developers, we all make assumptions when programming. Perhaps the biggest assumption we make is that those libraries and tools that ship with the .NET Framework are the best way to accomplish a given task. For example, most developers assume that using ASP.NET's Membership system is the best way to manage user accounts in a website (rather than rolling your own user account store). Similarly, creating a Web Reference to communicate with a web service generates markup that auto-creates a proxy class, which handles the low-level details of invoking the web service, serializing parameters, and so on.

    Recently a client made us question one of our fundamental assumptions about the .NET Framework and Web Services by asking, "Why should we use the proxy class created by Visual Studio to connect to a web service?" In this particular project we were calling a web service to retrieve data, which was then sorted, formatted slightly and displayed in a web page. The client hypothesized that it would be more efficient to invoke the web service directly via the HttpWebRequest class, retrieve the XML output, populate an XmlDocument object, then use XSLT to output the result to HTML. Surely that would be faster than using Visual Studio's auto-generated proxy class, right?

    Prior to this request, we had never considered rolling our own proxy class; we had always taken advantage of the proxy classes Visual Studio auto-generated for us. Could these auto-generated proxy classes be inefficient? Would retrieving and parsing the web service's XML directly be more efficient? The only way to know for sure was to test my client's hypothesis.
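    For readers who want to picture the client's alternative, a hedged sketch of the hand-rolled approach (the service URL and stylesheet name are placeholders): fetch the raw XML with HttpWebRequest, load it into an XmlDocument, and transform it with XSLT.

        using System;
        using System.Net;
        using System.Xml;
        using System.Xml.Xsl;

        var request = (HttpWebRequest)WebRequest.Create(
            "http://example.com/Service.asmx/GetData");      // placeholder URL
        using (var response = request.GetResponse())
        {
            var doc = new XmlDocument();
            doc.Load(response.GetResponseStream());          // raw service XML

            var xslt = new XslCompiledTransform();
            xslt.Load("format.xslt");                        // placeholder stylesheet
            using (var writer = XmlWriter.Create(Console.Out))
                xslt.Transform(doc, writer);                 // XML -> HTML
        }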

  • Outputting SVN hook messages

    - by Luke Segars
    Hi all, I have a Subversion repository on my Linux machine that is set up to export a new build of a project every time a new commit occurs, using a post-commit hook. I would really like to be able to provide an output message to the committer containing some status information once the hook completes. Is it possible to redirect the output of the hook so that it comes after the standard commit messages? For example:

        owner@dev-machine:/working/dir$ svn commit
        Sending        FILE1
        Sending        FILE2
        Transmitting file data ...
        Committed revision 13.
        Exporting project...
        Successfully exported to mysite.com

    The addition of the last two lines is the functionality I'm looking for.
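    A hedged note on what Subversion actually supports here: the client ignores a post-commit hook's stdout, and only relays its stderr when the hook exits non-zero, where it appears as a warning after "Committed revision N.". A sketch of a hooks/post-commit script that leans on this (the export script is hypothetical):

        #!/bin/sh
        REPOS="$1"
        REV="$2"
        /usr/local/bin/export-build "$REPOS" "$REV" 2>/dev/null  # hypothetical exporter
        echo "Exporting project... successfully exported to mysite.com" >&2
        exit 1   # non-zero exit is what makes svn show the stderr text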

  • What is the correct way to restart udev in Ubuntu?

    - by zerkms
    I've changed the name of my eth1 interface to eth0. How do I now ask udev to re-read the config? service udev restart and udevadm control --reload-rules don't help. So is there any valid way other than rebooting? (Yes, a reboot fixes this issue.)

    UPD: yes, I know I should prepend the commands with sudo, but either one I posted above changes nothing in the ifconfig -a output: I still see eth1, not eth0.

    UPD 2: I just changed the NAME property of the udev rule line. I don't know any reason for this to be ineffective. Neither command I posted above produces any error, but they just don't change the actual interface name in the ifconfig -a output. If I reboot, the interface name changes as expected.

    UPD 3: Let me explain the whole case better. For development purposes I'm writing a script that clones virtual machines (VirtualBox-driven) and pre-sets them up in some way. I run a command to clone the VM and start it, and since the network interface MAC has changed, udev adds a second rule to the persistent network rules. Right after the machine is booted for the first time there are 2 rules:

    - eth0, which does not exist, as it carries the MAC from the original VM image
    - eth1, which exists, but all the configuration in all files refers to eth0, so it is not much good to me

    So with sed I delete the line with eth0 (it is obsolete and useless in the cloned image) and replace eth1 with eth0. Now I have a valid persistent rule, but there is still eth1 in /dev. The issue: I don't want to reboot the machine (it would take extra time, which is a bad thing at the VM-building stage) - I just want my /dev rebuilt with some command so I have a ready-to-use VM without any reboots.
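    Since the persistent-net rule has already been rewritten, one hedged way to make the running system match it without a reboot is renaming the interface directly with iproute2 (the interface must be down while it is renamed):

        ip link set eth1 down
        ip link set eth1 name eth0
        ip link set eth0 up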

  • nvidia segfault crashes XServer (12.04 fresh install)

    - by Sébastien GILLES
    I am on a fresh install of 12.04, and my video card is a Quadro FX 570, which appears to be supported (see the output of unity_support_test below). But my X server crashes randomly, which shuts down my session and brings me back to the login screen. This happens several times per hour. After checking syslog, it appears that a segfault in the nvidia driver is the cause of the problem:

        Jun 13 11:14:17 lima kernel: [86569.828982] Xorg[17940]: segfault at ffebeb69 ip b4c8f7aa sp bf8076dc error 4 in nvidia_drv.so[b47a7000+706000]
        Jun 13 11:14:36 lima gnome-session[18119]: Gdk-WARNING: gnome-session: Fatal IO error 11 (Ressource temporairement non disponible) on X server :3.

    The funny thing is that as I was reporting this bug, my X server crashed again (!) and this time I got another error message:

        Jun 13 11:29:39 lima kernel: [87491.106441] NVRM: Xid (0000:02:00): 6, PE0001
        Jun 13 11:29:39 lima gnome-session[26493]: Gdk-WARNING: gnome-session: Fatal IO error 11 (Ressource temporairement non disponible) on X server :4.

    Any idea as to what the problem can be, given that I am on a fresh install with a supported video card?

    Sébastien

    == Output of unity_support_test

        $ /usr/lib/nux/unity_support_test -p
        OpenGL vendor string:   NVIDIA Corporation
        OpenGL renderer string: Quadro FX 570/PCIe/SSE2
        OpenGL version string:  3.3.0 NVIDIA 295.40

        Not software rendered:    yes
        Not blacklisted:          yes
        GLX fbconfig:             yes
        GLX texture from pixmap:  yes
        GL npot or rect textures: yes
        GL vertex program:        yes
        GL fragment program:      yes
        GL vertex buffer object:  yes
        GL framebuffer object:    yes
        GL version is 1.4+:       yes

        Unity 3D supported:       yes

  • svn diff including annotate/blame-like information on when changes were made and by whom

    - by Wouter Coekaerts
    Can you add annotate/blame-like information to svn diff, so that for every changed line it includes which user and revision changed that line? For example, an annotated diff comparing revisions 8-10 could output something like:

        9  user1 - some line that user1 deleted in revision 9
        10 user2 + some line that user2 added in revision 10

    The context - lines around it which haven't changed - may be included as well or not; it doesn't matter. It's not just a matter of "quickly" writing a shell script combining the output of svn diff and svn annotate. annotate, for example, will never show you who removed a line. It's also not a matter of doing annotate on a revision in the past: we're not interested in who originally added the line that got removed (that's not the one who "caused" the diff), we want to know who removed it. I suspect the only way to implement something like this is to inspect each and every commit between the two revisions being compared (and somehow map all the changes in the separate diffs to lines in the total diff)... Does there exist a tool that does something like that?
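    No such tool comes to mind, but here is a hedged sketch of the "inspect every commit" approach the question anticipates - it prefixes each revision's own diff lines with revision and author, without attempting to merge them into one combined diff:

        #!/bin/sh
        # usage: ./annodiff.sh FROM TO [path]   (script and names are made up)
        FROM=$1; TO=$2; TARGET=${3:-.}
        for r in $(seq $((FROM + 1)) "$TO"); do
            author=$(svn log -q -r "$r" "$TARGET" |
                     awk -F'|' 'NR==2 { gsub(/ /, "", $2); print $2 }')
            svn diff -c "$r" "$TARGET" |
                grep -E '^[+-]' | grep -Ev '^(\+\+\+|---)' |
                sed "s/^/$r $author /"
        done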

  • iptables rule to submit packets matching a specific negative rule

    - by Aditya Sehgal
    I am using netfilter_queue to pick up certain packets from the kernel and do some processing on them. To the netfilter queue, I need all packets from a particular source except UDP packets with src port 2152 and dst port 2152. I try to add the iptables rule as:

        iptables -A OUTPUT ! -s 192.168.0.3 ! -p udp ! --sport 2905 ! --dport 2905 -j NFQUEUE --queue-num 0

    iptables throws up an error of "Invalid argument". Querying dmesg, I see the following error printed:

        ip_tables: udp match: only valid for protocol 17

    I have tried the following variation, with the same error thrown:

        iptables -A OUTPUT ! -s 192.168.0.3 ! -p udp --sport 2905 --dport 2905 -j NFQUEUE --queue-num 0

    Can you please advise on the correct usage of the iptables command for my case.
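    The error happens because --sport/--dport only exist once "-p udp" is actually in effect, so they cannot be combined with a negated protocol. A hedged sketch of the usual two-rule pattern instead (ports taken from the question's prose; adjust the source match to your intent): let the exception through first, then queue the rest.

        iptables -A OUTPUT -p udp --sport 2152 --dport 2152 -j ACCEPT
        iptables -A OUTPUT ! -s 192.168.0.3 -j NFQUEUE --queue-num 0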
