Search Results

Search found 27946 results on 1118 pages for 'output buffer empty'.

Page 683/1118

  • xrandr detects only one display

    - by cupakob
    Hi all, I have a problem getting a picture on my TV over VGA (and also over S-Video to SCART). I tried it first via xorg.conf, but without success. After that I tried xrandr, but xrandr detects only my laptop display. Here is the output: bufka [~] $ xrandr -q Screen 0: minimum 1680 x 1050, current 1680 x 1050, maximum 1680 x 1050 default connected 1680x1050+0+0 0mm x 0mm 1680x1050 50.0* 51.0 52.0 Any suggestions on how to solve the problem? My video card is an Nvidia GeForce 8600M GT, the TV is an LG M227WPD, and the OS is Ubuntu Lucid...

    Read the article

  • Memory is free, but still swapping?

    - by japancheese
    Hello, I'm sure this is a pretty basic question, but I'm just trying to get a grasp of what's going on with my Ubuntu (Hardy Heron) server (running a Rails-based site). It seems that I have free memory available, yet the system is reporting that it is still swapping memory (unless I'm reading this incorrectly?). Here is the "free -m" output total used free shared buffers cached Mem: 1024 905 118 0 33 409 -/+ buffers/cache: 462 561 Swap: 2047 95 1952 Could anyone explain to me some possible reasons that it is maintaining 95 MB of swap at all times (it is never less)? I'm just looking for some leads on things I could check out that would explain to me exactly how memory is utilized in Linux.
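
    One thing worth knowing, plus a minimal sketch of a setting to check (assuming a stock kernel): pages that were swapped out are not pulled back into RAM just because memory later becomes free, so a small, static swap figure is normal; the vm.swappiness knob controls how eagerly the kernel swaps out in the first place.

        # Show how aggressively the kernel swaps out (default is usually 60)
        cat /proc/sys/vm/swappiness

        # Lower it for the running system (the value 10 is only illustrative)
        sudo sysctl vm.swappiness=10

        # Make the change persist across reboots
        echo 'vm.swappiness = 10' | sudo tee -a /etc/sysctl.conf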

    Read the article

  • X Error of failed request: BadMatch [migrated]

    - by Andrew Grabko
    I'm trying to execute some "hello world" OpenGL code: #include <GL/freeglut.h> void displayCall() { glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); glEnable(GL_DEPTH_TEST); ... Some more code here glutSwapBuffers(); } int main(int argc, char *argv[]) { glutInit(&argc, argv); glutInitDisplayMode(GLUT_RGB | GLUT_DOUBLE | GLUT_DEPTH); glutInitWindowSize(500, 500); glutInitWindowPosition(300, 200); glutInitContextVersion(4, 2); glutInitContextFlags(GLUT_FORWARD_COMPATIBLE); glutCreateWindow("Hello World!"); glutDisplayFunc(displayCall); glutMainLoop(); return 0; } As a result I get: X Error of failed request: BadMatch (invalid parameter attributes) Major opcode of failed request: 128 (GLX) Minor opcode of failed request: 34 () Serial number of failed request: 39 Current serial number in output stream: 40 Here is the stack trace: fghCreateNewContext() at freeglut_window.c:737 0x7ffff7bbaa81 fgOpenWindow() at freeglut_window.c:878 0x7ffff7bbb2fb fgCreateWindow() at freeglut_structure.c:106 0x7ffff7bb9d86 glutCreateWindow() at freeglut_window.c:1,183 0x7ffff7bbb4f2 main() at AlphaTest.cpp:51 0x4007df Here is the last piece of code, after which the program crashes: createContextAttribs = (CreateContextAttribsProc) fghGetProcAddress("glXCreateContextAttribsARB" ); if ( createContextAttribs == NULL ) { fgError( "glXCreateContextAttribsARB not found" ); } context = createContextAttribs( dpy, config, share_list, direct, attributes ); The "glXCreateContextAttribsARB" address is obtained successfully, but the program crashes on its invocation. If I specify an OpenGL version less than 4.2 in "glutInitContextVersion()" the program runs without errors. Here is the OpenGL version reported by glxinfo: OpenGL version string: 4.2.0 NVIDIA 285.05.09 I would appreciate any further ideas.

    Read the article

  • Cannot get temperatures in Dell Studio 1558

    - by Athul Iddya
    I could never get proper temperatures on my Dell Studio 1558. lm-sensors and acpi give wrong readings. The output of sensors is: $ sensors acpitz-virtual-0 Adapter: Virtual device temp1: +26.8°C (crit = +100.0°C) temp2: +0.0°C (crit = +100.0°C) acpi -V gives me: $ acpi -V Battery 0: Full, 100% Battery 0: design capacity 414 mAh, last full capacity 369 mAh = 89% Adapter 0: on-line Thermal 0: ok, 0.0 degrees C Thermal 0: trip point 0 switches to mode critical at temperature 100.0 degrees C Thermal 0: trip point 1 switches to mode passive at temperature 95.0 degrees C Thermal 0: trip point 2 switches to mode active at temperature 71.0 degrees C Thermal 0: trip point 3 switches to mode active at temperature 55.0 degrees C Thermal 1: ok, 26.8 degrees C Thermal 1: trip point 0 switches to mode critical at temperature 100.0 degrees C Thermal 1: trip point 1 switches to mode active at temperature 71.0 degrees C Thermal 1: trip point 2 switches to mode active at temperature 55.0 degrees C Cooling 0: LCD 0 of 15 Cooling 1: Processor 0 of 10 Cooling 2: Processor 0 of 10 Cooling 3: Processor 0 of 10 Cooling 4: Processor 0 of 10 Cooling 5: Fan 0 of 1 Cooling 6: Fan 0 of 1 I suspect even hddtemp gives bogus readings, as it's always at 46 $ sudo hddtemp /dev/sda /dev/sda: ST9500420AS: 46°C I have gone through some bug reports and some used to have the same problem after resuming from suspend. But I always have this problem. I updated to the latest BIOS from Windows a couple of weeks ago; will updating from Ubuntu change anything? CORRECTION: hddtemp's readings do change. It's now at 45.
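
    A minimal first step, assuming the sensor chips simply aren't being driven yet, is to let lm-sensors probe for monitoring hardware and load whatever modules it suggests:

        # Probe for hardware monitoring chips and accept the suggested
        # additions to /etc/modules
        sudo sensors-detect

        # Load the Intel CPU temperature driver without rebooting (illustrative)
        sudo modprobe coretemp

        # Re-read the sensors
        sensors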

    Read the article

  • Background process text appears in terminal vim

    - by Jezen Thomas
    First time poster, long time lurker, searched, couldn’t find etc, etc. I’m running vim in tmux, in iTerm2. I’m running a server with Grunt.js, which I have running in the background, out of my way. I start my grunt server in the background like this: grunt server & Grunt also watches a bunch of files, and runs some tasks when any of the watched files have been written to. The problem is, when I am in vim and I write a file, the output from grunt starts rendering in vim! (Screenshots of the terminal before and after writing the file are omitted here.) What have I tried? I’ve tried running a ‘stock’ vim by starting with this: vim -u NONE …But the problem remains. This suggests to me that the problem is not with my .vimrc. Perhaps it’s an issue with iTerm2, I don’t know. Help.
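
    One workaround worth sketching (command and task names are taken from the question): a job started with & still shares the terminal’s stdout, so anything it prints lands on top of whatever is in the foreground. Redirecting its output, or giving it a tmux window of its own, keeps it out of vim:

        # Send grunt's output to a log file instead of the shared terminal
        grunt server > grunt.log 2>&1 &

        # Or run it in a dedicated tmux window and keep editing in another
        tmux new-window -n grunt 'grunt server'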

    Read the article

  • Retrieving a specific value from "df -h" using shell

    - by Diego Dias
    When I use df -h, I get the following output: Filesystem Size Used Avail Use% Mounted on /dev/mapper/VolGroup00-LogVol00 59G 2.2G 54G 4% / /dev/sda1 122M 38M 78M 33% /boot tmpfs 1.1G 0 1.1G 0% /dev/shm 10.10.0.105:/somepath 11T 8.4T 2.1T 81% /storage4 10.11.0.101:/somepath 15T 8.9T 5.9T 61% /storage1 /dev/mapper/patha 5.0T 255G 4.8T 5% /storage5_vol0 /dev/mapper/pathb 5.0T 195G 4.9T 4% /storage5_vol1 /dev/mapper/pathc 5.0T 608G 4.5T 12% /storage5_vol2 I want to write a script that gets the value of the Avail column for a specific storage. I used to use df -k /storage_name | tail -1 | awk '{print $3}' But the last line may or may not start with the Filesystem name (df wraps long device names onto their own line), which changes the field I need from $3 to $4. How can I get the Avail value in a single command line regardless of whether the preceding columns are present?
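
    A sketch of two common ways around the wrapping (the mount point is taken from the question): -P forces POSIX output so long device names never wrap, and a recent coreutils df can print just the column you want.

        # POSIX output never wraps, so Avail is always the 4th field
        df -Ph /storage4 | awk 'NR==2 {print $4}'

        # With a newer coreutils df, ask for the column directly
        df -h --output=avail /storage4 | tail -1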

    Read the article

  • Added resolution not working after upgrading to 12.04

    - by David
    After upgrading, my screen resolution (added by xrandr commands at startup) did not work like before. Messages appear showing some errors (that I never had in 11.10): "No se pudo aplicar la configuración almacenada para los monitores" / "Can't apply the stored configuration for the monitors." This script didn't work either. xrandr --newmode "1280x1024_60.00" 109.00 1280 1368 1496 1712 1024 1027 1034 1063 -hsync +vsync xrandr --addmode VGA1 1280x1024_60.00 xrandr --output VGA1 --mode 1280x1024_60.00 I also tried deleting monitors.xml, but nothing; that only gets rid of the message window. It's been said that it's a normal and well-known problem for PCs with Intel integrated video cards. The new version of gnome-settings-daemon stores its configuration information in dconf rather than gconf. I tried something, but the problem persists. This is what I did: Install the dconf-tools package, and then run dconf-editor. In the tree on the left, navigate org - gnome - settings-daemon - plugins - xrandr. Uncheck the active checkbox. Restart your X server (Ctrl+Alt+Backspace). (It didn't work out for me, but it may be helpful to someone.)
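
    The same dconf toggle can be flipped from a terminal; a sketch, assuming it is the gnome-settings-daemon xrandr plugin that keeps re-applying its stored layout over the xrandr script:

        # Stop gnome-settings-daemon from re-applying its stored monitor setup
        gsettings set org.gnome.settings-daemon.plugins.xrandr active false

        # Clear any stale stored layout, then log out and back in
        rm -f ~/.config/monitors.xml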

    Read the article

  • Django HttpResponseRedirect acting as proxy rather than 302

    - by Trevor Burnham
    I have a Django view that returns HttpResponseRedirect("/redirect-target") When running the server locally, if I visit the page that returns that redirect, I get the log output [17/Oct/2013 15:26:02] "GET /redirecter HTTP/1.1" 302 0 [17/Oct/2013 15:26:02] "GET /redirect-target HTTP/1.1" 404 0 as expected. But, when I visit that page in Chrome, the Network tab shows the request to /redirecter with the response from /redirect-target, rather than showing the 302. cURL does the same: $ curl -I -X GET http://localhost/redirecter HTTP/1.1 404 Not Found date: Thu, 17 Oct 2013 19:32:30 GMT connection: keep-alive transfer-encoding: chunked In production, the same Django code does show a 302 redirect in Chrome and cURL. What could be going on here? Is there some kind of Django setting that might be causing it to proxy the target rather than send a redirect when HttpResponseRedirect is used (but lie about it in the log)? Or is there a quirk on my system (OS X) that might cause localhost redirects to behave this way?
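
    One quick check, offered as a sketch (the dev-server port is an assumption): the log lines come from Django's development server, but http://localhost/redirecter with no port goes to whatever is listening on port 80, which may be a local proxy that follows the redirect itself before answering. Comparing the two verbosely shows which server actually replied:

        # Talk to the Django dev server directly (port is illustrative)
        curl -sv http://localhost:8000/redirecter -o /dev/null 2>&1 | grep -E '^< (HTTP|Location|Server)'

        # Talk to whatever owns port 80 on this machine
        curl -sv http://localhost/redirecter -o /dev/null 2>&1 | grep -E '^< (HTTP|Location|Server)'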

    Read the article

  • How to re-add RAID-10 dropped drive?

    - by thiesdiggity
    I have a problem that I can't seem to solve. We have a Ubuntu server setup with RAID-10 and two of the drives dropped out of the array. When I try to re-add them using the following command: mdadm --manage --re-add /dev/md2 /dev/sdc1 I get the following error message: mdadm: Cannot open /dev/sdc1: Device or resource busy When I do a "cat /proc/mdstat" I get the following: Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [r$ md2 : active raid10 sdb1[0] sdd1[3] 1953519872 blocks 64K chunks 2 near-copies [4/2] [U__U] md1 : active raid1 sda2[0] sdc2[1] 468853696 blocks [2/2] [UU] md0 : active raid1 sda1[0] sdc1[1] 19530688 blocks [2/2] [UU] unused devices: <none> When I run "/sbin/mdadm --detail /dev/md2" I get the following: /dev/md2: Version : 00.90 Creation Time : Mon Sep 5 23:41:13 2011 Raid Level : raid10 Array Size : 1953519872 (1863.02 GiB 2000.40 GB) Used Dev Size : 976759936 (931.51 GiB 1000.20 GB) Raid Devices : 4 Total Devices : 2 Preferred Minor : 2 Persistence : Superblock is persistent Update Time : Thu Oct 25 09:25:08 2012 State : active, degraded Active Devices : 2 Working Devices : 2 Failed Devices : 0 Spare Devices : 0 Layout : near=2, far=1 Chunk Size : 64K UUID : c6d87d27:aeefcb2e:d4453e2e:0b7266cb Events : 0.6688691 Number Major Minor RaidDevice State 0 8 17 0 active sync /dev/sdb1 1 0 0 1 removed 2 0 0 2 removed 3 8 49 3 active sync /dev/sdd1 Output of df -h is: Filesystem Size Used Avail Use% Mounted on /dev/md1 441G 2.0G 416G 1% / none 32G 236K 32G 1% /dev tmpfs 32G 0 32G 0% /dev/shm none 32G 112K 32G 1% /var/run none 32G 0 32G 0% /var/lock none 32G 0 32G 0% /lib/init/rw tmpfs 64G 215M 63G 1% /mnt/vmware none 441G 2.0G 416G 1% /var/lib/ureadahead/debugfs /dev/mapper/RAID10VG-RAID10LV 1.8T 139G 1.6T 8% /mnt/RAID10 When I do a "fdisk -l" I can see all the drives needed for the RAID-10. The RAID-10 is part of the /dev/mapper, could that be the reason why the device is coming back as busy? Anyone have any suggestions on what I can try to get the drives back into the array? Any help would be greatly appreciated. Thanks!
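
    A sketch of some non-destructive checks worth running first (device names are taken from the question): the mdstat output shows /dev/sdc1 is already an active member of md0, which by itself explains "Device or resource busy", so it is worth confirming which partition the degraded md2 actually expects before adding anything back.

        # See which array a candidate partition says it belongs to
        sudo mdadm --examine /dev/sdc1 | grep -iE 'uuid|raid'

        # Compare against the UUID of the degraded array
        sudo mdadm --detail /dev/md2 | grep -i uuid

        # Only once the right partition is identified, add it back (example only)
        # sudo mdadm --manage /dev/md2 --add /dev/sdXN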

    Read the article

  • Creepy MySQL error during hard work

    - by Kiewic
    Hi, I have a MySQL database installed on an openSUSE 11.1 server (it is a Bitnami image). The database works fine and can run for many days without any error, but when MySQL receives a huge amount of transactions, it dies immediately (the error screenshot is omitted here). Moreover, I don't know how to restart MySQL. I have tried this: /opt/bitnami/mysql/bin/mysqld start But it doesn't work; it gives me the following output: 110209 17:09:01 [ERROR] Fatal error: Please read "Security" section of the manual to find out how to run mysqld as root! 110209 17:09:01 [ERROR] Aborting 110209 17:09:01 [Note] /opt/bitnami/mysql/bin/mysqld.bin: Shutdown complete It doesn't matter what kind of statements are executing; if there are a huge number of them, MySQL dies. The MySQL server version is 5.1.30. What could be causing these sudden failures?
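
    On the restart side, the log line itself points at the problem: mysqld refuses to run directly as root. A sketch of two ways to bring it up correctly (the control-script path is an assumption based on the install prefix shown above):

        # Start mysqld under the mysql user instead of root
        sudo /opt/bitnami/mysql/bin/mysqld_safe --user=mysql &

        # Or use the Bitnami stack's control script, if the image provides one
        sudo /opt/bitnami/ctlscript.sh start mysql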

    Read the article

  • OData &ndash; The easiest service I can create

    - by Jon Dalberg
    I wanted to create an OData service with the least amount of code so I fired up Visual Studio and got cracking. I decided to serve up a list of naughty words and make them read-only. Create a new web project. I created an empty MVC 2 application but MVC is not required for OData. Add a new WCF Data Service to the project. I named mine NastyWords.svc since I’m serving up a list of nasty words. Add a class to expose via the service: NastyWord

        [DataServiceKey("Word")]
        public class NastyWord
        {
            public string Word { get; set; }
        }

    I need to be able to uniquely identify instances of NastyWord for the DataService so I used the DataServiceKey attribute with the “Word” property as the key. I could have added an “ID” property which would have uniquely identified them and would then not need the “DataServiceKey” attribute, because the DataService would apply some reflection and heuristics to guess at which property would be the unique identifier. However, the words themselves are unique so adding an “ID” property would be redundantly repetitive. Then I created a data source to expose my NastyWord objects to the service. This is just a simple class with IQueryable<T> properties exposing the entities for my service:

        public class NastyWordsDataSource
        {
            private static IList<NastyWord> words = new List<NastyWord>
            {
                new NastyWord{ Word="crap"},
                new NastyWord{ Word="darn"},
                new NastyWord{ Word="hell"},
                new NastyWord{ Word="shucks"}
            };

            public NastyWordsDataSource()
            {
                NastyWords = words.AsQueryable();
            }

            public IQueryable<NastyWord> NastyWords { get; private set; }
        }

    Now I can go to the NastyWords.svc class and tell it which data source to use and which entities to expose:

        public class NastyWords : DataService<NastyWordsDataSource>
        {
            // This method is called only once to initialize service-wide policies.
            public static void InitializeService(DataServiceConfiguration config)
            {
                config.SetEntitySetAccessRule("*", EntitySetRights.AllRead);
                config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
            }
        }

    Compile, browse to my NastyWords.svc and weep with joy. Now I can query my service just like any other OData service. Next time, I’ll modify this service to allow updates to be sent so I can build up my list of nasty words. Enjoy!
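
    A quick way to see the result from the outside (a sketch; host and port are placeholders): a WCF Data Service exposes its entity sets at predictable OData URLs, so the read-only feed can be pulled with plain HTTP.

        # Service document listing the available entity sets
        curl "http://localhost:1234/NastyWords.svc/"

        # The whole NastyWords feed, and a single entry addressed by its key
        curl "http://localhost:1234/NastyWords.svc/NastyWords"
        curl "http://localhost:1234/NastyWords.svc/NastyWords('darn')"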

    Read the article

  • How do I require use of the 5 GHz band when connecting to a Wireless N access point?

    - by cqcallaw
    What it says in the topic: there's a Wireless N access point to which I'd like to connect using the 5 GHz band exclusively. How does one accomplish this? Using the directive band=a in the connection configuration file in /etc/NetworkManager/system-connections doesn't seem to affect anything (/var/log/syslog still shows attempts by wpa_supplicant to connect using the 2.4 GHz band), and running iwconfig wlan0 freq 5G per this question results in the following error: Error for wireless request "Set Frequency" (8B04) : SET failed on device wlan0 ; Invalid argument. [Edit] I'm hoping the answer won't depend on the hardware in use, but here's some information about the hardware, just in case: System is an Asus ZenBook Prime UX31A-DB51, running Ubuntu 12.04. lspci output: 00:00.0 Host bridge: Intel Corporation Ivy Bridge DRAM Controller (rev 09) 00:02.0 VGA compatible controller: Intel Corporation Ivy Bridge Graphics Controller (rev 09) 00:04.0 Signal processing controller: Intel Corporation Device 0153 (rev 09) 00:14.0 USB controller: Intel Corporation Panther Point USB xHCI Host Controller (rev 04) 00:16.0 Communication controller: Intel Corporation Panther Point MEI Controller #1 (rev 04) 00:1a.0 USB controller: Intel Corporation Panther Point USB Enhanced Host Controller #2 (rev 04) 00:1b.0 Audio device: Intel Corporation Panther Point High Definition Audio Controller (rev 04) 00:1c.0 PCI bridge: Intel Corporation Panther Point PCI Express Root Port 1 (rev c4) 00:1c.1 PCI bridge: Intel Corporation Panther Point PCI Express Root Port 2 (rev c4) 00:1d.0 USB controller: Intel Corporation Panther Point USB Enhanced Host Controller #1 (rev 04) 00:1f.0 ISA bridge: Intel Corporation Panther Point LPC Controller (rev 04) 00:1f.2 SATA controller: Intel Corporation Panther Point 6 port SATA Controller [AHCI mode] (rev 04) 00:1f.3 SMBus: Intel Corporation Panther Point SMBus Controller (rev 04) 00:1f.6 Signal processing controller: Intel Corporation Panther Point Thermal Management Controller (rev 04) 02:00.0 Network controller: Intel Corporation Centrino Advanced-N 6235 (rev 24) The driver for the wireless interface is iwlwifi.

    Read the article

  • How can I get ssh-agent working over ssh and in tmux (on OS X)?

    - by Rich
    I have a private key set up for my github account, the passphrase to which is, I believe, stored in OS X's keychain. I certainly don't have to type it in when I open a terminal window and enter ssh [email protected]. However, when I'm running bash over an ssh session, or locally inside a tmux session, I have to type in the passphrase every single time I attempt to ssh to github. This question suggests that a similar problem exists with screen, but I don't really understand the issue well enough to fix it in tmux. There's also this page which includes a fairly complicated solution, but for zsh. EDIT: In response to @Mikel's answer, from a local terminal I get the following output: [~] $ echo $SSH_AUTH_SOCK /tmp/launch-S4HBD6/Listeners [~] $ ssh-add -l 2048 [my key fingerprint] /Users/richie/.ssh/id_rsa (RSA) [~] $ typeset -p SSH_AUTH_SOCK declare -x SSH_AUTH_SOCK="/tmp/launch-S4HBD6/Listeners" Whereas over ssh or in tmux I get: [~] $ echo $SSH_AUTH_SOCK [~] $ ssh-add -l Could not open a connection to your authentication agent. [~] $ typeset -p SSH_AUTH_SOCK bash: typeset: SSH_AUTH_SOCK: not found echo $SSH_AGENT_PID returns nothing whatever shell I run it from.
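
    One pattern that covers the remote case, offered as a sketch (the file locations are conventions, not requirements): with agent forwarding enabled (ssh -A, or ForwardAgent yes in ~/.ssh/config), keep a stable symlink to whatever socket the most recent connection brought in, and point shells inside tmux at that fixed path so they never hold a stale SSH_AUTH_SOCK.

        # ~/.ssh/rc -- sshd runs this on each login; refresh the stable symlink
        if [ -n "$SSH_AUTH_SOCK" ]; then
            ln -sf "$SSH_AUTH_SOCK" ~/.ssh/ssh_auth_sock
        fi

        # ~/.bashrc -- inside tmux, always use the stable path
        if [ -n "$TMUX" ]; then
            export SSH_AUTH_SOCK=~/.ssh/ssh_auth_sock
        fi

    For the purely local case, tmux's update-environment option (which includes SSH_AUTH_SOCK by default) should repopulate the variable in newly created shells once the tmux server is started from a terminal that has it set.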

    Read the article

  • Lagging digital tv over ethernet

    - by Steve
    I have an HDHomeRun TV-over-ethernet device, which connects the aerial to my router, and from there the router connects to my PC over about 15m of 100Mbps ethernet cable. The TV output lags every second. It does not do so for a computer much closer to the router. It is odd to me that the network rate is around 7Mbps on a 100Mbps cable. I am not downloading or streaming anything else on the affected computer. Is this lag caused by the speed of the cable, the length of the cable, or interference on the cable? I am considering swapping the ethernet cable with shielded ethernet cable.
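
    Before swapping cable, it may be worth ruling out a duplex mismatch or an error-prone link, which are common causes of periodic stutter on longer runs; a sketch (the interface name is an assumption):

        # Check the negotiated speed and duplex on the affected machine
        sudo ethtool eth0 | grep -E 'Speed|Duplex'

        # Look for errors or drops accumulating on the link
        ip -s link show eth0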

    Read the article

  • ORAchk version 2.2.5 is now available for download

    - by Gerry Haskins
    Those awfully nice ORAchk folks have asked me to let you know about their latest release... ORAchk version 2.2.5 is now available for download, new features in 2.2.5: Running checks for multiple databases in parallel Ability to schedule multiple automated runs via ORAchk daemon New "scratch area" for ORAchk temporary files moved from /tmp to a configurable $HOME directory location System health score calculation now ignores skipped checks Checks the health of pluggable databases using OS authentication New report section to report top 10 time consuming checks to be used for optimizing runtime in the future More readable report output for clusterwide checks Includes over 50 new Health Checks for the Oracle Stack Provides a single dashboard to view collections across your entire enterprise using the Collection Manager, now pre-bundled Expands coverage of pre and post upgrade checks to include standalone databases, with new profile options to run only these checks Expands to additional product areas in E-Business Suite of Workflow & Oracle Purchasing and in Enterprise Manager Cloud Control ORAchk has replaced the popular RACcheck tool, extending the coverage based on prioritization of top issues reported by users, to proactively scan for known problems within the area of: Oracle Database Standalone Database Grid Infrastructure & RAC Maximum Availability Architecture (MAA) Validation Upgrade Readiness Validation Golden Gate Enterprise Manager Cloud Control Repository E-Business Suite Oracle Payables (R12 only) Oracle Workflow Oracle Purchasing (R12 only) Oracle Sun Systems Oracle Solaris ORAchk features: Proactively scans for the most impactful problems across the various layers of your stack Streamlines how to investigate and analyze which known issues present a risk to you Executes lightweight checks in your environment, providing immediate results with no configuration data sent to Oracle Local reporting capability showing specific problems and their resolutions Ability to configure email notifications when problems are detected Provides a single dashboard to view collections across your entire enterprise using the Collection Manager ORAchk will expand in the future with high impact checks in existing and additional product areas. If you have particular checks or product areas you would like to see covered, please post suggestions in the ORAchk subspace in My Oracle Support Community. For more details about ORAchk see Document 1268927.2

    Read the article

  • Sitecore Item Web API and Json.Net Test Drive (Part II –Strongly Typed)

    - by jonel
    In the earlier post related to this topic, I talked about using Json.Net to consume the result of the Sitecore Item Web API. In that post, I used the keyword dynamic to express my intention of consuming the returned json of the API. In this article, I will create some useful classes to write our implementation of consuming the API in a strongly-typed way. We will start off with the Record class, which will hold the topmost elements the API presents us. It is a pretty straightforward class. It has 2 properties to hold the statuscode and the result elements. If you intend to use a different property name in your class from the json property, you can do so by passing a string literal of the json property name to the JsonProperty attribute and naming your class property differently. If you look at the earlier post, you will notice that the API returns an array of items that contains all of the Sitecore content item or items and stores them under the result->items array element. To be able to map that array of items, we have to write a collection property and decorate that with the JsonProperty attribute. The JsonItem class is a simple class which will map to the corresponding item property contained in the array. If you notice, these properties are just the basic Sitecore fields. And here’s the main portion of this post that binds it all together. And here’s the output of this code. In closing, the same result can be achieved using the dynamic keyword or defining classes to map the json property returned by the Sitecore Item Web API. With a little more coding, you can take advantage of the power of a strongly-typed solution. Have a good week ahead of you.
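
    For reference, a sketch of where the json being deserialized comes from (the host, item ID, and exact query parameters here are assumptions, and authentication configuration is omitted): the Item Web API serves items from URLs under /-/item/v1/, so the statuscode / result / items structure that the Record class maps can be inspected with a plain HTTP request before writing any C#.

        # Fetch an item as json from the Sitecore Item Web API (illustrative URL)
        curl "http://yoursite/-/item/v1/?sc_itemid={ITEM-GUID-HERE}"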

    Read the article

  • Apache not directing to correct VHost

    - by BANANENMANNFRAU
    I have set up the following virtual host: <VirtualHost *:80> ServerAdmin [email protected] ServerName mysite.com ServerAlias www.mysite.com DocumentRoot /var/www/homepage/public_html ErrorLog ${APACHE_LOG_DIR}/error.log CustomLog ${APACHE_LOG_DIR}/access.log combined </VirtualHost> When I hit my URL, Apache still shows the default page, not the index I've created in the given DocumentRoot. For my domain I have set the A record to the IP of my VPS. apache2ctl -S output: VirtualHost configuration: *:80 is a NameVirtualHost default server xxxxxx.stratoserver.net (/etc/apache2/sites-enabled/000-default.conf:1) port 80 namevhost xxxxxxx.stratoserver.net (/etc/apache2/sites-enabled/000-default.conf:1) port 80 namevhost mysite.com (/etc/apache2/sites-enabled/homepage.conf:1) alias www.mysite.com ServerRoot: "/etc/apache2" Main DocumentRoot: "/var/www" Main ErrorLog: "/var/log/apache2/error.log" Mutex default: dir="/var/lock/apache2" mechanism=fcntl Mutex mpm-accept: using_defaults Mutex watchdog-callback: using_defaults PidFile: "/var/run/apache2/apache2.pid" Define: DUMP_VHOSTS Define: DUMP_RUN_CFG User: name="www-data" id=33 not_used Group: name="www-data" id=33 not_used How do I need to set up my virtual host so that Apache shows the correct site depending on the domain I'm arriving from?
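
    A sketch of the usual first checks on Debian/Ubuntu (the site names are taken from the apache2ctl output above): confirm that the request really carries Host: mysite.com, since browsing by IP or by the server's stratoserver.net name will always land on the default vhost, and disable the packaged default site if it keeps winning.

        # See which vhost answers when the Host header is mysite.com
        curl -s -o /dev/null -w '%{http_code}\n' -H 'Host: mysite.com' http://SERVER-IP/

        # Disable the packaged default site and reload Apache
        sudo a2dissite 000-default.conf
        sudo service apache2 reload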

    Read the article

  • How to create multiple OS on same DVD [duplicate]

    - by learner
    This question already has an answer here: How to make a multiboot CD that will start a user-chosen ISO file 7 answers I searched this forum but there are only general answers which don't give me the desired result. Here is what I want to do. I have (1) a Windows 7 ISO, (2) a Windows 8.1 Preview ISO and (3) an Ubuntu 12.10 ISO file. From these I want to create a single bootable DVD that, when booted, asks which of the three OSes to install. Is it possible? If so please help me.

    Read the article

  • How to tell if OpenGL is really working in Ubuntu 10.04

    - by Jonathan
    I have a Lenovo S9e running Intel integrated graphics. Here is my lspci output related to the graphics: 00:02.1 Display controller: Intel Corporation Mobile 945GM/GMS/GME, 943/940GML Express Integrated Graphics Controller (rev 03) Subsystem: Lenovo Device 3870 Flags: bus master, fast devsel, latency 0 Memory at f0580000 (32-bit, non-prefetchable) [size=512K] Capabilities: [d0] Power Management version 2 I want to know how I can make sure OpenGL support is running in full on an Ubuntu 10.04 installation. I have a few hints suggesting that it is not: The "Desktop Effects" will not load Apps such as stardock, when attempting to use OpenGL rendering, will display black boxes instead of transparency In the game Pioneers, the number-tile icons are suspiciously just black circles Windows games running with Wine will only support software rendering, not hardware rendering When I boot into a Knoppix LiveCD, the desktop effects do work, splendidly, meaning compiz detects my computer as capable. My problem with troubleshooting is that Canonical has basically eliminated the conf-file-based mechanism of X11 as far as I can tell, thus making it even harder to ensure graphics modules are loading properly. How do I debug and test OpenGL on my Ubuntu 10.04 installation?
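
    A quick check, assuming the mesa-utils package is installed: the direct rendering flag and the renderer string tell you whether OpenGL is hardware-accelerated or has fallen back to software.

        # "direct rendering: Yes" plus a non-software renderer string means hardware GL
        glxinfo | grep -iE 'direct rendering|renderer string'

        # Rough sanity test; a software fallback is usually dramatically slower
        glxgears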

    Read the article

  • How to extract jpegs from a video file using ffmpeg

    - by Andrew Simpson
    I am using C# and ffmpeg. In this scenario I have 279 individual JPEGs and I have used ffmpeg to create an AVI file from these images on my client. CMD Line: -f image2 -r 10 -i "C:\000EC902F17F\img%05d.jpg" -s 352x288 -y "C:\1\test.avi" I then upload to my server. CMD Line: -i c:\1\1.avi c:\1\img-%05d.jpg I then extract JPEGs from the AVI file. I get 265 JPEGs back. Obviously ffmpeg is dropping these frames (most probable) when the AVI is first created. Is there a way to 'force' it to encode using ALL the images I have? Thanks. PS I did not specify any command line option other than the size of the video output. As far as I am aware, if none are specified then ffmpeg automatically chooses the best ones?
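
    One likely cause, offered as a sketch rather than a certainty: the input rate is pinned to 10 fps but no output rate is given, so ffmpeg resamples to its default output frame rate and drops or duplicates frames in the process, and the same happens again on extraction. Pinning the rate on both sides keeps the frame count stable (paths are taken from the question):

        # Encode: keep the output at the same 10 fps as the image sequence
        ffmpeg -f image2 -r 10 -i "C:\000EC902F17F\img%05d.jpg" -r 10 -s 352x288 -y "C:\1\test.avi"

        # Extract: pull frames back out at the same rate
        ffmpeg -i "C:\1\test.avi" -r 10 "C:\1\img-%05d.jpg"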

    Read the article

  • Computer cables explained

    - by Robert English
    I've noticed lately that places to learn about power supply cables, peripherals and fans aren't that easy to find. There's very little information available that gives detailed explanations of what cables are used inside a computer. What I found was very dated and often lacked detailed explanations. For someone planning out their first build it would be great for this to be explained all in one place, like here! Important things to know about cables and connections in a computer? What are their names? Where do they connect to and why? What typical voltages do they output? Changing voltages for overclocking? Please reference PSU cables (fully modular, modular and non-modular, 24-pin, 20+4-pin, etc.), SATA (I, II, III), Molex, etc. EDIT: Forgot to mention that any information about PSU rails would also be appreciated :)

    Read the article

  • iptables ACCEPT policy

    - by kamae
    In Red Hat EL 6, the iptables INPUT policy is ACCEPT, but the INPUT chain has a REJECT entry at the end. /etc/sysconfig/iptables is as below: *filter :INPUT ACCEPT [0:0] :FORWARD ACCEPT [0:0] :OUTPUT ACCEPT [0:0] -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT -A INPUT -p icmp -j ACCEPT -A INPUT -i lo -j ACCEPT -A INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT -A INPUT -j REJECT --reject-with icmp-host-prohibited -A FORWARD -j REJECT --reject-with icmp-host-prohibited COMMIT Do you know why the policy is ACCEPT and not DROP? I think a DROP policy is safer than ACCEPT in case a mistake is made in the chain. Actually the policy is not applied to any packet: # iptables -L -v Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
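
    For reference, a sketch of how the default policy would be tightened if you want both the policy and the final REJECT rule as safety nets (do this with console access available, since a mistake here can lock out SSH):

        # Make packets that fall off the end of the chain get dropped by default
        iptables -P INPUT DROP
        iptables -P FORWARD DROP

        # Persist the running rules back to /etc/sysconfig/iptables on RHEL 6
        service iptables save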

    Read the article

  • UIView with IrrlichtScene - iOS

    - by user1459024
    i have a UIViewController in a Storyboard and want to draw a IrrlichtScene in this View Controller. My Code: WWSViewController.h #import <UIKit/UIKit.h> @interface WWSViewController : UIViewController { IBOutlet UILabel *errorLabel; } @end WWSViewController.mm #import "WWSViewController.h" #include "../../ressources/irrlicht/include/irrlicht.h" using namespace irr; using namespace core; using namespace scene; using namespace video; using namespace io; using namespace gui; @interface WWSViewController () @end @implementation WWSViewController -(void)awakeFromNib { errorLabel = [[UILabel alloc] init]; errorLabel.text = @""; IrrlichtDevice *device = createDevice( video::EDT_OGLES1, dimension2d<u32>(640, 480), 16, false, false, false, 0); /* Set the caption of the window to some nice text. Note that there is an 'L' in front of the string. The Irrlicht Engine uses wide character strings when displaying text. */ device->setWindowCaption(L"Hello World! - Irrlicht Engine Demo"); /* Get a pointer to the VideoDriver, the SceneManager and the graphical user interface environment, so that we do not always have to write device->getVideoDriver(), device->getSceneManager(), or device->getGUIEnvironment(). */ IVideoDriver* driver = device->getVideoDriver(); ISceneManager* smgr = device->getSceneManager(); IGUIEnvironment* guienv = device->getGUIEnvironment(); /* We add a hello world label to the window, using the GUI environment. The text is placed at the position (10,10) as top left corner and (260,22) as lower right corner. */ guienv->addStaticText(L"Hello World! This is the Irrlicht Software renderer!", rect<s32>(10,10,260,22), true); /* To show something interesting, we load a Quake 2 model and display it. We only have to get the Mesh from the Scene Manager with getMesh() and add a SceneNode to display the mesh with addAnimatedMeshSceneNode(). We check the return value of getMesh() to become aware of loading problems and other errors. Instead of writing the filename sydney.md2, it would also be possible to load a Maya object file (.obj), a complete Quake3 map (.bsp) or any other supported file format. By the way, that cool Quake 2 model called sydney was modelled by Brian Collins. */ IAnimatedMesh* mesh = smgr->getMesh("/Users/dbocksteger/Desktop/test/media/sydney.md2"); if (!mesh) { device->drop(); if (!errorLabel) { errorLabel = [[UILabel alloc] init]; } errorLabel.text = @"Konnte Mesh nicht laden."; return; } IAnimatedMeshSceneNode* node = smgr->addAnimatedMeshSceneNode( mesh ); /* To let the mesh look a little bit nicer, we change its material. We disable lighting because we do not have a dynamic light in here, and the mesh would be totally black otherwise. Then we set the frame loop, such that the predefined STAND animation is used. And last, we apply a texture to the mesh. Without it the mesh would be drawn using only a color. */ if (node) { node->setMaterialFlag(EMF_LIGHTING, false); node->setMD2Animation(scene::EMAT_STAND); node->setMaterialTexture( 0, driver->getTexture("/Users/dbocksteger/Desktop/test/media/sydney.bmp") ); } /* To look at the mesh, we place a camera into 3d space at the position (0, 30, -40). The camera looks from there to (0,5,0), which is approximately the place where our md2 model is. */ smgr->addCameraSceneNode(0, vector3df(0,30,-40), vector3df(0,5,0)); /* Ok, now we have set up the scene, lets draw everything: We run the device in a while() loop, until the device does not want to run any more. 
This would be when the user closes the window or presses ALT+F4 (or whatever keycode closes a window). */ while(device->run()) { /* Anything can be drawn between a beginScene() and an endScene() call. The beginScene() call clears the screen with a color and the depth buffer, if desired. Then we let the Scene Manager and the GUI Environment draw their content. With the endScene() call everything is presented on the screen. */ driver->beginScene(true, true, SColor(255,100,101,140)); smgr->drawAll(); guienv->drawAll(); driver->endScene(); } /* After we are done with the render loop, we have to delete the Irrlicht Device created before with createDevice(). In the Irrlicht Engine, you have to delete all objects you created with a method or function which starts with 'create'. The object is simply deleted by calling ->drop(). See the documentation at irr::IReferenceCounted::drop() for more information. */ device->drop(); } - (void)viewDidLoad { [super viewDidLoad]; // Do any additional setup after loading the view, typically from a nib. } - (void)viewDidUnload { [super viewDidUnload]; // Release any retained subviews of the main view. } - (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)interfaceOrientation { return (interfaceOrientation != UIInterfaceOrientationPortraitUpsideDown); } @end Sadly the result is just a black view in the simulator. :( I hope someone here can explain how I can draw the scene in a UIView. Furthermore, I'm getting this error: Could not load sprite bank because the file does not exist: #DefaultFont How can I fix it?

    Read the article

  • USB Keyboard doesn't work in Ubuntu 14.04

    - by Steven Crossan
    My USB keyboard isn't working in Ubuntu 14.04, but also didn't work in 13.10. I upgraded today in the hope that the issue would be resolved, but it wasn't. The keyboard works in BIOS and GRUB but stops working when I reach the login screen. It is detected by the system, but just doesn't work. Output of lsusb: Bus 002 Device 002: ID 0846:9011 NetGear, Inc. WNDA3100v2 802.11abgn [Broadcom BCM4323] Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 008 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 007 Device 002: ID 1532:000d Razer USA, Ltd Bus 007 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 006 Device 002: ID 060b:2231 Solid Year KSK-6001 UELX Keyboard Bus 006 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub I added: hid usbhid hid_generic ohci_pci To /etc/initramfs-tools/modules and did update-initramfs -u, but that didn't work. I'm new to Ubuntu/Linux and any help you could provide would be greatly appreciated!
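
    A few low-level checks worth running over SSH or from another input device, as a sketch: they show whether the kernel sees the keyboard at all, which separates a driver problem from an X/input-stack problem.

        # Watch kernel messages while unplugging and replugging the keyboard
        dmesg | tail -n 20

        # Check whether X has registered the keyboard as an input device
        xinput list

        # Confirm an event node exists for it
        ls -l /dev/input/by-id/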

    Read the article

  • Getting Terms associated with a specific list item

    - by Gino Abraham
    I had a fancy requirement where I had to get all tags associated with a document set in a document library. The normal tag cloud webpart was not working when I added it to the document set home page, so I planned a custom webpart. I checked the net for a straightforward way to achieve this, but wasn't lucky enough to find anything. Since I didn't get any samples on the net, I looked into Microsoft.SharePoint.Portal.WebControls and found a solution. The SocialDataFrameManager control in 14Hive/Template/Layouts/SocialDataFrame.aspx directed me to the solution. You can get the dll from the ISAPI folder. The following code snippet can get all Terms associated with the list item, given that you have the list name and ID for the list item. using System; using System.Collections.Generic; using System.Linq; using System.Text; using Microsoft.SharePoint; using Microsoft.Office.Server.SocialData; namespace TagChecker { class Program { static void Main(string[] args) { // Your site url string siteUrl = "http://contoso"; // List Name string listName = "DocumentLibrary1"; // List Item Id for which you want to get all terms int listItemId = 35; using (SPSite site = new SPSite(siteUrl)) { using(SPWeb web = site.OpenWeb()) { SPListItem listItem = web.Lists[listName].GetItemById(listItemId); string url = string.Empty; // Based on the list type the url would be formed. Code Sniffed from Microsoft dlls :) if (listItem.ParentList.BaseType == SPBaseType.DocumentLibrary) { url = listItem.Web.Url.TrimEnd(new char[] { '/' }) + "/" + listItem.Url.TrimStart(new char[] { '/' }); } else if (SPFileSystemObjectType.Folder == listItem.FileSystemObjectType) { url = listItem.Web.Url.TrimEnd(new char[] { '/' }) + "/" + listItem.Folder.Url.TrimStart(new char[] { '/' }); } else { url = listItem.Web.Url.TrimEnd(new char[] { '/' }) + "/" + listItem.ParentList.Forms[PAGETYPE.PAGE_DISPLAYFORM].Url.TrimStart(new char[] { '/' }) + "?ID=" + listItem.ID.ToString(); } SPServiceContext serviceContext = SPServiceContext.GetContext(site); Uri uri = new Uri(url); SocialTagManager mgr = new SocialTagManager(serviceContext); SocialTerm[] terms = mgr.GetTerms(uri); foreach (SocialTerm term in terms) { Console.WriteLine(term.Term.Labels[0].Value ); } } } Console.Read(); } } } Reference dlls added are Microsoft.SharePoint, Microsoft.SharePoint.Taxonomy, Microsoft.Office.Server and Microsoft.Office.Server.UserProfiles from the ISAPI folder. This logic can be used to make a custom tag cloud webpart by taking code from the OOB tag cloud, so that you can have your webpart anywhere in the site and still get tags added to a specific library/list. Hope this helps someone.

    Read the article
