Search Results

Search found 4690 results on 188 pages for 'ran'.


  • After upgrading to trusty, ALSA midi connection (aconnect) doesn't seem to work right

    - by SougonNaTakumi
    Previously in kubuntu 13.10 I was able to open vmpk or plug in a midi keyboard, and provided that TiMidity was running in server mode, I could run aconnect [keyboard port (129:0 for vmpk)] 14:0 aconnect 14:0 128:0 and I could play the keyboard and get sound. But now, a while after upgrading to trusty, I tried to do that, and didn't get any sound. TiMidity itself still plays files fine, but if I try to play them with aplaymidi, I still just get silence. Oddly, the midi files are clearly being read. When I ran (where 130:0 was vmpk's input port) aplaymidi -p 130:0 ~/path/to/midi.mid vmpk was highlighting notes on the piano as if it were playing the midi. One time I tried this, TiMidity (?) very briefly played a fraction of a second of the first chord of my song before everything went silent and vmpk just highlighted the first voice on the keyboard as usual. Now the weirdest part of this is that probably about 40% of the time, when I've played at least one note with either aplaymidi or vmpk, when I run aconnect -x I get a sudden burst of a note or chord from my speakers (that is, if I played one note, I get a note; if I played multiple sequential notes, they turn into a chord), as if the notes were being queued up but not being played and that somehow liberated them. I have no idea what's going on there. A little while ago I remember having a problem with Audacity playing wav files sped up and also locking up if I tried to pause it, which it stopped doing when I set the audio devices to the actual audio devices rather than pulse. But now when I checked again, it's doing the opposite: it won't play audio at all and/or acts weirdly if I don't set the audio devices to pulse, and either way will very occasionally randomly do the speeding up thing regardless. Oddly in the midst of what's looking like a pretty screwed up sound system, sound in VLC and Firefox has been working fine and if I play a wav file with aplay ~/path/to/sound.wav that works fine too. Any idea what I could do to figure out what's wrong with ALSA and/or fix it?
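
    A minimal diagnostic sketch for re-checking the routing, assuming TiMidity is started in ALSA sequencer mode and using the port numbers quoted above (they may differ on your system):

      timidity -iA &          # start TiMidity as an ALSA sequencer client (usually ports 128:0-128:3)
      aconnect -l             # list all sequencer ports and existing connections
      aconnect 129:0 128:0    # route vmpk's output straight into TiMidity, bypassing MIDI Through (14:0)
      aplaymidi -l            # list the ports aplaymidi can see
      aplaymidi -p 128:0 ~/path/to/midi.mid   # play directly into TiMidity; silence here points at TiMidity/ALSA rather than the routing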

    Read the article

  • How to disable an "always there" program if it isn't in the processes list?

    - by rumtscho
    I have Crashplan and it is constantly running in the background and making backups every 15 minutes. It caused some problems with the backup target folders, so I want it to be inactive while I am making changes to these folders. I started the application itself, but could not find some kind of "Pause" button. So I decided to just stop its process. I first tried the lazy way - the system monitor in the Gnome panel has a "Processes" tab - but didn't find it listed there. Then I did a sudo ps -A and read through the whole list. I don't recognize everything on the list (many process names are self-explaining, like evolution-alarm, but I don't recognize others like phy0) but there was nothing which sounded even remotely like crashplan. But I know that there must have been a process belonging to Crashplan running at this time, because the main Crashplan window was open when I ran the command. Do you have any advice how to stop this thing from running? The best solution would involve temporary preventing it from loading on boot too, since I may need to reboot while doing the maintenance there.
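
    One hedged way to find and pause it from a terminal: the CrashPlan engine normally runs inside a Java process, so it may not show up under its own name in ps -A. The service and script names below are assumptions; adjust them to whatever the installer actually put on your system:

      ps aux | grep -i crashplan             # the backup engine usually appears as a Java process with CrashPlan in its arguments
      sudo service crashplan stop            # stop the engine (init script name is an assumption)
      sudo update-rc.d crashplan disable     # keep it from starting at the next boot during maintenance
      sudo update-rc.d crashplan enable      # re-enable it afterwards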

    Read the article

  • Antenna Aligner Part 6: Little Robots

    - by Chris George
    A week ago I took temporary ownership of an HTC Desire S so that I could start testing my app under Android. Support for Android was not in my original plan, but when Nomad added support for it recently, I started thinking, why not! So with some trepidation, I clicked the Build for Android button on the Nomad toolbar... nothing. Hmm... that's not right, I was expecting something to build. After a bit of faffing around I finally realised that I hadn't read the text on the Android setup page properly (yes that's right, RTFM!), and I needed a two-part application identifier, separated by a dot. I did this (not sure what the two-part thing is all about, that's one for my list to investigate!). After making the change, the Android build worked and created the apk file. I uploaded this to the device and nervously ran it... it worked!!! Well, more or less! So, there was no splash screen, but this was no surprise because I only have the iOS icons and splash screen in my project at the moment. What was more concerning was that the compass update didn't seem to be working. I suspect this is a result of using an iOS-specific option in the Phonegap compass watcher. Another thing to investigate. I've also just noticed that the css gradient background hasn't worked either... These issues aside, it was actually more successful than I was expecting, so happy days! Right, let's get Googling... Next time: Preparing for submission to the App Store! :-)

    Read the article

  • What Does Installing Ubuntu "Alongside" Windows Entail?

    - by Soft Skeleton
    I recently posted a question about an error I was receiving trying to access Ubuntu from the boot menu. I am using Windows 7 and Ubuntu 12.x (I THINK, because I haven't accessed it in over a year due to being unable to run an important program for one of my classes on Ubuntu). On another laptop, I partitioned the hard drive and installed Windows and Ubuntu on the partitions. On this laptop, I simply installed Ubuntu from Windows, picking the option "alongside Windows", and didn't partition my hard drive manually. I was under the impression "alongside" entailed that Ubuntu would partition my hard drive, and that if I were to return my Windows partition to factory settings it would not affect the Ubuntu partition. However, given my current problem, I am wondering if I was mistaken in this assumption. When installing Ubuntu from Windows, selecting "alongside" Windows as the option from the Ubuntu installer, does that simply install Ubuntu within the Windows partition, so that returning it to factory settings would wipe out anything I had on the Ubuntu OS as well? Ubuntu is still in the boot menu as an option, but when I try to access it it says the drive is "corrupt" and wubi is mentioned in the error. I additionally tried to download a program run from Windows to investigate partitions and there was no sign of my Ubuntu partition viewable from Windows. Is it possible Windows just can't see it? Any insight, corrections or answers are appreciated.
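
    If it helps, one quick way to tell whether this was a Wubi install (Ubuntu living inside a file on the Windows partition) rather than a separate partition is to look for the Wubi disk image from a live USB session. A hedged sketch; /dev/sda2 is only a placeholder for whatever the Windows partition actually is:

      sudo mkdir -p /mnt/win
      sudo mount -o ro /dev/sda2 /mnt/win     # mount the Windows (NTFS) partition read-only
      ls -lh /mnt/win/ubuntu/disks/           # a Wubi install keeps root.disk (and swap.disk) here
      sudo parted -l                          # a true dual-boot would instead show a separate ext3/ext4 partition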

    Read the article

  • Help with dual booting Windows 8.1 Professional and Ubuntu 13.10

    - by user1292548
    I recently installed a clean version of Windows 8.1 Professional on my Lenovo Y500 (with a Samsung 256GB 840 Pro SSD). I have Windows all set up and running normally. I am trying to dual boot Windows 8.1 and Ubuntu 13.10, but the installation procedure doesn't allow me to "Install alongside..." and doesn't show my SSD partitions correctly when I choose the "Something Else" option. I have created a 25GB partition of free space in the Windows disk manager, but on the Ubuntu installation screen it shows the whole drive as free space. I have tried installing with a burned .ISO disk and with a bootable USB; the results are the same for both. Windows Disk Management screen: http://imageshack.us/a/img855/9504/59zu.jpg The Ubuntu installation screen: http://imageshack.us/a/img62/2712/9g6i.jpg I've run into this problem before when trying to dual boot Ubuntu and Windows 7 Professional a month ago, but I gave up and never resolved the issue. --EDIT-- I tried what Eero Aaltonen suggested, and this is my result:

      ubuntu@ubuntu:~$ sudo parted /dev/sda print
      Warning: /dev/sda contains GPT signatures, indicating that it has a GPT table.
      However, it does not have a valid fake msdos partition table, as it should.
      Perhaps it was corrupted -- possibly by a program that doesn't understand GPT
      partition tables. Or perhaps you deleted the GPT table, and are now using an
      msdos partition table.
      Is this a GPT partition table? Yes/No? yes
      Model: ATA Samsung SSD 840 (scsi)
      Disk /dev/sda: 256GB
      Sector size (logical/physical): 512B/512B
      Partition Table: gpt

      Number  Start  End  Size  File system  Name  Flags

      ubuntu@ubuntu:~$
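
    The warning plus the empty partition list suggests the live session is seeing leftover GPT signatures while Windows itself is using an ordinary MBR table (or the reverse). A hedged clean-up sketch from the live session, using fixparts from the gdisk package; back up first, treat /dev/sda as a placeholder, and do not write anything if gdisk shows Windows is genuinely installed in EFI/GPT mode:

      sudo apt-get install gdisk      # provides gdisk and fixparts in the live session
      sudo gdisk -l /dev/sda          # reports whether MBR, GPT, or both signatures are present
      sudo fixparts /dev/sda          # offers to remove stray GPT data; 'w' writes the change
      sudo parted /dev/sda print      # re-check that the installer's tools now see the real partitions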

    Read the article

  • User "oracle" unable to start or stop listeners

    - by user12620111
    Recently ran into a problem where user "oracle" was unable to start or stop listeners:

      oracle$ srvctl stop listener
      PRCR-1065 : Failed to stop resource ora.LISTENER.lsnr
      CRS-0245:  User doesn't have enough privilege to perform the operation
      CRS-0245:  User doesn't have enough privilege to perform the operation
      PRCR-1065 : Failed to stop resource ora.LISTENER_IB.lsnr
      CRS-0245:  User doesn't have enough privilege to perform the operation
      CRS-0245:  User doesn't have enough privilege to perform the operation

    The system is currently "fixed":

      oracle$ srvctl start listener
      oracle$ srvctl status listener
      Listener LISTENER is enabled
      Listener LISTENER is running on node(s): etc9cn02,etc9cn01
      Listener LISTENER_IB is enabled
      Listener LISTENER_IB is running on node(s): etc9cn02,etc9cn01
      oracle$ srvctl stop listener
      oracle$ srvctl status listener
      Listener LISTENER is enabled
      Listener LISTENER is not running
      Listener LISTENER_IB is enabled
      Listener LISTENER_IB is not running
      oracle$ srvctl start listener

    How it was "fixed". Before:

      # crsctl status resource ora.LISTENER.lsnr -p | grep ACL=
      ACL=owner:root:rwx,pgrp:root:r-x,other::r--
      # crsctl status resource ora.LISTENER_IB.lsnr -p | grep ACL=
      ACL=owner:root:rwx,pgrp:root:r-x,other::r--

    "Fix":

      # crsctl setperm resource ora.LISTENER.lsnr -o oracle
      # crsctl setperm resource ora.LISTENER.lsnr -g oinstall
      # crsctl setperm resource ora.LISTENER_IB.lsnr -g oinstall
      # crsctl setperm resource ora.LISTENER_IB.lsnr -o oracle

    After:

      # crsctl status resource ora.LISTENER.lsnr -p | grep ACL=
      ACL=owner:oracle:rwx,pgrp:oinstall:r-x,other::r--
      # crsctl status resource ora.LISTENER_IB.lsnr -p | grep ACL=
      ACL=owner:oracle:rwx,pgrp:oinstall:r-x,other::r--

    I may never know how the system got into this state.

    Read the article

  • Install on Acer Aspire 4752

    - by user216962
    I am at my wits' end with this computer. I bought an Acer Aspire 4752 with a fully loaded version of Windows 7 on it. I prefer Ubuntu, so I began to install 14.04 from USB. Got the error:

      [Errno 5] Input/output error
      This is often due to a faulty CD/DVD disk or drive, or a faulty hard disk. It may help to
      clean the CD/DVD, to burn the CD/DVD at a lower speed, to clean the CD/DVD drive lens
      (cleaning kits are often available from electronics suppliers), to check whether the hard
      disk is old and in need of replacement, or to move the system to a cooler environment.

    So I tried a different USB stick, same error. Tried different versions of Ubuntu, got the same error. I've used Startup Disk Creator and UNetbootin to make the USB boot devices. I can boot with the USB drive and run Ubuntu that way. I even checked the hard drive using the tools in Ubuntu. Everything was fine, except it said the hard drive was hot. I tried a different hard drive. Got the same error as above. I ran a test with memtest86, everything was fine. No matter what I do, using the USB gives me the Errno 5 error. I then switched to using DVDs. Now I keep getting an uncompression error when installing Ubuntu 14.04 or 12.04. I can't figure out for the life of me why I get nothing but errors. Can anyone help?
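
    Errno 5 during installation very often points at a bad copy of the image or the written media rather than the machine itself. A couple of hedged sanity checks before replacing more hardware (the file name is a placeholder for whichever image was downloaded):

      sha256sum ubuntu-14.04-desktop-amd64.iso    # compare against the official SHA256SUMS for that release
      # then, at the USB/DVD boot menu, run "Check disc for defects" before starting the install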

    Read the article

  • Baseline for GIS Applications

    - by Geertjan
    The application I introduced here yesterday can best be understood via its author's explanation: "As I developed several different WorldWind-based applications, I noticed that they all started out the same. Terramenta was born so I wouldn't have to recreate the baseline every time, I could just provide NetBeans plugin modules to introduce the new features required by different projects." So, to try it out for myself, I checked out the sources from the Mercurial repo today, built them, and ran them. hg clone https://bitbucket.org/heidtmare/terramenta On Windows, things worked fine, on Ubuntu they didn't because the relevant native libraries aren't provided yet out of the box. Here's the result: The above provides the WorldWind globe, together with all the standard options, e.g., for showing names and other WorldWind features, together with several features that I don't understand yet, such as tools for creating shapes and a recorder for replaying sequences. The complete application is like this, i.e., one single functionality module is provided, which exposes several API packages that can be extended: It would really be cool if the above module could also be added to a Maven-based application via a reference to a Maven repository, in the way that Timon Veenstra and the AgroSense team have made available their GeoViewer. One cool thing from the GeoViewer solution is the Flamingo menubar, which I added to Terramenta by simply putting the dependency below into the application POM: <dependency>    <groupId>nl.cloudfarming.client</groupId>    <artifactId>menu</artifactId>    <version>1.0.24</version></dependency> The result, without doing anything other than the above: I am looking forward to helping to document the use cases and developer scenarios for Terramenta! Something like this, created by Timon to demonstrate the GeoViewer use case would be cool to have: http://java.net/projects/agrosense/pages/ExampleGeoviewerNormal

    Read the article

  • virtual box upgrade

    - by Husni
    I upgraded VirtualBox from 4.1 to 4.2. Whenever I want to load my Windows XP vdi, it gives me the following error:

      "Kernel driver not installed (rc=-1908)
      The VirtualBox Linux kernel driver (vboxdrv) is either not loaded or there is a
      permission problem with /dev/vboxdrv. Please reinstall the kernel module by executing
      '/etc/init.d/vboxdrv setup' as root. If it is available in your distribution, you
      should install the DKMS package first. This package keeps track of Linux kernel
      changes and recompiles the vboxdrv kernel module if necessary."

    I ran the suggested step to reinstall the kernel module, and the log file is as follows:

      Makefile:181: * Error: unable to find the sources of your current Linux kernel. Specify KERN_DIR= and run Make again. Stop.
      Makefile:181: * Error: unable to find the sources of your current Linux kernel. Specify KERN_DIR= and run Make again. Stop.
      Makefile:181: * Error: unable to find the sources of your current Linux kernel. Specify KERN_DIR= and run Make again. Stop.

    I'm still unable to run my Windows XP vdi file. Anyone have a clue?
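
    The Makefile errors mean the module build cannot find headers for the running kernel. A hedged sketch of the usual fix, assuming a stock Ubuntu kernel:

      sudo apt-get install dkms build-essential linux-headers-$(uname -r)
      sudo /etc/init.d/vboxdrv setup      # rebuild and load the vboxdrv kernel module
      lsmod | grep vboxdrv                # confirm the module is now loaded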

    Read the article

  • Offshoring: does it ever work?

    - by DanSingerman
    I know there has been a fair amount of discussion on here about outsourcing/offshoring, and the general opinion seems to be that at best it is difficult, and at worst it fails. I have direct experience of offshoring myself; a previous company where I was a dev manager wanted to send some development offshore, and we ran a pilot scheme to see how well it would work. Of course it was a complete failure, although it is not completely clear to me whether this was down to the offshore devs being less talented, the process, or other factors (no doubt it was really a combination). I can see as a business how offshoring looks attractive (much lower day rate), but as far as I can see, the only way it could possibly work is if you do exceptionally detailed design up front, with incredibly detailed specifications; and by the time you have invested in producing that, you have probably spent as nearly as much as if you had written the actual code locally (which I think is an instance of No Silver Bullet) So, what I want to know is, does anyone here have any experience of offshoring actually working ever? Especially if there are any success stories of it working in a semi-agile way? I know there are developers here from all over the World; has anyone worked on an offshore project they consider successful?

    Read the article

  • What went wrong with my curl install?

    - by Danjah
    I'm fresher than the prince to Linux, I've been following the instructions here: http://chrisfulstow.com/running-node-js-on-windows-with-virtualbox-and-ubuntu (the link tells what I am generally trying to do). I'm all up and running in VBox, and am at the curl install part, I may have done the curl part a week ago I forget. So I ran this command anyway: danjah@danjah-VirtualBox:~$ sudo apt-get install curl Result: [sudo] password for danjah: Reading package lists... Done Building dependency tree Reading state information... Done curl is already the newest version. 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. Then: $ curl http://npmjs.org/install.sh | sudo npm_install=rc sh Result: fetching: { gzip: stdin: unexpected end of file /bin/tar: Child returned status 1 /bin/tar: Error is not recoverable: exiting now It failed Should I be concerned? How can I test curl? How can I avoid these situations? Perhaps there's a generic way of checking to see if I've already installed packages/etc? Case specific answers and general advice most appreciated. cheers, d
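
    For what it's worth, curl itself looks fine; the gzip error comes from whatever the pipe downloaded. A hedged way to check both sides is to save the script to a file first and inspect it before running anything (the original command did not follow redirects, which -L adds):

      curl --version                                      # confirm curl runs at all
      curl -L -o install.sh http://npmjs.org/install.sh   # download to a file, following any redirects
      file install.sh                                     # should report a shell script, not HTML or an empty file
      less install.sh                                     # read it before piping anything into sudo
      # if it looks sane: sudo npm_install=rc sh install.sh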

    Read the article

  • I can't enable extra effects in Ubuntu 10.10. Please help?

    - by jasoncruz98
    I installed Ubuntu 10.10 alongside Ubuntu 11.10 to use an older version of Compiz. On Ubuntu 11.10, Compiz was enabled by default and I didn't need to use any graphics driver to enjoy the effects. All I had to do was install CompizConfig Settings Manager and enable those extra effects. That was Compiz 0.9.6. Now, after installing Ubuntu 10.10, when I first logged in, the graphics were slow. When I dragged a window from one end of the screen to the other, the whole screen would blur up and pixelate and it would be very laggy. I tried going to System Preference Appearance and selecting Extra effects on the Visual effects tab, but all I got was "Desktop effects could not be enabled". I don't know whether I should install the Additional drivers (proprietary) because my Internet is slow and it would take a long time. Furthermore, in Ubuntu 11.10, after I installed the proprietary graphics driver, I immediately went into fallback mode and wasn't even offered an option to set my desktop session to Ubuntu 3D. I didn't need the driver to run Compiz in Ubuntu 11.10, it just ran so smoothly. But in Ubuntu 10.10, everything is so laggy. Should I install the ATI/AMD Proprietary FGLRX Graphics Driver for Ubuntu 10.10 to enable extra effects? Or is there something else wrong with my system? Here is the output of lspci -nn | grep VGA 00:02.0 VGA compatible controller [0300]: Intel Corporation Sandy Bridge Integrated Graphics Controller [8086:0116] (rev 09) 01:00.0 VGA compatible controller [0300]: ATI Technologies Inc Device [1002:6760] Here is the output of the same command, but in Ubuntu 11.10 (in this case the one which is correct, because I don't have the Sandy Bridge Integrated graphics controller) 00:02.0 VGA compatible controller [0300]: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller [8086:0116] (rev 09) 01:00.0 VGA compatible controller [0300]: ATI Technologies Inc NI Seymour [AMD Radeon HD 6470M] [1002:6760]
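
    Before downloading a large proprietary driver, it may be worth checking whether the open-source driver is giving you 3D acceleration at all, and which of the two GPUs is actually in use. A short hedged sketch (glxinfo comes from the mesa-utils package):

      sudo apt-get install mesa-utils
      glxinfo | grep -i "direct rendering"     # should say "Yes" for Compiz effects to be usable
      glxinfo | grep -i "renderer"             # shows which driver/GPU is doing the rendering
      lspci -nnk | grep -iA3 vga               # shows the kernel driver bound to each graphics controller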

    Read the article

  • Ubuntu running like mud, system hangs and looks like its running at 3FPS

    - by user240803
    The system specs:

      AMD XP 3200+
      1GB DDR 333 RAM
      160 GB IDE hard drive
      NVIDIA FX 5500 AGP video card
      Compaq Presario sr1230nx

    The system takes forever to boot and when it does it runs like total mud, reminds me of an overloaded system that has too many windows open or something... fresh install. Tried so many things, like new memory (it had a 512 stick) and a new video card (the onboard 8MB SiS sounded like the problem, but wasn't... it has gotten a little faster now but not by much). Tried to disable all the things on the motherboard that could be, with no help... this machine runs Windows XP, 7, and 8 JUST FINE!!! I mean, for a single core CPU, Win 8 runs AWESOME!!! BUT I already have a gaming desktop that has Windows 8 Pro; I want a Linux machine to get some time in and learn a few things... I want Ubuntu because of the Software Center, so I can install things I want until I am familiar with the command line. I've worked on computers since I was 12; I remember some of the DOS commands but I guess these are a little different... anyway, any ideas? I've also tried both drivers for the NVIDIA card and that didn't help either... it's not the card, since it did this with both the NVIDIA card and the SiS onboard... it also does this in live mode with the USB, so I don't think it's the hard drive... I'm running out of options of hardware to try... I know this version of Linux works because I've booted it on other machines and it ran great... what is with this Compaq? Here is a vid of exactly what it's doing... let me know if you need anything else, I am right by the computer tonight so ask anything... http://youtu.be/-P-XNo81098

    Read the article

  • Wireless BCM4311 driver install

    - by user113910
    Exasperated with Ubuntu 12.04 and the BCM4311 driver. HP 2133 (was SUSE 10). Installed 12.04 in March. It ran well until November (with no updates), then updates killed my wireless controller. Trawled forums and other sites for a fix. Tried sudo apt-get-ing and gedit-ing. Finally got it working (on full update with the STA driver). Whew! Thought that was it, only to fail again after 3-4 days. Tried all the fixes again, but sudo apt-get-ing and gedit-ing does not fix it. I'm not without experience either: first 'boot-loader' I/P from a 20-switch panel, seq to 4096-bit core store, run to I/P larger processes from a paper tape reader. Gone through acoustic-coupled networks-to-billboards (before 'the net'), pre-DOS, DOS, Windows 3.1 and all the others. This is a tough problem. Any advice?
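
    If the card was working with the STA (wl) driver before the updates, one hedged sequence for rebuilding it against the current kernel (a wired or USB-tethered connection is assumed for the download; ignore modprobe errors for modules that are not loaded):

      sudo apt-get install --reinstall bcmwl-kernel-source   # rebuilds the wl module for the running kernel
      sudo modprobe -r b43 ssb wl                            # unload anything that may be claiming the card
      sudo modprobe wl
      rfkill list                                            # check nothing is soft/hard blocking the radio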

    Read the article

  • Gaming Community CMS, with forum integration [closed]

    - by Tillman32
    Possible Duplicate: Which Content Management System (CMS) should I use? I've had a simple website that I coded myself for a while now, the site is a gaming community. It's very forum and news driven. It was a HORRIBLE idea to take on coding this thing myself. Although we've used it for about a year now, we're just getting too big, and I need to streamline our work. I need writers to post news, etc. I've been doing it through code. ( A year ago I thought it would be a cool idea ) Anyway, I've been messing with just about every CMS out there, and I'm struggling to get something that I really like. The main issue I'm facing, is a good news system, and good forum integration. I'm sort of picky when it comes to looks, its a curse. Reading on here, I see a lot of people saying Drupal is the best for the 3 things I need, community interaction, and forums. I think the main issue that I ran into with drupal, was ease of use, and themes. I am not a web designer, and I need a good theme. For an idea of what I'm looking for, go check out http://www.clgaming.net, they have forums integrated, a nice news area on home page/news section, and nice user accounts. It looks very professional, and I doubt I'll get close to that with a free theme, but their functionality is exactly what I need. Any ideas would be greatly appreciated.

    Read the article

  • The Lease Standard Train is Back on Track

    - by Theresa Hickman
    As I was walking to the elevator, I ran into Seamus Moran, our resident accounting expert. Me: "Hi Seamus, where have you been? You don't write, you don't call, and you don't send me flowers. I've been hearing more and more about the Lease Accounting topic. It looks like Congress is weighing in on it too and putting heat on FASB. According to a recent article in Reuters "representatives Brad Sherman, a Democrat, and Republican John Campbell, have written to the U.S. Financial Accounting Standards Board warning of dire economic fallout from a plan to have companies put leases on their balance sheets." Here's what Seamus had to say: Yes, but there have been some recent developments. The FASB and IASB cleared a logjam, resolved a final "content of the standard" issue, and articulated a way to move forward on Leases last Wednesday. It looks like the Lease Standard Train is back on track. We've just had a briefing from PwC. The Lease timeline now looks like this:

      Now to June 2012: The staff will write up the decisions
      June 2012: Boards will meet on "logistical" issues (glossed over)
      Oct, Nov, most likely December 2012: A New Lease Exposure Draft will be crafted
      January – April 2013: Public Comment period begins
      April to September 2013: Everyone to digest the comments and draft the final standard
      End of 2013 (Probably more like Early 2014): Publish the new Lease Accounting Standards
      2015: Retroactive reporting
      2017: New standard is effective

    It seems that leases under one year will be treated as "rent expense". If it doesn't cross two (annual) balance sheets, it doesn't really matter. This is good news in terms of clarity, resolution, and moving forward on one of the last remaining items to converge the IFRS and U.S. GAAP standards. There are ambiguities, issues, concerns, et cetera, of course, and there are bright lines ("rules") that bother the "no rules, please" people and ambiguities ("judgments") that bother the "clarity, please" people, but at least the train isn't falling off the tracks.

    Read the article

  • HP Envy dv6t-7300: Disabled WiFi through button and can't enable it anymore

    - by Mateus B. Cassiano
    Well, I have a HP Envy dv6t-7300 laptop that came with a Ralink RT5390 WiFi card. Everything was working perfectly, and eventually I press the WiFi button in my keyboard to toggle the card on/off. Until today, all worked right: if the wifi was off (wifi LED amber) and I press the wifi button, after a few seconds the LED turn white and everything works. If I repeat the process, the wifi LED turn amber and the card get disabled, but now, I can't turn it on anymore. running sudo rfkill list all I get: 0: phy0: Wireless LAN Soft blocked: no Hard blocked: no 1: hp-wifi: Wireless LAN Soft blocked: no Hard blocked: yes So, I ran sudo rfkill unblock all but nothing changed. As a side note, if I run sudo ifconfig wlan0 up, the indicator LED gets white (indicating that the card was enabled), but Ubuntu still say that the card is blocked by hardware. Extra information: the card works without issues in windows and in Ubuntu installer (booting from a live CD). I'm using the card out-of-box, using the drivers already included in Ubuntu 12.10. The module rt2800pci is loaded and working fine, not blacklisted, etc, etc. The card and the button toggle worked flawlessly until today, when I toggled it off and can't turn it on anymore... The problem is back, but in a different manner: if I don't press the wifi key a few times during the grub loading, in the login screen the wifi button will be ambar (disabled), pressing it will toggle it white (enabled) or ambar (disabled) again, but ubuntu still says that the network card was disabled by hardware and doesn't connect... In other words, if I don't press the WiFi button a few times when Ubuntu is booting, it will be stuck with the "network card was disabled by hardware" message, even if the light is white (enabled). Any clue? Maybe a error in some startup script or config file?
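
    The "Hard blocked: yes" on the hp-wifi entry comes from HP's platform hotkey driver (hp_wmi) rather than from the Ralink card itself. A hedged workaround when the button state gets stuck is to reload that module, or blacklist it so only the card's own rfkill switch remains; treat this as a sketch rather than a guaranteed fix:

      rfkill list                            # note which entry reports "Hard blocked: yes"
      sudo modprobe -r hp_wmi                # removing the module drops the hp-wifi rfkill switch
      sudo modprobe hp_wmi                   # reloading it usually re-reads the real button state
      echo "blacklist hp_wmi" | sudo tee /etc/modprobe.d/blacklist-hp-wmi.conf   # optional, makes it permanent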

    Read the article

  • Install a i386 printer driver into an amd64 distribution or how can I find a good printer based on features?

    - by Yanick Rochon
    Hi, I just bought a Lexmark Interpret S408 all-in-one printer. The box said that it supported Ubuntu 8.04, but I told myself it should work with Lucid... well no. The only driver I have found is for i386 while I have a amd64 image installed; the architecture is incompatible. So, the quesiton is : Is it possible to install that driver anyway, somehow? Or do I need to take that printer back to the store and buy another one? If the latter is the only alternative, I need a printer that has wireless connection capability can do color printing is of good price (less than $200 CAD) Thank you for your answers, help, and tips. ** UPDATE ** The driver was given in the form of deb package (for Debian distributions) and I managed to extract the actual deb package driver out of the install program. I ran sudo dpkg -i --force-all lexmark-inkjet-09-driver-1.5-1.i386.deb and the driver installed, and I was able to print something out. But that pretty much ends there; I cannot access anymore of the printer settings, etc. (i.g. scanner, fax, wifi settings, etc.) I should suffice for now as I'm satisfied with the printer's features (and size, and prince), but if I could have a full-linux-supported printer like that one, I would return this one in exchange for the other.
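
    For reference, a more targeted alternative to --force-all for an i386 .deb on an amd64 system is to override only the architecture check. A hedged sketch using the file name from above; any 32-bit libraries the driver needs still have to come from somewhere (on pre-multiarch releases such as 10.04 that usually meant ia32-libs):

      dpkg --info lexmark-inkjet-09-driver-1.5-1.i386.deb | grep Architecture   # confirm it really is i386-only
      sudo apt-get install ia32-libs                                            # 32-bit runtime libraries, if the release still ships them
      sudo dpkg -i --force-architecture lexmark-inkjet-09-driver-1.5-1.i386.deb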

    Read the article

  • IRQ Conflicts Causing Video Card and Boot Problems?

    - by sanpatricio
    tl;dr - I have 4 devices sharing 1 IRQ. Is this bad and how do I tell the BIOS to stop it? Background: I have an old Dell GX280 dual Pentium 4 that I (semi) resurrected last weekend with an installation of Ubuntu 12.04. Everything was going fine the first several hours until a problem that plagued me when WinXP was on that machine happened -- it froze. Completely froze. None of the myriad of ways I have found here on askubuntu helped me to regain control except a long-press of the power button to shut it off. Clearly, this wasn't a software/WinXP issue. After much googling, I found that hardware conflicts can often cause this sort of total lock-up and with all the odd blocks of yellow and flecks of color showing on my screen (both WinXP and Ubuntu) I figured my old GeForce 7600 was failing and causing me these odd issues. (A good canned-air dusting of the entire interior fixed the color fleck problem) Again, through much googling and numerous answers found on askubuntu, I somehow stumbled my way onto the lshw command. After going through it, line by line, I found that I have four devices sharing IRQ 16: eth0, wlan0, ide0 (DVD-RW), and my video card. In hindsight, I can recall weird instances of my Ethernet connection to another computer not working when I thought it should. I never full troubleshot those issues so it could be a coincidence. The other thing that has been plaguing me since installing Ubuntu (wasn't there during WinXP) has been periodic moments of my monitor getting no signal from Ubuntu during boot. The first couple days, it would disappear after the Dell boot screen and reappear at Ubuntu login. Now, it disappears after the Dell boot screen and doesn't return at all -- I have to hit F12 where I can load a safe mode version of Ubuntu and get more details like dmesg and lsdev. I also ran memtest86 overnight and woke up to zero errors, so failing RAM is out. Where do I go from here?
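
    Shared IRQs are normal on this kind of chipset, but you can at least see who is sharing what and whether the kernel is actually complaining about it. A short diagnostic sketch:

      cat /proc/interrupts                            # one row per IRQ; IRQ 16 will list every device on that line
      dmesg | grep -iE "nobody cared|disabling irq"   # messages the kernel logs when IRQ sharing genuinely misbehaves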

    Read the article

  • Why did the web win the space of remote applications and X not?

    - by Martin Josefsson
    The X Window System is 25 years old, it had it's birthday yesterday (on the 15'th). As you probably are aware of, one of it's most important features is the separation of the server side and the client side in a way that neither Microsoft's, Apples or Wayland's windowing systems have. Back in the days (sorry for the ambiguous phrasing) many believed X would dominate over other ways to make windows because of this separation of server and client, allowing the application to be ran on a server somewhere else while the user clicks and types on her own computer at home. This use obviously still exists, but is marginalized at best. When we write and use programs that run on a server we almost always use the web with it's html/css/js. Why did the web win, and X not? The technologies used for the web (said html/css/js) are a mess. Combined with all the back-end-frameworks (Rails, Django and all) it really is a jungle to navigate thru. Still the web thrives with creativity and progress, while remote X apps do not.

    Read the article

  • Weird 302 Redirects in Windows Azure

    - by Your DisplayName here!
    In IdentityServer I don’t use Forms Authentication but the session facility from WIF. That also means that I implemented my own redirect logic to a login page when needed. To achieve that I turned off the built-in authentication (authenticationMode="none") and added an Application_EndRequest handler that checks for 401s and does the redirect to my sign in route. The redirect only happens for web pages and not for web services. This all works fine in local IIS – but in the Azure Compute Emulator and Windows Azure many of my tests are failing and I suddenly see 302 status codes where I expected 401s (the web service calls). After some debugging kung-fu and enabling FREB I found out, that there is still the Forms Authentication module in effect turning 401s into 302s. My EndRequest handler never sees a 401 (despite turning forms auth off in config)! Not sure what’s going on (I suspect some inherited configuration that gets in my way here). Even if it shouldn’t be necessary, an explicit removal of the forms auth module from the module list fixed it, and I now have the same behavior in local IIS and Windows Azure. strange. <modules>   <remove name="FormsAuthentication" /> </modules> HTH Update: Brock ran into the same issue, and found the real reason. Read here.

    Read the article

  • Dependencies not met on 12.04?

    - by Mochan
    Now I'm very aware that there are many questions out there that are quite similar to what I'm experiencing, but I have looked through many and I have not found a suitable answer. You are welcome to suggest questions that are similar, but I doubt that it will help. Getting on to the issue at hand, whenever I do anything that involves installation, whether it be codecs for videos, new programs or whatever the latter, I always get the 'Dependencies not met' error. In addition, I also get this notification in the panel: When clicked, the menu says this: "An error occurred. Please run Package Manager from the right-click menu or run apt-get in a terminal to see what is wrong. The error message was: ' Error: Broken Count 0'. This usually means your installed packages have unmet dependencies." It gives me three items to click: Show Updates Install all updates Check for Updates And then finally: Show Notifications (with a tick) Preferences When I try 'Install all Updates' (also Check Updates Install) it says this: and also this: As well as 'Ubuntu has experienced an internal error' and 'Did this error occur when moving from one version of Ubuntu to another?' (I clicked NO, because it didn't). So I took it's advice and ran sudo apt-get install -f This is what results: Reading package lists... Done Building dependency tree Reading state information... Done Correcting dependencies... Done The following extra packages will be installed: libapt-pkg4.12:i386 The following packages will be upgraded: libapt-pkg4.12:i386 1 upgraded, 0 newly installed, 0 to remove and 87 not upgraded. 1 not fully installed or removed. Need to get 0 B/941 kB of archives. After this operation, 0 B of additional disk space will be used. Do you want to continue [Y/n]? Y E: Internal Error, No file name for libapt-pkg4.12 When running sudo apt-get update it's all fine, but running sudo apt-get install -f still results in the same thing. I really have no idea what to do... can anyone help me?
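
    The "No file name for libapt-pkg4.12" error usually means apt's own records for that package are damaged, so it cannot find the .deb it wants to unpack. A hedged workaround is to fetch the package file by hand and let dpkg install it directly (the exact version in the file name will differ):

      sudo apt-get clean
      apt-get download libapt-pkg4.12:i386      # fetch just the .deb into the current directory
      sudo dpkg -i libapt-pkg4.12_*_i386.deb    # install it directly, bypassing the broken record
      sudo apt-get install -f                   # then let apt finish the remaining upgrades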

    Read the article

  • What to do when 'dpkg --configure -a' fails with too many errors?

    - by rudivonstaden
    During an upgrade from lucid (10.04) to precise (12.04), the X session froze, and I have been trying to recover the upgrade to get a stable system. I have performed the following steps:

      1. Used ssh to log in to the stalled system over the network.
      2. Checked the contents of the /var/log/dist-upgrade directory. There was no activity on main.log, apt.log or term.log.
      3. top showed that process 'precise' was using about 3% CPU, but I could find no evidence that the upgrade process was still doing anything. 'dpkg' did not show up in top, but it came up with pgrep dpkg | xargs ps
      4. Killed the 'dpkg' and 'precise' processes.
      5. Tried to recover the upgrade by running sudo fuser -vki /var/lib/dpkg/lock; sudo dpkg --configure -a. This was partially successful (some packages were configured), but failed with the message "Processing was halted because there were too many errors." I ran the same command a few times, and each time some packages were configured but others failed.
      6. Tried running sudo apt-get -f install. It fails with similar errors to dpkg.

    The current situation is that dpkg --configure -a and sudo apt-get -f install fail with two kinds of error. Dependency issues, e.g.:

      dpkg: dependency problems prevent configuration of cifs-utils:
       cifs-utils depends on samba-common; however:
        Package samba-common is not configured yet.
      dpkg: error processing cifs-utils (--configure):
       dependency problems - leaving unconfigured

    Resource conflict, e.g.:

      debconf: DbDriver "config": /var/cache/debconf/config.dat is locked by another process: Resource temporarily unavailable

    Additionally, it seems there's a reference to potential boot problems, so I'm not keen to reboot without fixing the install first:

      dpkg: too many errors, stopping
      Processing triggers for initramfs-tools ...
      update-initramfs: Generating /boot/initrd.img-3.2.0-25-generic
      cryptsetup: WARNING: failed to detect canonical device of /dev/sda1
      cryptsetup: WARNING: could not determine root device from /etc/fstab

    So my question is, how to get a working install when dpkg --configure -a fails?
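
    One hedged way to untangle the two failure modes above: first see whether anything is still holding the debconf database (the "Resource temporarily unavailable" part), then configure the package everything else is waiting on before retrying the full run:

      sudo fuser -v /var/cache/debconf/config.dat    # show which process, if any, still holds the debconf lock
      sudo dpkg --configure samba-common             # configure the blocking package first
      sudo dpkg --configure -a                       # then retry everything else
      sudo apt-get -f install                        # and let apt clean up the remaining dependencies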

    Read the article

  • emacs keybindings

    - by Max
    I read a lot about vim and emacs and how they make you much more productive, but I didn't know which one to pick. Finally when I decided to teach myself common lisp, the decision was straight forward: everybody says that there's no better editor for common lisp, than emacs + slime. So I started with emacs tutorial and immediately I ran into something that seems very unproductive to me. I'm talking about key bindings for cursor keys: forward/backward: Ctrl+f, Ctrl+b up/down: Ctrl+p, Ctrl+n I find these bindings very strange. I assume that fingers should be on their home rows (am I wrong here?), so to move cursor forward or backward I should use my left index finger and for up and down right pinky and right index fingers. When working with any of Windows IDEs and text editors to navigate text I usually place my right hand in a position so that my thumb is on the right ctrl and my index, ring and middle fingers are on the cursor keys. From this position it is very easy and comfortable to move cursor: I can do one-character moves with my 3 right fingers, or I can press ctrl with my right thumb and do word-moves instead. Also I can press shift with my left pinky and do single-character or word selections. Also it is a very comfortable position to reach PgUp, PgDn, Home, End, Delete and Backspace keys with my right hand. So I have even more navigation and selection possibilities. I understand that the decision not to use cursor keys is to allow one to use emacs to connect to remote terminal sessions, where these keys are not supported, but I still find the choice of cursor keys very unfortunate. Why not to use j, k, i, l instead? This way I could use my right hand without much finger stretching. So how is emacs more productive? What am I doing wrong?

    Read the article

  • Why does my root filesystem keep becoming read-only?

    - by Scott Severance
    I've lately been having an issue with my root filesystem becoming readonly. It happens some amount of time after boot. I don't know exactly when it happens, as I don't usually notice it until something such as suspending the computer or printing fails. It seems to be fairly random. Since most of my system is on that partition, I can't re-mount it without rebooting. After this happens, the system runs a fsck. Sometimes it prompts to fix problems; other times it apparently finds none. To troubleshoot, I've searched through the logs but found nothing relevant. This might be due in part to not knowing when the actual errors took place. The filesystem is apparently good to begin with, as when fsck runs its fixes it doesn't report any errors. I've scanned the disk with SpinRite. A while ago, SpinRite found and recovered from some bad sectors on the hard drive. I ran a level 4 scan (a thorough scan) after this probem appeared, but SpinRite found nothing. The SMART data reports that the disk is OK with 63 bad sectors. The number of bad sectors hasn't changed recently. I realize that the disk isn't in the best of conditions, and I have complete backups in case of catastrophic failure. Yet the lack of errors in the logs, combined with SpinRite's test results and the unchanged SMART data makes me think that this problem has some cause other than disk failure. Other than disk failure, what could cause my symptoms?
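
    A few hedged checks that narrow down why the kernel is flipping the root filesystem to read-only (device names below are placeholders for the actual root device; smartctl comes from the smartmontools package):

      dmesg | grep -iE "remount|ext4|i/o error"            # the kernel logs its reason when it remounts a filesystem read-only
      grep errors= /etc/fstab                              # errors=remount-ro is the usual policy that triggers this
      sudo tune2fs -l /dev/sda1 | grep -iE "state|error"   # filesystem state and accumulated error counts
      sudo smartctl -a /dev/sda                            # full SMART detail, including pending/reallocated sector counts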

    Read the article
