Search Results

Search found 27870 results on 1115 pages for 'standard output'.


  • Ubuntu sound volume is 40% lower than in Windows

    - by ncomx
    I have 2x 2.1 speakers connected to the computer where I have Ubuntu 12.04 installed. On the software side I've set all the volume controls to 100% with the alsamixer program. The speakers have their own volume control; keeping that at the same level and switching between Ubuntu and Windows (XP and 7), the output volume on Windows is at least 40% higher. Even with the Windows volume control at 50% (and without touching the speakers' own control) it's still much louder than the sound on Ubuntu. Why could this be happening? Are there some alternative sound drivers (other than the default ones) I could test to see if they make a difference? Some info about the card:

        root:$ cat /proc/asound/cards
        0 [PCH     ]: HDA-Intel - HDA Intel PCH
                      HDA Intel PCH at 0xfbff4000 irq 55
        1 [Generic ]: HDA-Intel - HD-Audio Generic
                      HD-Audio Generic at 0xfbcfc000 irq 56

        root:$ lspci | grep -i audio
        00:1b.0 Audio device: Intel Corporation 6 Series/C200 Series Chipset Family High Definition Audio Controller (rev 04)
        02:00.1 Audio device: Advanced Micro Devices [AMD] nee ATI Cayman/Antilles HDMI Audio [Radeon HD 6900 Series]

    I think the one I am using is the Intel one; the other seems to be from the VGA card, which is an ATI Radeon 6950. Running gstreamer-properties and switching between alsa, oss, ossv4 and pulseaudio doesn't seem to make any difference.
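    A hedged first check, in case a per-channel control is capping the Intel card: alsamixer hides some controls by default, while amixer can list and raise every simple control on card 0 (the PCH device above). The control names below are the usual ones, not confirmed from this machine:

        amixer -c 0 scontrols              # list every mixer control on the Intel card
        amixer -c 0 sget Master            # inspect the current level
        amixer -c 0 sset PCM 100% unmute   # PCM often sits lower than Master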


  • PHP include_path doesn't work

    - by 50ndr33
    I have the documents for http://www.example.com/ in /home/www/example.com/www, running on Debian Squeeze:

        /home/www/example.com/
            www/
                index.php
            php/
                include_me.php

    In php.ini I've uncommented and changed the include path to:

        include_path = ".:/home/www/example.com"

    In index.php (in www) I have require_once("/php/include_me.php"). The output I am getting from PHP is:

        Warning: require_once(/php/include_me.php) [function.require-once]: failed to open stream:
        No such file or directory in /home/www/example.com/www/index.php on line 2
        Fatal error: require_once() [function.require]: Failed opening required '/php/include_me.php'
        (include_path='.:/home/www/example.com') in /home/www/example.com/www/index.php on line 2

    As you can see, the include_path is set correctly according to the error. But if I do require_once("../php/include_me.php"); it works, so something must be wrong with the include_path. Does anyone know what I can do to fix it?
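    A hedged observation, easy to verify from a shell: a path beginning with "/" is absolute, and PHP does not consult include_path for absolute paths, which would explain both errors. Dropping the leading slash makes the lookup go through include_path; a quick test outside the web server:

        # relative path, so include_path (".:/home/www/example.com") is searched
        php -r 'set_include_path(".:/home/www/example.com");
                require_once "php/include_me.php";'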


  • fsck: FILE SYSTEM WAS MODIFIED after each check with -c, why?

    - by Chris
    I use a script to partition and format CF cards (connected with a USB card writer) in an automated way. After the main process I check the card again with fsck. To check bad blocks I also tried the '-c' switch, but I always get a return value != 0 and the message "FILE SYSTEM WAS MODIFIED" (see below). I get the same result when checking the very same drive several times... Does anyone know why a) the file system is modified at all and b) why this seems to happen every time I check and not only in case of an error (like bad blocks)? Here's the output:

        linux-box# fsck.ext3 -c /dev/sdx1
        e2fsck 1.40.2 (12-Jul-2007)
        Checking for bad blocks (read-only test): done
        Pass 1: Checking inodes, blocks, and sizes
        Pass 2: Checking directory structure
        Pass 3: Checking directory connectivity
        Pass 4: Checking reference counts
        Pass 5: Checking group summary information
        Volume (/dev/sdx1): ***** FILE SYSTEM WAS MODIFIED *****
        Volume (/dev/sdx1): 5132/245760 files (1.2% non-contiguous), 178910/1959896 blocks
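    A hedged explanation worth testing: with -c, e2fsck runs the badblocks scan and then writes the bad-blocks inode even when the resulting list is empty, and that write alone counts as a modification, which would produce the banner and the non-zero exit status on every run. Separating the two steps should confirm it:

        badblocks -sv /dev/sdx1    # read-only scan, touches nothing on the card
        fsck.ext3 -f /dev/sdx1     # plain forced check; expect exit status 0
        echo $?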


  • Install on Acer Aspire 4752

    - by user216962
    I am at my wits' end with this computer. I bought an Acer Aspire 4752 with a fully loaded version of Windows 7 on it. I prefer Ubuntu, so I began to install 14.04 from USB and got this error:

        [Errno 5] Input/output error
        This is often due to a faulty CD/DVD disk or drive, or a faulty hard disk. It may help to
        clean the CD/DVD, to burn the CD/DVD at a lower speed, to clean the CD/DVD drive lens
        (cleaning kits are often available from electronics suppliers), to check whether the hard
        disk is old and in need of replacement, or to move the system to a cooler environment.

    So I tried a different USB stick: same error. I tried different versions of Ubuntu: same error. I've used Startup Disk Creator and UNetbootin to make bootable USB devices, and I can boot with the USB drive and run Ubuntu that way. I even checked the hard drive using the tools in Ubuntu; everything was fine, except it said the hard drive was hot. I tried a different hard drive and got the same error as above. I ran a test with memtest86 and everything was fine. No matter what I do, installing from USB gives me the Errno 5 error. I then switched to using DVDs, and now I keep getting a decompression error when installing Ubuntu 14.04 or 12.04. I can't figure out for the life of me why I get nothing but errors. Can anyone help?
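    One hedged check that would cover both the USB and DVD failures: Errno 5 during install very often traces back to a corrupted ISO or a bad write rather than the target disk, so verifying the image and the written stick against each other is worth doing first (the path and the /dev/sdb device name below are assumptions):

        md5sum ~/Downloads/ubuntu-14.04-desktop-amd64.iso   # compare with the hash published on releases.ubuntu.com
        # verify the stick matches the image byte-for-byte, up to the ISO's size:
        sudo cmp -n $(stat -c %s ~/Downloads/ubuntu-14.04-desktop-amd64.iso) \
            ~/Downloads/ubuntu-14.04-desktop-amd64.iso /dev/sdb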


  • The cd command using a variable to a mapped NFS volume within ssh in a Linux script is not working

    - by Bhavya Maheshwari
    I have to do the following from within a bash script. The /VMNFS/ folder is present on the Linux box from which the script is run, and is mounted on the machine into which I am ssh'ing as an NFS share at /vmfs/volumes/VMNFS/. The second cd command doesn't work, neither with the symbolic path name nor with the physical path name. Why, and how can I rectify this?

        #!/bin/bash
        ssh -2 [email protected] /bin/sh <<\EOF
        vmfile_path=`grep / vmvar_file`
        datastore_path=/vmfs/volumes/VMNFS/
        cd $datastore_path && echo "The present working directory is" `pwd -P`
        esxi_vmfile_path_sub=`pwd -P` && echo "variable value is" $esxi_vmfile_path_sub
        esxi_vmfile_path=`echo $vmfile_path | sed "s:/VMNFS:$esxi_vmfile_path_sub:"`
        cd "$esxi_vmfile_path"
        EOF

    Output:

        The current working directory is /vmfs/volumes/65335ec4-46d12e41
        variable value is /vmfs/volumes/65335ec4-46d12e41
        can't cd to /vmfs/volumes/65335ec4-46d12e41/TPAE7.5/
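    A hedged debugging step (the lines below are additions, not part of the original script): since the first cd succeeds, the composed path itself is the suspect; printing and listing it before the failing cd shows whether TPAE7.5 actually exists under the resolved volume, or whether vmfile_path carried stray whitespace from vmvar_file:

        # inside the same heredoc, just before the failing cd:
        echo "composed path: [$esxi_vmfile_path]"    # brackets expose a trailing space or newline
        ls -ld "$esxi_vmfile_path" || ls "$esxi_vmfile_path_sub"   # does the target exist at all?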


  • MySQL Server Not Starting on Boot

    - by Brian
    I have installed MySQL on a RHEL 5 server and I want to set it up so that the server starts on boot. I've run "chkconfig --list mysqld" and it's currently on for levels 3, 4 and 5. However, when I reboot the server, no mysqld process is started. I've also tried manually starting the server by executing "/usr/bin/mysqld_safe", and I get the following output:

        Starting mysqld daemon with databases from /var/lib/mysql
        STOPPING server from pid file /var/run/mysqld/mysqld.pid
        100319 10:31:30  mysqld ended

    I looked in /var/log/mysqld.log and I found the following:

        100319 10:29:01  mysqld started
        100319 10:29:02  InnoDB: Started; log sequence number 0 29752204
        100319 10:29:02 [ERROR] Can't start server : Bind on unix socket: Permission denied
        100319 10:29:02 [ERROR] Do you already have another mysqld server running on socket: /var/lib/mysql/mysql.sock ?
        100319 10:29:02 [ERROR] Aborting
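    A hedged first check: "Bind on unix socket: Permission denied" usually means the mysql user cannot create /var/lib/mysql/mysql.sock, most often because the datadir ownership changed or a stale root-owned socket is in the way. Something along these lines, assuming the default RHEL packaging where the daemon runs as user mysql:

        ls -ld /var/lib/mysql /var/lib/mysql/mysql.sock   # who owns the datadir and any leftover socket?
        sudo rm -f /var/lib/mysql/mysql.sock              # clear a stale socket, if present
        sudo chown -R mysql:mysql /var/lib/mysql
        sudo service mysqld start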


  • How can I be prepared to join a company?

    - by Aerovistae
    There's more to it than that, but this title was the best way I could think of to sum it up. I'm a senior in a good computer science program, and I'm graduating early. About to start interviews and all whatnot. I'm not a super-experienced programmer, not one of those people who started in middle school. I'm decent at this, but I'm not among the best, not nearly. I have to do an awful lot of googling.

    So today I'm meeting some fellow for lunch at a campus cafe to discuss some front-end details when this tall, good-looking guy begs pardon, says he's new to campus, and asks if we know where he can go to sign up for recruiting developers. It quickly evolves into a long conversation: he's the CEO of a seems-to-be-doing-well start-up, hiring passionate interns and full-timers. Sounds great!

    I take one look at his site on my own computer later and immediately spot a major bug. No idea how to fix it, but I see it. I go over to the page code, and good god. It's the standard amount of code you would expect from a full-scale web application, a couple dozen pages of HTML and scripts. I don't even know where to start reading it. I've built sites from scratch, but obviously never on that scale, nor have I ever worked on one of that scale. I have no idea which bit might generate the bug.

    But that sets me thinking: how could someone like me possibly settle into an environment like that? A start-up is a very high-pressure working environment. I don't know if I can work at that pace under those constraints -- I would hate to let people down. And with only 10 employees, it's not like anyone has much time to help you get your bearings.

    Somewhere in there is a question. Can you see it? I'm asking for general advice here, maybe even anecdotal advice. Is joining a start-up right out of college a scary process? Am I overestimating what it would take to figure out the mass of code behind this site? What's the likelihood that a decent but only moderately-experienced coder could earn his pay at such a place? For instance, I know nothing of server-side/back-end programming. Never touched it. That scares me.


  • Does the method of adjustment matter, or just the final calibration?

    - by Steve
    A company produces software (and hardware) that is used both to perform automatic adjustments on electronic test equipment and to perform calibrations of the same equipment. The results of the calibrations are put onto a certificate of calibration that is sent to the customer along with the equipment. This calibration certificate states various conditions of the calibration, such as what hardware (models/serial numbers) and software (version) was used to perform the calibration, as well as things like environmental conditions, etc.

    Making the assumption that the software used to produce the data listed on the certificate of calibration must have gone through a test/release process and must be considered "released" software: does this also mean that the software used for adjustment must be released? I believe that the method (software/environmental conditions/etc.) used or present during adjustment doesn't matter; all that really matters is the end result of the calibration, the conditions present during the calibration, and whether or not the equipment was within its specifications.

    The real question I'm hoping to get answered: is there a reputable source (e.g. NIST or somewhere similar) that addresses this question? (I have searched...) The thinking is that during high-volume production runs, the "unreleased" system can be used to perform adjustments, as long as a released system is used to perform the calibrations, since the time required to perform the adjustments is much longer than the calibration. This unreleased system will eventually be released for use, but currently is not.

    Also, please note that there is a distinction between "adjustment" and "calibration". The definition from the BIPM International Vocabulary of Metrology, 2.39:

        Operation that, under specified conditions, in a first step, establishes a relation between the quantity values with measurement uncertainties provided by measurement standards and corresponding indications with associated measurement uncertainties (of the calibrated instrument or secondary standard) and, in a second step, uses this information to establish a relation for obtaining a measurement result from an indication.

    Followed by NOTE 2 (emphasis in original text):

        Calibration should not be confused with adjustment of a measuring system, often mistakenly called "self-calibration", nor with verification of calibration.

    As a side note, I'm not sure why this got downvoted. It's regarding software and its use before and after release, and I believe there is a best practice that can be applied, so this is (hopefully) not primarily opinion-based.


  • Setting a wireless access point on Ubuntu server 11.10

    - by Solignis
    I am trying to set up a wifi access point with my Ubuntu server. I have managed to get my phone to connect to the wireless, and it now gets a DHCP lease, but it still cannot ping out or be pinged by anything on my network. I am pretty sure my problem is iptables, but I'm not sure what's wrong. Here is what my rules look like (the ones pertaining to the bridge interface):

        # Allow traffic to / from wireless bridge interface
        iptables -A INPUT -i br0 -j ACCEPT
        iptables -A OUTPUT -o br0 -j ACCEPT

    I am guessing my rules are a little lean. The bridge exists on the same subnet as everything else on my network; I am using a 10.0.0.0/24 subnet.

    EDIT: Oh yeah, I should also mention that when I do a ping test, I get Destination Host Unreachable as the error.
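    A hedged guess about the gap: traffic crossing from the wireless side to the rest of the LAN is forwarded rather than delivered locally, so it traverses the FORWARD chain, which the rules above never touch. If the default FORWARD policy is DROP, rules like these may be the missing piece:

        iptables -A FORWARD -i br0 -j ACCEPT
        iptables -A FORWARD -o br0 -j ACCEPT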


  • How granular should a command be in a CQ[R]S model?

    - by Aaronaught
    I'm considering a project to migrate part of our WCF-based SOA over to a service bus model (probably NServiceBus) and using some basic pub-sub to achieve Command-Query Separation. I'm not new to SOA, or even to service bus models, but I confess that until recently my concept of "separation" was limited to run-of-the-mill database mirroring and replication. Still, I'm attracted to the idea because it seems to provide all the benefits of an eventually-consistent system while sidestepping many of the obvious drawbacks (most notably the lack of proper transactional support).

    I've read a lot on the subject from Udi Dahan, who is basically the guru on ESB architectures (at least in the Microsoft world), but one thing he says really puzzles me:

        As we get larger entities with more fields on them, we also get more actors working with those same entities, and the higher the likelihood that something will touch some attribute of them at any given time, increasing the number of concurrency conflicts. [...] A core element of CQRS is rethinking the design of the user interface to enable us to capture our users' intent such that making a customer preferred is a different unit of work for the user than indicating that the customer has moved or that they've gotten married. Using an Excel-like UI for data changes doesn't capture intent, as we saw above.
        -- Udi Dahan, Clarified CQRS

    From the perspective described in the quotation, it's hard to argue with that logic. But it seems to go against the grain with respect to SOAs. An SOA (and really services in general) is supposed to deal with coarse-grained messages so as to minimize network chatter, among many other benefits. I realize that network chatter is less of an issue when you've got highly-distributed systems with good message queuing and none of the baggage of RPC, but it doesn't seem wise to dismiss the issue entirely. Udi almost seems to be saying that every attribute change (i.e. field update) ought to be its own command, which is hard to imagine in the context of one user potentially updating hundreds or thousands of combined entities and attributes, as it often is with a traditional web service. One batch update in SQL Server may take a fraction of a second given a good highly-parameterized query, table-valued parameter or bulk insert to a staging table; processing all of these updates one at a time is slow, slow, slow, and OLTP database hardware is the most expensive of all to scale up/out.

    Is there some way to reconcile these competing concerns? Am I thinking about it the wrong way? Does this problem have a well-known solution in the CQS/ESB world? If not, then how does one decide what the "right level" of granularity in a Command should be? Is there some "standard" one can use as a starting point -- sort of like 3NF in databases -- and only deviate when careful profiling suggests a potentially significant performance benefit? Or is this possibly one of those things that, despite several strong opinions being expressed by various experts, is really just a matter of opinion?


  • How to tell if linux disk IO is causing excessive (> 1 second) application stalls

    - by noahz
    I have a Java application performing a large volume (hundreds of MB) of continuous output (streaming plain text) to about a dozen files on an ext3/VxFS (Veritas File System) SAN filesystem. Occasionally, this application pauses for several seconds at a time. I suspect that something related to the filesystem's functionality (and/or how it interacts with the OS) is the culprit. What steps can I take to confirm or refute this theory? I am aware of iostat and /proc/diskstats as starting points.

    (Revised title to de-emphasize journaling and emphasize "stalls".)

    I have done some googling and found at least one article that seems to describe behavior like I am observing: Solving the ext3 latency problem.

    Additional information:

        Red Hat Enterprise Linux Server release 5.3 (Tikanga)
        Kernel: 2.6.18-194.32.1.el5

    The primary application disk is fibre-channel SAN:

        lspci | grep -i fibre
        14:00.0 Fibre Channel: Emulex Corporation Saturn-X: LightPulse Fibre Channel Host Adapter (rev 03)

    Mount info:

        type vxfs (rw,tmplog,largefiles,mincache=tmpcache,ioerror=mwdisable) 0 0

        cat /sys/block/VxVM123456/queue/scheduler
        noop anticipatory [deadline] cfq
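    A hedged starting recipe building on the tools already named: sample per-device latency once a second while the application runs, then line the pause timestamps up against latency spikes. Something like:

        iostat -dxk 1 | tee /tmp/iostat.log    # watch the await and %util columns during a stall
        # device name, reads completed, writes completed, weighted ms spent in I/O;
        # the last column climbing while completions stay flat suggests a stalled queue:
        awk '$3 ~ /^sd/ {print $3, $4, $8, $14}' /proc/diskstats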


  • Monitoring host and app parameters in real-time

    - by devopsdude42
    I have a bunch of VMs that I need to monitor in real time. For all nodes I need to watch host parameters like load, network usage and free memory; for some I also need app-specific metrics, like redis (some variables from the output of the INFO command) and nginx (requests/sec, average request time). Ideally I'd also like to track some parameters from the custom apps that run on these nodes. These parameters should be tracked as a bunch of line charts on a dashboard. I checked out graphite and it looks suitable (although the UX and aesthetics look like they need some love), but setting up and maintaining graphite looks to be a pain, especially since we don't have a full-time person just for this. Are there any alternatives? Or at least something that is simpler to set up and will scale? Reasonable paid services are also OK.


  • ATI Radeon HD with Catalyst driver stuck mirroring screens

    - by Mike Axiak
    In 11.10 I replaced my aging Nvidia card with a new Radeon HD 6970. The single card has two DVI output ports, which I've connected to two monitors. I installed Catalyst version 11.9 and I cannot get multiple monitors set up the way I want. I tried:

        $ sudo amdcccle

    and set the mode to single desktop, multiple monitors; whenever I do that, Unity crashes and I get back to the login screen. Nothing shows up in the Xorg.*.log files for me to post here. There's only one card, so I don't think Xinerama would be any help here. Anyone have any ideas?

    EDIT: Here's my xorg.conf file:

        Section "ServerLayout"
            Identifier  "aticonfig Layout"
            Screen 0    "aticonfig-Screen[0]-0" 0 0
        EndSection

        Section "Module"
        EndSection

        Section "Monitor"
            Identifier  "aticonfig-Monitor[0]-0"
            Option      "VendorName" "ATI Proprietary Driver"
            Option      "ModelName" "Generic Autodetecting Monitor"
            Option      "DPMS" "true"
        EndSection

        Section "Monitor"
            Identifier  "0-DFP3"
            Option      "VendorName" "ATI Proprietary Driver"
            Option      "ModelName" "Generic Autodetecting Monitor"
            Option      "DPMS" "true"
            Option      "PreferredMode" "1280x1024"
            Option      "TargetRefresh" "60"
            Option      "Position" "0 0"
            Option      "Rotate" "normal"
            Option      "Disable" "false"
        EndSection

        Section "Monitor"
            Identifier  "0-CRT1"
            Option      "VendorName" "ATI Proprietary Driver"
            Option      "ModelName" "Generic Autodetecting Monitor"
            Option      "DPMS" "true"
            Option      "PreferredMode" "1280x1024"
            Option      "TargetRefresh" "75"
            Option      "Position" "0 0"
            Option      "Rotate" "normal"
            Option      "Disable" "false"
        EndSection

        Section "Device"
            Identifier  "aticonfig-Device[0]-0"
            Driver      "fglrx"
            Option      "Monitor-DFP3" "0-DFP3"
            Option      "Monitor-CRT1" "0-CRT1"
            BusID       "PCI:5:0:0"
        EndSection

        Section "Device"
            Identifier  "amdcccle-Device[5]-1"
            Driver      "fglrx"
            Option      "Monitor-DFP3" "0-DFP3"
            BusID       "PCI:5:0:0"
            Screen      1
        EndSection

        Section "Screen"
            Identifier   "aticonfig-Screen[0]-0"
            Device       "aticonfig-Device[0]-0"
            DefaultDepth 24
            SubSection "Display"
            EndSubSection
        EndSection

        Section "Screen"
            Identifier   "amdcccle-Screen[5]-1"
            Device       "amdcccle-Device[5]-1"
            DefaultDepth 24
            SubSection "Display"
                Viewport 0 0
                Depth    24
            EndSubSection
        EndSection
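    One hedged observation about the file above, not a verified fix: the second Screen section ("amdcccle-Screen[5]-1") is defined but never referenced in ServerLayout, so X has no reason to drive the monitors independently. Listing both screens in the layout is how a non-mirrored dual-screen fglrx setup is normally declared:

        Section "ServerLayout"
            Identifier  "aticonfig Layout"
            Screen 0    "aticonfig-Screen[0]-0" 0 0
            Screen 1    "amdcccle-Screen[5]-1" RightOf "aticonfig-Screen[0]-0"
        EndSection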


  • How to properly implement alpha blending in a complex 3D scene

    - by Gajet
    I know this question might sound easy to answer, but it's driving me crazy. There are too many possible situations that a good alpha blending mechanism should handle, and for each algorithm I can think of there is something missing. These are the methods I've thought about so far:

    First I thought about sorting objects by depth. This simply fails because objects are not simple shapes; they may have curves and may loop inside each other, so I can't always tell which one is closer to the camera.

    Then I thought about sorting triangles, but this might also fail. Though I'm not sure how to implement it, there is a rare case that might again cause a problem, in which two triangles pass through each other. Again, no one can tell which one is nearer.

    The next thing was using the depth buffer. At least the main reason we have a depth buffer is because of the sorting problems I mentioned, but now we get another problem: since objects might be transparent, more than one object may be visible in a single pixel. For which object should I store the pixel depth?

    I then thought maybe I could store only the depth of the front-most object, and use that to determine how to blend subsequent draw calls at that pixel. But again there was a problem: think about two semi-transparent planes with a solid plane in the middle of them. If the solid plane is rendered at the end, one can still see the most distant plane through it. Note that I was going to merge every two planes until there is only one color left for that pixel. Obviously I can't use the sorting methods here either, for the same reasons I explained above.

    Finally, the only thing I can imagine working is to render all objects into different render targets, then sort those layers and compose the final output. But this time I don't know how to implement that algorithm.


  • Performance issues with new dedicated server [closed]

    - by Pierre Espenan
    I have just subscribed to a new dedicated server and am getting worse than expected PHP execution performance: execution times are twice as high as on my old shared server! I'm definitely not an expert at server management, so I'm wondering what I missed. Here is some material that may help you understand what's wrong:

        My server (in French, but easy to understand): http://www.online.net/fr/serveur-dedie/dedibox-sc
        phpinfo() output: http://jsfiddle.net/E8b7W/embedded/result/
        PHP bench script (dedicated server): http://jsfiddle.net/EhXzK/embedded/result/
        PHP bench script (old shared server): http://jsfiddle.net/ANbWt/embedded/result/

    Is it normal to get such poor performance after a kernel update and a basic "apt-get install" of apache2 and PHP? Thanks!
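    Two hedged things to rule out before blaming the hardware, since low-end dedicated boxes often ship with conservative defaults: CPU frequency scaling and a missing opcode cache both produce exactly this kind of across-the-board slowdown:

        cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor   # "powersave"/"ondemand" can halve benchmark numbers
        php -m | grep -i -E 'apc|opcache'                           # empty output => no opcode cache installed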


  • How to make sprint planning fun

    - by Jacob Spire
    Not only are our sprint planning meetings not fun, they're downright dreadful. The meetings are tedious and boring, and take forever (a day, but it feels like a lot longer). The developers complain about them, and dread upcoming plannings. Our routine is pretty standard:

        1. A user story is inserted into the sprint backlog by priority.
        2. The story is taken apart into tasks.
        3. The tasks are estimated in hours.
        4. Repeat.

    I can't figure out what we're doing wrong. How can we make the meetings more enjoyable?

    Some more details, in response to requests for more information:

    Why are the backlog items not inserted and prioritized before sprint kickoff? User stories are indeed prioritized; we have no idea how long they'll take until we break them down into tasks! From the (excellent) answers here, I see that maybe we shouldn't estimate tasks at all, only the user stories. The reason we estimate tasks (and not stories) is because we've been getting story estimates terribly wrong -- but I guess that's the subject for an altogether different question.

    Why are developers complaining? Meetings are long and monotonous. Story after story, task after task, struggling (yes, struggling) to estimate how long it will take and what it involves. Estimating tasks makes user-story estimation seem pointless. The longer the meeting, the less focus in the room; the less focused colleagues are, the longer the meeting takes. A recursive hate-spiral develops. We've considered splitting the meeting into two days in order to keep people focused, but the developers wouldn't hear of it: one day of planning is bad enough; now we'll have two?!

    Part of our problem is that we go into very small detail (in order to get more accurate estimations), but when we estimate roughly, we go way off the mark!

    To sum up the question: what are we doing wrong? What additional ways are there to make the meeting generally more enjoyable?


  • Ubuntu 12.04.1 LTS and Nvidia driver (304.51) 64bit: problem 640x480

    - by nibianaswen
    I have a problem with this configuration: an Asus K55V, Ubuntu 12.04 LTS and Nvidia driver 304.51. I removed the nouveau driver with:

        apt-get --purge remove xserver-xorg-video-nouveau

    I installed the official Nvidia driver (from www.nvidia.com), but when I reboot the PC the screen resolution is only 640x480 and the monitor image is resized. Changing xorg.conf did not solve this. I have now uninstalled the Nvidia driver and reinstalled it with:

        sudo apt-get purge nvidia-current
        sudo apt-add-repository ppa:ubuntu-x-swat/x-updates
        sudo apt-get update
        sudo apt-get install nvidia-current

    After rebooting, the screen resolution and size are OK, but if I start nvidia-settings I receive the message:

        You do not appear to be using the NVIDIA X driver.

    and with the command:

        sudo lshw -c display | grep driver

    I receive:

        configuration: driver=i915 latency=0

    This sounds like the system is using the Intel card. When I launch lspci | grep VGA the output is:

        00:02.0 VGA compatible controller: Intel Corporation Ivy Bridge Graphics Controller (rev 09)
        01:00.0 VGA compatible controller: NVIDIA Corporation Device 1058 (rev ff)

    And there is no /etc/X11/xorg.conf. I have read a lot of guides on the internet, but without success. How can I use the Nvidia card with the driver that I have installed?
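    A hedged pointer rather than a confirmed fix: the K55V is an Optimus (hybrid Intel/NVIDIA) laptop, and in the 12.04 era the bare NVIDIA driver could not drive the display through the Intel chip. Bumblebee was the usual route: the i915 driver keeps the screen, and individual applications run on the GeForce on demand:

        sudo add-apt-repository ppa:bumblebee/stable
        sudo apt-get update
        sudo apt-get install bumblebee bumblebee-nvidia
        optirun glxinfo | grep "OpenGL renderer"   # should name the GeForce if it works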


  • Troubleshooting "connection reset" error on my linux server

    - by Chris
    I fervently hope someone here can help me with the problem I am experiencing. I am a programmer, and I have very little understanding of Linux sysadmin terminology and concepts. I am attempting to troubleshoot a problem with my website. It is a Facebook app, and whenever I try to connect using Chrome, I get an error stating that the "connection was reset". I have been Googling for four days straight trying to find a solution to this problem, but no joy. A big part of the problem is that I do not understand the terminology being employed, and the output from many of the tools referenced is likewise indecipherable to me.

    I am running a VPS with CentOS 5, Apache, PHP, and MySQL. I could spam this post with a ton of information from my iptables, Apache config, etc., but if anyone needs information from my server, please let me know how to get it and I will post it here. Thank you for any help you can offer!
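    A hedged place to start that needs little sysadmin vocabulary: reproduce the error in Chrome while watching the Apache logs, and confirm the web server is listening at all. On CentOS 5 the default log paths are usually these:

        sudo tail -f /var/log/httpd/error_log /var/log/httpd/access_log
        sudo netstat -tlnp | grep -E ':80|:443'    # is httpd listening on the web ports?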


  • dpkg in uninterruptible sleep

    - by Khaled
    I have several Ubuntu 10.04 servers. Today, I tried to upgrade some packages on one of these servers and the process got stuck. I logged in using another SSH session and found that dpkg is in the D state (uninterruptible sleep). According to what I have read, this state generally results from waiting on I/O, such as waiting for an NFS share, but I cannot understand why dpkg would block in this state, and I cannot see any obvious problems other than this. Here is the output of ps showing the blocked process:

        $ ps axo pid,cmd,s,wchan | grep dpkg
        22571 /usr/bin/dpkg --status-fd 2 D call_rwsem_down_read_failed

    This process cannot be killed, even with kill -9, so I will not be able to install or upgrade any packages unless I reboot the server. What makes it worse is that a remote reboot does not succeed in such a case (with processes in the D state). Can anyone help with this? How can I avoid it in the future?
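    A few hedged diagnostics for a D-state process; none of them will unstick it (only the blocking I/O completing, or a reboot, does that), but they often reveal what it is waiting on:

        cat /proc/22571/wchan; echo                 # kernel function the process sleeps in
        sudo cat /proc/22571/stack 2>/dev/null      # full kernel stack, if this kernel exposes it
        dmesg | grep -i 'blocked for more than'     # hung-task watchdog messages
        mount | grep -E 'nfs|cifs'                  # stale network mounts are a frequent culprit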


  • Colored PS1 string

    - by Will Vousden
    Clarification: I want __foo to be executed each time the PS1 string is presented in the terminal, not when the PS1 string is constructed (hence its being in quotes). __foo contains logic that examines the current directory, so its execution must be deferred.

    I'm trying to use different colours in my Bash PS1 string from within a Bash function:

        LIGHTRED="\033[1;31m"
        LIGHTGREEN="\033[1;32m"
        RESET="\033[m"

        __foo () {
            # Do some stuff and generate a string to echo in different colours:
            echo -n '\['$1'\]firstcolour \['$2'\]secondcolour'
        }

        PS1='$(__foo '$LIGHTRED' '$LIGHTGREEN')\['$RESET'\] \$'

    Essentially I want __foo to generate part of the PS1 in a bunch of different colours. My attempt doesn't seem to work, though; it produces the following output:

        -bash: 31m: command not found
        -bash: 32m: command not found
        \[]firstcolour \[\]secondcolour $

    What gives, and how can I fix it?
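    A hedged diagnosis that matches the exact error text: $LIGHTRED expands unquoted when PS1 is assigned, so the prompt literally contains $(__foo \033[1;31m ...), and the ";" inside each color code splits the substitution into separate commands ("31m ..." and "32m ..."). A sketch of the quoting fix:

        # embed double quotes around the expanded values so each color code
        # remains a single argument when the prompt is evaluated:
        PS1='$(__foo "'"$LIGHTRED"'" "'"$LIGHTGREEN"'")\['"$RESET"'\] \$'
        # Caveat: \[ and \] printed from inside $(...) are not interpreted by
        # readline; emit the raw markers \001 and \002 from __foo instead.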


  • Can't install Ubuntu Software Center

    - by byf-ferdy
    I'm running Ubuntu 13.10 32-bit with GNOME 3.8, but am missing the Ubuntu Software Center. I tried to install it via the terminal:

        $ sudo apt-get install software-center

    but that tells me that dependencies are not met:

        The following packages have unmet dependencies:
         software-center : Depends: gir1.2-webkit-3.0 but it is not going to be installed

    gir1.2-webkit-3.0 depends on gir1.2-javascriptcoregtk-3.0 version 1.10.2-0ubuntu2, but that package is only present here as version 2.0.4-2~ubuntu13.04. I am missing the Ubuntu Software Center as well as the Update Manager, and the packages update-notifier and ubuntu-release-upgrader-gtk. How can I install the packages with correct dependencies?

    Edit: output of apt-cache policy gir1.2-javascriptcoregtk-3.0:

        gir1.2-javascriptcoregtk-3.0:
          Installed: 2.0.4-2~ubuntu13.04.1
          Candidate: 2.0.4-2~ubuntu13.04.1
          Version table:
         *** 2.0.4-2~ubuntu13.04.1 0
                100 /var/lib/dpkg/status
             1.10.2-0ubuntu2 0
                500 http://de.archive.ubuntu.com/ubuntu/ saucy/main i386 Packages

    My sources.list:

        deb http://de.archive.ubuntu.com/ubuntu/ saucy main restricted universe multiverse
        deb http://de.archive.ubuntu.com/ubuntu/ saucy-security main restricted universe multiverse
        deb http://de.archive.ubuntu.com/ubuntu/ saucy-updates main restricted universe multiverse
        deb http://archive.canonical.com/ubuntu saucy partner
        deb http://extras.ubuntu.com/ubuntu saucy main
        # spotify
        deb http://repository.spotify.com stable non-free

    I added the Spotify entry myself.
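    A hedged reading of the policy output, with a possible way forward: the installed 2.0.4-2~ubuntu13.04.1 exists only in /var/lib/dpkg/status (priority 100), i.e. it looks like a leftover from a 13.04-era source that is no longer in sources.list, and apt will not downgrade to the saucy version on its own. Explicitly requesting the archive version may untangle it:

        sudo apt-get install gir1.2-javascriptcoregtk-3.0=1.10.2-0ubuntu2   # explicit downgrade to the saucy version
        sudo apt-get install software-center update-manager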


  • Can't reach custom C# forms application remotely.

    - by gnucom
    Hello, I'm working on Windows Server 2008. I have a very basic C# forms application (not a service) that is listening on a port, say 56112. Using telnet I can connect from localhost and send and receive data. For some reason I cannot connect to the application remotely. I know I have a connection to the machine, because I can telnet to port 23 on it remotely just fine. I've opened this port in the firewall, created in/out rules in the advanced firewall, disabled the firewall completely, and more. Any suggestions would be great! This is the telnet output:

        Microsoft Telnet> open server.cc 56112
        Connecting server.cc...Could not open connection to the host, on port 56112: Connect failed
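    A hedged check that explains this symptom surprisingly often: a listener bound to 127.0.0.1 (IPAddress.Loopback in the C# code) accepts local telnet but refuses remote connections, with the firewall entirely innocent. On the server:

        netstat -an | findstr 56112
        REM "127.0.0.1:56112  LISTENING" means a local-only binding;
        REM "0.0.0.0:56112    LISTENING" is what a remotely reachable listener shows.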


  • Is the Zune HD's audio card better or worse than the iPod touch's?

    - by MatthewThepc
    Firstly, if this is the wrong site to ask this question I apologize, but I didn't see one for "music players" on the Stack Exchange website. :) After reading a few people online say that music playing from a Zune HD sounds better to them than on an iPod Touch, I was wondering whether there's any truth to that. From what I can tell, the Zune HD uses a Wolfson Microelectronics WM8352, while the first-generation iPod Touch (which the Zune HD was competing with) used a Wolfson Microelectronics WM8758BG, and newer models use the Cirrus Logic CS4398 and CS42L61. Which ones are better (to make the question less subjective, let's say in terms of quality, range, and accuracy of output)? Admittedly, I have almost no idea how everything compares and works together, but it would seem to me, just by looking at the version numbers, that the iPod has been better since its launch. Is there anything else that affects sound quality? Thanks!


  • Problem with installing Nvidia display drivers on Ubuntu 13.10

    - by Pascal
    Hello everyone and thank you for taking a look at this topic! I'm currently trying out Ubuntu 13.10, but I keep hitting a wall when it comes to installing a driver. I've tried:

        sudo apt-get install nvidia-current

    This resulted in an un-bootable system: the screen just stayed black and the cursor displayed as an 'X'. After that I had to re-install Ubuntu. The computer I'm using is an Acer Aspire V3 with a built-in Nvidia GeForce GT 630M and also an Intel HD Graphics chipset. "lspci | grep VGA" output:

        pascal@pascal-Aspire-V3-571G:~$ lspci | grep VGA
        00:02.0 VGA compatible controller: Intel Corporation 3rd Gen Core processor Graphics Controller (rev 09)
        01:00.0 VGA compatible controller: NVIDIA Corporation GF108M [GeForce GT 630M] (rev a1)

    I've searched a bit here and there and found out that it would be wise to mention that this laptop is using (or so I think) Nvidia Optimus. I'm not sure if it adds anything to the subject, but I mention it just to be sure. Now to the questions:

    Q1: How is this caused, and how can I fix it?
    Q2: What additional information could I provide to help you help me?
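    A hedged suggestion consistent with the Optimus guess above: on 13.10 the supported route for hybrid Intel/NVIDIA laptops is nvidia-prime, which selects which GPU drives the session, rather than installing nvidia-current alone:

        sudo apt-get install nvidia-prime nvidia-319   # nvidia-319 is the saucy-era driver package
        sudo prime-select nvidia                       # or "intel" to switch back; reboot afterwards
        prime-select query                             # confirm which GPU is active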


  • Does CPIO produce platform-dependent archives?

    - by TiCL
    I made a CPIO archive with the following command on Solaris 11 (SPARC):

        find . | cpio -ov > /tmp/myarchive.cpio

    I copied it to an Intel-based Solaris 11 machine and tried to extract it with:

        cpio -icvdu < myarchive.cpio

    This gives me the following error:

        cpio: Not a cpio file, bad header.
        1 errors

    The MD5 hash matches, and I can extract the archive on another SPARC machine. My question: does CPIO produce platform-dependent output? Is there any way to convert it? I cannot use TAR at the moment, because the directory I am archiving has long symbolic links that are skipped by the tar command.
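    A hedged answer to test: cpio's default header is a binary format written in native byte order, so an archive created on big-endian SPARC can be rejected on x86. The -c flag selects a portable ASCII header that both machines read; re-creating the archive with it should transfer cleanly:

        # on the SPARC machine:
        find . | cpio -ocv > /tmp/myarchive.cpio    # -c = portable ASCII header format
        # on the Intel machine:
        cpio -icvdu < /tmp/myarchive.cpio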

